\begin{document}
\maketitle
\begin{abstract}
Let $G$ be a finite group of automorphisms of a non\-singular complex
threefold $M$ such that the canonical bundle $\omega_M$ is locally
trivial as a $G$-sheaf. We prove that the Hilbert scheme
$Y=\operatorname{\hbox{$G$}-Hilb}{M}$ parametrising
$G$-clusters in $M$ is a crepant resolution of $X=M/G$ and that there
is a derived equi\-valence (Fourier--Mukai transform) between coherent
sheaves on $Y$ and coherent \hbox{$G$-sheaves} on $M$. This identifies
the K~theory of $Y$ with the equi\-variant K~theory of $M$, and thus
generalises the classical McKay correspondence. Some higher dimensional
extensions are possible.
\\ MSC2000: Primary 14E15, 14J30; Secondary 18E20, 18F20, 19L47.
\end{abstract}
\section{Introduction}
The classical McKay correspondence relates representations of a finite
subgroup $G\subset\operatorname{SL}(2,\mathbb C)$ to the cohomology of the well-known minimal
resolution of the Kleinian singularity $\mathbb C^2/G$. Gonzalez-Sprinberg and
Verdier \cite{GV} interpreted the McKay correspondence as an isomorphism
on K~theory, observing that the representation ring of $G$ is equal to
the $G$-equi\-variant K~theory of $\mathbb C^2$.
A natural generalisation is to replace $\mathbb C^2$ by a non\-singular
quasi\-projective complex variety $M$ of dimension $n$ and $G$ by
a finite group
of automorphisms of $M$, with the property that the stabiliser subgroup of
any point $x\in M$ acts on the tangent space $T_xM$ as a subgroup of
$\operatorname{SL}(T_xM)$. Thus the canonical bundle $\omega_M$ is locally trivial as a
$G$-sheaf, in the sense that every point of $M$ has a $G$-invariant
open neighbourhood on which there is a nonvanishing $G$-invariant
$n$-form. This implies that the quotient variety $X=M/G$ has only
Gorenstein singularities.
The natural generalisation of the McKay correspondence should then be an
isomorphism between the $G$-equi\-variant K~theory of $M$ and the ordinary
K~theory of a crepant resolution $Y$ of $X$, that is, a resolution of
singularities $\tau\colon Y\to X$ such that $\tau^*(\omega_X)=\omega_Y$. Crepant
resolutions of Gorenstein quotient singularities are known to exist in
dimension $n=3$, but only through a case by case analysis of the local
linear actions by Ito, Markushevich and Roan (see Roan \cite{Roa} and
references given there). In dimension $\ge4$, crepant resolutions exist
only in rather special cases.
The point of view of this paper is that the derived category is the
natural context for this formulation of the correspondence, and,
more importantly, provides key tools for an appropriately general proof.
Indeed, this point of view is not so revolutionary.
Gonzalez-Sprinberg and Verdier were aware that their isomorphism
on K~theory would lift to a derived equivalence and an explicit proof
of this was given by Kapranov and Vasserot \cite{KV}.
Moreover, the statement of the McKay correspondence in 3 dimensions in terms
of K~theory and derived categories is contained in
Reid \cite[Conjecture~4.1]{R}.
One surprise, however, is that the
methods of the derived category are powerful enough to prove the
existence of a crepant resolution in 3 dimensions, without any case
by case analysis.
A good candidate for a crepant resolution of $X$ is Nakamura's $G$-Hilbert
scheme $\operatorname{\hbox{$G$}-Hilb}{M}$ parametrising $G$-clusters or `scheme theoretic
$G$-orbits' on $M$: recall that a {\em cluster} $Z\subset M$ is a zero
dimensional subscheme, and a {\em $G$-cluster} is a $G$-invariant cluster
whose global sections $H^0(\mathcal O_Z)$ are isomorphic to the regular
representation $\mathbb C[G]$ of $G$. Clearly, a $G$-cluster has length $|G|$ and
a free $G$-orbit is a $G$-cluster. There is a Hilbert--Chow morphism
\[
\tau\colon\operatorname{\hbox{$G$}-Hilb}{M}\longrightarrow X,
\]
which, on closed points, sends a $G$-cluster to the orbit supporting it.
Note that $\tau$~is a projective morphism, is onto and is birational on one
component.
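As a minimal illustration of these notions (our example, not part of the original text), take $G=\mathbb Z/2$ acting on $M=\mathbb C^2$ by $(x,y)\mapsto(-x,-y)$. A free orbit $\{p,-p\}$ is a $G$-cluster, and the $G$-clusters supported at the origin are cut out by the ideals

```latex
% For [alpha:beta] in P^1, the G-cluster Z is defined by the ideal
I_{[\alpha:\beta]} \;=\; (\alpha x+\beta y,\; x^2,\; xy,\; y^2)
  \;\subset\; \mathbb C[x,y].
% The quotient has basis {1, linear form}, on which G acts by the
% trivial and sign representations respectively, so
H^0(\mathcal O_Z) \;\cong\; \mathbb C_{\mathrm{triv}}\oplus\mathbb C_{\mathrm{sign}}
  \;=\; \mathbb C[G].
```

The resulting $\mathbb P^1$ of clusters is the exceptional curve of the minimal resolution $\operatorname{\hbox{$G$}-Hilb}{\mathbb C^2}\to\mathbb C^2/G$ of the $A_1$ singularity.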
When $M=\mathbb C^3$ and $G\subset\operatorname{SL}(3,\mathbb C)$ is Abelian, Nakamura \cite{N} proved
that $\operatorname{\hbox{$G$}-Hilb}{M}$ is irreducible and is a crepant resolution of $X$ (compare
also Reid \cite{R} and Craw and Reid \cite{CR}). He conjectured that the same
result holds for an arbitrary finite subgroup $G\subset\operatorname{SL}(3,\mathbb C)$.
Ito and Nakajima \cite{IN} observed that the
construction of Gonzalez-Sprinberg and Verdier \cite{GV}
is the $M=\mathbb C^2$ case of a natural correspondence between the
equi\-variant K~theory of $M$ and the ordinary K~theory of $\operatorname{\hbox{$G$}-Hilb}{M}$.
They proved that this correspondence is an isomorphism when $M=\mathbb C^3$ and
$G\subset\operatorname{SL}(3,\mathbb C)$ is Abelian by constructing an explicit resolution
of the diagonal in Beilinson style. Our approach via Fourier--Mukai
transforms leaves this resolution of the diagonal implicit (it appears
as the object $\mathcal Q$ of $\operatorname{D}(Y\times Y)$ in
Section~\ref{Sec6!Proj_case}), and seems to give a more direct argument.
Two of the main consequences of the results of this
paper are that Nakamura's conjecture is true and that the natural
correspondence on K~theory is an isomorphism for all finite subgroups of
$\operatorname{SL}(3,\mathbb C)$.
As already indicated, the basic approach of the paper is to lift the McKay
correspondence to the appropriate derived categories. We may then apply the
techniques of Fourier--Mukai transforms, in particular the ideas of
Bridgeland \cite{Br1} and \cite{Br2}, to show that it is an equivalence at
this level. The more formal nature of the arguments means that they work
equally well for arbitrary quasi\-projective varieties. In fact, they are
somewhat simpler for projective varieties, and we therefore deal with this
case first.
Since it is not known whether $\operatorname{\hbox{$G$}-Hilb}{M}$ is irreducible or even connected
in general, we take as our initial candidate for a resolution $Y$ the
{\em irreducible component} of $\operatorname{\hbox{$G$}-Hilb}{M}$ containing the free $G$-orbits,
that is, the component mapping birationally to $X$. The aim is to show that
$Y$ is a crepant resolution, and to construct an equivalence between the
derived categories $\operatorname{D}(Y)$ of coherent sheaves on $Y$ and $\operatorname{D}^G(M)$ of
coherent \hbox{$G$-sheaves} on $M$. A particular consequence of
this equivalence is that $Y=\operatorname{\hbox{$G$}-Hilb}{M}$ when $M$ has dimension 3.
\medskip
We now describe the correspondence and our results in more detail.
Let $M$ be a non\-singular quasiprojective complex variety of dimension $n$ and
let $G\subset\operatorname{Aut}(M)$ be a finite group of automorphisms of $M$
such that $\omega_M$
is locally trivial as a $G$-sheaf. Put $X=M/G$ and let
$Y\subset\operatorname{\hbox{$G$}-Hilb}{M}$ be the
irreducible component containing the free orbits, as described above.
Write $\mathcal Z$ for the universal closed subscheme $\mathcal Z\subset Y\times M$
and $p$ and $q$ for its projections to $Y$ and $M$. There is a commutative
diagram of schemes
\begin{equation*}\label{maindiag}
\setlength{\unitlength}{36pt}
\begin{picture}(2.2,2.2)(0,0)
\put(1,0){\object{$X$}}
\put(0,1){\object{$Y$}}
\put(2,1){\object{$M$}}
\put(1,2){\object{$\mathcal Z$}}
\put(0.25,0.75){\vector(1,-1){0.5}}
\put(0.5,0.5){\nwlabel{$\tau$}}
\put(1.75,0.75){\vector(-1,-1){0.5}}
\put(1.5,0.5){\swlabel{$\pi$}}
\put(0.75,1.75){\vector(-1,-1){0.5}}
\put(0.5,1.5){\nelabel{$p$}}
\put(1.25,1.75){\vector(1,-1){0.5}}
\put(1.5,1.5){\selabel{$q$}}
\end{picture}
\end{equation*}
in which $q$ and $\tau$ are birational, $p$ and $\pi$ are finite, and $p$ is
flat. Let $G$ act trivially on $Y$ and $X$, so that all morphisms
in the diagram are equi\-variant.
Define the functor
\[
\Phi=\mathbf R q_*\circ p^*\colon\operatorname{D}(Y)\longrightarrow \operatorname{D}^G(M),
\]
where a sheaf $E$ on $Y$ is viewed as a $G$-sheaf by giving it the trivial
action. Note that $p^*$ is already exact, so we do not need to write
$\mathbf L p^*$. Our main result is the following.
\medskip\goodbreak
\begin{thm}
\label{second}
Suppose that the fibre product
\[
Y\times_X Y= \Bigl\{(y_1, y_2)\in Y\times Y \Bigm|
\tau(y_1)=\tau(y_2)\Bigr\} \subset Y\times Y
\]
has dimension\/ $\le n+1$.
Then\/ $Y$ is a crepant resolution of\/ $X$ and\/ $\Phi$ is an
equivalence of categories.
\end{thm}
When $n\le 3$ the condition of the theorem always holds because the
exceptional locus of $Y\to X$ has dimension $\le2$. In this case we
can also show that $\operatorname{\hbox{$G$}-Hilb}{M}$ is irreducible, so we obtain
\begin{thm}
\label{IN}
Suppose $n\le3$. Then $\operatorname{\hbox{$G$}-Hilb}{M}$ is irreducible and is a crepant
resolution of $X$, and\/ $\Phi$ is an equivalence of categories.
\end{thm}
The condition of Theorem~\ref{second} also holds whenever $G$
preserves a complex symplectic form on $M$ and $Y$ is a crepant
resolution of $X$, because such a resolution is symplectic and hence
semismall (see
Verbitsky \cite{Vb}, Theorem~2.8 and compare Kaledin \cite{Ka}).
\begin{cor}
\label{Kal}
Suppose $M$ is a complex symplectic variety
and $G$ acts by symplectic automorphisms.
Assume that $Y$ is a crepant resolution of $X$.
Then $\Phi$ is an equivalence of categories.
\end{cor}
Note that the condition of Theorem~\ref{second} certainly fails in
dimension $\ge4$ whenever $Y\to X$ has an exceptional divisor over a
point. This is to be expected since there are many examples of finite
subgroups $G\subset\operatorname{SL}(4,\mathbb C)$ for which the corresponding quotient
singularity $\mathbb C^4/G$ has no crepant resolution.
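A standard such example (ours, via the Reid--Tai criterion, not spelled out in the text) is the scalar involution $G=\{\pm 1\}\subset\operatorname{SL}(4,\mathbb C)$:

```latex
% The nontrivial element acts with eigenvalues (-1,-1,-1,-1), so
\operatorname{age}(-1) \;=\; \tfrac12+\tfrac12+\tfrac12+\tfrac12 \;=\; 2 \;>\; 1.
% By the Reid--Tai criterion the singularity C^4/{±1} is terminal:
% every exceptional divisor of any resolution has positive
% discrepancy, so no crepant resolution can exist.
```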
\section{Category theory}
This section contains some basic category theory, most of which is well
known. The only non\-trivial part is Section~\ref{trings} where we state a
condition for an exact functor between triangulated categories to be an
equivalence.
\subsection{Triangulated categories}
A triangulated category is an additive category $\mathcal A$ equipped with a
{\em shift auto\-morphism} $T_{\mathcal A}\colon\mathcal A\to\mathcal A\colon a\mapsto a[1]$
and a collection of {\em distinguished triangles}
\[
a_1\lRa{f_1}a_2\lRa{f_2}a_3\lRa{f_3}a_1[1]
\]
of morphisms of $\mathcal A$ satisfying certain axioms (see Verdier \cite{V}).
We write $a[i]$ for $T_{\mathcal A}^i(a)$ and
\[
\operatorname{Hom}^i_{\mathcal A}(a_1,a_2)=\operatorname{Hom}_{\mathcal A}(a_1,a_2[i]).
\]
A triangulated category $\mathcal A$ is {\em trivial}\/ if every object
is a zero object.
The principal example of a triangulated category is the derived category
$\operatorname{D}(A)$ of an Abelian category $A$. An object of $\operatorname{D}(A)$ is a bounded
complex of objects of $A$ up to quasi-isomorphism, the shift functor
moves a complex to the left by one place and a distinguished triangle is
the mapping cone of a morphism of complexes.
In this case, for objects $a_1,a_2\in A$,
one has $\operatorname{Hom}^i_{\operatorname{D}(A)}(a_1,a_2)=\operatorname{Ext}^i_A(a_1,a_2)$.
A functor $F\colon\mathcal A\to\mathcal B$ between triangulated categories is {\em exact}\/
if it commutes with the shift automorphisms and takes distinguished
triangles of $\mathcal A$ to distinguished triangles of $\mathcal B$. For example, derived
functors between derived categories are exact.
\subsection{Adjoint functors}
Let $F\colon\mathcal A\to\mathcal B$ and $G\colon\mathcal B\to\mathcal A$ be functors.
An adjunction for $(G,F)$ is a bifunctorial isomorphism
\[
\operatorname{Hom}_{\mathcal A}(G\kern.0mm\smash{-}\kern.0mm,\kern.0mm\smash{-}\kern.0mm)\,\cong\,\operatorname{Hom}_{\mathcal B}(\kern.0mm\smash{-}\kern.0mm,F\kern.0mm\smash{-}\kern.0mm).
\]
In this case, we say that $G$ is left adjoint to $F$ or that $F$ is right
adjoint to $G$. When it exists, a left or right adjoint to a given functor
is unique up to isomorphism of functors. The adjoint of a composite functor
is the composite of the adjoints. An adjunction determines and is determined
by two natural transformations $\varepsilon\colon G\circ F\to\operatorname{id}_{\mathcal A}$ and
$\eta\colon\operatorname{id}_{\mathcal B}\to F\circ G$ that come from applying the adjunction to
$1_{Fa}$ and $1_{Gb}$ respectively (see Mac~Lane \cite[IV.1]{Mac} for more
details).
The basic adjunctions we use in this paper are described in
Section~\ref{adjunctions} below.
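In the notation above (with $G$ left adjoint to $F$), the compatibility between $\varepsilon$ and $\eta$ is expressed by the two standard triangle identities; we record them here for reference, since one of them is used later in the proof of Lemma~\ref{extra}:

```latex
F(\varepsilon_a)\circ\eta_{Fa}=1_{Fa}
  \quad\text{for all } a\in\Ob\mathcal A,
\qquad
\varepsilon_{Gb}\circ G(\eta_b)=1_{Gb}
  \quad\text{for all } b\in\Ob\mathcal B.
```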
\subsection{Fully faithful functors and equivalences}
\label{fff}
A functor $F\colon\mathcal A\to\mathcal B$ is {\em fully faithful}\/ if for any pair of
objects $a_1$, $a_2$ of $\mathcal A$, the map
\[
F\colon\operatorname{Hom}_{\mathcal A}(a_1,a_2)\to\operatorname{Hom}_{\mathcal B}(Fa_1,Fa_2)
\]
is an isomorphism. One should think of $F$ as an `injective' functor.
This is clearer when $F$ has a left adjoint $G\colon\mathcal B\to\mathcal A$
(or a right adjoint $H\colon\mathcal B\to\mathcal A$),
in which case $F$ is fully faithful if and only if the
natural transformation $G\circ F\to \operatorname{id}_{\mathcal A}$ (or $\operatorname{id}_{\mathcal A}\to H\circ F$)
is an isomorphism.
A functor $F$ is an {\em equivalence} if there is an `inverse' functor
$G\colon\mathcal B\to\mathcal A$ such that $G\circ F\cong \operatorname{id}_{\mathcal A}$ and $F\circ G\cong
\operatorname{id}_{\mathcal B}$. In this case $G$ is both a left and right adjoint to $F$ (see
Mac~Lane \cite[IV.4]{Mac}). In practice, we show that $F$ is an equivalence
by writing down an adjoint (a priori, one-sided) and proving that it is an
inverse.
One simple example of this is the following.
\begin{lemma}
\label{extra}
Let $\mathcal A$ and $\mathcal B$ be triangulated categories and $F\colon\mathcal A\to \mathcal B$ a
fully faithful exact functor with a right adjoint $H\colon\mathcal B\to\mathcal A$.
Then $F$ is an equivalence
if and only if $Hc\cong 0$ implies $c\cong 0$ for any object $c\in\Ob\mathcal B$.
\end{lemma}
\begin{pf}
By assumption $\eta\colon\operatorname{id}_{\mathcal A}\to H\circ F$ is an
isomorphism, so $F$ is an equivalence if and only if
$\varepsilon\colon F\circ H\to\operatorname{id}_{\mathcal B}$ is an isomorphism.
Thus the `only if' part of the lemma is immediate, since $c\cong FHc$.
For the `if' part, take any object $b\in\Ob\mathcal B$ and
embed the natural adjunction map $\varepsilon_b$ in a triangle
\begin{equation}
\label{semiortho}
c\to FHb\lRa{\varepsilon_b} b\to c[1].
\end{equation}
If we apply $H$ to this triangle, then $H(\varepsilon_b)$ is an isomorphism,
because $\eta_{Hb}$ is an isomorphism and
$H(\varepsilon_b)\circ\eta_{Hb}=1_{Hb}$ (\cite[IV.1, Theorem 1]{Mac}).
Hence $Hc\cong 0$ and so $c\cong 0$ by hypothesis.
Thus $\varepsilon_b$ is an isomorphism, as required.
\end{pf}
One may understand this lemma in a broader context
as follows.
The triangle~(\ref{semiortho}) shows that,
when $F$ is fully faithful with right adjoint $H$,
there is a `semi-orthogonal' decomposition $\mathcal B=(\operatorname{Im} F,\operatorname{Ker} H)$,
where
\begin{align*}
\operatorname{Im} F &= \{ b\in\Ob\mathcal B : \text{$b\cong Fa$ for some $a\in\Ob\mathcal A$} \},\\
\operatorname{Ker} H &= \{ c\in\Ob\mathcal B : Hc\cong 0 \}.
\end{align*}
Since $F$ is fully faithful,
the fact that $b\cong Fa$ for some object $a\in\Ob\mathcal A$
necessarily means that $b\cong FHb$,
so only zero objects are in both subcategories.
The semi-orthogonality condition also requires that
$\operatorname{Hom}_{\mathcal B}(b,c)=0$ for all $b\in\operatorname{Im} F$ and $c\in\operatorname{Ker} H$,
which is immediate from the adjunction.
The lemma then has the very reasonable interpretation that
if $\operatorname{Ker} H$ is trivial,
then $\operatorname{Im} F=\mathcal B$ and $F$ is an equivalence.
Note that if $G$ is a left adjoint for $F$, then there is a similar
semi-orthogonal decomposition on the other side
$\mathcal B=(\operatorname{Ker} G,\operatorname{Im} F)$ and a corresponding version of the lemma.
For more details on semi-orthogonal decompositions see Bondal \cite{Bo}.
\subsection{Spanning classes and orthogonal decomposition}
A {\em spanning class} for a triangulated category $\mathcal A$ is a subclass
$\Omega$ of the objects of $\mathcal A$ such that for any object $a\in\Ob\mathcal A$
\[
\operatorname{Hom}^i_{\mathcal A}(a,\omega)=0 \quad\mbox{for all }
\omega\in\Omega, i\in\mathbb Z\quad\mbox{implies }a\cong 0
\]
and
\[
\operatorname{Hom}^i_{\mathcal A}(\omega,a)=0\quad\mbox{for all }
\omega\in\Omega, i\in\mathbb Z\quad\mbox{implies }a\cong 0.
\]
For example, the set of skyscraper sheaves $\{\mathcal O_x\colon x\in X\}$
on a non\-singular variety $X$ is a spanning class for $\operatorname{D}(X)$.
A triangulated category $\mathcal A$ is {\em decomposable} as an orthogonal direct
sum of two full subcategories $\mathcal A_1$ and $\mathcal A_2$ if every object of $\mathcal A$ is
isomorphic to a direct sum $a_1\oplus a_2$ with $a_j\in\Ob\mathcal A_j$, and if
\[
\operatorname{Hom}_{\mathcal A}^i(a_1,a_2)=\operatorname{Hom}_{\mathcal A}^i(a_2,a_1)=0
\]
for any pair of objects $a_j\in\Ob\mathcal A_j$ and all integers $i$.
The category $\mathcal A$ is indecomposable if for any such decomposition one
of the two subcategories $\mathcal A_i$ is trivial.
For example, if $X$ is a scheme, $\operatorname{D}(X)$ is indecomposable
precisely when $X$ is connected. For more details see Bridgeland \cite{Br1}.
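To illustrate the connection with connectedness (a standard observation, ours rather than the text's): if $X=X_1\sqcup X_2$ is disconnected, every complex splits over the two pieces, and since the supports are disjoint there are no morphisms between them in either direction,

```latex
\operatorname{D}(X_1\sqcup X_2)\;=\;\operatorname{D}(X_1)\oplus\operatorname{D}(X_2),
\qquad
\operatorname{Hom}^i_{\operatorname{D}(X)}(E_1,E_2)=\operatorname{Hom}^i_{\operatorname{D}(X)}(E_2,E_1)=0
```

for $E_j$ supported on $X_j$, so $\operatorname{D}(X)$ is decomposable.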
\subsection{Serre functors}
The properties of Serre duality on a non\-singular projective variety
were abstracted by Bondal and Kapranov \cite{BK} into the
notion of a Serre functor on a triangulated category.
Let $\mathcal A$ be a triangulated category in which all the $\operatorname{Hom}$ sets are
finite dimensional vector spaces. A {\em Serre functor} for $\mathcal A$ is an
exact equivalence $S\colon\mathcal A\to\mathcal A$ inducing bifunctorial
iso\-morphisms
\[
\operatorname{Hom}_{\mathcal A}(a,b)\to\operatorname{Hom}_{\mathcal A}(b,S (a))^{\vee}
\quad \text{for all $a,b\in\Ob\mathcal A$}
\]
that satisfy a simple compatibility condition (see \cite{BK}). When a
Serre functor exists, it is unique up to isomorphism of functors. We say
that $\mathcal A$ has {\em trivial}\/ Serre functor if for some integer $i$ the
shift functor $[i]$ is a Serre functor for $\mathcal A$.
The main example is the bounded derived category of coherent sheaves
$\operatorname{D}(X)$ on a non\-singular projective $n$-fold $X$, having the Serre
functor
\[
S_X(\kern.0mm\smash{-}\kern.0mm)=(\kern.0mm\smash{-}\kern.0mm\otimes\omega_X)[n].
\]
Thus $\operatorname{D}(X)$ has trivial Serre functor if and only if the canonical bundle
of $X$ is trivial.
\subsection{A criterion for equivalence}
\label{trings}
Let $F\colon\mathcal A\to\mathcal B$ be an exact functor between triangulated categories
with Serre functors $S_{\mathcal A}$ and $S_{\mathcal B}$. Assume that $F$ has a left
adjoint $G\colon\mathcal B\to\mathcal A$. Then $F$ also has a right adjoint
$H=S_{\mathcal A}\circ G\circ S_{\mathcal B}^{-1}$.
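This formula for $H$ follows from a one-line chain of identifications (a routine verification we spell out), using Serre duality in $\mathcal A$, the adjunction for $(G,F)$, and Serre duality in $\mathcal B$:

```latex
\operatorname{Hom}_{\mathcal A}(a,\,S_{\mathcal A} G S_{\mathcal B}^{-1} b)
\;\cong\;\operatorname{Hom}_{\mathcal A}(G S_{\mathcal B}^{-1} b,\,a)^{\vee}
\;\cong\;\operatorname{Hom}_{\mathcal B}(S_{\mathcal B}^{-1} b,\,Fa)^{\vee}
\;\cong\;\operatorname{Hom}_{\mathcal B}(Fa,\,b),
```

so $H=S_{\mathcal A}\circ G\circ S_{\mathcal B}^{-1}$ is indeed right adjoint to $F$.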
\begin{thm}
\label{tring}
Suppose there is a spanning class $\Omega$ for $\mathcal A$ such that
\begin{equation*}
F\colon\operatorname{Hom}^i_{\mathcal A}(\omega_1,\omega_2)\to \operatorname{Hom}^i_{\mathcal B}(F\omega_1,F\omega_2)
\end{equation*}
is an isomorphism for all $i\in\mathbb Z$ and all
$\omega_1,\omega_2\in\Omega$.
Then $F$ is fully faithful.
\end{thm}
\begin{pf}
See \cite[Theorem 2.3]{Br1}.
\end{pf}
\begin{thm}
\label{tring2}
Suppose further that $\mathcal A$ is non\-trivial,
that $\mathcal B$ is indecomposable and
that $FS_{\mathcal A}(\omega)\cong S_{\mathcal B}F(\omega)$ for all $\omega\in\Omega$.
Then $F$ is an equivalence of categories.
\end{thm}
\begin{pf}
Consider an object $b\in\Ob\mathcal B$.
For any $\omega\in\Omega$ and $i\in\mathbb Z$ we have isomorphisms
\begin{eqnarray*}
&\operatorname{Hom}_{\mathcal A}^i(\omega,Gb)=\operatorname{Hom}_{\mathcal A}^i(Gb,S_{\mathcal A} \omega)^{\vee}=
\operatorname{Hom}_{\mathcal B}^i(b,FS_{\mathcal A} \omega)^{\vee} \\
&=\operatorname{Hom}_{\mathcal B}^i(b,S_{\mathcal B} F \omega)^{\vee}=
\operatorname{Hom}_{\mathcal B}^i(F\omega,b)=\operatorname{Hom}_{\mathcal A}^i(\omega,Hb),
\end{eqnarray*}
using Serre duality and the adjunctions for $(G,F)$ and $(F,H)$. Since
$\Omega$ is a spanning class we can conclude that $Gb\cong 0$ precisely when
$Hb\cong 0$. Then the result follows from \cite[Theorem 3.3]{Br1}.
\end{pf}
The proof of Theorem 3.3 in \cite{Br1} may
be understood as follows.
If $\operatorname{Ker} H\subset\operatorname{Ker} G$, then
the semiorthogonal decomposition described
at the end of Section~\ref{fff} becomes an orthogonal decomposition.
Hence $\operatorname{Ker} H$ must be trivial, because $\mathcal B$ is indecomposable and
$\mathcal A$, and hence $\operatorname{Im} F$, is nontrivial.
Thus $\operatorname{Im} F=\mathcal B$ and $F$ is an equivalence.
\section{Derived categories of sheaves}
This section is concerned with various general properties of complexes
of $\mathcal O_X$-modules on a scheme $X$. Note that all our schemes are of
finite type over $\mathbb C$. Given a scheme $X$, define $\operatorname{D}^{\mathrm{qc}}(X)$ to be
the (unbounded) derived category of the Abelian category
$\operatorname{Qcoh}(X)$ of quasi\-coherent sheaves on $X$.
Also define $\operatorname{D}(X)$ to be the full subcategory of
$\operatorname{D}^{\mathrm{qc}}(X)$ consisting of complexes with bounded and coherent cohomology.
\subsection{Geometric adjunctions}
\label{adjunctions}
Here we describe three standard adjunctions that arise in algebraic
geometry and are used frequently in what follows. For the first example,
let $X$ be a scheme and $E\in\operatorname{D}(X)$ an object of finite homological
dimension. Then the derived dual
\[
E^{\vee}=\mathbf R\operatorname{\mathcal H\mathit{om}}_{\mathcal O_X}(E,\mathcal O_X)
\]
also has finite homological dimension, and the functor $\kern.0mm\smash{-}\kern.0mm\mathbf Ltensor E$ is
both left and right adjoint to the functor $\kern.0mm\smash{-}\kern.0mm\mathbf Ltensor E^{\vee}$.
For the second example take a morphism of schemes $f\colon X\to Y$. The
functor
\[
\mathbf R f_*\colon \operatorname{D}^{\mathrm{qc}}(X)\longrightarrow \operatorname{D}^{\mathrm{qc}}(Y)
\]
has the left adjoint
\[
\mathbf L f^*\colon \operatorname{D}^{\mathrm{qc}}(Y)\longrightarrow \operatorname{D}^{\mathrm{qc}}(X).
\]
If $f$ is proper then $\mathbf R f_*$ takes $\operatorname{D}(X)$ into $\operatorname{D}(Y)$. If $f$
has finite Tor dimension (for example if $f$ is flat, or $Y$ is
non\-singular) then $\mathbf L f^*$ takes $\operatorname{D}(Y)$ into $\operatorname{D}(X)$.
The third example is Grothendieck duality. Again take a morphism of
schemes $f\colon X\to Y$. The functor $\mathbf R f_*$ has a right adjoint
\[
f^!\colon \operatorname{D}^{\mathrm{qc}}(Y)\longrightarrow \operatorname{D}^{\mathrm{qc}}(X)
\]
and moreover, if $f$ is proper and of finite Tor dimension, there is an
isomorphism of functors
\begin{equation}
\label{amos}
f^!(\kern.0mm\smash{-}\kern.0mm)\,\cong\, \mathbf L f^*(\kern.0mm\smash{-}\kern.0mm)\mathbf Ltensor f^!(\mathcal O_Y).
\end{equation}
Neeman \cite{Ne} has recently given a completely formal proof of these
statements in terms of the Brown representability theorem.
Let $X$ be a non\-singular projective variety of dimension $n$ and write
$f\colon X\to Y=\operatorname{Spec}(mathbb C)$ for the projection to a point.
In this case $f^!(\mathcal O_Y)=\omega_X[n]$.
The above statement of Grothendieck duality implies that the functor
\begin{equation}
\label{sefu}
S_X(\kern.0mm\smash{-}\kern.0mm)=(\kern.0mm\smash{-}\kern.0mm\otimes\omega_X)[n]
\end{equation}
is a Serre functor on $\operatorname{D}(X)$.
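As a sanity check (our computation, using only the Koszul resolution of a point), consider a skyscraper sheaf $\mathcal O_x$ on a non\-singular projective $n$-fold $X$. Since $\operatorname{Ext}^i_X(\mathcal O_x,\mathcal O_x)\cong\Lambda^i T_xX$ and $\mathcal O_x\otimes\omega_X\cong\mathcal O_x$, Serre duality for the functor above reads

```latex
\operatorname{Ext}^i_X(\mathcal O_x,\mathcal O_x)
\;\cong\;\operatorname{Ext}^{n-i}_X(\mathcal O_x,\mathcal O_x\otimes\omega_X)^{\vee}
\;\cong\;\operatorname{Ext}^{n-i}_X(\mathcal O_x,\mathcal O_x)^{\vee},
```

and indeed both sides have dimension $\binom{n}{i}$.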
\subsection{Duality for quasi\-projective schemes}
\label{qpdual}
In order to apply Grothendieck duality on quasi\-projective schemes, we
need to restrict attention to sheaves with compact support. The {\em
support} of an object $E\in\operatorname{D}(X)$ is the locus of $X$ where $E$ is not
exact, that is, the union of the supports of the cohomology sheaves of
$E$. It is always a closed subset of $X$.
Given a scheme $X$, define the category $\operatorname{D}_c(X)$ to be the full
subcategory of $\operatorname{D}(X)$ consisting of complexes whose support is proper.
Note that when $X$ itself is proper, $\operatorname{D}_c(X)$ is just the usual derived
category $\operatorname{D}(X)$.
If $f\colon X\to Y$ is a morphism of schemes of
finite Tor dimension, but not necessarily proper,
then~(\ref{amos}) still holds for all objects in $\operatorname{D}_c(Y)$. Using this
we see that if $X$ is a non\-singular quasi\-projective variety of
dimension $n$, the category $\operatorname{D}_c(X)$ has a Serre functor given
by~(\ref{sefu}).
\subsection{Crepant resolutions}
Let $X$ be a variety and $f\colon Y\to X$ a resolution of singularities.
Suppose that $X$ has rational singularities, that is, $f_*\mathcal O_Y=\mathcal O_X$ and
\[
\mathbf R^i f_* \mathcal O_Y=0 \qquad\text{for all } i>0.
\]
Given a point $x\in X$ define $\operatorname{D}_x(Y)$ to be the full subcategory of $\operatorname{D}_c(Y)$
consisting of objects whose support is contained in the fibre $f^{-1}(x)$.
We have the following categorical criterion for $f$ to be crepant.
\begin{lemma}
\label{crepancy}
If\/ $\operatorname{D}_x(Y)$ has trivial Serre functor for each $x\in X$,
then $X$ is Gorenstein and\/ $f\colon Y\to X$ is a crepant resolution.
\end{lemma}
\begin{pf}
The Serre functor on $\operatorname{D}_x(Y)$ is the restriction of the Serre
functor on $\operatorname{D}_c(Y)$.
Hence, by Section~\ref{qpdual}, the condition implies that for
each $x\in X$ the restriction of the functor
$(\kern.0mm\smash{-}\kern.0mm\otimes\omega_Y)$ to the category $\operatorname{D}_x(Y)$ is isomorphic to the
identity. Since $\operatorname{D}_x(Y)$ contains the structure sheaves of all
fattened neighbourhoods of the fibre $f^{-1}(x)$ this implies that the
restriction of $\omega_Y$ to each formal fibre of $f$ is trivial.
To get the result, we must show that $\omega_X$ is a line bundle
and that $f^*\omega_X=\omega_Y$.
Since $\omega_X=f_*\omega_Y$, this is achieved by the following lemma.
\end{pf}
\begin{lemma}
A line bundle $L$ on $Y$ is the pullback $f^*M$
of some line bundle $M$ on $X$ if
and only if the restriction of $L$ to each formal fibre of $f$ is trivial.
Moreover, when this holds, $M=f_*L$.
\end{lemma}
\begin{pf}
For each point $x\in X$, the formal fibre of $f$ over $x$ is the fibre product
\[
Y\times_X \operatorname{Spec}(\widehat{\mathcal O}_{X,x}).
\]
The restriction of the pullback of a line bundle from $X$ to each of these
schemes is trivial because a line bundle has trivial formal stalks at points.
For the converse suppose that the restriction of $L$ to each of these formal
fibres is trivial. The theorem on formal functions shows that the completion
of the stalks of the sheaves $\mathbf R^i f_*\mathcal O_Y$ and $\mathbf R^i f_*L$ at any point
$x\in X$ are isomorphic for each $i$. Since $X$ has rational singularities
it follows that $\mathbf R^i f_*L=0$ for all $i>0$, and $M=f_*L$ is a line bundle
on $X$.
Since $f^*M$ is torsion free, the natural adjunction map
$\eta\colon f^* f_*L \to L$ is injective, so there is a short exact sequence
\begin{equation}
\label{seseta}
0\to f^* f_*L \lRa{\eta} L\to Q\to 0.
\end{equation}
By the projection formula and the fact that $X$ is rational,
\[
\mathbf R^i f_*(f^* M)=M\otimes \mathbf R^i f_*\mathcal O_Y=0 \quad\mbox{for all }i>0.
\]
The fact that $\eta$ is the unit of the adjunction for $(f^*,f_*)$
implies that $f_*\eta$ has a left inverse, and in particular is surjective.
Applying $f_*$ to~(\ref{seseta}) we conclude that $f_* Q=0$.
Using the theorem on formal functions again, we can deduce that
\[
f_*(Q\otimes L^{-1})=0.
\]
In particular, $Q\otimes L^{-1}$ has no global sections.
Tensoring~(\ref{seseta}) with $L^{-1}$ gives a contradiction unless $Q=0$. Hence
$\eta$ is an isomorphism and we are done.
\end{pf}
\section{$G$-sheaves}
Throughout this section $G$ is a finite group acting
on a scheme $X$ (on the left) by auto\-morphisms.
As in the last section, all schemes are of
finite type over $\mathbb C$. We list some results we need concerning the
category of sheaves on $X$ equipped with a compatible $G$ action,
or `$G$-sheaves' for short.
Since $G$ is finite, most of the
proofs are trivial and are left to the reader. The main point is that
natural constructions involving sheaves on $X$ are canonical, so commute
with automorphisms of $X$.
\subsection{Sheaves and functors}
A $G$-sheaf $E$ on $X$ is a quasi\-coherent sheaf of
$\mathcal O_X$-modules together with a lift of the $G$ action to $E$.
More precisely, for each $g\in G$, there is a lift
$\lift{g}{E}\colon E\to g^*E$ satisfying $\lift{1}{E}=\operatorname{id}_E$
and $\lift{hg}{E}=g^*\left(\lift{h}{E}\right)\circ\lift{g}{E}$.
If $E$ and $F$ are $G$-sheaves, then there is a (right)
action of $G$ on $\operatorname{Hom}_X(E,F)$ given by
$\theta^g=\left(\lift{g}{F}\right)^{-1}\circ g^*\theta\circ\lift{g}{E}$
and the space $\operatorname{\hbox{$G$}-Hom}_X(E,F)$ of $G$-invariant maps gives the morphisms
in the Abelian categories $\operatorname{Qcoh}^G(X)$ and $\operatorname{Coh}^G(X)$ of
$G$-sheaves.
The category $\operatorname{Qcoh}^G(X)$ has enough injectives (Grothendieck
\cite[Proposition 5.1.2]{Gr}) so we may take $G$-equivariant injective
resolutions. Since $G$ is finite, if $X$ is a quasi\-projective scheme
there is an ample invertible $G$-sheaf on $X$ and so we may also
take \hbox{$G$-equivariant} locally free resolutions. The functors
$\operatorname{\hbox{$G$}-Ext}^i_X(\kern.0mm\smash{-}\kern.0mm,\kern.0mm\smash{-}\kern.0mm)$ are the $G$-invariant parts of
$\operatorname{Ext}^i_X(\kern.0mm\smash{-}\kern.0mm,\kern.0mm\smash{-}\kern.0mm)$ and are the derived functors of
$\operatorname{\hbox{$G$}-Hom}_X(\kern.0mm\smash{-}\kern.0mm,\kern.0mm\smash{-}\kern.0mm)$. Thus if $X$ is non\-singular of dimension $n$,
so that $\operatorname{Qcoh}(X)$ has global dimension $n$, then the category $\operatorname{Qcoh}^G(X)$
also has global dimension $n$.
The local functors $\operatorname{\mathcal H\mathit{om}}$ and $\otimes$ are defined in the obvious way on
$\operatorname{Qcoh}^G(X)$, as are pullback $f^*$ and pushforward $f_*$ for any
$G$-equivariant morphism of schemes $f\colon X\to Y$. Thus, for example,
$\lift{g}{f^*E}=f^*\lift{g}{E}$. Natural isomorphisms such as
$\operatorname{Hom}_X(f^*E,F)\cong\operatorname{Hom}_Y(E,f_*F)$ are canonical, that is, commute with
isomorphisms of the base, and hence are $G$-equivariant. Therefore they
restrict to natural iso\-morphisms
\[
\operatorname{\hbox{$G$}-Hom}_X(f^*E,F)\cong\operatorname{\hbox{$G$}-Hom}_Y(E,f_*F).
\]
In other words, $f^*$ and $f_*$ are also adjoint functors between the
categories $\operatorname{Qcoh}^G(X)$ and $\operatorname{Qcoh}^G(Y)$.
Similarly, the natural isomorphisms implicit in the projection formula,
flat base change, etc. are canonical and hence $G$-equivariant.
It seems worthwhile to single out the following point:
\begin{lemma}
\label{last}
Let $E$ and $F$ be $G$-sheaves on $X$.
Then, as a representation of $G$, we have a direct sum decomposition
\[
\operatorname{Hom}_X(E,F)=\bigoplus_{i=0}^k\operatorname{\hbox{$G$}-Hom}_X(E\otimes\rho_i,F)\otimes\rho_i
\]
over the irreducible representations $\{\rho_0,\dots,\rho_k\}$.
\end{lemma}
\begin{pf}
The result amounts to showing that
\[
\operatorname{\hbox{$G$}-Hom}(\rho_i, \operatorname{Hom}_X(E,F))=\operatorname{\hbox{$G$}-Hom}_X(E\otimes\rho_i,F).
\]
Let $f\colon X\to Y=\operatorname{Spec}(\mathbb C)$ be projection to a point,
with $G$ acting trivially on $Y$ so that the map is equivariant.
Then $\operatorname{Qcoh}^G(Y)$ is just the category of $\mathbb C[G]$-modules.
Note that $\operatorname{Hom}_X(E,F)= f_*\operatorname{\mathcal H\mathit{om}} _{\mathcal O_X}(E,F)$ and
$f^*\rho_i=\mathcal O_X\otimes\rho_i$,
so that the adjunction between $f^*$ and $f_*$ gives
\begin{eqnarray*}
\operatorname{\hbox{$G$}-Hom}_Y(\rho_i, f_* \operatorname{\mathcal H\mathit{om}}_{\mathcal O_X}(E,F))&=&
\operatorname{\hbox{$G$}-Hom}_X(\mathcal O_X\otimes\rho_i, \operatorname{\mathcal H\mathit{om}}_{\mathcal O_X}(E,F)) \\
&=&\operatorname{\hbox{$G$}-Hom}_X(E\otimes\rho_i,F),
\end{eqnarray*}
as required.
\end{pf}
\subsection{Trivial actions}
\label{sec:triv}
If the group $G$ acts trivially on $X$, then any $G$-sheaf $E$
decomposes as a direct sum
\[
E=\bigoplus_i E_i\otimes \rho_i
\]
over the irreducible representations $\{\rho_0,\rho_1,\dots,\rho_k\}$ of $G$
(where $\rho_0=\mathbf 1$ is the trivial representation). The sheaves $E_i$
are just ordinary sheaves on $X$. Furthermore,
$\operatorname{\hbox{$G$}-Hom}_X(E_i\otimes \rho_i,E_j\otimes \rho_j)=0$ for $i\ne j$. Thus the
category $\operatorname{Qcoh}^G(X)$ decomposes as a direct sum $\bigoplus_i\operatorname{Qcoh}^i(X)$ and
each summand is equivalent to $\operatorname{Qcoh}(X)$.
In particular, every $G$-sheaf $E$ has a fixed part $[E]^G$
and the functor
\[
[\kern.0mm\smash{-}\kern.0mm]^G\colon\operatorname{Qcoh}^G(X)\to\operatorname{Qcoh}(X)
\]
is the left and right adjoint to the functor
\[
\kern.0mm\smash{-}\kern.0mm\otimes\rho_0\colon\operatorname{Qcoh}(X)\to\operatorname{Qcoh}^G(X),
\]
that is, `let $G$ act trivially'.
Both functors are exact.
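For example, when the action is trivial the components $E_i$ can be
recovered from $E$ by taking invariants:
\[
E_i\cong[E\otimes\rho_i^{\vee}]^G,
\]
since, by Schur's lemma, $\rho_j\otimes\rho_i^{\vee}$ contains the trivial
representation exactly once when $i=j$ and not at all when $i\ne j$.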
\subsection{Derived categories}
The $G$-equivariant derived category $\operatorname{D}^G(X)$ is defined to be the
full subcategory of the (unbounded) derived category of $\operatorname{Qcoh}^G(X)$
consisting of complexes with bounded and coherent cohomology.
The usual derived functors $\mathbf R\operatorname{\mathcal H\mathit{om}}$, $\otimes^{\mathbf L}$, $\mathbf L f^*$ and $\mathbf R f_*$ may
be defined on the equivariant derived category, and, as for sheaves, the
standard properties of adjunctions, projection formula and flat base change
then hold because the implicit natural iso\-morphisms are sufficiently
canonical.
To obtain an equivariant Grothendieck duality we refer to Neeman's
results \cite{Ne}. Let $f\colon X\to Y$ be an equivariant morphism of
schemes. The only thing to check is that equivariant pushdown
$\mathbf R f_*$ commutes with small coproducts. This is proved exactly as in
\cite{Ne}. Then the functor $\mathbf R f_*$ has a right adjoint $f^!$,
and~(\ref{amos}) holds when $f$ is proper and of finite Tor dimension.
Moreover the same result holds if $f$ is not proper, provided that we
restrict $\mathbf R f_*$ to the subcategories of objects with compact support.
Thus if $X$ is a non\-singular quasi\-projective variety of dimension $n$,
the full subcategory $\operatorname{D}^G_{\mathrm c}(X)\subset\operatorname{D}^G(X)$ consisting of objects with
compact supports has the Serre functor
\[
S_X(\kern.0mm\smash{-}\kern.0mm)\,=\,(\kern.0mm\smash{-}\kern.0mm\otimes\omega_X)[n],
\]
where $\omega_X$ is the canonical bundle of $X$ with its induced
$G$-structure.
\subsection{Indecomposability}
If $G$ acts trivially on $X$ then the results of Section~\ref{sec:triv} show
that $\operatorname{D}^G(X)$ decomposes as a direct sum of orthogonal subcategories indexed
by the irreducible representations of $G$. More generally it is easy to see
that $\operatorname{D}^G(X)$ is decomposable unless $G$ acts faithfully. We need the
converse of this statement.
\begin{lemma}
Suppose a finite group $G$ acts faithfully on a quasi\-projective variety $X$.
Then $\operatorname{D}^G(X)$ is indecomposable.
\end{lemma}
\begin{pf}
Suppose that $\operatorname{D}^G(X)$ decomposes as an orthogonal direct sum of two
subcategories $\mathcal A_1$ and $\mathcal A_2$. Any indecomposable object of $\operatorname{D}^G(X)$ lies in
either $\mathcal A_1$ or $\mathcal A_2$ and
\[
\operatorname{Hom}_{\operatorname{D}^G(X)}(a_1,a_2)=0 \quad\text{for all } a_1\in \mathcal A_1,\ a_2\in\mathcal A_2.
\]
Since the action of $G$ is faithful, the general orbit is free. Let
$D=G\cdot x$ be a free orbit. Then $\mathcal O_D$ is indecomposable as a
$G$-sheaf. Suppose without loss of generality that $\mathcal O_D$ lies in $\mathcal A_1$.
Let $\rho_i$ be an irreducible representation of $G$. The sheaf
$\mathcal O_X\otimes\rho_i$ is indecomposable in $\operatorname{D}^G(X)$ and there exists a nonzero
equivariant map $\mathcal O_X\otimes\rho_i\to\mathcal O_D$, so $\mathcal O_X\otimes\rho_i$ also
lies in $\mathcal A_1$. Any indecomposable $G$-sheaf $E$ supported in
dimension 0 has a section, so by Lemma~\ref{last} there is a nonzero equivariant
map $\mathcal O_X\otimes\rho_i\to E$ for some $i$, and thus $E$ lies in $\mathcal A_1$.
Finally, given an indecomposable $G$-sheaf $F$, take an orbit
$G\cdot x$ contained in $\operatorname{Supp}(F)$ and let $i\colon G\cdot x\hookrightarrow X$
be the inclusion. Then $i_* i^*(F)$ is supported in dimension 0 and
there is a nonzero equivariant map $F\to i_* i^*(F)$, so $F$ also lies in
$\mathcal A_1$. Now $\mathcal A_2$ is orthogonal to all sheaves, hence is trivial.
\end{pf}
\section{The intersection theorem}
Our proof that $\operatorname{\hbox{$G$}-Hilb}{M}$ is non\-singular follows an idea developed in
Bridgeland and Maciocia \cite{Br2} for moduli spaces over K3 fibrations,
and uses the following famous and difficult result of commutative algebra:
\begin{thm}[Intersection theorem]
\label{com}
Let $(A,m)$ be a local\/ \hbox{$\mathbb C$-algebra} of dimension $d$. Suppose that
\[
0\to M_s\to M_{s-1}\to\cdots\to M_0\to 0
\]
is a nonexact complex of finitely generated free $A$-modules with each
homology module $H_i(M_{{\scriptscriptstyle\bullet}})$ an $A$-module of finite length. Then $s\ge
d$. Moreover, if $s=d$ and $H_0(M_{{\scriptscriptstyle\bullet}})\cong A/m$, then
\[
H_i(M_{{\scriptscriptstyle\bullet}})=0 \quad\text{for all $i\ne 0$,}
\]
and $A$ is regular.
\end{thm}
The basic idea is as follows.
Serre's criterion states that any finite length $A$-module
has homological dimension $\geq d$ and that $A$ is regular
precisely if there
is a finite length $A$-module which has homological dimension
exactly $d$.
The intersection theorem gives corresponding statements
for complexes of $A$-modules with finite length homology.
As a rough slogan, ``regularity is a property of the derived category''.
For the main part of the proof, see Roberts \cite{Ro1}, \cite{Ro2};
for the final clause, see \cite{Br2}.
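A standard example shows that the bound $s\ge d$ is sharp: if $A$ is
regular with regular system of parameters $x_1,\dots,x_d$, then the Koszul
complex
\[
0\to\Lambda^d A^d\to\cdots\to\Lambda^2 A^d\to A^d\to A\to 0
\]
on $x_1,\dots,x_d$ is a complex of finitely generated free modules of
length $d$ with $H_0=A/m$ and $H_i=0$ for $i\ne 0$.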
We may rephrase the intersection theorem
using the language of support and homological
dimension. If $X$ is a scheme and $E$ an object in $\operatorname{D}(X)$,
then it is easy to check
\cite{Br2} that, for any closed point $x\in X$,
\[
x\in\operatorname{Supp} E\iff \operatorname{Hom}_{\operatorname{D}(X)}^i(E,\mathcal O_x)\ne 0 \text{ for some $i\in\mathbb Z$.}
\]
The {\em homological dimension} of a nonzero object $E\in\operatorname{D}(X)$, written
$\operatorname{hom\, dim} E$, is the smallest non\-negative integer $s$ such that $E$ is
isomorphic in $\operatorname{D}(X)$ to a complex of locally free sheaves on $X$ of
length $s$. If no such integer exists we put $\operatorname{hom\, dim} E =\infty$. One can
prove \cite{Br2} that if $X$ is quasiprojective, and $n$ is a nonnegative
integer, then $\operatorname{hom\, dim} E \le n$ if and only if there is an integer $j$ such
that for any point $x\in X$
\[
\operatorname{Hom}^i_{\operatorname{D}(X)}(E,\mathcal O_x)=0 \quad\text{unless } j\le i\le j+n.
\]
The two parts of Theorem~\ref{com} now become the following (cf.\
\cite{Br2}).
\begin{cor}
\label{ab}
Let $X$ be a scheme and $E$ a nonzero object of\/ $\operatorname{D}(X)$. Then
\[
\operatorname{codim}(\operatorname{Supp} E)\,\le\,\operatorname{hom\, dim} E.
\]
\end{cor}
\begin{cor}
\label{smooth}
Let $X$ be an irreducible $n$-dimensional scheme, and fix a point $x\in X$.
Suppose that there is an object $E$ of\/ $\operatorname{D}(X)$ such that for any point
$z\in X$, and any integer $i$,
\[
\operatorname{Hom}^i_{\operatorname{D}(X)}(E,\mathcal O_z)=0 \quad\text{unless } z=x \text{ and } 0\le i\le n.
\]
Suppose also that $H_0(E)\cong\mathcal O_x$. Then $X$ is non\-singular at $x$
and $E\cong\mathcal O_x$.
\end{cor}
\section{The projective case}
\label{Sec6!Proj_case}
The aim of this section is to prove Theorem~\ref{second} under the
additional assumption that $M$ is projective. The quasi\-projective case
involves some further technical difficulties that we deal with in the
next section. Take notation as in the introduction.
We break the proof up
into 7 steps.
\Step1
Let $\pi_Y\colon Y\times M\to Y$ and $\pi_M\colon Y\times M\to M$ denote
the projections. The functor $\Phi$ may be rewritten
\[
\Phi(\kern.0mm\smash{-}\kern.0mm)\cong \mathbf R\pi_{M*}(\mathcal O_{\mathcal Z}\otimes\pi_Y^*(\kern.0mm\smash{-}\kern.0mm\otimes\rho_0)).
\]
Note that $\mathcal O_{\mathcal Z}$ has finite homological dimension, because $\mathcal Z$ is flat
over $Y$ and $M$ is non\-singular. Hence the derived dual $\mathcal O_{\mathcal Z}^{\vee}=
\mathbf R\operatorname{\mathcal H\mathit{om}}_{\mathcal O_{Y\times M}}(\mathcal O_{\mathcal Z},\mathcal O_{Y\times M})$
also has finite homological dimension
and we may define another functor
$\Psi\colon\operatorname{D}^G(M)\to \operatorname{D}(Y)$ by the formula
\[
\Psi(\kern.0mm\smash{-}\kern.0mm)=[\mathbf R \pi_{Y*}(\mathcal P\otimes^{\mathbf L} \pi_M^*(\kern.0mm\smash{-}\kern.0mm))]^G,
\]
where $\mathcal P=\mathcal O_{\mathcal Z}^{\vee}\otimes\pi_M^*(\omega_M)[n]$.
Now $\Psi$ is left adjoint to $\Phi$ because of the three standard adjunctions
described in Section~\ref{adjunctions}. The functor $\pi_M^*$ is the left
adjoint to $\mathbf R\pi_{M*}$. The functor $\kern.0mm\smash{-}\kern.0mm\otimes\mathcal O_{\mathcal Z}$ has the (left and
right) adjoint $\kern.0mm\smash{-}\kern.0mm\otimes\mathcal O_{\mathcal Z}^{\vee}$. Finally the functor $\pi_Y^!$
has the left adjoint $\mathbf R\pi_{Y_*}$ and
\[
\pi_Y^!(\kern.0mm\smash{-}\kern.0mm)=\pi_Y^*(\kern.0mm\smash{-}\kern.0mm)\otimes\pi_M^*(\omega_M)[n].
\]
\Step2
The composite functor $\Psi\circ\Phi$ is given by
\[
\mathbf R\pi_{2*}(\mathcal Q\otimes^{\mathbf L}\pi_1^*(\kern.0mm\smash{-}\kern.0mm)),
\]
where $\pi_1$ and $\pi_2$ are the projections of $Y\times Y$ onto its
factors, and $\mathcal Q$ is some object of $\operatorname{D}(Y\times Y)$. This is just
composition of correspondences (see Mukai \cite[Proposition~1.3]{muk}).
If $i_y\colon\{y\}\times Y\hookrightarrow Y\times Y$ is the closed embedding then
$\mathbf L i_y^*(\mathcal Q)=\Psi\Phi\mathcal O_y$, so that for any pair of points $y_1, y_2$,
\begin{equation}
\label{support}
\operatorname{Hom}_{\operatorname{D}(Y\times Y)}^i(\mathcal Q,\mathcal O_{(y_1,y_2)})=
\operatorname{Hom}_{\operatorname{D}(Y)}^i(\Psi\Phi\mathcal O_{y_1},\mathcal O_{y_2}) =
\operatorname{\hbox{$G$}-Ext}^i_M(\mathcal O_{Z_{y_1}},\mathcal O_{Z_{y_2}}),
\end{equation}
using the adjunction for $(\Psi,\Phi)$. Our first objective is to show
that $\mathcal Q$ is supported on the diagonal $\Delta\subset Y\times Y$,
or equivalently that the groups
in~(\ref{support}) vanish unless $y_1=y_2$. When $n=3$ this is the same
as the assumption (4.8) of Ito and Nakajima \cite{IN}.
\Step3
\label{bike}
Let $Z_1,Z_2\subset M$ be $G$-clusters. Then
\[
\operatorname{\hbox{$G$}-Hom}_M(\mathcal O_{Z_1},\mathcal O_{Z_2})
=\begin{cases}
\mathbb C &\text{if $Z_1=Z_2$,} \\
0 &\text{otherwise.}
\end{cases}
\]
To see this, note that $\mathcal O_Z$ is generated as an $\mathcal O_M$-module
by any nonzero constant section.
But, since $H^0(\mathcal O_Z)$ is the regular representation of $G$,
the constant sections are precisely the \hbox{$G$-invariant} sections.
Hence any equivariant morphism maps a generator to a scalar multiple of
a generator and so is determined by that scalar.
Let $y_1$ and $y_2$ be distinct points of $Y$. Serre duality, together
with our assumption that $\omega_M$ is locally trivial as a $G$-sheaf,
implies that
\[
\operatorname{\hbox{$G$}-Ext}^n_M(\mathcal O_{Z_{y_1}},\mathcal O_{Z_{y_2}})=
\operatorname{\hbox{$G$}-Hom}_M(\mathcal O_{Z_{y_2}},\mathcal O_{Z_{y_1}})=0,
\]
so that
\[
\operatorname{\hbox{$G$}-Ext}^p_M(\mathcal O_{Z_{y_1}},\mathcal O_{Z_{y_2}})=0 \quad\text{unless $1\le p\le n-1$.}
\]
Hence $\mathcal Q$ restricted to $(Y\times Y)\setminus\Delta$ has homological
dimension $\le n-2$.
\Step4
Now we apply the intersection theorem.
If $y_1$ and $y_2$ are points of $Y$ such that $\tau(y_1)\ne \tau(y_2)$ then
the corresponding clusters $Z_{y_1}$ and $Z_{y_2}$ are disjoint, so that the
groups in~(\ref{support}) vanish. Thus the support of $\mathcal Q|_{(Y\times
Y)\setminus\Delta}$ is contained in the subscheme $Y\times_X Y$. By
assumption this has codimension $>n-2$, so Corollary~\ref{ab} implies that
\[
\mathcal Q|_{(Y\times Y)\setminus\Delta}\cong 0,
\]
that is, $\mathcal Q$ is supported on the diagonal.
\Step5
Fix a point $y\in Y$, and put $E=\Psi\Phi(\mathcal O_y)$. We proved above that
$E$ is supported at the point $y$. We claim that $H_0(E)=\mathcal O_y$. Note that
Corollary~\ref{smooth} then implies that $Y$ is non\-singular at $y$ and
$E\cong\mathcal O_y$.
To prove the claim, note that there is a unique map $E\to\mathcal O_y$, so we
obtain a triangle
\[
C\to E\to\mathcal O_y\to C[1]
\]
for some object $C$ of $\operatorname{D}(Y)$. Using the adjoint pair
$(\Psi,\Phi)$, this gives a long exact sequence
\begin{gather*}
\cdots\to\operatorname{Hom}_{\operatorname{D}(Y)}^0(\mathcal O_y,\mathcal O_y)\to
\operatorname{Hom}_{\operatorname{D}^G(M)}^0(\Phi\mathcal O_y,\Phi\mathcal O_y)\to \operatorname{Hom}_{\operatorname{D}(Y)}^0(C,\mathcal O_y) \\
\to \operatorname{Hom}_{\operatorname{D}(Y)}^1(\mathcal O_y,\mathcal O_y)\lRa{\varepsilon}
\operatorname{Hom}_{\operatorname{D}^G(M)}^1(\Phi\mathcal O_y,\Phi\mathcal O_y)\to \cdots.
\end{gather*}
The homomorphism $\varepsilon$ is just the Kodaira--Spencer map for the family of
clusters $\{\mathcal O_{Z_y}:y\in Y\}$ (Bridgeland \cite[Lemma~4.4]{Br1}), so is
injective. It follows that
\[
\operatorname{Hom}_{\operatorname{D}(Y)}^i(C,\mathcal O_y)=0\quad\text{for all $i\le0$.}
\]
An easy spectral sequence argument (see \cite[Example~2.2]{Br1}) shows that
$H_i(C)=0$ for all $i\le0$. Taking homology sheaves of the above triangle
gives $H_0(E)=\mathcal O_y$, which proves the claim.
\Step6
We have now proved that $Y$ is non\-singular, and that for any pair of
points $y_1,y_2\in Y$, the homo\-morphisms
\[
\Phi\colon\operatorname{Ext}^i_Y(\mathcal O_{y_1},\mathcal O_{y_2})\to
\operatorname{\hbox{$G$}-Ext}^i_M(\mathcal O_{Z_{y_1}},\mathcal O_{Z_{y_2}})
\]
are isomorphisms. By assumption, the action of $G$ on $M$ is such that
$\omega_M$ is trivial as a $G$-sheaf on an open neighbourhood
of each orbit $G\cdot x\subset M$. This implies that
\[
\mathcal O_{Z_y}\otimes\omega_M\cong\mathcal O_{Z_y}
\]
in $\operatorname{Coh}^G(M)$, for each $y\in Y$. Applying Theorem~\ref{tring2} shows
that $\Phi$ is an equivalence of categories.
\Step7
It remains to show that $\tau\colon Y\to X$ is crepant. Take a point
$x\in X=M/G$. The equivalence $\Phi$ restricts to give an equivalence
between the full subcategories $\operatorname{D}_x(Y)\subset\operatorname{D}(Y)$ and
$\operatorname{D}^G_x(M)\subset \operatorname{D}^G(M)$ consisting of objects supported on the
fibre $\tau^{-1}(x)$ and the orbit $\pi^{-1}(x)$ respectively.
The category $\operatorname{D}^G_x(M)$ has trivial Serre functor because $\omega_M$ is
trivial as a $G$-sheaf on a neighbourhood of $\pi^{-1}(x)$. Thus
$\operatorname{D}_x(Y)$ also has trivial Serre functor and Lemma~\ref{crepancy} gives
the result.
This completes the proof of Theorem~\ref{second} in the case that
$Y$ is projective.
\section{The quasi\-projective case}
In this section we complete the proof of Theorem~\ref{second}. Once again,
take notation as in the introduction. The only problem with the argument
of the last section is that when $M$ is not projective Grothendieck
duality in the form we need only applies to objects with compact support.
Thus if we restrict $\Phi$ to a functor
\[
\Phi_{\mathrm c}\colon\operatorname{D}_{\mathrm c}(Y)\to\operatorname{D}^G_{\mathrm c}(M),
\]
then the argument of the last section carries through to show that $Y$ is
non\-singular and crepant and that $\Phi_{\mathrm c}$ is an equivalence. It remains to
show that $\Phi$ itself is an equivalence.
\Step8
The functor $\Phi$ has a right adjoint
\[
\Upsilon(\kern.0mm\smash{-}\kern.0mm)\,=\,[p_*\circ q^!(\kern.0mm\smash{-}\kern.0mm)]^G
\,=\,[\mathbf R\pi_{Y*}(\omega_{\mathcal Z/M}\otimes^{\mathbf L}\pi_M^*(\kern.0mm\smash{-}\kern.0mm))]^G.
\]
As before, the composition $\Upsilon\circ\Phi$ is given by
\[
\mathbf R\pi_{2*}(\mathcal Q\otimes^{\mathbf L}\pi_1^*(\kern.0mm\smash{-}\kern.0mm)),
\]
where $\pi_1$ and $\pi_2$ are the projections of $Y\times Y$ onto its
factors, and $\mathcal Q$ is some object of $\operatorname{D}(Y\times Y)$.
Since $\Phi_{\mathrm c}$ is an equivalence, $\Upsilon\Phi\mathcal O_y=\mathcal O_y$ for any point
$y\in Y$, and it follows that $\mathcal Q$ is actually the pushforward of a line
bundle $L$ on $Y$ to the diagonal in $Y\times Y$. The functor
$\Upsilon\circ\Phi$ is then just twisting by $L$, and to show that $\Phi$ is
fully faithful we must show that $L$ is trivial.
There is a morphism of functors $\varepsilon\colon\operatorname{id}\to\Upsilon\circ\Phi$, which for
any point $y\in Y$ gives a commutative diagram
\[
\begin{CD}
\mathcal O_Y &@>\varepsilon(\mathcal O_Y)>> &L \\
@VfVV && @VVL\otimes fV \\
\mathcal O_y &@>\varepsilon(\mathcal O_y)>> &\,\mathcal O_y
\end{CD}
\]
where $f$ is non\-zero. Since $\varepsilon$ is an isomorphism on the subcategory
$\operatorname{D}_{\mathrm c}(Y)$, the maps $\varepsilon(\mathcal O_y)$ are all isomorphisms, so the section
$\varepsilon(\mathcal O_Y)$ is an isomorphism.
\Step9
The fact that $\Phi$ is an equivalence follows from Lemma~\ref{extra}
once we show that
\[
\Upsilon(E)\cong 0 \implies E\cong 0 \quad
\text{for any object $E$ of $\operatorname{D}^G(M)$.}
\]
Suppose $\Upsilon(E)\cong 0$. Using the adjunction for $(\Phi,\Upsilon)$,
\[
\operatorname{Hom}^i_{\operatorname{D}^G(M)}(B,E)=0 \quad\text{for all $i$,}
\]
whenever $B\cong\Phi(A)$ for some object $A\in\operatorname{D}(Y)$. In particular, this holds
for any $B$ with compact support.
If $E$ is nonzero, let $D=G\cdot x$ be an orbit of $G$ contained in the
support of $E$. Let $i\colon D\hookrightarrow M$ denote the inclusion, a projective
equi\-variant morphism of schemes. Then the adjunction morphism
$i_* i^!(E)\to E$ is non\-zero, which gives a contradiction.
This completes the proof of Theorem~\ref{second}. \qed
\section{Nakamura's conjecture}
Recall that in Theorem~\ref{second} we took the space $Y$ to be an
irreducible component of $\operatorname{\hbox{$G$}-Hilb}{M}$. Note that when $Y$ is non\-singular
and $\Phi$ is an equivalence, $Y$ is actually a connected component. This
is simply because for any point $y\in Y$, the bijection
\[
\Phi\colon\operatorname{Ext}^1_Y(\mathcal O_y,\mathcal O_y)\to\operatorname{\hbox{$G$}-Ext}^1_M(\mathcal O_{Z_y},\mathcal O_{Z_y})
\]
identifies the tangent space of $Y$ at $y$ with the tangent space of
$\operatorname{\hbox{$G$}-Hilb}{M}$ at $y$. In this section we wish to go further and prove that
when $M$ has dimension 3, $\operatorname{\hbox{$G$}-Hilb}{M}$ is in fact connected.
\paragraph{Proof of Nakamura's conjecture} Suppose by contradiction that
there exists a \hbox{$G$-cluster} $Z\subset M$ not contained among the
$\{Z_y: y\in Y\}$. Since $\Phi$ is an equivalence we can take
an object $E\in\operatorname{D}_{\mathrm c}(Y)$ such that $\Phi(E)=\mathcal O_Z$. The argument of
Section~\ref{bike}, Step~3 shows that for any point $y\in Y$
\[
\operatorname{Hom}_{\operatorname{D}(Y)}^i(E,\mathcal O_y)=\operatorname{\hbox{$G$}-Ext}_M^i(\mathcal O_Z,\mathcal O_{Z_y})=0\quad
\text{unless }1\le i\le 2.
\]
This implies that $E$ has homological dimension 1, or more precisely, that
$E$ is quasi-isomorphic to a complex of locally free sheaves of the form
\begin{equation}
\label{ouch}
0\to L_2\lRa{f} L_1\to 0.
\end{equation}
But $\mathcal O_Z$ is supported on some $G$-orbit in $M$, so $E$ is supported on a
fibre of $\tau$, and hence in codimension $\ge1$. It follows that the
complex~(\ref{ouch}) is exact on the left, so $E\cong\operatorname{coker} f[1]$.
In particular
$[E]=-[\operatorname{coker} f]$ in the Grothendieck group $K_{\mathrm c}(Y)$ of $\operatorname{D}_{\mathrm c}(Y)$.
Let $y$ be a point of the fibre of $\tau$ on which $E$ is supported. By
Lemma~\ref{Kgrps} below, $[\mathcal O_{Z_y}]=[\mathcal O_Z]$ in $K^G_{\mathrm c}(M)$, so
that $[\mathcal O_y]=[E]$ in $K_{\mathrm c}(Y)$, since the equivalence $\Phi$
gives an isomorphism of Grothendieck groups.
Let $\overline{Y}$ be a non\-singular projective variety with an open inclusion
$i\colon Y\hookrightarrow \overline{Y}$. The functor $i_*\colon\operatorname{D}_{\mathrm c}(Y)\to\operatorname{D}(\overline{Y})$ induces a
map on K groups, so $[\operatorname{coker} f]=-[\mathcal O_y]$ in $K_{\mathrm c}(\overline{Y})$. But
this contradicts Riemann--Roch, because if $L$ is a sufficiently ample line
bundle on $\overline{Y}$, then $\operatorname{\chi}(\operatorname{coker} f\otimes L)$ and $\operatorname{\chi}(\mathcal O_y\otimes L)$
are both positive.
\begin{lemma}
\label{Kgrps}
If $Z_1$ and $Z_2$ are two $G$-clusters on $M$ supported on the same orbit
then the corresponding elements $[\mathcal O_{Z_1}]$ and $[\mathcal O_{Z_2}]$ in the
Grothendieck group $K^G_{\mathrm c}(M)$ of $\operatorname{D}^G_{\mathrm c}(M)$ are equal.
\end{lemma}
\begin{pf}
We need to show that, as $G$-sheaves,
$\mathcal O_{Z_1}$ and $\mathcal O_{Z_2}$ have composition series
with the same simple factors.
Suppose that they are both supported on the $G$-orbit $D=G\cdot x\subset M$
and let $H$ be the stabiliser subgroup of $x$ in $G$.
The restriction functor
is an equivalence of categories
from finite length $G$-sheaves supported on $D$
to finite length $H$-sheaves supported at $x$.
The reverse equivalence is the induction functor
$\left(\kern.0mm\smash{-}\kern.0mm\otimes_{\mathbb C[H]}\mathbb C[G]\right)$.
Since the restriction of a $G$-cluster supported on $D$ is an
$H$-cluster supported at $x$,
it is sufficient to prove the result for
$H$-clusters supported at $x$.
If $\{\rho_0,\dots,\rho_k\}$ are the irreducible representations of $H$,
then we claim that the simple $H$-sheaves supported at $x$ are precisely
\[
\{S_i=\mathcal O_x\otimes\rho_i:0\le i\le k\}.
\]
These sheaves are certainly simple,
since they are simple as $\mathbb C[H]$-modules.
On the other hand, any $H$-sheaf $E$ supported at $x$ has a
nonzero ordinary sheaf morphism $\mathcal O_x\to E$.
By Lemma~\ref{last} there must be a nonzero $H$-sheaf morphism
$S_i\to E$, for some $i$, and, if $E$ were simple, then this would
have to be an isomorphism.
Thus a composition series as an $H$-sheaf is also a composition
series as a $\mathbb C[H]$-module.
Hence all $H$-clusters supported at $x$ have the same composition
factors as $H$-sheaves, since as $\mathbb C[H]$-modules they are all
the regular representation of $H$.
\end{pf}
\section{K theoretic consequences of equivalence}
In this section we put $M=\mathbb C^n$ and assume that the functor $\Phi$ is an
equivalence of categories. This is always the case when $n\le 3$. The main
point is that such an equivalence of derived categories immediately gives
an isomorphism of the corresponding Grothendieck groups.
\subsection{Restricting to the exceptional fibres}
Let $\operatorname{D}^G_0(\mathbb C^n)$ denote the full subcategory of $\operatorname{D}^G(\mathbb C^n)$ consisting
of objects supported at the origin of $\mathbb C^n$. Similarly, let $\operatorname{D}_0(Y)$ denote
the full subcategory of $\operatorname{D}(Y)$ consisting of objects supported on the
subscheme $\tau^{-1}(\pi(0))$ of $Y$.
The equivalence $\Phi$ induces an equivalence
\[
\Phi_0\colon\operatorname{D}_0(Y)\to\operatorname{D}^G_0(\mathbb C^n),
\]
so we obtain a diagram
\[
\renewcommand{\arraystretch}{1.5}
\begin{array}{ccc}\operatorname{D}(Y) &\lRa{\Phi} &\operatorname{D}^G(\mathbb C^n) \\
\bigg\uparrow && \bigg\uparrow \\
\operatorname{D}_0(Y) &\lRa{\Phi_0} &\operatorname{D}^G_0(\mathbb C^n)
\end{array}
\]
in which the vertical arrows are embeddings of categories.
Note that the Euler characteristic gives natural bilinear forms on both
sides; if $E$ and $F$ are objects of $\operatorname{D}^G(\mathbb C^n)$ and $\operatorname{D}^G_0(\mathbb C^n)$
respectively, then we can compute
\[
\operatorname{\chi}^G(E,F)=\sum_i(-1)^i \dim \operatorname{Hom}_{\operatorname{D}^G(\mathbb C^n)}(E,F[i]),
\]
since the fact that the cohomology of $F$ has finite length
implies that the Ext groups are only nonvanishing in a finite interval.
Similarly, we can compute the ordinary Euler character on the left. The
fact that $\Phi$ is an equivalence of categories commuting with the shift
functors immediately gives
\[
\operatorname{\chi}^G(\Phi(A),\Phi(B))=\operatorname{\chi}(A,B),
\]
for any objects $A$ of $\operatorname{D}(Y)$ and $B$ of $\operatorname{D}_0(Y)$.
\subsection{Equivalence of K groups}
Let $K(Y)$, $K^G(\mathbb C^n)$, $K_0(Y)$ and $K_0^G(\mathbb C^n)$ be the Grothendieck
groups of the corresponding derived categories. The equivalences of
categories from the last section immediately give isomorphisms of these
groups. The following lemma is proved in the same way as in
Gonzalez-Sprinberg and Verdier \cite[Proposition 1.4]{GV}.
\begin{lemma}
The maps that send a representation $\rho$ of $G$ to the $G$-sheaves
$\rho\otimes\mathcal O_{\mathbb C^n}$ and $\rho\otimes\mathcal O_0$ on $\mathbb C^n$ give ring
isomorphisms of the representation ring $R(G)$ with $K^G(\mathbb C^n)$ and
$K_0^G(\mathbb C^n)$ respectively.
\end{lemma}
We obtain a diagram of groups
\[
\renewcommand{\arraystretch}{1.5}
\begin{array}{ccc}
K(Y) &\lRa{\varphi} & R(G) \\
\scriptstyle{i}\bigg\uparrow\hphantom{\scriptstyle{i}} &&
\hphantom{\scriptstyle{j}}\bigg\uparrow\scriptstyle{j} \\
K_0(Y) &\lRa{\varphi} & R(G)
\end{array}
\end{array}
\]
in which the horizontal maps are isomorphisms but the vertical maps
are not. In fact, if $Q$ is the representation induced by the
inclusion $G\subset\operatorname{SL}(n,\mathbb C)$, then the map $j$ is multiplication by
\[
r=\sum_{i=0}^n (-1)^i \Lambda^i Q\in R(G).
\]
This formula is obtained by considering a Koszul resolution of $\mathcal O_0$
on $M$, as in \cite[Proposition 1.4]{GV}.
For example, in the case $n=2$ one has $r=2-Q$.
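Indeed, when $n=2$ the exterior powers of $Q$ are $\Lambda^0Q=\mathbf 1$,
$\Lambda^1Q=Q$ and $\Lambda^2Q=\det Q=\mathbf 1$, the last because
$G\subset\operatorname{SL}(2,\mathbb C)$, so that
\[
r=\mathbf 1-Q+\mathbf 1=2-Q.
\]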
The above bilinear forms descend to give pairings on the Grothendieck
groups. These forms are non\-degenerate because if
$\{\rho_0,\cdots,\rho_k\}$ are the irreducible representations of $G$ then
the corresponding bases
\[
\{\rho_i\otimes\mathcal O_{\mathbb C^n}\}_{i=0}^k\subset K^G(\mathbb C^n)
\quad\text{and}\quad \{\rho_i\otimes\mathcal O_{0}\}_{i=0}^k\subset K^G_0(\mathbb C^n)
\]
are dual with respect to the pairing $\operatorname{\chi}^G(\kern.0mm\smash{-}\kern.0mm,\kern.0mm\smash{-}\kern.0mm)$. Applying
$\varphi^{-1}$ gives dual bases
\[
\{\mathcal R_i\}_{i=0}^k\subset K(Y)
\quad\text{and}\quad \{\mathcal S_i\}_{i=0}^k\subset K_0(Y)
\]
as in Ito and Nakajima \cite{IN}.
\section{Topological K~theory and physics}
With notation as in the introduction, suppose that $M$ is projective, and
further that $Y$ is non\-singular and $\Phi\colon\operatorname{D}(Y)\to\operatorname{D}^G(M)$ is an
equivalence. For example suppose that $n=2$ or $3$.
\subsection{K~theory and the orbifold Euler number}
Let $\mathcal K^*(Y)$ denote the topological complex K~theory of $Y$ and
$\mathcal K_G^*(M)$ the $G$-equi\-variant topological K~theory of $M$. There are
natural forgetful maps
\[
\alpha_Y\colon K(Y)\to \mathcal K^0(Y)
\quad\text{and}\quad \alpha_M\colon K^G(M)\to \mathcal K_G^0(M).
\]
Since $\Phi$ and its inverse $\Psi$ are defined as correspondences,
we may define correspondences
\[
\varphi\colon \mathcal K^*(Y)\to \mathcal K_G^*(M)
\quad\text{and}\quad
\psi\colon \mathcal K_G^*(M)\to\mathcal K^*(Y)
\]
compatible with the maps $\alpha$, using the functors $\otimes$, $f^*$ and $f_*$
(also written $f_!$) on topological K~theory, which extend to equi\-variant
K~theory, as usual, because they are canonical. Note that the definition and
compatibility of $f_*$ is non\-trivial; see \cite{AH} for more details. But
now the fact that $\Phi$ and $\Psi$ are mutually inverse implies that $\varphi$
and $\psi$ are mutually inverse, that is, we have a graded isomorphism
\begin{equation}
\label{Kiso}
\mathcal K^*(Y)\cong \mathcal K_G^*(M).
\end{equation}
Atiyah and Segal \cite{AS} observed that
the physicists' orbifold Euler number of $M/G$ is
the Euler characteristic of $\mathcal K_G^*(M)$, that is,
\[
e(M,G)=\dim \mathcal K_G^0(M)\otimes\mathbb Q -\dim \mathcal K_G^1(M)\otimes\mathbb Q.
\]
On the other hand, since the Chern character gives a $\mathbb Z/2$ graded
isomorphism $\mathcal K^*(Y)\otimes\mathbb Q \cong H^*(Y,\mathbb Q)$, the Euler characteristic
of $\mathcal K^*(Y)$ is just the ordinary Euler number $e(Y)$ of $Y$.
Hence the isomorphism~(\ref{Kiso}) on topological K~theory provides a natural
explanation for the physicists' Euler number conjecture
\[
e(M,G)=e(Y).
\]
This was verified in the case $n=2$ as a consequence of the original McKay
correspondence (cf.\ \cite{AS}). For $n=3$ it was proved by Roan
\cite{Roa}, even for quasi\-projective Gorenstein
orbifolds, since the numerical statement reduces to the local linear
case $M=\mathbb C^3$, $G\subset\operatorname{SL}(3,\mathbb C)$.
\subsection{An example: the Kummer surface}
One of the first interesting cases of the isomorphism~(\ref{Kiso}) is when
$M$ is an Abelian surface (topologically, a 4-torus $T^4$), $G=\mathbb Z/2$
acting by the involution $-1$ and $Y$ is a K3 surface. In this case $Y$ is
a non\-singular Kummer surface, having 16 disjoint $(-2)$-curves
$C_1,\dots,C_{16}$ coming from resolving the images in $M/G$ of the 16
$G$-fixed points $x_1,\dots,x_{16}$ in $M$. Write $V=\{x_1,\dots,x_{16}\}$
for this fixed point set.
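As a consistency check, recall the physicists' formula (cf.\ \cite{AS})
\[
e(M,G)=\frac{1}{|G|}\sum_{gh=hg}e(M^g\cap M^h).
\]
Writing $\sigma$ for the involution, the pair $(1,1)$ contributes
$e(T^4)=0$, while each of the three pairs $(1,\sigma)$, $(\sigma,1)$ and
$(\sigma,\sigma)$ contributes $e(V)=16$, so that
$e(M,G)=\frac12(0+16+16+16)=24=e(Y)$, the Euler number of a K3 surface.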
On the Abelian surface $M$ there are 32 flat line $G$-bundles,
arising from a choice of 2 $G$-actions on each of the 16 square
roots of $\mathcal O_M$. Each such flat line $G$-bundle $L(\rho)$ is
characterised by a map $\rho\colon V\to \mathbb F_2=\{0,1\}$
such that at a fixed point $x\in V$ the group $G$ acts on
the fibre $L_x$ with weight $(-1)^{\rho(x)}$.
Now the set $V$ naturally has the structure of an affine
$4$-space over $\mathbb F_2$ and the maps $\rho$ that occur are precisely the
affine linear maps, including the two constant maps corresponding to the
two actions on $\mathcal O_M$.
On the other hand, on the K3 surface $Y$ one may consider the lattice
$\mathbb Z^V\subset H^2(Y,\mathbb Z)$ spanned by $C_1,\dots,C_{16}$ and the smallest
primitive sublattice $\Lambda$ containing $\mathbb Z^V$. The elements of
$\Lambda$ give precisely the rational linear combinations of the divisors
$C_1,\dots,C_{16}$ which are themselves divisors. It is easy to see that
$\mathbb Z^V\subset\Lambda\subset(\frac12\mathbb Z)^V$ and it can also be shown that the
image of $\Lambda$ in the quotient $(\frac12\mathbb Z)^V/\mathbb Z^V\cong\mathbb F_2^V$
consists of precisely the affine linear maps on $V$ (see Barth, Peters and
Van de Ven \cite[Chapter~VIII, Proposition 5.5]{BPV}).
We claim that under the correspondence $\Psi$,
the flat line $G$-bundle $L(\rho)$ is taken to the line
bundle $\mathcal O_Y(D(\rho))$, where
\[
D(\rho)=\frac12\Bigl( \sum_i \rho(x_i)\, C_i \Bigr).
\]
To check the claim, note that $\mathcal O_M$ is taken to $\mathcal O_Y$, and that, in the
local linear McKay correspondence for $\mathbb C^2/(\mathbb Z/2)$, the irreducible
representation of weight $-1$ is taken to the line bundle $\mathcal O(\frac12 C)$,
dual to the $(-2)$-curve $C$ resolving the singularity.
\noindent
Tom Bridgeland, \\
Department of Mathematics and Statistics, \\
University of Edinburgh, King's Buildings, \\
Mayfield Road, Edinburgh EH9 3JZ, UK \\
e-mail: [email protected]
\noindent
Alastair King, \\
Department of Mathematical Sciences, \\
University of Bath, \\
Bath BA2 7AY, England \\
e-mail: [email protected]
\noindent
Miles Reid, \\
Math Inst., Univ.\ of Warwick, \\
Coventry CV4 7AL, England \\
e-mail: [email protected]
\end{document} |
\begin{document}
\baselineskip=17pt
\titlerunning{$p$-Laplace equations and geometric Sobolev inequalities}
\title{Regularity of stable solutions of $p$-Laplace equations through geometric
Sobolev type inequalities}
\author{Daniele Castorina
\and
Manel Sanch\'on}
\date{}
\maketitle
\address{D. Castorina: Departament de Matem\`atiques,
Universitat Aut\`onoma de Barcelona,
08193 Bellaterra, Spain; \email{[email protected]}
\and
M. Sanch\'on: Departament de Matem\`atica Aplicada i An\`alisi,
Universitat de Barcelona,
Gran Via 585, 08007 Barcelona, Spain; \email{[email protected]}}
\subjclass{Primary
35K57,
35B65
; Secondary
35J60
}
\begin{abstract}
In this paper we prove a
Sobolev and a Morrey type inequality involving the mean
curvature and the tangential gradient with respect to the level sets
of the function that appears in the inequalities. Then, as an
application, we establish \textit{a priori} estimates for semi-stable
solutions of $-\Delta_p u= g(u)$ in a smooth bounded domain
$\Omega\subset \mathbb{R}^n$. In particular, we obtain new
$L^r$ and $W^{1,r}$ bounds for the extremal solution
$u^\star$ when the domain is strictly convex. More precisely, we prove that
$u^\star\in L^\infty(\Omega)$ if $n\leq p+2$ and $u^\star\in
L^{\frac{np}{n-p-2}}(\Omega)\cap W^{1,p}_0(\Omega)$ if $n>p+2$.
\keywords{Geometric inequalities, mean curvature of level sets,
Schwarz symmetrization,
$p$-Laplace equations, regularity of stable solutions}
\end{abstract}
\section{Introduction}
The aim of this paper is to obtain \textit{a priori} estimates for
semi-stable solutions of $p$-Laplace equations. We will accomplish this by proving
some geometric type inequalities involving the functionals
\begin{equation}\label{Ipq}
I_{p,q}(v;\Omega):=\left( \int_{\Omega}
\Big(\frac{1}{p'}|\nabla_{T,v} |\nabla v|^{p/q}|\Big)^{q}
+ |H_v|^q |\nabla v|^p \, dx \right)^{1/p},\quad p,q\geq 1
\end{equation}
where $\Omega$ is a smooth bounded domain of $\mathbb{R}^n$ with
$n\geq 2$ and $v\in C_0^\infty(\overline{\Omega})$. Here, and in the
rest of the paper, $H_v (x)$ denotes the mean curvature at $x$ of
the hypersurface $\{y\in\Omega:|v(y)|=|v(x)|\}$ (which is smooth at
points $x\in\Omega$ satisfying $\nabla v(x)\neq 0$), and $\nabla_{T,v}$
is the tangential gradient along a level set of $|v|$. We will prove a Morrey type
inequality when $n<p+q$ and a Sobolev inequality when $n>p+q$ (see
Theorem~\ref{Theorem:Sobolev} below).
Then, as an application of these inequalities, we establish
$L^r$ and $W^{1,r}$ \textit{a priori} estimates for semi-stable solutions of
the reaction-diffusion problem
\begin{equation}\label{problem}
\left\{
\begin{array}{rcll}
-\Delta_p u &=& g(u) &\textrm{in } \Omega, \\
u&>& 0 &\textrm{in } \Omega, \\
u &=& 0 &\textrm{on } \partial \Omega.
\end{array}
\right.
\end{equation}
Here, the diffusion is modeled
by the $p$-Laplace operator $\Delta_p$
(recall that $\Delta_p u:= {\rm div}(|\nabla u|^{p-2}\nabla u)$)
with $p>1$, while the reaction term is driven by any positive $C^1$
nonlinearity $g$.
As we will see, these estimates will lead to new $L^r$ and $W^{1,r}$ bounds
for the extremal solution $u^\star$ of \eqref{problem} when $g(u)=\lambda f(u)$
and the domain $\Omega$ is strictly convex. More precisely, we prove that
$u^\star\in L^\infty(\Omega)$ if $n\leq p+2$ and $u^\star\in
L^{\frac{np}{n-p-2}}(\Omega)\cap W^{1,p}_0(\Omega)$ if $n>p+2$.
\subsection{Geometric Sobolev inequalities}
Before establishing our Sobolev and Morrey type inequalities we will
show that the functional $I_{p,q}$ defined in \eqref{Ipq} decreases
(up to a universal multiplicative constant) under Schwarz
symmetrization. Given a Lipschitz continuous function $v$ and its
Schwarz symmetrization $v^*$, it is well known that
$$
\int_{B_R} |v^*|^r\ dx=\int_\Omega |v|^r\ dx\quad \textrm{for all }r\in[1,+\infty]
$$
and
$$
\int_{B_R} |\nabla v^*|^r\ dx \leq \int_\Omega |\nabla v|^r\ dx
\quad \textrm{for all }r\in[1,\infty).
$$
Our first result establishes that $I_{p,q}(v^*;B_R)\leq C I_{p,q}(v;\Omega)$ for
some universal constant $C$ depending only on $n$, $p$, and $q$.
\begin{thm}\label{thm:Ipq}
Let $\Omega$ be a smooth bounded domain of $\mathbb{R}^n$ with $n\geq 2$ and
$B_R$ the ball centered at the origin and with radius $R=(|\Omega|/|B_1|)^{1/n}$.
Let $v\in C^\infty_0(\overline{\Omega})$ and $v^*$ its Schwarz symmetrization.
Let $I_{p,q}$ be the functional defined in
\eqref{Ipq} with $p,q \geq 1$. If $n>q+1$ then there exists a universal constant $C$ depending
only on $n$, $p$, and $q$, such that
\begin{equation}\label{comp_integrals}
\left( \int_{B_R}\frac{1}{|x|^q} |\nabla v^*|^p \, dx \right)^{1/p}
=
I_{p,q}(v^*;B_R)
\leq
C I_{p,q}(v;\Omega).
\end{equation}
\end{thm}
Note that the Schwarz symmetrization of $v$ is a radial function,
and hence, its level sets are spheres. In particular, the mean
curvature $H_{v^*}(x)=1/|x|$ and the tangential gradient
$\nabla_{T,{v^*}} |\nabla v^*|^{p/q}=0$. This explains the equality
in \eqref{comp_integrals}.
A related result was proved by Trudinger \cite{Trudinger97} when
$q=1$ for the class of mean convex functions (\textit{i.e.}, functions
for which the mean curvature of the level sets is nonnegative). More precisely,
he proved Theorem~\ref{thm:Ipq} replacing the functional $I_{p,q}$ by
\begin{equation}\label{tilde:Ipq}
\tilde{I}_{p,q}(v;\Omega):=\left( \int_{\Omega}
|H_v|^q |\nabla v|^p \, dx \right)^{1/p}
\end{equation}
and considering the Schwarz symmetrization of $v$ with respect to the
perimeter instead of the classical one used here (see Definition~\ref{Schwarz-symm}
below). In order to define
this symmetrization (with respect to the perimeter) it is essential
to know that the mean curvature $H_v$ of the level sets of $|v|$ is
nonnegative. Then using an Aleksandrov-Fenchel inequality for mean
convex hypersurfaces (see \cite{Trudinger94}) he proved Theorem~\ref{thm:Ipq}
for this class of functions when $q=1$.
We prove Theorem~\ref{thm:Ipq} using two ingredients. The first one
is the classical isoperimetric inequality:
\begin{equation}\label{isop:ineq}
n|B_1|^{1/n}|D|^{(n-1)/n}\leq |\partial D|
\end{equation}
for any smooth bounded domain $D$ of $\mathbb{R}^n$. The second one is
a geometric Sobolev inequality, due to Michael and Simon \cite{MS}
and to Allard \cite{A}, on compact $(n-1)$-hypersurfaces $M$ without
boundary which involves the mean curvature $H$ of $M$:
for every $q\in [1, n-1)$,
there exists a constant $A$ depending only on $n$ and $q$ such that
\begin{equation}\label{Sob:mean}
\left( \int_M |\phi|^{q^\star} d\sigma \right)^{1/q^\star} \leq
A
\left( \int_M |\nabla \phi|^q + |H\phi|^q \ d\sigma \right)^{1/q}
\end{equation}
for every $\phi\in C^\infty(M)$, where $q^\star = (n-1)q/(n-1-q)$ and
$d\sigma$ denotes the area element in $M$.
Using the classical isoperimetric inequality \eqref{isop:ineq} and
the geometric Sobolev inequality \eqref{Sob:mean} with $M=\{x\in \Omega:|v(x)|=t\}$
and $\phi=|\nabla v|^{(p-1)/q}$ we will prove Theorem~\ref{thm:Ipq}
with the explicit constant $C=A^\frac{q}{p} |\partial B_1|^\frac{q}{(n-1)p}$, where $A$ is the
universal constant in \eqref{Sob:mean}.
From Theorem~\ref{thm:Ipq} and well-known 1-dimensional weighted
Sobolev inequalities it is easy to prove Morrey and Sobolev geometric
inequalities involving the functional $I_{p,q}$. Indeed, by
Theorem~\ref{thm:Ipq} and since Schwarz symmetrization preserves the
$L^r$ norm, it is sufficient to prove the existence of a positive
constant $\overline{C}$ independent of $v^*$ such that
$$
\|v^*\|_{L^r(B_R)}\leq \overline{C} I_{p,q}(v^*;B_R).
$$
Using this argument we prove the following geometric inequalities.
\begin{thm}\label{Theorem:Sobolev}
Let $\Omega$ be a smooth bounded domain of $\mathbb{R}^n$ with $n\geq 2$ and
$v\in C^\infty_0(\overline{\Omega})$.
Let $I_{p,q}$ be the functional defined in \eqref{Ipq} with $p,q \geq 1$ and
$$
p_{q}^\star:= \frac{n p}{n - (p+q)}.
$$
Assume $n>q+1$. The following assertions hold:
\begin{enumerate}
\item[$(a)$] If $n<p+q$ then
\begin{equation}\label{Morrey}
\|v\|_{L^\infty(\Omega)}
\leq C_1|\Omega|^{\frac{p+q-n}{np}}I_{p,q}(v;\Omega)
\end{equation}
for some constant $C_1$ depending only on $n$, $p$, and $q$.
\item[$(b)$] If $n>p+q$, then
\begin{equation}\label{Sobolev}
\|v\|_{L^r(\Omega)}
\leq C_2|\Omega|^{\frac{1}{r}-\frac{1}{p_q^\star}}
I_{p,q}(v;\Omega)\quad \textrm{for every }1\leq r \leq p_{q}^\star,
\end{equation}
where $C_2$ is a constant depending only on
$n$, $p$, $q$, and $r$.
\item[$(c)$] If $n = p+q$, then
\begin{equation}\label{Moser-Trudinger}
\int_\Omega\exp\left\{\left(\frac{|v|}{C_3 I_{p,q}(v;\Omega)}\right)^{p'}\right\}\ dx
\leq \frac{n}{n-1}|\Omega|,\quad \textrm{where }p'=p/(p-1),
\end{equation}
for some positive constant $C_3$ depending only on $n$ and $p$.
\end{enumerate}
\end{thm}
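Note that formally setting $q=0$ in $p_q^\star$ recovers the classical Sobolev
exponent $np/(n-p)$, while $q=2$, the case used for semi-stable solutions below,
gives
$$
p_2^\star=\frac{np}{n-(p+2)},
$$
the exponent appearing in Theorems~\ref{Theorem}~and~\ref{Theorem2}.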
Cabr\'e and the second author \cite{CS} recently proved
Theorem~\ref{Theorem:Sobolev} under the assumption $q\geq p$ using a
different method (without the use of Schwarz symmetrization). More
precisely, they proved the theorem replacing the functional
$I_{p,q}(v;\Omega)$ by the one defined in \eqref{tilde:Ipq},
$\tilde{I}_{p,q}(v;\Omega)$. Therefore, our
geometric inequalities are only new in the range $1\leq q<p$.
\begin{open}
Is Theorem~\ref{Theorem:Sobolev} true for the range $1\leq q<p$ and
replacing the functional $I_{p,q}(v;\Omega)$ by the one defined in
\eqref{tilde:Ipq}, $\tilde{I}_{p,q}(v;\Omega)$?
\end{open}
This question has a positive answer for the class of mean convex functions:
Trudinger \cite{Trudinger97} proved this result for this class of functions
when $q=1$, and it can easily be extended to every $q\geq 1$. However, to our
knowledge, for general functions (without mean convex level sets) it is an
open problem.
\subsection{Regularity of semi-stable solutions}
The second part of the paper deals with \textit{a priori} estimates for semi-stable
solutions of problem \eqref{problem}. Remember that a regular solution $u\in
C_0^1(\overline{\Omega})$ of \eqref{problem} is said to be \textit{semi-stable}
if the second variation of the associated energy functional at $u$ is nonnegative
definite, \textit{i.e.},
\begin{equation}\label{semi-stab1}
\int_\Omega |\nabla u|^{p-2} \left\{|\nabla \phi|^2+(p-2)
\left(\nabla \phi\cdot\frac{\nabla u}{|\nabla u|}\right)^2\right\} - g'(u) \phi^2\ dx \geq 0
\end{equation}
for every $\phi \in H_0$, where $H_0$ denotes the space of admissible functions
(see Definition~\ref{H0} below). The class of semi-stable solutions
includes local minimizers of the energy functional as well as minimal
and extremal solutions of \eqref{problem} when $g(u)=\lambda f(u)$.
Using an appropriate test function in \eqref{semi-stab1} we prove
the following \textit{a priori} estimates for semi-stable solutions.
This result extends the ones in \cite{Cabre09} and \cite{CS} for the
Laplacian case ($p=2$) due to Cabr\'e and the second author.
\begin{thm}\label{Theorem}
Let $g$ be any $C^\infty$ function and $\Omega\subset\mathbb{R}^n$ any smooth
bounded domain. Let $u\in C^1_0(\overline{\Omega})$ be a semi-stable
solution of \eqref{problem}, \textit{i.e.}, a solution satisfying \eqref{semi-stab1}.
The following assertions hold:
$(a)$ If $n\leq p+2$ then there exists a constant $C$ depending
only on $n$ and $p$ such that
\begin{equation}\label{L-infinty}
\|u\|_{L^\infty(\Omega)}\leq s+\frac{C}{s^{2/p}}|\Omega|^\frac{p+2-n}{np}
\left(\int_{\{u\leq s\}} |\nabla u|^{p+2}\, dx\right)^{1/p}\quad \textrm{for
all }s>0.
\end{equation}
$(b)$ If $n>p+2$ then there exists a constant $C$ depending
only on $n$ and $p$ such that
\begin{equation}\label{Lq:estimate}
\left(\int_{\{u>s\}} \Big(|u|-s\Big)^{\frac{np}{n-(p+2)}}\ dx\right)^{\frac{n-(p+2)}{np}}
\leq \frac{C}{s^{2/p}}
\left(\int_{\{u\leq s\}} |\nabla u|^{p+2} \ dx\right)^{1/p}
\end{equation}
for all $s>0$. Moreover, there exists a constant $C$ depending
only on $n$, $p$, and $r$ such that
\begin{equation}\label{grad:estimate}
\int_\Omega |\nabla u|^r\ dx\leq C\left(|\Omega|+\int_\Omega|u|^\frac{np}{n-(p+2)}\ dx
+\|g(u)\|_{L^1(\Omega)}\right)
\end{equation}
for all $1\leq r<r_1:=\frac{np^2}{(1+p)n-p-2}$.
\end{thm}
To prove \eqref{L-infinty} and \eqref{Lq:estimate} we use the semi-stability condition
\eqref{semi-stab1} with the test function $\phi=|\nabla u|\eta$ to obtain
\begin{equation}\label{ineq:key}
\int_{\Omega}\left( \frac{4}{p^2}|\nabla_{T,u} |\nabla u|^{p/2}|^{2}
+ \frac{n-1}{p-1}H_u^2 |\nabla u|^{p} \right) \eta^2 \, dx
\leq \int_{\Omega} |\nabla u|^{p}|\nabla \eta|^2\, dx
\end{equation}
for every Lipschitz function $\eta$ in $\overline{\Omega}$ with
$\eta|_{\partial\Omega}=0$. Then, taking $\eta=T_s u = \min\{s,u\}$,
we obtain \eqref{L-infinty} and \eqref{Lq:estimate} when $n\neq p+2$
by using the Morrey and Sobolev inequalities established in
Theorem~\ref{Theorem:Sobolev} with $q=2$.
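Let us sketch (with constants suppressed) how the choice $\eta=T_su$ produces
the right-hand sides of \eqref{L-infinty} and \eqref{Lq:estimate}: since
$\nabla T_su=\nabla u$ a.e.\ on $\{u<s\}$ and $\nabla T_su=0$ on $\{u>s\}$,
$$
\int_\Omega|\nabla u|^p|\nabla (T_su)|^2\,dx=\int_{\{u\leq s\}}|\nabla u|^{p+2}\,dx,
$$
while $T_su\equiv s$ on $\{u>s\}$, so the left-hand side of \eqref{ineq:key}
controls $s^2$ times the integrals defining $I_{p,2}(u;\cdot)^p$ over $\{u>s\}$;
Theorem~\ref{Theorem:Sobolev} with $q=2$ then yields the norms appearing on the
left-hand sides of \eqref{L-infinty} and \eqref{Lq:estimate}.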
The critical case $n=p+2$ is more involved. In order to get
\eqref{L-infinty} in this case, we take another
explicit test function $\eta=\eta(u)$ in \eqref{ineq:key} and use the geometric
Sobolev inequality \eqref{Sob:mean}. The
gradient estimate established in \eqref{grad:estimate} will follow by
using a technique introduced by B\'enilan \textit{et al.}
\cite{BBGGPV95} to get the regularity of entropy solutions for
$p$-Laplace equations with $L^1$ data (see Proposition
\ref{Prop:bootstrap}).
The rest of the introduction deals with the regularity of extremal solutions.
Let us recall the problem and some known results on this topic. Consider
\stepcounter{equation}
$$
\left\{
\begin{array}{rcll}
-\Delta_p u&=&\lambda f(u)&\textrm{in }\Omega,\\
u&=&0&\textrm{on }\partial \Omega,
\end{array}
\right. \eqno{(1.15)_{\lambda}}
$$
where $\lambda$ is a positive parameter and $f$ is a $C^1$ positive
increasing function satisfying
\begin{equation}\label{p-superlinear}
\lim_{t\rightarrow+\infty}\frac{f(t)}{t^{p-1}}=+\infty.
\end{equation}
Cabr\'e and the second author \cite{CS07} proved the existence of
an extremal parameter $\lambda^\star\in(0,\infty)$ such that problem
$(1.15)_{\lambda}$ admits a minimal regular solution $u_\lambda\in C^1_0(\overline{\Omega})$
for $\lambda\in(0,\lambda^\star)$ and admits no regular solution
for $\lambda>\lambda^\star$.
Moreover, every minimal solution $u_\lambda$ is semi-stable for $\lambda\in
(0,\lambda^\star)$.
For the Laplacian case ($p=2$), the limit of minimal solutions
$$
u^\star:=\lim_{\lambda\uparrow\lambda^\star}u_\lambda
$$
is a weak solution of the extremal problem $(1.15)_{\lambda^\star}$ and it is known as
the extremal solution. Nedev \cite{Nedev} proved, in the case of convex
nonlinearities, that $u^\star\in L^\infty(\Omega)$ if $n\leq 3$ and
$u^\star\in L^r(\Omega)$ for all $1\leq r<n/(n-4)$ if $n\geq 4$.
Recently, Cabr\'e~\cite{Cabre09}, Cabr\'e and the second author \cite{CS},
and Nedev \cite{Nedev01} proved,
in the case of convex domains and general nonlinearities, that $u^\star\in L^\infty(\Omega)$
if $n\leq 4$ and $u^\star\in L^{\frac{2n}{n-4}}(\Omega)\cap H^1_0(\Omega)$
if $n\geq 5$.
For arbitrary $p>1$ it is unknown if the limit of minimal solutions
$u^\star$ is a (weak or entropy) solution of $(1.15)_{\lambda^\star}$. In the affirmative
case, it is called the \textit{extremal solution of $(1.15)_{\lambda^\star}$}.
However, in \cite{S} it is proved that the limit of minimal solutions $u^\star$ is
a weak solution (in the distributional sense) of $(1.15)_{\lambda^\star}$ whenever $p\geq 2$ and
$f$ satisfies the additional condition:
\begin{equation}\label{convex:assump}
\textrm{there exists }T\geq0 \textrm{ such that }(f(t)-f(0))^{1/(p-1)}
\textrm{ is convex for all }t\geq T.
\end{equation}
Moreover,
$$
u^\star\in L^\infty(\Omega)\qquad\textrm{ if }n<p+p'
$$
and
$$
u^\star\in L^r(\Omega),\textrm{ for all }r<\tilde{r}_0:=(p-1)\frac{n}{n-(p+p')},\quad
\textrm{if }n\geq p+p'.
$$
This extends previous results of Nedev \cite{Nedev} for the Laplacian case ($p=2$)
and convex nonlinearities.
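For orientation, the model exponential nonlinearity $f(t)=e^t$ satisfies
\eqref{convex:assump} for every $p\geq 2$: writing $m=1/(p-1)\in(0,1]$, a direct
computation gives
$$
\frac{d^2}{dt^2}(e^t-1)^{m}=m\,e^t(e^t-1)^{m-2}\bigl(me^t-1\bigr)\geq 0
\qquad\textrm{for }t\geq \log(p-1),
$$
so one may take $T=\log(p-1)$ in \eqref{convex:assump} (and $T=0$ when $p=2$).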
Our next result improves the $L^r$ estimates in \cite{Nedev,S} for strictly convex domains. We also
prove that $u^\star$ belongs to the energy class $W^{1,p}_0(\Omega)$ independently
of the dimension, extending an unpublished result of Nedev~\cite{Nedev01} for $p=2$ to every
$p\geq 2$ (see also \cite{CS}).
\begin{thm}\label{Theorem2}
Let $f$ be an increasing positive $C^1$ function satisfying
\eqref{p-superlinear}. Assume that $\Omega$ is a smooth strictly convex
domain of $\mathbb{R}^n$. Let $u_\lambda\in
C^1_0(\overline{\Omega})$ be the minimal solution of $(1.15)_{\lambda}$.
There exists a constant $C$ independent of $\lambda$ such that:
\begin{enumerate}
\item[$(a)$] If $n\leq p+2$ then
$
\|u_\lambda\|_{L^\infty(\Omega)}\leq C \|f(u_\lambda)\|_{L^1(\Omega)}^{1/(p-1)}.
$
\item[$(b)$] If $n> p+2$ then
$
\|u_\lambda\|_{L^\frac{np}{n-p-2}(\Omega)}\leq C
\|f(u_\lambda)\|_{L^1(\Omega)}^{1/(p-1)}.
$
Moreover
$
\|u_\lambda\|_{W^{1,p}_0(\Omega)}\leq C'
$
where $C'$ is a constant depending only on $n$, $p$, $\Omega$, $f$ and
$\|f(u_\lambda)\|_{L^1(\Omega)}$.
\end{enumerate}
Assume, in addition, $p\geq 2$ and that \eqref{convex:assump} holds. Then
\begin{enumerate}
\item[$(i)$] If $n\leq p+2$ then $u^\star\in L^\infty(\Omega)$. In particular, $u^\star
\in C^1_0(\overline{\Omega})$.
\item[$(ii)$] If $n>p+2$ then $u^\star\in L^\frac{np}{n-p-2}(\Omega)\cap W^{1,p}_0(\Omega)$.
\end{enumerate}
\end{thm}
\begin{rem}
If $f(u_\lambda)$ is bounded in $L^1(\Omega)$ by a constant independent of $\lambda$,
then parts $(a)$ and $(b)$ will lead automatically to the assertions $(i)$ and $(ii)$
stated in the theorem (without the requirement that $p\geq 2$ and \eqref{convex:assump} hold true).
However, as we said before, the estimate $f(u^\star)\in L^1(\Omega)$ is unknown in
the general case, \textit{i.e.}, for arbitrary positive and increasing nonlinearities
$f$ satisfying \eqref{p-superlinear} and arbitrary $p>1$.
\end{rem}
\begin{open}
Is it true that $f(u^\star)\in L^1(\Omega)$ for arbitrary positive
and increasing nonlinearities $f$ satisfying \eqref{p-superlinear}?
\end{open}
Under assumptions $p\geq 2$ and \eqref{convex:assump} it is proved in \cite{S} that
$f(u^\star)\in L^r(\Omega)$ for all $1\leq r<n/(n-p')$ when $n\geq p'$ and $f(u^\star)
\in L^\infty(\Omega)$ if $n<p'$. In particular, one has $f(u^\star)\in L^1(\Omega)$
independently of the dimension $n$ and the parameter $p>1$. As a consequence,
assertions $(i)$ and $(ii)$ follow immediately from parts $(a)$ and $(b)$ of the
theorem.
To prove the $L^r$ \textit{a priori} estimates stated in parts $(a)$
and $(b)$ we proceed in three steps. First, we use the strict convexity of
the domain $\Omega$ to prove that
$$
\{x\in\Omega:{\rm dist}(x,\partial\Omega)<\varepsilon\}
\subset \{x\in\Omega:u_\lambda(x)<s\}
$$
for a suitable $s$. This is done using a moving plane procedure for
$p$-Laplace equations (see Proposition~\ref{Prop:1} below). Then,
we prove that the Morrey and Sobolev type inequalities stated in
Theorem~\ref{Theorem:Sobolev} for smooth functions, also hold for
regular solutions of \eqref{problem} when $1\leq q\leq 2$. Finally,
taking a test function $\eta$ related to ${\rm dist}(\cdot,\partial\Omega)$
in \eqref{ineq:key} and proceeding as in the proof
of Theorem~\ref{Theorem} we will obtain the $L^r$ \textit{a priori}
estimates established in the theorem.
The energy estimate established in parts $(b)$ and $(ii)$ of Theorem~\ref{Theorem2}
follows by extending the arguments of Nedev \cite{Nedev01} for the Laplacian
case (see also Theorem~2.9 in \cite{CS}). First, using a Poho\v{z}aev
identity we obtain
\begin{equation}\label{key:Poho}
\int_\Omega|\nabla u_\lambda|^p\ dx
\leq
\frac{1}{p'}\int_{\partial\Omega}|\nabla u_\lambda|^p\ x\cdot\nu\ d\sigma,
\qquad\textrm{for all }p>1\textrm{ and }\lambda\in(0,\lambda^\star),
\end{equation}
where $d\sigma$ denotes the area element in $\partial\Omega$ and
$\nu$ is the outward unit normal to $\Omega$. Then, using the strict
convexity of the domain (as in the $L^r$ estimates) and standard regularity
estimates for $-\Delta_p u=\lambda f(u_\lambda(x))$ in a neighborhood of
the boundary, we are able to control the right hand side of \eqref{key:Poho} by
a constant whose dependence on $\lambda$ is given by a function of
$\|f(u_\lambda)\|_{L^1(\Omega)}$.
\begin{rem}
Let us compare our regularity results with the sharp ones proved
by Cabr\'e, Capella, and the second author in \cite{CCS09} when $\Omega$
is the unit ball $B_1$ of $\mathbb{R}^n$. In the radial case,
the extremal solution $u^\star$ of $(1.15)_{\lambda^\star}$ is bounded if the dimension
$n< p+\frac{4p}{p-1}$. Moreover, if $n\geq p+\frac{4p}{p-1}$ then
$u^\star\in W^{1,r}_0(B_1)$ for all $1\leq r<\bar{r}_1$, where
$$
\bar{r}_1:=\frac{np}{n-2\sqrt{\frac{n-1}{p-1}}-2}.
$$
In particular, $u^\star\in L^r(B_1)$ for all $1\leq r<\bar{r}_0$, where
$$
\bar{r}_0:=\frac{np}{n-2\sqrt{\frac{n-1}{p-1}}-p-2}.
$$
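These two exponents are consistent with the critical Sobolev embedding: a direct
computation shows that
$$
\frac{n\bar{r}_1}{n-\bar{r}_1}
=\frac{np}{n-2\sqrt{\frac{n-1}{p-1}}-p-2}
=\bar{r}_0,
$$
so the $L^r$ regularity for $r<\bar{r}_0$ follows from the $W^{1,r}_0$ regularity
for $r<\bar{r}_1$.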
It can be shown that these regularity results are sharp by taking
the exponential and power nonlinearities.
Note that the $L^r(\Omega)$-estimate established in Theorem~\ref{Theorem2}
differs from the sharp exponent $\bar{r}_0$ defined above by the term
$2\sqrt{\frac{n-1}{p-1}}$. Moreover, observe that $\bar{r}_1$ is larger
than $p$ and tends to it as $n$ goes to infinity. In particular, the best
expected regularity independent of the dimension $n$ for the extremal
solution $u^\star$ is $W^{1,p}_0(\Omega)$, which is the one we obtain
in Theorem~\ref{Theorem2}.
\end{rem}
\subsection{Outline of the paper}
The paper is organized as follows. In section \ref{section2} we
prove Theorem~\ref{thm:Ipq} and the geometric type inequalities
stated in Theorem~\ref{Theorem:Sobolev}. In section~\ref{section3}
we prove that Theorem~\ref{Theorem:Sobolev} holds for solutions of
\eqref{problem} when $1\leq q\leq 2$. Moreover, we give boundary
estimates when the domain is strictly convex. In section~\ref{section5}, we
present the semi-stability condition \eqref{semi-stab1} and the
space of admissible functions $H_0$. The rest of the section deals
with the regularity of semi-stable solutions proving
Theorems~\ref{Theorem}~and~\ref{Theorem2}.
\section{Geometric Hardy-Sobolev type inequalities}\label{section2}
In this section we prove Theorems \ref{thm:Ipq} and \ref{Theorem:Sobolev}.
As we said in the introduction, the geometric inequalities established in
Theorem~\ref{Theorem:Sobolev} are new for the range $1\leq q<p$ since the case
$q\geq p$ was proved in \cite{CS}. However, we will give the proof in all
cases using Schwarz symmetrization, thereby giving an alternative proof for the
known range of parameters $q\geq p$.
We start by recalling the definition of the Schwarz symmetrization of a
compact set and of a Lipschitz continuous function.
\begin{defin}\label{Schwarz-symm}
We define the \textit{Schwarz symmetrization of a compact set $D\subset\mathbb{R}^n$}
as
$$
D^*:=\left\{
\begin{array}{lll}
B_R(0) \textrm{ with } R=(|D|/|B_1|)^{1/n}&\textrm{if}&D\neq \emptyset,\\
\emptyset&\textrm{if}&D= \emptyset.
\end{array}
\right.
$$
Let $v$ be a Lipschitz continuous function in $\overline{\Omega}$ and
$\Omega_t:=\{x\in\Omega:|v(x)|\geq t\}$. We define
the \textit{Schwarz symmetrization of $v$} as
$$
v^*(x):= \sup\{t\in \mathbb{R}: x\in \Omega_t^*\}.
$$
Equivalently, we can define the Schwarz symmetrization of $v$ as
$$
v^*(x)=\inf\{t\geq 0:V(t)<|B_1||x|^n\},
$$
where $V(t):=|\Omega_t|=|\{x\in\Omega:|v(x)|> t\}|$ denotes the distribution
function of $v$.
\end{defin}
The first ingredient in the proof of Theorem \ref{thm:Ipq}
is the isoperimetric inequality for functions $v$ in
$W^{1,1}_0(\Omega)$:
\begin{equation}\label{talenti}
n|B_1|^{1/n} V(t)^{(n-1)/n}\leq P(t):=\frac{d}{dt}\int_{\{|v|\leq t\}}|\nabla v|\ dx
\qquad\text{for a.e. } t>0,
\end{equation}
where $P(t)$ stands for the perimeter in the
sense of De Giorgi (the total variation of the characteristic function of
$\{x\in \Omega: |v(x)|>t\}$).
The second ingredient is the following Sobolev inequality on compact hypersurfaces
without boundary due to Michael and Simon \cite{MS} and to Allard \cite{A}.
\begin{thm}[\cite{A,MS}]\label{ThmMS}
Let $M\subset \mathbb{R}^{n}$ be a $C^\infty$ immersed $(n-1)$-dimensional
compact hypersurface without boundary and $\phi\in C^\infty(M)$.
If $q\in [1, n-1)$, then there exists a constant $A$ depending only on
$n$ and $q$ such that
\begin{equation}\label{MS}
\left( \int_M |\phi|^{q^\star} d\sigma \right)^{1/q^\star} \leq
A
\left( \int_M |\nabla \phi|^q + |H\phi|^q \ d\sigma \right)^{1/q},
\end{equation}
where $H$ is the mean curvature of $M$, $d\sigma$ denotes the area element in $M$,
and $q^\star = \frac{(n-1)q}{n-1-q}$.
\end{thm}
As we said in the introduction it is well known that Schwarz symmetrization
preserves the $L^r$-norm and decreases the $W^{1,r}$-norm.
Let us prove that it also decreases (up to a multiplicative constant)
the functional $I_{p,q}$ defined in \eqref{Ipq} using the isoperimetric inequality
\eqref{talenti} and the geometric inequality \eqref{MS} applied to $M=M_t=\{x\in\Omega: |v(x)|=t\}$
and $\phi=|\nabla v|^{(p-1)/q}$.
\begin{proof}[Proof of Theorem {\rm\ref{thm:Ipq}}]
Let $v\in C_0^\infty(\overline{\Omega})$, $p\geq 1$, and $1\leq q<n-1$.
By Sard's theorem, almost every $t\in(0,\|v\|_{L^\infty(\Omega)})$ is a
regular value of $|v|$. By definition, if $t$ is a regular value of $|v|$,
then $\left|\nabla v(x)\right|>0$ for all $x\in\Omega$ such that
$|v(x)|=t$. Therefore, $M_t:=\{x\in\Omega: |v(x)|=t\}$ is a
$C^{\infty}$ immersed $(n-1)$-dimensional compact hypersurface of
$\mathbb{R}^n$ without boundary for every regular value $t$.
Applying inequality \eqref{MS} to $M=M_t$ and
$\phi=|\nabla v|^{(p-1)/q}$ we obtain
\begin{equation}\label{MSv}
\left( \int_{M_t} |\nabla v|^{(p-1)\frac{q^\star}{q}} \,
d\sigma \right)^{q/q^\star}
\leq
A^q \int_{M_t} \Big|\nabla_{T,v} |\nabla v|^{\frac{p-1}{q}}\Big|^q
+ |H_v|^q |\nabla v|^{p-1} \, d \sigma
\end{equation}
for a.e. $t\in(0,\|v\|_{L^\infty(\Omega)})$, where $H_v$ denotes the mean curvature of
$M_t$, $d\sigma$ is the area element in $M_t$, $A$ is the constant in \eqref{MS} which depends
only on $n$ and $q$, and
$$
q^\star:=\frac{(n-1)q}{n-1-q}.
$$
Recall that $V(t)$, being a nonincreasing function, is differentiable almost
everywhere. Thanks to the coarea formula and the fact that almost every
$t\in(0,\|v\|_{L^\infty(\Omega)})$ is a regular value of $|v|$, we have
\begin{equation*}
- V'(t) = \int_{M_t} \frac{1}{|\nabla v|} \, d\sigma
\qquad\textrm{and}\qquad
P(t) = \int_{M_t} d\sigma\qquad\textrm{for a.e. }t\in(0,\|v\|_{L^\infty(\Omega)}).
\end{equation*}
Therefore, applying Jensen's inequality and then using the isoperimetric inequality
\eqref{talenti}, we obtain
\begin{equation}\label{new1}
\left( \int_{M_t} |\nabla v|^{(p-1)\frac{q^\star}{q}+1} \,
\frac{d\sigma}{|\nabla v|} \right)^{\frac{q}{q^\star}}
\geq
\frac{P(t)^{p-1+\frac{q}{q^\star}}}{\left(- V'(t)\right)^{p-1}}
\geq
\frac{(A_1 V(t)^{\frac{n-1}{n}})^{p-1+\frac{q}{q^\star}}}
{\left(- V'(t) \right)^{p-1}}
\end{equation}
for a.e. $t\in(0,\|v\|_{L^\infty(\Omega)})$, where $A_1:=n|B_1|^{1/n}$.
Note that for radial functions the inequalities in \eqref{new1} are
equalities. Therefore, since the Schwarz symmetrization $v^*$ of $v$
is a radial function and satisfies \eqref{MSv} with equality
and with constant $A=|\partial B_1|^{-1/(n-1)}$, we obtain
\begin{equation}\label{new2}
\begin{array}{lll}
\displaystyle \left( \int_{\{|v^*|=t\}} |\nabla v^*|^{(p-1)\frac{q^\star}{q}} \,
d\sigma \right)^{q/q^\star}
&=&\displaystyle
|\partial B_1|^{-\frac{q}{n-1}}\int_{\{v^*=t\}} |H_{v^*}|^q |\nabla v^*|^{p-1} \, d \sigma\\
&=&\displaystyle
\frac{(A_1 V(t)^{\frac{n-1}{n}})^{p-1+\frac{q}{q^\star}}}
{\left(- V'(t) \right)^{p-1}}
\end{array}
\end{equation}
for a.e. $t\in(0,\|v\|_{L^\infty(\Omega)})$.
Here, we used that $V(t)=|\{|v|>t\}|=|\{|v^*|>t\}|$ for a.e. $t\in(0,\|v\|_{L^\infty(\Omega)})$.
Therefore, from \eqref{MSv}, \eqref{new1}, and \eqref{new2}, we obtain
$$
|\partial B_1|^{-\frac{q}{n-1}}\int_{\{v^*=t\}} |H_{v^*}|^q |\nabla v^*|^{p-1} \, d \sigma
\leq
A^q \int_{M_t} \Big|\nabla_{T,v} |\nabla v|^{\frac{p-1}{q}}\Big|^q
+ |H_v|^q |\nabla v|^{p-1} \, d \sigma,
$$
for a.e. $t\in(0,\|v\|_{L^\infty(\Omega)})$.
Integrating the previous inequality with respect to $t$ on
\linebreak $(0,\|v\|_{L^\infty(\Omega)})$ and using the coarea formula
we obtain inequality \eqref{comp_integrals}, with the explicit constant
$C=A^\frac{q}{p} |\partial B_1|^\frac{q}{(n-1)p}$, proving the result.
\end{proof}
\begin{rem}
We obtained the explicit admissible constant
$C=A^\frac{q}{p} |\partial B_1|^\frac{q}{(n-1)p}$
in \eqref{comp_integrals}, where $A$ is the universal constant appearing
in \eqref{MS}.
\end{rem}
We prove Theorem \ref{Theorem:Sobolev} using Theorem~\ref{thm:Ipq} and known
results on one dimensional weighted Sobolev inequalities.
\begin{proof}[Proof of Theorem~{\rm\ref{Theorem:Sobolev}}]
Let $v\in C_0^\infty(\overline{\Omega})$ and $v^*$ its Schwarz symmetrization.
Recall that $v^*$ is defined in $B_R$ with $R=(|\Omega|/|B_1|)^{1/n}$.
(a) Assume $1+q<n<p+q$. Using H\"older inequality we obtain
\begin{equation}\label{pointwise}
\begin{array}{lll}
v^*(s)&=&\displaystyle \int_s^R |(v^*)'(\tau)|\ d\tau\\
&\leq&\displaystyle \left(\int_0^R |(v^*)'(\tau)|^p\tau^{-q}\tau^{n-1}\ d\tau\right)^{1/p}
\left(\int_s^R \tau^\frac{1+q-n}{p-1}\ d\tau\right)^{1/p'}
\end{array}
\end{equation}
for a.e. $s\in(0,R)$. In particular,
$$
v^*(s)\leq |\partial B_1|^{-1/p} \left(\frac{p-1}{p+q-n}\right)^{1/p'} \left(\frac{|\Omega|}{|B_1|}\right)^\frac{p+q-n}{np}I_{p,q}(v^*;B_R)
$$
for a.e. $s\in(0,R)$.
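The constant in the last display comes from an elementary computation: since
$n<p+q$,
$$
\left(\int_s^R \tau^\frac{1+q-n}{p-1}\ d\tau\right)^{1/p'}
\leq\left(\frac{p-1}{p+q-n}\,R^\frac{p+q-n}{p-1}\right)^{1/p'}
=\left(\frac{p-1}{p+q-n}\right)^{1/p'}\left(\frac{|\Omega|}{|B_1|}\right)^\frac{p+q-n}{np},
$$
while, in polar coordinates,
$$
\int_0^R |(v^*)'(\tau)|^p\tau^{-q}\tau^{n-1}\ d\tau
=\frac{1}{|\partial B_1|}\int_{B_R}\frac{|\nabla v^*|^p}{|x|^q}\ dx
=\frac{1}{|\partial B_1|}\,I_{p,q}(v^*;B_R)^p.
$$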
We conclude this case, by Theorem \ref{thm:Ipq}, noting that $\|v\|_{L^\infty(\Omega)}=v^*(0)$.
(b) Assume $n>p+q$. We use the following 1-dimensional weighted Sobolev inequality:
\begin{equation}\label{radial:Sob}
\left(\int_0^R|\varphi(s)|^{p_q^\star} s^{n-1}\ ds\right)^{1/p_q^\star}
\leq C(n,p,q)\left(\int_0^R s^{-q}|\varphi'(s)|^p s^{n-1}\ ds\right)^{1/p}
\end{equation}
with optimal constant
\begin{equation}\label{ctant:C}
C(n,p,q):=\left(\frac{p-1}{n-(p+q)}\right)^{1/p'}n^{-1/p_q^\star}
\left[\frac{\Gamma\left(\frac{np}{p+q}\right)}
{\Gamma\left(\frac{n}{p+q}\right)\Gamma\left(1+\frac{n(p-1)}{p+q}\right)}\right]^\frac{p+q}{np}
\end{equation}
stated in \cite{Trudinger97}. Applying inequality \eqref{radial:Sob} to $\varphi=v^*$
and noting that the $L^{p_q^\star}$-norm is preserved by Schwarz symmetrization, we obtain
$$
|\partial B_1|^{-1/p_q^\star}\left(\int_{\Omega}|v|^{p_q^\star}\ dx\right)^{1/p_q^\star}
\leq C(n,p,q)|\partial B_1|^{-1/p}\left(\int_{B_R} |x|^{-q}|\nabla v^*|^p \ dx\right)^{1/p}.
$$
Using Theorem \ref{thm:Ipq} again we prove \eqref{Sobolev} for $r=p_q^\star$. The remaining
cases, $1\leq r<p_q^\star$, now follow easily from H\"older inequality.
(c) Assume $n=p+q$. From \eqref{pointwise} and Theorem~\ref{thm:Ipq}
we obtain
$$
\begin{array}{lll}
v^*(s)
&\leq&\displaystyle
\left(\int_0^R |(v^*)'(\tau)|^p\tau^{-q}\tau^{n-1}\ d\tau\right)^{1/p}
\left(\int_s^R \tau^{-1}\ d\tau\right)^{1/p'}\\
&\leq&\displaystyle
|\partial B_1|^{-1/p} C I_{p,q}(v;\Omega)
\left(\ln\left(\frac{R}{s}\right)\right)^{1/p'}
\end{array}
$$
for a.e. $s\in(0,R)$. Equivalently
$$
\exp\left\{\left(\frac{v^*(s)}{|\partial B_1|^{-1/p}C I_{p,q}(v;\Omega)}\right)^{p'}\right\}
|\partial B_1|s^{n-1}
\leq
\frac{R}{s}|\partial B_1|s^{n-1}
$$
for a.e. $s\in(0,R)$. Integrating the previous inequality with respect to $s$ in $(0,R)$
we obtain
$$
\int_{B_R}\exp\left\{\left(\frac{v^*}{|\partial B_1|^{-1/p}C I_{p,q}(v;\Omega)}\right)^{p'}\right\}\ dx
\leq
|\partial B_1|\frac{R^n}{n-1}=\frac{n}{n-1}|\Omega|.
$$
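Here the right-hand side is computed by integrating $(R/s)|\partial B_1|s^{n-1}$
and using $|\partial B_1|=n|B_1|$ together with $|\Omega|=|B_1|R^n$:
$$
\int_0^R \frac{R}{s}\,|\partial B_1|\,s^{n-1}\ ds
=\frac{|\partial B_1|R^n}{n-1}
=\frac{n|B_1|R^n}{n-1}=\frac{n}{n-1}|\Omega|.
$$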
We conclude the proof noting that the integral in inequality \eqref{Moser-Trudinger} is
preserved under Schwarz symmetrization.
\end{proof}
\begin{rem}\label{rmk:ctans}
Note that we obtained explicit admissible constants $C_1$, $C_2$, and $C_3$
in the inequalities of Theorem \ref{Theorem:Sobolev}. More precisely,
we obtained
$$
C_1=|\partial B_1|^{-\frac{1}{p}} \left(\frac{p-1}{p+q-n}\right)^{\frac{1}{p'}}
\left(\frac{|\Omega|}{|B_1|}\right)^\frac{p+q-n}{np}
A^\frac{q}{p} |\partial B_1|^\frac{q}{(n-1)p},
$$
$$
C_2= C(n,p,q)|\partial B_1|^{\frac{1}{p_q^\star}-\frac{1}{p}}
A^\frac{q}{p} |\partial B_1|^\frac{q}{(n-1)p},
$$
and
$$
C_3=|\partial B_1|^{-\frac{1}{p}} A^{\frac{n-p}{p}}|\partial B_1|^{\frac{n-p}{(n-1)p}},
$$
where $A$ is the universal constant appearing in \eqref{MS} and $C(n,p,q)$ is
defined in \eqref{ctant:C}.
All the constants $C_i$ depend only on $n$, $p$, and $q$. However, the best constant $A$
in \eqref{MS} is unknown (even for mean convex hypersurfaces). Behind this Sobolev
inequality there is the following geometric isoperimetric inequality
\begin{equation}\label{isop:mean}
|M|^{\frac{n-2}{n-1}} \leq A_2\int_M |H(x)|\ d\sigma.
\end{equation}
Here, $M\subset \mathbb{R}^{n}$ is a $C^\infty$ immersed $(n-1)$-dimensional
compact hypersurface without boundary and $H$ is the mean curvature of $M$ as in
Theorem \ref{ThmMS}. The best constant in \eqref{isop:mean} is also unknown
even for mean convex hypersurfaces.
\end{rem}
\section{Properties of solutions of $p$-Laplace equations}\label{section3}
In this section, we first establish an \textit{a priori} $L^\infty$
estimate in a neighborhood of the boundary $\partial\Omega$ for any
regular solution $u$ of \eqref{problem} when the domain $\Omega$ is
strictly convex. More precisely, we prove that there exist positive
constants $\varepsilon$ and $\gamma$, depending only on the domain
$\Omega$, such that
\begin{equation}\label{eqqq}
\Vert u\Vert_{L^\infty(\Omega_\varepsilon)}\leq \frac{1}{\gamma} \Vert u\Vert_{L^1 (\Omega)},
\ \ \text{ where } \Omega_\varepsilon:=\{x\in\Omega\, :\, \text{\rm dist}(x,\partial\Omega)
<\varepsilon\}.
\end{equation}
Then, we establish that the geometric inequalities of
Theorem~\ref{Theorem:Sobolev} still hold for solutions of \eqref{problem} in the
smaller range $1\leq q\leq 2$. In the next section, these two
ingredients will allow us to obtain \textit{a priori} estimates for
semi-stable solutions.
Let $u \in W_{0}^{1,p}(\Omega)$ be a weak solution (\textit{i.e.}, a solution in
the distributional sense) of the problem
\begin{equation}\label{prob}
\left\{
\begin{array}{rcll}
-\Delta_p u &=& g(u) &\textrm{in } \Omega, \\
u&>& 0 &\textrm{in } \Omega, \\
u &=& 0 &\textrm{on } \partial \Omega,
\end{array}
\right.
\end{equation}
where $\Omega$ is a bounded smooth domain in $\mathbb{R}^n$, with $n
\geq 2$, and $g$ is any positive smooth nonlinearity.
We say that $u \in W_{0}^{1,p}(\Omega)$ is a \textit{regular solution} of \eqref{prob}
if it satisfies the equation in the distributional sense and $g(u)\in L^\infty(\Omega)$.
By well-known regularity results for degenerate
elliptic equations, every regular solution $u$ belongs to $C^{1,\alpha}
(\Omega)$ for some $\alpha\in(0,1]$ (see \cite{DB,T}). Moreover,
$u\in C^1(\overline{\Omega})$ (see \cite{Lie}). This is the best
regularity one can expect for solutions of $p$-Laplace equations.
Therefore, equation \eqref{prob} is always understood in the distributional
sense.
We prove the boundary \textit{a priori} estimate \eqref{eqqq} through a moving
plane procedure for the $p$-Laplacian which is developed in \cite{DS}.
\begin{proposition}\label{Prop:1}
Let $\Omega$ be a smooth bounded domain of $\mathbb{R}^{n}$ and $g$ any positive
smooth function. Let $u$ be any positive regular solution of \eqref{prob}.
If $\Omega$ is strictly convex, then there exist positive constants $\varepsilon$
and $\gamma$ depending only on the domain $\Omega$ such that
for every $x\in\Omega$ with $\text{\rm dist}(x,\partial\Omega)<\varepsilon$,
there exists a set $I_x\subset\Omega$ with the following properties:
$$
|I_x|\geq\gamma \qquad\text{and}\qquad
u(x) \leq u(y) \ \text{ for all } y\in I_x.
$$
As a consequence,
\begin{equation}\label{L1boundary}
\Vert u\Vert_{L^\infty(\Omega_\varepsilon)}\leq \frac{1}{\gamma} \Vert u\Vert_{L^1 (\Omega)},
\ \ \text{ where } \Omega_\varepsilon:=\{x\in\Omega\, :\, \text{\rm dist}(x,\partial\Omega)
<\varepsilon\}.
\end{equation}
\end{proposition}
\begin{proof}
First let us observe that from the regularity of the solution
$u$ up to the boundary $\partial \Omega$ and the fact that $\Delta_p u
\leq 0$, we can apply the generalized Hopf boundary lemma \cite{V}
to see that the normal derivative $\frac{\partial u}{\partial \nu} < 0$ on
$\partial \Omega$.
Thus, if we let $Z_u := \{ x \in \Omega : \nabla u (x) = 0 \}$ be the
critical set of $u$, we have that $Z_u \cap \partial \Omega = \emptyset$.
By the compactness of both sets, there exists $\varepsilon_0 > 0$ such that
$Z_u \cap \Omega_\varepsilon = \emptyset$ for any $\varepsilon \leq \varepsilon_0$.
We will now prove that this neighborhood of the boundary is in fact
independent of the solution $u$.
In order to begin the moving plane argument we need some notation:
let $e \in S^{n-1}$ be any direction and for $\lambda \in \mathbb{R}$ let us
consider the hyperplane
$$
T = T_{\lambda,e} = \{ x \in \mathbb{R}^n : x \cdot
e = \lambda \}
$$
and the corresponding cap
$$
\Sigma = \Sigma_{\lambda,e}
= \{ x \in \Omega : x \cdot e < \lambda \}.
$$
Set
$$
a(e) = \inf_{x \in \Omega} x \cdot e
$$
and for any $x \in \Omega$, let $x' = x_{\lambda,e}$ be its reflection with respect to
the hyperplane $T$, \textit{i.e.},
$$
x' = x + 2(\lambda - x \cdot e)\ e.
$$
For any $\lambda > a (e)$ the cap $$\Sigma' = \{ x \in \Omega :
x' \in \Sigma \}$$ is the (non-empty) reflected cap of $\Sigma$ with
respect to $T$.
Furthermore, consider the function $v(x) = u (x') = u
(x_{\lambda,e})$, which is just the reflection of $u$ with respect to
the same hyperplane.
By the boundedness of $\Omega$, for $\lambda - a(e)$ small, we have
that the corresponding reflected cap $\Sigma'$ is contained in
$\Omega$. Moreover, by the strict convexity of $\Omega$, there exists
$\lambda_0 = \lambda_0 (\Omega)$ (independent of $e$) such that $\Sigma'$
remains in $\Omega$ for any $\lambda \leq \lambda_0$.
Let us then compare the function $u$ and its reflection $v$ for such
values of $\lambda$ in the cap $\Sigma$. First of all, both functions
solve the same equation since $\Delta_p$ is invariant under
reflection; secondly, on the hyperplane $T$ the functions coincide,
whereas for any $x \in \partial \Sigma \cap \partial \Omega$ we have that $u
(x) = 0$ and that $v(x) = u (x') > 0$, since the reflection $x' \in
\Omega$.
Hence we can see that:
\begin{equation*}
\Delta_p (u) + g (u) = \Delta_p (v) + g (v) \text{ in } \Sigma, \quad
u \leq v \text{ on } \partial \Sigma.
\end{equation*}
Again by the boundedness of $\Omega$, if $\lambda - a(e)$ is small,
the measure of the cap $\Sigma$ will be small. Therefore, from the
Comparison Principle in small domains (see \cite{DS}) we have that
$u \leq v \text{ in } \Sigma$. Moreover, by Strong Comparison
Principle and Hopf Lemma, we see that $u \leq v \text{ in }
\Sigma_{\lambda,e}$ for any $a(e) < \lambda \leq \lambda_0$.
In particular, this shows that $u(x)$ is nondecreasing in the direction
$e$ for all $x \in \Sigma$.
Now, fix $x_0 \in \partial \Omega$ and let $e=\nu (x_0)$ be the unit normal
to $\partial \Omega$ at $x_0$. By the convexity assumption
$T_{a(\nu(x_0)),\nu(x_0)} \cap \partial\Omega = \{ x_0 \}$.
If we let $\theta \in S^{n-1}$ be another direction close to the
outer normal $\nu (x_0)$, the reflection of the caps
$\Sigma_{\lambda,\theta}$ with respect to the hyperplane
$T_{\lambda,\theta}$ (which is close to the tangent one) would still be
contained in $\Omega$ thanks to its strict convexity.
The above argument can therefore be applied to the new direction
$\theta$ as well. In particular, we obtain a neighborhood
$\Theta$ of $\nu (x_0)$ in $S^{n-1}$ such that $u (x)$ is
nondecreasing in every direction $\theta \in \Theta$ for any $x$
such that $x \cdot \theta < \frac{\lambda_0}{2}$.
After possibly shrinking the neighborhood $\Theta$, we may assume
that $$|x \cdot (\theta - \nu (x_0))| < \lambda_0 /8$$ for any $x \in
\Sigma_{\lambda_0,\theta}$ and $\theta \in \Theta$.
Moreover, noticing that $$x \cdot \theta = x \cdot (\theta - \nu (x_0)) + x
\cdot \nu (x_0)$$ and $$\frac{\lambda_0}{2} = \frac{\lambda_0}{8} +
\frac{3 \lambda_0}{8} > x \cdot \theta > \frac{\lambda_0}{8} -
\frac{\lambda_0}{8} = 0$$ it is then easy to see that $u$ is
nondecreasing in any direction $\theta \in \Theta$ on $\Sigma_0 = \{
x \in \Omega : \frac{\lambda_0}{8} < x \cdot \nu (x_0) <\frac{3\lambda_0}{8}
\}$.
Finally, let us choose $\varepsilon = \frac{\lambda_0}{8}$. Fix any point $x \in
\Omega_\varepsilon$ and let $x_0$ be its projection onto $\partial \Omega$.
{F}rom the above arguments we see that $$u(x) \leq u( x_0 - \varepsilon \nu
(x_0)) \leq u(y)$$ for any $y \in I_x$, where $I_x \subset \Sigma_0$
is a truncated cone with vertex at $x_0 - \varepsilon \nu (x_0)$, opening
given by the directions in $\Theta$, and height $\frac{\lambda_0}{4}$.
Hence, we have obtained that there exists a positive constant $\gamma = \gamma
(\Omega, \varepsilon)$ such that $|I_x| \geq \gamma$ and $u(x) \leq u(y)$
for any $y \in I_x$.
Finally, choosing $x_\varepsilon$ as a point where $u$ attains its maximum in
$\overline{\Omega_\varepsilon}$, we get
\begin{equation*}
\Vert u \Vert_{L^\infty(\Omega_\varepsilon)} = u (x_\varepsilon) \leq
\frac{1}{\gamma} \int_{I_{x_{\varepsilon}}} u (y) \, dy \leq
\frac{1}{\gamma} \Vert u\Vert_{L^1 (\Omega)}
\end{equation*}
which proves \eqref{L1boundary}.
\end{proof}
We will now prove that the inequalities in Theorem \ref{Theorem:Sobolev}
are also valid for a positive solution $u$ of \eqref{prob} in the
smaller range $1\leq q \leq 2$.
To do this, we will construct an approximation of $u$
through smooth functions and see that, thanks to strong uniform
estimates on this approximation, we can pass to the limit in all of
the inequalities.
\begin{proposition}\label{Prop:2}
Let $\Omega$ be a smooth bounded domain of $\mathbb{R}^{n}$ and $g$
any positive smooth function. Let $u$ be any positive regular
solution of \eqref{prob}. If $1\leq q \leq 2$, then inequalities in
Theorem {\rm\ref{Theorem:Sobolev}} hold for $v=u$. Given $s>0$, the same
holds true also for $v = u-s$ and $\Omega$ replaced by $\Omega_s
:= \{ x\in \Omega : u > s\}$.
\end{proposition}
\begin{proof}
Let $Z_u=\{x\in\Omega : \nabla u(x)=0\}$. Recall that by standard elliptic regularity $u \in C^\infty (\Omega
\setminus Z_u)$ and that $|Z_u| = 0$ by \cite{DS}. Therefore, $u$ is
smooth almost everywhere in $\Omega$. Let $x \in \Omega
\setminus Z_u$ and observe that for the mean curvature $H_u$ of the
level set passing through $x$ we have the following explicit
expression
\begin{equation}\label{exp1}
-(n-1) H_u = {\rm div} \left(\frac{\nabla u}{|\nabla u|}\right) =
\frac{\Delta u}{|\nabla u|} - \frac{\langle D^2 u \nabla u, \nabla u
\rangle }{|\nabla u|^3}
\end{equation}
whereas for the tangential gradient term we have
\begin{equation}\label{exp2}
\nabla_{T,u} |\nabla u| = \frac{D^2 u \nabla u}{|\nabla u|} -
\frac{\langle D^2 u \nabla u, \nabla u \rangle \nabla u }{|\nabla u|^3},
\end{equation}
where all the terms in these expressions are evaluated at $x$. Hence,
there exists a positive constant $C = C (n,p,q)$ such that
\begin{equation}\label{hsdequadro}
\left(\frac{1}{p'}|\nabla_{T,u} |\nabla u|^{\frac{p}{q}}|\right)^{q} +
|H_u|^q |\nabla u|^{p} \leq C |D^2 u|^{q} |\nabla u|^{p - q}\quad
\textrm{for a.e. }x\in\Omega.
\end{equation}
{F}rom \cite{DS} we recall the following important estimate: for any
$1 \leq q \leq 2$ there holds
\begin{equation}\label{sciunzi}
\int_\Omega |D^2 u|^{q} |\nabla u|^{p - q} \, dx < \infty.
\end{equation}
Thanks to \eqref{hsdequadro} and \eqref{sciunzi}, all of the
integrals in the geometric Hardy-Sobolev inequalities are well
defined for any $1 \leq q \leq 2$.
However, since the solution $u$ is not smooth around $Z_u$, we need to
regularize $u$ in a neighborhood of the critical set in order to apply the
inequalities of Theorem {\rm\ref{Theorem:Sobolev}}. We will now
describe an approximation argument due to Canino, Le, and Sciunzi~\cite{CLS} for
the $p(\cdot)$-Laplacian (in our case $p(x) \equiv p$ constant).
\begin{lem}[\cite{CLS}]
Let $D\subset \Omega$ be an open set, $1\leq q\leq 2$, and $\varepsilon\in(0,1)$.
Let $u\in C^1(\overline{\Omega})$ be a solution of \eqref{problem} and
$h:=g(u)$.
If $h_\varepsilon\in C^\infty(\overline{D})$ is any sequence converging to $h$
in $C^1(\overline{D})$ as $\varepsilon\downarrow 0$, then the unique solution
$v_\varepsilon$ of the following regularized problem
\begin{equation}\label{aprox:problem}
\left\{
\begin{array}{rcll}
-{\rm div} \left( (\varepsilon^2 + |\nabla v_\varepsilon|^2)^{\frac{p-2}{2}} \nabla v_\varepsilon \right)
&=& h_\varepsilon(x) &\textrm{in } D, \\
v_\varepsilon &=& u &\textrm{on } \partial D.
\end{array}
\right.
\end{equation}
tends to $u$ strongly in $W^{1,p}(D)$. Moreover, there exists a constant
$C$ independent of $\varepsilon$ such that
$$
\int_D |D^2 v_\varepsilon|^{q} ( \varepsilon^2 + |\nabla
v_\varepsilon|^2)^{\frac{p-q}{2}} \, dx \leq C
$$
and
\begin{equation}\label{limve}
\lim_{\varepsilon \to 0} \int_D |D^2 v_\varepsilon|^{q} (
\varepsilon^2 + |\nabla v_\varepsilon|^2)^{\frac{p-q}{2}} \, dx =
\int_D |D^2 u|^{q} |\nabla u|^{p - q} \, dx.
\end{equation}
\end{lem}
Let $v_\varepsilon\in C^{\infty} (D)$ be the unique solution of
\eqref{aprox:problem} and let us consider a smooth cut-off function
$\eta$ with compact support contained in $\Omega$ and such that
$\eta \equiv 1$ on $D$. We can construct a smooth regularization
$u_\varepsilon$ of $u$ defining $u_\varepsilon := (1-\eta) u + \eta
v_\varepsilon$. We can then apply Theorem \ref{Theorem:Sobolev} to
each $u_\varepsilon$ to get the appropriate inequality (a), (b),
or (c). From \cite{DB,Lie} and standard elliptic regularity we
know that the regularization $u_\varepsilon$ will converge to $u$,
as $\varepsilon \downarrow 0$, both in $C^1 (\overline{\Omega})$ and
$C^2(\overline{\Omega} \setminus Z_u)$. Hence we can easily pass to
the limit as $\varepsilon \downarrow 0$ in the left hand side of
\eqref{Morrey} and \eqref{Sobolev}.
In order to see that the remaining terms $I_{p,q}
(u_\varepsilon; \Omega)$, which involve the tangential gradient and the mean
curvature, also behave well under this approximation, we argue as
follows. Splitting the domain $\Omega$ and recalling that
$u_\varepsilon \equiv v_\varepsilon$ in $D$ we have that: $$I_{p,q}
(u_\varepsilon;\Omega) = I_{p,q} (u_\varepsilon; D) + I_{p,q}
(u_\varepsilon; \Omega \setminus D) = I_{p,q} (v_\varepsilon; D) +
I_{p,q} (u_\varepsilon; \Omega \setminus D).$$ Clearly, from the
$C^2$ convergence we have that $I_{p,q} (u_\varepsilon; \Omega
\setminus D) \to I_{p,q} (u; \Omega \setminus D)$ as $\varepsilon
\downarrow 0$. Therefore we can concentrate on the convergence of
$I_{p,q}(v_\varepsilon;D)$.
{F}rom \eqref{exp1}, \eqref{exp2}, and through a simple expansion of
$(\varepsilon^2 + |\nabla v_\varepsilon|^2)^{\frac{p - q}{2}}$
around $\varepsilon =0$, we see that for a sufficiently small
$\varepsilon_0 > 0$ there exists a constant $K = K (n,p,q,
\varepsilon_0) > 0$ such that for any $\varepsilon \leq
\varepsilon_0$ we have
\begin{equation}\label{hsdequadrove}
\left(\frac{1}{p'} |\nabla_{T,v_\varepsilon}|\nabla v_\varepsilon|^{\frac{p}{q}}|
\right)^{q} + |H_{v_\varepsilon}|^q |\nabla v_\varepsilon|^{p} \leq
K \, |D^2 v_\varepsilon|^{q} (\varepsilon^2 + |\nabla
v_\varepsilon|^2)^{\frac{p - q}{2}}.
\end{equation}
Moreover, by the fact that $v_\varepsilon \to u$ in $C^2 (D
\setminus Z_u)$ and $|Z_u|=0$, almost everywhere in $D$ we have
\begin{equation}\label{aeve}
\lim_{\varepsilon \to 0}\left(\frac{1}{p'} |\nabla_{T,v_\varepsilon} |\nabla
v_\varepsilon|^{\frac{p}{q}}| \right)^{q} + |H_{v_\varepsilon}|^q
|\nabla v_\varepsilon|^{p} = \left(\frac{1}{p'} |\nabla_{T,u} |\nabla
u|^{\frac{p}{q}}| \right)^{q} + |H_{u}|^q |\nabla u|^{p}.
\end{equation}
Now, thanks to \eqref{limve}, \eqref{hsdequadrove},
and \eqref{aeve}, by the dominated convergence theorem we see that:
$$
\lim_{\varepsilon \to 0} \int_D \left(\frac{1}{p'} |\nabla_{T,v_\varepsilon} |\nabla
v_\varepsilon|^{\frac{p}{q}}| \right)^{q} + |H_{v_\varepsilon}|^q
|\nabla v_\varepsilon|^{p} \, dx
$$
$$
= \int_D \left(\frac{1}{p'} |\nabla_{T,u} |\nabla
u|^{\frac{p}{q}}| \right)^{q} + |H_{u}|^q |\nabla u|^{p} \, dx.
$$
Thus, the assertions of Theorem~\ref{Theorem:Sobolev} hold for
$v=u$.
To conclude the proof let us fix any $s >0$ and consider $v = u-s$
on $\Omega_s = \{x\in\Omega : u > s \}$. It is clear that the integrands in the
inequalities remain unchanged in this case, so the only problem
comes from the fact $\Omega_s$ might not be smooth. If this is the
case, let us consider two sequences $\varepsilon_n \to 0$ and $s_n
\to s$, with the corresponding regularizations of $v$ given by $v_n
:= v_{\varepsilon_n} = u_{\varepsilon_n} - s_n$. Thanks to the
smoothness of any $v_n$ and Sard Lemma, we can choose each $s_n$ as
a regular value of $v_n$, so that the level set $\{ v_n > 0 \} = \{
u_n > s_n \}$ is smooth. Moreover, from the $C^1$ convergence, it is
clear that for the characteristic functions we have $\chi_{\{ u_n >
s_n \}} \to \chi_{\{ u > s \}}$. Hence we can conclude the proof
using the same dominated convergence argument as above.\end{proof}
\section{Regularity of stable solutions. Proof of Theorems \ref{Theorem}
and \ref{Theorem2}}\label{section5}
We are now ready to establish the $L^r$ and $W^{1,r}$ \textit{a priori}
estimates of semi-stable solutions to $p$-Laplace equations, proving
Theorems~\ref{Theorem} and~\ref{Theorem2}.
Before proving our regularity results, let us recall some known
facts about the linearized operator associated with \eqref{problem} and
about semi-stable solutions.
\subsection{Linearized operator and semi-stable solutions}\label{section4}
This subsection deals with the linearized operator at any regular semi-stable
solution $u \in C_{0}^{1} (\overline{\Omega})$ of
\begin{equation}\label{prob2}
\left\{
\begin{array}{rcll}
-\Delta_p u &=& g(u) &\textrm{in } \Omega, \\
u&>& 0 &\textrm{in } \Omega, \\
u &=& 0 &\textrm{on } \partial \Omega,
\end{array}
\right.
\end{equation}
where $\Omega$ is a bounded smooth domain in $\mathbb{R}^n$,
with $n \geq 2$, and $g$ is any positive $C^1$ nonlinearity.
The linearized operator $L_u$ associated to \eqref{prob2} at $u$ is defined
by duality as
$$
\begin{array}{l}
L_u(v,\phi):=\displaystyle \hspace{-0.2cm}\int_\Omega |\nabla u|^{p-2}\left\{\nabla v\cdot\nabla\phi
+(p-2)\left(\nabla v\cdot\frac{\nabla u}{|\nabla u|}\right)
\left(\nabla\phi\cdot\frac{\nabla u}{|\nabla u|}\right)\right\} dx\\
\displaystyle \hspace{2cm}- \int_\Omega g'(u)v \phi\ dx
\end{array}
$$
for all $(v,\phi)\in H_0\times H_0$, where the Hilbert space $H_0$ is defined
according to \cite{DS} as follows.
\begin{defin}\label{H0}
Let $u \in C_{0}^{1} (\overline{\Omega})$ be a regular semi-stable solution
of \eqref{prob2}.
We introduce the following weighted $L^2$-norm of the gradient
$$
|\phi|:=\left(\int_\Omega \rho |\nabla \phi|^2\ dx\right)^{1/2}
\quad \textrm{where }\rho:=|\nabla u|^{p-2}.
$$
According to \cite{DS}, the space
$$
H^1_\rho(\Omega):=\{\phi \in L^2(\Omega) \hbox{ weakly differentiable}:\:|\phi|<+\infty\}
$$
is a Hilbert space and is the completion of $C^\infty(\Omega)$ with respect to the
$|\cdot|$-norm.
We define the Hilbert space $H_0$ of admissible test functions as
$$
H_0:=
\left\{
\begin{array}{lll}
\{\phi \in H_0^1(\Omega):\: |\phi|<+\infty \}&\textrm{if}&1<p\leq 2\\
\\
\textrm{the closure of }C_0^\infty(\Omega) \textrm{ in }H^1_\rho(\Omega)&\textrm{if}&p>2.
\end{array}
\right.
$$
\end{defin}
Note that for $1<p\leq 2$, $H_0$ is a subspace of $H_0^1(\Omega)$ and since
$$
\int_\Omega |\nabla \phi|^2 \leq \|\nabla u\|_{L^\infty(\Omega)}^{2-p}|\phi|^2,
$$
we see that $(H_0,|\cdot|)$ is a Hilbert space. For $p>2$, the
weight $\rho=|\nabla u|^{p-2}$ is in $L^\infty(\Omega)$ and
satisfies $\rho^{-1} \in L^1(\Omega)$, as shown in \cite{DS}.
Now, thanks to the above definition, the operator $L_u$ is well defined
for $\phi \in H_0$ and, therefore, the semistability of the solution $u$ reads as
\begin{equation}\label{semistab}
L_u (\phi,\phi) = \int_\Omega |\nabla u|^{p-2} \left\{|\nabla \phi|^2
+(p-2)\left(\nabla \phi\cdot\frac{\nabla u}{|\nabla u|}\right)^2\right\}
- g'(u) \phi^2\ dx \geq 0,
\end{equation}
for every $\phi \in H_0$.
On the one hand, considering $\phi = |\nabla u| \eta$ as a test function
in the semistability condition \eqref{semistab} for $u$, we obtain
\begin{equation}\label{StZu}
\int_{\Omega}\left[ (p-1) |\nabla u|^{p-2} |\nabla_{T,u} |\nabla u||^{2}
+ B_u^2 |\nabla u|^{p} \right] \eta^2 \, dx
\leq (p-1) \int_{\Omega} |\nabla u|^{p} |\nabla \eta|^2 \, dx
\end{equation}
for any Lipschitz continuous function $\eta$ with compact support.
Here, $B_u^2$ denotes the squared norm of the second fundamental form
of the level set of $u$ passing through $x$ (\textit{i.e.}, the sum of the
squares of its principal curvatures). The fact that $\phi=\eta
|\nabla u|$ is an admissible test function derives from the estimate
\eqref{sciunzi}, whereas the computations behind \eqref{StZu} are
done in \cite{FSV} (see Theorem~2.5 in \cite{FSV}).
On the other hand, noting that $(n-1) H_u^{2} \leq B_u^2$ and
$$
|\nabla u|^{p-2} |\nabla_{T,u} |\nabla u||^{2} = \frac{4}{p^2} |\nabla_{T,u} |\nabla u|^{\frac{p}{2}}|^{2},
$$
we obtain the key inequality to prove our regularity results for
semi-stable solutions
\begin{equation}\label{StZu2}
\int_{\Omega}\left( \frac{4}{p^2}|\nabla_{T,u} |\nabla u|^{p/2}|^{2}
+ \frac{n-1}{p-1}H_u^2 |\nabla u|^{p} \right) \eta^2 \, dx
\leq \int_{\Omega} |\nabla u|^{p} |\nabla \eta|^2 \, dx
\end{equation}
for any Lipschitz continuous function $\eta$ with compact support.
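The identity used in passing from \eqref{StZu} to \eqref{StZu2} is a direct
consequence of the chain rule: on $\Omega\setminus Z_u$,
$$
\nabla_{T,u} |\nabla u|^{\frac{p}{2}}
= \frac{p}{2}\,|\nabla u|^{\frac{p}{2}-1}\,\nabla_{T,u} |\nabla u|,
$$
and squaring both sides yields
$|\nabla_{T,u} |\nabla u|^{p/2}|^{2}
= \frac{p^2}{4}\,|\nabla u|^{p-2}\,|\nabla_{T,u} |\nabla u||^{2}$.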
\subsection{\textit{A priori} estimates of stable solutions.
Proof of Theorem~\ref{Theorem}}\label{subsection5:1}
In order to prove the gradient estimate \eqref{grad:estimate}
established in Theorem~{\rm \ref{Theorem}}~(b) we will use the
following result. Its proof is based on a technique introduced by
B\'enilan \textit{et al.} \cite{BBGGPV95} to obtain the regularity
of entropy solutions for $p$-Laplace equations with $L^1$ data.
\begin{proposition}\label{Prop:bootstrap}
Assume $n\geq 3$ and $h\in L^1(\Omega)$. Let $u$ be the entropy solution of
\begin{equation}\label{linear}
\left\{
\begin{array}{rcll}
-\Delta_p u&=&h(x)&\textrm{in }\Omega,\\
u&=&0&\textrm{on }\partial \Omega.
\end{array}
\right.
\end{equation}
Let $r_0\geq (p-1)n/(n-p)$. If $\int_\Omega |u|^{r_0}\ dx<+\infty$, then the
following \textit{a priori} estimate holds:
$$
\int_\Omega |\nabla u|^r\ dx
\leq
r|\Omega|
+\left(\frac{r_1}{r}-1\right)^{-1}\left(\int_\Omega |u|^{r_0}\ dx+\|h\|_{L^1(\Omega)}\right)
$$
for all $r< r_1:=pr_0/(r_0+1)$.
\end{proposition}
\begin{rem}
B\'enilan \textit{et al.} \cite{BBGGPV95} proved the existence and
uniqueness of entropy solutions to problem \eqref{linear}. Moreover,
they proved that $|\nabla u|^{p-1}\in L^r(\Omega)$ for all $1\leq
r<n/(n-1)$ and $|u|^{p-1}\in L^r(\Omega)$ for all $1\leq r<n/(n-p)$.
Proposition~\ref{Prop:bootstrap} establishes an improvement of the
previous gradient estimate knowing an \textit{a priori} estimate of
$\int_\Omega |u|^{r_0} dx$ for some $r_0>(p-1)n/(n-p)$.
\end{rem}
\begin{proof}[Proof of Proposition {\rm\ref{Prop:bootstrap}}]
Multiplying \eqref{linear} by $T_s u=\max\{-s,$ $\min\{s,u\}\}$ we obtain
$$
\int_{\{|u|\leq s\}}|\nabla u|^p\ dx=\int_\Omega h(x)T_su\ dx\leq s\|h\|_{L^1(\Omega)}.
$$
Let $t=s^{(r_0+1)/p}$. {F}rom the previous inequality, recalling that
$V(s)=|\{x\in\Omega:|u|>s\}|$, we deduce
$$
\begin{array}{lll}
\displaystyle s^{r_0}|\{|\nabla u|>t\}|&\leq&\displaystyle
s^{r_0}\int_{\{|\nabla u|>t\}\cap\{|u|\leq s\}}\left(\frac{|\nabla u|}{t}\right)^p dx
+s^{r_0}\int_{\{|u|>s\}}\ dx\\
\\
&\leq &\displaystyle\|h\|_{L^1(\Omega)} +s^{r_0}V(s)\quad\textrm{for a.e. }s>0.
\end{array}
$$
In particular
\begin{equation}\label{lalarito}
t^{\frac{pr_0}{r_0+1}}|\{|\nabla u|>t\}|
\leq
\|h\|_{L^1(\Omega)}+\sup_{\tau>0}\Big\{\tau^{r_0}V(\tau)\Big\}\
\quad\textrm{for a.e. }t>0.
\end{equation}
Moreover, since
$$
\tau^{r_0}V(\tau)
\leq
\tau^{r_0}\int_{\{|u|>\tau\}}\left(\frac{|u|}{\tau}\right)^{r_0}\ dx
\leq\int_\Omega|u|^{r_0}\ dx \quad\textrm{for a.e. }\tau>0,
$$
we have $\sup_{\tau>0}\Big\{\tau^{r_0}V(\tau)\Big\}\leq \int_\Omega|u|^{r_0}\ dx$.
Let $r< r_1:=pr_0/(r_0+1)$. From \eqref{lalarito} and the previous inequality,
we have
$$
\begin{array}{lll}
\displaystyle \int_\Omega |\nabla u|^r\ dx
&=&
\displaystyle r\int_0^\infty t^{r-1}|\{|\nabla u|>t\}|\ dt\\
&\leq&
\displaystyle
r|\Omega|+r\left(\int_\Omega|u|^{r_0}\ dx
+
\|h\|_{L^1(\Omega)}\right)
\int_1^\infty t^{r-1}t^{-\frac{p{r_0}}{{r_0}+1}}\ dt
\end{array}
$$
proving the proposition.
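Indeed, since $r<r_1$, the last integral above is finite and can be evaluated
explicitly:
$$
r\int_1^\infty t^{r-1-\frac{pr_0}{r_0+1}}\ dt
=\frac{r}{r_1-r}
=\left(\frac{r_1}{r}-1\right)^{-1},
$$
which is precisely the constant appearing in the statement.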
\end{proof}
Now, we have all the ingredients to prove the \textit{a priori} estimates
established in Theorem~\ref{Theorem} for semi-stable solutions. They follow
from Theorem~\ref{Theorem:Sobolev} and Propositions~\ref{Prop:2} and \ref{Prop:bootstrap}
by choosing adequate test functions in the semistability condition \eqref{StZu2}.
First, we prove Theorem \ref{Theorem} when $n\neq p+2$. We will take $\eta=T_s u
=\min\{s,u\}$ as a test function in \eqref{StZu2} and then, thanks to
Proposition~\ref{Prop:2}, we apply our Morrey and Sobolev inequalities
\eqref{Morrey} and \eqref{Sobolev} with $q=2$.
\begin{proof}[Proof of Theorem {\rm \ref{Theorem}} for $n\neq p+2$]
Assume $n\neq p+2$. Let $u\in C^1_0(\overline{\Omega})$ be a semi-stable solution
of \eqref{problem}. Taking $\eta=T_s u=\min\{s,u\}$ in the semistability condition
\eqref{StZu2}, and noting that $\eta=s$ on $\{u>s\}$ while
$\nabla\eta=\chi_{\{u<s\}}\nabla u$ a.e., we obtain
$$
\int_{\{u>s\}}\left( \frac{4}{p^2}|\nabla_{T,u} |\nabla u|^{p/2}|^{2}
+ \frac{n-1}{p-1}H_u^2 |\nabla u|^{p} \right)\, dx
\leq \frac{1}{s^2}\int_{\{u<s\}} |\nabla u|^{p+2}\, dx
$$
for a.e. $s>0$. In particular,
$$
\min\left(\frac{4}{(n-1)p},1\right) I_{p,2}(u-s;\{x\in\Omega:u>s\})^p
\leq
\frac{p-1}{(n-1)s^2}\int_{\{u<s\}} |\nabla u|^{p+2}\, dx
$$
for a.e. $s>0$, where $I_{p,2}$ is the functional defined in
\eqref{Ipq} with $q=2$. By Proposition~\ref{Prop:2} we can apply
Theorem~\ref{Theorem:Sobolev} with $\Omega$ replaced by
$\{x\in\Omega: u > s\}$, $v=u-s$, and $q=2$. Then, the $L^r$
estimates established in parts (a) and (b) follow directly from the
Morrey and Sobolev type inequalities \eqref{Morrey} and
\eqref{Sobolev}.
Finally, the gradient estimate \eqref{grad:estimate} follows directly from
Proposition~\ref{Prop:bootstrap} with $r_0=np/(n-p-2)$.
\end{proof}
Now, we deal with the proof of Theorem {\rm \ref{Theorem}} (a) when $n=p+2$.
This critical case follows from Theorem \ref{ThmMS} and the semistability condition
\eqref{StZu2} with the test function $\eta=\eta(u)$ defined in \eqref{etaa} and
\eqref{psii} below.
\begin{proof}[Proof of Theorem {\rm \ref{Theorem}} when $n=p+2$]
Assume $n = p+2$ (and hence, $n>3$). Taking a Lipschitz function
$\eta = \eta (u)$ (to be chosen later) in \eqref{StZu} and using the coarea
formula we obtain
\begin{equation}\label{semi:n=p+2}
\begin{array}{l}
C\displaystyle \int_{0}^{\infty} \int_{\{ u = t \}}
\left\{\left|\nabla_{T,u} |\nabla u|^\frac{p-1}{2}\right|^{2}
+ \left|H_u|\nabla u|^\frac{p-1}{2}\right|^2\right\}\ \eta(t)^2 \, d\sigma dt
\\
\displaystyle \hspace{3cm}
\leq
\int_{0}^{\infty} \int_{\{ u = t \}} |\nabla u|^{p+1}\ \dot{\eta}(t)^2 \, d\sigma dt,
\end{array}
\end{equation}
where $d\sigma$ denotes the area element in $\{u=t\}$ and $C$, here and in the rest
of the proof, is a constant depending only on $p$.
To apply the Sobolev inequality \eqref{MS} in the left hand side of the
previous inequality we need to make an approximation argument.
Consider the sequence $u_k$ of smooth regularizations of $u$ introduced in
the proof of Proposition \ref{Prop:2} and note that $\{u_k=t\}$ is a smooth
hypersurface for a.e. $t\geq0$.
Then, from the Sobolev inequality \eqref{MS} with $\phi = |\nabla u_k|^{\frac{p-1}{2}}$,
$q=2$, and $M=\{u_k=t\}$, and noting that
$$
(p-1) \frac{n-1}{n-3} = p+1\quad \textrm{ when }n = p+2,
$$
we obtain
\begin{equation}\label{semi:u_k}
\begin{array}{l}
\displaystyle
C \int_{0}^{\infty} \left(\int_{\{ u_k = t \}}
|\nabla u_k|^{p+1}\, d\sigma\right)^\frac{n-3}{n-1} \eta(t)^2 \, dt
\\
\hspace{1.0cm} \leq
\displaystyle \int_{0}^{\infty} \int_{\{ u_k = t \}}
\left\{\left|\nabla_{T,u_k} |\nabla u_k|^\frac{p-1}{2}\right|^{2}
+ \left|H_{u_k}|\nabla u_k|^\frac{p-1}{2}\right|^2\right\}\ \eta(t)^2 \, d\sigma dt.
\end{array}
\end{equation}
Now, we will pass to the limit in the previous inequality. Note that,
if $\eta$ is bounded, through a dominated convergence argument as in
Proposition \ref{Prop:2} we have
$$
\begin{array}{l}
\displaystyle \lim_{k \to \infty} \int_{0}^{\infty} \int_{\{ u_k = t \}}
\left\{\left|\nabla_{T,u_k} |\nabla u_k|^\frac{p-1}{2}\right|^{2}
+ \left|H_{u_k}|\nabla u_k|^\frac{p-1}{2}\right|^2\right\}\ \eta(t)^2 \, d\sigma dt
\\
\displaystyle
\hspace{0.5cm}
=\int_{0}^{\infty} \int_{\{ u = t \}}
\left\{\left|\nabla_{T,u} |\nabla u|^\frac{p-1}{2}\right|^{2}
+ \left|H_u|\nabla u|^\frac{p-1}{2}\right|^2\right\}\ \eta(t)^2 \, d\sigma dt.
\end{array}
$$
Moreover, from the $C^1$ convergence of $u_k$ to $u$ we obtain
$$
\lim_{k \to \infty} \int_{0}^{\infty} \left(\int_{\{ u_k = t \}}
|\nabla u_k|^{p+1}\, d\sigma\right)^\frac{n-3}{n-1} \eta(t)^2\, dt
=
\int_{0}^{\infty}\left(\int_{\{ u = t \}}
|\nabla u|^{p+1}\, d\sigma\right)^\frac{n-3}{n-1} \eta(t)^2\, dt.
$$
Therefore, taking the limit as $k$ goes to infinity in \eqref{semi:u_k} and using
\eqref{semi:n=p+2}, we get
\begin{equation}\label{p+2}
C \int_{0}^{\infty} \psi(t)^{\frac{n-3}{n-1}} \, \eta (t)^2 \, dt
\leq \int_{0}^{\infty} \psi(t) \, \dot{\eta} (t)^2 \, dt =
\int_{0}^{\infty} \int_{\{ u = t \}} |\nabla u|^{p+1} \,
d\sigma\, \dot{\eta} (t)^{2}\, dt,
\end{equation}
where
\begin{equation}\label{psii}
\psi(t) := \int_{\{ u = t \}} |\nabla u|^{p+1} \, d\sigma.
\end{equation}
Now, let $\bar{M}:={\| u \|_{L^\infty(\Omega)}}$. Given $s > 0$, choose
\begin{equation}\label{etaa}
\eta(t)=\eta_s (t) :=\left\{
\begin{array}{lll}
\displaystyle t/s&\textrm{ if }& 0\leq t \leq s,\\
\displaystyle \exp \left( \frac{1}{\sqrt{2}} \int_{s}^{t}
\left(\frac{C \psi(\tau)^{\frac{n-3}{n-1}}}{ \psi(\tau)}\right)^{\frac12}
\, d\tau \right)&\textrm{ if }& s < t \leq \bar{M},\\
\displaystyle \eta_s(\bar{M})&\textrm{ if }& t >\bar{M}.
\end{array}
\right.
\end{equation}
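For later use, observe that on $(s,\bar{M})$ the definition of $\eta_s$ gives
$$
\dot{\eta}_s(t)=\frac{1}{\sqrt{2}}
\left(\frac{C \psi(t)^{\frac{n-3}{n-1}}}{\psi(t)}\right)^{\frac12}\eta_s(t),
\qquad\textrm{so that}\qquad
\psi(t)\,\dot{\eta}_s(t)^2=\frac{C}{2}\,\psi(t)^{\frac{n-3}{n-1}}\,\eta_s(t)^2.
$$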
It is then clear that
\begin{equation*}
\int_{0}^{\bar{M}} \int_{\{ u = t \}} |\nabla u|^{p+1} \, d\sigma \, \dot{\eta}_{s} (t)^{2}\, dt
=
\frac{1}{s^2} \int_{\{ u \leq s \}} |\nabla u|^{p+2} \, dx
+ \frac{C}{2} \int_{s}^{\bar{M}} \psi(t)^{\frac{n-3}{n-1}} \, \eta_{s} (t)^{2} \, dt.
\end{equation*}
Therefore, from \eqref{p+2} we obtain
\begin{equation}\label{p+2:bis}
\frac{C}{2} \int_{s}^{\bar{M}} \psi(t)^{\frac{n-3}{n-1}} \, \eta_{s} (t)^{2} \, dt
\leq
\frac{1}{s^2} \int_{\{ u \leq s \}} |\nabla u|^{p+2} \, dx.
\end{equation}
Let us choose $\alpha = \frac{2}{n-2}$, $\beta = \frac{n-3}{(n-2)(n-1)}$, and $m = n-2$.
Note that $\alpha,\beta>0$, $m>1$, and, since $m'=\frac{n-2}{n-3}$, $\alpha m=2$,
$\beta m=\frac{n-3}{n-1}$, and $\beta m'=1/(n-1)$. Moreover, using the definition
of $\eta_s$ we have
\begin{equation}\label{asdf}
\frac{1}{\psi(t)^{\beta m'} \eta_{s}(t)^{\alpha m'}}
=
\sqrt{\frac{2}{C}}\frac{\dot{\eta}_{s}(t)}{\eta_{s}(t)^{\alpha m' + 1}}
\end{equation}
for all $t>s$. By \eqref{asdf}, H\"older inequality, and
\eqref{p+2:bis}, we see that
$$
\begin{array}{lll}
\displaystyle \bar{M} - s
&=&\displaystyle
\int_{s}^{\bar{M}} \frac{\psi(t)^{\beta}\eta_{s}(t)^{\alpha}}{\psi(t)^{\beta}\eta_{s}(t)^{\alpha}} \, dt\\
&\leq&\displaystyle
\left( \int_{s}^{\bar{M}} \psi(t)^{\beta m}\eta_{s}(t)^{\alpha m} \, dt \right)^{\frac{1}{m}}
\left( \int_{s}^{\bar{M}} \frac{dt}{\psi(t)^{\beta m'}\eta_{s}(t)^{\alpha m'}} \right)^{\frac{1}{m'}}\\
&\leq&\displaystyle
\left( \int_{s}^{\bar{M}} \psi(t)^{\frac{n-3}{n-1}}\eta_{s}(t)^{2} \, dt \right)^{\frac{1}{n-2}}
\left( \sqrt{\frac{2}{C}}\int_{s}^{\bar{M}} \frac{\dot{\eta}_{s} (t)}
{\eta_{s}(t)^{m'\alpha + 1}} \, dt \right)^{\frac{n-3}{n-2}}\\
&\leq&\displaystyle
\left( \frac{2}{Cs^2} \int_{\{ u\leq s \}} |\nabla u|^{p+2} \, dx \right)^{\frac{1}{n-2}}
\left(\sqrt{\frac{2}{C}}\frac{n-3}{2}\right)^{\frac{n-3}{n-2}}
\end{array}
$$
which is exactly \eqref{L-infinty} (note that $n-2=p$ and $\eta_{s}(\bar{M})\geq 1$).
\end{proof}
\subsection{Regularity of the extremal solution.
Proof of Theorem~\ref{Theorem2}}\label{subsection5:2}
In this subsection we will prove the \textit{a priori} estimates for minimal
and extremal solutions of $(1.15)_{\lambda}$ stated in Theorem~\ref{Theorem2}.
Let us remark that in the proof of Theorem~\ref{Theorem2} we will assume
the nonlinearity $f$ to be smooth. However, if it is only $C^1$, we can
proceed with an approximation argument as in the proof of Theorem 1.2 in
\cite{Cabre09}.
The main ingredient in the $W^{1,p}$-estimate established in
Theorem~\ref{Theorem2} is the following result.
\begin{lem}\label{lemma:Poho}
Let $f$ be an increasing positive $C^1$ function satisfying \eqref{p-superlinear}
and $\lambda\in(0,\lambda^\star)$.
Let $u=u_\lambda\in C^1_0(\overline{\Omega})$ be the minimal solution of $(1.15)_{\lambda}$.
The following inequality holds:
\begin{equation}\label{Pohozaev}
\int_\Omega|\nabla u|^p\ dx
\leq
\left(\max_{x\in\overline{\Omega}}|x|\right)
\frac{1}{p'}\int_{\partial\Omega}|\nabla u|^p\ d\sigma.
\end{equation}
\end{lem}
\begin{proof}
Let $G'(t)=g(t)=\lambda f(t)$. First, we note that
$$
x\cdot\nabla u\ g(u) =x\cdot\nabla G(u)={\rm
div}\Big(G(u)x\Big)-nG(u)
$$
and that almost everywhere on $\Omega$ we can evaluate
$$
\begin{array}{lll}
\displaystyle x\cdot\nabla u\ \Delta_p u
-{\rm div}\Big(x\cdot\nabla u\ |\nabla u|^{p-2}\nabla u\Big)
&=&\displaystyle
-|\nabla u|^{p-2}\nabla u\cdot\nabla(x\cdot\nabla u)\\
&=&
\displaystyle
-|\nabla u|^{p}-\frac{1}{p}\nabla|\nabla u|^p\cdot x\\
&=&
\displaystyle
\frac{n-p}{p}|\nabla u|^{p}-\frac{1}{p}{\rm div} \Big(|\nabla u|^p x\Big).
\end{array}
$$
As a consequence, multiplying $(1.15)_{\lambda}$ by $x\cdot\nabla u$ and integrating on $\Omega$,
we have
\begin{equation}\label{Pohozaev0}
n\int_\Omega G(u)\ dx-\frac{n-p}{p}\int_\Omega |\nabla u|^p\ dx=
\frac{1}{p'}\int_{\partial\Omega}|\nabla u|^p\ x\cdot\nu\ d\sigma,
\end{equation}
where $\nu$ is the outward unit normal to $\Omega$.
Noting that $u$ is an absolute minimizer of the energy functional
$$
J(u)=\frac{1}{p}\int_\Omega|\nabla u|^p\ dx-\int_\Omega G(u)\ dx
$$
in the convex set $\{v\in W^{1,p}_0(\Omega):0\leq v\leq u\}$ (see \cite{CS07}),
we have that $J(u)\leq J(0)=0$. Therefore, from \eqref{Pohozaev0}
we obtain
$$
\int_\Omega|\nabla u|^p\ dx
=
n J(u)
+\frac{1}{p'}\int_{\partial\Omega}|\nabla u|^p\ x\cdot\nu\ d\sigma
\leq
\left(\max_{x\in\overline{\Omega}}|x|\right)
\frac{1}{p'}\int_{\partial\Omega}|\nabla u|^p\ d\sigma
$$
proving the lemma.
\end{proof}
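The pointwise differential identity used in the proof above holds for every smooth $u$; it can be machine-checked for a concrete choice. A minimal sketch (assuming Python with sympy, taking $n=2$ and $p=4$ with a polynomial $u$, so that $|\nabla u|^{p-2}$ is itself a polynomial):

```python
import sympy as sp

X, Y = sp.symbols('X Y')
u = X**3 + X**2*Y - 2*Y**2           # an arbitrary polynomial test function
p, n = 4, 2                           # even p keeps |grad u|^{p-2} polynomial

grad = sp.Matrix([sp.diff(u, X), sp.diff(u, Y)])
g2 = sp.expand((grad.T*grad)[0])      # |grad u|^2
pos = sp.Matrix([X, Y])               # the vector field x

def div(F):
    return sp.diff(F[0], X) + sp.diff(F[1], Y)

xdotgrad = (pos.T*grad)[0]                        # x . grad u
delta_p = div(g2**((p - 2)//2)*grad)              # Delta_p u
lhs = xdotgrad*delta_p - div(xdotgrad*g2**((p - 2)//2)*grad)
rhs = sp.Rational(n - p, p)*g2**(p//2) - sp.Rational(1, p)*div(g2**(p//2)*pos)
assert sp.expand(lhs - rhs) == 0      # the identity holds pointwise
```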
Finally, we prove Theorem~\ref{Theorem2}, using the semistability
condition \eqref{StZu2} with an appropriate test function,
Theorem~\ref{Theorem:Sobolev}, and Lemma~\ref{lemma:Poho}.
\begin{proof}[Proof of Theorem {\rm \ref{Theorem2}}]
Let $u_\lambda$ be the minimal solution of $(1.15)_{\lambda}$ for $\lambda\in(0,\lambda^\star)$.
From \cite{CS07} we know that minimal solutions are semi-stable. In particular, $u_\lambda$
satisfies the semistability condition \eqref{StZu2} for all $\lambda\in(0,\lambda^\star)$.
Assume that $\Omega$ is strictly convex. Let $\delta(x) := {\rm
dist}(x,\partial\Omega)$ be the distance to the boundary and
$\Omega_\varepsilon:=\{x\in\Omega:\delta (x)<\varepsilon\}$. By
Proposition~\ref{Prop:1} there exist positive constants
$\varepsilon$ and $\gamma$ such that for every $x_0\in
\Omega_\varepsilon$ there exists a set $I_{x_0}\subset \Omega$
satisfying $|I_{x_0}|>\gamma$ and
\begin{equation}\label{kkkkey}
u_\lambda(x_0)^{p-1}
\leq
u_\lambda(y)^{p-1}\quad\textrm{for all }y\in I_{x_0}.
\end{equation}
Let $x_\varepsilon\in\overline{\Omega}_\varepsilon$ be such that
$u_\lambda(x_\varepsilon)=\|u_\lambda\|_{L^\infty(\Omega_\varepsilon)}$.
Integrating inequality \eqref{kkkkey} with respect to $y$ over
$I_{x_\varepsilon}$ and using \eqref{p-superlinear}, we obtain
\begin{equation}\label{kkkkeyyyyy}
\|u_\lambda\|_{L^\infty(\Omega_\varepsilon)}^{p-1}
\leq
\frac{1}{\gamma}\int_{I_{x_\varepsilon}}u_\lambda^{p-1}\ dy
\leq
\frac{1}{\gamma}\int_{\Omega}u_\lambda^{p-1}\ dy
\leq \frac{C}{\gamma}\|f(u_\lambda)\|_{L^1(\Omega)},
\end{equation}
where $C$, here and in the rest of the proof, is a constant independent of $\lambda$. Letting
$s=\left(\frac{C}{\gamma}\|f(u_\lambda)\|_{L^1(\Omega)}\right)^{1/(p-1)}$,
we deduce
\begin{equation}\label{ghjklk}
\Omega_\varepsilon\subset\{x\in\Omega:u_\lambda(x) \leq s\}.
\end{equation}
Now, choose
$$
\eta (x) := \left\{
\begin{array}{lll}
\delta (x)&\textrm{if}&\delta (x) < \varepsilon,\\
\varepsilon&\textrm{if}&\delta (x) \geq \varepsilon,
\end{array}
\right.
$$
as a test function in \eqref{StZu2} and use \eqref{ghjklk} to obtain
$$
\varepsilon^2
\int_{\{u_\lambda>s\}}\left( \frac{4}{p^2}|\nabla_{T,u_\lambda} |\nabla u_\lambda|^{p/2}|^{2}
+ \frac{n-1}{p-1}H_{u_\lambda}^2 |\nabla u_\lambda|^{p} \right) \, dx
\leq \int_{\{u_\lambda \leq s\}} |\nabla u_\lambda|^{p} \, dx.
$$
Multiplying equation $(1.15)_{\lambda}$ by $T_su_\lambda=\min\{s,u_\lambda\}$ we have
\begin{equation}\label{umens}
\int_{\{u_\lambda<s\}}|\nabla u_\lambda|^p\ dx=\lambda\int_\Omega
f(u_\lambda)T_su_\lambda\ dx \leq\lambda^\star
s\|f(u_\lambda)\|_{L^1(\Omega)}=C
\|f(u_\lambda)\|_{L^1(\Omega)}^{p'}.
\end{equation}
Combining the previous two inequalities we obtain
$$
\int_{\{u_\lambda>s\}}\left( \frac{4}{p^2}|\nabla_{T,u_\lambda} |\nabla u_\lambda|^{p/2}|^{2}
+ \frac{n-1}{p-1}H_{u_\lambda}^2 |\nabla u_\lambda|^{p} \right) \, dx
\leq
C \|f(u_\lambda)\|_{L^1(\Omega)}^{p'}.
$$
At this point, proceeding exactly as in the proof of Theorem~\ref{Theorem}, we deduce
the $L^r$ estimates established in parts $(a)$ and $(b)$.
In order to prove the $W^{1,p}$-estimate of part $(b)$, recall that
by \eqref{Pohozaev0} we have
$$
\int_\Omega|\nabla u_\lambda|^p\ dx \leq C
\int_{\partial\Omega}|\nabla u_\lambda|^p\ d\sigma.
$$
Therefore, we need to control the right hand side of the previous inequality.
Since the nonlinearity $f$ is increasing by hypothesis we obtain
$$
f(u_\lambda)\leq f\left(C\|f(u_\lambda)\|_{L^1(\Omega)}^\frac{1}{p-1}\right)
\quad\textrm{in }\Omega_\varepsilon
$$
by \eqref{kkkkeyyyyy}, where $C$ is a constant independent of $\lambda$.
Now, since $-\Delta_p u_\lambda = \lambda f(u_\lambda)\in L^\infty(\Omega_\varepsilon)$
in $\Omega_\varepsilon$, it holds
$$
\| u_\lambda \|_{C^{1,\beta} (\overline{\Omega}_\varepsilon)} \leq
C'
$$
for some $\beta\in(0,1)$ by \cite{Lie}, where $C'$ is a constant depending only on
$n$, $p$, $\Omega$, $f$, and $\|f(u_\lambda)\|_{L^1(\Omega)}$, proving the assertion.
Finally, assume that $p\geq 2$ and \eqref{convex:assump} holds. From \cite{S}
we know that $f(u^\star)\in L^r(\Omega)$ for all $1\leq r<n/(n-p')$.
In particular, $f(u^\star)\in L^1(\Omega)$. Therefore, parts $(i)$
and $(ii)$ follow directly from $(a)$ and $(b)$.
\end{proof}
\footnotesize
\noindent\textit{Acknowledgments.}
The authors were supported by grant 2009SGR345 (Catalunya) and
MTM2011-27739-C04 (Spain). The second author was also supported by
grant MTM2008-06349-C03-01 (Spain).
\end{document}
\begin{document}
\title[A Pentagonal Crystal]{A Pentagonal Crystal, the Golden Section,
alcove packing and aperiodic tilings.}
\author[Anthony Joseph]{Anthony Joseph\\}
\address {Donald Frey Professional Chair\\
Department of Mathematics\\
The Weizmann Institute of Science\\
Rehovot, 76100, Israel}
\email {[email protected]}
\date{\today}
\maketitle
Key Words: Crystals, aperiodic tiling.
AMS Classification: 17B35, 16W22.
\footnotetext[1]{Work supported in part by Minerva grant, no.
8596/1.}
\
\
\textbf{Abstract}
\
A Lie theoretic interpretation is given to a pattern with
five-fold symmetry occurring in aperiodic Penrose tiling based on
isosceles triangles with length ratios equal to the Golden
Section. Specifically a $B(\infty)$ crystal based on that of
Kashiwara is constructed exhibiting this five-fold symmetry. It
is shown that it can be represented as a Kashiwara $B(\infty)$
crystal in type $A_4$. Similar crystals with $(2n+1)$-fold
symmetry are represented as Kashiwara crystals in type $A_{2n}$.
The weight diagrams of the latter inspire higher aperiodic tiling.
In another approach alcove packing is seen to give aperiodic
tiling in type $A_4$. Finally $2m$-fold symmetry is related to
type $B_m$.
\
\section{Introduction}
\subsection{}\label{1.1}
This work arose as an attempt to explicitly describe and more
deeply understand the $B(\infty)$ crystal introduced by Kashiwara
\cite [Sections 0,4] {Ka1}. Let us first recall the context in
which it is described.
\subsection{}\label{1.2}
Let $C$ be a Cartan matrix in the sense used to define
Kac-Moody algebras. More precisely $C$ is a square matrix of
finite (or even countable) size with diagonal entries equal to 2
and non-positive integer off-diagonal entries. In the Kashiwara
theory one needs $C$ to be symmetrizable in order to introduce the
associated quantized enveloping algebra from which the (purely
combinatorial) properties of $B(\infty)$ are deduced. However
using the Littelmann path model \cite{Li1} it is possible (\ref
{1.6}) to weaken this to the requirement that the $ij^{th}$ entry
of $C$ be non-zero if and only if the $ji^{th}$ entry be non-zero.
This is of course exactly the condition under which the Kac-Moody
algebra $\mathfrak{g}$ associated to $C$ is defined \cite {Kac1}.
\subsection{}\label{1.3}
The $B(\infty)$ crystal is a purely combinatorial object which
can be viewed as providing a basis (the crystal basis) of the algebra
of functions on the open Bruhat cell defined by $\mathfrak{g}.$
The latter is of course a polynomial algebra (in possibly infinitely
many variables) though this is very far from obvious from the
combinatorial description of $B(\infty)$. Indeed one only knows
its formal character to have the expected product form by means
which are particularly roundabout, especially in the
non-symmetrizable case. Moreover the combinatorial complexity of
$B(\infty)$ is essential in that it leads in a simple manner to a
crystal basis for each highest weight integrable module.
\subsection{}\label{1.4}
The $B(\infty)$ crystal is specified entirely in terms of the
Cartan matrix $C$ by the following very simple procedure. First as
in Kac \cite[Chapter 2]{Kac1} one realizes $C$ through a vector
space ${\mathfrak h}$ (eventually the Cartan subalgebra of
${\mathfrak g}$), a set of simple coroots in ${\mathfrak h}$ and a
set $\Delta$ of simple roots in ${\mathfrak h}^*$ so that the
entries of $C$ are given by evaluation of coroots on roots. Moreover
using these roots and coroots we may define the Weyl group $W$ in
the usual way.
\subsection{}\label{1.5}
To each simple root ${\alpha}$, Kashiwara \cite[Example 1.2.4]
{Ka2} introduced an "elementary crystal" $B_\alpha$ whose elements
can be viewed simply as non-positive multiples of $\alpha$, so
then $B_\alpha$ is identified with ${\mathbb N}$. (We have changed
Kashiwara's definition slightly; see \cite[12.3]{J1}.) Now fix any
countable sequence $J$ of simple roots (indexed by the positive
integers) with the property that every simple root occurs
infinitely many times and take the subset $B_{J}$ of the
corresponding tensor product of the elementary crystals having
only finitely many non-zero entries. Of course $B_J$ has a
distinguished element, denoted $b_{\infty}$, in which all entries
are equal to zero. One views the elements of ${B_J}$ as forming
the vertices of a graph (the crystal graph). On $B_{J}$ one
defines a Kashiwara function $r$ with entries in $\mathbb Z$,
given by a very simple formula involving just the entries of the
Cartan matrix $C$. Its role is to describe the edges of the
crystal graph which are labelled by the simple roots. Indeed
inequalities between the values of $r$ on a given vertex $b$
decide the neighbours of $b$. Finally $B_{J}(\infty)$ is defined
to be the connected component of $B_{J}$ containing $b_{\infty}$.
\subsection{}\label{1.6}
A deep and important result of Kashiwara is that as a graph
$B_{J}(\infty)$ is independent of $J$. Kashiwara's result is
obtained via the quantized enveloping algebra using a
$q\rightarrow 0 $ limit. (Lusztig \cite{Lu1} has a different
version of this limit and the resulting combinatorics.) It
requires $C$ to be symmetrizable; but this condition can be
dropped through a purely combinatorial proof using the Littelmann
path model \cite[11.16, 15.11, 16.10]{J1}.
\subsection{}\label{1.7}
One can ask if it is possible to describe $B_{J}(\infty)$
explicitly as a subset of $B_{J}$. Of course this should
involve the Cartan matrix which is in effect the only ingredient
in the determination of $B_{J}(\infty)$. However this occurs in
an extremely complicated fashion and it is even rather difficult
to establish general properties of the embedding \cite {Ka3,N1}.
Nevertheless it was noted by Kashiwara \cite[Prop. 2.2.3] {Ka2}
that the rank $2$ case is manageable. Here we note that this
solution involves the Chebyshev polynomials with argument being
the square root of the product of the two off-diagonal entries of
$C$. In truth these are not quite the Chebyshev polynomials as
customarily defined; however, the difference will probably not
bother most readers. We refer the fastidious to \ref {2.2}.
\subsection{}\label{1.8}
The square of the largest zero of the $n^{th}$ Chebyshev
polynomial is $< 4$ and tends to this value as n tends to
infinity. In particular the largest non-negative integer values
must be $0,1,2,3$ and these occur as the squares of the largest
zeros for just the second, third, fourth and sixth Chebyshev
polynomial. Moreover such a zero results in a cut-off in the
description of $B_{J}(\infty)$ which as a consequence lies in a
finite Cartesian product of elementary crystals.
\subsection{}\label{1.9}
One can ask whether the squares of the largest zeros of the remaining
Chebyshev polynomials lead to a similar cut-off in the
description of $B_{J}(\infty)$. The first interesting case is the
fifth Chebyshev polynomial whose largest zero is the Golden
Section $g$. Of course since $g$ is not an integer or even
rational one needs to modify the definition of $B_{J}(\infty)$ for
the construction to make any sense.
\subsection{}\label{1.10}
In a similar vein one does not need the Cartan matrix to have
integer off-diagonal entries in order to define the Weyl group.
That the resulting group be finite (in rank two) similarly
involves the largest zeros of the Chebyshev polynomials. In this
fashion the $n^{th}$ Chebyshev polynomial gives a Weyl group
isomorphic to the dihedral group of order $2n$. Returning to the
case of $n = 5$ this leads to a "root system" having 5 positive
roots which matches with the expectation that $B_{J}(\infty)$
embeds in a five-fold tensor product.
\subsection{}\label{1.11}
The fact that the Golden Section $g$ is irrational and satisfies a
quadratic equation means that we may retain a purely integer
set-up in the definition of $B_J$ by adding collinear roots of
relative length $g$. This gives in all twenty non-zero roots. A
link with mathematics of the ancient world is that the resulting
root system can be described as the projection of the vertices of
a dodecahedron onto the plane defined by one of its faces - see
Figure 1.
\subsection{}\label{1.12}
A further justification for introducing pairs of collinear roots
comes from the nature of the Weyl group itself, which has two
generators and is isomorphic to $\mathbb Z_5 \ltimes \mathbb Z_2$,
that is the dihedral group of order $10$. Because $g$ is not
rational but satisfies a quadratic equation there is a natural
decomposition of each of the two simple reflections into two
commuting involutions giving a larger group on four generators,
which we call the augmented Weyl group $W^a$, see \ref {3.9}. To
our surprise this larger group (which acts just ${\mathbb Z}$-linearly
and not isometrically) leaves the enlarged root system
invariant. Having said this it is no surprise that this larger
group is just the permutation group on 5 elements and obtained as
the symmetry group of the dodecahedron via the projection
described in \ref{1.11}.
\subsection{}\label{1.13}
In view of the above rather pleasing geometric interpretation, it
became a seemingly worthwhile challenge to indeed construct and
describe explicitly a "pentagonal crystal", in the sense of the
Kashiwara $B_{J}(\infty)$, proving its independence on $J$ and
computing its formal character.
\subsection{}\label{1.14}
The above program was carried out, not without some difficulty.
Indeed the inequalities which describe $B_{J}(\infty)$ as a subset
of $B_J$ are significantly more complicated than a naive
interpretation of our previous inequalities from the rank $2$ case
would suggest.
\subsection{}\label{1.15}
The reader may spare himself the detailed verification of the
assertions alluded to in \ref{1.14} since it becomes apparent that
our pentagonal crystal is a realization of a Kashiwara
$B_{J}(\infty)$ crystal in type $A_4$, and that $W^a$ is the
corresponding Weyl group $W(A_4)$. Thus existence and independence
of $J$ may be proved by exhibiting this isomorphism. Nevertheless
our computation was not entirely in vain. Indeed for some special
choices of $J$ the description of $B_{J}(\infty)$ in type $A$ is
particularly simple \cite{N1}. However these are not the choices
required here and for them the resulting description is far more
complicated.
\subsection{}\label{1.16}
In view of \ref{1.15} it is natural to ask if the remaining
largest zeros of Chebyshev polynomials for $n \geqslant1$ lead to
crystals with a similar interpretation. Indeed the cases of the
third and fifth Chebyshev polynomials are just special cases of
the crystals obtained from the $(2n+1)^{th}$ Chebyshev polynomial.
It turns out that the $(2n+1)^{th}$ Chebyshev polynomial
factorizes into a pair of polynomials (related by replacing the
argument by its negative). These are irreducible over $\mathbb Q$
if and only if $(2n+1)$ is prime. This leads to $n$ collinear roots
replacing each positive (or negative) root leading to a total of
2n(2n+1) roots which just happens to be the number of roots in
type $A_{2n}$. In \ref {7.7} we exhibit the required isomorphism
with $B_{J}(\infty)$ for a particular choice of $J$. However one
should note that there is an important distinction with the
pentagonal case if $(2n+1)$ is not prime. In particular we show
that the augmented Weyl group $W^a$ is just the Weyl group
$W(A_{2n})$ in type $A_{2n}$. In Section 10, we consider the even
case for which $W\cong \mathbb Z_{2m} \ltimes \mathbb Z_2$. We
show that then $W^a$ is isomorphic to the Weyl group $W(B_m)$ in
type $B_m$. One would clearly like to extend this connection for
all finite reflection groups $W$, that is to say construct $W^a$
and show it to be isomorphic to the Weyl group of a root system.
\subsection{}\label{1.17}
We remark that although our pentagonal root system is just an
appropriate orthogonal projection of a dodecahedron, the latter
cannot be obtained from a similar orthogonal projection of the
root system of type $A_4$. Yet the relation between the Coxeter
groups of type $A_4$ and the pentagonal system may be thought to
be an extension of the traditional one obtained by say embedding a
root system of type $G_2$ into one of type $B_3$ via a seven
dimensional representation of the Lie algebra of the former.
\subsection{}\label{1.18}
In Section 8 we consider Penrose aperiodic tiling based on the two
isosceles triangles (the Golden Pair \ref {8.7}) whose unequal
side lengths ratios is the Golden Section. We view these
triangles as being obtained by triangularization of the regular
pentagon. We show (Theorem \ref {8.14}) that the triangles
obtained from the regular $n$-gon lead to a higher aperiodic tiling,
though this is a totally elementary result having no Lie theory
content. In Section 9 we suggest that such tilings can be thought
of as a consequence of alcove packing in the Cartan subalgebras
whose associated Weyl group is the augmented Weyl group. An
explicit construction is given in the pentagonal case, \ref
{9.10}, \ref{9.11}. Aperiodicity (which we view as the possibility to
obtain arbitrary many tilings) corresponds to using different
sequences of reflections in the affine Weyl group. However for the
moment our construction does not give all possible tilings.
\section{Root systems}
Throughout the base field will be assumed to be the real numbers
$\mathbb R$.
\subsection{}\label{2.1}
Let ${\mathfrak h}$ be a vector space and $I : =\{1,2,\ldots,n\}$.
Define a root pair $(\pi^{\vee},\pi)$ to consist of a set
$\pi^{\vee}=\{\alpha_i^{\vee}\ |\ i \in I\}$ of linearly
independent elements (called simple coroots) of ${\mathfrak h}$
and a set $\pi= \{\alpha_ {i} \ |\ i\in I\}$ of linearly
independent elements (called simple roots) of ${\mathfrak h^*}$
such that $\alpha_i^{\vee}(\alpha_i) = 2$, for all $i \in I$. For
all $i \in I$, define the simple reflection $s_i \in Aut\
{\mathfrak h^*}$ by
$$ s_i\lambda=\lambda-\alpha_i^{\vee}(\lambda)\alpha_i,$$
and let $W$ be the group they generate. It will be assumed that
$\alpha_i^{\vee}(\alpha_j) = 0$, if and only if
$\alpha_j^{\vee}(\alpha_i) = 0$. The matrix with entries
$\alpha_i^{\vee}(\alpha_j)$ will be called the Cartan matrix. For
the moment we shall only assume that its off-diagonal entries are
non-positive reals.
\subsection{}\label{2.2}
Take $n = 2$ in \ref {2.1}. Set $\alpha = \alpha_1,\beta
=\alpha_2, s_\alpha = s_1 , s_\beta = s_2$ . Since we do not
mind introducing possibly superfluous square roots we shall
symmetrize the Cartan matrix so that its off-diagonal entries are
both equal to $-x$. Observe that $$s_\alpha s_{\beta}\alpha= (x^2
- 1)\alpha + x\beta , s_\alpha s_{\beta}\beta= - \beta-x\alpha.$$
Thus if we define functions $R_n(x), S_n(x)$ by $$(s_\alpha
s_\beta)^n\alpha = R_n(x)\alpha + S_n(x)\beta,$$ we find that
$R_n,S_n$ are defined by the recurrence relations $$S_{n+1}=xR_n -
S_n, R_{n+1}=(x^2 - 1)R_n - xS_n = xS_{n+1}- R_n : n > 0,$$ with
the initial conditions $S_0 = 0, R_0 = 1$.
These relations are exactly satisfied by setting
$$R_n = P_{2n}, S_n = P_{2n-1},$$
where the $P_n$ satisfy the recurrence relation
$$P_{n+1}= xP_n - P_{n-1}, \forall n \geq 0,$$
with the initial conditions $P_{-1}= 0, P_0 = 1$. One may check
that $$(\sin\theta) P_n(2\cos\theta)=\sin(n+1)\theta, \forall n
\in \mathbb N.\eqno{(*)}$$
A few examples are given below
$$P_0=1,P_1=x,P_2=x^2-1,P_3=x(x^2-2),P_4=x^4-3x^2+1,P_5=x(x^2-3)(x^2-1),$$
$$P_6=x^6-5x^4+6x^2-1,P_7=x(x^2-2)(x^4-4x^2+2),P_8=(x^2-1)(x^6-6x^4+9x^2-1).$$
Set $\theta =\pi/(n+1)$. Then by $(*)$ the $2\cos t\theta : t \in
\{1,2,\ldots,n\}$, form the set of zeros of the degree $n$ polynomial
$P_n$. Thus these zeros are pairwise distinct and real with
$x:=2\cos \pi/(n+1)$ being the largest. Moreover $x$ is just the
third length of the isosceles triangle with equal side lengths $1$
and equal angles $\theta$. Finally $(\pi - \theta)$ is just the
angle between the vectors $\alpha$ and $\beta$ in Euclidean
two-space.
We will refer to $P_n$ as the $(n+1)^{th}$ Chebyshev polynomial.
To be precise the "true" Chebyshev polynomials $P_n^c$ only
coincide with the $P_n$ for $n=0,1$. Otherwise they are defined by
the recurrence relation
$$P_{n+1}^c= 2xP_n^c - P_{n-1}^c, \forall n \geq 0.$$ One may
check that
$$P_{n+1}^c(x) = P_{n+1}(2x) - xP_n(2x),\quad P_n^c(\cos \theta)=
\cos n\theta, \forall n \in \mathbb N.$$ A few examples are given
below
$$P_0^c=1,P_1^c=x,P_2^c=2x^2-1,P_3^c=4x^3-3x,P_4^c=8x^4-8x^2+1.$$
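The recurrences, the identity $(*)$, and the factorisations listed above are easy to machine-check. A sketch (assuming Python with sympy; the function names are ours):

```python
import math
import sympy as sp

x = sp.symbols('x')

def P(n):
    """P_{-1} = 0, P_0 = 1, P_{k+1} = x P_k - P_{k-1} (the polynomials of 2.2)."""
    a, b = sp.Integer(0), sp.Integer(1)
    for _ in range(n):
        a, b = b, sp.expand(x*b - a)
    return b

def Pc(n):
    """The 'true' Chebyshev polynomials: Pc_0 = 1, Pc_1 = x, Pc_{k+1} = 2x Pc_k - Pc_{k-1}."""
    a, b = sp.Integer(1), x
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, sp.expand(2*x*b - a)
    return b

# one of the listed factorisations: P_7 = x(x^2-2)(x^4-4x^2+2)
assert sp.expand(x*(x**2 - 2)*(x**4 - 4*x**2 + 2) - P(7)) == 0
# identity (*): sin(t) P_n(2 cos t) = sin((n+1)t), checked numerically
for n in range(1, 9):
    for t in (0.3, 1.1, 2.0):
        assert abs(math.sin(t)*float(P(n).subs(x, 2*math.cos(t)))
                   - math.sin((n + 1)*t)) < 1e-9
# relation between the two families: Pc_{n+1}(x) = P_{n+1}(2x) - x P_n(2x)
for n in range(5):
    assert sp.expand(Pc(n + 1) - (P(n + 1).subs(x, 2*x) - x*P(n).subs(x, 2*x))) == 0
# the largest zero of P_4 is the Golden Section 2 cos(pi/5)
phi = (1 + sp.sqrt(5))/2
assert sp.expand(P(4).subs(x, phi)) == 0
```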
\subsection{}\label{2.3}
In the above situation the Weyl group $W = <s_\alpha,s_\beta>$ is
finite if and only if $(s_\alpha s_\beta )^n = 1$, for some $n
\geq 2$. Assume that $n$ is the least integer with this property.
Observe that $n = 2$, exactly when $x = 0$. In general $s_\alpha
s_\beta $ is a rotation by $2\pi/n$. Thus if $n$ is even, say $n =
2m$, then $(s_\alpha s_\beta )^m$ is a rotation by $\pi$ and so
acts by $-1$. This is satisfied by the vanishing of $S_m$. If $n$
is odd, say $n = 2m+1$, then $$(s_\alpha s_\beta )^m\alpha =
(s_\beta s_\alpha )^{m+1}\alpha = -s_\beta(s_\alpha s_\beta
)^m\alpha.$$ Consequently $(s_\alpha s_\beta )^m\alpha$ is a
multiple of $\beta$ and similarly $(s_\beta s_\alpha )^m\beta$ is
a multiple of $\alpha$. These conditions are satisfied by the
vanishing of $R_m$ . In all cases $W \cong \mathbb Z_n \ltimes
\mathbb Z_2$ , that is the dihedral group of order $2n$.
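The dihedral behaviour just described can be checked numerically by realising $s_\alpha, s_\beta$ as matrices in the basis $(\alpha,\beta)$: with off-diagonal Cartan entries $-x$ one has $s_\alpha\beta = \beta + x\alpha$ and $s_\beta\alpha = \alpha + x\beta$. A sketch (assuming Python with numpy; the function names are ours):

```python
import numpy as np

def simple_reflections(x):
    """Matrices of s_alpha, s_beta on span(alpha, beta), in the basis
    (alpha, beta), for the symmetrized Cartan matrix [[2, -x], [-x, 2]]."""
    s_a = np.array([[-1.0, x], [0.0, 1.0]])   # alpha -> -alpha, beta -> beta + x*alpha
    s_b = np.array([[1.0, 0.0], [x, -1.0]])   # alpha -> alpha + x*beta, beta -> -beta
    return s_a, s_b

def coxeter_order_is(n):
    """With x = 2 cos(pi/n), s_alpha s_beta is a rotation by 2*pi/n,
    so its n-th power should be the identity."""
    s_a, s_b = simple_reflections(2*np.cos(np.pi/n))
    return np.allclose(np.linalg.matrix_power(s_a @ s_b, n), np.eye(2))

# W is dihedral of order 2n for each n >= 2; n = 5 uses x = the Golden Section
assert all(coxeter_order_is(n) for n in range(2, 13))
# the formula of 2.2: s_alpha s_beta alpha = (x^2 - 1) alpha + x beta
x = 1.7
s_a, s_b = simple_reflections(x)
assert np.allclose(s_a @ s_b @ np.array([1.0, 0.0]), [x**2 - 1, x])
```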
\section{Crystals}
\subsection{}\label{3.1}
Adopt the conventions of \ref {2.1}. A crystal $B$ is a
countable set whose elements are viewed as vertices of a graph
(the crystal graph, also denoted by $B$). The edges of $B$ are
labelled by the elements of $\pi$ with the following two conditions
imposed.
\\
\textsl{\textit{1) Removing all edges except those with a fixed label results in a
disjoint union of linear graphs.}}
\\
\textsl{\textit{2) There is a weight function $wt:B\rightarrow
{\mathfrak h^*}$ with the property that $wt\ b - wt\ b^\prime \in
\{\pm\alpha\}$, if $b,b^\prime$ are joined by an edge labelled by
$\alpha$.}}
\\
By 1), 2) we may define maps $e_\alpha,f_\alpha:B\rightarrow B\cup
\{0\}$ by $e_\alpha b^\prime = b$ if and only if $f_\alpha b =
b^\prime$, whenever $b,b^\prime$ are non-zero, with $e_\alpha$
(resp. $f_\alpha$ ) increasing (resp. decreasing) weight by
$\alpha$.
Any subset $B^\prime$ of $B$ inherits a crystal structure by
deleting all edges joining elements of $B^\prime$ to $B\setminus
B^\prime$ . We say that $B^\prime$ is a strict subcrystal of $B$
if no edges need be deleted, that is as a graph, $B^\prime$ is a
component of $B$.
Let $\mathscr E$ (resp. $\mathscr F$) denote the monoid generated
by the $e_\alpha$ (resp. $f_\alpha$), $\alpha \in \pi$. A crystal
is said to be of highest weight $\lambda \in {\mathfrak h^*}$ if
there exists $b \in B$ of weight $\lambda$ satisfying $\mathscr E
b = 0$ and $\mathscr F b = B$.
\subsection{}\label{3.2} A crucial component of crystal theory is tensor structure. As a
set the tensor product $B\otimes B^\prime$ of crystals
$B,B^\prime$ is simply the Cartesian product where the weight
function satisfies $wt(b\otimes b^\prime) = wt \ b + wt \
b^\prime$. In order to assign edges to the required graph
Kashiwara \cite[Definition 1.2.1] {Ka2} introduced auxiliary
functions $\varepsilon_\alpha: B\rightarrow \mathbb Z
\cup\{-\infty\}:\alpha \in \pi$, with the property that
$\varepsilon_\alpha(e_\alpha b) = \varepsilon_\alpha (b) - 1$,
whenever $e_\alpha b \neq 0$.
Let us pass immediately to a multiple tensor product
$B_n \otimes B_{n-1} \otimes\ldots \otimes B_1$. On the
corresponding Cartesian product we shall define the edges through
the Kashiwara function given on the element $b_n \times b_{n-1}
\times\ldots \times b_1 : b_i \in B_i$ , by $$r^k_\alpha (b) =
\varepsilon_\alpha(b_k) - \sum_{j=k+1}^n \alpha^{\vee}(wt \
b_j).$$
\subsection{}\label{3.3}
Let us assume for the moment that the Cartan matrix is classical,
namely has non-positive integer off-diagonal entries. Set
$$\varepsilon_\alpha(b) = \max\limits_k r_\alpha^k(b), \ g_\alpha(b)=\max\{k \mid r_\alpha^k(b) = \varepsilon_\alpha(b)\},
\ d_\alpha(b)=\min\{k \mid r_\alpha^k(b)
=\varepsilon_\alpha(b)\}.$$ The Kashiwara tensor product rule
\cite[Lemma 1.3.6]{Ka2} is given by
\\
\textsl{\textit{i) $e_\alpha b = b_n\times\ldots \times e_\alpha
b_t\times\ldots \times b_1$, where $t = g_\alpha(b)$,}}
\\
\textsl{\textit{ii) $f_\alpha b = b_n\times\ldots \times f_\alpha
b_t\times\ldots \times b_1$, where $t = d_\alpha(b)$.}}
\\
One checks that this gives the Cartesian product a crystal structure. It is manifestly
associative, but it is not commutative. When (i) (resp. (ii))
holds we say that $e_\alpha$ (resp. $f_\alpha$) enters $b$ at the
$t^{th}$ place.
\subsection{}\label{3.4}
When one begins to tamper with the entries of $C$ taking them to
be arbitrary reals, some of the required properties may fail,
particularly that noted in the last part of \ref {3.1}. This is
already the case when the diagonal elements are replaced by
non-positive integers. For this case a cure has been given by
Jeong, Kang, Kashiwara and Shin \cite{JKKS}. Non-integer (real)
entries are also problematic. Indeed suppose that $f_\alpha b \neq
0$ and set $t = d_\alpha(b)$. This means that $r_\alpha^s(b) <
r_\alpha^t(b)$, for all $s < t$. On the other hand
$r_\alpha^t(f_\alpha b) = 1 + r_\alpha^t(b)$, whilst
$r_\alpha^s(f_\alpha b) = {\alpha^\vee}(\alpha) + r_\alpha^s(b) =
2 + r_\alpha^s(b)$, for $s < t$. Thus to obtain $e_\alpha f_\alpha
b = b$ , we require that $r_\alpha^s(b) < r_\alpha^t(b)$ to imply
$r_\alpha^s(b) \leq r_\alpha^t(b) - 1$, which is true if the
Kashiwara functions take integer values, but may fail otherwise.
\subsection{}\label{3.5}
Non-integer values of the Kashiwara function are inevitable if $C$
has non-integer entries. Our remedy is to assume that the
entries of $C$ take values in a ring which is a free finitely
generated $\mathbb Z$-module $M = \mathbb Z g_1 + \mathbb Z g_2
+ \ldots + \mathbb Z g_s$, with $g_1 = 1$. If the Cartan matrix
$C$ has real entries, then one might expect to take $M$ to be a
subring of $\mathbb R$. However it is rather more convenient to
allow $M$ to have zero divisors. Given $m \in M$, let $m_i$ denote
its $i^{th}$ component (in which we always omit the multiple of
$g_i$). Define
$$P =\{\lambda \in {\mathfrak h^*}| \alpha^\vee(\lambda) \in M,
\forall \alpha \in \pi\},P^+=\{\lambda \in
P|\alpha^\vee(\lambda)_i\geq0,\forall \alpha \in \pi,\forall i \in
\{1,2,\ldots,s\}\}.$$
Assume that $wt$ takes values in $P$. Further assume that
the $\varepsilon_\alpha$ take values in $M$. Consequently the
Kashiwara function will also take values in $M$. Let ${\bf g}_i$
denote the element in $M$ with $g_i$ in the $i^{th}$ entry and zeros
elsewhere.
Following the above we extend the notion of a crystal in the
following obvious fashion. Define for all $\alpha \in \pi$ and
$i \in \{1,2,\ldots,s\}$ maps $e_{\alpha,i}, f_{\alpha,i}: B
\rightarrow B \cup \{0\}$ satisfying $e_{\alpha,i}b^\prime = b$,
if and only if $f_{\alpha,i}b = b^\prime$ whenever $b, b^\prime$
are non-zero with $e_{\alpha,i}$ (resp. $f_{\alpha,i}$) increasing
(resp. decreasing) weight by ${\bf g}_i\alpha$ and decreasing
$\varepsilon_\alpha$(resp. increasing) by ${\bf g}_i$.
Continue to define the Kashiwara functions as in \ref {3.2}. Its
component in the $i^{th}$ factor will be an integer and as in \ref
{3.3} provides the rule for the insertion of the $e_{\alpha,i},
f_{\alpha,i}$, in a multiple product.
Notice that our algorithm is applied to one simple root at
a time. Thus we can allow ourselves the flexibility of using
different $\mathbb Z$ bases for $M$. In some cases it is even
convenient to allow $M$ itself to depend on the simple root in
question (see Section 10).
\subsection{}\label{3.6}
Return to the case of a classical Cartan matrix. Here Kashiwara
\cite[Example 1.2.6] {Ka2} introduced "elementary" crystals. We
follow \cite [12.3]{J1} in modifying slightly their definition
which is given below.
$$B_\alpha =
\{b_\alpha(m): m \in \mathbb N\}, \quad \alpha \in \pi.$$
Their crystal structure is given by
$$wt\ b_\alpha(m) = -m\alpha, \varepsilon_\alpha(b_\alpha(m))=m,$$
and
$$e_\alpha b_\alpha(m)=\left\{\begin{array}{ll}0& :\ m=0,\\
b_\alpha(m-1)& :\ m > 0,\\
\end{array}\right.
\qquad f_\alpha b_\alpha(m)=b_\alpha(m+1),$$
$$\varepsilon_\beta(b_\alpha(m)) = -{\infty},\ e_\beta(b_\alpha(m)) = f_\beta(b_\alpha(m)) = 0,\quad \textrm{for}\ \beta \neq\alpha.$$
\
Since $B_\alpha$ has linear growth, we refer to it as a one
dimensional crystal.
Let $J$ be a countable sequence $\ldots,\alpha_{i_m},\alpha_{i_{m-1}}, \ldots,\alpha_{i_1}$ of simple roots
in which each element of $\pi$ occurs infinitely many times. Then
we may form the countable Cartesian product $$\ldots\times
B_{i_n}\times B_{i_{n-1}}\times\ldots \times B_{i_1},$$ where
$B_{i_m}$ denotes $B_\alpha$ when $\alpha = \alpha_{i_m}$. We
denote an element $b$ of this product simply by the sequence
$\{\ldots, m_n, m_{n-1},\ldots,m_1\}$ of its entries. Now let
$B_J$ denote the subset in which all but finitely many $m_i$ are
equal to zero. Then the expression for the Kashiwara function is a
finite sum and through it we obtain a crystal structure on $B_J$.
Note that $B_J$ has a distinguished element $b_{\infty}$, in which all
the $m_i$ are equal to zero. It may also be characterized by the
property that $\varepsilon_\alpha(b_{\infty})=0, \forall \alpha
\in\pi$. Set $B_J({\infty})=\mathscr F b_{\infty}$.
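In the simplest case of a single simple root $\alpha$ (so the Cartan matrix is $(2)$ and every factor is $B_\alpha$), the tensor product rule of \ref{3.3} and the orbit $\mathscr F b_{\infty}$ can be computed directly. The sketch below (assuming Python; the encoding of $b$ as the tuple $(m_1,\ldots,m_n)$ of the entries of $b_n\times\cdots\times b_1$ is ours) checks that $e_\alpha f_\alpha = \mathrm{id}$ and that $B_J(\infty)$ reduces to a single string supported on the rightmost factor:

```python
def r(b, k):
    """Kashiwara function r^k for a pure tensor of sl2 elementary crystals:
    eps(b_k) = m_k, wt b_j = -m_j * alpha, alpha^vee(alpha) = 2."""
    return b[k - 1] + 2*sum(b[j] for j in range(k, len(b)))

def f(b):
    """f_alpha enters at the smallest k attaining max_k r^k (at d_alpha)."""
    vals = [r(b, k) for k in range(1, len(b) + 1)]
    t = vals.index(max(vals))            # 0-based position of d_alpha
    return b[:t] + (b[t] + 1,) + b[t + 1:]

def e(b):
    """e_alpha enters at the largest such k (g_alpha); e_alpha b may be 0 (None)."""
    vals = [r(b, k) for k in range(1, len(b) + 1)]
    mx = max(vals)
    t = max(i for i, v in enumerate(vals) if v == mx)
    if b[t] == 0:
        return None
    return b[:t] + (b[t] - 1,) + b[t + 1:]

b_inf = (0, 0, 0)
assert e(b_inf) is None                  # b_inf is annihilated by e_alpha
# f^m b_inf only ever grows the rightmost factor: B_J(infty) = {(m,0,0,...)}
b = b_inf
for m in range(1, 6):
    b = f(b)
    assert b == (m, 0, 0)
# e_alpha f_alpha = id on arbitrary vertices of B_J
for b in [(2, 1, 3), (0, 4, 0), (1, 1, 1)]:
    assert e(f(b)) == b
```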
From Kashiwara's work \cite {Ka1,Ka2} one may immediately deduce
some quite remarkable facts about $B_J({\infty})$ given that $C$
is symmetrizable. These, noted in \ref {3.7} below, can be
extended (\ref {1.6}) to all classical $C$ through the Littelmann
path model. We remark that if $J$ is non-redundant in the sense
that every entry is non-zero for some $b \in B_J({\infty})$, then
the products $s_{i_n}s_{i_{n-1}}\cdots s_{i_1}$ can be taken to be
reduced decompositions of a sequence of elements of $W$.
N.B. Here there is a small but annoying subtlety (see
\cite[3.13]{J3}) which one may only notice when one gets down to
nitty-gritty calculations. In the formulation of Kashiwara a factor
of $B(\infty)$ (of which only $b_{\infty}$ is needed) is carried to the left. This ensures that
the values of the $\varepsilon_\alpha$ stay non-negative on
the elements of $B_J({\infty})$, which in turn is needed for
the properties described in \ref {3.7} to hold.
Alternatively one may add ``dummy'' factors causing some
redundancy. Thus for any $s_\alpha:\alpha \in \pi$ which occurs only finitely
many times in the above sequence one adds to $B_J$
one additional factor of $B_\alpha$ at some place to the left. On
this factor the corresponding entry stays zero for all $b \in B_J({\infty})$.
\subsection{}\label{3.7}
Fix $\alpha \in \pi$. After Kashiwara a crystal $B$ is called
$\alpha$-upper normal if $$\varepsilon_\alpha(b) = \max\{n \mid
e_\alpha^n b \neq 0\},\quad \forall b \in B.$$
A crystal is called upper normal if it is $\alpha$-upper normal
for all $\alpha \in \pi$. For example $B_\alpha$ is $\alpha$-upper
normal; but not upper normal. Though we barely need this we remark
that a crystal $B$ is said to be lower normal if for all $\alpha
\in \pi$ one has
$$\varphi_\alpha(b) = \max\{n \mid f_\alpha^n b \neq 0\},\quad
\forall b \in B,$$where $\varphi_\alpha(b) = \varepsilon_\alpha(b)
+\alpha^\vee(wt\ b)$. A crystal is said to be normal if it is both
upper and lower normal.
Let $B$ be a subset of $B_J$ viewed as
a subgraph by just retaining all edges joining elements of $B$.
Because the weights of $B$ must lie in $-\mathbb N\pi$, the subset
$B^{\mathscr E} : =\{b \in B \mid e_\alpha b = 0,\forall \alpha \in \pi
\}$ is non-empty. Upper normality of $B$ implies that $B^{\mathscr
E} = \{b_\infty \}$ by the remark in \ref {3.6}.
The first remarkable result of Kashiwara is that $B_J({\infty})$ is upper normal.
By the remarks above this immediately implies that $B_J({\infty})$
is a strict subcrystal of $B_J$.
The second remarkable result of Kashiwara is that as a crystal (or
equivalently as a graph) $B_J({\infty})$ is independent of $J$. We
denote it by $B({\infty})$.
The third remarkable result of Kashiwara is that
$$ch B({\infty}) : = \sum_{b \in B({\infty})}e^{wt\ b}= \prod_{{\alpha}\in \Delta^+}(1-e^{-\alpha})^{-m_\alpha},$$
where $\Delta^+$ denotes the set of positive roots of the
corresponding Kac-Moody algebra and $m_\alpha$ the multiplicity of
the $\alpha$ root space. We remark that in order to extend this result
to the non-symmetrizable case we need the Littelmann character
formula for $B({\infty})$, which expresses the latter as an
alternating sum over $W$, together with the corresponding Weyl
denominator formula. The validity of the latter for the
non-symmetrizable case was established independently by Mathieu
\cite {M} and Kumar \cite {Ku}.
\subsection{}\label{3.8}
Suppose now that $C$ admits off-diagonal entries with values in
$M$ as defined in \ref {3.5}. Then we modify the elementary
crystals to take account of the additional elements $e_{\alpha,i},
f_{\alpha,i}$, introduced there. For this we set ${\mathbf g}
=(g_1,g_2,\ldots,g_s)$ and let ${\mathbf m} =
(m^1,m^2,\ldots,m^s)$ denote an arbitrary element of ${\mathbb
N}^s$ setting $$\mathbf g.\mathbf m = \sum_{i=1}^s g_im^i.$$
Now define $$B_\alpha = \{b_\alpha(\mathbf m): \mathbf m \in \mathbb N^s \},$$
given the obvious crystal structure extending \ref {3.6}. In
particular $$wt\ b_\alpha(\mathbf m) = -(\mathbf g.\mathbf
m)\alpha, \quad \varepsilon_\alpha(b_\alpha(\mathbf m))=(\mathbf
g.\mathbf m),$$
$$e_{\alpha,i} b_\alpha(\mathbf m)=\left\{\begin{array}{ll}0& :\ m^i=0,\\
b_\alpha({\mathbf m}-{\mathbf g}_i)& :\ m^i > 0.\\
\end{array}\right.$$
Notice for example that the $f_{\alpha,i}:i=1,2,\ldots,s$, commute on $B_\alpha$.
Via the insertion rules interpreted through \ref {3.5} they also
commute on $B_J$ and hence on $B_J(\infty)$.
Note that $B_\alpha$ is now an $s$-fold tensor product of
one-dimensional crystals, so we call it an $s$-dimensional
crystal.
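A similarly minimal sketch (again our own encoding, assuming that $e_{\alpha,i}$ and $f_{\alpha,i}$ step the $i^{th}$ entry of $\mathbf m\in\mathbb N^s$ by one) makes the asserted commutation of the $f_{\alpha,i}$ on $B_\alpha$ visible:

```python
# A sketch (our own encoding) of the s-dimensional elementary crystal:
# b_alpha(m) with m in N^s and weight -(g.m) alpha.  For the pentagonal
# case of Section 5 one would take g = (1, golden_ratio).

def g_dot_m(g, m):
    """The pairing g.m = sum g_i m^i, so that wt b_alpha(m) = -(g.m) alpha."""
    return sum(gi * mi for gi, mi in zip(g, m))

def f_i(m, i):
    """f_{alpha,i} raises the i-th entry of m by one."""
    return tuple(mj + (j == i) for j, mj in enumerate(m))

def e_i(m, i):
    """e_{alpha,i} lowers the i-th entry, giving 0 (None) if it vanishes."""
    if m[i] == 0:
        return None
    return tuple(mj - (j == i) for j, mj in enumerate(m))

# the f_{alpha,i} commute pairwise on B_alpha, as asserted in 3.8
m = (2, 0)
assert f_i(f_i(m, 0), 1) == f_i(f_i(m, 1), 0) == (3, 1)
```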
It is not a priori obvious that this new $B_J(\infty)$ will still retain all
the good properties described in \ref {3.7}.
\subsection{}\label{3.9}
For each $\alpha \in \pi$, define $\mathbb Z$ linear maps on $P$
by $$s_{\alpha,i}\lambda=\lambda -
\alpha^\vee(\lambda)_ig_i\alpha:i=1,2,\ldots,s,$$ where as before
the subscript $i$ denotes projection onto the $i^{th}$
component of $M$, which we recall has values in $\mathbb Z$. One
checks that
$$s_{\alpha,i}s_{\alpha,j}\lambda = \lambda-\alpha^\vee(\lambda)_jg_j\alpha -
\alpha^\vee(\lambda)_ig_i\alpha+2\delta_{i,j}g_i\alpha^\vee(\lambda)_j,$$
where $\delta$ is the Kronecker delta. Thus the $s_{\alpha,i}$
commute pairwise, are involutions and their product is $s_\alpha$.
This commutation property parallels that noted in \ref {3.8}. Let
$W^a$ denote the group generated by the
$s_{\alpha,i}:\alpha \in \pi,\ i=1,2,\ldots,s$. We call it the
augmented Weyl group. A priori its structure depends on the choice of
generators for $M$. One can ask if choices can be made so that
$W^a$ is a Coxeter group and finite whenever $W$ is finite. We
investigate these questions in the rank 2 case, that is when
$|\pi| = 2$. Apart from the pentagonal case (see Section 5) where
we discovered inadvertently that $W^a$ is the Weyl group for
$\mathfrak {sl}(5)$, our reasoning is somewhat a posteriori.
\section{Rank Two}
\subsection{}\label{4.1}
Fix $J$ as in \ref {3.6} and recall the definition of
$B_J(\infty)$. As noted in \ref {3.7} it may be viewed as a
presentation of the Kashiwara crystal $B(\infty)$ which is in turn
a combinatorial manifestation of a Verma module (or more properly
its $\mathscr O$ dual). As a set $B_J(\infty)$ is completely
determined by its specification as a subset of countably many
copies of $\mathbb N$. We would like to determine this subset
explicitly. This seems to be rather difficult; yet Kashiwara
\cite[Prop. 2.2.3]{Ka2} found an elegant solution which we derive
below by a different method, one also applicable to the pentagonal
crystal.
\subsection{}\label{4.2}
Take $\pi= \{\alpha_1,\alpha_2 \}$. Then the off-diagonal
elements of $C$ are $a: = -\alpha^\vee_1(\alpha_2),a':=
-\alpha^\vee_2(\alpha_1)$, which are integers $\geq 0$. Set $y =
aa'$. Since we may now care about preserving integrality we do
not yet symmetrize $C$ as in \ref {2.1}.
There are just two possible non-redundant choices for $J =
\{\ldots,j_n,j_{n-1},\ldots,j_1\}$. The first is given by
$$j_k=\left\{\begin{array}{ll}1& :\ k \ odd,\\
2& :\ k \ even.\\
\end{array}\right.$$
The second by interchange of $1,2$. We consider just the first.
Suppose $y = 0$. Then one easily checks that $B_J(\infty)=\mathbb N^2$. Assume $y > 0$. Set
$$c_k=\left\{\begin{array}{ll}a& :\ k \ odd,\\
a'& :\ k \ even.\\
\end{array}\right.$$
Define the rational functions $T_n: n \geq 3$ by $T_3(y)= 1$ and
$$T_{n+1}(y) = 1- \frac{1}{yT_n(y)} :\forall n \geq 3.$$
Recalling the conventions of \ref {3.6} let $\mathbf m
=\{\ldots,m_n,m_{n-1},\ldots,m_1\}$
denote an element of $B_J$.
\begin{lemma} One has $\mathbf m \in B_J(\infty)$ if and only if $m_n \leq c_nT_n(y)m_{n-1}, \forall n \geq 3.$
\end{lemma}
\begin{proof} It suffices to show that the subset of
$B_J$ defined by the inequalities on the right hand side of the lemma is $\mathscr E$ stable,
$\mathscr F$ stable and that its only element annihilated by
$\mathscr E$ is $b_{\infty}$.
For $n$ odd $\geq 1$ one has
$$r_{\alpha_1}^{n+2}(\mathbf m)-r_{\alpha_1}^n(\mathbf m) = am_{n+1}- m_{n+2} -
m_n.$$
Recall that if $e_{\alpha_1}$ (resp. $f_{\alpha_1}$) enters $\mathbf m$ at the $n^{th}$ place
then the above expression must be $< 0$ (resp. $\leq 0$).
Assume the inequality of the lemma for $n$ replaced by $n+2$,
whenever $n \geq 1$. Suppose $e_{\alpha_1}$ enters at the $n^{th}$ place. If $m_n = 0$, then
$e_{\alpha_1}\mathbf m = 0$. Otherwise $m_n$ is reduced by one.
Yet $$m_n - 1 \geq am_{n+1} - m_{n+2} \geq a(1 -
T_{n+2}(y))m_{n+1} = \frac{m_{n+1}}{a'T_{n+1}(y)}.$$
It follows that the right hand side of the lemma is satisfied by
$e_{\alpha_1}\mathbf m$. A similar result holds for $n$ even.
This establishes stability under $\mathscr E$. A very similar
argument establishes stability under $\mathscr F$.
Finally assume $e_{\alpha_1}\mathbf m = e_{\alpha_2}\mathbf m = 0$. Suppose $\mathbf m \neq b_{\infty}$
and let $m_n$ be the last non-vanishing entry of $\mathbf m$. One
easily checks that $n \geq 3$. Then the inequalities of the lemma
force $m_i > 0$, for all $i < n$. Assume $n$ even. Then the above
vanishing implies that $e_{\alpha_2}$ goes in at the $(n+2)^{nd}$
place which through the above expression for the differences of
Kashiwara functions gives the contradiction $-m_n \geq 0$. The
case of $n$ odd is similar.
\end{proof}
\subsection{}\label{4.3}
One may easily compute the rational functions $T_n(y)$ for small
$n$. One obtains
$$T_3 =1,\ T_4=\frac{y-1}{y},\ T_5=\frac{y-2}{y-1},\ T_6
=\frac{y^2-3y+1}{y(y-2)},\
T_7=\frac{(y-3)(y-1)}{y^2-3y+1},$$
$$T_8 =\frac{y^3-5y^2+6y-1}{y(y-3)(y-1)},\quad T_9 =\frac{(y-2)(y^2-4y+2)}{y^3-5y^2+6y-1}.$$
One may easily check the
\begin{lemma} If $y$ is real and $ \geq4$, then $1 > T_n(y) > \frac{1}{2},\forall n > 3$.
\end{lemma}
\subsection{}\label{4.4}
The $T_n(y)$ are related to the Chebyshev polynomials $P_m(x)$
defined in \ref {2.2} by the formula
$$T_n(x^2)=\frac{P_{n-2}(x)}{xP_{n-3}(x)}, \forall n > 3. \eqno{(*)}$$
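These identities are easy to check by machine. The script below (our own, not part of the text) implements the recursion of \ref{4.2} over the rationals and tests the table of \ref{4.3}, the bound of \ref{4.3}, the zeros $T_4(1)=T_5(2)=T_7(3)=0$ of \ref{4.5}, and the relation $(*)$; it assumes the normalization $P_0=1$, $P_1=x$, $P_m=xP_{m-1}-P_{m-2}$ for the Chebyshev polynomials of \ref{2.2}, a choice which reproduces the value $P_4(x)=x^4-3x^2+1$ quoted in \ref{4.6}.

```python
from fractions import Fraction

def T(n, y):
    """T_n(y): T_3 = 1 and T_{n+1} = 1 - 1/(y T_n), as in 4.2."""
    t = Fraction(1)
    for _ in range(3, n):
        t = 1 - Fraction(1) / (y * t)
    return t

def P(m, x):
    """Chebyshev-like polynomials, assuming P_0 = 1, P_1 = x and
    P_m = x P_{m-1} - P_{m-2}; then P_4(x) = x^4 - 3x^2 + 1."""
    p0, p1 = 1, x
    for _ in range(m):
        p0, p1 = p1, x * p1 - p0
    return p0

y = Fraction(7)                                # a sample rational avoiding poles
assert T(4, y) == (y - 1) / y
assert T(6, y) == (y * y - 3 * y + 1) / (y * (y - 2))
assert T(4, 1) == T(5, 2) == T(7, 3) == 0      # types A_2, B_2, G_2

# the bound of Lemma 4.3
assert all(Fraction(1, 2) < T(n, 4) < 1 for n in range(4, 12))

# the relation (*):  T_n(x^2) * x * P_{n-3}(x) = P_{n-2}(x)
x = Fraction(5, 2)
assert all(T(n, x * x) * x * P(n - 3, x) == P(n - 2, x)
           for n in range(4, 10))

# numerically, T_6 vanishes at y = g^2 with g the Golden Section
gold = (1 + 5 ** 0.5) / 2
assert abs(T(6, gold * gold)) < 1e-9
```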
\subsection{}\label{4.5}
In applying $(*)$ to Lemma \ref{4.2}, we recall that the case $x =
0$ has been excluded. Moreover if $P_{n-3}(x) = 0$, for a chosen
value of $x$, then by Lemma \ref{4.2} one has $m_{n-1} = 0$ and so
$m_n= 0$ also. Apart from the case $x = 0$ corresponding to $\pi$
of type $A_1 \times A_1$, the remaining cases when $B(\infty)$
lies in a finite tensor product are when $P_{n-2}(x)$ has a zero
for some positive integer value of $y = x^2$. By \ref {4.3} we
must have $y < 4$. Thus there are three such possible values of
$y$, namely $1, 2, 3$ and these respectively are zeros of
$T_4,T_5,T_7$. They correspond to types $A_2,B_2,G_2$.
\subsection{}\label{4.6}
One can ask if real positive non-integer zeros of $P_n(x)$ can
also lead to $B_J(\infty)$ being embedded in a finite tensor
product. The first interesting case is $P_4(x)$, namely the fifth
Chebyshev polynomial in our conventions. One has $$P_4(x) = x^4 -
3x^2 + 1 = (x^2- x - 1)(x^2 + x - 1).$$
The positive roots of this polynomial are the Golden Section $g$ and
its inverse. One may remark that $y = g^2 = 2.618\ldots$ lies
between $2$ and $3$, these respectively corresponding to
types $B_2$ and $G_2$.
From the point of view of the Weyl group
(see \ref {2.3}) which is isomorphic to $\mathbb Z_5\rtimes
\mathbb Z_2$ in this case, we might expect $B_J(\infty)$ to be a
strict subcrystal of a five-fold tensor product of elementary
crystals. As pointed out in \ref {3.4} some modifications are
necessary since the Kashiwara functions will not be
integer-valued. Now $M : = \mathbb Z[g]$ is a free $\mathbb Z$-module of rank 2.
Thus each elementary crystal should itself be a tensor product of
one-dimensional crystals and so ultimately $B_J(\infty)$ should
lie in a ten-fold tensor product of one-dimensional crystals. From
this one may anticipate that the underlying root system should
have ten positive roots coming in five collinear pairs of relative
length $g$. To realize this we shall make two choices which
ultimately may affect the result, namely we choose $g_1 = 1, g_2 =
g$ in \ref {3.5} and the Cartan matrix to have off-diagonal
entries both equal to $-g$. Remarkably the resulting root system
is stable under the augmented Weyl group (\ref {3.9}) itself
isomorphic to $S_5$, the permutation group on five symbols.
\section{The Pentagonal Crystal}
\subsection{}\label{5.1}
Recall the notation and conventions of \ref {2.2} and take the
Cartan matrix to have off-diagonal entries equal to $-g$, where
$g$ is the Golden Section. Choose $J$ as in \ref {4.2} with
$B_\alpha$, $B_\beta$ the elementary crystals defined in \ref
{3.8} with $g_1=1,g_2=g$. We write $e_{\alpha,1}= e_\alpha,\
e_{\alpha,2} =e_{g\alpha}$, and so on. The entries in the $j^{th}$
factor of $B_J$ will be denoted by $\mathbf m_j=(m_j,n_j)$.
\subsection{}\label{5.2}
The aim of this section is to give an explicit description of
$B_J(\infty)$ as a subset of $B_J$ in a manner analogous to \ref
{4.2}. In particular we show that $B_J(\infty)$ lies in a
five-fold tensor product of elementary crystals. We show that as
a crystal it is independent of the two possible non-redundant
choices of $J$. We denote the resulting crystal by $B(\infty)$. We
show that the formal character of $B(\infty)$ has ten factors each
corresponding to the ten positive roots alluded to in \ref {4.6}.
\subsection{}\label{5.3}
We may regard $B_J$ as a repeated tensor product of the crystal
$B_\beta \otimes B_{g\beta} \otimes B_\alpha \otimes B_{g\alpha}$
defined via \ref {3.6}, \ref {3.7}. Set $\mathbf m = (m,n)$. A given
element $b\in B_J$ is given by a finite sequence $({\mathbf
m}_j,\ldots,{\mathbf m}_1): =
(m_j,n_j,m_{j-1},n_{j-1},\ldots,m_1,n_1)$ of non-negative
integers, though $j$ may be arbitrarily large.
Recall \ref {3.5} that the Kashiwara functions take values in
${\mathbb Z}\oplus{\mathbb Z}g$. Their components in the first
(resp. second) factor will be denoted by $r_\alpha^j,r_\beta^j$
(resp. $r_{g \alpha}^j,r_{g \beta}^j$). These integers determine
the places in $b$ at which the crystal operators enter via the
rules $i), ii)$ of \ref {3.3}. As in \ref {4.2}, it is only
certain differences that matter.
Take $b =({\mathbf m}_j,\ldots,{\mathbf m}_1)$ defined as above.
For $j$ odd $\geq 1$, we obtain $r_\alpha^{j+2}(b)- r_\alpha^j(b)$
(resp. $r_{g\alpha}^{j+2}(b) - r_{g\alpha}^j(b))$ as the
coefficient of 1 (resp. $g$) in the expression $$g(\mathbf
m_{j+1}.\mathbf g)-(\mathbf m_{j+2}.\mathbf g) - (\mathbf
m_j.\mathbf g) = g(m_{j+1} + gn_{j+1}) - (m_{j+2}+ gn_{j+2}) -
(m_j + gn_j ).$$
Then the identity $g^2 = g + 1$ gives
$$r_\alpha^{j+2}(b)-r_\alpha^j(b) = n_{j+1} - m_{j+2} - m_j, \quad r_{g\alpha}^{j+2}(b) -
r_{g\alpha}^j(b) = m_{j+1} + n_{j+1} - n_{j+2} - n_j.$$
When $j$ is even $\geq 2$, the above formulae still hold but with
$\alpha$ replaced by $\beta$.
\subsection{}\label{5.4}
At first one might expect the exact analogue of \ref {4.2} to hold
with the inequality $$(\mathbf m_j.\mathbf g)\leq
gT_j(g^2)(\mathbf m_{j-1}.\mathbf g),\eqno{(*)}$$ suitably
interpreted. Since $T_6(g^2) = 0$ this would give $B_J(\infty)$
to lie in a five-fold tensor product of two dimensional crystals.
The true solution is more complex. It is given by the following
\begin{prop} One has $({\mathbf m}_j,\ldots,{\mathbf m}_1) \in B_J(\infty)$ if and only if
$(i) \ m_k, n_k = 0 : k \geq 6.$
$(ii) \ m_3 = n_2-u,n_3 = m_2 + n_2 - v : u,v \geq 0.$
$(iii) \ m_4 = n_2 - v - s, n_4 = m_2 + n_2 - u - v - t :$
\quad \quad $u + t \geq 0, v + s \geq 0, v + t \geq 0 , s + t + v \geq
0.$
$(iv) \ m_5 = m_2 - v - t - a, n_5 = n_2 - u - v - s - t - a':$
\quad \quad $v + t + a \geq 0, u + t + a' \geq 0, s + v + t + a \geq 0,$
\quad \quad $s + v + t + a' \geq 0, s + t + v + a + a' \geq
0.$
\end{prop}
Remark 1. It is implicit that $m_k,n_k\geq 0$, for all $k \in
\mathbb N^+$, and this gives some additional inequalities.
Remark 2. Recall that $T_3(g^2) = 1$. Interpret $m + gn \leq m' +
gn'$ to mean $m \leq m', n \leq n'$. Then $(ii)$ can be
interpreted as $(*)$ for k = 3. However $(iii)$ and $(iv)$
cannot be similarly interpreted.
Remark 3. Set $b = ({\mathbf m}_5,\ldots,{\mathbf m}_1)$. Notice that
$$r_\beta^6(b)-r_\beta^4(b) = n_5 - m_4 = - (u + t + a')\leq 0,$$
$$r_{g\beta}^6(b) - r_{g\beta}^4(b) = m_5 + n_5 - n_4 = - (v + s + t + a + a') \leq0.$$
These imply $(i)$ above.
Remark 4. In \ref {5.9} we describe an algorithm giving these
inequalities.
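The arithmetic behind Remarks 1 and 3 is mechanical and can be confirmed by brute force. The script below (our own; \texttt{a1} stands for $a'$, and $m_1,n_1$ do not enter) substitutes the parametrization $(ii)$--$(iv)$ into the difference formulas of \ref{5.3} at $j=4$, where $m_6=n_6=0$:

```python
# Brute-force check (our own script) of Remark 3: substitute (ii)-(iv) of
# Proposition 5.4 into r_beta^6 - r_beta^4 = n5 - m6 - m4 and
# r_{g beta}^6 - r_{g beta}^4 = m5 + n5 - n6 - n4, with m6 = n6 = 0.
from itertools import product

def entries(m2, n2, u, v, s, t, a, a1):
    """(m3,n3,m4,n4,m5,n5) from (ii)-(iv) of 5.4; a1 stands for a'."""
    m3, n3 = n2 - u, m2 + n2 - v
    m4, n4 = n2 - v - s, m2 + n2 - u - v - t
    m5, n5 = m2 - v - t - a, n2 - u - v - s - t - a1
    return m3, n3, m4, n4, m5, n5

for vals in product(range(-1, 2), repeat=8):
    m2, n2, u, v, s, t, a, a1 = vals
    m3, n3, m4, n4, m5, n5 = entries(*vals)
    assert n5 - m4 == -(u + t + a1)
    assert m5 + n5 - n4 == -(v + s + t + a + a1)
```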
\subsection{}\label{5.5}
The proof of Proposition \ref {5.4} follows exactly the same path
as the proof of Lemma \ref {4.2}, as do also the remaining
assertions in \ref {5.2}. However verification of the details
should only be attempted by a certified masochist. We give a few
vignettes from the proof. These illustrate the ubiquitous nature
of the inequalities.
\subsection{}\label{5.6}
Define $B_J(\infty)$ by the inequalities on the right hand side of
Proposition \ref{5.4}. We shall first illustrate stability under
$\mathscr E$ in the more tricky cases.
Take $b =({\mathbf m}_j,\ldots,{\mathbf m}_1)$ as before and
suppose that $e_\alpha$ enters $b$ at the third place. This
implies in particular that
$$0 > r_\alpha^5(b)- r_\alpha^3(b)= n_4 - m_5 - m_3 = a.$$
Moreover if $e_\alpha b \neq 0$, then $m_3 \geq 1$ and this
insertion decreases $m_3$ by 1. The latter can be achieved
without changing the remaining entries in $b =({\mathbf
m}_j,\ldots,{\mathbf m}_1)$ by increasing $u$ and $a$ by 1 and
decreasing $t$ by 1. Then the only inequalities for the new
variables that might fail are those involving $t$ but neither $a$
nor $u$. One is thus reduced to the expressions $v + t, v + t + s
+ a', s + t + v$. Yet in the old variables all these expressions
are $> 0$ in view of the fact that $a + v + t, a + v + t + s + a',
a + s + t + v$ are all non-negative, whilst as we have seen above
$a < 0$.
Suppose as a second example that $e_\beta$ enters $b$ in the
fourth place. This implies in particular that
$$0 > r_\beta^6(b) - r_\beta^4(b)= n_5- m_6 - m_4 = -(u + t +
a').$$
Moreover if $e_\beta b\neq 0$, then $m_4 \geq 1$ and the
insertion of $e_\beta$ decreases $m_4$ by 1. The latter can be
achieved without changing the remaining entries in $b =({\mathbf
m}_j,\ldots,{\mathbf m}_1)$ by increasing $s$ by 1 and decreasing
$a'$ by 1. Then the only inequalities for the new variables that
might fail are those involving $a'$ but not $s$. This is just $u +
t + a'$ which is strictly positive by the above.
\subsection{}\label{5.7}
Retain the above conventions. We illustrate stability under
$\mathscr F$ in some of the more tricky situations.
Suppose that $f_\alpha$ enters $b$ at the third place. This
implies in particular that
$$0 < r_\alpha^3(b) - r_\alpha^1(b) = n_2 - m_3-m_1 = u - m_1,$$
and so $u> m_1 \geq 0.$
Moreover this insertion increases $m_3$ by 1. The latter can be
achieved without changing the remaining entries in $b =({\mathbf
m}_j,\ldots,{\mathbf m}_1)$ by decreasing $u$ and $a$ by 1 and
increasing $t$ by 1. One easily checks that all inequalities of
Proposition \ref {5.4} are preserved.
As a second example suppose that $f_{g\beta}$ enters $b$ at the
fourth place. This implies in particular that
$$0 < r_{g\beta}^4(b) - r_{g\beta}^2(b) = m_3 + n_3 - n_4- n_2 =
t.$$
Moreover this insertion increases $n_4$ by 1. The latter can be
achieved without changing the remaining entries in $b =({\mathbf
m}_j,\ldots,{\mathbf m}_1)$ by decreasing $t$ by 1 and increasing
$a$ and $a'$ by 1. Since already $u,v \geq 0$ and $t > 0$, it
easily follows that all inequalities are preserved.
\subsection{}\label{5.8}
Retain the conventions of \ref {5.6}. To complete the proof of
Proposition \ref {5.4} it remains to show that
$B_J(\infty)^{\mathscr E} = \{b_\infty \}$. This obtains from the
following lemma which also implies that $B_J(\infty)$ is upper
normal.
\begin{lemma} For all $b \in B_J(\infty)$, one has
(i) $e_\alpha b = 0$, if and only if $a \geq 0, a + u - m_1 \geq
0, m_5 = 0$,
(ii) $e_{g\alpha}b = 0$, if and only if $a'\geq 0, a' + v - n_1
\geq 0, n_5 = 0$,
(iii) $e_\beta b = 0$, if and only if $s \geq 0, u + t + a' = 0$,
(iv) $e_{g\beta}b = 0$, if and only if $t \geq 0, a + a' + s + t
+ v = 0$.
\end{lemma}
\begin{proof} Suppose $e_\alpha b = 0$. This means that there exists $j \in \mathbb N$ such that
$e_\alpha$ enters $b$ at the $(2j + 1)^{th}$ place and that $m_{2j+1}=0$. One has
$$(r_\alpha^5,r_\alpha^3,r_\alpha^1)= (m_5, m_5 - a, m_5 - a + m_1 -
u).$$
Now $u \geq 0$, so $e_\alpha$ cannot enter at the first place
since this would mean that $m_1 = 0$. Suppose $a < 0$. Then
$e_\alpha$ enters $b$ at the third place implying that $m_3 =
n_2- u = 0$. Since $n_5 \geq 0$, we obtain $v + s + t + a' \leq
0$, and since $a < 0$, this contradicts $(iv)$ of \ref {5.4}.
Thus $a \geq 0$ and so $e_\alpha$ enters at the fifth place. All
this gives $(i)$. The proof of $(ii)$ is practically the same.
Suppose $e_\beta b = 0$. This means that there exists $j \in
\mathbb N^+$ such that $e_\beta$ enters $b$ at the $2j^{th}$ place
and that $m_{2j} = 0$. One has
$$(r_\beta^6, r_\beta^4,r_\beta^2) = (0, u + t +a', u + t + a' -
s).$$
Recall that $u + t + a' \geq 0$. Suppose $s < 0$. Then $e_\beta$
enters $b$ at the second place implying $m_2 = 0$. Since $m_5
\geq 0$, we obtain $v + t + a \leq 0$, and since $s < 0$, this
contradicts $(iv)$ of \ref {5.4}. Thus $s \geq 0$. If $u + t + a' >
0$, then $0 = m_4 = n_2 - v - s$. Yet $n_5 \geq 0$, and this
forces a contradiction. All this gives $(iii)$. The proof of
$(iv)$ is practically the same.
\end{proof}
\subsection{}\label{5.9}
Let $J'$ be the second non-redundant sequence described in \ref
{4.2}. We sketch briefly how to obtain a crystal isomorphism
$\varphi:B_J(\infty)\iso B_{J'}(\infty)$.
Let us use $F_\alpha^{\mathbf m}$ to denote
$f_\alpha^mf_{g\alpha}^n$, when $\mathbf m=(m,n)$ and $E_\alpha
b=0$ to mean $e_\alpha b = e_{g\alpha}b = 0$. Similar meanings
are given to these expressions when $\beta$ replaces $\alpha$.
Then $f:=F_\alpha^{\mathbf m_1}F_\beta^{\mathbf
m_2}F_\alpha^{\mathbf m_3}F_\beta^{\mathbf m_4}F_\alpha^{\mathbf
m_5}$ is said to be in normal form if
$$E_\alpha F_\beta^{\mathbf m_2}F_\alpha^{\mathbf m_3}F_\beta^{\mathbf m_4}F_\alpha^{\mathbf
m_5}b_{\infty}=0,E_\beta F_\alpha^{\mathbf m_3}F_\beta^{\mathbf
m_4}F_\alpha^{\mathbf m_5}b_{\infty}=0,\ldots, E_\alpha
b_{\infty}=0.$$
Let $\mathscr F_0 \subset\mathscr F$ denote the subset of elements
having normal form. Obviously each element of $B_J(\infty)$
can be written as $fb_\infty$, for some unique $f \in \mathscr
F_0$. (Nevertheless it should be stressed that the definition of
normal form depends on a choice of reduced decomposition.)
Take $f$ as above with $\mathbf m_j = (m_j,n_j):j \in \mathbb
N^+$, given by the expressions in $(i)-(iv)$ of \ref {5.4}. One
checks that the condition for $f$ to have normal form exactly
reproduces the inequalities of Proposition \ref {5.4}. Moreover
$fb_\infty$ takes exactly the form of the element $b$ occurring in
the proposition, namely $(\mathbf m_5,\ldots,\mathbf m_1)$. In the
classical Cartan case this is essentially Kashiwara's algorithm
(cf. \cite[see after Cor. 2.2.2]{Ka2}) for computing $B_J(\infty)$;
but it is not too effective as each step becomes increasingly
arduous. Again in our more general set-up it was not a priori
obvious that such a procedure would work. The explanation of its
success obtains from Section 7.
\subsection{}\label{5.10}
Now let $b_\infty'$ denote the canonical generator of
$B_{J'}(\infty)$ of weight zero. We define the required crystal
map $\varphi$ of \ref {5.9} by setting $\varphi(fb_\infty) =
fb_\infty'$, for all $f \in \mathscr F_0$. One checks that $f$
still has normal form with respect to $b_\infty'$. (This is a
slightly different calculation to that given in \ref {5.9}; but
still uses exactly the same inequalities of Proposition \ref
{5.4}.) It follows that $\varphi$ is injective. Repeating this
procedure with $J$ and $J'$ interchanged we obtain crystal
embeddings $B_J(\infty)\hookrightarrow
B_{J'}(\infty)\hookrightarrow B_J(\infty)$. Since these preserve
weight and weight subsets have finite cardinality, they must both
be isomorphisms, as required.
\subsection{}\label{5.11}
Recall \ref {3.5}. Then just as in \cite[5.3.13]{J2}, we may
define a singleton highest weight crystal $S_\lambda = \{
s_\lambda \} $ of weight $\lambda$ with
$\varepsilon_\alpha(s_\lambda) = -\alpha^\vee(\lambda )$, for all
$\lambda \in P^+$ and one checks that $B(\lambda) = \mathscr
F(b_\infty\otimes s_\lambda)$ is a strict subcrystal of $B(\infty)\otimes
S_\lambda$. The upper normality of $B(\infty)$ implies that
$B(\lambda)$ is a normal crystal (see \cite[5.2.1]{J1}, for
example) and moreover its character can be calculated in a manner
analogous to the case of a classical Cartan matrix (see
\cite[6.3.5]{J1}, for example) using appropriate Demazure operators.
This leads to the character formula for $B(\infty)$ alluded to in
\ref {5.2}. However one may now begin to suspect that $B(\infty)$
is itself just a Kashiwara crystal for a classical Cartan matrix
of larger rank. This is what we shall show in Sections 6 and 7.
\section{The extended Weyl group for the Pentagonal Crystal}
\subsection{}\label{6.1}
Take $\pi$ and $C$ as in \ref {5.1}. One checks that $W$ applied
to $\pi$ generates as a single orbit the set $\Delta_s$, called
the set of short roots, given by the formulae below
$$\Delta_s=\Delta_s^+\sqcup\Delta_s^-, \Delta_s^-=-\Delta_s^+,\Delta_s^+=
\{\alpha,\alpha+g\beta,g\alpha+g\beta,g\alpha+\beta,\beta\}.$$
Set $\Delta_\ell=g\Delta_s$, called the set of long roots. They
are collinear to the short roots. Finally set
$\Delta=\Delta_s\sqcup\Delta_\ell$.
\subsection{}\label{6.2}
At first sight it may seem that the introduction of $\Delta_\ell$
is quite superfluous, yet it is rather natural from the point of
view of the theory of crystals. Indeed we have already noted
that we need both short and long simple roots to interpret the
Kashiwara tensor product and then to obtain $B(\infty)$. Again
the analysis of \ref {5.11} implies that
$$ch B(\infty) = \prod_{\gamma\in \Delta^+}(1-e^{-\gamma})^{-1}.$$
As promised we shall eventually prove the above formula by
slightly different means. However for the moment we note that
it may in principle be obtained from the inequalities in \ref
{5.4}. Here we have ten parameters whose values define a subset of
$\mathbb N^{10}$. It is possible to break this subset into smaller
``sectors'' in which each of the corresponding terms becomes a
geometric progression. In general this is a rather impractical
procedure and indeed even in the present case the number of
sectors runs into several thousand. However one can at least
deduce that $ch B(\infty)$ must be a rational function of the
variables $e^\alpha,e^{g\alpha},e^\beta,e^{g\beta}$. Moreover
using \ref {5.8}, one may deduce that the subset $B^\alpha$ of
$B_J(\infty)$ defined by the condition $\mathbf m_1 = 0$ has an
$s_\alpha$ invariant character. Yet $B_J(\infty)\cong
B^\alpha\otimes B_\alpha$, and so $ch B_J(\infty)$ is invariant
with respect to an obvious translated action of $s_\alpha$.
Through the isomorphism $B_J(\infty)\iso B_{J'}(\infty)$,
described in \ref {5.9}, \ref {5.10}, it follows that the common
crystal admits a formal character which is invariant under the
translated action of $W$. A slightly more refined analysis shows
it to be invariant under a translated action of the augmented Weyl
group $W^a$.
In the present ``finite'' situation it is more efficient to use the
Demazure operators as alluded to in \ref {5.11}. However in general
there is no known combinatorial recipe to obtain an expression for
$ch B(\infty)$ as an (infinite) product. This is mainly because
there is no known meaning to imaginary roots, outside their Lie
algebraic definition. In the non-symmetrizable Borcherds case such
a product formula is not known, even though $B(\infty)$ can
be defined and its formal character calculated \cite[Thm. 9.1.3]
{JL}.
\subsection{}\label{6.3}
Recall \ref {3.9}. It is rather easy to compute the image of a
generator of $W^a$ on a given root, always remembering however
that its elements are only $\mathbb Z$-linear maps. We need
only do this for the $s_{\alpha,i} : i = 1,2$, since the action of
the remaining elements can be obtained by $\alpha,\beta$
interchange. The result is given by the following
\begin{lemma} \
(i) $s_{\alpha,1}$ stabilizes $g\alpha,\beta, g\alpha + \beta$,
and interchanges elements in each of the pairs $(\alpha,-\alpha);
(\alpha+ g\beta,g\beta);(g\alpha + g\beta, g^2\alpha
+g\beta);(g\alpha+ g^2\beta, g^2(\alpha + \beta))$.
(ii) $s_{\alpha,2}$ stabilizes $\alpha,g^2(\alpha+\beta), g\alpha
+ g^2\beta$, and interchanges elements in each of the pairs
$(g\alpha,-g\alpha); (\alpha+ g\beta,g^2\alpha+g\beta);(g\alpha +
\beta, \beta);(g\beta, g(\alpha + \beta))$.
\end{lemma}
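The lemma can be verified mechanically. In the sketch below (our own coordinates, not the paper's) a root $(p+qg)\alpha+(r+sg)\beta$ is encoded as the tuple $(p,q,r,s)$; working out the recipe of \ref{3.9} with $g^2=g+1$, both off-diagonal Cartan entries equal to $-g$, and $g_1=1$, $g_2=g$ as in \ref{5.1} yields the four $\mathbb Z$-linear involutions used here.

```python
# Check (our own encoding) of Lemma 6.3.  A root (p + q g) alpha +
# (r + s g) beta is the tuple (p, q, r, s); with g^2 = g + 1 the recipe
# of 3.9 reduces to the Z-linear involutions below.

def s_a1(v): p, q, r, s = v; return (s - p, q, r, s)        # s_{alpha,1}
def s_a2(v): p, q, r, s = v; return (p, r + s - q, r, s)    # s_{alpha,2}
def s_b1(v): p, q, r, s = v; return (p, q, q - r, s)        # s_{beta,1}
def s_b2(v): p, q, r, s = v; return (p, q, r, p + q - s)    # s_{beta,2}

def neg(v): return tuple(-x for x in v)

al, g_al = (1, 0, 0, 0), (0, 1, 0, 0)            # alpha, g alpha
be, g_be = (0, 0, 1, 0), (0, 0, 0, 1)            # beta, g beta
al_gbe, gal_gbe = (1, 0, 0, 1), (0, 1, 0, 1)     # alpha + g beta, g(alpha+beta)
g2al_gbe = (1, 1, 0, 1)                          # g^2 alpha + g beta
gal_be, gal_g2be = (0, 1, 1, 0), (0, 1, 1, 1)    # g alpha + beta, g alpha + g^2 beta
g2al_g2be = (1, 1, 1, 1)                         # g^2 (alpha + beta)

# (i): s_{alpha,1} stabilizes g alpha, beta, g alpha + beta ...
assert all(s_a1(v) == v for v in (g_al, be, gal_be))
# ... and interchanges the listed pairs
for x, y in ((al, neg(al)), (al_gbe, g_be),
             (gal_gbe, g2al_gbe), (gal_g2be, g2al_g2be)):
    assert s_a1(x) == y and s_a1(y) == x

# (ii): s_{alpha,2} stabilizes alpha, g^2(alpha+beta), g alpha + g^2 beta ...
assert all(s_a2(v) == v for v in (al, g2al_g2be, gal_g2be))
for x, y in ((g_al, neg(g_al)), (al_gbe, g2al_gbe),
             (gal_be, be), (g_be, gal_gbe)):
    assert s_a2(x) == y and s_a2(y) == x

# the product s_{alpha,1} s_{alpha,2} acts as s_alpha, e.g. on alpha, beta
assert s_a1(s_a2(al)) == neg(al) and s_a1(s_a2(be)) == gal_be
```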
\subsection{}\label{6.4}
We see that $s_{\alpha,1}$ can interchange long and short roots.
Since $\Delta$ is just two $W$-orbits it follows that $\Delta$ is
a single $W^a$-orbit. The stability of $\Delta$ under $W^a$ came
to us as a surprise. Its truth depends on the particular basis we
have chosen for $M$.
\subsection{}\label{6.5}
A further surprise is the following. Let $\pi=
\{\alpha_1,\alpha_2,\alpha_3,\alpha_4\}$ be the simple roots of a
system of type $A_4$ in the standard Bourbaki \cite {B} labelling.
Make the identifications $\alpha_1=\alpha,\alpha_2 =
g\beta,\alpha_3=g\alpha,\alpha_4=\beta$. Then the relations in
\ref {6.3} allow us to make the further identifications
$s_{\alpha_1} = s_{\alpha,1},s_{\alpha_2}= s_{\beta,2},
s_{\alpha_3} = s_{\alpha,2},s_{\alpha_4} = s_{\beta,1}$. We
conclude that $W^a$ is actually the Weyl group $W(A_4)$ of the
system of type $A_4$ with the augmented set of simple roots
$(\alpha,g\beta,g\alpha,\beta)$ being the set of simple roots of
this larger rank system. However notice that $s_\alpha$ (resp.
$s_\beta$) is not a reflection in this larger system; but rather a
product of commuting reflections, namely:
$s_{\alpha_1}s_{\alpha_3}$ (resp. $s_{\alpha_2}s_{\alpha_4})$.
Precisely what we obtain is the following
\begin{lemma}
\
(i) \ The map
$\{s_{\alpha,1},s_{\beta,2},s_{\alpha,2},s_{\beta,1}\}
\rightarrowtail \{
s_{\alpha_1},s_{\alpha_2},s_{\alpha_3},s_{\alpha_4} \}$ extends to
an isomorphism $\psi$ of Coxeter groups from $W^a$ onto $W(A_4)$. In
particular
$$\psi(s_\alpha)=s_{\alpha_1}s_{\alpha_3},\quad \psi(s_\beta)=s_{\alpha_2}s_{\alpha_4}.$$
(ii) \ Set $\widehat{W}= W^a \iso W(A_4)$. The map
$\{\alpha,g\beta,g\alpha,\beta \} \rightarrowtail
\{\alpha_1,\alpha_2,\alpha_3,\alpha_4 \}$ extends to an
isomorphism of $\mathbb Z \widehat{W}$ modules.
\end{lemma}
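The isomorphism can also be confirmed by brute force. In the script below (our own, not the paper's) the generators $s_{\alpha,1},s_{\beta,2},s_{\alpha,2},s_{\beta,1}$ are written as $4\times 4$ integer matrices in the ordered basis $(\alpha,g\alpha,\beta,g\beta)$ of $M\pi$, in the Bourbaki order of the simple roots of $A_4$; the assertions check the $A_4$ Coxeter relations and that the generated group has order $120=|W(A_4)|=|S_5|$.

```python
# Brute-force confirmation (our own script) of Lemma 6.5.  The maps act
# Z-linearly on coordinates (p, q, r, s) for (p + q g) alpha + (r + s g) beta.

def matmul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(4))
                       for j in range(4)) for i in range(4))

I4 = tuple(tuple(int(i == j) for j in range(4)) for i in range(4))

S1 = ((-1, 0, 0, 1), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1))  # s_{alpha,1}
S2 = ((1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (1, 1, 0, -1))  # s_{beta,2}
S3 = ((1, 0, 0, 0), (0, -1, 1, 1), (0, 0, 1, 0), (0, 0, 0, 1))  # s_{alpha,2}
S4 = ((1, 0, 0, 0), (0, 1, 0, 0), (0, 1, -1, 0), (0, 0, 0, 1))  # s_{beta,1}
gens = (S1, S2, S3, S4)

def order(A):
    n, B = 1, A
    while B != I4:
        B, n = matmul(B, A), n + 1
    return n

# A_4 Coxeter relations: involutions, m(i,i+1) = 3, m(i,j) = 2 otherwise
assert all(order(S) == 2 for S in gens)
for i in range(4):
    for j in range(i + 1, 4):
        assert order(matmul(gens[i], gens[j])) == (3 if j == i + 1 else 2)

# the generated group has order 120 = |W(A_4)| = |S_5|
group, frontier = {I4}, [I4]
while frontier:
    new = []
    for A in frontier:
        for G in gens:
            B = matmul(A, G)
            if B not in group:
                group.add(B)
                new.append(B)
    frontier = new
assert len(group) == 120
```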
\textbf{Remark}. Thus the root diagram of $A_4$ can be drawn on
the plane. This is the beautiful picture presented in Figure 1,
which illustrates a tiling using two triangles based on the Golden
Section. It not only admits pentagonal symmetry coming from $W$,
but also a further ``hidden'' symmetry coming from $\widehat{W}$.
Similar considerations apply to weight diagrams for $A_4$. In
particular the defining (five-dimensional) module for $A_4$ gives
rise to what we call a zig-zag triangularization of the regular
pentagon, see Figure 2. The fact that the longer to shorter length
ratio of the triangles obtained is the Golden Section now follows
from the above lemma! Moreover the angles in the triangles are
thereby easily computed. In Section 8 we describe the
generalization of the above lemma to type $A_{2n}$ and its
consequences for aperiodic tiling based on the $n$ triangles
obtained from appropriate triangularizations of the regular
$(2n+1)$-gon (see Figure 2).
\subsection{}\label{6.6}
There is a relation of the Golden Section to the dodecahedron
which goes back to ancient times. As a consequence it is not too
surprising that our root system can be obtained from the vertices
of the dodecahedron by projection onto the plane of one of the
faces (see Figure 1). This is an orthogonal projection as the
remaining co-ordinate is normal to the face in question. Let us
describe the co-ordinates of the vertices
$\{\alpha_1,\alpha_2,\alpha_3,\alpha_4\}$ of the dodecahedron which
thus project onto $\{\alpha,g\beta,g\alpha,\beta\}$.
The planar co-ordinates of $\alpha$ and $\beta$ which generate our
pentagonal system can be taken to be
$$\alpha =(1,0),\quad \beta = (-g/2,\sqrt{1-g^2/4}).$$ View these
as two vertices, called $\alpha_1$ and $\alpha_4$, of one face
$f_1$ of the dodecahedron. Now project the dodecahedron onto
$f_1$. Of course this takes all the vertices of the dodecahedron
onto the plane defined by $f_1$. Exactly one of these becomes
collinear to $\alpha$ (resp. $\beta$) and this with relative
length exactly $g$. Call it $\alpha_3$ (resp. $\alpha_2$). Now the
perpendicular distance of $f_1$ to its opposite face $f_2$ is
exactly $(g+1)$. We take this normal direction to these two faces
as defining a third co-ordinate fixed to be zero at the centre of
the dodecahedron. Thus the value of this co-ordinate on $f_1$
equals $(g+1)/2$, whilst its value on $\alpha_3$ and on $\alpha_2$
turns out to be $(g-1)/2$. We conclude that the co-ordinates of
these four vertices of the dodecahedron are given by
$$\alpha_1=(1,0,(g+1)/2),\quad \alpha_2=(-g^2/2,g\sqrt{1-g^2/4},(g-1)/2),$$
$$\alpha_3 =(g,0,(g-1)/2),\quad \alpha_4 = (-g/2,\sqrt{1-g^2/4},(g+1)/2).$$
These co-ordinates allow one to calculate the scalar products
between the above vertices. One finds that
$(\alpha_i,\alpha_i)=3(g+2)/4$, for all $i=1,2,3,4$, whilst
$(\alpha_1,\alpha_2)=(\alpha_3,\alpha_4) = -1/2 -g/4
=-(\alpha_1,\alpha_4)$ and
$(\alpha_1,\alpha_3)=(\alpha_2,\alpha_4)=5g/4
=-(\alpha_2,\alpha_3)$. From this one obtains a further fact:
there is no orthogonal projection of the set $\pi$ of simple roots
of $A_4$ onto these vertices of the dodecahedron. Indeed the above
vectors all have the same length, as do their pre-images
$\hat{\alpha}_i:i=1,2,3,4$. Thus for some $a \in \mathbb R$ the
fourth co-ordinate must be $a$ or $-a$ in each element of $\pi$.
Since $(\hat{\alpha}_1,\hat{\alpha}_3)=0$, this already forces
$a^2=5g/4$ and then that $(\hat{\alpha}_2,\hat{\alpha}_3)=-5g/2$
and $(\hat{\alpha}_1,\hat{\alpha}_1)=2g+3/2$, giving the
contradiction $g=1/2$.
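As a sanity check (our addition, not part of the argument), the following Python fragment verifies the displayed scalar products numerically, assuming only the co-ordinates given above and $g=(1+\sqrt5)/2$, so that $g^2=g+1$.

```python
import math

# Numerical check of the scalar products in 6.6, with g the Golden Section.
g = (1 + math.sqrt(5)) / 2
y = math.sqrt(1 - g ** 2 / 4)          # second co-ordinate of beta

a1 = (1, 0, (g + 1) / 2)
a2 = (-g ** 2 / 2, g * y, (g - 1) / 2)
a3 = (g, 0, (g - 1) / 2)
a4 = (-g / 2, y, (g + 1) / 2)

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# all four vertices have squared length 3(g+2)/4
for v in (a1, a2, a3, a4):
    assert abs(dot(v, v) - 3 * (g + 2) / 4) < 1e-12
assert abs(dot(a1, a2) - (-1 / 2 - g / 4)) < 1e-12
assert abs(dot(a3, a4) - (-1 / 2 - g / 4)) < 1e-12
assert abs(dot(a1, a4) - (1 / 2 + g / 4)) < 1e-12
assert abs(dot(a1, a3) - 5 * g / 4) < 1e-12
assert abs(dot(a2, a4) - 5 * g / 4) < 1e-12
assert abs(dot(a2, a3) + 5 * g / 4) < 1e-12
```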
\section{Beyond the Pentagonal Crystal}
\subsection{}\label{7.1}
The aim of this section is to show that the pentagonal crystal
described above and coming from the Golden Section is in fact a
manifestation of the "ordinary" Kashiwara crystal in type $A_4$
with respect to special reduced decompositions of the longest
element of the Weyl group of the latter. The construction we give
can be immediately generalized to the largest zeros of the
$(2n+1)^{th}$ Chebyshev polynomial, though as we shall see there
is a slight difference when $2n+1$ is not prime. We start with
three easy (read, well-known) facts.
\subsection{}\label{7.2}
For our first easy fact, let
$\pi=\{\alpha_1,\alpha_2,\ldots,\alpha_{2n}\}$ be the set of
simple roots for a system of type $A_{2n}$ in the usual Bourbaki
labelling \cite[Appendix]{B}. Set
$s_i=s_{\alpha_i}:i=1,2,\ldots,2n$. These are Coxeter generators
of the Weyl group for type $A_{2n}$, which we view as
elementary permutations. Set $s_\alpha = s_1s_3\ldots s_{2n-1},
s_\beta = s_{2n}s_{2n-2}\ldots s_2$, which are involutions. Take
$i \in \{1,2,\ldots,n\}$. One checks that $s_\alpha
s_\beta(2i+1)=s_\alpha(2i)=2i-1$, whilst $s_\alpha
s_\beta(2i)=s_\alpha(2i+1)=2i+2$ for $i<n$, with $s_\alpha
s_\beta(1)=2$ and $s_\alpha s_\beta(2n)=2n+1$. Thus $s_\alpha s_\beta$ is a
$(2n+1)$-cycle. We conclude that $\langle s_\alpha,s_\beta \rangle$ is the
dihedral Coxeter group, isomorphic to $\mathbb Z_{2n+1} \rtimes \mathbb Z_2$ as
an abstract group. Its unique longest element $w_0$ has two
reduced decompositions namely $(s_\alpha s_\beta)^n s_\alpha$ and
$s_\beta(s_\alpha s_\beta)^n$. Observe further that $s_\beta
(s_\alpha s_\beta)^n(2i) =s_\beta (2(n+1-i)+1) = 2(n+1)-2i$,
whilst $s_\beta (s_\alpha s_\beta)^n(2i+1)=s_\beta
(2(n-i))=2n+2-(2i+1)$. Thus $w_0$ is also the unique longest
element of the Weyl group in type $A_{2n}$. Moreover substituting
the above expressions for $s_\alpha, s_\beta$, it follows that
$w_0$ has length $\leq (2n+1)n$ and so the resulting expressions
for $w_0$ as products of elementary permutations are again reduced
decompositions in the larger Coxeter group.
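These permutation computations can be checked mechanically; the following Python sketch (our addition, with ad hoc helper names) verifies for small $n$ that $s_\alpha s_\beta$ is a $(2n+1)$-cycle and that $(s_\alpha s_\beta)^ns_\alpha$ sends $i$ to $2n+2-i$.

```python
# Check of 7.2: permutations act on {1,...,2n+1}; s_i swaps i and i+1.
def transposition(i, m):
    p = list(range(m + 1))          # index 0 unused
    p[i], p[i + 1] = p[i + 1], p[i]
    return p

def compose(p, q):                   # (p o q)(x) = p(q(x))
    return [p[q[x]] for x in range(len(q))]

for n in range(1, 6):
    m = 2 * n + 1
    s = {i: transposition(i, m) for i in range(1, m)}
    s_alpha = list(range(m + 1))
    for i in range(1, m, 2):         # s_1 s_3 ... s_{2n-1} (disjoint, so they commute)
        s_alpha = compose(s_alpha, s[i])
    s_beta = list(range(m + 1))
    for i in range(2, m, 2):         # s_2 s_4 ... s_{2n}
        s_beta = compose(s_beta, s[i])
    c = compose(s_alpha, s_beta)
    # the orbit of 1 under c has full length 2n+1, so c is a (2n+1)-cycle
    orbit, x = set(), 1
    for _ in range(m):
        orbit.add(x)
        x = c[x]
    assert len(orbit) == m
    # w0 = (s_alpha s_beta)^n s_alpha sends i to 2n+2-i
    w0 = s_alpha
    for _ in range(n):
        w0 = compose(c, w0)
    assert all(w0[i] == m + 1 - i for i in range(1, m + 1))
```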
\subsection{}\label{7.3}
Our second easy fact concerns the factorization of the Chebyshev
polynomials $P_{2n}(x):n \in \mathbb N^+$. Define the polynomials
$$Q_n(x):=P_n(x)-P_{n-1}(x):n \in \mathbb N.\eqno (*) $$ These satisfy the
same recurrence relations as the $P_n(x)$; but with different
initial conditions which are now $Q_{-1}=1,Q_0=1$. A few examples
are
$$Q_1=x-1,\ Q_2=x^2-x-1,\ Q_3=x^3-x^2-2x+1,\
Q_4=x^4-x^3-3x^2+2x+1.$$ Of these just $Q_4$ factorizes over
$\mathbb Q$ giving $Q_4=(x-1)(x^3-3x-1)$. It is the first case
when $2n+1$ is not prime.
\begin{lemma} For all $n \in \mathbb N$ one has
$(i)_n\quad Q_n(x)Q_n(-x) = (-1)^nP_{2n}(x),$
$(ii)_n\quad Q_n(x)Q_{n-1}(-x) = (-1)^{n-1}P_{2n-1}(x)+(-1)^n,$
$(iii)_n\quad Q_{n+1}(x)Q_{n-1}(-x) = (-1)^{n+1}P_{2n}(x)+(-1)^nx.$
\end{lemma}
\begin{proof} The cases for which $n=0$ are easily checked. Then one checks
using the recurrence relations for the $P_n$ and $Q_n$ that
$(i)_{n-1}$ and $(ii)_{n-1}$ imply $(ii)_n$, that $(i)_{n-2}$ and
$(ii)_{n-1}$ imply $(iii)_{n-1}$ and finally that $(ii)_n$ and
$(iii)_{n-1}$ imply $(i)_n$.
\end{proof}
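The recurrences make the lemma easy to test by exact polynomial arithmetic. The following Python sketch (our addition; the helper names are ad hoc) checks $(i)_n$, $(ii)_n$ and $(iii)_n$ for $n\leq 12$, assuming the initial conditions $P_{-1}=0$, $P_0=1$ and the recurrence $P_{n+1}=xP_n-P_{n-1}$ of \ref{2.2}.

```python
# Polynomials as ascending coefficient lists over the integers.
def add(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) for i in range(n)]

def neg(a):
    return [-c for c in a]

def mul(a, b):
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] += ai * bj
    return r

def flip(a):                      # a(-x)
    return [c if i % 2 == 0 else -c for i, c in enumerate(a)]

def trim(a):
    while a and a[-1] == 0:
        a = a[:-1]
    return a

N = 12
P = [[0], [1]]                    # P[k] holds P_{k-1}: P_{-1}=0, P_0=1
for k in range(1, 2 * N + 2):
    P.append(add(mul([0, 1], P[k]), neg(P[k - 1])))       # P_k = x P_{k-1} - P_{k-2}
Q = [add(P[k + 1], neg(P[k])) for k in range(0, N + 2)]   # Q[n] = P_n - P_{n-1}

for n in range(1, N + 1):
    # (i)_n
    assert trim(mul(Q[n], flip(Q[n]))) == trim([(-1) ** n * c for c in P[2 * n + 1]])
    # (ii)_n
    lhs = mul(Q[n], flip(Q[n - 1]))
    rhs = add([(-1) ** (n - 1) * c for c in P[2 * n]], [(-1) ** n])
    assert trim(lhs) == trim(rhs)
    # (iii)_n
    lhs = mul(Q[n + 1], flip(Q[n - 1]))
    rhs = add([(-1) ** (n + 1) * c for c in P[2 * n + 1]], [0, (-1) ** n])
    assert trim(lhs) == trim(rhs)
```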
\subsection{}\label{7.4}
Our third easy fact describes when $Q_n(x)$ is irreducible over
$\mathbb Q$.
\begin{lemma} For all $n \in \mathbb N^+$ one has
(i) The roots of $Q_n(x)$ form the set $\{2\cos (2t-1)\pi/(2n+1):
t \in \{1,2,\ldots,n \}\}$,
(ii) If \ $2m+1$ divides $2n+1$, then $Q_m(x)$ divides $Q_n(x)$,
(iii) If \ $2n+1$ is prime, then $Q_n(x)$ is irreducible over
$\mathbb Q$.
\end{lemma}
\begin{proof}
$(i)$. By $(*)$ of \ref {2.2} and $(*)$ of \ref {7.3}, one has
$$(\sin \theta) Q_n(2\cos \theta)= \sin (n+1)\theta - \sin
n\theta.$$ Yet the right hand side vanishes for $\theta =
(2t-1)\pi/(2n+1): t \in \{1,2,\ldots,n\}$, since $\sin
(2n+1)\theta = 0$ and $\cos n\theta = -\cos (n+1)\theta \neq 0$.
Hence $(i)$. Then $(ii)$ follows from $(i)$ by comparison of
roots.
$(iii)$. Set $z=e^{i\theta}$, with $\theta = \pi/(2n+1)$.
By (i) the roots of $Q_n(2x)$ are the real parts of $z^{2t-1} :
t\in \{1,2,\ldots,n\}$. Since $2n+1$ is assumed prime, these are
all primitive $2(2n+1)^{th}$ roots of unity. They are therefore
permuted by the Galois group of $\mathbb Q[z]$ over $\mathbb Q$
and so are their real parts. Hence they cannot satisfy over
$\mathbb Q$ a polynomial equation of degree $<n$. Thus $Q_n(2x)$
is irreducible over $\mathbb Q$ and for the same reason so is
$Q_n(x)$.
\end{proof}
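Part $(i)$ can be confirmed numerically; the short Python check below (our addition) evaluates $Q_n$ at the claimed roots via the recurrence of \ref{2.2}.

```python
import math

# Lemma 7.4(i): Q_n vanishes at 2cos((2t-1)pi/(2n+1)) for t = 1,...,n.
def Q(n, x):
    p_prev, p = 0.0, 1.0              # P_{-1} = 0, P_0 = 1
    for _ in range(n):
        p_prev, p = p, x * p - p_prev  # P_k = x P_{k-1} - P_{k-2}
    return p - p_prev                  # Q_n = P_n - P_{n-1}

for n in range(1, 9):
    for t in range(1, n + 1):
        x = 2 * math.cos((2 * t - 1) * math.pi / (2 * n + 1))
        assert abs(Q(n, x)) < 1e-9

# In particular Q_4(1) = 0, reflecting the factorization Q_4 = (x-1)(x^3-3x-1).
assert abs(Q(4, 1.0)) < 1e-9
```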
\subsection{}\label{7.5}
Return to the notation and hypotheses of 7.2. Let $J$ be the
sequence of simple roots defined by the reduced decomposition
$(s_\alpha s_\beta)^ns_\alpha$ of $w_0$, namely
$J=\{\alpha,\beta,\ldots,\alpha\}$. Set $g = 2\cos( \pi/(2n+1))$
which is the Golden Section if $n=2$. Let $B_j$ denote the
elementary crystal corresponding to the $j^{th}$ entry in $J$ counting
from the right. Thus $B_j$ is of type $\alpha$ (resp. $\beta$) if
$j$ is odd (resp. even) and let $\mathbf m_j$ denote its entry. Set
$\mathbf m = \{\mathbf m_{2n+1},\ldots,\mathbf m_1\}$, which we
view as an element of $B_J$. Use the convention that $\mathbf m_j
=0, \forall j \notin \{1,2,\ldots,2n+1\}$. Then as in \ref {4.2}
successive differences of the Kashiwara function take the form
$$r_\alpha^{2j+1}(\mathbf m)-r_\alpha^{2j-1}(\mathbf m) = -\mathbf m_{2j+1} - \mathbf
m_{2j-1}+g \mathbf m_{2j}.\eqno{(*)}$$
When $\alpha$ is replaced by $\beta$ then the same relation holds
except that $2j+1$ is replaced by $2j$.
\subsection{}\label{7.6}
Retain the notation of \ref {7.5} but now interpret $\alpha$
(resp. $\beta$) in $J$ as the sequence
$\{\alpha_1,\alpha_3,\ldots,\alpha_{2n-1}\}$ (resp.
$\{\alpha_{2n},\alpha_{2n-2},\ldots,\alpha_2\}$), so now $B_J$
denotes the crystal which is a $2n(2n+1)$-fold tensor product of
elementary crystals in type $A_{2n}$. Let $B_{i,j}: j \in
\{1,2,\ldots,2n+1\}$ denote the elementary crystal $B_{\alpha_i}:i
\in \{1,2,\ldots,2n\}$ occurring in the $j^{th}$ place of $J$
counting from the right, let $m_{i,j}$ denote its entry and
$\mathbf m$ the element of $B_J$ they define. Then the successive
differences of Kashiwara functions take the form
$$r_{2i+1}^{2j+1}(\mathbf m) - r_{2i+1}^{2j-1}(\mathbf m)=
m_{2i+1,2j+1}-m_{2i+1,2j-1} + m_{2i,2j} + m_{2i+2,2j},\eqno
{(*)}$$ with $2i$ replacing $2i+1$ when $2j$ replaces $2j+1$.
\subsection{}\label{7.7}
We attempt to interpret $(*)$ of \ref {7.5} so that it becomes
$(*)$ of \ref{7.6}. Since $Q_n(g)=0$ by $(i)$ of \ref {7.4} and
$Q_n$ is a degree $n$ monic polynomial with integer coefficients,
it follows that the ring $\mathbb Z[g]$ is a free $\mathbb Z$
module of rank $n$. Let $\{g_i\}_{i=0}^{n-1}$ be a free basis for
$\mathbb Z[g]$ with $g_0=1$ and using the convention that $g_{-1}
= g_n =0$. Notice that we have shifted the labelling by $1$
relative to our convention in \ref {3.8} and that we are using the
same basis for all $\alpha \in \pi$. Following \ref {3.8} we
write
$$r_\alpha^{2j+1}(\mathbf m)=\sum_{i=0}^{n-1}g_ir_{2i+1}^{2j+1}(\mathbf m),
\quad \mathbf m_{2j+1}=\sum_{i=0}^{n-1}g_im_{2i+1,2j+1},\quad
\mathbf m_{2j}=\sum_{i=1}^ng_{n-i}m_{2i,2j}. $$
After these substitutions and equating coefficients of $g_i$ in
$(*)$ of \ref {7.5} we obtain $(*)$ of \ref {7.6} given that
$gg_{n-i}=g_i+g_{i-1}$, equivalently that
$gg_i=g_{n-i}+g_{n-i-1}$, for all $i=0,1,\ldots,n-1$. Now set
$p_{2i}=g_i,p_{2i+1}=g_{n-i-1}$, for all $i=0,1,\ldots,[n/2]$.
Note that this implies $p_{-1}=0, p_0=1$ and $p_n=p_{n-1}$. (We
also obtain $p_{n+1}=g_0$ for $n$ even but this we ignore.) These
identifications give $p_{i+1}=gp_i-p_{i-1}$ for all
$i=0,1,\ldots,n-1$. It follows that $p_i=P_i(g)$. In other words
$p_i$ is the $(i+1)^{th}$ Chebyshev polynomial evaluated at $g$. Since
the latter are monic polynomials of degree $i$, the
$\{p_i:i=0,1,\ldots,n-1\}$ form a free basis of $\mathbb Z[g]$. In
addition the relation $p_n=p_{n-1}$ becomes exactly the relation
$Q_n(g)=0$, precisely as required. Thus our goal has been achieved
and we have shown the
\begin{thm} Let $C$ be the $2\times2$ Cartan matrix with $-2\cos
(\pi/(2n+1)):n \in \mathbb N^+$ as off-diagonal elements. Then the
crystal $B(\infty)$ defined through \ref{3.8} using the free basis
$\{P_i(g)\}_{i=0}^{n-1}$ of $\mathbb Z[g]$ is isomorphic to the
crystal $B(\infty)$ in type $A_{2n}$.
\end{thm}
\subsection{}\label{7.8}
From \ref {7.7} we obtain all the good properties of the
pentagonal crystal noted in Section 5, namely that it is upper
normal, independent of the choice of $J$ and satisfies the
character formula given in \ref {6.2}. We do \textit{not} obtain the
explicit description of $B_J(\infty)$ described in \ref {5.4}; but
we do obtain the justification of this description given by the
Kashiwara algorithm noted in \ref {5.9}.
\section{Weight Diagrams}
\subsection{}\label{8.1}
Recall that Lemma \ref{6.5} allows one to draw the weight diagrams
of $A_4$ in the plane and that furthermore from the defining
representation one naturally recovers the two triangles used in
(aperiodic) Penrose tiling. This leads to the following question.
Suppose we are given a Penrose tiling of part $P$ of the plane.
When do the vertices of $P$ viewed as a graph form a weight
diagram for $A_4$ ?
We note below that one may similarly draw the weight diagrams of
$A_{2n}$ in the plane and then one can further ask if it is
possible to recover a family $\mathscr T_{2n+1}$ of triangles (see
\ref {8.4}) which lead to higher aperiodic tiling. Then one can
similarly ask which such tilings are weight diagrams.
Conversely given a weight diagram viewed as a set of points on the
plane can one join vertices to obtain a tiling in (part of) the
plane using just the elements of $\mathscr T_{2n+1}$ or possibly a
slightly bigger set ? In fact the image of the weight lattice of
$A_{2n}$ for $n \geq 2$ is dense in the plane (for the metric
topology) and so the limit of weight diagrams takes on a fractal
aspect. Consequently one is certainly forced to supplement
$\mathscr T_{2n+1}$ using similar triangles which become smaller
and smaller by factors of $g^{-1}$.
These questions lead us to the following. Observe that the
essence of Penrose tiling is that the two triangles involved are
self-reproducing up to similarity by factors of the Golden
Section. Here we show that this property naturally extends to
$\mathscr T_m$ for all $m \geq 3$.
\subsection{}\label{8.2}
We start with a generalization of Lemma \ref{6.5}. Fix a positive
integer $n$. Recall the notation of \ref {3.9} and \ref {7.7}. Let
$\pi := \{\alpha_1,\ldots,\alpha_{2n}\}$ be the set of simple
roots in type $A_{2n}$. Notice that since now $M=\mathbb
Zg_0+\ldots+\mathbb Zg_{n-1}$, we should write
$s_{\alpha,i}\lambda=\lambda-\alpha^\vee(\lambda)_{i-1}g_{i-1}\alpha$
and
$s_{\beta,i}\lambda=\lambda-\beta^\vee(\lambda)_{i-1}g_{i-1}\beta$.
\begin{lemma} Set
$$\psi(s_{\alpha,i+1})=s_{2i+1},\psi(s_{\beta,i+1})=s_{2n-2i},$$
$$\psi(g_i\alpha)=\alpha_{2i+1},\psi(g_i\beta)=\alpha_{2n-2i},
\forall i=0,1,\ldots,n-1.$$
\
(i) \ $\psi$ extends to an isomorphism of $W^a$ onto $W(A_{2n})$
of Coxeter groups. In particular
$$\psi(s_\alpha)=\prod_{i=0}^{n-1}s_{2i+1},\quad \psi(s_\beta)=\prod_{i=0}^{n-1}s_{2n-2i}.$$
Set $\widehat{W}= W^a \iso W(A_{2n})$.
\
(ii) \ $\psi$ extends to a $\mathbb Z \widehat{W}$ module
isomorphism of $\mathbb Z[g]\alpha + \mathbb Z[g]\beta$ onto
$\mathbb Z\pi$.
\end{lemma}
\begin{proof} It suffices to verify that the $s_i\alpha_j: i,j
=1,2,\ldots,2n$, satisfy the correct identities. Here we just
verify one example of a non-trivial case. One has
$s_{2i+1}\alpha_{2i}=\psi(s_{\alpha,i+1}(g_{n-i}\beta))$, whilst
$$s_{\alpha,i+1}(g_{n-i}\beta)=g_{n-i}\beta-(g_{n-i}\alpha^\vee(\beta))_i\alpha=g_{n-i}\beta+(gg_{n-i})_i\alpha.$$
Yet by \ref {7.7} one has $gg_{n-i}=g_i +g_{i-1}$ and so the right
hand side above equals $g_{n-i}\beta+g_i\alpha$, whose image under
$\psi$ is $\alpha_{2i}+\alpha_{2i+1}$, as required.
\end{proof}
\subsection{}\label{8.3}
Retain the above notation. We can only view $\mathbb Z[g]$ as a
lattice in the complex plane when $2n+1$ is prime, because by
Lemma 7.4 the monic polynomial satisfied by $g$, namely $Q_n$, is
irreducible over $\mathbb Q$ just when $2n+1$ is prime. We shall
avoid this difficulty as follows.
Recall that by \ref {7.4} the largest solution of the equation
$Q_n(g)=0$ is $g=2\cos \pi/(2n+1)$. Let $\psi'$ be the composition of
$\psi^{-1}$ with evaluation of $g$ at the above value, which is of
course a well-defined map of $\mathbb Z\pi$ into the plane, which
is injective just when $2n+1$ is prime.
It is clear that the weight diagram of the defining
$(2n+1)$-dimensional representation of $\mathfrak {sl}(2n+1)$
becomes under $\psi'$ exactly the regular $(2n+1)$-gon. Choose
its highest weight to be the fundamental weight $\varpi_1$
corresponding to $\alpha_1$. Let $v_0$ designate the corresponding
point $\psi'(\varpi_1)$ in the plane. For all $i =0,1,\ldots,
2n-1$ join the vertex
$v_i:=\psi'(\varpi_1-\alpha_1-\alpha_2-\ldots-\alpha_i)$ to
$v_{i+1}:=\psi'(\varpi_1-\alpha_1-\alpha_2-\ldots-\alpha_{i+1})$.
This gives what we call a zig-zag triangularization of the regular
$(2n+1)$-gon. Such a triangularization is rather natural from the
point of view of representation theory. Let $T_i:i=1,2,\ldots,2n-1$
denote the triangle with vertices $\{v_{i-1},v_i,v_{i+1}\}$. From
the above lemma or directly one can easily compute their angles
and edge lengths. Set $p_i=P_i(g)$ and let $\mathbf P_{2n+1}$ be
the monoid they generate in $\mathbb C$. Observe that $T_i$ is
related to $T_{2n-i}$ by the parity transformation
$T\rightarrowtail T'$, which reverses the cyclic order of the
vertices.
\begin{cor} Take $i \in \{1,2,\ldots,2n-1\}$. The angles (up to a multiple of $\pi$)
(resp. edge lengths, scaled to one for the sides of the
$(2n+1)$-gon) in $T_i$, given in cyclic order starting from $v_i$
(resp. from the edge joining $v_{i-1}$ to $v_i$), are
$\{1/(2n+1),i/(2n+1),(2n-i)/(2n+1)\}$ (resp. $\{p_{i-1},p_i,p_0\}$).
\end{cor}
\textbf{Remark}. Notice that we may now give the conclusion of
Lemma \ref {8.2} the following aesthetically pleasing
presentation. Consider the extended Dynkin diagram of $A_{2n}$. We
may regard it as a regular $(2n+1)$-gon with vertices labelled by
the roots $\alpha_i : i =0,1,2,\ldots,2n$. Then the distance from
$\alpha_0$ to $\alpha_i:i=1,2,\ldots,2n$, is $p_{i-1}$. Comparison
with Lemma \ref {8.2} shows that this is exactly the factor which
multiplies $\alpha$ or $\beta$ in the image of $\alpha_i$.
\subsection{}\label{8.4}
Unless otherwise specified a weight diagram (in the plane) will
mean the image under $\psi'$ of a weight diagram of $\mathfrak
{sl}(2n+1)$. A weight triangularization of a weight diagram is
then defined to be a triangularization in which the vertices are
exactly the images of weights of non-zero weight subspaces. For
example the zig-zag triangularization of the regular $(2n+1)$-gon
defined above is a weight triangularization of the weight diagram
of the defining representation. It is not the only weight
triangularization possible and unless $n\leq 2$ other weight
triangularizations can lead to a different set of triangles (see
Figure 2). Nevertheless since there can be no vertices in the
interior of the $(2n+1)$-gon every edge must join two vertices on
the boundary and hence must come from a root. In particular every
edge length must be some $p_i$. The additional triangles obtained
in this fashion (which include all possible isosceles triangles
with angles which are a multiple of $\pi/(2n+1)$) can be needed for
a general weight triangularization (see Figure 3). In general the
set $\mathscr T_{2n+1}$ (or simply $\mathscr T$) of all triangles
obtained from a weight triangularization of the $(2n+1)$-gon is
described by the following easy
\begin{lemma} Suppose $T \in \mathscr T_{2n+1}$. Then, up to multiples
of $\pi /(2n+1)$, the angles in $T$ are given by an (unordered)
partition of $2n+1$ into three non-zero parts. The length of the
side opposite to the angle of size $\pi k/(2n+1)$ equals $p_{k-1}$
if $k\leqslant n$ and $p_{2n-k}$ if $k\geqslant n$.
\end{lemma}
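The length statement rests on the chord computation underlying \ref{8.3}; a short numerical check (our addition) confirms that the chord of the regular $m$-gon spanning $k$ edges, scaled so that the side has length one, equals $P_{k-1}(2\cos\pi/m)$, and that $p_{k-1}=p_{m-1-k}$.

```python
import math

# Chord lengths of the regular m-gon versus the Chebyshev-like P_i of 2.2.
def P(i, x):
    p_prev, p = 0.0, 1.0
    for _ in range(i):
        p_prev, p = p, x * p - p_prev
    return p

for m in range(3, 16):
    g = 2 * math.cos(math.pi / m)
    for k in range(1, m):
        # chord spanning k edges, scaled so that the polygon side is 1
        chord = math.sin(k * math.pi / m) / math.sin(math.pi / m)
        assert abs(chord - P(k - 1, g)) < 1e-9
        assert abs(P(k - 1, g) - P(m - 1 - k, g)) < 1e-9   # p_{k-1} = p_{m-1-k}
```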
\subsection{}\label{8.5}
Fix $m$ an integer $\geq 3$. Label the vertices of the regular
$m$-gon by $\{0,1,2,\ldots,m-1\}$ in a clockwise order. Take
$i_1,i_2,i_3$ with $0\leq i_1<i_2<i_3\leq m-1$ and let
$T_{i_1,i_2,i_3}$ be the triangle whose vertices form the set
$\{i_1,i_2,i_3\}$. We may also write $T$ as
$T\{i_2-i_1,i_3-i_2,m+i_1-i_3\}$, that is through its angle set
(omitting the multiple of $\pi/m$). Of course we should not want
to distinguish such triangles which can be transformed into one
another by rotation; but it is not so obvious whether we should
equate triangles interrelated by parity. Indeed if the triangle
were a tile with all angles distinct, then its parity translate
could only be obtained by flipping it onto its "undecorated" side
! Thus we shall regard $T\{i,j,k\}, T\{j,k,i\},T\{k,i,j\}$ as the
same triangle $T$; but write $T'=T\{k,j,i\}$ for the triangle
obtained from $T$ through parity. In this convention $T=T'$, if
$T$ is an isosceles triangle. Let $\mathscr T_m$ denote the set
of all such triangles. This extends our previous definition for
$m$ odd.
\subsection{}\label{8.6}
Let $P$ be a polygon (in the Euclidean plane) and $p$ a
non-negative real number. Let $pP$ denote the polygon scaled by a
factor of $p$, being the empty set when $p=0$. Suppose $P,P'$ are
polygons which share a side of the same length and which do not
overlap when fitted together along this side of common length.
Then we denote by $P*P'$ the resulting polygon. One should of
course appreciate that this notation does not take into account
all possible fittings; but for us additional formalism will not
serve any purpose. Again it may often be the case that a tiling
will violate this condition (see Figure 10). This is in
particular true of the tilings obtained through the construction
of \ref {8.11}.
\subsection{}\label{8.7}
Retain the notation and conventions of \ref {8.5}. Extending \ref
{8.3} we set $T_i=T\{1,i,m-i-1\} \in \mathscr T_m$. Let
$p_{j-1}:j=1,2,\ldots,m-1$ denote the distance from the vertex $0$
to the vertex $j$. It is the length of a side opposite an angle of
size $j\pi/m$ in any element of $\mathscr T_m$. Scale the
elements of $\mathscr T_m$, so that $p_0=1$ and set $p_{m-1} =0$.
Observe that $p_1=2\cos \pi/m$ and that $p_{j-1}=p_{m-1-j}$. Let
$\mathbf P_m$ denote the monoid generated by the
$p_i:i=1,2,\ldots,m-2$.
\begin{lemma} For all $i \in \{1,2,\ldots,m-3\}$, one has
$T_i*T_{i+1}=p_iT_1$. Moreover $p_1p_i=p_{i+1}+p_{i-1}$.
\end{lemma}
\begin {proof} The first part follows by joining the triangles
along their common side of length $p_0$. Through similarity of
$p_iT_1$ with $T_1$, it implies the second part.
\end {proof}
\textbf{Remark}. Thus $p_i$ is the value of the $(i+1)^{th}$
Chebyshev polynomial $P_i$ at $x=2\cos\pi/m$. Via $(*)$ of \ref
{2.2}, we further deduce that $x$ is the largest (real) root of
the equation $P_{[m/2]}=P_{[(m-3)/2]}$.
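Both assertions of the lemma can be checked numerically. The following Python sketch (our addition) verifies the recurrence $p_1p_i=p_{i+1}+p_{i-1}$, and checks that the areas of the inscribed triangles $T_i$ satisfy $\operatorname{area}(T_i)+\operatorname{area}(T_{i+1})=p_i^2\operatorname{area}(T_1)$, as the relation $T_i*T_{i+1}=p_iT_1$ demands.

```python
import math

# p_i are the chord ratios of the regular m-gon; T_i = T{1, i, m-i-1} are all
# inscribed in the same circle, so area is proportional to sinA sinB sinC.
for m in range(4, 14):
    s = [math.sin(k * math.pi / m) for k in range(m)]
    p = [s[k + 1] / s[1] for k in range(m - 1)] + [0.0]   # p_0,...,p_{m-2}; p_{m-1}=0
    for i in range(1, m - 1):
        assert abs(p[1] * p[i] - (p[i + 1] + p[i - 1])) < 1e-9
    # angles of T_i are {1, i, m-i-1} times pi/m, and sin((m-i-1)pi/m) = sin((i+1)pi/m)
    area = {i: s[1] * s[i] * s[i + 1] for i in range(1, m - 2)}
    for i in range(1, m - 3):
        assert abs(area[i] + area[i + 1] - p[i] ** 2 * area[1]) < 1e-9
```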
\
\textbf{ Example 1.} Let $T$ be the equilateral triangle of side
1. It is exactly the weight diagram of the defining representation
of $\mathfrak {sl}(3)$. Then $T*T*T*T = 2T$ which is a weight
triangularization of the six-dimensional representation of
$\mathfrak {sl}(3)$. Moreover (as is well-known) a weight
triangularization of a weight diagram for $\mathfrak {sl}(3)$ can
be given in the form $T^{*n}$ for some positive integer $n$. The
resulting tiling of the plane goes back to ancient times.
We remark that the relation $T*T*T*T = 2T$ holds for any triangle
$T$. For this it suffices to match up edges of the same length, for
then the angles take care of themselves. It results in a tiling
of the plane cut out by three infinite sets of parallel lines.
Surprisingly it is hardly ever seen or used, perhaps for merely
technical reasons !
\
\textbf{ Example 2.} Consider the pair $T_1,T_2$ of triangles
given by \ref {8.3}. Their sides have lengths $\{1,g,1\}$ and
$\{g,g,1\}$ respectively, where $g$ is the Golden Section. Their
areas $a_1,a_2$ satisfy $a_2/a_1=g$. We call it the Golden Pair.
Notice that $T_1*T_2=gT_1$ by the above and in addition that
$gT_1*T_2=gT_2$. Inductively we may generate the pair
$g^nT_1,g^nT_2, \forall n \in \mathbb N$ and moreover in $2^m$
possible ways, where $m$ is the number of products. This fact is the
basis of one way of presenting Penrose tiling. (To be precise
Penrose constructed a "kite" as $T_1*T_1$ by joining these
triangles along their shortest edge and a "dart" as $T_2*T_2$ by
joining these triangles along their longest edge and then
considered \textit{certain} tilings obtained from kites and darts.
A useful discussion of this may be found in \cite {W}. We have
adopted the more prosaic tilings by the Golden pair as a
description of Penrose tiling.)
Just the two triangles of the Golden Pair do not suffice to give
all weight triangularizations. Thus although the root lengths are
all either $1$ or $g$, roots becoming collinear give vertices
separated by a distance of $g-1=g^{-1}$. Indeed if the highest
weight equals $\varpi_2+\varpi_3$ then all three triangles
$\{T_1,g^{-1}T_1,g^{-1}T_2\}$ are needed for a weight
triangularization. In general such differences force
one to use smaller and smaller triangles until a weight triangularization
begins to resemble a fractal.
\
\textbf{Example 3.} Take $n=3$. In the zig-zag triangularization
of the regular $7$-gon one may replace the lines joining $v_3,v_4$
and $v_4,v_5$, by lines joining $v_3,v_6$ and $v_2,v_6$ and still
obtain a weight triangularization (see Figure 2). This results in
a new triangle which we shall denote by $T_0$. It is an isosceles
triangle with side lengths $\{p_2,p_1,p_1\}$ and angles (up to a
multiple of $\pi/7$) which are $\{2,2,3\}$. Moreover if we
further replace the line joining $v_2,v_6$ by that joining
$v_3,v_4$ we obtain the "relation" $T_0*T_1 = T_2*T_3$. One might
compare the resulting "algebra" of triangles to the commutative
ring $\mathbb Z[x_0,x_1,x_2,x_3]/\langle x_0x_1-x_2x_3\rangle$. As is
well-known the latter is not freely generated.
Return for the moment to the Golden Pair $T_1,T_2$. We noted in
Example 2 that it was possible using $*$ to generate all the
triangles in the set $\{pT_1,pT_2, \forall p \in \mathbf P_5\}$.
Moreover we were able to obtain a weight triangularization of the
root diagram of $\mathfrak {sl}(5)$ by using just $\{T_1,T_2\}$
scaled by a factor of $g^{-1}$. Here the situation is more
complex. Thus in Figure 3 we illustrate a weight triangularization
of the root diagram of $\mathfrak {sl}(7)$. It uses
$\{T_0,T_2,T_3\}$ scaled by a factor of $p_2^{-1}$. The first
surprise is that we cannot just use the set of triangles coming
from a zig-zag triangularization of the weight diagram of the
defining representation, namely the set $\{T_1,T_2,T_2',T_3\}$ with
whatever scaling. In addition by spotting unions of triangles in
Figure 3 we obtain the following relations
$$T_0*p_2T_1=p_1T_3,\quad T_2*p_2T_1=p_1T_2,\quad T_2*p_1T_3=p_2T_2,\quad p_2T_2*T_3=p_2T_3,$$
$$p_2T_2*p_1T_3=p_2T_0, \quad T_0*T_0*T_2*T_3=p_1T_0.$$
(This last relation is more difficult to spot but is illustrated
in Figure 4.)
We conclude that all the triangles in the set
$\{pT_0,pT_2,pT_3,\forall p \in \mathbf P_7\}$ can be generated in
complete analogy with the case of the Golden Pair. However a
further surprise is that we obtain some extra triangles through
the relation $T_0*T_2=(p_2/p_1) T_0$. A further fact (though
perhaps less of a surprise) is that we cannot obtain $T_1$. This
is because if $a_i$ denotes the area of $T_i$ then
$$a_0=(p_1+p_2)a_1,\quad a_2=p_2a_1,\quad p_1a_3=(p_0+p_1+p_2)a_1,$$
so the area of $T_1$ is too small. On the other hand by Lemma
\ref{8.7} and the above $\{T_i: i=0,1,2,3\}$ generates
$\{pT_0,pT_1,pT_2,pT_3,\forall p \in \mathbf P_7\}$. Notice that
we also obtain the above relations when a given triangle $T$ is
replaced by $T'$. (Here only $T_2$ is affected.) We conclude that
$\mathscr T_7$ generates the set $\{pT:p \in \mathbf P_7, T \in
\mathscr T_7 \}$.
\subsection{}\label{8.8}
The result in the Lemma \ref {8.7} may be viewed in another way
which makes its generalization to other elements of $\mathscr T_m$
immediate. Thus let $T_1 \in \mathscr T_m$ be the triangle defined
in \ref {8.7}. Now join the vertex $0$ to any vertex $i:i \in
\{2,3,\ldots,m-1\}$. The resulting line cuts $T_1$ into two
triangles which are easily seen to be similar to two of those in
$\mathscr T_m$. Moreover there are $m-3$ such decompositions
which come in pairs related by parity. These decompositions are
exactly those given in \ref {8.7}. This construction immediately
gives the result below. Recall the notation of \ref {8.5}.
\begin{lemma} Take $0<i<j<m$. Then for all $t:0<t<i$ one has
$$p_{j-i+t-1}T_{0,i,j}=p_{j-1}T_{0,t,j-i+t}*p_{j-i-1}T_{0,i-t,j}.$$
\end{lemma}
\begin{proof} Cut the given triangle by the line joining the
vertex $t$ to the vertex $j$. Then compute angles and side
lengths through \ref {8.5}, \ref {8.7}.
\end{proof}
\subsection{}\label{8.9}
When $n=3$, the above lemma gives all the pair decompositions
given in Example 3. However it does not give the more tricky
decomposition of $p_1T_0$ into four parts. For $n>2$, the above
lemma is insufficient for our purposes because there are missing
$p_s$ factors on the left hand side. This arises whenever $j-i>1$
or taking into account equivalences (under $W$) all sides of the
$T_{0,i,j}$ have length $>1$. However we can then make what we
call an inscribed decomposition of $T_{0,i,j}$. Up to a rotation
we can assume $i\leq \min\{j, m-j\}$. Then assume that $i>1$,
which is the "bad" case.
\begin{lemma} For all $t:0<t<i$ one has
$$p_{2t-1}T_{0,i,j}=p_{t-1}T_{0,i,j}*p_{t-1}T_{0,i+t,j+t}*p_{t-1}T_{0,j-i+t,j}*p_{t-1}T_{0,i,j-t}.$$
\end{lemma}
\begin{proof} Draw a second triangle with vertices
$\{t,i+t,j+t\}$. Join the vertices $\{u_1,u_2,u_3\}$ where
respectively the line $(0,i)$ meets $(t,i+t)$, the line $(i,j)$
meets $(i+t,j+t)$ and the line $(j,0)$ meets $(j+t,t)$. This
decomposes our original triangle into four triangles with vertices
$\{0,u_1,u_3\}$, $\{u_1,i,u_2\}$, $\{u_2,j,u_3\}$ and the
inscribed triangle $\{u_1,u_2,u_3\}$. Compute angles and lengths
of edges starting from the periphery. This computation is
illustrated in Figure 5.
\textbf{N.B.} Notice that $T_{0,i,j}$ intersects its rotated twin
at six points and the sides of the inscribed triangle are obtained
by joining second successive intersection points. However there
are two ways to do this and unless $T_{0,i,j}$ is equilateral only
the one specified in the proof works ! Namely one must join the
points of intersection of a given line of $T_{0,i,j}$ with the
corresponding line of its rotated twin. If $T_{0,i,j}$ is
equilateral, the second choice corresponds to making a different
rotation.
\end{proof}
\subsection{}\label{8.10}
Yet we are still missing some $p_s$ factors, when $s$ is even. For
$m$ odd the first bad case occurs when $m=9$, which is also the
first case when $m$ is odd and not prime. Indeed $\mathscr T_9$
admits just one exceptional triangle whose angles are not coprime
(up to the factor $\pi/9$), namely $T\{3,3,3\}$. Let $\mathscr
T'_9$ denote its complement in $\mathscr T_9$. Using Lemmas 8.8
and 8.9 one may verify that $\mathscr T'_9$ generates the set
$\{pT:p \in \mathbf P_9, T \in \mathscr T'_9\}$. This again allows one
to obtain higher Penrose tiling using nine triangles with angles
which are multiples of $\pi/9$.
In order to fill the above lacuna, consider the missing triangle,
namely $p_2T\{3,3,3\}$ whose decomposition into elements of
$\mathscr T_9$ is required in order to show that the set generated
by $\mathscr T_9$ contains $\{pT:p \in \mathbf P_9, T \in \mathscr
T_9\}$. Here we may first recall that to decompose $p_iT\{3,3,3\}$
for $i=1,3$, we cut this triangle with a second one rotated by an
angle of $2\pi/9$. This cuts the original triangle into three
triangles and an "internal" hexagon. The hexagon was then cut into
four triangles by joining second successive edges in either of the
two ways possible. The internal triangle is exactly the
"inscribed" triangle. Furthermore each of the three triangles in
this second set shares a common edge with one in the first set and
may be joined to it. The resulting decomposition of
$p_iT\{3,3,3\}:i=1,3$ is exactly what is described in \ref {8.9}.
We shall modify this procedure in two ways. First rotate the
second triangle by just $\pi/9$. Secondly cut the internal
hexagon into six triangles by joining third successive edges (that
is opposite edges). This cuts the original triangle into nine
smaller triangles. Verifying the angles and using the identity
$p_2^2=p_4+p_2+p_0$ we obtain the decomposition
$$p_2T\{3,3,3\}=T\{1,3,5\}^{*3}*T\{2,3,4\}^{*6},$$
illustrated in Figure 6.
One remarks that if we rotate the second triangle by $3\pi/9$ then
its intersection with the first gives a well-known symbol beloved
by some. Moreover decomposing the internal hexagon by joining
opposite edges gives the decomposition
$$3T\{3,3,3\}=T\{3,3,3\}^{*9}.$$ This generalizes to give (via
Remark 1 of \ref {8.7}) the relation
$$nT=T^{*n^2}, \forall T \in \mathscr T, n \in \mathbb N^+.\eqno (*)$$
\subsection{}\label{8.11}
It comes as somewhat of a surprise that the above construction
does not generalize in the obvious fashion for all $n$, though the
identity $p_2^2=p_4+p_2+p_0$ is still valid for all $n\geq3$. In
other words in order for example to decompose $p_2T: T\in \mathscr
T_{2n+1}$ into nine triangles it is not appropriate to cut it with its
twin rotated by $\pi/(2n+1)$. Nevertheless there is a way to
similarly decompose $p_2T:T\in \mathscr T_{2n+1}$ into nine elements of
$\mathscr T_{2n+1}$. This is illustrated in Figure 7 for the case
$n=5$ and $T=T\{3,3,5\}$. It can be viewed as being obtained by
cutting $T$ with a second copy and decomposing the internal
hexagon as before; but the latter does not have vertices on the
same circle.
Finally we have illustrated the general decomposition of $p_2T:
T=T\{i,j,k\}\in \mathscr T_m; i,j,k \geq 3$ into nine triangles
symbolically in Figure 8. It is based on the identity $p_2p_i =
p_{i+2}+ p_i + p_{i-2}$, which holds for all $i: 2 \leq i \leq
m-4$. It is verified using $p_2=p_1^2-1$ and
$p_1p_i=p_{i+1}+p_{i-1}$.
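Note that the two recurrences $p_2=p_1^2-1$ and $p_1p_i=p_{i+1}+p_{i-1}$ already force the identity at the level of the polynomials $P_i$, before evaluation at $g$. A minimal sketch in Python, with polynomials as integer coefficient lists, checks this exactly:

```python
# P_0 = 1, P_1 = x, P_{n+1} = x P_n - P_{n-1}, as integer coefficient lists
# in ascending powers of x, so the checks below are exact, not numerical
def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def padd(*ps):
    n = max(len(q) for q in ps)
    return [sum(q[i] for q in ps if i < len(q)) for i in range(n)]

N = 20
P = [[1], [0, 1]]
for _ in range(2, N):
    P.append(padd(pmul([0, 1], P[-1]), [-c for c in P[-2]]))

assert P[2] == padd(pmul(P[1], P[1]), [-1])                # p_2 = p_1^2 - 1
assert all(pmul(P[1], P[i]) == padd(P[i + 1], P[i - 1])    # p_1 p_i = p_{i+1} + p_{i-1}
           for i in range(1, N - 1))
assert all(pmul(P[2], P[i]) == padd(P[i + 2], P[i], P[i - 2])
           for i in range(2, N - 2))                       # p_2 p_i = p_{i+2} + p_i + p_{i-2}
```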
In Figure 8 all nine triangles are drawn for convenience though
incorrectly as equilateral triangles. The correct angles are given
in each corner (as multiples of $\pi/m$). The reader will easily
discern a pattern and verify all the needed relations (which are
not entirely trivial: for example the angles opposite the edge
shared by a pair of triangles must be the same, and moreover the
external lines must all be straight ones).
\subsection{}\label{8.12} Finally we describe the decomposition
of $p_{t}T:T=T\{i,j,k\}\in \mathscr T_m; i,j,k \geq t+1$, into
$(t+1)^2$ triangles in $\mathscr T_m$. (We remark that the
construction does not specifically require $t$ to be even.)
First we need the
following preliminary. Recall that $p_i= P_i(g)$ for
$i=0,1,\ldots,n$, where $g = 2\cos \pi/m$, and that
$p_i:=p_{m-2-i}$ for $(m-2) \geq i \geq n-1$, with $g$ being the
largest real solution to the identity $p_{[m/2]}=p_{[(m-3)/2]}$,
namely $2\cos \pi/m$. Recall further that
$gp_i=p_{i-1}+p_{i+1}$ for $0<i<2n-1$.
\begin {lemma} For all $i,t \in \mathbb N^+:
t\leq i\leq 2n-t-1$, one has
$$p_tp_i = \sum_{j=0}^t p_{i+t-2j}.$$
\end {lemma}
\begin {proof} One has $$\begin{array}{lcl}
p_tp_i&=&(p_1p_{t-1} - p_{t-2})p_i,\\&=&p_{t-1}(p_{i+1} + p_{i-1}) - p_{t-2}p_i,\\
\end{array}$$
from which the assertion follows by induction on $t$.
\end {proof}
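The Lemma holds in fact as an exact identity between the polynomials $P_i$ for $t\leq i$; a small Python sketch (polynomials again as integer coefficient lists) verifying this for small $t$:

```python
def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def padd(*ps):
    n = max(len(q) for q in ps)
    return [sum(q[i] for q in ps if i < len(q)) for i in range(n)]

N = 24
P = [[1], [0, 1]]
for _ in range(2, N):
    P.append(padd(pmul([0, 1], P[-1]), [-c for c in P[-2]]))

# the Lemma as an exact polynomial identity: P_t P_i = sum_{j=0}^t P_{i+t-2j}
for t in range(1, 6):
    for i in range(t, N - t):
        assert pmul(P[t], P[i]) == padd(*[P[i + t - 2 * j] for j in range(t + 1)])
```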
\subsection{}\label{8.13}
To describe the decomposition of $p_tT\{i,j,k\}$ it is perhaps
best to start with the cases $t=2$ and $t=3$. The first has
already been described symbolically in Figure 8. The second case,
decomposing $p_3T\{i,j,k\}: i,j,k >3, i+j+k=2n+1$ into 16
triangles in $\mathscr T_m$, is similarly described symbolically in
Figure 9. From these two cases the reader can easily figure out
the general solution for himself. The result is described as
follows where we use $\prod$ instead of $*$.
\begin {prop} For all $m,t \in \mathbb N^+$ and $i,j,k > t$ with
$i+j+k = m$ one has
$$\begin{array}{lcl}p_{t}T\{i,j,k\}&=&\prod_{c=0}^{t}\prod_{r=0}^cT\{i+c-2r,j+t-2c+r,k-t+c+r\}*\\\\
&&\prod_{c=1}^{t}\prod_{r=1}^cT\{j+t-2c+r,k-t+c+r-1,i+c-2r+1\}.\\
\end{array}$$
\end {prop}
\begin {proof} Let us first explain the notation. To begin with
$c$ (resp. $r$) labels columns from the top (resp. rows from the
left). In the first column there is just one triangle, namely
$T\{i,j+t,k-t\}$. Here the angles are given in clockwise order
starting from $i$ at the top. Notice that the (upper) edges of
this triangle have lengths $p_{j+t-1}$ and $p_{k-t-1}$ as required
by Lemma \ref {8.12}.
The triangles appearing in the first product above are exactly
those which have one vertex (with angle $i+c-2r$) "above", with the
remaining two vertices forming a "horizontal" line. Those to the
extreme left (corresponding to $r=0$) have edges of lengths
$p_{j+t-2c-1}$ which together form the side of $p_{t}T\{i,j,k\}$
opposite to the angle of size $j$. The sum of their edges equals
$p_{t}p_{j-1}$ via \ref {8.12}. Similarly the sum of the edges of
those triangles corresponding to $r=c$ is just $p_{t}p_{k-1}$,
whilst the sum of the edges of those triangles in the last row
(corresponding to $c=t$) is just $p_{t}p_{i-1}$.
The triangles in the second product are inverted relative to the
first. Each shares a common edge with a triangle in the first set.
One checks that the opposite angle sizes coincide. Finally one
checks that any vertex has two (resp. $3$, $6$) edges to it for
which the resulting angle sizes are $i\pi/m$, $j\pi/m$ or $k\pi/m$
(resp. any three in cyclic order sum to $\pi$). This means that
the large triangle $p_{t}T\{i,j,k\}$ is cut into $(t+1)^2$
triangles in $\mathscr T_m$ by $3t$ lines with end points on its
edges exactly cutting the latter into the sums described in \ref
{8.12}.
\end {proof}
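The bookkeeping in the Proposition can be machine-checked. In the sketch below (Python, illustrative only) the factors are enumerated with the middle index of the first product taken as $j+t-2c+r$, matching the first-column triangle $T\{i,j+t,k-t\}$ of the proof; the checks are the count $(t+1)^2$, the angle sums, the area balance, and, for $t=2$, agreement with the decomposition of \ref {8.10}.

```python
import math

def factors(i, j, k, t):
    # index scheme of the proof: the first product has middle index j+t-2c+r,
    # matching the first-column triangle T{i, j+t, k-t}
    first = [(i + c - 2 * r, j + t - 2 * c + r, k - t + c + r)
             for c in range(t + 1) for r in range(c + 1)]
    second = [(j + t - 2 * c + r, k - t + c + r - 1, i + c - 2 * r + 1)
              for c in range(1, t + 1) for r in range(1, c + 1)]
    return first + second

def area(tri, m):
    # T{a,b,c} has side sin(a*pi/m)/sin(pi/m) opposite the angle a*pi/m
    s = math.sin(math.pi / m)
    a, b, c = tri
    return (0.5 * (math.sin(b * math.pi / m) / s)
                * (math.sin(c * math.pi / m) / s) * math.sin(a * math.pi / m))

for (i, j, k, t) in [(3, 4, 4, 1), (3, 3, 3, 2), (4, 4, 5, 3)]:
    m = i + j + k
    F = factors(i, j, k, t)
    assert len(F) == (t + 1) ** 2                 # (t+1)^2 triangles in all
    assert all(sum(tri) == m for tri in F)        # each factor lies in T_m
    # scaling by p_t multiplies area by p_t^2; the factors account for it all
    p_t = math.sin((t + 1) * math.pi / m) / math.sin(math.pi / m)
    assert math.isclose(p_t ** 2 * area((i, j, k), m),
                        sum(area(f, m) for f in F))
    if (i, j, k, t) == (3, 3, 3, 2):
        # recovers p_2 T{3,3,3} = T{1,3,5}^{*3} * T{2,3,4}^{*6}
        assert sorted(tuple(sorted(f)) for f in F) \
            == [(1, 3, 5)] * 3 + [(2, 3, 4)] * 6
```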
\subsection{}\label{8.14}
Let $<\mathscr T_m>$ denote the set of triangles generated by
$\mathscr T_m$ through $*$. Combining \ref {8.8}, \ref {8.9},
\ref {8.13} we obtain the following
\begin {thm} For all $n >1$ the set $\{pT: p \in \mathbf P_m, T \in \mathscr T_m\}$
is contained in $<\mathscr T_m>$.
\end {thm}
\subsection{}\label{8.15}
As we already noted the above inclusion may be strict (Example 3
of \ref {8.7}). It may also be possible to combine triangles in
a different manner than that described using \ref {8.8}, \ref {8.9},
\ref {8.13}. This already occurs for $m=9$ as illustrated in Figure
10. Nevertheless we have managed to accomplish the program
outlined in the last part of \ref {8.1}. Notice that by virtue of
$(*)$ of \ref {8.10} we may obtain the stronger conclusion defined
by replacing $\mathbf P_m$ with $\widehat{\mathbf P}_m:= \{s
\mathbf P_m: s \in \mathbb N^+\}$.
\section{Fundamental domains, alcoves and the affine Weyl group.}
\subsection{}\label{9.1}
In the notation of \ref {2.2} define $\varpi_\alpha,\varpi_\beta$
to be the fundamental weights in $\mathfrak h^*$ given by
$\gamma^{\vee}(\varpi_{\zeta}) =
\delta_{\gamma,\zeta}:\gamma,\zeta \in \{\alpha,\beta\}$. It is
immediate that if $\pi - \theta$ is the angle between $\alpha$ and
$\beta$ then $\theta$ is the angle between $\varpi_{\alpha}$ and
$\varpi_{\beta}$. In particular the region between the lines they
define is a fundamental domain for the action of $W$ in $\mathfrak
h^*$. Put another way the lines bordering this domain define
reflection planes and the group they generate has this domain as a
fundamental domain. Notice that for this some integer multiple of
$\theta$ must equal $\pi$.
\subsection{}\label{9.2}
Now recall the remark in Example 1 of \ref {8.7} where we
described a tiling of the plane by a given triangle $T$. Consider
the group $W^{aff}$ generated by the reflection planes defined by
the three sides of the triangle. One can ask if the given
triangle is a fundamental domain for the action of $W^{aff}$
acting on the plane. It is clear that a necessary condition is
that this be true with respect to the subgroup generated by just
two reflection planes and the wedge shaped domain they enclose.
Now in \ref {9.1} we saw that this means that the angle they
define must have some integer multiple equal to $\pi$. Thus we
must be able to write the angle set of $T$ in the form
$\{i\pi/m,j\pi/m,k\pi/m\}$ where $i,j,k$ divide $m$ and of course
sum to $m$. We can of course further assume that the greatest
common divisor of $i,j,k$ equals one. One easily checks the
well-known fact that this condition has just three solutions
$i=j=k=1$, $i=j=1,k=2$, $i=1,j=2,k=3$ corresponding to the
fundamental domains (called alcoves) in types $A_2,B_2,G_2$ under
the action of the affine Weyl group $W^{aff}$. Alcoves are not
disjoint; yet they intersect only in lower dimension and so we may
refer to their providing a decomposition of $\mathfrak h^*$ into
(essentially) disjoint subsets.
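The solution count is easily confirmed by brute force; a minimal Python sketch:

```python
from math import gcd

# find all triples i <= j <= k with i, j, k dividing m, summing to m,
# and of greatest common divisor one
solutions = set()
for m in range(3, 301):
    for i in range(1, m):
        for j in range(i, m):
            k = m - i - j
            if k < j:
                continue
            if m % i or m % j or m % k:
                continue
            if gcd(gcd(i, j), k) == 1:
                solutions.add((i, j, k))

# only the alcove shapes of types A_2, B_2 and G_2 occur
assert solutions == {(1, 1, 1), (1, 1, 2), (1, 2, 3)}
```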
We conclude that the tiling of the plane described in Example 1 of
\ref {8.7} only rather rarely has the special property described above
of being the translate of a fundamental domain.
\subsection{}\label{9.3}
The above apparently negative result nevertheless leads to the
following question. Recall that for any simple Lie algebra
$\mathfrak g$ of rank $n$, the Cartan subspace $\mathfrak h$ admits
a decomposition into alcoves any one of which is a fundamental
domain for the action of the affine Weyl group. We may therefore
ask if the decomposition into alcoves in type $A_{2n}$ leads to an
aperiodic tiling of the plane through the map $\psi'$ defined in
\ref {8.3}.
First we briefly recall how alcoves are obtained. (For more
details we refer the reader to \cite [Chap VI, Section 2]{B}.) Fix
a system $\pi$ of simple roots and let $\{\alpha^\vee: \alpha \in
\pi\}$ (resp. $\{\varpi^\vee_\alpha:\alpha \in \pi\}$) be the
corresponding system of coroots (resp. fundamental coweights). Set
$Q^\vee = \mathbb Z\pi^\vee$. Let $\alpha_0$ be the highest root.
In type $A_n$ this is just $\alpha_1+\alpha_2+\ldots+\alpha_n$. By
a slight abuse of notation we let $s_{\alpha_0}$ denote the linear
bijection on $\mathfrak h$ defined by
$$s_{\alpha_0}(h) = h - (\alpha_0(h)-1) \alpha_0^\vee, \forall h \in \mathfrak h.$$
It is the reflection in the hyperplane in $\mathfrak h$ defined by
$\alpha_0(h) = 1$. The affine Weyl group $W^{aff}$ is the group
generated by the Weyl group for $\mathfrak g$ and $s_{\alpha_0}$.
It may be viewed as the Coxeter group with generating set
$s_{\alpha_0},s_{\alpha_1},\ldots,s_{\alpha_n}$. It is the
semi-direct product $W^{aff}= Q^\vee \ltimes W$. Here one uses
the fact that $\alpha_0^\vee$ is the highest short root for the
dual root system and one checks that $Q^\vee = \mathbb
ZW\alpha_0^\vee$.
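In type $A_2$, writing $h=a\alpha_1^\vee+b\alpha_2^\vee$ in the basis of simple coroots, one has $\alpha_0(h)=a+b$ and $\alpha_0^\vee=(1,1)$, so $s_{\alpha_0}$ can be checked in coordinates; a small illustrative sketch in Python, confirming that it is an involution fixing the hyperplane $\alpha_0(h)=1$ and sending $0$ to $\alpha_0^\vee$:

```python
# type A_2: h = a*alpha_1^vee + b*alpha_2^vee in the basis of simple coroots;
# with the A_2 Cartan matrix, alpha_0 = alpha_1 + alpha_2 gives
# alpha_0(h) = a + b, and alpha_0^vee = alpha_1^vee + alpha_2^vee = (1, 1)
def s0(h):
    # s_{alpha_0}(h) = h - (alpha_0(h) - 1) alpha_0^vee
    a, b = h
    c = (a + b) - 1
    return (a - c, b - c)

assert s0((0, 0)) == (1, 1)                  # 0 is sent to alpha_0^vee
assert s0((1, 0)) == (1, 0)                  # the hyperplane alpha_0(h) = 1 is fixed
assert s0((0.5, 0.5)) == (0.5, 0.5)
assert all(s0(s0(h)) == h for h in [(0, 0), (2, -3), (5, 7)])   # an involution
```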
Let $m_i$ be the coefficient of $\alpha_i$ in $\alpha_0$. Then the
convex set in $\mathfrak h$ with vertex set
$\{0,\varpi^\vee_1/m_1,\varpi^\vee_2/m_2,\ldots,\varpi^\vee_n/m_n\}$
is a fundamental domain (alcove) for the action of $W^{aff}$ on
$\mathfrak h$. Since the $\varpi^\vee_i/m_i:i \in
\{1,2,\ldots,n\}$ are fixed points under $s_{\alpha_0}$, it is
transformed under $s_{\alpha_0}$ into the alcove with vertex set
$\{\alpha_0^\vee,\varpi^\vee_1/m_1,\varpi^\vee_2/m_2,\ldots,\varpi^\vee_n/m_n\}$.
The transformation of a given alcove under the reflections defined
by its faces (each defined by $n$ of its vertices) is similarly described.
In type $A_2$ their translates give the tessellation of the plane
by equilateral triangles as described in Example 1 of \ref {8.7}.
We remark that if $\mathfrak g$ is simply-laced then the
identification of $\mathfrak h$ with $\mathfrak h^*$ through the
Killing form identifies roots with coroots and weights with
coweights. In addition in type $A$ the $m_i$ above are all equal
to one. We shall use this identification and simplification in
the sequel.
\subsection{}\label{9.4}
In order to study the possible consequences of decomposition into
alcoves described in \ref {9.3} above for planar tiling we first
describe the images of the fundamental weights for type $A_{2n}$
under the map $\psi'$ of \ref {8.3}. Fix $n \in \mathbb N^+$ and
recall the $p_i:i=0,1,\ldots,2n-1$ as described in \ref {8.12}.
Let $P(\pi)$ denote the set of (integer) weights in $\mathfrak
h^*$ relative to the choice of the set $\pi$ of simple roots. Its
image under $\psi^{-1}$ is just the $\mathbb Z[x]/<Q_n(x)>$ module
generated by the integer weights relative to $\{\alpha,\beta\}$
denoted in \ref {3.5} by $P$. Its image under $\psi'$ is obtained
by further evaluation of $x$ as $g = 2 \cos \pi/(2n+1)$. (This
makes a difference only if $2n+1$ fails to be prime.)
\begin {lemma} For all $i \in \{0,1,\ldots,n-1\}$, one has
$$\psi'(\varpi_{2i+1})=g_i\varpi_\alpha,\quad
\psi'(\varpi_{2(n-i)})=g_i\varpi_\beta.$$
\end {lemma}
\begin {proof} Recall by Lemma \ref {8.2} that
$s_{2i+1}\psi(\lambda)=\psi(s_{\alpha,i+1}\lambda):
i=0,1,\ldots,n-1, \forall \lambda \in P$. Equivalently
$g_i\alpha^\vee_{2i+1}(\lambda)=(\alpha^\vee(\psi^{-1}(\lambda)))_i$,
for all $\lambda \in P(\pi)$. Similarly
$g_i\alpha^\vee_{2(n-i)}(\lambda)=(\beta^\vee(\psi^{-1}(\lambda)))_i$.
Hence taking $\lambda=\varpi_{2j+1}$, for $j \in
\{0,1,\ldots,n-1\}$ we obtain
$$(\alpha^\vee(\psi^{-1}(\varpi_{2j+1})))_i=\delta_{i,j}g_i,\quad
(\beta^\vee(\psi^{-1}(\varpi_{2j+1})))_i =0,\forall i,j
=0,1,\ldots,n-1.$$ This gives the first assertion. Taking
$\lambda=\varpi_{2(n-j)}$ gives the second assertion.
\end {proof}
\subsection{}\label{9.5}
The above result already has a pleasing geometric feature worth
noting. First we recall the relation between the $g_i$ and the
chord lengths $p_i$ in the $(2n+1)$-gon, noted in \ref {7.7}.
Denote $\psi'(\varpi_i)$ simply as $x_i$, for all $i \in
\{1,2,\ldots,2n\}$. The image $T_f$ of the fundamental alcove
under $\psi'$ is the convex set over $\mathbb Q$ of $\{\{0\}, x_i
:i=1,2,\ldots,2n\}$. It lies in the wedge enclosed by two
semi-infinite lines starting at the origin $\{0\}$ and forming an
angle of $\pi/(2n+1)$. The triangle $T_i$ with vertex set
$\{\{0\},x_i,x_{i+1}\}$ for $i \in \{1,2\ldots,n-1\}$ is one of
those obtained by a zig-zag triangularization of the weight
diagram of the defining representation. (Moreover taken with its
parity translate $T'$ having vertex set
$\{\{0\},x_{2n+1-i},x_{2n-i}\}$ for $i \in \{1,2,\ldots,n-1\}$
gives all such triangles.) Since the distance between $\{0\}$ and
$x_1$ is $p_0=1$ the distance between $\{x_i,x_{i+1}\}$, for $i
\in \{0,1,\ldots,2n-1\}$ is again $1$. Joining the points
$x_i,x_{i+1}:i=1,2,\ldots,n$ gives a triangularization of $T_f$,
viewed as an isosceles triangle $T\{1,n,n\}$ in $\mathscr
T_{2n+1}$ with vertex set $\{\{0\},x_n,x_{n+1}\}$, by exactly the
isosceles triangles $p_{i-1}^{-1}T\{2n-2i+1,i,i\}:i
=1,2,\ldots,n-1$ in $\mathscr T_{2n+1}$. Joining the points
$\{\{0\},x_{2n+1-i},x_{2n-i}\}:i=1,2,\ldots,n$ gives the parity
translated triangularization. This result is a direct
generalization of the two possible ways to join the triangles
$\{T_1,T_2\}$ in the Golden pair to give $gT_1$. Of course it is
also natural (read, tempting) to join in addition the points
$x_1,x_{2n+1-i}: i=1,2,\ldots,n-1$. These give the
triangularizations similar to those which appear in Figures 1,3 of
the root diagrams in types $A_4$ and $A_6$ except that they
involve weights rather than roots.
Augment the above notation by setting $x_0=x_{2n+1}=\{0\}$. The
above result may be summarized by the following "shoelace"
\begin {lemma} The distance between $x_i,x_{i+1}$ is independent
of $i \in \{0,1,\ldots,2n\}$. Conversely the vertex set of the
image $T_f$ of the fundamental alcove can be obtained from the
cone with vertex $\{0\}$ and angle $\pi/(2n+1)$ by marking on
alternate sides of the cone equidistant points (namely the $x_i: i
=0,1,2,\ldots,2n+1$) starting (and ending) at $\{0\}$.
\end {lemma}
\subsection{}\label{9.6}
Recall that $\mathfrak g = \mathfrak {sl}(2n+1)$ and that we are
identifying $\mathfrak g$ with $\mathfrak g^*$ through the Killing
form. The presentation $W^{aff}= Q \ltimes W$ implies that the map
$\psi$ defined in Lemma \ref {8.2} commutes with the action of
$W^{aff}$. Moreover we may replace $\mathbb Z$ in its conclusion
by the field $\mathbb Q$ of rational numbers. Now view $\mathfrak
h_{\mathbb Q}$ as the Cartan subalgebra of $\mathfrak
{sl}(2n+1,\mathbb Q)$. Let $\mathbb Q'$ denote the number field
$\mathbb Q[g]$, where as before $g = 2\cos\pi/(2n+1)$. Then
$\psi'$ is a $\mathbb QW^{aff}$ map of $\mathfrak h_{\mathbb Q}$
onto $V_{\mathbb Q'}:=\mathbb Q'\alpha+\mathbb Q'\beta$, which is
injective if $2n+1$ is prime. In the latter case the (essentially
disjoint) decomposition of $\mathfrak h_{\mathbb Q}$ into alcoves
gives a corresponding (essentially disjoint) decomposition of
$V_{\mathbb Q'}$. This is less easy to interpret geometrically as
the image of each alcove is the convex set of its extremal points
over $\mathbb Q$, rather than over $\mathbb Q'$. As a consequence
the images of the alcove interiors appear to intersect when they
are naively drawn on the plane.
\subsection{}\label{9.7}
We have already given a description of the image $T_f$ of the
fundamental alcove in \ref {9.5}. Towards describing the images of
the remaining alcoves in $V_{\mathbb Q'}$ we proceed as follows.
First retain the conventions of \ref {9.6} and revert to our
notation of $\widehat{W}$ for the Weyl group of $\mathfrak
{sl}(2n+1,\mathbb Q)$. Since the corresponding affine Weyl group
is the semidirect product of $\widehat{W}$ with the lattice of
roots, it follows from \ref {8.2} that we need only describe the
image of the fundamental alcove and its $\widehat{W}$ translates.
Indeed the images of the remaining alcoves will simply be
translates of the former by $\mathbb Z[g]\alpha +\mathbb Z[g]\beta$. This
means in particular that there will be only finitely many image
types.
The image of the weights of the defining representation of
$\mathfrak {sl}(2n+1)$ is just the $(2n+1)$-gon with vertex set
$\{W\varpi_1\}$. As is well-known the Grassmann algebra of the
corresponding module generates the remaining fundamental modules
(together with two copies of the trivial module). Consequently
the set $\textbf{W} :=\{\widehat{W}\varpi_i:i=1,2,\ldots,2n\}$ is
exactly the set of all sums (different from zero) of distinct
elements of the weights of the defining representation and has
cardinality $2^{2n+1}-2$ (with no multiplicities). Its image
under $\psi'$ is a union of non-trivial $W$ orbits and hence has
cardinality divisible by $2n+1$. Thus $2n+1$ must divide
$2^{2n+1}-2$, if $\psi'$ separates the elements of $\textbf{W}$.
This generally fails unless $2n+1$ is prime.
Recall that $W=<s_\alpha,s_\beta>$ and for each $w \in W$ set
$r_w=\mathbb Q'w\alpha$ (resp. $p_w=\mathbb Q'w\varpi_\alpha$) and
$\textbf{R}$ (resp. $\textbf{P}$) their union. It follows from
\ref {8.2} that the image under $\psi'$ of the roots of $\mathfrak
{sl}(2n+1)$ lie in $\textbf{R}$. Thus one would have expected the
image of $\textbf{W}$ to lie in $\textbf{P}$. This is false!
Rather we have
\begin {lemma} The image under $\psi'$ of $\{\widehat{W}\varpi_j: j
=1,2,\ldots,2n\}$ lies in the union of the lines $p_w: w \in W$ if
and only if $n=1,2$.
\end {lemma}
\begin {proof} Of course the case $n=1$ is trivial. The case
when $2n+1$ is prime obtains from a simple numerical criterion.
Indeed in this case we can easily calculate the cardinality of the
intersection $\psi'(\textbf{W})\cap p_e$. We claim that it equals
$2^{n+1}-2$ and hence that the cardinality of
$\psi'(\textbf{W})\cap \textbf{P}$ equals $(2^{n+1}-2)(2n+1)$.
Indeed if we take adjacent sums in the weights of the defining
representation, using the convention that
$\varpi_{-1}=\varpi_{2n+1}=0$, we obtain the set
$S:=\{\varpi_{2i+1}-\varpi_{2i-1}:i=0,1,\ldots,n\}$ of cardinality
$n+1$ in which the $\varpi_{2i}:i=1,2,\ldots,n$, have cancelled
out. Clearly all ways of cancelling out these elements in taking
sums of distinct weights in the defining representation exactly
gives all possible non-trivial sums of distinct elements in $S$,
the number of which is $2^{n+1}-2$.
Now we have seen that the cardinality of $\psi'(\textbf{W})$
equals $2^{2n+1}-2$. Thus $\psi'(\textbf{W})\subset \textbf{P}$ if
and only if $(2^{n+1}-2)(2n+1)=2^{2n+1}-2$, that is when
$2^n+1=2n+1$, or $n=1,2$.
In the case when $2n+1$ is not prime it suffices to exhibit an
element of $\textbf{W}$ whose image under $\psi'$ lies strictly
inside the dominant chamber with respect to $<\alpha,\beta>$. If
$n\geq 3$ the element $\varpi_1-\varpi_2+\varpi_4$ serves the
purpose.
\end {proof}
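The closing numerical criterion is immediate to confirm; a short Python check:

```python
# (2^{n+1} - 2)(2n+1) = 2^{2n+1} - 2 reduces to 2^n + 1 = 2n + 1,
# which holds only for n = 1, 2
good = [n for n in range(1, 64)
        if (2 ** (n + 1) - 2) * (2 * n + 1) == 2 ** (2 * n + 1) - 2]
assert good == [1, 2]
assert all((2 ** n + 1 == 2 * n + 1) == (n in (1, 2)) for n in range(1, 64))
```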
\subsection{}\label{9.8}
Though our result in \ref {8.14} rather banalizes aperiodic
tiling, the above lemma shows that the pentagonal system based on
the Golden Section has a special (or perhaps just simplifying)
feature.
Let us describe the images of all alcoves in the pentagonal case.
As before it is enough to describe those obtained from the
fundamental alcove through the action of $\widehat{W}$, since the
remaining ones are then obtained through translation by $\mathbb
Z[g]\alpha+\mathbb Z[g]\beta$, where $g$ is now the Golden
Section. Let us use $\mathscr I$ to denote the image set, namely
$W^aT_f$.
Each $T \in \mathscr I$ is the $\mathbb Q$ convex hull of its
vertex set, so it suffices to determine the latter. Every vertex
must lie in $\textbf{W}\cup \{0\}\subset\textbf{P}$. Every such
vertex set must contain $\{0\}$ which is a $W^a$ fixed point.
Moreover $W$ acts on $\mathscr I$ with every element having a
trivial stabilizer because this is already true for $W^a$. Of
course the action of $W$ as opposed to the action of $W^a$ is an
isometry. Since card$(W^a/W)=12$, we have just 12 vertex sets to
describe. The first forms the small regular pentagon $P_s$: of
its remaining four vertices, two lie at distance $g^{-1}$ from
the origin and two at distance $1$, all in $\textbf{W}$. One may
remark that ten such small pentagons can be formed as expected.
The second forms the large regular pentagon $P_l$: of its
remaining four vertices, two lie at distance $1$ from the origin
and two at distance $g$, all in $\textbf{W}$.
The third is $T_0:=T_f$ itself and in this we note that it is the
apex of this isosceles triangle which lies at $\{0\}$. There are
four further isosceles triangles $T_i:i=1,2,3,4$ in $\mathscr I$
depending on which of its four remaining vertices lies at $\{0\}$.
Finally there are five rhombi $R_i:i=0,1,2,3,4$. Each of these
has three sides of length $1$ and one side of length $g$ parallel
to one of the sides of length $1$. Its fifth vertex lies at the
intersection of its two diagonals. The latter is at a distance $1$
from two of its adjacent vertices on the boundary and at a
distance $g^{-1}$ from the opposite two. Thus as we already know
from the injectivity of $\psi'$ this "internal" vertex is not in
the $\mathbb Q$ convex set of the "external" vertices. Any one of
the five elements of its vertex set can be chosen to be $\{0\}$
and we use $R_0$ to designate the rhombus with $\{0\}$ as its
internal vertex.
As noted in \ref {9.3}, for any alcove $A$ and any four of its
vertices there is a unique element $s$ of the affine Weyl group
which fixes those four vertices. Thus there is a unique alcove, namely $sA$, which shares
with $A$ the face defined by the four fixed vertices. (Moreover
$s$ is the reflection defined by this face.) It follows that the
vertex set of $\psi'(A)$ shares exactly four elements with
$\psi'(sA)$. If $\psi'(A)$ is a large (resp. small) pentagon, then
$\psi'(sA)$ must be a rhombus (resp. triangle). If $\psi'(A)$ is
rhombus then $\psi'(sA)$ can be a triangle or a large pentagon. If
$\psi'(A)$ is a triangle then $\psi'(sA)$ can be a rhombus or a
small pentagon. For any alcove $A$ there are just five
reflections with the above property (each fixing four of the five
vertices). Call them the allowed reflections of $A$. If $s$ is an
allowed reflection of $A$ and $s'$ is an allowed reflection of $sA$
different from $s$, we call $\{s',s\}$ an $A$ compatible pair.
\begin {lemma} Let $A$ be an alcove, and $\{s',s\}$ an $A$ compatible pair.
Then $A$ and $s'sA$ share exactly three vertices. Conversely any
three vertices of $A$ define an $A$ compatible pair $\{s',s\}$ so
that $A$ and $s'sA$ share exactly the given vertices.
\end {lemma}
\begin {proof} Obviously $A$ and $s'sA$ share at least three
vertices and cannot share all five. If they share four vertices
then there is a reflection $s''$ such that $s''A=s'sA$ forcing
$s''=s's$, which is impossible since all three are reflections.
The converse is obvious.
\end {proof}
\subsection{}\label{9.9}
Retain the above notation and hypotheses. From the above lemma we
deduce that $\psi'(A)$ and $\psi'(s'sA)$ share exactly three
vertices. The possible triangles they define each have as vertex
set any three vertices of the above four shapes, namely
$T,R,P_s,P_l$. From these we obtain a straight line segment
defined by two vertices with an additional vertex dividing it in
the Golden ratio, a Golden pair coming from $P_s$ and a second
Golden pair from $P_l$ inflated by a factor of $g$. The remaining
triangles coming from $R$ and $T$ are equivalent to these.
Conversely a triangularization of one of these shapes obtained by
joining its vertices defines a set of compatible pairs of the
given alcove.
The Golden pair $\{T_1,T_2\}$ coming from the triangularization of
the small pentagon has side lengths $\{1,g^{-1},g^{-1}\}$ and
$\{1,1,g^{-1}\}$ respectively in the above normalization. Of the
remaining shapes the triangle can be decomposed into $T_1$ and two
copies of $T_2$ by lines joining its vertex set. Such a
decomposition of $R$ and $P_l$ into members of this Golden pair is
(only) possible if we add extra vertices.
Our general idea is that covering $\mathfrak h^*$ by alcoves
coming from a fixed alcove $A$ should translate under a
\textit{particular} sequence of elements of the affine Weyl group
to give a covering of the plane by the triangles obtained in Lemma
\ref {9.8}. Then aperiodicity can be introduced since different
sequences can be chosen. However the simplest interpretation of
this procedure will not give a tiling since the resulting
triangles will overlap. Worse than this the image under $\psi'$
of $P(\pi)$ is dense in the metric topology giving any such tiling
a fractal aspect. We describe a rather ad hoc remedy to this
situation in the next section.
\subsection{Aperiodic tiling from alcove packing}\label{9.10}
Assume $n=2$, that is consider the pentagonal case. Recall that
we have described the image $T_f$ under $\psi'$ of the fundamental
alcove. It is an isosceles triangle with apex at $\{0\}$, with its
equal sides of length the Golden Section $g$, the third side being
of length $1$. Finally it has two extra vertices on its equal
sides at distance $1$ from its apex. Joining these vertices and
further just one of them to an opposite vertex on the base gives
it two possible triangularizations satisfying $T = T_1*T_2*T_2$.
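Recalling from \ref {9.5} that $T_f$ is (similar to) $T\{1,2,2\}$ in the pentagonal system, the stated measurements reduce to standard properties of $g=2\cos\pi/5$; a minimal numerical check in Python:

```python
import math

g = 2 * math.cos(math.pi / 5)                 # the Golden Section
assert math.isclose(g, (1 + math.sqrt(5)) / 2)
assert math.isclose(g * g, g + 1)             # hence g^{-1} = g - 1

# in the pentagonal system the side opposite the angle a*pi/5 has length
# sin(a*pi/5)/sin(pi/5); for T_f = T{1,2,2} the equal sides are g, the base 1
assert math.isclose(math.sin(2 * math.pi / 5) / math.sin(math.pi / 5), g)
assert math.isclose(math.sin(1 * math.pi / 5) / math.sin(math.pi / 5), 1.0)
```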
To proceed we need the following
\begin {lemma} There exists a subset $\mathscr A$ of alcoves such
that the set $\{\psi'(A): A \in \mathscr A\}$ tiles the plane.
\end {lemma}
\begin {proof} Consider the triangle $T$ with vertices
$\{\psi'(0),\psi'(5\varpi_2),\psi'(5\varpi_4)\}$ and its parity
translate $T'$ with vertices
$\{\psi'(5\varpi_2+5\varpi_3),\psi'(5\varpi_2),\psi'(5\varpi_4)\}$.
Since $5\varpi_2,5\varpi_3$ both lie in $\mathbb Z\pi$, it follows
that the $\psi'(\mathbb Z\pi)$ translates of $T$ and $T'$ tile the
fundamental chamber with respect to $W$. Hence their $W$
translates tile the whole plane. Recalling that the affine Weyl
group contains $\mathbb Z\pi$ and (the image of) $W$ it remains to
show that we can choose a subset $\mathscr A_0$ of alcoves whose
images tile $T$. We choose $\mathscr A_0$ so that the images of
its elements are again triangles, twenty-five in all. We must show
that all the elements of $\mathscr A_0$ are translates of the
fundamental alcove under the affine Weyl group. Here we recall
that each of the images has exactly five vertices (coming as
explained previously as images of the vertices of the
corresponding alcoves). Now we noted in \ref {9.8} that the
images of the $W^a$ translates of $T_f$ which are triangles come
in five $W$ orbits each determined by which vertex of the triangle
lies at $\{0\}$. Thus it remains to show that each of the above
twenty-five triangles has at least one vertex (and as it turns out
only one) which is a $\mathbb Z\pi$ translate of $\{0\}$. This is
shown in Figure 11, the vertices in question being labelled by the
corresponding element of $P(\pi)$ which in each case the reader
will check lies in $\mathbb Z\pi$. (This is the only non-trivial
and slightly surprising part of the proof.)
\end {proof}
\subsection{Aperiodic tiling from alcove packing, continued}\label{9.11}
It remains to give an (aperiodic) triangularization of each
$\psi'(A): A \in \mathscr A$. This we shall do using the two
previous lemmas. First starting from the fundamental alcove
generate the remaining alcoves in $\mathscr A$ by taking a
sequence of reflections in the affine Weyl group. Of course we
get plenty of other alcoves but these we eventually discard. We
can add a predecessor to the fundamental alcove different to its
successor. Then every $A \in \mathscr A$ admits a predecessor
$A_-$ and successor $A_+$ obtained by a single generating
reflection $s$ (resp. $s'$) with $s \neq s'$. (Of course $A_-$ and
$A_+$ need not and will not belong to $\mathscr A$.) By Lemma
\ref {9.8}, the pair $(s,s')$ determines three vertices
$v_1,v_2,v_3$ of $T:=\psi'(A)$, which we recall is a triangle. Now
as explained above we always join the vertices of $\psi'(A)$ lying
on its sides and breaking it into $T_2$ and a small rhombus $R$.
In addition there will be just two ways of writing $R$ as
$T_1*T_2$. If the apex of $T$ is not among $v_1,v_2,v_3$, then
they are three vertices of $R$ which when joined give the
required decomposition. Otherwise we take the four vertices of
$\psi'(A)\cap\psi'(A_+)$, which now include exactly three vertices of
$R$ which we then join. This gives the required (aperiodic)
tiling of the plane by the Golden Pair $T_1,T_2$ where the
different tilings correspond to different sequences in the affine
Weyl group successively running through the elements of $\mathscr
A$ and of course some discarded alcoves not in $\mathscr A$.
Of course all this is a bit of a swizzle, since in particular
rather many alcoves are discarded. Again a main point in the
construction is to find a subset of alcoves whose images form a
tiling of the plane. We found one example but certainly not all.
Nor can we find all tilings of the plane that can be obtained
using the Golden Pair. In particular our construction leads to a
$1:2$ ratio in the contribution of the Golden Pair $T_1,T_2$,
whereas one might prefer to have a $1:1$ ratio. The latter could
be recovered by an equally weighted tiling of the plane using the
images of alcoves which are the triangle and small pentagon. Again
we could desist from joining the vertices of each $\psi'(A)$ lying
on its sides, which was done in all cases and so with no
aperiodicity. Then we would obtain a tiling by $gT_1,T_2$ with a
$1:1$ ratio.
A challenging problem would be to obtain a three dimensional analogue
of the above construction. Namely for some simple root system
$\pi$, find a $\mathbb QW^{aff}$ linear map from $P(\pi)$ to
$\mathbb Q^3$ and a subset $\mathscr A$ of alcoves such that their
images form a packing of $\mathbb Q^3$. Then use the possible
words in $W^{aff}$ to obtain a multitude of (that is to say
aperiodic) packings in $\mathbb Q^3$.
\section{The Even Case}
\subsection{}\label{10.1}
We shall describe the analogue of Lemma \ref {8.2} when
$W:=<s_\alpha,s_\beta>\cong\mathbb Z_n \ltimes \mathbb Z_2$ with
$n$ even. First we need to extend the factorization described in
Lemma \ref {7.3}. Recall the Chebyshev polynomials $P_n(x)$
defined in \ref {2.2}. Set $$S_n(x)=P_n(x)-P_{n-2}(x), \forall
n \geq 1. \eqno{(*)}$$ One finds that $$S_2(x)=P_2(x)-1, S_1(x) =
P_1(x) =x, \quad S_{n+1}(x)=xS_n(x)-S_{n-1}(x), \forall n \geq
2.$$ It is convenient to set $S_0(x)=1$.
\begin {lemma} For all $n \geq 1$, one has
$(i)_n \quad S_nP_{n-1}=P_{2n-1}$,
$(ii)_n \quad S_nP_n=P_{2n} +1$,
$(iii)_n \quad S_nP_{n+1} = P_{2n+1} +x$,
$(iv)_n \quad S_nP_{n-2}=P_{2n-2} - 1$.
\end {lemma}
\begin {proof} From the above formulae and those in \ref {2.2}, \ref {8.12},
one easily checks the assertions for $n=1,2$. For $ n \geq 2$ one
checks using the above recurrence relations for $S_n, P_n$, that
$(ii)_{n-1}, (iii)_{n-2}$ imply $(i)_n$, that $(i)_n, (iv)_n$
imply $(ii)_n$, that $(i)_n, (ii)_n$ imply $(iii)_n$ and that
$(i)_{n-1}, (ii)_{n-2}$ imply $(iv)_n$.
\end {proof}
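All four identities can also be confirmed exactly as polynomial identities; a Python sketch with polynomials as integer coefficient lists:

```python
def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def padd(*ps):
    n = max(len(q) for q in ps)
    return [sum(q[i] for q in ps if i < len(q)) for i in range(n)]

N = 30
P = [[1], [0, 1]]              # P_0 = 1, P_1 = x, P_{n+1} = x P_n - P_{n-1}
for _ in range(2, N):
    P.append(padd(pmul([0, 1], P[-1]), [-c for c in P[-2]]))
# S_n = P_n - P_{n-2} for n >= 2, with S_0 = 1, S_1 = x
S = [[1], [0, 1]] + [padd(P[n], [-c for c in P[n - 2]]) for n in range(2, N)]

for n in range(2, N // 2 - 1):
    assert pmul(S[n], P[n - 1]) == P[2 * n - 1]                 # (i)_n
    assert pmul(S[n], P[n]) == padd(P[2 * n], [1])              # (ii)_n
    assert pmul(S[n], P[n + 1]) == padd(P[2 * n + 1], [0, 1])   # (iii)_n
    assert pmul(S[n], P[n - 2]) == padd(P[2 * n - 2], [-1])     # (iv)_n
```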
\subsection{}\label{10.2}
The following is the analogue of Lemma \ref {7.4}.
\begin {lemma} For all $n \in \mathbb N^+$ one has
(i) The roots of $S_n(x)$ form the set $\{2\cos(2t-1)\pi/2n:
t\in \{1,2,\ldots,n\}\}$,
(ii) Suppose $m$ is odd and divides $n$. Then $S_d(x)$ divides
$S_n(x)$ with $d=n/m$.
(iii) $S_n(x)$ is irreducible over $\mathbb Q$ if and only if $n$
is a power of $2$.
(iv) Take $n>1$ and odd. Then $x$ divides $S_n(x)$ and the
quotient is irreducible over $\mathbb Q$ if and only if $n$ is
prime.
\end {lemma}
\begin {proof} By $(*)$ of \ref {2.2} and $(*)$ of \ref {10.1}
one has $$\sin \theta S_n(2\cos\theta)=\sin(n+1)\theta -
\sin(n-1)\theta.$$ The right hand side vanishes for $\theta =
(2t-1)\pi/2n : t \in \{1,2,\ldots,n\}$, whereas $\sin \theta \neq
0$. Hence (i). Then (ii) follows from (i) by comparison of
roots. Set $z=e^{i\theta}$ with $\theta = \pi /2n$. By (i) the
roots of $S_n(2x)$ are the real parts of $z^{2t-1}:t \in
\{1,2,\ldots,n\}$. Since no odd number can divide a power of $2$,
these are all primitive $4n^{th}$ roots of unity. These are then
permuted by the Galois group of $\mathbb Q[z]$ over $\mathbb Q$
and so are their real parts. Therefore they cannot satisfy over
$\mathbb Q$ a polynomial equation of degree $<n$. Hence (iii).
The proof of (iv) is similar.
\end {proof}
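Part (i), the vanishing at $x=0$ for odd $n$ used in (iv), and an instance of (ii) can be confirmed numerically; a short Python sketch evaluating $S_n$ as $P_n-P_{n-2}$:

```python
import math

def p_eval(n, x):
    # P_0 = 1, P_1 = x, P_{k+1} = x P_k - P_{k-1}
    a, b = 1.0, float(x)
    for _ in range(n):
        a, b = b, x * b - a
    return a

def s_eval(n, x):
    # S_n = P_n - P_{n-2}, valid for n >= 2
    return p_eval(n, x) - p_eval(n - 2, x)

# (i): the roots of S_n are 2 cos((2t-1)pi/2n), t = 1, ..., n
for n in range(2, 13):
    for t in range(1, n + 1):
        assert abs(s_eval(n, 2 * math.cos((2 * t - 1) * math.pi / (2 * n)))) < 1e-9

# (iv): x divides S_n for odd n
for n in range(3, 13, 2):
    assert s_eval(n, 0.0) == 0.0

# (ii) for n = 6, m = 3, d = 2: each root of S_2 is a root of S_6
for r in (math.sqrt(2.0), -math.sqrt(2.0)):
    assert abs(s_eval(6, r)) < 1e-9
```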
\subsection{}\label{10.3}
As in \ref {2.2} we consider a $2 \times 2$ Cartan matrix with
off-diagonal entries $\alpha^\vee(\beta) = -y, \beta^\vee(\alpha)
=-1$, regarding $\{\alpha,\beta\}$ as a simple root system with
Weyl group given by $W=<s_\alpha,s_\beta>$ with the generators
being defined as in \ref {2.1}. Set $y=x^2$ with $x=2\cos\pi/m; m
\geq 3$. Previously we had considered the case when $m$ is odd,
say $m=2n+1$ and shown (Lemma \ref {8.2}) that this root system
together with its augmented Weyl group $W^a$ could be obtained
from a root system of type $A_{2n}$. Here we establish the related
results when $m$ is even. This divides into two cases depending
on whether $m$ is divisible by $4$ or not. This is not surprising
since the Cartan matrix is a system of type $B_2$ (resp. $G_2$) if
$m=4$ (resp. $m=6$).
\subsection{}\label{10.4}
Observe that $S_k(x)$ is a polynomial in $y=x^2$ if $k$ is even
which we write as $T_k(y)$. Again if $k$ is odd then $S_k(x)$ is
divisible by $x$ and $\frac{1}{x}S_k(x)$ is a polynomial in $y$
which we write as $T_k(y)$.
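The parity observation can be illustrated mechanically; the sketch below is our own check, using the recurrence $S_k=xS_{k-1}-S_{k-2}$ with seeds $S_1=x$, $S_2=x^2-2$ (the seed $S_0=2$ below is a computational convenience, not the convention $S_0(x)=1$ adopted earlier):

```python
def S_list(n_max):
    """S_0, ..., S_{n_max} as coefficient lists (low degree first)."""
    qs = [[2], [0, 1]]                        # recurrence seeds S_0, S_1
    for _ in range(n_max - 1):
        prev, last = qs[-2], qs[-1]
        nxt = [0] + last                      # multiply by x
        for i, c in enumerate(prev):
            nxt[i] -= c                       # subtract S_{k-2}
        qs.append(nxt)
    return qs

def T(k, qs):
    """T_k(y): S_k (k even) or S_k/x (k odd), rewritten in y = x^2."""
    coeffs = qs[k] if k % 2 == 0 else qs[k][1:]   # drop the factor x if k odd
    assert all(c == 0 for c in coeffs[1::2])      # only even powers of x survive
    return coeffs[0::2]                           # coefficients in y = x^2

qs = S_list(8)
print("T_4(y) =", T(4, qs))   # S_4 = x^4 - 4x^2 + 2  ->  y^2 - 4y + 2
print("T_5(y) =", T(5, qs))   # S_5/x = x^4 - 5x^2 + 5 -> y^2 - 5y + 5
```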
Set $m=4n$ in \ref {10.3}. Take $x = 2\cos \pi/4n$. Then
$S_{2n}(x)=0$ by Lemma \ref {10.2}; indeed $x$ is its largest
(real) root. Let $\pi:=\{\alpha_1,\ldots,\alpha_{2n}\}$ be the
set of simple roots in type $B_{2n}$. Let
$s_i=s_{\alpha_i}:i=1,2,\ldots,2n$ be the corresponding set of
simple reflections in the Weyl group $W(B_{2n})$, with
$W=<s_\alpha,s_\beta>$ defined as in \ref {10.3}.
\begin {lemma} Set $$\psi'(\alpha_{2(n-k)})=T_{2k}(y)\alpha, \quad
\psi'(\alpha_{2(n-k)-1})=T_{2k+1}(y)\beta, \forall k
=0,1,\ldots,n-1,$$
$$\psi'(\prod_{i=1}^n s_{2i})= s_\alpha, \quad \psi'(\prod_{i=1}^n s_{2i-1})= s_\beta.$$
Then $\psi'$ extends to a $\mathbb ZW$ epimorphism of $\mathbb
Z\pi$ onto $\mathbb Z[y]\alpha +\mathbb Z[y]\beta$.
\end {lemma}
\begin {proof} This is a straightforward computation, using the
relations in \ref {2.2} to apply successive products of
$s_\alpha,s_\beta$ to $\mathbb Z[y]\alpha +\mathbb Z[y]\beta$.
Compared with the corresponding products on the left hand side of
the second equation above applied to $\mathbb Z\pi$, this gives two
different ways to compute the left hand side of the first equation
above, and one checks that both give the right hand side.
\end {proof}
\subsection{}\label{10.5}
By \ref {10.2}, the map $\psi'$ of \ref {10.4} is injective if and
only if $n$ is a power of $2$. However we can make it injective
for all $n$ by reinterpreting $\mathbb Z[y]$ as the ring generated
by $y$ and satisfying exactly the relation $T_{2n}(y)=0$. Of
course this ring has zero divisors if $n$ is not a power of $2$
and so cannot be embedded in $\mathbb R$.
Let us adopt the above interpretation of $\mathbb Z[y]$ so that
$\psi'$ becomes an isomorphism. Then we can recover the
generating reflections of $W(B_{2n})$ by the same means as used in
type $A_{2n}$. Observe that $M:=\mathbb Z[y]$ is a free $\mathbb
Z$ module of rank $n$. However it now has \textit{two} natural
bases, namely $\{T_{2(n-i)}\}$ which we shall use to define the
$s_{\alpha,i}$ and $\{T_{2(n-i)+1}\}$ which we shall use to define
the $s_{\beta,i}$, for $i=1,2,\ldots,n$. More precisely
$s_{\alpha,i}$ is determined by $(*)$ of \ref {3.8} with $m_i:
m\in M$ defined by extending $\mathbb Z$-linearly the rule
$(T_{2(n-i)})_j=\delta_{i,j}:i,j=1,2,\ldots,n$ and similarly
$s_{\beta,i}$ is determined by $(*)$ of \ref {3.8} with $m_i: m\in
M$ defined by extending $\mathbb Z$-linearly the rule
$(T_{2(n-i)+1})_j=\delta_{i,j}:i,j=1,2,\ldots,n$. (This dichotomy
is essentially a result of replacing $x$ by $y$.) Then the
augmented Weyl group $W^a$ is defined as before to be the group
generated by the $s_{\alpha,i},s_{\beta,i}:i=1,2,\ldots,n$. With
these conventions we obtain the following
\begin {lemma} Set $$\psi'(s_{2i})=s_{\alpha,i}, \quad
\psi'(s_{2i-1})=s_{\beta,i}, \forall i=1,2,\ldots,n.$$
Then
(i) $\psi'$ extends to an isomorphism of $W(B_{2n})$ onto $W^a$.
Denote this common group by $\widehat{W}$.
(ii) $\psi'$ extends to a $\mathbb Z\widehat{W}$ module
isomorphism of $\mathbb Z\pi$ onto $M\alpha +M\beta$.
\end {lemma}
\begin {proof} For example
$$\alpha^\vee(\alpha_{2i-1})_i = -(x^2T_{2(n-i)+1})_i =
-(T_{2(n-i)+2}+T_{2(n-i)})_i=-1,$$ which gives
$s_{\alpha,i}\psi'(\alpha_{2i-1})=
\psi'(\alpha_{2i-1}+\alpha_{2i})$, as required. Similarly for
example
$$\beta^\vee(\alpha_{2i})_i=
-(x^{-1}xT_{2(n-i)})_i=-(T_{2(n-i)+1}+T_{2(n-i)-1})_i=-1,$$ which
gives $s_{\beta,i}\psi'(\alpha_{2i})=
\psi'(\alpha_{2i-1}+\alpha_{2i})$, as required.
\end {proof}
\subsection{}\label{10.6}
Set $m=4n+2$ in \ref {10.3}. Take $x = 2\cos \pi/(4n+2)$. Then
$S_{2n+1}(x)=0$ by Lemma \ref {10.2}; indeed $x$ is its largest
(real) root. Let $\pi:=\{\alpha_1,\ldots,\alpha_{2n+1}\}$ be the
set of simple roots in type $B_{2n+1}$. Let
$s_i=s_{\alpha_i}:i=1,2,\ldots,2n+1$ be the corresponding set of
simple reflections in the Weyl group $W(B_{2n+1})$, with
$W=<s_\alpha,s_\beta>$ defined as in \ref {10.3}.
\begin {lemma} Set $$\psi'(\alpha_{2(n-k)+1})=T_{2k}(y)\alpha,\forall k
=0,\ldots,n, \quad \psi'(\alpha_{2(n-k)})=T_{2k+1}(y)\beta,
\forall k =0,\ldots,n-1,$$
$$\psi'(\prod_{i=1}^{n+1}s_{2i-1})= s_\alpha, \quad \psi'(\prod_{i=1}^n s_{2i})= s_\beta.$$
Then $\psi'$ extends to a $\mathbb ZW$ epimorphism of $\mathbb
Z\pi$ onto $\mathbb Z[y]\alpha +\mathbb Z[y]\beta$.
\end {lemma}
\begin {proof} Similar to that of Lemma \ref {10.4}.
\end {proof}
\subsection{}\label{10.7}
Take $n=1$ in \ref {10.6}. Then $T_3(y)=y-3=0$, and so
$\psi'(\alpha_3)=\alpha$ whilst $\psi'(\alpha_1)=T_2(y)\alpha=(y-2)\alpha = \alpha$.
Consequently $\psi'$ is not injective in this case. Indeed
$<\alpha,\beta>$ is a system of type $G_2$ whilst $\pi$ is of type
$B_3$. More generally $\psi'$ is never injective and this remains
true even if we interpret $M:=\mathbb Z[y]$ as the ring generated
by $y$ satisfying the relation $T_{2n+1}(y)=0$. The trouble is
that $T_{2n+1}$ is a polynomial of degree $n$ in $y$, whilst there
are $n+1$ simple roots which $\psi'$ maps to $M\alpha$. To
recover injectivity we define $M_\alpha$ as the ring generated by
$y$ satisfying the relation $yT_{2n+1}(y)=0$, whilst we define
$M_\beta$ as the ring generated by $y$ satisfying the relation
$T_{2n+1}(y)=0$. One checks that the conclusion of Lemma \ref
{10.6} remains valid when the target space of $\psi'$ is replaced
by $M_\alpha\alpha+M_\beta\beta$. (This fails if we also take
$M_\beta$ to be the ring generated by $y$ satisfying the
relation $yT_{2n+1}(y)=0$.) By construction $\psi'$ becomes
injective.
Now determine $s_{\alpha,i}$ by $(*)$ of \ref {3.8} with $m_i:
m\in M_\alpha$ defined by extending $\mathbb Z$-linearly the rule
$(T_{2(n+1-i)})_j=\delta_{i,j}:i,j=1,2,\ldots,n+1$ and similarly
determine $s_{\beta,i}$ by $(*)$ of \ref {3.8} with $m_i: m\in
M_\beta$ defined by extending $\mathbb Z$-linearly the rule
$(T_{2(n-i)+1})_j=\delta_{i,j}:i,j=1,2,\ldots,n$.
\begin {lemma} Set $$\psi'(s_{2i-1})=s_{\alpha,i},\forall i=1,2,\ldots,n+1, \quad
\psi'(s_{2i})=s_{\beta,i}, \forall i=1,2,\ldots,n.$$
Then
(i) $\psi'$ extends to an isomorphism of $W(B_{2n+1})$ onto $W^a$.
Denote this common group by $\widehat{W}$.
(ii) $\psi'$ extends to a $\mathbb Z\widehat{W}$ module
isomorphism of $\mathbb Z\pi$ onto $M_\alpha\alpha +M_\beta\beta$.
\end {lemma}
\begin {proof} For example
$$\alpha^\vee(\alpha_{2i})_i = -(x^2T_{2(n-i)+1})_i =
-(T_{2(n-i)+2}+T_{2(n-i)})_i=-1,$$ which gives
$s_{\alpha,i}\psi'(\alpha_{2i})=
\psi'(\alpha_{2i-1}+\alpha_{2i})$, as required. Similarly for
example
$$\beta^\vee(\alpha_{2i+1})_i=
-(x^{-1}xT_{2(n-i)})_i=-(T_{2(n-i)+1}+T_{2(n-i)-1})_i=-1,$$ which
gives $s_{\beta,i}\psi'(\alpha_{2i+1})=
\psi'(\alpha_{2i}+\alpha_{2i+1})$, as required.
\end {proof}
\subsection{}\label{10.8}
We have constructed the augmented Weyl group $W^a$ from
$W:=<s_\alpha,s_\beta>\cong\mathbb Z_m \rtimes \mathbb Z_2$ when
$m=4n$ (resp. when $m=4n+2$) and shown it to be isomorphic to
$W(B_{2n})$ (resp. $W(B_{2n+1})$). However the construction is
rather ad hoc and it is not so obvious what one should do for an
arbitrary finite reflection group. From this construction we may
go on to describe the crystals which result, in a manner analogous
to the case described in \ref {7.7}. However this does not seem
to be particularly interesting. Yet we note that there are now
two ways of interpreting the $B(\infty)$ crystal for $m=6$, either
as in type $G_2$ or in type $B_3$. However these cannot give the
same crystal, as the number of positive roots is different in the
two cases. This is ultimately a consequence of the failure of
the injectivity of $\psi'$, which we only rather artificially
restored. Again we note that the image of the root diagram of $B_3$
under $\psi'$ is just the root diagram of $G_2$, where for example
$\alpha_1$ and $\alpha_3$ coalesce to a single root.
In Figure 12 we have drawn the images under $\psi'$ of the root
diagram in type $B_4$ given a weight triangularization. We
recall that in this case $\psi'$ is injective.

On the left, the dodecahedron with co-ordinates given in \ref
{6.6}. Projected onto one face it gives the pentagonal root
system for which a crystal in the sense of Kashiwara is
constructed. On the right, the root diagram of $A_4$ presented on
the plane through the map defined in \ref {6.5} and with a weight
triangularization exhibiting a tiling by the Golden Pair.

Zig-zag triangularization of regular $n$-gons. For $n=3,4,6$, alcoves
are obtained (see \ref {9.2}). For $n=5$ one obtains the Golden Pair
\ref {8.7}. For $n \geq 6$ additional triangles may be obtained
as indicated by the dotted lines. These may be required for
further weight triangularizations. For example see Figure 3.

The root diagram given a weight triangularization in type $A_6$,
presented on the plane through the map $\psi'$ defined in \ref {8.3}.

The relation $T_0*T_0*T_2*T_3 =p_1T_0$. Either one of the
triangles of type $T_0$ with vertices on the regular heptagon is
cut into four triangles through its intersection with the second
such triangle. The resulting four triangles are given by the left hand side
above.

Symbolic presentation of the computation
$$p_{2t-1}T_{0,i,j}=p_{t-1}T_{0,i,j}*p_{t-1}T_{0,i+t,j+t}*p_{t-1}T_{0,j-i+t,j}*p_{t-1}T_{0,i,j-t}.$$
Angle sizes are given up to multiples of $\pi/(2n+1)$,
having been computed through the indices of the vertices on the circumference.
Consider the isosceles triangle
$T:=T\{t,t,2n+1-2t\}$ in the lower left hand corner. Its dotted edge
has length $s_1=p_{j-i-t-1}$ because it joins the vertices
$i+1,j$. Through $T$, this forces
$s_2=p_{t-1}p_{j-t-i-1}/p_{2t-1}$. In a similar fashion one
shows that $s_3 =p_{t-1}p_{j-t-1}/p_{2t-1}$. The triangle $T'$
with these two edge lengths subtending an angle $i$ is hence
completely determined and is given by $T'=T\{i,j-i-t,2n+1-j+t\}$.
This in turn implies that $s_{1,3}=p_{i-1}p_{t-1}/p_{2t-1}$.
Repeating this computation for the other two sides of the central
triangle $T''$ shows it to be $p_{t-1}/p_{2t-1}T_{0,i,j}$. The
data for the three remaining triangles which form $T_{0,i,j}$ are
simultaneously obtained and together give the required assertion.

Decomposition of $p_2T\{3,3,3\}$ into nine triangles illustrating
$(*)$ of \ref {8.10}.

Decomposition of $p_2T\{3,3,5\}$ into nine triangles. To be
contrasted with the previous figure.

Symbolic presentation of the decomposition of
$p_2T\{i,j,k\}$ into nine triangles in $\mathscr T_{2n+1}$,
whose angles (as multiples of $\pi/(2n+1)$) and edge lengths are as indicated.
Here $i,j,k \geq 3$ and $i+j+k = 2n+1$, with $n$ an integer
$\geq 4$.

Symbolic presentation of the decomposition of $p_3T\{i,j,k\}$ into
$16$ triangles with angles as multiples of $\pi/(2n+1)$ indicated.

Non-standard decompositions of $T\{3,3,3\}$ and $T\{3,3,5\}$,
that is, not satisfying \ref {8.6}.

Detailing the last part of Lemma \ref {9.10}. One has
$p_i=\varpi_i:i=1,2,3,4$. Then $p_i:i=5,6,\ldots,18$ are computed
by vector addition. One checks that all the latter lie in the
root lattice. For example
$p_5=\varpi_2+\varpi_3=\alpha_1+2\alpha_2+2\alpha_3+\alpha_4$ and
$p_6=\varpi_1+2\varpi_2=2\alpha_1+3\alpha_2+2\alpha_3+\alpha_4$.

Root diagram in $B_4$ given a weight triangularization.

\textbf{Acknowledgements}

This work was started during a sabbatical spent at the University
of British Columbia and at Berkeley. I would like to thank my
respective hosts, Jim Carrell and Vera Serganova for their
invitation. The particular inspiration for this work came from
viewing a presentation of aperiodic Penrose tiling based on the
Golden Pair (see Example 2, \ref {8.7}) displayed in the
mathematics department of the University of Geneva. I should like
to thank Anton Alekseev for his wonderful hospitality and his
enthusiasm for the ideas I presented at his seminar. I should
like to thank Bruce Westbury for drawing my attention to Chebyshev
polynomials and Anna Melnikov for finding the work interesting, as
well as helping me with \LaTeX. Figures 8, 9 were drawn by our
secretary, Diana Mandelik. I should like to thank her, as well as
Dimitry Novikov and David Peleg, for helping me with the resulting
\LaTeX.
The main results presented here were described in our seminar
``Algebraic Geometry and Representation Theory'' at the Weizmann
Institute in August, 2008.
\end{document} |
\begin{document}
\newtheorem{lemma}{Lemma}
\newtheorem{proposition}[lemma]{Proposition}
\newtheorem{theorem}[lemma]{Theorem}
\newtheorem{definition}[lemma]{Definition}
\newtheorem{hypothesis}[lemma]{Hypothesis}
\newtheorem{conjecture}[lemma]{Conjecture}
\newtheorem{remark}[lemma]{Remark}
\newtheorem{example}[lemma]{Example}
\newtheorem{property}[lemma]{Property}
\newtheorem{corollary}[lemma]{Corollary}
\newtheorem{algorithm}[lemma]{Algorithm}
\title{On the polycirculant conjecture}
\author{Aleksandr Golubchik\\
\small (Osnabrueck, Germany)}
\maketitle
\begin{abstract}
This paper develops the foundations of the theory of $k$-orbits. The
theory opens a new, simple way to investigate groups and
multidimensional symmetries.
We find the relations between the combinatorial symmetry properties of a
$k$-orbit and its automorphism group, and establish a local property of
$k$-orbits. The difference between 2-closed groups and $m$-closed groups
for $m>2$ is identified. The specific property of the $n$-orbit of the
automorphism group of the Petersen graph is explained. It is shown that
any non-trivial primitive group contains a transitive imprimitive
subgroup, and as a result it is proved that the automorphism group of a
vertex-transitive graph (a 2-closed group) contains a regular element
(the polycirculant conjecture).
Using methods of the $k$-orbit theory, we consider different
possibilities for the permutation representation of a finite group and
show that the most informative one, with respect to describing the
structure of a finite group, is the permutation representation of lowest
degree. Using this representation we obtain a simple proof of the
theorem of W. Feit and J.G. Thompson on the solvability of groups of odd
order. We describe the rather simple structure of the lowest-degree
representation of a finite group and indicate a way of constructing a
simple full invariant of a finite group.
Finally, using methods of the $k$-orbit theory, we obtain one possible
polynomial solution of the graph isomorphism problem.
\end{abstract}
\section{Introduction}
A permutation group $G$ on an $n$-element set $V$ is called regular if
each of its stabilizers (a subgroup that fixes some element $v\in V$) is
trivial. Every permutation of a regular group can be decomposed into
cycles of the same length.
A permutation group that contains a regular subgroup is called semiregular.
Let $G$ be a permutation group on an $n$-element set $V$, let $V^k$ be
the Cartesian power of $V$ and let $V^{(k)}\subset V^k$ be the
non-diagonal part of $V^k$, i.e.\ every $k$-tuple of $V^{(k)}$ has $k$
pairwise different coordinate values. The action of $G$ on $V$ induces a
partition of $V^{(k)}$ into classes of $k$-tuples associated with $G$.
This partition is called the system of $k$-orbits of $G$ on $V^{(k)}$
and we write it as $Orb_k(G)$. If $\langle v_1\ldots v_k\rangle\in
V^{(k)}$, then $G\langle v_1\ldots v_k\rangle\equiv \{\langle gv_1\ldots
gv_k\rangle : g\in G\}$ is a $k$-orbit from $Orb_k(G)$.
For the tasks considered here, the object of interest is the maximal
subgroup of $Aut(Orb_k(G))$ that maintains each $k$-orbit from
$Orb_k(G)$. We shall denote this subgroup by $aut(Orb_k(G))$. Thus
$aut(Orb_k(G))\equiv\cap_{X_k\in Orb_k(G)} Aut(X_k)$ and $G\leq
aut(Orb_k(G))$.
\begin{definition}
We call a permutation group $G$ a \emph{$k$-defined} group, if
$G=aut(Orb_k(G))$.
\end{definition}
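These definitions can be made concrete by brute force on a small example. The sketch below is our illustration, not from the paper: it computes $Orb_k(G)$ and $aut(Orb_k(G))$ for the cyclic group $C_4$ acting regularly on four points; both the group and the degree $n=4$ are arbitrary illustrative choices.

```python
from itertools import permutations

n = 4
V = range(n)
g = (1, 2, 3, 0)                                   # the 4-cycle (0 1 2 3)
G = {tuple(V)}
while True:                                        # close under multiplication by g
    new = {tuple(g[p[i]] for i in V) for p in G} - G
    if not new:
        break
    G |= new

def orbits(G, k):
    """Partition of the non-diagonal part V^(k) into k-orbits of G."""
    orbs, seen = [], set()
    for t in permutations(V, k):
        if t not in seen:
            orb = {tuple(p[i] for i in t) for p in G}
            seen |= orb
            orbs.append(frozenset(orb))
    return orbs

def aut_orb(G, k):
    """aut(Orb_k(G)): permutations of V preserving every k-orbit setwise."""
    orbs = orbits(G, k)
    return [h for h in permutations(V)
            if all({tuple(h[i] for i in t) for t in X} == set(X) for X in orbs)]

print(len(G), len(orbits(G, 2)), len(aut_orb(G, 2)))
```

For this example only the four translations preserve each of the three $2$-orbits, so $aut(Orb_2(G))=G$; that is, $C_4$ in its regular action is $2$-defined.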
There is the obvious property of a $k$-defined group:
\begin{proposition}
If a group $G$ is $k$-defined, then it is $(k+1)$-defined.
\end{proposition}
{\bfseries Proof:\quad}
If a group $G$ is $k$-defined, then, on the one hand,
$aut(Orb_{k+1}(G))\leq aut(Orb_k(G))=G$ and, on the other hand,
$G\leq aut(Orb_{k+1}(G))$, whence $G=aut(Orb_{k+1}(G))$. $\Box$
A $k$-defined group is called $k$-closed if it is not $(k-1)$-defined.
P. Cameron \cite{Cameron} has described the conjecture of M. Klin that
every $2$-closed transitive group is semiregular, and the similar
polycirculant conjecture of D. Maru\v si\v c that
every vertex-transitive finite graph has a regular automorphism.
We shall prove these conjectures in the next reformulation:
\begin{definition}\label{tr.pr}
We shall single out, within the conventional definitions of primitivity
and imprimitivity of permutation groups, the case of a cyclic group of
prime order. We shall say that these groups are \emph{trivial primitive}
and \emph{trivial imprimitive}. The reason for this distinction will
become clear below.
\end{definition}
\begin{theorem}\label{2cl.impr->reg}
The $2$-closure of a transitive, imprimitive permutation group contains a
regular element.
\end{theorem}
\begin{lemma}\label{tr.impr<tr.pr}
A non-trivial primitive permutation group contains a transitive,
imprimitive subgroup.
\end{lemma}
In order to prove these statements we shall study the symmetry
properties of $k$-orbits.\footnote{The objects that generalize the
symmetry properties of $k$-orbits were applied by the author to a
polynomial solution of the graph isomorphism problem. Part of these
investigations is used in
\url{http://arXiv.org/find/math/1/AND+au:+Golubchik_Aleksandr+ti:+AND+polynomial+algorithm/0/1/0/2002/0/1}}
\section{$k$-Orbits}
A $k$-orbit $X_k$ is a set of $k$-tuples with the property
$X_k=Aut(X_k)\alpha_k$ for a $k$-tuple $\alpha_k\in X_k$. Such $k$-sets
we shall call \emph{automorphic} $k$-sets.
Everything written below becomes easier to understand if one represents
a $k$-orbit as a matrix whose rows are $k$-tuples and whose columns are
coordinate values. A $k$-orbit can be represented by various matrices
that differ by a permutation of rows; different row orders exhibit
different symmetry properties of the $k$-orbit. For example, the
$3$-orbit of the symmetric group $S_3$ can be represented as
$$
\begin{array}{lcr}
\begin{array}{||ccc||}
\hline
1 & 2 & 3 \\
2 & 3 & 1 \\
3 & 1 & 2 \\
\hline
1 & 3 & 2 \\
3 & 2 & 1 \\
2 & 1 & 3 \\
\hline
\end{array} &
\mbox{ or } &
\begin{array}{||cc|c||}
\hline
1 & 2 & 3 \\
2 & 1 & 3 \\
\hline
1 & 3 & 2 \\
3 & 1 & 2 \\
\hline
2 & 3 & 1 \\
3 & 2 & 1 \\
\hline
\end{array}\,.
\end{array}
$$
In order to indicate the number $l$ of $k$-tuples in a $k$-orbit $X_k$,
we shall call it an $(l,k)$-orbit or write it as $X_{lk}$.
$k$-Orbits have the following general counting property:
\begin{proposition}\label{homo}
Let $X_k$ be a $k$-orbit and $l\leq k$. Then the restrictions of the
$k$-tuples of $X_k$ to fixed coordinates $i_1,\ldots, i_l\in [1,k]$ form
a homogeneous multiset of $l$-tuples (i.e.\ all $l$-tuples in this
multiset have the same multiplicity).
\end{proposition}
{\bfseries Proof:\quad}
Let the two $k$-subsets $Y_k(u_1,u_2,\ldots,u_l),Z_k(v_1,v_2,\ldots,v_l)
\subset X_k$ consist of all $k$-tuples whose $l$ coordinates
$i_1,\ldots, i_l$ equal $u_1,u_2,\ldots,u_l$ and $v_1,v_2,\ldots,v_l$
correspondingly. Let $g\in Aut(X_k)$ be a permutation such that $\langle
v_1\ldots v_l\rangle =\langle gu_1\ldots gu_l\rangle$; then $Z_k=gY_k$.
$\Box$
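The homogeneity property is easy to check by machine on the $3$-orbit of $S_3$ shown above; the sketch below is our illustration, and the coordinate subsets tested are arbitrary choices.

```python
from itertools import permutations
from collections import Counter

V = range(3)
G = list(permutations(V))                            # the symmetric group S_3
X3 = {tuple(p[i] for i in (0, 1, 2)) for p in G}     # the 3-orbit of <0,1,2>

for coords in [(0,), (1,), (0, 2), (1, 0)]:
    counts = Counter(tuple(t[i] for i in coords) for t in X3)
    assert len(set(counts.values())) == 1            # homogeneous multiset
    print(coords, "-> multiplicity", set(counts.values()))
```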
The following constructions simplify the study of $k$-orbits. We call a
$k$-orbit a \emph{cyclic $k$-orbit}, or simply a \emph{$k$-cycle}, if it
is generated by a single permutation. A $k$-cycle that consists of $l$
$k$-tuples we write as an $(l,k)$-cycle. The order of a permutation
generating a $k$-cycle can differ from the number of $k$-tuples in the
$k$-cycle. The structure of $k$-cycles is simple enough and can be
described with four structural elements:
\begin{example}\label{simple-struc}
$$
\begin{array}{||cc|cc||}
\hline
1 & 2 & 3 & 4 \\
2 & 1 & 4 & 3 \\
\hline
\end{array}\,, \
\begin{array}{||cc|cc||}
\hline
1 & 2 & 3 & 4 \\
2 & 1 & 3 & 4 \\
\hline
\end{array}\,, \
\begin{array}{||cc|cc||}
\hline
1 & 2 & 3 & 4 \\
2 & 1 & 5 & 6 \\
\hline
\end{array}\,, \
\begin{array}{||cccc|cc||}
\hline
1 & 2 & 3 & 4 & 5 & 6 \\
2 & 3 & 4 & 1 & 6 & 5 \\
3 & 4 & 1 & 2 & 5 & 6 \\
4 & 1 & 2 & 3 & 6 & 5 \\
\hline
\end{array}\,.
$$
\end{example}
The first example shows a $(2,4)$-cycle that is a \emph{concatenation} of
two $(2,2)$-cycles. Such a $(k,k)$-cycle, that is, a $k$-orbit of a cycle of
length $k$, we shall call a \emph{$k$-rcycle}. This term is an
abbreviation of ``right cycle'' and indicates the invariance of such a
$(k,k)$-cycle under cyclic permutation not only of its $k$-tuples
but also of the coordinates of the $k$-tuples, i.e.\ the invariance of the
$(k,k)$-cycle under not only the left but also the right action of a
permutation (see below).
The second is a $(2,4)$-cycle with fixed points. It is represented by the
concatenation of the $2$-rcycle and the trivial $(2,2)$-multiorbit,
consisting of a single $2$-tuple. Such a $k$-multiorbit we shall call a
\emph{$k$-multituple} or an $(l,k)$-multituple.
The third example is the concatenation of the $2$-rcycle and a $2$-orbit
that consists of two $2$-tuples with disjoint sets of coordinate
values. This kind of $(l,k)$-orbit we shall call an
\emph{$S_l^k$-orbit}. The designation indicates that this $(l,k)$-orbit
consists of $lk$ elements of $V$ and its automorphism group is a
subdirect product of symmetric groups $S_l(B_i)$, where $B_i\subset V$,
$i\in [1,k]$, $|B_i|=l$ and $B_i\cap B_j=\emptyset$ for $i\neq j$. From
this definition it follows that any $(l,1)$-orbit (an $l$-element set)
is an $S_l^1$-orbit.
The fourth example shows a possible structure of a $k$-cycle whose
length is not prime. It is seen that the fourth case can be represented
through the first three cases. So these three cases are fundamental for
the construction of any $k$-orbit of any finite group.
One of our tasks is the study of permutation actions on $k$-orbits.
Indeed, there exist different possibilities for a permutation to act on
$k$-orbits, arising from their different symmetry properties.
We shall start with the consideration of permutation actions on an
$n$-orbit $X_n$ of a group $G$ of degree $n$.
\subsection{The actions of permutations on $n$-sets}
An $n$-orbit $X_n$ of a group $G$ is a set of $n$-tuples, any pair of
which defines a permutation from $G$. So we can represent any $n$-tuple
$\alpha_n=\langle u_1\ldots u_n\rangle$ as a permutation
$$
g_{\alpha_n}=
\left( \begin{array}{ccc}
v_1 & \ldots & v_n \\
u_1 & \dots & u_n
\end{array}
\right),
$$
where the $n$-tuple $\langle v_1\ldots v_n\rangle$ corresponds to the unit of
$G$ and will be called the \emph{initial} $n$-tuple. Of course, any
$n$-tuple from $X_n$ can be chosen as the initial one. The defining
property of the initial $n$-tuple is that the value of each of its
coordinates equals its position. Here it is accepted that the sets of
coordinate values and coordinate positions coincide, and that, whenever
necessary, a linear order on this set is fixed in order to order the
coordinates. The next example shows two different orders of coordinates
of the same $2$-orbit:
$$
\begin{array}{lcr}
\begin{array}{||cc||}
\hline
1 & 2 \\
\hline
1 & 2 \\
2 & 1 \\
\hline
\end{array}\,,
&&
\begin{array}{||cc||}
\hline
2 & 1 \\
\hline
1 & 2 \\
2 & 1 \\
\hline
\end{array}\,.
\end{array}
$$
In the first case the initial $2$-tuple is $\langle 12\rangle$, in the
second it is $\langle 21\rangle$.
Further we shall adopt the following rule for permutation multiplication:
$$
\left(
\begin{array}{ccc}
v_1 & \dots & v_n \\
u_1 & \dots & u_n
\end{array}
\right)
\left(
\begin{array}{ccc}
v_1 & \dots & v_n \\
w_1 & \dots & w_n
\end{array}
\right)
=
\left(
\begin{array}{ccc}
w_1 & \dots & w_n \\
x_1 & \dots & x_n
\end{array}
\right)
\left(
\begin{array}{ccc}
v_1 & \dots & v_n \\
w_1 & \dots & w_n
\end{array}
\right).
$$
From this rule it follows that the left action of the permutation
$$
\left(
\begin{array}{ccc}
v_1 & \dots & v_n \\
u_1 & \dots & u_n
\end{array}
\right)
$$
on the $n$-tuple $\alpha_n=\langle w_1\dots w_n\rangle$ gives the $n$-tuple
$\beta_n=\langle x_1\dots x_n\rangle$ that can be considered as:
\begin{enumerate}
\item
the changing of (number) values of coordinates of the $n$-tuple $\alpha_n$;
\item
the mapping of the $n$-tuple $\alpha_n$ coordinate-wise onto an $n$-tuple
$\beta_n$.
\end{enumerate}
The right action of the permutation
$$
\left(
\begin{array}{ccc}
v_1 & \ldots & v_n \\
w_1 & \dots & w_n
\end{array}
\right)
$$
on the $n$-tuple $\alpha_n=\langle u_1\dots u_n\rangle$ gives the $n$-tuple
$\beta_n=\langle x_1\dots x_n\rangle$ that can be interpreted as:
\begin{enumerate}
\item
the permutation of coordinates of the $n$-tuple $\alpha_n$;
\item
the mapping of the $n$-tuple $\alpha_n$ coordinate-wise onto an $n$-tuple
$\beta_n$.
\end{enumerate}
In each case we shall choose whichever interpretation of the permutation
action is more suitable.
If an $n$-orbit $X_n$ contains an $n$-tuple $\alpha_n=\langle v_1\ldots
v_n\rangle$, then $X_n=\{\langle gv_1\ldots gv_n\rangle : g\in G\}$ and
$|X_n|=|G|$. Here we have used the first method of permutation action on
an $n$-tuple, namely: a permutation $g$ changes the values of the
coordinates of $\alpha_n$, or acts on the permutation $g_{\alpha_n}$
from the left ($gg_{\alpha_n}$). We shall also say that a permutation
$g$ acts from the left on the $n$-tuple $\alpha_n$ and write this action
as $g\alpha_n$.
The second method gives $X_n=\{\langle v_{g1}\ldots v_{gn}\rangle: g\in
G\}$. It is an action of a permutation $g$ on the order of the
coordinates of the $n$-tuple $\alpha_n$, or the action $g_{\alpha_n}g$
of $g$ on $g_{\alpha_n}$ from the right. We shall say in this case that
$g$ acts on the $n$-tuple $\alpha_n$ from the right and write this
action as $\alpha_ng$.
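The two interpretations can be summarised in a short sketch (our illustration): with permutations written as tuples of images, the left action relabels values and agrees with the composition $gg_{\alpha_n}$, while the right action permutes positions and agrees with $g_{\alpha_n}g$.

```python
def left(g, t):
    """g acting on values: <v_1 ... v_n> -> <g v_1 ... g v_n>."""
    return tuple(g[v] for v in t)

def right(g, t):
    """g acting on positions: <v_1 ... v_n> -> <v_{g1} ... v_{gn}>."""
    return tuple(t[g[i]] for i in range(len(t)))

def compose(a, b):
    """(a b)(i) = a(b(i)), permutations as tuples of images."""
    return tuple(a[b[i]] for i in range(len(a)))

g = (1, 2, 0)            # the 3-cycle 0 -> 1 -> 2 -> 0
alpha = (0, 2, 1)        # an n-tuple (n = 3), identified with g_alpha

assert left(g, alpha) == compose(g, alpha)      # left action = g g_alpha
assert right(g, alpha) == compose(alpha, g)     # right action = g_alpha g
print(left(g, alpha), right(g, alpha))          # in general the two differ
```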
\paragraph{$n$-Orbits of left and right cosets of a subgroup.}
Let $A$ be a subgroup of $G$, let $Y_n$ be an $n$-orbit of $A$ and $g\in G$;
then $gY_n$ is a subset of $X_n$ that represents the permutations from a
left coset of $A$ in $G$, and the subset $Y_ng$ represents the
permutations from a right coset of $A$ in $G$.
\begin{definition}\label{GA,AG}
For convenience we shall write the sets of left ($G/A$) and right
($A\backslash G$) cosets of $A$ in $G$ as $GA\equiv \{gA: g\in G\}$ and
$AG\equiv \{Ag: g\in G\}$.
The notation so defined is easily distinguished from group
multiplication, because the set product of a group with its subgroup is
simply the group itself. The same remark applies to the analogous
formulas for the action of a group on $k$-orbits.
Accordingly we write $GY_n=\{gY_n: g\in G\}$,
$Y_nG=\{Y_ng: g\in G\}$, $AX_n=\{A\alpha_n: \alpha_n\in X_n\}$ and
$X_nA=\{\alpha_nA: \alpha_n\in X_n\}$.
\end{definition}
From this definition and the notions of left and right cosets of a
subgroup we obtain:
\begin{lemma}\label{GY_n=X_nA}
Let the $n$-orbit $Y_n$ of a subgroup $A<G$ contain the initial
$n$-tuple; then the partitions of $X_n$ into $n$-subsets of left and
right cosets of $A$ in $G$ are $L_n=GY_n=X_nA$ and $R_n=Y_nG=AX_n$.
\end{lemma}
Let $Y_n$ be an $n$-orbit of a subgroup $A$ of a group $G$ and $g\in G$.
\begin{lemma}\label{Y_ng}
The $n$-subset $Y_ng$ is also an $n$-orbit of the subgroup $A$.
\end{lemma}
{\bfseries Proof:\quad}
The right action of a permutation on an $n$-orbit changes the order of
the coordinates of its $n$-tuples. The permutation defined by any pair
of $n$-tuples does not depend on the order of the coordinates; hence
every permutation defined by a pair of $n$-tuples from $Y_ng$ belongs
to $A$. $\Box$
\begin{lemma}\label{gY_n}
The $n$-subset $gY_n$ is an $n$-orbit of the subgroup $B=gAg^{-1}$
conjugate to $A$.
\end{lemma}
{\bfseries Proof:\quad}
The $n$-subsets $gY_n$ and $gY_ng^{-1}$ define, as in lemma \ref{Y_ng},
the same sets of permutations from $G$. But the set of $n$-tuples
$gY_ng^{-1}$ is by definition equivalent to the set of permutations
$B=gAg^{-1}$. $\Box$
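Both lemmas admit a direct machine check on a small example. The sketch below is our illustration, with $G=S_3$ and $A$ generated by a transposition as arbitrary choices: it recovers the group defined by pairs of $n$-tuples and verifies the two statements.

```python
def compose(a, b):
    """(a b)(i) = a(b(i)), permutations as tuples of images."""
    return tuple(a[b[i]] for i in range(len(a)))

def inverse(a):
    inv = [0] * len(a)
    for i, v in enumerate(a):
        inv[v] = i
    return tuple(inv)

def group_of(Yn):
    """Set of permutations defined by pairs of n-tuples of Yn."""
    return {compose(t, inverse(s)) for s in Yn for t in Yn}

A = {(0, 1, 2), (1, 0, 2)}          # the subgroup <(0 1)> of S_3
Yn = set(A)                          # its n-orbit containing the initial tuple
g = (1, 2, 0)                        # a 3-cycle in G = S_3

Yng = {compose(t, g) for t in Yn}    # right action: permute coordinates
gYn = {compose(g, t) for t in Yn}    # left action: relabel values

assert group_of(Yng) == A                               # Lemma (Y_n g)
B = {compose(compose(g, a), inverse(g)) for a in A}     # B = g A g^{-1}
assert group_of(gYn) == B                               # Lemma (g Y_n)
print("verified on S_3")
```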
\begin{proposition}\label{Yn'=gYn''f-1}
The $n$-subsets of left and right cosets of a subgroup $A<G$ are related
by elements of $G$.
\end{proposition}
{\bfseries Proof:\quad} Let $Y_n$ be an $n$-orbit of $A$, let $Y_n'$ be the $n$-subset of
a left coset of $A$ and let $Y_n''$ be the $n$-subset of a right coset of $A$;
then there exist permutations $g,f\in G$ such that $Y_n'=gY_n$ and
$Y_n''=Y_nf$. Hence $Y_n'=gY_n''f^{-1}$. $\Box$
We shall further say ``$n$-orbit of a coset'' instead of ``$n$-subset of
a coset'' in order to emphasize that this $n$-subset is an $n$-orbit.
The same will apply to a $k$-subset, if it is a $k$-orbit.
\begin{proposition}\label{H->Yn=gYng-1}
Let $H$ be a normal subgroup of $G$; then the sets of $n$-orbits of left
and right cosets of $H$ are equal, and if an arbitrary $n$-tuple from
the $n$-orbit $Y_n$ of an arbitrary coset of $H$ is chosen as the
initial one, then $Y_n=gY_ng^{-1}$ for any permutation $g\in G$.
\end{proposition}
{\bfseries Proof:\quad}
The sets of $n$-orbits of left and right cosets of $H$ are equal because
the sets of left and right cosets of $H$ are equal.
Moreover $Y_n=gY_ng^{-1}$ for every permutation $g\in G$, because the
choice of the initial $n$-tuple determines an equivalence between $Y_n$
and $H$ relative to the action of $G$ on its $n$-orbit defined above.
$\Box$
\begin{proposition}\label{Yn=gYng-1->N}
Let $A$ be a subgroup of a group $G$, let $Y_n$ be an $n$-orbit of $A$
and let $Y_n'$, $Y_n''$ be $n$-orbits of left and right cosets of $A$
correspondingly. Let $Y_n'=Y_n''\neq Y_n$; then $A$ has a non-trivial
normalizer $N_G(A)$.
\end{proposition}
{\bfseries Proof:\quad}
There exist permutations $g,f\in G\setminus A$ such that
$Y_n'=gY_n=Y_nf=Y_n''$. From this equality and Lemma~\ref{Y_ng} it
follows that $gY_ng^{-1}$ is an $n$-orbit of $A$. As $Y_n$ contains the
initial $n$-tuple, $gY_ng^{-1}\cap Y_n\neq \emptyset$ and hence
$gY_ng^{-1}=Y_n$. So $gAg^{-1}=A$. $\Box$
\begin{proposition}\label{g*an*g-1,f*Yn*f-1}
Let $A$ be a subgroup of $G$ and $g,f\in G$ with $gag^{-1}=a$ for every
$a\in A$, $faf^{-1}\neq a$ for some $a\in A$ and $fAf^{-1}=A$. Let $Y_n$
be an $n$-orbit of $A$ and let $\overrightarrow{Y_n}$ be the set $Y_n$
taken in an arbitrary order; then
$g\overrightarrow{Y_n}=\overrightarrow{Y_n}g$ and $fY_n=Y_nf$, but
$f\overrightarrow{Y_n}\neq\overrightarrow{Y_n}f$.
\end{proposition}
\begin{lemma}\label{L_n}
Let $X_n$ be an $n$-orbit of a group $G$ and let $L_n$ be a partition of
$X_n$. If the left action of $G$ on $X_n$ maintains $L_n$, then the
classes of $L_n$ are $n$-orbits of left cosets of some subgroup of $G$.
\end{lemma}
{\bfseries Proof:\quad}
Let $Y_n\in L_n$. Since $L_n$ is a partition, the left action of
$Aut(Y_n)$ on $Y_n$ is transitive and hence $Y_n$ is an $n$-orbit.
$\Box$
In the same way we have
\begin{lemma}\label{R_n}
Let $X_n$ be an $n$-orbit of a group $G$ and let $R_n$ be a partition of
$X_n$. If the right action of $G$ on $X_n$ maintains $R_n$, then the
classes of $R_n$ are $n$-orbits of right cosets of some subgroup of~$G$.
\end{lemma}
\paragraph{Intersections and unions of left- and right-automorphic
partitions.}
\begin{proposition}\label{Y_n*Z_n}
Let $Y_n$ and $Z_n$ be $n$-orbits with $T_n=Y_n\cap Z_n\neq\emptyset$;
then $T_n$ is an $n$-orbit and $Aut(T_n)=Aut(Y_n)\cap Aut(Z_n)$.
\end{proposition}
{\bfseries Proof:\quad}
It is sufficient to choose an initial $n$-tuple from $T_n$.
$\Box$
\begin{definition}\label{covering}
Let $M$ be a set and let $A$ be a system of subsets of $M$. We say that
$A$ is a covering of $M$ if the classes of $A$ contain all elements of
$M$ and some of them have non-vacuous intersections. If all the
intersections are vacuous, then we say that (the covering) $A$ is a
partition of $M$. So we say that $A$ is a covering if it is not a
partition; we also say that $A$ is a covering if we do not know whether
it is a partition.
\end{definition}
\begin{definition}\label{automorphic}
Let $X_n$ be an $n$-orbit of a group $G$ and let $Y_n$ be an arbitrary subset
of $X_n$; then we say that $L_n=GY_n$ is a \emph{left-automorphic} covering
and $R_n=Y_nG$ a \emph{right-automorphic} covering of $X_n$. We shall also
apply this definition to the corresponding coverings of a $k$-orbit
$X_k$ of a group $G$ for $k<n$.
\end{definition}
\begin{corollary}\label{autom.-part.}
Let $L_n$ and $R_n$ be partitions of an $n$-orbit of a group $G$ into
$n$-orbits of left and right cosets of a subgroup $A<G$; then $L_n$
and $R_n$ are left-automorphic and right-automorphic partitions.
\end{corollary}
\begin{definition}\label{part. op.}
Let $M$ be a set and $P,Q$ be partitions of $M$. We write:
\begin{itemize}
\item
$P\sqsubset Q$ if for every $A\in P$ there exists $B\in Q$, so that
$A\subset B$.
\item
$P\sqcap Q$ for the partition of $M$ that consists of the intersections of
classes from $P$ and $Q$.
\item
$P\sqcup Q$ for the partition of $M$ whose classes are unions of intersecting
classes from $P$ and $Q$.
\end{itemize}
\end{definition}
\begin{proposition}\label{GA,GB;AG,BG}
Let $A,B<G$, then $GA\sqcap GB=G(A\cap B)$, $AG\sqcap BG=(A\cap B)G$,
$GA\sqcup GB=Ggr(A,B)$ and $AG\sqcup BG=gr(A,B)G$.
\end{proposition}
{\bfseries Proof:\quad}
\begin{itemize}
\item
Since $GA\sqcap GB=G(GA\sqcap GB)$ and $A\cap B\in GA\sqcap GB$,
$GA\sqcap GB=G(A\cap B)$. Analogously $AG\sqcap BG=(A\cap B)G$.
\item
Let $\{A_i:\,i\in [1,l]\}\subset GA$, $\{B_j:\,j\in [1,m]\}\subset GB$,
$A_1=A$, $B_1=B$ and $U=\cup_{(i=1,l)} A_i=\cup_{(j=1,m)} B_j$; then
$U\in GA\sqcup GB$ and $e\in U$. Let $f\in A_i$, $g\in A_k$ and let $B_j$ have
non-empty intersections with $A_i$ and $A_k$; then $f=gaba'$ for some
elements $a,a'\in A$ and $b\in B$. This shows that every element of $U$
can be represented as a product of elements from $A$ and $B$. Hence
$U=gr(A,B)$ and $GA\sqcup GB=Ggr(A,B)$. Similarly $AG\sqcup
BG=gr(A,B)G$.~$\Box$
\end{itemize}
From this proposition it follows:
\begin{lemma}\label{L_n,L_n;R_n,R_n}
Let $A,B<G$ and $X_n\in Orb_n(G)$, then $X_nA\sqcap X_nB=X_n(A\cap B)$,
$AX_n\sqcap BX_n=(A\cap B)X_n$, $X_nA\sqcup X_nB=X_ngr(A,B)$ and
$AX_n\sqcup BX_n=gr(A,B)X_n$.
\end{lemma}
\paragraph{Intersection and union of left-automorphic partition with
right-automorphic partition.}
Let $L_n=X_nA$ and $R_n=AX_n$. First we see that $L_n$ and $R_n$ have at
least one common class: the $n$-orbit of $A$ containing the initial $n$-tuple.
Then from proposition \ref{Yn=gYng-1->N} we know that if $L_n$ and $R_n$
have more than one common class, then $A$ has a non-trivial normalizer in $G$.
\begin{lemma}\label{L_n,R_n,B<A}
Let $L_n\sqcap R_n$ be non-trivial, i.e.\ let it contain a class $Z_n$ of
power $l$, where $1<l<|A|$; then subgroups conjugate to $A$ have
non-trivial intersections and $Z_n$ is an $n$-orbit of some subgroup
$B<A$.
\end{lemma}
{\bfseries Proof:\quad}
Let $U_n\in L_n$, $W_n\in R_n$ and $Z_n=U_n\cap W_n$; then $U_n$ is an
$n$-orbit of some subgroup $D$ conjugate to $A$, and $W_n$ is an $n$-orbit
of $A$. Taking into account that we can choose an initial $n$-tuple from
$Z_n$, we obtain that $Z_n$ is an $n$-orbit of the subgroup $B=A\cap D$.
$\Box$
\begin{corollary}\label{L_n-sqcap-R_n,p}
Let $A$ be a cyclic group of prime order; then $L_n\sqcap R_n$ is trivial.
\end{corollary}
The union $L_n\sqcup R_n$ can contain non-automorphic classes:
$$
\begin{array}{|c|}
\hline
123 \\
213 \\
\hline
132 \\
312 \\
\hline
231 \\
321 \\
\hline
\end{array}\ \sqcup\
\begin{array}{|c|}
\hline
123 \\
213 \\
\hline
132 \\
231 \\
\hline
312 \\
321 \\
\hline
\end{array}\ =\
\begin{array}{|c|}
\hline
123 \\
213 \\
\hline
132 \\
231 \\
312 \\
321 \\
\hline
\end{array}
$$
and is therefore of little interest for investigation; nevertheless, the
symmetry properties of this union can give information about the
structure of the studied group $G$ and help to find subgroups of $G$ that
are supergroups of $A$.
\subsection{The actions of permutations on $k$-sets}
In order to consider the actions of permutations on $k$-sets we shall need
some special operations, which we introduce first.
\subsubsection{Operations on $k$-sets}
\paragraph{Projecting and multiprojecting operators.}
Let $\alpha _k=\langle v_1\ldots v_k\rangle$ be a $k$-tuple, $l\leq
k$ and let $i_1,i_2,\ldots ,i_l$ be $l$ different coordinates from $[1,k]$.
Then $\beta_l=\langle v_{i_1}\ldots v_{i_l}\rangle$ is an $l$-tuple that we
call a \emph{projection} of the $k$-tuple $\alpha_k$ on the ordered set of
coordinates $I_l=\{i_1<i_2<\ldots<i_l\}$. We introduce a projecting
operator $\hat{p}$ and write this projection as
$\beta_l=\hat{p}(I_l)\alpha_k$. The projection of all $k$-tuples of a
$k$-set $X_k$ on $I_l$ gives an $l$-set $X_l=\hat{p}(I_l)X_k$.
The projection of all $k$-tuples of the $k$-set $X_k$ on $I_l$ that
distinguishes equal $l$-tuples is a multiset that we call a
\emph{multiprojection} of $X_k$ on $I_l$ and denote by
$\uplus_{X_k}X_l$, or simply $\uplus X_l$ if it is clear from context
which multiprojection we consider. By definition $|\uplus X_l|=|X_k|$.
Using a multiprojecting operator $\hat{p}_{\uplus}$, we shall write
$\uplus X_l=\hat{p}_{\uplus}(I_l)X_k$.
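As an illustration, here is a minimal Python sketch of the projecting and multiprojecting operators (tuples model $k$-tuples; the names `project`, `project_set` and `multiproject` are ours, not the paper's):

```python
from collections import Counter

def project(I, t):
    """p(I_l) applied to one tuple: keep coordinates i_1 < ... < i_l (1-based)."""
    return tuple(t[i - 1] for i in I)

def project_set(I, X):
    """Projection of a k-set: equal l-tuples collapse into one."""
    return {project(I, t) for t in X}

def multiproject(I, X):
    """Multiprojection: equal l-tuples are kept with their multiplicities."""
    return Counter(project(I, t) for t in X)

# The 3-orbit of S_3 (all orderings of {1,2,3}).
X3 = {(1, 2, 3), (2, 1, 3), (1, 3, 2), (3, 1, 2), (2, 3, 1), (3, 2, 1)}
X1 = project_set([1], X3)    # the 1-set {(1,), (2,), (3,)}
m1 = multiproject([1], X3)   # each 1-tuple occurs |X3|/|X1| = 2 times
```

Note that `|⊎X_l| = |X_k|` holds by construction, since the multiprojection keeps one image per $k$-tuple.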
\paragraph{Concatenating operation.}
Let $\beta_l=\langle v_1\ldots v_l\rangle$ and $\gamma_m=\langle
u_1\ldots u_m\rangle$ be an $l$-tuple and an $m$-tuple; then the $(l+m)$-tuple
$\langle v_1v_2\ldots v_lu_1u_2\ldots u_m\rangle$ is called the concatenation
of $\beta_l$ and $\gamma_m$, written $\beta _l\circ \gamma _m$.
It will also be convenient to use the concatenation of intersecting tuples.
We shall consider such a concatenation as a multiset of coordinates, for
example $\langle 12\rangle\circ \langle 23\rangle=\langle 1223\rangle$.
We shall use this concatenating operation for multisets of tuples in
the following way:
let $\uplus Y_l$ and $\uplus Z_m$ be multisets with the same number of
tuples and $\phi:\uplus Y_l\leftrightarrow \uplus Z_m$; then $\uplus
Y_l\stackrel{\phi}{\circ}\uplus Z_m$ is the $(m+l)$-multiset that
consists of the concatenations of $l$-tuples of $Y_l$ with $m$-tuples of
$Z_m$ according to the map $\phi$. We shall not write the map $\phi$ when
it is clear from context.
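A small Python sketch of the concatenating operation; for the multiset version the map $\phi$ is taken to be the positional pairing (an assumption made only for this illustration):

```python
def concat(beta, gamma):
    """Concatenation of an l-tuple with an m-tuple."""
    return beta + gamma

def concat_multisets(Y, Z):
    """Concatenate two equal-size lists of tuples pairwise;
    the positional pairing plays the role of the map phi."""
    assert len(Y) == len(Z)
    return [y + z for y, z in zip(Y, Z)]

# The paper's example of concatenating intersecting tuples: <12> o <23> = <1223>.
example = concat((1, 2), (2, 3))
```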
\paragraph{Operation properties.}\label{Op}
\begin{lemma}\label{pX_k}
The $l$-projection of a $k$-orbit is an $l$-orbit.
\end{lemma}
Let $g$ be a permutation, $\alpha_n=\langle v_1\ldots v_n\rangle$ an
$n$-tuple and $I_k$ a $k$-subspace.
\begin{lemma}\label{pgan=gpan}
$g\hat{p}(I_k)\alpha_n=\hat{p}(I_k)g\alpha_n$.
\end{lemma}
{\bfseries Proof:\quad}
It is sufficient to show the equality for $I_k=[1,k]$.
$$g\hat{p}(I_k)\alpha_n=g\langle v_1\ldots v_k\rangle
=\langle gv_1\ldots gv_k\rangle$$ and
$$\hat{p}(I_k)g\alpha_n=\hat{p}(I_k)\langle gv_1\ldots gv_n\rangle
=\langle gv_1\ldots gv_k\rangle\!.\;\Box$$
\begin{lemma}\label{p.ang=pan.g}
$\hat{p}(I_k)(\alpha_ng)=\hat{p}(gI_k)\alpha_n$.
\end{lemma}
{\bfseries Proof:\quad}
$$\hat{p}(I_k)(\alpha_ng)=\hat{p}([1,k])\langle v_{g1}\ldots
v_{gn}\rangle=\langle v_{g1}\ldots v_{gk}\rangle$$ and
$$\hat{p}(gI_k)\alpha_n=\hat{p}(g[1,k])\langle v_1\ldots v_n\rangle
=\hat{p}(\{g1<\ldots<gk\})\langle v_1\ldots
v_n\rangle =\langle v_{g1}\ldots v_{gk}\rangle.\ \Box $$
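The two lemmas can be checked numerically; a Python sketch follows (permutations as dicts, left action on values, right action on positions; the exact tuple equality in the second check uses a $g$ that preserves the order of $I_k$):

```python
def left_act(g, t):
    """g * alpha: apply g to every entry."""
    return tuple(g[v] for v in t)

def right_act(t, g):
    """alpha * g: the entry at position i becomes v_{g(i)}."""
    return tuple(t[g[i] - 1] for i in range(1, len(t) + 1))

def project(I, t):
    return tuple(t[i - 1] for i in I)

g = {1: 2, 2: 3, 3: 1}      # the 3-cycle (123)
alpha = (1, 3, 2)
I2 = [1, 2]

# Lemma: g p(I_k) alpha_n = p(I_k) g alpha_n
lhs_left = left_act(g, project(I2, alpha))
rhs_left = project(I2, left_act(g, alpha))

# Lemma: p(I_k)(alpha_n g) = p(g I_k) alpha_n, with g I_k sorted
gI2 = sorted(g[i] for i in I2)
lhs_right = project(I2, right_act(alpha, g))
rhs_right = project(gI2, alpha)
```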
The equality
$\hat{p}(I_k)(\alpha_ng)=(\hat{p}(I_k)\alpha_n)g=\alpha_k(I_k,\alpha_n)g$
is of no interest for applications, because the corresponding right
permutation action on the $k$-tuple $\alpha_k$ cannot be detached from the
$n$-tuple $\alpha_n$, as is possible for the left permutation action on the
$k$-tuple $\alpha_k$. But for convenience we shall write $\alpha_kg$
instead of $\hat{p}(I_k)(\alpha_ng)$ where this does not lead to
misunderstanding. Similarly, we consider the $k$-projection of the equalities
$GY_n=X_nA$ and $AX_n=Y_nG$ (lemma \ref{GY_n=X_nA}).
From $\hat{p}(I_k)GY_n=\hat{p}(I_k)X_nA$ it follows that
$G\hat{p}(I_k)Y_n=\hat{p}(AI_k)X_n$, where by definition \ref{GA,AG}
$$
\hat{p}(AI_k)X_n\equiv \{\hat{p}(AI_k)\alpha_n:\,\alpha_n\in X_n\}
\mbox{ and }\hat{p}(AI_k)\alpha_n=\{\hat{p}(aI_k)\alpha_n:\,a\in A\},
$$
so we can write the $k$-projection of this equality as
$GY_k(I_k)=\hat{p}(AI_k)X_n$ or (with the correct understanding) simply as
$GY_k=X_kA$.
For the second equality we have
$\hat{p}(I_k)AX_n=A\hat{p}(I_k)X_n=AX_k(I_k)$ and
$$\hat{p}(I_k)Y_nG=\hat{p}(GI_k)Y_n\equiv \{\hat{p}(gI_k)Y_n:\, g\in G\}.$$
For convenience we shall write the $k$-projection of the second equality
simply as $AX_k=Y_kG$.
\begin{proposition}\label{p(I_k)P_k*Q_k}
Let $X_n$ be an $n$-set and let $P_n$, $Q_n$ be two partitions of $X_n$. Let
$P_k=\hat{p}(I_k)P_n$ and $Q_k=\hat{p}(I_k)Q_n$ be partitions of
$X_k=\hat{p}(I_k)X_n$. The equality
$\hat{p}(I_k)(P_n\sqcap Q_n)=\hat{p}(I_k)P_n\sqcap \hat{p}(I_k)Q_n$ need not hold.
\end{proposition}
{\bfseries Proof:\quad}
A counterexample: $X_n=\{15,26,36,45\}$, $P_n=\{\{15,26\},\{36,45\}\}$ and
$Q_n=\{\{15,36\},\{26,45\}\}$. Here $P_n\sqcap Q_n$ consists of singletons,
so its projection on the second coordinate is $\{\{5\},\{6\}\}$, while the
projections of $P_n$ and $Q_n$ on the second coordinate are both
$\{\{5,6\}\}$, whose meet is $\{\{5,6\}\}$.~$\Box$
\subsubsection{Some additional definitions and auxiliary statements}
\begin{definition}
The notation $GY_k=\{gY_k:g\in G\}$ that we use for the action of a
group $G$ on a $k$-subset $Y_k$ we shall also apply to the action of a
group $G$ on a system of $k$-subsets $P_k$, as $GP_k=\{gP_k:g\in G\}$ with
$gP_k=\{gY_k:Y_k\in P_k\}$.
\end{definition}
\begin{definition}
Let $M$ be a set and let $Q$ be a set of subsets of $M$; then we write $\cup
Q\equiv \cup_{(U\in Q)}U$. From this definition it follows that for $GP_k$ we
can consider two kinds of unions: $\cup GP_k$ and $\cup\cup GP_k$. For
example, for the two sets $\{\{1,2\},\{2,3\}\}$ and
$\{\{2,3\},\{4,5\}\}$, the first union is the set
$\{\{1,2\},\{2,3\},\{4,5\}\}$ and the second is $\{1,2,3,4,5\}$.
We shall apply the symbol $\sqcup$ to the union of intersecting classes of a
system of sets, as in the example: $\sqcup
\{\{1,2\},\{2,3\},\{4,5\}\}=\{\{1,2,3\},\{4,5\}\}$.
\end{definition}
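A Python sketch of the two kinds of union and of the $\sqcup$ operation on a system of sets (function names are ours, chosen for this illustration):

```python
def union_once(Q):
    """The union that merges one level of nesting."""
    out = set()
    for U in Q:
        out |= set(U)
    return out

def merge_intersecting(classes):
    """The sqcup operation: repeatedly merge classes with common elements."""
    classes = [set(c) for c in classes]
    changed = True
    while changed:
        changed = False
        for i in range(len(classes)):
            for j in range(i + 1, len(classes)):
                if classes[i] & classes[j]:
                    classes[i] |= classes.pop(j)
                    changed = True
                    break
            if changed:
                break
    return {frozenset(c) for c in classes}

Q = [{frozenset({1, 2}), frozenset({2, 3})},
     {frozenset({2, 3}), frozenset({4, 5})}]
first = union_once(Q)                                  # {{1,2},{2,3},{4,5}}
second = union_once(first)                             # {1,2,3,4,5}
merged = merge_intersecting([{1, 2}, {2, 3}, {4, 5}])  # {{1,2,3},{4,5}}
```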
\begin{definition}
Let $\alpha_k$ be a $k$-tuple. We shall write the set of coordinates of
$\alpha_k$ as $Co(\alpha_k)$. We shall use this notation also for
a $k$-set $Y_k$, where $Co(Y_k)=\{Co(\alpha_k):\alpha_k\in Y_k\}$.
\end{definition}
\begin{lemma}\label{rm}
Let $M$ be an $m$-element set and $\uplus M$ a homogeneous multiset
with multiplicity $k$. Let $P_m$ be a multipartition of $\uplus M$ into
$k$-element multisubsets of $M$; then $P_m$ is a union of $k$
distributions of the $m$ elements of the set $M$ between the $m$ $k$-element
classes of $P_m$.
\end{lemma}
{\bfseries Proof:\quad}
We use induction on $m$. Let us represent $P_m$ as an $m\times k$
matrix whose lines are the classes of $P_m$. For $m=1$ the statement is
evidently correct. Let $m>1$ and let the first $l\leq k$ lines contain all $k$
occurrences of an element $u\in M$. By permuting elements within these $l$
lines we can place the element $u$ in all $k$ columns. Now by permuting
elements within the $k$ columns we can move the element $u$ into the first line.
Thus we obtain an $(m-1)\times k$ matrix (without the first line) that
by the induction hypothesis can be transformed (by permuting elements
within lines) so that each column contains $m-1$ different elements. Now we
need only undo the permutation of the element $u$ from the first line with the
corresponding elements in the other $l-1$ lines. $\Box$
From this lemma it follows
\begin{corollary}\label{mMmM->MM}
Let $M$ be an $m$-element set and $M_2=\uplus M\circ \uplus M$, where
$\uplus M$ is homogeneous; then $M_2$ can be partitioned into $m$-element
subsets that are concatenations $M\circ M$.
\end{corollary}
{\bfseries Proof:\quad}
Let $|\uplus M|/|M|=k$, let $M_2$ be associated with the space $I_2=I_1^1\circ
I_1^2$ and let $L_2$ be the partition of $M_2$ into $k$-element classes such
that $\hat{p}(I_1^1)L_2=M$; then $\hat{p}(I_1^2)L_2=P_m$ from lemma
\ref{rm}. $\Box$
\begin{lemma}\label{mMmM}
Let $M_2$ be a set of pairs associated with the space
$I_2=I_1^1\circ I_1^2$. If $\hat{p}_{\uplus}(I_1^1)M_2=
\hat{p}_{\uplus}(I_1^2)M_2$, then $M_2$ can be partitioned into cycles.
\end{lemma}
{\bfseries Proof:\quad}
Let $\langle u_1u_2\rangle\in M_2$; then there exists a pair $\langle
u_2u_3\rangle\in M_2$. Continuing, we obtain a first cycle $C_2=\{\langle
u_1u_2\rangle,\langle u_2u_3\rangle,\ldots, \langle u_ru_1\rangle\}$. The
set $M_2\setminus C_2$ retains the property of the set $M_2$.
$\Box$
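The proof's chaining argument can be sketched in Python (assuming, as in the lemma, that every element occurs equally often in the first and in the second position, so a successor pair always exists):

```python
def split_into_cycles(pairs):
    """Chain pairs (u, v) -> (v, w) -> ... until the start element returns."""
    remaining = list(pairs)
    cycles = []
    while remaining:
        u0, v = remaining.pop(0)
        cycle = [(u0, v)]
        while v != u0:
            nxt = next(p for p in remaining if p[0] == v)
            remaining.remove(nxt)
            cycle.append(nxt)
            v = nxt[1]
        cycles.append(cycle)
    return cycles

M2 = [(1, 2), (2, 3), (3, 1), (4, 5), (5, 4)]
cycles = split_into_cycles(M2)   # one 3-cycle and one 2-cycle
```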
\begin{definition}
Let $Y_k$ be a $k$-subset defined on a subset $U\subset V$; then by
$Aut(Y_k)$ we understand the corresponding permutation group on the set $U$. The
extension of $Aut(Y_k)$ to the set $V$ we write as $Aut(Y_k;V)$.
\end{definition}
\begin{definition}\label{sdeco}
Let $c_1c_2\ldots c_l$ be the cycle decomposition of a permutation $g$. A
product $a$ of some cycles from this decomposition we call a {\em
subdecomposition} of $g$ and write $a\subset g$.
Let $a$ be a subdecomposition of an automorphism $g\in G$. We call
the permutation $a$ a \emph{subautomorphism} of $G$ and write
$a\subset\in G$. Let $B$ be an intransitive subgroup of $G$ and let $A(U)$ be
a transitive component of $B$ on the subset $U\subset V$. We write
this fact as $A(U)<<G$ and say that $A(U)$ is a projection of $B$. It is
clear that $A(U)$ is generated by some subautomorphisms of $G$.
We can consider the action of a subautomorphism $a$ on a $k$-orbit $X_k$ of
a group $G$ by extending it to the action of some automorphism $g\in G$.
\end{definition}
\begin{definition}
Let $X_k$ be a $k$-orbit of a group $G$ and $Y_k\subset X_k$ be a
$k$-orbit of a subgroup of $G$, then we say that $Y_k$ is a $k$-suborbit
of $G$.
\end{definition}
\begin{definition}
Let $G$ be a group, $X_n\in Orb_n(G)$, let $I_k$ be a $k$-subspace and let
$Co(I_k)$ be a $1$-orbit of some subgroup $A<G$; then we say that the number
$k$ is automorphic, that $I_k$ is an \emph{automorphic subspace} and that
$X_k=\hat{p}(I_k)X_n$ is a \emph{right-automorphic $k$-orbit} or a
\emph{$k$-rorbit}.
\end{definition}
\subsubsection{The left action of permutations on $k$-sets}
The left action of a permutation on a $k$-set is the same as its action
on an $n$-set. The right action of a permutation on a $k$-set follows
from its action on an $n$-set. The right action is not as directly visible
combinatorially as the left action, so we shall begin with the left
action.
We say that two sets of $k$-tuples $Y_k$ and $Y_k'$ are {\em
$S_n$-isomorphic}, or simply isomorphic, if there exists a permutation $g\in
S_n$ such that $Y_k'=gY_k$; for example $Y_2=\{\langle 12\rangle,\langle
21\rangle\}$ and $Y_2'=\{\langle 13\rangle,\langle 31\rangle\}$. We say
that $Y_k$ and $Y_k'$ are $G$-isomorphic if $g\in G<S_n$ and we study
invariants of $G$. We shall not indicate the group relative to which we
consider the symmetry when it is clear from context. From this definition
it follows:
\begin{proposition}
Let $X_k$ be a $k$-orbit and $Y_k$ an arbitrary $k$-subset of $X_k$;
then $Aut(X_k)Y_k$ is a covering of $X_k$ by classes isomorphic to $Y_k$.
\end{proposition}
and
\begin{corollary}
$n$-Orbits of left cosets of a subgroup $A$ of a group $G$ are isomorphic.
\end{corollary}
The $n$-orbits of right cosets of a subgroup $A$ are in general not
isomorphic. An example is given by the $3$-orbits of the subgroup $A=gr((12))<S_3$:
$$
\begin{array}{||ccc||}
\hline
1 & 2 & 3 \\
2 & 1 & 3 \\
\hline
\end{array}\
\mbox{ and }\
\begin{array}{||ccc||}
\hline
1 & 3 & 2 \\
2 & 3 & 1 \\
\hline
\end{array}\,.
$$
The same is valid for $k$-orbits of left and right cosets of a subgroup $A$
for $k<n$.
\begin{proposition}
Let $G$ be a group, $X_n\in Orb_n(G)$, $X_k\in Orb_k(G)$ and $A<G$, then
\begin{itemize}
\item
$n$-Orbits of left cosets of $A$ form a partition of $X_n$.
\item
The $G$-isomorphic $k$-orbits of left cosets of $A$ belong to the same
$k$-orbit $X_k$ and form a covering of $X_k$.
\end{itemize}
\end{proposition}
{\bfseries Proof:\quad}
The first statement is evident. Let $Y_k\in Orb_k(A)$ be a subset of $X_k$;
then the covering $L_k=GY_k$ of $X_k$ contains all $k$-orbits $G$-isomorphic
to $Y_k$. An example of such a covering is $L_1=\{\{1,2\},\{2,3\},\{1,3\}\}$.
$\Box$
A $k$-orbit $X_k$ of a group $G$ can have different representations through
$k$-orbits of the same subgroup $A<G$, because $X_k$ can contain
non-isomorphic $k$-orbits of $A$. For example, the $1$-orbit of the symmetric
group $S_n$ can be represented, on the one hand, as a covering by
$1$-orbits of left cosets of $gr((12\ldots (n-1)))$ that are
$(n-1)$-element subsets of $V$ and, on the other hand, as a partition into
$1$-orbits of left cosets of this subgroup that are $1$-element subsets of
$V$.
\begin{lemma}\label{Yk->Un}
Let $A<G$, $X_n\in Orb_n(G)$, let $Y_n\subset X_n$ be an $n$-orbit of $A$ and
$Y_k=\hat{p}(I_k)Y_n$. Let $U_n\subset X_n$ be the union of those classes
of $GY_n$ whose $I_k$-projection is $Y_k$; then $U_n$ is an $n$-orbit.
\end{lemma}
{\bfseries Proof:\quad}
$Aut(U_n)=Aut(Y_k;V)\cap G$. $\Box$
The subset $U_n$ need not contain all $n$-tuples whose $I_k$-projections
belong to $Y_k$. An example is given by the group $S_3$,
$U_3=Y_3=\{123,213\}$ and $k=1$:
$$
\begin{array}{||c|cc||}
\hline
1 & 2 & 3 \\
2 & 1 & 3 \\
\hline
1 & 3 & 2 \\
3 & 1 & 2 \\
\hline
2 & 3 & 1 \\
3 & 2 & 1 \\
\hline
\end{array}\,.
$$
In this case $U_1=\{1,2\}$ and the intersections $\{1,2\}\cap \{1,3\}$ and
$\{1,2\}\cap \{2,3\}$ are non-empty.
\paragraph{Now we consider when a subgroup $A<G$ forms a partition of a
$k$-orbit $X_k\in Orb_k(G)$.}
Let a $k$-set $X_k$ be a $k$-projection of a $(k+1)$-set $X_{k+1}$; then we
say that $X_{k+1}$ is an \emph{extension} of $X_k$ on a
$(k+1)$-subspace, or a $(k+1)$-extension of $X_k$.
\begin{lemma}\label{Xk-X(k+1)}
Let $X_k\in Orb_k(G)$, $A<G$ and $X_kA=GY_k$ be a partition. Let
$X_{k+1}\in Orb_{k+1}(G)$ be an extension of $X_k$; then $X_kA$ generates
a partition $X_{k+1}A$.
\end{lemma}
{\bfseries Proof:\quad}
Let $Y_{k+1}\subset X_{k+1}$ and $Y_k=\hat{p}(I_k)Y_{k+1}$. If $GY_{k+1}$
is not a partition, then evidently $GY_k$ contains intersecting classes.
$\Box$
\begin{lemma}\label{XAUXB=XAB}
Let $X_k\in Orb_k(G)$ and let $A,B<G$ be non-conjugate subgroups, or
conjugate subgroups such that $Y_k,Z_k\subset X_k$ are non-isomorphic
$k$-orbits of $A$ and $B$ correspondingly. Let $Y_k$ and $Z_k$
intersect and let $X_kA=GY_k$, $X_kB=GZ_k$ be partitions; then
$X_kA\sqcup X_kB=X_kgr(A,B)$.
\end{lemma}
{\bfseries Proof:\quad}
$G(GY_k\sqcup GZ_k)=GY_k\sqcup GZ_k$. Then $X_kA\sqcup X_kB=GT_k=X_kC$,
where $C=gr(A,B)$ according to lemma \ref{L_n,L_n;R_n,R_n}.
$\Box$
Without the condition $Y_k\cap Z_k\neq\emptyset$ the equality $X_kA\sqcup
X_kB=X_kgr(A,B)$ can lead to misunderstanding for conjugate subgroups
$A,B$. Consider an example:
$$
\begin{array}{||cc||}
\hline
1 & 2 \\
2 & 1 \\
\hline
1 & 3 \\
3 & 1 \\
\hline
2 & 3 \\
3 & 2 \\
\hline
\end{array}\,,\
\begin{array}{||cc||}
\hline
1 & 3 \\
2 & 3 \\
\hline
1 & 2 \\
3 & 2 \\
\hline
2 & 1 \\
3 & 1 \\
\hline
\end{array}\,.
$$
\begin{itemize}
\item
The subgroups $A=gr((12))$ and $B=gr((13))$ are conjugate in $G=S_3$, so
they determine the same partitions of $2$-orbits of $G$ if we
consider isomorphic (non-intersecting) $2$-orbits of these subgroups.
Therefore in this case $X_kA\sqcup X_kB=X_kA\neq X_kgr(A,B)$.
\item
If we consider non-isomorphic and non-intersecting $2$-orbits of the same
subgroup $A=gr((12))$, then we have two different partitions $L_1,L_2$ of
the $2$-orbit $X_2$ of $S_3$. So it can be seen that $X_2A\sqcup
X_2A\neq X_2gr(A,A)=X_2A$. This misunderstanding follows from
interpreting $X_2A$ first as $S_3Y_2'$, $Y_2'=\{12,21\}$, and second
as $S_3Y_2''$, $Y_2''=\{13,23\}$, where $Y_2'\cap Y_2''=\emptyset$. If we
take $Y_2''=\{12,32\}$, then $A=gr((12))$, $B=gr((13))$ and the formula
$X_kA\sqcup X_kB=X_kgr(A,B)$ gives the correct result.
\end{itemize}
\begin{lemma}\label{Co(Y_k)=Co(alpha_k)}
Let $X_k\in Orb_k(G)$, $Y_k\subset X_k$, $\alpha_k\in Y_k$ and let $Y_k$ be a
maximal subset with the property $Co(Y_k)=Co(\alpha_k)$; then $L_k=GY_k$ is a
partition of $X_k$.
\end{lemma}
{\bfseries Proof:\quad}
By the hypothesis, the permutations of coordinates of $Y_k$ that
maintain $Y_k$ also maintain each class of $L_k=GY_k$. $\Box$
\paragraph{In the next statements we shall study the reverse question.}
Namely, when does a subset of a $k$-orbit of a group $G$ generate a subgroup
of $G$?
\begin{lemma}\label{L_k}
Let $X_k\in Orb_k(G)$ and let $L_k$ be a partition of $X_k$. If the left
action of $G$ on $X_k$ maintains $L_k$, then the classes of $L_k$ are
$k$-orbits of left cosets of some subgroup $A<G$.
\end{lemma}
{\bfseries Proof:\quad}
Let $Y_k\in L_k$; then $L_k=GY_k$, hence the subgroup $A=Aut(Y_k;V)\cap G$
acts transitively on $Y_k$. $\Box$
\begin{remark}
It can be seen that partitioning $X_n$ (and hence $X_k$) into
$G$-isomorphic classes is not sufficient for this partition to be
automorphic. This is shown by the following partition $L_n$ of an
$n$-orbit $X_n$ of the group $G=C_6$:
$$
\begin{array}{||cccccc||}
\hline
1 & 2 & 3 & 4 & 5 & 6 \\
2 & 3 & 4 & 5 & 6 & 1 \\
\hline
3 & 4 & 5 & 6 & 1 & 2 \\
4 & 5 & 6 & 1 & 2 & 3 \\
\hline
5 & 6 & 1 & 2 & 3 & 4 \\
6 & 1 & 2 & 3 & 4 & 5 \\
\hline
\end{array}\, .
$$
The classes of $L_n$ are connected by the permutation $(135)(246)$,
but are not automorphic. Let $Y_n\in L_n$; then in this case $GY_n$ is
a covering of $X_n$.
\end{remark}
From lemmas \ref{Co(Y_k)=Co(alpha_k)} and \ref{L_k} it follows
\begin{corollary}\label{Y_k-A}
Let $X_k\in Orb_k(G)$, $Y_k\subset X_k$, $\alpha_k\in Y_k$ and let $Y_k$ be a
maximal subset with the property $Co(Y_k)=Co(\alpha_k)$; then $Y_k$ is a
$k$-suborbit of $G$.
\end{corollary}
Consider a generalization of lemma \ref{L_k}.
\begin{theorem}\label{|Y_k||GY_k|}
Let $X_k\in Orb_k(G)$, $Y_k\subset X_k$ and $L_k=GY_k$ be a covering of
$X_k$. If $|Y_k||L_k|$ divides $|G|$, then $Y_k$ is a $k$-orbit of some
subgroup $A<G$.
\end{theorem}
{\bfseries Proof:\quad}
By assumption, there exists a partition $L_n$ of an $n$-orbit $X_n\in
Orb_n(G)$ such that $\hat{p}(I_k)L_n=L_k$ and $|L_n|=|L_k|$. Then there
exists $Y_n\in L_n$ such that $\hat{p}(I_k)Y_n=Y_k$ and
$G\hat{p}(I_k)Y_n=\hat{p}(I_k)GY_n=GY_k$. Hence $L_n=GY_n$ is a partition
of $X_n$ and we can apply lemma \ref{L_n}. $\Box$
\begin{corollary}\label{Qk}
Let $X_k\in Orb_k(G)$, $Y_k\subset X_k$ and let $L_k=GY_k$ be a covering of
$X_k$. Suppose the classes of $L_k$ can be assembled into isomorphic
partitions of $X_k$. Let $Q_k$ be the system of these partitions and let
$|Q_k||X_k|$ divide $|X_n|$. Let $L_k'\in Q_k$ and $Y_k\in L_k'$; then $X_k$
is a $k$-orbit of the subgroup $B=Aut(L_k')\cap G$, $L_k'=BY_k$ and $Y_k$ is
a $k$-orbit of some subgroup $A<B$.
\end{corollary}
{\bfseries Proof:\quad}
Let $X_n\in Orb_n(G)$ and $X_k=\hat{p}(I_k)X_n$. Since $|Q_k||X_k|$
divides $|X_n|$, there exists a partition $L_n$ of $X_n$ such that
$\hat{p}(I_k)L_n=X_k$ and permutations of the classes of $L_n$ correspond to
permutations of the partitions from $Q_k$. So the classes of $L_n$ are
$n$-orbits of subgroups of $G$ and hence $X_k$ is a $k$-orbit of the
subgroup $B=Aut(L_k')\cap G$. Since $BY_k=L_k'$ is a partition of $X_k$
and $B$ acts transitively on $L_k'$, the subgroup $A=Aut(Y_k;V)\cap B$ acts
transitively on $Y_k$. $\Box$
An example of such a $G$-isomorphic system of partitions is
$$
Q_1=
\{\
\begin{array}{||c||}
\hline
1 \\
2 \\
\hline
3 \\
4 \\
\hline
\end{array}\,,\
\begin{array}{||c||}
\hline
1 \\
3 \\
\hline
2 \\
4 \\
\hline
\end{array}\,,\
\begin{array}{||c||}
\hline
1 \\
4 \\
\hline
2 \\
3 \\
\hline
\end{array}\
\},
$$
where $Q_1$ is formed by $1$-orbits of left cosets of the subgroup
$gr((12)(34))<A_4$.
Let $Q_k$ be a set of $S_n$-isomorphic partitions that therefore do not
belong to the same $k$-orbit $X_k$; then the action of $G$ on $X_k$
maintains all the partitions $L_k^i$ simultaneously, as in the previous
example, where now $Q_1$ is formed by $1$-orbits of the non-conjugate
subgroups $gr((12)(34))$, $gr((13)(24))$ and $gr((14)(23))$ of the group
$S_2\otimes S_2$.
\begin{corollary}\label{rCk-gr}
Let $X_k$ be a $k$-rorbit of a group $G$ that contains a $k$-rcycle
$rC_k$, then $rC_k$ is a $k$-orbit of some subgroup $A<G$.
\end{corollary}
{\bfseries Proof:\quad}
It is a special case of the corollary \ref{Y_k-A}. $\Box$
The order of the subgroup $A$ in the corollary can differ from $k$. An
example is given by the subgroup $A=gr((1234)(56))<S_6$ and $rC_2=\{\langle
56\rangle,\langle 65\rangle\}$.
Projections of $k$-rcycles from $X_k$ on an $l$-subspace ($l<k$) can have
non-trivial intersections, as in the example:
$$
\begin{array}{lcr}
\begin{array}{||cc|c||}
\hline
1 & 2 & 3 \\
\hline
2 & 3 & 1 \\
3 & 1 & 2 \\
\hline
\end{array}&
\mbox{ and }&
\begin{array}{||cc|c||}
\hline
1 & 2 & 4 \\
\hline
2 & 4 & 1 \\
4 & 1 & 2 \\
\hline
\end{array}\,.
\end{array}
$$
So these projections form a covering of $X_l$.
\subsubsection{The special left action of permutations on $k$-sets}
There exists a left action of permutations on a $k$-orbit $X_k$ of a group
$G$ that forms a partition $R_k$ of $X_k$ into $k$-orbits of right cosets of
a subgroup $A<G$: it is the partition $R_k=AX_k$.
Classes of $AX_k$, like classes of $AX_n$, are in general not
isomorphic. Moreover, even if the classes of $AX_n$ have the same order,
the classes of $AX_k$ need not have this property. For example
$R_1=gr((12))\{1,2,3\}=\{\{1,2\},\{3\}\}$.
$k$-Orbits of left cosets of a subgroup $A$ can intersect;
$k$-orbits of right cosets of a subgroup $A$ have no intersections.
$k$-Orbits of left cosets of a subgroup $A$ are $k$-orbits of subgroups
conjugate to $A$, while $k$-orbits of right cosets of a subgroup $A$
are $k$-orbits of the subgroup $A$ itself. We assemble these properties in
the following statements:
\begin{lemma}\label{conjugate-n}
Let $A,B$ be conjugate subgroups of a group $G$ and $X_n$ be a $n$-orbit
of $G$, then
\begin{enumerate}
\item
The partitions of $G$ into left (right) cosets of the subgroups $A,B$ are not
equal.
\item
The partitions of $X_n$ into $n$-orbits of left cosets of the subgroups $A,B$
are equal, and this partition consists of isomorphic classes.
\item
The partitions of $X_n$ into $n$-orbits of right cosets of the subgroups $A,B$
are not equal, and each partition consists of non-isomorphic classes of
power $|A|$.
\end{enumerate}
\end{lemma}
{\bfseries Proof:\quad}
The first statement is a fact from group theory, the second
repeats lemma \ref{gY_n}, and the third follows from lemma
\ref{Y_ng}. $\Box$
\begin{corollary}\label{conjugate-k}
Let $A,B$ be conjugate subgroups of a group $G$ and $X_k$ be a $k$-orbit
of $G$, then
\begin{enumerate}
\item
The coverings of $X_k$ by isomorphic $k$-orbits of left cosets of the
subgroups $A,B$ are equal.
\item
The partitions of $X_k$ into $k$-orbits of right cosets of the subgroups $A,B$
are not equal; each partition consists of non-isomorphic classes, which can
differ in power.
\end{enumerate}
\end{corollary}
\paragraph{$k$-orbit property of normal subgroups.}
\begin{lemma}\label{AX_k=X_kA}
Let $X_k\in Orb_k(G)$, $A<G$ and $R_k=AX_k=X_kA=L_k$. If $A$ is the
maximal subgroup with this property, then $A\lhd G$.
\end{lemma}
{\bfseries Proof:\quad}
Let $X_n\in Orb_n(G)$, then $R_n=AX_n=X_nA=L_n$, because $L_n$ is the only
partition of $X_n$ with property $\hat{p}(I_k)L_n=L_k$ and $|L_n|=|L_k|$,
hence $A\lhd G$. $\Box$
\begin{corollary}\label{fix-tuple}
Let $Y_k$ be a maximal subset of a $k$-orbit $X_k$ so that
$Co(Y_k)=Co(\alpha_k)$ for some $k$-tuple $\alpha_k\in Y_k$, then a
stabilizer $Stab(\alpha_k)\lhd Stab(Y_k)$.
\end{corollary}
\begin{lemma}\label{A<|G}
Let $A\lhd G$ and $X_k\in Orb_k(G)$, then $R_k=AX_k=X_kA=L_k$.
\end{lemma}
{\bfseries Proof:\quad}
It is given that $R_n=AX_n=X_nA=L_n$, then
$\hat{p}(I_k)AX_n=\hat{p}(I_k)X_nA$ or $AX_k=X_kA$.~$\Box$
So we have
\begin{theorem}\label{s-gr}
A group $G$ is a simple group if and only if $AX_k\neq X_kA$ for
arbitrary $k$, arbitrary $X_k\in Orb_k(G)$ and each subgroup $A<G$.
\end{theorem}
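Since the $n$-tuples of the $n$-orbit of $G$ correspond to the elements of $G$, the condition $AX_n=X_nA$ amounts to the left and right coset partitions of $G$ by $A$ coinciding. A small Python check over $S_3$ (a sketch under this identification, with the group elements listed explicitly as dicts):

```python
e = {1: 1, 2: 2, 3: 3}
t12 = {1: 2, 2: 1, 3: 3}
t13 = {1: 3, 2: 2, 3: 1}
t23 = {1: 1, 2: 3, 3: 2}
c123 = {1: 2, 2: 3, 3: 1}
c132 = {1: 3, 2: 1, 3: 2}
S3 = [e, t12, t13, t23, c123, c132]

def compose(g, h):
    """(g h)(x) = g(h(x))."""
    return {x: g[h[x]] for x in g}

def freeze(g):
    return tuple(sorted(g.items()))

def left_cosets(G, A):
    return {frozenset(freeze(compose(g, a)) for a in A) for g in G}

def right_cosets(G, A):
    return {frozenset(freeze(compose(a, g)) for a in A) for g in G}

A3 = [e, c123, c132]   # the alternating group, normal in S3
T = [e, t12]           # gr((12)), not normal
```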
\paragraph{Intersections and unions of $k$-orbits}
Above we have seen (proposition \ref{Y_n*Z_n}) that the intersection of
$n$-orbits is an $n$-orbit. The same holds for $k$-orbits:
\begin{lemma}\label{Y_k*Z_k}
Let $Y_k$ and $Z_k$ be $k$-orbits; then $T_k=Y_k\cap Z_k$ is a $k$-orbit
of $Aut(T_k)=Aut(Y_k)\cap Aut(Z_k)$.
\end{lemma}
{\bfseries Proof:\quad}
It is sufficient to consider the intersection of $n$-orbits of $Aut(Y_k)$
and $Aut(Z_k)$ and then their corresponding $k$-projections.
$\Box$
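A brute-force Python check of this lemma over $S_3$ (the function `aut`, our name, enumerates all of $S_n$, so it is only a small-scale sketch):

```python
from itertools import permutations

def aut(X, n):
    """All permutations g of [1, n] with gX = X (left action on entries)."""
    pts = range(1, n + 1)
    out = set()
    for img in permutations(pts):
        g = dict(zip(pts, img))
        if {tuple(g[v] for v in t) for t in X} == set(X):
            out.add(img)
    return out

Y2 = {(1, 2), (2, 1)}           # 2-orbit of gr((12))
Z2 = {(1, 2), (2, 3), (3, 1)}   # 2-orbit of gr((123))
T2 = Y2 & Z2                    # the intersection, here {(1, 2)}
```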
\begin{corollary}\label{R_k*R_k'}
Let $Y_k$ and $Z_k$ be $k$-suborbits of a $k$-orbit $X_k$, $A=Aut(Y_k)\cap
Aut(X_k)$ and $B=Aut(Z_k)\cap Aut(X_k)$, then $(A\cap B)X_k=AX_k\sqcap
BX_k$.
\end{corollary}
For subgroups $G<Aut(Y_k)$ and $G'<Aut(Z_k)$ with $k$-orbits $Y_k\in
Orb_k(G)$ and $Z_k\in Orb_k(G')$ the similar relation does not hold. Let
us give a counterexample.
The partition $R_1=AX_1=AV$ is the system of orbits of $A$ on $V$ in the
conventional meaning. Let $A=gr((12))<S_3$ and $B=gr((123))<S_3$; then
$AV=\{\{1,2\},\{3\}\}$, $BV=\{\{1,2,3\}\}$, $AV\sqcap
BV=\{\{1,2\},\{3\}\}$ and $(A\cap B)V=\{\{1\},\{2\},\{3\}\}$.
This example, lemma \ref{Y_k*Z_k} and corollary \ref{R_k*R_k'} determine
the relation between intersections of groups, their $k$-orbits and the
corresponding systems of $k$-orbits of right cosets of these groups
(subgroups). The same is valid for the intersection of systems of
$k$-orbits of left cosets.
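The counterexample can be verified directly in Python (orbit and meet computations as a sketch; `point_orbits` and `meet` are our names, and each subgroup is given as the explicit list of all its elements):

```python
def point_orbits(group, points):
    """Orbits on a point set of a permutation group given as a full list of dicts."""
    classes, seen = set(), set()
    for p in points:
        if p in seen:
            continue
        orb = frozenset(g[p] for g in group)
        classes.add(orb)
        seen |= orb
    return classes

def meet(P, Q):
    """P sqcap Q: the non-empty pairwise intersections of classes."""
    return {a & b for a in P for b in Q if a & b}

e = {1: 1, 2: 2, 3: 3}
t12 = {1: 2, 2: 1, 3: 3}
c123 = {1: 2, 2: 3, 3: 1}
c132 = {1: 3, 2: 1, 3: 2}
A = [e, t12]              # gr((12))
B = [e, c123, c132]       # gr((123))
V = [1, 2, 3]

AV = point_orbits(A, V)   # {{1,2},{3}}
BV = point_orbits(B, V)   # {{1,2,3}}
```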
The union of partitions of $X_k\in Orb_k(G)$ into $k$-orbits of left cosets
of subgroups $A,B<G$ was considered in lemma \ref{XAUXB=XAB}.
For the union of partitions of $X_k\in Orb_k(G)$ into $k$-orbits of right
cosets of subgroups $A,B<G$ we have:
\begin{lemma}\label{AV*BV,AV+BV}
$AX_k\sqcup BX_k=gr(A,B)X_k$.
\end{lemma}
\paragraph{The condition of automorphism of a subspace $I_k\subset V$.}
\begin{lemma}\label{k-rorbit}
Let $X_n$ be an $n$-orbit of a transitive group $G$, $X_k=\hat{p}(I_k)X_n$,
$|X_n|/|X_k|>1$ and let $I_k$ contain all elements of $V$ that are fixed by
$Stab(I_k)<G$; then $I_k$ is automorphic.
\end{lemma}
{\bfseries Proof:\quad}
Let $Co(I_k)=\{v_i:\, i\in [1,k]\}$, $T_k^1\in Orb_k(Stab(v_1))$,
$T_k^1\subset X_k$, $\hat{p}(v_1)T_k^1=v_1$ and $L_k'=GT_k^1$. Let
$T_k^i\in Orb_k(Stab(v_i))$ be a class of $L_k'$; then
$Stab(I_k)T_k^i=T_k^i$. It follows that all $T_k^i$ consist of $k$-orbits
of right cosets of $Stab(I_k)$, and the systems of these $k$-orbits of right
cosets of $Stab(I_k)$ for different $i$ are isomorphic. Hence each $T_k^i$
contains a fixed $k$-tuple $\alpha_k^i$ for which $Co(\alpha_k^i)=Co(I_k)$.
The union $\cup_{(i=1,k)} \alpha_k^i$ is a $k$-orbit of the normalizer
$N_G(Stab(I_k))$ (corollary \ref{fix-tuple}), which acts transitively on the
subset $Co(I_k)$. Hence $I_k$ is automorphic. $\Box$
\subsubsection{The right action of permutations on $k$-sets}
\paragraph{Right action isomorphism.}
By the right action of a permutation $g$ on a $k$-tuple $\alpha_k$ we
understand the mapping of $\alpha_k$ to the $k$-tuple $\beta_k\equiv
\alpha_kg$ that occupies the positions of the coordinates of $\alpha_k$ in
the same $n$-tuple $\alpha_n$ under the right action of $g$ on $\alpha_n$.
If $\alpha_k=\langle v_1v_2\ldots v_k\rangle$, then by definition we write
$\alpha_kg=\langle v_{g1}v_{g2}\ldots v_{gk}\rangle$. Thus we consider the
$k$-tuple $\alpha_k\subset \alpha_n$ together with its position in
$\alpha_n$, which we specify by the $k$-subspace of coordinates $I_k$.
Let $X_n\in Orb_n(G)$, $Y_n\subset X_n$, $Y_k=\hat{p}(I_k)Y_n$, $g\in S_n$
and $Y_k'=Y_kg=\hat{p}(gI_k)Y_n$; then we say that $Y_k$ and $Y_k'$ are
right $S_n$-related. If $g\in G$, then $Y_k$ and $Y_k'$ are right
$G$-related. In general the image $Y_k'$ is not isomorphic to its
original $Y_k$. We shall study when the right action of a permutation
transforms a $k$-subset $Y_k$ into an isomorphic $k$-subset $Y_k'$.
\paragraph{Two kinds of right action isomorphism.}
\begin{definition}\label{bYk=Yka}
If we study $k$-orbits of a group $G$, $X_k\in Orb_k(G)$, $Y_k\subset
X_k$, $a,b\in S_n$ and a $k$-subset $Y_k'=Y_ka=bY_k$, then we say that
$Y_k'$ and $Y_k$ are right $S_n$-isomorphic (as in the first case of example
\ref{simple-struc}). If $a,b\in G$, then we say that $Y_k'$ and $Y_k$ are
right $G$-isomorphic.
\end{definition}
Let $Y_k=\hat{p}(I_k)Y_n\subset X_k$, $X_k'\in Orb_k(G)$,
$Y_k'=\hat{p}(I_k')Y_n\subset X_k'$ and $Y_k'=Y_ka=bY_k$. If $a\in G$, then
$X_k'=X_k$ and $Y_k'\subset X_k$ too. Now we shall determine when $a\in G$
implies $b\in G$.
\begin{proposition}\label{SnYk-not-Aut(Xk)Yk}
Let $Y_k$ and $Y_k'$ be arbitrary $S_n$-isomorphic subsets of a $k$-orbit
$X_k$; then $Y_k$ and $Y_k'$ are not necessarily $Aut(X_k)$-isomorphic.
\end{proposition}
{\bfseries Proof:\quad}
The $2$-subsets $\{\langle 12\rangle,\langle 23\rangle\}$ and $\{\langle
12\rangle,\langle 24\rangle\}$ from the $2$-orbit $X_2'$ (page
\pageref{X_2'}) are $S_n$-isomorphic, but not $Aut(X_2')$-isomorphic.
$\Box$
\begin{proposition}\label{SnYk->Aut(Xk)Yk}
Let $Y_k$ and $Y_k'$ be arbitrary right $S_n$-isomorphic $k$-subsets of
a $k$-orbit $X_k$ of a group $G$; then $Y_k$ and $Y_k'$ are
$G$-isomorphic.
\end{proposition}
{\bfseries Proof:\quad}
Let $Y_k=\{I_k(i):i\in [1,|Y_n|]\}$ and $Y_k'=\{I_k'(i):i\in [1,|Y_n|]\}$,
then $Y_k=\hat{p}(I_k(i))Y_n$ and $Y_k'=\hat{p}(I_k'(i))Y_n$. The maps
$I_k(i)\rightarrow I_k'(i)$ are restrictions of automorphisms. Hence the map
$\cup I_k(i)\rightarrow \cup I_k'(i)$ is also the restriction of an
automorphism. $\Box$
\paragraph{General properties of right permutation action.}
\begin{proposition}\label{Yk'=gYk''f-1}
The $k$-orbits of left and right cosets of a subgroup $A$ are connected by
elements of~$G$.
\end{proposition}
{\bfseries Proof:\quad}
From proposition \ref{Yn'=gYn''f-1} we obtain
$Y_k'(I_k)=\hat{p}(I_k)Y_n'= \hat{p}(I_k)gY_n''f^{-1}
=g\hat{p}(I_k)Y_n''f^{-1} = gY_k''(I_k)f^{-1} = gY_k''(f^{-1}I_k)$.
$\Box$
\begin{proposition}\label{H->r.aut}
The right action of any automorphism $g\in G$ on a $k$-orbit of a normal
subgroup $H\lhd G$ is isomorphic.
\end{proposition}
{\bfseries Proof:\quad}
According to proposition \ref{H->Yn=gYng-1},
$\hat{p}(I_k)gY_n=\hat{p}(I_k)Y_ng$, that is, $gY_k=Y_kg$. $\Box$
Now we give a generalization of proposition \ref{Yn=gYng-1->N}.
\begin{lemma}\label{Lk*Rk}
Let $A<G$, $X_k\in Orb_k(G)$, $L_k=X_kA=GY_k$, $R_k=AX_k$ and $Q_k=L_k\cap
R_k\neq \emptyset$. If $A$ is a maximal subgroup, then $A$ has a non-trivial
normalizer $B=N_G(A)$.
\end{lemma}
{\bfseries Proof:\quad}
Let $Z_k=\cup Q_k$; then $Aut(Y_k;V)\cap Aut(Z_k;V)\lhd Aut(Z_k;V)$, where
$Z_k$ is not necessarily automorphic. Hence $A=Aut(Y_k;V)\cap
Aut(Z_k;V)\cap G\lhd Aut(Z_k;V)\cap G=B$. $\Box$
\begin{proposition}\label{gakg-1,fYkf-1}
Let $A$ be a subgroup of $G$, $g,f\in G$, $gag^{-1}=a$ for every $a\in A$
and $faf^{-1}\neq a$ for some $a\in A$, but $fAf^{-1}=A$. Let $Y_k$ be a
$k$-orbit of $A$ and $\overrightarrow{Y_k}$ be an arbitrary ordering of the
set $Y_k$; then $g\overrightarrow{Y_k}=\overrightarrow{Y_k}g$ and $fY_k=Y_kf$,
but $f\overrightarrow{Y_k}\neq\overrightarrow{Y_k}f$.
\end{proposition}
{\bfseries Proof:\quad}
It follows from proposition \ref{g*an*g-1,f*Yn*f-1}. $\Box$
\paragraph{Right permutation action on $k$-rcycles.}
\begin{lemma}\label{C_k-conc}
Let $C_{lk}$ be an $(l,k)$-cycle; then $C_{lk}=\uplus C_{l_1p_1}\circ
\uplus C_{l_2p_2}\circ\ldots \circ \uplus C_{l_qp_q}$, where
$\sum_{i=1}^qp_i=k$, each $l_i$ divides $|\uplus C_{l_ip_i}|=l$, $C_{l_ip_i}$
is a $p_i$-projection of an $l_i$-rcycle, and different $l_i$-rcycles have
no intersection on $V$.
\end{lemma}
{\bfseries Proof:\quad}
From the definition it follows that $C_{lk}=gr(g)\alpha_k$ for some
permutation $g$ and $k$-tuple $\alpha_k$. Let the $n$-tuple
$\beta_n=\beta_{l_1}\circ \beta_{l_2}\circ \ldots\circ \beta_{l_q}$,
$g=(\beta_{l_1})(\beta_{l_2})\ldots (\beta_{l_q})$ and
$m=\sum_{i=1}^ql_i$; then $g$ generates the $(l,m)$-cycle $C_{lm}=\uplus
rC_{l_1}\circ \uplus rC_{l_2}\circ\ldots\circ \uplus rC_{l_q}$. The
$(l,k)$-cycle $C_{lk}$ is a $k$-projection of the $(l,m)$-cycle
$C_{lm}$.~$\Box$
The $(l,k)$-cycle $C_{lk}$ can be represented as a concatenation of
$(l,p_i)$-multiorbits whose $p_i$-projections are either $p_i$-tuples, or
$S_{l_i}^{p_i}$-orbits, or $p_i$-projections of $l_i$-rcycles. This is
obtained from lemma \ref{C_k-conc} by singling out the concatenation
of fixed $1$-tuples and reassembling the cycles into $S_{l_i}^{p_i}$-orbits,
as in the example:
$$
\begin{array}{||cc|c|c|cc|cc||}
\hline
1 & 2 & 3 & 4 & 5 & 7 & 6 & 8 \\
2 & 1 & 3 & 4 & 7 & 5 & 8 & 6 \\
\hline
\end{array}=
\begin{array}{||cc|cc|cc|cc||}
\hline
1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\
2 & 1 & 3 & 4 & 7 & 8 & 5 & 6 \\
\hline
\end{array}\,.
$$
The difference between representations of the type
$$
\begin{array}{||cc|cc||}
\hline
5 & 7 & 6 & 8 \\
7 & 5 & 8 & 6 \\
\hline
\end{array}\
\mbox{ and }\
\begin{array}{||cc|cc||}
\hline
5 & 6 & 7 & 8 \\
7 & 8 & 5 & 6 \\
\hline
\end{array}
$$
can be important if the rcycles $\{\langle 57\rangle,\langle 75\rangle\}$ and
$\{\langle 68\rangle,\langle 86\rangle\}$ are not $G$-isomorphic.
There exist cases where the partition of an $(l,k)$-cycle into right
$G$-related concatenation components cannot be represented through
projections of the three base types of $p$-orbits. The simplest such case
is given by the $3$-orbit of the subgroup $gr((12))<S_3$. In this case we
have the following right $G$-related $2$-orbits of $gr((12))$:
$$
\begin{array}{||cc||}
\hline
1 & 2 \\
2 & 1 \\
\hline
\end{array}\,,\
\begin{array}{||cc||}
\hline
1 & 3 \\
2 & 3 \\
\hline
\end{array}\
\mbox{ and }\
\begin{array}{||cc||}
\hline
2 & 3 \\
1 & 3 \\
\hline
\end{array}\,.
$$
The existence of such a decomposition of an $(n,k)$-cycle under the
condition $k|n$ leads to some intricate structures, for example the
automorphism group of the Petersen graph (see below).
\paragraph{Finite group permutation representation.}
Let us consider some examples with different properties of the right
permutation action. The $2$-orbit of the subgroup $gr((12))<S_3$ shows the
existence of cases with no non-trivial isomorphic right permutation action
for $k$-orbits of a non-normal subgroup. Example \ref{S2*S2} (see below)
shows the existence of a right $G$-isomorphism for the $2$-orbits $\{\langle
12\rangle,\langle 21\rangle\}$ and $\{\langle 34\rangle,\langle
43\rangle\}$ of the normal subgroup $gr((12)(34))$ of the group
$S_2\otimes S_2$, which follows from proposition \ref{H->r.aut}. The next
example is an $n$-orbit of a group $G$ that is the regular permutation
representation of $S_3$ in two assemblies.
\begin{example} \label{S3(6)}
$$
\begin{array}{lcr}
\begin{array}{||cc|cc|cc||}
\hline
1 & 2 & 3 & 4 & 5 & 6 \\
\hline
2 & 1 & 5 & 6 & 3 & 4 \\
\hline
3 & 4 & 1 & 2 & 6 & 5 \\
\hline
4 & 3 & 6 & 5 & 1 & 2 \\
\hline
5 & 6 & 2 & 1 & 4 & 3 \\
\hline
6 & 5 & 4 & 3 & 2 & 1 \\
\hline
\end{array}&
\mbox{and}&
\begin{array}{||cc|cc|cc||}
\hline
1 & 2 & 3 & 5 & 4 & 6 \\
2 & 1 & 5 & 3 & 6 & 4 \\
\hline
3 & 4 & 1 & 6 & 2 & 5 \\
4 & 3 & 6 & 1 & 5 & 2 \\
\hline
5 & 6 & 2 & 4 & 1 & 3 \\
6 & 5 & 4 & 2 & 3 & 1 \\
\hline
\end{array}\,.
\end{array}
$$
\end{example}
The first table is partitioned with respect to $G$-isomorphic $2$-subspaces
and the second with respect to $S_n$-isomorphic $2$-subspaces. The example
shows that there is no right $G$-isomorphism for the $2$-rcycle
$\{12,21\}$, but there is a right $S_6$-isomorphism for this $2$-rcycle.
This fact can be explained as follows: on the one hand, the subgroup
defined by the $2$-rcycle $\{12,21\}$ has trivial normalizer, and hence its
$2$-orbits of left cosets need not be $G$-isomorphic to its $2$-orbits of
right cosets; on the other hand, $G$ is regular, which forces the existence
of the isomorphic right action.
This example shows the difference in the properties of the right
permutation action under various permutation representations of a finite
group. We consider this difference, first recalling some facts from finite
group theory.
Let $F$ be a finite group, $A<F$, $|F|/|A|=n$ and let $\overrightarrow{L_n}$,
$\overrightarrow{R_n}$ be ordered partitions $FA$ and $AF$. It is known
that every transitive permutation representation of $F$ is equivalent to
the representation of $F$ given by the $n$-orbits
$X_n'=\{f\overrightarrow{L_n}: f\in F\}$ or $X_n''=\{\overrightarrow{R_n}f:
f\in F\}$. It is also known that $F$ maps homomorphically onto its image
$Aut(X_n')$ ($Aut(X_n'')$), with kernel the maximal normal subgroup of $F$
contained in $A$. In what follows we always assume that a finite group is
isomorphic to its representation.
A subgroup $A$ of a finite group $F$ that is maximal by inclusion among the
subgroups containing no normal subgroup of $F$ we call an
\emph{md-stabilizer} of $F$, and the corresponding representation of $F$ we
call an \emph{md-representation}. An md-stabilizer $A$ of a finite group $F$
defines a permutation representation of $F$ of minimal degree within the
family of permutation representations of $F$ defined by subgroups of $A$. A
maximal degree permutation representation of $F$ is correspondingly the
permutation representation defined by the trivial subgroup consisting of
the unit of the group; this representation is called the regular
representation of $F$.
A finite group can have many (non-conjugate) md-stabilizers. For example,
$S_5$ contains a transitive md-stabilizer of order $20$ generated by the
permutations $(12345)$ and $(1243)$, and an intransitive md-stabilizer of
order $12$ generated by the permutations $(123)(45)$ and $(23)$. The
$n$-orbits of these md-stabilizers are, respectively:
\begin{example}\label{S_5,md}
$$
\begin{array}{lcr}
\begin{array}{||ccccc||}
\hline
1 & 2 & 3 & 4 & 5 \\
2 & 3 & 4 & 5 & 1 \\
3 & 4 & 5 & 1 & 2 \\
4 & 5 & 1 & 2 & 3 \\
5 & 1 & 2 & 3 & 4 \\
\hline
5 & 4 & 3 & 2 & 1 \\
4 & 3 & 2 & 1 & 5 \\
3 & 2 & 1 & 5 & 4 \\
2 & 1 & 5 & 4 & 3 \\
1 & 5 & 4 & 3 & 2 \\
\hline
1 & 3 & 5 & 2 & 4 \\
3 & 5 & 2 & 4 & 1 \\
5 & 2 & 4 & 1 & 3 \\
2 & 4 & 1 & 3 & 5 \\
4 & 1 & 3 & 5 & 2 \\
\hline
4 & 2 & 5 & 3 & 1 \\
2 & 5 & 3 & 1 & 4 \\
5 & 3 & 1 & 4 & 2 \\
3 & 1 & 4 & 2 & 5 \\
1 & 4 & 2 & 5 & 3 \\
\hline
\end{array}&
\mbox{ and }&
\begin{array}{||ccc|cc||}
\hline
1 & 2 & 3 & 4 & 5 \\
2 & 3 & 1 & 4 & 5 \\
3 & 1 & 2 & 4 & 5 \\
\hline
1 & 2 & 3 & 5 & 4 \\
2 & 3 & 1 & 5 & 4 \\
3 & 1 & 2 & 5 & 4 \\
\hline
1 & 3 & 2 & 4 & 5 \\
2 & 1 & 3 & 4 & 5 \\
3 & 2 & 1 & 4 & 5 \\
\hline
1 & 3 & 2 & 5 & 4 \\
2 & 1 & 3 & 5 & 4 \\
3 & 2 & 1 & 5 & 4 \\
\hline
\end{array}\,.
\end{array}
$$
\end{example}
The first md-stabilizer is a representation of the group $C_5C_4=C_4C_5$.
The representation of $S_5$ with this md-stabilizer has degree $6$, equal
to the maximal order of the elements of $S_5$. The second md-stabilizer is
a representation of the group $C_6\otimes C_2$. The representation of
$S_5$ with the second md-stabilizer (as we shall see) is the automorphism
group of the Petersen graph.
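The stated orders of the two md-stabilizers of $S_5$, and hence the degrees $120/20=6$ and $120/12=10$ of the corresponding coset representations, can be confirmed by brute-force closure. The helper names below are our own, not notation from the text:

```python
def compose(p, q):
    """(p*q)(i) = p(q(i)) for image tuples on {1,...,5}."""
    return tuple(p[q[i] - 1] for i in range(5))

def cycles_to_tuple(cycles, n=5):
    """Turn a product of disjoint cycles on {1,...,n} into an image tuple."""
    img = list(range(1, n + 1))
    for cyc in cycles:
        for i, v in enumerate(cyc):
            img[v - 1] = cyc[(i + 1) % len(cyc)]
    return tuple(img)

def generated(gens):
    """Brute-force closure of the generators under composition."""
    group, frontier = set(gens), list(gens)
    while frontier:
        p = frontier.pop()
        for q in list(group):
            for r in (compose(p, q), compose(q, p)):
                if r not in group:
                    group.add(r)
                    frontier.append(r)
    return group

# Transitive md-stabilizer <(12345),(1243)> and intransitive <(123)(45),(23)>.
A1 = generated([cycles_to_tuple([(1, 2, 3, 4, 5)]), cycles_to_tuple([(1, 2, 4, 3)])])
A2 = generated([cycles_to_tuple([(1, 2, 3), (4, 5)]), cycles_to_tuple([(2, 3)])])
print(len(A1), len(A2))                # 20 12
print(120 // len(A1), 120 // len(A2))  # coset degrees 6 and 10
```

The degree $10$ of the second representation matches the ten vertices of the Petersen graph.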
Because of the property given in proposition \ref{p<n}, special interest
attaches to an md-stabilizer of maximal order in a finite group, which we
call a \emph{least degree stabilizer} or an
\emph{ld-stabilizer} of the group. The corresponding representation
of a finite group we call a transitive least degree representation or
\emph{tld-representation}. The same property also motivates the
consideration of a least degree intransitive representation, or
\emph{ild-representation}, of a finite group, which for some groups, for
example $C_6$, has degree less than that of the tld-representation. Thus by
a lowest degree representation, or \emph{ld-representation}, we understand
the representation of smallest degree among the tld- and
ild-representations.
This consideration raises a question: \emph{does there exist a finite
group with two non-similar ld-representations?} If no two non-similar
ld-representations exist, then the ld-representation is a complete
invariant of a finite group, and hence the study of the numerical
invariants of a finite group could be reduced to the study of the
numerical invariants of its ld-representation.
For non-minimal degree representations, a simple example of non-similar
permutation representations of the same degree is given by the
representations of $S_4$ on the sets of right cosets of the subgroups
$gr((12))$ and $gr((12)(34))$. The first representation contains a
stabilizer of a $2$-tuple on a $12$-element set $V$ and the second contains
a stabilizer of a $4$-tuple on $V$.
Let $\phi:F\rightarrow S_n(V)$. We denote the image $\phi(F)$ of a group
$F$ under a representation $\phi$ by $F(V)$ and also call $F(V)$ a
representation of $F$. For convenience, the $n$-orbit of $F(V)$ is likewise
called a representation of $F$.
\paragraph{One possible reformulation of the polycirculant conjecture.}
Let $F$ be a finite group and $A<F$ be an md-stabilizer. Let $F$ contain a
subgroup $P$ of prime order $p$ that is conjugate to no subgroup of $A$;
it then follows that $P$ is a regular subgroup of the representation
$F(AF)$ and hence $p$ divides $n=|AF|$.
So the polycirculant conjecture states that if $F(AF)$ is a $2$-closed
representation of a finite group $F$, then $F$ contains the corresponding
subgroup $P$.
To all appearances this approach cannot succeed, because it lies outside
the internal structure of an $n$-orbit.
\paragraph{Some properties of ld-representations.}
\begin{lemma}\label{A_mxB_l}
Let $A_m$ and $B_l$ be ld-representations; then the ld-representation of
the group $A_m\otimes B_l$ has degree $n=m+l$.
\end{lemma}
The simplest example is
$$
\begin{array}{||cc|cc||}
\hline
1 & 2 & 3 & 4 \\
1 & 2 & 4 & 3 \\
\hline
2 & 1 & 3 & 4 \\
2 & 1 & 4 & 3 \\
\hline
\end{array}.
$$
\begin{theorem}\label{ild=ld}
The ld-representation of a finite group $F$ is an ild-representation if and
only if $F$ is a direct product.
\end{theorem}
{\bfseries Proof:\quad}
If $F$ is a direct product, then the statement follows from lemma
\ref{A_mxB_l}. So let $X_n=X_{m+l}=X_m\circ X_l$ be an ild-representation
of $F$, where $X_m$ and $X_l$ are transitive; then $|X_n|/|X_m|$ and
$|X_n|/|X_l|$ are greater than $1$. It follows that a stabilizer of an
$m$-tuple from $X_m$ and a stabilizer of an $l$-tuple from $X_l$ are normal
subgroups of $F$ (corollary \ref{fix-tuple}) that commute elementwise.
$\Box$
\begin{lemma}\label{ld=regular}
The ld-representation of a group $F$ is regular if and only if $F$ is not
a direct product and has a trivial ld-stabilizer.
\end{lemma}
\begin{corollary}\label{C_(p^m)}
The regular representation of a cyclic $p$-group is an ld-representation.
\end{corollary}
\begin{corollary}\label{C_n}
Let $C_n$ be an ld-representation of a cyclic group $C$ of order
$p_1^{m_1}\cdots p_q^{m_q}$, where the $p_i$ are prime; then $C_n$ is
intransitive and $n=p_1^{m_1}+\ldots +p_q^{m_q}$.
\end{corollary}
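The degree formula of corollary \ref{C_n} is easy to evaluate; the function name `ld_degree_cyclic` is our own illustration, not notation from the text:

```python
def ld_degree_cyclic(order):
    """Degree of the ld-representation of the cyclic group of a given
    order: the sum of the maximal prime-power factors p_i^{m_i}."""
    degree, n, p = 0, order, 2
    while p * p <= n:
        if n % p == 0:
            q = 1
            while n % p == 0:
                n //= p
                q *= p
            degree += q
        p += 1
    if n > 1:
        degree += n  # one remaining prime-power factor
    return degree

print(ld_degree_cyclic(6))   # 2 + 3 = 5, smaller than the regular degree 6
print(ld_degree_cyclic(8))   # a p-group: the regular degree 8
print(ld_degree_cyclic(60))  # 4 + 3 + 5 = 12
```

The value for order $6$ illustrates the earlier remark that the ild-representation of $C_6$ has degree less than its tld-representation.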
\begin{lemma}\label{any gr.-st.}
Any finite group is an ld-stabilizer of some finite group.
\end{lemma}
{\bfseries Proof:\quad}
Let $A$ be a finite group; then $A$ is an ld-stabilizer of the group
$F=gr(A\otimes B,d)$, where the group $B$ is isomorphic to $A$, $d$ is an
involution and $dA\otimes B=A\otimes Bd$. $\Box$
A corresponding example is given by the representation of the dihedral
group $D_4$:
$$
\begin{array}{||cc|cc||}
\hline
1 & 2 & 3 & 4 \\
1 & 2 & 4 & 3 \\
\hline
2 & 1 & 3 & 4 \\
2 & 1 & 4 & 3 \\
\hline
3 & 4 & 1 & 2 \\
3 & 4 & 2 & 1 \\
\hline
4 & 3 & 1 & 2 \\
4 & 3 & 2 & 1 \\
\hline
\end{array}.
$$
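As a sanity check on this table (again reading rows as image tuples, our convention), one can verify that the eight rows are closed under composition and form a non-abelian group of order $8$, which inside $S_4$ must indeed be $D_4$:

```python
from itertools import product

# The eight rows of the displayed 4-orbit, read as image tuples on {1,2,3,4}.
rows = [
    (1, 2, 3, 4), (1, 2, 4, 3),
    (2, 1, 3, 4), (2, 1, 4, 3),
    (3, 4, 1, 2), (3, 4, 2, 1),
    (4, 3, 1, 2), (4, 3, 2, 1),
]

def compose(p, q):
    """(p*q)(i) = p(q(i)) for image tuples on {1,2,3,4}."""
    return tuple(p[q[i] - 1] for i in range(4))

group = set(rows)
assert len(group) == 8
# Closure under composition.
assert all(compose(p, q) in group for p, q in product(group, repeat=2))
# Non-abelian; a non-abelian order-8 subgroup of S_4 is a Sylow
# 2-subgroup, i.e. the dihedral group D_4.
assert any(compose(p, q) != compose(q, p) for p, q in product(group, repeat=2))
```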
\paragraph{The following statements make the object of our study more
tangible.}
\begin{proposition}\label{not-intersect}
Let $F$ be a finite group and let $F(F)$, $F(V)$ be two of its images. Let
$\alpha_k(V),\beta_k(V)$ be $k$-tuples from $V^{(k)}$ and let the $l$-tuples
$\alpha_l(F),\beta_l(F)$ be their images in $F^{(l)}$; then
$\alpha_k(V)$ and $\beta_k(V)$ have no intersection on $V$ if and only if
$\alpha_l(F)$ and $\beta_l(F)$ have no intersection on $F$.
\end{proposition}
{\bfseries Proof:\quad}
Let $X_n$ be an $n$-orbit of $F(V)$ and $Y_m$ ($m=|F|$) be an $m$-orbit
of $F(F)$. The $k$-tuples $\alpha_k(V)$ and $\beta_k(V)$ are situated in
$X_n$, and the $l$-tuples $\alpha_l(F)$ and $\beta_l(F)$ are situated in
$Y_m$. The statement follows from the method of reconstructing $X_n$ from
$Y_m$, namely the substitution of certain $(m/n)$-tuples of $Y_m$, pairwise
non-intersecting on $F$, by certain distinct elements of $V$. $\Box$
From this proposition there follows directly
\begin{corollary}\label{Lk-Stabilizer}
Let $F$ be a finite group, $A<F$ be an md-stabilizer, $B<A$, $V=FB$ and
$G=F(V)$. Let $|G|/|A|=l$, $|A|/|B|=k$, $kl=n$, $X_n\in Orb_n(G)$,
$Y_n\subset X_n$ be a $n$-orbit of the subgroup $A(V)<G$ and
$L_n=GY_n=X_nA(V)$. Let $I_k=AB\subset V$, $X_k=\hat{p}(I_k)X_n$ and
$Y_k=\hat{p}(I_k)Y_n$.
\begin{enumerate}
\item
Let $L_k=X_kA=GY_k=\hat{p}(I_k)L_n$, then classes of $L_k$ have no
intersection on $V$ and hence $L_k$ is a partition of $X_k$.
\item
Let $Y_n'\in L_n$, $Y_k'=\hat{p}(I_k)Y_n'$ and $Y_{n-k}'=
\hat{p}(I_{n-k})Y_n'$, then $Y_k'$ and $Y_{n-k}'$ have no intersection on
$V$.
\end{enumerate}
\end{corollary}
\begin{proposition}\label{p<n}
Let $F$ be a finite group and $p$ be a prime divisor of $|F|$. Let
$\{n_1,\,n_2,\,\ldots\}$ be the degrees of the transitive components of the
ld-representation of $F$; then $p\leq \max_i(n_i)$.
\end{proposition}
{\bfseries Proof:\quad}
The group $F$ contains a subgroup of order $p$, and any permutation
$g\in S_n$ of prime order $p$ decomposes into cycles of length either $p$
or $1$. $\Box$
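The cycle-type fact used in the proof can be tested exhaustively for small degree; the sketch below checks every element of prime order in $S_5$ (helper names are our own):

```python
from itertools import permutations
from math import lcm

def cycle_lengths(p):
    """Cycle lengths of an image-tuple permutation on {1,...,n}."""
    n, seen, lengths = len(p), set(), []
    for start in range(1, n + 1):
        if start not in seen:
            length, x = 0, start
            while x not in seen:
                seen.add(x)
                x = p[x - 1]
                length += 1
            lengths.append(length)
    return lengths

def order(p):
    """Order of a permutation: the lcm of its cycle lengths."""
    return lcm(*cycle_lengths(p))

# Every element of prime order p in S_5 decomposes into cycles of length
# p or 1, so a group of order divisible by p needs at least p points in
# some transitive component.
for p in permutations(range(1, 6)):
    if order(p) in (2, 3, 5):  # the primes dividing |S_5| = 120
        assert set(cycle_lengths(p)) <= {1, order(p)}
```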
From the definition of a minimal degree permutation representation it
follows that
\begin{proposition}\label{reg->reg}
If a minimal degree permutation representation contains a regular
element, then a subordinate non-minimal degree permutation representation
contains a regular element too.
\end{proposition}
\paragraph{$n$-Orbits containing $S_n$-isomorphic
$\mathbf{k}$-orbits.}
\begin{proposition}\label{Sn-iso-intr}
Let $X_n\in Orb_n(G)$, $A<G$, $Y_n\in Orb_n(A)$ and let $I_k,J_k$ be
$k$-subspaces. Let $\hat{p}(I_k)Y_n$ be $S_n$-isomorphic to
$\hat{p}(J_k)Y_n$; then $\hat{p}(I_k)X_n$ is not necessarily
$S_n$-isomorphic to $\hat{p}(J_k)X_n$.
\end{proposition}
{\bfseries Proof:\quad}
An intransitive example is simple to construct:
$$
\begin{array}{||cc|cc|cc||}
\hline
1 & 2 & 3 & 4 & 5 & 6 \\
2 & 1 & 4 & 3 & 6 & 5 \\
\hline
3 & 4 & 1 & 2 & 5 & 6 \\
4 & 3 & 2 & 1 & 6 & 5 \\
\hline
\end{array}\,.
$$
Here the $2$-suborbits $\{\langle 12\rangle,\langle 21\rangle\}$ and
$\{\langle 56\rangle,\langle 65\rangle\}$ are $S_n$-isomorphic, but the
corresponding $2$-orbits have different cardinality. A transitive example
is not evident; one is presented in the $n$-orbit of the automorphism group
of the Petersen graph (see below). $\Box$
\begin{theorem}\label{Sn-iso}
A non-minimal degree permutation representation of a finite group $F$
contains $S_n$-isomorphic $k$-orbits.
\end{theorem}
{\bfseries Proof:\quad}
Let $B<A<F$, $|FB|=n$, $|FA|=m$ and $|AB|=k$. Let $F(FA)$ be a minimal and
$F(FB)$ a non-minimal degree permutation representation of the finite
group $F$. Let $X_n$ be an $n$-orbit of $F(FB)$ and $X_m'$ an $m$-orbit
of $F(FA)$. Let $Y_n\subset X_n$ be an $n$-orbit of $A(FB)$ and
$Y_m'\subset X_m'$ an $m$-orbit of $A(FA)$. Let $Z_n\subset Y_n$ be an
$n$-orbit of $B(FB)$ and $Z_m'\subset Y_m'$ an $m$-orbit of $B(FA)$. Let
$P<B$ be a subgroup of prime order $p$. Let $T_n\subset Z_n$ be an
$n$-orbit of $P(FB)$ and $T_m'\subset Z_m'$ an $m$-orbit of $P(FA)$.
Let $Y_k$, $Z_k\subset Y_k$ and $T_k\subset Z_k$ be $k$-orbits of $A(AB)$,
$B(AB)$ and $P(AB)$ correspondingly, and let $T_{n-k}\circ T_k=T_n$.
The $m$-orbit $T_m'$, the $n$-orbit $T_n$ and the $k$-orbit $T_k$ can be
represented as concatenations of $p$-rcycles and multituples.
A $p$-rcycle from $T_m'$ generates $k$ $p$-rcycles in $T_{n-k}$ that are
associated with a cyclic permutation of $k$ right cosets of $A$. A
multituple from $T_m'$, however, generates $p$-rcycles and a multituple in
$T_k$. The latter $p$-rcycles exist because $P<B<A$, and hence the action
of $P$ on $T_k$ permutes right cosets of $B$ in $A$.
A $p$-rcycle from $T_n$ that is generated by a $p$-rcycle from $T_m'$ is
evidently not $F(FB)$-isomorphic to a $p$-rcycle from $T_n$ that is
generated by a multituple from $T_m'$. $\Box$
This situation is demonstrated in example \ref{S3(6)}.
This property can also emerge in an md-representation of a finite group
$F$. An example is given by the group $F=C_6\otimes C_2$ in the following
representation:
\begin{example}\label{C6*C2}
$$
\begin{array}{||cc|cc|cc||}
\hline
1 & 6 & 2 & 5 & 3 & 4 \\
6 & 1 & 5 & 2 & 4 & 3 \\
\hline
2 & 1 & 3 & 6 & 4 & 5 \\
1 & 2 & 6 & 3 & 5 & 4 \\
\hline
3 & 2 & 4 & 1 & 5 & 6 \\
2 & 3 & 1 & 4 & 6 & 5 \\
\hline
4 & 5 & 3 & 6 & 2 & 1 \\
5 & 4 & 6 & 3 & 1 & 2 \\
\hline
3 & 4 & 2 & 5 & 1 & 6 \\
4 & 3 & 5 & 2 & 6 & 1 \\
\hline
5 & 6 & 4 & 1 & 3 & 2 \\
6 & 5 & 1 & 4 & 2 & 3 \\
\hline
\end{array}\,.
$$
\end{example}
It is seen that the $2$-rcycle $\{\langle 25\rangle,\langle
52\rangle\}$ does not belong to the $2$-orbit of $G$ containing the
$2$-rcycle $\{\langle 16\rangle,\langle 61\rangle\}$. This matrix is an
md-representation of a finite group $F$, but not an ld-representation, and
it contains a submatrix that is a non-minimal degree representation of the
subgroup $S_3<F$.
In this example the $p$-subgroup does not belong to a stabilizer, but,
using the construction given in lemma \ref{any gr.-st.}, we obtain
the property for a $p$-subgroup (of order $p$) of a stabilizer of~$F$.
The example considered suggests the following property of $n$-orbits.
\begin{proposition}\label{nmd Zn}
Let $I=\{I_k^i:i\in [1,l]\}$ be a partition of $V$ into $G$-isomorphic
$k$-subspaces and $W=\{Co(I_k): I_k\in I\}$.
Let $Z_n$ be a maximal subset of $X_n\in Orb_n(G)$ such that
$Co(\hat{p}(I_k^i)Z_n)=W$ for each $i\in [1,l]$.
Let $A<Aut(Z_n)$, let $Y_n\subset Z_n$ be an $n$-orbit of $A$, let
$Y_k^1=\hat{p}(I_k^1)Y_n$ be the representation $A(I_k^1)$ and let
$Y_k^2=\hat{p}(I_k^2)Y_n$ be an $S_{|A|}^k$-orbit; then $Z_n$ is a
non-minimal degree representation.
\end{proposition}
{\bfseries Proof:\quad}
$Z_n$ is automorphic, because $GZ_n$ is a partition of $X_n$. From
proposition \ref{H->r.aut} it follows that $A$ contains no normal subgroup
of $Aut(Z_n)$. Hence $Z_n$, which is defined on $V$, is isomorphic to an
$l$-orbit $Z_l'$ defined on $W$ and obtained by the evident reduction of
$Z_n$. $\Box$
The next example, a minimal degree representation of the group
$S_5\otimes S_2$, contains $S_n$-isomorphic $p$-orbits in the case
$(p,n)=1$.
\begin{example}\label{S5*S2}
$$
\begin{array}{||c|cc|cc||}
\hline
1 & 2 & 5 & 3 & 4 \\
1 & 5 & 2 & 4 & 3 \\
\hline
2 & 3 & 1 & 4 & 5 \\
2 & 1 & 3 & 5 & 4 \\
\hline
3 & 4 & 2 & 5 & 1 \\
3 & 2 & 4 & 1 & 5 \\
\hline
4 & 5 & 3 & 1 & 2 \\
4 & 3 & 5 & 2 & 1 \\
\hline
5 & 1 & 4 & 2 & 3 \\
5 & 4 & 1 & 3 & 2 \\
\hline
\end{array}\,.
$$
\end{example}
This example shows the existence of an $n$-orbit of a subgroup of order
$p$ that contains $S_n$-isomorphic $p$-rcycles but no $S_p^p$-orbit
$G$-isomorphic to a $p$-rcycle. Here the $2$-rcycles $\{\langle
25\rangle,\langle 52\rangle\}$ and $\{\langle 34\rangle,\langle
43\rangle\}$ are $S_n$-isomorphic, the $S_p^p$-orbits $\langle\langle
23\rangle\langle 54\rangle\rangle$ and $\langle\langle 54\rangle\langle
23\rangle\rangle$ are equal and hence $G$-isomorphic, and the $2$-rcycle
$\{\langle 25\rangle,\langle 52\rangle\}$ is $G$-isomorphic to the
$2$-orbit $\{\langle 13\rangle,\langle 14\rangle\}$. We shall see that the
properties of this $5$-orbit give rise to the unconventional properties of
the $10$-orbit of the Petersen graph automorphism group.
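The Petersen graph referred to here can be modelled as the Kneser graph on the $2$-subsets of $\{1,\ldots,5\}$ (our modelling assumption). The sketch below verifies that every permutation of $\{1,\ldots,5\}$ induces a graph automorphism, consistent with the degree-$10$ representation of $S_5$ acting on the Petersen graph; that these $120$ automorphisms exhaust the full automorphism group is a classical fact not checked here:

```python
from itertools import combinations, permutations

# Vertices: the 2-subsets of {1,...,5}; edges join disjoint pairs,
# giving the Kneser graph K(5,2), a standard model of the Petersen graph.
vertices = [frozenset(c) for c in combinations(range(1, 6), 2)]
edges = {frozenset((u, v)) for u in vertices for v in vertices if not u & v}
assert len(vertices) == 10 and len(edges) == 15

# Each permutation g of {1,...,5} relabels 2-subsets and preserves
# disjointness, hence induces an automorphism of the graph.
for g in permutations(range(1, 6)):
    relabel = lambda s: frozenset(g[i - 1] for i in s)
    image = {frozenset((relabel(u), relabel(v))) for u, v in map(tuple, edges)}
    assert image == edges
```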
\begin{theorem}\label{(p,n)=1,mdr}
Let a prime $p$ divide $|G|$ but not $n$; then $G$ is an
md-representation.
\end{theorem}
{\bfseries Proof:\quad}
Let $(\alpha_p)$ be a cycle, $X_p=G\alpha_p$, $L_p=G(\alpha_p)\alpha_p$
and $L_1=\hat{p}(I_1)L_p$; then $L_1=Co(L_p)$ is a covering of $V$ by
$G$-isomorphic $p$-subsets. We consider the two possible cases.
\begin{enumerate}
\item
Let $\sqcup L_1=V$; then $G$ is a primitive group and hence there exists no
partition $Q$ of $V$ into $G$-isomorphic subsets for which $G(Q)$ would be a
representation of $G(V)$. Indeed, if $g\in G$ is a permutation of order
$p$, then $gQ\sqcap Q\neq Q$ for any partition $Q$. This contradicts
proposition~\ref{not-intersect}.
\item
Let now $\sqcup L_1=Q$, where $Q$ is a partition of $V$; then $Q$ consists
of $G$-isomorphic classes. But in this case, because of the transitivity of
$G$ and $(n,p)=1$, there exists an $n$-orbit of a $p$-subgroup whose
projections on the subspaces from $Q$ are $G$-isomorphic, and hence $X_n$
does not contain $S_n$-isomorphic $(n/|Q|)$-orbits. Thus, according to
theorem \ref{Sn-iso}, $X_n$ is a minimal degree permutation
representation. $\Box$
\end{enumerate}
An example of a representation of a finite group $F$ falling under the
second case can be obtained from lemma~\ref{any gr.-st.} by setting
$A=A_4$ and $p=3$. Consideration of this example for $A=S_3$ and $p=2$
shows that the condition that $p$ does not divide $n$ is not essential for
the second case of theorem~\ref{(p,n)=1,mdr}. We shall see that exactly
this situation takes place in the $10$-orbit of the Petersen graph
automorphism group.
\paragraph{Conditions of $k$-closure and properties of $k$-closed groups.}
\begin{proposition}\label{Xk,Yk Aut}
Let $Y_k\in Orb_k(Aut(X_k))$; then it does not follow that
$Aut(Y_k)=Aut(X_k)$.
\end{proposition}
{\bfseries Proof:\quad}
An example: $X_2=\{\langle 12\rangle,\langle 23\rangle,\langle
34\rangle,\langle 41\rangle\}$, $Y_2=\{\langle 13\rangle,\langle
31\rangle,\langle 24\rangle,\langle 42\rangle\}$.
$\Box$
\begin{proposition}\label{Xk,Yk iso}
Let the $k$-orbits $Y_k$ and $X_k$ be isomorphic and $Aut(X_k)=Aut(Y_k)$;
then it does not follow that $Y_k=X_k$.
\end{proposition}
{\bfseries Proof:\quad}
An example: $X_2=\{\langle 13\rangle,\langle 24\rangle\}$, $Y_2=\{\langle
14\rangle,\langle 23\rangle\}$. $\Box$
\begin{proposition}\label{iso-rorbits}
Let $X_k$ and $Y_k$ be isomorphic $k$-rorbits with the same automorphism
group; then it does not follow that $X_k=Y_k$.
\end{proposition}
{\bfseries Proof:\quad}
The $2$-orbits $X_2=\hat{p}(\langle 23\rangle)X_5$ and $Y_2=\hat{p}(\langle
45\rangle)X_5$ from example \ref{S5*S2} represent such a case.
$\Box$
\begin{lemma}\label{k.cl.gr->k.cl.subgr}
Let $X_n$ be an $n$-orbit of a $k$-closed group $G$, $A<G$ and let
$Y_n\subset X_n$ be an $n$-orbit of $A$. Let $Y_k(I_k)=\hat{p}(I_k)Y_n$,
$B=\cap_{(I_k\subset I_n)}Aut(Y_k(I_k))$, $P_n=G\,(\cup BY_n)$, and let
the classes of $P_n$ have no intersections; then $A$ is $k$-closed.
\end{lemma}
{\bfseries Proof:\quad}
Let $X_k(I_k)=\hat{p}(I_k)X_n$. It is given that $G=\cap_{(I_k\subset
I_n)}Aut(X_k(I_k))$. Further we have $Aut(Y_n)<B$, $Y_n\in BY_n$,
$Y_n\subset\cup BY_n$ and $X_n=\cup GY_n\subseteq \cup G\cup BY_n=\cup
P_n$. Suppose $Y_n$ is not $k$-closed; then no class of $GY_n$ is
$k$-closed. As the classes of $P_n$ have no intersections, $X_n\subset
\cup P_n$ and hence $X_n$ is not $k$-closed, a contradiction. $\Box$
\begin{theorem}\label{Sn.iso->2-cl}
If a transitive $n$-orbit $X_n$ contains $S_n$-isomorphic $k$-projections
($k\geq 2$), then $X_n$ is $2$-closed.
\end{theorem}
{\bfseries Proof:\quad}
We have two possibilities:
\begin{enumerate}
\item
There exists a subspace $I_4=\langle 1234\rangle$ such that the $4$-orbit
$X(1234)=\hat{p}(\langle 1234\rangle)X_n$ is not $2$-closed and the
$2$-orbits $X(12)=\hat{p}(\langle 12\rangle)X_n$ and $X(34)=\hat{p}(\langle
34\rangle)X_n$ are $S_n$-isomorphic. Then the $3$-orbits
$X(123)=\hat{p}(\langle 123\rangle)X(1234)$ and $X(234)=\hat{p}(\langle
234\rangle)X(1234)$ are also not $2$-closed.
Let $A(ijk\ldots)=Aut(X(ij))\cap Aut(X(ik))\cap Aut(X(jk))\cap\,\ldots\,$;
then $\cup A(123)X(1234)$ and $\cup A(234)X(1234)$ have to be
$2$-closed and equal, i.e.\ $A(123)=A(234)=A(1234)$. But such an
equality (for the transitive $4$-orbit $X(1234)$) is impossible, because
$A(123)$ and $A(234)$ are conjugate subgroups of $S_n$ and hence are
not equal.
\item
There exists a subspace $I_6=\langle 123456\rangle$ such that the $6$-orbit
$X(123456)=\hat{p}(\langle 123456\rangle)X_n$ is not $2$-closed, while the
$3$-orbits $X(123)=\hat{p}(\langle 123\rangle)X_n$ and
$X(456)=\hat{p}(\langle 456\rangle)X_n$ are $S_n$-isomorphic and not
$2$-closed. Then $\cup A(123)X(123456)$ and $\cup A(456)X(123456)$
have to be $2$-closed and equal, i.e.\ $A(123)=A(456)=A(123456)$. This
equality is also impossible, for the same reason. $\Box$
\end{enumerate}
Let $X_n$, $Y_m$ and $Z_l$ be three representations of a finite group $F$
with $n>m>l$. The relation between the $k$-closure properties of $Z_l$,
$Y_m$ and $X_n$ is of interest.
It is evident that if $Y_m$ is not $1$-closed, then $X_n$ is not
$1$-closed either, while $Z_l$ can be $1$-closed. And if $Y_m$ is
$k$-closed for $k>1$, then $X_n$ is $k$-closed too, while $Z_l$ can fail to
be $k$-closed.
\paragraph{Unconventional cyclic structure on $k$-orbits.}
Now we shall consider one interesting property of the right permutation
action on $k$-orbits that has no analogue in group theory. The right
automorphism action on an $n$-orbit $X_n$ of a group $G$ maps a
$k$-subspace $I_k$ to an isomorphic $k$-subspace $I_k'$, so that
$X_k=\hat{p}(I_k)X_n=\hat{p}(I_k')X_n=X_k'$. The latter equality
generates an unconventional cyclic structure on a $k$-orbit $X_k$, which
one can see in the following examples:
\begin{example}\label{S2*S2}
$$
\left(
\begin{array}{cc|cc}
1 & 2 & 3 & 4 \\
2 & 1 & 4 & 3 \\
\hline
3 & 4 & 1 & 2 \\
4 & 3 & 2 & 1 \\
\end{array}
\right)
\rightarrow
\left(
\begin{array}{c|c}
B & B' \\
\hline
B'& B \\
\end{array}
\right)
\mbox{ \emph{and} }
\left(
\begin{array}{ccc}
1 & 2 & 3 \\
2 & 1 & 3 \\
1 & 3 & 2 \\
3 & 1 & 2 \\
2 & 3 & 1 \\
3 & 2 & 1 \\
\end{array}
\right)
\rightarrow
\left(
\begin{array}{cc|cc}
1 & 2 & 2 & 3 \\
2 & 1 & 1 & 3 \\
\hline
2 & 3 & 3 & 1 \\
1 & 3 & 3 & 2 \\
\hline
3 & 1 & 1 & 2 \\
3 & 2 & 2 & 1 \\
\end{array}
\right)
\rightarrow
\left(
\begin{array}{c|c}
B_1 & B_2 \\
\hline
B_2 & B_3 \\
\hline
B_3 & B_1 \\
\end{array}
\right).
$$
\end{example}
We have introduced the right permutation action on $n$-orbits as a
permutation of the coordinates of $n$-tuples. Of course, we can equally
consider this action as a permutation of the $n$-tuples themselves, with
the same result. This interpretation of the right permutation action leads
to the following
\begin{lemma}\label{Rk=Yn}
Let $X_k$ be a $k$-orbit of a group $G$, $A<G$, let $R_k=AX_k$ be the
partition of $X_k$ into $k$-orbits of right cosets of $A$ in $G$, and let
$Y_n$ be an $n$-orbit of $A$. Then for any $k$-orbit $Y_k\in R_k$ there
exists a subspace $I_k$ such that $Y_k=\hat{p}(I_k)Y_n$.
\end{lemma}
{\bfseries Proof:\quad}
This follows from $A\alpha_n=Y_n$ for $\alpha_n\in Y_n$ and
$A\alpha_k\in R_k$ for $\alpha_k\in X_k$. $\Box$
\begin{theorem}\label{rr}
Let $X_n$ be an $n$-orbit of a group $G$, let $I_k,I_k'$ be isomorphic
subspaces, and let $X_{2k}=\hat{p}(I_k\circ I_k')X_n=\uplus X_k\circ \uplus
X_k$. Let $A<G$, $R_{2k}=AX_{2k}$ and $R_k=AX_k$; then $R_{2k}$ can be
partitioned into cycles on the classes of $R_k$.
\end{theorem}
{\bfseries Proof:\quad}
It follows from lemmas \ref{Rk=Yn} and \ref{mMmM}. $\Box$
It is exactly this property that we see in the examples above.
\begin{remark}\label{rem}
Let $I_k^1$, $I_k^2$ and $I_k^3$ be isomorphic subspaces of an $n$-orbit
$X_n$; then $\hat{p}(I_k^1)X_n=\hat{p}(I_k^2)X_n=\hat{p}(I_k^3)X_n$. But
it does not follow from this condition that $\hat{p}(I_k^1\circ
I_k^2)X_n=\hat{p}(I_k^2\circ I_k^3)X_n$.
Hence the corresponding cycle structure on $lk$-orbits of right cosets of
a subgroup does not necessarily exist for $l>2$. This fact represents
the difference between $2$-closed groups and $m$-closed groups for $m>2$.
\end{remark}
\begin{proposition}\label{rlrl}
Let $X_{2k}\in Orb_{2k}(G)$, $A<G$, $R_{2k}=AX_{2k}$, let $Y_k^1$ and
$Y_k^2$ be isomorphic $k$-orbits of a subgroup $A$, let $Y_{2k}=Y_k^1\circ
Y_k^2\in R_{2k}$ and let $C_{2k}\subset R_{2k}$ be a cycle containing
$Y_{2k}$. If the set $Z_{2k}=\cup C_{2k}$ is a $(2k)$-orbit of some
subgroup $B<G$, then $A$ is a normal subgroup of $B$.
\end{proposition}
{\bfseries Proof:\quad}
Since $Y_k^1$ and $Y_k^2$ are isomorphic, the $k$-orbits $Y_k^i$ that form
the cycle $C_{2k}$ are $k$-orbits of left and right cosets of $A$ in $B$.
The statement then follows from lemma \ref{Lk*Rk}. $\Box$
\section{Correspondence between $k$-orbits and their automorphism\\ groups}
\begin{lemma}
Every cycle $c$ of a permutation $g\in G$ of length $l\geq k$ corresponds
to an $(l,k)$-cycle $C_{lk}$ of some $k$-orbit $X_k\in Orb_k(G)$.
\end{lemma}
{\bfseries Proof:\quad}
If $c=(\alpha_l)$, then the $l$-orbit $X_l=G\alpha_l$ contains the
$l$-rcycle $rC_l=gr(c)\alpha_l$. Here $X_k$ and $C_{lk}$ are the
corresponding $k$-projections of $X_l$ and $rC_l$. $\Box$
For $k>l$ a counterexample is given by the cycle $(56)$ in the fourth case
of example \ref{simple-struc}, where there exists no $(2,3)$-cycle for the
subautomorphism $(56)$.
The converse statement fails for $k$-orbits of non-$k$-closed groups. An
example is a $2$-orbit of $A_4$ that contains a $(4,2)$-cycle related to no
subautomorphism of $A_4$.
For $k$-closed groups the converse statement also fails. This is
shown by an example of a $2$-closed group defined by the $2$-orbit
$X_2=\{14,25,36,41,52,63\}$. The group $Aut(X_2)$ has a $2$-orbit
$X_2'=\{12,13,15,16,21,24,23,26,32,35,31,34,42,45,43,46,51,54,53,
56,62,65,61,64\}$.\label{X_2'} The automorphism group of the $2$-suborbit
$\{12,13,15,16\}\subset X_2'$ contains a cycle $(2536)$ that does not
belong to $Aut(X_2')$. We can see that the possibility of constructing
this counterexample is given by a concatenation of an $S_4^1$-orbit with a
$(4,1)$-multituple. But $X_2'$ also contains a suborbit $\{12,24,43,31\}$
that is a projection of a $4$-rcycle and likewise is not a $2$-orbit of a
subgroup of $Aut(X_2')$. In the latter case the length $l$ of the cycle
$(1243)$ is, first, not prime and, second, does not divide the degree $n$.
For a prime $l$ that does not divide $|Aut(X_2')|=48$ we have the next
counterexample: the automorphic $2$-subset $\{13,32,24,45,51\}\subset X_2'$.
\subsection{The local property of $k$-orbits}
The trivial case of the converse statement is obtained from
corollary \ref{Y_k-A}. To reveal a non-trivial local property of
the automorphism group of $k$-orbits we have to consider the case where a
$(p,k)$-subset of a $k$-orbit $X_k$ is a $k$-projection of a $p$-rcycle for
$p$ a prime divisor of $|Aut(X_k)|$. From now on we write a
$k$-projection of an $l$-rcycle for $k\leq l$ as an \emph{$(l,k)$-rcycle}
$rC_{lk}$.
\begin{theorem}\label{rC_pk->A}
Let $X_k$ be automorphic, $p\geq k$ be a prime divisor of $|Aut(X_k)|$ and
$rC_{pk}\subset X_k$ be a $(p,k)$-rcycle, then $Aut(rC_{pk})<<Aut(X_k)$.
\end{theorem}
{\bfseries Proof:\quad}
Let $L_k=Aut(X_k)rC_{pk}$. If $|L_k|p$ divides $|Aut(X_k)|$, then the
statement follows from theorem \ref{|Y_k||GY_k|}. If $|L_k|p>|Aut(X_k)|$,
then $|L_k|=|Aut(X_k)|$ and hence $L_k$ can be partitioned into subsets
$L_k^i$, $i\in [1,p]$, so that $|L_k^i|p=|Aut(X_k)|$. Since the subsets
$L_k^i$ are pairwise disjoint, there exist subgroups $A_i<Aut(X_k)$ such
that $A_iL_k^i=L_k^i$. It follows that $|L_k|p$ divides $|Aut(X_k)|$ and
hence $|L_k|p=|Aut(X_k)|$. $\Box$
Theorem \ref{rC_pk->A} and lemma \ref{XAUXB=XAB} make it possible to
reconstruct the subautomorphisms of a $k$-orbit from its symmetry
properties.
The next statement relates the automorphism group of a $k$-orbit to the
automorphism group of its $k$-suborbit.
\begin{proposition}\label{Aut(Y_k)<<Aut(X_k)}
Let $X_k$ be a $k$-orbit and $Y_k\subset X_k$ be a $k$-orbit of a subgroup
$A=Aut(Y_k;V)\cap Aut(X_k)$, then $Aut(Y_k)<<A<Aut(X_k)$ if and only if
$Aut(Y_k)<<\cap_{(Z_k\in AX_k)}Aut(Z_k;V)$.
\end{proposition}
{\bfseries Proof:\quad}
The statement follows from the evident equality $A=\cap_{(Z_k\in
AX_k)}Aut(Z_k;V)$. $\Box$
\section{Primitivity and imprimitivity}
Throughout this section the group $G$ is assumed to be transitive.
Let $X_k$ be a $k$-rorbit, $Y_k\subset X_k$ be a $k$-subrorbit,
$\alpha_k\in Y_k$ and $Co(Y_k)=Co(\alpha_k)$; then $L_k=GY_k$ is a
partition of $X_k$ (lemma~\ref{Co(Y_k)=Co(alpha_k)}), but classes of $L_k$
may intersect on $V$. We shall call a $k$-rorbit $X_k$ for $k<n$
\emph{$V$-coherent} if $\sqcup Co(X_k)=V$, and \emph{$V$-incoherent} if
$\sqcup Co(X_k)$ is a partition of $V$. We shall write simply coherent and
incoherent, instead of $V$-coherent and $V$-incoherent, when it is clear
which set is being considered.
\begin{proposition}\label{Aut-incoherent}
The automorphism group of an incoherent $k$-rorbit is imprimitive.
\end{proposition}
\begin{corollary}\label{impr-gr}
A group $G$ is imprimitive if and only if it contains an incoherent
$k$-rorbit.
\end{corollary}
\begin{corollary}\label{nmd-imprimit}
Non-minimal degree representations are imprimitive.
\end{corollary}
The automorphism group of a coherent $k$-rorbit can be imprimitive. An
example is the $2$-rorbit $\{13,31,24,42,14,41,23,32\}$.
If a coherent (incoherent) $k$-rorbit $X_k$ contains no $V$-coherent and
no $V$-incoherent $k$-subrorbit, then $X_k$ is called an
\emph{elementary coherent} (\emph{elementary incoherent}) $k$-rorbit.
\begin{lemma}\label{el.coherent-primitive}
The automorphism group of an elementary coherent $k$-rorbit is primitive.
\end{lemma}
\begin{corollary}\label{primitive-el.coherent}
The group $G$ is primitive if and only if it contains an elementary
$V$-coherent $k$-subrorbit.
\end{corollary}
A maximal $k$-subrorbit $Y_k$ of a $k$-rorbit $X_k$ that is a structure
element of coherent (incoherent) $k$-subrorbits is called a
\emph{$k$-block}.
Let $Y_k$ be a $k$-block of an incoherent $k$-rorbit $X_k$; then
$U=Co(Y_k)$ is a $1$-block or $k$-element block of an imprimitive group
$G$ in the conventional sense.
Let us give some examples of coherent and incoherent $k$-rorbits:
\begin{enumerate}
\item
A $2$-orbit $X_2$ of $S_3$ is elementary coherent and a $2$-rcycle from
$X_2$ is a $2$-block.
\item
A $2$-orbit of $C_5\otimes C_2$ is elementary coherent. This group contains
$S_5$-isomorphic elementary coherent $2$-rorbits.
\item
A $2$-orbit of $A_5$ is coherent but not elementary coherent. It
contains an elementary coherent $2$-orbit of $C_5\otimes C_2$. A
$3$-orbit of $A_5$ is elementary coherent and contains elementary coherent
suborbits on $4$-element subsets of $V$.
\item
All $2$-orbits of $C_2\otimes C_2$ and two of the six $2$-orbits of $D_4$
are elementary incoherent. The other four $2$-orbits of $D_4$ are coherent.
\end{enumerate}
\begin{proposition}\label{el.impr->base.typr}
Let $p$ be prime and $X_p$ be an elementary incoherent $p$-rorbit; then
there exists a subgroup $A<Aut(X_p)$ of order $p$ such that the classes of
the partition $R_p=AX_p$ are $p$-orbits of the base type, i.e.\ they are
either $p$-rcycles, $S_p^p$-orbits or $p$-tuples.
\end{proposition}
{\bfseries Proof:\quad}
An elementary incoherent $p$-rorbit consists of $p$-rcycles that are
pairwise disjoint on $V$. $\Box$
The converse statement:
\begin{lemma}\label{base.typ->impr}
Let $p$ be prime, $X_p$ be a $p$-rorbit and $rC_p\subset X_p$ be a
$p$-rcycle that is a $p$-orbit of a subgroup $A<Aut(X_p)$ of order $p$.
If the classes of $R_p=AX_p$ are $p$-orbits of the base type, then
$Aut(X_p)$ is imprimitive.
\end{lemma}
{\bfseries Proof:\quad}
Suppose the statement is false and $Aut(X_p)$ is primitive. Then
$L_p=Aut(X_p)rC_p$ contains classes that intersect on $V$, and hence there
exists a class of $R_p$ that has a $(p,m<p)$-multituple as a concatenation
component. Contradiction. $\Box$
In the next example:
$L_1=\{\{124\},\{235\},\{346\},\{451\},\{562\},\{613\}\}$, we see that
$\sqcup L_1=V=[1,6]$, where $k=3$ divides $n=6$, but the corresponding
$3$-set $X_3=\{124,241,421,\ldots\}$ is not automorphic.
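As an illustrative sanity check on this covering (a small script outside the theory; the variable names are ours), one can verify that the six triples of $L_1$ cover $V=[1,6]$, with each point occurring in exactly three of them, so they cover $V$ without partitioning it:

```python
# The six triples of L_1 from the example above.
from collections import Counter

L1 = [{1, 2, 4}, {2, 3, 5}, {3, 4, 6}, {4, 5, 1}, {5, 6, 2}, {6, 1, 3}]
counts = Counter(x for triple in L1 for x in triple)

# The triples cover V = [1,6] ...
assert set(counts) == set(range(1, 7))
# ... and each point of V lies in exactly three triples (k = 3 divides n = 6).
assert all(c == 3 for c in counts.values())
```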
\begin{theorem}\label{not k|n}
Let $X_k$ be an elementary coherent $k$-rorbit, then $\neg (k|n)$.
\end{theorem}
{\bfseries Proof:\quad}
The statement holds for $n$ prime, since $k<n$; so we
assume that $n$ is not prime. Suppose the statement is false for some
$k$; then it is also false for a prime divisor $p$ of $k$, so we may
assume that $k=p$ is prime. Let $Y_p\subset X_p$ be a $p$-rcycle that is
a $p$-orbit of a subgroup $A<Aut(X_p)$ of order $p$, $R_p=AX_p$ and
$L_p=Aut(X_p)Y_p$. Since by hypothesis the classes of $L_p$ intersect on
$V$, $R_p$ contains classes that have a $(p,l<p)$-multituple as a
concatenation component. Let $X_n\in Orb_n(Aut(X_p))$ and $Y_n\subset X_n$
be an $n$-orbit of $A$; then $Y_n$ is a concatenation of (not necessarily
$Aut(X_p)$-isomorphic) $p$-rcycles and a $(p,pr)$-multituple.
Let $I_{pr}$ be the subspace defining the $(p,pr)$-multituple; then,
according to lemma \ref{k-rorbit}, the $pr$-orbit
$X_{pr}=\hat{p}(I_{pr})X_n$ is a $pr$-rorbit and hence $l|pr$. On the other
hand, the subspace $I_{pr}$ is a concatenation of $m=pr/l$
$Aut(X_p)$-isomorphic subspaces $I_l^i$ ($i\in [1,m]$). It follows that
$R_p$ contains $m$ $Aut(X_p)$-isomorphic classes and that $Aut(X_p)$
contains a subgroup $B$ that acts transitively on the system of these $m$
classes. Hence $R_p$, and therefore $X_p$, can be partitioned into $m$
$Aut(X_p)$-isomorphic classes. Let $L_p'=L_p\sqcup R_p$ be this partition
of $X_p$ into $m$ $Aut(X_p)$-isomorphic classes; then the classes of
$L_p'$ are $p$-orbits of a normal subgroup of $Aut(X_p)$ and cannot
intersect on $V$. Hence $X_p$ is not an elementary coherent $p$-rorbit.
Contradiction. $\Box$
\section{Petersen graph}
Here we consider properties of $n$-orbits that are not visible
from group theory and that have therefore hindered the solution of the
polycirculant conjecture. These are the cases where $p|n$ but there exists
no transitive, imprimitive subgroup of order $p$, and therefore there
exists no subgroup of order $p$ whose $n$-orbits could be represented as
a concatenation of $(p,p)$-orbits of the base type.
The simplest example is an $(n=6)$-orbit of the group $G=S_3\otimes C_2$.
\begin{example}\label{S3xC2}
$$
\begin{array}{||cc|c|cc|c||}
\hline
1 & 2 & 3 & 4 & 5 & 6 \\
2 & 1 & 3 & 5 & 4 & 6 \\
\hline
1 & 3 & 2 & 4 & 5 & 6 \\
3 & 1 & 2 & 5 & 4 & 6 \\
\hline
2 & 3 & 1 & 5 & 6 & 4 \\
3 & 2 & 1 & 6 & 5 & 4 \\
\hline
4 & 5 & 6 & 1 & 2 & 3 \\
5 & 4 & 6 & 2 & 1 & 3 \\
\hline
4 & 5 & 6 & 1 & 3 & 2 \\
5 & 4 & 6 & 3 & 1 & 2 \\
\hline
5 & 6 & 4 & 2 & 3 & 1 \\
6 & 5 & 4 & 3 & 2 & 1 \\
\hline
\end{array}\,.
$$
\end{example}
It is seen that the pair $\langle 12\rangle$ is $G$-isomorphic to $\langle
45\rangle$ but not $G$-isomorphic to $\langle 36\rangle$, so the $6$-orbit
of the subgroup $gr((12)(45))$ cannot be represented as a concatenation of
$p$-orbits of the base type. We shall say that this subgroup of prime order
and its $n$-orbit are \emph{undecomposable}. Nevertheless the given group
contains a subgroup $gr((14)(25)(36))$ that is \emph{decomposable} (into
$(p,p)$-orbits of the base type), where $\{14\},\{25\},\{36\}$ are
incoherent $2$-blocks of a corresponding imprimitive subgroup of $G$.
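The claim about the pairs $\langle 12\rangle$, $\langle 45\rangle$ and $\langle 36\rangle$ can be checked mechanically. The sketch below (our own helper names, not part of the theory) generates $G=S_3\otimes C_2$ on $\{1,\dots,6\}$ from three generators and computes the orbit of the ordered pair $(1,2)$:

```python
# Generators of G = S_3 x C_2 on {1,...,6}: S_3 acts diagonally on the
# blocks {1,2,3} and {4,5,6}, and an extra involution swaps the blocks.
# A permutation g is stored as the tuple (g(1),...,g(6)).
def perm(mapping):
    return tuple(mapping.get(i, i) for i in range(1, 7))

a = perm({1: 2, 2: 3, 3: 1, 4: 5, 5: 6, 6: 4})    # (123)(456)
b = perm({1: 2, 2: 1, 4: 5, 5: 4})                # (12)(45)
c = perm({1: 4, 4: 1, 2: 5, 5: 2, 3: 6, 6: 3})    # (14)(25)(36)

def compose(p, q):  # (p o q)(i) = p(q(i))
    return tuple(p[q[i] - 1] for i in range(6))

# Close the generating set under composition to obtain the whole group.
G = {perm({})}
frontier = {a, b, c}
while frontier:
    G |= frontier
    frontier = {compose(p, g) for p in G for g in (a, b, c)} - G
assert len(G) == 12

# The orbit of the ordered pair (1,2) contains (4,5) but not (3,6):
# images of (1,2) never cross the blocks {1,2,3} and {4,5,6}.
orbit = {(g[0], g[1]) for g in G}
assert (4, 5) in orbit
assert (3, 6) not in orbit
```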
The Petersen graph gives an example where, for the least prime divisor
$p=2$ of the degree $n=10$, there exists no decomposable subgroup with
incoherent $2$-blocks. From this follow the unconventional properties of
the automorphism group of this graph. The automorphism group of the
Petersen graph is a representation $G$ of $S_5$ on a $10$-element set $V$.
It is the representation of $S_5$ on right (left) cosets of a subgroup of
order $12$ presented in example \ref{S_5,md}. This representation can also
be obtained from the action of $S_5$ on unordered pairs from the set
$V'=\{1,2,3,4,5\}$, where $V=\{1=\{1,2\},2=\{1,3\},\ldots,0=\{4,5\}\,\}$.
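This pairs description can be checked directly: taking two unordered pairs from $V'$ to be adjacent exactly when they are disjoint yields the Kneser graph $K(5,2)$, which is the Petersen graph (a standard fact; the script below only illustrates the construction, with the pairs themselves as vertex labels rather than the relabelling $1,\dots,0$ used above):

```python
# Build the Petersen graph as the Kneser graph K(5,2): vertices are the
# 2-element subsets of {1,...,5}, edges join disjoint pairs.
from itertools import combinations

vertices = list(combinations(range(1, 6), 2))
edges = [(u, v) for u, v in combinations(vertices, 2) if not set(u) & set(v)]

assert len(vertices) == 10 and len(edges) == 15
degree = {v: sum(v in e for e in edges) for v in vertices}
assert all(d == 3 for d in degree.values())   # the Petersen graph is cubic
```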
The following matrix is a $10$-orbit $Y_{10}$ of a transitive, imprimitive
subgroup of $G$ (isomorphic to the first subgroup from example
\ref{S_5,md}).
\begin{example}
$$
\begin{array}{||ccccc|ccccc||c||}
\hline
12 & 23 & 34 & 45 & 15 & 14 & 24 & 25 & 35 & 13 & \\
\hline
\hline
1 & 5 & 8 & 0 & 4 & 3 & 6 & 7 & 9 & 2 & e \\
5 & 8 & 0 & 4 & 1 & 7 & 9 & 2 & 3 & 6 & (12345) \\
8 & 0 & 4 & 1 & 5 & 2 & 3 & 6 & 7 & 9 & (13524) \\
0 & 4 & 1 & 5 & 8 & 6 & 7 & 9 & 2 & 3 & (14253) \\
4 & 1 & 5 & 8 & 0 & 9 & 2 & 3 & 6 & 7 & (15432) \\
\hline
3 & 6 & 7 & 9 & 2 & 4 & 0 & 8 & 5 & 1 & (2453) \\
6 & 7 & 9 & 2 & 3 & 8 & 5 & 1 & 4 & 0 & (1435) \\
7 & 9 & 2 & 3 & 6 & 1 & 4 & 0 & 8 & 5 & (1254) \\
9 & 2 & 3 & 6 & 7 & 0 & 8 & 5 & 1 & 4 & (1523) \\
2 & 3 & 6 & 7 & 9 & 5 & 1 & 4 & 0 & 8 & (1342) \\
\hline
4 & 0 & 8 & 5 & 1 & 2 & 9 & 7 & 6 & 3 & (25)(34) \\
0 & 8 & 5 & 1 & 4 & 7 & 6 & 3 & 2 & 9 & (15)(24) \\
8 & 5 & 1 & 4 & 0 & 3 & 2 & 9 & 7 & 6 & (23)(14) \\
5 & 1 & 4 & 0 & 8 & 9 & 7 & 6 & 3 & 2 & (45)(13) \\
1 & 4 & 0 & 8 & 5 & 6 & 3 & 2 & 9 & 7 & (12)(35) \\
\hline
2 & 9 & 7 & 6 & 3 & 1 & 5 & 8 & 0 & 4 & (2354) \\
9 & 7 & 6 & 3 & 2 & 8 & 0 & 4 & 1 & 5 & (1325) \\
7 & 6 & 3 & 2 & 9 & 4 & 1 & 5 & 8 & 0 & (1534) \\
6 & 3 & 2 & 9 & 7 & 5 & 8 & 0 & 4 & 1 & (1243) \\
3 & 2 & 9 & 7 & 6 & 0 & 4 & 1 & 5 & 8 & (1452) \\
\hline
\end{array}
$$
\end{example}
and the following is a reassembly of this example:
\begin{example}
$$
\begin{array}{||cc|cc|cc|cc|cc||c||}
\hline
12 & 15 & 14 & 13 & 23 & 35 & 34 & 45 & 24 & 25 & \\
\hline
\hline
1 & 4 & 3 & 2 & 5 & 9 & 8 & 0 & 6 & 7 & e \\
4 & 1 & 2 & 3 & 0 & 6 & 8 & 5 & 9 & 7 & (25)(34) \\
\hline
5 & 1 & 7 & 6 & 8 & 3 & 0 & 4 & 9 & 2 & (12345) \\
1 & 5 & 6 & 7 & 4 & 9 & 0 & 8 & 3 & 2 & (12)(35) \\
\hline
8 & 5 & 2 & 9 & 0 & 7 & 4 & 1 & 3 & 6 & (13524) \\
5 & 8 & 9 & 2 & 1 & 3 & 4 & 0 & 7 & 6 & (45)(13) \\
\hline
0 & 8 & 6 & 3 & 4 & 2 & 1 & 5 & 7 & 9 & (14253) \\
8 & 0 & 3 & 6 & 5 & 7 & 1 & 4 & 2 & 9 & (23)(14) \\
\hline
4 & 0 & 9 & 7 & 1 & 6 & 5 & 8 & 2 & 3 & (15432) \\
0 & 4 & 7 & 9 & 8 & 2 & 5 & 1 & 6 & 3 & (15)(24) \\
\hline
3 & 2 & 4 & 1 & 6 & 5 & 7 & 9 & 0 & 8 & (2453) \\
2 & 3 & 1 & 4 & 9 & 0 & 7 & 6 & 5 & 8 & (2354) \\
\hline
6 & 3 & 8 & 0 & 7 & 4 & 9 & 2 & 5 & 1 & (1435) \\
3 & 6 & 0 & 8 & 2 & 5 & 9 & 7 & 4 & 1 & (1452) \\
\hline
7 & 6 & 1 & 5 & 9 & 8 & 2 & 3 & 4 & 0 & (1254) \\
6 & 7 & 5 & 1 & 3 & 4 & 2 & 9 & 8 & 0 & (1243) \\
\hline
9 & 7 & 0 & 4 & 2 & 1 & 3 & 6 & 8 & 5 & (1523) \\
7 & 9 & 4 & 0 & 6 & 8 & 3 & 2 & 1 & 5 & (1534) \\
\hline
2 & 9 & 5 & 8 & 3 & 0 & 6 & 7 & 1 & 4 & (1342) \\
9 & 2 & 8 & 5 & 7 & 1 & 6 & 3 & 0 & 4 & (1325) \\
\hline
\end{array}
$$
\end{example}
It can be seen that there exists no partition of the $2$-projection
$\hat{p}(\langle 14\rangle)Y_{10}$ into classes that do not intersect on
$V$, but this covering of $V$ with $2$-tuples can be partitioned into two
non-intersecting coverings, $\{14,15,58,40,08\}$ and $\{23,36,29,79,67\}$,
that form elementary coherent $2$-orbits on the corresponding two
$5$-element subsets of $V$. This case is similar to the one we saw in
example \ref{S5*S2}.
The following example of a $(2,10)$-orbit
$$
\begin{array}{||cc||cc|cc|cc|cc||}
\hline 24 & 35 &
12 & 14 & 13 & 15 & 23 & 25 & 45 & 34 \\
\hline
\hline
6 & 9 & 1 & 3 & 2 & 4 & 5 & 7 & 0 & 8 \\
9 & 6 & 3 & 1 & 4 & 2 & 7 & 5 & 8 & 0 \\
\hline
\end{array}
$$
contains $S_{10}$-isomorphic $2$-rcycles
$\{\langle 69\rangle,\langle 96\rangle\}$ and $\{\langle 13\rangle,\langle
31\rangle\}$, but the corresponding two projections of $X_{10}$ are not
$S_{10}$-isomorphic, because the $2$-orbit $X_2(\langle 69\rangle)$
consists of $30$ pairs and the $2$-orbit $X_2(\langle 13\rangle)$ consists
of $60$ pairs. This is a transitive realization of the property of the
example from proposition \ref{Sn-iso-intr}.
\begin{remark}
One can see that the given properties of $X_n$ cannot be obtained within
group theory, because they are properties of the internal structure of
$X_n$. Of course, the internal structure of $X_n$ characterizes the group,
and therefore its properties are also group properties. But these
properties of a group lie outside the group algebra that characterizes
$X_n$ as a whole.
\end{remark}
\section{The proof of the polycirculant conjecture}
\subsection{The proof of lemma \ref{tr.impr<tr.pr}}
The proof of lemma \ref{tr.impr<tr.pr} follows from
\begin{lemma}\label{qGDn}
Let $q$ be the greatest prime divisor of $n$; then $G$ contains a
transitive, imprimitive subgroup with incoherent $q$-blocks.
\end{lemma}
{\bfseries Proof:\quad}
Suppose the statement is false. Then every $q$-rorbit $X_q(I_q)$
contains a coherent $q$-subrorbit $Y_q(I_q)$ on some automorphic
$k$-subspace $I_k$, where $k$ is a divisor of $n$ greater than $q$,
$I_q\subset I_k$ and $X_k(I_k)$ is an incoherent $k$-rorbit. Let $p$ be a
prime divisor of $k$; then it follows that $p<q$ and hence there exists an
automorphic $p$-subspace $I_p\subset I_q$. Therefore there exists a
coherent $p$-subrorbit $Y_p(I_p)$ of a $p$-rorbit $X_p(I_p)$ on the
$q$-subspace $I_q$. Since $q|n$, the classes of the partition
$L_p=GY_p(I_p)$ of $X_p(I_p)$ do not intersect on $V$, and hence $X_q(I_q)$
cannot contain a coherent $q$-subrorbit $Y_q(I_q)$ on the $k$-subspace
$I_k$. Contradiction. $\Box$
\subsection{The proof of theorem \ref{2cl.impr->reg}}
Let $G$ be a transitive group; then it contains a transitive, imprimitive
subgroup, so we can assume that $G$ is imprimitive. Then for some prime
divisor $p$ of $n$ there exists a partition $I=\{I_p^i:\, i\in [1,l]\}$ of
$V$ into $G$-isomorphic, automorphic $p$-subspaces. Let $X_n$ be an
$n$-orbit of $G$, $X_p=\hat{p}(I_p^i)X_n$ and $X_{2p}^{ij}=\hat{p}(I_p^i\circ
I_p^j)X_n=\uplus X_p\circ \uplus X_p$. According to corollary
\ref{mMmM->MM}, $X_{2p}^{ij}=\cup_t (X_p\stackrel{\phi_t(i,j)}{\circ}
X_p)\equiv\cup L_p(i,j)$ for some maps $\{\phi_t(i,j)\}$. It follows that
the action of any cycle of length $p$ that is generated by some
$p$-rcycle from $X_p$ on the partitions $L_p(i,j)$ generates cycles of
length $p$ that are again connected with $p$-rcycles from $X_p$. Hence $G$
contains a regular permutation of order $p$.
\section{Some applications of $k$-orbit theory}
\subsection{Solvability of groups of odd order}
We shall now show that $k$-orbit theory gives a simple proof of the
Feit--Thompson theorem on the solvability of groups of odd order
\cite{G}.
In this section we do not distinguish between a finite group and its
$ld$-representation. We also assume that a finite group is not a direct
product.
\begin{lemma}\label{Aut(incoh)-not-simple}
Let $X_k\in Orb_k(G)$ be an incoherent $k$-orbit, then $G$ is not a
simple group.
\end{lemma}
{\bfseries Proof:\quad}
Let $Y_k\subset X_k$ be a $k$-block and $L_k=Aut(X_k)Y_k$; then it is
evident that a concatenation of classes from $L_k$ is an $n$-orbit of a
normal subgroup $H\lhd Aut(X_k)=Aut(L_k)$ and that every transitive
subgroup $G<Aut(L_k)$ has non-trivial intersection with $H$. $\Box$
\begin{corollary}\label{impr-not.simple}
Let $G$ be a (non-cyclic) simple group; then it is (non-trivially) primitive.
\end{corollary}
\begin{theorem}\label{prim-invol.}
Any primitive group $G$ contains an involution.
\end{theorem}
{\bfseries Proof:\quad}
Let $n$ be odd; then there exist odd numbers $k<l\leq n$ such that
$(k,l)=1$ and there exist automorphic subspaces $I_k\subset I_l$. If
$r=[l/k]$ is odd, then $m=l-rk$ is even and there exists an automorphic
subspace $I_m$ (lemma \ref{k-rorbit}). If $r$ is even, then $m$ is odd
and, because of the primitivity of $G$, we can choose $l:=k$ and $k:=m$ if
$(k,m)=1$, or else $l:=l$ and $k:=m$. $\Box$
\begin{corollary}\label{odd-solv}
Let $G$ be a (non-cyclic) simple group, then it contains an involution.
\end{corollary}
\begin{corollary}\label{odd.od-impr}
Let $G$ be a group of odd order; then it is imprimitive and hence
solvable.
\end{corollary}
\subsection{A full invariant of a finite group}
Here we discuss the problem of a full invariant of a finite group
$F$, assuming that $F$ is not a direct product.
First we note that, if the ld-representation $G$ of $F$ is unique
up to similarity, then a full invariant of $F$ is defined by a full
invariant of $G$. We then have two cases: $G$ is primitive and $G$ is
imprimitive.
\subsubsection{Let $G$ be primitive}
\begin{proposition}\label{k=n-l[n/l]}
Let $\neg(l|n)$, $I_l$ be an automorphic subspace and $k=n-l[n/l]$, then
\begin{enumerate}
\item\label{p^m<n}
$k$ divides $|G|$;
\item
there exists an automorphic $k$-subspace $I_k$;
\item
the subgroup $Stab(I_k)$ has a non-trivial normalizer in $G$.
\end{enumerate}
\end{proposition}
{\bfseries Proof:\quad}
The statements follow directly from lemmas \ref{k-rorbit} and
\ref{fix-tuple}. $\Box$
\begin{proposition}\label{V-coherent}
Let $\neg(k|n)$ and $X_k$ be a $k$-rorbit, then $X_k$ contains an
elementary $V$-coherent $k$-suborbit.
\end{proposition}
{\bfseries Proof:\quad}
The statement follows from the definition of an elementary $V$-coherent
$k$-orbit. $\Box$
So we see that $|G|$ and $n$ are highly dependent, and in general the
degree $n$ allows one to decide whether there can exist a group $G$ of
order $\nu$. Also, an elementary $U$-coherent $k$-suborbit on every
possible automorphic subset $U\subset V$ is unique up to similarity.
All these facts suggest the hypothesis that $|G|$ and $n$ could be a
full invariant of $G$ in the considered case.
\subsubsection{Let $G$ be imprimitive}
Let $k$ be a maximal automorphic divisor of $n$; then there exists an
incoherent $k$-rorbit $X_k$ of $G$. Let $Y_k\subset X_k$ be a $k$-block
and $L_k=Aut(X_k)Y_k=Aut(L_k)Y_k$; then $G<Aut(L_k)$, $L_k=GY_k$ and $Y_k$
is a $k$-orbit of a maximal normal subgroup $H\lhd G$, by
lemma \ref{Aut(incoh)-not-simple}.
Let us assume that we know $Y_k$, $|G|$ and $n$. This gives us the
following information: $|L_k|=n/k$, $|X_k|=|Y_k||L_k|$, $|H|=|G|/|L_k|$ and
$|Stab(I_k)|=|G|/|X_k|=|H|/|Y_k|$. In addition we know that the $n$-orbit
$X_n$ of $G$ is a $|L_k|\times |L_k|$ matrix $M$ whose elements are
multi-classes of $L_k$ and which gives a regular representation of the
factor group $\Phi=G/H$. Since $H$ is a maximal normal subgroup,
$\Phi$ is a simple group.
If $\Phi$ is not a cyclic group, then it is not an ld-representation. But
from this it follows that $G$ is not an ld-representation either. This
contradiction shows that $\Phi$ is always a cyclic group.
In order to give a full description of the group $G$ we have to find
the elements of the matrix $M$. Let $L_k=\{Y_k^i:\,i\in [1,p]\}$ and
$\Phi=gr((1\ldots p))$; then we know that $M_{ij}=Y_k^r$, where
$r=i+j-1 \pmod{p}$. One possible construction of $M$ is obtained as
follows. Every element $M_{ij}$ is obtained from the element $M_{1j}$ by a
permutation of columns, and every element $M_{1j}$ is obtained from the
element $M_{11}$ by a permutation of lines. The columns of the element
$M_{ij}$ are permuted relative to the columns of the element $M_{1j}$
by automorphisms of $M_{1j}$ that are similar to automorphisms of
$M_{11}$. The lines of the element $M_{1j}$ are permuted relative to the
lines of the element $M_{11}$ subject to maintaining the automorphism
property of $M_{11}$.
So, to obtain a full number invariant in this imprimitive case it
remains to find the number invariants that allow one to calculate the
corresponding $p^2-1$ permutations.
Now we consider the properties of $Y_k$.
\begin{lemma}\label{Y_k-prim}
$Aut(Y_k)$ is imprimitive.
\end{lemma}
{\bfseries Proof:\quad}
Suppose $Aut(Y_k)$ is primitive; then $Y_k$ contains an elementary
coherent $q$-subrorbit for some prime $q<k$. From this it follows that the
permutations generating the elements of the matrix $M$ are trivial and
hence $G$ is a direct product. Contradiction. $\Box$
\begin{corollary}\label{Y_k-reg}
$Aut(Y_k)$ is regular.
\end{corollary}
\begin{corollary}\label{p-group}
$G$ is a $p$-group.
\end{corollary}
So we can formulate
\begin{hypothesis}\label{full.inv}
A full invariant of a non-$p$-group is defined by $|G|$ and $n$, and a
full invariant of a $p$-group of order $p^m$ is defined by at most $mp^2$
permutations, or by corresponding numbers that define these permutations.
\end{hypothesis}
\subsection{The polynomial algorithm of graph isomorphism testing}
The graph isomorphism problem has a polynomial solution if the problem of
separating the orbits of the automorphism group of a graph has a
polynomial solution. So we want to find the partition
$O_2=Aut(X_2)X_2\subset Orb_2(Aut(X_2))$ of a $2$-set $X_2\subset V^{(2)}$
polynomially in $n$.
Let $X_k\subset V^{(k)}$ be a $k$-set. We say that $X_k$ is
\emph{transitive} if all $1$-projections of $X_k$ are equal. We say that
$X_k$ is \emph{regular} if it satisfies the two conditions:
\begin{enumerate}
\item
Every $l$-multiprojection of $X_k$ for $l\in [1,k]$ is homogeneous.
\item
All $l$-projections of $X_k$, containing the same $l$-tuple, are equal.
\end{enumerate}
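For the special case $k=2$ the transitivity condition admits a very direct check. The sketch below encodes one reading of it for a $2$-set given as a set of ordered pairs, where the two $1$-projections are the sets of first and of second coordinates (the function name is ours, and this is only the transitivity test, not the full regularity test):

```python
# One reading of the "transitive" condition for a 2-set X_2 of ordered
# pairs: both 1-projections (first and second coordinates) coincide.
def is_transitive_2set(X2):
    first = {a for a, b in X2}
    second = {b for a, b in X2}
    return first == second

# The 2-orbit {14,25,36,41,52,63} from earlier in the paper is transitive:
assert is_transitive_2set({(1, 4), (2, 5), (3, 6), (4, 1), (5, 2), (6, 3)})
# A 2-set whose projections differ is not:
assert not is_transitive_2set({(1, 2), (1, 3)})
```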
\begin{lemma}\label{X_2=X_1}
Let $X_2$ be a regular $2$-set and $X_1^1,X_1^2$ be its $1$-projections.
Let $|X_2|=|X_1^1|$. If $X_1^1\neq X_1^2$, then $X_2$ is automorphic. If
$X_1^1=X_1^2$, then the partition $O_2=Aut(X_2)X_2$ is detected directly.
\end{lemma}
So the problem remains if $|X_2|/|X_1^1|>1$ and $|X_2|/|X_1^2|>1$. Let
$X_2\subset V^{(2)}$ be a regular $2$-set and $\cup Co(X_2)=V$; then
$Aut(X_2)$ is a (generally intransitive) group of degree $n$ and all prime
cycles from $Aut(X_2)$ have length not greater than $n$. Let $X_2$ be
automorphic and $Y_2\subset X_2$ be a $2$-orbit of a subgroup
$A<Aut(X_2)$; then $R_2=AX_2$ is a partition of $X_2$ into $2$-orbits of
$A$ and hence the classes of $R_2$ are regular $2$-sets.
The fundamental role in the polynomial solution of the considered problem
is played by theorem \ref{rr}. One can see that the cyclic structure
described for transitive $(2k)$-orbits exists also on intransitive
$(2k)$-orbits, but the direction (left, right, left, right,\,\ldots) must
be changed to (left, right, right, left, left,\,\ldots). We can also note
that, in general, if we have a regular $2$-set $X_2\subset V^{(2)}$, then
we also have a partition $P_2$ of $V^{(2)}$ into regular $2$-sets invariant
relative to $Aut(X_2)$. Thus, given an intransitive regular $2$-set $X_2$,
we can also find transitive, regular, $Aut(X_2)$-invariant $2$-sets of
$P_2$.
\begin{algorithm}\label{R_2,reg}
Let now $X_2$ be an arbitrary regular $2$-set whose automorphism we try to
find; then we follow the next steps:
\begin{enumerate}
\item
Find an automorphic $2$-subset $Y_2\subset X_2$ that is expected to be a
$2$-suborbit of $Aut(X_2)$.
\item
Construct a partition $R_2^i$, $i=1,2,\ldots$, iterating the process that
follows from theorem \ref{rr}. At each iteration verify the classes of
$R_2^i$ for regularity and subpartition them if they are not regular.
\end{enumerate}
\end{algorithm}
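The iterative "partition, test regularity, subpartition" loop is reminiscent of classical colour refinement. As a point of comparison only, here is the standard $1$-dimensional Weisfeiler--Leman refinement, not the $k$-orbit procedure itself; it likewise refines a partition of the vertices until it stabilizes:

```python
# Classical colour refinement (1-dimensional Weisfeiler-Leman) on a graph
# given as an adjacency dict.  Shown only as an analogue of the iterative
# partitioning step in the algorithm above.
def colour_refine(adj):
    colour = {v: 0 for v in adj}                 # start with one class
    while True:
        signature = {v: (colour[v], tuple(sorted(colour[u] for u in adj[v])))
                     for v in adj}
        # relabel the distinct signatures with small integers
        relabel = {s: i for i, s in enumerate(sorted(set(signature.values())))}
        new = {v: relabel[signature[v]] for v in adj}
        if new == colour:                        # stable partition reached
            return colour
        colour = new

# A path on 4 vertices: endpoints and inner vertices get different classes.
path = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
c = colour_refine(path)
assert c[1] == c[4] and c[2] == c[3] and c[1] != c[2]
```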
This process leads to an automorphism, possibly trivial.
Finding $Y_2$ is not difficult. At first one can take a subset
$Y_2(u)$ of $X_2$ whose $1$-projection $\hat{p}(I_1^1)Y_2$ is an element
$u$ of $V$. Thus we can decide whether $Stab(u)$ is trivial. If it is
trivial, then it follows that $X_2$ is incoherent and can be partitioned
into coherent $2$-subsets.
If $Aut(X_2)$ is trivial, then the given algorithm detects this triviality
in at most $n$ steps, assuming that in each step only one element of
$V$ is separated.
Using lemma \ref{AV*BV,AV+BV} we can, given automorphic partitions $R_k'$
and $R_k''$ of $X_k$, obtain a new, larger partition $R_k=R_k'\sqcup
R_k''$.
To simplify the process, one can choose for partitioning at the $i$-th
iteration the most suitable $2$-set $X_2^i$ from the partition $P_2^i$ of
$V^{(2)}$ into regular $2$-sets invariant to $Aut(X_2)$; by partitioning
$X_2^i$, the whole partition $P_2^i$ can be further partitioned into
regular classes $P_2^{i+1}$ and used in the next iteration.
\section*{Conclusion}
This work was initiated by the polycirculant conjecture, described by P.
Cameron in his text \cite{Cameron} and presented on the site
(\url{http://www.maths.qmw.ac.uk/~pjc/homepage.html}).
The use of $k$-orbits for the polynomial solution of the graph isomorphism
problem was begun by the author in 1984. The generalization of $k$-orbits,
regular $k$-sets, was used for describing the structure of strongly
regular graphs and their generalization to dimensions greater than two.
This approach revealed the difference between the structure of strongly
symmetrical but not automorphic partitions of $V^{(k)}$ and automorphic
partitions of $V^{(k)}$.
From this point of view the polycirculant conjecture seemed simple enough.
Nevertheless, finding a correct proof was very difficult, and only the
analysis of two examples of permutation groups (one elusive group of order
72 and degree 12, and the automorphism group of the Petersen graph),
presented to the author by P. Cameron, led to the discovery of the
specific properties of $k$-orbits, not detectable by group theory, that
brought a proof of the conjecture.
In 1997 the author understood the connection between the graph isomorphism
problem and the problem of a full invariant of a finite group, and made
some attempts to obtain this full invariant by constructing appropriate
group representations. This work gave a better understanding of the
problem but did not bring the expected result. In constructing the
$k$-orbit theory it was of interest to consider a finite group in this new
representation, and this time the result was obtained.
Also, the specific symmetry properties of $k$-orbits, which are not
visible in most other algebraic theories, made possible a simple
polynomial solution of the graph isomorphism problem.
\end{document} |
\begin{document}
\title{Particle Gibbs Split-Merge Sampling for Bayesian Inference in Mixture Models}
\author{
\name Alexandre Bouchard-C\^ot\'e \email [email protected]\\
\addr Department of Statistics \\
University of British Columbia \\
Corresponding address: 3182 Earth Sciences Building, 2207 Main Mall, Vancouver, BC, Canada V6T 1Z4
\AND
\name Arnaud Doucet \email [email protected]\\
\addr Department of Statistics \\
University of Oxford, United Kingdom
\AND
\name Andrew Roth \email [email protected] \\
\addr Department of Statistics and Ludwig Institute for Cancer Research\\
University of Oxford, United Kingdom
}
\editor{Zhihua Zhang}
\maketitle
\begin{abstract}
This paper presents an original Markov chain Monte Carlo method to sample from the posterior distribution of conjugate mixture models.
This algorithm relies on a flexible split-merge procedure built using the particle Gibbs sampler introduced in \cite{andrieudoucet2009,andrieu2010}.
The resulting so-called Particle Gibbs Split-Merge sampler does not require the computation of a complex acceptance ratio and can be implemented using existing sequential Monte Carlo libraries.
We investigate its performance experimentally on synthetic problems as well as on geolocation data.
Our results show that for a given computational budget, the Particle Gibbs Split-Merge sampler empirically outperforms existing split-merge methods.
The code and instructions to reproduce the experiments are available at \url{https://github.com/aroth85/pgsm}.
\emph{Keywords}: Dirichlet process mixture models; Gibbs sampler; Particle Gibbs sampler; Sequential Monte Carlo.
\end{abstract}
\section{Introduction}
Mixture models are very commonly used to perform clustering and density estimation, and they have consequently found numerous applications in a wide range of scientific fields.
Since the introduction of Markov chain Monte Carlo (MCMC) methods in statistics over twenty-five years ago, the Bayesian approach to mixture models has become very popular \citep{marin2005,richardsongreen1997}.
However, sampling from the posterior distribution of mixture models remains a challenging computational problem.
When conjugate priors are used, it is possible to analytically integrate out the mixing proportions and the parameters of the components.
This is the scenario we will focus on in this article.
In this case, we aim to sample from the posterior distribution of the latent indicator variables associated with the observations, each latent variable indicating which component of the mixture generates a given data point.
A simple Gibbs sampler can be used which updates the latent indicator variables one-at-a-time \citep{mceachern1994,escobarWest1995} but this algorithm is inefficient when the number of observed data points $\T$ is large.
First, the simulated Markov chain would have to visit a long chain of lower probability configurations in order to split and merge large clusters.
As a result, it is prone to getting trapped in severe local modes.
Second, it is non-trivial to parallelize due to the inherently sequential nature of the updates.
The limitations of the simple Gibbs sampler have motivated a rich literature on MCMC algorithms for Bayesian mixture models which partially address these issues; see, e.g., \cite{Ishwaran2001,Liang2007,walker2007sampling,kalli2011slice}. In particular, procedures proposing to split and merge existing clusters in one single step have
become prominent as they generally perform better than the simple Gibbs sampler \citep{richardsongreen1997,neal2000,dahl2005,jain2004}.
While designing an efficient merge proposal is simple, designing an efficient split proposal is a more complicated task.
When the mixing proportions and parameters are not integrated out, split-merge moves were first proposed in \cite{richardsongreen1997}.
The proposals were built to ensure the conservation of some moments and accepted/rejected using Metropolis-Hastings steps.
However, it is difficult to design efficient proposals in this context.
When the mixing proportions and parameters are integrated out, split-merge moves on the latent indicator variables were first proposed in \cite{jain2004}.
Assume one is interested in splitting a block/cluster of points $\b\subset\{1,\dots,\T\}$ into two blocks.
We select two points in $\b$, which will be in distinct blocks after the split.
There are $2^{|\b|-2}$ possible ways to split the original block $\b$, hence any efficient proposal needs to be informed by the observations corresponding to the indices in $\b$.
In \cite{jain2004}, one selects two points at random which are used as anchors.
When the two anchors are in separate clusters, a merging of the two clusters is proposed.
When the two anchors are in the same cluster, a split is proposed as follows: first, the two anchor points seed a pair of new clusters, and second, several restricted Gibbs scans are performed to reallocate the remaining points originally clustered with the anchors to the two new clusters.
Clusters which do not contain the anchors are left unaltered, making this a restricted Gibbs move.
After either a split or a merge is proposed, the Metropolis-Hastings ratio is computed to accept or reject the move.
The number of Gibbs scans in the split move is a free tuning parameter for this sampler. In \cite{dahl2005}, an alternative approach is proposed for split moves.
The restricted Gibbs scans are replaced by a sequential allocation step whereby the anchors define two new clusters and all points which were originally clustered with these points are sequentially allocated to one of the anchor clusters.
These split-merge algorithms have become popular as they provide state-of-the-art performance but they are relatively difficult to implement due to their complex Metropolis-Hastings acceptance ratios.
In the present work, we propose a novel split-merge sampler based on the conditional Sequential Monte Carlo (SMC) algorithm appearing in the Particle Gibbs (PG) sampler \citep{andrieudoucet2009,andrieu2010}, which we call the Particle Gibbs Split Merge (PGSM) sampler.
Most of the complexity inherent to split-merge operators is encapsulated into the well-understood PG sampling procedure \citep{ChopinSingh2012}, and no acceptance ratio needs to be computed.
Moreover, as the PGSM sampler relies on SMC methods, it benefits from advanced simulation methods from the SMC literature, such as adaptation schemes \citep{Lee2011} and methods for parallel and distributed inference \citep{Lee2010GraphicCards,Jun2012Entangled,Lee2014Forest}, as well as from efficient SMC software libraries \citep{Johansen2009smctc,Murray2013}.
The PGSM sampler does not make any topological assumption on the observation space in contrast to the posterior simulation techniques described in \cite{Dahl2003b} and \cite{Liang2007}.
This methodology complements the maximum \emph{a posteriori} inference techniques developed in \cite{Daume2007,Lianming2011}.
There has been previous work on applying sequential importance sampling and SMC methods for posterior simulation of Dirichlet processes and related mixture models.
However, to the best of our knowledge, SMC methods have never been previously used to design split-merge moves.
Indeed, the methods proposed in \cite{MacEACHERN1999,fearnhead2004dp,fearnhead2007dp2,Mansinghka2007,Caron2009Decomposable,carvalho2010} directly apply a single pass SMC algorithm to the entire clustering problem.
Empirical results in \cite{Kantas2015} suggest that such methods may require a number of particles which scales at least quadratically with respect to the number of data points.
The work of \cite{UlkerGC10} uses SMC within the context of the SMC Samplers methodology \citep{delmoral2006}, which makes it closer in spirit to existing MCMC methods.
Our contribution is to provide a principled approach for breaking down the clustering problem into smaller sub-problems more amenable to the use of SMC techniques.
Finally, other lines of work are devoted to parallelization and distribution of MCMC methods for mixture models \citep{chang13dpmm,williamson2013parallel,icml2014c2_gal14,ge2015distributed}.
As alluded to earlier, our method can potentially be parallelized and distributed using existing approaches from the SMC literature \citep{Lee2010GraphicCards,Jun2012Entangled,Lee2014Forest}. As with the other available split-merge procedures, it is also possible to perform different
split-merge moves simultaneously when the prior clustering distribution restricted to the clusters being updated does not depend on the number of clusters in the whole dataset. However, we do not focus on these aspects here.
The rest of this article is organized as follows.
Section~\ref{sec:Bayesianmixturemodels} introduces our notation for the types of Bayesian mixture models that we consider.
Section~\ref{sec:PGSMsampler} details the PGSM sampler. Section~\ref{sec:applications} applies the method to synthetic datasets, as well as real data from a geolocation application.
We conclude with some directions for future work and discussion in Section~\ref{sec:discussion}.
\section{Mixture models and Bayesian inference\label{sec:Bayesianmixturemodels}}\label{sec:mixture}
In this section we first lay out notation and then describe Bayesian mixture models.
We focus on the case where the component base measure is conjugate to the data likelihood, so that the posterior distribution of any clustering can be evaluated analytically up
to a normalizing constant.
\subsection{Notation}\label{sec:conventions}
We use bold letters for (random) vectors, and normal fonts for (random) scalars, sets, and matrices.
For quantities such as an individual observation $\y_i$, or a parameter $\theta$, which can be either scalars or vectors without affecting our methodology, we consider them as scalars without loss of generality.
Given a vector $\bold{x} = (x_1, x_2, \dots, x_n)$, and $i \le j$, we use $\bold{x}_{i:j}$ to denote the sub-vector $\bold{x}_{i:j} = (x_i, x_{i+1}, \dots, x_j)$.
To simplify notation, we do not distinguish random variables from their realization.
We define discrete probability distributions with their probability mass functions, and continuous probability distributions with their density functions with respect to the Lebesgue measure.
A list of symbols is available in the Appendix.
\subsection{Bayesian mixture model}\label{sec:bayesian-mix-subsection}
Consider $\T$ observations $\yVec \coloneqq \left(\y_{1},\ldots,\y_{\T}\right)$.
A mixture model assumes that the observation indices $[\T] \coloneqq \{1, \ldots,\T\}$ are partitioned into subsets.
This partition is called a clustering, $\c \coloneqq \{\b_1,\ldots,\b_\nclust : \b_k \subseteq [\T]\}$ where $\nclust$ denotes the cardinality of the set $\c$ and each block $\b$ in the partition is referred to as a cluster.
Given the clustering $\c$, we define the following likelihood for the data
\begin{equation}\label{eq:likelihoodpartition}
\pi\left(\yVec \vert \c \right) \coloneqq \prod_{\b\in\c} \L(\yVec_\b),
\end{equation}
where $\L(\yVec_\b)$ is the likelihood of the observations in cluster $\b$
\begin{equation}\label{eq:conjugacy}
\L(\yVec_\b) \coloneqq\int \left( \prod_{i\in\b} \L(\y_\i \vert \theta) \right) \H (\,\mathrm{d}\theta ).
\end{equation}
In this expression, $\L(\y_\i \vert \theta)$ is a probability density function parametrized by $\theta$ and $\H(\,\mathrm{d}\theta)$ a prior measure over this parameter.
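For concreteness, the marginal likelihood in Equation~(\ref{eq:conjugacy}) has a closed form in conjugate cases. The following Python sketch (the function name and hyperparameter values are illustrative, not part of our implementation) computes $\log \L(\yVec_\b)$ for a univariate Normal likelihood with known variance and a conjugate Normal prior on the mean, via the chain of one-step-ahead predictive densities:

```python
import math

def log_marginal_normal(ys, mu0=0.0, tau0_sq=1.0, sigma_sq=1.0):
    """log L(y_b) for y_i ~ N(theta, sigma^2), theta ~ N(mu0, tau0^2),
    computed as the sum of one-step-ahead log predictive densities."""
    logp, mu, tau_sq = 0.0, mu0, tau0_sq
    for y in ys:
        pred_var = tau_sq + sigma_sq  # predictive is N(mu, tau^2 + sigma^2)
        logp += -0.5 * (math.log(2 * math.pi * pred_var)
                        + (y - mu) ** 2 / pred_var)
        prec = 1.0 / tau_sq + 1.0 / sigma_sq  # conjugate posterior update
        mu, tau_sq = (mu / tau_sq + y / sigma_sq) / prec, 1.0 / prec
    return logp
```

The result does not depend on the order in which the observations are processed, as expected from the integral form of Equation~(\ref{eq:conjugacy}).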
The clustering $\c$ is unknown and is viewed as a random variable.
Let $\tau(\c)$ denote its prior probability, defined over the space of partitions of $[\T]$ and assumed to factorize as
\begin{equation}
\tau(\c) \propto \tauI(\nclust) \prod_{\b \in \c}
\tauII(|\b|) , \label{eq:priorpartition}
\end{equation}
where $\tauI:{\mathbb N}\rightarrow {\mathbb R}^{+}$ and $\tauII:{\mathbb N}\rightarrow {\mathbb R}^{+}$ are arbitrary functions.
This assumption on the prior clustering distribution is not restrictive and includes several popular priors, such as:
\begin{description}
\item[Dirichlet process prior] \citep{Ferguson1973} with parameter $\concentration>0$: $\tauI(j) \propto\concentration^{j}$ and $\tauII(j) \propto (j-1)!$.
\item[Pitman-Yor process prior] \citep{Pitman1997,Ishwaran2003} with parameters $\concentration, \discount$ ($\concentration>-\discount,$ $0\leq \discount < 1$): $\tauI(j) \propto \prod_{j'=1}^{j} \left\{\concentration+\discount \left(j'-1\right) \right\}$ and $\tauII(j) \propto\Gamma(j-\discount)$, where $\Gamma(\cdot)$ is the Gamma function.
\item[Finite Dirichlet mixture] with parameter $\finiteconcentration>0$ and $\maxnclust$ components and symmetric concentration $\left(\finiteconcentration,\ldots,\finiteconcentration\right)$: $\tauI(j) \propto {\mathbf 1}\left[j \le \maxnclust \right]$ and $\tauII(j) \propto \Gamma\left(j+\finiteconcentration\right)$.
\end{description}
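Under the assumed factorization, $\log\tau(\c)$ is a sum of per-block terms plus a term in the number of blocks. A minimal Python sketch for the Dirichlet process and Pitman-Yor cases (function names are illustrative; we assume $\concentration>0$ so that every logarithm is defined):

```python
import math

def dp_log_tau(block_sizes, alpha):
    """Unnormalized log tau(c) for the Dirichlet process prior:
    tau_I(j) prop. to alpha^j, tau_II(j) prop. to (j-1)!  (alpha > 0)."""
    j = len(block_sizes)
    return j * math.log(alpha) + sum(math.lgamma(n) for n in block_sizes)

def py_log_tau(block_sizes, alpha, d):
    """Unnormalized log tau(c) for the Pitman-Yor prior, 0 <= d < 1
    (alpha > 0 assumed so that every factor is positive)."""
    j = len(block_sizes)
    log_tau1 = sum(math.log(alpha + d * (jp - 1)) for jp in range(1, j + 1))
    return log_tau1 + sum(math.lgamma(n - d) for n in block_sizes)
```

Setting the discount $\discount = 0$ recovers the Dirichlet process case.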
The likelihood (\ref{eq:likelihoodpartition})-(\ref{eq:conjugacy}) and prior (\ref{eq:priorpartition}) define the following target posterior distribution
\begin{equation}\label{eq:target}
\pi(\c) \coloneqq \pi(\c\mid\yVec) \propto \tau(\c) \prod_{\b\in\c} \L(\yVec_\b).
\end{equation}
Since we view the observations as fixed, we drop the dependency on $\yVec$ from
the notation throughout the paper.
We detail in the following sections an original approach to sample from this posterior distribution.
\section{Methodology\label{sec:PGSMsampler}}
\begin{figure}
\caption{Illustrative example of the notation used in Algorithm~\ref{alg:setup_split_merge}.}
\label{fig:example}
\end{figure}
We organize the description of our method into two main parts.
First, we define a generic construction for decomposing the problem of sampling from the posterior (Equation~(\ref{eq:target})) with arbitrary numbers of clusters into \emph{split-merge} sub-problems.
Second, we show how the PG methodology can be used to address these sub-problems.
\subsection{Decomposing the clustering problem into split-merge subproblems}\label{sec:decomposing}
Algorithm~\ref{alg:setup_split_merge} allows us to break down the problem of sampling from the posterior into split-merge subproblems. We refer the reader to Figure~\ref{fig:example} for an illustrative example of the notation used throughout this description.
\begin{algorithm}
\caption{Split-merge decomposition}
\begin{algorithmic}[1]
\Function{SplitMerge}{$\c, \h(\s), \K_{\c,\s}(\cBar'|\cBar)$}
\State $\s \sim \h(\cdot)$ \Comment{$\s=\{i_{1}, i_{2}\}$}
\State $\cBar \gets \{\b\in\c : \b \cap \s \neq \emptyset\}$ \Comment{Clustering restricted to the anchors}
\State $\sBar \gets \bigcup_{\b \in \cBar} \b$ \Comment{Closure of the anchors with respect to the clustering $\cBar$}
\State $\cBar' \sim \K_{\c,\s}(\cdot|\cBar)$
\State $\c' \gets \cBar' \cup (\c \backslash \cBar)$ \label{step:create-final-cluster}
\State \textbf{return} $\c'$
\EndFunction
\end{algorithmic}
\label{alg:setup_split_merge}
\end{algorithm}
The algorithm requires three inputs:
\begin{enumerate}
\item $\c$: the current clustering,
\item $\h(\s)$: a distribution for proposing an unordered pair of \emph{anchors} $\s=\{i_{1}, i_{2}\} \subset [\T]$,
\item $\K_{\c,\s}(\cBar'|\cBar)$: a Markov transition kernel over the space of partitions of $\sBar$.
This kernel is assumed to be invariant with respect to the following target distribution:
\begin{eqnarray}
\piBar_{\c,\s}(\cBar') &\propto& \tauIBar(|\cBar'|) \left( \prod_{\b\in\cBar'} \tauII(|\b|) \L(\yVec_\b) {\mathbf 1}\left[\b \cap \s \ne \emptyset \right] \right), \label{eq:piBar} \\
\tauIBar(j) &\coloneqq & \tauI(j + \nclust - |\cBar|). \nonumber
\end{eqnarray}
\end{enumerate}
In the following, we drop the subscripts from the kernel $\K$ and target $\piBar$ for simplicity.
The distribution $\piBar$ has a form similar to the posterior distribution defined in Equation~(\ref{eq:target}) with two modifications.
First, $\tauI(\nclust)$ is replaced by $\tauIBar(|\cBar'|) $.
Second, the support of the distribution is restricted so that each block in $\cBar'$ must contain at least one anchor point.
This also implicitly enforces the constraint that $|\cBar'| \le |\s| = 2$.
Algorithm \ref{alg:setup_split_merge} returns an updated clustering where only the allocation of points in $\sBar$ has changed; i.e., the updated clustering $\c'$ can only differ from $\c$ at points which were initially clustered with the anchor points.
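Algorithm~\ref{alg:setup_split_merge} itself is a thin wrapper around the kernel $\K$. A minimal Python sketch, with the clustering represented as a list of frozensets and the kernel passed in as a black box (all names are illustrative):

```python
def split_merge_step(c, sample_anchors, kernel):
    """One split-merge update. c: clustering as a list of frozensets;
    sample_anchors() draws an unordered pair of indices from h;
    kernel(s, s_bar, c_bar, k) is assumed to be an invariant sampler for
    the restricted target pi-bar over partitions of s_bar (k is the
    current total number of clusters, needed to evaluate tau-bar-I)."""
    s = sample_anchors()
    c_bar = [b for b in c if b & s]        # blocks containing an anchor
    s_bar = frozenset().union(*c_bar)      # closure of the anchors
    untouched = [b for b in c if not (b & s)]
    c_bar_new = kernel(s, s_bar, c_bar, len(c))
    return untouched + list(c_bar_new)
```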
The anchor proposal distribution $\h$ obviously impacts the performance of this procedure.
We empirically compare the performance of three anchor proposal distributions $\h$ in Section~\ref{sec:artificial}.
This scheme has the following property:
\begin{proposition}\label{prop:split-merge-correctness}
If $\c \sim \pi$, where $\pi$ is given by Equation~(\ref{eq:target}), then the output of Algorithm~\ref{alg:setup_split_merge}, $\c'$, satisfies $\c' \sim \pi$.
That is, the Markov kernel $\K(\c'|\c)$ induced by Algorithm~\ref{alg:setup_split_merge} is $\pi$-invariant.
\end{proposition}
\subsection{Overview of the particle Gibbs algorithm}\label{sec:pgsm-overview}
Ideally, we would like to sample independently from $\piBar$ in Algorithm \ref{alg:setup_split_merge}, that is, we would like to have $\K(\cBar'|\cBar)=\piBar(\cBar')$, but this is too computationally expensive if $|\sBar|$ is large. Our primary contribution is an original way to address this issue using SMC-based methods.
In principle, it would be possible to use SMC methods to obtain a sample approximately distributed according to $\piBar$ \citep{fearnhead2004dp}. However, if we were to use this sample within Algorithm~\ref{alg:setup_split_merge}, the resulting invariant distribution would not be $\pi$. For this reason, we consider Particle MCMC\ (PMCMC) methods \citep{andrieudoucet2009,andrieu2010}.
PMCMC methods allow us to use SMC\ ideas in a principled way within MCMC\ schemes.
We will focus here on the PG sampler and show how one can use this methodology to obtain an efficient MCMC\ kernel $\K$ targeting the distribution $\piBar$ given in Equation~(\ref{eq:piBar}). The outcome of the PG sampling steps will be either to cluster all the points in $\sBar$ into one block or to break $\sBar$ into two clusters, with the restriction that each of the two blocks should contain one anchor.
Interestingly, the form of the PG\ algorithm is the same whether the two anchors were in the same cluster or in different clusters before its execution.
This contrasts with previous split-merge algorithms such as \cite{jain2004}, which require a different treatment for split and merge moves.
To sample from $\piBar$, PG breaks the sampling of $\cBar'$ into a sequence of $\n \coloneqq |\sBar|$ simpler sampling problems.
In this scenario, contrary to most applications of PG, there is no intrinsic time ordering of the observations.
We randomize the order in which the points are included by introducing, conditionally on $\s$ and $\sBar$, a random permutation $\sigmaVec \coloneqq (\sigma_1, \dots, \sigma_\n)$.
This permutation is sampled using Algorithm~\ref{alg:sample_permmutation}.
\begin{algorithm}
\caption{Sampling the permutation of $\sBar$}
\begin{algorithmic}[1]
\Function{SamplePermutation}{$\s, \sBar$}
\State $\sigma_1 \sim \mbox{Uniform}(\s)$
\State $\sigma_2 \gets \s\backslash\{\sigma_1\}$
\State $(\sigma_3, \sigma_4, \dots, \sigma_\n) \gets \mbox{UniformPermutation}(\sBar \backslash \s)$
\State \textbf{return} $\sigmaVec$
\EndFunction
\end{algorithmic}
\label{alg:sample_permmutation}
\end{algorithm}
In other words, $\sigmaVec$ is uniform over the permutations of the observation indices in $\sBar$ such that the members of $\s$ appear in the first two entries.
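This sampling step can be sketched directly in Python (illustrative; any uniform shuffling routine would do):

```python
import random

def sample_permutation(s, s_bar):
    """Uniform permutation of the indices in s_bar whose first two
    entries are the anchors s = {i1, i2}, themselves in random order."""
    anchors = list(s)
    random.shuffle(anchors)                 # sigma_1 uniform on s
    rest = [i for i in s_bar if i not in s]
    random.shuffle(rest)                    # uniform order for s_bar \ s
    return anchors + rest
```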
The variable $\sigma_\t$ specifies the index of the observation $\y_{\sigma_\t}$ introduced into the PG algorithm at SMC\ iteration (``algorithmic'' time) $\t$ and $\x_\t$ is the corresponding allocation decision.
A particle $\xVec_{\t}$ is defined as a sequence of allocation decisions, $\xVec_{\t} \coloneqq (\x_1, \dots, \x_\t)$, where $\x_\t \in \X$, $\t \in \{1, \dots, \n\}$.
Given $\sigmaVec$, we denote the SMC\ proposals used within PG by $\q^\sigmaVec_\t(\x_\t|\xVec_{\t-1})$, and the intermediate unnormalized target distributions, by $\gamma^\sigmaVec_\t(\xVec_{\t})$.
We remind the reader that both $\gamma^\sigmaVec_\t$ and $\q^\sigmaVec_\t$ are allowed to depend on arbitrary subsets of the observations $\yVec$; see, e.g., \citep{delmoral2006}.
However, we omit this dependency for notational simplicity.
Our methodology is flexible with respect to the choice of the proposals and the choice of the intermediate unnormalized target distributions.
For our methodology to provide consistent estimates, only the following weak assumptions have to be satisfied.
\begin{assumption}\label{assumption:support}
For all $\t \in \{1, \dots, \n\}$, we assume $\support(\gamma^\sigmaVec_\t) \subseteq \support(\q^\sigmaVec_{\t})$ where $\q^\sigmaVec_{\t}(\xVec_{\t}) \coloneqq \q^\sigmaVec_1(\x_1) \prod_{\k= 2}^{\t} \q^\sigmaVec_{\k}(\x_{\k} | \xVec_{\k-1})$ for $\t\geq 2$.
\end{assumption}
\begin{assumption}\label{assumption:bijection}
We assume that there exists a bijection $\phi^\sigmaVec$ taking a particle as input, and outputting a clustering of $\sBar$.
More precisely, $\phi^\sigmaVec$ is a bijection between the support of the proposal, and the support of the split-merge target distribution, $\phi^\sigmaVec : \support(\q^\sigmaVec_{\n}) \to \support(\piBar)$.
\end{assumption}
\begin{assumption}\label{assumption:intermediate-final}
We assume that $\gamma^\sigmaVec_\n(\xVec_{\n}) \propto \piBar(\phi^\sigmaVec(\xVec_{\n}))$.
\end{assumption}
Assumption~\ref{assumption:support} ensures that all the importance weights appearing in the SMC method are well-defined.
Assumption~\ref{assumption:bijection} is a simple condition ensuring that we can consistently relabel the particles.
Assumption~\ref{assumption:intermediate-final} ensures that we target the desired distribution at algorithmic time $n$.
Note that Assumption~\ref{assumption:intermediate-final} only restricts the choice of $\gamma_\t$ for the final SMC\ iteration, $\t = \n$. We use this flexibility in Section~\ref{sec:improved}.
We show in the next section how to design $\q^\sigmaVec$, $\gamma^\sigmaVec$, $\phi^\sigmaVec$ and $\X$ that satisfy these assumptions.
PG proceeds in a way similar to standard SMC\ algorithms, with the important difference that one of the $\N$ particle paths is fixed.
In our setup, this path is obtained using the inverse of the bijection described in Assumption~\ref{assumption:bijection}, applied to the state of the restricted clustering $\cBar$ prior to the current PG step.
As discussed in \cite{ChopinSingh2012}, we can without loss of generality set the genealogy of the conditioning path $\cBar$ to $(1,...,1)$, i.e. we use the particle index $\p = 1$ for this conditioning path: $\xVec_{\n}^1 \coloneqq \left(\phi^\sigmaVec\right)^{-1}(\cBar)$.
This defines a path by taking a prefix of length $\t$ of the vector $\xVec_{\n}^1$ for $\xVec_{\t}^1$, i.e. $\xVec_{\t}^1 = \left( \xVec_{\n}^1 \right)_{1:\t}$.
The final ingredient required to describe the PG algorithm is a \emph{conditional resampling} distribution $\r(\ancestors\mid\wVec)$, where $\ancestors \coloneqq (\ancestor_2, \dots, \ancestor_N)$ denotes the resampling ancestors, $\ancestor_\p \in \{1, \dots, \N\}$, and $\wVec \coloneqq (\w^1, \dots, \w^\N)$ denotes a vector of probabilities.
We limit ourselves to multinomial resampling:
\begin{align}\label{eq:resampling}
\r(\ancestors\mid\wVec) = \prod_{\p=2}^\N r(\ancestor_{\p} \mid\wVec) &= \prod_{\p=2}^\N \w^{\ancestor_\p}.
\end{align}
More elaborate schemes can be used, see \cite{andrieudoucet2009,andrieu2010}. Instead of resampling at each time step as in vanilla SMC\ algorithms, we only resample when the relative Effective Sample Size (ESS) criterion, which takes values between $0$ and $1$, is below a pre-specified threshold $\ressThreshold \in [0,1]$. The adaptive resampling procedure was proposed by \cite{Liu1995AdaptResampling} for standard particle methods and its correctness for PG has been established in \cite{Lee2011}. The resulting procedure is described in Algorithm~\ref{alg:pgsm}.
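The relative ESS test and the conditional multinomial resampling step of Equation~(\ref{eq:resampling}) can be sketched as follows (Python, with $0$-based particle indices so the conditioned path sits at index $0$; names are illustrative):

```python
import random

def relative_ess(weights):
    """Relative effective sample size of normalized weights, in (0, 1]."""
    n = len(weights)
    return 1.0 / (n * sum(w * w for w in weights))

def conditional_multinomial(weights, rng=random):
    """Conditional multinomial resampling: the conditioned path (index 0
    here, index 1 in the paper's notation) keeps itself as ancestor, and
    the remaining N-1 ancestors are drawn i.i.d. from the weights."""
    n = len(weights)
    return [0] + rng.choices(range(n), weights=weights, k=n - 1)
```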
\begin{algorithm}[h]
\caption{Particle Gibbs split-merge kernel}
\begin{algorithmic}[1]
\Function{ParticleGibbsSplitMerge}{$\s, \sBar, \cBar$, $\piBar$} \Comment{Inputs coming from Algorithm~\ref{alg:setup_split_merge}}
\State $\sigmaVec \gets \Call{SamplePermutation}{\s, \sBar}$
\State $\xVec_{\n}^1 \gets \left(\phi^\sigmaVec\right)^{-1}(\cBar)$ \Comment{Compute the conditional path}
\For{$\t \in \{1, \dots, \n-1\}$}
\State $\xVec_{\t}^1 \gets \left( \xVec_{\n}^1 \right)_{1:\t}$ \Comment{First particle of each generation matches the conditional path}
\EndFor
\For{$p \in \{2, \dots, N\}$} \Comment{Initialize particles}
\State $\x_1^\p \sim \q_1^\sigmaVec\left(\cdot\right)$
\State $\xVec_1^\p \gets (\x_1^\p)$
\EndFor
\For{$p \in \{1, \dots, N\}$} \Comment{Initialize incremental importance weights}
\State $\tilde \w_1^\p \gets \frac{\gamma^\sigmaVec_1(\xVec_1^\p)}{\q^\sigmaVec_1(\xVec_1^\p)}$
\EndFor
\For{$p \in \{1, \dots, N\}$}
\State $\w_1^\p \gets \frac{\tilde \w_1^\p}{({\sum_{\p'=1}^\N \tilde \w_1^{\p'}})}$ \Comment{Compute normalized weights}
\EndFor
\For{$t \in \{2, \dots, \n\}$}
\If{$(\N \sum_{\p=1}^{\N} (\w_{t-1}^p)^2)^{-1} < \ressThreshold$} \Comment{Resample only if relative ESS is too low}
\State $\ancestors \sim \r(\cdot\mid\wVec_{\t-1})$ \Comment{Perform the conditional resampling step} \label{alg:line:resampling}
\State $\tilde \wVec_{\t-1} \gets (1, 1, 1, \dots, 1)$ \Comment{Reset the weights}
\Else
\State $\ancestors \gets (1, 2, 3, \dots, \N)$ \Comment{Resampling is skipped: set $\ancestors$ to the identity map}
\EndIf
\For{$p \in \{2, \dots, N\}$}
\State $\x_\t^\p \sim \q_\t^\sigmaVec\left(\cdot \mid \xVec_{\t-1}^{\ancestor_\p}\right)$ \Comment{Propose new block allocation for $\y_{\sigma_t}$}
\State $\xVec_\t^\p \gets (\xVec_{\t-1}^{\ancestor_\p}, \x_\t^\p)$ \Comment{Concatenate new block allocation to path}
\EndFor
\For{$p \in \{1, \dots, N\}$}
\State $\tilde \w_\t^\p \gets \tilde \w_{\t-1}^\p \cdot \w(\xVec_{\t-1}^{\ancestor_\p}, \x_\t^\p)$ \Comment{Update weights (see Equation~(\ref{eq:particle-weights}))} \label{alg:line:incremental_weight}
\EndFor
\For{$p \in \{1, \dots, N\}$}
\State $\w_\t^\p \gets \frac{\tilde \w_\t^\p}{({\sum_{\p'=1}^\N \tilde \w_\t^{\p'}})}$ \Comment{Compute normalized weights} \label{alg:line:weight_norm}
\EndFor
\EndFor \label{step:end-of-aux-vars}
\State $\xVec'_\n \sim \sum_{\p = 1}^\N \w_\n^\p \delta_{\xVec_\n^\p}(\cdot)$ \Comment{Sample particle representing new state} \label{alg:line:sample_particlepath}
\State $\cBar' \gets \phi^\sigmaVec(\xVec'_\n)$ \Comment{Compute updated partition} \label{step:update-part}
\State \textbf{return} $\cBar'$
\EndFunction
\end{algorithmic}
\label{alg:pgsm}
\end{algorithm}
Most of Algorithm~\ref{alg:pgsm} is concerned with the creation of temporary auxiliary variables (lines 1--\ref{step:end-of-aux-vars}).
These auxiliary variables can all be discarded after the algorithm returns $\cBar'$, as they can be resampled from scratch every time Algorithm~\ref{alg:pgsm} is run.
The part of the algorithm that performs the actual split or merge is in lines \ref{alg:line:sample_particlepath} and \ref{step:update-part}.
At this point of the execution of the algorithm, the particle population at SMC\ generation $\n$ can be interpreted (via $\phi^\sigmaVec$) as a distribution over clusterings of $\sBar$, with some particles
corresponding to merging all points in $\sBar$ into one block (i.e., when $|\phi^\sigmaVec(\xVec_\n^\p)| = 1$), and others to various ways of splitting $\sBar$ into two blocks (when $|\phi^\sigmaVec(\xVec_\n^\p)| = 2$).
Correctness of this procedure follows straightforwardly from the original PG argument (see Appendix~\ref{appendix:pg-correctness} for details):
\begin{proposition}\label{prop:pg-correctness}
Under Assumptions~\ref{assumption:support}, \ref{assumption:bijection}, and \ref{assumption:intermediate-final}, the output of Algorithm~\ref{alg:pgsm}, $\cBar'$, satisfies $\cBar' \sim \piBar$ if $\cBar \sim \piBar$, for any $\N\geq2$, i.e. the Markov kernel $\KBar(\cBar'|\cBar)$ induced by Algorithm~\ref{alg:pgsm} is $\piBar$-invariant.
\end{proposition}
\subsection{Intermediate target distributions and proposals construction}\label{sec:intermediate}
We detail here the construction of a set of proposal distributions $\q^\sigmaVec$, unnormalized target distributions $\gamma^\sigmaVec$, and mappings $\phi^\sigmaVec$ satisfying Assumptions~\ref{assumption:support} to \ref{assumption:intermediate-final}.
We denote the space of possible allocation decisions at a given PG\ iteration by $\X$. Our construction is based on an encoding where the space $\X$ consists of the rectangles shown in Figure~\ref{fig:local-state-space}.
We call the rectangles \emph{states} for short.
These states are used to build particles: recall that a particle $\xVec_\t$ is defined as a list of local decisions, $\xVec_{\t} \coloneqq (\x_1, \dots, \x_\t)$, $\x_{\t'} \in \X$.
\begin{figure}
\caption{
Left: state space $\X$ for the local allocation decisions.
Right: allowed transitions $\partSupport(\cdot)$ between the states.
}
\label{fig:local-state-space}
\end{figure}
The state appended to a particle at time $\t$ represents (a) the clustering restricted to the anchors (shown in the first line of each rectangle in Figure~\ref{fig:local-state-space}), and (b), the cluster joined by $\y_{\sigma_\t}$ (encoded by the anchor(s) contained in the joined cluster, second line in the same figure).
As shown in Figure~\ref{fig:local-state-space}, the ``merge state'' (left) is an absorbing state, encoding the fact that following this local decision, all children particles are forced to join the unique block in the restricted clustering. The two ``split states'' (right), on the other hand, both have two outgoing transitions, encoding the fact that for each index in $\sBar \backslash \s$, the corresponding observation needs to be allocated to one of the two blocks.
There is a bijection between the support of $\piBar$, and particles respecting the transition constraints defined by the arrows in Figure~\ref{fig:local-state-space}.
More precisely, for each state $\x \in \X$, we let $\partSupport(\x)$ denote the set of allowed transitions from $\x$. We write $\xVec_{\t} \in \partSupport_\t$ if (a) $\x_1 = \#1$, and (b) for all $\t' \in \{2, \dots, \t\}$, $\x_{\t'} \in \partSupport(\x_{\t'-1})$.
From this definition, we obtain the following result whose proof is given in Appendix~\ref{appendix:bijection}.
\begin{proposition}\label{prop:bijection}
For any permutation $\sigmaVec$ satisfying $\{\sigma_1, \sigma_2\} = \s$, there is a bijective map $\phi^\sigmaVec$ from the space of particles respecting the transition constraints, $\partSupport_\n$, to the support of the restricted target, $\support(\piBar)$.
\end{proposition}
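A possible encoding of this state space and of the bijection $\phi^\sigmaVec$ is sketched below in Python; the state labels (\texttt{init}, \texttt{merge}, \texttt{split1}, \texttt{split2}) are our own illustrative names and do not reproduce the labels of Figure~\ref{fig:local-state-space}:

```python
# Illustrative state labels: 'init' plays the role of state #1 (forced at
# t = 1); 'merge' is the absorbing merge state; 'split1'/'split2' mean the
# current point joins the block of anchor 1, resp. anchor 2.
TRANSITIONS = {
    'init':   {'merge', 'split2'},  # decision for the second anchor
    'merge':  {'merge'},            # absorbing: everyone joins the block
    'split1': {'split1', 'split2'},
    'split2': {'split1', 'split2'},
}

def in_support(xs):
    """xs is a valid particle iff x_1 = 'init' and every step follows an
    allowed transition."""
    return bool(xs) and xs[0] == 'init' and \
        all(b in TRANSITIONS[a] for a, b in zip(xs, xs[1:]))

def phi(sigma, xs):
    """Sketch of the bijection: map a valid particle to a partition of
    the indices sigma[0..n-1], each block containing one anchor."""
    block1, block2 = {sigma[0]}, set()
    for t in range(1, len(xs)):
        (block1 if xs[t] in ('merge', 'split1') else block2).add(sigma[t])
    return [frozenset(b) for b in (block1, block2) if b]
```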
We use this bijection to define a sequence of intermediate target and proposal distributions. The intermediate target at time $t$ of support $\partSupport_\t$ is given by:
\begin{equation}
\gamma^\sigmaVec_\t(\xVec_\t) \coloneqq \tauIBar(|\cBar_\t|) \left( \prod_{\b\in\cBar_\t} \tauII(|\b|) \L(\yVec_\b) \right),
\end{equation}
where $\cBar_\t = \phi^{\boldsymbol{\sigma}_{1:\t}}(\xVec_\t)$.
By construction, we have that for $\t = \n$, $\gamma^\sigmaVec_\n(\xVec_{\n}) \propto \piBar(\phi^\sigmaVec(\xVec_{\n}))$ so Assumption~\ref{assumption:intermediate-final} is satisfied.
We define as proposals:
\begin{eqnarray}
\q^\sigmaVec_1(\x_1) &\coloneqq& \delta_{\#1}(\x_1), \\
\q^\sigmaVec_\t(\x_\t \mid \xVec_{\t-1}) &\coloneqq& \frac{\gamma^\sigmaVec_\t(\xVec_\t)}{\sum_{\x'_\t \in \partSupport(\x_{\t-1})}\gamma^\sigmaVec_\t(\xVec_{\t-1}, \x'_\t)}, \nonumber
\end{eqnarray}
where $(\xVec_{\t-1}, \x'_\t)$ denotes the concatenation of $\x'_\t$ to the vector $\xVec_{\t-1}$, and $\xVec_{\t} = (\xVec_{\t-1}, \x_\t)$.
These definitions satisfy Assumption~\ref{assumption:support}, and yield the following weight updates:
\begin{eqnarray}\label{eq:particle-weights}
\w_\t(\xVec_{\t-1}, \x_\t) &\coloneqq& \frac{\gamma^\sigmaVec_\t(\xVec_\t)}{\gamma^\sigmaVec_{\t-1}( \xVec_{\t-1} )} \frac{1}{\q^\sigmaVec_\t(\x_\t \mid \xVec_{\t-1})} \\
&=& \frac{\sum_{\x'_\t \in \partSupport(\x_{\t-1})} \gamma^\sigmaVec_\t\left(\xVec_{\t-1}, \x'_\t\right)}{\gamma^\sigmaVec_{\t - 1}(\xVec_{\t-1})} \nonumber \\
&=& \sum_{\x'_\t \in \partSupport(\x_{\t-1})} \frac{\gamma^\sigmaVec_\t\left(\xVec_{\t-1}, \x'_\t\right)}{\gamma^\sigmaVec_{\t - 1}(\xVec_{\t-1})}. \label{eqn:gamma_ratio}
\end{eqnarray}
If $\t > |\s|$, each ratio appearing in Equation~(\ref{eqn:gamma_ratio}) simplifies as follows
\begin{eqnarray}
\frac{\gamma^\sigmaVec_\t(\xVec_\t)}{\gamma^\sigmaVec_{\t-1}(\xVec_{\t-1})} &=& \frac{\tauII(|\b_\t^+|)}{\tauII(|\b_\t^-|)} \L\left(\yVec_{\b_\t^+}\mid \yVec_{\b_\t^-}\right), \label{eq:tau_ratio}
\end{eqnarray}
where
\begin{equation}
\L\left(\yVec_{\b_\t^+}\mid \yVec_{\b_\t^-}\right) \coloneqq \frac{\L\left(\yVec_{\b_\t^+}\right)}{\L\left(\yVec_{\b_\t^-}\right)}\;.
\end{equation}
Here $\b_\t^-$ and $\b_\t^+$ encode the block to which a point is added when transitioning from $\xVec_{\t-1}$ to $\xVec_\t$, the first being the block before the addition, and the second, the same block after the addition:
\begin{equation}
\b_\t^- \coloneqq \cBar_{\t-1} \backslash \cBar_\t
,\;\;
\b_\t^+ \coloneqq \b_\t^- \cup \{\sigma_\t\}
.
\end{equation}
Depending on the form of the partition prior and likelihood it may be possible to simplify these quantities into more computationally efficient forms.
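Abstracting away the model-specific ratios, one proposal step together with its incremental weight can be sketched as follows (Python; \texttt{log\_ratio} returns $\log\left(\gamma^\sigmaVec_\t(\xVec_{\t-1},\x)/\gamma^\sigmaVec_{\t-1}(\xVec_{\t-1})\right)$ for a candidate $\x$; the log-sum-exp stabilization is an implementation detail, not part of the derivation above):

```python
import math
import random

def locally_optimal_step(log_ratio, x_prev, transitions, rng=random):
    """One PG proposal step: sample x_t from q_t(. | x_{t-1}), which is
    proportional to the gamma ratio over the allowed transitions, and
    return (x_t, w_t) where w_t is the local normalizing constant, i.e.
    the incremental particle weight."""
    cands = sorted(transitions[x_prev])
    logs = [log_ratio(x) for x in cands]
    m = max(logs)
    unnorm = [math.exp(l - m) for l in logs]  # log-sum-exp stabilization
    w_t = math.exp(m) * sum(unnorm)           # incremental weight w_t
    x_t = rng.choices(cands, weights=unnorm, k=1)[0]
    return x_t, w_t
```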
\subsection{An improved sequence of intermediate target distributions}\label{sec:improved}
We now describe an improvement over the basic intermediate and proposal distributions presented in the previous section.
This improvement addresses a ``greediness'' problem of the (conditional) SMC procedure.
Consider a case where the ratio $\tauIBar(1)/\tauIBar(2)$ between a merge and a split is large.
This can occur for example when the Dirichlet process concentration parameter $\concentration$ is small.
In this case, the proposal at the first non-trivial step, $\q^\sigmaVec_2$, will assign most of its mass to the transition from state $\#1$ to state $\#2$ (see Figure~\ref{fig:local-state-space}).
However, when $|\sBar|$ is large, the likelihood might overcome this prior and favour a split.
Proposing such a split nonetheless has low probability under the definitions given in the previous section, as $\#2$ is an absorbing state.
To overcome this issue, we build a new sequence of intermediate distributions, which delay the incorporation of the prior:
\begin{equation}
\widehat{\gamma^\sigmaVec_\t}(\xVec_\t) \coloneqq \bracearraycond{
{\mathbf 1}[\xVec_\t \in \partSupport_\t],&\;\;\textrm{if }\t\in\{1,2\}, \\
\left( \gamma^\sigmaVec_2(\xVec_{1:2}) \right)^{\schedule_\t} \frac{\gamma^\sigmaVec_\t(\xVec_\t)}{\gamma^\sigmaVec_2(\xVec_{1:2})},&\;\;\textrm{otherwise.}
}
\end{equation}
where $\schedule_\t$ is a positive increasing annealing schedule such that $\schedule_\n = 1$.
We use the following proposal based on these new intermediate distributions:
\begin{eqnarray}
\widehat{\q^\sigmaVec_1}(\x_1) &\coloneqq& \delta_{\#1}(\x_1), \\
\widehat{\q^\sigmaVec_\t}(\x_\t \mid \xVec_{\t-1}) &\coloneqq& \frac{\widehat{\gamma^\sigmaVec_\t}(\xVec_\t)}{\sum_{\x'_\t \in \partSupport(\x_{\t-1})} \widehat{\gamma^\sigmaVec_\t}(\xVec_{\t-1}, \x'_\t)}. \nonumber
\end{eqnarray}
This yields the weight updates:
\begin{eqnarray}\label{eq:final-particle-weight}
\widehat{\w_\t}(\xVec_{\t-1}, \x_\t) &\coloneqq& \sum_{\x'_\t \in \partSupport(\x_{\t-1})}\frac{\widehat{\gamma^\sigmaVec_\t}\left( \xVec_{\t-1}, \x'_\t\right)}{\widehat{\gamma^\sigmaVec_{\t - 1}}(\xVec_{\t-1})}.
\end{eqnarray}
For simplicity, we pick $\schedule_\t = \frac{\t - 2}{\n - 2}$. This choice simplifies ratios of intermediate distributions to:
\begin{eqnarray}
\frac{\widehat{\gamma^\sigmaVec_\t}(\xVec_\t)}{\widehat{\gamma^\sigmaVec_{\t-1}}(\xVec_{\t-1})} &=& \bracearraycond{
1&\;\;\textrm{if }\t = 2, \\
\left( \gamma^\sigmaVec_2(\xVec_{1:2}) \right)^{\Delta \schedule} \frac{\gamma^\sigmaVec_\t(\xVec_\t)}{\gamma^\sigmaVec_{\t-1}(\xVec_{\t-1})}&\;\;\textrm{if }\t > 2,} \label{eq:weight-final}
\end{eqnarray}
where $\Delta \schedule\coloneqq (\n - 2)^{-1}$.
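As an illustrative sketch (ours, not part of the paper's implementation), the per-step log-weight increment implied by Equation~(\ref{eq:weight-final}) could be computed as follows, where the inputs \texttt{log\_gamma2} and \texttt{log\_ratio\_t} are hypothetical placeholders for $\log\gamma^\sigmaVec_2(\xVec_{1:2})$ and the unannealed log ratio $\log\left(\gamma^\sigmaVec_\t(\xVec_\t)/\gamma^\sigmaVec_{\t-1}(\xVec_{\t-1})\right)$:

```python
def annealed_log_weight_increment(t, n, log_gamma2, log_ratio_t):
    """Log of the ratio in the annealed weight update, assuming the
    schedule beta_t = (t - 2) / (n - 2), so Delta beta = 1 / (n - 2).

    log_gamma2  -- log gamma^sigma_2(x_{1:2})              (hypothetical input)
    log_ratio_t -- log gamma^sigma_t / gamma^sigma_{t-1}   (hypothetical input)
    """
    if t == 2:
        return 0.0  # the ratio equals 1 at t = 2
    delta_beta = 1.0 / (n - 2)
    # each step incorporates one increment of the prior-dominated factor
    return delta_beta * log_gamma2 + log_ratio_t
```

Summing the increments over $\t = 3, \dots, \n$ incorporates exactly one full copy of $\log\gamma^\sigmaVec_2$, consistent with $\schedule_\n = 1$.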
\subsection{Runtime analysis}\label{sec:runtime}
To simplify the analysis of the running time, we make a few assumptions.
\begin{assumption}\label{assumption:parametric-likelihood-running-time}
The parametric likelihood model has the following properties:
\begin{enumerate}
\item Let $\suffstat \coloneqq \suffstat(\yVec_\b)$ denote a sufficient statistic, and define, with a slight abuse of notation, $\L(\suffstat) \coloneqq \L(\yVec_\b)$.
For a given sufficient statistic value $\suffstat$, the likelihood $\L(\suffstat)$ can be computed in time $O(l)$,
\item the sufficient statistic for $\b^+$, $\suffstat^+ \coloneqq \suffstat(\yVec_{\b^+})$, can be updated in time $O(u)$ from the sufficient statistic for $\b^-$, $\suffstat^- \coloneqq \suffstat(\yVec_{\b^-})$.
\end{enumerate}
\end{assumption}
The next assumption holds for all the clustering priors reviewed in Section~\ref{sec:Bayesianmixturemodels}.
\begin{assumption}\label{assumption:ratio-running-time}
The ratio $\frac{\tauII(j+1)}{\tauII(j)}$ can be computed in constant time.
\end{assumption}
For example, with a Dirichlet process, this ratio is equal to $j!/(j-1)! = j$. Since $|\b^+| = |\b^-| + 1$, Assumption~\ref{assumption:ratio-running-time} implies that the ratio $\frac{\tauII(|\b^+|)}{\tauII(|\b^-|)}$ in Equation~(\ref{eq:tau_ratio}) can be computed in constant time.
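As a minimal sketch of Assumption~\ref{assumption:ratio-running-time} (assuming the Dirichlet process convention $\tauII(m) = (m-1)!$, consistent with the ratio stated above), the required quantity is available in closed form and costs $O(1)$:

```python
import math

def log_tau2_ratio_dp(j):
    """O(1) evaluation of log(tau2(j+1) / tau2(j)) for a Dirichlet
    process prior with tau2(m) = (m-1)!: the ratio j!/(j-1)! is j."""
    return math.log(j)
```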
\begin{proposition}
Under Assumptions~\ref{assumption:parametric-likelihood-running-time} and \ref{assumption:ratio-running-time}, one weight computation, Equation~(\ref{eq:final-particle-weight}), takes time $O(u+l)$.
The storage cost per particle is $O(1)$. Moreover, the running time per weight computation is independent of the number of clusters.
\end{proposition}
The running time result follows directly from the fact that $|\partSupport(\cdot)| \le 2$, and hence the sum in Equation~(\ref{eq:final-particle-weight}) has a constant number of terms.
The constant storage cost follows from the finite dimensionality of the sufficient statistics (see Assumption~\ref{assumption:parametric-likelihood-running-time}), and from $|\X| = 4$.
We also remind the reader that for most resampling schemes, including the one in Equation~(\ref{eq:resampling}), the computational cost as a function of the number of particles and SMC\ iterations is $O(\N \n) = O(\N |\sBar|)$ \citep{Doucet2009Tutorial}.
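To illustrate Assumption~\ref{assumption:parametric-likelihood-running-time}, the following sketch (ours, not the paper's implementation) uses a 1-D Normal likelihood with known unit variance and a standard Normal prior on the mean; both the sufficient-statistic update cost $u$ and the marginal-likelihood evaluation cost $l$ are constant, independent of cluster size:

```python
import math

class GaussianSuffStat:
    """Sufficient statistics (n, sum, sum of squares) for a 1-D Normal
    likelihood with known variance 1 and a N(0, 1) prior on the mean.
    Illustrative only: O(1) update and O(1) marginal likelihood."""

    def __init__(self, n=0, s=0.0, ss=0.0):
        self.n, self.s, self.ss = n, s, ss

    def updated(self, y):
        # O(u) update: statistics of b^+ from the statistics of b^-
        return GaussianSuffStat(self.n + 1, self.s + y, self.ss + y * y)

    def log_marginal(self):
        # O(l) evaluation: log of integral of prod N(y_i | mu, 1) N(mu | 0, 1) dmu
        if self.n == 0:
            return 0.0
        var = 1.0 / (self.n + 1.0)  # posterior variance of the mean
        return (-0.5 * self.n * math.log(2 * math.pi)
                + 0.5 * math.log(var)
                - 0.5 * self.ss
                + 0.5 * var * self.s ** 2)
```

Under this scheme, splitting a cluster amounts to a sequence of \texttt{updated} calls and \texttt{log\_marginal} evaluations, each at constant cost.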
\subsection{Generalization}
For simplicity, we have assumed so far that $|\s| = 2$, and hence, according to the auxiliary variable analysis of Appendix~\ref{appendix:pg-correctness}, $|\cBar| \le 2$.
In fact, the same auxiliary variables with more than two anchor points can be used to construct novel sampling algorithms. Details are given in Appendix~\ref{appendix:generalization}.
This generalization loses some interpretability compared to the split-merge case ($|\s| = 2$), but can be useful in finite clustering models.
In this case, it may only be possible to split a cluster if a merge is performed simultaneously.
For this reason, we use $|\s| = 3$ in the finite Dirichlet mixture model examples in Section~\ref{sec:artificial}.
For the Dirichlet Process, we did not observe notable improvements by going from $|\s| = 2$ to $|\s| = 3$, so we use the former setting for the non-parametric models.
\section{Applications\label{sec:applications}}
In this section, we demonstrate the performance of our methodology and compare it to standard alternatives.
We use a series of synthetic datasets covering a large spectrum of cluster separateness, as well as real data coming from a geolocation application.
\subsection{Implementation and evaluation}
We have implemented the following three Dirichlet Process (DP) clustering samplers in the same Python codebase: the PGSM method described in this work, the efficient Sequentially-Allocated Merge Split (SAMS) method of \cite{dahl2005}, as well as the standard Gibbs sampler.
The code and instructions to reproduce the experiments are available at \url{https://github.com/aroth85/pgsm}.
The implementation of the likelihood computations is the same for all samplers, so the running times are comparable.
We have tested the correctness of our computer implementations by computing the true posterior distribution on small examples via combinatorial enumeration, and verified that the Monte Carlo estimates converged to this distribution for all three methods.
Unless we state otherwise, we initialized the samplers with the single-cluster configuration.
In datasets much smaller than those studied in this work, initializing the Gibbs sampler to the fully disconnected clustering is advantageous \citep{sudderth06a}, but in larger datasets, the quadratic burn-in cost involved with this initialization is not scalable.
However, we verified that after a long burn-in period the Gibbs method initialized to the fully disconnected clustering eventually reaches the same likelihood values in the synthetic examples.
We also investigate the high cost of the fully disconnected initialization in the results shown in Figure~\ref{fig:mopsi}.
To evaluate the performance of the samplers, we held out a random but fixed 10\% of each dataset.
We collected samples and computed the predictive likelihood and V-measure \citep{rosenberg2007vmeasure} every 100 iterations.
All experiments are replicated 10 times, and smoothed using a moving average with a window size of 20 for plotting.
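The smoothing step above can be sketched as a simple window average (NumPy assumed; window size 20 as stated):

```python
import numpy as np

def moving_average(trace, window=20):
    """Smooth a trace of per-iteration metric values with a moving
    average of the given window size, as used before plotting."""
    kernel = np.ones(window) / window
    return np.convolve(np.asarray(trace, dtype=float), kernel, mode="valid")
```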
\subsection{Likelihoods and priors}
In six of the synthetic experiments and in the geolocation experiments, discussed further in Sections~\ref{sec:artificial} and \ref{sec:geo}, we used a Normal-Inverse-Wishart conjugate likelihood model.
In two of the synthetic experiments in Section~\ref{sec:artificial} we used Bernoulli mixture models with 50 dimensions.
Each dimension is an independent draw from a Bernoulli random variable with cluster specific parameters.
We set a proportion of the dimensions to be uninformative as follows.
Values for uninformative dimensions were drawn from Bernoulli variables with parameter 0.5, regardless of cluster membership.
Values for the remaining dimensions were drawn from cluster-specific Bernoulli variables with parameters sampled from the Uniform$(0,1)$ distribution.
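This generative scheme can be sketched as follows (our illustration, not the paper's exact simulation script; function name and defaults are ours):

```python
import numpy as np

def simulate_bernoulli_mixture(n=5000, d=50, k=16, frac_noise=0.25, seed=0):
    """Sketch of the synthetic Bernoulli data: a fraction of dimensions
    is uninformative (parameter 0.5 for every cluster); the remaining
    dimensions have cluster-specific parameters drawn from Uniform(0, 1)."""
    rng = np.random.default_rng(seed)
    n_noise = int(frac_noise * d)
    theta = rng.uniform(size=(k, d))   # cluster-specific parameters
    theta[:, :n_noise] = 0.5           # uninformative dimensions
    z = rng.integers(k, size=n)        # cluster assignments
    y = rng.uniform(size=(n, d)) < theta[z]
    return y.astype(int), z
```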
For the cancer data discussed in Section \ref{sec:genomics}, we use the application-specific PyClone likelihood model \citep{roth2014pyclone}.
The PyClone model uses genomic sequence data from tumours to identify mutations which co-occur in cells and estimates the proportion of cells harbouring the mutations.
The model is not conjugate, so we apply a discretization that allows us to treat the model as conjugate.
Complete details for each model are provided in Appendix~\ref{appendix:models}.
We use a DP prior with base measure given by the conjugate prior of the corresponding likelihood model in all experiments.
We use a $\mbox{Gamma}(1,0.1)$ prior and resample the value of the concentration parameter $\alpha_{0}$ using a standard auxiliary variable method \citep{escobarWest1995}.
The value of $\alpha_{0}$ is initialized to 1.0.
\subsection{Artificial datasets}\label{sec:artificial}
\begin{figure}
\caption{
Effect on the predictive performance and clustering accuracy as a function of CPU time in log scale.
\textbf{a}
\label{fig:pilot}
\end{figure}
\begin{figure}
\caption{
Effect on the predictive performance and clustering accuracy as a function of CPU time in log scale with different distribution $h$ for proposing pairs of anchor points.
\textbf{a}
\label{fig:pilot2}
\end{figure}
We used four sources of synthetic data.
First, the four datasets from \cite{Franti2006data} denoted S1--4.
Each of the four datasets consists of 5000 points generated from 15 bivariate Normal distributions with increasing amounts of overlap between the clusters.
Second, we created another synthetic dataset, which we call C1, shown in Figure~\ref{fig:finite}, right.
Third, we simulated two datasets with 5000 points from a Bernoulli mixture model with 50 dimensions and 16 clusters, where we set 25\% (Ber-0.25) and 50\% (Ber-0.5) of dimensions to be uninformative.
Finally, we used 64 (DIM064) and 128 (DIM128) dimensional Normal datasets from \cite{dim_sets} with 1024 data points and 16 clusters.
We started with a series of pilot experiments on S1 only, designed to assess the effect of various tuning parameters on the performance of PGSM.
For all pilot experiments we use the unmixed PGSM sampler to isolate the effect of each tuning parameter.
In practice it is usually better to alternate between one iteration of the PGSM sampler and one iteration of the Gibbs sampler.
The effect of this mixing is explored later in this section.
We first explore the performance as we vary the number of particles used for each PGSM iteration (Figures~\ref{fig:pilot} \textbf{a} and \textbf{b}).
The curves with more particles take more time per iteration, but seem to achieve slightly better V-measure and predictive likelihood after the initial iterations.
The performance differences are negligible, and the PGSM sampler generally seems insensitive to the number of particles for this dataset.
For subsequent experiments we used 20 particles.
Next we compare the performance of different proposal distributions for the anchor auxiliary variables (Figures~\ref{fig:pilot} \textbf{c} and \textbf{d}).
For this experiment we kept the number of particles fixed at 20 and the resampling threshold at 0.5.
We consider three proposal distributions.
\begin{description}
\item[Uniform:] Sample the anchors uniformly at random from the $\binom{\T}{2}$ possibilities.
\item[Cluster informed:] Sample the first anchor uniformly at random.
Sample the cluster from which to draw the second anchor: the cluster containing the first anchor is chosen with probability $\frac{1}{|\c|-1}$; any other candidate cluster $b$ is chosen with probability proportional to $$\frac{\L(\yVec_{\bar{b} \cup b})}{\L(\yVec_{\bar{b}}) \L(\yVec_{b})},$$ where $\bar{b}$ is the cluster containing the first anchor.
Sample the second anchor uniformly from the chosen cluster.
\item[Threshold informed:] Sample the first anchor uniformly at random and the second anchor from clusters that have Chinese restaurant attachment probabilities greater than a threshold of 0.01.
\end{description}
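The simplest of these options can be sketched directly (hypothetical helper; the informed variants are detailed in Appendix~\ref{appendix:proposals}):

```python
import random

def sample_anchors_uniform(T, rng=random):
    """Uniform anchor proposal: an unordered pair drawn uniformly
    at random from the C(T, 2) = T(T-1)/2 possible anchor pairs."""
    return tuple(sorted(rng.sample(range(T), 2)))
```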
To ensure the adaptation of the informed proposals stops, and does not perturb the invariant distribution of the sampler, we only update the proposal distributions when the number of clusters instantiated breaks the previous record.
Adaptation is guaranteed to terminate in finite time, since there are only finitely many points.
In principle this could take a long time, so in practice it is advantageous to stop adaptation after a fixed period of time.
Detailed implementations of the cluster informed and threshold informed proposals are given in Appendix~\ref{appendix:proposals}.
Our results suggest that performance is not strongly affected by the anchor proposal distribution $\h$.
We only saw a small advantage when using the informed proposal distributions for the auxiliary anchor variables in $\s$.
We also explored the effect of $h$ in higher dimensional datasets (Figure~\ref{fig:pilot2}).
Again we found the results are not sensitive to the choice of $h$.
With the exception of the circle dataset, where we used the uniform proposal, we used the cluster informed proposal for both the PGSM and SAMS samplers in subsequent experiments.
The frequency of the resampling step had a larger effect.
A critical implementation point for the PGSM method is that resampling should be performed adaptively, by monitoring the ESS of the particle approximations in Algorithm \ref{alg:pgsm} \citep{Liu1995AdaptResampling, Lee2011}.
We varied the relative ESS resampling threshold $\ressThreshold$ from 0 (never resample) to 1 (always resample) (Figures~\ref{fig:pilot} \textbf{e} and \textbf{f}).
We observed that the performance is markedly degraded if resampling is performed after each SMC iteration, but similar for all other resampling thresholds.
We used a threshold of 0.5 in all other experiments.
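The adaptive rule can be sketched as follows (our illustration): compute the relative ESS of the normalized particle weights and resample only when it falls below the threshold $\ressThreshold$:

```python
import numpy as np

def relative_ess(log_weights):
    """Relative effective sample size of the normalized particle weights."""
    w = np.exp(log_weights - np.max(log_weights))  # stabilized exponentiation
    w /= w.sum()
    return 1.0 / (len(w) * np.sum(w ** 2))

def should_resample(log_weights, threshold=0.5):
    # threshold = 1 resamples every iteration; threshold = 0 never resamples
    return relative_ess(log_weights) < threshold
```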
\begin{figure}
\caption{
Comparison of Gibbs and PGSM for finite Dirichlet prior with $\maxnclust=5$.
\textbf{a}
\label{fig:finite}
\end{figure}
Next, we used the dataset C1 to investigate the effectiveness of our method with the finite clustering model introduced in Section~\ref{sec:mixture}, with the number of clusters fixed to $\maxnclust=5$.
In this case, standard split-merge methods such as SAMS are less helpful since only merging can be performed when the maximum number of clusters has been allocated.
The PGSM sampler does not have this restriction and naturally allows simultaneously splitting and merging while preserving the total number of clusters.
Furthermore, the PGSM sampler can use more than two anchors, potentially allowing for large changes in configuration without altering the number of clusters.
We compared the PGSM with two ($|s|=2$) and three ($|s|=3$) anchors to the Gibbs sampler.
The PGSM method outperformed the Gibbs sampler, though increasing the number of anchors did not improve the performance (Figure~\ref{fig:finite} \textbf{a}).
We plot the predictive densities (Figure~\ref{fig:finite} \textbf{b}, \textbf{d}, \textbf{f}) and cluster allocations (Figure~\ref{fig:finite} \textbf{c}, \textbf{e}, \textbf{g}) after running each sampler for 1000 seconds.
At this point the PGSM sampler used a single cluster to model the points in the middle, while the Gibbs sampler used two clusters to model the central cluster.
\begin{figure}
\caption{
Comparison of MCMC algorithms using only a single kernel at a time (pure kernels) on 2D Normal datasets.
Predictive log likelihood for datasets \textbf{a}
\label{fig:main-synthetic-pure}
\end{figure}
\begin{figure}
\caption{
Comparison of MCMC algorithms using split-merge moves combined with standard Gibbs moves (mixed kernels) on 2D Normal datasets.
Predictive log likelihood for datasets \textbf{a}
\label{fig:main-synthetic-mixed}
\end{figure}
In Figures~\ref{fig:main-synthetic-pure} and \ref{fig:main-synthetic-mixed} we show a series of experiments on the four datasets S1--4 described in the previous section.
We compare the PGSM to standard Gibbs and the SAMS method of \cite{dahl2005}.
We first compared pure kernels, where the split-merge samplers are not mixed with standard Gibbs moves (Figure~\ref{fig:main-synthetic-pure}).
The pure PGSM kernel outperformed both Gibbs and SAMS on datasets S1 and S2.
The Gibbs kernel and PGSM perform similarly for datasets S3 and S4, and both outperformed SAMS.
When the split-merge moves are mixed with standard Gibbs moves, the split-merge methods outperformed Gibbs on datasets S1 and S2, with all methods showing similar performance on datasets S3 and S4 (Figure~\ref{fig:main-synthetic-mixed}).
\begin{figure}
\caption{
Comparison of MCMC algorithms using split-merge moves combined with standard Gibbs moves (mixed kernels) on 50 dimensional Bernoulli data with 25\% (Ber-0.25) and 50\% (Ber-0.5) uninformative dimensions, and on Normal data with 64 (DIM064) and 128 (DIM128) dimensions.
Predictive log likelihood for datasets \textbf{a}
\label{fig:main-synthetic-high-dim}
\end{figure}
Finally, we explored the performance of the methods on four high dimensional datasets.
The mixed PGSM and Gibbs samplers performed the best on the Bernoulli datasets, while the unmixed PGSM sampler was slower to reach the same predictive likelihood and V-measure (Figures~\ref{fig:main-synthetic-high-dim} \textbf{a}-\textbf{d}).
The mixed SAMS sampler failed to reach the same predictive likelihood as the PGSM and Gibbs methods, oscillating around lower values.
The unmixed SAMS sampler appears to be trapped in a local mode, corresponding to poor predictive likelihood and V-measure.
For the Normal datasets, the Gibbs sampler was trapped in a local mode and had markedly worse performance than other methods (Figures~\ref{fig:main-synthetic-high-dim} \textbf{e}-\textbf{h}).
The unmixed samplers outperformed the mixed equivalents on the 64 dimensional data.
Furthermore, the unmixed PGSM method had a large performance advantage over all other methods on the 128 dimensional dataset.
\subsection{Geolocation data}\label{sec:geo}
We compared the performance of the three sampling methods on a geolocation dataset.
The dataset, described in more detail in \cite{Franti2010Mopsi}, consists of a subset of data collected by MOPSI, a Finnish mobile application where users can post their current geographic location via their mobile device.
The subset we used consists of a list of 13,467 locations (latitude-longitude pairs) from users located in Finland, collected up to 2012.
The data is freely accessible from \url{http://cs.joensuu.fi/sipu/datasets/}.
We use this data as a proxy for the estimation of mobile device user density.
The DP mixture of Normal-Inverse-Wishart distributions provides a natural way to obtain a parsimonious estimate of population density, where the flexibility on the shape and number of clusters can accommodate a broad range of density variability factors ranging from densely populated cities to vast low-density rural areas.
We summarize the results in Figure~\ref{fig:mopsi}.
In Figures~\ref{fig:mopsi} \textbf{a}-\textbf{c}, we display quantitative results as measured by held-out predictive likelihood performance.
In Figure~\ref{fig:mopsi} \textbf{a}, we show that mixed PGSM, mixed SAMS and Gibbs samplers perform similarly.
In Figure~\ref{fig:mopsi} \textbf{b}, we show that the performance of SAMS is considerably degraded if SAMS is not mixed with a Gibbs kernel.
In Figure~\ref{fig:mopsi} \textbf{c}, we show that the performance of PGSM is less degraded if not mixed with a Gibbs kernel.
In Figures~\ref{fig:mopsi} \textbf{e}-\textbf{j}, we visualize the posterior predictive density approximated using MCMC samples.
We also show the raw data in Figure~\ref{fig:mopsi} \textbf{d} for reference.
The following three pairs of density plots are included to illustrate the high computational cost of initializing a standard Gibbs sampler at a fully disconnected configuration.
From left to right: the first pair shows the predictive density after one round of Gibbs sampling initialized at the fully disconnected configuration (Figures~\textbf{e} and \textbf{f}); the second pair, after sampling with PGSM initialized at the fully connected configuration for the same time (Figures~\textbf{g} and \textbf{h}); the third, after running the Gibbs sampler for $10^5$ seconds (Figures~\textbf{i} and \textbf{j}).
This demonstrates that our method can produce accurate and compact density estimates without relying on an expensive initialization phase.
\begin{figure}
\caption{
Geolocation dataset.
Comparison of predictive likelihoods of \textbf{a}
\label{fig:mopsi}
\end{figure}
\subsection{Inferring population structure in heterogeneous tumours}\label{sec:genomics}
The PyClone model \citep{roth2014pyclone} is designed to infer the proportion of cancer cells in a tumour sample which contain a mutation, which we refer to as the cellular prevalence of the mutation.
The input data consists of a set of digital measurements of allelic abundance, which are assumed to be proportional to the true abundance of the allele in the sample.
The key factors which need to be deconvolved to convert this measurement into an estimate of cellular prevalence are that some cells derive from healthy (normal) tissue and that the genomes of cancer cells contain multiple copies of a locus.
The model assumes that mutations will group by cellular prevalence due to the expansion of populations of genetically identical cells.
The number of populations is unknown, thus the PyClone model uses a DP prior with a Uniform($\left[0, 1\right]$) base measure.
The component parameters are interpreted as the cellular prevalence of the mutations associated with the component.
We show results on a dataset with 10,000 synthetic mutations in Figure~\ref{fig:cancer-results}.
All methods except the pure SAMS kernel performed similarly in terms of predictive likelihood, while the pure SAMS kernel performed significantly worse (Figure~\ref{fig:cancer-results} \textbf{a}).
The pure PGSM and SAMS kernels outperformed the other methods in terms of V-measure, though the differences were small (Figure~\ref{fig:cancer-results} \textbf{b}).
As observed in the other domains, the performance of SAMS critically depends on mixing the kernel with Gibbs moves.
We show the data points for each replication of the pure split-merge kernels, further supporting this point (Figure~\ref{fig:cancer-results} \textbf{c}).
\begin{figure}
\caption{
Clustering of cancer mutations from synthetic next-generation sequencing data.
\textbf{a}
\label{fig:cancer-results}
\end{figure}
\section{Discussion}\label{sec:discussion}
We have proposed a new methodology to design efficient split-merge moves for Bayesian mixture models.
The method also generalizes to new types of moves useful for finite clustering models when $|s|>2$.
We have shown empirically that the proposed method is competitive in a range of clustering and likelihood models, including synthetic and real datasets from geolocation and genomics applications.
Our method, being based on the established PMCMC framework, opens up many directions for future improvements.
This includes applying recent advances in parallel implementations of SMC, for example via graphical processing units \citep{Lee2010GraphicCards}, or modifications of the SMC algorithm itself \citep{Jun2012Entangled,Murray2012GPU,Lee2014Forest}.
Another area of improvement comes from the development of resampling schemes tailored to discrete latent variables. In Algorithm \ref{alg:pgsm}, the number of possible distinct successors for each given particle is a small finite number (at most two if $|s|=2$ for example).
The complexity of the problem comes from the fact that a potentially long sequence of such decisions need to be made in order to split a cluster.
In these specific scenarios, custom PMCMC methods based on the early work of \cite{Fearnhead2003DiscretePF} have been developed in \cite{Whiteley2010Tech} and would provide further improvement.
The fact that the state transitions $\partSupport(\cdot)$ have an absorbing state has both advantages and disadvantages. On the one hand it may cause Algorithm \ref{alg:pgsm} to be greedy, as explained in Section~\ref{sec:improved}. We have described in the same section a choice of intermediate and proposal distributions tailored to alleviate this issue. A potential alternative consists in designing a resampling distribution $\r$, which conditions on the survival of at least one representative of both a merge and a split. None of the existing resampling schemes have this property.
On the other hand, having an absorbing state has the advantage that if all particles simulated by Algorithm \ref{alg:pgsm} at some iteration $\t$ are equal to the merge absorbing state (i.e. $\x_\t^\p = \#2$ for all particle index $\p \in \{1, \dots, \N\}$), then there is no need to continue the computation of the particle filter for $\t' > \t$.
In standard applications of the PG algorithm, coalescence of the particle genealogy may cause slow mixing as noted in \cite{andrieu2010}.
The issue is that the particles $\xVec_\n^1, \xVec_\n^2, \dots, \xVec_\n^\N$ appearing in Algorithm \ref{alg:pgsm} have components at time $\t$ for $\t \ll \n$ which coincide with high probability with the components of the conditioning path.
This can be resolved using more sophisticated MCMC moves on the PG auxiliary variables \citep{Whiteley2010Discussion,Whiteley2010Tech,Lindsten2014PGAS}.
In our non-standard setup, this issue is partially mitigated by the fact that the order $\sigmaVec$ in which the particles are introduced is itself random.
Nonetheless, it would be interesting to implement these more advanced schemes to the problem at hand.
We have shown in Section~\ref{sec:genomics} a simple and effective method for handling models where each cluster component is governed by a non-conjugate model with a low-dimensional parameterization.
We leave for future work the extension of our method to higher dimensional non-conjugate likelihood models.
This problem can be approached, for example, by combining our method with the auxiliary variables described in \cite{neal2000}.
\section{Correctness of the decomposition into split-merge subproblems}\label{appendix:split-merge-correctness}
We present in this section the proof of correctness of the decomposition of the clustering into split-merge sub-problems.
The main tool used to prove this result is an auxiliary variable construction.
The auxiliary variable consists of a pair $(\s, \cMinus)$, where $\s$ is the set of anchors, and $\cMinus$ consists of the blocks of the partition $\c$ that do not contain anchor points:
\begin{eqnarray}
\cMinus \coloneqq \{\b\in\c : \b \cap \s = \emptyset\}.
\end{eqnarray}
These intuitively correspond to the blocks of the partition that are forced to stay unchanged in this split-merge step.
We will view the split-merge step as a Gibbs step conditioning on $\cMinus, \s$.
A slight subtlety is that conditioning on the auxiliary variables not only forces the blocks in $\cMinus$ to stay constant; it also forces the other blocks to each contain at least one of the anchors.
See Figure~\ref{fig:support-example} for an example.
This leads to condition~\ref{assumption:2fromlemma} in Lemma~\ref{lemma:main}.
\begin{figure}
\caption{
Assuming that the values of $\c, \cMinus$ and $\s$ are as in Figure~\ref{fig:example}
\label{fig:support-example}
\end{figure}
\begin{lemma}\label{lemma:main}
Let $\c, \c'$ denote two partitions of $[\T]$. Let $\s \subseteq [\T]$.
Define $\cMinus \coloneqq \{\b \in \c : \s \cap \b = \emptyset\}$ and $\cMinus' \coloneqq \{\b \in \c' : \s \cap \b = \emptyset\}$.
Then $\cMinus = \cMinus'$ if and only if the following two conditions hold:
\begin{enumerate}
\item $\c \cap \cMinus = \c' \cap \cMinus$, and, \label{assumption:1fromlemma}
\item $\b \in \c' \backslash \cMinus \Longrightarrow \b \cap \s \neq \emptyset$. \label{assumption:2fromlemma}
\end{enumerate}
\end{lemma}
\begin{proof}
$(\Longrightarrow)$ Condition~\ref{assumption:1fromlemma} holds trivially.
For condition~\ref{assumption:2fromlemma}, suppose (a) $\cMinus = \cMinus'$, (b), $\b \in \c' \backslash \cMinus$, but (c) $\b \cap \s = \emptyset$.
By (b), $\b\in\c'$ and $\b\notin\cMinus$. This and (c) imply that $\b\in\cMinus'$.
But this contradicts (a), so condition~\ref{assumption:2fromlemma} holds as well.
$(\Longleftarrow)$ First, suppose $\b\in\cMinus$.
By condition~\ref{assumption:1fromlemma}, $\b\in\cMinus'$.
Therefore, $\cMinus \subseteq \cMinus'$.
Second, suppose $\b\in\cMinus'$.
By the contrapositive of condition~\ref{assumption:2fromlemma}, $\b \notin \c'\backslash \cMinus$.
This point and $\cMinus' \subseteq \c'$ imply that $\b\in\cMinus$.
Therefore, $\cMinus' \subseteq \cMinus$, and hence $\cMinus = \cMinus'$.
\end{proof}
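A concrete instance of the lemma can be checked mechanically; the sketch below (ours, purely illustrative) represents blocks as frozensets:

```python
def restrict(c, s):
    """cMinus(s, c): the blocks of partition c disjoint from the anchor set s."""
    return {b for b in c if not (b & s)}

# Partitions of {0,...,4} with anchors s = {0, 1}; c -> c2 is a
# split-merge move leaving the anchor-free block {3, 4} untouched.
s = {0, 1}
c = {frozenset({0, 2}), frozenset({1}), frozenset({3, 4})}
c2 = {frozenset({0}), frozenset({1, 2}), frozenset({3, 4})}

cm, cm2 = restrict(c, s), restrict(c2, s)
assert cm == cm2 == {frozenset({3, 4})}   # cMinus is preserved
assert c & cm == c2 & cm                  # condition 1 of the lemma
assert all(b & s for b in c2 - cm)        # condition 2 of the lemma
```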
We can now turn to the proof of Proposition~\ref{prop:split-merge-correctness}.
We copy its statement here for convenience:
\begin{proposition}
If $\c \sim \pi$, then the output of Algorithm~\ref{alg:setup_split_merge}, $\c'$, satisfies $\c' \sim \pi$; i.e. the Markov kernel $\K(\c'|\c)$ induced by Algorithm~\ref{alg:setup_split_merge} is $\pi$-invariant.
\end{proposition}
\begin{proof}
Consider the model augmented with the auxiliary variables $\s$ and $\cMinus$ (see Figure~\ref{fig:graphical-models}(a)), defined formally using the following auxiliary distribution:
\begin{equation}
\piTilde(\s, \cMinus, \c) \coloneqq \pi(\c) \h(\s) {\mathbf 1}[\cMinus = \cMinus(\s,\c)],
\end{equation}
where $ \cMinus(\s, \c) \coloneqq \{\b\in\c : \b \cap \s = \emptyset\}$. Note that this auxiliary distribution admits the target distribution as a marginal:
\begin{eqnarray}
\sum_\s \sum_{\cMinus} \piTilde(\s, \cMinus, \c) &=& \pi(\c) \sum_\s \h(\s) \sum_{\cMinus}{\mathbf 1}[\cMinus = \cMinus(\s,\c)] \\
&=& \pi(\c) \sum_\s \h(\s) = \pi(\c), \nonumber
\end{eqnarray}
where the sum over $\cMinus$ is over all sets of subsets of $[\T]$, and the sum over $\s$ is over all subsets of $[\T]$.
We used the fact that only one $\cMinus$ satisfies $\cMinus(\s,\c) = \cMinus$, and that $\h$ is a probability mass function.
Next, we introduce three kernels with inputs and outputs denoted by:
\begin{equation}
\c \labelledmapsto{\K_1} (\s, \cMinus, \c) \labelledmapsto{\K_2} (\s, \cMinus, \c') \labelledmapsto{\K_3} \c'.
\end{equation}
These kernels play the following roles:
\begin{itemize}
\item $\K_1$ samples the auxiliary variables according to $\piTilde(\s, \cMinus\mid\c)$, while keeping $\c$ fixed,
\item $\K_2$ performs a Metropolis-within-Gibbs step on $\c$ targeting the auxiliary distribution $\piTilde$,
\item $\K_3$ deterministically projects the triplet back to the original space, retaining only the clustering $\c$.
\end{itemize}
Formally:
\begin{eqnarray}
\K_1(\s', \cMinus', \c'\mid \c) &\coloneqq& \h(\s') {\mathbf 1}[\c = \c'] {\mathbf 1}[\cMinus' = \cMinus(\s', \c)], \\
\K_2(\s', \cMinus', \c'\mid \s, \cMinus, \c) &\coloneqq& \piTilde(\c' | \s, \cMinus) {\mathbf 1}[\s' = \s] {\mathbf 1}[\cMinus' = \cMinus], \nonumber \\
\K_3(\c'|\s, \cMinus, \c) &\coloneqq& {\mathbf 1}[\c' = \c]. \nonumber
\end{eqnarray}
Since $\piTilde$ admits $\pi$ as a marginal, the composition of $\K_1, \K_2,$ and $\K_3$ is clearly $\pi$-invariant.
It is therefore enough to show that when $\cMinus = \cMinus(\s, \c)$ where $\c$ is a valid partition of $[\T]$, sampling from $\K_2$ is equivalent to sampling from the Markov kernel $\K(\c'|\c)$ induced by Algorithm~\ref{alg:setup_split_merge}:
\begin{eqnarray}\label{eq:equivalence-last}
\piTilde(\c'\mid \s, \cMinus) &\propto& \pi(\c') {\mathbf 1}[\cMinus = \cMinus(\s, \c')] \\
&=& \pi(\c') {\mathbf 1}[\cMinus(\s, \c) = \cMinus(\s, \c')] \nonumber \\
&=& \tauI(|\c'|) \left( \prod_{\b\in\c'} \tauII(|\b|) \L(\yVec_\b) \right) {\mathbf 1}[\cMinus(\s, \c) = \cMinus(\s, \c')]. \nonumber
\end{eqnarray}
Using Lemma~\ref{lemma:main}, we now rewrite the support as follows:
\begin{eqnarray}
{\mathbf 1}[\cMinus(\s, \c') = \cMinus(\s, \c)] &=& {\mathbf 1}[\c \cap \cMinus = \c' \cap \cMinus] {\mathbf 1}[\b \in \c' \backslash \cMinus \Longrightarrow \b \cap \s \neq \emptyset].
\end{eqnarray}
Let now $\cBar' = \c' \backslash \cMinus$. Plugging in the last line of Equation~(\ref{eq:equivalence-last}), we obtain:
\begin{eqnarray}\label{eq:last}
\piTilde(\c'\mid \s, \cMinus) &\propto& \tauIBar(|\cBar'|) \left( \prod_{\b\in\cBar'} \tauII(|\b|) \L(\yVec_\b) \right) {\mathbf 1}[\c \cap \cMinus = \c' \cap \cMinus] {\mathbf 1}[\b \in \cBar' \Longrightarrow \b \cap \s \neq \emptyset] \\
&\propto& \piBar(\cBar') {\mathbf 1}[\c \cap \cMinus = \c' \cap \cMinus], \nonumber
\end{eqnarray}
where $\piBar(\cBar')$ is defined in Equation (\ref{eq:piBar}). Since Algorithm~\ref{alg:setup_split_merge} does not change the clustering of points outside of $\sBar$ (line \ref{step:create-final-cluster} of Algorithm~\ref{alg:setup_split_merge}), it follows that the indicator
function in the last line of Equation~(\ref{eq:last}) is equal to one.
\end{proof}
\section{Correctness of particle Gibbs for split merge}\label{appendix:pg-correctness}
We provide here the proof of Proposition~\ref{prop:pg-correctness}.
The main steps in the proof follow a structure similar to the proof of Proposition~\ref{prop:split-merge-correctness}.
\begin{proposition}
Under Assumptions~\ref{assumption:support}, \ref{assumption:bijection}, and \ref{assumption:intermediate-final}, and if $\cBar \sim \piBar$, then the output $\cBar'$ of Algorithm~\ref{alg:pgsm} satisfies $\cBar' \sim \piBar$ for any $N\geq2$; i.e., the Markov kernel $\KBar(\cBar'|\cBar)$ induced by Algorithm~\ref{alg:pgsm} is $\piBar$-invariant.
\end{proposition}
\begin{proof}
We augment the model $\cBar$ with the auxiliary variables $\sigmaVec$ and $\g$ (see Figure~\ref{fig:graphical-models}(b)), defined as:
\begin{enumerate}
\item $\sigmaVec$ is distributed according to the output of Algorithm~\ref{alg:sample_permmutation}, defined in Section~\ref{sec:pgsm-overview}.
\item Given $\sigmaVec$ and $\cBar$, the variables $\g = (\ancestors_{2:\n}, \xVec_{1:\n}^{1:\N}, k)$ are distributed according to the specification of Algorithm~\ref{alg:pgsm}, with the exception that all particle indices are shuffled according to an independent permutation of $\{1, \dots, \N\}$ at each generation.
Here $k$ is the index of the particle sampled at iteration $\n$ (on line (\ref{alg:line:sample_particlepath})).
\end{enumerate}
\begin{figure}
\caption{
Graphical models of the auxiliary variables used in the correctness proofs.
The structure of the dependencies gives an intuitive justification that the original model can be recovered as a marginal in both cases, as there are no directed paths from the auxiliary variables to the original variables.
(a) The augmented model used in Appendix~\ref{appendix:split-merge-correctness}; (b) the augmented model used in Appendix~\ref{appendix:pg-correctness}.
\label{fig:graphical-models}
\end{figure}
Next, we introduce three kernels with inputs and outputs denoted by:
\begin{equation}
\cBar \labelledmapsto{\KBar_1} (\sigmaVec, \cBar) \labelledmapsto{\KBar_2} (\sigmaVec, \g, \cBar') \labelledmapsto{\KBar_3} \cBar'.
\end{equation}
These kernels play the following roles:
\begin{itemize}
\item $\KBar_1$ samples the permutation $\sigmaVec$ while keeping the auxiliary variables $\cBar$ fixed,
\item $\KBar_2$ samples $\g$ using the PG step and then sets $\cBar'$ to $\phi^\sigmaVec(\xVec^k_\n)$,
\item $\KBar_3$ deterministically projects the triplet back to the original space, retaining only the restricted clustering $\cBar'$.
\end{itemize}
The kernel $\KBar_2$ is equivalent to a standard PG algorithm.
Assumptions~\ref{assumption:support} and \ref{assumption:intermediate-final}, together with Theorem~5(a) of \cite{andrieu2010}, imply that $\KBar_2$ is $\piBar$-invariant (and in fact, irreducible).
Assumption~\ref{assumption:bijection} ensures that the computation of the conditioned path is well-defined.
Since the augmented distribution admits $\piBar$ as a marginal, the composition of $\KBar_1$, $\KBar_2$, and $\KBar_3$ is therefore $\piBar$-invariant.
\end{proof}
\section{Construction of the bijection}\label{appendix:bijection}
We provide here the proof of Proposition~\ref{prop:bijection}:
\begin{proposition}
For any permutation $\sigmaVec$ satisfying $\{\sigma_1, \sigma_2\} = \s$, there is a bijective map $\phi^\sigmaVec$ from the space of particles respecting the transition constraints, $\partSupport_\n$, to the support of the restricted target, $\support(\piBar)$.
\end{proposition}
\begin{proof}
Consider the following mapping:
\begin{equation}
\phi^\sigmaVec(\xVec_\t) \coloneqq \bracearraycond{
\{\{\sigma_1, \dots, \sigma_\t\}\}&\;\;\textrm{if }\x_\t \in \{\#1, \#2\}, \\
\{\overline{\sigma}_1(\xVec_\t), \overline{\sigma}_2(\xVec_\t)\}&\;\;\textrm{otherwise,}
}
\end{equation}
where $\overline{\sigma}_i(\xVec_\t) \coloneqq \{\sigma_i\} \cup \{\sigma_{\t'} : \x_{\t'} = \#(2+i), 1 \le \t' \le \t\}$ (so that the first anchor, whose state is $\#1$, is counted in $\overline{\sigma}_1$). It is easy to check that it has an inverse given by:
\begin{equation}
\left(\left(\phi^\sigmaVec\right)^{-1}(\cBar)\right)_\t \coloneqq \bracearraycond{
\#1&\;\;\textrm{if }\t = 1, \\
\#2&\;\;\textrm{if }\t > 1, |\cBar| = 1, \\
\#3&\;\;\textrm{if }\t > 1, |\cBar| > 1, \sigma_1 \isclusteredwith_{\cBar} \sigma_\t, \\
\#4&\;\;\textrm{if }\t > 1, |\cBar| > 1, \sigma_2 \isclusteredwith_{\cBar} \sigma_\t,
}
\end{equation}
where $\sigma_i \isclusteredwith_{\cBar} \sigma_j$ means that $\y_{\sigma_i}$ is in the same block as $\y_{\sigma_j}$ for the clustering $\cBar$.
By the construction of the support of $\piBar$, exactly one of the four cases above holds when $\cBar \in \support(\piBar)$.
\end{proof}
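The maps above can be transcribed directly into code. The following is an illustrative Python rendering with names of our choosing (we adopt the convention that the initial state $\#1$ contributes $\sigma_1$ to the first anchor's block), together with a round-trip check of the bijection:

```python
def phi(sigma, x):
    # Forward map: a particle path x over states {1, 2, 3, 4} and a
    # permutation sigma yield a clustering of {sigma_1, ..., sigma_t}.
    t = len(x)
    if x[-1] in (1, 2):                      # cases #1, #2: a single block
        return [frozenset(sigma[:t])]
    # Split case: states #1/#3 go to anchor 1's block, #2/#4 to anchor 2's.
    b1 = frozenset(s for s, st in zip(sigma, x) if st in (1, 3))
    b2 = frozenset(s for s, st in zip(sigma, x) if st in (2, 4))
    return [b1, b2]

def phi_inv(sigma, clustering):
    # Inverse map, following the four cases of the proposition.
    block_of = {s: b for b in clustering for s in b}
    n = sum(len(b) for b in clustering)
    x = []
    for t, s in enumerate(sigma[:n], start=1):
        if t == 1:
            x.append(1)
        elif len(clustering) == 1:
            x.append(2)
        elif block_of[sigma[0]] == block_of[s]:
            x.append(3)
        else:
            x.append(4)
    return x

sigma = [10, 20, 30, 40]
print(phi(sigma, [1, 4, 3, 4]))   # two blocks: {10, 30} and {20, 40}
```

On any clustering in $\support(\piBar)$, `phi_inv` followed by `phi` (and vice versa) is the identity, as the proposition asserts.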
\section{Generalization to $|\s| > 2$}\label{appendix:generalization}
We describe here the algorithmic implications of increasing the number of anchor points, $|\s|$, to some constant greater than two.
This constant should be selected so that the number of partitions of $|\s|$ points is much lower than the number of particles.
The algorithm is generally unchanged, with the following exceptions:
\begin{enumerate}
\item Algorithm~\ref{alg:sample_permmutation} is modified to sample $(\sigma_1, \sigma_2, \dots, \sigma_{|\s|})$ uniformly over the permutations of $\s$, and $(\sigma_{|\s|+1}, \dots, \sigma_\n)$, over the permutations of $\sBar \backslash \s$,
\item as before, the local allocation state space $\X$ can be viewed as a set of pairs, each consisting of a partition and a block in this partition (see Figure~\ref{fig:local-state-space}).
In the case where $|\s| = 2$, the partitions are taken from the union of the set of partitions of a set of size one with the set of partitions of a set of size two. When $|\s| > 2$, we add more states, corresponding to partitions of a set of size three, etc. until we add states corresponding to partitions of a set of size $|\s|$.
The support of the transition $\partSupport$ consists of (a) edges $\x \to \x'$ linking a state $\x'$ such that removing one element from one of its blocks yields $\x$, and (b) edges $\x \to \x'$ where $\x$ and $\x'$ correspond to the same partition of a set of size $|\s|$.
This is a generalization of the case $|\s| = 2$ shown in Figure~\ref{fig:local-state-space}.
The mapping $\phi^\sigmaVec$ is generalized in the obvious way,
\item in Section~\ref{sec:improved} the following equations are substituted,
\begin{enumerate}
\item $\t \in \{1, 2\} \rightarrow \t \in \{1, 2, \dots, |\s|\}$,
\item $\t = 2 \rightarrow \t \in \{2, \dots, |\s|\}$,
\item $\t > 2 \rightarrow \t > |\s|$,
\item $\Delta \schedule \coloneqq (n - 2)^{-1} \rightarrow \Delta \schedule \coloneqq (n - |\s|)^{-1}$.
\end{enumerate}
\end{enumerate}
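As a rough guide for choosing $|\s|$: the number of partitions of the $|\s|$ anchor points is the Bell number $B_{|\s|}$, which should stay well below the number of particles $\N$. A quick stdlib-only check (the function name is ours):

```python
def bell(k):
    # Bell number B(k) via the Bell triangle (Aitken's array) recurrence:
    # each row starts with the last entry of the previous row, and each
    # entry is the sum of its left neighbour and the entry above it.
    row = [1]
    for _ in range(k - 1):
        prev = row
        row = [prev[-1]]
        for v in prev:
            row.append(row[-1] + v)
    return row[-1]

print([bell(k) for k in range(1, 6)])  # [1, 2, 5, 15, 52]
```

So, for instance, $|\s| = 4$ yields $15$ partitions, still small relative to typical particle counts.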
\section{Models}\label{appendix:models}
\subsection{Multivariate normal}
The first likelihood we use is the multivariate normal (MVN) with density denoted $\mathcal{N} (y| \mu, \Sigma)$.
We specify a normal inverse Wishart (NIW) prior for the mean and covariance parameters with density denoted $\mathcal{N}I\mathcal{W} (\mu, \Sigma | \nu, r, u, S)$.
The densities are given by
\begin{eqnarray}
\mathcal{N}I\mathcal{W} (\mu, \Sigma | \nu, r, u, S) & = & \mathcal{N} \left( \mu |u, \frac{1}{r} \Sigma \right) \mathcal{I}\mathcal{W} (\Sigma | \nu, S), \\
\mathcal{N} (y| \mu, \Sigma) & = & \frac{1}{(2 \pi)^{\frac{D}{2}} | \Sigma |^{\frac{1}{2}}} \exp \left( - \frac{1}{2} (y - \mu)^T \Sigma^{- 1} (y - \mu) \right), \nonumber \\
\mathcal{I}\mathcal{W} (\Sigma | \nu, S) & = & \frac{| S |^{\frac{\nu}{2}}}{2^{\frac{\nu D}{2}} \Gamma_D \left( \frac{\nu}{2} \right)} | \Sigma |^{- \frac{\nu + D + 1}{2}} \exp \left( - \frac{1}{2} \text{tr} (S \Sigma^{- 1}) \right), \nonumber
\end{eqnarray}
where $\Gamma_D (x) = \pi^{\frac{D (D - 1)}{4}} \prod_{d = 1}^D \Gamma \left(x + \frac{d - 1}{2} \right)$.
We use the following priors for all experiments $(\nu, r, u, S) = (\nu_0, r_0, u_0, S_0) = (2 + D, 1, \tmmathbf{0}, \tmmathbf{I})$, where $\tmmathbf{0}$ is the $D$ dimensional vector of zeros, and $\tmmathbf{I}$ is the $D$ dimensional identity matrix.
The posterior distribution of $\mu, \Sigma$ given $\tmmathbf{y}= (y_1, \ldots, y_m)$ is $\mathcal{N}I\mathcal{W} (\mu, \Sigma |\nu_m, r_m, u_m, S_m)$ where
\begin{eqnarray}
\nu_m & = & \nu_0 + m, \\
r_m & = & r_0 + m, \nonumber\\
u_m & = & \frac{r_0 u_0 + \sum_{i = 1}^m y_i}{r_m}, \nonumber \\
S_m & = & S_0 + \sum_{i = 1}^m y_i y_i^T + r_0 u_0 u_0^T - r_m u_m u_m^T. \nonumber
\end{eqnarray}
For computational efficiency it is convenient to express these updates iteratively using the following equations:
\begin{eqnarray}
\nu_m & = & \nu_{m - 1} + 1, \\
r_m & = & r_{m - 1} + 1, \nonumber\\
u_m & = & \frac{r_{m - 1} u_{m - 1} + y_m}{r_m}, \nonumber\\
S_m & = & S_{m - 1} + \frac{r_m}{r_{m - 1}} (y_m - u_m) (y_m - u_m)^T. \nonumber
\end{eqnarray}
Using these equations, the Cholesky decomposition of $S_0$ can be computed once using $O (D^3)$ operations and cached.
This decomposition can then be updated with $m$ rank one updates, each requiring $O (D^2)$ operations, to obtain $S_m$.
This allows for efficient evaluation of the marginal and predictive likelihoods, as $| S_m |$ can be computed from the Cholesky decomposition in $O (D)$ operations, instead of the standard $O (D^3)$ operations.
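The iterative updates and the rank-one Cholesky trick can be sketched as follows. This is a pure-Python illustration (stdlib only, function names are ours), not the paper's implementation:

```python
import math

def chol(A):
    # Dense Cholesky factorization: A = L L^T with L lower triangular.
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(A[i][i] - s) if i == j else (A[i][j] - s) / L[j][j]
    return L

def chol_rank1_update(L, x):
    # O(D^2) rank-one update: returns the Cholesky factor of L L^T + x x^T.
    n = len(x)
    L = [row[:] for row in L]
    x = x[:]
    for k in range(n):
        r = math.hypot(L[k][k], x[k])
        c, s = r / L[k][k], x[k] / L[k][k]
        L[k][k] = r
        for i in range(k + 1, n):
            L[i][k] = (L[i][k] + s * x[i]) / c
            x[i] = c * x[i] - s * L[i][k]
    return L

def niw_update(nu, r, u, L, y):
    # One iterative NIW posterior update (equations above); the scale
    # matrix S is tracked only through its Cholesky factor L.
    D = len(y)
    nu1, r1 = nu + 1, r + 1
    u1 = [(r * u[d] + y[d]) / r1 for d in range(D)]
    scale = math.sqrt(r1 / r)  # sqrt(r_m / r_{m-1})
    return nu1, r1, u1, chol_rank1_update(L, [scale * (y[d] - u1[d]) for d in range(D)])
```

With $D=2$, priors $(\nu_0, r_0, u_0, S_0) = (4, 1, \tmmathbf{0}, \tmmathbf{I})$, and observations $(1,0)$ and $(0,2)$, reconstructing $L L^T$ after two updates reproduces the batch value $S_2$ exactly.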
The marginal likelihood for the MVN-NIW conjugate pair is
\begin{eqnarray}
L (\tmmathbf{y}) & = & \int \prod_{i = 1}^m L (y_i | \theta) H (\mathrm{d} \theta) \\
& = & \int \prod_{i = 1}^m \mathcal{N} (y_i | \mu, \Sigma) \mathcal{N}I\mathcal{W} (\mu, \Sigma | \nu, r, u, S) \,\mathrm{d}\mu \,\mathrm{d}\Sigma \nonumber \\
& = & \frac{1}{\pi^{\frac{m D}{2}}} \frac{r_0^{\frac{D}{2}}}{r_m^{\frac{D}{2}}} \frac{| S_0 |^{\frac{\nu_0}{2}}}{| S_m |^{\frac{\nu_m}{2}}} \frac{\prod_{d = 1}^D \Gamma \left( \frac{\nu_m + d - 1}{2} \right)}{\prod_{d = 1}^D \Gamma \left(\frac{\nu_0 + d - 1}{2} \right)}. \nonumber
\end{eqnarray}
The predictive likelihood is given by
\begin{eqnarray}
L (\tmmathbf{y}^+ |\tmmathbf{y}^-) & = & \frac{L (y_1, \ldots, y_m)}{L(y_1, \ldots, y_{m - 1})} \\
& = & \frac{1}{\pi^{\frac{D}{2}}} \frac{r_{m - 1}^{\frac{D}{2}}}{r_m^{\frac{D}{2}}} \frac{| S_{m - 1} |^{\frac{\nu_{m - 1}}{2}}}{| S_m |^{\frac{\nu_m}{2}}} \frac{\prod_{d = 1}^D \Gamma \left(\frac{\nu_m + d - 1}{2} \right)}{\prod_{d = 1}^D \Gamma \left( \frac{\nu_{m - 1} + d - 1}{2} \right)}. \nonumber
\end{eqnarray}
\subsection{Bernoulli}
We use a Bernoulli likelihood, $\text{Bernoulli} (x| \theta)$, with a Beta prior distribution, $\text{Beta} (\theta | \alpha, \beta)$.
We use the following priors $(\alpha, \beta) = (\alpha_0, \beta_0) = (1, 1)$ for all experiments.
The densities are
\begin{eqnarray}
\text{Bernoulli} (x| \theta) & = & \theta^x (1 - \theta)^{1 - x}, \\
\text{Beta} (\theta | \alpha, \beta) & = & \frac{\Gamma (\alpha + \beta)}{\Gamma (\alpha) \Gamma(\beta)} \theta^{\alpha - 1} (1 - \theta)^{\beta - 1}. \nonumber
\end{eqnarray}
The posterior density of $\theta$ given $\tmmathbf{y}= (y_1, \ldots, y_m)$ is $\text{Beta} (\alpha_m, \beta_m)$ where $\alpha_m = \alpha_0 + \sum_{i = 1}^m y_i$ and $\beta_m = \beta_0 + \sum_{i = 1}^m (1 - y_i)$.
The marginal likelihood is
\begin{eqnarray}
L (\tmmathbf{y}) & = & \int \prod_{i = 1}^m L (y_i | \theta) H (\mathrm{d} \theta) \\
& = & \int \prod_{i = 1}^m \text{Bernoulli} (y_i | \theta) \text{Beta}(\theta | \alpha_0, \beta_0) \,\mathrm{d}\theta \nonumber \\
& = & \frac{\Gamma (\alpha_m) \Gamma (\beta_m)}{\Gamma (\alpha_m + \beta_m)} \frac{\Gamma (\alpha_0 + \beta_0)}{\Gamma (\alpha_0) \Gamma (\beta_0)}, \nonumber
\end{eqnarray}
and the predictive likelihood is
\begin{eqnarray}
L (\tmmathbf{y}^+ |\tmmathbf{y}^-) & = & \frac{L (y_1, \ldots, y_m)}{L(y_1, \ldots, y_{m - 1})} \\
& = & \frac{\Gamma (\alpha_m) \Gamma (\beta_m)}{\Gamma(\alpha_{m - 1}) \Gamma (\beta_{m - 1})} \frac{\Gamma(\alpha_{m - 1} + \beta_{m - 1})}{\Gamma (\alpha_m + \beta_m)}. \nonumber
\end{eqnarray}
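A quick numerical check of the Beta-Bernoulli marginal and predictive (function names are ours); working with $\log\Gamma$ avoids overflow for large blocks:

```python
from math import lgamma, exp

def log_beta(a, b):
    # log B(a, b) = log Gamma(a) + log Gamma(b) - log Gamma(a + b)
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def marginal(y, a0=1.0, b0=1.0):
    # Beta-Bernoulli marginal likelihood: B(a_m, b_m) / B(a0, b0).
    am = a0 + sum(y)
    bm = b0 + len(y) - sum(y)
    return exp(log_beta(am, bm) - log_beta(a0, b0))

def predictive(y_new, y_obs, a0=1.0, b0=1.0):
    # Predictive likelihood as a ratio of marginals.
    return marginal(y_obs + [y_new], a0, b0) / marginal(y_obs, a0, b0)

# Laplace's rule of succession: after one success under a Beta(1,1) prior,
# the predictive probability of a second success is 2/3.
print(predictive(1, [1]))
```

The chain rule $L(\tmmathbf{y}) = \prod_m L(y_m \mid y_{1:m-1})$ then holds by construction, which is what the split-merge moves exploit.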
\subsection{PyClone}
For the cancer genomics data we use the application-specific PyClone likelihood model over clonal prevalences, genotypes, and observed read counts.
The key variables in the model are as follows (see \cite{roth2014pyclone} for a more detailed description of the model):
\begin{eqnarray}
\phi_i &:& \text{proportion of cancer cells with mutation $i$, } \phi_i\in[0,1], \nonumber \\
t & : & \text{proportion of cancer cells in a sample (treated as known), } t \in [0, 1], \nonumber \\
\psi_i & : & \text{genotype of normal, non-mutated cancer, and mutated cancer cells, } \psi_i = (g_N, g_R, g_V), \nonumber \\
g_x & \in & \mathcal{G} = \left\{ \text{A}, \text{B}, \text{AA}, \text{AB}, \ldots \right\}, \nonumber \\
\pi_{i, \psi_i} & : & \text{probability that mutation $i$ has genotype $\psi_i$ (elicited from auxiliary data)}, \nonumber \\
c (g_x) & = & \# \text{A} (g_x) +\# \text{B} (g_x), \nonumber \\
\mu (g_x) & = & \frac{\# \text{A} (g_x)}{c (g_x)}, \nonumber \\
\xi (\psi, \phi, t) & : & \text{probability of sampling a B from the population of cells in the sample, i.e.:} \nonumber \\
& = & \frac{(1 - t) c (g_N) \mu (g_N) + t (1 - \phi) c (g_R) \mu (g_R) + t \phi c (g_V) \mu (g_V) }{(1 - t) c (g_N) + t (1 - \phi) c (g_R) + t \phi c (g_V)}, \nonumber \\
y_i & : & \text{number of sequence reads with a B and total number of reads covering mutation $i$, i.e.:} \nonumber \\
& = & (y_{i, b}, y_{i, d}) \in \mathbb{N}^2. \nonumber
\end{eqnarray}
The generative model is specified as follows:
\begin{eqnarray}
H_0 & = & \text{Uniform} ([0, 1]), \\
\concentration & \sim & \text{Gamma} (\concentration |a, b), \nonumber \\
H| \concentration, H_0 & \sim & \text{DP} (H| \concentration, H_0), \nonumber \\
\phi_i |H & \sim & H, \nonumber \\
y_i | \psi_i, \phi_i, t & \sim & \text{Binomial} (y_{i, b} |y_{i, d}, \xi(\psi_i, \phi_i, t)). \nonumber
\end{eqnarray}
This model is not conjugate. However, if we let $x \in \{ x_1, \ldots, x_M \} = \left\{ 0, \frac{1}{M - 1}, \ldots, \frac{M - 2}{M - 1}, 1 \right\}$ be a discretization of the interval $[0, 1]$ into $M$ equally spaced points and replace the continuous uniform base measure, $H_0 = \text{Uniform} ([0, 1])$, with the discrete uniform measure, $H_0 = \text{Uniform} (\{ x_1, \ldots, x_M \})$, then we can approximate the model.
Using this approximation, we can now treat the model as if it were conjugate.
The marginal likelihood for data $(y_1,\ldots,y_m)$ is given by
\begin{eqnarray}
\int \prod_{i = 1}^m L (y_i | \theta) H (\mathrm{d} \theta) & = & \int \prod_{i = 1}^m \sum_{\psi_i \in \mathcal{G}^3} \pi_{i, \psi_i} \text{Binomial}(y_{i, b} |y_{i, d}, \xi (\psi_i, \phi, t)) H (\mathrm{d} \phi) \\
& = & \sum_{k = 1}^M \prod_{i = 1}^m \sum_{\psi_i \in \mathcal{G}^3} \pi_{i, \psi_i} \text{Binomial} (y_{i, b} |y_{i, d}, \xi (\psi_i, x_k, t)) \frac{1}{M} \nonumber \\
& = & \sum_{k = 1}^M \prod_{i = 1}^m \exp \left( \underbrace{\log \sum_{\psi_i \in \mathcal{G}^3} \pi_{i, \psi_i} \text{Binomial} (y_{i, b} |y_{i, d}, \xi (\psi_i, x_k, t)) }_{\Xi_k (y_i)} \right) \frac{1}{M} \nonumber \\
& = & \sum_{k = 1}^M \exp \left( \sum_{i = 1}^m \Xi_k (y_i) \right) \frac{1}{M}, \nonumber
\end{eqnarray}
where we have the sufficient statistics
\begin{eqnarray}
\tmmathbf{\Xi} (y_i) & = & (\Xi_1 (y_i), \ldots, \Xi_M (y_i)).
\end{eqnarray}
\begin{remark}
The possibly infinite sum $\sum_{\psi_i \in \mathcal{G}^3}$ is truncated to a finite sum over biologically plausible states.
\end{remark}
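Since the grid marginal depends on the data only through the summed vectors $\tmmathbf{\Xi}$, a block's marginal likelihood can be evaluated (and blocks merged) in $O(M)$ time. A minimal Python sketch, using a plain Bernoulli likelihood on the grid as a stand-in for the genotype-mixture term $\Xi_k$ (function names are ours):

```python
from math import log, exp

def xi_vector(y, grid, loglik):
    # Sufficient statistic of one observation: its log-likelihood
    # at every grid point.
    return [loglik(y, x) for x in grid]

def block_marginal(xi_sum, M):
    # Marginal likelihood of a block from the elementwise sum of its
    # Xi vectors: (1/M) * sum_k exp(sum_i Xi_k(y_i)),
    # evaluated with a log-sum-exp shift for numerical stability.
    shift = max(xi_sum)
    return exp(shift) * sum(exp(v - shift) for v in xi_sum) / M

# Toy stand-in likelihood: Bernoulli success probability on the grid.
loglik = lambda y, x: log(x) if y == 1 else log(1 - x)
grid = [0.25, 0.75]
xi1 = xi_vector(1, grid, loglik)
xi_sum = [a + b for a, b in zip(xi1, xi1)]  # block containing two successes
print(block_marginal(xi_sum, len(grid)))
```

Merging two blocks only requires adding their summed $\tmmathbf{\Xi}$ vectors, which is what makes split-merge moves cheap under this approximation.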
\section{Anchor proposal distribution}\label{appendix:proposals}
The anchor proposal distribution, $h$, is a free tuning parameter for the PGSM sampler.
In principle, proposals which are informed by the current clustering state of the chain or by the topology of the space may improve the performance of the sampler.
We consider two informed proposal distributions.
While bespoke proposals for each model may perform better, we restrict attention here to proposals which can be applied generically to any class of model for which the PGSM sampler is applicable.
In particular, we do not assume a distance metric is available.
Both proposals we discuss are only applicable when two anchor points are used.
\begin{algorithm}
\caption{Cluster informed (CI) proposal}\label{alg:ci_proposal}
\begin{algorithmic}[1]
\State $i_{1} \sim \mbox{Uniform}([\T])$
\State $\bar{b} \gets b \in c$ s.t. $i_{1} \in b$
\State $c' \gets c \setminus \{\bar{b}\}$
\For{$b \in c'$}
\State $s_{b} \gets \frac{L(y_{\bar{b} \cup b})}{L(y_{\bar{b}}) L(y_{b})}$
\EndFor
\State $s_{\bar{b}} \gets \frac{\sum_{b \in c'} s_{b}}{|c| - 1}$ \Comment{The anchor's block gets the average merge score, so a split is proposed with probability $\frac{1}{|c|}$}
\For{$b \in c$}
\State $p_{b} \gets \frac{s_{b}}{\sum_{b \in c} s_{b}}$
\EndFor
\State $b' \sim \mbox{Discrete}(c, p_{b})$ \Comment{Sample a block $b'$ in $c$ with probability $p_{b}$}
\State $i_{2} \sim \mbox{Uniform}(b' \setminus \{i_{1}\})$
\State \textbf{return} $i_{1}, i_{2}$
\end{algorithmic}
\end{algorithm}
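Algorithm~\ref{alg:ci_proposal} can be rendered directly in Python. The sketch below assumes only a `marginal` callable returning $L(y_b)$ for a tuple of point indices; the function name and the data representation are ours, and the uniform fallback of the remark at the end of this section is included:

```python
import random

def ci_proposal(clustering, marginal, rng=random):
    # clustering: list of disjoint blocks (lists of point indices).
    # marginal: maps a tuple of indices to the marginal likelihood L(y_b).
    points = [i for b in clustering for i in b]
    i1 = rng.choice(points)
    anchor_block = next(b for b in clustering if i1 in b)
    others = [b for b in clustering if b is not anchor_block]
    scores = {}
    for b in others:  # merge score: joint vs independent marginals
        scores[id(b)] = marginal(tuple(anchor_block) + tuple(b)) / (
            marginal(tuple(anchor_block)) * marginal(tuple(b)))
    # Anchor's own block receives the average of the merge scores.
    scores[id(anchor_block)] = (sum(scores.values()) / len(others)
                                if others else 1.0)
    blocks = others + [anchor_block]
    b2 = rng.choices(blocks, weights=[scores[id(b)] for b in blocks])[0]
    candidates = [i for i in b2 if i != i1]
    if not candidates:  # uniform fallback (see the remark below)
        candidates = [i for i in points if i != i1]
    i2 = rng.choice(candidates)
    return i1, i2
```

With a constant `marginal` the proposal reduces to a uniform choice of anchors, which is a convenient sanity check.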
\begin{algorithm}
\caption{Threshold informed (TI) proposal}\label{alg:ti_proposal}
\begin{algorithmic}[1]
\State $i_{1} \sim \mbox{Uniform}([\T])$
\For{$b \in c$}
\If{$i_1 \in b$}
\State $b \gets b \setminus \{i_1\}$
\EndIf
\State $s_{b} \gets \tau_{2}(b) L(y_{i_{1}}|b)$ \Comment{CRP attachment probability where $L(\cdot|b)$ is the predictive distribution}
\EndFor
\For{$b \in c$}
\State $p_{b} \gets \frac{s_{b}}{\sum_{b \in c} s_{b}}$
\EndFor
\State $b' \sim \mbox{Uniform}(\{b : p_{b} \ge t\})$ \Comment{$t$ is a pre-specified threshold, set to 0.01 in the experiments}
\State $i_{2} \sim \mbox{Uniform}(b' \setminus \{i_{1}\})$
\State \textbf{return} $i_{1}, i_{2}$
\end{algorithmic}
\end{algorithm}
\begin{remark}
If any of the sets that we sample uniformly from are empty, we return two anchors sampled uniformly at random.
\end{remark}
\renewcommand{\nomname}{List of Symbols}
{\footnotesize
\printnomenclature
}
\end{document} |
\begin{document}
\begin{titlepage}
\vspace*{4mm}
\title{On the Second Fundamental Theorem of Asset Pricing}
\vskip 2em
\centerline{\mbox{{\sc Rajeeva L. Karandikar}\hspace{10pt} and \hspace{10pt}{\sc B. V. Rao}}}
\vskip 1em
\place{Chennai Mathematical Institute, Chennai.}
\vskip 9em
\centerline{{\sc Abstract}}
{ Let $X^1,\ldots, X^d$ be sigma-martingales on $(\Omega,{\mathcal F}, {{\sf P}})$. We show that every bounded martingale (with respect to the underlying filtration) admits an integral representation w.r.t. $X^1,\ldots, X^d$ if and only if there is no equivalent probability measure (other than ${{\sf P}}$) under which $X^1,\ldots,X^d$ are sigma-martingales.
From this we deduce the second fundamental theorem of asset pricing: completeness of a market is equivalent to uniqueness of the Equivalent Sigma-Martingale Measure (ESMM).}
\hrule width100pt height .8pt
\vskip 2mm
\noindent {\it 2010 Mathematics Subject Classification:} 91G20, 60G44, 97M30, 62P05.\\
\noindent {\it Key words and phrases:} Martingales, Sigma Martingales, Stochastic Calculus, Martingale Representation, No Arbitrage, Completeness of Markets.
\end{titlepage}
\section{Introduction}
The (first) fundamental theorem of asset pricing says that a market consisting of finitely many stocks satisfies the {\em No Arbitrage property} (NA) if and only if there exists an {\em Equivalent Martingale Measure} (EMM), {\em i.e.} an equivalent probability measure under which the (discounted) stocks are (local) martingales. The No Arbitrage property has to be suitably defined in continuous time, where one rules out approximate arbitrage in the class of admissible strategies. For a precise statement in the case when the underlying processes are locally bounded, see Delbaen and Schachermayer \cite{DS}. Also see Bhatt and Karandikar \cite{BK15} for an alternate formulation, where approximate arbitrage is defined only in terms of simple strategies. For the general case, the result is true when local martingale in the statement above is replaced by sigma-martingale; see Delbaen and Schachermayer \cite{DS98}. They give an example where the No Arbitrage property holds but there is no equivalent measure under which the underlying process is a local martingale, although there is an equivalent measure under which the process is a sigma-martingale.
The second fundamental theorem of asset pricing says that the market is complete ({\em i.e. every contingent claim can be replicated by trading on the underlying securities}) if and only if the EMM is unique. Interestingly, this property was studied by probabilists well before the connection between finance and stochastic calculus was established (by Harrison-Pliska \cite{HP}). The completeness of the market is the same as the question: when is every martingale representable as a stochastic integral w.r.t.\ a given set of martingales $\{M^1,\ldots ,M^d\}$? When $M^1,\ldots ,M^d$ is the $d$-dimensional Wiener process, this property was proved by Ito \cite{Ito}. Jacod and Yor \cite{JY} proved that if $M$ is a ${{\sf P}}$-local martingale, then every martingale $N$ admits a representation as a stochastic integral w.r.t.\ $M$ if and only if there is no probability measure ${{\sf Q}}$ (other than ${{\sf P}}$) such that ${{\sf Q}}$ is equivalent to ${{\sf P}}$ and $M$ is a ${{\sf Q}}$-local martingale. The situation in higher dimensions is more complex: the obvious generalisation is not true, as was noted by Jacod-Yor \cite{JY}.
To remedy the situation, a notion of vector stochastic integral was introduced, where a vector valued predictable process is the integrand and a vector valued martingale is the integrator. The resulting integrals yield a class larger than the linear space generated by componentwise integrals; see \cite{J80}, \cite{CS}. However, one then has to prove the various properties of stochastic integrals once again.
Here we achieve the same objective in another fashion, avoiding defining integration again from scratch. In the same breath, we also take into account the general case, when the underlying processes need not be bounded but satisfy the property NFLVR, so that one has an equivalent sigma-martingale measure (ESMM). We say that a martingale $M$ admits an integral representation w.r.t.\ $(X^1,X^2,\ldots, X^d)$ if there exist predictable $f, g^j$ such that $g^j\in{\mathbb L}(X^j)$,
\[Y_t=\sum_{j=1}^d\int_0^t g^j_s{\hspace{0.6pt}d\hspace{0.1pt}} X^j_s,\]
$f\in{\mathbb L}(Y)$ and
\[M_t=\int_0^t f_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s.\]
The security $Y$ can be thought of as a mutual fund or an index fund and the investor is trading on such a fund trying to replicate the security $M$.
We will show that if an ESMM exists for a multidimensional r.c.l.l.\ process $(X^1,X^2,\ldots, X^d)$, then all bounded martingales admit a representation w.r.t.\ $X^j$, $1\le j\le d$, if and only if the ESMM is unique.
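The gain from this two-stage definition can be seen from the formal associativity of stochastic integration (a standard property; no new result is claimed here):

```latex
\[
M_t=\int_0^t f_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s
   =\sum_{j=1}^d\int_0^t f_sg^j_s{\hspace{0.6pt}d\hspace{0.1pt}} X^j_s
   \qquad\text{whenever } fg^j\in{\mathbb L}(X^j)\text{ for every }j.
\]
```

The definition, however, only requires $f\in{\mathbb L}(Y)$; when some product $fg^j$ fails to be integrable w.r.t.\ $X^j$, the representation through the intermediate security $Y$ is still meaningful, and it is precisely such martingales $M$ that lie beyond the linear span of componentwise integrals.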
\section{Preliminaries and Notation}
Let us start with some notation. $(\Omega, {\mathcal F}, {{\sf P}})$ denotes a complete probability space with a filtration $({\mathcal F}_\cdot)= \{{\mathcal F}_t:\;t\ge 0 \}$ such that ${\mathcal F}_0$ consists of all ${{\sf P}}$-null sets (in ${\mathcal F}$) and
\[\cap_{t>s}{\mathcal F}_t={\mathcal F}_s \;\;\forall s\ge 0.\]
For various notions, definitions and standard results on stochastic integrals, we refer the reader to Jacod \cite{J78} or Protter \cite{P}.
For a local martingale $M$, let ${\mathbb L}^1_m(M)$ be the class of predictable processes $f$ such that there exists a sequence of stopping times $\sigma_k\uparrow\infty$ with
\[{{\sf E}}[\{\int_0^{\sigma_k}f^2_s{\hspace{0.6pt}d\hspace{0.1pt}} [M,M]_s\}^\frac{1}{2}]<\infty\]
and for such an $f$, $N=\int f{\hspace{0.6pt}d\hspace{0.1pt}} M$ is defined and is a local martingale.
Let ${\mathbb M}$ denote the class of martingales and for $M^1,M^2,\ldots , M^d\in{\mathbb M}$ let
\[{\mathbb C}(M^1,M^2,\ldots , M^d)=\{Z\in{\mathbb M}\,:\,Z_t=Z_0+\sum_{j=1}^d\int_0^t f^j_s{\hspace{0.6pt}d\hspace{0.1pt}} M^j_s,\;f^j\in{\mathbb L}^1_m(M^j)\}\]
and for $T<\infty$, let
\[\tilde{{\mathbb K}}_T(M^1,M^2,\ldots , M^d)=\{Z_T\,:\, Z\in {\mathbb C}(M^1,M^2,\ldots , M^d)\}.\]
For the case $d=1$, Yor \cite{Yor} proved that $\tilde{{\mathbb K}}_T$ is a closed subspace of ${\mathbb L}^1(\Omega,{\mathcal F},{{\sf P}})$. The problem in the case $d>1$ is that in general $\tilde{{\mathbb K}}_T(M^1,M^2,\ldots , M^d)$ need not be closed. Jacod-Yor \cite{JY} gave an example where $M^1,M^2$ are continuous square integrable martingales and $\tilde{{\mathbb K}}_T(M^1,M^2)$ is not closed.
For martingales $M^1,M^2,\ldots ,M^d$, let
\[{\mathbb F}(M^1,\ldots , M^d)=\{Z\in{\mathbb M}\,: Z_t=Z_0+\int_0^t f_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s, \,f\in{\mathbb L}^1_m(Y),\,Y\in {\mathbb C}(M^1,\ldots , M^d)\}.\]
Let
\[{\mathbb K}_T(M^1,M^2,\ldots , M^d)=\{Z_T\,:\,Z\in{\mathbb F}(M^1,M^2,\ldots , M^d)\}.\]
The main result of the next section is
\begin{theorem} \label{aztm1} Let $M^1,M^2,\ldots ,M^d$ be martingales. Then ${\mathbb K}_T(M^1,M^2,\ldots , M^d)$ is closed in ${\mathbb L}^1(\Omega,{\mathcal F}, {{\sf P}})$. \end{theorem}
This would be deduced from
\begin{theorem} \label{aztm2} Let $M^1,M^2,\ldots ,M^d$ be martingales and $Z^n\in {\mathbb F}(M^1,M^2,\ldots , M^d)$ be such that ${{\sf E}}[\lvert Z^n_t-Z_t\rvert ]\rightarrow 0$ for all $t$. Then $Z\in {\mathbb F}(M^1,M^2,\ldots , M^d)$.
\end{theorem}
When $M^1,M^2,\ldots ,M^d$ are square integrable martingales, the analogue of Theorem \ref{aztm1} for ${\mathbb L}^2$ follows from the work of Davis-Varaiya \cite{DV}. However, for the EMM characterisation via integral representation, one needs the ${\mathbb L}^1$ version, which we deduce using a change of measure technique.
We will need the Burkholder-Davis-Gundy inequality (see \cite{Mey}) (for $p=1$) which states that there exist universal
constants $c^{1}, c^{2}$ such that for all martingales $M$ and $T<\infty$,
\[
c^{1}{{\sf E}} [ ([M,M ]_T)^{\frac{1}{2}} ]\le {{\sf E}} [\sup_{0\le t\le T} \lvert M_t \rvert ]\le c^{2}{{\sf E}} [ ([M,M ]_T)^{\frac{1}{2}}].\]
After proving Theorem \ref{aztm1}, in the next section we will introduce sigma-martingales and give some elementary properties. Then we come to the main theorem on integral representation of martingales. This is followed by the second fundamental theorem of asset pricing.
\section{Proof of Theorem \ref{aztm1}}
We begin with a few auxiliary results.
In this section, we fix martingales $M^1,M^2,\ldots ,M^d$.
\begin{lemma}\label{azl0} Let
{\small
\[{\mathbb C}_b(M^1,\ldots , M^d)=\{Z\in{\mathbb M}\,:\,Z_t=Z_0+\textstyle\sum_{j=1}^d\int_0^t f^j_s{\hspace{0.6pt}d\hspace{0.1pt}} M^j_s,\;f^j\text{ bounded predictable },1\le j\le d\},\]
\[{\mathbb F}_b(M^1,\ldots , M^d)=\{Z\in{\mathbb M}\,: Z_t=Z_0+\textstyle\int_0^t f_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s, \,f\in{\mathbb L}^1_m(Y),\,Y\in {\mathbb C}_b(M^1,\ldots , M^d)\}.\]}
Then ${\mathbb F}_b(M^1,\ldots , M^d)={\mathbb F}(M^1,\ldots , M^d)$.
\end{lemma}
\noindent{\sc Proof : } Since bounded predictable processes belong to ${\mathbb L}^1_m(N)$ for every martingale $N$, it follows that ${\mathbb C}_b(M^1,\ldots , M^d)\subseteq {\mathbb C}(M^1,\ldots , M^d)$ and hence ${\mathbb F}_b(M^1,\ldots , M^d)\subseteq {\mathbb F}(M^1,\ldots , M^d)$.
For the other part, let $Z\in {\mathbb F}$ be given by
\[Z_t= Z_0+\int_0^t f_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s , \,f\in{\mathbb L}^1_m(Y)\]
where
\[Y_t=\sum_{j=1}^d\int_0^t g^j_s{\hspace{0.6pt}d\hspace{0.1pt}} M^j_s\]
with $g^j\in{\mathbb L}^1_m(M^j)$. Let $\xi_s=1+\sum_{j=1}^d\lvert g^j_s\rvert$, $h^j_s=\frac{g^j_s}{\xi_s}$ and
\[V_t=\sum_{j=1}^d\int_0^t h^j_s{\hspace{0.6pt}d\hspace{0.1pt}} M^j_s.\]
Since $h^j$ are bounded, it follows that $V\in {\mathbb C}_b(M^1,M^2,\ldots , M^d)$.
Using $g^j_s=\xi_sh^j_s$ and $g^j\in{\mathbb L}^1_m(M^j)$, it follows that $\xi\in{\mathbb L}^1_m(V)$ and
\[Y_t=\int_0^t\xi_s{\hspace{0.6pt}d\hspace{0.1pt}} V_s.\]
Since $f\in{\mathbb L}^1_m(Y)$, it follows that $f\xi\in{\mathbb L}^1_m(V)$ and
$\int f{\hspace{0.6pt}d\hspace{0.1pt}} Y=\int f\xi {\hspace{0.6pt}d\hspace{0.1pt}} V$.
\qed
\begin{lemma} \label{azl1}Let $Z\in{\mathbb M}$ be such that there exists a sequence of stopping times $\sigma_k\uparrow\infty$ with ${{\sf E}}_{{\sf P}}[\sqrt{[Z,Z]_{\sigma_k}}]<\infty$ and $X^k\in {\mathbb F}(M^1,M^2,\ldots , M^d)$ where $X^k_t=Z_{t\wedge\sigma_k}$. Then $Z\in{\mathbb F}(M^1,M^2,\ldots , M^d)$.
\end{lemma}
\noindent{\sc Proof : } Let $X^k=\int f^k{\hspace{0.6pt}d\hspace{0.1pt}} Y^k$ for $k\ge 1$ with $f^k\in{\mathbb L}^1_m(Y^k)$ and $Y^k\in{\mathbb C}_b(M^1,M^2,\ldots , M^d)$. Let $\phi^{k,j}$ be bounded predictable processes such that
\[Y^k_t=\sum_{j=1}^d\int_0^t\phi^{k,j}_s{\hspace{0.6pt}d\hspace{0.1pt}} M^j_s.\]
Let $c_k>0$ be a common bound for $\phi^{k,1},\phi^{k,2},\ldots ,\phi^{k,d}$. With the convention $\sigma_0=0$, let us define $\eta^j,f$ by
\[\eta^j_t=\sum_{k=1}^\infty \frac{1}{c_k}\phi^{k,j}_t{\mathbf 1}_{(\sigma_{k-1},\sigma_k]}(t),\]
\[f_t=\sum_{k=1}^\infty c_kf^k_t{\mathbf 1}_{(\sigma_{k-1},\sigma_k]}(t),\]
\[Y_t=\sum_{j=1}^d \int_0^t \eta^j_s{\hspace{0.6pt}d\hspace{0.1pt}} M^j_s .\]
By definition, $\eta^j$ is bounded by 1 for every $j$ and thus
$Y\in {\mathbb C}_b(M^1,M^2,\ldots , M^d)$.
We can note that
\[\begin{split}
Z_{t\wedge\sigma_k}- Z_{t\wedge\sigma_{k-1}}&=X^k_{t\wedge\sigma_k}-X^k_{t\wedge\sigma_{k-1}}\\
&=\int_0^tf^k_s{\mathbf 1}_{(\sigma_{k-1},\sigma_k]}(s){\hspace{0.6pt}d\hspace{0.1pt}} Y^k_s\\
&=\int_0^tf_s{\mathbf 1}_{(\sigma_{k-1},\sigma_k]}(s){\hspace{0.6pt}d\hspace{0.1pt}} Y_s.\end{split}
\]
Thus
\[Z_{t\wedge\sigma_k}=Z_0+\int_0^t{\mathbf 1}_{[0,\sigma_k]}(s)f_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s\]
and hence
\[[Z,Z]_{\sigma_k}=\int_0^{\sigma_k}(f_s)^2{\hspace{0.6pt}d\hspace{0.1pt}} [Y,Y]_s.\]
Since by assumption, for all $k$
\[{{\sf E}}_{{\sf P}}[\sqrt{[Z,Z]_{\sigma_k}}\;]<\infty\]
it follows that $f\in{\mathbb L}^1_m(Y)$.
This proves the required result.
\qed
\begin{lemma} \label{azl2}Let $Z^n\in{\mathbb M}$ be such that ${{\sf E}}[\lvert Z^n_t-Z_t\rvert ]\rightarrow 0$ for all $t$. Then there exists a sequence of stopping times $\sigma_k\uparrow\infty$ and a subsequence $\{n^j\}$ such that each $k\ge 1$,
\[{{\sf E}}[\sqrt{[Z,Z]_{\sigma_k }}]<\infty\] and writing $Y^j=Z^{n^j}$,
\begin{equation}\label{az1}{{\sf E}}[\sqrt{[Y^j-Z,Y^j-Z]_{\sigma_k }}\;]\rightarrow 0 \text{ as }j\uparrow\infty.\end{equation}
\end{lemma}
\noindent{\sc Proof : } Let $n^0=0$. For each $j$, ${{\sf E}}[\lvert Z^n_j-Z_j\rvert ]\rightarrow 0$ as $n\rightarrow\infty$ and hence we can choose $n^j>n^{j-1}$ such that
\[{{\sf E}}[\lvert Z^{n^j}_j-Z_j\rvert ]\le 2^{-j}.\]
Then using Doob's maximal inequality we have
\[{{\sf P}}(\sup_{t\le j}\lvert Z^{n^j}_t-Z_t\rvert \ge \frac{1}{j^2})\le \frac{j^2}{2^j}.\]
Since $\sum_j j^2 2^{-j}<\infty$, the Borel-Cantelli lemma implies that almost surely, $\sup_{t\le j}\lvert Z^{n^j}_t-Z_t\rvert < \frac{1}{j^2}$ for all but finitely many $j$. As a consequence, writing $Y^j=Z^{n^j}$, we have
\begin{equation}\label{az31}
\eta_t=\sum_{j=1}^\infty \sup_{s\le t} \lvert Y^j_s-Z_s\rvert<\infty \;\;a.s. \text{ for all }t<\infty.\end{equation}
Now define
\[U_t=\lvert Z_t\rvert+\sum_{j=1}^\infty \lvert Y^j_t-Z_t\rvert.\]
In view of \eqref{az31}, it follows that $U$ is an r.c.l.l.\ adapted process. For any stopping time $\tau\le m$, we have
\[\begin{split}
{{\sf E}}[U_\tau]&= {{\sf E}}[\lvert Z_\tau\rvert]+ \sum_{j=1}^\infty {{\sf E}}[\lvert Y^j_\tau-Z_\tau\rvert]\\
&\le {{\sf E}}[\lvert Z_m\rvert]+\sum_{j=1}^\infty {{\sf E}}[\lvert Y^j_m-Z_m\rvert]\\
&\le {{\sf E}}[\lvert Z_m\rvert]+\sum_{j=1}^m {{\sf E}}[\lvert Y^j_m-Z_m\rvert]+\sum_{j=m+1}^\infty 2^{-j} \\
&<\infty.
\end{split}\]
Here we have used that $\tau\le m$ and that, $Z$ and $Y^j-Z$ being martingales, $\lvert Z\rvert$ and $\lvert Y^j-Z\rvert$ are sub-martingales. Now defining
\[\sigma_k=\inf\{t\,:\;U_t\ge k\text{ or } U_{t-}\ge k\}\wedge k\]
it follows that $\sigma_k$ are bounded stopping times increasing to $\infty$ with
\[
\sup_{s\le \sigma_k} U_s \le k+ U_{\sigma_k}
\]
and hence
\begin{equation}\label{az31w}
{{\sf E}}[\sup_{s\le \sigma_k} U_s ]<\infty.
\end{equation}
Thus, for each fixed $k$, ${{\sf E}}[\sup_{s\le \sigma_k} \lvert Z_s\rvert]<\infty$, and thus by the
Burkholder-Davis-Gundy inequality ($p=1$ case), we have ${{\sf E}}[\sqrt{[Z,Z]_{\sigma_k }}]<\infty$.
In view of \eqref{az31},
\[\lim_{j\rightarrow\infty}\sup_{s\le \sigma_k} \lvert Y^j_s-Z_s\rvert= 0\;\;\;\;a.s.,\]
and this supremum is dominated by $\sup_{s\le \sigma_k} U_s$, which is integrable by \eqref{az31w}. Thus, by the dominated convergence theorem, we have
\[
\lim_{j\rightarrow\infty}{{\sf E}}[\sup_{s\le \sigma_k} \lvert Y^j_s-Z_s\rvert]= 0. \]
The result \eqref{az1} now follows from Burkholder-Davis-Gundy inequality ($p=1$ case).
\qed
\begin{lemma} \label{azl3}Let $V\in{\mathbb F}(M^1,M^2,\ldots , M^d)$ and $\tau$ be a bounded stopping time such that ${{\sf E}}[\sqrt{[V,V]_{\tau }}\;]<\infty$. Then for $\epsilon>0$, there exists $U\in
{\mathbb C}_b(M^1,M^2,\ldots , M^d)$ such that
\[ {{\sf E}}[\sqrt{[V-U,V-U]_{\tau }}\;]\le\epsilon.\]
\end{lemma}
\noindent{\sc Proof : } Let $V=\int f{\hspace{0.6pt}d\hspace{0.1pt}} X$ where $f\in{\mathbb L}^1_m(X)$ and $X\in{\mathbb C}_b(M^1,M^2,\ldots , M^d)$. Since
\[[V,V]_t=\int_0^t\lvert f_s\rvert^2{\hspace{0.6pt}d\hspace{0.1pt}}[X,X]_s,\]
the assumption on $V$ gives
\begin{equation}\label{az33}
{{\sf E}}[\sqrt{\textstyle\int_0^\tau\lvert f_s\rvert^2{\hspace{0.6pt}d\hspace{0.1pt}}[X,X]_s }]<\infty.
\end{equation}
Defining $f^k_s=f_s{\mathbb I}nd_{\{\lvert f_s\rvert\le k\}}$, let
\[U^k=\int f^k{\hspace{0.6pt}d\hspace{0.1pt}} X.\]
Since $X\in {\mathbb C}_b(M^1,M^2,\ldots , M^d)$ and $f^k$ is bounded, it follows that
\[U^k\in{\mathbb C}_b(M^1,M^2,\ldots , M^d).\]
Note that as $k\rightarrow\infty$,
\[{{\sf E}}[\sqrt{[V-U^k,V-U^k]_{\tau }}]={{\sf E}}[\sqrt{\textstyle\int_0^\tau\lvert f_s\rvert^2{\mathbb I}nd_{\{\lvert f_s\rvert> k\}}{\hspace{0.6pt}d\hspace{0.1pt}}[X,X]_s }]\rightarrow 0\]
in view of \eqref{az33}. The result now follows by taking $U=U^k$ with $k$ large enough so that ${{\sf E}}[\sqrt{[V-U^k,V-U^k]_{\tau }}]<\epsilon.$
\qed
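The heart of the proof is the truncation $f^k=f{\mathbb I}nd_{\{\lvert f\rvert\le k\}}$ together with dominated convergence. A small numerical sketch of this step (hypothetical heavy-tailed samples standing in for $f$ and for the increments of $[X,X]$; plain \texttt{numpy}):

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 2000, 100

# Hypothetical integrand f and quadratic-variation increments d[X,X] >= 0.
f = rng.standard_t(df=3, size=(n_paths, n_steps))          # heavy-ish tails
dQV = rng.exponential(scale=1.0 / n_steps, size=(n_paths, n_steps))

def l1_error(k):
    # Monte Carlo estimate of E[ sqrt( int_0^tau |f|^2 1_{|f|>k} d[X,X] ) ].
    tail = np.where(np.abs(f) > k, f**2, 0.0) * dQV
    return np.sqrt(tail.sum(axis=1)).mean()

errors = [l1_error(k) for k in (1, 2, 4, 8, 16)]
# Pathwise the tail integral decreases to 0 as k grows, so (dominated
# convergence) the L^1 error is non-increasing and tends to 0.
assert all(a >= b for a, b in zip(errors, errors[1:]))
assert errors[-1] < 0.3 * errors[0]
```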
\begin{lemma} \label{azl4} Suppose $Z\in {\mathbb M}$ and $\tau$ is a bounded stopping time such that ${{\sf E}}[\sqrt{[Z,Z]_{\tau }}\;]<\infty$ and $Z_t=Z_{t\wedge\tau}$.
Let
$U^n\in {\mathbb C}_b(M^1,M^2,\ldots , M^d)$ with $U^n_0=0$ be such that
\[{{\sf E}}[\sqrt{[U^n-Z,U^n-Z]_{\tau }}\;]\le 4^{-n}.\]
Then there exists $X\in{\mathbb C}_b(M^1,M^2,\ldots , M^d)$ and $f\in{\mathbb L}^1_m(X)$ such that
\begin{equation}\label{az66}Z_t=Z_0+\int_0^t f_s{\hspace{0.6pt}d\hspace{0.1pt}} X_s.\end{equation}
\end{lemma}
\noindent{\sc Proof : }
Since $U^n\in{\mathbb C}_b(M^1,M^2,\ldots , M^d)$ with $U^n_0=0$, get bounded predictable processes $\{f^{n,j}:n\ge 1,\,1\le j\le d\}$ such that
\begin{equation}\label{az2}U^n_t=\sum_{j=1}^d\int_0^tf^{n,j}_s{\hspace{0.6pt}d\hspace{0.1pt}} M^j_s.\end{equation}
Without loss of generality, we assume that $U^n_t=U^n_{t\wedge\tau}$ and $f^{n,j}_s=f^{n,j}_s{\mathbb I}nd_{[0,\tau]}(s)$.
Let
\[\zeta=\sum_{n=1}^\infty 2^n\sqrt{[U^n-Z,U^n-Z]_{\tau }}.\]
Then ${{\sf E}}[\zeta]<\infty$ and hence ${{\sf P}}(\zeta<\infty)=1.$ Let
\[\eta=\zeta+\sqrt{[Z,Z]_{\tau }}+\sum_{j=1}^d\sqrt{[M^j,M^j]_{\tau }}.\]
Let $c={{\sf E}}[\exp\{-\eta\}]$ and let ${{\sf Q}}$ be the probability measure on $(\Omega,{\mathcal F})$ defined by
\[\frac{{\hspace{0.6pt}d\hspace{0.1pt}}{{\sf Q}}}{{\hspace{0.6pt}d\hspace{0.1pt}}{{\sf P}}}=\frac{1}{c}\exp\{-\eta\}.\]
Then it follows that $\alpha={{\sf E}}_{{\sf Q}}[\eta^2]<\infty$. Noting that
\[\eta^2\ge [Z,Z]_{\tau }+\sum_{j=1}^d[M^j,M^j]_{\tau }+\sum_{n=1}^\infty 2^{2n}[U^n-Z,U^n-Z]_{\tau }\]
we have ${{\sf E}}_{{\sf Q}}[[Z,Z]_{\tau }]<\infty$ and ${{\sf E}}_{{\sf Q}}[[M^j,M^j]_{\tau }]<\infty$ for $1\le j\le d$. Likewise, ${{\sf E}}_{{\sf Q}}[[U^n-Z,U^n-Z]_{\tau }]<\infty$ and so ${{\sf E}}_{{\sf Q}}[[U^n,U^n]_{\tau }]<\infty$. Note that $Z$, $M^j$ are no longer martingales under ${{\sf Q}}$, but we do not need that.
Let $ \widetilde{\Omega}=[0,\infty)\times\Omega$.
Recall that the predictable $\sigma$-field ${\mathcal P}$ is the smallest $\sigma$-field on $ \widetilde{\Omega}$ with respect to which all continuous adapted processes are measurable. We will define signed measures $\Gamma_{ij}$ on ${\mathcal P}$ as follows: for $E\in{\mathcal P}$, $1\le i,j\le d$ let
\[\Gamma_{ij}(E)=\int_{\Omega}\int_0^\tau {\mathbb I}nd_E(s,\omega){\hspace{0.6pt}d\hspace{0.1pt}} [M^i,M^j]_s(\omega){\hspace{0.6pt}d\hspace{0.1pt}}{{\sf Q}}(\omega).\]
Let $\Lambda=\sum_{j=1}^d\Gamma_{jj}$. From the properties of the quadratic variations $[M^i,M^j]$, it follows that for all $E\in {\mathcal P}$, the matrix $((\Gamma_{ij}(E)))$ is non-negative definite. Further, $\Gamma_{ij}$ is absolutely continuous w.r.t. $\Lambda$ for all $i,j$. It follows (see appendix) that we can get predictable processes $c^{ij}$ such that
\[\frac{{\hspace{0.6pt}d\hspace{0.1pt}}\Gamma_{ij}}{{\hspace{0.6pt}d\hspace{0.1pt}}\Lambda}=c^{ij}\]
and that $C=((c^{ij}))$ is a non-negative definite matrix. By construction $\lvert c^{ij}\rvert \le 1$. We can diagonalise $C$ (i.e.\ obtain a singular value decomposition) in a measurable way (see appendix) to obtain predictable processes $b^{ij}$, $d^j$ such that for all $i,k$ (writing $\delta_{ik}=1$ if $i=k$ and $\delta_{ik}=0$ if $i\neq k$),
\begin{equation}\label{az5}\sum_{j=1}^db^{ij}_sb^{kj}_s=\delta_{ik}\end{equation}
\begin{equation}\label{az6}\sum_{j=1}^db^{ji}_sb^{jk}_s=\delta_{ik}\end{equation}
\begin{equation}\label{az7}\sum_{j,l=1}^db^{ij}_sc^{jl}_sb^{kl}_s=\delta_{ik}d^{i}_s\end{equation}
Since $((c^{ij}_s))$ is non-negative definite, it follows that $d^i_s\ge 0$. For $1\le k\le d$,
let
\[N^k=\sum_{l=1}^d\int b^{kl}_s{\hspace{0.6pt}d\hspace{0.1pt}} M^l.\] Then $N^k$ are ${{\sf P}}$-martingales since the $b^{kl}$ are bounded predictable processes. Indeed, $N^k\in{\mathbb C}_b(M^1,M^2,\ldots , M^d)$. Further, for $i\neq k$
\[{\hspace{0.6pt}d\hspace{0.1pt}}[N^i,N^k]_s=\sum_{j,l=1}^db^{ij}_sb^{kl}_s{\hspace{0.6pt}d\hspace{0.1pt}} [M^j,M^l]_s\]
and hence for any bounded predictable process $h$
\[\begin{split}
{{\sf E}}_{{\sf Q}}[\int_0^\tau h_s{\hspace{0.6pt}d\hspace{0.1pt}}[N^i,N^k]_s]&=\int_{\Omega}\int_0^\tau h_s\sum_{j,l=1}^db^{ij}_sb^{kl}_s{\hspace{0.6pt}d\hspace{0.1pt}} [M^j,M^l]_s{\hspace{0.6pt}d\hspace{0.1pt}}{{\sf Q}}(\omega)\\
&=\int_{\widetilde{\Omega}}h\sum_{j,l=1}^db^{ij}b^{kl}{\hspace{0.6pt}d\hspace{0.1pt}}\Gamma_{jl}\\
&=\int_{\widetilde{\Omega}}h\sum_{j,l=1}^db^{ij}b^{kl}c^{jl}{\hspace{0.6pt}d\hspace{0.1pt}}\Lambda\\
&=0\end{split}
\]
where the last step follows from \eqref{az7}. As a consequence, for bounded predictable $h^i$,
\begin{equation}\label{az10}
{{\sf E}}_{{\sf Q}}[\sum_{i,k=1}^d\int_0^\tau h^i_sh^k_s{\hspace{0.6pt}d\hspace{0.1pt}}[N^i,N^k]_s]={{\sf E}}_{{\sf Q}}[\sum_{k=1}^d\int_0^\tau (h^k_s)^2{\hspace{0.6pt}d\hspace{0.1pt}}[N^k,N^k]_s]
\end{equation}
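At each fixed $(s,\omega)$, the relations \eqref{az5}--\eqref{az7} simply say that the orthogonal matrix $B=((b^{ij}_s))$ diagonalises the symmetric non-negative definite matrix $C=((c^{ij}_s))$. A minimal numerical sketch of this pointwise computation (hypothetical numbers, \texttt{numpy}'s \texttt{eigh}; the measurability of the construction is the point of the appendix and is not captured here):

```python
import numpy as np

# A symmetric non-negative definite C = ((c^{ij})) with |c^{ij}| <= 1,
# standing in for the matrix of densities d Gamma_{ij}/d Lambda at one (s, omega).
A = np.array([[1.0, 0.3], [0.2, 0.5], [0.1, -0.4]])
C = A.T @ A
C = C / np.abs(C).max()                 # normalise so that |c^{ij}| <= 1

eigvals, eigvecs = np.linalg.eigh(C)    # symmetric C => real spectral decomposition
B = eigvecs.T                           # row i of B plays the role of (b^{i1},...,b^{id})
d = eigvals                             # the d^i of (az7)

I = np.eye(C.shape[0])
assert np.allclose(B @ B.T, I)          # (az5): sum_j b^{ij} b^{kj} = delta_{ik}
assert np.allclose(B.T @ B, I)          # (az6): sum_j b^{ji} b^{jk} = delta_{ik}
assert np.allclose(B @ C @ B.T, np.diag(d))   # (az7): B C B^T = diag(d^1,...,d^d)
assert np.all(d >= -1e-12)              # non-negative definiteness => d^i >= 0
```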
Let us observe that \eqref{az10} holds for any predictable processes $\{h^i: 1\le i\le d\}$ provided the RHS is finite: we can first note that it holds for $\tilde{h}^i=h^i{\mathbb I}nd_{\{\lvert h\rvert\le c\}}$ where $\lvert h\rvert=\sum_{i=1}^d\lvert h^i\rvert$, and then let $c\uparrow\infty$.
Note that for $n\ge m$
\[\begin{split}
\sqrt{[U^n-U^m,U^n-U^m]_{\tau }}&\le \sqrt{[U^n-Z,U^n-Z]_{\tau }}+ \sqrt{[U^m-Z,U^m-Z]_{\tau }}\\
&\le 2^{-m}\end{theorem}a\end{split}
\]
and hence
\begin{equation}\label{az11}
{{\sf E}}_{{\sf Q}}[[U^n-U^m,U^n-U^m]_{\tau }]\le 4^{-m}\alpha.\end{equation}
Let us define $g^{n,k}=\sum_{j=1}^df^{n,j}b^{kj}$. Then note that
\begin{equation}\label{az12}
\begin{split}
\sum_{k=1}^d\int g^{n,k}{\hspace{0.6pt}d\hspace{0.1pt}} N^k&=\sum_{k=1}^d\sum_{j=1}^d\int f^{n,j}b^{kj}{\hspace{0.6pt}d\hspace{0.1pt}} N^k\\
&=\sum_{k=1}^d\sum_{j=1}^d\sum_{l=1}^d\int f^{n,j}b^{kj}b^{kl}_s{\hspace{0.6pt}d\hspace{0.1pt}} M^l\\
&=\sum_{j=1}^d\int f^{n,j}{\hspace{0.6pt}d\hspace{0.1pt}} M^j\\
&=U^n \end{split}\end{equation}
where in the last but one step, we have used \eqref{az6}. Noting that for $n\ge m$,
\begin{equation}\label{az13}
{{\sf E}}_{{\sf Q}}[\,[U^n-U^m,U^n-U^m]_\tau\,]={{\sf E}}_{{\sf Q}}[\sum_{k=1}^d\int_0^\tau(g^{n,k}_s-g^{m,k}_s)^2{\hspace{0.6pt}d\hspace{0.1pt}}[N^k,N^k]_s]\end{equation}
and using \eqref{az11}, we conclude
\begin{equation}\label{az14}\begin{split}
{{\sf Q}}(\sum_{k=1}^d\int_0^\tau(g^{n,k}_s-g^{m,k}_s)^2{\hspace{0.6pt}d\hspace{0.1pt}}[N^k,N^k]\ge \frac{1}{m^4})&\le m^4{{\sf E}}_{{\sf Q}}[\,[U^n-U^m,U^n-U^m]_\tau\,]\\
&\le \alpha m^44^{-m}.\end{split}\end{equation}
Since ${{\sf E}}_{{\sf Q}}[\,[M^i,M^i]_\tau]<\infty$ for all $i$ and the $g^{n,i}$ are bounded, it follows that ${{\sf E}}_{{\sf Q}}[\,[N^i,N^i]_\tau]<\infty$. Thus defining a measure $\Theta$ on ${\mathcal P}$ by
\[\Theta(E)=\int_{\Omega}\Big[\sum_{k=1}^d\int_0^\tau{\mathbb I}nd_E(s,\omega){\hspace{0.6pt}d\hspace{0.1pt}}[N^k,N^k]_s(\omega)\Big]{\hspace{0.6pt}d\hspace{0.1pt}}{{\sf Q}}(\omega)\]
we get (using \eqref{az11} and \eqref{az13})
\[\int (g^{m+1,k}-g^{m,k})^2{\hspace{0.6pt}d\hspace{0.1pt}}\Theta\le \alpha 4^{-m}\]
and as a consequence, using the Cauchy-Schwarz inequality, we get
\[\int \sum_{m=1}^\infty\lvert g^{m+1,k}-g^{m,k}\rvert {\hspace{0.6pt}d\hspace{0.1pt}}\Theta\le \sqrt{\Theta(\widetilde{\Omega})\alpha}<\infty.\]
Defining
\[g^k_s=\limsup_{m\rightarrow\infty}g^{m,k}_s\]
it follows that $g^{m,k}\rightarrow g^k$ a.e. $\Theta$ and as a consequence,
taking limit in \eqref{az14} as $n\rightarrow \infty$, we get
\begin{equation}\label{az15}
{{\sf Q}}(\sum_{k=1}^d\int_0^\tau(g^{k}_s-g^{m,k}_s)^2{\hspace{0.6pt}d\hspace{0.1pt}}[N^k,N^k]_s\ge \frac{1}{m^4})
\le m^44^{-m}.\end{equation}
Since ${{\sf Q}}$ and ${{\sf P}}$ are equivalent measures, it follows that
\begin{equation}\label{az16}
{{\sf P}}(\sum_{k=1}^d\int_0^\tau(g^{k}_s-g^{m,k}_s)^2{\hspace{0.6pt}d\hspace{0.1pt}}[N^k,N^k]_s\ge \frac{1}{m^4})
\rightarrow 0 \text{ as }m\rightarrow\infty.\end{equation}
In view of \eqref{az12}, we have for $m\le n$
\begin{equation}\label{az3}[U^n,U^n]_\tau=\sum_{i,j=1}^d\int_0^\tau g^{n,i}_sg^{n,j}_s{\hspace{0.6pt}d\hspace{0.1pt}} [N^i,N^j]_s\end{equation}
and
\begin{equation}\label{az4}[U^n-U^m,U^n-U^m]_\tau=\sum_{i,j=1}^d\int_0^\tau (g^{n,i}_s-g^{m,i}_s)(g^{n,j}_s-g^{m,j}_s){\hspace{0.6pt}d\hspace{0.1pt}} [N^i,N^j]_s.\end{equation}
Taking limit in \eqref{az3} as $n\rightarrow\infty$, we get (using Fatou's lemma)
\begin{equation}\label{az20}
{{\sf E}}_{{\sf P}}[\sqrt{\textstyle\sum_{i,j=1}^d\int_0^\tau g^{i}_sg^{j}_s{\hspace{0.6pt}d\hspace{0.1pt}} [N^i,N^j]}\,]\le {{\sf E}}_{{\sf P}}[\,\sqrt{[Z,Z]_\tau}\, ]\end{equation}
(since \eqref{az1} implies $ {{\sf E}}_{{\sf P}}[\,\sqrt{[U^n,U^n]_\tau}\, ]\rightarrow {{\sf E}}_{{\sf P}}[\,\sqrt{[Z,Z]_\tau}\, ]$).
Let us define bounded predictable processes $\phi^{j}$, a predictable process $h$ and a ${{\sf P}}$-martingale $X$ as follows:
\begin{equation}\label{az21}
h_s=1+\sum_{i=1}^d\lvert g^{i}_s\rvert\end{equation}
\begin{equation}\label{az22}
\phi^{j}_s=\frac{g^{j}_s}{h_s}
\end{equation}
\begin{equation}\label{az23}
X_t=\sum_{j=1}^d\int_0^t\phi^{j}_s{\hspace{0.6pt}d\hspace{0.1pt}} N^j_s\end{equation}
Since $\phi^{j}$ is predictable and $\lvert \phi^{j}\rvert\le 1$, it follows that $X\in{\mathbb C}_b(M^1,M^2,\ldots,M^d)$ and
\begin{equation}\label{az24}
[X,X]_t=\sum_{j,k=1}^d\int_0^t\phi^{j}_s\phi^{k}_s{\hspace{0.6pt}d\hspace{0.1pt}} [N^j,N^k]_s.\end{equation}
Noting that $g^{j}_s=h_s\phi^{j}_s$ by definition, we conclude using \eqref{az20} that
\[
\int_0^t(h_s)^2{\hspace{0.6pt}d\hspace{0.1pt}} [X,X]_s=\sum_{j,k=1}^d\int_0^tg^{j}_sg^{k}_s{\hspace{0.6pt}d\hspace{0.1pt}} [N^j,N^k]_s
\]
and hence that
\begin{equation}\label{az25}
{{\sf E}}_{{\sf P}}[\sqrt{\textstyle\int_0^\tau(h_s)^2{\hspace{0.6pt}d\hspace{0.1pt}} [X,X]_s}]\le {{\sf E}}_{{\sf P}}[\,\sqrt{[Z,Z]_\tau}\, ]\end{equation}
Since $h=h{\mathbb I}nd_{[0,\tau]}$, we conclude that $h\in{\mathbb L}^1_m(X)$ and $Y=\int h{\hspace{0.6pt}d\hspace{0.1pt}} X$ is a martingale with $Y_t=Y_{t\wedge\tau}$ for all $t$. Observe that
\[ [U^n,X]_t=\sum_{k,j=1}^d\int_0^tg^{n,k}_s\phi^j_s{\hspace{0.6pt}d\hspace{0.1pt}}[N^k,N^j]_s\]
and hence
\[\begin{split}
[U^n,Y]_t&=\int_0^th_s{\hspace{0.6pt}d\hspace{0.1pt}}[U^n,X]_s\\
&=\sum_{k,j=1}^d\int_0^tg^{n,k}_sh_s\phi^j_s{\hspace{0.6pt}d\hspace{0.1pt}}[N^k,N^j]_s\\
&=\sum_{k,j=1}^d\int_0^tg^{n,k}_sg^j_s{\hspace{0.6pt}d\hspace{0.1pt}}[N^k,N^j]_s\end{split}\]
and as a consequence
\[\begin{split}
[U^n-Y,U^n-Y]_t&=[U^n,U^n]_t-2[U^n,Y]_t+[Y,Y]_t\\
&=\sum_{k,j=1}^d\int_0^tg^{n,k}_sg^{n,j}_s{\hspace{0.6pt}d\hspace{0.1pt}}[N^k,N^j]_s-2\sum_{k,j=1}^d\int_0^tg_s^{n,k}g^j_s{\hspace{0.6pt}d\hspace{0.1pt}}[N^k,N^j]_s\\
&\;\;\;\;\;\;\;\;
+\sum_{k,j=1}^d\int_0^tg^{k}_sg^j_s{\hspace{0.6pt}d\hspace{0.1pt}}[N^k,N^j]_s\\
&=\sum_{k,j=1}^d\int_0^t(g^{n,k}_s-g^k_s)(g^{n,j}_s-g^j_s){\hspace{0.6pt}d\hspace{0.1pt}}[N^k,N^j]_s\end{split}\]
and thus
\begin{equation}\label{az26}
{{\sf E}}_{{\sf Q}}[[U^n-Y,U^n-Y]_\tau]={{\sf E}}_{{\sf Q}}[\sum_{k=1}^d\int_0^\tau (g^{n,k}_s-g^k_s)^2{\hspace{0.6pt}d\hspace{0.1pt}}[N^k,N^k]_s]\end{equation}
where we have used \eqref{az10}.
Taking $\liminf$ as $n\rightarrow\infty$ on the RHS in \eqref{az13} (via Fatou's lemma) and using \eqref{az11}, we conclude
\[{{\sf E}}_{{\sf Q}}[\sum_{k=1}^d\int_0^\tau (g^{k}_s-g^{m,k}_s)^2{\hspace{0.6pt}d\hspace{0.1pt}}[N^k,N^k]_s]\le \alpha 4^{-m}\]
and hence
\[{{\sf E}}_{{\sf Q}}[[U^n-Y,U^n-Y]_\tau]\le \alpha 4^{-n}.\]
Thus $[U^n-Y,U^n-Y]_\tau\rightarrow 0$ in ${{\sf Q}}$-probability and hence in ${{\sf P}}$-probability. By assumption, $[U^n-Z,U^n-Z]_\tau\rightarrow 0$ in ${{\sf P}}$-probability.
Since
\[[Y-Z,Y-Z]_\tau\le 2([Y-U^n,Y-U^n]_\tau+[Z-U^n,Z-U^n]_\tau)\]
for every $n$, it follows that
\begin{equation}\label{az27}
[Y-Z,Y-Z]_\tau=0\;\;a.s.\;{{\sf P}}.\end{equation}
Since $Y,Z$ are ${{\sf P}}$-martingales such that $Z_t=Z_{t\wedge\tau}$ and $Y_t=Y_{t\wedge\tau}$, \eqref{az27} implies $Y_t-Y_0=Z_t-Z_0$ for all $t$.
Recall that by construction, $Y_0=0$, $Y=\int h{\hspace{0.6pt}d\hspace{0.1pt}} X$ and $h\in{\mathbb L}^1_m(X)$, $X\in{\mathbb C}_b(M^1,M^2,\ldots , M^d)$. Thus \eqref{az66} holds.
\qed
We now come to the proof of Theorem \ref{aztm2}. Let $Z^n\in {\mathbb F}(M^1,M^2,\ldots , M^d)$ be such that ${{\sf E}}[\lvert Z^n_t-Z_t\rvert ]\rightarrow 0$ for all $t$. We have to show that $Z\in {\mathbb F}(M^1,M^2,\ldots , M^d)$.
Using Lemma \ref{azl2}, get a sequence of stopping times $\sigma_k\uparrow\infty$ and a subsequence $\{n^j\}$ such that $Y^j=Z^{n^j}$ satisfies for each $k\ge 1$, ${{\sf E}}[\sqrt{[Z,Z]_{\sigma_k }}]<\infty$ and
\[{{\sf E}}[\sqrt{[Y^j-Z,Y^j-Z]_{\sigma_k }}\;]\rightarrow 0 \text{ as }j\uparrow\infty.\]
Let us now fix a $k$ and let $\tilde{Z}_t=Z_{t\wedge\sigma_k}$. We will show that $\tilde{Z}\in{\mathbb F}(M^1,M^2,\ldots , M^d)$. This will complete the proof in view of Lemma \ref{azl1}.
For each $n$, using the convergence above, get $j_n$ such that
\[{{\sf E}}[\sqrt{[Y^{j_n}-\tilde{Z},Y^{j_n}-\tilde{Z}]_{\sigma_k }}\;]\le 4^{-n-1}.\]
Taking $V=Y^{j_n}$ and $\epsilon=4^{-n-1}$ in Lemma \ref{azl3}, get $U^n\in {\mathbb C}_b(M^1,M^2,\ldots , M^d)$ such that
\[ {{\sf E}}[\sqrt{[Y^{j_n}-U^n,Y^{j_n}-U^n]_{\sigma_k }}\;]\le 4^{-n-1}.\]
Then we have
\[ {{\sf E}}[\sqrt{[U^n-\tilde{Z},U^n-\tilde{Z}]_{\sigma_k }}\;]\le 4^{-n}\]
with $U^n\in {\mathbb C}_b(M^1,M^2,\ldots , M^d)$.
Thus $\tilde{Z}\in{\mathbb F}(M^1,M^2,\ldots , M^d)$ in view of Lemma \ref{azl4}.
\qed
Now we turn to the proof of Theorem \ref{aztm1}. Let $\xi^n\in{\mathbb G}_T$ be such that $\xi^n\rightarrow\xi$ in ${\mathbb L}^1(\Omega,{\mathcal F}, {{\sf P}})$. Let $\xi^n=X^n_T$ where $X^n\in{\mathbb F}(M^1,M^2,\ldots , M^d)$. Let us define $Z^n_t=X^n_{t\wedge T}$. Then $Z^n\in{\mathbb F}(M^1,M^2,\ldots , M^d)$ and the assumption on $\xi^n$ implies
\[Z^n_t\rightarrow Z_t={{\sf E}}[\xi\mid{\mathcal F}_t]\text{ in }{\mathbb L}^1(\Omega,{\mathcal F}, {{\sf P}})\;\;\forall t.\]
Thus Theorem \ref{aztm2} implies $Z\in {\mathbb F}(M^1,M^2,\ldots , M^d)$ and thus $\xi=Z_T$ belongs to ${\mathbb G}_T$.
\qed
\section{Sigma-martingales}
For a semimartingale $X$, let ${\mathbb L}(X)$ denote the class of predictable processes $f$
such that $X$ admits a decomposition $X=N+A$ with $N$ a local martingale and $A$ a process with paths of finite variation, where $f\in{\mathbb L}^1_m(N)$ and
\begin{equation}\label{ax1}
\int_0^t\lvert f_s\rvert {\hspace{0.6pt}d\hspace{0.1pt}}\lvert A\rvert_s<\infty \;\;a.s.\;\;\forall t<\infty.\end{equation}
Then for $f\in{\mathbb L}(X)$, the stochastic integral $\int f{\hspace{0.6pt}d\hspace{0.1pt}} X$ is defined as $\int f{\hspace{0.6pt}d\hspace{0.1pt}} N+\int f{\hspace{0.6pt}d\hspace{0.1pt}} A$. It can be shown that the definition does not depend upon the decomposition $X=N+A$. See \cite{J78}.
Let $M$ be a martingale, $f\in{\mathbb L}(M)$ and $Z=\int f{\hspace{0.6pt}d\hspace{0.1pt}} M$. Then $Z$ is a local martingale if and only if $f\in{\mathbb L}^1_m(M)$. In answer to a question raised by P. A. Meyer, Chou \cite{Chou} introduced a class $\Sigma_m$ of semimartingales consisting of $Z=\int f{\hspace{0.6pt}d\hspace{0.1pt}} M$ for $f\in{\mathbb L}(M)$. Emery \cite{Em80} constructed an example of $f,M$ such that $f\in{\mathbb L}(M)$ but $Z=\int f{\hspace{0.6pt}d\hspace{0.1pt}} M$ is not a local martingale. Such processes occur naturally in mathematical finance and have been called sigma-martingales by Delbaen and Schachermayer \cite{DS98}.
\definition{A semimartingale $X$ is said to be a sigma-martingale if there exists a $(0,\infty)$ valued predictable process $\phi$ such that $\phi\in{\mathbb L}(X)$ and
\begin{equation}\label{ax3}M_t=\int_0^t \phi_s{\hspace{0.6pt}d\hspace{0.1pt}} X_s\end{equation}
is a martingale.}
Our first observation is:
\begin{lemma}\label{axl5} Every local martingale $N$ is a sigma-martingale.
\end{lemma}
\noindent{\sc Proof : } Let $\eta_n\uparrow\infty$ be a sequence of stopping times such that $N_{t\wedge\eta_n}$ is a martingale, let
\[\sigma_n=\inf\{t\ge 0\,:\;\lvert N_t\rvert\ge n\text{ or }\lvert N_{t-}\rvert\ge n\}\wedge n\]
and let $\tau_n=\sigma_n\wedge\eta_n$; then it follows that $N_{t\wedge\tau_n}$ is a uniformly integrable martingale and $a_n={{\sf E}}[\,[N,N]_{\tau_n}]<\infty$.
Define
\[h_s=\sum_{n=1}^\infty\frac{1}{2^n(1+a_n)}{\mathbb I}nd_{(\tau_{n-1},\tau_n]}.\]
Then $h$ being bounded belongs to ${\mathbb L}(N)$ and $M=\int h{\hspace{0.6pt}d\hspace{0.1pt}} N$ is a local martingale with
\begin{equation}\label{ax4}\sup_{t<\infty}{{\sf E}}[\,[M,M]_t]<\infty.\end{equation}
Thus $M$ is a uniformly integrable martingale. Since $h$ is $(0,\infty)$ valued by definition, it follows that $N$ is a sigma-martingale.
\qed
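For completeness, here is the short computation behind \eqref{ax4} (with the convention $\tau_0=0$): since $h$ is constant on each interval $(\tau_{n-1},\tau_n]$ and $[M,M]_t=\int_0^t h_s^2{\hspace{0.6pt}d\hspace{0.1pt}}[N,N]_s$,
\[
{{\sf E}}[\,[M,M]_t]\le \sum_{n=1}^\infty \frac{{{\sf E}}[\,[N,N]_{\tau_n}]}{4^n(1+a_n)^2}
=\sum_{n=1}^\infty \frac{a_n}{4^n(1+a_n)^2}\le \sum_{n=1}^\infty 4^{-n}=\frac{1}{3},
\]
uniformly in $t$.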
This leads to
\begin{lemma}\label{axl4} A semimartingale $X$ is a sigma-martingale if and only if there exists a uniformly integrable martingale $M$ satisfying \eqref{ax4} and a predictable process $\psi\in{\mathbb L}(M)$ such that
\begin{equation}\label{ax5}X_t=\int_0^t \psi_s{\hspace{0.6pt}d\hspace{0.1pt}} M_s.\end{equation}
\end{lemma}
\noindent{\sc Proof : }
Let $X$ be given by \eqref{ax5} with $M$ a martingale satisfying \eqref{ax4} and $\psi\in{\mathbb L}(M)$. Then defining
\[g_s=\frac{1}{(1+(\psi_s)^2)}, \;\;N_t=\int_0^tg_s{\hspace{0.6pt}d\hspace{0.1pt}} X_s\]
it follows that $N=\int g\psi{\hspace{0.6pt}d\hspace{0.1pt}} M$. Since $g\psi$ is bounded by 1 and $M$ satisfies \eqref{ax4}, it follows that $N$ is a martingale. Thus $X$ is a sigma-martingale.
Conversely, given a sigma-martingale $X$ and a $(0,\infty)$ valued predictable process
$\phi$ such that $N=\int \phi{\hspace{0.6pt}d\hspace{0.1pt}} X$ is a martingale, get $h$ as in Lemma \ref{axl5} and let $M=\int h{\hspace{0.6pt}d\hspace{0.1pt}} N=\int h\phi {\hspace{0.6pt}d\hspace{0.1pt}} X$. Then $M$ is a uniformly integrable martingale that satisfies \eqref{ax4} and $h\phi$ is a $(0,\infty)$ valued predictable process.
\qed
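The boundedness of $g\psi$ used in the first half of the proof is elementary:
\[
\lvert g_s\psi_s\rvert=\frac{\lvert \psi_s\rvert}{1+(\psi_s)^2}\le \frac{1}{2}\le 1 .
\]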
From the definition, it is not obvious that the sum of two sigma-martingales is again a sigma-martingale, but this is so, as the next result shows.
\begin{theorem} Let $X^1,X^2$ be sigma-martingales and $a_1,a_2$ be real numbers. Then $Y=a_1X^1+a_2X^2$ is also a sigma-martingale.
\end{theorem}
\noindent{\sc Proof : } Let $\phi^1,\phi^2$ be $(0,\infty)$ valued predictable processes such that
\[M^i_t=\int_0^t\phi^i_s{\hspace{0.6pt}d\hspace{0.1pt}} X^i_s,\;\;i=1,2\]
are uniformly integrable martingales. Then, writing $\xi=\min(\phi^1,\phi^2)$ and $\eta^i_s=\frac{\xi_s}{\phi^i_s}$, it follows that
\[N^i_t=\int_0^t \eta^i_s{\hspace{0.6pt}d\hspace{0.1pt}} M^i_s=\int_0^t\xi_s{\hspace{0.6pt}d\hspace{0.1pt}} X^i_s\]
are uniformly integrable martingales since $\eta^i$ is bounded by one. Clearly, $Y=a_1X^1+a_2X^2$ is a semimartingale and $\xi\in{\mathbb L}(X^i)$ for $i=1,2$ implies $\xi\in{\mathbb L}(Y)$ and
\[\int_0^t\xi_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s=a_1N^1_t+a_2N^2_t\]
is a uniformly integrable martingale. Since $\xi$ is a $(0,\infty)$ valued predictable process, it follows that $Y$ is a sigma-martingale.
\qed
The following result gives conditions under which a sigma-martingale is a local martingale.
\begin{lemma}\label{axl1} Let $X$ be a sigma-martingale with $X_0=0$. Then $X$ is a local martingale if and only if there exists a sequence of stopping times $\tau_n\uparrow\infty$ such that
\begin{equation}\label{ax7}
{{\sf E}}[\,\sqrt{[X,X]_{\tau_n}}\,]<\infty\;\;\forall n.\end{equation}
\end{lemma}
\noindent{\sc Proof : }
Let $X$ be a sigma-martingale and let $\phi$, $M$ be such that \eqref{ax3} and \eqref{ax4} hold. Setting $\psi_s=\frac{1}{\phi_s}$, as noted above, \eqref{ax5} holds. Then
\[[X,X]_t=\int_0^t(\psi_s)^2{\hspace{0.6pt}d\hspace{0.1pt}} [M,M]_s.
\]
Defining $\psi^k_s=\psi_s{\mathbb I}nd_{\{\lvert \psi_s\rvert\le k\}}$, it follows that
\[X^k_t=\int_0^t\psi^k_s{\hspace{0.6pt}d\hspace{0.1pt}} M_s\]
is a uniformly integrable martingale. Noting that for $k\ge 1$
\[[X-X^k,X-X^k]_t=\int_0^t(\psi_s)^2{\mathbb I}nd_{\{k<\lvert \psi_s\rvert\}}{\hspace{0.6pt}d\hspace{0.1pt}} [M,M]_s\]
the assumption \eqref{ax7} implies that for each $n$ fixed,
\[{{\sf E}}[\,\sqrt{[X-X^k,X-X^k]_{\tau_n}}\,]\rightarrow 0\text{ as }k\rightarrow\infty.\]
The Burkholder-Davis-Gundy inequality ($p=1$) implies
that for each $n$ fixed,
\[{{\sf E}}[\,\sup_{0\le t\le\tau_n}\lvert X_t-X^k_t\rvert\,]\rightarrow 0\text{ as }k\rightarrow\infty\]
and as a consequence $X^{[n]}_t=X_{t\wedge\tau_n}$ is a martingale for all $n$, so that $X$ is a local martingale. Conversely, if $X$ is a local martingale with $X_0=0$, and $\sigma_n$ are stopping times increasing to $\infty$ such that $X_{t\wedge\sigma_n}$ are martingales, then defining $\zeta_n=\inf\{t\,:\,\lvert X_t\rvert\ge n\}$ and
$\tau_n=\sigma_n\wedge\zeta_n$, it follows that ${{\sf E}}[\lvert X_{\tau_n}\rvert]<\infty$ and since
\[\sup_{t\le \tau_n}\lvert X_t\rvert\le n+\lvert X_{\tau_n}\rvert\]
it follows that ${{\sf E}}[\sup_{t\le \tau_n}\lvert X_t\rvert]<\infty$. Thus, \eqref{ax7} holds in view of the Burkholder-Davis-Gundy inequality ($p=1$).
\qed
The previous result gives us:
\begin{corollary} \label{azc1} A bounded sigma-martingale $X$ is a martingale.
\end{corollary}
\noindent{\sc Proof : } Since $X$ is bounded, say by $K$, it follows that jumps of $X$ are bounded by $2K$. Thus jumps of the increasing process $[X,X]$ are bounded by $4K^2$ and thus $X$ satisfies \eqref{ax7} for
\[\tau_n=\inf\{t\ge 0\,:\;[X,X]_t\ge n\}.\]
Thus $X$ is a local martingale and being bounded, it is a martingale.
\qed
Here is a variant of the example given by Emery \cite{Em80} of a sigma-martingale that is not a local martingale. Let $\tau$ be a random variable with exponential distribution (assumed to be $(0,\infty)$ valued without loss of generality) and let $\xi$ with ${{\sf P}}(\xi=1)={{\sf P}}(\xi=-1)=0.5$ be independent of $\tau$. Let
\[M_t=\xi{\mathbb I}nd_{[\tau,\infty)}(t)\]
and ${\mathcal F}_t=\sigma(M_s:s\le t)$. It is easy to see that $M$ is a martingale. Let $f_t=\frac{1}{t}{\mathbb I}nd_{(0,\infty)}(t)$ and $X_t=\int_0^tf_s{\hspace{0.6pt}d\hspace{0.1pt}} M_s$. Then $X$ is a sigma-martingale and
\[[X,X]_t=\frac{1}{\tau^2}{\mathbb I}nd_{[\tau,\infty)}(t).\]
For any stopping time $\sigma$, it can be checked that $\sigma$ is constant on $\{\sigma<\tau\}$, and thus if $\sigma$ is not identically equal to 0, then $\sigma\ge \tau\wedge a$ for some $a>0$. Thus, $\sqrt{[X,X]_\sigma}\ge\frac{1}{\tau}{\mathbb I}nd_{\{\tau<a\}}$. It follows that for any stopping time $\sigma$, not identically zero, ${{\sf E}}[\sqrt{[X,X]_\sigma}]=\infty$ and so $X$ is not a local martingale.
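A numerical sanity check on this example (the level $a=1$ is an arbitrary choice): for $\sigma=\tau\wedge 1$ one has $\sqrt{[X,X]_{\sigma}}=\tau^{-1}{\mathbb I}nd_{\{\tau<1\}}$, whose expectation $\int_0^1 t^{-1}e^{-t}{\hspace{0.6pt}d\hspace{0.1pt}}t=\infty$ shows up as truncated integrals growing without bound (plain Python, midpoint rule):

```python
import math

def truncated_expectation(eps, n=200_000):
    # Midpoint-rule approximation of E[(1/tau) 1_{eps <= tau < 1}]
    #   = integral over [eps, 1] of e^{-t}/t dt, for tau ~ Exp(1).
    h = (1.0 - eps) / n
    total = 0.0
    for i in range(n):
        t = eps + (i + 0.5) * h
        total += math.exp(-t) / t * h
    return total

vals = [truncated_expectation(10.0 ** (-p)) for p in (1, 3, 5)]
assert vals[0] < vals[1] < vals[2]   # grows as the cutoff eps shrinks ...
assert vals[2] > vals[0] + 5.0       # ... roughly like log(1/eps): no finite limit
```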
The next result shows that $\int f{\hspace{0.6pt}d\hspace{0.1pt}} X$ is a sigma-martingale if $X$ is one.
\begin{lemma}\label{axl7} Let $X$ be a sigma-martingale, $f\in{\mathbb L}(X)$ and let $U=\int f{\hspace{0.6pt}d\hspace{0.1pt}} X$. Then $U$ is a sigma-martingale.
\end{lemma}
\noindent{\sc Proof : } Let $M$ be a martingale and $\psi\in{\mathbb L}(M)$ be such that $X=\int \psi {\hspace{0.6pt}d\hspace{0.1pt}} M$ (as in Lemma \ref{axl4}). Now $U=\int f{\hspace{0.6pt}d\hspace{0.1pt}} X=\int f\psi {\hspace{0.6pt}d\hspace{0.1pt}} M$. Thus, once again invoking Lemma \ref{axl4}, one concludes that $U$ is a sigma-martingale.
\qed
We now introduce the class of equivalent sigma-martingale measures (ESMM) and show that it is a convex set.
\definition{Let $X^1,\ldots ,X^d$ be r.c.l.l. adapted processes and let ${\mathbb E}^s(X^1,\ldots ,X^d)$ denote the class of probability measures ${{\sf Q}}$ such that $X^1,\ldots ,X^d$ are sigma-martingales w.r.t. ${{\sf Q}}$.}
Let
\[{\mathbb E}^s_{{\sf P}}(X^1,\ldots ,X^d)=\{{{\sf Q}}\in {\mathbb E}^s(X^1,\ldots ,X^d)\,:\, {{\sf Q}}\text{ is equivalent to }{{\sf P}}\}\]
and
\[\tilde{{\mathbb E}}^s_{{\sf P}}(X^1,\ldots ,X^d)=\{{{\sf Q}}\in {\mathbb E}^s(X^1,\ldots ,X^d)\,:\, {{\sf Q}}\text{ is absolutely continuous w.r.t. }{{\sf P}}\}.\]
\begin{theorem} \label{esmm} For semimartingales $X^1,\ldots ,X^d$, ${\mathbb E}^s(X^1,\ldots ,X^d)$, ${\mathbb E}^s_{{\sf P}}(X^1,\ldots ,X^d)$ and $\tilde{{\mathbb E}}^s_{{\sf P}}(X^1,\ldots ,X^d)$ are convex sets.
\end{theorem}
\noindent{\sc Proof : } Let us consider the case $d=1$. Let ${{\sf Q}}^1,{{\sf Q}}^2\in{\mathbb E}^s(X)$. Let $\phi^1,\phi^2$ be $(0,\infty)$ valued predictable processes such that
\[M^i_t=\int_0^t \phi^i_s{\hspace{0.6pt}d\hspace{0.1pt}} X_s\]
are martingales under ${{\sf Q}}^i$, $i=1,2$. Let $\phi_s=\min(\phi^1_s,\phi^2_s)$ and let
\[M_t=\int_0^t\phi_s{\hspace{0.6pt}d\hspace{0.1pt}} X_s.\]
Noting that $M_t=\int_0^t\xi_s^i{\hspace{0.6pt}d\hspace{0.1pt}} M^i_s $ where $\xi^i_s={\phi_s}(\phi_s^i)^{-1}$ is bounded, it follows that $M$ is a martingale under ${{\sf Q}}^i$, $i=1,2$. Now if ${{\sf Q}}$ is any convex combination of ${{\sf Q}}^1,{{\sf Q}}^2$, it follows that $M$ is a ${{\sf Q}}$-martingale and hence $X_t=\int_0^t(\phi_s)^{-1}{\hspace{0.6pt}d\hspace{0.1pt}} M_s$ is a sigma-martingale under ${{\sf Q}}$. Thus
${\mathbb E}^s(X)$ is a convex set. Since
${\mathbb E}^s(X^1,\ldots ,X^d)=\cap_{j=1}^d{\mathbb E}^s(X^j)$
it follows that ${\mathbb E}^s(X^1,\ldots ,X^d)$ is convex. Convexity of ${\mathbb E}^s_{{\sf P}}(X^1,\ldots ,X^d)$ and $\tilde{{\mathbb E}}^s_{{\sf P}}(X^1,\ldots ,X^d)$ follows from this.
\qed
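The key linearity used above, namely that the martingale property of a fixed process is preserved under convex combinations of measures, can be checked on a toy finite example (hypothetical two-period random walk; exact arithmetic with the \texttt{fractions} module):

```python
from fractions import Fraction as F
from itertools import product

steps = (-1, 0, 1)                       # omega = (s1, s2): two moves in {-1,0,1}
outcomes = list(product(steps, steps))

def product_measure(p1, p2):
    """Symmetric step laws (p, 1-2p, p) at times 1 and 2."""
    law = lambda p: {-1: p, 0: 1 - 2 * p, 1: p}
    return {(s1, s2): law(p1)[s1] * law(p2)[s2] for (s1, s2) in outcomes}

def is_martingale(Q):
    # X_0 = 0, X_1 = s1, X_2 = s1 + s2; check E_Q[X_1] = 0 and, on each
    # atom {s1} of F_1, that E_Q[X_2 | s1] = X_1.
    if sum(q * w[0] for w, q in Q.items()) != 0:
        return False
    for s1 in steps:
        atom = {w: q for w, q in Q.items() if w[0] == s1}
        mass = sum(atom.values())
        if sum(q * (w[0] + w[1]) for w, q in atom.items()) != mass * s1:
            return False
    return True

Q1 = product_measure(F(1, 4), F(1, 3))
Q2 = product_measure(F(1, 8), F(2, 5))
lam = F(1, 3)
mix = {w: lam * Q1[w] + (1 - lam) * Q2[w] for w in outcomes}

assert is_martingale(Q1) and is_martingale(Q2) and is_martingale(mix)
```

This only illustrates the linear step ($M$ remains a ${{\sf Q}}$-martingale for any mixture ${{\sf Q}}$); the content of the theorem is the reduction of the sigma-martingale case to this step via the common integrand $\phi$.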
In analogy with the definition of ${\mathbb C}$ for martingales $M^1,\ldots ,M^d$,
for sigma-martingales $M^1,M^2,\ldots , M^d$ let
\[{\mathbb C}(M^1,M^2,\ldots , M^d)=\{Z\in{\mathbb M}\,:\,Z_t=Z_0+\sum_{j=1}^d\int_0^t f^j_s{\hspace{0.6pt}d\hspace{0.1pt}} M^j_s,\;f^j\in{\mathbb L}(M^j)\}\]
\[{\mathbb F}(M^1,\ldots , M^d)=\{Z\in{\mathbb M}\,: Z_t=Z_0+\int_0^t f_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s, \,f\in{\mathbb L}(Y),\,Y\in {\mathbb C}(M^1,\ldots , M^d)\}.\]
\begin{lemma}\label{ayl9} Let $M^1,\ldots, M^d$ be
sigma-martingales and let $\phi^j$ be $(0,\infty)$ valued predictable processes such that
\begin{equation}\label{ay9}N^j_t=\int_0^t\phi^j_s{\hspace{0.6pt}d\hspace{0.1pt}} M^j_s\end{equation}
are uniformly integrable martingales. Then
\begin{equation}\label{ay10}
{\mathbb C}(M^1,M^2,\ldots , M^d)={\mathbb C}(N^1,N^2,\ldots ,N^d)
\end{equation}
\begin{equation}\label{ay10a}
{\mathbb F}(M^1,M^2,\ldots , M^d)={\mathbb F}(N^1,N^2,\ldots ,N^d).
\end{equation}
\end{lemma}
\noindent{\sc Proof : }
Let $\psi^j_s=(\phi^j_s)^{-1}$. Note that $M^j=\int \psi^j{\hspace{0.6pt}d\hspace{0.1pt}} N^j$.
Then if $Y\in {\mathbb C}(M^1,M^2,\ldots , M^d)$ is given by
\begin{equation}\label{ay11}
Y_t=\sum_{j=1}^d\int_0^tf^j_s{\hspace{0.6pt}d\hspace{0.1pt}} M^j_s,\;\;f^j\in{\mathbb L}(M^j)\end{equation}
then defining $g^j=f^j\psi^j$, we can see that $g^j\in{\mathbb L}(N^j)$ and $\int f^j{\hspace{0.6pt}d\hspace{0.1pt}} M^j=\int g^j{\hspace{0.6pt}d\hspace{0.1pt}} N^j$. Thus
\begin{equation}\label{ay12}
Y_t=\sum_{j=1}^d\int_0^tg^j_s{\hspace{0.6pt}d\hspace{0.1pt}} N^j_s,\;\;g^j\in{\mathbb L}(N^j).\end{equation}
Similarly, if $Y\in{\mathbb C}(N^1,N^2,\ldots ,N^d)$ is given by \eqref{ay12}, then defining $f^j=\phi^jg^j$, we can see that $Y$ satisfies \eqref{ay11}. Thus \eqref{ay10} is true. Now \eqref{ay10a} follows from \eqref{ay10}.
\qed
\section{Integral Representation w.r.t. Martingales}
Let $M^1,\ldots, M^d$ be sigma-martingales.
\definition{ A sigma-martingale $N$ is said to have an integral representation w.r.t.
$M^1,\ldots, M^d$ if $N\in {\mathbb F}(M^1,M^2,\ldots , M^d)$, in other words, if
$\exists Y\in {\mathbb C}(M^1,M^2,\ldots , M^d)$ and $f\in{\mathbb L}(Y)$ such that
\begin{equation}\label{ay21}
N_t=N_0+\int_0^tf_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s\,\;\;\forall t.
\end{equation}
}
Here is another observation needed later.
\begin{lemma}\label{cal1}
Let $M$ be a ${{\sf P}}$-martingale. Let ${{\sf Q}}$ be a probability measure equivalent to ${{\sf P}}$. Let $\xi=\frac{{\hspace{0.6pt}d\hspace{0.1pt}}{{\sf Q}}}{{\hspace{0.6pt}d\hspace{0.1pt}}{{\sf P}}}$ and let $Z$ be the r.c.l.l. martingale given by $Z_t={{\sf E}}_{{\sf P}}[\xi\mid{\mathcal F}_t]$. Then
\begin{enumerate}[(i)]
\item $M$ is a ${{\sf Q}}$-martingale if and only if $MZ$ is a ${{\sf P}}$-martingale.
\item $M$ is a ${{\sf Q}}$-local martingale if and only if $MZ$ is a ${{\sf P}}$-local martingale.
\item If $M$ is a ${{\sf Q}}$-local martingale then $[M,Z]$ is a ${{\sf P}}$-local martingale.
\item If $M$ is a ${{\sf Q}}$-sigma-martingale then $[M,Z]$ is a ${{\sf P}}$-sigma-martingale.
\end{enumerate}
\end{lemma}
\noindent{\sc Proof : } For a stopping time $\sigma$,
let $\eta$ be a non-negative ${\mathcal F}_\sigma$ measurable random variable. Then
\[{{\sf E}}_{{\sf Q}}[\eta]={{\sf E}}_{{\sf P}}[\eta \xi]={{\sf E}}_{{\sf P}}[\eta\, {{\sf E}}_{{\sf P}}[\xi\mid{\mathcal F}_\sigma]\,]={{\sf E}}_{{\sf P}}[\eta Z_\sigma].\]
Thus $M_\sigma$ is ${{\sf Q}}$-integrable if and only if $M_\sigma Z_\sigma$ is ${{\sf P}}$-integrable.
Further, for
any stopping time $\sigma$,
\begin{equation}\label{bm60}
{{\sf E}}_{{\sf Q}}[M_\sigma]={{\sf E}}_{{\sf P}}[M_\sigma Z_\sigma].\end{equation}
Thus (i) follows from the observation that an integrable adapted process $N$ is a martingale if and only if ${{\sf E}}[N_\sigma]={{\sf E}}[N_0]$ for all bounded stopping times $\sigma$. For (ii), if $M$ is a ${{\sf Q}}$-local martingale, then get stopping times $\tau_n\uparrow\infty$ such that for each $n$, $M_{t\wedge\tau_n}$ is a martingale. Then we have
\begin{equation}\label{bm61}{{\sf E}}_{{\sf Q}}[M_{\sigma\wedge\tau_n}]=
{{\sf E}}_{{\sf P}}[M_{\sigma\wedge\tau_n} Z_{\sigma\wedge\tau_n}].
\end{equation}
Thus $M_{t\wedge\tau_n}Z_{t\wedge\tau_n}$ is a ${{\sf P}}$-martingale and thus $MZ$
is a ${{\sf P}}$- local martingale. The converse follows similarly.
For (iii), note that $M_tZ_t=M_0Z_0+\int_0^tM_{s-}{\hspace{0.6pt}d\hspace{0.1pt}} Z_s+\int_0^tZ_{s-}{\hspace{0.6pt}d\hspace{0.1pt}} M_s +[M,Z]_t$ and the two stochastic integrals are ${{\sf P}}$-local martingales, so the result follows from (ii). For (iv), representing the ${{\sf Q}}$-sigma-martingale $M$ as $M=\int \psi {\hspace{0.6pt}d\hspace{0.1pt}} N$, where $N$ is a ${{\sf Q}}$-martingale and $\psi\in{\mathbb L}(N)$, we get
\[[M,Z]_t=\int_0^t\psi_s{\hspace{0.6pt}d\hspace{0.1pt}} [N,Z]_s.\]
By (iii), $[N,Z]$ is a ${{\sf P}}$-local martingale and hence $[M,Z]$ is a ${{\sf P}}$-sigma-martingale.
\qed
The main result on integral representation is:
\begin{theorem} \label{intrep} Let ${\mathcal F}_0$ be trivial. Let $M^1,\ldots, M^d$ be sigma-martingales on $({\mathbb O}mega,{\mathcal F},{{\sf P}})$. Then the following are equivalent.
\begin{enumerate}[(i)]
\item Every bounded martingale admits representation w.r.t. $M^1,\ldots, M^d$.
\item Every uniformly integrable martingale admits representation w.r.t. $M^1,\ldots, M^d$.
\item Every sigma-martingale admits representation w.r.t. $M^1,\ldots, M^d$.
\item ${{\sf P}}$ is an extreme point of the convex set ${\mathbb E}^s(M^1,\ldots, M^d)$.
\item $\tilde{{\mathbb E}}^s_{{{\sf P}}}(M^1,\ldots, M^d)=\{{{\sf P}}\}.$
\item ${\mathbb E}^s_{{{\sf P}}}(M^1,\ldots, M^d)=\{{{\sf P}}\}.$
\end{enumerate}
\end{theorem}
\noindent{\sc Proof : } Since every bounded martingale is uniformly integrable and a uniformly integrable martingale is a sigma-martingale, we have\\
\centerline{(iii) $\Rightarrow$ (ii) $\Rightarrow$ (i).}
\noindent (i) $\Rightarrow$ (ii) is an easy consequence of Theorem \ref{aztm2}: given a uniformly integrable martingale $Z$, for $n\ge 1$, let us define martingales $Z^n$ by
\[Z^n_t={{\sf E}}[Z{\mathbb I}nd_{\{\lvert Z\rvert\le n \}}\mid {\mathcal F}_t].\]
We take the r.c.l.l. version of the martingale. It is easy to see that $Z^n$ are bounded martingales and in view of (i), $Z^n\in{\mathbb F}(M^1,M^2,\ldots , M^d)$. Moreover, for $n\ge t$
\[{{\sf E}}[\lvert Z^n_t-Z_t\rvert ]\le {{\sf E}}[\lvert Z\rvert{\mathbb I}nd_{\{\lvert Z\rvert> n \}}]\]
and hence for all $t$, ${{\sf E}}[\lvert Z^n_t-Z_t\rvert ]\rightarrow 0$. Theorem \ref{aztm2} now implies $Z\in{\mathbb F}(M^1,M^2,\ldots , M^d)$. This proves (ii).
\noindent We next prove (ii) $\Rightarrow$ (iii). Let $X$ be a sigma-martingale. In view of Lemma \ref{axl4}, there exist a uniformly integrable martingale $N$ and a predictable process $\psi$ such that
\[X=\int \psi {\hspace{0.6pt}d\hspace{0.1pt}} N.\]
In view of (ii), we can write $N_t=N_0+\int_0^t f_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s$ where $Y\in{\mathbb C}(M^1,M^2,\ldots , M^d)$ and $f\in{\mathbb L}(Y)$. Then we have
\[X_t=X_0+\int_0^t f_s\psi_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s\]
and thus $X$ admits an integral representation w.r.t. $M^1,\ldots, M^d$. This proves (iii).
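The composition used here is just the associativity of the stochastic integral, $\int \psi\, {\hspace{0.6pt}d\hspace{0.1pt}}(\int f {\hspace{0.6pt}d\hspace{0.1pt}} Y)=\int \psi f {\hspace{0.6pt}d\hspace{0.1pt}} Y$, which is transparent in discrete time. A sketch with hypothetical sample paths:

```python
from fractions import Fraction as F

# Discrete-time sketch: if N = N_0 + (f . Y) and X = (psi . N), then
# X = X_0 + (f*psi . Y), where (g . Y)_n = sum_{k<=n} g_k (Y_k - Y_{k-1}).
Y   = [F(0), F(1), F(3), F(2), F(5)]      # a sample path
f   = [None, F(2), F(-1), F(3), F(1)]     # integrands used from index 1 on
psi = [None, F(1), F(4), F(-2), F(2)]

def integral(g, Z):
    out = [F(0)]
    for k in range(1, len(Z)):
        out.append(out[-1] + g[k] * (Z[k] - Z[k - 1]))
    return out

N = integral(f, Y)                        # here N_0 = 0
X1 = integral(psi, N)                     # psi . (f . Y)
X2 = integral([None] + [f[k] * psi[k] for k in range(1, len(Y))], Y)
assert X1 == X2                           # associativity of the integral
```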
Suppose (v) holds and suppose ${{\sf Q}}_1,{{\sf Q}}_2$ $\in{\mathbb E}^s(M^1,M^2,\ldots , M^d)$ and ${{\sf P}}=\alpha {{\sf Q}}_1+(1-\alpha){{\sf Q}}_2$ with $0<\alpha<1$. It follows that ${{\sf Q}}_1,{{\sf Q}}_2$ are absolutely continuous w.r.t. ${{\sf P}}$ and hence ${{\sf Q}}_1, {{\sf Q}}_2\in \tilde{{\mathbb E}}_{{{\sf P}}}^s(M^1,M^2,\ldots , M^d)$. In view of (v), ${{\sf Q}}_1={{\sf Q}}_2={{\sf P}}$ and thus ${{\sf P}}$ is an extreme point of ${\mathbb E}^s(M^1,M^2,\ldots , M^d)$ and so (iv) is true.
Since ${\mathbb E}_{{{\sf P}}}^s(M^1,M^2,\ldots , M^d) \subseteq \tilde{{\mathbb E}}_{{{\sf P}}}^s(M^1,M^2,\ldots , M^d)$, it follows that (v) implies (vi). On the other hand, suppose (vi) is true and ${{\sf Q}}\in\tilde{{\mathbb E}}_{{{\sf P}}}^s(M^1,M^2,\ldots , M^d)$. Then ${{\sf Q}}_1=\frac{1}{2}({{\sf Q}}+{{\sf P}})\in {\mathbb E}_{{{\sf P}}}^s(M^1,M^2,\ldots , M^d)$, so (vi) implies ${{\sf Q}}_1={{\sf P}}$ and hence ${{\sf Q}}={{\sf P}}$; thus (v) holds.
So far we have proved (i) $\Longleftrightarrow$ (ii) $\Longleftrightarrow$ (iii) and
(v) $\Rightarrow$ (iv), (v) $\Longleftrightarrow$ (vi). To complete the proof, we will show (iii) $\Rightarrow$ (v) and (iv) $\Rightarrow$ (i).
Suppose that (iii) is true and let ${{\sf Q}}\in\tilde{{\mathbb E}}^s_{{\sf P}}(M^1,M^2,\ldots , M^d)$. Let $\xi$ be the Radon-Nikodym derivative of ${{\sf Q}}$ w.r.t. ${{\sf P}}$
and let $R$ denote the r.c.l.l. martingale $R_t={{\sf E}}[\xi\mid{\mathcal F}_{t}]$. Since ${\mathcal F}_0$ is trivial, $R_0=1$. In view of property (iii), we can get $Y\in{\mathbb C}(M^1,M^2,\ldots , M^d)$
and a predictable process $f\in{\mathbb L}(Y)$ such that
\begin{equation}\label{ax51}
R_t=1+\int_0^tf_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s.
\end{equation}
Note that
\begin{equation}\label{ax51d} [R,R]_t=\int_0^tf_s^2{\hspace{0.6pt}d\hspace{0.1pt}} [Y,Y]_s.\end{equation}
Since $M^j$ is a sigma-martingale under ${{\sf Q}}$ for each $j$, it follows that $Y$ is a ${{\sf Q}}$ sigma-martingale. By Lemma \ref{cal1}, it follows that $[Y,R]$ is a ${{\sf P}}$ sigma-martingale and hence
\begin{equation}\label{ax51a}
V^k_t=\int_0^tf_s{\mathbb I}nd_{\{\lvert f_s\rvert\le k\}}{\hspace{0.6pt}d\hspace{0.1pt}} [Y,R]_s\end{equation}
is a ${{\sf P}}$ sigma-martingale. Noting that
\[ [Y,R]_t=\int_0^tf_s{\hspace{0.6pt}d\hspace{0.1pt}} [Y,Y]_s\]
we see that
\begin{equation}\label{ax51b}
V^k_t=\int_0^tf^2_s{\mathbb I}nd_{\{\lvert f_s\rvert\le k\}}{\hspace{0.6pt}d\hspace{0.1pt}} [Y,Y]_s.\end{equation}
Thus we can get $(0,\infty)$-valued predictable processes $\phi^k$ such that
\[U^k_t=\int_0^t\phi^k_s{\hspace{0.6pt}d\hspace{0.1pt}} V^k_s\]
is a martingale. But then $U^k$ is a non-negative martingale with $U^k_0=0$, and hence $U^k$ is identically equal to $0$; thus so is $V^k$. Letting $k\to\infty$ in \eqref{ax51b}, it follows in view of \eqref{ax51d} that $[R,R]=0$, which yields that $R$ is identically equal to $1$, and so ${{\sf Q}}={{\sf P}}$. Thus
$\tilde{{\mathbb E}}^s_{{\sf P}}(M^1,M^2,\ldots , M^d)$ is a singleton.
Thus (iii) $\Rightarrow$ (v).
To complete the proof, we will now prove that (iv) $\Rightarrow$ (i).
Suppose $M^1,M^2,\ldots , M^d$ are such that ${{\sf P}}$ is an extreme point of ${\mathbb E}^s(M^1,M^2,\ldots , M^d)$. Since $M^j$ is a sigma-martingale under ${{\sf P}}$, we can choose $(0,\infty)$-valued predictable $\phi^j$ such that
\[N^j_t=\int_0^t\phi^j_s{\hspace{0.6pt}d\hspace{0.1pt}} M^j_s\]
is a uniformly integrable martingale under ${{\sf P}}$ and as seen in Lemma \ref{ayl9}, we then have
\[{\mathbb F}(M^1,M^2,\ldots , M^d)={\mathbb F}(N^1,N^2,\ldots ,N^d).\]
Suppose (i) is not true; we will show that this leads to a contradiction. So suppose $S$ is a bounded martingale that does not admit a representation w.r.t. $M^1,M^2,\ldots , M^d$, {\em i.e.} $S\not\in {\mathbb F}(M^1,M^2,\ldots , M^d)={\mathbb F}(N^1,N^2,\ldots ,N^d)$. Then for some $T$,
\[S_T\not\in {\mathbb K}_T(N^1,N^2,\ldots ,N^d).\]
We have proved in Theorem \ref{aztm1} that $ {\mathbb K}_T(N^1,N^2,\ldots ,N^d)$ is closed in ${\mathbb L}^1(\Omega,{\mathcal F}, {{\sf P}})$.
Since ${\mathbb K}_T$ is not equal to ${\mathbb L}^1(\Omega, {\mathcal F}_T,{{\sf P}})$, by the Hahn-Banach theorem, there exists $\xi\in{\mathbb L}^\infty(\Omega, {\mathcal F}_T,{{\sf P}})$ with ${{\sf P}}(\xi\neq 0)>0$ such that
\[\int \eta\xi {\hspace{0.6pt}d\hspace{0.1pt}}{{\sf P}}=0\;\;\forall \eta\in{\mathbb K}_T.\]
Then for all constants $c$, we have
\begin{equation}\label{azk44}
\int \eta(1+c\xi) {\hspace{0.6pt}d\hspace{0.1pt}}{{\sf P}}=\int\eta{\hspace{0.6pt}d\hspace{0.1pt}}{{\sf P}}\;\;\forall \eta\in{\mathbb K}_T.\end{equation}
Since $\xi$ is bounded, we can choose a $c>0$ such that
\[{{\sf P}}(c\lvert\xi\rvert<\frac{1}{2})=1.\]
Now, let ${{\sf Q}}$ be the measure with density $\eta=(1+c\xi)$. Then ${{\sf Q}}$ is a probability measure. Thus \eqref{azk44} yields
\begin{equation}\label{azk45}
\int \eta{\hspace{0.6pt}d\hspace{0.1pt}}{{\sf Q}}=\int\eta{\hspace{0.6pt}d\hspace{0.1pt}}{{\sf P}}\;\;\forall \eta\in{\mathbb K}_T.\end{equation}
For any bounded stopping time $\tau$ and $1\le j\le d$, $N^j_{\tau\wedge T}\in{\mathbb K}_T$ and hence
\begin{equation}\label{azk46}
{{\sf E}}_{{\sf Q}}[N^j_{\tau\wedge T}]={{\sf E}}_{{\sf P}}[N^j_{\tau\wedge T}]=N^j_0.\end{equation}
On the other hand,
\begin{equation}\label{azk47}\begin{split}
{{\sf E}}_{{\sf Q}}[N^j_{\tau\vee T}]&={{\sf E}}_{{\sf P}}[\eta N^j_{\tau\vee T}]\\
&={{\sf E}}_{{\sf P}}[{{\sf E}}_{{\sf P}}[\eta N^j_{\tau\vee T}\mid {\mathcal F}_T]]\\
&={{\sf E}}_{{\sf P}}[\eta{{\sf E}}_{{\sf P}}[ N^j_{\tau\vee T}\mid {\mathcal F}_T]]\\
&={{\sf E}}_{{\sf P}}[\eta N^j_T]\\
&={{\sf E}}_{{\sf Q}}[ N^j_T]\\
&=N^j_0,\end{split}\end{equation}
where we have used the facts that $\eta$ is ${\mathcal F}_T$ measurable, $N^j$ is a ${{\sf P}}$-martingale and \eqref{azk46}. Now
\[ {{\sf E}}_{{\sf Q}}[N^j_{\tau}]={{\sf E}}_{{\sf Q}}[N^j_{\tau\wedge T}]+{{\sf E}}_{{\sf Q}}[N^j_{\tau\vee T}]-{{\sf E}}_{{\sf Q}}[N^j_{T}]=N^j_0.\]
Thus $N^j$ is a ${{\sf Q}}$-martingale and since
\[M^j_t=\int_0^t\frac{1}{\phi^j_s}{\hspace{0.6pt}d\hspace{0.1pt}} N^j_s\]
it follows that $M^j$ is a ${{\sf Q}}$ sigma-martingale.
Thus ${{\sf Q}}\in {\mathbb E}^s(M^1,\ldots, M^d)$.
Similarly, if $\tilde{{{\sf Q}}}$ is the measure with density $\eta=(1-c\xi)$, we can prove that $\tilde{{{\sf Q}}}\in {\mathbb E}^s(M^1,\ldots, M^d)$. Since ${{\sf P}}=\frac{1}{2}({{\sf Q}}+\tilde{{{\sf Q}}})$ and ${{\sf Q}}\neq{{\sf P}}$ (as ${{\sf P}}(\xi\neq 0)>0$), this contradicts the assumption that ${{\sf P}}$ is an extreme point of ${\mathbb E}^s(M^1,\ldots, M^d)$.
Thus (iv) $\Rightarrow$ (i). This completes the proof.
\qed
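As an illustration of the contrapositive of (iv) $\Rightarrow$ (i): in the one-period trinomial toy market below (hypothetical numbers), the set of equivalent martingale measures is not a singleton, so ${{\sf P}}$ is not an extreme point, and there is a bounded claim whose expectation differs across two such measures and which therefore admits no integral representation.

```python
from fractions import Fraction as F

# One-period trinomial market (hypothetical numbers): discounted price
# S0 = 1, S1 in {2, 1, 1/2} on a three-point sample space.
S0 = F(1)
S1 = [F(2), F(1), F(1, 2)]

def is_emm(q):
    """q is an equivalent martingale measure: positive, sums to 1, E_q[S1] = S0."""
    return sum(q) == 1 and all(x > 0 for x in q) \
        and sum(qi * s for qi, s in zip(q, S1)) == S0

# Two distinct equivalent martingale measures:
Q1 = [F(1, 4), F(1, 4), F(1, 2)]
Q2 = [F(1, 6), F(1, 2), F(1, 3)]
assert is_emm(Q1) and is_emm(Q2)

# The bounded claim xi = Ind_{S1 = 2} has different expectations under Q1
# and Q2; a representation xi = c + f*(S1 - S0) would force them to agree.
xi = [F(1), F(0), F(0)]
price1 = sum(q * x for q, x in zip(Q1, xi))
price2 = sum(q * x for q, x in zip(Q2, xi))
assert price1 != price2    # 1/4 vs 1/6: the representation fails
```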
\section{Completeness of Markets}
Let the (discounted) prices of $d$ securities be given by $X^1,\ldots ,X^d$. We assume that $X^j$ are semimartingales and that they satisfy the property NFLVR so that an ESMM exists.
\begin{theorem} \label{sftap} {\bf The Second Fundamental Theorem of Asset Pricing}\\ Let $X^1,\ldots ,X^d$ be semimartingales on $(\Omega,{\mathcal F},{{\sf P}})$ such that ${\mathbb E}^s_{{\sf P}}(X^1,\ldots, X^d)$ is non-empty. Then the following are equivalent:
\begin{enumerate}[(a)]
\item For all $T<\infty$ and all ${\mathcal F}_T$ measurable bounded random variables $\xi$ (bounded by, say, $K$), there exist $g^j\in{\mathbb L}(X^j)$, a constant $c$ and $f\in{\mathbb L}(Y)$, where
\begin{equation}\label{aza1}
Y_t=\sum_{j=1}^d\int_0^tg^j_s{\hspace{0.6pt}d\hspace{0.1pt}} X^j_s,\end{equation}
such that $\lvert \int_0^tf_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s\rvert \le 2K$ and
\begin{equation}\label{aza2}
\xi=c+\int_0^Tf_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s.\end{equation}
\item The set ${\mathbb E}^s_{{\sf P}}(X^1,\ldots, X^d)$ is a singleton.
\end{enumerate}
\end{theorem}
\noindent{\sc Proof : } First suppose that ${\mathbb E}^s_{{\sf P}}(X^1,\ldots, X^d)=\{{{\sf Q}}\}.$ Consider the martingale
$M_t={{\sf E}}_{{\sf Q}}[\xi\mid{\mathcal F}_t]$. Note that $M$ is bounded by $K$. In view of the equivalence of (i) and (v) in Theorem \ref{intrep}, we get that $M$ admits a representation w.r.t. $X^1,\ldots, X^d$; thus we get $g^j\in{\mathbb L}(X^j)$ and $f\in{\mathbb L}(Y)$, where $Y$ is given by \eqref{aza1}, with
\[M_t=M_0+\int_0^tf_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s.\]
Since ${\mathcal F}_0$ is trivial, $M_0$ is a constant. Since $M$ is bounded by $K$, it follows that $\int_0^tf_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s$ is bounded by $2K$. Thus (b) implies (a).
Now suppose (a) is true. Let ${{\sf Q}}$ be an ESMM. Let $M$ be a martingale. We will show that $M\in{\mathbb F}(X^1,\ldots, X^d)$, {\em i.e.} $M$ admits an integral representation w.r.t. $X^1,\ldots, X^d$. In view of Lemmas \ref{azl1} and \ref{ayl9}, it suffices to show that for each $T<\infty$, $N\in{\mathbb F}(X^1,\ldots, X^d)$, where $N$ is defined by $N_t=M_{t\wedge T}$.
Let $\xi=N_T$. Then in view of assumption (a), we have
\[\xi=c+\int_0^Tf_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s\]
with $Y$ given by \eqref{aza1}, a constant $c$ and $f\in{\mathbb L}(Y)$ such that $U_t=\int_0^tf_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s$ is bounded. Being a bounded sigma-martingale, $U$ is a martingale. Hence
\[N_t=c+\int_0^tf_s{\hspace{0.6pt}d\hspace{0.1pt}} Y_s, \;\;0\le t\le T.\]
Thus $N\in {\mathbb F}(X^1,\ldots, X^d)$.
We have proved that (i) in Theorem \ref{intrep} holds and hence (v) holds, {\em i.e.} the ESMM is unique.
\qed
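In the simplest complete market, the representation in (a) can be written down explicitly. The sketch below (a one-period binomial model with hypothetical prices) computes the unique martingale probability and verifies the replication $\xi=c+f\,(S_1-S_0)$ state by state.

```python
from fractions import Fraction as F

# One-period binomial market (hypothetical numbers): the unique EMM makes
# the market complete, and every claim is replicated as c + f*(S1 - S0).
S0 = F(100)
S_up, S_dn = F(120), F(90)

# Unique martingale probability: q*S_up + (1-q)*S_dn = S0.
q = (S0 - S_dn) / (S_up - S_dn)
assert q == F(1, 3)

# Claim xi = (S1 - 100)^+ and its replicating ("delta hedging") strategy:
xi_up, xi_dn = max(S_up - 100, F(0)), max(S_dn - 100, F(0))
f = (xi_up - xi_dn) / (S_up - S_dn)   # holding in the risky asset
c = q * xi_up + (1 - q) * xi_dn       # initial capital = E_Q[xi]

# The representation xi = c + f*(S1 - S0) holds in both states:
assert c + f * (S_up - S0) == xi_up
assert c + f * (S_dn - S0) == xi_dn
```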
\appendix
\begin{center}
{\bf APPENDIX}
\end{center}
\renewcommand{\theequation}{A.\arabic{equation}}
\setcounter{equation}{0}
For a non-negative definite symmetric matrix $C$, the eigenvalue-eigenvector decomposition gives us a representation
\begin{equation}\label{apx1}C=B^TDB\end{equation}
\begin{equation}\label{apx2}\mbox{$B$ is an orthogonal matrix and $D$ is a diagonal matrix.}\end{equation}
This decomposition is not unique, but for each non-negative definite symmetric matrix $C$, the set of pairs $(B,D)$ satisfying \eqref{apx1}-\eqref{apx2} is compact. Thus the map taking $C$ to this set of pairs admits a measurable selection; in other words, there exists a Borel mapping $\theta$ such that $\theta(C)=(B,D)$, where $B,C,D$ satisfy \eqref{apx1}-\eqref{apx2}.
(See \cite{Graf} or Corollary 5.2.6 of \cite{SMS}.)
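Numerically, the decomposition \eqref{apx1}-\eqref{apx2} is produced by any symmetric eigensolver; the fixed tie-breaking of the solver plays the role of the measurable selection. A sketch using numpy (the random test matrix is arbitrary):

```python
import numpy as np

# For a symmetric non-negative definite C, numpy's eigh returns
# C = V diag(w) V^T with V orthogonal; in the notation of the text, B = V^T.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
C = A @ A.T                       # symmetric, non-negative definite

eigvals, V = np.linalg.eigh(C)    # C = V @ diag(eigvals) @ V.T
D = np.diag(eigvals)
B = V.T

assert np.allclose(B.T @ D @ B, C)         # C = B^T D B, eq. (apx1)
assert np.allclose(B @ B.T, np.eye(4))     # B orthogonal, eq. (apx2)
assert np.all(eigvals >= -1e-12)           # non-negative spectrum
```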
Let ${\mathcal D}$ be a $\sigma$-field on a non-empty set $\Gamma$ and for $1\le i,j\le d$, let $\lambda_{ij}$ be $\sigma$-finite signed measures on $(\Gamma, {\mathcal D})$ such that\\
\centerline{for all $E\in{\mathcal D}$, the matrix $((\lambda_{ij}(E)))$ is a symmetric non-negative definite matrix.}
Let $\theta(E)=\sum_{i=1}^d\lambda_{ii}(E)$. Then for $1\le i,j\le d$ there exists a version $c^{ij}$ of the Radon-Nikodym derivative $\frac{d\lambda_{ij}}{d\theta}$ such that for all $\gamma\in\Gamma$, the matrix $((c^{ij}(\gamma)))$ is non-negative definite.
To see this, for $1\le i\le j\le d$ let $f^{ij}$ be a version of the Radon-Nikodym derivative $\frac{d\lambda_{ij}}{d\theta}$ and let $f^{ji}=f^{ij}$. For rationals $r_1,r_2,\ldots , r_d$, let
\[A_{r_1,r_2,\ldots , r_d}=\{\gamma:\sum_{ij}r_ir_jf^{ij}(\gamma)< 0\}.\]
Then $\theta(A_{r_1,r_2,\ldots , r_d})=0$ and hence $\theta(A)=0$ where
\[A=\cup\{A_{r_1,r_2,\ldots , r_d}:\; r_1,r_2,\ldots , r_d\text{ rationals}\}.\]
The required version is now given by
\[c^{ij}(\gamma)=f^{ij}(\gamma){\mathbb I}nd_{A^c}(\gamma).\]
\begin{thebibliography}{99}
\bibitem{BK15} Bhatt, A. G. and Karandikar, R. L.:
On the Fundamental Theorem of Asset Pricing, {\em Communications on Stochastic Analysis} {\bf 9} (2015), 251 - 265.
\bibitem{CS}Cherny, A. S. and Shiryaev, A. N.:
Vector stochastic integrals and the fundamental theorems of asset pricing, {\em Proceedings of the Steklov Institute of Mathematics}, {\bf 237} (2002), 6 - 49.
\bibitem{Chou} Chou, C. S.: Caract\'erisation d'une classe de semimartingales.
{\em Seminaire de Probabilites XIII, Lecture Notes in Mathematics} {\bf 721} (1979), Springer-Verlag, 250 - 252.
\bibitem{DS} Delbaen, F. and Schachermayer, W.:
A general version of the fundamental theorem of asset pricing, {\em Mathematische Annalen} {\bf 300} (1994), 463 - 520.
\bibitem{DS98} Delbaen, F. and Schachermayer, W.:
The Fundamental Theorem of Asset Pricing for Unbounded Stochastic Processes, {\em Mathematische Annalen} {\bf 312} (1998), 215-250.
\bibitem{DV} Davis, M. H. A. and Varaiya, P. : On the multiplicity of an increasing family of sigma-fields. {\em Annals of Probability} {\bf 2} (1974), 958 - 963.
\bibitem{Em80} Emery, M. : Compensation de processus a variation finie non localement integrables. {\em Seminaire de Probabilites XIV, Lecture Notes in Mathematics} {\bf 784} (1980), Springer-Verlag, 152 - 160.
\bibitem{Graf} Graf, S. : A measurable selection theorem for compact-valued maps. {\em Manuscripta Math.} {\bf 27} (1979), 341 - 352.
\bibitem{HP} Harrison, M. and Pliska, S.:
Martingales and Stochastic Integrals in the theory of continuous trading, {\em Stochastic Processes and Applications} {\bf 11} (1981), 215 - 260.
\bibitem{Ito}Ito, K.: Lectures on stochastic processes, {\em Tata Institute of Fundamental Research}, Bombay, (1961).
\bibitem{J78} Jacod, J.: Calcul Stochastique et Problemes de Martingales.
{\em Lecture Notes in Mathematics}
{\bf 714}, (1979), Springer-Verlag.
\bibitem{J80} Jacod, J. : Integrales stochastiques par rapport a une semi-martingale
vectorielle et changements de filtration. {\em Lecture Notes in Mathematics} {\bf 784} (1980), Springer-Verlag, 161 - 172.
\bibitem{JY} Jacod, J. and Yor, M.: Etude des solutions extr\'emales et repr\'esentation int\'egrale des solutions pour certains problemes de martingales. {\em Z. Wahrscheinlichkeitstheorie verw. Geb.} {\bf 38} (1977), 83 - 125.
\bibitem{Mey} Meyer, P. A.: Un cours sur les integrales stochastiques. {\em Seminaire de Probabilites X, Lecture Notes in Mathematics} {\bf 511} (1976), Springer-Verlag, 245 - 400.
\bibitem{P} Protter, P.: Stochastic Integration and Differential Equations. (1990), Springer-Verlag.
\bibitem{SMS}Srivastava, S. M.: A Course on Borel Sets. (1998), Springer-Verlag.
\bibitem{Yor} Yor, M.: Sous-espaces denses dans $L^1$ ou $H^1$ et representation des martingales. {\em Seminaire de Probabilites XII, Lecture Notes in Mathematics} {\bf 649} (1978), Springer-Verlag, 265 - 309.
\end{thebibliography}
\noindent {\it Address}: H1 Sipcot IT Park, Siruseri, Kelambakkam 603103, India\\
{\it E-mail}: [email protected], [email protected]
\end{document} |
\begin{document}
\title{State-independent experimental test of quantum contextuality in an
indivisible system}
\author{C. Zu$^{1}$, Y.-X. Wang$^{1}$, D.-L. Deng$^{1,2}$, X.-Y. Chang$^{1}$
, K. Liu$^{1}$, P.-Y. Hou$^{1}$, H.-X. Yang$^{1}$, L.-M. Duan$^{1,2}$}
\affiliation{$^{1}$Center for Quantum Information, IIIS, Tsinghua University, Beijing,
China}
\affiliation{$^{2}$Department of Physics, University of Michigan, Ann Arbor, Michigan
48109, USA}
\begin{abstract}
We report the first state-independent experimental test of quantum
contextuality on a single photonic qutrit (three-dimensional system), based on a recent theoretical
proposal [Yu and Oh, Phys. Rev. Lett. 108, 030402 (2012)]. Our experiment spotlights quantum contextuality in its
most basic form, in a way that is independent of either the state or the
tensor product structure of the system.
\end{abstract}
\pacs{03.65.Ta, 42.50.Xa, 03.65.Ca, 03.65.Ud}
\maketitle
Contextuality represents a major deviation of quantum theory from
classical physics \textbf{\cite{1,2}}. Non-contextual realism is a pillar of the familiar worldview of classical
physics. In a non-contextual world, observables have pre-defined values,
which are independent of our choices of measurements. Non-contextuality
also plays a role in the derivation of Bell's inequalities, as the property
of local realism therein can be seen as a special form of non-contextuality,
where the independence of the measurement context is enforced by the
no-signalling principle \textbf{\cite{2a,2b,8,9}}. In an attempt to save the
non-contextuality of the classical worldview, non-contextual hidden variable
theories have been proposed as an alternative to quantum mechanics. In these
theories, the outcomes of measurements are associated to hidden variables,
which are distributed according to a joint probability distribution.
However, the celebrated Kochen-Specker theorem \textbf{\cite{1,2,2a,2b}}
showed that non-contextual hidden variable theories are incompatible with
the predictions of quantum theory. The original Kochen-Specker theorem is
presented in the form of a logical contradiction, which is conceptually
striking, but experimentally unfriendly: the presence of unavoidable
experimental imperfections motivated a debate on whether or not the
non-contextual features highlighted by Kochen-Specker theorem can be
actually tested in experiments \textbf{\cite{10,11}}. As a result of the
debate, new Bell-type inequalities have been proposed in recent years,
with the purpose of pinpointing the contextuality of quantum mechanics in an
experimentally testable way. These inequalities are generally referred to as
the \emph{KS\ inequalities} \textbf{\cite{8}}. Violation of the KS\
inequalities confirms quantum contextuality and rules out the non-contextual
hidden variable theory. Unlike Bell inequality tests, violation
of the KS\ inequality can be achieved independently of the state of quantum
systems \textbf{\cite{1,2,8,9}}, showing that the conflict between quantum
theory and non-contextual realism resides in the structure of quantum
mechanics instead of particular quantum states. The KS inequalities have
been tested in experiments for two qubits, using ions \textbf{\cite{3}},
photons \textbf{\cite{4,4a}}, neutrons \textbf{\cite{5}}, or an ensemble
nuclear magnetic resonance system \textbf{\cite{6}}. A single qutrit
represents the simplest system where it is possible to observe conflict
between quantum theory and non-contextual realistic models \textbf{\cite
{7,9,9a,9b}}. A recent experiment has demonstrated quantum contextuality for
photonic qutrits in a particular quantum state \textbf{\cite{7}}, based on a
version of the KS inequality proposed by Klyachko, Can, Binicioglu, and
Shumovsky \textbf{\cite{9a}}.
A state-independent test of quantum contextuality for a single qutrit, in
the spirit of the original KS\ theorem, is possible but complicated as one
needs to measure many experimental configurations \textbf{\cite{2b,9,9b}}. A
recent theoretical work by Yu and Oh proposes another version of the KS\
inequality, which requires measuring $13$ variables and $24$ of their pair
correlations \textbf{\cite{9}}. This is a significant simplification
compared with the previous KS inequalities for single qutrits, and the
number of variables cannot be further reduced as proven recently by Cabello
\textbf{\cite{12}}. Our experiment confirms quantum contextuality in a
state-independent fashion using the Yu-Oh version of the KS inequality for
qutrits represented by three distinctive paths of single photons. The
maximum violation of this inequality by quantum mechanics is only $4\%$
beyond the bound set by the non-contextual hidden variable theory, so we
need to accurately control the paths of single photons in experiments to
measure the $13$ variables and their correlations for different types of
input states. We have achieved a violation of the KS\ inequality by more
than five standard deviations for all the nine different states that we
tested.
For a single qutrit with basis vectors $\left\{ \left\vert 0\right\rangle
,\left\vert 1\right\rangle ,\left\vert 2\right\rangle \right\} $, we detect
projection operators to the states $i\left\vert 0\right\rangle +j\left\vert
1\right\rangle +k\left\vert 2\right\rangle $ specified by the $13$ unit
vectors $\left( i,j,k\right) $ in Fig. 1. The $13$ projectors have
eigenvalues either $0$ or $1$. In the hidden variable theory, the
corresponding observables are assigned randomly with values $0$ or $1$
according to a (generally unknown) joint probability distribution. When two
states are orthogonal, the projectors onto them commute, and the
corresponding observables are called compatible, which means that they can
be measured simultaneously. Non-contextuality means that the assignment of
values to an observable should be independent of the choice of compatible
observables that are measured jointly with it. For instance, $z_{1}$ in Fig.
1 should be assigned the same value in the correlators $z_{1}z_{2}$ and $
z_{1}y_{1}^{\pm }$ for each trial of measurement. For each observable $
b_{i}\in \left\{ z_{\mu },y_{\mu }^{\pm },h_{\alpha },\mu =1,2,3;\alpha
=0,1,2,3\right\} $ defined in Fig. 1, we introduce a new variable $
a_{i}\equiv 1-2b_{i}$, which takes values of $\pm 1$. For the $13$
observables $a_{i}$ with two outcomes $\pm 1$, it is shown in Ref. \textbf{
\cite{9}} that they satisfy the inequality
\begin{equation}
\mathop{\displaystyle \sum }\limits_{i}a_{i}-\frac{1}{4}\mathop{
\displaystyle \sum }\limits_{\left\langle i,j\right\rangle }a_{i}a_{j}\leq 8,
\end{equation}
where $\left\langle i,j\right\rangle $ denotes all pairs of observables that
are compatible with each other. There are $24$ compatible pairs among all
the $13\times 13$ combinations, and a complete list of them is given in
Table 1 for the corresponding operator correlations. The inequality (1)\ can
be proven either through an exhaustive check of all the possible $2^{13}$
value assignments of $a_{i}$ $\left( i=1,2,\cdots ,13\right) $ or by a more
elegant analytic argument as shown in Ref. \textbf{\cite{9}}. In quantum
theory, each $a_{i}$ corresponds to an operator $A_{i}$ with eigenvalues $
\pm 1$. In the hidden variable theory, the value $a_{i}$
corresponds to a random variable $A_{i}$, and the different values are
distributed according to a (possibly correlated) joint probability
distribution. Hence, for the hidden variable theory the expectation values
of $A_{i}$ must satisfy the inequality
\begin{equation}
\mathop{\displaystyle \sum }\limits_{i}\left\langle A_{i}\right\rangle -
\frac{1}{4}\mathop{\displaystyle \sum }\limits_{\left\langle
i,j\right\rangle }\left\langle A_{i}A_{j}\right\rangle \leq 8,
\end{equation}
which follows by taking the average of (1) over the joint probability
distribution of the values $a_{i}$. On the other hand, quantum theory gives
a different prediction: From the definition $A_{i}\equiv I-2B_{i}$, where $
B_{i}$ is the projection operator to the $13$ states in Fig. 1, we find that
$S=\mathop{\displaystyle \sum }\limits_{i}A_{i}-\frac{1}{4}
\mathop{\displaystyle \sum }\limits_{\left\langle i,j\right\rangle
}A_{i}A_{j}\equiv \frac{25}{3}I$, where $I$ is the unity operator. Hence,
for any state of the system, quantum theory predicts the inequality $
\left\langle S\right\rangle =25/3\nleq 8$, which violates the inequality (2)
imposed by the non-contextual hidden variable theory and rules out any
non-contextual realistic model.
Since the quantum mechanical prediction $\langle S\rangle =25/3$ is close to
the upper bound $\langle S\rangle \leq 8$ set by the non-contextual realism,
we need to achieve accurate control in experiments to violate the inequality
(2). Yu and Oh also derived another simpler inequality in Ref. \textbf{\cite
{9}} by introducing an additional assumption (as proposed in the original
KS\ proof \textbf{\cite{1,2b}}) that the algebraic structure of compatible
observables is preserved at the hidden variable level, that is, that the
value assigned to the product (or sum) of two compatible observables is
equal to the product (or sum) of the values assigned to these observables.
Under this assumption, it is shown in \textbf{\cite{9}} that
\begin{equation}
\mathop{\displaystyle \sum }\limits_{\alpha =0,1,2,3}\left\langle
B_{h_{\alpha }}\right\rangle \leq 1
\end{equation}
for non-contextual hidden variable theory, while quantum mechanically $
\mathop{\displaystyle \sum }\limits_{\alpha =0,1,2,3}B_{h_{\alpha }}\equiv
\frac{4}{3}I$, and thus $\mathop{\displaystyle \sum }\limits_{\alpha
=0,1,2,3}\left\langle B_{h_{\alpha }}\right\rangle =4/3>1$. The inequality
(3) is more amenable to experimental tests than Eq. (2), as it requires only
four measurement settings. However, conceptually it is weaker than Eq. (2)
due to the additional assumption required for its proof. Our experiment
achieves significant violation of both the inequalities (2) and (3).
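Both operator identities, as well as the non-contextual bound in inequality (1), can be checked by direct computation. The sketch below uses our integer labelling of the 13 Yu-Oh rays (it reproduces the orthogonality relations described in the text, though the ordering may differ from Fig. 1), verifies $S=\frac{25}{3}I$ and $\sum_{\alpha}B_{h_{\alpha}}=\frac{4}{3}I$, and confirms the classical bound $8$ by enumerating all $2^{13}$ assignments.

```python
import itertools
from fractions import Fraction as F

# Our labelling of the 13 Yu-Oh rays (z_1..z_3, y_k^{+/-}, h_0..h_3):
vecs = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
        (0, 1, 1), (0, 1, -1), (1, 0, 1), (1, 0, -1),
        (1, 1, 0), (1, -1, 0),
        (1, 1, 1), (-1, 1, 1), (1, -1, 1), (1, 1, -1)]

def proj(v):                      # B = v v^T / |v|^2, exact rationals
    n = sum(x * x for x in v)
    return [[F(a * b, n) for b in v] for a in v]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

B = [proj(v) for v in vecs]
A = [[[F(r == c) - 2 * b[r][c] for c in range(3)] for r in range(3)]
     for b in B]                  # A_i = I - 2 B_i

# Compatible pairs = orthogonal pairs; there are exactly 24 of them.
pairs = [(i, j) for i in range(13) for j in range(i + 1, 13)
         if sum(a * b for a, b in zip(vecs[i], vecs[j])) == 0]
assert len(pairs) == 24

# S = sum_i A_i - (1/4) * (sum over ordered compatible pairs of A_i A_j);
# commuting pairs make this (1/2) times the sum over the 24 unordered pairs.
S = [[sum(A[i][r][c] for i in range(13))
      - F(1, 2) * sum(mat_mul(A[i], A[j])[r][c] for i, j in pairs)
      for c in range(3)] for r in range(3)]
assert S == [[F(25, 3) if r == c else F(0) for c in range(3)]
             for r in range(3)]

# Inequality (3): the four h-projectors sum to (4/3) I.
H = [[sum(B[i][r][c] for i in range(9, 13)) for c in range(3)]
     for r in range(3)]
assert H == [[F(4, 3) if r == c else F(0) for c in range(3)]
             for r in range(3)]

# Non-contextual bound: exhaustive check of all 2^13 assignments a_i = +/-1.
best = max(sum(a) - F(1, 2) * sum(a[i] * a[j] for i, j in pairs)
           for a in itertools.product((1, -1), repeat=13))
assert best == 8
```

The bound $8$ is attained, e.g., by assigning $-1$ to $y_1^{+},y_2^{+},y_3^{+},h_0$ and $+1$ to the remaining nine observables.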
To experimentally test the inequalities (2) and (3), first we prepare a
single photonic qutrit through the spontaneous parametric down conversion
(SPDC) setup shown in Fig. 2. The SPDC process generates correlated
(entangled) photon pairs, and through detection of one of the photons by a
detector D0, we get a heralded single-photon source on the other output
mode. This photon is then split by two polarizing beam splitters (PBS) into
three spatial modes that represent a single photonic qutrit. Through control
of the wave plates before the PBS\ and for the pump light, we can prepare
any state for this photonic qutrit.
The state of the qutrit is then detected by three single-photon detectors
D1-D3. To measure the observables $A_{i}$ and their correlations, we use the
setup shown in Fig. 1 based on cascaded Mach-Zehnder interferometers. The
wave plates HWP3 and HWP4 in the interferometers can be tilted to fine tune
the phase difference between the two arms. To stabilize the relative phase,
the whole interferometer setup is enclosed in a black box. The detectors
D1-D3 measure projections to three orthogonal states in the qutrit space,
which always correspond to mutually compatible observables. By tuning the
half wave plates HWP5 and HWP6 in Fig. 1, we can choose these projections so
that they give a subset of the $13$ projection operators $B_{i}$. A detector
click (non-click) then means assignment of value $1$ ($0$) to the
corresponding observable $B_{i}$ (or equivalently, assignment of $-1$ $
\left( +1\right) $ to the observable $A_{i}$). The coincidence between the
detectors measures the correlation. The detailed configurations of the wave
plates to measure different correlations are summarized in section 1 of the
supplementary information. Due to the photon loss, sometimes our photonic
qutrit does not yield a click in the detectors D1-D3, even though we
registered a heralding photon at the detector D0. To take this into account,
we discard the events when none of the detectors D1-D3 fires, in the same
way as it was done in Ref. \textbf{\cite{7}}. The use of this post-selection
technique opens up a detection efficiency loophole, and we need to assume
that the events selected by the photonic coincidences are an unbiased
representation of the whole sample (\emph{fair-sampling assumption}).
We have measured all the expectation values in the inequality (2) and (3)
for different input states. Table 1 summarizes the measurement results for a
particular input state $\left\vert s\right\rangle =\left( \left\vert
0\right\rangle +\left\vert 1\right\rangle +\left\vert 2\right\rangle \right)
/\sqrt{3}$ in equal superposition of the three basis-vectors. The
theoretical values in the quantum mechanical case are calculated using the
Born rule with the ideal state $\left\vert s\right\rangle $. Each of the
experimental correlations is constructed from the joint probabilities $
P\left( A_{i}=\pm 1;A_{j}=\pm 1\right) $ registered by the detectors. As an
example to show the measurement method, in section 2 of the supplementary
information, we give detailed data for the registered joint probabilities
under different measurement configurations, which together fix all the
correlations in Table 1. The expectation value $\left\langle
B_{i}\right\rangle $ (or $\left\langle A_{i}\right\rangle \equiv
1-2\left\langle B_{i}\right\rangle $) is directly determined by the relative
probability of the photon firing in the corresponding detector. From the
data summarized in Table 1, we find both of the inequalities (2) and (3) are
significantly violated in experiments, in agreement with the quantum
mechanics prediction and in contradiction with the non-contextual realistic
models. Even the more demanding inequality (2) is violated by more than five
standard deviations.
To verify that the inequalities (2) and (3) are experimentally violated
independently of the state of the system, we have tested them for different
kinds of input states. The set of states tested include the three
basis-vectors $\left\{ \left\vert 0\right\rangle ,\left\vert 1\right\rangle
,\left\vert 2\right\rangle \right\} $, the two-component superposition
states $\left\{ \left( \left\vert 0\right\rangle +\left\vert 1\right\rangle
\right) /\sqrt{2},\left( \left\vert 0\right\rangle +\left\vert
2\right\rangle \right) /\sqrt{2},\left( \left\vert 1\right\rangle
+\left\vert 2\right\rangle \right) /\sqrt{2}\right\} $, the three-component
superposition state $\left\vert s\right\rangle $, and two mixed states $\rho
_{8}=\left( \left\vert 0\right\rangle \left\langle 0\right\vert +\left\vert
2\right\rangle \left\langle 2\right\vert \right) /2$ and $\rho _{9}=\left(
\left\vert 0\right\rangle \left\langle 0\right\vert +\left\vert
1\right\rangle \left\langle 1\right\vert +\left\vert 2\right\rangle
\left\langle 2\right\vert \right) /3\equiv I/3$. The detailed configurations
of the wave plates to prepare these different input states are summarized in
section 1 of the supplementary information. To generate the mixed states, we
first produce photon pairs entangled in polarization using the type-I phase
matching in the BBO\ crystal \textbf{\cite{14}}. After tracing out the idler
photon by the detection at D0, we get a mixed state in polarization for the
signal photon, which is then transferred to a mixed qutrit state represented
by the optical paths through the PBS. For various input states, we measure
correlations of all the observables in the inequality (2) and the detailed
results are presented in section 3 of the supplementary information.
Although the expectation values $\left\langle A_{i}\right\rangle $ and the
correlations $\left\langle A_{i}A_{j}\right\rangle $ strongly depend on the
input states, the inequalities (2) and (3) are state-independent and
significantly violated for all the cases tested in experiments. In Fig. 3,
we present the measurement outcomes of these two inequalities for nine
different input states. The results violate the boundary set by the
non-contextual hidden variable theory and are in excellent agreement with
quantum mechanics predictions.
In this work, we have observed violation of the KS inequalities (2) and (3)
for a single photonic qutrit, which represents the first state-independent
experimental test of quantum contextuality in an indivisible quantum system.
The experimental confirmation of quantum contextuality in its most basic form,
in a way that is independent of either the state or the tensor product
structure of the system, sheds new light on the contradiction between
quantum mechanics and non-contextual realistic models.
\textbf{Acknowledgement} This work was supported by the National Basic
Research Program of China (973 Program) 2011CBA00300 (2011CBA00302) and the
NSFC Grant 61033001. DLD and LMD acknowledge in addition support from the
IARPA MUSIQC program, the ARO and the AFOSR MURI program.
\begin{thebibliography}{99}
\bibitem{1} S. Kochen and E. P. Specker, J. Math. Mech. 17, 59 (1967).
\bibitem{2} J. S. Bell, Rev. Mod. Phys. 38, 447 (1966).
\bibitem{2a} N. D. Mermin, Phys. Rev. Lett. 65, 3373 (1990).
\bibitem{2b} A. Peres, Quantum Theory: Concepts and Methods Ch. 7 (Kluwer,
1993).
\bibitem{8} A. Cabello, Phys. Rev. Lett. 101, 210401 (2008).
\bibitem{9} S. X. Yu and C. H. Oh, Phys. Rev. Lett. 108, 030402 (2012).
\bibitem{10} D. A. Meyer, Phys. Rev. Lett. 83, 3751 (1999).
\bibitem{11} A. Kent, Phys. Rev. Lett. 83, 3755 (1999).
\bibitem{3} G. Kirchmair et al., Nature 460, 494 (2009).
\bibitem{4} E. Amselem, M. Radmark, M. Bourennane, A. Cabello, Phys. Rev.
Lett. 103, 160405 (2009).
\bibitem{4a} Y.-F. Huang et al., Phys. Rev. Lett. 90, 250401 (2003).
\bibitem{5} H. Bartosik et al., Phys. Rev. Lett. 103, 040403 (2009).
\bibitem{6} O. Moussa, C. A. Ryan, D. G. Cory, R. Laflamme, Phys. Rev. Lett.
104, 160501 (2010).
\bibitem{7} R. Lapkiewicz et al., Nature (London) 474, 490 (2011).
\bibitem{9a} A. A. Klyachko, M. A. Can, S. Binicioglu, A. S. Shumovsky,
Phys. Rev. Lett. 101, 020403 (2008).
\bibitem{9b} P. Badziag, I. Bengtsson, A. Cabello, and I. Pitowsky, Phys.
Rev. Lett. 103, 050401 (2009).
\bibitem{12} A. Cabello, Preprint: arXiv:1112.5149.
\bibitem{14} P. G. Kwiat, E. Waks, A. G. White, I. Appelbaum, Phys. Rev. A
60, R773 (1999).
\end{thebibliography}
\section{Supplementary information: State-independent experimental test of
quantum contextuality in an indivisible system}
This supplementary information gives the detailed configurations and data
for the experimental test of state-independent quantum contextuality on a
single photonic qutrit. In Sec. I, we first give the configurations of the
wave plates to prepare different input states for a single photonic qutrit,
and then summarize the configurations of the experiment to measure all the
observables and their correlations in the KS\ inequalities. In Sec. II, we
show the method to calculate the correlations from the measured joint
probabilities, using a particular input state as an example. In Sec. III, we
give the detailed data of the correlation measurements for the other eight
input states that are not presented in the main manuscript.
\section{Configurations of the wave plates for state preparation and
measurement}
\begin{table}[tbp]
\includegraphics[width=8.5cm,height=6cm]{Table_1_preparation.pdf}
\caption[Supplementary Table. 1 ]{Angles of half-wave plates (HWP) to
prepare different states for a single photonic qutrit.}
\end{table}
To demonstrate that the violation of the KS\ inequalities is independent of
the state of the system, we need to prepare different types of input states
for the single photonic qutrit. This is achieved by adjusting the angles of
three half wave plates HWP0, HWP1, and HWP2 in the experimental setup shown
in Fig. 2 of the manuscript. To prepare a pure input state, the polarization
of the pumping laser is set to $|V\rangle $ (vertically polarized) by the
HWP0. With the type-I phase matching in the BBO\ nonlinear crystal, the
generated signal and idler photons are both in the polarization state $
|H\rangle $. After the heralding measurement of the idler photon, the
polarization of the signal photon is rotated by the HWP1 and HWP2. The
polarization beam splitter (PBS) transmits the photon when it is in $
|H\rangle $ polarization and reflects it when it is in $|V\rangle $
polarization. A half-wave plate at angle $\theta $ transforms the
polarization basis states $\left\vert H\right\rangle $ and $\left\vert
V\right\rangle $ according to $\left\vert H\right\rangle \rightarrow \cos
\left( 2\theta \right) \left\vert H\right\rangle +\sin \left( 2\theta
\right) \left\vert V\right\rangle $ and $\left\vert V\right\rangle
\rightarrow \cos \left( 2\theta \right) \left\vert V\right\rangle -\sin
\left( 2\theta \right) \left\vert H\right\rangle $. To prepare an arbitrary
input state $c_{0}|0\rangle +c_{1}|1\rangle +c_{2}|2\rangle $, the angle of
the HWP1 sets the branching ratio $c_{0}/\sqrt{c_{1}^{2}+c_{2}^{2}}$, and
the angle of the HWP2 then determines $c_{1}/c_{2}$. For the seven pure
input states in the experiment, the corresponding angles of the HWP1 and
HWP2 are listed in Table I.
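As an illustration, the wave-plate algebra above can be checked numerically. The sketch below (a minimal model; the path labelling and the sign conventions of the rotations are our assumptions, not taken from the setup) composes the HWP1/HWP2 rotations and reproduces the angle of about $27.37^{\circ}$ quoted later for the balanced three-component superposition:

```python
import numpy as np

def hwp(theta_deg):
    # Half-wave plate action on the (|H>, |V>) amplitudes:
    # |H> -> cos(2t)|H> + sin(2t)|V>,  |V> -> -sin(2t)|H> + cos(2t)|V>
    t = 2.0 * np.deg2rad(theta_deg)
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

def qutrit_amplitudes(theta1_deg, theta2_deg):
    # HWP1 sets the weight of |0> against the |1>,|2> pair; HWP2 then
    # splits the remainder between |1> and |2> (conventions assumed).
    c0, rest = hwp(theta1_deg) @ np.array([1.0, 0.0])
    c1, c2 = rest * (hwp(theta2_deg) @ np.array([1.0, 0.0]))
    return np.array([c0, c1, c2])

# angle reproducing the balanced superposition |s> = (|0>+|1>+|2>)/sqrt(3)
theta1 = np.degrees(np.arccos(1.0 / np.sqrt(3.0))) / 2.0   # about 27.37 degrees
amps_s = qutrit_amplitudes(theta1, 22.5)
```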
It is more striking to see that the KS\ inequalities are violated even for
completely mixed states. To prepare a mixed state for the signal photon, we
rotate the polarization of the pumping laser to $\left( |H\rangle +|V\rangle
\right) /\sqrt{2}$ by setting the angle of HWP0 at $22.5^{\circ}$. The output
state for the signal and the idler photon after the BBO\ crystal is a
maximally entangled one with the form $|\Psi \rangle _{si}=\left( |HH\rangle
+e^{i\varphi }|VV\rangle \right) /\sqrt{2}$, where $\varphi $ is a relative
phase of the two polarization components. After the heralding measurement of
the idler photon, the state of the signal photon is described by the reduced
density matrix $(|H\rangle \left\langle H\right\vert +|V\rangle \left\langle
V\right\vert )/2$. If we set the HWP1 and HWP2 respectively at the angle of $
0^{\circ}$ and $45^{\circ}$, this polarization mixed state is transferred to the
qutrit mixed state $\rho _{8}=(|0\rangle \left\langle 0\right\vert
+|2\rangle \left\langle 2\right\vert )/2$ as shown in Table 1. To prepare
the mixed state $\rho _{9}$ (the most noisy qutrit state), we set the HWP0
at the angle $27.37^{\circ}$ and the density operator for the signal photon
right after the BBO\ crystal becomes $(|H\rangle \left\langle H\right\vert
+2|V\rangle \left\langle V\right\vert )/3$. When we set the HWP1 and HWP2
respectively at $0^{\circ}$ and $22.5^{\circ}$, the photonic qutrit is described by
the state $\left[ |0\rangle \left\langle 0\right\vert +|1\rangle
\left\langle 1\right\vert +|2\rangle \left\langle 2\right\vert +\left(
e^{i\phi }|1\rangle \left\langle 2\right\vert +H.c.\right) \right] /3$,
where the relative phase $\phi =0$ in the ideal case. However, we randomly
tilt the HWP3 in this experiment which sets the phase $\phi $ to a random
value. After averaging over many experimental runs to measure the correlation,
the effective state for the qutrit is described by $\rho _{9}=(|0\rangle
\left\langle 0\right\vert +|1\rangle \left\langle 1\right\vert +|2\rangle
\left\langle 2\right\vert )/3=I/3$, the completely mixed state.
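The phase-averaging argument can be checked directly: averaging the density matrix over a uniformly distributed phase $\phi$ removes the $|1\rangle \langle 2|$ coherence exactly, leaving $I/3$. A short numerical sketch:

```python
import numpy as np

def rho_of_phi(phi):
    # Qutrit state before phase averaging: the |1><2| coherence carries
    # the relative phase phi set by the random tilt of HWP3.
    rho = np.eye(3, dtype=complex) / 3
    rho[1, 2] = np.exp(1j * phi) / 3
    rho[2, 1] = np.exp(-1j * phi) / 3
    return rho

# uniform average over the phase kills the off-diagonal terms
phis = 2 * np.pi * np.arange(1000) / 1000
rho_avg = sum(rho_of_phi(p) for p in phis) / len(phis)
```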
\begin{table}[tbp]
\includegraphics[width=8.5cm,height=6cm]{Table_2_measurement.pdf}
\caption[Supplementary Table. 2 ]{Angles of half-wave plates (HWP5 and HWP6
in Fig. 1 of the main manuscript) to measure correlations of different
compatible observables, where $A_{h_{0}^{c}}$, $A_{h_{1}^{c}}$, $
A_{h_{2}^{c}}$, and $A_{h_{3}^{c}}$\ correspond, respectively, to the
projection operators onto the states $\left\vert h_{0}^{c}\right\rangle
=(|0\rangle +|1\rangle -2|2\rangle )/\protect\sqrt{6}$, $\left\vert
h_{1}^{c}\right\rangle =(-|0\rangle +|1\rangle -2|2\rangle )/\protect\sqrt{6}
$, $\left\vert h_{2}^{c}\right\rangle =(|0\rangle -|1\rangle -2|2\rangle )/
\protect\sqrt{6}$, and $\left\vert h_{3}^{c}\right\rangle =(|0\rangle
+|1\rangle +2|2\rangle )/\protect\sqrt{6}$. }
\end{table}
To detect the KS inequalities, we need to measure the $13$ observables $
\left\langle A_{i}\right\rangle $ and $24$ compatible combinations of their
pair-wise correlations $\left\langle A_{i}A_{j}\right\rangle $. By rotating
the angles of the HWP5 and HWP6, we choose the measurement bases so that the
photon counts at the detectors D1, D2, or D3 correspond to a measurement of
the compatible combinations of the projection operators $A_{i}$. In Table
II, we list the angles of the HWP5 and HWP6 and the corresponding operators
detected by the single-photon detectors D1, D2, and D3. With these
configurations of the wave plates, we read out the $13$ expectation values $
\left\langle A_{i}\right\rangle $ and $16$ of their compatible pair-wise
correlations. The other $8$ compatible correlations $\left\langle A_{y_{\mu
}^{\pm }}A_{h_{\alpha }}\right\rangle $ ($\mu =1,2;\alpha =0,1,2,3$) are
obtained from the correlations $\left\langle A_{y_{3}^{\pm }}A_{h_{\alpha
}}\right\rangle $ with an exchange of the basis-vectors $|2\rangle
\leftrightarrow |0\rangle $ or $|2\rangle \leftrightarrow |1\rangle $. So,
to measure $\left\langle A_{y_{\mu }^{\pm }}A_{h_{\alpha }}\right\rangle $,
we use the same configurations as specified by the last four rows of Table II
and exchange the basis-vectors $|2\rangle \leftrightarrow |0\rangle $ (or $
|1\rangle $) in the input state through an appropriate rotation of the HWP1
and HWP2.
\section{Calculation of correlations from the measured joint probabilities}
The outcomes for observables $A_{i}$ are either $+1$ or $-1$, depending on
whether there is a photon click (or no click) in the corresponding photon
detector. The correlation $\left\langle A_{i}A_{j}\right\rangle $ of the
compatible observables $A_{i}$ and $A_{j}$ is constructed from the four
measured joint probabilities $P(A_{i}=\pm 1,A_{j}=\pm 1)$, where the latter
are read out from coincidences of the single-photon detectors through
the relation
\begin{eqnarray}
\langle A_{i}A_{j}\rangle &=&P(A_{i}=1,A_{j}=1)+P(A_{i}=-1,A_{j}=-1) \notag
\\
&&-P(A_{i}=1,A_{j}=-1)-P(A_{i}=-1,A_{j}=1).
\end{eqnarray}
All the events need to be heralded by the single-photon detector D0. So the
joint probability $P(A_{i}=-1,A_{j}=+1)$ corresponds to the coincidence rate
$\left\langle D0,Di\right\rangle $ of the detectors D0 and Di, normalized by
the total coincidence $\left\langle D0,D1\right\rangle +\left\langle
D0,D2\right\rangle +\left\langle D0,D3\right\rangle $. Similar expressions
hold for $P(A_{i}=+1,A_{j}=-1)$\ and $P(A_{i}=+1,A_{j}=+1)$. The joint
probability $P(A_{i}=-1,A_{j}=-1)$ is proportional to the three-photon
coincidence rate $\left\langle D0,Di,Dj\right\rangle $, which is smaller
than the two-photon coincidence rate by about four orders of magnitude in
our experiment. So the probability $P(A_{i}=-1,A_{j}=-1)$ is significantly
less than the error bars for the other joint probabilities and negligible in
the calculation of the correlation $\langle A_{i}A_{j}\rangle $ by Eq. (1).
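The bookkeeping can be sketched in a few lines (with the convention, assumed here, that a click at the detector associated with $A_k$ corresponds to the outcome $A_k=-1$, and with $P(A_i=-1,A_j=-1)$ neglected as discussed above; the counts are illustrative, not measured values):

```python
def correlation(counts, i, j):
    """Estimate <A_i A_j> for a compatible pair from heralded coincidence
    counts {detector index: <D0,Dk> coincidences}. A click at detector k
    is taken as outcome A_k = -1 (assumption); the two-photon probability
    P(A_i=-1, A_j=-1) is neglected, as in the text."""
    total = sum(counts.values())
    p_mi = counts[i] / total          # P(A_i = -1, A_j = +1)
    p_mj = counts[j] / total          # P(A_i = +1, A_j = -1)
    p_pp = 1.0 - p_mi - p_mj          # P(A_i = +1, A_j = +1)
    return p_pp - p_mi - p_mj

# toy coincidence counts at D1, D2, D3 for one wave-plate setting
counts = {1: 100, 2: 100, 3: 800}
```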
In Table III, we list the measured joint probabilities for all the compatible
pairs $A_{i}$ and $A_{j}$, taking the superposition state $|\Psi _{7}\rangle
=|s\rangle =(|0\rangle +|1\rangle +|2\rangle )/\sqrt{3}$ as an example for
the input. The corresponding correlations $\left\langle
A_{i}A_{j}\right\rangle $ are calculated using Eq. (1). The numbers in
brackets represent the error bars on the last digit (or last two digits),
associated with the statistical error in the photon counts.
\section{Results of correlation measurements and KS\ inequalities for other
input states}
In the main manuscript, we have given the measured expectation values $
\left\langle A_{i}\right\rangle $ and the correlations $\left\langle
A_{i}A_{j}\right\rangle $ for a particular input state $|\Psi _{7}\rangle
=|s\rangle =(|0\rangle +|1\rangle +|2\rangle )/\sqrt{3}$. We have tested the
KS inequalities for $9$ different input states, ranging from the simple
basis vectors, to the superposition states, and to the most noisy mixed
states. For all the input states, we have observed significant violation of
the KS inequalities for single photonic qutrits. In this section, we list
the measured expectation values $\left\langle A_{i}\right\rangle $ and the
correlations $\left\langle A_{i}A_{j}\right\rangle $ for the other $8$ input
states (shown in Tables IV, V, VI, and VII).
\begin{widetext}
\begin{table}[tbp]
\includegraphics[width=18cm,height=15cm]{Table_3_calculation.pdf}
\caption[Supplementary Table. 3 ]{The measured joint probabilities for all the compatible
pairs $A_{i}$ and $A_{j}$ in the KS inequality under the input state $|\Psi_7\rangle = \frac{1}{\protect\sqrt{3}} (|0\rangle
+|1\rangle +|2\rangle)$}
\end{table}
\begin{table}[tbp]
\includegraphics[width=18cm,height=24cm]{Table_4_1.pdf}
\caption[Table.4_1]{The measured expectation values $\left\langle A_{i}\right\rangle $ and
the correlations $\left\langle A_{i}A_{j}\right\rangle $ for all the compatible pairs under
input states $|\Psi _{1}\rangle =|0\rangle $ and $|\Psi _{2}\rangle =|1\rangle $.}
\end{table}
\begin{table}[tbp]
\includegraphics[width=18cm,height=24cm]{Table_4_2.pdf}
\caption[Table.4_2]{The measured expectation values $\left\langle A_{i}\right\rangle $ and
the correlations $\left\langle A_{i}A_{j}\right\rangle $ for all the compatible pairs under
input states $|\Psi _{3}\rangle =|2\rangle $ and $|\Psi _{4}\rangle =(|0\rangle
+|1\rangle )/\sqrt{2}$.}
\end{table}
\begin{table}[tbp]
\includegraphics[width=18cm,height=24cm]{Table_4_3.pdf}
\caption[Table.4_3]{The measured expectation values $\left\langle A_{i}\right\rangle $ and
the correlations $\left\langle A_{i}A_{j}\right\rangle $ for all the compatible pairs under
input states $|\Psi _{5}\rangle =(|0\rangle +|2\rangle )/\sqrt{2}$ and $|\Psi _{6}\rangle
=(|1\rangle +|2\rangle )/\sqrt{2}$.}
\end{table}
\begin{table}[tbp]
\includegraphics[width=18cm,height=24cm]{Table_4_4.pdf}
\caption[Table.4_4]{The measured expectation values $\left\langle A_{i}\right\rangle $ and
the correlations $\left\langle A_{i}A_{j}\right\rangle $ for all the compatible pairs under
input states $\rho _{8}=(|0\rangle \left\langle 0\right\vert +|2\rangle \left\langle
2\right\vert )/2$ and $\rho _{9}=(|0\rangle \left\langle 0\right\vert
+|1\rangle \left\langle 1\right\vert +|2\rangle \left\langle 2\right\vert )/3
$.}
\end{table}
\end{widetext}
\end{document} |
\begin{document}
\title{Structure of the interaction and energy transfer between an open quantum system and its environment\\}
\author{Tarek Khalil$^{a}$
\footnote{E-mail address: [email protected]}\\
and\\
Jean Richert$^{b}$
\footnote{E-mail address: [email protected]}\\
$^{a}$ Department of Physics, Faculty of Sciences(V),\\
Lebanese University, Nabatieh,
Lebanon\\
$^{b}$ Institut de Physique, Universit\'e de Strasbourg,\\
3, rue de l'Universit\'e, 67084 Strasbourg Cedex,
France}
\date{\today}
\maketitle
\begin{abstract}
Due to the coupling of a quantum system to its environment, energy can be transferred between the two subsystems in both directions. In the present study we consider this process in a general framework, for interactions with different properties, and show how these properties govern the exchange.
\end{abstract}
PACS numbers: 02.50.Ey, 02.50.Ga, 03.65.Aa, 05.60.Gg, 42.50.Lz
\vskip .2cm
Keywords: open quantum system, Markovian and non-Markovian systems, energy exchange in open quantum systems.\\
\section{Introduction}
Quantum systems are generally never completely isolated but interact with an environment with which they may exchange energy and other physical observables. Their properties are naturally affected by their coupling to the external world. The understanding and control of the influence of an environment on a given physical system is of crucial importance in different fields of physics and in technological applications, and has led to a large number of investigations ~\cite{reb,aba,kos,gol,cor}.
Among the quantities of interest, the energy exchange between a system and its environment is of prime importance. The total system composed of the considered system and its environment is closed. It conserves the physical observables, in particular the total energy it contains. However, the existence of an interaction between its two parts leads to possible exchanges between them: energy and other observables can be transferred between them in both directions. In the present work we study the conditions under which this transfer occurs.
In order to investigate the process a number of different approaches have been developed
\cite{bel,bag,fl1,fl2,esp}. They make use of the cumulant method as well as other developments ~\cite{car,sch,def}. In the following we shall use the cumulant method in order to work out the energy exchange and the speed at which this exchange takes place~\cite{gua}.
In previous studies ~\cite{kr1,kr2,kr3} we defined criteria which allow one to classify open quantum systems with respect to their behaviour in the presence of an environment. In order to do so we used a general formulation which relies on the examination of the properties of the density operator of the system and its environment. The dynamical behaviour of this operator is governed by the structure of the Hamiltonians of the system, its environment and the coupling Hamiltonian which acts between them. The method introduces a general form of the total density operator and avoids the determination of explicit solutions of the equation of motion which governs the time evolution of the system. We follow this formalism in order to work out different cases of physical interest concerning the energy exchange process.\\
The analysis is presented in the following way. In section 2 we define the energy exchange and its rate starting from the characteristic function which generates these quantities in terms of its first and second moment. Section 3 introduces a general formal expression of the density operator at the initial time and the structure of the total Hamiltonian of the system, the environment and their interaction. The expressions of the energy exchange and the exchange rate are explicitly written out. In section 4 we analyze and work out two different cases. They exemplify the role of the interaction between the system and the environment which may or may not induce the time divisibility property of the open system. Conclusions are drawn in section 5.
\section{Energy transfer between S and E: the cumulant approach}
Consider a system $S$ coupled to an environment $E$. The time dependent density operator of the total system is $\hat \rho_{SE}(t)$, and $S$ and $E$ are coupled by an interaction $\hat H_{SE}$. The interaction may generate an energy exchange between the two parts. This quantity can be worked out by means of the cumulant method ~\cite{gua} which was developed in a series of works, first for closed driven systems ~\cite{esp}, later extended to open systems, see for instance ~\cite{fl1,fl2} and further approaches quoted in ~\cite{gua}.
Define the modified density operator
\begin{eqnarray}
\rho^{\sim}_{SE}(t,0)=Tr_{E}[{\hat U_{\eta/2}(t,0)\hat \rho_{SE}(0)\hat U^{+}_{-\eta/2}(t,0)}]
\label{eq1}
\end{eqnarray}
where $Tr_{E}$ is the trace over the states in $E$ space and
\begin{eqnarray}
\hat U_{\eta}(t,0)=\exp(+i\eta \hat H_{E})\hat U(t,0)\exp(-i\eta \hat H_{E})
\label{eq2}
\end{eqnarray}
Here $\hat U(t,0)$ is the time evolution operator of the interacting total system $S+E$.
The characteristic function obtained from the generating function reads
\begin{eqnarray}
\chi^{(\eta)}(t)=Tr_{S}(\rho^{\sim}_{SE}(t,0))
\label{eq3}
\end{eqnarray}
From $\chi^{(\eta)}(t)$ one derives the energy exchange between the environment and the system
\begin{eqnarray}
\Delta E(t)=\frac{d \chi^{(\eta)}(t)}{d (i\eta)}_{| \eta=0}
\label{eq4}
\end{eqnarray}
The speed at which the energy flows between the system and its environment is given by
\begin{eqnarray}
V_{E}(t)=\frac{\partial \dot \chi^{(\eta)}(t)}{\partial (i\eta)}_{|\eta=0}
\label{eq5}
\end{eqnarray}
where $\dot \chi^{(\eta)}(t)$ is the time derivative of $\chi^{(\eta)}(t)$. $V_{E}(t)$ is positive if the energy flows from the system to the environment and negative if the flow is reversed.
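These definitions can be checked on a small numerical example. The sketch below (generic random Hermitian matrices, $\hbar=1$, with the counting field coupled to the environment Hamiltonian $\hat H_{E}$) builds $\chi^{(\eta)}(t)$ for a qubit system coupled to a three-level environment and verifies that the derivative in Eq. (4), evaluated by finite differences, equals the mean energy gained by the environment:

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_herm(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

def expi(Hm, s):
    # exp(i s H) for Hermitian H, via eigendecomposition (hbar = 1)
    w, v = np.linalg.eigh(Hm)
    return (v * np.exp(1j * s * w)) @ v.conj().T

dS, dE = 2, 3
HE = np.kron(np.eye(dS), np.diag(rng.normal(size=dE)))
H = np.kron(rand_herm(dS), np.eye(dE)) + HE + 0.5 * rand_herm(dS * dE)

t = 0.8
U = expi(H, -t)                                  # U(t,0) = exp(-i H t)

psi = np.array([1.0, 1.0]) / np.sqrt(2)          # pure state of S
rho0 = np.kron(np.outer(psi, psi), np.diag([0.5, 0.3, 0.2]))

def chi(eta):
    # characteristic function of Eqs. (1)-(3)
    V = expi(HE, eta / 2)                        # exp(+i eta H_E / 2)
    U_plus = V @ U @ V.conj().T                  # U_{eta/2}
    U_minus = V.conj().T @ U @ V                 # U_{-eta/2}
    return np.trace(U_plus @ rho0 @ U_minus.conj().T)

eps = 1e-5
dE_fcs = ((chi(eps) - chi(-eps)) / (2j * eps)).real       # Eq. (4), numerically
dE_direct = np.trace(HE @ (U @ rho0 @ U.conj().T - rho0)).real
```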
\section{Energy flow and speed of energy exchange}
\subsection{The density operator}
At time $t=0$ we choose the system to be in the pure state
\begin{eqnarray}
|\psi(0)\rangle=\sum_{{i_k}}c_{i_k}|i_k\rangle
\label{eq6}
\end{eqnarray}
where the normalized states $\{|i_{k}\rangle\}$ are eigenstates of the Hamiltonian $\hat H_{S}$. The environment is described in terms of its density matrix chosen as
$\{ |\alpha\rangle d_{\alpha,\alpha}\langle \alpha|\}$ where $\{d_{\alpha,\alpha}\}$ are the statistical weights of the density matrix in a diagonal basis of states $\{\alpha\}$.
Given these bases of states in $S$ space and in $E$ space the density operator of the total system at
time $t=0$ is written as~\cite{buz}
\begin{eqnarray}
\hat \rho_{SE}(0)=\hat \rho_{S}(0) \otimes \hat \rho_{E}(0)
\notag\\
\hat \rho_{S}(0)=\sum_{k,l}|i_{k}\rangle c_{i_{k}}c^{*}_{i_{l}}\langle i_{l}|
\notag\\
\hat \rho_{E}(0)=\sum_{\alpha}|\alpha \rangle d_{\alpha,\alpha} \langle \alpha|
\label{eq7}
\end{eqnarray}
The density operator $\hat \rho_{SE}$ describes a system in the absence of an interaction
$\hat H_{SE}$ at $t=0$, hence in the absence of an initial entanglement between $S$ and $E$.
The total Hamiltonian $\hat H$ reads
\begin{eqnarray}
\hat H=\hat H_{S}+\hat H_{E}+\hat H_{SE}
\label{eq8}
\end{eqnarray}
and
\begin{eqnarray}
\hat H_{S}|i_{k}\rangle=\epsilon_{i_{k}}|i_{k}\rangle
\notag\\
\hat H_{E}|\gamma\rangle=E_{\gamma}|\gamma\rangle
\label{eq9}
\end{eqnarray}
where $\{|i_{k}\rangle\}$ and $\{|\gamma\rangle\}$ are the eigenvector bases of $\hat H _{S}$ and $\hat H_{E}$. At time $t$ the evolution of $S+E$ is given by
\begin{eqnarray}
\hat\rho_{S+E}(t)= \hat U(t)\hat \rho_{S+E}(0) \hat U^{+}(t)
\label{eq10}
\end{eqnarray}
where $\hat U(t,0)=e^{-i\hat H t}$ is the evolution operator of $S+E$.
\subsection{Explicit expressions of the energy exchange and the speed of the flow}
Using the definitions given in Eqs. (4) and (5) the energy transfer and speed of the energy flow read
\begin{eqnarray}
\Delta E(t)=\frac{1}{2}Tr_{S}Tr_{E}\{[\hat H_{E},\hat U(t,0)]\hat\rho_{S+E}(0)\hat U^{+}(t,0)\}+h.c.
\label{eq11}
\end{eqnarray}
and
\begin{eqnarray}
V_{E}(t)=\frac{1}{2}Tr_{S}Tr_{E}\{[\hat H_{E},\frac{d}{dt}\hat U(t,0)]\hat\rho_{S+E}(0)\hat U^{+}(t,0)\}+h.c.
\label{eq12}
\end{eqnarray}
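A quick numerical consistency check of Eq. (11) (a sketch with generic Hermitian matrices, $\hbar=1$; the dimensions and weights are illustrative): the commutator expression should reproduce the mean energy gained by the environment, $Tr\{\hat H_{E}[\hat\rho_{S+E}(t)-\hat\rho_{S+E}(0)]\}$.

```python
import numpy as np

rng = np.random.default_rng(2)

def rand_herm(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

dS, dE = 2, 2
HE = np.kron(np.eye(dS), np.diag([0.0, 1.3]))
H = np.kron(rand_herm(dS), np.eye(dE)) + HE + 0.4 * rand_herm(dS * dE)

w, v = np.linalg.eigh(H)
U = (v * np.exp(-1j * w * 1.1)) @ v.conj().T     # U(t,0) at t = 1.1

rho0 = np.kron(np.diag([0.7, 0.3]), np.diag([0.8, 0.2])).astype(complex)

# Eq. (11): Delta E(t) = (1/2) Tr{[H_E, U] rho(0) U^+} + h.c.
T = 0.5 * np.trace((HE @ U - U @ HE) @ rho0 @ U.conj().T)
delta_E_eq11 = (T + T.conjugate()).real

# the mean energy gained by the environment
delta_E_direct = np.trace(HE @ (U @ rho0 @ U.conj().T - rho0)).real
```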
Developing these expressions in the bases of states given above they read
\begin{eqnarray}
\Delta E(t)=\frac{1}{2}\sum_{j,\gamma}\sum_{i_{1},i_{2},\gamma_{1}}c_{i_{1}} c^{*}_{i_{2}}
\langle j \gamma|[\hat H_{E},e^{-i\hat H t}]|i_{1} \gamma_{1}\rangle
\notag\\
d_{\gamma_{1},\gamma_{1}}\langle i_{2} \gamma_{1}|e^{+i\hat H t}| j \gamma \rangle +h.c.
\label{eq13}
\end{eqnarray}
and
\begin{eqnarray}
V_{E}(t)=\frac{(-i)}{2} \sum_{j,\gamma} \sum_{\alpha_{1},i_{1},i_{2}}
c_{i_{1}} c^{*}_{i_{2}} d_{\alpha_{1},\alpha_{1}}
\notag\\
\sum_{j_{1}, \gamma_{1}}\{\langle j \gamma|[\hat H_{E},\hat H_{SE}]|j_{1}, \gamma_{1}\rangle
\langle j_{1}, \gamma_{1}|e^{-i\hat H t}|i_{1}\alpha_{1}\rangle
\langle i_{2}\alpha_{1}|e^{+i\hat H t}|j \gamma \rangle
\notag\\
+ \langle j \gamma|\hat H|j_{1} \gamma_{1}\rangle
\langle j_{1} \gamma_{1}|[\hat H_{E},e^{-i\hat H t}]|i_{1} \alpha_{1}\rangle
\langle i_{2} \alpha_{1}|e^{+i\hat H t}|j \gamma \rangle
\notag\\
- \langle j \gamma|\hat H e^{-i\hat H t}|i_{1} \alpha_{1}\rangle
\langle i_{2}\alpha_{1}|[\hat H_{E}, e^{+i\hat H t}]|j \gamma \rangle \}
+h.c.
\label{eq14}
\end{eqnarray}
\section{Properties of the interaction Hamiltonian}
We now use the general expressions of the energy transfer and its exchange rate in order to test the role of the interaction Hamiltonian in this process. Since $S$ and $E$ are distinct physical systems, their Hamiltonians satisfy the commutation relation $[\hat H_{S},\hat H_{E}]=0$. Former work ~\cite{kr1} has shown that one may consider two cases which are of special interest:\\
(a) $[\hat H_{E},\hat H_{SE}]=0$\\
(b) $[\hat H_{S},\hat H_{SE}]=0$
\subsection{Case (a)}
It has been shown elsewhere ~\cite{kr1,kr2} that if $\hat H_{E}$ and $\hat H_{SE}$ commute, the evolution of the system $S$ is characterized by the divisibility property, which is a specific property of Markovian systems.
Since $\hat H_{S}$ commutes with $\hat H_{E}$, it follows that $\hat H_{E}$ commutes with the total Hamiltonian $\hat H$, hence also with $\hat U(t,0)$ and its derivatives. Going back to the expressions of $\Delta E(t)$ and $V_{E}(t)$, it comes out that $\Delta E(t)=V_{E}(t)=0$.
There is no energy exchange between $S$ and $E$ in this case. The physical explanation is the following: the divisibility property imposes that the environment stays in a fixed state at a fixed energy which blocks any possible transfer of energy between the two parts of the total system $S+E$. This is due to the fact that $\hat H_{SE}$ is diagonal in $E$ space, hence the considered state $|\gamma \rangle$ stays the same over any time interval.
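This conclusion is easy to verify numerically: for any interaction that is diagonal in $E$ space (so that $[\hat H_{E},\hat H_{SE}]=0$), the mean energy of the environment is constant. A minimal sketch with randomly chosen matrices ($\hbar=1$; the dimensions and weights are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

def rand_herm(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

dS, dE = 2, 3
HS = np.kron(rand_herm(dS), np.eye(dE))
HE = np.kron(np.eye(dS), np.diag([0.0, 0.7, 1.9]))
# interaction chosen diagonal in E space, so that [H_E, H_SE] = 0 (case (a))
HSE = np.kron(rand_herm(dS), np.diag(rng.normal(size=dE)))
H = HS + HE + HSE

psi = np.array([1.0, 1.0j]) / np.sqrt(2)
rho0 = np.kron(np.outer(psi, psi.conj()), np.diag([0.5, 0.3, 0.2]))

w, v = np.linalg.eigh(H)
def delta_E(t):
    # Delta E(t) = Tr{H_E [rho(t) - rho(0)]}
    U = (v * np.exp(-1j * w * t)) @ v.conj().T
    return np.trace(HE @ (U @ rho0 @ U.conj().T - rho0)).real

energies = [delta_E(t) for t in (0.5, 1.0, 5.0)]
```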
\subsection{Evolution of the energy: an impurity immersed in a bosonic condensate}
We introduce the Hamiltonian of a fermionic impurity interacting with a Bose-Einstein condensate ~\cite{chr}. It is given by $\hat H= \hat H_{S}+\hat H_{E}+\hat H_{SE}$ where
\begin{center}
\begin{eqnarray}
\hat H_{S}=\sum_{\vec k}\epsilon_{\vec k}c^{+}_{\vec k}c_{\vec k}
\label{eq15}
\end{eqnarray}
\end{center}
\begin{center}
\begin{eqnarray}
\hat H_{E}=\sum_{\vec k}e_{\vec k}a^{+}_{\vec k}a_{\vec k}+\frac{1}{2V}\sum_{\vec k_{1}\vec k_{2}
\vec q}V_{B}(\vec q)a^{+}_{\vec k_{1}+\vec q}a^{+}_{\vec k_{2}-\vec q}
a_{\vec k_{2}}a_{\vec k_{1}}
\label{eq16}
\end{eqnarray}
\end{center}
\begin{center}
\begin{eqnarray}
\hat H_{SE}=\frac{1}{V}\sum_{\vec k_{3}\vec k_{4}\vec q}c^{+}_{\vec k_{3}+\vec q}c_{\vec k_{4}}
a^{+}_{\vec k_{4}-\vec q}a_{\vec k_{3}}
\label{eq17}
\end{eqnarray}
\end{center}
where $[c,c^{+}]$ and $[a,a^{+}]$ are fermion and boson annihilation and creation operators.
We consider the case where the momentum transfer $\vec q=0$. Then one expects that there is no energy exchange between $S$ and $E$. This is indeed so since a simple calculation shows that
$[\hat H_{E},\hat H_{SE}]=0$ in this case.
\subsection{Case (b)}
Start from the general expression of $\Delta E(t)$ given by Eq.(13).
Since $[\hat H_{S},\hat H_{SE}]=0$ all the matrix elements are diagonal in $S$ space and the expression takes the form
\begin{eqnarray}
\Delta E(t)=\sum_{j}|c_{j}|^{2}\sum_{\gamma,\gamma_{1}}(E_{\gamma}-E_{\gamma_{1}})
\langle j \gamma|e^{-it \hat H}|j \gamma_{1}\rangle d_{\gamma_{1}\gamma_{1}}
\langle j \gamma_{1}|e^{+it \hat H}|j \gamma \rangle
\label{eq18}
\end{eqnarray}
In order to follow the evolution of $\Delta E(t)$ in time we determine the expression of $V_{E}(t)$.
A somewhat lengthy but straightforward calculation leads to the following expression:
\begin{eqnarray}
V_{E}(t)=V^{(1)}_{E}(t)+ c.c. +V^{(2)}_{E}(t)+ c.c.
\label{eq19}
\end{eqnarray}
where
\begin{eqnarray}
V^{(1)}_{E}(t)=(-i)\sum_{j}|c_{j}|^{2}\sum_{\gamma,\gamma_{1}}(E_{\gamma}-E_{\gamma_{1}})
\langle j \gamma|\hat H_{SE}|j \gamma_{1}\rangle
\notag\\
\sum_{\gamma_{2}}\langle j \gamma_{1}|e^{-it \hat H}|j \gamma_{2}\rangle
d_{\gamma_{2},\gamma_{2}} \langle j \gamma_{2}|e^{+it \hat H}|j \gamma \rangle
\label{eq20}
\end{eqnarray}
with $V^{(1)*}_{E}(t)$ its complex conjugate. It turns out that $V^{(1)*}_{E}(t)=V^{(1)}_{E}(t)$, which means that $V^{(1)}_{E}(t)$ is real.
The second term reads
\begin{eqnarray}
V_{E}^{(2)}(t)= (-i)\sum_{j}|c_{j}|^{2}\sum_{\gamma \gamma_{1}}(E_{\gamma}+\epsilon_{j})
\sum_{\gamma_{2}}(E_{\gamma}-E_{\gamma_{2}})
\notag\\
\langle j \gamma_{1}|e^{-it \hat H}|j \gamma_{2}\rangle
d_{\gamma_{2},\gamma_{2}}\langle j \gamma_{2}|e^{+it \hat H}|j \gamma \rangle
\label{eq21}
\end{eqnarray}
It is easy to see that $V_{E}^{(2)}(t) +c.c.=0$, hence $V_{E}(t)=2V_{E}^{(1)}(t)$.
For $t=0$
\begin{eqnarray}
\Delta E(0)= \sum_{j}|c_{j}|^{2}\sum_{\gamma,\gamma_{1}}(E_{\gamma}-E_{\gamma_{1}})
\langle j \gamma|j \gamma_{1}\rangle d_{\gamma_{1}\gamma_{1}}
\langle j \gamma_{1}|j \gamma \rangle=
\notag\\
\sum_{j}|c_{j}|^{2}\sum_{\gamma,\gamma_{1}}(E_{\gamma}-E_{\gamma_{1}})\delta_{\gamma \gamma_{1}}
\label{eq22}
\end{eqnarray}
Hence $\Delta E(0)=0$ which could have been anticipated from the symmetry property of the expression of $\Delta E(t)$.\\
But contrary to case $(a)$ the energy transfer is now different from zero. It varies with time, so that $\Delta E(t)$ increases or decreases depending on the sign of $V_{E}(t)$. This is due to the fact that many channels can now open in $E$ space because $\hat H_{SE}$ is no longer diagonal in this space.
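A minimal numerical model (our own illustration, not taken from the text) shows this behaviour: with $\hat H_{S}$ and $\hat H_{SE}$ both diagonal in $S$ space but $\hat H_{SE}$ off-diagonal in $E$ space, one has $[\hat H_{S},\hat H_{SE}]=0$, $\Delta E(0)=0$, and $\Delta E(t)\neq 0$ at later times.

```python
import numpy as np

# qubit system + two-level environment (assumed for illustration):
# H_S and H_SE diagonal in S space, H_SE off-diagonal in E space,
# so [H_S, H_SE] = 0 while [H_E, H_SE] != 0
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
HS = np.kron(np.diag([0.0, 1.0]), np.eye(2))
HE = np.kron(np.eye(2), np.diag([0.0, 1.0]))
HSE = np.kron(np.diag([0.4, 0.8]), sx)
H = HS + HE + HSE

psi = np.array([1.0, 1.0]) / np.sqrt(2)                   # c_j = 1/sqrt(2)
rho0 = np.kron(np.outer(psi, psi), np.diag([1.0, 0.0]))   # E in its ground state

w, v = np.linalg.eigh(H)
def delta_E(t):
    # Delta E(t) = Tr{H_E [rho(t) - rho(0)]}
    U = (v * np.exp(-1j * w * t)) @ v.conj().T
    return np.trace(HE @ (U @ rho0 @ U.conj().T - rho0)).real
```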
\subsection{Evolution of the energy: two examples}
In order to illustrate this case we develop two models on which we exemplify the time dependence of the energy transfer between a system and its environment when $[\hat H_{S},\hat H_{SE}]=0$.
\begin{itemize}
\item {\bf First example}: we consider a model in which the environment $E$
is a two-level system with states $|1\rangle$ and $|2\rangle$. The Hamiltonian $\hat H$ decomposes into two parts
$\hat H_{0}=\hat H_{S}+\hat H_{SE}$ and $\hat H_{E}$. We consider the case where
\begin{eqnarray}
[\hat H_{0},[\hat H_{0},\hat H_{E}]]=[\hat H_{E},[\hat H_{0}, \hat H_{E}]]=0
\label{eq23}
\end{eqnarray}
and $[\hat H_{0},\hat H_{E}]=c\hat 1$ where $c$ is a number. Then
\begin{eqnarray}
e^{\hat H_{0}+\hat H_{E}}=e^{\hat H_{0}}e^{\hat H_{E}}e^{-c/2}
\label{eq24}
\end{eqnarray}
and
\begin{eqnarray}
e^{i(\hat H_{0}+\hat H_{E})t}=e^{i\hat H_{0}t}e^{i\hat H_{E}t}e^{ct^{2}/2}
\label{eq25}
\end{eqnarray}
The quantities which enter the expressions which follow are defined in Appendix A.
The expressions of $\Delta E(t)$ and $V_{E}(t)$ read
\begin{eqnarray}
\Delta E(t)=e^{ct^{2}}\Delta_{12}\sum_{j}|c_{j}|^{2}(d_{22}-d_{11})[a_{j}^{(12)2}(t)+b_{j}^{(12)2}(t)]
\label{eq26}
\end{eqnarray}
where $\Delta_{12}=E_{1}-E_{2}$, $d_{11},d_{22}$ the weights of the states in $E$ space and
\begin{eqnarray}
V_{E}(t)=2e^{ct^{2}} \Delta_{12}\sum_{j} |c_{j}|^{2}[I^{(j)}_{12}Re\langle 2|\hat \Omega_{j}(t)|1\rangle + R^{(j)}_{12}
Im \langle 2|\hat \Omega_{j}(t)|1\rangle]
\label{eq27}
\end{eqnarray}
with
\begin{eqnarray}
Re\langle 2|\hat \Omega_{j}(t)|1\rangle=a_{j}^{11}(t)[a_{j}^{21}(t)\cos(\Delta_{12}t)+b_{j}^{21}(t)
\sin(\Delta_{12}t)]d_{11}
\notag\\
+a_{j}^{22}(t)a_{j}^{21}(t)d_{22}
\notag\\
Im\langle 2|\hat \Omega_{j}(t)|1\rangle=a_{j}^{11}(t)
[-b_{j}^{21}(t)\cos(\Delta_{12}t)+a_{j}^{21}(t)\sin(\Delta_{12}t)]d_{11}
\notag\\
+b_{j}^{21}(t)a_{j}^{22}(t)d_{22}
\label{eq28}
\end{eqnarray}
Both $\Delta E(t)$ and $V_{E}(t)$ are oscillating functions of time. $\Delta E(t)$ keeps a fixed sign depending on the sign of $\Delta_{12}$, while $V_{E}(t)$ may change sign with time. The energy and the speed of the energy transfer decay to zero for $c$ real and negative.\\
\item
{\bf Second example}: we consider the Hamiltonian $\hat H=\hat H_{S}+\hat H_{E}+\hat H_{SE}$ which governs the coupling of a phonon field to the electron in the BCS theory of superconductivity.
The total Hamiltonian of the electron-phonon system reads
\begin{center}
\begin{eqnarray}
\hat H_{S}=\sum_{\vec k_{1}}\epsilon_{\vec k_{1}}c^{+}_{\vec k_{1}}c_{\vec k_{1}}
\label{eq29}
\end{eqnarray}
\end{center}
\begin{center}
\begin{eqnarray}
\hat H_{E}=\sum_{\vec q}\hbar \omega_{\vec q}a^{+}_{\vec q}a_{\vec q}
\label{eq30}
\end{eqnarray}
\end{center}
\begin{center}
\begin{eqnarray}
\hat H_{SE}=V_{ph-e}(\vec q)\sum_{\vec k_{2}\vec q}(a^{+}_{-\vec q}+a_{\vec q})
c^{+}_{\vec k_{2}+\vec q}c_{\vec k_{2}}
\label{eq31}
\end{eqnarray}
\end{center}
where $V_{ph-e}(\vec q)$ is the phonon-electron interaction and $a,a^{+}$ and $c,c^{+}$ are phonon and electron annihilation and creation operators. We consider the case where the phonons evolve in the zero mode $\vec q=0$.
Then $\hat H_{E}=\hbar \omega_{0}a^{+}_{0}a_{0}$ and
$\hat H_{SE}=V_{ph-e}(0)\sum_{\vec k_{2}}(a^{+}_{0}+a_{0})c^{+}_{\vec k_{2}}c_{\vec k_{2}}$.
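As a side remark (a standard displaced-oscillator computation, stated here for orientation and not part of the paper's derivation): in a sector with a fixed number $\nu$ of occupied electron states, the zero-mode part of the Hamiltonian reduces to $\hbar\omega_{0}a^{+}_{0}a_{0}+g(a^{+}_{0}+a_{0})$ with $g:=\nu V_{ph-e}(0)$. Setting $\hbar=1$, the unitary displacement $\hat D=e^{-(g/\omega_{0})(a^{+}_{0}-a_{0})}$ shifts $a_{0}\to a_{0}-g/\omega_{0}$ and diagonalizes it:
\begin{eqnarray}
\hat D^{+}\big[\omega_{0}a^{+}_{0}a_{0}+g(a^{+}_{0}+a_{0})\big]\hat D=\omega_{0}a^{+}_{0}a_{0}-\frac{g^{2}}{\omega_{0}}
\nonumber
\end{eqnarray}
The spectrum thus remains equally spaced, which is consistent with the purely oscillatory time dependence of the matrix elements worked out below.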
The density operator of the total system $S+E$ at $t=0$ is given by the expression
\begin{eqnarray}
\hat \rho^{(n_{0},n'_{0})}_{(\vec k,\vec k')}=\frac{1}{2\pi(n_{0}!n'_{0}!)^{1/2}}|\vec k n_{0}\rangle \langle \vec k' n'_{0}|
\label{eq32}
\end{eqnarray}
Working out the commutation relation between $\hat H_{S}$ and $\hat H_{SE}$ leads to
$[\hat H_{S},\hat H_{SE}]=0$, hence the evolution operator can be written as
\begin{eqnarray}
e^{-i\hat Ht}=e^{-i\hat H_{S}t}e^{-i(\hat H_{E}+\hat H_{SE})t}
\label{eq33}
\end{eqnarray}
and in explicit form
\begin{eqnarray}
e^{-i\hat Ht}=e^{-it\sum_{k=1}^{\nu}\epsilon_{\vec k}c^{+}_{\vec k}c_{\vec k}}
e^{-i[\omega_{0} a^{+}_{0}a_{0}+\nu V_{ph-e}(0)(a^{+}_{0}+a_{0})]t}
\label{eq34}
\end{eqnarray}
where $\nu$ is the number of electron states. The quantities of interest are the time dependences of $\Delta E(t)$ and $V_{E}(t)$ given by Eqs.(18) and (19), hence the time evolution of the matrix elements of $e^{-i\hat Ht}$
\begin{eqnarray}
E^{(n_{0},n'_{0})}_{\vec k \vec k}(t)=\langle \vec k n_{0}|e^{-it\sum_{k=1}^{\nu}\epsilon_{\vec k}c^{+}_{\vec k}c_{\vec k}}e^{-i[\omega_{0} a^{+}_{0}a_{0}+\nu V_{ph-e}(0)(a^{+}_{0}+a_{0})]t}|\vec k n'_{0}\rangle
\label{eq35}
\end{eqnarray}
These matrix elements and their hermitian conjugates, which enter the expressions of $\Delta E(t)$ and $V_{E}(t)$, can be worked out explicitly using the Zassenhaus development~\cite{za}, see Appendix B and~\cite{ca}.
They read
\begin{eqnarray}
E^{(n_{0},n'_{0})}_{\vec k \vec k}(t)=N^{-1}e^{-i\epsilon_{k}t}e^{-i\omega_{0}n_{0}t} F^{(n_{0},n'_{0})}(t)
\label{eq36}
\end{eqnarray}
with
\begin{eqnarray}
F^{(n_{0},n'_{0})}(t)= \sum_{n_{2}\leq n_{0},n_{2}\leq n_{3}}
\sum_{n_{4}\leq n_{3},n_{4}\leq n'_{0}}(-i)^{n_{0}+n_{3}}(-1)^{n'_{0}+n_{2}-n_{4}}
\notag\\
\frac{n_{0}!n'_{0}!(n_{2}!)^{2}(n_{3}!)^{2}[\alpha(t)^{n_{0}+n_{3}-2n_{2}}]
[\zeta(t)^{n'_{0}+n_{3}-2n_{4}}]}{(n_{2})^{2} (n_{4})^{2}(n_{0}-n_{2})!(n_{3}-n_{4})!
(n_{3}-n_{2})!(n'_{0}-n_{4})!}e^{\Psi(t)}
\label{eq37}
\end{eqnarray}
and the normalization factor $N= 2\pi(n_{0}!n'_{0}!)^{1/2}$. The functions $\alpha(t)$, $\zeta(t)$
and $\Psi(t)$ read
\begin{eqnarray}
\alpha(t)=\frac{\nu V_{ph-e}(0)\sin\omega_{0}t}{\omega_{0}}
\label{eq38}
\end{eqnarray}
\begin{eqnarray}
\zeta(t)=\frac{\omega_{0}[1-\cos\nu V_{ph-e}(0)t]}{\nu V_{ph-e}(0)}
\label{eq39}
\end{eqnarray}
\begin{eqnarray}
\Psi(t)=-\frac{1}{2}[\frac{\nu^{2}V^{2}_{ph-e}(0)\sin^{2}(\omega_{0} t)}{\omega_{0}^{2}}+\frac{\omega_{0}^{2}(1-\cos\nu V_{ph-e}(0)t)^{2}}{\nu^{2}V_{ph-e}^{2}(0)}]
\label{eq40}
\end{eqnarray}
As one can see from these expressions, the $E^{(n_{0},n'_{0})}_{\vec k \vec k}(t)$ are oscillating functions of time, which leads to the conclusion that the energy transfer and the transfer velocity oscillate continuously and stay finite over any interval of time.
\end{itemize}
\section{Conclusions, remarks}
In the present work we used a cumulant approach~\cite{gua} in order to study the energy transfer between a system and its environment and the rate at which this transfer evolves in time.
If the Hamiltonian of the environment commutes with the interaction between the system and the environment,
no energy is transferred between the two subsystems. This can be explained in the following way. As already seen in former work, the commutation property is a sufficient condition for divisibility of the time evolution of an open system, one of the properties which characterize Markov processes~\cite{sti,riv}.
In this case it has been shown~\cite{kr1,kr2,kr3} that the environment remains in the energy state in which it was at the origin of time and stays there over any interval of time. Hence the energy of the environment is blocked in a fixed state, so that it can neither feed the system nor receive energy from it. Time delays are correlated with the possibility for the environment to jump between different states, in closed systems as well as in open ones (see f.i.~\cite{dod,def1,def2} and references quoted therein). This is the case in non-Markovian systems~\cite{zhe,hai,ren,gua1}.
The experimental realization of the absence of energy transfer may be obtained under different conditions:
\begin{itemize}
\item the strength of the interaction can be chosen so weak that it does not allow any jump to another level in the case of a discrete environment spectrum.
\item the temperature of the environment is kept close to zero so that the ground state is the only accessible state.
\item the commutation relation between the environment and the interaction is rigorously verified, which is the case discussed in the present work.
\end{itemize}
We also considered the case where the system and the interaction Hamiltonians commute with each other.
In this case energy can flow from the environment to the system and back; there is no blocking effect
coming from the environment, since several states are at hand and the energy exchange can take place.
In order to illustrate this situation we worked out two model systems corresponding to different physical situations. In the first case the energy exchange oscillates but decays to zero; in the second case, which models the electron-phonon system, the energy exchange and the exchange speed oscillate in time and never go to zero. This is so because the system behaves coherently, as has been shown in refs.~\cite{kr3,lid}.
\section{Appendix A}
We define the following matrix elements which enter the expressions of $\Delta E(t)$ and $V_{E}(t)$ given by Eqs.(16-18)
\begin{eqnarray}
\langle j 1|e^{-i\hat Ht}|j 1\rangle=e^{ct^{2}/2}e^{-i\epsilon_{j}t}e^{-iE_{1}t}a_{j}^{11}(t)
\notag\\
\langle j 2|e^{+i\hat Ht}|j 2\rangle=e^{ct^{2}/2}e^{+i\epsilon_{j}t}e^{+iE_{2}t}a_{j}^{22}(t)
\notag\\
\langle j 1|e^{+i\hat Ht}|j 2\rangle=e^{ct^{2}/2}e^{+i\epsilon_{j}t}e^{+iE_{1}t}
(a_{j}^{12}(t)+ib_{j}^{12}(t))
\notag\\
\langle j 2|e^{-i\hat Ht}|j 1\rangle=e^{ct^{2}/2}e^{-i\epsilon_{j}t}e^{-iE_{2}t}
(a_{j}^{21}(t)-ib_{j}^{21}(t))
\label{eq41}
\end{eqnarray}
where $\epsilon_{j}$ is the eigenvalue of the state $|j\rangle$ and $E_{k}$ $(k=1,2)$ are the eigenvalues of the states $|\gamma\rangle$
\begin{eqnarray}
a_{j}^{11}(t)=Re(\langle j 1|e^{-i \hat H_{SE}t}|j 1 \rangle)
\notag\\
a_{j}^{22}(t)=Re(\langle j 2|e^{+i \hat H_{SE}t}|j 2 \rangle)
\notag\\
a_{j}^{12}(t)=Re(\langle j 1|e^{+i \hat H_{SE}t}|j 2 \rangle)
\notag\\
a_{j}^{21}(t)=Re(\langle j 2|e^{-i \hat H_{SE}t}|j 1 \rangle)
\notag\\
b_{j}^{12}(t)=Im(\langle j 1|e^{+i \hat H_{SE}t}|j 2 \rangle)
\notag\\
b_{j}^{21}(t)=Im(\langle j 2|e^{-i \hat H_{SE}t}|j 1 \rangle)
\label{eq42}
\end{eqnarray}
We also introduce
\begin{eqnarray}
R^{(j)}_{12}=Re(\langle j 1|\hat H_{SE}|j 2 \rangle)
\notag\\
I^{(j)}_{12}=Im(\langle j 1|\hat H_{SE}|j 2 \rangle)
\label{eq43}
\end{eqnarray}
and
\begin{eqnarray}
\hat \Omega_{j}(t)=\sum_{\gamma}e^{-i\hat H t}|j \gamma\rangle d_{\gamma \gamma}\langle j \gamma|
e^{+i\hat H t}
\label{eq44}
\end{eqnarray}
Using these definitions the calculation leads to expressions (26) and (27) of $\Delta E(t)$ and $V_{E}(t)$ in the text.
\section{Appendix B: the Zassenhaus development}
If
$X=-i(t-t_{0})(\hat H_{S}+\hat H_{E})$ and $Y=-i(t-t_{0})\hat H_{SE}$
\begin{eqnarray}
e^{X+Y}=e^{X}e^{Y}e^{-c_{2}(X,Y)/2!}e^{-c_{3}(X,Y)/3!}e^{-c_{4}(X,Y)/4!}\dots
\label{eq45}
\end{eqnarray}
where
\begin{center}
$c_{2}(X,Y)=[X,Y]$\\
$c_{3}(X,Y)=2[[X,Y],Y]+[[X,Y],X]$\\
$c_{4}(X,Y)=c_{3}(X,Y)+3[[[X,Y],Y],Y]+[[[X,Y],X],Y]+[[X,Y],[X,Y]]$\\
\end{center}
The series has an infinite number of terms which can be generated iteratively in a straightforward way~\cite{ca}. If $[X,Y]=0$ the truncation at the third term leads to the factorisation of the $X$ and $Y$ contributions. If $[X,Y]=c$, where $c$ is a c-number, the expression corresponds to the well-known Baker-Campbell-Hausdorff formula.
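The order-by-order behaviour of the truncated series can be checked numerically. The following sketch (illustrative; the matrices are arbitrary test data, not from the paper) verifies that keeping the factors up to $e^{-c_{3}/3!}$ leaves a residual of fourth order in an overall scale factor $t$, so halving $t$ shrinks the residual by roughly $2^{4}=16$:

```python
# Order-of-accuracy check of the Zassenhaus factorisation
#   e^{X+Y} = e^X e^Y e^{-c2/2!} e^{-c3/3!} ... ,
# with c2 = [X,Y] and c3 = 2[[X,Y],Y] + [[X,Y],X].
# The 3x3 matrices below are arbitrary (assumed) test data.

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(A, B):
    return [[u + v for u, v in zip(ra, rb)] for ra, rb in zip(A, B)]

def scale(A, s):
    return [[s * u for u in row] for row in A]

def comm(A, B):
    return add(mul(A, B), scale(mul(B, A), -1.0))

def expm(A, terms=20):
    # matrix exponential by truncated power series (fine for small norms)
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = scale(mul(term, A), 1.0 / k)
        result = add(result, term)
    return result

X0 = [[0.2, 1.0, 0.0], [0.0, -0.1, 0.7], [0.3, 0.0, 0.5]]
Y0 = [[0.0, 0.3, -0.4], [1.1, 0.2, 0.0], [0.0, -0.6, 0.1]]

def residual(t):
    X, Y = scale(X0, t), scale(Y0, t)
    c2 = comm(X, Y)
    c3 = add(scale(comm(c2, Y), 2.0), comm(c2, X))
    approx = mul(mul(expm(X), expm(Y)),
                 mul(expm(scale(c2, -1.0 / 2.0)), expm(scale(c3, -1.0 / 6.0))))
    exact = expm(add(X, Y))
    return max(abs(exact[i][j] - approx[i][j])
               for i in range(3) for j in range(3))

ratio = residual(0.02) / residual(0.01)
print(ratio)  # should be near 16 if the residual is fourth order
```

The sign conventions above agree with the factorisation quoted in this appendix: $-c_{3}/3!$ reproduces the standard third-order Zassenhaus exponent $\frac{1}{3}[Y,[X,Y]]+\frac{1}{6}[X,[X,Y]]$.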
\begin{thebibliography}{99}
\bibitem{reb} P. Rebentrost, A. Aspuru-Guzik, J. Chem. Phys. {\bf134} (2011) 101103
\bibitem{aba} O. Abah, J. Rossnagel, G. Jacob, S. Deffner, F. Schmidt-Kaler, K. Singer, E. Lutz, Phys.Rev.Lett. {\bf109} (2012) 203006
\bibitem{kos} R. Kosloff, Entropy {\bf15} (2013) 2100
\bibitem{gol} D. Golubev, T. Faivre, J.P. Pekola, Phys.Rev. B{\bf87} (2013) 094522
\bibitem{cor} L.A. Correa, J.P. Palao, D. Alonso, G. Adesso, Sci. Rep. {\bf4} (2014) 3949
\bibitem{bel} W. Belzig and Yu. Nazarov, Phys.Rev.Lett. {\bf87} (2001) 067006
\bibitem{bag} D.A. Bagrets and Yu. Nazarov, Phys.Rev. B{\bf67} (2003) 085316
\bibitem{fl1} C. Flindt, T. Novotn\'y, A. Braggio, M. Sassetti, A.-P. Jauho, Phys.Rev.Lett. {\bf100} (2008) 150601
\bibitem{fl2} C. Flindt, T. Novotn\'y, A. Braggio, A.-P. Jauho, Phys.Rev. B{\bf82} (2010) 155407
\bibitem{esp} Massimiliano Esposito, Upendra Harbola and Shaul Mukamel, Rev. Mod. Phys. {\bf81} (2009) 1665 and refs. therein
\bibitem{car} Matteo Carrega, Paolo Solinas, Maura Sassetti and Ulrich Weiss, Phys.Rev.Lett. {\bf116} (2016) 240403
\bibitem{sch} R. Schmidt, S. Maniscalco, T. Ala-Nissila, Phys. Rev. A {\bf94} (2016) 010101(R)
\bibitem{def} Sebastian Deffner, Juan Pablo Paz and Wojciech H. Zurek, arXiv:1603.06509v2 [quant-ph]
\bibitem{gua} G. Guarnieri, J. Nokkala, R. Schmidt, S. Maniscalco, B. Vacchini, arXiv: 1607.04977v1 [quant-ph]
\bibitem{sti} W. F. Stinespring, Proc. Amer. Math. Soc. {\bf6} (1955) 211
\bibitem{riv} \'Angel Rivas, Susana F. Huelga and Martin B. Plenio, Rep. Prog. Phys. {\bf77} (2014) 094001
\bibitem{kr1} T. Khalil and J. Richert, arXiv:1408.6352 [quant-ph]
\bibitem{kr2} T. Khalil and J. Richert, arXiv:1503.08920 [quant-ph]
\bibitem{chr} Rasmus Soegaard Christensen, Jesper Levinsen and Georg M. Bruun, Phys.Rev.Lett. {\bf115} (2015) 160401
\bibitem{kr3} T. Khalil and J. Richert, arXiv:1605.09555[quant-ph]
\bibitem{dod} V. V. Dodonov and A. V. Dodonov, arXiv:1504.00862 [quant-ph] to appear in the special issue of Physica Scripta "150 years of Margarita and Vladimir Man'ko"
\bibitem{def1} S. Deffner and E. Lutz, J. Phys. A: Math. Theor. {\bf46} (2013) 335302
\bibitem{def2} S. Deffner and E. Lutz, Phys. Rev. Lett. {\bf111} (2013) 010402
\bibitem{zhe} Zhen-Yu Xu, Shunlong Luo, W. L. Yang, Chen Liu and Shiqun Zhu, Phys. Rev. A {\bf89} (2014) 012307
\bibitem{hai} Hai-Bin Liu, W. L. Yang, Jun-Hong An and Zhen-Yu Xu, Phys. Rev. A {\bf93} (2016) 020105
\bibitem{lid} D. A. Lidar, D. Bacon and K. B. Whaley, Phys. Rev. Lett. {\bf82}, (1999) 4556
\bibitem{ren} J. Ren, P. H\"anggi, B. Li, Phys.Rev.Lett. {\bf104} (2010) 170601
\bibitem{gua1} G. Guarnieri, C. Uchiyama, B. Vacchini, Phys. Rev. A {\bf93} (2016) 012118
\bibitem{buz} Vladimir Bu\v{z}ek, Phys. Rev. A {\bf58}, (1998) 1723
\bibitem{za} H. Zassenhaus, Abh. Math. Sem. Univ. Hamburg {\bf13} (1940) 1 - 100
\bibitem{ca} Fernando Casas, Ander Murua, Mladen Nadinic, Computer Physics Communications {\bf183} (2012) 2386
\end{thebibliography}
\end{document} |
\begin{document}
\maketitle
\begin{flushright}
\footnote{\textit{Mathematics Subject Classification} (2010): Primary 57M12; Secondary 30C65}
\footnote{\textit{Keywords}: branched covering, monodromy space}
\end{flushright}
\begin{abstract}By a construction of Berstein and Edmonds every proper branched cover $f$ between manifolds is a factor of a branched covering orbit map from a locally connected and locally compact Hausdorff space, called the monodromy space of $f$, to the target manifold. For proper branched covers between $2$-manifolds the monodromy space is known to be a manifold. We show that this does not generalize to dimension $3$ by constructing a self-map of the $3$-sphere for which the monodromy space is not a locally contractible space. \end{abstract}
\section{Introduction}
A map $f \colon X \to Y$ between topological spaces is a \textit{branched covering}, if $f$ is an open, continuous and discrete map. The \textit{branch set} $B_f \subset X$ of $f$ is the set of points in $X$ for which $f$ fails to be a local homeomorphism. The map $f$ is \textit{proper}, if the pre-image under $f$ of every compact set is compact.
Let $f \colon X \rightarrow Y$ be a proper branched covering between manifolds. Then the codimension of $B_f\subset X$ is at least two by V\"ais\"al\"a \cite{Vaisala} and the restriction map
$$f':=f | X \setminus f^{-1}(f(B_f)) \colon X \setminus f^{-1}(f(B_f)) \rightarrow Y \setminus f(B_f)$$ is a covering map between open connected manifolds, see Church and Hemmingsen \cite{Church-Hemmingsen}. Thus there exists, by the classical theory of covering maps, an open manifold $X_f'$ and a commutative diagram of proper branched covering maps
\begin{equation*}
\xymatrix{
& X_f' \ar[ld]_{p'} \ar[rd]^{q'} &\\
X \ar[rr]^f & & Y }
\end{equation*}
where $p' \colon X_f' \rightarrow X \setminus f^{-1}(f(B_f))$ and $q' \colon X_f' \rightarrow Y \setminus f(B_f)$ are normal covering maps and the deck-trans\-for\-ma\-ti\-on group of the covering map $q' \colon X_f' \rightarrow Y \setminus f(B_f)$ is isomorphic to the mono\-dromy group of $f'.$
Further, by Berstein and Edmonds \cite{Berstein-Edmonds}, there exists a locally compact and locally
connected second countable Hausdorff space $X_f \supset
X_f'$ so that $X_f \setminus X_f'$ does not locally
separate $X_f$ and the maps $p'$ and $q'$ extend to
proper normal branched covering maps $p \colon X_f \rightarrow X$ and $\bar f:=q \colon X_f \rightarrow Y$ so that the diagram
\begin{equation*}
\xymatrix{
& X_f \ar[ld]_{p} \ar[rd]^{\bar f} &\\
X \ar[rr]^f & & Y }
\end{equation*}
commutes, and $p$ and $\bar f$ are the Fox-completions of $p'\colon X'_f \to X$ and $q' \colon X'_f \to Y,$ see also \cite{Aaltonen}, \cite{Fox} and \cite{Montesinos-old}. In this paper the triple $(X_f,p, \bar f)$ is called the \textit{monodromy representation,} $\bar f \colon X_f \to Y$ the \textit{normalization} and the space $X_f$ the \textit{monodromy space} of $f.$
The monodromy space $X_f$ is a locally connected and locally compact Hausdorff space and, by construction, all points in the open and dense subset $X_f \setminus B_{\bar f} \subset X_f$ are manifold points. The natural question to ask regarding the monodromy space $X_f$ is thus the following: \textit{What does the monodromy space $X_f$ look like around the branch points of $\bar f$?}
When $X$ and $Y$ are $2$-manifolds, Sto\"ilow's theorem implies that the points in $B_{\bar f}$ are manifold points and the monodromy space $X_f$ is a manifold. We further know by Fox \cite{Fox} that the monodromy space $X_f$ is a locally finite simplicial complex, when $f \colon X \to Y$ is a simplicial branched covering between piecewise linear manifolds. It is, however, stated as a question in \cite{Fox} under which assumptions the locally finite simplicial complex obtained in Fox's completion process is a manifold. We construct here an example in which the locally finite simplicial complex obtained in this way is not a manifold.
\begin{Thm}\label{eka}
There exists a simplicial branched cover $f \colon S^3 \to S^3$ for which the monodromy space $X_f$ is not a manifold.
\end{Thm}
Theorem \ref{eka} implies that the monodromy space is not in general a manifold even for proper simplicial branched covers between piecewise linear manifolds. Our second theorem states further that in the non-piecewise linear case the monodromy space is not in general even a locally contractible space. We construct a branched covering which is piecewise linear in the complement of a point, but for which the monodromy space is not a locally contractible space.
\begin{Thm}\label{toka}
There exists a branched cover $f \colon S^3 \to S^3$ for which the monodromy space $X_f$ is not a locally contractible space.
\end{Thm}
We end this introduction with our results on the cohomological properties of the monodromy space. The monodromy space of a proper branched covering between manifolds is always a locally orientable space of finite cohomological dimension. However, in general the monodromy space is not a cohomology manifold in the sense of Borel \cite{Borel-book}; there exists a piecewise linear branched covering $S^3 \to S^3$ for which the monodromy space is not a cohomology manifold. This shows, in particular, that the theory of normalization maps of proper branched covers between manifolds is not covered by the Smith theory in \cite{Borel-book} and completes \cite{Aaltonen-Pankka} for this part.
This paper is organized as follows. In Section \ref{tm}, we give an example $f \colon S^2 \to S^2$ of an open and discrete map for which the monodromy space is not a two sphere. In Section \ref{ts} we show that the suspension $\Sigma f \colon S^3 \to S^3$ of $f$ proves Theorem \ref{eka} and that the monodromy space of $f$ is not a cohomology manifold. In Section \ref{pii} we construct an open and discrete map $g \colon S^3 \to S^3.$ In Section \ref{dp} we show that $g \colon S^3 \to S^3$ proves Theorem \ref{toka}.
\subsection*{Acknowledgements} I thank my adviser Pekka Pankka for introducing me to the paper \cite{Heinonen-Rickman} by Heinonen and Rickman and for many valuable discussions on the topic of this paper.
\section{Preliminaries}
In this paper all topological spaces are locally connected and locally compact Hausdorff spaces if not stated otherwise. Further, all proper branched coverings $f \colon X \to Y$ between topological spaces are also branched coverings in the sense of Fox \cite{Fox} and completed coverings in the sense of \cite{Aaltonen}; $Y':=Y\setminus f(B_f)$ and $X':=X \setminus f^{-1}(f(B_f))$ are open dense subsets so that $X \setminus X'$ does not locally separate $X$ and $Y \setminus Y'$ does not locally separate $Y.$ We say that the proper branched covering $f \colon X \rightarrow Y$ is \textit{normal}, if $f':=f| X' \colon X' \rightarrow Y'$ is a normal covering. By Edmonds \cite{Edmonds-Michigan} every proper normal branched covering $f \colon X \rightarrow Y$ is an orbit map for the action of the deck-transformation group $\mathrm{Deck}(f)$, i.e.\ $X/\mathrm{Deck}(f) \approx Y.$
We recall some elementary properties of proper branched coverings needed in the forthcoming sections. Let $f \colon X \rightarrow Y$ be a proper normal branched covering and $V \subset Y$ an open and connected set. Then each component of $f^{-1}(V)$ maps onto $V,$ see \cite{Aaltonen}. Further, if the pre-image $D:=f^{-1}(V)$ is connected, then $f | D \colon D \rightarrow V$ is a normal branched covering and the map
\begin{equation}\label{tf}
\mathrm{Deck}(f) \to \mathrm{Deck}(f | D), \tau \mapsto \tau| D,
\end{equation}
is an isomorphism, see \cite{Montesinos-old}.
\begin{Apulause}\label{1}Let $f \colon X \rightarrow Y$ be a branched covering between manifolds. Suppose $p \colon W \rightarrow X$ and $q \colon W \rightarrow Y$ are normal branched coverings so that $q=f \circ p.$ Then $\mathrm{Deck}(p) \subset \mathrm{Deck}(q)$ is a normal subgroup if and only if $f$ is a normal branched covering.
\end{Apulause}
\begin{proof} Let $Y':=Y \setminus f(B_f),$ $X':=X \setminus f^{-1}(f(B_f))$ and $W':=W \setminus q^{-1}(f(B_f)).$ Let $f':=f | X' \colon X' \to Y',$ $p':=p | W' \colon W' \rightarrow X'$ and $q':= q | W'\colon W' \to Y'$ and let $w_0 \in W',$ $x_0=p'(w_0)$ and $y_0=q'(w_0).$ Then $\mathrm{Deck}(p) \subset \mathrm{Deck}(q)$ is a normal subgroup if and only if $\mathrm{Deck}(p') \subset \mathrm{Deck}(q')$ is a normal subgroup, and the branched covering $f$ is normal if and only if the covering $f'$ is normal. We have also a commutative diagram
\begin{equation*}
\xymatrix{
& W' \ar[ld]_{p'} \ar[rd]^{q'} &\\
X' \ar[rr]^{f'} & & Y'&}
\end{equation*}
of covering maps, where
$$q_*'(\pi_1(W',w_0)) \subset f'_*(\pi_1(X', x_0)) \subset \pi_1(Y',y_0).$$ The deck-homomorphism $$\pi_{(q',w_0)} \colon \pi_1(Y',y_0) \to \mathrm{Deck}(q')$$ now factors as
\begin{equation*}
\xymatrix{
\pi_1(Y',y_0) \ar[rr]^{\pi_{(q',w_0)}} \ar[dr]& & \mathrm{Deck}(q')\\
& \pi_1(Y',y_0)/q_*'(\pi_1(W',w_0)) \ar[ru]^{\overline{\pi}_{(q',w_0)}},&}
\end{equation*}
for an isomorphism $\overline{\pi}_{(q',w_0)}$ and
$$\pi_{(q',w_0)}(f'_* (\pi_1(X',x_0)))=\overline{\pi}_{(q',w_0)} \big(f'_*(\pi_1(X',x_0))/q_*'(\pi_1(W', w_0))\big)=\mathrm{Deck}(p').$$
In particular, $\mathrm{Deck}(p') \subset \mathrm{Deck}(q')$ is a normal subgroup if and only if
$$f'_*(\pi_1(X',x_0))/q_*'(\pi_1(W', w_0)) \subset \pi_1(Y',y_0)/q_*'(\pi_1(W',w_0))$$ is a normal subgroup. Since $q_*'(\pi_1(W',w_0)) \subset f'_*(\pi_1(X', x_0)),$ this implies that
$\mathrm{Deck}(p') \subset \mathrm{Deck}(q')$ is a normal subgroup if and only if $$f'_*(\pi_1(X',x_0)) \subset \pi_1(Y',y_0)$$ is a normal subgroup. We conclude that $f$ is a normal branched covering if and only if $\mathrm{Deck}(p) \subset \mathrm{Deck}(q)$ is a normal subgroup.
\end{proof}
Let $f \colon X \rightarrow Y$ be a proper branched covering. We say that $D \subset X$ is a \textit{normal neighbourhood} of $x$ if $f^{-1}\{f(x)\} \cap D=\{x\}$ and $f | D \colon D \rightarrow V$ is a proper branched covering onto a neighbourhood $V$ of $f(x).$ We note that for every $x \in X$ there exists a neighbourhood $U$ of $f(x)$ so that for every open connected neighbourhood $V \subset U$ of $f(x)$ the $x$-component $D$ of $f^{-1}(V)$ is a normal neighbourhood of $x$, and the pre-image $f^{-1}(E) \subset X$ is connected for every open connected subset $E \subset Y$ satisfying $Y \setminus E \subset U.$ This follows from the following lemma.
\begin{Apulause}\label{fun} Let $f \colon X \to Y$ be a proper branched covering. Then for every $y \in Y$ there exists such a neighbourhood $U$ of $y$ that the pre-image $f^{-1}(V) \subset X$ is connected for every open connected set $V \subset Y$ satisfying $Y \setminus V \subset U.$
\end{Apulause}
\begin{proof} Let $y_0 \in Y \setminus \{y\}.$ Since $f$ is proper, the subsets $f^{-1}\{y\}, f^{-1}\{y_0\} \subset X$ are finite. Since $X \setminus f^{-1}\{y\}$ is connected, there exists a path $\gamma \colon [0,1] \to X \setminus f^{-1}\{y\}$ so that $f^{-1}\{y_0\} \subset \gamma[0,1].$ Let $U \subset Y$ be a neighbourhood of $y$ satisfying $U \cap f(\gamma[0,1])=\varnothing.$
Suppose that $V \subset Y$ is an open connected subset satisfying $Y \setminus V \subset U.$
Then $f^{-1}\{y_0\} \subset \gamma[0,1]$ is contained in a single component of $f^{-1}(V),$ since $f(\gamma[0,1]) \subset V.$ Since $V \subset Y$ is connected, every component of $f^{-1}(V)$ maps onto $V$ and thus contains a point of $f^{-1}\{y_0\}.$ Hence $f^{-1}(V)$ is connected.
\end{proof}
We end this section by introducing terminology and elementary results concerning singular homology. Let $X$ be a locally compact and locally connected second countable Hausdorff space. In this paper $H_i(X; \mathbb{Z})$ is the $i$:th singular homology group of $X$ and $\widetilde{H}_i(X;\mathbb{Z})$ the $i$:th reduced singular homology group of $X$ with coefficients in $\mathbb{Z},$ see \cite{Massey-book}. We recall that $H_i(X; \mathbb{Z})=\widetilde{H}_i(X;\mathbb{Z})$ for all $i\neq 0$ and $\widetilde{H}_0(X;\mathbb{Z})=\mathbb{Z}^{k-1},$ where $k$ is the number of components of $X.$ We recall that for open subsets $U,V \subset X$ with $X=U \cup V$ and $U \cap V$ connected the reduced Mayer-Vietoris sequence is a long exact sequence of homomorphisms that terminates as follows:
$$\to H_1(X;\mathbb{Z}) \to \widetilde{H}_0(U \cap V;\mathbb{Z}) \to \widetilde{H}_0(U; \mathbb{Z}) \oplus \widetilde{H}_0(V; \mathbb{Z}) \to \widetilde{H}_0(X; \mathbb{Z}).$$
\section{Local orientability and cohomological dimension}
In this section we show that the monodromy space of a proper branched covering between manifolds is a locally orientable space of finite cohomological dimension. We also introduce Alexander-Spanier cohomology following the terminology of Borel \cite{Borel-book} and Massey \cite{Massey-book} and define a cohomology manifold in the sense of Borel \cite{Borel-book}.
Let $X$ be a locally compact and locally connected second countable Hausdorff space. In this paper $H_c^i(X; \mathbb{Z})$ is the $i$:th Alexander-Spanier cohomology group of $X$ with coefficients in $\mathbb{Z}$ and compact supports. Let $A \subset X$ be a closed subset and $U=X \setminus A$. The standard push-forward homomorphism $H^i_c(U;\mathbb{Z}) \to H^i_c(X;\mathbb{Z})$ is denoted $\tau^i_{XU},$ the standard restriction homomorphism $H^i_c(X;\mathbb{Z})\to H^i_c(U;\mathbb{Z})$ is denoted $\iota^i_{UX}$ and the standard boundary homomorphism $H^i_c(A; \mathbb{Z}) \to H^{i+1}_c(X \setminus A; \mathbb{Z})$ is denoted $\partial_{(X/A)A}^i$ for all $i \in \mathbb{N}.$
We recall that the exact sequence of the pair $(X,A)$ is a long exact sequence
$$\to H^i_c(X \setminus A; \mathbb{Z}) \to H^i_c(X; \mathbb{Z}) \to H^i_c(A; \mathbb{Z}) \to H^{i+1}_c(X \setminus A; \mathbb{Z})\to $$
where all the homomorphisms are the standard ones. We also recall that $\tau^i_{VX}=\tau^i_{XU} \circ \tau^i_{UV}$ for all open subsets $V \subset U$ and $i \in \mathbb{N}.$
The \textit{cohomological dimension} of a locally compact and locally connected Hausdorff space $X$ is $\leq n,$ if $H_c^{n+1}(U ;\mathbb{Z})=0$ for all open subsets $U \subset X.$
\begin{Thm} Let $f \colon X \to Y$ be a proper branched covering between $n$-manifolds. Then the monodromy space $X_f$ of $f$ has cohomological dimension $\leq n.$
\end{Thm}
\begin{proof} Let $B_{\bar f} \subset X_f$ be the branch set of the normalization map $\overline{f} \colon X_f \to Y$ of $f.$ Let $U \subset X_f$ be a connected open subset and $B_{\bar f | U}$ the branch set of $\bar f | U.$ The cohomological dimension of $B_{\bar f | U}$ is at most $n-2$ by \cite{Borel-book}, since $B_{\bar f | U}$ does not locally separate $U.$ Thus $H^{i}_c(B_{\bar f | U}; \mathbb{Z})=0$ for $i > n-2$ and the part
$$ \to H^{i-1}_c(B_{\bar f | U}; \mathbb{Z}) \to H^i_c(U \setminus B_{\bar f | U}; \mathbb{Z}) \to H^i_c(U; \mathbb{Z}) \to H^i_c(B_{\bar f | U}; \mathbb{Z})\to $$
of the long exact sequence of the pair $(U,B_{\bar f | U})$ gives us an isomorphism $H^i_c(U \setminus B_{\bar f | U}; \mathbb{Z}) \to H^i_c(U; \mathbb{Z})$ for $i \geq n.$ Since $U \setminus B_{\bar f | U}$ is a connected $n$-manifold, $H^{n+1}_c(U \setminus B_{\bar f | U}; \mathbb{Z})=0.$ Thus
$H^{n+1}_c(U; \mathbb{Z}) \cong H^{n+1}_c(U \setminus B_{\bar f | U}; \mathbb{Z}) =0.$ We conclude that $X_f$ has cohomological dimension $\leq n.$
\end{proof}
The $i$:th \textit{local Betti-number} $\rho^i(x)$ around $x$ is $k,$ if given a neighbourhood $U$ of $x,$ there exist open neighbourhoods $W \subset V \subset U$ of $x$ with $\bar W \subset V$ and $\bar V \subset U$ so that $\mathrm{Im}(\tau_{VW'}^i)=\mathrm{Im}(\tau_{VW}^i)$ for every open neighbourhood $W' \subset W$ of $x,$ and this image has rank $k.$ The space $X$ is called a \textit{Wilder manifold}, if $X$ is finite dimensional and for all $x \in X$ the local Betti-numbers satisfy $\rho^i(x)=0$ for all $i < n$ and $\rho^n(x)=1.$
A locally compact and locally connected Hausdorff space $X$ with cohomological dimension $\leq n$ is \textit{orientable}, if there exists for every $x \in X$ a neighbourhood basis $\mathcal{U}$ of $x$ so that $\mathrm{Im}(\tau^n_{XU})=\mathbb{Z}$ for all $U \in \mathcal{U},$ and locally orientable if every point in $X$ has an orientable neighbourhood.
\begin{Thm}\label{harmautta} Let $\bar f \colon X_f \to Y$ be a normalization map of a branched covering $f \colon X \to Y$ so that $Y$ is orientable. Then $X_f$ is orientable.
\end{Thm}
\begin{proof} The set $Y \setminus \bar f (B_{\bar f})$ is an open connected subset of the orientable manifold $Y.$ Thus $X_f \setminus B_{\bar f}$ is an orientable manifold as a cover of the orientable manifold $Y \setminus \bar f (B_{\bar f}),$ and $H_c^n(X_f \setminus B_{\bar f};\mathbb{Z})=\mathbb{Z}.$
We show that $H_c^n(X_f;\mathbb{Z})=\mathbb{Z}$ and that the push-forward $\tau^n_{X_fW}$ is an isomorphism for every $x \in X_f$ and every normal neighbourhood $W$ of $x.$ The cohomological dimension of $B_{\bar f}$ is $\leq n-2.$ Thus, by the long exact sequences of the pairs $(X_f,B_{\bar f})$ and $(W,B_{\bar f} \cap W),$ the push-forward homomorphisms $\tau^n_{X_f(X_f \setminus B_{\bar f})}$ and $\tau^n_{W(W \setminus B_{\bar f})}$ are isomorphisms. Since $W \setminus B_{\bar f} \subset X_f \setminus B_{\bar f}$ is a connected open subset and $X_f \setminus B_{\bar f}$ is a connected orientable $n$-manifold, the push-forward homomorphism $\tau^n_{(X_f \setminus B_{\bar f})(W \setminus B_{\bar f})}$ is an isomorphism.
Since $\tau^n_{X_fW} \circ \tau^n_{W(W \setminus B_{\bar f})}=\tau^n_{X_f(X_f \setminus B_{\bar f})} \circ \tau^n_{(X_f \setminus B_{\bar f})(W \setminus B_{\bar f})},$ we conclude that $\tau^n_{X_fW}$ is an isomorphism and $\mathrm{Im}(\tau^n_{X_fW})=\mathrm{Im}(\tau^n_{X_f(X_f \setminus B_{\bar f})})\cong \mathbb{Z}.$
\end{proof}
We note that a similar argument shows that the monodromy space of a proper branched covering between manifolds is always locally orientable. A \textit{cohomology manifold} in the sense of Borel \cite{Borel-book} is a locally orientable Wilder manifold.
\section{The monodromy space of branched covers between surfaces}\label{tm}
A \textit{surface} is a closed orientable $2$-manifold. The monodromy space related to a branched covering between surfaces is always a surface, as mentioned in \cite{Fox}. We first present the proof of this fact in the case we use, for completeness of presentation, and then we show that there exists a branched cover $f \colon S^2 \to S^2$ so that $X_f\neq S^2$, towards proving Theorems \ref{eka} and \ref{kolmas}.
\begin{Apulause}\label{r} Let $F$ be an orientable surface and $f \colon F \to S^2$ be a proper branched cover and $\overline{f} \colon X_f \to S^2$ the normalization of $f$. Then $X_f$ is an orientable surface.
\end{Apulause}
\begin{proof} Since $S^2$ is orientable, the space $X_f$ is orientable by Theorem \ref{harmautta}. Since the domain $F$ of $f$ is compact, the normalization $\overline{f}$ has finite multiplicity and the space $X_f$ is compact. Let $x \in X_f.$ By Sto\"ilow's theorem, see \cite{Whyburn-book}, $f(B_f)=\overline{f}(B_{\overline{f}})$ is a discrete set of points. Thus there exists a normal neighbourhood $V \subset X_f$ of $x$ so that $\overline{f}(V) \cap f(B_{f}) \subset \{\overline{f}(x)\}$ and $\overline{f}(V)\approx \mathbb{R}^2.$ Now $\overline{f}| V \setminus \{x\} \colon V \setminus \{x\} \to \overline{f}(V) \setminus \{\overline{f}(x)\}$ is a cyclic covering of finite multiplicity, since $\overline{f}(V) \setminus \{\overline{f}(x)\}$ is homeomorphic to the complement of a point in $\mathbb{R}^2.$ We conclude from this that $x$ is a manifold point of $X_f.$ Thus $X_f$ is a $2$-manifold and a surface.
\end{proof}
We record as a theorem the following result in the spirit of Fox \cite[p.255]{Fox}.
\begin{Thm}\label{s} Let $F$ be an orientable surface and $f \colon F \to S^2$ a proper branched cover and $\bar f \colon X_f \to S^2$ the normalization of $f$. Assume $|fB_f| > 3.$ Then $X_f \neq S^2.$
\end{Thm}
\begin{proof} The space $X_f$ is $S^2$ if and only if the Euler characteristic $\chi(X_f)$ is $2$. By the Riemann-Hurwitz formula
\begin{equation*}
\label{d}
\chi(X_f) = (\deg \bar f)\chi(S^2)-\sum_{x \in X_f}(i(x,\bar f)-1),
\end{equation*}
where $i(x,\bar f)$ is the local index of $\bar f$ at $x$. Since $\overline{f}$ is a normal branched cover, $i(x',\bar f)=i(x,\bar f)$ for $x,x' \in X_f$ with $\bar f (x)= \bar f (x').$ We define for all $y \in S^2,$
$$n(y):=i(x,\bar f),x \in \overline{f}^{-1}\{y\}.$$
Then for all $y \in S^2$
$$\deg \bar f = \sum_{x\in \bar f^{-1}\{y\}} i(x,\bar f) =n(y)|\bar f ^{-1}\{y\}|$$ and thus for all $y \in S^2$
$$|\bar f^{-1}\{y\}|=\dfrac{\deg \bar f}{n(y)}.$$ Hence,
\begin{equation*}
\label{f}
\chi(X_f) = (\deg \bar f)\left(\chi(S^2)-\sum_{y \in \bar fB_{ \bar f}}\dfrac{n(y)-1}{n(y)}\right),
\end{equation*}
where $\chi(S^2)=2$; write $N := \deg \bar f \in \mathbb{N}.$ Since $n(y)\geq 2$, and hence $\dfrac{n(y)-1}{n(y)} \geq \dfrac{1}{2}$, for all $y \in \bar f B_{\bar f}=fB_f,$ we get the estimate
\begin{equation*}
\chi(X_f) \leq N\left(2-|fB_f|\dfrac{1}{2}\right).
\end{equation*}
Thus $\chi(X_f)\leq 0 < 2,$ since $|fB_f|\geq 4$ by assumption. Hence $X_f \neq S^2.$
\end{proof}
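The arithmetic in the estimate above is easy to check directly; the following small script (an illustrative sketch, not part of the proof, using hypothetical degree and local-index data) evaluates $\chi(X_f) = N\bigl(2-\sum_y (n(y)-1)/n(y)\bigr)$ and confirms that four branch values with $n(y)\ge 2$ already force $\chi(X_f)\le 0$.

```python
# Illustrative sketch (not part of the proof): evaluate the Euler
# characteristic formula chi(X_f) = N * (2 - sum_y (n(y)-1)/n(y)) for
# hypothetical degree N and local indices n(y) over the branch values.
from fractions import Fraction

def chi(N, multiplicities):
    """Euler characteristic of the normalization for degree N and a constant
    local index n(y) over each of the given branch values."""
    defect = sum(Fraction(n - 1, n) for n in multiplicities)
    return N * (2 - defect)

# Four branch values with n(y) >= 2 give defect >= 4 * 1/2 = 2, so chi <= 0
# and the normalization cannot be S^2 (which would require chi = 2).
print(chi(4, [2, 2, 2, 2]))   # 0, the extremal case
print(chi(6, [2, 2, 3, 3]))   # -2
```

Any choice of four or more multiplicities $n(y)\ge 2$ gives a non-positive value, in line with the proof.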
We end this section with two independent easy corollaries.
\begin{Cor} Let $F$ be an orientable surface and $f \colon F \to S^2$ be a proper branched cover so that $|fB_f| > 3.$ Then $f$ is not a normal covering.
\end{Cor}
\begin{Cor} Let $F$ be an orientable surface and $f \colon F \to S^2$ be a proper branched cover so that $|fB_f| < 3.$ Then $f$ is a normal covering.
\end{Cor}
\begin{proof} Since the fundamental group of $S^2\setminus fB_f$ is cyclic, the monodromy group of $f | F \setminus f^{-1}(fB_f) \colon F \setminus f^{-1}(fB_f) \to S^2\setminus fB_f$ is cyclic and, in particular, abelian. Thus every subgroup of the deck-transformation group of the normalization map $\bar f \colon X_f \to S^2$ is a normal subgroup. Thus $f=\bar f$ and in particular, $f$ is a normal branched covering.
\end{proof}
\section{The suspension of a branched cover between orientable surfaces}\label{ts}
In this section we prove Theorem \ref{eka} in the introduction and the following theorem.
\begin{Thm}\label{kolmas}
There exists a simplicial branched cover $f \colon S^3 \to S^3$ for which the monodromy space $X_f$ is not a cohomology manifold.
\end{Thm}
More precisely, we show that there exists a branched cover $S^2 \to S^2$ whose monodromy space is not a manifold, and for which the monodromy space of the suspension map $\Sigma S^2 \to \Sigma S^2$ is not a cohomology manifold.
Let $F$ be an orientable surface. Let $\sim$ be the equivalence relation in $F \times [-1,1]$ defined by the relation $(x,t) \sim (x',t)$ for $x,x' \in F$ and $t \in \{-1,1\}.$ Then the quotient space $\Sigma F:=F \times [-1,1]/\sim$ is the \textit{suspension space} of $F$ and the subset $CF:=\{\overline{(x,t)}: x \in F, t \in [0,1]\} \subset \Sigma F$ is the \textit{cone} of $F.$ We note that $\Sigma S^2 \approx S^3.$ Let $f \colon F_1 \to F_2$ be a piecewise linear branched cover between surfaces. Then $\Sigma f \colon \Sigma F_1 \to \Sigma F_2, \overline{(x,t)} \mapsto \overline{(f(x),t)},$ is a piecewise linear branched cover, called the \textit{suspension map} of $f.$ We note that the suspension space $\Sigma F$ is a polyhedron and locally contractible for all surfaces $F.$
We begin this section with a lemma showing that the normalization of a suspension map of a branched cover between surfaces is completely determined by the normalization of the original map.
\begin{Apulause}\label{t} Let $F$ be an orientable surface and $f \colon F \to S^2$ a branched cover and $\bar f \colon X_f \to S^2$ the normalization of $f.$ Then $\Sigma \bar f \colon \Sigma X_f \to \Sigma S^2$ is the normalization of $\Sigma f \colon \Sigma F \to \Sigma S^2.$
\end{Apulause}
\begin{proof}Let $p \colon X_f \to F$ be a normal branched covering so that $\bar f=f \circ p.$ Then $\Sigma \bar f$ is a normal branched cover with $\Sigma \bar f = \Sigma f \circ \Sigma p$, and $\varphi \colon \mathrm{Deck}(\Sigma \bar f) \to \mathrm{Deck}(\bar f), \tau \mapsto \tau | X_f,$ is an isomorphism. We need to show that if $G \subset \mathrm{Deck}(\Sigma \bar f)$ is a subgroup so that $f \circ (\Sigma p/G)$ is normal, then $G$ is trivial.
Suppose $G \subset \mathrm{Deck}(\Sigma \bar f)$ is a group so that $f \circ (\Sigma p/G)$ is normal. Then $f \circ (p/G')$ is normal for the quotient map $p/G' \colon X_f/G' \to F,$ where $G'=\varphi(G).$ Since $\bar f$ is the normalization of $f,$ the group $G'$ is trivial. Thus $G=\varphi^{-1}(G')$ is trivial, since $\varphi^{-1}$ is an isomorphism.
\end{proof}
We then characterize the surfaces for which the suspension space is a manifold or a cohomology manifold in the sense of Borel.
\begin{Apulause}\label{y}Let $F$ be an orientable surface. Then $\Sigma F$ is a manifold if and only if $F=S^2.$
\end{Apulause}
\begin{proof} Suppose $F=S^2.$ Then $\Sigma F \approx S^3.$ Suppose then that $F\neq S^2.$ Then there exists a (cone) point $x \in \Sigma F$ and a contractible neighbourhood $V \subset \Sigma F$ of $x$ so that $F \subset V$ and $V \setminus \{x\}$ contracts to $F.$ Now $\pi_1(V \setminus \{x\},x_0) \cong \pi_1(F,x_0)$ is non-trivial for $x_0 \in F.$ Suppose that $\Sigma F$ is a $3$-manifold. Then $\pi_1(V \setminus \{x\},x_0)=\pi_1(V,x_0)=0,$ which is a contradiction. Thus $\Sigma F$ is not a manifold.
\end{proof}
\begin{Apulause}\label{p} Let $F$ be an orientable surface. Then $\Sigma F$ is not a Wilder manifold if $F\neq S^2.$
\end{Apulause}
\begin{proof} We show that the second local Betti number around a point of $\Sigma F$ is non-trivial. Let $CF \subset \Sigma F$ be the cone of $F.$ Let $\pi \colon F \times [0,1] \to \Sigma F, (x,t) \mapsto \overline{(x,t)},$ be the quotient map to the suspension space and $\bar x= \pi (F \times \{1\}).$ We first note that $H_c^1(CF;\mathbb{Z})=H_c^2(CF;\mathbb{Z})=0,$ since $CF$ contracts properly to a point. Further, by Poincar\'e duality $H_c^1(F; \mathbb{Z})=\mathbb{Z}^{2g},$ where $g$ is the genus of $F.$ From the exact sequence of the pair $(CF,F)$ we extract the short exact sequence
$$0 \to H_c^1(F; \mathbb{Z}) \to H^2_c(CF \setminus F; \mathbb{Z}) \to 0.$$
Thus $H^2_c(CF \setminus F; \mathbb{Z}) \cong H_c^1(F; \mathbb{Z})$ and $H^2_c(CF \setminus F; \mathbb{Z})=0$ if and only if $g=0$ for the genus $g$ of $F.$ Thus $H^2_c(CF \setminus F; \mathbb{Z})=0$ if and only if $F=S^2.$
We then show that the rank of $H^2_c(CF \setminus F; \mathbb{Z})$ is the local Betti number $\rho^2(\bar x)$ around $\bar x.$ For this it is sufficient to show that, given any neighbourhood $U\subset CF$ of $\bar x,$ there exist open neighbourhoods $W \subset V \subset U$ of $\bar x$ with $\bar W \subset V, \bar V \subset U,$ so that for any open neighbourhood $W' \subset W$ of $\bar x,$ $\mathrm{Im}(\tau_{VW})=\mathrm{Im}(\tau_{VW'})\cong H^2_c(CF \setminus F; \mathbb{Z}).$
Denote $\Omega_{t}=\pi(F \times (1-t,1])$ for all $t \in (0,1).$ We note that then $\tau_{\Omega_s \Omega_t}$ is an isomorphism for all $t,s \in (0,1), t<s,$ since the inclusion $\iota \colon \Omega_t \to \Omega_s$ is properly homotopic to the identity. Let $U \subset CF$ be any neighbourhood of $\bar x.$ We set $V=\Omega_t$ for such $t \in (0,1)$ that $\Omega_t \subset U$, and $W=\Omega_{t/2}.$ Then for any neighbourhood $W' \subset W$ of $\bar x$ there exists $t' \in (0,t/2)$ so that $\Omega_{t'} \subset W'.$ Now $\tau_{\Omega_tW'}$ is surjective, since $\tau_{\Omega_t\Omega_{t'}}=\tau_{\Omega_t W'} \circ \tau_{W'\Omega_ {t'}}$ is an isomorphism. Thus $$\mathrm{Im}(\tau_{\Omega_t\Omega_{t/2}})=\mathrm{Im}(\tau_{\Omega_t\Omega_{t'}})= H^2_c(\Omega_t; \mathbb{Z}) \cong H^2_c(CF \setminus F;\mathbb{Z}),$$
since $\tau_{\Omega_t\Omega_{t/2}}$ and $\tau_{(CF \setminus F)\Omega_t}$ are isomorphisms. Thus $\rho^2(\bar x)\neq 0,$ since $F \neq S^2.$
\end{proof}
\begin{Cor}\label{89} Let $f \colon S^2 \to S^2$ be a branched cover with $|fB_f|> 3,$ $\Sigma f \colon S^3 \to S^3$ the suspension map of $f$ and $\overline{\Sigma f} \colon X_{\Sigma f} \to S^3$ the normalization of $\Sigma f.$ Then $X_{\Sigma f}$ is not a manifold and not a Wilder manifold.
\end{Cor}
\begin{proof} By Lemmas \ref{r} and \ref{t} we know that $X_f$ is a surface and $X_{\Sigma f}=\Sigma X_f.$ Further, by Theorem \ref{s} we know that $X_f \neq S^2,$ since $|fB_f|> 3.$ Thus, by Lemma \ref{y}, $X_{\Sigma f}=\Sigma X_f$ is not a manifold. Further, by Lemma \ref{p}, $X_{\Sigma f}$ is not a Wilder manifold. Thus $X_{\Sigma f}$ is not a cohomology manifold in the sense of Borel.
\end{proof}
\begin{proof}[Proof of Theorems \ref{eka} and \ref{kolmas}] By Corollary \ref{89} it is sufficient to show that there exists a branched cover $f \colon S^2 \to S^2$ so that $|f(B_{f})|=4.$ Such a branched cover is easily constructed as $f=f_2 \circ f_1,$ where $f_i \colon S^2 \to S^2$ is a winding map with branch points $x_1^i$ and $x_2^i$ for $i \in \{1,2\}$ satisfying $x_1^2, x_2^2 \notin \{f_1(x_1^1),f_1(x_2^1)\}$ and $f_2(f_1(x_1^1))\neq f_2(f_1(x_2^1)).$
\end{proof}
\section{An example of a non-locally contractible monodromy space}\label{pii}
In this section we introduce an example of a branched cover $S^3 \to S^3$ for which the related monodromy space is not a locally contractible space. The construction of the example is inspired by Heinonen and Rickman's construction in \cite{Heinonen-Rickman} of a branched covering $S^3 \to S^3$ containing a wild Cantor set in the branch set.
We need the following result, originally due to Berstein and Edmonds \cite{ONT}, in the form in which we use it.
\begin{Thm}[\cite{PKJ}, Theorem 3.1]\label{PKJ}Let $W$ be a connected, compact, oriented piecewise linear 3-manifold whose boundary consists of $p \geq 2$ components $M_0, \ldots, M_{p-1}$ with the induced orientation. Let $W'=N \setminus \bigcup_{j=0}^{p-1} \mathrm{int} B_j,$ where $N$ is an oriented piecewise linear $3$-sphere in $\mathbb{R}^4$ and $B_0, \ldots, B_{p-1} \subset N$ are disjoint, closed, polyhedral $3$-balls, and give the boundary of $W'$ the induced orientation. Suppose that $n \geq 3$ and $\varphi_j \colon M_j \rightarrow \partial B_j$ is a sense preserving piecewise linear branched cover of degree $n,$ for each $j=0,1, \ldots, p-1.$ Then there exists a sense preserving piecewise linear branched cover $\varphi \colon W \rightarrow W'$ of degree $n$ that extends the maps $\varphi_j.$
\end{Thm}
\begin{figure}[htb]
\includegraphics{RigidityPoster1.5}
\caption{}\label{rer}
\end{figure}
Let $x \in S^3$ be a point in the domain and $y \in S^3$ a point in the target. Let $X \subset S^3$ be a closed piecewise linear ball with center $x$ and let $Y \subset S^3$ be a closed piecewise linear ball with center $y.$ Let $T_0 \subset \mathrm{int} X$ be a solid piecewise linear torus so that $x \in \mathrm{int} T_0.$ Now let $\mathcal{T}=(T_{n})_{n \in \mathbb{N}}$ be a sequence of solid piecewise linear tori in $T_0$ so that $T_{k+1} \subset \mathrm{int} T_{k}$ for all $k \in \mathbb{N}$ and $\bigcap_{n=1}^{\infty}T_{n}=\{x\}.$ Let further $B_0 \subset \mathrm{int} Y$ be a closed piecewise linear ball with center $y$ and let $\mathcal{B}=(B_{n})_{n \in \mathbb{N}}$ be a sequence of closed piecewise linear balls with center $y$ so that $B_{k+1} \subset \mathrm{int} B_{k}$ for all $k \in \mathbb{N}$ and $\bigcap_{n=1}^{\infty}B_{n}=\{y\}.$ See the illustration in Figure \ref{rer}.
We denote $\partial X=\partial T_{-1}$ and $\partial Y=\partial B_{-1}$ and orient all boundary surfaces by the outward normal. Let $f_n \colon \partial T_n \rightarrow \partial B_n, n \in \{-1\} \cup \mathbb{N},$ be a collection of sense preserving piecewise linear branched coverings so that
\begin{itemize}
\item[(i)]the degrees of all the maps in the collection are the same and greater than $2,$
\item[(ii)]$f_{-1}$ has an extension to a branched covering $g \colon S^3 \setminus \mathrm{int}X \to S^3 \setminus \mathrm{int}Y,$
\item[(iii)]the maps $f_n$ are for all even $n \in \mathbb{N}$ normal branched covers with no points of local degree three and
\item[(iv)]the branched covers $f_n$ have for all odd $n \in \mathbb{N}$ a point of local degree three.
\end{itemize}
We note that, for an example of such a collection of maps of degree $18,$ we may let $f_{-1}$ be an $18$-to-$1$ winding map, $f_n$ be for all even $n \in \mathbb{N}$ as illustrated in Figure \ref{wer} and $f_n$ be for all odd $n \in \mathbb{N}$ as illustrated in Figure \ref{ter}.
\begin{figure}[htb]
\includegraphics{RigidityPoster1.3}
\caption{}\label{wer}
\end{figure}
\begin{figure}[hbt]
\includegraphics{RigidityPoster1.2}
\caption{}\label{ter}
\end{figure}
Let then $n \in \mathbb{N}$ and let $F_n \subset X$ be the compact piecewise linear manifold with boundary $\partial T_{n-1} \cup \partial T_{n}$ that is the closure of a component of $X \setminus (\bigcup_{k=-1}^\infty \partial T_k).$ Let further $G_n \subset Y$ be the compact piecewise linear manifold with boundary $\partial B_{n-1} \cup \partial B_{n}$ that is the closure of a component of $Y \setminus (\bigcup_{k=-1}^\infty \partial B_k).$ Then $F_n \subset X$ is a compact piecewise linear manifold with two boundary components and $G_n \subset Y$ is the complement of the interior of two distinct piecewise linear balls in $S^3.$ Further, $f_{n-1} \colon \partial T_{n-1} \rightarrow \partial B_{n-1}$ and $f_{n} \colon \partial T_{n} \rightarrow \partial B_{n}$ are sense preserving piecewise linear branched covers between the boundary components of $F_n$ and $G_n.$ Since the degree of $f_n$ is the same as the degree of $f_{n-1}$ and the degree is greater than $2,$ there exists by Theorem \ref{PKJ} a piecewise linear branched cover $g_n \colon F_n \to G_n$ so that $g_n | \partial T_{n-1}=f_{n-1}$ and $g_n | \partial T_{n}=f_{n}.$
Now $X=\{x\} \cup \bigcup_{n=0}^\infty F_n$ and $Y=\{y\} \cup \bigcup_{n=0}^\infty G_n$ and $g \colon S^3 \setminus \mathrm{int}X \to S^3 \setminus \mathrm{int}Y$ satisfies $g | \partial X=g_{0} | \partial X.$ Hence we may define a branched covering $f \colon S^3 \rightarrow S^3$ by setting $f(z)=g_n(z)$ for $z \in F_n, n \in \mathbb{N},$ $f(x)=y,$ and $f(z)=g(z)$ otherwise.
However, we want the map $f \colon S^3 \to S^3$ to satisfy one more technical condition, namely the existence of collections of pairwise disjoint open sets $(M_k)_{k \in \mathbb{N}}$ of $X$ and $(N_k)_{k \in \mathbb{N}}$ of $Y$ so that $M_k \subset X$ is a piecewise linear regular neighbourhood of $\partial T_k$ and $N_k \subset Y$ is a piecewise linear regular neighbourhood of $\partial B_k$ and $M_k=f^{-1}N_k,$ and $f | M_k \colon M_k \to N_k$ has a product structure of $f_k$ and the identity map for all $k \in \mathbb{N}.$ We may require this to hold for the map $f \colon S^3 \to S^3$ defined above, since otherwise we may, by cutting $S^3$ along the boundary surfaces $\partial T_k$ and $\partial B_k$ and adding regular neighbourhoods $M_k$ of $\partial T_k$ and $N_k$ of $\partial B_k$ in between for all $k \in \mathbb{N},$ arrange this to hold without loss of conditions (i)--(iv), see \cite{RourkeSanderson}.
In the last section of this paper we prove the following theorem.
\begin{Thm}\label{11} Let $f \colon S^3 \rightarrow S^3$ and $y \in S^3$ be as above and $\overline{f} \colon X_f \to S^3$ the normalization of $f.$ Then $H_1(W; \mathbb{Z}) \neq 0$ for all open sets $W \subset X_f$ satisfying $$\overline{f}^{-1}\{y\} \cap W \neq \varnothing.$$
\end{Thm}
Theorem \ref{eka} in the introduction then follows from Theorem \ref{11} by the following easy corollary.
\begin{Cor}\label{pimia} Let $f \colon S^3 \rightarrow S^3$ and $y \in S^3$ be as above. Then the monodromy space $X_f$ of $f$ is not locally contractible.
\end{Cor}
\begin{proof} Let $x \in \bar f ^{-1} \{y\}$ and let $W$ be an open neighbourhood of $x.$ Then $H_1(W; \mathbb{Z})\neq 0$ and $W$ has non-trivial fundamental group by the Hurewicz theorem, see \cite{Massey-book}. Thus $W$ is not contractible, and hence the monodromy space $X_f$ of $f$ is not a locally contractible space.
\end{proof}
\section{Destructive points}\label{dp}
In this section we define destructive points and prove Theorem \ref{11}.
Let $X$ be a locally connected Hausdorff space. We call an open and connected subset $V \subset X$ a \textit{domain}. Let $V \subset X$ be a domain. A pair $\{A,B\}$ is called a \textit{domain covering} of $V,$ if $A,B \subset X$ are domains and $V=A \curvearrowrightp B.$ We say that a domain covering $\{A,B\}$ of $V$ is \textit{strong}, if $A \cap B$ is connected. Let $x \in V$ and let $U \subset V$ be a neighbourhood of $x.$ Then we say that $\{A,B\}$ is $U$-\textit{small} at $x$, if $x \in A \subset U$ or $x \in B \subset U.$
Let then $f \colon X \to Y$ be a branched covering between manifolds, $y \in Y$ and $V_0 \subset Y$ a domain containing $y.$ Then $V_0$ is a \textit{destructive neighbourhood} of $y$ with respect to $f,$ if $f | f^{-1}(V_0)$ is not a normal covering to its image, but there exists for every neighbourhood $U \subset V_0$ of $y$ a $U$-small strong domain covering $\{A,B\}$ of $V_0$ at $y$ so that
$\{f^{-1}(A),f^{-1}(B)\}$ is a strong domain cover of $f^{-1}(V_0)$ and $f | (f^{-1}(A) \cap f^{-1}(B))$ is a normal covering to its image.
We say that $y$ is a \textit{destructive} point of $f$, if $y$ has a neighbourhood basis consisting of neighbourhoods that are destructive with respect to $f.$
\begin{Thm}\label{45}The map $S^3 \to S^3$ of the example in Section \ref{pii} has a destructive point.
\end{Thm}
\begin{proof} We show that the point $y$, where $\{y\}=\bigcap_{n=1}^{\infty}B_{n}$, is a destructive point of $f$. We first show that $V_0=\mathrm{int}B_0$ is a destructive neighbourhood of $y.$
We begin by showing that $g:=f | f^{-1}(V_0) \colon f^{-1}(V_0) \to V_0$ is not a normal branched cover. Towards a contradiction, suppose that $g$ is a normal branched cover. Then $\mathrm{Deck}(g) \cong \mathrm{Deck}(g | M_1)$ and $\mathrm{Deck}(g) \cong \mathrm{Deck}(g | M_2),$ since $M_1=f^{-1}(N_1)$ and $M_2=f^{-1}(N_2)$ are connected. On the other hand, (iii) and (iv) imply that $\mathrm{Deck}(g | M_1) \ncong \mathrm{Deck}(g | M_2)$ and we have a contradiction.
Let then $V_1 \subset V_0$ be any open connected neighbourhood of $y.$ Then there exists $k \in \mathbb{N}$ such that $B_{2k} \cup N_{2k} \subset V_1.$ Let $B:=B_{2k} \cup N_{2k}$ and $A=(V_0 \setminus B_{2k}) \cup N_{2k}.$ Then $\{A,B\}$ is a strong $V_1$-small domain cover of $V_0$ at $y$ and $A \cap B=N_{2k}.$ In particular, $\{f^{-1}(A),f^{-1}(B)\}$ is a strong domain cover of $f^{-1}(V_0).$ Further,
$$f^{-1}(A) \cap f^{-1}(B)=f^{-1}(A \cap B)=f^{-1}(N_{2k})=M_{2k}$$ and $f | (f^{-1}(A) \cap f^{-1}(B))=f | M_{2k} \colon M_{2k} \to N_{2k}$ is a normal branched covering by (iii).
Thus $V_0$ is a destructive neighbourhood of $y.$ The same argument shows that $V_k:=\mathrm{int}B_k$ is a destructive neighbourhood of $y$ for all $k \in \mathbb{N}.$ Thus $y$ has a neighbourhood basis consisting of neighbourhoods that are destructive with respect to $f.$
\end{proof}
Theorem \ref{45} implies that Theorems \ref{11} and \ref{eka} follow from the following result.
\begin{Thm}\label{15}Let $f \colon X \to Y$ be a proper branched covering between manifolds and let
\begin{equation*}
\xymatrix{
& X_f \ar[dl]_p \ar[dr]^{\overline{f}}& \\
X \ar[rr]^f & & Y }
\end{equation*}
be a commutative diagram of branched coverings so that $X_f$ is a connected, locally connected Hausdorff space and $p \colon X_f \rightarrow X$ and $\overline{f} \colon X_f \rightarrow Y$ are proper normal branched coverings. Suppose there exists a destructive point $y \in Y$ of $f$. Then $H_1(W; \mathbb{Z}) \neq 0$ for all open sets $W \subset X_f$ satisfying $$\overline{f}^{-1}\{y\} \cap W \neq \varnothing.$$
\end{Thm}
We begin the proof of Theorem \ref{15} with two lemmas. The following observation is well known to experts.
\begin{Apulause}\label{2} Let $X$ be a locally connected Hausdorff space and $W \subset X$ an open and connected subset. Suppose there exist open and connected subsets $U,V \subset W$ so that $W=U \cup V$ and $U \cap V$ is not connected. Then the first homology group $H_1(W; \mathbb{Z})$ is not trivial.
\end{Apulause}
\begin{proof} Towards contradiction we suppose that $H_1(W; \mathbb{Z})=0.$ Then the reduced Mayer--Vietoris sequence
$$\cdots \to H_1(W;\mathbb{Z}) \to \widetilde{H}_0(U \cap V;\mathbb{Z}) \to \widetilde{H}_0(U; \mathbb{Z}) \oplus \widetilde{H}_0(V; \mathbb{Z}) \to \widetilde{H}_0(W; \mathbb{Z})$$
takes the form
$$0 \to \widetilde{H}_0(U \cap V; \mathbb{Z}) \to 0.$$
Thus, $\widetilde{H}_0(U \cap V; \mathbb{Z})=0.$ Thus $U \cap V$ is connected, which is a contradiction. Thus, $H_1(W; \mathbb{Z})$ is not trivial.
\end{proof}
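To make the lemma concrete, consider $W=S^1$ covered by two open arcs $U,V$ whose intersection has two components; the lemma then forces $H_1(S^1;\mathbb{Z})\neq 0$. The following computation (an illustrative sketch, not part of the paper) verifies this by computing the Betti numbers of the minimal triangulation of the circle from its boundary matrix.

```python
# Illustrative sketch (not part of the paper): for W = S^1 covered by two
# open arcs U and V whose intersection has two components, the lemma above
# forces H_1(S^1; Z) != 0.  We verify this by computing Betti numbers of the
# minimal triangulation of the circle (3 vertices, 3 edges, no 2-simplices).
from fractions import Fraction

def rank(matrix):
    """Rank over Q by Gauss-Jordan elimination."""
    m = [[Fraction(x) for x in row] for row in matrix]
    r = 0
    for col in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                factor = m[i][col] / m[r][col]
                m[i] = [a - factor * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

edges = [(0, 1), (1, 2), (0, 2)]
d1 = [[0] * len(edges) for _ in range(3)]   # rows: vertices, columns: edges
for j, (v, w) in enumerate(edges):
    d1[v][j], d1[w][j] = -1, 1              # boundary of an edge is w - v

r1 = rank(d1)
b0 = 3 - r1                  # rank H_0 = #vertices - rank d1
b1 = len(edges) - r1         # rank H_1 = dim ker d1 (there are no 2-simplices)
print(b0, b1)                # 1 1: connected, with non-trivial H_1
```

The first Betti number $1$ confirms the non-trivial $H_1$ predicted by the lemma.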
The following lemma is the key observation in the proof of Theorem |ef{15}.
\begin{Apulause}\label{kuu}Suppose $f \colon X \to Y$ is a branched covering between manifolds. Suppose $W$ is a connected locally connected Hausdorff space and $p \colon W \to X$ and $q \colon W \to Y$ are normal branched coverings so that the diagram
\begin{equation*}
\xymatrix{
& W \ar[ld]_{p} \ar[rd]^{q} &\\
X \ar[rr]^{f} & & Y}
\end{equation*}
commutes. Suppose there exists an open and connected subset $C_1 \subset Y$ so that $D_1=f^{-1}(C_1)$ is connected and $f | D_1 \colon D_1 \to C_1$ is a normal branched covering. If $E_1=q^{-1}(C_1)$ is connected, then $f$ is a normal branched covering.
\end{Apulause}
\begin{proof}
Since $E_1=q^{-1}(C_1)$ is connected, we have
$$\mathrm{Deck}(q)=\{\tau \in \mathrm{Deck}(q) : \tau(E_1)=E_1\} \cong \mathrm{Deck}(q | E_1 \colon E_1 \rightarrow C_1)$$
and
$$\mathrm{Deck}(p)=\{\tau \in \mathrm{Deck}(p) : \tau(E_1)=E_1\} \cong \mathrm{Deck}(p | E_1 \colon E_1 \rightarrow D_1),$$
where the isomorphisms are canonical in the sense that they map every deck transformation $\tau \colon W \to W$ to the restriction $\tau | E_1 \colon E_1 \rightarrow E_1.$
Since $f | D_1 \colon D_1 \rightarrow C_1$ is a normal branched covering,
$$\mathrm{Deck}(p | E_1 \colon E_1 \rightarrow D_1) \subset \mathrm{Deck}(q | E_1 \colon E_1 \rightarrow C_1)$$ is a normal subgroup. Hence, $\mathrm{Deck}(p) \subset \mathrm{Deck}(q)$ is a normal subgroup. Hence, the branched covering $f \colon X \rightarrow Y$ is normal.
\end{proof}
\begin{proof}[Proof of Theorem \ref{15}] Let $W \subset X_f$ be an open set with $\overline{f}^{-1}\{y\} \cap W \neq \varnothing,$ where $y \in Y$ is a destructive point of $f,$ and let $x \in \overline{f}^{-1}\{y\} \cap W.$ By Lemma \ref{2}, to show that $H_1(W; \mathbb{Z})\neq 0$ it is sufficient to show that there exists a domain cover of $W$ that is not strong.
Let $V_0$ be a destructive neighbourhood of $y$ so that the $x$-component $W_0$ of $\overline{f}^{-1}(V_0)$ is a normal neighbourhood of $x$ in $W.$ Let $\{A,B\}$ be a strong domain cover of $V_0$ so that $y \in \bar B \subset V_0$ and $\{W_0^A,W_0^B\}$ is a domain cover of $W_0$ for $W_0^A:=(\bar f| W_0)^{-1}(A)$ and $W_0^B:=(\bar f| W_0)^{-1}(B)$ (see Lemma \ref{fun}).
We first show that $\{W_0^A,W_0^B\}$ is not strong. Suppose towards contradiction that $\{W_0^A,W_0^B\}$ is strong. Then $A \cap B,$ $f^{-1}(A) \cap f^{-1}(B)$ and $W^A_0 \cap W^B_0$ are connected and
\begin{equation*}\label{hei}
\xymatrix{
& W^A_0 \cap W^B_0 \ar[ld]_{p | W^A_0 \cap W^B_0} \ar[rd]^{\overline{f}| W^A_0 \cap W^B_0} &\\
f^{-1}(A) \cap f^{-1}(B) \ar[rr]^{f | f^{-1}(A) \cap f^{-1}(B)} & & A \cap B}
\end{equation*}
is a commutative diagram of branched covers.
In particular, since $f | (f^{-1}(A) \cap f^{-1}(B))$ is a normal branched cover, $\mathrm{Deck}(p | W^A_0 \cap W^B_0)\subset \mathrm{Deck}(\bar f | W^A_0 \cap W^B_0)$ is a normal subgroup.
On the other hand, since
$$W^A_0 \cap W^B_0=(\bar f | W_0) ^{-1}(A \cap B)=(p | W_0)^{-1}(f^{-1}(A) \cap f^{-1}(B)) \subset W_0$$ is connected,
$\mathrm{Deck}(p | W^A_0 \cap W^B_0)\cong \mathrm{Deck}(p | W_0)$ and $\mathrm{Deck}(\bar f | W^A_0 \cap W^B_0) \cong \mathrm{Deck}(\bar f | W_0)$, and in particular $\mathrm{Deck}(p | W_0)\subset \mathrm{Deck}(\bar f | W_0)$ is a normal subgroup. Thus the factor $f | f^{-1}(V_0) \colon f^{-1}(V_0) \to V_0$ of $\bar f | W_0 \colon W_0 \to V_0$ is a normal branched covering. This is a contradiction, since $V_0$ is a destructive neighbourhood of $y,$ and we conclude that $W^A_0 \cap W^B_0 \subset W_0$ is not connected.
Since $\overline{B} \subset V_0,$ there exists an open neighbourhood $W' \subset W$ of $W \setminus W_0^B$ so that $W' \cap W_0^B= \varnothing.$ Now $W_0^A \cup W'$ is connected, since $W_0^A$ is connected and every component of $W'$ has a non-empty intersection with $W_0^A.$ Further, $W=W_0^B \cup (W_0^A \cup W')$ and $W_0^B \cap (W_0^A \cup W')=W_0^A \cap W_0^B.$ Thus $\{W_0^B,W_0^A \cup W'\}$ is a domain cover of $W$ that is not strong, and by Lemma \ref{2} we conclude $H_1(W; \mathbb{Z})\neq 0$.
\end{proof}
This concludes the proof of Theorem \ref{11}, and further, by Corollary \ref{pimia}, the proof of Theorem \ref{toka} in the introduction.
\end{document}
\begin{document}
\sectionfont{\bfseries\large\sffamily}
\subsectionfont{\bfseries\sffamily\normalsize}
\noindent
{\sffamily\bfseries\LARGE
Selecting and Ranking Individualized Treatment Rules with Unmeasured
Confounding}
\noindent
\textsf{Bo Zhang, Jordan Weiss, Dylan S.\ Small, Qingyuan Zhao*}
\noindent
\textsf{University of Pennsylvania and University of Cambridge}
\noindent
\textsf{{\bf Abstract}: It is common to compare individualized treatment rules based on the value function, which is the expected potential outcome under the treatment rule. Although the value function is not point-identified when there is unmeasured confounding, it still defines a partial order among the treatment rules under Rosenbaum's sensitivity analysis model. We first consider how to compare two treatment rules with unmeasured confounding in the single-decision setting and then use this pairwise test to rank multiple treatment rules. We consider how to, among many treatment rules, select the best rules and select the rules that are better than a control rule. The proposed methods are illustrated using two real examples, one about the benefit of malaria prevention programs to different age groups and another about the effect of late retirement on senior health in different gender and occupation groups.}
\noindent
\textsf{{\bf Keywords}: Multiple testing; Observational studies;
Partial order; Policy discovery; Sensitivity analysis.}
\noindent
\textsf{*Correspondence to: Qingyuan Zhao, Statistical Laboratory,
University of Cambridge, Wilberforce Road, Cambridge, CB3 0WB,
UK. Email: [email protected].}
\section{Introduction}
A central statistical problem in precision medicine and health policy
is to learn treatment rules that are tailored to the
patient's characteristics. There is now an exploding literature on
individualized policy discovery; see \citet{Precision_medicine} for an
up-to-date review. Although randomized experiments remain the gold
standard for causal inference, there has been a growing interest in using
observational data to draw causal conclusions and discover
individualized treatment rules due to the increasing availability of
electronic health records and other observational data sources
\citep{moodie2012q,athey2017efficient,kallus2017recursive,M_learning,zhao2019efficient}.
A common way to formulate the problem of individualized policy
discovery is via the \emph{value
function}, which is the expected potential outcome under a treatment rule
or regime. The optimal treatment rule is usually defined as the one
that maximizes the value function. In the single-decision setting, the
value function can be easily
identified when the data come from a randomized experiment (as long as
the probability of receiving treatment is never $0$ or $1$). When the
data come from an observational study, the value function can still be identified under the assumption that all confounders are measured. This assumption
can be further extended to the multiple-decision setting
\citep{murphy2003optimal,robins2004optimal}. In this paper we
will focus our discussion on the single-decision setting but consider
the possibility of unmeasured confounding.
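For readers who prefer a computational picture, the following sketch (simulated data and a hypothetical outcome model, not an example from the paper) estimates the value function of a treatment rule in the single-decision randomized setting by inverse probability weighting.

```python
# Sketch with simulated data (hypothetical outcome model, not from the paper):
# in a randomized experiment with known treatment probability e, the value
# V(r) = E[Y(r(X))] of a rule r is identified and can be estimated by
# inverse probability weighting: the mean of 1{A = r(X)} * Y / e.
import random

random.seed(0)
e = 0.5                                  # randomization probability

def outcome(x, a):                       # hypothetical potential outcomes
    return (1.0 if a == (x > 0) else 0.0) + random.gauss(0, 0.1)

data = []
for _ in range(20000):
    x = random.uniform(-1, 1)
    a = random.random() < e
    data.append((x, a, outcome(x, a)))

def value_ipw(rule):
    return sum(yv / e for xv, av, yv in data if av == rule(xv)) / len(data)

print(value_ipw(lambda x: x > 0))        # close to 1: the rule matches the signal
print(value_ipw(lambda x: False))        # close to 0.5: never treat
```

Under unmeasured confounding the probability in the denominator is no longer known, which is exactly the complication taken up below.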
With few exceptions, the vast majority
of existing methods for treatment rule discovery from observational
data are based on the no unmeasured confounding
assumption. Typically, these methods first estimate the value
function assuming no unmeasured confounding
and then select the treatment rule that maximizes the
estimated value function. However, it is common that a substantial
fraction of the
population appear to behave similarly under treatment or control. From
a statistical perspective and if there is truly no unmeasured
confounder, we should still attempt to estimate the treatment effect
for individuals in this subpopulation and optimize the treatment rule
accordingly. However, the optimal treatment decisions for
these individuals are, intuitively, also the most sensitive to unmeasured
confounding. It may only take a small amount of unmeasured confounding
to change the sign of the estimated treatment effects for these
individuals. From a policy perspective (especially when there is a
cost constraint), learning the ``optimal'' treatment decision for
these individuals from observational data seems likely
to be error-prone.
\subsection{Sensitivity analysis for individualized treatment rules}
\label{sec:sens-analys-indiv}
There is a long literature on studying the sensitivity of
observational studies to unmeasured confounding, dating from
\citet{Cornfield1959}. In short, such sensitivity analysis asks
how much unmeasured confounding is needed to alter the causal
conclusion of an observational study qualitatively. In this paper, we
will study the sensitivity of individualized treatment rules to
unmeasured confounding using a prominent model proposed by
\citet{Rosenbaum1987}, where the odds ratio of receiving the treatment
for any two individuals with the same observed covariates is bounded
between $1/\Gamma$ and $\Gamma$ ($\Gamma \ge 1$; $\Gamma = 1$
corresponds to no unmeasured confounding). More specifically,
we will consider selecting and ranking individualized treatment rules
under Rosenbaum's model for unmeasured confounding.
Our investigation is motivated by the impact of effect
modification on the power of Rosenbaum's sensitivity analysis studied by
\citet{Hsu2013effectmodification}. A phenomenon found by
\citet{Hsu2013effectmodification} is that subgroups with larger
treatment effect may have larger design sensitivity. For example,
suppose a subgroup A has larger treatment effect than a subgroup B
based on observational data. Then, there may exist a $\Gamma > 1$ such
that, when the sample sizes of both subgroups go to infinity, the
probability of rejecting Fisher's sharp null hypothesis under the
$\Gamma$-sensitivity model goes to $1$ for subgroup A and $0$ for
subgroup B. Therefore, to obtain causal conclusions that are most
robust to unmeasured confounding, it may be more desirable to use a smaller
subgroup with larger treatment effect than to use a larger subgroup
with smaller treatment effect.
When comparing individualized treatment rules to a baseline, the above
phenomenon suggests that a treatment rule with smaller value may be
less sensitive to unmeasured confounding than a treatment rule with
larger value. In other words, when there is unmeasured confounding,
the ``optimal'' treatment rule might not be the one that maximizes the
value function assuming no unmeasured confounding; in fact, there are usually
many ``optimal'' treatment rules. This is because the value function
in this case only defines a \emph{partial order} on the set of
individualized treatment rules, so two rules with different values
assuming no unmeasured confounding may become
indistinguishable under the $\Gamma$-sensitivity model when $\Gamma >
1$. Fundamentally, the reason is that the value function is only
partially identified in Rosenbaum's $\Gamma$-sensitivity model.
As an example, let us use $r_2 \succ_{\Gamma} r_1$ (abbreviated as
$r_2 \succ r_1$ if the value of $\Gamma$ is clear from the context) to denote that
the value of rule $r_2$ is \emph{always} greater than the value of
$r_1$ when the unmeasured confounding satisfies the
$\Gamma$-sensitivity model. Then, it is possible that
\begin{itemize}
\item Under $\Gamma = 1$, $r_2 \succ r_1 \succ r_0$ (so $r_2 \succ r_0$);
\item Under some $\Gamma > 1$, $r_1 \succ r_0$ but $r_2 \not \succ r_0$.
\end{itemize}
This phenomenon occurs frequently in real data examples; see
\Cref{fig: hasse malaria hybrid} in \Cref{subsec: ranking}. Note that
the relation $\succ$ is
defined using the value function computed using the population instead
of a particular sample.
Because the value function only defines a partial order on the
treatment rules, it is no longer well-defined to estimate \emph{the}
optimal treatment rule when there is unmeasured confounding. Instead,
we aim to recover the partial ordering of a set of treatment rules or
select a subset of rules that satisfy certain statistical
properties. This problem is related to the problem of selecting and ranking
subpopulations (as a post hoc analysis for randomized experiments)
which has been extensively studied in statistics
\citep{gupta1979multiple,gibbons1999selecting}. Unfortunately, in problems
considered by the existing literature, the subpopulations always have
a \emph{total order}. For example, a prototypical problem in that
literature is to select a subset that contains the largest $\mu_i$
based on independent observations $Y_i \sim \mathrm{N}(\mu_i,1)$. It is
evident that the methods developed there cannot be directly applied to the
problem of comparing treatment rules which only bears a partial
order. Nevertheless, we will borrow some definitions in
that literature to define the goal of selecting and ranking
individualized treatment rules.
\subsection{Related work and our approach}
Existing methods for individualized policy discovery from observational data
often take an \emph{exploratory} stance. They aim to select the
individualized treatment rule, often within an infinite-dimensional
function class, that maximizes the estimated value function using
outcome regression-based \citep{robins2004optimal, qian_murphy2011},
inverse-probability weighting \citep{Zhao_OWL, kallus2017recursive},
or doubly-robust estimation \citep{dudik2014doubly,
athey2017efficient}. In order to estimate the value function, some
parametric or semiparametric models are specified to model the outcome
and/or the treatment selection process. To identify the value function,
the vast majority of these approaches make the no unmeasured confounding
assumption which may be unrealistic in many applications. The only
exception to our knowledge is \citet{kallus2018confounding}, in which
the authors propose to maximize the minimum
value of an individualized treatment rule when the unmeasured
confounding satisfies a marginal sensitivity model
\citep{tan2006distributional,Zhao2017}. This is further extended to
the estimation of conditional average treatment effect with unmeasured
confounding in \citet{kallus2018interval}. Another related work is
\citet{yadlowsky2018bounds} who consider semiparametric inference
for the average treatment effect in Rosenbaum's sensitivity model.
In this paper we take a different perspective. Our
approach is based on a statistical test to compare the
value of two individualized treatment rules when there is limited
unmeasured confounding. Briefly speaking, we first match the treated and
control observations by the observed covariates and then propose to
use Rosenbaum's sensitivity model to quantify the magnitude of
unmeasured confounding after matching (the deviation of the
matched observational study from a pairwise randomized
experiment). At the core of our proposal is a randomization
test introduced by \cite{fogarty2016studentized} to compare the value
of two individualized treatment rules in Rosenbaum's sensitivity
model. Based on this test, we
introduce a framework to rank and select treatment rules within a
given finite collection and show that different statistical errors
can be controlled with the appropriate multiple hypothesis testing
methods.
In principle, our framework can be used with an arbitrary (finite)
number of pre-specified treatment rules. In practice, it is more
suitable for small-scale policy discovery with relatively few
decision variables, where machine learning methods are not needed to
discover complex patterns or where such methods have already been
employed in a preliminary study to suggest a few candidate
rules. The design-based nature of our approach makes it
particularly useful
for \emph{confirmatory} analyses, the importance of which is widely
acknowledged in the policy discovery literature
\citep[e.g.][]{kallus2017recursive,zhang2018interpretable,Precision_medicine}. Methods
proposed in this paper thus complement the existing literature on
individualized treatment rules by
providing a way to confirm the effectiveness of a treatment rule
learned from observational data and assess its robustness to
unmeasured confounding. When there are several competing treatment
rules, our framework further facilitates the decision maker
to select or rank the treatment rules using flexible criteria.
The rest of the paper is organized as follows. In \Cref{sec:2} we
introduce a real data example that will be used to illustrate the
proposed methods. We then introduce some notation and discuss how to
compare two treatment rules when there is unmeasured confounding. In
\Cref{sec:multiple} we consider three questions about ranking and
selecting among multiple treatment rules. We compare our proposal with
some baseline procedures in a simulation study in
\Cref{sec:simulations} and apply our method to another application
using data from the Health and Retirement Study. Finally, we conclude
our paper with some brief discussion in \Cref{sec:discussion}.
\section{Comparing treatment rules with unmeasured confounding}
\label{sec:2}
\subsection{Running example: Malaria in West Africa}
\label{sec:running-example}
The Garki Project, conducted by the World Health Organization and the
Government of Nigeria from 1969 to 1976, was an observational study that
compared several strategies to control
malaria. \citet{Hsu2013effectmodification} studied the effect
modification for one of the malaria control
strategies, namely spraying with an insecticide, propoxur, together
with mass administration of a drug sulfalene-pyrimethamine at high
frequency. The outcome is the difference between the frequency
of Plasmodium falciparum in blood samples, that is, the frequency of a
protozoan parasite that causes malaria, measured before and after the
treatment. Using 1560 pairs of treated and control individuals matched
by their age and gender, \citet{Hsu2013effectmodification} found that
the treatment was much more beneficial for young children
than for other individuals, if there is no unmeasured
confounding.
More interestingly, they found that, despite the reduced
sample size, the 447 pairs of young children exhibit a treatment
effect that is far less sensitive to unmeasured confounding bias than
the full sample of 1560 pairs. So from a policy perspective, it may be
preferable to implement the treatment only for young children rather
than the whole population. In the rest of this paper we will
generalize this idea to selecting and ranking treatment rules. We will
use the matched dataset in \citet{Hsu2013effectmodification} to
illustrate the definitions and methodologies in the paper; see the
original article for more information about the Garki Project dataset
and the matched design. A different application concerning the effect
of late retirement on health outcomes will be presented in
\Cref{sec:application} near the end of this article.
\subsection{Notation and definitions}
\label{sec:some-notat-defin}
We first introduce some notation in order to compare treatment rules
when there is unmeasured confounding. Let ${X}$
denote all the pre-treatment covariates measured by the
investigator. In the single-decision setting considered in this paper,
an individualized treatment rule (or treatment regime) $d$ maps a
vector of pre-treatment covariates $ X$ to the binary treatment decisions,
$\{0, 1\}$ ($0$ indicates control and $1$ indicates treatment). In our
running example, we shall consider six treatment rules,
$r_0, r_1, \dotsc, r_5$, where $r_i$ assigns treatment to the youngest $i
\times 20\%$ of the individuals. Specifically, the minimum, $20\%,
40\%, 60\%, 80\%$ quantiles, and maximum of age are $0 \,
(\text{newborn})$, $7$, $20$, $31$, $41$, and $73$ years old.
Let
$Y$ be the outcome and $Y(0),Y(1)$ be the potential outcomes under
control and treatment. The potential outcome under a treatment rule
$d$ is defined, naturally, as $Y(d) = Y(0) 1_{\{d(X)=0\}} + Y(1)
1_{\{d(X)=1\}}$. A common way to compare treatment rules is to use the
value function, defined as the expected potential outcome under that
rule, $V(d) = \mathbb{E}[Y(d)]$. The \emph{value difference} of two treatment
rules, $r_1$ and $r_2$, is thus
\begin{equation} \label{eq:value-diff}
\begin{split}
V(r_2) - V(r_1) &= \mathbb{E}[Y(r_2) - Y(r_1) \,|\, r_2 \neq
r_1] \cdot \mathbb{P}(r_2 \neq r_1) \\
&= \mathbb{E}[Y(1) - Y(0) \,|\, r_2 > r_1] \cdot \mathbb{P}(r_2 > r_1)
- \mathbb{E}[Y(1) - Y(0) \,|\, r_2 < r_1] \cdot \mathbb{P}(r_2 < r_1),
\end{split}
\end{equation}
where for simplicity the event $r_1( X) \ne r_2( X)$ is
abbreviated as $r_1 \ne r_2$ (similarly for $r_1 < r_2$ and $r_1 >
r_2$). Note that the event $r_2 > r_1$ is the same as $r_2 = 1,r_1 =
0$ because the treatment decision is binary. One of the terms on the right
hand side of \eqref{eq:value-diff} will become $0$ if the treatment
rules are nested. In the malaria example, $r_0 \le r_1 \le \cdots \le
r_5$, so the value difference of the rules $r_1$ and $r_2$ can be written as
\[
V(r_2) - V(r_1) = \mathbb{E}[Y(1)-Y(0)\, |\, \text{Age} \in [7, 20)] \cdot
\mathbb{P}(\text{Age} \in [7, 20)).
\]
In this case, testing the sign of $V(r_2) - V(r_1)$ is equivalent to
testing the sign of the conditional average treatment effect,
$\mathbb{E}[Y(1)- Y(0)\,|\,r_2 > r_1]$.
The definition of the value function depends on the potential
outcomes. To identify the value function using observational data, it
is standard to make the following assumptions \citep{Precision_medicine}:
\begin{enumerate}
\item Positivity: $\mathbb{P}(A=a \,|\, X = x) > 0$ for all $a$
and $ x$;
\item Consistency (SUTVA): $Y = Y(A)$;
\item Ignorability (no unmeasured confounding): $Y(a)\, \indep \, A \, | \, X$
for all $a$.
\end{enumerate}
Under these conditions, it is straightforward to show that the value
function is identified by \citep{qian_murphy2011}
\[
V(d) = \mathbb{E} \bigg[ \frac{Y I(A=d(X))}{\pi(A,X)} \bigg],
\]
where $I$ is the indicator function of an event and $\pi(a,x) =
\mathbb{P}(A=a|X=x)$ is the propensity score.
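The IPW identification formula above is straightforward to compute from a sample. The following is a minimal Python sketch (the function and argument names are ours, not from any package), assuming the propensity scores are known:

```python
import numpy as np

def ipw_value(y, a, propensity, rule):
    """Inverse-probability-weighted estimate of the value V(d).

    y: outcomes; a: received treatments in {0, 1};
    propensity: P(A = 1 | X) for each subject;
    rule: treatment decisions d(X) in {0, 1}.
    """
    # pi(A, X) is the probability of the treatment actually received
    pi = np.where(a == 1, propensity, 1.0 - propensity)
    return np.mean(y * (a == rule) / pi)
```

In a randomized experiment with $\pi = 1/2$, this estimator averages $2Y$ over the subjects whose assigned treatment agrees with the rule.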
The value function gives a natural and total order to the
treatment rules. If the above identification assumptions hold, the
value functions can be identified and thus this order can be
consistently estimated as the sample size increases to infinity. In
general, it is impossible to recover this order when there is
unmeasured confounding. However, if the magnitude of unmeasured
confounding is bounded according to a sensitivity model (a collection
of distributions of the observed variables and unobserved potential
outcomes), it is possible to partially identify the difference between
the values of two treatment rules and thus obtain a partial order.
\begin{definition}
\label{def: prec_gamma}
Let $r_1$ and $r_2$ be two treatment rules that map a vector of
pre-treatment covariates $X$ to a binary treatment decision $\{0,
1\}$, and $V(r_1)$, $V(r_2)$ their corresponding value
functions. Given a sensitivity analysis model indexed by $\Gamma$, we say
that the rule $r_1$ is dominated by $r_2$ with a margin $\delta$ if
$V(r_2) - V(r_1) > \delta$ for all distributions in the sensitivity
analysis model. We denote this relation as $r_1 \prec_{\Gamma,\delta} r_2$
and further abbreviate it as $r_1 \prec_\Gamma r_2$ if $\delta =
0$. We write $r_1 \not \prec_{\Gamma} r_2$ if $r_1$ is not
dominated by $r_2$ with margin $\delta = 0$.
\end{definition}
Notice that the partial order should be defined in terms of the
partially identified interval for $V(r_2) - V(r_1)$ instead of the
partially identified intervals for $V(r_1)$ and $V(r_2)$. This is
because the same distribution of the unobserved potential outcomes
needs to be used when computing the partially identified interval for
$V(r_2) - V(r_1)$, so it is not simply the difference between the
partially identified intervals for the individual values (the easiest
way to see this is to take $r_1 = r_2$). We thank an anonymous
reviewer for pointing this out.
It is easy to see that $\prec_{\Gamma}$ is a strict partial order on
the set of treatment rules because it satisfies irreflexivity (not $r_1
\prec_{\Gamma} r_1$),
transitivity ($r_1 \prec_{\Gamma} r_2$ and $r_2 \prec_{\Gamma} r_3$
imply $r_1 \prec_{\Gamma} r_3$), and asymmetry ($r_1 \prec_{\Gamma}
r_2$ implies not $r_2 \prec_{\Gamma} r_1$). In Rosenbaum's sensitivity
model, introduced below, $\Gamma = 1$ corresponds
to no unmeasured confounding and thus the relationship
$\prec_{\Gamma=1}$ is a total order.
\subsection{Testing $r_1 \not \prec_\Gamma r_2$ using matched observational
studies}
\label{sec:test-r_1-prec_g}
With the goal of selecting and ranking
treatment rules with unmeasured confounding in mind, in this section we
consider the easier but essential task of comparing the value of two treatment
rules, $r_1$ and $r_2$, under Rosenbaum's sensitivity model. This test
will then serve as the basic element of our procedures of selecting and
ranking among multiple treatment rules below. We will first
introduce the pair-matched design of an observational study and Rosenbaum's
sensitivity model, and then describe a studentized sensitivity
analysis proposed by \citet{fogarty2016studentized} that tests
Neyman's null hypothesis of average treatment effect being zero under
Rosenbaum's sensitivity model. This test can be immediately extended
to compare the value of treatment rules.
Suppose the observed data are $n$ pairs, $i = 1, 2, \dotsc, n$, of two
subjects $j = 1,2$. These $n$ pairs are matched for observed
covariates $\bm X$ and within each pair, one subject is treated,
denoted $A_{ij} = 1$, and the other control, denoted $A_{ij} = 0$, so
that we have $\bm X_{i1} = \bm X_{i2}$ and $A_{i1} + A_{i2} = 1$ for all
$i$. In a sensitivity analysis, we may fail to match on
an unobserved confounder $U_{ij}$ and thus incur unmeasured
confounding bias.
\cite{Rosenbaum1987,Rosenbaum2002a} proposed a
one-parameter sensitivity model. Let $\mathcal{F} =
\{(Y_{ij}(0),Y_{ij}(1), \allowbreak \bm X_{ij}, U_{ij}), i = 1,\dotsc,n,
j = 1,2\}$ be the collection of all measured or unmeasured variables
other than the treatment assignment. Rosenbaum's sensitivity model
assumes that $\pi_i = P(A_{i1} = 1 | \mathcal{F})$ satisfies
\begin{equation}
\frac{1}{1+\Gamma} \leq \pi_i \leq
\frac{\Gamma}{1+\Gamma},~i =
1, 2, \dotsc, n.
\label{eqn: rosenbaum model}
\end{equation}
When $\Gamma = 1$, this model asserts that $\pi_i = 1/2$ for all $i$
and thus every subject has equal probability to be assigned to
treatment or control (i.e.\ no unmeasured confounding). In general,
$\Gamma > 1$ controls the degree of departure from
randomization. \cite{Rosenbaum2002a,rosenbaum2011new} derived
randomization inference
based on signed score tests for Fisher's sharp null hypothesis that
$Y_{ij}(0) = Y_{ij}(1)$ for all $i,j$. The asymptotic properties of
these randomization tests are studied in
\citet{rosenbaum2004design,Rosenbaum2015} and
\citet{Zhao2018sens_value}.
In the context of comparing individualized treatment rules, Fisher's
sharp null hypothesis is no longer suitable because we expect to have
(and indeed are tasked to find) heterogeneous treatment effect. Recently,
\cite{fogarty2016studentized}
developed a valid studentized test for Neyman's null hypothesis that
the average treatment effect is equal to zero, $(2n)^{-1}\sum_{ij}
Y_{ij}(1) - Y_{ij}(0) = 0$, under Rosenbaum's sensitivity model. We
briefly describe Fogarty's test. Let $D_i$ denote the
treated-minus-control difference in the $i^{th}$ matched pair,
$D_i = (A_{i1} - A_{i2}) (Y_{i1} - Y_{i2})$. Fix the
sensitivity parameter $\Gamma$ and define
\[
D_{i, \Gamma} = D_i - \left(\frac{\Gamma - 1}{\Gamma +1}
\right)|D_i|,~\overline{D}_{\Gamma} = \frac{1}{n}
\sum_{i=1}^n D_{i,\Gamma},~\text{and}~
\text{se}(\overline{D}_\Gamma)^2 = \frac{1}{n(n - 1)} \sum_{i = 1}^n (D_{i,
\Gamma} - \overline{D}_{\Gamma})^2.
\]
\citet{fogarty2016studentized} showed that the one-sided student-$t$
test that rejects Neyman's hypothesis when
\[
\frac{\overline{D}_\Gamma}{\text{se}(\overline{D}_\Gamma)}
> \Phi^{-1}(1 - \alpha)
\]
is asymptotically valid with level $\alpha$ under Rosenbaum's
sensitivity model \eqref{eqn: rosenbaum
model} and mild regularity
conditions. This test can be easily extended to test the null that the
average treatment effect is no greater than $\delta$ by replacing
$D_i$ with $D_i - \delta$.
\citet{fogarty2016studentized} also provided a
randomization-based reference distribution in addition to the
large-sample normal approximation.
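The studentized statistic above is simple to compute. The following Python sketch (our own function names; this is an illustration of the test described above, not code from any published package) implements the large-sample normal cutoff:

```python
import numpy as np
from statistics import NormalDist

def fogarty_test(d, gamma, alpha=0.05, delta=0.0):
    """Studentized sensitivity analysis for Neyman's null (sketch).

    d: treated-minus-control pair differences D_i; gamma >= 1 is the
    sensitivity parameter.  Tests the null that the average treatment
    effect is no greater than delta under Rosenbaum's
    Gamma-sensitivity model; returns the statistic and whether it
    exceeds the normal cutoff.
    """
    d = np.asarray(d, dtype=float) - delta
    n = len(d)
    # D_{i, Gamma} = D_i - ((Gamma - 1) / (Gamma + 1)) |D_i|
    d_gamma = d - (gamma - 1.0) / (gamma + 1.0) * np.abs(d)
    dbar = d_gamma.mean()
    se = np.sqrt(np.sum((d_gamma - dbar) ** 2) / (n * (n - 1)))
    stat = dbar / se
    return stat, stat > NormalDist().inv_cdf(1.0 - alpha)
```

At $\Gamma = 1$ this reduces to the usual one-sided paired $t$-type test with a normal cutoff; as $\Gamma$ grows, the statistic shrinks for data with mixed signs.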
The above test for the average treatment effect can be readily
extended to comparing treatment rules. Recall that equation
\eqref{eq:value-diff} implies the value difference of two rules $r_1$
and $r_2$ is a weighted difference of two conditional average
treatment effects on the set $r_1 > r_2$ and $r_2 > r_1$. When the two
rules are nested (without loss of generality assume $r_2 \ge r_1$),
testing the null hypothesis that $r_1 \not \prec_{\Gamma} r_2$
is equivalent to testing a Neyman-type hypothesis $\mathbb{E}[Y(1) - Y(0) |
r_2 > r_1] \le 0$ under the $\Gamma$-sensitivity model. We can simply
apply Fogarty's test to the matched pairs (indexed by $i$) that satisfy
$r_2(\bm X_{i1}) > r_1(\bm X_{i1})$. When the two rules are not nested, we can
flip the sign of $D_i$ for those $i$ such that $r_2(\bm X_{i1}) <
r_1(\bm X_{i1})$ and then apply Fogarty's test. In summary, to test the
null hypothesis that $r_1 \not \prec_{\Gamma} r_2$, we can
simply apply Fogarty's test to $\{D_i \cdot [r_2(\bm X_{i1}) - r_1(\bm X_{i1})],~\text{for}~i~\text{such
that}~r_1(\bm X_{i1}) \ne r_2(\bm X_{i1})\}$. To test the hypothesis $r_1 \not
\prec_{\Gamma,\delta} r_2$, we can use Fogarty's test for the average
treatment effect no greater than $\delta \cdot (n / m)$ where $m =
\big|\{i:\,r_1(\bm X_{i1}) \ne r_2(\bm X_{i1})\}\big|$.
\subsection{Sensitivity value of treatment rule comparison}
A hallmark of Rosenbaum's sensitivity analysis framework is its tipping-point
analysis, and this extends to the comparison of treatment rules. When
testing $r_1 \not \prec_\Gamma r_2$ with a series of
$\Gamma$, there exists a smallest $\Gamma$ such that the null
hypothesis cannot be rejected, that is, we are no longer confident
that $r_1$ is dominated by $r_2$ in that $\Gamma$-sensitivity
model. This tipping point is commonly referred to as the
\emph{sensitivity value} \citep{Zhao2018sens_value}. Formally, we
define the sensitivity value for $r_1 \prec r_2$ as
\begin{equation*}
\begin{split}
\Gamma_{\alpha}^\ast(r_1 \prec r_2) = \inf\{\Gamma \geq 1~:
&~\text{The hypothesis}~V(r_1)
\ge V(r_2) \mbox{ cannot be rejected} \\
&\mbox{ at level $\alpha$ under the
$\Gamma$-sensitivity model} \}.
\end{split}
\end{equation*}
Let $r_0$ be the null treatment rule (for example, assigning control
to the entire population). The sensitivity value $\Gamma_{\alpha}^\ast(r_0 \prec r_1)$ is further abbreviated as $\Gamma_{\alpha}^\ast(r_1)$.
\citet{Zhao2018sens_value} studied the asymptotic properties of the
sensitivity value when testing Fisher's sharp null hypothesis using a
class of signed score statistics. Below, we will give the asymptotic
distribution of $\Gamma_{\alpha}^\ast(r_1 \prec r_2)$ using Fogarty's
test as described in the last section. The result will be stated in
terms of a transformation of the sensitivity value,
\[
\kappa^\ast_{\alpha}(r_1 \prec r_2) = \frac{\Gamma_{\alpha}^\ast(r_1 \prec
r_2) - 1}{\Gamma_{\alpha}^\ast(r_1 \prec r_2) + 1}.
\]
Note that $\Gamma^\ast = 1$ is transformed to $\kappa^\ast = 0$ and
$0 \le \kappa^{\ast} < 1$.
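In practice, $\Gamma_{\alpha}^\ast$ can be computed numerically as a tipping point: the test rejects for small $\Gamma$ and stops rejecting beyond $\Gamma_{\alpha}^\ast$. A bisection sketch in Python (assuming the studentized test of the previous subsection and that rejection is monotone in $\Gamma$; names are ours):

```python
import numpy as np
from statistics import NormalDist

def sensitivity_value(d, alpha=0.05, gamma_max=100.0, tol=1e-6):
    """Tipping-point Gamma for the studentized sensitivity test: the
    smallest Gamma at which the test no longer rejects (bisection;
    assumes rejection is monotone in Gamma up to gamma_max)."""
    d = np.asarray(d, dtype=float)
    n, z = len(d), NormalDist().inv_cdf(1.0 - alpha)

    def rejects(gamma):
        dg = d - (gamma - 1.0) / (gamma + 1.0) * np.abs(d)
        dbar = dg.mean()
        se = np.sqrt(np.sum((dg - dbar) ** 2) / (n * (n - 1)))
        return dbar / se > z

    if not rejects(1.0):
        return 1.0  # not even rejected with no unmeasured confounding
    lo, hi = 1.0, gamma_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if rejects(mid) else (lo, mid)
    return 0.5 * (lo + hi)
```

If the pair differences all share one sign, the statistic is invariant in $\Gamma$ and the bisection returns the `gamma_max` cap, which is why a finite upper bound is supplied.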
\begin{proposition}
Assume the treatment rules are nested, $r_1(x) \le r_2(x)$, and let $\mathcal{I}$ be the set of indices $i$ where $r_1(\bm X_{i1}) <
r_2(\bm X_{i1})$. If the moments of $|D_i|$ exist and
$\mathbb{E}[D_i \mid r_1 < r_2] > 0$, then
\begin{equation} \label{eq:sen-value-asymp}
\begin{split}
\sqrt{|\mathcal{I}|}\left(\kappa_{\alpha}^\ast(r_1 \prec r_2) -
\frac{\mathbb{E}[D_i \mid r_1 < r_2]}{\mathbb{E}[|D_i| \mid
r_1 < r_2]}\right) \overset{d}{\to}
\text{N}\left(z_{\alpha} \mu,
~\sigma^2\right),~\text{as}~ |\mathcal{I}| \to \infty, \\
\end{split}
\end{equation}
where $z_\alpha$ is the upper-$\alpha$ quantile of the standard normal
distribution and the parameters $\mu$ and $\sigma^2$ depend on the
distribution of $D_i$ (the expressions can be found in the Appendix).
\label{thm: asymp kappa}
\end{proposition}
The proof of this proposition can be found in the Appendix. When the
treatment rules are not nested, one can simply replace $D_i$ with $D_i
[r_2(\bm X_{i1}) - r_1(\bm X_{i1})]$ and the condition $r_1 < r_2$
with $r_1 \ne r_2$ in the proposition statement. The asymptotic
distribution of $\Gamma^{\ast}_{\alpha}(r_1 \succ r_2)$ can be found by the delta
method and we omit further detail.
The asymptotic distribution in \eqref{eq:sen-value-asymp} is similar to
the one obtained in \citet[Thm.\ 1]{Zhao2018sens_value}. When the
treatment rules are nested $r_1 \le r_2$ and $|\mathcal{I}|
\to \infty$, the sensitivity value converges to a number that depends
on the distribution of $D_i$,
\[
\Gamma_{\alpha}^{*}(r_1 \prec r_2) \overset{p}{\to}
\frac{\mathbb{E}[|D_i| \mid r_1 < r_2] + \mathbb{E}[D_i \mid r_1 <
r_2]}{\mathbb{E}[|D_i| \mid r_1 < r_2] -
\mathbb{E}[D_i \mid r_1 < r_2]}.
\]
The limit on the right hand side is the \emph{design sensitivity}
\citep{rosenbaum2004design} of Fogarty's test for comparing the
treatment rules. As the sample size converges to infinity, the power of
Fogarty's test converges to $1$ at $\Gamma$ smaller than the design
sensitivity and to $0$ at $\Gamma$ larger than the design
sensitivity. The normal distribution in \eqref{eq:sen-value-asymp}
further approximates the finite-sample behavior of the
sensitivity value and can be used to compute the power of a
sensitivity analysis by the fact that rejecting $r_1 \not
\prec_{\Gamma} r_2$ at level $\alpha$ is equivalent to
$\Gamma_{\alpha}^{*}(r_1 \prec r_2) \ge \Gamma$.
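A plug-in estimate of the design sensitivity limit above is immediate from the pair differences on the disagreeing pairs (a Python sketch with our own function name):

```python
import numpy as np

def design_sensitivity(d):
    """Plug-in estimate of the design sensitivity limit
    (E|D| + E[D]) / (E|D| - E[D]), computed from the pair
    differences D_i on the pairs where the two rules disagree."""
    d = np.asarray(d, dtype=float)
    abs_mean, mean = np.abs(d).mean(), d.mean()
    return (abs_mean + mean) / (abs_mean - mean)
```

Note that this plug-in quantity is the limit of the sensitivity value as the number of disagreeing pairs grows; in finite samples the sensitivity value is typically smaller.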
\section{Selecting and ranking treatment rules}
\label{sec:multiple}
Next we consider the problem of comparing multiple treatment rules
with unmeasured confounding. To this end, we need to define the goal
and the statistical error we
would like to control. A related problem is the selection and
ordering of multiple subpopulations
\citep{gupta1979multiple,gibbons1999selecting}, for example, given $K$
independent measurements $Y_i \sim \mathrm{N}(\mu_i,1)$ where $\mu_i$
is some characteristic of the $i$-th
subpopulation. When comparing $\mu_i$, there are many goals
we can define. In fact, \citet[p.\ 4]{gibbons1999selecting} gave a
list of 7 possible goals for ranking and selection of subpopulations and
considered them in the rest of their book. We believe at least $3$ out
of their $7$ goals have practically meaningful counterparts in
comparing treatment rules. Given $K+1$ treatment rules, $\mathcal{R} =
\{r_0, r_1, \dotsc, r_K\}$, we may ask, in terms of their values,
\begin{enumerate}
\item What is the ordering of all the treatment rules?
\item Which treatment rule is the best?
\item Which treatment rule(s) are better than the null/control $r_0$?
\end{enumerate}
In a randomized experiment or an observational study with no
unmeasured confounding, it may be possible to obtain estimates of the
value that are jointly asymptotically normal and then directly use the
methods in \citet{gibbons1999selecting}. However, as discussed in
\Cref{sec:2}, this no longer
applies when there is unmeasured confounding because the value function
may only be partially identified.
\subsection{Defining the inferential goals}
When there is unmeasured confounding, the three goals above need to be
modified because the value
function only defines a partial order among the treatment rules
(\Cref{def: prec_gamma}). We make the following definitions:
\begin{definition}
In the $\Gamma$-sensitivity model, the \emph{maximal rules} in $\mathcal{R}$
are the ones not dominated by any other rule,
\[
\mathcal{R}_{\text{max},\Gamma} = \{r_i\,:\,r_i \not \prec_{\Gamma}
r_j,~\forall j\}.
\]
The \emph{positive rules} are the ones that dominate the control and the
\emph{null rules} are the ones that do not,
\[
\mathcal{R}_{\text{pos},\Gamma} = \{r_i\,:\,r_0 \prec_{\Gamma} r_i
\},~\mathcal{R}_{\text{nul},\Gamma} = \mathcal{R} \setminus
\mathcal{R}_{\text{pos},\Gamma}.
\]
\end{definition}
The maximal set $\mathcal{R}_{\text{max},\Gamma}$ and the null set
$\mathcal{R}_{\text{nul},\Gamma}$ are always non-empty (the latter is
because $r_0 \in \mathcal{R}_{\text{nul},\Gamma}$), become
larger as $\Gamma$ increases, and in general become the full set
$\mathcal{R}$ as $\Gamma \to \infty$.
In the rest of this section, we will consider the following
three statistical problems: for some pre-specified significance level
$\alpha > 0$,
\begin{enumerate}
\item Can we give a set of ordered pairs of treatment rules,
$\hat{\mathcal{O}}_{\Gamma} \subset
\{(r_i,r_j),\,i,j=0,\dotsc,K,\,i\ne j\}$,
such that the probability that all the orderings are correct is at
least $1 - \alpha$, that is, $\mathbb{P}(r_i \prec_{\Gamma}
r_j,\,\forall (r_i,r_j) \in \hat{\mathcal{O}}_{\Gamma}) \ge 1-\alpha$?
\item Can we construct a subset of treatment rules,
$\hat{\mathcal{R}}_{\text{max},\Gamma}$, such that the probability that it
contains all maximal rules is at least $1-\alpha$, that
is, $\mathbb{P}(\mathcal{R}_{\text{max},\Gamma} \subseteq
\hat{\mathcal{R}}_{\text{max},\Gamma}) \ge 1-\alpha$?
\item Can we construct a subset of treatment rules,
$\hat{\mathcal{R}}_{\text{pos},\Gamma}$, such that the probability
    that it does not cover any null rule is at least $1 - \alpha$, that is, $\mathbb{P}(\hat{\mathcal{R}}_{\text{pos},\Gamma} \cap
    \mathcal{R}_{\text{nul},\Gamma} = \emptyset) \ge 1-\alpha$?
\end{enumerate}
Next, we will propose strategies to achieve the above statistical
goals based on the test of two treatment rules with unmeasured
confounding described in \Cref{sec:test-r_1-prec_g}.
\subsection{Goal 1: Ordering the treatment rules}
\label{subsec: ranking}
To start with, let us consider the first goal---ordering the treatment
rules, as the statistical inference is more straightforward. It is the
same as the multiple testing problem where we would like to control
the family-wise error rate (FWER) for
the collection of hypotheses, $\{H_{ij}\,:\,r_i \not \prec_{\Gamma}
r_j,\,i,j=0,\dotsc,K,\,i\ne j\}$. In principle, we can apply any
multiple testing procedure that controls the FWER. A simple example is
Bonferroni's correction for all the $(K+1)K$ tests.
In sensitivity analysis problems, we can often greatly improve the
statistical power by reducing the number of tests using a planning
sample \citep{heller2009split,zhao2018cross}. This is because
Rosenbaum's sensitivity analysis considers the worst case scenario and
is generally conservative when $\Gamma > 1$. The planning sample can
be further used to order the hypotheses so we can sequentially test
them, for example, using a fixed sequence testing procedure
\citep{koch1996statistical,westfall2001optimally}.
There are many possible ways to screen out, order, and then test the
hypotheses. Here we demonstrate one possibility:
\begin{itemize}
\item[Step 1:] Split the data into two parts. The first part
is used for planning and the second part for testing.
\item[Step 2:] For every pair of treatment rules $(r_i,r_j)$, use the
planning sample to estimate population parameters in the asymptotic
distribution of the sensitivity value \eqref{eq:sen-value-asymp}.
\item[Step 3:] Compute the approximate power of testing $H_{ij}:~r_i
\not \prec_{\Gamma} r_j$ in the testing sample using
\eqref{eq:sen-value-asymp}. Order the hypotheses by the estimated
power, from highest to lowest.
\item[Step 4:] Sequentially test the ordered hypotheses using the
testing sample at level $\alpha$, until one hypothesis is rejected.
\item[Step 5:] Output a Hasse diagram of the treatment rules by using all
the rejected hypotheses.
\end{itemize}
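The fixed sequence step (Step 4) can be sketched generically; here `reject_fn` stands in for any level-$\alpha$ test, such as the studentized sensitivity analysis of \Cref{sec:test-r_1-prec_g} (Python, our own names):

```python
def fixed_sequence_test(ordered_hypotheses, reject_fn, alpha=0.1):
    """Fixed sequence testing: test the hypotheses in the
    pre-specified order, each at the full level alpha, and stop at
    the first non-rejection.  This controls the family-wise error
    rate at alpha."""
    rejected = []
    for h in ordered_hypotheses:
        if not reject_fn(h, alpha):
            break  # stop at the first hypothesis that is not rejected
        rejected.append(h)
    return rejected
```

Hypotheses ordered after the first non-rejection are never tested, which is why ordering by estimated power on the planning sample matters.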
A Hasse diagram is an informative graph to represent a partial order
(in our case, $\prec_{\Gamma}$). In this diagram, each treatment rule
is represented by a vertex and an edge goes upward from rule
$r_i$ to rule $r_j$ if $r_i \prec_{\Gamma} r_j$ and there exists no $r_k$
such that $r_i \prec_{\Gamma} r_k$ and $r_k \prec_{\Gamma} r_j$.
Due to transitivity of a partial order, an
upward path from $r_i$ to
$r_j$ in the Hasse diagram {(for example, $r_0$ to $r_3$ in \Cref{fig:
hasse malaria hybrid}, $\Gamma = 1.3$)} indicates that $r_i \prec_{\Gamma} r_j$, even if we could not directly
reject
$r_i \not \prec_{\Gamma} r_j$ in Step 4. The next proposition shows
that the above multiple testing procedure also controls the FWER for all the
apparent and implied orders represented by the Hasse diagram.
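Given a set of ordered pairs that is transitively closed, drawing the Hasse diagram amounts to keeping only the covering relations. A small Python sketch (our own names):

```python
def hasse_edges(rules, order_pairs):
    """Covering relations of a strict partial order.

    order_pairs: set of pairs (i, j) meaning r_i is dominated by
    r_j, assumed transitively closed.  An edge (i, j) is kept only
    if no intermediate k satisfies both (i, k) and (k, j).
    """
    rel = set(order_pairs)
    return sorted(
        (i, j) for (i, j) in rel
        if not any((i, k) in rel and (k, j) in rel for k in rules)
    )
```

For a totally ordered chain of rules this returns exactly the consecutive edges, matching the upward-path convention described above.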
\begin{proposition}
Let $\hat{\mathcal{O}}_{\Gamma} \subset \{(r_i,r_j),\,i\ne j\}$ be
a random set of ordered treatment rules obtained using the
procedure above or any other multiple testing procedure. Let
\[
\hat{\mathcal{O}}_{\Gamma,\text{ext}} = \hat{\mathcal{O}}_{\Gamma}
\bigcup \{(r_i,r_j)\,:\,\exists\, k_1,\dotsc,k_m~\text{such that}~
(r_i,r_{k_1}),(r_{k_1},r_{k_2}),\dotsc,(r_{k_m},r_j)
\in \hat{\mathcal{O}}_{\Gamma}\}
\]
be the extended set implied from the Hasse diagram. Then FWER with
respect to $\hat{\mathcal{O}}_{\Gamma,\text{ext}}$ is the same as FWER with
respect to $\hat{\mathcal{O}}_{\Gamma}$:
\[
\mathbb{P}(r_i \prec_{\Gamma}
r_j,\,\forall (r_i,r_j) \in \hat{\mathcal{O}}_{\Gamma}) =
\mathbb{P}(r_i \prec_{\Gamma}
r_j,\,\forall (r_i,r_j) \in \hat{\mathcal{O}}_{\Gamma,\text{ext}}).
\]
\end{proposition}
\begin{proof}
We show the two events are equivalent. The $\subseteq$ direction is
trivial. For $\supseteq$, notice that any false
  positive in $\hat{\mathcal{O}}_{\Gamma,\text{ext}}$, say $r_i
  \prec_{\Gamma} r_j$, implies that there is at least one false
positive along the path from $r_i$ to $r_j$, that is, there is at
least one false positive among $r_i \prec_{\Gamma} r_{k_1},
r_{k_1}\prec_{\Gamma} r_{k_2},\dotsc,r_{k_m} \prec_{\Gamma}
r_j$, which are all in $\hat{\mathcal{O}}_{\Gamma}$. Thus, any false
positive in $\hat{\mathcal{O}}_{\Gamma,\text{ext}}$ implies that there is
also at least one false positive in $\hat{\mathcal{O}}_{\Gamma}$.
\end{proof}
We illustrate the proposed method using the malaria
dataset. We first use half of the data to
estimate the population parameters in \eqref{eq:sen-value-asymp} for
each pair of treatment rules $(r_i, r_j)$. For every value of
$\Gamma$, we use
\eqref{eq:sen-value-asymp} to compute the asymptotic power for the
test of $H_{ij}:r_i \not \prec_{\Gamma} r_j$ using the other half of
the data. We then order the hypotheses by the estimated power, from
the highest to the lowest. In the
malaria example, when $\Gamma = 1$, the order is
\[
H_{01}, H_{02}, H_{03}, H_{04}, H_{05}, H_{13}, H_{12}, H_{14}, H_{15}, H_{23}, \dotsc.
\]
When $\Gamma = 2$, the order becomes
\[
H_{02}, H_{01}, H_{03}, H_{04}, H_{05}, H_{12}, H_{13}, H_{14}, H_{15}, H_{45}, \dotsc.
\]
Finally, we followed Steps 4 and 5 above and obtained Hasse diagrams
for a variety of $\Gamma$ values, which are shown in \Cref{fig:
hasse malaria hybrid}. As a baseline for comparison, \Cref{fig:
malaria hasse Bonferroni} shows the Hasse diagrams obtained by a
simple Bonferroni
adjustment for all $K(K-1) = 30$ hypotheses using all the
data. Although only half of the data was used for testing, ordering the
hypotheses not only identified all the discoveries that the Bonferroni
procedure identified, but also made one extra discovery when $\Gamma = 1.3$,
$1.5$, $2.5$, $3.5$, and $4$, and two extra discoveries when $\Gamma =
1$, $1.8$, $2$, and $3$.
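The fixed sequence test in Steps 4 and 5 walks down the planned order, testing each hypothesis at the full level $\alpha$ and stopping at the first non-rejection; this controls the FWER at $\alpha$. A minimal sketch (the function name is ours; the p-values would come from the sensitivity analyses on the hold-out half):

```python
def fixed_sequence_test(p_values, alpha=0.1):
    """Fixed sequence testing: test hypotheses in the prespecified
    order, each at full level alpha; stop at the first non-rejection.
    Returns the indices of the rejected hypotheses."""
    rejected = []
    for i, p in enumerate(p_values):
        if p <= alpha:
            rejected.append(i)
        else:
            break
    return rejected
```

For example, with p-values `[0.01, 0.05, 0.2, 0.01]` only the first two hypotheses are rejected: the third fails and the procedure stops, even though the fourth p-value is small.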
\begin{figure}
\caption{Malaria example: Hasse diagrams obtained using sample-splitting and fixed
sequence testing; $\alpha = 0.1$.}
\label{fig: hasse malaria hybrid}
\end{figure}
\begin{figure}
\caption{Malaria example: Hasse diagrams obtained using
Bonferroni's adjustment; $\alpha = 0.1$.}
\label{fig: malaria hasse Bonferroni}
\end{figure}
\subsection{Goal 2: Selecting the best rules}
Next we consider constructing a set that covers all the maximal
rules. Our proposal is based on the following observation: if the
hypothesis $r_i \not \prec_{\Gamma} r_j$ can be rejected, then $r_i$ is
unlikely to be a maximal rule. More precisely, because $r_i \in
\mathcal{R}_{\text{max},\Gamma}$ implies that $r_i \not \prec_{\Gamma}
r_j$ must be true, by the definition of the type I error of a
hypothesis test,
\[
\mathbb{P}(r_i \not \prec_{\Gamma} r_j~\text{is rejected} \,|\,r_i \in
\mathcal{R}_{\text{max},\Gamma}) \le \alpha.
\]
This suggests that we can derive a set of maximal rules from an
estimated set of partial orders:
\begin{equation} \label{eq:est-max}
\hat{\mathcal{R}}_{\text{max}, \Gamma} = \{r_i\,:\,(r_i,r_j) \not \in
\hat{\mathcal{O}}_{\Gamma},~\forall j\}.
\end{equation}
In other words, $\hat{\mathcal{R}}_{\text{max}, \Gamma}$ contains all
the ``leaves'' in the Hasse diagram of
$\hat{\mathcal{O}}_{\Gamma}$ (a leaf in the Hasse diagram is a vertex
that has no edge going upward). For example, in Figure \ref{fig: hasse
malaria hybrid}, the leaves are $\{r_3, r_4, r_5\}$ when $\Gamma = 1.0$ and $\{r_2, r_3, r_4, r_5\}$ when $\Gamma = 1.5$.
Because $\big\{\mathcal{R}_{\text{max},\Gamma} \not \subseteq
\hat{\mathcal{R}}_{\text{max},\Gamma}\big\} = \big\{\exists\,
r_i \in \mathcal{R}_{\text{max},\Gamma}~\text{such that}~(r_i,r_j)\in
\hat{\mathcal{O}}_{\Gamma}~\text{for some}~j\big\}$, the estimated set
of maximal rules satisfies
$\mathbb{P}(\mathcal{R}_{\text{max},\Gamma} \not \subseteq
\hat{\mathcal{R}}_{\text{max},\Gamma}) \le \alpha$ as desired whenever
$\hat{\mathcal{O}}_{\Gamma}$ strongly controls the FWER at level $\alpha$.
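A small sketch of \eqref{eq:est-max}: the estimated maximal set consists of the rules that never appear on the left of a rejected pair (rules are encoded by integer indices; the names are ours):

```python
def maximal_rules(n_rules, rejected_pairs):
    """Estimated maximal set: rules r_i with no rejected pair (i, j),
    i.e. the leaves of the Hasse diagram, never shown to be dominated."""
    dominated = {i for (i, j) in rejected_pairs}
    return sorted(set(range(n_rules)) - dominated)
```

For instance, if the pairs $(r_0, r_1)$, $(r_0, r_2)$, and $(r_1, r_3)$ are rejected among six rules, only $r_0$ and $r_1$ are excluded.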
Equation \eqref{eq:est-max} suggests that only one hypothesis $r_i
\not \prec_{\Gamma} r_j$ needs to be rejected in order to exclude
$r_i$ from $\hat{\mathcal{R}}_{\text{max}, \Gamma}$. This means that,
when the purpose is to select the maximal rules, we do not need to test
$r_i \not \prec_{\Gamma} r_j$ if another hypothesis $r_i \not
\prec_{\Gamma} r_k$ for some $k \ne j$ is already rejected. Therefore, we
can modify the procedure of finding $\hat{\mathcal{O}}_{\Gamma}$ to further
decrease the size of $\hat{\mathcal{R}}_{\text{max}, \Gamma}$ obtained
from \eqref{eq:est-max}. For example, in the five-step procedure
demonstrated in \Cref{subsec: ranking}, we can further replace Step 3 by:
\begin{itemize}
\item[Step 3':] After ordering the hypotheses in Step 3,
remove any hypothesis $H_{ij}:\,r_i \not\prec_{\Gamma} r_j$ if
there is already a hypothesis $H_{ik}$ appearing before $H_{ij}$ for
some $k \ne j$.
\end{itemize}
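Step 3' amounts to keeping, for each index $i$, only the first hypothesis involving $i$ on the left in the ordered sequence; a minimal sketch with hypotheses encoded as $(i, j)$ pairs (names ours):

```python
def prune_sequence(hypotheses):
    """Step 3': drop H_ij if some H_ik (same i, k != j) already
    appears earlier in the ordered sequence of hypotheses."""
    seen_i = set()
    kept = []
    for (i, j) in hypotheses:
        if i not in seen_i:
            kept.append((i, j))
            seen_i.add(i)
    return kept
```

For example, `prune_sequence([(0, 1), (0, 2), (1, 2), (4, 5)])` drops `(0, 2)` because `(0, 1)` appears first.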
Again we use the malaria example to illustrate the selection of best
treatment rules. As an example, when $\Gamma = 2.0$, Step 3' reduced the
original sequence of hypotheses to the following:
\[
H_{02}, H_{12}, H_{45}, H_{35}, H_{53}, H_{21}.
\]
We used the hold-out samples to test the hypotheses sequentially at
level $\alpha = 0.1$ and stopped at $H_{45}$. Therefore, a level
$\alpha = 0.1$ confidence set of the set of maximal elements is
$\{r_2, r_3, r_4, r_5\}$ when $\Gamma = 2$. Table \ref{tbl: malaria
max set result} lists the estimated maximal set
$\hat{\mathcal{R}}_{\text{max}, \Gamma}$ for $\Gamma = 1, 1.3, 1.5, 1.8, 2, \text{and }2.5$.
\begin{table}[h]
\centering
\caption{$\hat{\mathcal{R}}_{\text{max}, \Gamma}$ for different choices of $\Gamma$.}
\label{tbl: malaria max set result}
\begin{tabular}{cc|cc}
\toprule
$\Gamma$ &$\hat{\mathcal{R}}_{\text{max}, \Gamma}$ &$\Gamma$
&$\hat{\mathcal{R}}_{\text{max}, \Gamma}$ \\
\midrule
1.0 & $\{r_3, r_4, r_5\}$ & 2.5 &$\{r_2, r_3, r_4, r_5\}$ \\
1.3 & $\{r_3, r_4, r_5\}$ & 3.0 &$\{r_1, r_2, r_3, r_4, r_5\}$\\
1.5 & $\{r_2, r_3, r_4, r_5\}$ & 3.5 &$\{r_1, r_2, r_3, r_4, r_5\}$ \\
1.8 & $\{r_2, r_3, r_4, r_5\}$ & 4.0 &$\{r_1, r_2, r_3, r_4, r_5\}$ \\
2.0 & $\{r_2, r_3, r_4, r_5\}$ & 6.0 &$\{r_0, r_1, r_2, r_3, r_4, r_5\}$\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Goal 3: Selecting the positive rules}
Finally we consider how to select treatment rules that are better than
a control rule. This can also be transformed to a multiple testing
problem for the hypotheses
$H_{0i}:\,r_0\not\prec_{\Gamma}r_i,~i=1,\dotsc,K$. Let
$\hat{\mathcal{R}}_{\text{pos},\Gamma}$ be the collection of rejected hypotheses
following some multiple testing procedure. By definition of FWER,
$\mathbb{P}(\hat{\mathcal{R}}_{\text{pos},\Gamma} \cap
\mathcal{R}_{\text{nul},\Gamma} \ne \emptyset) \le \alpha$ if the multiple testing
procedure strongly controls the FWER at level $\alpha$. As an example, one
can modify the procedure in \Cref{subsec: ranking} to select the
positive rules by only considering $H_{0i},~i=1,\dotsc,K$ in Step 3.
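One simple FWER-controlling choice is a Bonferroni procedure over the $K$ hypotheses $H_{0i}$; a minimal sketch (the function name is ours, and the p-values from the sensitivity analyses are assumed given):

```python
def positive_rules(p_values, alpha=0.1):
    """Bonferroni selection of positive rules: reject H_{0i} when its
    p-value is at most alpha / K. Returns the indices i of the rules
    declared better than the control rule r_0."""
    K = len(p_values)
    return [i + 1 for i, p in enumerate(p_values) if p <= alpha / K]
```

With p-values `[0.001, 0.5, 0.01]` and $\alpha = 0.1$, the threshold is $0.1/3 \approx 0.033$, so $r_1$ and $r_3$ are selected.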
In practice, a small increase in the value function, though
statistically significant, may not justify a policy change. In this
case, it may be desirable to estimate the positive rules that dominate
the control rule by a margin $\delta$,
$\mathcal{R}_{\text{pos},\Gamma,\delta} = \{r_i:\,r_0
\prec_{\Gamma,\delta} r_i\}$. To obtain an estimate of
$\mathcal{R}_{\text{pos},\Gamma,\delta}$, one can further modify the
procedure in \Cref{subsec: ranking} by replacing the hypothesis $H_{0i}:\,r_0
\not \prec_{\Gamma} r_i$ with the stronger hypothesis $r_0
\not\prec_{\Gamma,\delta} r_i$.
\begin{table}[h]
\centering
\caption{Estimated positive rules $\hat{\mathcal{R}}_{\text{pos},\Gamma, \delta}$ for different choices of $\Gamma$ and $\delta$.
}
\label{tbl: malaria null set result}
\begin{tabular}{l|cccc}
\toprule
&$\Gamma = 1$ &$\Gamma = 1.3$ &$\Gamma = 1.5$&$\Gamma = 1.8$ \\
\midrule
$\delta = 0$ & $\{r_1, r_2, r_3, r_4, r_5\}$ &$\{r_1, r_2, r_3, r_4, r_5\}$& $\{r_1, r_2, r_3, r_4, r_5\}$&$\{r_1, r_2, r_3, r_4, r_5\}$ \\
$\delta = 1$ & $\{r_1, r_2, r_3, r_4, r_5\}$ &$\{r_1, r_2, r_3, r_4, r_5\}$& $\{r_1, r_2, r_3, r_4, r_5\}$&$\{r_1, r_2, r_3, r_4, r_5\}$ \\
$\delta = 2$ & $\{r_1, r_2, r_3, r_4, r_5\}$ &$\{r_1, r_2, r_3, r_4, r_5\}$& $\{r_1, r_2, r_3, r_4, r_5\}$&$\{r_1, r_2, r_3, r_4, r_5\}$ \\
$\delta = 4$ & $\{r_1, r_2, r_3, r_4, r_5\}$ &$\{r_1, r_2, r_3, r_4, r_5\}$& $\{r_1, r_2, r_3, r_4, r_5\}$&$\{r_1, r_2, r_3, r_4, r_5\}$ \\
$\delta = 6$ & $\{r_1, r_2, r_3, r_4, r_5\}$ &$\{r_1, r_2, r_3, r_4, r_5\}$& $\{r_2, r_3, r_4,r_5\}$&$\{r_2\}$ \\
\midrule
&$\Gamma = 2.0$ &$\Gamma = 2.5$&$\Gamma = 3.0$ & \\
\midrule
$\delta = 0$ &$\{r_1, r_2, r_3, r_4, r_5\}$ & $\{r_1, r_2, r_3, r_4, r_5\}$& $\{r_1, r_2, r_3, r_4, r_5\}$ & \\
$\delta = 1$ & $\{r_1, r_2, r_3, r_4, r_5\}$ & $\{r_1, r_2, r_3, r_4, r_5\}$ & $\{r_1, r_2, r_3\}$ & \\
$\delta = 2$ & $\{r_1, r_2, r_3, r_4, r_5\}$ & $\{r_1, r_2, r_3\}$& $\{r_1, r_2\}$ & \\
$\delta = 4$ & $\{r_1, r_2, r_3\}$ & $\emptyset$ & $\emptyset$ & \\
$\delta = 6$ & $\emptyset$ & $\emptyset$ & $\emptyset$ & \\
\midrule
& $\Gamma = 3.5$ &$\Gamma = 4.0$ &$\Gamma = 6.0$ &\\
\midrule
$\delta = 0$ & $\{r_1, r_2, r_3\}$ & $\{r_1, r_2\}$ & $\emptyset$ & \\
$\delta = 1$ & $\{r_1, r_2\}$ & $\{r_1\}$ & $\emptyset$ \\
$\delta = 2$ & $\emptyset$ & $\emptyset$ & $\emptyset$\\
$\delta = 4$ & $\emptyset$ & $\emptyset$ & $\emptyset$\\
$\delta = 6$ & $\emptyset$ & $\emptyset$ & $\emptyset$\\
\bottomrule
\end{tabular}
\end{table}
We construct $\hat{\mathcal{R}}_{\text{pos},\Gamma, \delta}$ with
various choices of $\Gamma$ and $\delta$ for the malaria example. In
this case, $\delta$ measures the decrease in the number of
\emph{Plasmodium falciparum} parasites per milliliter of blood
averaged over the entire study sample. Table \ref{tbl: malaria null
set result} gives a summary of the results. As expected, the estimated
set of positive rules becomes smaller as $\Gamma$ or $\delta$
increases. We observe that, although $r_1$ and $r_2$---assigning
treatment to those under $7$ and $20$---are unlikely to be optimal
rules if there is no unmeasured confounding (\Cref{tbl: malaria max
set result}), they are more robust to unmeasured confounding than the
others, dominating the control rule up to $\Gamma = 4.0$ (\Cref{tbl:
malaria null set result}).
\section{Simulations}
\label{sec:simulations}
In this section we use numerical simulations to study the performance
of three methods for selecting the positive rules
$\mathcal{R}_{\text{pos},\Gamma, \delta}$. Simulation results for
selecting the maximal rules are reported in the Supplementary Materials.
We constructed $5$ or $10$ cohorts of data where the
treatment effect is constant within each cohort but different between
the cohorts. After matching, the treated-minus-control difference in
each cohort was normally distributed with mean
\begin{enumerate}
\item $\mu = (0.5, 0.25, 0.25, 0.15, 0.05)$,
\item $\mu = (0.5, 0.2, -1.0, 0.2, 0.5)$,
\item $\mu = (0.5, 0.5, 0.25, 0.25, 0.25, 0.25, 0.15, 0.15, 0.05,
0.05)$,
\item $\mu = (0.5, 0.3, 0.2, 0.0, -1.0, -1.0, 0.5, 0.5, 1.0, 1.0)$.
\end{enumerate}
The size of each cohort was either $100$ or $250$.
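One simulated dataset can be drawn as follows (a sketch with names of our choosing; we assume unit variance within each cohort, which the text does not restate here):

```python
import numpy as np

def simulate_differences(mu, cohort_size, rng):
    """Generate one simulated dataset: treated-minus-control matched-pair
    differences, normally distributed with a cohort-specific mean.
    Unit variance is an assumption; the text specifies only the means."""
    return [rng.normal(m, 1.0, size=cohort_size) for m in mu]
```

For example, `simulate_differences([0.5, 0.25, 0.25, 0.15, 0.05], 250, np.random.default_rng(0))` returns five arrays of $250$ pair differences, one per cohort.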
Three methods of selecting positive rules were considered:
\begin{enumerate}
\item {\bf Bonferroni:} The full data is used to test the hypotheses
$H_{0i}:\,r_0 \not \prec_{\Gamma} r_i$ and the Bonferroni correction
is used to adjust for multiple comparisons.
\item {\bf Ordering by power:} This is the procedure described in
\Cref{subsec: ranking} using sample splitting and fixed sequence testing.
\item {\bf Ordering by value function:} This is the same as above except that
the hypotheses are ordered by their estimated value at $\Gamma = 1$.
\end{enumerate}
For the second and third methods, we used either a half or a quarter
of the matched pairs (randomly chosen) to order the
hypotheses. Additional simulation results using different split
proportions are reported in the Supplementary Materials. Each
simulation was replicated $1000$ times to estimate the power and the
error rate of the methods. The power is defined as the average size of
the estimated set of positive rules
$\hat{\mathcal{R}}_{\text{pos},\Gamma}$ and the error rate is $1 -
P(\hat{\mathcal{R}}_{\text{pos},\Gamma} \subseteq
\mathcal{R}_{\text{pos}, \Gamma})$ with nominal level
$0.05$.
The results of this simulation study are reported in \Cref{tbl: simu
res mu_1; 5 cohorts,tbl: simu res mu_2; 5 cohorts,tbl: simu res
mu_1; 10 cohorts,tbl: simu res mu_2; 10 cohorts}. The error rate is
controlled under the nominal level in most cases and is usually quite
conservative. The conservativeness is not surprising because Rosenbaum's
sensitivity analysis is a worst-case analysis. In terms of power, the
five methods being compared performed very similarly assuming no
unmeasured confounding ($\Gamma =
1$). Bonferroni is still competitive at $\Gamma = 1.5$, but ordering
the hypotheses by (the estimated) power, though losing some samples for
testing, can be much more powerful at larger values of $\Gamma$.
For instance, in \Cref{tbl: simu res mu_1; 10 cohorts} when $\Gamma =
3.0$, the two power-based methods are more than twice as powerful as
the Bonferroni method. We also observe that using only a small
planning sample ($25\%$) seems to work well in the simulations. This
is not too surprising given our theoretical results: \Cref{thm: asymp
kappa} suggests that only the first two moments of $D$ and $|D|$ are
needed to estimate the sensitivity value asymptotically.
\begin{table}[h]
\centering
\captionsetup{singlelinecheck=off}
\caption[]{Power and error rate (separated by /) of three
methods estimating
$\mathcal{R}_{\text{pos},\Gamma}$. Power is defined as the size of
$\hat{\mathcal{R}}_{\text{pos},\Gamma}$ and
error rate is defined as $1 -
P(\hat{\mathcal{R}}_{\text{pos},\Gamma} \subseteq
\mathcal{R}_{\text{pos}, \Gamma})$. Treatment effect in the
5 cohorts is given by $\mu = (0.5, 0.25, 0.25, 0.15, 0.05)$. The true $\mathcal{R}_{\text{pos},\Gamma}$ is listed below each $\Gamma$ value.
}
\label{tbl: simu res mu_1; 5 cohorts}
\begin{tabular}{cccccccc}
\toprule
\multirow{2}{*}{Cohort size} & \multirow{2}{*}{Method} &$\Gamma = 1.0$ &$\Gamma = 1.8$ &$\Gamma = 2.0$
&$\Gamma = 2.3$&$\Gamma = 3.0$ \\
& & $\{r_1, \dotsc, r_5\}$ & $\{r_1, \dotsc, r_4\}$ & $\{r_1, r_2, r_3\}$
& $\{r_1,r_2\}$ & $\{r_1\}$ \\
\midrule
\multirow{5}{*}{250}
& Bonferroni
& 5.00 / 0.00 & 2.54 / 0.01 & 1.60 / 0.03 & 0.72 / 0.02 & 0.11 /
0.00 \\
&Value (50\%)
& 5.00 / 0.00 & 0.51 / 0.07 & 0.08 / 0.04 & 0.00 / 0.00 & 0.00 / 0.00 \\
&Power (50\%)
& 5.00 / 0.00 & 2.30 / 0.07 & 1.46 / 0.07 & 0.74 / 0.04 & 0.20 / 0.00 \\
&Value (25\%)
& 5.00 / 0.00 & 0.73 / 0.07 & 0.18 / 0.05 & 0.03 / 0.01 & 0.00 / 0.00 \\
&Power (25\%)
& 5.00 / 0.00 & 2.64 / 0.07 & 1.69 / 0.06 & 0.85 / 0.05 & 0.21 / 0.00 \\
\midrule
\multirow{5}{*}{100}
& Bonferroni
& 4.99 / 0.00 & 1.39 / 0.02 & 0.75 / 0.02 & 0.37 / 0.02 & 0.08 / 0.00 \\
& Value (50\%)
& 4.80 / 0.00 & 0.49 / 0.07 & 0.16 / 0.04 & 0.04 / 0.02 & 0.00 / 0.00 \\
& Power (50\%)
& 4.77 / 0.00 & 1.15 / 0.05 & 0.75 / 0.05 & 0.38 / 0.03 & 0.15 / 0.02 \\
& Value (25\%)
& 4.99 / 0.00 & 0.61 / 0.06 & 0.24 / 0.03 & 0.12 / 0.03 & 0.01 / 0.00 \\
& Power (25\%)
& 4.99 / 0.00 & 1.33 / 0.05 & 0.80 / 0.05 & 0.52 / 0.05 & 0.12 / 0.00 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[h]
\centering
\captionsetup{singlelinecheck=off}
\caption[]{Power and error rate (separated by /) of three
methods estimating
$\mathcal{R}_{\text{pos},\Gamma}$. Power is defined as the size of
$\hat{\mathcal{R}}_{\text{pos},\Gamma}$ and
error rate is defined as $1 -
P(\hat{\mathcal{R}}_{\text{pos},\Gamma} \subseteq
\mathcal{R}_{\text{pos}, \Gamma})$. Treatment effect in the
5 cohorts is given by $\mu = (0.5, 0.2, -1.0, 0.2, 0.5)$. The true $\mathcal{R}_{\text{pos},\Gamma}$ is listed below each $\Gamma$ value.
}
\label{tbl: simu res mu_2; 5 cohorts}
\begin{tabular}{cccccccc}
\toprule
\multirow{2}{*}{Cohort size} & \multirow{2}{*}{Method} &$\Gamma = 1.0$ &$\Gamma = 1.5$ &$\Gamma = 2.0$
&$\Gamma = 2.5$&$\Gamma = 3.5$ \\
& & $\{r_1,r_2,r_4, r_5\}$ & $\{r_1, r_2, r_5\}$ &$\{r_1,r_2\}$
& $\{r_1\}$ & $\emptyset$ \\
\midrule
\multirow{5}{*}{250}
& Bonferroni
& 3.14 / 0.00 & 2.36 / 0.00 & 0.78 / 0.00 & 0.45 / 0.01 & 0.01 / 0.01 \\
&Value (50\%)
& 3.21 / 0.00 & 1.78 / 0.00 & 0.06 / 0.02 & 0.00 / 0.00 & 0.00 / 0.00 \\
&Power (50\%)
& 3.21 / 0.00 & 2.03 / 0.00 & 0.75 / 0.02 & 0.47 / 0.02 & 0.07 / 0.07 \\
&Value (25\%)
& 3.25 / 0.00 & 2.21 / 0.00 & 0.07 / 0.03 & 0.00 / 0.00 & 0.00 / 0.00 \\
&Power (25\%)
& 3.25 / 0.00 & 2.35 / 0.00 & 0.83 / 0.02 & 0.54 / 0.02 & 0.07 / 0.07 \\
\midrule
\multirow{5}{*}{100}
& Bonferroni
& 3.02 / 0.00 & 1.19 / 0.00 & 0.37 / 0.00 & 0.21 / 0.01 & 0.02 / 0.02 \\
& Value (50\%)
& 3.03 / 0.00 & 0.71 / 0.00 & 0.04 / 0.02 & 0.00 / 0.00 & 0.00 / 0.00 \\
& Power (50\%)
& 3.02 / 0.00 & 0.93 / 0.00 & 0.43 / 0.01 & 0.29 / 0.04 & 0.08 / 0.08 \\
& Value (25\%)
& 3.10 / 0.00 & 1.12 / 0.00 & 0.04 / 0.02 & 0.00 / 0.00 & 0.00 / 0.00 \\
& Power (25\%)
& 3.11 / 0.00 & 1.32 / 0.00 & 0.42 / 0.01 & 0.31 / 0.03 & 0.07 / 0.07 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[h]
\centering
\captionsetup{singlelinecheck=off}
\caption[]{Power and error rate (separated by /) of three
methods estimating
$\mathcal{R}_{\text{pos},\Gamma}$. Power is defined as the size of
$\hat{\mathcal{R}}_{\text{pos},\Gamma}$ and
error rate is defined as $1 -
P(\hat{\mathcal{R}}_{\text{pos},\Gamma} \subseteq
\mathcal{R}_{\text{pos}, \Gamma})$. Treatment effect in the
10 cohorts is given by $\mu = (0.5, 0.5, 0.25, 0.25, 0.25, 0.25, 0.15, 0.15, 0.05, 0.05)$. The true $\mathcal{R}_{\text{pos},\Gamma}$ is listed below each $\Gamma$ value.
}
\label{tbl: simu res mu_1; 10 cohorts}
\begin{tabular}{cccccccc}
\toprule
\multirow{2}{*}{Cohort size} & \multirow{2}{*}{Method} &$\Gamma = 1.0$ &$\Gamma = 1.8$ &$\Gamma = 2.2$
&$\Gamma = 3.0$&$\Gamma = 3.5$ \\
& & $\{r_1,\dotsc, r_{10}\}$ & $\{r_1,\dotsc, r_9\}$ &$\{r_1, \dotsc, r_6 \}$
& $\{r_1, r_2\}$ & $\{r_1\}$ \\
\midrule
\multirow{5}{*}{250}
& Bonferroni
& 10.00 / 0.00 & 6.80 / 0.01 & 2.41 / 0.00 & 0.20 / 0.00 & 0.02 / 0.01 \\
&Value (50\%)
& 10.00 / 0.00 & 0.88 / 0.06 & 0.00 / 0.00 & 0.00 / 0.00 & 0.00 / 0.00 \\
&Power (50\%)
& 10.00 / 0.00 & 6.30 / 0.05 & 2.34 / 0.03 & 0.44 / 0.02 & 0.11 / 0.05 \\
&Value (25\%)
& 10.00 / 0.00 & 1.12 / 0.01 & 0.00 / 0.00 & 0.00 / 0.00 & 0.00 / 0.00 \\
&Power (25\%)
& 10.00 / 0.00 & 7.06 / 0.06 & 2.73 / 0.02 & 0.42 / 0.02 & 0.10 / 0.05 \\
\midrule
\multirow{5}{*}{100}
& Bonferroni
& 9.99 / 0.00 & 3.97 / 0.01 & 1.14 / 0.00 & 0.12 / 0.00 & 0.03 / 0.02 \\
& Value (50\%)
& 9.95 / 0.00 & 0.76 / 0.05 & 0.03 / 0.01 & 0.00 / 0.00 & 0.00 / 0.00 \\
& Power (50\%)
& 9.91 / 0.00 & 3.18 / 0.04 & 1.17 / 0.02 & 0.28 / 0.03 & 0.10 / 0.05 \\
& Value (25\%)
& 9.95 / 0.00 & 1.06 / 0.04 & 0.06 / 0.01 & 0.00 / 0.00 & 0.00 / 0.00 \\
& Power (25\%)
& 9.99 / 0.00 & 3.93 / 0.04 & 1.39 / 0.02 & 0.25 / 0.02 & 0.09 / 0.05 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[h]
\centering
\captionsetup{singlelinecheck=off}
\caption[]{Power and error rate (separated by /) of three
methods estimating
$\mathcal{R}_{\text{pos},\Gamma}$. Power is defined as the size of
$\hat{\mathcal{R}}_{\text{pos},\Gamma}$ and
error rate is defined as $1 -
P(\hat{\mathcal{R}}_{\text{pos},\Gamma} \subseteq
\mathcal{R}_{\text{pos}, \Gamma})$. Treatment effect in the
10 cohorts is given by $\mu = (0.5, 0.3, 0.2, 0.0, -1.0, -1.0, 0.5, 0.5, 1.0, 1.0)$. The true $\mathcal{R}_{\text{pos},\Gamma}$ is listed below each $\Gamma$ value.
}
\label{tbl: simu res mu_2; 10 cohorts}
\begin{tabular}{cccccccc}
\toprule
\multirow{2}{*}{Cohort size} & \multirow{2}{*}{Method} &$\Gamma = 1.0$ &$\Gamma = 1.5$ &$\Gamma = 2.0$
&$\Gamma = 2.5$&$\Gamma = 3.0$ \\
& & $\{r_1,\dotsc, r_4, r_9, r_{10}\}$ & $\{r_1,\dotsc, r_4\}$ &$\{r_1, r_2, r_3 \}$
& $\{r_1, r_2\}$ & $\{r_1\}$ \\
\midrule
\multirow{5}{*}{250}
& Bonferroni
& 5.98 / 0.00 & 3.51/0.00 & 1.55 / 0.00 & 0.37 / 0.00 & 0.07 / 0.00 \\
&Value (50\%)
& 5.97 / 0.02 & 0.10 / 0.02 & 0.00 / 0.00 & 0.00 / 0.00 & 0.00 / 0.00 \\
&Power (50\%)
& 6.00 / 0.02 & 3.43 / 0.02 & 1.53 / 0.01 & 0.56 / 0.02 & 0.21 / 0.01 \\
&Value (25\%)
& 6.02 / 0.03 & 0.23 / 0.04 & 0.03 / 0.00 & 0.01 / 0.00 & 0.00 / 0.00 \\
&Power (25\%)
& 6.02 / 0.02 & 3.67 / 0.04 & 1.84 / 0.01 & 0.66 / 0.01 & 0.22 / 0.01\\
\midrule
\multirow{5}{*}{100}
& Bonferroni
& 5.60 / 0.01 & 2.42 / 0.00 & 0.68 / 0.00 & 0.19 / 0.00 & 0.06 / 0.00\\
& Value (50\%)
& 5.24 / 0.03 & 0.22 / 0.03 & 0.04 / 0.00 & 0.01 / 0.00 & 0.00 / 0.00\\
& Power (50\%)
& 5.48 / 0.03 & 2.23 / 0.03 & 0.86 / 0.02 & 0.31 / 0.02 & 0.16 / 0.02 \\
& Value (25\%)
& 5.58 / 0.02 & 0.71 / 0.03 & 0.16 / 0.00 & 0.04 / 0.00 & 0.01 / 0.00 \\
& Power (25\%)
& 5.71 / 0.02 & 2.61 / 0.03 & 0.98 / 0.01 & 0.36 / 0.02 & 0.14 / 0.02 \\
\bottomrule
\end{tabular}
\end{table}
\section{Application: The effect of late retirement on senior health}
\label{sec:application}
Finally we apply the proposed method to study the potentially
heterogeneous effect of retirement timing on senior health.
Many empirical studies have focused on the effect
of retirement timing on the short-term and long-term health status of
elderly people
\citep{morrow2001productive,alavinia2008unemployment,borsch2006early}. One
theory known as the ``psychosocial-materialist'' approach suggests
that retiring late may have health benefits because work forms a key part of
the identity of the elderly and provides financial, social and
psychological resources \citep{Calvo2012}. However, the health
benefits of late retirement may differ in different
subpopulations \citep{Dave_2008_late_retirement,
Westerlund_late_retirement_2009}.
We obtained the observational data from the Health and Retirement
Study, an ongoing nationally representative survey of more than 30,000
adults older than 50 and their spouses in the United States. The HRS
is sponsored by the National Institute on Aging; detailed information
on the HRS and its design can be found in
\citet{sonnega_IJE_HRS}. We use the RAND HRS Longitudinal File 2014
(V2), an easy-to-use dataset based on the HRS core data
that consists of a follow-up study of $15,843$ elderly people
\citep{RAND_HRS_data}.
We defined the treatment as late retirement (retirement after
$65$ years old and before $70$ years old) and asked how it impacted
self-reported health status
at the age of $70$ (coded by: 5 - extremely good, 4 - very good,
3 - good, 2 - fair, and 1 - poor). We included individuals who retired
before $70$ and had complete measurements of the following confounders: year of
birth, gender, education (years), race (white or not), occupation (1:
executives and managers, 2: professional specialty, 3: sales and
administration, 4: protection services and armed forces, 5: cleaning,
building, food prep, and personal services, 6: production,
construction, and operation), partnered, annual income, and smoking
status. This left us with $1934$
treated subjects and $4831$ controls. Figure \ref{fig: age_dist}
plots the distribution of retirement age in all samples and in the
treatment group. The distribution of retirement age in the treatment
group is right skewed, with a spike of people retiring shortly after
$65$ years old. In the Supplementary Materials,
we give a detailed account of data
preprocessing and sample inclusion criteria.
\begin{figure}
\caption{Distribution of retirement age}
\label{fig:first}
\label{fig:second}
\label{fig: age_dist}
\end{figure}
Using optimal matching as implemented in the \texttt{optmatch} R
package \citep{optmatch}, we formed $1858$ matched pairs, matching
exactly on year of birth, gender, occupation, and partnership status,
and balancing race, years of education, and smoking status. After
matching, the treated and control groups are well balanced
(\Cref{tbl: balance table retirement}): the standardized differences
of all covariates are less than 0.1. Additionally, the propensity
scores in the treated and control groups have good overlap before and
after matching (see the Supplementary Materials).
\begin{table}[t]
\centering
\caption{Covariate balance after matching.}
\label{tbl: balance table retirement}
\begin{tabular}{lrrrr}
\hline
\hline
& Control & Treated & std.diff & \\
\hline
\hline
Year of birth & 1936.27 & 1936.27 & 0.00 & \\
Female & 0.53 & 0.53 & 0.00 & \\
Non-hispanic white & 0.77 & 0.75 & -0.04 & \\
Education (yrs) & 12.52 & 12.53 & 0.00 & \\
Occupation: cleaning, building, food prep, and personal services & 0.10 & 0.10 & 0.00 & \\
Occupation: executives and managers & 0.16 & 0.16 & 0.00 & \\
Occupation: production, construction, and operation & 0.28 & 0.28 & 0.00 & \\
Occupation: professional specialty & 0.19 & 0.19 & 0.00 & \\
Occupation: protection services and armed forces & 0.02 & 0.02 & 0.00 & \\
Occupation: sales and admin & 0.25 & 0.25 & 0.00 & \\
Partnered & 0.74 & 0.74 & 0.00 & \\
Smoke ever & 0.63 & 0.59 & -0.08 & \\
\hline
\end{tabular}
\end{table}
We considered two potential effect modifiers, namely gender and
occupation. More complicated treatment rules can in principle be
considered within our framework, though having more treatment rules
generally reduces the power of multiple testing. We grouped the $6$
occupations into $2$ broad categories: white-collar jobs (executives
and managers, and professional specialties) and blue-collar jobs
(sales, administration, protection services, personal services,
production, construction, and operation). There were $4$ subgroups
defined by these two potential
effect modifiers: male, white-collar workers ($G_1$), female,
white-collar workers ($G_2$), male, blue-collar workers ($G_3$), and
female, blue-collar workers ($G_4$). Thus, there were a total of $2^4 =
16$ different regimes formed out of these two effect modifiers. We
gave decimal as well as binary codings to the $16$ groups: $r_0$
($r_{0000}$) assigns control to everyone, $r_8\,(r_{1000}), r_4\,
(r_{0100}), r_2\,(r_{0010}), r_1\,(r_{0001})$
assign treatment to one of the $4$ subgroups, and so forth.
We split the matched samples, using $1/4$ of them to plan the tests
performed on the other $3/4$. We then followed the procedures proposed in
\Cref{sec:multiple} to rank and select the treatment rules.
\Cref{fig: hrs hasse 1.2} reports the estimated Hasse diagram at
$\Gamma = 1.2$; additional results can be found in the Supplementary
Materials. The estimated maximal rules for various choices of $\Gamma$ and $\delta$ are reported in
\Cref{tbl: HRS max set result} and the estimated positive rules are
reported in the Supplementary Materials. According to
\Cref{tbl: HRS max set result}, the maximal rules under the no unmeasured confounding assumption are $r_{11} \, (r_{1011})$ which assigns late retirement
to all but female, white-collar workers, $r_{13} \, (r_{1101})$ which
assigns late retirement to all but male, blue-collar workers, and
$r_{15} \, (r_{1111})$ which assigns treatment to everyone. When
$\Gamma$ increases to $1.2$, $r_9 \, (r_{1001})$ which assigns
treatment to male, white-collar workers and female, blue-collar
workers, further enters the set of maximal rules. The estimated
positive rules suggest that $r_9\,(r_{1001})$ and $r_1 \, (r_{0001})$ which only assigns late retirement to
female blue collar workers, though not among the maximal rules at
$\Gamma = 1$ in \Cref{tbl: HRS max set result}, are the most robust to
unmeasured confounding. This suggests that later retirement perhaps
benefit female blue-collar workers more than others.
\begin{figure}
\caption{The effect of late retirement on health: Hasse diagram at $\Gamma = 1.2$}
\label{fig: hrs hasse 1.2}
\end{figure}
\begin{table}[h]
\centering
\caption{The effect of late retirement on health: $\hat{\mathcal{R}}_{\text{max}, \Gamma}$ for different choices of $\Gamma$.}
\label{tbl: HRS max set result}
\begin{tabular}{cc}
\toprule
$\Gamma$ &$\hat{\mathcal{R}}_{\text{max}, \Gamma}$ \\
\midrule
1.0 & $\{r_{11}, r_{13}, r_{15}\}$ \\
1.2 & $\{r_9, r_{11}, r_{13}, r_{15}\}$ \\
1.35 & $\{r_1, r_3, r_5, r_7, r_9, r_{11}, r_{13}, r_{15}\}$ \\
\bottomrule
\end{tabular}
\end{table}
Does $\Gamma = 1.2$ represent a weak or strong
unmeasured confounder? \citet{Rosenbaum2009} proposed to
\emph{amplify} $\Gamma$ to a two-dimensional curve indexed by
$(\Lambda, \Delta)$, where $\Lambda$ describes the relationship
between the unmeasured confounder and the treatment assignment, and
$\Delta$ describes the relationship between the unmeasured confounder and the
outcome. For instance, $\Gamma = 1.2$ corresponds to an unmeasured
confounder associated with a doubling of the odds of late retirement
and a $75\%$ increase in the odds of better health status at the age
of $70$ in each matched pair, i.e., $(\Delta, \Lambda) = (2.0,
1.75)$. \citet{Hsu2013} further proposed to calibrate $(\Lambda,
\Delta)$ values to coefficients of observed
covariates, however, their method only works for binary outcome and
binary treatment. In the Supplementary Materials, we describe a
calibration analysis that handle the ordinal self-reported health
status level in our application that has $5$ levels.
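A quick numerical check of the amplification used above, which maps a single $\Gamma$ to the curve $\Gamma = (\Lambda\Delta + 1)/(\Lambda + \Delta)$ \citep{Rosenbaum2009} (the function name is ours):

```python
def amplify(lam, delta):
    """Amplification of a sensitivity parameter: every (Lambda, Delta)
    pair on the curve Gamma = (Lambda * Delta + 1) / (Lambda + Delta)
    is equivalent to the single one-dimensional Gamma."""
    return (lam * delta + 1) / (lam + delta)
```

Evaluating `amplify(2.0, 1.75)` recovers $\Gamma = 1.2$ as stated in the text; note also that $\Lambda = 1$ (no association with treatment) gives $\Gamma = 1$ regardless of $\Delta$.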
We follow \citet{Hsu2013} and use a plot to
summarize the calibration analysis. In Figure \ref{fig: calibration
plotb}, the
blue curve represents \citet{Rosenbaum2009}'s two-dimensional amplification of
$\Gamma = 1.2$ indexed by $(\Lambda, \Delta)$. The estimated
coefficients of observed covariates are represented by black dots
(after taking an exponential so they are comparable to $(\Lambda,
\Delta)$). We followed the suggestion in \cite{Gelman2008} and
standardized all the non-binary covariates to have mean $0$ and
standard deviation $0.5$, so the coefficient of each binary variable
can be interpreted directly and the coefficients of each
continuous/ordinal variable can be interpreted as the effect of a
2-SD increase in the covariate value, which roughly corresponds to
flipping a binary variable from $0$ to $1$. Note that all
coefficients are under the $\Gamma = 1.2$ curve. In fact,
$\Gamma = 1.2$ roughly corresponds to a moderately strong binary
unobserved covariate whose effects on late retirement and health
status are comparable to a binary covariate $U$ constructed from
smoking and education (red star in \Cref{fig: calibration plotb}).
\begin{figure}
\caption{The effect of late retirement on health: Calibration of the sensitivity
analysis. The blue curve is \citet{Rosenbaum2009}'s two-dimensional
amplification of $\Gamma = 1.2$.}
\label{fig: calibration plotb}
\end{figure}
\section{Discussion}
\label{sec:discussion}
In this paper we proposed a general framework to compare, select, and
rank treatment rules when there is a limited degree of unmeasured
confounding and illustrated the proposed methods by two real data
examples. A central message is that the best treatment rule (with the
largest estimated value) assuming no unmeasured confounding is often
not the most robust to unmeasured confounding. This may have important
policy implications when individualized treatment rules are learned
from observational data.
Because the value function only defines a partial order on the
treatment rules when there is unmeasured confounding, there is a
multitude of statistical questions one can ask about selecting
and ranking the treatment rules. We have considered three questions
that we believe are most relevant to policy research, but there are
many other questions (such as in \citet{gibbons1999selecting}) one
could ask.
In principle, our framework can be used with an arbitrary number of
prespecified individualized treatment rules. However, to maintain good
statistical power in the multiple testing, the number of prespecified
treatment rules should not be too large. This limitation makes our
method most suitable as a confirmatory analysis to complement machine
learning algorithms for individualized treatment rule
discovery. Alternatively, if the number of decision variables is
relatively small for economic or practical reasons, our method is also
reasonably powered for treatment rule discovery.
\section{Appendix: Proof of \Cref{thm: asymp kappa}}
To simplify the notation, suppose $r_1(\bm X_i) < r_2(\bm X_i)$ for
all $1 \le i \le I$. Let \[
D_{i, \Gamma} = D_i - \left(\frac{\Gamma - 1}{\Gamma + 1}\right)|D_i|, \quad \overline{D} = \frac{1}{I}\sum_{i = 1}^{I} D_i, \quad \overline{|D|} = \frac{1}{I}\sum_{i = 1}^{I} |D_i|,
\]
\[
\overline{D}_\Gamma = (1/I)\sum_{i = 1}^{I} D_{i, \Gamma} = \overline{D} - \left(\frac{\Gamma - 1}{\Gamma + 1}\right) \overline{|D|},
\]
and \[
se(\overline{D}_\Gamma)^2 = \frac{1}{I^2} \sum_{i = 1}^{I} (D_{i, \Gamma} - \overline{D}_\Gamma)^2.
\]
When $\mathbb{E}[D_i] > 0$, $\Gamma^\ast(r_1, r_2)$ is obtained by solving the equation below in $\Gamma$:
\begin{equation}
\frac{\overline{D}_\Gamma}{se(\overline{D}_\Gamma)} = \Phi^{-1}(1 - \alpha).
\label{eqn: sens value}
\end{equation}
Square both sides of the equation above and plug in the expressions for $\overline{D}_\Gamma$ and $se(\overline{D}_\Gamma)^2$. Let $\kappa = (\Gamma - 1)/(\Gamma + 1)$ and $z_\alpha = \Phi^{-1}(1 - \alpha)$, and denote
\[
s^2_{D} = \frac{1}{I}\sum_{i = 1}^{I} (D_i - \overline{D})^2,\quad s^2_{|D|} = \frac{1}{I}\sum_{i = 1}^{I} (|D_i| - \overline{|D|})^2,\quad s_{D, |D|} = \frac{1}{I}\sum_{i = 1}^{I} (D_i - \overline{D})(|D_i| - \overline{|D|}).
\]
One can show that the sensitivity value $\Gamma^\ast(r_1, r_2)$ corresponds to the root $\kappa^\ast$ of the following quadratic equation:
\[
\left(\overline{|D|}^2 - \frac{1}{I} s^2_{|D|}z_\alpha^2 \right) \kappa^2 - 2\left(\overline{D}\overline{|D|} - \frac{1}{I} s_{D, |D|} z_\alpha^2\right)\kappa + \overline{D}^2 - \frac{1}{I} s^2_{D}z_\alpha^2 = 0.
\]
Specifically, we have
\begin{equation}
\kappa^\ast = \frac{{\overline{D}\overline{|D|} - \frac{1}{I} s_{D, |D|} z_\alpha^2} \pm \sqrt{\Delta} }{\overline{|D|}^2 - \frac{1}{I} s^2_{|D|}z_\alpha^2},
\label{eqn: quadratic root}
\end{equation}
where $\Delta = (\overline{D}\overline{|D|} - \frac{1}{I} s_{D, |D|} z_\alpha^2)^2 - (\overline{|D|}^2 - \frac{1}{I} s^2_{|D|}z_\alpha^2)(\overline{D}^2 - \frac{1}{I} s^2_{D}z_\alpha^2)$.
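For concreteness, the root selection implicit in the derivation can be checked numerically. The sketch below is our own illustration, not part of the paper: the function name, sample size, and simulated differences $D_i$ are all arbitrary choices. It solves the quadratic for $\kappa^\ast$ and verifies that the selected root satisfies the original deviate equation \eqref{eqn: sens value}.

```python
from math import sqrt
from statistics import NormalDist
import random

def sensitivity_kappa(D, alpha=0.05):
    """Solve the quadratic above for kappa* = (Gamma* - 1)/(Gamma* + 1)."""
    I = len(D)
    z = NormalDist().inv_cdf(1 - alpha)            # z_alpha
    Dbar = sum(D) / I                              # \overline{D}
    Abar = sum(abs(d) for d in D) / I              # \overline{|D|}
    s2D = sum((d - Dbar) ** 2 for d in D) / I      # s^2_D
    s2A = sum((abs(d) - Abar) ** 2 for d in D) / I # s^2_{|D|}
    sDA = sum((d - Dbar) * (abs(d) - Abar) for d in D) / I
    a = Abar ** 2 - s2A * z ** 2 / I               # coefficient of kappa^2
    b = Dbar * Abar - sDA * z ** 2 / I             # half of the linear coefficient
    c = Dbar ** 2 - s2D * z ** 2 / I               # constant term
    disc = sqrt(b * b - a * c)
    roots = [(b + disc) / a, (b - disc) / a]
    # keep the root on the unsquared branch, where Dbar_Gamma = z * se > 0;
    # squaring introduces a spurious root with Dbar_Gamma < 0
    return next(k for k in roots if Dbar - k * Abar > 0)

random.seed(0)
D = [random.gauss(1.0, 1.0) for _ in range(200)]   # toy treated-minus-control differences
kappa = sensitivity_kappa(D)

# verify: at kappa*, the standardized deviate equals z_alpha
I, z = len(D), NormalDist().inv_cdf(0.95)
DG = [d - kappa * abs(d) for d in D]
DGbar = sum(DG) / I
se = sqrt(sum((x - DGbar) ** 2 for x in DG)) / I   # se(Dbar_Gamma)
```

One can check algebraically that exactly one of the two roots gives $\overline{D}_\Gamma > 0$, which is why the selection rule above is well defined.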
Note
\[
\sqrt{\Delta} = z_\alpha \sqrt{\frac{1}{I} \left(s^2_{|D|}\cdot \overline{D}^2 + s^2_{D}\cdot \overline{|D|}^2 -2 \overline{D}\overline{|D|} s_{D, |D|}\right) + \frac{1}{I^2}z_\alpha^2 \left(s_{D, |D|}^2 - s^2_{|D|}\cdot s^2_{D}\right)}.
\]
Let us denote $A = \mathbb{E}[D]\cdot\mathbb{E}[|D|]$, $B = -z_\alpha \sqrt{\sigma^2_{|D|}\cdot \mathbb{E}[D]^2 + \sigma^2_{D}\cdot \mathbb{E}[|D|]^2 -2 \mathbb{E}[D]\mathbb{E}[|D|] \sigma_{D, |D|}}$, $C = \mathbb{E}[|D|]^2$, $R_1 = \sqrt{I}(\overline{D}\overline{|D|} - A)$, $R_2 = \sqrt{I}(\overline{|D|}^2 - C)$.
We have \[
\kappa^\ast = \frac{A + \frac{1}{\sqrt{I}}R_1 + \frac{1}{\sqrt{I}} B}{C + \frac{1}{\sqrt{I}} R_2} + o_p\left(\frac{1}{\sqrt{I}}\right) = \frac{(A + \frac{1}{\sqrt{I}}R_1 + \frac{1}{\sqrt{I}} B)\cdot(1 - \frac{1}{\sqrt{I}}\frac{R_2}{C})}{C} + o_p\left(\frac{1}{\sqrt{I}}\right).
\]
Scaling both sides by $\sqrt{I}$ and rearranging the terms, we have\[
\sqrt{I}\left(\kappa^\ast - \frac{A}{C}\right) = \frac{B}{C} + \frac{1}{C}R_1 - \frac{A}{C^2} R_2 + o_p(1).
\]
Moreover, let $\phi: \mathbb{R}^2 \to \mathbb{R}^2$ be given by $\phi(x, y) = (xy, y^2)$; by the delta method,
\[
\sqrt{I}\begin{pmatrix}
\overline{D} - \mathbb{E}[D] \\
\overline{|D|} - \mathbb{E}[|D|]
\end{pmatrix} \sim N(0, \Sigma),
\quad \text{implies} \quad
\begin{pmatrix}
R_1 \\
R_2
\end{pmatrix} = \sqrt{I}\begin{pmatrix}
\overline{D}\overline{|D|} - \mathbb{E}[D]\cdot\mathbb{E}[|D|] \\
\overline{|D|}^2 - \mathbb{E}[|D|]^2
\end{pmatrix}\sim N(0, \phi'\Sigma (\phi')^T),
\] where $\Sigma = \begin{pmatrix}
\text{Var}[D], & \text{Cov}(D, |D|)\\
\text{Cov}(D, |D|), & \text{Var}[|D|]
\end{pmatrix}$ and $\phi' = \begin{pmatrix}
\mathbb{E}[|D|], &\mathbb{E}[D] \\
0, &2\mathbb{E}[|D|]
\end{pmatrix}$.
Plugging in the expressions for $A$, $B$, and $C$ and computing the variance-covariance matrix of $(1/C, -A/C^2)(R_1, R_2)^T$, we obtain \[
\sqrt{I}\left(\kappa^\ast - \frac{\mathbb{E}[D]}{\mathbb{E}[|D|]}\right) \sim N(z_\alpha\mu, ~\sigma^2)
\]
where
\[
\mu = -\frac{\sqrt{\sigma^2_{|D|}\cdot \mathbb{E}[D]^2 + \sigma^2_{D}\cdot \mathbb{E}[|D|]^2 -2 \mathbb{E}[D]\mathbb{E}[|D|] \sigma_{D, |D|}}}{\mathbb{E}[|D|]^2},
\]
and
\[
\sigma^2 = \frac{\text{Var}[D]\, \mathbb{E}[|D|]^2 - 2\mathbb{E}[D]\mathbb{E}[|D|]\text{Cov}(D, |D|) + \mathbb{E}[D]^2\,\text{Var}[|D|]}{\mathbb{E}[|D|]^4}.
\]
The Supplementary Materials contain additional appendices about matching in
observational studies and further simulation and real data results.
\end{document} |
\begin{document}
\maketitle
\symbolfootnote[0]{{\bf MSC 2000 subject classifications.}
46L54,
15A52}
\symbolfootnote[0]{{\bf Key words.} free probability, random matrices, free convolution}
\begin{abstract}Debbah and Ryan have recently \cite{dr07} proved a result about the limit empirical singular distribution of the sum of two rectangular random matrices whose dimensions tend to infinity. In this paper, we reformulate it in terms of the rectangular free convolution introduced in \cite{bg07} and then give a new, shorter proof of this result under weaker hypotheses: we no longer suppose the probability measure in question to be compactly supported. Finally, we discuss how this result fits into the family of relations between rectangular and square random matrices.\end{abstract}
\section*{Introduction}Free convolutions are operations on probability measures on the real line which make it possible to compute the spectral or singular empirical measures of large random matrices expressed as sums or products of independent random matrices whose spectral measures are known.
More specifically, the operations $\boxplus,\boxtimes$, called respectively {\em free additive and multiplicative convolutions}, are defined in the following way \cite{vdn91}. Let, for each $n$, $M_n$, $N_n$ be $n$ by $n$ independent random hermitian matrices, one of them having a distribution which is invariant under conjugation by the unitary group, whose empirical spectral measures\footnote{The {\em empirical spectral measure} of a matrix is the uniform law on its eigenvalues with multiplicity.} converge, as $n$ tends to infinity, to non random probability measures denoted respectively by $\tau_1, \tau_2$. Then $\tau_1\boxplus\tau_2$ is the limit of the empirical spectral law of $M_n+N_n$ and, in the case where the matrices are positive, $\tau_1\boxtimes\tau_2$ is the limit of the empirical spectral law of $M_nN_n$. In the same way, for any $\lambda\in [0,1]$, the {\em rectangular free convolution} $\boxplus_\lambda$ is defined in \cite{bg07} in the following way. Let $M_{n,p}, N_{n,p}$ be $n$ by $p$ independent random matrices, one of them having a distribution which is invariant under multiplication by any unitary matrix on either side, whose symmetrized\footnote{The {\em symmetrization} of a probability measure $\mu$ on $[0,+\infty)$ is the law of $\varepsilon X$, for $\varepsilon, X$ independent random variables with respective laws $\frac{\delta_1+\delta_{-1}}{2}, \mu$. Dealing with laws on $[0,+\infty)$ or with their symmetrizations is equivalent, but for historical reasons, the rectangular free convolutions have been defined with symmetric laws. In all this paper, we shall often pass from symmetric probability measures to measures on $[0,+\infty)$ and vice versa. Thus, in order to avoid confusion, we shall mainly use the letter $\mu$ for measures on $[0,\infty)$ and $\nu$ for symmetric ones.} empirical singular measures\footnote{The {\em empirical singular measure} of a matrix $M$ with size $n$ by $p$ ($n\leq p$) is the
empirical spectral measure of $|M|:=\sqrt{MM^*}$.} tend, as $n,p$ tend to infinity in such a way that $n/p$ tends to $\lambda$, to non random probability measures $\nu_1,\nu_2$.
Then the symmetrized empirical singular law of $M_{n,p}+N_{n,p}$ tends to $\nu_1\boxplus_\lambda \nu_2$.
These operations can be explicitly computed using either a combinatorial or an analytic machinery (see \cite{vdn91} and \cite{ns06} for $\boxplus, \boxtimes$ and \cite{bg07} for $\boxplus_\lambda$). In the cases $\lambda=0$ or $\lambda=1$, i.e. where the rectangular random matrices we consider are either ``almost flat" or ``almost square", the rectangular free convolution with ratio $\lambda$ can be expressed with the additive free convolution: $\boxplus_1=\boxplus$ and, for all symmetric laws $\nu_1,\nu_2$, $\nu_1\boxplus_0 \nu_2$ is the symmetric law whose push-forward by the map $t\mapsto t^2$ is the free convolution of the push-forwards of $\nu_1$ and $\nu_2$ by the same map. However, though one can find many analogies between the definitions of $\boxplus$ and $\boxplus_\lambda$, and still more analogies have been proved \cite{fbg05.inf.div},
no general relation
between $\boxplus_\lambda$ and $\boxplus$ had been proved until a paper of Debbah and Ryan \cite{dr07} (whose submitted version, more focused on applications than on this result, is \cite{dr08}). It is worth noticing that this result is not due to researchers from the communities of Operator Algebras or Probability Theory,
but to researchers from Information Theory, working on communication networks.
In \cite{dr07}, Debbah and Ryan proved a result about random matrices which can be interpreted as an expression, for certain probability measures $\nu_1,\nu_2$, of their rectangular convolution $\nu_1\boxplus_\lambda\nu_2$ in terms of $\boxplus$ and of another convolution, called the {\em free multiplicative deconvolution} and denoted by $\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}$. In this note, we present this result with a new approach and we give a new and shorter proof, under more general hypotheses. This generalization of the hypotheses answers a question asked by Debbah and Ryan in the last section of their paper \cite{dr07}. The question of a more general relation between square and rectangular free convolutions is considered in a final ``perspectives" section.
{\bf Acknowledgments:} The author would like to thank Raj Rao for bringing the paper \cite{dr07} to his attention and M\'erouane Debbah for his encouragements and many useful discussions.
\section{The result of Debbah and Ryan}Let us define the operation $\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}$ on certain pairs of probability measures on $[0,+\infty)$ in the following way. For $\mu,\mu_2$ probability measures on $[0,+\infty)$, if there is a probability measure $\mu_1$ on $[0,+\infty)$ such that $\mu=\mu_1\boxtimes\mu_2$, then $\mu_1$ is called the {\em free multiplicative deconvolution} of $\mu$ by $\mu_2$ and is denoted by $\mu_1=\mu\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_2$.
Let us define, for $\lambda\in (0,1]$, $\mu_\lambda$ to be the law of $\lambda X$ for $X$ a random variable distributed according to the Marchenko--Pastur law with parameter $1/\lambda$, i.e. the law with support $[(1-\sqrt{\lambda})^2, (1+\sqrt{\lambda})^2]$ and density $$x\mapsto \frac{\sqrt{4\lambda -(x-1-\lambda)^2}}{2\pi\lambda x}.$$
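As a quick numerical sanity check (our own sketch, not part of the paper; the value $\lambda=0.5$ is an arbitrary choice), one can verify by quadrature that the displayed density is a probability density and that $\mu_\lambda$ has mean $1$:

```python
from math import sqrt, pi

lam = 0.5                                        # arbitrary ratio in (0, 1]
a, b = (1 - sqrt(lam)) ** 2, (1 + sqrt(lam)) ** 2

def density(x):
    # density of mu_lambda on [(1 - sqrt(lam))^2, (1 + sqrt(lam))^2]
    return sqrt(max(4 * lam - (x - 1 - lam) ** 2, 0.0)) / (2 * pi * lam * x)

# midpoint rule on a fine grid (the density vanishes at both endpoints)
N = 200_000
h = (b - a) / N
xs = [a + (i + 0.5) * h for i in range(N)]
mass = sum(density(x) for x in xs) * h           # total mass, should be ~1
mean = sum(x * density(x) for x in xs) * h       # first moment, should be ~1
```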
Theorem 1 of \cite{dr07} states the following result, where $\lambda\in (0,1]$ is fixed, $(p_n)$ is a sequence of positive integers such that $n/p_n$ tends to $\lambda$ as $n$ tends to infinity, and $\delta_1$ denotes the Dirac mass at $1$.
\begin{Th}[Debbah and Ryan]\label{1.07.08.3}Let, for each $n$, $A_n$, $G_n$ be independent $n$ by $p_n$ random matrices such that the empirical spectral law of $A_nA_n^*$ converges almost surely weakly, as $n$ tends to infinity, to a compactly supported probability measure $\mu_A$ and such that the entries of $G_n$ are independent $N(0, \frac{1}{p_n})$ random variables. Then the empirical spectral law of $(A_n+G_n)(A_n+G_n)^*$ converges almost surely to a compactly supported probability measure $\rho$ which, in the case where $\mu_A\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda$ exists, satisfies the relation \begin{equation}\label{1.07.08.2}\rho=[(\mu_A\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda)\boxplus\delta_1]\boxtimes\mu_\lambda.\end{equation}
\end{Th}
\begin{rmq}\label{1.7.8.16h}{\rm Note that in the case where $\mu_A\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda$ doesn't exist, the relation \eqref{1.07.08.2} stays true in the formal sense. More specifically, for $\mu_A$ a probability measure such that $\mu_A\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda$ exists, the moments of $\rho=[(\mu_A\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda)\boxplus\delta_1]\boxtimes\mu_\lambda$ have a polynomial expression in the moments of $\mu_A$ (this can easily be seen by the theory of free cumulants \cite{ns06}). It happens that this relation between the moments of the limit spectral law $\rho$ of $(A_n+G_n)(A_n+G_n)^*$ and the ones of $\mu_A$ stays true even when $\mu_A\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda$ doesn't exist. It follows from the original proof of Theorem \ref{1.07.08.3} and it will also follow from our proof (see Remark \ref{1.7.8.1}).}\end{rmq}
Note that by the very definition of the rectangular free convolution $\boxplus_\lambda$ with ratio $\lambda$ recalled in the introduction, and since the limit empirical spectral law of $GG^*$ is $\mu_\lambda$ (a well known fact; see, e.g., Theorem 4.1.9 of \cite{hiai}), this result can be stated as follows: for every compactly supported probability measure $\mu$ on $[0,+\infty)$ such that $\mu\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda$ exists, \begin{equation}\label{30.06.08.1}(\sqrt{\mu}\boxplus_\lambda \sqrt{\mu_\lambda})^2=[ (\mu\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda)\boxplus\delta_1]\boxtimes\mu_\lambda,\end{equation} where for any probability measure $\rho$ on $[0,+\infty)$, $\sqrt{\rho}$ denotes the symmetrization of the push-forward of $\rho$ by the square root function, and for any symmetric probability measure $\nu$ on the real line, $\nu^2$ denotes the push-forward of $\nu$ by the function $t\mapsto t^2$. This formula allows one to express the operator $\boxplus_\lambda\sqrt{\mu_\lambda}$ on the set of symmetric compactly supported probability measures on the real line in terms of $\boxplus$ and $\boxtimes$: for every symmetric probability measure $\nu$ on the real line, \begin{equation}\label{30.06.08.2}\nu\boxplus_\lambda \sqrt{\mu_\lambda}=\sqrt{ [(\nu^2\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda)\boxplus\delta_1]\boxtimes\mu_\lambda}.\end{equation}
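The well known fact invoked above is easy to illustrate by simulation. The sketch below is our own illustration (matrix sizes and the random seed are arbitrary choices): it compares the first two moments of the empirical spectral law of $GG^*$ with those of $\mu_\lambda$, which are $1$ and $1+\lambda$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 400, 800                                      # ratio n/p = lambda = 0.5
lam = n / p
G = rng.normal(0.0, (1.0 / p) ** 0.5, size=(n, p))   # i.i.d. N(0, 1/p) entries
eigs = np.linalg.eigvalsh(G @ G.T)                   # spectrum of G G^*
m1, m2 = eigs.mean(), (eigs ** 2).mean()
# mu_lambda has first moment 1 and second moment 1 + lambda
```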
\section{A proof of the generalized theorem of Debbah and Ryan}
$\lambda\in (0,1]$ is still fixed.
In this section, we give a new, shorter proof of the theorem of Debbah and Ryan under weaker hypotheses: we prove \eqref{30.06.08.2} without supposing $\nu$ to be compactly supported. The proof is based on the machinery of the rectangular free convolution and of the rectangular $R$-transform.
\subsection{Some analytic transforms}
Let us first recall a few facts about the analytic approach to $\boxtimes$ and $\boxplus_\lambda$. Let us define, for $\rho$ a
probability measure on $[0,\infty)$, $$M_\rho(z):=\int_{t\in \mathbb{R}}\frac{zt}{1-zt}\mathrm{d} \rho(t),\quad S_\rho(z)=\frac{1+z}{z}M_\rho^{\langle -1\rangle}(z),$$where, as in the rest of the text, the exponent $^{\langle -1\rangle}$ stands for the inversion of analytic functions on $\mathbb{C}\backslash [0,+\infty)$ with respect to the composition operation $\circ$, in a neighborhood of zero. By \cite{vdn91}, for every pair $\mu_1, \mu_2$ of probability measures on $[0,+\infty)$, $\mu_1\boxtimes\mu_2$ is characterized by the fact that $S_{\mu_1\boxtimes\mu_2}=S_{\mu_1}S_{\mu_2}$.
In the same way, the rectangular free convolution with ratio $\lambda$ can be computed with an analytic transform of probability measures. Let $\nu$ be a symmetric probability measure on the real line. Let us define
$H_\nu(z)= z(\lambda M_{\nu^2}(z)+1)(M_{\nu^2}(z)+1)$. Then the {\em rectangular $R$-transform with ratio $\lambda$} of $\nu$ is defined to be $$C_\nu(z)=U\left( \frac{z}{H_\nu^{\langle -1\rangle}(z)}-1\right), $$where $U(z)= \frac{-\lambda-1+\left[(\lambda+1)^2+4\lambda z\right]^{1/2}}{2\lambda}$. By Theorem 3.12 of \cite{bg07}, for every pair $\nu_1, \nu_2$ of symmetric probability measures, $\nu_1\boxplus_\lambda\nu_2$ is characterized by the fact that $C_{\nu_1\boxplus_\lambda\nu_2}=C_{\nu_1}+C_{\nu_2}$.
\subsection{Some preliminary computations}
Note that by \cite{ns06}, the $S$- and $R$-transforms of a probability measure $\mu$ on $[0,+\infty)$ are linked by the relation $S_\mu(z)=\frac{1}{z}R_\mu^{\langle -1\rangle}(z)$; thus, since the free cumulants of the Marchenko--Pastur law with parameter $1/\lambda$ are all equal to $1/\lambda$ (see \cite{ns06}), we have $S_{\mu_\lambda}(z)=\frac{1}{1+\lambda z}$. Moreover, since by \cite{ns06} again, $S_\mu(z)= \frac{1+z}{z}M_\mu^{\langle -1\rangle}(z),$ for any law $\sigma$ on $[0,+\infty)$, \begin{equation}\label{1.7.8.3}M_{\sigma\boxtimes \mu_\lambda} ^{\langle -1\rangle}=\frac{z}{z+1}\frac{S_{\sigma}}{1+\lambda z}=\frac{M_\sigma^{\langle -1\rangle}}{1+\lambda z}\;\quad \textrm{ and }\;\quad M_{\sigma\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture} \mu_\lambda}^{\langle -1\rangle}=(1+\lambda z)M_\sigma^{\langle -1\rangle}.\end{equation} At last, since $\boxplus\delta_1=*\delta_1$, which implies that $M_{\sigma\boxplus\delta_1}(z)=[(z+1)M_\sigma(z)+z]\circ \frac{z}{1-z}$, for any symmetric law $\nu$, we have \begin{equation}\label{1.07.08.1}M_{((\nu^2\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda)\boxplus\delta_1)\boxtimes\mu_\lambda}^{\langle -1\rangle}=\frac{1}{1+\lambda z}\times\frac{z}{1+z}\circ\left[(z+1)\left((1+\lambda z)M_{\nu^2}^{\langle -1\rangle}\right)^{\langle -1\rangle}+z\right]^{\langle -1\rangle} .\end{equation}
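The first identity in \eqref{1.7.8.3} can be probed numerically. Taking $\sigma=\delta_1$ (so that $\sigma\boxtimes\mu_\lambda=\mu_\lambda$ and $M_\sigma^{\langle -1\rangle}(z)=z/(1+z)$), it predicts $M_{\mu_\lambda}^{\langle -1\rangle}(z)=\frac{z}{(1+z)(1+\lambda z)}$. The sketch below is our own check, with $\lambda$ and the test point chosen arbitrarily: it evaluates $M_{\mu_\lambda}$ by quadrature at the predicted preimage and recovers the starting point.

```python
from math import sqrt, pi

lam = 0.5
a, b = (1 - sqrt(lam)) ** 2, (1 + sqrt(lam)) ** 2

def density(x):
    # density of mu_lambda
    return sqrt(max(4 * lam - (x - 1 - lam) ** 2, 0.0)) / (2 * pi * lam * x)

def M(z, N=200_000):
    # M_{mu_lambda}(z) = int z t / (1 - z t) d mu_lambda(t), midpoint rule
    h = (b - a) / N
    total = 0.0
    for i in range(N):
        t = a + (i + 0.5) * h
        total += z * t / (1 - z * t) * density(t)
    return total * h

w = 0.1                                    # arbitrary small test point
z_star = w / ((1 + w) * (1 + lam * w))     # predicted M^{<-1>}(w)
err = abs(M(z_star) - w)                   # should be ~0
```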
\subsection{Proof of the result}\label{2.7.8.3}
So let us consider a symmetric probability measure $\nu$ such that $\nu^2\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture} \mu_\lambda$ exists and let us prove \eqref{30.06.08.2}. As proved in the proof of Theorem 3.8 of \cite{bg07}, for any symmetric probability measure $\tau$, $H_\tau$ characterizes $\tau$; thus it suffices to prove that $H_{ \nu\boxplus_\lambda \sqrt{\mu_\lambda}}=H_{m}$ for $m=\sqrt{ [(\nu^2\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda)\boxplus\delta_1]\boxtimes\mu_\lambda}.$ By Theorem 4.3 of \cite{fbg05.inf.div} and the paragraph preceding it,
$C_{\sqrt{\mu_\lambda}}(z)=z$.
Thus Lemma 4.1 of \cite{bba07} applies here, and it states that in a neighborhood of zero in $\mathbb{C}\backslash [0,+\infty)$, $$H_{\nu\boxplus_\lambda \sqrt{\mu_\lambda}}=H_\nu\circ \left(\frac{H_\nu}{T(H_\nu+M_{\nu^2})}\right)^{\langle-1\rangle},$$where $T(z)=(\lambda z+1)(z+1)$. So it suffices to prove that in such a neighborhood of zero, $$H_m=H_\nu\circ \left(\frac{H_\nu}{T(H_\nu+M_{\nu^2})}\right)^{\langle -1\rangle},\quad\textrm{ i.e. }\quad H_m\circ \frac{H_\nu}{T(H_\nu+M_{\nu^2})}=H_\nu.$$Using the fact that for any symmetric law $\tau$, $H_\tau(z)=zT(M_{\tau^2}(z)),$ it amounts to proving that $$\frac{H_\nu}{T(H_\nu+M_{\nu^2})}\times T\circ M_{m^2}\circ \frac{H_\nu}{T(H_\nu+M_{\nu^2})} =H_{\nu}, $$i.e.$$
T\circ M_{m^2}\circ \frac{H_\nu}{T(H_\nu+M_{\nu^2})}=T(H_\nu(z)+M_{\nu^2}(z)),$$which is implied, simplifying by $T$ and using again $H_\tau(z)=zT(M_{\tau^2}(z))$, by $$ M_{m^2}\circ \frac{zT(M_{\nu^2}(z))}{T[zT(M_{\nu^2}(z))+M_{\nu^2}(z)]}=zT[M_{\nu^2}(z)]+M_{\nu^2}(z).$$ It is implied, composing with $M_{\nu^2}^{\langle -1\rangle}$ on the right and with $M_{m^2}^{\langle -1\rangle}$ on the left, by $$ M_{\nu^2}^{\langle -1\rangle}\times T=(T\times M_{m^2}^{\langle -1\rangle})\circ (M_{\nu^2}^{\langle -1\rangle}(z)T(z)+z).$$Using the expression of $M_{m^2}^{\langle -1\rangle}$ given by \eqref{1.07.08.1}, it amounts to proving that $$ M_{\nu^2}^{\langle -1\rangle}(z)T(z)=$$ $$
\left((z+1)\times\frac{z}{1+z}\circ\left[(z+1)\left((1+\lambda z)M_{\nu^2}^{\langle -1\rangle}\right)^{\langle -1\rangle}+z\right]^{\langle -1\rangle}\right)\circ (M_{\nu^2}^{\langle -1\rangle}(z)T(z)+z),$$i.e. that
$$ \frac{M_{\nu^2}^{\langle -1\rangle}(z)T(z)}{M_{\nu^2}^{\langle -1\rangle}(z)T(z)+z+1}=\frac{z}{1+z}\circ\left[(z+1)\left((1+\lambda z)M_{\nu^2}^{\langle -1\rangle} \right)^{\langle -1\rangle}+z\right]^{\langle -1\rangle}\circ (M_{\nu^2}^{\langle -1\rangle}(z)T(z)+z)
.$$Now, composing with $\left[(z+1)\left((1+\lambda z)M_{\nu^2}^{\langle -1\rangle} \right)^{\langle -1\rangle}+z\right]\circ\frac{z}{1-z}$ on the left, this gives
$$ \left[(z+1)\left((1+\lambda z)M_{\nu^2}^{\langle -1\rangle} \right)^{\langle -1\rangle}+z\right]\circ[(1+\lambda z)M_{\nu^2}^{\langle -1\rangle}(z)]=M_{\nu^2}^{\langle -1\rangle}(z)T(z)+z,
$$i.e.
$$ [M_{\nu^2}^{\langle -1\rangle}(z)(\lambda z+1)+1]\,z+M_{\nu^2}^{\langle -1\rangle}(z)(\lambda z+1)=M_{\nu^2}^{\langle -1\rangle}(z)(\lambda z+1)(z+1)+z,
$$which is easily verified.
\subsection{Remarks on this result}
\begin{rmq}\label{1.7.8.1}{\rm Note that we did not use the fact that $\nu^2\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture} \mu_\lambda$ exists to prove that $H_{ \nu\boxplus_\lambda \sqrt{\mu_\lambda}}=H_{m}$. It means that if $\nu^2\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture} \mu_\lambda$ doesn't exist, there no longer exists a probability measure $\mu$ on $[0,+\infty)$ such that $M_\mu^{\langle -1\rangle}=(1+\lambda z)M_{\nu^2}^{\langle -1\rangle}$ as in \eqref{1.7.8.3}, but the polynomial expression of the moments of $ \nu\boxplus_\lambda \sqrt{\mu_\lambda}$ (i.e. of the limit symmetrized singular law of the matrix $A_n+G_n$ of Theorem \ref{1.07.08.3}) in the moments of $\nu$ following from $H_{ \nu\boxplus_\lambda \sqrt{\mu_\lambda}}=H_{m}$ for $m=\sqrt{ [(\nu^2\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda)\boxplus\delta_1]\boxtimes\mu_\lambda}$ stays true (see Remark \ref{1.7.8.16h}).}\end{rmq}
\begin{rmq}[Case $\lambda=0$]\label{1.7.8.2}{\rm A continuous way to define $\mu_\lambda$ for any $\lambda\in [0,1]$ is to define it to be the probability measure with free cumulants $k_n(\mu_\lambda)=\lambda^{n-1}$ for all $n\geq 1$ (see \cite{ns06}). This definition gives $\mu_0=\delta_1$. Note that by definition of the rectangular free convolution with null ratio $\boxplus_0$ (recalled in the introduction), the relation \eqref{30.06.08.2} stays true for $\lambda=0$.}\end{rmq}
\begin{rmq}\label{1.7.8.17h39}{\rm Note that the original proof of Debbah and Ryan in \cite{dr07} is based on the combinatorial approach to freeness, via the free cumulants of Nica and Speicher \cite{ns06}, whereas our proof is based on the analytic machinery of the rectangular $R$-transform. It sometimes happens that combinatorial proofs can be translated into analytic terms by considering the generating functions of the combinatorial objects in question. Notice however that this is not what we did here. Indeed, the rectangular $R$-transform machinery is actually related to other cumulants than the ones of Nica and Speicher: the so-called rectangular cumulants, defined in \cite{bg07}.}\end{rmq}
\subsection{Remarks about the free deconvolution by $\mu_\lambda$}
The following corollary is part of the answer given in the present paper to the question asked in the last section of the paper of Debbah and Ryan \cite{dr07}. Let us endow the set of probability measures on the real line with the weak topology \cite{billingsley}.
\begin{cor}The functional $\mu\mapsto [(\mu\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda)\boxplus\delta_1]\boxtimes\mu_\lambda$, defined on the set of probability measures $\mu$ on $[0,+\infty)$ such that $\mu\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda$ exists, extends continuously to the whole set of probability measures on $[0,+\infty)$.\end{cor}
\begin{pr} We just proved, in Section \ref{2.7.8.3}, that the formula $$\nu\boxplus_\lambda \sqrt{\mu_\lambda}=\sqrt{ [(\nu^2\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda)\boxplus\delta_1]\boxtimes\mu_\lambda}$$is true for any symmetric probability measure $\nu$ on the real line.
Since the operation $\boxplus_\lambda$ is continuous on the set of symmetric probability measures on the real line (Theorem 3.12 of \cite{bg07}) and the bijective correspondence between symmetric laws on the real line and laws on $[0,+\infty)$, which maps any symmetric law to its push-forward by the map $t\mapsto t^2$, is continuous with continuous inverse, the corollary follows.\end{pr}
The functional $\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda$, whose domain is contained in the set of probability measures on $[0,+\infty)$, surprisingly plays a key role here. It seems natural to try to study its domain. The first step is to notice that this domain is the whole set of probability measures on $[0,+\infty)$ if and only if $\delta_1$ is in this domain, and that in this case, the functional $\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda$ is simply equal to $\boxtimes(\delta_1\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda)$.
However, the following proposition states that despite the previous corollary, the domain of the functional $\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda$ is not the whole set of probability measures on $[0,+\infty)$.
\begin{propo}The Dirac mass $\delta_1$ at $1$ is not in the domain of the functional $\begin{picture}(9,5) \put(2,0){\framebox(5,5){$\smallsetminus$}} \end{picture}\mu_\lambda$.\end{propo}
\begin{pr} Suppose that there is a probability measure $\tau$ on $[0,+\infty)$ such that $\delta_1=\tau\boxtimes\mu_\lambda$.
Such a law $\tau$ has to satisfy $S_\tau(z)=1+\lambda z$. It implies that for $z$ small enough, $M_\tau(z)=\frac{z-1+[(1-z)^2+4\lambda z]^{1/2}}{2\lambda}$. Such a function doesn't admit any analytic continuation to $\mathbb{C}\backslash[0,+\infty)$, thus no such probability measure $\tau$ exists.
\end{pr}
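The explicit formula for $M_\tau$ in the proof can be checked directly: it is the branch, vanishing at $z=0$, of the quadratic $\lambda m^2+(1-z)m-z=0$, and it inverts $w\mapsto \frac{w(1+\lambda w)}{1+w}$, which equals $M_\tau^{\langle -1\rangle}(w)=\frac{w}{1+w}S_\tau(w)$ when $S_\tau(w)=1+\lambda w$. A small numerical check (our own sketch; $\lambda$ and the test points are arbitrary):

```python
from math import sqrt

lam = 0.5                                # arbitrary lambda in (0, 1)

def M_tau(z):
    # the branch of  lam*m^2 + (1 - z)*m - z = 0  vanishing at z = 0
    return (z - 1 + sqrt((1 - z) ** 2 + 4 * lam * z)) / (2 * lam)

def M_inv(w):
    # w * S_tau(w) / (1 + w)  with  S_tau(w) = 1 + lam * w
    return w * (1 + lam * w) / (1 + w)

# M_inv o M_tau should be the identity near zero
err = max(abs(M_inv(M_tau(z)) - z) for z in (0.05, 0.1, 0.2))
```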
\section{Relations between square and rectangular matrices/convolutions}
The theorem of Debbah and Ryan gives an expression of the empirical singular
measure of the sum of two rectangular random matrices in terms of operations related to hermitian square random matrices. Two other results relate empirical singular measures of (non hermitian) square or rectangular random matrices to the operations devoted to hermitian random matrices.
The first one can be summarized as $\boxplus_1=\boxplus$. Concretely, it states that, denoting by $\operatorname{ESM}(X)$ the symmetrization of the empirical singular
measure of any rectangular matrix $X$, for any pair $M,N$ of large $n$ by $p$ random matrices, one of them being invariant in law under the left and right actions of the unitary groups, for $n/p\simeq 1$, \begin{equation}\label{2.7.8.16h008}\operatorname{ESM}(M+N)\simeq \operatorname{ESM}(M)\boxplus \operatorname{ESM}(N). \end{equation} Note that the matrices $M,N$ are not hermitian, which makes \eqref{2.7.8.16h008} rather surprising (since $\boxplus$ was defined with hermitian random matrices). It means that for $\varepsilon, \varepsilon_1,\varepsilon_2$ independent random variables with law $\frac{\delta_{-1}+\delta_1}{2}$, independent of $M$ and $N$, we have \begin{equation}\label{1.7.8.20h14}\operatorname{Spectrum}(\varepsilon |M+N|){\simeq}\operatorname{Spectrum}(\varepsilon_1 |M|+\varepsilon_2|N|).\end{equation}
The second one can be summarized as follows: for any pair $\nu,\tau$ of symmetric probability measures on the real line, $(\nu\boxplus_0\tau)^2=\nu^2\boxplus\tau^2$, where $\nu^2$ denotes the push-forward of $\nu$ by the square function.
Concretely, it states that for any pair $M,N$ of $n$ by $p$ unitarily invariant random matrices, for $1\ll n\ll p$,
\begin{equation}\label{1.7.8.20h15}\operatorname{Spectrum}[(M+N)(M+N)^*] {\simeq}\operatorname{Spectrum}( MM^*+NN^*).\end{equation}
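Relation \eqref{1.7.8.20h15} is easy to probe numerically. The sketch below is my own illustration (not part of the original text): both matrices are taken Gaussian, hence unitarily invariant, with $n=40$, $p=8000$ as sample values of the regime $1\ll n\ll p$, and the two spectra are compared through their first two empirical moments.

```python
import numpy as np

# Monte-Carlo sanity check of Spectrum[(M+N)(M+N)^*] ~ Spectrum(MM^* + NN^*)
# in the regime 1 << n << p, with Gaussian (hence unitarily invariant) matrices.
rng = np.random.default_rng(0)
n, p = 40, 8000
M = rng.standard_normal((n, p)) / np.sqrt(p)
N = rng.standard_normal((n, p)) / np.sqrt(p)

A = (M + N) @ (M + N).T          # (M+N)(M+N)^*
B = M @ M.T + N @ N.T            # MM^* + NN^*

eigA = np.sort(np.linalg.eigvalsh(A))
eigB = np.sort(np.linalg.eigvalsh(B))

# compare the first two empirical spectral moments
m1 = abs(eigA.mean() - eigB.mean())
m2 = abs((eigA**2).mean() - (eigB**2).mean())
```

Both spectra concentrate around $2$, and the moment discrepancies shrink as $n/p\to 0$.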
The advantage of the result of Debbah and Ryan over these two results is that it works for any value of the ratio $\lambda$, but its disadvantage is that it only works when one of the convolved laws is $\mu_\lambda$, i.e. one of the matrices considered is Gaussian. In fact this sharp restriction can be understood through the fact that, among rectangular random matrices which are invariant in law under multiplication by unitary matrices, the Gaussian ones are the only ones which can be extended to square matrices which are also invariant in law under multiplication by unitary matrices.
It would be interesting to understand better how relations like \eqref{1.7.8.20h14}, \eqref{1.7.8.20h15} or the one of Debbah and Ryan's theorem work and can be generalized. Unfortunately, until now, even though nice proofs (see \cite{bg07} for \eqref{1.7.8.20h14} and \eqref{1.7.8.20h15}, or Theorem 4.3.11 of \cite{hiai} and Proposition 3.5 of \cite{haag2} for the $n=p$ case of \eqref{1.7.8.20h14}) relying on free probability have been given for these results relating the rectangular convolutions and the ``square non-hermitian convolutions'' to the ``square hermitian convolution'' (i.e. $\boxplus$), no ``concrete'' explanation has been given, nor any generalization (to arbitrary $\lambda$, or to arbitrary pairs of probability measures).
Such a generalization could consist in exhibiting a functional $f_\lambda$ on the set of symmetric probability measures such that for all symmetric probability measures $\nu,\tau$, the measure $\nu\boxplus_{\lambda}\tau$ is the only symmetric probability measure satisfying $$f_\lambda(\nu\boxplus_{\lambda}\tau)=f_\lambda(\nu)\boxplus f_\lambda(\tau).$$
Note that in the case $\lambda=1$, the functional $f_\lambda(\nu)=\nu$ works, and in the case $\lambda=0$, the functional which maps a measure to its push-forward by the square function works.
\begin{rmq}{\rm Let $(\mathcal{A},\varphi)$ be a $*$-non commutative probability space and let $p_1, p_2$ be two self-adjoint projectors of $\mathcal{A}$ with $p_1+p_2=1$ and $\lambda=\varphi(p_1)/\varphi(p_2)$.
As explained in Proposition-Definition 2.1 of \cite{bg07}, $\boxplus_{\lambda}$ can be defined by the fact that for any pair $a,b\in p_1\mathcal{A} p_2$ free with amalgamation over $\operatorname{Vect}(p_1,p_2)$, the symmetrized distribution of $|a+b|$ in $(p_1\mathcal{A} p_1,\frac{1}{\varphi(p_1)}\varphi_{|p_1\mathcal{A} p_1})$ is the rectangular free convolution with ratio $\lambda$ of the symmetrized distributions of $|a|$ and $|b|$ in the same space.
Moreover, it is easy to see that for all $a\in p_1\mathcal{A} p_2$, the symmetrized distribution $\tau$ of $|a|$ in $(p_1\mathcal{A} p_1,\frac{1}{\varphi(p_1)}\varphi_{|p_1\mathcal{A} p_1})$ is linked to the distribution $\nu$ of $a+a^*$
in $(\mathcal{A},\varphi)$ by the relation $\nu= \frac{2\lambda}{1+\lambda}\tau+\frac{1-\lambda}{1+\lambda}\delta_0.$
When $\lambda=1$, the equation $\boxplus=\boxplus_{\lambda}$ can be summarized in the following way: for $a,b\in p_1\mathcal{A} p_2$ free with amalgamation over $\operatorname{Vect}(p_1,p_2)$, the distribution of $(a+b)+(a+b)^*$ in $(\mathcal{A}, \varphi)$ is the free convolution of the distributions of $a+a^*$ and $b+b^*$.
If this had remained true for other values of $\lambda$,
it would have meant that for all compactly supported symmetric probability measures $\nu,\tau$ on the real line, we have \begin{equation}\label{13.03.06.1}f_\lambda(\nu\boxplus_{\lambda}\tau)=f_\lambda(\nu)\boxplus f_\lambda(\tau),\end{equation} where $f_\lambda$ is the function which maps a probability measure $\tau$ on the real line to $\frac{2\lambda}{1+\lambda}\tau+\frac{1-\lambda}{1+\lambda}\delta_0$. But looking at the fourth moments, it appears that \eqref{13.03.06.1} is not true.}\end{rmq}
\begin{thebibliography}{99}\bibitem[BBA07]{bba07} Belinschi, S., Benaych-Georges, F., Guionnet, A. \emph{Regularization by free additive convolution, square and rectangular cases}. To appear in {\bf Complex Analysis and Operator Theory}.
\bibitem[BG07a]{fbg05.inf.div} Benaych-Georges, F. \emph{Infinitely divisible distributions for rectangular free convolution: classification and matricial interpretation}.
{\bf Probability Theory and Related Fields} Volume 139, Numbers 1-2, September 2007, 143-189.
\bibitem[BG07b]{bg07} Benaych-Georges, F. \emph{Rectangular random matrices, related convolution}. To appear in {\bf Probability Theory and Related Fields}.
\bibitem[B68]{billingsley} Billingsley, P. \emph{Convergence of probability measures}. Wiley, 1968.
\bibitem[DR07]{dr07} Debbah, M., Ryan, {\O}. \emph{Multiplicative free convolution and information-plus-noise type matrices}. arXiv preprint.
\bibitem[DR08]{dr08} Debbah, M., Ryan, {\O}. \emph{Free deconvolution for signal processing applications}. Second round review, submitted to {\bf IEEE Transactions on Information Theory}, 2007.
\bibitem[HL00]{haag2} Haagerup, U., Larsen, F. \emph{Brown's spectral distribution measure for R-diagonal elements in finite von Neumann algebras}. {\bf Journ. Functional Analysis} 176, 331-367 (2000).
\bibitem[HP00]{hiai} Hiai, F., Petz, D. \emph{The semicircle law, free random variables, and entropy}. Amer. Math. Soc., Mathematical Surveys and Monographs Volume 77, 2000.
\bibitem[NS06]{ns06} Nica, A., Speicher, R. \emph{Lectures on the combinatorics of free probability}. London Mathematical Society Lecture Note Series, 335. Cambridge University Press, Cambridge, 2006.
\bibitem[VDN91]{vdn91} Voiculescu, D.V., Dykema, K., Nica, A. \emph{Free random variables}. CRM Monograph Series No.~1, Amer. Math. Soc., Providence, RI, 1992.
\end{thebibliography}
\end{document} |
\begin{document}
\title[Verblunsky coefficients defined by the skew-shift]{Orthogonal polynomials on the unit circle with Verblunsky coefficients defined by the skew-shift}
\author[H.\ Kr\"uger]{Helge Kr\"uger}
\address{Mathematics 253-37, Caltech, Pasadena, CA 91125}
\email{\href{[email protected]}{[email protected]}}
\urladdr{\href{http://www.its.caltech.edu/~helge/}{http://www.its.caltech.edu/~helge/}}
\thanks{H.\ K.\ was supported by a fellowship of the Simons foundation.}
\date{\today}
\keywords{OPUC, CMV matrices, spectrum, skew-shift, eigenvalue statistics}
\begin{abstract}
I give an example of a family of
orthogonal polynomials on the unit circle
with Verblunsky coefficients
given by the skew-shift for which the associated
measures are supported on the entire unit circle
and almost every Aleksandrov measure is pure point.
Furthermore, I show that in the case of the two dimensional
skew-shift the zeros of para-orthogonal polynomials
obey the same statistics as an appropriate irrational
rotation.
The proof is based on an analysis of the associated
CMV matrices.
\end{abstract}
\maketitle
\section{Introduction}
In this article, I consider orthogonal polynomials on
the unit circle, whose Verblunsky coefficients are given
by
\begin{equation}\label{eq:defVeromegank}
\alpha_n = \lambda \mathrm{e}^{2\pi\mathrm{i}\, \omega n^{k}}
\end{equation}
for $0\neq \lambda\in{\mathbb D}=\{z: |z|<1\}$,
$\omega$ an irrational number, and $k\geq 2$.
The case $k=1$ corresponds to rotated versions
of the Geronimus polynomials, see Theorem~1.6.13
in \cite{opuc2} and Proposition~\ref{prop:rotatealpha}
(see also Theorem~5.3 in \cite{gzborg}).
Given Verblunsky coefficients $\alpha_n$,
we define orthogonal polynomials
on the unit circle recursively by
\begin{equation}\label{eq:defPhin}
\Phi_0(z) = 1, \quad \Phi_{n+1}(z) = z \Phi_n(z)
-\overline{\alpha_n} \Phi_n^{\ast}(z),
\end{equation}
where $\Phi_n^{\ast}(z) = z^n \overline{\Phi_n(\overline{z}^{-1})}$
is the reversed polynomial. By Verblunsky's theorem,
there exists a unique probability measure $\mu$ on $\partial{\mathbb D}$ such that
the $\Phi_n$ are orthogonal with respect to it.
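To make \eqref{eq:defPhin} concrete, here is a small Python sketch (my own illustration, not part of the paper) that runs the recursion for coefficients of the form \eqref{eq:defVeromegank} and checks the classical fact that the zeros of $\Phi_n$ lie in the open unit disk.

```python
import numpy as np

def szego_recursion(alphas):
    """Phi_{n+1}(z) = z*Phi_n(z) - conj(alpha_n)*Phi_n^*(z).

    Polynomials are coefficient arrays, lowest degree first.
    Returns [Phi_0, ..., Phi_N] for N = len(alphas)."""
    phis = [np.array([1.0 + 0j])]                 # Phi_0 = 1
    for a in alphas:
        phi = phis[-1]
        phi_star = np.conj(phi[::-1])             # reversed polynomial Phi_n^*
        nxt = np.concatenate([[0], phi])          # z * Phi_n
        nxt[:len(phi_star)] -= np.conj(a) * phi_star
        phis.append(nxt)
    return phis

# sample values: lambda = 0.5, omega = sqrt(2), k = 2
lam, omega = 0.5, np.sqrt(2)
alphas = [lam * np.exp(2j * np.pi * omega * n**2) for n in range(8)]
phis = szego_recursion(alphas)
```

Each $\Phi_n$ is monic of degree $n$, and all roots of $\Phi_8$ have modulus strictly below $1$.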
The first result is
\begin{theorem}\label{thm:main1}
The support of $\mu$ satisfies
\begin{equation}
\mathrm{supp}(\mu) = \partial{\mathbb D}.
\end{equation}
\end{theorem}
The key to the proof of this theorem is that the
support of $\mu$ is the same as the support
of the measure with Verblunsky coefficients
$\alpha_n \mathrm{e}^{2\pi\mathrm{i} y n}$ by ergodicity for
any $y\in{\mathbb T}={\mathbb R}/{\mathbb Z}$. Now these two supports are
just rotated versions of each other. Hence
$\mathrm{supp}(\mu)$ must be the entire unit circle.
I give the details of the proof in Section~\ref{sec:pfmain1}.
Next, consider the family of Verblunsky coefficients
given by $\alpha_{x,n} = \alpha_n \cdot \mathrm{e}^{2\pi\mathrm{i} x}$.
The corresponding measures are known as {\em Aleksandrov
measures} $\mu_x$; see Section~3.2 in \cite{opuc1}.
Then we have that
\begin{theorem}\label{thm:main2}
For almost every $x$, the Aleksandrov measure $\mu_x$
is pure point.
\end{theorem}
The proof of this theorem is essentially the same as
that of Theorem~\ref{thm:main1}, since the rotational invariance implies
positivity of the Lyapunov exponent. Pure point spectrum
then follows from spectral averaging.
Deterministic examples with similar properties
have been previously obtained in \cite{dk}.
Adapting the methods of \cite{krho1}, \cite{krho2}
to orthogonal polynomials on the unit circle, it should
be possible to obtain similar results even for non-integer
$k > 1$.
At this point, let me mention that the corresponding
question for orthogonal polynomials on the real line,
or rather for Schr\"odinger operators, is open.
Consider the potential $V(n) = \lambda \cos(2\pi \omega n^2)$
for an irrational number $\omega$. Then under a
Diophantine assumption on $\omega$ and a largeness
condition on $\lambda$ one can show pure point
spectrum, see \cite{b2002}, \cite{bgs},
and Chapter~15 in \cite{bbook}, and that
the spectrum contains intervals \cite{kskew}.
However, it is believed that for all $\lambda > 0$
the spectrum of this operator is an interval
and pure point. Partial results for $\lambda > 0$
small can be found in \cite{b, b2,b3}.
The proofs of Theorems~\ref{thm:main1} and \ref{thm:main2}
are much easier than in the real case, because of algebraic
miracles (Proposition~\ref{prop:rotatealpha}). However, there is also an analytic reason
why the case on the unit circle should be simpler,
namely that then the spectrum has no edges.
For this reason, I expect it to be possible to show
analogs of Theorems~\ref{thm:main1} and \ref{thm:main2}
if one perturbs $\alpha_n$ slightly, for example to
$\alpha_n + \varepsilon f(\omega n^k)$ for an analytic
and one-periodic function $f$ and $\varepsilon > 0$
small enough.
At first sight Theorems~\ref{thm:main1} and \ref{thm:main2}
might not seem too surprising, since we know many measures
whose support is the entire unit circle. But the Verblunsky
coefficients of these measures behave quite differently:
for regular measures one knows \cite{si} that the
Verblunsky coefficients Ces\'aro sum to $0$. Similarly,
non-zero periodic potentials have at least one gap.
The situation becomes even more striking when considering
Schr\"odinger operators. There have been a series of
innovative works \cite{abd1,abd2,aj1,aj2,gs4,gsfest} to prove
Cantor spectrum, whereas there are only the perturbative
methods from \cite{cs,kskew} to prove that the spectrum
contains an interval.
\begin{figure}[ht]
\includegraphics[width=0.9\textwidth]{CMVGaps2N2000}
\caption{Zeros of $\Phi_{2000}(z; \beta)$ for $k=2$
and $\omega = \sqrt{2}$.}
\label{fig:1}
\end{figure}
Finally, I also want to address the zero distribution
of the para-orthogonal polynomials. This question has
not been discussed for Schr\"odinger operators yet.
Define for $\beta\in\partial{\mathbb D}$
\begin{equation}
\Phi_n(z; \beta) = z \Phi_{n-1}(z) - \overline{\beta} \Phi_{n-1}^{\ast}(z).
\end{equation}
In contrast to those of $\Phi_n(z)$, the zeros of $\Phi_n(z; \beta)$
lie on the unit circle. Denote these zeros
by $\mathrm{e}^{2\pi\mathrm{i} \theta_1}, \dots, \mathrm{e}^{2\pi\mathrm{i}\theta_N}$.
An inspection of the proof of Theorem~\ref{thm:evskew}
shows that an appropriate adaptation of the
results would remain true for $\Phi_n(z)$.
Before stating our main result,
I will now illustrate the behavior of the zeros with
some numerical computations. Order the values
$\theta_j$ such that
\begin{equation}
0 \leq \theta_1 < \theta_2 < \dots < \theta_N < 1.
\end{equation}
Define the length of gaps by
\begin{equation}
g_j = \theta_{j+1} - \theta_j.
\end{equation}
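Data of the kind shown in the figures can be generated directly from the definitions; the following sketch (my own illustration, not the author's code) runs the Szeg\H{o} recursion, forms $\Phi_n(z;\beta)$, and extracts the angles $\theta_j$ and the gaps $g_j$.

```python
import numpy as np

def para_zeros(alphas, beta=1.0):
    """Angles theta_j in [0,1) of the zeros of Phi_n(z; beta),
    where n = len(alphas) + 1 and |beta| = 1."""
    phi = np.array([1.0 + 0j])                    # Phi_0
    for a in alphas:                              # Szego recursion up to Phi_{n-1}
        star = np.conj(phi[::-1])
        nxt = np.concatenate([[0], phi])
        nxt[:len(star)] -= np.conj(a) * star
        phi = nxt
    star = np.conj(phi[::-1])
    p = np.concatenate([[0], phi])                # z * Phi_{n-1}
    p[:len(star)] -= np.conj(beta) * star         # Phi_n(z; beta)
    roots = np.roots(p[::-1])                     # np.roots wants highest degree first
    return np.sort((np.angle(roots) / (2 * np.pi)) % 1.0)

lam, omega = 0.5, np.sqrt(2)
alphas = [lam * np.exp(2j * np.pi * omega * n**2) for n in range(60)]
theta = para_zeros(alphas)
gaps = np.diff(np.concatenate([theta, [theta[0] + 1]]))   # wrap-around gap included
```

Plotting a histogram of `gaps` for large $n$ reproduces the peaked shape of the figures.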
Figures~\ref{fig:1} and \ref{fig:2} show the distribution
of the values of $g_j$ for different values of $N$
when $k = 2$. One
sees that this distribution peaks at only three values.
This should remind one of the distribution of gap
lengths for the sequence of values $\{\eta n \pmod{1} \}_{n=1}^{N}$
for some value of $\eta$, and in fact we will show this
in Theorem~\ref{thm:main4}. It should also be pointed
out that these gap distributions do not converge.
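The three-peak shape of the gap distribution of $\{\eta n \pmod 1\}$ is the classical three-distance (Steinhaus) phenomenon: the gaps take at most three distinct values. A quick numerical check (my own sketch, not from the paper):

```python
import numpy as np

def distinct_gaps(eta, N, tol=1e-9):
    """Distinct gap lengths of the points {eta*n mod 1 : n = 1..N} on the circle."""
    pts = np.sort((eta * np.arange(1, N + 1)) % 1.0)
    gaps = np.diff(np.concatenate([pts, [pts[0] + 1]]))   # includes wrap-around gap
    vals = []
    for g in np.sort(gaps):
        if not vals or g - vals[-1] > tol:
            vals.append(g)
    return vals

vals = distinct_gaps(np.sqrt(2), 2000)
```

For any irrational `eta` and any `N`, at most three distinct gap lengths occur.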
\begin{figure}[ht]
\includegraphics[width=0.9\textwidth]{CMVGaps2N4000}
\caption{Zeros of $\Phi_{4000}(z; \beta)$ for $k=2$
and $\omega = \sqrt{2}$.}
\label{fig:2}
\end{figure}
On the other hand, Figure~\ref{fig:3} shows the same
graphic for $k =3$, and the distribution resembles
an exponential distribution. One obtains similar
figures for $k\geq 4$. This is the same distribution
one would obtain if the $\theta_j$ were given by
a Poisson process and, by \cite{st06}, also if the
Verblunsky coefficients $\alpha_n$ were given
by independent identically distributed random variables
whose distribution is non-constant and rotationally invariant.
\begin{figure}[ht]
\includegraphics[width=0.9\textwidth]{CMVGaps3N2000}
\caption{Zeros of $\Phi_{2000}(z; \beta)$ for $k=3$
and $\omega = \sqrt{2}$.}
\label{fig:3}
\end{figure}
Finally, in the case $k=1$, that of the (rotated) Geronimus polynomials,
the assumptions of the Freud--Levin theorem hold
(Theorem~2.6.10 in \cite{szego}) and one has
clock spacing, so the spacing is given by the inverse
of the corresponding density of states measure.
This measure turns out to be non-constant, so
there is not a single peak.
In order to state our result, we need to
introduce more notation.
Define the Laplace functional of $N$ points
$x_1,\dots,x_N \in{\mathbb T}$ by
\begin{equation}
\mathfrak{L}_{\underline{x},N}(f) =
\int_{{\mathbb T}} \exp\left(-\sum_{n=1}^{N} f(N x_n(\theta))
\right)
d\theta,
\end{equation}
where $[-\frac{1}{2}, \frac{1}{2}) \ni
x_n(\theta) = x_n - \theta \pmod{1}$ and $f\geq 0$
is a continuous, compactly supported function.
See \cite{kisto09} for a discussion of Laplace
functionals related to zeros of paraorthogonal
polynomials.
Denote by $\mathfrak{L}^{R}_{\omega, N}$
the Laplace functional of the sequence of points
$\{n\omega \pmod{1}\}_{n=1}^{N}$. The behavior of this sequence
is well understood, see for example \cite{raven}.
In particular, this quantity does not converge to
a limit. We will show
\begin{theorem}\label{thm:main4}
Let $k=2$, $\tau > 1$ and assume that
$\omega$ satisfies
\begin{equation}\label{eq:conddiop}
\inf_{q \geq 1, q\in{\mathbb Z}} q^{\tau} \dist(q \omega, {\mathbb Z}) > 0.
\end{equation}
Then for any positive, continuous, and compactly supported
function $f:{\mathbb R} \to{\mathbb R}$, we have
\begin{equation}
\lim_{N \to \infty} (\mathfrak{L}_{\underline{\theta}, N}(f) -
\mathfrak{L}^{R}_{2 \omega, N}(f)) = 0.
\end{equation}
\end{theorem}
This says that the values of $\mathfrak{L}_{\underline\theta, N}$
are deterministic in the large $N$ limit. However,
they do not converge to a single value, since the one
for the irrational rotation does not.
Using either Theorem~\ref{thm:main4} or, more easily,
Theorem~\ref{thm:evskew}, one can show that the
gap distribution of the eigenvalues indeed
obeys the distribution shown in Figures~\ref{fig:1} and
\ref{fig:2}. The Diophantine
assumption \eqref{eq:conddiop} is necessary;
I sketch an argument in Remark~\ref{rem:liouville}.
Furthermore, it should be noted that Lebesgue
almost every $\omega$ satisfies \eqref{eq:conddiop}.
In this sense the case $k = 2$ is of intermediate
disorder: one has pure point spectrum with exponentially
decaying eigenfunctions, but one does not have sufficient
independence to obtain Poisson statistics.
The definition of the Laplace functional given here
is different from the one usually given in the theory
of point processes. There, one does not introduce
averaging over the unit circle by hand, but this comes
from the points $x_n$ being defined on some probability
space. In Section~\ref{sec:skew}, we will see that
our Verblunsky coefficients are defined on a probability
space, and that averaging over it in particular contains
the $\theta$ average. Hence, the name Laplace functional
is justified.
\begin{remark}\label{rem:liouville}
Assume that for coprime integers $p,q$, $N$
very large, and $\delta > 0$ a small parameter, we have
that $|\omega-\frac{p}{q}|\leq \frac{1}{N^{3 + \delta}}$.
Then for $1 \leq n \leq N$, we have that
\begin{equation}
\left|\alpha_n - \lambda \mathrm{e}^{2\pi\mathrm{i} \frac{p n^2}{q}}
\right|\leq\frac{1}{N^{1 + \frac{\delta}{2}}}.
\end{equation}
Since the Verblunsky coefficients $\lambda \mathrm{e}^{2\pi\mathrm{i} \frac{p n^2}{q}}$
are $q$-periodic, the corresponding zeros of the
paraorthogonal polynomials are clock-spaced, so
of size $\frac{1}{N}$, whereas the points
$\{2 n\omega\pmod{1}\}_{n=1}^{N}$
are all in a $\frac{1}{N^{2 + \delta}}$
neighborhood of the points $\{\frac{\ell}{q}\}_{\ell=1}^{q}$.
These two behaviors are clearly incompatible, and
thus Theorem~\ref{thm:main4} cannot hold for
Liouville frequencies.
\end{remark}
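The clustering half of the remark is easy to see numerically. In the sketch below (my own illustration, with hypothetical sample values $p/q=1/5$, $N=50$, $\delta$-perturbation $10^{-7}$), one measures how far the points $\{2n\omega \pmod 1\}$ are from the grid $\{\ell/q\}$ when $\omega$ is extremely close to $p/q$.

```python
import numpy as np

p, q, N = 1, 5, 50
delta = 1e-7                       # |omega - p/q| = delta, tiny compared to 1/N
omega = p / q + delta
pts = (2 * np.arange(1, N + 1) * omega) % 1.0

# distance from each point to the nearest multiple of 1/q
grid = np.arange(q + 1) / q        # include 1.0 to handle wrap-around
dist = np.min(np.abs(pts[:, None] - grid[None, :]), axis=1)
```

All points lie within $2N\delta$ of the grid, far closer together than the clock spacing $1/N$ of the zeros.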
Let me now outline the rest of the content of
the paper. Section~\ref{sec:pfmain1} discusses the
basic theory of half-line CMV matrices and gives
the proof of Theorem~\ref{thm:main1}. Then Section~\ref{sec:excmv}
introduces extended CMV operators, i.e., operators
defined on the whole line, discusses restrictions
of these, defines the Green's function,
and derives useful formulas relating determinants
of CMV matrices to transfer matrices. This discussion
is somewhat more complicated than in the case of
Schr\"odinger operators. Section~\ref{sec:ueCMV}
combines the formulas from the previous section
with the ones for ergodic CMV matrices.
In Section~\ref{sec:skew}, CMV matrices with built-in
rotational invariance are discussed and
Theorem~\ref{thm:main2} is proven.
In Section~\ref{sec:evstat}, we prove Theorem~\ref{thm:main4}
relying on results from Sections~\ref{sec:existtest}
and \ref{sec:greendecays}. Basically, Section~\ref{sec:greendecays}
improves the bounds on the decay of the Green's function
obtained in Section~\ref{sec:ueCMV} from
unique ergodicity by using quantitative recurrence
results for the skew-shift discussed in Appendix~\ref{sec:dynskew}.
Section~\ref{sec:existtest} shows how to exploit
Section~\ref{sec:greendecays} to obtain good
test functions.
\section{A first look at the CMV matrix}
\label{sec:pfmain1}
In this section, we take a look at half-line CMV matrices
and provide a proof of Theorem~\ref{thm:main1}. In the following
sections, we will discuss whole-line CMV matrices in more
detail. Although most results in this section will be reproven
in later parts, I have included it, since it is close
to the notation of \cite{opuc1,opuc2}.
Let $\{\alpha_n\}_{n=0}^{\infty}$ be a sequence of Verblunsky
coefficients. Define $\rho_n = (1 - |\alpha_n|^2)^{\frac{1}{2}}$
and the unitary matrices
\begin{equation}
\Theta_n = \begin{pmatrix} \overline{\alpha_n} & \rho_n \\ \rho_n
& - \alpha_n \end{pmatrix}.
\end{equation}
Define the operators $\mathcal{L}_+, \mathcal{M}_+$ by
\begin{equation}
\mathcal{L}_+ = \begin{pmatrix} \Theta_0 \\ & \Theta_2 \\ & & \ddots
\end{pmatrix},\quad
\mathcal{M}_+ = \begin{pmatrix} 1 \\ & \Theta_1 \\ & & \ddots
\end{pmatrix}
\end{equation}
where $1$ represents the $1 \times 1$ identity matrix.
The CMV matrix is then defined by $\mathcal{C} = \mathcal{L}_+
\mathcal{M}_+$, which is five-diagonal and unitary. Its importance
comes from the fact that the measure $\mu$ associated to the
Verblunsky coefficients $\{\alpha_n\}_{n=0}^{\infty}$ is
the spectral measure of $\delta_0$ with respect to
$\mathcal{C}$, so one has
\begin{equation}
\int_{\partial{\mathbb D}} z^n d\mu(z) =
\spr{\delta_0}{\mathcal{C}^n \delta_0}.
\end{equation}
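The two structural claims, unitarity and the five-diagonal shape, can be verified on a finite section. The sketch below (my own illustration) builds $\mathcal{L}_+$ and $\mathcal{M}_+$ from the $2\times 2$ blocks $\Theta_n$; the last block of $\mathcal{M}_+$ is closed off with an identity entry, so the matrix agrees with the half-line operator except near the lower-right edge.

```python
import numpy as np

def theta(a):
    """The 2x2 unitary block Theta_n."""
    r = np.sqrt(1 - abs(a)**2)
    return np.array([[np.conj(a), r], [r, -a]])

def cmv_truncation(alphas):
    """Finite section of C = L_+ M_+ for m = len(alphas) (m even);
    unitary, and equal to the true half-line operator away from the edge."""
    m = len(alphas)
    L = np.zeros((m, m), dtype=complex)
    M = np.eye(m, dtype=complex)
    for j in range(0, m, 2):            # Theta_0, Theta_2, ... in L_+
        L[j:j+2, j:j+2] = theta(alphas[j])
    for j in range(1, m - 1, 2):        # 1, Theta_1, Theta_3, ... in M_+
        M[j:j+2, j:j+2] = theta(alphas[j])
    return L @ M

lam, omega = 0.5, np.sqrt(2)
alphas = [lam * np.exp(2j * np.pi * omega * n**2) for n in range(8)]
C = cmv_truncation(alphas)
```

One checks that $C^*C$ is the identity and that $C_{ij}=0$ whenever $|i-j|>2$.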
We denote by $\mathrm{supp}_{\mathrm{ess}}(\mu)$ the
essential support of the measure $\mu$, that is, the
support of $\mu$ with point masses removed.
\begin{lemma}\label{lem:translatealpha}
Define $\tilde\alpha_n = \alpha_{n+1}$. Let $\tilde{\mu}$ be
the measure corresponding to $\{\tilde\alpha_n\}_{n=0}^{\infty}$.
Then
\begin{equation}
\mathrm{supp}_{\mathrm{ess}}(\tilde{\mu})
= \mathrm{supp}_{\mathrm{ess}}(\mu).
\end{equation}
\end{lemma}
\begin{proof}
Clearly $\mathrm{supp}_{\mathrm{ess}}(\mu) =
\sigma_{\mathrm{ess}}(\mathcal{C})$. Let $S$ be the backward
shift on $\ell^2({\mathbb N})$. Then $\mathcal{C}$ and
$S^* \mathcal{\tilde{C}} S$ differ by a finite rank operator.
The claim follows.
\end{proof}
A similar proof implies that for all the translates
$\alpha^{\ell}_n = \alpha_{n+\ell}$ the corresponding
CMV matrices have the same essential spectrum.
Hence, for Verblunsky coefficients given by
\eqref{eq:defVeromegank}, one obtains that the family
of Verblunsky coefficients given by
\begin{equation}
\alpha^{\ell}_n = \lambda \exp\left(2\pi\mathrm{i}
\left(\omega n^k + \sum_{j=0}^{k-1}
\binom{k}{j} \omega \ell^{k-j} \cdot n^j\right)\right)
\end{equation}
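The phase in the last display is nothing but the binomial expansion of $\omega(n+\ell)^k$. The following sketch (illustrative, with hypothetical sample values $k=3$, $\ell=5$) confirms this numerically.

```python
import numpy as np
from math import comb

lam, omega, k, ell = 0.5, np.sqrt(2), 3, 5

def alpha(n):
    """Verblunsky coefficient lambda * e^{2 pi i omega n^k}."""
    return lam * np.exp(2j * np.pi * omega * n**k)

def alpha_shift(n):
    """The translated coefficient alpha^ell_n from the text:
    phase omega*n^k + sum_{j=0}^{k-1} C(k,j) omega ell^{k-j} n^j."""
    phase = omega * n**k + sum(comb(k, j) * omega * ell**(k - j) * n**j
                               for j in range(k))
    return lam * np.exp(2j * np.pi * phase)
```

By the binomial theorem, `alpha(n + ell)` and `alpha_shift(n)` agree for every `n`.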
have the same essential spectrum. Define
for $y \in [0,1]^k$ a family of Verblunsky
coefficients by
\begin{equation}
\tilde{\alpha}_{y, n} = \lambda \exp\left(2\pi\mathrm{i}
\left(\omega n^k + \sum_{j=0}^{k-1}
y_j \cdot n^j\right)\right).
\end{equation}
\begin{lemma}\label{lem:sigmaCy}
We have for any $y\in [0,1]^k$ that
\begin{equation}
\sigma_{\mathrm{ess}}(\mathcal{C}) =
\sigma_{\mathrm{ess}}(\widetilde{\mathcal{C}}_{y}).
\end{equation}
\end{lemma}
\begin{proof}
Given $y$, there exists a sequence $\ell_s$ such that
\[
\binom{k}{j} \omega \ell_s^{k-j} \to y_j
\]
for $0\leq j\leq k-1$ as $s\to\infty$
(see Theorem~2.2 and Lemma~2.3 in \cite{krho1}).
By strong convergence, one thus obtains that
\[
\sigma_{\mathrm{ess}}(\mathcal{C}) \supseteq
\sigma_{\mathrm{ess}}(\widetilde{\mathcal{C}}_{y}).
\]
The other inclusion can be proven in a similar way.
\end{proof}
Results similar to Lemma~\ref{lem:sigmaCy} have been
discussed in \cite{lsess}.
For the proof of Theorem~\ref{thm:main1},
we will also need
\begin{proposition}\label{prop:rotatealpha}
Define Verblunsky coefficients by
$\tilde{\alpha}_n = \mathrm{e}^{2\pi\mathrm{i} \eta n} \alpha_n$.
Then
\begin{equation}
\mathrm{supp}(\tilde\mu) = \mathrm{e}^{-2\pi\mathrm{i} \eta}
\mathrm{supp}(\mu).
\end{equation}
\end{proposition}
\begin{proof}
This follows from the formulas in
Appendix~A.H. in \cite{opuc2}. I will
also give another proof in Section~\ref{sec:skew}.
\end{proof}
Given $y \in [0,1]^k$ and $\eta \in [0,1]$ define
\begin{equation}
\hat{y}_j = \begin{cases} y_j,&j=0 \text{ or } 2 \leq j \leq k-1;\\
y_1 + \eta,& j=1.\end{cases}
\end{equation}
Proposition~\ref{prop:rotatealpha} shows that
\begin{equation}
\sigma_{\mathrm{ess}}(\widetilde{\mathcal{C}}_{y}) =
\mathrm{e}^{- 2\pi\mathrm{i} \eta}
\sigma_{\mathrm{ess}}(\widetilde{\mathcal{C}}_{\hat{y}}).
\end{equation}
Having this, we are now ready for
\begin{proof}[Proof of Theorem~\ref{thm:main1}]
The results discussed so far imply that
$\sigma_{\mathrm{ess}}(\mathcal{C})$ is a non-empty,
rotationally invariant subset of $\partial{\mathbb D}$.
Hence, we must have
\[
\sigma_{\mathrm{ess}}(\mathcal{C}) = \partial{\mathbb D}.
\]
Since also $\sigma_{\mathrm{ess}}(\mathcal{C})\subseteq
\sigma(\mathcal{C})\subseteq\partial{\mathbb D}$,
the claim follows.
\end{proof}
\section{Extended CMV operators}
\label{sec:excmv}
In this section, we introduce extended CMV operators
and discuss their properties that will be useful to
us. See also \cite{gzweyl} and Section~10.5 in \cite{opuc2}
for discussions from different viewpoints.
Let now $\{\alpha_n\}_{n\in{\mathbb Z}}$ be a bi-infinite
sequence of Verblunsky coefficients, i.e.
$\alpha_n \in {\mathbb D}$, although we will discuss setting
certain $\alpha_{n}$ to values in $\overline{\mathbb D}$ below.
Recall that $\rho_n = (1-|\alpha_n|^2)^{\frac{1}{2}}$ and
\begin{equation}
\Theta_n = \begin{pmatrix} \overline{\alpha_n} & \rho_n \\
\rho_n & - \alpha_n \end{pmatrix}
\end{equation}
viewed as acting on $\ell^2(\{n,n+1\})$. Define
\begin{equation}
\mathcal{L} = \bigoplus_{n\text{ even}} \Theta_n,\quad
\mathcal{M} = \bigoplus_{n\text{ odd}} \Theta_n
\end{equation}
and the extended CMV operator $\mathcal{E} = \mathcal{L}
\cdot \mathcal{M}$. We note
\begin{lemma}
$\mathcal{E}$, $\mathcal{L}$, and $\mathcal{M}$ are
unitary operators $\ell^2({\mathbb Z})\to\ell^2({\mathbb Z})$. Furthermore,
$\mathcal{L}$ leaves the subspaces $\ell^2(\{n,n+1\})$
for $n$ even invariant, whereas $\mathcal{M}$ does this for
$n$ odd.
\end{lemma}
We will now discuss various restrictions of CMV
operators. First denote by $P^{[a,b]}$ the projection
$\ell^2({\mathbb Z}) \to \ell^2([a,b])$. We define
\begin{equation}
X^{[a,b]} = (P^{[a,b]})^* X P^{[a,b]}
\end{equation}
for $X\in\{\mathcal{E},\mathcal{M},\mathcal{L}\}$.
\begin{lemma}
$\mathcal{E}^{[a,b]} = \mathcal{L}^{[a,b]} \mathcal{M}^{[a,b]}$.
\end{lemma}
\begin{proof}
Compute.
\end{proof}
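The computation amounts to the observation that $\mathcal{L}$ only couples even pairs $(n,n+1)$ and $\mathcal{M}$ only odd pairs, so no term of the product $\mathcal{L}\mathcal{M}$ crosses the boundary of $[a,b]$ in a way that survives the projections. A numerical sketch (my own finite model; the window $[a,b]$ is kept away from the artificial edges):

```python
import numpy as np

def theta(a):
    """The 2x2 unitary block Theta_n."""
    r = np.sqrt(1 - abs(a)**2)
    return np.array([[np.conj(a), r], [r, -a]])

rng = np.random.default_rng(0)
alphas = 0.7 * np.exp(2j * np.pi * rng.random(20))   # arbitrary coefficients in D

n = 20
L = np.eye(n, dtype=complex)
M = np.eye(n, dtype=complex)
for j in range(0, n - 1, 2):       # L carries Theta_j for j even
    L[j:j+2, j:j+2] = theta(alphas[j])
for j in range(1, n - 1, 2):       # M carries Theta_j for j odd
    M[j:j+2, j:j+2] = theta(alphas[j])
E = L @ M

a, b = 3, 14                        # interior window
```

The restriction identity `E[a:b+1, a:b+1] == L[a:b+1, a:b+1] @ M[a:b+1, a:b+1]` holds regardless of the parities of $a$ and $b$.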
It is easy to check that the operator $\mathcal{E}^{[a,b]}$
will no longer be unitary, but it will still be a useful
object. Let now $\beta \in \partial{\mathbb D}$ and $a\in{\mathbb Z}$
and consider the modified Verblunsky coefficients
\begin{equation}
\tilde{\alpha}_n = \begin{cases} \alpha_n,& n\neq a,\\
\beta,& n=a.\end{cases}
\end{equation}
We then have that $\widetilde{\mathcal{E}}$,
$\widetilde{\mathcal{L}}$, and
$\widetilde{\mathcal{M}}$ leave the
spaces $\ell^2(\{a+1, a+2, \dots\})$ and $\ell^2(\{\dots, a-1,a\})$
invariant. In particular, we can define unitary
restrictions
\begin{equation}
\mathcal{E}^{[a+1, \infty)}_{\beta, \bullet} = P^{[a+1,\infty)}
\widetilde{\mathcal E} P^{[a+1, \infty)},\quad
\mathcal{E}^{(-\infty,a]}_{\bullet, \beta} = P^{(-\infty,a]}
\widetilde{\mathcal E} P^{(-\infty,a]}.
\end{equation}
\begin{lemma}\label{lem:relationC}
Let $\mathcal{C}$ be the CMV operator with Verblunsky
coefficients $\{\alpha_n\}_{n=0}^{\infty}$. Then
\begin{equation}
\mathcal{C} = \mathcal{E}^{[0,\infty)}_{1, \bullet}.
\end{equation}
Denote by $R$ the identification of $\ell^2(\{\dots, -2,-1\})$
with $\ell^2(\{0,1,2,\dots\})$ and
by $\mathcal{C}_-$ the CMV operator with Verblunsky
coefficients $\{-\overline{\alpha}_{-n-1}\}_{n=0}^{\infty}$.
Then
\begin{equation}
\mathcal{C}_- = R \mathcal{E}^{(-\infty,-1]}_{\bullet, 1} R^{*}.
\end{equation}
\end{lemma}
\begin{proof}
These are computations.
\end{proof}
We will now consider restrictions to intervals.
So let $a < b$ be integers, and $\beta,\gamma \in
\partial{\mathbb D}$. Define a sequence of Verblunsky
coefficients
\begin{equation}
\tilde{\alpha}_n = \begin{cases} \beta,&n=a;\\
\gamma,&n=b;\\
\alpha_n,& n\notin\{a,b\}. \end{cases}
\end{equation}
We then define the operator
\begin{equation}
\mathcal{E}^{[a+1,b]}_{\beta, \gamma}
= P^{[a+1,b]} \widetilde{\mathcal{E}} P^{[a+1,b]}.
\end{equation}
Of course, this definition makes sense
for $\beta,\gamma \in \overline{{\mathbb D}}$ and $a = - \infty$
or $b = \infty$. Furthermore, we write
$\bullet$ if we leave $\alpha_a$ or $\alpha_b$
unchanged, to match the previous definition.
The parameters $\beta,\gamma\in\partial{\mathbb D}$ should be thought of
as boundary conditions.
\begin{lemma}
If $\beta,\gamma\in\partial{\mathbb D}$, then
$\mathcal{E}^{[a,b]}_{\beta,\gamma}$,
$\mathcal{L}^{[a,b]}_{\beta,\gamma}$, and
$\mathcal{M}^{[a,b]}_{\beta,\gamma}$ are unitary.
\end{lemma}
Note that the equation $\mathcal{E} \psi = z \psi$
is equivalent to $(z \mathcal{L}^{\ast} - \mathcal{M}) \psi = 0$.
We record for further reference
\begin{lemma}\label{lem:tridiag}
The matrix $A = z (\mathcal{L}^{[a,b]}_{\beta,\gamma})^{*}
- \mathcal{M}^{[a,b]}_{\beta,\gamma}$ is tridiagonal.
Write $A = \{A_{i,j}\}_{a\leq i,j \leq b}$.
Then we have that
\begin{equation}
A_{j,j} = \begin{cases}
z \alpha_j + \alpha_{j-1},&j\text{ even}\\
- z \overline{\alpha_{j-1}}
- \overline{\alpha_{j}},&j\text{ odd},\end{cases}\quad
A_{j+1, j} = A_{j,j+1} = \tilde{\rho}_j =
\begin{cases} z \rho_j,&j\text{ even},\\
- \rho_{j},&j\text{ odd}.\end{cases}
\end{equation}
\end{lemma}
Let $z\in{\mathbb C}$, $\beta,\gamma\in\partial{\mathbb D}$,
and $a\leq k,\ell\leq b$; then the Green's function is
defined by
\begin{equation}
G^{[a,b]}_{\beta,\gamma}(z; k,\ell)=
\spr{\delta_k}{\left(z (\mathcal{L}^{[a,b]}_{\beta,\gamma})^{*}
- \mathcal{M}^{[a,b]}_{\beta,\gamma}\right)^{-1} \delta_\ell}.
\end{equation}
Our goal now will be to provide a formula
for the Green's function in terms of quantities
that are easier to analyze, like the formula
for the Green's function of Schr\"odinger operators
in terms of orthogonal polynomials, respectively
entries of the transfer matrix.
We define
\begin{align}
\Phi^{[a,b]}_{\beta,\gamma}(z)
&= \det\left(z - \mathcal{E}^{[a,b]}_{\beta,\gamma}\right) \\
\nonumber &= \det\left(
z (\mathcal{L}^{[a,b]}_{\beta,\gamma} )^*
- \mathcal{M}^{[a,b]}_{\beta,\gamma}
\right) \cdot \det((\mathcal{L}^{[a,b]}_{\beta,\gamma} )^*)
\end{align}
and
\begin{equation}
\varphi^{[a,b]}_{\beta,\gamma}(z) = (\rho_a \cdots \rho_{b})^{-1}
\Phi^{[a,b]}_{\beta,\gamma}(z).
\end{equation}
\begin{lemma}
Let $\Phi_n(z)$ be defined as in \eqref{eq:defPhin}.
Then
\begin{equation}
\Phi_{n}(z) = \Phi^{[0,n-1]}_{1,\bullet}(z).
\end{equation}
\end{lemma}
\begin{proof}
Proposition~3.4 in \cite{scmv5} states
\[
\Phi_n(z) = \det(z - \mathcal{E}^{[0,n-1]}_{1, \bullet}).
\]
The claim follows.
\end{proof}
We also introduce the Aleksandrov polynomials $\Phi^{\beta}_n(z)$,
obtained by applying the recursion \eqref{eq:defPhin} to
the Verblunsky coefficients $\{\beta \alpha_n\}_{n=0}^{\infty}$.
In particular, the polynomial of the second
kind is defined by
\begin{equation}
\Psi_n(z) = \Phi^{-1}_n(z).
\end{equation}
The following is Theorem~9.5 in \cite{s1foot}.
\begin{lemma}
We have
\begin{equation}
\Phi^{\beta}_n(z) = \Phi_{\beta,\bullet}^{[0,n-1]}(z)
\end{equation}
and
\begin{equation}
\Phi^{\beta}_n(z;\gamma) =
\Phi_{\beta,\gamma}^{[0,n-1]}(z).
\end{equation}
\end{lemma}
With these formulas, we obtain the following equality
for the absolute value of the Green's function. One could
also derive an equality for the Green's function itself,
but one would need to distinguish between four cases,
depending on whether $a$ and $b$ are even or odd.
\begin{proposition}\label{prop:greenaspolynomial}
Let $z\in{\mathbb C}$, $\beta,\gamma\in\partial{\mathbb D}$,
and $a \leq k \leq \ell \leq b$. Then
\begin{equation}
\left| G^{[a, b]}_{\beta,\gamma} (z; k, \ell)\right|
= \frac{1}{\rho_k \rho_{\ell}}
\left| \frac{\varphi_{\beta, \bullet}^{[a,k-1]}(z)
\varphi_{\bullet, \gamma}^{[\ell+1, b]}(z)}
{\varphi_{\beta,\gamma}^{[a,b]}(z)}\right|.
\end{equation}
\end{proposition}
\begin{proof}
By Cramer's rule and Lemma~\ref{lem:tridiag},
we obtain
\[
\left|G^{[a, b]}_{\beta,\gamma} (z; k, \ell)\right|
= \tilde{\rho}_{k+1} \cdots \tilde{\rho}_{\ell - 1}
\left|\frac{\Phi_{\beta, \bullet}^{[a,k-1]}(z)
\Phi_{\bullet, \gamma}^{[\ell+1, b]}(z)}
{\Phi_{\beta,\gamma}^{[a,b]}(z)}\right|.
\]
The claim now follows from the definition of $\varphi$.
\end{proof}
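As a numerical aside (not part of the argument), the tridiagonal Cramer-rule mechanism behind this proof is easy to check for a generic tridiagonal matrix; in the sketch below a random real matrix stands in for the CMV truncation, with the two block determinants playing the role of the truncated $\Phi$'s.

```python
import numpy as np

# Tridiagonal inverse via Cramer's rule: for k < l,
# (A^{-1})_{k,l} = (-1)^{k+l} det(A[0..k-1]) * (A[k,k+1] ... A[l-1,l]) * det(A[l+1..]) / det(A).
rng = np.random.default_rng(0)
n = 7
A = (np.diag(rng.standard_normal(n))
     + np.diag(rng.standard_normal(n - 1), 1)
     + np.diag(rng.standard_normal(n - 1), -1))

k, l = 2, 5
top = np.linalg.det(A[:k, :k])          # leading block, rows/cols 0..k-1
bot = np.linalg.det(A[l + 1:, l + 1:])  # trailing block, rows/cols l+1..n-1
mid = np.prod(np.diag(A, 1)[k:l])       # superdiagonal entries between k and l
val = (-1) ** (k + l) * top * mid * bot / np.linalg.det(A)

G = np.linalg.inv(A)                    # Green's function = inverse entries
assert np.isclose(G[k, l], val)
```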
This formula is more awkward than the one for
Schr\"odinger operators, since it involves
three different types of polynomials, whereas the
one for Schr\"odinger operators only involves one
(see (2.7) in \cite{bbook}). Nevertheless, it
is useful in exactly the same way.
We now relate the Green's function
to solutions of our equation.
\begin{lemma}\label{lem:green2sol}
Let $\psi$ solve $\mathcal{E} \psi = z \psi$. Then
for $a < n < b$
\begin{align}
\psi(n) & = G^{[a,b]}_{\beta,\gamma}(z;n,a)
\begin{cases} (z \overline{\beta} - \alpha_a) \psi(a)
-\rho_a \psi(a+1),& a\text{ even};\\
(z \alpha_a - \beta) \psi(a) + z \rho_a \psi(a+1),&a\text{ odd}
\end{cases}\\
\nonumber &+ G^{[a,b]}_{\beta,\gamma}(z;n, b)
\begin{cases} (z \overline{\gamma} - \alpha_b) \psi(b)
-\rho_b \psi(b-1),& b\text{ even};\\
(z \alpha_b - \gamma) \psi(b)
+ z \rho_{b-1} \psi(b-1),&b\text{ odd}
\end{cases}
\end{align}
\end{lemma}
\begin{proof}
With $A = z(\mathcal{L}^{[a,b]}_{\beta,\gamma})^{\ast} -
\mathcal{M}^{[a,b]}_{\beta,\gamma}$, we have
\[
\psi(n) = \spr{\delta_n}{A^{-1} A \psi}.
\]
Since $(z\mathcal{L}^{\ast} - \mathcal{M} )\psi=0$ on the
whole line and $A$ differs from the whole-line operator only
in the rows $a$ and $b$, we have for $a+1 \leq n \leq b-1$ that
\[
A \psi (n) = 0.
\]
The claim now follows by evaluating $A\psi(n)$
for $n \in \{a,b\}$.
\end{proof}
Our next goal is to introduce transfer
matrices and relate them to the determinants
defined above. We begin with the one-step
transfer matrix
\begin{equation}
A(\alpha, z) = \frac{1}{(1-|\alpha|^2)^{\frac{1}{2}}}
\begin{pmatrix} z & - \overline{\alpha} \\
-\alpha z & 1 \end{pmatrix}.
\end{equation}
We define the transfer matrix by
\begin{equation}
T^{[a,b]}(z) = A(\alpha_{b}, z) \cdots A(\alpha_a, z).
\end{equation}
\begin{lemma}
We have that
\begin{equation}
T^{[a,b]}(z) = \frac{1}{2} \begin{pmatrix}
\varphi^{[a,b]}_{1,\bullet}(z) +
\varphi^{[a,b]}_{-1,\bullet}(z) &
\varphi^{[a,b]}_{1,\bullet}(z) -
\varphi^{[a,b]}_{-1,\bullet}(z) \\
(\varphi^{[a,b]}_{1,\bullet})^*(z) -
(\varphi^{[a,b]}_{-1,\bullet})^*(z) &
(\varphi^{[a,b]}_{1,\bullet})^*(z)
+ (\varphi^{[a,b]}_{-1,\bullet})^*(z)\end{pmatrix},
\end{equation}
where $(\varphi^{[a,b]}_{\beta,\gamma})^*(z)
= z^{b-a+1} \overline{\varphi^{[a,b]}_{\beta,\gamma}(\overline{z}^{-1})}$.
\end{lemma}
\begin{proof}
The $T_n(z)$ in \cite{s1foot} is $T^{[0,n-1]}(z)$
in our notation. We have that
\[
T_n(z) = \frac{1}{2} \begin{pmatrix}
\varphi_n(z) + \psi_n(z) & \varphi_n(z) - \psi_n(z) \\
\varphi_n^*(z) - \psi_n^*(z) & \varphi_n^*(z)
+ \psi_n^*(z)\end{pmatrix}.
\]
It follows that
\[
T^{[0,n-1]}(z) = \frac{1}{2} \begin{pmatrix}
\varphi^{[0,n-1]}_{1,\bullet}(z) +
\varphi^{[0,n-1]}_{-1,\bullet}(z) &
\varphi^{[0,n-1]}_{1,\bullet}(z) -
\varphi^{[0,n-1]}_{-1,\bullet}(z) \\
(\varphi^{[0,n-1]}_{1,\bullet})^*(z) -
(\varphi^{[0,n-1]}_{-1,\bullet})^*(z) &
(\varphi^{[0,n-1]}_{1,\bullet})^*(z)+
(\varphi^{[0,n-1]}_{-1,\bullet})^*(z)\end{pmatrix}.
\]
The claim follows using translation invariance.
\end{proof}
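The first column of this lemma can be checked numerically, assuming the standard Szeg\H{o} recursion $\Phi_{n+1}(z) = z\Phi_n(z) - \overline{\alpha}_n\Phi_n^*(z)$, $\Phi^*_{n+1}(z) = \Phi^*_n(z) - \alpha_n z\Phi_n(z)$ for the monic polynomials; the sketch below is a sanity check only, not part of the argument.

```python
import numpy as np

# Check: (varphi_n, varphi_n^*)^t = T^{[0,n-1]}(z) (1, 1)^t, where varphi_n comes
# from the Szego recursion, normalized by the product of the rho_j.
rng = np.random.default_rng(1)
n = 6
alphas = 0.8 * rng.random(n) * np.exp(2j * np.pi * rng.random(n))  # |alpha_j| < 1
z = np.exp(2j * np.pi * 0.3)

def A(alpha, z):
    """One-step transfer matrix A(alpha, z)."""
    rho = np.sqrt(1 - abs(alpha) ** 2)
    return np.array([[z, -np.conj(alpha)], [-alpha * z, 1.0]]) / rho

T = np.eye(2, dtype=complex)
for a in alphas:
    assert np.isclose(np.linalg.det(A(a, z)), z)  # det A(alpha, z) = z
    T = A(a, z) @ T                               # T = A(alpha_{n-1}) ... A(alpha_0)

# Szego recursion for the monic polynomials (simultaneous update)
phi, phis = 1.0 + 0j, 1.0 + 0j
for a in alphas:
    phi, phis = z * phi - np.conj(a) * phis, phis - a * z * phi
norm = np.prod(np.sqrt(1 - np.abs(alphas) ** 2))

assert np.allclose(T @ np.array([1.0, 1.0]), np.array([phi, phis]) / norm)
```

Applying $T$ to $(1,\pm 1)^t$ then recovers the first- and second-kind columns exactly as in the lemma.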
We thus obtain the following.
\begin{corollary}
We have that
\begin{equation}\label{eq:varphibetabullet}
\begin{pmatrix} \varphi^{[a,b]}_{\beta,\bullet}(z)\\
(\varphi^{[a,b]}_{\beta,\bullet})^{*}(z) \end{pmatrix} =
T^{[a,b]}(z) \begin{pmatrix} 1 \\ \overline{\beta} \end{pmatrix}
\end{equation}
and
\begin{equation}\label{eq:varphibetagamma}
\varphi^{[a,b]}_{\beta,\gamma}(z) =
\frac{1}{\rho_b}
\spr{\begin{pmatrix} z \\ - \overline{\gamma} \end{pmatrix}}
{T^{[a,b-1]}(z) \begin{pmatrix} 1 \\ \overline{\beta} \end{pmatrix}}.
\end{equation}
\end{corollary}
\begin{proof}
The first equation is (3.2.26) in \cite{opuc1}.
For the second equation, we have that
\[
\Phi^{[a,b]}_{\beta,\gamma}(z) = z
\Phi^{[a,b-1]}_{\beta,\bullet}(z) - \overline{\gamma}
(\Phi^{[a,b-1]}_{\beta,\bullet})^*(z).
\]
We thus have that
\[
\varphi^{[a,b]}_{\beta,\gamma}(z) =
\frac{1}{\rho_b} \left( z
\varphi^{[a,b-1]}_{\beta,\bullet}(z) - \overline{\gamma}
(\varphi^{[a,b-1]}_{\beta,\bullet})^*(z)
\right),
\]
which, combined with the first equation, implies the second.
\end{proof}
There is one final object we need to identify:
$\varphi^{[a,b]}_{\bullet, \gamma}(z)$. We employ
the same strategy as we used in Lemma~\ref{lem:relationC}
to identify $\mathcal{E}^{(-\infty,0]}_{\bullet,\gamma}$.
Let
\begin{equation}
\tilde{\alpha}_{n} = \begin{cases} \gamma, & n = -1;\\
- \overline{\alpha}_{b-n}, &n \geq 0.\end{cases}
\end{equation}
Then we have that
\begin{equation}
\varphi^{[a,b]}_{\bullet, \gamma}(z) =
\tilde{\varphi}^{[0,b-a-1]}_{\gamma,\bullet}(z).
\end{equation}
\begin{lemma}
We have that
\begin{equation}\label{eq:varphibulletgamma}
\begin{pmatrix} \varphi^{[a,b]}_{\bullet,\gamma}(z) \\
(\varphi^{[a,b]}_{\bullet,\gamma})^*(z) \end{pmatrix} =
\begin{pmatrix} - \frac{1}{z} & 0 \\ 0 & 1 \end{pmatrix}
(T^{[a,b]}(z))^t \begin{pmatrix} -z\\\overline{\gamma} \end{pmatrix}.
\end{equation}
\end{lemma}
\begin{proof}
We have that
\[
\begin{pmatrix} - \frac{1}{z} & 0 \\ 0 & 1 \end{pmatrix}
A(-\overline{\alpha}, z)^{t} \begin{pmatrix} -z & 0 \\ 0 & 1 \end{pmatrix} = A(\alpha, z).
\]
From this the claim follows.
\end{proof}
\section{Strictly ergodic CMV matrices}
\label{sec:ueCMV}
In this section, we consider families of CMV operators.
This has the advantage that certain formulas simplify
when viewed probabilistically. Moreover, strict ergodicity
yields certain statements, in particular those of \cite{furman},
that are not available in the merely ergodic case.
Let $\Omega$ be a compact metric space,
$T:\Omega\to\Omega$ a uniquely ergodic
and minimal homeomorphism,
and $\mu$ the unique $T$-invariant probability measure.
We call $(\Omega,\mu,T)$ strictly ergodic in this case.
For a continuous function $f:\Omega\to{\mathbb D}$, we define
the family of Verblunsky coefficients
\begin{equation}
\alpha_{\omega, n} = f(T^n \omega).
\end{equation}
We denote by $\mathcal{E}_{\omega}, \dots$ the associated
objects.
The main example to keep in mind is the $k$-dimensional
skew-shift with $\Omega = {\mathbb T}^k = ({\mathbb R}/{\mathbb Z})^k$
\begin{equation}\label{eq:defkskew}
(T x)_{\ell} = \begin{cases} x_1 + \omega, & \ell = 1; \\
x_{\ell} + x_{\ell - 1}, &2 \leq \ell \leq k.\end{cases}
\end{equation}
One can then show by induction that
\begin{equation}
(T^n x)_{\ell} = \binom{n}{\ell} \omega + \binom{n}{\ell -1} x_1
+ \dots + \binom{n}{0} x_{\ell}.
\end{equation}
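The inductive formula can be confirmed with exact rational arithmetic; the following sketch is purely illustrative and compares the iterated definition with the closed form.

```python
from math import comb
from fractions import Fraction

def skew_step(x, omega):
    """One step of the k-dimensional skew-shift:
    (Tx)_1 = x_1 + omega, (Tx)_l = x_l + x_{l-1} for 2 <= l <= k (mod 1)."""
    return tuple(
        [(x[0] + omega) % 1] + [(x[l] + x[l - 1]) % 1 for l in range(1, len(x))]
    )

def skew_power(x, omega, n):
    """Closed form (T^n x)_l = C(n,l) omega + C(n,l-1) x_1 + ... + C(n,0) x_l."""
    return tuple(
        (comb(n, l + 1) * omega
         + sum(comb(n, l - j) * x[j] for j in range(l + 1))) % 1
        for l in range(len(x))
    )

# Exact rationals avoid floating-point issues with mod 1.
omega = Fraction(1, 11)
x = (Fraction(1, 3), Fraction(2, 5), Fraction(1, 7))
y = x
for _ in range(10):
    y = skew_step(y, omega)
assert y == skew_power(x, omega, 10)
```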
This map is strictly ergodic, see Proposition~4.7.4 in \cite{brinstuck}.
The Verblunsky coefficients from the introduction
can then be realized as $\alpha_{x, n}$
for $f(x) = \lambda \mathrm{e}^{2\pi\mathrm{i} x_k}$ and a particular
choice of $x$.
We now return to our study of the general case
of uniquely ergodic and minimal CMV matrices.
\begin{lemma}
We have that $\mathcal{E}_{T x} = (S^{\ast} \mathcal{E}_x S)^{t}$,
where $S$ is the usual forward shift on $\ell^2({\mathbb Z})$. In
particular for any $x,y\in\Omega$
\begin{equation}
\sigma(\mathcal{E}_x) = \sigma(\mathcal{E}_y).
\end{equation}
\end{lemma}
\begin{proof}
The first claim is algebraic. The second
claim follows as in Lemma~\ref{lem:sigmaCy}.
\end{proof}
For $n\geq 1$, we define the $n$-step (forward) transfer matrix by
\begin{equation}\label{eq:defTxn}
T_{x;n}(z) = A(\alpha_{x,n-1}, z) \cdots A(\alpha_{x,0}, z).
\end{equation}
We note that $T_{x;n}(z) = T^{[0,n-1]}_{x}(z)$ in the notation
of the previous section, and that also
$T^{[a,b]}_x(z) = T_{T^{a} x; b-a+1}(z)$.
The Lyapunov exponent is defined by
\begin{equation}
L(z) = \lim_{n\to\infty} \frac{1}{n} \int_{\Omega}
\log\|T_{x; n}(z)\| d\mu(x).
\end{equation}
We collect its properties.
\begin{proposition}\label{prop:proplyap}
Let $(\Omega,\mu,T)$ be strictly ergodic and
$z\in\partial{\mathbb D}$.
\begin{enumerate}
\item $L(z) \geq 0$.
\item For $\mu$-almost every $x\in\Omega$, we have as $n\to\infty$
that
\begin{equation}
\frac{1}{n} \log\|T_{x; n}(z)\| \to L(z).
\end{equation}
\item For every $\varepsilon > 0$, there exists $N$ such that
for $n \geq N$ and all $x\in\Omega$ we have
\begin{equation}
\frac{1}{n} \log\|T_{x; n}(z)\| \leq L(z) + \varepsilon.
\end{equation}
\end{enumerate}
\end{proposition}
\begin{proof}
(i) follows from $\det(A(\alpha,z)) = z$.
(ii) is the subadditive ergodic theorem (see Corollary 10.5.25
in \cite{opuc2}).
(iii) is Furman's strengthening for uniquely
ergodic transformations \cite{furman}.
\end{proof}
The right extension of \eqref{eq:defTxn}
for negative numbers is
\begin{equation}
T_{x; - n}(z) = \begin{pmatrix} -\frac{1}{z} & 0 \\ 0 & 1
\end{pmatrix} A(-\overline{\alpha_{x, -1}}, z) \cdots
A(-\overline{\alpha_{x,-n}}, z) \begin{pmatrix}
-z & 0 \\ 0 & 1 \end{pmatrix}
\end{equation}
(where $n\geq 0$). This can be seen from
\eqref{eq:varphibulletgamma}. In particular,
one has
\begin{equation}
L(z) = \lim_{n\to\infty} \frac{1}{n} \int_{\Omega}
\log\|T_{x; -n}(z)\|d\mu(x).
\end{equation}
\begin{lemma}
Let $(\Omega,\mu,T)$ be strictly ergodic,
$z\in\partial{\mathbb D}$, and $\varepsilon > 0$.
There exists $C > 1$ such that for $n \geq 1$
and $\beta,\gamma \in \partial{\mathbb D}$,
we have for $0\leq k \leq \ell \leq n-1$ that
\begin{equation}
|G_{x; \beta,\gamma}^{[0,n-1]}(z; k,\ell)| \leq
C \frac{\mathrm{e}^{ (L(z) + \varepsilon) (k + n - 1 - \ell)}}
{|\varphi^{[0,n-1]}_{x; \beta,\gamma}(z)|}.
\end{equation}
\end{lemma}
\begin{proof}
By (iii) of Proposition~\ref{prop:proplyap}, there exists $c \geq 1$
such that for any $x\in\Omega$ and $n\geq 1$, we have
\[
\|T_{x; n}(z)\| \leq c \mathrm{e}^{(L(z) + \varepsilon) n}.
\]
By \eqref{eq:varphibetabullet} and \eqref{eq:varphibulletgamma},
we obtain that the numerator in Proposition~\ref{prop:greenaspolynomial}
is bounded by
\[
c_2 \cdot \mathrm{e}^{(L(z) + \varepsilon) (n-1 - \ell + k)}.
\]
The claim follows.
\end{proof}
In particular, we obtain the following important theorem.
\begin{theorem}\label{thm:uegreen}
Let $(\Omega,\mu,T)$ be strictly ergodic,
$z\in\partial{\mathbb D}$, $m \in (0, L(z))$, $\delta > 0$,
and $\beta_0, \gamma_0\in\partial{\mathbb D}$.
Then for $n$ large enough, there exists $\Omega_n$
satisfying $\mu(\Omega_n) \geq 1 - \delta$ such that
for $x \in \Omega_n$ there exist
\begin{equation}
\beta \in \{-\beta_0,\beta_0\},\quad
\gamma\in\{-\gamma_0,\gamma_0\}
\end{equation}
such that
for $\frac{n}{3} \leq \ell \leq \frac{2n}{3}$
and $k \in \{0,n-1\}$
\begin{equation}
|G_{x; \beta,\gamma}^{[0,n-1]}(z; k,\ell)| \leq
\mathrm{e}^{- m |k - \ell|}.
\end{equation}
\end{theorem}
\begin{proof}
By \eqref{eq:varphibetagamma}, we have that
\[
\begin{pmatrix} \varphi^{[0,n-1]}_{x;\beta,\gamma}(z) &
\varphi^{[0,n-1]}_{x;-\beta,\gamma}(z) \\
\varphi^{[0,n-1]}_{x;\beta,-\gamma}(z) &
\varphi^{[0,n-1]}_{x;-\beta,-\gamma}(z) \end{pmatrix}
= \begin{pmatrix} z & -\overline{\gamma} \\ z & \overline{\gamma}
\end{pmatrix}
T_{x;n}(z) \begin{pmatrix} 1 & 1 \\ \beta & -\beta \end{pmatrix}.
\]
Since for almost every $x$
$\frac{1}{n}\log\|T_{x;n}(z)\| \geq L(z) (1 - \varepsilon)$
for $n$ large enough, the claim
follows.
\end{proof}
\section{Rotational invariance and the proof of Theorem~\ref{thm:main2}}
\label{sec:skew}
We begin this section by investigating what
happens if one rotates the Verblunsky coefficients,
which is essentially what we used to prove Theorem~\ref{thm:main1}.
We have the following important proposition.
\begin{proposition}\label{prop:rotatealphas}
Let $\beta,\gamma\in\overline{\mathbb D}$, $a<b$ integers, and
$x,y\in{\mathbb T}$, and define
\begin{equation}
\tilde{\alpha}_n = e(n x+y) \alpha_n,\quad
\tilde{\beta} = e((a-1) x+ y) \beta,\quad
\tilde{\gamma} = e(b x + y) \gamma.
\end{equation}
Then $\mathcal{E}^{[a,b]}_{\beta,\gamma}$
and $e(x) \widetilde{\mathcal{E}}^{[a,b]}_{\tilde{\beta},\tilde{\gamma}}$
are unitarily equivalent.
\end{proposition}
Here and in the following, we abbreviate
$e(x) = \mathrm{e}^{2\pi\mathrm{i} x}$.
We will prove this proposition in the case of $a$ and
$b$ finite. It is an interesting question whether it holds
for $a,b$ possibly infinite. An inspection of the proof of
Proposition~\ref{prop:rotatealphas} shows that it
also holds for whole-line CMV operators with pure
point spectrum. In particular, it implies that
in the case $k = 2$, all the operators $\mathcal{E}_x$
defined by the skew-shift are unitarily equivalent.
Since the Jitomirskaya--Simon argument \cite{js}
applies in our case, all the $\mathcal{E}_x$ have purely
singular continuous spectrum.
For the proof of this proposition, we need the following
lemma.
\begin{lemma}\label{lem:rotatealpha}
Pick some $u_a\in\partial{\mathbb D}$ and define a sequence
recursively by
\begin{equation}
u_{n} = \begin{cases} u_{n-1} e(-(n-1) x - y),&n\text{ even};\\
u_{n-1} e((n-1) x+y),&n\text{ odd}.\end{cases}
\end{equation}
Furthermore, we define the multiplication operators
\begin{equation}
U \psi(n) = u_n \psi(n),\quad V \psi(n) = \begin{cases}
u_{n-1} \psi(n), & n\text{ even};\\
u_{n-1} e(-x) \psi(n), & n\text{ odd}.\end{cases}
\end{equation}
Then for $\tilde{z} = e(-x) z$
\begin{equation}
\left(\tilde{z} (\widetilde{\mathcal{L}}^{[a,b]}_{\tilde\beta,\tilde\gamma})^*
- \widetilde{\mathcal{M}}^{[a,b]}_{\tilde\beta,\tilde\gamma}\right) U =
V \left(z (\mathcal{L}^{[a,b]}_{\beta,\gamma})^*
- \mathcal{M}^{[a,b]}_{\beta,\gamma}\right).
\end{equation}
\end{lemma}
\begin{proof}
A computation shows for $n$ even that
\[
\tilde{z} \tilde{\alpha}_n + \tilde{\alpha}_{n-1} = e((n-1)x+y) (z\alpha_n+\alpha_{n-1})
\]
and for $n$ odd
\[
\tilde{z} \overline{\tilde{\alpha}_{n-1}} + \overline{\tilde{\alpha}_{n}} =
e(-nx-y) (z\overline{\alpha_{n-1}}+\overline{\alpha_{n}}).
\]
By Lemma~\ref{lem:tridiag}, we thus obtain for $n$ even
that
\begin{align*}
(\tilde{z} (\widetilde{\mathcal{L}}^{[a,b]}_{\tilde\beta,\tilde\gamma})^*
- \widetilde{\mathcal{M}}^{[a,b]}_{\tilde\beta,\tilde\gamma}) U \psi (n) &=
e(-x) z \rho_{n} u_{n+1} \psi(n+1) - \rho_{n-1} u_{n-1} \psi(n-1)\\
& + u_n e((n-1) x +y) (z\alpha_n+\alpha_{n-1}) \psi(n).
\end{align*}
Since $u_{n} = e(-(n-1) x - y) u_{n-1}$ and $u_{n+1} = e(x)u_{n-1}$,
the claimed equality follows for $n$ even.
Similarly for $n$ odd
\begin{align*}
(\tilde{z} (\widetilde{\mathcal{L}}^{[a,b]}_{\tilde\beta,\tilde\gamma})^*
- \widetilde{\mathcal{M}}^{[a,b]}_{\tilde\beta,\tilde\gamma}) U \psi (n) &=
- \rho_n u_{n+1} \psi(n+1) + e(-x) z \rho_{n-1} u_{n-1} \psi(n-1) \\
& - e(-nx - y) (z \overline{\alpha_{n-1}} + \overline{\alpha_{n}}) u_n \psi(n).
\end{align*}
Since $u_{n+1} = u_n \cdot e(-nx-y)$ and $u_{n} = e((n-1)x + y) u_{n-1}$,
we obtain the claim.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:rotatealphas}]
Since the spectra of $\mathcal{E}^{[a,b]}_{\beta,\gamma}$
and $e(x) \widetilde{\mathcal{E}}^{[a,b]}_{\tilde\beta,\tilde\gamma}$ are
simple, it suffices to show that they are the same.
If $\mathcal{E}^{[a,b]}_{\beta,\gamma} \psi = z \psi$
for $\psi\neq 0$,
we have that $(z (\mathcal{L}^{[a,b]}_{\beta,\gamma})^*
- \mathcal{M}^{[a,b]}_{\beta,\gamma}) \psi = 0$.
Hence, by the previous lemma, also
\[
(\tilde{z} (\widetilde{\mathcal{L}}^{[a,b]}_{\tilde\beta,\tilde\gamma})^*
- \widetilde{\mathcal{M}}^{[a,b]}_{\tilde\beta,\tilde\gamma}) \varphi =0
\]
for $\varphi = U \psi \neq 0$. Hence, we also have that
\[
(z - e(x) \widetilde{\mathcal{E}}^{[a,b]}_{\tilde\beta,\tilde\gamma}) \varphi =0,
\]
which implies the claim.
\end{proof}
We will now begin drawing conclusions from Proposition~\ref{prop:rotatealphas}.
For the sake of concreteness, we will only consider
the Verblunsky coefficients given by
\begin{equation}
\alpha_{x,n} = \lambda \mathrm{e}^{2\pi\mathrm{i} (T^n x)_k},
\end{equation}
where $x\in{\mathbb T}^k$, $\lambda\in{\mathbb D}\setminus\{0\}$,
and $T:{\mathbb T}^k\to{\mathbb T}^k$ is the
$k$-dimensional skew-shift defined in \eqref{eq:defkskew}.
For $\theta_1, \theta_2 \in {\mathbb T}$, we denote by
$P_{[\theta_1,\theta_2]}$ the spectral projection
on the arc $\{\mathrm{e}^{2\pi\mathrm{i} t}:\quad t \in [\theta_1, \theta_2] \pmod{1}\}$.
We then have the following.
\begin{theorem}\label{thm:wegneresti}
Let $\beta_0,\gamma_0 \in\partial{\mathbb D}$ and define
\begin{equation}
\beta_x = \beta_0 \frac{\alpha_{x,-1}}{|\alpha_{x,-1}|},\quad
\gamma_x = \gamma_0 \frac{\alpha_{x,n-1}}{|\alpha_{x,n-1}|}.
\end{equation}
Then
\begin{equation}
\frac{1}{n} \int_{{\mathbb T}^k}\mathrm{tr}\left(P_{[\theta_1, \theta_2]}
\mathcal{E}^{[0,n-1]}_{x; \beta_x,\gamma_x} \right) dx
= |\theta_2 - \theta_1|.
\end{equation}
\end{theorem}
\begin{proof}
We will show that this holds already after performing only the $x_{k-1}$
integral. Let $s = x_{k-1}$. Then changing $s$ amounts to changing
$x$ in Proposition~\ref{prop:rotatealphas}. Hence, the eigenvalues
are given by
\[
\mathrm{e}^{2\pi\mathrm{i} (\theta_1 - s)},\dots, \mathrm{e}^{2\pi\mathrm{i} (\theta_n - s)}
\]
as $s$ varies. This implies the claim.
\end{proof}
It is easy to infer from this that the integrated
density of states is just given by the normalized
Lebesgue measure.
We now come to the following theorem.
\begin{theorem}\label{thm:lyappos}
For $z\in \partial{\mathbb D}$, we have that
\begin{equation}
L(z) = - \frac{1}{2} \log(1 - |\lambda|^2).
\end{equation}
\end{theorem}
\begin{proof}
This can be shown as in Theorem~12.6.2 in \cite{opuc2}.
\end{proof}
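As an illustrative sanity check (playing no role in the proof), one can simulate a rotation-invariant toy model with i.i.d. uniformly distributed phases and fixed modulus $\lambda$, for which the same constant $-\frac{1}{2}\log(1-\lambda^2)$ is expected; the parameters below are arbitrary choices.

```python
import numpy as np

# Monte Carlo estimate of the Lyapunov exponent for Verblunsky coefficients
# alpha_n = lam * exp(2 pi i u_n), u_n i.i.d. uniform (toy model, not the
# skew-shift itself); expected value: -1/2 log(1 - lam^2) ~ 0.1438 for lam = 0.5.
rng = np.random.default_rng(0)
lam = 0.5
z = np.exp(2j * np.pi * 0.123)
n = 100_000

log_norm = 0.0
T = np.eye(2, dtype=complex)
for u in rng.random(n):
    a = lam * np.exp(2j * np.pi * u)
    A = np.array([[z, -np.conj(a)], [-a * z, 1.0]]) / np.sqrt(1 - lam ** 2)
    T = A @ T
    s = np.linalg.norm(T)     # renormalize to avoid overflow
    log_norm += np.log(s)
    T /= s

L_num = log_norm / n
L_exact = -0.5 * np.log(1 - lam ** 2)
assert abs(L_num - L_exact) < 0.02
```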
\begin{proof}[Proof of Theorem~\ref{thm:main2}]
For $\theta\in{\mathbb T}$, we have
\[
\alpha_{\tilde{x}, n} = \mathrm{e}^{2\pi\mathrm{i} \theta} \alpha_{x, n}
\]
where
\[
\tilde{x}_{\ell} = \begin{cases}
x_{\ell}, &1\leq \ell \leq k-1; \\
x_{k} + \theta,&\ell=k.\end{cases}
\]
The claim now follows from Theorem~12.6.1
in \cite{opuc2}.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:rotatealpha}]
If $z \in \sigma_{\mathrm{ess}}(\mathcal{C})$ then there
exists a sequence $\psi_j \in \ell^2({\mathbb N})$ such that
$\|\psi_j\| = 1$, $\psi_j \to 0$ weakly,
and $\|(\mathcal{C} - z)\psi_j\| \to 0$. In particular,
we have for any $N \geq 1$ fixed
\[
\sum_{n=1}^{N} |\psi_j(n)|^2 \to 0.
\]
By Lemma~\ref{lem:rotatealpha} with $x = \eta$,
$y = 0$, we obtain that the $\varphi_j = U \psi_j$
satisfy $\varphi_j \to 0$ weakly and
\[
\|(\widetilde{\mathcal{C}} - \mathrm{e}^{-2\pi\mathrm{i} \eta} z) \varphi_j\|
\to 0.
\]
Hence, the claim follows.
\end{proof}
\section{Eigenvalue statistics and the proof of Theorem~\ref{thm:main4}}
\label{sec:evstat}
Since we will focus on the case $k=2$, it will be convenient
to introduce the skew-shift $T:{\mathbb T}^2\to{\mathbb T}^2$ by
\begin{equation}
T(x,y) = (x + 2\omega, x + y) \pmod{1}.
\end{equation}
One easily checks that this is equivalent
to \eqref{eq:defkskew} and that
\begin{equation}
T^n(x,y) = (x + 2 n \omega, y + n x+ n(n-1)\omega)\pmod{1}.
\end{equation}
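The closed form for $T^n$ is again easy to confirm with exact rational arithmetic; the snippet below is purely illustrative.

```python
from fractions import Fraction

def T(x, y, omega):
    """One step of the skew-shift T(x, y) = (x + 2*omega, x + y) mod 1."""
    return (x + 2 * omega) % 1, (x + y) % 1

def T_pow(x, y, omega, n):
    """Closed form T^n(x, y) = (x + 2n*omega, y + n*x + n*(n-1)*omega) mod 1."""
    return (x + 2 * n * omega) % 1, (y + n * x + n * (n - 1) * omega) % 1

omega, x, y = Fraction(3, 13), Fraction(1, 4), Fraction(2, 7)
u, v = x, y
for _ in range(9):
    u, v = T(u, v, omega)
assert (u, v) == T_pow(x, y, omega, 9)
```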
Then our Verblunsky coefficients are given by
\begin{equation}
\alpha_{x,y; n} = \lambda e(y + n x + n(n-1)\omega),
\end{equation}
where we use the abbreviation $e(t) = \mathrm{e}^{2\pi\mathrm{i} t}$.
The main goal of this section is to prove the following
theorem, which will imply Theorem~\ref{thm:main4}.
\begin{theorem}\label{thm:evskew}
Assume $\omega$ satisfies \eqref{eq:conddiop}.
Let $x,y\in{\mathbb T}$ and $\beta,\gamma\in\partial{\mathbb D}$.
There exists $\sigma > 0$ such that for $N$
sufficiently large, there exist
$\theta_1^N, \dots, \theta_N^N$ and $\vartheta^N$
such that
\begin{equation}
\sigma(\mathcal{E}_{x,y; \beta,\gamma}^{[0,N-1]})
= \left\{\mathrm{e}^{2\pi\mathrm{i} \theta_1^N}, \dots, \mathrm{e}^{2\pi\mathrm{i}\theta_N^N}
\right\}
\end{equation}
and
\begin{equation}
\frac{1}{N} \#\left\{n:\quad \|\theta_n^N - \vartheta^N + 2 n \omega\|
> \frac{1}{N^{1 + \sigma}}\right\} \leq \frac{1}{N^{\sigma}}.
\end{equation}
\end{theorem}
In order to see how this implies Theorem~\ref{thm:main4},
we need to introduce some more notation related to the
Laplace functional. Given $N$ points $x_1^N, \dots, x_N^N \in {\mathbb T}$,
we define for $\theta\in{\mathbb T}$
\begin{equation}
\left[-\frac{1}{2},\frac{1}{2}\right) \ni x_n^N(\theta)
= x_n^N - \theta \pmod{1}.
\end{equation}
Then their Laplace functional is defined by
\begin{equation}
\mathfrak{L}_{x^N, N}(f) = \int_{{\mathbb T}} \exp\left(-\sum_{n=1}^{N}
f(N x_n^N(\theta))\right) d\theta
\end{equation}
where $f$ is a continuous, compactly supported, and positive
function. If $\underline{x} = \{ \{x_n^{N}\}_{n=1}^{N} \}_{N=1}^{\infty}$
is a sequence of vectors, we denote
\begin{equation}
\mathfrak{L}_{\underline{x}, N}(f) = \mathfrak{L}_{x^{N}, N}(f).
\end{equation}
Theorem~\ref{thm:main4} follows by applying
(iv) of the next lemma to the sequences
\begin{equation}
\underline{\theta} = \{\{\theta_n^{N}\}_{n=1}^{N}\}_{N=1}^{\infty},\quad
\underline{\vartheta} = \{\{\vartheta^{N} - 2 n\omega\}_{n=1}^{N}\}_{N=1}^{\infty}.
\end{equation}
\begin{lemma}
Let $f:{\mathbb R}\to{\mathbb R}$ be a positive, continuous, and
compactly supported function, and let
$\underline{x} = \{ \{x_n^{N}\}_{n=1}^{N}\}_{N=1}^{\infty}$
and $\underline{y}$ be sequences of vectors in ${\mathbb T}$.
\begin{enumerate}
\item Let $c > 0$ and $A > 1$, then
\begin{equation}
|\{\theta\in{\mathbb T}:\quad\#\{1\leq n\leq N:\quad N x_n^N(\theta)\in
[-c,c]\} \geq A \}| \leq \frac{2c}{A}.
\end{equation}
\item If $\max_{1\leq n\leq N} N \|x_n^{N} - y_n^{N}\| \to 0$
then
\begin{equation}
|\mathfrak{L}_{\underline{x},N}(f) -
\mathfrak{L}_{\underline{y},N}(f)| \to 0.
\end{equation}
\item If
\begin{equation}
\frac{1}{N} \#\{1\leq n \leq N:\quad x_n^{N} \neq y_n^{N}\} \to 0
\end{equation}
then
\begin{equation}
|\mathfrak{L}_{\underline{x},N}(f) -
\mathfrak{L}_{\underline{y},N}(f)| \to 0.
\end{equation}
\item If for every $\varepsilon > 0$
\begin{equation}
\frac{1}{N} \#\{1\leq n \leq N:\quad \|x_n^{N} - y_n^{N}\| \geq \frac{\varepsilon}{N}\} \to 0
\end{equation}
then
\begin{equation}
|\mathfrak{L}_{\underline{x},N}(f) -
\mathfrak{L}_{\underline{y},N}(f)| \to 0.
\end{equation}
\end{enumerate}
\end{lemma}
\begin{proof}[Proof of (i)]
This follows from
\[
\int_{-\frac{1}{2}}^{\frac{1}{2}}
\sum_{n=1}^{N} \chi_{\left[-\frac{c}{N}, \frac{c}{N}\right]}
(x_n^{N}(\theta)) d\theta = 2 c
\]
and Markov's inequality.
\end{proof}
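Both ingredients of this proof — the coverage identity and the resulting Markov bound — can be checked exactly for a concrete configuration of points by sweeping over the arc endpoints; the sketch below is an illustration, not part of the argument.

```python
import numpy as np

# Each point x_n carries the window {theta : ||x_n - theta|| <= c/N}, an arc of
# length 2c/N.  The total coverage integral is 2c, and Markov's inequality
# gives |{theta : count >= A}| <= 2c/A.  We compute both quantities exactly.
rng = np.random.default_rng(3)
N, c, A = 40, 1.0, 5
x = rng.random(N)
r = c / N

events = []                    # arc endpoints; wrap-around arcs are split
for xn in x:
    lo, hi = (xn - r) % 1.0, (xn + r) % 1.0
    if lo <= hi:
        events += [(lo, 1), (hi, -1)]
    else:
        events += [(lo, 1), (1.0, -1), (0.0, 1), (hi, -1)]
events.sort()

integral, measure, count, prev = 0.0, 0.0, 0, 0.0
for pos, delta in events:
    integral += count * (pos - prev)   # coverage integral on [prev, pos)
    if count >= A:
        measure += pos - prev          # measure of {count >= A}
    count += delta
    prev = pos
integral += count * (1.0 - prev)
if count >= A:
    measure += 1.0 - prev

assert abs(integral - 2 * c) < 1e-9    # the displayed identity
assert measure <= 2 * c / A + 1e-12    # Markov's inequality
```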
\begin{proof}[Proof of (ii)]
Let $\varepsilon > 0$.
Since $f$ is compactly supported, we have $\mathrm{supp}(f)
\subseteq [-c,c]$ for some $c > 0$. Let $A = \lceil \frac{16 c}{\varepsilon} \rceil$.
By Markov's inequality as in the proof of (i), the set of $\theta$
for which more than $\frac{A}{2}$ of the $N x_n^N(\theta)$ (or of the
$N y_n^N(\theta)$) lie in $[-c,c]$ has measure at most $\frac{4c}{A}$.
Hence there exists a set $I \subseteq {\mathbb T}$ with
$|{\mathbb T} \setminus I| \leq \frac{8c}{A} \leq \frac{\varepsilon}{2}$
such that for $\theta \in I$
\[
\#\{1\leq n \leq N:\quad N x_n^N(\theta) \in [-c,c]
\text{ or } N y_n^N(\theta) \in [-c,c] \} \leq A.
\]
Since $f$ is continuous and compactly supported, it is uniformly
continuous, so there exists $\delta > 0$ such that
$|f(s) - f(t)| \leq \frac{\varepsilon}{2 A}$ for $|s-t| < \delta$.
Choose $N$ so large that
\[
\max_{1\leq n \leq N} N \|x_n^{N} - y_n^{N}\| < \delta.
\]
Then $|N x_n^N(\theta) - N y_n^N(\theta)| < \delta$ for all $n$,
and thus for $\theta \in I$
\[
\left|\sum_{n=1}^{N} f(N x_n^N(\theta))
- \sum_{n=1}^{N} f(N y_n^N(\theta))\right| \leq \frac{\varepsilon}{2}.
\]
The claim follows.
\end{proof}
\begin{proof}[Proof of (iii)]
This follows since the set of $\theta$ for which
\[
\sum_{n=1}^{N} f(N y_n^N(\theta)) \neq
\sum_{n=1}^{N} f(N x_n^N(\theta))
\]
has measure tending to zero as $N\to\infty$: only the indices $n$
with $x_n^N \neq y_n^N$ can contribute, and for each such $n$ the
set of $\theta$ with $N x_n^N(\theta)$ or $N y_n^N(\theta)$ in
$\mathrm{supp}(f)$ has measure $O(\frac{1}{N})$.
\end{proof}
\begin{proof}[Proof of (iv)]
By assumption, there exists $\varepsilon_N \to 0$ such that
\[
\frac{1}{N} \#\{1\leq n\leq N:\quad \|x_n^{N} - y_n^{N} \|
\geq \frac{\varepsilon_N}{N} \} \to 0.
\]
Define
\[
\tilde{x}_n^{N} = \begin{cases} y_n^{N},& \|x_n^{N} - y_n^{N}\|
\geq \frac{\varepsilon_N}{N};\\
x_n^N, & \text{otherwise}.\end{cases}
\]
Then $x^N$ and $\tilde{x}^N$ satisfy the assumption of (iii),
and $\tilde{x}^N$ and $y^N$ the one of (ii). The claim follows.
\end{proof}
We now begin the proof of Theorem~\ref{thm:evskew}.
Condition \eqref{eq:conddiop} implies that there exists
some $c > 0$ such that
\begin{equation}\label{eq:conddiop2}
\|q \omega\| \geq \frac{c}{q^{\tau}}
\end{equation}
for all positive integers $q$. The following
theorem will be essential to our proof and
will be proven in the next section.
\begin{theorem}\label{thm:existtest}
There exists a constant $\sigma \in (0,1)$ such that the
following holds. Let $\eta \geq 1$,
$\beta,\gamma\in\partial{\mathbb D}$,
and $x,y\in{\mathbb T}$. For $N$ sufficiently large,
there exist a normalized
$\psi \in \ell^2(\{0,\dots,N-1\})$ with
$\psi(n) = 0$ for $n \geq N^{\sigma}$ and for $n \in \{0,1\}$,
and $z = \mathrm{e}^{2\pi\mathrm{i} \vartheta}$, such that
\begin{equation}
\|(\mathcal{E}^{[0,N-1]}_{x,y;\beta,\gamma} - z) \psi\|
\leq \left(\frac{1}{N}\right)^{\eta}.
\end{equation}
\end{theorem}
Define $\vartheta_k = \vartheta - 2 \omega k$ and
$z_k = e(\vartheta_k)$.
\begin{lemma}
Let $\varepsilon > 0$.
If \eqref{eq:conddiop2} holds, then for
$N$ large enough and $k \neq \tilde{k} \in \{0, \dots, N-1\}$
we have
\begin{equation}
|z_k - z_{\tilde{k}}| \geq \frac{1}{N^{\tau + \varepsilon}}.
\end{equation}
\end{lemma}
\begin{proof} Clear. \end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:evskew}]
We let $\eta = \tau + 2\varepsilon$ in Theorem~\ref{thm:existtest}.
With $u_{n}$ the appropriate factors as given
in Lemma~\ref{lem:rotatealpha}, we define the test
functions
\[
\psi_k(n) = \begin{cases} u_{n} \psi(n - k), &k\leq n\leq k+N^{\sigma};\\
0, &\text{otherwise}. \end{cases}
\]
We then have for $0 \leq k \leq N - N^{\sigma}$ that
\[
\|(\mathcal{E}_{x,y;\beta,\gamma}^{[0,N-1]}-z_k) \psi_k\|
\leq \frac{1}{N^{\eta}}.
\]
Hence, there is some eigenvalue $\mathrm{e}^{2\pi\mathrm{i}\theta_{\ell_k}}$
such that
\[
\|\theta_{\ell_{k}} - \vartheta_k\| \leq \frac{1}{N^{\eta}}.
\]
By the previous lemma, we must have $\ell_k \neq \ell_{\tilde{k}}$
for $k \neq \tilde{k}$. The claim then follows upon
reordering the $\theta_{\ell}$.
\end{proof}
\section{Proof of Theorem~\ref{thm:existtest}}
\label{sec:existtest}
Let $L = \lfloor \frac{1}{3} N^{\sigma}\rfloor$.
If we show that for every $(x,y)\in{\mathbb T}^2$, there exist
a normalized vector $\psi$ and $z\in\partial{\mathbb D}$
such that
\begin{equation}
\|(\mathcal{E}^{[-L,L]}_{x,y; \beta,\gamma} - z) \psi\|
\leq \frac{1}{N^C},
\end{equation}
then Theorem~\ref{thm:existtest} follows. We will
show this modified claim, since it is notationally
somewhat simpler to deal with.
Since $\mathcal{E}^{[-L,L]}_{x,y; \beta,\gamma}$ has $2 L + 1$ eigenvalues,
expanding $\delta_0$ in an orthonormal basis of eigenvectors shows
that there exist $z\in\partial{\mathbb D}$ and $\|\psi\| = 1$ such that
\begin{equation}
\mathcal{E}^{[-L,L]}_{x,y; \beta,\gamma} \psi = z \psi,
\quad |\psi(0)|^2 \geq \frac{1}{2L+1}.
\end{equation}
We will prove the following theorem in the next section.
\begin{theorem}\label{thm:greendecays}
There exists $\eta > 0$ such that for every $C \geq 1$
and $L$ large enough, with $M = \lfloor L^{\eta} \rfloor$,
there exist
\begin{equation}
-\frac{2}{3} L \leq k_- \leq -\frac{1}{3} L,\quad
\frac{1}{3} L \leq k_+ \leq \frac{2}{3} L
\end{equation}
such that for
\begin{equation}
k \in \{k_- - C M, \dots, k_- + C M\}
\cup \{k_+ - CM, \dots, k_+ + C M\}
\end{equation}
there exist $\beta, \gamma \in \partial{\mathbb D}$ such that
for $|k - \ell| \leq \frac{M}{2}$ we have
\begin{equation}
|G^{[k-M, k+M]}_{x,y;\beta,\gamma}(z; \ell, k-M)|,\
|G^{[k-M, k+M]}_{x,y;\beta,\gamma}(z; \ell, k+M)| \leq \frac{1}{M}.
\end{equation}
\end{theorem}
Define
\begin{equation}
\mathcal{K}_{t} =
\{k_- - t M, \dots, k_- + t M\}
\cup \{k_+ - t M, \dots, k_+ + t M\}.
\end{equation}
Using Lemma~\ref{lem:green2sol} combined
with the estimate from the previous theorem, we
can conclude for $k \in \mathcal{K}_{C}$ and $|\ell -k| \leq \frac{M}{2}$
that
\begin{equation}
|\psi(\ell)| \leq \frac{4}{M},
\end{equation}
where we used the trivial estimate $|\psi(n)|\leq 1$.
We can iterate this to obtain for $s = 1,\dots, C$ that
for $k \in \mathcal{K}_{C-s+1}$ and $|\ell -k| \leq \frac{M}{2}$
\begin{equation}
|\psi(\ell)| \leq \left(\frac{4}{M}\right)^s.
\end{equation}
In particular, we obtain that
\begin{equation}
|\psi(k_-)|, |\psi(k_+)| \leq \left(\frac{4}{M}\right)^C.
\end{equation}
Define a test function $\varphi$ by
\begin{equation}
\varphi(n) = \begin{cases} \psi(n), & k_- \leq n \leq k_+; \\
0,&\text{otherwise}.\end{cases}
\end{equation}
We have that
\begin{equation}
\|(\mathcal{E}^{[-L,L]}_{x,y;\beta,\gamma} - z) \varphi\|
\leq \left(\frac{8}{L}\right)^{C \eta}
\end{equation}
and thus Theorem~\mathrm{Re}f{thm:existtest} follows.
\section{Decay of the Green's function:
Proof of Theorem~\ref{thm:greendecays}}
\label{sec:greendecays}
Theorem~\mathrm{Re}f{thm:uegreen} states that
$m = L(z) > 0$ implies that for $N$ large enough
there exists a set $B_N \subseteq{\mathbb T}^2$ with
\begin{equation}gin{enumerate}
\item $|B_N|\to 0$.
\item For $(x,y) \in {\mathbb T}^2\setminus B_N$, there exists
\begin{equation}
\beta \in \left\{-\frac{\alpha_{-N-1}}{|\alpha_{-N-1}|},
\frac{\alpha_{-N-1}}{|\alpha_{-N-1}|}\right\},\quad
\gamma \in \left\{\frac{\alpha_N}{|\alpha_N|},
-\frac{\alpha_N}{|\alpha_N|}\right\}
\end{equation}
such that for $|k| \leq \frac{N}{2}$, we have
\begin{equation}
|G^{[-N,N]}_{x,y;\beta,\gamma}(z; -N, k)|
\leq \mathrm{e}^{-\frac{m}{4} |k+N|},
\quad
|G^{[-N,N]}_{x,y;\beta,\gamma}(z; N, k)|
\leq \mathrm{e}^{-\frac{m}{4} |N-k|}.
\end{equation}
\end{enumerate}
We will first need the following lemma.
\begin{lemma}
Let $\sigma > 0$. There exists a constant $C > 1$ such
that for $N \geq 1$, there exists a set $B^2_N$ such
that
\begin{equation}
|B_N^2| \leq \frac{C}{N^{\sigma}}
\end{equation}
and for
\begin{equation}
\beta \in \left\{-\frac{\alpha_{-N-1}}{|\alpha_{-N-1}|},
\frac{\alpha_{-N-1}}{|\alpha_{-N-1}|}\right\},\quad
\gamma \in \left\{\frac{\alpha_N}{|\alpha_N|},
-\frac{\alpha_N}{|\alpha_N|}\right\}
\end{equation}
and $(x,y)\in{\mathbb T}^2\setminus B_N^{2}$ we have
\begin{equation}
\left\|\left(z \left(
\mathcal{L}^{[-N,N]}_{x,y;\beta,\gamma}\right)^{*} -
\mathcal{M}^{[-N,N]}_{x,y;\beta,\gamma}\right)^{-1}
\right\|
\leq N^{1 + \sigma}.
\end{equation}
\end{lemma}
\begin{proof}
This is a consequence of Theorem~\ref{thm:wegneresti}.
\end{proof}
In summary, we have extracted the following statement.
\begin{proposition}
Let $\sigma > 0$. For $N \geq 1$ large enough, there
exists $\Omega_N \subseteq{\mathbb T}^2$ such that
\begin{equation}
\lim_{N\to\infty} |{\mathbb T}^2 \setminus\Omega_N| = 0.
\end{equation}
For each $(x,y) \in \Omega_N$ and
\begin{equation}
|\tilde{x} - x| \leq \frac{1}{N^{2(1+2\sigma)}},\quad
|\tilde{y} - y| \leq \frac{1}{N^{1+2\sigma}},
\end{equation}
we have that there exists
\begin{equation}
\beta \in \left\{-\frac{\alpha_{-N-1}}{|\alpha_{-N-1}|},
\frac{\alpha_{-N-1}}{|\alpha_{-N-1}|}\right\},\quad
\gamma \in \left\{\frac{\alpha_N}{|\alpha_N|},
-\frac{\alpha_N}{|\alpha_N|}\right\}
\end{equation}
such that for $|k| \leq \frac{N}{2}$, we have
\begin{equation}
|G^{[-N,N]}_{\tilde x, \tilde y;\beta,\gamma}(z; -N, k)| \leq \frac{1}{N},
\quad
|G^{[-N,N]}_{\tilde x, \tilde y;\beta,\gamma}(z; N, k)| \leq \frac{1}{N}.
\end{equation}
\end{proposition}
\begin{proof}
A computation shows that
\[
\|\mathcal{L}^{[-N,N]}_{x,y;\beta,\gamma} -
\mathcal{L}^{[-N,N]}_{\tilde{x},\tilde{y};\beta,\gamma}\|
\lesssim N |x - \tilde{x}| + |y - \tilde{y}|
\leq \frac{1}{N^{\frac{\sigma}{2}}}
\]
for $N$ large enough
and a similar result for
$\mathcal{M}^{[-N,N]}_{\tilde{x},\tilde{y};\beta,\gamma}$. The result now
follows from
\[
B^{-1} - A^{-1} = B^{-1} (A - B) A^{-1}
\]
and some computations.
\end{proof}
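In slightly more detail (a sketch of the standard perturbation step; the quantitative bookkeeping is the computation referred to above): writing $A$ for the operator $z \big(\mathcal{L}^{[-N,N]}_{x,y;\beta,\gamma}\big)^{*} - \mathcal{M}^{[-N,N]}_{x,y;\beta,\gamma}$ and $B$ for its counterpart at $(\tilde x, \tilde y)$, the resolvent identity yields the entrywise bound
\[
\big|G^{[-N,N]}_{\tilde x,\tilde y;\beta,\gamma}(z;j,k) - G^{[-N,N]}_{x,y;\beta,\gamma}(z;j,k)\big|
\leq \|B^{-1} - A^{-1}\| \leq \|B^{-1}\|\,\|A-B\|\,\|A^{-1}\|,
\]
so the Green's function bounds at $(x,y)$ transfer to $(\tilde x,\tilde y)$ once the perturbation $\|A-B\|$ is small compared with the inverse norms controlled by the preceding lemma.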
Let $X_N = \lceil N^{2 (1 + 2\sigma)} \rceil$,
$Y_N = \lceil N^{1+2\sigma} \rceil$. We partition
${\mathbb T}^2$ into $X_N \cdot Y_N \lesssim N^{3(1+2\sigma)}$
boxes of side length $\frac{1}{X_N}$ and $\frac{1}{Y_N}$.
We call a box $I_{\ell}$ bad if
\begin{equation}
\Omega_N \cap I_{\ell} = \emptyset
\end{equation}
and good otherwise. We note that if $(x,y)$ is in a good
box, then for $|k|\leq\frac{N}{2}$
\begin{equation}
|G^{[-N,N]}_{x,y; \beta, \gamma}(z; k, \pm N)| \leq \frac{1}{N}
\end{equation}
for some $\beta, \gamma\in\partial{\mathbb D}$.
We now give an upper bound on the number of iterates
of $T^{j}(x,y)$ that land in any bad box.
We will show the following theorem in Appendix~\ref{sec:dynskew}.
For $\varepsilon, \delta >0$, denote by $B_{\varepsilon,\delta}
\subseteq{\mathbb T}^2$
the set
\begin{equation}
B_{\varepsilon,\delta} = \{(x,y)\in{\mathbb T}^2:\quad
\|x\|\leq \varepsilon,\ \|y\|\leq\delta\}.
\end{equation}
\begin{theorem}\label{thm:skewergodic2}
Assume \eqref{eq:conddiop} and let
$\delta > 0$, $\varepsilon > 0$, $N \geq 1$. There exists $L_0 = L_0(\sigma,\omega) \geq 1$
such that for any $x,y\in{\mathbb T}$ there exists $0 \leq \ell_0 \leq N$
such that for $L \geq L_0 \delta^{-4} \varepsilon^{-9}$
\begin{equation}
\#\{0 \leq \ell \leq \frac{L}{N}:\quad
T^{\ell_0 + \ell N}(x,y) \in B_{\varepsilon, \delta}\} \leq 10 \varepsilon \delta \frac{L}{N}.
\end{equation}
\end{theorem}
We now obtain that for $L \geq N^{15}$ and $N$ large enough, we have
for some $0 \leq \ell_0 \leq N-1$
\begin{equation}
\#\{\lfloor\frac{1}{3N } L+\ell_0\rfloor \leq \ell \leq \frac{2}{3N} L:
\quad T^{\ell N}(x,y)\text{ in fixed
bad box}\} \leq \frac{10 L/N}{N^{3 (1 + \sigma)}}.
\end{equation}
Since
\begin{equation}
\#\{\text{bad boxes}\} \leq \delta_N N^{3(1+\sigma)}
\end{equation}
with $\delta_N \to 0$ as $N \to \infty$, we obtain for $L \geq N^{15}$ that
\begin{equation}
\#\{\lfloor\frac{1}{3N } L+\ell_0\rfloor \leq \ell \leq \frac{2}{3N} L:
\quad T^{\ell N}(x,y)\text{ in some bad box}\} \leq \delta_N \frac{L}{N}
\end{equation}
for $\delta_N\to 0$ as $N \to \infty$.
\begin{proof}[Proof of Theorem~\ref{thm:greendecays}]
We just give the argument for $k_+$.
Choose $N$ so large that $\delta_N \leq \frac{1}{10 C}$.
Now divide $\left[\lfloor\frac{1}{3N } L+\ell_0\rfloor,
\frac{2}{3N} L\right]$ into segments of length $3 C$.
At most $\delta_N \frac{L}{N} \leq \frac{1}{10 C} \frac{L}{N}$ of them can
contain an iterate that lands in a bad box,
but there are roughly $\frac{1}{3C} \cdot \frac{L}{3 N} = \frac{1}{9 C} \frac{L}{N}$
of them. Hence, there must be at
least one segment on which our conclusion holds.
\end{proof}
\appendix
\section{Dynamics of the skew-shift}
\label{sec:dynskew}
In this section, we discuss quantitative recurrence
results for the skew-shift. The discussion here follows
Chapters~10 and~11 in \cite{kthesis}.
\begin{theorem}\label{thm:skewergodic}
Assume \eqref{eq:conddiop2} and let
$\sigma > 0$. Then for a constant $C = C(c,\tau,\sigma) >0$
we have for $L\geq 1$
\begin{equation}
\#\{1\leq\ell\leq L:\quad
T^{\ell}(x,y) \in B_{\varepsilon,\delta}\} \leq 5 \varepsilon \delta L
+ C \left(\frac{1}{\varepsilon}\right)^{1 + \sigmama}
L^{\frac{1}{2} + \sigmama}.
\end{equation}
\end{theorem}
\begin{proof}[Proof of Theorem~\ref{thm:skewergodic2}]
Let $\sigma = \frac{1}{4}$ in the previous theorem.
Then we have that
\[
\#\{1\leq\ell\leq L:\quad
T^{\ell}(x,y) \in B_{\varepsilon,\delta}\} \leq 10 \varepsilon \delta L
\]
if $ C \left(\frac{1}{\varepsilon}\right)^{\frac{5}{4}}
L^{\frac{3}{4}} \leq 5 \varepsilon \delta L$
or equivalently
\[
L^{\frac{1}{4}} \geq \frac{C}{5 \delta} \cdot \frac{1}{\varepsilon^{\frac{9}{4}}}.
\]
Next divide $1 \leq \ell \leq L$ into $N$ arithmetic
progressions of the form $\{\ell_0 + \ell N\}_{\ell=0}^{L/N}$
for $\ell_0 \in \{1,\dots, N\}$. Then at least
one of them must contain fewer than $\frac{10 \varepsilon \delta L}{N}$
of the elements counted above.
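Explicitly, ignoring the at most $N$ boundary indices, the pigeonhole step reads
\[
\min_{1 \leq \ell_0 \leq N} \#\Big\{0 \leq \ell \leq \tfrac{L}{N}:\
T^{\ell_0 + \ell N}(x,y) \in B_{\varepsilon,\delta}\Big\}
\leq \frac{1}{N}\, \#\big\{1 \leq \ell \leq L:\
T^{\ell}(x,y) \in B_{\varepsilon,\delta}\big\}
\leq \frac{10 \varepsilon \delta L}{N},
\]
since the $N$ progressions partition $\{1,\dots,L\}$ and the minimum is bounded by the average.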
\end{proof}
We now begin to prove Theorem~\ref{thm:skewergodic}.
\begin{lemma}
There exists a trigonometric polynomial $P$ given
by
\begin{equation}
P(x,y) = \sum_{|j| \leq \frac{2}{\varepsilon}}
\sum_{|k| \leq \frac{2}{\delta}} P_{j,k} e(jx+k y)
\end{equation}
such that $|P_{j,k}|\leq 5\varepsilon\delta$ and
\begin{equation}
\chi_{B_{\varepsilon,\delta}} \leq P,
\end{equation}
where $\chi_{A}$ denotes the characteristic function
of $A\subseteq{\mathbb T}^2$.
\end{lemma}
\begin{proof}
This follows by using Selberg polynomials;
see Chapter~2 in \cite{mont}.
\end{proof}
We compute that
\begin{align*}
\#\{1\leq\ell\leq L:\quad
&T^{\ell}(x,y) \in B_{\varepsilon,\delta}\} \leq \sum_{\ell=1}^{L}
P(T^{\ell}(x,y)) \\
& \leq 5 \varepsilon \delta L + 5 \varepsilon \delta
\sum_{\substack{|j| \leq \frac{2}{\varepsilon},\ |k| \leq \frac{2}{\delta} \\ (j,k) \neq (0,0)}}
\left|\sum_{\ell=1}^{L} e(j \cdot 2 \ell \omega + k x \ell
- k \omega \ell
+ k \omega \ell^2 )\right|.
\end{align*}
To finish the proof of Theorem~\ref{thm:skewergodic}
we will need the next two bounds.
\begin{lemma}
We have
\begin{equation}
\left|\sum_{\ell=1}^{L} e(\ell \cdot \omega)\right|
\leq \frac{1}{2 \|\omega\|}
\end{equation}
and for $\sigma > 0$, there exists $C = C(\sigma) > 0$
such that for any $t\in{\mathbb R}$
\begin{equation}
\left|\sum_{\ell=1}^{L} e(t \ell + \omega \ell^2)\right|
\leq \frac{C L^{\frac{1}{2} + \sigma}}{\|\omega\|}.
\end{equation}
\end{lemma}
\begin{proof}
See Chapter~3 in \cite{mont}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:skewergodic}]
From the previous lemma and the computation preceding
it, we obtain
\[
\#\{1\leq\ell\leq L:\quad
T^{\ell}(x,y) \in B_{\varepsilon,\delta}\} \leq 5 \varepsilon \delta L
+ C L^{\frac{1}{2} + \sigma}
\sup_{1 \leq k \leq \frac{2}{\varepsilon}} \frac{1}{\|k \omega\|}.
\]
The claim now follows by \eqref{eq:conddiop2}.
\end{proof}
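For orientation: if \eqref{eq:conddiop2} is a Diophantine condition of the usual form $\|k\omega\| \geq c\,|k|^{-\tau}$ (its precise form is fixed earlier in the paper and is an assumption here), then the supremum above is controlled by
\[
\sup_{1 \leq k \leq \frac{2}{\varepsilon}} \frac{1}{\|k \omega\|}
\leq \frac{1}{c}\left(\frac{2}{\varepsilon}\right)^{\tau},
\]
which is absorbed into the constant $C = C(c,\tau,\sigma)$ and the $\varepsilon$-dependence in the statement of Theorem~\ref{thm:skewergodic}.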
\begin{thebibliography}{xxx}
\bibitem{abd1} A. Avila, J. Bochi, D. Damanik, {\em Cantor spectrum for Schr\"odinger
operators with potentials arising from generalized skew-shifts}.
Duke Math. J. 146 (2009), 253--280.
\bibitem{abd2} A. Avila, J. Bochi, D. Damanik, {\em Opening Gaps in the Spectrum
of Strictly Ergodic Schr\"odinger Operators}.
J. Eur. Math. Soc. (JEMS), to appear.
\bibitem{aj1} A. Avila, S. Jitomirskaya, {\em Solving the Ten Martini Problem}.
Lecture Notes in Physics 690 (2006), 5--16.
\bibitem{aj2} A. Avila, S. Jitomirskaya, {\em The Ten Martini Problem}.
Ann. of Math. 170 (2009), 303--342.
\bibitem{bbook} J. Bourgain, \textit{Green's function estimates for lattice Schr\"odinger operators and applications},
Annals of Mathematics Studies, \textbf{158}. Princeton University Press, Princeton, NJ, 2005. x+173 pp.
\bibitem{b} J. Bourgain, \textit{Positive Lyapounov exponents for most energies},
Geometric aspects of functional analysis, 37--66, Lecture Notes in Math. \textbf{1745},
Springer, Berlin, 2000.
\bibitem{b2} J. Bourgain, \textit{On the spectrum of lattice Schr\"odinger operators with deterministic potential},
J. Anal. Math. \textbf{87} (2002), 37--75.
\bibitem{b3} J. Bourgain, \textit{On the spectrum of lattice Schr\"odinger operators with deterministic potential (II)},
J. Anal. Math. \textbf{88} (2002), 221--254.
\bibitem{b2002} J. Bourgain, \textit{Estimates on Green's functions, localization and the quantum kicked rotor model},
Ann. of Math. (2) \textbf{156-1} (2002), 249--294.
\bibitem{bgs} J. Bourgain, M. Goldstein, W. Schlag, \textit{Anderson localization for
Schr\"odinger operators on $\mathbb Z$
with potentials given by the skew-shift}, Comm. Math. Phys. \textbf{220-3} (2001), 583--621.
\bibitem{brinstuck} M. Brin, G. Stuck,
{\em Introduction to dynamical systems}.
Cambridge University Press, Cambridge, 2002. xii+240 pp.
\bibitem{cs} V. Chulaevsky, Y. Sinai, {\em Anderson Localization for the I-D Discrete Schr\"odinger
Operator with Two-Frequency Potential}.
Comm. Math. Phys. {\bf 125} (1989), 91--112.
\bibitem{dk} D. Damanik, H. Kr\"uger,
{\em Almost Periodic Szeg\H{o} Cocycles with Uniformly Positive Lyapunov Exponents}.
J. Approx. Theory {\bf 161:2} (2009), 813--818.
\bibitem{furman} A. Furman,
{\em On the multiplicative ergodic theorem for uniquely ergodic systems}.
Ann. Inst. H. Poincaré Probab. Statist. {\bf 33:6} (1997), 797--815.
\bibitem{gzweyl} F. Gesztesy, M. Zinchenko,
{\em Weyl-Titchmarsh theory for CMV operators associated with orthogonal polynomials on the unit circle}.
J. Approx. Th. {\bf 139} (2006), 172--213.
\bibitem{gzborg} F. Gesztesy, M. Zinchenko,
{\em A Borg-type theorem associated with orthogonal
polynomials on the unit circle}.
J. Lond. Math. Soc. (2) {\bf 74} (2006), 757--777.
\bibitem{gs4} M. Goldstein, W. Schlag, {\em On resonances and the formation of gaps in the
spectrum of quasi-periodic Schr\"odinger equations}.
Ann. of Math. (to appear).
\bibitem{gsfest} M. Goldstein, W. Schlag, {\em On the formation of gaps in the spectrum of
Schr\"odinger operators with quasi-periodic potentials}.
Spectral theory and mathematical physics: a Festschrift in honor of Barry Simon's 60th birthday,
539--563, Proc. Sympos. Pure Math., \textbf{76}, Part 2, Amer. Math. Soc., Providence, RI, 2007.
\bibitem{js} S. Jitomirskaya, B. Simon,
{\em Operators with singular continuous spectrum: III. Almost periodic Schr\"odinger operators}.
Commun. Math. Phys. {\bf 165} (1994), 201--205.
\bibitem{kisto09} R. Killip, M. Stoiciu,
{\em Eigenvalue Statistics for CMV Matrices: From Poisson to Clock via Random Matrix Ensembles}.
Duke Math. J. {\bf 146:3} (2009), 361--399.
\bibitem{krho1} H. Kr\"uger,
{\em A family of Schr\"odinger Operators whose spectrum is an interval}.
Comm. Math. Phys. {\bf 290:3} (2009), 935--939.
\bibitem{krho2} H. Kr\"uger,
{\em Probabilistic averages of Jacobi operators}.
Comm. Math. Phys. {\bf 295:3} (2010), 853--875.
\bibitem{kthesis} H. Kr\"uger,
{\em Positive Lyapunov Exponent for Ergodic Schr\"odinger Operators}.
PhD Thesis, Rice University, April 2010.
\bibitem{kskew} H. Kr\"uger,
{\em On the spectrum of skew-shift Schr\"odinger operators},
J. Funct. Anal. (to appear).
\bibitem{lsess} Y. Last, B. Simon,
{\em The essential spectrum of Schr\"odinger, Jacobi, and CMV operators}.
J. d'Analyse Math. {\bf 98} (2006), 183--220.
\bibitem{mont} H. L. Montgomery, {\em Ten lectures on the interface between analytic number theory and harmonic analysis},
CBMS Regional Conference Series in Mathematics, 84. Published for the Conference Board of the Mathematical Sciences, Washington, DC; by the American Mathematical Society, Providence, RI, 1994. xiv+220 pp.
\bibitem{raven} T. van Ravenstein,
{\em The three gap theorem (Steinhaus conjecture)}.
J. Austral. Math. Soc. Ser. A {\bf 45:3} (1988), 360--370.
\bibitem{si} B. Simon, \textit{Regularity and the Ces\'aro-Nevai class}, J. Approx. Theory \textbf{156} (2009), 142--153.
\bibitem{opuc1} B.\ Simon, \textit{Orthogonal Polynomials on the Unit Circle. Part~1. Classical
Theory}, Colloquium Publications, 54, American Mathematical
Society, Providence (2005)
\bibitem{opuc2} B.\ Simon, \textit{Orthogonal Polynomials on the Unit Circle. Part~2. Spectral Theory},
Colloquium Publications, 54, American Mathematical Society,
Providence (2005)
\bibitem{scmv5} B.\ Simon,
{\em CMV matrices: Five years after}.
J. Comput. Appl. Math. {\bf 208} (2007), 120--154.
\bibitem{s1foot} B.\ Simon,
{\em OPUC on one foot}.
Bull. Amer. Math. Soc. {\bf 42} (2005), 431--460.
\bibitem{szego} B.\ Simon, \textit{Szeg\H{o}'s Theorem
and its Descendants}. Princeton University Press, Princeton, NJ, 2011.
\bibitem{st06} M. Stoiciu,
{\em The statistical distribution of the zeros of random paraorthogonal polynomials on the unit circle}.
J. Approx. Theory {\bf 139} (2006), 29--64.
\end{thebibliography}
\end{document} |
\begin{document}
\newtheorem{theorem}[subsection]{Theorem}
\newtheorem{proposition}[subsection]{Proposition}
\newtheorem{lemma}[subsection]{Lemma}
\newtheorem{corollary}[subsection]{Corollary}
\newtheorem{conjecture}[subsection]{Conjecture}
\newtheorem{prop}[subsection]{Proposition}
\newtheorem{defin}[subsection]{Definition}
\numberwithin{equation}{section}
\newcommand{\half}{\tfrac{1}{2}}
\newcommand{\f}{f\times \chi}
\newcommand{\hhalf}{\tfrac{1}{2}}
\newcommand{\sumstar}{\sideset{}{^*}\sum}
\newcommand{\sumprime}{\sideset{}{'}\sum}
\newcommand{\sumprimeprime}{\sideset{}{''}\sum}
\newcommand{\sumflat}{\sideset{}{^\flat}\sum}
\newcommand{\V}{V\left(\frac{nm}{q^2}\right)}
\newcommand{\leg}[2]{\left(\frac{#1}{#2}\right)}
\newcommand{\Lam}{\Lambda_{[i]}}
\newcommand{\lam}{\lambda}
\newcommand{\bfrac}[2]{\left(\frac{#1}{#2}\right)}
\newcommand{\Res}{\mathrm{Res}}
\theoremstyle{plain}
\newtheorem{conj}{Conjecture}
\newtheorem{remark}[subsection]{Remark}
\makeatletter
\def\widebreve{\mathpalette\wide@breve}
\def\wide@breve#1#2{\sbox\z@{$#1#2$}
\mathop{\vbox{\m@th\ialign{##\crcr
\kern0.08em\brevefill#1{0.8\wd\z@}\crcr\noalign{\nointerlineskip}
$\hss#1#2\hss$\crcr}}}\limits}
\def\brevefill#1#2{$\m@th\sbox\tw@{$#1($}
\hss\resizebox{#2}{\wd\tw@}{\rotatebox[origin=c]{90}{\upshape(}}\hss$}
\makeatother
\title[First moment of central values of quadratic Dirichlet $L$-functions]{First moment of central values of quadratic Dirichlet $L$-functions}
\author[P. Gao]{Peng Gao}
\address{School of Mathematical Sciences, Beihang University, Beijing 100191, China}
\email{[email protected]}
\author[L. Zhao]{Liangyi Zhao}
\address{School of Mathematics and Statistics, University of New South Wales, Sydney NSW 2052, Australia}
\email{[email protected]}
\begin{abstract}
We evaluate the first moment of central values of the family of quadratic Dirichlet $L$-functions using the method of double Dirichlet series. Under the generalized Riemann hypothesis, we prove an asymptotic formula with an error term of size that is the fourth root of that of the primary main term.
\end{abstract}
\maketitle
\noindent {\bf Mathematics Subject Classification (2010)}: 11M06, 11M41 \newline
\noindent {\bf Keywords}: quadratic Dirichlet $L$-functions, first moment, double Dirichlet series
\section{Introduction}\label{sec 1}
Moments of central values of families of $L$-functions have been widely studied in the literature as they have many important applications. In this paper, we are interested in the first moment of central values of the family of quadratic Dirichlet $L$-functions. For this family, an asymptotic formula for the first moment was initially obtained by M. Jutila \cite{Jutila} with a main term of size $X \log X$ and an error term of size $O(X^{3/4+\varepsilon})$ for any $\varepsilon>0$. An error term of the same size was later given by A. I. Vinogradov and L. A. Takhtadzhyan in \cite{ViTa}. Using the method of double Dirichlet series, D. Goldfeld and J. Hoffstein \cite{DoHo} improved the error term to $O(X^{19/32 + \varepsilon})$. It is also implicit in \cite{DoHo} that one may obtain an error term of size $O(X^{1/2 + \varepsilon})$ for the smoothed first moment, a result achieved via a different approach by M. P. Young \cite{Young1}, who utilized a recursive argument. The optimal error term is conjectured to be $O(X^{1/4+\varepsilon})$ in \cite{DoHo}, and this is supported by a numerical study conducted by M. W. Alderson and M. O. Rubinstein in \cite{AR12}. \newline
The method of multiple Dirichlet series is a powerful tool to employ when studying moments of $L$-functions. The success of such a method relies
heavily on the analytic properties of these series. In \cite{DoHo}, Goldfeld and Hoffstein treated the variables of the double Dirichlet series under their consideration separately and applied the theory of Eisenstein series of metaplectic type to obtain analytic continuation of the series in one variable. It was later pointed out by A. Diaconu, D. Goldfeld and J. Hoffstein in \cite{DGH} that there are many advantages in viewing multiple Dirichlet series as functions of several complex variables. From this point of view, much progress has been made towards understanding analytic properties of various multiple Dirichlet series in \cite{DGH}, including a result on the third moment of central values of the family of quadratic Dirichlet $L$-functions. \newline
For any integer $m \equiv 0, 1 \pmod 4$, let $\chi^{(m)}=\leg {m}{\cdot}$ be the Kronecker symbol defined in \cite[p.~52]{iwakow}. As usual, $\zeta(s)$ is the Riemann zeta function. For any $L$-function, we write $L^{(c)}$ (resp. $L_{(c)}$) for the function given by the Euler product defining $L$ but omitting those primes dividing (resp. not dividing) $c$. We reserve the letter $p$ for a prime throughout the paper and we write $L_p$ for $L_{(p)}$ for simplicity. In \cite{Blomer11}, V. Blomer obtained meromorphic continuation to the whole complex plane for the double Dirichlet series given by
\begin{align*}
\zeta^{(2)}(2s+2w-1)\sum_{(d,2)=1}\frac{L(s, \chi^{(4d)}\psi)\psi'(d)}{d^w}.
\end{align*}
The primary goal of \cite{Blomer11} is to establish a subconvexity bound of the above double series. Its analytic properties actually also allow one to evaluate the smoothed first moment of $L(1/2, \chi^{(4d)})$ given by
\begin{align}
\label{ZetWithCharacters}
\sum_{(d,2)=1}L(\tfrac 12, \chi^{(4d)})w \bfrac {d}X,
\end{align}
where $w(t)$ is a non-negative Schwartz function. Under the generalized Riemann hypothesis (GRH) and arguing in a manner similar to \eqref{Integral for all characters} below, one is able to obtain an asymptotic formula for the expression above with an error term of the conjectured size $O(X^{1/4+\varepsilon})$. \newline
In \cite{Cech1}, M. \v Cech studied the $L$-functions ratios conjecture for the case of quadratic Dirichlet $L$-functions using multiple Dirichlet series. The advantage of \v Cech's method is that instead of pursuing meromorphic continuation to the entire complex plane for the multiple Dirichlet series involved as done in other works, one makes a crucial use of the functional equation of a general (not necessarily primitive) quadratic Dirichlet $L$-function \cite[Proposition 2.3]{Cech1} to extend the multiple Dirichlet series under consideration to a suitable large region for the purpose of the investigation. \newline
Motivated by the above work, we adapt its approach to evaluate asymptotically the first moment of central values of
a family of quadratic Dirichlet $L$-functions. To state our result, we write $\chi_n$ for the quadratic character $\left(\frac {\cdot}{n} \right)$ for an odd, positive integer $n$. By the quadratic reciprocity law, $L^{(2)}(s, \chi_n)=L(s, \chi^{(4n)})$ (resp. $L(s, \chi^{(-4n)})$) if $n \equiv 1 \pmod 4$ (resp. $n \equiv -1 \pmod 4$). Notice that one can factor every integer $m \equiv 0, 1 \pmod 4$ uniquely into $m=dl^2$ so that $d$ is a fundamental discriminant, i.e. $d$ is either square-free and $d \equiv 1 \pmod 4$ or $d=4n$ with $n \equiv 2,3 \pmod 4$ and square-free. It is known (see \cite[Theorem 9.13]{MVa1}) that every primitive quadratic Dirichlet character is of the form $\chi^{(d)}$ for some fundamental discriminant $d$. For such $d$, it follows from \cite[Theorem 4.15]{iwakow} that the function $L(s, \chi^{(d)})$ has an analytic continuation to the entirety of $\mathbb{C}$. Thus the same can be said of $L^{(2)}(s, \chi_n)$. \newline
We evaluate in this paper asymptotically the family of quadratic Dirichlet $L$-functions $L^{(2)}(s, \chi_n)$ averaged over all odd, positive $n$. Our main result is as follows.
\begin{theorem}
\label{Theorem for all characters}
Under the notation as above and the truth of GRH, suppose that $w(t)$ is a non-negative Schwartz function and $\widehat w(s)$ is its Mellin transform. For $1/2>\Re(\alpha)>0$ and any $\varepsilon>0$, we have
\begin{align}
\label{Asymptotic for ratios of all characters}
\begin{split}
\sum_{\substack{(n,2)=1}}L^{(2)}(\tfrac{1}{2}+\alpha, \chi_{n}) w \bfrac {n}X =& X\widehat w(1)\frac {\zeta(1+2\alpha)}{\zeta(2+2\alpha)}\frac {1-2^{-1-2\alpha}}{2(1-2^{-2-2\alpha})}+X^{1-\alpha}\widehat w(1-\alpha)\frac{\pi^{\alpha}\Gamma (1/2-\alpha)\Gamma (\frac {\alpha}2)}{\Gamma(\frac{1-\alpha}2)\Gamma (\alpha)}\cdot\frac{\zeta(1-2\alpha)}{\zeta(2)}\cdot \frac{2^{2\alpha}}{6} \\
& \hspace*{3cm} +O\left((1+|\alpha|)^{2+\varepsilon}X^{1/4+\varepsilon}\right).
\end{split}
\end{align}
\end{theorem}
Notice that the error term in \eqref{Asymptotic for ratios of all characters} is uniform in $\alpha$; we can therefore take the limit $\alpha \rightarrow 0^+$ to deduce the following asymptotic formula for the smoothed first moment of central values of the family of quadratic Dirichlet $L$-functions under consideration.
\begin{corollary}
\label{Thmfirstmomentatcentral}
With the notation as above and assuming the truth of GRH, we have, for any $\varepsilon>0$,
\begin{align}
\label{Asymptotic for first moment at central}
\begin{split}
& \sum_{\substack{(n,2)=1}}L^{(2)}(\tfrac{1}{2}, \chi_{n}) w \bfrac {n}X = XQ(\log X)+O\left( X^{1/4+\varepsilon}\right),
\end{split}
\end{align}
where $Q$ is a linear polynomial whose coefficients depend only on absolute constants and on $\widehat w(1)$ and $\widehat w'(1)$.
\end{corollary}
Note that our error term above is consistent with the conjectured size given in \cite{DoHo}. The explicit expression of $Q$ is omitted here as our main focus is the error term. The proof of Theorem \ref{Theorem for all characters} requires one to obtain meromorphic continuation of certain double Dirichlet series, which we get by making crucial use of the functional equation of a general quadratic Dirichlet $L$-function in \cite[Proposition 2.3]{Cech1} to convert the original double Dirichlet series to its dual series, which we carefully analyze using ideas of K. Soundararajan and M. P. Young in \cite{S&Y}. \newline
We remark here that our proof of Theorem \ref{Theorem for all characters} implies that \eqref{Asymptotic for first moment at central} holds with the error term $O\left( X^{1/2+\varepsilon}\right)$ unconditionally. It may also be applied to study the first moment of the family of quadratic Dirichlet $L$-functions given in \eqref{ZetWithCharacters}. \newline
\section{Preliminaries}
\label{sec 2}
\subsection{Gauss sums}
\label{sec2.4}
We write $\psi_j=\chi^{(4j)}$ for $j = \pm 1, \pm 2$ where we recall that $\chi^{(d)}=\leg {d}{\cdot} $ is the Kronecker symbol for integers $d \equiv 0, 1 \pmod 4$. Note that each $\psi_j$ is a primitive character modulo $4|j|$. Let $\psi_0$ stand for the primitive principal character. \newline
Given any Dirichlet character $\chi$ modulo $n$ and any integer $q$, the Gauss sum $\tau(\chi,q)$ is defined to be
\begin{equation*}
\tau(\chi,q)=\sum_{j \pmod n}\chi(j)e \left( \frac {jq}n \right), \quad \mbox{where} \quad e(z)= \exp(2 \pi i z).
\end{equation*}
For the evaluation of $\tau(\chi,q)$, we cite the following result from \cite[Lemma 2.2]{Cech1}.
\begin{lemma}
\label{Lemma changing Gauss sums}
\begin{enumerate}
\item If $l\equiv1 \pmod 4$, then
\begin{equation*}
\tau\left(\chi^{(4l)},q\right)=
\begin{cases}
0,&\hbox{if $(q,2)=1$,}\\
-2\tau\left(\chi_l,q\right),&\hbox{if $q\equiv2 \pmod 4$,}\\
2\tau\left( \chi_l ,q\right),&\hbox{if $q\equiv0 \pmod 4$.}
\end{cases}
\end{equation*}
\item If $l \equiv3 \pmod 4$, then
\begin{equation*}
\tau\left(\chi^{(4l)},q\right)=\begin{cases}
0,&\hbox{if $2|q$,}\\
-2i\tau\left(\chi_l,q\right),&\hbox{if $q\equiv1 \pmod 4$,}\\
2i\tau\left(\chi_l,q\right),&\hbox{if $q\equiv3 \pmod 4$.}
\end{cases}
\end{equation*}
\end{enumerate}
\end{lemma}
Recall that for an odd positive integer $n$, we define $\chi_n=\left(\frac {\cdot}{n} \right)$. We then define an associated Gauss sum $G\left(\chi_n,q\right)$ by
\begin{align*}
\begin{split}
G\left(\chi_n,q\right)&=\left(\frac{1-i}{2}+\leg{-1}{n}\frac{1+i}{2}\right)\tau\left(\chi_n,q\right)=\begin{cases}
\tau\left(\chi_n,q\right),&\hbox{if $n\equiv1 \pmod 4$,}\\
-i\tau\left(\chi_n,q\right),&\hbox{if $n\equiv3\pmod 4$}.
\end{cases}
\end{split}
\end{align*}
The advantage of $G\left(\chi_n,q\right)$ over $\tau\left(\chi_n,q\right)$ is that $G\left(\chi_n,q\right)$ is now a multiplicative function of $n$. In fact, upon denoting $\varphi(m)$ for the Euler totient function of $m$, we have the following result from \cite[Lemma 2.3]{sound1} that evaluates $G\left(\chi_n,q\right)$.
\begin{lemma}
\label{lem:Gauss}
If $(m,n)=1$ then $G(\chi_{mn},q)=G(\chi_m,q)G(\chi_n,q)$. Suppose that $p^a$ is
the largest power of $p$ dividing $q$ (put $a=\infty$ if $q=0$).
Then for $k \geq 0$ we have
\begin{equation*}
G\left(\chi_{p^k},q\right)=\begin{cases}\varphi(p^k),&\hbox{if $k\leq a$, $k$ even,}\\
0,&\hbox{if $k\leq a$, $k$ odd,}\\
-p^a,&\hbox{if $k=a+1$, $k$ even,}\\
\leg{qp^{-a}}{p}p^{a}\sqrt p,&\hbox{if $k=a+1$, $k$ odd,}\\
0,&\hbox{if $k\geq a+2$}.
\end{cases}
\end{equation*}
\end{lemma}
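As a quick consistency check (not taken from the source), specializing Lemma \ref{lem:Gauss} to $k=1$ and $p \nmid q$ (so that $a=0$) recovers the classical evaluation of the quadratic Gauss sum:
\[
G\left(\chi_{p},q\right)=\leg{q}{p}\sqrt p, \qquad p \nmid q,
\]
which agrees with $\tau(\chi_p,q)=\chi_p(q)\tau(\chi_p,1)$ combined with the classical values $\tau(\chi_p,1)=\sqrt p$ for $p \equiv 1 \pmod 4$ and $\tau(\chi_p,1)=i\sqrt p$ for $p \equiv 3 \pmod 4$, once the normalizing factor in the definition of $G$ is taken into account.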
\subsection{Functional equations for Dirichlet $L$-functions}
We quote the following functional equation from \cite[Proposition 2.3]{Cech1} concerning all Dirichlet characters $\chi$ modulo $n$, which plays a key role in our proof of Theorem \ref{Theorem for all characters}.
\begin{lemma}
\label{Functional equation with Gauss sums}
Let $\chi$ be any Dirichlet character modulo $n \neq \square$ such that $\chi(-1)=1$. Then we have
\begin{equation}
\label{Equation functional equation with Gauss sums}
L(s,\chi)=\frac{\pi^{s-1/2}}{n^s}\frac{\Gamma\bfrac{1-s}{2}}{\Gamma\bfrac {s}2} K(1-s,\chi), \quad \mbox{where} \quad K(s,\chi)=\sum_{q=1}^\infty\frac{\tau(\chi,q)}{q^s}.
\end{equation}
\end{lemma}
\subsection{Bounding $L$-functions}
For a fixed quadratic character $\psi$ modulo $n$, let $\widehat{\psi}$ be the primitive character that induces $\psi$ so that we have $\widehat\psi=\chi^{(d)}$ for some fundamental discriminant $d|n$ (see \cite[Theorem 9.13]{MVa1}). We gather in this section certain estimates on $L(s, \psi)$ that are necessary in the proof of Theorem \ref{Theorem for all characters}. Most of the estimates here are unconditional, except for the following one, which asserts that when $\Re(s) \geq 1/2+\varepsilon$ for any $\varepsilon>0$, we have by \cite[Theorem 5.19]{iwakow} that under GRH,
\begin{align}
\label{PgLest1}
\begin{split}
& \big| L( s, \widehat\psi )\big |^{-1} \ll |sn|^{\varepsilon}.
\end{split}
\end{align}
Write $n=n_1n_2$ uniquely such that $(n_1, d)=1$ and $p |n_2 \Rightarrow p|d$. With the above notation, for any integer $q$,
\begin{align}
\label{Ldecomp}
\begin{split}
L^{(q)}( s, \psi ) =L( s, \widehat{\psi}) \prod_{p|qn_1}\left( 1-\frac {\widehat\psi(p)}{p^s} \right).
\end{split}
\end{align}
Observe that
\begin{align*}
\Big |1-\frac {\widehat\psi(p)}{p^s}\Big | \leq 2p^{\max (0,-\Re(s))}.
\end{align*}
We then deduce that
\begin{align}
\label{Lnbound}
\begin{split}
\prod_{p|qn_1}\left( 1-\frac {\widehat\psi(p)}{p^s} \right) \ll 2^{\omega(qn_1)}(qn_1)^{\max (0,-\Re(s))} \ll (qn_1)^{\max (0,-\Re(s))+\varepsilon},
\end{split}
\end{align}
where $\omega(n)$ denotes the number of distinct prime factors of $n$ and the last estimation above follows from the well-known bound (see \cite[Theorem 2.10]{MVa1})
\begin{align*}
\omega(h) \ll \frac {\log h}{\log \log h}, \quad \mbox{for} \quad h \geq 3.
\end{align*}
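As a quick numerical illustration (not needed for the proofs), the growth of $\omega(h)$ against $\log h/\log\log h$ can be checked by a sieve; the range $h \le 10^5$ below is an ad hoc choice, and the extremal $h$ is, as expected, a primorial.

```python
import math

def omega_table(limit):
    """Count distinct prime factors omega(h) for all h <= limit via a sieve."""
    omega = [0] * (limit + 1)
    for p in range(2, limit + 1):
        if omega[p] == 0:  # p has no smaller prime factor, so p is prime
            for multiple in range(p, limit + 1, p):
                omega[multiple] += 1
    return omega

limit = 100000
omega = omega_table(limit)

# The ratio omega(h) * log log h / log h stays bounded; on [3, limit] its
# maximum is attained at the primorial 2*3*5*7*11*13 = 30030.
ratios = [(omega[h] * math.log(math.log(h)) / math.log(h), h)
          for h in range(3, limit + 1)]
best_ratio, best_h = max(ratios)
print(best_h, round(best_ratio, 3))
```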
When $d$ is a fundamental discriminant, we recall the convexity bound for $L(s, \chi^{(d)})$ (see \cite[Exercise 3, p. 100]{iwakow}) asserts that
\begin{align}
\label{Lconvexbound}
\begin{split}
L( s, \chi^{(d)} ) \ll
\begin{cases}
& \left (|d|(1+|s|) \right)^{(1-\Re(s))/2+\varepsilon}, \quad 0 \leq \Re(s) \leq 1, \\
& 1, \quad \Re(s)>1.
\end{cases}
\end{split}
\end{align}
To estimate $L( s, \chi^{(d)} )$ for $\Re(s) < 0$, we note the following functional equation (see \cite[p. 456]{sound1}).
\begin{align}
\label{fneqnquad}
\Lambda(s, \chi^{(d)}) := \Big( \frac {|d|} {\pi} \Big)^{s/2}\Gamma \Big( \frac {s}{2} \Big)L(s, \chi^{(d)})=\Lambda(1-s, \chi^{(d)}).
\end{align}
Moreover, Stirling's formula (\cite[(5.113)]{iwakow}) implies that, for constants $a_0$, $b_0$,
\begin{align}
\label{Stirlingratio}
\frac {\Gamma(a_0(1-s)+ b_0)}{\Gamma (a_0s+ b_0)} \ll (1+|s|)^{a_0(1-2\Re (s))}.
\end{align}
It follows from \eqref{fneqnquad} and \eqref{Stirlingratio} that when $\Re(s)<1/2$, we have
\begin{align}
\label{fneqnquad1}
L(s, \chi^{(d)}) \ll (|d|(1+|s|))^{(1-\Re(s))/2+\varepsilon}L(1-s, \chi^{(d)}).
\end{align}
Moreover, we conclude from \eqref{Lconvexbound}-\eqref{Stirlingratio} that
\begin{align}
\label{Lchidbound}
\begin{split}
L(s, \chi^{(d)}) \ll \begin{cases}
1 \qquad & \Re(s) >1,\\
(|d|(1+|s|))^{(1-\Re(s))/2+\varepsilon} \qquad & 0\leq \Re(s) <1,\\
(|d|(1+|s|))^{1/2-\Re(s)+\varepsilon} \qquad & \Re(s) < 0.
\end{cases}
\end{split}
\end{align}
From \eqref{Ldecomp}, \eqref{Lnbound} and \eqref{Lchidbound}, we deduce that for all complex numbers $s$,
\begin{align}
\label{Lchibound1}
\begin{split}
L^{(q)}( s, \psi ) \ll (qn_1)^{\max (0,-\Re(s))+\varepsilon}(n(1+|s|))^{\max \{1/2-\Re (s),(1-\Re(s))/2, 0 \} +\varepsilon}.
\end{split}
\end{align}
We conclude this section by including the following large sieve result for quadratic Dirichlet $L$-functions, which is a consequence of \cite[Theorem 2]{DRHB}.
\begin{lemma} \label{lem:2.3}
With the notation as above, let $S(X)$ denote the set of real, primitive characters $\chi$ with conductor not exceeding $X$. Then we have, for any complex number $s$ with $\Re(s) \geq 1/2$ and any $\varepsilon>0$,
\begin{align}
\label{L1estimation}
\sum_{\substack{\chi \in S(X)}} |L(s, \chi)|
\ll & X^{1+\varepsilon} |s|^{1/4+\varepsilon}.
\end{align}
\end{lemma}
\begin{proof}
From \cite[Theorem 2]{DRHB}, we get
\begin{align*}
\sum_{\substack{\chi \in S(X)}} |L(s, \chi)|^4
\ll & (X|s|)^{1+\varepsilon}.
\end{align*}
The lemma now follows from the above and H\"older's inequality.
\end{proof}
\subsection{Some results on multivariable complex functions}
We gather here some results from multivariable complex analysis. We begin with the notion of a tube domain.
\begin{defin}
An open set $T\subset\ensuremath{\mathbb C}^n$ is a tube if there is an open set $U\subset\ensuremath{\mathbb R}^n$ such that $T=\{z\in\ensuremath{\mathbb C}^n:\ \Re(z)\in U\}.$
\end{defin}
For a set $U\subset\ensuremath{\mathbb R}^n$, we define $T(U)=U+i\ensuremath{\mathbb R}^n\subset \ensuremath{\mathbb C}^n$. We quote the following tube theorem of Bochner \cite{Boc}.
\begin{theorem}
\label{Bochner}
Let $U\subset\ensuremath{\mathbb R}^n$ be a connected open set and $f(z)$ a function holomorphic on $T(U)$. Then $f(z)$ has a holomorphic continuation to the convex hull of $T(U)$.
\end{theorem}
We denote the convex hull of an open set $T\subset\ensuremath{\mathbb C}^n$ by $\widehat T$. Our next result is \cite[Proposition C.5]{Cech1} on the modulus of holomorphic continuations of multivariable complex functions.
\begin{prop}
\label{Extending inequalities}
Assume that $T\subset \ensuremath{\mathbb C}^n$ is a tube domain, $g,h:T\rightarrow \ensuremath{\mathbb C}$ are holomorphic functions, and let $\tilde g,\tilde h$ be their holomorphic continuations to $\widehat T$. If $|g(z)|\leq |h(z)|$ for all $z\in T$ and $h(z)$ is nonzero in $T$, then also $|\tilde g(z)|\leq |\tilde h(z)|$ for all $z\in \widehat T$.
\end{prop}
\section{Proof of Theorem \ref{Theorem for all characters}}
For $\Re(s)$, $\Re(w)$ sufficiently large, define
\begin{align}
\label{Aswzexp}
\begin{split}
A(s,w)=& \sum_{\substack{(n,2)=1 }}\frac{L^{(2)}(w, \chi_{n})}{n^s}
=\sum_{\substack{(nm,2)=1}}\frac{\chi_n(m)}{m^wn^s}= \sum_{\substack{(m,2)=1}}\frac{L( s,\chi^{(4m)} )}{m^w}.
\end{split}
\end{align}
We shall develop some analytic properties of $A(s,w)$, necessary in establishing Theorem \ref{Theorem for all characters}.
\subsection{First region of absolute convergence of $A(s,w)$}
Using the first equality in the definition of $A(s,w)$ in \eqref{Aswzexp} and arguing similarly to \eqref{Lnbound}, we obtain
\begin{align} \label{Abound}
\begin{split}
A(s,w)=& \sum_{\substack{(n,2)=1 }}\frac{L^{(2)}(w, \chi_{n})}{n^s}
= \sum_{\substack{(h,2)=1}}\frac {1}{h^{2s}}\sumstar_{\substack{(n,2)=1}}\frac{L^{(2)}(w, \chi_{n})\prod_{p | h}(1-\chi_{n}(p)p^{-w}) }{n^s} \\
\ll& \sum_{\substack{(h,2)=1}}\frac {h^{\max (0,-\Re(w))+\varepsilon} }{h^{2s}}\sumstar_{\substack{(n,2)=1}}\Big | \frac{L^{(2)}(w, \chi_n)}{n^s} \Big |,
\end{split}
\end{align}
where $\sum^*$ henceforth denotes the sum over square-free integers. \newline
Write $\widetilde \chi_n$ for the primitive Dirichlet character that induces $\chi^{(n)}$ (resp. $\chi^{(-n)}$) for $n \equiv 1 \pmod 4$ (resp. for $n \equiv -1 \pmod 4$). Recall that we have $L^{(2)}(w, \chi_n)=L(w, \chi^{(\pm 4n)})$ for $n \equiv \pm 1 \pmod 4$. For a square-free integer $n$, we see that $\widetilde \chi_n=\chi^{(n)}$ is a primitive character modulo $n$ for $n \equiv 1 \pmod 4$ (resp. $\widetilde \chi_n=\chi^{(4n)}$ is a primitive character modulo $4n$ for $n \equiv -1 \pmod 4$). In either case, for square-free integers $n$,
\begin{align*}
\begin{split}
\big|L^{(2)}(w, \chi_n)\big|=& \big|(1-\widetilde\chi_n(2)2^{-w})L(w, \widetilde\chi_n)\big| \ll \big|L(w, \widetilde\chi_n)\big |.
\end{split}
\end{align*}
It follows from \eqref{Abound} and the above estimation that
\begin{align}
\begin{split}
\label{Aboundinitial}
A(s,w)
\ll& \sum_{\substack{(h,2)=1}}\frac {h^{\max (0,-\Re(w))+\varepsilon} }{h^{2s}}\sumstar_{\substack{(n,2)=1}}\frac{|L(w, \widetilde\chi_n)|}{|n^{s}|}.
\end{split}
\end{align}
Now \eqref{L1estimation} and partial summation imply that, except for a simple pole at $w=1$, both sums in the expression on the right-hand side of \eqref{Aboundinitial} are convergent for $\Re(s)>1$, $\Re(w) \geq 1/2$ as well as for $\Re(2s)>1$, $\Re(2s+w)>1$, $\Re(s+w)>3/2$, $\Re(w) < 1/2$. Hence, $A(s,w)$ converges absolutely in the region
\begin{equation*}
S_0=\{(s,w): \Re(s)>1,\ \Re(2s+w)>1,\ \Re(s+w)>3/2\}.
\end{equation*}
As the condition $\Re(2s+w)>1$ is implied by the other two conditions, the description of $S_0$ simplifies to
\begin{equation*}
S_0=\{(s,w): \Re(s)>1, \ \Re(s+w)>3/2\}.
\end{equation*}
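Indeed, the discarded condition follows at once from the two retained ones:

```latex
\Re(2s+w) = \Re(s) + \Re(s+w) > 1 + \tfrac{3}{2} = \tfrac{5}{2} > 1.
```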
Next, upon writing $m=m_0m^2_1$ with $m_0$ odd and square-free, we recast the last expression of \eqref{Aswzexp} as
\begin{align}
\label{Sum A(s,w,z) over n}
A(s,w)=&\sum_{\substack{(m,2)=1}}\frac{L( s, \chi^{(4m)})}{m^w}=\sum_{\substack{(m_1,2)=1}}\frac{1}{m_1^{2w}}\sumstar_{\substack{(m_0,2)=1}}\frac{L( s, \chi^{(4m_0)})\prod_{p | m_1}(1-\chi^{(4m_0)}(p)p^{-s}) }{m_0^{w}}.
\end{align}
Note that $\chi^{(4m_0)}$ is a primitive character modulo $4m_0$ for $m_0 \equiv -1 \pmod 4$. Arguing as above by making use of \eqref{L1estimation} and partial summation again, we see that except for a simple pole at $s=1$ arising from the summands with $m=\square$, the sum over $m$ such that $m_0 \equiv -1 \pmod 4$ in \eqref{Sum A(s,w,z) over n} converges absolutely in the region
\begin{align*}
S_1=& \{(s,w):\hbox{$\Re(w)>1, \ \Re(s)\geq \tfrac 12$}\} \bigcup \{(s,w): 0 \leq \Re(s)< \tfrac 12, \ \Re(s+w)>\tfrac32 \} \bigcup \{(s,w): \ \Re(s)<0, \ \Re(2s+w)>\tfrac32 \}.
\end{align*}
Similarly, the sum over $m$ with $m_0 \equiv 1 \pmod 4$ in \eqref{Sum A(s,w,z) over n} also converges absolutely. In this case, $\chi^{(m_0)}$ is a primitive character modulo $m_0$. This allows us to deduce that the function $A(s,w)$ converges absolutely in the region $S_1$, except for a simple pole at $s=1$ arising from the summands with $m=\square$. \newline
Notice that the convex hull of $S_0$ and $S_1$ is
\begin{equation}
\label{Region of convergence of A(s,w,z)}
S_2=\{(s,w):\ \Re(s+w)>\tfrac32, \ \Re(2s+w)>\tfrac32 \}.
\end{equation}
Hence, Theorem \ref{Bochner} implies that $(s-1)(w-1)A(s,w)$ converges absolutely in the region $S_2$.
\subsection{Residue of $A(s,w)$ at $s=1$}
\label{sec:resA}
We see that $A(s,w)$ has a pole at $s=1$ arising from the terms with $m=\square$ from \eqref{Sum A(s,w,z) over n}. In order to compute the corresponding residue and for later use, we define the sum
\begin{align*}
\begin{split}
A_1(s,w) := \sum_{\substack{(m,2)=1 \\ m = \square}}\frac{L\left( s, \chi^{(4m)}\right)}{m^w}
= \sum_{\substack{(m,2)=1 \\ m = \square}}\frac{\zeta(s)\prod_{p | 2m}(1-p^{-s}) }{m^w} .
\end{split}
\end{align*}
For any $t \in \ensuremath{\mathbb C}$, let $a_t(n)$ be the multiplicative function such that $a_t(p^k)=1-1/p^t$ for any prime $p$. With this notation, we have
\begin{align*}
\begin{split}
A_1(s,w)
= \zeta^{(2)}(s)\sum_{\substack{(m,2)=1 \\ m = \square}}\frac{a_s(m) }{m^w} .
\end{split}
\end{align*}
Recasting the last sum above as an Euler product,
\begin{align}
\label{residuesgen}
\begin{split}
A_1(s,w)
=& \zeta^{(2)}(s)\prod_{p>2}\sum_{\substack{m \geq0\\m \text{ even}}}\frac{a_s(p^{m})}{p^{mw}}= \zeta^{(2)}(s)\prod_{p>2}\left(1+\left( 1-\frac 1{p^s} \right) \frac 1{p^{2w}}(1-p^{-2w})^{-1} \right) \\
=& \zeta^{(2)}(s)\zeta^{(2)}(2w)\prod_{p>2}\left(1-\frac 1{p^{s+2w}}\right) =: \ \zeta(s)\zeta(2w)P(s,w),
\end{split}
\end{align}
where
\begin{align}
\label{Pdef}
\begin{split}
P(s, w)=& \left( 1-\frac 1{2^s} \right) \left( 1-\frac 1{2^{2w}} \right) \prod_{p>2}\left(1-\frac 1{p^{s+2w}}\right).
\end{split}
\end{align}
It follows from \eqref{residuesgen} and \eqref{Pdef} that the function $P(s, w)$ is holomorphic, and $A_1(s,w)$ is holomorphic except for a simple pole at $s=1$, in the region
\begin{align}
\label{S3}
S_3=\Big\{(s,w):\ \Re(s+2w)>1 \Big\}.
\end{align}
As the residue of $\zeta(s)$ at $s = 1$ equals $1$, we deduce that
\begin{align}
\label{Residue at s=1}
\mathrm{Res}_{s=1}& A(s, \tfrac{1}{2}+\alpha) = \mathrm{Res}_{s=1} A_1(s, \tfrac{1}{2}+\alpha)
=\zeta(1+2\alpha)P(1,\tfrac{1}{2}+\alpha).
\end{align}
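As a sanity check (not part of the proof), the Euler-product identity in \eqref{residuesgen} can be verified numerically at a sample real point, say $s=2$, $w=3/2$; we compare the direct sum over odd squares, stripped of the factor $\zeta^{(2)}(s)$, with the corresponding product over odd primes. The truncation bounds below are ad hoc choices.

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for q in range(p * p, n + 1, p):
                is_prime[q] = False
    return [p for p in range(2, n + 1) if is_prime[p]]

def distinct_prime_factors(j, primes):
    out = []
    for p in primes:
        if p * p > j:
            break
        if j % p == 0:
            out.append(p)
            while j % p == 0:
                j //= p
    if j > 1:
        out.append(j)
    return out

s, w = 2.0, 1.5
primes = primes_up_to(20000)

# Left side: direct sum over odd squares m = j^2 of a_s(m)/m^w, where a_s is
# multiplicative with a_s(p^k) = 1 - p^(-s).
lhs = 0.0
for j in range(1, 4001, 2):
    a = 1.0
    for p in distinct_prime_factors(j, primes):
        a *= 1.0 - p ** (-s)
    lhs += a / j ** (2 * w)

# Right side: zeta^{(2)}(2w) * prod_{p>2} (1 - p^{-(s+2w)}), written as an
# Euler product over odd primes.
rhs = 1.0
for p in primes:
    if p > 2:
        rhs *= (1.0 - p ** (-(s + 2 * w))) / (1.0 - p ** (-2 * w))

print(abs(lhs - rhs))
```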
\subsection{Second region of absolute convergence of $A(s,w)$}
We infer from \eqref{Sum A(s,w,z) over n} that
\begin{align}
\begin{split}
\label{A1A2}
A(s,w) =& \sum_{\substack{(m,2)=1 \\ m = \square}}\frac{L( s, \chi^{(4m)})}{m^w} +\sum_{\substack{(m,2)=1 \\ m \neq \square}}\frac{L( s, \chi^{(4m)})}{m^w} \\
=& \sum_{\substack{(m,2)=1 \\ m = \square}}\frac{\zeta(s)\prod_{p | 2m}(1-p^{-s}) }{m^w} +\sum_{\substack{(m,2)=1 \\ m \neq \square}}\frac{L( s, \chi^{(4m)})}{m^w} =: \ A_1(s,w)+A_2(s,w).
\end{split}
\end{align}
Here we recall from our discussions in the previous section that $A_1(s,w)$ is holomorphic in the region $S_3$, except for a simple pole at $s=1$. \newline
Next, observe that $\chi^{(4m)}$ is a Dirichlet character modulo $4m$ for any $m\geq1$ such that $\chi^{(4m)}(-1)=1$. We thus apply the functional equation \eqref{Equation functional equation with Gauss sums} given in Lemma \ref{Functional equation with Gauss sums} for $L\left( s, \chi^{(4m)}\right)$ in the case $m \neq \square$, arriving at
\begin{align}
\begin{split}
\label{Functional equation in s}
A_2(s,w) =\frac{\pi^{s-1/2}}{4^s}\frac {\Gamma (\frac{1-s}2)}{\Gamma(\frac {s}2) } C(1-s,s+w),
\end{split}
\end{align}
where $C(s,w)$ is given by the double Dirichlet series
\begin{align*}
C(s,w)=& \sum_{\substack{q, m \\ (m,2)=1 \\ m \neq \square}}\frac{\tau(\chi^{(4m)}, q)}{q^sm^w}=\sum_{\substack{q, m \\ (m,2)=1}}\frac{\tau(\chi^{(4m)}, q)}{q^sm^w}-\sum_{\substack{q, m \\ (m,2)=1 \\ m = \square}}\frac{\tau(\chi^{(4m)}, q)}{q^sm^w}.
\end{align*}
Note that $C(s,w)$ is initially convergent for $\Re(s)$, $\Re(w)$ large enough by \eqref{Region of convergence of A(s,w,z)}, \eqref{S3} and the functional equation \eqref{Functional equation in s}. To extend this region, we recast $C(s,w)$ as
\begin{align}
\label{Cexp}
\begin{split}
C(s,w)=& \sum^{\infty}_{q =1}\frac{1}{q^s}\sum_{\substack{(m,2)=1}}\frac{\tau\left( \chi^{(4m)}, q \right) }{m^w}-\sum^{\infty}_{q =1}\frac{1}{q^s}\sum_{\substack{(m,2)=1 \\ m = \square}}\frac{\tau\left( \chi^{(4m)}, q \right) }{m^w} =: \ C_1(s,w)-C_2(s,w).
\end{split}
\end{align}
For two Dirichlet characters $\psi,\psi'$ whose conductors divide $8$, we define
\begin{align}
\begin{split}
\label{C12def}
C_1(s,w;\psi,\psi') := \sum_{l,q\geq 1}\frac{G\left( \chi_l,q\right)\psi(l)\psi'(q) }{l^wq^s} \quad \mbox{and} \quad C_2(s,w;\psi,\psi') := \sum_{l,q\geq 1}\frac{G \left( \chi_{l^2},q\right)\psi(l)\psi'(q) }{l^{2w}q^s}.
\end{split}
\end{align}
We follow the arguments contained in \cite[\S 6.4]{Cech1} and apply Lemma \ref{Lemma changing Gauss sums} to obtain that
\begin{align}
\begin{split}
\label{C(s,w,z) as twisted C(s,w,z)}
C_1(s,w)=&
-2^{-s}\big( C_1(s,w;\psi_2,\psi_1)+C_1(s,w;\psi_{-2},\psi_1)\big) +4^{-s}\big( C_1(s,w;\psi_1,\psi_0)+C_1(s,w;\psi_{-1},\psi_0)\big)\\
&\hspace*{2cm} +C_1(s,w;\psi_1,\psi_{-1})-C_1(s,w;\psi_{-1},\psi_{-1}), \\
C_2(s,w)=&-2^{1-s}C_2(s,w;\psi_1,\psi_1)+2^{1-2s}C_2(s,w;\psi_1,\psi_0).
\end{split}
\end{align}
We now follow the approach by K. Soundararajan and M. P. Young in \cite[\S 3.3]{S&Y} to write every integer $q \geq 1$ uniquely as $q=q_1q^2_2$ with $q_1$ square-free to derive that
\begin{equation}
\label{Cidef}
C_i(s,w;\psi,\psi')=\sumstar_{q_1}\frac{\psi'(q_1)}{q_1^s}\cdot D_i(s, w; q_1,\psi, \psi'), \quad i =1,2,
\end{equation}
where
\begin{align}
\label{Didef}
\begin{split}
D_1(s, w; q_1, \psi,\psi')=\sum_{l,q_2=1}^\infty\frac{G\left( \chi_{l},q_1q^2_2\right)\psi(l)\psi'(q^2_2) }{l^wq^{2s}_{2}} \quad \mbox{and} \quad D_2(s, w; q_1, \psi,\psi')=\sum_{l,q_2=1}^\infty\frac{G\left( \chi_{l^2},q_1q^2_2\right)\psi(l)\psi'(q^2_2) }{l^{2w}q^{2s}_{2}}.
\end{split}
\end{align}
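The parametrisation $q=q_1q^2_2$ with $q_1$ square-free used in \eqref{Cidef} and \eqref{Didef} can be computed by repeatedly stripping square factors; a minimal sketch (the function name is ours):

```python
def squarefree_decomposition(q):
    """Write q = q1 * q2**2 with q1 square-free, following the
    Soundararajan-Young parametrisation."""
    q1, q2 = q, 1
    d = 2
    while d * d <= q1:
        # remove every factor d^2 from q1 and absorb d into q2
        while q1 % (d * d) == 0:
            q1 //= d * d
            q2 *= d
        d += 1
    return q1, q2

# Example: 720 = 5 * 12^2, with 5 square-free.
print(squarefree_decomposition(720))
```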
The following result develops the required analytic properties of $D_i(s, w;q_1, \psi, \psi')$.
\begin{lemma}
\label{Estimate For D(w,t)}
With the notation as above and assuming the truth of GRH, for $\psi \neq \psi_0$, the functions $D_i(s, w; q_1, \psi,\psi')$, with $i=1$, $2$ have meromorphic continuations to the region
\begin{align}
\label{Dregion}
\{(s,w): \Re(s)>1, \ \Re(w)> \tfrac{3}{4} \}.
\end{align}
Moreover, the only pole in this region occurs when $q_1=1, \psi=\psi_1$ at $w = 3/2$ and the pole is simple. For $\Re(s) \geq 1+\varepsilon$, $\Re(w) \geq 3/4+\varepsilon$, away from the possible
poles, we have
\begin{align}
\label{Diest}
|D_i(s, w; q_1, \psi,\psi')|\ll (q_1(1+|w|))^{\max \{ (3/2-\Re(w))/2, 0 \}+\varepsilon}.
\end{align}
\end{lemma}
\begin{proof}
We focus on $D_1(s, w; q_1, \psi,\psi')$ here since the proof for the other case is similar. By Lemma \ref{lem:Gauss}, in the double sum in \eqref{Didef} defining $D_1(s,w; q_1, \psi,\psi')$, the summands there are jointly multiplicative functions of $l,q_2$. Moreover, we may assume that $l$ is odd in \eqref{Didef} since $\psi \neq \psi_0$. These observations enable us to recast $D_1(s,w; q_1, \psi,\psi')$ as an Euler product such that
\begin{align}
\label{D1Eulerprod}
\begin{split}
& D_1(s, w; q_1, \psi,\psi')= \prod_p D_{1,p}(s, w; q_1, \psi,\psi'),
\end{split}
\end{align}
where
\begin{align}
\label{Dexp}
\begin{split}
& D_{1,p}(s, w; q_1, \psi,\psi')= \displaystyle
\begin{cases}
\displaystyle \sum_{k=0}^\infty\frac{ \psi'(2^{2k})}{2^{2ks}}, \quad p=2, \\
\displaystyle \sum_{l,k=0}^\infty\frac{ \psi(p^l)\psi'(p^{2k})G\left( \chi_{p^l}, q_1p^{2k} \right) }{p^{lw+2ks}}, \quad p>2.
\end{cases}
\end{split}
\end{align}
Now, for any fixed $p > 2$,
\begin{align}
\label{Dgenest}
\begin{split}
& \sum_{l,k=0}^\infty\frac{ \psi(p^l)\psi'(p^{2k})G\left( \chi_{p^l}, q_1p^{2k} \right) }{p^{lw+2ks}} = \sum_{l=0}^\infty \frac{ \psi(p^l)G\left( \chi_{p^l}, q_1 \right) }{p^{lw}} + \sum_{l \geq 0, k \geq 1}\frac{ \psi(p^l)\psi'(p^{2k})G\left( \chi_{p^l}, q_1p^{2k} \right) }{p^{lw+2ks}}.
\end{split}
\end{align}
Remembering that $q_1$ is square-free, we deduce from Lemma \ref{lem:Gauss} that
\begin{align*}
\begin{split}
|G( \chi_{p^l}, q_1p^{2k} )| \ll p^l, \quad G(\chi_{p^l}, q_1p^{2k})=0 \quad \text{for } l \geq 2k+3.
\end{split}
\end{align*}
The above estimations allow us to see that when $\Re(s)>1$, $\Re(w)>3/4$,
\begin{align}
\label{Dk1est}
\begin{split}
\sum_{l \geq 0, k \geq 1}\frac{ \psi(p^l)\psi'(p^{2k})G\left( \chi_{p^l}, q_1p^{2k} \right) }{p^{lw+2ks}} =& \sum_{k \geq 1}\frac{ \psi'(p^{2k})G\left( \chi_{1}, q_1p^{2k} \right) }{p^{2ks}}+\sum_{l, k \geq 1}\frac{ \psi(p^l)\psi'(p^{2k})G\left( \chi_{p^l}, q_1p^{2k} \right) }{p^{lw+2ks}} \\
\ll & p^{-2\Re(s)}+ \Bigg| \sum^{\infty}_{k=1}\sum_{1 \leq l \leq 2k+2}\frac{1}{p^{l(w-1)+2ks}}\Bigg| \\
\ll & p^{-2\Re(s)}+\Bigg| \sum^{\infty}_{k=1}\frac{2k+2}{p^{2ks}}\Big (\frac 1{p^{w-1}}+ \frac 1{p^{(2k+2)(w-1)}}\Big ) \Bigg| \\
\ll & p^{-2\Re(s)}+p^{-2\Re(s)-\Re(w)+1}+p^{-2\Re(s)-4\Re(w)+4} .
\end{split}
\end{align}
Further applying Lemma \ref{lem:Gauss} yields that if $p \nmid 2q_1$ and $\Re(w)>3/4$,
\begin{align}
\label{Dgenl0gen}
\begin{split}
& \sum_{l=0}^\infty \frac{ \psi(p^l)G\left( \chi_{p^l}, q_1 \right) }{p^{lw}} = 1+\frac{ \psi(p)\chi^{(q_1)}(p)}{p^{w-1/2}}
= L_p \left( w-\tfrac{1}{2}, \chi^{(q_1)}\psi\right) \left( 1-\frac {1}{p^{2w-1}} \right) = \frac {L_{p}\left( w-\tfrac{1}{2}, \chi^{(q_1)}\psi\right)}{\zeta_{p}(2w-1)}.
\end{split}
\end{align}
We derive from \eqref{Dexp}--\eqref{Dgenl0gen} that for $p \nmid 2q_1$, $\Re(s)>1$, $\Re(w)>3/4$,
\begin{align}
\label{Dgenexp}
\begin{split}
& D_{1,p}(s, w; q_1, \psi,\psi') = \frac {L_{p}\left( w-\tfrac{1}{2}, \chi^{(q_1)}\psi\right)}{\zeta_{p}(2w-1)}\left( 1+O \Big ( p^{-2\Re(s)}+p^{-2\Re(s)-\Re(w)+1}+p^{-2\Re(s)-4\Re(w)+4} \Big ) \right) .
\end{split}
\end{align}
Now, we deduce the first assertion of the lemma from \eqref{D1Eulerprod}, \eqref{Dexp} and the above. We also see this way that the only pole in the region given in \eqref{Dregion} is a simple one at $w = 3/2$, and it occurs when $q_1=1$, $\psi=\psi_1$. \newline
We further note that Lemma \ref{lem:Gauss} implies that when $p | q_1, p \neq 2$,
\begin{align}
\label{Dgenl0}
\begin{split}
& \sum_{l=0}^\infty \frac{ \psi(p^l)G\left( \chi_{p^l}, q_1 \right) }{p^{lw}} = 1-\frac{ \psi(p^2)}{p^{2w-1}} = 1+O(p^{-2\Re(w)+1}).
\end{split}
\end{align}
It follows from \eqref{Dexp}, \eqref{Dk1est} and \eqref{Dgenl0} that for $p | q_1, p \neq 2$, $\Re(s)>1, \Re(w)>3/4$,
\begin{align}
\label{Dgenexp1}
\begin{split}
& D_{1,p}(s, w;q_1, \psi,\psi')
= 1+O \Big (p^{-2\Re(w)+1}+p^{-2\Re(s)}+p^{-2\Re(s)-\Re(w)+1}+p^{-2\Re(s)-4\Re(w)+4}\Big ) .
\end{split}
\end{align}
We conclude from \eqref{D1Eulerprod}, \eqref{Dexp}, \eqref{Dgenexp} and \eqref{Dgenexp1} that for $\Re(s)\geq 1+\varepsilon$ and $\Re(w) \geq 3/4+\varepsilon$,
\begin{align*}
\begin{split}
D_{1}(s, w; q_1, \psi,\psi') \ll & |q_1|^{\varepsilon}\Big |\frac {L^{(2q_1)}\left( w-\tfrac{1}{2}, \chi^{(q_1)}\psi\right)}{\zeta^{(2q_1)}(2w-1)} \Big | \ll |q_1|^{\varepsilon}\Big |\frac {L\left( w-\tfrac{1}{2}, \chi^{(q_1)}\psi\right)}{\zeta(2w-1)} \Big | \ll (q_1(1+|w|))^{\max \{ (3/2-\Re(w))/2, 0 \}+\varepsilon},
\end{split}
\end{align*}
where the last bound follows from \eqref{PgLest1} (by taking $\widehat \psi=\psi_0$ to be the
primitive principal character) and \eqref{Lchibound1}. This leads to the estimate in \eqref{Diest} and completes the proof of the lemma.
\end{proof}
Now applying Lemma \ref{Estimate For D(w,t)} with \eqref{C(s,w,z) as twisted C(s,w,z)} and \eqref{Cidef}, we see that $(w-3/2)C(s,w)$ is defined in the region
\begin{equation*}
\{(s,w):\ \Re(s)>1, \ \Re(w)>3/4, \ \Re(s+w/2)>7/4 \}.
\end{equation*}
The above together with \eqref{S3} and \eqref{A1A2} now implies that $(s-1)(w-1)(s+w-3/2)A(s,w)$ can be extended to the region
\begin{align*}
S_4=& \{(s,w): \ \Re(s+2w)>1, \ \Re(s+w)>3/4,\ \Re(w-s)>3/2, \ \Re(s)<0\}.
\end{align*}
As the condition $\Re(s+2w)>1$ is implied by the others, we obtain
\begin{equation*}
S_4=\{(s,w):\ \Re(s+w)>3/4,\ \Re(w-s)>3/2, \ \Re(s)<0\}.
\end{equation*}
It is then easily observed that the convex hull of $S_2$ and $S_4$ equals
\begin{equation*}
S_5=\{(s,w):\ \Re(s+w)>3/4 \}.
\end{equation*}
Now Theorem \ref{Bochner} renders that $(s-1)(w-1)(s+w-3/2)A(s,w)$ converges absolutely in the region $S_5$.
\subsection{Residue of $A(s,w)$ at $s=3/2-w$}
\label{sec:resAw}
We keep the notation from the proof of Lemma \ref{Estimate For D(w,t)}. We deduce from \eqref{Cexp}, \eqref{C(s,w,z) as twisted C(s,w,z)}, \eqref{Cidef} and Lemma \ref{Estimate For D(w,t)} that $C(s, w)$ has a pole at $w=3/2$ and
\begin{align} \label{Cres}
\mathrm{Res}_{w=3/2}C(s,w)=& 4^{-s}\mathrm{Res}_{w=3/2}D_1(s,w;1, \psi_1, \psi_0)+\mathrm{Res}_{w=3/2}D_1(s,w;1, \psi_1, \psi_{-1}).
\end{align}
We apply Lemma \ref{lem:Gauss} to see that for $p \neq 2$,
\begin{align}
\label{Dk1estpsi1}
\begin{split}
\sum_{l \geq 0, k \geq 1}\frac{ \psi'(p^{2k})G\left( \chi_{p^l}, p^{2k} \right) }{p^{3l/2+2ks}} =& \sum_{k \geq 1}\frac{ G\left( \chi_{1}, p^{2k} \right) }{p^{2ks}}+\sum_{l, k \geq 1}\frac{ G\left( \chi_{p^l}, p^{2k} \right) }{p^{3l/2+2ks}} \\
= & p^{-2s}(1-p^{-2s})^{-1}+\sum_{k \geq 1}\frac 1{p^{2ks}}\Big (\sum^k_{l=1}\frac{ \varphi(p^{2l}) }{p^{3l}}+\frac {p^{2k}\sqrt{p}}{p^{3(2k+1)/2}} \Big ) \\
= & p^{-2s}(1-p^{-2s})^{-1}+\frac 1p\sum_{k \geq 1}\frac 1{p^{2ks}} = \Big(1+\frac 1p \Big)p^{-2s}(1-p^{-2s})^{-1}.
\end{split}
\end{align}
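The middle step above can be expanded as follows, using $\varphi(p^{2l})=p^{2l}(1-1/p)$ and summing the geometric series:

```latex
\sum^k_{l=1}\frac{\varphi(p^{2l})}{p^{3l}}
  = \Big(1-\frac{1}{p}\Big)\sum^k_{l=1}\frac{1}{p^{l}}
  = \frac{1}{p}\big(1-p^{-k}\big),
\qquad
\frac{p^{2k}\sqrt{p}}{p^{3(2k+1)/2}} = p^{-k-1},
```

so the bracketed factor equals $\frac 1p(1-p^{-k})+p^{-k-1}=\frac 1p$, independently of $k$.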
We derive from \eqref{Dexp}, \eqref{Dgenest}, \eqref{Dgenl0gen} and \eqref{Dk1estpsi1} that for $p \neq 2$,
\begin{align}
\label{D1pexp}
\begin{split}
D_{1,p}(s, w; 1, \psi_1,\psi_0)= D_{1,p}(s, w; 1, \psi_1,\psi_{-1})=& 1+\frac{1}{p^{w-1/2}}+\Big( 1+\frac 1p \Big)p^{-2s}(1-p^{-2s})^{-1}
= \zeta_p(w-1/2)Q_p(s,w),
\end{split}
\end{align}
where
\begin{align}
\label{Qpexp}
\begin{split}
Q_{p}(s, w)\Big |_{w=3/2} =& \Big(1-\frac {1}{p^{2}} \Big)(1-p^{-2s})^{-1}.
\end{split}
\end{align}
It follows from \eqref{D1Eulerprod}, \eqref{Dexp}, \eqref{D1pexp} and \eqref{Qpexp} that we have
\begin{align}
\label{D1exp}
\begin{split}
D_{1}(s, w; 1, \psi_1,\psi_0) =\zeta(w-1/2)Q(s,w), \quad D_{1}(s, w; 1, \psi_1,\psi_{-1})=(1-2^{-2s})\zeta(w-1/2)Q(s,w),
\end{split}
\end{align}
with
\begin{align}
\label{Qexp}
\begin{split}
Q(s,w)\Big |_{w=3/2}=\frac {2\zeta(2s)}{3\zeta(2)}.
\end{split}
\end{align}
As the residue of $\zeta(s)$ at $s=1$ equals $1$, we deduce from \eqref{Cres}, \eqref{D1exp} and \eqref{Qexp} that
\begin{align*}
\mathrm{Res}_{w=3/2}C(s,w)=& \frac {2}{3}\frac {\zeta(2s)}{\zeta(2)}.
\end{align*}
Now \eqref{A1A2}, the functional equation \eqref{Functional equation in s} and the above lead to
\begin{align*}
\begin{split}
\mathrm{Res}_{s=3/2-w}A(s,w) =\mathrm{Res}_{s=3/2-w}A_2(s,w)=\frac{2 \cdot \pi^{1-w}}{3 \cdot 4^{3/2-w}}\frac {\Gamma (\frac{w-1/2}2)}{\Gamma(\frac {3/2-w}2) } \frac {\zeta(2w-1)}{\zeta(2)}.
\end{split}
\end{align*}
Setting $w=1/2+\alpha$ in the above gives
\begin{align}
\label{Aress1alpha}
\begin{split}
&\mathrm{Res}_{s=1-\alpha}A(s, \tfrac{1}{2}+\alpha) =\frac{2^{2\alpha-1}\pi^{1/2-\alpha}\Gamma (\frac {\alpha}2)}{3\Gamma (\frac {1- \alpha}2)}\frac {\zeta(2\alpha)}{\zeta(2)}.
\end{split}
\end{align}
Note that the functional equation \eqref{fneqnquad} for $d=1$ implies that
\begin{align*}
\zeta(2\alpha)=\pi^{2\alpha-1/2}\frac {\Gamma(\tfrac{1}{2}-\alpha)}{\Gamma (\alpha)}\zeta(1-2\alpha).
\end{align*}
The above allows us to recast the expression in \eqref{Aress1alpha} as
\begin{align}
\label{Aress1}
\begin{split}
&\mathrm{Res}_{s=1-\alpha}A(s,\tfrac{1}{2}+\alpha) =\frac{\pi^{\alpha}\Gamma (\tfrac{1}{2}-\alpha)\Gamma (\frac {\alpha}2)}{\Gamma(\frac{1-\alpha}2)\Gamma (\alpha)}\cdot\frac{\zeta(1-2\alpha)}{\zeta(2)}\cdot \frac{2^{2\alpha}}{6}.
\end{split}
\end{align}
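The bookkeeping in passing from \eqref{Aress1alpha} to \eqref{Aress1} can be double-checked numerically: after substituting the functional equation for $\zeta(2\alpha)$, the factor $\zeta(1-2\alpha)/\zeta(2)$ is common to both expressions, so it suffices to compare the remaining gamma/power factors at sample real $\alpha \in (0,1/2)$. The helper names below are ours.

```python
import math

def residue_factor_before(a):
    # Factor from \eqref{Aress1alpha} after substituting
    # zeta(2a) = pi^(2a-1/2) * Gamma(1/2-a)/Gamma(a) * zeta(1-2a),
    # with the common zeta(1-2a)/zeta(2) part omitted.
    return (2 ** (2 * a - 1) * math.pi ** (0.5 - a) * math.gamma(a / 2)
            / (3 * math.gamma((1 - a) / 2))
            * math.pi ** (2 * a - 0.5) * math.gamma(0.5 - a) / math.gamma(a))

def residue_factor_after(a):
    # Corresponding factor as rearranged in \eqref{Aress1}.
    return (math.pi ** a * math.gamma(0.5 - a) * math.gamma(a / 2)
            / (math.gamma((1 - a) / 2) * math.gamma(a))
            * 2 ** (2 * a) / 6)

vals = [(residue_factor_before(a), residue_factor_after(a))
        for a in (0.1, 0.23, 0.4)]
print(vals)
```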
\subsection{Bounding $A(s,w)$ in vertical strips}
\label{Section bound in vertical strips}
We shall estimate $|A(s,w)|$ in vertical strips, which is necessary in the proof of Theorem \ref{Theorem for all characters}. \newline
For previously defined regions $S_j$, we set
\begin{equation*}
\widetilde S_j=S_{j,\delta}\cap\{(s,w):\Re(s) \geq -5/2,\ \Re(w) \geq 1/2-\delta\},
\end{equation*}
where $\delta$ is a fixed number with $0<\delta <1/1000$ and where $S_{j,\delta}= \{ (s,w)+\delta (1,1) : (s,w) \in S_j \} $. We further set
\begin{equation*}
p(s,w)=(s-1)(w-1)(s+w-3/2), \quad \tilde p(s,w)=1+|p(s,w)|.
\end{equation*}
Observe that $p(s,w)A(s,w)$ is analytic in the regions under our consideration. \newline
We apply \eqref{L1estimation} and partial summation to bound the expression for $A(s,w)$ given in \eqref{Aboundinitial} when $\Re(w) \geq 1/2$. We further apply \eqref{fneqnquad1} to convert the case $\Re(w) <1/2$ back to the case $\Re(w) >1/2$. This gives that in the region $\widetilde S_0$,
\begin{align*}
\begin{split}
|p(s,w)A(s,w)| \ll \tilde p(s,w)(1+|w|)^{\max \{1/2-\Re(w), 0 \}+\varepsilon}.
\end{split}
\end{align*}
Similarly, we bound the expression for $A(s,w)$ given in \eqref{Sum A(s,w,z) over n} to see that in the region $\widetilde S_1$,
\begin{align*}
|p(s,w)A(s,w)|\ll \tilde p(s,w)(1+|s|)^{\max \{1/2-\Re(s), 0\}+\varepsilon}.
\end{align*}
From the above estimations, we apply Proposition \ref{Extending inequalities} to deduce that in the convex hull $\widetilde S_2$ of $\widetilde S_0$ and $\widetilde S_1$,
\begin{equation}
\label{AboundS2}
|p(s,w)A(s,w)|\ll \tilde p(s,w) (1+|w|)^{\max \{1/2-\Re(w),0 \}+\varepsilon}(1+|s|)^{3+\varepsilon}.
\end{equation}
Moreover, using the estimations given in \eqref{Lchibound1} for $\zeta(s), \zeta(2w)$ (corresponding to the case with $\psi=\psi_0$ being the primitive principal character) to bound $A_1(s,w)$ given in \eqref{residuesgen}, we see that in the region $\widetilde S_3$,
\begin{align}
\label{A1bound}
|A_1(s,w)| \ll (1+|w|)^{\max \{1/2-\Re(2w), (1-\Re(2w))/2, 0\}+\varepsilon}(1+|s|)^{\max \{1/2-\Re(s), 0\}+\varepsilon}.
\end{align}
Also, we deduce from \eqref{Cexp}--\eqref{Didef} and Lemma \ref{Estimate For D(w,t)} that, under GRH,
\begin{equation}
\label{Csbound}
|C(s,w)|\ll (1+|w|)^{\max \{ (3/2-\Re(w))/2, 0 \}+\varepsilon}
\end{equation}
in the region
\begin{equation*}
\{(s,w):\Re(s) \geq 1+\varepsilon, \ \Re(w) \geq 3/4+\varepsilon\}.
\end{equation*}
Now applying \eqref{A1A2}, the functional equation \eqref{Functional equation in s} together with \eqref{Stirlingratio} to bound the ratio of the gamma functions appearing there, as well as the estimations given in \eqref{A1bound} and \eqref{Csbound}, we obtain that in the region $\widetilde S_4$, under GRH,
\begin{align}
\label{AboundS3}
\begin{split}
|p(s,w)A(s,w)|\ll& \tilde p(s,w)(1+|s+w|)^{\max \{(3/2-\Re(s+w))/2, 0\} +\varepsilon}(1+|s|)^{3+\varepsilon} \\
\ll & \tilde p(s,w) (1+|w|)^{2+\varepsilon}(1+|s|)^{5+\varepsilon}.
\end{split}
\end{align}
Finally, we conclude from \eqref{AboundS2}, \eqref{AboundS3} and Proposition \ref{Extending inequalities} that in the convex hull $\widetilde S_5$ of $\widetilde S_2$ and $\widetilde S_4$, under GRH,
\begin{equation}
\label{AboundS4}
|p(s,w)A(s,w)|\ll \tilde p(s,w)(1+|w|)^{2+\varepsilon}(1+|s|)^{5+\varepsilon}.
\end{equation}
\subsection{Completing the proof}
By Mellin inversion, we see that for the function $A(s, w)$ defined in \eqref{Aswzexp},
\begin{equation}
\label{Integral for all characters}
\sum_{\substack{(n,2)=1}}L^{(2)}(\tfrac{1}{2}+\alpha, \chi_{n})w \bfrac {n}X=\frac1{2\pi i}\int\limits_{(2)}A\left( s,\tfrac12+\alpha\right) X^s\widehat w(s) \mathrm{d} s,
\end{equation}
where $\widehat{w}$ is the Mellin transform of $w$ given by
\begin{align*}
\widehat{w}(s) =\int\limits^{\infty}_0w(t)t^s\frac {\mathrm{d} t}{t}.
\end{align*}
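As an illustration of this definition only (the test function $w$ in the theorem is smooth and compactly supported, whereas we take $w(t)=e^{-t}$ here because its Mellin transform is the familiar $\Gamma(s)$), the transform can be evaluated numerically:

```python
import math

def mellin_transform(f, s, t_max=60.0, n=200000):
    """Midpoint-rule approximation to int_0^infty f(t) t^(s-1) dt,
    truncated at t_max (adequate for rapidly decaying f)."""
    h = t_max / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += f(t) * t ** (s - 1.0)
    return total * h

# For f(t) = e^{-t} the Mellin transform is Gamma(s); at s = 3 it equals 2.
approx = mellin_transform(lambda t: math.exp(-t), 3.0)
print(approx)
```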
Repeated integration by parts shows that for any integer $E \geq 0$,
\begin{align}
\label{whatbound}
\widehat w(s) \ll \frac{1}{(1+|s|)^{E}}.
\end{align}
We shift the line of integration in \eqref{Integral for all characters} to $\Re(s)=1/4+\varepsilon$. The integral on the new line can be absorbed into the $O$-term in \eqref{Asymptotic for ratios of all characters} upon using \eqref{AboundS4} and \eqref{whatbound}. In this process we also encounter two simple poles, at $s=1$ and $s=1-\alpha$, with the corresponding residues given in \eqref{Residue at s=1} and \eqref{Aress1}, respectively. Direct computations now lead to the main terms given in \eqref{Asymptotic for ratios of all characters}. This completes the proof of Theorem \ref{Theorem for all characters}.
\vspace*{.5cm}
\noindent{\bf Acknowledgments.} P. G. is supported in part by NSFC grant 11871082 and L. Z. by the FRG Grant PS43707 at the University of New South Wales.
\end{document} |
\begin{document}
\preprint{}
\title{Bell inequality of frequency-bin entangled photon pairs with time-resolved detection}
\author{Xianxin Guo}
\affiliation{Department of Physics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China}
\author{Yefeng Mei}
\affiliation{Department of Physics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China}
\author{Shengwang Du}\email{Corresponding author: [email protected]}
\affiliation{Department of Physics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China}
\date{\today}
\begin{abstract}
{Violation of a Bell inequality, which rules out local hidden variable theories, is commonly used as a strong witness for quantum entanglement. In previous Bell test experiments with photonic entanglement based on two-photon coincidence measurement, the photon temporal wave packets are absorbed completely by the detectors; that is, the photon coherence time is much shorter than the detection time window. Here we demonstrate the generation of frequency-bin entangled narrowband biphotons and, for the first time, test the Clauser-Horne-Shimony-Holt (CHSH) Bell inequality $|S|\leq 2$ for their nonlocal temporal correlations with time-resolved detection. We obtain a maximum $|S|$ value of $2.52\pm0.48$, which violates the CHSH inequality. Our result will have applications in quantum information processing involving time-frequency entanglement.}
\end{abstract}
\pacs{03.65.Ud, 03.67.Mn, 42.50.Dv}
\maketitle
\textit{Introduction.---}As one of the most important features of quantum mechanics, entanglement is essential in quantum information processing, quantum computation, and quantum communication \cite{Entanglement}. Photonic entanglement has been realized in diverse degrees of freedom, including polarization \cite{OuPRL1988,ShihPRL1988,KwiatPRL1995}, position-momentum \cite{HornePRL1989,RarityPRL1990}, orbital angular momentum \cite{MairNature2001}, and time-frequency \cite{FransonPRL1989, StefanovPRA2013, OlislagerPRA2010, XieNature2015, RamelowPRL2009, GisinNP2007}. The nonlocal correlations between distant entangled photons provide a standard platform for Bell tests \cite{Bell1964, BellBook, CHSH1969}, and confirm that quantum mechanics is incompatible with local hidden variable theories, which are the essence of the Einstein-Podolsky-Rosen (EPR) paradox \cite{EPR}. Recently, a loophole-free test of Bell's theorem has been demonstrated \cite{LoopholeFreeTest}. On the other hand, violation of a Bell inequality is often used as an entanglement witness.
In most previous experimental tests of Bell's theorem with entangled photons, the photon wave packets are absorbed completely by the coincidence detectors. That is, the photon coherence time is much shorter than the detection time window. In these works, the photon coincidence detection is modelled as an integral over the entire wave packets. Surprisingly, although time-frequency entanglement has been intensively studied \cite{FransonPRL1989, StefanovPRA2013, OlislagerPRA2010, XieNature2015, RamelowPRL2009, GisinNP2007}, the nonlocal correlation between the arrival times of frequency-bin entangled photons on the detectors (or the biphoton temporal correlation) has never been used for testing Bell's theorem.
Recent development of narrowband biphoton generation makes it possible to reveal the rich temporal quantum state information directly with time-resolved single-photon counters. Producing biphotons with a bandwidth narrower than 50 MHz has been demonstrated with spontaneous parametric down-conversion inside a cavity \cite{FeketePRL2013, PanPRL2008}, and with spontaneous four-wave mixing (SFWM) in a hot atomic vapor cell \cite{Du2016} or laser-cooled atoms \cite{DuPRL2008, ZhaoOptica2014, ChenSR2015, KurtsieferPRL2013, YanPRL2014, KimPRL2014}. Although time-frequency entanglement is naturally endowed by energy conservation in generating these narrowband biphotons \cite{TQST2015}, its violation of a Bell inequality has not been directly tested.
In this Letter, we demonstrate generation of frequency-bin entangled narrowband biphotons using SFWM in cold atoms for testing Bell's theorem. For the first time, we test the Clauser-Horne-Shimony-Holt (CHSH) Bell inequality $|S|\leq 2$ for their nonlocal temporal correlations with time-resolved detection. We obtain a maximum $|S|$ value of $2.52\pm0.48$ that violates the CHSH inequality. Our result also reveals the connection between the visibility of the two-photon quantum temporal beating resulting from the frequency entanglement and the violation of the Bell inequality.
\begin{figure*}
\caption{\label{fig:schematic}}
\end{figure*}
\textit{Generation of frequency-bin entangled narrowband biphotons.---} Our experimental configuration of double-path SFWM and the relevant atomic energy level diagram are illustrated in Figs. \ref{fig:schematic}(a) and \ref{fig:schematic}(b). We work with laser-cooled $^{85}$Rb atoms trapped in a two-dimensional (2D) magneto-optical trap (MOT) \cite{2DMOT}. In the presence of counter-propagating pump ($\omega_\mathrm{p}$) and coupling ($\omega_\mathrm{c}$) laser beams along the longitudinal $z$ axis of the 2D MOT, correlated Stokes ($\omega_\mathrm{s}$) and anti-Stokes ($\omega_\mathrm{as}$) photons are spontaneously generated in opposite directions and collected into two spatially symmetric single-mode paths (paths 1 and 2). We send the Stokes photons in path 1 through a frequency-phase shifter FPS($\delta=2\pi \times 10$ MHz, $\Delta\phi_\mathrm{s}$), so that the frequency of these Stokes photons becomes $\omega_\mathrm{s}+\delta$ and they acquire a phase $\Delta\phi_\mathrm{s}$ relative to those in path 2. Symmetrically, we add a second FPS($\delta, \Delta\phi_\mathrm{as}$) to the anti-Stokes photons in path 2. We then combine the two paths with two beam splitters (BS$_1$ and BS$_2$, 50\% : 50\%). The single-mode outputs of the beam splitters are detected by single-photon counting modules (D$_\mathrm{s\pm}$ and D$_\mathrm{as\pm}$), as shown in Fig. \ref{fig:schematic}(a). A detailed description of our experimental set-up is presented in the Supplemental Material \cite{SupplementalMaterial}. The photon pairs from the beam-splitter outputs are frequency-bin entangled, and their biphoton states are described by
\begin{eqnarray}
|\Psi_\mathrm{XY}\rangle &=& \frac{1}{\sqrt{2}}\big[|\omega_\mathrm{s}+\delta\rangle|\omega_\mathrm{as}\rangle \nonumber\\
&+& \mathrm{XY} e^{i(\Delta\phi_\mathrm{as}-\Delta\phi_\mathrm{s})}|\omega_\mathrm{s}\rangle|\omega_\mathrm{as}+\delta\rangle\big], \label{eq:Entangled state}
\end{eqnarray}
where $\mathrm{XY}$, as a product of the signs ($+$, $-$), represents the Stokes to anti-Stokes combinations from the beam splitter outputs: $|\Psi_\mathrm{XY}\rangle$ is detected by (D$_\mathrm{sX}$, D$_\mathrm{asY}$).
\begin{figure*}
\caption{\label{fig:Bell inequality}}
\end{figure*}
The Glauber correlation function of the biphoton state in Eq. (\ref{eq:Entangled state}) exhibits a quantum beating \cite{SupplementalMaterial}:
\begin{eqnarray}
G_\mathrm{XY}^{(2)}(&\tau&; \Delta\phi_\mathrm{s}, \Delta\phi_\mathrm{as})= \frac{1}{2} [G^{(2)}_0(\tau)-N_0]\nonumber\\
&& \times [1+\mathrm{XY} \cos(\delta\tau+\Delta\phi_\mathrm{s}-\Delta\phi_\mathrm{as})]+N_0,
\label{eq:G2 function}
\end{eqnarray}
where $\tau=t_\mathrm{as}-t_\mathrm{s}$. $G^{(2)}_0(\tau)$ is the biphoton Glauber correlation function before the beam splitters, which is the same for both paths 1 and 2. $N_0$ is the uncorrelated accidental coincidence rate. We then have the normalized biphoton correlation function
\begin{eqnarray}
&&g_\mathrm{XY}^{(2)}(\tau; \Delta\phi_\mathrm{s}, \Delta\phi_\mathrm{as})=G_\mathrm{XY}^{(2)}(\tau; \Delta\phi_\mathrm{s}, \Delta\phi_\mathrm{as})/N_0 \nonumber\\
&&=\frac{1}{2} [g^{(2)}_0(\tau)-1][1+\mathrm{XY} \cos(\delta\tau+\Delta\phi_\mathrm{s}-\Delta\phi_\mathrm{as})]+1,
\label{eq:g2 function}
\end{eqnarray}
where $g^{(2)}_0(\tau)=G^{(2)}_0(\tau)/N_0$. As $N_0>0$, the beating visibility slowly varies as a function of $\tau$:
\begin{eqnarray}
V(\tau)=\frac{G^{(2)}_0(\tau)-N_0}{G^{(2)}_0(\tau)+N_0}=\frac{g^{(2)}_0(\tau)-1}{g^{(2)}_0(\tau)+1}.
\label{eq:V}
\end{eqnarray}
If $N_0=0$, the cosine modulation in the quantum beat has a visibility of 100\%. Experimentally, the two-photon temporal correlation is measured as coincidence counts between the detectors
\begin{eqnarray}
C_\mathrm{XY}(\tau; \Delta\phi_\mathrm{s}, \Delta\phi_\mathrm{as}) = G_\mathrm{XY}^{(2)}(\tau; \Delta\phi_\mathrm{s}, \Delta\phi_\mathrm{as})\eta\xi\Delta t_\mathrm{bin}T,
\label{eq:Coincidence}
\end{eqnarray}
where $\eta$ is the joint detection efficiency, $\xi$ is the duty cycle, $\Delta t_\mathrm{bin}$ is the detector time bin width, and $T$ is the data collection time.
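The model of Eqs. (\ref{eq:G2 function})-(\ref{eq:V}) can be checked numerically. The following sketch evaluates the beating correlation and its visibility envelope; the envelope $G^{(2)}_0(\tau)$ and the rates are assumed placeholder values for illustration, not the measured ones.

```python
import numpy as np

# Illustrative model of Eqs. (2)-(4); envelope and rates are assumed values.
delta = 2 * np.pi * 10e6                 # 10 MHz frequency-bin splitting (rad/s)
N0 = 1.0                                 # accidental coincidence rate (arb. units)
tau = np.linspace(0.0, 600e-9, 2000)     # relative delay t_as - t_s
G2_0 = N0 * (1.0 + 20.0 * np.exp(-tau / 150e-9))  # assumed envelope before the BSs

def G2_XY(dphi_s, dphi_as, XY=+1):
    """Quantum beating of Eq. (2) on the grid tau."""
    beat = 1.0 + XY * np.cos(delta * tau + dphi_s - dphi_as)
    return 0.5 * (G2_0 - N0) * beat + N0

# Visibility envelope of Eq. (4)
V = (G2_0 - N0) / (G2_0 + N0)
```

The beating oscillates between $N_0$ and $G^{(2)}_0(\tau)$, so the visibility approaches 100\% only as $N_0\to 0$, as stated in the text.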
Figures \ref{fig:schematic}(c) and \ref{fig:schematic}(d) show the biphoton correlations for paths 1 and 2, respectively, measured without the two beam splitters. They are nearly identical to each other, with a coherence time of about 300 ns, corresponding to a bandwidth of 1.28 MHz. With the two beam splitters in place, the correlation $g_{++}^{(2)}(\tau; 3\pi/2, -\pi/4)$ displays the quantum beating shown in Fig. \ref{fig:schematic}(e), as predicted by Eq. (\ref{eq:g2 function}).
\textit{Bell inequality of frequency-bin entanglement.---} As shown in Eqs. (\ref{eq:G2 function})-(\ref{eq:Coincidence}) and confirmed in our experiment, the two-photon coincidence counts are functions of the relative arrival time delay $\tau=t_\mathrm{as}-t_\mathrm{s}$ between Stokes and anti-Stokes photons at the two distant detectors and the relative phase difference $\Delta\phi_\mathrm{s}-\Delta\phi_\mathrm{as}$. We place the photon counters D$_\mathrm{s\pm}$ far enough away from D$_\mathrm{as\pm}$, and likewise the two phase shifters, to make our Bell test free of the locality loophole. Meanwhile, we take the fair-sampling assumption. To test the Clauser-Horne-Shimony-Holt (CHSH) type Bell inequality \cite{CHSH1969}, we define the measurement output as $+1$ for coincidence between D$_\mathrm{s+}$ and D$_\mathrm{as+}$ (or D$_\mathrm{s-}$ and D$_\mathrm{as-}$), and $-1$ for coincidence between D$_\mathrm{s+}$ and D$_\mathrm{as-}$ (or D$_\mathrm{s-}$ and D$_\mathrm{as+}$). Then the Bell correlation coefficient can be obtained from
\begin{eqnarray}
E(\tau; \Delta\phi_\mathrm{s}, \Delta\phi_\mathrm{as})=\frac{C_{++}+C_{--}-C_{+-}-C_{-+}}{C_{++}+C_{--}+C_{+-}+C_{-+}}.
\label{eq:Ecorrelation}
\end{eqnarray}
Considering the symmetries of the two SFWM paths and the beam splitters, Eq. (\ref{eq:Ecorrelation}) can be reduced to
\begin{eqnarray}
E(&\tau&; \Delta\phi_\mathrm{s}, \Delta\phi_\mathrm{as}) \nonumber\\
&=& \frac{C_{++}(\tau; \Delta\phi_\mathrm{s}, \Delta\phi_\mathrm{as})-C_{++}(\tau; \Delta\phi_\mathrm{s}^\bot, \Delta\phi_\mathrm{as})}{C_{++}(\tau; \Delta\phi_\mathrm{s}, \Delta\phi_\mathrm{as})+C_{++}(\tau; \Delta\phi_\mathrm{s}^\bot, \Delta\phi_\mathrm{as})},
\label{eq:E value}
\end{eqnarray}
which requires only two photon detectors, with $\Delta\phi_\mathrm{s}^\bot=\Delta\phi_\mathrm{s}+\pi$. Then the CHSH Bell parameter $S$ can be estimated as
\begin{eqnarray}
S(\tau) &=& E(\tau; \Delta\phi_\mathrm{s}, \Delta\phi_\mathrm{as})-E(\tau; \Delta\phi_\mathrm{s}', \Delta\phi_\mathrm{as}) \nonumber \\
&+&E(\tau; \Delta\phi_\mathrm{s}, \Delta\phi_\mathrm{as}')+E(\tau; \Delta\phi_\mathrm{s}', \Delta\phi_\mathrm{as}').
\label{eq:S value}
\end{eqnarray}
Under local realism the Bell inequality $|S(\tau)|\leq2$ holds.
For the biphoton source described by Eqs. (\ref{eq:Entangled state})-(\ref{eq:Coincidence}), setting $\Delta\phi_\mathrm{s}' = \Delta\phi_\mathrm{s}-\pi/2$ and $\Delta\phi_\mathrm{as}' = \Delta\phi_\mathrm{as}-\pi/2$, we derive \cite{SupplementalMaterial}
\begin{eqnarray}
S(\tau)=2\sqrt{2}V(\tau)\cos(\delta\tau+\Delta\phi_\mathrm{s}-\Delta\phi_\mathrm{as}+\frac{\pi}{4}).
\label{eq:S}
\end{eqnarray}
For ideal frequency-bin entanglement without accidental coincidences, i.e., $V(\tau)=1$, our theory predicts $|S|_\mathrm{max}=2\sqrt{2}$, which violates the CHSH Bell inequality. It is clear that $S(\tau)$ exhibits a sinusoidal oscillation pattern with $2\sqrt{2}V(\tau)$ as the slowly varying envelope.
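The collapse of the four-term CHSH combination of Eq. (\ref{eq:S value}) into the single cosine of Eq. (\ref{eq:S}) can be verified numerically. In the sketch below, the correlation $E=V(\tau)\cos(\delta\tau+\Delta\phi_\mathrm{s}-\Delta\phi_\mathrm{as})$ follows from Eqs. (\ref{eq:G2 function})-(\ref{eq:Ecorrelation}); the visibility envelope is an assumed placeholder, not measured data.

```python
import numpy as np

delta = 2 * np.pi * 10e6
tau = np.linspace(0.0, 600e-9, 1000)

def vis(t):
    # Assumed slowly varying visibility envelope V(tau)
    return 0.9 * np.exp(-t / 300e-9)

def E(dphi_s, dphi_as):
    # Bell correlation implied by Eqs. (2)-(6) of the model
    return vis(tau) * np.cos(delta * tau + dphi_s - dphi_as)

def S(dphi_s, dphi_as):
    # CHSH combination of Eq. (7) with primed phases lagging by pi/2
    ps, pa = dphi_s - np.pi / 2, dphi_as - np.pi / 2
    return E(dphi_s, dphi_as) - E(ps, dphi_as) + E(dphi_s, pa) + E(ps, pa)

# Closed form of Eq. (8) for the settings dphi_s = 0, dphi_as = pi/4
S_closed = 2 * np.sqrt(2) * vis(tau) * np.cos(delta * tau - np.pi / 4 + np.pi / 4)
```

With $V(\tau)=1$ this reproduces $|S|_\mathrm{max}=2\sqrt{2}$, and for any envelope $|S(\tau)|$ never exceeds $2\sqrt{2}\,V(\tau)$.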
The measured $S(\tau)$ and Bell correlations $E(\tau)$ are shown in Fig.~\ref{fig:Bell inequality}, with the phase settings $\Delta\phi_\mathrm{s} = 0$, $\Delta\phi_\mathrm{as} = \pi/4$, $\Delta\phi_\mathrm{s}' = -\pi/2$, and $\Delta\phi_\mathrm{as}' = -\pi/4$. At $\tau=52$ ns, we have $S=-2.52\pm0.48$, which violates the classical bound $|S|\leq 2$. The solid theoretical curve in Fig.~\ref{fig:Bell inequality}(a) is predicted from Eq.~(\ref{eq:S}) and agrees well with the experiment. The oscillation amplitude follows well the dashed envelope plotted from $\pm2\sqrt{2}V(\tau)$. By adjusting the phase settings, we can violate the Bell inequality $|S|\leq 2$ for $0<\tau\leq 350$ ns, where the visibility satisfies $V>1/\sqrt{2}$.
\begin{figure}
\caption{\label{fig:NonlocalPhase}}
\end{figure}
\textit{Nonlocal phase correlation.---} At a fixed relative time delay $\tau$, the two-photon correlation in Eq. (\ref{eq:G2 function}) is identical to that obtained in polarization entanglement, where $\Delta\phi_\mathrm{s}$ and $\Delta\phi_\mathrm{as}$ correspond to the orientations of distant polarizers. To confirm this, we plot in Fig.~\ref{fig:NonlocalPhase}(a) the measured quantum beating temporal correlations with different phase settings. Figure \ref{fig:NonlocalPhase}(b) shows the nonlocal phase correlation as a function of $\Delta\phi_\mathrm{s}$ at $\tau=252$ ns for two different values of $\Delta\phi_\mathrm{as}$. As with polarization entanglement, the system has a rotation symmetry and the correlation depends only on the relative phase difference $\Delta\phi_\mathrm{s}-\Delta\phi_\mathrm{as}$. Therefore a visibility $V>1/\sqrt{2}$ of the nonlocal phase correlation is an indication of violation of the Bell inequality. The solid lines in Fig. \ref{fig:NonlocalPhase}(b) are the best-fitting sinusoidal curves, with visibilities of $V_1 =0.86\pm0.09$ for $\Delta\phi_\mathrm{as} = \pi/4$ and $V_2 =0.85\pm0.13$ for $\Delta\phi_\mathrm{as} = -\pi/4$. These visibilities are consistent with the visibility envelope of the quantum beating $V(\tau=252$ ns$)=0.78$ in Fig.~\ref{fig:schematic}(e). We estimate the $S$ value from $S = \sqrt{2}(V_1+V_2)= 2.42\pm0.31$, which violates the Bell inequality.
\begin{figure}
\caption{\label{fig:quantum beating}}
\end{figure}
\textit{Biphoton temporal beating.---} The Bell inequality was derived for general local experimental apparatus settings. For polarization entanglement, these settings are the orientations of the polarizers. For time-frequency entanglement, one can choose the detection time as the local detector setting parameter. In this experiment, we take $t_\mathrm{s}$ and $t_\mathrm{as}$ as the two distant local parameters. For the frequency-bin entangled state prepared here, quantum mechanics predicts that the two-photon temporal correlation exhibits a quantum beating [Eq. (\ref{eq:G2 function})], which in mathematical form is similar to the polarization correlation obtained from polarization entanglement. Figure \ref{fig:quantum beating}(a) shows the temporal beatings of the two-photon correlation at two different phase settings. To compare with conventional polarization entanglement measurements, we normalize the quantum beating $g_{++}^{(2)}(\tau)$ to the correlation envelope $g_0^{(2)}(\tau)$ without subtracting the contribution from accidental photon coincidence counts. The normalized beating signals are plotted in Fig. \ref{fig:quantum beating}(b). With sinusoidal curve fitting of the normalized beating over 200 ns, we obtain visibilities of $\bar{V}=0.77\pm0.04$ and $\bar{V}=0.76\pm0.06$, which clearly surpass the 0.5 limit of a classical probability theory \cite{ClassicalWaveLimit1, ClassicalWaveLimit2}. The corresponding $|S|$ values are $2.17\pm0.11$ and $2.16\pm0.17$, which violate the Bell inequality.
\textit{Summary and discussion.---}In summary, we generate frequency-bin entangled narrowband biphotons from SFWM in cold atoms with a double-path configuration, where the phase difference between the two spatial paths can be controlled independently and nonlocally. The two-photon correlation exhibits a temporal quantum beating between the entangled frequency modes, whose phase is determined by the relative phase difference between the two paths. We have successfully tested the CHSH Bell inequality, and our best result is $|S|=2.52\pm0.48$, which violates the Bell inequality $|S|\leq2$. With $V$ as the visibility of the two-photon temporal beating, quantum theory predicts $|S|_\mathrm{max}=2\sqrt{2}V$, which is confirmed by the experiment. Therefore a visibility $V>1/\sqrt{2}=71\%$ of the two-photon temporal beating is sufficient to violate the Bell inequality. The experimental value of $|S|$ is below $2\sqrt{2}$ because the uncorrelated accidental coincidence counts from stray light, dark counts, and the randomness of photon-pair generation in the spontaneous process reduce the beating visibility. We can reduce the pump laser power to lower the accidental coincidence counts and thus obtain a higher visibility, but this requires a longer data-collection time. The error bar of $|S|$ is the standard deviation resulting from statistical uncertainties of the coincidence counts, which can also be reduced with a longer data-taking time. Time-resolved single-photon detection is a powerful tool for quantum-state control, such as entanglement swapping and teleportation \cite{GisinNP2007}. Our result, for the first time, tests the Bell inequality on the nonlocal temporal correlation of frequency-bin entangled narrowband biphotons with time-resolved detection, and will have applications in quantum information processing involving time-frequency entanglement.
The work was supported by Hong Kong Research Grants Council (Project No. 16305615).
\end{document} |
\begin{document}
\title{The space of strictly-convex real-projective structures on a closed manifold}
\date{\today}
\noindent\address{DC: Department of Mathematics, University of California, Santa Barbara, CA 93106, USA}\\
\address{ST: School of Mathematics and Statistics, The University of Sydney, NSW 2006, Australia}
\address{}\\
\email{[email protected]}\\
\email{[email protected]}
\author{Daryl Cooper and Stephan Tillmann}
\begin{abstract} We give an expository proof of the fact that, if $M$ is a compact $n$--manifold with no boundary, then the set
of holonomies of strictly-convex real-projective structures on $M$ is a subset
of $\operatorname{Hom}(\pi_1M,\operatorname{PGL}(n+1,\mathbb{R}))$ that is both open and closed.
\end{abstract}
\maketitle
If $M$ is a compact $n$--manifold, then $\operatorname{Rep}(M)=\operatorname{Hom}(\pi_1M,\operatorname{GL}(n+1,{\mathbb R}))$ is a real algebraic variety. Let
$\operatorname{Rep}_{P}(M)$ and $\operatorname{Rep}_{S}(M)$ be, respectively, the subsets of $\operatorname{Rep}(M)$ of holonomies of properly-convex, and of strictly-convex structures on $M$.
Then $\operatorname{Rep}_S(M)\subset\operatorname{Rep}_P(M)$. Throughout this paper we use the Euclidean topology everywhere, and not the {\em Zariski topology}.
In the following, $M$ is closed and $n\ge 2$.
\begin{open}\label{open}
$\operatorname{Rep}_{P}(M)$
is open in $\operatorname{Rep}(M)$.
\end{open}
\begin{closed}\label{closed}
$\operatorname{Rep}_{S}(M)$
is closed in $\operatorname{Rep}(M)$.
\end{closed}
\begin{clopen}
$\operatorname{Rep}_S(M)$ is a union of connected components
of $\operatorname{Rep}(M)$.
\end{clopen}
It follows from (\ref{noinvt}) that the holonomy of a strictly-convex structure determines a projective manifold up to projective isomorphism.
The Open Theorem is due to Koszul~\cite{Kos1, Kos2}. Our proof is distilled from his, and proceeds by showing that $M$ is properly-convex if and only if
there is a projective $(n+1)$--manifold $N\cong M\times I$ with $M\times 0$ flat, and $M\times 1$
convex, and triangulated so that adjacent $n$-simplices
are never coplanar. This type of convexity is easily shown to be preserved by small deformations.
The Closed Theorem is due to Choi and Goldman~\cite{CW2} when $n=2$, to Kim~\cite{Kim} when $n=3$, and to Benoist~\cite{Ben5} in general.
Our proof is new, and based on a geometric argument called {\em the box estimate} (\ref{boxestimate}). This might be viewed
as related to Benz\'ecri's compactness theorem \cite{Benz} for a properly-convex domain $\Omega$, but pertaining to $\operatorname{Aut}(\Omega)$.
It also uses an elementary geometric fact (\ref{centerexists}) about an analogue of centroids for subsets of the sphere.
The Clopen Theorem follows from the Open Theorem and the fact, due to Benoist,
that a properly-convex manifold that is homeomorphic to a strictly
convex manifold is also strictly-convex (\ref{stricthomeo}).
We have made an effort to make the paper self-contained by including proofs
of all the foundational results needed in \Cref{background,sec:cones}.
We have endeavoured to simplify these proofs as much as possible.
See \cite{MR2464391} for an excellent survey.
There are extensions of the Open and Closed Theorems
when $M$ is the interior of a compact manifold with boundary, see \cite{CLT2, CT2, CT1}.
Indeed, the techniques in this paper were developed to handle this more general situation for which the pre-existing
methods do not suffice.
But it seems useful to present some of the main ideas
in the simplest setting. We have liberally borrowed from \cite{CT2}.
The first author thanks the University of Sydney Mathematical Research
Institute (SMRI) for partial support and hospitality while writing this paper. He also thanks the audience at SMRI that participated in a presentation
of this material.
Research of the second author is supported in part under the Australian Research Council's ARC Future Fellowship FT170100316.
\section{Properly and strictly-convex}\label{background}
This section and the next review some well known results concerning properly and strictly-convex projective manifolds that are needed to prove the theorems.
The results needed subsequently are (\ref{stricthomeo}), (\ref{noinvt}), (\ref{unique}), (\ref{nilpotent}), and
(\ref{C1}), (\ref{dualcpct}).
The only mild innovation
is that, to avoid appealing to results about word-hyperbolic groups, a more direct approach was taken to the proof of (\ref{stricthomeo}).
The early sections of \cite{CLT1} greatly expand on the background in this section.
If $M$ is a manifold, the universal cover is $\pi_{M}:\widetilde M \to M$, and if $g\in\pi_1M$ then $\tau_g:\widetilde M\to\widetilde M$
is the covering transformation corresponding to $g$. A {\em geometry} is a pair $(G,X)$, where $G$ is a group that acts analytically and transitively
on a manifold $X$.
A $(G,X)$-structure on a manifold $M$ is determined by a {\em development pair} $(\operatorname{dev},\rho)$ that consists of the {\em holonomy} $\rho\in \operatorname{Hom}(\pi_1M,G)$ and
the {\em developing map} $\operatorname{dev}:\widetilde M\to X$ which is a local homeomorphism. The pair
satisfies, for all $x\in\widetilde M$
and $g\in\pi_1M$ that $\operatorname{dev}(\tau_{g}x)=(\rho g)\operatorname{dev} x$.
In what follows $V={\mathbb R}^{n+1}$, and $V^*=\operatorname{Hom}(V,{\mathbb R})$ is the dual vector space, and ${\mathbb R}^{n+1}_0=V_0=V\setminus \{0\}$.
{\em Projective space} is $\operatorname{{\mathbb P}} V=V_0/{\mathbb R}_0$, and $[A]\in\operatorname{Aut}(\operatorname{{\mathbb P}} V)=\operatorname{PGL}(V)$ acts on $\operatorname{{\mathbb P}} V$ by $[A][x]=[Ax]$.
{\em Projective geometry}
is $(\operatorname{Aut}(\operatorname{{\mathbb P}} V),\operatorname{{\mathbb P}} V)$ and is also written $(\operatorname{Aut}({\mathbb R}P^n),{\mathbb R}P^n)$.
We use the notation ${\mathbb R}^+=(0,\infty)$. {\em Positive projective space} is ${\mathbb R}P^n_+=\operatorname{{\mathbb P}}_+V=V_0/{\mathbb R}^+$ and
$[x]_+=\{\lambda x:\lambda>0\}$ for $x\in V_0$. Sometimes we identify $\operatorname{{\mathbb P}}_+V$ with the unit sphere $S^n\subset{\mathbb R}^{n+1}$
via $[x]_+\equiv x/\|x\|$.
The group $\operatorname{Aut}(\operatorname{{\mathbb P}}_+V):=\operatorname{SL}(V)\subset\operatorname{GL}(V)$ is the subgroup with $\det=\pm1$.
{\em Positive projective geometry} $(\operatorname{Aut}(\operatorname{{\mathbb P}}_+V),\operatorname{{\mathbb P}}_+V)$ is the double cover of projective geometry.
We will pass back and forth between projective geometry and positive projective geometry without mention, often omitting the term {\em positive}.
If $U$ is a vector subspace of $V$ then $\operatorname{{\mathbb P}} U$ is a {\em projective subspace} of $\operatorname{{\mathbb P}} V$, and
is a {\em (projective) line} if $\dim U=2$ and a {\em (projective) hyperplane} if $\dim U=\dim V-1$.
The {\em dual} of $U$ is the projective subspace $\operatorname{{\mathbb P}} U^0\subset\operatorname{{\mathbb P}} V^*$ where $U^0=\{\phi\in V^* :\phi(U)=0\}$.
We use the same terminology in positive projective geometry.
By lifting developing maps one obtains:
\begin{proposition}\label{holonomylifts} Every projective structure on $M$ lifts to a
positive projective structure.
\end{proposition}
The {\em frontier} of a subset $X\subset Y$ is $\operatorname{Fr} X=\operatorname{cl}(X)\setminus\operatorname{int}(X)$, and the {\em boundary} is $\partial X=X\cap\operatorname{Fr} X$.
A {\em segment} is a connected, proper subset of a projective line that contains more than one point.
In what follows $\Omega\subset{\mathbb R}P^n$. If $H$ is a hyperplane and $x\in H\cap\operatorname{Fr}\Omega$ and $H\cap\operatorname{int}\Omega=\emptyset$, then $H$ is called a {\em supporting hyperplane (to $\Omega$) at $x$}.
The set $\Omega$
\begin{itemize}
\item is {\em convex} if every pair of points in $\Omega$ is contained in a segment in $\Omega$.
\item is {\em properly-convex} if it is convex, and $\operatorname{cl}\Omega$ does not contain a projective line.
\item is {\em strictly-convex} if it is properly-convex and $\operatorname{Fr}\Omega$ does not contain a segment.
\item is {\em flat} if it is convex and $\dim\Omega<n$.
\item is a {\em convex domain} if it is a properly-convex open set.
\item is a {\em convex body} if it is the closure of a convex domain.
\item is $C^1$ if for each $x\in\operatorname{Fr}\Omega$ there is a unique supporting hyperplane at $x$.
\end{itemize}
If $V=U\oplus W$, then {\em projection (along $\operatorname{{\mathbb P}} W$) onto $\operatorname{{\mathbb P}} U$} is $\pi:\operatorname{{\mathbb P}} V\setminus\operatorname{{\mathbb P}} W\longrightarrow\operatorname{{\mathbb P}} U$
given by $\pi[u+w]=[u]$. {\em Duality} is the map that sends each point $\theta=[\phi]\in\operatorname{{\mathbb P}} V^*$ to the hyperplane
$H_{\theta}=\operatorname{{\mathbb P}}\ker\phi\subset\operatorname{{\mathbb P}} V$. If $L$ is a line in $\operatorname{{\mathbb P}} V^*$ the hyperplanes $H_{\theta}$ dual to the points
$\theta\in L$
are called a {\em pencil of hyperplanes}. Then $Q=\cap H_{\theta}$ is the projective dual of $L$ and is called the {\em core of the pencil}.
\begin{lemma} A convex subset $\Omega$ is properly-convex if and only if $\operatorname{cl}\Omega$ is disjoint from some hyperplane.
\end{lemma}
\begin{proof} Without loss of generality, suppose $\Omega$ is closed. The result is obvious if $n=1$. Since $\Omega$ is convex and contains no projective line, it is simply connected, and so lifts to $\Omega'\subset{\mathbb R}P^n_+$.
Let $H\subset {\mathbb R}P^n_+$ be a hyperplane. Then $H\cap \Omega'$ is empty or properly-convex. By induction
on dimension, $H$ contains a projective subspace $Q$ with $\dim Q=n-2$ that is disjoint from $H\cap \Omega'$. There is a pencil of hyperplanes $H_{\theta}$ with core $Q$. Now $\Omega'\cap H_{\theta}$ is contained in one of the two components of $H_{\theta}\setminus Q$. As $\theta$
moves halfway round ${\mathbb R}P^1_+$ the component must change. Thus for some $\theta$ the intersection is empty.
\end{proof}
In what follows $\Omega\subset {\mathbb R}P^n$ is a properly-convex domain, and $\operatorname{Aut}(\Omega)\subset\operatorname{Aut}({\mathbb R}P^n)$ is the subgroup that preserves $\Omega$.
The {\em Hilbert metric $d_{\Omega}$} on $\Omega$ is defined as follows.
If $\ell\subset{\mathbb R}P^n$ is a line and $\alpha=\Omega\cap\ell\ne\emptyset$, then $\alpha$ is a
{\em proper segment in $\Omega$} and $\alpha=(a_-,a_+)$
with $a_{\pm}\in\operatorname{Fr}\Omega$. There is a projective isomorphism
$f:\alpha\rightarrow{\mathbb R}^+$
and
\[
d_{\Omega}(x,y)=\frac{1}{2}\left|\;\log \frac{f(x)}{f(y)}\;\right|
\] for $x,y\in\alpha$. A {\em geodesic} in $\Omega$ is a curve whose length is the distance between its endpoints.
It follows that line segments are geodesics.
It is immediate that $\operatorname{Aut}(\Omega)$ acts by isometries of $d_{\Omega}$.
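The definition above can be made concrete in dimension one. The sketch below computes the Hilbert distance on a proper segment $(a_-,a_+)\subset{\mathbb R}$, using the assumed (but standard) projective isomorphism $f(t)=(t-a_-)/(a_+-t)$ onto ${\mathbb R}^+$; the additivity check illustrates why line segments are geodesics.

```python
import math

def hilbert_distance(x, y, a_minus, a_plus):
    """Hilbert distance between x and y on the proper segment (a_minus, a_plus),
    via the projective isomorphism f(t) = (t - a_minus)/(a_plus - t) onto R^+."""
    f = lambda t: (t - a_minus) / (a_plus - t)
    return 0.5 * abs(math.log(f(x) / f(y)))
```

For ordered points $x<y<z$ on the segment the distances add, $d(x,z)=d(x,y)+d(y,z)$, so the segment is a geodesic for $d_{\Omega}$.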
Let $H_{\pm}$ be supporting hyperplanes at $a_{\pm}$ and
$P=H_+\cap H_-$. {\em Projection (along $P$) onto $\alpha$} is the map $\pi:\Omega\to\alpha$ given by $\pi[u+v]=[v]$,
where $[u]\in P$ and $[v]\in\ell$.
The choice of $H_{\pm}$ is unique only if $a_{\pm}$ are $C^1$ points. {\em In general $\pi x$ is not the closest point on $\alpha$
to $x$.}
\begin{lemma}\label{projlemma} (i) $d_{\Omega}(x,y)\ge d_{\Omega}(\pi x,\pi y)$. \\
(ii) If $\Omega$ is strictly-convex, then geodesics are segments of lines.
\end{lemma}
\begin{proof} \Cref{projection} represents the situation in two {\em or more} dimensions.
Let $\delta=(b_-,b_+)$ be the proper segment in $\Omega$ containing $x$ and $y$, and let $\pi:\Omega\rightarrow\alpha$ be as above. Then $d_{\Omega}(x,y)=d_{\delta}(x,y)$. Since $(\pi|\delta):\delta\to\alpha$ is a projective embedding,
$d_{\delta}(x,y)=d_{\pi\delta}(\pi x,\pi y)$. Also $d_{\pi\delta}(\pi x,\pi y)\ge d_{\alpha}(\pi x,\pi y)$ because
$\pi\delta\subset\alpha$. Finally $d_{\alpha}(\pi x,\pi y)=d_{\Omega}(\pi x,\pi y)$, which proves (i).
\begin{figure}
\caption{Projecting onto a line}
\label{projection}
\end{figure}
Equality implies $\alpha=\pi\delta$. Thus, after relabelling if needed, $b_{\pm}\in H_{\pm}$. Hence $[a_{\pm},b_{\pm}]\subset H_{\pm}\cap\operatorname{Fr}\Omega$.
If $\Omega$ is strictly-convex it follows that $b_{\pm}=a_{\pm}$. This gives (ii).
\end{proof}
In a strictly-convex domain $\Omega$ we will use the term {\em geodesic} to mean a proper segment in $\Omega$.
A subset $\mathcal C\subset V_0$ is a {\em cone} if $t\cdot\mathcal C=\mathcal C$ for all $t>0$.
The {\em dual cone} $\mathcal C^*=\operatorname{int}\left(\{\phi\in V^*:\ \phi(\mathcal C)\ge 0\}\right)$ is convex.
If $\Omega\subset\operatorname{{\mathbb P}}_+ V$ then $\mathcal C\Omega$ is the cone $\{v\in V_0:\ [v]_+\in\Omega\}$.
If $\Omega$ is open and properly-convex, then the {\em dual domain}
$\Omega^*=\operatorname{{\mathbb P}}\left((\mathcal C\Omega)^*\right)$ is open and properly-convex. Using the natural identification $V\cong V^{**}$,
it is immediate that $(\Omega^*)^*=\Omega$.
\begin{corollary}\label{C1domain} If $\Omega$ is open and properly-convex,
then $\Omega$ is $C^1$ (resp.\ strictly-convex)
if and only if $\Omega^*$ is strictly-convex (resp.\ $C^1$).
\end{corollary}
\begin{proof} We have $[\phi]\in\operatorname{Fr}\Omega^*$ if and only if the dual hyperplane $H=[\ker\phi]$
supports $\Omega$.
If $p\in\operatorname{Fr}\Omega$, then there are
two distinct supporting hyperplanes $H_0=[\ker\phi_0]$ and $H_1=[\ker\phi_1]$ for $\Omega$ at $p$
if and only if
all the hyperplanes $H_t=[\ker \phi_t]$ contain $p$ and support $\Omega$, where $\phi_t=(1-t)\phi_0+t\phi_1$ and $0\le t\le 1$.
This happens if and only if the segment
$\sigma=\{[\phi_t]:\ 0\le t\le 1\}$ is in $\operatorname{Fr}\Omega^*$. Hence $\Omega$ is $C^1$ if and only if $\Omega^*$ is strictly-convex. Replacing $\Omega$ by $\Omega^*$ and using $\Omega^{**}=\Omega$ gives the other result.
\end{proof}
If $\Gamma\subset\operatorname{Aut}(\Omega)$
is a discrete and torsion-free subgroup, then
$M=\Omega/\Gamma$ is a {\em properly (resp.\ strictly) convex} manifold if $\Omega$ is properly (resp.\ strictly) convex.
Then $\widetilde M=\Omega$
and there is a development pair $(\operatorname{dev},\rho)$, where $\operatorname{dev}$ is the inclusion map, and the holonomy $\rho:\pi_1M\to\Gamma$ is an isomorphism.
\begin{lemma}\label{htpyequiv} Suppose $M$ and $M'$ are properly-convex manifolds and
$\pi_1M\cong\pi_1M'$. If $M$ is
closed, then $M'$ is closed.
\end{lemma}
\begin{proof} Let $\pi:\Omega\to M$ be the projection. This is the universal covering space of $M$, so $M$ is
a $K(\pi_1M,1)$. The same holds for $M'$. Hence $M$ and $M'$ are homotopy equivalent. If $n=\dim M$, then $M$ is closed if and only if
$H_n(M;{\mathbb Z}_2)\cong{\mathbb Z}_2$. Since homology is an invariant of homotopy type the result follows.\end{proof}
A compactness argument shows that if $\Omega$ is the universal cover of a closed strictly-convex manifold, then projection
(along a projective subspace) onto a geodesic in $\Omega$ is
uniformly distance-decreasing in the following sense.
\begin{lemma}\label{strictprojection} Suppose $M=\Omega/\Gamma$ is a strictly-convex closed manifold. Given $b>1$ there is $R=R(b)>0$ such that the following holds.
Suppose $\pi:\Omega\to\alpha$ is a projection onto a geodesic $\alpha$. Suppose that $\eta$ is a rectifiable arc in
$\Omega\setminus N_R(\alpha)$
of length $\ell$ with endpoints $x$ and $y$. Then $d_{\Omega}(\pi x,\pi y)< 1+(\ell/b)$. \end{lemma}
\begin{proof} By decomposing $\eta$ into finitely many subarcs, one of length at most $b$ and $\lfloor \ell/b \rfloor$ of length $b$,
it suffices to prove that if $\operatorname{length}(\eta)\le b$ then $d_{\operatorname{O}mega}(\pi x,\pi y)\le 1$.
Write $d=d_{\operatorname{O}mega}$. If no such $R$ exists then there are sequences $x_k,y_k\in\operatorname{O}mega$ and projections $\pi_{_k}:\operatorname{O}mega\to\alpha_k$
with $d(x_k,y_k)\le b$ and $d(x_k,\alpha_k)\ge k$ and $1\le d(\pi_{_k} x_k,\pi_{_k} y_k)\le b$. Then $\beta_k=[\pi_{_k} x_k,x_k]$ and $\gamma_k=[\pi_{_k} y_k,y_k]$
are geodesic segments and $\pi_{_k}\beta_k=\pi_{_k} x_k $ and $\pi_{_k}\gamma_k=\pi_{_k}y_k$.
There is a compact $W\subset\operatorname{O}mega$ such that $\Gamma\cdot W=\operatorname{O}mega$. By applying an element of $\Gamma$ we may assume that $\pi_{_k}x_k\in W$.
After subsequencing we may assume $\beta=\lim \beta_k$ and $\gamma=\lim \gamma_k$ and $\alpha=\lim\alpha_k$ and $\pi=\lim \pi_{_k}$ all exist and $\pi:\operatorname{O}mega\to\alpha$ is
projection. Refer to \Cref{projection}. Thus $\beta$ and $\gamma$ are geodesic rays in $\operatorname{O}mega$ that start on $\alpha$, and end on $\operatorname{Fr}\operatorname{O}mega$, with
$\pi(\beta)=\alpha\cap\beta\ne \alpha\cap\gamma=\pi(\gamma)$.
Since $\operatorname{O}mega$ is strictly-convex, and $d(x_k,y_k)\le b$, it follows that $x_k$ and $y_k$
limit on the same point $p\in \operatorname{Fr}\operatorname{O}mega$. Hence $p$ is the endpoint of both $\beta$ and $\gamma$ on $\operatorname{Fr}\operatorname{O}mega$.
Let $P=H_-\cap H_+$, where $H_{\pm}$ are the supporting hyperplanes to $\operatorname{O}mega$ at the endpoints of $\alpha$.
Since $\operatorname{O}mega$ is strictly-convex, $p\notin P$. It follows that $\pi(\beta)=\pi(p)=\pi(\gamma)$ which is a contradiction.
\end{proof}
If $X$ is a subset of a metric space $Y$, the $r$-neighborhood of $X$ is $N_r(X)=N(X,r)=\{y\in Y:\ d(y,X)\le r\}$.
The following is due to Benoist \cite{MR2094116}.
A {\em triangle} is a disc $\Delta$ in a projective plane bounded by
three segments. If $\Delta\subset\operatorname{cl}\operatorname{O}mega$ and $\Delta\cap\operatorname{Fr}\operatorname{O}mega=\partial\Delta$, then $\Delta$ is
called a {\em properly embedded triangle} or {\em PET}.
A properly-convex set $\operatorname{O}mega$ has {\em thin triangles} if there is $\delta>0$ such that for every triangle
$T$ in $\operatorname{O}mega$, each side of $T$ is contained in a $\delta$-neighborhood of the union of the other two sides with respect to the Hilbert metric.
\begin{proposition}\label{PETstrict} If $M=\operatorname{O}mega/\Gamma$ is a properly-convex closed manifold, then the following are equivalent:
\begin{enumerate}
\item $M$ is strictly-convex,
\item $\operatorname{O}mega$ does not contain a PET,
\item $\operatorname{O}mega$ has thin triangles.
\end{enumerate}
\end{proposition}
\begin{proof} If $\operatorname{O}mega$ is not strictly-convex, there is a maximal segment $\ell\subset\operatorname{Fr}\operatorname{O}mega$. Choose $x\in\operatorname{O}mega$ and let $P\subset\operatorname{O}mega$
be the interior of the triangle that is the convex hull of $x$ and $\ell$. Choose a sequence $x_n$ in $P$ that limits on the midpoint of $\ell$.
Then $d_{\operatorname{O}mega}(x_n,\operatorname{Fr} P)\to\infty$ because $\ell$ is maximal. Since $M$ is compact there is a compact set $W\subset\operatorname{O}mega$
such that $\Gamma\cdot W=\operatorname{O}mega$. Thus there is $\gamma_n\in\Gamma$ with $\gamma_nx_n\in W$. After
choosing a subsequence $\gamma_nx_n\to y\in W$, and $\gamma_n P$ converges to the interior of a PET
$\Delta\subset\operatorname{O}mega$ that contains $y$.
Since $\Delta$ is flat $d_{\operatorname{O}mega}|\Delta=d_{\Delta}.$ Now $\operatorname{PGL}(\Delta)$ contains a subgroup $G\cong{\mathbb R}^2$ that acts transitively on $\Delta$.
Therefore $(\Delta,d_{\Delta})$ is isometric to a normed vector-space, thus does not
have thin triangles.
Conversely if $\operatorname{O}mega$ does not have thin triangles, then there is a sequence of triangles $T_k$ in $\operatorname{O}mega$
and points $x_k$ in $T_k$
with $d_{\operatorname{O}mega}(x_k,\partial T_k)>k$. As above, after applying elements of $\Gamma$, we may assume $T_k$
converges to a PET, so $\operatorname{O}mega$ is not strictly-convex.
\end{proof}
A {\em $(K,L)$--quasi-isometric embedding} is a map $f:X\to Y$ between metric spaces $(X,d_X)$ and $(Y,d_Y)$ such that
$$K^{-1}\;d_X(x,x')-L\;\le\; d_Y(fx,fx')\;\le\; K\;d_X(x,x')+L $$
for all $x,x'\in X.$
If $X=[a,b]\subset {\mathbb R}$, then $f$ is called {\em quasi-geodesic}.
The map $f$ is a {\em quasi-isometry} or {\em QI} if $Y\subset N(fX,L)$.
If $g:Y\to Z$ is a $(K',L')$--QI, then $g\circ f$ is a $(KK',K'L+2L')$--QI.
In particular, if $g$ is a QI
and $f$ is a quasi-geodesic, then $g\circ f$ is a quasi-geodesic
with QI constants that only depend on
those of $f$ and $g$.
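The composition rule above can be checked numerically. In the following sketch the maps $f$ and $g$ are our own illustrative choices, not taken from the text: $f(x)=2x+\sin x$ is a $(2,2)$--QI embedding of ${\mathbb R}$ into itself and $g(y)=3y+\cos y$ is a $(3,2)$--QI embedding, so the rule predicts that $g\circ f$ satisfies the $(KK',K'L+2L')=(6,10)$ inequalities.

```python
import math
import random

# Illustrative sketch (our own choice of maps, not from the text):
# f(x) = 2x + sin x is a (2,2)-QI embedding of R into R, and
# g(y) = 3y + cos y is a (3,2)-QI embedding.  The composition rule
# predicts g∘f satisfies the (K K', K'L + 2L') = (6, 10) inequalities.

def f(x):
    return 2*x + math.sin(x)

def g(y):
    return 3*y + math.cos(y)

def satisfies_qi(h, K, L, pairs):
    """Check the two-sided (K,L)-QI inequality for h on sample pairs."""
    for x, xp in pairs:
        dX, dY = abs(x - xp), abs(h(x) - h(xp))
        if not (dX/K - L - 1e-9 <= dY <= K*dX + L + 1e-9):
            return False
    return True

def sample_pairs(n=1000, seed=0):
    rng = random.Random(seed)
    return [(rng.uniform(-100, 100), rng.uniform(-100, 100)) for _ in range(n)]
```

The composed constants are deliberately generous: the additive constant $K'L+2L'$ also absorbs the neighborhood condition in the definition of a QI.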
If $G$ is a finitely generated group then a choice of finite generating set gives a {\em word metric} on $G$.
A different choice of generating set gives a bi-Lipschitz equivalent metric. The \v{S}varc-Milnor lemma in this setting is
\begin{proposition}\label{QIhomeo} If $M=\operatorname{O}mega/\Gamma$ is a closed and properly-convex manifold,
then $(\operatorname{O}mega,d_{\operatorname{O}mega})$ is QI to $\pi_1M$. \end{proposition}
\begin{proof} Fix $x\in\widetilde M=\operatorname{O}mega$. Then $f:\pi_1M\rightarrow\widetilde M$ given by $f(g)=\tau_g(x)$, where $\tau_g$ is the deck transformation determined by $g$, is a QI.
\end{proof}
A metric space $(Y,d_Y)$ is {\em ML} if it satisfies the {\em Morse Lemma} that for all $(K,L)$ there is $S=S(K,L)>0$, called a {\em tracking constant}, such that if $\alpha$ and $\beta$ are $(K,L)$-quasi-geodesics in $Y$
with the same endpoints, then $\alpha\subset N_{S}(\beta)$ and $\beta\subset N_{S}(\alpha)$.
Since quasi-geodesics are sent to quasi-geodesics by quasi-isometries, it follows that the property of being ML is preserved by quasi-isometry.
Clearly ${\mathbb R}^2$ is not ML, and any norm on ${\mathbb R}^2$ is QI to the standard norm, so ${\mathbb R}^2$ with any norm is not ML.
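A concrete witness that ${\mathbb R}^2$ is not ML (an illustration of our own, not from the text): the L-shaped path from $(0,0)$ to $(n,n)$ through the corner $(n,0)$, parametrised by arclength, is a $(\sqrt2,0)$--quasi-geodesic for every $n$, yet the corner lies at distance $n/\sqrt2$ from the straight geodesic with the same endpoints, so no tracking constant can work.

```python
import math

# Witness that the Euclidean plane is not ML (our own illustration):
# the L-shaped path (0,0) -> (n,0) -> (n,n), parametrised by arclength,
# is a (sqrt(2), 0)-quasi-geodesic for every n, but its corner (n,0)
# lies at distance n/sqrt(2) from the straight segment [(0,0),(n,n)],
# so no single tracking constant S(sqrt(2),0) can exist.

def L_path(t, n):
    """Arclength parametrisation of the path (0,0) -> (n,0) -> (n,n)."""
    return (t, 0.0) if t <= n else (float(n), t - n)

def is_quasi_geodesic(n, samples=100):
    """Verify the (sqrt(2),0)-QI inequalities on a grid of parameter pairs."""
    K = math.sqrt(2)
    ts = [2*n*i/samples for i in range(samples + 1)]
    for s in ts:
        for t in ts:
            p, q = L_path(s, n), L_path(t, n)
            d = math.hypot(p[0] - q[0], p[1] - q[1])
            if not (abs(t - s)/K - 1e-9 <= d <= abs(t - s) + 1e-9):
                return False
    return True

def corner_drift(n):
    """Distance from the corner (n,0) to the geodesic y = x."""
    return n / math.sqrt(2)
```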
A geodesic in the Cayley graph picks out a sequence in $G$ that is a quasi-geodesic.
It follows from (\ref{QIhomeo}) that whether or not the universal cover of a closed properly-convex manifold is ML depends only on $\pi_1M$.
\begin{proposition}\label{stricthyperbolic} If $M=\operatorname{O}mega/\Gamma$ is a properly-convex closed manifold, then $M$ is strictly-convex if and only if $(\operatorname{O}mega,d_{\operatorname{O}mega})$ is {\em ML}.
\end{proposition}
\begin{proof} If $\operatorname{O}mega$ is not strictly-convex, then by (\ref{PETstrict}) it contains a PET $\Delta$
which is isometric to ${\mathbb R}^2$ with some norm, and therefore is not {\em ML}.
\begin{figure}
\caption{Quasi-geodesics}
\label{quasigeodesic}
\end{figure}
Now suppose $\operatorname{O}mega$ is strictly-convex. Given $(K,L)$ let $R=R(2K)$ be given by (\ref{strictprojection}). Let $S'=R+4RK+K+L$.
We show that $S=2KS'+2L$ is a tracking constant.
Refer to \Cref{quasigeodesic}. Let $\alpha'$ be a $(K,L)$--quasi-geodesic in $\operatorname{O}mega$ with endpoints $p$ and $q$, and $\beta'=[p,q]$.
For simplicity we assume $\alpha'$ is continuous. Let $\beta^+$ be the geodesic that contains $\beta'$ and $\pi:\operatorname{O}mega\rightarrow\beta^+$ be the projection.\\
Claim 1) $\alpha'\subset N(\beta^+,S')$.\\
Claim 2) $\alpha'\subset N(\beta',S/2)$ and $\beta'\subset N(\alpha',S/2)$.
Assuming this, if $\gamma'$ is another $(K,L)$-quasi-geodesic with endpoints $p$ and $q$ then
$\beta'\subset N(\gamma',S/2)$ so $$\alpha'\subset N(\beta',S/2)\subset N(N(\gamma',S/2),S/2)\subset N(\gamma',S)$$
which proves that $S$ is a tracking constant.
For Claim 1 suppose that $\alpha$ is the closure of a component of $\alpha'\setminus N(\beta^+,R)$. The endpoints
$x$ and $y$ of $\alpha$ satisfy $d(x,\beta^+)=d(y,\beta^+)=R$. Let $x',y'\in\beta^+$ be chosen so that $d(x,x')=R=d(y,y')$ and
let $\beta=[\pi x,\pi y]\subset\beta^+$. Then $$\operatorname{length}(\beta)\le 1+\operatorname{length}(\alpha)/2K$$ by (\ref{strictprojection}) applied with $b=2K$.
Since $\pi$ is distance non-increasing $d(x',\pi x)\le d(x', x)=R$, and similarly $d(y',\pi y)\le R$. Thus
$$d(x,y)\le d(x,x')+d(x',\pi x)+d(\pi x,\pi y)+d(\pi y,y')+d(y',y)\le 4R+\operatorname{length}(\beta)$$
Since $\alpha$
is a $(K,L)$-quasi-geodesic $$\begin{array}{rcl}
\operatorname{length}(\alpha) &\le & K d(x,y)+L \\
& \le & K(4R +\operatorname{length}(\beta))+L\\
& \le & K(4R+1+\operatorname{length}(\alpha)/2K)+L\\
& = &4RK+K+L+\operatorname{length}(\alpha)/2
\end{array}$$
Hence $\operatorname{length}(\alpha)\le 2(4RK+K+L)$. It follows that the distance of every point on $\alpha'$ from $\beta^+$ is at most
$$R+(1/2)\operatorname{length}(\alpha)=R+4RK+K+L=S'$$
This proves Claim 1.
For Claim 2, there is a maximal subarc $\delta\subset\alpha'$ that starts at $p$ and ends
at a point $w\in\alpha'$ such that $\pi w=p$. Possibly the arc is trivial with $w=p$. By the above
$d(w,\beta^+)\le S'$. The arc $\delta$ is a $(K,L)$--quasi-geodesic because it is a subarc of $\alpha$, so $d(w,\beta^+)\ge K^{-1}(\operatorname{length}(\delta)-L)$.
Thus $\operatorname{length}(\delta)\le K S'+L=S/2$. Since $\pi$ is distance non-increasing, $\operatorname{length}(\pi\delta)\le S/2$.
Now $\pi\alpha'$ extends beyond $\beta'$ by $\pi\delta$ so $\pi\alpha'\subset N(\beta',S/2)$.
Also $\beta'\subset\pi\alpha'$ so $\beta'\subset N(\alpha',S')$ and $K\ge 1$, so $\beta'\subset N(\alpha',K S'+L)$.
This proves the claim. \end{proof}
\begin{corollary}\label{stricthomeo} Suppose $M$ and $N$ are closed and properly-convex, and that $\pi_1M\cong\pi_1N$. If $M$ is strictly-convex, then $N$ is strictly-convex.
\end{corollary}
\begin{proof} Since ML is preserved by quasi-isometry, it follows from (\ref{stricthyperbolic}) and (\ref{QIhomeo}) that $M$ is strictly convex if and only if $\pi_1M$ is ML, and this is determined by $\pi_1M$.
\end{proof}
\begin{lemma}\label{noinvt} If $M=\operatorname{O}mega/\Gamma$ is a closed properly-convex manifold and
$\operatorname{O}mega'\subset\operatorname{O}mega$ is a non-empty properly-convex subset
that is preserved by $\Gamma$, then $\operatorname{O}mega'=\operatorname{O}mega$.
\end{lemma}
\begin{proof} Otherwise the function $F:\operatorname{O}mega\to {\mathbb R}$ given by $F(x)=d_{\operatorname{O}mega}(x,\operatorname{O}mega')$ is continuous, unbounded, and $\Gamma$--invariant.
Thus it descends to a continuous unbounded function $f:M\to{\mathbb R}$, contradicting the compactness of $M$.
\end{proof}
The {\em displacement distance of $\gamma\in\operatorname{Aut}(\operatorname{O}mega)$} is
$t(\gamma)=\inf\{d_{\operatorname{O}mega}(x,\gamma x): x\in\operatorname{O}mega\}$. The element $\gamma$ is {\em hyperbolic} if $t(\gamma)>0$.
\begin{lemma}[Hyperbolics]\label{hyperbolic} Suppose $\operatorname{O}mega\subset \operatorname{{\mathbb P}} V$ and $M=\operatorname{O}mega/\Gamma$
is a strictly-convex closed manifold and $1\ne \gamma\in\Gamma$.
Then
$\gamma$ is hyperbolic and there are
$a_{\pm}\in\operatorname{Fr}(\operatorname{O}mega)$ such that for all $x\in\operatorname{{\mathbb P}} V\setminus (H_+\cup H_-)$ we have
\[\lim_{n\to\pm\infty}\gamma^nx=a_{\pm}\]
where $H_{\pm}$ is the supporting hyperplane to $\operatorname{O}mega$ at $a_{\pm}$.
\end{lemma}
\begin{proof} Since $M$ is compact, the Arzela-Ascoli Theorem implies there is a closed
geodesic $C$ in $M$ representing the conjugacy class of $\gamma$ in $\pi_1M$
and $t(\gamma)=\operatorname{length}(C)>0$. Hence $\gamma$ is hyperbolic. By (\ref{projlemma}) $C$ is covered by a proper segment $\alpha=(a_-,a_+)$ in $\operatorname{O}mega$
that is preserved by $\gamma$.
By (\ref{dualcpct}) $M$ is $C^1$ so there are unique supporting hyperplanes $H_{\pm}$
to $\operatorname{cl}\operatorname{O}mega$ that contain $a_{\pm}$ respectively.
Then $Q=H_+\cap H_-$
is a codimension--2 subspace, and it is disjoint from $\operatorname{O}mega$ by {\em strict} convexity. Moreover $Q$ is
preserved by $\gamma$.
The pencil of hyperplanes $\{H_t:t\in L\}$ in $\operatorname{{\mathbb P}} V$ that contain $Q$ is dual to a line $L\subset\operatorname{{\mathbb P}}(V^*)$.
Since the dual of $\gamma$ acts on $L\cong{\mathbb R}P^1$ projectively and non-trivially, it
only fixes the two points $[H_{\pm}]\in L$. The other hyperplanes $[H_t]$ are moved by $\gamma$ away from $[H_-]$ and towards $[H_+]$. Since $\operatorname{O}mega$ is strictly-convex, $\gamma$ moves all points in $\operatorname{O}mega$ towards $a_+$.
Suppose $\ell$ is a projective line that contains $a_-$. If $\ell$ is not contained
in $H_-$ then, since $H_-$ is the unique supporting hyperplane at $a_-$, it follows that $\operatorname{O}mega\cap \ell\ne\emptyset$.
Thus $\gamma^k\ell\to\ell'$ where $\ell'$ is the projective line containing $\alpha$. Hence $\gamma^k(\ell\setminus a_-)\to a_+$
as $k\to\infty$.
This reasoning applied to $\gamma^{-1}$ gives the corresponding statements for $a_-$, and gives the second conclusion.\end{proof}
The point $a_+$ is called the {\em attracting}, and $a_-$ is the {\em repelling}, fixed point of $\gamma$,
and $(a_-,a_+)\subset \operatorname{O}mega$ is called the {\em axis} of $\gamma$. This axis is the only proper segment in $\operatorname{O}mega$ preserved by $\gamma$. The attracting fixed point of $\gamma^{-1}$ is the repelling fixed point of $\gamma$.
\begin{proposition}[Unique domain]\label{unique} Let $n\ge 2.$ Suppose $\operatorname{O}mega\subset{\mathbb R}P^n$ and $M=\operatorname{O}mega/\Gamma$ is a strictly-convex, closed $n$--manifold.
If $\operatorname{O}mega'$ is open and properly-convex and preserved by $\Gamma$, then $\operatorname{O}mega'=\operatorname{O}mega$.\end{proposition}
\begin{proof}
Let $X$ be the union, over all $1\ne\gamma\in\Gamma$, of the attracting fixed points of $\gamma$.
Then $X\subset\operatorname{Fr}\operatorname{O}mega$ by (\ref{hyperbolic}).
Let $W$ be the convex hull of $X$ in $\operatorname{cl}\operatorname{O}mega$.
Then $U=W\cap\operatorname{O}mega$ contains the axis of each hyperbolic in $\Gamma$, so $U$ is non-empty, convex,
and $\Gamma$-invariant. Since $U\subset\operatorname{O}mega$, we have $\operatorname{O}mega=U$ by (\ref{noinvt}).
Since $\partial\operatorname{O}mega$ is strictly-convex, it follows that $\operatorname{Fr}\operatorname{O}mega=\operatorname{cl} X$.
Since $\operatorname{O}mega'$ is preserved by $\Gamma$ it follows that $\operatorname{cl}\operatorname{O}mega'\supset \operatorname{cl} X$. Now $\operatorname{Fr}\operatorname{O}mega$ is a
convex hypersurface of dimension $n-1>0$.
Only one side of $\operatorname{Fr}\operatorname{O}mega$ is locally convex.
Hence $\operatorname{O}mega'$ contains points on the same side of $\operatorname{Fr}\operatorname{O}mega$ as $\operatorname{O}mega$. Thus $U'=\operatorname{O}mega'\cap\operatorname{O}mega\ne\emptyset$ is
properly convex and is preserved by $\Gamma$.
By (\ref{noinvt}) $\operatorname{O}mega=U'=\operatorname{O}mega'$.\end{proof}
\begin{corollary}[nilpotent subgroups]\label{nilpotent} If $M=\operatorname{O}mega/\Gamma$ is a closed, strictly-convex, projective manifold, then
every nilpotent subgroup of $\Gamma$ is cyclic.\end{corollary}
\begin{proof} By (\ref{hyperbolic}) every element of $\Gamma$ is hyperbolic, and the axis is the only segment preserved by a non-trivial hyperbolic in $\Gamma$.
Suppose $1\ne \alpha,\beta\in\Gamma$ and $[[\alpha,\beta],\beta]=1$. Let $\ell$ be the axis of $\beta$. Since $[\alpha,\beta]$
commutes with $\beta$, it preserves $\ell$.
Now $\beta$ preserves $\ell$ and $[\alpha,\beta]=(\alpha\beta\alpha^{-1})\beta^{-1}$ so $\alpha\beta\alpha^{-1}$ also preserves $\ell$.
But $\alpha\beta\alpha^{-1}$ is a hyperbolic that preserves $\alpha\ell$, thus $\alpha\ell=\ell$.
It follows by induction on the length of the upper central series
that if $\Gamma'\subset\Gamma$ is nilpotent then $\Gamma'$ preserves an axis.
The action of $\Gamma$ on $\operatorname{O}mega$
is free, so $\Gamma'$ acts freely on $\ell$. A discrete group acting freely by homeomorphisms on ${\mathbb R}$ is cyclic.
\end{proof}
\section{Convex Cones}\label{sec:cones}
This section is based on work of Vinberg, as simplified by Goldman.
Write $V={\mathbb R}^{n+1}$ and
fix an inner product $\langle\;\cdot\;,\;\cdot\;\rangle$ on $V$. This determines a norm on $V$, and induces a Riemannian metric
and associated volume form
on every smooth submanifold of $V$.
Given $\phi\in\mathcal C\operatorname{O}mega^*$, the centroid (defined below) of $\mathcal C_{\phi}=\phi^{-1}(1)\cap\mathcal C\operatorname{O}mega$ is a point $\mu(\mathcal C_{\phi})\in\mathcal C\operatorname{O}mega$.
Define $\Theta(\phi)=\mu(\mathcal C_{\phi})$. We show this is a homeomorphism $\Theta:\mathcal C\operatorname{O}mega^*\rightarrow\mathcal C\operatorname{O}mega$.
\begin{figure}
\caption{$\Theta:\mathcal C\operatorname{O}mega^*\rightarrow\mathcal C\operatorname{O}mega$}
\label{Thetapicnew}
\end{figure}
Let $S^n=\{x\in V:\ \|x\|^2=1\}$.
Suppose $\operatorname{O}mega$ is an open, properly-convex subset of $S^n$ and $\mathcal C=\mathcal C\operatorname{O}mega$ is the corresponding cone in $V$.
The {\em dual cone} is
$$\mathcal C^*=\operatorname{int}\left(\{\phi\in V^*:\ \phi(\mathcal C)\ge0\ \}\right)$$
The {\em centroid} of a bounded convex set
$K$ in $V$ is the point $\mu(K)$ in $K$ given by
$$\mu(K)=\left.\int_K x\operatorname{dvol}_K\right/\int_K \operatorname{dvol}_K$$
Here $\dim K\le \dim V$ and $\operatorname{dvol}_K$ is the induced volume form on $K$. The centroid is independent of the inner product.
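As a quick sanity check of the centroid formula (a numerical sketch of our own, not from the text): the centroid of the triangle with vertices $(0,0)$, $(1,0)$, $(0,1)$ is the average of its vertices, $(1/3,1/3)$, and a Monte Carlo estimate of $\mu(K)$ recovers this.

```python
import random

# Monte Carlo sanity check (our own) of mu(K) = (∫_K x dvol) / (∫_K dvol)
# for K the triangle with vertices (0,0), (1,0), (0,1); the centroid of a
# triangle is the average of its vertices, here (1/3, 1/3).

def centroid_monte_carlo(trials=200_000, seed=1):
    rng = random.Random(seed)
    sx = sy = count = 0.0
    for _ in range(trials):
        x, y = rng.random(), rng.random()
        if x + y <= 1.0:          # keep only points inside the triangle
            sx += x
            sy += y
            count += 1
    return sx/count, sy/count
```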
Given $\phi\in\mathcal C^*$, the set $\phi^{-1}(1)$ is an affine hyperplane in $V$, and
$\mathcal C_{\phi}=\{x\in\mathcal C: \phi(x)=1\}$ is a bounded subset of this hyperplane that separates $\mathcal C$.
The subset of $\mathcal C$ below this hyperplane, $\mathcal C\cap\phi^{-1}(0,1]$, has finite volume and boundary $\mathcal C_{\phi}$.
The {\em volume function} $\mathcal V:\mathcal C^*\to{\mathbb R}$ is defined by
\begin{align}\label{Vdefa}\mathcal V(\phi)=\operatorname{vol}(\mathcal C\cap\phi^{-1}(0,1])=\int_{\mathcal C\cap\phi^{-1}(0,1]}\operatorname{dvol}\end{align}Given $\phi\in\mathcal C^*$,
there is $v\in V$ such that $\phi(x)=\langle v,x\rangle$ for all $x\in V$.
Let $\operatorname{dB}$ be the volume form on $S^n$. We compute this integral in
polar coordinates, so
$\operatorname{dvol}=r^{n} \operatorname{dr}\wedge\operatorname{dB}$.
Given $x\in \mathcal C_{\phi}$, let $y=x/\|x\|$ be the corresponding point in $S^n$.
Using $\phi(x)=1$ gives \begin{align}\label{reqtn}
r(x)=\|x\|=\phi(x)/\phi(y)=\langle y,v\rangle^{-1}\quad{\rm and}\qquad x=\langle y,v\rangle^{-1}y\end{align}
Using polar coordinates
\begin{align}\label{Vformula}\mathcal V(\phi)=\int_{C_{\phi}}r^n\operatorname{dr}\wedge\operatorname{dB}=
\int_{\operatorname{O}mega}\left(\int_0^{\langle y,v\rangle^{-1}} r^n \operatorname{dr}\right)\operatorname{dB}_y
=(n+1)^{-1}\int_{\operatorname{O}mega}\langle y,v\rangle^{-n-1}\; \operatorname{dB}_y\end{align}
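The polar-coordinate formula can be verified numerically in the simplest case (a check of our own, not part of the text): for the positive quadrant in ${\mathbb R}^2$ (so $n=1$) with $v=(t,t)$, the region $\mathcal C\cap\phi^{-1}(0,1]$ is the triangle $\{x,y\ge0,\ x+y\le 1/t\}$ of area $1/(2t^2)$, which also exhibits the homogeneity $\mathcal V(t\cdot\phi)=t^{-n-1}\mathcal V(\phi)$ of property (iii) below.

```python
import math

# Numerical check (our own) of the polar-coordinate formula for n = 1.
# Take C the open positive quadrant of R^2 and v = (t, t), so that
# phi(x) = t(x_1 + x_2).  Then C ∩ phi^{-1}(0,1] is the triangle
# {x, y >= 0, x + y <= 1/t} of area 1/(2 t^2), and Omega is the open
# quarter circle, over which we integrate <y, v>^{-2} by the midpoint rule.

def V(t, steps=50_000):
    total = 0.0
    h = (math.pi/2) / steps
    for i in range(steps):
        theta = (i + 0.5) * h
        inner = t * (math.cos(theta) + math.sin(theta))   # <y, v>
        total += inner**(-2) * h
    return total / 2                                      # (n+1)^{-1} with n = 1
```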
For $q\in\mathcal C$, the set $\mathcal C^*_q=\{\phi\in\mathcal C^*:\ \phi(q)=1\ \}$
is the intersection of a hyperplane in $V^*$ with $\mathcal C^*$, and has compact closure. Property (iv) below is very useful later.
\begin{proposition}\label{Vprop}The volume function has the following properties:
\begin{itemize}
\item[(i)] $\mathcal V$ is smooth and strictly-convex.\\
\item[(ii)] $\mathcal V(\phi)\to\infty$ as $\phi\to\operatorname{Fr}\mathcal C^*$.\\
\item[(iii)] $\mathcal V(t\cdot\phi)=t^{-n-1}\mathcal V(\phi)$ for all $t>0$.\\
\item[(iv)]
There is a unique $\phi\in \mathcal C^*_q$ at which $\mathcal V|\mathcal C^*_q$ attains a minimum, and $q=\mu(\mathcal C_{\phi})$.
\end{itemize}
\end{proposition}
\begin{proof} We use the inner product to identify $V$ with $V^*$
so that $\mathcal C^*$ is a subset of $V$, and $\mathcal C_q^*=\{x\in \mathcal C^*:\langle x,q\rangle=1\}$.
We also
regard $\mathcal V(\phi)$ as the function $\mathcal W(v)$ given by \Cref{Vformula} and prove corresponding statements
for $\mathcal W$.
For fixed $y$ the integrand $f(v)=\langle y,v\rangle^{-n-1}$ in \Cref{Vformula} is a smooth and convex function of $v$, and it is strictly convex in directions not orthogonal to $y$.
Integrating over the open set of directions $y\in\operatorname{O}mega$ shows that $\mathcal W(v)$ is a smooth, {\em strictly} convex function of $v$. This proves (i).
If $0\ne\psi\in\operatorname{Fr}\mathcal C^*$, then there is $0\ne x\in\operatorname{Fr}\mathcal C$ with $\psi(x)=0$.
Thus ${\mathbb R}^+\cdot x\subset\operatorname{cl}\mathcal C_{\psi}$, so $\mathcal C_{\psi}$ is not compact. Moreover $\mathcal C_{\psi}$ is convex
and has non-empty interior, so $\operatorname{vol}(\mathcal C_{\psi})=\infty$. It easily follows that $\mathcal V(\phi)\to\infty$
as $\phi\to\psi$. This proves (ii), and (iii) follows from \Cref{Vdefa}. It follows from convexity and (ii)
that $\mathcal V|\mathcal C^*_q$ has a unique critical
point, and it is a minimum.
The gradient of $\mathcal W(v)$ is
\begin{align}\label{gradV} \nabla \mathcal W =-\int_{\operatorname{O}mega} \langle y,v\rangle^{-n-2} y\;\operatorname{dB}_y
\end{align}
From the definition of $\mathcal C^*_q$ it follows that the condition for a critical point is that $\nabla \mathcal W\in{\mathbb R}\cdot q$.
\begin{figure}
\caption{Volume forms on $\operatorname{O}mega$ and $\mathcal C_{\phi}$}
\label{dAdBfig}
\end{figure}
Refer to \Cref{dAdBfig}. Let $v\in V$ be dual to $\phi$, so $\phi(x)=\langle v,x\rangle$. Thus $\phi(v\|v\|^{-2})=1$ and
$v\|v\|^{-2}$ is the point on $\phi^{-1}(1)$ closest to $0$. Thus
the distance of $\phi^{-1}(1)$ from $0$ is $\|v\|^{-1}$. Let
$\pi:\mathcal C_{\phi}\to \operatorname{O}mega=S^n\cap\mathcal C$ be the radial projection $\pi(x)=x/\|x\|$. Given $x\in \mathcal C_{\phi}$ set $y=\pi(x)$ so $\|y\|=1$,
and define $\cos\theta=\langle y,v\rangle/\|v\|$. Since $\langle x,v\rangle=1$ it follows that $x=y/\langle y,v\rangle$ and
so $\|x\|=1/\langle y,v\rangle$.
Let $S_x\subset{\mathbb R}^{n+1}$ denote the sphere with center $0$ and radius $\|x\|$. The volume form on $S_x$ is $d_{S_x}=\|x\|^{n}\pi_1^*dB$, where
$\pi_1:S_x\rightarrow S^n$ is radial projection.
Let
$d A_x$ be the volume element on $\mathcal C_{\phi}$. Then $dA_x=(\cos\theta)^{-1}\pi_2^*d_{S_x}$,
where $\pi_2:\mathcal C_{\phi}\rightarrow S_x$ is radial projection. Then
$\pi=\pi_1\circ\pi_2:\mathcal C_{\phi}\rightarrow S^n$ is radial projection, and
combining gives
\begin{equation}\label{dAdB}
d A_x=(\cos\theta)^{-1}\|x\|^{n}\; \pi^*d B_y= \|v\|\langle y,v\rangle^{-n-1}\; \pi^*d B_y
\end{equation}
It follows from \Cref{gradV,dAdB} that $$-\nabla \mathcal W =\|v\|^{-1}\int_{\mathcal C_{\phi}}x \; \operatorname{dA}_x=\|v\|^{-1}\operatorname{vol}(\mathcal C_{\phi})\,\mu(\mathcal C_{\phi})$$
Thus the critical point condition $\nabla \mathcal W\in{\mathbb R}\cdot q$ for $\mathcal W|\mathcal C^*_q$ becomes
$$\mu(\mathcal C_{\phi})\in{\mathbb R}\cdot q$$
Since the centroid $\mu(\mathcal C_{\phi})$ is in $\mathcal C_{\phi}$, and $q\in \mathcal C_{\phi}$, it follows that at the minimum we have $\mu(\mathcal C_{\phi})=q.$
\end{proof}
The volume function gives a natural, strictly-convex hypersurface in $\mathcal C$:
\begin{definition}\label{SPdef}
Suppose $\mathcal C$ is a properly-convex cone. Let $\mathcal D\subset\mathcal C$ be the intersection of all affine halfspaces with the property that the volume of the subset
of $\mathcal C$ outside the halfspace equals one.
\end{definition}
In particular $\partial\mathcal D$ is strictly-convex
and meets every ray in $\mathcal C$ once. Clearly $\partial\mathcal D$ is preserved by $\operatorname{SL}(\mathcal C)$.
If $q\in \partial\mathcal D$, then with $\phi$ as in (\ref{Vprop})(iv) the tangent hyperplane to $\partial\mathcal D$ at $q$
is $\phi^{-1}(1)$, and $q=\mu(\mathcal C_{\phi})$.
In other words, the part inside $\mathcal C$ of the tangent plane to $\mathcal D$ at a point $q\in\partial\mathcal D$
has centroid $q$.
\begin{corollary}\label{Thetaprop} If $\operatorname{O}mega$ is properly-convex, then $\Theta:\mathcal C\operatorname{O}mega^*\rightarrow\mathcal C\operatorname{O}mega$ is a homeomorphism that maps rays to rays, and $[\Theta]:\operatorname{O}mega^*\rightarrow\operatorname{O}mega$ is a homeomorphism.\end{corollary}
\begin{proof}
Given $p\in\operatorname{O}mega$ there is a unique $x\in\partial\mathcal D$
with $p=[x]$. By (\ref{Vprop})(iv) there is a unique $\phi\in\mathcal C^*$ with $\phi(x)=1$
and $\mathcal V(\phi)=1$. Moreover $\mu(\mathcal C_{\phi})=x$ and so $[\Theta][\phi]=p$.
Hence $[\Theta]$ is a bijection.
Clearly $\Theta$ is continuous. By (\ref{Vprop})(ii) $[\Theta]$ is proper, and thus a homeomorphism.
Since $\Theta(t\phi)=t^{-1}\Theta(\phi)$, rays are mapped to rays. It follows that $\Theta$ is also a homeomorphism.
\end{proof}
The dual action of $A\in\operatorname{SL} V$ on $\operatorname{{\mathbb P}} V^*$
is given by $A[\phi]=[\phi\circ A^{-1}]$. If $\Gamma\subset\operatorname{SL} V$ and preserves $\operatorname{O}mega$, then the dual action
of $\Gamma$ preserves $\operatorname{O}mega^*$. It is clear that $\Theta$ is equivariant with respect to these actions.
It directly follows from (\ref{C1domain}) and (\ref{Thetaprop}) that:
\begin{corollary}[Vinberg]\label{dualmfd} If $M=\operatorname{O}mega/\Gamma$ is a closed, properly-convex manifold, then $M^*=\operatorname{O}mega^*/\Gamma$
is a properly-convex manifold that is homeomorphic to $M$. Thus $\pi_1M\cong\pi_1M^*$.
\end{corollary}
If $M=\operatorname{O}mega/\Gamma$, then $M$ is called {\em $C^1$} if $\operatorname{O}mega$ is $C^1$. It follows from
(\ref{C1domain}) that:
\begin{corollary}\label{C1} $M$ is strictly-convex if and only if $M^*$ is $C^1$.\end{corollary}
\begin{corollary}\label{dualcpct} If $M=\operatorname{O}mega/\Gamma$ is a closed, strictly-convex manifold, then $M^*=\operatorname{O}mega^*/\Gamma$
is a closed, strictly-convex manifold. We have $\pi_1M\cong\pi_1M^*$ and call $M^*$ the {\em dual} of $M$.
Moreover, $M$ is $C^1$.
\end{corollary}
\begin{proof} Since $M$ is strictly convex, it is properly convex, so $M^*$ is a properly-convex manifold,
and $\pi_1M\cong\pi_1M^*$. By (\ref{stricthomeo}) $M^*$ is closed, and
strictly-convex. Thus $M=(M^*)^*$ is $C^1$ by (\ref{C1domain}).
\end{proof}
The centroid of a bounded open convex set $\operatorname{O}mega\subset {\mathbb R}^n$ is a distinguished point in $\operatorname{O}mega$.
We wish to define something similar for certain subsets of the sphere $S^n=\{x\in{\mathbb R}^{n+1}:\|x\|=1\}$. Imagine a room that contains a transparent globe close to one wall, with a light source at the center of the globe. The shadow of Belgium appears on the wall.
You can rotate the globe so that the centroid of this shadow is the point $p$ on the wall closest to the center of the globe.
The point on the globe that projects to $p$ is called the
{\em (spherical) center} of Belgium.
The open hemisphere that is
the $\pi/2$ neighborhood of $y\in S^n$ is $U_y=\{x\in S^n: \langle x,y\rangle>0\}$.
Radial projection $\pi_y:U_y\longrightarrow {\operatorname{T_y}}S^n$
from the origin onto the tangent space
to $\mathbb R\operatorname{P}_+^n$ at $y$ is given by $$\pi_y(x)=\frac{x-\langle x,y\rangle y}{\langle x,y\rangle}$$
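Two elementary properties of $\pi_y$ are easy to verify numerically (a small sketch of our own): the image is orthogonal to $y$, so it lies in the tangent space at $y$, and every point on the ray through $x$ has the same image, so the projection is well defined on rays.

```python
# Elementary checks (our own sketch) of pi_y(x) = (x - <x,y>y) / <x,y>:
# the image is orthogonal to y, so it lies in the tangent space at y,
# and all points on a ray through the origin have the same image.

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

def pi_y(x, y):
    c = dot(x, y)
    return tuple((xi - c*yi)/c for xi, yi in zip(x, y))
```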
Identify $S^n$ with $\mathbb R\operatorname{P}_+^n$ using $\pi_{\xi}:S^n\to\mathbb R\operatorname{P}_+^n$.
\begin{definition} If $\operatorname{O}mega\subset \mathbb R\operatorname{P}_+^n$ is open, then $y\in \operatorname{O}mega$ is called
a {\em (spherical) center of $\operatorname{O}mega$} if
$\hat{\mu}(\pi_y(\operatorname{O}mega))=\pi_y(y)$.
\end{definition}
If $A\in\operatorname{O}(n+1),$ then $A$ is an isometry of the inner product and so $(A\operatorname{O}mega)^*=A(\operatorname{O}mega^*)$, and $Ay$ is a center of $A\operatorname{O}mega$
if $y$ is a center of $\operatorname{O}mega$. The following property of $\partial\mathcal D$ is important for the proof of the Closed Theorem.
\begin{theorem}\label{centerexists} If $\operatorname{O}mega\subset S^n$ is properly-convex, then it has a unique spherical center $[x]$.
Moreover
$x$ is the point on $\partial\mathcal D$ that minimizes $\|x\|$.
\end{theorem}
\begin{proof} Let $\phi\in V^*$ be given by $\phi(y)=\langle y,x\rangle$.
The tangent plane to $\partial\mathcal D$ at $x$ is $\phi^{-1}(1)$ and is orthogonal to $x$.
Thus $\pi_x(\operatorname{O}mega)=\mathcal C_{\phi}$ and $\mu(\mathcal C_{\phi})=x$.
\end{proof}
\begin{corollary} If $\operatorname{O}mega\subset{\mathbb R}P^n$ is properly-convex and $p\in \operatorname{O}mega$, then
there is an affine patch ${\mathbb R}^n\subset{\mathbb R}P^n$ such that $p$ is the centroid of $\operatorname{O}mega$ in ${\mathbb R}^n$.
\end{corollary}
\begin{proof} There is a unique $x\in\partial\mathcal D$ with $p=[x]$.
Choose an inner product on $V$ so that $x$ is the closest point to $0$ on $\partial\mathcal D$.
The required affine patch is ${\mathbb R}P^n\setminus\operatorname{{\mathbb P}}(x^{\perp})$.
\end{proof}
\section{Open}\label{opensec}
A set is {\em locally convex} if every point
has a convex neighborhood. There is a basic local-to-global principle for convexity.
\begin{proposition}\label{convexhyper} If $K$ is a closed, connected, locally convex subset of ${\mathbb R}^n$, then $K$ is convex.
\end{proposition}
If $K$ has non-empty interior, then local convexity only needs to be checked at points in $\partial K$. Suppose now that $K\subset{\mathbb R}^{n+1}$ is closed, connected, and contained in the halfspace $x_1\ge0$. Then $\operatorname{O}mega=\operatorname{int}(K\cap(0\times{\mathbb R}^{n}))$
is a convex subset of ${\mathbb R}^n$ if the subset $S\subset \partial K$
where $x_1>0$ is a locally convex hypersurface in ${\mathbb R}^{n+1}$. Informally: the base of a convex mountain is convex.
We now explain how to use this to show that a manifold is properly-convex.
A smooth hypersurface $S\subset{\mathbb R}^n$ is {\em Hessian-convex} if
the surface is locally the zero-set of a smooth, real-valued function with positive-definite Hessian. Suppose $\operatorname{O}mega\subset{\mathbb R}P^n$ is properly-convex and $\mathcal C\operatorname{O}mega=\{v\in {\mathbb R}^{n+1}_0:[v]\in\operatorname{O}mega\}$ is the corresponding convex cone.
Suppose $M=\operatorname{O}mega/\Gamma$ is a compact, and properly-convex, $n$--manifold.
Then $\widetilde W=\mathcal C\operatorname{O}mega/\Gamma\cong M\times (0,\infty)$
is a properly-convex affine $(n+1)$--manifold. We may divide out by a homothety to obtain a compact affine $(n+1)$--manifold $W\cong M\times S^1$.
Suppose there is a hypersurface $S\subset \mathcal C\operatorname{O}mega$ that is $\Gamma$--invariant and Hessian-convex away from $0$.
Then $Q=S/\Gamma$ is a compact, Hessian-convex, codimension--1 submanifold of $W$.
Let $U$ be the component of $\mathcal C\operatorname{O}mega\setminus S$ whose closure does not contain $0$.
Let $K\subset{\mathbb R}P^{n+1}$ be
the closure of $U$ and regard ${\mathbb R}P^{n+1}={\mathbb R}^{n+1}\sqcup{\mathbb R}P^n_{\infty}$. The interior of $K\cap{\mathbb R}P^n_{\infty}$ can be identified with $\operatorname{O}mega$.
The existence of such $Q$ {\em implies} the existence of $S\subset{\mathbb R}^{n+1}$, which {\em implies} $\operatorname{O}mega$ is properly-convex, by the reasoning above.
For general reasons (the Ehresmann-Thurston principle) deforming $\Gamma$ a small amount to $\Gamma'$
gives a new projective $n$--manifold $M'\cong M$, and a new affine $(n+1)$--manifold $W'\cong M\times S^1$.
Now $W'$ contains a submanifold $Q'$ that is Hessian-convex, provided
the deformation of the developing map is small enough in $C^2$. It then follows
that $M'$ is properly-convex. This is the approach taken in \cite{CLT2} for non-compact manifolds.
Here, we work with a piecewise linear submanifold $Q$ in place of a smooth one. Hessian convexity
is replaced by the condition
that at each vertex of the triangulation the determinants, of certain matrices formed by the relative positions of vertices, are strictly positive.
This ensures local convexity near the vertex. This version of convexity is preserved by small $C^0$--deformations
of the developing map.
We start by reviewing these ideas for hyperbolic manifolds.
The hyperboloid model of the hyperbolic plane is the action of $\operatorname{SO}(2,1)$ on
the sheet $S\subset{\mathbb R}^3$ of the surface $x^2+y^2-z^2=-1$ with $z>0$. If we identify ${\mathbb R}^3$ with ${\mathbb R}P^3\setminus{\mathbb R}P^2_{\infty}$
then we can regard this as an affine action on
${\mathbb R}P^3$ by
using the embedding $\operatorname{SO}(2,1)\oplus(1)\subset\operatorname{SL}(4,{\mathbb R})$.
The open disc $$D=\{[x:y:z:0]:\ x^2+y^2<z^2\}\subset{\mathbb R}P^2_{\infty}$$ has the same frontier as $S$
and $(\operatorname{SO}(2,1),D)$ is the projective (Klein) model of the hyperbolic plane.
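The following quick numeric check (our illustration, not part of the text) confirms that a boost in $\operatorname{SO}(2,1)$ preserves both the hyperboloid $S$ and the cone over the Klein disc $D$:

```python
import math

def boost(t):
    # One-parameter subgroup of SO(2,1) mixing x and z; it preserves the
    # quadratic form x^2 + y^2 - z^2.
    c, s = math.cosh(t), math.sinh(t)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [s, 0.0, c]]

def apply(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def form(v):
    x, y, z = v
    return x * x + y * y - z * z

p = [0.0, 0.0, 1.0]                 # a point on S: x^2 + y^2 - z^2 = -1
q = apply(boost(0.7), p)
assert abs(form(q) + 1.0) < 1e-9    # the boost keeps p on S

d = [0.3, 0.4, 1.0]                 # [0.3 : 0.4 : 1], inside D: x^2 + y^2 < z^2
e = apply(boost(0.7), d)
assert form(e) < 0                  # the boost keeps d inside the cone over D
```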
\begin{figure}
\caption{The Hyperboloid and Klein models of ${\mathbb H}^2$}
\label{SandOmega}
\end{figure}
The surface $\operatorname{cl}(S\cup D)\subset{\mathbb R}P^3$ bounds a closed $3$-ball $B\subset{\mathbb R}P^3$.
Let $\operatorname{O}mega=B\setminus\operatorname{Fr} D\cong D\times I$.
The fact that $S$ is
a strictly-convex surface {\em implies} $\operatorname{cl}(S\cup D)$
is a convex surface in some affine patch which {\em implies} $\operatorname{O}mega$ is properly-convex, and this {\em implies} $D$ is properly-convex.
The action of $\operatorname{SO}(2,1)\oplus(1)$ preserves $B$.
Suppose $\Gamma\subset\operatorname{SO}(2,1)$ and $\Sigma=D/\Gamma$ is a properly-convex projective (and hyperbolic) surface. Let $\Gamma'=\Gamma\oplus(1)$ then $N=B/\Gamma'\cong\Sigma\times[0,1]$
is a properly-convex manifold with one flat boundary component $\Sigma=D/\Gamma$ and one
strictly-convex boundary component $M=S/\Gamma$. The fact that $M$ is a strictly-convex surface {\em implies} $\Sigma$ is properly-convex.
We will generalize this construction to arbitrary properly-convex manifolds in place of $\Sigma$. But first we
divide out by a homothety.
Consider the cone $\mathcal C=\{\lambda\cdot x:x\in S,\ \ \lambda>0\}$. There is a product structure $\widetilde \phi:S\times{\mathbb R}^+ \to\mathcal C$ given by $\widetilde\phi(x,\lambda)=\lambda^{-1}\cdot x$
on $\mathcal C$ that is preserved by $\Gamma'$. Let $H\subset\operatorname{GL}(4,{\mathbb R})$ be the cyclic
group generated by $\exp (\operatorname{Diag}(0,0,0,1))$. Then $\Gamma'$ centralizes $H$ and the group $\Gamma^+\subset\operatorname{GL}(4,{\mathbb R})$ generated by $\Gamma'$ and $H$
preserves $\mathcal C$ and $W:=\mathcal C/\Gamma^+$ is a $3$--manifold.
In what follows ${\mathbb R}^+=\{x\in{\mathbb R}:x>0\}$, and $S^1={\mathbb R}^+/\exp({\mathbb Z})$ has universal cover ${\mathbb R}^+$.
There is a product structure $\phi:\Sigma\times S^1\to W$
covered by $\widetilde \phi$, and the surfaces $\phi(\Sigma,\theta)$ are convex.
The reader might contemplate all this in the case one dimension lower, where $\Gamma\subset\operatorname{SO}(1,1)$
is generated by a hyperbolic element and $\mathcal C/\Gamma^+$ is an affine structure on $S^1\times S^1$.
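To make this one-dimensional case concrete (the explicit matrices below are our illustration, not taken from the text): for $s>0$ take
$$\gamma=\begin{pmatrix}\cosh s&\sinh s\\ \sinh s&\cosh s\end{pmatrix}\in\operatorname{SO}(1,1),\qquad h=e^{-1}\cdot\operatorname{I},\qquad \mathcal C=\{(x,z)\in{\mathbb R}^2:z>|x|\}.$$
Since $z'\pm x'=e^{\pm s}(z\pm x)$, the element $\gamma$ preserves $\mathcal C$ and each hyperbola $z^2-x^2=\operatorname{const}$, while $h$ is the time-one homothety, and $\mathcal C/\langle\gamma,h\rangle\cong S^1\times S^1$.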
We now return to the general setting. Regard ${\mathbb R}^+$ as a subgroup of $\operatorname{GL}(1,{\mathbb R})$, and write an element as either $x$ or as $(x)$.
Let $\operatorname{FG}(n)=\operatorname{SL}(n,{\mathbb R})\oplus{\mathbb R}^+\subset\operatorname{GL}(n+1,{\mathbb R})$. We also use the notation ${\mathbb R}^+$ to denote the subgroup
$1\oplus {\mathbb R}^+\subset \operatorname{FG}(n)$. {\em Flow geometry} is the subgeometry
$(\operatorname{FG}(n),{\mathbb R}^n_0)\subset(\operatorname{GL}(n+1,{\mathbb R}),{\mathbb R}P^n)$ of projective geometry where ${\mathbb R}^n_0\equiv\{[x:1]: 0\ne x\in{\mathbb R}^n\}\subset{\mathbb R}P^n$.
The
subgroup ${\mathbb R}^+\subset \operatorname{FG}(n)$ is called the {\em homothety flow} and the action of $1\oplus(t)$ on ${\mathbb R}^n_0$ is given
by $(1\oplus(t))[x:1]=[x:t]$. In affine coordinates, this action is $(1\oplus(t))x=t^{-1}\cdot x$. Thus points in ${\mathbb R}^n_0$ move {\em towards} $0$ as $t$ increases.
The projection $\pi_{hor}:\operatorname{FG}(n)\to\operatorname{SL}(n,{\mathbb R})$ is called the {\em horizontal holonomy} and $\pi_{rad}:\operatorname{FG}(n)\to{\mathbb R}^+$ is called the {\em radial holonomy}.
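As a quick sanity check (illustrative only, not part of the text), the homothety action on the affine chart can be computed directly:

```python
def act(t, x):
    # Action of 1 + (t) in FG(n) on the affine chart: [x : 1] -> [x : t] = [x/t : 1].
    v = list(x) + [1.0]                      # homogeneous coordinates [x : 1]
    w = v[:-1] + [t * v[-1]]                 # scale the last coordinate by t
    return [wi / w[-1] for wi in w[:-1]]     # de-homogenize

assert act(2.0, [3.0, -6.0]) == [1.5, -3.0]  # points move towards 0 as t grows
assert act(1.0, [3.0, -6.0]) == [3.0, -6.0]  # t = 1 is the identity
```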
\begin{definition} If $M$ is a closed $n$--manifold, then a {\em flow product structure} on $M\times S^1$
is a flow geometry structure $(\operatorname{dev},\rho)$ such that:
\begin{itemize}
\item $\operatorname{dev}(x,t)=t^{-1}\cdot \operatorname{dev}(x,1)$ where $(x,t)\in\widetilde M\times{\mathbb R}^+$
\item $\pi_{rad}(\rho\;\pi_1M)=1$ regarding $\pi_1M\subset\pi_1(M\times S^1)$
\end{itemize}
\end{definition}
It follows that $\pi_{rad}(\rho\;\pi_1(M\times S^1))=\exp{\mathbb Z}$.
The action of ${\mathbb R}^+$ on the right factor of $\widetilde W=\widetilde M\times {\mathbb R}^+$ is conjugate, by the developing map, to the homothety action on ${\mathbb R}^{n+1}_0$.
The action of $S^1$ on the right factor of $W=M\times S^1$ is covered by this action on $\widetilde W$. We call all of these actions {\em homothety}.
Flow geometry is also a subgeometry of affine geometry, so we may use affine notions.
Suppose $W$ is an affine $n$--manifold, and $S$ is a hypersurface in $W$. Then
$S$ is a {\em convex hypersurface}, also called {\em locally convex}, if
there is a submanifold $V\cong S\times [0,1)\subset W$ with $\partial V=S$, and
every point $p\in S$ has a small neighborhood $U\subset V$, such that $\operatorname{dev}(U)\subset{\mathbb R}^n$ is convex.
If $S$ is locally convex and, in addition, every maximal flat subset of the universal cover $\widetilde S$ is compact,
then $S$ is called {\em strongly locally convex}. This condition automatically holds if $S$ contains no segment.
A hypersurface, $S\subset W$, is {\em simplicial} if it has a triangulation
by flat simplices.
Strongly locally convex simplicial surfaces are a substitute for {\em strictly convex} in this context.
If $W=M\times S^1$ has a flow product structure, then $S$ is {\em outwards} locally-convex
if, in addition, $t\cdot \partial V\subset V$ for some $t>1$. In other words $\operatorname{dev}\widetilde S$ is locally-convex {\em away} from $0$ in ${\mathbb R}^{n+1}$.
\begin{definition} A flow product structure $(\operatorname{dev},\rho)$ on $M\times S^1$ is {\em flow convex}
if $M\times\theta$ is a strongly locally convex and outwards locally-convex hypersurface in $M\times S^1$ for some (and hence all) $\theta\in S^1$.
\end{definition}
If $\sigma,\tau\subset W$ are flat $(n-1)$-simplices
and $\sigma\cap\tau\ne\emptyset$ then they are {\em coplanar} if they have lifts $\widetilde\sigma,\widetilde\tau\subset\widetilde W$
with $\widetilde\sigma\cap\widetilde\tau\ne\emptyset$ and $\operatorname{dev}(\widetilde\sigma\cup\widetilde\tau)$ is contained in a hyperplane.
\begin{definition} A simplicial hypersurface in an affine manifold is {\em generic-convex} if it is locally convex, and whenever two $(n-1)$-simplices intersect then they
are not coplanar.
\end{definition}
It is immediate that generic-convex implies strongly locally convex. Moreover a small movement of the vertices of a generic-convex hypersurface produces a new generic-convex hypersurface. We state these properties in the following:
\begin{lemma}\label{convexlemma} If $S\subset M$ is a generic-convex simplicial hypersurface, then $S$ is strongly locally convex.
Moreover, for each vertex $v$ of $S$, there is a neighborhood $U(v)\subset M$,
such that the hypersurface $S'$ obtained
by moving each vertex $v$ inside $U(v)$ is generic-convex. \end{lemma}
\begin{proof}
The first conclusion is immediate.
The generic-convex condition is equivalent to the positivity of the determinants
of certain $n\times n$ matrices formed by vectors of the form $u-v$ for certain vertices, $u$, that are adjacent to $v$ in $S$.
If the vertices of $S$ are moved a small enough distance, these determinants remain positive. Thus $S'$
is generic-convex. Now we explain the determinant condition.
Let $K$ be the union of the simplices in $S$ that contain $v$. Then $\dim K=n-1$.
Suppose $\sigma$
is a simplex in $K$ of dimension $(n-1)$. Then $\sigma$ is contained in a hyperplane $H$.
If $K$ is generic convex
then the closure, $H^+$, of one component of the complement of $H$
satisfies $H^+\cap K=\sigma$.
Conversely, $S$ is generic convex at $v$ if this condition holds
for each such $\sigma$. Let $\{v_0,\cdots,v_{n-1}\}$ be the vertices of $\sigma$.
Let $f(u)=\det(v_0-u:v_1-u:\cdots:v_{n-1}-u)$. The condition for generic convexity is
that the sign of $f(u)$ is nonzero and constant as $u$ ranges over all vertices in $K$ that are connected by an edge to $\sigma$
and that are not in $\sigma$. This ensures the simplices adjacent to $\sigma$ in $K$ all lie on the same side
of the hyperplane that contains $\sigma$.
\end{proof}
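As an illustrative sanity check (ours, not part of the text), the determinant condition in the proof can be computed directly in the simplest case $n=2$, where $S$ is a polygonal curve and $\sigma$ is an edge:

```python
def det2(a, b):
    # Determinant of the 2x2 matrix with columns a and b.
    return a[0] * b[1] - a[1] * b[0]

def f(sigma, u):
    # f(u) = det(v0 - u : v1 - u) for an edge sigma = (v0, v1); its sign
    # records which side of the line through sigma the point u lies on.
    v0, v1 = sigma
    return det2((v0[0] - u[0], v0[1] - u[1]), (v1[0] - u[0], v1[1] - u[1]))

# A convex polygonal arc; sigma = (v0, v1) is the middle edge.
v_prev, v0, v1, v_next = (0, 0), (1, 0), (2, 1), (2, 2)
signs = [f((v0, v1), u) for u in (v_prev, v_next)]
# Generic convexity at sigma: f is nonzero with constant sign on the
# neighbouring vertices, so both adjacent edges lie on the same side.
assert all(s > 0 for s in signs) or all(s < 0 for s in signs)
```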
The {\em convex hull, $\operatorname{CH}(Y)$,} of $Y\subset {\mathbb R}^n$ is the intersection of all the {\em closed} convex sets that contain $Y$.
Let $\pi_{\xi}:V_0\to\operatorname{{\mathbb P}}_+(V)$ be the projection $\pi_{\xi}(x)=[x]_+$.
The next result can be proven by using the existence of a {\em characteristic hypersurface}
~\cite{Koecher1957, Vinberg1963homog}, or of an {\em affine sphere} \cite{MR437805, MR2743442}.
\begin{theorem}\label{chulllemma} Suppose $\mathcal C\subset{\mathbb R}^{n+1}$ is a convex cone, $\operatorname{O}mega=\pi_{\xi}(\mathcal C)\subset{\mathbb R}P_+^n$ and $M=\operatorname{O}mega/\Gamma$ is a properly-convex closed manifold.
Then there
is a convex proper submanifold $P\subset \mathcal C$ with $t\cdot P\subset P$ for all $t\ge 1$,
and $(\pi_{\xi}|\partial P):\partial P\to \operatorname{O}mega$ is a homeomorphism.
Moreover, $P$ can be chosen such that $\partial P/\Gamma\subset\mathcal C/\Gamma$ is a compact, generic-convex, simplicial hypersurface.
\end{theorem}
\begin{proof} Let $\mathcal D\subset\mathcal C$ be the closed convex subset given by (\ref{SPdef}).
Then $N=\mathcal D/\Gamma$ is a convex submanifold of $\mathcal C/\Gamma$ with boundary
a strictly-convex hypersurface. Let $H\subset {\mathbb R}^{n+1}$ be an affine hyperplane
that separates $\mathcal D$ into two components, such that the bounded component, $Y$, projects to a subset $\pi(Y)\subset N$ that is contained
in a small
ball in $N$.
Replace $N$ by $N\setminus \pi(Y)$. This produces a flat part in the boundary.
This can be done finitely many times, to produce a
submanifold $Q\subset N$
with simplicial boundary. The preimage of $Q$ in $\mathcal C$ is the required $P$. \end{proof}
\begin{corollary}[\cite{MR3500374}] A closed properly-convex manifold $\operatorname{O}mega/\Gamma$ has a convex polyhedral fundamental domain
in $\operatorname{O}mega$.
\end{corollary}
\begin{proof} Use the notation in the proof of (\ref{chulllemma}).
Let $H$ be a closed halfspace that contains $\mathcal D$ and such that $(\partial\mathcal D)\cap\partial H$ is a single point.
Let $Y=\cap(\gamma\cdot H)$ where the intersection is over all $\gamma\in\Gamma$. Then
$\mathcal D\subset Y\subset\mathcal C\operatorname{O}mega$, and $Q=Y\cap\partial H$ is a convex polytope.
Moreover $\partial Y=\Gamma\cdot Q$ is a locally finite union of images of $Q$.
It follows that $\pi_{\xi}(Q)$
is a convex polyhedral fundamental domain.
\end{proof}
Suppose that $M=\operatorname{O}mega/\Gamma$ is properly-convex with $\operatorname{O}mega\subset{\mathbb R}P_+^n$ and $\Gamma\subset\operatorname{SL}(n+1,{\mathbb R})$.
Then $\mathcal C=\{x\in{\mathbb R}^{n+1}: [x]_+\in\operatorname{O}mega\}$ is a convex cone.
Let $\Gamma^+=\Gamma\oplus\exp{\mathbb Z}\subset \operatorname{FG}(n+1)$.
Then $W=\mathcal C/\Gamma^+$ is a flow product structure on $M\times S^1$ by (\ref{chulllemma}). Conversely if $(\operatorname{dev},\rho)$ is a flow product structure
on $M\times S^1$ then $(\pi_{\xi}\circ\operatorname{dev},\pi_{hor}\circ\rho)$ is a properly-convex structure on $M\times\theta$ because of:
\begin{theorem}\label{sufficient} If $M\times S^1$ is compact and flow-convex, then $M\times S^1$ is properly-convex.
\end{theorem}
\begin{proof}
Let $N'=M\times S^1$ and $\pi_{_{N'}}:\widetilde N'=\widetilde M\times{\mathbb R}^+\rightarrow N'$ be the universal cover,
and $\operatorname{dev}:\widetilde N'\rightarrow{\mathbb R}^{n+1}$ the developing map for $N'$.
Let $R'=\widetilde M\times 1$ and choose a basepoint $p'\in R'$.
Let $V\subset{\mathbb R}^{n+1}$ be a $2$-dimensional vector subspace that contains $p=\operatorname{dev}(p')$.
Then $\operatorname{dev}^{-1}V = X\times {\mathbb R}^+$ for some $1$-submanifold
$X\subset \widetilde M$.
Let $C'$
be the component of $R'\cap \operatorname{dev}^{-1}V$ that contains $p'$. Then $C'$ is a
connected curve in $\operatorname{dev}^{-1}V$ without endpoints
that is transverse to
the homothety flow and convex outwards.
The curve $C=\operatorname{dev}(C')$ is immersed in $V_0$, and is everywhere transverse to the radial direction, and convex outwards: radially away from $0$.
Let $\pi:V_0\to S^1$ be radial projection $\pi(x)=x/\|x\|$.
{\bf Claim 1:} $\pi:C\to S^1$ is injective. Since $C'$ is transverse to the radial direction in $N'$, it follows that
$\theta=\pi\circ \operatorname{dev}:C'\rightarrow S^1$ is an immersion. Let $\ell\subset V$ be the tangent line to $C$ at $p$. Suppose
$q\in C\cap\ell$ is distinct from $p$. Then at some point $r$ in $C$ between $p$ and $q$, the distance
of $r$ from $\ell$ is a maximum. This contradicts that $C$ is convex outwards at $r$.
Hence $\pi(\ell)$ is an open semi-circle in $S^1$ that contains $\pi (C)$. Thus $\theta$ is an immersion of $C'$ into an arc, therefore $\theta$ is injective, and it follows that $\pi$ is injective. This proves Claim~1.
Define $R\subset{\mathbb R}^{n+1}$ to be the union, as $V$ varies,
of all the curves $C$ above. Since the developing map is a local homeomorphism, each curve
$C$ is an open arc, and $R$ is the developing image of a connected open subset of $R'$. Thus $R$ is
a locally convex hypersurface without boundary.
{\bf Claim 2:} $R$ is a closed subset of ${\mathbb R}^{n+1}$. Otherwise there is a sequence $r_k\in R$ that converges to a point
$r\in {\mathbb R}^{n+1}\setminus R$. Define two-dimensional vector subspaces of ${\mathbb R}^{n+1}$ by $V_k=\langle p,r_k\rangle$, and $V=\langle p,r\rangle$. Then $V_k$ converges to $V$.
The segment $\gamma_k=[p,r_k]$ in $V_k$ is the developing image of a segment $\gamma'_k=[p',r'_k]$ in $\widetilde N'$.
This is because $\operatorname{dev}({\mathbb R}^+\cdot R')$ contains ${\mathbb R}^+\cdot (R\cap V_k)$, and $\gamma_k$ is in the latter.
\begin{figure}
\caption{$\kappa$ approaching $e^tC$ from above or below}
\label{kappa}
\end{figure}
Let $C= V\cap R$ and let $C'\subset R'$ be the curve with $\operatorname{dev}(C')=C$. Let $C^+\subset V$
be the Hausdorff limit of the curves $C_k=R\cap V_k$ in ${\mathbb R}^{n+1}$. Then $r\in C^+\setminus C$.
Let $w$ be the limit point of $C$ in $C^+$ closest along $C^+$ to $r$. Let $\kappa:[0,1)\rightarrow V$ be an affine homeomorphism
with image
$[p,w)$.
There is an affine ray $\widetilde\kappa':[0,1)\rightarrow \widetilde N'$ with $\kappa=\operatorname{dev}\circ\widetilde\kappa'$
that starts at $\widetilde\kappa'(0)=p'$ and is the limit of segments $[p',c'_n]$ in $\widetilde N'$
where $c'_n\in C'$
and $\lim_{n\to\infty} \operatorname{dev}(c'_n)= w$. The points $c_n'$ in $\widetilde N'$ leave every compact set.
In $V$ the point $w$ is on $e^tC$ for some $t$, and $\kappa$ limits on $w$, either from above or from below. See \Cref{kappa}.
It follows that $\kappa'=\pi_{_{N'}}\circ\widetilde\kappa':[0,1)\rightarrow N'$ is an affine ray that
spirals in towards, and accumulates on,
some subset $F$ of $M\times t\subset N'$. In other words $F$ is the forward limit set of $\kappa'$.
Now $F$ is flat because $\kappa'$
is affine. Some component of $\pi_{_{N'}}^{-1}(F)$ in $\widetilde N'$ is not compact, otherwise $\kappa'$
converges to a limit point in $F$ that maps to $w$. This
contradicts that $M\times t$ is strongly locally convex, and thus contradicts that $M\times S^1$ is flow-convex.
This proves Claim 2.
Hence $R$ is a hypersurface that is properly embedded in ${\mathbb R}^{n+1}$ and that is transverse to the radial direction
and convex outwards. Then $X=\cup_{t\ge 1} t\cdot R$ is the closure of the component of ${\mathbb R}^{n+1}\setminus R$ that does not contain $0$.
Thus $X$ is closed and has boundary $R$. Since $R=\partial X$ is a convex hypersurface, it follows that $X$ is convex, and the image of
$X$ in ${\mathbb R}P^n$ is properly convex. Now ${\mathbb R}^+\cdot R'$ is a clopen subset of $\widetilde N'$, so these sets are equal.
Hence $M\times S^1$ is also properly convex.\end{proof}
Let $\operatorname{FC}(M\times S^1)\subset \operatorname{Hom}(\pi_1M,\operatorname{SL}(n+1,{\mathbb R}))$ consist of all $\sigma$ such that there is a flow convex structure $(\operatorname{dev},\rho)$ on $M\times S^1$
with $\sigma=\pi_{hor}\circ\rho$.
\begin{theorem}\label{developing} If $M\times S^1$ is compact, then $\operatorname{FC}(M\times S^1)$ is an open subset of
$\operatorname{Hom}(\pi_1M,\operatorname{SL}(n+1,{\mathbb R}))$ where $n=\dim M$.
\end{theorem}
\begin{proof} Suppose $(\operatorname{dev}_0,\rho_0)$ is a flow convex structure on $W=M\times S^1$. By (\ref{sufficient}) $W$ is properly-convex. Thus $M$ is properly
convex. Then by (\ref{chulllemma}) there is a triangulation, $\mathcal T$, of $W$ such that $M\times 0$ is a generic-convex subcomplex
of $\mathcal T$ that is outwards locally convex.
Let $\widetilde{\mathcal T}$ be the lifted triangulation on the universal cover $\widetilde W$.
Let $\mathcal C\subset{\mathbb R}^{n+1}_0$ be the image of $\operatorname{dev}_0$ and $\Gamma=\rho_0(\pi_1W)$. Then $\mathcal C$ is triangulated by $\operatorname{dev}_0\widetilde{\mathcal T}$.
Suppose $K\subset\mathcal C$ is a compact set.
Let $D$ be a finite subcomplex that contains $K$ such that $\Gamma\cdot D=\mathcal C$.
Let $\mathcal P$ be the (finite) set of vertices of $D$ and choose
a subset $\mathcal P^-\subset\mathcal P$ that consists of one point in each $\Gamma$-orbit.
For each $v\in\mathcal P$ there is a unique $g_v\in\pi_1M$
such that $\rho_0(g_v)v\in\mathcal P^-$.
Given $v\in\mathcal P$ define $f_v:\operatorname{Rep}(M)\to{\mathbb R}^{n+1}_0$ by \begin{equation}\label{eq9}f_v(\rho)=\rho(g_v^{-1})\rho_0(g_v)v\end{equation} for $\rho\in \operatorname{Rep}(M)$.
This is continuous and $f_v(\rho_0)=v$. Define $\mathcal P(\rho)=\{f_v(\rho):v\in\mathcal P\}$,
then $\mathcal P(\rho_0)=\mathcal P$. If $\rho$ is close enough to $\rho_0$, there is a simplicial complex $D(\rho)\subset{\mathbb R}^{n+1}$ with vertex set $\mathcal P(\rho)$,
and a simplicial homeomorphism $F:D\to D(\rho)$ defined as follows.
Given $X\subset D$ a map $F:X\to {\mathbb R}^{n+1}$ is called {\em equivariant}
if whenever $a,b\in X$ and $g\in\pi_1M$ and $\rho_0(g)a=b$ then $\rho(g)F(a)=F(b)$. Since
the action of $\Gamma$ is free there is at most one such $g$.
Define $F:\mathcal P\to\mathcal P(\rho)$ by $F(v)=f_v(\rho)$.
Two vertices $u,v\in\mathcal P$ have the same image in $M$ if and only if $\rho_0(g_v)v=\rho_0(g_{u})u\in\mathcal P^-$.
Since $\rho_0(g_v)v=\rho(g_v)f_v(\rho)$ it follows from \Cref{eq9} that $\rho(g_v)f_v(\rho)=\rho(g_{u})f_u(\rho)$,
thus $\rho(g_v)F(v)=\rho(g_{u})F(u)$. Hence $F$ is equivariant.
Extend $F$ over each simplex $\sigma\in D$ using the affine map determined by the images of the vertices. This
extension is equivariant. If $\rho$ is close enough to $\rho_0$ then $F$ is a homeomorphism.
Whenever $g\in\pi_1M$, and $\rho_0(g)$ identifies two simplices in $\partial D$, we use
$\rho(g)$ to identify the corresponding simplices of $D(\rho)$. Set $N=D(\rho)/\sim$. Since $\Gamma\cdot D=\mathcal C$ and $W=\mathcal C/\Gamma$, we have $D/\sim\;=W$, so $F$ covers a simplicial homeomorphism $f:W\to N$.
It follows that $N$ is an affine structure on $W$ for some developing pair $(\operatorname{dev},\rho)$ with $\operatorname{dev}|D=F.$
Since $K\subset D$, by choosing $\rho$ close
enough to $\rho_0$, then $\operatorname{dev}|K$ is close to $\operatorname{dev}_0|K$.
By choosing $K$ large enough, it follows from (\ref{convexlemma}) that the subcomplex $M\times 0$ is generic-convex, and thus locally convex. Thus $(\operatorname{dev},\rho)$ is a flow convex structure on $M\times S^1$.
\end{proof}
\subsection{Proof of (\ref{open}. Open)} Suppose $\rho_0\in\operatorname{Rep}_P(M)$ and $(\operatorname{dev},\rho_0)$ is a development pair for $M$.
By (\ref{chulllemma}) there is a flow convex structure on $M\times S^1$ with horizontal holonomy $\rho_0$. By (\ref{developing})
if $\rho$ is close to $\rho_0$ there is a flow convex structure on $M\times S^1$ with horizontal holonomy $\rho$.
By (\ref{sufficient}) this is a properly-convex structure. Thus $M$ has a properly-convex structure with holonomy $\rho$.
\qed
\section{Closed}
We first give an outline of the proof of the Closed Theorem.
Suppose $\rho_k$ is a sequence of holonomies of properly-convex real projective structures on $M$,
so that $M\cong \operatorname{O}mega_k/\rho_k(\pi_1M)$ with $\operatorname{O}mega_k$ properly-convex.
Suppose the holonomies converge pointwise to
$\lim\rho_k=\rho_{\infty}$. If $M$ is strictly-convex, a special case of Chuckrow's theorem (\ref{discfaith}) implies $\rho_{\infty}$
is discrete and faithful; in general $\rho_{\infty}$ is neither.
After taking a subsequence we may assume $\operatorname{O}mega_{\infty}=\lim\operatorname{O}mega_k\subset{\mathbb R}P^n$ exists.
If $\operatorname{O}mega_{\infty}$ is properly-convex then $\operatorname{O}mega_{\infty}/\rho_{\infty}(\pi_1M)$ is a properly-convex structure on $M$.
But $\operatorname{O}mega_{\infty}$
might have smaller dimension, or it might not be properly-convex. We describe this by saying {\em the domain
has degenerated}.
The {\em box estimate} (\ref{boxestimate}) implies that one may replace the original sequence $\rho_k$
by conjugates $\rho'_k$ preserving domains $\operatorname{O}mega'_k$, such that $\lim\rho_k'=\sigma$ and
$\lim \operatorname{O}mega'_k=\operatorname{O}mega$ is properly-convex.
Then $N=\operatorname{O}mega/\sigma(\pi_1M)$ is a properly-convex manifold
homotopy equivalent to $M$. Hence $N$ is closed and $\pi_1M\cong\pi_1N$. It then follows from
Gromov-Hausdorff convergence that $M$ is
homeomorphic to $N$.
Also $N$ is strictly-convex
by (\ref{stricthomeo}). Finally, $\sigma$ is irreducible by (\ref{irreducible}), which implies that, in fact,
the original domains $\operatorname{O}mega_k$ did not degenerate. Thus $\rho_{\infty}$ is the holonomy
of a strictly-convex structure on $M$.
We supply the missing details, starting with the algebraic preliminaries.
\begin{theorem}[Irreducible]\label{irreducible} Suppose $M=\operatorname{O}mega/\Gamma$ is a strictly-convex closed manifold and $\dim M\ge 2$.
Then $\Gamma$ does not preserve any proper projective subspace.
\end{theorem}
\begin{proof} Suppose ${\mathbb R}^{n+1}=U\oplus V$ with $U\ne 0\ne V$ and $\Gamma$ preserves $U$. Then
$\Gamma$ preserves $Y=\operatorname{cl}\operatorname{O}mega\cap\operatorname{{\mathbb P}}_+ U$.
Now $Y\cap \operatorname{O}mega= \emptyset$ by (\ref{noinvt}).
Suppose $Y=\emptyset$. By the hyperplane separation theorem, there is $\phi\in ({\mathbb R}^{n+1})^*$ such that $\ker \phi$ contains
$U$, and $\phi(\operatorname{cl}\operatorname{O}mega)>0$.
Thus $[\phi]\in \operatorname{{\mathbb P}}_+ U^0\cap\operatorname{O}mega^*$.
By (\ref{dualcpct}) the dual manifold $M^*=\operatorname{O}mega^*/\Gamma$ is compact and strictly-convex, and the dual action of $\Gamma$ preserves $U^0$,
which contradicts (\ref{noinvt}). Thus $Y\ne\emptyset$, so
$Y\subset\operatorname{Fr}\operatorname{O}mega$.
Since $M$ is strictly-convex, $Y=\{c\}$ is a single point that is fixed by $\Gamma$.
Thus every non-trivial element of $\Gamma$ has an axis
with one endpoint at $c$.
If $\ell$ and $\ell'$ are the axes of $\gamma,\gamma'\in\Gamma$ then they limit on $c$. Now $\operatorname{O}mega$ is $C^1$ at $c$ by (\ref{C1})
so $d_{\operatorname{O}mega}(p,\ell')\to0$ as the point $p$ on $\ell$ approaches $c$. Let $\pi:\operatorname{O}mega\to M$
be the projection. Then $C=\pi(\ell)$ and $C'=\pi(\ell')$ are closed geodesics that become arbitrarily close to each other. It follows that $C=C'$, so $\ell=\ell'$. Thus $\Gamma$
preserves $\ell$ which contradicts (\ref{noinvt}) unless $\operatorname{O}mega=\ell$, but then $\dim M=1$.\end{proof}
The following is a special case of Chuckrow's theorem (see \cite{CHU} or \cite{KAP}(8.4)).
\begin{lemma}\label{discfaith} If $M$ is a closed, strictly-convex, projective manifold
and $\dim M\ge 2$, then the closure of $\operatorname{Rep}_S(M)$ in $\operatorname{Rep}(M)$ consists of discrete faithful
representations.
\end{lemma}
\begin{proof} Let $d$ be the metric on $G=\operatorname{GL}(n+1,{\mathbb R})$ given by $d(g,h)=\max|(g-h)_{ij}|$ and set $\|g\|=d(I,g)$.
Let $\mathcal W\subset\pi_1M$ be a finite generating set.
Suppose the sequence $\rho_n\in \operatorname{Rep}_S(M)$ converges to $\rho_{\infty}\in \operatorname{Rep}(M)$.
Then there is a compact set $K\subset G$ such that $\rho_n(\mathcal W)\subset K$
for all $n$.
The map $\theta:G\times G\to G$ given by $\theta(g,h)=[[g,h],h]$ has zero derivative on $G\times\{\operatorname{I}\}$.
Thus there is a neighborhood $U\subset G$ of the identity such that if $k\in K$ and $u\in U$, then
$\|\; [[k,u],u]\;\|\le \|\;u\;\|/2$.
Since $\rho=\rho_n$ is discrete, there is $1\ne\alpha\in\pi_1M$ that minimizes $\|\rho(\alpha)\|$.
Suppose that $\rho(\alpha)\in U$. If $\beta\in \mathcal W$, then $\rho(\beta)\in K$, so $\|\;\rho[[\beta,\alpha],\alpha]\;\|\le \|\;\rho \alpha\;\|/2$. By minimality,
$\rho[[\beta,\alpha],\alpha]=\operatorname{I}$, and
since $\rho$ is injective,
$[[\beta,\alpha],\alpha]=\operatorname{I}$.
By (\ref{nilpotent}) $\alpha$ and $\beta$ commute.
Since $\mathcal W$ is a generating set, $\alpha$ is central in $\pi_1M$. Thus the entire group preserves the axis of $\alpha$.
This contradicts (\ref{noinvt}) because $\dim M \ge 2$.
Thus $\rho(\alpha)\notin U.$ Since $U$ is an open neighbourhood of the identity, it contains an open metric ball $U'$, and since $\rho(\alpha)$ has minimal norm, $\rho(\gamma)\notin U'$ for all $1\neq \gamma \in \pi_1M.$ Since $U'$ is independent of $n,$ passing to the limit gives $\rho_{\infty}(\gamma)\notin U'$ for all $1\neq \gamma \in \pi_1M.$
This implies that $\rho_{\infty}$ is discrete and faithful.
\end{proof}
Let $\mathfrak B=\prod_{i=1}^n[-1,1]\subset{\mathbb R}^n\subset{\mathbb R}P^n$. For each $K>0$, the set $K\cdot \mathfrak B=\prod_{i=1}^n[-K,K]$ is called a {\em box}.
\begin{lemma}[Box estimate]\label{boxestimate} If $A=(A_{ij})\in \operatorname{GL}(n+1,{\mathbb R})$
and $K\ge1$ and $[A](\mathfrak B)\subset K\cdot\mathfrak B$ then $$|A_{ij}|\le 2K\cdot|A_{n+1,n+1}|$$
\end{lemma}
\begin{proof} Set $\alpha=A_{n+1,n+1}$. Using the standard basis we have
$$[x_1e_1+\cdots +x_ne_n+e_{n+1}]=[x_1:x_2:\cdots:x_n:1]=(x_1,\cdots,x_n)\in\mathfrak B\quad \Leftrightarrow\quad \max_i |x_i|\le 1$$
First consider the entries $A_{i,n+1}$ in the last column of $A$. Since $[e_{n+1}]=0\in \mathfrak B$ we have
$$[Ae_{n+1}]=[A_{1,n+1}e_1+A_{2,n+1}e_2+\cdots +A_{n,n+1}e_n+\alpha e_{n+1}]\in K\cdot\mathfrak B$$
It follows that $|A_{i,n+1}/\alpha|\le K$. This establishes the bound
when $j=n+1$ and $i\le n$.
Next consider the entries $A_{n+1,j}$ in the bottom row with $j\le n$. Observe that
$$p=[t e_j+e_{n+1}]\in\mathfrak B\quad\Leftrightarrow\quad |\;t\;|\le 1$$
Then $[A]p=[A(te_j+e_{n+1})]\in K\cdot \mathfrak B$. This is in ${\mathbb R}^n$ so the $e_{n+1}$ component
is not zero. Hence $t A_{n+1,j}+\alpha\ne 0$ whenever $|\;t\;|\le 1$ and it follows that $|A_{n+1,j}| <|\alpha|$.
Since $K\ge 1$ the required bound follows when $i=n+1$ and $j\le n$.
The remaining entries are $1\le i,j\le n$. Since $p=[p_1:\cdots:p_{n+1}]=[A(te_j+e_{n+1})]\in K\cdot \mathfrak B$ it follows that
$$|t|\le1\quad\Rightarrow\qquad \left|\frac{p_i}{p_{n+1}}\right|=\left|\frac{A_{i,n+1}+t A_{i,j}}{\alpha + t A_{n+1,j}}\right| \le K$$
By the previous case $|A_{n+1,j}|<|\alpha|$, so the denominator is not zero for all $|t|\le1$. It follows that
$$|\alpha + t A_{n+1,j}|\le 2|\alpha|$$
Thus
$$|t|\le1\quad\Rightarrow\qquad\left|A_{i,n+1}+t A_{i,j}\right| \le 2K\cdot|\alpha|$$
We may choose the sign of $t=\pm1$ so that $A_{i,n+1}$ and $t A_{i,j}$ have the same sign. Then
$$ \left| A_{i,j}\right|\le \left|A_{i,n+1}+t A_{i,j}\right| $$
This gives the result
$ \left| A_{i,j}\right|\le 2|\alpha|\cdot K$ in this remaining case.
\end{proof}
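The box estimate can also be checked numerically (an illustrative sketch of ours, not part of the text). Since each coordinate of a projective map is fractional-linear, hence monotone in each variable separately, the sup-norm of the image of the box is attained at a corner, which lets us compute the smallest admissible $K$:

```python
import itertools
import random

random.seed(0)
n = 2
# A small perturbation of the identity: the last homogeneous coordinate of
# A(x, 1) stays nonzero on the box B = [-1, 1]^n, so [A] is defined there.
A = [[(1.0 if i == j else 0.0) + random.uniform(-0.2, 0.2)
      for j in range(n + 1)] for i in range(n + 1)]

def proj(x):
    # [A] applied to [x : 1], read off in the affine chart.
    v = list(x) + [1.0]
    w = [sum(A[i][j] * v[j] for j in range(n + 1)) for i in range(n + 1)]
    return [wi / w[n] for wi in w[:n]]

# Sup-norm of [A](B), computed over the corners of the box.
image_sup = max(abs(c) for x in itertools.product([-1.0, 1.0], repeat=n)
                for c in proj(x))
K = max(1.0, image_sup)            # smallest K >= 1 with [A](B) inside K.B
bound = 2 * K * abs(A[n][n])       # the bound 2K|A_{n+1,n+1}| of the lemma
assert all(abs(A[i][j]) <= bound for i in range(n + 1) for j in range(n + 1))
```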
If $\operatorname{O}mega\subset{\mathbb R}^n$ has finite positive Lebesgue measure and the {\em centroid} of $\operatorname{O}mega$ is
$\hat{\mu}(\operatorname{O}mega)=0$, then
$$Q_{_{\operatorname{O}mega}}(y)=\int_{\operatorname{O}mega}\left(\|x\|^2\|y\|^2-\langle x,y\rangle^2\right) \operatorname{dvol}_x$$
is a positive definite quadratic form on ${\mathbb R}^n$ called the {\em inertia tensor}.
\begin{lemma}[Uniform estimate]\label{Klemma} For each dimension $n$ there is $K=K(n)>1$ such that if $\operatorname{O}mega\subset{\mathbb R}^n$ is an open bounded convex
set with inertia tensor $Q_{_{\operatorname{O}mega}}=x_1^2+\cdots+x_n^2$ and centroid at the origin, then $K^{-1}\mathfrak B\subset\operatorname{O}mega\subset K\cdot\mathfrak B$. Moreover, if $A\in\operatorname{GL}(\operatorname{O}mega)$, then $|A_{ij}|\le K\cdot|A_{n+1,n+1}|$.
\end{lemma}
\begin{proof} The first conclusion follows from the theorem of Fritz John~\cite{Fritz}, see also \cite{ball}.
Let $D\in\operatorname{GL}(n+1,{\mathbb R})$ be the diagonal matrix $\operatorname{Diag}(K,\cdots,K,1)$ then $\mathfrak B\subset D(\operatorname{O}mega)\subset K^2\mathfrak B$.
Set $A'=D\cdot A\cdot D^{-1}$ then $A'\in\operatorname{GL}(D(\operatorname{O}mega))$, thus $|A'_{ij}|\le 2K^2\cdot|A'_{n+1,n+1}|$ by (\ref{boxestimate}).
Now $|A'_{n+1,n+1}|=|A_{n+1,n+1}|$ and $|A_{i,j}|\le K^2 |A'_{i,j}|$ thus
$|A_{ij}|\le 2K^4\cdot|A_{n+1,n+1}|$. The result now holds using the constant $2K^4$.
\end{proof}
\subsection{Proof of (\ref{closed}. Closed)} Suppose $\rho\in\operatorname{Rep}(M)$ is the limit of the sequence $\rho_k\in\operatorname{Rep}_S(M)$.
Let $\operatorname{O}mega_k\subset{\mathbb R}P^n$ be the properly-convex open set preserved by $\Gamma_k=\rho_k(\pi_1M)$, then $M\cong M_k=\operatorname{O}mega_k/\Gamma_k$.
Choose
an affine patch ${\mathbb R}^n\subset{\mathbb R}P^n$. Then by (\ref{centerexists}) there is $\alpha_k\in \operatorname{PO}(n+1)$
such that $\alpha_k(\operatorname{O}mega_k)\subset{\mathbb R}^n$ has centroid $0\in {\mathbb R}^n$. We may choose $\alpha_k$ so that the
inertia tensor $Q_k=Q(\alpha_k(\operatorname{O}mega_k))$ is diagonal in the standard coordinates on ${\mathbb R}^n$, and the entries on the main
diagonal of $Q_k$ are non-increasing going down the diagonal.
Since $\operatorname{PO}(n+1)$ is compact, after subsequencing
we may assume the conjugates of $\rho_k$ by $\alpha_k$ converge.
We now replace the original sequence of representations
and domains by this new sequence.
Let $K=K(n)$ be given by (\ref{Klemma}). There is a unique positive diagonal matrix $D_k$
such that $Q_k=D_k^{-2}$. Set $\operatorname{O}mega'_k=D_k\operatorname{O}mega_k$, then $Q_{\operatorname{O}mega'_k}=x_1^2+\cdots+x_n^2$.
By (\ref{Klemma}) $$K^{-1}\mathfrak B\subset \operatorname{O}mega'_k\subset K\cdot\mathfrak B$$
Given $g\in\pi_1M$, then $A=A(k,g)=\rho_k(g)\in \operatorname{SL}(n+1,{\mathbb R})$ preserves $\operatorname{O}mega_k$.
The matrix $B=B(k,g)=D_k A(k,g) D_k^{-1}$ preserves $\operatorname{O}mega'_k$. By (\ref{Klemma}) $$\forall i,j\quad |B_{i,j}|\le K\cdot |B_{n+1,n+1}|$$ Since $D_k$ is diagonal it follows that $$B_{n+1,n+1}=A_{n+1,n+1}$$
Now $A_{n+1,n+1}=A(k,g)_{n+1,n+1}$ converges as $k\to\infty$ for each $g$. Hence the entries of $B(k,g)$ are uniformly bounded for fixed
$g$ as $k\to\infty$.
Thus we may pass to a subsequence where $B(k,g)=D_k\rho_k(g)D_k^{-1}$ converges
for every $g\in\pi_1M$, and this gives a limiting representation $\sigma=\lim D_k\rho_kD_k^{-1}$.
The space consisting of properly-convex open sets $\operatorname{O}mega$
with $K^{-1}\cdot \mathfrak B\subset\operatorname{O}mega\subset K\cdot\mathfrak B$ is compact. Therefore there is a subsequence
so that
$\operatorname{O}mega=\lim\operatorname{O}mega_k'$ exists.
Then $K^{-1}\mathfrak B\subset \operatorname{O}mega\subset K\cdot\mathfrak B$, hence
$\operatorname{O}mega$ is open and properly-convex. Then $\sigma$ is discrete and faithful by (\ref{discfaith}), and $\Gamma=\sigma(\pi_1M)$
preserves $\operatorname{O}mega$,
so $N=\operatorname{O}mega/\Gamma$ is a properly-convex manifold. Since $M$ is closed and $\pi_1M\cong \pi_1N$, then by (\ref{htpyequiv})
$N$ is also closed. Since $M$ is strictly-convex, (\ref{stricthomeo}) implies
$N$ is
strictly-convex.
Since $\operatorname{Rep}_P(N)$ is open, for $D_k\rho_k(g)D_k^{-1}$ close enough to $\sigma$, there is a properly-convex open set $\operatorname{O}mega'_k\subset{\mathbb R}P^m$
that is preserved by $\Gamma_k$ and $N_k=\operatorname{O}mega'_k/\Gamma_k$
is homeomorphic to $N$. By (\ref{stricthomeo}) $N_k$ is strictly-convex. By (\ref{unique}) $\operatorname{O}mega'_k=\operatorname{O}mega_k$ so $N\cong N_k=M_k\cong M$. Thus $\sigma\in\operatorname{Rep}_S(M)$.
If $D_k$ does not remain bounded, then, since the diagonal entries are non-increasing going down the diagonal and $\rho_k(g)$ remains bounded,
$\sigma(g)$ is block upper-triangular, and therefore $\sigma$ is reducible. But since $N$ is strictly-convex, (\ref{irreducible}) implies $\sigma$ is irreducible;
therefore $D_k$ stays bounded. Hence we may pass to a subsequence so that the $D_k$ converge. Then $\sigma=\rho$, so $\rho\in\operatorname{Rep}_S(M)$.
\qed
\small
\end{document}
\begin{document}
\title {Squarefree vertex cover algebras}
\author {Shamila Bayati and Farhad Rahmati}
\address{Shamila Bayati, Faculty of Mathematics and Computer Science,
Amirkabir University of Technology (Tehran Polytechnic), 424 Hafez Ave., Tehran
15914, Iran}\email{[email protected]}
\address{Farhad Rahmati, Faculty of Mathematics and Computer Science,
Amirkabir University of Technology (Tehran Polytechnic), 424 Hafez Ave., Tehran
15914, Iran} \email{[email protected]}
\begin{abstract}
In this paper we introduce squarefree vertex cover algebras. We study when these algebras coincide with the ordinary vertex cover algebras and when they are standard graded. In this context we exhibit a duality theorem for squarefree vertex cover algebras.
\end{abstract}
\subjclass{13A30, 05C65}
\keywords{Monomial ideals; Alexander duality; vertex cover algebras; squarefree Borel ideals}
\maketitle
\section*{Introduction}
The study of squarefree vertex cover algebras was originally motivated by the wish to better understand the Alexander dual of the facet ideals of the skeletons of a simplicial complex. In the special case of the simplicial complex $\Delta_r(P)$ whose facets correspond to sequences $p_{i_1}\leq p_{i_2}\leq \ldots \leq p_{i_r}$ in a finite poset $P$, it was shown in \cite{VHF} that $I(\Delta_{r}(P)^{(k)})^\vee=(I(\Delta_{r}(P))^\vee)^{\langle d-k\rangle}$; see Section~1 for a detailed explanation of the formula.
The question arises whether this kind of duality is valid for more general simplicial complexes. Considering this problem, it turned out that this is indeed the case for a pure simplicial complex $\Delta$ on the vertex set $[n]$, provided a certain algebra attached to $\Delta$ is standard graded. The algebra in question, which we call the squarefree vertex cover algebra of $\Delta$ and denote by $B(\Delta)$, is defined as follows: let $K$ be a field and $S=K[x_1,\ldots,x_n]$ the polynomial ring over $K$ in the variables $x_1,\ldots,x_n$. Then $B(\Delta)$ is the graded $S$-algebra generated by the monomials $x^{\bold c} t^k\in S[t]$ where the $(0,1)$-vector ${\bold c}$ is a $k$-cover of $\Delta$ in the sense of \cite{HHT}. Thus, in contrast to the vertex cover algebra $A(\Delta)$ introduced in \cite{HHT}, whose generators correspond to all $k$-covers, $B(\Delta)$ is generated only by the monomials corresponding to squarefree $k$-covers, called binary $k$-covers in \cite{DV}. In particular, $B(\Delta)$ is a graded $S$-subalgebra of $A(\Delta)$ whose generators in degree $1$ coincide with those of $A(\Delta)$.
The graded components of $B(\Delta)$ are of the form $L_k(\Delta)t^k$, where $L_k(\Delta)$ is a monomial ideal whose squarefree part, denoted by $L_k(\Delta)^{sq}$, corresponds to the squarefree $k$-covers. The above mentioned duality is a consequence of the following duality
\begin{eqnarray}
\label{volleyball}
L_j(\Delta^{(d-i)})^{sq}=L_i(\Delta^{(d-j)})^{sq}, \quad 1\leq i,j\leq d,
\end{eqnarray}
inside $B(\Delta)$, which is valid for any pure simplicial complex of dimension $d-1$, no matter whether $B(\Delta)$ is standard graded or not.
This result is a simple consequence of Theorem~\ref{dual}, where it is shown that the squarefree $k$-covers of $\Delta$ correspond to the vertex covers of the $(d-k)$-skeleton $\Delta^{(d-k)}$ of $\Delta$.
The duality described in (\ref{volleyball}) yields the desired generalization of \cite[Theorem 1.1]{VHF}, and we obtain in Corollary~\ref{duality} that
\[
I(\Delta^{(k)})^\vee=(I(\Delta)^\vee)^{\langle d-k\rangle} \quad\text{for all} \quad k
\]
if and only if $B(\Delta)$ is standard graded. Therefore it is of interest to know when $B(\Delta)$ is standard graded.
The starting point of our investigations has been the formula $I(\Delta_{r}(P)^{(k)})^\vee=(I(\Delta_{r}(P))^\vee)^{\langle d-k\rangle}$. As explained before, this implies that $B(\Delta_r(P))$ is standard graded. As a last result in Section~1, we show that even the algebra $A(\Delta_r(P))$ is standard graded.
In Section 2 we compare the algebras $A(\Delta)$ and $B(\Delta)$, and discuss the following questions:
\begin{itemize}
\item[(1)] When is $B(\Delta)$ standard graded, and when does this imply that $A(\Delta)$ is standard graded?
\item[(2)] When do we have $B(\Delta)=A(\Delta)$?
\end{itemize}
In general $B(\Delta)$ may be standard graded while $A(\Delta)$ is not, as shown by an example which was communicated to the authors by Villarreal; see Section~2.
On the other hand, quite often it happens that $A(\Delta)$ is standard graded if $B(\Delta)$ is standard graded. In Proposition~\ref{standardgraph} we show that for any $1$-dimensional simplicial complex $\Delta$, the algebra $B(\Delta)$ is standard graded if and only if $A(\Delta)$ is standard graded. We also show in Proposition~\ref{coveringIdeal} that the same statement holds true when the facet ideal of $\Delta$ is the covering ideal of a graph. In Theorem~\ref{no-odd} it is shown that this is also the case for all subcomplexes of $\Delta$ if and only if $\Delta$ has no special odd cycles. In the remaining part of Section~2 we present cases for which $B(\Delta)=A(\Delta)$. The $1$-dimensional simplicial complexes with this property are classified in Proposition~\ref{graphequality}. A classification of simplicial complexes $\Delta$ of higher dimension with $B(\Delta)=A(\Delta)$ seems to be inaccessible for the moment. Thus we consider simplicial complexes in higher dimensions which generalize the concept of a graph: roughly speaking, we replace the vertices of a graph by simplices of various dimensions, and the resulting underlying graph is called the intersection graph of $\Delta$. The main result regarding this class of simplicial complexes is formulated in Theorem~\ref{str-intersec-prop}, where the criterion for the equality $A(\Delta)=B(\Delta)$ is given in terms of the intersection graph of $\Delta$.
The last section of this paper is devoted to the study of the vertex cover algebras of shifted simplicial complexes. A simplicial complex $\Delta$ is shifted if its set of facets is a Borel set ${\mathcal B}$. When the set of facets of $\Delta$ is principal Borel, we have $B(\Delta)=A(\Delta)$, as shown in Theorem~\ref{borel-generators}. Since all skeletons of such simplicial complexes also correspond to principal Borel sets, one even has $B(\Delta^{(i)})=A(\Delta^{(i)})$ for all~$i$.
The squarefree monomial ideal $(\{x_F : x_Ft\in A_1(\Delta)\})$ generated by the degree~1 elements of $A(\Delta)$ is the Alexander dual $I(\Delta)^\vee$ of $I(\Delta)$. Francisco, Mermin and Schweig showed that this ideal is again a squarefree Borel ideal, and they give precise Borel generators in the case that ${\mathcal B}$ is principal Borel. We generalize this result in Theorem~\ref{B-generators} by showing that the squarefree part of the ideal $(\{x_F : x_Ft^k\in A_k(\Delta)\})$ is again a squarefree Borel ideal whose generators can be explicitly described when ${\mathcal B}$ is principal Borel. It turns out that in this case the $S$-algebra $A(\Delta)$ may have minimal generators in degrees up to $\dim\Delta+1$. In Proposition~\ref{higher-generator} we present a necessary and sufficient condition for this maximal degree to be achieved.
We would like to thank Professor Villarreal for several useful comments and for drawing our attention to related work on this subject.
\section{Duality}
We first fix some notation and recall some basic concepts regarding simplicial complexes.
Let $\Delta$ be a simplicial complex of dimension $d-1$ on the vertex set $[n]$. We denote by ${\mathcal F}(\Delta)$ the set of facets of $\Delta$.
The $i$-skeleton $\Delta^{(i)}$ of $\Delta$ is defined to be the simplicial complex whose faces are those faces $F$ of $\Delta$ with $\dim F\leq i$. Observe that if $\Delta$ is pure, then $\Delta^{(i)}$ is a pure simplicial complex with ${\mathcal F}(\Delta^{(i)})=\{F\in \Delta : \dim F=i\}$.
The Alexander dual $\Delta^\vee$ of $\Delta$ is the simplicial complex with
\[
\Delta^\vee=\{[n]\setminus F : F\not\in \Delta\}.
\]
One has $(\Delta^\vee)^\vee =\Delta$.
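To see the definition at work in the smallest interesting case:

```latex
Let $\Delta$ be the path on $[3]$ with facets $\{1,2\}$ and $\{2,3\}$. The
non-faces of $\Delta$ are $\{1,3\}$ and $\{1,2,3\}$, so
\[
\Delta^\vee=\bigl\{[3]\setminus\{1,3\},\,[3]\setminus\{1,2,3\}\bigr\}
           =\bigl\{\{2\},\,\emptyset\bigr\}.
\]
Dualizing again, the non-faces of $\Delta^\vee$ are all subsets of $[3]$
other than $\emptyset$ and $\{2\}$, and their complements are exactly the
faces of $\Delta$, confirming $(\Delta^\vee)^\vee=\Delta$.
```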
Let $K$ be a field and $S=K[x_1,\ldots,x_n]$ the polynomial ring over $K$ in the variables $x_1,\ldots,x_n$. The Stanley--Reisner ideal of $\Delta$ is defined to be
\[
I_\Delta=(\{x_F : F\subseteq [n],\; F\not\in\Delta\}).
\]
Here $x_F=\prod_{i\in F}x_i$ for $F\subseteq [n]$.
For $F\subseteq [n]$, we denote by $P_F$ the monomial prime ideal $(\{x_i : i\in F\})$. If $$I_\Delta=P_{F_1}\sect \cdots \sect P_{F_m}$$ is the irredundant primary decomposition of $I_\Delta$, then
$I_{\Delta^\vee}$ is minimally generated by $x_{F_1},\ldots,x_{F_m}$.
Now let $I\subseteq S$ be an arbitrary squarefree monomial ideal. There is a unique simplicial complex $\Delta$ on $[n]$ such that $I=I_\Delta$. We set $I^\vee =I_{\Delta^\vee}$. It follows that if $I=P_{F_1}\sect \cdots \sect P_{F_m}$, then $I^\vee=(x_{F_1},\ldots,x_{F_m})$, and if $J=(x_{G_1},\ldots,x_{G_r})$, then $J^\vee=P_{G_1}\sect \cdots \sect P_{G_r}$.
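As a quick illustration of this correspondence:

```latex
Take $J=(x_1x_2,\;x_2x_3)$ in $S=K[x_1,x_2,x_3]$, so that $G_1=\{1,2\}$
and $G_2=\{2,3\}$. Then
\[
J^\vee=P_{G_1}\sect P_{G_2}=(x_1,x_2)\sect(x_2,x_3)=(x_2,\;x_1x_3).
\]
Conversely, $(x_2,\,x_1x_3)=P_{\{1,2\}}\sect P_{\{2,3\}}$, so its dual is
$(x_1x_2,\,x_2x_3)=J$, in accordance with $(\Delta^\vee)^\vee=\Delta$.
```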
Let $k$ be a nonnegative integer. A {\em $k$-cover} or a {\em cover of order $k$} of $\Delta$ is a nonzero vector $\mathbf{c}=(c_1,\ldots,c_n)$ whose entries are nonnegative integers such that $\sum_{i\in F}c_i\geq k$ for all $F\in {\mathcal F}(\Delta)$. We denote the set $\{i\in [n] : c_i\neq 0\}$ by $\supp({\bold c})$. The $k$-cover $\mathbf{c}$ is called {\em squarefree} if $c_i\leq 1$ for all $i$. A $(0,1)$-vector $\mathbf{c}$ is a squarefree cover of $\Delta$ of positive order if and only if $\supp({\bold c})$ is a vertex cover of $\Delta$. A $k$-cover $\mathbf{c}$ of $\Delta$ is called {\em decomposable} if there exist an $i$-cover $\mathbf{a}$ and a $j$-cover $\mathbf{b}$ of $\Delta$ such that $\mathbf{c}=\mathbf{a}+\mathbf{b}$ and $k=i+j$. In this case ${\bold c}={\bold a}+{\bold b}$ is called a {\em decomposition} of ${\bold c}$. A $k$-cover of $\Delta$ is {\em indecomposable} if it is not decomposable.
The $K$-vector space spanned by the monomials $x^{{\bold c}}$ where ${\bold c}$ is a $k$-cover, denoted by $J_k(\Delta)$, is an ideal. Obviously one has $J_k(\Delta)J_\ell(\Delta)\subseteq J_{k+\ell}(\Delta)$ for all $k$ and $\ell$. Therefore
\[
A(\Delta)=\bigoplus_{k\geq 0}J_k(\Delta)t^k\subseteq S[t]
\]
is a graded $S$-subalgebra of the polynomial ring $S[t]$ over $S$ in the variable $t$. The $S$-algebra $A(\Delta)$ is called the vertex cover algebra of $\Delta$. This algebra is minimally generated by the monomials $x^{\mathbf{c}}t^k$ where $\mathbf{c}$ is an indecomposable $k$-cover of $\Delta$ with $k\neq 0$.
If ${\mathcal F}(\Delta)=\{F_1,\ldots,F_m\}$, then $J_k(\Delta)=P_{F_1}^k\sect \cdots \sect P_{F_m}^k$ \cite[Lemma 4.1]{HHT}. In particular, for the facet ideal of $\Delta$, i.e.\
$I(\Delta)= (x_{F_1},\ldots,x_{F_m})$, one has $J_k(\Delta)=(I(\Delta)^\vee)^{(k)}$, where $(I(\Delta)^\vee)^{(k)}$ is the $k$-th symbolic power of $I(\Delta)^\vee$.
Let $B(\Delta)$ be the $S$-subalgebra of $S[t]$ generated by the elements $x^{\bold c} t^k$ where ${\bold c}$ is a squarefree $k$-cover. The algebra $B(\Delta)$ is called the {\em squarefree vertex cover algebra} of $\Delta$. Observe that $B(\Delta)$ is a graded $S$-algebra,
\[
B(\Delta)=\bigoplus_{k\geq 0}L_k(\Delta)t^k.
\]
It is clear that each $L_k(\Delta)$ is a monomial ideal in $S$ and that $L_k(\Delta)\subseteq J_k(\Delta)$ for all $k$.
For a monomial ideal $I\subseteq S$, we denote by $I^{sq}$ the squarefree monomial ideal generated by all squarefree monomials $u\in I$.
The $k$-th squarefree power of a monomial ideal $I$, denoted by $I^{\langle k\rangle}$, is defined to be $(I^k)^{sq}$.
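For instance, the squarefree power can be strictly smaller than the ordinary power, and can even vanish:

```latex
If $I=(x_1,x_2)\subseteq K[x_1,x_2,x_3]$, then
$I^2=(x_1^2,\,x_1x_2,\,x_2^2)$, and hence
\[
I^{\langle 2\rangle}=(I^2)^{sq}=(x_1x_2),
\qquad
I^{\langle 3\rangle}=(I^3)^{sq}=0,
\]
since $I^3$ contains no squarefree monomial.
```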
\begin{Proposition}
\label{alsoeasy}
Let $\Delta$ be a simplicial complex with ${\mathcal F}(\Delta)=\{F_1,\ldots,F_m\}$. Then
$$ L_k(\Delta)^{sq}= \bigcap _{i=1}^m {P_{F_i}}^{\langle k\rangle},$$
and the algebra $B(\Delta)$ is standard graded if and only if $\bigcap _{i=1}^m {P_{F_i}}^{\langle k\rangle}={(\bigcap _{i=1}^m P_{F_i})}^{\langle k\rangle}$ for all $k$.
\end{Proposition}
\begin{proof}
Let $x^{{\bold c}}$ be a squarefree monomial in $L_k(\Delta)^{sq}$. Then $\sum_{j\in F_i}c_j\geq k$ for all $i$, and $c_j\leq 1$ for all $j$. So if $u_i=\prod_{j\in F_i}x_j^{c_j}$, then $u_i$ is a squarefree monomial of degree at least $k$, which implies $u_i\in P_{F_i}^{\langle k\rangle}$. Furthermore, we have $u_i\mid x^{{\bold c}}$ for all $i$. Therefore, $x^{{\bold c}}\in \bigcap _{i=1}^m {P_{F_i}}^{\langle k\rangle}$. On the other hand, if
$x^{{\bold c}} \in \bigcap _{i=1}^m {P_{F_i}}^{\langle k\rangle}$, then $\sum_{j\in F_i}c_j\geq k$ for all $i$, or in other words, ${\bold c}$ is a $k$-cover. Hence $x^{{\bold c}}\in L_k(\Delta)^{sq}.$
One always has ${(\bigcap _{i=1}^m P_{F_i})}^{\langle k\rangle} \subseteq \bigcap _{i=1}^m {P_{F_i}}^{\langle k\rangle}$. Now the graded algebra $B(\Delta)$ is standard graded if and only if every squarefree $k$-cover of $\Delta$ can be written as a sum of $k$ squarefree $1$-covers of $\Delta$, and this is the case if and only if
$\bigcap _{i=1}^m {P_{F_i}}^{\langle k\rangle} \subseteq {(\bigcap _{i=1}^m P_{F_i})}^{\langle k\rangle}$ for all $k$.
\end{proof}
It turns out that every squarefree $k$-cover of $\Delta$ can be regarded as a $1$-cover of a suitable skeleton of $\Delta$, as we see in the next result.
\begin{Theorem}
\label{dual}
Let $\Delta$ be a simplicial complex of dimension $d-1$ on the vertex set $[n]$. Then
\[
L_k(\Delta)^{sq}\subseteq I(\Delta^{(d-k)})^\vee \quad \text{for all} \quad k\in \{1, \dots, d\}.
\]
Furthermore, the following conditions are equivalent:
\begin{enumerate}
\item[(i)] $\Delta$ is a pure simplicial complex;
\item[(ii)] $L_k(\Delta)^{sq}= I(\Delta^{(d-k)})^\vee \quad \text{for some} \quad k \neq 1$;
\item[(iii)] $L_k(\Delta)^{sq}= I(\Delta^{(d-k)})^\vee \quad \text{for all} \quad k$.
\end{enumerate}
\end{Theorem}
\begin{proof}
Let $x^{{\bold c}}$ belong to the minimal set of monomial generators of the ideal $L_k(\Delta)^{sq}$. So ${\bold c}$ is a squarefree $k$-cover of $\Delta$. Let $C=\supp({\bold c}) =\{i : c_i=1\}$. Then $C$ contains at least $k$ elements of each facet. Therefore $C$ also meets every $(d-k)$-face of $\Delta$: such a face $F$ lies in some facet $G$, and $G$ has at most $d-(d-k+1)=k-1$ vertices outside $F$, so at least one of the $k$ or more vertices of $C\cap G$ lies in $F$. This implies that $x^{\bold c} \in I(\Delta^{(d-k)})^\vee$.
Now we show that statements (i), (ii) and (iii) are equivalent.
(i)$\Rightarrow$ (iii): Let $x^{{\bold c}}$ be in the minimal set of monomial generators of the ideal
$I(\Delta^{(d-k)})^\vee$. Then $C=\supp({\bold c})=\{i : c_i=1\}$ is a minimal vertex cover of $\Delta^{(d-k)}$. Every facet of $\Delta$ is of dimension $d-1$. Hence the set $C$ contains at least $k$ vertices of each facet, because otherwise there would exist a $(d-k)$-face of $\Delta$ which does not intersect $C$. Therefore ${\bold c}$ is a $k$-cover of $\Delta$. Since $c_i \leq 1$ for all $i$, the cover ${\bold c}$ is squarefree, and hence $x^{\bold c} \in L_k(\Delta)^{sq}$.
(iii)$\Rightarrow$ (ii): This implication is trivial.
(ii)$\Rightarrow$ (i): Suppose that for some fixed $k \neq 1$ we have $L_k(\Delta)^{sq}= I(\Delta^{(d-k)})^\vee$. Assume to the contrary that there is a facet $H$ of $\Delta$ of dimension $\ell-1$ where $\ell<d$. Every facet of $\Delta^{(d-k)}$ which is not a subset of $H$ contains at least one vertex which does not belong to $H$. Hence there exists a set $A\subseteq [n]$ such that $A \cap H = \emptyset$ and $A \cap F \neq \emptyset$ for all facets $F$ of $\Delta^{(d-k)}$ which are not subsets of $H$.
First suppose that $\ell-1<d-k$. Then $H$ is also a facet of $\Delta^{(d-k)}$. Choosing a vertex $i$ of $H$, the set $C = A \cup \{ i\}$ is a vertex cover of $\Delta^{(d-k)}$ which contains exactly one vertex of $H$. Hence for the vector ${\bold c}$ with $C=\supp({\bold c})$, the monomial $x^{{\bold c}}$ belongs to $I(\Delta^{(d-k)})^\vee$. But since $C$ contains only one vertex of $H$ and $k\geq 2$, the monomial $x^{{\bold c}}$ does not belong to $L_k(\Delta)^{sq}$, a contradiction.
Next suppose that $\ell-1 \geq d-k$. Choosing $\ell+k-d$ vertices $i_1,\ldots, i_{\ell+k-d}$ of $H$, the set
$C= A \cup \{i_1,\ldots, i_{\ell+k-d}\}$ is a vertex cover of $\Delta^{(d-k)}$. Indeed, let $F$ be a facet of $\Delta^{(d-k)}$. If $F\not\subseteq H$, then $F\sect C\neq\emptyset$ because $F\sect A\neq\emptyset$ by the choice of $A$. If $F\subseteq H$, then $|F|=d-k+1$, and
$F\sect C\neq \emptyset$ because $$|F|+|C\sect H|=(d-k+1)+(\ell+k-d)>\ell=|H|.$$ Hence for the vector ${\bold c}$ with $C=\supp({\bold c})$, we have $x^{{\bold c}} \in I(\Delta^{(d-k)})^\vee$. Thus $x^{{\bold c}}\in L_k(\Delta)^{sq}$, which implies that $C$ contains at least $k$ elements of $H$. On the other hand, we know that $C$ contains exactly $\ell+k-d$ vertices of $H$. Hence $\ell+k-d \geq k$, which means $\ell \geq d$, a contradiction.
\end{proof}
As an immediate consequence of Theorem~\ref{dual} we obtain
\begin{Corollary}
\label{duality}
Let $\Delta$ be a pure simplicial complex of dimension $d-1$. Then
\[
I(\Delta^{(k)})^\vee=(I(\Delta)^\vee)^{\langle d-k\rangle} \quad\text{for all} \quad k,
\]
if and only if $B(\Delta)$ is standard graded.
\end{Corollary}
\begin{proof}
It is enough to notice that $B(\Delta)$ is standard graded if and only if $L_k(\Delta)^{sq}=(I(\Delta)^\vee)^{\langle k\rangle}$ for all $k$. Now Theorem~\ref{dual} yields the result.
\end{proof}
\begin{Corollary}[Duality]
Let $\Delta$ be a pure simplicial complex of dimension $d-1$. Then
\[
L_j(\Delta^{(d-i)})^{sq}=L_i(\Delta^{(d-j)})^{sq},
\]
where $1\leq i,j \leq d.$
\end{Corollary}
\begin{proof}
Let $i,j\in \{1,\ldots,d\}$. Since $\Delta$ is pure, the simplicial complexes $\Delta^{(d-i)}$ and $\Delta^{(d-j)}$ are also pure, of dimensions $d-i$ and $d-j$, respectively. So by Theorem~\ref{dual} we have
$$L_j(\Delta^{(d-i)})^{sq}= I(\Delta^{((d-i+1)-j)})^\vee$$
and
$$L_i(\Delta^{(d-j)})^{sq}= I(\Delta^{((d-j+1)-i)})^\vee.$$
Since $(d-i+1)-j=(d-j+1)-i$, it follows that $L_j(\Delta^{(d-i)})^{sq}=L_i(\Delta^{(d-j)})^{sq}$.
\end{proof}
As an application we consider the following class of ideals introduced in \cite{VHF}. Let $P=\{p_1,\ldots,p_m\}$ be a finite poset and $r\geq 1$ an integer. We consider the $(r\times m)$-matrix $X=(x_{ij})$ of indeterminates, and define the ideal $I_r(P)$ generated by all monomials $x_{1j_1}x_{2j_2}\cdots x_{rj_r}$ with $p_{j_1}\leq p_{j_2}\leq\ldots \leq p_{j_r}$. Let $\Delta_{r}(P)$ be the simplicial complex on the vertex set $V=\{(i,j) : 1\leq i \leq r,\;1\leq j \leq m \}$ with the property that $I(\Delta_{r}(P))=I_r(P)$. So $\Delta_r(P)$ is of dimension $r-1$. In \cite[Theorem 1.1]{VHF} it is shown that
\[
I(\Delta_{r}(P)^{(k)})^\vee=(I(\Delta_{r}(P))^\vee)^{\langle r-k\rangle} \quad\text{for all} \quad k.
\]
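To see the construction in a small case:

```latex
Let $P=\{p_1,p_2\}$ with $p_1\leq p_2$, and let $r=2$. The multichains
$p_{j_1}\leq p_{j_2}$ are $(j_1,j_2)=(1,1)$, $(1,2)$ and $(2,2)$, so
\[
I_2(P)=(x_{11}x_{21},\;x_{11}x_{22},\;x_{12}x_{22}),
\]
and $\Delta_2(P)$ is the $1$-dimensional simplicial complex on the four
vertices $(i,j)$ whose facets are these three edges.
```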
Hence Corollary~\ref{duality} implies that $B(\Delta_{r}(P))$ is standard graded. One even has
\begin{Theorem}
\label{evenmaybe}
The $S$-algebra $A(\Delta_{r}(P))$ is standard graded.
\end{Theorem}
\begin{proof}
Let the $(r\times m)$-matrix ${\bold c}=[c_{ij}]$ be a $k$-cover of $\Delta_r(P)$ with $k\geq 2$. We show that there is a decomposition of ${\bold c}$ into a $1$-cover ${\bold a}$ and a $(k-1)$-cover ${\bold b}$ of $\Delta_r(P)$. This will imply that $A(\Delta_{r}(P))$ is standard graded.
We consider the subset $A$ of $V$ consisting of the vertices $(i,j)\in V$ with the following properties:
\begin{enumerate}
\item[(i)] $c_{ij}\neq 0$;
\item[(ii)] there exists a chain $ p_{j_1}\leq p_{j_2}\leq\ldots \leq p_{j_{i-1}}$ of elements of $P$ with $ p_{j_{i-1}} \leq p_j$ and $c_{t,j_t}=0$ for $t=1,\dots,i-1$.
\end{enumerate}
Notice that $A$ includes the set $\{(1,j)\in V : c_{1j}\neq 0 \}$, for which condition (ii) is vacuous. First, we show that $A$ is a vertex cover of $\Delta_r(P)$. Suppose $F$ is a facet of $\Delta_r(P)$. Then there exists a chain $p_{j_1}\leq p_{j_2}\leq\ldots \leq p_{j_r}$ of elements of $P$ such that $$F=\{(1,j_1),(2,j_2),\ldots,(r,j_r)\}.$$
Since ${\bold c}$ is a cover of $\Delta_r(P)$ of positive order, the set $D=\{(i,j_i)\in F : c_{ij_i}\neq 0\}$ is nonempty. Suppose $t=\min\{s : (s,j_s)\in D \}$. Then $(t,j_t)$ satisfies property (i) of elements of $A$. Considering the chain $p_{j_1}\leq p_{j_2}\leq\ldots \leq p_{j_{t-1}}$, one sees that $(t,j_t)$ also satisfies property (ii). Hence $(t,j_t)\in A$. This shows that $A$ is a vertex cover of $\Delta_r(P)$. Let the $(r \times m)$-matrix ${\bold a}=[a_{ij}]$ be the squarefree $1$-cover of $\Delta_r(P)$ corresponding to $A$; in other words, $[a_{ij}]$ is the $(0,1)$-matrix with $a_{ij}=1$ if and only if $(i,j)\in A$.
Next we show that for every chain $p_{j_1}\leq p_{j_2}\leq\ldots \leq p_{j_r}$ of elements of $P$ with $a_{tj_t}>0$, we have $\sum_{s=t}^r c_{s,j_s}\geq k$. Indeed, since $(t,j_t)\in A$, property (ii) implies that there exists a chain $ p_{i_1}\leq p_{i_2}\leq\ldots \leq p_{i_{t-1}}$ of elements of $P$ with $ p_{i_{t-1}} \leq p_{j_t}$ and $c_{s,i_s}=0$ for $s=1,\dots,t-1$. Consider the facet $F$ of $\Delta_r(P)$ corresponding to the chain $$p_{i_1}\leq p_{i_2}\leq\ldots \leq p_{i_{t-1}}\leq p_{j_t}\leq\ldots \leq p_{j_r}.$$
Since ${\bold c}$ is a $k$-cover of $\Delta_r(P)$, one has $\sum_{(i,j)\in F}c_{ij}\geq k$. This implies that
$$\sum_{s=1}^{t-1} c_{s,i_s}+\sum_{s=t}^r c_{s,j_s}=\sum_{s=t}^r c_{s,j_s}\geq k.$$
Finally, we show that the $(r \times m)$-matrix ${\bold b}=[b_{ij}]=[c_{ij}-a_{ij}]$ is a $(k-1)$-cover of $\Delta_r(P)$. To see this, let $F$ be a facet of $\Delta_r(P)$ corresponding to a chain $p_{j_1}\leq p_{j_2}\leq\ldots \leq p_{j_r}$ of elements of $P$. Since $A$ is a vertex cover of $\Delta_r(P)$, the set $A\cap F$ is nonempty. Suppose $A\cap F=\{(i_1,j_{i_1}),(i_2,j_{i_2}),\ldots,(i_t,j_{i_t})\}$ with $i_1< i_2<\ldots< i_t$. By the above discussion, since $a_{i_t j_{i_t}}>0$, we have $\sum_{s=i_t}^r c_{s,j_s}\geq k$. Furthermore, the entries $a_{i_1 j_{i_1}},a_{i_2 j_{i_2}},\ldots,a_{i_{t-1} j_{i_{t-1}}}$ are nonzero because $$\{(i_1,j_{i_1}),(i_2,j_{i_2}),\ldots,(i_{t-1},j_{i_{t-1}})\}\subseteq A.$$
This implies that $c_{i_1 j_{i_1}},c_{i_2 j_{i_2}},\ldots,c_{i_{t-1} j_{i_{t-1}}}$ are nonzero. Hence $$\sum_{(i,j)\in F}c_{ij}\geq \sum _{s=1}^{t-1} c_{i_s,j_{i_s}} +\sum_{s=i_t}^r c_{s,j_s}\geq (t-1)+\sum_{s=i_t}^r c_{s,j_s}\geq (t-1)+k.$$
Consequently, $$\sum_{(i,j)\in F}b_{ij}=\sum_{(i,j)\in F}c_{ij}-\sum_{(i,j)\in F}a_{ij} \geq ((t-1)+k)-t=k-1.$$ This shows that ${\bold b}$ is a $(k-1)$-cover of $\Delta_r(P)$, and ${\bold c}={\bold a}+{\bold b}$ is the desired decomposition of ${\bold c}$.
\end{proof}
\section{Comparison of $A(\Delta)$ and $B(\Delta)$}
In view of Corollary~\ref{duality} it is of interest to know when $B(\Delta)$ is standard graded as an $S$-algebra. Of course this is the case if $A(\Delta)$ is standard graded. Thus the following questions arise:
\begin{enumerate}
\item[(1)] Is $A(\Delta)$ standard graded if and only if $B(\Delta)$ is standard graded?
\item[(2)] When do we have $A(\Delta)=B(\Delta)$?
\end{enumerate}
In general Question (1) does not have a positive answer. The following example was communicated to us by Villarreal. Let $\Delta$ be the simplicial complex with the following facets:
\[
\{1,2\},\ \{3,4\},\ \{5,6\},\ \{7,8\},\ \{1,3,7\},\ \{1,4,8\},\ \{3,5,7\},\ \{4,5,8\},\ \{2,3,6,8\},\ \{2,4,6,7\}.
\]
It can be seen that ${\bold c}=(1,1,1,1,2,0,1,1)$ is an indecomposable $2$-cover of $\Delta$, and hence $A(\Delta)$ is not standard graded. However, using CoCoA \cite{Cocoa} one can check that the (finitely many) squarefree covers of $\Delta$ of order $\geq 2$ are all decomposable. Thus $B(\Delta)$ is standard graded.
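The indecomposability of ${\bold c}$ can also be verified by a brute-force enumeration, independently of CoCoA. The following Python sketch (the function names are ours, not from the literature) searches over all splittings ${\bold c}={\bold a}+{\bold b}$ into two nonzero covers whose orders sum to $2$:

```python
from itertools import product

# Facets of the complex from Villarreal's example (vertices 1..8).
facets = [{1, 2}, {3, 4}, {5, 6}, {7, 8}, {1, 3, 7}, {1, 4, 8},
          {3, 5, 7}, {4, 5, 8}, {2, 3, 6, 8}, {2, 4, 6, 7}]

def order(c):
    """Largest k such that c is a k-cover: the minimum facet sum."""
    return min(sum(c[i - 1] for i in F) for F in facets)

def is_decomposable(c, k):
    """Search for c = a + b with a an i-cover, b a j-cover, i + j = k.

    Such i, j exist exactly when order(a) + order(b) >= k and both
    vectors are nonzero, so it suffices to test that condition.
    """
    for a in product(*(range(ci + 1) for ci in c)):
        b = tuple(ci - ai for ci, ai in zip(c, a))
        if any(a) and any(b) and order(a) + order(b) >= k:
            return True
    return False

c = (1, 1, 1, 1, 2, 0, 1, 1)
print(order(c))               # 2: c is a 2-cover
print(is_decomposable(c, 2))  # False: c is indecomposable
```

The same `is_decomposable` test, restricted to $(0,1)$-vectors, can be used to confirm that all squarefree covers of order $\geq 2$ of this complex decompose.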
Now we consider some cases where Question (1) has a positive answer. We begin with a general fact about the generators of $B(\Delta)$.
\begin{Lemma}
\label{lunch}
Let $r=\min\{|F| : F\in {\mathcal F}(\Delta)\}$. Then $B(\Delta)$ is generated in degree $\leq r$, and $L_r(\Delta)^{sq}=(x_1x_2\cdots x_n)$ if and only if each vertex is contained in a facet of dimension $r-1$.
\end{Lemma}
\begin{proof} Let ${\bold c}$ be a squarefree cover of $\Delta$, and let $F\in {\mathcal F}(\Delta)$ with $|F|=r$. If $C=\supp({\bold c})$, then $|C\cap F|\leq r$. Hence the order of ${\bold c}$ is at most $r$. This shows that $B(\Delta)$ is generated in degree $\leq r$. For the other statement we first observe that by the definition of the integer $r$ we have $(x_1x_2\cdots x_n)\subseteq L_r(\Delta)^{sq}$. Therefore $(x_1x_2\cdots x_n)\neq L_r(\Delta)^{sq}$ if and only if there exists $i\in[n]$ such that $x_1\cdots \hat{x}_i\cdots x_n\in L_r(\Delta)^{sq}$. This is the case if and only if every facet $F\in {\mathcal F}(\Delta)$ with $i\in F$ satisfies $|F\cap ([n]\setminus\{i\})|\geq r$, that is, $|F|>r$.
\end{proof}
When $\dim\Delta=1$, one may view $\Delta$ as a finite simple graph on $[n]$. In that case we have
\begin{Proposition}
\label{standardgraph}
Let $G$ be a finite simple graph. Then $B(G)$ is standard graded if and only if $A(G)$ is standard graded.
\end{Proposition}
\begin{proof}
We may assume that $G$ has no isolated vertices, because adding an isolated vertex to $G$ or removing one from $G$ does not change the property of $B(G)$, respectively of $A(G)$, of being standard graded.
If $A(G)$ is standard graded, then so is $B(G)$, as noted above. Conversely, assume $B(G)$ is standard graded. Then ${\bold c}=(1,\ldots,1)$ is decomposable; in other words, there exist vertex covers $C_1,C_2\subseteq [n]$ such that $C_1\union C_2=[n]$ and $C_1\sect C_2=\emptyset$. This implies that each edge of $G$ has exactly one vertex in $C_1$ and one in $C_2$. Therefore $G$ is bipartite. By \cite[Theorem 5.1(b)]{HHT} it follows that $A(G)$ is standard graded.
\end{proof}
Let $G$ be a finite simple graph on $[n]$. The {\em edge ideal} of $G$, denoted by $I(G)$, is the ideal generated by $\{x_ix_j : \{i,j\}\text{ is an edge of } G\}$. We also denote $I(G)^\vee$ by $J(G)$, which is called the {\em cover ideal} of $G$.
An ideal $I$ is said to be {\em normally torsion free} if all the powers $I^j$ have the same associated prime ideals. The following result is shown in \cite[Theorem 5.9]{SVV}.
\begin{Theorem}[Simis, Vasconcelos, Villarreal]
\label{Villarreal}
Let $G$ be a graph with edge ideal $I(G)$. The following conditions are equivalent:
\begin{enumerate}
\item[(i)] $G$ is bipartite;
\item[(ii)] $I(G)$ is normally torsion free.
\end{enumerate}
\end{Theorem}
By this theorem we obtain another case in which Question (1) has a positive answer, as the next proposition shows.
\begin{Proposition}
\label{coveringIdeal}
Let $G$ be a graph, and suppose $\Delta$ is the simplicial complex with $I(\Delta)=J(G)$. Then $B(\Delta)$ is standard graded if and only if $A(\Delta)$ is standard graded.
\end{Proposition}
\begin{proof}
Suppose $B(\Delta)$ is standard graded. We show that $G$ is bipartite. Assume to the contrary that $G$ has a cycle $l : i_1,i_2,\ldots,i_r$ of odd length. Every facet $F$ of $\Delta$ is a vertex cover of the graph $G$. Since $r$ is odd, we have
\begin{eqnarray}
\label{shokr}
|F\cap \{i_1,i_2,\ldots ,i_r\} |\geq (r+1)/2
\end{eqnarray}
for every facet $F$ of $\Delta$. Indeed, if inequality (\ref{shokr}) did not hold, then there would exist an edge of the cycle $l$ which does not meet $F$.
Consider the squarefree vector ${\bold c}=(c_1,\ldots,c_n)$ with $c_j=1$ if and only if $j\in C=\{i_1,i_2,\ldots,i_r\}$. Inequality (\ref{shokr}) shows that ${\bold c}$ is an $(r+1)/2$-cover of $\Delta$. Since $B(\Delta)$ is standard graded, the set $C$ is a disjoint union of $(r+1)/2$ vertex covers of $\Delta$. Observe that the minimal vertex covers of $\Delta$ are exactly the edges of the graph $G$. So the above argument yields $(r+1)/2$ pairwise disjoint edges of $G$ with all endpoints in $C$, a contradiction, since these edges involve $r+1>r$ vertices. Hence $G$ is bipartite. So by Theorem~\ref{Villarreal}, the ideal $I(G)$ is normally torsion free. It is well known that in this case the simplicial complex with facet ideal $I(G)^\vee=J(G)$, i.e.\ $\Delta$, has a standard graded vertex cover algebra; see \cite[Corollary 3.14]{GVV}, \cite[Corollaries 1.5 and 1.6]{XIN} and \cite{HSV}.
\end{proof}
Next we discuss another result regarding Question (1).
Let $\Delta$ be a simplicial complex. A {\em subcomplex} $\Gamma$ of $\Delta$, denoted by $\Gamma \subseteq \Delta$, is a simplicial complex such that
${\mathcal F}(\Gamma) \subseteq {\mathcal F}(\Delta)$.
A {\em cycle} of length $r$ of $\Delta$ is a sequence $i_1,F_1,i_2,\ldots , F_r,i_{r+1}=i_1$ where $F_j\in {\mathcal F}(\Delta)$, $i_j \in [n]$ and $i_j,i_{j+1}\in F_j$ for $j=1,\ldots ,r$.
A cycle is called {\em special} if each facet of the cycle contains exactly two vertices of the cycle.
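The smallest example of a special odd cycle is the triangle:

```latex
In the triangle with facets $F_1=\{1,2\}$, $F_2=\{2,3\}$ and
$F_3=\{1,3\}$, the sequence $1,F_1,2,F_2,3,F_3,1$ is a special cycle of
odd length $3$: each facet contains exactly the two adjacent vertices of
the cycle. By contrast, a path with facets $\{1,2\}$ and $\{2,3\}$
contains no cycle at all, since a cycle must return to its starting
vertex through distinct facets.
```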
\begin{Theorem}
\label{no-odd}
Let ${\mathcal D}elta$ be a simplicial complex. Then the following conditions are equivalent:
\begin{enumerate}
\item[(i)] The vertex cover algebra $B({\mathcal G}amma)$ is standard graded for all ${\mathcal G}amma \subseteq {\mathcal D}elta;$
\item[(ii)] The vertex cover algebra $A({\mathcal G}amma)$ is standard graded for all ${\mathcal G}amma \subseteq {\mathcal D}elta;$
\item[(iii)] ${\mathcal D}elta$ has no special odd cycles.
\end{enumerate}
\end{Theorem}
\begin{proof}
The equivalence of (ii) and (iii) follows from \cite[Theorem 2.2]{XIN}; see also \cite[Proposition 4.10]{GRV}. We show that (i) and (iii) are also equivalent.
(i)$\Rightarrow$ (iii) Following the proof of \cite[Lemma 2.1]{XIN}, let $i_1,F_1,i_2,\ldots , F_r,i_{r+1}=i_1$ be a special cycle of ${\mathcal D}elta$. Consider the subcomplex ${\mathcal G}amma \subseteq {\mathcal D}elta$ with ${\mathcal F}({\mathcal G}amma)=\{F_1,\ldots , F_r\}$. By the definition of a special cycle, one has $|F_j\cap \{i_1,\ldots, i_r\}|= 2$ for each $j=1,\ldots,r$. So
$\{i_1,\ldots, i_r\}$ corresponds to a squarefree 2-cover of ${\mathcal G}amma$, that is, there exists a squarefree 2-cover ${\bold c}$ of ${\mathcal G}amma$ with $\supp({\bold c})=\{i_1,\ldots, i_r\}$. Since by assumption $B({\mathcal G}amma)$ is standard graded, there are disjoint vertex covers $C_1$ and $C_2$ of ${\mathcal G}amma$ such that $\{i_1,\ldots, i_r\}=C_1\cup C_2$. Hence the sets $F_j\cap C_1$ and $F_j\cap C_2$ are nonempty for all $j$, and since every facet of a special cycle contains exactly two vertices of the cycle, it follows that $|F_j\cap C_1|=|F_j\cap C_2|=1$ for all $j$. Therefore $C_1$ and $C_2$ have the same number of vertices, which implies that $r$ is an even number.
(iii)$\Rightarrow$ (i) By \cite[Theorem 2.2]{XIN}, when ${\mathcal D}elta$ has no special odd cycle, statement (ii) holds. Therefore $B({\mathcal G}amma)$ is standard graded for all ${\mathcal G}amma \subseteq {\mathcal D}elta$.
\end{proof}
Next we discuss question (2), in which we ask when $B({\mathcal D}elta)=A({\mathcal D}elta)$. In the case that $\dim {\mathcal D}elta=1$, in other words if ${\mathcal D}elta$ can be identified with a graph $G$, we have a complete answer, which is an immediate consequence of the following result, see
\cite[Proposition 5.3]{HHT}:
\begin{Proposition}
\label{quoted}
Let $G$ be a simple graph on $[n]$. Then the following conditions are equivalent:
\begin{enumerate}
\item[(i)] The graded $S$-algebra $A(G)$ is generated by $x_1x_2\cdots x_nt^2$ together with those monomials $x^{{\bold c}}t$ where ${\bold c}$ is a squarefree 1-cover of $G$;
\item[(ii)] For every cycle $C$ of $G$ of odd length and for every vertex $i$ of $G$ there exists a vertex $j$ of the cycle $C$ such that $\{i,j\}$ is an edge of $G$.
\end{enumerate}
\end{Proposition}
By using this result we get
\begin{Proposition}
\label{graphequality}
Let $G$ be a simple graph on $[n]$. Then the following conditions are equivalent:
\begin{enumerate}
\item[(i)] $A(G)=B(G);$
\item[(ii)] For every cycle $C$ of $G$ of odd length and for every vertex $i$ of $G$ there exists a vertex $j$ of the cycle $C$ such that $\{i,j\}$ is an edge of $G$.
\end{enumerate}
\end{Proposition}
\begin{proof}
(i)$\Rightarrow$ (ii): We may assume that $G$ has no isolated vertex. So by Lemma~\ref{lunch}, the graded $S$-algebra $B(G)$ is generated by $x_1x_2\cdots x_nt^2$ together with those monomials $x^{\bold c} t$ where ${\bold c}$ is a squarefree $1$-cover of $G$. Since we assume that $A(G)=B(G)$, the same holds true for $A(G)$. Hence (ii) follows from Proposition~\ref{quoted}.
(ii)$\Rightarrow$ (i): If (ii) holds, then Proposition~\ref{quoted} implies that $A(G)$ is generated by the monomials corresponding to squarefree covers. So $A(G)=B(G)$.
\end{proof}
Let ${\mathcal D}elta$ be a simplicial complex on $[n]$. For a subset $W$ of $[n]$, we define the {\em restriction} of ${\mathcal D}elta$ with respect to $W$, denoted by ${\mathcal D}elta_W$, to be the subcomplex of ${\mathcal D}elta$ with
$${\mathcal F}({\mathcal D}elta_W) = \{F\in {\mathcal F}({\mathcal D}elta) \: F \subseteq W\}.$$
\begin{Lemma}
\label{restriction}
Let ${\mathcal D}elta$ be a simplicial complex on $[n]$ and $W\subseteq [n]$. If $B({\mathcal D}elta)=A({\mathcal D}elta)$, then $B({\mathcal D}elta_W)=A({\mathcal D}elta_W)$.
\end{Lemma}
\begin{proof} We assume that there exists at least one facet $F$ of ${\mathcal D}elta$ such that $F \subseteq W$, otherwise there is nothing to prove. Furthermore, without loss of generality we assume that $W=\{1,\ldots,t\}$. Let ${\bold c}'=(c_1,\ldots,c_t)$ be an indecomposable $k$-cover of ${\mathcal D}elta_W$ with $k>0$. We will show that ${\bold c}'$ is squarefree.
We extend ${\bold c}'$ to the $k$-cover ${\bold c}=(c_1,\ldots,c_t,k,\ldots,k)$ of ${\mathcal D}elta$. Since $B({\mathcal D}elta)=A({\mathcal D}elta)$, there exists a decomposition ${\bold c}={\bold a}+{\bold b}$ where ${\bold a}$ is an indecomposable squarefree $i$-cover of ${\mathcal D}elta$ with $i>0$, ${\bold b}$ is a $j$-cover of ${\mathcal D}elta$, and $i+j=k$. Let ${\bold a}'$ and ${\bold b}'$ be the restrictions of ${\bold a}$ and ${\bold b}$, respectively, to the first $t$ components. For every facet $F\in {\mathcal F}({\mathcal D}elta_W)$ we have $\sum_{\ell\in F}a_{\ell}\geq i>0$. This implies that ${\bold a}'\neq 0$. If ${\bold b}'\neq 0$, then ${\bold c}'={\bold a}'+{\bold b}'$ is a decomposition of ${\bold c}'$, a contradiction. Hence ${\bold b}'=0$, and so ${\bold c}'={\bold a}'$, which means that ${\bold c}'$ is squarefree.
\end{proof}
By Theorem~\ref{no-odd}, so far we know that for a simplicial complex ${\mathcal D}elta$ without any special odd cycle we have $B({\mathcal D}elta)=A({\mathcal D}elta)$. On the other hand, if ${\mathcal D}elta$ contains special odd cycles, then $A({\mathcal D}elta)$ and $B({\mathcal D}elta)$ may not be equal. This may even happen if the facets of ${\mathcal D}elta$ are precisely the facets of a special odd cycle, as the following two examples demonstrate. Figure~\ref{cycle5} shows a simplicial complex ${\mathcal D}elta_1$ of dimension 2 such that $$1,F_1,2,F_2,3,F_3,4,F_4,5,F_5,1$$ is a special odd cycle of length 5.
\begin{figure}\label{cycle5}
\end{figure}
One can see that the vector $(1,0,2,0,1,0,1)$ is an indecomposable 2-cover of ${\mathcal D}elta_1$. Therefore $B({\mathcal D}elta_1)\neq A({\mathcal D}elta_1)$.
Next consider the simplicial complex ${\mathcal D}elta_2$ as shown in Figure~\ref{cycle3}. In this case the facets of ${\mathcal D}elta_2$ form the special odd cycle $6,F_1,2,F_2,4,F_3,6$ of length 3 and the equality $B({\mathcal D}elta_2)=A({\mathcal D}elta_2)$ holds. Indeed, the equality $B({\mathcal D}elta_2)=A({\mathcal D}elta_2)$ is a consequence of a more general result given in the next theorem.
\begin{figure}\label{cycle3}
\end{figure}
These two examples show that it is not easy to classify the simplicial complexes ${\mathcal D}elta$ containing odd cycles for which $B({\mathcal D}elta)=A({\mathcal D}elta)$. Therefore, we restrict ourselves to special classes of simplicial complexes. First, we consider simplicial complexes satisfying the following intersection property: let ${\mathcal D}elta$ be a simplicial complex with $\mathcal{F}({\mathcal D}elta)=\{F_1,\ldots,F_m\}$. We say that ${\mathcal D}elta$ has the {\em strict intersection property} if
\begin{enumerate}
\item[ ($\text{I}_1$)] $|F_i\sect F_j|\leq 1$ for all $i\neq j$;
\item[ ($\text{I}_2$)] $F_i\sect F_j\sect F_k=\emptyset$ for pairwise distinct $i$,$j$ and $k$.
\end{enumerate}
Given a simplicial complex ${\mathcal D}elta$ with $\mathcal{F}({\mathcal D}elta)=\{F_1,\ldots,F_m\}$ satisfying the strict intersection property, we define the {\em intersection graph $G_{\mathcal D}elta$} of ${\mathcal D}elta$ as follows: $$V(G_{\mathcal D}elta)=\{v_1,\ldots,v_m\}$$ is the vertex set of $G_{\mathcal D}elta$, and $$E(G_{\mathcal D}elta)=\{ \{v_i,v_j\}\:\; i\neq j \quad \text{and}\quad F_i\sect F_j\neq \emptyset\}$$
is the edge set of $G_{\mathcal D}elta$.
Note that if $W$ is a subset of $[n]$, then ${\mathcal D}elta_W$ satisfies again the strict intersection property and the graph $G_{{\mathcal D}elta_W}$ is the subgraph of $G_{\mathcal D}elta$ induced by
$$S=\{v_i\in V(G_{{\mathcal D}elta})\: F_i\subseteq W \}.$$
\begin{Lemma}
\label{odd-cycle}
Let ${\mathcal D}elta$ be a simplicial complex satisfying the strict intersection property. If $G_{\mathcal D}elta$ is an odd cycle, then $A({\mathcal D}elta)$ is minimally generated in degrees $1$ and $2$, and $B({\mathcal D}elta)=A({\mathcal D}elta)$.
\end{Lemma}
\begin{proof}
Let $[n]$ be the vertex set of ${\mathcal D}elta$ and ${\mathcal F}({\mathcal D}elta)=\{F_1,\ldots,F_m\}$. Since $G_{\mathcal D}elta$ is an odd cycle and ${\mathcal D}elta$ has the strict intersection property, we may assume $$i_1, F_1,i_2,F_2,\ldots,i_m,F_m, i_{m+1}=i_1$$ is the special odd cycle corresponding to $G_{\mathcal D}elta$ where $\{i_j\}=F_{j-1}\cap F_j$ for $j=2,\ldots,m$ and $\{i_1\}=F_{1}\cap F_m$. We consider a non-squarefree $k$-cover ${\bold c}$ of ${\mathcal D}elta$ with $k>0$ and show that it has a decomposition ${\bold a}+{\bold b}$ such that ${\bold a}$ is a squarefree cover of positive order. This will imply that $B({\mathcal D}elta)=A({\mathcal D}elta)$. If all entries $c_{i_1},\ldots,c_{i_m}$ of the $k$-cover ${\bold c}$ are nonzero, then we have the decomposition ${\bold c}={\bold a}+{\bold b}$ where ${\bold a}$ is the squarefree 2-cover of ${\mathcal D}elta$ with $a_{i_1}=\ldots=a_{i_m}=1$ and all other entries of ${\bold a}$ are zero, and ${\bold b}$ is the $(k-2)$-cover ${\bold c}-{\bold a}$. So we may assume that at least one of the entries $c_{i_1},\ldots,c_{i_m}$, say $c_{i_1}$, is zero. Let ${\mathcal G}amma$ be the simplicial complex on the vertex set $[n]\setminus \{i_1\}$ with
$${\mathcal F}({\mathcal G}amma)= \{F\setminus \{i_1\}\:F\in {\mathcal F}({\mathcal D}elta)\}.$$
Consider the vector ${\bold c}'=(c_1,\ldots, \hat{c}_{i_1},\ldots,c_n)$. The vector ${\bold c}'$ is also a $k$-cover of ${\mathcal G}amma$ because $c_{i_1}= 0$. Since the simplicial complex ${\mathcal G}amma$ has no special odd cycle, by Theorem~\ref{no-odd}, the $S$-algebra $A({\mathcal G}amma)$ is standard graded. Therefore, there exists a decomposition ${\bold a}'+{\bold b}'$ of ${\bold c}'$ such that ${\bold a}'$ is a squarefree 1-cover and ${\bold b}'$ is a $(k-1)$-cover of ${\mathcal G}amma$. The vector ${\bold a}$ with $a_{i_1}=0$ and $a_j=a'_j$ for $j\neq i_1$ is a 1-cover of ${\mathcal D}elta$, and the vector ${\bold b}$ with $b_{i_1}=0$ and $b_j=b'_j$ for $j\neq i_1$ is a $(k-1)$-cover of ${\mathcal D}elta$. Hence ${\bold a}+{\bold b}$ gives us the desired decomposition of ${\bold c}$.
It follows from the above discussion that $A({\mathcal D}elta)$ is generated by $x_{i_1}x_{i_2}\cdots x_{i_m}t^2$ and the monomials $x^{{\bold c}}t$ where ${\bold c}$ is an indecomposable squarefree $1$-cover of ${\mathcal D}elta$. This is a minimal set of generators of $A({\mathcal D}elta)$ because the squarefree 2-cover with support $\{i_1,\ldots,i_m\}$ is indecomposable. In fact, since $m$ is an odd number, if this cover is written as ${\bold a}+{\bold b}$, then one of ${\bold a}$ and ${\bold b}$ has at most $(m-1)/2$ nonzero entries, which means that it is not a 1-cover.
\end{proof}
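As a sanity check of the lemma (our own illustration, not part of the paper), one can verify by brute force that for the triangle, whose intersection graph is the odd cycle $C_3$, every non-squarefree cover of positive order admits a decomposition with a squarefree summand of positive order.

```python
from itertools import product

# The triangle: pairwise intersections have one vertex, the triple
# intersection is empty, and the intersection graph is the odd cycle C_3.
facets = [{1, 2}, {2, 3}, {1, 3}]

def order(c):
    """Largest k for which c = (c_1, c_2, c_3) is a k-cover."""
    return min(sum(c[v - 1] for v in F) for F in facets)

def has_squarefree_summand(c):
    """Is there c = a + b with a squarefree, order(a) >= 1, and the
    orders of a and b adding up to at least order(c)?"""
    k = order(c)
    for a in product([0, 1], repeat=3):
        if any(a[i] > c[i] for i in range(3)) or order(a) < 1:
            continue
        b = tuple(c[i] - a[i] for i in range(3))
        if order(a) + order(b) >= k:
            return True
    return False

# Every non-squarefree cover of positive order decomposes as required,
# which is the content of B(Delta) = A(Delta) for this complex.
checked = [c for c in product(range(4), repeat=3)
           if order(c) >= 1 and max(c) >= 2]
assert checked and all(has_squarefree_summand(c) for c in checked)
```

The bound `range(4)` on the entries is only a search window for the illustration; the lemma itself covers all $k$-covers.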
\begin{Theorem}
\label{str-intersec-prop}
Let ${\mathcal D}elta$ be a simplicial complex satisfying the strict intersection property and suppose that no two cycles of $G_{\mathcal D}elta$ have precisely two edges in common. Then $B({\mathcal D}elta)=A({\mathcal D}elta)$ if and only if each connected component of $G_{\mathcal D}elta$ is a bipartite graph or an odd cycle.
\end{Theorem}
\begin{proof}
Let $G_1,\dots,G_t$ be the connected components of $G_{\mathcal D}elta$, let ${\mathcal D}elta_1,\ldots,{\mathcal D}elta_t$ be the corresponding connected components of ${\mathcal D}elta$, and let $\{v_{j1},\ldots,v_{js_j}\}$ be the vertex set of $G_j$. Then a $k$-cover ${\bold c}$ of ${\mathcal D}elta$ can be decomposed into an $i$-cover ${\bold a}$ and a $j$-cover ${\bold b}$ of ${\mathcal D}elta$ if and only if for all $j$ the $k$-cover ${\bold c}_j=(c_{j1},\ldots,c_{js_j})$ of ${\mathcal D}elta_j$ can be decomposed into the $i$-cover $(a_{j1},\ldots,a_{js_j})$ and the $j$-cover $(b_{j1},\ldots,b_{js_j})$ of ${\mathcal D}elta_j$. Hence $B({\mathcal D}elta)=A({\mathcal D}elta)$ if and only if $B({\mathcal D}elta_j)=A({\mathcal D}elta_j)$ for all $j$. Therefore, it is enough to consider the case that $G_{\mathcal D}elta$ is connected.
First, let $B({\mathcal D}elta)=A({\mathcal D}elta)$. We assume that $G_{\mathcal D}elta$ is not bipartite and show that it is an odd cycle. Let $[n]$ be the set of vertices of ${\mathcal D}elta$ and ${\mathcal F}({\mathcal D}elta)=\{F_1,\ldots,F_m\}$. Since $G_{\mathcal D}elta$ is not bipartite, it has an odd cycle $C$. We may assume that the cycle $C\: v_{i_1},\ldots,v_{i_t}$ has no chord because if $\{v_{i_r},v_{i_s}\}$ is a chord of $C$, then either $v_{i_1},\ldots,v_{i_r},v_{i_s},\ldots,v_{i_t}$ or $v_{i_r},v_{i_{r+1}},\ldots,v_{i_s}$ is an odd cycle. Suppose the special cycle corresponding to $C$ in ${\mathcal D}elta$, after a relabeling of facets, is the cycle
$${\mathcal C}:\; i_1,F_1,i_2,\ldots,i_r,F_r,i_{r+1}=i_1,$$ where $\{i_j\}=F_{j-1}\cap F_j$ and $r$ is an odd integer. We show that $m=r$. This will imply that $G_{\mathcal D}elta =C$ because $C$ has no chord.
Assume $m>r$. We consider two cases and in each case we will find a non-squarefree indecomposable cover of ${\mathcal D}elta$, contradicting our assumption that $B({\mathcal D}elta)=A({\mathcal D}elta)$.
{\em Case 1.} Suppose each vertex of ${\mathcal D}elta$ belongs to at least one of the facets of the cycle ${\mathcal C}$. It is enough to consider the case that $m=r+1$. Indeed, if $m>r+1$, we set $W=[n]\setminus \Union_{j=r+2}^mF_j$ and consider ${\mathcal D}elta_W$. The strict intersection property implies that ${\mathcal D}elta_W$ has the special odd cycle
$${\mathcal C}:\; i_1,F_1',i_2,\ldots,i_r,F_r',i_{r+1}=i_1,$$
where $F_i' =F_i\setminus \Union_{j=r+2}^mF_j$ for all $i$. Moreover, ${\mathcal F}({\mathcal D}elta_W)=\{F_1',\ldots, F_r',F_{r+1}\}$. Lemma~\ref{restriction} implies that $B({\mathcal D}elta)\neq A({\mathcal D}elta)$ if $B({\mathcal D}elta_W)\neq A({\mathcal D}elta_W)$.
There exist two facets of ${\mathcal C}$, say $F_1$ and $F_s$, which intersect $F_{r+1}$. Since $r$ is an odd integer, one of the cycles
$$ j_1,F_1,i_2,F_2,\ldots,i_s,F_s,j_s,F_{r+1},j_1$$ or $$j_1, F_{r+1},j_s,F_s,i_{s+1},\ldots,F_r, i_1,F_1,j_1$$ is of odd length where $\{j_1\}=F_{r+1}\cap F_1$ and
$\{j_s\}=F_{r+1}\cap F_s$. Without loss of generality, we assume $ j_1,F_1,i_2,F_2,\ldots,i_s,F_s,j_s,F_{r+1},j_1$ is an odd cycle and call it ${\mathcal D}$. One has $r-s>1$ because $r-s$ is an odd number and if we had $r-s=1$, then the cycles $v_1,v_{r+1},v_s,v_r$ and $v_1,\ldots,v_r$ in $G_{\mathcal D}elta$ would have exactly two edges in common, contradicting our assumption.
Now we consider the vector ${\bold c}$ with $c_{j_1}=c_{j_s}=c_{i_2}=\ldots=c_{i_s}=1$, $c_{i_{s+2}}=\ldots=c_{i_r}=2$ and all the other entries of ${\bold c}$ equal to zero. The vector ${\bold c}$ is a non-squarefree 2-cover of ${\mathcal D}elta$ while $x^{{\bold c}}t^2\not\in B({\mathcal D}elta)$. In fact, if $x^{{\bold c}}t^2\in B({\mathcal D}elta)$, then there is a decomposition ${\bold c}={\bold a}+{\bold b}$ with an indecomposable squarefree cover ${\bold a}$ and a cover ${\bold b}$ such that either ${\bold a}$ is a 2-cover or ${\bold a}$ and ${\bold b}$ are both 1-covers of ${\mathcal D}elta$. Since ${\bold a}$ is squarefree, one has $\sum_{j\in F_{s+1}}a_j \leq 1$. Hence the order of ${\bold a}$ cannot be 2. On the other hand, since $\sum_{j \in F_i }a_j \geq 1$ for all facets $F_i$ of ${\mathcal D}$, at least $(s/2)+1$ of the entries $a_{j_1},a_{j_s},a_{i_2},\ldots,a_{i_s}$ of ${\bold a}$ are nonzero. Therefore at most $s/2$ of the entries $b_{j_1},b_{j_s},b_{i_2},\ldots,b_{i_s}$ of ${\bold b}$ are nonzero. Hence ${\bold b}$ is not a 1-cover of ${\mathcal D}elta$, and so $x^{{\bold c}}t^2\not\in B({\mathcal D}elta)$.
{\em Case 2}. Suppose there exists a vertex $t$ of ${\mathcal D}elta$ which belongs to none of the facets of the cycle ${\mathcal C}$.
We may assume that $t$ is the only vertex of ${\mathcal D}elta$ with this property. Indeed, if $t_1,\ldots,t_s$ are the other vertices of ${\mathcal D}elta$ with the same property, then we consider the restriction of ${\mathcal D}elta$ to the set $W= [n]\backslash \{t_1,\ldots,t_s\}$ and show that $B({\mathcal D}elta_W) \neq A({\mathcal D}elta_W)$. Then by applying Lemma~\ref{restriction} we obtain $B({\mathcal D}elta) \neq A({\mathcal D}elta)$.
We may assume that every facet of ${\mathcal D}elta$ which is not a facet of ${\mathcal C}$ contains $t$. Otherwise we restrict ${\mathcal D}elta$ to $W'=[n]\setminus \{t\}$, and by Case 1 it follows that $B({\mathcal D}elta_{W'})\neq A({\mathcal D}elta_{W'})$, which implies that $B({\mathcal D}elta)\neq A({\mathcal D}elta)$. Let ${\bold c}$ be the vector with $c_t=2$, $c_{i_1}=c_{i_2}=\ldots=c_{i_r}=1$ and all other entries of ${\bold c}$ equal to zero. The vector ${\bold c}$ is a 2-cover of ${\mathcal D}elta$. Suppose ${\bold c}={\bold a}+{\bold b}$ for some squarefree indecomposable cover ${\bold a}$ of positive order and some cover ${\bold b}$ of ${\mathcal D}elta$. Then at least $(r+1)/2$ of the entries $a_{i_1},a_{i_2},\ldots,a_{i_r}$ of ${\bold a}$ are nonzero, and consequently at most $(r-1)/2$ of the entries $b_{i_1},b_{i_2},\ldots,b_{i_r}$ of ${\bold b}$ are nonzero. Hence ${\bold b}$ is not a $1$-cover. Furthermore, if $F$ is a facet of ${\mathcal D}elta$ containing $t$, then $F$ contains none of the vertices $i_1,\ldots,i_r$ by ($\text{I}_2$), and hence $\sum_{j\in F}a_j=a_t\leq 1$. Hence ${\bold a}$ is not a 2-cover of ${\mathcal D}elta$. Therefore ${\bold c}$ is an indecomposable non-squarefree 2-cover of ${\mathcal D}elta$.
Conversely, we suppose $G_{\mathcal D}elta$ is a bipartite graph or an odd cycle. If $G_{\mathcal D}elta$ is bipartite, then ${\mathcal D}elta$ has no special odd cycle. Therefore by Theorem~\ref{no-odd}, $A({\mathcal D}elta)$ is standard graded and consequently $B({\mathcal D}elta)=A({\mathcal D}elta)$. The equality for the case that $G_{\mathcal D}elta$ is an odd cycle, has been shown in Lemma~\ref{odd-cycle}.
\end{proof}
Consider the simplicial complexes ${\mathcal D}elta_1$ and ${\mathcal D}elta_2$ as shown in Figure~\ref{2common}. They both satisfy the strict intersection property and have the same intersection graph. The cycles $v_1,v_2,v_3$ and $v_1,v_2,v_3,v_4$ of this intersection graph have exactly two edges in common. We have $B({\mathcal D}elta_1)=A({\mathcal D}elta_1)$ but $B({\mathcal D}elta_2)\neq A({\mathcal D}elta_2)$.
In fact, the vector $(1,0,2,0,1,1)$ is an indecomposable 2-cover of ${\mathcal D}elta_2$, which shows $B({\mathcal D}elta_2)\neq A({\mathcal D}elta_2)$. However, $A({\mathcal D}elta_1)$ is even standard graded. Let ${\bold c}=(c_1,\ldots,c_5)$ be a $k$-cover of ${\mathcal D}elta_1$ with $k\geq 2$. We show that there is a decomposition of ${\bold c}$ into a 1-cover ${\bold a}$ and a $(k-1)$-cover ${\bold b}={\bold c} -{\bold a}$ of ${\mathcal D}elta_1$. If $c_1$ and $c_3$ are both nonzero, then ${\bold a}=(1,0,1,0,0)$ gives the desired decomposition because the set $\{1,3\}$ meets each facet of ${\mathcal D}elta_1$ in exactly one vertex. Therefore, we may assume $c_1=0$ or $c_3=0$. By the same argument, one may assume $c_2=0$ or $c_4=0$. So it is enough to consider the case that $c_1$ and $c_2$ are zero or the case that $c_1$ and $c_4$ are zero. However, ${\bold c}$ is a $k$-cover of positive order, so the entries $c_1$ and $c_4$ cannot both be zero. Now since the vector ${\bold c}=(0,0,c_3,c_4,c_5)$ is a $k$-cover of ${\mathcal D}elta_1$, one obtains that $c_i\geq k$ for $i=3,4,5$. This implies that the vector ${\bold a}=(0,0,1,1,1)$ and ${\bold b}={\bold c}-{\bold a}$ give the desired decomposition of ${\bold c}$.
\begin{figure}\label{2common}
\end{figure}
\section{Vertex covers of principal Borel sets}
The classes of simplicial complexes considered so far for which $B({\mathcal D}elta)=A({\mathcal D}elta)$, happened to have the property that $A({\mathcal D}elta)$ is generated over $S$ in degree at most 2. In this section we present classes of simplicial complexes ${\mathcal D}elta$ such that $B({\mathcal D}elta)=A({\mathcal D}elta)$ and $A({\mathcal D}elta)$ has generators in higher degrees.
We will consider a family of simplicial complexes whose set of facets corresponds to a Borel set. Recall that a subset ${\mathcal B}\subseteq 2^{[n]}$ is called {\em Borel} if
whenever $F\in {\mathcal B}$ and $i < j$ for some $i\in [n]\setminus F$ and $j\in F$, then $(F \setminus \{j\}) \cup \{i\}\in {\mathcal B}$. Elements $F_1,\ldots,F_m\in {\mathcal B}$ are called {\em Borel generators} of ${\mathcal B}$, denoted by ${\mathcal B}=B( F_1,\ldots,F_m)$, if ${\mathcal B}$ is the smallest Borel subset of $2^{[n]}$ such that $F_1,\ldots,F_m\in {\mathcal B}$. A Borel set ${\mathcal B}$ is called
{\em principal} if there exists $F\in {\mathcal B}$ such that ${\mathcal B} = B(F)$.
A squarefree monomial ideal $I\subseteq S$ is called a {\em squarefree Borel ideal} if there exists a Borel set ${\mathcal B}\subseteq 2^{[n]}$ such that
\[
I=(\{x_F \:\; F\in {\mathcal B} \}).
\]
If ${\mathcal B}=B( F_1,\ldots,F_m)$, then the monomials $x_{F_1},\ldots,x_{F_m}$ are called the {\em Borel generators} of $I$.
The ideal $I$ is called a {\em squarefree principal Borel ideal} if ${\mathcal B}$ is principal Borel.
It is known that the Alexander dual of a squarefree Borel ideal is again squarefree Borel \cite{FMS}. In the case that $I$ is squarefree principal Borel, the following result is shown in \cite[Theorem 3.18]{FMS}.
\begin{Theorem}[Francisco, Mermin, Schweig]
\label{BorelAlexander}
Let $I$ be a squarefree principal Borel ideal with the Borel generator $x_F$ where $F=\{i_1<i_2<\cdots < i_d\}$. Then the Alexander dual $I^\vee$ of $I$ is the squarefree Borel ideal with the Borel generators $x_{H_1},\ldots,x_{H_d}$ where $H_q=\{q,q+1,\ldots,i_q\}$ for $q=1,\ldots,d$.
\end{Theorem}
Let $G_1=\{i_1<\cdots<i_d\}$ and $G_2=\{j_1<\cdots<j_d\}$ be subsets of $[n]$. It is said that {\em $G_1$ precedes $G_2$ (with respect to the Borel order)}, denoted by $G_1\prec G_2$, if $i_s\leq j_s$ for all $s$. By \cite[Lemma 2.11]{FMS}, $F\in B( F_1,\ldots,F_m) $ if and only if $F$ precedes $F_i$, for some $i=1,\ldots,m.$
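The membership criterion from \cite[Lemma 2.11]{FMS} makes Borel sets easy to enumerate. A minimal sketch (our own illustration; the function names are ours):

```python
from itertools import combinations

def precedes(G1, G2):
    """G1 precedes G2 in the Borel order (sorted sequences, equal length)."""
    return all(i <= j for i, j in zip(sorted(G1), sorted(G2)))

def borel_set(n, generators):
    """All subsets of [n] in B(F_1, ..., F_m): those preceding some F_i."""
    return {G for F in generators
              for G in combinations(range(1, n + 1), len(F))
              if precedes(G, F)}

# B({2, 4}) inside [4]: the five 2-subsets preceding {2, 4}.
assert sorted(borel_set(4, [(2, 4)])) == \
    [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4)]
```

The enumeration works generator by generator, so Borel sets with generators of different cardinalities are handled as well.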
\begin{Lemma}
\label{skeleton}
Let ${\mathcal B}=B( F_1,\ldots,F_m)$ be a Borel set with Borel generators $F_j=\{i_{j,1}<i_{j,2}<\cdots < i_{j,d_j}\}$ for $j=1,\ldots,m$, and suppose ${\mathcal D}elta$ is the simplicial complex with ${\mathcal F}({\mathcal D}elta)={\mathcal B}$.
Then for the $q$-skeleton ${\mathcal D}elta^{(q)}$ of ${\mathcal D}elta$, the set ${\mathcal F}({\mathcal D}elta^{(q)})$ is a Borel set with the Borel generators $G_1,\ldots,G_m$ such that $G_j=\{i_{j,d_j-q}<i_{j,d_j-q+1}<\cdots<i_{j,d_j}\}$ if $d_j> q$, and $G_j=F_j$ if $d_j\leq q$.
\end{Lemma}
\begin{proof}
Let $G$ be a subset of $[n]$. First assume that $|G|\leq q$. In this case $G$ is a facet of ${\mathcal D}elta^{(q)}$ if and only if $G$ is a facet of ${\mathcal D}elta$, and this is the case if and only if $G$ precedes $F_j$ for some $j$ with $d_j\leq q$.
Next assume that $|G|=q+1$. We must show that $G$ is a facet of ${\mathcal D}elta^{(q)}$ if and only if $G$ precedes $\{i_{j,d_j-q},i_{j,d_j-q+1},\ldots,i_{j,d_j}\}$ for some $j$ with $d_j> q$. The set $G$ is a facet of ${\mathcal D}elta^{(q)}$ if and only if there exists a facet $H$ of ${\mathcal D}elta$ preceding $F_j$ for some $j$ with $d_j> q$ such that $G\subseteq H$. So for simplicity, we may assume that ${\mathcal B}$ is a principal Borel set with the Borel generator $F=\{i_1<i_2<\cdots < i_d\}$ where $d>q$, and show that $G$ is a facet of ${\mathcal D}elta^{(q)}$ if and only if $G$ precedes $\{i_{d-q},\ldots,i_d\}$. First, let $G= \{k_{d-q}<k_{d-q+1}<\cdots<k_d\}\subseteq [n]$, and suppose $G$ precedes $\{i_{d-q},\ldots,i_d\}$, so that $k_j\leq i_j$ for $j=d-q,\ldots,d$. Let $r$ be an integer such that the cardinality of $[r]\cup \{k_{d-q},k_{d-q+1},\ldots,k_d\}$ is $d$. The set $H=[r]\cup \{k_{d-q},k_{d-q+1},\ldots,k_d\}$ precedes $F$ and contains $G$. This means that $G$ is a facet of ${\mathcal D}elta^{(q)}$. On the other hand, if there exists a set $H$ such that $G\subseteq H$ and $H$ precedes $F$, then $G$ precedes $\{i_{d-q},\ldots,i_d\}$, since for each $t$ the $t$th largest element of $G$ is at most the $t$th largest element of $H$, which in turn is at most $i_{d-t+1}$.
\end{proof}
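The lemma can be checked computationally on a small principal Borel set (an illustration of ours, not from the paper): for $F=\{2,3,5\}$ in $[5]$ and $q=1$, the facets of the $1$-skeleton coincide with $B(\{3,5\})$, as predicted by the generator $G=\{i_{d-q},\ldots,i_d\}$.

```python
from itertools import combinations

def precedes(G1, G2):
    return all(i <= j for i, j in zip(sorted(G1), sorted(G2)))

def borel_set(n, generators):
    return {G for F in generators
              for G in combinations(range(1, n + 1), len(F))
              if precedes(G, F)}

n, F, q = 5, (2, 3, 5), 1

# Facets of the q-skeleton, computed directly as (q+1)-subsets of facets
# (the complex is pure, so no smaller facets occur).
facets = borel_set(n, [F])
skeleton = {G for H in facets for G in combinations(H, q + 1)}

# The lemma predicts the Borel generator (i_{d-q}, ..., i_d) = (3, 5):
assert skeleton == borel_set(n, [(3, 5)])
assert (4, 5) not in skeleton    # 4 > 3, so {4, 5} does not precede {3, 5}
```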
By the preceding lemma, we have the following generalization of Theorem~\ref{BorelAlexander} (\cite[Theorem 3.18]{FMS}).
\begin{Proposition}
\label{B-generators}
Let ${\mathcal B}=B( F)$ be a principal Borel set with Borel generator $F=\{i_1<i_2<\cdots < i_d\}$, and let ${\mathcal D}elta$ be the simplicial complex with ${\mathcal F}({\mathcal D}elta)={\mathcal B}$. Then the $S$-algebra $B({\mathcal D}elta)$ is generated by the elements $x_Ht^k$, for $k=1,\ldots,d$, where
\[
H\in B(\{q,q+1,\ldots, i_{k+q-1}\}\:\; q=1,\ldots,d-k+1).
\]
\end{Proposition}
\begin{proof}
First observe that by Lemma \ref{lunch}, $B({\mathcal D}elta)$ is generated in degree $\leq d$. So it is enough to show that for each $k=1,\ldots,d$, the monomials $x_Ht^k$ with
$H\in B(\{q,q+1,\ldots ,i_{k+q-1}\}\:\; q=1,\ldots,d-k+1)$ generate $L_k({\mathcal D}elta)^{sq}$. Since ${\mathcal D}elta$ is pure, by Theorem \ref{dual}, this is the case if these monomials generate $I({\mathcal D}elta^{(d-k)})^\vee$. By Lemma~\ref{skeleton}, the ideal $I({\mathcal D}elta^{(d-k)})$ is a Borel ideal with the Borel generator $x_{i_k}x_{i_{k+1}}\cdots x_{i_d}$. Hence Theorem~\ref{BorelAlexander} implies the result.
\end{proof}
\begin{Proposition}
\label{borel}
Let ${\mathcal B}=B(F_1,\ldots,F_m)$ be a Borel set such that $|F_i|=|F_j|$ for all $i,j$, and suppose ${\mathcal D}elta $ is a simplicial complex with ${\mathcal F}({\mathcal D}elta)={\mathcal B}$. Then $L_k({\mathcal D}elta)^{sq}$ is a squarefree Borel ideal for all $k$.
\end{Proposition}
\begin{proof}
Since $|F_i|=|F_j|$ for all $i,j$, the simplicial complex ${\mathcal D}elta$ is pure. By Theorem~\ref{dual}, this implies that $L_k({\mathcal D}elta)^{sq}=I({\mathcal D}elta^{(d-k)})^\vee$ for all $k$. By Lemma~\ref{skeleton}, the ideal $I({\mathcal D}elta^{(d-k)})$ is a squarefree Borel ideal. As shown in Theorem~\ref{BorelAlexander}, the Alexander dual of a squarefree principal Borel ideal is squarefree Borel. Since $(I+J)^\vee=I^\vee\cap J^\vee$ for all squarefree monomial ideals $I$ and $J$, by the squarefree version of \cite[Proposition 2.16]{FMS} one obtains that $I({\mathcal D}elta^{(d-k)})^\vee$ is a squarefree Borel ideal. Hence $L_k({\mathcal D}elta)^{sq}$ is squarefree Borel.
\end{proof}
In Proposition \ref{borel}, suppose that the Borel generators of ${\mathcal B}$ do not all have the same cardinality. Consider the simplicial complex ${\mathcal D}elta$ with ${\mathcal F}({\mathcal D}elta)={\mathcal B}'$ where ${\mathcal B}'$ is the set of maximal elements of ${\mathcal B}$ with respect to inclusion. Then the ideal $L_k({\mathcal D}elta)^{sq}$ need not be squarefree Borel. For example, if ${\mathcal B}=B(\{1,4\},\{1,2,3\})$, then ${\mathcal F}({\mathcal D}elta)=\{\{1,4\},\{1,2,3\}\}$. However, the ideal $L_2({\mathcal D}elta)^{sq}=(x_1x_2x_4,x_1x_3x_4)$ is not squarefree Borel.
Following the proof of Proposition \ref{borel}, one observes that $I({\mathcal D}elta^{(j)})^\vee$ is nevertheless always a squarefree Borel ideal for all $j$, regardless of the cardinalities of the Borel generators.
Let ${\mathcal B}$ be a Borel set, not necessarily principal, and let ${\mathcal D}elta$ be the simplicial complex with ${\mathcal F}({\mathcal D}elta)={\mathcal B}$. In \cite{FMS}, the authors describe the 1-covers of ${\mathcal D}elta$ by using Theorem~\ref{BorelAlexander} and the fact that $(I+J)^\vee=I^\vee\cap J^\vee$ for all squarefree monomial ideals $I$ and $J$. By a similar argument, one can use Proposition \ref{B-generators} to obtain the squarefree $k$-covers of ${\mathcal D}elta$ when ${\mathcal D}elta$ is pure. More precisely, let ${\mathcal F}({\mathcal D}elta)=B( F_1,\ldots,F_m)$ be a Borel set with $|F_i|=|F_j|$ for all $i,j$. By Lemma~\ref{skeleton}, ${\mathcal F}({\mathcal D}elta^{(d-k)})$ is also a Borel set, with Borel generators as described in that lemma. Now using the fact that $(I+J)^\vee=I^\vee\cap J^\vee$ for all squarefree monomial ideals $I$ and $J$, together with the equality $I({\mathcal D}elta^{(d-k)})^\vee=L_k({\mathcal D}elta)^{sq}$, one obtains the result in the more general case.
For example, let ${\mathcal D}elta$ be the simplicial complex with ${\mathcal F}({\mathcal D}elta)=B(\{1,4,5\},\{2,3,4\})$. We find the 2-covers of ${\mathcal D}elta$ as follows: for the simplicial complex ${\mathcal D}elta_1$ with ${\mathcal F}({\mathcal D}elta_1)=B(\{1,4,5\})$, Proposition~\ref{B-generators} yields
\begin{eqnarray*}
L_2({\mathcal D}elta_1)^{sq}&=&(x_H\:\; H\in B(\{1,2,3,4\},\{2,3,4,5\})\;)\\
&=&(x_1x_2x_3x_4,x_1x_2x_3x_5,x_1x_2x_4x_5,x_1x_3x_4x_5,x_2x_3x_4x_5),
\end{eqnarray*}
and similarly for the simplicial complex ${\mathcal D}elta_2$ with ${\mathcal F}({\mathcal D}elta_2)=B(\{2,3,4\})$, one obtains
\begin{eqnarray*}
L_2({\mathcal D}elta_2)^{sq}&=&(\{x_H\:\; H\in B(\{1,2,3\},\{2,3,4\})\})\\
&=&(x_1x_2x_3,x_1x_2x_4,x_1x_3x_4,x_2x_3x_4).\hspace{3.2cm}
\end{eqnarray*}
Hence
\begin{eqnarray*}
L_2({\mathcal D}elta)^{sq}&=&L_2({\mathcal D}elta_1)^{sq}\cap L_2({\mathcal D}elta_2)^{sq}\\
&=&(x_1x_2x_3x_4,x_1x_2x_3x_5,x_1x_2x_4x_5,x_1x_3x_4x_5,x_2x_3x_4x_5).
\end{eqnarray*}
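This computation can be verified by brute force, since the minimal generators of $L_2({\mathcal D}elta)^{sq}$ correspond to the minimal sets meeting every facet of ${\mathcal D}elta$ in at least two vertices (a sketch of ours, not from the paper):

```python
from itertools import combinations

def precedes(G1, G2):
    return all(i <= j for i, j in zip(sorted(G1), sorted(G2)))

n = 5
# Facets of Delta: the Borel set B({1,4,5}, {2,3,4}).
facets = [set(G) for G in combinations(range(1, n + 1), 3)
          if precedes(G, (1, 4, 5)) or precedes(G, (2, 3, 4))]

# Squarefree 2-covers are the sets meeting every facet in >= 2 vertices;
# the minimal ones give the generators of L_2(Delta)^sq.
covers = [set(H) for r in range(n + 1)
          for H in combinations(range(1, n + 1), r)
          if all(len(set(H) & F) >= 2 for F in facets)]
minimal = [H for H in covers if not any(K < H for K in covers)]

assert sorted(map(sorted, minimal)) == [
    [1, 2, 3, 4], [1, 2, 3, 5], [1, 2, 4, 5], [1, 3, 4, 5], [2, 3, 4, 5]]
```

The five $4$-subsets of $[5]$ recovered here match the five generators of $L_2({\mathcal D}elta)^{sq}$ displayed above.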
Observe that the generators of $B({\mathcal D}elta)$ as described in Proposition~\ref{B-generators} are not necessarily the minimal ones.
\begin{Theorem}
\label{borel-generators}
Let ${\mathcal B}=B( F)$ be a principal Borel set with Borel generator $F=\{i_1<i_2<\cdots < i_d\}$, and let ${\mathcal D}elta$ be the simplicial complex with ${\mathcal F}({\mathcal D}elta)={\mathcal B}$. Then $B({\mathcal D}elta)=A({\mathcal D}elta)$.
\end{Theorem}
\begin{proof}
We show that for every non-squarefree cover ${\bold c}$ of ${\mathcal D}elta$ of positive order, there exists a decomposition ${\bold c}={\bold a}+{\bold b}$ such that ${\bold a}$ is a squarefree cover of ${\mathcal D}elta$. This will imply that $B({\mathcal D}elta)=A({\mathcal D}elta).$ So consider a non-squarefree $k$-cover ${\bold c}$ of ${\mathcal D}elta$ with $k>0$ and let $C=\supp({\bold c})$. We may assume that $k$ is the maximum order of ${\bold c}$, that is, if $\ell> k$, then ${\bold c}$ is not an $\ell$-cover. Indeed, assume that $k$ is the maximum order of ${\bold c}$ and ${\bold c}={\bold a}+{\bold b}$ is a decomposition of ${\bold c}$ such that ${\bold a}$ and ${\bold b}$ are covers of ${\mathcal D}elta$ of orders $i$ and $j$, respectively, with $k=i+j$. Then for every $k'<k$, one can choose $i'\leq i$ and $j'\leq j$ such that $i'+j'=k'$. Hence ${\bold c}={\bold a}+{\bold b}$ can also be considered as a decomposition of ${\bold c}$ as a $k'$-cover.
We denote by ${\mathcal A}_{\ell}$ the Borel set $ B(\{q,q+1,\ldots, i_{\ell+q-1}\}\:\; q=1,\ldots,d-\ell+1)$, for $\ell=1,\ldots,d$. Then by Proposition~\ref{B-generators}, for every $H\in {\mathcal A}_{\ell}$, the $(0,1)$-vector ${\bold h}$ with $\supp({\bold h})=H$ is a squarefree $\ell$-cover of ${\mathcal D}elta$.
\\
{\em Step 1} (Defining the cover ${\bold a}$)\textbf{.}
Let
$${\mathcal T}=\{H\subseteq C\:\; H\in {\mathcal A}_{\ell} \text{ for some } \ell \}.$$ Since ${\bold c}$ is a cover of positive order, it is a $1$-cover of ${\mathcal D}elta$ as well. Hence Theorem \ref{BorelAlexander} implies that ${\mathcal T}$ is nonempty. Let
$$r=\max\{\ell \:\; \text{there exists } H\in {\mathcal T} \text{ with } H\in {\mathcal A}_{\ell} \}.$$
Observe that $r\leq k$ because $k$ is the maximum order of ${\bold c}$. Let $A$ be an element of ${\mathcal T}$ such that $A\in {\mathcal A}_{r}$. Consider the vector ${\bold a}$ with $\supp({\bold a})=A$. Then the vector ${\bold a}$ is an $r$-cover of ${\mathcal D}elta$. We will show later that ${\bold b}={\bold c}-{\bold a}$ is a $(k-r)$-cover of ${\mathcal D}elta$ to obtain the desired decomposition of ${\bold c}$.
\\
{\em Step 2.} If $r\neq d$, we observe that there exist at least $t$ entries $c_j$ of ${\bold c}$ with $c_j=0$ and $j\leq i_{r+t}$, for all $t=1,\ldots, d-r.$ In fact, if at most $t-1$ of the entries $c_j$ with $j\leq i_{r+t}$ are zero, then there exists a subset $A'$ of $C$ which precedes $\{t,t+1,\ldots,i_{r+t}\}$. This means that $A'\in {\mathcal A}_{r+1}$, a contradiction to the choice of $r$.
\\
{\em Step 3.}
We show that ${\bold b}={\bold c}-{\bold a}$ is a $(k-r)$-cover of $\Delta$. If $r=d$, then $A=[i_d]$ and ${\bold a}$ is the vector for which all entries are $1$. Thus ${\bold b}$ is a $(k-r)$-cover of $\Delta$ because every facet of $\Delta$ is of cardinality $d$. Therefore, we have the desired decomposition of ${\bold c}$. So we may assume that $r<d$.
We consider a facet $G$ of $\Delta$, and show that $\sum_{i\in G}b_i\geq k-r$. For this purpose, attached to $G$, we inductively define a sequence $G=G_0,G_1,\ldots,G_m$ of facets of $\Delta$, where $m=|G\cap A |-r$, with the following properties:
\begin{enumerate}
\item[(i)] $1+|G_i\cap A|=|G_{i-1}\cap A|$ for all $i=1,\ldots,m;$
\item[(ii)] $1+\sum_{j\in G_i}c_j\leq \sum_{j\in G_{i-1}}c_j $ for all $i=1,\ldots,m.$
\end{enumerate}
Assume that the facets $G_0,\ldots,G_{\ell}$ have already been defined for some $\ell<m$. Let $G_{\ell}=\{j_1<\cdots<j_d\}$ and $j_s=\max\{j_i\in G_{\ell} : j_i\in A\}$. By property (i) and since $\ell<m=|G\cap A|-r$, one has $|G_{\ell}\cap A|>r$. Hence $s>r$. By Step 2, since ${\bold c}$ has at least $d$ zero entries, the set $\{i : c_i=0 \text{ and } i\not\in G_{\ell}\}$ is nonempty. In fact, otherwise $i\in G_\ell$ whenever $c_i=0$, and so $G_{\ell}\cap \supp({\bold c}) = \emptyset$, contradicting the fact that $\sum_{j\in G_{\ell}}c_j\geq k>0$. Let
$$t=\min\{i : c_i=0 \text{ and } i\not\in G_{\ell}\}.$$
We set $G_{\ell+1}=(G_{\ell}\setminus \{j_s\})\cup \{t\}$. Since $c_t=0$, one has $t\not\in A$, and so (i) holds for $G_{\ell+1}$. But $c_{j_s}\neq 0$ because $j_s\in A$. Thus (ii) also holds for $G_{\ell+1}$. So we only need to show that $G_{\ell+1}$ is a facet of $\Delta$. If $t<j_s$, then $G_{\ell+1}$ is a facet of $\Delta$ by the definition of the Borel sets. Furthermore, since $t\not\in G_{\ell}$, one has $t\neq j_s$. So assume that $t>j_s$ and
$i_{s+q}<t\leq i_{s+q+1}$. Then for each $q'=1,\ldots,q+1$, we have $j_{s+q'}\leq i_{s+q'-1}$. To the contrary, assume that $j_{s+q'}> i_{s+q'-1}$ for some $q'=1,\ldots,q$. Then exactly $s+q'-1$ elements of $G_{\ell}$ belong to the set $[i_{s+q'-1}]$. On the other hand, by Step 2 there exist $s+q'-1$ entries $c_i=0$ with $i\in [i_{s+q'-1}]$. So by the choice of $t$, we would have $c_i=0$ for $i=j_1,\ldots,j_{s+q'-1}$, which implies $\{j_1,\ldots,j_{s+q'-1}\}\cap A=\emptyset$, a contradiction. Thus $j_{s+q'}\leq i_{s+q'-1}$ for all $q'=1,\ldots,q+1$, and hence the set $\{j_1<\cdots<\hat{j}_s<\cdots < j_{s+q+1}\}$ precedes $\{i_1<\cdots< i_{s+q}\}.$ Let $E_1=\{j'_{s+q+1}<\cdots<j'_d\}$ be the set $\{t,j_{s+q+2},\ldots,j_d\}$. Considering the fact that $t\leq i_{s+q+1}$, it follows that $E_1$ precedes $\{i_{s+q+1},\ldots , i_d\}$, because if $j_{q'}<t$, then $j_{q'}\leq i_{q'-1}$ for $q'=s+q+2,\ldots,d$. Therefore, $G_{\ell+1}=\{j_1<\cdots<\hat{j_s}<\cdots < j_{s+q+1}<j'_{s+q+1}<\cdots<j'_d\}$ precedes $F$, which means $G_{\ell+1}$ is a facet of $\Delta$, as desired.
Now using the property (ii) of the sequence $G_0,\ldots,G_m$, we obtain
$$\sum_{i\in G}c_i\geq \sum_{i\in G_m}c_i+m \geq k+m=k+|G\cap A |-r.$$
The second inequality above holds because ${\bold c}$ is a $k$-cover of $\Delta$. So
$$\sum_{i\in G}b_i=\sum_{i\in G}c_i-\sum_{i\in G}a_i\geq (k+|G\cap A |-r)-|G\cap A |=k-r.$$
Thus ${\bold b}$ is a $(k-r)$-cover of $\Delta$, and this means that ${\bold c}={\bold a}+{\bold b}$ is the desired decomposition of ${\bold c}$.
\end{proof}
The statement of Theorem \ref{borel-generators} does not hold for Borel sets in general. Once more consider the example after Proposition \ref{borel}, where $\Delta$ is the simplicial complex with ${\mathcal F}(\Delta)=B(\{1,4,5\},\{2,3,4\})$. Then ${\bold c}=(2,1,1,1,0)$ is a 3-cover of $\Delta$. We already know the squarefree 2-covers of $\Delta$, and one can also find the squarefree 1-covers and 3-cover of $\Delta$ by Proposition~\ref{B-generators} in order to see that ${\bold c}$ cannot be decomposed into squarefree covers of $\Delta$. Hence $B(\Delta)\neq A(\Delta).$
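The facet list and the cover condition in this example are small enough to check mechanically. The following Python sketch (our own illustration; the variable names are not from the text) enumerates the Borel set $B(\{1,4,5\},\{2,3,4\})$ and computes the minimum of $\sum_{i\in F}c_i$ over its facets $F$, confirming that $(2,1,1,1,0)$ is a $3$-cover but not a $4$-cover:

```python
from itertools import combinations

# Facets of Delta: the Borel set B({1,4,5},{2,3,4}), i.e. all 3-subsets of [5]
# preceding (componentwise at most) one of the two Borel generators.
gens = [(1, 4, 5), (2, 3, 4)]
facets = sorted(F for F in combinations(range(1, 6), 3)
                if any(all(a <= b for a, b in zip(F, G)) for G in gens))

c = (2, 1, 1, 1, 0)                        # candidate cover, indexed by vertex 1..5
order = min(sum(c[i - 1] for i in F) for F in facets)

print(len(facets))   # 7 facets
print(order)         # 3: c is a 3-cover of Delta but not a 4-cover
```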
\begin{Corollary}
Let ${\mathcal B}=B( F)$ be a principal Borel set with Borel generator $F=\{i_1<i_2<\cdots < i_d\}$, and let $\Delta$ be the simplicial complex with ${\mathcal F}(\Delta)={\mathcal B}$.
Then $B(\Delta^{(j)})=A(\Delta^{(j)})$ for every $j=0,\ldots,d-1$.
\end{Corollary}
\begin{proof}
It is enough to notice that by Lemma~\ref{skeleton}, the set ${\mathcal F}(\Delta^{(j)})$ is a principal Borel set. Hence Theorem~\ref{borel-generators} implies the result.
\end{proof}
An immediate consequence of Theorem \ref{borel-generators} is the following result in \cite[Proposition 4.6]{HHT}.
\begin{Corollary}
Let $\Sigma_n$ denote the simplex of all subsets of $[n]$. Then the $S$-algebra $A(\Sigma_n^{(d-1)})$ is minimally generated by the monomials $x_{j_1}x_{j_2}\cdots x_{j_{n-d+k}}t^k $, where $k=1,\ldots,d$ and $1\leq j_1<j_2<\cdots <j_{n-d+k}\leq n$.
\end{Corollary}
\begin{proof}
Consider the Borel set ${\mathcal B}=B(\{n-d+1,n-d+2,\ldots,n\})$. Then ${\mathcal F}(\Sigma_n^{(d-1)})={\mathcal B}$. Thus by Proposition~\ref{B-generators} and Theorem~\ref{borel-generators}, the $S$-algebra $A(\Sigma_n^{(d-1)})$ is generated by the monomials $x_Ht^k$ where
\[
H\in B(\{1,2,\ldots,n-d+k\},\{2,3,\ldots,n-d+k+1\},\ldots,\{d-k+1,d-k+2,\ldots,n\}),
\]
for $k=1,\ldots,d$. The above-mentioned set is exactly the set of all subsets of $[n]$ of cardinality $n-d+k$. Moreover, these monomials form a minimal system of generators. Indeed, if $n=d$ there is nothing to prove; if $n>d$, assume to the contrary that for a generator $x_Ht^k$ there exist monomials $x_{H_1}t^{k_1}$ and $x_{H_2}t^{k_2}$ in this set such that $x_{H_1}x_{H_2}\mid x_H$ and $k_1+k_2=k$. Hence we have
\[
(n-d+k_1)+(n-d+k_2)=|H_1|+|H_2|\leq |H|=(n-d+k).
\]
Thus $n\leq d$, a contradiction.
\end{proof}
The following proposition exhibits a condition which guarantees the existence of a generator of degree $\dim(\Delta)+1$ in the minimal set of monomial generators of $A(\Delta)$.
\begin{Proposition}
\label{higher-generator}
Let ${\mathcal B}=B( F)$ be a principal Borel set with Borel generator $F=\{i_{1}<i_{2}<\cdots < i_{d}\}$, and let $\Delta$ be the simplicial complex with ${\mathcal F}(\Delta)={\mathcal B}$. Then $x_1x_2\cdots x_{i_d}t^d$ belongs to the minimal set of monomial generators of $A(\Delta)$ if and only if $i_1\neq 1$.
\end{Proposition}
\begin{proof}
First observe that $i_1\neq 1$ if and only if $i_j\neq j$ for all $j$. So we show that the $d$-cover ${\bold c}$ of $\Delta$ with $c_i=1$ for all $i$ is decomposable if and only if $i_j=j$ for some $j$. First, suppose that ${\bold c}$ is decomposable and ${\bold c}={\bold a}+{\bold b}$ is a decomposition of ${\bold c}$. Since $\Delta$ is pure of dimension $d-1$, the vector ${\bold c}$ is the only squarefree $d$-cover of $\Delta$. Therefore, the orders of ${\bold a}$ and ${\bold b}$ are nonzero. Let the order of ${\bold a}$ be $r$ and the order of ${\bold b}$ be $d-r$. Since ${\bold c}$ does not have a decomposition into a $0$-cover and a $d$-cover, the same is true for the covers ${\bold a}$ and ${\bold b}$. Hence if $A=\supp({\bold a})$ and $B=\supp({\bold b})$, then they are of the form described in Proposition~\ref{B-generators}. In other words, there exist numbers $j$ and $l$ such that $A$ precedes $\{l-r+1,l-r+2,\ldots,i_l\}$ and $B$ precedes $\{j-d+r+1, j-d+r+2, \ldots, i_j\}$. Observe that $[i_d]$ is the disjoint union of $A$ and $B$ because ${\bold c}={\bold a}+{\bold b}$. Thus we may assume that $i_d\in A$, which implies $l=d$. Moreover, we see that $|A|+|B|= i_d$. For this reason, since $|A|=i_d-(d-r+1)+1$ and $|B|=i_j-(j-d+r+1)+1$, we obtain $i_j=j$.
Conversely, let $i_j=j$ for some $j$. By Proposition~\ref{B-generators}, the monomial $x_At^j\in L_j(\Delta)$ where $A=\{1,2,\ldots,i_j=j\}$, and the monomial $x_Bt^{d-j}\in L_{d-j}(\Delta)$ where $B=\{j+1,j+2,\ldots,i_d\}$. Let ${\bold a}$ and ${\bold b}$ be the vectors with $A=\supp({\bold a})$ and $B=\supp({\bold b})$. Then we have the decomposition ${\bold c}={\bold a}+{\bold b}$ of the cover ${\bold c}$.
\end{proof}
\end{document} |
\begin{document}
\theoremstyle{plain}
\newtheorem{thm}{Theorem}[section]
\newtheorem{prop}[thm]{Proposition}
\newtheorem{lem}[thm]{Lemma}
\newtheorem{cor}[thm]{Corollary}
\newtheorem{conj}[thm]{Conjecture}
\newtheorem{claim}[thm]{Claim}
\theoremstyle{definition}
\newtheorem{rem}[thm]{Remark}
\newtheorem{ass}[thm]{Assumption}
\newtheorem{defn}[thm]{Definition}
\newtheorem{example}[thm]{Example}
\setlength{\parskip}{1ex}
\title[The blocks of the Brauer algebra]{A geometric characterisation
of\\ the blocks of the Brauer algebra}
\author{Anton Cox}
\email{[email protected], [email protected], [email protected]}
\author{Maud De Visscher}
\author{Paul Martin}
\address{Centre for Mathematical Science\\
City University\\
Northampton Square\\
London\\
EC1V 0HB\\
England.}
\subjclass[2000]{Primary 20G05}
\begin{abstract}We give a geometric description of the
blocks of the Brauer algebra $B_n(\delta)$ in characteristic zero as
orbits of the Weyl group of type $D_n$. We show how the corresponding
affine Weyl group controls the representation theory of the
Brauer algebra in positive characteristic, with orbits corresponding
to unions of blocks.
\end{abstract}
\maketitle
\section{Introduction}
Classical Schur-Weyl duality relates the representation theory of the
symmetric and general linear groups by realising each as the
centraliser algebra of the action of the other on a certain tensor
space. The Brauer algebra $B_n(\delta)$ was introduced to provide a
corresponding duality for the symplectic and orthogonal groups
\cite{brauer}. The abstract $k$-algebra is defined for each $\delta\in
k$; however, for Brauer the key case is $k={\mathbb C}$ with $\delta$ integral,
when the action of $B_n(\delta)$ on $({\mathbb C}^{|\delta|})^{\otimes n}$ can
be identified with the centraliser algebra for the corresponding group
action (of O$(\delta,{\mathbb C})$ for $\delta$ positive, and of
Sp$(-\delta,{\mathbb C})$ for $\delta$ negative). In characteristic $p$, the
natural algebra corresponding to the centraliser algebra for
$\delta$ negative is the symplectic Schur algebra
\cite{dongf,dotpoly1,oe1,ddh}.
For $|\delta|<n$ the centraliser algebra is a proper quotient of the
Brauer algebra. Thus, despite the fact that the symplectic and
orthogonal groups, and hence the centraliser, are semisimple over
${\mathbb C}$, the Brauer algebra can have a non-trivial
cohomological structure in such cases.
Brown \cite{brownbrauer} showed that the Brauer algebra is semisimple
over ${\mathbb C}$ for generic values of $\delta$. Wenzl proved that
$B_n(\delta)$ is semisimple over ${\mathbb C}$ for all non-integer $\delta$
\cite{wenzlbrauer}. It was not until very recently that any progress
was made in positive characteristic. A necessary and sufficient
condition for semisimplicity (valid over an arbitrary field) was given
by Rui \cite{ruibrauer}. The blocks were determined in characteristic
zero \cite{cdm} by the authors.
The block result uses the theory of towers of recollement developed in
\cite{cmpx}, and built on work by Doran, Hanlon and Wales
\cite{dhw}. The approach was combinatorial, using the language of
partitions and tableaux, and depended also on a careful analysis of
the action of the symmetric group $\Sigma_n$, realised as a subalgebra
of the Brauer algebra. However, we speculated in \cite{cdm} that there
could be an alcove geometric version, in the language of algebraic Lie
theory \cite{jannewed} (despite the absence of an obvious
Lie-theoretic context). This should replace the combinatorics of
partitions by the action of a suitable reflection group on a weight
space, so that the blocks correspond to orbits under this action. In
this paper we will give such a geometric description of this block
result.
A priori there is no specific evidence from algebraic Lie theory to
suggest that such a reflection group action will exist (beyond certain
similarities with the partition algebra case, where there is a
reflection group of infinite type $A$ \cite{mwdef}). As already
noted, the obvious link to Lie theory (via the duality with symplectic
and orthogonal groups) in characteristic zero only corresponds to a
semisimple quotient.
Remarkably however, we will show that there is a Weyl group $W$ of type
$D$ which does control the representation theory. To obtain a natural
action of this group, we will find that it is easier to work with the
transpose of the usual partition notation. (This is reminiscent of the
relation under Ringel duality between the combinatorics of the
symmetric and general linear groups, although we do not have a
candidate for a corresponding dual object in this case.)
Our proof of the geometric block result in characteristic $0$ is
entirely combinatorial, as we show that the action of $W$ corresponds
to the combinatorial description of blocks in \cite{cdm}. However,
having done this, it is natural to consider extending these results to
arbitrary fields.
As the algebras and (cell) modules under consideration can all be
defined \lq integrally' (over ${\mathbb Z}[\delta]$), one might hope that some
aspects of the characteristic $0$ theory could be translated to other
characteristics by a reduction mod $p$ argument. If this were the
case then, for consistency between different values of $\delta$ which
are congruent modulo $p$, we might expect that the role of the Weyl
group would be replaced by the corresponding affine Weyl group, so
that blocks again lie within orbits.
We will extend certain basic results in \cite{dhw} to arbitrary
characteristic, and then show that orbits of the affine Weyl group do
indeed correspond to (possibly non-trivial) unions of blocks of the
Brauer algebra.
In Section \ref{Braueris} we review some basic properties of the
Brauer algebra, following \cite{cdm}. Sections \ref{Wis} and
\ref{Waffis} review the Weyl and affine Weyl groups of type $D$, and
give a combinatorial description of their orbits on a weight space.
Using this description we prove in Section \ref{blockzero} that we can
restate the block result from \cite{cdm} using Weyl group
orbits. Section \ref{blockp} generalises certain representation
theoretic results from \cite{dhw} and \cite{cdm} to positive
characteristic, which are then used to give a necessary condition for
two weights to lie in the same block in terms of the affine Weyl
group.
In Section \ref{absect} we describe how abacus notation \cite{jk} can
be applied to the Brauer algebra, and use this to show that the orbits
of the affine Weyl group do not give a sufficient condition for two
weights to lie in the same block.
\section{The Brauer algebra}\label{Braueris}
We begin with a very brief review of the basic theory of Brauer
algebras; details can be found in \cite{cdm}. Fix an algebraically
closed field $k$ of characteristic $p\geq 0$, and some $\delta\in
k$. For $n\in{\mathbb N}$ the Brauer algebra $B_n(\delta)$ can be defined in
terms of a basis of partitions of
$\{1,\ldots,n,\bar{1},\ldots,\bar{n}\}$ into pairs. To determine the
product $AB$ of two basis elements, represent each by a graph on $2n$
points, and identify the vertices $\bar{1},\ldots,\bar{n}$ of $A$ with the
vertices $1,\ldots, n$ of $B$, respectively. The graph thus obtained may
contain some number ($t$ say) of closed loops; the product $AB$ is
then defined to be $\delta^tC$, where $C$ is the basis element
corresponding to the graph arising after these closed loops are
removed (ignoring intermediate vertices in connected components).
Usually we represent basis elements graphically by a diagram with $n$
northern nodes numbered $1$ to $n$ from left to right, and $n$
southern nodes numbered $\bar{1}$ to $\bar{n}$ from left to right,
where each node is connected to precisely one other by a line. Edges
joining northern nodes to southern nodes of a diagram are called
propagating lines, the remainder are called northern or southern
arcs. An example of the product of two diagrams is given in Figure \ref{multex}.
\begin{figure}
\caption{The product of two diagrams in $B_n(\delta)$.}
\label{multex}
\end{figure}
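The multiplication rule just described is purely combinatorial, so it can be sketched directly in code. The Python fragment below is our own illustration (the representation of a diagram as an involution on labelled nodes $(\texttt{t},i)$, $(\texttt{b},i)$ is an assumption of the sketch, not notation from the text): it glues the southern nodes of $A$ to the northern nodes of $B$, traces the resulting paths, and returns the product diagram $C$ together with the number $t$ of closed loops, so that $AB=\delta^t C$.

```python
def pairs(D):
    """Yield each pair of the perfect matching D (a dict v -> partner) once."""
    done = set()
    for x, y in D.items():
        if x not in done and y not in done:
            done.update((x, y))
            yield x, y

def multiply(A, B):
    """Multiply Brauer diagrams A, B on nodes ('t', i)/('b', i), i = 1..n.

    Returns (C, t): the product diagram C and the number t of closed loops,
    so that A * B = delta**t * C.
    """
    adj = {}
    for x, y in pairs(A):   # A's bottom row becomes the middle layer ('m', i)
        u = ('m', x[1]) if x[0] == 'b' else x
        v = ('m', y[1]) if y[0] == 'b' else y
        adj.setdefault(u, []).append(v); adj.setdefault(v, []).append(u)
    for x, y in pairs(B):   # B's top row is glued to the same middle layer
        u = ('m', x[1]) if x[0] == 't' else x
        v = ('m', y[1]) if y[0] == 't' else y
        adj.setdefault(u, []).append(v); adj.setdefault(v, []).append(u)
    C, seen, loops = {}, set(), 0
    # Trace paths starting from the degree-one nodes (top of A, bottom of B).
    for start in adj:
        if start[0] == 'm' or start in seen:
            continue
        prev, cur = start, adj[start][0]
        while cur[0] == 'm':
            seen.add(cur)
            nbrs = adj[cur].copy()
            nbrs.remove(prev)          # drop the edge we arrived on
            prev, cur = cur, nbrs[0]
        C[start], C[cur] = cur, start
        seen.update((start, cur))
    # Remaining middle nodes lie on closed loops; count the cycles.
    for v in adj:
        if v[0] == 'm' and v not in seen:
            loops += 1
            prev, cur = v, adj[v][0]
            seen.add(v)
            while cur != v:
                seen.add(cur)
                nbrs = adj[cur].copy()
                nbrs.remove(prev)
                prev, cur = cur, nbrs[0]
    return C, loops

# e_2: a northern arc {1,2} and a southern arc {1,2}.
e2 = {('t', 1): ('t', 2), ('t', 2): ('t', 1),
      ('b', 1): ('b', 2), ('b', 2): ('b', 1)}
C, t = multiply(e2, e2)
print(t)        # 1, recovering e_2 * e_2 = delta * e_2
print(C == e2)  # True
```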
With this convention, and assuming that $\delta\neq 0$, we have for
each $n\geq 2$ an idempotent $e_n$ as illustrated in Figure \ref{eis}.
\begin{figure}
\caption{The element $e_8$.}
\label{eis}
\end{figure}
These idempotents induce algebra
isomorphisms
\begin{equation}\label{A1}
\Phi_n:B_{n-2}(\delta)\longrightarrow e_nB_n(\delta)e_n
\end{equation}
which take
a diagram in $B_{n-2}$ to the diagram in $B_n$ obtained by adding an
extra northern and southern arc to the righthand end. From this we
obtain, following \cite{green}, an exact
localization functor
\begin{eqnarray*}
F_n \, :\, B_n(\delta)\mbox{\rm -mod} &\longrightarrow&
B_{n-2}(\delta)\mbox{\rm -mod}\\
M &\longmapsto& e_n M
\end{eqnarray*}
and a right exact globalization functor
\begin{eqnarray*}
G_n\, : \, B_n(\delta)\mbox{\rm -mod} &\longrightarrow&
B_{n+2}(\delta)\mbox{\rm -mod}\\ M &\longmapsto& B_{n+2}e_{n+2}\otimes_{B_n}M.
\end{eqnarray*}
Note that $F_{n+2}G_n(M)\cong M$ for all $M\in B_n\mbox{\rm -mod}$, and
hence $G_n$ is a full embedding. As
$$B_n(\delta)/B_n(\delta)e_nB_n(\delta)\cong k\Sigma_n,$$ the group
algebra of the symmetric group on $n$ symbols, it follows from
\cite{green} and (\ref{A1}) that the simple $B_n$-modules are indexed
by the set
\begin{equation}\label{indexset}
\Lambda_n = \Lambda^n \sqcup \Lambda_{n-2}= \Lambda^n \sqcup
\Lambda^{n-2} \sqcup \cdots \sqcup \Lambda^{\min}
\end{equation}
where $\Lambda^n$ denotes an indexing set for the simple
$k\Sigma_n$-modules, and $\min=0$ or $1$ depending on the parity of
$n$. (When $\delta=0$ a slight modification of this construction is
needed; see \cite{hpbrauer} or \cite[Section 8]{cdm}.) If $\delta\neq
0$ and either $p=0$ or $p>n$ then the set $\Lambda^n$ corresponds to
the set of partitions of $n$; we write $\lambda \vdash n$ if $\lambda$
is such a partition.
If $\delta\neq 0$ and $p=0$ or $p>n$ then the algebra $B_n(\delta)$ is
quasihereditary -- in general however it is only cellular. In all
cases however we can explicitly construct a standard/cell module
$\Delta_n(\lambda)$ for each partition $\lambda$ of $m$ where $m\leq
n$ with $m-n$ even (by arguing as in \cite[Section 2]{dhw}). In the
quasihereditary case, the heads $L_n(\lambda)$ of the standard modules
$\Delta_n(\lambda)$ are simple, and provide a full set of simple
$B_n(\delta)$-modules. In the general cellular case, a proper subset
of the heads of the cell modules is sufficient to provide such a full
set of simples. The key result which we will need is that in all
cases, the blocks of the algebra correspond to the equivalence classes
of simple modules generated by the relation of occurring in the same
cell or standard module \cite[(3.9) Remarks]{gl}.
\section{Orbits of weights for the Weyl group of type $D$}
\label{Wis}
We review some basic results about the Weyl group of type $D$,
following \cite[Plate IV]{bourbaki}. Let $\epsilon_i$ with $1\leq i\leq n$
be a set of formal symbols. We set
$$X\ (=X_n)=\bigoplus_{i=1}^{n}{\mathbb Z}\epsilon_i$$ which will play the role
of a weight lattice. We denote an element
$$\lambda=\lambda_1\epsilon_1+\cdots+\lambda_n\epsilon_n$$ in $X$ by
any tuple of the form $(\lambda_1,\ldots,\lambda_m)$, with $m\leq n$,
where $\lambda_i=0$ for $i>m$. The set of dominant weights is given
by
$$X^+=\{\lambda\in X :
\lambda=\lambda_1\epsilon_1+\cdots+\lambda_n\epsilon_n\ \mbox{\rm
with}\ \lambda_1\geq\cdots\geq\lambda_n\geq 0\}.$$
Define an inner
product on $E=X\otimes_{\mathbb Z}{\mathbb R}$ by setting
$$(\epsilon_i,\epsilon_j)=\delta_{ij}$$
and extending by linearity.
Consider the root system of type $D_n$:
$$\Phi=\{\pm(\epsilon_i-\epsilon_j), \pm(\epsilon_i+\epsilon_j):1\leq
i<j\leq n\}.$$
For each root $\beta\in\Phi$ we define a corresponding reflection
$s_{\beta}$ on $E$ by
\begin{equation}\label{reflect}
s_{\beta}(\lambda)=\lambda-(\lambda,\beta)\beta
\end{equation}
for all $\lambda\in E$, and let $W$ be the group generated by these
reflections. Fix $\delta$ and define $\rho$ $(=\rho(\delta))\in E$ by
$$\rho=(-\frac{\delta}{2},-\frac{\delta}{2}-1,-\frac{\delta}{2}-2,\ldots,
-\frac{\delta}{2}-(n-1)).$$
We consider the dot action of $W$ on $E$ given by
$$w.\lambda=w(\lambda+\rho)-\rho$$ for all $w\in W$ and $\lambda\in
E$. (Note that this preserves the lattice $X$.) This is the action
which we will consider henceforth.
It will be convenient to have an explicit description of the dot
action of $W$ on $X$. Let $\Sigma_n$ denote the group of permutations
of ${\bf n}=\{1,\ldots, n\}$. Given
$\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_n)$ and
$\mu=(\mu_1,\mu_2,\ldots, \mu_n)$ in $X$, we have $\mu=w.\lambda$ for
some $w\in W$ if and only if
$$\mu_i+\rho_i=\sigma(i)(\lambda_{\pi(i)}+\rho_{\pi(i)})$$ for all
$1\leq i\leq n$ and some $\pi\in\Sigma_n$ and $\sigma:{\bf n}\longrightarrow\{\pm
1\}$ with
$$d(\sigma)=|\{i:\sigma(i)=-1\}|$$
even. (See \cite[IV.4.8]{bourbaki}.)
Thus $\mu=w.\lambda$ if and only if there exists $\pi\in\Sigma_n$ such
that for all $1\leq i\leq n$ we have either
\begin{equation}\label{firstcase}
\mu_i-i=\lambda_{\pi(i)}-\pi(i)\end{equation}
or
\begin{equation}\label{secondcase}
\mu_i+\lambda_{\pi(i)}-i-\pi(i)=\delta-2
\end{equation}
and (\ref{secondcase}) occurs only for an even number of $i$.
For example, if $n=5$ and $\lambda=(6,4,-2,3,5)$ then
$\mu=(-4,\delta,5,\delta-3,4)$ is in the same orbit under the dot
action of $W$, taking $\pi(1)=3$, $\pi(2)=5$, $\pi(3)=2$, $\pi(4)=1$,
$\pi(5)=4$, and $\sigma(i)=1$ for $i$ odd and
$\sigma(i)=-1$ for $i$ even.
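This example can be checked mechanically against the description above. The sketch below is our own (the function name is hypothetical, and a concrete value $\delta=2$ is fixed): it verifies that $\mu_i+\rho_i=\sigma(i)(\lambda_{\pi(i)}+\rho_{\pi(i)})$ for all $i$, with $d(\sigma)$ even.

```python
from fractions import Fraction

def dot_orbit_witness(lam, mu, pi, sigma, delta):
    """Check mu = w.lam for the signed permutation (pi, sigma) under the
    dot action of W(D_n): mu_i + rho_i = sigma_i (lam_{pi(i)} + rho_{pi(i)})
    with an even number of sign changes.  Indices here are 0-based."""
    n = len(lam)
    rho = [-Fraction(delta, 2) - i for i in range(n)]
    even = sum(1 for s in sigma if s == -1) % 2 == 0
    match = all(mu[i] + rho[i] == sigma[i] * (lam[pi[i]] + rho[pi[i]])
                for i in range(n))
    return even and match

delta = 2
lam = [6, 4, -2, 3, 5]
mu  = [-4, delta, 5, delta - 3, 4]
pi  = [2, 4, 1, 0, 3]       # 0-based form of pi(1)=3, pi(2)=5, pi(3)=2, pi(4)=1, pi(5)=4
sigma = [1, -1, 1, -1, 1]   # sigma(i) = 1 for i odd, -1 for i even (1-based)
print(dot_orbit_witness(lam, mu, pi, sigma, delta))  # True
```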
We will also need to have a graphical representation of elements of
$X$, generalising the usual partition notation. We will represent any
$\lambda=(\lambda_1,\ldots,\lambda_n)\in X$ by a sequence of $n$ rows of
boxes, where row $i$ contains all boxes to the left of column
$\lambda_i$ inclusive, together with a vertical bar between columns
$0$ and $1$. We set the content of a box $\epsilon$ in row $i$ and
column $j$ to be $c(\epsilon)=i-j$. (This is not the usual choice for
partitions, for reasons which will become apparent later.) For
example, when $n=8$ the element $(6,2,4,-3,1,-2)$ (and the content of
its boxes) is illustrated in Figure \ref{howrep}.
\begin{figure}
\caption{The element $(6,2,4,-3,1,-2)$ when $n=8$.}
\label{howrep}
\end{figure}
When $\lambda$ is a partition we will usually omit the portion of the
diagram to the left of the bar, and below the final non-zero row, thus
recovering the usual Young diagram notation for partitions.
If $\lambda=(\lambda_1,\ldots,\lambda_n)$ then the content
$c(\lambda)_i$ of the last box in row $i$ is $-\lambda_i+i$. Combining
this with (\ref{firstcase}) and (\ref{secondcase}) we obtain
\begin{prop}\label{orbits1}
For any two elements $\lambda$ and $\mu$ in $X$ there exists $w\in W$
with $\mu=w.\lambda$ if and only if there exists $\pi\in\Sigma_n$ and
$\sigma\, : \, {\bf n} \rightarrow \{\pm 1\}$ with $d(\sigma)$ even
such that for all $1\leq i\leq n$ we have either
$$\sigma(i)=1 \quad \mbox{and} \quad c(\mu)_i=c(\lambda)_{\pi(i)}$$
or
$$\sigma(i)=-1 \quad \mbox{and} \quad c(\mu)_i+c(\lambda)_{\pi(i)}=2-\delta .$$
\end{prop}
It is helpful when considering low rank examples in Lie theory to use
a graphical representation of the action of a Weyl group. As our
weight space is generally greater than two-dimensional, we can rarely
use such an approach directly. However, we can still apply a limited
version of this approach, by considering various two-dimensional
projections of the weight lattice.
We can depict elements of the weight lattice $X$ by projecting into
the $ij$ plane for various choices of $i<j$. Each weight $\lambda$ is
represented by the projected coordinate pair $(\lambda_i,\lambda_j)$,
and each such pair represents a fibre of weights, which may
or may not include any dominant weights. For example, the point
$(0,0)$ in the $1j$ plane represents precisely one dominant weight
(the zero weight), while the $(0,0)$ point in the $23$ plane
represents the set of dominant weights
$(\lambda_1,0,0,\ldots,0)$. Clearly a necessary condition for dominance
is that $\lambda_i\geq \lambda_j\geq 0$.
We will represent such projections in the natural two-dimensional
coordinate system, so that the set of points representing at least one
dominant weight correspond to those shaded in Figure
\ref{nonaffsetup}. (If $\delta=2$ then the example shown is the case
$i=1$ and $j=5$.)
\begin{figure}
\caption{A projection onto the $ij$ plane.}
\label{nonaffsetup}
\end{figure}
It will be convenient to give an explicit description of the action of
$s_{\epsilon_{i}-\epsilon_{j}}$ and $s_{\epsilon_{i}+\epsilon_{j}}$ on
a partition $\lambda$. We have that
$$s_{\epsilon_{i}-\epsilon_{j}}.\lambda=\lambda-(\lambda_i-\lambda_j-i+j)
(\epsilon_i-\epsilon_j)$$ and hence if $r=\lambda_i-\lambda_j-i+j$ is
positive (respectively negative) the effect of the dot action of
$s_{\epsilon_{i}-\epsilon_{j}}$ on $\lambda$ is to add $r$ boxes to row $j$
(respectively row $i$) and subtract $r$ boxes from row $i$
(respectively row $j$). Similarly,
$$s_{\epsilon_{i}+\epsilon_{j}}.\lambda
=\lambda-(\lambda_i+\lambda_j-\delta+2-i-j) (\epsilon_i+\epsilon_j)$$
and hence if $r=\lambda_i+\lambda_j-\delta+2-i-j$ is positive
(respectively negative) the effect of the dot action of
$s_{\epsilon_{i}+\epsilon_{j}}$ on $\lambda$ is to remove
(respectively add) $r$ boxes from each of rows $i$ and $j$. In terms
of our projection onto the $ij$ plane these operations correspond to
reflection about the dashed lines in Figure \ref{nonaffsetup} labelled
$(ij)$ for $s_{\epsilon_{i}-\epsilon_{j}}$ and $(\bar{ij})$ for
$s_{\epsilon_{i}+\epsilon_{j}}$. Note that the position of
$(\bar{ij})$ depends on $\delta$, but $(ij)$ does not.
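One can confirm that these closed formulas agree with the definition $w.\lambda=w(\lambda+\rho)-\rho$, since $s_\beta.\lambda=\lambda-(\lambda+\rho,\beta)\beta$ for $(\beta,\beta)=2$. The following Python sketch is our own check on a sample weight, with a concrete $\delta$:

```python
from fractions import Fraction

def dot_reflect(lam, beta, delta):
    """Dot action of s_beta: lam -> lam - (lam + rho, beta) * beta,
    with beta given as a coefficient vector and (.,.) the standard
    inner product; valid here since (beta, beta) = 2."""
    n = len(lam)
    rho = [-Fraction(delta, 2) - i for i in range(n)]
    r = sum((lam[k] + rho[k]) * beta[k] for k in range(n))
    return [lam[k] - r * beta[k] for k in range(n)]

delta, n = 2, 4
lam = [5, 3, 2, 0]
i, j = 0, 2                                # rows i=1, j=3 in the 1-based notation

beta = [0] * n; beta[i], beta[j] = 1, -1   # epsilon_i - epsilon_j
r = lam[i] - lam[j] - (i + 1) + (j + 1)    # = lambda_i - lambda_j - i + j
out = dot_reflect(lam, beta, delta)
print(r, out)    # r = 5 boxes move from row i to row j

beta = [0] * n; beta[i], beta[j] = 1, 1    # epsilon_i + epsilon_j
r2 = lam[i] + lam[j] - delta + 2 - (i + 1) - (j + 1)
out2 = dot_reflect(lam, beta, delta)
print(r2, out2)  # r2 = 3 boxes removed from each of rows i and j
```

Each reflection is an involution for the dot action, so applying `dot_reflect` twice with the same root recovers the original weight.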
Various examples of reflections, and their effect on a dominant
representative of each coordinate pair, are given in Figures
\ref{nona}, \ref{nonb}, and \ref{nonc}. For each reflection indicated,
a dominant weight is illustrated, together with a shaded
subcomposition corresponding to the image of that weight under the
reflection. Where no shading is shown (as in Figure \ref{nona}(a)) the
image is the empty partition.
\begin{figure}
\caption{Projections into the $12$ plane with $\delta=2$.}
\label{nona}
\end{figure}
\begin{figure}
\caption{Projections into the $23$ plane with $\delta=2$.}
\label{nonb}
\end{figure}
\begin{figure}
\caption{Projections into the $13$ plane with $\delta=2$.}
\label{nonc}
\end{figure}
Note that some of the reflections may take a dominant weight to a
non-dominant one, even if the associated fibres both contain dominant
weights. For example the cases in Figure \ref{nonc}(a) and (b)
correspond to the reflection of $(3,3,3)$ to $(1,3,1)$ and of
$(4,3,3)$ to $(1,3,0)$. Also, some reflections may represent a family
of reflections of dominant weights, as in Figure \ref{nonc}(e), where
there are three possible weights in each fibre (corresponding to
whether none, one or both of the boxes marked X are included).
\section{The blocks of the Brauer algebra in characteristic zero}
\label{blockzero}
The main result in \cite{cdm} was the determination of the blocks of
$B_n(\delta)$ when $k={\mathbb C}$. In that paper, the blocks were described
by a combinatorial condition on partitions. We would like to have a
geometric formulation of this result.
We will identify the simple $B_n(\delta)$-modules with weights in
$X^+$ using the correspondence
$$(\lambda \in X^+)\quad \longleftrightarrow \quad L(\lambda^T)$$
where $\lambda^T$ denotes the conjugate partition of $\lambda$
(i.e. the one obtained by reversing the roles of rows and columns in
the usual Young diagram). Using this correspondence, we restate the
main result of \cite{cdm} as follows. Given two partitions
$\mu\subset\lambda$ we write $\lambda/\mu$ for the associated skew
partition. We say that two weights $\lambda , \mu\in X^+$ are
($\delta$-)\textit{balanced} if and only if the boxes of $\lambda
/(\lambda \cap \mu)$ (respectively $\mu / (\lambda \cap \mu)$) can be
paired up such that the contents of each pair sum to $1-\delta$, and,
if $\delta$ is even and the boxes with content $-\frac{\delta}{2}$ and
$\frac{2-\delta}{2}$ are configured as in Figure \ref{exclude}, the
number of rows in Figure \ref{exclude} is even.
\begin{figure}
\caption{A potentially unbalanced configuration.}
\label{exclude}
\end{figure}
Noting that the definition of content given in Section 1 is the
transpose of the one used in \cite{cdm}, it is easy to see (simply by
transposing everything) that Corollary 6.7 of \cite{cdm} becomes
\begin{thm}\label{oldblock}
Two simple $B_n(\delta)$-modules $L(\lambda^T)$ and $L(\mu^T)$ are in
the same block if and only if $\lambda$ and $\mu$ are balanced.
\end{thm}
We now give the desired geometric formulation of Theorem \ref{oldblock}.
\begin{thm}\label{geomblock}
Two simple $B_n(\delta)$-modules $L(\lambda^T)$ and $L(\mu^T)$ are in
the same block if and only if $\mu\in W\cdotp \lambda$.
\end{thm}
\begin{proof}
We will show that this description is equivalent to that given in
Theorem \ref{oldblock}, by proceeding in two stages. First we will
show, using the action of the generators of $W$ on $X$, that two
partitions in the same orbit are balanced. This implies that the
blocks are unions of $W$-orbits. Next we will show that balanced
partitions lie in the same $W$-orbit.
\noindent
{\bf Stage 1:} The case $n=2$ is an easy calculation. For $n>2$, note
that
$$s_{\epsilon_i-\epsilon_j}
=s_{\epsilon_j+\epsilon_k}s_{\epsilon_i+\epsilon_k}
s_{\epsilon_j+\epsilon_k}$$ where $i\neq k\neq j$, and so $W$ is
generated by reflections of the form $s_{\epsilon_i +\epsilon_j}$. Now
consider the action of such a generator on a weight in $X$.
$$s_{\epsilon_i+\epsilon_j}.\lambda
=\lambda-(\lambda_i+\lambda_j-\delta+2-i-j)(\epsilon_i+\epsilon_j).$$
If $\lambda_i+\lambda_j-\delta+2-i-j\geq 0$ then this involves the
removal of two rows of boxes with respective contents
$$-\lambda_i+i+\lambda_i+\lambda_j-i-j-\delta+1,\ldots,-\lambda_i+i+1,
-\lambda_i+i$$
and
$$-\lambda_j+j+\lambda_i+\lambda_j-i-j-\delta+1,\ldots,-\lambda_j+j+1,
-\lambda_j+j$$
which simplify to
$$\lambda_j-j-\delta+1,\ldots,-\lambda_i+i+1,
-\lambda_i+i$$
and
$$\lambda_i-i-\delta+1,\ldots,-\lambda_j+j+1, -\lambda_j+j.$$ If we
pair these two rows in reverse order, each pair of contents sum to
$1-\delta$. Note also that for $\delta$ even, the number of horizontal
pairs of boxes of content $-\frac{\delta}{2}$ and $\frac{2-\delta}{2}$
is either unchanged or decreased by 2. The argument when
$\lambda_i+\lambda_j-\delta+2-i-j< 0$ is similar (here we add paired
boxes instead of removing them).
Now take two partitions $\lambda , \mu\in X^+$ with $\mu=w\cdotp
\lambda$ for some $w\in W$. We need to show that they are balanced,
i.e. that the boxes of $\lambda / \lambda \cap \mu$ (respectively $\mu
/ \lambda \cap \mu$) can be paired up in the appropriate way. First
observe that the set of contents of boxes in $\lambda / \lambda \cap
\mu$ and in $\mu / \lambda \cap \mu$ are disjoint. To see this,
suppose that there is a box $\epsilon$ in $\lambda / \lambda \cap \mu$
with the same content as a box $\eta$ in $\mu / \lambda \cap
\mu$. Then these two boxes must lie on the same diagonal. Assume,
without loss of generality, that $\epsilon$ appears in an earlier row
than $\eta$. As $\eta$ belongs to $\mu$ and $\epsilon$ is above and to
the left of $\eta$, we must have that $\epsilon$ is also in $\mu$ (as
$\mu$ is a partition). But then $\epsilon$ belongs to $\lambda \cap
\mu$ which is a contradiction.
Let us concentrate on the action of $w$ on boxes either with a fixed
content $c$ say or with the paired content $1-\delta -c$. As $w$ can
be written as a product of the generators considered above, it will add and
remove pairs of boxes of these contents, say
$$(\tau_1 + \tau_1')+(\tau_2 + \tau_2')+ \ldots + (\tau_m + \tau_m')$$
$$-(\sigma_1 + \sigma_1')-(\sigma_2 + \sigma_2') - \ldots -(\sigma_q +
\sigma_q') $$ for some boxes $\tau_i$, $\tau_i'$, $\sigma_j$ and
$\sigma_j'$ with $c(\tau_i)=c=1-\delta - c(\tau_i')$ for $1\leq i \leq
m$ and $c(\sigma_j)=c=1-\delta -c(\sigma_j')$ for $1\leq j \leq q$.
Thus the number of boxes in $\mu=w\cdotp \lambda$ of content $c$
(resp. $1-\delta -c$) minus the number of boxes in $\lambda$ of
content $c$ (resp. $1-\delta -c$) is equal to $m-q$. But this must be
equal to the number of boxes in $\mu / (\lambda \cap \mu)$ of content
$c$ (resp. $1-\delta -c$) minus the number of boxes in $\lambda /
(\lambda \cap \mu)$ of content $c$ (resp. $1-\delta -c$). As we have
just observed that the contents of boxes in $\lambda / (\lambda \cap
\mu)$ and in $\mu / (\lambda \cap \mu)$ are disjoint, we either have
$m-q \geq 0$ and
\begin{eqnarray*}
m-q &=& |\{ \mbox{boxes of content $c$ in $\mu / (\lambda \cap
\mu)$}\} |\\ &=& | \{ \mbox{boxes of content $1-\delta -c$ in $\mu /
(\lambda \cap \mu)$}\} |
\end{eqnarray*}
or $m-q<0$ and
\begin{eqnarray*}
m-q &=& - | \{ \mbox{boxes of content $c$ in $\lambda / (\lambda \cap
\mu)$}\} |\\ &=& - | \{ \mbox{boxes of content $1-\delta -c$ in
$\lambda / (\lambda \cap \mu)$}\} |
\end{eqnarray*}
Thus the boxes of $\lambda / (\lambda \cap \mu)$ (resp. $\mu /
(\lambda \cap \mu)$) can be paired up such that the sum of the
contents in each pair is equal to $1-\delta$. Moreover, for $\delta$
even, as each generator $s_{\epsilon_i + \epsilon_j}$ either adds or
removes 2 (or no) horizontal pairs of boxes of contents
$-\frac{\delta}{2}$ and $\frac{2-\delta}{2}$, we see that $\lambda$
and $\mu$ are indeed balanced.
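The content-pairing condition just established is easy to test mechanically. The sketch below (function names ours) checks only the pairing part of the notion of balanced, with content $r-m$ for the box in row $r$ and column $m$; the extra condition on horizontal pairs of boxes for even $\delta$ is not checked. The test pair is the one used in the example at the end of this section.

```python
from collections import Counter

def skew_contents(lam, mu):
    """Contents (row - column, 1-based) of the boxes of lam/mu, for mu
    contained in lam, as a multiset."""
    mu = list(mu) + [0] * (len(lam) - len(mu))
    return Counter(r - m
                   for r, lr in enumerate(lam, start=1)
                   for m in range(mu[r-1] + 1, lr + 1))

def pairable(lam, mu, delta):
    """Test whether the boxes of lam/mu can be paired so that the contents
    in each pair sum to 1 - delta.  (Only the pairing part of 'balanced';
    the horizontal-pair condition for even delta is not checked.)"""
    cnt = skew_contents(lam, mu)
    centre = 1 - delta
    if centre % 2 == 0 and cnt[centre // 2] % 2:
        return False      # a self-paired content must occur evenly often
    return all(cnt[c] == cnt[centre - c] for c in cnt)

assert pairable([8, 8, 8, 7, 3, 3, 2], [6, 5, 1, 1], 2)
```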
\noindent{\bf Stage 2:} We need to show that if $\lambda$ and $\mu$
are balanced partitions then they are in the same $W$-orbit. Note that
if $\lambda$ and $\mu$ are balanced then by definition so are
$\lambda$ and $\lambda \cap \mu$, and $\mu$ and $\lambda \cap
\mu$. Thus it is enough to show that if $\mu \subset \lambda$ are
balanced then they are in the same $W$-orbit.
We will show that whenever we have a weight $\eta \in X$ with $\eta +
\rho \in X^+$ such that $\mu \subset \eta$ (i.e. $\mu_i \leq \eta_i$ for all
$i$) are balanced, we can construct $\eta^{(1)}\in W \cdotp \eta$
such that either $\eta^{(1)}=\mu$ or $\eta^{(1)}\subset \eta$ has the
same properties as $\eta$. Starting with $\eta = \lambda$ and applying
induction will prove that $\mu \in W\cdotp \lambda$.
Pick a box $\epsilon$ in $\eta / \mu$ such that
(i) it is the last box in a row of $\eta$,
(ii) $\frac{1-\delta}{2} -c(\epsilon)$ is maximal.
\noindent If more than one such box exists, pick the southeastern-most
one. Say that $\epsilon$ is in row $i$. Find a box $\epsilon '$ on the
edge of $\eta / \mu$ (i.e. a box in $\eta/\mu$ such that there is no
box to the northeast, east, or southeast of it in $\eta/\mu$) with
$c(\epsilon)+c(\epsilon ')=1-\delta$. Say that $\epsilon '$ is in row
$j$.
Note that $i\neq j$, for if $i$ were equal to $j$ then there would
either be a box of content $\frac{1-\delta}{2}$ (for $\delta$ odd) or
a pair of boxes of content $-\frac{\delta}{2}$ and
$\frac{2-\delta}{2}$ (for $\delta$ even) in between $\epsilon$ and
$\epsilon '$. Now, as $\eta / \mu$ is balanced and
$\eta_{i-1}-\eta_i\geq -1$, it must contain another such box or pair of
boxes of the same content(s) in row $i-1$, as illustrated in Figure
\ref{2row1} (where the shaded area is part of $\mu$). As $\eta
_{i-1}-\eta_i\geq -1$ we see that $\eta /\mu$ contains at least two
boxes of content $c(\epsilon)$. But as $\epsilon$ was chosen with
$\frac{1-\delta}{2}-c(\epsilon)$ maximal and $\mu$ is a partition, $\eta / \mu$ can only have
one box of content $c(\epsilon ')$, as otherwise the box $\chi$ would
be in $\eta/\mu$. This contradicts the fact that
$\eta / \mu$ is balanced.
\begin{figure}
\caption{The (impossible) configuration occurring if $i=j$.}
\label{2row1}
\end{figure}
Now let $\alpha$ be the last box in row $j$ and let $\alpha '$ be the
southeastern-most box on the edge of $\eta / \mu$ having content
$c(\alpha ')=1-\delta - c(\alpha)$. Say that $\alpha '$ is in row $k$.
\noindent \textit{Case 1:} $k=j$.\\ In this case there must either be
a box of content $\frac{1-\delta}{2}$ (for $\delta$ odd) or a pair of
boxes of content $-\frac{\delta}{2}$ and $\frac{2-\delta}{2}$ (for
$\delta$ even) in between $\alpha '$ and $\alpha$. Now, as $\eta /
\mu$ is balanced and $\eta_{j-1}-\eta_j\geq -1$, it must contain
another such box or pair of boxes of the same content(s) in row $j-1$,
as illustrated in Figure \ref{2row2}.
\begin{figure}
\caption{The case $j=k$.}
\label{2row2}
\end{figure}
For each $c(\epsilon )\leq c <c(\alpha)$, define $i_c$ by saying that
the southeastern-most box of content $c$ on the edge of $\eta / \mu$
is in row $i_c$. For $c=c(\alpha)$, define $i_{c(\alpha)}=j-1$. Note
that the $i_c$'s are not necessarily all distinct. Consider all
distinct values of $i_c$ and order them
$$i=i_{c_0}<i_{c_1}<\ldots < i_{c_l}=j-1.$$
Now consider
$$\eta^{(1)} = (s_{\epsilon_i -\epsilon_{i_{c_1}}}\ldots
s_{\epsilon_i-\epsilon_{i_{c_{l-1}}}} s_{\epsilon_i -\epsilon_{j-1}}
s_{\epsilon_i+\epsilon_j})\cdotp \eta.$$ This is illustrated
schematically in Figure \ref{2row3}, where curved lines indicate
boundaries whose precise configuration does not concern us. Then $\mu
\subset \eta^{(1)}\subset \eta$ are balanced and $\eta^{(1)}+\rho \in
X^+$ as required.
\begin{figure}
\caption{The elements $\mu\subset\eta^{(1)}\subset\eta$ in Case 1.}
\label{2row3}
\end{figure}
\noindent \textit{Case 2:} $k\neq j$.\\
If $i=k$ then consider
$$\eta^{(1)}=s_{\epsilon_i+\epsilon_j}\cdotp \eta.$$ Then $\mu \subset
\eta^{(1)} \subset \eta$ are balanced and $\eta^{(1)}+\rho \in X^+$.
If $k\neq i$, then as in Case 1, for each $c(\epsilon)\leq c\leq
c(\alpha ')$ we define $i_c$ by saying that the southeastern-most box
of content $c$ on the edge of $\eta/\mu$ is in row $i_c$. As before, these are not necessarily
all distinct but we can pick a set of representatives
$$i=i_{c_0} < i_{c_1} < \ldots < i_{c_l}=k.$$
Now consider
$$\eta^{(1)} = (s_{\epsilon_i -\epsilon_{i_{c_1}}}\ldots
s_{\epsilon_i-\epsilon_{i_{c_{l-1}}}} s_{\epsilon_i -\epsilon_{k}}
s_{\epsilon_i+\epsilon_j})\cdotp \eta.$$ Again, this is illustrated
schematically in Figure \ref{2row4}, where curved lines indicate
boundaries whose precise configuration does not concern us.
As before we have $\mu \subset \eta^{(1)} \subset \eta$ are balanced
and $\eta^{(1)} +\rho \in X^+$.
\end{proof}
\begin{figure}
\caption{The elements $\mu\subset\eta^{(1)}\subset\eta$ in Case 2.}
\label{2row4}
\end{figure}
\begin{example} We illustrate Stage 2 of the proof by an example.
Take $\lambda = (8,8,8,7,3,3,2)$ and $\mu = (6,5,1,1)$ and $\delta
=2$. Then it is easy to see that $\mu \subset \lambda$ are
balanced. We will construct $w\in W$ such that $\mu = w\cdotp
\lambda$.
\begin{figure}
\caption{The elements $\mu\subset\lambda^{(1)}$.}
\label{ex1step1}
\end{figure}
First consider $\lambda ^{(1)} = s_{\epsilon_1
-\epsilon_2}s_{\epsilon_1 + \epsilon_7} \cdotp \lambda$. The
elements $\lambda$ and $\mu$ are illustrated in outline in Figure
\ref{ex1step1}, with the boxes removed to form $\lambda^{(1)}$ shaded.
Repeating the process we next consider $\lambda^{(2)}= s_{\epsilon_1 -
\epsilon_3}s_{\epsilon_1 + \epsilon_6}\cdotp \lambda^{(1)}$, as in
Figure \ref{ex1step2}, followed by $\lambda^{(3)}=s_{\epsilon_2 -
\epsilon_4}s_{\epsilon_2 + \epsilon_5}\cdotp \lambda ^{(2)}$ as in
Figure \ref{ex1step3}. Finally consider
$\lambda^{(4)}=s_{\epsilon_3+\epsilon_4} \cdotp \lambda^{(3)}$ as
shown in Figure \ref{ex1step4}.
\begin{figure}
\caption{The elements $\mu\subset\lambda^{(2)}$.}
\label{ex1step2}
\end{figure}
\begin{figure}
\caption{The elements $\mu\subset\lambda^{(3)}$.}
\label{ex1step3}
\end{figure}
\begin{figure}
\caption{The elements $\mu=\lambda^{(4)}$.}
\label{ex1step4}
\end{figure}
\end{example}
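The chain of reflections in this example can be checked mechanically. In the Python sketch below (our own illustration) the shift $\rho_i = 1-\delta/2-i$ is our reconstruction from the displayed formula for the dot action of $s_{\epsilon_i+\epsilon_j}$, and the function name is ours.

```python
from fractions import Fraction

def dot(lam, i, j, sign, delta):
    """Dot action of s_{e_i + sign*e_j} (1-based i < j) on a weight,
    using rho_i = 1 - delta/2 - i, our reconstruction of the shift from
    the displayed formula for s_{e_i+e_j}."""
    n = len(lam)
    rho = [Fraction(2 - delta, 2) - k for k in range(1, n + 1)]
    v = [lam[k] + rho[k] for k in range(n)]
    coeff = v[i-1] + sign * v[j-1]
    v[i-1] -= coeff
    v[j-1] -= sign * coeff
    return [int(v[k] - rho[k]) for k in range(n)]

lam, delta = [8, 8, 8, 7, 3, 3, 2], 2
l1 = dot(dot(lam, 1, 7, +1, delta), 1, 2, -1, delta)
l2 = dot(dot(l1, 1, 6, +1, delta), 1, 3, -1, delta)
l3 = dot(dot(l2, 2, 5, +1, delta), 2, 4, -1, delta)
l4 = dot(l3, 3, 4, +1, delta)
assert l4 == [6, 5, 1, 1, 0, 0, 0]   # l4 equals mu, as in the example
```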
\section{Orbits of the affine Weyl group of type $D$}
\label{Waffis}
We would like to have a block result in characteristic $p>0$ similar
in spirit to Theorem \ref{geomblock}. For this we first need a
candidate to play the role of $W$. To motivate our choice of such, we
begin by considering a possible approach to modular representation
theory via reduction from characteristic $0$.
The verification that the Brauer algebra is cellular is a
characteristic-free calculation over ${\mathbb Z}[\delta]$. Thus all of our algebras
and cell modules have a ${\mathbb Z}[\delta]$-form, from which the corresponding
objects over $k$ can be obtained by specialisation. If the maps
between cell modules that have been constructed in characteristic zero
in \cite{dhw,cdm} also had a corresponding integral form, then they
would also specialise to maps in characteristic $p$. As there is not
yet an explicit construction of these maps, we are unable to verify
this except in very small examples. However, if we assume for the
moment that it holds, this will suggest a candidate for our new
reflection group.
We will wish to consider the dot action of $W$ for different values of
the shift parameter $\rho$. In such cases we will write
$w._{\delta}\lambda$ for the element
$$w(\lambda+\rho(\delta))-\rho(\delta).$$ When we wish to emphasise
the choice of dot action we will also write $W^{\delta}$ for $W$.
Fix $\delta\in k$, and suppose that maps between cell modules in
characteristic $0$ do reduce mod $p$. Then we would expect weights to
be in the same block in characteristic $p$ if they are linked by the
action of $W^{\delta}$ in characteristic zero. However, all elements
of the form $\delta+rp$ in characteristic zero reduce to the same
element $\delta$ mod $p$, and so weights should be in the same block
if they are linked by the action of $W^{\delta+rp}$ for some
$r\in{\mathbb Z}$. Thus our candidate for a suitable reflection group will be
${\bf W}=\langle W^{\delta+rp}: r\in{\mathbb Z}\rangle$.
Note however that a block result does not follow \emph{automatically}
from the integrality assumption, as: (i) the chain of reflections from
${\bf W}$ linking two weights might leave the set of weights for
$B_n(\delta)$; (ii) in characteristic $p$ there may be new connections
between weights not coming from connections in characteristic zero. We
shall see that the former is indeed a problem, but that the latter
does not occur.
Now fix a prime number $p>2$ and consider the affine Weyl group
$W_p$ associated to $W$. This is defined to be
$$W_p=\langle s_{\beta,rp}:\beta\in\Phi, r\in{\mathbb Z}\rangle$$
where
$$s_{\beta,rp}(\lambda)=\lambda
-((\lambda,\beta)-rp)\beta.$$
As before, we consider the dot action of $W_p$ on $X$ (or $E$) given by
$$w.\lambda=w(\lambda+\rho)-\rho.$$
It is an easy exercise to show
\begin{lem}\label{genrel} For all $r\in{\mathbb Z}$ and $1\leq i\neq j\neq
k\leq n$, we have
\begin{eqnarray*}
s_{\epsilon_i+\epsilon_j}._{\delta+rp}\lambda
&=&s_{\epsilon_i+\epsilon_j,rp}._{\delta}\lambda.\\
s_{\epsilon_i-\epsilon_j}._{\delta+rp}\lambda
&=&s_{\epsilon_i-\epsilon_j}._{\delta}\lambda.\\
s_{\epsilon_i-\epsilon_j,rp}&=&
s_{\epsilon_j+\epsilon_k}
s_{\epsilon_i+\epsilon_k,rp}s_{\epsilon_j+\epsilon_k}.
\end{eqnarray*}
and $$s_{\epsilon_i+\epsilon_j,rp}s_{\epsilon_i+\epsilon_j}\ \mbox{
is translation by}\ rp(\epsilon_i+\epsilon_j).$$
In particular, for $n>2$ we have
$$W_p=\langle s_{\epsilon_i+\epsilon_j,rp}:
1\leq i<j\leq n\ \mbox{ and}\ r\in{\mathbb Z}\rangle.$$
\end{lem}
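The relations in the Lemma are easy to verify numerically. The sketch below (our own illustration; the helper \texttt{e} building root vectors is ours) implements the linear action of $s_{\beta,rp}$ exactly as defined above and checks the third relation and the translation property on a random vector.

```python
import random

def s_affine(v, beta, rp=0):
    """The affine reflection s_{beta,rp}(v) = v - ((v,beta) - rp)beta,
    with beta a vector and (,) the standard inner product."""
    ip = sum(x * b for x, b in zip(v, beta))
    return [x - (ip - rp) * b for x, b in zip(v, beta)]

def e(n, i, j=None, sign=1):
    # the vector e_i + sign*e_j (or just e_i), 1-based indices
    v = [0] * n
    v[i-1] = 1
    if j is not None:
        v[j-1] += sign
    return v

random.seed(0)
n, p, r = 4, 5, 2
v = [random.randint(-10, 10) for _ in range(n)]
i, j, k, rp = 1, 2, 3, r * p

# s_{e_i-e_j,rp} = s_{e_j+e_k} s_{e_i+e_k,rp} s_{e_j+e_k}
lhs = s_affine(v, e(n, i, j, -1), rp)
rhs = s_affine(s_affine(s_affine(v, e(n, j, k)), e(n, i, k), rp), e(n, j, k))
assert lhs == rhs

# s_{e_i+e_j,rp} s_{e_i+e_j} is translation by rp(e_i+e_j)
w = s_affine(s_affine(v, e(n, i, j)), e(n, i, j), rp)
assert w == [x + rp * b for x, b in zip(v, e(n, i, j))]
```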
It follows from the first two parts of the Lemma that the group
$$W^{[r]}=\langle
s_{\epsilon_i+\epsilon_j,rp},s_{\epsilon_i-\epsilon_j} :1\leq i<j\leq
n\rangle$$ is isomorphic to the original group $W^{\delta+rp}$, and
its $\delta$-dot action on $X$ is the same as the $(\delta+rp)$-dot
action of $W^{\delta+rp}$ on $X$. Further, the usual dot action of
$W_p$ on $X$ is generated by all the $W^{[r]}$ with $r\in{\mathbb Z}$. Thus we
have
\begin{cor} For $p>2$ we have ${\bf W}\cong W_p$, and the isomorphism
is compatible with their respective dot actions on $X$.
\end{cor}
The above considerations suggest that the affine Weyl group is a
potential candidate for the reflection group needed for a positive
characteristic block result. It will be convenient to have a
combinatorial description of the orbits of this group on $X$.
\begin{prop}\label{orbits2}
Suppose that $\lambda$ and $\mu$ are in $X$ with $|\lambda|-|\mu|$
even. Then $\mu\in W_p\cdotp \lambda$ if and only if there exists
$\pi\in\Sigma_n$ and $\sigma\, : \, {\bf n} \rightarrow \{\pm 1\}$
with $d(\sigma)$ even such that for all $1\leq i\leq
n$ we have either
$$\sigma(i)=1 \quad \mbox{and} \quad c(\mu)_i\equiv
c(\lambda)_{\pi(i)} \mod p$$ or
$$\sigma(i)=-1 \quad \mbox{and} \quad
c(\mu)_i+c(\lambda)_{\pi(i)}\equiv 2-\delta \mod p.$$
\end{prop}
\begin{proof}
We have $\mu \in W_p\cdotp \lambda$ if and only if
$$\mu+\rho=w(\lambda+\rho)+p\nu$$ for some $w\in W$ and
$\nu\in{\mathbb Z}\Phi$. Note that for any $\nu\in X$ we have $\nu\in{\mathbb Z}\Phi$ if and
only if $|\nu|=\sum\nu_i$ is even, as
$$2\epsilon_i=(\epsilon_i-\epsilon_{i+1})+(\epsilon_i+\epsilon_{i+1})$$
and
$$2\epsilon_{i+1}=(\epsilon_i+\epsilon_{i+1})-(\epsilon_i-\epsilon_{i+1})$$
for all $1\leq i\leq n-1$.
Thus, if $|\lambda |- |\mu|$ is even, then $\mu \in W_p\cdotp \lambda$
if and only if $\mu +\rho = w(\lambda + \rho) +p\nu$ for some $w\in W$
and some $\nu\in X$. Combining this with Proposition \ref{orbits1}
gives the result.
\end{proof}
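For small $n$ the criterion in the proposition can be tested by brute force. In the sketch below (ours) we take $c(\lambda)_i = i - \lambda_i$, our reading of the content vector, and $d(\sigma)$ to be the number of $i$ with $\sigma(i)=-1$; partitions are padded to a common length, which plays the role of $n$. The two assertions reproduce the orbit claims made for the triple (\ref{trio}) in Section \ref{absect}.

```python
from itertools import permutations, product

def same_orbit(lam, mu, delta, p):
    """Brute-force check of the criterion above, padding lam and mu to a
    common length n, with c(lam)_i = i - lam_i and d(sigma) the number
    of i with sigma(i) = -1."""
    n = max(len(lam), len(mu))
    lam = list(lam) + [0] * (n - len(lam))
    mu = list(mu) + [0] * (n - len(mu))
    if (sum(lam) - sum(mu)) % 2:
        return False                 # same orbit forces an even difference
    cl = [i - lam[i-1] for i in range(1, n + 1)]
    cm = [i - mu[i-1] for i in range(1, n + 1)]
    for pi in permutations(range(n)):
        for sigma in product((1, -1), repeat=n):
            if sigma.count(-1) % 2:  # d(sigma) must be even
                continue
            if all((cm[i] - cl[pi[i]]) % p == 0 if s == 1
                   else (cm[i] + cl[pi[i]] - (2 - delta)) % p == 0
                   for i, s in enumerate(sigma)):
                return True
    return False

assert same_orbit([5, 3, 3, 2, 1, 1], [2, 2, 2, 1, 1, 1], delta=2, p=5)
assert not same_orbit([5, 3, 3, 2, 1, 1], [5, 3, 3, 2, 1, 1, 1], delta=2, p=5)
```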
As in the non-affine case, we may represent reflections graphically
via projection into the plane. In this case each projection will
contain two families of reflections; those parallel to
$s_{\epsilon_i-\epsilon_j}$ and those parallel to
$s_{\epsilon_i+\epsilon_j}$. This is illustrated for $p=5$ in Figure
\ref{affsetup}. An example of the effect of various such reflections
on partitions will be given in Figure \ref{rosetta}, after we have
introduced a third, abacus, notation.
\begin{figure}
\caption{A projection onto the $ij$ plane with $p=5$.}
\label{affsetup}
\end{figure}
\section{On the blocks of the Brauer algebra in characteristic $p$}
\label{blockp}
We have already seen that the blocks of the Brauer algebra in
characteristic $0$ are given by the restriction of orbits of $W$ to
the set of partitions. We would like a corresponding result in
characteristic $p>0$ involving the orbits of $W_p$. As noted in the
introduction, one does not expect the blocks of the Brauer algebra to
be given by $W_p$ in {\it exactly} the same manner as in
characteristic $0$. Instead, we can ask if the orbits of $W_p$ are
unions of blocks. We will show that this is the case, and give
examples in Section \ref{absect} to show that indeed these orbits are
not in general single blocks. (A similar result for the symplectic
Schur algebra has been given by the second author \cite{devblock}.)
Throughout the next two sections we will assume that we are working
over a field of characteristic $p>0$.
We will need the following positive characteristic analogue of
\cite[Proposition 4.2]{cdm}. Denote by $[\lambda]$ the set of boxes in
$\lambda$.
\begin{prop}\label{content}
Let $\lambda, \mu\in X^+$ with $|\lambda|-|\mu|=2t\geq 0$. If there
exists $M\leq \Delta_n(\mu^T)$ with
$$\Hom_{B_n(\delta)}(\Delta_n(\lambda^T),\Delta_n(\mu^T)/M)\neq 0$$ then
$$t(\delta-1)+\sum_{d\in[\lambda]}c(d)-\sum_{d\in[\mu]}c(d)\equiv 0 \mod p.$$
\end{prop}
\begin{proof}
As the localisation functor is exact, we may assume without loss of
generality that $\lambda\vdash n$. For $1\leq i<j\leq n$ let
$X_{i,j}$ be the Brauer diagram with edges between $l$ and $\bar{l}$
for all $l\neq i,j$, and with edges between $i$ and $j$ and $\bar{i}$
and $\bar{j}$. Then we define $T_n\in B_n(\delta)$ by
$$T_n=\sum_{1\leq i<j\leq n}X_{i,j}.$$ As in \cite[Lemma 4.1]{cdm} we
have for all $y\in\Delta_n(\mu^T)$ that
$$T_ny=
\left[t(\delta-1)-\sum_{d\in[\mu]}c(d)+
\sum_{1\leq i<j\leq n}(i,j)\right]y
$$ where $(i,j)$ denotes the transposition in $\Sigma_n$ permuting $i$
and $j$, regarded as an element of $B_n(\delta)$.
Let $\phi:\Delta_n(\lambda^T)\longrightarrow\Delta_n(\mu^T)/M$ be a non-zero
$B_n(\delta)$-homomorphism. By our assumption on $n$, we have
$\Delta_n(\lambda^T)\cong S^{\lambda^T}$ (the Specht module labelled by
$\lambda^T$) as a module for $\Sigma_n$. As such modules are defined
over ${\mathbb Z}$, the remarks in the proof of \cite[Proposition 4.2]{cdm}
about the action of $\sum_{1\leq i<j\leq n}(i,j)$ on
$\Delta_n(\lambda^T)$ still hold, and hence this element acts as the
scalar $\sum_{d\in[\lambda]}c(d)$ on $\Delta_n(\lambda^T)$. Hence
$$\sum_{1\leq i<j\leq
n}(i,j)\phi(x)=\sum_{d\in[\lambda]}c(d)\phi(x)$$ for all
$x\in\Delta_n(\lambda^T)$, and so for all $y+M\in\im(\phi)$ we must have
$$T_n(y+M)=
\left[t(\delta-1)-\sum_{d\in[\mu]}c(d)+
\sum_{d\in[\lambda]}c(d)\right](y+M).
$$
Again as in the proof of \cite[Proposition 4.2]{cdm}, the element
$T_n$ must act as zero on $\Delta_n(\lambda^T)$, and so
$$t(\delta-1)-\sum_{d\in[\mu]}c(d)+
\sum_{d\in[\lambda]}c(d)\equiv 0\mod p$$
as required.
\end{proof}
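The congruence in the proposition can be checked on the balanced pair from the example in the characteristic-zero section. A sketch (ours), using the content convention $c(d)=\mathrm{row}-\mathrm{column}$ consistent with the row-removal computation in Stage 1; for this $W$-linked pair the sum vanishes exactly, not merely mod $p$, as the removed contents pair to $1-\delta$.

```python
def content_sum(lam):
    # sum of contents over the boxes of lam, with the content of the box
    # in row r and column m taken to be r - m
    return sum(r - m for r, lr in enumerate(lam, start=1)
                     for m in range(1, lr + 1))

lam, mu, delta = [8, 8, 8, 7, 3, 3, 2], [6, 5, 1, 1], 2
t = (sum(lam) - sum(mu)) // 2                 # here t = 13
assert t * (delta - 1) + content_sum(lam) - content_sum(mu) == 0
```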
We wish to replace the role played by the combinatorics of partitions
by the action of our affine reflection group $W_p$.
\begin{thm}
Suppose that $\lambda ,\mu\in X^+$. If there exists $M\leq
\Delta_n(\mu^T)$ with
$$\Hom_{B_n(\delta)}(\Delta_n(\lambda^T),\Delta_n(\mu^T)/M)\neq 0$$
then $\mu\in W_p.\lambda$.
\end{thm}
\begin{proof}
First note that $\Hom_{B_n(\delta)}(\Delta_n(\lambda^T),
\Delta_n(\mu^T)/M)\neq 0$ implies that $|\lambda | - |\mu| =2t\geq
0$. Indeed, if we had $|\lambda |<|\mu|$ then, using the fact that the
localisation functor $F$ is exact, we could assume that $\mu\vdash n$,
so $\Delta_n(\mu^T) \cong S^{\mu^T}$. However, this module only contains
composition factors of the form $L_n(\eta)$ where $\eta \vdash
n$, which gives a contradiction.
We now use induction on $n$. If $n=1$ then $\lambda = \mu = (1)$ and
so there is nothing to prove. Assume $n>1$. If $\lambda =\emptyset$
then by the above remark we have $\mu=\emptyset$ and we are done. Now
suppose that $\lambda$ has a removable box in row $i$ say. Then we
have
$${\rm Ind}\, \Delta_{n-1}((\lambda - \epsilon_i)^T)
\twoheadrightarrow \Delta_n(\lambda ^T)$$ and so, using our assumption
we have
$$\Hom_{B_n(\delta)}({\rm Ind}\, \Delta_{n-1}((\lambda -
\epsilon_i)^T), \Delta_n(\mu^T)/M)$$
$$=\Hom_{B_{n-1}(\delta)}(\Delta_{n-1}((\lambda - \epsilon_i)^T), {\rm
Res}\, (\Delta_n(\mu^T)/M)) \neq 0.$$
Thus either (Case 1) we have
$$\Hom_{B_{n-1}(\delta)}(\Delta_{n-1}((\lambda - \epsilon_i)^T),
\Delta_{n-1}((\mu - \epsilon_j)^T)/N)\neq 0$$ for some positive
integer $j$ with $\mu - \epsilon_j\in X^+$ and some $N\leq
\Delta_{n-1}((\mu-\epsilon_j)^T)$,\\
or (Case 2) we have
$$\Hom_{B_{n-1}(\delta)}(\Delta_{n-1}((\lambda - \epsilon_i)^T),
\Delta_{n-1}((\mu + \epsilon_j)^T)/N)\neq 0$$ for some positive
integer $j$ with $\mu + \epsilon_j\in X^+$ and some $N\leq
\Delta_{n-1}((\mu+\epsilon_j)^T)$.
\noindent \textbf{Case 1:} Using Proposition \ref{content} for
$\lambda$ and $\mu$ and for $\lambda - \epsilon_i$ and $\mu -
\epsilon_j$, we see that $c(\lambda)_i \equiv c(\mu)_j \,\, \mbox{mod
$p$}$. Now, using induction on $n$ we have that $\mu - \epsilon_j \in
W_p \cdotp (\lambda - \epsilon_i)$. By Proposition \ref{orbits2}, we
can find $\pi\in \Sigma_n$ and $\sigma \, : \, {\bf n} \rightarrow
\{\pm 1\}$ such that $d(\sigma)$ is even and for all $1\leq m\leq n$,
if $\sigma (m)=1$ we have
$$c(\mu - \epsilon_j)_m \equiv c(\lambda - \epsilon_i)_{\pi(m)} \,\, \mbox{mod $p$}$$
and if $\sigma(m)=-1$ we have
$$c(\mu - \epsilon_j)_m + c(\lambda - \epsilon_i)_{\pi(m)} \equiv 2-\delta \,\, \mbox{mod
$p$}.$$ We will now construct $\pi '\in \Sigma_n$ and $\sigma ' \, :
\, {\bf n} \rightarrow \{ \pm 1\}$ to show that $\mu\in W_p \cdotp
\lambda$. Suppose $\pi(j)=k$ and $\pi(l)=i$ for some $k,l\geq
1$. Define $\pi '$ by $\pi '(j)=i$, $\pi '(l)=k$ and $\pi '(m)=\pi(m)$
for all $m\neq j,l$. Now if $\sigma(j)=\sigma(l)$ then define $\sigma
'$ by $\sigma '(j)=\sigma '(l)=1$ and $\sigma '(m)=\sigma(m)$ for all
$m\neq j,l$. And if $\sigma(j)=-\sigma(l)$ then define $\sigma '$ by
$\sigma '(j)=1$, $\sigma '(l)=-1$ and $\sigma '(m)=\sigma (m)$ for all
$m\neq j,l$. Now it is easy to check, using the fact that
$c(\mu)_j\equiv c(\lambda)_i\,\, \mbox{mod $p$}$, that $\pi '$ and
$\sigma '$ satisfy the conditions in Proposition \ref{orbits2} for
$\lambda$ and $\mu$, and so
$\mu \in W_p \cdotp \lambda$.
\noindent \textbf{Case 2:} This case is similar to Case 1. Using
Proposition \ref{content} we see that $c(\lambda )_i + c(\mu)_j \equiv
2-\delta \,\, \mbox{mod $p$}$. Now using induction on $n$ we have
$\pi\in \Sigma_n$ and $\sigma \, : \, {\bf n} \rightarrow \{ \pm
1\}$ satisfying the conditions in Proposition \ref{orbits2} for
$\lambda -\epsilon_i$ and $\mu + \epsilon_j$. Suppose $\pi(j)=k$ and
$\pi(l)=i$ for some $k,l\geq 1$. Define $\pi '$ by $\pi '(j)=i$, $\pi
'(l)=k$ and $\pi '(m)=\pi (m)$ for all $m\neq j,l$. Now if $\sigma
(j)=\sigma(l)$ then define $\sigma '$ by $\sigma '(j)=-1$, $\sigma
'(l)=-1$ and $\sigma '(m)=\sigma(m)$ for all $m\neq j,l$. And if
$\sigma (j)=-\sigma (l)$ then we define $\sigma '(j)=-1$, $\sigma
'(l)=1$ and $\sigma '(m)=\sigma (m)$ for all $m\neq j,l$. As in Case 1,
it is easy to check that $\pi '$ and $\sigma '$ satisfy the conditions
in Proposition \ref{orbits2} for $\lambda$ and $\mu$, and so $\mu\in
W_p\cdotp\lambda$.
\end{proof}
Note that by the cellularity of $B_n(\delta)$ this immediately implies
\begin{cor}
Two simple $B_n(\delta)$-modules $L_n(\lambda^T)$ and $L_n(\mu^T)$ are in
the same block only if $\mu\in W_p.\lambda$.
\end{cor}
Thus we have the desired necessary condition in terms of the affine
Weyl group for two weights to lie in the same block.
\section{Abacus notation and orbits of the affine Weyl group}
\label{absect}
In this section we will show that, even if $n$ is arbitrarily large,
being in the same orbit under the affine Weyl group is not
sufficient to ensure that two weights lie in the same block. This is
most conveniently demonstrated using the abacus notation \cite{jk},
and so we first explain how this can be applied in the Brauer algebra
case. We begin by recalling the standard procedure for constructing an
abacus from a partition, and then show how this is compatible with the
earlier orbit results for $W_p$. As in the preceding section, we
assume that our algebra is defined over some field of characteristic
$p>2$.
To each partition we shall associate a certain configuration of beads
on an abacus in the following manner. An {\it abacus with $p$ runners}
will consist of $p$ columns (called runners) together with some number
of beads distributed amongst these runners. Such beads will lie at a
fixed height on the abacus, and there may be spaces between beads on
the same runner. We will number the possible bead positions from left
to right in each row, starting from the top row and working down, as
illustrated in Figure \ref{abacus}.
\begin{figure}
\caption{The possible bead positions with $p=5$.}
\label{abacus}
\end{figure}
For a fixed value of $n$, we will associate to each partition
$\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_t)$ of $m$, with $m\leq n$
and $n-m$ even, a configuration of beads on the abacus. Let $b$ be a
positive integer such that $b\geq n$. We then represent $\lambda$ on
the abacus using $b$ beads by placing a bead in position numbered
$$\lambda_i+b-i$$ for each $1\leq i\leq b$, where we take
$\lambda_i=0$ for $i>t$. In representing such a configuration we will
denote the beads for $i\leq n$ by black circles, for $n<i\leq b$ by
grey circles, and the spaces by white circles (or blanks if this is
unambiguous). Runners will be numbered left to right from $0$ to
$p-1$. For example, the abacus corresponding to the partition
$(5,3,3,2,1,1,0^{10})$ when $p=5$, $n=16$, and $b=20$ is given in Figure
\ref{exab}. Note that the abacus uniquely determines the partition
$\lambda$.
\begin{figure}
\caption{The abacus for $(5,3,3,2,1,1,0^{10})$ with $p=5$, $n=16$, and $b=20$.}
\label{exab}
\end{figure}
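The construction of the abacus is easily mechanised. The following sketch (function names ours) computes the bead positions and per-runner counts for the example above; the runner counts it reports are the ones quoted later when discussing Proposition \ref{abblock}.

```python
def abacus(lam, b, p):
    """Bead positions and per-runner counts for lam on an abacus with b
    beads and p runners: bead i occupies position lam_i + b - i."""
    lam = list(lam) + [0] * (b - len(lam))
    positions = sorted(lam[i-1] + b - i for i in range(1, b + 1))
    runners = [sum(1 for q in positions if q % p == l) for l in range(p)]
    return positions, runners

# the example from the text: (5,3,3,2,1,1,0^10) with p = 5, n = 16, b = 20
positions, runners = abacus([5, 3, 3, 2, 1, 1], 20, 5)
assert runners == [5, 5, 3, 4, 3]     # 5 beads on runner 0,
assert runners[1] + runners[4] == 8   # 8 on runners 1 and 4 together,
assert runners[2] + runners[3] == 7   # and 7 on runners 2 and 3 together
```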
We would like a way of identifying whether two partitions $\lambda$
and $\mu$ are in the same $W_p$ orbit directly from their abacus
representation. First let us rephrase the content condition which we
had earlier.
Recall from Proposition \ref{orbits2} and the definition of
$c(\lambda)$ that $\lambda$ and $\mu$ are in the same $W_p$-orbit if
and only if there exists $\pi\in\Sigma_n$ such that for each $1\leq
i\leq n$ either
$$\mu_i-i\equiv\lambda_{\pi(i)}-\pi(i)\mod p$$
or
$$\mu_i-i\equiv \delta-2-(\lambda_{\pi(i)}-\pi(i))\mod p$$ and
the second case occurs an even number of times.
Choose $b\in{\mathbb N}$ with $2b\equiv 2-\delta\mod p$ (such a $b$ always
exists as $p>2$). Then $\lambda$ and $\mu$ are in the same $W_p$-orbit
if and only if there exists $\pi\in\Sigma_n$ such that for each $1\leq
i\leq n$ either
\begin{equation}\label{samerun}
\mu_i+b-i\equiv\lambda_{\pi(i)}+b-\pi(i)\mod p
\end{equation}
or
\begin{equation}\label{diffrun}
\mu_i+b -i\equiv p-(\lambda_{\pi(i)}+b -\pi(i))\mod p
\end{equation}
and the second case occurs an even number of times. Thus if we also
choose $b$ large enough such that $\lambda$ and $\mu$ can be
represented on an abacus with $b$ beads then (\ref{samerun}) says that
the bead corresponding to $\mu_i$ is on the same runner as the bead
corresponding to $\lambda_{\pi(i)}$, and (\ref{diffrun}) says that the
bead corresponding to $\mu_i$ is on runner $l$ only if the bead
corresponding to $\lambda_{\pi(i)}$ is on runner $p-l$. Note that for
corresponding black beads on runner $0$ both (\ref{samerun}) and
(\ref{diffrun}) hold, and so we can use this pair of beads to modify
$d(\sigma)$ to ensure that it is even. Obviously if there are no such
black beads then the number of beads changing runners between
$\lambda$ and $\mu$ must be even. Further, the grey beads (for
$i>n$) are the same on each abacus. Summarising, we have
\begin{prop}\label{abblock}
Choose $b\geq n$ with $2b\equiv 2-\delta\mod p$, and $\lambda$
and $\mu$ in $\Lambda_n$. Then
$\lambda$ and $\mu$ are in the same $W_p$-orbit if and only if
(i) the number of beads on runner $0$ is the same for $\lambda$ and
$\mu$, and
(ii) for each $1\leq l\leq p-1$, the \emph{total} number of beads
on runners $l$ and $p-l$ is the same for $\lambda$ and $\mu$, and
(iii) if there are no black beads on runner $0$, then
the number of beads changing runners between $\lambda$ and $\mu$ must
be even.
\end{prop}
Note that condition (iii) plays no role when $n$ is large (compared to
$p$) as in such cases every partition will have a black bead on runner
$0$.
To illustrate this result, consider the case $n=16$ and the partitions
\begin{equation}\label{trio}
\lambda=(5,3,3,2,1,1),\quad \mu=(2,2,2,1,1,1),\quad
\eta=(5,3,3,2,1,1,1).
\end{equation}
Take $p=5$ and $\delta=2$, then $b=20$
satisfies $2b\equiv 2-\delta\mod p$, and is large enough for all three
partitions to be represented using $b$ beads. The respective abacuses
are illustrated in Figure \ref{threeab}, with the matching rows for
condition (ii) in Proposition \ref{abblock} indicated.
\begin{figure}
\caption{Abacuses representing the elements $\lambda$, $\mu$ and
$\eta$ in (\ref{trio}).}
\label{threeab}
\end{figure}
We see that $\mu\in W_p.\lambda$, as the number of beads on runner
$0$, and on runners $1/4$ and $2/3$ are the same for both $\lambda$
and $\mu$ (respectively 5, 8, and 7) and there is a black bead on
runner $0$. (The number of beads moving from runner $l$ to a distinct
runner $p-l$ is $1$, which is odd. However, as discussed above, we can
choose $\sigma$ such that one of the two black beads on runner $0$ is
regarded as moving (to the same runner), to obtain the required even
number of such moves. If there were no black beads on runner $0$ then
this would not be possible.) However, $\eta\notin W_p.\lambda$ as
runners $1/4$ and $2/3$ have 9 and 6 beads respectively.
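Conditions (i) and (ii) of Proposition \ref{abblock} can be checked directly for the triple (\ref{trio}). A sketch (function names ours) confirming that $\mu$ passes and $\eta$ fails; condition (iii) is vacuous for these examples, which all have black beads on runner $0$.

```python
def runner_counts(lam, b, p):
    """Number of beads on each runner for lam on an abacus with b beads."""
    lam = list(lam) + [0] * (b - len(lam))
    return [sum(1 for i in range(1, b + 1) if (lam[i-1] + b - i) % p == l)
            for l in range(p)]

def orbit_test(lam, mu, b, p):
    """Conditions (i) and (ii) of the proposition: the same number of
    beads on runner 0, and the same total on each pair of runners l and
    p - l.  Condition (iii) is not checked here."""
    rl, rm = runner_counts(lam, b, p), runner_counts(mu, b, p)
    return rl[0] == rm[0] and all(rl[l] + rl[p - l] == rm[l] + rm[p - l]
                                  for l in range(1, p))

lam, mu, eta = [5, 3, 3, 2, 1, 1], [2, 2, 2, 1, 1, 1], [5, 3, 3, 2, 1, 1, 1]
assert orbit_test(lam, mu, 20, 5)        # mu is in the W_p-orbit of lam
assert not orbit_test(lam, eta, 20, 5)   # eta is not
```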
Having reinterpreted the orbit condition in terms of the abacus, we
will now show that the orbits of $W_p$ can be
\emph{non-trivial} unions of blocks for $B_n(\delta)$.
\begin{thm}
Suppose that $k$ is of characteristic $p>2$. Then for arbitrarily large
$n$ there exist $\lambda\vdash n$ and $\mu\vdash n-2$ which are in the
same $W_p$-orbit but not in the same $B_n(\delta)$-block.
\end{thm}
\begin{proof}
Let $b\in{\mathbb N}$ be such that
$$2b\equiv 2-\delta\mod p.$$ If $b$ is even (respectively odd),
consider the partial abacuses illustrated in Figure \ref{even}
(respectively Figure \ref{odd}).
\begin{figure}
\caption{The partial abacuses for $\lambda$ and $\mu$ when $b$ is even.}
\label{even}
\end{figure}
\begin{figure}
\caption{The partial abacuses for $\lambda$ and $\mu$ when $b$ is
odd.}
\label{odd}
\end{figure}
These will not correspond directly to partitions $\lambda$ and $\mu$,
as the degree of each partition will be much larger than $b$. However,
by completing each in the same way (by adding the same number of black
beads in rows from right to left above each partition, followed by a
suitable number of grey beads), they can be adapted to form abacuses
of partitions $\lambda\vdash n$ and $\mu\vdash n-2$ for some $n\gg 0$
and for some $b'\equiv b\mod p$. (This corresponds to adding
sufficiently many zeros to the end of each partition such that each
has $|\lambda|$ parts.)
It is clear from Proposition \ref{abblock} that in each case $\lambda$
and $\mu$ are in the same $W_p$-orbit. Note that for both $\lambda$
and $\mu$, all beads are as high as they can be on their given runner.
If we move any bead to a higher numbered position then this
corresponds to increasing the degree of the associated partition.
Thus $\lambda$ and $\mu$ are the only partitions with degree at most
$|\lambda|$ in their $W_p$-orbit. Also it is easy to check that $\mu$
is obtained from $\lambda$ by removing two boxes from the same
row. Clearly by increasing $b$ we can make $n$ arbitrarily large.
To complete the proof, it is enough to show that $L_n(\lambda^T)$ and
$L_n(\mu^T)$ are not in the same $B_n(\delta)$-block. We will reduce
this to a calculation for the symmetric group, and use the
corresponding (known) block result in that case. To state this we need
to recall the notion of a $p$-core.
A partition is a \emph{$p$-core} if the associated abacus has no gap
between any pair of beads on the same runner. We associate a unique
$p$-core to a given partition $\tau$ by sliding all beads in some
abacus representation of $\tau$ as far up each runner as they can go,
and taking the corresponding partition. By the Nakayama conjecture
(see \cite{meiertappe} for a survey of its various proofs), two
partitions $\tau$ and $\eta$ are in the same block for $k\Sigma_n$ if
and only if they have the same $p$-core. It is also easy to show
(using the definition of $p$-cores involving the removal of $p$-hooks
\cite{mathas}) that if $\tau$ is a $p$-core then so is $\tau^T$.
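The $p$-core can be computed from the abacus exactly as described: slide all beads to the top of their runners and read off the partition. A sketch (ours; the test value $(2,2,1)$ for the $5$-core of $(5,3,3,2,1,1)$ is our own computation, not taken from the text):

```python
def p_core(lam, p):
    """Compute the p-core of lam by sliding all beads on a b-bead abacus
    (b = number of parts) to the top of their runners."""
    b = len(lam)
    positions = [lam[i-1] + b - i for i in range(1, b + 1)]
    packed = []
    for l in range(p):
        k = sum(1 for q in positions if q % p == l)   # beads on runner l
        packed.extend(l + p * t for t in range(k))     # slide to the top
    packed.sort(reverse=True)
    core = [packed[i] - (b - 1 - i) for i in range(b)]
    return [x for x in core if x > 0]

core = p_core([5, 3, 3, 2, 1, 1], 5)
assert core == [2, 2, 1]
# removing p-hooks: the number of boxes removed is divisible by p
assert (sum([5, 3, 3, 2, 1, 1]) - sum(core)) % 5 == 0
```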
Returning to our proof, as $\lambda\vdash n$ we have that the cell
module $\Delta_n(\lambda^T)$ is isomorphic to the Specht module
$S^{\lambda^T}$ as a $k\Sigma_n$-module (by \cite[Section 2]{dhw}). As
$\lambda$ is a $p$-core so is $\lambda^T$, and hence
$\Delta_n(\lambda^T)$ is in a $k\Sigma_n$-block on its own (by the
Nakayama conjecture) so is simple as a $k\Sigma_n$-module,
isomorphic to $D^{\lambda^T}$, and hence equal to $L_n(\lambda^T)$ as a
$B_n(\delta)$-module.
If
$$[\Delta_n(\mu^T):L_n(\lambda^T)]\neq 0$$
then we must have
$$[\res_{k\Sigma_n}\Delta_n(\mu^T):D^{\lambda^T}]\neq 0.$$ However,
$\res_{k\Sigma_n}\Delta_n(\mu^T)$ has a Specht filtration where the
multiplicity of $S^{\eta^T}$ in this filtration is given by the
Littlewood--Richardson coefficient $c_{\mu^T,(2)}^{\eta^T}$
\cite[Proposition 8]{pagetLRR}.
In particular, as $\lambda^T$ is obtained from $\mu^T$ by adding two
boxes in the same column we see that $S^{\lambda^T}=D^{\lambda^T}$
does not appear as a Specht subquotient in this filtration
\cite[remarks after Theorem 3.1]{dhw}. However, we still have to prove
that it cannot appear as a composition factor of some other
$S^{\eta^T}$. But this is clear: if it did then $\eta^T$ would have
the same $p$-core as $\lambda^T$. As $\lambda^T$ is itself a $p$-core
and $|\eta^T|=|\lambda^T|=n$, this would force $\eta^T=\lambda^T$,
which we have already excluded.
This proves that $\Delta_n(\mu^T)=L_n(\mu^T)$, and so $\lambda$ and $\mu$
are in different blocks for $B_n(\delta)$.
\end{proof}
\begin{figure}
\caption{Assorted examples of affine reflections
with $p=5$ and $\delta=2$.}
\label{rosetta}
\end{figure}
\begin{figure}
\caption{The partitions and abacuses corresponding to the reflections
in Figure \ref{rosetta}, with $p=5$ and $\delta=2$.}
\label{rosetta2}
\end{figure}
To conclude, we illustrate some examples of various affine reflections
together with the corresponding partitions and abacuses, when $p=5$
and $\delta=2$. Our condition on $b$ implies that it must be chosen to
be a multiple of $5$. Reflections are labelled (a)--(e) in Figure
\ref{rosetta}, with the corresponding partitions and abacuses in
Figure \ref{rosetta2}. Case (a) corresponds to the reflection from
$(4,4,2)$ to $(4,3,1)$, with $n=b=10$. Case (b) corresponds to the
reflection from $(4,4,3)$ to $(4,2,1)$, with $n=11$ and $b=15$. Case
(c) corresponds to the reflection from $(4,4,4)$ to $(4,1,1)$ with
$n=12$ and $b=15$. These three cases only use elements from $W$, and
so would be reflections in any characteristic. Hence the conditions on
matched contents in these cases are equalities, not merely
equivalences mod $p$. Case (d) corresponds to the reflection from
$(6,6,5)$ to $(6,5,4)$ with $n=17$ and $b=20$. This is a strictly
affine phenomenon, and so the paired boxes only sum to $1-\delta$ mod
$p$. Finally, case (e) corresponds to the reflection from $(6,5,2)$ to
$(6,6,1)$ with $n=13$ and $b=15$. This is our only example of
reflection about an affine $(ij)$ line, and so is the only case
illustrated where the number of boxes is left unchanged.
\end{document} |
\begin{document}
\title{$MSS_{18}$ is Digitally 18-contractible}
\author{Laurence Boxer
\thanks{
Department of Computer and Information Sciences,
Niagara University,
Niagara University, NY 14109, USA;
and Department of Computer Science and Engineering,
State University of New York at Buffalo.
email: [email protected]
}
}
\date{ }
\maketitle{}
\begin{abstract}
The paper~\cite{Han06} asserts that the digital image $MSS_{18}$,
a digital model of the Euclidean 2-sphere $S^2$, is not 18-contractible. We show
this assertion is false.
Key words and phrases: digital topology, digital image, contractible, fundamental group
\end{abstract}
\section{Introduction}
In digital topology, we often find that properties of a digital image are
analogous to topological properties of an object in Euclidean space modeled by the
digital image. For example, a digital image that models a contractible object may
have the property of digital contractibility, and a digital image that models a
non-contractible object may have the property of digital non-contractibility.
$MSS_{18}$ is the name often used for a certain digital image that models the
Euclidean 2-sphere $S^2$. S.E. Han has claimed (Theorem~4.3 of~\cite{Han06})
that $MSS_{18}$ is not 18-contractible. We show this assertion is false.
\section{Preliminaries}
Much of this section is quoted or paraphrased from~\cite{BxSt16}.
We use ${\mathbb Z}$ to indicate the set of integers.
\subsection{Adjacencies}
A digital image is a graph $(X,\kappa)$, where $X$ is a subset of ${\mathbb Z}^n$ for
some positive integer~$n$, and $\kappa$ is an adjacency relation for the points
of~$X$. The $c_u$-adjacencies are commonly used.
Let $x,y \in {\mathbb Z}^n$, $x \neq y$, where we consider these points as $n$-tuples of integers:
\[ x=(x_1,\ldots, x_n),~~~y=(y_1,\ldots,y_n).
\]
Let $u \in {\mathbb Z}$,
$1 \leq u \leq n$. We say $x$ and $y$ are
{\em $c_u$-adjacent} if
\begin{itemize}
\item There are at most $u$ indices $i$ for which
$|x_i - y_i| = 1$.
\item For all indices $j$ such that $|x_j - y_j| \neq 1$ we
have $x_j=y_j$.
\end{itemize}
Often, a $c_u$-adjacency is denoted by the number of points
adjacent to a given point in ${\mathbb Z}^n$ using this adjacency.
E.g.,
\begin{itemize}
\item In ${\mathbb Z}^1$, $c_1$-adjacency is 2-adjacency.
\item In ${\mathbb Z}^2$, $c_1$-adjacency is 4-adjacency and
$c_2$-adjacency is 8-adjacency.
\item In ${\mathbb Z}^3$, $c_1$-adjacency is 6-adjacency,
$c_2$-adjacency is 18-adjacency, and $c_3$-adjacency
is 26-adjacency.
\end{itemize}
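The $c_u$-adjacencies can be checked mechanically. A minimal Python sketch (our own illustration, not from the paper), together with a helper that counts the neighbours of a point:

```python
from itertools import product

def cu_adjacent(x, y, u):
    """True if distinct points x, y of Z^n are c_u-adjacent: every
    coordinate differs by at most 1, and at most u differ by exactly 1."""
    if x == y:
        return False
    diffs = [abs(a - b) for a, b in zip(x, y)]
    return all(d <= 1 for d in diffs) and sum(d == 1 for d in diffs) <= u

def neighbour_count(n, u):
    """Number of points of Z^n that are c_u-adjacent to a given point."""
    origin = (0,) * n
    return sum(cu_adjacent(origin, q, u)
               for q in product((-1, 0, 1), repeat=n))
```

Counting neighbours of the origin recovers the names above: `neighbour_count(3, 1)`, `neighbour_count(3, 2)`, and `neighbour_count(3, 3)` return 6, 18, and 26 respectively.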
We write $x \leftrightarrow_{\kappa} x'$, or $x \leftrightarrow x'$ when $\kappa$ is understood, to indicate
that $x$ and $x'$ are $\kappa$-adjacent. Similarly, we
write $x \leftrightarroweq_{\kappa} x'$, or $x \leftrightarroweq x'$ when $\kappa$ is understood, to indicate
that $x$ and $x'$ are $\kappa$-adjacent or equal.
A subset $Y$ of a digital image $(X,\kappa)$ is
{\em $\kappa$-connected}~\cite{Rosenfeld},
or {\em connected} when $\kappa$
is understood, if for every pair of points $a,b \in Y$ there
exists a sequence $\{y_i\}_{i=0}^m \subset Y$ such that
$a=y_0$, $b=y_m$, and $y_i \leftrightarrow_{\kappa} y_{i+1}$ for $0 \leq i < m$.
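Connectivity in this sense is ordinary graph connectivity, so it can be verified by breadth-first search. A self-contained Python sketch (an illustration only; points are encoded as integer tuples):

```python
from collections import deque

def cu_adjacent(x, y, u):
    d = [abs(a - b) for a, b in zip(x, y)]
    return x != y and all(t <= 1 for t in d) and sum(t == 1 for t in d) <= u

def is_connected(points, u):
    """True if the finite digital image (points, c_u) is c_u-connected."""
    pts = list(points)
    if not pts:
        return True
    seen, queue = {pts[0]}, deque([pts[0]])
    while queue:
        x = queue.popleft()
        for y in pts:
            if y not in seen and cu_adjacent(x, y, u):
                seen.add(y)
                queue.append(y)
    return len(seen) == len(pts)
```

For instance, the diagonal $\{(0,0),(1,1),(2,2)\}$ is 8-connected ($u=2$) but not 4-connected ($u=1$).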
\subsection{Digitally continuous functions}
The following generalizes a definition of~\cite{Rosenfeld}.
\begin{definition}
\label{continuous}
{\rm ~\cite{Boxer99}}
Let $(X,\kappa)$ and $(Y,\lambda)$ be digital images. A single-valued function
$f: X \rightarrow Y$ is $(\kappa,\lambda)$-continuous if for
every $\kappa$-connected $A \subset X$ we have that
$f(A)$ is a $\lambda$-connected subset of $Y$. $\Box$
\end{definition}
When the adjacency relations are understood, we will simply say that $f$ is \emph{continuous}. Continuity can be expressed in terms of adjacency of points:
\begin{thm}
{\rm ~\cite{Rosenfeld,Boxer99}}
A function $f:X\to Y$ is continuous if and only if $x \leftrightarrow x'$ in $X$ implies
$f(x) \leftrightarroweq f(x')$. \qed
\end{thm}
See also~\cite{Chen94,Chen04}, where similar notions are referred to as {\em immersions}, {\em gradually varied operators},
and {\em gradually varied mappings}.
A homotopy between continuous functions may be thought of as
a continuous deformation of one of the functions into the
other over a finite time period.
\begin{definition}{\rm (\cite{Boxer99}; see also \cite{Khalimsky})}
\label{htpy-2nd-def}
Let $X$ and $Y$ be digital images.
Let $f,g: X \rightarrow Y$ be $(\kappa,\kappa')$-continuous functions.
Suppose there is a positive integer $m$ and a function
$F: X \times [0,m]_{{{\mathbb Z}}} \rightarrow Y$
such that
\begin{itemize}
\item for all $x \in X$, $F(x,0) = f(x)$ and $F(x,m) = g(x)$;
\item for all $x \in X$, the induced function
$F_x: [0,m]_{{{\mathbb Z}}} \rightarrow Y$ defined by
\[ F_x(t) ~=~ F(x,t) \mbox{ for all } t \in [0,m]_{{{\mathbb Z}}} \]
is $(2,\kappa')$-continuous. That is, $F_x$ is a path in $Y$.
\item for all $t \in [0,m]_{{{\mathbb Z}}}$, the induced function
$F_t: X \rightarrow Y$ defined by
\[ F_t(x) ~=~ F(x,t) \mbox{ for all } x \in X \]
is $(\kappa,\kappa')$-continuous.
\end{itemize}
Then $F$ is a {\rm digital $(\kappa,\kappa')$-homotopy between} $f$ and
$g$, and $f$ and $g$ are {\rm digitally $(\kappa,\kappa')$-homotopic in} $Y$.
$\Box$
\end{definition}
If there is a $(\kappa,\kappa)$-homotopy $F: X \times [0,m]_{{\mathbb Z}} \to X$ between
the identity function $1_X$ and a constant function, we say
$F$ is a (digital) {\em $\kappa$-contraction} and $X$ is {\em $\kappa$-contractible}.
\section{Contractibility of $MSS_{18}$}
$MSS_{18}$~\cite{Han06} is a ``small'' digital model of the Euclidean 2-sphere~$S^2$,
appearing rather like an American football.
As shown in Figure~\ref{MSS18fig}, we can take $MSS_{18} = \{p_i\}_{i=0}^9$, where
\[ p_0= (0,0,0),~p_1=(1,1,0),~p_2=(1,2,0),~p_3=(0,3,0),~p_4=(-1,2,0),
\]
\[ ~p_5=(-1,1,0),~p_6=(0,1,-1),~p_7=(0,2,-1),~p_8=(0,2,1),~p_9=(0,1,1).
\]
\begin{figure}
\caption{$MSS_{18}$}
\label{MSS18fig}
\end{figure}
Contrary to the claim of Theorem~4.3 of~\cite{Han06}, we have the following
Theorem~\ref{MSS18contracts}. Its proof makes use of the contractibility of a
4-point digital simple closed curve~\cite{Boxer99}. Notice that $MSS_{18}$ contains
the 4-point 18- and 26-simple closed curves
\[ S=\{(x,1,z) \in MSS_{18}\}=\{p_1,p_6,p_5,p_9\} \mbox{ and }\]
\[ S'=\{(x,2,z) \in MSS_{18}\} = \{p_2,p_7,p_4,p_8\}.
\]
Roughly, our contraction of $MSS_{18}$ begins by continuously deforming $MSS_{18}$
into a connected subset of $S \cup S'$, after which a contraction is completed.
\begin{thm}
\label{MSS18contracts}
$MSS_{18}$ is 18-contractible and 26-contractible.
\end{thm}
\begin{proof}
We define a contraction $H: MSS_{18} \times [0,3]_{{\mathbb Z}} \to MSS_{18}$ as follows.
\begin{itemize}
\item For the step at time $t=0$, we let $H(p_i,0) = p_i$ for all
      $i \in \{0,1,\ldots,9\}$.
\item For the step $t=1$, we let
\[ H(p_i,1) = \left \{ \begin{array}{ll}
p_1 & \mbox{if } i \in \{0,1,9\}; \\
p_6 & \mbox{if } i \in \{5,6\}; \\
p_2 & \mbox{if } i \in \{2,3,8\}; \\
p_7 & \mbox{if } i \in \{4,7\}. \\
\end{array} \right .
\]
Thus, during this step, $H$ begins contracting $S$, deforming $S$ to
$\{p_1,p_6\}$; and also begins
contracting $S'$, deforming $S'$ to $\{p_2,p_7\}$; as well as bringing $p_0$ to $p_1$ and $p_3$ to $p_2$.
\item For the step $t=2$, let
\[ H(p_i,2) = \left \{ \begin{array}{ll}
p_6 & \mbox{if } H(p_i,1) \in \{p_1,p_6\}; \\
p_7 & \mbox{if } H(p_i,1) \in \{p_2,p_7\}.
\end{array} \right .
\]
This step completes the contraction of $S$ to the point $p_6$; it also
completes the contraction of $S'$ to the point $p_7$.
\item For the step $t=3$, let $H(p_i,3)=p_6$ for all indices~$i$.
\end{itemize}
It is elementary to verify that $H$ is an 18-homotopy and a 26-homotopy
between the identity on $MSS_{18}$ and a constant map.
\end{proof}
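The verification at the end of the proof is finite, so it can be delegated to a computer. The Python sketch below (our own check, not part of the paper) records $H$ by the index of the image point at each time step and tests the two continuity conditions of Definition~\ref{htpy-2nd-def}:

```python
def cu_adjacent(x, y, u):
    d = [abs(a - b) for a, b in zip(x, y)]
    return x != y and all(t <= 1 for t in d) and sum(t == 1 for t in d) <= u

def adj_or_eq(x, y, u):
    return x == y or cu_adjacent(x, y, u)

# the ten points of MSS_18
P = [(0, 0, 0), (1, 1, 0), (1, 2, 0), (0, 3, 0), (-1, 2, 0),
     (-1, 1, 0), (0, 1, -1), (0, 2, -1), (0, 2, 1), (0, 1, 1)]

# H[t][i] = j means H(p_i, t) = p_j, as in the proof above
H = [list(range(10)),
     [1, 1, 2, 2, 7, 6, 6, 7, 2, 1],
     [6, 6, 7, 7, 7, 6, 6, 7, 7, 6],
     [6] * 10]

def is_contraction(u):
    # each track t -> H(p_i, t) moves by at most one adjacency per step
    tracks = all(adj_or_eq(P[H[t][i]], P[H[t + 1][i]], u)
                 for t in range(3) for i in range(10))
    # each stage F_t maps adjacent points to adjacent-or-equal points
    stages = all(adj_or_eq(P[H[t][i]], P[H[t][j]], u)
                 for t in range(4) for i in range(10) for j in range(10)
                 if cu_adjacent(P[i], P[j], u))
    return tracks and stages
```

Both `is_contraction(2)` and `is_contraction(3)` return `True`, confirming that $H$ is simultaneously an 18-contraction and a 26-contraction.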
Theorem~\ref{MSS18contracts} adds to our
knowledge~\cite{Boxer99,Boxer06} of ``small'' digital spheres that are
digitally contractible. It seems likely that ``large'' digital spheres are not
digitally contractible, although other than for digital 1-spheres, i.e.,
simple closed curves~\cite{Boxer10}, at the current writing
the literature lacks results to support this conjecture.
Note also that since a contractible digital image has trivial
fundamental group (\cite{Boxer05}; proof corrected in
\cite{BxSt18}), the following assertion, originally appearing
as Propositions~3.3 and~3.5 of~\cite{BxSt16}, is an
immediate consequence of Theorem~\ref{MSS18contracts}.
\begin{cor}
Let $x \in MSS_{18}$. Then the fundamental groups $\Pi_1^{18}(MSS_{18},x)$
and $\Pi_1^{26}(MSS_{18},x)$ of $(MSS_{18},x)$ with respect to 18- and 26-adjacency,
respectively, are trivial. $\qed$
\end{cor}
\section{Further remarks}
We have corrected an error of~\cite{Han06} by showing that $MSS_{18}$ is
contractible with respect to both 18- and 26-adjacency.
\end{document} |
\begin{document}
\title[commutators on Banach function spaces]{Necessary conditions for the boundedness of linear and bilinear commutators on Banach function spaces}
\author{Lucas Chaffee}
\address{Department of Mathematics,
Western Washington University, Bellingham, WA 98225-9063}
\email{[email protected]}
\author{David Cruz-Uribe, OFS}
\address{Department of Mathematics\\
University of Alabama, Box 870350, Tuscaloosa, AL 35487}
\email{[email protected]}
\subjclass[2010]{42B20, 42B35}
\keywords{BMO, commutators, singular integrals, fractional integrals,
bilinear operators, weights, variable Lebesgue spaces}
\date{January 26, 2017}
\thanks{ The second
author is supported by NSF Grant DMS-1362425 and research funds from the
Dean of the College of Arts \& Sciences, the University of Alabama.}
\begin{abstract}
In this article we extend recent results by the first
author~\cite{LC} on the necessity of $BMO$ for the boundedness of
commutators on the classical Lebesgue spaces. We generalize these
results to a large class of Banach function spaces. We show that
with modest assumptions on the underlying spaces and on the operator
$T$, if the commutator $[b,T]$ is bounded, then the function $b$ is
in $BMO$.
\end{abstract}
\maketitle
\section{Introduction}
The purpose of this paper is to extend a recent result of the first
author~\cite{LC} on necessary conditions for commutators to be bounded
on the classical Lebesgue spaces. He showed that if $T$ is a ``nice''
operator, and if (for example) the commutator $[b,T]$ is bounded on
$L^p$, then $b\in BMO$. He also proved an analogous result for bilinear
commutators. We generalize these results to a large collection of
Banach function spaces. To do so requires the assumption of a
geometric condition on the underlying spaces that is closely related
to the boundedness of the Hardy-Littlewood maximal operator, and which
holds in a large number of important special cases.
To state our results we recall some basic facts about Banach function
spaces. For further information, see Bennett and
Sharpley~\cite{bennett-sharpley88}. By a Banach function space $X$ we
mean a Banach space of measurable functions over ${\mathbb R}^n$ whose norm
$\|\cdot\|_X$ satisfies the following for all $f,\,g\in X$:
\begin{enumerate}
\setlength{\itemsep}{3pt}
\item $\|f\|_X = \||f|\|_X$;
\item if $|f|\le |g|$ a.e., then
$\|f\|_X\leq \|g\|_X$;
\item if $\{f_n\}\subset X$ is a
sequence such that $|f_n|$ increases to $|f|$ a.e., then
$\|f_n\|_X$ increases to $\|f\|_X$;
\item if $E\subset {\mathbb R}^n$ is bounded, then $\|\chi_E\|_X<\infty$;
\item if $E$ is bounded, then $\int_E |f(x)|\, dx\le C\|f\|_X$,
where $C=C(E,X)$.
\end{enumerate}
Given a Banach function space $X$, there exists another Banach
function space $X'$, called the associate space of $X$, such that for
all $f\in X$,
\[ \|f\|_X \approx \sup_{\substack{g\in X'\\ \|g\|_{X'}\leq 1}}
\int_{{\mathbb R}^n} f(x)g(x)\,dx. \]
In many (but not all) cases, the associate space is equal to the dual
space $X^*$. Taking the associate space, however, is always an
involution, in the sense that $(X')'=X$. Moreover, we have the following
generalization of H\"older's inequality:
\[ \int_{{\mathbb R}^n} |f(x)g(x)|\,dx \lesssim \|f\|_X \|g\|_{X'}. \]
Given a linear operator $T$,
define the commutator $[b,T]f(x) = b(x)Tf(x)-T(bf)(x)$, where $b$ is a
locally integrable function. We can now state our first result.
\begin{theorem} \label{thm:linear} Given Banach function spaces $X$
and $Y$, and $0\leq \alpha<n$, suppose that for every cube $Q$,
\begin{equation} \label{eqn:linear1}
|Q|^{-\frac{\alpha}{n}}\|\chi_Q\|_{Y'}
\|\chi_Q\|_{X} \leq C|Q|.
\end{equation}
Let $T$ be a linear operator defined on $X$ which can
be represented by
\[ Tf(x)=\int_{{\mathbb R}^n} K(x-y)f(y)\,dy\]
for all $x\not\in \text{supp}(f)$, where $K$ is a homogeneous
kernel of degree $n-\alpha$. Suppose further that there exists a
ball $B\subset{\mathbb R}^{n}$ on which $\frac1K$ has an absolutely convergent
Fourier series. If the commutator satisfies
$[b,T] : X\to Y$,
then $b \in BMO({\mathbb R}^n)$.
\end{theorem}
A wide variety of classical operators satisfy the hypotheses of
Theorem~\ref{thm:linear}. The kernel $K$ is such that $\frac1K$ has
an absolutely convergent Fourier series if it is non-zero on a ball
$B$ and has enough regularity: $K\in C^s(B)$ for $s>n/2$ is
sufficient. (See Grafakos~\cite[Theorem~3.2.16]{grafakos08a}. For
weaker sufficient conditions, see recent results by M\'oricz and
Veres~\cite{MR2361606}.) In the linear case this condition is
satisfied by Calder\'on-Zygmund singular integrals of convolution type
whose kernels are smooth, and in particular the Riesz transforms. It
also includes the fractional integral operators (also referred to as
Riesz potentials). For precise definitions, see
Section~\ref{section:bfs} below.
To state our result in the bilinear case we recall that there are two
commutators to consider: if $T$ is a bilinear operator
and $b\in L^1_{\mathrm{loc}}({\mathbb R}^n)$, define
\begin{gather*}
[b,T]_1(f,g)(x) = b(x)T(f,g)(x)- T(bf,g)(x), \\
[b,T]_2(f,g)(x) = b(x)T(f,g)(x)- T(f,bg)(x).
\end{gather*}
\begin{theorem} \label{thm:main}
Given Banach function spaces $X_1,\ X_2$, and $Y$, and $0\leq
\alpha<2n$, suppose that for every cube $Q$,
\begin{equation} \label{eqn:main1}
|Q|^{-\frac{\alpha}{n}}\|\chi_Q\|_{Y'}
\|\chi_Q\|_{X_1}\|\chi_Q\|_{X_2} \leq C|Q|.
\end{equation}
Let $T$ be a bilinear operator defined on $X_1\times X_2$ which can
be represented by
\[ T(f,g)(x)=\int_{{\mathbb R}^n}\int_{{\mathbb R}^n} K(x-y,z-y)f(y)g(z)\,dydz \]
for all $x\not\in \text{supp}(f)\cap \text{supp}(g)$, where $K$ is a homogeneous
kernel of degree $2n-\alpha$. Suppose further that there exists a
ball $B\subset{\mathbb R}^{2n}$ on which $\frac1K$ has an absolutely convergent
Fourier series. If for $j=1$ or $j=2$, the bilinear commutator satisfies
\[ [b,T]_j:X_1\times X_2\to Y, \]
then $b \in BMO({\mathbb R}^n)$.
\end{theorem}
\begin{remark}
Theorem~\ref{thm:main} extends naturally to
multilinear operators. We leave the statement and proof of this
generalization to the interested reader.
\end{remark}
\begin{remark}
The restrictions on $\alpha$ are not actually necessary in the proofs
of Theorems~\ref{thm:linear} and~\ref{thm:main}:
we can take any $\alpha \in {\mathbb R}$. However, we are not aware of any
operators for which Banach function space estimates hold that do not
satisfy the given restrictions on $\alpha$.
\end{remark}
For the absolute convergence of multiple Fourier series, see the above
references. In the bilinear case, Theorem~\ref{thm:main} covers such
operators as the bilinear Calder\'on-Zygmund operators with smooth
kernels~\cite{MR2030573,MR1880324,MR1947875,MR2483720} and the
bilinear fractional integral
operator~\cite{MR1164632,MR1812822,MR1682725,MR2514845}.
One drawback of Theorem~\ref{thm:main} is that we must assume that the
target space $Y$ is a Banach function space. This is somewhat restrictive:
even in the case of the Lebesgue spaces, bilinear operators
satisfy inequalities of the form $T : L^{p_1}\times L^{p_2}
\rightarrow L^p$ where $p<1$. This assumption, however, is
intrinsic to the statement and proof of our result, since we need to
use the generalized reverse H\"older inequality. We are uncertain
what the correct assumption should be when we assume that $Y$ is only
a quasi-Banach space.
\begin{remark}
The necessity of BMO for the boundedness on Lebesgue spaces of
commutators of certain multilinear singular integrals was recently
proved in~\cite{LW} using a completely different approach, but the
authors were also required to assume that for the target space
$L^p$, $p\geq 1$.
\end{remark}
The remainder of this paper is split into two parts. We defer the
actual proof of Theorems~\ref{thm:linear} and~\ref{thm:main} to
Section~\ref{section:proof}, and in fact we will only give the proof of
the latter; the proof of the former is obtained by a trivial adaptation
of the proof in the bilinear case. But first, in Section~\ref{section:bfs} we
give a number of specific examples of Banach function spaces and
consider the relationship between the conditions~\eqref{eqn:linear1}
and~\eqref{eqn:main1}, and sufficient conditions for maximal
operators and commutators to
be bounded.
Throughout this paper, $n$ will denote the dimension of the underlying
space, ${\mathbb R}^n$. We will consider real-valued functions over
${\mathbb R}^n$. Cubes in ${\mathbb R}^n$ will always have their sides parallel to
the coordinate axes. If we write $A\lesssim B$, we mean $A\leq cB$, where the
constant $c$ depends on the operator $T$, the Banach function spaces,
and on the dimension $n$. These implicit constants may change from
line to line. If we write $A\approx B$, then $A\lesssim B$ and
$B\lesssim A$.
\section{Examples of Banach function spaces}
\label{section:bfs}
\subsection*{Averaging and maximal operators}
The necessary condition \eqref{eqn:linear1} in Theorem~\ref{thm:linear}
is closely related to a necessary condition for the boundedness of
averaging operators and fractional maximal operators. For
$0\leq \alpha<n$, given a cube $Q$ define the linear $\alpha$-averaging operator
\[ A_\alpha^Q f(x) = |Q|^{\frac{\alpha}{n}}\Xint-_Q f(y)\,dy
\cdot \chi_Q(x). \]
We define the associated fractional maximal operator by
\[ M_\alpha f(x) = \sup_Q |Q|^{\frac{\alpha}{n}}\Xint-_Q |f(y)|\,dy
\cdot \chi_Q(x). \]
We immediately have that for all $Q$, $|A_\alpha^Qf(x)|\leq M_\alpha
f(x)$. We also make the analogous definitions in the bilinear case:
for $0\leq \alpha <2n$,
\[ A_\alpha^Q (f,g)(x) = |Q|^{\frac{\alpha}{n}}\Xint-_Q f(y)\,dy
\Xint-_Q g(y)\,dy\cdot \chi_Q(x), \]
\[ M_\alpha (f,g)(x) = \sup_Q |Q|^{\frac{\alpha}{n}}\Xint-_Q |f(y)|\,dy
\Xint-_Q |g(y)|\,dy\cdot \chi_Q(x).\]
Again, we have the pointwise bound $|A_\alpha^Q(f,g)(x)|\leq M_\alpha
(f,g)(x)$. In both the linear and bilinear case, when $\alpha=0$ we
write $M$ instead of $M_0$.
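In a discrete one-dimensional model these definitions are straightforward to implement. The Python sketch below (an illustration only, not from the paper: $f$ is a list of samples at unit-spaced points, ``cubes'' are index intervals, and $n=1$) exhibits the pointwise bound $|A_\alpha^Q f|\leq M_\alpha f$:

```python
def avg_alpha(f, i, j, alpha):
    """|Q|^alpha times the average of f over the index interval Q = [i, j)."""
    return (j - i) ** alpha * sum(f[i:j]) / (j - i)

def max_alpha(f, x, alpha):
    """Fractional maximal function at x: sup over index intervals containing x."""
    n = len(f)
    return max(avg_alpha([abs(v) for v in f], i, j, alpha)
               for i in range(x + 1) for j in range(x + 1, n + 1))

f = [1.0, -2.0, 3.0, 0.0, 5.0]  # arbitrary sample data
alpha = 0.5
for x in range(len(f)):
    for i in range(x + 1):
        for j in range(x + 1, len(f) + 1):
            # the averaging operator is dominated pointwise by the maximal operator
            assert abs(avg_alpha(f, i, j, alpha)) <= max_alpha(f, x, alpha)
```

With $\alpha=0$ the function `max_alpha` is the discrete Hardy-Littlewood maximal operator $M$.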
In the linear case the maximal operators are classical; the averaging
operators were implicit but seem to have first been considered
explicitly when $\alpha=0$ in~\cite{jawerth86}. In the bilinear
case, when $\alpha=0$,
the bilinear maximal operator was introduced in~\cite{MR2483720}, and
when $0<\alpha<2n$ in~\cite{MR2514845}. The bilinear averaging
operators were first considered in~\cite{Kokilashvili:2015gw}.
The following result is due to Berezhnoi~\cite[Lemma~2.1]{MR1622773}
in the linear case and to Kokilashvili {\em et
al.}~\cite[Theorem~2.1]{Kokilashvili:2015gw} in the bilinear case.
\begin{proposition} \label{prop:avg-op}
Fix $0\leq \alpha<n$. Given Banach function spaces $X$, $Y$,
there exists a constant $C$ such that for every cube $Q$,
\begin{equation} \label{eqn:avg-op1}
\|\chi_Q\|_{Y}\|\chi_Q\|_{X'} \leq C|Q|^{1-\frac{\alpha}{n}};
\end{equation}
if and only if
\begin{equation} \label{eqn:avg-op2}
\|A_\alpha^Q f\|_{Y} \leq C\|f\|_{X}.
\end{equation}
Similarly, in the bilinear case fix $0\leq \alpha <2n$. Given Banach
function spaces $X_1$, $X_2$ and $Y$ there exists a constant $C$ such
that for every cube $Q$,
\begin{equation} \label{eqn:bi-avg-op1}
\|\chi_Q\|_{Y}\|\chi_Q\|_{X_1'}\|\chi_Q\|_{X_2'}
\leq C|Q|^{1-\frac{\alpha}{n}};
\end{equation}
if and only if
\begin{equation} \label{eqn:bi-avg-op2}
\|A_\alpha^Q (f,g)\|_{Y} \leq C\|f\|_{X_1}\|g\|_{X_2}.
\end{equation}
\end{proposition}
By the pointwise estimates above,~\eqref{eqn:avg-op2} holds whenever
the fractional maximal operator satisfies
$M_\alpha : X \rightarrow Y$, and the corresponding fact is true in
the bilinear case. Moreover, when $\alpha=0$ and $X=Y$,
condition~\eqref{eqn:avg-op1} is the same as~\eqref{eqn:linear1}.
This yields the following important corollary to Theorem~\ref{thm:linear}.
\begin{corollary} \label{cor:m-bounded}
Given a Banach function space $X$, if $M : X \rightarrow X$,
then~\eqref{eqn:linear1} holds. Equivalently, if the maximal operator
is bounded and $T$ is an operator with a kernel that is homogeneous of
degree $n$, then a necessary condition for $[b,T] : X \rightarrow X$
is that $b\in BMO$.
\end{corollary}
As we will discuss below, the assumption that the maximal operator is
bounded on the Banach function space $X$ is a natural one.
Unfortunately, we cannot generalize Corollary~\ref{cor:m-bounded} to
the case $\alpha>0$ for linear operators or to any bilinear operators
acting on general Banach function spaces. However, we can prove that
the conditions in Proposition~\ref{prop:avg-op} and the hypotheses in
Theorems~\ref{thm:linear} and~\ref{thm:main} are related in two
important examples of Banach function spaces---the weighted and
variable Lebesgue spaces---and for singular integrals of convolution
type and fractional integral operators.
Before considering these spaces, we want to specify the operators we are
interested in. In the linear setting, we will consider singular
integrals of the form
\[ Tf(x) = \text{p.v.}\int_{{\mathbb R}^n} \frac{\Omega(y')}{|y|^n}f(x-y)\,dy, \]
where $y'=y/|y|$ and $\Omega$ is defined on $S^{n-1}$,
has mean $0$, and is sufficiently smooth.
Examples include the Riesz transforms $R_j$, which have kernels
$K_j(x) = \frac{x_j}{|x|^{n+1}}$. For $0<\alpha<n$, we will
consider the fractional
integral operator: i.e., the convolution operator
\[ I_\alpha f(x) = \int_{{\mathbb R}^n} \frac{f(y)}{|x-y|^{n-\alpha}}\,dy. \]
For more information on both kinds of operators,
see~\cite{grafakos08a,grafakos08b}.
In the bilinear setting, we consider singular integral operators of
the form
\[ T(f,g)(x) = \text{p.v.}\int_{{\mathbb R}^{n}} \int_{{\mathbb R}^{n}}
\frac{\Omega((y_1,y_2)')}{|(y_1,y_2)|^{2n}}f(x-y_1)g(x-y_2)\,dy_1dy_2, \]
where $\Omega$ is defined on $S^{2n-1}$, has mean $0$ and is
sufficiently smooth. Examples include the multilinear Riesz
transforms. For more on these operators, see~\cite{MR1880324}.
We also consider the bilinear fractional integral operator, which is defined for $0<\alpha<2n$
by
\[ I_\alpha(f,g)(x) = \int_{{\mathbb R}^{n}} \int_{{\mathbb R}^{n}}
\text{ var}phirac{f(y_1)g(y_2)}{(|x-y_1|+|x-y_2|)^{2n-\alpha}}\,dy_1dy_2. \]
For more on these operators see~\cite{MR1164632,MR2514845}.
For brevity, in the following sections we will refer to linear and bilinear singular
integral operators whose kernels satisfy these hypotheses as regular
operators.
\subsection*{Weighted Lebesgue spaces}
In this section we apply Theorems~\ref{thm:linear}
and~\ref{thm:main} to the weighted Lebesgue spaces. Given a weight $w$ (i.e.,
a non-negative, locally integrable function) and $1<p<\infty$, we
define the space $L^p(w)$ to be the set of all measurable functions $f$ such that
\[ \|f\|_{L^p(w)} = \left(\int_{{\mathbb R}^n} |f(x)|^p
w(x)\,dx\right)^{\frac{1}{p}} < \infty. \]
We say that a weight $w$ is in the Muckenhoupt class $A_p$ if for
every cube $Q$,
\[ \Xint-_Q w(x)\,dx \left(\Xint-_Q w(x)^{1-p'}\,dx\right)^{p-1}
\leq C. \]
Then $L^p(w)$ is a Banach function space, and it is well known that
the associate space is $L^{p'}(w^{1-p'})$. Further, the
Hardy-Littlewood maximal operator is bounded on $L^p(w)$ if and only
if $w\in A_p$.
For commutators, if $w\in A_p$ and
if $T$ is any Calder\'on-Zygmund singular integral operator (and not
just the class of singular integrals described above), and if
$b\in BMO$, then the commutator $[b,T] : L^p(w) \rightarrow L^p(w)$~\cite{perez97}.
Moreover, it is easy to see that the $A_p$ condition is equivalent to
\[ |Q|^{-1}\|\chi_Q\|_{L^p(w)}\|\chi_Q\|_{L^{p'}(w^{1-p'})} \leq C, \]
which is condition~\eqref{eqn:linear1}. Therefore, we have proved
the following.
\begin{corollary}
For $1<p<\infty$ and $w\in A_p$, given a regular singular
integral operator $T$ and a function $b$, if $[b,T]$ is bounded on
$L^p(w)$, then $b\in BMO$.
\end{corollary}
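The equivalence between the $A_p$ condition and condition \eqref{eqn:linear1} can also be observed numerically. The following Python sketch (an illustration with the hypothetical choice $w(x)=|x|^{1/2}$ and $p=2$ on intervals in ${\mathbb R}^1$; it is not part of the paper) approximates $|Q|^{-1}\|\chi_Q\|_{L^p(w)}\|\chi_Q\|_{L^{p'}(w^{1-p'})}$ by a midpoint sum:

```python
def ap_quantity(w, p, a, b, steps=10000):
    """Midpoint-rule estimate, for the interval Q = [a, b], of
    |Q|^{-1} * (int_Q w)^{1/p} * (int_Q w^{1-p'})^{1/p'}."""
    pp = p / (p - 1)  # conjugate exponent p'
    h = (b - a) / steps
    xs = [a + (i + 0.5) * h for i in range(steps)]
    int_w = sum(w(x) for x in xs) * h                  # int_Q w
    int_dual = sum(w(x) ** (1 - pp) for x in xs) * h   # int_Q w^{1-p'}
    return int_w ** (1 / p) * int_dual ** (1 / pp) / (b - a)

w = lambda x: abs(x) ** 0.5  # a power weight in A_2
# the quantity is scale-invariant for this weight on intervals [0, h]
print(ap_quantity(w, 2, 0.0, 1.0), ap_quantity(w, 2, 0.0, 0.01))
```

Both printed values are close to $(4/3)^{1/2}\approx 1.155$, independent of the interval length, consistent with $w\in A_2$ and the uniform bound in \eqref{eqn:linear1}.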
For $0<\alpha<n$ the corresponding weight condition
is the $A_{p,q}$ condition. Given $1<p<\frac{n}{\alpha}$, define $q$
by $\frac{1}{p}-\frac{1}{q}=\frac{\alpha}{n}$. We say $w\in A_{p,q}$
if for every cube $Q$,
\[ \left(\Xint-_Q w(x)^q\,dx\right)^{\frac{1}{q}} \left(\Xint-_Q
  w(x)^{-p'}\,dx\right)^{\frac{1}{p'}} \leq C. \]
We have that the fractional maximal operator satisfies $M_\alpha :
L^p(w^p)\rightarrow L^q(w^q)$ if and only if $w\in
A_{p,q}$~\cite{muckenhoupt-wheeden74}.
For commutators of the fractional integral operator $I_\alpha$,
if $w\in A_{p,q}$ and $b\in
BMO$,
$[b,I_\alpha] : L^p(w^p) \rightarrow L^q(w^q)$ \cite{cruz-uribe-fiorenza03}.
The $A_{p,q}$ condition also implies \eqref{eqn:linear1}, though
unlike the case of $A_p$ weights, this is less obvious. In this
case we have $X=L^p(w^p)$ and $Y=L^q(w^q)$, so $Y'=L^{q'}(w^{-q'})$,
and
we can rewrite \eqref{eqn:linear1} as
\[
\left( \Xint-_Q w^{-q'}\,dx \right)^{\frac{1}{q'}}
\left( \Xint-_Q w^{p}\,dx \right)^{\frac{1}{p}} \leq C.
\]
Here we use the fact that
since $\frac{1}{p}-\frac{1}{q}=\frac{\alpha}{n}$,
$\frac{1}{p}+\frac{1}{q'}=1+\frac{\alpha}{n}$. Since $p<q$, $q'<p'$, so
if we apply H\"older's inequality twice we get that
\[ \left( \Xint-_Q w^{-q'}\,dx \right)^{\frac{1}{q'}}
\left( \Xint-_Q w^{p}\,dx \right)^{\frac{1}{p}}
\leq \left( \Xint-_Q w^{-p'}\,dx \right)^{\frac{1}{p'}}
\left( \Xint-_Q w^{q}\,dx \right)^{\frac{1}{q}}
\leq C.\]
\begin{corollary}
Given $0<\alpha<n$ and $1<p<\frac{n}{\alpha}$, define $q$ by
$\frac{1}{p}-\frac{1}{q}=\frac{\alpha}{n}$. Given $w\in A_{p,q}$
and a function $b$, if the commutator
$[b,I_\alpha] : L^p(w^p)\rightarrow L^q(w^q)$, then $b\in BMO$.
\end{corollary}
We have similar results for bilinear operators, but they are much less
complete. Given $1<p_1,\,p_2<\infty$, define the vector ``exponent''
$\vec{p}=(p_1,p_2,p)$, where
$\frac{1}{p}=\frac{1}{p_1}+\frac{1}{p_2}$. Given $\vec{p}$ and
weights $w_1,\,w_2$, define the triple $\vec{w}=(w_1,w_2,w)$, where
$w=w_1^{\frac{p}{p_1}}w_2^{\frac{p}{p_2}}$. Define the multilinear
analog of the Muckenhoupt $A_p$ weights as follows: given $\vec{p}$,
we say that $\vec{w}\in A_{\vec{p}}$ if for every cube $Q$,
\[ \left(\Xint-_Q w\,dx\right)^{\frac{1}{p}}
\left(\Xint-_Q w_1^{1-p_1'}\,dx\right)^{\frac{1}{p_1'}}
\left(\Xint-_Q w_2^{1-p_2'}\,dx\right)^{\frac{1}{p_2'}} \leq C. \]
These weights were introduced in~\cite{MR2483720}, where it was shown
that the bilinear maximal
operator satisfies $M :L^{p_1}(w_1)\times L^{p_2}(w_2)\rightarrow
L^p(w)$ if and only if $\vec{w} \in A_{\vec{p}}$. It is an immediate
consequence of H\"older's inequality that if $w_1\in A_{p_1}$ and
$w_2\in A_{p_2}$, then $\vec{w}\in A_{\vec{p}}$; however, this
condition is not necessary.
Given a bilinear Calder\'on-Zygmund singular integral
operator $T$ and $b\in BMO$, we have that if
$\vec{w}\in A_{\vec{p}}$, then for $i=1,2$,
$[b,T]_i : L^{p_1}(w_1)\times L^{p_2}(w_2)\rightarrow
L^p(w)$~\cite{MR2483720}. In light of the results in the linear case, it
seems reasonable to conjecture that when $p>1$, the $A_{\vec{p}}$
condition implies~\eqref{eqn:main1}, which in this case can be written
as
\begin{equation} \label{eqn:main-bilinear}
\left(\Xint-_Q w^{1-p'}\,dx\right)^{\frac{1}{p'}}
\left(\Xint-_Q w_1\,dx\right)^{\frac{1}{p_1}}
\left(\Xint-_Q w_2\,dx\right)^{\frac{1}{p_2}} \leq C.
\end{equation}
(Here we use the fact that
$\frac{1}{p_1}+\frac{1}{p_2}+\frac{1}{p'}=1$.) Written in this form,
this condition can be viewed formally as the bilinear analog of the fact that in the
linear case, if $w\in A_p$, then $w^{1-p'} \in A_{p'}$.
However, we cannot prove this in general, or even in the special case
when $w_i\in A_{p_i}$, $i=1,2$. We can prove that~\eqref{eqn:main1}
holds if we make the additional, stronger assumption that $w\in A_p$.
This holds, for instance, if we assume that
$w_1,\,w_2\in A_p\subset A_{p_i}$, $i=1,2$. (This inclusion holds since $A_q\subset A_r$ whenever $q<r$, and here
$p<p_i$.) For then, since $w_i\in A_{p_i}$, by a multi-variable reverse H\"older
inequality recently proved in~\cite{DCU-KM} (and implicit
in~\cite[Theorem~2.6]{cruz-uribe-neugebauer95}), we have that
\[ \left(\Xint-_Q w_1\,dx\right)^{\frac{p}{p_1}}
\left(\Xint-_Q w_2\,dx\right)^{\frac{p}{p_2}}
\lesssim \Xint-_Q w\,dx. \]
But then, since $w\in A_p$,
\begin{multline*}
\left(\Xint-_Q w^{1-p'}\,dx\right)^{\frac{1}{p'}}
\left(\Xint-_Q w_1\,dx\right)^{\frac{1}{p_1}}
\left(\Xint-_Q w_2\,dx\right)^{\frac{1}{p_2}} \\
\approx \left(\Xint-_Q w\,dx\right)^{-\frac{1}{p}}
\left(\Xint-_Q w_1\,dx\right)^{\frac{1}{p_1}}
\left(\Xint-_Q w_2\,dx\right)^{\frac{1}{p_2}} \leq C.
\end{multline*}
\begin{corollary} \label{cor:wtd-sio-bilinear} Given $\vec{p}$ with
$p>1$ and $\vec{w}\in A_{\vec{p}}$, suppose $w_i \in A_{p_i}$,
$i=1,2$, and suppose $w\in A_p$. If
$T$ is a regular bilinear singular integral, and
$b$ is a function such that for $i=1,2$, $[b,T]_i : L^{p_1}(w_1)
\times L^{p_2}(w_2) \rightarrow L^p(w)$, then $b\in BMO$.
\end{corollary}
For bilinear fractional integrals, the corresponding weight class was
introduced in~\cite{MR2514845}. With the notation as before, given
$0<\alpha<2n$ and
$\vec{p}$, suppose that $\frac{1}{2}<p<\frac{n}{\alpha}$. Define $q$ by
$\frac{1}{p}-\frac1q= \frac\alpha n$. If we define the vector
weight $\vec{w}=(w_1,w_2,w)$, where now $w=w_1w_2$, then
$\vec{w}\in A_{\vec{p},q} $ if
\[\left(\Xint-_Q w^{q}\,dx\right)^{\frac{1}{q}}
\left(\Xint-_Q w_1^{-p_1'}\,dx\right)^{\frac{1}{p_1'}}
\left(\Xint-_Q w_2^{-p_2'}\,dx\right)^{\frac{1}{p_2'}} \leq C. \]
The bilinear fractional maximal operator satisfies $M_\alpha :
L^{p_1}(w_1^{p_1}) \times L^{p_2}(w_2^{p_2}) \rightarrow L^q(w^q)$ if and only
if $\vec{w} \in A_{\vec{p},q}$. Similar to the $A_{\vec{p}}$ weights,
if $w_i \in A_{p_i,q_i}$, where $q_i>p_i$, $i=1,2$, and
$\frac{1}{q_1}+\frac{1}{q_2}=\frac{1}{q}$, then $\vec{w} \in
A_{\vec{p},q}$.
For the commutators of the bilinear fractional integral operator,
if $\vec{w}\in A_{\vec{p},q}$, then for $i=1,2$,
$[b,I_\alpha]_i: L^{p_1}(w_1^{p_1})\times L^{p_2}(w_2^{p_2})\to
L^q(w^{q})$ \cite{CW,CX}.
As we did for singular integrals, we conjecture that the
$A_{\vec{p},q}$ condition implies~\eqref{eqn:main1}, which in this
case can be written as
\begin{equation} \label{eqn:bilinear-frac}
\left(\Xint-_Q w^{-q'}\,dx\right)^{\frac{1}{q'}}
\left(\Xint-_Q w_1^{p_1}\,dx\right)^{\frac{1}{p_1}}
\left(\Xint-_Q w_2^{p_2}\,dx\right)^{\frac{1}{p_2}} \leq C.
\end{equation}
(Here we use the fact that
$\frac{1}{p_1}+\frac{1}{p_2}+\frac{1}{q'}=1+\frac{\alpha}{n}$.)
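This identity is immediate from the definitions of $p$ and $q$:
\[ \frac{1}{p_1}+\frac{1}{p_2}+\frac{1}{q'}
= \frac{1}{p}+1-\frac{1}{q}
= \frac{1}{p}+1-\left(\frac{1}{p}-\frac{\alpha}{n}\right)
= 1+\frac{\alpha}{n}. \]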
Arguing as in the case of bilinear singular integrals, we
can prove this if we assume that $w_i \in A_{p_i,q_i}$ and $w^q \in A_q$: i.e., that
\[ \left(\Xint-_Q w^{q}\,dx\right)^{\frac{1}{q}} \left(\Xint-_Q
w^{-q'}\,dx\right)^{\frac{1}{q'}} \leq C. \]
For then, again by
the bilinear reverse H\"older inequality (since $w_i\in
A_{p_i,q_i}$, $w_i^{q_i}\in A_\infty$, and this is sufficient for this
inequality to
hold) and H\"older's
inequality (since $q_i>p_i$),
\[ \left(\Xint-_Q w^{q}\,dx\right)^{\frac{1}{q}}
\gtrsim
\left(\Xint-_Q w_1^{q_1}\,dx\right)^{\frac{1}{q_1}}
\left(\Xint-_Q w_2^{q_2}\,dx\right)^{\frac{1}{q_2}}
\geq
\left(\Xint-_Q w_1^{p_1}\,dx\right)^{\frac{1}{p_1}}
\left(\Xint-_Q w_2^{p_2}\,dx\right)^{\frac{1}{p_2}}. \]
Inequality~\eqref{eqn:bilinear-frac} follows at once.
We can eliminate the assumption that $w^q\in A_q$ if we
restrict the range of $\alpha$ to $n \leq \alpha<2n$.
Suppose $w_i\in A_{p_i,q_i}$, $i=1,2$; then
we have that $\frac{1}{q} = \frac{1}{p}-\frac{\alpha}{n}
\leq \frac{1-p}{p} < 1$ since $p>\frac{1}{2}$. Moreover, we have that
$\frac1{q'}=\frac1{p'_1}+\frac1{
p_2'}+\left(\frac\alpha n-1\right)\geq \frac1{p'_1}+\frac1{ p_2'}$.
Therefore, we can apply H\"older's inequality three times to get that
the left-hand side of \eqref{eqn:bilinear-frac} is bounded by
\[ \left(\Xint-_Q w_1^{-p_1'}\,dx\right)^{\frac{1}{p_1'}}
\left(\Xint-_Q w_2^{-p_2'}\,dx\right)^{\frac{1}{p_2'}}
\left(\Xint-_Q w_1^{q_1}\,dx\right)^{\frac{1}{q_1}}
\left(\Xint-_Q w_2^{q_2}\,dx\right)^{\frac{1}{q_2}}\leq C. \]
\begin{corollary} \label{cor:wtd-frac-bilinear} Given $0<\alpha<2n $,
$\vec{p}$ such that $p<\frac{n}{\alpha}$, and $\vec{w}$ such that
$w_i \in A_{p_i,q_i}$ with $q_i>p_i$, suppose either $w^q\in A_q$ or
$n\leq \alpha<2n$. If $b$ is a function such that for $i=1,2$,
$[b,I_\alpha]_i : L^{p_1}(w_1^{p_1}) \times L^{p_2}(w_2^{p_2})
\rightarrow L^q(w^q)$, then $b\in BMO$.
\end{corollary}
\begin{remark}
In Corollaries~\ref{cor:wtd-sio-bilinear}
and~\ref{cor:wtd-frac-bilinear}, we can interpret the
hypotheses $w\in A_p$ and $w^q\in A_q$ as assuming that the maximal operator is bounded on the target
space $L^p(w)$ or $L^q(w^q)$. This should be compared to the
assumptions in Corollaries~\ref{cor:var-sio-bilinear}
and~\ref{cor:var-frac-bilinear} below.
\end{remark}
\subsection*{Variable Lebesgue spaces}
The variable Lebesgue spaces are a generalization of the classical
$L^p$ spaces.
Given a measurable function ${p(\cdot)} : {\mathbb R}^n \rightarrow
[1,\infty)$, we define $L^{p(\cdot)}$ to be the collection of all measurable
functions $f$ such that $\|f\|_{L^{p(\cdot)}}<\infty$, where
\[ \|f\|_{L^{p(\cdot)}} =\inf\left\{ \lambda > 0 :
\int_{{\mathbb R}^n} \bigg(\frac{|f(x)|}{\lambda}\bigg)^{p(x)}\,dx \leq 1
\right\}. \]
With this norm $L^{p(\cdot)}$ is a Banach function space; the associate space
is $L^{p'(\cdot)}$, where we define ${p'(\cdot)}$ pointwise by
$\frac{1}{p(x)}+\frac{1}{p'(x)}=1$. For brevity, we define
\[ p_- = \essinf_{x\in {\mathbb R}^n} p(x), \quad p_+ = \esssup_{x\in {\mathbb R}^n}
p(x). \]
For complete information on these spaces, see~\cite{cruz-fiorenza-book}.
The boundedness of the maximal operator depends (in a very subtle way)
on the regularity of the exponent function ${p(\cdot)}$.
A sufficient condition for $M : L^{p(\cdot)} \rightarrow L^{p(\cdot)}$ is that $p_- >1$ and ${p(\cdot)}$ satisfies the
log-H\"older continuity conditions locally and at infinity:
\[ \left|\frac{1}{p(x)}-\frac{1}{p(y)}\right| \leq
\frac{C_0}{-\log(|x-y|)}, \quad |x-y|\leq \frac{1}{2}, \]
and there exists a constant $1\leq p_\infty \leq \infty$ such that
\[ \left|\frac{1}{p(x)} - \frac{1}{p_\infty}\right|
\leq \frac{C_\infty}{\log(e+|x|)}. \]
By Proposition~\ref{prop:avg-op}, if $M : L^{p(\cdot)}
\rightarrow L^{p(\cdot)}$, then for every cube~$Q$,
\begin{equation} \label{eqn:K0}
\|\chi_Q\|_{L^{p(\cdot)}}\|\chi_Q\|_{L^{p'(\cdot)}} \leq C|Q|,
\end{equation}
which is \eqref{eqn:linear1}. However, we have a stronger result.
Suppose ${p(\cdot)}$ is such that
the maximal operator is bounded on $L^{p(\cdot)}$. Given a cube $Q$, if we define the
exponents $p_Q$ and $p'_Q$ by
\[ \frac{1}{p_Q} = \Xint-_Q \frac{1}{p(x)}\,dx, \quad
\frac{1}{p'_Q} = \Xint-_Q \frac{1}{p'(x)}\,dx, \]
then
\begin{equation} \label{eqn:strong-K0}
\|\chi_Q\|_{p(\cdot)} \approx |Q|^{\frac{1}{p_Q}} \quad \text{ and } \quad \|\chi_Q\|_{p'(\cdot)}
\approx |Q|^{\frac{1}{p'_Q}},
\end{equation}
and the implicit constants are
independent of $Q$~\cite[Proposition~4.66]{cruz-fiorenza-book}.
Let $T$ be a Calder\'on-Zygmund singular integral operator. If ${p(\cdot)}$ is such that
$1<p_-\leq p_+<\infty$ and the maximal operator is bounded on $L^{p(\cdot)}$,
then for all $b\in BMO$, $[b,T] : L^{p(\cdot)} \rightarrow
L^{p(\cdot)}$~\cite[Corollary~2.10]{MR2210118}.
\begin{corollary}
Let ${p(\cdot)}$ be an exponent function such that $1<p_-\leq p_+<\infty$ and
the maximal operator is bounded on $L^{p(\cdot)}$. If $T$ is a regular
singular integral and $b$ is a function such that $[b,T]$ is bounded
on $L^{p(\cdot)}$, then $b\in BMO$.
\end{corollary}
Given $0<\alpha<n$ and ${p(\cdot)}$ such that $1<p_-\leq p_+ <
\frac{n}{\alpha}$, define ${q(\cdot)}$ pointwise by
$\frac{1}{p(x)}-\frac{1}{q(x)}=\frac{\alpha}{n}$. If there
exists $q_0>\frac{n}{n-\alpha}$ such that the maximal operator is bounded
on $L^{({q(\cdot)}/q_0)'}$, then for all $b\in BMO$, $[b,I_\alpha] : L^{p(\cdot)}
\rightarrow L^{q(\cdot)}$. (This result does not appear explicitly in the
literature, but it is a straightforward application of known results.
For instance, it follows by extrapolation, arguing as in
\cite[Theorem~5.46]{cruz-fiorenza-book} but using the weighted norm
inequalities for commutators
from~\cite[Theorem~1.6]{cruz-uribe-fiorenza03}.)
If the maximal operator is bounded on $L^{({q(\cdot)}/q_0)'}$, then
by~\cite[Corollary~4.64]{cruz-fiorenza-book}, it is also bounded on
$L^{{q(\cdot)}/q_0}$ and so on $L^{q(\cdot)}$ \cite[Theorem~4.37]{cruz-fiorenza-book}. If we let $\theta=\frac{1}{q_0}$, then we can write
\[ \frac{1}{p(x)} = \frac{1}{q(x)}+\frac{\alpha}{n}
= \frac{\theta}{q(x)/q_0} +
\frac{1-\theta}{(1-\theta)\frac{n}{\alpha}}. \]
By our assumption on $q_0$, $r=(1-\theta)\frac{n}{\alpha}>1$, and so
the maximal operator is bounded on $L^r$. Hence,
by interpolation (see~\cite[Theorem~3.38]{cruz-fiorenza-book}) the
maximal operator is bounded on $L^{p(\cdot)}$. Therefore, by
\eqref{eqn:strong-K0},
\[ |Q|^{-\frac{\alpha}{n}}\|\chi_Q\|_{L^{q'(\cdot)}}\|\chi_Q\|_{L^{p(\cdot)}}
\lesssim
|Q|^{-\frac{\alpha}{n}} |Q|^{\frac{1}{q'_Q}}|Q|^{\frac{1}{p_Q}}
\lesssim
|Q|. \]
So again, \eqref{eqn:linear1} holds.
\begin{corollary}
Given $0<\alpha<n$ and ${p(\cdot)}$ such that $1<p_-\leq p_+ <
\frac{n}{\alpha}$, define ${q(\cdot)}$ pointwise by
$\frac{1}{p(x)}-\frac{1}{q(x)}=\frac{\alpha}{n}$. Suppose there
exists $q_0>\frac{n}{n-\alpha}$ such that the maximal operator is bounded
on $L^{({q(\cdot)}/q_0)'}$. If $b$ is such that $[b,I_\alpha] : L^{p(\cdot)}
\rightarrow L^{q(\cdot)}$, then $b\in BMO$.
\end{corollary}
We have similar results for bilinear operators. Suppose $p_1(\cdot)$,
$p_2(\cdot)$ are such that $1<(p_i)_-\leq (p_i)_+ < \infty$, $i=1,2$,
and define ${p(\cdot)}$ pointwise by
$\frac{1}{p(x)} = \frac{1}{p_1(x)}+\frac{1}{p_2(x)}$. Note that
$p_->\frac{1}{2}$. If we further assume that the (linear) maximal
operator is bounded on $L^{p_i(\cdot)}$, $i=1,2$, then given any
bilinear Calder\'on-Zygmund singular integral operator $T$, we have
that
$T : L^{p_1(\cdot)}\times L^{p_2(\cdot)} \rightarrow
L^{p(\cdot)}$~\cite[Corollary~4.1]{CruzUribe:2016wv}. Moreover, given any
$b\in BMO$,
$[b,T]_i : L^{p_1(\cdot)}\times L^{p_2(\cdot)} \rightarrow L^{p(\cdot)}$. This
is not proved explicitly in the literature, but the proof is
the same as for bilinear singular integrals, using bilinear extrapolation
and the weighted inequalities for bilinear commutators
in~\cite{MR2483720}.
With the same assumptions we also
have that $M : L^{p_1(\cdot)}\times L^{p_2(\cdot)} \rightarrow L^{p(\cdot)}$:
by the generalized H\"older's
inequality~\cite[Corollary~2.28]{cruz-fiorenza-book},
\[ \|M(f,g)\|_{p(\cdot)} \leq \|Mf \cdot Mg\|_{p(\cdot)}
\leq \|Mf\|_{p_1(\cdot)}\|Mg\|_{p_2(\cdot)}. \]
Note that we pass from the bilinear to the linear maximal operator.
Also, note that the generalized H\"older's inequality is only proved
in~\cite{cruz-fiorenza-book} assuming $p_-\geq 1$, but for $p_->0$ it
follows by a rescaling argument:
cf.~\cite[Lemma~2.7]{CruzUribe:2014ux}.
If we further assume $p_->1$ and the maximal operator is bounded on
$L^{p(\cdot)}$, then by~\eqref{eqn:strong-K0} we have that
\[ \|\chi_Q\|_{p'(\cdot)} \|\chi_Q\|_{p_1(\cdot)}\|\chi_Q\|_{p_2(\cdot)}
\lesssim
|Q|^{\frac{1}{p'_Q}+\frac{1}{(p_1)_Q}+\frac{1}{(p_2)_Q}} = |Q|. \]
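Note that the exponent on $|Q|$ in the last display equals $1$:
averaging the pointwise identity
$\frac{1}{p(x)}=\frac{1}{p_1(x)}+\frac{1}{p_2(x)}$ over $Q$ gives
$\frac{1}{p_Q}=\frac{1}{(p_1)_Q}+\frac{1}{(p_2)_Q}$, and
\[ \frac{1}{p'_Q} = \Xint-_Q\left(1-\frac{1}{p(x)}\right)dx
= 1-\frac{1}{p_Q}, \]
so $\frac{1}{p'_Q}+\frac{1}{(p_1)_Q}+\frac{1}{(p_2)_Q}=1$.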
\begin{corollary} \label{cor:var-sio-bilinear}
Suppose $p_1(\cdot)$,
$p_2(\cdot)$ are such that $1<(p_i)_-\leq (p_i)_+ < \infty$, $i=1,2$, and
we define ${p(\cdot)}$ pointwise by $\frac{1}{p(x)} =
\frac{1}{p_1(x)}+\frac{1}{p_2(x)}$. Suppose further that $p_->1$ and
the maximal
operator is bounded on $L^{p(\cdot)}$ and $L^{p_i(\cdot)}$, $i=1,2$. If $T$
is a regular bilinear singular integral and $b$ is such that
$[b,T]_i : L^{p_1(\cdot)}\times L^{p_2(\cdot)}
\rightarrow L^{p(\cdot)}$, then $b\in BMO$.
\end{corollary}
To prove the analogous result for the bilinear fractional integral
operator, fix $0<\alpha<2n$. Suppose $p_1(\cdot)$, $p_2(\cdot)$ are
such that $1<(p_i)_-\leq (p_i)_+ < \infty$, $i=1,2$, and again define
${p(\cdot)}$ pointwise by
$\frac{1}{p(x)} = \frac{1}{p_1(x)}+\frac{1}{p_2(x)}$. Define ${q(\cdot)}$ by
$\frac{1}{p(x)}-\frac{1}{q(x)}=\frac{\alpha}{n}$. Fix
$0<\alpha_1,\alpha_2<n$ such that $\alpha_1+\alpha_2=\alpha$, and define
$q_i(\cdot)$, $i=1,2$, by
$\frac{1}{p_i(x)}-\frac{1}{q_i(x)}=\frac{\alpha_i}{n}$. If we assume
that the maximal operator is bounded on $L^{q(\cdot)}$, and that there exist
$q_i>\frac{n}{n-\alpha_i}$, $i=1,2$, such that the maximal operator is
bounded on $L^{(q_i(\cdot)/q_i)'}$, then given any $b\in BMO$,
$[b,I_\alpha]_i : L^{p_1(\cdot)}\times L^{p_2(\cdot)} \rightarrow
L^{q(\cdot)}$. As in the linear case, this result has not been explicitly
proved in the literature, but follows from known results. For all
$w\in A_\infty$ and any $0<p<\infty$,
$\|I_\alpha(f,g)\|_{L^p(w)}\lesssim
\|M_\alpha(f,g)\|_{L^p(w)}$~\cite[Theorem~3.1]{MR2514845}. Since the
maximal operator is bounded on $L^{q(\cdot)}$, by extrapolation,
$\|I_\alpha(f,g)\|_{q(\cdot)} \lesssim
\|M_\alpha(f,g)\|_{q(\cdot)}$~\cite[Theorem~5.24]{cruz-fiorenza-book}. By
the generalized H\"older's inequality,
\[ \|M_\alpha(f,g)\|_{q(\cdot)}
\leq \|M_{\alpha_1}f \cdot M_{\alpha_2}g\|_{q(\cdot)}
\leq \|M_{\alpha_1}f\|_{q_1(\cdot)} \|M_{\alpha_2}g\|_{q_2(\cdot)}
\lesssim \|f\|_{p_1(\cdot)} \|g\|_{p_2(\cdot)}; \]
the last inequality follows from our assumptions on $q_i(\cdot)$
and~\cite[Remark~5.51]{cruz-fiorenza-book}.
In this case~\eqref{eqn:main1} becomes
\[ |Q|^{-\frac{\alpha}{n}}\|\chi_Q\|_{q'(\cdot)} \|\chi_Q\|_{p_1(\cdot)}
\|\chi_Q\|_{p_2(\cdot)} \leq C|Q|. \]
If we make the same assumptions on the exponents as used to prove the
inequalities for the commutators, then arguing as we did above
using~\eqref{eqn:strong-K0}, we get this inequality.
\begin{corollary} \label{cor:var-frac-bilinear}
Fix $0<\alpha<2n$. Suppose $p_1(\cdot)$,
$p_2(\cdot)$ are such that $1<(p_i)_-\leq (p_i)_+ < \infty$, $i=1,2$, and
again define ${p(\cdot)}$ pointwise by $\frac{1}{p(x)} =
\frac{1}{p_1(x)}+\frac{1}{p_2(x)}$. Define ${q(\cdot)}$ by
$\frac{1}{p(x)}-\frac{1}{q(x)}=\frac{\alpha}{n}$. Fix
$0<\alpha_1,\alpha_2<n$ such that $\alpha_1+\alpha_2=\alpha$, and define
$q_i(\cdot)$, $i=1,2$, by
$\frac{1}{p_i(x)}-\frac{1}{q_i(x)}=\frac{\alpha_i}{n}$. Suppose
that the maximal operator is bounded on $L^{q(\cdot)}$, and that there exist
$q_i>\frac{n}{n-\alpha_i}$, $i=1,2$, such that the maximal operator is
bounded on $L^{(q_i(\cdot)/q_i)'}$. Given any $b$, if $[b,I_\alpha]_i : L^{p_1(\cdot)}\times L^{p_2(\cdot)}
\rightarrow L^{q(\cdot)}$, then $b\in BMO$.
\end{corollary}
\end{corollary}
\begin{remark}
Sufficient conditions for the boundedness of commutators of singular and fractional
integrals on Orlicz spaces are known or can be readily proved using
extrapolation:
see~\cite{curbera-garcia-cuerva-martell-perez06,kokilashvili-krbec91}.
Similarly, such results can be proved in the weighted variable Lebesgue spaces and
generalized Orlicz spaces (also known as Nakano spaces or
Musielak-Orlicz spaces) using extrapolation:
see~\cite{DCU-PH,CruzUribe:2015km} for definitions and the corresponding extrapolation results. Results
similar to those for the variable Lebesgue spaces above can be
deduced in these settings---we leave the precise statements and
proofs to the interested reader.
\end{remark}
\section{Proof of Theorem~\ref{thm:main}}
\label{section:proof}
In this section we prove Theorem~\ref{thm:main}. As we noted in the
Introduction, the proof of Theorem~\ref{thm:linear} is obtained by a
simple adaptation of this proof. Our proof closely follows the argument due
to the first author in~\cite{LC}, which in turn followed the
techniques of Janson in \cite{SJ}. Here we will concentrate on the
parts of the proof which change because we are working in the setting
of Banach function spaces, and we refer the reader to~\cite{LC} for
further details.
\begin{proof}
We will assume without loss of generality that $[b,T]_1$ is bounded;
the proof for the other commutator is identical.
Let $B=B((y_0,z_0),\delta\sqrt{2n})\subset{\mathbb R}^{2n}$ be a ball upon
which $\frac1K$ has an absolutely convergent Fourier series. By the
homogeneity of $K$, we may assume without loss of generality that
$2\sqrt n<|(y_0,z_0)|<4\sqrt n$ and $\delta<1$. These conditions
guarantee that $\overline B \cap \{0\}=\emptyset$, avoiding any
potential singularity of $K$; this will be important below, as it
will let us use the integral representation of $[b,T]_1$.
Write the Fourier series of $\frac{1}{K}$ as
\[\frac1{K(y,z)}=\sum_ja_je^{i\nu_j\cdot(y,z)}=\sum_ja_je^{i(\nu_j^1,\nu_j^2)\cdot(y,z)};\]
note that the individual vectors
$\nu_j=(\nu_j^1,\nu_j^2)\in{\mathbb R}^n\times{\mathbb R}^n$ do not play any
significant role in the proof, and we introduce them simply to be precise.
Let $y_1=\delta^{-1}y_0$, $z_1=\delta^{-1}z_0$; then by homogeneity we
have that for all $(y,z)\in B((y_1,z_1),\sqrt{2n})$,
\[ \frac1{K(y,z)}=\frac{\delta^{-2n+\alpha}}{K(\delta y,\delta z)}=\delta^{-2n+\alpha}\sum_ja_je^{i\delta\nu_j\cdot(y,z)}.\]
Let $Q=Q(x_0,r)$ be an arbitrary cube in ${\mathbb R}^n$, and set $\tilde
y=x_0+ry_1,\ \tilde z=x_0+rz_1$. Define $Q'=Q(\tilde y,r)$ and
$Q''=Q(\tilde z,r)$.
It follows from the size conditions on $y_0$ and $z_0$ that $Q$ and
either $Q'$ or $Q''$ are disjoint. To see that this is the case, note that the
minimum size condition on $(y_0,z_0)$ implies that
$\max\{|y_0|,|z_0|\}\geq\sqrt{2n}$; without loss of generality,
suppose that it is $|y_0|$. This in turn implies that the distance
between $x_0$ and $\tilde y$ is greater than $r\sqrt{2n}$. Since $Q$
and $Q'$ each have side-length $r$, the distance of their centers from
one another guarantees that they must be disjoint. If $|z_0|$ is
larger, then we get the same conclusion for $Q''$.
As a consequence, $Q\cap Q'\cap Q''=\emptyset$, which allows us to use
the kernel representation of $[b,T]_1$ for
$(x,y,z)\in Q\times Q'\times Q''$. Additionally, for
$(x,y,z)\in Q\times Q'\times Q''$,
$\left(\frac{x-y}r,\frac{x-z}r\right)\in B((y_1,z_1),\sqrt{2n})$,
which in turn means that $(x-y,x-z)$ is bounded away from the
singularity of $K$, and so the integral representation of $[b,T]_1$ can
be used freely. Further details of these calculations can be found in
\cite{LC}.
It also follows from the maximum size
condition on $(y_0,z_0)$ that
\begin{equation} \label{eqn:size-cubes}
Q',\; Q''\subset B\left(x_0,\frac{r\sqrt n}{2}\left(1+\frac8\delta\right)\right)\subset\sqrt
n\left(1+\frac8\delta\right)Q.
\end{equation}
To see this, note that the maximum size of $y_0$ or $z_0$ is $4\sqrt
n$, which implies that the maximum distance from $x_0$ to $\tilde y$ or
$\tilde z$ is $\frac{4r\sqrt n}{\delta}$. Both containments follow
from this.
We can now estimate as follows. Fix $Q$ and let
$\sigma(x)=\sgn(b(x)-b_{Q'})$; then
\begin{align*}
\int_Q|b(x)&-b_{Q'}|\,dx\\
&=\int_Q(b(x)-b_{Q'})\sigma(x)\,dx\\
&=\frac1{|Q''|}\frac1{|Q'|}\int_Q\int_{Q'}\int_{Q''}(b(x)-b(y))\sigma(x)\,dzdydx\\
&=r^{-2n}\int_{{\mathbb R}^n}\int_{{\mathbb R}^n}\int_{{\mathbb R}^n}(b(x)-b(y))\frac{r^{2n-\alpha}K(x-y,x-z)}{K\left(\frac{x-y}r,\frac{x-z}r\right)}\\
&\quad\quad\times\sigma(x)\chi_{Q}(x)\chi_{Q'}(y)\chi_{Q''}(z)\,dzdydx\\
&=\delta^{-2n+\alpha}r^{-\alpha}\int_{{\mathbb R}^n}\int_{{\mathbb R}^n}\int_{{\mathbb R}^n}
(b(x)-b(y))K(x-y,x-z)\sum_ja_je^{i\frac\delta r\nu_j\cdot(x-y,x-z)}\\
&\quad\quad\times\sigma(x)\chi_{Q}(x)\chi_{Q'}(y)\chi_{Q''}(z)\,dzdydx.
\end{align*}
Define the functions
\begin{align*}
f_j(y)&=e^{-i\frac\delta r\nu^1_j\cdot y}\chi_{Q'}(y)\\
g_j(z)&=e^{-i\frac\delta r\nu^2_j\cdot z}\chi_{Q''}(z)\\
h_j(x)&=e^{i\frac\delta r\nu_j\cdot(x,x)}\sigma(x)\chi_{Q}(x).
\end{align*}
Note that the norm of each of these functions in any Banach function
space is the same as the norm of the characteristic function of its
support. We can now continue the above estimate:
\begin{align*}
\int_Q|b(x)-&b_{Q'}|~dx\\
&=\delta^{-2n+\alpha}r^{-\alpha}\sum_ja_j
\int_{{\mathbb R}^n} h_j(x)\int_{{\mathbb R}^n}\int_{{\mathbb R}^n} (b(x)-b(y))\\
&\quad\quad\times K(x-y,x-z)f_j(y)g_j(z)~dzdydx\\
&=\delta^{-2n+\alpha}|Q|^{-\frac{\alpha}{n}}\sum_ja_j
\int_{{\mathbb R}^n} h_j(x)[b,T]_1(f_j,g_j)(x)~dx\\
&\leq\delta^{-2n+\alpha}|Q|^{-\frac{\alpha}{n}}\sum_j|a_j|
\int_{{\mathbb R}^n} |h_j(x)||[b,T]_1(f_j,g_j)(x)|~dx\\
&\leq\delta^{-2n+\alpha}|Q|^{-\frac{\alpha}{n}}\sum_j|a_j|\|h_j\|_{Y'}\|[b,T]_1(f_j,g_j)\|_{Y}\\
&\leq\delta^{-2n+\alpha}|Q|^{-\frac{\alpha}{n}}\sum_j|a_j|\|h_j\|_{Y'}\|[b,T]_1\|_{X_1\times X_2\to Y}\|f_j\|_{X_1}\|g_j\|_{X_2}\\
&=\delta^{-2n+\alpha}\|[b,T]_1\|_{X_1\times X_2\to Y}\sum_j|a_j|\|\chi_Q\|_{Y'}\|\chi_{Q'}\|_{X_1}\|\chi_{Q''}\|_{X_2}|Q|^{-\frac{\alpha}{n}}.
\end{align*}
Let $P=2\sqrt n(1+\frac8\delta)Q$. By
inequality~\eqref{eqn:size-cubes} we have that $Q,\,Q',\,Q''\subset
P$, and $|P|\approx |Q|$. Therefore, by~\eqref{eqn:main1},
\begin{align*}
\int_Q|b(x)-&b_{Q'}|~dx\\
&\lesssim\|[b,T]_1\|_{X_1\times X_2\to Y}\sum_j|a_j|\|\chi_P\|_{Y'}\|\chi_{P}\|_{X_1}\|\chi_{P}\|_{X_2}|P|^{-\frac{\alpha}{n}}\\
&\lesssim|P|\|[b,T]_1\|_{X_1\times X_2\to Y}\sum_j|a_j|\\
&\lesssim |Q|.
\end{align*}
Since this is true for every cube $Q$, $b\in BMO$ and the proof is complete.
\end{proof}
\end{document} |
\begin{document}
\title{Lipschitz inverse shadowing for nonsingular flows}
\begin{abstract}
We prove that Lipschitz inverse shadowing for
nonsingular flows is equivalent to structural stability.
\end{abstract}
\begin{keywords}
structural stability, shadowing, nonsingular flows
\end{keywords}
\begin{classcode}
MSC 2010: 37C50, 34D30
\end{classcode}
\section{Introduction}
The notion of inverse shadowing was introduced by Pilyugin and Corless in
\cite{PIL__APPROX_AND_REAL_TRAJ_FOR_GEN_DYN_SYS} and by Kloeden and Ombach in
\cite{KLOEDEN__HYP_HOME_AND_BISHAD}.
They defined this notion for diffeomorphisms. Inverse shadowing for flows
was first introduced in \cite{LEE__INV_SHAD_FOR_EXP_FLOWS}.
It is known that both Lipschitz shadowing and Lipschitz inverse shadowing properties
for diffeomorphisms are equivalent to structural stability
\cite{PILSDS, PILMELISP, PIL__LIP_SH_AND_SS_FOR_FLOWS}.
In \cite{LEE__INV_SHAD_FOR_SS_FLOWS} the authors proved that
structural stability for flows implies inverse shadowing.
In fact, they proved that structural stability implies Lipschitz inverse shadowing
although they did not use this term in their paper.
We prove that Lipschitz inverse shadowing for flows without rest points implies
structural stability.
\section{Definitions} \label{sec:defs}
Let $X$ be a $C^{1}$ vector field
on a Riemannian manifold
$ M $ with metric $\dist$
and let $\Phi$ be a flow generated by $X.$
We will only consider cases when $ M $ is closed and when $ M = \Rb^n $.
Let $d$ be a (small) positive number.
\begin{definition}
We say that a mapping
$\Psi:\mathbb R \times M \to M $ is a $d$-method for the flow $\Phi$ if for any $t\in\mathbb R$ and $x\in M$,
\begin{equation}
\dist\enbrace{\Psi(t+s,x),\Phi(s,\Psi(t,x)) } < d
,\quad s\in [-1,1] ,
\label{neq:dmetdef}
\end{equation}
and $\Psi(0,x) = x$ for any $x\in M .$
\end{definition}
We introduce several classes of $d$-methods, following \cite{LEE__INV_SHAD_FOR_SS_FLOWS}.
Let $\Psi$ be a $d$-method. Denote by $( M )^\mathbb R$ the set of all functions
from $\mathbb R$ to $ M .$ Consider a mapping
$$\PsiMap: M \to ( M )^\mathbb R,$$
defined as
$$\enbrace{\PsiMap(x)}(t)=\Psi(t,x),\quad x\in M ,\ t\in\mathbb R.$$
\begin{itemize}
\item We say that the $d$-method $\Psi$ belongs to the class $\Psiclass{p},$ if
the mapping $\PsiMap$ is continuous in the pointwise-convergence topology on $( M )^\mathbb R.$
\item We say that the $d$-method $\Psi$ belongs to the class $\Psiclass{o},$ if
the mapping $\PsiMap$ is continuous in the compact-open topology on $( M )^\mathbb R.$
\item We say that the $d$-method $\Psi$ belongs to the class $\Psiclass{c},$ if
it is continuous as a mapping of the form $\mathbb R\times M \to M .$
\item We say that the $d$-method $\Psi$ belongs to the class $\Psiclass{h},$ if
it is a flow of some $C^1$ vector field $Y$ that satisfies $$d_0(X,Y)\leq d.$$
Here $d_0$ is the $C^0$ metric on the space of $C^1$ vector fields on $ M .$
\item We say that the $d$-method $\Psi$ belongs to the class $\Psiclass{s},$ if
it is smooth as a mapping of the form $\mathbb R\times M \to M .$
\end{itemize}
\begin{remark}
Our definition of a $d$-method is a definition of a family of
pseudotrajectories (see the definition of a $d$-pseudotrajectory of
a flow in \cite{PILSDS}). If a method belongs to one of the classes defined above,
then the dependence of a pseudotrajectory on a point has some continuity
(smoothness) properties.
\end{remark}
Define $\mathrm{Rep}$ as the set of all increasing homeomorphisms $\alpha:\mathbb R\to\mathbb R$
with $\alpha(0)=0.$ We also introduce the following notation:
\begin{equation*}
\mathrm{Rep}_{\delta} = \setdef{\alpha\in\mathrm{Rep}}{\vmod{\frac{\alpha(t)-
\alpha(s)}{t-s} - 1 }\leq \delta,\quad t\neq s}.
\end{equation*}
\begin{definition}
Let
\Psiclassletter be one of the classes of methods defined above.
We say that a flow $\Phi$ has the Lipschitz inverse shadowing property
with respect to the class \Psiclassletter (we write $\Phi\in \LISP$ in this case) if
for any point $p\in M $ there exist constants $L,d_0>0$ such that
for any $d$-method $\Psi$ of the class \Psiclassletter with $d\leq d_0$ there exist
a point $\hat{p}\in M $ and a reparametrisation $\alpha\in\mathrm{Rep}_{Ld}$ such that
\begin{equation}
\dist(\Phi(t,p),\Psi(\alpha(t),\hat{p})) < Ld,\quad t\in \mathbb R.
\end{equation}
\end{definition}
\begin{remark} \label{rem:invRep}
It is easy to see that if $\delta$ is small enough and
$\alpha\in\mathrm{Rep}_{\delta}$, then $\alpha^{-1}\in\mathrm{Rep}_{2\delta}.$
\end{remark}
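For completeness, we sketch the verification: if
$\vmod{\frac{\alpha(t)-\alpha(s)}{t-s}-1}\leq\delta$ for all $t\neq s$,
then setting $t=\alpha^{-1}(u)$, $s=\alpha^{-1}(v)$ for $u\neq v$ gives
\begin{equation*}
\frac{\alpha^{-1}(u)-\alpha^{-1}(v)}{u-v}
= \left(\frac{\alpha(t)-\alpha(s)}{t-s}\right)^{-1}
\in\left[\frac{1}{1+\delta},\frac{1}{1-\delta}\right],
\end{equation*}
and for $\delta\leq\frac12$ both endpoints are within
$\frac{\delta}{1-\delta}\leq 2\delta$ of $1.$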
\begin{remark}
Obviously the inclusion $\Phi\in \LISP$ with respect to the class $\Psiclass{c}$
implies that $\Phi \in \LISP$ with respect to the classes $\Psiclass{h}$ and $\Psiclass{s}.$
\end{remark}
\begin{remark}
It is proved in \cite{LEE__INV_SHAD_FOR_SS_FLOWS} that the inverse shadowing properties
with respect to the classes of methods $\Psiclass{c},\ \Psiclass{o}$ and
$\Psiclass{p}$ are equivalent. We do not discuss the class $\Psiclass{h}.$
In what follows we write simply about Lipschitz inverse shadowing
without mentioning a class of methods, always meaning the class $\Psiclass{s}$.
\end{remark}
\section{Main Results}
The main result of the work is the following theorem:
\begin{theorem} \label{thm:mainth}
Let $ M $ be a closed manifold and let the flow $\Phi$ have no rest points.
Then $\Phi$ is structurally stable iff $\Phi\in \LISP.$
\end{theorem}
\section{The idea of the proof}
We will use the same idea that the authors of \cite{PIL__LIP_SH_AND_SS_FOR_FLOWS} used.
Fix a point $p\in M .$ Denote $f=\Phi(1,\cdot),$
$p_k=f^k(p),\ k\in\mathbb Z,$ and $A_k = Df(p_k),\ k\in\mathbb Z.$
Let $P_k:\tgtM{k}\to \tgtM{k}$ be the orthogonal projections with kernels
$\linhull{X(p_k)}$
and let $V_k$ be the orthogonal complement to $X(p_k).$ Denote $B_k = P_{k+1}A_k : V_k\to V_{k+1}.$
Consider the following system of difference equations
\begin{equation}\label{eqs:tangenteqsnf}
v_{k+1} = B_k v_k + b_{k+1},\ k\in\mathbb Z.
\end{equation}
Let $\mathcal{CR}$ be the chain-recurrent set of a flow $\Phi$.
The following is proved in \cite{PIL__LIP_SH_AND_SS_FOR_FLOWS}:
\begin{theorem}
Let $ M $ be a closed manifold.
Suppose that
there exists a constant $L_1$ such that
for any point $p$ and any bounded sequence $\seqz{b}{k}$ with entries from
the corresponding
$V_k,$ equations \eqref{eqs:tangenteqsnf} have a solution $\seqz{v}{k}$
with the norm bounded by $L_1\normban{b}.$
Then
\begin{itemize}
\item the set $\mathcal{CR}$ is hyperbolic;
\item the strong transversality condition is fulfilled.
\end{itemize}
\end{theorem}
It is known (see, e.g., \cite{SELGRADE__HYP_AND_CHAIN_RECUR})
that the hyperbolicity of the chain-recurrent set implies
Axiom $A'.$ In turn, Axiom $A'$ and the strong transversality condition imply
structural stability.
So the \enkav{only if} part of Theorem \ref{thm:mainth} will be proved if
we prove that the conditions of the previous theorem are satisfied.
The \enkav{if} part is proved in \cite{LEE__INV_SHAD_FOR_SS_FLOWS}.
The reason we consider only vector fields without singularities is
that, unlike in \cite{PIL__LIP_SH_AND_SS_FOR_FLOWS}, we cannot prove
that singularities are isolated in $\mathcal{CR}.$ If we knew that they were,
then we could easily show that they are hyperbolic and prove that
Theorem \ref{thm:mainth} holds.
\begin{definition}
We say that a flow $\Phi$
satisfies condition (UB) if
\begin{enumerate}
\item
the norms of the derivatives of the flow $\Phi(s,x)$ with respect to initial data
for $s\in[-1,1]$ are bounded, i.e., there exists
$$ Q_1 = \max_{x\in M,\ s\in [-1,1]} \normban{\diff{\Phi(s,x)}{x}} .$$
\item
The lengths of the vectors of the vector field $X$ are bounded, i.e., there exists
$$Q_2 = \max_{x\in \Rb^n } \vmod{X(x)}.$$
\item
The remainder in the Taylor expansion of the flow $\Phi$ is uniformly bounded.
Let the manifold $M$ be covered by a countable number of
open balls $V_i$ of the same radius, each admitting a coordinate chart.
There exists a monotone function
$ g_1 :[0,\infty)\to[0,\infty)$ with the following property.
If for $x\in M,\ t\in \mathbb R$ we fix a coordinate chart of
the ball $V_i$
containing the point $\Phi(t,x)$ and denote the
representation of the flow $\Phi$ in this chart by the same letter $\Phi$,
then for any $h_1\in[-1,1]$, $h_2\in T_x M$, $\vmod{h_2}<1$,
the following estimate is fulfilled
$$ \vmod{\Phi(t+h_1,x+h_2) - \Phi(t,x) -h_1 X(\Phi(t,x) ) -
\diff{\Phi(t,x)}{x}h_2 } \leq g_1 (\vmod{h_1}+\vmod{h_2}),$$
where $ g_1 (\vmod{h_1}+\vmod{h_2})/(\vmod{h_1}+\vmod{h_2})$
tends to $0$ uniformly in $x$ as $(h_1,h_2)\to 0.$
Here we assume that all the points $\Phi(t+h_1,x+h_2)$
belong to $V_i$; otherwise we can reparametrize the flow globally.
\end{enumerate}
\end{definition}
A flow generated by a $C^{1}$ vector field on a closed manifold
satisfies this condition.
First we prove the solvability of equations that differ from
equations \eqref{eqs:tangenteqsnf}:
\begin{statement} \label{st:solveqs}
Let the flow have the Lipschitz inverse shadowing property and satisfy condition (UB).
Then there exists a constant $L_1$ such that
for any point $p\in M $ and any inhomogeneity $\seqz{z}{k}$
that satisfies $\vmod{z_k}\leq 1,\ k\in\mathbb Z,$ and for any natural number $N$
there exists a sequence of real numbers
$\seq{s_k}_{k\in[-N,N-1]s}$ such that the system of equations
\begin{equation}\label{eqs:tangenteqsx}
x_{k+1} = A_k x_k + X(p_{k+1}) s_k + z_{k+1},\quad k\in[-N,N-1]
\end{equation}
has a solution $\seq{x^{(N)}_k}_{k\in[-N,N-1]s}$ such that
$\vmod{x^{(N)}_k}\leq L_1,\ k\in[-N,N-1]s.$
\end{statement}
The proof of this statement is the main difficulty and
is given later.
Now we show how the solvability of equations \eqref{eqs:tangenteqsx}
implies the solvability of equations \eqref{eqs:tangenteqsnf}.
The proof of the next corollary is similar to the proof of Lemma 2 from
\cite{PIL__LIP_SH_AND_SS_FOR_FLOWS}.
\begin{corollary} \label{thm:solveqsnf}
For any sequence $\seqz{b}{k}$ that satisfies $\vmod{b_k} \leq 1$ and $b_k \in V_k$
for all integers $k$, there exists a solution
$\seqz{v}{k}$ of system of equations \eqref{eqs:tangenteqsnf} such that
$\vmod{v_k}\leq L_1.$
\end{corollary}
\begin{proof}
Take $z_k=b_k$ in equations \eqref{eqs:tangenteqsx}.
Statement \ref{st:solveqs} guarantees that there exists a constant
$L_1$ such that for any integer $N$ there exists a sequence
$\seq{s_k}_{k\in[-N,N-1]s}$ such that system of equations \eqref{eqs:tangenteqsx}
has a solution $\seq{x_k}_{k\in[-N,N-1]s}$ with the norm bounded by $L_1.$
Fix $k\in[-N,N-1]s.$
Note that $A_k X(p_k) = X(p_{k+1}).$ The definition of the projections $P_k$
implies the inclusion $(\Id-P_k)v\in \linhull{X(p_k)}$ for any $v\in \tgtM{k},$
and $P_{k+1}X(p_{k+1}) = 0.$
Thus $P_{k+1}A_k(\Id-P_k) = 0,$ and we have the equality
$$ P_{k+1}A_k = P_{k+1}A_k P_k.$$
Multiply equalities \eqref{eqs:tangenteqsx} by $P_{k+1}:$
\begin{gather*}
P_{k+1} x_{k+1} = P_{k+1} A_k x_k + P_{k+1} X(p_{k+1}) s_k + P_{k+1} z_{k+1} = \\
= P_{k+1} A_k P_k x_k + b_{k+1},\quad k\in[-N,N-1].
\end{gather*}
Thus the sequence $v^{(N)}_k = P_k x^{(N)}_k$ is a solution of equations
\eqref{eqs:tangenteqsnf} for a finite range of indices. Now we can
pass to the limit as $N\to\infty.$ Due to the boundedness, a subsequence of $v^{(N)}_k$
converges (by a diagonal argument) to a limit $v_k$ whose norm is also bounded by $L_1.$
\end{proof}
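The projection identity used in the proof can be checked numerically on random data. A minimal sketch follows; the matrix $A$, the vectors, and the projections below are hypothetical stand-ins for $A_k$, $X(p_k)$, $X(p_{k+1})$ and $P_k$, and an orthogonal complement is chosen for $V_k$ for simplicity:

```python
import numpy as np

# Random-matrix check (hypothetical data) of the identity
# P_{k+1} A_k = P_{k+1} A_k P_k:  P_k is a projection whose kernel is
# span{X(p_k)}, so (I - P_k)v lies in span{X(p_k)}, and A_k maps
# X(p_k) to X(p_{k+1}).
rng = np.random.default_rng(0)
n = 5
Xk = rng.standard_normal(n)
A = rng.standard_normal((n, n))
Xk1 = A @ Xk                        # A_k X(p_k) = X(p_{k+1})

def proj_along(X):
    # projection with kernel span{X}; here V is taken orthogonal to X
    X = X / np.linalg.norm(X)
    return np.eye(len(X)) - np.outer(X, X)

Pk, Pk1 = proj_along(Xk), proj_along(Xk1)
lhs, rhs = Pk1 @ A, Pk1 @ A @ Pk
err = float(np.max(np.abs(lhs - rhs)))
print(err)  # zero up to rounding
```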
\input{InscribingR.tex}
\input{Inscribing.tex}
\end{document} |
\begin{document}
\begin{abstract}
In the present paper, we will discuss the following non-degenerate Hamiltonian system
\begin{equation*}
H(\theta,t,I)=\frac{H_0(I)}{\varepsilon^{a}}+\frac{P(\theta,t,I)}{\varepsilon^{b}},
\end{equation*}
where $(\theta,t,I)\in\mathbf{{T}}^{d+1}\times[1,2]^d$ ($\mathbf{{T}}:=\mathbf{{R}}/{2\pi \mathbf{Z}}$), $a,b$ are given positive constants with $a>b$, $H_0: [1,2]^d\rightarrow \mathbf R$ is real analytic and $P: \mathbf T^{d+1}\times [1,2]^d\rightarrow \mathbf R$ is $C^{\ell}$ with $\ell=\frac{2(d+1)(5a-b+2ad)}{a-b}+\mu$, $0<\mu\ll1$. We prove that if $\varepsilon$ is sufficiently small, there is an invariant torus with a given Diophantine frequency vector for the above Hamiltonian system. As an application, we prove that a finite network of Duffing oscillators with periodic exterior forces possesses Lagrangian stability for almost all initial data.
\end{abstract}
\maketitle
\section{ Introduction and main results}\label{sec1}
Consider the harmonic oscillator (linear spring)
\begin{equation}\label{fc1-1}
\ddot{x}+k^2x=0.
\end{equation}
It is well-known that any solution of this equation is periodic, and hence bounded for $t\in \mathbf{R}$; that is, this equation is Lagrange stable. However, there is an unbounded solution to the equation
\begin{equation}\label{fc1-2}
\ddot{x}+k^2 x=p(t),
\end{equation}
where the frequency of $p$ is equal to the frequency $k$ of the spring itself. Now let us consider the nonlinear equation
\begin{equation}\label{fc1-3}
\ddot{x}+x^3=0.
\end{equation}
This equation is Lagrange stable, too. An interesting question is whether
\begin{equation}\label{fc1-4}
\ddot{x}+x^3=p(t)
\end{equation}
is Lagrange stable when $p(t)$ is periodic.
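Numerical experiments suggest an affirmative answer. The following minimal sketch (with the hypothetical choice $p(t)=\sin t$ and zero initial data, not a forcing from the literature) integrates the equation with a Runge-Kutta scheme and observes a bounded amplitude over a long time window:

```python
import numpy as np

def duffing_rhs(t, y):
    # y = (x, xdot); the forced Duffing equation x'' + x^3 = sin(t)
    x, v = y
    return np.array([v, -x**3 + np.sin(t)])

def rk4_max(f, y0, t1, n):
    # classical fourth-order Runge-Kutta; returns max |x| along the orbit
    h, t, y, amp = t1 / n, 0.0, np.array(y0, dtype=float), 0.0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h/2, y + h/2*k1)
        k3 = f(t + h/2, y + h/2*k2)
        k4 = f(t + h, y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4)
        t += h
        amp = max(amp, abs(y[0]))
    return amp

max_amp = rk4_max(duffing_rhs, (0.0, 0.0), 100.0, 20000)
print(max_amp)  # stays of order one: no resonant growth
```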
Moser \cite{a1, a2} proposed to study the boundedness of all solutions for Duffing equation
\begin{equation}\label{fc1-5}
\ddot{x}+\alpha x^3+\beta x=p(t),
\end{equation}
where $\alpha>0, \beta\in \mathbf{R}$ are constants, $p(t)$ is a $1$-periodic continuous function.
The first boundedness result, prompted by questions of Littlewood \cite{a3}, is due to Morris \cite{a4}, who in 1976 showed that all solutions of equation (\ref{fcb1-5}) below are bounded for all time:
\begin{equation}\label{fcb1-5}
\ddot{x}+2x^3=p(t),
\end{equation}
where $p(t)$ is a $2\pi$-periodic continuous function. Subsequently, Morris's boundedness result was extended by Dieckerhoff-Zehnder \cite{a5} in 1987 to the wider class of systems
\begin{equation}\label{fcb1-6}
\ddot{x}+x^{2n+1}+\sum_{i=0}^{2n} x^{i}p_{i}(t)=0,\quad n\geq 1,
\end{equation}
where $p_i(t)\in C^{\infty}$ $(i=0,1,\cdots,2n)$ are $1$-periodic functions.
For some other extensions to study the boundedness, one may see papers \cite{a6, a7, a8, a9, a10, a11, a12, a13, a14, a15}.
Networks of coupled Duffing oscillators of various forms arise in many research fields such as physics, mechanics and mathematical biology.
For example, the evolution equations for
the voltage variables $V_1$ and $V_2$ obtained using the Kirchhoff's
voltage law are
\begin{equation}\label{fc1-6}
\begin{cases}
R^2 C^2 \frac{d^2 V_1}{d t^2}=-(\frac{R^2 C}{R_1})\frac{d V_1}{dt}-(\frac{R}{R_2})V_1-(\frac{R}{100 R_3}) V_1^3+(\frac{R}{R_C})V_2+f \sin \omega t,\\
R^2 C^2 \frac{d^2 V_2}{d t^2}=-(\frac{R^2 C}{R_1})\frac{d V_2}{dt}-(\frac{R}{R_2})V_2-(\frac{R}{100 R_3}) V_2^3+(\frac{R}{R_C})V_1 , \end{cases}
\end{equation}
where the $R$'s and $C$'s are resistances and capacitances, respectively. This system can be regarded as two coupled Duffing oscillators. See \cite{a17, a18, a19, a20, a21, a22, a23, a24} for more details.
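A hedged numerical sketch of such a coupled pair is given below; the nondimensional coefficients are illustrative choices, not derived from concrete $R$, $C$ values, and the damped, driven trajectories are observed to stay bounded:

```python
import numpy as np

# Hypothetical nondimensionalised form of the two-oscillator circuit:
#   x1'' = -delta*x1' - beta*x1 - alpha*x1**3 + kappa*x2 + f*sin(w*t)
#   x2'' = -delta*x2' - beta*x2 - alpha*x2**3 + kappa*x1
delta, beta_, alpha_, kappa, f_, w = 0.3, 1.0, 1.0, 0.2, 0.5, 1.0

def rhs(t, y):
    x1, v1, x2, v2 = y
    return np.array([v1,
                     -delta*v1 - beta_*x1 - alpha_*x1**3 + kappa*x2 + f_*np.sin(w*t),
                     v2,
                     -delta*v2 - beta_*x2 - alpha_*x2**3 + kappa*x1])

def rk4_max(f, y0, t1, n):
    # fourth-order Runge-Kutta; returns the largest |x_i| along the orbit
    h, t, y, amp = t1 / n, 0.0, np.array(y0, dtype=float), 0.0
    for _ in range(n):
        k1 = f(t, y); k2 = f(t + h/2, y + h/2*k1)
        k3 = f(t + h/2, y + h/2*k2); k4 = f(t + h, y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4); t += h
        amp = max(amp, abs(y[0]), abs(y[2]))
    return amp

amp = rk4_max(rhs, (0.1, 0.0, 0.0, 0.0), 200.0, 20000)
print(amp)  # trajectories remain bounded
```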
Recently, Yuan-Chen-Li \cite{a16} studied the Lagrangian stability for coupled Hamiltonian system of $m$ Duffing oscillators:
\begin{equation}\label{fc1-7}
\ddot{x}_{i}+x_{i}^{2n+1}+\frac{\partial F}{\partial x_{i}}=0,\ \ i=1, 2, \cdots, m,
\end{equation}
where the polynomial potential $F=F(x, t)=\sum_{\alpha\in \mathbf{N}^{m}, |\alpha|\leq 2n+1}p_{\alpha}(t)x^{\alpha},$ $x\in \mathbf{R}^{m}$, each $p_{\alpha}(t)$ has period $2 \pi$, and $n$ is a given natural number. Yuan-Chen-Li \cite{a16} proved that (\ref{fc1-7}) has Lagrangian stability for almost all initial data provided $p_{\alpha}(t)$ is real analytic.
In the present paper, we will relax the real analytic condition of $p_{\alpha}(t)$ to $C^{\ell}$ ($\ell=2(m+1)(4n+2nm+1)+\mu$ with $0<\mu\ll1$).
Throughout the present paper we denote by $C$ (or $C_0, C_1, c, c_0, c_1$, etc.) a universal constant which may be different in different places. Let the positive integer $d$ be the number of degrees of freedom of the Hamiltonian under consideration.
\begin{thm}\label{thm1-1} Consider a Hamiltonian \begin{equation}\label{fcb1-1}
H(\theta,t,I)=\frac{H_0(I)}{\varepsilon^{a}}+\frac{P(\theta,t,I)}{\varepsilon^{b}},
\end{equation}
where $a,b$ are given positive constants with $a>b$, and $H_0$ and $P$ obey the following conditions:\\
{\rm(1)} Given $\ell=\frac{2(d+1)(5a-b+2ad)}{a-b}+\mu$ with $0<\mu\ll1$, and $H_0: [1,2]^d\rightarrow \mathbf R$ is real analytic and $P: \mathbf T^{d+1}\times [1,2]^d\rightarrow \mathbf R$ is $C^{\ell}$, and
\begin{equation}\label{fcb1-2}
||H_0||:=\sup_{I\in[1,2]^d}|H_0(I)|\le c_1, \ |P|_{C^{\ell}(\mathbf T^{d+1}\times [1,2]^d)}\le c_2,
\end{equation}
{\rm(2)} $H_0$ is non-degenerate in Kolmogorov's sense:
\begin{equation}\label{fcb1-3}
\text{det}\,\left(\frac{\partial^2 H_0(I)}{\partial I^2}\right)\ge c_3>0,\forall\; I\in [1,2]^d.
\end{equation}
Then there exists $0<\epsilon^*\ll 1$ such that for any $\varepsilon$ with $0<\varepsilon<\epsilon^*$, the Hamiltonian system
$$\dot \theta=\frac{\partial H(\theta,t,I)}{\partial I},\;\dot I=-\frac{\partial H(\theta,t,I)}{\partial \theta}$$ possesses a $d+1$ dimensional invariant torus of rotational frequency vector $(\omega(I_0),2\pi)$ with $\omega(I):=\frac{\partial H_0(I)}{\partial I}$, for any $I_0\in [1,2]^d$ and $\omega(I_0)$ obeying Diophantine conditions {\rm(we let $B=5a-b+2ad$):}\\
{\rm(i)}
\begin{equation}\label{fc1-19}
|\frac{\langle k,\omega(I_0)\rangle}{\varepsilon^a} +l|\ge \frac{\varepsilon^{-a+\frac{B}{\ell}}\gamma}{|k|^{\tau_1}}>\frac{\gamma}{|k|^{\tau_2}},\ \ k\in\mathbf Z^d\setminus\{0\},l\in\mathbf Z, |k|+|l|\le \varepsilon^{-\frac{B}{\ell}}(\log\frac{1}{\varepsilon})^2,
\end{equation}
where $\gamma=(\log\frac{1}{\varepsilon})^{-4},$ $\tau_1=d-1+\frac{(a-b)^2\mu}{1000(a+b+1)(d+3)(5a-b+2ad)}$, $\tau_2=d+\frac{(a-b)^2\mu}{1000(a+b+1)(d+3)(5a-b+2ad)}$; \\
{\rm(ii)}
\begin{equation}\label{fc1-20}
|\frac{\langle k,\omega(I_0)\rangle}{\varepsilon^a} +l|\ge \frac{\gamma}{|k|^{\tau_2}},\ \ k\in\mathbf Z^d\setminus\{0\},l\in\mathbf Z, |k|+|l|> \varepsilon^{-\frac{B}{\ell}}(\log\frac{1}{\varepsilon})^{2},
\end{equation}
where $\gamma=(\log\frac{1}{\varepsilon})^{-4},$ $\tau_2=d+\frac{(a-b)^2\mu}{1000(a+b+1)(d+3)(5a-b+2ad)}$.
\end{thm}
Applying Theorem \ref{thm1-1} to (\ref{fc1-7}) we have the following theorem.
\begin{thm}\label{thm1-2} For any $A>0$, let $\Theta_A=\{(x_{1}, \dot x_{1}; \cdots, x_{m}, \dot x_{m})\in\mathbf R^{2m}:\ A\le \sum_{i=1}^m x_i^{2n+2}+(n+1)\dot x^2_i\le c_4A, \ c_4>1\}$. Then there exists a subset $\tilde \Theta_A\subset\Theta_A$ with
\begin{equation}\label{fc1-21}
\lim_{A\to\infty}\frac{\text{Leb}\,\tilde \Theta_A}{\text{Leb}\,\Theta_A}=1
\end{equation}
such that any solution to equation {\rm(\ref{fc1-7})} with initial data $(x_{1}(0), \dot x_{1}(0); \cdots, x_{m}(0), \dot x_{m}(0))\in \tilde\Theta_A $ is time quasi-periodic with frequency vector $(\omega,2\pi)$, where $\omega=(\omega_i:\ i=1,\cdots,m)$, $\omega_i=\omega_i(I(0))$, $I(0)=(I_1(0),\cdots,I_m(0))$ and $I_{i}(0)=(n+1)\dot x_{i}^{2}(0)+x_{i}^{2n+2}(0)$; furthermore,
\begin{equation}\label{fc1-22}
\sup_{t\in\mathbf R}\sum_{i=1}^m |x_i(t)|+|\dot x_i(t)|<\infty.
\end{equation}
\end{thm}
\begin{rem}\label{rem1-3} An equation is said to have Lagrangian stability for almost all initial data if its solutions obey (\ref{fc1-21}) and (\ref{fc1-22}).
\end{rem}
\begin{rem}\label{rem1-4} Let $\Theta=\{I_0\in[1,2]^d:\ \omega(I_0) \,\text{obeys the Diophantine conditions}\}$. We claim that the Lebesgue measure of $\Theta$ approaches $1$:
\[\text{Leb}\, \Theta\ge 1-C (\log\frac{1}{\varepsilon})^{-2}\to 1,\;\text{as}\; \varepsilon
\to 0.\]
Let
$$\tilde\Theta_{k,l}=\left\{\xi\in\omega([1,2]^d):\ |\frac{\langle k,\xi\rangle}{\varepsilon^a} +l|\le \frac{\varepsilon^{-a+\frac{B}{\ell}}\gamma}{|k|^{\tau_1}},\ k\in\mathbf Z^d\setminus\{0\},l\in\mathbf Z, |k|+|l|\le \varepsilon^{-\frac{B}{\ell}}(\log\frac{1}{\varepsilon})^2\right\}$$
and
$$\tilde\Theta_{k,l}=\left\{\xi\in\omega([1,2]^d):\ |\frac{\langle k,\xi\rangle}{\varepsilon^a} +l|\le \frac{\gamma}{|k|^{\tau_2}},\ k\in\mathbf Z^d\setminus\{0\},l\in\mathbf Z, |k|+|l|> \varepsilon^{-\frac{B}{\ell}}(\log\frac{1}{\varepsilon})^{2}\right\}.$$
Let $f(\xi)=\frac{\langle k, \xi\rangle}{\varepsilon^{a}}+l.$ Since $k\neq 0,$ there exists a unit vector $v\in\mathbf{R}^{d}$ such that
\begin{equation}\label{fc1-23}
\frac{df(\xi)}{dv}\geq \frac{C|k|}{\varepsilon^{a}}.
\end{equation}
Then, if $k\in\mathbf Z^d\setminus\{0\},\ l\in\mathbf Z,\ |k|+|l|\le \varepsilon^{-\frac{B}{\ell}}(\log\frac{1}{\varepsilon})^{2}$, by (\ref{fc1-23}) we have
\begin{equation}\label{fc1-24}
\text{Leb}\,\tilde\Theta_{k,l}\le C \frac{\gamma\cdot\varepsilon^{\frac{B}{\ell}}}{|k|^{\tau_1+1}}.
\end{equation}
Thus,
\begin{equation}\label{fc1-25}
\text{Leb}\ \left(\bigcup_{k\in\mathbf Z^d\setminus\{0\},l\in\mathbf Z,|k|+|l|\le \varepsilon^{-\frac{B}{\ell}}(\log\frac{1}{\varepsilon})^{2} }\tilde\Theta_{k,l}\right)\le \sum_{|l|\le \varepsilon^{-\frac{B}{\ell}}(\log\frac{1}{\varepsilon})^{2}}C \gamma\cdot\varepsilon^{\frac{B}{\ell}}\le C(\log\frac{1}{\varepsilon})^{-2}.
\end{equation}
If $k\in\mathbf Z^d\setminus\{0\}, \ l\in\mathbf Z, \ |k|+|l|> \varepsilon^{-\frac{B}{\ell}}(\log\frac{1}{\varepsilon})^{2}$, let $c_5=\max_{\xi\in\omega([1,2]^d)}\sum_{i=1}^d|\xi_i|.$ Note that $|\langle k, \xi\rangle|\leq c_5|k|.$ Thus if $|l|>\frac{c_5|k|}{\varepsilon^a}+1,$ then
$$|\frac{\langle k, \xi\rangle}{\varepsilon^a}+l|\geq|l|-|\frac{\langle k, \xi\rangle}{\varepsilon^a}|>\frac{c_5|k|}{\varepsilon^a}+1-\frac{c_5|k|}{\varepsilon^a}\geq1>\frac{\gamma}{|k|^{\tau_2}}.$$
It follows that $\tilde\Theta_{k,l}=\emptyset.$ Now assume $|l|\leq\frac{c_5|k|}{\varepsilon^a}+1$; then by (\ref{fc1-23}) we have
\begin{equation}\label{fc1-26}
\text{Leb}\,\tilde\Theta_{k,l}\le \frac{C\gamma\varepsilon^a}{|k|^{\tau_2+1}}.
\end{equation}
Thus,
\begin{eqnarray}\label{fc1-27}
\nonumber\text{Leb}\ \left(\bigcup_{k\in\mathbf Z^d\setminus\{0\},l\in\mathbf Z,|k|+|l|> \varepsilon^{-\frac{B}{\ell}}(\log\frac{1}{\varepsilon})^{2} }\tilde\Theta_{k,l}\right)&\le& \sum_{k\neq0}\sum_{|l|\leq\frac{c_5|k|}{\varepsilon^a}+1}\frac{C\gamma\varepsilon^a}{|k|^{\tau_2+1}}\\
&\le&\sum_{k\neq0}\frac{C\gamma}{|k|^{\tau_2}}\le C(\log\frac{1}{\varepsilon})^{-4}.
\end{eqnarray}
By the Kolmogorov non-degeneracy condition, the map $\omega:[1,2]^d\to \omega([1,2]^d)$ is a diffeomorphism. Then by (\ref{fc1-25}) and (\ref{fc1-27}), the proof of the claim is completed by letting $\Theta=[1,2]^d\setminus \left(\bigcup_{k\in\mathbf Z^d\setminus\{0\},l\in\mathbf Z}\omega^{-1}(\tilde\Theta_{k,l})\right)$.
\end{rem}
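The measure estimate of the remark can be illustrated by a toy computation in dimension $d=1$; the values of $\gamma$, $\tau$ and the truncation $K$ below are hypothetical, not the $\varepsilon$-dependent choices of the theorem:

```python
import numpy as np

# Toy check (d = 1): frequencies xi in [1,2] violating
# |k*xi + l| >= gamma/|k|**tau for some k, l form a set of
# measure O(gamma).  gamma, tau, K are illustrative values.
gamma, tau, K = 0.01, 2.0, 40

xi = np.linspace(1.0, 2.0, 200001)
bad = np.zeros_like(xi, dtype=bool)
for k in range(1, K + 1):
    # only l close to -k*xi can violate the bound
    for l in range(-2 * K, 1):
        bad |= np.abs(k * xi + l) < gamma / k**tau
frac_bad = bad.mean()
print(frac_bad)  # of order gamma, so most frequencies survive
```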
\section{Approximation Lemma}\label{sec2}
First we denote by $|\cdot|$ the norm of any finite dimensional Euclidean space. Let $C^{\tilde{\mu}}(\mathbf{R}^{m})$ for $0<\tilde{\mu}<1$ denote the space of bounded H\"older continuous functions $f: \mathbf{R}^m\rightarrow\mathbf{R}^n$ with the norm
\begin{equation*}
|f|_{C^{\tilde{\mu}}}=\sup_{0<|x-y|<1}\frac{|f(x)-f(y)|}{|x-y|^{\tilde{\mu}}}+\sup_{x\in\mathbf{R}^m}|f(x)|.
\end{equation*}
If $\tilde{\mu}=0$ then $|f|_{C^{\tilde{\mu}}}$ denotes the sup-norm. For $\tilde{\ell}=k+\tilde{\mu}$ with $k\in\mathbf{N}$ and $0\leq\tilde{\mu}<1$ we denote
by $C^{\tilde{\ell}}(\mathbf{R}^{m})$ the space of functions $f:\mathbf{R}^m\rightarrow \mathbf{R}^n$ with H\"older continuous partial derivatives $\partial^{\alpha}f\in C^{\tilde{\mu}}(\mathbf{R}^m)$ for all multi-indices $\alpha=(\alpha_1,\cdots,\alpha_m)\in\mathbf{N}^m$ with $|\alpha|:=\alpha_1+\cdots+\alpha_m\leq k$. We define the norm
\begin{equation*}
|f|_{C^{\tilde{\ell}}}:=\sum_{|\alpha|\leq\tilde{\ell}}|\partial^{\alpha}f|_{C^{\tilde{\mu}}}
\end{equation*}
for $\tilde{\mu}=\tilde{\ell}-[\tilde{\ell}]<1$. In order to state an approximation lemma, we define the kernel function
\begin{equation*}
K(x)=\frac{1}{(2\pi)^m}\int_{\mathbf{R}^m}\hat{K}(\xi)e^{i\langle x,\xi\rangle}d\xi,\ x\in\mathbf{C}^m,
\end{equation*}
where $\hat{K}(\xi)$ is a $C^{\infty}$ function with compact support, contained in the ball $|\xi|\leq a_1$ with a constant $a_1>0$, that satisfies
$$ \partial^{\alpha}\hat{K}(0)=
\left\{
\begin{array}{l}
1, \ \alpha=0,\\
0, \ \alpha\neq0.
\end{array}
\right.
$$
Then $K: \mathbf{C}^m\rightarrow\mathbf{C}$ is a real analytic function with the property that for every $j>0$ and every $p>0$, there exists a constant $c=c(j,p)>0$ such that for all $\beta\in \mathbf{N}^m$ with $|\beta|\leq j$,
\begin{equation}\label{fc2-1}
|\partial^{\beta}K(x+iy)|\leq c(1+|x|)^{-p}e^{a_1|y|}, \ x,y\in \mathbf{R}^m.
\end{equation}
\begin{lem}[Jackson-Moser-Zehnder]\label{lem2-1}
There is a family of convolution operators
\begin{equation}\label{fc2-2}
(S_{s}F)(x)=s^{-m}\int_{\mathbf{R}^{m}}K(s^{-1}(x-y))F(y)dy,\ \ 0<s\leq 1,\ \ \forall\ F\in C^{0}(\mathbf{R}^{m})
\end{equation}
from $C^{0}(\mathbf{R}^{m})$ into the linear space of entire functions on $\mathbf{C}^{m}$ such that for
every $\tilde{\ell}>0$ there exists a constant $c=c(\tilde{\ell})>0$ with the following properties: if $F\in C^{\tilde{\ell}}(\mathbf{R}^{m}),$ then for $|\alpha|\leq \tilde{\ell}$ and $|\mathrm{Im}\, x|\leq s$,
\begin{equation}\label{fc2-3}
|\partial^{\alpha}(S_{s}F)(x)-\sum_{|\beta|\leq \tilde{\ell}-|\alpha|}\partial^{\alpha+\beta}F(\mathrm{Re} x)({\bf{i}}\, \mathrm{Im} x)^{\beta}/\beta!|\leq c|F|_{C^{\tilde{\ell}}}s^{\tilde{\ell}-|\alpha|}
\end{equation}
and in particular for $\rho\leq s$
\begin{equation}\label{fc2-4}
|\partial^{\alpha}S_{s}F-\partial^{\alpha}S_{\rho}F|_{\rho}:=\sup_{|{\rm Im} x|\leq \rho}|\partial^{\alpha}(S_{s}F)(x)-\partial^{\alpha}(S_{\rho}F)(x)|\leq c|F|_{C^{\tilde{\ell}}}s^{\tilde{\ell}-|\alpha|}.
\end{equation}
Moreover, in the real case
\begin{equation}\label{fc2-5}
|S_{s}F-F|_{C^{p}}\leq c |F|_{C^{\tilde{\ell}}}s^{\tilde{\ell}-p},\ \ p\leq \tilde{\ell},
\end{equation}
\begin{equation}\label{fc2-6}
|S_{s}F|_{C^{p}}\leq c|F|_{C^{\tilde{\ell}}}s^{\tilde{\ell}-p},\ \ p\leq \tilde{\ell}.
\end{equation}
Finally, if $F$ is periodic in some variables then so are the approximating functions $S_{s}F$ in the same variables.
\end{lem}
\begin{rem}\label{rem2-1}
Moreover we point out that from (\ref{fc2-6}) one can easily deduce the following well-known convexity estimates
\begin{equation}\label{fc2-7}
|f|_{C^{q}}^{l-k}\leq c|f|_{C^{k}}^{l-q}|f|_{C^{l}}^{q-k},\ \ k\leq q\leq l,
\end{equation}
\begin{equation}\label{fc2-8}
|f\cdot g|_{C^{s}}\leq c(|f|_{C^{s}}|g|_{C^{0}}+|f|_{C^{0}}|g|_{C^{s}}),\ \ s\geq 0.
\end{equation}
See \cite{a25, a26} for the proofs of Lemma \ref{lem2-1} and the inequalities (\ref{fc2-7}) and (\ref{fc2-8}).
\end{rem}
\begin{rem}\label{rem2-2}
From the definition of the operator $S_s$, we clearly have
\begin{equation}\label{fc2-9}
\sup_{x,y\in\mathbf{R}^{m}, |y|\leq s}|S_sF(x+iy)|\leq C|F|_{C^0}.
\end{equation}
In fact, by the definition of $S_s$, we have that for any $x,y\in\mathbf{R}^{m}$ with $|y|\leq s$,
\begin{eqnarray}
\nonumber|S_sF(x+iy)|&=&\nonumber |s^{-m}\int_{\mathbf{R}^{m}}K(s^{-1}(x+iy-z))F(z)dz|\\
&=&\nonumber |\int_{\mathbf{R}^{m}}K(is^{-1}y+\xi)F(x-s\xi)d\xi|\\
&\leq&\nonumber |F|_{C^0}\int_{\mathbf{R}^{m}}|K(is^{-1}y+\xi)|d\xi\\
&\leq&\nonumber C|F|_{C^0},
\end{eqnarray}
where we used (\ref{fc2-1}) in the last inequality.
\end{rem}
Consider a function $F(\theta,t,I)$, where $F:\mathbf T^{d+1}\times [1,2]^d\rightarrow\mathbf{R}$ satisfies
\begin{equation*}
|F|_{C^{\tilde{\ell}}(\mathbf T^{d+1}\times [1,2]^d)}\leq C.
\end{equation*}
By Whitney's extension theorem, we can find a function $\tilde{F}: \mathbf{T}^{d+1}\times \mathbf{R}^{d}\rightarrow\mathbf{R}$ such that $\tilde{F}|_{\mathbf T^{d+1}\times [1,2]^d}=F$ (i.e. $\tilde{F}$ is the extension of $F$) and
\begin{equation*}
|\tilde{F}|_{C^{|\alpha|}(\mathbf{T}^{d+1}\times \mathbf{R}^{d})}\leq C_{\alpha}|F|_{C^{|\alpha|}(\mathbf T^{d+1}\times [1,2]^d)}, \ \forall\alpha\in\mathbf{Z}^{2d+1}_{+}, |\alpha|\leq\tilde{\ell},
\end{equation*}
where $C_{\alpha}$ is a constant depending only on $\tilde{\ell}$ and $d$.
Let $z=(\theta,t,I)$ for brevity, and define, for every $s>0$,
\begin{equation*}
(S_{s}\tilde{F})(z)=s^{-(2d+1)}\int_{\mathbf{T}^{d+1}\times \mathbf{R}^{d}}K(s^{-1}(z-\tilde{z}))\tilde{F}(\tilde{z})d\tilde{z}.
\end{equation*}
For any positive integer $p$, let $\mathbf{{T}}_s^p=\left\{x\in \mathbf{{C}}^{p}/{(2\pi \mathbf{Z})}^{p}: |{\rm Im}x|\leq s \right\}$, $\mathbf{{R}}_s^p=\left\{x\in \mathbf{{C}}^{p}: |{\rm Im}x|\leq s\right\}$. Fix a sequence of fast decreasing
numbers $s_{\nu}\downarrow0$, $\nu\in\mathbf{Z_{+}}$ and $s_0\leq\frac{1}{4}$. Let
\begin{equation*}
F^{(\nu)}(z)=(S_{2s_\nu}\tilde{F})(z),\ \nu\geq0.
\end{equation*}
Then the $F^{(\nu)}$'s ($\nu\geq0$) are entire functions in $\mathbf{C}^{2d+1}$ which, in particular, obey the following properties.\\
(1) $F^{(\nu)}$'s ($\nu\geq0$) are real analytic on the complex domain $\mathbf{T}^{d+1}_{2s_{\nu}}\times \mathbf{R}^{d}_{2s_{\nu}}$;\\
(2) The sequence of functions $F^{(\nu)}$'s satisfies the bounds
\begin{equation}\label{fc2-10}
\sup_{z\in\mathbf{T}^{d+1}\times \mathbf{R}^{d}}|F^{(\nu)}(z)-\tilde{F}(z)|\leq C |F|_{C^{\tilde{\ell}}(\mathbf T^{d+1}\times [1,2]^d)}s_{\nu}^{\tilde{\ell}},
\end{equation}
\begin{equation}\label{fc2-11}
\sup_{z\in \mathbf{T}^{d+1}_{2s_{\nu+1}}\times\mathbf{R}^{d}_{2s_{\nu+1}}}|F^{(\nu+1)}(z)-F^{(\nu)}(z)|\leq C |F|_{C^{\tilde{\ell}}(\mathbf T^{d+1}\times [1,2]^d)}s_{\nu}^{\tilde{\ell}},
\end{equation}
where the constants $C=C(d,\tilde{\ell})$ depend only on $d$ and $\tilde{\ell}$;\\
(3) The first approximant $F^{(0)}(z)=(S_{2s_0}\tilde{F})(z)$ is ``small'' with respect to $F$. Precisely,
\begin{equation}\label{fc2-12}
|F^{(0)}(z)|\leq C |F|_{C^{\tilde{\ell}}(\mathbf T^{d+1}\times [1,2]^d)},\ \ \forall z\in\mathbf{T}^{d+1}_{2s_0}\times\mathbf{R}^{d}_{2s_0},
\end{equation}
where the constant $C=C(d,\tilde{\ell})$ is independent of $s_0$;\\
(4) From Lemma \ref{lem2-1}, we have that
\begin{equation}\label{fc2-13}
F(z)=F^{(0)}(z)+\sum_{\nu=0}^{\infty}(F^{(\nu+1)}(z)-F^{(\nu)}(z)),\ \ z\in \mathbf T^{d+1}\times [1,2]^d.
\end{equation}
Let
\begin{equation}\label{fc2-14}
F_0(z)=F^{(0)}(z),\ F_{\nu+1}(z)=F^{(\nu+1)}(z)-F^{(\nu)}(z).
\end{equation}
Then
\begin{equation}\label{fc2-15}
F(z)=\sum_{\nu=0}^{\infty}F_{\nu}(z),\ \ \ \ z\in \mathbf T^{d+1}\times [1,2]^d.
\end{equation}
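The decay of the mollification error underlying this decomposition can be observed numerically. This is a sketch only: a Gaussian kernel stands in for the analytic kernel $K$, the test function and the scales $s$ are illustrative choices:

```python
import numpy as np

# Illustration of the smoothing operators S_s for the Lipschitz
# function F(x) = |sin x| (so one may take l = 1): the error
# max |S_s F - F| shrinks roughly linearly in s, consistent with
# the bound |S_s F - F|_{C^0} <= c |F|_{C^l} s^l.
def S(F, x, s, y):
    # S_s F(x) = (1/s) int K((x - y)/s) F(y) dy with a Gaussian K
    K = np.exp(-0.5 * ((x[:, None] - y[None, :]) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    return (K * F(y)[None, :]).sum(axis=1) * (y[1] - y[0])

F = lambda u: np.abs(np.sin(u))
y = np.linspace(-2 * np.pi, 2 * np.pi, 8001)
x = np.linspace(-1.0, 1.0, 401)          # probe near the kink at x = 0
errs = [float(np.max(np.abs(S(F, x, s, y) - F(x)))) for s in (0.4, 0.2, 0.1)]
print(errs)  # decreasing roughly in proportion to s
```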
\section{Normal Form}\label{sec3}
Let $I_0\in [1,2]^d$ such that $\omega(I_0)=\frac{\partial H_0}{\partial I}(I_0)$ obeys Diophantine conditions (\ref{fc1-19}) and (\ref{fc1-20}).
Let $\mu_1=\frac{(a-b)^2\mu}{1000(a+b+1)(d+3)(5a-b+2ad)}$, $\mu_2=2\mu_1=\frac{(a-b)^2\mu}{500(a+b+1)(d+3)(5a-b+2ad)}$, $m_0=10+\left[E\right]$ where $E=\max\{\frac{4B}{a-b-\frac{2(\tau_1+2)B}{\ell}-2\mu_1}, \frac{2(2\tau_1+3)(\tau_2+1)B}{B-2a-2(\tau_2+1)b-\frac{2(2\tau_1+5)(\tau_2+1)B}{\ell}-8\mu_1(\tau_2+1)-2\mu_2}\}$ ($a$, $b$, $\tau_1$, $\tau_2$, $B$, $\ell$ are the same as those in Theorem \ref{thm1-1}), and $[\cdot]$ denotes the integer part of a positive number.
Define sequences
\begin{itemize}
\item $ \varepsilon_{j}=\varepsilon^{\frac{j B}{m_0}},\ j=0,1,2,\cdots,m_0;\ \varepsilon_{j}=\varepsilon_{j-1}^{1+\mu_3} \ \text{with} \ \mu_3=\frac{(a-b)\mu}{10B},\ j=m_0+1,m_0+2,\cdots;$
\item $ s_j=\varepsilon_{j+1}^{\frac{1}{\ell}},\ s_j^{(l)}=s_{j}-\frac{l}{10}(s_j-s_{j+1}),l=0,1,\cdots,10, j=0,1,2,\cdots;$
\item $ r_j=\varepsilon^{\frac{(j+1)(\tau_1+1) B}{\ell m_0}+\mu_1+\frac{B}{\ell}} \ \text{with} \ \mu_1=\frac{(a-b)^2\mu}{1000(a+b+1)(d+3)(5a-b+2ad)},\ \ j=0,1,2,\cdots,m_0;\ r_j=r_{j-1}^{1+\mu_3},\ j=m_0+1,m_0+2,\cdots;$
\item $ r_j^{(l)}=r_{j}-\frac{l}{10}(r_j-r_{j+1}),\ l=0,1,\cdots,10, \ j=0,1,2,\cdots;$
\item $K_j=\frac{2B}{s_j}\log\frac{1}{\varepsilon},\ j=0,1,2,\cdots;$
\item $B(r_j)=\{z\in\mathbf C^d: \, |z-I_0|\le r_j\},\ j=0,1,2,\cdots$. \end{itemize}
With the preparation of Section \ref{sec2}, we can rewrite equation (\ref{fcb1-1}) as follows:
\begin{equation}\label{fc3-1}
H(\theta,t,I)=\frac{H_0(I)}{\varepsilon^{a}}+\frac{1}{\varepsilon^{b}} \sum_{\nu=0}^{\infty}P_{\nu}(\theta,t,I),
\end{equation}
where
\begin{equation}\label{fc3-2}
P_{\nu}: \mathbf{T}^{d+1}_{2s_{\nu}}\times \mathbf{R}^{d}_{2s_{\nu}}\rightarrow\mathbf{C},
\end{equation}
is real analytic, and
\begin{equation}\label{fc3-3}
\sup_{(\theta,t,I)\in \mathbf{T}^{d+1}_{2s_{\nu}}\times \mathbf{R}^{d}_{2s_{\nu}}}|P_{\nu}|\leq C\varepsilon_{\nu}.
\end{equation}
Let
\begin{equation}\label{fc3-4}
h^{(0)}(t,I)\equiv0,\ P^{(0)}=P_0.
\end{equation}
Then we can rewrite equation (\ref{fc3-1}) as follows:
\begin{equation}\label{fc3-5}
H^{(0)}(\theta,t,I)=\frac{H_0(I)}{\varepsilon^{a}}+\frac{h^{(0)}(t,I)}{\varepsilon^{b}}+\frac{\varepsilon_0 P^{(0)}(\theta,t,I)}{\varepsilon^{b}}+ \sum_{\nu=1}^{\infty}\frac{P_{\nu}(\theta,t,I)}{\varepsilon^{b}}.
\end{equation}
Define
\begin{equation*}
D(s,r)=\mathbf{T}^{d+1}_{s}\times B(r),\ D(s,0)=\mathbf{T}^{d+1}_{s},\ D(0,r)=B(r).
\end{equation*}
For a function $f$ defined in $D(s,r)$, define
\begin{equation*}
||f||_{D(s,r)}=\sup_{(\theta,t,I)\in D(s,r)}|f(\theta,t,I)|.
\end{equation*}
Similarly, we can define $||f||_{D(0,r)}$ and $||f||_{D(s,0)}$.
Clearly, (\ref{fc3-5}) fulfills (\ref{fc3-8})-(\ref{fc3-10}) with $m=0$. Then we have the following lemma.
\begin{lem}\label{lem3-1} Suppose that we have had $m+1$ {\rm(}$m=0,1,2,\cdots, m_0-1${\rm)} symplectic transformations $\Phi_0=id$, $\Phi_1$, $\cdots$, $\Phi_m$ with
\begin{equation}\label{fc3-6}
\Phi_j:D(s_j,r_j)\rightarrow D(s_{j-1},r_{j-1}),\ j=1,2,\cdots,m
\end{equation}
and
\begin{equation}\label{fcf3-1}
\|\partial(\Phi_{j}-id)\|_{D(s_j,r_j)}\leq \frac{1}{2^{j+1}},\ j=1,2,\cdots,m
\end{equation}
such that system {\rm(\ref{fc3-5})} is changed by $\Phi^{(m)}=\Phi_0\circ\Phi_1\circ\cdots\circ\Phi_m$ into
\begin{equation}\label{fc3-7}
H^{(m)}=H^{(0)}\circ\Phi^{(m)}=\frac{H_0(I)}{\varepsilon^{a}}+\frac{h^{(m)}(t,I)}{\varepsilon^{b}}+\frac{\varepsilon_m P^{(m)}(\theta,t,I)}{\varepsilon^{b}}+ \sum_{\nu=m+1}^{\infty}\frac{P_{\nu}\circ\Phi^{(m)}(\theta,t,I)}{\varepsilon^{b}},
\end{equation}
where
\begin{equation}\label{fc3-8}
\|h^{(m)}(t,I)\|_{D(s_{m}, r_{m})}\leq C,
\end{equation}
\begin{equation}\label{fc3-9}
\|P^{(m)}(\theta,t,I)\|_{D(s_{m}, r_{m})}\leq C,
\end{equation}
\begin{equation}\label{fc3-10}
\|P_{\nu}\circ\Phi^{(m)}(\theta,t,I)\|_{D(s_{\nu}, r_{\nu})}\leq C\varepsilon_{\nu},\ \nu=m+1,m+2,\cdots.
\end{equation}
Then there is a symplectic transformation $\Phi_{m+1}$ with
\begin{equation*}
\Phi_{m+1}:D(s_{m+1},r_{m+1})\rightarrow D(s_m,r_m)
\end{equation*}
and
\begin{equation*}
\|\partial(\Phi_{m+1}-id)\|_{D(s_{m+1},r_{m+1})}\leq \frac{1}{2^{m+2}}
\end{equation*}
such that system {\rm(\ref{fc3-7})} is changed by $\Phi_{m+1}$ into {\rm($\Phi^{(m+1)}=\Phi_0\circ\Phi_1\circ\cdots\circ\Phi_{m+1}$)}
\begin{eqnarray*}\label{fc3-12}
\nonumber H^{(m+1)}&=&H^{(m)}\circ\Phi_{m+1}=H^{(0)}\circ\Phi^{(m+1)}\nonumber\\
\nonumber&=&\frac{H_0(I)}{\varepsilon^{a}}+\frac{h^{(m+1)}(t,I)}{\varepsilon^{b}}+\frac{\varepsilon_{m+1} P^{(m+1)}(\theta,t,I)}{\varepsilon^{b}}+ \sum_{\nu=m+2}^{\infty}\frac{P_{\nu}\circ\Phi^{(m+1)}(\theta,t,I)}{\varepsilon^{b}},
\end{eqnarray*}
where $H^{(m+1)}$ satisfies {\rm(\ref{fc3-8})}-{\rm(\ref{fc3-10})} by replacing $m$ by $m+1$.
\end{lem}
\begin{proof}
Assume that the change $\Phi_{m+1}$ is implicitly defined by
\begin{equation}\label{fc3-16}
\Phi_{m+1}: \begin{cases}
I=\rho+\frac{\partial S}{\partial \theta}, \\
\phi=\theta+\frac{\partial S}{\partial \rho},\\ t=t,
\end{cases}
\end{equation}
where $S=S(\theta, t, \rho)$ is the generating function, which will be shown to be analytic in the smaller domain
$D(s_{m+1}, r_{m+1}).$
$$d I\wedge d\theta=d\rho \wedge d\theta+\sum_{i,j=1}^d\frac{\partial^{2}S}{\partial\rho_{i}\partial\theta_{j}}d\rho_{i}\wedge d\theta_{j}=d\rho \wedge d\phi.$$
Thus, the coordinates change $\Phi_{m+1}$ is symplectic if it exists. Moreover, we get the changed Hamiltonian
\begin{eqnarray}\label{fc3-17}
\nonumber H^{(m+1)}&=&H^{(m)}\circ\Phi_{m+1}\nonumber\\
\nonumber &=&\frac{H_0(\rho+\frac{\partial S}{\partial \theta})}{\varepsilon^{a}}+\frac{h^{(m)}(t,\rho+\frac{\partial S}{\partial \theta})}{\varepsilon^{b}}+\frac{\varepsilon_{m} P^{(m)}(\theta,t,\rho+\frac{\partial S}{\partial \theta})}{\varepsilon^{b}}+\frac{\partial S}{\partial t}\\
&&+\frac{P_{m+1}\circ\Phi^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}}+ \sum_{\nu=m+2}^{\infty}\frac{P_{\nu}\circ\Phi^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}},
\end{eqnarray}
where $\theta=\theta(\phi, t, \rho)$ is implicitly defined by (\ref{fc3-16}). By Taylor formula, we have
\begin{eqnarray}\label{fc3-18}
\nonumber H^{(m+1)} &=&\frac{H_0(\rho)}{\varepsilon^{a}}+\frac{h^{(m)}(t,\rho)}{\varepsilon^{b}}+\langle\frac{\omega(\rho)}{\varepsilon^a}, \frac{\partial S}{\partial \theta}\rangle+\frac{\partial S}{\partial t}+\frac{\varepsilon_{m} P^{(m)}(\theta,t,\rho)}{\varepsilon^{b}}\\
&&+\frac{P_{m+1}\circ\Phi^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}}+ \sum_{\nu=m+2}^{\infty}\frac{P_{\nu}\circ\Phi^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}}+R_1,
\end{eqnarray}
where $\omega(\rho)=\frac{\partial H_0}{\partial I}(\rho)$ and
\begin{eqnarray}\label{fc3-19}
\nonumber R_1&=&\frac{1}{\varepsilon^{a}}\int_{0}^{1}(1-\tau)\frac{\partial^2 H_0}{\partial I^2}(\rho+\tau\frac{\partial S}{\partial \theta}) (\frac{\partial S}{\partial \theta})^{2}d\tau+\frac{\varepsilon_m}{\varepsilon^b}\int_{0}^{1}\frac{\partial P^{(m)}}{\partial I}(\theta, t, \rho+\tau\frac{\partial S}{\partial\theta})\frac{\partial S}{\partial \theta}d\tau\\
&&+\frac{1}{\varepsilon^b}\int_0^1\frac{\partial h^{(m)}}{\partial I}(t,\rho+\tau\frac{\partial S}{\partial\theta})\frac{\partial S}{\partial \theta}\, d\tau.
\end{eqnarray}
Expanding $P^{(m)}(\theta,t,\rho)$ into a Fourier series,
\begin{equation}\label{fc3-20}
P^{(m)}(\theta,t,\rho)=\sum_{(k,l)\in\mathbf{Z}^d\times \mathbf{Z}}\widehat{{P}^{(m)}}(k,l,\rho)e^{i(\langle k,\theta\rangle+lt)}:=P_{1}^{(m)}(\theta,t,\rho)+P_{2}^{(m)}(\theta,t,\rho),
\end{equation}
where $P_{1}^{(m)}=\sum_{|k|+|l|\leq K_m}\widehat{{P}^{(m)}}(k,l,\rho)e^{i(\langle k,\theta\rangle+lt)}$, $P_{2}^{(m)}=\sum_{|k|+|l|> K_m}\widehat{{P}^{(m)}}(k,l,\rho)e^{i(\langle k,\theta\rangle+lt)}$.
Then, we derive the homological equation:
\begin{equation}\label{fc3-21}
\frac{\partial S}{\partial t}+\langle \frac{\omega(\rho)}{\varepsilon^{a}}, \frac{\partial S}{\partial\theta}\rangle
+\frac{\varepsilon_{m} P_1^{(m)}(\theta,t,\rho)}{\varepsilon^{b}}-\frac{\varepsilon_{m} \widehat{{P}_1^{(m)}}(0,t,\rho)}{\varepsilon^{b}}=0,
\end{equation}
where $\widehat{{P}_1^{(m)}}(0, t, \rho)$ is the $0$th Fourier coefficient of $P_1^{(m)}(\theta,t,\rho)$ as a function of $\theta$. Let
\begin{equation}\label{fc3-22}
S(\theta,t,\rho)=\sum_{|k|+|l|\leq K_m,k\neq 0}\widehat{S}(k, l, \rho)e^{i(\langle k, \theta\rangle+lt)}.
\end{equation}
By passing to Fourier coefficients, we have
\begin{equation}\label{fc3-23}
\widehat{S}(k, l, \rho)=\frac{\varepsilon_m}{\varepsilon^b}\cdot\frac{i}{\varepsilon^{-a}\langle k, \omega(\rho)\rangle +l}\widehat{{P}^{(m)}}(k, l, \rho),\;|k|+|l|\leq K_m,\; k\in \mathbf{Z}^{d}\setminus\{0\}, l\in \mathbf{Z}.
\end{equation}
Then we can solve homological equation (\ref{fc3-21}) by setting
\begin{equation}\label{fc3-24}
S(\theta,t,\rho)=\sum_{|k|+|l|\leq K_m,k\neq 0}
\frac{\varepsilon_m}{\varepsilon^b}\cdot\frac{i}{\varepsilon^{-a}\langle k, \omega(\rho)\rangle +l}\widehat{{P}^{(m)}}(k, l, \rho)e^{i(\langle k, \theta\rangle+lt)}.
\end{equation}
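As a sanity check, the Fourier-space inversion above can be verified in a toy one-frequency setting, with $d=1$, all $\varepsilon$-factors set to $1$, and hypothetical data:

```python
import numpy as np

# Toy verification of the Fourier-space solution of the homological
# equation: for P(theta,t) with finitely many modes and omega = sqrt(2),
#   Shat(k,l) = i*Phat(k,l)/(k*omega + l),  k != 0,
# gives  S_t + omega*S_theta + P - Phat(0,t) = 0.
omega = np.sqrt(2.0)
# Phat(k,l) for P = cos(theta + t) + cos(2*theta - t); here Phat(0,t) = 0
modes = {(1, 1): 0.5, (-1, -1): 0.5, (2, -1): 0.5, (-2, 1): 0.5}

theta, t = np.meshgrid(np.linspace(0, 2*np.pi, 60), np.linspace(0, 2*np.pi, 60))
P = np.zeros_like(theta, dtype=complex)
S_t = np.zeros_like(P)
S_th = np.zeros_like(P)
for (k, l), c in modes.items():
    e = np.exp(1j * (k*theta + l*t))
    P += c * e
    Shat = 1j * c / (k*omega + l)        # the small-divisor division
    S_t += 1j*l * Shat * e
    S_th += 1j*k * Shat * e
residual = float(np.max(np.abs(S_t + omega*S_th + P)))
print(residual)  # zero up to machine precision
```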
By (\ref{fcb1-3}) and (\ref{fc1-19}), for $\forall \rho\in B(r_{m})$, $|k|+|l|\leq K_{m}$, $k\neq 0$, we have
\begin{eqnarray}\label{fc3-25}
|\varepsilon^{-a}\langle k, \omega(\rho)\rangle+l|
&\geq &|\varepsilon^{-a}\langle k, \omega(I_{0})\rangle+l|-|\varepsilon^{-a}\langle k, \omega(I_{0})-\omega(\rho)\rangle|\nonumber\\
&\geq & \frac{\varepsilon^{-a+\frac{B}{\ell}}\gamma}{|k|^{\tau_1}}-C\varepsilon^{-a}|k|r_{m}\nonumber\\
&\geq & \frac{\varepsilon^{-a+\frac{B}{\ell}}\gamma}{2|k|^{\tau_1}}.
\end{eqnarray}
Then, by (\ref{fc3-9}) and (\ref{fc3-23})-(\ref{fc3-25}), using R\"ussmann's subtle
arguments \cite{a27, a28} to obtain optimal estimates of small divisor series (see also Lemma 5.1 in \cite{a29}), we get
\begin{equation}\label{fc3-26}
\|S(\theta,t,\rho)\|_{D(s^{(1)}_m, r_m)}\leq\frac{C\varepsilon^{a-b-\frac{B}{\ell}}\varepsilon_m\|P^{(m)}(\theta,t,\rho)\|_{D(s_m,r_m)}}{\gamma s_{m}^{\tau_{1}}}\leq\frac{C\varepsilon^{a-b-\frac{B}{\ell}}\varepsilon_m}{\gamma s_{m}^{\tau_{1}}}.
\end{equation}
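As a quick numerical illustration of this step (a toy one-dimensional analogue, not part of the proof; the frequency and coefficients below are illustrative choices), one can solve a truncated homological equation mode-by-mode exactly as in (\ref{fc3-24}) and check that the residual vanishes:

```python
import numpy as np

# Toy analogue on T^2 of the homological equation above:
#   S_t + omega * S_theta + P(theta, t) = 0,  with <P> = 0,
# solved via the Fourier-coefficient formula S_hat(k,l) = i P_hat(k,l)/(k*omega + l).
omega = np.sqrt(2.0)                                # Diophantine frequency
modes = {(1, 0): 0.3, (1, -1): 0.2, (2, 1): 0.1}    # real coefficients P_hat(k, l)

def P(theta, t):
    # P = sum of 2 c cos(k theta + l t) over the modes (zero mean)
    return sum(2*c*np.cos(k*theta + l*t) for (k, l), c in modes.items())

def S(theta, t):
    # pairing conjugate modes, S_hat = i c/(k omega + l) gives a real sine series
    return sum(-2*c/(k*omega + l)*np.sin(k*theta + l*t)
               for (k, l), c in modes.items())

th, tt = np.meshgrid(np.linspace(0, 2*np.pi, 40), np.linspace(0, 2*np.pi, 40))
h = 1e-6                                            # central differences
S_t = (S(th, tt + h) - S(th, tt - h)) / (2*h)
S_th = (S(th + h, tt) - S(th - h, tt)) / (2*h)
residual = np.abs(S_t + omega*S_th + P(th, tt)).max()
```

The residual is at the level of the finite-difference error, confirming the mode-by-mode solution formula.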
Then by Cauchy's estimate, we have
\begin{equation}\label{fc3-27}
\|\frac{\partial S}{\partial \theta}\|_{D(s^{(2)}_m, r_m)}\leq \frac{C\varepsilon^{a-b-\frac{B}{\ell}}\varepsilon_m}{\gamma s_{m}^{\tau_{1}+1}}\ll r_m-r_{m+1},\ \|\frac{\partial S}{\partial \rho}\|_{D(s^{(1)}_m, r^{(1)}_m)}\leq \frac{C\varepsilon^{a-b-\frac{B}{\ell}}\varepsilon_m}{\gamma s_{m}^{\tau_{1}}r_m}\ll s_m-s_{m+1}.
\end{equation}
By (\ref{fc3-16}), (\ref{fc3-27}) and the implicit function theorem, we get that there are analytic functions
$u=u(\phi, t, \rho), v=v(\phi, t, \rho)$ defined on
the domain $D(s^{(3)}_m, r^{(3)}_m)$ with
\begin{equation}\label{fc3-28}
\frac{\partial S(\theta,t,\rho)}{\partial \theta}=u(\phi, t, \rho),\ \frac{\partial S(\theta,t,\rho)}{\partial \rho}=-v(\phi, t, \rho)
\end{equation}
and
\begin{equation}\label{fc3-29}
\|u\|_{D(s^{(3)}_m, r^{(3)}_m)}\leq \frac{C\varepsilon^{a-b-\frac{B}{\ell}}\varepsilon_m}{\gamma s_{m}^{\tau_{1}+1}}\ll r_m-r_{m+1},\ \|v\|_{D(s^{(3)}_m, r^{(3)}_m)}\leq \frac{C\varepsilon^{a-b-\frac{B}{\ell}}\varepsilon_m}{\gamma s_{m}^{\tau_{1}}r_m}\ll s_m-s_{m+1}
\end{equation}
such that
\begin{equation}\label{fc3-30}
\Phi_{m+1}: \begin{cases}
I=\rho+u(\phi, t, \rho), \\
\theta=\phi+v(\phi, t, \rho),\\ t=t.
\end{cases}
\end{equation}
Then, we have
\begin{equation}\label{fc3-31}
\Phi_{m+1}(D(s_{m+1},r_{m+1}))\subseteq \Phi_{m+1}(D(s^{(3)}_{m},r^{(3)}_{m}))\subseteq D(s_{m},r_{m}).
\end{equation}
Let
\begin{equation}\label{fc3-32}
h^{(m+1)}(t,\rho)=h^{(m)}(t,\rho)+\varepsilon_{m}\widehat{{P}_1^{(m)}}(0, t, \rho),
\end{equation}
\begin{equation}\label{fc3-33}
\frac{\varepsilon_{m+1}P^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}}=\frac{\varepsilon_{m} P_2^{(m)}(\theta,t,\rho)}{\varepsilon^{b}}+\frac{P_{m+1}\circ\Phi^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}}+R_1.
\end{equation}
Then by (\ref{fc3-18}), (\ref{fc3-20}), (\ref{fc3-21}), (\ref{fc3-32}) and (\ref{fc3-33}), we have
\begin{equation}\label{fc3-34}
H^{(m+1)}(\phi,t,\rho)=\frac{H_0(\rho)}{\varepsilon^{a}}+\frac{h^{(m+1)}(t,\rho)}{\varepsilon^{b}}+\frac{\varepsilon_{m+1} P^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}}+ \sum_{\nu=m+2}^{\infty}\frac{P_{\nu}\circ\Phi^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}}.
\end{equation}
By (\ref{fc3-9}) and (\ref{fc3-20}), it is not difficult to show (see Lemma A.2 in \cite{a30}) that
\begin{equation}\label{fc3-36}
\| \frac{\varepsilon_{m} P_2^{(m)}(\theta,t,\rho)}{\varepsilon^{b}}\|_{D(s^{(9)}_{m}, r^{(9)}_{m})}\leq \frac{C\varepsilon_{m}}{\varepsilon^{b}}K_{m}^{d+1}e^{-\frac{K_{m}s_{m}}{2}}\leq\frac{C\varepsilon_{m+1}}{\varepsilon^{b}}.
\end{equation}
By (\ref{fc3-8}), (\ref{fc3-9}), (\ref{fc3-20}), (\ref{fc3-32}) and (\ref{fc3-36}), we have
\begin{equation}\label{fc3-35}
\| h^{(m+1)}\|_{D(s_{m+1}, r_{m+1})}\leq \| h^{(m)}\|_{D(s_{m+1}, r_{m+1})}+\| \varepsilon_{m}\widehat{{P}_1^{(m)}}(0, t, \rho)\|_{D(s_{m+1}, r_{m+1})}\leq C.
\end{equation}
By (\ref{fc3-8}), (\ref{fc3-9}), (\ref{fc3-26})-(\ref{fc3-29}), we have
\begin{equation}\label{fc3-37}
\| \frac{1}{\varepsilon^{a}}\int_{0}^{1}(1-\tau)\frac{\partial^2 H_0}{\partial I^2}(\rho+\tau\frac{\partial S}{\partial \theta}) (\frac{\partial S}{\partial \theta})^{2}d\tau\|_{D(s^{(9)}_{m}, r^{(9)}_{m})}\leq \frac{C}{\varepsilon^{a}r_m^2}\cdot(\frac{\varepsilon^{a-b-\frac{B}{\ell}}\varepsilon_m}{\gamma s_{m}^{\tau_{1}+1}})^2\leq\frac{C\varepsilon_{m+1}}{\varepsilon^{b}},
\end{equation}
\begin{equation}\label{fc3-38}
\| \frac{\varepsilon_m}{\varepsilon^b}\int_{0}^{1}\frac{\partial P^{(m)}}{\partial I}(\theta, t, \rho+\tau\frac{\partial S}{\partial\theta})\frac{\partial S}{\partial \theta}d\tau\|_{D(s^{(9)}_{m}, r^{(9)}_{m})}\leq \frac{C\varepsilon_m}{\varepsilon^{b}r_m}\cdot\frac{\varepsilon^{a-b-\frac{B}{\ell}}\varepsilon_m}{\gamma s_{m}^{\tau_{1}+1}}\leq\frac{C\varepsilon_{m+1}}{\varepsilon^{b}},
\end{equation}
\begin{equation}\label{fc3-39}
\| \frac{1}{\varepsilon^b}\int_0^1\frac{\partial h}{\partial I}(t,\rho+\tau\frac{\partial S}{\partial\theta})\frac{\partial S}{\partial \theta}\, d\tau\|_{D(s^{(9)}_{m}, r^{(9)}_{m})}\leq \frac{C}{\varepsilon^{b}r_m}\cdot\frac{\varepsilon^{a-b-\frac{B}{\ell}}\varepsilon_m}{\gamma s_{m}^{\tau_{1}+1}}\leq\frac{C\varepsilon_{m+1}}{\varepsilon^{b}}.
\end{equation}
By (\ref{fc3-29}) and (\ref{fc3-30}), we have
\begin{equation}\label{fc3-40}
\Phi_{m+1}(\phi,t,\rho)=(\theta,t,I),\ (\phi,t,\rho)\in D(s_m^{(3)},r_m^{(3)}).
\end{equation}
By (\ref{fc3-29}), (\ref{fc3-30}) and (\ref{fc3-40}), we have
\begin{equation}\label{fc3-41}
\|I-\rho\|_{D(s^{(3)}_m,r^{(3)}_m)}\leq \frac{C\varepsilon^{a-b-\frac{B}{\ell}}\varepsilon_m}{\gamma s_{m}^{\tau_{1}+1}}, \ \ \|\theta-\phi\|_{D(s^{(3)}_m,r^{(3)}_m)}\leq \frac{C\varepsilon^{a-b-\frac{B}{\ell}}\varepsilon_m}{\gamma s_{m}^{\tau_{1}}r_m}.
\end{equation}
By (\ref{fc3-30}), (\ref{fc3-41}) and Cauchy's estimate, we have
\begin{equation}\label{fc3-42}
\|\partial(\Phi_{m+1}-id)\|_{D(s^{(4)}_m,r_m^{(4)})}\leq \frac{C\varepsilon^{a-b-\frac{B}{\ell}}\varepsilon_m}{\gamma s_{m}^{\tau_{1}+1}r_m}.
\end{equation}
It follows that
\begin{equation}\label{fc3-43}
\|\partial(\Phi_{m+1}-id)\|_{D(s_{m+1},r_{m+1})}\leq \frac{1}{2^{m+2}}.
\end{equation}
By (\ref{fc3-6}), (\ref{fcf3-1}), (\ref{fc3-31}) and (\ref{fc3-43}), we have
\begin{eqnarray}\label{fc3-44}
\nonumber &&\|\partial\Phi^{(m+1)}(\phi,t,\rho)\|_{D(s_{m+1},r_{m+1})}\\
\nonumber&=&\|(\partial\Phi_1\circ\Phi_2\circ\cdots\circ\Phi_{m+1})(\partial\Phi_2\circ\Phi_3\circ\cdots\circ\Phi_{m+1})\cdots(\partial\Phi_{m+1})\|_{D(s_{m+1},r_{m+1})}\\
\nonumber&\leq&\prod_{j=0}^{m}(1+\frac{1}{2^{j+2}})\\
&\leq&2.
\end{eqnarray}
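The telescoping bound here only needs the elementary fact that $\prod_{j\geq 0}(1+2^{-(j+2)})\leq e^{1/2}<2$; a one-line numerical check (illustrative only):

```python
import math

# prod_{j>=0} (1 + 2^{-(j+2)}) <= exp(sum_j 2^{-(j+2)}) = exp(1/2) < 2,
# which is the bound used for the composed Jacobians above.
partial = 1.0
for j in range(200):
    partial *= 1.0 + 2.0**(-(j + 2))
```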
It follows that
\begin{equation}\label{fc3-45}
\Phi^{(m+1)}(D(s_{\nu},r_{\nu}))\subset \mathbf{T}^{d+1}_{2s_{\nu}}\times \mathbf{R}^{d}_{2s_{\nu}},\ \nu=m+1,m+2,\cdots.
\end{equation}
In fact, suppose that $w=\Phi^{(m+1)}(z)$ with $z=(\phi,t,\rho)\in D(s_{\nu},r_{\nu})$.
Since $\Phi^{(m+1)}$ is real for real arguments and $r_{\nu}<s_{\nu}$, we have
\begin{eqnarray}\label{fc3-46}
\nonumber &&|{\rm Im} w|=|{\rm Im} \Phi^{(m+1)}(z)|=|{\rm Im} \Phi^{(m+1)}(z)-{\rm Im} \Phi^{(m+1)}({\rm Re}z)|\\
\nonumber&\leq&| \Phi^{(m+1)}(z)- \Phi^{(m+1)}({\rm Re}z)|\\
\nonumber&\leq&\|\partial\Phi^{(m+1)}(\phi,t,\rho)\|_{D(s_{m+1},r_{m+1})}|{\rm Im} z|\\
&\leq&2|{\rm Im} z|\leq2s_{\nu}.
\end{eqnarray}
By (\ref{fc3-3}) and (\ref{fc3-45}), we have
\begin{equation}\label{fc3-47}
\|\frac{P_{m+1}\circ\Phi^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}}\|_{D(s_{m+1},r_{m+1})}\leq\frac{C\varepsilon_{m+1}}{\varepsilon^b},
\end{equation}
\begin{equation}\label{fc3-48}
\|P_{\nu}\circ\Phi^{(m+1)}(\phi,t,\rho)\|_{D(s_{\nu},r_{\nu})}\leq C\varepsilon_{\nu},\ \nu=m+2,m+3,\cdots.
\end{equation}
By (\ref{fc3-19}), (\ref{fc3-29}), (\ref{fc3-33}), (\ref{fc3-36}), (\ref{fc3-37})-(\ref{fc3-39}) and (\ref{fc3-47}), we have
\begin{equation}\label{fc3-49}
\|P^{(m+1)}(\phi,t,\rho)\|_{D(s_{m+1}, r_{m+1})}\leq C.
\end{equation}
The proof is finished by (\ref{fc3-31}), (\ref{fc3-34}), (\ref{fc3-35}), (\ref{fc3-43}), (\ref{fc3-48}) and (\ref{fc3-49}).
\end{proof}
By Lemma \ref{lem3-1}, there is a symplectic transformation $\Phi^{(m_0)}=\Phi_{0}\circ\Phi_{1}\circ\cdots\circ\Phi_{m_0}$ with
$$\Phi^{(m_0)}: D(s_{m_0}, r_{m_0})\rightarrow D(s_{0}, r_{0})$$
such that system (\ref{fc3-5}) is changed by $\Phi^{(m_0)}$ into
\begin{equation}\label{fc3-50}
H^{(m_0)}=\frac{H_0(I)}{\varepsilon^{a}}+\frac{h^{(m_0)}(t,I)}{\varepsilon^{b}}+\frac{\varepsilon^B P^{(m_0)}(\theta,t,I)}{\varepsilon^{b}}+ \sum_{\nu=m_0+1}^{\infty}\frac{P_{\nu}\circ\Phi^{(m_0)}(\theta,t,I)}{\varepsilon^{b}}
\end{equation}
where
\begin{equation}\label{fc3-51}
\|h^{(m_0)}(t,I)\|_{D(s_{m_0}, r_{m_0})}\leq C,
\end{equation}
\begin{equation}\label{fc3-52}
\|P^{(m_0)}(\theta,t,I)\|_{D(s_{m_0}, r_{m_0})}\leq C,
\end{equation}
\begin{equation}\label{fc3-53}
\|P_{\nu}\circ\Phi^{(m_0)}(\theta,t,I)\|_{D(s_{\nu}, r_{\nu})}\leq C\varepsilon_{\nu},\ \nu=m_0+1,m_0+2,\cdots.
\end{equation}
\section{A symplectic transformation}\label{sec4}
Let $[h^{(m_0)}](I)=\widehat{h^{(m_0)}} (0,I)$ be the $0$-th Fourier coefficient of $h^{(m_0)}(t,I)$ as a function of $t$.
In order to eliminate the dependence of $h^{(m_0)} (t,I)$ on the time variable $t$, we introduce the following transformation
\begin{equation}\label{fcc4-1}
\Psi:\ \rho=I,\ \phi=\theta+\frac{\partial \tilde S(t,I)}{\partial I},
\end{equation}
where $\tilde S(t,I)=\frac{1}{\varepsilon^b}\int_0^t\left( [ h^{(m_0)}](I)-h^{(m_0)}(\xi,I) \right) d \xi.$ It is symplectic, since a direct verification gives $d\,\rho\wedge d\phi=d\, I\wedge d\, \theta$.
Note that this transformation is not small, so $\Psi$ is not close to the identity. Let \begin{equation*}\label{fc4-1}
\tilde{s}_0=\varepsilon^{b+\frac{(m_0+1)(2\tau_1+3) B}{\ell m_0}+4\mu_1+\frac{2B}{\ell}}, \ \tilde{r}_{0}=\varepsilon^{a+(\tau_2+1)b+\frac{(m_0+1)(2\tau_1+3)(\tau_2+1)B}{m_0\ell}+4\mu_1(\tau_2+1)+\mu_2+\frac{2B(\tau_2+1)}{\ell}},
\end{equation*}
where $\mu_1=\frac{(a-b)^2\mu}{1000(a+b+1)(d+3)(5a-b+2ad)}$, $\mu_2=2\mu_1$.
We introduce a domain
$$\mathcal{D}:=\left\{t=t_1+t_2i\in \mathbf T_{s_{m_0}}:\; |t_2|\le \tilde{s}_0 \right\}\times\left\{I=I_1+I_2i\in B(r_{m_0}):\; |I_2|\le\tilde{r}_{0}\right\},$$
where $t_1,t_2,I_1,I_2$ are real numbers. Note that $h^{(m_0)}(t,I)$ is real for real arguments. Thus, for $(t,I)\in \mathcal{D}$, we have
\begin{eqnarray}\label{fc4-2}
\nonumber&&\|{\rm Im}\frac{\partial \tilde S(t,I)}{\partial I}\|_{\mathcal{D}}\\
\nonumber &=&\|{\rm Im}\frac{\partial \tilde S}{\partial I}(t_1+t_2i,I_1+I_2i)-{\rm Im}\frac{\partial \tilde S}{\partial I}(t_1,I_1)\|_{\mathcal{D}}\\
\nonumber&\leq&\|\frac{\partial \tilde S}{\partial I}(t_1+t_2i,I_1+I_2i)-\frac{\partial \tilde S}{\partial I}(t_1,I_1)\|_{\mathcal{D}}\\
\nonumber&\leq&\|\frac{\partial^2 \tilde S(t,I)}{\partial I \partial t}\|_{\mathcal{D}}\|t_2i\|_{\mathcal{D}}+\|\frac{\partial^2 \tilde S(t,I)}{\partial^2 I}\|_{\mathcal{D}}\|I_2i\|_{\mathcal{D}}\\
\nonumber&\leq&\frac{C\tilde{s}_0}{\varepsilon^br_{m_0}s_{m_0}}+\frac{C\tilde{r}_0}{\varepsilon^br^2_{m_0}}\\
&\leq&\frac{1}{2}s_{m_0}.
\end{eqnarray}
By (\ref{fc3-50}), (\ref{fcc4-1}) and (\ref{fc4-2}), we have
\begin{equation}\label{fc4-3}
\Psi(\mathbf T^{d}_{s_{m_0}/2}\times\mathcal D)\subset D(s_{m_0},r_{m_0})
\end{equation}
and
\begin{eqnarray}\label{fc4-4}
\nonumber \tilde{H}(\phi,t,\rho)&=&H^{(m_0)}\circ\Psi\nonumber\\
&=&\frac{H_0(\rho)}{\varepsilon^{a}}+\frac{[ h^{(m_0)}](\rho)}{\varepsilon^{b}}+\frac{\varepsilon^B \breve{P}^{(m_0)}(\phi,t,\rho)}{\varepsilon^{b}}+ \sum_{\nu=m_0+1}^{\infty}\frac{P_{\nu}\circ\Phi^{(m_0)}\circ\Psi(\phi,t,\rho)}{\varepsilon^{b}},
\end{eqnarray}
where $\breve{P}^{(m_0)}(\phi,t,\rho)=P^{(m_0)}(\phi-\frac{\partial}{\partial I}\tilde S(t,\rho),t,\rho)$ and $\|\breve{P}^{(m_0)}\|_{\mathbf T^{d}_{s_{m_0}/2}\times\mathcal D}\leq C$.
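The shear structure of $\Psi$ (action unchanged, angle shifted by an $I$-derivative) is what makes symplecticity automatic; a minimal finite-difference sanity check in one degree of freedom (the sample generating derivative below is an illustrative placeholder, not the $\tilde S$ of the text):

```python
import numpy as np

# Check that a shear  rho = I,  phi = theta + dS/dI(I)  is symplectic:
# its Jacobian J satisfies J^T Omega J = Omega with Omega = [[0,1],[-1,0]].
def dS_dI(I):                      # any smooth function of I works here
    return np.sin(3*I) + I**2

def Psi(theta, I):
    return theta + dS_dI(I), I     # (phi, rho)

theta0, I0, h = 0.7, 0.4, 1e-6
J = np.empty((2, 2))               # Jacobian via central differences
for col, (dth, dI) in enumerate([(h, 0.0), (0.0, h)]):
    fp = np.array(Psi(theta0 + dth, I0 + dI))
    fm = np.array(Psi(theta0 - dth, I0 - dI))
    J[:, col] = (fp - fm) / (2*h)

Omega = np.array([[0.0, 1.0], [-1.0, 0.0]])
err = np.abs(J.T @ Omega @ J - Omega).max()
```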
\section{Iterative lemma}\label{sec5}
By (\ref{fc3-51}), we have
\begin{equation}\label{fc5-1}
\varepsilon^{a-b}\|\frac{\partial^2 [ h^{(m_0)}](\rho)}{\partial \rho^2} \|_{D(0,\frac{r_{m_0}}{2})}\leq\frac{C\varepsilon^{a-b}}{r^2_{m_0}}\ll1.
\end{equation}
Then by (\ref{fcb1-3}), (\ref{fc3-51}) and (\ref{fc5-1}), solving the equation $\frac{\partial H_0(\rho)}{\partial \rho}+\varepsilon^{a-b}\frac{\partial [ h^{(m_0)}](\rho)}{\partial \rho} =\omega(I_0)$ by Newton iteration, we get that there exists $\tilde{I}_{0}\in \mathbf{R}^{d}\cap D(0,\frac{r_{m_0}}{2})$ with $|\tilde{I}_{0}-I_0|\leq\frac{C\varepsilon^{a-b}}{r_{m_0}}\ll r_{m_0}$ such that
\begin{equation}\label{fc5-2}
\frac{\partial H_0}{\partial \rho}(\tilde{I}_{0})+\varepsilon^{a-b}\frac{\partial [ h^{(m_0)}]}{\partial \rho} (\tilde{I}_{0})=\omega(I_0),
\end{equation}
where $\omega(I_0)=\frac{\partial H_0}{\partial \rho}(I_{0})$. For any $c>0$ and any $y_0\in\mathbf{{R}}^{d}$, let
\begin{equation*}
B(y_0,c)=\{z\in\mathbf C^d: \, |z-y_0|\le c\}.
\end{equation*}
Define
\begin{equation*}
\tilde{D}(s,r(I))=\mathbf{T}^{d+1}_{s}\times B(I,r),\ \tilde{D}(s,0)=\mathbf{T}^{d+1}_{s},\ \tilde{D}(0,r(I))=B(I,r).
\end{equation*}
Let $\tilde{\varepsilon}_0=\varepsilon_{m_0}=\varepsilon^B$. Since $|\tilde{I}_{0}-I_0|\ll r_{m_0}$, by (\ref{fc4-3}) we have
\begin{equation}\label{fbc5-1}
\Psi(\tilde{D}(\tilde{s}_0,\tilde{r}_0(\tilde{I}_0)))\subset D(s_{m_0},r_{m_0}).
\end{equation}
Then we can rewrite equation (\ref{fc4-4}) as follows:
\begin{equation}\label{fc5-3}
\tilde{H}^{(0)}(\theta,t,I)=\frac{H^{(0)}_0(I)}{\varepsilon^{a}}+ \frac{\tilde{P}^{(0)}(\theta,t,I)}{\varepsilon^{b}}+ \sum_{\nu=m_0+1}^{\infty}\frac{P_{\nu}\circ\Phi^{(m_0)}\circ\Psi(\theta,t,I)}{\varepsilon^{b}},
\end{equation}
where $(\theta,t,I)\in \tilde{D}(\tilde{s}_0,\tilde{r}_0(\tilde{I}_0))$, $H^{(0)}_0(I)=H_0(I)+\varepsilon^{a-b}[ h^{(m_0)}](I)$, $\tilde{P}^{(0)}=\varepsilon^B \breve{P}^{(m_0)}$ and
\begin{equation}\label{fc5-4}
\frac{\partial H^{(0)}_0}{\partial I}(\tilde{I}_{0})=\omega(I_0),
\end{equation}
\begin{equation}\label{fc5-5}
\|\tilde{P}^{(0)}\|_{\tilde{D}(\tilde{s}_0,\tilde{r}_0(\tilde{I}_0))}\leq C\tilde{\varepsilon}_0.
\end{equation}
By (\ref{fcb1-3}) and (\ref{fc5-1}), we get that there exist constants $M_0>0$, $h_0>0$ such that
\begin{equation}\label{fc5-6}
\det\left(\frac{\partial^2 H^{(0)}_0(I)}{\partial I^2}\right),\ \det\left(\frac{\partial^2 H^{(0)}_0(I)}{\partial I^2}\right)^{-1}\leq M_0, \ \forall I\in \tilde{D}(0,\tilde{r}_0(\tilde{I}_0))
\end{equation}
and
\begin{equation}\label{fc5-7}
\|\frac{\partial^2 H^{(0)}_0(I)}{\partial I^2}\|_{\tilde{D}(0,\tilde{r}_0(\tilde{I}_0))}\leq h_0.
\end{equation}
Define sequences
\begin{itemize}
\item $ \tilde{\varepsilon}_0=\varepsilon_{m_0}=\varepsilon^B$, $\tilde{\varepsilon}_{j+1}=\tilde{\varepsilon}_{j}^{1+\mu_3}=\varepsilon_{m_0+1+j}$ with $\mu_3=\frac{(a-b)\mu}{10B}$, $j=0,1,\cdots;$
\item $ \tilde{s}_0=\varepsilon^{b+\frac{(m_0+1)(2\tau_1+3) B}{\ell m_0}+4\mu_1+\frac{2B}{\ell}}$ with $\mu_1=\frac{(a-b)^2\mu}{1000(a+b+1)(d+3)(5a-b+2ad)}$, $\tilde{s}_{j+1}= \tilde{s}_{j}^{1+\mu_3}$, $\tilde{s}_j^{(l)}=\tilde{s}_{j}-\frac{l}{10}(\tilde{s}_j-\tilde{s}_{j+1})$, $l=0,1,\cdots,10$, $j=0,1,2,\cdots;$
\item $\tilde{r}_{0}=\varepsilon^{a+(\tau_2+1)b+\frac{(m_0+1)(2\tau_1+3)(\tau_2+1)B}{m_0\ell}+4\mu_1(\tau_2+1)+\mu_2+\frac{2B(\tau_2+1)}{\ell}}$ with $\mu_2=2\mu_1$, $\tilde{r}_{j+1}= \tilde{r}_{j}^{1+\mu_3}$, $\tilde{r}_j^{(l)}=\tilde{r}_{j}-\frac{l}{10}(\tilde{r}_j-\tilde{r}_{j+1})$, $l=0,1,\cdots,10$, $j=0,1,2,\cdots;$
\item $\tilde{K}_j=\frac{2}{\tilde{s}_j}\log\frac{1}{\tilde{\varepsilon}_j},\ j=0,1,2,\cdots;$
\item $h_j=h_0(2-\frac{1}{2^j}),\ j=0,1,2,\cdots;$
\item $M_j=M_0(2-\frac{1}{2^j}),\ j=0,1,2,\cdots.$
\end{itemize}
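A small numerical illustration of how these scales interact (the starting values below are illustrative placeholders, not the $\varepsilon$, $B$, $\mu_3$ fixed in the text): with $\tilde{K}_j=\frac{2}{\tilde{s}_j}\log\frac{1}{\tilde{\varepsilon}_j}$, the truncation factor $e^{-\tilde{K}_j\tilde{s}_j/2}$ equals $\tilde{\varepsilon}_j$ exactly, which is what makes the high-frequency tail of order $\tilde{\varepsilon}_{j+1}$.

```python
import math

# Sample starting scales (placeholders) and the super-exponential updates
# eps_{j+1} = eps_j^{1+mu3}, s_{j+1} = s_j^{1+mu3} defined above.
eps, s, mu3 = 1e-3, 1e-2, 0.1
cutoffs, epss = [], []
for j in range(5):
    K = (2.0 / s) * math.log(1.0 / eps)      # K_j = (2/s_j) log(1/eps_j)
    cutoffs.append(math.exp(-K * s / 2.0))   # truncation factor e^{-K_j s_j / 2}
    epss.append(eps)
    eps, s = eps**(1 + mu3), s**(1 + mu3)
```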
We claim that \begin{equation}\label{fc5-8}
\|P_{\nu}\circ\Phi^{(m_0)}\circ\Psi(\theta,t,I)\|_{\tilde{D}(\tilde{s}_{\nu-m_0},\tilde{r}_{\nu-m_0}(\tilde{I}_0))}\leq C\varepsilon_{\nu}= C\tilde{\varepsilon}_{\nu-m_0},\ \nu=m_0+1,m_0+2,\cdots.
\end{equation}
In fact, for $(t,I)=(t_1+t_2i,I_1+I_2i)\in \tilde{D}(\tilde{s}_{\nu-m_0},\tilde{r}_{\nu-m_0}(\tilde{I}_0))$, where $t_1,t_2,I_1,I_2$ are real numbers, we have
\begin{eqnarray}\label{fc5-9}
\nonumber&&\|{\rm Im}\frac{\partial \tilde S(t,I)}{\partial I}\|_{\tilde{D}(\tilde{s}_{\nu-m_0},\tilde{r}_{\nu-m_0}(\tilde{I}_0))}\\
\nonumber &=&\|{\rm Im}\frac{\partial \tilde S}{\partial I}(t_1+t_2i,I_1+I_2i)-{\rm Im}\frac{\partial \tilde S}{\partial I}(t_1,I_1)\|_{\tilde{D}(\tilde{s}_{\nu-m_0},\tilde{r}_{\nu-m_0}(\tilde{I}_0))}\\
\nonumber&\leq&\|\frac{\partial \tilde S}{\partial I}(t_1+t_2i,I_1+I_2i)-\frac{\partial \tilde S}{\partial I}(t_1,I_1)\|_{\tilde{D}(\tilde{s}_{\nu-m_0},\tilde{r}_{\nu-m_0}(\tilde{I}_0))}\\
\nonumber&\leq&\|\frac{\partial^2 \tilde S(t,I)}{\partial I \partial t}\|_{\mathcal{D}}\|t_2i\|_{\tilde{D}(\tilde{s}_{\nu-m_0},\tilde{r}_{\nu-m_0}(\tilde{I}_0))}+\|\frac{\partial^2 \tilde S(t,I)}{\partial^2 I}\|_{\mathcal{D}}\|I_2i\|_{\tilde{D}(\tilde{s}_{\nu-m_0},\tilde{r}_{\nu-m_0}(\tilde{I}_0))}\\
\nonumber&\leq&\frac{C\tilde{s}_{\nu-m_0}}{\varepsilon^br_{m_0}s_{m_0}}+\frac{C\tilde{r}_{\nu-m_0}}{\varepsilon^br^2_{m_0}}\\
&\leq&\frac{1}{2}s_{\nu}.
\end{eqnarray}
It follows that
\begin{equation}\label{fc5-10}
\Psi(\tilde{D}(\tilde{s}_{\nu-m_0},\tilde{r}_{\nu-m_0}(\tilde{I}_0)))\subset \tilde{D}(s_{\nu},\tilde{r}_{\nu-m_0}(\tilde{I}_0)).
\end{equation}
Suppose that $w=\Phi^{(m_0)}(z)$ with $z=(\theta,t,I)\in \tilde{D}(s_{\nu},\tilde{r}_{\nu-m_0}(\tilde{I}_0))\subset D(s_{m_0},r_{m_0})$.
Since $\Phi^{(m_0)}$ is real for real arguments and $\tilde{r}_{\nu-m_0}<r_{\nu}<s_{\nu}$, by (\ref{fc3-44}) with $m=m_0-1$ we have
\begin{eqnarray}\label{fc5-11}
\nonumber &&|{\rm Im} w|=|{\rm Im} \Phi^{(m_0)}(z)|=|{\rm Im} \Phi^{(m_0)}(z)-{\rm Im} \Phi^{(m_0)}({\rm Re}z)|\\
\nonumber&\leq&| \Phi^{(m_0)}(z)- \Phi^{(m_0)}({\rm Re}z)|\\
\nonumber&\leq&\|\partial\Phi^{(m_0)}(\theta,t,I)\|_{D(s_{m_0},r_{m_0})}|{\rm Im} z|\\
&\leq&2|{\rm Im} z|\leq2s_{\nu}.
\end{eqnarray}
Then by (\ref{fc5-10}) and (\ref{fc5-11}), we have
\begin{equation}\label{fc5-12}
\Phi^{(m_0)}\circ\Psi(\tilde{D}(\tilde{s}_{\nu-m_0},\tilde{r}_{\nu-m_0}(\tilde{I}_0)))\subset \mathbf{T}^{d+1}_{2s_{\nu}}\times \mathbf{R}^{d}_{2s_{\nu}},\ \nu=m_0+1,m_0+2,\cdots.
\end{equation}
By (\ref{fc3-3}) and (\ref{fc5-12}), the proof of (\ref{fc5-8}) is completed. Clearly, by (\ref{fc5-4})-(\ref{fc5-8}), system (\ref{fc5-3}) satisfies (\ref{fc5-15})-(\ref{fc5-19}) with $m=0$. Then we have the following lemma.
\begin{lem}\label{lem5-1}{\rm (Iterative Lemma)} Suppose that we have constructed $m+1$ {\rm(}$m=0,1,2,\cdots${\rm)} symplectic transformations $\tilde{\Phi}_0=id$, $\tilde{\Phi}_1$, $\cdots$, $\tilde{\Phi}_m$ with
\begin{equation}\label{fc5-13}
\tilde{\Phi}_j:\tilde{D}(\tilde{s}_j,\tilde{r}_j(\tilde{I}_j))\rightarrow \tilde{D}(\tilde{s}_{j-1},\tilde{r}_{j-1}(\tilde{I}_{j-1})),\ j=1,2,\cdots,m
\end{equation}
and
\begin{equation}\label{fcg5-1}
\|\partial(\tilde{\Phi}_{j}-id)\|_{\tilde{D}(\tilde{s}_j,\tilde{r}_j(\tilde{I}_j))}\leq \frac{1}{2^{j+1}},\ j=1,2,\cdots,m
\end{equation}
where $\tilde{I}_j\in \mathbf{R}^{d}, \ j=0,1,2,\cdots,m$ such that system {\rm(\ref{fc5-3})} is changed by $\tilde{\Phi}^{(m)}=\tilde{\Phi}_0\circ\tilde{\Phi}_1\circ\cdots\circ\tilde{\Phi}_m$ into
\begin{equation}\label{fc5-14}
\tilde{H}^{(m)}=\tilde{H}^{(0)}\circ\tilde{\Phi}^{(m)}=\frac{H^{(m)}_0(I)}{\varepsilon^{a}}+ \frac{\tilde{P}^{(m)}(\theta,t,I)}{\varepsilon^{b}}+ \sum_{\nu=m_0+m+1}^{\infty}\frac{P_{\nu}\circ\Phi^{(m_0)}\circ\Psi\circ\tilde{\Phi}^{(m)}(\theta,t,I)}{\varepsilon^{b}},
\end{equation}
where
\begin{equation}\label{fc5-15}
\frac{\partial H^{(m)}_0}{\partial I}(\tilde{I}_{m})=\omega(I_0),
\end{equation}
\begin{equation}\label{fc5-16}
\|\tilde{P}^{(m)}\|_{\tilde{D}(\tilde{s}_m,\tilde{r}_m(\tilde{I}_m))}\leq C\tilde{\varepsilon}_m,
\end{equation}
\begin{equation}\label{fc5-17}
\det\left(\frac{\partial^2 H^{(m)}_0(I)}{\partial I^2}\right),\ \det\left(\frac{\partial^2 H^{(m)}_0(I)}{\partial I^2}\right)^{-1}\leq M_m, \ \forall I\in \tilde{D}(0,\tilde{r}_m(\tilde{I}_m)),
\end{equation}
\begin{equation}\label{fc5-18}
\|\frac{\partial^2 H^{(m)}_0(I)}{\partial I^2}\|_{\tilde{D}(0,\tilde{r}_m(\tilde{I}_m))}\leq h_m,
\end{equation}
\begin{equation}\label{fc5-19}
\|P_{\nu}\circ\Phi^{(m_0)}\circ\Psi\circ\tilde{\Phi}^{(m)}(\theta,t,I)\|_{\tilde{D}(\tilde{s}_{\nu-m_0},\tilde{r}_{\nu-m_0}(\tilde{I}_m))}\leq C\tilde{\varepsilon}_{\nu-m_0},\ \nu=m_0+m+1,m_0+m+2,\cdots.
\end{equation}
Then there is a symplectic transformation $\tilde{\Phi}_{m+1}$ with
\begin{equation}\label{fc5-20}
\tilde{\Phi}_{m+1}:\tilde{D}(\tilde{s}_{m+1},\tilde{r}_{m+1}(\tilde{I}_{m+1}))\rightarrow \tilde{D}(\tilde{s}_m,\tilde{r}_m(\tilde{I}_m))
\end{equation}
and
\begin{equation*}
\|\partial(\tilde{\Phi}_{m+1}-id)\|_{\tilde{D}(\tilde{s}_{m+1},\tilde{r}_{m+1}(\tilde{I}_{m+1}))}\leq \frac{1}{2^{m+2}}
\end{equation*}
where $\tilde{I}_{m+1}\in \mathbf{R}^{d}$ such that system {\rm(\ref{fc5-14})} is changed by $\tilde{\Phi}_{m+1}$ into {\rm($\tilde{\Phi}^{(m+1)}=\tilde{\Phi}_0\circ\tilde{\Phi}_1\circ\cdots\circ\tilde{\Phi}_{m+1}$)}
\begin{eqnarray*}
\nonumber \tilde{H}^{(m+1)}&=&\tilde{H}^{(m)}\circ\tilde{\Phi}_{m+1}=\tilde{H}^{(0)}\circ\tilde{\Phi}^{(m+1)}\nonumber\\
\nonumber&=&\frac{H^{(m+1)}_0(I)}{\varepsilon^{a}}+ \frac{\tilde{P}^{(m+1)}(\theta,t,I)}{\varepsilon^{b}}+ \sum_{\nu=m_0+m+2}^{\infty}\frac{P_{\nu}\circ\Phi^{(m_0)}\circ\Psi\circ\tilde{\Phi}^{(m+1)}(\theta,t,I)}{\varepsilon^{b}},
\end{eqnarray*}
where $\tilde{H}^{(m+1)}$ satisfies {\rm(\ref{fc5-15})}-{\rm(\ref{fc5-19})} by replacing $m$ by $m+1$.
\end{lem}
\begin{proof}
Assume that the change $\tilde{\Phi}_{m+1}$ is implicitly defined by
\begin{equation}\label{fc5-22}
\tilde{\Phi}_{m+1}: \begin{cases}
I=\rho+\frac{\partial S}{\partial \theta}, \\
\phi=\theta+\frac{\partial S}{\partial \rho},\\ t=t,
\end{cases}
\end{equation}
where $S=S(\theta, t, \rho)$ is the generating function, which will be proved to be analytic in a smaller domain
$\tilde{D}(\tilde{s}_{m+1}, \tilde{r}_{m+1}(\tilde{I}_{m+1})).$ By a simple computation, we have
$$d I\wedge d\theta=d\rho \wedge d\theta+\sum_{i,j=1}^d\frac{\partial^{2}S}{\partial\rho_{i}\partial\theta_{j}}d\rho_{i}\wedge d\theta_{j}=d\rho \wedge d\phi.$$
Thus, the coordinate change $\tilde{\Phi}_{m+1}$ is symplectic if it exists. Moreover, we get the changed Hamiltonian
\begin{eqnarray}\label{fc5-23}
\nonumber \tilde{H}^{(m+1)}&=&\tilde{H}^{(m)}\circ\tilde{\Phi}_{m+1}\nonumber\\
\nonumber &=&\frac{H^{(m)}_0(\rho+\frac{\partial S}{\partial \theta})}{\varepsilon^{a}}+\frac{ \tilde{P}^{(m)}(\theta,t,\rho+\frac{\partial S}{\partial \theta})}{\varepsilon^{b}}+\frac{P_{m_0+m+1}\circ\Phi^{(m_0)}\circ\Psi\circ\tilde{\Phi}^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}}\\
&&+\frac{\partial S}{\partial t}+\sum_{\nu=m_0+m+2}^{\infty}\frac{P_{\nu}\circ\Phi^{(m_0)}\circ\Psi\circ\tilde{\Phi}^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}},
\end{eqnarray}
where $\theta=\theta(\phi, t, \rho)$ is implicitly defined by (\ref{fc5-22}). By Taylor's formula, we have
\begin{eqnarray}\label{fc5-24}
\nonumber \tilde{H}^{(m+1)} &=&\frac{H^{(m)}_0(\rho)}{\varepsilon^{a}}+\langle\frac{\omega^{(m)}(\rho)}{\varepsilon^a}, \frac{\partial S}{\partial \theta}\rangle+\frac{\partial S}{\partial t}+\frac{ \tilde{P}^{(m)}(\theta,t,\rho)}{\varepsilon^{b}}+R_1\\
\nonumber &&+\frac{P_{m_0+m+1}\circ\Phi^{(m_0)}\circ\Psi\circ\tilde{\Phi}^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}}\\
&&+\sum_{\nu=m_0+m+2}^{\infty}\frac{P_{\nu}\circ\Phi^{(m_0)}\circ\Psi\circ\tilde{\Phi}^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}},
\end{eqnarray}
where $\omega^{(m)}(\rho)=\frac{\partial H^{(m)}_0}{\partial I}(\rho)$ and
\begin{equation}\label{fc5-25}
R_1=\frac{1}{\varepsilon^{a}}\int_{0}^{1}(1-\tau)\frac{\partial^2 H^{(m)}_0}{\partial I^2}(\rho+\tau\frac{\partial S}{\partial \theta}) (\frac{\partial S}{\partial \theta})^{2}d\tau+\frac{1}{\varepsilon^b}\int_{0}^{1}\frac{\partial \tilde{P}^{(m)}}{\partial I}(\theta, t, \rho+\tau\frac{\partial S}{\partial\theta})\frac{\partial S}{\partial \theta}d\tau.
\end{equation}
Expanding $\tilde{P}^{(m)}(\theta,t,\rho)$ into a Fourier series,
\begin{equation}\label{fc5-26}
\tilde{P}^{(m)}(\theta,t,\rho)=\sum_{(k,l)\in\mathbf{Z}^d\times \mathbf{Z}}\widehat{\tilde{P}^{(m)}}(k,l,\rho)e^{i(\langle k,\theta\rangle+lt)}:=\tilde{P}_{1}^{(m)}(\theta,t,\rho)+\tilde{P}_{2}^{(m)}(\theta,t,\rho),
\end{equation}
where $\tilde{P}_{1}^{(m)}=\sum_{|k|+|l|\leq \tilde{K}_m}\widehat{\tilde{P}^{(m)}}(k,l,\rho)e^{i(\langle k,\theta\rangle+lt)}$, $\tilde{P}_{2}^{(m)}=\sum_{|k|+|l|> \tilde{K}_m}\widehat{\tilde{P}^{(m)}}(k,l,\rho)e^{i(\langle k,\theta\rangle+lt)}$.
Then, we derive the homological equation:
\begin{equation}\label{fc5-27}
\frac{\partial S}{\partial t}+\langle \frac{\omega^{(m)}(\rho)}{\varepsilon^{a}}, \frac{\partial S}{\partial\theta}\rangle
+\frac{ \tilde{P}_1^{(m)}(\theta,t,\rho)}{\varepsilon^{b}}-\frac{\widehat{\tilde{P}^{(m)}}(0,0,\rho)}{\varepsilon^{b}}=0,
\end{equation}
where $\widehat{\tilde{P}^{(m)}}(0, 0, \rho)$ is the $0$-th Fourier coefficient of $\tilde{P}^{(m)}(\theta,t,\rho)$ as a function of $(\theta,t)$. Let
\begin{equation}\label{fc5-28}
S(\theta,t,\rho)=\sum_{|k|+|l|\leq \tilde{K}_m,(k,l)\neq (0,0)}\widehat{S}(k, l, \rho)e^{i(\langle k, \theta\rangle+lt)}.
\end{equation}
By passing to Fourier coefficients, we have
\begin{equation}\label{fc5-29}
\widehat{S}(k, l, \rho)=\frac{i}{\varepsilon^b}\cdot\frac{\widehat{\tilde{P}^{(m)}}(k, l, \rho)}{\varepsilon^{-a}\langle k, \omega^{(m)}(\rho)\rangle +l},\;|k|+|l|\leq \tilde{K}_m,\; (k,l)\in \mathbf{Z}^{d}\times \mathbf{Z}\setminus\{(0,0)\}.
\end{equation}
Then we can solve homological equation (\ref{fc5-27}) by setting
\begin{equation}\label{fc5-30}
S(\theta,t,\rho)=\sum_{|k|+|l|\leq \tilde{K}_m,(k,l)\neq (0,0)}
\frac{i}{\varepsilon^b}\cdot\frac{\widehat{\tilde{P}^{(m)}}(k, l, \rho)e^{i(\langle k, \theta\rangle+lt)}}{\varepsilon^{-a}\langle k, \omega^{(m)}(\rho)\rangle +l}.
\end{equation}
By (\ref{fc1-20}), (\ref{fc5-15}) and (\ref{fc5-17}), for all $\rho\in \tilde{D}(\tilde{s}_m,\tilde{r}_m(\tilde{I}_m))$, $|k|+|l|\leq \tilde{K}_{m}$, $(k,l)\neq (0,0)$, we have
\begin{eqnarray}\label{fc5-31}
|\varepsilon^{-a}\langle k, \omega^{(m)}(\rho)\rangle+l|
&\geq &|\varepsilon^{-a}\langle k, \omega^{(m)}(\tilde{I}_m)\rangle+l|-|\varepsilon^{-a}\langle k, \omega^{(m)}(\tilde{I}_m)-\omega^{(m)}(\rho)\rangle|\nonumber\\
&\geq & \frac{\gamma}{|k|^{\tau_2}}-C\varepsilon^{-a}|k|\tilde{r}_{m}\nonumber\\
&\geq &\frac{\gamma}{2|k|^{\tau_2}}.
\end{eqnarray}
Then, by (\ref{fc5-16}), (\ref{fc5-29})-(\ref{fc5-31}), using R\"ussmann's subtle arguments \cite{a27, a28}, which give optimal estimates for small-divisor series (see also Lemma 5.1 in \cite{a29}), we get
\begin{equation}\label{fc5-32}
\|S(\theta,t,\rho)\|_{\tilde{D}(\tilde{s}^{(1)}_m, \tilde{r}_m(\tilde{I}_m))}
\leq\frac{C\|\tilde{P}^{(m)}\|_{\tilde{D}(\tilde{s}_m,\tilde{r}_m(\tilde{I}_m))}}{\gamma\varepsilon^b\tilde{s}_{m}^{\tau_{2}}}\leq\frac{C\tilde{\varepsilon}_m}{\gamma\varepsilon^b\tilde{s}_{m}^{\tau_{2}}}.
\end{equation}
Then by Cauchy's estimate, we have
\begin{equation}\label{fc5-33}
\|\frac{\partial S}{\partial \theta}\|_{\tilde{D}(\tilde{s}^{(2)}_m, \tilde{r}_m(\tilde{I}_m))}\leq \frac{C\tilde{\varepsilon}_m}{\gamma\varepsilon^b\tilde{s}_{m}^{\tau_{2}+1}}\ll \tilde{r}_m-\tilde{r}_{m+1},\ \|\frac{\partial S}{\partial \rho}\|_{\tilde{D}(\tilde{s}^{(1)}_m, \tilde{r}^{(1)}_m(\tilde{I}_m))}\leq \frac{C\tilde{\varepsilon}_m}{\gamma\varepsilon^b\tilde{s}_{m}^{\tau_{2}}\tilde{r}_m}\ll \tilde{s}_m-\tilde{s}_{m+1}.
\end{equation}
By (\ref{fc5-22}), (\ref{fc5-33}) and the implicit function theorem, we get that there are analytic functions
$u=u(\phi, t, \rho), v=v(\phi, t, \rho)$ defined on
the domain $\tilde{D}(\tilde{s}^{(3)}_m, \tilde{r}^{(3)}_m(\tilde{I}_m))$ with
\begin{equation}\label{fc5-34}
\frac{\partial S(\theta,t,\rho)}{\partial \theta}=u(\phi, t, \rho),\ \frac{\partial S(\theta,t,\rho)}{\partial \rho}=-v(\phi, t, \rho)
\end{equation}
and
\begin{equation}\label{fc5-35}
\|u\|_{\tilde{D}(\tilde{s}^{(3)}_m, \tilde{r}^{(3)}_m(\tilde{I}_m))}\leq \frac{C\tilde{\varepsilon}_m}{\gamma\varepsilon^b\tilde{s}_{m}^{\tau_{2}+1}}\ll \tilde{r}_m-\tilde{r}_{m+1},\ \|v\|_{\tilde{D}(\tilde{s}^{(3)}_m, \tilde{r}^{(3)}_m(\tilde{I}_m))}\leq \frac{C\tilde{\varepsilon}_m}{\gamma\varepsilon^b\tilde{s}_{m}^{\tau_{2}}\tilde{r}_m}\ll \tilde{s}_m-\tilde{s}_{m+1}
\end{equation}
such that
\begin{equation}\label{fc5-36}
\tilde{\Phi}_{m+1}: \begin{cases}
I=\rho+u(\phi, t, \rho), \\
\theta=\phi+v(\phi, t, \rho),\\ t=t.
\end{cases}
\end{equation}
Then, we have
\begin{equation}\label{fc5-37}
\tilde{\Phi}_{m+1}(\tilde{D}(\tilde{s}^{(3)}_m, \tilde{r}^{(3)}_m(\tilde{I}_m)))\subseteq \tilde{D}(\tilde{s}_{m},\tilde{r}_{m}(\tilde{I}_m)).
\end{equation}
Let
\begin{equation}\label{fc5-38}
H_0^{(m+1)}(\rho)=H_0^{(m)}(\rho)+\varepsilon^{a-b}\widehat{\tilde{P}^{(m)}}(0, 0, \rho).
\end{equation}
By Cauchy's estimate and (\ref{fc5-16}), we have
\begin{equation}\label{fc5-39}
\|\frac{\partial^p \widehat{\tilde{P}^{(m)}}(0, 0, \rho)}{\partial \rho^p}\|_{\tilde{D}(0,\tilde{r}^{(p)}_m(\tilde{I}_m))}\leq \frac{C \tilde{\varepsilon}_m}{\tilde{r}_m^p},\ \ p=1,2.
\end{equation}
By (\ref{fc5-17}), (\ref{fc5-18}), (\ref{fc5-38}) and (\ref{fc5-39}), we have
\begin{equation}\label{fc5-40}
\det\left(\frac{\partial^2 H^{(m+1)}_0(\rho)}{\partial \rho^2}\right),\ \det\left(\frac{\partial^2 H^{(m+1)}_0(\rho)}{\partial \rho^2}\right)^{-1}\leq M_{m+1}, \ \forall \rho\in \tilde{D}(0,\tilde{r}^{(2)}_m(\tilde{I}_m))
\end{equation}
and
\begin{equation}\label{fc5-41}
\|\frac{\partial^2 H^{(m+1)}_0(\rho)}{\partial \rho^2}\|_{\tilde{D}(0,\tilde{r}^{(2)}_m(\tilde{I}_m))}\leq h_{m+1}.
\end{equation}
By (\ref{fc5-38}), we have
\begin{equation}\label{fc5-42}
\frac{\partial H_0^{(m+1)}(\rho)}{\partial \rho}=\frac{\partial H_0^{(m)}(\rho)}{\partial \rho}+\varepsilon^{a-b}\frac{\partial \widehat{\tilde{P}^{(m)}}(0, 0, \rho)}{\partial \rho}.
\end{equation}
Note that $H_0^{(m+1)}(\rho)$, $H_0^{(m)}(\rho)$ and $\widehat{\tilde{P}^{(m)}}(0, 0, \rho)$ are real analytic on $\tilde{D}(0,\tilde{r}^{(2)}_m(\tilde{I}_m))$ and that $\tilde{I}_{m}\in\mathbf{R}^{d}$. Then by (\ref{fc5-15}), (\ref{fc5-38})-(\ref{fc5-40}) and (\ref{fc5-42}), it is not
difficult to see (see Appendix ``A The Classical Implicit Function Theorem'' in \cite{a31}) that there exists a unique point $\tilde{I}_{m+1}\in\mathbf{R}^{d}$ such that
\begin{equation}\label{fc5-43}
\frac{\partial H^{(m+1)}_0}{\partial \rho}(\tilde{I}_{m+1})=\omega(I_0),
\end{equation}
\begin{equation}\label{fc5-44}
|\tilde{I}_{m+1}-\tilde{I}_{m}|\leq \frac{C\varepsilon^{a-b}\tilde{\varepsilon}_m}{\tilde{r}_m}\ll \tilde{r}_{m}.
\end{equation}
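The construction of $\tilde{I}_{m+1}$ is a standard Newton step for the frequency equation $\frac{\partial H^{(m+1)}_0}{\partial\rho}(\rho)=\omega(I_0)$; a toy one-dimensional sketch (the sample Hamiltonian gradient and values below are illustrative placeholders):

```python
# Newton iteration for grad_H(rho) = omega_target, starting from the
# previous point, as in the step producing I_{m+1} above.
import math

def grad_H(rho):
    return 2.0*rho + 0.05*math.cos(rho)   # plays the role of dH_0^{(m+1)}/drho

def hess_H(rho):
    return 2.0 - 0.05*math.sin(rho)       # nondegenerate, as in (fc5-40)

omega_target = 1.0
rho = 0.4                                  # previous point as initial guess
for _ in range(20):
    rho -= (grad_H(rho) - omega_target) / hess_H(rho)
residual = abs(grad_H(rho) - omega_target)
```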
By (\ref{fc5-37}) and (\ref{fc5-44}), we have
\begin{eqnarray}\label{fc5-45}
\tilde{\Phi}_{m+1}(\tilde{D}(\tilde{s}_{m+1}, \tilde{r}_{m+1}(\tilde{I}_{m+1})))&\subseteq&\tilde{\Phi}_{m+1}(\tilde{D}(\tilde{s}^{(9)}_m, \tilde{r}^{(9)}_m(\tilde{I}_{m+1})))\nonumber\\
&\subseteq&\tilde{\Phi}_{m+1}(\tilde{D}(\tilde{s}^{(3)}_m, \tilde{r}^{(3)}_m(\tilde{I}_m)))\subseteq \tilde{D}(\tilde{s}_{m},\tilde{r}_{m}(\tilde{I}_m)).
\end{eqnarray}
Let
\begin{equation}\label{fc5-46}
\frac{\tilde{P}^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}}=\frac{\tilde{P}_{2}^{(m)}(\theta,t,\rho)}{\varepsilon^{b}}+\frac{P_{m_0+m+1}\circ\Phi^{(m_0)}\circ\Psi\circ\tilde{\Phi}^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}}+R_1.
\end{equation}
Then by (\ref{fc5-24}), (\ref{fc5-26}), (\ref{fc5-27}), (\ref{fc5-38}) and (\ref{fc5-46}), we have
\begin{equation}\label{fc5-47}
\tilde{H}^{(m+1)}(\phi,t,\rho)=\frac{H^{(m+1)}_0(\rho)}{\varepsilon^{a}}+ \frac{\tilde{P}^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}}+ \sum_{\nu=m_0+m+2}^{\infty}\frac{P_{\nu}\circ\Phi^{(m_0)}\circ\Psi\circ\tilde{\Phi}^{(m+1)}(\phi,t,\rho)}{\varepsilon^{b}}.
\end{equation}
By (\ref{fc5-16}), (\ref{fc5-26}) and (\ref{fc5-44}), it is not difficult to show (see Lemma A.2 in \cite{a30}) that
\begin{equation}\label{fc5-48}
\| \frac{\tilde{P}_2^{(m)}(\theta,t,\rho)}{\varepsilon^{b}}\|_{\tilde{D}(\tilde{s}^{(9)}_m, \tilde{r}^{(9)}_m(\tilde{I}_{m+1}))}\leq \frac{C\tilde{\varepsilon}_{m}}{\varepsilon^{b}}\tilde{K}_{m}^{d+1}e^{-\frac{\tilde{K}_{m}\tilde{s}_{m}}{2}}\leq\frac{C\tilde{\varepsilon}_{m+1}}{\varepsilon^{b}}.
\end{equation}
By (\ref{fc5-16}), (\ref{fc5-18}), (\ref{fc5-32})-(\ref{fc5-35}) and (\ref{fc5-44}), we have
\begin{equation}\label{fc5-49}
\| \frac{1}{\varepsilon^{a}}\int_{0}^{1}(1-\tau)\frac{\partial^2 H^{(m)}_0}{\partial I^2}(\rho+\tau\frac{\partial S}{\partial \theta}) (\frac{\partial S}{\partial \theta})^{2}d\tau\|_{\tilde{D}(\tilde{s}^{(9)}_m, \tilde{r}^{(9)}_m(\tilde{I}_{m+1}))}\leq \frac{C}{\varepsilon^{a}}\cdot(\frac{\tilde{\varepsilon}_m}{\gamma\varepsilon^b\tilde{s}_{m}^{\tau_{2}+1}})^2\leq\frac{C\tilde{\varepsilon}_{m+1}}{\varepsilon^{b}},
\end{equation}
\begin{equation}\label{fc5-50}
\| \frac{1}{\varepsilon^b}\int_{0}^{1}\frac{\partial \tilde{P}^{(m)}}{\partial I}(\theta, t, \rho+\tau\frac{\partial S}{\partial\theta})\frac{\partial S}{\partial \theta}d\tau\|_{\tilde{D}(\tilde{s}^{(9)}_m, \tilde{r}^{(9)}_m(\tilde{I}_{m+1}))}\leq \frac{C\tilde{\varepsilon}_m}{\varepsilon^{b}\tilde{r}_m}\cdot\frac{\tilde{\varepsilon}_m}{\varepsilon^b\gamma \tilde{s}_{m}^{\tau_{2}+1}}\leq\frac{C\tilde{\varepsilon}_{m+1}}{\varepsilon^{b}}.
\end{equation}
By (\ref{fc5-35}) and (\ref{fc5-36}), we have
\begin{equation}\label{fc5-51}
\tilde{\Phi}_{m+1}(\phi,t,\rho)=(\theta,t,I),\ (\phi,t,\rho)\in \tilde{D}(\tilde{s}_m^{(3)},\tilde{r}_m^{(3)}(\tilde{I}_m)).
\end{equation}
By (\ref{fc5-35}), (\ref{fc5-36}) and (\ref{fc5-51}), we have
\begin{equation}\label{fc5-52}
\|I-\rho\|_{\tilde{D}(\tilde{s}_m^{(3)},\tilde{r}_m^{(3)}(\tilde{I}_m))}\leq \frac{C\tilde{\varepsilon}_m}{\gamma\varepsilon^b\tilde{s}_{m}^{\tau_{2}+1}}, \ \|\theta-\phi\|_{\tilde{D}(\tilde{s}_m^{(3)},\tilde{r}_m^{(3)}(\tilde{I}_m))}\leq \frac{C\tilde{\varepsilon}_m}{\gamma\varepsilon^b\tilde{s}_{m}^{\tau_{2}}\tilde{r}_m}.
\end{equation}
By (\ref{fc5-36}), (\ref{fc5-52}) and Cauchy's estimate, we have
\begin{equation}\label{fc5-53}
\|\partial(\tilde{\Phi}_{m+1}-id)\|_{\tilde{D}(\tilde{s}_m^{(4)},\tilde{r}_m^{(4)}(\tilde{I}_m))}\leq \frac{C\tilde{\varepsilon}_m}{\gamma\varepsilon^b\tilde{s}_{m}^{\tau_{2}+1}\tilde{r}_m}.
\end{equation}
By (\ref{fc5-44}) and (\ref{fc5-53}), we have
\begin{equation}\label{fc5-54}
\|\partial(\tilde{\Phi}_{m+1}-id)\|_{\tilde{D}(\tilde{s}_{m+1},\tilde{r}_{m+1}(\tilde{I}_{m+1}))}\leq \frac{1}{2^{m+2}}.
\end{equation}
By (\ref{fc5-13}), (\ref{fcg5-1}), (\ref{fc5-45}) and (\ref{fc5-54}), we have
\begin{eqnarray}\label{fc5-55}
\nonumber &&\|\partial\tilde{\Phi}^{(m+1)}(\phi,t,\rho)\|_{\tilde{D}(\tilde{s}_{m+1},\tilde{r}_{m+1}(\tilde{I}_{m+1}))}\\
\nonumber&=&\|(\partial\tilde{\Phi}_1\circ\tilde{\Phi}_2\circ\cdots\circ\tilde{\Phi}_{m+1})(\partial\tilde{\Phi}_2\circ\tilde{\Phi}_3\circ\cdots\circ\tilde{\Phi}_{m+1})\cdots(\partial\tilde{\Phi}_{m+1})\|_{\tilde{D}(\tilde{s}_{m+1},\tilde{r}_{m+1}(\tilde{I}_{m+1}))}\\
\nonumber&\leq&\prod_{j=0}^{m}(1+\frac{1}{2^{j+2}})\\
&\leq&2.
\end{eqnarray}
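The final product estimate above can be verified directly; a minimal check (an aside, not part of the original argument), using the elementary bound $1+x\le e^{x}$ for $x\ge 0$:

```latex
% Verifying the last step of the product bound:
\prod_{j=0}^{m}\Bigl(1+\frac{1}{2^{j+2}}\Bigr)
  \le \exp\Bigl(\sum_{j=0}^{\infty}\frac{1}{2^{j+2}}\Bigr)
  = e^{1/2} < 2,
% since the geometric series sums to 1/2 and e^{1/2} < 1.65.
```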
We claim that
\begin{equation}\label{fc5-56}
\|P_{\nu}\circ\Phi^{(m_0)}\circ\Psi\circ\tilde{\Phi}^{(m+1)}(\phi,t,\rho)\|_{\tilde{D}(\tilde{s}_{\nu-m_0},\tilde{r}_{\nu-m_0}(\tilde{I}_{m+1}))}\leq C\tilde{\varepsilon}_{\nu-m_0},\ \nu=m_0+m+1,m_0+m+2,\cdots.
\end{equation}
In fact, suppose that $w=\tilde{\Phi}^{(m+1)}(z)$ with $z=(\phi,t,\rho)\in \tilde{D}(\tilde{s}_{\nu-m_0},\tilde{r}_{\nu-m_0}(\tilde{I}_{m+1}))$.
Since $\tilde{\Phi}^{(m+1)}$ is real for real argument and $\tilde{r}_{\nu-m_0}<\tilde{s}_{\nu-m_0}$, we have
\begin{eqnarray}\label{fc5-57}
\nonumber &&|{\rm Im} w|=|{\rm Im} \tilde{\Phi}^{(m+1)}(z)|=|{\rm Im} \tilde{\Phi}^{(m+1)}(z)-{\rm Im} \tilde{\Phi}^{(m+1)}({\rm Re}z)|\\
\nonumber&\leq&| \tilde{\Phi}^{(m+1)}(z)- \tilde{\Phi}^{(m+1)}({\rm Re}z)|\\
\nonumber&\leq&\|\partial\tilde{\Phi}^{(m+1)}(\phi,t,\rho)\|_{\tilde{D}(\tilde{s}_{m+1},\tilde{r}_{m+1}(\tilde{I}_{m+1}))}|{\rm Im} z|\\
&\leq&2|{\rm Im} z|\leq2\tilde{s}_{\nu-m_0}.
\end{eqnarray}
By (\ref{fc5-13}), (\ref{fc5-45}) and (\ref{fc5-57}), we have
\begin{equation}\label{fc5-58}
\tilde{\Phi}^{(m+1)}(\tilde{D}(\tilde{s}_{\nu-m_0},\tilde{r}_{\nu-m_0}(\tilde{I}_{m+1})))\subseteq D_{\nu}:=(\mathbf{T}^{d+1}_{2\tilde{s}_{\nu-m_0}}\times \mathbf{R}^{d}_{2\tilde{s}_{\nu-m_0}})\bigcap \tilde{D}(\tilde{s}_{0},\tilde{r}_{0}(\tilde{I}_{0})).
\end{equation}
For $(t,I)=(t_1+t_2i,I_1+I_2i)\in D_{\nu}$, where $t_1,t_2,I_1,I_2$ are real numbers, we have
\begin{eqnarray}\label{fc5-59}
\nonumber&&\|{\rm Im}\frac{\partial \tilde S(t,I)}{\partial I}\|_{D_{\nu}}\\
\nonumber &=&\|{\rm Im}\frac{\partial \tilde S}{\partial I}(t_1+t_2i,I_1+I_2i)-{\rm Im}\frac{\partial \tilde S}{\partial I}(t_1,I_1)\|_{D_{\nu}}\\
\nonumber&\leq&\|\frac{\partial \tilde S}{\partial I}(t_1+t_2i,I_1+I_2i)-\frac{\partial \tilde S}{\partial I}(t_1,I_1)\|_{D_{\nu}}\\
\nonumber&\leq&\|\frac{\partial^2 \tilde S(t,I)}{\partial I \partial t}\|_{\mathcal{D}}\|t_2i\|_{D_{\nu}}+\|\frac{\partial^2 \tilde S(t,I)}{\partial I^2}\|_{\mathcal{D}}\|I_2i\|_{D_{\nu}}\\
\nonumber&\leq&\frac{C\tilde{s}_{\nu-m_0}}{\varepsilon^br_{m_0}s_{m_0}}+\frac{C\tilde{s}_{\nu-m_0}}{\varepsilon^br^2_{m_0}}\\
&\leq&\frac{1}{2}s_{\nu}.
\end{eqnarray}
By (\ref{fbc5-1}), (\ref{fc5-58}) and (\ref{fc5-59}), we have
\begin{equation}\label{fc5-60}
\Psi\circ\tilde{\Phi}^{(m+1)}(\tilde{D}(\tilde{s}_{\nu-m_0},\tilde{r}_{\nu-m_0}(\tilde{I}_{m+1})))\subseteq \bar{D}_{\nu}:=(\mathbf{T}^{d+1}_{s_{\nu}}\times \mathbf{R}^{d}_{2\tilde{s}_{\nu-m_0}})\bigcap D(s_{m_0},r_{m_0}).
\end{equation}
Suppose that $w=\Phi^{(m_0)}(z)$ with $z=(\theta,t,I)\in \bar{D}_{\nu}$.
Since $\Phi^{(m_0)}$ is real for real arguments and $2\tilde{s}_{\nu-m_0}<r_{\nu}<s_{\nu}$, by (\ref{fc3-44}) with $m=m_0-1$ we have
\begin{eqnarray}\label{fc5-62}
\nonumber &&|{\rm Im} w|=|{\rm Im} \Phi^{(m_0)}(z)|=|{\rm Im} \Phi^{(m_0)}(z)-{\rm Im} \Phi^{(m_0)}({\rm Re}z)|\\
\nonumber&\leq&| \Phi^{(m_0)}(z)- \Phi^{(m_0)}({\rm Re}z)|\\
\nonumber&\leq&\|\partial\Phi^{(m_0)}(\theta,t,I)\|_{D(s_{m_0},r_{m_0})}|{\rm Im} z|\\
&\leq&2|{\rm Im} z|\leq2s_{\nu}.
\end{eqnarray}
Then by (\ref{fc5-60}) and (\ref{fc5-62}), we have
\begin{equation}\label{fc5-63}
\Phi^{(m_0)}\circ\Psi\circ\tilde{\Phi}^{(m+1)}(\tilde{D}(\tilde{s}_{\nu-m_0},\tilde{r}_{\nu-m_0}(\tilde{I}_{m+1})))\subset \mathbf{T}^{d+1}_{2s_{\nu}}\times \mathbf{R}^{d}_{2s_{\nu}},\ \nu=m_0+m+1,m_0+m+2,\cdots.
\end{equation}
By (\ref{fc3-3}) and (\ref{fc5-63}), the proof of (\ref{fc5-56}) is completed. By (\ref{fc5-25}), (\ref{fc5-44}), (\ref{fc5-46}), (\ref{fc5-48})-(\ref{fc5-50}) and (\ref{fc5-56}), we have
\begin{equation}\label{fc5-64}
\|\tilde{P}^{(m+1)}\|_{\tilde{D}(\tilde{s}_{m+1},\tilde{r}_{m+1}(\tilde{I}_{m+1}))}\leq C\tilde{\varepsilon}_{m+1}.
\end{equation}
Then the proof is completed by (\ref{fc5-40}), (\ref{fc5-41}), (\ref{fc5-43}), (\ref{fc5-45}), (\ref{fc5-47}), (\ref{fc5-54}), (\ref{fc5-56}) and (\ref{fc5-64}).
\end{proof}
\section{Proof of Theorems \ref{thm1-1}-\ref{thm1-2}}\label{sec6}
Letting $m\rightarrow\infty$ in Lemma \ref{lem5-1}, we obtain the following lemma:
\begin{lem}\label{lem6-1} There exists a symplectic transformation $\tilde{\Phi}^{(\infty)}:=\lim_{m\rightarrow\infty}\tilde{\Phi}_0\circ\tilde{\Phi}_1\circ\cdots\circ\tilde{\Phi}_{m}$ with
\begin{equation}\label{fc6-1}
\tilde{\Phi}^{(\infty)}:\mathbf{T}^{d+1}\times \{\tilde{I}_{\infty}\}\rightarrow D(\tilde{s}_0,\tilde{r}_0(\tilde{I}_0)),
\end{equation}
where $\tilde{I}_{\infty}\in \mathbf{R}^{d}$, such that system {\rm(\ref{fc5-3})} is transformed by $\tilde{\Phi}^{(\infty)}$ into
\begin{equation}\label{fc6-2}
\tilde{H}^{(\infty)}(\theta,t,I)=\tilde{H}^{(0)}\circ\tilde{\Phi}^{(\infty)}=\frac{H^{(\infty)}_0(I)}{\varepsilon^{a}},
\end{equation}
where
\begin{equation}\label{fc6-3}
\frac{\partial H^{(\infty)}_0}{\partial I}(\tilde{I}_{\infty})=\omega(I_0),
\end{equation}
\begin{equation}\label{fc6-4}
\|\tilde{\Phi}^{(\infty)}-id\|_{\mathbf{T}^{d+1}\times \tilde{I}_{\infty}}\leq \tilde{\varepsilon}_0^{\frac{1}{2\ell}}.
\end{equation}
\end{lem}
\begin{proof} By (\ref{fc5-35}) and (\ref{fc5-55}), for $z=(\theta,t,I)\in \mathbf{T}^{d+1}\times \tilde{I}_{\infty}$ and $m=0,1,2, \cdots$, we have
\begin{eqnarray}\label{fc6-5}
\nonumber &&\|\tilde{\Phi}^{(m+1)}(z)-\tilde{\Phi}^{(m)}(z)\|_{\mathbf{T}^{d+1}\times \tilde{I}_{\infty}}\\
\nonumber&=&\|\tilde{\Phi}^{(m)}(\tilde{\Phi}_{m+1}(z))-\tilde{\Phi}^{(m)}(z)\|_{\mathbf{T}^{d+1}\times \tilde{I}_{\infty}}\\
\nonumber&\leq&\|\partial\tilde{\Phi}^{(m)}(\tilde{\Phi}_{m+1}(z))\|_{\mathbf{T}^{d+1}\times \tilde{I}_{\infty}}\|\tilde{\Phi}_{m+1}(z)-z\|_{\mathbf{T}^{d+1}\times \tilde{I}_{\infty}}\\
&\leq&2\tilde{\varepsilon}_m^{\frac{1}{\ell}},
\end{eqnarray}
where $\tilde{\Phi}^{(0)}:=id$. Then, we have
\begin{equation*}
\|\tilde{\Phi}^{(\infty)}(z)-z\|_{\mathbf{T}^{d+1}\times \tilde{I}_{\infty}}\leq\sum_{m=0}^{\infty} \|\tilde{\Phi}^{(m+1)}(z)-\tilde{\Phi}^{(m)}(z)\|_{\mathbf{T}^{d+1}\times \tilde{I}_{\infty}}\leq\sum_{m=0}^{\infty}2\tilde{\varepsilon}_m^{\frac{1}{\ell}} \leq \tilde{\varepsilon}_0^{\frac{1}{2\ell}}.
\end{equation*}
This completes the proof of Lemma \ref{lem6-1}.
\end{proof}
Then the proof of Theorem \ref{thm1-1} is completed by (\ref{fc3-1}), (\ref{fc3-5}), (\ref{fc3-50}), (\ref{fc4-4}), (\ref{fc5-3}) and Lemma \ref{lem6-1}. Applying Theorem \ref{thm1-1} to (\ref{fc1-7}) yields Theorem \ref{thm1-2} (see Section 5 of \cite{a16} for the proof).
\end{document} |
\begin{document}
\begin{abstract}
We introduce an invariant of tangles in Khovanov homology by considering a natural inverse system of Khovanov homology groups. As an application, we derive an invariant of strongly invertible knots; this invariant takes the form of a graded vector space that vanishes if and only if the strongly invertible knot is trivial. While closely tied to Khovanov homology --- and hence the Jones polynomial --- we observe that this new invariant detects non-amphicheirality in subtle cases where Khovanov homology fails to do so. In fact, we exhibit examples of knots that are not distinguished by Khovanov homology but, owing to the presence of a strong inversion, may be distinguished using our invariant. This work suggests a strengthened relationship between Khovanov homology and Heegaard Floer homology by way of two-fold branched covers, which we formulate in a series of conjectures.
\end{abstract}
\maketitle
\begin{footnotesize}{\em To my grandfather, Karl Erik Snider.}\end{footnotesize}
The reduced Khovanov homology of an oriented link $L$ in the three-sphere is a bi-graded vector space $\Khred(L)$ for which the graded Euler characteristic $\sum_{u,q}(-1)^ut^q\dim\Khred^u_q(L)$ recovers the Jones polynomial of $L$ \cite{Khovanov2000,Khovanov2003}. This homological link invariant is known to detect the trivial knot. Precisely, Kronheimer and Mrowka prove that $\dim\Khred(K)=1$ if and only if $K$ is the trivial knot \cite{KM2011}. It remains an open problem to determine if the analogous detection result holds for the Jones polynomial.
Preceding the work of Kronheimer and Mrowka are a range of applications of Khovanov homology in low-dimensional topology. Perhaps most recognised among these is Rasmussen's combinatorial proof of the Milnor conjecture by way of the $s$ invariant \cite{Rasmussen2010}. Other examples include Ng's bound on the Thurston-Bennequin number \cite{Ng2005}, Plamenevskaya's invariant of transverse knots \cite{Plamenevskaya2006}, and obstructions to finite fillings on strongly invertible knots due to the author \cite{Watson2012,Watson2013}.
There are two features common to each of these applications. First, the quantity extracted from Khovanov homology is an integer (a particular grading \cite{Ng2005, Plamenevskaya2006,Rasmussen2010}, a count of a collection of gradings \cite{Watson2012}, or a dimension count \cite{KM2011}); and second, the quantity measured is not one that can be extracted from the Jones polynomial --- additional structure in Khovanov homology is essential in each case. The latter points to a clear advantage of Khovanov homology over the Jones polynomial, while the former suggests that further applications might be possible by considering more of the available structure.
This paper is principally concerned with developing new applications of the graded information in Khovanov homology.
\subsection*{Tangle invariants in Khovanov homology} As with the Jones polynomial, tangle decompositions provide an approach to calculation and an enrichment of structure in Khovanov homology. For example, Bar-Natan's work \cite{Bar-Natan2005} gave rise to considerable improvements in calculation speed \cite{Bar-Natan2007}. Bar-Natan works in a category of formal complexes of tangles up to homotopy (modulo certain topological relations). On the other hand, Khovanov defines an algebraic invariant that is more natural in certain settings \cite{Khovanov2002} --- particularly in relation to two-fold branched covers and bordered Floer homology \cite{AGW2011}. There are a range of other generalized tangle invariants in Khovanov homology \cite{APS2006,GW2010,LP2009,Roberts2013-A,Roberts2013-D} and the state of the art is nicely summarized by Roberts \cite{Roberts2013-D}.
\parpic[r]{\includegraphics[scale=0.5]{figures/tiny-suture-alt}}
We introduce a new tangle invariant in Khovanov homology that is perhaps best aligned with the work of Grigsby and Wehrli \cite{GW2010}. The tangles considered are pairs $T=(B^3,\tau)$, where $\tau$ is a pair of properly embedded disjoint smooth arcs (together with a potentially empty collection of embedded disjoint closed components). These tangles will be endowed with a sutured structure (see Definition \ref{def:suture}, and compare the definitions of \cite[Section 5]{GW2010}), which may be thought of as a partition of the four points of $\partial\tau\subset\partial B^3$ into two pairs of points. Namely, we replace $B^3$ with the product $D^2\times I$ and distinguish the annular subset of the boundary $\partial D^2 \times I$ as the suture. Equivalence of sutured tangles is up to homeomorphism of the pair $(B^3,\tau)\cong (D^2 \times I,\tau)$ fixing the suture.
Given a representative $T$ for the homeomorphism class of a sutured tangle, there is a naturally defined link $T(i)$, for any integer $i$, by adding $i$ half-twists and then closing the tangle (as in Figure \ref{fig:closures}). While these twists do not alter (the homeomorphism class of) the sutured tangle, the links $T(i)$ typically form an infinite family of distinct links. However, the Khovanov homology groups of the $T(i)$ are closely related, owing to the existence of a long exact sequence in Khovanov homology associated with a crossing resolution. In particular, there is a linear map $f_i\colon\thinspace \Khred(T(i+1))\to\Khred(T(i))$ for each integer $i$. Our object of study is the vector space defined by the inverse limit \[\KHT(T)=\varprojlim \Khred(T(i))\] as this yields an invariant of the underlying sutured tangle. It is not immediately apparent how this invariant might be related to other tangle invariants in the literature. While this is an interesting line of inquiry we will leave it for the moment and turn instead to an application.
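To fix intuition for this definition, we record one elementary special case (a hedged aside, not a claim about the towers arising from any particular tangle): when the maps in the tower eventually stabilise, the inverse limit simply records the stable group.

```latex
% If f_i : A_{i+1} -> A_i is an isomorphism for every i >= N,
% then a compatible sequence (a_i) is determined by a_N, whence
\KHT(T)=\varprojlim A_i \;\cong\; A_N .
```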
\subsection*{The symmetry group of a knot} The symmetry group $\operatorname{Sym}(S^3,K)$ of a knot $K$ in $S^3$ is identified with the mapping class group of the knot exterior $M_K=S^3\smallsetminus\nu(K)$ \cite{Kawauchi1996}.
\labellist
\small
\pinlabel $h$ at 52 4
\endlabellist
\parpic[r]{\includegraphics[scale=0.85]{figures/strong-trefoil}}
A strong inversion on a knot $K$ is an element $h\in\operatorname{Sym}(S^3,K)$ arising from an orientation preserving involution on $S^3$ that reverses orientation on the knot $K$. The pair $(K,h)$ will be called a strongly invertible knot whenever $h\in\operatorname{Sym}(S^3,K)$ is a strong inversion (this notation follows Sakuma \cite{Sakuma1986}). Notice that, according to the Smith conjecture, the fixed point set of such an involution must be unknotted \cite{Waldhausen1969}. When restricting a strong inversion to $M_K$ we obtain an involution on the knot exterior with one dimensional fixed point set. The quotient of such an involution is a tangle; the arcs of the tangle are the image of the fixed point set in the quotient. Moreover, by choosing equivariant meridional sutures on $\partial M_K$, the quotient tangle is naturally a sutured tangle $T_{K,h}$ for which the closure $T_{K,h}(\frac{1}{0})$ is the trivial knot. In fact, there is a one-to-one correspondence between pairs $(K,h)$ and sutured tangles $T$ for which $T(\frac{1}{0})$ is the trivial knot. Thus, to any strongly invertible knot $(K,h)$ we may associate a sutured tangle and the invariant $\KHT(T_{K,h})$.
We will focus on a particular finite dimensional quotient $\varkappa(K,h)$ of this inverse limit. This is a $\mathbb{Z}$-graded vector space; there is a natural secondary relative grading admitting a lift to a $(\mathbb{Z}\times\mathbb{Z}_{\operatorname{odd}})$-graded vector space (see Section \ref{sec:bi}).
Some remarks are in order. If $T=(D^2\times I,\tau)$, then the above construction shows that $M_K$ is the two-fold branched cover of $D^2\times I$ with branch set $\tau$, denoted $\mathbf{\Sigma}_T$. Moreover, this covering can be chosen so that it respects the sutured structures. It is important to note that $K$ may admit more than one strong inversion, and hence $M_K$ may be realised as a two-fold branched cover of $D^2\times I$ in different ways.
The appropriate notion of equivalence of strongly invertible knots is given by conjugacy in $\operatorname{Sym}(S^3,K)$; see Definition \ref{def:strong-knot}. As such, our invariant is best framed as an invariant of conjugacy classes. For example, if $K$ is hyperbolic it is known that $\operatorname{Sym}(S^3,K)$ is a subgroup of a dihedral group \cite{Riley1979,Sakuma1986} and $K$ admits 0, 1 or 2 strong inversions (up to conjugacy). Furthermore, in the case that there are 2 strong inversions, these must generate a cyclic or free involution \cite{Sakuma1986}. As a result, invariants of strong inversions (particularly, of strongly invertible knots) detect additional structure in the symmetry group. We note that, in general, a given knot admits finitely many strong inversions \cite{Kojima1983}.
\subsection*{Results and conjectures} An interesting feature of this invariant of strong inversions from Khovanov homology is the following:
\begin{theorem}\label{thm:unknot}
Let $(K,h)$ be a strongly invertible knot. Then $\varkappa(K,h)=0$ if and only if $K$ is the trivial knot. \end{theorem}
Note that the trivial knot admits a standard strong inversion, and that $(K,h)$ is trivial if and only if $K$ is the trivial knot \cite{Marumoto1977}. We consider some particular examples as a means of comparing $\varkappa(K,h)$ with $\Khred(K)$. These establish:
\begin{theorem}\label{thm:summary} (1) There exist distinct knots $K_1$ and $K_2$, each admitting a unique strong inversion $h_1$ and $h_2$ respectively, for which $\Khred(K_1) \cong \Khred(K_2)$ but $\varkappa(K_1,h_1)\ncong\varkappa(K_2,h_2)$ as graded vector spaces.\\[4pt]
(2) There exist non-amphicheiral knots $K$, admitting a unique strong inversion $h$, for which $\Khred(K)\cong\Khred(K^*)$ but $\varkappa(K,h)\ncong\varkappa(K^*,h)$ as graded vector spaces.
\end{theorem}
In fact we show more: Of all knots with 10 or fewer crossings for which the Jones polynomial and the signature (in combination) fail to detect non-amphicheirality, there is an involution present and $\varkappa$ detects non-amphicheirality; see Theorem \ref{thm:ten}.
Sakuma introduces and studies a similar invariant $\eta(K,h)$ \cite{Sakuma1986}. This is an Alexander-like polynomial invariant that, like $\varkappa(K,h)$, vanishes for the trivial knot. Unlike $\varkappa(K,h)$, Sakuma's invariant also vanishes for a range of non-trivial strongly invertible knots, including any amphicheiral $(K,h)$ for which $h$ is unique up to conjugacy \cite[Proposition 3.4]{Sakuma1986}. In this context it is also worth mentioning the work of Couture defining a Khovanov-like homology associated with links of divides \cite{Couture2009}. This gives rise to a homological invariant of strongly invertible knots $(K,h)$, though it is not clear how this is related to $\varkappa(K,h)$ (or if the two are related at all); see Remark \ref{rmk:Couture}.
Our work points to some conjectures about the behaviour of the vector space $\varkappa(K,h)$ and, most notably, its relationship with Heegaard Floer homology. In particular, there is evidence suggesting that the family of strongly invertible L-space knots --- knots admitting a Dehn surgery with simplest-possible Heegaard Floer homology --- might be characterised by way of Khovanov homology by appealing to $\varkappa$; see Conjecture \ref{con:L-space}.
\subsection*{Organization} Background on Khovanov homology is collected in Section \ref{sec:Kh} with particular attention paid to our grading conventions and their relationship to other conventions in the literature; this is summarized in Figure \ref{fig:gradings}. The invariants of sutured tangles and of strongly invertible knots are defined in Section \ref{sec:inv}; the invariant of strongly invertible knots $\varkappa$ is the main focus of the remainder of the paper. In Section \ref{sec:properties} some basic properties of $\varkappa$ are established including the non-vanishing result (Theorem \ref{thm:unknot}). In Section \ref{sec:examples} we give some preliminary examples. This includes properties of $\varkappa$ for torus knots (Theorem \ref{thm:torus}) and highlights the invariant's ability to distinguish strong inversions; compare Question \ref{qst:seperate}. Section \ref{sec:amph} considers the problem of obstructing amphicheirality and establishes Theorem \ref{thm:summary}. Section \ref{sec:conjectures} presents three conjectures. The invariant $\varkappa$ is graded, but also comes with a natural relative bi-grading that can be useful in calculations. The paper concludes with a construction of a lift of the latter to an absolute bi-grading in Section \ref{sec:bi}.
\section{Khovanov homology} \label{sec:Kh}
Khovanov's categorification of the Jones polynomial gives rise to a (co)homological invariant of oriented links in the three-sphere with the Jones polynomial arising as a graded Euler characteristic \cite{Khovanov2000}. For the purpose of this paper it will be sufficient to work with the reduced Khovanov homology $\Khred(L)$ \cite{Khovanov2003} taking coefficients in the field $\mathbb{F}=\mathbb{Z}/2\mathbb{Z}$ and giving rise to a $(\mathbb{Z}\times\frac{1}{2}\mathbb{Z})$-graded vector space. Letting $u$ denote the integer (homological) grading and $q$ denote the half-integer (quantum) grading, the invariant satisfies $\Khred(U)\cong\mathbb{F}$ supported in grading $(u,q)=(0,0)$ (where $U$ denotes the trivial knot) and, more generally, \[V_L(t)=\sum_{q\in\mathbb{Z}}b_qt^q\] where each coefficient $b_q=\chi_u\big(\Khred_q(L)\big)=\sum_{u\in\mathbb{Z}}(-1)^u\dim\Khred^u_q(L)$ is the Euler characteristic in a fixed quantum grading. The symmetry in the Jones polynomial $V_{L^*}(t)=V_L(t^{-1})$, where $L^*$ denotes the mirror image of $L$, is realised in the bi-grading of Khovanov homology as $\Khred^u_q(L^*)\cong\Khred^{-u}_{-q}(L)$.
Note that the quantum grading used here is half the grading considered elsewhere (compare \cite{Khovanov2000}, for example) and results in half-integer powers for links with an even number of components.
There is a third natural grading to consider: Setting $\delta=u-q$ records diagonals of slope 1 in the $(u,q)$-plane and gives rise to a $\frac{1}{2}\mathbb{Z}$-grading on $\Khred(L)$. This may be relaxed to a relative $\mathbb{Z}$-grading (compare \cite{Watson2012}). It is an absolute $\mathbb{Z}$-grading for knots and we have that \[\det(K) = |\chi_\delta\Khred(K)|\]
where $\chi_\delta\big(\Khred(K)\big)=\sum_{\delta\in\mathbb{Z}}(-1)^\delta\dim\Khred^\delta(K)$ (ignoring the quantum grading).
\begin{figure}
\begin{tikzpicture}[>=latex]
\draw [gray] (-5,3.5) -- (5,3.5);
\draw [gray] (-5,3) -- (5,3);
\draw [gray] (-5,2.5) -- (5,2.5);
\draw [gray] (-5,2) -- (5,2);
\draw [gray] (-5,1.5) -- (5,1.5);
\draw [gray] (-5,1) -- (5,1);
\draw [gray] (-5,0.5) -- (5,0.5);
\draw [gray] (-5,0) -- (5,0);
\draw [gray] (4,-5) -- (4,4.5);
\draw [gray] (3.5,-5) -- (3.5,4.5);
\draw [gray] (3,-5) -- (3,4.5);
\draw [gray] (2.5,-5) -- (2.5,4.5);
\draw [gray] (2,-5) -- (2,4.5);
\draw [gray] (1.5,-5) -- (1.5,4.5);
\draw [gray] (1,-5) -- (1,4.5);
\draw [gray] (0.5,-5) -- (0.5,4.5);
\draw [gray] (0,-5) -- (0,4.5);
\draw [gray] (-3,-0.5) -- (-3,4.5);
\draw [gray] (-3.5,-0.5) -- (-3.5,4.5);
\draw [gray] (-4,-0.5) -- (-4,4.5);
\draw [gray] (-0.5,-1.5) -- (5,-1.5);
\draw [gray] (-0.5,-2) -- (5,-2);
\draw [gray] (-0.5,-2.5) -- (5,-2.5);
\draw [gray] (-0.5,-3.5) -- (5,-3.5);
\draw [gray] (-0.5,-4) -- (5,-4);
\node at (4.5,-0.125) {\footnotesize{$u$}};
\node at (-0.125,4) {\footnotesize{$q$}};
\draw[ultra thick,->] (0,-0.5) -- (0,4);
\draw[ultra thick,->] (-0.5,0) -- (4.5,0);
\node at (0.25,-4.25) {\footnotesize{-$7$}};
\node at (0.75,-4.25) {\footnotesize{-$6$}};
\node at (1.25,-4.25) {\footnotesize{-$5$}};
\node at (1.75,-4.25) {\footnotesize{-$4$}};
\node at (2.25,-4.25) {\footnotesize{-$3$}};
\node at (2.75,-4.25) {\footnotesize{-$2$}};
\node at (3.25,-4.25) {\footnotesize{-$1$}};
\node at (3.75,-4.25) {\footnotesize{$0$}};
\node at (-4.25,0.25) {\footnotesize{-$10$}};
\node at (-4.25,0.75) {\footnotesize{-$9$}};
\node at (-4.25,1.25) {\footnotesize{-$8$}};
\node at (-4.25,1.75) {\footnotesize{-$7$}};
\node at (-4.25,2.25) {\footnotesize{-$6$}};
\node at (-4.25,2.75) {\footnotesize{-$5$}};
\node at (-4.25,3.25) {\footnotesize{-$4$}};
\node at (0.25,0.25) {$\bullet$};
\node at (0.75,0.75) {$\bullet$};
\node at (1.75,1.75) {$\bullet$};
\node at (1.25,0.75) {$\bullet$};
\node at (2.25,1.75) {$\bullet$};
\node at (2.75,2.25) {$\bullet$};
\node at (3.75,3.25) {$\bullet$};
\node at (-2,-0.125) {\footnotesize{$\delta\!=\!u\!-\!q$}};
\node at (-4.125,4) {\footnotesize{$q$}};
\node at (-3.25,-0.25) {\footnotesize{$4$}};
\node at (-3.75,-0.25) {\footnotesize{$3$}};
\draw[ultra thick,->] (-4,-0.5) -- (-4,4);
\draw[ultra thick,->] (-4.5,0) -- (-2.5,0);
\node at (-3.75,0.25) {$\bullet$};
\node at (-3.75,0.75) {$\bullet$};
\node at (-3.75,1.75) {$\bullet$};
\node at (-3.25,0.75) {$\bullet$};
\node at (-3.25,1.75) {$\bullet$};
\node at (-3.25,2.25) {$\bullet$};
\node at (-3.25,3.25) {$\bullet$};
\draw[ultra thick,->] (-0.5,-1.5) -- (4.5,-1.5);
\node at (4.5,-1.675) {\footnotesize{$u$}};
\draw[ultra thick,->] (0,-1) -- (0,-3);
\node at (-0.125,-3.0125) {\footnotesize{$\delta$}};
\node at (-0.25,-2.25) {\footnotesize{$4$}};
\node at (-0.25,-1.75) {\footnotesize{$3$}};
\node at (0.25,-1.75) {$\bullet$};
\node at (0.75,-1.75) {$\bullet$};
\node at (1.75,-1.75) {$\bullet$};
\node at (1.25,-2.25) {$\bullet$};
\node at (2.25,-2.25) {$\bullet$};
\node at (2.75,-2.25) {$\bullet$};
\node at (3.75,-2.25) {$\bullet$};
\draw[ultra thick,->] (-0.5,-4) -- (4.5,-4);
\node at (4.5,-4.125) {\footnotesize{$u$}};
\node at (0.25,-3.75) {$\bullet$};
\node at (0.75,-3.75) {$\bullet$};
\node at (1.75,-3.75) {$\bullet$};
\node at (1.25,-3.75) {$\bullet$};
\node at (2.25,-3.75) {$\bullet$};
\node at (2.75,-3.75) {$\bullet$};
\node at (3.75,-3.75) {$\bullet$};
\node at (-2.5,-2.5) {\includegraphics[scale=0.75]{figures/10_124}};
\end{tikzpicture}
\caption{A gradings glossary: $u$ (cohomological), $q$ (quantum) and $\delta$ (diagonal) gradings on the vector space $\Khred(K)$. For the purpose of illustration we have considered the torus knot $K\simeq 10_{124}$. Each $\bullet$ denotes a copy of the vector space $\mathbb{F}$. Notice that $V_K(t)=t^{-4}+t^{-6}-t^{-10}$ in this case. For reference, the conventions in the upper left correspond to those of \cite{Watson2012,Watson2013}.}\label{fig:gradings}\end{figure}
These conventions are consistent with \cite{MO2007,Watson2012,Watson2013} and are summarized for a particular example in Figure \ref{fig:gradings}. Two different gradings on Khovanov homology will be used in this paper:
\begin{itemize}
\item[(1)] A finite dimensional $\mathbb{Z}$-graded vector space $\Khred(L)=\Khred^u(L)$ by considering the homological grading $u$ (and ignoring both $q$ and $\delta$); \\
\item[(2)] a finite dimensional $(\mathbb{Z}\times\frac{1}{2}\mathbb{Z})$-graded vector space $\Khred(L)=\Khred^{u,\delta}(L)$ by considering the homological grading $u$ and the diagonal grading $\delta$. We will generally relax the half-integer grading to an integer grading at the expense of passing from an absolute grading to a relative grading in the second factor. \\
\end{itemize}
With these conventions in place, we review the long exact sequence associated with a crossing resolution. Let $[\cdot,\cdot]$ be an operator on the bi-grading satisfying \[\Khred^{u,\delta}(L)[i,j]\cong\Khred^{{u-i},{\delta-j}}(L)\] and, given an orientation on (a fixed diagram of) $L$, let $n_-(L)$ record the number of negative crossings according to a right-hand rule. Then given a distinguished positive crossing $\rightcross$ in a link diagram fix $c= n_-(\one)-n_-(\positive)$ for some choice of orientation on the affected strands of the new link associated with the resolution $\one$ at the crossing --- note that this is the resolution that does not inherit the orientation of the original link. With the abuse of notation $\Khred(L)=\Khred(\positive)$, $n_-(L)=n_-(\positive)$ (and so forth) understood, we have a long exact sequence
\[\begin{tikzpicture}[>=latex]
\matrix (m) [matrix of math nodes, row sep=1em,column sep=1em]
{ \Khred(\positive) & & \Khred(\zero)[0,-\half] \\
& \Khred(\one)[c+1,-\half c] & \\ };
\path[->,font=\scriptsize]
(m-1-1) edge node[auto] {$ f^+ $} (m-1-3)
(m-2-2.north west) edge (m-1-1.south east)
(m-1-3.south west) edge node[auto] {$ \partial $} (m-2-2.north east);
\end{tikzpicture}\]
The connecting homomorphism $\partial$ is graded of bi-degree $(1,1)$, that is, this map raises both the $u$- and $\delta$-gradings by 1. Note that the long exact sequence is particularly well behaved with respect to the $\mathbb{Z}$-grading:
\[\begin{tikzpicture}[>=latex]
\matrix (m) [matrix of math nodes, row sep=1em,column sep=1em]
{ \Khred^u(\positive) & & \Khred^u(\zero) \\
& \Khred^{u-c-1}(\one) & \\ };
\path[->,font=\scriptsize]
(m-1-1) edge node[auto] {$ f^+ $} (m-1-3)
(m-2-2.north west) edge (m-1-1.south east)
(m-1-3.south west) edge node[auto] {$ \partial $} (m-2-2.north east);
\end{tikzpicture}\]
That is, this exact {\em triangle} encodes the long exact sequence
\[\begin{tikzpicture}[>=latex]
\matrix (m) [matrix of math nodes, row sep=1em,column sep=2em]
{ \cdots & \Khred^{u-c-1}(\one) & \Khred^u(\rightcross) & \Khred^u(\zero) & \Khred^{u-c}(\one) & \cdots\\ };
\path[->,font=\scriptsize]
(m-1-1) edge (m-1-2)
(m-1-2) edge (m-1-3)
(m-1-3) edge node[auto] {$ f^+ $} (m-1-4)
(m-1-4) edge node[auto] {$ \partial $} (m-1-5)
(m-1-5) edge (m-1-6);
\end{tikzpicture}\]
and the map $f^+\colon\thinspace \Khred(\positive) \to \Khred(\zero)$ preserves the cohomological grading.
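One immediate consequence of exactness, recorded here as an aside (it is standard linear algebra, not specific to this paper), is a grading-by-grading rank bound:

```latex
% For an exact sequence A -> B -> C of finite dimensional vector
% spaces, dim B = rank(A -> B) + rank(B -> C); applied in each
% cohomological grading u this gives
\dim\Khred^{u}(\positive)\;\le\;
\dim\Khred^{u}(\zero)+\dim\Khred^{u-c-1}(\one).
```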
\section{Invariants from inverse limits}\label{sec:inv}
\subsection{An invariant of sutured tangles} \label{sec:tangles}
A tangle $T$ is the homeomorphism class of a pair $(B^3,\tau)$ where $B^3$ is a three-ball and $\tau$ is a pair of properly embedded arcs (together with a potentially empty collection of embedded circles). Consider the identification $B^3\cong D^2\times I$.
\begin{definition}\label{def:suture} A sutured tangle is a pair $(D^2\times I,\tau)$ where the four endpoints $\partial \tau$ are divided into two pairs confined to $D^2\times\{0\}$ and $D^2\times\{1\}$ respectively. The suture is the annulus $\partial D^2\times I$; equivalence of sutured tangles is up to homeomorphism of the pair $(D^2\times I,\tau)$ fixing the suture pointwise. A sutured tangle is called braid-like if the arcs $\tau$ admit an orientation that is inward at $D^2\times \{0\}$ and outward at $D^2\times \{1\}$.
\end{definition}
An example is illustrated in Figure \ref{fig:suture}. All tangles considered in this work will be sutured.
\begin{figure}[h]
\includegraphics[scale=0.75]{figures/si-left}\qquad\qquad\qquad
\includegraphics[scale=0.75]{figures/si-right}
\caption{An example of a sutured tangle with the suture $\partial D^2\times I$ shaded (left); and projection to $I\times I$ illustrating the convention for planar representations of sutured tangles (right). Note that this example is braid-like in the sense of Definition \ref{def:suture}.}\label{fig:suture}\end{figure}
Having fixed a representative for a sutured tangle $T$, there are two natural links obtained in the closure: $T(\frac{1}{0})$ joins the endpoints in $D^2\times \{0\}$ and $D^2\times \{1\}$ respectively without adding any new crossings; $T(0)$ joins each endpoint in $D^2\times \{0\}$ to an endpoint in $D^2\times \{1\}$ without adding new crossings (see Figure \ref{fig:closures}).
\begin{figure}[h]
\labellist
\pinlabel $\underbrace{\phantom{aaaaaaaaaaaaia}}_n$ at 360 26
\lambdarge
\pinlabel $T$ at 41 49
\pinlabel $T$ at 160 49
\pinlabel $T$ at 279 49
\pinlabel $\cdots$ at 375 49
\endlabellist
\includegraphics[scale=0.75]{figures/closures}
\caption{Links $T(\frac{1}{0})$, $T(0)$ and $T(n)$ (from left-to-right) obtained via the closure of a fixed representative of a sutured tangle $T$. }\label{fig:closures}\end{figure}
More generally, note that the homeomorphism class of $T$ is not altered by adding horizontal twists, that is, a homeomorphism exchanging the pair of points $\partial\tau|_{D^2\times\{1\}}\subset D^2\times\{1\}$. With the convention that the crossing $\positive$ is represented by $+1$, the obvious one-parameter family of tangle representatives gives rise to an infinite family of links $T(n)$ in the closure. Precisely, if $T^n$ is the representative obtained from $T$ by adding $n$ half-twists, then we have $T(n)=T^n(0)$ (see Figure \ref{fig:closures}). Rational tangle attachments other than these horizontal twists will not, in general, preserve the suture despite the fact that the equivalence class of the underlying (unsutured) tangle is preserved (see \cite{Watson2012}, for example, for details on this more standard notion of tangle equivalence).
Fix a representative of a sutured tangle $T$ and, with the above conventions for closures in place, define $A_i=\Khred(T(i))$ for all $i\in\mathbb{Z}$. Then there is a natural inverse system provided by the maps $f_i\colon\thinspace A_{i+1}\to A_i$ in the long exact sequence. These are not necessarily graded maps since the resolved crossing need not be positive for a general tangle: At present we are distinguishing between $f_i$ and $f_i^+$ depending on compatibility of orientations at the resolved crossing. Note however that $f_i$ may always be regarded as a relatively graded map between the bi-graded vector spaces $A_{i+1}$ and $A_i$. Define \[\KHT(T) = \varprojlim A_i,\] the inverse limit of $\{A_i,f_i\}$ (see Weibel \cite{Weibel1994}, for example). We have by construction that:
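Concretely, unwinding the definition of the inverse limit in this setting (a standard reformulation, recorded here for convenience), an element of $\KHT(T)$ is a coherent sequence of homology classes:
\[\KHT(T)\cong\Big\{\{x_i\}_{i\in\mathbb{Z}}\in\prod_{i\in\mathbb{Z}} A_i \;\Big|\; f_i(x_{i+1})=x_i \text{ for all } i\in\mathbb{Z}\Big\}.\]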
\begin{proposition}
The vector space $\KHT(T)$ is an invariant of the sutured tangle $T$, up to isomorphism. Moreover, if $T$ is braid-like then $\KHT(T)$ is naturally $\mathbb{Z}$-graded.
\end{proposition}
\begin{proof}
The proposition follows immediately from the definitions, owing to the fact that the inverse system $\{A_i,f_i\}$ is an invariant of $T$ up to reindexing. However, the grading in the second statement deserves a few words.
If $T$ is a representative of a braid-like sutured tangle then the link $T(i)$, with Khovanov invariant $A_i$, inherits an orientation from the braid-like orientation on the properly embedded arcs of $T$. As a result, with this orientation fixed, the long exact sequence may be expressed as
\[\begin{tikzpicture}[>=latex]
\matrix (m) [matrix of math nodes, row sep=1em,column sep=1em]
{ A_{i+1} & & A_i\\
& B & \\ };
\path[->,font=\scriptsize]
(m-1-1) edge node[auto] {$ f^+_i $} (m-1-3)
(m-2-2.north west) edge (m-1-1.south east)
(m-1-3.south west) edge (m-2-2.north east);
\end{tikzpicture}\]
where $B=\Khred(T(\frac{1}{0}))[c_T+i+1]=\Khred^{u-c_T-i-1}(T(\frac{1}{0}))$. The integer $c_T$ counts the negative crossings in $T$ when the orientation on one of the arcs of $\tau$ is reversed so that \[\textstyle c=n_-(T(\frac{1}{0}))+i-n_-(T(i+1))=c_T+i.\] Now the inverse system $(A_i,f^+_i)$ is graded in the sense that each $f_i^+$ is a $\mathbb{Z}$-graded map between $\mathbb{Z}$-graded vector spaces. As a result the inverse limit $\KHT(T)$ inherits this grading as claimed.
\end{proof}
\subsection{An invariant of strongly invertible knots}\label{sec:strong}
\begin{definition} A strong inversion $h$ on a knot $K$ is the isotopy class of an orientation preserving involution of $S^3$ that reverses orientation on the knot $K$. \end{definition}
Note that the fixed point set of $h$ is necessarily unknotted as a consequence of the Smith conjecture \cite{Waldhausen1969}. The involution $h$ is an element of the symmetry group of the knot, denoted $\operatorname{Sym}(S^3,K)$, which is identified with the mapping class group of the knot exterior $M_K\cong S^3\smallsetminus \nu(K)$. Properties of this group are summarized by Kawauchi \cite[Chapter 10]{Kawauchi1996}; strong inversions in particular are considered by Sakuma \cite{Sakuma1986}. While the symmetry group of a knot may be trivial, and in particular, a given knot might not admit a strong inversion, these are relatively natural objects. For example:
\begin{theorem}[Schreier \cite{Schreier1924}, see {\cite[Exercise 10.6.4]{Kawauchi1996}} or {\cite[Proposition 3.1 (1)]{Sakuma1986}}]\label{thm:Schreier} If $K$ is a torus knot then $\operatorname{Sym}(S^3,K)\cong\mathbb{Z}/2\mathbb{Z}$ is generated by a unique strong inversion on $K$. $\ensuremath\Box$ \end{theorem}
\begin{theorem}[Thurston, see {\cite[Page 124]{Riley1979}} and {\cite[Proposition 3.1 (2)]{Sakuma1986}}] \label{thm:dihedral}If $K$ is a hyperbolic knot then $\operatorname{Sym}(S^3,K)$ is a subgroup of a dihedral group. In particular, $K$ admits 0, 1 or 2 strong inversions up to conjugacy, and $K$ admits 2 strong inversions if and only if $K$ admits a free or cyclic involution. $\ensuremath\Box$\end{theorem}
More generally, any given knot admits finitely many strong inversions \cite{Kojima1983}.
\begin{definition}\label{def:strong-knot}
A strongly invertible knot is a pair $(K,h)$ where $K$ is a knot in $S^3$ and $h$ is a strong inversion on $K$. Equivalence of strongly invertible knots $(K,h)$ and $(K',h')$ is up to orientation preserving homeomorphism $f\colon\thinspace S^3\to S^3$ satisfying $f(K)=K'$ and $fhf^{-1}=h'$. In particular, a strongly invertible knot corresponds to the conjugacy class of a strong inversion in $\operatorname{Sym}(S^3,K)$.
\end{definition}
If $(K,h)$ is a strongly invertible knot then the knot exterior $M_K$ admits an involution $h|_{M_K}$ with one dimensional fixed-point set meeting the boundary torus in four points. Moreover, the quotient of $M_K$ by the involution $h|_{M_K}$ is necessarily homeomorphic to $B^3$ (see \cite[Proposition 3.5]{Watson2012}, for example), and the image of the fixed-point set in the quotient is a pair of properly embedded arcs.
Choose a pair of disjoint annuli in $\partial M_K$, equivariant with respect to $h$, with meridional cores. Then the quotient of $M_K$ (as an equivariantly sutured manifold) is a sutured tangle denoted $T_{K,h}$; see Figure \ref{fig:si}. Notice that, by construction, the closure $T_{K,h}(\frac{1}{0})$ provides a branch set for the trivial surgery on $K$ and is therefore the trivial knot.
\begin{figure}
\labellist\footnotesize
\pinlabel $h$ at 136 4
\pinlabel $h(\gamma_\mu)$ at 30 48
\pinlabel $\gamma_\mu$ at 193 48
\pinlabel $\mu_0$ at 93 198
\pinlabel $\mu_1$ at 93 127
\pinlabel $\overline{\mu_0}$ at 283 100
\pinlabel $\overline{\mu_1}$ at 458 100
\endlabellist
\includegraphics[scale=0.75]{figures/trefoil-quotient}
\caption{The trefoil is strongly invertible. On the left, the involution on the complement of the trefoil is illustrated. Note that this symmetry exchanges the annular sutures $\gamma_\mu$ and $h(\gamma_\mu)$ in the boundary while fixing the meridians $\mu_0$ and $\mu_1$. On the right, the resulting quotient is a sutured tangle where each meridian descends to an arc $\operatorname{im}(\mu_i)=\overline{\mu_i}\in D^2\times\{i\}$ for $i=0,1$. The representative of the quotient tangle shown here is compatible with the framing $6\mu+\lambda$ (in terms of the preferred generators); more on this quotient may be found in \cite[Section 3]{Watson2012} and \cite[Section 2]{Watson2013}.}\label{fig:si}\end{figure}
\begin{proposition}\label{prp:tangle}
There is a one-to-one correspondence between strongly invertible knots $(K,h)$ and sutured tangles satisfying the additional property that $T(\frac{1}{0})$ is the trivial knot.
\end{proposition}
\labellist
\large \pinlabel $T$ at 52 22.5
\small \pinlabel $a$ at 2 53
\endlabellist
\parpic[r]{\includegraphics[scale=0.75]{figures/lifting-an-arc}}
{\it Proof.} This is immediate from the discussion above. To reverse the construction, notice that the two-fold branched cover of the trivial knot $T(\frac{1}{0})$ is $S^3$, and the lift of an unknotted arc $a$ meeting the two lobes of the closure defining $T(\frac{1}{0})$ is a strongly invertible knot in $S^3$. $\ensuremath\Box$
Note that, if $T_{K,h}= (D^2\times I,\tau)$ is the tangle associated with a strongly invertible knot $(K,h)$, then the above construction realises the knot exterior $M_K$ as the two-fold branched cover of $D^2\times I$, with branch set $\tau$, denoted $\mathbf{\Sigma}_{T_{K,h}}$. In particular, given distinct (conjugacy classes of) strong inversions $h,h'$ on $K$, we get distinct strongly invertible knots $(K,h)$, $(K,h')$ (in the sense of Definition \ref{def:strong-knot}), and the knot exterior may be realised as a two-fold branched cover in two distinct ways.
Moreover, the Dehn surgery $S^3_n(K)$ on a strongly invertible knot $(K,h)$ may be realised as the two-fold branched cover of $S^3$ with branch set $T_{K,h}(n)$ for a suitable choice of representative (this is the preferred representative of \cite[Section 3.4]{Watson2012}). Note that this is a generalization/reformulation of the Montesinos trick \cite{Montesinos1976-trick}.
It is an immediate consequence of the property that $T(\frac{1}{0})$ is unknotted that the sutured tangle $T$ is braid-like. In view of Proposition \ref{prp:tangle}, given a strongly invertible knot $(K,h)$ we can associate the $\mathbb{Z}$-graded invariant $\KHT(T_{K,h})$.
Let $\mathbf{A}=\KHT(T_{K,h})$ and recall that $x\in\mathbf{A}$ may be identified with a sequence $\{x_j\}_{j\in\mathbb{Z}}$ such that $f_j(x_{j+1})=x_j$, where $x_j\in\Khred(T(j))$, for all $j\in\mathbb{Z}$.
\begin{definition}\label{def:strong-inv}
Given a strongly invertible knot $(K,h)$ consider the subspace $\mathbf{K}\subset\mathbf{A}$ consisting of sequences satisfying the additional condition that $x_j=0$ for $j\ll0$. Denote by $\varkappa(K,h)$ the vector space $\mathbf{A}/\mathbf{K}$.
\end{definition}
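Unwinding Definition \ref{def:strong-inv} in terms of coherent sequences (a reformulation, not an additional hypothesis):
\[\mathbf{K}=\big\{\{x_j\}_{j\in\mathbb{Z}}\in\mathbf{A} \;\big|\; x_N=0 \text{ for some } N\in\mathbb{Z}\big\},\]
since coherence gives $x_j=f_j(x_{j+1})$, so the vanishing of a single coordinate $x_N$ forces $x_j=0$ for all $j\le N$.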
\begin{proposition}\label{prp:subspace}
The vector space $\varkappa(K,h)$ is a finite-dimensional $\mathbb{Z}$-graded invariant of the strongly invertible knot $(K,h)$, up to isomorphism.
\end{proposition}
\begin{proof}
This can be seen from the short exact sequence of vector spaces \[
\begin{tikzpicture}[>=latex]
\matrix (m) [matrix of math nodes, row sep=1.5em,column sep=1.5em]
{ 0&\mathbf{K}&\mathbf{A}& \varkappa(K,h) & 0 \\
};
\path[->,font=\scriptsize]
(m-1-1) edge (m-1-2)
(m-1-2) edge (m-1-3)
(m-1-3) edge (m-1-4)
(m-1-4) edge (m-1-5)
;
\end{tikzpicture}
\] which, owing to the fact that the inclusion is graded, may be decomposed according to the $\mathbb{Z}$-grading. That is, $\varkappa(K,h)\cong\bigoplus_{u\in\mathbb{Z}}\varkappa^u(K,h)$, where $\varkappa^u(K,h)\cong \mathbf{A}^u/\mathbf{K}^u$ is the $u^{\text{th}}$ graded piece of $\varkappa(K,h)$, obtained by decomposing the inclusion into graded pieces $\mathbf{K}^u\hookrightarrow\mathbf{A}^u$.
Setting $A_j=\Khred(T_{K,h}(j))$, so that $\mathbf{A}=\varprojlim A_j$ as in the definition of $\KHT(T_{K,h})$, any choice of splitting gives rise to a (non-canonical) inclusion $\sigma\colon\thinspace\varkappa(K,h)\hookrightarrow\mathbf{A}$.
Recall that the universal property for the inverse limit is summarized in the present case by the commutative diagram
\[\begin{tikzpicture}[>=latex]
\matrix (m) [matrix of math nodes, row sep=1.5em,column sep=2em]
{ & \varkappa(K,h) & \\
& \mathbf{A} & \\
A_{j+1}& & A_j \\ };
\path[->,font=\scriptsize]
(m-1-2) edge[right hook->] (m-2-2)
(m-2-2) edge node[above] {$ \pi_{j+1} $} (m-3-1)
(m-2-2) edge node[above] {$ \pi_j $} (m-3-3)
(m-1-2) edge[bend right] node[left] {$ \iota_{j+1} $} (m-3-1)
(m-1-2) edge[bend left] node[right] {$ \iota_j $} (m-3-3)
(m-3-1) edge node[above] {$f_j$} (m-3-3);
\node at (0.35,0.555) {\scriptsize$\sigma$};
\node at (-3.2,-1.1) {$\cdots$};\draw[->] (-2.9,-1.1) -- (-2.4,-1.1);
\node at (3.2,-1.1) {$\cdots$};\draw[->] (2.4,-1.1) -- (2.9,-1.1);
\end{tikzpicture}\]
resulting in a family of maps $\iota_j=\pi_j\circ\sigma$ for $j\in\mathbb{Z}$. Note that $\ker(\iota_j)=\ker(\sigma)=0$ since $\sigma(y)_j\in A_j$ must be non-zero, for all $j\in\mathbb{Z}$, for any given non-zero element $y\in\varkappa(K,h)$. As a result this construction gives injections $\iota_j\colon\thinspace\varkappa(K,h)\hookrightarrow A_j$ for all $j\in\mathbb{Z}$ and, since $A_j$ is finite dimensional, $\varkappa(K,h)$ must be finite dimensional as well.
\end{proof}
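In particular, the injections $\iota_j$ constructed in the proof give the immediate (if crude) bound
\[\dim_{\mathbb{F}}\varkappa(K,h)\;\le\;\min_{j\in\mathbb{Z}}\dim_{\mathbb{F}}\Khred(T_{K,h}(j)).\]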
We reiterate that distinguishing strong inversions on a particular knot gives additional information about the symmetry group. In particular, if $\varkappa(K,h)\ncong\varkappa(K,h')$ then we have identified distinct conjugacy classes of involutions in $\operatorname{Sym}(S^3,K)$. Moreover, if $K$ is hyperbolic and $\varkappa(K,h)\ncong\varkappa(K,h')$, then there must be a free or cyclic involution on the knot complement and hence a third element of order two in $\operatorname{Sym}(S^3,K)$ (see Theorem \ref{thm:dihedral}).
\section{Properties}\label{sec:properties}
\subsection{Behaviour under mirror image}\label{sub:mirror} A key property of the invariant $\varkappa$ is inherited from Khovanov homology.
\begin{proposition}\label{prp:mirror}
Let $(K,h)$ be a strongly invertible knot and denote by $(K,h)^*$ the strongly invertible mirror, obtained by reversing orientation on $S^3$. Then $\varkappa^u(K,h)^*\cong\varkappa^{-u}(K,h)$ as $\mathbb{Z}$-graded vector spaces.
\end{proposition}
Note that taking the strongly invertible mirror need not preserve the conjugacy class of $h\in\operatorname{Sym}(S^3,K)$ in the case that the underlying knot is amphicheiral (that is, when $K^*\simeq K$); compare \cite[Proposition 4.3]{Sakuma1986}.
\begin{proof}[Proof of Proposition \ref{prp:mirror}] From the construction of the tangle $T_{K,h}$ associated with a strongly invertible knot $(K,h)$ we have that $T^*_{K,h}=T_{(K,h)^*}$ is the tangle associated with the strongly invertible mirror $(K,h)^*$. From the foregoing discussion, any choice of graded section $\sigma\colon\thinspace\varkappa(K,h)\hookrightarrow\KHT(T_{K,h})$ gives rise to a family of graded inclusion maps $\iota_j=\pi_j\circ\sigma$ for $j\in\mathbb{Z}$. Now, for any $j\in\mathbb{Z}$, we have
\[\begin{tikzpicture}[>=latex]
\matrix (m) [matrix of math nodes, row sep=1.5em,column sep=2.5em]
{ \varkappa^u(K,h)^* & \Khred^u(T_{K,h}^*(-j)) \\
\varkappa^{-u}(K,h) & \Khred^{-u}(T_{K,h}(j))\\};
\path[->,font=\scriptsize]
(m-1-1) edge[right hook->] node[above] {$\iota_{-j}$} (m-1-2)
(m-2-1) edge[right hook->] node[above] {$\iota_{j}$} (m-2-2)
(m-1-2) edge[<->] node[right] {$\cong$}(m-2-2);
\end{tikzpicture}\]
by applying the behaviour of Khovanov homology for mirrors since $(T_{K,h}(j))^*\simeq T^*_{K,h}(-j)$. Composing with the isomorphism gives inclusions establishing that $\varkappa^u(K,h)^*\subseteq\varkappa^{-u}(K,h)$ and $\varkappa^{-u}(K,h)\subseteq\varkappa^u(K,h)^*$. As a result we have the identification $\varkappa^u(K,h)^*\cong \varkappa^{-u}(K,h)$ as claimed.
\end{proof}
This may be obtained, alternatively, from the more general observation that \[\KHT^{-u}(T_{K,h})\cong\KHT^{u}(T^*_{K,h}).\] Given that $\Khred^{-u}(T_{K,h}(j))\cong\Khred^u(T_{K,h}^*(-j))$, we leave the reader to check that the relevant linear maps $f_{-j}$ and $f^*_{j-1}$, which are exchanged under this identification, correspond at the chain level to projections and inclusions of complexes, respectively. In particular, there is an analogous inverse system associated with resolutions of negative crossings arising from the long exact sequence for a negative crossing.
\subsection{Notions of stability} A key feature of Khovanov homology, leading ultimately to the computability of $\varkappa(K,h)$, is that the vector space $\Khred(T(n))$ stabilises, in a suitable sense, for sufficiently large $n$. This is made precise in the following statement (compare \cite[Lemma 4.10]{Watson2012}, taking note of the change in grading convention).
\begin{lemma}\label{lem:stability}
Fix a representative $T=T_{K,h}$ for the sutured tangle associated with a strongly invertible knot $(K,h)$. Let $X$ be the one dimensional bi-graded vector space $\mathbb{F}^{(c_T,\phalf(1-c_T))}\cong\Khred(T(\frac{1}{0}))[c_T,\frac{1}{2}(1-c_T)]$ where $c_T$ is the difference in negative crossings between the braid-like and non-braid-like orientation on the arcs of $T$. Then, up to an overall $-\frac{n}{2}$ in the $\delta$-grading, \[\Khred(T(n))\cong H_*\Big(\Khred(T(0)) \overset{D}{\to} \bigoplus_{i=0}^nX[i,0]\Big)\] obtained from an iterated mapping cone construction where $D$ is of bi-degree $(1,1)$.
\end{lemma}
\begin{proof}
We fix the constant $c_T=n_-(T(\frac{1}{0}))-n_-(T(0))$ throughout, and let $n>0$ so that $c=n_-(T(\frac{1}{0}))-n_-(T(n))=c_T+n$. Now considering iterated applications of the long exact sequence (and minding gradings!), $\Khred(T(n))$ may be computed in terms of $\Khred(T(0))$:
\[
\begin{tikzpicture}[>=latex]
\matrix (m) [matrix of math nodes, row sep=1em,column sep=2em]
{ \Khred(T(n)) &\\
\Khred(T(n-1))[0,-\half] &\Khred(T(\frac{1}{0}))[c_T+n,-\half(1-c_T-n)] \\
\Khred(T(n-2))[0,-1] & \Khred(T(\frac{1}{0}))[c_T+n-1,-\half(1-c_T-n)] \\
\vdots & \vdots\\
\Khred(T(0))[0,-\frac{n}{2}] & \Khred(T(\frac{1}{0}))[c_T,-\half(1-c_T-n)] \\} ;
\path[->,font=\scriptsize]
(m-1-1) edge node[auto] {$ f_{n-1} $} (m-2-1)
(m-2-1) edge node[auto] {$ f_{n-2} $} (m-3-1)
(m-3-1) edge node[auto] {$ f_{n-3} $} (m-4-1)
(m-4-1) edge node[auto] {$ f_{0} $} (m-5-1)
(m-2-2) edge[<-] node[auto] {$ \partial $} (m-2-1)
(m-3-2) edge[<-] node[auto] {$ \partial$} (m-3-1)
(m-5-2) edge[<-] node[auto] {$ \partial$} (m-5-1)
;
\end{tikzpicture}
\]
Recall that the connecting homomorphisms $\partial$ are of bi-degree $(1,1)$ and, in particular, raise the $\delta$-grading by one. As a result, since the occurrences of $\Khred(T(\frac{1}{0}))$ are in a fixed $\delta$-grading, this iterative process does not induce any maps between the $\Khred(T(\frac{1}{0}))$. Hence $\Khred(T(n))$ may be computed from a complex (or, mapping cone; see Weibel \cite{Weibel1994}, for example) of the form
\[
\begin{tikzpicture}[>=latex]
\matrix (m) [matrix of math nodes, row sep=0.5em,column sep=2em]
{
& X[n,-\frac{n}{2}] \\
& X[n-1,-\frac{n}{2}] \\
& \vdots \\
\Khred(T(0))[0,-\frac{n}{2}] &X[0,-\frac{n}{2}] \\} ;
\path[->]
(m-4-1) edge[bend left] (m-1-2)
(m-4-1) edge[bend left] (m-2-2)
(m-4-1) edge[bend left] node[above] {$\vdots$} (m-4-2)
;
\end{tikzpicture}
\]
where each of the depicted maps is induced from the connecting homomorphism. The total map $D$ therefore raises the bi-grading by $(1,1)$ and the homology of $D$ gives the result as claimed.
\end{proof}
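As a consistency check on Lemma \ref{lem:stability} (our paraphrase, with $k$ denoting the number of one-dimensional summands $X[i,0]$ in the target of $D$), taking dimensions in the two-term complex gives
\[\dim_{\mathbb{F}}\Khred(T(n)) \;=\; \dim_{\mathbb{F}}\Khred(T(0)) + k - 2\operatorname{rank} D,\]
so that once the rank of $D$ has stabilised, each additional half-twist contributes exactly one new generator.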
There are two immediate and important consequences of this observation.
\begin{corollary}[See {\cite[Lemma 4.14]{Watson2012}}]\label{cor:stability}
If $n\gg0$ then $\Khred(T(n+1))\cong\Khred(T(n))\oplus\mathbb{F}$. $\ensuremath\Box$
\end{corollary}
\begin{corollary}\label{cor:inj/surj}
The map $f_i$ is surjective for all $i\gg 0$ and injective for all $i\ll 0$. $\ensuremath\Box$
\end{corollary}
Corollary \ref{cor:inj/surj} is an essential observation: It ensures computability of $\varkappa(K,h)$ and $\KHT(T_{K,h})$. Note that {\em sufficiently large/small} in this context depends, in general, on the choice of representative for the sutured tangle. On the other hand, varying the choice of representative can be a useful trick for computing $\varkappa(K,h)$ (see Section \ref{sec:examples}).
\subsection{Detecting the trivial knot}The object $\varkappa(K,h)$ bears some similarities with a polynomial invariant $\eta(K,h)$ of strongly invertible knots considered by Sakuma \cite{Sakuma1986}. In particular, Sakuma proves that $\eta$ is zero for the trivial knot. However, Sakuma's invariant must also vanish for $(K,h)$ if $K$ is amphicheiral and $h$ is a unique strong inversion on $K$ (up to conjugacy) \cite[Proposition 3.4 (1)]{Sakuma1986}. A stronger statement holds for $\varkappa(K,h)$ (compare \cite[Section 4.6]{Watson2012}).
\begin{named}{Theorem \ref{thm:unknot}}Let $(K,h)$ be a strongly invertible knot. Then $\varkappa(K,h)=0$ if and only if $K$ is the trivial knot. \end{named}
\parpic[r]{\includegraphics[scale=0.75]{figures/unknot-tangle}}
{\it Proof.} First suppose that $K$ is the trivial knot, and notice that $T_{K,h}(n)$ may be identified with the $(2,n-1)$-torus link. This choice of representative for the sutured tangle is illustrated on the right; notice that $T(1)$ is the two-component trivial link and $T(2)$ and $T(0)$ are both trivial knots. The result now follows from direct calculation (compare \cite[Section 6.2]{Khovanov2000}). To see this, observe that if some composition $f=f_i\circ\cdots\circ f_j$ of the inverse system is the zero map, then $\varkappa(K,h)$ must vanish. Indeed, in this situation any sequence $\{x_j\}$ for which $f_{j}(x_{j+1})=x_j$ and $x_j\in A_j$ must also satisfy $x_j=0$ for $j\ll0$. Hence $\mathbf{A}\cong\mathbf{K}$ (in the notation of Definition \ref{def:strong-inv}).
We simply observe that $f_0\circ f_1=0$ in the present setting. In detail, the long exact sequence defining $f_1$ is
\[\begin{tikzpicture}[>=latex]
\matrix (m) [matrix of math nodes, row sep=1em,column sep=1em]
{ \Khred(T(2)) & & \Khred(T(1))[0,-\half] \\
& \Khred(T({\frac{1}{0}}))[1,0] & \\ };
\path[->,font=\scriptsize]
(m-1-1) edge node[auto] {$ f_1 $} (m-1-3)
(m-2-2.north west) edge (m-1-1.south east)
(m-1-3.south west) edge node[auto] {$ \partial $} (m-2-2.north east);
\end{tikzpicture}\]
which simplifies to
\[\begin{tikzpicture}[>=latex]
\matrix (m) [matrix of math nodes, row sep=1em,column sep=1em]
{ \mathbb{F}^{(0,0)} & & \mathbb{F}^{(0,0)}\oplus\mathbb{F}^{(0,\text{-}1)}\\
& \mathbb{F}^{(1,0)} & \\ };
\path[->,font=\scriptsize]
(m-1-1) edge node[auto] {$ f_1 $} (m-1-3)
(m-2-2) edge node[auto] {$0$} (m-1-1)
(m-1-3) edge (m-2-2);
\end{tikzpicture}\]
so that, in particular, if $x_0$ generates $\mathbb{F}^{(0,0)}$ then $f_1(x_0)=(x_0,0)\in \mathbb{F}^{(0,0)}\oplus\mathbb{F}^{(0,\text{-}1)}$. Similarly, to define $f_0$ we have
\[\begin{tikzpicture}[>=latex]
\matrix (m) [matrix of math nodes, row sep=1em,column sep=1em]
{ \Khred(T(1)) & & \Khred(T(0))[0,-\half] \\
& \Khred(T(\frac{1}{0}))[0,\half] & \\ };
\path[->,font=\scriptsize]
(m-1-1) edge node[auto] {$ f_0 $} (m-1-3)
(m-2-2.north west) edge (m-1-1.south east)
(m-1-3.south west) edge node[auto] {$ \partial $} (m-2-2.north east);
\end{tikzpicture}\]
which simplifies to give
\[\begin{tikzpicture}[>=latex]
\matrix (m) [matrix of math nodes, row sep=1em,column sep=1em]
{ \mathbb{F}^{(0,\phalf)}\oplus\mathbb{F}^{(0,\mhalf)} & & \mathbb{F}^{(0,\mhalf)}\\
& \mathbb{F}^{(0,\phalf)} & \\ };
\path[->,font=\scriptsize]
(m-1-1) edge node[auto] {$ f_0 $} (m-1-3)
(m-2-2) edge (m-1-1)
(m-1-3) edge node[auto] {$0$} (m-2-2);
\end{tikzpicture}\]
where $f_0(x_{\phalf},x_{\mhalf})=x_{\mhalf}$ for $(x_{\phalf},x_{\mhalf})\in \mathbb{F}^{(0,\phalf)}\oplus\mathbb{F}^{(0,\mhalf)}$. Composing these two maps yields $(f_0\circ f_1)(x_0) = f_0(x_0,0)=0$ as claimed.
The converse depends on a relationship with Heegaard Floer homology. Let $(K,h)$ be a strongly invertible knot and suppose that $\varkappa(K,h)\cong0$. For an appropriately chosen representative of $T=T_{K,h}$ we have that $S^3_n(K)$ is the two-fold branched cover $\mathbf{\Sigma}_{T(n)}$.
Now let $n\gg 0$ so that by Corollary \ref{cor:stability} we have $\Khred(T(n+1))\cong\Khred(T(n))\oplus\mathbb{F}$. Then given a graded section $\sigma\colon\thinspace \varkappa(K,h)\hookrightarrow \KHT(T_{K,h})$ we may write \begin{align*}\Khred(T(n+1))&\cong\varkappa(K,h)\oplus\mathbb{F}^{m+1}\\ \Khred(T(n))&\cong\varkappa(K,h)\oplus\mathbb{F}^{m}\end{align*} for some $m\ge0$. Applying Lemma \ref{lem:stability}, $\mathbb{F}^{m}$ and $\mathbb{F}^{m+1}$ are supported in a single (relative) $\delta$-grading. Indeed, these subspaces cannot be present in $\Khred(T(n))$ when $n\ll0$, so they must cancel in the associated iterated mapping cone. Note that $m\le n$ is determined by those $f_{i>0}$ that are surjective (as in Corollary \ref{cor:inj/surj}).
By choosing $n\gg 0$ the connecting homomorphism vanishes for grading reasons and we obtain a short exact sequence
\[\begin{tikzpicture}[>=latex]
\matrix (m) [matrix of math nodes, row sep=1.5em,column sep=1.5em]
{ 0&\mathbb{F}&{ \varkappa(K,h)\oplus\mathbb{F}^{m+1}}& \varkappa(K,h)\oplus\mathbb{F}^{m} & 0 \\
};
\path[->,font=\scriptsize]
(m-1-1) edge (m-1-2)
(m-1-2) edge (m-1-3)
(m-1-3) edge node[above] {$ f_n $} (m-1-4)
(m-1-4) edge (m-1-5)
;
\end{tikzpicture}\]
(where the grading shifts have been suppressed). But $\varkappa(K,h)$ vanishes by hypothesis, so $\Khred(T(n))\cong \mathbb{F}^m$ is supported in a single $\delta$-grading. It follows that \[m=\det(T(n))=|H_1(\mathbf{\Sigma}_{T(n)};\mathbb{Z})|=|H_1(S^3_n(K);\mathbb{Z})|\] and $m=n$.
Given a link $L$ there is a spectral sequence starting from $\Khred(L)$ and converging to $\widehat{\operatorname{HF}}(-\mathbf{\Sigma}_L)$ (here, $-\mathbf{\Sigma}_L$ denotes the two-fold branched cover $\mathbf{\Sigma}_L$ with orientation reversed) \cite{OSz2005-branch}. In the present setting \[n=|H_1(S^3_n(K);\mathbb{Z})|\le\dim\widehat{\operatorname{HF}}(S^3_n(K))\le\dim\Khred(T(n)) = n\] so that $\dim\widehat{\operatorname{HF}}(S^3_n(K))=|H_1(S_n^3(K);\mathbb{Z})|$ and hence $S^3_n(K)$ is an L-space for all $n\gg0$. As a result, $g(K)=\tau(K)$, where $g(K)$ is the genus of the knot and $\tau(K)$ is the Ozsv\'ath-Szab\'o concordance invariant \cite[Proposition 3.3]{OSz2005-lens}.
A nearly identical argument for $n\ll0$ shows that $S^3_n(K)$ is an L-space for all $|n|\gg 0$. Thus $S^3_{-n}(K)\cong S^3_n(K^*)$ is an L-space for sufficiently large $n$ and hence $g(K)=g(K^*)=\tau(K^*)=-\tau(K)$ \cite[Lemma 3.3]{OSz2003}. It follows that $\tau(K)=g(K)=0$ and hence $K$ is the trivial knot. $\ensuremath\Box$
\subsection{An aside on cabling} While not every knot $K$ is strongly invertible, it is always the case that $D(K)=K\# K$ is a strongly invertible knot (with canonical strong inversion that switches the components, which we will suppress from the notation). As a result, the relatively graded vector space resulting from the composite $\varkappa(D(-))$ is an invariant of knots in $S^3$ (compare \cite{HW2010}). This invariant detects the trivial knot as a consequence of Theorem \ref{thm:unknot}, and is closely related to work of Grigsby and Wehrli. Indeed, we obtain an alternate proof of the following:
\begin{theorem}[Grigsby-Wehrli \cite{GW2010}, Hedden \cite{Hedden2009-unknot}]\label{thm:cable}
The Khovanov homology of the two-cable of a knot detects the trivial knot.
\end{theorem}
\begin{proof}[Sketch of Proof] Let $T$ be the tangle associated with the quotient of $D(K)$ with representative chosen so that $S^3_0(D(K))\cong \mathbf{\Sigma}_{T(0)}$. We appeal to two immediate facts. First, that $D(K)$ is the trivial knot if and only if $K$ is the trivial knot, and second, that $T(0)$ is the (untwisted) two-cable of $K$ denoted $\mathcal{C}_2K$. We need to show that if $\Khred(\mathcal{C}_2K)\cong \mathbb{F}^2$ then $\mathcal{C}_2K$ is the two-component trivial link (and hence $K$ is trivial).
Note that since $\varkappa(D(K))$ injects into $\Khred(T(0))\cong\Khred(\mathcal{C}_2K)$ we may write $\Khred(\mathcal{C}_2K)\cong\varkappa(D(K))\oplus\mathbb{F}^n$ with the summand $\mathbb{F}^n$ supported in a single $\delta$-grading (Lemma \ref{lem:stability}). Since $\varkappa(D(K))$ vanishes only for the trivial knot (Theorem \ref{thm:unknot}) we may assume that $\varkappa(D(K))$ is non-zero. Furthermore, $0=\det(T(0))=|\chi_\delta\Khred(\mathcal{C}_2K)|$ so if $n\ge 2$ we are done. This leaves two cases to consider: either $n=0$ and $\dim\varkappa(D(K))=2$ or $n=1$ and $\dim\varkappa(D(K))=1$. Notice that, in either case, $\Khred(\mathcal{C}_2K)\cong\mathbb{F}^2$.
Now applying the Ozsv\'ath-Szab\'o spectral sequence for the two-fold branched cover, together with the non-vanishing of $\widehat{\operatorname{HF}}(S^3_0(D(K)))$, we have that $\widehat{\operatorname{HF}}(S^3_0(D(K)))\cong\mathbb{F}^2$ \cite{OSz2005-branch}. Now $S^3_0(D(K))$ is prime \cite[Corollary 4.5]{Scharlemann1990} so by a result of Hedden and Ni $S^3_0(D(K))$ must be $S^2\times S^1$ or 0-surgery on a trefoil \cite[Theorem 1.1]{HN2010}. The latter can be ruled out by hand (see the first example of Section \ref{sec:examples}; compare \cite{HW2010}) hence $\mathcal{C}_2K$ must be the two-component trivial link.
\end{proof}
This proof is not appreciably different from those already in the literature and represents essentially a reorganizing of the data on the $E_2$-page of a spectral sequence. However, it illustrates an interesting point: The proof could be shortened and made more internal to Khovanov homology were it known that $\dim\varkappa(K,h)>2$ for $K$ non-trivial. This is worth advertising as something stronger appears to be true; see Conjecture \ref{con:structure}.
\section{Examples}
\label{sec:examples}
We now turn to calculations of $\varkappa(K,h)$ for some explicit examples. While this invariant is defined as a $\mathbb{Z}$-graded vector space, in practice (as seen in establishing some of the properties in Section \ref{sec:properties}) it is useful to make use of the secondary (relative) $\mathbb{Z}$-grading from $\delta=u-q$, which is described in Section \ref{sec:Kh}.
\subsection{Torus knots} We begin by giving a relatively detailed calculation for the invariant associated with the right-hand trefoil (our running example throughout the paper, denoted here by $K$). The strong inversion is shown in Figure \ref{fig:si} together with the quotient tangle. Note that the representative depicted satisfies $S^3_6(K)\cong\mathbf{\Sigma}_{T(0)}$ since the connected sum in the branch set identifies the reducible surgery on this torus knot (see Moser \cite{Moser1971}). As a result, $T(-5)$ may be identified with the (negative) $(3,5)$-torus knot; see Figure \ref{fig:gradings}.
It will be convenient to fix the preferred representative $T^\circ$ for this tangle satisfying $S^3_n(K)\cong\mathbf{\Sigma}_{T^\circ(n)}$. With this choice, $T^\circ(1)$ is the knot $10_{124}$ of Figure \ref{fig:gradings} and $T^\circ(6)$ is the connected sum as above. We focus on the portion
\[\begin{tikzpicture}[>=latex]
\matrix (m) [matrix of math nodes, row sep=1em,column sep=1em]
{\phantom{\cdot} &\Khred(T^\circ(6)) & \Khred(T^\circ(5)) & \Khred(T^\circ(4)) & \Khred(T^\circ(3)) & \Khred(T^\circ(2)) & \Khred(T^\circ(1)) & \phantom{\cdot} \\
};
\path[->,font=\scriptsize]
(m-1-1) edge (m-1-2)
(m-1-2) edge node[auto] {$ f_{5} $} (m-1-3)
(m-1-3) edge node[auto] {$ f_{4} $} (m-1-4)
(m-1-4) edge node[auto] {$ f_{3} $} (m-1-5)
(m-1-5) edge node[auto] {$ f_{2} $} (m-1-6)
(m-1-6) edge node[auto] {$ f_{1} $} (m-1-7)
(m-1-7) edge (m-1-8);
\end{tikzpicture}\]
of the inverse system. The key observation is that each of these maps, and indeed each of the $f_i$, is determined by $\Khred(T^\circ(1))$ and $\Khred(T^\circ(6))$ together with Lemma \ref{lem:stability}. Namely, in the notation of Lemma \ref{lem:stability} we have that
\[\Khred(T^\circ(6)) \cong H_*\Big(\Khred(T^\circ(1)) \overset{D}{\to}\mathbb{F}^5\Big)\]
and, since $T^\circ(6)$ is a connect sum of two-bridge links, $\Khred(T^\circ(6))\cong\mathbb{F}^6$ must be supported in a single $\delta$-grading (that is, alternating links are {\em thin}, see Lee \cite{Lee2005}). The precise (graded) form of this invariant is easily calculated and is given in Figure \ref{fig:trefoil-calc}. On the other hand, from Figure \ref{fig:gradings} we have that $\Khred(T^\circ(1))\cong\mathbb{F}^3\oplus\mathbb{F}^4$ supported in two adjacent $\delta$-gradings. As a result, we have that
\[\Khred(T^\circ(6))\cong H_*\Big(\Khred(T^\circ(1)) \overset{D}{\to} \bigoplus_{i=0}^4\mathbb{F}^{(u=i-6)}\Big)\]
up to an overall shift in the $\delta$-grading.
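Spelling out the dimension count implicit here: $\dim\Khred(T^\circ(1))=3+4=7$ while $\dim\Khred(T^\circ(6))=6$, so
\[6=\dim H_*\Big(\Khred(T^\circ(1)) \overset{D}{\to} \bigoplus_{i=0}^4\mathbb{F}^{(u=i-6)}\Big)=(7+5)-2\operatorname{rank}D,\]
forcing $\operatorname{rank}D=3$.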
\begin{figure}[t]
\begin{tikzpicture}[>=latex]
\fill [lightgray] (4,7.25)-- (3.5,7.25) -- (3.5,6)--(4,6);
\fill [lightgray] (3,7.25)-- (3.5,7.25) -- (3.5,4.75)--(3,4.75);
\fill [lightgray] (3,7.25)-- (2.5,7.25) -- (2.5,3.5)--(3,3.5);
\fill [lightgray] (2,7.25)-- (2.5,7.25) -- (2.5,2.25)--(2,2.25);
\fill [lightgray] (1,7.25)-- (1.5,7.25) -- (1.5,0)--(1,0);
\draw [gray] (4,-5) -- (4,7.25);
\draw [gray] (3.5,-5) -- (3.5,7.25);
\draw [gray] (3,-5) -- (3,7.25);
\draw [gray] (2.5,-5) -- (2.5,7.25);
\draw [gray] (2,-5) -- (2,7.25);
\draw [gray] (1.5,-5) -- (1.5,7.25);
\draw [gray] (1,-5) -- (1,7.25);
\draw [gray] (0.5,-5) -- (0.5,7.25);
\draw [gray] (0,-5) -- (0,7.25);
\draw [gray] (-0.5,-5) -- (-0.5,7.25);
\draw [gray] (-0.5,6.5) -- (5,6.5);
\draw [gray] (-0.5,6) -- (5,6);
\draw [gray] (-0.5,4.75) -- (5,4.75);
\draw [gray] (-0.5,5.25) -- (5,5.25);
\draw [gray] (-0.5,3.5) -- (5,3.5);
\draw [gray] (-0.5,4) -- (5,4);
\draw [gray] (-0.5,2.25) -- (5,2.25);
\draw [gray] (-0.5,2.75) -- (5,2.75);
\draw [gray] (-0.5,1) -- (5,1);
\draw [gray] (-0.5,1.5) -- (5,1.5);
\draw [gray] (-0.5,-0.25) -- (5,-0.25);
\draw [gray] (-0.5,0.25) -- (5,0.25);
\draw [gray] (-0.5,-1.5) -- (5,-1.5);
\draw [gray] (-0.5,-1) -- (5,-1);
\draw [gray] (-0.5,-2.25) -- (5,-2.25);
\draw [gray] (-0.5,-2.75) -- (5,-2.75);
\draw [gray] (-0.5,-3.5) -- (5,-3.5);
\draw [gray] (-0.5,-4) -- (5,-4);
\node at (-0.25,-4.75) {\footnotesize{-$7$}};
\node at (0.25,-4.75) {\footnotesize{-$6$}};
\node at (0.75,-4.75) {\footnotesize{-$5$}};
\node at (1.25,-4.75) {\footnotesize{-$4$}};
\node at (1.75,-4.75) {\footnotesize{-$3$}};
\node at (2.25,-4.75) {\footnotesize{-$2$}};
\node at (2.75,-4.75) {\footnotesize{-$1$}};
\node at (3.25,-4.75) {\footnotesize{$0$}};
\node at (3.75,-4.75) {\footnotesize{$1$}};
\node at (4.75, -3.75) {$A_1$};
\node at (-0.25,-3.75) {$\circ$};
\node at (0.25,-3.75) {$\circ$};
\node at (1.25,-3.75) {$\circ$};
\node at (0.75,-3.75) {$\bullet$};
\node at (1.75,-3.75) {$\bullet$};
\node at (2.25,-3.75) {$\bullet$};
\node at (3.25,-3.75) {$\bullet$};
\node at (4.75, -2.5) {$A_2$};
\node at (0.25,-2.5) {$\circ$};
\node at (1.25,-2.5) {$\circ$};
\node at (0.75,-2.5) {$\bullet$};
\node at (1.75,-2.5) {$\bullet$};
\node at (2.25,-2.5) {$\bullet$};
\node at (3.25,-2.5) {$\bullet$};
\node at (4.75, -1.25) {$A_3$};
\node at (1.25,-1.25) {$\circ$};
\node at (0.75,-1.25) {$\bullet$};
\node at (1.75,-1.25) {$\bullet$};
\node at (2.25,-1.25) {$\bullet$};
\node at (3.25,-1.25) {$\bullet$};
\node at (4.75, 0) {$A_4$};
\node at (1.25,-0.12) {$\circ$};
\node at (1.25,0.12) {$\bullet$};
\node at (0.75,0) {$\bullet$};
\node at (1.75,0) {$\bullet$};
\node at (2.25,0) {$\bullet$};
\node at (3.25,0) {$\bullet$};
\node at (4.75, 1.25) {$A_5$};
\node at (1.25,1.25) {$\bullet$};
\node at (0.75,1.25) {$\bullet$};
\node at (1.75,1.25) {$\bullet$};
\node at (2.25,1.25) {$\bullet$};
\node at (3.25,1.25) {$\bullet$};
\node at (4.75, 2.5) {$A_6$};
\node at (1.25,2.5) {$\bullet$};
\node at (0.75,2.5) {$\bullet$};
\node at (1.75,2.5) {$\bullet$};
\node at (2.25,2.6) {$\bullet$};\node at (2.25,2.4) {$\bullet$};
\node at (3.25,2.5) {$\bullet$};
\node at (4.75, 3.75) {$A_7$};
\node at (1.25,3.75) {$\bullet$};
\node at (0.75,3.75) {$\bullet$};
\node at (1.75,3.75) {$\bullet$};
\node at (2.25,3.65) {$\bullet$};\node at (2.25,3.85) {$\bullet$};
\node at (3.25,3.75) {$\bullet$};
\node at (2.75,3.75) {$\bullet$};
\node at (4.75, 5) {$A_8$};
\node at (1.25,5) {$\bullet$};
\node at (0.75,5) {$\bullet$};
\node at (1.75,5) {$\bullet$};
\node at (2.25,4.9) {$\bullet$};\node at (2.25,5.1) {$\bullet$};
\node at (2.75,5) {$\bullet$};
\node at (3.25,4.9) {$\bullet$};\node at (3.25,5.1) {$\bullet$};
\node at (4.75, 6.25) {$A_9$};
\node at (1.25,6.25) {$\bullet$};
\node at (0.75,6.25) {$\bullet$};
\node at (1.75,6.25) {$\bullet$};
\node at (2.25,6.15) {$\bullet$};\node at (2.25,6.35) {$\bullet$};
\node at (2.75,6.25) {$\bullet$};
\node at (3.25,6.15) {$\bullet$};\node at (3.25,6.35) {$\bullet$};
\node at (3.75,6.25) {$\bullet$};
\draw[thick,->>] (4.75,6) -- (4.75,5.25);
\draw[thick,->>] (4.75,4.75) -- (4.75,4);
\draw[thick,->>] (4.75,3.5) -- (4.75,2.75);
\draw[thick,->>] (4.75,2.25) -- (4.75,1.5);
\draw[thick,->] (4.75,1) -- (4.75,0.25);
\draw[thick,->] (4.75,-.25) -- (4.75,-1);
\draw[thick,left hook->] (4.75,-1.5) -- (4.75,-2.25);
\draw[thick,left hook->] (4.75,-2.75) -- (4.75,-3.5);
\node at (4.75,-4.25) {$\vdots$};
\node at (4.75,7) {$\vdots$};
\node at (0.75,7) {$\vdots$};
\node at (1.25,7) {$\vdots$};
\node at (1.75,7) {$\vdots$};
\node at (2.25,7) {$\vdots$};
\node at (2.75,7) {$\vdots$};
\node at (3.25,7) {$\vdots$};
\node at (3.75,7) {$\vdots$};
\node at (-3.5,-2.5) {\includegraphics[scale=0.75]{figures/10_124}}; \draw [very thick, black,->] (-2.4,-3.1) .. controls (-2,-3.5) and (-1.25,-3.75) .. (-0.5,-3.75);
\node at (-4,1) {\includegraphics[scale=0.7]{figures/closure6}}; \draw [very thick, black,->] (-3,0.5) .. controls (-2.5,0) and (-1.25,1.25) .. (-0.5,1.25);
\node at (-2.5,2.75) {\includegraphics[scale=0.6]{figures/closure5}}; \draw [very thick, black,->] (-1.5,2.65) .. controls (-1,2.65) and (-1.25,2.5) .. (-0.5,2.5);
\node at (-4,4) {\includegraphics[scale=0.5]{figures/closure4}}; \draw [very thick, black,->] (-3.15,4.25) .. controls (-2,4.25) and (-1.25,3.75) .. (-0.5,3.75);
\end{tikzpicture}
\caption{Calculations for the unique strong inversion on $K$, the right-hand trefoil: The representative of the associated quotient tangle $T^\circ$ is chosen so that $T^\circ(1)\simeq 10_{124}$ (compare Figure \ref{fig:gradings}) and $T^\circ(6)$ is a connect sum of a trefoil and a Hopf link. There are two relative $\delta$-gradings distinguished here by the conventions that $\circ$ generates a copy of $\mathbb{F}$ in grading $\delta$ and $\bullet$ generates a copy of $\mathbb{F}$ in grading $\delta+1$; recall that the connecting homomorphisms raise both $u$- and $\delta$-grading by one. With the convention that $A_i=\Khred(T^\circ(i))$ the sequences contributing to $\mathbf{K}\subset\mathbf{A}$ have been shaded so that, for example, $\mathbf{A}^0\cong \mathbb{F}^2$ while $\varkappa^0(K)\cong\mathbb{F}$.}
\label{fig:trefoil-calc}
\end{figure}
The differential must cancel the off-diagonal $\mathbb{F}^3$ to yield a thin knot; all other differentials are necessarily trivial. This calculation is summarized in Figure \ref{fig:trefoil-calc}. In particular, setting $A_i=\Khred(T^\circ(i))$, the long exact sequence splits in the cases
\[\begin{tikzpicture}[>=latex]
\matrix (m) [matrix of math nodes, row sep=0.5em,column sep=1em]
{0 &\mathbb{F} & A_{i+1}& A_i & 0 & & {\text{when}\ } i\ge5 \\
0 & A_{i+1}& A_i & \mathbb{F} & 0 && {\text{when}\ } i\le2\\
};
\path[->,font=\scriptsize]
(m-1-1) edge (m-1-2)
(m-1-2) edge (m-1-3)
(m-1-3) edge node[auto] {$ f_{i} $} (m-1-4)
(m-1-4) edge (m-1-5)
(m-2-1) edge (m-2-2)
(m-2-2) edge node[auto] {$ f_{i} $} (m-2-3)
(m-2-3) edge (m-2-4)
(m-2-4) edge (m-2-5)
;
\end{tikzpicture} \]
for grading reasons; hence $f_i$ is surjective for $i\ge5$ and injective for $i\le2$.
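In terms of dimensions, exactness of these split sequences gives
\[\dim A_{i+1}=\dim A_i+1\quad(i\ge5),\qquad\dim A_{i+1}=\dim A_i-1\quad(i\le2),\]
consistent with $\dim A_1=7$ and $\dim A_6=6$ and with the pattern recorded in Figure \ref{fig:trefoil-calc}.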
From the foregoing we calculate
\[\KHT^u(T^\circ)\cong
\begin{cases}
\mathbb{F} & u\ge1\\
\mathbb{F}^2 & u=0\\
\mathbb{F} & u = -1\\
\mathbb{F}^2 & u=-2\\
\mathbb{F} & u = -5,-4,-3 \\
0 & u\le-6
\end{cases}
\] and
\[\varkappa^u(K)\cong
\begin{cases}
\mathbb{F} & u=-5,-3,-2,0\\
0 & {\text{otherwise}}\\
\end{cases}
\]
where the unique strong inversion on $K$ has been suppressed from the notation. Alternatively, given a graded section $\sigma\colon\thinspace\varkappa(K)\hookrightarrow\KHT(T^\circ)$ we have
\[\KHT(T^\circ)\cong\varkappa(K)\oplus u^{-4}\cdot W\]
where $W$ is the subspace of the graded vector space $\mathbb{F}[u]$ consisting of $\sum a_iu^i$ for which $a_1=0$.
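As a consistency check on this decomposition, note that the graded pieces of the shifted summand satisfy
\[\dim\big(u^{-4}\cdot W\big)^u=\begin{cases}1 & u=-4\ \text{or}\ u\ge-2\\0 & \text{otherwise}\end{cases}\]
so that adding the four generators of $\varkappa(K)$ in gradings $u=-5,-3,-2,0$ recovers the dimensions of $\KHT^u(T^\circ)$ computed above.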
We will record the invariant by
\[
\begin{tikzpicture}[scale=0.85]
\draw[step=.5,black] (0,0) grid (3,.5);
\node at (-0.75,0.25) {$\varkappa(K)$};
\node at (0.25,.25) {$1$};
\node at (1.25,.25) {$1$};
\node at (1.75,.25) {$1$};
\node at (2.75,.25) {$1$};
\node at (0.25,-0.25) {\footnotesize{-$5$}};
\node at (0.75,-0.25) {\footnotesize{-$4$}};
\node at (1.25,-0.25) {\footnotesize{-$3$}};
\node at (1.75,-0.25) {\footnotesize{-$2$}};
\node at (2.25,-0.25) {\footnotesize{-$1$}};
\node at (2.75,-0.25) {\footnotesize{$0$}};
\end{tikzpicture}\]
as extracted from Figure \ref{fig:trefoil-calc}. Our convention (here, and in the examples to follow) is that the dimension of the vector space in each $u$-grading is recorded (with blank entries indicating dimension 0); and the $u$-grading (labelled along the bottom) is read from left-to-right, following the conventions in Figure \ref{fig:gradings}.
\begin{remark}\label{rmk:Couture}Couture has defined a Khovanov-like invariant for signed divides which may be regarded as an invariant of strongly invertible knots \cite{Couture2009}. Consulting \cite[Section 3.6]{Couture2009} we see that Couture's invariant for the trefoil has dimension 6 and is supported in positive gradings, suggesting that our invariant is not an alternate formulation of Couture's. (In fact, the difference is perhaps more pronounced for the unknot, where Couture's invariant has dimension 2 and $\varkappa$ vanishes; see Theorem \ref{thm:unknot}.) While a relationship between the two would be interesting, it seems unlikely: both invariants are extracted from auxiliary objects associated with a strong inversion; however, Couture defines an apparently new chain complex, while $\varkappa$ appeals to stable/limiting behaviour of the long exact sequence. \end{remark}
This trick of appealing to surgeries on torus knots may be applied more generally. We know, for example, that:
\begin{theorem}\label{thm:torus}
For any torus knot $K_{p,q}$ the invariant $\varkappa(K_{p,q})$ is thin, in the sense that the vector space is supported in a single (relative) $\delta$-grading. Moreover the dimension of $\varkappa(K_{p,q})$ is bounded above by $|pq|-1$.
\end{theorem}
\begin{proof} Up to taking mirrors it suffices to consider the case $p,q>0$. Fix the representative $T^\circ$ for the quotient tangle of $K_{p,q}$ satisfying $S^3_n(K_{p,q})\cong\mathbf{\Sigma}_{T^\circ(n)}$. Then by a result of Moser we have that $S^3_{pq-1}(K_{p,q})$ is a lens space \cite{Moser1971}, so that the branch set $T^\circ(pq-1)$ must be a two-bridge knot by work of Hodgson and Rubinstein \cite{HR1985}. Now Lee's results establish that $\Khred(T^\circ(pq-1))$ is supported in a single $\delta$-grading \cite{Lee2005}. As a result \[pq-1=|H_1(S^3_{pq-1}(K_{p,q});\mathbb{Z})|=\det(T^\circ(pq-1)) = |\chi_\delta\Khred(T^\circ(pq-1))| = \dim\Khred(T^\circ(pq-1))\] and both statements follow on observing that $\iota_{pq-1}\colon\thinspace\varkappa(K_{p,q}) \to \Khred(T^\circ(pq-1))$ is injective, where $\iota_{pq-1}=\pi_{pq-1}\circ\sigma$, for any choice of graded section $\sigma\colon\thinspace\varkappa(K_{p,q})\hookrightarrow\KHT(T^\circ)$.
\end{proof}
As a result, the same procedure described for the trefoil may be applied to determine $\varkappa(K_{p,q})$ for any integers $p,q$ (again, omitting the unique strong inversion from the notation). For example, the (positive) torus knots $5_1=K_{2,5}$ and $8_{19}=K_{3,4}$ yield
\[\begin{tikzpicture}[scale=0.85]
\draw[step=.5,black] (0,0) grid (5,1.5);
\node at (-0.85,1.25) {$\varkappa(5_1)$};
\node at (0.25,1.25) {$1$};
\node at (0.75,1.25) {$1$};
\node at (1.25,1.25) {$1$};
\node at (1.75,1.25) {$2$};
\node at (2.25,1.25) {$1$};
\node at (2.75,1.25) {$1$};
\node at (3.25,1.25) {$1$};
\node at (-0.85,0.25) {$\varkappa(8_{19})$};
\node at (1.75,.25) {$1$};
\node at (2.25,.25) {$1$};
\node at (2.75,.25) {$1$};
\node at (3.25,.25) {$2$};
\node at (3.75,.25) {$1$};
\node at (4.25,.25) {$1$};
\node at (4.75,.25) {$1$};
\node at (0.25,-0.25) {\footnotesize{-$2$}};
\node at (0.75,-0.25) {\footnotesize{-$1$}};
\node at (1.25,-0.25) {\footnotesize{$0$}};
\node at (1.75,-0.25) {\footnotesize{$1$}};
\node at (2.25,-0.25) {\footnotesize{$2$}};
\node at (2.75,-0.25) {\footnotesize{$3$}};
\node at (3.25,-0.25) {\footnotesize{$4$}};
\node at (3.75,-0.25) {\footnotesize{$5$}};
\node at (4.25,-0.25) {\footnotesize{$6$}};
\node at (4.75,-0.25) {\footnotesize{$7$}};
\draw [white, fill=white] (-0.015,0.515) rectangle (5.015,0.985);
\end{tikzpicture}\]
In both cases the invariant is supported in a single $\delta$-grading in agreement with Theorem \ref{thm:torus}. Notice that these examples are not distinguished as ungraded or relatively graded vector spaces, establishing the (absolute) $u$-grading as an essential part of the invariant.
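For these examples the bound of Theorem \ref{thm:torus} is not sharp: summing the entries in the tables gives
\[\dim\varkappa(5_1)=8\le 2\cdot5-1=9,\qquad\dim\varkappa(8_{19})=8\le 3\cdot4-1=11.\]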
It is interesting to compare this calculation with the behaviour of knot Floer homology \cite{OSz2004-knot, Rasmussen2003} for these examples: One can verify that $\dim\widehat{\operatorname{HFK}}(5_1)=\dim\widehat{\operatorname{HFK}}(8_{19})=5$ but that $\widehat{\operatorname{HFK}}(5_1)\ncong\widehat{\operatorname{HFK}}(8_{19})$ as graded vector spaces.
\subsection{Distinguishing strong inversions}\label{sec:dist} For all remaining examples we will state the result of our calculation, while specifying the strong inversion and the associated quotient tangle so that the reader can reproduce our work if desired. Our interest in this section is in distinguishing strongly invertible knots $(K,h_1)$ and $(K,h_2)$, that is, in separating conjugacy classes in $\operatorname{Sym}(S^3,K)$.
\begin{figure}
\includegraphics[scale=0.5]{figures/9_9}
\caption{Two strong inversions $h_1$ and $h_2$ (left and right) on the knot $K=9_9$ with the relevant quotient tangle in each case. Note that according to Sakuma $\eta(K,h_1)=\eta(K,h_2)=-2t^{-2}+4-2t^2$ \cite{Sakuma1986}.}\label{fig:9_9}\end{figure}
\begin{figure}
\includegraphics[scale=0.5]{figures/10_155-main}
\caption{Two strong inversions $h_1$ and $h_2$ (left and right) on the knot $K=10_{155}$ with the relevant quotient tangle in each case. Note that according to Sakuma $\eta(K,h_1)=0$ and $\eta(K,h_2)=2t^{-2}-4+2t^2$ \cite{Sakuma1986}.}\label{fig:10_155}\end{figure}
For the first example the underlying knot is $9_9$. This knot admits a pair of strong inversions $h_1$ and $h_2$ (see Figure \ref{fig:9_9}) and is noteworthy as Sakuma's invariant fails to separate $(9_9,h_1)$ and $(9_9,h_2)$. We compute:
\[\begin{tikzpicture}[scale=0.85]
\draw[step=.5,black] (0,0) grid (9.5,1.5);
\node at (-2,1.25) {$\varkappa(9_9,h_1)$};
\node at (0.25,1.25) {$1$};
\node at (0.75,1.25) {$2$};
\node at (1.25,1.25) {$2$};
\node at (1.75,1.25) {$3$};
\node at (2.25,1.25) {$5$};
\node at (2.75,1.25) {$5$};
\node at (3.25,1.25) {$5$};
\node at (3.75,1.25) {$6$};
\node at (4.25,1.25) {$6$};
\node at (4.75,1.25) {$5$};
\node at (5.25,1.25) {$6$};
\node at (5.75,1.25) {$5$};
\node at (6.25,1.25) {$3$};
\node at (6.75,1.25) {$3$};
\node at (7.25,1.25) {$2$};
\node at (7.75,1.25) {$1$};
\node at (-2,0.25) {$\varkappa(9_9,h_2)$};
\node at (2.25,0.25) {$1$};
\node at (2.75,0.25) {$1$};
\node at (3.25,0.25) {$1$};
\node at (3.75,0.25) {$3$};
\node at (4.25,0.25) {$5$};
\node at (4.75,0.25) {$5$};
\node at (5.25,0.25) {$7$};
\node at (5.75,0.25) {$8$};
\node at (6.25,0.25) {$7$};
\node at (6.75,0.25) {$7$};
\node at (7.25,0.25) {$6$};
\node at (7.75,0.25) {$4$};
\node at (8.25,0.25) {$2$};
\node at (8.75,0.25) {$2$};
\node at (9.25,0.25) {$1$};
\node at (0.25,-0.25) {\footnotesize{-$8$}};
\node at (1.25,-0.25) {\footnotesize{-$6$}};
\node at (2.25,-0.25) {\footnotesize{-$4$}};
\node at (3.25,-0.25) {\footnotesize{-$2$}};
\node at (4.25,-0.25) {\footnotesize{$0$}};
\node at (5.25,-0.25) {\footnotesize{$2$}};
\node at (6.25,-0.25) {\footnotesize{$4$}};
\node at (7.25,-0.25) {\footnotesize{$6$}};
\node at (8.25,-0.25) {\footnotesize{$8$}};
\node at (9.25,-0.25) {\footnotesize{$10$}};
\draw [white, fill=white] (-0.015,0.515) rectangle (9.515,0.985);
\end{tikzpicture}\]
This example indicates that strong inversions need not be explicit on a given diagram, highlighting a subtlety in separating conjugacy classes. In fact, as pointed out by Paoluzzi \cite{Paoluzzi2005}, the fixed point sets of two strong inversions can be linked in $S^3$ in interesting ways. For example, $10_{155}$ admits a pair of strong inversions $h_1$ and $h_2$ (see Figure \ref{fig:10_155}) for which $\operatorname{Fix}(h_1)\cup\operatorname{Fix}(h_2)$ forms a Hopf link. This is not made apparent in our diagrams; see \cite[Figure 10]{Paoluzzi2005} or \cite[Figure 3.1 (b)]{Sakuma1986}. For this example we calculate:
\[
\begin{tikzpicture}[scale=0.85]
\draw[step=.5,black] (0,0) grid (7.5,1.5);
\node at (-2,.75) {$\varkappa(10_{155},h_1)$};
\node at (0.25,1.25) {$1$};
\node at (0.75,1.25) {$2$};
\node at (1.25,1.25) {$2$};
\node at (1.75,1.25) {$3$};
\node at (2.25,1.25) {$3$};
\node at (2.75,1.25) {$2$};
\node at (3.25,1.25) {$2$};
\node at (3.75,1.25) {$1$};
\node at (2.25,0.75) {$2$};
\node at (2.75,0.75) {$3$};
\node at (3.25,0.75) {$3$};
\node at (3.75,0.75) {$5$};
\node at (4.25,0.75) {$4$};
\node at (4.75,0.75) {$3$};
\node at (5.25,0.75) {$3$};
\node at (5.75,0.75) {$1$};
\node at (4.25,0.25) {$1$};
\node at (4.75,0.25) {$1$};
\node at (5.25,0.25) {$1$};
\node at (5.75,0.25) {$2$};
\node at (6.25,0.25) {$1$};
\node at (6.75,0.25) {$1$};
\node at (7.25,0.25) {$1$};
\end{tikzpicture}\]
\[\begin{tikzpicture}[scale=0.85]
\draw[step=.5,black] (0,0) grid (10.5,2.5);
\node at (-2,1.25) {$\varkappa(10_{155},h_2)$};
\node at (0.25,2.25) {$1$};
\node at (0.75,2.25) {$1$};
\node at (1.25,2.25) {$1$};
\node at (1.75,2.25) {$2$};
\node at (2.25,2.25) {$1$};
\node at (2.75,2.25) {$1$};
\node at (3.25,2.25) {$1$};
\node at (1.75,1.75) {$1$};
\node at (2.25,1.75) {$2$};
\node at (2.75,1.75) {$3$};
\node at (3.25,1.75) {$3$};
\node at (3.75,1.75) {$4$};
\node at (4.25,1.75) {$3$};
\node at (4.75,1.75) {$2$};
\node at (5.25,1.75) {$2$};
\node at (3.75,1.25) {$1$};
\node at (4.25,1.25) {$1$};
\node at (4.75,1.25) {$2$};
\node at (5.25,1.25) {$2$};
\node at (5.75,1.25) {$2$};
\node at (6.25,1.25) {$2$};
\node at (6.75,1.25) {$1$};
\node at (7.25,1.25) {$1$};
\node at (5.75,.75) {$1$};
\node at (6.75,.75) {$1$};
\node at (7.25,.75) {$1$};
\node at (8.25,0.75) {$1$};
\node at (7.75,.25) {$1$};
\node at (8.75,0.25) {$1$};
\node at (9.25,.25) {$1$};
\node at (10.25,0.25) {$1$};
\end{tikzpicture}\]
As an illustration, we have omitted the absolute $\mathbb{Z}$-grading and recorded instead $\varkappa(10_{155},h_1)$ and $\varkappa(10_{155},h_2)$ as relatively $(\mathbb{Z}\times\mathbb{Z})$-graded vector spaces. This highlights considerable additional structure.
These (and other) examples point to an obvious question:
\begin{question}\label{qst:seperate}Does the invariant $\varkappa$ separate conjugacy classes of strong inversions in $\operatorname{Sym}(S^3,K)$ for a given prime knot? That is, given a prime knot $K$ admitting strong inversions $h$ and $h'$, is it the case that $(K,h)\simeq(K,h')$ if and only if $\varkappa(K,h)\cong\varkappa(K,h')$ as graded vector spaces? \end{question}
\begin{remark}\label{rmk:fig8} The emphasis on the grading is essential in this question: The figure eight admits a pair of strong inversions that are not distinguished by $\varkappa$ as relatively graded vector spaces but are distinguished by the absolute grading; see Section \ref{sec:bi}.\end{remark}
\section{Detecting non-amphicheirality}\label{sec:amph}
Recall that a knot $K$ is amphicheiral if $K\simeq K^*$, where $K^*$ denotes the mirror image of $K$, obtained by reversing orientation on $S^3$. For strongly invertible knots we write $(K,h)^*$; notice that if $h$ is unique then $(K,h)^*\simeq (K^*,h)$ makes sense (compare Section \ref{sub:mirror}). Regarding amphicheirality, Sakuma observes the following:
\begin{proposition}[Sakuma {\cite[Proposition 3.4 (1)]{Sakuma1986}}]
Let $K$ be an amphicheiral knot and suppose that $h$ is a unique strong inversion on $K$ (up to conjugacy in $\operatorname{Sym}(S^3,K)$). Then $(K,h)\simeq(K^*,h)$ and $\eta(K,h)$ vanishes. $\Box$
\end{proposition}
Sakuma points out that for all but two strongly invertible hyperbolic knots with 9 or fewer crossings, non-amphicheirality is detected by this condition (or a closely related condition \cite[Proposition 3.4 (2)]{Sakuma1986}). The exceptions are $8_{20}$ and $9_{40}$. The latter has non-zero signature, ruling out amphicheirality, but the former has both vanishing signature and vanishing Sakuma invariant.
Despite the fact that the amphicheirality of the tabulated knots is well established \cite{Perko1974}, this does raise an interesting question about the nature of algebraic invariants capable of detecting this subtle property. Along these lines, the non-vanishing result established in Theorem \ref{thm:unknot} suggests that $\varkappa(K,h)$ is a good candidate invariant for detecting non-amphicheirality. We have:
\begin{proposition}\label{prp:amph}
Let $K$ be an amphicheiral knot and suppose that $h$ is a unique strong inversion on $K$ (up to conjugacy in $\operatorname{Sym}(S^3,K)$). Then $(K,h)\simeq(K^*,h)$ and $\varkappa(K,h)\cong\varkappa(K^*,h)$ as graded vector spaces.
\end{proposition}
\begin{proof} This follows immediately from Proposition \ref{prp:mirror}.\end{proof}
\begin{figure}
\raisebox{-20pt}{\includegraphics[scale=0.5]{figures/8_20}}
\qquad
\begin{tikzpicture}[scale=0.85]
\draw[step=.5,black] (0,0) grid (5.5,.5);
\node at (0.25,.25) {$1$};
\node at (0.75,.25) {$1$};
\node at (1.25,.25) {$1$};
\node at (1.75,.25) {$2$};
\node at (2.25,.25) {$2$};
\node at (2.75,.25) {$2$};
\node at (3.25,.25) {$2$};
\node at (3.75,.25) {$2$};
\node at (4.25,.25) {$1$};
\node at (4.75,.25) {$1$};
\node at (5.25,.25) {$1$};
\node at (0.25,-0.25) {\footnotesize{-$6$}};
\node at (0.75,-0.25) {\footnotesize{-$5$}};
\node at (1.25,-0.25) {\footnotesize{-$4$}};
\node at (1.75,-0.25) {\footnotesize{-$3$}};
\node at (2.25,-0.25) {\footnotesize{-$2$}};
\node at (2.75,-0.25) {\footnotesize{-$1$}};
\node at (3.25,-0.25) {\footnotesize{$0$}};
\node at (3.75,-0.25) {\footnotesize{$1$}};
\node at (4.25,-0.25) {\footnotesize{$2$}};
\node at (4.75,-0.25) {\footnotesize{$3$}};
\node at (5.25,-0.25) {\footnotesize{$4$}};
\end{tikzpicture}
\caption{The knot $8_{20}$ with quotient tangle associated with the unique strong inversion $h$ and $\mathbb{Z}$-graded vector space $\varkappa(8_{20},h)$. Note that $\eta(K,h)=0$ \cite{Sakuma1986}.}\label{fig:8_20}\end{figure}
For example, this criterion detects the non-amphicheirality of $8_{20}$ while Sakuma's invariant does not. The calculation is summarized in Figure \ref{fig:8_20} (for uniqueness of $h$ we refer to Hartley \cite{Hartley1981}; see also Kodama and Sakuma \cite{KS1992}). However, it is well-known that the Jones polynomial of an amphicheiral knot is symmetric, and this gives a quick certification that $8_{20}$ is not amphicheiral.
More generally, the Jones polynomial is typically very good at detecting non-amphicheirality. Given the relationship between the Jones polynomial and Khovanov homology, this should be explored further to ensure that we are not reinventing the wheel; in particular, that the information in $\varkappa(K,h)$ is not just a complicated repackaging of data from $\Khred(K)$.
\begin{definition}
A knot $K$ is {\em J-amphicheiral} if the Jones polynomial of $K$ satisfies $V_K(t) = \sum_{i\ge0} a_i(t^i+t^{-i})$ for $a_i\in\mathbb{Z}$.
\end{definition}
For example, the knot $9_{42}$ is J-amphicheiral. It is strongly invertible with a unique strong inversion, and both $\varkappa$ and $\eta$ may be used to establish non-amphicheirality. This knot also has non-zero signature, giving an alternate (and much easier) means of confirming this fact and motivating a second definition.
\begin{definition}\label{def:Qamph}
A knot $K$ is {\em quasi-amphicheiral} if it is J-amphicheiral and has vanishing signature.\end{definition}
Amphicheiral knots are necessarily quasi-amphicheiral; however, quasi-amphicheiral knots that are non-amphicheiral seem to be quite rare. There are none with fewer than 9 crossings, for example, and precisely three examples with 10 crossings: $10_{48},10_{71},10_{104}$. These are all thin knots, that is, the Khovanov homology $\Khred(K)$ is supported in a single diagonal for $K\in\{10_{48},10_{71},10_{104}\}$. Indeed, $\Khred(K)$ is determined by $V_K(t)$ and $\sigma(K)$ since each of these knots is alternating. As a result, Khovanov homology does not detect the non-amphicheirality of these quasi-amphicheiral knots either, though in principle Khovanov homology should be more sensitive in this regard than the Jones polynomial (in fact, $9_{42}$ is an example supporting this presumption). Interestingly, $10_{71}$ and $10_{104}$ are distinct alternating knots that have identical invariants (Jones polynomial, signature and Khovanov homology).
\begin{proposition}
Each $K\in\{10_{48},10_{71},10_{104}\}$ admits a unique strong inversion.
\end{proposition}
\begin{proof}A strong inversion on each of the three knots is illustrated in Figure \ref{fig:fake}. Hartley proves that none of these knots admits a free period symmetry \cite{Hartley1981}. Cyclic symmetries are ruled out by Kodama and Sakuma, see in particular \cite[Table 3.1]{KS1992}. As each knot is hyperbolic, $h$ must be unique; see Theorem \ref{thm:dihedral}. \end{proof}
As in previous examples, we will omit the unique strong inversion from the notation. This set of knots allows us to establish that $\varkappa(K)$ contains different information than $\{V_K(t), \sigmagma(K)\}$ and indeed $\Khred(K)$. By direct calculation we have:
\begin{figure}
\includegraphics[scale=0.5]{figures/exec}
\caption{The quasi-amphicheiral knots $10_{48}$, $10_{71}$ and $10_{104}$ (left-to-right, top) and their respective quotient tangles $T(10_{48})$, $T(10_{71})$ and $T(10_{104})$ (left-to-right, bottom) corresponding to the unique strong inversion on each knot.}\label{fig:fake}\end{figure}
\begin{theorem}\label{thm:ten}
For \[K\in\{10_{48},10_{71},10_{104}\}\] the invariant $\varkappa(K)$ detects the non-amphicheirality of $K$. Moreover, $\varkappa(10_{71})\ncong\varkappa(10_{104})$, distinguishing this pair despite the fact that $\Khred(10_{71})\cong\Khred(10_{104})$.
\end{theorem}
\begin{proof}The calculations that, together with Proposition \ref{prp:amph}, establish Theorem \ref{thm:ten} are summarized as follows:
\[\begin{tikzpicture}[scale=0.85]
\draw[step=.5,black] (0,0) grid (16,2.5);
\node at (-0.85,2.25) {$\varkappa(10_{48})$};
\node at (4.25,2.25) {$1$};
\node at (4.75,2.25) {$3$};
\node at (5.25,2.25) {$4$};
\node at (5.75,2.25) {$5$};
\node at (6.25,2.25) {$8$};
\node at (6.75,2.25) {$10$};
\node at (7.25,2.25) {$10$};
\node at (7.75,2.25) {$11$};
\node at (8.25,2.25) {$11$};
\node at (8.75,2.25) {$9$};
\node at (9.25,2.25) {$8$};
\node at (9.75,2.25) {$7$};
\node at (10.25,2.25) {$4$};
\node at (10.75,2.25) {$2$};
\node at (11.25,2.25) {$1$};
\node at (11.75,2.25) {$1$};
\node at (-0.85,1.25) {$\varkappa(10_{71})$};
\node at (0.25,1.25) {$1$};
\node at (0.75,1.25) {$2$};
\node at (1.25,1.25) {$2$};
\node at (1.75,1.25) {$3$};
\node at (2.25,1.25) {$6$};
\node at (2.75,1.25) {$8$};
\node at (3.25,1.25) {$7$};
\node at (3.75,1.25) {$10$};
\node at (4.25,1.25) {$12$};
\node at (4.75,1.25) {$12$};
\node at (5.25,1.25) {$12$};
\node at (5.75,1.25) {$13$};
\node at (6.25,1.25) {$12$};
\node at (6.75,1.25) {$10$};
\node at (7.25,1.25) {$11$};
\node at (7.75,1.25) {$9$};
\node at (8.25,1.25) {$6$};
\node at (8.75,1.25) {$5$};
\node at (9.25,1.25) {$5$};
\node at (9.75,1.25) {$3$};
\node at (10.25,1.25) {$1$};
\node at (10.75,1.25) {$1$};
\node at (11.25,1.25) {$1$};
\node at (-0.85,0.25) {$\varkappa(\! 10_{104}\!)$};
\node at (5.25,0.25) {$1$};
\node at (5.75,0.25) {$1$};
\node at (6.25,0.25) {$1$};
\node at (6.75,0.25) {$3$};
\node at (7.25,0.25) {$5$};
\node at (7.75,0.25) {$4$};
\node at (8.25,0.25) {$7$};
\node at (8.75,0.25) {$10$};
\node at (9.25,0.25) {$10$};
\node at (9.75,0.25) {$11$};
\node at (10.25,0.25) {$14$};
\node at (10.75,0.25) {$14$};
\node at (11.25,0.25) {$12$};
\node at (11.75,0.25) {$14$};
\node at (12.25,0.25) {$12$};
\node at (12.75,0.25) {$9$};
\node at (13.25,0.25) {$8$};
\node at (13.75,0.25) {$7$};
\node at (14.25,0.25) {$4$};
\node at (14.75,0.25) {$2$};
\node at (15.25,0.25) {$2$};
\node at (15.75,0.25) {$1$};
\node at (0.25,-0.25) {\footnotesize{-$14$}};
\node at (1.25,-0.25) {\footnotesize{-$12$}};
\node at (2.25,-0.25) {\footnotesize{-$10$}};
\node at (3.25,-0.25) {\footnotesize{-$8$}};
\node at (4.25,-0.25) {\footnotesize{-$6$}};
\node at (5.25,-0.25) {\footnotesize{-$4$}};
\node at (6.25,-0.25) {\footnotesize{-$2$}};
\node at (7.25,-0.25) {\footnotesize{$0$}};
\node at (8.25,-0.25) {\footnotesize{$2$}};
\node at (9.25,-0.25) {\footnotesize{$4$}};
\node at (10.25,-0.25) {\footnotesize{$6$}};
\node at (11.25,-0.25) {\footnotesize{$8$}};
\node at (12.25,-0.25) {\footnotesize{$10$}};
\node at (13.25,-0.25) {\footnotesize{$12$}};
\node at (14.25,-0.25) {\footnotesize{$14$}};
\node at (15.25,-0.25) {\footnotesize{$16$}};
\draw [white, fill=white] (-0.015,0.515) rectangle (16.015,0.985);
\draw [white, fill=white] (-0.015,1.515) rectangle (16.015,1.985);
\end{tikzpicture}\]
None of these vector spaces exhibits the requisite symmetry for amphicheirality. \end{proof}
Note that Theorem \ref{thm:summary} is an immediate corollary of Theorem \ref{thm:ten}. We emphasise that $\varkappa(10_{71})\ncong\varkappa(10_{104})$ as graded vector spaces. Indeed, $\dim\varkappa(10_{71})=\dim\varkappa(10_{104})=152$; compare Remark \ref{rmk:closing}.
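Concretely, reading the $u$-gradings from the tables above, $\varkappa(10_{71})$ is supported in the range $-14\le u\le 8$ while $\varkappa(10_{104})$ is supported in $-4\le u\le 17$; the two invariants thus already differ in their supports, even before comparing dimensions gradewise.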
\begin{remark} Sakuma computes $\eta(10_{104},h)=-t^{-3}+t^{-2}-t^{-1}+2-t+t^2-t^3$ \cite[Example 3.5]{Sakuma1986}, the non-vanishing of which provides another means of verifying the non-amphicheirality of $10_{104}$. \end{remark}
\section{Conjectures}\label{sec:conjectures}
\subsection{Structural observations}
Consider the graded vector space \[V=\mathbb{F}^{(0,\delta)}\oplus\mathbb{F}^{(2,\delta)}\oplus\mathbb{F}^{(3,\delta)}\oplus\mathbb{F}^{(5,\delta)}\] for some $\delta\in\mathbb{Z}$, where the second grading should be regarded as a relative $\mathbb{Z}$-grading (compare the form of $\varkappa(K,h)$ in Section \ref{sec:examples} when $K$ is the trefoil).
\begin{conjecture}\label{con:structure}For any strongly invertible knot $(K,h)$ there is a decomposition \[\varkappa(K,h)\cong\bigoplus_{i=1}^kV[m_i,n_i]\] as a $(\mathbb{Z}\times\mathbb{Z})$-graded group (where the secondary grading is a relative grading) for pairs $(m_i,n_i)\in\mathbb{Z}\times\mathbb{Z}$. In particular, \[\dim\varkappa(K,h)\equiv 0\bmod{4}.\]\end{conjecture}
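By way of example, in the trefoil calculation of Section \ref{sec:examples} this decomposition has a single summand: the supports $u=-5,-3,-2,0$ of $\varkappa(K)$ are exactly those of $V$ shifted by $-5$, so that (writing $V[m,n]$ for the bigraded shift of $V$, a convention we fix here only for illustration)
\[\varkappa(K)\cong V[-5,n_1]\]
for an appropriate $n_1$, whence $k=1$ and $\dim\varkappa(K)=4$.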
Note that a consequence of this conjecture is a Khovanov-theoretic alternative to the last step in the proof of Theorem \ref{thm:cable}. Namely, if $K$ is non-trivial, then so is $D(K)$ so that $\dim\varkappa(D(K))$ is purportedly at least 4 (combining Theorem \ref{thm:unknot} and Conjecture \ref{con:structure}) hence $\dim\Khred(\mathcal{C}_2K)$ is at least 4 as well.
This conjecture is based only on empirical evidence from a range of calculations. While we have no explanation whatsoever for this surprisingly ordered behaviour, there is some precedent for this in the literature. For example, Lee's work \cite{Lee2005} (see also Rasmussen \cite{Rasmussen2010}) explained an observation of Bar-Natan \cite[Conjecture 1]{Bar-Natan2002}. There is also a related conjecture, due to Dunfield, Gukov and Rasmussen, that remains open, concerning an observed three-step pairing \cite[Section 5.6, particularly Definition 5.5]{DGR2006}. Even if our conjecture proves to be incorrect there should be some explanation for the observed behaviour on a wide range of examples.
Also observed in examples (see Section \ref{sec:dist}) is the following.
\begin{conjecture}\label{con:rank} If $h_1$ and $h_2$ are strong inversions on a knot $K$ then $\dim\varkappa(K,h_1)=\dim\varkappa(K,h_2)$.\end{conjecture}
This again places emphasis on the graded structure of the invariant $\varkappa(K,h)$ (compare Question \ref{qst:seperate} and Remark \ref{rmk:fig8}).
Note that it is not the case that $\dim\Khred(L_1)=\dim\Khred(L_2)$ whenever $\mathbf{\Sigma}_{L_1}\cong\mathbf{\Sigma}_{L_2}$ \cite{Watson2010}; however, this equality does hold on a surprising range of examples of three-manifolds that two-fold branch cover distinct links. Conjecture \ref{con:rank} would explain such an equality in the case where the three-manifold arises by Dehn surgery on a knot $K$ admitting a pair of strong inversions $h_1$ and $h_2$. In particular, for surgery coefficient $n$ we have branch sets $L_1=T^\circ_{K,h_1}(n)$ and $L_2=T^\circ_{K,h_2}(n)$ for the two-fold branched cover $S^3_n(K)$.
\subsection{A Khovanov-theoretic characterisation of L-space knots} Recall that an L-space is a rational homology sphere $Y$ satisfying $\dim\widehat{\operatorname{HF}}(Y)=|H_1(Y;\mathbb{Z})|$, and a knot in $S^3$ admitting an L-space surgery is called an L-space knot \cite{OSz2005-lens}. This class of three-manifolds includes lens spaces, for example. It is an interesting open problem to give a topological characterisation of L-spaces, and related to this is the problem of characterising L-space knots. In the presence of a strong inversion, we propose:
\begin{conjecture}\label{con:L-space}
A non-trivial knot $K$ admitting a strong inversion $h$ is an L-space knot if and only if $\varkappa(K,h)$ is supported in a single diagonal grading $\delta=u-q$.
\end{conjecture}
Support for this conjecture may be found in \cite{Watson2011}: Any knot admitting a lens space surgery (compare Theorem \ref{thm:torus}) as well as the $(-2,3,q)$-pretzel knots satisfy the conjecture. It is also the case that given a knot satisfying the conjecture, all sufficiently positive cables of the knot will also satisfy the conjecture (see \cite[Theorem 6.1]{Watson2011}). This follows from the observation that all of these examples are strongly invertible and admit a large surgery with a thin branch set (that is, the branch set has Khovanov homology supported in a single $\delta$-grading).
We remark that it is implicit in the Berge conjecture that knots admitting a non-trivial lens space surgery must be strongly invertible. It is tempting to guess --- and indeed the original version of Conjecture \ref{con:L-space} did so! --- that this is a property of L-space knots in general, namely, that {\em L-space knots are strongly invertible}. However recent work of Baker and Luecke shows that this is not the case \cite{BakerLuecke}. Interestingly, their construction produces knots in $S^3$ with no symmetries at all but which admit surgeries that are two-fold branched covers of alternating knots. In particular, the surgery admits an involution and the associated branch set has thin Khovanov homology.
\section{Afterword: An absolute bi-grading}\label{sec:bi}
In calculating $\varkappa(K,h)$ we have made essential use of the secondary grading $\delta$ on $\Khred(T_{K,h}(n))$. This is {\em a priori} a relative $\mathbb{Z}$-grading so that $\varkappa(K,h)$ is naturally a $(\mathbb{Z}\times\mathbb{Z})$-graded vector space (the second factor being the relative grading). It seems reasonable to attempt to promote (or, lift) this to an absolute bi-grading. To conclude, we will sketch a construction of such a lift.
Let $T=T_{K,h}$ be the tangle associated with a given strongly invertible knot $(K,h)$. Notice that from Lemma \ref{lem:stability} we can take a sufficiently large $n$ so that $\Khred(T(n))$ is computed (by way of an iterated mapping cone) in terms of $\Khred(T(0))$ and $\bigoplus_{i=0}^nX[i,0]\cong \bigoplus_{i=0}^n\mathbb{F}^{(u(i),\delta(i))}$ for some integer $u(i)$ and half-integer $\delta(i)$.
Inspecting the proof of Lemma \ref{lem:stability} we see that $\delta(i)$ depends on the integer $n$; however, this dependence disappears when $\delta$ is taken as a relative $\mathbb{Z}$-grading instead of an absolute $\frac{1}{2}\mathbb{Z}$-grading. In particular, we may fix a choice of absolute $\delta$-grading on $\varkappa(K,h)$ by requiring that the potential generators from $\bigoplus_{i=0}^nX[i,0]$ lie in $\delta = +1$.
In the interest of preserving the symmetry under mirrors that was essential in application (see Section \ref{sec:amph}) it is more natural to fix $\delta=+\frac{1}{2}$ instead. It is only a cosmetic difference to fix $2\delta=+1$ to obtain an absolutely $(\mathbb{Z}\times\mathbb{Z}_{\operatorname{odd}})$-graded vector space (effectively clearing denominators in an {\em a priori} $(\mathbb{Z}\times\frac{1}{2}\mathbb{Z})$-graded vector space). As a result, for example, the trefoil (considered in our running example) is promoted to
\[
\includegraphics[scale=0.5]{figures/strong-trefoil}\qquad
\begin{tikzpicture}[scale=0.85]
\draw[step=.5,black] (0,0) grid (3,.5);
\node at (-0.375,0.25) {\footnotesize{$1$}};
\node at (0.25,.25) {$1$};
\node at (1.25,.25) {$1$};
\node at (1.75,.25) {$1$};
\node at (2.75,.25) {$1$};
\node at (0.25,-0.25) {\footnotesize{-$5$}};
\node at (0.75,-0.25) {\footnotesize{-$4$}};
\node at (1.25,-0.25) {\footnotesize{-$3$}};
\node at (1.75,-0.25) {\footnotesize{-$2$}};
\node at (2.25,-0.25) {\footnotesize{-$1$}};
\node at (2.75,-0.25) {\footnotesize{$0$}};
\end{tikzpicture}\]
as a $(\mathbb{Z}\times\mathbb{Z}_{\operatorname{odd}})$-graded vector space (the vertical axis represents $2\delta$, as in Figure \ref{fig:gradings}). Since the invariant for any torus knot is supported in a single $\delta$-grading (indeed, $2\delta=+1$ according to this absolute lift for positive torus knots; compare Theorem \ref{thm:torus}), this does not add too much new information. However, in general this does add considerably more structure. For example, the figure eight gives
\[
\includegraphics[scale=0.5]{figures/strong-fig8}\qquad
\begin{tikzpicture}[scale=0.85]
\draw[step=.5,black] (0,0) grid (5,1);
\node at (-0.375,0.25) {\footnotesize{-$1$}};
\node at (-0.375,0.75) {\footnotesize{-$3$}};
\node at (0.25,.75) {$1$};
\node at (1.25,.75) {$1$};
\node at (1.75,.75) {$1$};
\node at (2.75,.75) {$1$};
\node at (2.25,.25) {$1$};
\node at (3.25,.25) {$1$};
\node at (3.75,.25) {$1$};
\node at (4.75,.25) {$1$};
\node at (0.25,-0.25) {\footnotesize{$0$}};
\node at (0.75,-0.25) {\footnotesize{$1$}};
\node at (1.25,-0.25) {\footnotesize{$2$}};
\node at (1.75,-0.25) {\footnotesize{$3$}};
\node at (2.25,-0.25) {\footnotesize{$4$}};
\node at (2.75,-0.25) {\footnotesize{$5$}};
\node at (3.25,-0.25) {\footnotesize{$6$}};
\node at (3.75,-0.25) {\footnotesize{$7$}};
\node at (4.25,-0.25) {\footnotesize{$8$}};
\node at (4.75,-0.25) {\footnotesize{$9$}};
\end{tikzpicture}\]
(this may be extracted from the calculations in \cite{Watson2013}).
\begin{remark}Notice that the figure eight admits a second strong inversion and, since this knot is amphicheiral and hyperbolic, the pair of strong inversions $h_1$ and $h_2$ must be interchanged under mirror image \cite[Proposition 3.4 (2)]{Sakuma1986}. That is, $(K,h_1)^*\simeq(K,h_2)$ as strongly invertible knots, where $K$ is the figure eight. In particular, this example illustrates the necessity of the $\mathbb{Z}$-graded information in distinguishing strong inversions: $\varkappa(K,h_1)\cong\varkappa(K,h_2)$ even as relatively $(\mathbb{Z}\times\mathbb{Z})$-graded groups (compare Remark \ref{rmk:fig8}).\end{remark}
A key feature of this choice --- building on Proposition \ref{prp:mirror} --- is summarized in the following statement, the proof of which is left to the reader.
\begin{proposition}
Let $(K,h)$ be a strongly invertible knot with strongly invertible mirror $(K,h)^*$. Then $\varkappa^{u,2\delta}(K,h)^*\cong \varkappa^{-u,-2\delta}(K,h)$ as $(\mathbb{Z}\times\mathbb{Z}_{\operatorname{odd}})$-graded vector spaces. $\ensuremath\Box$
\end{proposition}
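This symmetry is easy to make concrete. The following sketch (our own illustrative script; the table format and function name are ours) negates both gradings of a dimension table, sending the right-handed trefoil data displayed above to the expected table for its mirror.

```python
# A table records dim kappa^(u, 2*delta) as a dict (u, two_delta) -> dimension.
def mirror(table):
    """Bigrading table of (K,h)^* from that of (K,h): negate both gradings."""
    return {(-u, -d): dim for (u, d), dim in table.items()}

# Right-handed trefoil, read off the grid above: 2*delta = +1, u in {-5,-3,-2,0}.
right_trefoil = {(-5, 1): 1, (-3, 1): 1, (-2, 1): 1, (0, 1): 1}

# Its mirror is then supported in 2*delta = -1 with u in {0, 2, 3, 5}.
assert mirror(right_trefoil) == {(5, -1): 1, (3, -1): 1, (2, -1): 1, (0, -1): 1}

# Mirroring is an involution, as it must be since (K,h)^** = (K,h).
assert mirror(mirror(right_trefoil)) == right_trefoil
```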
We note that this $(\mathbb{Z}\times\mathbb{Z}_{\operatorname{odd}})$-graded invariant of strong inversions typically contains considerably more information than its $\mathbb{Z}$-graded counterpart. For example, revisiting the quasi-amphicheiral knots of Section \ref{sec:amph} we have:
\[
\begin{tikzpicture}[scale=0.85]
\draw[step=.5,black] (0,0) grid (8,1.5);
\node at (-2,.75) {$\varkappa(10_{48})$};
\node at (0.25,1.25) {$1$};
\node at (0.75,1.25) {$3$};
\node at (1.25,1.25) {$4$};
\node at (1.75,1.25) {$5$};
\node at (2.25,1.25) {$6$};
\node at (2.75,1.25) {$5$};
\node at (3.25,1.25) {$4$};
\node at (3.75,1.25) {$3$};
\node at (4.25,1.25) {$1$};
\node at (2.25,0.75) {$2$};
\node at (2.75,0.75) {$5$};
\node at (3.25,0.75) {$6$};
\node at (3.75,0.75) {$8$};
\node at (4.25,0.75) {$9$};
\node at (4.75,0.75) {$7$};
\node at (5.25,0.75) {$6$};
\node at (5.75,0.75) {$4$};
\node at (6.25,0.75) {$1$};
\node at (4.25,0.25) {$1$};
\node at (4.75,0.25) {$2$};
\node at (5.25,0.25) {$2$};
\node at (5.75,0.25) {$3$};
\node at (6.25,0.25) {$3$};
\node at (6.75,0.25) {$2$};
\node at (7.25,0.25) {$2$};
\node at (7.75,0.25) {$1$};
\node at (-0.375,1.25) {\footnotesize{-$1$}};
\node at (-0.375,0.75) {\footnotesize{$1$}};
\node at (-0.375,0.25) {\footnotesize{$3$}};
\node at (0.25,-0.25) {\footnotesize{-$6$}};
\node at (0.75,-0.25) {\footnotesize{-$5$}};
\node at (1.25,-0.25) {\footnotesize{-$4$}};
\node at (1.75,-0.25) {\footnotesize{-$3$}};
\node at (2.25,-0.25) {\footnotesize{-$2$}};
\node at (2.75,-0.25) {\footnotesize{-$1$}};
\node at (3.25,-0.25) {\footnotesize{$0$}};
\node at (3.75,-0.25) {\footnotesize{$1$}};
\node at (4.25,-0.25) {\footnotesize{$2$}};
\node at (4.75,-0.25) {\footnotesize{$3$}};
\node at (5.25,-0.25) {\footnotesize{$4$}};
\node at (5.75,-0.25) {\footnotesize{$5$}};
\node at (6.25,-0.25) {\footnotesize{$6$}};
\node at (6.75,-0.25) {\footnotesize{$7$}};
\node at (7.25,-0.25) {\footnotesize{$8$}};
\node at (7.75,-0.25) {\footnotesize{$9$}};
\end{tikzpicture}\]
\[\begin{tikzpicture}[scale=0.85]
\draw[step=.5,black] (0,0) grid (11.5,2.5);
\node at (-2,1.25) {$\varkappa(10_{71})$};
\node at (0.25,2.25) {$1$};
\node at (0.75,2.25) {$2$};
\node at (1.25,2.25) {$2$};
\node at (1.75,2.25) {$3$};
\node at (2.25,2.25) {$3$};
\node at (2.75,2.25) {$2$};
\node at (3.25,2.25) {$2$};
\node at (3.75,2.25) {$1$};
\node at (2.25,1.75) {$3$};
\node at (2.75,1.75) {$6$};
\node at (3.25,1.75) {$5$};
\node at (3.75,1.75) {$9$};
\node at (4.25,1.75) {$8$};
\node at (4.75,1.75) {$5$};
\node at (5.25,1.75) {$6$};
\node at (5.75,1.75) {$2$};
\node at (4.25,1.25) {$4$};
\node at (4.75,1.25) {$7$};
\node at (5.25,1.25) {$6$};
\node at (5.75,1.25) {$11$};
\node at (6.25,1.25) {$9$};
\node at (6.75,1.25) {$6$};
\node at (7.25,1.25) {$7$};
\node at (7.75,1.25) {$2$};
\node at (6.25,0.75) {$3$};
\node at (6.75,0.75) {$4$};
\node at (7.25,0.75) {$4$};
\node at (7.75,0.75) {$7$};
\node at (8.25,0.75) {$5$};
\node at (8.75,0.75) {$4$};
\node at (9.25,0.75) {$4$};
\node at (9.75,0.75) {$1$};
\node at (8.25,0.25) {$1$};
\node at (8.75,0.25) {$1$};
\node at (9.25,0.25) {$1$};
\node at (9.75,0.25) {$2$};
\node at (10.25,0.25) {$1$};
\node at (10.75,0.25) {$1$};
\node at (11.25,0.25) {$1$};
\node at (-0.375,2.25) {\footnotesize{-$1$}};
\node at (-0.375,1.75) {\footnotesize{$1$}};
\node at (-0.375,1.25) {\footnotesize{$3$}};
\node at (-0.375,0.75) {\footnotesize{$5$}};
\node at (-0.375,0.25) {\footnotesize{$7$}};
\node at (0.25,-0.175) {\footnotesize{-$14$}};
\node at (0.75,-0.375) {\footnotesize{-$13$}};
\node at (1.25,-0.175) {\footnotesize{-$12$}};
\node at (1.75,-0.375) {\footnotesize{-$11$}};
\node at (2.25,-0.175) {\footnotesize{-$10$}};
\node at (2.75,-0.25) {\footnotesize{-$9$}};
\node at (3.25,-0.25) {\footnotesize{-$8$}};
\node at (3.75,-0.25) {\footnotesize{-$7$}};
\node at (4.25,-0.25) {\footnotesize{-$6$}};
\node at (4.75,-0.25) {\footnotesize{-$5$}};
\node at (5.25,-0.25) {\footnotesize{-$4$}};
\node at (5.75,-0.25) {\footnotesize{-$3$}};
\node at (6.25,-0.25) {\footnotesize{-$2$}};
\node at (6.75,-0.25) {\footnotesize{-$1$}};
\node at (7.25,-0.25) {\footnotesize{$0$}};
\node at (7.75,-0.25) {\footnotesize{$1$}};
\node at (8.25,-0.25) {\footnotesize{$2$}};
\node at (8.75,-0.25) {\footnotesize{$3$}};
\node at (9.25,-0.25) {\footnotesize{$4$}};
\node at (9.75,-0.25) {\footnotesize{$5$}};
\node at (10.25,-0.25) {\footnotesize{$6$}};
\node at (10.75,-0.25) {\footnotesize{$7$}};
\node at (11.25,-0.25) {\footnotesize{$8$}};
\end{tikzpicture}\]
\[\begin{tikzpicture}[scale=0.85]
\draw[step=.5,black] (0,0) grid (11,2.5);
\node at (-2,1.25) {$\varkappa(10_{104})$};
\node at (0.25,2.25) {$1$};
\node at (0.75,2.25) {$1$};
\node at (1.25,2.25) {$1$};
\node at (1.75,2.25) {$2$};
\node at (2.25,2.25) {$1$};
\node at (2.75,2.25) {$1$};
\node at (3.25,2.25) {$1$};
\node at (1.75,1.75) {$1$};
\node at (2.25,1.75) {$4$};
\node at (2.75,1.75) {$3$};
\node at (3.25,1.75) {$5$};
\node at (3.75,1.75) {$6$};
\node at (4.25,1.75) {$3$};
\node at (4.75,1.75) {$4$};
\node at (5.25,1.75) {$2$};
\node at (3.25,1.25) {$1$};
\node at (3.75,1.25) {$4$};
\node at (4.25,1.25) {$7$};
\node at (4.75,1.25) {$7$};
\node at (5.25,1.25) {$10$};
\node at (5.75,1.25) {$9$};
\node at (6.25,1.25) {$6$};
\node at (6.75,1.25) {$6$};
\node at (7.25,1.25) {$2$};
\node at (5.25,0.75) {$2$};
\node at (5.75,0.75) {$5$};
\node at (6.25,0.75) {$6$};
\node at (6.75,0.75) {$8$};
\node at (7.25,0.75) {$9$};
\node at (7.75,0.75) {$7$};
\node at (8.25,0.75) {$6$};
\node at (8.75,0.75) {$4$};
\node at (9.25,0.75) {$1$};
\node at (7.25,0.25) {$1$};
\node at (7.75,0.25) {$2$};
\node at (8.25,0.25) {$2$};
\node at (8.75,0.25) {$3$};
\node at (9.25,0.25) {$3$};
\node at (9.75,0.25) {$2$};
\node at (10.25,0.25) {$2$};
\node at (10.75,0.25) {$1$};
\node at (-0.375,2.25) {\footnotesize{-$5$}};
\node at (-0.375,1.75) {\footnotesize{-$3$}};
\node at (-0.375,1.25) {\footnotesize{-$1$}};
\node at (-0.375,0.75) {\footnotesize{$1$}};
\node at (-0.375,0.25) {\footnotesize{$3$}};
\node at (0.25,-0.25) {\footnotesize{-$4$}};
\node at (0.75,-0.25) {\footnotesize{-$3$}};
\node at (1.25,-0.25) {\footnotesize{-$2$}};
\node at (1.75,-0.25) {\footnotesize{-$1$}};
\node at (2.25,-0.25) {\footnotesize{$0$}};
\node at (2.75,-0.25) {\footnotesize{$1$}};
\node at (3.25,-0.25) {\footnotesize{$2$}};
\node at (3.75,-0.25) {\footnotesize{$3$}};
\node at (4.25,-0.25) {\footnotesize{$4$}};
\node at (4.75,-0.25) {\footnotesize{$5$}};
\node at (5.25,-0.25) {\footnotesize{$6$}};
\node at (5.75,-0.25) {\footnotesize{$7$}};
\node at (6.25,-0.25) {\footnotesize{$8$}};
\node at (6.75,-0.25) {\footnotesize{$9$}};
\node at (7.25,-0.25) {\footnotesize{$10$}};
\node at (7.75,-0.25) {\footnotesize{$11$}};
\node at (8.25,-0.25) {\footnotesize{$12$}};
\node at (8.75,-0.25) {\footnotesize{$13$}};
\node at (9.25,-0.25) {\footnotesize{$14$}};
\node at (9.75,-0.25) {\footnotesize{$15$}};
\node at (10.25,-0.25) {\footnotesize{$16$}};
\node at (10.75,-0.25) {\footnotesize{$17$}};
\end{tikzpicture}\]
As this is apparently stronger information than the integer grading used to this point, it would be interesting to exhibit, for example, a quasi-amphicheiral knot for which determining the non-amphicheirality depends on this additional structure.
\begin{remark}\label{rmk:closing}
As observed in the proof of Theorem \ref{thm:ten}, $\dim\varkappa(10_{71})=\dim\varkappa(10_{104})$; consulting the invariants above, the number of $\delta$-gradings (that is, the homological width) supporting these invariants also coincides. It is interesting that certain aspects of $\varkappa(10_{71})$ and $\varkappa(10_{104})$ (particularly, integer-valued invariants derived from $\varkappa$) coincide given that $\Khred(10_{71})\cong\Khred(10_{104})$. The fact that $\varkappa(10_{71})$ and $\varkappa(10_{104})$ differ as $\delta$-graded groups (absolutely or relatively), and are therefore separated by $\varkappa$, provides another application of the gradings in Khovanov homology to distinguish this pair.
\end{remark}
\end{document} |
\begin{document}
\title{Bayesian Games and the Smoothness Framework}
\author{Vasilis Syrgkanis}
\maketitle
\begin{abstract}
We consider a general class of Bayesian Games where each player's utility
depends on his type (possibly multidimensional) and on the strategy profile, and where players'
types are distributed independently. We show that if the
full information version of the game, for any fixed instance of the type profile, is a smooth game, then the Price of Anarchy bound
implied by the smoothness property carries over to the Bayes-Nash Price of Anarchy.
We show how some proofs from the literature (item bidding auctions, greedy auctions) can be
cast as smoothness proofs or be simplified using smoothness. For first price item
bidding with fractionally subadditive bidders we substantially improve the existing
bound \cite{Hassidim2011a} from $4$ to $\frac{e}{e-1}\approx 1.58$. This also shows a very
interesting separation between first and second price item bidding, since second price item bidding
has PoA at least $2$ even under complete information. For a larger class of
Bayesian Games, where the strategy space of a player also changes with his type, we are able to show
that a slightly stronger definition of smoothness also implies a Bayes-Nash PoA bound.
We show that weighted congestion games actually satisfy this stronger definition of smoothness.
This allows us to show that the inefficiency bounds for weighted congestion games
known in the literature carry over to incomplete information versions where the weights
of the players are private information. We also show that incomplete information versions of
a natural class of monotone valid utility games, called effort market games, are
universally $(1,1)$-smooth. Hence, incomplete information versions of effort market
games where the abilities of the players and their budgets are private information
have Bayes-Nash PoA at most $2$.
\end{abstract}
\section{Introduction}
In our information era, with the advent of electronic markets, most systems have grown
so large in scale that central coordination has become infeasible. In addition, players
are less and less informed of the actual game they are playing and of the types of players
they are competing against. Coordinating the players is too costly, and assuming that
the players know all the parameters of the game is too simplistic. This realization
makes the study of efficiency in non-cooperative games of incomplete information mandatory.
Ever since the introduction of the concept of the Price of Anarchy, a large part
of the algorithmic game theory literature has studied the effects of selfishness on
the efficiency of a system. In a unifying paper, Roughgarden \cite{Roughgarden2009} gave a
general technique, called smoothness, for proving inefficiency results in games and showed how many
of the results in the literature can be cast in his framework. In addition, he showed
that such inefficiency proofs extend directly to almost every reasonable
non-cooperative solution concept, such as pure Nash equilibria, mixed Nash equilibria,
correlated equilibria and coarse-correlated equilibria.
However, such a unification holds only under the strong assumption of complete information%
\footnote{Recently we became aware that an independent work by Roughgarden with some overlapping results on
extending smoothness to incomplete information games has been under submission to a conference since February 6, 2012.}:
players know every parameter of the game and no player has private information. Such an
assumption is quite strong, and if we want models that capture realistic environments
we need to cope with games where players have incomplete information.
In this work we manage to show how to extend this unification to a significant class of games of
incomplete information: essentially, we show that if a complete information game is
smooth, then the inefficiency bound given by the smoothness argument carries over to
incomplete information versions of the game where players have private parameters, each player's utility depending on his own parameter and
the actions of the rest of the players. Hence, we manage to unify Price of Anarchy with
Bayesian Price of Anarchy analysis for a large class of games.
Many of the games studied in the literature, such as weighted congestion games,
have been shown to be tight games: the best efficiency bound provable with a smoothness argument is the best
bound possible. For such games our analysis shows that the Bayesian Price of Anarchy bound
is also tight, since complete information is a special case of incomplete information and hence
lower bounds on inefficiency carry over to the incomplete information model. Hence,
this immediately implies that the efficiency guarantees for a tight class of games don't depend
on the information possessed by the players: having more information doesn't imply better efficiency
and having less information doesn't imply worse efficiency.
Our approach also manages to put several Bayesian Price of Anarchy results that exist in
the literature under the smoothness framework. Specifically, we show how our main theorem
can be applied to get an improved result for first price item-bidding with subadditive bidders
and how to get a much simpler proof of the good approximation guarantees of the greedy mechanisms
introduced by Lucier and Borodin \cite{Lucier2009}.
\subsection{Related Work}
There has been a long line of research on quantifying inefficiency of equilibria starting
from \cite{Koutsoupias1999} who introduced the notion of the price of anarchy. A recent work
by Roughgarden \cite{Roughgarden2009} managed to unify several of these results under a proof
framework called smoothness, and showed that such inefficiency proofs also carry over
to coarse correlated equilibria. Moreover, he showed that such techniques
give tight results for the well-studied class of congestion games. Later, Bhawalkar et al.
\cite{Bhawalkar2010} also showed that it produces tight results for the larger class of
weighted congestion games. Another recent work by Schoppmann and Roughgarden \cite{Roughgarden2010}
copes with games with continuous strategy spaces and shows how the smoothness framework should
be adapted for such games to produce tighter results. They introduce the new notion of local
smoothness for such games and show that if an inefficiency upper bound proof lies in this framework
then it also carries over to correlated equilibria.
There have also been several works on quantifying the inefficiency of incomplete information games,
mainly in the context of auctions. A series of papers by Paes Leme and Tardos \cite{PaesLeme2010},
Lucier and Paes Leme \cite{Lucier2011} and Caragiannis et al \cite{Caragiannis2011} studied the
inefficiency of Bayes-Nash equilibria of the generalized second price auction. Lucier and Borodin
studied Bayes-Nash Equilibria of non-truthful auctions that are based on greedy allocation algorithms
\cite{Lucier2009}. A series of three papers, Christodoulou, Kovacs and Schapira \cite{Christodoulou2008},
Bhawalkar and Roughgarden \cite{Bhawalkar} and Hassidim, Kaplan, Mansour, Nisan \cite{Hassidim2011a},
studied the inefficiency of Bayes-Nash equilibria of non-truthful combinatorial auctions that
are based on running simultaneous separate item auctions for each item. However, many of the results
in this line of work were specific to the context and a unifying framework did not exist. Lucier
and Paes Leme \cite{Lucier2011} introduced the concept of semi-smoothness and showed that their proof
for the inefficiency of the generalized second price auction falls into this category. However,
semi-smoothness is a much more restrictive notion of smoothness than just requiring that every complete
information instance of the game be smooth.
\subsection{Model and Notation}
We consider the following model of Bayesian Games: Each player
has a type space $T_i$ and a probability distribution $D_i$ defined on $T_i$.
The distributions $D_i$ are independent, the types $T_i$ are disjoint and we denote with $D=\times_i D_i$.
Each player has a set of actions $A_i$ and let $A=\times_i A_i$. The utility of a player is a function
$u_i: T_i\times A\rightarrow \ensuremath{\mathbb R}$. The strategy of each player is a function
$s_i: T_i\rightarrow A_i$. At times we will use the notation $s(t)=(s_i(t_i))_{i\in N}$
to denote the vector of actions given a type profile $t$ and $s_{-i}(t_{-i})=(s_j(t_j))_{j\neq i}$
to denote the vector of actions for all players except $i$. We could also define
cost minimization games where each player has a cost $c_i:T_i\times A\rightarrow \ensuremath{\mathbb R}$.
All of our results hold for both utility maximization and cost minimization games.
The two basic assumptions that we make on the class of Bayesian Games that
we examine are that a player's utility is affected by the other players'
types only implicitly through their actions and not directly by
their types, and that the players' types are distributed independently.
The above class of games is general enough and we portray several nice
examples of such Bayesian Games. An interesting future direction
is to try and relax any of these two assumptions or show that smoothness
is not sufficient to prove bounds without these assumptions.
As our solution concept we will use the most dominant solution concept in
incomplete information games, the Bayes-Nash Equilibrium (BNE). Our results hold for
mixed Bayes-Nash Equilibria too, but for simplicity of presentation we are
going to focus on Bayes-Nash Equilibria in pure strategies. A Bayes-Nash
Equilibrium is a strategy profile such that each player maximizes his expected
utility conditional on his private information:
$$\forall a \in A_i:\mathbb{E}_{t_{-i}|t_i}[u_i^{t_i}(s(t))]\geq \mathbb{E}_{t_{-i}|t_i}[u_i^{t_i}(a,s_{-i}(t_{-i}))]$$
Given a strategy profile $s$ the social welfare of the game is defined
as the expected sum of player utilities:
$$SW(s)=\mathbb{E}_t[SW^t(s(t))]=\mathbb{E}_t[\sum_i u_i^{t_i}(s(t))]$$
In addition given a type profile $t$ we denote with $\text{\textsc{Opt}} (t)$ the
action profile that achieves maximum social welfare for type profile $t$:
$\text{\textsc{Opt}} (t)=\arg\max_{a\in A}SW^t(a)$.
As our measure of inefficiency we will use the \textit{Bayes-Nash Price of Anarchy} which
is defined as the ratio of the expected optimal social welfare over the expected
social welfare achieved at the worst Bayes-Nash Equilibrium:
$$\sup_{s \text{ is } BNE}\frac{\mathbb{E}_{t}[SW^t(\text{\textsc{Opt}} (t))]}{\mathbb{E}_t[SW^t(s(t))]}$$
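For a concrete illustration of this definition, the Bayes-Nash Price of Anarchy of a small finite game can be computed by brute force: enumerate the pure strategies $s_i:T_i\rightarrow A_i$, keep the profiles satisfying the equilibrium condition, and compare against the expected optimal welfare. The game below is a toy example of ours (not one analysed in this paper): two players with independent uniform types share two markets, and a player of type $t_i$ earns $t_i$ divided by the number of players on his market.

```python
from itertools import product

types = [(1, 2), (1, 3)]   # T_0 and T_1; uniform, independent distributions
markets = (0, 1)           # A_0 = A_1

def u(i, ti, a):           # player i of type ti shares his market's value
    return ti / a.count(a[i])

def sw(t, a):              # SW^t(a) = sum of player utilities
    return u(0, t[0], a) + u(1, t[1], a)

type_profiles = list(product(*types))

def expected_sw(s):        # s[i] maps each type of player i to a market
    return sum(sw(t, (s[0][t[0]], s[1][t[1]]))
               for t in type_profiles) / len(type_profiles)

def is_bne(s):             # no type of any player gains by deviating
    for i in (0, 1):
        for ti in types[i]:
            cond = [t for t in type_profiles if t[i] == ti]
            def eu(ai):    # expected utility (up to scaling) of playing ai
                return sum(u(i, ti, (ai, s[1][t[1]]) if i == 0
                                     else (s[0][t[0]], ai))
                           for t in cond)
            if max(eu(ai) for ai in markets) > eu(s[i][ti]) + 1e-9:
                return False
    return True

opt = sum(max(sw(t, a) for a in product(markets, repeat=2))
          for t in type_profiles) / len(type_profiles)
strats = [[dict(zip(types[i], c))
           for c in product(markets, repeat=len(types[i]))] for i in (0, 1)]
equilibria = [expected_sw(s) for s in product(*strats) if is_bne(s)]
poa = opt / min(equilibria)
# This toy game turns out to be (1,1)-smooth, so the smoothness bound is 2.
assert equilibria and 1 <= poa <= 2
```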
\section{Constant Strategy Space Bayesian Games and Smoothness}
In this section we give a general theorem on how one can derive Bayes-Nash Price of Anarchy
results using the smoothness framework introduced in \cite{Roughgarden2009}. We first
give a definition of smoothness suitable for incomplete information games. Our definition
just states that the game is smooth in the sense of \cite{Roughgarden2009} for
each instantiation of the type profile.
\begin{defn}
A Bayesian utility maximization game is said to be $(\lambda,\mu)$-smooth
if for any $t\in T$ and for any pair of action profiles $a,a'\in A$:
$$\sum_i u_i^{t_i}(a_i',a_{-i})\geq \lambda SW^t(a')-\mu SW^t(a)$$
\end{defn}
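Since the definition quantifies over finitely many profiles in a finite game, it can be checked exhaustively. The following sketch (our own toy example, with hypothetical helper names) verifies the inequality for every $t$, $a$ and $a'$ in a two-player market-sharing game in which a player of type $t_i$ earns $t_i$ divided by the number of players on his chosen market; that game turns out to be $(1,1)$-smooth.

```python
from itertools import product

def is_smooth(types, actions, u, lam, mu, eps=1e-9):
    """Check (lam, mu)-smoothness: for every type profile t and every pair
    of action profiles a, a2 (a2 playing the role of a'), verify
    sum_i u_i^{t_i}(a2_i, a_{-i}) >= lam*SW^t(a2) - mu*SW^t(a)."""
    n = len(actions)
    sw = lambda t, a: sum(u(i, t[i], a) for i in range(n))
    for t in product(*types):
        for a in product(*actions):
            for a2 in product(*actions):
                lhs = sum(u(i, t[i], a[:i] + (a2[i],) + a[i + 1:])
                          for i in range(n))
                if lhs < lam * sw(t, a2) - mu * sw(t, a) - eps:
                    return False
    return True

def share(i, ti, a):  # market sharing: ti split over the chosen market
    return ti / a.count(a[i])

types = [(1, 2), (1, 3)]        # per-player type sets
actions = [(0, 1), (0, 1)]      # two markets each
assert is_smooth(types, actions, share, lam=1, mu=1)      # (1,1)-smooth
assert not is_smooth(types, actions, share, lam=1, mu=0)  # but not (1,0)
```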
For the case of complete information games (i.e. the set of possible type
profiles is a singleton) Roughgarden \cite{Roughgarden2009} showed that the
efficiency achieved by any coarse-correlated equilibrium (superset of pure nash,
mixed nash and correlated equilibrium) is at least a $\lambda/(1+\mu)$ fraction
of the optimal social welfare. In other words the price of anarchy of any
of these solution concepts is at most $(1+\mu)/\lambda$. Our main result shows
that the latter also extends to Bayes-Nash Equilibria in the case of incomplete
information.
\begin{theorem}[Main Theorem]\label{thm:main}
If a \textit{Bayesian Game} is $(\lambda,\mu)$-smooth then
it has Bayes-Nash Price of Anarchy at most $\frac{1+\mu}{\lambda}$.
\end{theorem}
\begin{proof}
Let $\text{\textsc{Opt}} (t)=(\text{\textsc{Opt}} _i(t))_{i\in N}$ be the action profile that maximizes
social welfare given a type profile $t$. Suppose that player $i$ with type $t_i$ switches to playing
$\text{\textsc{Opt}} _i(t_i,w_{-i})$ for some type profile $w_{-i}$. Let $s$ be a Bayes-Nash
Equilibrium. Then we have:
\begin{align*}
\mathbb{E}_{t_{-i}}[u_i^{t_i}(s(t))]
~\geq~ & \mathbb{E}_{t_{-i}}[u_i^{t_i}(\text{\textsc{Opt}} _i(t_i,w_{-i}), s_{-i}(t_{-i}))]
\end{align*}
Taking expectation over $t_i$ and over all possible type profiles $w_{-i}$ we get:
\begin{align*}
\mathbb{E}_{t}[u_i^{t_i}(s(t))]
~\geq~ & \mathbb{E}_{w_{-i}}\mathbb{E}_{t_i}\mathbb{E}_{t_{-i}}[u_i^{t_i}(\text{\textsc{Opt}} _i(t_i,w_{-i}), s_{-i}(t_{-i}))] \\
~=~& \mathbb{E}_{w_{-i}} \mathbb{E}_{w_i} \mathbb{E}_{t_{-i}}[u_i^{w_i}(\text{\textsc{Opt}} _i(w_i,w_{-i}), s_{-i}(t_{-i}))] \\
~=~& \mathbb{E}_{t_i} \mathbb{E}_{w_{-i}} \mathbb{E}_{w_i} \mathbb{E}_{t_{-i}}[u_i^{w_i}(\text{\textsc{Opt}} _i(w_i,w_{-i}), s_{-i}(t_{-i}))] \\
~=~& \mathbb{E}_{t} \mathbb{E}_{w} [u_i^{w_i}(\text{\textsc{Opt}} _i(w), s_{-i}(t_{-i}))]
\end{align*}
Adding the above inequality for all players and using the smoothness property we get:
\begin{align}
\mathbb{E}_t[SW^t(s(t))]=\sum_i \mathbb{E}_t[u_i^{t_i}(s(t))]
~\geq~ & \sum_i\mathbb{E}_{t} \mathbb{E}_{w} [u_i^{w_i}(\text{\textsc{Opt}} _i(w), s_{-i}(t_{-i}))] \nonumber \\
~=~ & \mathbb{E}_{t} \mathbb{E}_{w} [\sum_i u_i^{w_i}(\text{\textsc{Opt}} _i(w), s_{-i}(t_{-i}))] \nonumber \\
~\geq~& \mathbb{E}_{t} \mathbb{E}_{w} [\lambda SW^{w}(\text{\textsc{Opt}} (w)) - \mu SW^w(s(t))] \nonumber \\
~=~& \lambda \mathbb{E}_w[SW^{w}(\text{\textsc{Opt}} (w))] - \mu \sum_i \mathbb{E}_{t} \mathbb{E}_{w} [u_i^{w_i}(s_i(t_i),s_{-i}(t_{-i}))] \label{eqn:bayes-nash}
\end{align}
If in the penultimate line we had $SW^t(s(t))$ then we would directly get our result. However,
because of this misalignment we need more work. In fact we are going to prove that:
\begin{equation}\label{eqn:misalignment}
\mathbb{E}_{t} \mathbb{E}_{w}[SW^w(s(t))]\leq \mathbb{E}_t[SW^t(s(t))]
\end{equation}
To achieve this we are going to use the Bayes-Nash Equilibrium definition again. But now
we are going to use it in the following sense: no player $i$ of some type $w_i$
wants to deviate to playing as if he were some other type $t_i$. This translates to:
\begin{align*}
\mathbb{E}_{t_{-i}}[u_i^{w_i}(s_i(t_i),s_{-i}(t_{-i}))] \leq \mathbb{E}_{t_{-i}}[u_i^{w_i}(s_i(w_i),s_{-i}(t_{-i}))]
\end{align*}
Taking expectation over $t_i$ and $w$ we have:
\begin{align*}
\mathbb{E}_w\mathbb{E}_t[u_i^{w_i}(s_i(t_i),s_{-i}(t_{-i}))] ~\leq~ & \mathbb{E}_{w_i}\mathbb{E}_{w_{-i}}\mathbb{E}_{t_i}\mathbb{E}_{t_{-i}}[u_i^{w_i}(s_i(w_i),s_{-i}(t_{-i}))]\\
~=~& \mathbb{E}_{w_i}\mathbb{E}_{t_{-i}}[u_i^{w_i}(s_i(w_i),s_{-i}(t_{-i}))] \\
~=~& \mathbb{E}_{t_i}\mathbb{E}_{t_{-i}}[u_i^{t_i}(s_i(t_i),s_{-i}(t_{-i}))]\\
~=~& \mathbb{E}_t[u_i^{t_i}(s(t))]
\end{align*}
Summing over all players gives us inequality \ref{eqn:misalignment}. Now combining inequality \ref{eqn:misalignment}
with inequality \ref{eqn:bayes-nash} we get:
\begin{equation}
\mathbb{E}_t[SW^t(s(t))]\geq \lambda \mathbb{E}_w[SW^{w}(\text{\textsc{Opt}} (w))] - \mu \mathbb{E}_t[SW^t(s(t))]
\end{equation}
which gives the theorem.
\end{proof}
In fact it is easy to see that our above analysis also works for a more relaxed version of smoothness
similar to the variant introduced in \cite{Lucier2011}:
\begin{defn}\label{def:semi-smooth}
A Bayesian utility maximization game is said to be $(\lambda,\mu)$-smooth
if for any $t\in T$ and $a\in A$, there exists a strategy profile $a'(t)$
such that:
$$\sum_i u_i^{t_i}(a_i'(t),a_{-i})\geq \lambda SW^t(\text{\textsc{Opt}} (t))-\mu SW^t(a)$$
\end{defn}
In fact our main proof holds even for a slightly more relaxed smoothness property that will
prove useful in auction settings. Our main theorem works for the following notion
of smoothness under the condition that the utilities of players at equilibrium are non-negative.
\begin{defn}
A Bayesian utility maximization game is said to be $(\lambda,\mu)$-smooth
if for any $t\in T$ and $a\in A$, there exists a strategy profile $a'(t)$
such that:
$$\sum_i u_i^{t_i}(a_i'(t),a_{-i})\geq \lambda SW^t(\text{\textsc{Opt}} (t))-\mu \sum_{i\in K}u_i^{t_i}(a)$$
where $K\subseteq [n]$ is some fixed subset of the players, independent of the type profile $t$ and the action profile $a$.
\end{defn}
The reason the latter definition is useful is that some auction environments fail
to be smooth under the first definition, mainly because the utility of a player
may be non-negative in expectation at equilibrium and yet negative under
some bid profile $b$ that is not an equilibrium. Hence, the latter definition helps in settings
where utilities at equilibrium are certainly non-negative (individual rationality) but
there exist strategy profiles at which a player gets negative utility.
\section{Item-Bidding Auctions}
In this section we consider the Item-Bidding Auctions studied in Christodoulou et al \cite{Christodoulou2008},
Bhawalkar and Roughgarden \cite{Bhawalkar} and Hassidim et al. \cite{Hassidim2011a}.
We first prove a smoothness result for First Price Item-Bidding Auctions with fractionally
subadditive bidders. Fractionally subadditive valuations are a subclass of subadditive valuations and
a generalization of submodular valuations. Our results imply a significant improvement over existing results.
Specifically, Hassidim et al show that for fractionally subadditive bidders the Bayes-Nash
Price of Anarchy is at most $4$ (and at most $4\beta$ for $\beta$-fractionally subadditive bidders).
We show that it is at most $\frac{e}{e-1}\approx 1.58$ (and at most $\frac{e}{e-1}\beta$ correspondingly).
Thus our result gives the same guarantees for first price item-bidding auctions as those
existing for second price item-bidding auctions (e.g. Christodoulou et al \cite{Christodoulou2008},
Bhawalkar et al \cite{Bhawalkar}).
\begin{theorem}
First Price Item-Bidding Auctions are $(\frac{1}{2},0)$-semi-smooth for
fractionally subadditive bidders.
\end{theorem}
\begin{proof}
Consider a valuation profile $v$ and a bid profile $b$. Let $\text{\textsc{Opt}} (v)$ be the optimal
allocation for that type profile. Let $\text{\textsc{Opt}} _i(v)$ be the set that player
$i$ gets at $\text{\textsc{Opt}} (v)$. Let $a=(a_1,\ldots,a_m)$
be the maximizing additive valuation for player $i$ for $\text{\textsc{Opt}} _i(v)$, i.e.
$v_i(\text{\textsc{Opt}} _i(v))=\sum_{j \in \text{\textsc{Opt}} _i(v)}a_j$ and $\forall S\neq \text{\textsc{Opt}} _i(v): v_i(S)\geq \sum_{j\in S}a_j$
(Such an $a$ exists by the definition of fractionally subadditive valuations).
Suppose that player $i$ switches to bidding $a_j/2$ for each $j\in \text{\textsc{Opt}} _i(v)$
and $0$ everywhere else. Denote with $b_i'(v)$ such a deviation.
Let $X_i$ be the items that he wins after the deviation.
This means that for all $j\in \text{\textsc{Opt}} _i(v)-X_i: p_j(b)\geq a_j/2$ and for all $j\in X_i$
player $i$ pays exactly $a_j/2$.
Thus we have:
\begin{align*}
u_i(b_i'(v),b_{-i})\geq &~ v_i(X_i)-\sum_{j\in X_i}\frac{a_j}{2}
\geq \sum_{j\in X_i}a_j -\sum_{j\in X_i}\frac{a_j}{2}
= \sum_{j\in X_i}\frac{a_j}{2}\\
\geq &~ \sum_{j\in X_i}\frac{a_j}{2} + \sum_{j\in \text{\textsc{Opt}} _i(v)-X_i}\left(\frac{a_j}{2}-p_j(b)\right)\\
= &~ \sum_{j\in \text{\textsc{Opt}} _i(v)}\frac{a_j}{2}-\sum_{j\in \text{\textsc{Opt}} _i(v)-X_i}p_j(b)\\
\geq &~ \sum_{j\in \text{\textsc{Opt}} _i(v)}\frac{a_j}{2}-\sum_{j\in \text{\textsc{Opt}} _i(v)}p_j(b)\\
= &~ \frac{v_i(\text{\textsc{Opt}} _i(v))}{2}-\sum_{j\in \text{\textsc{Opt}} _i(v)}p_j(b)
\end{align*}
Now summing over all players the second term on the right hand side will become
the sum of prices over all items, since the sets $\text{\textsc{Opt}} _i(v)$ are disjoint. Thus we get:
\begin{equation}\label{eqn:smooth-first-price}
\sum_i u_i^{v_i}(b_i'(v),b_{-i}) + \sum_{j\in [m]}p_j(b)\geq \frac{\sum_i v_i(\text{\textsc{Opt}} _i(v))}{2}= \frac{1}{2}SW^v(\text{\textsc{Opt}} (v))
\end{equation}
The above inequality gives us the smoothness property. To fit it completely into our smoothness
model, we also view the seller as a player with only one strategy and only one type,
whose utility is the sum of payments. His optimal deviation is then the trivial one of doing nothing.
The left hand side of the above inequality is then the sum of the utilities of all the players
(including the seller) had each of them unilaterally deviated to their optimal strategy.
\end{proof}
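As a sanity check, the deviation argument above lends itself to direct numerical verification. The following sketch (purely illustrative; the clause-based random XOS valuations and the tiny instance size are our own assumptions) brute-forces the optimal allocation and checks inequality \ref{eqn:smooth-first-price} for random bid profiles:

```python
import itertools
import random

random.seed(0)
n, m = 2, 4  # players, items (kept tiny so the optimum can be brute-forced)

def make_xos():
    # an XOS valuation given by a few random additive clauses
    return [[random.uniform(0, 1) for _ in range(m)] for _ in range(3)]

def val(xos, S):
    # v(S) = max over clauses of the clause's additive value on S
    return max(sum(c[j] for j in S) for c in xos) if S else 0.0

vals = [make_xos() for _ in range(n)]

# welfare-maximizing allocation by brute force over item-to-player maps
best, opt = -1.0, None
for assign in itertools.product(range(n), repeat=m):
    S = [tuple(j for j in range(m) if assign[j] == i) for i in range(n)]
    sw = sum(val(vals[i], S[i]) for i in range(n))
    if sw > best:
        best, opt = sw, S

def utility(i, bids):
    # first price: the highest bid wins each item and pays its bid
    # (the deviator loses ties, matching the p_j(b) >= a_j/2 case of the proof)
    won = [j for j in range(m)
           if bids[i][j] > max(bids[k][j] for k in range(n) if k != i)]
    return val(vals[i], won) - sum(bids[i][j] for j in won)

for _ in range(200):
    b = [[random.uniform(0, 1) for _ in range(m)] for _ in range(n)]
    lhs = sum(max(b[i][j] for i in range(n)) for j in range(m))  # sum of prices
    for i in range(n):
        # maximizing additive clause a for Opt_i(v); deviation bids a_j/2 there
        a = max(vals[i], key=lambda c: sum(c[j] for j in opt[i]))
        dev = [a[j] / 2 if j in opt[i] else 0.0 for j in range(m)]
        lhs += utility(i, [dev if k == i else b[k] for k in range(n)])
    assert lhs >= 0.5 * best - 1e-9  # inequality of the theorem
```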
In fact, considering randomized deviations similar to those of \cite{Lucier2011}, we are able
to prove a much tighter bound of $\frac{e}{e-1}\approx 1.58$ on the Price of Anarchy by
showing that the above game is actually $(1-\frac{1}{e},0)$-semi-smooth.
\begin{theorem}
First Price Item-Bidding Auctions are $(1-\frac{1}{e},0)$-semi-smooth for
fractionally subadditive bidders.
\end{theorem}
\begin{proof}
Consider a valuation profile $v$ and a bid profile $b$. Let $\text{\textsc{Opt}} (v)$ be the optimal
allocation for that type profile. Let $\text{\textsc{Opt}} _i(v)$ be the set that player
$i$ gets at $\text{\textsc{Opt}} (v)$. Let $a=(a_1,\ldots,a_m)$
be the maximizing additive valuation for player $i$ for $\text{\textsc{Opt}} _i(v)$, i.e.
$v_i(\text{\textsc{Opt}} _i(v))=\sum_{j \in \text{\textsc{Opt}} _i(v)}a_j$ and $\forall S\neq \text{\textsc{Opt}} _i(v): v_i(S)\geq \sum_{j\in S}a_j$
(Such an $a$ exists by the definition of fractionally subadditive valuations).
Suppose that player $i$ switches to bidding, on each item $j\in \text{\textsc{Opt}} _i(v)$, a randomized bid with probability
density function $f(b)=\frac{1}{a_j-b}$ for $b\in [0,a_j(1-\frac{1}{e})]$,
and $0$ on every other item. The randomization is independent across items. Denote with $\tilde{B}_i(v)$ such a
randomized deviation and with $\tilde{b}_i$ a random draw from $\tilde{B}_i(v)$.
Let $X_i(b_i)$ be the random variable that denotes the items that he wins after the deviation.
Thus we have:
\begin{align*}
u_i(\tilde{B}_i(v),b_{-i})\geq &~ \mathbb{E}_{\tilde{b}_i\sim \tilde{B}_i(v)}[v_i(X_i(\tilde{b}_i))-\sum_{j\in X_i(\tilde{b}_i)}\tilde{b}_{ij}] \\
\geq~& \mathbb{E}_{\tilde{b}_i\sim \tilde{B}_i(v)}[\sum_{j\in X_i(\tilde{b}_i)}a_j -\sum_{j\in X_i(\tilde{b}_i)}\tilde{b}_{ij}]\\
\geq~& \mathbb{E}_{\tilde{b}_i\sim \tilde{B}_i(v)}[\sum_{j\in \text{\textsc{Opt}} _i(v)}(a_j -\tilde{b}_{ij})\mathbbm{1}\{\tilde{b}_{ij}\geq p_j(b)\}]\\
=~& \sum_{j\in \text{\textsc{Opt}} _i(v)}\mathbb{E}_{\tilde{b}_i\sim \tilde{B}_i(v)}[(a_j -\tilde{b}_{ij})\mathbbm{1}\{\tilde{b}_{ij}\geq p_j(b)\}]\\
\geq~& \sum_{j\in \text{\textsc{Opt}} _i(v)}\int_{p_j(b)}^{a_j(1-\frac{1}{e})}(a_j -t)\frac{1}{a_j-t}dt\\
=~& \sum_{j\in \text{\textsc{Opt}} _i(v)}\left(a_j\left(1-\frac{1}{e}\right)-p_j(b)\right)\\
=~& \left(1-\frac{1}{e}\right)v_i(\text{\textsc{Opt}} _i(v))-\sum_{j\in \text{\textsc{Opt}} _i(v)}p_j(b)
\end{align*}
Now summing over all players the second term on the right hand side will become
the sum of prices over all items, since the sets $\text{\textsc{Opt}} _i(v)$ are disjoint. Thus we get:
\begin{equation}
\sum_i u_i^{v_i}(\tilde{B}_i(v),b_{-i}) + \sum_{j\in [m]}p_j(b)\geq \left(1-\frac{1}{e}\right)\sum_i v_i(\text{\textsc{Opt}} _i(v))=\left(1-\frac{1}{e}\right)SW^v(\text{\textsc{Opt}} (v))
\end{equation}
\end{proof}
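The randomized deviation rests on two facts about the density $f$: it integrates to $1$ on $[0,a_j(1-\frac{1}{e})]$, and since $(a_j-b)f(b)=1$ the expected surplus against a standing price $p_j(b)$ in the support is exactly $a_j(1-\frac{1}{e})-p_j(b)$. A small numerical sketch (the value of $a_j$ is an arbitrary illustrative choice) confirms both:

```python
import math

def integrate(f, lo, hi, steps=50000):
    # simple midpoint rule; accurate enough here since f is smooth on [lo, hi]
    h = (hi - lo) / steps
    return sum(f(lo + (k + 0.5) * h) for k in range(steps)) * h

a = 1.7                      # an arbitrary value for a_j (illustrative assumption)
top = a * (1 - 1 / math.e)   # upper end of the bid support
f = lambda b: 1 / (a - b)    # the claimed density on [0, a(1-1/e)]

# f really is a probability density: it integrates to 1
assert abs(integrate(f, 0.0, top) - 1.0) < 1e-6

# expected surplus against a price p <= a(1-1/e): since (a-b)f(b) = 1,
# E[(a - b) 1{b >= p}] = integral of 1 from p to a(1-1/e) = a(1-1/e) - p
for p in [0.0, 0.3, 0.6, top]:
    gain = integrate(lambda b: (a - b) * f(b), p, top)
    assert abs(gain - (top - p)) < 1e-6
```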
Similarly one can also show that for $\beta$-fractionally subadditive bidders the First Price
Item-Bidding Auction is $(\frac{1}{\beta}\left(1-\frac{1}{e}\right),0)$-semi-smooth.
\begin{corollary}
First Price Item-Bidding Auctions with independent $\beta$-fractionally subadditive bidders have Bayes-Nash Price
of Anarchy at most $\frac{e}{e-1}\beta$.
\end{corollary}
It is known (see \cite{Bhawalkar}) that subadditive bidders are $\ln m$-fractionally subadditive. Thus the latter
gives a price of anarchy of $\frac{e}{e-1}\ln m$ for subadditive bidders.
The latter result creates an interesting separation between first price and second price
auctions. For the complete information case it is known
that second price item bidding has price of anarchy at least $2$ \cite{Christodoulou2008}, whilst pure Nash equilibria of
first price auctions are always optimal (when a pure Nash equilibrium exists) \cite{Hassidim2011a}. The above bound shows that
such a separation persists even in the incomplete information case, since second price has price of anarchy
at least $2$ whilst first price auctions have price of anarchy at most $\approx 1.58$.
\section{Greedy First Price Auctions}
In this section we consider the greedy first price auctions introduced by Lucier and Borodin \cite{Lucier2009}.
In a greedy auction setting there are $n$ bidders and $m$ items. Each bidder $i$ has some private combinatorial
valuation $v_i$ on the items that is drawn from some commonly known distribution.
The strategy of each player is to submit a valuation profile $b_i$ that outputs a value $b_i(S)$
for each set of items $S$. An allocation is a vector $A=(A_1,\ldots,A_n)$ that allocates
a set $A_i$ to each player $i$. We assume that there is a predefined subspace of feasible allocations.
This setting is a generalization of the combinatorial auction setting since we don't assume
that the allocations $A_i$ must be disjoint, i.e. the same item can potentially be allocated to more
than one player.
A mechanism $\ensuremath{\mathcal M}(b)=(\ensuremath{\mathcal{A}}(b),p(b))$ takes as input a bid profile $b$ and outputs a feasible allocation
$\ensuremath{\mathcal{A}}(b)$ and a vector of prices $p(b)$ that each player has to pay. A mechanism is said to be greedy
if the allocation output by the mechanism is the outcome of a greedy algorithm,
as we now explain. Given a bid profile $b$, a greedy algorithm is defined as follows:
Let $r:[n]\times 2^{[m]}\times \ensuremath{\mathbb R} \rightarrow \ensuremath{\mathbb R}$ be a priority function such that
$r(i,S,v)$ is the priority of allocating set $S$ to player $i$ when $b_i(S)=v$. The greedy
algorithm repeatedly picks the pair $(i,A_i)$ that maximizes $r(i,S,b_i(S))$ over all
currently feasible pairs $(i,S)$, allocates set $A_i$ to player $i$, and then removes player $i$.
A greedy algorithm is $c$-approximate if for any bid profile $b$ it returns an allocation whose value
is at least a $1/c$-fraction of the maximum possible allocation value given valuation profile $b$.
Moreover, the mechanism is first price if the payment that the mechanism outputs for each player is
$p_i(b)=b_i(A_i)$.
Given a type profile $v$ and a bid strategy profile $b$, the social welfare is,
as always, the sum of the players' utilities (including the auctioneer as a player). This boils down
to the value of the allocation, $\sum_i v_i(A_i)$, since payments cancel out.
The game defined by a greedy mechanism is a Separable Bayesian Game, and in the theorem that follows
we prove that it is also $(\frac{1}{2},c-1)$-smooth. This leads to a price of anarchy of $2c$.
This is not an improvement over the existing bound of $c+O(\log(c))$ by Lucier and Borodin, but the analysis
is much simpler and the difference between the two bounds is small.
\begin{theorem}
Any Greedy First Price Mechanism based on a $c$-approximate greedy algorithm defines a
Bayesian Game that is $(\frac{1}{2},c-1)$-smooth.
\end{theorem}
\begin{proof}
Given the mechanism $\ensuremath{\mathcal M}$, we define $\theta_i(S,b_{-i})$ as the critical value
that player $i$ has to bid on set $S$ so that he is allocated set $S$, given the bids
of the rest of the players.
In our proof we will use a very nice fact about $c$-approximate greedy mechanisms that
was proved by Lucier and Borodin. For any $c$-approximate greedy mechanism and any feasible
allocation $A'$ it must hold that $\sum_i \theta_i(A_i',b_{-i})\leq c\sum_i b_i(A_i)=c\sum_i p_i(b)$.
Now consider a valuation profile $v$ and bid profile $b$. Let $\text{\textsc{Opt}} _i(v)$ be the set allocated
to player $i$ in the optimal allocation for $v$. Let $b_i'(v)$ be the following bid strategy for player $i$:
he bids $v_i(\text{\textsc{Opt}} _i(v))/2$ single-mindedly on $\text{\textsc{Opt}} _i(v)$, i.e. $\forall S\neq \text{\textsc{Opt}} _i(v): b_i(S)=0$
and $b_i(\text{\textsc{Opt}} _i(v))=\frac{v_i(\text{\textsc{Opt}} _i(v))}{2}$. There are two cases: either he gets allocated his
optimal set, in which case $u_i(b_i'(v),b_{-i})=\frac{v_i(\text{\textsc{Opt}} _i(v))}{2}$, or he doesn't,
in which case his utility is non-negative while $\theta_i(\text{\textsc{Opt}} _i(v),b_{-i})\geq \frac{v_i(\text{\textsc{Opt}} _i(v))}{2}$. In both cases we get:
\begin{equation}
u_i(b_i'(v),b_{-i})\geq \frac{v_i(\text{\textsc{Opt}} _i(v))}{2}-\theta_i(\text{\textsc{Opt}} _i(v),b_{-i})
\end{equation}
Summing over all players and using the upper bound on critical price proved by Lucier and Borodin instantiated
for $A'=\text{\textsc{Opt}} (v)$ we get:
\begin{equation}
\sum_iu_i(b_i'(v),b_{-i}) \geq \sum_i \frac{v_i(\text{\textsc{Opt}} _i(v))}{2} - \sum_i \theta_i(\text{\textsc{Opt}} _i(v),b_{-i})
\geq \frac{1}{2}\sum_i v_i(\text{\textsc{Opt}} _i(v)) - c \sum_i p_i(b)
\end{equation}
Now, by rearranging we get:
\begin{equation}
\sum_iu_i(b_i'(v),b_{-i}) + \sum_i p_i(b) \geq \sum_i \frac{v_i(\text{\textsc{Opt}} _i(v))}{2} - \sum_i \theta_i(\text{\textsc{Opt}} _i(v),b_{-i}) + \sum_i p_i(b)
\geq \frac{1}{2}\sum_i v_i(\text{\textsc{Opt}} _i(v)) - (c-1) \sum_i p_i(b)
\end{equation}
The latter states that our game satisfies our most relaxed semi-smoothness definition for $\lambda=\frac{1}{2}$
and $\mu=c-1$.
\end{proof}
In fact, we can improve our bound on the price of anarchy to $\frac{e}{e-1}c$ using the
same trick of randomized deviations that we did in item bidding.
\section{Variable Strategy Space Games and Universal Smoothness}
In this section we cope with a more general class of Bayesian Games,
whose goal is to capture games where not only the utility but also the strategy
space of a player depends on his type. We denote with $A_i(t_i)\subseteq A_i$ the actions available
to a player of type $t_i$. A player's strategy is a function $s_i:T_i\rightarrow A_i$ that
satisfies $\forall t_i\in T_i: s_i(t_i)\in A_i(t_i)$. We denote $A(t)=\times_i A_i(t_i)$.
We still assume that the utility of a player depends on his own type and on the
action profile: $u_i:T_i\times A\rightarrow \ensuremath{\mathbb R}$. We must point out that the utility of player $i$ is undefined
if $a_i\notin A_i(t_i)$. The rest of the definitions are
the same as in our previous definition of Bayesian Games. Our initial definition was
a special case of this class of Bayesian Games where $\forall t_i\in T_i:A_i(t_i)=A_i$.
For such a Bayesian Game we need a slight alteration of the definition of what it means
for a complete information instance of it to be smooth, since the utility of the complete
information instance is undefined on strategies that are not in the strategy space of a player
for that instance.
\begin{defn}
A Bayesian Game is $(\lambda,\mu)$-smooth if $\forall t\in T$ and for all $a,a'\in A(t)$:
$$\sum_{i} u_i^{t_i}(a_i',a_{-i})\geq \lambda \sum_i u_i^{t_i}(a')-\mu \sum_i u_i^{t_i}(a)$$
\end{defn}
The above class of games is very general, and it might seem
hard to believe that one can generalize smoothness to such a class.
However, many of the games in the literature satisfy an even stronger definition of smoothness.
For games that satisfy this stronger definition we can generalize existing results
to incomplete information versions of the games.
\begin{defn}
A Bayesian Game is universally $(\lambda,\mu)$-smooth iff $\forall t,w \in T$ and for all
$a\in A(t)$, $b\in A(w)$:
$$\sum_i u_i^{w_i}(b_i,a_{-i})\geq \lambda \sum_i u_i^{w_i}(b)-\mu \sum_i u_i^{t_i}(a)$$
Observe that since $b_i\in A_i(w_i)$, the first term is also well defined.
\end{defn}
Universal smoothness is a more restrictive notion than smoothness, in the sense that if a Bayesian
Game is universally smooth then it is also smooth. This follows from the fact that if we take
the definition of universal smoothness restricted only when $t=w$ then we get the smoothness definition.
In addition the two definitions are equivalent to the smoothness of \cite{Roughgarden2009} for
complete information games.
Though the above definition seems quite restrictive, in the sections that follow we show
that most routing games studied in the literature are actually universally $(\lambda,\mu)$-smooth,
and therefore the bounds known for complete information carry over to some natural incomplete
information versions.
\begin{theorem}
If a Bayesian Game with Variable Strategy Space is universally $(\lambda,\mu)$-smooth then
it has Bayes-Nash PoA at most $(1+\mu)/\lambda$.
\end{theorem}
\begin{proof}
Using similar reasoning as in Theorem \ref{thm:main} we can arrive at the conclusion that:
\begin{align*}
\mathbb{E}_t[SW^t(s(t))]~\geq~ & \mathbb{E}_{t} \mathbb{E}_{w} [\sum_i u_i^{w_i}(\text{\textsc{Opt}} _i(w), s_{-i}(t_{-i}))]
\end{align*}
Then we observe that $\text{\textsc{Opt}} (w)\in A(w)$ and $s(t)\in A(t)$. Thus applying the definition of
universal smoothness we get:
\begin{align*}
\mathbb{E}_t[SW^t(s(t))]~\geq~ & \mathbb{E}_t \mathbb{E}_w [\lambda \sum_i u_i^{w_i}(\text{\textsc{Opt}} (w))-\mu \sum_i u_i^{t_i}(s(t))]\\
~=~& \lambda \mathbb{E}_w[SW^w(\text{\textsc{Opt}} (w))]-\mu \mathbb{E}_t[SW^t(s(t))]
\end{align*}
which gives the theorem.
\end{proof}
\subsection{Weighted Congestion Games with Probabilistic Demands}
In this section we examine how our analysis applies to incomplete information versions of routing games.
The games that we study in this section are cost minimization games and hence we will use the variants
of our theorems and notation so far adapted to cost minimization games.
We first describe the complete information game. We consider unsplittable atomic selfish routing
games where each player has demand $w_i$ that he needs to send from a node $s_i$ to a node $t_i$ over a graph $G$.
Let $\mathcal{P}_i$ be the set of paths from $s_i$ to $t_i$. The strategy of a player is to choose a path $p_i\in \mathcal{P}_i$.
Each edge $e$ of the graph has some delay function $l_e(x_e)$ where $x_e$ denotes
the total congestion of edge $e$: $x_e=\sum_{i: e\in p_i}w_i$, which is assumed to
be monotone non-decreasing. Given a strategy profile $p=(p_i)_{i\in n}$
the cost of a player is $c_i(p)=\sum_{e \in p_i} w_i l_e(x_e)$.
In the literature so far, only the case where the $w_i$ are common knowledge has been studied. This is a very strong informational
assumption. It is more natural to consider the case where the $w_i$ are private information
and only a distribution over them is common knowledge. Thus the type of a player in our game is his
weight $w_i$. To make the game comply with our definition of Bayesian Games, we assume that the strategy of a player is a pair $(r_i,p_i)$ of a rate $r_i$ and a path $p_i$. In addition, given a type $w_i$, a player's
action space is $A_i(w_i)=\{(w_i,p_i): p_i\in \mathcal{P}_i\}$, i.e. we constrain the player to
route his whole demand. Given this small alteration in the definition of the game, it is now
easy to see that the cost of a player depends only on the strategies of the other players and
not on their types: $\forall a_i=(r_i,p_i)\in A_i(t_i), \forall a_{-i}=(r_{-i},p_{-i})\in A_{-i}: c_i^{t_i}(a_i,a_{-i})=\sum_{e\in p_i} r_i l_e(x_e(a))$ where $x_e(a)=\sum_{k: e\in p_k} r_k$, and it is undefined for $a_i\notin A_i(t_i)$.
Hence, if we prove that the latter Bayesian Game is universally $(\lambda,\mu)$-smooth,
this would imply a Bayes-Nash PoA of $\lambda/(1-\mu)$.
Recently, Bhawalkar et al \cite{Bhawalkar2010} showed that
weighted congestion games are smooth games and therefore smoothness arguments provide tight
results for the Price of Anarchy. Our analysis shows that one can extend these upper bounds to
incomplete information too. Moreover, since complete information is a special case of incomplete
information where the priors are singleton distributions, the Bayes-Nash price of anarchy analysis
remains tight. This shows a collapse of efficiency between complete and incomplete
information: knowing more doesn't necessarily improve the efficiency guarantees in these
types of games.
Most of the literature on weighted congestion games uses the following fact:
if every delay function in the allowed class $\ensuremath{\mathcal{C}}$ satisfies
$\forall x,x^*\in \mathbb{R}^+: x^*l_e(x+x^*)\leq \lambda x^*l_e(x^*)+\mu xl_e(x)$, then
weighted congestion games with delays in class $\ensuremath{\mathcal{C}}$ are $(\lambda,\mu)$-smooth.
We will actually show that if the delay functions satisfy the above property then the
Bayesian Game is universally $(\lambda,\mu)$-smooth.
\begin{lemma}If every delay function $l_e()$ in the allowed class of delay functions $\ensuremath{\mathcal{C}}$
satisfies $\forall x,x^*\in \mathbb{R}^+: x^*l_e(x+x^*)\leq \lambda x^*l_e(x^*)+\mu xl_e(x)$,
then the resulting class of Bayesian Unsplittable Selfish Routing Games with Probabilistic
Demands is universally $(\lambda,\mu)$-smooth.
\end{lemma}
\begin{proof}
Let $w$, $t$ be two type profiles. Let $a=(w,p)\in A(w)$ and $b=(t,p')\in A(t)$. Let
$x_e(a)=\sum_{i:e\in p_i} w_i$ and $x_e(b)=\sum_{i:e\in p_i'}t_i$. Then:
\begin{align*}
\sum_i c_i^{t_i}(b_i,a_{-i})\leq & \sum_i \sum_{e\in p_i'}t_i l_e(t_i+x_e(a))
\leq \sum_i \sum_{e\in p_i'}t_i l_e(x_e(b)+x_e(a))\\
= & \sum_e x_e(b)l_e(x_e(b)+x_e(a))\\
\leq & \lambda \sum_e x_e(b)l_e(x_e(b))+\mu \sum_e x_e(a) l_e(x_e(a))\\
= & \lambda \sum_i \sum_{e\in p_i'} t_i l_e(x_e(b))+ \mu \sum_i \sum_{e\in p_i} w_i l_e(x_e(a))\\
= & \lambda \sum_i c_i^{t_i}(b) + \mu \sum_i c_i^{w_i}(a)
\end{align*}
which is exactly the universal smoothness definition.
\end{proof}
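To make the per-edge condition concrete, the sketch below instantiates it for affine delays $l_e(x)=\alpha x+\beta$ with the constants known for that class, grid-checks the inequality, and evaluates the implied bound $\lambda/(1-\mu)=(3+\sqrt{5})/2\approx 2.618$. The affine instantiation and the grid are our own illustrative choices:

```python
import math

mu = (math.sqrt(5) - 1) / 4          # ~0.309
lam = 1 + 1 / (4 * mu)               # ~1.809; chosen so the condition is tight

# per-delay-function condition of the lemma, for affine l(x) = alpha*x + beta:
#   xs * l(x + xs) <= lam * xs * l(xs) + mu * x * l(x)   for all x, xs >= 0
def holds(alpha, beta, x, xs):
    lhs = xs * (alpha * (x + xs) + beta)
    rhs = lam * xs * (alpha * xs + beta) + mu * x * (alpha * x + beta)
    return lhs <= rhs + 1e-9

for alpha in [0.0, 0.5, 1.0, 3.0]:
    for beta in [0.0, 1.0, 2.0]:
        for x in [k * 0.1 for k in range(100)]:
            for xs in [k * 0.1 for k in range(100)]:
                assert holds(alpha, beta, x, xs)

# the implied Bayes-Nash PoA bound lam/(1-mu) equals (3+sqrt(5))/2 ~ 2.618
assert abs(lam / (1 - mu) - (3 + math.sqrt(5)) / 2) < 1e-12
```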
\subsection{Bayesian Effort Games}
In this section we study what our analysis implies for incomplete information versions of
the following class of effort games \cite{Bachrach2011}: There is a set of players
$[n]$ and a set of projects $[m]$. Each player has some budget of effort $B_i$ which he can split among the projects. Each project $j$ has some value that is a non-decreasing concave function $V_j()$ of the weighted sum of efforts $\sum_{i\in N} a_{ij} x_{ij}$, where $a_{ij}$ is some ability factor of player $i$ in project $j$ (we assume $V_j(0)=0$).
The value of a project is then split among
the participants and each participant receives a share proportional to his weighted input $a_{ij}x_{ij}$.
Such games were shown in \cite{Bachrach2011} to be Valid Utility Games \cite{Vetta2002} and hence $(1,1)$-smooth.
We consider the natural incomplete information version of these games where each player's
ability vector $a_i=(a_{ij})_{j\in [m]}$ and budget $B_i$ are private information, each pair $(a_i,B_i)$
drawn from some distribution $F_i$ on $\ensuremath{\mathbb R}^{m+1}_+$. The $F_i$ are independent. To
adapt it to our variable strategy space model, we assume that the strategy of a player
is a pair $(\tilde{a}_i,x_i)$ where $\tilde{a}_i$ is the declared ability vector and
$x_i$ is the vector of efforts of player $i$. We constrain the strategy space
such that given an ability vector $a_i$ the player has to declare his true ability vector:
$A_i(a_i,B_i)=\{(a_i,x_i): x_i\in \ensuremath{\mathbb R}^m_+, \sum_j x_{ij}\leq B_i\}$.
We are able to show that these games are actually universally smooth games and thereby
the Bayes-Nash PoA of the above Bayesian Games will be at most $2$.
\begin{lemma} Bayesian Effort Market Games are universally $(1,1)$-smooth. \end{lemma}
\begin{proof}
The proof is an adaptation of the smoothness proof for valid utility games \cite{Vetta2002,Roughgarden2009}, but in the space of
real functions instead of set functions. In addition it is adapted to accommodate for different types
of players so as to show the stronger version of universal smoothness is satisfied.
Let $s=(a,x)\in A(a,B)$ and $s'=(b,y)\in A(b,B')$. Then we have:
\begin{align*}
\sum_i u_i^{(b_i,B_i')}(s_i',s_{-i}) = & \sum_i \sum_j b_{ij} y_{ij} \frac{V_j(b_{ij}y_{ij}+a_{-i}\cdot x_{-i})}{b_{ij}y_{ij}+a_{-i}\cdot x_{-i}}
\end{align*}
Now we use the fact that for a concave function $V_j()$ that satisfies $V_j(0)=0$, it holds that
$V_j(x)/x$ is a decreasing function. Hence:
$$\frac{V_j(b_{ij}y_{ij}+a_{-i}\cdot x_{-i})}{b_{ij}y_{ij}+a_{-i}\cdot x_{-i}} \leq
\frac{V_j(a_{-i}\cdot x_{-i})}{a_{-i}\cdot x_{-i}}\implies
b_{ij} y_{ij} \frac{V_j(b_{ij}y_{ij}+a_{-i}\cdot x_{-i})}{b_{ij}y_{ij}+a_{-i}\cdot x_{-i}}
\geq V_j(b_{ij}y_{ij}+a_{-i}\cdot x_{-i})-V_j(a_{-i}\cdot x_{-i})$$
Thus:
\begin{align*}
\sum_i u_i^{(b_i,B_i')}(s_i',s_{-i}) \geq \sum_i \sum_j (V_j(b_{ij}y_{ij}+a_{-i}\cdot x_{-i})-V_j(a_{-i}\cdot x_{-i}))
\end{align*}
In addition, since $V_j()$ is concave, for all $t_1>t_2\geq 0$ and $y>0$: $V_j(y+t_1)-V_j(t_1)\leq V_j(y+t_2)-V_j(t_2)$. Combining, we get:
\begin{align*}
\sum_i u_i^{(b_i,B_i')}(s_i',s_{-i}) \geq& \sum_j \sum_i (V_j(b_{ij}y_{ij}+a_{-i}\cdot x_{-i})-V_j(a_{-i}\cdot x_{-i}))\\
\geq & \sum_j \sum_i V_j\left( b_{ij}y_{ij}+a_{-i}\cdot x_{-i}+ a_{ij}x_{ij}+\sum_{k=1}^{i-1}b_{kj}y_{kj}\right)-V_j\left(a_{-i}\cdot x_{-i}+a_{ij}x_{ij}+\sum_{k=1}^{i-1}b_{kj}y_{kj}\right)\\
= & \sum_j \sum_i V_j\left( \sum_{k=1}^i (b_{kj}y_{kj}+a_{kj}x_{kj})+ \sum_{k=i+1}^{n}a_{kj}x_{kj}\right)-V_j\left(\sum_{k=1}^{i-1} (b_{kj}y_{kj}+a_{kj}x_{kj})+ \sum_{k=i}^{n}a_{kj}x_{kj}\right)\\
= & \sum_j V_j\left(\sum_{k=1}^{n} (b_{kj}y_{kj}+a_{kj}x_{kj})\right)- V_j\left(\sum_{k=1}^{n} a_{kj}x_{kj}\right)\\
\geq & \sum_j V_j\left(\sum_{k=1}^{n} b_{kj}y_{kj}\right)- V_j\left(\sum_{k=1}^{n} a_{kj}x_{kj}\right)\\
= & SW^{b,B'}(s')-SW^{a,B}(s)
\end{align*}
\end{proof}
\begin{corollary}
The Bayes-Nash PoA of Bayesian Effort Games is at most $2$.
\end{corollary}
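The pointwise inequality behind universal $(1,1)$-smoothness can be spot-checked numerically. The sketch below fixes $V_j(\cdot)=\sqrt{\cdot}$ (concave, increasing, $V_j(0)=0$) and draws random abilities and efforts; these choices are illustrative assumptions, not part of the proof:

```python
import math
import random

random.seed(1)
V = math.sqrt                      # a concave project value with V(0) = 0

def share_utility(i, contrib):
    # player i's proportional share of each project's value;
    # contrib[k][j] = weighted effort of player k on project j
    u = 0.0
    for j in range(len(contrib[0])):
        tot = sum(c[j] for c in contrib)
        if contrib[i][j] > 0:
            u += contrib[i][j] * V(tot) / tot
    return u

def welfare(contrib):
    return sum(V(sum(c[j] for c in contrib)) for j in range(len(contrib[0])))

n, m = 3, 4
for _ in range(500):
    a = [[random.uniform(0, 2) for _ in range(m)] for _ in range(n)]  # abilities in s
    b = [[random.uniform(0, 2) for _ in range(m)] for _ in range(n)]  # abilities in s'
    x = [[random.uniform(0, 1) for _ in range(m)] for _ in range(n)]  # efforts in s
    y = [[random.uniform(0, 1) for _ in range(m)] for _ in range(n)]  # efforts in s'
    ax = [[a[i][j] * x[i][j] for j in range(m)] for i in range(n)]
    by = [[b[i][j] * y[i][j] for j in range(m)] for i in range(n)]
    # sum of unilateral-deviation utilities, each evaluated against s_{-i}
    dev = sum(
        share_utility(i, [by[k] if k == i else ax[k] for k in range(n)])
        for i in range(n)
    )
    # universal (1,1)-smoothness: dev >= SW(s') - SW(s)
    assert dev >= welfare(by) - welfare(ax) - 1e-9
```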
\end{document} |
\begin{document}
\title{Mixing Property of Quantum Relative Entropy}
\author{Fedor Herbut}
\affiliation {Serbian Academy of Sciences
and Arts, Knez Mihajlova 35, 11000
Belgrade, Serbia and Montenegro}
\email{[email protected]}
\date{\today}
\begin{abstract}
An analogue of the mixing property of
quantum entropy is derived for quantum
relative entropy. It is applied to the
final state of ideal measurement and to
the spectral form of the second density
operator. Three cases of states on a
directed straight line of relative
entropy are discussed.
\end{abstract}
\pacs{03.65.Ta 03.67.-a} \maketitle
Relative entropy plays a fundamental
role in quantum information theory (see
p. 15 in \cite{O-P} and the review
articles \cite{Vedral}, \cite{Schum},
which have relative entropy in the
title).
The {\it relative entropy}
$S(\rho||\sigma)$ of a state (density
operator) $\rho$ with respect to a state
$\sigma$ is by definition
$$S(\rho||\sigma)\equiv {\rm tr} [\rho log(\rho )]-{\rm tr}
[\rho log(\sigma)]\eqno{(1a)}$$
$$\mbox{if}\quad \mbox{supp}(\rho ) \subseteq
\mbox{supp}(\sigma );\eqno{(1b)}$$
$$\mbox{or else}\quad
S(\rho||\sigma)=+\infty \eqno{(1c)}$$
(see p. 16 in \cite{O-P}). By ``support''
is meant the subspace that is the
topological closure of the range.
If $\sigma$ is singular and condition
(1b) is valid, then the orthocomplement
of the support (i.e., the null space) of
$\rho$ contains the null space of
$\sigma$, and both operators reduce in
supp$(\sigma )$. Relation (1b) is valid
in this subspace. Both density operators
also reduce in the null space of
$\sigma$. Here the $log$ is not defined,
but it is multiplied by zero, and it is
generally understood that zero times an
undefined quantity is zero. We'll refer
to this as {\it the zero convention}.
The more familiar concept of (von
Neumann) quantum entropy, $S(\rho )\equiv
-{\rm tr} [\rho log(\rho )]$, also requires
the zero convention. If the state space
is infinite dimensional, then, in a
sense, entropy is almost always infinite
(cf p.241 in \cite{Wehrl}). In
finite-dimensional spaces, entropy is
always finite.
In contrast, relative entropy is often
infinite also in finite-dimensional
spaces (due to (1c)). Most results on
relative entropy with general validity
are {\it inequalities}, and the infinity
fits well in them. It is similar with
entropy. But there is one {\it equality
for entropy} that is much used, {\it the
mixing property} concerning {\it
orthogonal state decomposition} (cf p.
242 in \cite{Wehrl}):
$$\sigma =\sum_k w_k\sigma_k,\eqno{(2)}$$
where the $\sigma_k$ have mutually
orthogonal supports; $\forall k:\enskip w_k\geq 0$; for
$w_k>0$, $\sigma_k>0,\enskip {\rm tr}
\sigma_k=1$; $\sum_kw_k=1$. Then
$$S(\sigma )=H(w_k)+
\sum_kw_kS(\sigma_k),\eqno{(3a)}$$
$$H(w_k)\equiv
-\sum_k[w_klog(w_k)] \eqno{(3b)}$$ being
the Shannon entropy of the probability
distribution $\{w_k:\forall k\}$.
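The mixing property (3a) is easily verified numerically. The following sketch (block sizes, weights and the random states are arbitrary illustrative choices) builds $\sigma$ from components with mutually orthogonal supports and compares both sides of (3a):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_density(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T + 0.1 * np.eye(d)   # ridge keeps the spectrum off zero
    return rho / np.trace(rho).real

def entropy(rho):
    # S(rho) = -tr[rho log rho] via the eigenvalues, with the zero convention
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

w = np.array([0.5, 0.3, 0.2])                # the weights w_k
parts = [random_density(d) for d in (2, 3, 2)]

# orthogonal decomposition (2): sigma = sum_k w_k sigma_k on orthogonal blocks
n = sum(p.shape[0] for p in parts)
sigma = np.zeros((n, n), dtype=complex)
i = 0
for wk, sk in zip(w, parts):
    d = sk.shape[0]
    sigma[i:i+d, i:i+d] = wk * sk
    i += d

shannon = float(-np.sum(w * np.log(w)))      # H(w_k) of (3b)
mix = shannon + sum(wk * entropy(sk) for wk, sk in zip(w, parts))
assert abs(entropy(sigma) - mix) < 1e-8      # property (3a)
```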
The {\it first aim} of this article is to
derive an analogue of (3a), which will be
called {mixing property of relative
entropy}. The {\it second aim} is to
apply it to the derivation of two
properties of the final state in ideal
measurement, and to the spectral
decomposition of $\sigma$ in the general
case.
We will find it convenient to make use of
an {\it extension} $log^e$ of the
logarithmic function to the entire real
axis:
$$\mbox{if}\quad 0<x:\qquad
log^e(x)\equiv log(x),\eqno{(4a)}$$
$$\mbox{if}\quad
x\leq 0:\qquad log^e(x)\equiv 0.\quad
\eqno{(4b)}$$
The following elementary property of the
extended logarithm will be utilized.
Lemma 1: {\it If an orthogonal state
decomposition (2) is given, then
$$log^e(\sigma )
=\sum'_k [log(w_k)]Q_k+\sum'_k log^e
(\sigma_k),\eqno{(5)}$$ where $Q_k$ is
the projector onto the support of
$\sigma_k$, and the prime on the sum means
that the terms corresponding to $w_k=0$
are omitted.}
Proof: Spectral forms $\forall k, \enskip
w_k>0:\enskip
\sigma_k=\sum_{l_k}s_{l_k}\ket{l_k}
\bra{l_k}\quad$ (all $s_{l_k}$ positive)
give a spectral form $\sigma =
\sum_k\sum_{l_k}w_ks_{l_k}\ket{l_k}\bra{l_k}$
of $\sigma$ on account of the
orthogonality assumed in (2) and the zero
convention. Since numerical functions
define the corresponding operator
functions via spectral forms, one obtains
further
$$log^e(\sigma
)\equiv
\sum_k\sum_{l_k}[log^e(w_ks_{l_k})]\ket{l_k}
\bra{l_k}=$$
$$\sum_k'\sum_{l_k}[log(w_k)+log(s_{l_k})]
\ket{l_k} \bra{l_k}=$$
$$\sum_k'[log(w_k)]Q_k+\sum_k'
\sum_{l_k}[log(s_{l_k})]\ket{l_k}
\bra{l_k}.$$ (In the last step
$Q_k=\sum_{l_k}\ket{l_k}\bra{l_k}$ for
$w_k>0$ was made use of.) The same is
obtained from the RHS of (5) when the
spectral forms of $\sigma_k$ are
substituted in it.
$\Box$
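Lemma 1 can likewise be checked numerically. In the sketch below (illustrative; all $w_k$ are taken positive, so the primed sums coincide with the full sums) $log^e$ is implemented spectrally, taking $log$ on positive eigenvalues and $0$ elsewhere, and the two sides of (5) are compared:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_density(d):
    A = rng.normal(size=(d, d))
    rho = A @ A.T + 0.1 * np.eye(d)    # ridge keeps the spectrum off zero
    return rho / np.trace(rho)

def log_e(M):
    # extended logarithm (4a)-(4b): log on positive eigenvalues, 0 elsewhere
    ev, U = np.linalg.eigh(M)
    lv = np.where(ev > 1e-12, np.log(np.maximum(ev, 1e-300)), 0.0)
    return U @ np.diag(lv) @ U.T

dims, w = (2, 3), (0.4, 0.6)
n = sum(dims)
sigma = np.zeros((n, n))
rhs = np.zeros((n, n))
i = 0
for d, wk in zip(dims, w):
    sk = random_density(d)
    sigma[i:i+d, i:i+d] = wk * sk              # orthogonal decomposition (2)
    Qk = np.zeros((n, n)); Qk[i:i+d, i:i+d] = np.eye(d)
    sk_full = np.zeros((n, n)); sk_full[i:i+d, i:i+d] = sk
    rhs += np.log(wk) * Qk + log_e(sk_full)    # the two primed sums of (5)
    i += d

assert np.allclose(log_e(sigma), rhs, atol=1e-8)   # identity (5)
```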
Now we come to the main result.
Theorem 1: {\it Let condition (1b) be
valid for the states $\rho$ and $\sigma$,
and let an orthogonal state decomposition
(2) be given. Then
$$S(\rho||\sigma)=S\Big(\sum_kQ_k\rho
Q_k\Big)-S(\rho )+$$
$$H(p_k||w_k)+\sum_kp_k S(Q_k\rho
Q_k/p_k||\sigma_k),\eqno{(6)}$$ where,
for $w_k>0$, $Q_k$ projects onto the
support of $\sigma_k$, and $Q_k\equiv 0$
if $w_k=0$, $p_k\equiv {\rm tr} (\rho Q_k)$,
and
$$H(p_k||w_k)\equiv
\sum_k[p_klog(p_k)]-\sum_k[p_klog(w_k)]
\eqno{(7)}$$ is the classical discrete
counterpart of the quantum relative
entropy, valid because $(p_k>0)\enskip
\Rightarrow (w_k>0)$.}
One should note that the claimed validity
of the classical analogue of (1b) is due
to the definitions of $p_k$ and $Q_k$.
Besides, (2) implies that $(\sum_kQ_k)$
projects onto supp$(\sigma )$. Further,
as a consequence of (1b),
$(\sum_kQ_k)\rho =\rho$. Hence, ${\rm tr}
\Big(\sum_kQ_k\rho Q_k\Big)=1$.
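Before turning to the proof, identity (6) can be verified numerically. The sketch below (illustrative; $\rho$ is taken with full support so that (1b) holds automatically and all $p_k>0$) assembles both sides of (6) for a random $\rho$ and a block-diagonal $\sigma$:

```python
import numpy as np

rng = np.random.default_rng(2)

def rand_density(d):
    A = rng.normal(size=(d, d))
    r = A @ A.T + 1e-3 * np.eye(d)     # full support
    return r / np.trace(r)

def logm(M):
    ev, U = np.linalg.eigh(M)
    return U @ np.diag(np.log(ev)) @ U.T

def S(rho):                            # von Neumann entropy
    ev = np.linalg.eigvalsh(rho)
    return float(-np.sum(ev * np.log(ev)))

def Srel(rho, sigma):                  # relative entropy (1a)
    return float(np.trace(rho @ (logm(rho) - logm(sigma))))

dims, w = (2, 2, 3), (0.2, 0.5, 0.3)
n = sum(dims)
rho = rand_density(n)

sigma = np.zeros((n, n))
blocks, i = [], 0
for d, wk in zip(dims, w):
    sigma[i:i+d, i:i+d] = wk * rand_density(d)   # sigma = sum_k w_k sigma_k
    blocks.append((i, d, wk))
    i += d

pinched = np.zeros((n, n))             # sum_k Q_k rho Q_k
rhs, ps = 0.0, []
for (i, d, wk) in blocks:
    Qrho = rho[i:i+d, i:i+d]
    p = float(np.trace(Qrho))          # p_k = tr(rho Q_k)
    ps.append(p)
    pinched[i:i+d, i:i+d] = Qrho
    rhs += p * Srel(Qrho / p, sigma[i:i+d, i:i+d] / wk)   # p_k S(rho_k||sigma_k)
rhs += S(pinched) - S(rho)
rhs += sum(p * np.log(p / wk) for p, wk in zip(ps, w))    # H(p_k||w_k)
assert abs(Srel(rho, sigma) - rhs) < 1e-8                 # identity (6)
```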
Proof of theorem 1: We define $$\forall
k,\enskip p_k>0:\quad \rho_k\equiv
Q_k\rho Q_k/p_k.\eqno{(8)}$$ First we
prove that (1b) implies $$\forall
k,\enskip p_k>0:\quad
\mbox{supp}(\rho_k)\subseteq \mbox{supp}
(\sigma_k).\eqno{(9)}$$
Let $k$, $p_k>0$, be an arbitrary fixed
value. We take a pure-state decomposition
$$\rho
=\sum_n\lambda_n\ket{\psi_n}\bra{\psi_n}
\eqno{(10a)},$$ $\forall n:\enskip
\lambda_n>0$. Applying $Q_k...Q_k$ to
(10a), one obtains another pure-state
decomposition
$$Q_k\rho Q_k=p_k\rho_k
=\sum_n\lambda_nQ_k\ket{\psi_n}\bra{\psi_n}
Q_k\eqno{(10b)}$$ (cf (8)). Let
$Q_k\ket{\psi_n}$ be a nonzero vector
appearing in (10b). Since (10a) implies
that $\ket{\psi_n}\in \mbox{supp}(\rho )$
(cf Appendix (ii)), condition (1b)
further implies $\ket{\psi_n}\in
\mbox{supp}(\sigma )$. Let us write down
a pure-state decomposition
$$\sigma =\sum_m
\lambda'_m\ket{\phi_m}\bra{\phi_m}
\eqno{(11a)}$$ with $\ket{\phi_1}\equiv
\ket{\psi_n}$. (This can be done with
$\lambda'_1>0$, cf \cite{Hadji}.) Then,
applying $Q_k...Q_k$ to (11a) and taking
into account (2), we obtain the
pure-state decomposition
$$Q_k\sigma Q_k=w_k\sigma_k=\sum_m
\lambda'_mQ_k\ket{\phi_m}\bra{\phi_m}
Q_k. \eqno{(11b)}$$ (Note that $w_k>0$
because $p_k>0$ by assumption.) Thus,
$Q_k\ket{\psi_n}=Q_k\ket{\phi_1}\in
\mbox{supp}(\sigma_k)$. This is valid for
any nonzero vector appearing in (10b),
and these span supp$(\rho_k)$ (cf
Appendix (ii)). Therefore, (9) is valid.
On account of (1b), the standard
logarithm can be replaced by the extended
one in definition (1a) of relative
entropy: $$ S(\rho ||\sigma
)=-S(\rho)-{\rm tr} [\rho log^e(\sigma )].$$
Substituting (2) on the RHS, and
utilizing (5), the relative entropy
$S(\rho ||\sigma )$ becomes
$$-S(\rho )-{\rm tr} \Big\{\rho
\Big[\sum_k'[log(w_k)]Q_k+\sum_k'[
log^e(\sigma_k)]\Big]\Big\}=$$
$$-S(\rho )-\sum_k'[p_klog(w_k)]-\sum_k'{\rm tr}
[\rho log^e(\sigma_k)].$$ Adding and
subtracting $H(p_k)$ (cf (3b)), replacing
$log^e(\sigma_k)$ by
$Q_k[log^e(\sigma_k)]Q_k$, and taking
into account (7) and (8), one further
obtains
$$S(\rho ||\sigma
)=-S(\rho )+H(p_k)+H(p_k||w_k)+$$
$$-\sum_k'p_k{\rm tr} [\rho_klog^e(\sigma_k)].$$
(The zero convention applies to the
last term because for $p_k=0$ the density
operator $Q_k\rho Q_k/p_k$ is not defined.
Note that replacing $\sum_k$ by $\sum_k'$
in (7) does not change the LHS because
only $p_k=0$ terms are omitted.)
Adding and subtracting the entropies
$S(\rho_k)$ in the sum, one further has
$$S(\rho ||\sigma
)=-S(\rho )+H(p_k)+H(p_k||w_k)+$$
$$\sum_k'p_kS(\rho_k)+\sum_k'p_k\{-S(\rho_k) -{\rm tr}
[\rho_klog^e(\sigma_k)]\}.$$ Utilizing
the mixing property of entropy (3a), one
can put $S\Big(\sum_kp_k\rho_k\Big)$
instead of
$[H(p_k)+\sum_k'p_kS(\rho_k)]$. Owing to
(9), we can replace $log^e$ by the
standard logarithm and thus obtain
the RHS of (6).
$\Box$\\
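Identity (6) can be checked numerically on a small example. The sketch below is an illustrative addition, not part of the original argument; the dimension, the weights and the random states are arbitrary choices, and the eigendecomposition-based helpers are conveniences of the sketch rather than notation from the text. It builds an orthogonal decomposition (2) of $\sigma$ on $\mathbb{C}^2\oplus\mathbb{C}^2$ and compares both sides of (6):

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_state(n):
    # illustrative: a random full-rank density matrix
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    r = a @ a.conj().T + 1e-3 * np.eye(n)
    return r / np.trace(r).real

def logm_h(m):
    # matrix logarithm of a positive-definite Hermitian matrix
    w, v = np.linalg.eigh(m)
    return (v * np.log(w)) @ v.conj().T

def S(m):
    # von Neumann entropy (natural log, matching the paper's "log")
    w = np.linalg.eigvalsh(m)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

def rel_S(r, s):
    # S(r||s) as in (1a); both arguments are full rank here
    return float(np.real(np.trace(r @ (logm_h(r) - logm_h(s)))))

# orthogonal decomposition (2): sigma = w_1 sigma_1 (+) w_2 sigma_2
w = np.array([0.3, 0.7])
sig1, sig2 = rand_state(2), rand_state(2)
sigma = np.zeros((4, 4), dtype=complex)
sigma[:2, :2], sigma[2:, 2:] = w[0] * sig1, w[1] * sig2

rho = rand_state(4)

# p_k = tr(rho Q_k), with Q_k the projector onto the k-th block
p = np.array([np.trace(rho[:2, :2]).real, np.trace(rho[2:, 2:]).real])

# the pinched state sum_k Q_k rho Q_k
pinched = np.zeros_like(rho)
pinched[:2, :2], pinched[2:, 2:] = rho[:2, :2], rho[2:, 2:]

lhs = rel_S(rho, sigma)
H_pw = float(np.sum(p * np.log(p / w)))  # classical term (7)
rhs = (S(pinched) - S(rho) + H_pw
       + p[0] * rel_S(rho[:2, :2] / p[0], sig1)
       + p[1] * rel_S(rho[2:, 2:] / p[1], sig2))

assert abs(lhs - rhs) < 1e-8
```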
{\it Some Applications of the Mixing
Property} - Let $\rho$ be a state and
$A=\sum_ia_iP_i+\sum_ja_jP_j$ a spectral
form of a discrete observable (Hermitian
operator) $A$, where the eigenvalues
$a_i$ and $a_j$ are all distinct. The
index $i$ enumerates all the detectable
eigenvalues, i. e., $\forall i:\enskip
{\rm tr} (\rho P_i)>0$, and ${\rm tr} [\rho
(\sum_iP_i)]=1$.
After an {\it ideal measurement} of $A$
in $\rho$, the entire ensemble is
described by the {\it L\"{u}ders state}:
$$\rho_L(A)\equiv \sum_iP_i\rho
P_i\eqno{(12)}$$ (cf \cite{Lud}). (One
can take more general observables that
are ideally measurable in $\rho$, cf
\cite{Roleof}. For simplicity we confine
ourselves to discrete ones.)
Corollary 1: {\it The relative-entropic
``distance'' from any quantum state to its
L\"{u}ders state is the difference
between the corresponding quantum
entropies:} $$S\Big(\rho ||\sum_iP_i\rho
P_i\Big)=S\Big(\sum_iP_i\rho
P_i\Big)-S(\rho ).\eqno{(13)}$$
Proof: First we must prove that
$$\mbox{supp}(\rho )\subseteq
\mbox{supp}\Big(\sum_iP_i\rho
P_i\Big).\eqno{(14)}$$ To this purpose,
we write down a decomposition (10a) of
$\rho$ into pure states. One has
$\mbox{supp}(\sum_iP_i)\supseteq
\mbox{supp}(\rho )$ (equivalent to the
certainty of $(\sum_iP_i)$ in $\rho$, cf
\cite{Roleof}), and the decomposition
(10a) implies that each $\ket{\psi_n}$
belongs to $\mbox{supp}(\rho )$. Hence,
$\ket{\psi_n}\in \mbox{supp}(\sum_iP_i)$;
equivalently,
$\ket{\psi_n}=(\sum_iP_i)\ket{\psi_n}$.
Therefore, one can write
$$\forall n:\quad \ket{\psi_n}=\sum_i(P_i
\ket{\psi_n}).\eqno{(15a)}$$ Further,
(10a) implies
$$\sum_iP_i\rho
P_i=\sum_i\sum_n\lambda_nP_i\ket{\psi_n}
\bra{\psi_n}P_i.\eqno{(15b)}$$ As seen
from (15b), all vectors
$(P_i\ket{\psi_n})$ belong to
supp$(\sum_iP_i\rho P_i)$. Hence, so do
all $\ket{\psi_n}$ (due to (15a)). Since
$\rho$ is the mixture (10a) of the
$\ket{\psi_n}$, the latter span
$\mbox{supp}(\rho )$. Thus, finally, also
(14) follows.
In our case $\sigma \equiv \sum_iP_i\rho
P_i$ in (6). We replace $k$ by $i$. Next,
we establish
$$\forall i:\quad Q_i\rho Q_i=P_i\rho
P_i.\eqno{(16)}$$ Since $Q_i$ is, by
definition, the support projector of
$(P_i\rho P_i)$, and $P_i(P_i\rho
P_i)=(P_i\rho P_i)$, one has $P_iQ_i=Q_i$
(see Appendix (i)). One can write
$P_i\rho P_i=Q_i( P_i\rho P_i)Q_i$, from
which then (16) follows.
Realizing that $w_i\equiv {\rm tr} (Q_i\rho
Q_i)={\rm tr} (P_i\rho P_i)\equiv p_i$ due to
(16), one obtains $H(p_i||w_i)=0$ and
$$\forall i:\quad S(Q_i\rho Q_i/p_i ||P_i\rho
P_i/w_i)=0$$ in (6) for the case at
issue. This completes the proof.
$\Box$
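Equality (13) is easy to confirm numerically. The following sketch is illustrative only; the projector family and the random state are arbitrary choices of the sketch. It checks that the relative entropy from a random $\rho$ to its L\"{u}ders state equals the entropy difference:

```python
import numpy as np

rng = np.random.default_rng(1)

# illustrative random full-rank density matrix
a = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = a @ a.conj().T + 1e-3 * np.eye(4)
rho /= np.trace(rho).real

def logm_h(m):
    # matrix logarithm of a positive-definite Hermitian matrix
    w, v = np.linalg.eigh(m)
    return (v * np.log(w)) @ v.conj().T

def S(m):
    w = np.linalg.eigvalsh(m)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

# complete orthogonal projector family P_1, P_2 (sum is the identity)
P1 = np.diag([1.0, 1.0, 0.0, 0.0]).astype(complex)
P2 = np.eye(4) - P1

rho_L = P1 @ rho @ P1 + P2 @ rho @ P2  # Lueders state (12)

lhs = float(np.real(np.trace(rho @ (logm_h(rho) - logm_h(rho_L)))))
rhs = S(rho_L) - S(rho)

assert abs(lhs - rhs) < 1e-8
```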
Now we turn to a peculiar further
implication of corollary 1.
Let $B=\sum_k\sum_{l_k}b_{kl_k}P_{kl_k}$
be a spectral form of a discrete
observable (Hermitian operator) $B$ such
that all eigenvalues $b_{kl_k}$ are
distinct. Besides, let $B$ be more
complete than $A$ or, synonymously, a
refinement of the latter. This, by
definition, means that
$$\forall k:\quad
P_k=\sum_{l_k}P_{kl_k}\eqno{(17)}$$ is
valid. Here $k$ enumerates both the $i$
and the $j$ index values in the spectral
form of $A$.
Let $\rho_L(A)$ and $\rho_L(B)$ be the
L\"{u}ders states (12) of $\rho$ with
respect to $A$ and $B$ respectively.
Corollary 2: {\it The states $\rho$,
$\rho_L(A)$, and $\rho_L(B)$ lie on a
straight line with respect to relative
entropy, i. e. $$S\Big(\rho ||
\rho_L(B)\Big)=S\Big(\rho
||\rho_L(A)\Big)+S\Big(\rho_L(A)||
\rho_L(B)\Big),\eqno{(18a)}$$ or
explicitly:} $$S\Big(\rho
||\sum_i\sum_{l_i}(P_{il_i}\rho
P_{il_i})\Big)=S\Big(\rho
||\sum_i(P_i\rho P_i)\Big)+$$
$$ S\Big(\sum_i(P_i\rho P_i)||
\sum_i\sum_{l_i}(P_{il_i} \rho
P_{il_i})\Big).\eqno{(18b)}$$
Note that all eigenvalues $b_{kl_k}$ of
$B$ with indices other than $il_i$ are
undetectable in $\rho$.
The proof follows immediately from corollary
1 because
$$S\Big(\rho ||\rho_L(B)\Big)
=\Big[S\Big(\rho_L(B)\Big)-
S\Big(\rho_L(A)\Big)\Big]+$$
$$\Big[S\Big(\rho_L(A)\Big)-S(\rho
)\Big],$$ and, as easily seen from (12),
$\rho_L(B)= \Big(\rho_L(A)\Big)_L(B)$ due
to $P_{il_i}P_{i'}=\delta_{i,i'}P_{il_i}$
(cf (17)).
$\Box$
Next, we derive another consequence of
theorem 1.
Corollary 3: {\it Let $\{p_k:\forall k\}$
and $\{w_k:\forall k\}$ be probability
distributions such that $p_k>0\enskip
\Rightarrow \enskip w_k>0$. Then,
$$H(p_k||w_k)=S\Big(\sum_kp_k\ket{k}\bra{k}
||\sum_kw_k\ket{k}\bra{k}\Big),\eqno{(19)}$$
where the LHS is given by (7), and the
orthonormal set of vectors
$\{\ket{k}:\forall k\}$ is arbitrary.}
Proof: Applying (6) to the RHS of (19),
one obtains
$$RHS(19)=S\Big(\sum_kp_k\ket{k}\bra{k}\Big)
-S\Big(\sum_kp_k\ket{k}\bra{k}\Big)+$$
$$H(p_k||w_k)+\sum_kp_kS(\ket{k}\bra{k}||
\ket{k}\bra{k})=LHS(19).$$
$\Box$
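Equality (19) reduces, for commuting diagonal states, to a one-line numerical check; the probability vectors below are arbitrary illustrative choices:

```python
import numpy as np

p = np.array([0.2, 0.5, 0.3])
w = np.array([0.1, 0.6, 0.3])

# classical relative entropy (7)
H_pw = float(np.sum(p * np.log(p / w)))

# quantum relative entropy of the commuting states in (19),
# written in the common eigenbasis {|k>}
rho, sigma = np.diag(p), np.diag(w)
S_rel = float(np.trace(rho @ (np.diag(np.log(p)) - np.diag(np.log(w)))))

assert abs(H_pw - S_rel) < 1e-12
```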
Finally, a quite different general result
also follows from the mixing property
(6).
Theorem 2: {\it Let $S(\rho ||\sigma )$
be the relative entropy of any two states
such that (1b) is satisfied. Let,
further,
$$\sigma
=\sum_kw_k\ket{k}\bra{k}\eqno{(20)}$$ be
a spectral form of $\sigma$ in terms of
eigenvectors. Then $$S(\rho ||\sigma )=
S\Big(\rho ||\sum_k(\ket{k}\bra{k}\rho
\ket{k}\bra{k})\Big)+$$
$$S\Big(\sum_k(\ket{k}\bra{k} \rho
\ket{k}\bra{k})||\sigma
\Big).\eqno{(21)}$$ Thus, the states
$\rho$, $\sum_k(\ket{k}\bra{k} \rho
\ket{k}\bra{k})$ (cf (20) for
$\ket{k}\bra{k}$), and $\sigma$ lie on a
directed straight line of relative
entropy.}
Proof: Application of (6) to the LHS(21),
in view of (20), leads to
$$S(\rho ||\sigma )=S\Big(\sum_k(\ket{k}\bra{k}
\rho \ket{k}\bra{k})\Big)-S(\rho )+$$
$$H(p_k||w_k)+\sum_kp_kS(\ket{k}\bra{k}
||\ket{k}\bra{k}).$$ In view of $\enskip
p_k= \bra{k}\rho \ket{k}$, (13), (19),
and (20), this equals RHS(21).
$\Box$
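The straight-line equality (21) can likewise be verified numerically. In the sketch below (illustrative assumptions of ours: a $3\times 3$ random state and a diagonal $\sigma$ with distinct eigenvalues) the pinching in the eigenbasis of $\sigma$ is just the diagonal part of $\rho$:

```python
import numpy as np

rng = np.random.default_rng(2)

# illustrative random full-rank density matrix
a = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = a @ a.conj().T + 1e-3 * np.eye(3)
rho /= np.trace(rho).real

def logm_h(m):
    # matrix logarithm of a positive-definite Hermitian matrix
    w, v = np.linalg.eigh(m)
    return (v * np.log(w)) @ v.conj().T

def rel_S(r, s):
    # S(r||s); both arguments full rank here
    return float(np.real(np.trace(r @ (logm_h(r) - logm_h(s)))))

# sigma in spectral form (20), with |k><k| the computational basis projectors
sigma = np.diag([0.5, 0.3, 0.2]).astype(complex)

# pinching of rho in the eigenbasis of sigma: keep only the diagonal
pinched = np.diag(np.diag(rho).real).astype(complex)

lhs = rel_S(rho, sigma)
rhs = rel_S(rho, pinched) + rel_S(pinched, sigma)

assert abs(lhs - rhs) < 1e-8
```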
It is well known that the
relative-entropic ``distance'', unlike the
Hilbert-Schmidt (HS) one, fails to
satisfy the triangle rule, which requires
that the distance between two states must
not exceed the sum of distances if a
third state is interpolated. But, and
this is part of the triangle rule, one
has equality if and only if the
interpolated state lies on a straight
line with the two states. As is seen
from corollary 2 and theorem 2 as
examples, the relative-entropic
``distance'' does satisfy the equality part
of the triangle rule.
An interpolated state lies on the HS line
between two states if and only if it is a
convex combination of the latter.
Evidently, this is not true in the case
of relative entropy.
Partovi \cite{Partovi} has recently
considered three states on a directed
relative-entropic line: a general
multipartite state $\rho_1 \equiv
\rho_{AB\dots N}$, a suitable separable
multipartite state $\rho_2\equiv
\rho_{AB\dots N}^S$ with the same
reductions $\rho_A, \rho_B, \dots
,\rho_N$, and finally $\rho_3\equiv
\rho_A\otimes \rho_B\otimes \dots \otimes
\rho_N$. The mutual information in
$\rho_1$ is taken to be its total
correlations information. It is well
known that it can be written as the
relative entropy of $\rho_1$ relative to
$\rho_3$. The straight line implies:
$$S(\rho_1||\rho_2)=
S(\rho_1||\rho_3)-S(\rho_2||\rho_3).$$ To
my understanding, it is Partovi's idea
that if $\rho_2$ is as close to $\rho_1$
as possible (but being on the straight
line and having the same reductions),
then its von Neumann mutual information
$S(\rho_2||\rho_3)$ equals the classical
information in $\rho_1$, and
$S(\rho_1||\rho_2)$ is the amount of
entanglement or quantum correlation
information in $\rho_1$.
Partovi's approach utilizes the
relative-entropy ``distance'' in the only
way in which it is a distance: on a
straight line. One wonders why the
relative-entropy ``distance'' should be
relevant outside a straight line, where
it is no distance at all, cf \cite{V-P},
\cite{Plenio}. On the other hand, these
approaches have the very desirable
property of being entanglement monotones.
But so are many others (see {\it ibid.}).
To sum up, we have derived the mixing
property of relative entropy $S(\rho
||\sigma )$ for the case when (1b) is
valid (theorem 1), and two more general
equalities of relative entropies
(corollary 3 and theorem 2), which follow
from it. Besides, two properties of
L\"{u}ders states (12) have been obtained
(corollary 1 and corollary 2). The mixing
property is applicable to any orthogonal
state decomposition (2) of $\sigma$.
Hence, one can expect a versatility of
its applications in quantum information
theory.\\
{\it Appendix} - Let $\rho
=\sum_n\lambda_n\ket{n}\bra{n}$ be an
arbitrary decomposition of a density
operator into ray projectors, and let $E$
be any projector. Then $$E\rho =\rho
\quad \Leftrightarrow \quad \forall
n:\enskip E\ket{n}=\ket{n}\eqno{(A.1)}$$
(cf Lemmas A.1 and A.2 in
\cite{FHJP94}).
(i) If the above decomposition is an
eigendecomposition with positive weights,
then $\sum_n\ket{n}\bra{n}=Q$, $Q$ being
now the support projector of $\rho$, and,
on account of (A.1),
$$E\rho =\rho \quad \Rightarrow \quad
EQ=Q.\eqno{(A.2)}$$
(ii) Since one can always write $Q\rho
=\rho$, (A.1) implies that all $\ket{n}$
in the arbitrary decomposition belong to
supp$(\rho )$. Further, defining a
projector $F$ so that supp$(F)\equiv$
span$(\{\ket{n}:\forall n\})$, one has
$FQ=F$. Equivalence (A.1) implies $F\rho
=\rho$. Hence, (A.2) gives $QF=Q$.
Altogether, $F=Q$, i. e., the unit
vectors $\{\ket{n}:\forall n\}$
span supp$(\rho)$.\\
\end{document}
\begin{document}
\title[No sublinear diffusion for a class of torus homeomorphisms]{Inexistence of sublinear diffusion for a class of torus homeomorphisms}
\author{Guilherme Silva Salomão}\thanks{G. S. S. was supported by Fapesp, F. T. was partially supported by the Alexander von Humboldt foundation and by Fapesp and CNPq.}
\author{Fabio Armando Tal}
\address{Instituto de Matemática e Estatística, Rua do Mat\~ao 1010, Cidade Universitária, São Paulo, SP, Brazil, 05508-090}
\email{[email protected],\, [email protected]}
\begin{abstract}
We prove that, if $f$ is a homeomorphism of the two-torus isotopic to the identity whose rotation set is a non-degenerate segment and which has a periodic point, then $f$ has uniformly bounded deviations in the direction perpendicular to the segment.
\end{abstract}
\maketitle
\section{Introduction}
The study of surface dynamics from a topological viewpoint has been gathering increasing attention in the last decade, in large part because of the development of new tools and techniques that have proven effective in tackling previously hopeless problems. A great deal of these developments has been tied to recent improvements in both Brouwer theory and rotation theory. In particular, the search for a greater understanding of the dynamics of torus homeomorphisms in the isotopy class of the identity has been one of the motivating forces behind these developments, due to its connection to relevant physical dynamics such as Hamiltonian homeomorphisms in general, or to specific models such as the Kicked-Harper model and the Zaslavsky-Web maps.
Rotation theory for homeomorphisms of the $2$-torus $\mathbb{T}^2=\ensuremath{\mathbb{R}}^2/\ensuremath{\mathbb{Z}}^2$ appeared in the early 90s as an extension of the ideas of the Poincaré rotation number for homeomorphisms of the circle. Given $f:\mathbb{T}^2\to\mathbb{T}^2$ a torus homeomorphism isotopic to the identity, and $\widetilde{f}:\ensuremath{\mathbb{R}}^2\to\ensuremath{\mathbb{R}}^2$ a lift of $f$ to the universal covering of $\mathbb{T}^2$ one can define, following \cite{MisiurewiczZiemian1989RotationSets}, the {\it{rotation set}} $\rho(\widetilde{f})$ of $\widetilde{f}$ as:
$$\rho(\widetilde{f}):=\{v \mid \exists n_k\to \infty, \widetilde{z}_k\in \ensuremath{\mathbb{R}}^2, \lim_{k\to\infty}\frac{\widetilde{f}^{n_k}(\widetilde{z}_k)-\widetilde{z}_k}{n_k}=v\},$$
which is always compact and convex.
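As a concrete illustration (not taken from the paper; the skew product below and all of its constants are hypothetical choices of ours), the lift $\widetilde f(x,y)=(x+a+\epsilon\sin(2\pi y),\,y+b)$ with $b$ irrational has rotation set equal to the singleton $\{(a,b)\}$, since the $\sin$ term averages out along the equidistributed orbit of the rotation $y\mapsto y+b$; a finite-orbit estimate converges to this vector:

```python
import math

# hypothetical example: a lift of a torus skew product
# (it commutes with integer translations, as a lift must)
a, b, eps = 0.25, (math.sqrt(5) - 1) / 2, 0.3  # b irrational

def lift(x, y):
    return x + a + eps * math.sin(2 * math.pi * y), y + b

x0, y0, n = 0.1, 0.7, 10000
x, y = x0, y0
for _ in range(n):
    x, y = lift(x, y)

# finite-time rotation vector estimate (f~^n(z) - z)/n
rot = ((x - x0) / n, (y - y0) / n)
assert abs(rot[0] - a) < 1e-3 and abs(rot[1] - b) < 1e-3
```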
This notion, which is invariant by change of coordinates in the isotopy class of the identity, has been shown to be of great utility in describing the dynamics of these maps. For instance, it is in some cases possible to deduce, just by the analysis of the rotation set, that the dynamics has periodic points of arbitrarily large period \cite{franks:1989}, that it has positive entropy \cite{llibre/mackay:1991}, or even that it has a well defined chaotic region \cite{KoropeckiTal2012StrictlyToral}.
One relevant feature of rotation sets is that, as per the definition, they describe only linear rates of displacement in the lift. But a question that has appeared in several contexts is to determine whether sublinear displacements can exist that are not captured by it. For, while it follows directly from the definitions that there exist $M_0, N_0>0$ such that for all $\widetilde{z}$ in $\ensuremath{\mathbb{R}}^2$ and all $n>N_0$, $d\left( (\widetilde{f}^n(\widetilde{z})-\widetilde{z})/n, \rho(\widetilde{f})\right) <M_0$, one is left to wonder if it is also possible to obtain a better estimate. Specifically, does it also hold that there exists some $M_1>0$ such that for all $\widetilde{z}$ in $\ensuremath{\mathbb{R}}^2$ and all $n$, $d\left( \widetilde{f}^n(\widetilde{z})-\widetilde{z}, n \rho(\widetilde{f}) \right) <M_1$? If the latter occurs, we say that $f$ has uniformly bounded deviation from its rotation set. A similar concept is that of bounded deviations in a direction $w$. Let $w\in\ensuremath{\mathbb{R}}^2_*$, and define the projection in the $w$ direction as $P_w:\ensuremath{\mathbb{R}}^2\to\ensuremath{\mathbb{R}},\, P_w(x)=\langle x, w/\Vert w \Vert\rangle$. One says that $f$ has {\it{bounded $w$-deviations}} if there exists $M_1>0$ such that for all $\widetilde{z}$ in $\ensuremath{\mathbb{R}}^2$ and all $n$, $d\left( P_w(\widetilde{f}^n(\widetilde{z})-\widetilde{z}), P_w(n \rho(\widetilde{f})) \right) <M_1$.
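For a hypothetical skew product $\widetilde f(x,y)=(x+a+\epsilon\sin(2\pi y),\,y+b)$ with $b$ irrational (an illustration of ours, not an example from the paper; its rotation set is the singleton $\{(a,b)\}$), the deviation of an orbit from the linear drift $n(a,b)$ is $\epsilon$ times a trigonometric partial sum, which stays uniformly bounded; so this map has uniformly bounded deviation from its rotation set:

```python
import math

a, b, eps = 0.25, (math.sqrt(5) - 1) / 2, 0.3  # hypothetical constants

x, y = 0.1, 0.7
x0, y0 = x, y
max_dev = 0.0
for n in range(1, 2001):
    # one step of the lift (x, y) -> (x + a + eps*sin(2*pi*y), y + b)
    x, y = x + a + eps * math.sin(2 * math.pi * y), y + b
    # distance from the orbit to the linear drift n*(a, b)
    dev = math.hypot(x - x0 - n * a, y - y0 - n * b)
    max_dev = max(max_dev, dev)

# the partial sums of sin(2*pi*(y0 + k*b)) are bounded by 1/|sin(pi*b)|,
# so the deviation never exceeds roughly eps/|sin(pi*b)| ~ 0.33
assert max_dev < 1.0
```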
Analysis of bounded deviations for homeomorphisms of $\mathbb{T}^2$ is a topic increasingly present in the literature. This is due both to its intrinsic interest and to its use as a fundamental tool in solving some traditional problems in the field. For instance, it has appeared in the proof of Boyland's conjecture in the torus and in the closed annulus \cite{AddasZanata2015BoundedMeanMotionDiffeos, LeCalvezTal2015ForcingTheory, ConejerosTal2}, in determining the existence of Aubry-Mather sets for homeomorphisms of the closed annulus \cite{ConejerosTal2}, in the study of the existence of irrational rotation factors for the dynamics \cite{Jaeger2009Linearisation, JaegerTal2016IrrationalRotationFactors, JaegerPasseggi2015SemiconjugateToIrrational, kocsard2017rotational} and in the attempts to solve the remaining case of the Franks-Misiurewicz Conjecture \cite{KoropeckiPasseggiSambarino2016FMC, passeggi2018deviations, kocsard2016dynamics}. While it is known that bounded deviations do not necessarily hold in all situations, in particular when the rotation set is a singleton (see \cite{KoropeckiTal2012Irrotational, kocsard/koropecki:2007}), there are several cases where they can be established, for instance, if $\rho(\widetilde{f})$ has nonempty interior (\cite{AddasZanata2015BoundedMeanMotionDiffeos, LeCalvezTal2015ForcingTheory}). Furthermore, it is also known (see \cite{Davalos2013SublinearDiffusion, GuelmanKoropeckiTal2012Annularity}) that if the rotation set is a line segment with two different points with rational coordinates, then $f$ has bounded deviations in the direction perpendicular to the segment.
In all these studies, one case which remained unsolved in either direction is to determine, whenever the rotation set is a non-degenerate line segment with at most one rational point, whether bounded deviations in a given direction still need to hold. Part of the problem here is the relative lack of examples of these situations. For instance, it was not known until the recent work by Avila if there existed a homeomorphism whose rotation set was a non-degenerate line segment without rational points. On the other hand, while the existence of examples of rotation sets which are non-degenerate line segments with a single rational point is a folklore result, these examples always had the rational point as an extremity of the line segment. Indeed, in \cite{ LeCalvezTal2015ForcingTheory} it was shown that this must be the case, that is, that there is no homeomorphism of $\mathbb{T}^2$ whose rotation set is a non-degenerate line segment with a single rational point in its relative interior.
In this paper we deal exactly with this latter situation. Let us denote by $(x)_1$ and $(x)_2$ the canonical first and second coordinates, respectively, of a point $x\in\ensuremath{\mathbb{R}}^2$, and given $\rho_0=((\rho_0)_1,(\rho_0)_2)$, let us denote $\rho_0^\perp=(-(\rho_0)_2,(\rho_0)_1)$ and, if $(\rho_0)_1\not=0$, denote $\tan(\rho_0)=(\rho_0)_2/(\rho_0)_1$. Our main result is that bounded deviation in the perpendicular direction must exist:
\begin{teoa}\label{teoremaA}
Let $f:\mathbb{T}^2\to\mathbb{T}^2$ be a torus homeomorphism isotopic to the identity, $\tilde{f}:\ensuremath{\mathbb{R}}^2\to\ensuremath{\mathbb{R}}^2$ a lift of $f$ and $\pi:\ensuremath{\mathbb{R}}^2\to\mathbb{T}^2$ the covering map.
Suppose that $\rho(\tilde f)=\{t\rho_0\mid 0\leq t\leq 1\}$, where $\tan(\rho_0)\notin\mathbb{Q}$. Then there is $M>0$ such that $$|\langle \tilde{f}^n(\tilde{z})-\tilde{z},\rho_0^\perp\rangle|<M,$$ for every $\tilde{z}\in\ensuremath{\mathbb{R}}^2$ and $n\in\ensuremath{\mathbb{Z}}$.
\end{teoa}
An immediate corollary, using the results from D\'avalos (\cite{Davalos2013SublinearDiffusion}) and Le Calvez and the second author (\cite{LeCalvezTal2015ForcingTheory}), is that
\begin{corb}
Let $f:\mathbb{T}^2\to\mathbb{T}^2$ be a torus homeomorphism isotopic to the identity, $\tilde{f}:\ensuremath{\mathbb{R}}^2\to\ensuremath{\mathbb{R}}^2$ a lift of $f$ and $\pi:\ensuremath{\mathbb{R}}^2\to\mathbb{T}^2$ the covering map.
Suppose that $\rho(\tilde f)$ is a non-degenerate line segment and that $f$ has at least one periodic point. Then $f$ has bounded deviations in the direction perpendicular to $\rho(\tilde f)$.
\end{corb}
Theorem A should have plenty of applications and has already been used in \cite{xioachuansalvador2019}.
The main new technical development that allowed for this work was the introduction of the new forcing techniques for surface homeomorphisms in \cite{ LeCalvezTal2015ForcingTheory}, although this work is in no way just a straightforward application of them. Also, since the theory is relatively recent, this work has as a byproduct some new lemmas that may be useful in its application. The paper is organized as follows. In Section~\ref{sec:preliminaries} we describe the main tools necessary for our work and in Section~\ref{sec:mainresult} we prove the main theorem and its corollary.
\section{Preliminaries}\label{sec:preliminaries}
\subsection{Rotation theory for homeomorphisms of $\mathbb{T}^2$}
\label{caprot}
Let $f:\mathbb{T}^2\to\mathbb{T}^2$ be a homeomorphism isotopic to the identity, where $\mathbb{T}^2=\ensuremath{\mathbb{R}}^2/\ensuremath{\mathbb{Z}}^2$, let $\tilde{f}:\ensuremath{\mathbb{R}}^2\to\ensuremath{\mathbb{R}}^2$ be a lift of $f$ to the universal covering of $\mathbb{T}^2$ and let $\pi:\ensuremath{\mathbb{R}}^2\to\mathbb{T}^2$ be the covering map. We already defined the rotation set of $\tilde f$ in the introduction, also known as the
\emph{ Misiurewicz-Ziemian rotation set}. We say that a point $x\in\mathbb{T}^2$ has a \emph{rotation vector} $v$ if $\rho(\tilde f,x)=\lim_{n\to\infty}\frac{\tilde{f}^n(\tilde{x})-\tilde{x}}{n}=v,$ where $\tilde{x}$ is any point in $\pi^{-1}(x)$.
Let $\phi:\mathbb{T}^2\to\ensuremath{\mathbb{R}}^2$ be the displacement function $\phi(x)=\tilde{f}(\tilde x)-\tilde x$, where $\tilde x$ is some point in $\pi^{-1}(x)$. As $$\frac{1}{n}\sum_{k=0}^{n-1}\phi(f^k(x))=\frac{1}{n}\sum_{k=0}^{n-1}(\tilde{f}^{k+1}(\tilde x)-\tilde{f}^k(\tilde x))=\frac{\tilde{f}^n(\tilde x)-\tilde x}{n},$$ we have, by Birkhoff's Ergodic Theorem, that if $\mu$ is an ergodic Borel probability measure invariant by $f$, then $$\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\phi(f^k(x))=\int_{\mathbb{T}^2}\phi \dif\mu \quad\textrm{ for $\mu$ almost all $x\in\mathbb{T}^2$},$$ therefore $\mu$-almost all points have a rotation vector and it is equal to $\int_{\mathbb{T}^2}\phi \dif\mu$, which is called the \emph{rotation vector of the measure $\mu$}. It is also well known (see \cite{MisiurewiczZiemian1989RotationSets}) that
$$\rho(\tilde f)=\left\{\int_{\mathbb{T}^2}\phi\dif\mu\mid\mu \textrm{ is a Borel probability measure invariant by $f$}\right\},$$
which bridges the concept of rotation for points and the displacement for invariant measures. As a consequence, one obtains that rotation sets are always compact and convex subsets of the plane, and that if $v$ is an extremal point of the rotation set, then there exists an ergodic $f$-invariant measure $\mu$ whose rotation vector is $v$.
Of particular importance for this work, we have that, whenever the rotation set of $\tilde f$ is a line segment with irrational slope that contains the point $(0,0)$, one must have that $(0,0)$ is an extremal point of $\rho(\tilde f)$ (by \cite{LeCalvezTal2015ForcingTheory}) and a result from Franks (see \cite{franks:1988a}) implies that $\tilde f$ must have a fixed point. Furthermore, since in this case $(0,0)$ is the only point in $\rho(\tilde f)\cap \ensuremath{\mathbb{Q}}^2$, then every periodic point of $f$ must be lifted to a periodic point of $\tilde f$.
\subsection{Essential dynamics}
We need the concept of essential points for the dynamics, developed in \cite{KoropeckiTal2012StrictlyToral, koropecki2018fully}. We refer to the papers for a complete exposition, only citing the required results.
An open subset $U\subset\mathbb{T}^2$ is \textit{inessential} if every closed curve contained in $U$ is null-homotopic in $\mathbb{T}^2$; otherwise $U$ is called \textit{essential}. A general subset $E\subset\mathbb{T}^2$ is inessential if it has an inessential open neighborhood, otherwise it is called essential. Finally, $E$ is said to be \emph{fully essential} if $\mathbb{T}^2\setminus E$ is inessential.
\begin{definicao}\label{defdiness}
Let $x\in\mathbb{T}^2$ and $f:\mathbb{T}^2\to\mathbb{T}^2$ be a homeomorphism isotopic to the identity. We say that $x$ is \textit{an inessential point of $f$} if $\cup_{k\in\ensuremath{\mathbb{Z}}}f^k(U)$ is inessential for some neighborhood $U$ of $x$, otherwise $x$ is called \textit{an essential point of $f$}. Furthermore, we say that $x$ is a \emph{fully essential point of $f$} if $\cup_{k\in\ensuremath{\mathbb{Z}}}f^k(U)$ is fully essential for any neighborhood $U$ of $x$.
\end{definicao}
The following proposition appeared in \cite{guelman2015rotation}. Here $||x||_\infty$ denotes the infinity norm on $\ensuremath{\mathbb{R}}^2$, $||x||_\infty=\max\{|(x)_1|,|(x)_2|\}$, where $(x)_1$ and $(x)_2$ are the first and second canonical coordinates of a point $x\in\ensuremath{\mathbb{R}}^2$.
\begin{proposicao}
\label{propguelman2015rotation}
Let $O\subset\ensuremath{\mathbb{R}}^2$ be a connected open set such that $\bigcup_{n\in\ensuremath{\mathbb{Z}}}f^n(\pi(O))$ is fully essential and such that $\overline{\pi(O)}$ is inessential. Then there exists $M\in\ensuremath{\mathbb{N}}$ and $K\subset\ensuremath{\mathbb{R}}^2$ compact such that $[0,1]^2$ is contained in a bounded connected component of $\ensuremath{\mathbb{R}}^2\setminus K$ and $$K\subset\bigcup_{|i|\leq M,\, ||v||_{\infty}\leq M}\left(\tilde{f}^i(O)+v\right).$$
\end{proposicao}
We also have that:
\begin{lema}\label{lemaessencial}
Let $f:\mathbb{T}^2\to\mathbb{T}^2$ be a homeomorphism isotopic to the identity, $\tilde{f}:\ensuremath{\mathbb{R}}^2\to\ensuremath{\mathbb{R}}^2$ a lift and $z_0\in\mathbb{T}^2$ a recurrent point such that $\rho(\tilde f,z_0)$ does not belong to $\ensuremath{\mathbb{Q}}^2$. If $\rho(\tilde f)$ is a nondegenerate line segment with irrational slope, then $z_0$ is a fully essential point for $f$. In particular, for every $\varepsilon>0$, the set $U_{\varepsilon}=\bigcup_{i=0}^{\infty} f^{i}(\pi(B(\varepsilon, \tilde{z}_0)))$ is fully essential.
\end{lema}
\begin{proof}
Assume, for a contradiction, that $z_0$ is an inessential point. Then there exists $\varepsilon>0$ such that $U_{\varepsilon}=\bigcup_{i=0}^{\infty} f^{i}(\pi(B(\varepsilon, \tilde{z}_0)))$ is inessential. Note that each connected component of $U_\varepsilon$ must be contained in a topological open disk. As $U_\varepsilon$ is $f$-invariant, $f$ permutes the connected components of $U_\varepsilon$. Also, since $z_0$ is recurrent, if $U_\varepsilon^0$ is the connected component of $U_\varepsilon$ containing $z_0$, there must exist a smallest $N> 0$ such that $f^{N}(U_\varepsilon^0)=U_\varepsilon^0$. Let $w\in\ensuremath{\mathbb{Z}}^2$ be the integer vector such that $\tilde{f}^{N}(\widetilde U_\varepsilon^0)=\widetilde U_\varepsilon^0+w$, where $\widetilde U_\varepsilon^0$ is the lift of $U_\varepsilon^0$ that contains $\tilde z_0$. As $z_0$ is recurrent, there exists a subsequence $n_k$ such that $f^{n_k}(z_0)\to z_0$. In particular, there exists $k_0$ such that if $k>k_0$ we have $f^{n_k}(z_0)\in U_\varepsilon^0$. Since, by the choice of $N$, we have that $f^i(U_\varepsilon^0)\cap U_\varepsilon^0=\emptyset$ if $1\leq i<N$, one deduces that $n_k=p_k N$ for $k>k_0$. Therefore $\tilde{f}^{n_k}(\tilde z_0)=\tilde{f}^{p_kN}(\tilde z_0)\in \tilde{f}^{p_kN}(\widetilde U_\varepsilon^0)=\widetilde U_\varepsilon^0 +p_kw$. So, $\tilde{f}^{p_kN}(\tilde z_0)-p_kw\to\tilde z_0$, which implies that $\rho(\tilde f,z_0)=w/N$, a contradiction.
Assume now, again for a contradiction, that $z_0$ is essential but not fully essential. Then there exists $\varepsilon>0$ such that $U_{\varepsilon}=\bigcup_{i=0}^{\infty} f^{i}(\pi(B(\varepsilon, \tilde{z}_0)))$ is essential but not fully essential. Note that, as $z_0$ is recurrent, all connected components of $U_{\varepsilon}$ are periodic, and by Proposition 1.2 and Proposition 1.4 of \cite{KoropeckiTal2012StrictlyToral}, there must exist a power $g=f^l$ of $f$, a lift $\hat g:\ensuremath{\mathbb{R}}^2\to\ensuremath{\mathbb{R}}^2$ of $g$, a vector $W\in \ensuremath{\mathbb{Z}}^2_*$, and $M>0$ such that, for all $\tilde z\in\ensuremath{\mathbb{R}}^2$ and all $n\in\ensuremath{\mathbb{Z}}$, $|\langle \hat{g}^n(\tilde{z})-\tilde{z}, W\rangle|<M$. But this implies that the rotation set of $\hat g$ is contained in a segment of rational slope. Since $\rho(\hat g)= l \left(\rho(\tilde f)+V\right)$ for some $V\in\ensuremath{\mathbb{Z}}^2$, one deduces that $\rho(\tilde f)$ is also a line segment of rational slope, again a contradiction.
\end{proof}
\label{capfol}
\subsection{Brouwer homeomorphisms}
We recall that a \emph{Brouwer homeomorphism} is an orientation-preserving homeomorphism of the plane without fixed points, and that a \emph{line} in the plane is a continuous, injective and proper map from $\ensuremath{\mathbb{R}}$ to $\ensuremath{\mathbb{R}}^2$. By the Sch\"{o}nflies Theorem, if $\phi:\ensuremath{\mathbb{R}}\to\ensuremath{\mathbb{R}}^2$ is a line, then it can be extended to an orientation-preserving homeomorphism $\phi^*:\ensuremath{\mathbb{R}}^2\to\ensuremath{\mathbb{R}}^2$ such that $\phi(t)=\phi^*(t,0)$. We then define canonically the \emph{left and right} of $\phi$ as $L(\phi)=\phi^*(\ensuremath{\mathbb{R}}\times(0,+\infty))$ and $R(\phi)=\phi^*(\ensuremath{\mathbb{R}}\times(-\infty,0))$ respectively.
The fundamental result on the study of Brouwer Homeomorphisms is that:
\begin{teo}[\cite{brouwer1912beweis}]\label{teobrouwer0}
Given $h:\ensuremath{\mathbb{R}}^2\to\ensuremath{\mathbb{R}}^2$ a Brouwer homeomorphism and $x\in\ensuremath{\mathbb{R}}^2$, there exists a line $\phi:\ensuremath{\mathbb{R}}\to\ensuremath{\mathbb{R}}^2$, with $\phi(0)=x$, such that $h([\phi])\subset L(\phi)$ and $h^{-1}([\phi])\subset R(\phi)$, where $[\phi]=\phi(\ensuremath{\mathbb{R}})$.
\end{teo}
A line as in the above result is called a \textit{Brouwer line}. A direct consequence of this theorem is that, if $h$ is a Brouwer homeomorphism, then every point in $\ensuremath{\mathbb{R}}^2$ is contained in an open, invariant, connected and simply connected set, and the dynamics of $h$ on this set is conjugate to a rigid translation. In particular, $h$ has only wandering points.
\subsection{Maximal isotopies}
Let $M$ be an oriented surface, $f:M\to M$ a homeomorphism isotopic to the identity, and denote by $\mathcal{I}$ the space of all isotopies between $f$ and the identity, that is, if $I\in\mathcal{I}$ then $I=(f_t)_{t\in[0,1]}$, where $f_0=\textrm{Id}_M$, $f_1=f$, for every $t\in[0,1],\, f_t$ is a homeomorphism of $M$, and $I$ is a continuous curve on the space of homeomorphisms of $M$, using the topology of uniform convergence over compact subsets. The trajectory of a point $z\in M$ for $I$ is defined as the path $t\mapsto f_t(z)$, which we denote by $I(z)$. By concatenating paths, we can define, for $n\in\ensuremath{\mathbb{N}}$, $$I^n(z)=\Pi_{0\leq k<n}I(f^k(z)),\,\,\, I^\ensuremath{\mathbb{N}}(z)=\Pi_{k\geq 0}I(f^k(z)) \,\textrm{ and }\, I^\ensuremath{\mathbb{Z}}(z)=\Pi_{k\in\ensuremath{\mathbb{Z}}}I(f^k(z)).$$ Denote by $\textrm{fix}(I)=\bigcap_{t\in[0,1]}\textrm{fix}(f_t)$ the set of points whose isotopy path is constant, and let $\textrm{dom}(I)$ be its complement, which is called the \emph{domain} of $I$. A closed subset $F\subset\textrm{fix}(f)$ is said to be unlinked for $f$ if there exists $I\in\mathcal{I}$ such that $F\subset \textrm{fix}(I)$.
We can define a pre-order in $\mathcal{I}$ as follows:
\begin{definicao}
Let $I_1,I_2\in\mathcal{I}$. Say that $I_1\leqslant I_2$ if
\begin{enumerate}[(i)]
\item $\textrm{fix}(I_1)\subset\textrm{fix}(I_2)$;
\item $I_2$ is homotopic to $I_1$ relative to $\textrm{fix}(I_1)$.
\end{enumerate}
\end{definicao}
We say that $I\in\mathcal{I}$ is a \emph{maximal isotopy} if it is maximal for the pre-order defined above. Note that this is equivalent to the property that, for all $z\in\textrm{fix}(f)\setminus\textrm{fix}(I)$, the trajectory of $z$, which is a closed loop, is not null homotopic in $\textrm{dom}(I)$ (see \cite{beguin2016fixed}).
Now, if $I$ is a maximal isotopy, denoting by $\tilde{I}=(\tilde f_t)_{t\in[0,1]}$ the lift of $I|_{\textrm{dom}(I)}$ to the universal covering space $\widetilde{\textrm{dom}(I)}$ of $\textrm{dom}(I)$, we have that $\tilde f_1=\tilde f$ has no fixed points and $\tilde f_0=\textrm{Id}_{\widetilde{\textrm{dom}(I)}}$; therefore the restriction of $\tilde f_1$ to each connected component of its domain is a Brouwer homeomorphism. We have the following:
\begin{teo}[\cite{beguin2016fixed}]\label{teobeguin2016fixed}
For all $I\in\mathcal{I}$, there exists $I'\in\mathcal{I}$ such that $I\leqslant I'$ and $I'$ is maximal.
\end{teo}
Therefore, given an unlinked subset $F$ of fixed points for $f$, one can always find a maximal isotopy $I'$ such that $F\subset \textrm{fix}(I')$.
\subsection{Paths transverse to oriented foliations}
If $M$ is an oriented surface, we define an \emph{oriented topological foliation with singularities of $M$} as a topological oriented foliation $\mathcal{F}$ defined on an open subset of $M$. This subset will be called the \emph{domain of $\mathcal{F}$} and denoted by $\textrm{dom}(\mathcal{F})$, while its complement is the set of \emph{singularities of $\mathcal{F}$}, denoted by $\textrm{sing}(\mathcal{F})$.
Let us fix an oriented singular foliation $\mathcal{F}$ of $M$ and, for each $z \in \textrm{dom}(\mathcal{F})$, denote by $\phi_z$ the leaf of $\mathcal{F}$ passing through $z$. A {\it{path}} in $M$ is a continuous function $\gamma:J\to M$ where $J$ is a non-degenerate interval of the line. We denote by $[\gamma]$ the image of $\gamma$.
A path $\gamma:J\to \textrm{dom}(\mathcal{F})$ is said to be transverse to $\mathcal{F}$ if, for all $t\in J$, there exists a homeomorphism $c:W\to (0,1)^2$, where $W$ is a neighborhood of $\gamma(t)$, such that $c$ sends the restriction of $\mathcal{F}$ to the foliation by downward oriented vertical leaves in $(0,1)^2$, and such that $\pi_1\circ c\circ\gamma$ is strictly increasing in a neighborhood of $t$, where $\pi_1$ is the projection onto the first coordinate. Intuitively, a path is transverse to $\mathcal{F}$ if it always locally crosses leaves from right to left.
\begin{definicao}\label{defequivalence}
If $M=\ensuremath{\mathbb{R}}^2$ and $\mathcal{F}$ does not have singularities, we say that two transverse paths are \textit{equivalent} if they intersect the same leaves. In the general case we say that two transverse paths $\gamma:J\to\textrm{dom}(\mathcal{F})$ and $\gamma':J'\to\textrm{dom}(\mathcal{F})$ are \textit{equivalent} if there exist a continuous map $H:J\times[0,1]\to\textrm{dom}(\mathcal{F})$ and an increasing homeomorphism $h:J\to J'$ such that:
\begin{enumerate}[(i)]
\item $H(t,0)=\gamma(t)$, $H(t,1)=\gamma'(h(t))$;
\item $\forall t\in J$ and $\forall s_1,s_2\in[0,1]$, $\phi_{H(t,s_1)}=\phi_{H(t,s_2)}$.
\end{enumerate}
In this case, we will denote $\gamma\sim_\mathcal{F}\gamma'$.
\end{definicao}
The previous definition is equivalent to requiring that, if $\widetilde\mathcal{F}$ is the lift of $\mathcal{F}$ to the universal covering of $\textrm{dom}(\mathcal{F})$, then there exist lifts $\widetilde\gamma$ and $\widetilde\gamma'$ of $\gamma$ and $\gamma'$, respectively, to the universal covering of $\textrm{dom}(\mathcal{F})$ such that $\widetilde{\gamma}\sim_{\widetilde{\mathcal{F}}}\widetilde{\gamma}'$.
We remark that, by a version of the Poincar\'e-Bendixson Theorem, if $\mathcal{F}$ is a non-singular foliation of $\ensuremath{\mathbb{R}}^2$ and $\gamma:\ensuremath{\mathbb{R}}\to\ensuremath{\mathbb{R}}^2$ is a transverse line, then every leaf $\phi$ intersecting $\gamma$ does so at exactly one point, and the leaf $\phi$ crosses $\gamma$ from left to right, that is, if $t'\in\ensuremath{\mathbb{R}}$ is such that $\phi(t')\in[\gamma]$, then $\phi(t)\in L(\gamma)$ if $t<t'$, and $\phi(t)\in R(\gamma)$ if $t>t'$.
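For a concrete check, let $\mathcal{F}$ be the foliation of $\ensuremath{\mathbb{R}}^2$ by vertical lines oriented downwards and let $\gamma(t)=(t,0)$, which is a transverse line. The leaf $\phi_a(t)=(a,-t)$ meets $[\gamma]$ only at $t'=0$, and $\phi_a(t)$ lies in the upper half-plane $L(\gamma)$ for $t<t'$ and in the lower half-plane $R(\gamma)$ for $t>t'$, as stated above.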
Given three lines $\gamma_i:\ensuremath{\mathbb{R}}\to\ensuremath{\mathbb{R}}^2$, $i\in\{0,1,2\}$, we say that \emph{$\gamma_0$ separates $\gamma_1$ and $\gamma_2$} if $[\gamma_1]$ and $[\gamma_2]$ lie in different connected components of the complement of $[\gamma_0]$. We say that \emph{$\gamma_2$ is above $\gamma_1$ with respect to $\gamma_0$} if the following hold:
\begin{enumerate}[(i)]
\item The lines are pairwise disjoint;
\item no line separates the other two;
\item if $\lambda_1$, $\lambda_2$ are two disjoint paths joining $z_1=\gamma_0(t_1)$, $z_2=\gamma_0(t_2)$ to $z'_1\in[\gamma_1]$, $z'_2\in[\gamma_2]$, respectively, and intersecting the lines only at their endpoints, then $t_2>t_1$.
\end{enumerate}
\begin{figure}
\caption{$\gamma_2$ is above $\gamma_1$ with respect to $\gamma_0$}
\label{figacima}
\end{figure}
We need the fundamental definition of $\mathcal{F}$-transversal intersection.
\begin{definicao}
Let $M=\ensuremath{\mathbb{R}}^2$ and let $\mathcal{F}$ be non-singular. Let $\gamma_i:J_i\to\ensuremath{\mathbb{R}}^2$, $i\in\{1,2\}$, be two transverse paths such that $\phi_{\gamma_1(t_1)}=\phi_{\gamma_2(t_2)}=\phi$ for some $t_1, t_2$ in the interior of $J_1, J_2$ respectively. We say that $\gamma_1$ \emph{intersects $\gamma_2$ $\mathcal{F}$-transversally at $\phi$} if there exist $a_1,b_1\in J_1$ with $a_1<t_1<b_1$ and $a_2,b_2\in J_2$ with $a_2<t_2<b_2$ such that
\begin{enumerate}[(i)]
\item $\phi_{\gamma_2(a_2)}$ is below $\phi_{\gamma_1(a_1)}$ with respect to $\phi$;
\item $\phi_{\gamma_2(b_2)}$ is above $\phi_{\gamma_1(b_1)}$ with respect to $\phi$.
\end{enumerate}
In general, when $M$ is any oriented surface and $\mathcal{F}$ is a singular foliation, we say that a transverse path $\gamma_1:J_1\to\mathrm{dom}(\mathcal{F})$ \emph{intersects $\gamma_2:J_2\to\mathrm{dom}(\mathcal{F})$ $\mathcal{F}$-transversally at $\phi$}, where $\phi_{\gamma_1(t_1)}=\phi_{\gamma_2(t_2)}=\phi$, if there are lifts $\widetilde\gamma_i$ to the universal covering $\widetilde{\mathrm{dom}(\mathcal{F})}$ that intersect $\widetilde\mathcal{F}$-transversally, where $\widetilde\mathcal{F}$ is the lift of $\mathcal{F}$.
In both cases, we denote $\gamma_1|_{[a_1,b_1]}\pitchfork_{\mathcal{F}}\gamma_2|_{[a_2,b_2]}$.
\end{definicao}
Whenever the context is clear, we will just say that $\gamma_1$ and $\gamma_2$ intersect transversally. Also, if $\gamma_1(t_1)=\gamma_2(t_2)$ and $\gamma_1$ intersects $\gamma_2$ $\mathcal{F}$-transversally at $\phi_{\gamma_1(t_1)}=\phi_{\gamma_2(t_2)}$, we will just say that $\gamma_1$ and $\gamma_2$ intersect $\mathcal{F}$-transversally at $\gamma_1(t_1)$. In case a path has a transverse intersection with itself, we say that $\gamma$ has a \emph{transverse self-intersection}.
\begin{figure}
\caption{$\gamma_1$ and $\gamma_2$ with an $\mathcal{F}$-transverse intersection}
\label{figinter}
\end{figure}
Note that, if $\gamma_1|_{I_1}\pitchfork_{\mathcal{F}}\gamma_2|_{I_2}$ and $\gamma_2|_{I_2}\sim_{\mathcal{F}}\gamma_3|_{I_3}$, then $\gamma_1|_{I_1}\pitchfork_{\mathcal{F}}\gamma_3|_{I_3}$. Note also that $\gamma_1|_{I_1}\pitchfork_{\mathcal{F}}\gamma_2|_{I_2}$ at $\phi$ does not imply that $\gamma_1$ and $\gamma_2$ actually intersect at a point of $\phi$; however, $[\gamma_1|_{I_1}]$ and $[\gamma_2|_{I_2}]$ have at least one common point.
\subsection{Brouwer-Le Calvez foliations}
The following is one of the most useful tools in the study of homeomorphisms of surfaces.
\begin{teo}[\cite{le2005version}]\label{teofolheacao}
Let $h:\ensuremath{\mathbb{R}}^2\to\ensuremath{\mathbb{R}}^2$ be a Brouwer homeomorphism and $G$ a discrete group of orientation preserving homeomorphisms of the plane that acts freely and properly. If $h$ commutes with the elements of $G$, then there exists a foliation $\mathcal{F}$ of $\ensuremath{\mathbb{R}}^2$ by Brouwer lines for $h$. Furthermore, $\mathcal{F}$ is $G$-invariant.
\end{teo}
A foliation as in the previous theorem will be called a \emph{Brouwer-Le Calvez foliation}.
Note that, by combining the previous result with Theorem \ref{teobeguin2016fixed}, one obtains that, given a homeomorphism $f$ of $M$ isotopic to the identity and an unlinked subset $F$ of $\textrm{fix}(f)$, one can always find a maximal isotopy $I$ such that $F\subset \textrm{fix}(I)$. Since the lift of the restriction of $f$ to $\textrm{dom}(I)$ is a Brouwer homeomorphism commuting with all covering automorphisms, Theorem \ref{teofolheacao} tells us that one may find a foliation $\widetilde\mathcal{F}$ by Brouwer lines that descends to an oriented nonsingular foliation of $\textrm{dom}(I)$. It is therefore possible to define a singular foliation $\mathcal{F}$ of $M$ such that $\textrm{dom}(\mathcal{F})=\textrm{dom}(I)$, and such that $F$ is contained in $\textrm{sing}(\mathcal{F})$.
Furthermore, one can show (\cite{le2005version}) that:
\begin{teo}\label{foltal}
Given a maximal isotopy $I$, there exists an oriented topological foliation with singularities $\mathcal{F}$ of $M$, with $\emph{dom}(\mathcal{F})=\emph{dom}(I)$, such that for all $z\in\emph{dom}(I)$ the trajectory $I(z)$ is homotopic, with fixed endpoints in $\emph{dom}(I)$, to a path transverse to $\mathcal{F}$, and this path is unique up to equivalence.
\end{teo}
We denote by $I_\mathcal{F}(z)$ the class of paths described in the previous theorem, as well as its representatives whenever the context is clear. We further denote $$I^n _\mathcal{F}(z)=\Pi_{0\leq k<n}I_\mathcal{F}(f^k(z)), \,\,\,I^\ensuremath{\mathbb{N}}_\mathcal{F}(z)=\Pi_{k\geq 0}I_\mathcal{F}(f^k(z)) \,\textrm{ and }\,I^\ensuremath{\mathbb{Z}}_\mathcal{F}(z)=\Pi_{k\in\ensuremath{\mathbb{Z}}}I_\mathcal{F}(f^k(z)).$$
\begin{definicao}
A transverse path $\gamma:[a,b]\to\textrm{dom}(I)$ is called \textit{admissible of order $n$} if there exists $z\in\textrm{dom}(I)$ such that $\gamma$ is equivalent to a path in $I_\mathcal{F}^n(z)$. A path that is admissible of some order is just called admissible.
\end{definicao}
Note that Proposition 19 of \cite{LeCalvezTal2015ForcingTheory} has the following direct consequence:
\begin{lema}\label{lemasubcaminhosadmissiveis}
Let $\beta, \gamma:[a,b]\to\textrm{dom}(I)$ be paths transverse to $\mathcal{F}$, such that $\beta\pitchfork_{\mathcal{F}}\gamma$. If $\gamma$ is admissible of order $n$ and $I\subset[a,b]$ is a nondegenerate interval, then $\gamma|_{I}$ is also admissible of order $n$.
\end{lema}
\subsection{Forcing}
Let us now present some of the results from the forcing theory developed by Le Calvez and the second author. As before, we fix a homeomorphism $f$ of the surface $M$, a maximal isotopy $I$ joining the identity and $f$, and a Brouwer-Le Calvez foliation $\mathcal{F}$ for $I$. The fundamental lemma is the following:
\begin{proposicao}[\cite{LeCalvezTal2015ForcingTheory}]
\label{propadm}
Suppose $\gamma_i:[a_i,b_i]\to M$, $i=1,2$, are two transverse paths that intersect $\mathcal{F}$-transversally at $\gamma_1(t_1)=\gamma_2(t_2)$. If $\gamma_1$ is admissible of order $n_1$ and $\gamma_2$ is admissible of order $n_2$, then both the transverse paths $\gamma_1|_{[a_1,t_1]}\gamma_2|_{[t_2,b_2]}$ and $\gamma_2|_{[a_2,t_2]}\gamma_1|_{[t_1,b_1]}$ are admissible of order $n_1+n_2$. Furthermore, either one of these paths is admissible of order $\min(n_1,n_2)$ or both are admissible of order $\max(n_1,n_2)$.
\end{proposicao}
\begin{definicao}\label{defdirigido}
Let $\gamma:\ensuremath{\mathbb{R}}\to\ensuremath{\mathbb{R}}^2$ be a proper path and let $\rho\in\ensuremath{\mathbb{R}}^2$, with $||\rho||=1$. We say that $\gamma$ is \emph{directed by $\rho$} if $$\lim_{t\to\pm\infty}||\gamma(t)||=+\infty,\quad\lim_{t\to+\infty}\gamma(t)/||\gamma(t)||=\rho,\quad\lim_{t\to-\infty}\gamma(t)/||\gamma(t)||=-\rho .$$
\end{definicao}
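For instance, any proper path of the form $\gamma(t)=t\rho+\sigma(t)$, with $\sigma:\ensuremath{\mathbb{R}}\to\ensuremath{\mathbb{R}}^2$ bounded, is directed by $\rho$: since $||\rho||=1$, we have $||\gamma(t)||\geq |t|-\sup_s||\sigma(s)||\to+\infty$ and $$\frac{\gamma(t)}{||\gamma(t)||}=\frac{t\rho+\sigma(t)}{||t\rho+\sigma(t)||}\longrightarrow\pm\rho\quad\textrm{ as } t\to\pm\infty,$$ because $\sigma(t)/|t|\to 0$.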
We note that whenever $f:\mathbb{T}^2\to\mathbb{T}^2$ is a homeomorphism homotopic to the identity and $z\in\mathbb{T}^2$ is a point whose rotation vector is well defined and equal to $\rho\not=0$, then we can find a representative $\gamma$ of $I^\ensuremath{\mathbb{Z}}_\mathcal{F}(z)$ such that every lift of $\gamma$ to $\ensuremath{\mathbb{R}}^2$ (the universal covering of $\mathbb{T}^2$) is a path directed by $\rho/||\rho||$.
Assume $M$ is $\ensuremath{\mathbb{R}}^2$, and let $\gamma:\ensuremath{\mathbb{R}}\to\ensuremath{\mathbb{R}}^2$ be a transverse line. Denote, as before, the two connected components of its complement by $R(\gamma)$ and $L(\gamma)$. We define the \emph{foliated right} (resp. \emph{foliated left}) of $\gamma$, denoted by $r(\gamma)$ (resp. $l(\gamma)$), as the set of leaves and singularities of $\mathcal{F}$ strictly contained in $R(\gamma)$ (resp. $L(\gamma)$). Thus $l(\gamma)\cup r(\gamma)$ contains all leaves and singularities not intersected by $\gamma$. The following is a useful criterion to detect transverse intersections:
\begin{proposicao}
\label{proprl}
Let $\mathcal{F}$ be an oriented topological foliation with singularities of $\ensuremath{\mathbb{R}}^2$, let $\gamma:\ensuremath{\mathbb{R}}\to\ensuremath{\mathbb{R}}^2$ be a transverse path which is a line, and let $\gamma':[a,b]\to\ensuremath{\mathbb{R}}^2$ be a transverse path. If $[\gamma']\cap l(\gamma)\neq\emptyset$, $[\gamma']\cap r(\gamma)\neq\emptyset$ and if $J^*=\{t\in\ensuremath{\mathbb{R}}\mid\phi_{\gamma(t)}\textrm{ crosses }\gamma'\}$ is bounded, then there exist intervals $J$ and $J'$ such that $\gamma'|_{J'}\pitchfork_{\mathcal{F}}\gamma|_J$.
\end{proposicao}
\begin{proof}
Let $t_0, s_0$ in $[a,b]$ be such that $\gamma'(t_0)\in l(\gamma)$ and $\gamma'(s_0)\in r(\gamma)$, and assume, without loss of generality, that $t_0<s_0$. Since $l(\gamma)$ and $r(\gamma)$ are closed, let $t_1=\max_{t_0\le t <s_0}\{t\mid\gamma'(t)\in l(\gamma)\}$ and let $s_1= \min_{t_1<t\le s_0}\{t\mid\gamma'(t)\in r(\gamma)\}$. Let $J'=[t_1,s_1]$, and note that $[\gamma'|_{J'}]$ must intersect $\gamma$, so let $t'$ be such that $\gamma'(t')\in [\gamma]$. Also, for $t_1<t<s_1$, $\gamma'(t)$ is in a leaf that crosses $\gamma$, and so there exist $a_0, b_0$ such that $\gamma'|_{(t_1, s_1)}$ is equivalent to $\gamma|_{(a_0,b_0)}$, where $a_0$ and $b_0$ are finite since both are contained in $J^*$. Set $J=[a_0, b_0]$ and let $s$ be such that $\gamma(s)=\gamma'(t')$. Note that, by lifting $\gamma'|_{J'}$ and $\gamma|_J$ to paths $\tilde\gamma'$ and $\tilde\gamma$ in the universal covering of $\textrm{dom}(\mathcal{F})$ such that $\tilde\gamma(s)=\tilde\gamma'(t')$, and lifting $\mathcal{F}$ to $\tilde\mathcal{F}$, one has that the leaf $\phi_0=\phi_{\tilde\gamma'(t')}$ is such that $\tilde\gamma(a_0)$ is above $\tilde\gamma'(t_1)$ and $\tilde\gamma(b_0)$ is below $\tilde\gamma'(s_1)$. Therefore $\tilde\gamma'\pitchfork_{\tilde\mathcal{F}}\tilde\gamma$ and so $\gamma'|_{J'}\pitchfork_{\mathcal{F}}\gamma|_J$.
\end{proof}
The next result shows that admissible paths obey some sort of continuity:
\begin{lema}[\cite{LeCalvezTal2015ForcingTheory}]
\label{lemaestabilidade}
Fix $z\in\emph{dom}(I)$, $n\geq 1$, and let us parametrize $I^n_{\mathcal{F}}(z)$ by $[0,1]$. For each $0<a<b<1$, there exists a neighborhood $V$ of $z$ such that, for all $z'\in V$, $I^n_{\mathcal{F}}(z)|_{[a,b]}$ is equivalent to a subpath of $I^n_{\mathcal{F}}(z')$. Furthermore, there exists a neighborhood $W$ of $z$ such that, for all $z',z''\in W$, the path $I^n_{\mathcal{F}}(z')$ is equivalent to a subpath of $I^{n+2}_{\mathcal{F}}(f^{-1}(z''))$.
\end{lema}
One key fact used in the proof of the main theorem of this work is that typical points for ergodic measures are recurrent. We present a similar definition for transverse paths:
\begin{definicao}\label{defrecorrente}
A transverse path $\gamma:\ensuremath{\mathbb{R}}\to M$ is called \emph{$\mathcal{F}$-recurrent} if for every compact $J\subset\ensuremath{\mathbb{R}}$ and all $t\in\ensuremath{\mathbb{R}}$ there exist segments $J'\subset(-\infty,t]$ and $J''\subset[t,+\infty)$ such that $\gamma|_{J'}\sim_{\mathcal{F}}\gamma|_{J}$ and $\gamma|_{J''}\sim_{\mathcal{F}}\gamma|_{J}$.
\end{definicao}
It follows from Lemma \ref{lemaestabilidade} that:
\begin{corolario}
\label{correcorrente}
If $z\in\emph{dom}(I)$ is a recurrent point for $f$, then $I^\ensuremath{\mathbb{Z}}_{\mathcal{F}}(z)$ is $\mathcal{F}$-recurrent.
\end{corolario}
We also need the following technical proposition:
\begin{proposicao}[\cite{LeCalvezTal2015ForcingTheory}]
\label{proppqeps}
Let $f:\mathbb{T}^2\to\mathbb{T}^2$ be a homeomorphism isotopic to the identity, $\tilde{f}:\ensuremath{\mathbb{R}}^2\to\ensuremath{\mathbb{R}}^2$ a lift of $f$ and assume that $\rho(\tilde f)=\{t\rho_0\mid 0\leq t\leq 1\}$, where $\tan(\rho_0)\notin\mathbb{Q}$. Then there exists $\tilde y_0\in\emph{dom}(\tilde\mathcal{F})$ such that for every $\varepsilon\in\{-1,1\}$ there exists a sequence $(p_l,q_l)_{l\geq 0}$ in $\ensuremath{\mathbb{Z}}^2\times\ensuremath{\mathbb{N}}$ satisfying: $$\lim_{l\to+\infty}q_l=+\infty, \quad\lim_{l\to+\infty}\tilde{f}^{q_l}(\tilde y_0)-\tilde y_0-p_l=0, \quad\varepsilon\langle p_l,\rho_0^\perp\rangle>0$$ and a sequence $(p'_l,q'_l)_{l\geq 0}$ in $\ensuremath{\mathbb{Z}}^2\times\ensuremath{\mathbb{N}}$ satisfying: $$\lim_{l\to+\infty}q'_l=+\infty, \quad\lim_{l\to+\infty}\tilde{f}^{-q'_l}(\tilde y_0)-\tilde y_0-p'_l=0, \quad\varepsilon\langle p'_l,\rho_0^\perp\rangle>0.$$ Furthermore, we can take $y_0=\pi(\tilde{y_0})$ such that $\rho(\tilde f,y_0)=\rho_0$.
\end{proposicao}
We finish with one of the main results from \cite{calvez2018topological}.
\begin{teo}
\label{teoauto}
Let $M$ be an oriented surface, $f$ a homeomorphism of $M$ isotopic to the identity, $I$ a maximal isotopy for $f$ and $\mathcal{F}$ a Brouwer-Le Calvez foliation transverse to $I$. Assume that $\gamma:[a,b]\to\emph{dom}(I)$ is an admissible path of order $r$ with a transverse self-intersection at $\gamma(s)=\gamma(t)$, where $s<t$. Let $\tilde\gamma$ be a lift of $\gamma$ to the universal covering $\widetilde{\emph{dom}}(I)$ of $\emph{dom}(I)$ and let $T$ be a covering transformation such that $\tilde\gamma$ and $T(\tilde\gamma)$ have a $\tilde\mathcal{F}$-transverse intersection at $\tilde\gamma(t)=T(\tilde\gamma)(s)$. Let $\tilde f$ be the lift of $f|_{\emph{dom}(I)}$ to $\widetilde{\emph{dom}}(I)$, and $\hat f$ be the homeomorphism of the annular covering space $\widehat{\emph{dom}}(I)=\widetilde{\emph{dom}}(I)/T$ that is lifted by $\tilde f$. Then:
\begin{enumerate}[(i)]
\item For every rational number $p/q\in(0,1]$, written in irreducible form, there exists $\tilde z\in\widetilde{\emph{dom}}(I)$ such that $\tilde{f}^{qr}(\tilde z)=T^p(\tilde z)$ and $\tilde{I}_{\tilde\mathcal{F}}^{\ensuremath{\mathbb{Z}}}(\tilde z)$ is equivalent to $\Pi_{k\in\ensuremath{\mathbb{Z}}}T^k(\tilde\gamma|_{[s,t]})$;
\item For every irrational number $\lambda\in[0,1/r]$, there exists a compact $\hat f$-invariant set $\hat{Z}_\lambda\subset\widehat{\emph{dom}}(I)$ such that every point $\hat z\in\hat{Z}_\lambda$ has rotation number $\rho(\tilde f,\hat z)=\lambda$. Furthermore, if $\tilde z$ is a lift of $\hat z$, then $\tilde{I}^{\ensuremath{\mathbb{Z}}}_{\tilde\mathcal{F}}(\tilde z)$ is equivalent to $\Pi_{k\in\ensuremath{\mathbb{Z}}}T^k(\tilde\gamma|_{[s,t]})$.
\end{enumerate}
\end{teo}
\section{Proofs of the main results}\label{sec:mainresult}
\subsection{Proof of Theorem A}
Let us fix an isotopy $I:\mathbb{T}^2\times[0,1]\to\mathbb{T}^2$ between $f$ and the identity such that $\textrm{fix}(I)\neq\emptyset$, and a lift $\tilde{I}:\ensuremath{\mathbb{R}}^2\times[0,1]\to\ensuremath{\mathbb{R}}^2$ of $I$ such that $\textrm{fix}(\tilde I)\neq\emptyset$. By Theorem \ref{teobeguin2016fixed}, we may assume that $I$ is maximal. Let $\mathcal{F}$ be the foliation of $\mathbb{T}^2$ given by Theorem \ref{foltal}, and $\tilde\mathcal{F}$ its lift to $\ensuremath{\mathbb{R}}^2$. Let also $\mu_0$ be an ergodic measure whose rotation vector is $\rho_0$. Let us further assume that $\rho_0$ is in the first quadrant, i.e., that both $(\rho_0)_1$ and $(\rho_0)_2$ are positive; the other cases are analogous.
\begin{lema}\label{lemacurvas}
There are $\tilde\mathcal{F}$-transverse lines $\alpha_-,\alpha_+:\ensuremath{\mathbb{R}}\to\ensuremath{\mathbb{R}}^2$ and $v_-,v_+\in\ensuremath{\mathbb{Z}}^2$ such that $\alpha_-(t+1)=\alpha_-(t)+v_-$, $\alpha_+(t+1)=\alpha_+(t)+v_+$, for every $t\in\ensuremath{\mathbb{R}}$, and such that $\langle v_-,\rho_0^\perp\rangle<0$, $\langle v_-,\rho_0\rangle>0$, $\langle v_+,\rho_0^\perp\rangle>0$ and $\langle v_+,\rho_0\rangle>0$.
\end{lema}
\begin{proof}
Let us start by building $\alpha_+$ and $v_+$. Let $\tilde y_0$ and $(p_l,q_l)_{l\geq 0}$ be given by Proposition \ref{proppqeps}, so that $\lim_{l\to+\infty}q_l=+\infty$, $\lim_{l\to+\infty}(\tilde{f}^{q_l}(\tilde y_0)-\tilde y_0-p_l)=0$ and $\langle p_l,\rho_0^\perp\rangle>0$. Let us fix $\alpha_0\in\tilde{I}_{\tilde\mathcal{F}}(\tilde y_0)$. Let $V_0\subset\ensuremath{\mathbb{R}}^2$ be a small neighborhood of $\tilde y_0$ for which there exists a homeomorphism $h_0:V_0\to(0,1)^2$ mapping the restriction of $\tilde\mathcal{F}$ to $V_0$ into the vertical foliation oriented downwards in $(0,1)^2$ and such that $(h_0([\alpha_0]\cap V_0))_1=[(h_0(\tilde y_0))_1,1)$ (i.e., $h_0([\alpha_0]\cap V_0)$ crosses all the leaves on the left of the leaf that passes through $h_0(\tilde y_0)$). Let also $\varepsilon>0$ be such that $B(\varepsilon,\tilde y_0)\subset V_0$. As noted in Proposition \ref{proppqeps}, we can assume that $\rho(\tilde f,\tilde y_0)=\rho_0$, so there is $l_0\in\ensuremath{\mathbb{N}}$ such that for $l\geq l_0$ we have $\langle p_l,\rho_0\rangle>0$. By Proposition \ref{proppqeps} we have $\lim_{l\to+\infty}(\tilde{f}^{q_l}(\tilde y_0)-\tilde y_0-p_l)=0$, so there is $l_1>0$ such that $\tilde{f}^{q_l}(\tilde y_0)-p_{l}\in B(\varepsilon,\tilde y_0)$ and $\langle p_{l},\rho_0^\perp\rangle>0$, for all $l\geq l_1$. Setting $l'=\max\{l_0,l_1\}$, let us denote $N=q_{l'}$ and $v_+=p_{l'}$.
\begin{figure}
\caption{Illustration of Lemma \ref{lemacurvas}}
\label{figalpha}
\end{figure}
Consider now $\alpha_N\in\tilde{I}^N_{\tilde\mathcal{F}}(\tilde y_0)$ such that $[\alpha_0]\subset[\alpha_N]$ and $\alpha_N$ is parameterized by $[0,1]$. Since $\tilde{f}^N(\tilde y_0)-v_+\in B(\varepsilon,\tilde y_0)$, we have that $\tilde{f}^N(\tilde y_0)-v_+\in V_0$, and therefore we can modify $\alpha_N$ inside $V_0$ in order to obtain a transverse path $\alpha_N':[0,1]\to\ensuremath{\mathbb{R}}^2$ such that $\alpha_N$ and $\alpha_N'$ are equal outside of $V_0$ and $\alpha_N'(0)=\tilde{f}^N(\tilde y_0)-v_+$ (to modify $\alpha_N$ in $V_0$, it is enough to modify $h_0(\alpha_N\cap V_0)$ in $(0,1)^2$ and take it back to $V_0$ using $h_0^{-1}$, see Figure \ref{figlemacurvas}). Now, let us define $\alpha'_+=\Pi_{k\in\ensuremath{\mathbb{Z}}}(\alpha_N'+kv_+)$. Since $\alpha'_+$ is a transverse path, if $\alpha'_+$ has a self-intersection, i.e., if $\alpha'_+(s)=\alpha'_+(t)$ with $s<t$, we can remove the arc $\alpha'_+|_{(s,t]}$ and reparametrize in a suitable way, thus obtaining a new path, which we shall denote by $\alpha_+$. Therefore we can assume that $\alpha_+$ is a simple path and so, by construction, $\alpha_+$ is a line satisfying the conditions of the statement.
\begin{figure}
\caption{Construction of $\alpha_N'$}
\label{figlemacurvas}
\end{figure}
The construction of $\alpha_-$ and $v_-$ is analogous, using Proposition \ref{proppqeps} with $\varepsilon=-1$.
\end{proof}
\begin{lema}\label{lemasobra}
There is $L_0>0$ such that, for every $\tilde{x}\in\ensuremath{\mathbb{R}}^2\setminus\emph{sing}(\tilde{\mathcal{F}})$, there is a transverse path $\tilde{\gamma}_{\tilde x}\in\tilde{I}_{\tilde{\mathcal{F}}}(\tilde x)$ with $\emph{diam}(\tilde{\gamma}_{\tilde x})<L_0$.
\end{lema}
\begin{proof}
First, let us note that it is enough to prove the result for points in $[0,1]^2$, since $\tilde{I}_{\tilde\mathcal{F}}(\tilde x+w)=\tilde{I}_{\tilde\mathcal{F}}(\tilde x)+w$ for every $\tilde x\in\ensuremath{\mathbb{R}}^2$ and every $w\in\ensuremath{\mathbb{Z}}^2$. Now, since $\tilde I$ is continuous, there is $L>0$ such that $\tilde{I}([0,1]^2\times [0,1])\subset B(L,0)$, i.e., for every point $\tilde x\in[0,1]^2$ the isotopy path $\tilde{I}(\tilde x)$ is contained in $B(L,0)$.
Now, let $\alpha_+$ and $\alpha_-$ be the transverse lines given by Lemma \ref{lemacurvas}, and also let $v_1,v_2,v_3,v_4\in\ensuremath{\mathbb{Z}}^2$ be such that $B(L,0)$ is contained in a bounded connected component of $\ensuremath{\mathbb{R}}^2\setminus([\alpha_++v_1]\cup[\alpha_-+v_2]\cup[\alpha_++v_3]\cup[\alpha_-+v_4])$, which we shall denote by $U$, and $U\subset R(\alpha_++v_1)\cap R(\alpha_-+v_2)\cap L(\alpha_++v_3)\cap L(\alpha_-+v_4)$ (see Figure \ref{figsobra1}).
\begin{figure}
\caption{Illustration of the set $U$}
\label{figsobra1}
\end{figure}
We claim that if $\phi$ is a leaf of $\tilde\mathcal{F}$ that intersects $U$, then $[\phi]\cap U$ is connected (that is, a segment of $\phi$). In fact, as remarked right after Definition \ref{defequivalence}, since $\alpha_+$ and $\alpha_-$ (as well as their translations) are transverse lines, we have that the leaf $\phi$ crosses each line at most once, and always from left to right of the line. Now let $t_1<t_2$ be such that $\phi(t_i)\in U,\, i\in\{1,2\}$. Note that to prove that $[\phi]\cap U$ is connected, it is enough to prove that $\phi(t)\in U$, for every $t_1<t<t_2$. Since $\phi(t_1)\in U$ and $t>t_1$, we have that $\phi(t)\in R(\alpha_++v_1)$ and $\phi(t)\in R(\alpha_-+v_2)$. Analogously, since $\phi(t_2)\in U$ and $t<t_2$, we have that $\phi(t)\in L(\alpha_++v_3)$ and $\phi(t)\in L(\alpha_-+v_4)$. So, $\phi(t)\in R(\alpha_++v_1)\cap R(\alpha_-+v_2)\cap L(\alpha_++v_3)\cap L(\alpha_-+v_4)$. Furthermore, as $\phi(t_1)$ and $\phi(t_2)$ belong to $U$, we have that no point of $\phi|_{[t_1,t_2]}$ intersects $[\alpha_++v_1]\cup[\alpha_-+v_2]\cup[\alpha_++v_3]\cup[\alpha_-+v_4]$. Therefore $\phi(t)\in U$, proving that $[\phi]\cap U$ is connected.
Let $\widehat{\textrm{dom}}(\tilde I)$ be the universal covering of $\textrm{dom}(\tilde I)$, $\hat\pi:\widehat{\textrm{dom}}(\tilde I)\to\textrm{dom}(\tilde I)$ the covering map, $\widehat{I}$ a lift of $\tilde{I}|_{\textrm{dom}(\tilde I)}$ to $\widehat{\textrm{dom}}(\tilde I)$ and $\hat\mathcal{F}$ the lift of $\tilde\mathcal{F}$ to $\widehat{\textrm{dom}}(\tilde I)$. Since $I$ is maximal, $\tilde I$ is maximal, and therefore $\hat f:\widehat{\textrm{dom}}(\tilde I)\to\widehat{\textrm{dom}}(\tilde I)$ is a Brouwer homeomorphism, where $\hat f$ is a lift of $\tilde{f}|_{\textrm{dom}(\tilde I)}$. Let us now fix $\tilde x\in[0,1]^2$ such that $\tilde x\notin \textrm{sing}(\tilde\mathcal{F})$ and a lift $\hat{x}\in\widehat{\textrm{dom}}(\tilde I)$ of $\tilde x$. Setting $\Phi(\hat x)=\{\phi\in\hat\mathcal{F}\mid\hat{x}\in R(\phi) \textrm{ and }\hat{f}(\hat x)\in L(\phi)\}\cup\{\phi_{\hat x},\phi_{\hat{f}(\hat x)}\}$, note that $\Phi(\hat x)$ is the set of leaves that intersect the transverse trajectory $\hat{I}_{\hat\mathcal{F}}(\hat x)$; moreover, $\Phi(\hat x)$ is totally ordered by the relation $\phi_1<\phi_2$ if $R(\phi_1)\subset R(\phi_2)$, because $\hat f$ is a Brouwer homeomorphism and the leaves of $\hat\mathcal{F}$ are Brouwer lines. Using this order, we can parameterize $\Phi(\hat x)$ by a parameter $s\in[0,1]$ in such a way that $\phi_0=\phi_{\hat x}$ and $\phi_1=\phi_{\hat{f}(\hat x)}$.
Denoting the isotopy path $\hat{I}(\hat x):[0,1]\to\widehat{\textrm{dom}}(\tilde I)$ by $\hat\beta$, we can define the following functions of the parameter $s$
$$t_-(s)=\begin{cases}
0, \,\textrm{if }s=0\\
\inf\{t\in[0,1]\mid\hat{\beta}([t,1])\cap R(\phi_s)=\emptyset\}, \,\textrm{if }s\in(0,1]
\end{cases} $$
and
$$t_+(s)=\begin{cases}
\inf \{t\in[0,1]\mid \hat{\beta}([t, 1])\subset L(\phi_s)\}, \,\textrm{if }s\in[0,1)\\
1, \,\textrm{if }s=1.
\end{cases} $$
Intuitively, $t_-(s)$ is the last moment at which $\hat\beta$ is on the right side of $\phi_s$, and $t_+(s)$ is the first moment from which $\hat\beta$ stays on the left side of $\phi_s$. Note that, if $s_1<s_2$, then $t_{-}(s_1)\leq t_{+}(s_1) < t_{-}(s_2)\leq t_{+}(s_2)$. So $t_-$ and $t_+$ coincide, except possibly on a countable set of discontinuities. However, note that if we list the discontinuity points as $(s_i)_{i\in\ensuremath{\mathbb{N}}}$, then $\sum d_i\leq 1$, where $d_i=t_+(s_i)-t_-(s_i)$ (because $t_\pm([0,1])\subset[0,1]$).
Now let us define a path $\gamma^*_{\hat x}:[0,1]\to\widehat{\textrm{dom}}(\tilde I)$, transverse except at the discontinuities of $t_-$ and $t_+$, as follows: we let $\gamma_{\hat x}^*$ equal $\hat\beta$ at the points where $t_-(s)=t_+(s)$, and where $t_-(s)\neq t_+(s)$ we let $\gamma_{\hat x}^*$ follow the segment of the leaf $\phi_s$ which connects $\hat{\beta}(t_-(s))$ to $\hat{\beta}(t_+(s))$ (see Figure \ref{figsobra2}).
\begin{figure}
\caption{Illustration of the curves $\hat\beta$ and $\gamma^*_{\hat x}$}
\label{figsobra2}
\end{figure}
Now, since $\gamma^*_{\hat x}$ is made of leaf points and leaf segments, for each $s\in[0,1]$ we can find a tubular neighborhood $V_s$ of the point (or segment) of $\gamma^*_{\hat x}$ which intersects the leaf $\phi_s$, and $\varepsilon_s\in(0,1)$ such that $V_s\subset B(\varepsilon_s,[\gamma_{\hat x}^*]\cap[\phi_s])$. Then $[\gamma^*_{\hat x}]\subset\cup_{s\in[0,1]}V_s$, and by compactness we have $[\gamma^*_{\hat x}]\subset\cup_{j=0}^k V_{s_j}\subset\cup_{j=0}^k B(\varepsilon_{s_j},[\gamma_{\hat x}^*]\cap[\phi_{s_j}])$, for some $k\in\ensuremath{\mathbb{N}}$ (note that the diameter of $B(\varepsilon_{s_j},[\gamma_{\hat x}^*]\cap[\phi_{s_j}])$ is uniformly bounded, even for the values of $s_j$ such that $[\gamma_{\hat x}^*]\cap[\phi_{s_j}]$ is a leaf segment, because $\sum d_i\leq 1$). Therefore, we can partition the interval $[0,1]$ into a finite number of closed subintervals and modify $\gamma_{\hat x}^*$ in each subinterval, inside the tubular neighborhoods, keeping the endpoints of the subintervals fixed, in order to obtain a transverse path $\gamma_{\hat x}$ such that $[\gamma_{\hat x}]\subset\cup_{j=0}^k B(\varepsilon_{s_j},[\gamma_{\hat x}^*]\cap[\phi_{s_j}])$ (see Figure \ref{figsobra3}).
\begin{figure}
\caption{Modification of $\gamma^*_{\hat x}$.}
\label{figsobra3}
\end{figure}
Note now that $\hat\pi(\hat\beta)=\tilde{I}(\tilde x)$ is contained in $U$ and that, if $\phi$ is a leaf of $\tilde\mathcal{F}$, then $[\phi]\cap U$ is connected; therefore $\hat\pi(\gamma^*_{\hat x})$ is also contained in $U$. So, $\hat\pi(\cup_{j=0}^k B(\varepsilon_{s_j},[\gamma_{\hat x}^*]\cap[\phi_{s_j}]))\subset B(\max\{\varepsilon_{s_j}\},U)\subset B(1,U)$, and then, denoting $\hat\pi(\gamma_{\hat x})=\tilde{\gamma}_{\tilde x}$, we have that $[\tilde{\gamma}_{\tilde x}]\subset B(1,U)$ and $\tilde{\gamma}_{\tilde x}\in \tilde{I}_{\tilde\mathcal{F}}(\tilde x)$. Since $U$ is bounded, there is $L_0>0$ such that $\textrm{diam}(\tilde{\gamma}_{\tilde x})<L_0$, proving the result.
\end{proof}
From now on, let us take $\tilde z_0$ to be the point given by Proposition \ref{proppqeps} and let $z_0=\pi(\tilde z_0)$. Note that $z_0$ is recurrent and has rotation vector $\rho_0$.
We will denote by $\tilde \gamma_0$ an element of $\tilde{I}_{\tilde{\mathcal{F}}}^\ensuremath{\mathbb{Z}}(\tilde z_0)$ which passes through $\tilde z_0$, and by $[\tilde \gamma_0]$ its image. Using Lemma \ref{lemasobra}, we can assume that for each $n\in\ensuremath{\mathbb{Z}}$, the segment of $\tilde \gamma_0$ between $\tilde{f}^n(\tilde{z}_0)$ and $\tilde{f}^{n+1}(\tilde{z}_0)$ has diameter less than $L_0$. We can also assume that $\tilde \gamma_0$ is parameterized so that $\tilde \gamma_0(n)=\tilde{f}^n(\tilde z_0)$, for every $n\in\ensuremath{\mathbb{Z}}$.
\begin{definicao}\label{defcone}
Let $v\in\ensuremath{\mathbb{R}}^2$ be a unit vector such that $v\neq \rho_0$ and $v\neq\rho_0^\perp$, and denote by $v_s$ the vector symmetric to $v$ with respect to the direction of $\rho_0$ (i.e., $\langle v_s,\rho_0 \rangle=\langle v,\rho_0 \rangle$ and $\langle v_s,\rho^\perp_0 \rangle=-\langle v,\rho^\perp_0 \rangle$). Let us now denote by $r_v$ and $r_{v_s}$ the straight lines generated by $v$ and $v_s$ passing through $\tilde{z}_0$ (i.e., $r_v(t)=\tilde{z}_0+tv$, for $t\in\ensuremath{\mathbb{R}}$). The set $\ensuremath{\mathbb{R}}^2\setminus([r_v]\cup [r_{v_s}])$ has four connected components; denote by $C_1$ and $C_2$ the components that intersect the straight line generated by $\rho_0$ passing through $\tilde{z}_0$. We will call the closure of $C_1\cup C_2$ the \emph{cone generated by $\rho_0$ with inclination $v$ and origin $\tilde{z}_0$}, and we will denote such a set by $C_{\tilde{z}_0}^{\rho_0}(v)$ (see Figure \ref{figcone}).
\begin{figure}
\caption{Illustration of $C_{\tilde{z}_0}^{\rho_0}(v)$.}
\label{figcone}
\end{figure}
\end{definicao}
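The cone of Definition \ref{defcone} admits a simple analytic description: writing $x-\tilde z_0=a\rho_0+b\rho_0^\perp$, membership amounts to $|b|\,|\langle v,\rho_0\rangle|\leq|a|\,|\langle v,\rho_0^\perp\rangle|$. A minimal numerical sketch, with hypothetical data ($\rho_0$ and $v$ unit vectors chosen for illustration, not the paper's rotation vector):

```python
import math

def in_cone(x, z0, rho0, v):
    """Test membership in the closed cone: writing x - z0 = a*rho0 + b*rho0perp,
    the cone is the set where |b|*|<v,rho0>| <= |a|*|<v,rho0perp>|."""
    rho0perp = (-rho0[1], rho0[0])           # rho0 rotated by pi/2
    dx, dy = x[0] - z0[0], x[1] - z0[1]
    a = dx * rho0[0] + dy * rho0[1]          # component along rho0
    b = dx * rho0perp[0] + dy * rho0perp[1]  # component along rho0perp
    cv = v[0] * rho0[0] + v[1] * rho0[1]
    cp = v[0] * rho0perp[0] + v[1] * rho0perp[1]
    return abs(b) * abs(cv) <= abs(a) * abs(cp)

# made-up example: rho0 horizontal, v at 45 degrees => cone is |y| <= |x|
rho0, v, z0 = (1.0, 0.0), (math.sqrt(0.5), math.sqrt(0.5)), (0.0, 0.0)
assert in_cone((2.0, 1.0), z0, rho0, v)      # inside, right half
assert in_cone((-2.0, 1.0), z0, rho0, v)     # the cone is two-sided
assert not in_cone((1.0, 2.0), z0, rho0, v)  # outside
```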
Note that in Definition \ref{defcone} we have $\partial(C_{\tilde{z}_0}^{\rho_0}(v))=[r_v]\cup [r_{v_s}]$.
In the next lemma we will denote, for $L>0$, $B(L,A)=\cup_{\tilde x\in A}B(L,\tilde x)$, where $A\subset\ensuremath{\mathbb{R}}^2$.
\begin{lema}\label{lemacone}
Given $v\in\ensuremath{\mathbb{R}}^2$ as in Definition \ref{defcone}, there is $L_1>0$ such that $[\tilde \gamma_0]\subset B(L_0,C_{\tilde{z}_0}^{\rho_0}(v))\cup B(L_1,\tilde{z}_0)$, where $L_0$ is given by Lemma \ref{lemasobra}.
\end{lema}
\begin{proof}
First, let us consider the straight lines $r_v$ and $r_{v_s}$, as in Definition \ref{defcone}. Denoting by $d(r_v,\tilde x)$ the distance between the straight line $r_v$ and the point $\tilde x$, we have that $d_n=d(r_v, n\rho_0+\tilde{z}_0)=d(r_{v_s},n\rho_0+\tilde{z}_0)=|n|\,||\langle\rho_0,v\rangle v-\rho_0||$, for $n\in\ensuremath{\mathbb{Z}}$. Note that $B(d_n,n\rho_0+\tilde{z}_0)\subset C_{\tilde{z}_0}^{\rho_0}(v)$.
We have that $\lim_{n\to\infty}\frac{\tilde{f}^n(\tilde{z}_0)-\tilde{z}_0}{n}=\rho_0$. So, setting $\varepsilon=||\langle\rho_0,v\rangle v-\rho_0||$, there is $n_1>0$ such that
\begin{align*}
d(\tilde{f}^n(\tilde{z}_0),n\rho_0+\tilde{z}_0)&=||\tilde{f}^n(\tilde{z}_0)-\tilde{z}_0-n\rho_0||\\
&< n\varepsilon=n||\langle\rho_0,v\rangle v-\rho_0||=d(r_v, n\rho_0+\tilde{z}_0), \quad\forall n\geq n_1
\end{align*}
Proceeding analogously for $\tilde{f}^{-1}$, we obtain
\begin{align*}
d(\tilde{f}^{-n}(\tilde{z}_0),-n\rho_0+\tilde{z}_0)&=||\tilde{f}^{-n}(\tilde{z}_0)-\tilde{z}_0-n(-\rho_0)||\\
&< n\varepsilon=n||\langle\rho_0,v\rangle v-\rho_0||=d(r_v, -n\rho_0+\tilde{z}_0), \quad\forall n\geq n_2.
\end{align*}
Therefore, setting $n_0=\max\{n_1,n_2\}$, we have that $d(\tilde{f}^{n}(\tilde{z}_0),n\rho_0+\tilde{z}_0)<d_n$ for $|n|\geq n_0$, and since $B(d_n,n\rho_0+\tilde{z}_0)\subset C_{\tilde{z}_0}^{\rho_0}(v)$, we have $\tilde{f}^n(\tilde{z}_0)\in C_{\tilde{z}_0}^{\rho_0}(v)$, for $|n|\geq n_0$ (see Figure \ref{figlemacone}).
\begin{figure}
\caption{Illustration of $\tilde{f}^n(\tilde{z}_0)$.}
\label{figlemacone}
\end{figure}
By construction, the segment of $\tilde \gamma_0$ between $\tilde{f}^n(\tilde{z}_0)$ and $\tilde{f}^{n+1}(\tilde{z}_0)$ has diameter smaller than $L_0$, so for $|n|\geq n_0$ such a segment is contained in $B(L_0,C_{\tilde{z}_0}^{\rho_0}(v))$.
For $n$ such that $|n|<n_0$, $\tilde{f}^n(\tilde{z}_0)$ may be outside the cone, but since there are only finitely many such points, there is $L'>0$ such that $||\tilde{f}^n(\tilde{z}_0)-\tilde{z}_0||<L'$, for $|n|<n_0$. Again, since each segment of $\tilde \gamma_0$ has diameter bounded by $L_0$, the segments of $\tilde \gamma_0$ between $\tilde{f}^n(\tilde{z}_0)$ and $\tilde{f}^{n+1}(\tilde{z}_0)$, with $|n|<n_0$, are contained in $B(L_1,\tilde{z}_0)$, where $L_1=L'+L_0$.
\end{proof}
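The identity $d_n=|n|\,||\langle\rho_0,v\rangle v-\rho_0||$ used in the proof can be checked directly: the distance from $n\rho_0+\tilde z_0$ to the line $r_v$ is the norm of the component of $n\rho_0$ orthogonal to $v$. A numerical sketch with arbitrary (hypothetical) choices of $\rho_0$, $v$ and $\tilde z_0$:

```python
import math

def dist_to_line(p, z0, v):
    """Distance from point p to the line {z0 + t v}, with v a unit vector:
    the norm of (p - z0) minus its orthogonal projection onto v."""
    dx, dy = p[0] - z0[0], p[1] - z0[1]
    proj = dx * v[0] + dy * v[1]
    return math.hypot(dx - proj * v[0], dy - proj * v[1])

# hypothetical unit vectors rho0, v and base point z0
rho0 = (math.cos(0.3), math.sin(0.3))
v = (math.cos(1.0), math.sin(1.0))
z0 = (2.0, -1.0)
dot = rho0[0] * v[0] + rho0[1] * v[1]
eps = math.hypot(rho0[0] - dot * v[0], rho0[1] - dot * v[1])  # ||<rho0,v>v - rho0||
for n in range(-5, 6):
    p = (z0[0] + n * rho0[0], z0[1] + n * rho0[1])
    assert abs(dist_to_line(p, z0, v) - abs(n) * eps) < 1e-9
```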
\begin{lema}\label{lemaauto}
If $\tilde\alpha:[a,b]\to\ensuremath{\mathbb{R}}^2$ is an admissible path for $\tilde f$, there is no $w\in\ensuremath{\mathbb{Z}}^2_*$ such that $\tilde\alpha\pitchfork_{\tilde{\mathcal{F}}}(\tilde\alpha+w)$.
\end{lema}
\begin{proof}
Suppose there are $w\in\ensuremath{\mathbb{Z}}^2_*$ and an $r$-admissible path $\tilde\alpha:[a,b]\to\ensuremath{\mathbb{R}}^2$ such that $\tilde\alpha\pitchfork_{\tilde{\mathcal{F}}}(\tilde\alpha+w)$. Then, by Theorem \ref{teoauto}, given $p/q\in(0,1]$ written in lowest terms, $f$ has a periodic point with rotation vector equal to $\frac{p}{qr}\cdot w$, which is absurd, because $\rho(\tilde f)\cap\ensuremath{\mathbb{Q}}^2=\{(0,0)\}$.
\end{proof}
\begin{lema}\label{lemaautogamma}
There is no $w\in\ensuremath{\mathbb{Z}}^2$ such that $\tilde \gamma_0\pitchfork_{\tilde{\mathcal{F}}}(\tilde \gamma_0+w)$.
\end{lema}
\begin{proof}
If $w\neq 0$, the result is a direct consequence of Lemma \ref{lemaauto}. For $w=0$, let us suppose by contradiction that $\tilde \gamma_0$ has a transverse self-intersection. Then there are intervals $I,J\subset\ensuremath{\mathbb{R}}$ such that $\tilde \gamma_0|_I\pitchfork_{\tilde{\mathcal{F}}}\tilde \gamma_0|_J$. We can suppose that $I\subset(-N,N)$, for some $N\in\ensuremath{\mathbb{N}}$. Now note that, since $z_0$ is recurrent, there are sequences $n_k\in\ensuremath{\mathbb{N}}$ and $w_k\in\ensuremath{\mathbb{Z}}^2$ such that $\tilde{f}^{n_k}(\tilde z_0)-w_k\to\tilde z_0$, and then $\tilde{f}^{-N+n_k}(\tilde z_0)-w_k\to\tilde{f}^{-N}(\tilde z_0)$. Also, since $\rho(f,z_0)=\rho_0$, we have $w_k/n_k\to\rho_0$, and then there is $k_0$ such that $w_k\neq 0$ if $k>k_0$.
Note that, by the way $\tilde \gamma_0$ has been parameterized, we have that $\tilde \gamma_0(-N)=\tilde{f}^{-N}(\tilde z_0)$ and $\tilde \gamma_0(N)=\tilde{f}^{N}(\tilde z_0)$, and then $\tilde \gamma_0\vert_{[-N,N]}=\tilde{I}_{\tilde\mathcal{F}}^{2N}(\tilde{f}^{-N}(\tilde z_0))$. Since $\tilde{f}^{-N+n_k}(\tilde z_0)-w_k\to\tilde{f}^{-N}(\tilde z_0)$, by Lemma \ref{lemaestabilidade} we have that if $k'$ is large enough, $\tilde{I}_{\tilde\mathcal{F}}^{2N}(\tilde{f}^{-N}(\tilde z_0))$ is equivalent to a sub-path of $\tilde{I}_{\tilde\mathcal{F}}^{2N+2}(\tilde{f}^{-1}(\tilde{f}^{-N+n_{k'}}(\tilde z_0)-w_{k'}))$. But
\begin{align*}
\tilde{I}_{\tilde\mathcal{F}}^{2N+2}(\tilde{f}^{-1}(\tilde{f}^{-N+n_{k'}}(\tilde z_0)-w_{k'}))&=\tilde{I}_{\tilde\mathcal{F}}^{2N+2}(\tilde{f}^{-N-1+n_{k'}}(\tilde z_0)-w_{k'})\\
&=(\tilde \gamma_0-w_{k'})|_{[-N-1+n_{k'},N+1+n_{k'}]}
\end{align*}
and we can also assume that $k'>k_0$. So, there is $I'\subset[-N-1+n_{k'},N+1+n_{k'}]$ such that $(\tilde \gamma_0-w_{k'})|_{I'}\sim_{\tilde\mathcal{F}}\tilde \gamma_0|_{[-N,N]}$. Since $I\subset[-N,N]$, there is $I''\subset I'$ such that $(\tilde \gamma_0-w_{k'})|_{I''}\sim_{\tilde\mathcal{F}}\tilde \gamma_0|_I$; but $\tilde \gamma_0|_I\pitchfork_{\tilde{\mathcal{F}}}\tilde \gamma_0|_J$, therefore $(\tilde \gamma_0-w_{k'})|_{I''}\pitchfork_{\tilde{\mathcal{F}}}\tilde \gamma_0|_J$, and given that $k'>k_0$, we have $w_{k'}\neq 0$, and thus we get a contradiction with Lemma \ref{lemaauto}.
\end{proof}
\begin{lema}
\label{lemalinhaunica}
$\tilde \gamma_0$ intersects each leaf at most once.
\end{lema}
\begin{proof}
Suppose, by contradiction, that there are $t'<t''$ such that $\phi_{\tilde \gamma_0(t')}=\phi_{\tilde \gamma_0(t'')}$. So we have that $\tilde \gamma_0|_{[t',t'']}$ is $\tilde\mathcal{F}$-equivalent to a transverse closed curve $\Gamma$. But since $\Gamma$ has some sub-path $\Gamma_0:J\to\ensuremath{\mathbb{R}}^2$ which is transverse, closed and simple, we have that $\tilde \gamma_0$ has a sub-path $\tilde\mathcal{F}$-equivalent to $\Gamma_0$.
Since $\tilde \gamma_0$ does not have transverse self-intersection, it follows from Proposition 20 of \cite{calvez2018topological} that:
\begin{enumerate}[(i)]
\item $\bigcup_{s\in J} [\phi_{\Gamma_0(s)}]=A_{\Gamma_0}$ is an open topological annulus;
\item $\{t\in\ensuremath{\mathbb{R}}\mid\tilde \gamma_0(t)\in A_{\Gamma_0}\}$ is an interval, which we denote by $I=(a,b)$, not necessarily bounded;
\item if $-\infty<a<b<+\infty$, then $\tilde \gamma_0(t')$ and $\tilde \gamma_0(t'')$ cannot both belong to unbounded connected components of $\ensuremath{\mathbb{R}}^2\setminus A_{\Gamma_0}$.
\end{enumerate}
Since $z_0$ is recurrent and has rotation vector $\rho_0$ we have, as in the proof of Lemma \ref{lemaautogamma}, that there are $t_k\to+\infty$, $w_k^+\in\ensuremath{\mathbb{Z}}^2$, with $||w_k^+||\to+\infty$, and $s_k\to-\infty$, $v_k^-\in\ensuremath{\mathbb{Z}}^2$, with $||v_k^-||\to+\infty$, such that $\tilde \gamma_0(t_k)\in A_{\Gamma_0}+w_k^+$ and $\tilde \gamma_0(s_k)\in A_{\Gamma_0}+v_k^-$.
Firstly, notice that, as $\Gamma_0$ is a transverse, closed and simple curve, every leaf $\phi$ that intersects $\Gamma_0$ does so in only one point. From this it follows that either every leaf that intersects $\Gamma_0$ has its $\omega$-limit contained in the bounded connected component of the complement of $\Gamma_0$, or every leaf intersecting $\Gamma_0$ has its $\alpha$-limit contained in that component. We will assume, without loss of generality, that the first situation occurs; the second case is treated in the same way. Since the foliation is invariant by integer translations, an analogous property holds for the leaves intersecting $\Gamma_0+w$, for integer vectors $w$. So, if $[\phi]\subset A_{\Gamma_0}$ and $[\Gamma_0+w]\cap[\Gamma_0]=\emptyset$, then $[\phi]\cap (A_{\Gamma_0}+w)=\emptyset$, because the $\omega$-limit of a leaf in the intersection would be contained in two disjoint sets.
So, for $k$ large enough, since $[\Gamma_0+w_k^+]\cap[\Gamma_0]=\emptyset$, we have that $\phi_{\tilde \gamma_0(t_k)}$ is contained in an unbounded connected component of $\ensuremath{\mathbb{R}}^2\setminus A_{\Gamma_0}$, and the same holds for $\phi_{\tilde \gamma_0(s_k)}$. Therefore, by Proposition 20 of \cite{calvez2018topological}, $\tilde \gamma_0$ has a transverse self-intersection, which is absurd.
\end{proof}
\begin{corolario}
\label{corlinha}
$\tilde \gamma_0$ is a line.
\end{corolario}
\begin{proof}
By Lemma \ref{lemalinhaunica} we have that $\tilde \gamma_0$ is a simple path, and since $||\tilde{f}^n(\tilde z_0)||\to+\infty$ when $n\to\pm\infty$, $\tilde \gamma_0$ is also a proper path, and the result follows.
\end{proof}
Therefore, since $\tilde \gamma_0$ is a line, the sets $L(\tilde \gamma_0)$, $R(\tilde \gamma_0)$, $l(\tilde \gamma_0)$ and $r(\tilde \gamma_0)$ are well defined.
\begin{lema}\label{lemailimitada}
Let $A$ be a connected component of $r(\tilde \gamma_0)$ (or $l(\tilde \gamma_0)$). Then $A$ is an unbounded set.
\end{lema}
\begin{proof}
By Corollary \ref{corlinha} and Lemma \ref{lemalinhaunica}, $\tilde \gamma_0$ is a line that crosses each leaf at most once. Thus, the set $\bigcup_{t\in\ensuremath{\mathbb{R}}}[\phi_{\tilde \gamma_0(t)}]$ of leaves that pass through $\tilde \gamma_0$ is homeomorphic to $\ensuremath{\mathbb{R}}^2$ and therefore simply connected. So, all connected components of the complement of such a set are unbounded.
\end{proof}
\begin{lema}\label{lemacompacto}
If $K\subset\ensuremath{\mathbb{R}}^2$ is compact, then the set $I_K=\{t\in\ensuremath{\mathbb{R}}\mid[\phi_{\tilde \gamma_0(t)}]\cap K\neq\emptyset\}$ is also compact.
\end{lema}
\begin{proof}
Let $\phi$ be a leaf such that $\phi(\bar s)\in K$, for some $\bar s\in\ensuremath{\mathbb{R}}$, and let $V_-^w,V_+^w\in\ensuremath{\mathbb{Z}}^2$ be such that $K\subset R(\alpha_-+V_-^w)$ and $K\subset L(\alpha_++V_+^w)$ (see Figure \ref{figlr}), where $\alpha_-$ and $\alpha_+$ are the transverse lines given by Lemma \ref{lemacurvas} (note that we can find such lines since $K$ is compact, and therefore bounded). Then $\phi(\bar s)\in R(\alpha_-+V_-^w)\cap L(\alpha_++V_+^w)$, and so, as remarked after Definition \ref{defequivalence}, $\phi(s)\in R(\alpha_-+V_-^w)$ for every $s>\bar s$, and $\phi(s)\in L(\alpha_++V_+^w)$ for every $s<\bar s$, that is, $[\phi]\subset R(\alpha_-+V_-^w)\cup L(\alpha_++V_+^w)$. However, $\alpha_-$ is directed by $v_-$ with $\langle v_-,\rho_0^\perp\rangle<0$ and $\langle v_-,\rho_0\rangle>0$ (and $\alpha_+$ is directed by $v_+$ with $\langle v_+,\rho_0^\perp\rangle>0$ and $\langle v_+,\rho_0\rangle>0$) and, by Lemma \ref{lemacone}, $[\tilde \gamma_0+w]$ is contained in $B(L_0,C_{\tilde{z}_0}^{\rho_0}(v)+w)\cup B(L_1,\tilde{z}_0+w)$, where the cone $C_{\tilde{z}_0}^{\rho_0}(v)$ is generated by $\rho_0$ and $v$ is a unit vector such that $\langle v,\rho_0\rangle>0$ and $0< \langle v,\rho_0^\perp\rangle<\langle v_+,\rho_0^\perp\rangle$. Hence there is $\bar t=\max\{t\in\ensuremath{\mathbb{R}}\mid(\tilde \gamma_0+w)(t)\in R(\alpha_-+V_-^w)\cup L(\alpha_++V_+^w)\}$, and therefore $\phi$ can only cross $\tilde \gamma_0+w$ before $\bar t$. Proceeding symmetrically we can obtain a lower bound for the instant at which $\phi$ can cross $\tilde \gamma_0+w$, thus proving the statement.
\begin{figure}
\caption{Proof of Lemma \ref{lemacompacto}.}
\label{figlr}
\end{figure}
\end{proof}
\begin{corolario}\label{lemadiresq}
Let $\zeta:[a,b]\to\ensuremath{\mathbb{R}}^2$ be a transverse path and $w\in\ensuremath{\mathbb{Z}}^2$. If $[\zeta]\cap l(\tilde \gamma_0+w)\neq\emptyset$ and $[\zeta]\cap r(\tilde \gamma_0+w)\neq\emptyset$, then $\zeta\pitchfork_{\tilde\mathcal{F}}(\tilde \gamma_0+w)$.
\end{corolario}
\begin{proof}
By Proposition \ref{proprl}, it is sufficient to prove that the set
$$I_\zeta=\{t\in\ensuremath{\mathbb{R}}\mid\phi_{(\tilde \gamma_0+w)(t)}\textrm{ crosses }\zeta\}$$
is compact. But this follows directly from Lemma \ref{lemacompacto}.
\end{proof}
\begin{lema}\label{lemalinhanovo}
If $\tilde\gamma':\ensuremath{\mathbb{R}}\to\ensuremath{\mathbb{R}}^2$ is such that $\tilde\gamma'\sim_{\tilde\mathcal{F}}\tilde \gamma_0$, then $\tilde\gamma'$ is a line.
\end{lema}
\begin{proof}
By Lemma \ref{lemalinhaunica}, since $\tilde\gamma'\sim_{\tilde\mathcal{F}}\tilde \gamma_0$, we have that $\tilde\gamma'$ intersects each leaf at most once, therefore $\tilde\gamma'$ is a simple path. Now let $K\subset\ensuremath{\mathbb{R}}^2$ be a compact set. Since $\tilde\gamma'\sim_{\tilde\mathcal{F}}\tilde \gamma_0$, there is a reparametrization of $\tilde\gamma'$ so that $\phi_{\tilde \gamma_0(t)}=\phi_{\tilde\gamma'(t)}$, for every $t\in\ensuremath{\mathbb{R}}$. Note that $\tilde\gamma'$ being proper is a property that does not depend on its parametrization. So, by Lemma \ref{lemacompacto}, there is $M>0$ such that $[\phi_{\tilde\gamma'(t)}]\cap K=\emptyset$, for every $|t|>M$. Thus, if $t'\in(\tilde\gamma')^{-1}(K)$, then $|t'|<M$, and so $\tilde\gamma'$ is a proper path, proving the lemma.
\end{proof}
\begin{lema}\label{lemagamaw}
Given $w\in\ensuremath{\mathbb{Z}}^2_*$, if $\tilde \gamma_0$ and $\tilde \gamma_0+w$ intersect the same leaf $\phi$, then there are sequences $t^+_k,s^+_k\nearrow+\infty$, $t^-_k,s^-_k\searrow-\infty$ and transverse paths $\tilde\gamma'_i:\ensuremath{\mathbb{R}}\to\ensuremath{\mathbb{R}}^2$, $i=1,2$, such that:
\begin{enumerate}[(i)]
\item $\tilde\gamma'_1\sim_{\tilde\mathcal{F}}\tilde \gamma_0$ and $\tilde\gamma'_2\sim_{\tilde\mathcal{F}}\tilde \gamma_0+w$;
\item $\tilde\gamma'_1(t^+_k)=\tilde\gamma'_2(s^+_k)$ and $\tilde\gamma'_1(t^-_k)=\tilde\gamma'_2(s^-_k)$, for every $k\in\ensuremath{\mathbb{N}}$.
\end{enumerate}
\end{lema}
\begin{proof}
Since $z_0$ is recurrent, we can find sequences $t^+_k,s^+_k\nearrow+\infty$ such that $\tilde \gamma_0(t_k^+)$ and $(\tilde \gamma_0+w)(s_k^+)$ belong to the leaf $\phi_k^+=\phi+w_k^+$, for some $w_k^+\in\ensuremath{\mathbb{Z}}^2$ (and, in the same way, sequences $t^-_k,s^-_k\searrow-\infty$ with the same property, with leaves $\phi_k^-=\phi+w_k^-$). Let, for each $k\in \ensuremath{\mathbb{N}}$, $W_k^+$ be a tubular neighborhood of the leaf $\phi+w_k^+$ such that $\tilde \gamma_0(t_k^+),(\tilde \gamma_0+w)(s_k^+)\in W_k^+$ and $W_k^-$ a tubular neighborhood of the leaf $\phi+w_k^-$ such that $\tilde \gamma_0(t_k^-),(\tilde \gamma_0+w)(s_k^-)\in W_k^-$. Since $||w_k^\pm||\to\infty$, we can assume that the neighborhoods are mutually disjoint. So, performing in each neighborhood a modification similar to the one made in Lemma \ref{lemacurvas} (see Figure \ref{figlemacurvas}), we can obtain transverse paths $\tilde\gamma'_1$ and $\tilde\gamma'_2$ which satisfy properties $(i)$ and $(ii)$ of the statement.
\end{proof}
\begin{obs}\label{obslinhas}
Note that by Lemma \ref{lemalinhanovo}, we have that $\tilde\gamma'_1$ and $\tilde\gamma'_2$ are lines.
\end{obs}
\begin{lema}\label{lemacompconexa}
If $\tilde \gamma_0$ and $\tilde \gamma_0+w$ cross the same leaf, then every connected component of $L(\tilde\gamma'_1)\cap R(\tilde\gamma'_2)$ and of $R(\tilde\gamma'_1)\cap L(\tilde\gamma'_2)$ is bounded, where $\tilde\gamma'_1$ and $\tilde\gamma'_2$ are the paths given by Lemma \ref{lemagamaw}.
\end{lema}
\begin{proof}
Let us assume, by contradiction, that the result is not true. We can, without loss of generality, assume that there is an unbounded connected component $O$ of $R(\tilde\gamma'_1)\cap L(\tilde\gamma'_2)$. The case where there is an unbounded connected component of $L(\tilde\gamma'_1)\cap R(\tilde\gamma'_2)$ is similar. Since $\tilde\gamma'_1$ and $\tilde\gamma'_2$ are lines that cross each other, any connected component of the complement of $[\tilde\gamma'_1]\cup[\tilde\gamma'_2]$ has on its boundary points that are in $[\tilde\gamma'_1]$ but not in $[\tilde\gamma'_2]$, and also points that are in $[\tilde\gamma'_2]$ but not in $[\tilde\gamma'_1]$. Let $P_1$ and $P_2$ be points in $\partial O\cap[\tilde\gamma'_1]\cap (\ensuremath{\mathbb{R}}^2\setminus[\tilde\gamma'_2])$ and $\partial O\cap(\ensuremath{\mathbb{R}}^2\setminus[\tilde\gamma'_1])\cap [\tilde\gamma'_2]$, respectively. Let $\phi_1,\phi_2:\ensuremath{\mathbb{R}}\to \textrm{dom}(\tilde I)$ be the leaves of $\tilde\mathcal{F}$ passing through $P_1$ and $P_2$ respectively, and let us assume that $\phi_1(0)=P_1$ and $\phi_2(0)=P_2$. Note that $\phi_1(0)\in L(\tilde\gamma'_2)$, and since every leaf of $\tilde\mathcal{F}$ that intersects $\tilde\gamma'_1$ or $\tilde\gamma'_2$ must cross that path from left to right, we have that $\phi_1((-\infty, 0))$ is contained in $L(\tilde\gamma'_1)\cup L(\tilde\gamma'_2)$. Moreover, if $\phi_1((-\infty, 0))$ is bounded, then the $\alpha$-limit set of $\phi_1$ is contained in $l(\tilde\gamma'_1)\cup l(\tilde\gamma'_2)$. Furthermore, if $\varepsilon>0$ is sufficiently small, then $\phi_1(\varepsilon)$ belongs to $O$.
Analogously, it is possible to show that $\phi_2((0,+\infty))$ is contained in $R(\tilde\gamma'_1)\cup R(\tilde\gamma'_2)$, that its $\omega$-limit set is contained in $r(\tilde\gamma'_1)\cup r(\tilde\gamma'_2)$ and that, if $\varepsilon>0$ is sufficiently small, then $\phi_2(-\varepsilon)$ belongs to $O$.
We know that, since $\tilde \gamma_0$ and $\tilde \gamma_0+w$ have no $\tilde\mathcal{F}$-transverse intersection, the same holds for $\tilde\gamma'_1$ and $\tilde\gamma'_2$, since these paths are equivalent to $\tilde \gamma_0$ and $\tilde \gamma_0+w$, respectively. Therefore, by Corollary \ref{lemadiresq}, $\tilde\gamma'_1$ cannot intersect both $r(\tilde\gamma'_2)$ and $l(\tilde\gamma'_2)$. Let us assume initially that $\tilde\gamma'_1$ does not intersect $l(\tilde\gamma'_2)$.
Let $t_0$ be such that $\tilde\gamma'_1(t_0)=P_1$. By Lemma \ref{lemagamaw} there are $t_{-}<t_0<t_{+}$ and $s_{-}<s_{+}$ such that $\tilde\gamma'_1(t_{-})=\tilde\gamma'_2(s_{-})$ and $\tilde\gamma'_1(t_{+})=\tilde\gamma'_2(s_{+})$. Let $s'_{-}$ be the largest real number smaller than $s_{+}$ such that $\tilde\gamma'_2(s'_{-})= \tilde\gamma'_1(t'_{-})$, with $t'_{-}<t_0$, and let $s'_{+}$ be the smallest real number bigger than $s'_{-}$ such that $\tilde\gamma'_2(s'_{+})= \tilde\gamma'_1(t'_{+})$, with $t'_{+}>t_0$. With the proper orientation, $\tilde\gamma'_1([t'_{-}, t'_{+}])\cup \tilde\gamma'_2([s'_{-}, s'_{+}])$ is the image of a simple closed curve $C_1$, separating the plane into two disjoint components, one of them bounded; $P_1$ belongs to the image of this curve, and if $\varepsilon$ is sufficiently small, then $\phi_1(-\varepsilon)$ and $\phi_1(\varepsilon)$ belong to distinct components of the complement of the curve. But if $H_1$ is a connected component of $l(\tilde\gamma'_2)$ which contains the $\alpha$-limit set of $\phi_1$, then $F_1=H_1\cup\phi_1((-\infty,-\varepsilon))$ does not intersect $[\tilde\gamma'_1]\cup[\tilde\gamma'_2]$. Furthermore, by Lemma \ref{lemailimitada} we have that $H_1$ is an unbounded set, therefore $F_1$ is also unbounded, and thus is contained in an unbounded connected component of the complement of $C_1$. Note also that $\phi_1(-\varepsilon)$ is in the same connected component of the complement of $C_1$ as $F_1$. However, $O$ is contained in the complement of $[\tilde\gamma'_1]\cup[\tilde\gamma'_2]$, and therefore in the complement of $C_1$, and $O$ is also unbounded.
Note that $\phi_1(\varepsilon)\in O$ and that $\phi_1(-\varepsilon)$ and $\phi_1(\varepsilon)$ are in separate components of the complement of $C_1$. Then $O$ and $F_1$ are contained in distinct components of the complement of $C_1$ and both are unbounded, which is absurd.
Let us now assume that $\tilde\gamma'_1$ does not intersect $r(\tilde\gamma'_2)$. Let $s_0$ be such that $\tilde\gamma'_2(s_0)=P_2$. We can, as before, find $t''_{-}<t''_{+}$ and $s''_{-}<s_0<s''_{+}$ such that $\tilde\gamma'_1([t''_{-}, t''_{+}])\cup \tilde\gamma'_2([s''_{-}, s''_{+}])$ is the image of a simple closed curve $C_2$, and such that if $\varepsilon$ is sufficiently small, $\phi_2(-\varepsilon)$ and $\phi_2(\varepsilon)$ belong to distinct components of the complement of the curve $C_2$. Note that, if $H_2$ is the connected component of $r(\tilde\gamma'_2)$ which contains the $\omega$-limit set of $\phi_2$, then $F_2=H_2\cup\phi_2((\varepsilon, +\infty))$ does not intersect $[\tilde\gamma'_1]\cup[\tilde\gamma'_2]$. Proceeding as before we show that the two connected components of the complement of $C_2$ are unbounded, thus obtaining a contradiction.
\end{proof}
\begin{lema}\label{lemalrnovo2}
If $r(\tilde \gamma_0)\cap l(\tilde \gamma_0+w)\neq\emptyset$, then:
\begin{enumerate}[(i)]
\item $[\tilde \gamma_0]\cap l(\tilde \gamma_0+w)\neq\emptyset$, $[\tilde \gamma_0]\cap r(\tilde \gamma_0+w)=\emptyset$;
\item $[\tilde \gamma_0+w]\cap r(\tilde \gamma_0)\neq\emptyset$, $[\tilde \gamma_0+w]\cap l(\tilde \gamma_0)=\emptyset$.
\end{enumerate}
\end{lema}
\begin{proof}
Note that if $\tilde \gamma_0$ and $\tilde \gamma_0 +w$ do not cross a common leaf, the result is trivial. Suppose then that $\tilde \gamma_0$ and $\tilde \gamma_0+w$ cross the same leaf, and let $\tilde\gamma'_1$ and $\tilde\gamma'_2$ be the lines given by Lemma \ref{lemagamaw}. Note that, since $\tilde\gamma'_1\sim_{\tilde\mathcal{F}}\tilde \gamma_0$ and $\tilde\gamma'_2\sim_{\tilde\mathcal{F}}\tilde \gamma_0+w$, we have $r(\tilde \gamma_0)=r(\tilde\gamma'_1)$ and $l(\tilde \gamma_0+w)=l(\tilde\gamma'_2)$. Let $\tilde p'\in r(\tilde\gamma'_1)\cap l(\tilde\gamma'_2)$. Since $r(\tilde\gamma'_1)\subset R(\tilde\gamma'_1)$ and $l(\tilde\gamma'_2)\subset L(\tilde\gamma'_2)$, we have that $\tilde p'$ belongs to $R(\tilde\gamma'_1)\cap L(\tilde\gamma'_2)$ and so, by Lemma \ref{lemacompconexa}, belongs to a bounded connected component of $R(\tilde\gamma'_1)\cap L(\tilde\gamma'_2)$. But the connected component $C$ of $r(\tilde\gamma'_1)$ which contains $\tilde p'$ is unbounded, by Lemma \ref{lemailimitada}, and therefore needs to intersect $[\tilde\gamma'_1]\cup[\tilde\gamma'_2]$. Since $C$ is disjoint from $[\tilde\gamma'_1]$, we have that it intersects $[\tilde\gamma'_2]$, which implies that there is $\bar t$ such that $[\phi_{\tilde\gamma'_2(\bar t)}]\subset r(\tilde \gamma_0)$, and since $\tilde\gamma'_2\sim_{\tilde\mathcal{F}}\tilde \gamma_0+w$, this implies that some point of $\tilde \gamma_0 + w$ is on the leaf $\phi_{\tilde\gamma'_2(\bar t)}$, and therefore is also in $r(\tilde \gamma_0)$.
Since $\tilde \gamma_0$ and $\tilde \gamma_0+w$ do not have a transverse intersection, we deduce by Corollary~\ref{lemadiresq} that $[\tilde \gamma_0+w]\cap l(\tilde \gamma_0)=\emptyset$. The proof that $l(\tilde \gamma_0+w)$ intersects $\tilde \gamma_0$ and therefore $[\tilde \gamma_0]\cap r(\tilde \gamma_0+w)=\emptyset$ is analogous.
\end{proof}
\begin{obs}\label{obslrnovo}
Note that we can obtain a result symmetrical to the previous one, with $r(\tilde \gamma_0+w)\cap l(\tilde \gamma_0)\neq\emptyset$.
\end{obs}
Let $p$ be a singularity of $\mathcal{F}$ and let us fix a lift $\tilde{p}$ of $p$. Then $\tilde{p}$ is a singularity of $\tilde\mathcal{F}$, and we suppose that $\tilde{p}\in l(\tilde \gamma_0)$.
\begin{lema}\label{lemacorte}
Let $w,w'\in\ensuremath{\mathbb{Z}}^2$ be such that $\langle w,\rho_0^\perp\rangle>0$ and $\tilde{p}+w'\in l(\tilde \gamma_0)$. Then $(\tilde{p}+w')+w\in l(\tilde \gamma_0)$.
\end{lema}
\begin{proof}
Suppose, by contradiction, that $(\tilde{p}+w')+w\in r(\tilde \gamma_0)$. Note that $(\tilde{p}+w')+w\in l(\tilde \gamma_0+w)$, so we have that $l(\tilde \gamma_0+w)\cap r(\tilde \gamma_0)\neq\emptyset$ (see Figure \ref{fig46}). Therefore, by Lemma \ref{lemalrnovo2}, we have that $[\tilde \gamma_0+w]\cap r(\tilde \gamma_0)\neq\emptyset$.
\begin{figure}
\caption{Illustration of Lemma \ref{lemacorte}.}
\label{fig46}
\end{figure}
We claim that this implies that $r(\tilde \gamma_0+w)\subset \ensuremath{\mathbb{R}}^2\setminus l(\tilde \gamma_0)$: indeed, if $r(\tilde \gamma_0+w)\cap l(\tilde \gamma_0)\neq\emptyset$, we have, again by Lemma \ref{lemalrnovo2}, that $[\tilde \gamma_0+w]\cap l(\tilde \gamma_0)\neq\emptyset$. So, we get by Proposition \ref{proprl} that $\tilde \gamma_0\pitchfork_{\tilde\mathcal{F}}(\tilde \gamma_0+w)$, which contradicts Lemma \ref{lemaautogamma}, proving the claim. Therefore, since $(\tilde{p}+w')+2w$ is a singularity, it must be contained in $r(\tilde \gamma_0)\cup l(\tilde \gamma_0)$, and since $(\tilde{p}+w')+2w\in r(\tilde \gamma_0+w)$ and $r(\tilde \gamma_0+w)\subset \ensuremath{\mathbb{R}}^2\setminus l(\tilde \gamma_0)$, we have that $(\tilde{p}+w')+2w\in r(\tilde \gamma_0)$. So, by induction, we have that $(\tilde{p}+w')+nw\in r(\tilde \gamma_0)$, for all $n\in\ensuremath{\mathbb{N}}$.
Let us now denote $v'_w=w/||w||$. If $\theta$ is the smallest angle between $v'_w$ and the line generated by $\rho_0$, let $v_w$ be a unit vector whose angle to the line generated by $\rho_0$ is $\theta/2$. Thus, for $n$ sufficiently large, $(\tilde{p}+w')+nw$ is in the connected component of $\ensuremath{\mathbb{R}}^2\setminus (B(L_0,C_{\tilde{z}_0}^{\rho_0}(v_w))\cup B(L_1,\tilde{z}_0))$ lying to the left of the line generated by $\rho_0$ passing through $\tilde{z}_0$ (see Figure \ref{figlemacorte}). But, by Lemma \ref{lemacone}, we have that $[\tilde \gamma_0]\subset B(L_0,C_{\tilde{z}_0}^{\rho_0}(v_w))\cup B(L_1,\tilde{z}_0)$, therefore $(\tilde{p}+w')+nw\in l(\tilde \gamma_0)$, which is a contradiction.
\begin{figure}
\caption{Illustration of Lemma \ref{lemacorte}.}
\label{figlemacorte}
\end{figure}
\end{proof}
Note that an analogous proof yields the following:
\begin{lema}\label{lemacorte2}
Let $w,w'\in\ensuremath{\mathbb{Z}}^2$ be such that $\langle w,\rho_0^\perp\rangle<0$ and $\tilde{p}+w'\in r(\tilde \gamma_0)$. Then $(\tilde{p}+w')+w\in r(\tilde \gamma_0)$.
\end{lema}
Since the slope of $\rho_0^\perp$ is irrational, we can define the following order in $\ensuremath{\mathbb{Z}}^2$:
\begin{definicao}
$w\succ w^\prime \Leftrightarrow\langle w-w^\prime,\rho_0^\perp\rangle>0$.
\end{definicao}
\begin{obs}
Note that such an order is defined simply by projecting $\ensuremath{\mathbb{Z}}^2$ onto the line generated by $\rho_0^\perp$ and using the natural order of that line. In addition, the image of this projection is dense in the line.
\end{obs}
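A small sketch may help visualize this order and the density of the projected lattice; here the direction playing the role of $\rho_0^\perp$ is a made-up vector of irrational slope, not the paper's rotation vector.

```python
import math

# hypothetical direction with irrational slope: projection is injective on Z^2
rho0perp = (1.0, math.sqrt(2))

def proj(w):
    """Projection of w onto the direction rho0perp (up to a positive factor)."""
    return w[0] * rho0perp[0] + w[1] * rho0perp[1]

def succ(w, wp):
    """w > wp in the order of the definition: <w - wp, rho0perp> > 0."""
    return proj((w[0] - wp[0], w[1] - wp[1])) > 0

# the order is total on Z^2: distinct lattice points never project equally
assert succ((1, 0), (0, 0)) and succ((0, 1), (0, 0))
assert succ((2, -1), (0, 0))   # 2 - sqrt(2) > 0

# density of the projected lattice: consecutive projected values in [0, 1]
# leave only small gaps, and the gaps shrink as the lattice window grows
values = sorted(proj((i, j)) for i in range(-15, 16) for j in range(-15, 16))
gaps = [b - a for a, b in zip(values, values[1:]) if 0 <= a <= 1]
assert min(gaps) < 0.05
```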
We will denote $Z_r=\{w\in\ensuremath{\mathbb{Z}}^2 \mid \tilde{p}+w\in r(\tilde \gamma_0)\}$ and $Z_l=\{w\in\ensuremath{\mathbb{Z}}^2\mid \tilde{p}+w\in l(\tilde \gamma_0)\}$. Note that, since $(\tilde p+\ensuremath{\mathbb{Z}}^2)\cap[\tilde \gamma_0]=\emptyset$, we have that $\ensuremath{\mathbb{Z}}^2=Z_l\cup Z_r$. The following lemma shows that the projections of $Z_r$ and $Z_l$ onto the line generated by $\rho_0^\perp$ are contained in two disjoint half-lines.
\begin{lema}\label{lemasemiretas}
The sets $Z_r$ and $Z_l$ defined above satisfy the following properties:
\begin{enumerate}[(i)]
\item If $w\in Z_l$ and $w'\succ w$, then $w'\in Z_l$;
\item If $w\in Z_r$ and $w'\prec w$, then $w'\in Z_r$.
\end{enumerate}
\end{lema}
\begin{proof}
To prove $(i)$, take $w\in Z_l$ and $w'\succ w$, so that $\langle w'-w,\rho_0^{\perp}\rangle>0$. Since $w\in Z_l$, by definition $\tilde{p}+w\in l(\tilde \gamma_0)$ and then, by Lemma \ref{lemacorte}, $\tilde{p}+w+(w'-w)\in l(\tilde \gamma_0)$, so $w'\in Z_l$.
The proof of $(ii)$ is analogous.
\end{proof}
\begin{lema}\label{lemacorte3}
Let $w\in\ensuremath{\mathbb{Z}}^2$ be such that $w\prec 0$. Then there is $w^*\in\ensuremath{\mathbb{Z}}^2$ such that $w^*\in Z_l$ and $w^*+w\in Z_r$.
\end{lema}
\begin{proof}
Let us denote $\langle A,\rho_0^{\perp}\rangle=\{\langle w,\rho_0^{\perp}\rangle\mid w\in A\}$ for $A\subset\ensuremath{\mathbb{Z}}^2$. With this notation, $\langle \ensuremath{\mathbb{Z}}^2,\rho_0^{\perp}\rangle=\langle Z_l,\rho_0^{\perp}\rangle\cup\langle Z_r,\rho_0^{\perp}\rangle$ and $\langle \ensuremath{\mathbb{Z}}^2,\rho_0^{\perp}\rangle$ is dense in $\ensuremath{\mathbb{R}}$. Let us prove that $\langle Z_l,\rho_0^{\perp}\rangle$ is bounded from below. Suppose, by contradiction, that it is unbounded from below, so that there are $w_n\in Z_l$, $n\in\ensuremath{\mathbb{N}}$, with $\langle w_n,\rho_0^{\perp}\rangle\to-\infty$. But, by Lemma \ref{lemasemiretas}, if $w_n\in Z_l$ and $w'\succ w_n$, then $w'\in Z_l$; it follows that $\langle Z_l,\rho_0^{\perp}\rangle=\langle \ensuremath{\mathbb{Z}}^2,\rho_0^{\perp}\rangle$, which is a contradiction. Therefore $\langle Z_l,\rho_0^{\perp}\rangle$ is bounded from below. Analogously, $\langle Z_r,\rho_0^{\perp}\rangle$ is bounded from above. Let $l^*=\inf\langle Z_l,\rho_0^{\perp}\rangle$ and $r^*=\sup\langle Z_r,\rho_0^{\perp}\rangle$. We claim that $l^*=r^*$. Indeed, if $r^*<l^*$, we would have $(r^*,l^*)\cap\langle \ensuremath{\mathbb{Z}}^2,\rho_0^{\perp}\rangle=\emptyset$, contradicting the density of $\langle \ensuremath{\mathbb{Z}}^2,\rho_0^{\perp}\rangle$ in $\ensuremath{\mathbb{R}}$. If $l^*<r^*$, there are $w_l\in Z_l$ and $w_r\in Z_r$ such that $w_r-w_l\succ 0$, and then, by Lemma \ref{lemacorte}, $\tilde p+w_l+(w_r-w_l)=\tilde p+w_r\in l(\tilde \gamma_0)$, contradicting $w_r\in Z_r$. Therefore $l^*=r^*$. Since $l^*=\inf \langle Z_l,\rho_0^{\perp}\rangle$, there is a sequence $w_n\in Z_l$ such that $\langle w_n,\rho_0^{\perp}\rangle\to l^*$.
Hence, since $\langle w,\rho_0^{\perp}\rangle<0$, there is $n'\in \ensuremath{\mathbb{N}}$ such that $\langle w_{n'}+w,\rho_0^{\perp}\rangle<l^*$; in particular $w_{n'}+w\notin Z_l$, and so $w_{n'}+w\in Z_r$. Thus, setting $w^*=w_{n'}$, we have $w^*\in Z_l$ and $w^*+w\in Z_r$, proving the lemma.
\end{proof}
\begin{obs}\label{obscorte}
Analogously to the previous lemma, given $w\succ 0$ we can obtain $w^{**}\in\ensuremath{\mathbb{Z}}^2$ such that $w^{**}\in Z_r$ and $w^{**}+w\in Z_l$.
\end{obs}
\begin{lema}\label{lema1}
Let $w\in\ensuremath{\mathbb{Z}}^2_*$. If $w\succ 0$, then for every $M>0$ there exist $t_M^{+}>M$ and $t_M^{-}<-M$ such that both $\tilde \gamma_0(t_M^{-})$ and $\tilde \gamma_0(t_M^{+})$ lie in $r(\tilde \gamma_0+w)$. Likewise, there exist $s_M^{+}>M$ and $s_M^{-}<-M$ such that both $(\tilde \gamma_0+w)(s_M^{-})$ and $(\tilde \gamma_0+w)(s_M^{+})$ lie in $l(\tilde \gamma_0)$. In particular, $\tilde \gamma_0$ and $\tilde \gamma_0+w$ are not $\tilde{\mathcal{F}}$-equivalent.
\end{lema}
\begin{proof}
Consider first the case where $w\prec 0$. By Lemma \ref{lemacorte3}, there is $w^*\in\ensuremath{\mathbb{Z}}^2$ such that $\tilde p+w^*\in l(\tilde \gamma_0)$ and $\tilde p+w^*+w\in r(\tilde \gamma_0)$, and therefore $\tilde p+w^*+w\in r(\tilde \gamma_0)\cap l(\tilde \gamma_0+w)$. By Lemma \ref{lemalrnovo2}, we have $[\tilde \gamma_0]\cap l(\tilde \gamma_0+w)\neq\emptyset$ and $[\tilde \gamma_0+w]\cap r(\tilde \gamma_0)\neq\emptyset$, so one can find $t_0, s_0$ such that $\tilde \gamma_0(t_0)\in l(\tilde \gamma_0+w)$ and $(\tilde \gamma_0+w)(s_0)\in r(\tilde \gamma_0)$. Fix $M>0$; we show the existence of $t_M^{+}$, the other cases being similar. Let $\phi_0$ be the leaf that passes through $\tilde \gamma_0(t_0)$ and note that, for any $w''\in\ensuremath{\mathbb{Z}}^2$ with $w''\succ 0$, we have $\phi_0\subset l(\tilde \gamma_0+w+w'')\subset l(\tilde \gamma_0+w)$. Let $N>|t_0|$ be an integer. Using Proposition \ref{proppqeps}, one can find a sequence $(p_l, q_l)$ in $\ensuremath{\mathbb{Z}}^2\times\ensuremath{\mathbb{N}}$ with $q_l$ going to infinity and $p_l\succ 0$ such that $\tilde f^{q_l}(\tilde z_0)-p_l$ converges to $\tilde z_0$. This implies, by Lemma \ref{lemaestabilidade}, that for sufficiently large $l$ the path $(\tilde \gamma_0-p_l)\mid_{[q_l-N-1,q_l+ N+1]}$ contains a subpath equivalent to $\tilde \gamma_0\mid_{[-N,N]}$. But since $\tilde \gamma_0\mid_{[-N,N]}$ intersects $\phi_0$, $(\tilde \gamma_0-p_l)\mid_{[q_l-N-1,q_l+ N+1]}$ must also do so, and therefore $\tilde \gamma_0\mid_{[q_l-N-1,q_l+ N+1]}$ intersects $\phi_0+p_l$, which lies in $l(\tilde \gamma_0+w)$. It then suffices to take $l$ such that $q_l>M+N+1$.
The proof for $w\succ 0$ is analogous, using Remark \ref{obscorte}.
\end{proof}
\begin{lema}\label{lema2}
Given $t_1>0$, there is $0<\varepsilon<\frac{1}{2}$ such that, if $\tilde z'\in B(\varepsilon,\tilde z_0)$, then every element of $\tilde{I}^{\ensuremath{\mathbb{Z}}}_{\tilde{\mathcal{F}}}(\tilde z')$ contains a subpath $\tilde{\mathcal{F}}$-equivalent to $\tilde \gamma_0\vert_{[0, t_1]}$.
\end{lema}
\begin{proof}
This follows directly from Lemma \ref{lemaestabilidade}.
\end{proof}
In what follows, we will say that $f$ \emph{does not have bounded deviation in the positive direction of $\rho_0^{\perp}$} if there are $\tilde x_k \in\ensuremath{\mathbb{R}}^2$ and an increasing sequence $(n_k)_{k\in\ensuremath{\mathbb{N}}}$ such that $\lim_{k\to\infty}\langle \tilde{f}^{n_k}(\tilde x_k)-\tilde x_k,\rho_0^{\perp}\rangle=+\infty$. Analogously, we will say that $f$ \emph{does not have bounded deviation in the negative direction of $\rho_0^{\perp}$} if there are $\tilde x_k$ and $(n_k)_{k\in\ensuremath{\mathbb{N}}}$ as before such that the previous limit is equal to $-\infty$. Note that if $f$ does not have bounded deviation in the direction of $\rho_0^{\perp}$, then either $f$ does not have bounded deviation in the positive direction of $\rho_0^{\perp}$ or $f$ does not have bounded deviation in the negative direction of $\rho_0^{\perp}$.
\begin{lema}\label{lema3}
If $f$ does not have bounded deviation in the positive direction of $\rho_0^{\perp}$, then, given $0<\varepsilon<\frac{1}{2}$, there are $\tilde x\in\ensuremath{\mathbb{R}}^2$, $N\in\ensuremath{\mathbb{N}}$ and $P\in\ensuremath{\mathbb{Z}}^2$ such that:
\begin{enumerate}[(i)]
\item $\tilde x\in B(\varepsilon, \tilde z_0)$
\item $P \succ (-2,0)$
\item $\tilde f^{N}(\tilde x)\in B(\varepsilon, \tilde z_0+P)$
\end{enumerate}
\end{lema}
\begin{proof}
We have by Lemma \ref{lemaessencial} that the set $U_{\varepsilon}=\bigcup_{i=0}^{\infty} f^{i}(\pi(B(\varepsilon, \tilde{z}_0)))$ is fully essential, and since $\varepsilon<\frac{1}{2}$, the set $\overline{\pi(B(\varepsilon,\tilde{z}_0))}$ is inessential. So, applying Proposition \ref{propguelman2015rotation}, we get a compact set $K$ of the plane and $M\in\ensuremath{\mathbb{N}}$ such that $[0,1]^2$ is contained in a bounded connected component of $\ensuremath{\mathbb{R}}^2\setminus K$, which we shall denote by $A$, and $K\subset\bigcup_{|i|\leq M,\, ||v||_{\infty}\leq M}\left(\tilde{f}^i(B(\varepsilon, \tilde{z}_0))+v\right)$. Since $f$ does not have bounded deviation in the positive direction of $\rho_0^{\perp}$, there are $P'\in\ensuremath{\mathbb{Z}}^2$ and $l\in\ensuremath{\mathbb{N}}$ such that $l>2M$, $\langle P', \rho_0^{\perp}\rangle > \langle -(2,0), \rho_0^{\perp}\rangle+2M$ and $\tilde{f}^{l}([0,1]^2)\cap([0,1]^2+P')\neq\emptyset$. Since $[0,1]^2\subset A$ and $\tilde{f}^{l}([0,1]^2)\cap([0,1]^2+P')\neq\emptyset$, we have that $\tilde{f}^{l}(A)$ intersects $A+P'$. Since $A$ is bounded, we have that $\tilde{f}^{l}(\partial A)\cap (\partial A+P')\neq\emptyset$, and since $\partial A\subset K$, we get $\tilde{f}^{l}(K)\cap(K+P')\neq\emptyset$. Now, let $\tilde y\in\tilde{f}^{l}(K)\cap(K+P')$. Then there are $n_i\in\ensuremath{\mathbb{Z}}$, $|n_i|\leq M$, and $v_i\in\ensuremath{\mathbb{Z}}^2$, $||v_i||_\infty \leq M$, for $i=1,2$, such that
\begin{align*}
\tilde y\in \tilde{f}^l(K) &\Rightarrow \tilde y \in \tilde{f}^l(\tilde{f}^{n_1}(B(\varepsilon,\tilde z_0 ))+v_1)=\tilde{f}^{l+n_1}(B(\varepsilon,\tilde z_0 ))+v_1 \\
\tilde y \in K+P' &\Rightarrow \tilde y \in \tilde{f}^{n_2}(B(\varepsilon,\tilde z_0))+v_2+P'.
\end{align*}
Then we get
\begin{align*}
\tilde{f}^{-n_2}(\tilde y) &\in(\tilde{f}^{l+n_1-n_2}(B(\varepsilon,\tilde z_0))+v_1)\cap B(\varepsilon,\tilde z_0+v_2+P')\\
\tilde{f}^{-n_2}(\tilde y) -v_1&\in\tilde{f}^{l+n_1-n_2}(B(\varepsilon,\tilde z_0))\cap B(\varepsilon,\tilde z_0+v_2-v_1+P').
\end{align*}
Thus, setting $N=l+n_1-n_2$ and $P=v_2-v_1+P'$, we get the result.
\end{proof}
Note that an analogous result holds when $f$ does not have bounded deviation in the negative direction of $\rho_0^{\perp}$.
With all the results proven so far we can complete the proof of Theorem A.
\begin{proof}[Proof of Theorem A]
Suppose by contradiction that $f$ does not have bounded deviation in the direction of $\rho_0^{\perp}$. Let us assume that $f$ does not have bounded deviation in the positive direction of $\rho_0^{\perp}$ (the other case is analogous).
Since $(1,0)\prec 0$, we have by Lemma \ref{lema1} that $\tilde \gamma_0$ and $\tilde \gamma_0+(1,0)$ are not $\tilde{\mathcal{F}}$-equivalent and, moreover, that $\tilde \gamma_0$ intersects a leaf contained in $l(\tilde \gamma_0+(1,0))$. Let $t_0$ be a moment at which such an intersection occurs, i.e., $\tilde \gamma_0(t_0)$ belongs to a leaf, which we shall denote by $\phi_l$, contained in $l(\tilde \gamma_0+(1,0))$. Analogously, since $-(1,0)\succ 0$, we have by Lemma \ref{lema1} that $\tilde \gamma_0$ and $\tilde \gamma_0-(1,0)$ are not $\tilde{\mathcal{F}}$-equivalent and, moreover, that $\tilde \gamma_0$ intersects a leaf contained in $r(\tilde \gamma_0-(1,0))$. Let $t_1$ be such that $\tilde \gamma_0(t_1)$ belongs to a leaf, which we shall denote by $\phi_r$, contained in $r(\tilde \gamma_0-(1,0))$. Note that, by Lemma \ref{lema1}, we can assume both $t_0$ and $t_1$ are positive, and that $0<t_0<t_1$. Now, let $0<\varepsilon<\frac{1}{2}$ be given by Lemma \ref{lema2}, and let $\tilde x \in \ensuremath{\mathbb{R}}^2$, $N\in\ensuremath{\mathbb{N}}$ and $P\in\ensuremath{\mathbb{Z}}^2$ be given by Lemma \ref{lema3}. We will denote a fixed element of $\tilde{I}^{\ensuremath{\mathbb{Z}}}_{\tilde{\mathcal{F}}}(\tilde x)$ by $\beta_{\tilde{x}}$.
\begin{figure}
\caption{Construction of $\beta_{\tilde{x}}$}
\label{fig1}
\end{figure}
Let us prove that $\beta_{\tilde{x}}$ intersects $r(\tilde \gamma_0-(1,0))$ and $l(\tilde \gamma_0+P+(1,0))$. Since $\tilde x\in B(\varepsilon,\tilde z_0)$, we have by Lemma \ref{lema2} that $\tilde \gamma_0\vert_{[0, t_1]}$ is equivalent to a subpath of $\beta_{\tilde{x}}$, so $\beta_{\tilde{x}}$ crosses $\phi_r$. Similarly, since $\tilde f^N(\tilde x)\in B(\varepsilon,\tilde z_0+P)$, we have that $\tilde \gamma_0\vert_{[0,t_1]}+P$ is equivalent to a subpath of $\beta_{\tilde{x}}$, and so $\beta_{\tilde{x}}$ intersects $\phi_l+P$. Let $I=[a,b]$ be an interval such that $\beta_{\tilde{x}}(a)$ belongs to $\phi_r$ and $\beta_{\tilde{x}}(b)$ belongs to $\phi_l+P$.
We claim that, for every $w\in\ensuremath{\mathbb{Z}}^2$ such that $-(1,0)\prec w\prec P+(1,0)$, the path $\tilde \gamma_0+w$ intersects $\beta_{\tilde{x}}|_I$ $\tilde{\mathcal{F}}$-transversally. Let us first prove that $[\phi_l+P]\subset l(\tilde \gamma_0+w)$ and $[\phi_r]\subset r(\tilde \gamma_0+w)$. Suppose by contradiction that $[\phi_r]\not\subset r(\tilde \gamma_0+w)$. We have two possibilities: either $[\phi_r]\subset l(\tilde \gamma_0+w)$ or $\tilde \gamma_0+w$ crosses $\phi_r$. If $[\phi_r]\subset l(\tilde \gamma_0+w)$, then $[\tilde \gamma_0]\cap l(\tilde \gamma_0+w)\neq\emptyset$. In addition, since $0\prec-(1,0)\prec w$, we have by Lemma \ref{lema1} that $[\tilde \gamma_0]\cap r(\tilde \gamma_0+w)\neq\emptyset$. Therefore, by Lemma \ref{lemadiresq}, $\tilde \gamma_0\pitchfork_{\tilde{\mathcal{F}}}\tilde \gamma_0+w$, which contradicts Lemma \ref{lemaauto}. If $\tilde \gamma_0+w$ crosses $\phi_r$, then, since $[\phi_r]\subset r(\tilde \gamma_0-(1,0))$, we have $[\tilde \gamma_0+w]\cap r(\tilde \gamma_0-(1,0))\neq\emptyset$. In addition, since $-(1,0)\prec w$, we have by Lemma \ref{lema1} that $[\tilde \gamma_0]\cap l(\tilde \gamma_0-(1,0)-w)\neq\emptyset$, and thus $[\tilde \gamma_0+w]\cap l(\tilde \gamma_0-(1,0))\neq\emptyset$. Again by Lemma \ref{lemadiresq}, $(\tilde \gamma_0-(1,0))\pitchfork_{\tilde{\mathcal{F}}}(\tilde \gamma_0+w)$, which contradicts Lemma \ref{lemaauto}. Therefore $[\phi_r]\subset r(\tilde \gamma_0+w)$. In a symmetrical way, using the fact that $w\prec P+(1,0)\prec P$, one proves that $[\phi_l+P]\subset l(\tilde \gamma_0+w)$. So, by Lemma \ref{lemadiresq}, we have that $(\tilde \gamma_0+w)\pitchfork_{\tilde{\mathcal{F}}}\beta_{\tilde{x}}|_I$, proving the claim. Note also that, by Lemma \ref{lemasubcaminhosadmissiveis}, both $\tilde \gamma_0+w$ and $\beta_{\tilde{x}}\mid_{I}$ are admissible.
Now let $w_1,w_2\in\ensuremath{\mathbb{Z}}^2$ be such that $-(1,0)\prec w_1\prec w_2\prec P+(1,0)$. Since $(\tilde \gamma_0+w_1)\pitchfork_{\tilde{\mathcal{F}}}\beta_{\tilde{x}}|_I$, there are $t',s'\in\ensuremath{\mathbb{R}}$ such that $\tilde \gamma_0+w_1$ and $\beta_{\tilde{x}}$ intersect $\tilde{\mathcal{F}}$-transversally at $(\tilde \gamma_0+w_1)(t')=\beta_{\tilde{x}}(s')$. In particular, one can find an interval $J_1=[a_1, b_1]$ containing $t'$ such that $(\tilde \gamma_0+w_1)\mid_{J_1}\pitchfork_{\tilde{\mathcal{F}}}\beta_{\tilde{x}}$, and Lemma \ref{lemasubcaminhosadmissiveis} and Proposition \ref{propadm} show that, for any $c<a_1$, the path $\beta'_c=(\tilde \gamma_0+w_1)|_{[c,t']}\,\beta_{\tilde{x}}|_{[s',b]}$ is admissible.
Note that, by Lemma \ref{lema1}, there exists some $c_0<a_1$ such that $\phi_0=\phi_{(\tilde \gamma_0+w_1)(c_0)}$ is in $r(\tilde \gamma_0+w_2)$. This implies that, for any $c\leq c_0$, $\beta'_c \pitchfork_{\tilde{\mathcal{F}}} \tilde \gamma_0+w_2$, as $\beta'_c$ intersects both $\phi_0\subset r(\tilde \gamma_0+w_2)$ and $\phi_l+P\subset l(\tilde \gamma_0+w_2)$. Let $J_2=[a_2, b_2]$ be an interval such that $\beta'_c \pitchfork_{\tilde{\mathcal{F}}} (\tilde \gamma_0+w_2)\mid_{J_2}$. We can, as in the proof of Lemma \ref{lema1}, find $w_3$ such that there exists some interval $J_3=[a_3,b_3]$ with $b_3< c_0$ such that $(\tilde \gamma_0+w_3)\mid_{J_3}$ is equivalent to $(\tilde \gamma_0+w_2)\mid_{J_2}$. But this implies that the path $\beta_{a_3}'$ has a transverse intersection with $\beta_{a_3}'+w_3-w_1$, since the latter has a subpath equivalent to $(\tilde \gamma_0+w_1)\mid_{J_3}+(w_3-w_1)=(\tilde \gamma_0+w_3)\mid_{J_3}$, a contradiction with Lemma \ref{lemaauto}, concluding the proof.
\begin{figure}
\caption{Construction of $\beta'$}
\label{fig2}
\end{figure}
\begin{figure}
\caption{Intersection of $\beta'$ with $\beta'+w_3-w_1$}
\label{fig3}
\end{figure}
\end{proof}
\subsection{Proof of Corollary B}
Assume that $f:\mathbb{T}^2\to\mathbb{T}^2$ is isotopic to the identity and has a periodic point, let $\tilde f:\ensuremath{\mathbb{R}}^2\to\ensuremath{\mathbb{R}}^2$ be a lift of $f$, and assume that $\rho(\tilde f)$ is a non-degenerate line segment. Note that, as $f$ has a periodic point, $\rho(\tilde f)$ has at least one point in $\ensuremath{\mathbb{Q}}^2$. If $\rho(\tilde f)$ has two distinct points in $\ensuremath{\mathbb{Q}}^2$, then the result follows directly from \cite{Davalos2013SublinearDiffusion}. So we may assume that $\rho(\tilde f)$ has a single point with rational coordinates, and then Theorem C from \cite{LeCalvezTal2015ForcingTheory} implies that there exist integers $p_1, p_2, q$ and some vector $\rho_0$ such that $\rho(\tilde f)=\{(p_1/q, p_2/q) +t \rho_0 \mid 0\le t \le q\}$. But then we can take $g=f^q$ and its lift $\tilde g= \tilde f^q-(p_1, p_2)$ and apply Theorem A to them, deducing that $g$ has uniformly bounded deviations in the direction $\rho_0^{\perp}$. But if a power of $f$ has uniformly bounded deviations in a given direction, then $f$ itself also has this property, which concludes the proof of the Corollary.
\end{document}
\begin{document}
\title{Some Quantum Information Inequalities\\
from a Quantum Bayesian Networks Perspective}
\author{Robert R. Tucci\\
P.O. Box 226\\
Bedford, MA 01730\\
[email protected]}
\date{\today}
\maketitle
\vskip2cm
\section*{Abstract}
This is primarily a pedagogical paper.
The paper re-visits
some well-known quantum
information theory
inequalities.
It does this from a
quantum Bayesian networks
perspective. The paper
illustrates some of the benefits
of using quantum Bayesian networks
to discuss quantum SIT (Shannon Information Theory).
\section{Introduction}
For a good textbook on classical (non-quantum)
Shannon
Information Theory (SIT), see, for example,
Ref.\cite{CovTh}
by Cover and Thomas.
For a good
textbook
on quantum SIT, see, for example,
Ref.\cite{Wilde} by Wilde.
This paper is written assuming that
the reader has first read
a previous paper by the same
author, Ref.\cite{Tuc-mixology},
which is an introduction
to quantum Bayesian networks for mixed states.
This paper re-visits
some well-known quantum
information theory
inequalities (mostly
the monotonicity
of the relative entropy
and consequences thereof).
It does this from a
quantum Bayesian networks
perspective. The paper
illustrates some of the benefits
of using quantum Bayesian networks
to discuss quantum SIT.
\section{Preliminaries and Notation}
Reading all of Ref.\cite{Tuc-mixology}
is a
prerequisite to reading this paper.
This
section
will introduce
only notation
which hasn't
been defined already in
Ref.\cite{Tuc-mixology}.
Let
\beq
S_{\rvx,\rvy} =
S_\rvx\times S_\rvy=
\{(x,y):x\in S_\rvx, y\in S_\rvy\}
\;,
\eeq
\beq
\calh_{\rvx,\rvy} =
\calh_\rvx\otimes \calh_\rvy=
span\{\ket{x}_\rvx\ket{y}_\rvy:
x\in S_\rvx, y\in S_\rvy\}
\;.
\eeq
Suppose
$\{P_{\rvx,\rvy}(x,y)\}_{\forall x,y}
\in pd(S_{\rvx,\rvy})$.
We will often use
the expectation operators
$E_x = \sum_x P(x)$,
$E_{x,y}=\sum_{x,y} P(x,y)$,
and
$E_{y|x}=\sum_y P(y|x)$.
Note that $E_{x,y} = E_x E_{y|x}$.
Let
\beq
P(x:y) = \frac{P(x,y)}{P(x)P(y)}
\;.
\eeq
Note that
$E_{x} P(x:y) = E_{y} P(x:y)=1$.
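For example, the first of these identities can be checked directly:
\beq
E_x P(x:y)
= \sum_x P(x)\frac{P(x,y)}{P(x)P(y)}
= \frac{1}{P(y)}\sum_x P(x,y)
= \frac{P(y)}{P(y)}
= 1
\;,
\eeq
and $E_y P(x:y)=1$ follows in the same way.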
We will use the following
measures of various
types of information (entropy):
\begin{itemize}
\item
The (plain) entropy of the
random variable $\rvx$ is defined
in the classical case by
\beq
H(\rvx) =
E_x \ln \frac{1}{P(x)}
\;,
\eeq
which we also call
$H_{P_\rvx}(\rvx)$,
$H\{P(x)\}_{\forall x}$,
and
$H(P_\rvx)$.
This quantity measures the
spread of $P_\rvx$.
The quantum generalization of this is, for
$\rho_{\rvx}\in
dm(\calh_{\rvx})$,
\beq
S(\rvx) =
-\tr_\rvx (\rho_\rvx\ln \rho_\rvx)
\;,
\eeq
which we also call
$S_{\rho_\rvx}(\rvx)$ and
$S(\rho_\rvx)$.
One can also consider
plain entropy for
a joint random variable
$\rvx=(\rvx_1,\rvx_2)$.
In the
classical
case,
for $P_{\rvx_1,\rvx_2}\in pd(S_{\rvx_1,\rvx_2})$
with marginal probability distributions $P_{\rvx_1}$
and $P_{\rvx_2}$,
one defines a joint entropy $H(\rvx_1,\rvx_2)=H(\rvx)$
and partial entropies
$H(\rvx_1)$ and $H(\rvx_2)$.
The quantum generalization of this is, for
$\rho_{\rvx_1,\rvx_2}\in dm(\calh_{\rvx_1,\rvx_2})$
with partial density matrices
$\rho_{\rvx_1}$ and $\rho_{\rvx_2}$,
a joint entropy
$S(\rvx_1,\rvx_2)=S(\rvx)$
with partial entropies
$S(\rvx_1)$ and $S(\rvx_2)$.
\item
The conditional entropy of $\rvy$ given $\rvx$
is defined
in the classical case by
\beqa
H(\rvy|\rvx) &=&
E_{x,y} \ln \frac{1}{P(y|x)}
\\
&=&
H(\rvy,\rvx)-H(\rvx)
\;,
\eeqa
which we also call
$H_{P_{\rvx, \rvy}}(\rvy|\rvx)$.
This quantity measures the conditional
spread
of $\rvy$ given $\rvx$.
The quantum generalization of this is, for
$\rho_{\rvx,\rvy}\in
dm(\calh_{\rvx,\rvy})$,
\beq
S(\rvy|\rvx) =
S(\rvy,\rvx) -S(\rvx)
\;,
\eeq
which we also call
$S_{\rho_{\rvx, \rvy}}(\rvy|\rvx)$.
\item The Mutual Information (MI)
of $\rvx$ and $\rvy$
is defined
in the classical case by
\beqa
H(\rvy:\rvx) &=&
E_{x,y} \ln
P(x:y)= E_x E_y P(x:y) \ln P(x:y)
\\
&=&
H(\rvx) + H(\rvy) - H(\rvy,\rvx)
\;,
\eeqa
which we also call
$H_{P_{\rvx,\rvy}}(\rvy:\rvx)$.
This quantity measures the correlation
between $\rvx$ and $\rvy$.
The quantum generalization of this is, for
$\rho_{\rvx,\rvy}\in
dm(\calh_{\rvx,\rvy})$,
\beq
S(\rvy:\rvx) =
S(\rvx) + S(\rvy) - S(\rvx,\rvy)
\;,
\eeq
which we also call
$S_{\rho_{\rvx,\rvy}}(\rvy:\rvx)$.
\item The Conditional Mutual Information (CMI,
which
can be read as ``see me")
of $\rvx$ and $\rvy$
given $\rv{\lam}$
is defined
in the classical case by:
\beqa
H(\rvy:\rvx|\rv{\lam})
&=&
E_{x,y,\lam} \ln
\frac{P(x,y|\lam)}{P(x|\lam)P(y|\lam)}
\\
&=&
E_{x,y,\lam} \ln
\frac{P(x,y,\lam)P(\lam)}{P(x,\lam)P(y,\lam)}
\\
&=&
H(\rvx|\rv{\lam}) + H(\rvy|\rv{\lam})
- H(\rvy,\rvx|\rv{\lam})
\;,
\eeqa
which we also call
$H_{P_{\rvx,\rvy,\rv{\lam}}}(\rvy:\rvx|\rv{\lam})$.
This
quantity measures the conditional correlation
of $\rvx$ and $\rvy$ given $\rv{\lam}$.
The quantum generalization of this is, for
$\rho_{\rvx,\rvy,\rv{\lam}}\in
dm(\calh_{\rvx,\rvy,\rv{\lam}})$,
\beq
S(\rvy:\rvx|\rv{\lam})
=
S(\rvx|\rv{\lam}) + S(\rvy|\rv{\lam})
- S(\rvy,\rvx|\rv{\lam})
\;,
\eeq
which we also call
$S_{\rho_{\rvx,\rvy,\rv{\lam}}}(\rvy:\rvx|\rv{\lam})$.
\item The relative
information
of $P\in pd(S_\rvx)$
divided by $Q\in pd(S_\rvx)$
is defined by
\beq
D\{P(x)//Q(x)\}_{\forall x} =
\sum_x P(x)\ln\frac{P(x)}{Q(x)}
\;,
\eeq
which we also call
$D(P_{\rvx}//Q_\rvx)$.
The quantum generalization of this is, for
$\rho_{\rvx}, \sigma_\rvx\in
dm(\calh_{\rvx})$,
\beq
D(\rho_\rvx//\sigma_\rvx) =
\tr_\rvx\left(\rho_\rvx (\ln \rho_\rvx
-\ln \sigma_\rvx)\right)
\;.
\eeq
\end{itemize}
Note that we
define entropies
using natural logs. Our
strategy is to
use natural log entropies
for all intermediate analytical
calculations, and to
convert
to base-2 logs
at the end of those
calculations if
a base-2 log numerical answer
is desired. Such a conversion is
of course trivial
using $\log_2 x = \frac{\ln x}{\ln 2}$ and
$\ln 2 \approx 0.6931$.
The notation $\atrho{\rho}\calf\atrhoend$
will be used to
indicate that all quantum
entropies $S(\cdot)$
in statement $\calf$ are to be evaluated
at density matrix $\rho$.
For example,
$\atrho{\rho}
S(\rva) + S(\rvb|\rvc)=0
\atrhoend$
will stand for
$S_\rho(\rva) + S_\rho(\rvb|\rvc)=0$.
Define
\beq
I_\rvx = \sum_{x\in S_\rvx}
\ket{x}_\rvx\bra{x}_\rvx
\;.
\eeq
Define $1^{N}$ to be the $N$-tuple
whose $N$ components are all equal to one.
Recall
from Ref.\cite{Tuc-mixology}
that
an amplitude
$\{A(y|x)\}_{\forall y,x}$ is
said to be an {\bf isometry} if
\beq
\sum_{y}
\sandb{A(y|x)}
\sandb{
\hc\\
x\rarrow x'
}
=
\delta_x^{x'}
\;
\eeq
for all $x,x'\in S_\rvx$.
\section{Monotonicity of Relative
Entropy (MRE)}
In this section, we will state
the monotonicity of
the relative entropy (MRE,
which can be read as ``more") and
derive some of its many consequences,
such as $MI\geq 0$,
$CMI\geq 0$,
and the data processing inequalities.
\subsection{General MRE Inequality}
\begin{claim}
Suppose $\{P_\rva(a)\}_{\forall a\in S_\rva}$
and $\{Q_\rva(a)\}_{\forall a\in S_\rva}$
are both probability distributions.
Suppose $\{T_{\rvb|\rva}(b|a)\}_
{\forall (b,a)\in S_{\rvb,\rva}}$ is a
transition probability matrix, meaning that
its entries are non-negative and
satisfy $\sum_b T_{\rvb|\rva}(b|a)= 1$
for any $a\in S_\rva$.
Then
\beq
D(T_{\rvb|\rva}P_\rva//
T_{\rvb|\rva}Q_\rva)
\leq
D(P_\rva//Q_\rva)
\;,\label{eq-mono-cla}
\eeq
where we are overloading the symbol
$T_{\rvb|\rva}$ so that it
stands also for an $N_\rvb\times N_\rva$
matrix,
and we are overloading the symbols
$P_\rva, Q_\rva$
so that they stand also for
$N_\rva$-dimensional column vectors.
\end{claim}
\proof
\beqa
D(P//Q) &=&
\sum_a P(a) \ln \frac{P(a)}{Q(a)}
\\
&=&
\sum_{a,b} T(b|a)P(a) \ln \frac{T(b|a)P(a)}{T(b|a)Q(a)}
\\
&\geq&
\sum_{b} (TP)(b) \ln \frac{(TP)(b)}{(TQ)(b)}
\label{eq-from-log-sum}
\\
&=&
D(TP//TQ)
\;
\eeqa
Eq.(\ref{eq-from-log-sum}) follows
from the so-called log-sum inequality (see
Ref.\cite{CovTh}).
\qed
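For completeness, we recall the log-sum inequality used in the proof: for non-negative numbers $a_1,\ldots,a_n$ and $b_1,\ldots,b_n$,
\beq
\sum_{i} a_i \ln \frac{a_i}{b_i}
\geq
\left(\sum_{i} a_i\right)
\ln \frac{\sum_{i} a_i}{\sum_{i} b_i}
\;,
\eeq
with equality iff the ratios $a_i/b_i$ are all equal. Eq.(\ref{eq-from-log-sum}) follows by applying this inequality, for each fixed $b$, with the index $i$ running over $a\in S_\rva$ and with the choices $T(b|a)P(a)$ and $T(b|a)Q(a)$ for the two families of numbers, and then summing over $b$.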
Recall from Ref.\cite{Tuc-mixology}
that a channel superoperator
$\calt_{\rvb|\rva}$
is a map from $dm(\calh_\rva)$
to $dm(\calh_\rvb)$ which can be expressed
as
\beq
\calt_{\rvb|\rva}(\cdot)=
\sum_\mu K_\mu (\cdot) K^\dagger_\mu
\;,
\eeq
where the operators $K_\mu:\calh_\rva\rarrow \calh_\rvb$,
called Kraus operators, satisfy:
\beq
\sum_\mu K^\dagger_\mu K_\mu = 1
\;.
\eeq
Ref.\cite{Tuc-mixology}
explains how a channel
superop can be portrayed
in terms of QB nets as
a two body
scattering diagram.
\begin{claim}
Suppose $\rho_\rva, \sigma_\rva\in dm(\calh_\rva)$
and $\calt_{\rvb|\rva}:dm(\calh_\rva)\rarrow dm(\calh_\rvb)$
is a channel superop.
Then
\beq
D(\calt_{\rvb|\rva}(\rho_\rva)//
\calt_{\rvb|\rva}(\sigma_\rva))
\leq
D(\rho_\rva//\sigma_\rva)
\;\label{eq-mono-qua}
\eeq
\end{claim}
\proof
See Ref.\cite{Wilde}
and original references therein.
\qed
Note that
Eq.(\ref{eq-mono-cla})
is a special case of
Eq.(\ref{eq-mono-qua}).
Indeed,
if $\calt_{\rvb|\rva}$
has Kraus operators $\{K_\mu\}_{\forall \mu}$,
then let
\begin{subequations}
\label{eq-rb-k-ra-k}
\beq
\rho_\rvb =
\sum_\mu K_\mu \rho_\rva K^\dagger_\mu
\;,
\eeq
\beq
\sigma_\rvb =
\sum_\mu K_\mu \sigma_\rva K^\dagger_\mu
\;.
\eeq
\end{subequations}
Assume $\rho_\rva$ and $\sigma_\rva$
can both be diagonalized in the same basis
as follows
\begin{subequations}
\beq
\rho_\rva =
\sum_a \sandb{\ket{a}_\rva}
P_\rva(a)\sandb{\hc}
\;,
\eeq
\beq
\sigma_\rva =
\sum_a \sandb{\ket{a}_\rva}
Q_\rva(a)\sandb{\hc}
\;.
\eeq
\label{eq-diag-same-basis}\end{subequations}
Likewise, assume that
$\rho_\rvb$ and $\sigma_\rvb$
can both be diagonalized in the same
basis. Thus assume Eqs.(\ref{eq-diag-same-basis}),
but with the letters $a$'s replaced by $b$'s.
Then Eqs.(\ref{eq-rb-k-ra-k}) reduce to
\begin{subequations}
\beq
P_\rvb = T_{\rvb|\rva} P_\rva
\;,
\eeq
\beq
Q_\rvb = T_{\rvb|\rva} Q_\rva
\;,
\eeq
\end{subequations}
where
\beq
T_{\rvb|\rva}(b|a)=
\sum_\mu |\av{b|K_\mu|a}|^2
\;\label{eq-t-krauss}
\eeq
for all $a\in S_\rva$
and $b\in S_\rvb$.
Clearly, this $T_{\rvb|\rva}$ satisfies
$\sum_b T(b|a)=1$.
Therefore
the quantum MRE
with diagonal density matrices
is just the classical MRE.
\subsection{Subadditivity of Joint Entropy (MI$\geq$0)}
For any random variables $\rva,\rvb$,
\beq
H(\rva, \rvb)\leq H(\rva) + H(\rvb)
\;.
\eeq
This is sometimes called
the subadditivity of the joint
entropy, or the independence upper bound
on the joint entropy.
It can also be written as
(i.e., conditioning reduces entropy)
\beq
H(\rvb|\rva)\leq H(\rvb)
\;,
\eeq
or as (MI$\geq 0$)
\beq
H(\rvb:\rva)\geq 0
\;.
\eeq
\begin{claim} (MI $\geq 0$)
For any $\rho\in dm(\calh_{\rva,\rvb})$,
\beq
S(\rva, \rvb)\leq S(\rva) + S(\rvb)
\;,
\eeq
or, equivalently,
\beq
S(\rvb|\rva)\leq S(\rvb)
\;,
\eeq
or, equivalently,
\beq
S(\rvb:\rva)\geq 0
\;.
\eeq
\end{claim}
\proof
Apply MRE
with
$\calt = \tr_{\rva}$.
\beq
0=D(\rho_\rvb//\rho_\rvb)
\leq
D(\rho_{\rva,\rvb}//
\rho_{\rva}
\rho_{\rvb})=S(\rva:\rvb)
\;.
\eeq
\qed
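The last equality in the proof can be checked directly. Since the factors $\rho_\rva$ and $\rho_\rvb$ act on different tensor slots, $\ln(\rho_\rva\rho_\rvb)=\ln\rho_\rva+\ln\rho_\rvb$, so
\beqa
D(\rho_{\rva,\rvb}//\rho_\rva\rho_\rvb)
&=&
\tr_{\rva,\rvb}\left[\rho_{\rva,\rvb}\left(
\ln\rho_{\rva,\rvb}-\ln\rho_\rva-\ln\rho_\rvb\right)\right]
\\
&=&
-S(\rva,\rvb)+S(\rva)+S(\rvb)
\\
&=&
S(\rva:\rvb)
\;,
\eeqa
where we used
$\tr_{\rva,\rvb}(\rho_{\rva,\rvb}\ln\rho_\rva)=
\tr_{\rva}(\rho_\rva\ln\rho_\rva)=-S(\rva)$,
and likewise for $\rvb$.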
\subsection{Strong Subadditivity of
Joint Entropy (CMI$\geq$0)}
For any random variables $\rva,\rvb,\rve$,
\beq
H(\rva, \rvb|\rve)\leq H(\rva|\rve) + H(\rvb|\rve)
\;.
\eeq
This is sometimes called
the strong subadditivity of the joint
entropy.
It can also be written as
\beq
H(\rvb|\rva,\rve)\leq H(\rvb|\rve)
\;,
\eeq
or as (CMI $\geq 0$)
\beq
H(\rvb:\rva|\rve)\geq 0
\;.
\eeq
\begin{claim} (CMI $\geq 0$)
For any $\rho\in dm(\calh_{\rva,\rvb,\rve})$,
\beq
S(\rva, \rvb|\rve)\leq S(\rva|\rve) + S(\rvb|\rve)
\;,
\eeq
or, equivalently,
\beq
S(\rvb|\rva,\rve)\leq S(\rvb|\rve)
\;,
\eeq
or, equivalently,
\beq
S(\rvb:\rva|\rve)\geq 0
\;.
\eeq
\end{claim}
\proof
Apply MRE
with
$\calt = \tr_{\rva}$
to get
\beq
D(\rho_{\rvb,\rve}//
\rho_{\rve}\frac{I_\rvb}{N_\rvb}
)
\leq
D(
\rho_{\rva,\rvb,\rve}//
\rho_{\rva,\rve}\frac{I_\rvb}{N_\rvb})
\;.
\eeq
Then note that
\beq
S(\rva:\rvb|\rve)=
D(
\rho_{\rva,\rvb,\rve}//
\rho_{\rva,\rve}\frac{I_\rvb}{N_\rvb})
-
D(\rho_{\rvb,\rve}//
\rho_{\rve}\frac{I_\rvb}{N_\rvb}
)
\;.
\eeq
\qed
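The identity used in the last step can be checked directly. Since $\ln\frac{I_\rvb}{N_\rvb}=-(\ln N_\rvb) I_\rvb$, the same computation as in the previous subsection gives
\beqa
D(
\rho_{\rva,\rvb,\rve}//
\rho_{\rva,\rve}\frac{I_\rvb}{N_\rvb})
&=&
-S(\rva,\rvb,\rve)+S(\rva,\rve)+\ln N_\rvb
\;,
\\
D(\rho_{\rvb,\rve}//
\rho_{\rve}\frac{I_\rvb}{N_\rvb})
&=&
-S(\rvb,\rve)+S(\rve)+\ln N_\rvb
\;,
\eeqa
so their difference equals
$S(\rva,\rve)+S(\rvb,\rve)-S(\rva,\rvb,\rve)-S(\rve)
=S(\rva:\rvb|\rve)$.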
\subsection{Araki-Lieb Inequality}
\begin{claim}(Araki-Lieb Inequality\cite{ArLi})
For any $\rho\in dm(\calh_{\rva,\rvb})$,
\beq
|S(\rva)-S(\rvb)|\leq S(\rva,\rvb)
\;.
\eeq
or, equivalently,
\beq
\left\{
\begin{array}{l}
-S(\rvb)\leq S(\rvb|\rva)\\
-S(\rva)\leq S(\rva|\rvb)
\end{array}
\right.
\;.
\eeq
\end{claim}
\proof
Consider a pure state $\rho_{\rva,\rvb,\rve}
\in dm(\calh_{\rva,\rvb,\rve})$
with partial trace $\rho_{\rva,\rvb}$.
Then
\beq
S(\rvb,\rve)\leq S(\rvb) + S(\rve)
\;.
\label{eq-be-st-ad}
\eeq
According to Claim \ref{claim-sa-is-sb},
$S(\rvb,\rve)=S(\rva)$ and $S(\rve)=S(\rva,\rvb)$.
These two
identities allow us to excise any
mention of $\rve$ from Eq.(\ref{eq-be-st-ad}).
Thus
Eq.(\ref{eq-be-st-ad}) is equivalent to
\beq
S(\rva)\leq S(\rvb) + S(\rva,\rvb)
\;,
\eeq
which immediately gives
\beq
-S(\rvb)\leq S(\rvb|\rva)
\;.
\eeq
The inequality $-S(\rva)\leq S(\rva|\rvb)$ follows by exchanging the roles of $\rva$ and $\rvb$.
\qed
Note that classically, one has
\beq
0
\stackrel{(a)}{\leq}
H(\rvb|\rva)
\stackrel{(b)}{\leq}
H(\rvb)
\;.
\eeq
Inequality (a) follows
from the definition of
$H(\rvb|\rva)$, and (b)
follows from MI$\geq 0$.
For quantum states, on the other hand,
\beq
-S(\rvb)
\stackrel{(a)}{\leq}
S(\rvb|\rva)
\stackrel{(b)}{\leq}
S(\rvb)
\;,
\eeq
or, equivalently,
\beq
|S(\rvb|\rva)|\leq S(\rvb)
\;.
\eeq
Inequality (a) follows
from the Araki-Lieb inequality, and (b)
follows from MI$\geq 0$.
\subsection{Monotonicity (Only in Some Special Cases) of Plain Entropy }
Consider the two node CB net
\beq
\entrymodifiers={++[o][F-]}
\xymatrix{
\rvb&\rva\ar[l]
}
\;.\label{eq-ab-cbnet}
\eeq
For this net, $P_\rvb = P_{\rvb|\rva}P_\rva$.
Assume also that
$P_{\rvb|\rva}$ is a square matrix (i.e.,
that $N_\rva=N_\rvb$)
and
that it is doubly stochastic (i.e.,
that $\sum_b P(b|a)=1$
for all $a$, and $\sum_a P(b|a)=1$
for all $b$. In other words,
each of
its columns and rows sums to one.).
Then
the classical MRE
implies
\beq
D(P_\rvb//\frac{1^{N}}{N})
\leq
D(P_\rva//\frac{1^{N}}{N})
\;,
\eeq
where $N=N_\rva=N_\rvb$.
(The reason we need $\sum _a P(b|a)=1$
is that we
must have $P_{\rvb|\rva}1^N = 1^N$).
Next note that for any
random variable $\rvx$,
\beq
H(P_\rvx) = \ln(N_\rvx) -
D(P_\rvx//\frac{1^{N_
\rvx}}{N_\rvx})
\;.
\eeq
Thus,
\beq
H(\rvb)\geq H(\rva)
\;.
\eeq
Thus,
when $P_{\rvb|\rva}$
is square and doubly stochastic,
$P_{\rvb}$
has a larger spread
than $P_\rva$.
This situation is sometimes
described
by saying that ``mixing" increases
entropy.
An important scenario
where the opposite is
the case and
$P_{\rvb}$
has a {\it smaller} spread
than $P_{\rva}$
is
when
$\rvb=f(\rva)$
for some deterministic
function $f:S_\rva\rarrow S_\rvb$.
In this case, $P(b|a)=\delta(b, f(a))$
(clearly not a doubly stochastic
transition matrix). Thus
\beq
H(\rvb,\rva)= H(\rvb|\rva)+H(\rva)=H(\rva)
\;
\eeq
Also
\beq
H(\rvb,\rva)= H(\rva|\rvb)+H(\rvb)
\;
\eeq
Hence
\beq
H(\rva|\rvb)+H(\rvb) = H(\rva)
\;
\eeq
But $H(\rva|\rvb)\geq 0$. Hence
\beq
H(\rvb)=H(f(\rva))\leq H(\rva)
\;.
\eeq
Loosely speaking,
the random variable $f(\rva)$
varies over a smaller range
than $\rva$ (unless $f()$ is a
bijection),
so $f(\rva)$ has a smaller spread than $\rva$.
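As a simple numerical illustration (with values chosen here only for concreteness): take $\rva$ uniformly distributed on $S_\rva=\{0,1,2,3\}$ and $f(a)=a \bmod 2$. Then $\rvb=f(\rva)$ is uniformly distributed on $\{0,1\}$, so
\beq
H(\rvb)=\ln 2 < \ln 4 = H(\rva)
\;,
\eeq
as expected, since this $f$ is not a bijection.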
\begin{claim}\label{cl-more-plain-h}
Suppose $\rho_\rva\in dm(\calh_\rva)$
and $\calt_{\rvb|\rva}:dm(\calh_\rva)
\rarrow dm(\calh_\rvb)$
is a square (i.e., $N_\rva=N_\rvb$)
channel superop
such that $I_\rvb=\calt_{\rvb|\rva}(I_\rva)$. Then
\beq
S(\calt_{\rvb|\rva}(\rho_\rva))\geq
S(\rho_\rva)
\;.
\eeq
\end{claim}
\proof
Let
$\rho_\rvb = \calt_{\rvb|\rva}(\rho_\rva)$.
Then MRE implies
\beq
D(\rho_\rvb//\frac{I_\rvb}{N})
\leq
D(\rho_\rva//\frac{I_\rva}{N})
\;,
\eeq
where $N=N_\rva=N_\rvb$.
Now note that for $\rvx=\rva,\rvb$,
\beq
S(\rho_\rvx) = \ln(N_\rvx) -
D(\rho_\rvx//\frac{I_\rvx}{N_\rvx})
\;.
\eeq
Subtracting this identity for $\rvx=\rva$ and $\rvx=\rvb$ from the MRE inequality, and using $N_\rva=N_\rvb$, yields the claim.
\qed
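Claim \ref{cl-more-plain-h} can be spot-checked numerically. The sketch below is mine, not the paper's; it uses a mixture of random unitaries, which is one convenient way to build a square, identity-preserving channel superop:

```python
import numpy as np

def vn_entropy(rho):
    """Von Neumann entropy S(rho) in nats."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return -np.sum(lam * np.log(lam))

def rand_unitary(N, rng):
    z = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    q, _ = np.linalg.qr(z)     # QR of a complex Gaussian matrix gives a unitary
    return q

def rand_density(N, rng):
    z = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    rho = z @ z.conj().T
    return rho / np.trace(rho).real

rng = np.random.default_rng(2)
N = 4
rho = rand_density(N, rng)
# A mixture of unitaries is a unital channel: T(I) = I.
Us = [rand_unitary(N, rng) for _ in range(3)]
w = rng.dirichlet(np.ones(3))
T = lambda r: sum(wi * U @ r @ U.conj().T for wi, U in zip(w, Us))
assert np.allclose(T(np.eye(N)), np.eye(N))
print(vn_entropy(T(rho)) >= vn_entropy(rho) - 1e-10)   # True
```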
\subsection{Entropy of Measurement}
\label{sec-ent-mea}
Applying the cl$(\cdot)$
operator to a node (``classicizing" it)
is like a ``measurement".
Thus, the following inequality is
often called the entropy of measurement
inequality.
\begin{claim}
For any $\rho_\rvx \in dm(\calh_\rvx)$
and orthonormal basis
$\{\ket{x}_\rvx\}_{\forall x}$,
\beq
H\{\av{x|\rho_\rvx|x}\}_{\forall x}
\geq S(\rho_\rvx)
\;,
\eeq
or, equivalently,
\beq
S_{\rho_{\rvx_{cl}}}(\rvx_{cl})
\geq
S_{\rho_{\rvx}}(\rvx)
\;.
\eeq
\end{claim}
\proof This is a special case of Claim
\ref{cl-more-plain-h} with $\rva = \rvx$,
$\rvb=\rvx_{cl}$,
and $\calt = {\rm cl}_\rvx$.
\qed
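Numerically, the entropy of measurement inequality says that the diagonal of $\rho_\rvx$ in any orthonormal basis has Shannon entropy at least $S(\rho_\rvx)$. A small check (an added sketch, not from the paper):

```python
import numpy as np

def vn_entropy(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return -np.sum(lam * np.log(lam))

def shannon(p):
    p = p[p > 1e-15]
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(3)
N = 4
z = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
rho = z @ z.conj().T
rho /= np.trace(rho).real
diag = np.real(np.diag(rho))   # {<x|rho|x>} in the computational basis
print(shannon(diag) >= vn_entropy(rho) - 1e-10)   # True
```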
Note that one can
prove many other similar inequalities
by appealing to
MRE with $\calt = {\rm cl}_\rvx$.
For instance,
for any $\rho_{\rva,\rvb}\in dm(\calh_{\rva,\rvb})$,
\beq
S_{\rho_{\rvb,\rva_{cl}}}(\rvb,\rva_{cl})
\geq
S_{\rho_{\rvb,\rva}}(\rvb,\rva)
\;,
\eeq
and
\beq
S_{\rho_{\rvb,\rva_{cl}}}(\rvb:\rva_{cl})
\leq
S_{\rho_{\rvb,\rva}}(\rvb:\rva)
\;.
\eeq
\subsection{Entropy of Preparation}
An ensemble
$\{\sqrt{w_j}\ket{\psi_j}\}_{\forall j}$
for a system can
be described as a
preparation of the system.
Thus the following inequality is
often called the entropy
of preparation inequality.
\begin{claim}
Suppose the weights $\{w_j\}_{\forall j\in S_\rvj}$
are non-negative numbers that sum to one,
and $\{\ket{\psi_j}_\rvx\}_{\forall j\in S_\rvj}$
are normalized
states that span $\calh_\rvx$. Let
\beq
\rho_\rvx = \sum_j w_j
\sandb{\ket{\psi_j}_\rvx}\sandb{\hc}
=
\tr_{\rvj}(\rho_{\rvx,\rvj})
\;
\eeq
where $\rho_{\rvx,\rvj}\in
dm(\calh_{\rvx,\rvj})$
is a pure state (a ``purification"
of $\rho_\rvx$).
Then
\beq
H\{w_j\}_{\forall j} \geq S(\rho_\rvx)
\;,
\eeq
or, equivalently,
\beq
S_{\rho_{\rvx,\rvj_{cl}}}(\rvj_{cl})
\geq S_{\rho_{\rvx}}(\rvx)
\;.
\eeq
The inequality becomes an equality
iff the states
$\{\ket{\psi_j}_\rvx\}_{\forall j}$
are orthonormal, in which case the weights
$\{w_j\}_{\forall j}$
are the eigenvalues of $\rho_\rvx$.
\end{claim}
\proof
Let
\beq
\rho_{\rvx,\rvj}=
\sandb{\ket{\psi}_{\rvx,\rvj}}
\sandb{\hc}
\;,
\eeq
where
\beq
\ket{\psi}_{\rvx,\rvj} =
\sandb{\sum_{x,j}A(x,j)
\begin{array}{c}
\ket{x}_\rvx\\
\ket{j}_\rvj
\end{array}
}
=
\sandb{
\entrymodifiers={++[o][F-]}
\xymatrix{\rvx&\rvj\ar[l]}
}
\;,
\eeq
where
\beq
A(x,j) = A(x|j)A(j)
\;,
\eeq
and
\beq
A(x|j) = \av{x|\psi_j}
,\;\;\;
A(j) = \sqrt{w_j}
\;.
\eeq
Then
\beq
S_{\rho_{\rvx,\rvj_{cl}}}(\rvj_{cl})
\stackrel{(a)}{\geq}
S_{\rho_{\rvx,\rvj}}(\rvj)
\stackrel{(b)}{=}
S_{\rho_{\rvx,\rvj}}(\rvx)
=
S_{\rho_{\rvx}}(\rvx)
\;
\eeq
$(a)$ follows from the entropy
of measurement inequality (Section \ref{sec-ent-mea}).
Note that $(a)$ becomes an equality
iff the states $\{\ket{\psi_j}_\rvx\}_{\forall j}$
are orthonormal.
$(b)$ follows because $\rho_{\rvx,\rvj}$
is a pure state.
\qed
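The entropy of preparation inequality is also easy to test numerically. In the sketch below (mine, not the paper's) the ensemble states are random and hence non-orthogonal, so the inequality is strict:

```python
import numpy as np

def vn_entropy(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return -np.sum(lam * np.log(lam))

def shannon(p):
    p = p[p > 1e-15]
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(4)
N, K = 3, 4
w = rng.dirichlet(np.ones(K))                           # weights w_j
psis = rng.normal(size=(K, N)) + 1j * rng.normal(size=(K, N))
psis /= np.linalg.norm(psis, axis=1, keepdims=True)     # normalized states
rho = sum(wj * np.outer(p, p.conj()) for wj, p in zip(w, psis))
print(shannon(w) >= vn_entropy(rho) - 1e-10)   # True: H{w_j} >= S(rho)
```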
\subsection{Data Processing Inequalities}
Consider the following CB net
\beq
\entrymodifiers={++[o][F-]}
\xymatrix{
\rvc&\rvb\ar[l]&\rva\ar[l]
}
\;.
\eeq
Classical MRE with $T=P_{\rvc|\rvb}$
implies
\beq
D(P_{\rvc,\rva}//P_\rvc P_\rva)
\leq
D(P_{\rvb,\rva}//P_\rvb P_\rva)
\;.
\eeq
Thus
\beq
H(\rvc:\rva)\leq H(\rvb:\rva)
\;.\label{eq-dp-classical}
\eeq
Eq.(\ref{eq-dp-classical})
is called a data processing
inequality.
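The classical data processing inequality can be verified directly for a random Markov chain $\rva\rarrow\rvb\rarrow\rvc$ (an added sketch, not from the paper; helper names are mine):

```python
import numpy as np

def mutual_info(p_xy):
    """H(x:y) in nats from a joint distribution with shape (x, y)."""
    px = p_xy.sum(axis=1)
    py = p_xy.sum(axis=0)
    mask = p_xy > 0
    return np.sum(p_xy[mask] * np.log(p_xy[mask] / np.outer(px, py)[mask]))

rng = np.random.default_rng(5)
N = 3
p_a = rng.dirichlet(np.ones(N))
P_ba = rng.dirichlet(np.ones(N), size=N).T   # columns are P(b|a)
P_cb = rng.dirichlet(np.ones(N), size=N).T   # columns are P(c|b)
p_ba = P_ba * p_a              # P(b,a), shape (b, a)
p_ca = P_cb @ p_ba             # P(c,a) = sum_b P(c|b) P(b,a)
print(mutual_info(p_ca) <= mutual_info(p_ba) + 1e-12)   # True
```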
Next consider the following CB net
\beq
\entrymodifiers={++++[o][F-]}
\xymatrix{
\rvb&
{\begin{array}{c}
\rvy=\\f(\rvx)\end{array}}\ar[l]
&\rvx\ar[l]&\rva\ar[l]
}
\;\label{eq-dp-det-graph}
\eeq
where node $\rvy$ is deterministic with
$P(y|x) = \delta(y, f(x))$.
The data processing inequality
applied to graph Eq.(\ref{eq-dp-det-graph})
gives
\beq
H(f(\rvx):\rva) \leq H(\rvx:\rva)
\;,
\label{eq-h-fx-a}
\eeq
and
\beq
H(\rvb:\rvx) \leq H(\rvb:f(\rvx))
\;.
\label{eq-h-fx-b}
\eeq
Note that for any random
variable $\rvz$, one has
\beq
H(f(\rvz):\rvz)=H(f(\rvz)) - H(f(\rvz)|\rvz)=H(f(\rvz))
\;.\label{eq-info-fz}
\eeq
Combining Eqs.(\ref{eq-h-fx-a})
and (\ref{eq-info-fz}) yields\footnote{
What we really mean by
the limit $\rva\rarrow \rvx$
is that $P(x|a)=\delta_x^a$.
Taking $\rvb\rarrow\rvx$
in Eq.(\ref{eq-h-fx-b}) would not
work
because $\rvb$ and $\rvx$
are not adjacent to each other
whereas $\rva$ and $\rvx$ are.}
\beq
H(f(\rvx))=
H(f(\rvx):\rva)|_{\rva\rarrow\rvx}
\; \leq \;
H(\rvx:\rva)|_{\rva\rarrow\rvx}=
H(\rvx)
\;.
\eeq
Now let's try to find
quantum analogues to
the classical data
processing inequalities.
To do so, we will use
the following QB nets.
For
$j\geq 1$, let
$\beta_j = (\rvb_j,\rve_j)$. Define
\beq
\begin{array}{c}
\entrymodifiers={++[o][F-]}
\xymatrix{
*{}
&
\calg_j\ar[l]
&
*{}\ar[l]
}
\end{array}
=
\begin{array}{c}
\entrymodifiers={++[o][F-]}
\xymatrix{
*{}
&
\cancel{\rvb_j}\ar[l]
&
\rv{\beta}_j\ar[l]_>>{\delta}\ar[d]_>>{\delta}
&
*{}\ar[l]
\\
*{}
&
*{}
&
\cancel{\rve_j}
&
*{}
}
\end{array}
\;,
\eeq
and
\beq
\rho^{(j)}_{\rv{\beta}_j,\ldots,
\rv{\beta}_2,
\rv{\beta}_1, \rvb,\rva}
=
\sandb{
\entrymodifiers={++[o][F-]}
\xymatrix{
\calg_j&*{\;\dots\;}\ar[l]&\calg_2\ar[l]&
\calg_1\ar[l]&
\rvb\ar[l]&
\rva\ar[l]
}
}
\sandb{\hc}
\;.
\label{eq-qb-net-j-links}
\eeq
For example,
\beq
\rho^{(2)}_{\rv{\beta}_2,\rv{\beta}_1, \rvb,\rva}
=
\sandb{
\entrymodifiers={++[o][F-]}
\xymatrix{
\cancel{\rvb_2}&
\rv{\beta}_2\ar[l]_>>{\delta}\ar[d]_>>{\delta}&
\cancel{\rvb_1}\ar[l]&
\rv{\beta}_1\ar[l]_>>{\delta}\ar[d]_>>{\delta}&
\rvb\ar[l]&
\rva\ar[l]
\\
*{}&
\cancel{\rve_2}&
*{}&
\cancel{\rve_1}&
*{}&
*{}
}
}
\sandb{\hc}
\;,
\label{eq-qb-net-two-links}
\eeq
\beq
\rho^{(1)}_{\rv{\beta}_1, \rvb,\rva}=
{\rm erase}_{\rv{\beta}_2}\{
\rho^{(2)}_{\rv{\beta}_2,\rv{\beta}_1, \rvb,\rva}
\}
=
\sandb{
\entrymodifiers={++[o][F-]}
\xymatrix{
\cancel{\rvb_1}&
\rv{\beta}_1\ar[l]_>>{\delta}\ar[d]_>>{\delta}&
\rvb\ar[l]&
\rva\ar[l]
\\
*{}&
\cancel{\rve_1}&
*{}&
*{}
}
}
\sandb{\hc}
\;,
\eeq
and
\beq
\rho^{(0)}_{\rvb,\rva}=
{\rm erase}_{\rv{\beta}_1}\{
\rho^{(1)}_{\rv{\beta}_1, \rvb,\rva}
\}
=
\sandb{
\entrymodifiers={++[o][F-]}
\xymatrix{
\rvb&
\rva\ar[l]
}
}
\sandb{\hc}
\;.
\eeq
Note that the operations of
tracing versus erasing
a node from a density matrix
(and corresponding QB net)
are different. They can produce
different density matrices.
Let $\rvb_0=\rvb$.
For $j\geq 1$, assume the amplitude
$A(\beta_j|b_{j-1})$
comes from a
channel superoperator
$\calt_{\beta_j|b_{j-1}}$.
Hence, it must be an isometry.
Some
quantum data processing inequalities
refer to a single QB net,
whereas others refer to multiple ones.
The next two sections
address these two possibilities.
\subsubsection{Single-Graph Data Processing}
\begin{claim}
For ${\rho^{(2)}_{\rv{\beta}_{2},\rv{\beta}_{1},\rvb,\rva}}$ given by the QB net of
Eq.(\ref{eq-qb-net-two-links}),
\beq
S_{\rho^{(2)}_{\rv{b}_{2cl},\rva}}(
\rv{b}_{2cl}:\rva)
\stackrel{(a)}{\leq}
S_{\rho^{(2)}_{\rv{b}_{1cl},\rva}}(
\rv{b}_{1cl}:\rva)
\stackrel{(b)}{\leq}
S_{\rho^{(2)}_{\rvb_{cl},\rva}}(
\rvb_{cl}:\rva)
\;.
\eeq
(Note that the $\rve_j$ have
been traced over.)
\end{claim}
\proof
Inequality $(b)$
is just a special case of inequality $(a)$.
Inequality $(a)$ can be established
as follows.
\beqa
\lefteqn{
\atrho{
\rho^{(2)}_{\rv{b}_{2cl},\rv{b}_{1cl},\rva}
}
S(\rv{b}_1:\rva)
-
S(\rv{b}_{2}:\rva)
}\nonumber
\\
&=&
S(\rv{b}_1:\rva)-
[
S(\rv{b}_1,\rv{b}_2:\rva)
-
S(\rv{b}_1:\rva|\rv{b}_2)]
\label{eq-single-gr-dp-a}
\\
&=&
-S(\rv{b}_2:\rva|\rv{b}_1)
+
S(\rv{b}_1:\rva|\rv{b}_2)
\label{eq-single-gr-dp-b}
\\
&=&
S(\rv{b}_1:\rva|\rv{b}_2)
\label{eq-single-gr-dp-c}
\\
&\geq& 0
\label{eq-single-gr-dp-d}
\atrhoend
\;.
\eeqa
\begin{itemize}
\item[(\ref{eq-single-gr-dp-c}):] Follows because
$S(\rv{b}_2:\rva|\rv{b}_1)=0$
since
$\rv{b}_1$
is classical and in the middle of a
Markov chain.
See Claim \ref{cl-cond-mid-markov}.
\item[(\ref{eq-single-gr-dp-d}):] Follows
because CMI$\geq$ 0.
\end{itemize}
\mbox{\;}
\qed
\subsubsection{Multi-Graph Data Processing}
The following claim
was proven
by Schumacher and Nielsen
in Ref.\cite{schu-niel}.
\begin{claim}\label{cl-multi-gr-dp}
For ${\rho^{(j)}_{\rv{\beta}_j,\ldots,\rv{\beta}_{2},\rv{\beta}_{1},\rvb,\rva}}$ given by the QB net of
Eq.(\ref{eq-qb-net-j-links}),
\beq
S_{
\rho^{(3)}_{
\rvb_3,\cancel{\rvb_2},\cancel{\rvb_1},\rva}
}(\rvb_3:\rva)
\stackrel{(a)}{\leq}
S_{
\rho^{(2)}_{
\rvb_2,\cancel{\rvb_1},\rva}
}(\rvb_2:\rva)
\stackrel{(b)}{\leq}
S_{
\rho^{(1)}_{
\rvb_1,\rva}
}(\rvb_1:\rva)
\;.
\eeq
(Note that the $\rve_j$ have
been traced over.)\end{claim}
\proof
Inequalities $(a)$ and $(b)$
both follow from MRE
because
\beq
\rho^{(3)}_{
\rvb_3,\cancel{\rvb_2},\cancel{\rvb_1},\rva}
=
\tr_{\rve_3}\circ
\calt_{\rv{\beta}_3|\rvb_2}
(\rho^{(2)}_{
\rvb_2,\cancel{\rvb_1},\rva})
\;,
\eeq
and
\beq
\rho^{(2)}_{
\rvb_2,\cancel{\rvb_1},\rva}
=
\tr_{\rve_2}\circ
\calt_{\rv{\beta}_2|\rvb_1}
(\rho^{(1)}_{
\rvb_1,\rva})
\;.
\eeq
\qed
\section{Hybrid Entropies With Both
Classical
and Quantum Random Variables}
\subsection{
Conditioning Entropy on a Classical Random Variable}
From the
definition
of $H(\rvb|\rva)$,
it's clear
that $H(\rvb|\rva)\geq 0$.
On the
other hand, $S(\rvb|\rva)$
can sometimes be negative.
One case where $S(\rvb|\rva)$
is guaranteed to
be non-negative
is when the random variable
being conditioned
on is classical.
\begin{claim}\label{cl-cond-on-class-rv}
For any $\rho_{\rva,\rvb}\in dm(\calh_{\rva,\rvb})$,
\beq
S_{\rho_{\rvb,\rva_{cl}}}
(\rvb|\rva_{cl})\geq
\max(0, S_{\rho_{\rvb,\rva}}(\rvb|\rva))
\;.
\eeq
\end{claim}
\proof
By MRE with $\calt = {\rm cl}_\rva$,
\beq
S(\rvb:\rva_{cl})
\leq
S(\rvb:\rva)
\;.
\eeq
But
\beq
S(\rvb:\rva_{cl})
=S(\rvb)-S(\rvb|\rva_{cl})
\;,
\eeq
and
\beq
S(\rvb:\rva)
=S(\rvb)-S(\rvb|\rva)
\;.
\eeq
Hence
\beq
S(\rvb|\rva_{cl})
\geq
S(\rvb|\rva)
\;.
\eeq
One can express $\rho_{\rvb, \rva_{cl}}$ as
\beq
\rho_{\rvb, \rva_{cl}}=
\sum_a
P(a)
\sandb{\ket{a}_\rva}
\rho_{\rvb|a}
\sandb{\hc}
\;
\eeq
where $\{P(a)\}_{\forall a}\in pd(S_\rva)$
and
$\rho_{\rvb|a}\in dm(\calh_\rvb)$
for all $a$. Therefore
\beqa
S(\rho_{\rvb, \rva_{cl}})
&=&
-\tr_\rvb
\sum_a P(a)\rho_{\rvb|a}
\ln\left(
P(a)\rho_{\rvb|a}\right)
\\
&=&
H\{P(a)\}_{\forall a} +
\sum_a P(a) S(\rho_{\rvb|a})
\;.
\eeqa
Hence,
\beq
S_{\rho_{\rvb, \rva_{cl}}}(\rvb|\rva_{cl})
=
S(\rho_{\rvb, \rva_{cl}})
-
S(\rho_{\rva_{cl}})=
\sum_a P(a) S(\rho_{\rvb|a})
\geq
0
\;.
\eeq
\qed
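The identity $S(\rvb|\rva_{cl})=\sum_a P(a)S(\rho_{\rvb|a})\geq 0$ derived above can be checked numerically, since $\rho_{\rvb,\rva_{cl}}$ is block diagonal (an added sketch, not from the paper):

```python
import numpy as np

def vn_entropy(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return -np.sum(lam * np.log(lam))

def rand_density(N, rng):
    z = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    rho = z @ z.conj().T
    return rho / np.trace(rho).real

rng = np.random.default_rng(6)
Na, Nb = 3, 2
P = rng.dirichlet(np.ones(Na))
rho_b_given_a = [rand_density(Nb, rng) for _ in range(Na)]
# rho_{b,a_cl} is block diagonal: one block P(a) rho_{b|a} per value of a
rho_ba = np.zeros((Na * Nb, Na * Nb), dtype=complex)
for a in range(Na):
    rho_ba[a*Nb:(a+1)*Nb, a*Nb:(a+1)*Nb] = P[a] * rho_b_given_a[a]
rho_a = np.diag(P)             # reduced state of a is classical
S_cond = vn_entropy(rho_ba) - vn_entropy(rho_a)
expected = sum(P[a] * vn_entropy(rho_b_given_a[a]) for a in range(Na))
print(abs(S_cond - expected) < 1e-10 and S_cond >= 0)   # True
```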
\subsection{Clone Random Variables}
We'll say two random variables are
clones of each other
if they are
perfectly correlated.
Classical and quantum
clone random variables
behave very
differently as far as
entropy is concerned.
In this section, we
will show that
two classical clones
can be merged without changing the
entropy,
but not so for two quantum clones.
Consider the following CB net
\beq
\entrymodifiers={++[o][F-]}
\xymatrix{
\rva'&\rva\ar[l]\ar[r]&\rvb
}
\;,
\eeq
where
\beq
P(a'|a) = \delta_a^{a'}
\;.
\eeq
Since $P(a',a)=P(a) \delta_a^{a'}$,
one gets
\beq
H(\rva,\rva') = H(\rva)=H(\rva')
\;,
\eeq
\beq
H(\rva|\rva')= H(\rva'|\rva)=0
\;,
\eeq
\beq
H(\rva:\rva')= H(\rva)=H(\rva')
\;,
\eeq
\beq
H(\rvb, \rva, \rva') = H(\rvb, \rva)
=H(\rvb,\rva')
\;,
\eeq
\beq
H(\rvb, \rva|\rva')=
H(\rvb|\rva)=H(\rvb|\rva')
\;.
\eeq
All these
results can be described
by saying that
the classical clone
random variables
$\rva$ and $\rva'$
are interchangeable and
that often they can be ``merged"
into a single random variable
without changing the entropy.
Quantum clone random variables,
on the other hand, cannot be merged
in general.
For example, for a general state
$\rho_{\rva,\rva'}$,
one has
$S(\rva,\rva')\neq S(\rva)$,
even if
\beq
\av{a,a'|\rho_{\rva,\rva'}|a,a'}
\varpropto \delta_a^{a'}
\;\label{eq-av-a-rho-a}
\eeq
for all $a,a'\in S_\rva$.
For example,
when
\beq
\rho_{\rva,\rva'}=
\sandb{
\frac{1}{\sqrt{N_\rva}}
\sum_{a}
\begin{array}{l}
\ket{a}_\rva\\
\ket{a}_{\rva'}
\end{array}
}
\sandb{\hc}
\;,
\eeq
Eq.(\ref{eq-av-a-rho-a}) is satisfied.
However,
$S(\rva,\rva')=0$ and
$S(\rva)=S(\rva')\neq 0$.
Hence,
$S(\rva,\rva')\neq S(\rva)$.
Similarly, for a general state
$\rho_{\rvb,\rva,\rva'}$,
$S(\rvb,\rva,\rva')\neq S(\rvb,\rva)$.
For example,
when
\beq
\rho_{\rvb,\rva,\rva'}=
\sandb{\frac{1}{\sqrt{N_\rva N_\rvb}}
\sum_{a,b}
\begin{array}{l}
\ket{b}_\rvb\\
\ket{a}_\rva\\
\ket{a}_{\rva'}
\end{array}
}
\sandb{\hc}
\;,\label{eq-rho-baa-pure}
\eeq
Eq.(\ref{eq-av-a-rho-a}) is satisfied.
However,
$S(\rvb,\rva,\rva')=0$ and
$S(\rvb,\rva)=S(\rva')\neq 0$.
Hence,
$S(\rvb,\rva,\rva')\neq S(\rvb,\rva)$.
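The bipartite counterexample above is easy to reproduce numerically: the maximally entangled "clone" state is pure, yet each marginal is maximally mixed (an added sketch, not from the paper):

```python
import numpy as np

def vn_entropy(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return -np.sum(lam * np.log(lam))

N = 2
# "Clone" state (1/sqrt(N)) sum_a |a>_a |a>_a' : perfectly correlated yet entangled
psi = np.zeros((N, N), dtype=complex)
psi[range(N), range(N)] = 1 / np.sqrt(N)
rho_aa = np.outer(psi.ravel(), psi.ravel().conj())
# partial trace over a': reshape to indices (a, a', b, b'), trace axes 1 and 3
rho_a = np.trace(rho_aa.reshape(N, N, N, N), axis1=1, axis2=3)
print(np.isclose(vn_entropy(rho_aa), 0.0))        # True: joint state is pure
print(np.isclose(vn_entropy(rho_a), np.log(N)))   # True: S(a) = ln N != S(a,a')
```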
\begin{claim}
Suppose
\beq
\rho_{\rvb, \rva, \rva'}
=
\rho_{\rvb, \rva_{cl}, \rva'_{cl}}
=
\sum_a P(a)
\sandb{
\begin{array}{l}
\ket{a}_{\rva}\\
\ket{a}_{\rva'}
\end{array}
}
\rho_{\rvb|a}
\sandb{\hc}
\;
\eeq
where $\{P(a)\}_{\forall a}
\in pd(S_\rva)$
and
$\rho_{\rvb|a}\in dm(\calh_\rvb)$
for all $a$.
Then
\beq
S(\rvb,\rva,\rva')=
S(\rvb, \rva)= S(\rvb, \rva')
\;,
\eeq
and
\beq
S(\rvb,\rva|\rva')= S(\rvb| \rva)=
S(\rvb| \rva')
\;.
\eeq
\end{claim}
\proof
For any density matrix $\rho$
with no zero eigenvalues,
$\ln \rho$ can be
expressed as an infinite power
series
in powers of $\rho$:
\beq
\ln \rho=
\sum_{j=0}^{\infty} c_j \rho^j
\;,
\eeq
for some real numbers
$c_j$ that are independent of $\rho$.
Note that
\beqa
\tr_{\rva'}\rho^2_{\rvb, \rva, \rva'}
&=&
\tr_{\rva'}
\sum_a P^2(a)
\sandb{
\begin{array}{l}
\ket{a}_{\rva}\\
\ket{a}_{\rva'}
\end{array}
}
\rho^2_{\rvb|a}
\sandb{\hc}
\\
&=&
\sum_a P^2(a)
\sandb{
\ket{a}_{\rva}
}
\rho^2_{\rvb|a}
\sandb{\hc}
\\
&=&
\rho^2_{\rvb, \rva}
\;.
\eeqa
Thus, the operations of $\tr_{\rva'}$
and raising-to-a-power commute
when
acting on $\rho_{\rvb, \rva_{cl}, \rva'_{cl}}$.
(This is not the case for $\rho_{\rvb,\rva,\rva'}$
given by Eq.(\ref{eq-rho-baa-pure})).
Finally, note that
\beqa
S(\rvb,\rva,\rva')&=&
-\tr_{\rvb,\rva,\rva'}
\left(
\rho_{\rvb,\rva,\rva'}
\ln
\rho_{\rvb,\rva,\rva'}
\right)
\\
&=&
-\tr_{\rvb,\rva,\rva'}
\left(
\sum_{j=0}^\infty
c_j
\rho^{j+1}_{\rvb,\rva,\rva'}
\right)
\\
&=&
-\tr_{\rvb,\rva}
\left(
\sum_{j=0}^\infty
c_j
\rho^{j+1}_{\rvb,\rva}
\right)
\\
&=&
S(\rvb,\rva)=S(\rvb,\rva')
\;.
\eeqa
\qed
\subsection{Conditioning CMI On the
Middle of a Tri-node Markov-Like Chain}
We will refer to a
node with 2 incoming
arrows and no outgoing ones
as a collider.
Let's consider
all CB nets with 3 nodes
and 2 arrows. These can have
either one collider
or none.
The CB net with one collider
is
\beq
\entrymodifiers={++[o][F-]}
\xymatrix{
\rva\ar[r]&\rve&\rvb\ar[l]
}
\;
\eeq
For this net,
$P(a,b|e)\neq P(a|e) P(b|e)$
so
$H(\rva:\rvb|\rve)\neq 0$.
There
are 3 CB nets with no collider:
the
fan-out (a.k.a. broadcast, or fork) net,
and 2 Markov chains (in opposite directions):
\beq
\entrymodifiers={++[o][F-]}
\xymatrix{
\rva&\rve\ar[l]\ar[r]&\rvb
}
\;,
\eeq
\beq
\entrymodifiers={++[o][F-]}
\xymatrix{
\rva&\rve\ar[l]&\rvb\ar[l]
}
\;,
\eeq
\beq
\entrymodifiers={++[o][F-]}
\xymatrix{
\rva\ar[r]&\rve\ar[r]&\rvb
}
\;.
\eeq
We will refer to these 3 graphs
as tri-node Markov-like chains.
For all 3 of these nets
$P(a,b|e)= P(a|e) P(b|e)$ so
$H(\rva:\rvb|\rve)= 0$.
In this case we say
that $\rva$ and $\rvb$ are
conditionally
independent given $\rve$.
\begin{claim}\label{cl-cond-mid-markov}
Let
\beq
\rho^{\rm fan-out}_{\rva,\rvb,\rve}=
\begin{array}{l}
\tr_{\rv{\alpha}_0,\rv{\eps}_0,\rv{\beta}_0}\\
\tr_{\rv{\alpha}_1,\rv{\eps}_1,\rv{\beta}_1}
\end{array}\sandb{
\entrymodifiers={++[o][F-]}
\xymatrix{
\rv{\alpha}_0\ar[d]&\rv{\eps}_0\ar[d]&\rv{\beta}_0\ar[d]\\
\rva\ar[d]&\rve\ar[l]\ar[d]\ar[r]&\rvb\ar[d]\\
\rv{\alpha}_1&\rv{\eps}_1&\rv{\beta}_1
}}\sandb{\hc}
\;,
\eeq
and
\beq
\rho^{Markov}_{\rva,\rvb,\rve}=
\begin{array}{l}
\tr_{\rv{\alpha}_0,\rv{\eps}_0,\rv{\beta}_0}\\
\tr_{\rv{\alpha}_1,\rv{\eps}_1,\rv{\beta}_1}
\end{array}
\sandb{
\entrymodifiers={++[o][F-]}
\xymatrix{
\rv{\alpha}_0\ar[d]&\rv{\eps}_0\ar[d]&\rv{\beta}_0\ar[d]\\
\rva\ar[d]&\rve\ar[l]\ar[d]&\rvb\ar[d]\ar[l]\\
\rv{\alpha}_1&\rv{\eps}_1&\rv{\beta}_1
}}\sandb{\hc}
\;.
\eeq
With $\rho_{\rva,\rvb,\rve}$
equal to either
$\rho_{\rva,\rvb,\rve}^{\rm fan-out}$
or
$\rho_{\rva,\rvb,\rve}^{\rm Markov}$,
\beq
S_{\rho_{\rva,\rvb,\rve_{cl}}}(\rva:\rvb|\rve_{cl})=0
\;.
\eeq
\end{claim}
\proof
At the end of this proof,
we will show that
for both of these QB nets,
$\rho_{\rva,\rvb,\rve_{cl}}$
can be expressed as
\beq
\rho_{\rva,\rvb,\rve_{cl}}=
\sum_e P(e)\sandb{\ket{e}_\rve}
\rho_{\rva|e}\;\rho_{\rvb|e}
\sandb{\hc}
\;,
\label{eq-rho-a-b-e}
\eeq
where $\{P(e)\}_{\forall e}\in pd(S_\rve)$,
and
$\rho_{\rva|e}\in dm(\calh_\rva)$,
$\rho_{\rvb|e}\in dm(\calh_\rvb)$
for all $e\in S_\rve$.
Let's assume this for now. Then
\beqa
S(\rva,\rvb,\rve_{cl})&=&
-\tr_{\rva,\rvb}\sum_e\left\{
P(e)\rho_{\rva|e}\;\rho_{\rvb|e}
\ln\left(P(e)
\rho_{\rva|e}\;\rho_{\rvb|e}\right)
\right\}
\\
&=&
H\{P(e)\}_{\forall e}
+
\sum_e P(e)
[S(\rho_{\rva|e}) + S(\rho_{\rvb|e})]
\;.
\eeqa
Hence
\beq
S(\rva,\rvb|\rve_{cl})
=
\sum_e P(e)
[S(\rho_{\rva|e}) + S(\rho_{\rvb|e})]
\;.
\eeq
In the same way, one can also show that
\beq
S(\rva|\rve_{cl})
=
\sum_e P(e)
S(\rho_{\rva|e})
\;,
\eeq
and
\beq
S(\rvb|\rve_{cl})
=
\sum_e P(e)
S(\rho_{\rvb|e})
\;.
\eeq
Thus
\beq
S(\rva:\rvb|\rve_{cl})
=
S(\rva|\rve_{cl})
+
S(\rvb|\rve_{cl})
-
S(\rva,\rvb|\rve_{cl})
=
0
\;.
\eeq
Now let's show
that $\rho_{\rva,\rvb,\rve_{cl}}$
has the form Eq.(\ref{eq-rho-a-b-e})
for both QB nets.
For the fan-out net,
\beq
\rho^{\rm fan-out}_{\rva,\rvb,\rve_{cl}}=
\sum_{\alpha_0, \eps_0, \beta_0}
\sum_{\alpha_1, \eps_1, \beta_1}
\sum_e
\sandb{
\begin{array}{r}
\sum_a A(a|e,\alpha_0)A(\alpha_1|a)A(\alpha_0)\ket{a}_\rva\\
\sum_b A(b|e,\beta_0)A(\beta_1|b)A(\beta_0)\ket{b}_\rvb\\
A(e|\eps_0)A(\eps_1|e)A(\eps_0)\ket{e}_\rve
\end{array}
}\sandb{\hc}
\;.
\eeq
Set
\beq
\rho_{\rva|e}
= C_{\rva|e}
\sum_{\alpha_0,\alpha_1} \sandb{\sum_a
A(a|e,\alpha_0)A(\alpha_1|a)A(\alpha_0)
\ket{a}_\rva}
\sandb{\hc}
\;,
\eeq
and
\beq
\rho_{\rvb|e}
= C_{\rvb|e}
\sum_{\beta_0,\beta_1} \sandb{\sum_b
A(b|e,\beta_0)A(\beta_1|b)A(\beta_0)
\ket{b}_\rvb}
\sandb{\hc}
\;.
\eeq
For $\rvx = \rva,\rvb$,
the constant $C_{\rvx|e}$
depends
on $e$ and is defined so that
$\tr_\rvx \rho_{\rvx|e}=1$.
For the Markov chain net,
\beq
\rho^{\rm Markov}_{\rva,\rvb,\rve_{cl}}=
\sum_{\alpha_0, \eps_0, \beta_0}
\sum_{\alpha_1, \eps_1, \beta_1}
\sum_e
\sandb{
\begin{array}{r}
\sum_a A(a|e,\alpha_0)A(\alpha_1|a)A(\alpha_0)\ket{a}_\rva\\
\sum_b A(b|\beta_0)A(\beta_1|b)A(\beta_0)\ket{b}_\rvb\\
A(e|b,\eps_0)A(\eps_1|e)A(\eps_0)\ket{e}_\rve
\end{array}
}\sandb{\hc}
\;.
\eeq
Set
\beq
\rho_{\rva|e}
= C_{\rva|e}
\sum_{\alpha_0,\alpha_1} \sandb{\sum_a
A(a|e,\alpha_0)A(\alpha_1|a)A(\alpha_0)
\ket{a}_\rva}
\sandb{\hc}
\;,
\eeq
and
\beq
\rho_{\rvb|e}
= C_{\rvb|e}
\sum_{\eps_0,\eps_1}
\sum_{\beta_0,\beta_1} \sandb{\sum_b
\begin{array}{l}
A(b|\beta_0)A(\beta_1|b)A(\beta_0)\\
A(e|b,\eps_0)A(\eps_1|e)A(\eps_0)
\end{array}
\ket{b}_\rvb}
\sandb{\hc}
\;,
\eeq
where again,
$C_{\rva|e}$ and
$C_{\rvb|e}$ are
defined so that
the density matrices
$\rho_{\rva|e}$ and $\rho_{\rvb|e}$
have unit trace.
For both graphs, if we define
\beq
P(e)= \tr_{\rva,\rvb}\av{e|\rho_{\rva,\rvb,\rve}|e}
\;,
\eeq
then Eq.(\ref{eq-rho-a-b-e}) is
satisfied.
\qed
\subsection{Tracing the Output of an Isometry}
This section
will mention an observation
that is pretty trivial, but
arises frequently so it is
worth pointing out explicitly.
Consider the following density matrix
\beq
\rho_{\rvc,\rvb,\rva}=
\sandb{
\entrymodifiers={++[o][F-]}
\xymatrix{
\rvc&\rvb\ar[l]&\rva\ar[l]
}
}
\sandb{\hc}
=
\sandb{
\begin{array}{r}
A(c|b)\ket{c}_\rvc\\
\sum_{a,b,c}
A(b|a)\ket{b}_\rvb\\
A(a)\ket{a}_\rva
\end{array}
}
\sandb{\hc}
\;.
\eeq
{\it Assume that $A(c|b)$ is an isometry.}
Then
\beq
\tr_\rvc
\rho_{\rvc,\rvb,\rva}
=
\sum_b
\sandb{
\begin{array}{l}
A(b|a)\ket{b}_\rvb\\
A(a)\ket{a}_\rva
\end{array}
}
\sandb{\hc}
=
\rho_{\rvb_{cl},\rva}
\;,
\eeq
and
\beq
\atrho{
\rho_{\rvc,\rvb,\rva}}
S(\rvb,\rva)=S(\rvb_{cl},\rva)
\atrhoend
\;.
\eeq
Thus, we observe
that tracing over all the {\it output}
indices of an isometry amplitude
embedded within a density matrix converts
the {\it inputs} of that isometry
amplitude into classical
random variables.
Next consider
the following density matrix,
\beq
\rho_{\rvc,\cancel{\rvb},\rva}=
\sandb{
\entrymodifiers={++[o][F-]}
\xymatrix{
\rvc&\cancel{\rvb}\ar[l]&\rva\ar[l]
}
}
\sandb{\hc}
=
\sandb{
\begin{array}{r}
A(c|b)\ket{c}_\rvc\\
\sum_{a,b,c}
A(b|a)A(a)\ket{a}_\rva
\end{array}
}
\sandb{\hc}
\;.
\eeq
{\it Assume that both $A(c|b)$ and $A(b|a)$
are isometries}.
Then
\beq
\tr_\rvc
\rho_{\rvc,\cancel{\rvb},\rva}
=
\sum_a
\sandb{
\begin{array}{l}
A(a)\ket{a}_\rva
\end{array}
}
\sandb{\hc}
=
\rho_{\rva_{cl}}
\;,
\eeq
and
\beq
\atrho{
\rho_{\rvc,\cancel{\rvb},\rva}}
S(\rva)=S(\rva_{cl})
\atrhoend
\;.
\eeq
Thus, we observe
that two isometries
joined by slashed variables
behave as if they
were just one isometry.
\subsection{Holevo Information}
Suppose $\{P(x)\}_{\forall x}\in pd(S_\rvx)$
and $\rho_{\rvq|x}\in dm(\calh_\rvq)$
for all $x$. Set
\beq
\rho_\rvq=
E_x \rho_{\rvq|x}
=
\sum_x
P(x)
\rho_{\rvq|x}
=
\sum_x
\sandb{\sqrt{P(x)}}
\rho_{\rvq|x}
\sandb{\hc}
\;.\label{eq-rhoq-px-rhoqx}
\eeq
Then the Holevo information
for the ensemble
$\{P(x),\rho_{\rvq|x}\}_{\forall x}$
is defined as
\beq
Hol\{P(x),\rho_{\rvq|x}\}_{\forall x}= S(E_x \rho_{\rvq|x})-
E_x S(\rho_{\rvq|x})=[S,E_x]\rho_{\rvq|x}
\;.
\eeq
\begin{claim}\label{claim-hol-mi}
Let
\beq
\rho_{\rvq,\rvx}=
\rho_{\rvq,\rvx_{cl}}=
\sum_x P(x)
\sandb{\ket{x}_\rvx}\rho_{\rvq|x}
\sandb{\hc}
\;,
\eeq
where $\{P(x)\}_{\forall x}\in pd(S_\rvx)$
and
$\rho_{\rvq|x}\in dm(\calh_\rvq)$ for all $x$.
Then
\beq
Hol\{P(x),\rho_{\rvq|x}\}_{\forall x} = S_{\rho_{\rvq, \rvx_{cl}}}(
\rvq : \rvx_{cl})
\;
\eeq
Thus,
the Holevo
information is a
MI with one of the
two random variables classical.
\end{claim}
\proof
\beqa
S_{\rho_{\rvq, \rvx_{cl}}}(\rvq,\rvx_{cl})
&=&
-\tr_\rvq
\sum_x \left\{
P(x)
\rho_{\rvq|x}
\ln\left(P(x)\rho_{\rvq|x}\right)
\right\}
\\
&=&
H\{P(x)\}_{\forall x}
+
E_x S(\rho_{\rvq|x})
\;.
\eeqa
Hence
\beqa
\atrho{\rho_{\rvq, \rvx_{cl}}}
S(\rvq:\rvx_{cl})
&=&
S(\rvq)
-S(\rvq|\rvx_{cl})
\\
&=&
S(\rvq)-
E_x S(\rho_{\rvq|x})
\\
&=&
S(E_x \rho_{\rvq|x})-
E_x S(\rho_{\rvq|x})
\atrhoend
\;.
\eeqa
\qed
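Claim \ref{claim-hol-mi} can be verified numerically by computing the Holevo information both ways, directly from its definition and as a mutual information of the block-diagonal joint state (an added sketch, not from the paper):

```python
import numpy as np

def vn_entropy(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return -np.sum(lam * np.log(lam))

def rand_density(N, rng):
    z = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    rho = z @ z.conj().T
    return rho / np.trace(rho).real

rng = np.random.default_rng(7)
Nq, Nx = 2, 3
P = rng.dirichlet(np.ones(Nx))
rho_q_x = [rand_density(Nq, rng) for _ in range(Nx)]

# Holevo information: S(E_x rho_{q|x}) - E_x S(rho_{q|x})
rho_q = sum(P[x] * rho_q_x[x] for x in range(Nx))
hol = vn_entropy(rho_q) - sum(P[x] * vn_entropy(rho_q_x[x]) for x in range(Nx))

# Mutual information S(q:x_cl) from the block-diagonal joint state
rho_qx = np.zeros((Nx * Nq, Nx * Nq), dtype=complex)
for x in range(Nx):
    rho_qx[x*Nq:(x+1)*Nq, x*Nq:(x+1)*Nq] = P[x] * rho_q_x[x]
mi = vn_entropy(rho_q) + vn_entropy(np.diag(P)) - vn_entropy(rho_qx)
print(abs(hol - mi) < 1e-10)   # True
```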
\section{Holevo Bound}
In this section
we prove the so-called
Holevo Bound, which
is an upper bound
on the
accessible information. The accessible
information
is a figure of merit of
a quantum ensemble.
The upper
bound
is given by
the Holevo
information.
The proof of
the Holevo Bound\footnote{
The proof given here
of Holevo's original result (Ref.\cite{hol})
is very similar to the one
first given by Schumacher and Westmoreland
in Ref.\cite{schu-west}.}
that we give next
utilizes, and therefore illustrates,
many of
the concepts and inequalities
that were introduced earlier in
this paper.
Consider a density
matrix $\rho_\rvq$ expressible
in the form Eq.(\ref{eq-rhoq-px-rhoqx}).
It is useful
to re-express
$\rho_\rvq$
using
the eigenvalue decompositions
of the density matrices
$\rho_{\rvq|x}$.
For some $\rv{Q}$
with $S_{\rv{Q}}=S_\rvq$,
suppose the eigenvalue
decompositions of the
$\rho_{\rvq|x}$ are
given by
\beq
\rho_{\rvq|x}=
\sum_Q \lam_{Q|x}
\sandb{
\ket{\lam_{Q|x}
}_\rvq}
\sandb{\hc}
\;
\eeq
for all $x$.
Define
\beq
A(x) = \sqrt{P(x)}
\;,
\eeq
\beq
A(Q|x)=
\sqrt{\lam_{Q|x}}
\;,
\eeq
and
\beq
A(q|Q,x)=
\av{q|\lam_{Q|x}}
\;.
\eeq
Then
\beq
\rho_\rvq=
\sum_{x,Q}
\sandb{\sum_q A(q|Q,x)A(Q|x)A(x)\ket{q}_\rvq}
\sandb{\hc}
\;.
\eeq
It is useful to
find a purification of $\rho_\rvq$;
that is, a pure state
$\rho_{\rvq,\rvr}$
such that $\rho_\rvq =
\tr_{\rvr}(\rho_{\rvq,\rvr})$.
One possible purification of
$\rho_\rvq$ is given by
\beq
\rho_{\rvq,\rv{Q},\rvx_{cl}}
=
\sum_x
\sandbr{
\sum_{q,Q}
A(q|Q,x)\ket{q}_\rvq\\
A(Q|x)\ket{Q}_{\rv{Q}}\\
A(x)\ket{x}_\rvx
}
\sandb{\hc}
=
\sandb{
\entrymodifiers={++[o][F-]}
\xymatrix{
\rvq&*{}&\rvx_{cl}\ar[ll]\ar[dl]\\
*{}&\rv{Q}\ar[ul]&*{}
}
}
\sandb{\hc}
\;\label{eq-qb-net-rhoq}
\eeq
with $\rvr=(\rv{Q},\rvx_{cl})$.
Let $S_{\rvq_j}=S_{\rv{Q}}=S_\rvq$, and
$S_{\rvy_j}=S_\rvy$
for $j=1,2,3$. Suppose
$\rho_{\rvq_1}\in dm(\calh_{\rvq})$
is defined by
Eq.(\ref{eq-rhoq-px-rhoqx})
with $\rvq$ replaced by $\rvq_1$.
Suppose $\rho_{\rvq_1}$
is transformed to
$\rho_{\rvq_2}'\in dm(\calh_\rvq)$
by a quantum
channel with
Kraus operators $\{K_y\}_{\forall y}$. Thus
\beq
\rho_{\rvq_2}'=
\sum_y
K_y \rho_{\rvq_1}
K_y^\dagger
\;.
\eeq
As explained
in Ref.\cite{Tuc-mixology},
the Kraus operators $\{K_y\}_{\forall y}$
can be extended to a
unitary matrix $U_{\rvq,\rvy}$.
Let
\beq
\av{q_2|K_y|q_1}
=
\begin{array}{rcl}
\bra{q_2}_{\rvq_2}& & \ket{q_1}_{\rvq_1}\\
&U_{\rvq,\rvy}&\\
\bra{y}_{\rvy_2}& & \ket{0}_{\rvy_1}
\end{array}
\;
\eeq
for all $q_1,q_2\in S_\rvq$
and $y\in S_\rvy$.
Now we can define
\beq
R_{\rvq_2,\rvy_2,\rvx_{cl}}
=
\tr_{\rv{Q}}
\sandb{
\entrymodifiers={+++[o][F-]}
\xymatrix{
\cancel{\rvq_3}&*{}&
\cancel{\rvq_1}\ar[dl]&*{}&\rvx_{cl}\ar[dl]\ar[ll]\\
*{}&\rvq_2,\rvy_2\ar[ul]_>>{\delta}\ar[dl]_>>{\delta}&*{}&\rv{Q}\ar[ul]&*{}\\
\cancel{\rvy_3}&*{}&
\cancel{\rvy_1}\ar[ul]_<<{0}&*{}&*{}
}
}
\sandb{\hc}
\;,\label{eq-qb-net-hol}
\eeq
where
\beq
A(q_2,y_2|q_1, y_1=0)
=
\begin{array}{rcl}
\bra{q_2}_{\rvq_2}& & \ket{q_1}_{\rvq_1}\\
&U_{\rvq,\rvy}&\\
\bra{y_2}_{\rvy_2}& & \ket{0}_{\rvy_1}
\end{array}
\;
\eeq
for all $q_1,q_2\in S_\rvq$
and $y_2\in S_\rvy$.
Note that
$R_{\rvq_2,\rvy_2,\rvx_{cl}}$
satisfies
$R_{\rvq_2}=\rho'_{\rvq_2}$.
\begin{claim}\label{cl-hol}
If
$\rho_{\rvq_1,\rv{Q},\rvx_{cl}}$
is the QB net of Eq.(\ref{eq-qb-net-rhoq})
with $\rvq$ replaced by $\rvq_1$,
and
$R_{\rvq_2,\rvy_2,\rvx_{cl}}$
is the QB net of Eq.(\ref{eq-qb-net-hol}), then
\beq
S_{R_{\rvy_2,\rvx_{cl}}}(\rvy_2:\rvx_{cl})
\leq
Hol\{P(x), \rho_{\rvq_1|x}\}_{\forall x}
\;.
\eeq
\end{claim}
\proof
\beqa
S_{R_{\rvy_2,\rvx_{cl}}}(\rvy_2:\rvx_{cl})
&\leq&
S_{R_{\rvq_2,\rvy_2,\rvx_{cl}}}(\rvq_2,\rvy_2:\rvx_{cl})
\label{eq-hol-a}
\\
&\leq&
S_{\rho_{\rvq_1,\rvx_{cl}}}(\rvq_1:\rvx_{cl})
\label{eq-hol-b}
\\
&=&
Hol\{P(x), \rho_{\rvq_1|x}\}_{\forall x}
\label{eq-hol-c}
\;.
\eeqa
\begin{itemize}
\item[(\ref{eq-hol-a}):]
Follows because of MRE with $\calt=\tr_{\rvq_2}$.
\item[(\ref{eq-hol-b}):]
Follows from the multi-graph data processing
inequalities.
\item[(\ref{eq-hol-c}):] Follows
from Claim \ref{claim-hol-mi}.
\end{itemize}\mbox{\;}
\qed
Define the accessible information $Acc$
of the ensemble
$\{P(x), \rho_{\rvq_1|x}\}_{\forall x}$,
maximized over all channels with Kraus operators
$\{K_y\}_{\forall y}$,
by
\beq
Acc\{P(x), \rho_{\rvq_1|x}\}_{\forall x}
=
\max_{\{K_y\}_{\forall y}}S_{R_{\rvy_2,\rvx_{cl}}}(\rvy_2:\rvx_{cl})
\;.
\eeq
Claim \ref{cl-hol} implies that
\beq
Acc\{P(x), \rho_{\rvq_1|x}\}_{\forall x}
\leq
Hol\{P(x), \rho_{\rvq_1|x}\}_{\forall x}
\;.
\eeq
\appendix
\section{Appendix: Schmidt Decomposition}
\label{app-s-decomp}
In this appendix, we define
the Schmidt decomposition of any
bi-partite pure state.
Consider any pure state $\ket{\psi}_{\rva,\rvb}\in
\calh_{\rva,\rvb}$. It can be expressed as
\beq
\ket{\psi}_{\rva,\rvb}=
\sandbr{
\sum_{a,b}
A(a,b)
\ket{a}_\rva
\\
\ket{b}_\rvb
}
\;.
\eeq
Assume $S_\rva\supseteq S_\rvb$.
Thus, $N_\rva\geq N_\rvb$.
$A(a,b)$ can be thought of as an
$N_\rva\times N_\rvb$
matrix. Let its
singular value decomposition be
\beq
A(a,b)=
\sum_{a_1\in S_\rva}
\sum_{b_1\in S_\rvb}
U(a,a_1)
\sqrt{P(b_1)}
\theta(a_1= b_1)
V^\dag
(b_1,b)
\;
\eeq
for all $a\in S_\rva$,
$b\in S_\rvb$,
where $U$ and $V$ are
unitary matrices.
Then we can express $\ket{\psi}_{\rva,\rvb}$
as
\beq
\ket{\psi}_{\rva,\rvb}=
\sandbr{
\sum_{b_1\in S_\rvb}
\sqrt{P(b_1)}
\ket{b_1}_\rva'
\\
\ket{b_1}_\rvb'
}
\;,
\label{eq-s-decomp}
\eeq
where
\beq
\ket{a_1}_\rva'=
\sum_{a\in S_\rva} U(a,a_1)\ket{a}_\rva
\;,
\eeq
for all $a_1\in S_\rva$ and
\beq
\ket{b_1}_\rvb'=
\sum_{b\in S_\rvb} V^*(b,b_1)\ket{b}_\rvb
\;
\eeq
for all $b_1\in S_\rvb$.
Eq.(\ref{eq-s-decomp})
is called
the Schmidt Decomposition
of $\ket{\psi}_{\rva,\rvb}$.
\begin{claim}
\label{claim-sa-is-sb}
If $\rho_{\rva,\rvb}\in dm(\calh_{\rva,\rvb})$
is pure, then
\beq
S(\rva)=S(\rvb)
\;.
\eeq
\end{claim}
\proof
Let
\beq
\rho_{\rva,\rvb}=
\sandb{
\ket{\psi}_{\rva,\rvb}
}
\sandb{\hc}
\;.
\eeq
If we express
$\ket{\psi}_{\rva,\rvb}$
as in Eq.(\ref{eq-s-decomp}), then
\beq
S(\rva)=
H\{P(b_1)\}_{\forall b_1\in S_\rvb}=
S(\rvb)
\;.
\eeq
\qed
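Claim \ref{claim-sa-is-sb} and the Schmidt decomposition can be checked numerically via the singular value decomposition of the amplitude matrix $A(a,b)$ (an added sketch, not from the paper):

```python
import numpy as np

def vn_entropy(rho):
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return -np.sum(lam * np.log(lam))

rng = np.random.default_rng(9)
Na, Nb = 4, 3
A = rng.normal(size=(Na, Nb)) + 1j * rng.normal(size=(Na, Nb))
A /= np.linalg.norm(A)                       # amplitudes A(a,b) of a random pure state
rho_a = A @ A.conj().T                       # tr_b |psi><psi|
rho_b = A.T @ A.conj()                       # tr_a |psi><psi|
sq = np.linalg.svd(A, compute_uv=False)**2   # squared Schmidt coefficients P(b1)
print(np.isclose(vn_entropy(rho_a), vn_entropy(rho_b)))          # True: S(a)=S(b)
print(np.isclose(vn_entropy(rho_a), -np.sum(sq * np.log(sq))))   # True
```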
\section{Appendix: Partial Entropies of
Pure Multi-Partite State}\label{app-partial-ents}
In this appendix,
we state some
consequences of Claim \ref{claim-sa-is-sb}
for the partial entropies
of pure multi-partite states.
Let $\rva_J = (\rva_j)_{j\in J}$
for any $J\subset Z_{1,N}$.
\begin{claim}\label{cl-sj-sjc}
Suppose
$J$ is
a nonempty subset of $\zn$ and
$J^c=Z_{1,N}-J$. If
$\rho_{\rva_\zn}$ is pure, then
\beq
\left\{
\begin{array}{l}
S(\rva_{\zn})=0,
\\
S(\rva_{J})=S(\rva_{J^c})
\end{array}
\right.
\;.
\eeq
For example, for $N=4$, this means
\beq
\left\{
\begin{array}{l}
S(\rva_1,\rva_2, \rva_3, \rva_4)=0,
\\
S(\rva_1,\rva_2,\rva_3) = S( \rva_4)
\mbox{ and permutations,}
\\
S(\rva_1,\rva_2) = S( \rva_3, \rva_4)
\mbox{ and permutations}
\end{array}
\right.
\;.
\eeq
\end{claim}
\proof
This is just a
generalization of Claim \ref{claim-sa-is-sb}.
\qed
\begin{claim}
Suppose $I,J$
are nonempty, disjoint subsets of $\zn$
such that $I\cup J = \zn$.
If $\rho_{\rva_\zn}$
is a pure state, then
\begin{subequations}
\beq
S(\rva_{I}|\rva_{J})=-S(\rva_{I})=-S(\rva_{J})
\;
\eeq
\beq
S(\rva_{I}:\rva_{J}) = 2 S(\rva_{I})=2 S(\rva_{J})
\;
\eeq
\end{subequations}
\end{claim}
\proof
Obvious.
\qed
\begin{claim}
Suppose $I,J,K$
are nonempty, disjoint subsets of $\zn$
such that $I\cup J\cup K = \zn$.
If $\rho_{\rva_\zn}$
is a pure state, then
\begin{subequations}
\beq
S(\rva_{J}|\rva_{I}) = S(\rva_{K}) -S(\rva_{I})
\;
\eeq
\beq
S(\rva_{J}|\rva_{I}) = -S(\rva_{J}|\rva_{K})
\;
\eeq
\beq
S(\rva_{I}:\rva_{J})
=
S(\rva_{I})+S(\rva_{J})-S(\rva_{K})
\;
\eeq
\beq
S(\rva_{I}:\rva_{J}|\rva_{K})= S(\rva_{I}:\rva_{J})
\;
\eeq
\end{subequations}
\end{claim}
\proof
Obvious.
\qed
\begin{claim}
Suppose $I,J,K,L$
are nonempty, disjoint subsets of $\zn$
such that $I\cup J\cup K\cup L = \zn$.
If $\rho_{\rva_\zn}$
is a pure state, then
\begin{subequations}
\beq
S(\rva_{I}:\rva_{J}|\rva_{K}) =
S(\rva_{I}|\rva_{K})+
S(\rva_{I}|\rva_{L})
\;
\eeq
\beq
S(\rva_{I}:\rva_{J}|\rva_{K}) =
S(\rva_{I}:\rva_{J}|\rva_{L})
\;
\eeq
\end{subequations}
\end{claim}
\proof
Obvious.
\qed
\section{Appendix: RUM of Pure States}
In this appendix,
I describe
what I call
the RUM (Roots of Unity Model) of pure states.
The model only works
for pure states, and even
for those there is no guarantee
that it will always give the
right answer. That's why I
call it a model.
One famous physics ``model" is
the Bohr model of
the Hydrogen atom. The Bohr model
gives some nice intuition
about what is going on, plus it
predicts some (not all)
of the features of the Hydrogen
spectrum.
The RUM of pure states gives some
insight into why quantum
conditional entropies $S(\rvb|\rva)$
can be negative unlike classical
conditional entropies $H(\rvb|\rva)$
which are always non-negative.
It also gives some
insight into the identities
presented in Appendix \ref{app-partial-ents}
for the partial entropies of multi-partite states.
It ``explains" such identities
as being a consequence of
the high degree of symmetry
of pure multi-partite states.
Consider an $N$-partite
pure state described
by $N$ random variables
$\rva_1,\rva_2,\ldots,\rva_N$.
We {\it redefine} the
random variables $\rva_j$
so that they equal the $N$'th
roots of unity:
\beq
\rva_j = \exp(i \frac{2\pi(j-1)}{N})
\;
\eeq
for $j\in \zn$. Let
$J$ be any nonempty subset of
$Z_{1,N}$.
Let $\sum \rva_{J} =
\sum_{j\in J} \rva_j$.
We {\it redefine} the entropy of the
$N$-partite state as follows
\beq
S(\rva_{J}) = \left|\sum \rva_{J}\right|
\;.
\eeq
Note that the various
subsystems $\rva_j$
contribute to this entropy in
a coherent sum,
instead of the incoherent
sums that we usually find
when dealing with classical entropy.
Note that
\beq
\sum \rva_J = - \sum \rva_{J^c}
\;
\eeq
so
\beq
S(\rva_{J}) = S(\rva_{J^c})
\;.
\eeq
This identity
was obtained in the exact case
too, in Claim \ref{cl-sj-sjc}.
Let $J,K$ be two nonempty disjoint
subsets of $\zn$.
In this model
\beq
S(\rva_{K}|\rva_{J})
= S(\rva_{K},\rva_{J})-S(\rva_{J})=
\left|\sum\rva_{K\cup J}\right|
-\left|\sum\rva_{J}\right|
\;,
\eeq
which clearly
can be negative.
The triangle inequality gives
\beq
\left|\,\left|\sum\rva_{J}\right|-\left|\sum\rva_{K}\right|
\,\right|
\leq
\left|\sum\rva_{J\cup K}\right|
\leq
\left|\sum\rva_{J}\right|+\left|\sum\rva_{K}\right|
\;.
\eeq
This can be re-written as
\beq
|S(\rva_{J})-S(\rva_{K})|
\leq
S(\rva_{J}, \rva_{K})
\leq
S(\rva_{J}) + S(\rva_{K})
\;.
\eeq
We recognize this as
the Araki--Lieb inequality and
subadditivity of the joint entropy.
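These RUM identities are easy to check numerically. The following throwaway script (my own illustration; nothing in it is part of the model's definition) computes $S(\rva_J)=\left|\sum\rva_J\right|$ for $N=6$ and exercises the facts above:

```python
import cmath

N = 6  # illustrative number of parties
# a_j = exp(i*2*pi*(j-1)/N), the N'th roots of unity, for j = 1..N
a = {j: cmath.exp(2 * cmath.pi * 1j * (j - 1) / N) for j in range(1, N + 1)}

def S(J):
    """RUM entropy of a subset J of parties: |sum_{j in J} a_j|."""
    return abs(sum(a[j] for j in J))

parties = set(range(1, N + 1))
J, K = {1, 2}, {4, 5}

# S(J) = S(J^c): the N roots of unity sum to zero, so the two sums are negatives
assert abs(S(J) - S(parties - J)) < 1e-9

# the conditional entropy S(K|J) = S(K u J) - S(J) can be negative
print(S(K | J) - S(J))  # negative for this choice of J and K

# Araki-Lieb and subadditivity, straight from the triangle inequality
assert abs(S(J) - S(K)) <= S(J | K) + 1e-9
assert S(J | K) <= S(J) + S(K) + 1e-9
```

For this choice of $J$ and $K$ the sum over $J\cup K$ vanishes, so the ``conditional entropy'' comes out strictly negative, exactly the behaviour the model is meant to illustrate.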
\end{document} |
\begin{document}
\title{Induced subgraphs of graphs with large chromatic number.\\
X. Holes of specific residue}
\begin{abstract}
A large body of research in graph theory concerns the induced subgraphs of graphs with large chromatic number, and especially which induced cycles must occur.
In this paper, we unify and substantially extend results from a number of previous papers, showing
that, for every positive integer $k$, every graph with large chromatic number
contains either a large complete subgraph or induced cycles of all lengths
modulo $k$.
As an application, we prove two conjectures of Kalai and Meshulam from the 1990's connecting the chromatic number of a graph with the homology of its independence complex.
\end{abstract}
\section{Introduction}
All graphs in this paper are finite and have no loops or parallel edges. We denote the chromatic number of a graph $G$
by $\chi(G)$, and its clique number (the cardinality of its largest clique) by $\omega(G)$. A {\em hole} in $G$ means an induced subgraph
which is a cycle of length at least four.
What can we say about the hole lengths in a graph $G$ with large chromatic number? If $G$ is a complete
graph then it has no holes at all, and the question becomes trivial.
But if we bound the clique number of $G$ then the question becomes much more interesting, and much deeper.
In an influential paper written thirty years ago, Gy\'arf\'as~\cite{gyarfas} made a number of conjectures about
induced subgraphs of graphs with large chromatic number and bounded clique number. Three of these conjectures, concerning holes, are
particularly well-known:
\begin{thm}\label{gyarfasconj} For all $\kappa\ge 0$
\begin{itemize}
\item there exists $c$ such that every graph with chromatic number greater than $c$
contains either a complete subgraph on $\kappa$ vertices or a hole of odd length;
\item for all $\ell\ge 0$ there exists $c$ such that
every graph with chromatic number greater than $c$
contains either a complete subgraph on $\kappa$ vertices or a hole of length at least $\ell$;
\item for all $\ell\ge 0$ there exists $c$ such that
every graph with chromatic number greater than $c$
contains either a complete subgraph on $\kappa$ vertices or a hole
whose length is odd and at least $\ell$.
\end{itemize}
\end{thm}
All three conjectures are now known to be true: the first was proved by the authors in~\cite{oddholes} (see \cite{cycles} for earlier work);
the second jointly with Maria Chudnovsky in~\cite{longholes}; and the third (which is a strengthening of the first two) jointly with Chudnovsky and Sophie Spirkl in~\cite{longoddholes}.
The analogous result for long even holes is also known (it is enough to find two vertices joined by three long paths with no edges between them, and
this follows from results of \cite{bananas}).
Another intriguing result on holes was shown by Bonamy, Charbit and Thomass\'e~\cite{bonamy}, who
proved a conjecture of Kalai and Meshulam by showing the following.
\begin{thm}\label{bonamy}
Every graph with sufficiently large chromatic number contains either a triangle or a hole of length $0$ modulo $3$.
\end{thm}
In this paper we prove the following theorem, which contains all the results mentioned above as special cases.
\begin{thm}\label{mainthm}
For all $\kappa,\ell\ge 0$ there exists $c$ such that
every graph $G$ with $\chi(G)>c$ and $\omega(G)\le \kappa$
contains holes of every length modulo $\ell$.
\end{thm}
Note that this result allows us to demand a {\em long} hole of length $i$ modulo $j$ by
taking $\ell=Nj$ for large $N$ and then choosing a suitable residue.
Thus it implies all three Gy\'arf\'as conjectures; and it extends \ref{bonamy}
in several ways,
allowing us to ask for any size of clique, and a hole of any residue and as long as we want.
(Though we cannot demand a hole of any {\em specific} length: it is well-known that there are graphs
with arbitrarily large girth and chromatic number.)
We will in fact prove an even stronger statement.
We say $A,B\subseteq V(G)$ are {\em anticomplete} if $A\cap B=\emptyset$ and no vertex in $A$ has a neighbour in $B$;
and subgraphs $P,Q$ of $G$ are {\em anticomplete} if $V(P),V(Q)$ are anticomplete. We prove the following.
\begin{thm}\label{superkalai}
Let $\kappa,n\ge 0$ be integers, and for $1\le i\le n$ let $p_i\ge 0$ and $q_i\ge 1$ be integers. Then there exists $c\ge 0$
with the following property. Let $G$ be a graph such that $\chi(G)>c$ and $\omega(G)\le \kappa$. Then
there are $n$ holes $H_1,\ldots, H_{n}$ in $G$, pairwise anticomplete,
such that $H_i$ has length $p_i$ modulo $q_i$ for $1\le i\le n$.
\end{thm}
Let us restate this in slightly different language.
An ideal of graphs is {\em $\chi$-bounded} if there is a function $f$ such that $\chi(G)\le f(\omega(G))$ for every graph $G$ in the
class. Thus \ref{superkalai} can be reformulated as:
\begin{thm}\label{boundedkalai}
Let $n\ge 0$ be an integer, and for $1\le i\le n$ let $p_i\ge 0$ and $q_i\ge 1$ be integers. Let $\mathcal{C}$
be the ideal of all graphs that do not contain $n$ pairwise anticomplete holes $H_1,\ldots, H_n$ where
$H_i$ has length $p_i$ modulo $q_i$ for $1\le i\le n$.
Then $\mathcal{C}$ is $\chi$-bounded.
\end{thm}
\ref{boundedkalai} (or equivalently \ref{superkalai}) implies \ref{mainthm}, and also implies the main theorem of~\cite{complement}, which is the case of \ref{superkalai} when
$p_i=1$ and $q_i=2$ for each $i$. But it also has applications to further conjectures
of Kalai and Meshulam~\cite{kalai}, connecting graph theory with topology, and in particular with the homology of the independence complex of $G$. We discuss these in the final section.
Let us say a hole $H$ in $G$ is {\em $d$-peripheral} if $\chi(G[X])>d$, where $X$ is the set of vertices of $G$
that are not in $V(H)$ and have no neighbours in $V(H)$.
\ref{superkalai} follows easily from the following version of \ref{mainthm}, which will therefore be our main objective:
\begin{thm}\label{peripheral}
For all $\kappa,\ell,d\ge 0$ there exists $c$ such that
every graph $G$ with $\chi(G)>c$ and $\omega(G)\le \kappa$
contains
$d$-peripheral holes of every length modulo $\ell$.
\end{thm}
\noindent{\bf Proof of \ref{superkalai}, assuming \ref{peripheral}.\ \ }
Let $\kappa, n$ and $p_i,q_i\;(1\le i\le n)$ be as in \ref{superkalai}. We may assume that
$n\ge 1$ and $\kappa\ge 2$, and we proceed by induction on $n$, for fixed $\kappa$.
Choose $d$
such that for every graph $G$ with $\chi(G)>d$ and $\omega(G)\le \kappa$,
there are $n-1$ holes $H_1,\ldots, H_{n-1}$ in $G$, pairwise anticomplete,
where $H_i$ has length $p_i$ modulo $q_i$ for $1\le i\le n-1$.
Let $c$ satisfy \ref{peripheral}
with $\ell$ replaced by $q_n$. We claim that $c$ satisfies \ref{superkalai}; for let
$G$ be a graph such that $\chi(G)>c$ and $\omega(G)\le \kappa$. By \ref{peripheral}, $G$ has a $d$-peripheral
hole $H_n$ of length $p_n$ modulo $q_n$. Let $X$ be the set of vertices of $G$ not in $H_n$ and with no neighbour in $H_n$.
Thus $\chi(G[X])>d$. From the inductive hypothesis, $G[X]$ has
$n-1$ holes $H_1,\ldots, H_{n-1}$, pairwise anticomplete,
where $H_i$ has length $p_i$ modulo $q_i$ for $1\le i\le n-1$. But then $H_1,\ldots, H_n$ satisfy the theorem.~\bbox
In this paper, we are also interested in holes of nearly
equal length.
In the triangle-free case, a result is known that is even stronger than \ref{mainthm}: we proved in~\cite{holeseq} that
\begin{thm}\label{holeseq}
For all $\ell\ge 0$ there exists $c$ such that
every triangle-free graph with chromatic number greater than $c$
contains holes of $\ell$ consecutive lengths.
\end{thm}
We conjectured in~\cite{holeseq} that the same should be true if we exclude larger cliques:
\begin{thm}\label{moreholeseq}
{\bf Conjecture:}
For all integers $\kappa, \ell\ge 0$,
there exists $c\ge 0$ such that
every graph with chromatic number greater than $c$
contains either a complete subgraph on $\kappa$ vertices or
holes of $\ell$ consecutive lengths.
\end{thm}
This conjecture remains open. However, we make a small step towards it: we will show that under the same hypotheses,
there are (long) holes of two consecutive lengths.
\begin{thm}\label{2holes}
For each $\kappa,\ell\ge 0$ there exists $c\ge 0$ such that
every graph with chromatic number greater than $c$
contains either a complete subgraph on $\kappa$ vertices or
holes of two consecutive lengths, both of length more than $\ell$.
\end{thm}
We have convinced ourselves that with a great deal of work, which we omit, we could get three consecutive ``long'' holes,
but so far that is the best we can do.
As in several other papers of this series, the proof of \ref{peripheral} examines whether there is an induced subgraph
of large chromatic number such that every ball of small radius in it has bounded chromatic number.
Let us make this more precise.
If $X\subseteq V(G)$, the subgraph of $G$ induced on $X$ is denoted by $G[X]$,
and we often write $\chi(X)$ for $\chi(G[X])$. The {\em distance} or {\em $G$-distance} between two vertices $u,v$
of $G$ is the length of a shortest path between $u,v$, or $\infty$ if there is no such path.
If $v\in V(G)$ and $\rho\ge 0$ is an integer, $N_G^{\rho}(v)$ or $N^{\rho}(v)$ denotes
the set of all vertices $u$ with $G$-distance exactly
$\rho$ from $v$, and $N_G^{\rho}[v]$ or $N^{\rho}[v]$ denotes the set of all $u$ with $G$-distance at most $\rho$ from $v$.
If $G$ is a nonnull graph and $\rho\ge 1$,
we define $\chi^{\rho}(G)$ to be the maximum of $\chi(N^{\rho}[v])$ taken over all vertices $v$ of $G$.
(For the null graph $G$ we define $\chi^{\rho}(G)=0$.)
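To make $N^{\rho}[v]$ and $\chi^{\rho}(G)$ concrete, here is a small brute-force sketch (my own illustration, exponential time and intended only for toy graphs; the encoding of a graph as an adjacency dict and all names are mine):

```python
from itertools import product

def ball(adj, v, rho):
    """N^rho[v]: the set of vertices at G-distance at most rho from v (BFS)."""
    dist = {v: 0}
    frontier = [v]
    while frontier:
        nxt = []
        for u in frontier:
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    nxt.append(w)
        frontier = nxt
    return {u for u, d in dist.items() if d <= rho}

def chromatic_number(adj, verts):
    """Exact chromatic number of the subgraph induced on verts (brute force)."""
    verts = sorted(verts)
    if not verts:
        return 0
    for k in range(1, len(verts) + 1):
        for colouring in product(range(k), repeat=len(verts)):
            col = dict(zip(verts, colouring))
            if all(col[u] != col[w]
                   for u in verts for w in adj[u] if w in col and u < w):
                return k

def chi_rho(adj, rho):
    """chi^rho(G): the maximum of chi(N^rho[v]) over all vertices v."""
    return max((chromatic_number(adj, ball(adj, v, rho)) for v in adj), default=0)

# C_5, a 5-hole: chi(C_5) = 3, but every 1-ball induces a 3-vertex path, so chi^1 = 2
C5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(chromatic_number(C5, C5), chi_rho(C5, 1))  # 3 2
```

The 5-hole illustrates the gap the definition is after: the whole graph needs three colours, while every small ball is 2-colourable.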
Let $\mathbb{N}$ denote the set of nonnegative integers, and let $\phi:\mathbb{N}\rightarrow \mathbb{N}$ be a non-decreasing function.
For $\rho\ge 1$, let us say a graph $G$ is {\em $(\rho,\phi)$-controlled} if
$\chi(H)\le \phi(\chi^{\rho}(H))$ for every induced subgraph $H$ of $G$. Roughly, this says that in every induced subgraph $H$ of $G$ with
large chromatic number, there is a vertex $v$ such that $H[N^{\rho}_H[v]]$ has large chromatic number.
Let $\mathcal{C}$ be a class of graphs. We say $\mathcal{C}$ is an {\em ideal} if every induced subgraph of each member
of $\mathcal{C}$ also belongs to $\mathcal{C}$. If $\rho\ge 2$ is an integer, an ideal $\mathcal{C}$
is {\em $\rho$-controlled} if there is a nondecreasing function $\phi:\mathbb{N}\rightarrow \mathbb{N}$
such that every graph in $\mathcal{C}$ is $(\rho,\phi)$-controlled. For $\ell\ge 4$, an {\em $\ell$-hole} means a hole
of length exactly $\ell$.
The proof of \ref{peripheral} breaks into two parts, the 2-controlled case and the $\rho$-controlled case when $\rho>2$
(because if we can be sure that all 2-balls have small chromatic number then it is easier to piece together paths to make holes
of any desired length). We will prove the following two complementary results,
which together imply \ref{peripheral}:
\begin{thm}\label{radthm}
Let $\rho\ge 2$ be an integer, and let $\mathcal{C}$ be a $\rho$-controlled ideal of graphs.
Let $\ell\ge 24$ if $\rho=2$, and $\ell\ge 8\rho^2+6\rho$ if $\rho>2$. Then for all $\kappa,d\ge 0$,
there exists $c\ge 0$ such that every graph $G\in \mathcal{C}$ with $\omega(G)\le \kappa$ and $\chi(G)>c$
has a $d$-peripheral $\ell$-hole.
\end{thm}
\begin{thm}\label{uncontrolled}
For all integers $\ell\ge 2$ and $\tau,d\ge 0$ there is an integer $c\ge 0$ with the following property.
Let $G$ be a graph such that $\chi^8(G)\le \tau$, and every induced subgraph $J$ of $G$ with
$\omega(J)<\omega(G)$
has chromatic number at most $\tau$.
If $\chi(G)>c$ then there are $\ell$ $d$-peripheral holes in $G$ with lengths of all possible values modulo $\ell$.
\end{thm}
\noindent {\bf Proof of \ref{peripheral}, assuming \ref{radthm} and \ref{uncontrolled}.\ \ }
Let $\kappa,\ell,d\ge 0$, and let $\mathcal{C}$ be the ideal of graphs with clique number at most $\kappa$ and with no $d$-peripheral
hole of some
length modulo $\ell$.
By \ref{uncontrolled}, for each $\tau\ge 0$ there exists $c_{\tau}$ such that every $G\in \mathcal{C}$ with $\chi^8(G)\le \tau$
satisfies $\chi(G)\le c_{\tau}$,
and so $\mathcal{C}$ is 8-controlled. By \ref{radthm} the theorem follows. This proves \ref{peripheral}.~\bbox
We prove
the 2-controlled case of \ref{radthm} in the next section, and the $\rho>2$ case in section 3, deducing \ref{radthm} at the end of section 3.
We prove \ref{uncontrolled} in section~\ref{sec:uncontrolled}, completing the proof of \ref{peripheral};
and
prove the theorem about two consecutive long holes in section \ref{sec:2holes}.
\section{2-control}
First we handle the 2-controlled case. The proof here is very much like part of the proof of
theorem 4.8 of~\cite{longoddholes}; the main difference is a strengthening of theorem 4.5 of that paper.
First we need some definitions.
If $G$ is a graph and $B,C\subseteq V(G)$, we say that $B$ {\em covers} $C$ if $B\cap C=\emptyset$ and every vertex in $C$
has a neighbour in $B$.
Let $G$ be a graph, let $x\in V(G)$, let $N$ be some set of neighbours of $x$, and let $C\subseteq V(G)$
be disjoint from $N\cup \{x\}$, such that $x$ is anticomplete to $C$ and $N$ covers $C$. In this situation
we call $(x,N)$ a {\em cover} of $C$ in $G$. For $C,X\subseteq V(G)$, a {\em multicover of $C$} in $G$
is a family $(N_x:x\in X)$
such that
\begin{itemize}
\item $X$ is stable;
\item for each $x\in X$, the pair $(x,N_x)$ is a cover of $C$;
\item for all distinct $x,x'\in X$, the vertex $x'$ is anticomplete to $N_x$ (and in particular all the sets
$\{x\}\cup N_x$ are pairwise disjoint).
\end{itemize}
Its {\em length} is $|X|$, and the multicover $(N_x:x\in X)$ is {\em stable} if each of the sets $N_x\;(x\in X)$ is stable.
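The conditions in the definition of a multicover can be restated mechanically; the following sketch (my own illustration, with the graph encoded as an adjacency dict and all names mine) checks them directly on a toy example:

```python
def is_cover(adj, x, N, C):
    """(x, N) is a cover of C: N is a set of neighbours of x, C is disjoint
    from N and {x}, x is anticomplete to C, and N covers C."""
    if not N <= adj[x]:
        return False
    if C & (N | {x}):
        return False
    if adj[x] & C:                        # x must have no neighbour in C
        return False
    return all(adj[v] & N for v in C)     # every vertex of C has a neighbour in N

def is_multicover(adj, N, C):
    """(N[x] : x in X) is a multicover of C, where X is the set of keys of N."""
    X = set(N)
    if any(y in adj[x] for x in X for y in X if x != y):   # X is stable
        return False
    if not all(is_cover(adj, x, N[x], C) for x in X):
        return False
    # for distinct x, x': the vertex x' is anticomplete to N_x
    return all(x2 not in N[x] and not (adj[x2] & N[x])
               for x in X for x2 in X if x2 != x)

# a tiny example: two covers of C = {c1, c2}, giving a multicover of length 2
adj = {v: set() for v in ["x1", "x2", "a1", "a2", "c1", "c2"]}
for u, v in [("x1", "a1"), ("x2", "a2"),
             ("a1", "c1"), ("a1", "c2"), ("a2", "c1"), ("a2", "c2")]:
    adj[u].add(v); adj[v].add(u)
print(is_multicover(adj, {"x1": {"a1"}, "x2": {"a2"}}, {"c1", "c2"}))  # True
```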
\begin{figure}
\caption{A multicover.}\label{fig:1}
\end{figure}
Let $(N_x:x\in X)$ be a multicover of $C$, let $X'\subseteq X$, and for each $x\in X'$ let $N_x'\subseteq N_x$;
and let $C'\subseteq C$ be covered by each of the
sets $N_x'\;(x\in X')$. Then $(N_x':x\in X')$ is a multicover of $C'$, and we say it is {\em contained} in $(N_x:x\in X)$.
Again, let $(N_x:x\in X)$ be a multicover of $C$. Let $P$ be an induced path of $G$ with the following properties:
\begin{itemize}
\item $P$ has length three or five;
\item the ends of $P$ are in $X$;
\item no vertex of $X$ not an end of $P$ belongs to or has a neighbour in $V(P)$; and
\item every vertex of $P$ belongs to $X\cup \bigcup_{x\in X}N_x\cup C$.
\end{itemize}
Let us call such a path $P$ an {\em oddity} for the multicover.
\begin{figure}
\caption{An oddity.}\label{fig:2}
\end{figure}
If $(N_x:x\in X)$ is a multicover of $C$, with an oddity $P$, and
$(N_x':x\in X')$ is a multicover of $C'\subseteq C$ contained in $(N_x:x\in X)$, and $V(P)$ is anticomplete
to $X'\cup \bigcup_{x\in X'}N_x'\cup C'$, we say that $(N_x':x\in X')$ is a multicover of $C'$ {\em compatible} with $P$.
Let $H$ be the subgraph induced on $\bigcup_{x\in X}N_x$; we call the clique number of $H$
the {\em cover clique number} of $(N_x:x\in X)$.
First we need to show the following:
\begin{thm}\label{getoddity}
Let $\tau,\kappa, m',c'\ge 0$ be integers, and let $0\le \kappa'\le \kappa$ be an integer.
Then there exist integers $m,c\ge 0$ with the following property.
Let $G$ be a graph such that
\begin{itemize}
\item $\omega(G)\le \kappa$;
\item $\chi(H)\le \tau$ for every induced subgraph $H$ of $G$
with $\omega(H)<\kappa$; and
\item $G$ admits a
stable multicover $(N_x:x\in X)$ with length $m$, of a set $C$ with $\chi(C)>c$, with
cover clique number at most $\kappa'$.
\end{itemize}
Then there is an oddity $P$ for the multicover, and a multicover $(N_x':x\in X')$
of $C'\subseteq C$ contained in $(N_x:x\in X)$ and compatible with $P$, such that $|X'|=m'$ and $\chi(C')>c'$.
\end{thm}
\noindent{\bf Proof.}\ \
We proceed by induction on $\kappa'$, with $\tau,\kappa,m',c'$ fixed.
Thus, inductively, there exist $m_0,c_0\ge 0$ such that
the theorem holds if $m,c$ are replaced by $m_0,c_0$ respectively, and $\kappa'$ is replaced by any $\kappa_0$ with
$0\le \kappa_0<\kappa'$.
(Note that possibly $\kappa'=0$, when this statement is vacuous; in that case take $m_0=c_0=0$.)
Let $m=4+4m_0+2m'$. Define $c_m=4\tau+2^{m}(c_0+c')$, and
for $i=m-1,\ldots, 1$ let $c_{i}=2c_{i+1}+\tau$. Let $c=2c_1+\tau$; we will show that $m,c$ satisfy the theorem.
Let $G$, $(N_x:x\in X)$ and $C$ be as in the theorem, where $|X|=m$, $\chi(C)>c$ and the cover clique
number of $(N_x:x\in X)$ is at most $\kappa'$. We may assume (because otherwise the theorem follows from the inductive hypothesis) that:
\\
\\
(1) {\em There is no multicover $(N_x':x\in X')$ of $C'\subseteq C$ contained in $(N_x:x\in X)$ with cover clique number less than $\kappa'$,
and with $|X'|=m_0$ and $\chi(C')>c_0$.}
Let $X=\{x_1,\ldots, x_m\}$, and let us write $N_i$ for $N_{x_i}$ for $1\le i\le m$.
\\
\\
(2) {\em For $1\le i\le m$, there exist disjoint $C_i,D_i\subseteq C$ with $\chi(C_i), \chi(D_i)>c_i$, and $A_h\subseteq N_h$
for $1\le h\le i$, such that each $A_h$ covers one of $C_i, D_i$ and is anticomplete to the other.}
\\
\\
If $A\subseteq N_1\cup \cdots \cup N_m$, let $f(A)$ denote the set of vertices in $C$ with a neighbour in $A$.
Since $\chi(C)>c$, there exists $A_1\subseteq N_1$ minimal such that $f(A_1)$
has chromatic number more than $c_1$. Let $C_1=f(A_1)$ and $D_1=C\setminus C_1$.
From the minimality of $A_1$, it follows that $\chi(C_1)\le c_1+\tau$. Consequently
$\chi(D_1)\ge \chi(C)-(c_1+\tau)> c_1$. Thus (2) holds for $i = 1$.
Now we assume that $i>1$ and $C_{i-1}, D_{i-1}$ and the sets $A_1,\ldots, A_{i-1}$
satisfy (2) for $i-1$.
Choose $A_i\subseteq N_i$
minimal such that one of $\chi(f(A_i)\cap C_{i-1})$, $\chi(f(A_i)\cap D_{i-1})$ is more than $c_i$; say
the first (without loss of generality). Let $C_i=f(A_i)\cap C_{i-1}$. Now $\chi(f(A_i)\cap D_{i-1})\le c_i+\tau$, from the
minimality of $A_i$, so $\chi(D_i)>c_{i-1}-c_i-\tau\ge c_i$, where $D_i = D_{i-1}\setminus f(A_i)$. Thus $A_i$ covers $C_i$
and is anticomplete to $D_i$. This proves (2).
From (2) with $i = m$, each $A_i$ covers one of $C_m, D_m$ and is anticomplete to the other. By exchanging $C_m,D_m$
if necessary, we may assume that for at least $m/2$ values of $i$, $A_i$ covers $C_m$ and is anticomplete to $D_m$.
We may assume (by reordering $x_1,\ldots, x_m$) that $A_i$ covers $C_m$ and is anticomplete to $D_m$ for $1\le i\le m/2$.
Let $B_i=N_i\setminus A_i$ for $1\le i\le m/2$.
\\
\\
(3) {\em There is an oddity $P$ for $(N_x:x\in X)$ with ends $x_1,x_2$ and with interior in
$B_1\cup B_2\cup D_m.$}
\\
\\
Since $\chi(D_m)>c_m\ge \tau$, and every induced subgraph of $G$ with clique number less than $\kappa$ has chromatic
number at most $\tau$, there is a clique $Z\subseteq D_m$
with $|Z|=\kappa$. Now $N_1$ covers $D_m$, but $A_1$ is anticomplete to $D_m$, so $B_1$ covers $D_m$.
Similarly $B_2$ covers $D_m$. Choose a vertex $y_1\in B_{1}\cup B_{2}$
with as many neighbours in $Z$ as possible; and we may assume that
$y_1\in B_{1}$. Not every vertex of $Z$ is adjacent to $y_1$, since $\omega(G)\le \kappa$;
let $z_2\in Z$ be nonadjacent to
$y_1$. Choose $y_2\in B_{2}$ adjacent to $z_2$.
From the choice of $y_1$, there exists
$z_1\in Z$ adjacent to $y_1$ and not to $y_2$. If $y_1,y_2$ are nonadjacent, then $x_1\hbox{-} y_1\hbox{-} z_1\hbox{-} z_2\hbox{-} y_2\hbox{-} x_2$ is an oddity,
and if $y_1,y_2$ are adjacent then $x_1\hbox{-} y_1\hbox{-} y_2\hbox{-} x_2$ is an oddity. This proves (3).
Now there are at most four vertices of $P$ that have neighbours in $C_m$, and so there exists $F\subseteq C_m$ with
$\chi(F)>c_m-4\tau=2^{m}(c_0+c')$ that is anticomplete to $V(P)$. There are two vertices of $P$ in $N_1\cup N_2$, and those are the only
vertices of $P$ that might have neighbours in $A_i$ for $3\le i\le m/2$. Let these vertices be $p,q$, and for $3\le i\le m/2$
let $P_i$ be the set of vertices in $A_i$ adjacent to $p$, and $Q_i$ the set adjacent to $q$.
For each $v\in F$, let $I(v)$ be the set of $i$ with $3\le i\le m/2$
such that $v$ has a neighbour in $P_i$. For each subset $I\subseteq \{3,\ldots, m/2\}$ with $|I|=m_0$, the chromatic number of
the set of $v\in F$ with $I\subseteq I(v)$ is at most $c_0$, by (1). Since there are at most $2^{m-1}$ such subsets $I$,
the set of vertices $v\in F$ with $|I(v)|\ge m_0$ has chromatic number at most $2^{m-1}c_0$; and similarly the set of vertices
adjacent to neighbours of $q$ in at least $m_0$ sets $A_i$ has chromatic number at most $2^{m-1}c_0$. Consequently
there exists $F'\subseteq F$ with
$$\chi(F')\ge \chi(F)-2^{m}c_0> 2^{m}c'$$
such that for each $v\in F'$, there are at most $2m_0$ values of $i\in \{3,\ldots, m/2\}$ such that $v$ is adjacent to a neighbour
of $p$ or $q$ in $A_i$. There are at most $2^{m}$ possibilities for the set of these values, so there exists
$C'\subseteq F'$ with $\chi(C')\ge \chi(F')2^{-m}>c'$ such that all vertices in $C'$ have the same set of values,
and in particular there exists $I\subseteq \{3,\ldots, m/2\}$ with $|I|=m/2-2-2m_0=m'$ such that no vertex in $C'$
has a neighbour adjacent to $p$ or $q$ in any $A_i\;(i\in I)$. For each $i\in I$, let $N_{x_i}'$ be the set of vertices in $A_i$
nonadjacent to both $p,q$. Then $(N_{x}':x\in \{x_i:i\in I\})$ is a multicover of $C'$, contained in $(N_x:x\in X)$,
and compatible with $P$. This proves \ref{getoddity}.~\bbox
By three successive applications of \ref{getoddity} (one for each oddity), we deduce:
\begin{thm}\label{gettriple}
For all integers $\tau,\kappa\ge 0$, there exist integers $m,c\ge 0$ with the following property.
Let $G$ be a graph such that
\begin{itemize}
\item $\omega(G)\le \kappa$;
\item $\chi(H)\le \tau$ for every induced subgraph $H$ of $G$
with $\omega(H)<\kappa$; and
\item $G$ admits a
stable multicover $(N_x:x\in X)$ of a set $C$,
where $|X|=m$ and $\chi(C)>c$.
\end{itemize}
Then there are three oddities $P_1,P_2,P_3$ for the multicover, where $V(P_1), V(P_2), V(P_3)$
are pairwise anticomplete.
\end{thm}
(The same is true with ``three'' replaced by any other positive integer, but we only need three.)
Next we need:
\begin{thm}\label{findhole}
Let $\ell\ge 24$ be an integer.
Take the complete bipartite graph $K_{\ell,\ell}$, with bipartition $A,B$. Add three more edges joining three disjoint
pairs of vertices in $A$. Now subdivide every edge between $A$ and $B$ once, and subdivide each of the three additional
edges either
two or four times. The graph we produce has a hole of length $\ell$.
\end{thm}
We leave the proof to the reader (use the fact that if
$x,y,z\in \{3,5\}$ then $\ell$ is expressible as a sum of some or none of $x,y,z$ and at least three 4's).
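The arithmetic fact invoked in this hint is easy to machine-check over a range of values (a throwaway verification script of my own, not part of the proof):

```python
def expressible(l, x, y, z):
    # Can l be written as (a sum of some or none of x, y, z) plus 4*t with t >= 3?
    subset_sums = {0, x, y, z, x + y, x + z, y + z, x + y + z}
    return any((l - s) % 4 == 0 and (l - s) // 4 >= 3 for s in subset_sums)

# every l >= 24 works, for every choice of x, y, z in {3, 5}
print(all(expressible(l, x, y, z)
          for l in range(24, 500)
          for x in (3, 5) for y in (3, 5) for z in (3, 5)))  # True
```

The bound 24 is tight in this form: for instance $\ell=23$ with $x=y=z=5$ fails, since $23-15=8$ leaves only two 4's.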
A multicover $(N_x:x\in X)$ of $C$ is said to be {\em stably $k$-crested} if there are vertices $a_1,\ldots, a_k$ and
vertices $a_{ix}\;(1\le i\le k, x\in X)$ of $G$,
all distinct, with the following properties:
\begin{itemize}
\item $a_1,\ldots, a_k$ and the vertices $a_{ix}\;(1\le i\le k, x\in X)$ do not belong to $X\cup C\cup \bigcup_{x\in X}N_x$;
\item for $1\le i\le k$ and each $x\in X$, $a_{ix}$ is adjacent to $x$, and there are no other edges between the sets
$\{a_1,\ldots, a_k\}\cup \{a_{ix}:1\le i\le k, x\in X\}$ and $X\cup C\cup \bigcup_{x\in X}N_x$;
\item for $1\le i\le k$ and each $x\in X$, $a_{ix}$ is adjacent to $a_i$, and there are no other edges between $\{a_1,\ldots, a_k\}$ and $\{a_{ix}:1\le i\le k, x\in X\}$;
\item $a_1,\ldots, a_k$ are pairwise nonadjacent;
\item for all $i,j\in \{1,\ldots, k\}$ and all distinct $x,y\in X$, $a_{ix}$ is nonadjacent to $a_{jy}$.
\end{itemize}
(Thus the ``crest'' part is obtained from
$K_{k,|X|}$ by subdividing every edge once.)
We deduce:
\begin{thm}\label{crested}
Let $\ell\ge 24$, and let $\tau,\kappa\ge 0$. Then there exist $m,c\ge 0$ with the following property.
Let $G$ be a graph such that
\begin{itemize}
\item $\omega(G)\le \kappa$;
\item $\chi(H)\le \tau$ for every induced subgraph $H$ of $G$
with $\omega(H)<\kappa$; and
\item $G$ admits a stably
$\ell$-crested stable multicover $(N_x:x\in X)$ of a set $C$,
where $|X|=m$ and $\chi(C)>c$.
\end{itemize}
Then $G$ has a hole of length $\ell$.
\end{thm}
\noindent{\bf Proof.}\ \
Let $m,c$ satisfy \ref{gettriple}, choosing $m\ge \ell$ (note that if $m,c$ satisfy \ref{gettriple} then so do $m',c$
for $m'\ge m$). By \ref{gettriple}, there are three oddities, pairwise anticomplete;
and the result follows from \ref{findhole}. This proves \ref{crested}.~\bbox
Theorem 4.4 of~\cite{longoddholes} says:
\begin{thm}\label{getbigtick}
For all $m,c,k,\kappa,\tau\ge 0$ there exist $m',c'\ge 0$ with the following property.
Let $G$ be a graph with $\omega(G)\le \kappa$, such that $\chi(H)\le \tau$ for every induced subgraph $H$ of $G$ with $\omega(H)<\kappa$.
Let $(N_x':x\in X')$ be a multicover in $G$ of some set $C'$, such that $|X'|\ge m'$ and $\chi(C')> c'$.
Then there exist $X\subseteq X'$ with $|X|\ge m$, and $C\subseteq C'$ with $\chi(C)> c$, and a stable multicover $(N_x:x\in X)$
of $C$ contained in $(N_x':x\in X')$ that is stably $k$-crested.
\end{thm}
Combining \ref{crested} and \ref{getbigtick}, we deduce the following (a strengthening of
theorem 4.5 of~\cite{longoddholes}):
\begin{thm}\label{nomult}
Let $\ell\ge 24$, and let $\tau,\kappa\ge 0$. Then there exist $m,c\ge 0$ with the following property.
Let $G$ be a graph such that
\begin{itemize}
\item $\omega(G)\le \kappa$;
\item $\chi(H)\le \tau$ for every induced subgraph $H$ of $G$
with $\omega(H)<\kappa$; and
\item $G$ admits a
multicover with length $m$, of a set $C$ with $\chi(C)>c$.
\end{itemize}
Then $G$ has a hole of length $\ell$.
\end{thm}
We need the following, a consequence of theorem 9.7 of~\cite{chandeliers}. That involves ``trees of lamps'', but we
do not need to define
those here; all we need is that a cycle of length $\ell$ is a tree of lamps. (Note that what we call a
``multicover'' here is called a ``strongly-independent 2-multicover'' in that paper, and indexed in a slightly different way.)
\begin{thm}\label{findchand5}
Let $m,\kappa,c',\ell\ge 0$, and let $\mathcal{C}$ be a 2-controlled ideal,
such that for every $G\in \mathcal{C}$:
\begin{itemize}
\item $\omega(G)\le \kappa$;
\item $G$ does not admit a multicover of length $m$ of a set with chromatic number more than $c'$; and
\item $G$ has no hole of length $\ell$.
\end{itemize}
Then there exists $c$ such that all graphs in $\mathcal{C}$ have chromatic number at most $c$.
\end{thm}
Now we prove the main result of this section, that is, \ref{radthm} with $\rho=2$.
\begin{thm}\label{rad2}
Let $\ell\ge 24$ and let $\mathcal{C}$ be a 2-controlled ideal of graphs. For all $\kappa,d\ge 0$ there exists $c$
such that every graph in $\mathcal{C}$ with clique number at most $\kappa$ and chromatic number more than $c$
has a $d$-peripheral hole of length $\ell$.
\end{thm}
\noindent{\bf Proof.}\ \
We proceed by induction on $\kappa$. The result holds for $\kappa\le 1$, so we assume that $\kappa\ge 2$, and that there exists $\tau$ such that
every graph in $\mathcal{C}$ with clique number less than $\kappa$ has chromatic number at most $\tau$.
Let $\mathcal{C}'$ be the ideal of graphs $G\in \mathcal{C}$ such that $\omega(G)\le \kappa$ and $G$ has no hole of
length $\ell$. Choose $m,c'$ to satisfy \ref{nomult} (with $c$ replaced by $c'$). Choose $c''$ to satisfy
\ref{findchand5} (with $\mathcal{C}$ replaced by $\mathcal{C}'$, and $c$ replaced by $c''$). Let $c=\max(c'',d+\ell \tau)$; we claim that
$c$ satisfies the theorem. For let $G\in \mathcal{C}$ with $\omega(G)\le \kappa$ and $\chi(G)>c$.
Suppose first that $G\in \mathcal{C}'$. By \ref{nomult},
$G$ does not admit a multicover with length $m$ of a set with chromatic number more than $c'$. From
\ref{findchand5}, $\chi(G)\le c''$, a contradiction.
Thus $G\notin \mathcal{C}'$, and so $G$
has an $\ell$-hole $H$. For each vertex of $H$, its set of neighbours has chromatic number at most $\tau$; and so
the set of all vertices of $G$ that belong to or have a neighbour in $H$ has chromatic number at most $\ell\tau$. Since
$\chi(G)>d+\ell \tau$, it follows that $H$ is $d$-peripheral. This proves \ref{rad2}.~\bbox
\section{The $\rho$-controlled case for $\rho\ge 3$.}
Let $G$ be a graph. A {\em grading} of $G$ is a sequence $(W_1,\ldots, W_n)$ of subsets of $V(G)$, pairwise
disjoint and with union $V(G)$. If $\tau\ge 0$ is such that $\chi(G[W_i])\le \tau$ for $1\le i\le n$,
we say the grading is {\em $\tau$-colourable}. We say that $u\in V(G)$ is {\em earlier} than $v\in V(G)$
(with respect to some grading $(W_1,\ldots, W_n)$) if $u\in W_i$ and $v\in W_j$ where $i<j$.
Let $G$ be a graph, and let $B,C\subseteq V(G)$, where $B$ covers $C$. Let $B=\{b_1,\ldots, b_m\}$.
For $1\le i<j\le m$ we say that $b_i$ is {\em earlier} than $b_j$
(with respect to the enumeration $(b_1,\ldots, b_m)$). For $v\in C$, let $i\in \{1,\ldots, m\}$ be minimum such that $b_i,v$ are adjacent;
we call $b_i$ the {\em earliest parent} of $v$. An edge $uv$ of $G[C]$ is said to be {\em square} (with respect to the
enumeration
$(b_1,\ldots, b_m)$) if
the earliest parent of $u$ is nonadjacent to $v$, and the earliest parent of $v$ is nonadjacent to $u$.
Let $B=\{b_1,\ldots, b_m\}$, and let $(W_1,\ldots, W_n)$ be a grading of $G[C]$. We say the enumeration $(b_1,\ldots, b_m)$ of $B$ and the
grading $(W_1,\ldots, W_n)$ are {\em compatible} if for all $u,v\in C$ with $u$ earlier than $v$, the earliest parent of $u$ is
earlier than
the earliest parent of $v$.
A graph $H$ is a {\em $\rho$-ball} if either $V(H)=\emptyset$ or there is a vertex $z\in V(H)$ such that every vertex of $H$
has $H$-distance at most $\rho$ from $z$; and we call $z$ a {\em centre} of the $\rho$-ball.
If $G$ is a graph, a subset $X\subseteq V(G)$ is said to be a {\em $\rho$-ball} if $G[X]$ is a
$\rho$-ball. (Note that there may be vertices of $G$ not in $X$ that have $G$-distance at most $\rho$ from $z$; and also,
for a pair of vertices in $X$, their $G$-distance and their $G[X]$-distance may be different.)
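The parenthetical remark matters computationally: membership in a $\rho$-ball is tested with $G[X]$-distances, not $G$-distances. A minimal sketch (ours, not from the paper) of this test, with the graph as an adjacency-set dictionary:

```python
from collections import deque

def is_rho_ball(adj, x, rho):
    """True if X is a rho-ball: X is empty, or some z in X has G[X]-distance
    at most rho to every vertex of X (distances measured inside G[X])."""
    x = set(x)
    if not x:
        return True
    def ecc(z):
        # BFS inside G[X]; returns the eccentricity of z in G[X], or None
        # if G[X] is not connected from z.
        dist = {z: 0}
        q = deque([z])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w in x and w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        return max(dist.values()) if len(dist) == len(x) else None
    return any((e := ecc(z)) is not None and e <= rho for z in x)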
\begin{thm}\label{greentouchrad}
Let $\phi$ be a nondecreasing function and $\rho\ge 3$, and let $G$ be a $(\rho,\phi)$-controlled graph. Let $\tau\ge 0$ be
such that $\chi^{\rho-1}(G)\le \tau$ and $\chi(J)\le \tau$ for every induced subgraph $J$ of $G$ with $\omega(J)<\omega(G)$.
Let $c\ge 0$ and let $(W_1,\ldots, W_n)$ be a $\tau$-colourable grading of $G$. Let $H$ be a subgraph of $G$ (not necessarily induced)
with $\chi(H)>\tau+1+\phi(c+\tau)$, and such that $W_i\cap V(H)$ is stable in $H$ for each $i\in \{1,\ldots, n\}$.
Then there is an edge $uv$ of $H$, and a $\rho$-ball $X$ of $G$,
such that
\begin{itemize}
\item $u,v$ are both earlier than every vertex in $X$;
\item $v$ has a $G$-neighbour in $X$, and $u$ does not; and
\item $\chi(G[X])>c$.
\end{itemize}
\end{thm}
\noindent{\bf Proof.}\ \
Let us say that $v\in V(G)$ is {\em internally active} if there is a $\rho$-ball $X\ni v$ with $\chi(X)>c+\tau$ such that no vertex of $X$ is
earlier than $v$. (Note that $X\cap W_i$ may have more than one element, so there may be vertices in $X$ that are neither
earlier nor later than $v$.) Let $R_1$ be the set of internally active vertices. We claim first:
\\
\\
(1) {\em $\chi(G\setminus R_1)\le \phi(c+\tau)$.}
\\
\\
For suppose not. Then since $G$ is $(\rho,\phi)$-controlled, there is a $\rho$-ball $X\subseteq V(G)\setminus R_1$
with $\chi(G[X])>c+\tau$, which therefore contains an internally active vertex, a contradiction. This proves (1).
Let us say $v\in V(G)$ is {\em externally active} if there is a $\rho$-ball $X$ of $G$ with $\chi(X)>c+\tau$ such that every vertex
of $X$ is later than $v$, and $v$ has an $H$-neighbour in $X$. Let $R_2$ be the set of externally active vertices. We claim:
\\
\\
(2) {\em $R_1\setminus R_2$ is stable in $H$.}
\\
\\
For suppose that $uv$ is an edge of $H$ with both ends in $R_1\setminus R_2$. Since each $W_i\cap V(H)$ is stable in $H$, we may assume that
$u$ is earlier than $v$. Since $v$ is internally active, there is a $\rho$-ball $X$ containing $v$ with $\chi(X)>c+\tau$ such that
no vertex of $X$ is earlier than $v$; but then $u$ is externally active, a contradiction. This proves (2).
\\
\\
(3) {\em There is a subset $Y\subseteq V(H)$ such that $H[Y]$ is connected and has chromatic number more than $\tau$,
and a $\rho$-ball $X$ of $G$ with $\chi(G[X])>c+\tau$, such that every vertex of $Y$ is earlier than every vertex of $X$,
and some vertex of $Y$ has
an $H$-neighbour in $X$.}
\\
\\
Since $H$ has chromatic number more than $\tau+1+\phi(c+\tau)$, it follows from (1) and (2) that $\chi(H[R_2])>\tau$.
Let $Y$ be the vertex set of a component of $H[R_2]$ with maximum chromatic number. Choose $v\in Y$ such that no vertex
of $Y$ is later than $v$. Since $v$ is externally active, this proves (3).
Let $X,Y$ be as in (3). If some vertex of $Y$ has no $G$-neighbour in $X$, then since $H[Y]$ is connected, there is an
edge $uv$ of $H[Y]$ such that $v$ has a $G$-neighbour in $X$ and $u$ does not, and the theorem holds. We assume then
that every vertex of $Y$ has a $G$-neighbour in $X$. For each $y\in Y$, let $N(y)$ denote its set of $G$-neighbours in $X$.
Let $z$ be a centre of $X$, and for $0\le i\le \rho$ let $L_i$
be the set of vertices in $X$ with $G[X]$-distance $i$ to $z$. Thus $L_0\cup\cdots\cup L_{\rho}=X$.
Let $Y_0$ be the set of all $y\in Y$ with $N(y)\subseteq L_{\rho-1}\cup L_{\rho}$.
\\
\\
(4) {\em $Y_0\ne\emptyset$.}
\\
\\
Since $\chi(H[Y])>\tau$, it follows that $\chi(G[Y])>\tau$, and so some vertex $y\in Y$ has $G$-distance at least $\rho$
from $z$. Consequently $N(y)\subseteq L_{\rho-1}\cup L_{\rho}$. This proves (4).
Choose $y\in Y_0$, if possible with the additional property that $N(y)\cap L_{\rho-1}=\emptyset$.
Let $U$ be the set
of vertices in $L_{\rho}$ with a neighbour in $N(y)\cap L_{\rho-1}$.
\\
\\
(5) {\em There is a vertex $y'$ of $Y$ with $N(y')\not\subseteq N(y)\cup U$.}
\\
\\
For there is a vertex $y'\in Y$ with $G$-distance at least $\rho$ from $y$, since $\chi(G[Y])>\tau$. Since $\rho>2$,
$N(y)\cap N(y')=\emptyset$. If $N(y')\subseteq U$, then $y'\in Y_0$ and $N(y')\cap L_{\rho-1}=\emptyset$;
but then $N(y)\cap L_{\rho-1}=\emptyset$ from the choice of $y$, and so $U=\emptyset$, a contradiction. Thus
$N(y')\not\subseteq U$. This proves (5).
Now $X\setminus (N(y)\cup U)$ is a $\rho$-ball $X'$ say, and some vertex (namely $y'$) of $Y$ has a $G$-neighbour in it,
and another (namely $y$) has no $G$-neighbour in it. Since $H[Y]$ is connected, there is an edge $uv$ of $H[Y]$
such that $v$ has a $G$-neighbour in $X'$ and $u$ does not. But $\chi(X)>c+\tau$, and
every vertex in $N(y)\cup U$ has $G$-distance at most two from $y$ and so $\chi(N(y)\cup U)\le \tau$,
and consequently $\chi(X')\ge \chi(X)-\tau>c$.
This proves \ref{greentouchrad}.~\bbox
We also need the following, proved in~\cite{longoddholes}:
\begin{thm}\label{getgreenedge}
Let $G$ be a graph, and let $B,C\subseteq V(G)$, where $B$ covers $C$. Let every induced subgraph $J$ of $G$ with
$\omega(J)<\omega(G)$
have chromatic number at most $\tau$.
Let the enumeration $(b_1,\ldots, b_m)$ of $B$ and the grading $(W_1,\ldots, W_n)$ of $G[C]$ be compatible. Let $H$ be the subgraph of $G$
with vertex set $C$ and edge set the set of all square edges. Let $(W_1,\ldots, W_n)$ be $\tau$-colourable; then
$\chi(G[C])\le \tau^2\chi(H)$.
\end{thm}
We deduce:
\begin{thm}\label{greenedgerad}
Let $\phi$ be a nondecreasing function and $\rho\ge 3$, and let $G$ be a $(\rho,\phi)$-controlled graph. Let $\tau\ge 0$ be
such that $\chi^{\rho-1}(G)\le \tau$ and $\chi(J)\le \tau$ for every induced subgraph $J$ of $G$ with $\omega(J)<\omega(G)$.
Let $c\ge 0$, and let $B,C\subseteq V(G)$, where $B$ covers $C$.
Let the enumeration $(b_1,\ldots, b_m)$ of $B$ and the grading $(W_1,\ldots, W_n)$ of $G[C]$ be compatible.
Let $(W_1,\ldots, W_n)$ be $\tau$-colourable, and let $\chi(G[C])>\tau^2 (\tau+1+\phi(c+\tau))$.
Then there is a square edge $uv$, and a $\rho$-ball $X$ of $G$,
such that
\begin{itemize}
\item $u,v$ are both earlier than every vertex in $X$;
\item $v$ has a neighbour in $X$, and $u$ does not; and
\item $\chi(X)>c$.
\end{itemize}
\end{thm}
\noindent{\bf Proof.}\ \ Let $H$ be as in \ref{getgreenedge}. By \ref{getgreenedge},
$\chi(G[C])\le \tau^2\chi(H)$.
Since $\chi(G[C])>\tau^2 (\tau+1+\phi(c+\tau))$ and $\chi^1(G)\le \tau$, it follows that $\chi(H)>\tau+1+\phi(c+\tau)$.
By \ref{greentouchrad} applied to $G[C]$ and $H$, we deduce
that there is an edge $uv$ of $H$, and a $\rho$-ball $X$ of $G$, satisfying the theorem. This proves \ref{greenedgerad}.~\bbox
A {\em $\rho$-comet} $(\mathcal{P}, X)$ in a graph $G$ consists of a set $\mathcal{P}$ of induced paths, each with the same pair of
ends $x,y$ say, and a $\rho$-ball $X$, such that
$y$ has a neighbour in $X$ and no other vertex of any member of $\mathcal{P}$ has a neighbour in $X$. We call $x$ the {\em tip}
of the $\rho$-comet, and $\chi(X)$ its {\em chromatic number}, and the set of lengths of members of $\mathcal{P}$ its {\em spectrum}.
\begin{thm}\label{twotails}
Let $\phi$ be a nondecreasing function, and let $\rho\ge 3$ and $\tau\ge 0$.
For all integers $c\ge 1$ there exists $c'\ge 0$ with the following property.
Let $G$ be a $(\rho,\phi)$-controlled graph such that
$\chi^{\rho-1}(G)\le \tau$ and $\chi(J)\le \tau$ for every induced subgraph $J$ of $G$ with $\omega(J)<\omega(G)$.
Let $x\in V(G)$, and let $V(G)\setminus \{x\}$ be a $\rho$-ball, such that $x$ has a neighbour in $G\setminus x$.
Let $\chi(V(G)\setminus \{x\})>c'$.
Then there is a $\rho$-comet $(\{P,Q\}, C)$ in $G$ with tip $x$ and chromatic number more than $c$, where
$|E(Q)|=|E(P)|+1$, and $|E(P)|\le 2\rho+1$.
\end{thm}
\noindent{\bf Proof.}\ \ Let $c' = 2\tau^2 (\tau+1+\phi(c+\tau))$, and let $G,x$ be as in the theorem.
Since $V(G)\setminus \{x\}$ is a $\rho$-ball, every vertex of $G$ has $G$-distance at most $2\rho+1$ from $x$;
for $0\le k\le 2\rho+1$ let $L_k$ be the set of vertices of $G$ with $G$-distance exactly $k$
from $x$. Since $\chi(V(G)\setminus \{x\})>c'$, there exists $k$
such that $\chi(L_k)>c'/2$. Since $\chi^2(G)\le \tau$ it follows that $k\ge 3$.
Let $(b_1,\ldots, b_n)$ be an enumeration of $L_{k-1}$, and for $1\le i\le n$ let $W_i$ be the set of vertices in $L_k$
that are adjacent to $b_i$ but not to $b_1,\ldots, b_{i-1}$. Then $(W_1,\ldots, W_n)$ is a $\tau$-colourable grading of
$G[L_k]$, compatible with $(b_1,\ldots, b_n)$.
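The grading just constructed from the enumeration of $L_{k-1}$ is mechanical; as an illustrative sketch (ours, not from the paper), assuming the enumerated vertices cover the layer:

```python
def layer_grading(adj, b, layer):
    """Partition `layer` into W_1,...,W_n, where W_i is the set of vertices of
    `layer` adjacent to b_i but to none of b_1,...,b_{i-1}."""
    remaining = set(layer)
    grading = []
    for bi in b:
        wi = {v for v in remaining if bi in adj[v]}
        remaining -= wi  # later b_j only claim vertices not yet claimed
        grading.append(wi)
    assert not remaining, "the enumeration does not cover the layer"
    return grading
```

Each $W_i$ lies inside the neighbourhood of $b_i$, which is why the grading is $\tau$-colourable when $\chi^1(G)\le\tau$.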
Since $\chi(L_k)>\tau^2 (\tau+1+\phi(c+\tau))$,
by \ref{greenedgerad}
there is a square edge $uv$ of $G[L_k]$, and a $\rho$-ball $C$ of $G[L_k]$,
such that
\begin{itemize}
\item $u,v$ are both earlier than every vertex in $C$;
\item $v$ has a neighbour in $C$, and $u$ does not; and
\item $\chi(C)>c$.
\end{itemize}
Let $u',v'$ be the earliest parents of $u,v$ respectively.
Let $P$ consist of the union of the path $v\hbox{-} v'$ and a path of length $k-1$ between $v',x$ with interior in $L_1,\ldots, L_{k-2}$;
and let $Q$ consist of the union of the path $v\hbox{-} u\hbox{-} u'$ and a path of length $k-1$ between $u',x$ with interior in $L_1,\ldots, L_{k-2}$.
Then $|E(Q)|=|E(P)|+1$ and $|E(P)|\le 2\rho+1$. Moreover, no vertex in $C$ has a neighbour in $P\cup Q$
different from $v$, since all vertices in $C$ are later than $u,v$. This proves \ref{twotails}.~\bbox
By repeated application of \ref{twotails} we deduce:
\begin{thm}\label{manytails}
Let $\phi$ be a nondecreasing function, and let $\rho\ge 3$, $\ell\ge \rho(8\rho+6)$ and $\tau\ge 0$.
For all integers $c\ge 1$ there exists $c'\ge 0$ with the following property.
Let $G$ be a $(\rho,\phi)$-controlled graph such that
$\chi^{\rho-1}(G)\le \tau$ and $\chi(J)\le \tau$ for every induced subgraph $J$ of $G$ with $\omega(J)<\omega(G)$.
Let $x\in V(G)$, and let $V(G)\setminus \{x\}$ be a $\rho$-ball, such that $x$ has a neighbour in $G\setminus x$.
Then there is a $\rho$-comet $(\mathcal{P}, X)$ in $G$ with tip $x$ and chromatic number more than $c$, such that
its spectrum includes $\{\ell+i:\:0\le i\le 2\rho+3\}$.
\end{thm}
\noindent{\bf Proof.}\ \ Let $c_{\ell+1}=c$, and for $i=\ell,\ldots, 1$ choose $c_{i}$ such that \ref{twotails} is satisfied with $c,c'$ replaced by $c_{i+1},c_{i}$ respectively.
Let $c'=c_1$.
\\
\\
(1) {\em For all $k\ge 1$ there exists $p_k$ with $0\le p_k\le 2\rho$ and a $\rho$-comet in $G$ with tip $x$,
chromatic number more than $c_k$,
and spectrum including $\{p_1+\cdots+p_k+i:\:1\le i\le k\}$.}
\\
\\
By hypothesis there is a $\rho$-comet in $G$ with chromatic number more than $c_1$, tip $x$ and spectrum $\{1\}$, so the statement
holds when $k=1$, setting $p_1=0$; and it follows for $k\ge 2$ by repeated application of \ref{twotails}.
This proves (1).
Now $p_1,\ldots, p_{\ell+1}$ exist, and so there exists $k\le \ell$ maximum such that
$$p_1+\cdots+p_{k}\le \ell.$$
Since
$p_1+\cdots+p_{4\rho+3}< 2(4\rho+3)\rho\le \ell$, it follows that $k\ge 4\rho+3$.
From the maximality of $k$, and since $p_{k+1}\le 2\rho$, it follows that
$p_1+\cdots+p_{k}> \ell-2\rho$. Consequently the spectrum of the corresponding $\rho$-comet contains
$\{\ell+i:\:0\le i\le 2\rho+3\}$. This proves \ref{manytails}.~\bbox
\begin{thm}\label{manyholes}
Let $\phi$ be a nondecreasing function, and let $\rho\ge 3$, $\ell\ge 8\rho^2+6\rho$, and $d,\tau\ge 0$.
Then there exists $c$ with the following property.
Let $G$ be a $(\rho,\phi)$-controlled graph with $\chi(G)>c$ such that
$\chi^{\rho-1}(G)\le \tau$ and $\chi(J)\le \tau$ for every induced subgraph $J$ of $G$ with $\omega(J)<\omega(G)$.
Then there is a $d$-peripheral $\ell$-hole in $G$.
\end{thm}
\noindent{\bf Proof.}\ \
Define $c_4=\ell(2\rho+4)\tau$. Choose $c_3$ such that \ref{manytails} is satisfied replacing $c,c', \ell$
by $c_4,c_3,\ell-6\rho$ respectively.
Let $c_2 =\rho\tau +\phi(c_3)$, $c_1=\tau\phi(c_2)$, and let $c=\max(\phi(c_1),\ell\tau+d)$. Let $G$ be as in the theorem with $\chi(G)>c$.
Since $\chi(G)>c\ge \phi(c_1)$, there exists $z\in V(G)$ such that, denoting the set of vertices
of $G$ with $G$-distance $i$ from $z$ by $L_i$, we have $\chi(L_{\rho})>c_1$. Since $L_1$ is $\tau$-colourable, there is a stable
subset $A$ of $L_1$ such that the set $B$ of vertices in $L_{\rho}$ that are descendants of vertices in $A$ has chromatic number
more than $c_1/\tau=\phi(c_2)$. Consequently there is a $\rho$-ball $C\subseteq B$ with $\chi(C)>c_2$.
Choose $D\subseteq A$ minimal such that every vertex in $C$ has an ancestor in $D$. Let $v_1\in D$; then
there exists $v_{\rho-1}\in L_{\rho-1}$ with a neighbour in $C$ such that $v_1$ is its only ancestor in $D$.
Let $v_1\hbox{-} v_2\hbox{-}\cdots\hbox{-} v_{\rho-1}$ be an induced path, where
$v_i\in L_i$ for $1\le i\le {\rho}-1$. The set of vertices in $C$
with $G$-distance less than $\rho$ from one of $v_1,v_2,\ldots, v_{\rho-1}$ has chromatic number at most $\rho\tau$, and so the set $E$ of
vertices in $C$ with $G$-distance at least $\rho$ from each of $v_1,v_2,\ldots, v_{\rho-1}$ has chromatic number more than
$c_2 -\rho\tau =\phi(c_3)$. Consequently there is a $\rho$-ball $F\subseteq E$, with chromatic number more than $c_3$.
Since $C$ is a $\rho$-ball and $v_{\rho-1}$ has a neighbour in $C$, there is an induced path $P$ of $G[C\cup \{v_{\rho-1}\}]$
from $v_{\rho-1}$ to some vertex
$x\in C$ with a neighbour in $F$, of length at most $2\rho$, such that no vertex of $P$ different from $x$
has a neighbour in $F$. By \ref{manytails} applied to $x,F$, since $\chi(F)> c_3$, there is a vertex $v\in F$, $2\rho+4$ induced paths
$P_0,\ldots, P_{2\rho+3}$ of $G[F\cup\{x\}]$ between $x,v$, and a $\rho$-ball $X\subseteq F$, such that:
\begin{itemize}
\item $|E(P_i)|=\ell-6\rho+i$ for $0\le i\le 2\rho+3$;
\item $V(P_i)\cap X=\emptyset$ for $0\le i\le 2\rho+3$;
\item $v$ has a neighbour in $X$ and no other vertex of $P_i$
has a neighbour in $X$, for $0\le i\le 2\rho+3$; and
\item $\chi(X)>c_4$.
\end{itemize}
Now every vertex of $X$ has $G$-distance at least $\rho$ from each of $v_1,\ldots, v_{\rho-1}$, but there may be vertices in $X$
with $G$-distance less than $\rho$ to a vertex in $P$ or in one of $P_0,\ldots, P_{2\rho+3}$. The union of these paths has at most
$\ell(2\rho+4)$ vertices (in fact much fewer), and since $\chi(X)>\ell(2\rho+4)\tau$, there exists a vertex in $X$ with $G$-distance
at least $\rho$ from all vertices of these paths. Let $y\in L_{\rho-1}$ be adjacent to this vertex, and let $Q$ be an induced path
between $y,x$ with interior in $X$, of length at most $2\rho+2$. Let $R$ be a path of length $\rho-1$
between $y,z$ with interior in $D\cup L_2\cup L_3\cup \cdots\cup L_{\rho-2}$.
The union of the four paths $z\hbox{-} v_1\hbox{-} v_2\hbox{-}\cdots\hbox{-} v_{\rho-1}, P,Q,R$ has length at most $6\rho$, and at least $4\rho-3$, since
$P,Q$ have lengths at least $\rho$ and at least $\rho-1$ respectively. Let their union have length $j$
where $4\rho-3\le j\le 6\rho$. Let $i = 6\rho-j$; then $0\le i\le 2\rho+3$, and so $P_i$ is defined and has length $\ell-6\rho+i=\ell-j$.
Consequently the union of the five paths
$z\hbox{-} v_1\hbox{-} v_2\hbox{-}\cdots\hbox{-} v_{\rho-1}$, $P$, $P_i$, $Q$ and $R$ is a cycle $H_i$ of length $\ell$, and we claim it is induced.
Certainly it is a cycle; suppose that it is
not induced, and so there is an edge $ab$ say that joins two nonconsecutive vertices of $H_i$. It follows
that $a,b$ do not both belong to any of its five constituent paths. Certainly $a,b\ne z$.
Suppose that $a=v_h$ for some $h$. Since every vertex in $E$ has $G$-distance at least $\rho$ from $v_h$, it follows that $b\notin V(P_i)$
and $b\notin V(Q)$. Also $b\notin V(P)$ since every vertex of $V(P)\setminus \{v_{\rho-1}\}$ belongs to $L_\rho$ and $P$
is an induced path containing $v_{\rho-1}$. Thus $b\in V(R)$. Since the $G$-distance between $y,a$ is at least $\rho-1$, and $R$
has length $\rho-1$, it follows that $b\in L_1$ and so $h \in \{1,2\}$; but $h\ne 1$ since $A$ is stable, and $h\ne 2$ since
$v_{\rho-1}$, and hence $v_2$, has a unique ancestor in $D$. This proves that $a,b\notin \{v_1,\ldots, v_{\rho-1}\}$.
Next suppose that $a\in V(R)$, and so either $b\in V(P)\setminus \{v_{\rho-1}\}$, or $b\in V(P_i\cup Q)\setminus \{y\}$.
In either case $b\in L_\rho$, and so $a=y$; hence $b\notin V(Q)$ since $Q$ is induced and $y\in V(Q)$, and $b\notin V(P\cup P_i)$
since $y$ has a neighbour in $X$ with $G$-distance at least $\rho$ from each vertex of $V(P\cup P_i)$, and $\rho\ge 3$.
This proves that $a,b\notin V(R)$. Next suppose that $a\in V(P)\setminus \{x\}$. Then $b\in F$; but no vertex of $P$
except $x$ has a neighbour in $F$, a contradiction. Finally, suppose that $a\in V(P_i)$ and $b\in V(Q)$. No vertex of $P_i$
has a neighbour in $X$ except $v$, and $v\in V(Q)$, a contradiction. This proves that $H_i$ is an $\ell$-hole. Now the
set of vertices of $G$ that belong to or have a neighbour in $H_i$ has chromatic number at most $\ell\tau$,
and since $\chi(G)>c\ge \ell\tau+d$, it follows that $H_i$ is $d$-peripheral.
This proves \ref{manyholes}.~\bbox
Let us deduce \ref{radthm}, which we restate:
\begin{thm}\label{radthm2}
Let $\rho\ge 2$ be an integer, and let $\mathcal{C}$ be a $\rho$-controlled ideal of graphs.
Let $\ell\ge 24$ if $\rho=2$, and $\ell\ge 8\rho^2+6\rho$ if $\rho>2$. Then for all $\kappa,d\ge 0$,
there exists $c\ge 0$ such that every graph $G\in \mathcal{C}$ with $\omega(G)\le \kappa$ and $\chi(G)>c$
has a $d$-peripheral $\ell$-hole.
\end{thm}
\noindent{\bf Proof.}\ \
By induction on $\kappa$ we may assume that there exists $\tau_1$ such that every graph in $\mathcal{C}$ with clique number
less than $\kappa$ and no $d$-peripheral $\ell$-hole has chromatic number at most $\tau_1$.
Let $\mathcal{C}_2$ be the ideal of $G\in \mathcal{C}$ with clique number at most $\kappa$ and no $d$-peripheral $\ell$-hole. We suppose
that there are graphs in $\mathcal{C}_2$ with arbitrarily large chromatic number, and so $\mathcal{C}_2$ is not 2-controlled,
by \ref{rad2}. Consequently there exists $\tau_2$ such that if $\mathcal{C}_3$ denotes the class of graphs $G\in \mathcal{C}_2$
with $\chi^2(G)\le \tau_2$, there are graphs in $\mathcal{C}_3$ with arbitrarily large chromatic number. Hence by \ref{manyholes}
with $\rho=3$, $\mathcal{C}_3$ is not 3-controlled, and so on; and we deduce that there is an ideal $\mathcal{C}_{\rho}$ of graphs
in $\mathcal{C}$ that is not $\rho$-controlled, a contradiction. This proves \ref{radthm2}.~\bbox
\section{Controlling 8-balls}\label{sec:uncontrolled}
In this section we prove \ref{uncontrolled}.
We use the following relative of \ref{greentouchrad}, proved in~\cite{longoddholes}:
\begin{thm}\label{greentouch}
Let $\tau,c\ge 0$ and let $(W_1,\ldots, W_n)$ be a $\tau$-colourable grading of a graph $G$. Let $H$ be a subgraph of $G$ (not necessarily
induced)
with $\chi(H)>\tau+2(c+\chi^1(G))$. Then there is an edge $uv$ of $H$, and a subset $X$ of $V(G)$,
such that
\begin{itemize}
\item $G[X]$ is connected;
\item $u,v$ are both earlier than every vertex in $X$;
\item $v$ has a neighbour in $X$, and $u$ does not; and
\item $\chi(X)>c$.
\end{itemize}
\end{thm}
We deduce a version of \ref{greenedgerad} that has no assumption of $\rho$-control:
\begin{thm}\label{greenedge}
Let $c\ge 0$, let $G$ be a graph, and let $B,C\subseteq V(G)$, where $B$ covers $C$. Let every induced subgraph $J$ of $G$ with
$\omega(J)<\omega(G)$
have chromatic number at most $\tau$.
Let the enumeration $(b_1,\ldots, b_m)$ of $B$ and the grading $(W_1,\ldots, W_n)$ of $G[C]$ be compatible.
Let $(W_1,\ldots, W_n)$ be $\tau$-colourable, and let $\chi(G[C])>\tau^2 (2c+3\tau)$.
Then there is a square edge $uv$, and a subset $X$ of $V(G)$,
such that
\begin{itemize}
\item $G[X]$ is connected;
\item $u,v$ are both earlier than every vertex in $X$;
\item $v$ has a neighbour in $X$, and $u$ does not; and
\item $\chi(X)>c$.
\end{itemize}
\end{thm}
\noindent{\bf Proof.}\ \ Let $H$ be as in \ref{getgreenedge}; then by \ref{getgreenedge}, $\chi(G[C])\le \tau^2\chi(H)$.
Since $\chi(G[C])>\tau^2 (2c+3\tau)$ and $\chi^1(G)\le \tau$, it follows that $\chi(H)>\tau+2(c+\chi^1(G))$.
By \ref{greentouch} applied to $G[C]$ and $H$, we deduce
that there is an edge $uv$ of $H$, and a subset $X$ of $V(G)$, satisfying the theorem. This proves \ref{greenedge}.~\bbox
A {\em shower} in $G$ is a sequence $(L_0, L_1,\ldots, L_k,s)$ where $L_0, L_1,\ldots, L_k$ are pairwise disjoint subsets
of $V(G)$ and $s\in L_k$,
such that
\begin{itemize}
\item $|L_0|=1$;
\item $L_{i-1}$ covers $L_i$ for $1\le i< k$;
\item for $0\le i<j\le k$, if $j>i+1$ then no vertex in $L_j$ has a neighbour in $L_i$; and
\item $G[L_k]$ is connected.
\end{itemize}
(Note that we do not require that $L_{k-1}$ covers $L_k$.)
We call the vertex in $L_0$ the {\em head} of the shower, and
$s$ its {\em drain}, and $L_0\cup \cdots\cup L_k$ is its {\em vertex set}.
(Thus the drain can be any vertex of $L_k$.)
The set of vertices in $L_k$ with a neighbour in $L_{k-1}$
is called the {\em floor} of the shower.
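The four conditions defining a shower are easy to verify directly; the following sketch (ours, not part of the paper) checks them for pairwise-disjoint layers given as sets:

```python
def is_shower(adj, layers, s):
    """Check the shower conditions for layers (L_0,...,L_k) and drain s:
    |L_0| = 1; L_{i-1} covers L_i for 1 <= i < k; no edge jumps more than
    one layer; and G[L_k] is connected (checked by a DFS from s)."""
    k = len(layers) - 1
    if len(layers[0]) != 1 or s not in layers[k]:
        return False
    # L_{i-1} covers L_i: every vertex of L_i has a neighbour in L_{i-1}
    for i in range(1, k):
        if any(not (adj[v] & layers[i - 1]) for v in layers[i]):
            return False
    # no vertex of L_j has a neighbour in L_i when j > i + 1
    for j in range(2, k + 1):
        for i in range(j - 1):
            if any(adj[v] & layers[i] for v in layers[j]):
                return False
    # G[L_k] connected
    seen, stack = {s}, [s]
    while stack:
        for w in adj[stack.pop()] & layers[k]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == set(layers[k])
```

Note, as in the text, that $L_{k-1}$ is not required to cover $L_k$, so the last layer is treated differently from the others.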
Let $\mathcal{S}$ be a shower with head $z_0$, drain $s$ and vertex set $V$.
An induced path of $G[V]$ between $z_0,s$ is called a {\em jet}
of $\mathcal{S}$. For $d\ge 0$, a jet $J$ is {\em $d$-peripheral} if there is a subset $X$ of the floor of the shower, anticomplete to $V(J)$,
with $\chi(G[X])>d$.
The set of all lengths of $d$-peripheral jets of $\mathcal{S}$ is called the {\em $d$-jetset} of $\mathcal{S}$.
For integers $\ell\ge 2$ and $1\le k\le \ell$, we say a $d$-jetset is {\em $(k,\ell)$-complete} if there are
$k$ jets $J_0,\ldots, J_{k-1}$, all $d$-peripheral, such that $|E(J_j)|=|E(J_0)|+j$ modulo $\ell$ for $0\le j\le k-1$.
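The $(k,\ell)$-completeness condition is just a statement about residues of jet lengths modulo $\ell$; a quick sanity-check sketch (ours, not from the paper) deciding it from a collection of jet lengths:

```python
def is_kl_complete(lengths, k, ell):
    """True if some length a occurs such that a, a+1, ..., a+k-1 all occur
    among `lengths` modulo ell (the pattern required of J_0,...,J_{k-1})."""
    residues = {x % ell for x in lengths}
    return any(all((a + j) % ell in residues for j in range(k)) for a in residues)
```

For example, jets of lengths $9,10,11$ give a $(3,10)$-complete set, since modulo $10$ the residues $9,0,1$ are consecutive.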
We will prove:
\begin{thm}\label{multijet}
Let $\tau\ge 0$ and $\ell\ge 2$. For all integers $d\ge 0$ and $t$ with $1\le t\le \ell$ there exists $c_{d,t}\ge 0$ with the following property.
Let $G$ be a graph such that $\chi^{3}(G)\le \tau$, and such that
every induced subgraph $J$ of $G$ with
$\omega(J)<\omega(G)$
has chromatic number at most $\tau$.
Let $\mathcal{S}=(L_0, L_1,\ldots, L_k,s)$ be a shower in $G$,
with floor of chromatic number more than $c_{d,t}$. Then the $d$-jetset of $\mathcal{S}$ is $(t,\ell)$-complete.
\end{thm}
\noindent{\bf Proof.}\ \ Suppose first that $t=1$. Let $c_{d,1}=d+\tau$, and let $G$ and
$\mathcal{S}=(L_0, L_1,\ldots, L_k,s)$ be as in the theorem, with floor $F$ where
$\chi(F)>c_{d,1}$. Since $c_{d,1}\ge 0$, it follows that $F\ne \emptyset$, and so there is an induced path $P$ of $G[L_k]$
between $s$ and some vertex in $F$. Choose such a path $P$ of minimum length, and let its end in $F$ be $v$. If $s=v$ let
$u=v$, and otherwise let $u$ be the neighbour of $v$ in $P$. It follows that no vertex of $P$ belongs to or has a neighbour in $F$
except $u,v$.
Also there is a path $Q$ of length $k$ between $v$ and $z_0\in L_0$, with
one vertex $w\in L_{k-1}$ and all others in $L_0\cup\cdots\cup L_{k-2}\cup \{v\}$. Thus $P\cup Q$ is a jet. Moreover, the only
vertices of $P\cup Q$ that belong to or have a neighbour in $F$ are $u,v,w$, and so all these neighbours have
$G$-distance at most two from $v$, and consequently the set of them has chromatic number at most $\tau$.
Since $\chi(F)>c_{d,1}=d+\tau$, it follows that $P\cup Q$ is a $d$-peripheral jet, as required.
We may therefore assume that $2\le t\le \ell$, and inductively the result holds for $t-1$ (and all $d$).
Define $d'=d+2\tau$, and let $c_{d,t}=2(\ell+1)\tau^2 (2c_{d',t-1}+7\tau)$; let $G$ be a graph, and let $\mathcal{S}=(L_0, L_1,\ldots, L_k,s)$ be a shower in $G$
with floor $F$ of chromatic number more than $c_{d,t}$.
For $i\ge 0$, let $M_i$ be the set of vertices with $G[L_k]$-distance exactly $i$ from $s$. Then there exists $r\ge 0$ such that
$\chi(F\cap M_r)\ge \chi(F)/2$; and $r\ge 3$ since $\chi^2(G)\le \tau<\chi(F)/2$.
For $0\le i\le r$, each vertex $v\in M_i$ is joined to $s$ by an induced path of length $i$ with interior in
$M_1\cup\cdots\cup M_{i-1}$; let us call such a path a {\em bloodline} of $v$.
If $u\in F\cap M_r$ and $v\in M_0\cup\cdots\cup M_{r-2}$, we say that $v$ is a
{\em grandparent} of $u$ if there exists $b\in L_{k-1}$ adjacent to both $u,v$.
Let $C$ be the set of vertices in $M_r$ with a grandparent in $M_0\cup\cdots\cup M_{r-2}$.
\\
\\
(1) {\em If $\chi(C)>\ell \tau^2 (2c_{d',t-1}+7\tau)$ then the theorem holds.}
\\
\\
Let $(v_1,\ldots, v_n)$ be an enumeration of $M_0\cup\cdots \cup M_{r-2}$, where the vertex in $M_0$ is first, followed by the vertices
in $M_1$ in some order, and so on; more precisely, for $1\le i<j\le n$, if $v_i\in M_a$
and $v_j\in M_b$ where $a,b\in \{0,\ldots, r-2\}$, then $a\le b$. Let $B_0$ be the set of vertices
in $L_{k-1}$ that have neighbours in $M_0\cup\cdots \cup M_{r-2}$, and let $(b_1,\ldots, b_m)$ be an enumeration of $B_0$,
enumerating the members
of $B_0$ with the earliest neighbours in $(v_1,\ldots, v_n)$ first; more precisely,
for $1\le i<j\le m$, if $b_j$ is adjacent to $v_q$ for some $q\in \{1,\ldots, n\}$, then there exists $p\in \{1,\ldots, q\}$
such that $b_i$ is adjacent to $v_p$.
We say $v_i$ is the {\em earliest grandparent}
of $u\in M_r$ if $i$ is minimum such that $v_i$ is a grandparent of $u$.
For $1\le i\le n$, let $W_i$ be the set of vertices in $M_r$ whose earliest grandparent is $v_i$.
Thus $C=W_1\cup\cdots\cup W_n$, and $(W_1,\ldots, W_n)$ is a grading of
$G[C]$. It is $\chi^2(G)$-colourable, since $W_i\subseteq N^2_G[v_i]$ for each $i$, and hence $\tau$-colourable.
For $u\in C$, if $v$ is the earliest grandparent of $u$, then the earliest parent of $u$ is adjacent to $v$. Consequently
the enumeration $(b_1,\ldots, b_m)$ and the
grading $(W_1,\ldots, W_n)$ are compatible; because if $u,v\in C$ with $u$ earlier than $v$, the earliest grandparent of $u$ is
earlier than
the earliest grandparent of $v$, and consequently the earliest parent of $u$ is
earlier than
the earliest parent of $v$.
For $0\le j<\ell$, let $C_j$ be the set of vertices $u\in C$ such that $i-j$ is a multiple of $\ell$, where $i$ is the length
of the bloodline of the earliest grandparent of $u$. Thus $C_0\cup\cdots\cup C_{\ell-1} = C$.
Choose $j$ such that $\chi(C_j)>\tau^2 (2c_{d',t-1}+7\tau)$. Then by \ref{greenedge},
there is a square edge $uv$ with $u,v\in C_j$, and a subset $X$ of $C_j$,
such that
\begin{itemize}
\item $G[X]$ is connected;
\item $u,v$ are both earlier than every vertex in $X$;
\item $v$ has a $G$-neighbour in $X$, and $u$ does not; and
\item $\chi(X)>c_{d',t-1}+2\tau$.
\end{itemize}
Let the earliest parents of $u,v$ be $u', v'$ respectively, and let their earliest grandparents be $u'', v''$ respectively.
Let $P$ be the induced path between $v,s$ consisting of the path $v\hbox{-} v'\hbox{-} v''$ and
a bloodline of $v''$.
Thus $P$ has length $j+2$ modulo $\ell$. Let $Q$ be the path between $v,s$
consisting of the edge $uv$, the path $u\hbox{-} u'\hbox{-} u''$, and a bloodline of $u''$. Note that $Q$ is induced,
since $v$ is not adjacent to $u'$ (because $uv$ is square). Moreover, $Q$ has length $j+3$ modulo $\ell$.
Let $Z$ be the set of vertices in $X$ with $G$-distance at least four from both $u',v'$, and let $L_{k-1}'$ be the set of
vertices in $L_{k-1}$ with a neighbour in $Z$. Let $L_{k-2}'$ be the set of vertices in $L_{k-2}$ with a
neighbour in $L_{k-1}'$, and for $0\le i\le k-3$
let $L_i'=L_i$. It follows that $L_{i-1}'$ covers $L_i'$ for $1\le i\le k-1$, and so $(L_0',\ldots, L_{k-1}',X\cup \{v\},v)$ is a
shower $\mathcal{S}'$. Let $V'$ be its vertex set.
We claim that $V'\cap V(P\cup Q)=\{v\}$, and every edge between $V'$ and $V(P\cup Q)$ is incident with $v$. For suppose that
$a\in V'$ and $b\in V(P\cup Q)$ are equal or adjacent, and $a,b\ne v$. If $b=u$, then $a\ne b$ since $u\notin V'$; $a\notin X$
since $u$ has no neighbour in $X$; and so $a\in L_{i}'$ for some $i<k$. Then $i=k-1$ since $b\in L_k$, and so $a$ has a neighbour
in $Z$ from the definition of $L_{k-1}'$; and so the $G$-distance between $a,u$ is at least two from the definition of $Z$,
a contradiction. Thus $b\ne u$. Next suppose that $b\in \{u',v'\}$.
Since $u',v'$ are the earliest parents of $u,v$ respectively,
and $u,v$ are earlier than every vertex in $X$, it follows that no vertex in $X$ is adjacent to $b$; and so $a\ne b$, and $a\notin X$.
Hence $a\in L_i'$ for some $i<k$, and since $b\in L_{k-1}$ it follows that $k-2\le i\le k-1$. But then the $G$-distance between $a$
and some vertex of $Z$ is at most two, and since the $G$-distance between $b,Z$ is at least four, it follows that $a,b$ are nonadjacent,
a contradiction. So $b\notin \{u',v'\}$, and so $b\in M_h$ for some $h\le r-2$, and so $b$ is a vertex of a bloodline of the earliest
grandparent of one of $u,v$ (say $u'', v''$ respectively). Consequently $a\ne b$, and $a\notin X$, and so
$a\in L_{k-1}'$. Choose $z\in Z$ adjacent to $a$; then $u,v$ are earlier than $z$, and so $u'',v''$ are both
earlier than the earliest grandparent of $z$. It follows that no parent of $z$ has a neighbour in a bloodline of either of $u'',v''$,
and so $a,b$ are nonadjacent,
a contradiction. This proves our claim that $V'\cap V(P\cup Q)=\{v\}$, and every edge between $V'$ and $V(P\cup Q)$ is incident with $v$.
In particular $Z$ is anticomplete to $V(P\cup Q)$.
The floor of $\mathcal{S}'$
includes $Z$, and $\chi(Z)\ge \chi(X)-2\chi^3(G)>c_{d',t-1}$. Consequently the $d'$-jetset of $\mathcal{S}'$ is $(t-1,\ell)$-complete;
let $J_0,\ldots, J_{t-2}$ be corresponding $d'$-peripheral jets of $\mathcal{S}'$. For $0\le h\le t-2$, both $J_h\cup P$ and $J_h\cup Q$
are jets of $\mathcal{S}$, and we claim they are $d$-peripheral. For let $0\le h\le t-2$, and let $Y$ be a subset of the floor of
$\mathcal{S}'$ with $\chi(G[Y])>d'$, anticomplete to $V(J_h)$. Thus $Y\subseteq X\cup \{v\}\subseteq F$. Let $Y'=Y\cap Z$.
Since every vertex in $Y\setminus Z$ has $G$-distance at most three from one of $u',v'$, it follows that
$\chi(Y\setminus Z)\le 2\tau$, and so $\chi(Y')\ge \chi(Y)-2\tau> d$. Since $Y'$ is anticomplete to $V(P\cup Q)$,
this proves our claim that both $J_h\cup P$ and $J_h\cup Q$ are $d$-peripheral jets of $\mathcal{S}$. Consequently the $d$-jetset of
$\mathcal{S}$ is $(t,\ell)$-complete. This proves (1).
\\
\\
(2) {\em If $\chi((F\cap M_r)\setminus C)>\tau^2 (2c_{d',t-1}+7\tau)$ then the theorem holds.}
\\
\\
Let us write $C' = (F\cap M_r)\setminus C$; then $C'$ is the set of vertices in $M_r$ that have neighbours in $L_{k-1}$,
but every such neighbour has no neighbour in $M_0\cup\cdots\cup M_{r-2}$. The neighbours in $L_{k-1}$ might or
might not have neighbours in $M_{r-1}$. Take an arbitrary enumeration $(b_1,\ldots, b_n)$ of $M_{r-1}$, and for $1\le i\le n$
let $W_i$ be the set of vertices in $C'$ adjacent to $b_i$ and nonadjacent to $b_1,\ldots, b_{i-1}$. Thus $(W_1,\ldots, W_n)$ is a grading
of $G[C']$, compatible with $(b_1,\ldots, b_n)$, and it is $\chi^1(G)$-colourable and hence $\tau$-colourable. By \ref{greenedge},
there is a square edge $uv$ of $G[C']$, and a subset $X$ of $C'$,
such that
\begin{itemize}
\item $G[X]$ is connected;
\item $u,v$ are both earlier than every vertex in $X$;
\item $v$ has a neighbour in $X$, and $u$ does not; and
\item $\chi(X)>c_{d',t-1}+2\tau$.
\end{itemize}
Let $u',v'$ be the earliest neighbours in $M_{r-1}$ of $u,v$ respectively.
Let $P$ be an induced path between $v,s$ consisting of the edge $vv'$ and a bloodline of $v'$; and let $Q$ be the path
consisting of the edges $uv,uu'$, and a bloodline of $u'$. They are both induced.
Let $Z$ be the set of all vertices in $X$ with $G$-distance at least four from both of $u,v$. Let $L_{k-1}'$
be the set of vertices in $L_{k-1}$ with a neighbour in $Z$, and for $0\le i\le k-2$ let $L_i'=L_i$. Then
$(L_0',\ldots, L_{k-1}', X\cup \{v\},v)$ is a shower $\mathcal{S}'$. Moreover, its vertex set $V'$ satisfies $V'\cap V(P\cup Q)=\{v\}$,
and every edge between $V'$ and $V(P\cup Q)$ is incident with $v$ (the proof is as in (1), and we omit it).
Its floor includes $Z$, and $\chi(Z)\ge \chi(X)-2\chi^3(G)>c_{d',t-1}$;
and so the $d'$-jetset of $\mathcal{S}'$ is $(t-1,\ell)$-complete. But for each jet $J$ of $\mathcal{S}'$, $J\cup P, J\cup Q$
are both jets of $\mathcal{S}$, and as in (1) it follows that the $d$-jetset of $\mathcal{S}$ is $(t,\ell)$-complete. This proves (2).
Since $\chi(F)>c_{d,t}$, it follows that $\chi(F\cap M_r)>(\ell+1)\tau^2 (2c_{d',t-1}+7\tau)$, and so
either $\chi(C)>\ell \tau^2 (2c_{d',t-1}+7\tau)$ or $\chi((F\cap M_r)\setminus C)>\tau^2 (2c_{d',t-1}+7\tau)$. Hence
the result follows from (1) or (2). This proves \ref{multijet}.~\bbox
A {\em recirculator} for a shower $(L_0, L_1,\ldots, L_k,s)$ with head $z_0$ is an induced path $R$
with ends $s,z_0$ such that no internal vertex of $R$ belongs to $V$
and no internal vertex of $R$ has any neighbours in $V\setminus \{s,z_0\}$.
We need the following, proved in~\cite{holeseq}:
\begin{thm}\label{doubleshower}
Let $c\ge 0$ be an integer, and let $G$ be a graph such that $\chi(G)>44c+4\chi^{8}(G)$.
Then there is a shower in $G$, with floor of chromatic number more than $c$, and with a recirculator.
\end{thm}
We deduce \ref{uncontrolled}, which we restate:
\begin{thm}\label{uncontrolled2}
For all integers $\ell\ge 2$ and $\tau,d\ge 0$ there is an integer $c\ge 0$ with the following property.
Let $G$ be a graph such that $\chi^8(G)\le \tau$, and every induced subgraph $J$ of $G$ with
$\omega(J)<\omega(G)$
has chromatic number at most $\tau$.
If $\chi(G)>c$ then there are $\ell$ $d$-peripheral holes in $G$ with lengths of all possible values modulo $\ell$.
\end{thm}
\noindent{\bf Proof.}\ \
Let $c_{d,\ell}$ be as in \ref{multijet}, and let $c=44c_{d,\ell}+4\tau$. Let $G$ be a graph
such that $\chi^8(G)\le \tau$, and every induced subgraph $J$ of $G$ with
$\omega(J)<\omega(G)$
has chromatic number at most $\tau$, with $\chi(G)>c$. By \ref{doubleshower} there
is a shower in $G$, with floor of chromatic number more than $c_{d,\ell}$, and with a recirculator.
By \ref{multijet} the $d$-jetset of this shower is $(\ell,\ell)$-complete. Thus adding the recirculator to each of the corresponding jets
gives the $\ell$ $d$-peripheral holes we need. This proves \ref{uncontrolled2}.~\bbox
\section{Two consecutive holes}\label{sec:2holes}
Finally let us prove \ref{2holes}, which we restate.
\begin{thm}\label{2holesagain}
For each $\kappa,\ell\ge 0$ there exists $c\ge 0$ such that every graph $G$ with $\omega(G)\le \kappa$ and $\chi(G)>c$
has two holes of consecutive lengths, both of length more than $\ell$.
\end{thm}
\noindent{\bf Proof.}\ \
We may assume that $\ell\ge 8$.
By induction on $\kappa$, there exists $\tau_1$ such that
every graph with clique number less than $\kappa$ and chromatic number
more than $\tau_1$ has two holes of consecutive lengths, both of length more than $\ell$.
Let $\mathcal{C}_2$ be the ideal of graphs with clique number at most $\kappa$
and with no two holes of consecutive lengths more than $\ell$. By \ref{rad2}, $\mathcal{C}_2$ is not 2-controlled, and so
for some $\tau$ there are graphs $G$ in $\mathcal{C}_2$ with arbitrarily large chromatic number and $\chi^2(G)\le \tau$.
Let $\mathcal{C}_3$ be the ideal of graphs $G$ in $\mathcal{C}_2$ with $\chi^2(G)\le \tau$. By \ref{manyholes} with $\rho=3$,
$\mathcal{C}_3$ is not 3-controlled, and so on; and we deduce that there is an ideal $\mathcal{C}_{\ell}$ of graphs in $\mathcal{C}_2$,
with unbounded chromatic number, and a number $\tau$ such that $\chi^{\ell}(G)\le \tau$ for each $G\in \mathcal{C}_{\ell}$.
Let $c'=14\tau^3$, and let $c=44c'+4\tau$; and choose $G\in \mathcal{C}_{\ell}$ with $\hbox{-}\cdots\hbox{-}hi(G)>c$.
By \ref{doubleshower}, there is a shower $\mathcal{S}=(L_0,\ldots, L_k,s)$ in $G$ with a recirculator and with floor $F$ with chromatic number
more than $14\tau^3$. Define $d, M_0,\ldots, M_d$ and $C$
as in the proof of \ref{multijet}.
\\
\\
(1) {\em If $\chi(C)> 7\tau^3$ then the theorem holds.}
\\
\\
Define the enumeration $(v_1,\ldots, v_n)$ of $M_0\cup \cdots\cup M_{d-2}$,
and $B_0$ and its enumeration $(b_1,\ldots, b_m)$,
as in the proof of \ref{multijet}. Let $b_{m+1-i}'=b_i$; so $(b_1',\ldots, b_m')=(b_m,\ldots, b_1)$ is also an enumeration of the same set.
For $i=1,\ldots, m$ let $W_i$ be the set of vertices in $M_d$ that are adjacent to $b_i'$ and to none of $b_1',\ldots, b_{i-1}'$. Thus
$W_1\cup\cdots\cup W_m=C$, and $(W_1,\ldots, W_m)$ is a $\chi^1(G)$-colourable (and hence $\tau$-colourable) grading of $G[C]$,
compatible with $(b_1',\ldots, b_m')$. By \ref{greenedge} (taking $c=2\tau$),
there is a square edge $uv$ of $G[C]$, and a subset $X$ of $C$,
such that
\begin{itemize}
\item $G[X]$ is connected;
\item $u,v$ are both earlier than every vertex in $X$;
\item $v$ has a neighbour in $X$, and $u$ does not; and
\item $\chi(X)>2\tau$.
\end{itemize}
Let $u',v'\in B_0$ be the earliest parents of $u,v$ respectively. Since $\chi(X)>2\tau$, there exists $x\in X$ with $G$-distance
at least four from each of $u',v'$. Let $x'\in B_0$ be its earliest parent, and let $R$ be an induced path of $G[X\cup \{v,x'\}]$
between $v$ and $x'$.
Now no vertex of the interior of $R$ is adjacent to either of $u',v'$ since $u,v$ are earlier than every member of $X$
and $u',v'$ are their earliest parents. Also, $x'$ is nonadjacent to $u,v,u',v'$ since the $G$-distance between $x$ and $u',v'$
is at least four. Since $uv$ is square, the path $Q'$ obtained from $R$ by adding the edge $vv'$ is induced,
and since $u$ has no neighbour in $X$, so is the path $P'$ obtained from $R$ by adding the path $v\hbox{-} u\hbox{-} u'$.
Now there are paths between the apex of $\mathcal{S}$ (say $a$)
and $u'$, and between $a,v'$, both of length $k-1\ge \ell$. No vertex of $L_k$ has a neighbour different from $u',v'$ in these paths.
Also $x'$ has no neighbour in $P'\cup Q'$; because any such neighbour would be in $L_k\cup L_{k-1}\cup L_{k-2}$, and hence
would have $G$-distance at most one from one of $u',v'$, which is impossible since the $G$-distance between $x$ and $u',v'$
is at least four. Thus by taking the union of the first of these with $P'$ and the second with $Q'$,
we obtain two paths $P,Q$, both induced and both between $a,x'$, with consecutive lengths. Choose $i$ minimum such that $x'$
is adjacent to $v_i$, and let $R'$ be a bloodline of $v_i$ (defined as in \ref{multijet}). Let $u'=b_f'$, $v'=b_g'$, $x'=b_h'$. Since $u,v$ are earlier than $x$,
it follows that $f,g<h$. Now $u',v'$ are nonadjacent to $v_i$, since the $G$-distance between each of $u',v'$ and $x$ is at least four.
Since $u'=b_{m+1-f}$ and $x'=b_{m+1-h}$, and $m+1-f>m+1-h$, it follows that no neighbour of $u'$ belongs to $V(R')$, and similarly
for $v'$. Thus the union of $P$ with the edge $x'v_i$ and $R'$ is induced, and so is the union of $Q$ with $x'v_i$ and $R'$.
Consequently there are jets of $\mathcal{S}$ with consecutive lengths. By taking their unions with the recirculator, we obtain
holes of consecutive lengths. This proves (1).
\\
\\
(2) {\em If $\chi((F\cap M_d)\setminus C)>7\tau^3$ then the theorem holds.}
\\
\\
The proof of this is the same as that for step (2) of the proof of \ref{multijet} and we omit it.
From (1) and (2) the result follows.~\bbox
\section{Some connections with homology}
In the 1990s, Kalai and Meshulam made several intriguing conjectures connecting the chromatic number of a graph $G$ with the homology
of a simplicial complex associated with $G$. (Most of them are mentioned in~\cite{kalai}, and see also~\cite{kalai2}.)
The {\em $n$th Betti number}
$b_n(X)$ of
a simplicial complex $X$ is the rank of the $n$th homology group $H_n(X)$ (see, for instance, \cite{hatcher}). The {\em Euler characteristic\footnote{The
standard notation for the Euler characteristic of a simplicial complex $X$ is $\chi(X)$; however, we will avoid using that
notation here, as there is clearly some potential for confusion between $\chi(I(G))$ and $\chi(G)$.}} of $X$ is
$\sum_{n\ge 0}(-1)^n c_n(X)$, where $c_n(X)$ is the number of $n$-faces in $X$; it turns out that the Euler characteristic is
also equal to the alternating sum $\sum_{n\ge0}(-1)^n b_n(X)$ of Betti numbers.
The {\em independence complex} $I(G)$ of a graph $G$ is the simplicial complex whose faces are the stable sets of vertices. The {\em total Betti number} of a graph $G$ is $b(G):=\sum_{n\ge 0}b_n(I(G))$. The total Betti number of $G$ is at least the modulus of the Euler characteristic of $I(G)$, since the former is the sum of the Betti numbers and the latter is their alternating sum.
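For concreteness, the Euler characteristic of $I(G)$ can be computed by brute force from the stable sets of a small graph, since a stable set of cardinality $n+1$ is an $n$-face and contributes $(-1)^n$. The following Python sketch is purely illustrative and is not part of the argument:

```python
from itertools import combinations

def stable_sets(n, edges):
    """All stable sets of vertices {0,...,n-1}, including the empty set."""
    adj = {frozenset(e) for e in edges}
    for size in range(n + 1):
        for s in combinations(range(n), size):
            if all(frozenset(p) not in adj for p in combinations(s, 2)):
                yield s

def euler_characteristic(n, edges):
    """Euler characteristic of I(G): a stable set of size k+1 is a
    k-face, contributing (-1)^k; the empty set is not a face."""
    return sum((-1) ** (len(s) - 1) for s in stable_sets(n, edges) if s)

# The 5-cycle: 5 singleton faces and 5 stable pairs, so 5 - 5 = 0.
c5 = [(i, (i + 1) % 5) for i in range(5)]
print(euler_characteristic(5, c5))  # 0
```

For a triangle the stable sets are just the three singletons, so the Euler characteristic is $3$.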
Kalai and Meshulam made several conjectures on this topic: one was
already mentioned (at \ref{bonamy}), and two others are as follows:
\begin{thm}\label{kalaiconj2}
For every integer $k\ge 0$ there exists $c$ such that the following holds.
If $b(H)\le k$ for every induced subgraph $H$ of $G$ then $\hbox{-}\cdots\hbox{-}hi(G)\le c$.
\end{thm}
\begin{thm}\label{kalaiconj1}
For every integer $k\ge 0$ there exists $c$ such that the following holds. For every graph $G$, if the
Euler characteristic of $I(H)$ has modulus at most $k$ for every induced subgraph $H$ of $G$ then
$\hbox{-}\cdots\hbox{-}hi(G)\le c$.
\end{thm}
Kalai and Meshulam also asked about the graphs $G$ that satisfy the condition in \ref{kalaiconj1} with $k=1$ (i.e.~for every induced subgraph $H$, the Euler characteristic of $I(H)$ lies in $\{-1,0,1\}$). They conjectured that $G$ has this property if and only if
$G$ has no induced cycle of length divisible by three. We prove this conjecture in~\cite{ternary}, with Chudnovsky and Spirkl.
In this section we prove conjectures \ref{kalaiconj2} and \ref{kalaiconj1}, using \ref{superkalai}.
The second conjecture is clearly stronger, since the modulus of the Euler characteristic of $I(G)$ is at most the total Betti number of $G$.
Say that
a graph $G$ is {\em $k$-balanced} if for every induced subgraph $H$ of $G$, the number of stable sets in $H$ of even cardinality differs by at most $k$ from the
number of stable sets of odd cardinality. The condition in \ref{kalaiconj1} is exactly that $G$ is $k$-balanced, so \ref{kalaiconj1} (and therefore also \ref{kalaiconj2}) is an immediate consequence of the following result.
\begin{thm}\label{kalaiconj}
For every integer $k\ge 0$ there exists $c$ such that $\chi(G)\le c$ for every $k$-balanced graph $G$.
\end{thm}
\noindent{\bf Proof of \ref{kalaiconj}, assuming \ref{superkalai}.\ \ }
Let $k\ge 1$ be an integer. By \ref{superkalai}, we may choose $c$ such that every graph $G$ with $\omega(G)\le k+1$
and $\hbox{-}\cdots\hbox{-}hi(G)>c$ contains $k$ holes, pairwise anticomplete and each of length a multiple of three. We claim that
every $k$-balanced graph has chromatic number at most $c$. For if $G$ is $k$-balanced, then $G$ has no clique of cardinality more
than $k+1$ (because a complete subgraph $H$ with $k+2$ vertices has $k+2$ odd stable sets and only one even one); and $G$ does not
have $k$ holes that are pairwise anticomplete, each of length a multiple of three. (We leave the reader to check this.) This
proves \ref{kalaiconj}.~\bbox
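The clique observation in the proof is easy to verify by brute force: in a complete graph the only stable sets are the empty set (even) and the singletons (odd). A minimal illustrative sketch, counting stable-set parities:

```python
from itertools import combinations

def parity_counts(n, edges):
    """(even, odd): numbers of stable sets of even and odd cardinality,
    counting the empty set as an even stable set."""
    adj = {frozenset(e) for e in edges}
    even = odd = 0
    for size in range(n + 1):
        for s in combinations(range(n), size):
            if all(frozenset(p) not in adj for p in combinations(s, 2)):
                if size % 2 == 0:
                    even += 1
                else:
                    odd += 1
    return even, odd

# Complete graph on k+2 vertices (k = 3): the empty set is the only even
# stable set and the k+2 singletons are the odd ones, so the imbalance
# is k+1 > k, and the graph is not k-balanced.
k = 3
n = k + 2
complete = [(i, j) for i in range(n) for j in range(i + 1, n)]
print(parity_counts(n, complete))  # (1, 5)
```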
\end{document} |
\begin{document}
\title[Line Planning and Timetabling in Subway Networks]{\large{An optimization model for line planning and timetabling in automated urban metro subway networks}}
\author{V\'ictor Blanco}
\address{IEMath-GR, Universidad de Granada.}
\email{[email protected]}
\author{Eduardo Conde}
\address{Dep. Statistics \& OR, Universidad de Sevilla.}
\email{[email protected]}
\author{Yolanda Hinojosa}
\address{Dep. Applied Economics I, Universidad de Sevilla.}
\email{[email protected]}
\author{Justo Puerto}
\address{Dep. Statistics \& OR, Universidad de Sevilla.}
\email{[email protected]}
\thanks{The authors were partially supported by the projects FQM-5849 (Junta de Andaluc\'ia $\backslash$ FEDER), MTM2016-74983-C2-1-R (MINECO, Spain) and contract 1853/0257 (Soci\'et\'e Metrolab{\textregistered}, Service Contr\^ole de Gestion).}
\date{\today}
\begin{abstract}
In this paper we present a Mixed Integer Nonlinear Programming model that we developed as part of a pilot study requested by the R\&D company {\sc Metrolab\textregistered}\footnote{Soci\'et\'e Metrolab\textregistered, Service Contr\^ole de Gestion, registered on the Paris Trade and Companies Register under the number 532 684 685 RCS Paris, with its registered office at 117/119 Quai de Valmy - 75010 PARIS - FRANCE} in order to design tools for finding solutions to line planning and timetabling problems in automated urban metro subway networks. Our model incorporates important factors in public transportation systems from both a cost-oriented and a passenger-oriented perspective, such as time-dependent demands, interchange stations, short-turns and technical features of the trains in use. The incoming flows of passengers are modeled by means of piecewise linear demand functions which are parameterized in terms of arrival rates and bulk arrivals. Decisions about frequencies, train capacities, short-turning and timetables for a given planning horizon are jointly optimized in our model. Finally, a novel Math-Heuristic approach is proposed to solve the problem. The results of extensive computational experiments are reported to show its applicability and effectiveness in handling real-world subway networks.
\end{abstract}
\keywords{Line planning, short-turns, timetabling, Mixed Integer Nonlinear Programming, Math-heuristic.}
\maketitle
\section{Introduction}
In this paper, we propose a model for line planning and timetabling on general urban subway transportation systems. This study was originated by a real-world problem proposed by Metrolab{\textregistered}, a French R\&D company, dealing with the line planning and timetabling of trains of existing subway networks. It was a pilot experience to automatize the decision making process, at the tactical and operational level, of a small section of the {Paris subway network.}
The development of flexible tools to automatically control a transportation system according to a set of indicators of its service quality may have a considerable impact on its efficiency and usefulness. The quality perception of a public transportation system from a customer's point of view depends highly on its reliability, comfort and effectiveness when compared with alternative transportation means. If only poor-quality connections are offered or the quality-price relationship does not fulfil the passengers' expectations, they may decide to use alternative transportation means. Therefore, the quality of a public transportation system, from the passengers' point of view, is a key objective in its design and management, besides infrastructure constraints, operational limitations or budget considerations~\cite{goe17}.
The existing literature on line planning is very extensive (see e.g., \cite{des07, goe17, sch12} and the references therein), including different models which can be classified according to the decisions covered (determination of train routes, frequency setting, or both), infrastructure and operational aspects, objective functions and the way in which passengers' decisions are taken into account in the decision making process. For instance, in \cite{goe17}, line planning models are classified with respect to their objective functions into models with cost-oriented or with passenger-oriented objective functions. Our model will consider both points of view in an attempt to find an equilibrium between these conflicting objectives which, as mentioned in \cite{gui08}, is an important challenge in a public transportation system.
However, building an effective model is much more complex than selecting the appropriate nature of the objective function. The correct delimitations of the considered features in the context of the transportation system or in the set of customers served by this system is a very important {phase in the actual modeling. Often, an initial line planing must be re-engineered motivated by changes on costumers' flows induced by changes in \emph{passengers' route choices}, \cite{goe17}.} This gives rise to a \emph{bilevel optimization problem} with a line planning problem on the upper level and a passenger's route choice problem on the lower level. Moreover, the existence of several decision-makers is not the only difficulty of this problem. There exist uncertainties in the number of passengers that must be served (demands), in the origin-destination pairs that customers want to go across or in the times needed to cover the network links due, for instance, to machine failures or other incidents. In this situation it seems hard to integrate all these elements in a suitable optimization model.
Usually, some simplifications must be assumed in order to obtain operational solutions for realistic situations. The usefulness of the model will be strongly conditioned by these assumptions. Having a valid model for a wide range of scenarios might be a lofty target but one worth aiming for. The resulting approximation can be seen not only as a model to optimize the existing resources in a given transportation system but also as a \emph{what-if} tool to make rational decisions. For instance, it can be used to check the implementation of possible modifications in the structure of the existing network (adding stations, new connections, \ldots) or in the conditions (demand variation, service disruptions, \ldots) under which the transportation system is working at present.
Line planning is only one of the planning process's stages. Indeed, following \cite{des07,mic09,sch17}, the planning process in public transportation includes several phases that usually are sequentially executed in the following order:
\begin{enumerate}
\item {\bf network design}, where the stations, links and routes of the lines are established,
\item {\bf line planning}, specifying the frequency and the capacity of the vehicles used in each line (\emph{line concept}, \cite{goe17}),
\item {\bf timetabling}, {defining } the arrival/departure times and
\item {\bf scheduling}, in which vehicles and/or crews are planned.
\end{enumerate}
The first phase, namely network design, is done at the strategic level and implies a high cost (see e.g., \cite{bpr11}). Moreover, the remaining steps involve decisions at the tactical and operational levels which are conditioned by that design. Thus, it seems appropriate to assess different designs by means of a \emph{what-if analysis} based on the efficiency of the system under a given scenario. In that case, after initializing with a \emph{reasonable} design, the procedure could \emph{optimize} the efficiency of the transportation system using tactical/operational decisions. This may reveal possible weaknesses of the current design and, after fixing some of them, will give rise to a new design. The process could be repeated until a compromise transportation design is found.
In addition to the literature on line planning discussed above, one may also find a rich literature on timetabling and scheduling (see e.g., \cite{bar14, cap02, cap06, lee09, sun14} for timetabling models and \cite{can11, dar07, tek18} for scheduling models). Timetabling models are usually classified according to the capacity of the transportation system or the requirements of passengers. Regarding the scheduling literature, models can be classified into single and multiple depot frameworks~\cite{bun09}. Besides, we can find periodic and time-dependent timetabling and scheduling models and many other variants depending on the considered constraints. Considerable effort has also been made to analyze the practical effects of the different elements or features covered by these models. For instance, as mentioned in \cite{niu15}, periodic timetables easily allow passengers to remember the exact departure times at stations but, in general, they are not fully sensitive to time-varying passenger demands, which could result in long waiting times and reduced service reliability, particularly under irregular over-saturated conditions.
Once the framework of the transportation system has been delimited, the resulting model should be optimized. However, as pointed out in \cite{sch17}, going through the above-mentioned stages of the planning process in a sequential way, often leads just to suboptimal solutions. Recent research (see \cite{gor13,gui08,lopp17,opp18,sch17}) is increasingly oriented towards {\bf integrated planning} in which two or even more of these planning stages are simultaneously addressed. These integrated optimization models are frequently superior to those optimizing sequentially the considered stages, as it has been recognized in the literature (see e.g., \cite{sch17} and the references therein).
In this paper we propose a new integrated model in which line planning and timetabling are simultaneously optimized using a combination of cost- and passenger-oriented objective function. In our model, time-dependency on demands is considered in addition to two other elements that, as far as we know, have only been addressed separately in the literature: short-turns and interchange stations.
\textit{Short-turning} is a tactical decision for which some trains can perform short cycles in order to increase frequency in specific sections of a line. In general, due
to their analytical complexity, the approaches to manage short-turnings in the context of railways planning are based on particular cases and there are no general models that can be applied without modifications to every situation~\cite{can16}. Besides, most of the literature analyzing short-turning in a railway context is limited to a single two-way transit line \cite{can16, cor11, ort09, wee18}. Our model incorporates decisions concerning the activation of short-turns simultaneously in several lines of the network.
On the other hand, \textit{interchange} or \textit{transfer stations} are shared by several lines in the system allowing passengers to change from one to another line. Papers dealing with interchange stations (see e.g. \cite{hei18,kan16,won08}) usually aim for minimizing the total transfer waiting time of passengers by synchronizing train arrival times at transfer stations. We will manage here the effective flows of passengers at the interchange stations and compute the corresponding effects both in the quality of the service and in technical constraints, as those related to the capacities of the trains.
The final aim is to model a real system, inspired by one initially proposed by Metrolab{\textregistered}, using its most relevant features whilst its computational tractability is preserved. This is an important challenge taking into account the combinatorial, stochastic, multilevel and multiobjective nature of the problem. The resulting outcome is a Mixed Integer Second Order Cone Programming model, which can be solved using off-the-shelf optimization solvers, but only for limited sizes. For larger sizes we propose a Math-Heuristic approach in which the system is decoupled into different lines. Afterwards, each subsystem is optimized individually, but including in the input data the flows of passengers generated after optimizing the other lines. The process is repeated as in a \emph{block coordinate descent} procedure (see e.g., \cite{ber99}) until a given stopping rule is satisfied.
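The alternating scheme can be illustrated on a toy problem. The sketch below is a generic illustration of block coordinate descent (not Metrolab's code, nor the actual line subproblem): it exactly minimizes a strictly convex quadratic over one block at a time while the other block is held fixed, mirroring how each line's subproblem would be re-optimized with the other lines' passenger flows frozen.

```python
def block_coordinate_descent(iters=50):
    """Minimize f(x, y) = x^2 + y^2 + x*y - 3x - 4y by exact
    minimization over one variable at a time, the other held fixed."""
    x = y = 0.0
    for _ in range(iters):
        x = (3.0 - y) / 2.0   # argmin_x f(x, y): solve 2x + y - 3 = 0
        y = (4.0 - x) / 2.0   # argmin_y f(x, y): solve 2y + x - 4 = 0
    return x, y

x, y = block_coordinate_descent()
print(round(x, 6), round(y, 6))  # converges to (2/3, 5/3)
```

Each full sweep contracts the error by a constant factor here, which is the behaviour one hopes for (but cannot guarantee in general) when the blocks are the individual lines of a network.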
The remainder of this paper is organized as follows. Section \ref{sec:1} deals with a detailed description of the optimization problem and its main elements. In Section \ref{sec:2} we present the Mathematical Programming formulation of the problem. The demand function modelling how the flow of passengers entering into certain stations of a line changes according to the effects of the other lines or due to external factors is detailed in Section \ref{sec:3}. The usefulness of the proposed model is illustrated in Section \ref{ex0} with a case study using real data on a section of the Paris subway provided by Metrolab{\textregistered}. Our Math-Heuristic approach is proposed in Section \ref{sec:4} and the corresponding computational results, including its comparison with the exact method, are reported in Section \ref{sec:5} using several network topologies adapted from the literature. The paper ends with a section where some conclusions and future extensions of the model are outlined.
\section{Problem description}
\label{sec:1}
Let us assume that the technical features of a public transportation network, which is a part of a complex underground train system, are known. Our goal is to model the problem of how to operate different metro lines on this network according to a set of technical requirements and a given structure of the demand requesting this service. Suppose that a set of routes for the potential lines (\emph{lines pool}) and a set of interchange stations are specified. Furthermore, some other factors involved in the system performance such as passenger flows, the set of possible train capacities, the maximum number of allowed trips in a given line during the planning horizon, demand fluctuation (e.g., rush/off-peak hours) or stopping time windows, amongst others, are also assumed to be available. The goal is to model such a system using its most relevant features whilst its computational tractability is preserved. In the description of our approach we will consider three main blocks: input data, feasible actions and assessment of a particular solution together with some additional specifications of our model.
\subsection{Input data}
In addition to the input network, including the topological route map and the interchange stations, the main block of input data corresponds to the passenger flow amongst the considered set of stations. Obviously, the number of passengers waiting at a specific station for the next train which connects to a given destination is a stochastic process. Furthermore, the stochastic processes corresponding to the set of considered stations are interdependent due to the flow relationships amongst the stations which, in addition, may change over time. In our model, given that we deal with a heavily congested subway line, these stochastic processes will be replaced by average rates of the number of passengers.
The dynamic dependence of these processes on time should be preserved in some way in the optimization model since it is one of the relevant elements in order to obtain realistic operating solutions. We will do it through a function measuring the intensity of the demand.
In our framework, we assume that two main situations affect the passenger flows: transitions between stations or lines and the arrival of external demand functions. The first one refers to the behaviour of the passengers with respect to the mobility pattern. These data fix an assignment for the destination of the passengers catching a train in a given station, which is known in the literature as \emph{line planning with route assignment}, \cite{goe17}. The alternative approach of line planning with route choice seems more appropriate only in those transportation systems with high density of connections (with alternative paths between two given locations) and low trip frequency. However, this is not the case in our model since these two features rarely appear in underground train systems.
We will use an Origin-Destination (OD)-matrix per line, having as entries the proportions of passengers moving between pairs of stations of the line, together with a set of values quantifying the proportion of passengers who want to change from one metro line to another at each of the transfer stations. These proportions may be considered as estimations of the probabilities with which a passenger moves through the network and may depend on the dynamic nature of the transportation system. Following \cite{sch17}, in order to model the passenger flows, OD-data gives rise to more realistic applications than those based on traffic loads, since the paths followed by passengers depend strongly on the line concept. As observed by several authors (\cite{goe17,sch17}), the optimization models derived from the management of OD-data are often harder to solve numerically, and thus approximate ad-hoc algorithms are needed to deal with problems of realistic sizes.
The second factor, the arrival of external demand, refers to the intensity of use of the transportation network. The external demand models the incoming flow of passengers entering the system from outside during the planning horizon. These functions determine the relative importance, in the overall planning cost, of the stations used to access the system, and they may change over time. Rush hours at given time periods, irregular weather conditions, or the celebration of events at places close to certain stations may increase or decrease the incoming flow of passengers; this is taken into account in our model.
These external demand functions, together with the OD-data corresponding to movements between pairs of stations and the proportions of transfer passengers, give rise to a model with time-dependent passenger flows. Time-dependent flows are an appropriate feature of a realistic approach to traffic planning and, as pointed out in \cite{sch17}, there is at present not much research literature covering integrated optimization planning models under these conditions.
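As a minimal illustration of how the OD proportions act on the flows, the following sketch splits the passengers boarding at a station among their destinations; the data and the function name are hypothetical and are not part of the formal model.

```python
# Hypothetical illustration: the OD proportions of a line assign destinations
# to the passengers boarding at a station.
def assign_destinations(boarding, od_row):
    """Split `boarding` passengers among destinations according to the OD
    proportions od_row[j] (which sum to 1 over the reachable stations)."""
    return {j: boarding * p for j, p in od_row.items()}

# 100 passengers board at station 1 of a 4-station line.
od_row = {2: 0.2, 3: 0.5, 4: 0.3}   # proportions p^l_{1j}
flows = assign_destinations(100, od_row)
# flows == {2: 20.0, 3: 50.0, 4: 30.0} (up to floating point)
```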
\subsection{Feasible actions}
In our model, the line concept design is specified by choosing the operating frequencies and the train capacities for each line considered in the lines pool. Furthermore, our line concept design allows \textit{short-turning} in some lines, i.e., the possibility of activating short cycles, for some or all of the trains and at certain time periods, in order to increase the frequency at specific (consecutive) stations suffering from intensive demand. This situation is typical of lines which connect distant residential areas with the city center or economic centers.
In the integrated planning model, the line concept design is optimized together with its corresponding timetable. Both elements define the feasible solutions of the problem once a set of technical constraints, involving passenger flows and departure/arrival times, is specified.
The line concept design will be the main source of discrete decision variables in the formulation of our problem, for instance, the selection of train capacities from a finite set. On the contrary, the actual number of trips of a given line will be modeled using a finite set of replicas of continuous variables corresponding to potential timetables. On the basis of these decision variables, a number of additional auxiliary variables will be considered in the optimization model in order to control the times at which different events happen at each station during the planning horizon. In our model, unlike most of the approaches deriving optimal timetables in transportation systems, the departure times are not discretized and the period of time elapsing between consecutive train departures is not constrained to be constant. This provides more flexible decisions as well as timetables sensitive to the changing conditions of the passenger flows during the planning horizon, resulting, in general, in an aperiodic timetabling.
Different train speeds are not taken into account in our line concept design because the pilot proposal by Metrolab{\textregistered} only considered constant, fixed speeds between each pair of consecutive stations. Nevertheless, continuous variables modeling the speed of a train between two consecutive stations could be \textit{easily} added to our model, as explained in Remark \ref{remark1}.
\subsection{Assessing a planning solution}
As commented above, due to the multi-objective nature of the problem, one of the most difficult modeling issues is assessing a feasible solution (line concept + timetable). Maintenance/operational planning costs are usually easy to handle as part of the objective function. However, in order to also account for the passenger-oriented nature of the objective, the cost induced by the quality of the service provided by the system should be included in the objective function. This is incorporated into the model by quantifying the cost of unmet (non-served) demand, that is, of passengers who cannot take a train due to lack of capacity. Non-served passengers contribute a given amount, in terms of costs, due to their loss of confidence in the transportation system, their balking rate, or a combination of these and other factors. Obviously, calibrating these costs is not an easy task, but it may be partially handled by managing a finite set of cost estimates and solving the problem for each of them in order to evaluate the influence of these hard-to-calibrate parameters on the proposed solution.
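A toy sketch of this assessment follows, with hypothetical values; the precise objective, with the penalties $\mu_1$ and $\mu_2$ defined later, is formalized in Section \ref{sec:2}.

```python
# Sketch (hypothetical values) of how a single boarding event could be
# assessed: revenue from served passengers minus penalties for unmet demand,
# where a proportion alpha of the rejected passengers persists (penalty mu1)
# and the rest leaves the system (penalty mu2).
def assess_boarding(waiting, capacity_left, fare, mu1, mu2, alpha):
    served = min(waiting, capacity_left)
    rejected = waiting - served
    persisting = alpha * rejected        # keep waiting for the next train
    lost = (1 - alpha) * rejected        # give up using the system
    value = fare * served - mu1 * persisting - mu2 * lost
    return served, persisting, lost, value

served, persisting, lost, value = assess_boarding(
    waiting=120, capacity_left=100, fare=1.5, mu1=0.4, mu2=2.0, alpha=0.75)
# 100 served, 15 persisting and 5 lost passengers; value = 150 - 6 - 10 = 134
```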
\subsection{Specifications of our model}
In the following we list the assumptions imposed to derive a suitable Mathematical Programming formulation of our model.
\begin{description}
\item[\textit{Planning horizon}:] Our model considers a continuous time interval in which all the events must start and all the decisions occur. The range of this interval depends on the data collection accuracy and the within-day variability of the demand. This planning horizon is fixed a priori but, as mentioned in Remark \ref{remark2}, our model allows joining two consecutive planning horizons by passing data about numbers of passengers and arrival/departure times, obtained from an optimal solution on a given planning period, as input data for the next one.
\item[\textit{One direction trips}:] We assume that each line in the pool operates only in one direction, from a given line-header station to the final one. Usual round-trip lines are modeled as two symmetric lines by interchanging the order of the stations. A \emph{trip} consists of traversing all the ordered stations of a given line, from the head to the final station. Thus, a physical round trip starting and ending at the same station is given by two lines sharing the stations but in opposite order.
\item[\textit{Short-turns}:] We consider that specific sections of consecutive stations in certain lines are allowed to be activated in some of the trips.
\item[\textit{Interchange stations}:] We consider that the lines in the lines pool may share common interchange stations where some of the passengers change lines to reach their final destination.
\item[\textit{Train capacities}:] The model assumes that a finite set of admissible capacities for the trains operating a line is given.
\item[\textit{Safety interval}:] A minimum security time window between consecutive trips in any line is established.
\item[\textit{Maximum number of trips}:] We assume, w.l.o.g., that the maximum number of possible trips of each line is given beforehand. Note that an upper bound on this maximum number can be obtained taking into account the range of the planning horizon and the safety interval. In our formulation, we resort to a set of decision variables controlling the times at which different events happen. These variables must be replicated as many times as the maximum number of trips, although only some of these \emph{trip-variables} are activated and represent actual trips. Hence, the size of the formulation strongly depends on this maximum number. The idea is to fix the potentially variable number of trips of the line concept to the maximum number although, in fact, some of them are \textit{fake trips}. This trick eases the task of building constraints ensuring non-overlapping events and the safety interval between consecutive trips in the stations of a given line.
\item[\textit{Piecewise linear cumulative incoming demand}:] We model the accumulated number of passengers arriving at each station up to a given instant during the planning horizon by the so-called {\it demand function}. With this function we manage variable arrival rates during the planning horizon and bulk arrivals due to special events, such as the end of a football match at a close location, the arrival of passengers coming from another transportation mode or, at the interchange stations, the arrival of passengers coming from another line of the subway network. As proposed by Metrolab{\textregistered} for its pilot experience, we consider that this function is a piecewise linear function of time (further details are given in Section \ref{sec:3}).
\end{description}
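The piecewise linear cumulative demand function just described can be sketched as a simple interpolator; the breakpoints and values below are illustrative, and the actual construction is the one given in Section \ref{sec:3}.

```python
import bisect

# Sketch of a piecewise linear cumulative demand function D_i(t): breakpoints
# 0 = t_0 < t_1 < ... < t_m with the accumulated demand at each breakpoint,
# linearly interpolated in between.  All data are illustrative.
def make_demand(breakpoints, values):
    def D(t):
        if t <= breakpoints[0]:
            return values[0]
        if t >= breakpoints[-1]:
            return values[-1]
        j = bisect.bisect_right(breakpoints, t)
        t0, t1 = breakpoints[j - 1], breakpoints[j]
        v0, v1 = values[j - 1], values[j]
        return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return D

# Demand accelerating towards a rush hour at t = 60 and flattening afterwards.
D = make_demand([0, 30, 60, 120], [0, 50, 200, 260])
# D(45) interpolates between 50 and 200: D(45) == 125.0
```

Since the function is piecewise linear and non-decreasing, it can be embedded in a mathematical program with standard linearization techniques.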
Figure \ref{fig:00} illustrates some of the considered features of the networks under study. There, we represent by circles or squares the nodes corresponding to the stations of two subway lines. An interchange station common to the two lines is marked with a black square. We have also included a possible short-turn in the \textit{horizontal} line covering a set of four consecutive squared stations (drawn as a dashed line in the picture).
\begin{center}
\begin{figure}
\caption{Representation of a sample network in our framework.}
\label{fig:00}
\end{figure}
\end{center}
\section{MINLP Formulation}\label{sec:2}
In this section we provide a Mathematical Programming formulation for the problem described in Section \ref{sec:1}. First of all, we are given an input network, like the one depicted in Figure \ref{fig:00}, including the topological route map and the stations, some of them being interchange stations, and the lines pool $L$ defined over this network. In addition, short-turning decisions are allowed in some lines of the lines pool. These decisions always concern a set of given consecutive stations of those lines. In what follows, and when no confusion is possible, we will refer to both the set of consecutive stations and the trip concerned by a short-turning decision as a \textit{short-turn}. On the other hand, trips through the full-length lines will be referred to as \textit{whole trips}. In order to introduce the model, we start by defining the set of \emph{parameters} describing the remainder of the input data as well as the \emph{decision variables} used to identify a feasible solution.
\subsection{Parameters}
The input parameters for our model are the following:
\begin{itemize}
\setlength{\itemindent}{-0.2cm}
\item $[0,T]$: Planning horizon in which trains start their journeys at a given head of line station. Note that a train may start its last journey within $[0, T]$ while its last stop may occur at a time instant $t>T$.
\item $L$: Set of lines in the network (lines pool). As mentioned above, round-trip lines are considered as two different lines sharing the stations but traversed in opposite directions (rigorously speaking, the stations represent platforms of the corresponding line). Each line is assumed to be described by its node stations and its directed connections between consecutive stations. Additionally, we will denote by $LS$ the set of lines containing a short-turn and by $LNS$ the remainder (those in which none of their proper subsets of stations can be activated as short-turns). Clearly, $L=LS \cup LNS$.
\item $N_\ell=\{1, \ldots, n_\ell\}$: {Set of } stations of line $\ell\in L$. Stations are assumed to be ordered in its travelling direction. Observe that if the lines $\ell$ and $\ell^\prime$ correspond to {the same round-trip line but traversed} in opposite directions they have the same number of stations ($n_\ell=n_{\ell^\prime}$) and station $i\in\ell$ represents the opposite platform to station $n_{\ell^\prime}-i+1\in \ell^\prime$.
Additionally, for each line $\ell\in LS$ we will denote by $S_\ell$ the set of stations in the (unique) short-turn, where $1_{S_\ell}$ is the first station of the short-turn and $n_{S_\ell}$ the last one.
In order to present a clearer Mathematical Programming model, from now on we will also denote $\overline{\mathcal{S}}=\Big\{(i,\ell): i\in N_\ell, \ell\in LNS\Big\} \cup \Big\{(i,\ell): i\in N_\ell\backslash S_\ell, \ell\in LS\Big\}$, i.e., the pairs of stations and lines such that the station is not part of an available short-turn.
\item $d_i^\ell$: Distance (measured as travel time) between stations $i$ and $i+1$ of line $\ell\in L$. Recall, as mentioned in Section \ref{sec:1}, that we assume that the speed of the trains operating between consecutive stations $i$ and $i+1$ is fixed, and thus this distance is trip independent.
\item ${e}_i^{\ell}$: Stopping time of any train at station $i$ of line $\ell$ before leaving for station $i+1$. This value represents a time window to perform different operations, such as the unloading/loading of passengers from/to the train or the fine-tuning of the train. Without loss of generality, we also consider that the stopping times are trip independent.
\item $t_{1\mapsto 1_{S_\ell}}$: Distance (measured as travel time plus stopping times at the intermediate and last stations) between the head of line station and the first station of the short-turn of line $\ell\in LS$. Note that this parameter is given by the following expression:
\begin{equation}
t_{1\mapsto 1_{S_\ell}}=\sum_{r=1}^{1_{S_\ell}-1}({d_r^\ell}+e_{r+1}^{\ell}). \label{t1s}
\end{equation}
\item $IS^\ell$: Minimum safety time interval between consecutive trips in a given line $\ell \in L$.
\item $K_\ell= \{1, \ldots, {\bar{k}_\ell}\}$: Set of trips in line $\ell\in L$. It is worth noting that the exact number of trips of a line is a decision of the line planning process and is not known beforehand. In fact, some trips will not be actually used in the planning; we will refer to them as \textit{fake trips}. Note also that the maximum number of possible trips of line $\ell\in L$ can always be upper bounded by $\left\lceil\frac{T}{IS^\ell}\right\rceil+1$. Typically the value $\bar{k}_\ell$ will be smaller than this bound and should be estimated on the basis of the technical specifications of the network.
\item $Q = \{q_1, \ldots, q_{|Q|}\}$: Admissible capacities for trains operating in all the lines. In some cases a single capacity profile is allowed, while in others a base capacity is available and, by adding extra wagons, it can be doubled, tripled, etc.
\end{itemize}
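Two of the derived parameters above, the upper bound $\left\lceil T/IS^\ell\right\rceil+1$ on the number of trips and the travel time $t_{1\mapsto 1_{S_\ell}}$ of \eqref{t1s}, can be sketched as follows (illustrative data, 0-based Python indexing):

```python
import math

# Sketch of two derived parameters with hypothetical data.
def max_trips_bound(T, IS):
    # Upper bound ceil(T / IS) + 1 on the number of trips of a line.
    return math.ceil(T / IS) + 1

def time_to_short_turn(d, e, first_short_turn_station):
    # d[r]: travel time d_{r+1} between stations r+1 and r+2 (0-based);
    # e[r]: stopping time e_{r+1} at station r+1.
    # Sums d_r + e_{r+1} for r = 1 .. 1_S - 1, as in equation (t1s).
    s = first_short_turn_station
    return sum(d[r] + e[r + 1] for r in range(s - 1))

d = [4, 3, 5, 4]        # travel times between consecutive stations
e = [0, 1, 1, 1, 1]     # stopping times at stations 1..5
# Short-turn starting at station 3: t = (d_1 + e_2) + (d_2 + e_3) = 5 + 4 = 9
```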
We also need the proportion of passengers who, not being able to catch the train in a given attempt, still insist on using the system and wait for the next train. In addition, our model corresponds to line planning with route assignment, in which the OD-matrix, whose entries describe the proportions of passengers moving among the stations of each line, is given.
\begin{itemize}
\setlength{\itemindent}{-0.2cm}
\item $\alpha$: Proportion of passengers who cannot get on a train because of lack of capacity and wait for the next one (proportion of persisting passengers). This parameter is assumed to be independent of stations and lines. It can also model the probability that a passenger gives up on getting on a crowded train and waits for the next one.
\item $p^\ell_{ij}$: Proportion of passengers waiting at station $i$ to go to station $j$ using line $\ell\in L$. Note that this parameter can assume positive values only if $i<j$, i.e., if station $i$ precedes station $j$.
\end{itemize}
The following parameters will be used to assess a given planning solution. As explained in Section \ref{sec:1}, we propose aggregating the maintenance/operational planning costs together with a \emph{quantification} of the social cost, whose purpose is to measure the quality of the service provided by the system. Obviously, the correct estimation of these costs is fundamental for a meaningful model.
\begin{itemize}
\setlength{\itemindent}{-0.2cm}
\item $b_q^\ell$: Fixed cost for starting a whole trip at line $\ell \in L$ with capacity $q\in Q$. Usually, the larger the capacities, the higher the fixed costs on the trips. These costs model the consumption of resources to prepare a train for a new trip, the energy spent in the trip, the fixed costs of the staff needed to control the train, etc.
\item $b_{Sq}^\ell$: Fixed cost for starting a short-turn at line $\ell \in LS$ with capacity $q\in Q$. As in the previous cost, larger capacities and lengths of the short-turns usually involve higher costs on the trips. These fixed costs are usually smaller than the corresponding ones for whole trips.
\item $\gamma^{{\ell}}_{ij}$: Unitary profit for transporting a passenger from station $i$ to station $j$ of line $\ell\in L$. It represents the price of the ticket paid by a single passenger to use the service for a given origin-destination trip.
\item $\mu_1$: Unitary penalty for passengers who cannot get on the first arriving train due to its limited capacity and still insist on using the system. This parameter is an estimation of the loss supported by the service for unsatisfied passengers that still keep waiting to use the subway service.
\item $\mu_2$: Unitary penalty for passengers who give up using the network and leave the system after they cannot get on the first arriving train due to its limited capacity. This parameter is an estimation of the loss supported by the service for unsatisfied passengers that are lost by the system.
\end{itemize}
The above parameters are listed in a more compact form in Table \ref{parameters}.
\begin{table}[h]
\begin{center}
{\small\begin{tabular}{|cp{10cm}|}\hline
Parameter & Description\\\hline
$[0,T]$ & Planning horizon.\\
$L$ & Lines of the network (lines pool).\\
$LS$ & Lines enabling short-turns.\\
$LNS$ & Lines without short-turns.\\
$N_\ell=\{1, \ldots, n_\ell\}$ & Stations of line $\ell\in L$. \\
$S_\ell= \{1_{S_\ell}, \ldots, n_{S_\ell}\}$ & Stations in the short-turn of line $\ell\in LS$.\\
$\overline{\mathcal{S}}$ & Stations not affected by short-turns. \\
$d_i^\ell$& Distance (measured as time) between stations $i$ and $i+1$ of line $\ell\in L$. \\
${e}_i^{\ell}$& Stopping time that a train spends at station $i$ of line $\ell\in L$.\\
$t_{1\mapsto 1_{S_\ell}}$ & Distance (measured as time) between the head of line station and the first station of the short-turn of line $\ell\in LS$.\\
$IS^\ell$& Minimum safety time interval between consecutive trips of line $\ell\in L$.\\
$K_\ell= \{1, \ldots, {\bar{k}_\ell}\}$& Set of trips in line $\ell\in L$, some of them being \textit{fake trips}. \\
$Q = \{q_1, \ldots, q_{|Q|}\}$& Admissible capacities for trains operating in all the lines.\\
$p^\ell_{ij}$ & Proportion of passengers moving between stations $i$ and $j$ of line $\ell\in L$. \\
$\alpha$ & Proportion of persisting passengers. \\
$b_q^\ell$ & Fixed cost for starting a whole trip at line $\ell\in L$ with capacity $q\in Q$.\\
$b_{Sq}^\ell$ & Fixed cost for starting a short-turn at line $\ell \in LS$ with capacity $q\in Q$.\\
$\gamma^{{\ell}}_{ij}$ & Unitary profit for transporting a passenger from station $i$ to station $j$ of line $\ell\in L$.\\
$\mu_1$ & Unitary penalty for persisting passengers.\\
$\mu_2$ & Unitary penalty for passengers who give up using the system.\\
\hline
\end{tabular}}
\end{center}
\caption{Parameters of the model.\label{parameters}}
\end{table}
\subsection{Decision variables}
The following sets of decision variables are used in our Mathematical Programming model:
\begin{itemize}
\item {\bf Continuous Variables:}
\begin{itemize}
\setlength{\itemindent}{-0.2cm}
\item $t^{k{\ell}}_1$: Departure time from the initial station of line $\ell\in L $ at its $k$-th trip, $k\in K_\ell$.\\
Since the travel time between consecutive stations and the stopping time at each station are fixed, the departure time from the initial station of a line will be the reference time to calculate the departure time from the rest of stations of the line.
\end{itemize}
It is worth mentioning that we will force any fake trip to operate on the line at the same time as the previous true trip. Thus, the departure time from the initial station of the line of a fake trip must coincide with the departure time from the initial station of the previous true trip, and therefore the departure times from the rest of the stations must also coincide.
Note also that if $\ell\in LS$ some of its trips may be short-turns. When a short-turn is activated, the stations not affected by the short-turn are considered as nodes of a fake trip, with the departure times from these stations fixed to those of the previous true whole trip. In order to avoid infeasible solutions due to inconsistencies between departure times at consecutive stations (inside and outside the short-turn), the following continuous variables are needed.
\begin{itemize}
\setlength{\itemindent}{-0.2cm}
\item $w^{k{\ell}}$: Difference between the actual departure time from the first station of the short-turn of the $k$-th trip, $k\in K_\ell$, of line $\ell\in LS$ and the time when it should depart from this station taking into account its departure time from the initial station of the line. Note that if the $k$-th trip is a whole trip then $w^{k{\ell}}=0$.\\
Note also that in the definition of these variables we are implicitly assuming that the initial station of the line is not part of the short-turn. Should the initial station belong to the short-turn, we would need to take as reference time the departure time from any other station outside the short-turn.
\end{itemize}
For modeling purposes we need to distinguish between the flow of passengers that get on a train performing a whole trip and the flow of passengers that get on a train performing only a short-turn. Note that the first is defined for any line of the lines pool whilst the second is only defined for those lines containing a short-turn.
\begin{itemize}
\setlength{\itemindent}{-0.2cm}
\item $f^{k{\ell}}_i$: Flow of passengers captured at station $i \in N_{\ell}$ by the train that covers the $k$-th trip of line $\ell\in L$, when $k\in K_\ell$ is a whole trip.
\item $g^{k{\ell}}_i$: Flow of passengers captured at station $i\in S_\ell\backslash\{n_{S_\ell}\}$ by the train that covers the $k$-th trip of line $\ell\in LS$, when $k\in K_\ell$ is a short-turn.
\end{itemize}
Note that the $k$-th trip of a line $\ell\in LS$ is either a whole trip or a short-turn. Thus, if $g^{k\ell}_i>0$ then $f^{k\ell}_i=0$, and vice versa. Furthermore, if the $k$-th trip is a fake trip of line $\ell \in LS$ (resp. $\ell \in LNS$), then $g^{k\ell}_i=0$ for all $i\in S_\ell\backslash\{n_{S_\ell}\}$ and $f^{k\ell}_i=0$ for all $i\in N_\ell$ (resp. $f^{k\ell}_i=0$ for all $i\in N_\ell$).
\item {\bf Binary Variables:}
\begin{itemize}
\setlength{\itemindent}{-0.2cm}
\item The decisions about the capacities of the trains at each trip are modeled using the following variables:
$$
y_q^{k\ell} = \left\{\begin{array}{cl}1 & \mbox{if the $k$-th trip of line $\ell$ is {a whole trip}}\\ &\mbox{with capacity $q$ }\\
0 & \mbox{otherwise,}\end{array}\right.
\quad {k \in K_\ell},\, q\in Q,\, \ell\in L.$$
$$
y_{Sq}^{k\ell} = \left\{\begin{array}{cl}1 & \mbox{if the $k$-th trip of line $\ell$ traverses every } \\
& \mbox{station of the short-turn $S_\ell$ with capacity $q$}\\
0 & \mbox{otherwise,}\end{array}\right.
\quad {k \in K_\ell},\, q\in Q,\, \ell\in LS.$$
\end{itemize}
Note that if the $k$-th trip of line $\ell \in LS$ is a short-turn with capacity $q$ then $y_{Sq}^{k\ell}=1$ and $y_q^{k\ell}=0$. On the other hand, if the $k$-th trip of line $\ell \in LS$ is a whole trip with capacity $q$ ($y_q^{k\ell}=1$), the train traverses all the stations of the line, in particular the stations of the short-turn, and then $y_{Sq}^{k\ell}=1$. So we have the following relationship between the two sets of variables:
\begin{equation}\label{rely}
y_q^{k\ell}\leq y_{Sq}^{k\ell}\quad {k \in K_\ell},\, q\in Q,\, \ell\in LS.
\end{equation}
Observe that we can detect whether the $k$-th trip is a fake trip of line $\ell \in LS$ (resp. $\ell \in LNS$) by checking whether $\sum_{q \in Q} y_{Sq}^{k\ell} =0$, which implies $\sum_{q \in Q} y_{q}^{k\ell}=0$ (resp. whether $\sum_{q \in Q} y_{q}^{k\ell}=0$).
\item {\bf Auxiliary Variables}: A set of auxiliary variables, computed from the previous ones, is considered in order to ease the reading of our Mathematical Programming formulation.
\begin{itemize}
\setlength{\itemindent}{-0.2cm}
\item $t^{k\ell}_i$: Time instant in which a train departs from station $i$ at the $k$-th trip of line $\ell\in L$. This value can be obtained by adding the travel times between consecutive stations plus the stopping times at the traversed stations. Thus, this time is given by the following expressions:
\begin{align}
&t^{k\ell}_i =t_1^{k\ell}+\sum_{r=1}^{i-1}({d_r^\ell}+e_{r+1}^{\ell}), \quad i>1,\, (i,{\ell})\in \overline{\mathcal{S}},\, k \in K_\ell,\label{tiemposalida1}\\
& t^{k\ell}_i=t_1^{k\ell}+\sum_{r=1}^{i-1}({d_r^\ell}+e_{r+1}^{\ell})+w^{k{\ell}}, \quad i>1,\, i\in S_\ell,\, k \in K_\ell,\, \ell \in LS.\label{tiemposalida3}
\end{align}
Expressions \eqref{tiemposalida1} allow us to appropriately define the auxiliary variables $t_i^{k\ell}$ for stations outside a short-turn by adding, to the time at which a train starts its journey at the first station of the line, the time spent until it leaves station $i$. Equations \eqref{tiemposalida3} allow us to compute the departure time of a train from any station of the short-turn. As mentioned before, trips that only cover stations in the short-turn are considered as fake trips for the stations which are not part of the short-turn, and their corresponding departure times are taken to be the same as those of the previous trip. Variables $w^{k{\ell}}$ permit fitting the departure times from the stations in each short-turn trip; these variables take value $0$ if the train covers the complete line.
\item $D_i^\ell(t^{k\ell}_i)$: Number of passengers accumulated at station $i$ of line $\ell\in L$ from the beginning of the planning horizon up to time $t^{k\ell}_i$. This number is given by a function of the time $t^{k\ell}_i$ and takes into account the external arrivals and also the passengers arriving at the station to change to another line. As mentioned above, a complete description of this \textit{demand function} is given in Section \ref{sec:3}.
\item $h^{k\ell}_i$: Excess of passengers who were not able to get on the train at station $i$ of the $k$-th trip of line $\ell\in L$ because of a lack of capacity of the train. To compute this variable we distinguish between whole trips, in any of the lines of the pool, and short-turn trips, in those lines containing a short-turn. In the latter case we assume that the passengers catching a short-turn train with destination outside the short-turn get off the train at the last station of the short-turn to catch a whole-trip train of the same line. Taking into account these assumptions, the excess of passengers can be computed as follows:
\begin{align}
& {h^{1\ell}_i} = D_i^\ell(t_i^{1\ell}) - f_i^{1\ell},\quad \text{for } (i,{\ell})\in \overline{\mathcal{S}},\label{remanente0a}\\
& {h^{1\ell}_i} = D_i^\ell(t_i^{1\ell}) - f_i^{1\ell} - g_i^{1\ell}, \quad \text{for } i\in S_\ell\backslash\{n_{S_\ell}\},\, \ell \in LS,\label{remanente0c}\\
& h^{1\ell}_i = D_i^\ell(t_i^{1\ell}) - f_i^{1\ell} + \displaystyle\sum_{r=1_{S_\ell}}^{i-1} \displaystyle\sum_{j=i+1}^{n_\ell} p^\ell_{rj} g_r^{1\ell},\quad \text{for } i=n_{S_\ell},\, \ell \in LS,\label{remanente0d}\\
& h^{k\ell}_i =D^\ell_i(t^{k\ell}_i)-D^\ell_i(t^{(k-1)\ell}_i)+\alpha h^{(k-1)\ell}_i-f_{i}^{k\ell}, \quad \text{for } (i,{\ell})\in \overline{\mathcal{S}}, \, k=2,\ldots, \bar{k}_\ell, \label{remanentea}\\
& h^{k\ell}_{i} = D^\ell_i(t^{k\ell}_i)-D^{{\ell}}_i(t^{(k-1)\ell}_i)+\alpha h^{(k-1)\ell}_{i} - f_{i}^{k\ell}- g_i^{k\ell}, \label{remanenteS}\nonumber\\
& \hspace*{5cm} \text{for } i \in S_\ell \backslash\ \{n_{S_\ell}\}, \; k=2,\ldots, \bar{k}_\ell, \ell \in LS, \\
& h^{k\ell}_i = D^\ell_i(t^{k\ell}_i)-D^{{\ell}}_i(t^{(k-1)\ell}_i)+\alpha h^{(k-1)\ell}_{i} - f_{i}^{k\ell} + \displaystyle\sum_{r=1_{S_\ell}}^{i-1} \displaystyle\sum_{j=i+1}^{n_\ell} p^\ell_{rj} g_r^{k\ell},\label{remanented} \nonumber\\
& \hspace*{6cm} \text{for } i=n_{S_\ell}, k=2,\ldots, \bar{k}_\ell, \ell \in LS.
\end{align}
For the first trip, the excess of passengers is computed by subtracting from the demand the flow of passengers already caught by the train (equations \eqref{remanente0a}-\eqref{remanente0d}). For modeling the flow of caught passengers, we take into account in \eqref{remanente0c} that the trip could be either a whole trip ($f_i^{1\ell}$) or a short-turn ($g_i^{1\ell}$). In the special case of the last station of the short-turn, one needs to add to the excess of passengers those who get off a short-turn train to catch a whole-trip train of the same line (equation \eqref{remanente0d}).
For the rest of the trips (equations \eqref{remanentea}--\eqref{remanented}), we take into account the demand accumulated after the previous trip plus the excess of persisting passengers of the previous trip.
\end{itemize}
\item {\bf Semicontinuous Variables}:
We also consider a set of semicontinuous variables collecting the excess of passengers only for \textit{true trips}:
$$
x^{k\ell}_i=\left\{\begin{array}{ll}
h_{i}^{k\ell} & \text{ if $k$ is a true trip for station $i$ of line $\ell$}\\
0 &\text{otherwise,}
\end{array}\right. \quad {k \in K_\ell},\, i\in N_\ell,\, \ell\in L.
$$
This set of variables will allow us to account in the objective function for the actual excess of passengers, i.e., those associated to true trips.
\end{itemize}
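For illustration only, the bookkeeping behind the auxiliary variables can be sketched as follows, under simplifying assumptions (a single line without short-turns, a fixed timetable and fixed flows, hypothetical data): departure times via \eqref{tiemposalida1} and the excess of passengers via \eqref{remanente0a} and \eqref{remanentea}.

```python
# Sketch of the auxiliary-variable recursions under simplifying assumptions.
def departure_times(t1, d, e, n):
    # t^{kl}_i = t^{kl}_1 + sum_{r=1}^{i-1} (d_r + e_{r+1}), stations 1..n
    t = [t1]
    for r in range(n - 1):
        t.append(t[-1] + d[r] + e[r + 1])
    return t

def excess(D, times, served, alpha):
    # times[k][i]: departure time of trip k from station i (0-based indices);
    # served[k][i]: flow of boarded passengers f; D(i, t): cumulative demand.
    h = []
    for k, (tk, fk) in enumerate(zip(times, served)):
        hk = []
        for i, (ti, fi) in enumerate(zip(tk, fk)):
            new = D(i, ti) if k == 0 else D(i, ti) - D(i, times[k - 1][i])
            carry = 0.0 if k == 0 else alpha * h[k - 1][i]
            hk.append(new + carry - fi)   # new demand + persisting - boarded
        h.append(hk)
    return h

D = lambda i, t: 2.0 * t                  # uniform arrival rate everywhere
times = [departure_times(0, [4, 4], [0, 1, 1], 3),
         departure_times(10, [4, 4], [0, 1, 1], 3)]
h = excess(D, times, served=[[0, 5, 0], [10, 10, 0]], alpha=0.5)
# times[0] == [0, 5, 10]; h[1][1] == 12.5 (new demand 20 + carry 2.5 - 10)
```

In the actual formulation these quantities are variables linked by constraints rather than quantities computed a posteriori, but the sketch shows how each trip inherits a fraction $\alpha$ of the previous trip's excess.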
Table \ref{variables} summarizes the above-mentioned variables.
\begin{table}[h]
\begin{center}
{\small\begin{tabular}{|cp{13cm}|}\hline
Variable & Description\\\hline
$t^{k{\ell}}_1$ & Departure time from the initial station of line $\ell\in L$ at its $k$-th trip.\\
$f^{k{\ell}}_i$ & Flow of passengers captured at station $i$ by the train that covers the $k$-th trip of line $\ell\in L$, when $k$ is a whole trip.\\
$g^{k{\ell}}_i$ & Flow of passengers captured at station $i\in S_\ell \backslash\{n_{S_\ell}\}$ by the train that covers the $k$-th trip of line $\ell\in LS$, when $k$ only covers the short-turn stations.\\
$w^{k{\ell}}$ & Difference between the actual departure time from the first station of the short-turn of the $k$-th trip of line $\ell\in LS$ and the time when it should depart from this station taking into account its departure time from the initial station of the line.\\
$y_q^{k\ell}$ & $\left\{\begin{array}{cl}1 & \mbox{if the $k$-th trip of line $\ell\in L$ is a whole trip with capacity $q$}\\
0 & \mbox{otherwise}\end{array}\right.$\\
$y_{Sq}^{k\ell}$ & $\left\{\begin{array}{cl}1 & \mbox{if the $k$-th trip of line $\ell\in LS$ traverses the short-turn $S_\ell$ with capacity $q$}\\
0 & \mbox{otherwise}\end{array}\right.$\\
$t^{k\ell}_i$ & Departure time from station $i$ of line $\ell$ at its $k$-th trip.\\
$D_i^\ell(t^{k\ell}_i)$ & Number of passengers accumulated from instant $0$ up to instant $t^{k\ell}_i$ at station $i$ of line $\ell\in L$.\\
$h^{k\ell}_i$& Excess of passengers who were not able to get on the train at station $i$ at the $k$-th trip of line $\ell\in L$ because of a lack of capacity.\\
$x^{k\ell}_i$ & Excess of passengers only if $k$ is a \textit{true} trip for station $i$ of line $\ell\in L$.\\
\hline
\end{tabular}}
\end{center}
\caption{Variables of the model.\label{variables}}
\end{table}
Using the variables described above, we now give the formulation of our problem, distinguishing its two main elements: (1) an objective function aggregating the economical and social costs of any feasible solution; and (2) a set of technical constraints where the line concept and the timetable are specified in order to serve the estimated flow of passengers over the planning horizon.
\subsection{Objective Function}
In what follows we describe the different {\bf costs} and {\bf rewards} that are considered in the objective function of our model, for a given line $\ell{\in L}$, in terms of the variables and parameters described above.
\begin{itemize}
\setlength{\itemindent}{-0.2cm}
\item {\bf Capacity cost: }\\
It accounts for the costs $b_q^\ell$ and $b^\ell_{Sq}$ corresponding to the train capacity $q\in Q$ for every trip of line $\ell \in L$. This capacity is controlled by means of the variables $y^{k\ell}_q$ or $y^{k\ell}_{Sq}$ depending {on} whether the line contains short-turns or not.
\begin{equation}
\left\{
\begin{array}{cl}
\displaystyle\sum_{k \in K_\ell} \displaystyle\sum_{q\in Q} b^\ell_q y^{k\ell}_q &\mbox{ if } { \ell\in LNS},\\
{\displaystyle\sum_{k \in K_\ell} \displaystyle\sum_{q\in Q} b^\ell_q y^{k\ell}_q + }\displaystyle\sum_{k \in K_\ell} \displaystyle\sum_{q\in Q} b^\ell_{Sq} (y^{k\ell}_{Sq} - y^{k\ell}_{q}) & \mbox{ if } \ell\in LS.
\end{array}\right.
\label{obj:2}\tag{{${\rm Cap}(\ell)$}}
\end{equation}
{Observe that in the second expression, if a proper short-turn with capacity $q$ is activated, one has $y_{Sq}^{k\ell}=1$ and $y_{q}^{k\ell}=0$, and thus the amount $b_{Sq}^\ell$ is accounted for. In the case of a whole trip with capacity $q$, by \eqref{rely}, one has $y_{Sq}^{k\ell}=1$ and $y_{q}^{k\ell}=1$. Thus, the second addend is zero and only $b_{q}^\ell$ is accounted for, adequately modeling the capacity costs.}
\item {\bf Reward per served passengers:} \\
{The unitary reward per served passenger} computes the estimated revenue received when passengers use a line. It is obtained as the average revenue given by the different transport tickets used by the passengers. In order to compute the overall reward for served passengers, observe that if the $k$-th trip is a whole trip, the expression $ \sum_{r=0}^{i-1} p_{ri} f_r^{k\ell}$ returns the estimated number of passengers getting off at station $i$ (coming from any previous station). Note that, in case a short-turn is activated on a line $\ell\in LS$, one needs to add the rewards of passengers routed within the short-turn ($\sum_{i \in S_\ell} \sum_{r=1_{S_\ell}}^{i-1} p_{ri}^\ell g_r^{k\ell}$) together with the reward from those who use the short-turn to get off at its last station and continue with the next whole trip. Then, the overall reward for served passengers is:
\begin{equation}
\hspace*{-1cm}
\left\{\begin{array}{cl}
\displaystyle\sum_{i \in N_\ell} \displaystyle\sum_{k\in K_\ell} \displaystyle\sum_{r=0}^{i-1}\gamma^{\ell}_{ri} p_{ri}^\ell f_r^{k\ell} & \mbox{ if } { \ell\in LNS},\\
\displaystyle\sum_{k\in K_\ell}\left(\displaystyle\sum_{i \in N_\ell} \displaystyle\sum_{r=0}^{i-1}\gamma^{\ell}_{ri} p_{ri}^\ell f_r^{k\ell}+\displaystyle\sum_{i \in S_\ell} \displaystyle\sum_{r=1_{S_\ell}}^{i-1}\gamma^{\ell}_{ri} p_{ri}^\ell g_r^{k\ell} + \hspace*{-0.2cm}\displaystyle\sum_{r \in S_\ell:\atop
r\neq n_{S_\ell}} \displaystyle\sum_{j=n_{S_\ell+1}}^{n_\ell}\gamma^{\ell}_{rn_{S_\ell}} p_{rj}^\ell g_r^{k\ell}\right)& \mbox{ if } \ell\in LS.
\end{array}\right.
\label{obj:4a}\tag{{${\rm RewPPass}(\ell)$}}
\end{equation}
\item {\bf Costs of non-served passengers:} \\
It accounts for the social cost incurred when a passenger cannot get on the train arriving at the station due to its lack of capacity. The unitary cost should aggregate indicators of the service quality and subjective measures of the satisfaction degree perceived by the passengers. The overall cost is computed from the total number of passengers exceeding the capacity of the system at some instant of the planning horizon, scaled by the average penalties of persisting/giving-up passengers, as follows:
\begin{equation}\alpha \mu_1\sum_{i\in N_\ell}
\sum_{k\in K_\ell} x_i^{k\ell}+ (1-\alpha) \mu_2 \sum_{i\in N_\ell} \sum_{k\in K_\ell} x_i^{k\ell}, \label{obj:5}\tag{{${\rm NonServed}(\ell)$}}
\end{equation}
Observe that a high excess of passengers at the end of the planning horizon can easily be avoided by increasing the $\mu$-parameters for the last trip or by adding appropriate constraints involving the $x$-variables.
\end{itemize}
The overall cost of using line $\ell\in L$ during the planning horizon can be expressed as:
\begin{equation}
\eqref{obj:2} -\eqref{obj:4a} +\eqref{obj:5}\label{obj}\tag{{\rm COST}($\ell$)}
\end{equation}
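As a sanity check, the three terms \eqref{obj:2}, \eqref{obj:4a} and \eqref{obj:5} can be evaluated directly from a candidate solution. The following Python sketch does so for a line without short-turns ($\ell\in LNS$); the function name and the list-based indexing are our own illustration, not part of the model.

```python
def line_cost(b, y, gamma, p, f, x, alpha, mu1, mu2):
    """Evaluate COST(l) = Cap(l) - RewPPass(l) + NonServed(l) for a line
    without short-turns.  Conventions (ours): y[k][q] and x[k][i] are the
    capacity and surplus variables of trip k, f[k][r] the flow captured
    at station r, gamma[r][i] and p[r][i] the reward and O-D fraction
    for the pair (r, i)."""
    K, Q, N = range(len(y)), range(len(b)), range(len(f[0]))
    cap = sum(b[q] * y[k][q] for k in K for q in Q)            # Cap(l)
    rew = sum(gamma[r][i] * p[r][i] * f[k][r]                 # RewPPass(l)
              for k in K for i in N for r in range(i))
    pen = (alpha * mu1 + (1 - alpha) * mu2) * sum(            # NonServed(l)
        x[k][i] for k in K for i in N)
    return cap - rew + pen
```

For instance, a single true trip whose capacity cost exactly offsets the collected rewards still carries the penalty of its stranded passengers.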
\subsection{Modelling Constraints}
{In what follows we describe the constraints linking the variables and parameters in our model. They have been classified in four main blocks: capacity constraints, time control constraints, flow control constraints and passenger surplus constraints.}
\begin{itemize}
\item Capacities and true/fake trips:
\begin{subequations}
\label{CS1}
\renewcommand{\theequation}{{\rm C}1-\arabic{equation}}
\begin{align}
\sum_{q\in Q} y^{1\ell}_q= 1, &\qquad\ell\in LNS,\label{c:5}\\
\sum_{q\in Q} y^{k\ell}_q\leq 1, &\qquad k>1, \ell\in L,\label{c:6}\\
\sum_{q\in Q} y^{\overline{k}_\ell \ell}_q= 1, &\qquad\ell\in L, \label{lastround}\\
y_q^{k\ell} \leq y_{Sq}^{k\ell} ,& \qquad q\in Q, k \in K_\ell, \ell\in LS,\label{c:ys}\\
\sum_{q\in Q} y^{1\ell}_q+\sum_{q\in Q} y^{1\ell}_{Sq}\geq 1,&\qquad \ell\in LS,\label{c:5b}\\
\sum_{q\in Q} y^{\kappa_\ell \ell}_q=\sum_{q\in Q} y^{1\ell}_{Sq}- \sum_{q\in Q}y^{1\ell}_q,&\qquad\ell\in LS,\label{sub2}\\
\sum_{q\in Q} y^{k\ell}_{Sq} \leq 1,&\qquad k>1, \ell\in LS,\label{c:6b}
\end{align}
\end{subequations}
where $\kappa_\ell=\left[\frac{t_{1\mapsto 1_{S_\ell}}}{IS^\ell}\right]+1$, i.e., the number of short-turn trips of line $\ell\in LS$ which fit within the time taken by a train to go from the head of the line to the first station of the short-turn.
When short-turns are not allowed for a line, the appropriate definition of the capacity variables is ensured by constraints \eqref{c:5}--\eqref{lastround}. They enforce that exactly one of the allowed capacities is chosen for the first and the last trip, and at most one for the rest of them. Fake (resp. true) trips are identified as trips with capacity equal to (resp. greater than) zero. Thus, constraints \eqref{c:5} and \eqref{lastround} determine that the first and the last trip of each line are true trips. We will see later that this permits the actual trains to be scheduled from the beginning to the end of the planning horizon, providing users with a complete service during that time interval. Note that constraints \eqref{c:6} and \eqref{lastround} are also valid for lines allowing short-turns; hence, when short-turns are allowed for a line, the appropriate definition of the capacity variables is ensured by constraints \eqref{c:6}--\eqref{c:6b}. Constraints \eqref{c:ys} indicate that when a whole trip is a true trip, it is also a true trip for the short-turn stations. Constraints \eqref{c:5b} fix that the first trip (being either a whole trip or a short-turn) is a true trip. Constraints \eqref{sub2} force trip $\kappa_\ell$ to be a true whole trip (resp. a fake trip) if the first trip is a short-turn (resp. a whole trip).
These constraints, together with \eqref{c:5b}, are the counterpart of \eqref{c:5} for lines with short-turns, and they ensure that a real train is scheduled from the beginning of the planning horizon. Finally, constraints \eqref{c:6b} are the counterpart of \eqref{c:6} for short-turn trips.
\item Time control:
\begin{subequations}
\label{CS2}
\renewcommand{\theequation}{{\rm C}2-\arabic{equation}}
\begin{align}
t_1^{1\ell}=0,&\qquad \ell\in L,\label{firsttime}\\
t_1^{\overline{k}\ell}=T, &\qquad \ell\in L,\label{lasttime}\\
t_1^{\kappa_\ell \ell}\leq T\left(1-\sum_{q\in Q} y^{1\ell}_{Sq}+\sum_{q\in Q} y^{1\ell}_q \right),&\qquad \ell\in LS,\label{sub1}\\
IS \left(\displaystyle\sum_{q \in Q} y_q^{k\ell}\right) \leq t^{k\ell}_i- t^{(k-1)\ell}_i, & \nonumber\\
&\quad k=2,\ldots, \overline{k}_\ell,\, (i,\ell)\in \overline{\mathcal{S}} \mbox{ with $k\neq \kappa_\ell$ if $\ell\in LS$,}\label{c:8a}\\
t^{k\ell}_i- t^{(k-1)\ell}_i \leq T\left(\sum_{q\in Q} y^{k\ell}_q\right),& \quad k=2,\ldots, \overline{k}_\ell,\, (i,\ell)\in \overline{\mathcal{S}}, \label{c:8b}\\
IS \left(\displaystyle\sum_{q \in Q} y_{Sq}^{k\ell}\right) \leq t^{k\ell}_i- t^{(k-1)\ell}_i, &\quad i \in S_\ell, \, k=2,\ldots, \overline{k}_\ell, \, \ell\in LS,\label{c:8c}\\
t^{k\ell}_i- t^{(k-1)\ell}_i \leq (T+t_{1\mapsto 1_{S_l}})\left(\sum_{q\in Q} y^{k\ell}_{Sq}\right),&
\quad i \in S_\ell,k=2,\ldots, \overline{k}_\ell, \, \ell\in LS, \label{c:8d}\\
- t_{1\mapsto 1_{S_l}} \left(1-\sum_{q\in Q_\ell} y_{q}^{k\ell}\right) \leq w^{k\ell}, & \quad
k \in K_\ell,\, \ell \in LS,\label{tiemposalida:w1}\\
w^{k\ell}\leq (T+t_{1\mapsto 1_{S_l}}) \left(1-\sum_{q\in Q_\ell} y_{q}^{k\ell}\right), &\quad
k \in K_\ell,\, \ell \in LS, \label{tiemposalida:w2}
\end{align}
\end{subequations}
Constraints \eqref{firsttime} and \eqref{lasttime} state that the first and the last trip of each line must start at the first station of the complete line exactly at time instants $0$ and $T$, respectively. If the line does not contain a short-turn, these constraints together with \eqref{c:5} and \eqref{lastround} ensure that there are trains traveling the line during the whole planning horizon. If the line contains short-turns, recall that constraints \eqref{c:5b} permit the first trip to be a short-turn trip. In this case, constraints \eqref{sub1} together with constraints \eqref{sub2} force trip $\kappa_\ell$ to be a true whole trip starting at time $0$ at the head of the line.
Constraints \eqref{c:8a} ensure that the departure times of consecutive trains satisfy the safety time interval. Constraints \eqref{c:8b} force a fake trip to operate on the line at the same time as the previous true trip. For lines allowing short-turns, constraints \eqref{c:8a}--\eqref{c:8d} represent the same as above, taking into account that in this case a trip can be either a whole trip or a short-turn.
Observe that in \eqref{c:8a} the case $k=\kappa_\ell$ is excluded if $\ell\in LS$. The reason is that constraints \eqref{sub2} force trip $\kappa_\ell$ to be a true whole trip if and only if the first trip ($k=1$) is a short-turn and, in this case, constraints \eqref{sub1} ensure that trip $\kappa_\ell$ starts at the head-of-line station at time 0, and hence it is the first whole trip. Finally, constraints \eqref{tiemposalida:w1} and \eqref{tiemposalida:w2} fix upper and lower bounds on the values of variables $w^{k\ell}$ when the $k$-th trip is a short-turn, enforcing that this variable is $0$ if the trip is a whole trip.
\item Flow control:
\begin{subequations}
\label{CS3}
\renewcommand{\theequation}{{\rm C}3-\arabic{equation}}
\begin{align}
f_{i}^{k\ell} + \displaystyle\sum_{r=1} ^{i-1} f_r^{k\ell} \left(\displaystyle\sum_{j=i+1}^{n_\ell} p_{rj}^\ell\right) \leq \displaystyle\sum_{q \in Q} q y_q^{k\ell},& \quad k\in K_\ell, i\in N_\ell, \ell \in L ,\label{c:14d}\\
g_{i}^{k\ell} + \displaystyle\sum_{r=1_{S_\ell}} ^{i-1} g_r^{k\ell} \left(\displaystyle\sum_{j=i+1}^{n_{S_\ell}} p_{rj}^\ell\right) \leq \displaystyle\sum_{q \in Q} q (y_{Sq}^{k\ell}- y_{q}^{k\ell}),&\; k\in K_\ell, i \in S_\ell\backslash\{n_{S_\ell}\}, \ell \in LS,\label{c:14i}\\
f_{i}^{1\ell} {\leq} D_i^\ell(t_i^{1\ell}), &\quad i \in N_\ell, \ell \in L,\label{c:14b1}\\
f_{i}^{k\ell}\leq D^\ell_i(t^{k\ell}_i)-D^\ell_i(t^{(k-1)\ell}_i)+\alpha h^{(k-1)\ell}_i, &\quad k=2,\ldots, \overline{k}_\ell, i\in N_\ell, \ell \in L,\label{c:14a1}\\
g_{i}^{1\ell}\leq D^\ell_i(t^{1\ell}_i), &\;\;i \in S_\ell\backslash\{n_{S_\ell}\}, \ell \in LS,\label{c:14b2}\\
g_{i}^{k\ell}\leq \left(D^\ell_i(t^{k\ell}_i)-D^\ell_i(t^{(k-1)\ell}_i)\right)+\alpha h^{(k-1)\ell}_{i},& \;\; k=2,\ldots, \overline{k}_\ell, i \in S_\ell\backslash\{n_{S_\ell}\}, \ell \in LS.\label{c:14a2}
\end{align}
\end{subequations}
The flow of passengers catching a given train is determined by the capacity of the train and the mobility pattern of people. Hence, the effective capacity of a train arriving at a given station depends on the passengers that caught the train at previous stations of the same trip and whose destination is a subsequent station of the line. Such an effective capacity is enforced, depending on the case (short-turn allowed or not), by constraints \eqref{c:14d} and \eqref{c:14i}.
With constraints \eqref{c:14b1} (resp. \eqref{c:14a1}) we ensure that the flow of passengers captured at a given station by a train that covers the first trip (resp. the $k$-th trip for $k>1$) is at most the demand of passengers accumulated at station $i$ since the beginning of the planning horizon (resp. since the instant in which the previous train departed from that station plus the passengers that were not able to get on the previous train because of lack of capacity and wait for the next one). Constraints \eqref{c:14b2} and \eqref{c:14a2} are the {analogous} ones for short-turn {trips}.
\item Passenger surplus:
In order to compute only the surplus of passengers of a true trip, we use the {set of semi-continuous variables $x^{k\ell}_i$}:
$$
x^{k\ell}_{i} = \left\{\begin{array}{cl}
h^{k\ell}_{i} \times \sum_{q\in Q} y_{q}^{k\ell} & \mbox{ if $(i,\ell)\in \overline{\mathcal{S}}$ or ($i=n_{S_\ell}$, $\ell \in LS$)},\\
h^{k\ell}_{i} \times \sum_{q\in Q} y_{Sq}^{k\ell} & \mbox{ if $i\in S_\ell\backslash\{n_{S_\ell}\}$, $\ell \in LS$,}
\end{array}\right.
$$
which can be linearized as follows:
\begin{subequations}
\label{CS4}
\renewcommand{\theequation}{{\rm C}4-\arabic{equation}}
\begin{align}
& x^{k\ell}_{i}\geq h_{i}^{k\ell}-M_i^\ell\left(1-\sum_{q\in Q_\ell} y_{q}^{k\ell}\right), \quad k\in K_\ell,\, (i,\ell)\in \overline{\mathcal{S}} \mbox{ or ($i=n_{S_\ell}$, $\ell \in LS$)},\label{remanente0ax1}\\
& x^{k\ell}_{i}\geq h_{i}^{k\ell}-M_i^\ell\left(1-\sum_{q\in Q_\ell} y_{Sq}^{k\ell}\right), \quad k\in K_\ell,\, i\in S_\ell\backslash\{n_{S_\ell}\},\, \ell \in LS,\label{remanente0ax3}
\end{align}
\end{subequations}
where $M_i^\ell$ is a sufficiently large constant bounding the surplus of passengers at any station $i$ of line $\ell\in L$.
\end{itemize}
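The effect of the big-M pair \eqref{remanente0ax1}--\eqref{remanente0ax3} can be illustrated in a few lines: since the $x$-variables are minimised in the objective, they settle at the maximum of zero and the activated lower bound. A minimal Python sketch, with naming of our own:

```python
def linearised_surplus(h, y_sum, M):
    """Surplus x after linearisation: the constraints force
    x >= h - M*(1 - sum_q y), and minimisation pushes x down to that
    bound (or to 0 on fake trips).  y_sum is sum_q y_q^{kl} in {0, 1};
    M must bound the surplus h from above for the bound to be vacuous
    when y_sum = 0."""
    return max(0.0, h - M * (1 - y_sum))
```

On a true trip the surplus is counted in full, while on a fake trip the lower bound becomes negative and the variable drops to zero.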
\subsection{A compact MINLP formulation}
According to the decision variables{, the objective function} and the constraints described {above}, {the following Mathematical Programming formulation is valid for our {line} planning and timetabling model:}
\begin{align}
\qquad\min & \displaystyle\sum_{\ell\in L}\mbox{\ref{obj}} & \nonumber\\
\mbox{s.t. } & \eqref{CS1},\, \eqref{CS2},\, \eqref{CS3} \mbox{ and } \eqref{CS4},\nonumber\\
& 0 \le t^{k\ell}_1 \leq T,&\quad k\in K_\ell,\, \ell\in L, \nonumber\\
&f_i^{k\ell} \geq 0, &\quad k\in K_\ell,\, i\in N_\ell,\, \ell\in L, \label{MINLP}\tag{P}\\
&g_i^{k\ell}\geq 0, &\quad k\in K_\ell,\, i\in S_\ell\backslash\{n_{S_\ell}\},\, \ell\in LS, \nonumber\\
&w^{k\ell} \in \mathbb{R}, &\quad k\in K_\ell,\, \ell \in LS, \nonumber\\
&x^{k\ell}_{i} \geq 0, &\quad k\in K_\ell,\, i\in N_\ell,\, \ell \in L,\nonumber\\
&y^{k\ell}_q \in \{0,1\}, &\quad k\in K_\ell,\, q\in Q,\, \ell\in L, \nonumber\\
& y^{k\ell}_{Sq} \in \{0,1\}, &\quad k\in K_\ell,\, q\in Q,\, \ell \in LS.\nonumber
\end{align}
Observe that although the above formulation seems to be separable by lines in $L$, the lines are linked through the demand functions $D_i^\ell(t)$ (constraints \eqref{c:14b1}--\eqref{c:14a2}), which represent the accumulated flow of passengers waiting for a train at a given station $i$ of line $\ell\in L$ at time instant $t$. As we will describe in Section \ref{sec:3}, such a flow is affected not only by line $\ell$ but also by other lines through passengers changing lines at transfer stations. This function introduces new variables and nonlinear constraints into the above formulation.
{Several extensions may be easily accommodated within the above model as highlighted in the following remarks:}
\begin{remark} \label{remark1}
In our model the speed of trains is considered to be constant during the whole journey. However, one can easily modify expressions \eqref{t1s}, \eqref{tiemposalida1} and \eqref{tiemposalida3} using variables $v_{i}^{k\ell}>0$ to decide the speed of the train during a trip $k$ of line $\ell$ between stations $i$ and $i+1$. For instance, let $\rho_i$ be the physical distance between stations $i$ and $i+1$ and let $\omega_{i}^{k\ell}$ be the inverse of the speed, i.e., $\omega_{i}^{k\ell} = \frac{1}{v_i^{k\ell}}$; then one could replace the travel times $d_i^\ell$ by $\rho_i \times \omega_{i}^{k\ell}$ in expressions \eqref{t1s}, \eqref{tiemposalida1} and \eqref{tiemposalida3}. By adding to the objective function a cost assessing the resource consumption due to speed changes, one obtains a more general model preserving the structure of the one stated above. Similar modifications can also be considered by enabling the model to decide variable stopping times at any station.
\end{remark}
\begin{remark}\label{remark2}
The model allows us to join two consecutive planning horizons by passing data on numbers of passengers and arrival/departure times obtained from an optimal solution on the first planning period as input data for the second one, and so on. In particular, the passengers that may remain at station $i$ of line $\ell\in L$ at the end of the first planning horizon can be considered as passengers waiting at station $i$ to use line $\ell$ at the beginning of the second planning horizon. This information has to be incorporated in order to compute the demand function, as we will see in the next section.
\end{remark}
\section{The Demand function}
\label{sec:3}
One of the main goals of our model is to incorporate, in the design of the line planning and timetabling of an existing network, information about the flow of passengers moving through the network during the planning horizon. Clearly, the flow of passengers arriving at a given station is a random variable. Thus, we incorporate into the model an estimation of its average value.
In order to model the number of passengers entering the transportation system through a given station of a fixed line, we use the so-called \textit{demand function}, which returns, for a given instant $t$, the number of passengers accumulated at this station (from the beginning of the planning horizon) wanting to catch a train. Here, the estimation process should be carefully done in order to capture the essential behaviour of the demands served by the system.
{Different shapes for the function are possible within this framework to approximate the demand. The choice of such a shape is a crucial step in the modeling process, since one has to find an equilibrium between obtaining accurate estimations and providing manageable mathematical programming formulations. Once again, motivated by our pilot experience with Metrolab{\textregistered}, we use a piecewise linear approximation whose slope is fixed for any given station $i \in N_\ell$, but whose breakpoints and discontinuities may change according to the flow induced by external blocks of arrivals and by the rest of the lines. For a given line $\ell \in L$ and a station $i\in N_\ell$, we estimate the demand function as follows:
\begin{equation}\label{demand}\tag{${\rm D}$}
D_i^\ell(t) = \beta_{0i}^\ell + \beta_i^\ell t + J^E_{i\ell} (t) + \displaystyle\sum_{\ell'\neq \ell, \ell' \ni i} J^I_{i\ell\ell'} (t),
\end{equation}
for $t \in [0,\widehat{T}_\ell]$, with $\widehat{T}_\ell=T+\sum_{r=1}^{n_\ell-1} (d_r^\ell+e_{r+1}^\ell)$, {i.e., the maximum time in which the train can reach the last station of the line,} and where:
\begin{itemize}
\setlength{\itemindent}{-0.8cm}
\item[] $\beta_{0i}^\ell$ is the number of passengers awaiting a train of line $\ell\in L$ at station $i\in N_\ell$ at the beginning of the planning horizon.
\item[] $\beta_i^\ell$ is the average rate of passengers arriving at station $i\in N_\ell$ of line $\ell\in L$ per unit of time.
\item[] $J^E_{i\ell}(t)$ is the sum of the external blocks of arrivals of passengers up to instant $t$ at station $i\in N_\ell$ of line $\ell\in L$.
\item[] $J^I_{i\ell\ell'} (t)$ is the sum of the blocks of arrivals of passengers up to instant $t$ at station $i\in N_\ell$ of line $\ell\in L$ coming from line $\ell^\prime\in L$.
\end{itemize}
In what follows we will refer to $[0,\widehat{T}_\ell]$ as the \textit{extended planning horizon for line $\ell$}.
Thus, the demand function at a given time instant $t$ at station $i\in N_\ell$ of line $\ell\in L$ consists of three parts. The first is a linear part, in which, from an initial number of passengers $\beta_{0i}^\ell$, the number increases at rate $\beta_i^\ell$. This base estimation may then be modified either by external blocks of arrivals (second part), $J^E_{i\ell}(t)$, or by passengers coming from other interacting lines at interchange stations (third part), $J^I_{i\ell\ell'} (t)$.
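A direct (non-linearised) evaluation of \eqref{demand} can be sketched as follows; the function name and the list-of-pairs encoding of the breakpoint/jump data are our own illustration:

```python
def demand(t, beta0, beta, ext_jumps, int_jumps):
    """D_i^l(t) = beta0 + beta*t + J^E(t) + sum_{l'} J^I(t).
    ext_jumps and int_jumps are lists of (instant, jump) pairs; a jump
    is accumulated as soon as its instant is reached, matching the
    half-open intervals [s_r, s_{r+1}) used in the text."""
    JE = sum(psi for s, psi in ext_jumps if s <= t)   # external blocks
    JI = sum(phi for s, phi in int_jumps if s <= t)   # transfers from other lines
    return beta0 + beta * t + JE + JI
```

In the model itself this evaluation must be expressed with the binary $\delta$-variables of the next subsections, since the internal jump instants are themselves decisions.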
As can be seen in formulation \eqref{MINLP}, the demand function $D_i^\ell(t)$ is used exclusively to evaluate flows at the instants $t = t_{i}^{k\ell}$ for $i \in N_\ell$, $\ell \in L$ and $k \in K_\ell$. Each of the time instants at which the demand function needs to be evaluated induces sets of inequalities and variables such as those described in subsections \ref{ss:EA} and \ref{ss:IA} (for external and internal arrivals, respectively).
\subsection{External Arrivals: $J^E$\label{ss:EA}}
We consider that we are given, for each station $i \in N_\ell$, both a set of breakpoints representing the time instants at which blocks of arrivals occur and the amounts of passengers entering the system at those instants. That is, we assume that a set of sorted instants $se^{i\ell}_1 < \cdots < se_{re^{i\ell}}^{i\ell}$, as well as the discontinuity flow jumps associated to each of those instants, $\Psi^{i\ell}_{1}, \ldots, \Psi^{i\ell}_{re^{i\ell}}$, are known, i.e., a block arrival of $\Psi^{i\ell}_{r}$ is assumed at time instant $se^{i\ell}_r$, for $r=1, \ldots, re^{i\ell}$. External arrivals represent blocks of passengers arriving, for instance, due to the end of a football match in a place close to one of our stations. For the sake of readability and without loss of generality, we will assume that $se^{i\ell}_0=0,\, se_{re^{i\ell}+1}^{i\ell}=\widehat{T}_\ell$, and $\Psi^{i\ell}_{0}=\Psi^{i\ell}_{re^{i\ell}+1}=0$, i.e., the first and last time instants of external arrivals coincide with the beginning and the end of the extended planning horizon, and the discontinuity jumps at those instants are null. Given a time instant $t \in [0,\widehat{T}_\ell]$, we use the following set of binary variables to determine whether $t$ belongs to the interval $[se^{i\ell}_r, se^{i\ell}_{r+1})$ for $r=0, \ldots, re^{i\ell}$:
$$
\delta^E_{ri\ell}(t) = \left\{\begin{array}{cl} 1 & \mbox{if $t \in [se^{i\ell}_r, se^{i\ell}_{r+1})$,}\\
0 & \mbox{otherwise,}\end{array}\right. \quad i\in N_\ell,\, \ell\in L.
$$
Note that with these settings, the accumulated discontinuity flow jumps of external arrivals for a station $i\in N_\ell$ of line $\ell\in L$ can be modeled using the following constraints
\begin{equation}\label{DE}\tag{${\rm D}_{\rm E}$}
\begin{split}
J^E_{i\ell} (t) = \displaystyle\sum_{r=0}^{re^{i\ell}} \left(\displaystyle\sum_{r^\prime\leq r} \Psi^{i\ell}_{r^\prime}\right) \delta^E_{r i\ell}(t),\quad i\in N_\ell,\, \ell\in L,\\
se^{i\ell}_r \delta^E_{ri\ell}(t) \leq t < se^{i\ell}_{r+1} \delta^E_{ri\ell}(t)+\widehat{T}_\ell (1-\delta^E_{ri\ell}(t)),\quad r=0, \ldots, re^{i\ell},\, i\in N_\ell,\, \ell\in L,\\
\displaystyle\sum_{r=0}^{re^{i\ell}} \delta^E_{ri\ell}(t) =1,\quad i\in N_\ell,\, \ell\in L,\\
\delta^E_{ri\ell}(t) \in \{0,1\}\quad r=0, \ldots, re^{i\ell},\, i\in N_\ell,\, \ell\in L.
\end{split}
\end{equation}
The reader may observe that the first constraint appropriately defines $J^E$ by accumulating the flow jumps prior to a given instant $t$. The second set of constraints determines the interval in which $t$ lies, and the third constraint ensures that exactly one of these intervals is identified for each $t$.
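The mechanics of \eqref{DE} can be mirrored procedurally (in the model the $\delta^E$ are decision variables; here, since the external breakpoints are data, they can simply be computed; all names are ours):

```python
def interval_indicators(t, breakpoints, T_hat):
    """delta^E_r(t) = 1 iff t in [s_r, s_{r+1}), with s_0 = 0 and
    s_{re+1} = T_hat appended, as assumed in the text."""
    s = [0.0] + list(breakpoints) + [T_hat]
    return [1 if s[r] <= t < s[r + 1] else 0 for r in range(len(s) - 1)]

def JE_from_indicators(deltas, jumps):
    """First constraint of (D_E): J^E = sum_r (sum_{r'<=r} Psi_{r'}) delta_r,
    with the null jump Psi_0 = 0 prepended."""
    psi = [0.0] + list(jumps)
    cum, total = 0.0, 0.0
    for r, d in enumerate(deltas):
        cum += psi[r]          # accumulated jumps up to interval r
        total += cum * d       # only the active interval contributes
    return total
```

Exactly one indicator is set for any $t$ in the extended horizon, so the sum collapses to the accumulated jumps of the active interval.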
\subsection{Internal Arrivals\label{ss:IA}}
The main difference between internal and external blocks of arrivals is that in the latter the breakpoints and the discontinuity flow jumps are known, while for internal arrivals this information depends on the decision variables of the problem. Recall that internal blocks of arrivals occur when passengers get off a train at an interchange station to transfer to another line. Thus, the time instants of those arrivals are part of the decision problem. Therefore, we need to state a set of equations showing the existing relationships between the decision variables and the breakpoints and flow jumps they provoke.
In what follows we describe the modeling issues concerning the breakpoint {times} and flow jumps of internal arrivals.
\begin{itemize}
\item {\bf Breakpoint {times} at line $\ell\in L$.}
Note that the instants $si^{i\ell\ell^\prime}_r$ at which an internal flow jump occurs due to the arrival of trains coming from line $\ell^\prime\in L$ at an interchange station $i\in N_\ell \cap N_{\ell^\prime}$ can be computed in terms of the times $t_i^{r\ell^\prime}$ as:
$$
si^{i\ell\ell^\prime}_r = t_{i}^{r\ell^\prime} - e_{i}^{r\ell^\prime}, \quad r=1, \ldots, \bar{k}_{\ell^\prime}
$$
that is, the instant at which the $r$-th trip of line $\ell^\prime\in L$ departs to the next station ($t_{i}^{r\ell^\prime}$) minus the waiting time at station $i\in N_{\ell^\prime}$ ($e_{i}^{r\ell^\prime}$); in other words, the instants at which the trains of line $\ell'$ arrive at the interchange station.
\item {\bf Discontinuity flow jumps at line $\ell\in L$ coming from line $\ell^\prime\in L$.}
The volume of the internal blocks of arrivals at an interchange station can be derived from the passenger flows controlled by our decision variables. To compute it we also need a set of values quantifying the proportion of passengers who want to change from one metro line to another at each interchange station. Let $\tau_i^{\ell\ell^\prime}$ be the proportion of passengers that get off a train at an interchange station $i\in N_\ell \cap N_{\ell^\prime}$ of line $\ell^\prime\in L$ to change to line $\ell \in L$. Thus, the flow jump at instant $si^{i\ell\ell^\prime}_r$, for $r\in K_{\ell^\prime},\, \ell\in L, \, i \in N_\ell$, is given by
$$
\Phi_{ir}^{\ell\ell^\prime} =\left\{\begin{array}{ll}
\tau_{i}^{\ell\ell^\prime} \displaystyle\sum_{j<i} p_{ji}^{\ell^\prime}f_j^{r\ell^\prime}&\mbox{ if $(i,\ell^\prime)\in \overline{\mathcal{S}}$}\\
\tau_{i}^{\ell\ell^\prime} \left(\displaystyle\sum_{j<i} p_{ji}^{\ell^\prime}f_j^{r\ell^\prime}+\displaystyle\sum_{j<i; j\in S_{\ell^\prime}} p_{ji}^{\ell^\prime}g_j^{r\ell^\prime}\right)&\mbox{ if $i\in S_{\ell^\prime},\, \ell^\prime\in LS$}\end{array}\right.
$$
\end{itemize}
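The jump $\Phi_{ir}^{\ell\ell^\prime}$ above can be sketched numerically as follows (argument names are ours: `p_f`/`f` collect the terms $p_{ji}^{\ell^\prime} f_j^{r\ell^\prime}$ for $j<i$, and `p_g`/`g` the short-turn terms when they apply):

```python
def transfer_jump(tau, p_f, f, p_g=(), g=()):
    """Phi = tau * (sum_j p_f[j]*f[j] + sum_j p_g[j]*g[j]): the fraction
    tau of passengers alighting at the interchange station who change
    line.  The g-part stays empty for trips without an active short-turn
    covering the station."""
    whole = sum(pj * fj for pj, fj in zip(p_f, f))   # whole-trip alighters
    short = sum(pj * gj for pj, gj in zip(p_g, g))   # short-turn alighters
    return tau * (whole + short)
```

With $\tau=0.5$ and two upstream stations contributing $50$ and $10$ alighting passengers, the induced jump is $30$.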
Let $S^I_{i\ell\ell^\prime}=\left\{si^{i\ell\ell^\prime}_0, \ldots, si^{i\ell\ell^\prime}_{\bar{k}_{\ell^\prime}+1}\right\}$ be the set of sorted breakpoint times at a given interchange station $i\in N_\ell \cap N_{\ell^\prime}$ of line $\ell\in L$ caused by line $\ell^\prime\in L$. To make the formulation clearer, we will assume, w.l.o.g., that $si^{i\ell\ell^\prime}_0=0,\, si^{i\ell\ell^\prime}_{\bar{k}_{\ell^\prime}+1}=\widehat{T}_\ell$, and $\Phi_{i0}^{\ell\ell^\prime}=\Phi_{i,\bar{k}_{\ell^\prime}+1}^{\ell\ell^\prime}=0$, i.e., the first and last time instants of internal arrivals coincide with the beginning and the end of the extended planning horizon, and the flow jumps at those times are null.
Now we are ready to model the internal flow jumps induced by line $\ell^\prime\in L$ onto station $i\in N_\ell \cap N_{\ell^\prime}$ of line $\ell$. We proceed analogously to the external arrivals, but incorporating a set of binary variables ($\delta^I$) to identify the time interval between two consecutive breakpoints to which a given instant $t$ belongs:
\begin{equation}\label{DI}\tag{${\rm D}_{\rm I}$}
\begin{split}
J^I_{i\ell\ell'}(t) = \displaystyle\sum_{r=0}^{\bar{k}_{\ell^\prime}} \left(\displaystyle\sum_{r^\prime\leq r} \Phi_{ir^\prime}^{\ell\ell^\prime}\right) \delta^I_{r i\ell\ell^\prime}(t), \quad i\in N_\ell \cap N_{\ell^\prime},\, \ell\in L, \ell^\prime\in L,\\
si^{i\ell\ell^\prime}_r \delta^I_{ri\ell\ell^\prime}(t) \leq t < si^{i\ell\ell^\prime}_{r+1} \delta^I_{ri\ell\ell^\prime}(t)+\widehat{T}_\ell(1-\delta^I_{ri\ell\ell^\prime}(t)),\; r=0, \ldots, \bar{k}_{\ell^\prime},\, i\in N_\ell \cap N_{\ell^\prime},\, \ell\in L, \ell^\prime\in L,\\
\displaystyle\sum_{r=0}^{\bar{k}_{\ell^\prime}} \delta^I_{ri\ell\ell^\prime}(t) =1, \quad i\in N_\ell \cap N_{\ell^\prime},\, \ell\in L, \ell^\prime\in L,\\
\delta^I_{ri\ell\ell^\prime}(t) \in \{0,1\}\; r=0, \ldots, \bar{k}_{\ell^\prime},\, i\in N_\ell \cap N_{\ell^\prime},\, \ell\in L, \ell^\prime\in L.
\end{split}
\end{equation}
Note that the first two sets of the above inequalities are nonlinear, since $\mathbf{\Phi}$, $\mathbf{S^I}$ and $\mathbf{\delta^I}$ are decision variables of our problem.
{As mentioned above}, the demand function $D_i^\ell(t)$ is only applied to a certain (finite) set of time {instants}. Then, {it} can be incorporated into the model by {adding} the variables $D_{i}^{k\ell} \equiv D_i^\ell (t_{i}^{k\ell})$, $\delta^E$ and $\delta^I$, as well as their corresponding linear and nonlinear constraints in \eqref{DE} and \eqref{DI}. We also include in the model \eqref{MINLP} { the following} two sets of valid inequalities:
\begin{align}
& \delta_{r i \ell}^{E}(t_i^{k+1\ell})\leq \sum_{r'=0}^{r} \delta_{r^\prime i \ell}^{E} (t_i^{k\ell}),& k \in K_\ell,\, r=0, \ldots, re^{i\ell},\, i\in N_\ell,\, \ell\in L, \label{in:13}\\
& \delta_{r i \ell\ell^\prime}^{I}(t_i^{k+1\ell})\leq \sum_{r'=0}^{r} \delta_{r^\prime i \ell\ell^\prime}^{I} (t_i^{k\ell}),& k \in K_\ell, \; r=0, \ldots, \bar{k}_{\ell^\prime},\, i\in N_\ell \cap N_{\ell^\prime},\, \ell\in L, \ell^\prime\in L. \label{in:14}
\end{align}
Inequalities \eqref{in:13} and \eqref{in:14} ensure that trip $k+1$ goes through station $i$ after trip $k$. Although these inequalities are redundant given the rest of the constraints in the model, they considerably improve the performance of the solver over the branch-and-bound tree induced by the mathematical programming model.
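The logic of \eqref{in:13}--\eqref{in:14} can be checked directly on indicator vectors (a sketch with our own naming):

```python
def ordering_cut_holds(deltas_k, deltas_k1):
    """True iff delta_r(t^{k+1}) <= sum_{r'<=r} delta_{r'}(t^k) for all r:
    trip k+1 cannot pass station i in an interval earlier than the one
    in which trip k passed it."""
    return all(deltas_k1[r] <= sum(deltas_k[:r + 1])
               for r in range(len(deltas_k)))
```

A pair of assignments placing trip $k+1$ in a later interval than trip $k$ satisfies the cut, while reversing the order violates it already at $r=0$.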
\section{Case study: The Metrolab{\textregistered} pilot experience\label{ex0}}
We illustrate the MINLP model introduced in Section \ref{sec:2} together with the demand function described in Section \ref{sec:3} with a simplified version of the network of our pilot experience for Metrolab{\textregistered}.
Consider the simple network example depicted in Figure \ref{ex0:fig}. This network has a similar topology to the one provided by Metrolab{\textregistered} to calibrate our model in the pilot experience. It represents a small section of the Paris subway network.
\begin{figure}
\caption{Network of Case study \ref{ex0}.\label{ex0:fig}}
\end{figure}
The network consists of two \textit{bidirectional lines} (1-2-3-4-5 and 1'-2'-3-4'-5'), each of them with five stations and sharing one of them (station $3$), which acts as an interchange station. Both lines allow short-turns in the planning. The edges which are part of the short-turn of each line are marked with dashed lines ($2-3-4$ for one of the lines and $2'-3-4'$ for the other). The travel time between consecutive stations is written next to each edge of the network. The stopping times are fixed to 30 seconds for stations other than the head and final ones ($e_i^\ell$-parameters), and the minimum safety time between consecutive trips is 2 minutes ($IS^\ell$-parameter). Hence, using our notation, $|L| = 4$ (two lines, each of them in two directions), $|LS|=4$ and $|LNS|=0$. The lines are numbered from 1 to 4, where $1$ and $2$ correspond to the horizontal line, left--right and right--left, respectively, and lines $3$ and $4$ are identified with the vertical line, up--down and down--up, respectively. The set of available capacities for the trains is $800$ and $1600$ passengers.
For illustration purposes, we consider a planning horizon of $T=20$ minutes (from 7:30 to 7:50) and a maximum number of trips $K_\ell=7$ for the four lines. We assume that all the passengers who cannot board a train because it is full wait for the next one ($\alpha =1$) and that the unitary penalty for persisting passengers is fixed to $\mu_1=0.1875$ (except for the last trip, where we fix it to $10 \times \mu_1=1.875$ to avoid a large excess of passengers at the end of the planning horizon). We also assume that $40\%$ of the passengers who get off at station $3$ transfer to the other line ($\tau$-parameter).
The {O-D matrix} and rewards per passenger are detailed in Tables \ref{table:exflows} and \ref{table:rewards}, respectively.
\setlength{\tabcolsep}{0.12cm}
\begin{table}[h]
\begin{center}
{\begin{tabular}{r|ccccc|r|ccccc|}
\multicolumn{1}{r}{$p_{ij}^\ell$} & {\tiny 1} & {\tiny 2} & {\tiny 3} & {\tiny 4} & \multicolumn{1}{c}{\tiny 5} & \multicolumn{1}{r}{$p_{ij}^\ell$} & {\tiny 1'} & {\tiny 2'} & {\tiny 3} & {\tiny 4'} & \multicolumn{1}{c}{\tiny 5'}\\
\cline{2-6}\cline{8-12}{\tiny 1} & 0 & 0.40 & 0.35 & 0.20 & 0 & {\tiny 1'} & 0 & 0.40 & 0.35 & 0.20 & 0\\
{\tiny 2} & 0.40 & 0 & 0.60 & 0.35 & 0 & \hspace*{1cm} {\tiny 2'} & 0.40 & 0 & 0.60 & 0.35 & 0 \\
{\tiny 3} & 0.35 & 0.6 & 0 & 0.95 & 0 & {\tiny 3 } & 0.35 & 0.60 & 0 & 0.95 & 0 \\
{\tiny 4} & 0.20 & 0.35 & 0.95 & 0 & 1 & {\tiny 4'} & 0.20 & 0.35 & 0.95 & 0 & 1 \\
{\tiny 5} & 0.05 & 0.05 & 0.05 & 1 & 0 & {\tiny 5'} & 0.05 & 0.05 & 0.05 & 1 & 0\\
\cline{2-6}\cline{8-12}\end{tabular}}
\end{center}
\caption{{O-D matrix} of Case study \ref{ex0}: Lines 1-2 (left) and 3-4 (right). \label{table:exflows}}
\end{table}
\setlength{\tabcolsep}{0.15cm}
\begin{table}[h]
\begin{center}
{\begin{tabular}{r|rrrrr|r|rrrrr|}
\multicolumn{1}{r}{$\gamma_{ij}^\ell$} & \multicolumn{1}{l}{{\tiny 1}} & \multicolumn{1}{l}{{\tiny 2}} & \multicolumn{1}{l}{{\tiny 3}} & \multicolumn{1}{l}{{\tiny 4}} & \multicolumn{1}{l}{{\tiny 5}} & \multicolumn{1}{r}{$\gamma_{ij}^\ell$} & \multicolumn{1}{l}{{\tiny 1'}} & \multicolumn{1}{l}{{\tiny 2'}} & \multicolumn{1}{l}{{\tiny 3}} & \multicolumn{1}{l}{{\tiny 4'}} & \multicolumn{1}{l}{{\tiny 5'}} \\
\cline{2-6}\cline{8-12}{\tiny 1} & 0 & 0.3 & 0.4 & 0.6 & 1 & {\tiny 1'} & 0 & 0.2 & 0.5 & 0.7 & 1\\
{\tiny 2} & 0.3 & 0 & 0.1 & 0.3 & 1 &\hspace*{1cm} {\tiny 2'} & 0.2 & 0 & 0.3 & 0.5 & 1 \\
{\tiny 3} & 0.5 & 0.2 & 0 & 0.2 & 1 & {\tiny 3}\, & 0.4 & 0.2 & 0 & 0.2 & 0 \\
{\tiny 4} & 0.6 & 0.3 & 0.1 & 0 & 0 & {\tiny 4'} & 0.7 & 0.5 & 0.3 & 0 & 0 \\
{\tiny 5} & 0.9 & 0.6 & 0.4 & 0.3 & 0 & {\tiny 5'} & 0.9 & 0.7 & 0.5 & 0.2 & 0 \\
\cline{2-6}\cline{8-12}\end{tabular}}
\end{center}
\caption{Rewards of Case study \ref{ex0}: Lines 1-2 (left) and 3-4 (right). \label{table:rewards}}
\end{table}
For the demand function, we consider the arrival rates of passengers per minute and the initial number of passengers at each station of each line detailed in Table \ref{table:demand}.
\begin{table}[h]
\begin{center}
{\begin{tabular}{|c|ccccc|ccccc|}
\cline{2-11}\multicolumn{1}{r|}{} & \multicolumn{10}{c|}{Lines}\\
\cline{2-11}\multicolumn{1}{r|}{} & \multicolumn{5}{c|}{$\ell=1$\phantom{'} } & \multicolumn{5}{c|}{$\ell=2$} \\
\hline
Stations {($i$)} & 1\phantom{'} & 2\phantom{'} & 3 & 4\phantom{'} & 5\phantom{'} & 5\phantom{'} & 4\phantom{'} & 3 & 2\phantom{'} & 1\phantom{'} \\\hline
{$\beta_{0i}^\ell$} & 50 & 50 & 50 & 50 & 0\phantom{0} & 50 & 50 & 50 & 50 & 0\phantom{0} \\
{$\beta_i^\ell$} & 10 & 100 & 120 & 90\phantom{0} & 0 & 10 & 160 & 180 & 150 & 0 \\
\hline\multicolumn{1}{r|}{} & \multicolumn{5}{c|}{$\ell=3$} & \multicolumn{5}{c|}{$\ell=4$} \\
\hline
Stations {($i$)} & 1' & 2' & 3 & 4' & 5' & 5' & 4' & 3 & 2' & 1'\\\hline
{$\beta_{0i}^\ell$} & 50 & 50 & 50 & 50 & 50 & 50 & 50 & 50 & 50 & 50 \\
{$\beta_i^\ell$} & 10 & 150 & 170 & 160 & 0 & 10 & 100 & 180 & 150 & 0 \\
\hline
\end{tabular}}
\end{center}
\caption{Coefficients of the Demand functions of Case study \ref{ex0}.\label{table:demand}}
\end{table}
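For concreteness, the accumulated demand implied by Table \ref{table:demand} can be sketched as follows. This is a minimal illustration assuming, as we read the table, an initial stock $\beta_{0i}^\ell$ plus a constant arrival rate of $\beta_i^\ell$ passengers per minute, with jumps added by transferring passengers; the precise demand function used in the model is the one defined in Section \ref{sec:3}.

```python
def accumulated_demand(beta0, beta, t, transfer_jumps=()):
    """Accumulated demand at a station t minutes into the horizon:
    initial passengers beta0, plus arrivals at rate beta per minute,
    plus any transfer jumps (time, amount) that occurred up to t."""
    return beta0 + beta * t + sum(a for (s, a) in transfer_jumps if s <= t)

# Station 3 of line 1 (table values beta0 = 50, beta = 120), 5 minutes in:
accumulated_demand(50, 120, 5)  # 50 + 120*5 = 650 passengers
```

The transfer jumps are the quantities depicted in Figure \ref{fig:demands}: they accumulate continuously between departures but are only charged to the line when the next train departs.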
The timetable for the first line obtained after running our model in Gurobi 8.0 \cite{gurobi} is provided in Table \ref{exresults1}. The reported solution was obtained after 12 hours of running time with a MIP gap of $1.51\%$. There, we detail for each trip ($k$, marking with `S' those trips which are short-turns) its optimal capacity, the departure time of the train from each station (DepTime), and the flow estimates at each stage: the number of passengers that get off the train (Get-Off), the number of passengers that get on the train ($f_i^{k\ell}$ for whole trips or $g_{i}^{k\ell}$ for short-turns), the excess of passengers ($h_{i}^{k\ell}$), the passenger surplus of true trips ($x_i^{k\ell}$), and the actual load of the train.
\begin{table}[h]
{
\begin{center}
{\small\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\textbf{$k$: Capacity} &$i$ & \textbf{DepTime} & \textbf{Get-Off} & $f_{i}^{k\ell}$ ($g_{i}^{k\ell}$) & $h_{i}^{k\ell}$ & $x_{i}^{k\ell}$& \textbf{Load} \\\hline
\multirow{5}{*}{1: 800} & 1 & 07:30:00 & 0.00 & 50.00 & 0.00 & 0.00 & 50.00 \\
\cline{2-8} & 2 & 07:33:30 & 20.00 & 400.00 & 0.00 & 0.00 & 430.00 \\
\cline{2-8} & 3 & 07:35:00 & 257.50 & 627.50 & 101.50 & 101.50 & 800.00 \\
\cline{2-8} & 4 & 07:37:30 & 746.13 & 725.00 & 0.00 & 0.00 & 778.88 \\
\cline{2-8} & 5 & 07:40:30 & 778.88 & 0.00 & 0.00 & 0.00 & 0.00 \\
\cline{1-8} \multirow{3}{*}{2S: 1600} & 2 & 07:39:34 & 0.00 & 606.94 & 0.00 & 0.00 & 606.94 \\
\cline{2-8} & 3 & 07:41:04 & 364.17 & 1231.59 & 0.00 & 0.00 & 1474.36 \\
\cline{2-8} & 4 & 07:43:34 & 1474.36 & 0.00 & 638.18 & 0.00 & 91.93 \\
\cline{1-8} \multirow{5}{*}{3: {\bf 0}} & 1 & 07:30:00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\
\cline{2-8} & 2 & 07:39:34 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\
\cline{2-8} & 3 & 07:41:04 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\
\cline{2-8} & 4 & 07:43:34 & 0.00 & 0.00 & 638.18 & 0.00 & 0.00 \\
\cline{2-8} & 5 & 07:40:30 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\
\cline{1-8} \multirow{3}{*}{4S: 800} & 2 & 07:43:12 & 0.00 & 364.02 & 0.00 & 0.00 & 364.02 \\
\cline{2-8} & 3 & 07:44:42 & 218.41 & 603.05 & 0.00 & 0.00 & 748.66 \\
\cline{2-8} & 4 & 07:47:12 & 748.66 & 0.00 & 1014.15 & 0.00 & 48.35 \\
\cline{1-8} \multirow{5}{*}{5: {\bf 0}} & 1 & 07:30:00 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\
\cline{2-8} & 2 & 07:43:12 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\
\cline{2-8} & 3 & 07:44:42 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\
\cline{2-8} & 4 & 07:47:12 & 0.00 & 0.00 & 1014.15 & 0.00 & 0.00 \\
\cline{2-8} & 5 & 07:40:30 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\
\cline{1-8} \multirow{5}{*}{6: 1600} & 1 & 07:46:15 & 0.00 & 162.60 & 0.00 & 0.00 & 162.60 \\
\cline{2-8} & 2 & 07:49:45 & 65.04 & 655.05 & 0.00 & 0.00 & 752.61 \\
\cline{2-8} & 3 & 07:51:15 & 449.94 & 1297.33 & 0.00 & 0.00 & 1600.00 \\
\cline{2-8} & 4 & 07:53:45 & 1494.25 & 1494.25 & 109.44 & 109.44 & 1600.00 \\
\cline{2-8} & 5 & 07:56:45 & 1600.00 & 0.00 & 0.00 & 0.00 & 0.00 \\
\cline{1-8} \multirow{5}{*}{7: 800} & 1 & 07:50:00 & 0.00 & 37.40 & 0.00 & 0.00 & 37.40 \\
\cline{2-8} & 2 & 07:53:30 & 14.96 & 373.99 & 0.00 & 0.00 & 396.43 \\
\cline{2-8} & 3 & 07:55:00 & 237.48 & 641.05 & 0.00 & 0.00 & 800.00 \\
\cline{2-8} & 4 & 07:57:30 & 747.38 & 446.03 & 0.00 & 0.00 & 498.65 \\
\cline{2-8} & 5 & 08:00:30 & 498.65 & 0.00 & 0.00 & 0.00 & 0.00 \\
\hline
\end{tabular}}
\end{center}}
\caption{Results for {the first line} of the Case study \ref{ex0}. \label{exresults1}}
\end{table}
Note that trips $3$ and $5$ are fake trips (they have zero capacity), so the optimal number of trips for the planning is $5$. Two of them, the second and third true trips (trips $2$S and $4$S in Table \ref{exresults1}), are short-turns, while the remainder are whole trips with different capacities. We can observe that fake trips operate on the line at the same time as the previous true trip. For instance, trip $3$ departs from station $1$, which does not belong to the short-turn, at the same time as the previous true whole trip (trip $1$), and from station $2$, which belongs to the short-turn, at the same time as the previous true trip for this station (short-turn trip $2$). Note also that the $\mu_1$-parameter considered for the last trip of each line causes, in this case, the excess of passengers to be $0$ at the end of the planning horizon.
The departure times of each true trip of all the lines from each of the stations are detailed in Figure \ref{fig:ex}. The horizontal axis is the time horizon, while the vertical axis represents spatial positions (stations). We also report, over each itinerary, its optimal capacity. The shorter itineraries running only over a subset of stations are short-turns. One can observe that the time difference between consecutive trips is not constant, as expected from the asymmetry of the lines and the transfer of passengers between them at different time instants. Hence, as mentioned before, we deal with an aperiodic timetable.
\begin{figure}
\caption{Timetables of the different trips for each line of the Case study of Section \ref{ex0}.\label{fig:ex}}
\end{figure}
Furthermore, for the solution obtained, the estimated demands at the departure times of each trip at the interchange station are drawn in Figure \ref{fig:demands}. In that picture, the horizontal axis shows the departure times from the interchange station of the different trips, and the vertical axis depicts the accumulated demand at those time instants. Note that the jumps in the demand induced by passengers transferring to the line occur throughout the whole time interval between consecutive departures from the interchange station, but we only account for the accumulated amount when the next train departs.
\begin{figure}
\caption{Accumulated demands obtained for the Case study of Section \ref{ex0}.\label{fig:demands}}
\end{figure}
A summary of the costs and rewards obtained for the reported solution is given in Table \ref{ex:summary}. Since an aggregated function of the costs is minimized in our model, the negative overall cost obtained ($-11404.88$) can be seen as a positive global reward for the optimal planning.
\begin{table}[!]
\begin{center}
\begin{tabular}{|c||cccc|c|}\hline
Costs/Rewards & Line 1 & Line 2 & Line 3 & Line 4 & All lines \\\hline\hline
\eqref{obj:2} Complete Lines & 409.08 & 294.00 & 294.00& 294.00 & 1291.08\\
\eqref{obj:2} Short-Turns & 102.27 & 55.99 & 66.00 & 66.00 & 290.26\\
\eqref{obj:4a} {Complete Lines }& 1736.16 & 2514.53&3365.67&2814.05 & 10430.41\\
\eqref{obj:4a} { Short-Turns }& 541.70 &534.52 & 743.69& 780.74 & 2600.65\\
\eqref{obj:5} & 44.85 & 0 & 0& 0 & 44.85\\\hline
OVERALL COST & -1721.65& -2699.06 & -3749.37 & -3234.79 & {\bf -11404.88}\\\hline
\end{tabular}
\end{center}
\caption{Summary of solution obtained for the Case study of Section \ref{ex0}.\label{ex:summary}}
\end{table}
Problem \eqref{MINLP} depends on the demand that occurs on each line and on the correlation between lines induced by interchange stations. For that reason, in an exact model the demands per line must be considered as variables, and hence the problem that considers all the lines simultaneously becomes very hard, for several reasons. Among them: the problem is a Mixed Integer Non Linear Programming problem whose continuous relaxation is neither convex nor concave because of the bilinear constraints in \eqref{DE} and \eqref{DI} (which need to be linearized, resulting in weak continuous relaxations of the problem); in addition, the number of variables increases considerably with each line that is jointly considered. Therefore, in order to provide an efficient procedure able to handle the model for realistic instance sizes in reasonable time (recall that the pilot Case study \ref{ex0} required 12 hours of CPU time to obtain a nearly optimal solution), we propose a Math-Heuristic approach. This algorithm solves Problem \eqref{MINLP} sequentially for one line at a time, say $\ell\in L$ (fixing the demand parameters appropriately after each run), to approximate the actual solution of the global problem. We denote this problem by Problem \eqref{MINLP}$_\ell$. The Math-Heuristic approach is developed in the following section.
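As an aside on the linearization mentioned above: a standard way to relax a bilinear term $w = x\cdot y$, such as those appearing in \eqref{DE} and \eqref{DI}, is via McCormick envelopes. The sketch below is purely illustrative, assumes box bounds on both variables, and is not a verbatim piece of our implementation.

```python
def mccormick_envelope(xL, xU, yL, yU):
    """McCormick envelope of w = x*y over the box [xL,xU] x [yL,yU].
    Each row is (sense, coeff_x, coeff_y, constant), encoding
    w sense coeff_x*x + coeff_y*y + constant."""
    return [
        (">=", yL, xL, -xL * yL),  # w >= yL*x + xL*y - xL*yL
        (">=", yU, xU, -xU * yU),  # w >= yU*x + xU*y - xU*yU
        ("<=", yU, xL, -xL * yU),  # w <= yU*x + xL*y - xL*yU
        ("<=", yL, xU, -xU * yL),  # w <= yL*x + xU*y - xU*yL
    ]
```

When one of the two variables is binary, as happens for products of flows with activation variables, the envelope is exact; this is what makes linearizing the bilinear constraints attractive despite weakening the continuous relaxation.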
\section{A Math-Heuristic Approach}
\label{sec:4}
\begin{figure}
\caption{Flowchart of the Math-Heuristic approach.\label{fc}}
\end{figure}
In this section we propose an optimization-based heuristic approach to solve \eqref{MINLP}. Note that the MINLP formulation stated for the problem involves many continuous and discrete variables, as well as a large number of constraints. In particular, the number of $\delta$-variables that allow us to model the internal jumps of the demand function, described in the previous section, is $O(K^2 N^{Ic} L^2)$, where $K$ is the maximum number of trips and $N^{Ic}$ is the number of interchange stations. This bound implies a high computational cost even for small-size instances of the problem.
In our algorithm we apply a \textit{divide-and-conquer} strategy, splitting the model by lines; that is, we solve the model sequentially for a single line $\ell\in L$ at a time. The output of that solution, together with the solutions previously computed, is passed to the next line to be solved. Hence, the flow information provided by the lines in $L \backslash \{\ell\}$ (transfer passenger flows and train arrival times at the interchange stations) is considered as input when solving problem \eqref{MINLP}$_\ell$ for the fixed line $\ell \in L$. In this way, the demand function is updated and a new line is solved using the same approach. The procedure continues until a stopping rule evaluating the progress of the overall costs is satisfied, i.e., when the solution stabilizes, or until a maximum number of iterations is reached. This approach allows us to avoid considering all the $\delta$-variables together in a single model, which is one of the main drawbacks of the exact algorithm. A compact flowchart of the algorithm is drawn in Figure \ref{fc}.
In the initial iteration we consider the whole set of lines to solve, $\widehat{L}=L$, and the time instants at which the demand is affected by passengers coming from an interchange station, $\mathbf{S^I}$ (for the first line solved, it is assumed that there are no passengers coming from other lines). Then, for each line $\ell\in L$, we update the demand function taking into account the flows and breakdown times obtained when solving the previous lines, and we solve the problem for the single line $\ell$ with this input information. Observe that after the initial line is solved (with the connecting flows set to zero), the optimal flows give us information about the passengers that get on and off the trains of the line, which affects the rest of the lines.
Once all the lines have been solved, we repeat the procedure up to $\texttt{maxit}$ times to avoid getting trapped in locally optimal solutions. Let $z^{\ell(it)}$ be the objective function value of problem \eqref{MINLP}$_\ell$ at iteration $it$. If at an iteration $it <\texttt{maxit}$ this value stabilizes, that is, the relative deviation in the objective value with respect to the previous iteration for line $\ell\in L$, namely $\frac{|z^{\ell(it)}-z^{\ell(it-1)}|}{|z^{\ell(it)}|}$, is not significant (smaller than a tolerance $\varepsilon$), the corresponding solution for this line is kept fixed for the remaining iterations. Thus, the iterative procedure terminates either after the iteration at which the solution stabilizes with accuracy $\varepsilon$ (for all the lines) or when the maximum number of iterations is reached. Finally, to obtain a feasible solution (upper bound), we solve \eqref{MINLP} fixing the binary variables obtained during the heuristic procedure. The resulting problem is a continuous Linear Programming problem, which is solved efficiently.
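The iterative scheme just described can be sketched as follows. This is an illustrative pseudo-implementation, not our production code: \texttt{solve\_line} and \texttt{update\_demand} are hypothetical stand-ins for solving Problem \eqref{MINLP}$_\ell$ with Gurobi and for the demand update, respectively.

```python
def math_heuristic(lines, demand, solve_line, update_demand,
                   maxit=10, eps=1e-3):
    """Line-by-line math-heuristic: solve one line at a time, feeding the
    transfer flows and arrival times of solved lines into the demand of
    the next one, until every line stabilizes or maxit is reached."""
    z_prev = {l: None for l in lines}   # objective per line, last iteration
    fixed = set()                       # lines whose solution is kept fixed
    sols = {}
    for it in range(maxit):
        for l in lines:
            if l in fixed:
                continue
            demand = update_demand(demand, sols)  # input from other lines
            sols[l], z = solve_line(l, demand)
            # stopping rule: relative deviation below tolerance eps
            if z_prev[l] is not None and abs(z - z_prev[l]) / abs(z) < eps:
                fixed.add(l)
            z_prev[l] = z
        if len(fixed) == len(lines):
            break
    return sols
```

A final continuous LP with the resulting binary variables fixed, as described above, would then turn this heuristic output into a feasible upper bound.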
\section{Experiments \label{sec:5}}
\begin{figure}
\caption{Network Topologies for the Experiments.\label{fig:all}}
\end{figure}
We have run a series of computational experiments in order to test the model and the performance of our two approaches. We have considered the network topologies drawn in Figure \ref{fig:all}. The names of the networks are of the form $X$\texttt{L}$Y$\texttt{T}, where $X$ is the number of lines and $Y$ the number of interchange stations. We consider networks with $2, 4, 6$ and $8$ lines, each of them with $7$ stations, except \texttt{4L2T}, which contains 14 stations. For each of the $8$ networks, we tested our approach in two situations: no short-turns allowed (that is, in our notation, $|LS|=0$) and short-turns allowed in some of the lines (thickest lines/gray nodes correspond to the lines with allowed short-turns), in order to evaluate the effect of short-turning. The construction of the different networks is motivated by the representation of the most common situations in real-world subway networks (see \cite{lap11}). The description of the networks is provided in Table \ref{table:lines}.
\begin{table}[h]
{\small\begin{center}
\begin{tabular}{|c|p{11.5cm}|}\hline
Network & Description\\\hline
\texttt{2L0T} & One bidirectional line with no interchange stations.\\\hline
\texttt{4L1T} &Two bidirectional lines with a single interchange station.\\\hline
\texttt{6L2T} & Three bidirectional lines. One of the bidirectional lines with two interchange stations, each of them interchanging for a different bidirectional line (2-by-2).\\\hline
\texttt{4L2T} &{Two bidirectional lines.} One of the bidirectional lines with two interchange stations, each of them interchanging from the same second bidirectional line.\\\hline
\texttt{6L1T} &Three bidirectional lines with one interchange station for different lines (3-by-3).\\\hline
\texttt{6L3T} &Three bidirectional lines with three 2-by-2 interchange stations.\\\hline
\texttt{8L4T} &Four bidirectional lines with four 2-by-2 interchange stations.\\\hline
\texttt{8L3T} &Four bidirectional lines with three 2-by-2 interchange stations.\\\hline
\end{tabular}
\end{center}}
\caption{Description of the network topologies considered in our computational experiments.\label{table:lines}}
\end{table}
Our set of networks includes the one proposed by Metrolab{\textregistered} (\texttt{4L1T}) to analyze the viability of automating the management of the Paris subway network. Actually, the parameters of these networks were designed based on that case and are available at \url{http://bit.ly/InputData_SubwayPlanning}. All the models were coded in \texttt{Python 3.6} and solved using \texttt{Gurobi 8.0}~\cite{gurobi} on a Mac OS X machine with an Intel Core i7 processor at 3.3 GHz and 16GB of RAM.
\subsection{Results}
\begin{table}
\begin{center}
{\small\begin{tabular}{|c|c|cc|cc|c|}\cline{3-6}
\multicolumn{ 2}{c}{} & \multicolumn{ 2}{|c}{{\bf Math-Heuristic}} & \multicolumn{ 2}{|c|}{{\bf MINLP} \eqref{MINLP}} & \multicolumn{1}{c}{} \\
\hline
{\bf Network} & $|{\bf LS}|$ & {\bf BestObj} & {\bf CPU} & {\bf BestObj} & {\bf CPU} & {\bf GAP} (\%) \\
\hline\hline
\multirow{2}{*}{\texttt{2L0T}} & 0 & 145702 & $< 0.1$ & 145573 & 7 & 0.09 \\
& 2 & 114145 & 11 & 112916 & \texttt{TL}
& 1.08 \\\hline
\multirow{2}{*}{\texttt{4L1T}} & 0 & 206729 & 132 & 206242 & \texttt{TL}
& 0.24 \\
& 4 & 152890 & 631 & 152016 & \texttt{TL}
& 0.57 \\\hline
\multirow{2}{*}{\texttt{4L2T}} & 0 & 348267 & 3522 & 347224 & \texttt{TL}
& 0.30 \\
& 4 & 333102 & 4892 & 332665 & \texttt{TL}
& 0.13 \\\hline
\multirow{2}{*}{\texttt{6L1T}} & 0 & 276961 & 1130 & 276545 & \texttt{TL}
& 0.15 \\
& 2 & 235488 & 1521 & 234589 & \texttt{TL}
& 0.38 \\ \hline
\multirow{2}{*}{\texttt{6L2T}} & 0 & 249038 & 606 & 248854 & \texttt{TL}
& 0.07 \\
& 4 & 203080 & 2882 & 203080 &\texttt{TL}
& 0.00 \\\hline
\multirow{2}{*}{\texttt{6L3T}} & 0 & 248988 & 520 & 248979 & \texttt{TL}
& $< 0.01$ \\
& 2 & 217004 & 2688 & 216909 & \texttt{TL}
& 0.04 \\\hline
\multirow{2}{*}{\texttt{8L3T}} & 0 & 404627 & 432 & 404147 & \texttt{TL}
& 0.12 \\
& 4 & 362916 & 1467 & 362908 & \texttt{TL}
& $< 0.01$ \\\hline
\multirow{2}{*}{\texttt{8L4T}} & 0 & 404469 & 1191 & 404211 & \texttt{TL}
& 0.06 \\
& 4 & 374067 & 1144 & 374064 & \texttt{TL}
& $< 0.01$ \\\hline
\end{tabular}}
\caption{Computational Results \label{Table:results}}
\end{center}
\end{table}
The results of our computational experiments are shown in Table \ref{Table:results}, organized in five blocks of columns. The first block reports the network topology (using the naming convention explained above).
The second block, ``$|{\bf LS}|$'', gives the number of allowed short-turns in the considered network.
The third block gathers the results obtained with the Math-Heuristic approach described in Section \ref{sec:4}, whereas the fourth one reports the corresponding results obtained with the exact MINLP formulation described in Section \ref{sec:2}. In both cases, ``\textbf{BestObj}'' is the value of the objective function and ``\textbf{CPU}'' is the time, in seconds, required to meet the stopping criterion or to reach the time limit (a maximum CPU time of 10 hours was set for solving the MINLP formulation). Instances for which the time limit was reached are indicated with \texttt{TL} in that column; in those cases the best solution obtained is reported. Finally, the block
``\textbf{GAP (\%)}'' is the percentage gap between the objective value obtained by the Math-Heuristic approach and the objective value obtained by solving the MINLP formulation.
It is worth noting that, to improve the performance of the MINLP solver, we initialized it with the solution provided by the Math-Heuristic approach. This implies that the solution of the exact approach is always at least as good as that of the Math-Heuristic, and the GAP (\%) measures the improvement of the exact method over the proposed heuristic.
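For reference, the GAP column is consistent with the following computation (our reading of the reported figures; note that the relative gap appears to be taken with respect to the Math-Heuristic value):

```python
def pct_gap(heuristic_obj, exact_obj):
    """Percentage gap of the Math-Heuristic objective with respect to the
    exact MINLP objective (both minimized), relative to the heuristic value:
    100 * (heuristic - exact) / heuristic."""
    return 100.0 * (heuristic_obj - exact_obj) / heuristic_obj

# 2L0T without short-turns (145702 vs 145573):
round(pct_gap(145702, 145573), 2)  # 0.09, matching the table
```

Since the exact solver is warm-started with the heuristic solution, this quantity is always nonnegative.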
The results reported in Table \ref{Table:results} show the remarkable performance of our Math-Heuristic algorithm. In all cases it achieves good solutions, with gaps with respect to the exact method smaller than $1.08\%$, in rather competitive computing times (one order of magnitude smaller than those of the exact MINLP solver). Note that the time limit of 10 hours was reached in all instances except the simplest one. The results are particularly good for the most complex topologies with eight lines, where the improvement of the exact method is negligible: gaps are smaller than $0.12\%$ and the computing times of the Math-Heuristic are, on average, around $2\%$ of those required by the MINLP. Based on our experiments, we conclude that the Math-Heuristic algorithm is a good compromise for solving the line planning and timetabling problem considered in this paper. Beyond the relative deviations between the two approaches, Gurobi was unable to solve the problems to optimality within the time limit in all instances except the simplest one, while our Math-Heuristic obtains good-quality feasible solutions in reasonable CPU times. Finally, it is worth mentioning that the consideration of short-turns in the network considerably increases the computational effort needed to solve the problem, but at the same time reduces the overall cost in our objective function (an average reduction of $15\%$ was observed under the considered parameters). For instance, the exact approach required 7 seconds to solve the instance \texttt{2L0T} without short-turns to optimality, but was not able to certify optimality for the same instance with short-turns within the time limit of 10 hours. The comparison is less dramatic for the Math-Heuristic, for which we report an average relative deviation of $57\%$ in CPU time between the instances with and without short-turns.
\section{Conclusions and Future work}
\label{sec:concl}
This paper considers a general model for line planning and timetabling problems on subway networks, originated by a collaboration with Metrolab{\textregistered}, a French R\&D company analyzing the viability of automating the Paris metro. We propose a new Mathematical Programming-based decision-making tool in this context. A number of different elements have been taken into account in the construction of both the objective function and the constraint set of the proposed integrated line planning and timetabling model. The model aims to reflect as many real factors of this complex transportation environment as possible while remaining computationally tractable. The outcome is a Mixed Integer Non Linear Programming formulation accounting for several quality and operation measures of a feasible planning. We incorporate into the model an integrated cost- and passenger-oriented objective function, time-dependent demands and interchange stations, together with several strategic decisions on the planning: the starting times of each trip, the activation of short-turns, the selection of capacities for the trips, and the determination of the optimal number of trips in a journey. This representation of the problem may allow one to use the optimization model as a (\emph{what-if}) tool to assess potential technological or managerial innovations that may be introduced into the system. In order to assess such innovations, one can decouple the whole transportation system and consider just a (small) section of it where the main effects of the new element have the highest impact. Such a reduction process resembles the one followed in this paper to illustrate the case study. Apart from using a MINLP solver for small instances of the problem, we also develop a novel Math-Heuristic algorithm which allows us to solve realistic instances.
Extensive computational experiments on different network topologies, based on the one provided by Metrolab{\textregistered} for the Paris metro, show that the Math-Heuristic algorithm performs remarkably well compared to the exact resolution of the Mathematical Programming formulation, making it an adequate tool for solving this type of problem.
Many future enhancements of the contributions made here can be envisaged. Some of them refer to the integration of new elements into the model, such as variable train speeds or more general demand functions modelling the number of passengers waiting at the stations. Also, the stochastic nature of the parameters of the problem could be handled by incorporating uncertainty in the passenger flows, and then using tools from Stochastic Programming to derive a model and different solution approaches for the problem. On the other hand, an interesting research line is to obtain theoretical conditions under which the convergence of our iterative numerical approach can be ensured. It would be interesting to find conditions ensuring the convergence of the Math-Heuristic approach, even in simpler models, since they may give rise to new versions of the algorithm integrating different stopping rules or ways to decouple the system into parts.
\end{document}
\begin{document}
\title{Phase-dependent fluctuations of intermittent resonance fluorescence}
\author{H\'ector M. Castro-Beltr\'an}
\email{[email protected]}
\affiliation{Centro de Investigaci\'on en Ingenier\'{\i}a y Ciencias Aplicadas, \\
Instituto de Investigaci\'on en Ciencias B\'asicas y Aplicadas, \\
Universidad Aut\'onoma del Estado de Morelos,
Avenida Universidad 1001, 62209 Cuernavaca, Morelos, M\'exico}
\affiliation{Instituto de Ciencias F\'isicas,
Universidad Nacional Aut\'onoma de M\'exico, \\
Apartado Postal 48-3, 62251 Cuernavaca, Morelos, M\'exico}
\author{Ricardo Rom\'an-Ancheyta}
\email{[email protected]}
\affiliation{Instituto de Ciencias F\'isicas,
Universidad Nacional Aut\'onoma de M\'exico, \\
Apartado Postal 48-3, 62251 Cuernavaca, Morelos, M\'exico}
\author{Luis Guti\'errez}
\email{[email protected]}
\affiliation{Centro de Investigaci\'on en Ingenier\'{\i}a y Ciencias Aplicadas, \\
Instituto de Investigaci\'on en Ciencias B\'asicas y Aplicadas, \\
Universidad Aut\'onoma del Estado de Morelos,
Avenida Universidad 1001, 62209 Cuernavaca, Morelos, M\'exico}
\date{\today}
\begin{abstract}
Electron shelving gives rise to bright and dark periods in the resonance
fluorescence of a three-level atom. The spectral signature of such blinking
is a very narrow inelastic peak on top of the two-level atom spectrum. Here,
we investigate theoretically phase-dependent fluctuations (e.g., squeezing)
of intermittent resonance fluorescence in the frameworks of balanced and
conditional homodyne detection (BHD and CHD, respectively). In BHD, the
squeezing is reduced significantly in size and Rabi frequency range
compared to that for a two-level atom. The sharp peak is found only in the
spectrum of the squeezed quadrature, splitting the negative broader
squeezing peak for weak fields. CHD correlates the BHD signal with the
detection of emitted photons. It is thus sensitive to third-order fluctuations
of the field, produced by the atom-laser nonlinearity, that cause noticeable
deviations from the second-order BHD results. For weak driving, the
third-order spectrum is negative, enlarging the squeezing peak but also
reducing the sharp peak. For strong driving, the spectrum is dominated by
third-order fluctuations, with a large sharp peak and the sidebands becoming
dispersive. Finally, the addition of third-order fluctuations makes the
integrated spectra of both quadratures equal in magnitude in CHD, in
contrast to those by BHD. A simple mathematical approach allows us to
obtain very accurate analytical results in the shelving regime.
\end{abstract}
\pacs{42.50.Lc, 42.50.Ct, 42.50.Hz}
\maketitle
\section{\label{sec:intro}Introduction}
A photon emitter with peculiar fluctuations is a single three-level atom with
a laser-driven strong transition competing with a coherently or incoherently
driven weak transition. The occasional population of a long-lived state, an
effect called electron shelving, produces intermittence (blinking) in the
resonance fluorescence of the strong transition. Photon statistics of the
fluorescence have been thoroughly studied for three-level atomic systems
\cite{PlKn97}, in which case the process is ergodic, i.e., when the mean
bright and dark periods are finite. For a single quantum dot or molecule the
statistics are more complicated if the process is not ergodic \cite{StHB09}.
In the spectral domain, ergodic shelving manifests in the appearance of a
very narrow inelastic peak on top of the central peak of the two-level-like
spectrum. This has been well studied analytically and numerically
\cite{HePl95,GaKK95,EvKe02} and observed experimentally \cite{BuTa00}.
In the latter, heterodyne detection was used, which allows for very high
spectral resolution \cite{HBLW97}. In their paper \cite{BuTa00}, B\"uhner
and Tamm suggest performing complementary phase-dependent
measurements of the fluorescence; so far, however, no such reports
have appeared, perhaps due to experimental restrictions.
Squeezing, the reduction of fluctuations below those of a coherent state
in a quadrature at the expense of increasing fluctuations in the other
quadrature, is weak in resonance fluorescence \cite{WaZo91,CoWZ84}.
The low collection and imperfect quantum efficiency of photodetectors
have been the main barriers to the observation of squeezing, although
recent experimental progress tackles these issues. On the one hand,
there is the increased solid angle of emission captured with minimal
disturbance of the photon density of states surrounding the atom
\cite{SoLe15}. On the other hand, there is the development of conditional
detection schemes based on homodyne detection that cancel the finite
quantum efficiency issue
\cite{Vogel91,Vogel95,CCFO00,FOCC00,HSL+06,GSS+07,KAD+09,KuVo-X}.
We discuss two of them.
Homodyne correlation measurement (HCM), proposed by Vogel
\cite{Vogel91,Vogel95} (see also \cite{Carm85}), consists of intensity
correlations of the previously mixed source and weak local oscillator
fields, thus canceling the detector efficiency factors. The output contains
several terms, including the variance and an amplitude-intensity correlation.
Very recently, HCM was used to observe squeezing in the resonance
fluorescence of a single two-level quantum dot \cite{SHJ+15} in conditions
close to those for free-space atomic resonance fluorescence. In fact, in
the first demonstration of HCM the amplitude-intensity correlation of the
fluorescence of a single three-level ion in the $\Lambda$ configuration
was observed \cite{GRS+09} although not yet in the squeezing regime.
Conditional homodyne detection (CHD) was proposed and demonstrated
by Carmichael, Orozco and coworkers \cite{CCFO00,FOCC00}. This
consists of balanced homodyne detection (BHD) of a quadrature
conditioned on an intensity measurement of part of the emitted field;
it gives the amplitude-intensity correlation of HCM but measured directly,
without the other terms. As in the intensity correlations, the conditioning
cancels the dependence on detector efficiency. The intensity detection
channel has nontrivial effects on the quadrature signals. The
amplitude-intensity correlation is of third order in the field amplitude;
hence it allows for third-order fluctuations. Initially, CHD was devised for
weak light emitters, neglecting the third-order fluctuations. This allowed
the identification of the Fourier transform of the correlation as the spectrum
of squeezing \cite{CCFO00,FOCC00}. However, recent work on CHD of
two-level atom resonance fluorescence has shown important deviations
from the spectrum of squeezing due to increasing nonlinearity in the
atom-laser interaction \cite{hmcb10,CaGH15}. An additional display of
these non-Gaussian fluctuations is found in the asymmetry of the
correlation in cavity QED \cite{DeCC02} and in the resonance
fluorescence of a $V$-type three-level atom
\cite{MaCa08,CaGM14,XGJM15} and of two blockading Rydberg atoms
\cite{XuMo15}.
In this paper we investigate theoretically ensemble-averaged
phase-dependent fluctuations of the intermittent (ergodic) resonance
fluorescence of a single three-level atom (3LA). Besides numerical solutions
for the one- and two-time expectation values, we obtain approximate
analytical solutions which are very accurate in the limit when the decay rate
of the strong transition is much larger than those of the weak transitions.
Our solutions are simple and reflect clearly the time and spectral scales.
Thus we begin by writing the expression for the coherent and
phase-independent incoherent spectra of the 3LA, studied numerically at
length in Ref. \cite{EvKe02}.
We compare the spectra and variances of an ideal BHD approach, which
could also be obtained from HCM, with those of the CHD method. They
have in common that, on atom-laser resonance, the sharp extra peak
\cite{HePl95,GaKK95,EvKe02,BuTa00} appears \textit{only} in the
quadrature that exhibits squeezing; in the other
quadrature the spectrum is a simple broad positive Lorentzian. In the
weak-field limit, while both methods give similar negative spectra for a
two-level atom (2LA), the sharp peak is positive, reducing the squeezing in
BHD and enhancing the negative peak in CHD. For a strong laser field the
third-order fluctuations of CHD distort the positive Lorentzian sidebands of
the Mollow triplet and turn them dispersive for both 2LA and 3LA. However,
for the 3LA, both the sharp peak and the dispersive sidebands are much
larger than the second-order spectrum.
Interestingly, in CHD, the addition of third-order fluctuations makes the
integrated spectra of both quadratures equal in magnitude, in contrast to
the case of the spectrum by BHD \cite{CaGH15}. This feature of CHD may
be a bonus over other modern variations of the standard homodyne
detection scheme.
This paper is organized as follows: In Sec. II we introduce the atom-laser
model and obtain approximate analytic solutions in the shelving regime.
In Sec. III we calculate the phase-independent spectrum, and in Sec. IV
we calculate the phase-dependent spectra and variances. Sections V and
VI are devoted to the amplitude-intensity correlation by CHD and its
spectrum, respectively. Finally, conclusions are given in Sec. VII. Two
appendices summarize the analytic and numerical methods employed.
\section{\label{sec:model}Atom-Laser Model and Solutions}
\begin{figure}
\caption{\label{fig:chdsetup}
Schematic setup for conditional homodyne detection (CHD) of resonance
fluorescence: part of the emitted field is detected directly at
$\mathrm{D}_I$, conditioning balanced homodyne detection of a
quadrature of the remaining field.}
\end{figure}
We consider a single three-level atom where a laser of Rabi frequency
$\Omega$ drives a transition between the ground state $|g \rangle$ and an
excited state $|e \rangle$. The excited state has two spontaneous emission
channels: one directly to the ground state with rate $\gamma$ for the driven
transition, and one via a long-lived shelving state $|a \rangle$ with rate
$\gamma_d$, which in turn decays to the ground state with rate
$\gamma_a$ (see Fig. \ref{fig:chdsetup}). In the limit
\begin{eqnarray} \label{eq:gammas}
\gamma \gg \gamma_d \,, \gamma_a
\end{eqnarray}
the fluorescence of the driven transition features well-defined bright and
dark periods of average lengths,
\begin{eqnarray} \label{brightdarktimes}
T_B &=& \frac{2\Omega^2 +\gamma^2}{\gamma_d \Omega^2} \,,
\qquad T_D = \gamma_a^{-1} \,,
\end{eqnarray}
respectively, as calculated in Ref. \cite{EvKe02} using a random telegraph
model.
Throughout this paper we assume zero atom-laser detuning. This serves
two purposes: first, we limit the discussion to the essentials of the main
topics; second, with further assumptions discussed later, we obtain close
approximate analytical solutions. The master equation for the atomic
density operator, in the frame rotating at the laser frequency, can be
written as
\begin{eqnarray} \label{masterEq}
\dot{\rho}(t) &=&
-i \frac{\Omega}{2} [\sigma_{eg} +\sigma_{ge} ,\rho] \nonumber \\
&& +\frac{\gamma}{2} \left( 2\sigma_{ge} \rho \sigma_{eg}
-\sigma_{ee} \rho -\rho \sigma_{ee} \right) \nonumber \\
&& + \frac{\gamma_d}{2} \left( 2\sigma_{ae} \rho \sigma_{ea}
-\sigma_{ee} \rho -\rho \sigma_{ee} \right) \nonumber \\
&& + \frac{\gamma_a}{2} \left( 2\sigma_{ga} \rho \sigma_{ag}
-\sigma_{aa} \rho -\rho \sigma_{aa} \right) \,,
\end{eqnarray}
where $\sigma_{jk} = | j \rangle \langle k |$ are atomic transition operators,
with $\langle j | k \rangle =\delta_{jk}$.
We obtain two sets of equations. The first one is
\begin{subequations}
\begin{eqnarray} \label{eq:BlochEqs}
\dot{\rho} &=& \mathbf{M} {\rho} +\mathbf{b} \,,
\end{eqnarray}
where $\rho = ( \rho_{eg}, \rho_{ge}, \rho_{ee}, \rho_{gg} )^T$,
$\mathbf{b} = ( 0, 0, 0, \gamma_a )^T$, and
\begin{eqnarray} \label{eq:matrixM}
\mathbf{M} &=& \left( \begin{array}{cccc}
-\gamma_+/2 & 0 & i\Omega/2 & -i\Omega/2 \\
0 & -\gamma_+/2 & -i\Omega/2 & i\Omega/2 \\
i\Omega/2 & -i\Omega/2 & -\gamma_+ & 0 \\
-i\Omega/2 & i\Omega/2 & \gamma_- & -\gamma_a
\end{array} \right) \,,
\end{eqnarray}
\end{subequations}
where
\begin{eqnarray}
\gamma_+ = \gamma +\gamma_d \,, \qquad
\gamma_- = \gamma -\gamma_a \,.
\end{eqnarray}
Here, we have eliminated the population $\rho_{aa}$ due to
conservation of probability, $\rho_{gg} +\rho_{ee} +\rho_{aa} =1$.
The second set of equations involves the coherences linking states
$|e \rangle$ and $|g \rangle$ to state $|a \rangle$, i.e.,
$(\rho_{ga}, \rho_{ag},\rho_{ea},\rho_{ae})^T$. They evolve with damped
oscillations with zero mean. The two sets are decoupled, and only the
first one is relevant for the purposes of this work.
We obtain first the steady state of the density operator (labeled with the
abbreviation $st$). For a more compact notation we define
$\alpha_- =\rho_{eg}^{st} =\langle \sigma_- \rangle_{st}$,
$\alpha_+ =\alpha_-^{\ast}$, and
$\alpha_{jj} =\rho_{jj}^{st} =\langle \sigma_{jj} \rangle_{st}$.
We have
\begin{subequations} \label{eq:alphas}
\begin{eqnarray}
\alpha_{\mp} &=& \mp i \frac{ Y/\sqrt{2}}{ 1+Y^2 +(q/2)Y^2 } \,,
\label{eq:alphapm} \\
\alpha_{ee} &=& \frac{Y^2/2}{1+Y^2 +(q/2)Y^2 } \,, \label{eq:alphaee} \\
\alpha_{gg} &=& \frac{ 1+ Y^2/2 }{1+Y^2 +(q/2)Y^2} \,, \label{eq:alphagg} \\
\alpha_{aa} &=& q \alpha_{ee} \,, \label{eq:alpha_aa}
\end{eqnarray}
\end{subequations}
where
\begin{eqnarray}
q=\gamma_d/\gamma_a \,, \qquad
Y = \sqrt{2} \Omega/\gamma_+ \,.
\end{eqnarray}
For $\gamma_d=0$ ($q=0$) we recover the results of the 2LA.
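As a sanity check (an illustrative numerical sketch, not part of the analysis; the decay rates are those used later in the text, while the Rabi frequency is an arbitrary choice), the steady state of Eq. (\ref{eq:BlochEqs}) can be obtained by solving $\mathbf{M}\rho = -\mathbf{b}$ and compared against Eqs. (\ref{eq:alphas}):

```python
import numpy as np

# Illustrative parameters in the regime gamma >> gamma_d, gamma_a
gamma, gamma_d, gamma_a = 1.0, 0.05, 0.015
Omega = 0.3 * gamma                       # arbitrary drive strength
gp, gm = gamma + gamma_d, gamma - gamma_a  # gamma_+, gamma_-

# Bloch matrix of Eq. (matrixM), acting on (rho_eg, rho_ge, rho_ee, rho_gg)
M = 0.5 * np.array(
    [[-gp,        0,          1j*Omega,  -1j*Omega],
     [0,         -gp,        -1j*Omega,   1j*Omega],
     [1j*Omega,  -1j*Omega,  -2*gp,       0],
     [-1j*Omega,  1j*Omega,   2*gm,      -2*gamma_a]])
b = np.array([0, 0, 0, gamma_a])

rho_st = np.linalg.solve(M, -b)           # steady state: M rho + b = 0

# Closed-form steady state, Eqs. (alphas)
q, Y = gamma_d / gamma_a, np.sqrt(2) * Omega / gp
den = 1 + Y**2 + 0.5 * q * Y**2
alpha_m = -1j * (Y / np.sqrt(2)) / den    # rho_eg^st
alpha_ee = 0.5 * Y**2 / den
alpha_gg = (1 + 0.5 * Y**2) / den
```

The numerical steady state reproduces the closed forms to machine precision, and the populations satisfy $\alpha_{gg}+\alpha_{ee}+\alpha_{aa}=1$ with $\alpha_{aa}=q\,\alpha_{ee}$.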
Equation (\ref{eq:BlochEqs}) is still too complicated to solve analytically
in the general case. However, in the limit (\ref{eq:gammas}), very good
approximate solutions are obtained (see Appendix \ref{sec:approx} for
more details). We use a Laplace transform approach to obtain
approximate expectation values of the atomic vector,
$\mathbf{s} = (\sigma_-, \sigma_+, \sigma_{ee}, \sigma_{gg} )^T$,
and two-time correlations. With the atom initially in its ground state,
$\langle \mathbf{s}(0) \rangle= ( 0,0,0,1 )^T$, the expectation values
of the atomic operators are
\begin{subequations} \label{eq:BlochSols}
\begin{eqnarray}
\langle \sigma_{\mp} (t) \rangle
&=& \mp i \frac{ Y/ \sqrt{2} }{1+Y^2} f(t)
\mp i \frac{ \sqrt{2} \gamma_+ Y }{8 \delta}
\left( e^{\lambda_+ t} -e^{\lambda_- t} \right) \nonumber \\
&& +\alpha_{\mp} \left( 1- e^{\lambda_2 t} \right) \,, \\
\langle \sigma_{ee} (t) \rangle &=&
\frac{ Y^2/2 }{1+ Y^2} f(t) +\alpha_{ee} \left( 1- e^{\lambda_2 t} \right) \,, \\
\langle \sigma_{gg} (t) \rangle
&=& e^{\lambda_2 t} -\frac{ Y^2/2 }{1+ Y^2} f(t)
+\alpha_{gg} \left( 1- e^{\lambda_2 t} \right) \,,
\end{eqnarray}
\end{subequations}
where
\begin{eqnarray}
f(t) &=& e^{\lambda_2 t} -\frac{1}{2} \left[
\left( 1+\frac{3 \gamma_+}{4 \delta} \right) e^{\lambda_+ t}
\right. \nonumber \\
&& \left. + \left( 1-\frac{3 \gamma_+}{4 \delta} \right) e^{\lambda_- t}
\right] \,,
\end{eqnarray}
\begin{subequations} \label{eq:eigenvalues}
\begin{eqnarray}
\lambda_1 &=& -\gamma_+/2 \,, \\
\lambda_2 &=& -\gamma_a \left( 1+ q \frac{\Omega^2 }
{ 2 \Omega^2 +\gamma^2 } \right) \,, \label{eq:ev2} \\
\lambda_{\pm} &=& -\frac{3\gamma_+}{4} \pm \delta \,,
\end{eqnarray}
\end{subequations}
and
\begin{eqnarray}
\delta &=& (\gamma_+/4) \sqrt{1-8Y^2} \,.
\end{eqnarray}
This approach allows us to identify Eqs. (\ref{eq:eigenvalues}) as the
eigenvalues of the matrix (\ref{eq:matrixM}) of the master equation. This is
much more convenient than attempting to write the exact ones in compact
form. The eigenvalues contain the kernel of the atomic evolution, that is,
the scales of decay and coherent evolution, as well as the corresponding
widths and positions of the spectral components.
The first eigenvalue, $\lambda_1$, is exact and gives half the total decay rate
from the excited state. Although absent in Eqs. (\ref{eq:BlochSols}), it occurs in
the second-order correlations (see below). Then, $\lambda_2$ represents the
slow decay rate due to shelving. This causes the steady state to be reached
after a long time, $t \sim \gamma_d^{-1}$. Borrowing from the random
telegraph model \cite{EvKe02}, the slow decay rate is given by
$\lambda_2 = -(T_D^{-1} +T_B^{-1})$. The two remaining eigenvalues
represent the damped coherent evolution; they are real if $8Y^2 \le 1$ and
complex if $8Y^2 >1$. Eigenvalues $\lambda_1, \lambda_{\pm}$ contain the
two-level-like evolution towards a quasi-steady state (with the decay rate
$\gamma$ of the two-level case replaced by $\gamma_+$ for the 3LA) that
is followed by the slow decay.
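A quick numerical check (illustrative sketch; the weak drive $\Omega=0.1\gamma$ is a hypothetical choice ensuring $8Y^2<1$) compares Eqs. (\ref{eq:eigenvalues}) with the exact eigenvalues of the matrix (\ref{eq:matrixM}):

```python
import numpy as np

gamma, gamma_d, gamma_a = 1.0, 0.05, 0.015   # decay rates used in the text
Omega = 0.1 * gamma                           # hypothetical weak drive, 8Y^2 < 1
gp, gm = gamma + gamma_d, gamma - gamma_a
q, Y = gamma_d / gamma_a, np.sqrt(2) * Omega / gp

# Bloch matrix of Eq. (matrixM)
M = 0.5 * np.array(
    [[-gp,        0,          1j*Omega,  -1j*Omega],
     [0,         -gp,        -1j*Omega,   1j*Omega],
     [1j*Omega,  -1j*Omega,  -2*gp,       0],
     [-1j*Omega,  1j*Omega,   2*gm,      -2*gamma_a]])
exact = np.sort(np.linalg.eigvals(M).real)    # all real when 8Y^2 < 1

# Approximate eigenvalues, Eqs. (eigenvalues)
delta = 0.25 * gp * np.sqrt(1 - 8 * Y**2)
lam1 = -gp / 2                                # exact by construction
lam2 = -gamma_a * (1 + q * Omega**2 / (2 * Omega**2 + gamma**2))
approx = np.sort([lam1, lam2, -0.75 * gp + delta, -0.75 * gp - delta])
```

For these parameters the approximate eigenvalues agree with the exact ones to within a fraction of a percent, and $\lambda_1=-\gamma_+/2$ belongs to the exact spectrum of $\mathbf{M}$.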
The two-time correlations
$\langle \sigma_+(0) \mathbf{s} (\tau) \sigma_-(0) \rangle_{st}$, which
have initial conditions $(0,0,0,\alpha_{ee} )^T$, are approached like those
for $\langle \mathbf{s} (t) \rangle$. Using the quantum regression formula
(see, e.g., \cite{Carm99}) and $\mathbf{s}(0)= (0,0,0,1)^T$, we have
\begin{eqnarray} \label{eq:3rdordercorr}
\langle \sigma_+(0) \mathbf{s} (\tau) \sigma_-(0) \rangle_{st}
= \alpha_{ee} \langle \mathbf{s}(\tau) \rangle_{ \mathbf{s}(0) } \,,
\end{eqnarray}
that is, these correlations are identical to Eqs. (\ref{eq:BlochSols}) times the
factor $\alpha_{ee}$, with $t$ replaced by $\tau$.
The approximate analytic solutions to the correlations
$\langle \sigma_+(0) \mathbf{s} (\tau) \rangle_{st}$, which have initial
conditions $(\alpha_{ee},0,0, \alpha_+)^T$, can be similarly obtained (see
Appendix \ref{sec:approx}). We use them, however, to obtain the solutions
for correlations of fluctuations,
$\langle \Delta \sigma_+(0) \Delta \mathbf{s} (\tau) \rangle_{st}$, where
\begin{eqnarray} \label{eq:dipoleSplit}
\Delta \sigma_{jk} (t) = \sigma_{jk} (t) -\langle \sigma_{jk} \rangle_{st} \,,
\qquad \langle \Delta \sigma_{jk}(t)\rangle =0 \,.
\end{eqnarray}
Hence
\begin{eqnarray} \label{eq:corr2split}
\langle \Delta \sigma_+(0) \Delta \sigma_{\mp}(\tau) \rangle_{st}
&=& \langle \sigma_+(0) \sigma_{\mp}(\tau) \rangle_{st}
-\langle \sigma_+ \rangle_{st} \langle \sigma_{\mp} \rangle_{st} \,,
\nonumber \\
\end{eqnarray}
yielding
\begin{eqnarray} \label{eq:p_tau}
\langle \Delta \sigma_+(0) \Delta \sigma_{\mp}(\tau) \rangle_{st}
&=& C_1 e^{\lambda_1 \tau} \pm C_2 e^{\lambda_2 \tau}
\nonumber \\
&& \mp C_+ e^{\lambda_+ \tau} \mp C_- e^{\lambda_- \tau} \,,
\end{eqnarray}
where
\begin{subequations} \label{eq:p-coef}
\begin{eqnarray}
C_1 &=& \frac{Y^2/4}{ 1+Y^2 +(q/2)Y^2 } \,, \\
C_2 &=& \frac{q Y^4/4}{ (1+Y^2) \left( 1+Y^2 +(q/2)Y^2 \right)^2} \,,
\label{eq:C2} \\
C_{\mp} &=& \frac{ Y^2[1-Y^2 \pm (1-5Y^2)(\gamma_+/4\delta)] }
{ 8(1+Y^2) \left( 1+Y^2 +(q/2)Y^2 \right)} \,.
\end{eqnarray}
\end{subequations}
\section{\label{sec:powerspec}Stationary Power Spectrum}
The stationary (Wiener-Khintchine) power spectrum is given by the Fourier
transform of the dipole field auto-correlation function,
\begin{eqnarray}
S(\omega) &=& \frac{1}{\pi \alpha_{ee}} \mathrm{Re}
\int_0^{\infty} d\tau e^{-i \omega \tau}
\langle \sigma_+ (0) \sigma_- (\tau) \rangle_{st} \,.
\end{eqnarray}
The factor $(\pi \alpha_{ee})^{-1}$ normalizes the integral of $S(\omega)$
over all frequencies to unity. Equation (\ref{eq:corr2split}) separates the
spectrum in two parts:
\begin{eqnarray}
S(\omega) &=& S_{coh}(\omega) +S_{inc}(\omega) \,,
\end{eqnarray}
where
\begin{eqnarray} \label{eq:ScohDef}
S_{coh}(\omega) &=& \frac{|\alpha_{+}|^2}{\pi \alpha_{ee}} \mathrm{Re}
\int_0^{\infty} e^{-i \omega \tau} d\tau
= \frac{|\alpha_{+}|^2}{\alpha_{ee}} \delta(\omega)
\end{eqnarray}
and
\begin{eqnarray} \label{eq:SincDef}
S_{inc}(\omega) &=&
\frac{1}{\pi \alpha_{ee}} \mathrm{Re} \int_0^{\infty} d\tau e^{-i \omega \tau}
\langle \Delta \sigma_+(0) \Delta \sigma_-(\tau) \rangle_{st}
\nonumber \\
\end{eqnarray}
are, respectively, the coherent spectrum, due to elastic scattering, and
the incoherent (inelastic) spectrum, due to atomic fluctuations.
\begin{figure}
\caption{\label{fig:Sinc}
Incoherent spectrum of the strong transition, Eq. (\ref{eq:3LAincSpec}),
compared with the exact and 2LA spectra.}
\end{figure}
The main features of the spectrum of the atom-laser system of the previous
section were studied in \cite{EvKe02}. The incoherent spectrum consists
of a two-level-like structure that becomes a triplet for strong excitation
\cite{Mollow69}, plus a sharp peak, associated with the eigenvalue
$\lambda_2$, due to the shelving of the electronic population in the
long-lived state. This three-level system contains the essential physics of
the more complex atomic system used for the experimental observation
of the sharp peak \cite{BuTa00} by heterodyne detection, able to resolve
hertz or sub-hertz features \cite{HBLW97}. The sharp peak had been
predicted for the $V$-type and $\Lambda$-type 3LAs \cite{HePl95,GaKK95},
which also feature electron shelving.
Our Laplace transform approach allowed us to obtain a very good analytic
approximation to the full spectrum, split into its various components, with
their widths and amplitudes readily spotted. Substituting Eq. (\ref{eq:p_tau})
into Eq. (\ref{eq:SincDef}) the incoherent spectrum is
\begin{eqnarray} \label{eq:3LAincSpec}
S_{inc}(\omega) &=& \frac{1}{\pi \alpha_{ee}} \left[
C_+ \frac{ \lambda_+}{\omega^2 +\lambda_+^2}
+C_- \frac{ \lambda_-}{\omega^2 +\lambda_-^2} \right. \nonumber \\
&& \left. -C_1 \frac{ \lambda_1}{\omega^2 +\lambda_1^2}
-C_2 \frac{ \lambda_2}{\omega^2 +\lambda_2^2} \right] \,.
\end{eqnarray}
In Fig. \ref{fig:Sinc} we plot this spectrum with eigenvalues
(\ref{eq:eigenvalues}) alongside the exact and 2LA spectra. It reproduces
remarkably well the exact spectrum, with the sharp peak being slightly
smaller (bigger) in the saturating (strong) case than the exact one. Also,
making $\gamma_d=0$, the formula is exact for the 2LA spectrum
\cite{Mollow69}. The intensity (integral over all frequencies) of the sharp
peak is
\begin{eqnarray*}
I_{ep} &=& \frac{q Y^2/2}{ (1+Y^2) \left( 1+Y^2 +(q/2)Y^2 \right)} \,.
\end{eqnarray*}
It is small for both weak and strong driving (proportional to $Y^2$ and
$Y^{-2}$, respectively), and largest for $\Omega \approx 3\gamma/4$.
The coherent spectrum of the 3LA is
\begin{eqnarray}
S_{coh}(\omega)
&=& \frac{1}{1+Y^2 +(q/2)Y^2} \delta(\omega) \,,
\end{eqnarray}
smaller than that of the 2LA (where $q=0$) \cite{EvKe02}. The difference
in intensity is precisely given by $I_{ep}$.
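Both statements can be checked directly; the sketch below (illustrative, with $q=\gamma_d/\gamma_a$ as in the text) verifies that the area of the $C_2$ Lorentzian in Eq. (\ref{eq:3LAincSpec}) equals $I_{ep}$ and that $I_{ep}$ is exactly the coherent-intensity deficit of the 3LA relative to the 2LA:

```python
import numpy as np

def I_ep(Y, q):
    """Closed-form intensity of the sharp peak."""
    return 0.5 * q * Y**2 / ((1 + Y**2) * (1 + Y**2 + 0.5 * q * Y**2))

def sharp_peak_area(Y, q):
    """Area of the C2 Lorentzian in S_inc: (C2/(pi*alpha_ee)) * pi = C2/alpha_ee."""
    den = 1 + Y**2 + 0.5 * q * Y**2
    alpha_ee = 0.5 * Y**2 / den
    C2 = 0.25 * q * Y**4 / ((1 + Y**2) * den**2)   # Eq. (C2)
    return C2 / alpha_ee

def coherent_deficit(Y, q):
    """2LA coherent intensity minus 3LA coherent intensity."""
    return 1 / (1 + Y**2) - 1 / (1 + Y**2 + 0.5 * q * Y**2)
```

Both equalities hold identically in $Y$, i.e., the sharp peak borrows its intensity from the elastic component.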
The choice of values $\gamma_d =0.05 \gamma$ and
$\gamma_a =0.015 \gamma$, small enough to fulfill the limit
(\ref{eq:gammas}), is such that the relation $\gamma_d =3.3 \gamma_a$
closely optimizes the intensity of the sharp extra peak for any given Rabi
frequency \cite{EvKe02}. For simplicity, we use these values for all the
remaining 3LA plots in this work.
\begin{widetext}
\begin{figure}
\caption{\label{fig:SqueezSpec}
Spectrum of squeezing $S_{\pi/2}(\omega)$, Eq. (\ref{eq:SqSpec_hpi}),
for several field strengths.}
\end{figure}
\end{widetext}
\section{\label{sec:specsqueez}The Spectrum of Squeezing}
Now we turn to the phase-dependent spectrum of the fluorescence of the
three-level atom and compare it to the well-known case of the two-level
atom \cite{CoWZ84,RiCa88}. Following Carmichael \cite{Carm87}, we
define the \textit{ideal source field} spectrum of squeezing as the Fourier
transform of photocurrent fluctuations of the quadratures in homodyne
detection,
\begin{eqnarray} \label{eq:specsqueez}
S_{\phi}(\omega) &=&
8\gamma_+ \eta \int_{0}^{\infty} d\tau \cos{\omega \tau} \,
\langle : \Delta \sigma_{\phi}(0) \Delta \sigma_{\phi}(\tau) : \rangle_{st}
\nonumber \\
&=& 8\gamma_+ \eta \int_{0}^{\infty} d\tau \cos{\omega \tau} \nonumber \\
&& \times \mathrm{Re} \left[ e^{-i\phi}
\langle \Delta \sigma_+(0) \Delta \sigma_{\phi}(\tau) \rangle_{st} \right] \,,
\end{eqnarray}
where
\begin{eqnarray} \label{eq:fluctop}
\Delta \sigma_{\phi} &=& \frac{1}{2} \left(\Delta \sigma_- e^{i\phi}
+\Delta \sigma_+ e^{-i\phi} \right) \,,
\end{eqnarray}
$\phi$ is the phase of the local oscillator in a BHD setup (that is, blocking
the path to detector $\mathrm{D}_I$ in Fig. \ref{fig:chdsetup}), $\eta$ is a
combined collection and detection efficiency, and the dots $::$ indicate that
the operators must follow time and normal orderings. This is an
incoherent spectrum as it depends on the field fluctuations. In fact, the
phase-dependent and the phase-independent spectra are related as
\cite{RiCa88}
\begin{eqnarray} \label{eq:S_RC}
S_{inc}(\omega) &=& \frac{1}{8\pi \alpha_{ee} \gamma_+ \eta}
\left[ S_{\phi}(\omega) +S_{\phi+\pi/2}(\omega) \right] \,.
\end{eqnarray}
Adding the spectra for $\phi=0$ and $\phi=\pi/2$, Eq. (\ref{eq:SincDef}) is
recovered.
Although the atom and laser parameters do not always allow for squeezing
(negative values in the spectrum), we keep the moniker of spectrum of
squeezing in order to distinguish this from the spectrum of
Sec. \ref{sec:chd-spec}.
Substituting Eq. (\ref{eq:p_tau}) in Eq. (\ref{eq:specsqueez}) the
approximate spectra for the quadratures are
\begin{eqnarray}
S_{0}(\omega) &=& -8 \gamma_+ \eta
C_1 \frac{ \lambda_1}{\omega^2 +\lambda_1^2} \,,
\label{eq:SqSpec_0} \\
S_{\pi/2}(\omega) &=& 8 \gamma_+ \eta \left[
C_+ \frac{ \lambda_+}{\omega^2 +\lambda_+^2}
+C_- \frac{ \lambda_-}{\omega^2 +\lambda_-^2} \right.
\nonumber \\
&& \left. -C_2 \frac{ \lambda_2}{\omega^2 +\lambda_2^2} \right] \,.
\label{eq:SqSpec_hpi}
\end{eqnarray}
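As a consistency check of Eq. (\ref{eq:S_RC}), the following sketch (illustrative; decay rates as in the text, a hypothetical weak drive so that $\delta$ is real, and $\eta=1$, which drops out of the comparison) sums the two quadrature spectra and compares with Eq. (\ref{eq:3LAincSpec}):

```python
import numpy as np

gamma, gamma_d, gamma_a, eta = 1.0, 0.05, 0.015, 1.0
Omega = 0.1 * gamma                       # hypothetical choice, 8Y^2 < 1
gp = gamma + gamma_d
q, Y = gamma_d / gamma_a, np.sqrt(2) * Omega / gp
den = 1 + Y**2 + 0.5 * q * Y**2
alpha_ee = 0.5 * Y**2 / den

# Eigenvalues and coefficients, Eqs. (eigenvalues) and (p-coef)
delta = 0.25 * gp * np.sqrt(1 - 8 * Y**2)
lam1, lam2 = -gp / 2, -gamma_a * (1 + q * Omega**2 / (2 * Omega**2 + gamma**2))
lam_p, lam_m = -0.75 * gp + delta, -0.75 * gp - delta
C1 = 0.25 * Y**2 / den
C2 = 0.25 * q * Y**4 / ((1 + Y**2) * den**2)
Cm = Y**2 * (1 - Y**2 + (1 - 5 * Y**2) * gp / (4 * delta)) / (8 * (1 + Y**2) * den)
Cp = Y**2 * (1 - Y**2 - (1 - 5 * Y**2) * gp / (4 * delta)) / (8 * (1 + Y**2) * den)

def lor(lam, w):
    """Signed Lorentzian kernel lambda/(omega^2 + lambda^2)."""
    return lam / (w**2 + lam**2)

def S0(w):      # Eq. (SqSpec_0)
    return -8 * gp * eta * C1 * lor(lam1, w)

def Shpi(w):    # Eq. (SqSpec_hpi)
    return 8 * gp * eta * (Cp * lor(lam_p, w) + Cm * lor(lam_m, w)
                           - C2 * lor(lam2, w))

def Sinc(w):    # Eq. (3LAincSpec)
    return (Cp * lor(lam_p, w) + Cm * lor(lam_m, w)
            - C1 * lor(lam1, w) - C2 * lor(lam2, w)) / (np.pi * alpha_ee)
```

The two sides of Eq. (\ref{eq:S_RC}) coincide to machine precision, since the quadrature spectra are built from the same four Lorentzians as the incoherent spectrum.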
For $\phi=0$ the spectrum is only a single, positive (no squeezing)
Lorentzian, just like for the 2LA, now with a width of $\gamma_+/2$.
For $\phi=\pi/2$ the spectrum is more interesting, as shown in
Fig. \ref{fig:SqueezSpec} for several field strengths. For instance, it has
a sharp peak [last term in Eq. (\ref{eq:SqSpec_hpi})], with its maximum
near $\Omega \approx 0.9 \gamma$. From weak to little more than
saturating fields the first two terms of Eq. (\ref{eq:SqSpec_hpi}) (with
factors $C_{\pm} |\lambda_{\pm}|$) add to form a single negative peak,
indicating squeezing. Rice and Carmichael \cite{RiCa88} found that the
weak-field spectrum ($Y^2 \ll 1$) in the 2LA has a line-width smaller than
$\gamma/2$ due to the negative value of the Lorentzians with amplitudes
$C_{\pm} |\lambda_{\pm}|$ in Eq. (\ref{eq:3LAincSpec}), resulting in a
squared Lorentzian \cite{Mollow69}. In the 3LA there is less squeezing
and the sharp peak splits the squeezing peak. For strong fields, the
spectrum consists of the sidebands of the Mollow triplet plus the extra
peak.
An additional manifestation of shelving is the shrinking of the sidebands
of the quadrature spectra compared to those of the 2LA. This is because
state $| a \rangle$ takes up an important fraction of the steady-state
population (actually, $\alpha_{aa} = q \alpha_{ee}$) for increasing Rabi
frequency.
To further illustrate the difference among the spectra of quadratures, we
plot in Fig. \ref{fig:Specs1-2} the spectra of the correlations
$\langle \Delta \sigma_+(0) \Delta \sigma_{\mp}(\tau) \rangle_{st}$. For
$S_{0}(\omega)$ the integrals are added, while for $S_{\pi/2}(\omega)$
they are subtracted. Thus, the sharp peak appears only in the latter. The
addition or subtraction cancels spectral components. The spectrum
(\ref{eq:SincDef}) contains only one of the integrals.
\begin{figure}
\caption{\label{fig:Specs1-2}
Spectra of the correlations
$\langle \Delta \sigma_+(0) \Delta \sigma_{\mp}(\tau) \rangle_{st}$;
their sum gives $S_{0}(\omega)$ and their difference $S_{\pi/2}(\omega)$.}
\end{figure}
\subsection{\label{sec:variance}Variances and integrated spectra}
An alternative approach to squeezing is the study of the variance or noise
in a quadrature,
\begin{eqnarray} \label{eq:variance}
V_{\phi} &=& \langle : (\Delta \sigma_{\phi})^2 : \rangle_{st}
=\mathrm{Re} \left[ e^{-i\phi}
\langle \Delta \sigma_+ \Delta \sigma_{\phi} \rangle_{st} \right]
\end{eqnarray}
or, equivalently, the integrated spectrum, related as $\int_{-\infty}^{\infty}
S_{\phi}(\omega) d \omega =4\pi \gamma_+ \eta V_{\phi}$. A negative
variance is a signature of squeezing in a quadrature. We have
\begin{subequations} \label{eq:varphi}
\begin{eqnarray}
V_0 &=& 2 C_1 = \frac{Y^2/2}{ 1+Y^2 +(q/2)Y^2 } \,, \\
V_{\pi/2} &=& 2 ( C_2 -C_+ -C_- ) \nonumber \\
&=& \frac{Y^2/2}{ (1+Y^2) \left( 1+Y^2 +(q/2)Y^2 \right)^2} \nonumber \\
&& \times \left[ Y^4 \left( 1+ \frac{q}{2} \right) +\frac{q}{2} Y^2 -1 \right] \,.
\end{eqnarray}
\end{subequations}
We plot the variances in Fig. \ref{fig:variance}. $V_0$ is positive for any
laser strength; there is no squeezing for $\phi=0$ but the total noise is
smaller for the 3LA. For $V_{\pi/2}$ both the interval of the laser strength
and amplitude for squeezing are notably reduced by the coupling to the
long-lived state, and the Rabi frequency for the largest negative value is
now very close to the saturating value, $\Omega=\gamma_+/4$, which
we use for several spectra.
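The crossover of $V_{\pi/2}$ from squeezing to excess noise is easy to check numerically (an illustrative sketch; $q$ as in the text):

```python
import numpy as np

def variances(Y, q):
    """V_0 and V_{pi/2} from Eqs. (varphi)."""
    den = 1 + Y**2 + 0.5 * q * Y**2
    V0 = 0.5 * Y**2 / den
    Vhpi = (0.5 * Y**2 / ((1 + Y**2) * den**2)) \
        * (Y**4 * (1 + 0.5 * q) + 0.5 * q * Y**2 - 1)
    return V0, Vhpi
```

For weak driving $V_{\pi/2}<0$ (squeezing), for strong driving $V_{\pi/2}>0$, and setting $q=0$ recovers the 2LA values, with more pronounced squeezing than in the 3LA.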
\begin{figure}
\caption{\label{fig:variance}
Variances $V_0$ and $V_{\pi/2}$, Eqs. (\ref{eq:varphi}), as functions
of the laser strength.}
\end{figure}
The standard BHD technique depends on the finite detector efficiency
$\eta$, a key obstacle to observing the weak squeezing of single-atom
resonance fluorescence. Only very recently has the squeezing in the
fluorescence of a single two-level quantum dot been observed
\cite{SHJ+15} with homodyne correlation measurements
\cite{Vogel91,Vogel95}, which are independent of the detector efficiency.
However, the measured variance had to be extracted from complementary
measurements with different phases.
There is a subtle issue that also has to be addressed: Why is it that the
quadrature variances are different? It seems natural to think that one
features squeezing and the other does not. But, from the viewpoint of
integrated spectra, one could expect this to be independent of the local
oscillator phase. Thus, we reformulate the question: What spectrum could
be integrated that gives the same value for both quadratures?
Conditional homodyne detection also solves the issue of finite detector
efficiency, measuring an amplitude-intensity correlation, in this case
without the need to extract the desired correlation from complementary
measurements. CHD has been used to detect squeezing of a cavity QED
source \cite{FOCC00}. Additionally, CHD gives an answer to the missing
term in the integrated spectra. We devote the next two sections to a
summary of CHD theory and its application to 3LA resonance fluorescence.
\section{\label{sec:chd-time}Conditional Homodyne Detection}
Figure \ref{fig:chdsetup} illustrates the setup for amplitude-intensity
correlation by CHD. Its theory was first presented in \cite{CCFO00}; its
application to resonance fluorescence of a 2LA was given in
\cite{hmcb10,CaGH15}, and its application to that of a $V$-type
three-level atom was presented in \cite{MaCa08,CaGM14,XGJM15}.
Hence, here we show only its basic features. A quadrature of the field,
$E_{\phi}$, is measured in balanced homodyne detection conditioned on
the direct detection of a photon (intensity, $I$) at detector $D_I$, i.e.,
$\langle I(0)E_{\phi}(\tau) \rangle_{st}$. Here,
$E_{\phi} \propto \sqrt{\eta} \sigma_{\phi}$ and
$I \propto \eta \sigma_+ \sigma_-$. Upon normalization, the
dependence of the correlation on the detector efficiency $\eta$ is
canceled. Then
\begin{eqnarray} \label{eq:hDef}
h_{\phi}(\tau) &=&
\frac{ \langle: \sigma_+(0) \sigma_-(0) \sigma_{\phi}(\tau) :\rangle_{st}}
{ \langle \sigma_+ \sigma_- \rangle_{st} \langle \sigma_{\phi}
\rangle_{st} } \,,
\end{eqnarray}
where it is assumed that the system is stationary,
\begin{equation} \label{eq:dipoleQuad}
\sigma_{\phi}=
\frac{1}{2} \left(\sigma_- e^{i\phi} +\sigma_+ e^{-i\phi} \right)
\end{equation}
is the dipole quadrature operator, $\phi$ is the phase between the strong
local oscillator and the driving field, and we recall that $::$ indicates time and
normal operator orderings. These orderings lead to different formulas for
positive and negative time intervals, and in general, the correlations are
asymmetric
\cite{CCFO00,FOCC00,DeCC02,MaCa08,CaGM14,XGJM15,XuMo15}.
However, in the present case the correlation is symmetric; thus we only
use the expression for positive intervals:
\begin{eqnarray} \label{eq:hDef2}
h_{\phi}(\tau) &=&
\frac{ \langle \sigma_+(0) \sigma_{\phi}(\tau) \sigma_-(0) \rangle_{st}}
{ \langle \sigma_+ \sigma_- \rangle_{st} \langle \sigma_{\phi}
\rangle_{st} } \,.
\end{eqnarray}
When the laser excites the atom on resonance, as is the case in this paper,
the in-phase quadrature $\langle \sigma_{\phi=0}(t) \rangle$ vanishes at all
times, and likewise
$\langle \sigma_+(0) \sigma_{0}(\tau) \sigma_- (0)\rangle_{st}=0$. So, to
obtain a finite measurement of this quadrature, it is necessary to add a
coherent offset of amplitude $E_{\mathrm{off}}$ and phase $\phi=0$ to the
dipole field before reaching the beam splitter \cite{hmcb10}. This procedure,
however, hides the non-classical character of the fluorescence, showing a
monotonically decaying correlation:
\begin{eqnarray} \label{eq:h_0}
h_{0}(\tau) &=& 1+ \frac{\alpha_{ee}}{\alpha_{ee} +E_{\mathrm{off}}^2 }
e^{-\gamma_+ \tau /2} \,.
\end{eqnarray}
The $\phi=\pi/2$ quadrature is more interesting. Substituting
Eqs. (\ref{eq:BlochSols}a), (\ref{eq:3rdordercorr}) and (\ref{eq:alphaee})
into Eq. (\ref{eq:hDef2}) we obtain
\begin{eqnarray} \label{eq:hhpi}
h_{\pi/2}(\tau) &=& 1 +B_2 e^{\lambda_2 \tau}
-B_+ e^{\lambda_+ \tau} -B_- e^{\lambda_- \tau} \,,
\end{eqnarray}
where
\begin{subequations}
\begin{eqnarray} \label{eq:B2pm}
B_2 &=& q \frac{Y^2/2}{1+Y^2} \,, \\
B_{\pm} &=& \left( 1+ q \frac{Y^2/2}{1+Y^2} \right)
\left( \frac{1}{2} \pm \frac{1-2Y^2} {8 \delta/\gamma_+} \right) \,.
\end{eqnarray}
\end{subequations}
The coupling to the metastable level $|a \rangle$ has visible consequences
for both short and long times, making the CHD correlation amplitude larger
than is the case for a 2LA, through the factor $q=\gamma_d/\gamma_a$.
This excess amplitude decays slowly towards the unit value, which signals
the decorrelation for long $\tau$, best noticed for large $\Omega$.
The CHD correlation can be written in terms of correlations of fluctuation
operators, as is the case with the full incoherent and squeezing spectra.
Splitting the dipole operators into a mean plus fluctuations, Eq.
(\ref{eq:dipoleSplit}), $h_{\phi}(\tau)$ is decomposed into a constant
term plus two two-time correlations, one of second order and one of
third order in the dipole fluctuation operators,
\begin{subequations}
\begin{eqnarray} \label{eq:hsplit}
h_{\phi}(\tau) =1 +h_{\phi}^{(2)}(\tau) +h_{\phi}^{(3)}(\tau) \,,
\end{eqnarray}
where
\begin{eqnarray} \label{eq:h2}
h_{\phi}^{(2)}(\tau) &=& \frac{ 2\mathrm{Re} \left[
\langle \sigma_{-} \rangle_{st} \langle \Delta \sigma_+(0)
\Delta \sigma_{\phi}(\tau) \rangle_{st} \right] }
{\langle \sigma_{\phi} \rangle_{st} \langle \sigma_+ \sigma_- \rangle_{st} } \,,
\end{eqnarray}
\begin{equation} \label{eq:h3}
h_{\phi}^{(3)}(\tau) =\frac{ \langle \Delta \sigma_+(0)
\Delta \sigma_{\phi}(\tau) \Delta \sigma_- (0)
\rangle_{st} }{\langle \sigma_{\phi} \rangle_{st}
\langle \sigma_+ \sigma_- \rangle_{st} } \,.
\end{equation}
\end{subequations}
The splitting is not done by the measurement scheme, but it can be
calculated to provide valuable information about the system's fluctuations.
\begin{figure}
\caption{\label{fig:h_tau}
Amplitude-intensity correlation $h_{\pi/2}(\tau)$, Eq. (\ref{eq:hhpi}),
and its partial results $1+h_{\pi/2}^{(2)}(\tau)$ and
$h_{\pi/2}^{(3)}(\tau)$.}
\end{figure}
For $\phi=0$, due to the need to add an offset, we are left with
Eq. (\ref{eq:h_0}). For $\phi=\pi/2$ we obtain the approximate expressions:
\begin{subequations}
\begin{eqnarray}
h_{\pi/2}^{(2)}(\tau) &=& \frac{2}{\alpha_{ee}} \left[ C_2 e^{\lambda_2 \tau}
-C_+ e^{\lambda_+ \tau} -C_- e^{\lambda_- \tau} \right] \,,
\label{eq:h2ap} \\
h_{\pi/2}^{(3)}(\tau) &=& D_2 e^{\lambda_2 \tau}
+D_+ e^{\lambda_+ \tau} +D_- e^{\lambda_- \tau} \,, \label{eq:h3ap}
\end{eqnarray}
where
\begin{eqnarray} \label{eq:Dcoefs}
D_{2} &=& B_2 -\frac{2 C_2}{\alpha_{ee} } \,, \qquad
D_{\pm} = \frac{2 C_{\pm}}{\alpha_{ee}} -B_{\pm} \,,
\end{eqnarray}
\end{subequations}
which are too cumbersome to be reproduced in full here. In Fig.
\ref{fig:h_tau} we plot the analytical results, Eq. (\ref{eq:hhpi}) and its
partial results $1+h_{\pi/2}^{(2)}(\tau)$ and $h_{\pi/2}^{(3)}(\tau)$, which
differ very little from the exact ones.
The vanishing of $h_{\pi/2}(\tau)$, Eq. (\ref{eq:hsplit}), at $\tau=0$ has the same origin as
the antibunching in the intensity correlations: when the atom is in the
ground state upon a photon emission, both the dipole field and the
intensity are zero, and they build up again when the atom reabsorbs
light. For $\phi=0$ the effect is not seen due to the additional offset. So
\begin{subequations}
\begin{eqnarray} \label{eq:h2_pi/2_0tau}
h_{\pi/2}^{(2)}(0) &=& \frac{\alpha_{ee} -2|\alpha_{+}|^2}{\alpha_{ee}}
\nonumber \\
&=& \frac{ Y^2 +(q/2)Y^2 -1 }{ 1+Y^2 +(q/2)Y^2 } \,,
\end{eqnarray}
and
\begin{eqnarray}
h_{\pi/2}^{(3)}(0) &=& \frac{2( |\alpha_{+}|^2 -\alpha_{ee})}{\alpha_{ee}}
= - 2(2+q) \alpha_{ee} \nonumber \\
&=& -\frac{ (2+q) Y^2 }{1+Y^2 +(q/2)Y^2 } \,,
\label{eq:h3_pi/2_0tau}
\end{eqnarray}
\end{subequations}
that is, the initial size of the correlation is proportional to the mean
population in the excited state. For $\Omega \gg \gamma$, the third-order
correlation has its largest (negative) initial value $h_{\pi/2}^{(3)}(0) \to -2$.
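Both $\tau=0$ values, their sum $h_{\pi/2}^{(2)}(0)+h_{\pi/2}^{(3)}(0)=-1$ (so that $h_{\pi/2}(0)=0$), and the strong-field limit are easily verified (a minimal sketch; $q$ as in the text):

```python
import numpy as np

def h2_0(Y, q):
    """h^{(2)}_{pi/2}(0), Eq. (h2_pi/2_0tau)."""
    return (Y**2 + 0.5 * q * Y**2 - 1) / (1 + Y**2 + 0.5 * q * Y**2)

def h3_0(Y, q):
    """h^{(3)}_{pi/2}(0), Eq. (h3_pi/2_0tau)."""
    return -(2 + q) * Y**2 / (1 + Y**2 + 0.5 * q * Y**2)
```

Note that $(2+q)/(1+q/2)=2$ for any $q$, so the third-order correlation tends to $-2$ at strong driving regardless of the shelving ratio.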
The third-order term signals the deviation from Gaussian fluctuations as a
consequence of the nonlinearity of the resonance fluorescence process
for increasing laser intensity \cite{hmcb10,CaGH15}. As perhaps best
noticed in the spectral domain, it is the enhanced sensitivity to nonlinearity
that makes CHD stand out over BHD and the spectrum of squeezing.
We illustrate this in the next section.
\section{\label{sec:chd-spec}Quadrature Spectra from CHD}
The spectrum measured from the amplitude-intensity correlation is given by
\begin{eqnarray} \label{eq:S-chd}
\mathcal{S}_{\phi}(\omega) &=& 4 \gamma_+ \alpha_{ee}
\int_{0}^{\infty} d\tau \cos{\omega \tau}
\left[ h_{\phi}(\tau) -1 \right] \,.
\end{eqnarray}
The factor $4 \gamma_+ \alpha_{ee}$ is the photon flux into the CHD
setup. For $\phi=0$ we replace it by
$4 \gamma_+ (\alpha_{ee} +E_{\mathrm{off}}^2)$.
Following the splitting of $h_{\phi}(\tau)$, Eq. (\ref{eq:hsplit}),
the spectra of second- and third-order dipole fluctuations are, respectively,
\begin{subequations}
\begin{eqnarray}
\mathcal{S}_{\phi}^{(2)}(\omega) &=& 4\gamma_+ \alpha_{ee}
\int_{0}^{\infty} d\tau \cos{\omega \tau} \,h_{\phi}^{(2)}(\tau)
\label{eq:S2} \,, \\
\mathcal{S}_{\phi}^{(3)}(\omega) &=& 4\gamma_+ \alpha_{ee}
\int_{0}^{\infty} d\tau \cos{\omega \tau} \,h_{\phi}^{(3)}(\tau) \,.
\label{eq:S3}
\end{eqnarray}
\end{subequations}
Using Eqs. (\ref{eq:h_0}) and (\ref{eq:hhpi}) we obtain the
approximate analytical spectra. For $\phi=0$, we have
\begin{eqnarray} \label{eq:S-chd-0}
\mathcal{S}_{0}(\omega) &=& -4\gamma_+ \alpha_{ee}
\frac{\lambda_1}{\omega^2 +\lambda_1^2} \,,
\end{eqnarray}
which is independent of the offset. The spectrum of this quadrature is a
simple Lorentzian of width $\gamma_+/2$. For $\phi=\pi/2$ we have
\begin{eqnarray} \label{eq:S-chd-hpi}
\mathcal{S}_{\pi/2}(\omega) &=& 4\gamma_+ \alpha_{ee}
\left[ B_+ \frac{\lambda_+}{\omega^2 +\lambda_+^2}
+B_- \frac{\lambda_-}{\omega^2 +\lambda_-^2}
\right. \nonumber \\
&& \left. - B_2 \frac{\lambda_2}{\omega^2 +\lambda_2^2} \right] \,.
\end{eqnarray}
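Each exponential term $e^{\lambda \tau}$ (with $\lambda<0$) of the correlation transforms into a Lorentzian through the identity $\int_0^\infty \cos(\omega\tau)\,e^{\lambda\tau}\,d\tau = -\lambda/(\omega^2+\lambda^2)$, which is how the Lorentzian forms above arise. A minimal numerical sketch of this identity, with illustrative values of $\lambda$ and $\omega$ that are not tied to the physical parameters:

```python
import math

def cosine_transform(lam, omega, T=100.0, n=500_000):
    # Midpoint rule for int_0^T cos(omega*tau) * exp(lam*tau) d(tau);
    # for lam < 0 the tail beyond T is negligible.
    dt = T / n
    return sum(math.cos(omega * (k + 0.5) * dt) * math.exp(lam * (k + 0.5) * dt)
               for k in range(n)) * dt

lam, omega = -0.5, 0.8                   # illustrative values only
numeric = cosine_transform(lam, omega)
analytic = -lam / (omega**2 + lam**2)    # Lorentzian of half-width |lam|
assert abs(numeric - analytic) < 1e-6
```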
The second-order spectra are
\begin{subequations}
\begin{eqnarray}
\mathcal{S}_{0}^{(2)}(\omega) &=& \mathcal{S}_{0}(\omega) \,, \\
\mathcal{S}_{\pi/2}^{(2)}(\omega) &=& 8 \gamma_+
\left[ C_+ \frac{\lambda_+}{\omega^2 +\lambda_+^2}
+C_- \frac{\lambda_-}{\omega^2 +\lambda_-^2}
\right. \nonumber \\
&& \left. -C_2 \frac{\lambda_2}{\omega^2 +\lambda_2^2} \right] \,.
\label{eq:S2app}
\end{eqnarray}
\end{subequations}
These are just the spectra of squeezing, Eqs. (\ref{eq:SqSpec_0}) and (\ref{eq:SqSpec_hpi}), without the detector efficiency factor.
The third-order spectra are
\begin{subequations}
\begin{eqnarray}
\mathcal{S}_{\phi}^{(3)}(\omega)
&=& \mathcal{S}_{\phi}(\omega) -\mathcal{S}_{\phi}^{(2)}(\omega) \,, \\
\mathcal{S}_{0}^{(3)}(\omega) &=& 0 \,, \\
\mathcal{S}_{\pi/2}^{(3)}(\omega)
&=& -4 \gamma_+ \alpha_{ee} \sum_{k=2,+,-} D_k
\frac{\lambda_k}{\omega^2 +\lambda_k^2} \,,
\end{eqnarray}
\end{subequations}
where the $D_k$ are given by Eq. (\ref{eq:Dcoefs}) and the $\lambda_k$ are the
eigenvalues of Eq. (\ref{eq:eigenvalues}).
Originally, CHD was conceived to overcome the issue of imperfect
detection and thus be able to measure squeezing of weak light sources
\cite{CCFO00,FOCC00}. In the weak-field limit the spectrum of the
amplitude-intensity correlation approaches the spectrum of squeezing if
third-order fluctuations can be neglected, i.e.,
\begin{eqnarray}
S_{\phi}(\omega) = \eta \mathcal{S}_{\phi}^{(2)}(\omega)
\approx \eta \mathcal{S}_{\phi}(\omega) \,.
\end{eqnarray}
In the third-order spectrum the sharp peak is about half the size of the
second-order one, scaling as $Y^4$; the remaining third-order terms also
scale as $Y^4$, whereas the remaining second-order terms scale as $Y^2$.
However, for not-so-weak fields,
we find strong signatures of third-order fluctuations in the spectra.
In Fig. \ref{fig:Schd} we plot the analytical results of the spectra of the
amplitude-intensity correlation of the two- and three-level atom, Eq.
(\ref{eq:S-chd-hpi}). The difference from the (omitted) exact results is very
small. Note that for weak and saturating laser the sharp peak is smaller
than in the spectra of squeezing, Fig. \ref{fig:SqueezSpec}. This is
because the third-order sharp peak is negative in this excitation regime, as
seen in the insets. Moreover, in this excitation regime, the full third-order
spectrum is negative [insets of Figs.~\ref{fig:Schd}(b) and \ref{fig:Schd}(c)],
which adds to the negative squeezing peak of this quadrature
\cite{hmcb10,CaGH15}.
\begin{widetext}
\begin{figure}
\caption{\label{fig:Schd}
Spectra of the amplitude-intensity correlation for the two- and
three-level atom, Eq. (\ref{eq:S-chd-hpi}).}
\end{figure}
\end{widetext}
In the strong-excitation regime the third-order spectrum leads to striking
deviations between the CHD and squeezing spectra and between the
2LA and the 3LA. On the one hand, the sidebands become dispersive
\cite{hmcb10}. This occurs when $\lambda_{\pm}$ become complex,
that is, for $\Omega > \gamma_+/4$, but it is only for strong enough
excitation that the spectral components split. While the second-order
peaks are Lorentzians, the third-order ones are dispersive and of
comparable size for the 2LA \cite{hmcb10} or bigger for the 3LA. On the
other hand, there are large deviations in the size of the spectra. The
third-order spectrum is much bigger in the 3LA than in the 2LA, not only
for the sharp peak. The third-order spectrum contributes most of the
total CHD spectrum.
The above effects can be explained as follows. The third-order correlation
of fluctuation operators measures the atom-laser nonlinearity, which
grows with increasing laser intensity, and the deviation of the
fluorescence from Gaussian fluctuations. Also, it should be mentioned that
the dipole fluctuations of the driven transition are enhanced due to the
coupling to the long-lived state $| a \rangle$, which is populated by the
increased number of spontaneous emission events from the excited state,
Eq. (\ref{eq:alpha_aa}). An early study of this effect in the three-level
configuration of this paper reported large deviations in the photon
statistics from those of a 2LA \cite{MeSc90}.
We recall that in CHD the second- and third-order components cannot
be measured separately; both are merged in a single measured signal.
CHD goes beyond the concept of squeezing when studying
phase-dependent fluctuations.
\subsection{\label{sec:variancechd}Integrated spectra}
Finally, we calculate the integrated spectra of the CHD quadratures:
\begin{subequations}
\begin{eqnarray} \label{eq:intS-chd}
\int_{-\infty}^{\infty} \mathcal{S}_{0}(\omega) d\omega
&=& 4\pi \gamma_+ \alpha_{ee} \,, \\
\int_{-\infty}^{\infty} \mathcal{S}_{\pi/2}(\omega) d\omega
&=& -4\pi \gamma_+ \alpha_{ee} \,.
\end{eqnarray}
\end{subequations}
That the magnitudes are equal means that the total emitted noise is
independent of the quadrature. This is made possible by the third-order
fluctuations, absent in the spectrum of squeezing, Sec. \ref{sec:variance}.
This result is analogous to calculating the total incoherent emission by
integrating the incoherent spectrum.
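Because every spectral component above is a Lorentzian $\lambda_k/(\omega^2+\lambda_k^2)$ with $\lambda_k<0$, each such term integrates to $-\pi$ over the whole frequency axis, which is how the totals follow from the prefactors of Eqs. (\ref{eq:S-chd-0}) and (\ref{eq:S-chd-hpi}). A quick numerical sketch with an arbitrary illustrative negative $\lambda$:

```python
import math

def lorentzian_integral(lam, A=5_000.0, n=500_000):
    # Midpoint rule for int_{-A}^{A} lam / (omega^2 + lam^2) d(omega);
    # the tails beyond +-A contribute about 2*|lam|/A in magnitude.
    dw = 2 * A / n
    return sum(lam / ((-A + (k + 0.5) * dw) ** 2 + lam ** 2)
               for k in range(n)) * dw

lam = -0.5   # illustrative negative eigenvalue
assert abs(lorentzian_integral(lam) - (-math.pi)) < 1e-2
```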
\section{Conclusions}
We investigated ensemble-averaged phase-dependent fluctuations of the
intermittent resonance fluorescence of a single three-level atom. We
focused mainly on the spectrum of squeezing by balanced homodyne
detection, and on the spectrum of the amplitude-intensity correlation of
conditional homodyne detection. The shelving effect produces a sharp
peak in the spectrum of the quadrature that features squeezing. Since this
peak is positive, it acts to reduce the amount of squeezing observed in the
weak- to moderate- (strong-) excitation regime. Since CHD is sensitive to
third-order dipole fluctuations that grow with atom-laser nonlinearity, the
spectra of BHD and CHD are very different for strong excitation. Additional
insight is obtained by calculating the variances or integrated spectra of
quadratures. In BHD the variances are different, while in CHD they are
equal, a feature that deserves further study.
We considered only the case of exact atom-laser resonance. This allowed
us to obtain a very good approximate analytical solution of the master
equation with a simple method, which we then used to construct analytical expressions for the various quantities of interest. Further insight into the
incoherent spectrum and its link to the phase-dependent fluctuations could
be established. Also, the on-resonance case allowed us to present the
basic physical features in the most straightforward manner.
Conditional homodyne detection, with its sensitivity to third-order field
fluctuations, opens a new gate to study phase-dependent fluctuations
beyond the realm of squeezing for highly nonlinear and non-Gaussian
optical processes. On the other hand, the impressive advances in photon
collection efficiencies by parabolic mirrors \cite{SoLe15,StSL10} could
complement CHD for atomic resonance fluorescence, its squeezing and
its quantum fluctuations in general.
\begin{acknowledgments}
H.M.C.-B. thanks Prof. J. R\'ecamier for hospitality at ICF-UNAM. R.R.A.
thanks CONACYT for the scholarship No. 379732, and DGAPA-UNAM
for support under project IN108413.
\end{acknowledgments}
\appendix
\begin{widetext}
\section{\label{sec:approx}Approximate Solutions}
The approximate expectation values of the atomic operators are
\begin{subequations} \label{eq:BlochSolsApp}
\begin{eqnarray}
\langle \sigma_{\mp} (t) \rangle &=&
\mp i \frac{ Y/ \sqrt{2} }{1+Y^2} \left[ e^{\lambda_2 t}
-e^{-3\gamma_+ t/4} \left( \cosh{\delta t}
+ \frac{3 \gamma_+}{4 \delta} \sinh{\delta t} \right) \right]
\mp i \sqrt{2} Y \frac{\gamma_+}{4 \delta} e^{-3\gamma_+ t/4} \sinh{\delta t}
+\alpha_{\mp} \left( 1- e^{\lambda_2 t} \right) \,, \nonumber \\ \\
\langle \sigma_{ee} (t) \rangle &=&
\frac{ Y^2/2 }{1+ Y^2} \left[ e^{\lambda_2 t}
-e^{-3\gamma_+ t/4} \left( \cosh{\delta t}
+ \frac{3 \gamma_+}{4 \delta} \sinh{\delta t} \right) \right]
+\frac{Y^2/ 2}{1+Y^2 +(q/2)Y^2} \left( 1- e^{\lambda_2 t} \right) \,, \\
\langle \sigma_{gg} (t) \rangle &=& e^{\lambda_2 t}
-\frac{ Y^2/2 }{1+ Y^2} \left[ e^{\lambda_2 t}
-e^{-3\gamma_+ t/4} \left( \cosh{\delta t}
+ \frac{3 \gamma_+}{4 \delta} \sinh{\delta t} \right) \right]
+\frac{1+(Y^2/ 2)}{1+Y^2 +(q/2)Y^2} \left( 1- e^{\lambda_2 t} \right) \,.
\end{eqnarray}
\end{subequations}
We make several assumptions to give our results simple, albeit long,
expressions: First, we neglect a term $\gamma_d \Omega^2/2$ in the
solutions in Laplace space, which reduces the problem to one similar to
the 2LA case, with $\gamma$ replaced by $\gamma_+$. Eigenvalues
$\lambda_{\pm}$ are thus identified. They give rise to the terms with the
hyperbolic functions. Second, a constant term is multiplied by a factor
$e^{\lambda_2 t}$. Finally, we add a term
$\langle \sigma_{jk} \rangle_{st}(1-e^{\lambda_2 t})$. Recall that for the
case of a 2LA $\lambda_2=0$ and
$\langle \sigma_{ee} (t) \rangle +\langle \sigma_{gg} (t) \rangle =1$.
These \textit{ad hoc} assumptions make the approximate solutions very
close to the exact ones, as long as $\gamma_d$ and $\gamma_a$ are
at least an order of magnitude smaller than $\gamma$.
Similarly, we obtain
\begin{eqnarray} \label{eq:p_tauApp}
\langle \sigma_+(0) \sigma_{\mp}(\tau) \rangle_{st}
&=& \pm \frac{1}{2} \frac{ Y^2 }{ (1+Y^2 +(q/2)Y^2)^2 }
+\frac{1}{4} \frac{Y^2}{ 1+Y^2 +(q/2)Y^2 } e^{\lambda_1 \tau}
\pm \frac{q}{4} \frac{Y^4}{(1+Y^2) \left( 1+Y^2 +(q/2)Y^2 \right)^2}
e^{\lambda_2 \tau}
\nonumber \\
&& \mp \frac{1}{4} \frac{Y^2 }{(1+Y^2) \left( 1+Y^2 +(q/2)Y^2 \right)}
e^{-3\gamma_+ \tau /4} \left[ (1-Y^2) \cosh{\delta \tau}
\mp \frac{1-5Y^2}{4\delta/\gamma_+} \sinh{\delta \tau} \right] \,.
\end{eqnarray}
\section{Equations of motion}
We solve sets of linear equations of motion for the expectation values
of the atomic operators and for two-time correlations. The equations
and the formal solutions can be written as
\begin{eqnarray} \label{eq:BlochEqsFluc}
\frac{d}{dt} g(t) &=& \mathbf{M} g(t) \,, \\
g(t) &=& e^{\mathbf{M} t} g(0) \,,
\end{eqnarray}
where $\mathbf{M}$ is the matrix (\ref{eq:matrixM}). In general, we solve
these equations numerically. The initial conditions, however, are obtained
exactly analytically, even off resonance. For instance, defining
$\Delta \mathbf{s} \equiv \left( \Delta \sigma_-,
\Delta \sigma_+, \Delta \sigma_{ee}, \Delta \sigma_{gg} \right)^T$,
where $\Delta \sigma_{jk} =\sigma_{jk} -\alpha_{jk}$ and
$\alpha_{jk} =\rho_{kj}^{st}$, $\alpha_+ =\rho_{ge}^{st}$,
$\alpha_- =\rho_{eg}^{st}$, the initial conditions of the second- and
third-order correlations of the fluctuation operators are
\begin{eqnarray} \label{eq:corre2st}
\langle \Delta \sigma_+ \Delta \mathbf{s} \rangle_{st}
= \left( \begin{array}{c} \alpha_{ee} -\alpha_+ \alpha_- \\ -\alpha_{+}^2 \\
-\alpha_{+}\alpha_{ee} \\
\alpha_{+} (1- \alpha_{gg}) \end{array} \right)
= \frac{\Omega^2}{N^2} \left( \begin{array}{c} (2+q) \Omega^2 \\
\gamma_+^2 \\ -i \gamma_+ \Omega \\
i (1+q) \gamma_+ \Omega \end{array} \right) \,,
\end{eqnarray}
\begin{eqnarray} \label{eq:corre3st}
\langle \Delta \sigma_+ \Delta \mathbf{s}
\Delta \sigma_- \rangle_{st} = \left( \begin{array}{c}
2\alpha_{-} (\alpha_+ \alpha_- -\alpha_{ee}) \\
2\alpha_{+} (\alpha_+ \alpha_- -\alpha_{ee}) \\
\alpha_{ee} (2\alpha_+ \alpha_- -\alpha_{ee}) \\
(\alpha_{gg} -1) (2\alpha_+ \alpha_- -\alpha_{ee}) \end{array} \right)
= \frac{\Omega^4}{N^3} \left( \begin{array}{c}
i 2(2+q) \gamma_+ \Omega \\
-i 2(2+q) \gamma_+ \Omega \\
\gamma_+^2 - (2+q) \Omega^2 \\
(1+q) [ \gamma_+^2 -(2+q) \Omega^2 ] \end{array} \right) \,,
\end{eqnarray}
respectively, where we used the steady-state values $\alpha_{jk}$ of
Eqs. (\ref{eq:alphas}) and $N= (2 +q)\Omega^2 +\gamma_+^2$.
The numerical calculations of the spectra are more efficiently implemented
using the formal solution of the correlations,
$g(\tau)=e^{\mathbf{M} \tau} g(0)$, so the Fourier integral is formally
solved as $(i \omega \mathbf{1-M})^{-1} g(0)$, where $\mathbf{1}$ is the
$4 \times 4$ identity matrix. This avoids potentially troublesome
numerical integrals whose upper limit is a long time, of order
$\gamma_d^{-1}$.
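The resolvent shortcut described above can be sketched as follows; the $2\times 2$ diagonal matrix and initial vector are purely illustrative stand-ins for $\mathbf{M}$ and $g(0)$, chosen so the integral is known in closed form:

```python
import numpy as np

# Resolvent trick: for dg/dtau = M g with all eigenvalues of M in the left
# half-plane, int_0^inf e^{-i omega tau} g(tau) d(tau) = (i omega 1 - M)^{-1} g(0).
# Illustrative stand-ins, not the physical M and g(0) of the paper:
M = np.diag([-1.0, -2.0])
g0 = np.array([1.0, 1.0])
omega = 0.7

resolvent = np.linalg.solve(1j * omega * np.eye(2) - M, g0)

# For diagonal M each component of the integral is simply 1/(i omega - lam_k).
expected = np.array([1.0 / (1j * omega + 1.0), 1.0 / (1j * omega + 2.0)])
assert np.allclose(resolvent, expected)
```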
\end{widetext}
\begin{references}
\bibitem{PlKn97} For a review see, e.g., M.~B. Plenio and P.~L. Knight,
Rev. Mod. Phys. \textbf{70}, 101 (1998).
\bibitem{StHB09} F.~D. Stefani, J.~P. Hoogenboom, and E. Barkai,
Phys. Today \textbf{62}(2), 34 (2009).
\bibitem{HePl95} G.~C. Hegerfeldt and M.~B. Plenio, Phys. Rev. A
\textbf{52}, 3333 (1995).
\bibitem{GaKK95} B.~M. Garraway, M.~S. Kim, and P.~L. Knight,
Opt. Commun. \textbf{117}, 560 (1995).
\bibitem{EvKe02} J. Evers and Ch. H. Keitel, Phys. Rev. A
\textbf{65}, 033813 (2002).
\bibitem{BuTa00} V. B\"uhner and Chr. Tamm, Phys. Rev. A
\textbf{61}, 061801 (2000).
\bibitem{HBLW97} J.~T. H\"offges, H.~W. Baldauf, W. Lange, and
H. Walther, J. Mod. Opt. \textbf{44}, 1999 (1997).
\bibitem{WaZo91} D.~F. Walls and P. Zoller, Phys. Rev. Lett.
\textbf{47}, 709 (1981).
\bibitem{CoWZ84} M.~J. Collett, D.~F. Walls, P. Zoller, Opt. Commun.
\textbf{52}, 145 (1984).
\bibitem{SoLe15} M. Sondermann and G. Leuchs, in \textit{Engineering
the Atom-Photon Interaction}, edited by A. Predojevic and M.~W. Mitchell
(Springer, Heidelberg, 2015).
\bibitem{Vogel91} W. Vogel, Phys. Rev. Lett. \textbf{67}, 2450 (1991).
\bibitem{Vogel95} W. Vogel, Phys. Rev. A \textbf{51}, 4160 (1995).
\bibitem{CCFO00} H.~J. Carmichael, H.~M. Castro-Beltran, G.~T.
Foster, and L.~A. Orozco, Phys. Rev. Lett. \textbf{85}, 1855 (2000).
\bibitem{FOCC00} G.~T. Foster, L.~A. Orozco, H.~M. Castro-Beltran,
and H.~J. Carmichael, Phys. Rev. Lett. \textbf{85}, 3149 (2000).
\bibitem{HSL+06} H.-G. Hong, W. Seo, M. Lee, W. Choi, J.-H. Lee, and
K. An, Opt. Lett. \textbf{31}, 3182 (2006).
\bibitem{GSS+07} N.~B. Grosse, Th. Symul, M. Stobi\'nska, T.~C. Ralph,
and P.~K. Lam, Phys. Rev. Lett. \textbf{98}, 153603 (2007).
\bibitem{KAD+09} L.~A. Krivitsky, U.~L. Andersen, R. Dong, A. Huck,
C. Wittmann, and G. Leuchs, Phys. Rev. A \textbf{79}, 033828 (2009).
\bibitem{KuVo-X} B. K\"uhn and W. Vogel, Phys. Rev. Lett. \textbf{116},
163603 (2016); arXiv:1511.01723.
\bibitem{Carm85} H.~J. Carmichael, Phys. Rev. Lett. \textbf{55}, 2790 (1985).
\bibitem{SHJ+15} C.~H.~H. Schulte, J. Hansom, A.~E. Jones,
C. Matthiesen, C. Le Gall, and M. Atat\"ure, Nature (London)
\textbf{525}, 222 (2015).
\bibitem{GRS+09} S. Gerber, D. Rotter, L. Slodi\v{c}ka, J. Eschner,
H.~J. Carmichael, and R. Blatt, Phys. Rev. Lett. \textbf{102},
183601 (2009).
\bibitem{hmcb10} H.~M. Castro-Beltran, Opt. Commun. \textbf{283},
4680 (2010).
\bibitem{CaGH15} H.~M. Castro-Beltran, L. Gutierrez, and L. Horvath,
Appl. Math. Inf. Sci. \textbf{9}, 2849 (2015).
\bibitem{DeCC02} A. Denisov, H.~M. Castro-Beltran, and
H.~J. Carmichael, Phys. Rev. Lett. \textbf{88}, 243601 (2002).
\bibitem{MaCa08} E.~R. Marquina-Cruz and H.~M. Castro-Beltran,
Laser Phys. \textbf{18}, 157 (2008).
\bibitem{CaGM14} H.~M. Castro-Beltran, L. Gutierrez, and E.~R.
Marquina-Cruz, in \textit{Latin America Optics and Photonics},
Cancun, Mexico, 2014, OSA Technical Digest
(Optical Society of America, Washington, D.C., 2014), paper LM4A.38.
\bibitem{XGJM15} Q. Xu, E. Greplova, B. Julsgaard, and K. M\o lmer,
Phys. Scripta \textbf{90}, 128004 (2015).
\bibitem{XuMo15} Q. Xu and K. M\o lmer, Phys. Rev. A
\textbf{92}, 033830 (2015).
\bibitem{Carm99} H.~J. Carmichael, \textit{Statistical Methods in
Quantum Optics 1: Master Equations and Fokker-Planck Equations}
(Springer, Berlin, 2002).
\bibitem{Mollow69} B.~R. Mollow, Phys. Rev. \textbf{188}, 1969 (1969).
\bibitem{RiCa88} P.~R. Rice and H.~J. Carmichael, J. Opt. Soc. Am. B,
\textbf{5}, 1661 (1988).
\bibitem{Carm87} H.~J. Carmichael, J. Opt. Soc. Am. B, \textbf{4}, 1588
(1987). The ideal source field spectrum of squeezing is the one produced
by the (atomic) source alone, neglecting the free field. Since the latter is
assumed to be in the vacuum state, its omission is justified on the basis
of using normal and time operator orderings.
\bibitem{MeSc90} M. Merz and A. Schenzle, Appl. Phys. B
\textbf{50}, 115 (1990).
\bibitem{StSL10} M. Stobi\'nska, M. Sondermann, and G. Leuchs,
Opt. Commun. \textbf{283}, 737 (2010).
\end{references}
\end{document}
\begin{document}
\title{A Bijection Between Partially Directed Paths in the Symmetric Wedge and Matchings}
\author{Svetlana~Poznanovi\'{c}}
\date{}
\maketitle
\begin{abstract}
We give a bijection between partially directed paths in the
symmetric wedge $y= \pm x$ and matchings, which sends north steps
to nestings. This gives a bijective proof of a result of Prellberg
et al. that was first discovered through the corresponding
generating functions: the number of partially directed paths
starting at the origin confined to the symmetric wedge $y= \pm x$
with $k$ north steps is equal to the number of matchings on $[2n]$
with $k$ nestings.
\end{abstract}
{\bf Key Words:} partially directed path, matching, nesting
{\bf AMS subject classification:} 05A15, 05A18
\section{Introduction}
The purpose of this paper is to give a bijective proof of a fact
that was discovered unexpectedly and connects two seemingly
different branches of combinatorics. One of the branches is the
study of matchings and set partitions and, more specifically, the
statistics crossings and nestings. The other one is the study of
directed paths in the plane.
Based on Touchard's work~\cite{Touchard}, Riordan~\cite{Riordan}
derived a formula for the number of matchings with $k$ crossings.
Since then, a lot of results connected to this topic have been
obtained. We mention a few. M.~de Sainte-Catherine
in~\cite{Sainte-Catherine} bijectively shows that the number of
matchings with $k$ crossings is equal to the number of matchings
with $k$ nestings. This bijection also implies symmetric joint
distribution of crossings and nestings. More than two decades
later, Kasraoui and Zeng, in~\cite{Kasraoui}, extended this
bijection to show that the same result holds for set partitions.
Martin Klazar~\cite{Klazar06} studied the distribution of these
statistics on subtrees of the generating tree of matchings, and
the same questions for set partitions were studied in~\cite{PYan}.
In another line of work, Prellberg et al. in~\cite{Rpr} worked on
finding the generating function of self-avoiding partially directed
paths in the wedge $y=\pm px$ consisting of east, north and south
steps. Using the kernel method they were able to derive explicitly
the generating function for the case $p=1$. The generating
function revealed that the number of such paths which end at
$(n,-n)$ with $k$ north steps is the same as the number of
matchings on $[2n]$ with $k$ nestings. For a nice survey of the
history of the problem and how this fact was discovered
see~\cite{Rubey}.
A matching on the set $[2n]=\{1,\dots,2n\}$ is a family of $n$
two-element disjoint subsets of $[2n]$. In particular, it is a
set-partition with all the blocks of size two. It is convenient to
represent a matching with its standard diagram consisting of arcs
connecting $2n$ vertices on a horizontal line (see Figure
\ref{fig:matching}). The vertices are numbered in increasing order
from left to right. The set of all matchings of $[2n]$ is denoted
by $\mathcal{M}_n$. We say that two edges $(a,b)$ and $(c,d)$ form
a crossing if $a<c<b<d$ (i.e., if they cross) and they form a
nesting if $a<c<d<b$ (i.e., if one covers the other). If they are
neither crossed nor nested we say they form an alignment. The
number of nestings in a matching $M$ is denoted by $ne(M)$.
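The crossing and nesting conditions can be checked mechanically. A small sketch (our own helper, not from the paper) reproduces the counts quoted in the caption of Figure \ref{fig:matching}:

```python
from itertools import combinations

def crossings_and_nestings(matching):
    """Count crossing and nesting pairs in a matching given as (a, b) arcs, a < b."""
    cr = ne = 0
    for (a, b), (c, d) in combinations(sorted(matching), 2):
        if a < c < b < d:
            cr += 1          # the two arcs cross
        elif a < c < d < b:
            ne += 1          # arc (c, d) is nested below arc (a, b)
    return cr, ne

# The matching of Figure 1: edges (1,3), (2,7), (4,6), (5,8), (9,10).
M = [(1, 3), (2, 7), (4, 6), (5, 8), (9, 10)]
assert crossings_and_nestings(M) == (3, 1)   # 3 crossings, 1 nesting
```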
\begin{figure}
\caption{\emph{Diagram of a matching with 10 vertices and edges:
$e_1=(1,3),e_2=(2,7),e_3=(4,6),e_4=(5,8)$, and $e_5=(9,10)$. This
matching has 3 crossings formed by the pairs of edges:
$(e_1,e_2),(e_2,e_4),$ and $(e_3,e_4)$, one nesting $(e_2,e_3)$,
and all the other pairs of edges form alignments.}}
\label{fig:matching}
\end{figure}
A partially directed path in the plane is a path starting at the
origin and consisting of unit east, north, and south steps. We
consider all such paths confined to the symmetric wedge defined by
the lines $y=\pm x$. Let $\mathcal{P}_n$ be the set of all such
paths ending at the line $y=-x$ with $n$ horizontal steps.
\begin{theorem}
There is a bijection $\Phi:\mathcal{P}_n \rightarrow
\mathcal{M}_n$ that takes the number of north steps of $P \in
\mathcal{P}_n$ to the number of nestings of $\Phi(P)$.
\end{theorem}
\begin{remark} While preparing the present paper, we found out about the very recent work of Martin
Rubey~\cite{Rubey} in which he presents a bijective proof of the
same result. However, our bijection is different from Rubey's, as
illustrated in Example \ref{T:phi}. In particular, $\Phi$ may be
of special interest in the study of matchings because a key part
of it is a bijection on matchings which, unlike the other
bijections used in the literature, does not preserve the type of
the matching, i.e., the sets of minimal and maximal elements of
the blocks. This may give further insight into the interaction
between matchings of different type when various statistics of
matchings are studied.
\end{remark}
\section{Definition and properties of the bijection $\Phi$}
Below we define a bijection $\Phi:\mathcal{P}_n \rightarrow
\mathcal{M}_n$ that takes the number of north steps of $P \in
\mathcal{P}_n$ to the number of nestings of $\Phi(P)$. The map
$\Phi$ is defined as the composition of two maps: $\Phi=\phi \circ
\psi$, where $\psi:\mathcal{P}_n \rightarrow \mathcal{M}_n$ and
$\phi:\mathcal{M}_n \rightarrow \mathcal{M}_n$.
\subsection{Bijection $\psi$ from $\mathcal{P}_n$ to $\mathcal{M}_n$}
Every path $P \in \mathcal{P}_n$ is determined by the
$y$-coordinates of its east steps, i.e., a sequence $a_1, \dots,
a_n$ of integers such that $ -(i-1) \leq a_i \leq i-1$. Set
$b_i=a_{n+1-i}+n+1-i$. Note that $ 1 \leq b_i \leq 2(n+1-i)-1$.
Define a matching $M$ on $[2n]$ by connecting the first available
vertex from the left to the $b_i$-th available vertex to its
right, one by one for each $i=1,\dots, n$ in that order. Note that
before the $i$-th step there are $2(n+1-i)$ vertices that are not
connected yet, so each step is possible. We define $\psi(P)=M$. It
is not hard to see that, knowing $M$, one can reverse the steps one
by one and find the $b_i$'s, which determine a path $P$. So $\psi$
is a bijection. Figure \ref{fig:mappsi} shows a path $P \in
\mathcal{P}_7$ and $\psi(P)$.
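The construction of $\psi$ is algorithmic, and a short sketch (our own code, following the definition above) makes it concrete. For $n=2$ and $a=(0,1)$, the path has one north step and $b=(3,1)$, so vertex 1 is joined to the 3rd available vertex (namely 4), then 2 to 3; the resulting matching $\{(1,4),(2,3)\}$ has one nesting, consistent with the theorem:

```python
def psi(a):
    """Map a path, given by the y-coordinates a_1..a_n of its east steps
    (with -(i-1) <= a_i <= i-1), to a matching on [2n]."""
    n = len(a)
    # b_i = a_{n+1-i} + (n+1-i), so 1 <= b_i <= 2(n+1-i)-1
    b = [a[n - i] + n + 1 - i for i in range(1, n + 1)]
    free = list(range(1, 2 * n + 1))
    edges = []
    for bi in b:
        left = free.pop(0)         # first available vertex from the left
        right = free.pop(bi - 1)   # b_i-th available vertex to its right
        edges.append((left, right))
    return edges

assert psi([0]) == [(1, 2)]
assert psi([0, 1]) == [(1, 4), (2, 3)]
```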
\begin{definition} Let $M \in \mathcal{M}_n$.
Suppose the edges $e_1,\dots,e_n$ of $M$ are ordered according to
their left endpoints in ascending order. Suppose $e_i=(a,b)$ and
$e_{i+1}=(c,d)$. Define
\[st_i(M):=
\begin{cases}
\left| \{v: d \leq v \leq b, v \text{ is a vertex of } e_k, k> i
\} \right|, &\text{if $e_i$
and $e_{i+1}$ are nested}\\
0, &\text{otherwise}
\end{cases}
\]
and
\[st(M)=\sum_{i=1}^{n-1}st_i(M).\]
\end{definition}
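The statistic $st$ can likewise be computed directly from the definition. A sketch (our own helper), checked against the matching of Figure \ref{fig:matching}, where only the nested pair $(2,7),(4,6)$ contributes, via the single vertex $6$:

```python
def st(matching):
    """st(M): sum over consecutive (by left endpoint) nested pairs
    e_i = (a,b), e_{i+1} = (c,d) of the number of vertices v with
    d <= v <= b belonging to edges e_k with k > i."""
    edges = sorted(matching)              # order edges by left endpoints
    total = 0
    for i in range(len(edges) - 1):
        (a, b), (c, d) = edges[i], edges[i + 1]
        if a < c < d < b:                 # e_i and e_{i+1} are nested
            later = {v for e in edges[i + 1:] for v in e}
            total += sum(1 for v in later if d <= v <= b)
    return total

M = [(1, 3), (2, 7), (4, 6), (5, 8), (9, 10)]
assert st(M) == 1
```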
\begin{figure}
\caption{\emph{Path $P \in \mathcal{P}_7$ and the matching $\psi(P)$.}}
\label{fig:mappsi}
\end{figure}
\begin{lemma} The number of north steps of $P$ is equal to
$st(\psi(P))$.
\end{lemma}
\begin{proof}
Let $M=\psi(P)$. The number of north steps of $P$ is
\begin{eqnarray}
\sum_{a_{i+1} > a_i}{(a_{i+1}-a_i)} =\sum_{b_{n-i} \geq
b_{n-i+1}+2}{(b_{n-i}-b_{n-i+1}-1)}
\end{eqnarray}
So, it suffices to show that
\begin{eqnarray}
st_i(M)=
\begin{cases}
b_i-b_{i+1}-1, \; &\text{if $b_{i} \geq b_{i+1}+2$} \\
0, &\text{otherwise}
\end{cases}
\end{eqnarray}
After the $i$-th edge $e_i$ is
drawn in the construction of $M$, there are $b_i-1$ unconnected
vertices below it. In the case $b_{i} \geq b_{i+1}+2$, we have
$b_i-1 \geq b_{i+1}+1$ which implies $e_{i+1}$ is nested below
$e_i$ and $st_i(M)=b_i-b_{i+1}-1$. In the other case, when $b_{i}
< b_{i+1}+2$, we have $b_i-1 < b_{i+1}+1$ and hence the edges
$e_{i+1}$ and $e_i$ are crossed (if $b_i>1$) or aligned (if
$b_i=1$). In either case, $st_i(M)=0$.
\end{proof}
\subsection{Bijection $\phi$ from $\mathcal{M}_n$ to $\mathcal{M}_n$}
We describe $\phi$ by a series of transformations on the diagrams of the matchings.
This map preserves the first edge. For $M \in
\mathcal{M}_n$, $N=\phi(M)$ is constructed inductively as follows.
If $n=1$ set $\phi(M)=M$. If $n>1$, let $M_1$ be the matching
obtained from $M$ by deleting its first edge $e_1=(1,r)$ and let
$N_1=\phi(M_1)$. Let $N_2$ be the matching obtained by adding back
the edge $e_1$ in the same position as it was in $M$. Denote by
$e_2$ the second edge of $N_2$ (which was also the second edge of
$M$). There are three cases:
\begin{itemize}
\item [\emph{\textbf{case 1:}}] $e_1$ and $e_2$ were aligned.
In this case set $N=\phi(M)=N_2$.
\item [\emph{\textbf{case 2:}}] $e_1$ and $e_2$ were crossed.
Let $f_2=e_2 =(l_2,r_2), f_3=(l_3,r_3), \dots, f_k=(l_k,r_k)$ be
the edges in $N_2$ crossing $e_1$ ordered by their left endpoints
$2=l_2<l_3<\cdots <l_k$. Rearrange them in the following way:
connect $r_2$ to $l_3$, $r_3$ to $l_4, \dots, r_{k-1}$ to $l_k$.
Finally, insert one additional vertex right before $r$ and connect
it to $r_k$. Delete the vertex $l_2$ and renumber the remaining
vertices (see Figure~\ref{fig:case2}). Note that the position of
the first edge in the matching $N$ obtained this way is the same
as in $M$. Set $\phi(M)=N$.
\begin{figure}
\caption{\emph{Definition of $\phi$ when $e_1$ and $e_2$ are
crossed. Dashed lines are used to represent edges whose left
endpoints have been changed.}}
\label{fig:case2}
\end{figure}
\item [\emph{\textbf{case 3:}}] $e_1$ and $e_2$ were nested.
In $N_2$, let $f_1 =(l_1,r_1), \dots, f_p=(l_p,r_p)$ be the edges
crossing both $e_1=(1,r)$ and $e_2=(2,q)$, and let $f_{p+1}
=(l_{p+1},r_{p+1}), \dots, f_{p+s}=(l_{p+s},r_{p+s})$ be the edges
crossing $e_1$ but not $e_2$, such that $l_1<\cdots
<l_{p}<q<l_{p+1}< \cdots <l_{p+s}$. For easier notation let
$\{l_1<\cdots <l_{p}<q<l_{p+1}< \cdots <l_{p+s}\}=\{v_1<\cdots
<v_{p}<v_{p+1}<v_{p+2}< \cdots <v_{p+s+1}\}$. Add one vertex right
before $r$ and connect it to $v_{s+1}$. ``Rearrange'' the edges
$f_1, \dots, f_{p+s}$ so that $r_1,\dots,r_{p+s}$ are connected to
$v_1,\dots,v_s,v_{s+2},\dots,v_{p+s+1}$ in that order. Finally,
delete the vertex 2 and renumber the remaining vertices. See
Figure~\ref{fig:case3} for an illustration when $p=3$ and $s=2$.
Call the matching obtained this way $N$. The first edge of $N$ is
the same as in $M$. Set $\phi(M)=N$.
\end{itemize}
\begin{figure}
\caption{\emph{Example of case 3 for $p=3$ and $s=2$.}}
\label{fig:case3}
\end{figure}
\begin{example}\label{T:phi}
Figure~\ref{fig:fi} shows step-by-step construction of $\phi(M)$
for the matching $M$ from Figure~\ref{fig:mappsi}. So, for the
path $P$ given in Figure~\ref{fig:mappsi}, the corresponding
matching is
$\Phi(P)=\{(1,4),(2,14),(3,12),(5,8),(6,9),(7,11),(10,13)\}$. Note
that the image of $P$ under Rubey's bijection defined
in~\cite{Rubey} is
$\{(1,4),(2,14),(3,11),(5,8),(6,9),(7,13),(10,12)\}$. Hence the
two bijections are different.
\end{example}
\begin{figure}
\caption{\emph{Example of construction of $\phi(M)$.}}
\label{fig:fi}
\end{figure}
\begin{theorem}
The map $\phi$ is a bijection and $ne(\phi(M))=st(M)$.
\end{theorem}
\begin{proof} To show that $\phi$ is bijective, we explain how to define the inverse map. Note that the matching resulting from case 1 above
has the property that its first edge is $(1,2)$. In the matching
resulting from case 2 (case 3, respectively), the vertex preceding
the right endpoint of the first edge $e_1$ is a left endpoint
(right endpoint, respectively) of an edge different from $e_1$.
Since all the steps in the definition of $\phi$ are invertible, we
simply perform the inverse steps of the corresponding case.
It remains to prove $ne(\phi(M))=st(M)$. For brevity, for any
matching $M$, let $ne(e,M)$ denote the number of edges in $M$
below the edge $e$. Let $M$, $M_1$, $N_1$, $N_2$, and $N$ be the
same as in the definition of $\phi$. By the inductive hypothesis,
$ne(N_1)=st(M_1)=st(M)-st_1(M)$. So we just need to prove
\begin{equation}\label{E:need}
ne(N)=ne(N_1)+st_1(M)
\end{equation}
It is clear that
\begin{equation} \label{E:clear}
ne(N_2)=ne(N_1)+ ne(e_1,N_2)
\end{equation}
In the first case of the definition of $\phi$, ~\eqref{E:need}
clearly follows since $st_1(M)=0$ and we do not add nestings to
$N_1$ by adding back $e_1$.
In the second case, $st_1(M)=0$, so we need to show that
$ne(N)=ne(N_1)$. To this end, if $e$ is an edge in $N_2$ different
from $f_2,\dots, f_k$ (notation from the definition of $\phi$),
let $r(e)$ be the edge in $N$ that corresponds to $e$ in the
obvious way, and let $r(f_i)$ be the edge with right endpoint
$r_i$, for $i=2, \dots, k$. It is clear that
$ne(e,N_2)=ne(r(e),N)$
for any edge $e \notin \{e_1,f_2,\dots, f_k\}$. Note that the left endpoint of $r(f_i)$ in $N$ is $l_{i}-1$ because the vertex $2$ from $N_2$ was deleted (see Figure~\ref{fig:case2}).
So, for $2 \leq i <
k$
\begin{align}
ne(f_i,N_2)&-ne(r(f_i),N)= \notag \\
&=\left|\{\text{edges in $N$ below $e_1$ with left endpoint between $l_i-1$ and $l_{i+1}-1$}\}\right | \label{E:ne1}\\
ne(f_k,N_2)&-ne(r(f_k),N)= \notag \\
&=\left |\{\text{edges in $N$ below $e_1$ with left endpoint between $l_k-1$ and
$r$}\}\right | \label{E:ne2}
\end{align}
By subtracting the following equalities
\begin{align}
ne(N_2)&=\sum_{i=2}^k{ne(f_i,N_2)}+\sum_{e \notin \{f_2,\dots,
f_k\}}{ne(e,N_2)} \label{E:N2} \\
ne(N)&=\sum_{i=2}^k{ne(r(f_i),N)}+\sum_{e \notin \{f_2,\dots,
f_k\}}{ne(r(e),N)} \label{E:N}
\end{align}
and using~\eqref{E:ne1} and ~\eqref{E:ne2} we get
\begin{equation}
ne(N_2)-ne(N)=ne(e_1,N)=ne(e_1,N_2)
\end{equation}
This together with~\eqref{E:clear} gives $ne(N)=ne(N_1)$.
In the third case, similarly, denote by $r(f_i)$ the edge in $N$
that ends with vertex $r_i$, $i=1,\dots, p+s$, by $r(e_2)$ the
edge that ends with the vertex $r-1$, and for every other edge $e$
in $N_2$, denote by $r(e)$ the edge in $N$ that corresponds to $e$
in the natural way. In $N_2$, define $a$ to be the number of edges
below $e_1$ and crossing $e_2=(2,q)$ and $b$ to be the number of
those edges below $e_1$ with a left endpoint right of $q$. In
what follows, $v_i$ are the vertices defined in case 3 of the
definition of $\phi$. Then
\begin{align}
st_1(M)&=1+a+2b+s \label{E:1}\\
ne(r(e_2),N)&= \left| \{\text{edges in $N_2$ below $e_1$ with left
endpoint between
$v_{s+1}$ and $r$}\}\right| \label{E:2}\\
ne(N_2)&=ne(N_1)+ne(e_2,N_2)+1+a+b \label{E:first}\\
ne(N_2)&=ne(e_1,N_2)+ne(e_2,N_2)+\sum_{i=1}^{p+s}{ne(f_i,N_2)}+\sum_{e
\notin
\{e_1,e_2,f_1,\dots, f_{p+s}\}}{ne(e,N_2)} \label{E:second}\\
ne(N)&=ne(e_1,N)+ne(r(e_2),N)+
\sum_{i=1}^{p+s}{ne(r(f_i),N)}+\sum_{e \notin \{e_1,e_2,f_1,\dots,
f_{p+s}\}}{ne(r(e),N)} \label{E:third}
\end{align}
To complete the proof, we need to distinguish two cases: $ s\geq
p$ and $p>s$. When $ s\geq p$, close inspection of the
``rearrangement'' of the edges reveals:
\begin{align}\label{E:3}
ne(r(f_i),N)-ne(f_i,N_2)=
\begin{cases}
1, & 1 \leq i \leq p\\
1+ \left| \{\text{edges in $N_2$ below $e_1$ with left endpoint
between
$v_i$ and $v_{i+1}$}\}\right|, & p < i \leq s \\
0, & s < i \leq p+s
\end{cases}
\end{align}
while when $p>s$, similar equalities hold:
\begin{align}\label{E:4}
ne(r(f_i),N)-ne(f_i,N_2)=
\begin{cases}
1, & 1 \leq i \leq s\\
-\left| \{\text{edges in $N_2$ below $e_1$ with left endpoint
between $v_i$ and $v_{i+1}$}\}\right|, & s < i \leq p\\
0, & p < i \leq p+s
\end{cases}
\end{align}
Now, we add equations~\eqref{E:first} and~\eqref{E:third} and
subtract~\eqref{E:second} from their sum. Using \eqref{E:1},
\eqref{E:2}, and \eqref{E:3} (resp.~\eqref{E:4}), we
get~\eqref{E:need}.
\end{proof}
\subsection{Some properties of $\Phi$}
First we need a few definitions. We say that $\{l,l+1,\dots,k\}$ is
a component of a matching $M \in \mathcal{M}_n$ if the
restrictions of $M$ to each of the sets $\{1,\dots,l-1\}$,
$\{l,l+1, \dots,k \}$, and $\{ k+1 ,\dots, n\}$ are matchings
themselves. A matching is called irreducible if it has only one
component. In terms of diagrams, a matching is irreducible if it
cannot be split by vertical bars into disjoint matchings.
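As an illustration only (not part of the paper; the function names are our own), the component decomposition can be computed by scanning for cut points: a vertical bar can be placed after vertex $v$ exactly when no edge straddles $v$. A minimal Python sketch, assuming a matching is given as a list of edges $(l,r)$ with $l<r$ on the vertex set $\{1,\dots,n\}$:

```python
def components(edges, n):
    """Split a matching on {1,...,n} into its components.

    A cut is possible after vertex v when every edge (l, r) with
    l <= v also has r <= v, i.e. no edge straddles v.  Components
    are the maximal intervals between consecutive cuts.
    """
    cuts = [0]
    for v in range(1, n + 1):
        max_right = max((r for (l, r) in edges if l <= v), default=0)
        if max_right <= v:
            cuts.append(v)
    return [list(range(a + 1, b + 1)) for a, b in zip(cuts, cuts[1:])]

def is_irreducible(edges, n):
    """A matching is irreducible when it has a single component."""
    return len(components(edges, n)) == 1
```

For example, the matching $\{(1,3),(2,4),(5,6)\}$ splits into the components $\{1,2,3,4\}$ and $\{5,6\}$.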
A component of a path $P \in \mathcal{P}_n$ is a subsequence of
consecutive steps beginning at $(l,-l)$ and ending at $(k,-k)$
such that both parts of $P$, between $(l,-l)$ and $(k,-k)$ and
between $(k,-k)$ and $(n,-n)$, when translated by the appropriate
vectors to the origin, represent paths in $\mathcal{P}_{k-l}$ and
$\mathcal{P}_{n-k}$, respectively. A component which does not have
nontrivial subcomponents is called irreducible.
\begin{proposition} For $P \in \mathcal{P}_n$ the following are
true:
\begin{itemize}
\item[(a)] $P$ has $k$ south steps on the line $x=n$ if and only
if, in $\Phi(P)$, $1$ is connected to $k+1$.
\item[(b)] The irreducible components of $P$ read backwards are
in one-to-one correspondence with the irreducible components of
$\Phi(P)$ from left to right.
\end{itemize}
\end{proposition}
\begin{proof}
\begin{itemize}
\item[(a)] From the definition of $\psi$, it is clear that $P$ has
$k$ south steps on the line $x=n$ if and only if, in $\psi(P)$,
$1$ is connected to $k+1$. Thus, the claim follows from the fact
that $\phi$ preserves the first edge.
\item[(b)] This statement is clearly true if we replace $\Phi$ by
$\psi$. Hence, it suffices to observe that if the irreducible
components of $\psi(P)$ are $C_1, \dots, C_k$, then $\phi(C_1),
\dots, \phi(C_k)$ are the irreducible components of $\Phi(P)$.
\end{itemize}
\end{proof}
\begin{proposition}
If $P$ is a path with no north steps (a Dyck path), then $M=\Phi(P)$
is the unique matching with no nestings such that $i$ is a left
endpoint in $M$ exactly when the $(2n+1-i)$-th step of $P$ is a
south step.
\noindent In other words, the set of left and right endpoints of
$M$ is determined by $P$ traced backwards.
\end{proposition}
\begin{proof}
It follows from the definition of $\psi$ that the statement is
true for $\psi(P)$. Moreover, since $\psi(P)$ has no nestings,
$\phi$ leaves $\psi(P)$ unchanged.
\end{proof}
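The uniqueness here reflects a standard combinatorial fact: a prescribed set of left endpoints admits exactly one nesting-free pairing, obtained by matching openers to closers first-in-first-out (crossings may occur, nestings cannot). A small Python sketch of this FIFO construction (our own illustration; the names are hypothetical, not the paper's notation):

```python
from collections import deque

def nonnesting_matching(is_opener):
    """Pair openers with closers first-in-first-out.

    is_opener[i] is True iff vertex i+1 is a left endpoint.
    The FIFO rule yields the unique matching with these left
    endpoints and no nestings.
    """
    queue, edges = deque(), []
    for v, opener in enumerate(is_opener, start=1):
        if opener:
            queue.append(v)
        else:
            edges.append((queue.popleft(), v))  # oldest opener first
    return edges

def nestings(edges):
    """Count nested pairs (l1,r1), (l2,r2) with l1 < l2 < r2 < r1."""
    return sum(1 for (l1, r1) in edges for (l2, r2) in edges
               if l1 < l2 < r2 < r1)
```

With openers at positions $1,2,4$, the FIFO rule produces $(1,3),(2,5),(4,6)$, which indeed has no nestings.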
\section*{Acknowledgment}
The author would like to thank Catherine Yan for suggesting this
problem, which was posed by A.~Rechnitzer at the 19th FPSAC in
July 2007.
\textsc{Svetlana Poznanovi\'{c}: Department of Mathematics, Texas
A\&M
University, College Station, TX 77843 USA} \\
\emph{E-mail address}: \verb"[email protected]"
\end{document}
\begin{document}
\title{Galois actions on N\'eron models of Jacobians}
\author{Lars Halvard Halle}\email{[email protected]}
\address{Departement Wiskunde, Celestijnenlaan 200B, B-3001 Leuven (Heverlee), Belgium}
\begin{abstract}
Let $X$ be a smooth curve defined over the fraction field $K$ of a complete d.v.r. $R$, and let $K'/K$ be a tame extension. We study extensions of the $G = \mathrm{Gal}(K'/K)$-action on $ X_{K'} $ to certain regular models of $X_{K'}$ over $R'$, the integral closure of $R$ in $K'$. In particular, we consider the induced action on the cohomology groups of the structure sheaf of the special fiber of such a regular model, and obtain a formula for the Brauer trace of the endomorphism induced by a group element on the alternating sum of the cohomology groups.
We apply these results to study a natural filtration of the special fiber of the N\'eron model of the Jacobian of $X$ by closed, unipotent subgroup schemes. We show that the jumps in this filtration only depend on the fiber type of the special fiber of the minimal regular model with strict normal crossings for $X$ over $R$, and in particular are independent of the residue characteristic. Furthermore, we obtain information about where these jumps occur. We also compute the jumps for each of the finitely many possible fiber types for curves of genus $1$ and $2$.
\end{abstract}
\keywords{Models of curves, tame cyclic quotient singularities, group actions on cohomology, N\'eron models}
\thanks{The research was partially supported by the Fund for Scientific Research - Flanders (G.0318.06)}
\maketitle
\section{Introduction}
Let $X$ be a smooth, projective and geometrically connected curve of genus $ g(X) > 0 $, defined over the fraction field $K$ of a complete discrete valuation ring $R$, with algebraically closed residue field $k$. By a \emph{model} for $X$ over $R$, we mean an integral and normal scheme $ \mathcal{X} $ that is flat and projective over $ S = \mathrm{Spec}(R) $, and with generic fiber $ \mathcal{X}_K \cong X $. The special fiber $ \mathcal{X}_k $ of such a model is called a \emph{reduction} of $X$.
The semi-stable reduction theorem, due to Deligne and Mumford (\cite{DelMum}, Corollary 2.7), states that there exists a finite, separable field extension $ L/K$ such that $X_L$ admits a semi-stable model over the integral closure $R_L$ of $R$ in $L$.
In order to study reduction properties of $X$, it can often be useful to work with the Jacobian $J/K$ of $X$. The question of whether $X$ has semi-stable reduction over $S = \mathrm{Spec}(R)$ is reflected in the structure of the \emph{N\'eron model} $ \mathcal{J}/S$ (cf.~\cite{Ner}) of $J$. In fact, $X$ has semi-stable reduction over $S$ if and only if $ \mathcal{J}_k^0 $, the identity component of the special fiber, has trivial \emph{unipotent radical} (\cite{DelMum}, Proposition 2.3).
In general, it is necessary to make ramified base extensions in order for $X$ to obtain semi-stable reduction. If the residue characteristic is positive, it can often be difficult to find explicit extensions over which $X$ obtains stable reduction. In the case where a tamely ramified extension suffices, one can do this by considering the geometry of suitable regular models for $X$ over $ S $ (cf.~\cite{Thesis}). In this paper we study, among other things, how the geometry of the N\'eron model contains information that is relevant for obtaining semi-stable reduction for $X$.
\subsection{N\'eron models and tame base change}
Let $ K'/K $ be a finite, separable and tamely ramified extension of fields, and let $R'$ be the integral closure of $R$ in $ K' $. Then $R'$ is a complete discrete valuation ring, with residue field $k$. Furthermore, $ K'/K $ is Galois, with group $ G = \boldsymbol{\mu}_n $, where $ n = \mathrm{deg}(K'/K) $.
Let $ \mathcal{J}'/S' $ be the N\'eron model of the Jacobian of $ X_{K'} $, where $ S' = \mathrm{Spec}(R') $. Due to a result by B. Edixhoven (\cite{Edix}, Theorem 4.2), it is possible to describe $ \mathcal{J}/S $ in terms of $ \mathcal{J}'/S' $, together with the induced $G$-action on $ \mathcal{J}' $. Namely, if $W$ denotes the \emph{Weil restriction} of $ \mathcal{J}'/S' $ to $S$ (cf.~\cite{Ner}, Chapter 7), one can let $G$ act on $W$ in such a way that $ \mathcal{J} \cong W^G $, where $ W^G $ denotes the scheme of invariant points. In particular, one gets an isomorphism $ \mathcal{J}_k \cong W_k^G $. By \cite{Edix}, Theorem 5.3, one can use this description of $ \mathcal{J}_k $ to define a descending filtration
$$ \mathcal{J}_k = F_n^0 \supseteq \ldots \supseteq F_n^i \supseteq \ldots \supseteq F_n^n = 0 $$
of $ \mathcal{J}_k $ by closed subgroup schemes.
In \cite{Edix}, Remark 5.4.5, a generalization of this setup is suggested. If we define $ \mathcal{F}^{i/n} = F_n^i $, where $ F_n^i $ is the $i$-th step in the filtration induced by the extension of degree $n$, one can consider the filtration
$$ \mathcal{J}_k = \mathcal{F}^0 \supseteq \ldots \supseteq \mathcal{F}^a \supseteq \ldots \supseteq \mathcal{F}^1 = 0, $$
with indices in $ \mathbb{Z}_{(p)} \cap [0,1] $. The construction of $ \mathcal{F}^a $ is independent of the choice of representatives $i$ and $n$ for $ a = i/n $.
The filtration $ \{ \mathcal{F}^a \} $ contains significant information about $\mathcal{J}$. For instance, the subgroup schemes $ \mathcal{F}^a $ are \emph{unipotent} for $ a > 0 $, so in a natural way, this filtration gives a measure of how far $ \mathcal{J}/S $ is from being \emph{semi-abelian}.
One way to study the filtration $ \{ \mathcal{F}^a \} $ is to determine where it \emph{jumps}. This will occupy a considerable part of this paper. The jumps in the filtration often give explicit numerical information about $X$. For instance, if $X$ obtains stable reduction after a tamely ramified extension, we show that the jumps occur at indices of the form $i/\tilde{n}$, where $\tilde{n}$ is the degree of the minimal extension that realizes stable reduction for $X$.
It follows from Edixhoven's theory that to determine the jumps in the filtration $ \{ F^i_n \} $ induced by an extension of degree $n$, one needs to compute the irreducible characters for the representation of $ \boldsymbol{\mu}_n $ on the tangent space $ T_{ \mathcal{J}'_k, 0 } $. We shall use such computations for \emph{infinitely} many integers $n$ to describe the jumps of the filtration $ \{ \mathcal{F}^a \} $ with rational indices.
\subsection{N\'eron models for Jacobians}
Contrary to the case of general abelian varieties, N\'eron models for Jacobians can be constructed in a fairly concrete way, using the theory of the relative Picard functor (cf.~\cite{Ner}, Chapter 9). The following property will be of particular importance to us: If $ \mathcal{Z}/S' $ is a regular model for $ X_{K'}/K' $, then there is a canonical isomorphism
$$ \mathrm{Pic}_{\mathcal{Z}/S'}^0 \cong (\mathcal{J}')^0, $$
where $ \mathrm{Pic}_{\mathcal{Z}/S'}^0 $ (resp.~$(\mathcal{J}')^0$) is the identity component of $ \mathrm{Pic}_{\mathcal{Z}/S'} $ (resp.~$\mathcal{J}'$). It follows that there is a canonical isomorphism
$$ H^1(\mathcal{Z}_k, \mathcal{O}_{\mathcal{Z}_k}) \cong T_{\mathcal{J}'_k, 0}. $$
We shall work with regular models $ \mathcal{Z} $ of $ X_{K'} $ that admit $G$-actions that are compatible with the $G$-action on $ \mathcal{J}' $. It will then follow that the representation of $G$ on $ T_{\mathcal{J}'_k, 0} $ can be described in terms of the representation of $G$ on $ H^1(\mathcal{Z}_k, \mathcal{O}_{\mathcal{Z}_k}) $.
\subsection{Models and actions}
In order to find an $S'$-model for $X_{K'} $ with a compatible $G$-action, we take a model $ \mathcal{X} $ of $X$ over $S$, and consider its pullback $ \mathcal{X}_{S'} $ to $ S' $. Let $ \mathcal{Y} \to \mathcal{X}' \to \mathcal{X}_{S'} $ be the composition of the normalization with the minimal desingularization. Then $ \mathcal{Y} $ is a model of $ X_{K'} $ with an action of $G$ that lifts the obvious action on $ \mathcal{X}_{S'} $. The $G$-action restricts to the special fiber $ \mathcal{Y}_k $, and in particular, $G$ will act on the cohomology groups $ H^i(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $, for $ i = 0, 1 $.
In order to understand the $G$-action on $ H^i(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $, we need a good description of the geometry of $\mathcal{Y}$ and of the $G$-action on $ \mathcal{Y} $. For this purpose, we demand that the model $ \mathcal{X} $ has good properties. To begin with, we shall require that $ \mathcal{X} $ is regular, and that the special fiber is a divisor with strict normal crossings. Furthermore, we shall always require that any two irreducible components of $ \mathcal{X}_k $, whose multiplicities are both divisible by the residue characteristic, have empty intersection. This condition is automatically fulfilled if $X$ obtains stable reduction after a tamely ramified extension, but holds also for a larger class of curves.
Under these assumptions, it turns out that the normalization $ \mathcal{X}' $ of $ \mathcal{X}_{S'} $ has at most \emph{tame cyclic quotient singularities} (cf.~\cite{CED}, Definition 2.3.6 and \cite{Thesis}, Paper I, Proposition 4.3). These singularities can be resolved explicitly, and it can be seen that $ \mathcal{Y} $ is a strict normal crossings model for $ X_{K'} $.
We shall also only consider the case where $ n = \mathrm{deg}(K'/K) $ is relatively prime to the multiplicities of all the irreducible components of $ \mathcal{X}_k $. With this additional hypothesis, it turns out that we can describe the combinatorial structure of the special fiber $ \mathcal{Y}_k $ (i.e., the intersection graph of the irreducible components, their genera and multiplicities), in terms of the corresponding data for $ \mathcal{X}_k $.
If all the assumptions above are satisfied, it follows that all irreducible components of $ \mathcal{Y}_k $ are stable under the $G$-action on $ \mathcal{Y} $, and that all intersection points in $ \mathcal{Y}_k $ are fixed points. We can explicitly describe the action on the cotangent space of $ \mathcal{Y} $ at these intersection points, and the restriction of the $G$-action to each irreducible component of $ \mathcal{Y}_k $.
\subsection{Action on cohomology}
Next, we study the representation of $ G = \boldsymbol{\mu}_n $ on $ H^1(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $. In particular, we would like to compute the irreducible characters for this representation. So for every $ g \in G $, we want to compute the trace of the endomorphism of $ H^1(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $ induced by $g$, and then use this information to find the characters.
There are some technical problems that need to be overcome in order to do this. First, since we allow the residue characteristic to be positive, just knowing the trace for each $g \in G$ may not give sufficient information to compute the characters. Instead, we have to compute the so-called \emph{Brauer trace} for every $g \in G $ (cf.~\cite{SerreLin}, Chapter 18). This means that we have to lift the eigenvalues and traces from characteristic $p$ to characteristic $0$. From knowing the Brauer trace for every $ g \in G $ we can compute the irreducible Brauer characters, and then the ordinary characters are obtained by reducing the Brauer characters modulo $p$. Second, the special fiber $ \mathcal{Y}_k $ will in general be singular, and even non-reduced. This complicates trace computations considerably.
To deal with these problems, we introduce in Section \ref{section 6} a certain filtration of the special fiber $ \mathcal{Y}_k $ by effective subdivisors, where the difference at the $i$-th step is an irreducible component $ C_i $ of $ \mathcal{Y}_k $. Since $ \mathcal{Y} $ is an SNC-model, each $ C_i $ is a smooth and projective curve, and with our assumption on $n$, the $G$-action restricts to each $ C_i $. Furthermore, to each step in this filtration, one can in a natural way associate an invertible $ G $-sheaf $ \mathcal{L}_i $, supported on $C_i$.
We apply the so-called Lefschetz--Riemann--Roch formula (\cite{Don}, Corollary 5.5), in order to get a formula for the Brauer trace of the endomorphism induced by each $ g \in G $ on the formal difference $ H^0(C_i,\mathcal{L}_i) - H^1(C_i,\mathcal{L}_i) $. An important step is to show that our description of the action on $ \mathcal{Y} $ is precisely the data that is needed to obtain these formulas. Then we show that these traces add up to give the Brauer trace for the endomorphism induced by each $ g \in G $ on the formal difference $ H^0(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) - H^1(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $. In particular, we give in Theorem \ref{thm. 9.13}, one of the main results in this work, a formula for this Brauer trace, and show that it only depends on the combinatorial structure of $ \mathcal{X}_k $.
Let us also remark that in our situation, we already know the character for $ H^0(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $, and hence we will be able to compute the irreducible characters for $ H^1(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $ in this way.
\subsection{Conclusions and computations}
If now $ \mathcal{X}/S $ is the minimal regular model with strict normal crossings for $ X/K $, we prove in Theorem \ref{main character theorem} that the irreducible characters for the representation of $ G = \boldsymbol{\mu}_n $ on $ H^1(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $ only depend on the combinatorial structure of the special fiber $ \mathcal{X}_k $, as long as $n$ is relatively prime to $l$, where $l$ is the least common multiple of the multiplicities of the irreducible components of $ \mathcal{X}_k $.
Let $ \mathcal{J} $ be the N\'eron model of the Jacobian of $X$. Then it follows from Theorem \ref{main character theorem} that the jumps in the filtration $ \{ \mathcal{F}^a \mathcal{J}_k \} $ only depend on the combinatorial structure of $ \mathcal{X}_k $ (Corollary \ref{main jump corollary}). This is due to the fact that $ \mathbb{Z}_{ (p l) } \cap [0,1] $ is ``dense'' in $ \mathbb{Z}_{ (p) } \cap [0,1] $. Furthermore, in Corollary \ref{specific jump corollary}, we draw the conclusion that the jumps are actually independent of~$p$, and that the jumps can only occur at finitely many rational numbers of a certain kind, depending on the combinatorial structure of $ \mathcal{X}_k $.
For a fixed genus $ g \geq 1 $, there are only a finite number of possible combinatorial structures for $ \mathcal{X}_k $, modulo a certain equivalence relation. In case $ g = 1 $ or $ g = 2 $, one has complete classifications (cf.~\cite{Kod} for $g=1$ and \cite{Ueno}, \cite{Ogg} for $g=2$). In Section \ref{computations and jumps} we compute the jumps for each possible fiber type for $g=1$ (which were also computed by Schoof in \cite{Edix}) and for $g=2$.
\section{N\'eron models and tamely ramified extensions}\label{Neron}
\subsection{}
Let $ R $ be a discrete valuation ring, with fraction field $K$ and residue field $k$, and let $A$ be an abelian variety over $K$. There exists a canonical extension of $A$ to a smooth group scheme $ \mathcal{A} $ over $ S = \mathrm{Spec}(R) $, known as the \emph{N\'eron model} (\cite{Ner}, Theorem 1.4/3). The N\'eron model is characterized by the following universal property: for every smooth morphism $ T \rightarrow S $, the induced map $ \mathcal{A}(T) \rightarrow A(T_K) $ is \emph{bijective}.
\subsection{}\label{neronbase}
We assume from now on that $R$ is strictly henselian. Let $ K'/K $ be a finite, separable extension of fields, and let $R'$ be the integral closure of $R$ in $K'$. Let $ \mathcal{A}'/S' $ denote the N\'eron model of the abelian variety $ A_{K'}/K' $, where $ S' = \mathrm{Spec}(R') $. In general, it is not so easy to describe how N\'eron models change under ramified base extensions. However, in the case where $K'/K$ is tamely ramified, one can relate $ \mathcal{A}'/S' $ and $ \mathcal{A}/S $ in a nice way, due to a result by B. Edixhoven (\cite{Edix}, Theorem 4.2). We will in this section explain this relation, following the treatment in \cite{Edix}. We refer to this paper for further details.
Assume now that $ K'/K $ is tamely ramified. Then $ K'/K $ is Galois with group $ G = \boldsymbol{\mu}_n $, where $ n = [K' : K] $. Let $G$ act on $ A_{K'} = A \times_{\mathrm{Spec}(K)} \mathrm{Spec}(K') $ (from the right), via the action on the right factor. By the universal property of $ \mathcal{A}' $, this $G$-action on $ A_{K'} $ extends uniquely to a right action on $ \mathcal{A}' $, such that the morphism $ \mathcal{A}' \rightarrow S' $ is equivariant. The idea in \cite{Edix} is to reconstruct $ \mathcal{A} $ as an invariant scheme for this action. However, since $ \mathcal{A} $ is an $S$-scheme, it is necessary to ``push forward'' from $S'$ to $S$.
\subsection{}
The \emph{Weil restriction} of $ \mathcal{A}' $ to $S$ is the functor $ \Pi_{S'/S}(\mathcal{A}'/S') $ defined by assigning, to any $S$-scheme $T$, the set $ \Pi_{S'/S}(\mathcal{A}'/S')(T) = \mathcal{A}'(T_{S'}) $ (cf.~\cite{Ner}, Chapter 7). This functor is representable by an $S$-scheme, which we will denote by $ \Pi_{S'/S}(\mathcal{A}'/S') $.
In \cite{Edix}, an equivariant $G$-action on $ \Pi_{S'/S}(\mathcal{A}'/S') $ is defined, corresponding to the $G$-action on $ \mathcal{A}' $.
Furthermore, there is a canonical morphism
$$ \mathcal{A} \rightarrow \Pi_{S'/S}(\mathcal{A}'/S'), $$
which, according to \cite{Edix}, Theorem 4.2, is a closed immersion, and induces an isomorphism
\begin{equation}\label{Bastheorem}
\mathcal{A} \cong (\Pi_{S'/S}(\mathcal{A}'/S'))^G,
\end{equation}
where $(\Pi_{S'/S}(\mathcal{A}'/S'))^G$ denotes the \emph{scheme of invariant points} for this $ G $-action.
\subsection{Filtration of $\mathcal{A}_k$}
One can use Isomorphism (\ref{Bastheorem}) to study the special fiber $ \mathcal{A}_k $ in terms of $ \mathcal{A}'_k $, together with the $G$-action. Indeed, let $ R \subset R' = R[\pi']/(\pi'^n - \pi) $ be a tame extension, where $ \pi $ is a uniformizing parameter for $R$. Then we have that $ R'/\pi R' = k[\pi']/(\pi'^n) $. For any $ k $-algebra $C$, it follows that
$$ \mathcal{A}_k(C) \cong X_k^G(C) \cong X_k(C)^G \cong \mathcal{A}'(C[\pi']/(\pi'^n))^G, $$
where $ X = \Pi_{S'/S}(\mathcal{A}'/S') $.
In \cite{Edix}, Chapter 5, this observation is used to construct a filtration of $\mathcal{A}_k$. To do this, let us first consider an $R$-algebra $ C $. From Isomorphism (\ref{Bastheorem}) one gets a map
$$ \mathcal{A}(C) \rightarrow \mathcal{A}'(C \otimes_R R'), $$
which further gives
$$ \mathcal{A}(C) \rightarrow \mathcal{A}'(C \otimes_R R') \rightarrow \mathcal{A}'(C \otimes_R R'/(\pi'^i)), $$
for any integer $i$ such that $ 0 \leq i \leq n $. Define functors $ F^i\mathcal{A}_k $ by
$$ F^i\mathcal{A}_k(C) = \mathrm{Ker}(\mathcal{A}(C) \rightarrow \mathcal{A}'(C \otimes_R R'/(\pi'^i))), $$
for any $ k = R/(\pi)$-algebra $C$. The functors $ F^i\mathcal{A}_k $ are represented by closed subgroup schemes of $ \mathcal{A}_k $, and give rise to a descending filtration
\begin{equation}\label{filtr}
\mathcal{A}_k = F^0\mathcal{A}_k \supseteq F^1\mathcal{A}_k \supseteq \ldots \supseteq F^n\mathcal{A}_k = 0.
\end{equation}
\subsection{}
The successive quotients of Filtration (\ref{filtr}) can be described quite accurately: Let $ Gr^i \mathcal{A}_k $ denote the quotient $ F^i\mathcal{A}_k/F^{i+1}\mathcal{A}_k $, for $ i \in \{ 0, \ldots, n-1 \} $. Then, according to Theorem 5.3 in \cite{Edix}, we have that $ Gr^0(\mathcal{A}_k) = (\mathcal{A}'_k)^{\boldsymbol{\mu}_n} $, and for $ 0 < i < n $, we have that
$$ Gr^i \mathcal{A}_k \cong T_{\mathcal{A}'_k,0}[i] \otimes_k (\mathfrak{m}/\mathfrak{m}^2)^{\otimes i}, $$
where $ \mathfrak{m} \subset R'$ is the maximal ideal, and where $ T_{\mathcal{A}'_k,0}[i] $ denotes the subspace of $ T_{\mathcal{A}'_k,0} $ where $ \xi \in \boldsymbol{\mu}_n $ acts by multiplication by $ \xi^i $. In particular, we note that the group schemes $ F^i\mathcal{A}_k $ are \emph{unipotent} for $ i > 0 $.
The filtration \emph{jumps} at the index $ i \in \{ 0, \ldots, n-1 \} $ if $ Gr^i \mathcal{A}_k \neq 0 $. Since
$$ T_{\mathcal{A}'_k,0}[0] = (T_{\mathcal{A}'_k,0})^{\boldsymbol{\mu}_n} = T_{(\mathcal{A}'_k)^{\boldsymbol{\mu}_n},0} $$
(use \cite{Edix}, Proposition 3.2), it follows that the jumps are completely determined by the representation of $ \boldsymbol{\mu}_n $ on $ T_{\mathcal{A}'_k,0} $. In particular, it follows that there are at most $ \mathrm{dim}(A) $ jumps, since $ \mathrm{dim}_k T_{\mathcal{A}'_k,0} = \mathrm{dim}(A) $.
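Concretely (a sketch of our own, with hypothetical names, leaving aside the jump at $i=0$, which is governed by $Gr^0$): list, with multiplicity, the exponent $i$ of each eigenvector of a generator of $ \boldsymbol{\mu}_n $ on the tangent space; the filtration then jumps at a positive index $i$ exactly when that exponent occurs, so there are at most $ \mathrm{dim}(A) $ jumps.

```python
def jump_indices(exponents, n, dim_A):
    """Positive indices i with T[i] != 0, i.e. Gr^i != 0.

    `exponents` lists, with multiplicity, the exponent i of each
    eigenvector of a chosen generator of mu_n acting on the
    tangent space, so len(exponents) == dim_A.
    """
    assert len(exponents) == dim_A
    js = sorted({i % n for i in exponents} - {0})
    assert len(js) <= dim_A  # at most dim(A) jumps
    return js
```

For instance, two eigenvectors with exponents $2$ and $5$ for $n=7$ give jumps at $i=2$ and $i=5$; two eigenvectors with the same exponent give a single jump.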
\subsection{Filtration with rational indices}\label{ratfil}
Let $ a \in \mathbb{Z}_{(p)} \cap [0,1] $. If $ a = i/n $, then we define $ \mathcal{F}^a \mathcal{A}_k = F^i_n \mathcal{A}_k $, where $ F^i_n \mathcal{A}_k $ denotes the $i$-th step in the filtration induced by the tame extension of degree $n$. This definition does not depend on the choice of representatives $i$ and $n$ for $ a = i/n $ (\cite{Thesis}, Lemma 2.3). The following proposition is immediate:
\begin{prop}\label{ratfilprop}
The construction above gives a descending filtration
$$ \mathcal{A}_k = \mathcal{F}^0 \mathcal{A}_k \supseteq \ldots \supseteq \mathcal{F}^a\mathcal{A}_k \supseteq \ldots \supseteq \mathcal{F}^1 \mathcal{A}_k = 0 $$
of $ \mathcal{A}_k $ by closed subgroup schemes, where $ a \in \mathbb{Z}_{(p)} \cap [0,1] $.
\end{prop}
Let $ x \in [0,1] $ be a real number, and let $ (x^j)_j $ (resp. $ (x_k)_k $) be a sequence of numbers in $ \mathbb{Z}_{(p)} \cap [0,1] $ converging to $x$ from above (resp. from below). We will say that $ \{ \mathcal{F}^a \mathcal{A}_k \} $ \emph{jumps} at $x$ if $ \mathcal{F}^{x_k} \mathcal{A}_k \supsetneq \mathcal{F}^{x^j} \mathcal{A}_k $ for all $j$ and $k$. It is natural to ask \emph{how} many jumps there are, and \emph{where} they occur. It is easily seen that since every discrete filtration $ \{ F^i_n \mathcal{A}_k \} $ jumps at most $ g = \mathrm{dim}(A) $ times, the filtration $ \{ \mathcal{F}^a \mathcal{A}_k \} $ can have at most $g$ jumps.
Consider a positive integer $n$ that is not divisible by $p$, and let $ \{ F^i_n \mathcal{A}_k \} $ be the filtration induced by the extension of degree $n$. Let us assume that this filtration has a jump at some $ i \in \{ 0, \ldots, n-1 \} $. Then we can say that $ \{ \mathcal{F}^a \mathcal{A}_k \} $ has a jump in the interval $ [i/n, (i+1)/n] $. By computing jumps in this way for increasing $n$, we get finer partitions of the interval $ [0,1] $, and increasingly better approximations of the jumps in $ \{ \mathcal{F}^a \mathcal{A}_k \} $.
It follows that one can compute the jumps of $ \{ \mathcal{F}^a \mathcal{A}_k \} $ by computing the jumps for the filtrations $ \{ F^i_n \mathcal{A}_k \} $ for ``sufficiently'' many $n$ that are not divisible by $p$. This would for instance be the case for a multiplicatively closed subset $ \mathcal{U} \subset \mathbb{N} $ such that $ \mathbb{Z}[\mathcal{U}^{-1}] \cap [0,1] $ is dense in $ \mathbb{Z}_{(p)} \cap [0,1] $.
\subsection{}
In the case where $ A/K $ obtains semi-abelian reduction over a tamely ramified extension $K'$ of $K$, the jumps of $ \{ \mathcal{F}^a \mathcal{A}_k \} $ have an interesting interpretation, which we will now explain. Let $ \widetilde{K} $ be the minimal extension over which $A$ acquires semi-abelian reduction (cf.~\cite{Deschamps}, Th\'eor\`eme 5.15), and let $ \tilde{n} = \mathrm{deg}(\widetilde{K}/K) $. Then we shall see below that the jumps occur at rational numbers of the form $ k/\tilde{n} $, where $ k \in \{0, \ldots, \tilde{n} - 1 \} $. This is essentially due to the following observation:
\begin{lemma}\label{tame jumps}
Let $ \widetilde{K}/K $ be the minimal extension over which $A/K$ obtains semi-abelian reduction, and let $ \tilde{n} = \mathrm{deg}(\widetilde{K}/K) $. Consider a tame extension $ K'/K $ of degree $ n $, factoring via $ \widetilde{K} $, and let $ m = n/\tilde{n} $. Let $\mathcal{A}'/S' $ be the N\'eron model of $A_{K'}$.
Then we have that the jumps in the filtration $ \{ F^i_n \mathcal{A}_k \} $ induced by $ S'/S $ occur at indices $ i = k n / \tilde{n} $, where $ 0 \leq k \leq \tilde{n} - 1$.
\end{lemma}
\begin{pf}
Let $ \widetilde{\mathcal{A}}/\widetilde{S} $ be the N\'eron model of $ A_{\widetilde{K}} $. By assumption, both $\mathcal{A}'$ and $ \widetilde{\mathcal{A}} $ are semi-abelian. Since $ \widetilde{\mathcal{A}}_{S'} $ is smooth, and $\mathcal{A}'$ has the N\'eronian property, we get a canonical morphism $ \widetilde{\mathcal{A}}_{S'} \rightarrow \mathcal{A}' $, extending the identity map on the generic fibers. Since $ \widetilde{\mathcal{A}}_{S'} $ is semi-abelian, it follows from Proposition 7.4/3 in \cite{Ner} that this morphism induces an isomorphism $ (\widetilde{\mathcal{A}}_k)^0 \cong (\mathcal{A}'_k)^0 $. In particular, we get that $ T_{\widetilde{\mathcal{A}}_k,0} = T_{\mathcal{A}'_k, 0} $.
Consider now the filtration $ \{ F^i_m \widetilde{\mathcal{A}}_k \} $ of $ \widetilde{\mathcal{A}}_k $ induced by the extension $ S'/\widetilde{S} $. Since $ \widetilde{\mathcal{A}} $ is semi-abelian, we have that $ F^i_m \widetilde{\mathcal{A}}_k = 0 $ for all $ i > 0 $. Therefore, we get that
$$ \widetilde{\mathcal{A}}_k = F^0_m \widetilde{\mathcal{A}}_k = Gr^0_m \widetilde{\mathcal{A}}_k = (\mathcal{A}'_k)^{\boldsymbol{\mu}_m}. $$
But now
$$ (T_{\mathcal{A}'_k, 0})^{\boldsymbol{\mu}_m} = T_{(\mathcal{A}'_k)^{\boldsymbol{\mu}_m}, 0} = T_{\mathcal{A}'_k, 0}, $$
and so it follows that $ \boldsymbol{\mu}_m $ acts trivially on $ T_{\mathcal{A}'_k, 0} $.
Let us now consider the filtration $ \{ F^i_n \mathcal{A}_k \} $ induced by the extension $ S'/S $. The jumps in this filtration are determined by the $ \boldsymbol{\mu}_n $-action on $ T_{\mathcal{A}'_k, 0} $. Assume that $ T_{\mathcal{A}'_k, 0}[i] \neq 0 $, for some $ i \in \{0, \ldots, n -1 \} $. On this subspace, every $ \xi \in \boldsymbol{\mu}_n $ acts by multiplication by $ \xi^i $. We can identify $ \boldsymbol{\mu}_m $ with the $\tilde{n}$-th powers in $ \boldsymbol{\mu}_n $, and since we established above that $ \boldsymbol{\mu}_m $ acts trivially, it follows that $ \xi^{\tilde{n} i} = 1 $. So therefore $ \tilde{n} i = k n $ for some $ k \in \{0, \ldots, \tilde{n} - 1 \} $, and we get that $ i = k n/\tilde{n} $.
\end{pf}
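The divisibility step at the end of the proof can be checked mechanically (an illustration of our own; the names are hypothetical): in $ \boldsymbol{\mu}_n $, $ \xi^{\tilde{n} i} = 1 $ for every $\xi$ precisely when $n$ divides $ \tilde{n} i $, i.e.\ when $i$ is a multiple of $ n/\tilde{n} $.

```python
def allowed_jump_indices(n, n_tilde):
    """Indices i in {0,...,n-1} with xi^(n_tilde * i) = 1 for every
    xi in mu_n, i.e. n | n_tilde * i, i.e. (n / n_tilde) | i."""
    assert n % n_tilde == 0
    return [i for i in range(n) if (n_tilde * i) % n == 0]
```

For $n = 12$ and $ \tilde{n} = 4 $ this yields $ i \in \{0, 3, 6, 9\} $, which are exactly the indices $ k n / \tilde{n} $ for $ k \in \{0,1,2,3\} $, as the lemma predicts.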
Using this lemma, one gets the following result:
\begin{prop}\label{tamejumpprop}
If $ A/K $ obtains semi-abelian reduction over a tamely ramified extension of $K$, then the jumps in the filtration $ \{ \mathcal{F}^a \mathcal{A}_k \} $ occur at indices $ k/\tilde{n} $, where $ k \in \{0, \ldots, \tilde{n} - 1 \} $, and where $ \tilde{n} $ is the degree of the minimal extension $ \widetilde{K}/K $ that realizes semi-abelian reduction for $ A $.
\end{prop}
\begin{pf}
Let us consider the sequence of integers $ (\tilde{n} m)_m $, where $m$ runs over the positive integers that are not divisible by $p$. For the extension of degree $n = \tilde{n} m$, Lemma \ref{tame jumps} gives that the jumps of $ \{ F^i_n \mathcal{A}_k \} $ occur at indices $ i = k n / \tilde{n} $, where $ 0 \leq k \leq \tilde{n} - 1$. It follows that the jumps of $ \{ \mathcal{F}^a \mathcal{A}_k \} $ will be among the limits of the expressions $ i/n = k/ \tilde{n} $, as $m$ goes to infinity, and the result follows.
\end{pf}
The next proposition shows that the jumps come in ``simultaneously reduced'' form.
\begin{prop}\label{minimalityjump}
Let us assume that $ A/K $ obtains semi-abelian reduction over a tamely ramified minimal extension $ \widetilde{K}/K$, and that $ \tilde{n} = [\widetilde{K}:K] > 1 $. Let $ i_1/\tilde{n}, \ldots, i_g/\tilde{n} $ be the jumps in the filtration $ \{ \mathcal{F}^a \mathcal{A}_k \} $.
Assume that $m$ is a positive integer such that $ m $ divides $ i_l $ for all $l$, and that $ m $ divides $ \tilde{n} $. Then it follows that $ m = 1 $.
\end{prop}
\begin{pf}
Let us assume to the contrary that $ m > 1 $. Then it follows that the subgroup $ H \subseteq \boldsymbol{\mu}_{\tilde{n}} $ consisting of $\tilde{n}/m $-th powers acts trivially on $ T_{\widetilde{\mathcal{A}}_k, 0} $. We now claim that $H$ acts trivially also on $ \widetilde{\mathcal{A}}_k^0 $. To see this, we first observe that
$$ T_{\widetilde{\mathcal{A}}_k, 0} \cong (T_{\widetilde{\mathcal{A}}_k, 0})^H \cong T_{\widetilde{\mathcal{A}}_k^H, 0}. $$
Furthermore, since $p$ does not divide the order of $H$, we have that $\widetilde{\mathcal{A}}_k^H$ is smooth, and that the canonical inclusions $ \widetilde{\mathcal{A}}_k^H \subseteq \widetilde{\mathcal{A}}_k $ and $ (\widetilde{\mathcal{A}}_k^H)^0 \subseteq \widetilde{\mathcal{A}}_k^0 $ are closed immersions. Hence, $\mathrm{dim}((\widetilde{\mathcal{A}}_k^H)^0) = \mathrm{dim}((\widetilde{\mathcal{A}}_k)^0) $, and so it follows that $ (\widetilde{\mathcal{A}}_k^H)^0 = (\widetilde{\mathcal{A}}_k^0)^H = \widetilde{\mathcal{A}}_k^0 $. Therefore, $H$ acts trivially on $\widetilde{\mathcal{A}}_k^0$. But, by Lemma 5.16 in \cite{Deschamps}, this contradicts the minimality of $ \widetilde{K}/K $.
\end{pf}
\subsection{The case of Jacobians}\label{jacobiancase}
Let $X/K$ be a smooth, projective and geometrically connected curve of genus $ g > 0 $. Let $ J' = J_{K'} $ denote the Jacobian of $X_{K'}$, and let $ \mathcal{J}'/S' $ be the N\'eron model of $J'$ over $S'$.
We can let $G$ act on $ X_{K'} $ via the action on the second factor. Let $ \mathcal{Y}/S' $ be a regular model of $ X_{K'} $ such that the $G$-action on $ X_{K'} $ extends to $ \mathcal{Y} $. According to \cite{Ner}, Theorem 9.5/4, there is, under certain hypotheses, a canonical isomorphism
$$ \mathrm{Pic}^0_{\mathcal{Y}/S'} \cong \mathcal{J}'^0, $$
where $ \mathcal{J}'^0 $ is the identity component of $ \mathcal{J}' $, and where $ \mathrm{Pic}^0_{\mathcal{Y}/S'} $ is the identity component of the relative Picard functor $ \mathrm{Pic}_{\mathcal{Y}/S'} $. Hence, on the special fibers, we get an isomorphism
$$ \mathrm{Pic}^0_{\mathcal{Y}_k/k} \cong \mathcal{J}'^0_k. $$
By \cite{Ner}, Theorem 8.4/1, it follows that we can canonically identify
\begin{equation}\label{H^1=T}
H^1(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) \cong T_{\mathcal{J}'_k,0}.
\end{equation}
We are interested in computing the irreducible characters for the representation of $\boldsymbol{\mu}_n$ on $T_{\mathcal{J}'_k,0}$. With the identification in (\ref{H^1=T}) above, we see that this can be done by computing the irreducible characters for the representation of $\boldsymbol{\mu}_n$ on $ H^1(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $.
By combining the discussion in this section with properties of the representation of $\boldsymbol{\mu}_n$ on $ H^1(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $, we obtain in Corollary \ref{specific jump corollary} a quite precise description of the jumps of the filtration $ \{ \mathcal{F}^a \mathcal{J}_k \} $.
\section{Tame extensions and Galois actions}\label{extensions and actions}
\subsection{}
Throughout the rest of this paper, $R$ will denote a complete discrete valuation ring, with fraction field $K$, and with algebraically closed residue field $k$.
$X/K$ will be a smooth, projective, geometrically connected curve over $K$, of genus $g(X)>0$.
\begin{dfn}
A scheme $ \mathcal{X} $ is called a \emph{model} of $X$ over $ S = \mathrm{Spec}(R) $ if $ \mathcal{X} $ is integral, normal, projective and flat over $S$, with generic fibre $ \mathcal{X}_K \cong X $.
\end{dfn}
It is well known that we can always find a regular model for $X/K$ (see for instance \cite{Liubook}). By blowing up points in the special fiber, we can even ensure that the irreducible components of the special fiber are smooth, and intersect transversally. Such a model will be called a \emph{strict normal crossings} model for $X/K$, or for short, an SNC-model.
\subsection{}\label{2.2}
Let $ \mathcal{X}/S $ be an SNC-model for $X/K$. Let $ K \subset K' $ be a finite, separable field extension, and let $R'$ be the integral closure of $R$ in $K'$. Since $R$ is complete, we have that $R'$ is a complete discrete valuation ring (\cite{Serre}, Proposition II.3). Making the finite base extension $S' = \mathrm{Spec}(R') \rightarrow S = \mathrm{Spec}(R) $, we obtain a commutative diagram
$$ \xymatrix{
\mathcal{Y} \ar[d] \ar[r]^{\rho} & \mathcal{X}' \ar[d] \ar[r]^{f} & \mathcal{X} \ar[d]\\
S' \ar[r]^{\mathrm{id}} & S' \ar[r] & S, } $$
where $ \mathcal{X}' $ is the normalization of the pullback $ \mathcal{X}_{S'} = \mathcal{X} \times_S S' $ ($ \mathcal{X}_{S'} $ is integral by Lemma \ref{lemma construction} below), and $ \rho : \mathcal{Y} \rightarrow \mathcal{X}' $ is the minimal desingularization. The map $ f : \mathcal{X}' \rightarrow \mathcal{X} $ is the composition of the projection $ \mathcal{X}_{S'} \rightarrow \mathcal{X} $ with the normalization $ \mathcal{X}' \rightarrow \mathcal{X}_{S'} $.
\begin{lemma}\label{lemma construction} With the hypotheses above, the following statements hold:
\begin{enumerate}
\item The pullback $ \mathcal{X}_{S'} $ is an integral scheme.
\item $ f : \mathcal{X}' \rightarrow \mathcal{X} $ is a finite morphism.
\end{enumerate}
\end{lemma}
\begin{pf}
(i) Let us first note that the generic fiber of $ \mathcal{X}_{S'} $ is the pullback $\mathcal{X}_K \otimes_K K' $, where $ \mathcal{X}_K $ is the generic fiber of $ \mathcal{X} $. By assumption $ \mathcal{X}_K $ is smooth and geometrically connected over $K$, so in particular the generic fiber of $ \mathcal{X}_{S'} $ is integral. Now, since $ \mathcal{X}_{S'} \rightarrow S' $ is flat, it follows from \cite{Liubook}, Proposition 4.3.8, that $ \mathcal{X}_{S'} $ is integral as well.
(ii) Since $ R' $ is a \emph{complete} discrete valuation ring it is excellent (\cite{Liubook}, Theorem 8.2.39). As $ \mathcal{X}_{S'} $ is of finite type over $S'$, it follows that $ \mathcal{X}_{S'} $ is an excellent scheme, and hence the normalization morphism $ \mathcal{X}' \rightarrow \mathcal{X}_{S'} $ is finite (\cite{Liubook}, Theorem 8.2.39). The projection $ \mathcal{X}_{S'} \rightarrow \mathcal{X} $ is finite, since it is the pullback of the finite morphism $ S' \rightarrow S $. So the composition $f$ of these two morphisms is indeed finite.
\end{pf}
\subsection{} Let us now assume that the field extension $ K \subset K' $ is Galois with group $G$. Every $ \sigma \in G $ induces an automorphism of $R'$ that fixes $R$, and we have furthermore that $ R'^{G} = R $. So there is an injective group homomorphism $ G \rightarrow \mathrm{Aut}(S') $, and we may view $ S' \rightarrow S $ as the quotient map.
We can lift the $G$-action to $ \mathcal{X}_{S'} $, via the action on the second factor. So there is a group homomorphism $ G \rightarrow \mathrm{Aut}(\mathcal{X}_{S'}) $. For any element $ \sigma \in G $, we shall still denote the image in $ \mathrm{Aut}(\mathcal{X}_{S'}) $ by $ \sigma $. Proposition \ref{prop. 2.3} below states that this action lifts uniquely both to the normalization $ \mathcal{X}' $ and to the minimal desingularization $ \mathcal{Y} $ of $ \mathcal{X}' $.
\begin{prop}\label{prop. 2.3} With the hypotheses above, the following statements hold:
\begin{enumerate}
\item The $G$-action on $ \mathcal{X}_{S'} $ lifts uniquely to the normalization $ \mathcal{X}' $.
\item The $ G $-action on $ \mathcal{X}' $ lifts uniquely to the minimal desingularization $ \mathcal{Y} $.
\item For any $ \sigma \in G $, let $ \sigma $ denote the induced automorphism of $ \mathcal{X}' $, and let $ \tau $ be the unique lift of $ \sigma $ to $ \mathrm{Aut}(\mathcal{Y}) $. Then we have that $ \tau(\rho^{-1}(\mathrm{Sing}(\mathcal{X}'))) = \rho^{-1}(\mathrm{Sing}(\mathcal{X}')) $. That is, the exceptional locus is mapped into itself under the $G$-action on $ \mathcal{Y} $.
\end{enumerate}
\end{prop}
\begin{pf}
This is straightforward from the universal properties of the normalization and of the minimal desingularization. For a detailed proof, we refer to \cite{Thesis}.
\end{pf}
\subsection{}\label{assumption on degree}
We shall throughout the rest of the paper make the assumption that $ n = [K':K] $ is not divisible by the residue characteristic $p$. Since $k$ is algebraically closed it has a full set $ \boldsymbol{\mu}_n $ of $n$-th roots of unity, and as $R$ is complete, we may lift all $n$-th roots of unity to $R$. We can choose a uniformizing parameter $ \pi \in R $ such that $ K' = K[\pi']/(\pi'^n - \pi) $. The extension $ K \subset K' $ is Galois, with group $ G = \boldsymbol{\mu}_n $. Also, $ R' := R[\pi']/(\pi'^n - \pi) $ is the integral closure of $R$ in $K'$, and $ \pi'$ is a uniformizing parameter for $R'$.
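A standard equal-characteristic instance of this setup, included only for orientation, is
$$ R = k[[\pi]], \qquad K = k((\pi)), \qquad K' = K[\pi']/(\pi'^n - \pi) \cong k((\pi')), $$
with $ R' = k[[\pi']] $, and with the Galois group $ \boldsymbol{\mu}_n $ acting through $ \pi' \mapsto \xi \pi' $.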
\subsection{Assumptions on $\mathcal{X}$}\label{assumption on surface}
Throughout the rest of this paper, we shall make two assumptions in the situation considered in Section \ref{2.2}:
\begin{ass}\label{ass. 2.4}
Let $ x \in \mathcal{X} $ be a closed point in the special fiber such that two irreducible components $C_1$ and $C_2$ of $ \mathcal{X}_k $ meet at $x$, and let $ m_i = \mathrm{mult}(C_i) $. We will always assume that \emph{at least} one of the $m_i$ is not divisible by $p$.
\end{ass}
With this assumption, we can find an isomorphism
$$ \widehat{\mathcal{O}}_{\mathcal{X},x} \cong R[[u_1,u_2]]/(\pi - u_1^{m_1} u_2^{m_2}) $$
(cf. \cite{CED}, proof of Lemma 2.3.2).
\begin{ass}\label{ass. 2.5}
Let $l$ denote the least common multiple of the multiplicities of the irreducible components of $\mathcal{X}_k$. Then we assume that $ \mathrm{gcd}(l,n) = 1 $.
\end{ass}
When Assumptions \ref{ass. 2.4} and \ref{ass. 2.5} are valid, the following facts can be proved using the computations in \cite{Thesis}:
$\bullet$ Let $ x \in \mathcal{X} $ be a closed point in the special fiber. Because of Assumption \ref{ass. 2.5}, there is a unique point $ x' \in \mathcal{X}'_k $ that maps to $x$. The local analytic structure of $ \mathcal{X}' $ at $x'$ depends only on $n = [K':K]$ and on the local analytic structure of $ \mathcal{X} $ at $x$. If $x$ belongs to a unique component of $ \mathcal{X}_k $, then $x'$ belongs to a unique component of $ \mathcal{X}'_k $, and $ \mathcal{X}' $ is regular at $x'$. If $x$ is an intersection point of two distinct components, then the same is true for $x'$, and $ \mathcal{X}' $ will have a \emph{tame cyclic quotient singularity} at $x'$.
$\bullet$ The minimal desingularization $ \mathcal{Y} $ of $ \mathcal{X}' $ is an SNC-model. Furthermore, the structure of $ \mathcal{Y} $ locally above a tame cyclic quotient singularity $x' \in \mathcal{X}'$ is completely determined by the structure locally at $ x = f(x') $ and the degree $n$ of the extension. The inverse image of $x'$ consists of a chain of smooth and rational curves whose multiplicities and self-intersection numbers may be computed from the integers $n, m_1$ and $m_2$.
$\bullet$ For every irreducible component $C$ of $ \mathcal{X}_k $, there is precisely one component $C'$ of $ \mathcal{X}'_k $ that dominates $C$. The component $C'$ is isomorphic to $C$, and we have that $ \mathrm{mult}_{\mathcal{X}'_k}(C') = \mathrm{mult}_{\mathcal{X}_k}(C) $. It follows that the combinatorial structure of $ \mathcal{Y}_k $ is completely determined by the combinatorial structure of $ \mathcal{X}_k $ and the degree of $S'/S$.
\subsection{}
We will now begin to describe the $G$-action on $ \mathcal{X}' $ and $ \mathcal{Y} $ in more detail. Assumptions \ref{ass. 2.4} and \ref{ass. 2.5} will impose some restrictions on this action.
\begin{prop}\label{action on desing}
Let $ \rho : \mathcal{Y} \rightarrow \mathcal{X'} $ be the minimal desingularization. Then the following properties hold:
\begin{enumerate}
\item Let $D$ be an irreducible component of $ \mathcal{Y}_k $ that dominates a component of $ \mathcal{X}_k $. Then $D$ is stable under the $G$-action, and $G$ acts \emph{trivially} on $D$.
\item Let $ x' \in \mathcal{X}' $ be a singular point, and let $E_1, \ldots, E_{l}$ be the exceptional components mapping to $x'$ under $ \rho $. Then every $E_i$ is stable under the $G$-action, and every node in the chain $ \rho^{-1}(x') $ is fixed under the $G$-action.
\end{enumerate}
\end{prop}
\begin{pf}
Let us first note that the map $ \mathcal{X}_{S'} \rightarrow \mathcal{X} $ is an isomorphism on the special fibers. Moreover, the action on the special fiber of $ \mathcal{X}_{S'} $ is easily seen to be trivial, so every closed point in the special fiber is fixed. Since the action on $ \mathcal{X}' $ commutes with the action on $ \mathcal{X}_{S'} $, it follows that every point in the special fiber of $ \mathcal{X}' $ is fixed. In particular, every irreducible component $C'$ of $ \mathcal{X}'_k $ is stable under the $G$-action, and the restriction of this action to $C'$ is trivial. Since the action on $ \mathcal{Y} $ commutes with the action on $ \mathcal{X}' $, it follows that the same is true for the strict transform $D$ of $C'$ in $ \mathcal{Y} $. This proves (i).
For (ii), we observe that since $x'$ is fixed, we have that $\rho^{-1}(x')$ is stable under the $G$-action. But also the two branches meeting at $x'$ are fixed. Let $D$ be the strict transform of either of these two branches. From part (i), it follows that the point where it meets the exceptional chain $\rho^{-1}(x')$ must be fixed. So if $E_1$ is the component in the chain meeting $D$, then $E_1$ must be mapped into itself. Let $E_2$ be the next component in the chain. Then the point where $E_1$ and $E_2$ meet must also be fixed, so $E_2$ must also be mapped to itself. Continuing in this way, it is easy to see that all of the exceptional components are stable under the $G$-action, and that all nodes in $\rho^{-1}(x')$ are fixed points.
\end{pf}
\begin{cor}\label{g^{-1}(Z) = Z}
Let $ 0 \leq Z \leq \mathcal{Y}_k $ be an effective divisor. Then the $G$-action restricts to $Z$.
\end{cor}
\begin{pf}
Since $Z$ is an effective Weil divisor, we can write $ Z = \sum_{C} r_C C $, where $C$ runs over the irreducible components of $ \mathcal{Y}_k $, and $r_C$ is a non-negative integer for all $C$. But Proposition \ref{action on desing} states that all irreducible components $C$ of $ \mathcal{Y}_k $ are stable under the $G$-action, and hence we get that the same holds for $ Z $. In other words, the action restricts to $Z$.
\end{pf}
From Proposition \ref{action on desing} above, it follows that every node $y$ in $ \mathcal{Y}_k $ is a fixed point for the $G$-action on $ \mathcal{Y} $. Hence there is an induced action on $ \mathcal{O}_{\mathcal{Y},y} $ and on the cotangent space $ \mathfrak{m}_y/\mathfrak{m}_y^2 $, where $ \mathfrak{m}_y \subset \mathcal{O}_{\mathcal{Y},y} $ is the maximal ideal. In order to get a precise description of the action on the cotangent space, we will first describe the action on the completion $ \widehat{\mathcal{O}}_{\mathcal{Y},y} $.
Since, by Proposition \ref{action on desing}, every irreducible component $D$ of $ \mathcal{Y}_k $ is mapped to itself under the $G$-action, it follows that the $G$-action restricts to $D$ and that the points where $D$ meets the rest of the special fiber are fixed. In the case where $G$ acts non-trivially on $D$, we will see in Proposition \ref{prop. 3.3} that the fixed points for the $G$-action on $D$ are precisely the points where $D$ meets the rest of the special fiber. In particular, we wish to describe the action on $D$ locally at the fixed points.
\section{Desingularizations and actions}\label{lifting the action}
In this section, we study how one can explicitly describe the action on the minimal desingularization $ \rho : \mathcal{Y} \rightarrow \mathcal{X}' $. Since we are only interested in this action locally at fixed points or stable components in the exceptional locus of $\rho$, we will begin by showing that we can reduce to studying the minimal desingularization locally at a singular point $ x' \in \mathcal{X}' $. This is an important step, since we have a good description of the complete local ring $ \widehat{\mathcal{O}}_{\mathcal{X}',x'} $. In particular, we can find a nice algebraization of this ring, with a compatible $G$-action. It turns out that it suffices for our purposes to study the minimal desingularization of this ring, and the lifted $G$-action.
In the second part of this section, we study the desingularization of an algebraization of $ \widehat{\mathcal{O}}_{\mathcal{X}',x'} $. We use the explicit blow up procedure in \cite{CED} in order to describe how the $G$-action lifts. In particular, we describe the action on the completion of the local rings at the nodes in the exceptional locus, and the action on the exceptional components. These results are collected in Proposition \ref{prop. 3.3}.
\subsection{}\label{seclocred}
If $ x' \in \mathcal{X}' $ is a singular point, we need to understand how $G$ acts on $ \widehat{\mathcal{O}}_{\mathcal{X}',x'} $. In order to do this, we consider the image $ f(x') = x $ of $x'$ under the morphism $ f : \mathcal{X}' \rightarrow \mathcal{X} $. Then $x$ is a closed point in the special fiber, and we have that
$$ \widehat{\mathcal{O}}_{\mathcal{X},x} \cong R[[v_1,v_2]]/(\pi - v_1^{m_1} v_2^{m_2}), $$
where $m_1$ and $m_2$ are positive integers. Let $n$ be the degree of $ R'/R $, which by assumption is relatively prime to $m_1$ and $m_2$. In the discussion that follows we will use some properties that were proved in \cite{Thesis}.
We let $ G = \boldsymbol{\mu}_n $ act on $ \mathcal{X}_{S'} $ via its action on the second factor. We point out that here we choose the action on $R'$ given by $ [\xi](\pi') = \xi \pi' $ for any $ \xi \in \boldsymbol{\mu}_n $. Choosing this action is notationally convenient when we work with rings. However, the natural right action on $ \mathcal{X}_{S'} $ is the inverse to the one we use here. In particular, the irreducible characters for the representation of $ \boldsymbol{\mu}_n $ on $ H^1(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $ induced by the action chosen here on $\mathcal{X}_{S'}$ will be the inverse characters to those induced by the right action.
Let also $ x $ denote the unique point of $ \mathcal{X}_{S'} $ mapping to $ x \in \mathcal{X} $. The map $ \mathcal{O}_{\mathcal{X},x} \rightarrow \mathcal{O}_{\mathcal{X}_{S'},x} $ associated to the projection $ \mathcal{X}_{S'} \rightarrow \mathcal{X} $ can be described by the tensorization
$$ \mathcal{O}_{\mathcal{X},x} \rightarrow \mathcal{O}_{\mathcal{X},x} \otimes_R R', $$
and the $G$-action on $ \mathcal{O}_{\mathcal{X}_{S'},x} = \mathcal{O}_{\mathcal{X},x} \otimes_R R' $ is induced from the action on $R'$.
Since $ \mathcal{O}_{\mathcal{X},x} \rightarrow \mathcal{O}_{\mathcal{X},x} \otimes_R R' $ is finite, completion commutes with tensoring with $R'$, so we get that
$$ \widehat{\mathcal{O}}_{\mathcal{X}_{S'},x} = \widehat{\mathcal{O}}_{\mathcal{X},x} \otimes_R R', $$
and hence the $G$-action on $ \widehat{\mathcal{O}}_{\mathcal{X}_{S'},x} $ is induced from the action on $R'$ in the second factor. It follows that
$$ \widehat{\mathcal{O}}_{\mathcal{X}_{S'},x} \cong R'[[v_1,v_2]]/(\pi'^n - v_1^{m_1} v_2^{m_2}), $$
and that the $G$-action can be described by $ [\xi](\pi') = \xi \pi' $ and $ [\xi](v_i) = v_i $, for any $ \xi \in \boldsymbol{\mu}_n $.
Let $ \mathcal{X}' \rightarrow \mathcal{X}_{S'} $ be the normalization. There is a unique point $x'$ mapping to $x$, and the induced map $ \widehat{\mathcal{O}}_{\mathcal{X}_{S'},x} \rightarrow \widehat{\mathcal{O}}_{\mathcal{X}',x'} $ is the normalization of $ \widehat{\mathcal{O}}_{\mathcal{X}_{S'},x} $. Furthermore, the $G$-action on $ \widehat{\mathcal{O}}_{\mathcal{X}',x'} $ induced by the action on $ \mathcal{X}' $ is the unique lifting of the $G$-action on $ \widehat{\mathcal{O}}_{\mathcal{X}_{S'},x} $ to the normalization $ \widehat{\mathcal{O}}_{\mathcal{X}',x'} $.
Let $ \rho : \mathcal{Y} \rightarrow \mathcal{X}' $ be the minimal desingularization, and consider the fiber diagram
$$ \xymatrix{
\widehat{\mathcal{Y}} \ar[d]_{\hat{\rho}} \ar[r]^{\phi} & \mathcal{Y} \ar[d]^{\rho} \\
\mathrm{Spec}(\widehat{\mathcal{O}}_{\mathcal{X}',x'}) \ar[r] & \mathcal{X}' .}
$$
Then $ \hat{\rho} $ is the minimal desingularization of $ \mathrm{Spec}(\widehat{\mathcal{O}}_{\mathcal{X}',x'}) $ (cf.~\cite{Lip}, Lemma 16.1, and use the fact that $ \mathcal{Y} $ is minimal), and hence the $G$-action on $ \mathrm{Spec}(\widehat{\mathcal{O}}_{\mathcal{X}',x'}) $ lifts uniquely to $ \widehat{\mathcal{Y}} $.
The projection $\phi$ induces an isomorphism of the exceptional loci $ \hat{\rho}^{-1}(x') $ and $ \rho^{-1}(x') $. Let $E$ be an exceptional component. Then the $G$-action restricts to $E$, and it is easily seen that $\phi$, when restricted to $E$, is equivariant.
Furthermore, for any closed point $ y \in \rho^{-1}(x') $, we have that $\phi$ induces an isomorphism $ \widehat{\mathcal{O}}_{\mathcal{Y},y} \cong \widehat{\mathcal{O}}_{\widehat{\mathcal{Y}},y} $ (one can argue in a similar way as in the proof of \cite{Liubook}, Lemma 8.3.49). If $y$ is a fixed point, it is easily seen that this isomorphism is equivariant.
We therefore conclude that in order to describe the action on $\mathcal{Y}$ locally at the exceptional locus over $x'$, it suffices to consider the minimal desingularization of $ \mathrm{Spec}(\widehat{\mathcal{O}}_{\mathcal{X}',x'}) $.
\subsection{}
In order to find an algebraization of $ \widehat{\mathcal{O}}_{\mathcal{X}',x'} $, we consider first the ring $ V = R'[v_1,v_2]/(\pi'^n - v_1^{m_1} v_2^{m_2}) $. We let $G$ act on $ V $ by $ [ \xi ](\pi') = \xi \pi' $ and $ [ \xi ](v_i) = v_i $ for $ i = 1,2 $, for any $ \xi \in G $. Note that the maximal ideal $ \mathfrak{p} = (\pi', v_1,v_2) $ is fixed, and hence there is an induced action on the completion $ \widehat{V}_{\mathfrak{p}} = R'[[v_1,v_2]]/(\pi'^n - v_1^{m_1} v_2^{m_2}) $, given as above. This gives a $G$-equivariant algebraization of $ \widehat{\mathcal{O}}_{\mathcal{X}_{S'},x} $.
Consider the $R'$-algebra homomorphism
$$ V = R'[v_1,v_2]/(\pi'^n - v_1^{m_1} v_2^{m_2}) \rightarrow T = R'[t_1,t_2]/(\pi' - t_1^{m_1} t_2^{m_2}), $$
given by $ v_i \mapsto t_i^n $. We let $ \boldsymbol{\mu}_n $ act on $T$, relatively to $R'$, by $ [ \eta ](t_1) = \eta t_1 $, $ [ \eta ](t_2) = \eta^r t_2 $, where $ r $ is the unique integer $ 0 < r < n $ such that $ m_1 + r m_2 \equiv_n 0 $. (Note that this is an ad hoc action introduced to compute the normalization, which must not be confused with the natural $G$-action). Arguing as in \cite{Thesis}, one can show that the induced map $ V \rightarrow U := T^{\boldsymbol{\mu}_n} $ is the normalization of $V$. Furthermore, it is easily seen that there is a unique maximal ideal $\mathfrak{q} \subset U $ mapping to $ \mathfrak{p} $, corresponding to the ``origin'' $ (\pi', t_1, t_2) $ in $T$.
It is shown in \cite{Thesis}, Lemma 4.1, that $U$ is an equivariant algebraization of $ \widehat{\mathcal{O}}_{\mathcal{X}',x'} $. Let $ \rho_U : \mathcal{Z} \rightarrow \mathrm{Spec}(U) $ be the minimal desingularization. Then we have a fiber diagram
$$
\xymatrix{
\widehat{\mathcal{Y}} \ar[d]_{\hat{\rho}} \ar[r] & \mathcal{Z} \ar[d]^{\rho_U} \\
\mathrm{Spec}(\widehat{\mathcal{O}}_{\mathcal{X}',x'}) \ar[r] & \mathrm{Spec}(U),}
$$
where all maps commute with the various $G$-actions. We conclude, by an argument similar to the one in Section \ref{seclocred}, that in order to describe the $G$-action on $ \widehat{\mathcal{Y}} $ locally at fixed points or components in the exceptional locus, it suffices to compute the corresponding data for $ \mathcal{Z} $.
\subsection{}\label{3.11}
Proposition \ref{prop. 3.3} below gives a description of the $G$-action on $ \mathcal{Z} $. Having this description will be important in later sections, when we consider the $G$-action on the cohomology groups $ H^i(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $. For a proof, we refer to \cite{Thesis}, Proposition 4.3. In order to state this result, and for future reference, we list some properties associated to the resolution of the singularity $ Z = \mathrm{Spec}(U) $ (see \cite{CED} for proofs).
We call the integers $m_1$, $m_2$ and $n$ the \emph{parameters} of the singularity. Let $r$ be the unique integer with $ 0<r<n $ such that $ m_1 + rm_2 = 0 $ modulo $n$. Write $ \frac{n}{r} = [b_1, \ldots , b_l, \ldots , b_L]_{JH} $ for the Jung-Hirzebruch expansion. The exceptional locus of $ \rho_U $ consists of a \emph{string} of smooth and rational curves $ C_1, \ldots, C_L $, with self-intersection numbers $ C_l^2 = - b_l $ and multiplicities $\mu_l$, for all $ l \in \{1, \ldots, L \} $.
There are two series of numerical equations associated to the singularity. We have
\begin{equation}\label{equation 8.1}
r_{l-1} = b_{l+1} r_l - r_{l+1},
\end{equation}
for $ 0 \leq l \leq L-1 $, where we put $ r_{-1} = n $ and $ r_0 = r $. Furthermore, we have
\begin{equation}\label{equation 8.2}
\mu_{l+1} = b_l \mu_l - \mu_{l-1},
\end{equation}
which is valid for $ 1 \leq l \leq L $. Here we define $ \mu_0 = m_2 $ and $ \mu_{L+1} = m_1 $.
We also have the equation $ m_1 + r m_2 = n \mu_1 $ (see \cite{CED}, Corollary 2.4.3). Together with Equations (\ref{equation 8.1}) and (\ref{equation 8.2}), this equation enables one to compute the branch multiplicities.
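For concreteness, all of this numerical data can be generated mechanically from the parameters. The following sketch (the function name \texttt{resolution\_data} is ours, not taken from \cite{CED} or \cite{Thesis}) implements Equations (\ref{equation 8.1}) and (\ref{equation 8.2}) together with the relation $ m_1 + r m_2 = n \mu_1 $:

```python
# Sketch (our helper, not from the cited sources): from the parameters
# (m1, m2, n) of the tame cyclic quotient singularity, compute r, the
# Jung-Hirzebruch expansion n/r = [b_1, ..., b_L], and the multiplicities
# mu_1, ..., mu_L of the exceptional curves C_1, ..., C_L.
def resolution_data(m1, m2, n):
    r = (-m1 * pow(m2, -1, n)) % n          # m1 + r*m2 = 0 (mod n), 0 < r < n
    b, rs = [], [n, r]                      # rs holds r_{-1}, r_0, r_1, ...
    while rs[-1] > 0:
        b.append(-(rs[-2] // -rs[-1]))      # b_{l+1} = ceil(r_{l-1} / r_l)
        rs.append(b[-1] * rs[-1] - rs[-2])  # Equation (8.1)
    mu = [m2, (m1 + r * m2) // n]           # mu_0 = m2, n*mu_1 = m1 + r*m2
    for bl in b[:-1]:
        mu.append(bl * mu[-1] - mu[-2])     # Equation (8.2)
    assert b[-1] * mu[-1] - mu[-2] == m1    # consistency: mu_{L+1} = m1
    return r, b, mu[1:]
```

For the parameters $ (m_1, m_2, n) = (1, 3, 7) $ of Example \ref{Gdesingex} this returns $ r = 2 $, $ [b_1, b_2] = [4, 2] $ and $ (\mu_1, \mu_2) = (1, 1) $.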
\begin{prop}\label{prop. 3.3}
The minimal desingularization $ \mathcal{Z} $ of $ Z = \mathrm{Spec}(U) $ can be covered by the affine charts $ \mathrm{Spec}(U_l) $, where
$$ U_l = R'[z_{l-1},w_{l-1}]/(z_{l-1}^{\mu_{l}} w_{l-1}^{\mu_{l-1}} - \pi'), $$
for $ l \in \{ 1, \ldots, L +1 \} $.
These charts are $ G $-stable, and the $ G $-action is given by
$ [ \xi ] (\pi') = \xi \pi' $, $ [ \xi ] (z_{l-1}) = \xi^{ \alpha_1 r_{l-2} } z_{l-1} $ and $ [ \xi ] (w_{l-1}) = \xi^{ - \alpha_1 r_{l-1} } w_{l-1} $, where $\alpha_1$ denotes an inverse of $m_1$ modulo $n$, for all $ l \in \{ 1, \ldots, L+1 \} $, and for any $ \xi \in G $.
Let $ C_l $ be the $ l $-th exceptional component. On the chart $\mathrm{Spec}(U_l)$, we have that the affine ring for $ C_l $ is $ k[w_{l-1}] $, and $ G $ acts by $ [ \xi ] (w_{l-1}) = \xi^{ - \alpha_1 r_{l-1} } w_{l-1} $, for any $ \xi \in G $. On the chart $\mathrm{Spec}(U_{l+1})$, the affine ring for $ C_l $ is $ k[z_l] $, and $ G $ acts by $ [ \xi ] (z_{l}) = \xi^{ \alpha_1 r_{l-1} } z_{l} $.
\end{prop}
The following corollary is immediate from Proposition \ref{prop. 3.3}:
\begin{cor}
The irreducible components $C_l$ of the exceptional locus are stable under the $G$-action. Furthermore, if $ \xi \in \boldsymbol{\mu}_n $ is a primitive $n$-th root of unity, then the automorphism of $C_l$ induced by $\xi$ is non-trivial, for all $ l \in \{ 1, \ldots, L \} $, with fixed points precisely at the two points where $C_l$ meets the rest of the special fiber.
\end{cor}
Let us finally remark that the cotangent space to $ \mathcal{Z} $ at the fixed point that is the intersection point of $ C_l $ and $ C_{l+1} $ is generated by (the classes of) the local equations $ z_l $ and $ w_l $ for the curves. Therefore, Proposition \ref{prop. 3.3} gives a complete description of the action on the cotangent space. Furthermore, we can also read off the eigenvalues for the elements of this basis. Hence we immediately get an explicit description of the action on the cotangent space to the minimal desingularization of $\mathcal{X}'$ at the corresponding fixed point.
\begin{ex}\label{Gdesingex}
Consider the singularity with parameters $ (m_1,m_2,n) = (1,3,7) $. From the equation $ m_1 + r_0 m_2 = n \mu_1 $ we easily compute that $ r_0 = 2 $ and $ \mu_1 = 1 $. From the equation $ n = r_{-1} = b_1 r_0 - r_1 $ we find that $ b_1 = 4 $ and that $ r_1 = 1 $. So we get that $ L = 2 $, and hence there are two exceptional curves $C_1$ and $C_2$. In order to compute $\mu_2$, we use the equation $ \mu_0 + \mu_2 = b_1 \mu_1 $, and find that $ \mu_2 = 1 $.
We conclude this example by writing out the $G$-action on $C_1$, and the $G$-action on the cotangent space to $ \mathcal{Z} $ at the point $ y_1 = C_1 \cap C_2 $. In the notation of Proposition \ref{prop. 3.3}, $C_1$ and $C_2$ are generically contained in the $G$-stable open affine $ \mathrm{Spec}(U_2) $. We can immediately read off that the cotangent space is generated by (the classes of) $ z_1 $ and $ w_1 $. Furthermore, we have that the $ G $-action is given by $ [\xi](z_1) = \xi^{\alpha_1 r_0} z_1 = \xi^{2} z_1 $ and $ [\xi](w_1) = \xi^{- \alpha_1 r_1} w_1 = \xi^{6} w_1 $, for any $ \xi \in \boldsymbol{\mu}_{7} $.
The affine ring for $C_1$ on this chart is $k[z_1]$, and the action is given by $ [\xi](z_1) = \xi^{2} z_1 $.
\end{ex}
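The arithmetic in this example can be rechecked line by line; the following fragment (variable names are ours) recomputes the invariants and the two eigenvalue exponents:

```python
# Recheck the example for (m1, m2, n) = (1, 3, 7): on the chart Spec(U_2)
# the action is [xi](z_1) = xi^(alpha_1*r_0) z_1 and
# [xi](w_1) = xi^(-alpha_1*r_1) w_1, with alpha_1 an inverse of m1 mod n.
m1, m2, n = 1, 3, 7
alpha1 = pow(m1, -1, n)          # inverse of m1 modulo n        -> 1
r0 = (-m1 * pow(m2, -1, n)) % n  # m1 + r0*m2 = 0 (mod n)        -> 2
mu1 = (m1 + r0 * m2) // n        # m1 + r0*m2 = n*mu1            -> 1
b1 = -(n // -r0)                 # n = b1*r0 - r1                -> 4
r1 = b1 * r0 - n                 #                               -> 1
z1_exp = (alpha1 * r0) % n       # eigenvalue exponent on z_1    -> 2
w1_exp = (-alpha1 * r1) % n      # eigenvalue exponent on w_1    -> 6
```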
\section{Computing traces}
We will now study the $G$-action on the cohomology groups $ H^i(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $. Since the residue characteristic is (possibly) positive, we have to introduce the concept of \emph{Brauer characters}. This roughly amounts to lifting all eigenvalues to characteristic zero.
A second problem is the fact that $ \mathcal{Y}_k $ in general is singular. Therefore, we shall first consider the case of a group acting on the cohomology groups of an invertible sheaf on a smooth projective curve. For such situations, one can write down trace formulas in terms of local data at the fixed points. Later, in Section \ref{section 6}, we shall use such computations in order to compute the characters for the action by $G$ on $ H^i(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $.
\subsection{}\label{inverse}
We begin this section by recalling some generalities about coherent sheaf cohomology, and by establishing notation and terminology that will be used throughout the rest of the paper.
Let $ g : X \rightarrow Y $ be a morphism of schemes, and let $ \mathcal{F} $ be an $ \mathcal{O}_Y $-module. The morphism $ g $ induces a natural and canonical homomorphism
$$ H^p(g) : H^p(Y,\mathcal{F}) \rightarrow H^p(X, g^* \mathcal{F}), $$
for all $p \geq 0$.
Consider now the case where $Y = X$, so that $ g : X \rightarrow X $ is an endomorphism, and where $ \mathcal{F} $ is a sheaf of $ \mathcal{O}_X $-modules. Assume in addition that we are given a homomorphism $ u : g^* \mathcal{F} \rightarrow \mathcal{F} $ of $ \mathcal{O}_X $-modules. By functoriality, $u$ induces a homomorphism
$$ H^p(u) : H^p(X, g^* \mathcal{F}) \rightarrow H^p(X,\mathcal{F}), $$
for all $p \geq 0$.
\begin{dfn}\label{dfn 5.4}
Let $ g : X \rightarrow X $ be a morphism, let $ \mathcal{F} $ be a sheaf of $ \mathcal{O}_X $-modules, and let $ u : g^* \mathcal{F} \rightarrow \mathcal{F} $ be a homomorphism of $ \mathcal{O}_X $-modules. The endomorphism
$$ H^p(g,u) : H^p(X,\mathcal{F}) \rightarrow H^p(X,\mathcal{F}) $$
\emph{induced} by the couple $ (g,u) $ is defined as the composition of the maps $ H^p(g) $ and $ H^p(u) $.
\end{dfn}
\begin{rmk}
In case $ \mathcal{F} = \mathcal{O}_X $, there is a canonical isomorphism $ g^* \mathcal{O}_X \cong \mathcal{O}_X $, associated to the morphism $g$. So we get naturally an endomorphism of the cohomology groups
$$ H^p(g) : H^p(X,\mathcal{O}_X) \rightarrow H^p(X,\mathcal{O}_X), $$
for all $p \geq 0$.
\end{rmk}
\subsection{}\label{5.4}
Let $ G $ be a finite group acting on $X$, and let $ \mathcal{F} $ be a coherent $ \mathcal{O}_X $-module. An isomorphism $ u : g^* \mathcal{F} \to \mathcal{F} $ is called a \emph{covering homomorphism}. We say that $ \mathcal{F} $ is a \emph{$G$-sheaf} if there exist covering homomorphisms $u_g$, for every $ g \in G $, such that $ u_h \circ h^* u_g = u_{gh} $ for all $ g, h \in G $.
\begin{rmk}
Let $G$ be a finite group acting on a projective scheme $X/\mathrm{Spec}(k)$, and let $ \mathcal{F} $ be a $G$-sheaf on $X$. The compatibility conditions ensure that $H^p(X,\mathcal{F})$ is a $ k[G] $-module for all $ p \geq 0 $.
\end{rmk}
Let $ u' : g^* \mathcal{F}' \to \mathcal{F}' $ be a second covering homomorphism. A map of covering homomorphisms is a map $ \phi : \mathcal{F} \to \mathcal{F}' $ of $ \mathcal{O}_X $-modules such that $ \phi \circ u = u' \circ g^* \phi $. A map between $G$-sheaves $ \mathcal{F} $ and $ \mathcal{F}' $ is a map of $ \mathcal{O}_X $-modules respecting the two $G$-sheaf structures. The category of $G$-sheaves on $X$ is in fact an abelian category. For this, and further properties of $G$-sheaves, we refer to \cite{Kock}, Chapter 1.
Consider a short exact sequence
$$ 0 \to (\mathcal{F}_1,u_1) \to (\mathcal{F}_2,u_2) \to (\mathcal{F}_3,u_3) \to 0 $$
of covering homomorphisms. It is straightforward to check that this gives a commutative diagram
\begin{equation}
\xymatrix{
\ldots \ar[r] & H^p(X, \mathcal{F}_2) \ar[r] \ar[d]^{H^p(g,u_2)} & H^p(X, \mathcal{F}_3) \ar[r]^{\delta} \ar[d]^{H^p(g,u_3)} & H^{p+1}(X, \mathcal{F}_1) \ar[r] \ar[d]^{H^{p+1}(g,u_1)} & \ldots \\
\ldots \ar[r] & H^p(X, \mathcal{F}_2) \ar[r] & H^p(X, \mathcal{F}_3) \ar[r]^{\delta} & H^{p+1}(X, \mathcal{F}_1) \ar[r] & \ldots .}
\end{equation}
Similarly, if $G$ acts on the projective scheme $X/k$, one checks that a short exact sequence of $G$-sheaves gives a long exact sequence of $k[G]$-modules in cohomology.
\subsection{}
If $X$ is projective over a field $k$, and $ \mathcal{F} $ is a coherent $ \mathcal{O}_X $-module, the cohomology groups $ H^p(X,\mathcal{F}) $ are finite dimensional $k$-vector spaces, and the trace $ \mathrm{Tr}(H^p(g,u)) $ of the endomorphism $ H^p(g,u) $ is defined.
If $ \mathcal{F} $ is a $G$-sheaf, we let $ [H^p(X,\mathcal{F})] $ denote the element associated to $ H^p(X,\mathcal{F}) $ in the representation ring $R_G(k)$.
\subsection{}
In the case where $ p = \mathrm{char}(k) > 0 $, we let $W(k)$ denote the ring of \emph{Witt vectors} for $k$ (\cite{Serre}, Chap.~II, par.~5). Recall that $W(k)$ is a complete discrete valuation ring, $p$ is a uniformizing parameter in $W(k)$ and the residue field is $k$. The fraction field $ FW(k) $ of $W(k)$, however, has characteristic $0$.
There exists a unique multiplicative map $ w : k \rightarrow W(k) $ that is a section of the reduction map $ W(k) \rightarrow k $. The map $w$ is often referred to as the \emph{Teichm\"uller lifting} from $k$ to $ W(k) $.
Since we assume $ k = \overline{k} $, it follows that $k$ has a full set of $n$-th roots of unity, for any $n$ not divisible by $p$. As $W(k)$ is complete, these lift uniquely to $W(k)$, and reduction modulo $p$ induces an isomorphism of $ \boldsymbol{\mu}_n(W(k)) $ onto $ \boldsymbol{\mu}_n(k) $.
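\begin{ex}
For instance, if $ \lambda $ lies in the prime field $ \mathbb{F}_p \subset k $ and $ \lambda \neq 0 $, then $ \lambda^{p-1} = 1 $, and $ w(\lambda) $ is the unique $(p-1)$-st root of unity in $ \mathbb{Z}_p = W(\mathbb{F}_p) \subset W(k) $ reducing to $ \lambda $ modulo $p$. Its existence and uniqueness also follow from Hensel's lemma applied to the polynomial $ x^{p-1} - 1 $, which is separable modulo $p$.
\end{ex}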
\subsection{}
A few facts regarding \emph{Brauer characters} are needed, and are stated here in the case where $ G = \boldsymbol{\mu}_n $. We refer to \cite{SerreLin}, Chap.~18 for details.
If $ E $ is a $ k[G] $-module, we let $ g_E $ denote the endomorphism of $ E $ induced by $ g \in G $. Since the order of $g$ divides $n$, and $n$ is relatively prime to $p$, it follows that $g_E$ is diagonalizable, and that all the eigenvalues $ \lambda_1, \ldots, \lambda_e $, where $ e = \dim_k E $, are $n$-th roots of unity. The \emph{Brauer character} is then defined by assigning
$$ \phi_E(g) = \sum_{i=1}^{e} w(\lambda_i). $$
It can be seen that the function $ \phi_E : G \rightarrow W(k) $ thus obtained is a class function on $G$. We shall call the element $ \phi_E(g) \in W(k) $ the \emph{Brauer trace} of $g_E$. The ordinary trace is obtained from the Brauer trace by reduction modulo $p$.
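\begin{ex}
To see that the Brauer trace carries more information than the ordinary trace, let $ p = 2 $, $ n = 3 $, and let $ E = k^2 $ with $ g_E $ having the two primitive cube roots of unity $ \lambda, \lambda^2 \in k $ as eigenvalues. The ordinary trace is $ \lambda + \lambda^2 = 1 $ in $k$, the same as for the trivial one-dimensional module. The Brauer trace, however, is $ \phi_E(g) = w(\lambda) + w(\lambda^2) = -1 $ in $ W(k) $, since $ w(\lambda) $ is a primitive cube root of unity in $ W(k) $. Reduction modulo $2$ indeed recovers the ordinary trace $1$.
\end{ex}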
An important property of the Brauer character is that it is additive on short exact sequences. That is, if
$ 0 \rightarrow E' \rightarrow E \rightarrow E'' \rightarrow 0 $ is a short exact sequence of $ k[G] $-modules, then $ \phi_E = \phi_{E'} + \phi_{E''} $. A useful consequence is that if
$$ 0 \rightarrow E_0 \rightarrow \ldots \rightarrow E_i \rightarrow \ldots \rightarrow E_l \rightarrow 0 $$
is an exact sequence of $k[G]$-modules, we get that $ \sum_{i=0}^l (-1)^i \phi_{E_i}(g) = 0 $.
\begin{ntn}
If $ V $ is a finite dimensional vector space over $k$, and $ \psi : V \rightarrow V $ is an automorphism, we will use the notation $ \mathrm{Tr}_{\beta}(\psi) $ for the Brauer trace of $\psi$.
\end{ntn}
\subsection{}
We now consider a smooth, connected and projective curve $C$ over $k$, with an invertible sheaf $ \mathcal{L} $ on $C$.
Let $ g : C \rightarrow C $ be an automorphism, and let $ u : g^*\mathcal{L} \rightarrow \mathcal{L} $ be a covering map. We would like to compute the alternating sum $ \sum_{p=0}^1 (-1)^p ~\mathrm{Tr}_{\beta}(H^p(g,u)) $ of the Brauer traces.
\subsection{}
Let us first consider the case when the automorphism $ g : C \rightarrow C $ is trivial, i.e., $ g = \mathrm{id}_C $. Then $ H^p(g) $ is the identity, so we need only consider $ u : \mathcal{L} = g^*\mathcal{L} \rightarrow \mathcal{L} $. Hence $ H^p(g,u) = H^p(u) : H^p(C,\mathcal{L}) \rightarrow H^p(C,\mathcal{L}) $, where $ u \in \mathrm{Aut}_{\mathcal{O}_C}(\mathcal{L}) $. Since $ \mathrm{Aut}_{\mathcal{O}_C}(\mathcal{L}) = k^* $, we have that $u$ is multiplication by some element $ \lambda_u \in k^* $.
\begin{prop}\label{prop. 5.5}
Let us keep the hypotheses above. Then the following equality holds in $W(k)$:
$$ \sum_{p=0}^1 (-1)^p~\mathrm{Tr}_{\beta}(H^{p}(u)) = w(\lambda_u) \cdot (\mathrm{deg}_C(\mathcal{L}) + 1 - p_a(C)). $$
\end{prop}
\begin{pf}
Using \v{C}ech cohomology, it is straightforward to see that $H^p(u)$ is multiplication by $ \lambda_u $, for all $p$. Applying the Riemann-Roch formula then gives the result.
\end{pf}
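\begin{ex}
As a simple check of Proposition \ref{prop. 5.5}, take $ C = \mathbb{P}^1_k $ and $ \mathcal{L} = \mathcal{O}_C(d) $ with $ d \geq 0 $, and let $u$ be multiplication by $ \lambda_u \in k^* $. Then $ H^0(C,\mathcal{L}) $ has dimension $ d + 1 $, $ H^1(C,\mathcal{L}) = 0 $, and $ H^0(u) $ is multiplication by $ \lambda_u $, so the alternating sum of the Brauer traces is $ (d+1) w(\lambda_u) = w(\lambda_u) \cdot (\mathrm{deg}_C(\mathcal{L}) + 1 - p_a(C)) $, as predicted.
\end{ex}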
\subsection{Computing the trace when $g$ is non-trivial}
Let now $ g \in \mathrm{Aut}_k(C) $ be a non-trivial automorphism of finite period $n$ (i.e. $ g^n = \mathrm{id}_C $), where $n$ is not divisible by the characteristic of $k$.
In this situation, the so called \emph{Lefschetz-Riemann-Roch} formula (\cite{Don}, Theorem 5.4, Corollary 5.5) gives a formula for the Brauer trace of $ H^p(g,u) $ in terms of local data at the fixed points of $g$.
Let $ z \in C $ be a fixed point, and let $ i_z : \{z\} \hookrightarrow C $ be the inclusion. Pulling back $ u $ via $ i_z $ gives
$$ u(z) = i_z^* u : i_z^* g^* \mathcal{L} = i_z^* \mathcal{L} \rightarrow i_z^* \mathcal{L}, $$
a $k$-linear endomorphism of $ \mathcal{L}(z) $. We let $ \lambda_u(z) $ denote the (unique) eigenvalue of $ u(z) $.
Since $ z \in C $ is a fixed point, there is an induced automorphism
$$ dg(z) : \mathfrak{m}_z/\mathfrak{m}_z^2 \rightarrow \mathfrak{m}_z/\mathfrak{m}_z^2 $$
of the cotangent space of $C$ at $z$. We let $ \lambda_{dg}(z) $ denote the (unique) eigenvalue of $ dg(z) $.
The Lefschetz-Riemann-Roch formula then reads as follows:
\begin{prop}\label{prop. 5.6}
Let $C$, $ \mathcal{L} $, $g$ and $u$ be as above, and denote by $ C^g $ the (finite) set of fixed points of $g$. Then the following equality holds in $W(k)$:
$$ \sum_{p=0}^1 (-1)^p ~\mathrm{Tr}_{\beta}(H^p(g,u)) = \sum_{z \in C^g} w(\lambda_u(z))/(1- w(\lambda_{dg}(z))). $$
\end{prop}
\begin{pf}
See Prop.~5.8 in~\cite{Thesis}.
\end{pf}
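\begin{ex}
As a sanity check of Proposition \ref{prop. 5.6}, let $ C = \mathbb{P}^1_k $, let $ g $ act by $ [x : y] \mapsto [x : \zeta y] $ for a primitive $n$-th root of unity $ \zeta \in k $, and take $ \mathcal{L} = \mathcal{O}_C $ with the canonical covering map $u$, so that $ \lambda_u(z) = 1 $ at both fixed points $ z_0 = [1:0] $ and $ z_1 = [0:1] $. The eigenvalues of the cotangent maps at the two fixed points are $ \zeta $ and $ \zeta^{-1} $ (in some order, depending on conventions), and the right hand side of the formula becomes
$$ \frac{1}{1 - w(\zeta)} + \frac{1}{1 - w(\zeta)^{-1}} = 1. $$
This indeed equals the left hand side, since $ g $ acts trivially on $ H^0(C, \mathcal{O}_C) = k $ and $ H^1(C, \mathcal{O}_C) = 0 $.
\end{ex}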
\begin{rmk}
The reader might want to compare Proposition \ref{prop. 5.6} with the \emph{Woods-Hole} formula (\cite{SGA5}, Exp. III, Cor. 6.12), which gives a formula for the ordinary trace instead of the Brauer trace, but under weaker assumptions on the automorphism $g$.
\end{rmk}
\begin{rmk}
Throughout the rest of the text we will, when no confusion can arise, continue to write $ \lambda $ instead of $w(\lambda) $ for the Teichm\"uller lift of a root of unity $\lambda$.
\end{rmk}
\section{Action on the minimal desingularization}\label{section 6}
Recall the set-up in Section \ref{extensions and actions}. We considered an SNC-model $ \mathcal{X}/S $, and a tamely ramified extension $ S'/S $ of degree $n$ that is prime to the least common multiple of the multiplicities of the irreducible components of $ \mathcal{X}_k $. The minimal desingularization of the pullback $ \mathcal{X}_{S'}/S' $ is an SNC-model $ \mathcal{Y}/S' $, and the Galois group $ G = \boldsymbol{\mu}_n $ of the extension $ S'/S $ acts on $ \mathcal{Y} $.
Our goal is to compute the irreducible characters for this representation on $ H^1(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $. To do this, we would ideally compute the Brauer trace of the automorphism of $ H^1(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $ induced by $g$, for every group element $ g \in G $. This information would then be used to compute the Brauer character. However, we cannot do this directly. Instead we will compute the Brauer trace of the automorphism induced by $g$ on the formal difference $ H^0(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) - H^1(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $, for any $ g \in G $. In our applications, we know the character for $ H^0(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $, so this suffices to determine the character for $ H^1(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $.
The fact that $ \mathcal{Y}_k $ is in general not smooth prevents us from using Propositions \ref{prop. 5.5} and \ref{prop. 5.6} directly. On the other hand, the irreducible components of $ \mathcal{Y}_k $ are smooth and proper curves. So we shall in fact show that it is possible to reduce to computing Brauer traces on each individual component of $ \mathcal{Y}_k $, where Propositions \ref{prop. 5.5} and \ref{prop. 5.6} do apply. The key step in obtaining this is to introduce a certain filtration of the special fiber $\mathcal{Y}_k$.
\subsection{}
Let $ \{ C_{\alpha} \}_{\alpha \in \mathcal{A}} $ denote the set of irreducible components of $ \mathcal{Y}_k $, and let $ m_{\alpha} $ denote the multiplicity of $ C_{\alpha} $ in $ \mathcal{Y}_k $. Then $ \mathcal{Y}_k $ can be written in Weil divisor form as
$$ \mathcal{Y}_k = \sum_{\alpha} m_{\alpha} C_{\alpha}. $$
\begin{dfn}\label{complete}
A \emph{complete} filtration of $\mathcal{Y}_k$ is a sequence
$$ 0 < Z_m < \ldots < Z_j < \ldots < Z_1 = \mathcal{Y}_k $$
of effective divisors $Z_j$ supported on $\mathcal{Y}_k$, such that $ Z_m = C_{\alpha_m} $ for some $ \alpha_m \in \mathcal{A} $, and such that for each $1 \leq j \leq m - 1 $ there exists an $ \alpha_j \in \mathcal{A} $ with $ Z_j - Z_{j+1} = C_{\alpha_j} $. So $ m = \sum_{\alpha} m_{\alpha} $.
\end{dfn}
Loosely speaking, such a filtration of $\mathcal{Y}_k$ is obtained by removing the irreducible components of the special fiber one at a time (counted with multiplicity).
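\begin{ex}
If, say, $ \mathcal{Y}_k = 2 C_1 + C_2 $, then $ m = 3 $, and one complete filtration is
$$ 0 < C_1 < C_1 + C_2 < 2 C_1 + C_2 = \mathcal{Y}_k. $$
Removing the components in a different order gives another complete filtration, $ 0 < C_1 < 2 C_1 < 2 C_1 + C_2 = \mathcal{Y}_k $.
\end{ex}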
\subsection{}
At each step $Z' < Z$ of a complete filtration, we can construct an exact sequence of sheaves.
\begin{lemma}\label{lemma 6.1}
Let $ 0 \leq Z' < Z \leq \mathcal{Y}_k $ be divisors such that $ Z - Z' = C $, for some irreducible component $C$ of $ \mathcal{Y}_k $. Denote by $ \mathcal{I}_Z $ and $ \mathcal{I}_{Z'} $ the corresponding ideal sheaves in $ \mathcal{O}_{\mathcal{Y}} $. Let $ i_{Z} $, $ i_{Z'} $ and $ i_C $ be the canonical inclusions of $ Z $, $Z'$ and $ C $ in $\mathcal{Y}$. Furthermore, let $ \mathcal{L} = i_C^*(\mathcal{I}_{Z'}) $. We then have an exact sequence
$$ 0 \rightarrow (i_C)_* \mathcal{L} \rightarrow (i_{Z})_* \mathcal{O}_{Z} \rightarrow (i_{Z'})_* \mathcal{O}_{Z'} \rightarrow 0 $$
of $ \mathcal{O}_{\mathcal{Y}} $-modules.
\end{lemma}
\begin{pf}
The inclusions $ \mathcal{I}_Z \subset \mathcal{I}_{Z'} \subset \mathcal{O}_{\mathcal{Y}} $ give rise to an exact sequence
$$ 0 \rightarrow \mathcal{K} \rightarrow \mathcal{O}_{\mathcal{Y}}/\mathcal{I}_Z \rightarrow \mathcal{O}_{\mathcal{Y}}/\mathcal{I}_{Z'} \rightarrow 0, $$
where $ \mathcal{K} = \mathcal{I}_{Z'} / \mathcal{I}_Z $ denotes the kernel. We need to determine $ \mathcal{K} $.
Consider the surjection $ \mathcal{I}_{Z'} \rightarrow \mathcal{I}_{Z'} / \mathcal{I}_Z $. Pulling back with $ i_C^* $, we get a surjection $ i_C^*(\mathcal{I}_{Z'}) \rightarrow i_C^* (\mathcal{I}_{Z'} / \mathcal{I}_Z) $, and we claim that this map is an isomorphism. Indeed, let $ U = \mathrm{Spec}(A) \subset \mathcal{Y} $ be an open affine set. Then $A$ is a regular domain, and the ideal sheaves $ \mathcal{I}_C $, $ \mathcal{I}_Z $ and $ \mathcal{I}_{Z'} $ restricted to $U$ correspond to invertible modules $ I_C $, $ I_{Z} $ and $ I_{Z'} $ in $A$. Since $ Z = Z' + C $, we have that $ I_{Z} = I_C I_{Z'} $. So $ I_{Z'}/I_{Z} = I_{Z'}/I_C I_{Z'} = I_{Z'} \otimes_A A/I_C $. From this observation, it follows easily that the map above is an isomorphism on all stalks, and therefore an isomorphism.
\end{pf}
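\begin{rmk}
In the simplest instance of Lemma \ref{lemma 6.1}, where $ C $ has multiplicity at least $2$ in $ \mathcal{Y}_k $ and we take $ Z = 2C $ and $ Z' = C $, the sheaf $ \mathcal{L} = i_C^*(\mathcal{I}_{C}) = \mathcal{O}_{\mathcal{Y}}(-C)|_C $ is the conormal sheaf of $C$ in $\mathcal{Y}$, and the sequence becomes the familiar decomposition sequence of the thickening $2C$:
$$ 0 \rightarrow (i_C)_* \bigl( \mathcal{O}_{\mathcal{Y}}(-C)|_C \bigr) \rightarrow (i_{2C})_* \mathcal{O}_{2C} \rightarrow (i_C)_* \mathcal{O}_C \rightarrow 0. $$
\end{rmk}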
\subsection{}\label{6.4} The filtrations we have introduced are $G$-equivariant.
\begin{lemma}\label{G-sheaf}
Let $ 0 \leq Z \leq \mathcal{Y}_k $ be an effective divisor, with ideal sheaf $ \mathcal{I}_Z $. Then we have that $ \mathcal{I}_Z $ is a $G$-subsheaf of $ \mathcal{O}_{\mathcal{Y}} $.
\end{lemma}
\begin{pf}
Let $ g \in G $ be any group element. Applying the exact functor $ g^{-1} $ to the inclusion $ \mathcal{I}_Z \subset \mathcal{O}_{\mathcal{Y}} $ gives an inclusion $ g^{-1} \mathcal{I}_Z \subset g^{-1} \mathcal{O}_{\mathcal{Y}} $. Composing this inclusion with the canonical map $ g^{\sharp} : g^{-1} \mathcal{O}_{\mathcal{Y}} \rightarrow \mathcal{O}_{\mathcal{Y}} $, we obtain a map $g^{-1} \mathcal{I}_Z \rightarrow \mathcal{O}_{\mathcal{Y}}$.
Now, let $ \mathcal{J} $ be the sheaf of ideals generated by the image of $ g^{-1} \mathcal{I}_Z $ in $ \mathcal{O}_{\mathcal{Y}} $. We have that $ \mathcal{J} $ is the ideal sheaf of $ g^{-1}(Z) $. But in our case $ g^{-1}(Z) = Z $, and therefore $ \mathcal{J} = \mathcal{I}_Z $.
The inclusion above induces an injective map $ g^*\mathcal{I}_Z \rightarrow g^* \mathcal{O}_{\mathcal{Y}} \cong \mathcal{O}_{\mathcal{Y}} $ of $ \mathcal{O}_{\mathcal{Y}} $-modules, whose image is $ \mathcal{I}_Z = g^{-1} \mathcal{I}_Z \cdot \mathcal{O}_{\mathcal{Y}} $ (\cite{Hart}, II.7.12.2). Hence we obtain an isomorphism $ u_Z : g^*\mathcal{I}_Z \rightarrow \mathcal{I}_Z $.
It is easy to check that the isomorphisms $ g^*\mathcal{I}_Z \rightarrow \mathcal{I}_Z $ for the various elements $ g \in G $ satisfy the compatibility conditions, and that they are compatible with the $G$-sheaf structure on $ \mathcal{O}_{\mathcal{Y}} $.
\end{pf}
\subsection{}\label{6.7}
\begin{prop}\label{prop. 6.5}
Let us keep the hypotheses and notation from Lemma \ref{lemma 6.1}. The sequence
$$ 0 \rightarrow (i_C)_* \mathcal{L} \rightarrow (i_{Z})_* \mathcal{O}_{Z} \rightarrow (i_{Z'})_* \mathcal{O}_{Z'} \rightarrow 0 $$
is an exact sequence of $ G $-sheaves.
\end{prop}
\begin{pf}
Let $ \mathcal{I}_{Z} \subset \mathcal{I}_{Z'} \subset \mathcal{O}_{\mathcal{Y}} $ be the inclusions of the ideal sheaves. From Lemma \ref{G-sheaf}, it follows that these maps are maps of $G$-sheaves. The result now follows from the fact that the category of $G$-modules on $\mathcal{Y}$ is an abelian category (\cite{Kock}, Lemma 1.3).
\end{pf}
In particular, Proposition \ref{prop. 6.5} implies that for any $g \in G$, there are covering maps $u$ (resp.~$v$, $v'$) of $ (i_C)_* \mathcal{L} $ (resp.~$(i_Z)_* \mathcal{O}_{Z}$, $(i_{Z'})_* \mathcal{O}_{Z'}$) giving an exact sequence
\begin{equation}\label{G-shortseq}
0 \to ((i_C)_* \mathcal{L}, u) \to ((i_Z)_* \mathcal{O}_{Z},v) \to ((i_{Z'})_* \mathcal{O}_{Z'},v') \to 0
\end{equation}
of covering maps.
The maps $ u $, $ v $ and $ v' $ induce, for every $ p \geq 0 $, automorphisms $ H^p(g,u) $, $ H^p(g,v) $ and $ H^p(g,v') $ that commute with the differentials in the long exact sequence in cohomology associated to Sequence (\ref{G-shortseq}). That is, we obtain, for every $ g \in G $, an \emph{automorphism} of this long exact sequence.
Note that all the sheaves appearing in the exact sequence above are supported on the special fiber of $ \mathcal{Y} $. We will now explain how we can ``restrict'' the endomorphisms $ H^p(g,u) $, $ H^p(g,v) $ and $ H^p(g,v') $ to the support of the various sheaves.
\subsection{}\label{6.9}
Let $X$ and $Y$ be schemes, and let $ i : X \rightarrow Y $ be a closed immersion. Assume also that an automorphism $ g : Y \rightarrow Y $ is given, that restricts to an automorphism $ f = g|_X : X \rightarrow X $.
Note that if $ \mathcal{F} $ is a quasi-coherent sheaf on $X$, then the push-forward $ i_* \mathcal{F} $ is a quasi-coherent sheaf on $ Y $, since $i$ is a closed immersion. The following lemma is straightforward, yet tedious, to prove, so we omit the details.
\begin{lemma}\label{lemma 6.6}
Keep the hypotheses above. Let $ \mathcal{F} $ be a quasi-coherent sheaf on $X$, and let $ u : g^* i_* \mathcal{F} \rightarrow i_* \mathcal{F} $ be a homomorphism of $ \mathcal{O}_Y $-modules. Then there is induced, for every $ p \geq 0 $, a commutative diagram
$$ \xymatrix{
H^p(Y, i_* \mathcal{F}) \ar[rr]^{H^p(g,u)} \ar[d]_{H^p(i)} & & H^p(Y, i_* \mathcal{F}) \ar[d]^{H^p(i)} \\
H^p(X, i^* i_*\mathcal{F}) \ar[rr]^{H^p(f,i^*u)} & & H^p(X, i^* i_*\mathcal{F}),}
$$
where the vertical arrows are isomorphisms.
\end{lemma}
\begin{pf}
See Prop. 6.7 in \cite{Thesis}.
\end{pf}
\subsection{}\label{6.11}
The closed immersions $ i_C $, $ i_Z $ and $ i_{Z'} $ induce isomorphisms
$$ H^p(\mathcal{Y}, (i_C)_* \mathcal{L}) \cong H^p(C,\mathcal{L}), $$
$$ H^p(\mathcal{Y}, (i_Z)_* \mathcal{O}_Z) \cong H^p(Z, \mathcal{O}_Z) $$
and
$$ H^p(\mathcal{Y}, (i_{Z'})_* \mathcal{O}_{Z'}) \cong H^p(Z', \mathcal{O}_{Z'}), $$
for all $ p \geq 0 $. Here we have identified $ \mathcal{L} $ with $ (i_C)^* (i_C)_* \mathcal{L} $ (and likewise for $ \mathcal{O}_Z $ and $ \mathcal{O}_{Z'} $).
Since $ C $, $ Z $ and $ Z' $ are projective curves over $k$, and since $ \mathcal{L} $, $ \mathcal{O}_Z $ and $ \mathcal{O}_{Z'} $ are coherent sheaves, the cohomology groups above are finite dimensional $k$-vector spaces, and nonzero only for $ p = 0 $ and $ p = 1 $.
So the long exact sequence in cohomology associated to Sequence (\ref{G-shortseq}) is simply
\begin{equation}\label{sequence 6.7}
0 \rightarrow H^0(C,\mathcal{L}) \rightarrow H^0(Z, \mathcal{O}_Z) \rightarrow H^0(Z', \mathcal{O}_{Z'}) \rightarrow H^1(C,\mathcal{L}) \rightarrow H^1(Z, \mathcal{O}_Z) \rightarrow H^1(Z', \mathcal{O}_{Z'}) \rightarrow 0.
\end{equation}
\begin{prop}\label{longexactgroup}
Sequence (\ref{sequence 6.7}) is an exact sequence of $k[G]$-modules. Furthermore, we get an equality
\begin{equation}
\sum_{p=0}^1 (-1)^p [H^p(Z, \mathcal{O}_{Z})] = \sum_{p=0}^1 (-1)^p [H^p(Z', \mathcal{O}_{Z'})] + \sum_{p=0}^1 (-1)^p [H^p(C,\mathcal{L})]
\end{equation}
of (virtual) $k[G]$-modules.
\end{prop}
\begin{pf}
Denote by $g_C$ the restriction of $g$ to $C$. By Lemma \ref{lemma 6.6}, restriction to $C$ gives a commutative diagram
$$ \xymatrix{ H^p(\mathcal{Y},(i_C)_* \mathcal{L}) \ar[d]_{H^p(g,u)} \ar[rr]^{\cong} & & H^p(C,\mathcal{L}) \ar[d]^{H^p(g_C, i_C^*u)} \\
H^p(\mathcal{Y},(i_C)_* \mathcal{L}) \ar[rr]^{\cong} & & H^p(C,\mathcal{L}). } $$
Also, we get similar diagrams for $ H^p(g_Z, i_Z^* v) $ and $ H^p(g_{Z'}, i_{Z'}^* v') $.
Having made these identifications, we see that the automorphisms $ H^p(g_C, i_C^* u)$, $ H^p(g_Z, i_Z^* v) $ and $ H^p(g_{Z'}, i_{Z'}^* v') $, for $ p = 0, 1 $, fit together to give an automorphism of Sequence (\ref{sequence 6.7}) above. One checks that Sequence (\ref{sequence 6.7}) is an exact sequence of $ k[G] $-modules, and the second statement follows immediately from this fact.
\end{pf}
\subsection{}
Let us write $ \mathcal{Y}_k = \sum_{\alpha} m_{\alpha} C_{\alpha} $, where $ \alpha \in \mathcal{A} $, and put $ m = \sum_{\alpha} m_{\alpha} $. Fix a complete filtration
$$ 0 < Z_m < \ldots < Z_j < \ldots < Z_2 < Z_1 = \mathcal{Y}_k, $$
where $ Z_j - Z_{j+1} = C_j $ for some $ C_j \in \{ C_{\alpha} \}_{\alpha \in \mathcal{A} } $, for each $ j \in \{ 1, \ldots, m-1 \} $. At each step of this filtration, Lemma \ref{lemma 6.1} asserts that there is a short exact sequence
$$ 0 \rightarrow (i_{C_j})_* \mathcal{L}_j \rightarrow (i_{Z_j})_* \mathcal{O}_{Z_j} \rightarrow (i_{Z_{j+1}})_* \mathcal{O}_{Z_{j+1}} \rightarrow 0, $$
where $ i_{\star} : \star \hookrightarrow \mathcal{Y} $ is the canonical inclusion. Note in particular that $ Z_m = C_m $, for some $ C_m \in \{ C_{\alpha} \}_{\alpha \in \mathcal{A} } $, so it makes sense to write $ \mathcal{O}_{Z_m} = \mathcal{L}_m $.
Proposition \ref{longexactgroup} has the following nice consequence:
\begin{prop}\label{prop-Grep}
We have an equality
\begin{equation}
\sum_{p=0}^1 (-1)^p [H^p(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k})] = \sum_{j=1}^m \sum_{p=0}^1 (-1)^p [H^p(C_j, \mathcal{L}_j)]
\end{equation}
of $ k[G] $-modules.
\end{prop}
\begin{pf}
Applying Proposition \ref{longexactgroup} to the step $ Z_{j+1} < Z_j $ of the filtration gives, for each $ j \in \{ 1, \ldots, m-1 \} $, an equality
$$ \sum_{p=0}^1 (-1)^p [H^p(Z_j, \mathcal{O}_{Z_j})] = \sum_{p=0}^1 (-1)^p [H^p(Z_{j+1}, \mathcal{O}_{Z_{j+1}})] + \sum_{p=0}^1 (-1)^p [H^p(C_j, \mathcal{L}_j)]. $$
Summing these equalities over $j$, the terms involving the $Z_j$ telescope, and since $ Z_1 = \mathcal{Y}_k $ and $ \mathcal{O}_{Z_m} = \mathcal{L}_m $, the stated equality follows.
\end{pf}
Let $ \phi^p_g $ (resp.~$\phi^p_{g,j}$) be the automorphism induced by the element $ g \in G $ on $ H^p(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $ (resp.~$H^p(C_j, \mathcal{L}_j)$). The following corollary is immediate:
\begin{cor}\label{cor-Grep}
For any $g \in G$, the following equality holds in $W(k)$:
$$ \sum_{p=0}^1 (-1)^p~\mathrm{Tr}_{\beta}(\phi^p_g) = \sum_{j=1}^m \sum_{p=0}^1 (-1)^p~\mathrm{Tr}_{\beta}(\phi^p_{g,j}). $$
\end{cor}
\subsection{}
The importance of Corollary \ref{cor-Grep} is that it reduces the problem of computing the alternating sum of the Brauer traces of the endomorphisms
$$ \phi^p_g : H^p(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) \to H^p(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $$
to instead computing the same data for the endomorphisms
$$ \phi^p_{g,j} : H^p(C_j, \mathcal{L}_j) \to H^p(C_j, \mathcal{L}_j) $$
for certain invertible sheaves $ \mathcal{L}_j $, supported on the smooth irreducible components $ C_j $ of $ \mathcal{Y}_k $.
The main benefit is that for the latter computations, we can apply the trace formulas in Propositions \ref{prop. 5.5} and \ref{prop. 5.6}. In what follows, we will explain how this can be done.
We keep the notation from Lemma \ref{lemma 6.1}, hence we have $ 0 \leq Z' < Z \leq \mathcal{Y}_k $, where $ Z - Z' = C $. We let $ g \in G $ be an element corresponding to a primitive $n$-th root of unity. From Proposition \ref{prop. 3.3}, we see that the fixed points of the automorphism $ g : C \to C $ are precisely the two points where $ C $ meets the other components of $ \mathcal{Y}_k $. Let $ y \in C $ be one of the fixed points, and let $ dg_y $ denote the cotangent map at $y$. The eigenvalue of $ dg_y $ can easily be computed using Proposition \ref{prop. 3.3}.
We will also need to compute the eigenvalue of the induced automorphism $ u_y : \mathcal{L}(y) \to \mathcal{L}(y) $. To do this, let $ C' $ be the other component of $ \mathcal{Y}_k $ that passes through $ y $. Then we can write $ \mathcal{I}_{Z'} = \mathcal{I}_{C}^{\otimes a} \otimes \mathcal{I}_{C'}^{\otimes a'} \otimes \mathcal{I}_{0} $, where $ \mathcal{I}_{0} $ is the ideal sheaf of an effective Cartier divisor not containing $C$ or $C'$.
Since $C$ and $C'$ intersect transversally at $y$, the fibers $ \mathcal{I}_{C}(y) $ and $ \mathcal{I}_{C'}(y) $ generate the cotangent space to $ \mathcal{Y} $ at $y$. The eigenvalues $ \lambda $ and $ \lambda' $ of these generators can easily be computed using Proposition \ref{prop. 3.3}, since they correspond to the two local coordinates at $y$. The following lemma is an easy computation (cf.~\cite{Thesis}, Section 6.10):
\begin{lemma}\label{eigenvaluelemma}
Keep the notation from the discussion above. The unique eigenvalue of the automorphism $ u_y : \mathcal{L}(y) \to \mathcal{L}(y) $ is $ \lambda^a \lambda'^{a'} $.
\end{lemma}
\section{Special filtrations for trace computations}\label{special filtrations}
Let $ \mathcal{X}/S $ be an SNC-model, and let $ S' \rightarrow S $ be a tame extension of degree $n$, where $n$ is prime to the least common multiple of the multiplicities of the irreducible components of $ \mathcal{X}_k $. Let $ \mathcal{X}' $ be the normalization of $ \mathcal{X}_{S'} $, and let $ \mathcal{Y} $ be the minimal desingularization of $ \mathcal{X}' $.
This section is devoted to computing, for any $ g \in G = \boldsymbol{\mu}_n $, the Brauer trace of the automorphism induced by $g$ on the formal difference $ H^0(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) - H^1(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $. Hence a lot of our previous work will come together in this section.
Our assumption on the degree of $ S'/S $ makes it possible to describe $ \mathcal{Y}_k $ in terms of $ \mathcal{X}_k $. In particular, since every component of $ \mathcal{Y}_k $ is either an exceptional curve or dominates a component of $ \mathcal{X}_k $, it is natural to stratify the combinatorial structure of $ \mathcal{Y}_k $ according to the combinatorial structure of $ \mathcal{X}_k $.
This stratification proves to be very convenient for our trace computations. The section concludes with Theorem \ref{thm. 9.13}, which gives a formula for the trace mentioned above as a sum of contributions associated in a natural way to the combinatorial structure of $ \mathcal{X}_k $.
\subsection{}\label{9.1}
We will associate a graph $ \Gamma(\mathcal{X}_k) $ to $ \mathcal{X}_k $ in the following way: the set of vertices, $ \mathcal{V} $, consists of the irreducible components of $ \mathcal{X}_k $, and the set of edges, $ \mathcal{E} $, consists of the intersection points of $ \mathcal{X}_k $. Two distinct vertices $\upsilon$ and $\upsilon'$ are connected by $ \mathrm{Card} ( D_{\upsilon} \cap D_{\upsilon'} ) $ edges, where $D_{\upsilon}$ denotes the irreducible component corresponding to $\upsilon$.
We define two natural functions on the set of vertices $ \mathcal{V} $. First, let the \emph{genus}
$$ \mathfrak{g} : \mathcal{V} \rightarrow \mathbb{N}_0, $$
be defined by $ \mathfrak{g}(\upsilon) = p_a(D_{\upsilon}) $. We also let the \emph{multiplicity}
$$ \mathfrak{m} : \mathcal{V} \rightarrow \mathbb{N}, $$
be defined by $ \mathfrak{m}(\upsilon) = \mathrm{mult}_{\mathcal{X}_k}(D_{\upsilon}) $.
The graph $ \Gamma(\mathcal{X}_k) $, together with the functions $\mathfrak{g}$ and $\mathfrak{m}$, encodes all the combinatorial and numerical properties of $ \mathcal{X}_k $.
\subsection{}\label{9.2}
Let $ \mathcal{S} $ denote the set of irreducible components of $ \mathcal{Y}_k $. If $ C \in \mathcal{S} $, then we have either:
\begin{enumerate}
\item $ C $ dominates a component $ D_{\upsilon} $ of $ \mathcal{X}_k $, or
\item $ C $ is a component of the exceptional locus of the minimal desingularization $ \rho : \mathcal{Y} \rightarrow \mathcal{X}' $.
\end{enumerate}
In the first case, we have that $ p_a(C) = \mathfrak{g}(\upsilon) $, and $ \mathrm{mult}_{\mathcal{Y}_k}(C) = \mathfrak{m}(\upsilon) $. Furthermore, $G$ acts trivially on $C$. Since $C$ is the unique component of $ \mathcal{Y}_k $ corresponding to $ \upsilon $, we write $C = C_{\upsilon}$.
In the second case, we have that $ C $ is part of a chain of exceptional curves, corresponding uniquely to an edge $ \varepsilon \in \mathcal{E} $. Hence $ p_a(C) = 0 $. By choosing an ordering (or direction) of this chain, we can index the components in the chain by $ l $, for $ 1 \leq l \leq L(\varepsilon) $, where $ L(\varepsilon) $ is the length of the chain. So we can write $ C = C_{\varepsilon,l} $, for some $l \in \{1, \ldots, L(\varepsilon) \} $. By Proposition \ref{prop. 3.3}, $G$ acts nontrivially on $ C $, with fixed points exactly at the two points where $C$ meets the rest of the special fiber.
The special fiber $ \mathcal{Y}_k $ can now be written, as an effective divisor on $ \mathcal{Y} $, in the form
$$ \mathcal{Y}_k = \sum_{\varepsilon \in \mathcal{E}} \sum_{l = 1}^{L(\varepsilon)} \mu_{\varepsilon,l} C_{\varepsilon,l} + \sum_{\upsilon \in \mathcal{V}} m_{\upsilon} C_{\upsilon}, $$
where $ \mu_{\varepsilon,l} $ denotes the multiplicity of the component $ C_{\varepsilon,l} $, and $m_{\upsilon}$ is the multiplicity of $C_{\upsilon}$.
\subsection{}\label{9.3}
We will now consider \emph{special} filtrations of $ \mathcal{Y}_k $, inspired by the partition of the set of irreducible components of $ \mathcal{Y}_k $ introduced above.
Let us choose an ordering of the elements in $ \mathcal{V} $. We can then define the following sequence:
$$ 0 < \ldots < Z_{\mathcal{E}} =: Z_{\upsilon_{|\mathcal{V}| + 1}} < Z_{\upsilon_{|\mathcal{V}|}} < \ldots < Z_{\upsilon_i} < \ldots < Z_{\upsilon_1} = \mathcal{Y}_k, $$
where $ Z_{\mathcal{E}} := \mathcal{Y}_k - \sum_{\upsilon \in \mathcal{V}} m_{\upsilon} C_{\upsilon} $. The $ Z_{\upsilon_i} $ are defined inductively, for every $ i \in \{ 1, \ldots, |\mathcal{V}| \} $, by the refinements
$$ Z_{\upsilon_{i+1}} = Z_{\upsilon_i}^{m_{\upsilon_i} + 1} < \ldots < Z_{\upsilon_i}^j < \ldots < Z_{\upsilon_i}^1 = Z_{\upsilon_i}, $$
where $ Z_{\upsilon_i}^{j + 1} = Z_{\upsilon_i} - j C_{\upsilon_i} $ for every $ j \in \{ 0, \ldots, m_{\upsilon_i} \} $.
Next, we choose an ordering of the elements in $ \mathcal{E} $. We can then define the following sequence:
$$ 0 =: Z_{\varepsilon_{|\mathcal{E}|+1}} < Z_{\varepsilon_{|\mathcal{E}|}} < \ldots < Z_{\varepsilon_i} < \ldots < Z_{\varepsilon_1} := Z_{\mathcal{E}}. $$
The $ Z_{\varepsilon_i} $ are defined inductively, for any $ i \in \{ 1, \ldots, |\mathcal{E}| \} $, by the refinements
$$ Z_{\varepsilon_{i+1}} := Z_{\varepsilon_i, L(\varepsilon_i) + 1} < \ldots < Z_{\varepsilon_i,l} < \ldots < Z_{\varepsilon_i,1} := Z_{\varepsilon_i}, $$
which in turn are defined inductively, for every $ l \in \{ 1, \ldots, L(\varepsilon_i) \} $, by the further refinements
$$ Z_{\varepsilon_i, l+1} := Z_{\varepsilon_i, l}^{\mu_l + 1} < \ldots < Z_{\varepsilon_i, l}^j < \ldots < Z_{\varepsilon_i, l}^1 := Z_{\varepsilon_i, l}, $$
where $ Z_{\varepsilon_i, l}^{j+1} := Z_{\varepsilon_i, l} - j C_{\varepsilon_i, l} $, for every $ j \in \{ 0, \ldots, \mu_l \} $.
\begin{ex}\label{specfilex}
Let $ \mathcal{X}/S $ be an SNC-model with special fiber $ \mathcal{X}_k = 3 D_4 + D_1 + D_2 + D_3 $, where $D_i$ meets $D_4$ in a unique point for $ i \in \{ 1,2,3 \} $, and with no further intersection points. Let $R'/R$ be a tame extension of degree $7$, and let $ \mathcal{Y}/S' $ be the minimal desingularization of the normalization $ \mathcal{X}' $ of $ \mathcal{X}_{S'} $.
The singularities of $ \mathcal{X}' $ are formally isomorphic to $ \sigma = (1,3,7) $. From Example \ref{Gdesingex} we know that the exceptional locus of the resolution of $ \sigma $ consists of two components of multiplicity $1$. Let us write $C_i^1$ and $C_i^2$ for the components corresponding to the edge $ \varepsilon_i = (D_i,D_4) $.
We can now write $ \mathcal{Y}_k = \sum_{i=1}^4 m_i C_i + \sum_{j=1}^3 (C_j^1 + C_j^2) $. The first part of a special filtration is then
$$ Z_{\mathcal{E}} := \mathcal{Y}_k - \sum_{i=1}^4 m_i C_i < \ldots < \mathcal{Y}_k - (C_1 + C_2) < \mathcal{Y}_k - C_1 < \mathcal{Y}_k, $$
and the second part looks like
$$ 0 < C_3^2 < C_3^1 + C_3^2 < \ldots < Z_{\mathcal{E}} - (C_1^1 + C_1^2) < Z_{\mathcal{E}} - C_1^1 < Z_{\mathcal{E}}. $$
\end{ex}
\subsection{}
In the rest of this paper, we shall always choose complete filtrations of $ \mathcal{Y}_k $ that are of the form
\begin{equation}
0 < \ldots < Z_{\varepsilon_i} < \ldots < Z_{\varepsilon_1} = Z_{\mathcal{E}} < \ldots < Z_{\upsilon_i} < \ldots < Z_{\upsilon_1} = \mathcal{Y}_k,
\end{equation}
where $ Z_{\upsilon_{i+1}} < Z_{\upsilon_i} $ and $ Z_{\varepsilon_{i+1}} < Z_{\varepsilon_i} $ are subfiltrations as described above. We shall soon see that the chosen orderings of the sets $ \mathcal{E} $ and $ \mathcal{V} $ are irrelevant.
The nice feature of working with filtrations like this becomes evident when one wants to do trace computations \`a la Section \ref{section 6}. Then we may actually reduce to considering subfiltrations $ Z_{\upsilon_{i+1}} < Z_{\upsilon_i} $, which we interpret as \emph{contributions from the vertices} of $\Gamma$, and subfiltrations $ Z_{\varepsilon_{i+1}} < Z_{\varepsilon_i} $, which we interpret as \emph{contributions from the edges}.
\subsection{}
Let us fix a vertex $\upsilon \in \mathcal{V}$. We shall now define and calculate the contribution to the trace from $\upsilon$. To do this, we choose a filtration of $ \mathcal{Y}_k $ as in Section \ref{9.3} above. Then there will be a subfiltration of the form:
$$ Z_{\mathcal{E}} \leq Z_{\upsilon}^{m_{\upsilon}+1} < \ldots < Z_{\upsilon}^k < \ldots < Z_{\upsilon}^1 = Z_{\upsilon} \leq \mathcal{Y}_k, $$
where $ Z_{\upsilon}^{k} - Z_{\upsilon}^{k + 1} = C_{\upsilon} $, for all $ 1 \leq k \leq m_{\upsilon} $. The invertible sheaf associated to the $k$-th step in this filtration is $ \mathcal{L}_{\upsilon}^k := j_{\upsilon}^*(\mathcal{I}_{Z_{\upsilon}^{k+1}}) $, where $ j_{\upsilon} : C_{\upsilon} \hookrightarrow \mathcal{Y} $ is the canonical inclusion.
We will use the following easy lemma, whose proof is omitted.
\begin{lemma}\label{lemma 9.1}
Assume that $ S'/S $ is a nontrivial extension. If $C_1$ and $C_2$ are two distinct components of $ \mathcal{Y}_k $, corresponding to elements in $ \mathcal{V} $, then they have empty intersection.
\end{lemma}
In what follows, we will suppress the index $ \upsilon $, to simplify notation. Let $ D_1, \ldots, D_f $ be the irreducible components of $ Z $ that intersect $ C $ non-trivially, and that are not equal to $ C $. Let $ a_i $ denote the multiplicity of $D_i$. It follows from Lemma \ref{lemma 9.1} that the $D_i$ are exceptional components. Moreover, it follows from the way we constructed the filtration that the $D_i$ are precisely the components of $ \mathcal{Y}_k $ different from $ C $ that have non-empty intersection with $ C $. We can then write
$$ Z^{k + 1} = (m - k) C + a_1 D_1 + \ldots + a_f D_f + Z_0, $$
where all components of $ Z_0 $ have empty intersection with $ C $. So we get that
\begin{equation}\label{equation 9.2}
\mathcal{L}^k = j^*\mathcal{I}_{Z^{k+1}} = (\mathcal{I}_{C}|_{C})^{ \otimes m - k } \otimes (\mathcal{I}_{D_1}|_{C})^{\otimes a_1} \otimes \ldots \otimes (\mathcal{I}_{D_f}|_{C})^{\otimes a_f}.
\end{equation}
Let $ g $ be an element of $ G = \boldsymbol{\mu}_n $, corresponding to a root of unity $ \xi $. Note that the restriction of the automorphism $ g $ to $C$ is $ \mathrm{id}_{C}$. Let
$$ \phi^p_{g,k} : H^p(C,\mathcal{L}^k) \to H^p(C,\mathcal{L}^k) $$
be the automorphism induced by $g$.
\begin{dfn}\label{contribution from vertex}
We define \emph{the contribution to the trace} from the vertex $ \upsilon \in \mathcal{V} $ as the sum
$$ \mathrm{Tr}_{\upsilon}(\xi) = \sum_{k=1}^{m} \sum_{p=0}^1 (-1)^p~\mathrm{Tr}_{\beta}( \phi^p_{g,k} ). $$
\end{dfn}
The proposition below gives an effective formula for the contribution from a vertex $ \upsilon $:
\begin{prop}\label{prop. 9.7} The contribution to the trace from the vertex $ \upsilon $ is given by the formula
$$ \mathrm{Tr}_{\upsilon}(\xi) = \sum_{k=0}^{m-1} (\xi^{\alpha_{m}})^{k} ((m - k) C^2 + 1 - p_a(C)), $$
where $ \alpha_{m} $ is an inverse to $ m $ modulo $n$.
\end{prop}
\begin{pf}
By Proposition \ref{prop. 5.5}, we have that
$$ \sum_{p=0}^1 (-1)^p~\mathrm{Tr}_{\beta}( \phi^p_{g,k} ) = \lambda_k(\mathrm{deg}_{C}(\mathcal{L}^k) + 1 - p_a(C)), $$
where $ \lambda_k $ is the eigenvalue of the induced automorphism $ \mathcal{L}^k(y) \to \mathcal{L}^k(y) $ at \emph{any} point $ y \in C $. The proof consists of identifying the terms appearing in this formula.
Let us first compute $ \mathrm{deg}_{C}(\mathcal{L}^k) $. Since $ \mathcal{I}_{C} = \mathcal{O}_{\mathcal{Y}}(- C) $, it follows that
$$ \mathrm{deg}_{C}( \mathcal{I}_{C}|_{C}) = \mathrm{deg}_{C}(\mathcal{O}_{\mathcal{Y}}(- C)|_{C}) = - \mathrm{deg}_{C}(\mathcal{O}_{\mathcal{Y}}(C)|_{C}) = - C^2.$$
Furthermore, for any $ i \in \{ 1, \ldots, f \} $, we have that $ \mathcal{I}_{D_i} = \mathcal{O}_{\mathcal{Y}}(- D_i) $, and hence
$$ \mathrm{deg}_{C}(\mathcal{I}_{D_i}|_{C}) = \mathrm{deg}_{C}(\mathcal{O}_{\mathcal{Y}}(- D_i)|_{C}) = - 1. $$
It then follows from Equation \ref{equation 9.2} that
$$ \mathrm{deg}_{C}(\mathcal{L}^k) = - (m - k) C^2 - (a_1 + \ldots + a_f). $$
On the other hand, we have that $ - C^2 = (a_1 + \ldots + a_f)/m $, and therefore we get that
$ \mathrm{deg}_{C}(\mathcal{L}^k) = k C^2 $.
We now claim that $ \lambda_k = (\xi^{\alpha_{m}})^{m - k} $. To see this, let $D$ be one of the components of $ \mathcal{Y}_k $ meeting $ C $, and denote by $y$ the unique point where they intersect. Then $D$ is part of a chain of exceptional curves; denote by $L$ the length of this chain. Using the notation and computations in Proposition \ref{prop. 3.3}, with $ C = C_{L+1} $ and $ D = C_L $, we can identify the fiber of $ \mathcal{I}_{C} $ at $ y = y_L $ with $ \langle z_{L+1} \rangle $. The eigenvalue of $ z_{L+1} $ for the automorphism induced by $\xi$ was precisely $ \xi^{\alpha_{m}} $, so it follows that $ \lambda_k = (\xi^{\alpha_{m}})^{m - k} $. The result follows by re-indexing.
\end{pf}
\begin{rmk}
In particular, it is clear that this formula is independent of how we have chosen to order the elements in $ \mathcal{V} $.
\end{rmk}
\subsection{}\label{9.6}
Let us now choose an edge $ \varepsilon \in \mathcal{E} $. In the filtration of $ \mathcal{Y}_k $, we can find a subfiltration $ 0 < Z_{\varepsilon} \leq Z_{\mathcal{E}} < \mathcal{Y}_k $, with the refinement
$$ Z_{\varepsilon, L(\varepsilon) + 1} < \ldots < Z_{\varepsilon,l} < \ldots < Z_{\varepsilon,1} = Z_{\varepsilon}, $$
and, for any $ l \in \{ 1, \ldots, L(\varepsilon) \} $, the further refinements
$$ Z_{\varepsilon,l+1} = Z_{\varepsilon,l}^{\mu_l + 1} < \ldots < Z_{\varepsilon,l}^{k} < \ldots < Z_{\varepsilon,l}^{1} = Z_{\varepsilon,l}, $$
where $ Z_{\varepsilon,l}^{k} - Z_{\varepsilon,l}^{k+1} = C_{\varepsilon,l} $, for any $ k \in \{ 1, \ldots, \mu_l \} $.
As we are working with a fixed $ \varepsilon $, we will for the rest of this section suppress the index $ \varepsilon $, to simplify the notation. Take now an integer $ l \in \{1, \ldots, L-1 \} $, and let $ j_l : C_l \hookrightarrow \mathcal{Y} $ be the canonical inclusion. Consider then the subfiltration involving the component $C_l$:
$$ \ldots < Z^{\mu_l + 1}_l < \ldots < Z^k_l < \ldots < Z^1_l < \ldots . $$
At the $k$-th step in this filtration, we have $ Z_l^k - Z_l^{k+1} = C_l $ for all $ 1 \leq k \leq \mu_l $. The associated invertible sheaf at the $k$-th step is
\begin{equation}
\mathcal{L}_l^k := j_l^*( \mathcal{I}_{Z_l^{k+1}}) = (\mathcal{I}_{C_l}|_{C_l})^{\otimes \mu_l - k} \otimes (\mathcal{I}_{C_{l+1}}|_{C_l})^{ \otimes \mu_{l+1} }.
\end{equation}
For $ l = L $, we note that since all components in $ Z_{L} $ other than $ C_L $ have empty intersection with $ C_L $, we get instead
$$ \mathcal{L}_L^k := j_L^*( \mathcal{I}_{Z_L^{k+1}}) = (\mathcal{I}_{C_L}|_{C_L})^{\otimes \mu_L - k}. $$
Let $ g \in G $ be a group element corresponding to a primitive root of unity $ \xi $. The restriction $ g|_{C_l} $ has fixed points exactly at the two points $ y_l $ and $ y_{l-1} $ where $C_l$ meets the rest of the special fiber. We need to compute the fibers at $ y_l $ and $ y_{l-1} $ of $ \mathcal{L}_l^k $, and the corresponding eigenvalues for the automorphisms induced by $ g $ at these fibers.
Let
$$ \phi_l^{k,p} : H^p(C_l, \mathcal{L}_l^k) \to H^p(C_l, \mathcal{L}_l^k) $$
be the automorphism induced by $ g $. We can then define the expression
$$ \mathrm{Tr}_l^k(\xi) := \sum_{p=0}^1(-1)^p~\mathrm{Tr}_{\beta}(\phi_l^{k,p}). $$
\begin{ntn}
Since $ \xi^{\alpha_1} $ appears so frequently in our formulas, we introduce the notation $ \chi = \xi^{\alpha_1} $.
\end{ntn}
\begin{prop}\label{lemma 9.10}
For a primitive root $ \xi $, the expression $ \mathrm{Tr}_l^k(\xi) $ can be computed in the following way:
\begin{enumerate}
\item If $ l \in \{2, \ldots, L - 1 \} $, we have that
$$ \mathrm{Tr}_l^k(\xi) = \frac{\chi^{r_{l-2}(\mu_l - k)}}{1 - \chi^{- r_{l-1}}} + \frac{\chi^{ - r_l (\mu_l - k) + r_{l-1} \mu_{l+1} }}{1 - \chi^{r_{l-1}}}, $$
for any $ k = 1, \ldots , \mu_l $.
\item If $ l = 1 $, we get
$$ \mathrm{Tr}_1^k(\xi) = \frac{1}{1 - \chi^{ - r_0 }} + \frac{ \chi^{ - r_1 (\mu_1 - k) + r_0 \mu_2}}{1 - \chi^{r_0}} $$
for any $ k = 1, \ldots , \mu_1 $.
\item Finally, if $ l = L $, we get
$$ \mathrm{Tr}_L^k(\xi) = \frac{\chi^{ r_{L-2}(\mu_L - k)}}{1 - \chi^{ - r_{L-1}}} + \frac{1}{1 - \chi^{r_{L-1}}} $$
for any $ k = 1, \ldots , \mu_L $.
\end{enumerate}
\end{prop}
\begin{pf}
We will give the proof in the first case, where $ l \in \{2, \ldots, L - 1 \} $; the remaining cases are similar. We will use the notation and results from Section \ref{lifting the action}. Recall also that $ \mathcal{L}_l^k = j_l^* \mathcal{I}_{Z_l^{k+1}} $, where
$$ \mathcal{I}_{Z_l^{k+1}} = \mathcal{I}_{C_l}^{\otimes \mu_l - k} \otimes \mathcal{I}_{C_{l+1}}^{\otimes \mu_{l+1}} \otimes \mathcal{I}_0, $$
and where $ \mathcal{I}_0 $ has support away from $ C_l $ and $C_{l+1}$.
The fixed points of the automorphism $ g : C_l \rightarrow C_l $ are the two points $ y_{l-1}$ and $y_l$ where $C_l$ meets the other components of $ \mathcal{Y}_k $. The fibers of $ \mathcal{L}_l^k $ at the fixed points are
$$ \mathcal{L}_l^k(y_{l-1}) = \mathcal{I}_{Z_l^{k+1}}(y_{l-1}) = \mathcal{I}_{C_l}^{\otimes \mu_l - k}(y_{l-1}) = \langle z_{l-1} \rangle^{\otimes \mu_l - k}, $$
and
$$ \mathcal{L}_l^k(y_l) = \mathcal{I}_{Z_l^{k+1}}(y_l) = \mathcal{I}_{C_l}^{\otimes \mu_l - k}(y_l) \otimes \mathcal{I}_{C_{l+1}}^{\otimes \mu_{l+1}}(y_l) = \langle w_l \rangle^{\otimes \mu_l - k} \otimes \langle z_l \rangle^{ \otimes \mu_{l+1} }. $$
Using Proposition \ref{prop. 3.3}, we compute that the eigenvalue for the automorphism on $ \mathcal{L}_l^k(y_{l-1}) $ (resp.~$ \mathcal{L}_l^k(y_l) $) is $(\chi^{r_{l-2}})^{\mu_l - k}$ (resp.~$(\chi^{ - r_l})^{\mu_l - k} (\chi^{r_{l-1}})^{\mu_{l+1}}$).
Let $ dg(y_{\star}) $ be the automorphism of the cotangent space to $C_l$ at the fixed point $y_{\star}$ induced by $g$. Using Proposition \ref{prop. 3.3} again, we compute that the eigenvalue of $ dg(y_{l-1}) $ (resp.~$ dg(y_{l}) $) is $\chi^{-r_{l-1}}$ (resp.~$\chi^{r_{l-1}}$).
We can therefore use Proposition \ref{prop. 5.6} to conclude that
$$ \mathrm{Tr}_l^k(\xi) = \frac{\chi^{r_{l-2}(\mu_l - k)}}{1 - \chi^{- r_{l-1}}} + \frac{\chi^{ - r_l (\mu_l - k) + r_{l-1} \mu_{l+1} }}{1 - \chi^{r_{l-1}}}. $$
\end{pf}
We can then make the following definition:
\begin{dfn}\label{contribution from edge}
Let
\begin{equation} \mathrm{Tr}_{\varepsilon}(\xi) := \sum_{l = 1}^{L(\varepsilon)} \sum_{k = 1}^{\mu_l} \mathrm{Tr}_{\varepsilon, l}^{k}(\xi).
\end{equation}
We say that $ \mathrm{Tr}_{\varepsilon}(\xi) $ is the \emph{contribution to the trace} from $ \varepsilon \in \mathcal{E} $.
\end{dfn}
\begin{rmk}
Let us note that $ \mathrm{Tr}_{\varepsilon}(\xi) $ is defined entirely in terms of the intrinsic data of the singularity associated to $ \varepsilon $. Furthermore, this expression does not depend on the position of $ \varepsilon $ in the chosen ordering of $ \mathcal{E} $. It is also clear that it does not depend on the chosen subfiltration of the divisor $ \sum_{l = 1}^{L(\varepsilon)} \mu_{l} C_{\varepsilon, l} $.
\end{rmk}
\begin{rmk}
It is also easy to see that since $ \mathrm{Tr}_{\varepsilon}(\xi) $ is in fact a polynomial in $\xi$, the same formula is valid for any (possibly non-primitive) root of unity.
\end{rmk}
We will now show that we obtain a formula for $ \sum_{p=0}^1 (-1)^p~\mathrm{Tr}_{\beta}(\phi_g^p) $, where $ \phi_g^p $ is the automorphism of $ H^p(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $ induced by $g$, in terms of the vertex and edge contributions discussed above.
\begin{thm}\label{thm. 9.13}
Let $ g \in G $ be a group element corresponding to a root of unity $\xi \in \boldsymbol{\mu}_n$. Then we have that
$$ \sum_{p=0}^1 (-1)^p~\mathrm{Tr}_{\beta}(\phi_g^p) = \sum_{\upsilon \in \mathcal{V}} \mathrm{Tr}_{\upsilon}(\xi) + \sum_{\varepsilon \in \mathcal{E}} \mathrm{Tr}_{\varepsilon}(\xi). $$
Furthermore, this expression depends only on the combinatorial data $ (\Gamma(\mathcal{X}_k), \mathfrak{g}, \mathfrak{m}) $ associated to $\mathcal{X}_k$.
\end{thm}
\begin{pf}
We begin with choosing a special filtration
$$ 0 < \ldots < Z_{\varepsilon_i} < \ldots < Z_{\varepsilon_1} = Z_{\mathcal{E}} < \ldots < Z_{\upsilon_i} < \ldots < Z_{\upsilon_1} = \mathcal{Y}_k. $$
It then follows from Proposition \ref{prop-Grep} that
$$ \sum_{p=0}^1 (-1)^p~\mathrm{Tr}_{\beta}(\phi_g^p) = \sum_{\upsilon \in \mathcal{V}} \mathrm{Tr}_{\upsilon}(\xi) + \sum_{\varepsilon \in \mathcal{E}} \mathrm{Tr}_{\varepsilon}(\xi), $$
where $ \mathrm{Tr}_{\upsilon}(\xi) $ is the expression defined in Definition \ref{contribution from vertex} and $ \mathrm{Tr}_{\varepsilon}(\xi) $ is the expression defined in Definition \ref{contribution from edge}.
It follows from Proposition \ref{prop. 9.7} that $ \mathrm{Tr}_{\upsilon}(\xi) $ only depends on the combinatorial structure of $ \mathcal{X}_k $. Likewise, Proposition \ref{lemma 9.10} gives that $ \mathrm{Tr}_{\varepsilon}(\xi) $ only depends on the combinatorial structure of $ \mathcal{X}_k $. Therefore, the same is true for the sum of these expressions.
\end{pf}
\subsection{An explicit trace formula}
In \cite{Thesis}, an explicit formula for $ \mathrm{Tr}_{\varepsilon}(\xi) $ is obtained. We present this formula here, without proof. Let us assume that the singularity associated to $ \varepsilon $ is analytically isomorphic to $ \sigma = (m_1,m_2,n) $, with notation as in Section \ref{lifting the action}.
Before giving the formula, we first need to note a certain regularity of the minimal resolution of $ \sigma = (m_1,m_2,n) $, as $n$ runs through the positive integers prime to $p$ in a fixed residue class modulo $M$.
\begin{prop}[\cite{Thesis}, Proposition 7.10]\label{reglocprop}
Let $m_1, m_2$ be positive integers, let $ m = \mathrm{gcd}(m_1,m_2) $, and let $ M = \mathrm{lcm}(m_1,m_2) $. Let us furthermore fix a positive integer $ n_0 $ that is not divisible by $p$ and that is relatively prime to $M$. Then the following properties hold:
\begin{enumerate}
\item There exists an integer $ K \gg 0 $, such that the multiplicities of the components in the minimal resolution of the singularity $ \sigma = (m_1,m_2,n) $, where $ n = n_0 + K M $, satisfy
$$ \mu_0 > \mu_1 > \ldots > \mu_{l_0} = \ldots = m = \ldots = \mu_{L + 1 - l_1} < \ldots < \mu_{L} < \mu_{L + 1}, $$
where $L$ denotes the \emph{length} of the singularity $ \sigma $.
\item The integers $ \mu_2 , \ldots , \mu_{l_0} $ are uniquely determined by $ \mu_0 $ and $ \mu_1 $, and similarly $ \mu_{L + 1 - l_1}, \ldots , \mu_{L-1} $ are uniquely determined by $ \mu_{L} $ and $ \mu_{L+1} $.
\item For any extension of degree $ n' = n + k M $, where $ k > 0 $, we have that the multiplicities $ \mu_l' $ of the components in the minimal resolution of the singularity $ \sigma' = (m_1,m_2,n') $ will only differ from the sequence of multiplicities associated to $ \sigma $ by inserting $m$'s ``in the middle''.
\end{enumerate}
\end{prop}
In other words, for all sufficiently big $n$ in a fixed residue class modulo $M$, the multiplicities of the irreducible components of the exceptional locus of the desingularization of $ \sigma = (m_1,m_2,n) $ are of the form given in part $ (i) $ of the proposition above. Increasing $n$ only increases the length of the part of the chain with constant multiplicity equal to $m$.
\begin{ex}\label{reglocex}
Consider the singularity $ (m_1,m_2,n) $ with $ m_1 = 3 $, $ m_2 = 4 $ and where $ n \equiv_{12} 5 $. We will use the notation $ (\mu_{L+1}, \mu_L, \mu_{L-1}, \ldots, \mu_1, \mu_0)_n $ for the multiplicities of the components in the resolution. Then we easily compute the sequences $ (3,2,3,4)_5 $, $ (3,2,1,2,3,4)_{17} $ and $ (3,2,1,1,2,3,4)_{29} $. This illustrates Proposition \ref{reglocprop} above, which then tells us that we have the sequence $ (3,2,1, \ldots, 1,2,3,4)_{5 + k \cdot 12} $, as soon as $ k \geq 1 $.
\end{ex}
In the situation where $ \varepsilon $ corresponds to the singularity $ \sigma = (m_1,m_2,n) $, with $ n \gg 0 $ in the sense above, it is proved in \cite{Thesis} that $ \mathrm{Tr}_{\varepsilon} = \mathrm{Tr}_{\sigma} $, where $ \mathrm{Tr}_{\sigma} $ is given by the formula below.
\begin{thm}[\cite{Thesis}, Theorem 10.9]\label{Formula}
Let $ \sigma = (m_1,m_2,n) $ be a singularity, where $ n \gg 0 $. Let $ m = \mathrm{gcd}(m_1,m_2) $, and let $ \alpha_m $ (resp.~$ \alpha_{m_1} $, resp.~$ \alpha_{m_2} $) be inverse to $m$ (resp.~$m_1$, resp.~$m_2$) modulo $n$. For any root of unity $ \xi \in \boldsymbol{\mu}_n $, we have that
$$ \mathrm{Tr}_{\sigma}(\xi) = \sum_{r=0}^{\mu_0 - 1} (\mu_1 - \left \lceil r \frac{\mu_{1}}{\mu_{0}} \right \rceil) (\xi^{\alpha_{m_2}})^r +
\sum_{r=0}^{\mu_{L+1} - 1} (\mu_L - \left \lceil r \frac{\mu_{L}}{\mu_{L+1}} \right \rceil) (\xi^{\alpha_{m_1}})^r -
\sum_{r=0}^{m-1} (\xi^{\alpha_m})^r. $$
The coefficients in this expression depend only on the residue class of $n$ modulo $ \mathrm{lcm}(m_1,m_2) $.
\end{thm}
With this formula at hand, one can effectively compute traces. Indeed, as long as $n$ is large compared to the multiplicities of the irreducible components of $ \mathcal{X}_k $, the expressions $ \mathrm{Tr}_{\varepsilon} $ can be computed using Theorem \ref{Formula}. The requirement that $n$ be ``large'' is no restriction in the applications in the next section, where we are interested in the characters as $n$ grows to infinity.
\begin{ex}\label{formulaex}
Let $ m_1 = 2 $ and $ m_2 = 3 $. Then $ M = 6 $ and $ m = 1 $. So there are two cases to consider, namely $ n \equiv_6 1 $ and $ n \equiv_6 5 $. In the first case, one checks that the list of multiplicities is $ ( \mu_{L+1}, \ldots, \mu_0) = (2,1, \ldots, 1,2,3)_n $, and that $ \mathrm{Tr}_{\sigma}(\xi) = 2 + \xi^{\alpha_3} $, where $ \alpha_3 $ is an inverse to $3$ modulo $n$.
In the second case, where $ n \equiv_6 5 $, one finds instead that the list of multiplicities is $ (2,1, \ldots, 1,3)_n $, and that the formula gives $ \mathrm{Tr}_{\sigma}(\xi) = 1 $.
\end{ex}
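The formula of Theorem \ref{Formula} can likewise be evaluated mechanically. The following Python sketch (the function names and the dictionary encoding of exponents modulo $n$ are our own choices, not part of the text) takes the extremal multiplicities $ \mu_0, \mu_1, \mu_L, \mu_{L+1} $ directly as input:

```python
from math import gcd

def ceil_div(a, b):
    # Exact integer ceiling of a/b, avoiding floating point.
    return -(-a // b)

def trace_sigma(m1, m2, mu0, mu1, muL, muL1, n):
    """Evaluate Tr_sigma(xi) of Theorem `Formula' as a dictionary
    {e: c} meaning the term c * xi**e, with e taken mod n."""
    m = gcd(m1, m2)
    a1, a2, am = pow(m1, -1, n), pow(m2, -1, n), pow(m, -1, n)
    coeffs = {}
    def add(e, c):
        if c:
            coeffs[e % n] = coeffs.get(e % n, 0) + c
            if coeffs[e % n] == 0:
                del coeffs[e % n]
    for r in range(mu0):             # first sum, exponent base alpha_{m2}
        add(a2 * r, mu1 - ceil_div(r * mu1, mu0))
    for r in range(muL1):            # second sum, exponent base alpha_{m1}
        add(a1 * r, muL - ceil_div(r * muL, muL1))
    for r in range(m):               # third sum, subtracted
        add(am * r, -1)
    return coeffs
```

For the data of Example \ref{formulaex} with $ n \equiv_6 1 $ (say $ n = 7 $, so $ \alpha_3 = 5 $), this returns the dictionary encoding $ 2 + \xi^{\alpha_3} $; for $ n \equiv_6 5 $ (say $ n = 11 $) it returns the encoding of $ 1 $.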
\section{Character computations and jumps}\label{computations and jumps}
Let $X/K$ be a smooth, projective and geometrically irreducible curve, and let $ \mathcal{X}/S $ be the minimal SNC-model of $X$. We have in previous sections studied properties of the action of $ \boldsymbol{\mu}_n $ on the cohomology groups $ H^i(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $, where $ \mathcal{Y} $ is the minimal desingularization of the pullback $ \mathcal{X}_{S'} $ for some tame extension $S'/S$ of degree $n$.
We will throughout this section make the following assumption:
\begin{ass}\label{gcdassumption}
For any $g \geq 1 $, we assume that the greatest common divisor of the multiplicities of the irreducible components of $ \mathcal{X}_k $ is $1$. If $g=1$, we assume in addition that $X/K$ has a rational point.
\end{ass}
Let $\mathcal{J}/S$ be the N\'eron model of the Jacobian of $X$. We will in this section apply our results to the study of the filtration $ \{ \mathcal{F}^a \mathcal{J}_k \} $, where $ a \in \mathbb{Z}_{(p)} \cap [0,1] $, that we defined in Section \ref{ratfil}. We will first prove some general properties for these filtrations, and then present some computations for curves of genus $g = 1$ and $g = 2$.
We would at this point like to remark that in order to make the $ \boldsymbol{\mu}_n $-action on $ H^1(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $ compatible with the action on $ T_{\mathcal{J}'_k, 0} $, we have to let $ \boldsymbol{\mu}_n $ act on $ R' $ by $ [\xi](\pi') = \xi^{-1} \pi' $, for any $ \xi \in \boldsymbol{\mu}_n $. We made the choice in previous sections, when working with local rings, to let $ \boldsymbol{\mu}_n $ act by $ [\xi](\pi') = \xi \pi' $, in order to get simpler notation. This means that the irreducible characters for the representation on $ T_{\mathcal{J}'_k, 0} $ are the \emph{inverse} characters to those we compute when using our formulas for the representation on $ H^1(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $.
\subsection{}
Theorem \ref{thm. 9.13} states that the Brauer trace of the automorphism induced by any group element $ \xi \in \boldsymbol{\mu}_n $ on the formal difference $ H^0(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) - H^1(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $ only depends on the combinatorial structure of $ \mathcal{X}_k $. If Assumption \ref{gcdassumption} is valid, we can improve this result, and get a similar result for the character of the representation of $ \boldsymbol{\mu}_n $ on $ H^1(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $:
\begin{thm}\label{main character theorem}
Let $ X/K $ be a smooth, projective and geometrically connected curve having genus $ g(X) > 0 $, and assume that Assumption \ref{gcdassumption} holds. Let $ \mathcal{X} $ be the minimal SNC-model of $ X $ over $S$. Furthermore, let $ S'/S $ be a tame extension of degree $n$, where $ n $ is relatively prime to the least common multiple of the multiplicities of the irreducible components of $ \mathcal{X}_k $, and let $ \mathcal{Y}/S' $ be the minimal desingularization of $ \mathcal{X}_{S'} $.
Then the irreducible characters for the representation of $ \boldsymbol{\mu}_n $ on $ H^1(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $ only depend on the combinatorial data $ (\Gamma(\mathcal{X}_k), \mathfrak{g}, \mathfrak{m}) $ associated to $\mathcal{X}_k$.
\end{thm}
\begin{pf}
Let $ g \in G $ correspond to the root $ \xi \in \boldsymbol{\mu}_n $. Then, by Theorem \ref{thm. 9.13}, we have that
$$ \sum_{p=0}^1 (-1)^p~\mathrm{Tr}_{\beta}(\phi^p_g) = \sum_{\upsilon \in \mathcal{V}} \mathrm{Tr}_{\upsilon}(\xi) + \sum_{\varepsilon \in \mathcal{E}} \mathrm{Tr}_{\varepsilon}(\xi), $$
where $ \phi^p_g $ is the automorphism induced by $g$ on $ H^p(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $. The contributions $ \mathrm{Tr}_{\upsilon}(\xi) $ can be computed using Proposition \ref{prop. 9.7}, and the contributions $ \mathrm{Tr}_{\varepsilon}(\xi) $ can be computed by Proposition \ref{lemma 9.10}. In this way, we obtain a formula for the Brauer trace of the automorphism induced by any $ \xi \in \boldsymbol{\mu}_n $ on the formal difference $ H^0(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) - H^1(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $.
Since Assumption \ref{gcdassumption} holds, we have that $ H^0(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) = k $ (\cite{Arwin}, Lemma 2.6). Furthermore, the $ \boldsymbol{\mu}_n $-action on $ \mathcal{Y}_k $ is relative to the ground field $k$, so it follows that the character for the representation of $ \boldsymbol{\mu}_n $ on $ H^0(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $ is $1$.
We therefore obtain the formula
$$ \mathrm{Tr}_{\beta}(\phi^1_g) = 1 - (\sum_{\upsilon \in \mathcal{V}} \mathrm{Tr}_{\upsilon}(\xi) + \sum_{\varepsilon \in \mathcal{E}} \mathrm{Tr}_{\varepsilon}(\xi)). $$
Since the expressions $ \mathrm{Tr}_{\upsilon}(\xi) $ and $ \mathrm{Tr}_{\varepsilon}(\xi) $ only depend on the combinatorial structure of $ \mathcal{X}_k $, the same is true for $ \mathrm{Tr}_{\beta}(\phi^p_g) $. This completes the proof, since the Brauer character for the representation of $ \boldsymbol{\mu}_n $ on $ H^1(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $ is determined by the Brauer trace for the group elements $ \xi \in \boldsymbol{\mu}_n $.
\end{pf}
Let $ \mathcal{J}/S $ be the N\'eron model of the Jacobian of $X/K$. Theorem \ref{main character theorem} has the following consequence for the filtration $ \{ \mathcal{F}^a \mathcal{J}_k \} $:
\begin{cor}\label{main jump corollary}
The jumps in the filtration $ \{ \mathcal{F}^a \mathcal{J}_k \} $ with indices in $ \mathbb{Z}_{(p)} \cap [0,1] $ depend only on the combinatorial data $ (\Gamma(\mathcal{X}_k), \mathfrak{g}, \mathfrak{m}) $. In particular, the jumps do not depend on the residue characteristic $p$.
\end{cor}
\begin{pf}
Let $S'/S$ be a tame extension of degree $n$, where $n$ is prime to $l$, the least common multiple of the multiplicities of the irreducible components of $ \mathcal{X}_k $. Let $ \mathcal{J}'/S' $ be the N\'eron model of the Jacobian of $X_{K'}$. Recall from Section \ref{jacobiancase} that we could make the identification $ H^1(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) \cong T_{\mathcal{J}'_k,0} $.
The jumps in the filtration of $ \mathcal{J}_k $ induced by the extension $S'/S$ are determined by the irreducible characters for the representation of $ \boldsymbol{\mu}_n $ on $ T_{\mathcal{J}'_k,0} $. However, this representation is precisely the representation of $ \boldsymbol{\mu}_n $ on $ H^1(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $, if we let $ \boldsymbol{\mu}_n $ act on $R'$ by $ [\xi](\pi') = \xi^{-1} \pi' $, for every $\xi$. By Theorem \ref{main character theorem}, the character for this representation only depends on the combinatorial data $ (\Gamma(\mathcal{X}_k), \mathfrak{g}, \mathfrak{m}) $.
Since $ \mathbb{Z}_{(lp)} \cap [0,1] $ is \emph{dense} in $ \mathbb{Z}_{(p)} \cap [0,1] $, we conclude that the jumps of the filtration $ \{ \mathcal{F}^a \mathcal{J}_k \} $ with indices in $ \mathbb{Z}_{(p)} \cap [0,1] $ only depend on $ \Gamma(\mathcal{X}_k) $, $ \mathfrak{g} $ and $ \mathfrak{m} $.
\end{pf}
With the two results above at hand, we can draw some conclusions about \emph{where} the jumps occur in the case of Jacobians. Let us first recall the following terminology from \cite{Thesis}: An irreducible component $C$ of $ \mathcal{X}_k $ is called \emph{principal} if either $ p_a(C) > 0 $, or if $ C $ is smooth and rational and meets the rest of the components of $ \mathcal{X}_k $ in at least three points.
\begin{cor}\label{specific jump corollary}
Let $ \tilde{n} $ be the least common multiple of the multiplicities of the principal components of $ \mathcal{X}_k $. Then the jumps in the filtration $ \{ \mathcal{F}^a \mathcal{J}_k \} $ occur at indices of the form $ i/\tilde{n} $, where $ 0 \leq i < \tilde{n} $.
\end{cor}
\begin{pf}
Let us first recall that if $X$ obtains semi-stable reduction over a tame extension $K'/K$, then the Jacobian of $X$ obtains semi-abelian reduction over the same extension (see \cite{DelMum}). Furthermore, the minimal extension that gives semi-abelian reduction is the unique tame extension $ \widetilde{K}/K $ of degree $ \tilde{n} $ (\cite{Thesis}, Paper I Theorem 7.1). So in this case, the statement follows from Proposition \ref{tamejumpprop}.
Let us now assume that $X$ needs a wildly ramified extension to obtain semi-stable reduction. Consider the combinatorial data $ (\Gamma(\mathcal{X}_k), \mathfrak{g}, \mathfrak{m}) $. It follows from \cite{Winters}, Corollary 4.3, that we can find an SNC-model $ \mathcal{Z}/\mathrm{Spec}(\mathbb{C}[[t]]) $, where the generic fiber of $ \mathcal{Z} $ is smooth, projective and geometrically connected, and where the special fiber of $ \mathcal{Z} $ has the \emph{same} combinatorial data as $ \mathcal{X}_k $.
Let $ \mathcal{J}_{\mathcal{Z}} $ be the N\'eron model of the Jacobian of the generic fiber of $ \mathcal{Z} $. Then the jumps of the filtration $ \{ \mathcal{F}^a \mathcal{J}_{\mathcal{Z},\mathbb{C}} \} $ occur at indices of the form $ i/\tilde{n} $, where $ 0 \leq i < \tilde{n} $. The result follows now from Corollary \ref{main jump corollary}.
\end{pf}
\subsection{}
Let $X/K$ be a smooth, projective and geometrically connected curve, and let $ \mathcal{X}/S $ be the minimal SNC-model of $X/K$. It is known that for a fixed genus $ g \geq 2 $, there are only finitely many possibilities for the combinatorial structure of the special fiber of $ \mathcal{X}/S$, modulo chains of $(-2)$-curves (\cite{Arwin}, Theorem 1.6). The same statement is, as we shall see below, also true for elliptic curves.
Let $ \mathcal{J}/S $ be the N\'eron model of the Jacobian of $X$. Since, by Corollary \ref{main jump corollary}, the jumps of the filtration $ \{ \mathcal{F}^a \mathcal{J}_k \} $ only depend on the combinatorial structure of $ \mathcal{X}_k $, one can, for each $g > 0$, classify these jumps. In the next sections, we will give the jumps for every fiber type of genus $1$ and $2$.
\begin{rmk}
It is not hard to see that chains of $(-2)$-curves do not affect the jumps.
\end{rmk}
\subsection{Computations of jumps for $ g=1 $}
Let $X/K$ be an \emph{elliptic} curve, and let $ \mathcal{E} $ be the minimal regular model of $ X $. It is a well-known fact that there are only finitely many possibilities for the combinatorial structure of the special fiber $ \mathcal{E}_k $, modulo chains of $(-2)$-curves. The various possibilities were first classified in \cite{Kod}, and this is commonly referred to as the \emph{Kodaira classification}. For another treatment of this theory, we refer to \cite{Liubook}, Chapter 10.2. If now $ \mathcal{X}/S $ denotes the minimal SNC-model of $X$, it follows that there are only finitely many possibilities for the combinatorial structure of $ \mathcal{X}_k $, each one derived from the Kodaira classification. The symbols $ I, II, \ldots $ appearing in Table \ref{table 1} below are known as the \emph{Kodaira symbols} and refer to the fiber types in the Kodaira classification.
Let $ \mathcal{J}/S $ be the N\'eron model of $ J(X) = X $. It follows from Corollary \ref{main jump corollary} and Corollary \ref{specific jump corollary} that the (unique) jump in the filtration $ \{ \mathcal{F}^a \mathcal{J}_k \} $ only depends on the fiber type of $ \mathcal{X}/S $, and can only occur at finitely many \emph{rational} numbers. In Table \ref{table 1} below, we list the jumps for the various Kodaira types. Note that we obtain the same list as the one computed by R.~Schoof in \cite{Edix}.
We would like to say a few words about how these computations are done. For each fiber type, we consider an infinite sequence $ (n_j)_{j \in \mathbb{N}} $, depending on the fiber type, where $ n_j \rightarrow \infty $ as $ j \rightarrow \infty $. For each $n_j$ in this sequence, let $ R_j/R $ be the tame extension of degree $n_j$, and let $ \pi_j $ be a uniformizing parameter of $ R_j $. Furthermore, let $ \boldsymbol{\mu}_{n_j} $ act on $ R_j $ by $ [\xi](\pi_j) = \xi \pi_j $. We can then use Theorem \ref{thm. 9.13} to compute the character for the induced representation of $ \boldsymbol{\mu}_{n_j} $ on $ H^1(\mathcal{Y}^j_k, \mathcal{O}_{\mathcal{Y}^j_k}) $, where $ \mathcal{Y}^j $ denotes the minimal desingularization of $ \mathcal{X}_{S_j} $, and where $ S_j = \mathrm{Spec}(R_j) $. This character is of the form $ \chi(\xi) = \xi^{i(j)} $. In particular, when $ n_j \gg 0 $, we obtain an explicit formula for $ i(j) $, using Theorem \ref{Formula}.
The character for the representation of $ \boldsymbol{\mu}_{n_j} $ on $ T_{\mathcal{J}^j_k, 0} $ is the inverse of this character, $ \chi^{-1}(\xi) = \xi^{- i(j)} $. The jump of $ \{ \mathcal{F}^a \mathcal{J}_k \} $ will then be given by the limit of the expression $ [- i(j)]_{n_j}/n_j $ as $ j \rightarrow \infty $, where $ [- i(j)]_{n_j} \equiv_{n_j} - i(j) $, and $ 0 \leq [- i(j)]_{n_j} < n_j $.
In Example \ref{example genus 1} below, we explain in detail how these computations are done for fiber type $IV$ in the Kodaira classification.
\begin{ex}\label{example genus 1}
Let $ \mathcal{X}/S $ have fiber type $IV$. In this case, the combinatorial data of $ \mathcal{X}_k $ consists of the set of vertices $ \mathcal{V} = \{ \upsilon_1, \ldots, \upsilon_4 \} $, where $ \mathfrak{m}(\upsilon_i) = 1 $ for $ i \in\{ 1,2,3 \}$, and $ \mathfrak{m}(\upsilon_4) = 3 $. Furthermore, we have that $ \mathfrak{g}(\upsilon_i) = 0 $ for all $i$. The set of edges is $ \mathcal{E} = \{ \varepsilon_1, \varepsilon_2, \varepsilon_3 \} $, where $ \varepsilon_i $ corresponds to the unique intersection point of the components $ \upsilon_i $ and $ \upsilon_4 $, for $ i = 1,2,3 $. Let us choose the ordering $ (\upsilon_i,\upsilon_4) $ for all $i$.
Let now $ n \gg 0 $ be a positive integer relatively prime to $ p $ and to $ \mathrm{lcm}( \{ \mathfrak{m}(\upsilon_i) \} ) = 3 $, and let $R'/R$ be a tame extension of degree $n$. Let $ \boldsymbol{\mu}_n $ act on $R'$ by $ [\xi](\pi') = \xi \pi' $ for any $ \xi \in \boldsymbol{\mu}_n$, where $ \pi' $ is a uniformizing parameter for $R'$.
For any $ g \in G $, corresponding to a root of unity $ \xi \in \boldsymbol{\mu}_n $, Theorem \ref{thm. 9.13} states that
$$ \sum_{p=0}^1 (-1)^p~\mathrm{Tr}_{\beta}(\phi^p_g) = \sum_{\upsilon \in \mathcal{V}} \mathrm{Tr}_{\upsilon}(\xi) + \sum_{\varepsilon \in \mathcal{E}} \mathrm{Tr}_{\varepsilon}(\xi). $$
Let $ \sigma $ be the singularity $ (1,3,n) $. Then we have that $ \mathrm{Tr}_{\varepsilon_i}(\xi) = \mathrm{Tr}_{\sigma}(\xi) $ for all $ i \in \{ 1,2,3 \} $. It suffices to consider the case where $ n \equiv_3 1 $. One computes easily that $ \mu_l = 1 $ for all $ l \in \{ 1, \ldots, L(\sigma) \} $. From Theorem \ref{Formula}, we immediately get that $ \mathrm{Tr}_{\varepsilon_i}(\xi) = 1 $, for all $i$.
Proposition \ref{prop. 9.7} states that
$$ \mathrm{Tr}_{\upsilon}(\xi) = \sum_{k=0}^{m_{\upsilon}-1} (\xi^{\alpha_{m_{\upsilon}}})^{k} ((m_{\upsilon} - k)C_{\upsilon}^2 + 1 - p_a(C_{\upsilon})), $$
for any $ \upsilon \in \mathcal{V} $, where $ \alpha_{m_{\upsilon}} m_{\upsilon} \equiv_n 1 $. As $ C_{\upsilon_i}^2 = - 1 $ for $ i \in \{ 1,2,3 \} $, we see that $ \mathrm{Tr}_{\upsilon_i}(\xi) = 0 $ for these vertices, and since $ C_{\upsilon_4}^2 = - 1 $, it follows that $ \mathrm{Tr}_{\upsilon_4}(\xi) = - 2 - \xi^{\alpha_3} $. In total, we get
$$ \mathrm{Tr}_{\beta}(e(H^{\bullet}(g|_{ \mathcal{Y}_k}))) = 3 + (- 2 - \xi^{\alpha_3}) = 1 - \xi^{\alpha_3}. $$
We can therefore conclude that the character for the representation of $ \boldsymbol{\mu}_n $ on $ H^1(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $ is $ \chi(\xi) = \xi^{\alpha_3} $.
In order to compute the jump of the filtration $ \{ \mathcal{F}^a \mathcal{J}_k \} $, where $ \mathcal{J} $ is the N\'eron model of $J(X) = X$, we have to use the \emph{inverse} character, which is $ \chi^{-1}(\xi) = \xi^{[- \alpha_3]_n} $, where $ [- \alpha_3]_n \equiv - \alpha_3 \pmod{n} $ and $ 0 \leq [- \alpha_3]_n < n $. The jump will be given by the limit of the expression $ ([- \alpha_3]_n)/n $ as $n$ goes to infinity over integers $n$ that are congruent to $1$ modulo $3$.
Since $ n = 1 + 3 \cdot h $, for some integer $h$, we get that $ \alpha_3 = \frac{1 + 2 n}{3} $, where $ 0 < \alpha_3 < n $. Therefore, the jump occurs at the limit of $ ([- \alpha_3]_n)/n = \frac{n - 1}{3 n} $ which is $ 1/3 $.
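This modular arithmetic is easy to verify by machine; the following small script (Python, purely as an illustration of the computation just carried out) checks the closed form for $\alpha_3$ and the limit $1/3$:

```python
# Verify the jump computation for fiber type IV:
# alpha_3 is the inverse of 3 modulo n, and [-alpha_3]_n / n -> 1/3
# over n = 1 + 3h coprime to 3.

def jump_terms(n):
    alpha3 = pow(3, -1, n)          # 3 * alpha3 == 1 (mod n)
    return alpha3, (-alpha3) % n    # alpha3 and [-alpha3]_n

for h in (1, 10, 1000):
    n = 1 + 3 * h
    alpha3, neg = jump_terms(n)
    assert alpha3 == (1 + 2 * n) // 3   # closed form from the text
    assert neg == (n - 1) // 3          # hence neg/n -> 1/3
```

Here `pow(3, -1, n)` (Python 3.8+) computes the modular inverse directly.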
\end{ex}
\begin{table}[htb]\caption{Genus $1$}\label{table 1}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}
\hline
Fiber type & $(I)$ & $(I)^*$ & $(I_n)$ & $(I_n)^*$ & $(II)$ & $(II)^*$ & $(III)$ & $(III)^*$ & $(IV)$ & $(IV)^*$\\
\hline
Jumps & $0$ & $1/2$ & $0$ & $1/2$ & $ 1/6 $ & $ 5/6 $ & $ 1/4 $ & $ 3/4 $ & $ 1/3 $ & $ 2/3 $ \\\hline
\end{tabular}
\end{table}
\subsection{Computations of jumps for $g=2$}\label{genus 2}
Let $ X/K $ be a curve of genus $2$. As in the case of elliptic curves, there are finitely many possibilities, modulo chains of $(-2)$-curves, for the combinatorial structure of the special fiber of the minimal regular model of $X$. Moreover, there exists a complete classification of the various possible fiber types. This classification is mainly due to A.P.~Ogg (\cite{Ogg}), with the exception of a few missing cases which were filled in by Y.~Namikawa and K.~Ueno in \cite{Ueno}. We use the classification and notation in \cite{Ueno}.
Let $ \mathcal{X}/S $ be the minimal SNC-model of $X$, and let $ \mathcal{J}/S $ be the N\'eron model of the Jacobian of $X$. The jumps in the filtration $ \{ \mathcal{F}^a \mathcal{J}_k \} $ depend only on the combinatorial structure of $ \mathcal{X}_k $, and can occur only at a finite set of rational numbers.
In order to compute the jumps for each fiber type, we proceed more or less in the same manner as in the case of elliptic curves. In Example \ref{example 13.1}, we explain in detail how this is done for fiber type $VI$ in the classification in \cite{Ueno}.
The jumps for the various genus $2$ fiber types are listed in Tables \ref{table 2} through \ref{table 6} below.
\begin{ex}\label{example 13.1}
We consider fiber type $VI$ in the classification in \cite{Ueno}. In this case, the set of vertices of $ \Gamma(\mathcal{X}_k) $ is $ \mathcal{V} = \{ \upsilon_1, \ldots, \upsilon_7 \} $, where $ \mathfrak{g}(\upsilon_i) = 0 $ for all $i$. Furthermore, we have that $ \mathfrak{m}(\upsilon_i) = 1 $ for $ i = 1,7 $, $ \mathfrak{m}(\upsilon_i) = 2 $ for $ i = 2,5,6 $, $ \mathfrak{m}(\upsilon_3) = 3 $ and $ \mathfrak{m}(\upsilon_4) = 4 $. The set of edges is $ \mathcal{E} = \{ \varepsilon_1, \varepsilon_2, \varepsilon_3, \varepsilon_4, \varepsilon_5, \varepsilon_6 \} $, where $ \varepsilon_1 = (\upsilon_1,\upsilon_2) $, $ \varepsilon_2 = (\upsilon_2,\upsilon_3) $, $ \varepsilon_3 = (\upsilon_3,\upsilon_4) $, $ \varepsilon_4 = (\upsilon_5,\upsilon_4) $, $ \varepsilon_5 = (\upsilon_6,\upsilon_4) $ and $ \varepsilon_6 = (\upsilon_7,\upsilon_4) $.
We have that $ \mathrm{lcm}( \{\mathfrak{m}(\upsilon_i)\}) = 12 $. Let $ n \gg 0$ be any integer not divisible by $p$, and such that $ n \equiv_{12} 1 $. Let $ R'/R $ be the extension of degree $n$, and let $ \pi'$ be a uniformizing parameter of $R'$. Let $ \mathcal{Y} $ be the minimal desingularization of $ \mathcal{X}_{S'} $. We let $ \boldsymbol{\mu}_n $ act on $ R'$ by $ [\xi](\pi') = \xi \pi' $, for any $ \xi \in \boldsymbol{\mu}_n $.
Now, let $ \xi \in \boldsymbol{\mu}_n$ be a root of unity. For any $ \upsilon \in \mathcal{V} $, Proposition \ref{prop. 9.7} gives that
$$ \mathrm{Tr}_{\upsilon}(\xi) = \sum_{k=0}^{m_{\upsilon}-1} (\xi^{\alpha_{m_{\upsilon}}})^{k} ((m_{\upsilon} - k)C_{\upsilon}^2 + 1 - p_a(C_{\upsilon})). $$
As the computations are similar for all $ \upsilon \in \mathcal{V} $, we only do this explicitly for $ \upsilon_3 $. We have that $ p_a(C_{\upsilon_3}) = \mathfrak{g}(\upsilon_3) = 0 $, so it remains only to compute $ C_{\upsilon_3}^2 $. The edge $ \varepsilon_2 $ corresponds to the singularity $ \sigma_2 = (2,3,n) $ and the edge $ \varepsilon_3 $ corresponds to the singularity $ \sigma_3 = (3,4,n) $. Denote by $ C_l^{\sigma_2} $ the exceptional components in the resolution of $ \sigma_2 $, and by $ C_l^{\sigma_3} $ the components in the resolution of $ \sigma_3 $. Then $ C_1^{\sigma_2} $ and $ C_L^{\sigma_3} $ are the only two components of $ \mathcal{Y}_k $ that meet $ C_{\upsilon_3} $ (note the ordering of the formal branches in $ \sigma_2 $ and $ \sigma_3 $). It is easily computed that $ \mu_1^{\sigma_2} = 2 $ and that $ \mu_L^{\sigma_3} = 1 $. So it follows that $ C_{\upsilon_3}^2 = - 1 $, and therefore
$$ \mathrm{Tr}_{ \upsilon_3 }(\xi) = - 2 - \xi^{\alpha_3}. $$
For the other vertices, we compute that
$$ \mathrm{Tr}_{ \upsilon_1 }(\xi) = \mathrm{Tr}_{ \upsilon_7 }(\xi) = 0, $$
$$ \mathrm{Tr}_{ \upsilon_2 }(\xi) = \mathrm{Tr}_{ \upsilon_5 }(\xi) = \mathrm{Tr}_{ \upsilon_6 }(\xi) = - 1, $$
and
$$ \mathrm{Tr}_{ \upsilon_4 }(\xi) = - 7 - 5 \xi^{\alpha_4} - 3 (\xi^{\alpha_4})^2 - (\xi^{\alpha_4})^3. $$
Next, we must compute the contributions from the singularities. We will only write out the details for $ \varepsilon_3 = (\upsilon_3,\upsilon_4) $. In this case, we need to compute $ \mathrm{Tr}_{\sigma_3}(\xi) $. It is easily computed that $ \mu_1^{\sigma_3} = 3 $ and $ \mu_L^{\sigma_3} = 1 $. Theorem \ref{Formula} then gives that
$$ \mathrm{Tr}_{\varepsilon_3}(\xi) = \mathrm{Tr}_{\sigma_3}(\xi) = 3 + 2 \xi^{\alpha_4} + (\xi^{\alpha_4})^2. $$
For the contributions from the other edges, we compute in a similar fashion that
$$ \mathrm{Tr}_{\varepsilon_1}(\xi) = \mathrm{Tr}_{\varepsilon_6}(\xi) = 1, $$
$$ \mathrm{Tr}_{\varepsilon_2}(\xi) = 2 + \xi^{\alpha_3}, $$
and
$$ \mathrm{Tr}_{\varepsilon_4}(\xi) = \mathrm{Tr}_{\varepsilon_5}(\xi) = 3 + \xi^{\alpha_4} + (\xi^{\alpha_4})^2. $$
Summing up, we get
$$ \sum_{i=1}^7 \mathrm{Tr}_{\upsilon_i }(\xi) + \sum_{i=1}^6 \mathrm{Tr}_{\varepsilon_i}(\xi) = 1 - \xi^{\alpha_4} - (\xi^{\alpha_4})^3. $$
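As a numerical sanity check of the bookkeeping above, one may sum the vertex and edge contributions for a sample $n \equiv 1 \pmod{12}$ with $\xi = e^{2\pi i/n}$; here $\alpha_3$ and $\alpha_4$ denote the inverses of $3$ and $4$ modulo $n$ (a sketch for illustration, not part of the proof):

```python
import cmath

n = 25                                   # sample n with n ≡ 1 (mod 12)
xi = cmath.exp(2j * cmath.pi / n)        # xi = e^{2 pi i / n}
a3, a4 = pow(3, -1, n), pow(4, -1, n)    # alpha_3, alpha_4 mod n

# vertex contributions: v1, v7 give 0; v2, v5, v6 give -1 each
vertices = 3 * (-1) + (-2 - xi**a3) \
    + (-7 - 5 * xi**a4 - 3 * xi**(2 * a4) - xi**(3 * a4))
# edge contributions: e1, e6 give 1 each
edges = 2 * 1 + (2 + xi**a3) + (3 + 2 * xi**a4 + xi**(2 * a4)) \
    + 2 * (3 + xi**a4 + xi**(2 * a4))

total = vertices + edges
assert abs(total - (1 - xi**a4 - xi**(3 * a4))) < 1e-9
```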
We can therefore conclude that the irreducible characters for the induced representation of $ \boldsymbol{\mu}_n $ on $ H^1(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $ are $ \chi_1(\xi) = \xi^{\alpha_4} $ and $ \chi_2(\xi) = \xi^{3 \alpha_4} $.
The irreducible characters for the representation of $ \boldsymbol{\mu}_n $ on $ T_{\mathcal{J}'_k,0} $ induced by the action $ [\xi](\pi') = \xi^{-1} \pi' $ on $R'$ are the inverse characters of these, $ \chi_1^{-1}(\xi) = \xi^{ - \alpha_4} $ and $ \chi_2^{-1}(\xi) = \xi^{ - 3 \alpha_4} $. It is easily seen that $ [- \alpha_4]_n = (n-1)/4 $, and that $ [- 3 \alpha_4]_n = (3n-3)/4 $. Hence the jumps occur at the limits $1/4$ and $ 3/4 $ of these expressions as $ n $ goes to infinity.
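The claimed values $[-\alpha_4]_n = (n-1)/4$ and $[-3\alpha_4]_n = (3n-3)/4$, and hence the limits $1/4$ and $3/4$, can be confirmed numerically (an illustrative sketch):

```python
from fractions import Fraction

# Check [-alpha_4]_n = (n-1)/4 and [-3*alpha_4]_n = (3n-3)/4
# for n ≡ 1 (mod 12), so the ratios tend to 1/4 and 3/4.
jumps = []
for n in (13, 121, 12001):
    a4 = pow(4, -1, n)                    # alpha_4: 4*a4 ≡ 1 (mod n)
    assert (-a4) % n == (n - 1) // 4
    assert (-3 * a4) % n == (3 * n - 3) // 4
    jumps.append((Fraction((-a4) % n, n), Fraction((-3 * a4) % n, n)))
# the last pair is already very close to (1/4, 3/4)
```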
\end{ex}
\begin{table}[htb]\caption{Genus $2$, Elliptic type $[1]$}\label{table 2}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Fiber type & $I_{0-0-0}$ & $I_{0-0-0}^*$ & $II$ & $III$ & $IV$ & $V$ \\
\hline
Jumps & $ 0 $ & $ 1/2 $ & $ 0 $, $ 1/2 $ & $ 1/3 $, $ 2/3 $ & $ 1/6 $, $ 5/6 $ & $ 1/6 $, $ 2/6 $ \\
\hline
\end{tabular}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$V^*$ & $VI$ & $VII$ & $VII^*$ & $VIII-1$ & $VIII-2$ \\
\hline
$ 4/6 $, $ 5/6 $ & $ 1/4 $, $ 3/4 $ & $ 1/8 $, $ 3/8 $ & $ 5/8 $, $ 7/8 $ & $ 1/10 $, $ 3/10 $ & $ 3/10 $, $ 9/10 $ \\
\hline
\end{tabular}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$VIII-3$ & $VIII-4$ & $IX-1$ & $IX-2$ & $IX-3$ & $IX-4$ \\
\hline
$ 1/10 $, $ 7/10 $ & $ 7/10 $, $ 9/10 $ & $ 1/5 $, $ 3/5 $ & $ 1/5 $, $ 2/5 $ & $ 3/5 $, $ 4/5 $ & $ 2/5 $, $ 4/5 $ \\
\hline
\end{tabular}
\end{table}
\begin{table}[htb]\caption{Genus $2$, Elliptic type $[2]$}\label{table 3}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$I_0-I_0-m$ & $I_0^*-I_0^*-m$ & $I_0-I_0^*-m$ & $2 I_0-m$ & $2I_0^*-m$ & $I_0-II-m$ \\
\hline
$ 0 $ & $ 1/2 $ & $ 0 $, $ 1/2 $ & $ 0 $, $ 1/2 $ & $ 1/4 $, $ 3/4 $ & $ 0 $, $ 1/6 $ \\
\hline
\end{tabular}
\begin{tabular}{|c|c|c|c|c|}
\hline
$I_0-II^*-m$ & $ I_0-IV-m $ & $I_0-IV^*-m$ & $I_0^*-II-m$ & $I_0^*-II^*-m$ \\
\hline
$0$, $5/6$ & $0$, $1/3$ & $0$, $2/3$ & $1/6$, $3/6$ & $3/6$, $5/6$ \\
\hline
\end{tabular}
\begin{tabular}{|c|c|c|c|c|}
\hline
$I_0^*-II^*-\alpha$ & $ I_0^*-IV-m $ & $ I_0^*-IV^*-m $ & $ I_0^*-IV^*-\alpha $ & $I_0-III-m$ \\
\hline
$3/6$, $5/6$ & $1/2$, $1/3$ & $1/2$, $2/3$ & $1/2$, $2/3$ & $0$, $1/4$ \\
\hline
\end{tabular}
\begin{tabular}{|c|c|c|c|c|}
\hline
$I_0-III^*-m$ & $I_0^*-III-m$ & $I_0^*-III^*-m$ & $I_0^*-III^*-\alpha$ & $ 2II-m $ \\
\hline
$0$, $3/4$ & $1/4$, $2/4$ & $2/4$, $3/4$ & $2/4$, $3/4$ & $1/12$, $7/12$ \\
\hline
\end{tabular}
\begin{tabular}{|c|c|c|c|c|}
\hline
$ 2II^*-m $ & $II-II-m$ & $II-II^*-m$ & $II^*-II^*-m$ & $II^*-II^*-\alpha$ \\
\hline
$5/12$, $11/12$ & $1/6$, $1/6$ & $1/6$, $5/6$ & $5/6$, $5/6$ & $5/6$, $5/6$ \\
\hline
\end{tabular}
\begin{tabular}{|c|c|c|c|c|}
\hline
$II-IV-m$ & $II-IV^*-m$ & $II^*-IV-m$ & $II^*-IV-\alpha$ & $II^*-IV^*-m$ \\
\hline
$1/6$, $2/6$ & $1/6$, $4/6$ & $2/6$, $5/6$ & $2/6$, $5/6$ & $4/6$, $5/6$ \\
\hline
\end{tabular}
\begin{tabular}{|c|c|c|c|c|}
\hline
$II^*-IV^*-\alpha$ & $2IV-m$ & $2IV^*-m$ & $IV-IV-m$ & $IV-IV^*-m$ \\
\hline
$4/6$, $5/6$ & $1/6$, $4/6$ & $2/6$, $5/6$ & $1/3$, $1/3$ & $1/3$, $2/3$ \\
\hline
\end{tabular}
\begin{tabular}{|c|c|c|c|}
\hline
$IV^*-IV^*-m$ & $IV^*-IV^*-\alpha$ & $II-III-m$ & $II-III^*-m$ \\
\hline
$2/3$, $2/3$ & $2/3$, $2/3$ & $2/12$, $3/12$ & $2/12$, $9/12$ \\
\hline
\end{tabular}
\begin{tabular}{|c|c|c|c|}
\hline
$II^*-III-m$ & $II^*-III-\alpha$ & $II^*-III^*-m$ & $II^*-III^*-\alpha$ \\
\hline
$2/12$, $10/12 $ & $3/12$, $10/12 $ & $9/12$, $10/12 $ & $9/12$, $10/12 $ \\
\hline
\end{tabular}
\begin{tabular}{|c|c|c|c|}
\hline
$ IV-III-m $ & $ IV-III^*-m $ & $ IV-III^*-\alpha $ & $ IV^*-III-m $ \\
\hline
$3/12$, $4/12 $ & $4/12$, $9/12 $ & $4/12$, $9/12 $ & $3/12$, $8/12 $ \\
\hline
\end{tabular}
\begin{tabular}{|c|c|c|c|}
\hline
$ IV^*-III^*-m $ & $ IV^*-III^*-\alpha $ & $2III-m$ & $2III^*-m$ \\
\hline
$8/12$, $9/12 $ & $8/12$, $9/12 $ & $1/8$, $5/8$ & $3/8$, $7/8$ \\
\hline
\end{tabular}
\begin{tabular}{|c|c|c|c|}
\hline
$III-III-m$ & $III-III^*-m$ & $III^*-III^*-m$ & $III^*-III^*-\alpha$ \\
\hline
$1/4$, $1/4$ & $1/4$, $3/4$ & $3/4$, $3/4$ & $3/4$, $3/4$ \\
\hline
\end{tabular}
\end{table}
\begin{table}[htb]\caption{Genus $2$, Parabolic type $[3]$}\label{table 4}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$I_{n-0-0}$ & $I_n-I_0-m$ & $I_0-I_n^*-m$ & $I_n-I_0^*-m$ & $I_{n-0-0}^*$ & $I_0^*-I_n^*-m$\\
\hline
$0$ & $0$ & $ 0 $ , $ 1/2 $ & $ 0 $ , $ 1/2 $ & $ 1/2 $ , $ 1/2 $ & $ 1/2 $ , $ 1/2 $\\
\hline
\end{tabular}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$ II_{n-0} $ & $ II_{n-0}^* $ & $ II-I_n-m $ & $ II^*-I_n-m $ & $ IV-I_n-m $ & $ IV^*-I_n-m $ \\
\hline
$ 0 $ , $ 1/2 $ & $ 0 $ , $ 1/2 $ & $ 0 $ , $ 1/6 $ & $ 0 $ , $ 5/6 $ & $ 0 $ , $ 1/3 $ & $ 0 $ , $ 2/3 $ \\
\hline
\end{tabular}
\begin{tabular}{|c|c|c|c|c|}
\hline
$ II-I_n^*-m $ & $ II^*-I_n^*-m $ & $ II^*-I_n^*-\alpha $ & $ IV-I_n^*-m $ & $ IV^*-I_n^*-m $ \\
\hline
$ 1/6 $ , $ 3/6 $ & $ 3/6 $ , $ 5/6 $ & $ 3/6 $ , $ 5/6 $ & $ 2/6 $ , $ 3/6 $ & $ 3/6 $ , $ 4/6 $ \\
\hline
\end{tabular}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$ IV^*-I_n^*-\alpha $ & $ IV-II_n $ & $ IV^*-II_n $ & $ II-II_n^* $ & $ II^*-II_n^* $ & $ III-I_n-m $\\
\hline
$ 3/6 $ , $ 4/6 $ & $ 0 $ , $ 1/3 $ & $ 0 $ , $ 2/3 $ & $ 1/6 $ , $ 3/6 $ & $ 3/6 $ , $ 5/6 $ & $ 0 $ , $ 1/4 $\\
\hline
\end{tabular}
\begin{tabular}{|c|c|c|c|c|}
\hline
$ III^*-I_n-m $ & $ III-I_n^*-m $ & $ III^*-I_n^*-m $ & $ III^*-I_n^*-\alpha $ & $ III-II_n $ \\
\hline
$ 0 $ , $ 3/4 $ & $ 1/4 $ , $ 2/4 $ & $ 2/4 $ , $ 3/4 $ & $ 2/4 $ , $ 3/4 $ & $ 0 $ , $ 3/4 $ \\
\hline
\end{tabular}
\begin{tabular}{|c|c|c|}
\hline
$ III^*-II_n $ & $ III-II_n^* $ & $ III^*-II_n^* $ \\
\hline
$ 0 $ , $ 3/4 $ & $ 1/4 $ , $ 2/4 $ & $ 2/4 $ , $ 3/4 $ \\
\hline
\end{tabular}
\end{table}
\begin{table}[htb]\caption{Genus $2$, Parabolic type $[4]$}\label{table 5}
\begin{tabular}{|c|c|c|c|c|}
\hline
$I_{n-p-0}$ & $I_n-I_p-m$ & $I_{n-p-0}^*$ & $I_n^*-I_p^*-m$ & $I_n-I_p^*-m$ \\
\hline
$ 0 $ & $ 0 $ & $1/2$ & $1/2$ & $ 0 $, $1/2$ \\
\hline
\end{tabular}
\begin{tabular}{|c|c|c|c|}
\hline
$2I_{n}-m$ & $2I_{n}^*-m$ & $I_{n-p}$ & $III_n$ \\
\hline
$ 0 $, $1/2$ & $ 1/4 $ , $ 3/4 $ & $ 0 $, $1/2$ & $ 1/4 $ , $ 3/4 $ \\
\hline
\end{tabular}
\end{table}
\begin{table}[htb]\caption{Genus $2$, Parabolic type $[5]$}\label{table 6}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$I_{n-p-q}$ & $I_{n-p-q}^*$ & $II_{n-p}$ & $II_{n-p}^*$ & $III_{n}$ & $III_{n}^*$ \\
\hline
$ 0 $ & $1/2$ & $ 0 $, $1/2$ & $ 0 $, $1/2$ & $ 1/3 $, $2/3$ & $ 1/6 $, $5/6$ \\
\hline
\end{tabular}
\end{table}
\subsection{Final remarks and comments}
It would be interesting to know, for a curve $X/K$, the significance of the jumps in the filtration $ \{ \mathcal{F}^a \mathcal{J}_k \} $, where $ \mathcal{J} $ is the N\'eron model of $\mathrm{Jac}(X)$. For instance, the sum of the jumps seems to be closely related to the so-called \emph{base change conductor} defined in \cite{Chai}. Furthermore, when $ k = \mathbb{C} $, and $ g = 1 $ or $ 2 $, computations show that the jumps correspond to half of the eigenvalues of the monodromy operator. It would be interesting to know if this holds in general.
It would be nice, if possible, to have a closed formula for the irreducible characters of the representation of $ \boldsymbol{\mu}_n $ on $ H^1(\mathcal{Y}_k, \mathcal{O}_{\mathcal{Y}_k}) $, where $ \mathcal{Y} $ is the minimal desingularization of $ \mathcal{X}_{S'} $, and $ n = \mathrm{deg}(S'/S) $. Such a formula would probably encode combinatorial properties of $ \mathcal{X}_k $.
We do not know if our results remain true in the case where distinct components of $ \mathcal{X}_k $ with multiplicities divisible by $p$ intersect nontrivially. The main problem is the lack of a good description of the minimal desingularization of $ \mathcal{X}_{S'} $.
Finally, we think it would be interesting to study filtrations for N\'eron models of abelian varieties that are not Jacobians. In that case, it is not so clear what kind of data would suffice in order to determine the jumps. For instance, is it true that all jumps are rational numbers?
\end{document} |
\begin{document}
\title[
\uppercase{Approximation by double second type delayed arithmetic
mean}]{
\uppercase{Approximation by double second type delayed arithmetic
mean of periodic functions in $H_{p}^{(\omega, \omega)}$ space}}
\author[Xh. Z. Krasniqi]{Xh. Z. Krasniqi}
\address{Faculty of Education \\
\indent University of Prishtina ``Hasan Prishtina'' \\
\indent Avenue ``Mother Theresa'' 5, 10000 Prishtina \\
\indent Kosovo}
\author[P. K\'orus]{P. K\'orus}
\address{Institute of Applied Pedagogy \\
\indent Juh\'asz Gyula Faculty of Education \\
\indent University of Szeged \\
\indent Hattyas utca 10, H-6725 Szeged \\
\indent Hungary}
\author[B. Szal]{B. Szal}
\address{Faculty of Mathematics, Computer Science and Econometrics \\
\indent University of Zielona G\'{o}ra \\
\indent ul. Szafrana 4a, 65-516 Zielona G\'{o}ra \\
\indent Poland}
\date{}
\begin{abstract}
In this paper, we give a degree of approximation of a function in the space $H_{p}^{(\omega, \omega)}$ by the second type double delayed arithmetic means of its Fourier series. This degree of approximation is expressed via two functions of moduli of continuity type. To obtain a more general result, we also use the even-type double delayed arithmetic means of Fourier series.
\end{abstract}
\maketitle
\section{Concise historical background and motivation}
On one hand, the approximation of $2\pi$-periodic and integrable functions by their Fourier series in the H\"{o}lder metric has been studied widely and consistently in many papers. Das et al. studied the degree of approximation of functions by matrix means of their Fourier series in the generalized H\"{o}lder metric \cite{DGR}, generalizing many previously known results. Again, Das et al. \cite{DNR} studied the rate of convergence of Fourier series in a new Banach space of functions conceived as a generalization of the spaces introduced by Pr\"{o}ssdorf \cite{SP} and Leindler \cite{L1}. Afterwards, Nayak et al. \cite{NDR} studied the rate of convergence of Fourier series by the delayed arithmetic mean in the generalized H\"{o}lder metric space introduced earlier in \cite{DNR}, obtaining a sharper estimate of Jackson's order, which is the main objective of their result. Moreover, De\v{g}er \cite{D} determined the degree of approximation of functions by matrix means of their Fourier series in the same space of functions introduced in \cite{DNR}. In particular, he extended some results of Leindler \cite{L}, as well as some other results, by weakening the monotonicity conditions in the results obtained by Singh and Sonker \cite{SS} for some classes of numerical sequences introduced by Mohapatra and Szal \cite{MS}. Leindler's results obtained in \cite{L} are generalized in \cite{XhK} by the first author of the present paper, for functions from a Banach space, mainly using the generalized N\"{o}rlund and Riesz means. Very recently, Kim \cite{KIM1} presented a generalization of a particular case of a result obtained previously in \cite{NDR}. Kim's result treats the degree of approximation of functions in the same generalized H\"{o}lder metric, but uses the so-called even-type delayed arithmetic mean of Fourier series.
On the other hand, results on the approximation of bivariate integrable functions, $2\pi$-periodic in each variable, by their double Fourier series in the H\"{o}lder metric can be found in \cite{UD2}, \cite{XhK2} and \cite{NH}. In all the results reported in these papers, the degree of approximation of functions by various means of their double Fourier series contains a quantity of the form $\mathcal{O}{(\log n)}$. Involving such a quantity produces a degree of approximation which is not of Jackson's order. This weakness motivated us to consider some means of double Fourier series which avoid it. Hence, we are going to investigate the degree of approximation of bivariate integrable functions, $2\pi$-periodic in each variable, by their double Fourier series in the generalized H\"{o}lder metric, with the aim of removing the quantities of the form $\mathcal{O}{(\log n)}$ and obtaining a degree of approximation of Jackson's order.
Closing this section, we note that, to compare two quantities $u$ and $v>0$, throughout this paper we write $u=\mathcal{O}(v)$ whenever there exists a positive constant $c$ such that $u\leq cv$.
\section{Introduction and preliminaries}
By $L_p(T^2)$, $p\geq 1$, we denote the space of all functions $f(x,y)$
integrable with $p$-power on $T^2:=(0,2\pi)\times (0,2\pi)$, and with norm
\begin{equation*}
\|f\|_p:=\left(\frac{1}{(2\pi)^2}\int_{0}^{2\pi}\int_{0}^{2\pi}|f(x,y)|^p
dxdy\right)^{1/p}.
\end{equation*}
Two functions $\omega _i$ $(i=1,2)$ are considered moduli of continuity if they
are non-negative, non-decreasing, continuous functions on $[0,2\pi]$ with the
properties
\begin{enumerate}
\item[(i)] $\omega _i(0)=0$,
\item[(ii)] $\omega _i(\delta_1+\delta_2)\leq \omega _i(\delta_1)+\omega
_i(\delta_2)$,
\item[(iii)] $\omega _{i}(\lambda \delta )\leq (\lambda +1)\omega
_{i}(\delta )$, $\lambda \geq 0$.
\end{enumerate}
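For instance, $\omega(\delta) = \delta^{\alpha}$ with $0 < \alpha \leq 1$ satisfies (i)--(iii); a quick numerical sanity check (Python, for illustration only):

```python
import random

alpha = 0.5
w = lambda d: d ** alpha            # a typical modulus of continuity

random.seed(0)
for _ in range(1000):
    d1, d2 = random.uniform(0, 1), random.uniform(0, 1)
    lam = random.uniform(0, 10)
    assert w(d1 + d2) <= w(d1) + w(d2) + 1e-12        # (ii) subadditivity
    assert w(lam * d1) <= (lam + 1) * w(d1) + 1e-12   # (iii)
assert w(0) == 0                                      # (i)
```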
We define the space $H_{p}^{(\omega_1, \omega_2)}$ by
\begin{equation*}
H_{p}^{(\omega_1, \omega_2 )}:=\left\{f\in L^{p}(T^2), p\geq 1:
A(f;\omega_1, \omega_2 )<\infty \right\},
\end{equation*}
where
\begin{equation*}
A(f;\omega_1, \omega_2 ):=\sup_{t_1\neq 0,\,\,t_2\neq 0 }\frac{\|f(x +t_1,y
+t_2)-f(x,y)\|_{p}}{\omega_1 (|t_1|)+\omega_2 (|t_2|)}
\end{equation*}
and the norm in the space $H_{p}^{(\omega_1, \omega_2)}$ is defined by
\begin{equation*}
\|f\|_{p}^{(\omega_1, \omega_2)}:=\|f\|_{p}+A(f;\omega_1, \omega_2 ).
\end{equation*}
If $\omega_1$, $\omega_2$, $v_1$ and $v_2$ are moduli of continuity so that
the two-variable function $\frac{\omega_1 (t_1)+\omega_2 (t_2)}{
v_1(t_1)+v_2(t_2)}$ has a maximum $M$ on $T^2$, then it is easy to see that
\begin{equation*}
\|f\|_{p}^{(v_1,v_2)}\leq \max \left(1,M\right)\|f\|_{p}^{(\omega_1,
\omega_2)},
\end{equation*}
which shows that in this case, for the given spaces $H_{p}^{(\omega_1,
\omega_2)}$ and $H_{p}^{(v_1, v_2)}$ we have
\begin{equation*}
H_{p}^{(\omega_1, \omega_2)}\subseteq H_{p}^{(v_1, v_2)}\subseteq L_p \quad
(p\geq 1).
\end{equation*}
We write
\begin{equation*}
\Omega_p(\delta_1,\delta_2;f):=\sup_{0\leq h_1\leq \delta_1; 0\leq h_2\leq
\delta_2}\|f(x+h_1,y+h_2)-f(x,y)\|_{p}
\end{equation*}
for the integral modulus of continuity of $f(x,y)$, and whenever
\begin{equation*}
\|f(x +t_1,y +t_2)-f(x,y)\|_{p}=\mathcal{O}\left(\omega_1 (|t_1|)+\omega_2
(|t_2|)\right)
\end{equation*}
we write $f\in \text{Lip}(\omega_1,\omega_2,p)$, that is
\begin{equation*}
\text{Lip}(\omega_1,\omega_2,p)=\left\{f\in L_p(T^2): \|f(x +t_1,y
+t_2)-f(x,y)\|_{p}=\mathcal{O}\left(\omega_1 (|t_1|)+\omega_2
(|t_2|)\right)\right\}.
\end{equation*}
Clearly, for $\omega_1(t_1)=\mathcal{O}\left(t_1^{\alpha}\right)$ and $
\omega_2(t_2)=\mathcal{O}\left(t_2^{\beta}\right)$, $0<\alpha \leq 1$, $
0<\beta \leq 1$, the class $\text{Lip}(\omega_1,\omega_2,p)$ reduces to the
class $\text{Lip}(\alpha,\beta,p)$, that is
\begin{equation*}
\text{Lip}(\alpha,\beta, p)=\left\{f\in L_p(T^2): \|f(x +t_1,y
+t_2)-f(x,y)\|_{p}=\mathcal{O}\left(t_1^{\alpha}\right)+\mathcal{O}
\left(t_2^{\beta}\right)\right\}.
\end{equation*}
Then for $1 \geq \alpha \geq \gamma \geq 0$ and $1 \geq \beta \geq \delta
\geq 0$, by noting $\frac{t_1^{\alpha}+t_2^{\beta}}{t_1^{\gamma}+t_2^{\delta}
}$ is bounded on $T^2$, we have
\begin{equation*}
\text{Lip}(\alpha,\beta,p)\subseteq \text{Lip}(\gamma,\delta,p) \subseteq
L_p \quad (p\geq 1).
\end{equation*}
If $f(x,y)$ is a continuous function periodic in both variables with period $
2\pi$ and $p=\infty$, then the class $\text{Lip}(\alpha,\beta,p)$ reduces to
the H\"older class $\text{H}_{(\alpha,\beta)}$ (also called Lipschitz
class), that is
\begin{equation*}
\text{H}_{(\alpha,\beta)}=\left\{f: |f(x +t_1,y +t_2)-f(x,y)|=\mathcal{O}
\left(t_1^{\alpha}\right)+\mathcal{O}\left(t_2^{\beta}\right)\right\}.
\end{equation*}
It is verified that $\text{H}_{(\alpha,\beta)}$ is a Banach space (see \cite
{UD2}) with the norm $\|f \|_{\alpha,\beta}$ defined by
\begin{equation*}
\|f \|_{\alpha,\beta}=\|f\|_{C}+\sup_{t_1\neq 0,\,\,t_2\neq 0 }\frac{|f(x
+t_1,y +t_2)-f(x,y)|}{|t_1|^{\alpha}+|t_2|^{\beta}},
\end{equation*}
where
\begin{equation*}
\|f\|_{C}=\sup_{(x,y)\in T^2}|f(x,y)|.
\end{equation*}
Let $f(x,y)\in L_p(T^2)$ be a $2\pi$-periodic function with respect to each
variable, with its Fourier series
\begin{align*}
& f(x,y)\sim
\sum_{k=0}^{\infty}\sum_{\ell=0}^{\infty}\lambda_{k,\ell}(a_{k,\ell} \cos kx
\cos \ell y+b_{k,\ell} \sin kx \cos \ell y \\
& \hspace{3cm}+c_{k,\ell} \cos kx \sin \ell y+d_{k,\ell} \sin kx \sin \ell
y)
\end{align*}
at the point $(x,y)$, where
\begin{align*}
\lambda _{k,\ell} & = \left\{
\begin{array}{rcl}
\frac{1}{4}, & \mbox{if} & k=\ell=0, \\
\frac{1}{2}, & \mbox{if} & k=0, \,\ell>0;\,\,\,\mbox{or} \,\,\, k>0,
\,\ell=0, \\
1, & \mbox{if} & k, \,\ell>0;
\end{array}
\right. \\
a_{k,\ell} & = \frac{1}{\pi^{2}}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi}f(u, v)
\cos ku \cos \ell v \,du dv, \\
b_{k,\ell} & = \frac{1}{\pi^{2}}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi}f(u, v)
\sin ku \cos \ell v \,du dv, \\
c_{k,\ell} & = \frac{1}{\pi^{2}}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi}f(u, v)
\cos ku \sin \ell v \,du dv, \\
d_{k,\ell} & = \frac{1}{\pi^{2}}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi}f(u,
v)\sin ku \sin \ell v\, du dv,\quad k,\,\ell\in \mathbb{N}\cup\{0\},
\end{align*}
and whose partial sums are
\begin{align*}
& S_{n,m}(f;x,y)= \sum_{k=0}^{n}\sum_{\ell=0}^{m}\lambda_{k,\ell}(a_{k,\ell}
\cos kx \cos \ell y+b_{k,\ell} \sin kx \cos \ell y \\
& \hspace{3.8cm}+c_{k,\ell} \cos kx \sin \ell y+d_{k,\ell} \sin kx \sin \ell
y),\quad n,m\geq 0 .
\end{align*}
To make our intention precise, we recall some further notation and notions.
Let $\sum_{k=0}^{\infty}\sum_{\ell=0}^{\infty}u_{k,\ell}$ be an infinite
double series with its sequence of arithmetic means $\{\sigma_{m,n}\}$, where
\begin{equation*}
\sigma_{m,n}=\frac{1}{(m+1)(n+1)}\sum_{k=0}^{m}\sum_{\ell=0}^{n}U_{k,\ell}
\end{equation*}
and $U_{k,\ell}:=\sum_{i=0}^{k}\sum_{j=0}^{\ell}u_{i,j}.$
We define the \textit{Double Delayed Arithmetic Mean} $\sigma_{m,k;n,\ell}$
by (see \cite{FW}):
\begin{align} \label{eq1}
\begin{split}
\sigma_{m,k;n,\ell}:=&\left(1+\frac{m}{k}\right)\left(1+\frac{n}{\ell}
\right)\sigma_{m+k-1,n+\ell-1}-\left(1+\frac{m}{k}\right)\frac{n}{\ell}
\sigma_{m+k-1,n-1} \\
& -\frac{m}{k}\left(1+\frac{n}{\ell}\right)\sigma_{m-1,n+\ell-1}+\frac{mn}{
k\ell}\sigma_{m-1,n-1},
\end{split}
\end{split}
\end{align}
where $k$ and $\ell$ are positive integers.
If $k$ tends to $\infty$ with $m$ in such a way that $\frac{m}{k}$ is
bounded, and $\ell$ tends to $\infty$ with $n$ in such a way that $\frac{n}{
\ell}$ is also bounded, then $\sigma_{m,k;n,\ell}$ defines a method of
summability which is at least as strong as the well-known $(C,1,1)$
summability. This means that if $\sigma_{m,n}\to \mu$, then $
\sigma_{m,k;n,\ell}\to \mu$ as well. This important fact follows from (\ref
{eq1}) if we set $\sigma_{m,n}=\mu+\xi_{m,n}$, where $\xi_{m,n}\to 0$ as $
m,n\to \infty$. We expect the introduction of this mean to be useful in
applications, particularly in approximating $2\pi$-periodic functions of
two variables.
We note that for $k=\ell=1$ we obtain $\sigma_{m,1;n,1}=U_{m,n}$, while for $
m=n=0$ we get $\sigma_{0,k;0,\ell}=\sigma_{k-1,\ell-1}$. Moreover, for $k=m$
and $\ell=n$, we get
\begin{align} \label{eq2}
\sigma_{m,m;n,n}=4\sigma_{2m-1,2n-1}-2\sigma_{2m-1,n-1}-2\sigma_{m-1,2n-1}+
\sigma_{m-1,n-1}
\end{align}
which we name the \textit{first type} Double Delayed Arithmetic Mean.
However, in this research paper we will take $k=2m$ and $\ell=2n$ in the
Double Delayed Arithmetic Mean $\sigma_{m,k;n,\ell}$ to obtain
\begin{align} \label{eq3}
\sigma_{m,2m;n,2n}=\frac{1}{4}\left(9\sigma_{3m-1,3n-1}-3\sigma_{3m-1,n-1}-3
\sigma_{m-1,3n-1}+\sigma_{m-1,n-1}\right).
\end{align}
We name these particular sums the \textit{second type} Double Delayed
Arithmetic Mean.
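That (\ref{eq3}) really is the case $k=2m$, $\ell=2n$ of (\ref{eq1}) can be checked directly on an arbitrary double sequence; the following sketch (for illustration only) compares the two expressions:

```python
import random

random.seed(1)
m, n = 3, 4
N = 3 * max(m, n) + 2
u = [[random.random() for _ in range(N)] for _ in range(N)]

# partial sums U_{k,l} and Cesàro means sigma_{k,l}
U = [[sum(u[i][j] for i in range(k + 1) for j in range(l + 1))
      for l in range(N)] for k in range(N)]
sg = [[sum(U[i][j] for i in range(k + 1) for j in range(l + 1))
       / ((k + 1) * (l + 1)) for l in range(N)] for k in range(N)]

def delayed(m, k, n, l):            # definition (eq1)
    return ((1 + m / k) * (1 + n / l) * sg[m + k - 1][n + l - 1]
            - (1 + m / k) * (n / l) * sg[m + k - 1][n - 1]
            - (m / k) * (1 + n / l) * sg[m - 1][n + l - 1]
            + (m * n) / (k * l) * sg[m - 1][n - 1])

lhs = delayed(m, 2 * m, n, 2 * n)   # (eq1) with k = 2m, l = 2n
rhs = (9 * sg[3 * m - 1][3 * n - 1] - 3 * sg[3 * m - 1][n - 1]
       - 3 * sg[m - 1][3 * n - 1] + sg[m - 1][n - 1]) / 4   # (eq3)
assert abs(lhs - rhs) < 1e-12
```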
By $\sigma _{m,n}(f;x,y)$ and $\sigma _{m,2m;n,2n}(f;x,y)$ we denote the
arithmetic mean and the second type Double Delayed Arithmetic Mean of $
S_{k,\ell }(f;x,y)$, respectively. It is well-known (see e.g. \cite[page 4]
{KIM}) that the double Fej\'{e}r kernel is
\begin{equation*}
F_{m,n}(t_{1},t_{2}):=\frac{4}{(m+1)(n+1)}\left( \frac{\sin \frac{(m+1)t_{1}
}{2}\sin \frac{(n+1)t_{2}}{2}}{4\sin \frac{t_{1}}{2}\sin \frac{t_{2}}{2}}
\right) ^{2},
\end{equation*}
which we will use in its equivalent form
\begin{equation}\label{eq4}
F_{m,n}(t_{1},t_{2}):=\frac{1}{(m+1)(n+1)}\frac{(1-\cos (m+1)t_{1})(1-\cos
(n+1)t_{2})}{\left( 4\sin \frac{t_{1}}{2}\sin \frac{t_{2}}{2}\right) ^{2}},
\end{equation}
while the arithmetic mean $\sigma _{m,n}(f;x,y)$ is
\begin{equation}\label{eq5}
\sigma _{m,n}(f;x,y)=\frac{1}{\pi ^{2}}\int_{0}^{\pi }\int_{0}^{\pi
}h_{x,y}(t_{1},t_{2})F_{m,n}(t_{1},t_{2})\,dt_{1}dt_{2},
\end{equation}
where
\begin{equation*}
h_{x,y}(t_{1},t_{2}):=f(x+t_{1},y+t_{2})+f(x-t_{1},y+t_{2})+f(x+t_{1},y-t_{2})+f(x-t_{1},y-t_{2}).
\end{equation*}
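The equivalence of the two forms of the Fej\'{e}r kernel rests on the identity $2\sin^{2}\theta = 1 - \cos 2\theta$; a quick numerical comparison of the $\sin^{2}$ form with (\ref{eq4}) (an illustration only):

```python
import math

def fejer_sin(m, n, t1, t2):        # the sine-squared form
    num = math.sin((m + 1) * t1 / 2) * math.sin((n + 1) * t2 / 2)
    den = 4 * math.sin(t1 / 2) * math.sin(t2 / 2)
    return 4 / ((m + 1) * (n + 1)) * (num / den) ** 2

def fejer_cos(m, n, t1, t2):        # the (1 - cos) form of (eq4)
    num = (1 - math.cos((m + 1) * t1)) * (1 - math.cos((n + 1) * t2))
    den = (4 * math.sin(t1 / 2) * math.sin(t2 / 2)) ** 2
    return num / den / ((m + 1) * (n + 1))

for (m, n, t1, t2) in [(2, 3, 0.7, 1.1), (5, 8, 2.0, 0.3)]:
    assert abs(fejer_sin(m, n, t1, t2) - fejer_cos(m, n, t1, t2)) < 1e-10
```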
Furthermore, using (\ref{eq4}) successively, we have
\begin{align*}
F_{3m-1,3n-1}(t_{1},t_{2})& =\frac{(1-\cos (3mt_{1}))(1-\cos (3nt_{2}))}{
9mn\left( 4\sin \frac{t_{1}}{2}\sin \frac{t_{2}}{2}\right) ^{2}}, \\
F_{3m-1,n-1}(t_{1},t_{2})& =\frac{(1-\cos (3mt_{1}))(1-\cos (nt_{2}))}{
3mn\left( 4\sin \frac{t_{1}}{2}\sin \frac{t_{2}}{2}\right) ^{2}}, \\
F_{m-1,3n-1}(t_{1},t_{2})& =\frac{(1-\cos (mt_{1}))(1-\cos (3nt_{2}))}{
3mn\left( 4\sin \frac{t_{1}}{2}\sin \frac{t_{2}}{2}\right) ^{2}}, \\
F_{m-1,n-1}(t_{1},t_{2})& =\frac{(1-\cos (mt_{1}))(1-\cos (nt_{2}))}{
mn\left( 4\sin \frac{t_{1}}{2}\sin \frac{t_{2}}{2}\right) ^{2}}.
\end{align*}
Thus, we can write
\begin{equation}
\begin{split}
F_{m,2m;n,2n}(t_{1},t_{2}):=& \frac{1}{4}\left(
9F_{3m-1,3n-1}(t_{1},t_{2})-3F_{3m-1,n-1}(t_{1},t_{2})\right. \\
& \quad \left. -3F_{m-1,3n-1}(t_{1},t_{2})+F_{m-1,n-1}(t_{1},t_{2})\right) \\
=& \frac{1}{4mn\left( 4\sin \frac{t_{1}}{2}\sin \frac{t_{2}}{2}\right) ^{2}}
\left( (1-\cos (3mt_{1}))(1-\cos (3nt_{2}))\right. \\
& \left. -(1-\cos (3mt_{1}))(1-\cos (nt_{2}))-(1-\cos (mt_{1}))(1-\cos
(3nt_{2}))\right. \\
& \left. +(1-\cos (mt_{1}))(1-\cos (nt_{2}))\right) \\
=& \frac{1}{4mn\left( 4\sin \frac{t_{1}}{2}\sin \frac{t_{2}}{2}\right) ^{2}}
\left( \cos (mt_{1})-\cos (3mt_{1})\right) \left( \cos (nt_{2})-\cos
(3nt_{2})\right) \\
=& \frac{S(t_{1},t_{2})}{mn\left( 4\sin \frac{t_{1}}{2}\sin \frac{t_{2}}{2}
\right) ^{2}},
\end{split}
\label{eq6}
\end{equation}
where
\begin{equation*}
S(t_{1},t_{2}):=\sin 2mt_{1}\sin mt_{1}\sin 2nt_{2}\sin nt_{2}.
\end{equation*}
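The last step in (\ref{eq6}) uses the identity $\cos\theta - \cos 3\theta = 2\sin 2\theta \sin\theta$ in each variable; a quick numerical check (for illustration):

```python
import math

def lhs(m, n, t1, t2):
    # product of cosine differences, divided by 4, as in (eq6)
    return (math.cos(m * t1) - math.cos(3 * m * t1)) \
         * (math.cos(n * t2) - math.cos(3 * n * t2)) / 4

def S(m, n, t1, t2):
    return math.sin(2 * m * t1) * math.sin(m * t1) \
         * math.sin(2 * n * t2) * math.sin(n * t2)

for (m, n, t1, t2) in [(1, 1, 0.3, 0.9), (4, 7, 1.2, 2.5)]:
    assert abs(lhs(m, n, t1, t2) - S(m, n, t1, t2)) < 1e-12
```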
Therefore, using (\ref{eq5}) and (\ref{eq6}), we get the second type Double
Delayed Arithmetic Mean as
\begin{equation*}
\sigma_{m,2m;n,2n}(f;x,y)=\frac{1}{mn\pi^2}\int_{0}^{\pi}\int_{0}^{
\pi}h_{x,y}(t_1,t_2)\frac{S(t_1,t_2)}{\left(4\sin \frac{t_1}{2}\sin \frac{t_2
}{2}\right)^2}dt_1dt_2.
\end{equation*}
Throughout this paper we agree to put: $h_1:=h_1(m)=\frac{\pi}{m}$, $
h_2:=h_2(n)=\frac{\pi}{n}$,
\begin{align*}
\phi_{x,y}(t_1,t_2) & := \frac{1}{4} ( f(x+t_1,y+t_2) + f(x-t_1,y+t_2) \\
& \hspace{1cm} + f(x+t_1,y-t_2) + f(x-t_1,y-t_2) - 4 f(x,y) ), \\
H_{x,z_1,y,z_2}(t_1,t_2) & :=\phi_{x+z_1,y+z_2}(t_1,t_2)-\phi_{x,y}(t_1,t_2),
\end{align*}
and
\begin{align*}
D_{m,n}(x,y):=\sigma_{m,2m;n,2n}(f;x,y)-f(x,y).
\end{align*}
In order to achieve our aim, we need some helpful lemmas, given in the next section.
\section{Auxiliary Results}
\begin{lemma}
(The generalized Minkowski inequality \cite[p. 21]{SMN})\label{le1} For a
function $g(x_{1},y_{1})$ given on a measurable set $E:=E_{1}\times
E_{2}\subset \mathbb{R}^{n}$, where $x_{1}=(x_{1},x_{2},\dots ,x_{m})$ and $
y_{1}=(x_{m+1},x_{m+2},\dots ,x_{n})$, the following inequality holds:
\begin{equation*}
\left( \int_{E_{1}}\left\vert \int_{E_{2}}g(x_{1},y_{1})dy_{1}\right\vert
^{p}dx_{1}\right) ^{\frac{1}{p}}\leq \int_{E_{2}}\left(
\int_{E_{1}}\left\vert g(x_{1},y_{1})\right\vert ^{p}dx_{1}\right) ^{\frac{1
}{p}}dy_{1},
\end{equation*}
for those values of $p$ for which the right-hand side of this inequality is
finite.
\end{lemma}
\begin{lemma}
\label{le2} Let $\omega_1$, $\omega_2$, $v_1$ and $v_2$ be moduli of
continuity so that $\frac{\omega_1 (t_1)}{v_1(t_1)}$ is non-decreasing in $
t_1$, $\frac{\omega_2 (t_2)}{v_2(t_2)}$ is non-decreasing in $t_2$, and $
f\in H_{p}^{(\omega_1, \omega_2)}$. Then for $0< t_1\leq \pi$, $0< t_2\leq
\pi$, and $p\geq 1$,
\begin{enumerate}
\item[(i)] $\|\phi_{x,y}(t_1,t_2)\|_p=\mathcal{O}\left(\omega_1
(t_1)+\omega_2 (t_2)\right)$,
\item[(ii)] $\|H_{x,z_1,y,z_2}(t_1,t_2)\|_p=\mathcal{O}\left(\omega_1
(t_1)+\omega_2 (t_2)\right)$, \newline
$\|H_{x,z_1,y,z_2}(t_1,t_2)\|_p=\mathcal{O}\left(\omega_1 (|z_1|)+\omega_2
(|z_2|)\right)$, \newline
$\|H_{x,z_1,y,z_2}(t_1+h_1,t_2)\|_p=\mathcal{O}\left(\omega_1
(t_1+h_1)+\omega_2 (t_2)\right)$, \newline
$\|H_{x,z_1,y,z_2}(t_1+h_1,t_2)\|_p=\mathcal{O}\left(\omega_1
(|z_1|)+\omega_2 (|z_2|)\right)$, \newline
$\|H_{x,z_1,y,z_2}(t_1,t_2+h_2)\|_p=\mathcal{O}\left(\omega_1 (t_1)+\omega_2
(t_2+h_2)\right)$, \newline
$\|H_{x,z_1,y,z_2}(t_1,t_2+h_2)\|_p=\mathcal{O}\left(\omega_1
(|z_1|)+\omega_2 (|z_2|)\right)$, \newline
$\|H_{x,z_1,y,z_2}(t_1+h_1,t_2+h_2)\|_p=\mathcal{O}\left(\omega_1
(t_1+h_1)+\omega_2 (t_2+h_2)\right)$, \newline
$\|H_{x,z_1,y,z_2}(t_1+h_1,t_2+h_2)\|_p=\mathcal{O}\left(\omega_1
(|z_1|)+\omega_2 (|z_2|)\right)$.
\end{enumerate}
Moreover, if $\omega_1 = \omega_2$ and $v_1=v_2$, then
\begin{enumerate}
\item[(iii)] $\|H_{x,z_1,y,z_2}(t_1,t_2)\|_p=\mathcal{O}\left( \left( v_1 (|z_1|)
+ v_1 (|z_2|)\right) \left( \frac{\omega_1 (t_1)}{v_1 (t_1)} + \frac{\omega_1
(t_2)}{v_1 (t_2)}\right) \right)$, \newline
$\|H_{x,z_1,y,z_2}(t_1+h_1,t_2)\|_p=\mathcal{O}\left( \left( v_1 (|z_1|) + v_1
(|z_2|)\right) \left( \frac{\omega_1 (t_1+h_1)}{v_1 (t_1+h_1)} + \frac{\omega_1
(t_2)}{v_1 (t_2)}\right) \right)$, \newline
$\|H_{x,z_1,y,z_2}(t_1,t_2+h_2)\|_p=\mathcal{O}\left( \left( v_1 (|z_1|) + v_1
(|z_2|)\right) \left( \frac{\omega_1 (t_1)}{v_1 (t_1)} + \frac{\omega_1 (t_2+h_2)}{
v_1 (t_2+h_2)}\right) \right)$, \newline
$\|H_{x,z_1,y,z_2}(t_1+h_1,t_2+h_2)\|_p=\mathcal{O}\left( \left( v_1 (|z_1|) +
v_1 (|z_2|)\right) \left( \frac{\omega_1 (t_1+h_1)}{v_1 (t_1+h_1)} + \frac{
\omega_1 (t_2+h_2)}{v_1 (t_2+h_2)}\right) \right)$.
\item[(iv)] $\|H_{x,z_1,y,z_2}(t_1,t_2)-H_{x,z_1,y,z_2}(t_1+h_1,t_2)\|_p
\newline
=\mathcal{O}\left( \left( v_1 (|z_1|) + v_1 (|z_2|)\right) \left( \frac{\omega_1 (h_1)}{
v_1 (h_1)} + \frac{\omega_1 (h_2)}{v_1 (h_2)}\right) \right)$, \newline
$\|H_{x,z_1,y,z_2}(t_1,t_2)-H_{x,z_1,y,z_2}(t_1,t_2+h_2)\|_p\newline
=\mathcal{O}\left( \left( v_1 (|z_1|) + v_1 (|z_2|)\right) \left( \frac{\omega_1 (h_1)}{
v_1 (h_1)} + \frac{\omega_1 (h_2)}{v_1 (h_2)}\right) \right)$, \newline
$\|H_{x,z_1,y,z_2}(t_1,t_2+h_2)-H_{x,z_1,y,z_2}(t_1+h_1,t_2+h_2)\|_p\newline
=\mathcal{O}\left( \left( v_1 (|z_1|) + v_1 (|z_2|)\right) \left( \frac{\omega_1 (h_1)}{
v_1 (h_1)} + \frac{\omega_1 (h_2)}{v_1 (h_2)}\right) \right)$, \newline
$\|H_{x,z_1,y,z_2}(t_1+h_1,t_2)-H_{x,z_1,y,z_2}(t_1+h_1,t_2+h_2)\|_p\newline
=\mathcal{O}\left( \left( v_1 (|z_1|) + v_1 (|z_2|)\right) \left( \frac{\omega_1 (h_1)}{
v_1 (h_1)} + \frac{\omega_1 (h_2)}{v_1 (h_2)}\right) \right)$, \newline
$\|H_{x,z_1,y,z_2}(t_1,t_2)-H_{x,z_1,y,z_2}(t_1+h_1,t_2) -
H_{x,z_1,y,z_2}(t_1,t_2+h_2) + H_{x,z_1,y,z_2}(t_1+h_1,t_2+h_2) \|_p=
\mathcal{O}\left( \left( v_1 (|z_1|) + v_1 (|z_2|)\right) \left( \frac{\omega_1 (h_1)}{
v_1 (h_1)} + \frac{\omega_1 (h_2)}{v_1 (h_2)}\right) \right)$.
\end{enumerate}
\end{lemma}
\begin{proof}
Part (i). We estimate directly:
\begin{equation*}
\begin{split}
\|\phi_{x,y}(t_1,t_2)\|_p&\leq \frac{1}{4}\left\{\|f(x+t_1,y+t_2)-f(x,y)
\|_p+\|f(x,y)-f(x-t_1,y+t_2)\|_p \right. \\
&\quad + \left.
\|f(x,y)-f(x+t_1,y-t_2)\|_p+\|f(x,y)-f(x-t_1,y-t_2)\|_p\right\} \\
&=\mathcal{O}\left(\omega_1 (t_1)+\omega_2 (t_2)\right).
\end{split}
\end{equation*}
Part (ii). Since $f\in H_{p}^{(\omega_1, \omega_2)}$, we have
\begin{equation*}
\begin{split}
\|H_{x,z_1,y,z_2}(t_1,t_2)\|_p&=
\|\phi_{x+z_1,y+z_2}(t_1,t_2)-\phi_{x,y}(t_1,t_2)\|_p \\
&\leq\frac{1}{4}\left\{\|f(x+z_1+t_1,y+z_2+t_2)-f(x+z_1,y+z_2)\|_p \right. \\
&\quad +\|f(x+z_1-t_1,y+z_2+t_2)-f(x+z_1,y+z_2)\|_p \\
&\quad +\|f(x+z_1+t_1,y+z_2-t_2)-f(x+z_1,y+z_2)\|_p \\
&\quad +\|f(x+z_1-t_1,y+z_2-t_2)-f(x+z_1,y+z_2)\|_p \\
&\quad +\|f(x+t_1,y+t_2)-f(x,y)\|_p \\
&\quad +\|f(x-t_1,y+t_2)-f(x,y)\|_p \\
&\quad +\|f(x+t_1,y-t_2)-f(x,y)\|_p \\
&\quad + \left. \|f(x-t_1,y-t_2)-f(x,y)\|_p \right\} \\
&=\mathcal{O}\left(\omega_1 (t_1)+\omega_2 (t_2)\right).
\end{split}
\end{equation*}
For the second part, a similar reasoning yields
\begin{equation} \label{eq9}
\begin{split}
\|H_{x,z_1,y,z_2}(t_1,t_2)\|_p&=
\|\phi_{x+z_1,y+z_2}(t_1,t_2)-\phi_{x,y}(t_1,t_2)\|_p \\
&\leq\frac{1}{4}\left\{\|f(x+z_1+t_1,y+z_2+t_2)-f(x+t_1,y+t_2)\|_p \right. \\
&\quad +\|f(x+z_1-t_1,y+z_2+t_2)-f(x-t_1,y+t_2)\|_p \\
&\quad +\|f(x+z_1+t_1,y+z_2-t_2)-f(x+t_1,y-t_2)\|_p \\
&\quad + \left. \|f(x+z_1-t_1,y+z_2-t_2)-f(x-t_1,y-t_2)\|_p\right\} \\
&\quad +\|f(x+z_1,y+z_2)-f(x,y)\|_p \\
& =\mathcal{O}\left(\omega_1 (|z_1|)+\omega_2 (|z_2|)\right).
\end{split}
\end{equation}
The other relations can be verified in the same way; we omit the details.
Part (iii). Using part (ii) and the fact that $v_1$ is non-decreasing, in the
case $t_1\leq |z_1|$ and $t_2\leq |z_2|$, we get
\begin{equation*}
\begin{split}
\|H_{x,z_1,y,z_2}(t_1,t_2)\|_p&=\mathcal{O}\left(\omega_1 (t_1)+\omega_1
(t_2)\right) \\
&=\mathcal{O}\left( v_1 (t_1) \frac{\omega_1 (t_1)}{v_1 (t_1)} + v_1 (t_2)
\frac{\omega_1 (t_2)}{v_1 (t_2)}\right) \\
&=\mathcal{O}\left( v_1 (|z_1|) \frac{\omega_1 (t_1)}{v_1 (t_1)} + v_1
(|z_2|)\frac{\omega_1 (t_2)}{v_1 (t_2)}\right).
\end{split}
\end{equation*}
Since $\frac{\omega_1 (t_1)}{v_1(t_1)}$ is non-decreasing, the second
estimate of part (ii) gives, for $t_1\geq |z_1|$ and $t_2\geq |z_2|$,
\begin{equation*}
\begin{split}
\|H_{x,z_1,y,z_2}(t_1,t_2)\|_p&=\mathcal{O}\left(\omega_1 (|z_1|)+\omega_1
(|z_2|)\right) \\
&=\mathcal{O}\left(v_1 (|z_1|) \frac{\omega_1 (|z_1|)}{v_1 (|z_1|)} + v_1
(|z_2|)\frac{\omega_1 (|z_2|)}{v_1 (|z_2|)}\right) \\
&=\mathcal{O}\left(v_1 (|z_1|) \frac{\omega_1 (t_1)}{v_1 (t_1)} + v_1 (|z_2|)
\frac{\omega_1 (t_2)}{v_1 (t_2)}\right).
\end{split}
\end{equation*}
For $t_1\leq |z_1|$ and $t_2\geq |z_2|$, consider two possibilities. If $
|z_1| \geq t_2$, then
\begin{equation*}
\begin{split}
\|H_{x,z_1,y,z_2}(t_1,t_2)\|_p&=\mathcal{O}\left(\omega_1 (t_1)+\omega_1
(t_2)\right) \\
&=\mathcal{O}\left( v_1 (t_1) \frac{\omega_1 (t_1)}{v_1 (t_1)} + v_1 (t_2)
\frac{\omega_1 (t_2)}{v_1 (t_2)} \right) \\
& = \mathcal{O}\left( v_1 (|z_1|) \left( \frac{\omega_1 (t_1)}{v_1 (t_1)} +
\frac{\omega_1 (t_2)}{v_1 (t_2)}\right) \right).
\end{split}
\end{equation*}
If $t_2 \geq |z_1|$, then
\begin{equation*}
\begin{split}
\|H_{x,z_1,y,z_2}(t_1,t_2)\|_p&=\mathcal{O}\left(\omega_1 (|z_1|)+\omega_1
(|z_2|)\right) \\
&=\mathcal{O}\left(v_1 (|z_1|) \frac{\omega_1 (|z_1|)}{v_1 (|z_1|)} + v_1
(|z_2|)\frac{\omega_1 (|z_2|)}{v_1 (|z_2|)}\right) \\
&=\mathcal{O}\left( (v_1 (|z_1|) + v_1 (|z_2|)) \frac{\omega_1 (t_2)}{v_1
(t_2)}\right).
\end{split}
\end{equation*}
For $t_1\geq |z_1|$ and $t_2\leq |z_2|$, we can get
\begin{equation*}
\|H_{x,z_1,y,z_2}(t_1,t_2)\|_p = \mathcal{O}\left( v_1 (|z_2|) \left( \frac{
\omega_1 (t_1)}{v_1 (t_1)} + \frac{\omega_1 (t_2)}{v_1 (t_2)}\right) \right)
\end{equation*}
in case of $|z_2| \geq t_1$, and
\begin{equation*}
\|H_{x,z_1,y,z_2}(t_1,t_2)\|_p=\mathcal{O}\left( (v_1 (|z_1|) + v_1 (|z_2|))
\frac{\omega_1 (t_1)}{v_1 (t_1)}\right)
\end{equation*}
in the case $t_1 \geq |z_2|$, arguing as before. This proves the first
inequality in part (iii). The other relations can be verified in the same
way.
Part (iv). We have
\begin{equation*}
\begin{split}
&\|H_{x,z_1,y,z_2}(t_1,t_2)-H_{x,z_1,y,z_2}(t_1+h_1,t_2)\|_p \\
&\leq\|\phi_{x,y}(t_1+h_1,t_2)-\phi_{x,y}(t_1,t_2)\|_p \\
&\quad +\|\phi_{x+z_1,y+z_2}(t_1+h_1,t_2)-\phi_{x+z_1,y+z_2}(t_1,t_2)\|_p \\
&=\mathcal{O}\left(\omega_1 (h_1)\right) = \mathcal{O}\left(\omega_1 (h_1) +
\omega_1 (h_2)\right)
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
&\|H_{x,z_1,y,z_2}(t_1,t_2)-H_{x,z_1,y,z_2}(t_1+h_1,t_2)\|_p \\
&\leq\|H_{x,z_1,y,z_2}(t_1,t_2)\|_p + \|H_{x,z_1,y,z_2}(t_1+h_1,t_2)\|_p \\
&=\mathcal{O}\left(\omega_1 (|z_1|)+\omega_2 (|z_2|)\right),
\end{split}
\end{equation*}
while
\begin{equation*}
\begin{split}
&\|H_{x,z_1,y,z_2}(t_1,t_2)-H_{x,z_1,y,z_2}(t_1,t_2+h_2)\|_p \\
&\leq\|\phi_{x,y}(t_1,t_2+h_2)-\phi_{x,y}(t_1,t_2)\|_p \\
&\quad +\|\phi_{x+z_1,y+z_2}(t_1,t_2+h_2)-\phi_{x+z_1,y+z_2}(t_1,t_2)\|_p \\
&=\mathcal{O}\left(\omega_1 (h_2)\right) = \mathcal{O}\left(\omega_1 (h_1) +
\omega_1 (h_2)\right)
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
&\|H_{x,z_1,y,z_2}(t_1,t_2)-H_{x,z_1,y,z_2}(t_1,t_2+h_2)\|_p \\
&\leq\|H_{x,z_1,y,z_2}(t_1,t_2)\|_p + \|H_{x,z_1,y,z_2}(t_1,t_2+h_2)\|_p \\
&=\mathcal{O}\left(\omega_1 (|z_1|)+\omega_2 (|z_2|)\right),
\end{split}
\end{equation*}
furthermore
\begin{equation*}
\begin{split}
\| H_{x,z_1,y,z_2}(t_1,t_2)- &H_{x,z_1,y,z_2}(t_1+h_1,t_2)-
H_{x,z_1,y,z_2}(t_1,t_2+h_2)\\
&\quad + H_{x,z_1,y,z_2}(t_1+h_1,t_2+h_2) \|_p \\
&\leq \|H_{x,z_1,y,z_2}(t_1,t_2)-H_{x,z_1,y,z_2}(t_1+h_1,t_2)\|_p \\
&\quad + \| H_{x,z_1,y,z_2}(t_1,t_2+h_2) -
H_{x,z_1,y,z_2}(t_1+h_1,t_2+h_2)\|_p \\
&= \mathcal{O}\left(\omega_1 (h_1) + \omega_1 (h_2)\right)
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
\|H_{x,z_1,y,z_2}(t_1,t_2)&-H_{x,z_1,y,z_2}(t_1+h_1,t_2) -
H_{x,z_1,y,z_2}(t_1,t_2+h_2)\\
&\quad + H_{x,z_1,y,z_2}(t_1+h_1,t_2+h_2)\|_p \\
&\leq\|H_{x,z_1,y,z_2}(t_1,t_2)\|_p + \|H_{x,z_1,y,z_2}(t_1+h_1,t_2)\|_p \\
&\quad + \| H_{x,z_1,y,z_2}(t_1,t_2+h_2)\|_p + \|
H_{x,z_1,y,z_2}(t_1+h_1,t_2+h_2)\|_p \\
&=\mathcal{O}\left(\omega_1 (|z_1|)+\omega_2 (|z_2|)\right).
\end{split}
\end{equation*}
From analogous estimates, part (iv) follows by considering the four
cases $h_1\leq |z_1|$ and $h_2\leq |z_2|$; $h_1\geq |z_1|$ and $h_2\geq
|z_2| $; $h_1\leq |z_1|$ and $h_2\geq |z_2|$; $h_1\geq |z_1|$ and $h_2\leq
|z_2|$, as in part (iii). We omit the details.
\end{proof}
\section{Main Results}
We prove the following statement.
\begin{theorem}
\label{the01} Let $\omega $ and $v$ be moduli of continuity so that $\frac{
\omega (t)}{v(t)}$ is non-decreasing in $t$. If $f\in H_{p}^{(\omega ,\omega
)}$, $p\geq 1$, then
\begin{equation*}
\Vert \sigma _{m,2m;n,2n}(f)-f\Vert _{p}^{(v,v)}= \mathcal{O} \left( \frac{\omega (h_{1})
}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) ,
\end{equation*}
where $h_{1}=\frac{\pi }{m}$, $h_{2}=\frac{\pi }{n}$ for $m,n\in \mathbb{N}$.
\end{theorem}
\allowdisplaybreaks{\begin{proof}
Using the equality
\begin{equation}
\int_{0}^{\pi }\int_{0}^{\pi }\frac{\sin 2mt_{1}\sin mt_{1}\sin 2nt_{2}\sin
nt_{2}}{\left( 4\sin \frac{t_{1}}{2}\sin \frac{t_{2}}{2}\right) ^{2}}
dt_{1}dt_{2}=\frac{mn\pi ^{2}}{4} \label{eq11}
\end{equation}
we have
\begin{equation}
\begin{split}
D_{m,n}(x,y)& =\sigma _{m,2m;n,2n}(f;x,y)-f(x,y) \\
& =\frac{4}{mn\pi ^{2}}\int_{0}^{\pi }\int_{0}^{\pi }\phi _{x,y}(t_{1},t_{2})
\frac{S(t_{1},t_{2})}{\left( 4\sin \frac{t_{1}}{2}\sin \frac{t_{2}}{2}
\right) ^{2}}dt_{1}dt_{2},
\end{split}
\label{eq12}
\end{equation}
where
\begin{equation*}
\begin{split}
\phi _{x,y}(t_{1},t_{2})=& \frac{1}{4}\big[
f(x+t_{1},y+t_{2})+f(x-t_{1},y+t_{2}) \\
& \quad +f(x+t_{1},y-t_{2})+f(x-t_{1},y-t_{2})-4f(x,y)\big]
\end{split}
\end{equation*}
and
\begin{equation*}
S(t_{1},t_{2})=\sin 2mt_{1}\sin mt_{1}\sin 2nt_{2}\sin nt_{2}.
\end{equation*}
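Identity (\ref{eq11}) can be checked numerically: the kernel factorises as a product of two one-dimensional kernels, and each one-dimensional integral equals $m\pi /2$ (resp.\ $n\pi /2$), so the double integral equals $mn\pi ^{2}/4$. A minimal sketch (the grid resolution and the tested values of $m$ are illustrative assumptions):

```python
import numpy as np

def kernel_1d(t, m):
    # One-dimensional factor sin(2mt) sin(mt) / (2 sin(t/2))^2 of the
    # kernel S(t1,t2) / (4 sin(t1/2) sin(t2/2))^2.
    return np.sin(2 * m * t) * np.sin(m * t) / (2 * np.sin(t / 2)) ** 2

def integral_1d(m, steps=200_000):
    # Trapezoidal rule on (0, pi]; the integrand extends continuously
    # by 2*m**2 at t = 0, so starting just above zero is harmless.
    t = np.linspace(1e-9, np.pi, steps)
    f = kernel_1d(t, m)
    return float(np.sum((f[1:] + f[:-1]) / 2 * np.diff(t)))

# Each 1-D integral equals m*pi/2, hence the double integral in (eq11)
# equals (m*pi/2) * (n*pi/2) = m*n*pi^2/4.
for m in (2, 5, 9):
    assert abs(integral_1d(m) - m * np.pi / 2) < 1e-3
```
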
By definition, we have
\begin{equation}
\Vert D_{m,n}\Vert _{p}^{(v,v)}:=\Vert D_{m,n}\Vert _{p}+\sup_{z_{1}\neq
0,\,\,z_{2}\neq 0}\frac{\Vert D_{m,n}(x+z_{1},y+z_{2})-D_{m,n}(x,y)\Vert _{p}
}{v(|z_{1}|)+v(|z_{2}|)}. \label{eq13}
\end{equation}
Now, we can write
\begin{equation}
\begin{split}
& D_{m,n}(x+z_{1},y+z_{2})-D_{m,n}(x,y) \\
& =\frac{4}{mn\pi ^{2}}\int_{0}^{\pi }\int_{0}^{\pi }\left[ \phi
_{x+z_{1},y+z_{2}}(t_{1},t_{2})-\phi _{x,y}(t_{1},t_{2})\right] \frac{
S(t_{1},t_{2})}{\left( 4\sin \frac{t_{1}}{2}\sin \frac{t_{2}}{2}\right) ^{2}}
dt_{1}dt_{2} \\
& =\frac{4}{mn\pi ^{2}}\int_{0}^{\pi }\int_{0}^{\pi
}H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\frac{S(t_{1},t_{2})}{\left( 4\sin \frac{
t_{1}}{2}\sin \frac{t_{2}}{2}\right) ^{2}}dt_{1}dt_{2} \\
& =\frac{4}{mn\pi ^{2}}\int_{0}^{h_{1}}
\int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\frac{S(t_{1},t_{2})}{\left(
4\sin \frac{t_{1}}{2}\sin \frac{t_{2}}{2}\right) ^{2}}dt_{1}dt_{2} \\
& \quad +\frac{4}{mn\pi ^{2}}\int_{h_{1}}^{\pi
}\int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\frac{S(t_{1},t_{2})}{
\left( 4\sin \frac{t_{1}}{2}\sin \frac{t_{2}}{2}\right) ^{2}}dt_{1}dt_{2} \\
& \quad +\frac{4}{mn\pi ^{2}}\int_{0}^{h_{1}}\int_{h_{2}}^{\pi
}H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\frac{S(t_{1},t_{2})}{\left( 4\sin \frac{
t_{1}}{2}\sin \frac{t_{2}}{2}\right) ^{2}}dt_{1}dt_{2} \\
& \quad +\frac{4}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi
}H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\frac{S(t_{1},t_{2})}{\left( 4\sin \frac{
t_{1}}{2}\sin \frac{t_{2}}{2}\right) ^{2}}dt_{1}dt_{2}:=\sum_{r=1}^{4}J_{r}.
\end{split}
\label{eq14}
\end{equation}
Using Lemma \ref{le1} for $p\geq 1$, by the estimate
\begin{equation*}
\left\vert \frac{S(t_{1},t_{2})}{\left( 4\sin \frac{t_{1}}{2}\sin \frac{t_{2}
}{2}\right) ^{2}}\right\vert =\mathcal{O}\left( m^{2}n^{2}\right) ,\quad
0<t_{1}\leq \pi ,0<t_{2}\leq \pi ,
\end{equation*}
and Lemma \ref{le2}, we have
\begin{equation}
\begin{split}
\Vert J_{1}\Vert _{p}& \leq \frac{4}{mn\pi ^{2}}\int_{0}^{h_{1}}
\int_{0}^{h_{2}}\Vert H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\Vert _{p}\left\vert
\frac{S(t_{1},t_{2})}{\left( 4\sin \frac{t_{1}}{2}\sin \frac{t_{2}}{2}
\right) ^{2}}\right\vert dt_{1}dt_{2} \\
& =\mathcal{O}\left( \left( v(|z_{1}|)+v(|z_{2}|)\right)
mn\int_{0}^{h_{1}}\int_{0}^{h_{2}}\left( \frac{\omega (t_{1})}{v(t_{1})}+
\frac{\omega (t_{2})}{v(t_{2})}\right) dt_{1}dt_{2}\right) \\
& =\mathcal{O}\left( \left( v(|z_{1}|)+v(|z_{2}|)\right) mn\left( \frac{
\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right)
\int_{0}^{h_{1}}\int_{0}^{h_{2}}dt_{1}dt_{2}\right) \\
& =\mathcal{O}\left( \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega
(h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) \right) .
\end{split}
\label{eq15}
\end{equation}
The quantity $J_{2}$ can be written as follows:
\begin{equation}
\begin{split}
J_{2}& =\frac{4}{mn\pi ^{2}}\int_{h_{1}}^{\pi
}\int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\sin 2mt_{1}\sin mt_{1}\\
&\quad \times \frac{\sin 2nt_{2}\sin nt_{2}}{\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}
\left( \frac{1}{\left( 2\sin \frac{t_{1}}{2}\right) ^{2}}-\frac{1}{t_{1}^{2}}
\right) dt_{1}dt_{2}\\
&\quad +\frac{4}{mn\pi ^{2}}\int_{h_{1}}^{\pi
}\int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\frac{S(t_{1},t_{2})}{
t_{1}^{2}\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}
dt_{1}dt_{2}:=J_{21}+J_{22}.
\end{split}
\label{eq16}
\end{equation}
Since the function
\begin{equation*}
\frac{1}{\left( 2\sin \frac{t_{1}}{2}\right) ^{2}}-\frac{1}{t_{1}^{2}}
\end{equation*}
is bounded for $0<t_{1}\leq \pi $ and
\begin{equation*}
\left\vert \frac{\sin 2nt_{2}\sin nt_{2}}{\left( 2\sin \frac{t_{2}}{2}
\right) ^{2}}\right\vert =\mathcal{O}\left( n^{2}\right) ,\quad 0<t_{2}\leq
\pi ,
\end{equation*}
then by Lemma \ref{le1} with $p\geq 1$ and Lemma \ref{le2}, we get
\begin{equation}
\begin{split}
\Vert J_{21}\Vert _{p}& =\mathcal{O}\left( \frac{n}{m}\right)
\int_{h_{1}}^{\pi }\int_{0}^{h_{2}}\Vert
H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\Vert _{p}dt_{1}dt_{2} \\
& =\mathcal{O}\left( \left( v(|z_{1}|)+v(|z_{2}|)\right) \frac{n}{m}
\int_{h_{1}}^{\pi }\int_{0}^{h_{2}}\left( \frac{\omega (t_{1})}{v(t_{1})}+
\frac{\omega (t_{2})}{v(t_{2})}\right) dt_{1}dt_{2}\right) \\
& =\mathcal{O}\left( \left( v(|z_{1}|)+v(|z_{2}|)\right) \frac{n}{m}\left(
\frac{\omega (\pi )}{v(\pi )}+\frac{\omega (h_{2})}{v(h_{2})}\right)
\int_{h_{1}}^{\pi }\int_{0}^{h_{2}}dt_{1}dt_{2}\right) \\
& =\mathcal{O}\left( \frac{1}{m}\left( v(|z_{1}|)+v(|z_{2}|)\right) \left(
\frac{\omega (\pi )}{v(\pi )}+\frac{\omega (h_{2})}{v(h_{2})}\right) \right)
.
\end{split}
\label{eq17}
\end{equation}
It is clear that for $m\in \mathbb{N}$
\begin{equation*}
\frac{1}{m}\omega \left( \pi \right) \leq 2\omega \left( \frac{\pi }{m}
\right) ,
\end{equation*}
whence
\begin{equation*}
\left\Vert J_{21}\right\Vert _{p}=\mathcal{O}\left( \left(
v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{
\omega (h_{2})}{v(h_{2})}\right) \right) .
\end{equation*}
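The inequality $\frac{1}{m}\omega (\pi )\leq 2\omega (\frac{\pi }{m})$ used above follows from the subadditivity of a modulus of continuity; for completeness, a one-line justification (a standard argument) is:

```latex
% Subadditivity of a modulus of continuity: \omega(kt) \le k\,\omega(t), k \in \mathbb{N}.
\frac{1}{m}\,\omega(\pi)
   =\frac{1}{m}\,\omega\!\left(m\cdot\frac{\pi}{m}\right)
   \le \frac{1}{m}\cdot m\,\omega\!\left(\frac{\pi}{m}\right)
   =\omega\!\left(\frac{\pi}{m}\right)
   \le 2\,\omega\!\left(\frac{\pi}{m}\right).
```
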
Further, substituting $t_{1}+h_{1}$ in place of $t_{1}$ in
\begin{equation}
J_{22}=\frac{4}{mn\pi ^{2}}\int_{h_{1}}^{\pi
}\int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\frac{S(t_{1},t_{2})}{
t_{1}^{2}\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}dt_{1}dt_{2} \label{eq18}
\end{equation}
we obtain
\begin{equation}
J_{22}=-\frac{4}{mn\pi ^{2}}\int_{0}^{\pi
-h_{1}}\int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\frac{
S(t_{1},t_{2})}{(t_{1}+h_{1})^{2}\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}
dt_{1}dt_{2}. \label{eq19}
\end{equation}
Hence, from (\ref{eq18}) and (\ref{eq19}) we get
\begin{equation*}
\begin{split}
J_{22}& =\frac{2}{mn\pi ^{2}}\int_{h_{1}}^{\pi
}\int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\frac{S(t_{1},t_{2})}{
t_{1}^{2}\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}dt_{1}dt_{2} \\
& \quad -\frac{2}{mn\pi ^{2}}\int_{0}^{\pi
-h_{1}}\int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\frac{
S(t_{1},t_{2})}{(t_{1}+h_{1})^{2}\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}
dt_{1}dt_{2}\\
& =\frac{2}{mn\pi ^{2}}\int_{h_{1}}^{\pi
}\int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\frac{S(t_{1},t_{2})}{
t_{1}^{2}\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}dt_{1}dt_{2} \\
& \quad -\frac{2}{mn\pi ^{2}}\int_{0}^{h_{1}}
\int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\frac{S(t_{1},t_{2})}{
(t_{1}+h_{1})^{2}\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}dt_{1}dt_{2} \\
& \quad -\frac{2}{mn\pi ^{2}}\int_{h_{1}}^{\pi
}\int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\frac{S(t_{1},t_{2})}{
(t_{1}+h_{1})^{2}\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}dt_{1}dt_{2} \\
& \quad +\frac{2}{mn\pi ^{2}}\int_{\pi -h_{1}}^{\pi
}\int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\frac{S(t_{1},t_{2})}{
(t_{1}+h_{1})^{2}\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}dt_{1}dt_{2}
\end{split}
\end{equation*}
\begin{equation}
\begin{split}
& =\frac{2}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{0}^{h_{2}}\left( \frac{
H_{x,z_{1},y,z_{2}}(t_{1},t_{2})}{t_{1}^{2}}-\frac{
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})}{(t_{1}+h_{1})^{2}}\right) \frac{S(t_{1},t_{2})}{\left( 2\sin \frac{t_{2}}{2}
\right) ^{2}}dt_{1}dt_{2} \\
& \quad -\frac{2}{mn\pi ^{2}}\int_{0}^{h_{1}}
\int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\frac{S(t_{1},t_{2})}{
(t_{1}+h_{1})^{2}\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}dt_{1}dt_{2} \\
& \quad +\frac{2}{mn\pi ^{2}}\int_{\pi -h_{1}}^{\pi
}\int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\frac{S(t_{1},t_{2})}{
(t_{1}+h_{1})^{2}\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}
dt_{1}dt_{2}:=\sum_{s=1}^{3}J_{22}^{(s)}.
\end{split}
\label{eq20}
\end{equation}
Using the inequalities $\frac{2}{\pi }\beta \leq \sin \beta $ for $\beta \in
(0,\pi /2)$, $\sin \beta \leq \beta $ for $\beta \in (0,\pi )$, Lemma \ref
{le1} and Lemma \ref{le2}, we have
\begin{equation}
\begin{split}
\Vert J_{22}^{(2)}\Vert _{p}& =\mathcal{O}\left( \frac{1}{mn}\right)
\int_{0}^{h_{1}}\int_{0}^{h_{2}}\Vert
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\Vert _{p}\frac{(mt_{1}nt_{2})^{2}}{
(t_{1}+h_{1})^{2}\left( \frac{t_{2}}{\pi }\right) ^{2}}dt_{1}dt_{2} \\
& =\mathcal{O}\left( mn\right)
\int_{0}^{h_{1}}\int_{0}^{h_{2}}\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega
(t_{1}+h_{1})}{v(t_{1}+h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right)
dt_{1}dt_{2} \\
& =\mathcal{O}\left( mn\right) (v(|z_{1}|)+v(|z_{2}|))\left( \frac{\omega
(2h_{1})}{v(2h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right)
\int_{0}^{h_{1}}\int_{0}^{h_{2}}dt_{1}dt_{2} \\
& =\mathcal{O}\left( mn\right) (v(|z_{1}|)+v(|z_{2}|))\left( \frac{\omega
(h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) h_{1}h_{2} \\
& =\mathcal{O}\left( (v(|z_{1}|)+v(|z_{2}|))\left( \frac{\omega (h_{1})}{
v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) \right) .
\end{split}
\label{eq21}
\end{equation}
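The two elementary sine inequalities invoked above can be verified on a grid (a sanity check only; the grid sizes are arbitrary assumptions):

```python
import numpy as np

# Jordan's inequality: (2/pi) * b <= sin(b) for b in (0, pi/2]
b1 = np.linspace(1e-9, np.pi / 2, 100_000)
assert np.all(2 / np.pi * b1 <= np.sin(b1) + 1e-12)

# sin(b) <= b for b in (0, pi)
b2 = np.linspace(1e-9, np.pi, 100_000)
assert np.all(np.sin(b2) <= b2)
```
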
Moreover, the inequalities $|\sin \beta |\leq 1$ for $\beta \in (\pi
-h_{1},\pi )$, $\sin \beta \leq \beta $ for $\beta \in (0,\pi )$, Lemma \ref
{le1}, and Lemma \ref{le2} imply
\begin{equation}
\begin{split}
\Vert J_{22}^{(3)}\Vert _{p}& =\mathcal{O}\left( \frac{1}{mn}\right)
\int_{\pi -h_{1}}^{\pi }\int_{0}^{h_{2}}\Vert
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\Vert _{p}\frac{(nt_{2})^{2}}{
(t_{1}+h_{1})^{2}\left( \frac{t_{2}}{\pi }\right) ^{2}}dt_{1}dt_{2} \\
& =\mathcal{O}\left( \frac{n}{m}\right) \int_{\pi -h_{1}}^{\pi
}\!\int_{0}^{h_{2}}(v(|z_{1}|)+v(|z_{2}|))\left( \frac{\omega (t_{1}+h_{1})}{
v(t_{1}+h_{1})}+\frac{\omega (t_{2})}{v(t_{2})}\right) \frac{1}{
(t_{1}+h_{1})^{2}}dt_{1}dt_{2} \\
& =\mathcal{O}\left( \frac{n}{m}\right) (v(|z_{1}|)+v(|z_{2}|))\int_{\pi
}^{\pi +h_{1}}\int_{0}^{h_{2}}\left( \frac{\omega (\theta _{1})}{v(\theta
_{1})}+\frac{\omega (t_{2})}{v(t_{2})}\right) \frac{1}{\theta _{1}^{2}}
d\theta _{1}dt_{2} \\
& =\mathcal{O}\left( \frac{n}{m}\right) (v(|z_{1}|)+v(|z_{2}|))\left( \frac{
\omega (\pi +h_{1})}{v(\pi )}+\frac{\omega (h_{2})}{v(h_{2})}\right)
\int_{\pi }^{\pi +h_{1}}\int_{0}^{h_{2}}\frac{d\theta _{1}dt_{2}}{\theta
_{1}^{2}} \\
& =\mathcal{O}\left( \frac{n}{m}\right) (v(|z_{1}|)+v(|z_{2}|))\left( \frac{
\omega (\pi )+\omega (h_{1})}{v(\pi )}+\frac{\omega (h_{2})}{v(h_{2})}
\right) \frac{h_{1}h_{2}}{\pi (\pi +h_{1})} \\
& =\mathcal{O}\left( \frac{1}{m^{2}}\right) (v(|z_{1}|)+v(|z_{2}|))\left(
\frac{\omega (\pi )}{v(\pi )}+\frac{\omega (h_{2})}{v(h_{2})}\right) \\
& =\mathcal{O}\left( \frac{1}{m}(v(|z_{1}|)+v(|z_{2}|))\left( \frac{\omega
(h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) \right) .
\end{split}
\label{eq22}
\end{equation}
For $J_{22}^{(1)}$ we can write
\begin{equation}
\begin{split}
J_{22}^{(1)}& =\frac{2}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{0}^{h_{2}}\Bigg(
\frac{H_{x,z_{1},y,z_{2}}(t_{1},t_{2})}{t_{1}^{2}}-\frac{
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})}{t_{1}^{2}} \\
& \qquad +\frac{H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})}{t_{1}^{2}}-\frac{
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})}{(t_{1}+h_{1})^{2}}\Bigg) \\
& \qquad \times \frac{S(t_{1},t_{2})}{\left( 2\sin \frac{t_{2}}{2}\right)
^{2}}dt_{1}dt_{2} \\
& =\frac{2}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{0}^{h_{2}}\left(
H_{x,z_{1},y,z_{2}}(t_{1},t_{2})-H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})
\right) \\
& \qquad \times \frac{S(t_{1},t_{2})}{t_{1}^{2}\left( 2\sin \frac{t_{2}}{2}
\right) ^{2}}dt_{1}dt_{2} \\
& \quad +\frac{2}{mn\pi ^{2}}\int_{h_{1}}^{\pi
}\int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\frac{S(t_{1},t_{2})}{
\left( 2\sin \frac{t_{2}}{2}\right) ^{2}} \\
& \qquad \times \left( \frac{1}{t_{1}^{2}}-\frac{1}{(t_{1}+h_{1})^{2}}
\right) dt_{1}dt_{2}:=J_{221}^{(1)}+J_{222}^{(1)}.
\end{split}
\label{eq23}
\end{equation}
Then, using Lemma \ref{le1}, and Lemma \ref{le2}, we have
\begin{equation}
\begin{split}
\Vert J_{221}^{(1)}\Vert _{p}& =\mathcal{O}\left( \frac{1}{mn}\right)
\int_{h_{1}}^{\pi }\int_{0}^{h_{2}}\left\Vert
H_{x,z_{1},y,z_{2}}(t_{1},t_{2})-H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})
\right\Vert _{p}\frac{(nt_{2})^{2}}{t_{1}^{2}\left( 2\frac{t_{2}}{\pi }
\right) ^{2}}dt_{1}dt_{2} \\
& =\mathcal{O}\left( \frac{n}{m}\right) \int_{h_{1}}^{\pi
}\int_{0}^{h_{2}}(v(|z_{1}|)+v(|z_{2}|))\left( \frac{\omega (h_{1})}{v(h_{1})
}+\frac{\omega (h_{2})}{v(h_{2})}\right) \frac{dt_{1}dt_{2}}{t_{1}^{2}} \\
& =\mathcal{O}\left( \frac{n}{m}\right) (v(|z_{1}|)+v(|z_{2}|))\left( \frac{
\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) \frac{h_{2}
}{h_{1}} \\
& =\mathcal{O}\left( (v(|z_{1}|)+v(|z_{2}|))\left( \frac{\omega (h_{1})}{
v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) \right) .
\end{split}
\label{eq24}
\end{equation}
With similar reasoning we obtain
\begin{equation}
\begin{split}
\Vert J_{222}^{(1)}\Vert _{p}& =\mathcal{O}\left( \frac{1}{mn}\right)
\int_{h_{1}}^{\pi }\int_{0}^{h_{2}}\left\Vert
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\right\Vert _{p}\frac{(nt_{2})^{2}}{
\left( 2\frac{t_{2}}{\pi }\right) ^{2}}\frac{h_{1}(2t_{1}+h_{1})}{
t_{1}^{2}(t_{1}+h_{1})^{2}}dt_{1}dt_{2} \\
& =\mathcal{O}\left( \frac{n}{m}\right) \int_{h_{1}}^{\pi
}\int_{0}^{h_{2}}\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (t_{1}+h_{1})}{
v(t_{1}+h_{1})}+\frac{\omega (t_{2})}{v(t_{2})}\right) \frac{
h_{1}dt_{1}dt_{2}}{t_{1}^{2}(t_{1}+h_{1})} \\
& =\mathcal{O}\left( \frac{n}{m^{2}}\right)
(v(|z_{1}|)+v(|z_{2}|))\int_{h_{1}}^{\pi }\left( \frac{\omega (t_{1}+h_{1})}{
(t_{1}+h_{1})v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})t_{1}}\right) \frac{
h_{2}dt_{1}}{t_{1}^{2}} \\
& =\mathcal{O}\left( \frac{1}{m^{2}}\right) \left(
v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (2h_{1})}{2h_{1}v(h_{1})}
\int\limits_{h_{1}}^{\pi }\frac{dt_{1}}{t_{1}^{2}}+\frac{\omega (h_{2})}{
v(h_{2})}\int\limits_{h_{1}}^{\pi }\frac{dt_{1}}{t_{1}^{3}}\right) \\
& =\mathcal{O}\left( \frac{1}{m^{2}}\right) \left(
v(|z_{1}|)+v(|z_{2}|)\right) \left( m\frac{\omega (h_{1})}{2h_{1}v(h_{1})}
+m^{2}\frac{\omega (h_{2})}{v(h_{2})}\right) \\
& =\mathcal{O}\left( \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega
(h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) \right) .
\end{split}
\label{eq25}
\end{equation}
So, from (\ref{eq23}), (\ref{eq24}) and (\ref{eq25}), we have
\begin{equation}
\Vert J_{22}^{(1)}\Vert _{p}=\mathcal{O}\left( (v(|z_{1}|)+v(|z_{2}|))\left(
\frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right)
\right) . \label{eq26}
\end{equation}
Now, taking into account (\ref{eq20}), (\ref{eq21}), (\ref{eq22}) and (\ref
{eq26}), we have
\begin{equation}
\Vert J_{22}\Vert _{p}=\mathcal{O}\left( \left( v(|z_{1}|)+v(|z_{2}|)\right)
\left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}
\right) \right) . \label{eq27}
\end{equation}
Whence, using (\ref{eq16}), (\ref{eq17}) and (\ref{eq27}), we obtain
\begin{equation}
\Vert J_{2}\Vert _{p}=\mathcal{O}\left( \left( v(|z_{1}|)+v(|z_{2}|)\right)
\left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}
\right) \right) . \label{eq28}
\end{equation}
By analogy, we conclude that
\begin{equation}
\Vert J_{3}\Vert _{p}=\mathcal{O}\left( \left( v(|z_{1}|)+v(|z_{2}|)\right)
\left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}
\right) \right) . \label{eq29}
\end{equation}
Finally, let us estimate the quantity $J_{4}$. Indeed, $J_{4}$ can be
rewritten as
\begin{align*}
J_{4}& = \frac{4}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi
}H_{x,z_{1},y,z_{2}}(t_{1},t_{2})S(t_1,t_2) \\
&\qquad \times \left( \frac{1}{\left( 2\sin \frac{t_{1}}{2}\right) ^{2}}-\frac{1}{
t_{1}^{2}}\right) \left( \frac{1}{\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}
-\frac{1}{t_{2}^{2}}\right) dt_{1}dt_{2}\\
&\quad +\frac{4}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi
}H_{x,z_{1},y,z_{2}}(t_{1},t_{2})S(t_1,t_2) \\
& \qquad \times \left( \frac{1}{\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}-\frac{1}{
t_{2}^{2}}\right) \frac{1}{t_{1}^{2}}dt_{1}dt_{2}\\
& \quad +\frac{4}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi
}H_{x,z_{1},y,z_{2}}(t_{1},t_{2})S(t_1,t_2) \\
& \qquad \times \left( \frac{1}{\left( 2\sin \frac{t_{1}}{2}\right) ^{2}}-\frac{1}{
t_{1}^{2}}\right) \frac{1}{t_{2}^{2}}dt_{1}dt_{2}\\
& \quad +\frac{4}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi
}H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\frac{S(t_1,t_2)}{\left( t_{1}t_{2}\right)
^{2}}dt_{1}dt_{2} \\
&:= J_{41}+J_{42}+J_{43}+J_{44}.
\end{align*}
The boundedness of the function
\begin{equation*}
\left( \frac{1}{\left( 2\sin \frac{t_{1}}{2}\right) ^{2}}-\frac{1}{
t_{1}^{2}}\right) \left( \frac{1}{\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}
-\frac{1}{t_{2}^{2}}\right)
\end{equation*}
for $0<t_{1},t_{2}\leq \pi $, together with Lemma \ref{le1} and Lemma \ref{le2}, implies
\begin{align*}
\Vert J_{41}\Vert _{p}&=\mathcal{O}\left( \frac{1}{mn}\right)
\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\Vert
H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\Vert _{p}dt_{1}dt_{2}\\
&=\mathcal{O}\left( \frac{1}{mn}\right) \int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi
}(v(|z_{1}|)+v(|z_{2}|))\left( \frac{\omega (t_{1})}{v(t_{1})}+\frac{\omega
(t_{2})}{v(t_{2})}\right) dt_{1}dt_{2}\\
&=\mathcal{O}\left( \frac{1}{mn}\right) (v(|z_{1}|)+v(|z_{2}|))\frac{\omega
\left( \pi \right) }{v\left( \pi \right) } \\
&=\mathcal{O}\left( \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega
(h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) \right) .
\end{align*}
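The boundedness of $\frac{1}{(2\sin \frac{t}{2})^{2}}-\frac{1}{t^{2}}$ on $(0,\pi ]$, used repeatedly in the estimates of $J_{2}$ and $J_{4}$, can be checked numerically; the Taylor expansion $\frac{1}{4\sin ^{2}(t/2)}=\frac{1}{t^{2}}+\frac{1}{12}+\mathcal{O}(t^{2})$ gives the continuous extension $\frac{1}{12}$ at $t=0$. A sketch (the grid parameters are illustrative assumptions):

```python
import numpy as np

t = np.linspace(1e-4, np.pi, 100_000)
g = 1 / (2 * np.sin(t / 2)) ** 2 - 1 / t ** 2

# g extends continuously by 1/12 at t = 0 and stays bounded on (0, pi]
# (its value at t = pi is 1/4 - 1/pi^2 < 0.2).
assert abs(g[0] - 1 / 12) < 1e-3
assert np.all(g > 0) and np.all(g < 0.2)
```
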
Substituting $t_{1}$ with $t_{1}+h_{1}$ in
\begin{align*}
&\frac{4}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi
}H_{x,z_{1},y,z_{2}}(t_{1},t_{2})S(t_1,t_2) \left( \frac{1}{\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}-\frac{1}{
t_{2}^{2}}\right) \frac{1}{t_{1}^{2}}dt_{1}dt_{2}
\end{align*}
we obtain
\begin{align*}
J_{42} &= -\frac{4}{mn\pi ^{2}}\int_{0}^{\pi -h_{1}}\int_{h_{2}}^{\pi
}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})S(t_1,t_2) \\
& \qquad \times \left( \frac{1}{\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}-\frac{1}{
t_{2}^{2}}\right) \frac{1}{\left( t_{1}+h_{1}\right) ^{2}}dt_{1}dt_{2}.
\end{align*}
Hence
\begin{align*}
J_{42} &= \frac{2}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi
}H_{x,z_{1},y,z_{2}}(t_{1},t_{2})S(t_{1},t_{2}) \left( \frac{1}{\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}-\frac{1}{
t_{2}^{2}}\right) \frac{1}{t_{1}^{2}}dt_{1}dt_{2}\\
& \quad -\frac{2}{mn\pi ^{2}}\int_{0}^{\pi -h_{1}}\int_{h_{2}}^{\pi
}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})S(t_{1},t_{2}) \\
& \qquad \times \left( \frac{1}{\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}-\frac{1}{
t_{2}^{2}}\right) \frac{1}{\left( t_{1}+h_{1}\right) ^{2}}dt_{1}dt_{2}\\
&= \frac{2}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi
}H_{x,z_{1},y,z_{2}}(t_{1},t_{2})S(t_{1},t_{2}) \left( \frac{1}{\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}-\frac{1}{
t_{2}^{2}}\right) \frac{1}{t_{1}^{2}}dt_{1}dt_{2}\\
& \quad -\frac{2}{mn\pi ^{2}}\left( \int_{0}^{h_{1}}\int_{h_{2}}^{\pi
}+\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }-\int_{\pi -h_{1}}^{\pi
}\int_{h_{2}}^{\pi }\right) H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}) \\
& \qquad \times S(t_{1},t_{2})\left( \frac{1}{\left( 2\sin \frac{t_{2}}{2}\right)
^{2}}-\frac{1}{t_{2}^{2}}\right) \frac{1}{\left( t_{1}+h_{1}\right) ^{2}}
dt_{1}dt_{2}\\
&= \frac{2}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\left( \frac{
H_{x,z_{1},y,z_{2}}(t_{1},t_{2})}{t_{1}^{2}}-\frac{
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})}{\left( t_{1}+h_{1}\right) ^{2}}
\right) \\
& \qquad \times S(t_{1},t_{2})\left( \frac{1}{\left( 2\sin \frac{t_{2}}{2}\right)
^{2}}-\frac{1}{t_{2}^{2}}\right) dt_{1}dt_{2}\\
& \quad -\frac{2}{mn\pi ^{2}}\left( \int_{0}^{h_{1}}\int_{h_{2}}^{\pi }-\int_{\pi
-h_{1}}^{\pi }\int_{h_{2}}^{\pi }\right)
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}) \\
& \qquad \times S(t_{1},t_{2})\left( \frac{1}{\left( 2\sin \frac{t_{2}}{2}\right)
^{2}}-\frac{1}{t_{2}^{2}}\right) \frac{1}{\left( t_{1}+h_{1}\right) ^{2}}
dt_{1}dt_{2} \\
&:= \sum\limits_{s=1}^{3}J_{42}^{\left( s\right) }.
\end{align*}
Since the function
\begin{equation*}
\frac{1}{\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}-\frac{1}{t_{2}^{2}}
\end{equation*}
is bounded for $0<t_{2}\leq \pi $, using Lemma \ref{le1} with $p\geq 1$ and
Lemma \ref{le2}, we have
\begin{align*}
\left\Vert J_{42}^{\left( 2\right) }\right\Vert _{p} &= \mathcal{O}\left(
\frac{1}{mn}\right) \int_{0}^{h_{1}}\int_{h_{2}}^{\pi }\left(
v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (t_{1}+h_{1})}{
v(t_{1}+h_{1})}+\frac{\omega (t_{2})}{v(t_{2})}\right) \frac{
m^{2}t_{1}^{2}dt_{1}dt_{2}}{(t_{1}+h_{1})^{2}} \\
&= \mathcal{O}\left( \frac{m}{n}\right) \left( v(|z_{1}|)+v(|z_{2}|)\right)
\left( \frac{\omega (2h_{1})}{v(2h_{1})}+\frac{\omega (\pi )}{v(\pi )}
\right) h_{1} \\
&= \mathcal{O}\left( \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega
(h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) \right) ,
\end{align*}
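For completeness, the boundedness used above is a standard computation: from the Taylor expansion $2\sin \frac{t_{2}}{2}=t_{2}-\frac{t_{2}^{3}}{24}+\mathcal{O}(t_{2}^{5})$ we get
\begin{equation*}
\frac{1}{\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}-\frac{1}{t_{2}^{2}}=\frac{t_{2}^{2}-\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}{t_{2}^{2}\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}\rightarrow \frac{1}{12}\quad \text{as}\quad t_{2}\rightarrow 0+,
\end{equation*}
so the function extends continuously to $[0,\pi ]$ and is therefore bounded there.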
\begin{align*}
\left\Vert J_{42}^{\left( 3\right) }\right\Vert _{p} &= \mathcal{O}\left(
\frac{1}{mn}\right) \int_{\pi -h_{1}}^{\pi }\int_{h_{2}}^{\pi }\left(
v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (t_{1}+h_{1})}{
v(t_{1}+h_{1})}+\frac{\omega (t_{2})}{v(t_{2})}\right) \frac{dt_{1}dt_{2}}{
(t_{1}+h_{1})^{2}} \\
&= \mathcal{O}\left( \frac{1}{mn}\right) \left( v(|z_{1}|)+v(|z_{2}|)\right)
\left( \frac{\omega (\pi )}{v(\pi )}\right) h_{1} \\
&= \mathcal{O}\left( \frac{1}{m}\left( v(|z_{1}|)+v(|z_{2}|)\right) \left(
\frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right)
\right)
\end{align*}
and
\begin{align*}
\left\Vert J_{42}^{\left( 1\right) }\right\Vert _{p} &= \mathcal{O}\left(
\frac{1}{mn}\right) \int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\left\Vert
H_{x,z_{1},y,z_{2}}(t_{1},t_{2})-H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})
\right\Vert _{p}\frac{1}{t_{1}^{2}}dt_{1}dt_{2} \\
& \quad +\mathcal{O}\left( \frac{1}{mn}\right) \int_{h_{1}}^{\pi
}\int_{h_{2}}^{\pi }\left\Vert
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\right\Vert _{p}\frac{
h_{1}(2t_{1}+h_{1})}{t_{1}^{2}\left( t_{1}+h_{1}\right) ^{2}}dt_{1}dt_{2} \\
&= \mathcal{O}\left( \frac{1}{mn}\right) \left( v(|z_{1}|)+v(|z_{2}|)\right)
\left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}
\right) \int_{h_{1}}^{\pi }\frac{1}{t_{1}^{2}}dt_{1} \\
& \quad +\mathcal{O}\left( \frac{1}{m^{2}n}\right) \int_{h_{1}}^{\pi
}\int_{h_{2}}^{\pi }\left( v(|z_{1}|)+v(|z_{2}|)\right) \\
& \qquad \times \left( \frac{\omega (t_{1}+h_{1})}{\left( t_{1}+h_{1}\right)
v(t_{1}+h_{1})}+\frac{\omega (t_{2})}{t_{1}v(t_{2})}\right) \frac{1}{
t_{1}^{2}}dt_{1}dt_{2} \\
&= \mathcal{O}\left( \frac{1}{n}\right) \left( v(|z_{1}|)+v(|z_{2}|)\right)
\left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right)
\\
& \quad +\mathcal{O}\left( \frac{1}{m^{2}n}\right) \left(
v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (2h_{1})}{2h_{1}v(h_{1})}
\int_{h_{1}}^{\pi }\frac{1}{t_{1}^{2}}dt_{1}+\frac{\omega (\pi )}{v(\pi )}
\int_{h_{1}}^{\pi }\frac{1}{t_{1}^{3}}dt_{1}\right) \\
&= \mathcal{O}\left( \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega
(h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) \right) .
\end{align*}
The quantity $J_{43}$ can be estimated analogously to $J_{42}.$
Substituting in
\begin{equation*}
J_{44}=\frac{4}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi
}H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\frac{S(t_1,t_2)}{\left( t_{1}t_{2}\right)
^{2}}dt_{1}dt_{2}
\end{equation*}
$t_{1}$ with $t_{1}+h_{1}$; then $t_{2}$ with $t_{2}+h_{2}$; and finally both $t_{1}$ with
$t_{1}+h_{1}$ and $t_{2}$ with $t_{2}+h_{2}$, respectively, we get
\begin{equation*}
J_{44}=-\frac{4}{mn\pi ^{2}}\int_{0}^{\pi -h_{1}}\int_{h_{2}}^{\pi
}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\frac{S(t_1,t_2)}{\left(
(t_{1}+h_{1})t_{2}\right) ^{2}}dt_{1}dt_{2},
\end{equation*}
\begin{equation*}
J_{44}=-\frac{4}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{0}^{\pi
-h_{2}}H_{x,z_{1},y,z_{2}}(t_{1},t_{2}+h_{2})\frac{S(t_1,t_2)}{\left(
t_{1}(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2},
\end{equation*}
\begin{equation*}
J_{44}=\frac{4}{mn\pi ^{2}}\int_{0}^{\pi -h_{1}}\int_{0}^{\pi
-h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\frac{S(t_1,t_2)}{\left(
(t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2}.
\end{equation*}
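Each of the three displayed expressions, like the original one, equals $J_{44}$; denoting the four equal representations (for this step only) by $E_{1},E_{2},E_{3},E_{4}$, one averages:
\begin{equation*}
J_{44}=\frac{1}{4}\left( E_{1}+E_{2}+E_{3}+E_{4}\right) ,
\end{equation*}
which explains why the common factor $\frac{4}{mn\pi ^{2}}$ becomes $\frac{1}{mn\pi ^{2}}$ in the computation that follows.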
Hence
\begin{align*}
J_{44}& =\frac{1}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi
}H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\frac{S(t_1,t_2)}{\left( t_{1}t_{2}\right)
^{2}}dt_{1}dt_{2}\\
& \quad -\frac{1}{mn\pi ^{2}}\int_{0}^{\pi -h_{1}}\int_{h_{2}}^{\pi
}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\frac{S(t_1,t_2)}{\left(
(t_{1}+h_{1})t_{2}\right) ^{2}}dt_{1}dt_{2}\\
& \quad -\frac{1}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{0}^{\pi
-h_{2}}H_{x,z_{1},y,z_{2}}(t_{1},t_{2}+h_{2})\frac{S(t_1,t_2)}{\left(
t_{1}(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2}\\
& \quad +\frac{1}{mn\pi ^{2}}\int_{0}^{\pi -h_{1}}\int_{0}^{\pi
-h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\frac{S(t_1,t_2)}{\left(
(t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2}\\
& =\frac{1}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi
}H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\frac{S(t_1,t_2)}{\left( t_{1}t_{2}\right)
^{2}}dt_{1}dt_{2}\\
& \quad -\frac{1}{mn\pi ^{2}}\left( \int_{0}^{h_{1}}\int_{h_{2}}^{\pi
}+\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }-\int_{\pi -h_{1}}^{\pi
}\int_{h_{2}}^{\pi }\right) \\
& \quad \qquad H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\frac{S(t_1,t_2)}{\left(
(t_{1}+h_{1})t_{2}\right) ^{2}}dt_{1}dt_{2} \\
& \quad -\frac{1}{mn\pi ^{2}}\left( \int_{h_{1}}^{\pi
}\int_{0}^{h_{2}}+\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }-\int_{h_{1}}^{\pi
}\int_{\pi -h_{2}}^{\pi }\right) \\
& \quad \qquad H_{x,z_{1},y,z_{2}}(t_{1},t_{2}+h_{2})\frac{S(t_1,t_2)}{\left(
t_{1}(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2}\\
& \quad +\frac{1}{mn\pi ^{2}}\left(
\int_{0}^{h_{1}}\int_{0}^{h_{2}}+\int_{0}^{h_{1}}\int_{h_{2}}^{\pi
}-\int_{0}^{h_{1}}\int_{\pi -h_{2}}^{\pi }+\int_{h_{1}}^{\pi
}\int_{0}^{h_{2}}+\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\right.\\
& \quad \qquad \qquad \left. -\int_{h_{1}}^{\pi }\int_{\pi -h_{2}}^{\pi }-\int_{\pi -h_{1}}^{\pi
}\int_{0}^{h_{2}}-\int_{\pi -h_{1}}^{\pi }\int_{h_{2}}^{\pi }+\int_{\pi
-h_{1}}^{\pi }\int_{\pi -h_{2}}^{\pi }\right)\\
& \quad \qquad \qquad \quad H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\frac{S(t_1,t_2)}{\left(
(t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2}\\
& =\frac{1}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\left( \frac{
H_{x,z_{1},y,z_{2}}(t_{1},t_{2})}{\left( t_{1}t_{2}\right) ^{2}}-\frac{
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})}{\left( (t_{1}+h_{1})t_{2}\right) ^{2}
}\right.\\
& \quad \qquad \quad \left. -\frac{H_{x,z_{1},y,z_{2}}(t_{1},t_{2}+h_{2})}{\left(
t_{1}(t_{2}+h_{2})\right) ^{2}}+\frac{
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})}{\left(
(t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}\right) S(t_1,t_2) dt_{1}dt_{2}\\
& \quad -\frac{1}{mn\pi ^{2}}\left( \int_{0}^{h_{1}}\int_{h_{2}}^{\pi }-\int_{\pi
-h_{1}}^{\pi }\int_{h_{2}}^{\pi }\right)
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\frac{S(t_1,t_2)}{\left(
(t_{1}+h_{1})t_{2}\right) ^{2}}dt_{1}dt_{2}\\
& \quad -\frac{1}{mn\pi ^{2}}\left( \int_{h_{1}}^{\pi
}\int_{0}^{h_{2}}-\int_{h_{1}}^{\pi }\int_{\pi -h_{2}}^{\pi }\right)
H_{x,z_{1},y,z_{2}}(t_{1},t_{2}+h_{2})\frac{S(t_1,t_2)}{\left(
t_{1}(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2}\\
& \quad +\frac{1}{mn\pi ^{2}}\left(
\int_{0}^{h_{1}}\int_{0}^{h_{2}}+\int_{0}^{h_{1}}\int_{h_{2}}^{\pi
}-\int_{0}^{h_{1}}\int_{\pi -h_{2}}^{\pi }+\int_{h_{1}}^{\pi
}\int_{0}^{h_{2}}\right.\\
& \quad \qquad \qquad \left. -\int_{h_{1}}^{\pi }\int_{\pi -h_{2}}^{\pi }-\int_{\pi -h_{1}}^{\pi
}\int_{0}^{h_{2}}-\int_{\pi -h_{1}}^{\pi }\int_{h_{2}}^{\pi }+\int_{\pi
-h_{1}}^{\pi }\int_{\pi -h_{2}}^{\pi }\right)\\
& \quad \qquad \qquad \quad H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\frac{S(t_1,t_2)}{\left(
(t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2} \\
& :=\sum\limits_{l=1}^{13}J_{44}^{\left( l\right) }.
\end{align*}
We start with the estimate of $J_{44}^{\left( 1\right) }$, for which we have
\begin{align*}
J_{44}^{\left( 1\right) } &= \frac{1}{mn\pi ^{2}}\int_{h_{1}}^{\pi
}\int_{h_{2}}^{\pi }\Bigg(\frac{H_{x,z_{1},y,z_{2}}(t_{1},t_{2})}{\left(
t_{1}t_{2}\right) ^{2}}-\frac{H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})}{\left(
(t_{1}+h_{1})t_{2}\right) ^{2}} \\
& \quad \qquad -\frac{H_{x,z_{1},y,z_{2}}(t_{1},t_{2}+h_{2})}{\left(
t_{1}(t_{2}+h_{2})\right) ^{2}}+\frac{
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})}{\left(
(t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}\Bigg)S(t_{1},t_{2})dt_{1}dt_{2} \\
&= \frac{1}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\Bigg(
H_{x,z_{1},y,z_{2}}(t_{1},t_{2})-H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}) \\
&\quad \qquad -H_{x,z_{1},y,z_{2}}(t_{1},t_{2}+h_{2})+H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})
\Bigg)\frac{S(t_{1},t_{2})dt_{1}dt_{2}}{\left( t_{1}t_{2}\right) ^{2}} \\
&\quad +\frac{1}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\Bigg(
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})-H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})
\Bigg) \\
&\qquad \times \Bigg(\frac{1}{\left( t_{1}t_{2}\right) ^{2}}-\frac{1}{\left(
(t_{1}+h_{1})t_{2}\right) ^{2}}\Bigg)S(t_{1},t_{2})dt_{1}dt_{2} \\
&\quad +\frac{1}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\Bigg(
H_{x,z_{1},y,z_{2}}(t_{1},t_{2}+h_{2})-H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})
\Bigg) \\
&\qquad \times \Bigg(\frac{1}{\left( t_{1}t_{2}\right) ^{2}}-\frac{1}{\left(
t_{1}(t_{2}+h_{2})\right) ^{2}}\Bigg)S(t_{1},t_{2})dt_{1}dt_{2} \\
&\quad +\frac{1}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi
}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\Bigg(\frac{1}{(t_{1}+h_{1})^{2}
}-\frac{1}{t_{1}^{2}}\Bigg) \\
&\qquad \times \Bigg(\frac{1}{(t_{2}+h_{2})^{2}}-\frac{1}{t_{2}^{2}}\Bigg)
S(t_{1},t_{2})dt_{1}dt_{2} \\
&:= J_{44}^{\left( 1,1\right) }+J_{44}^{\left( 1,2\right) }+J_{44}^{\left(
1,3\right) }+J_{44}^{\left( 1,4\right) }.
\end{align*}
It follows from Lemma \ref{le2} (iv) and $|S(t_{1},t_{2})|\leq 1$ for $
t_{1}\in (h_{1},\pi )$, $t_{2}\in (h_{2},\pi )$ that
\begin{align*}
\Vert J_{44}^{\left( 1,1\right) }\Vert _{p} &= \frac{\mathcal{O}(1)}{mn}
\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\Big\|
H_{x,z_{1},y,z_{2}}(t_{1},t_{2})-H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}) \\
& \quad -H_{x,z_{1},y,z_{2}}(t_{1},t_{2}+h_{2})+H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})
\Big\|_{p}\frac{|S(t_{1},t_{2})|dt_{1}dt_{2}}{\left( t_{1}t_{2}\right) ^{2}}
\\
&= \frac{\mathcal{O}(1)}{mn}\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega
(h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) \int_{h_{1}}^{\pi
}\int_{h_{2}}^{\pi }\frac{dt_{1}dt_{2}}{\left( t_{1}t_{2}\right) ^{2}} \\
&= \mathcal{O}(1)\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{
v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) .
\end{align*}
Moreover, from Lemma \ref{le2} (iii) and using the same arguments as above,
we obtain
\begin{align*}
\Vert J_{44}^{\left( 1,2\right) }\Vert _{p} &= \frac{\mathcal{O}(1)}{mn}
\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\Big\|
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})-H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})
\Big\|_{p} \\
& \quad \times \frac{1}{t_{2}^{2}}\Bigg(\frac{1}{t_{1}^{2}}-\frac{1}{\left(
t_{1}+h_{1}\right) ^{2}}\Bigg)|S(t_{1},t_{2})|dt_{1}dt_{2} \\
&= \frac{\mathcal{O}(1)}{mn}\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega
(h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) \int_{h_{1}}^{\pi
}\int_{h_{2}}^{\pi }\frac{h_{1}\left( 2t_{1}+h_{1}\right) }{
t_{2}^{2}t_{1}^{2}\left( t_{1}+h_{1}\right) ^{2}}dt_{1}dt_{2} \\
&= \frac{\mathcal{O}(1)}{m^{2}n}\left( v(|z_{1}|)+v(|z_{2}|)\right) \left(
\frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right)
\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\frac{1}{t_{2}^{2}t_{1}^{3}}
dt_{1}dt_{2} \\
&= \mathcal{O}(1)\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{
v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) .
\end{align*}
Similarly, we obtain
\begin{equation*}
\Vert J_{44}^{\left( 1,3\right) }\Vert _{p}=\mathcal{O}
(1)\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{
\omega (h_{2})}{v(h_{2})}\right) .
\end{equation*}
To estimate $\Vert J_{44}^{\left( 1,4\right) }\Vert _{p}$ we use Lemma \ref
{le1} and Lemma \ref{le2} (iii) in order to get
\begin{align*}
\Vert J_{44}^{\left( 1,4\right) }\Vert _{p} &= \frac{\mathcal{O}(1)}{mn}
\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\Big\|
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\Big\|_{p} \\
& \quad \times \Bigg(\frac{1}{(t_{1}+h_{1})^{2}}-\frac{1}{t_{1}^{2}}\Bigg)\Bigg(
\frac{1}{(t_{2}+h_{2})^{2}}-\frac{1}{t_{2}^{2}}\Bigg)
|S(t_{1},t_{2})|dt_{1}dt_{2} \\
&= \frac{\mathcal{O}(1)}{mn}\left( v(|z_{1}|)+v(|z_{2}|)\right)
\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\Bigg(\frac{\omega (t_{1}+h_{1})}{
v(t_{1}+h_{1})}+\frac{\omega (t_{2}+h_{2})}{v(t_{2}+h_{2})}\Bigg) \\
& \quad \times \frac{h_{1}h_{2}\left( 2t_{1}+h_{1}\right) \left(
2t_{2}+h_{2}\right) }{\left( t_{1}t_{2}\left( t_{1}+h_{1}\right) \left(
t_{2}+h_{2}\right) \right) ^{2}}dt_{1}dt_{2} \\
&= \frac{\mathcal{O}(1)}{\left( mn\right) ^{2}}\left(
v(|z_{1}|)+v(|z_{2}|)\right) \\
& \quad \times \int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\Bigg(\frac{\omega
(t_{1}+h_{1})}{\left( t_{1}+h_{1}\right) v(h_{1})t_{2}^{3}t_{1}^{2}}+\frac{
\omega (t_{2}+h_{2})}{\left( t_{2}+h_{2}\right) v(h_{2})t_{1}^{3}t_{2}^{2}}
\Bigg)dt_{1}dt_{2} \\
&= \frac{\mathcal{O}(1)}{\left( mn\right) ^{2}}\left(
v(|z_{1}|)+v(|z_{2}|)\right) \\
& \quad \times \Bigg(\frac{\omega (2h_{1})}{2h_{1}v(h_{1})}\int_{h_{1}}^{\pi
}\int_{h_{2}}^{\pi }\frac{1}{t_{2}^{3}t_{1}^{2}}dt_{1}dt_{2}+\frac{\omega
(2h_{2})}{2h_{2}v(h_{2})}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\frac{1}{
t_{2}^{2}t_{1}^{3}}dt_{1}dt_{2}\Bigg) \\
&= \mathcal{O}(1)\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{
v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) .
\end{align*}
Whence, using the above estimates, we have
\begin{equation}
\Vert J_{44}^{\left( 1\right) }\Vert _{p}=\mathcal{O}
(1)\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{
\omega (h_{2})}{v(h_{2})}\right) . \label{eqb1}
\end{equation}
Replacing $t_{2}$ with $t_{2}+h_{2}$ in
\begin{equation*}
J_{44}^{\left( 2\right) }=\frac{1}{mn\pi ^{2}}\int_{0}^{h_{1}}\int_{h_{2}}^{
\pi }H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\frac{S(t_{1},t_{2})}{\left(
(t_{1}+h_{1})t_{2}\right) ^{2}}dt_{1}dt_{2}
\end{equation*}
we get
\begin{equation*}
J_{44}^{\left( 2\right) }=-\frac{1}{mn\pi ^{2}}\int_{0}^{h_{1}}\int_{0}^{\pi
-h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\frac{S(t_{1},t_{2})}{
\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2}
\end{equation*}
and averaging the two expressions we obtain
\begin{align*}
J_{44}^{\left( 2\right) } &= \frac{1}{2mn\pi ^{2}}\bigg(\int_{0}^{h_{1}}
\int_{h_{2}}^{\pi }H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\frac{S(t_{1},t_{2})
}{\left( (t_{1}+h_{1})t_{2}\right) ^{2}}dt_{1}dt_{2} \\
& \quad -\int_{0}^{h_{1}}\int_{0}^{\pi
-h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\frac{S(t_{1},t_{2})}{
\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2}\bigg) \\
&= \frac{1}{2mn\pi ^{2}}\bigg(\int_{0}^{h_{1}}\int_{h_{2}}^{\pi }\bigg(\frac{
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})}{\left( (t_{1}+h_{1})t_{2}\right) ^{2}
}-\frac{H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})}{\left(
(t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}\bigg) \\
& \qquad \times
S(t_{1},t_{2})dt_{1}dt_{2}\\
& \quad -\int_{0}^{h_{1}} \int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\frac{
S(t_{1},t_{2})}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2} \\
& \quad +\int_{0}^{h_{1}}\int_{\pi -h_{2}}^{\pi
}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\frac{S(t_{1},t_{2})}{\left(
(t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2}\bigg) \\
&:= \sum\limits_{s=1}^{3}J_{44s}^{\left( 2\right) }.
\end{align*}
Since
\begin{equation*}
|\sin 2mt_{1}\sin mt_{1}\sin 2nt_{2}\sin nt_{2}|\leq
4(mnt_{1}t_{2})^{2}\quad \text{for}\quad 0<t_{1}<\pi ,\ 0<t_{2}<\pi ,
\end{equation*}
applying Lemma \ref{le1} and Lemma \ref{le2} (iii) we have
\begin{align*}
\Vert J_{442}^{\left( 2\right) }\Vert _{p} &= \mathcal{O}\left(\frac{1}{mn}
\right)\int_{0}^{h_{1}}\int_{0}^{h_{2}}\left\Vert
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\right\Vert _{p}\frac{
(mnt_{1}t_{2})^{2}}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}
dt_{1}dt_{2} \\
&= \mathcal{O}(mn)\int_{0}^{h_{1}}\int_{0}^{h_{2}}\left(
v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (t_{1}+h_{1})}{
v(t_{1}+h_{1})}+\frac{\omega (t_{2}+h_{2})}{v(t_{2}+h_{2})}\right)
dt_{1}dt_{2} \\
&= \mathcal{O}(mn)\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega
(2h_{1})}{v(2h_{1})}+\frac{\omega (2h_{2})}{v(2h_{2})}\right) h_{1}h_{2} \\
&= \mathcal{O}(1)\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega
(h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) .
\end{align*}
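For completeness, the product bound used above follows from the elementary inequality $|\sin u|\leq |u|$ applied to each of the four factors:
\begin{equation*}
|\sin 2mt_{1}|\,|\sin mt_{1}|\leq 2mt_{1}\cdot mt_{1}=2(mt_{1})^{2},\qquad |\sin 2nt_{2}|\,|\sin nt_{2}|\leq 2(nt_{2})^{2},
\end{equation*}
and the product of the two right-hand sides is $4(mnt_{1}t_{2})^{2}$.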
Because of
\begin{equation*}
|\sin 2mt_{1}\sin mt_{1}|\leq 2(mt_{1})^{2}\quad \text{for}\quad 0<t_{1}<\pi
\end{equation*}
and
\begin{equation*}
|\sin 2nt_{2}\sin nt_{2}|\leq 1\quad \text{for}\quad \pi -h_{2}<t_{2}<\pi ,
\end{equation*}
it follows that
\begin{align*}
\Vert J_{443}^{\left( 2\right) }\Vert _{p} &= \mathcal{O}\left(\frac{1}{mn}
\right)\int_{0}^{h_{1}}\int_{\pi -h_{2}}^{\pi }\left\Vert
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\right\Vert _{p}\frac{
(mt_{1})^{2}}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2} \\
&= \mathcal{O}\left(\frac{m}{n}\right)\int_{0}^{h_{1}}\int_{\pi -h_{2}}^{\pi }\left(
v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (t_{1}+h_{1})}{
v(t_{1}+h_{1})}+\frac{\omega (t_{2}+h_{2})}{v(t_{2}+h_{2})}\right) \frac{
dt_{1}dt_{2}}{(t_{2}+h_{2})^{2}} \\
&= \mathcal{O}\left(\frac{m}{n}\right)\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{
\omega (2h_{1})}{v(2h_{1})}h_{1}h_{2}+h_{1}\int_{\pi }^{\pi +h_{2}}\frac{
\omega (\theta _{2})}{v(\theta _{2})}\frac{d\theta _{2}}{\theta _{2}^2}\right)
\\
&= \mathcal{O}\left(\frac{m}{n}\right)\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{
\omega (h_{1})}{v(h_{1})}h_{1}h_{2}+\frac{\omega (\pi +h_{2})}{v(\pi +h_{2})}
\frac{h_{1}h_{2}}{\pi (\pi +h_{2})}\right) \\
&= \mathcal{O}\left( \frac{1}{n^{2}}\right) \left(
v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{
\omega (\pi )}{v(\pi )}\right) \\
&= \mathcal{O}\left(\frac{1}{n}\right)\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{
\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) .
\end{align*}
Now we write
\begin{align*}
J_{441}^{\left( 2\right) } &= \frac{1}{2mn\pi ^{2}}\int_{0}^{h_{1}}
\int_{h_{2}}^{\pi }\bigg(\frac{H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})}{
\left( (t_{1}+h_{1})t_{2}\right) ^{2}}-\frac{
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})}{\left(
(t_{1}+h_{1})t_{2}\right) ^{2}} \\
& \quad +\frac{H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})}{\left(
(t_{1}+h_{1})t_{2}\right) ^{2}}-\frac{
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})}{\left(
(t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}\bigg)S(t_{1},t_{2})dt_{1}dt_{2} \\
&= \frac{1}{2mn\pi ^{2}}\int_{0}^{h_{1}}\int_{h_{2}}^{\pi }\bigg(
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})-H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})
\bigg) \\
&\qquad \times \frac{S(t_{1},t_{2})}{\left( (t_{1}+h_{1})t_{2}\right) ^{2}}
dt_{1}dt_{2} \\
&\quad +\frac{1}{2mn\pi ^{2}}\int_{0}^{h_{1}}\int_{h_{2}}^{\pi
}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\frac{S(t_{1},t_{2})}{\left(
t_{1}+h_{1}\right) ^{2}} \bigg(\frac{1}{t_{2}^{2}}-\frac{1}{\left( t_{2}+h_{2}\right) ^{2}}
\bigg)dt_{1}dt_{2} \\
& :=J_{441}^{\left( 21\right) }+J_{441}^{\left( 22\right) }.
\end{align*}
For $\Vert J_{441}^{\left( 21\right) }\Vert _{p}$ we have
\begin{align*}
\Vert J_{441}^{\left( 21\right) }\Vert _{p} &= \mathcal{O}\left(\frac{1}{mn}
\right)\int_{0}^{h_{1}}\int_{h_{2}}^{\pi }\left\Vert
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})-H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\right\Vert _{p}
\\
& \quad \times \frac{(mt_{1})^{2}}{\left( (t_{1}+h_{1})t_{2}\right) ^{2}}
dt_{1}dt_{2} \\
&= \mathcal{O}\left(\frac{m}{n}\right)\int_{0}^{h_{1}}\int_{h_{2}}^{\pi
}\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega
(h_{2})}{v(h_{2})}\right) \frac{dt_{1}dt_{2}}{t_{2}^{2}} \\
&= \mathcal{O}(1)\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{
v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) ,
\end{align*}
while for $\Vert J_{441}^{\left( 22\right) }\Vert _{p}$, we get
\begin{align*}
\Vert J_{441}^{\left( 22\right) }\Vert _{p} &= \mathcal{O}\left(\frac{1}{mn}
\right)\int_{0}^{h_{1}}\int_{h_{2}}^{\pi }\left\Vert
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\right\Vert _{p} \\
& \quad \times \frac{(mt_{1})^{2}}{\left( t_{1}+h_{1}\right) ^{2}}\bigg(\frac{1}{
t_{2}^{2}}-\frac{1}{\left( t_{2}+h_{2}\right) ^{2}}\bigg)dt_{1}dt_{2} \\
&= \mathcal{O}\left(\frac{m}{n}\right)\int_{0}^{h_{1}}\int_{h_{2}}^{\pi
}\left( v(|z_{1}|)+v(|z_{2}|)\right) \\
& \quad \times \left( \frac{\omega (t_{1}+h_{1})}{v(t_{1}+h_{1})}+\frac{\omega
(t_{2}+h_{2})}{v(t_{2}+h_{2})}\right) \frac{h_{2}(2t_{2}+h_{2})}{
t_{2}^{2}\left( t_{2}+h_{2}\right) ^{2}}dt_{1}dt_{2} \\
&= \mathcal{O}\left(\frac{m}{n^{2}}\right)\left( v(|z_{1}|)+v(|z_{2}|)\right) \bigg(
\int_{0}^{h_{1}}\int_{h_{2}}^{\pi }\frac{\omega (t_{1}+h_{1})}{v(t_{1}+h_{1})
}\frac{1}{t_{2}^{3}}dt_{1}dt_{2} \\
& \quad +\int_{0}^{h_{1}}\int_{h_{2}}^{\pi }\frac{\omega (t_{2}+h_{2})}{\left(
t_{2}+h_{2}\right) v(h_{2})}\frac{1}{t_{2}^{2}}dt_{1}dt_{2}\bigg) \\
&= \mathcal{O}\left(\frac{m}{n^{2}}\right)\left( v(|z_{1}|)+v(|z_{2}|)\right) \bigg(\frac{\omega
(2h_{1})}{v(h_{1})}\int_{h_{2}}^{\pi }\frac{1}{t_{2}^{3}}dt_{2}
+\frac{\omega (2h_{2})}{2h_{2}v(h_{2})}\int_{h_{2}}^{\pi }\frac{1}{
t_{2}^{2}}dt_{2}\bigg)h_{1} \\
&= \mathcal{O}(1)\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{
v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) .
\end{align*}
Whence,
\begin{equation*}
\Vert J_{441}^{\left( 2\right) }\Vert _{p}=\mathcal{O}
(1)\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{
\omega (h_{2})}{v(h_{2})}\right) .
\end{equation*}
Thus, we obtain
\begin{equation}
\Vert J_{44}^{\left( 2\right) }\Vert _{p}=\mathcal{O}(1)\left(
v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{
\omega (h_{2})}{v(h_{2})}\right) . \label{eqb2}
\end{equation}
By analogy, we find that
\begin{equation}
\Vert J_{44}^{\left( 4\right) }\Vert _{p}=\mathcal{O}(1)\left(
v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{
\omega (h_{2})}{v(h_{2})}\right) . \label{eqb3}
\end{equation}
Once more, replacing $t_{2}$ with $t_{2}+h_{2}$ in
\begin{equation*}
J_{44}^{\left( 3\right) }=\frac{1}{mn\pi ^{2}}\int_{\pi -h_{1}}^{\pi
}\!\!\int_{h_{2}}^{\pi }H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\frac{
S(t_{1},t_{2})}{\left( (t_{1}+h_{1})t_{2}\right) ^{2}}dt_{1}dt_{2}
\end{equation*}
we get
\begin{equation*}
J_{44}^{\left( 3\right) }=-\frac{1}{mn\pi ^{2}}\int_{\pi -h_{1}}^{\pi
}\!\!\int_{0}^{\pi -h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\frac{
S(t_{1},t_{2})}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2}
\end{equation*}
and averaging the two expressions, we have
\begin{align*}
J_{44}^{\left( 3\right) } &= \frac{1}{2mn\pi ^{2}}\bigg(\int_{\pi
-h_{1}}^{\pi }\!\!\int_{h_{2}}^{\pi }H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})
\frac{S(t_{1},t_{2})}{\left( (t_{1}+h_{1})t_{2}\right) ^{2}}dt_{1}dt_{2} \\
& \quad -\int_{\pi -h_{1}}^{\pi }\!\!\int_{0}^{\pi
-h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\frac{S(t_{1},t_{2})}{
\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2}\bigg) \\
&= \frac{1}{2mn\pi ^{2}}\bigg(\int_{\pi -h_{1}}^{\pi }\!\!\int_{h_{2}}^{\pi }
\bigg(\frac{H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})}{\left(
(t_{1}+h_{1})t_{2}\right) ^{2}}-\frac{
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})}{\left(
(t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}\bigg) \\
&\quad \times S(t_{1},t_{2})dt_{1}dt_{2} \!- \!\int_{\pi -h_{1}}^{\pi
}\!\!\int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\frac{
S(t_{1},t_{2})}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2} \\
&\quad +\int_{\pi -h_{1}}^{\pi }\!\!\int_{\pi -h_{2}}^{\pi
}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\frac{S(t_{1},t_{2})}{\left(
(t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2}\bigg) \\
&:= \sum\limits_{s=1}^{3}J_{44s}^{\left( 3\right) }.
\end{align*}
Since
\begin{equation*}
|\sin 2mt_{1}\sin mt_{1}|\leq 1\quad \text{for}\quad \pi -h_{1}<t_{1}<\pi
\end{equation*}
and
\begin{equation*}
|\sin 2nt_{2}\sin nt_{2}|\leq 2(nt_{2})^{2}\quad \text{for}\quad 0<t_{2}<\pi ,
\end{equation*}
applying Lemma \ref{le1} and Lemma \ref{le2} (iii), we have
\begin{align*}
\Vert J_{442}^{\left( 3\right) }\Vert _{p} &= \mathcal{O}\left(\frac{1}{mn}
\right)\int_{\pi -h_{1}}^{\pi }\!\!\int_{0}^{h_{2}}\left\Vert
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\right\Vert _{p}\frac{
(nt_{2})^{2}}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2} \\
&= \mathcal{O}\left(\frac{n}{m}\right)\int_{\pi -h_{1}}^{\pi
}\!\!\int_{0}^{h_{2}}\left( v(|z_{1}|)+v(|z_{2}|)\right)
\left( \frac{\omega (t_{1}+h_{1})}{v(t_{1}+h_{1})}+\frac{\omega
(t_{2}+h_{2})}{v(t_{2}+h_{2})}\right) \frac{dt_{1}dt_{2}}{\left(
t_{1}+h_{1}\right) ^{2}} \\
&= \mathcal{O}\left(\frac{n}{m}\right)\left( v(|z_{1}|)+v(|z_{2}|)\right) \\
& \quad \times h_{2}\left( \int_{\pi -h_{1}}^{\pi }\frac{\omega (t_{1}+h_{1})}{
v(t_{1}+h_{1})}\frac{dt_{1}}{\left( t_{1}+h_{1}\right) ^{2}}+\frac{\omega
(2h_{2})}{v(2h_{2})}\int_{\pi -h_{1}}^{\pi }\frac{dt_{1}}{\left(
t_{1}+h_{1}\right) ^{2}}\right) \\
&= \mathcal{O}\left(\frac{1}{m}\right)\left( v(|z_{1}|)+v(|z_{2}|)\right) \left(
\int_{\pi }^{\pi +h_{1}}\frac{\omega (\theta _{1})}{v(\theta _{1})}\frac{
d\theta _{1}}{\theta _{1}^{2}}+\frac{\omega (2h_{2})}{v(2h_{2})}\int_{\pi
}^{\pi +h_{1}}\frac{d\theta _{1}}{\theta _{1}^{2}}\right) \\
&= \mathcal{O}\left(\frac{1}{m}\right)\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{
\omega (\pi +h_{1})}{v(\pi +h_{1})}+\frac{\omega (2h_{2})}{v(2h_{2})}\right)
h_{1} \\
&= \mathcal{O}\left( \frac{1}{m^{2}}\right) \left(
v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (\pi )}{v(\pi )}+\frac{
\omega (h_{2})}{v(h_{2})}\right) \\
&= \mathcal{O}\left(\frac{1}{m}\right)\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{
\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) .
\end{align*}
Furthermore, the inequality
\begin{equation*}
|\sin 2mt_{1}\sin mt_{1}\sin 2nt_{2}\sin nt_{2}|\leq 1\quad \text{for}\quad
\pi -h_{1}<t_{1}<\pi ,\quad \pi -h_{2}<t_{2}<\pi ,
\end{equation*}
implies
\begin{align*}
\Vert J_{443}^{\left( 3\right) }\Vert _{p} &= \mathcal{O}\left(\frac{1}{mn}
\right)\int_{\pi -h_{1}}^{\pi }\!\!\int_{\pi -h_{2}}^{\pi }\left\Vert
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\right\Vert _{p}\frac{
dt_{1}dt_{2}}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}} \\
&= \mathcal{O}\left(\frac{1}{mn}\right)\int_{\pi -h_{1}}^{\pi }\!\!\int_{\pi
-h_{2}}^{\pi }\left( v(|z_{1}|)+v(|z_{2}|)\right) \\
& \quad \times \left( \frac{\omega (t_{1}+h_{1})}{v(t_{1}+h_{1})}+\frac{\omega
(t_{2}+h_{2})}{v(t_{2}+h_{2})}\right) \frac{dt_{1}dt_{2}}{\left(
(t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}} \\
&= \mathcal{O}\left(\frac{1}{mn}\right)\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{
\omega (\pi +h_{1})}{v(\pi +h_{1})}+\frac{\omega (\pi +h_{2})}{v(\pi +h_{2})}
\right) \\
& \quad \times \int_{\pi -h_{1}}^{\pi }\!\!\int_{\pi -h_{2}}^{\pi }\frac{
dt_{1}dt_{2}}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}} \\
&= \mathcal{O}\left(\frac{1}{mn}\right)\left( v(|z_{1}|)+v(|z_{2}|)\right) \frac{\omega
(2\pi )}{v(2\pi )}\int_{\pi }^{\pi +h_{1}}\!\!\int_{\pi }^{\pi +h_{2}}\frac{
dt_{1}dt_{2}}{\left( t_{1}t_{2}\right) ^{2}} \\
&= \mathcal{O}\left( \frac{1}{\left( mn\right) ^{2}}\right) \left(
v(|z_{1}|)+v(|z_{2}|)\right) \frac{\omega (\pi )}{v(\pi )} \\
&= \mathcal{O}\left(\frac{1}{mn}\right)\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{
\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) .
\end{align*}
It is obvious that
\begin{align*}
J_{441}^{\left( 3\right) } &= \frac{1}{2mn\pi ^{2}}\int_{\pi -h_{1}}^{\pi
}\!\!\int_{h_{2}}^{\pi }\bigg(\frac{H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})}{
\left( (t_{1}+h_{1})t_{2}\right) ^{2}}-\frac{
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})}{\left(
(t_{1}+h_{1})t_{2}\right) ^{2}} \\
& \quad +\frac{H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})}{\left(
(t_{1}+h_{1})t_{2}\right) ^{2}}-\frac{
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})}{\left(
(t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}\bigg)S(t_{1},t_{2})dt_{1}dt_{2} \\
&= \frac{1}{2mn\pi ^{2}}\int_{\pi -h_{1}}^{\pi }\!\!\int_{h_{2}}^{\pi }\bigg(
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})-H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})
\bigg) \\
& \quad \times \frac{S(t_{1},t_{2})}{\left( (t_{1}+h_{1})t_{2}\right) ^{2}}
dt_{1}dt_{2}+\frac{1}{2mn\pi ^{2}}\int_{\pi -h_{1}}^{\pi }\!\!\int_{h_{2}}^{\pi
}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2}) \\
& \quad \times \frac{S(t_{1},t_{2})}{\left( t_{1}+h_{1}\right) ^{2}}\bigg(\frac{1}{
t_{2}^{2}}-\frac{1}{\left( t_{2}+h_{2}\right) ^{2}}\bigg)
dt_{1}dt_{2}:=J_{441}^{\left( 31\right) }+J_{441}^{\left( 32\right) }.
\end{align*}
For $\Vert J_{441}^{\left( 31\right) }\Vert _{p}$, we have
\begin{align*}
\Vert J_{441}^{\left( 31\right) }\Vert _{p} &= \mathcal{O}\left(\frac{1}{mn}
\right)\int_{\pi -h_{1}}^{\pi }\!\!\int_{h_{2}}^{\pi }\left\Vert
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})-H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\right\Vert _{p}
\\
& \quad \times \frac{dt_{1}dt_{2}}{\left( (t_{1}+h_{1})t_{2}\right) ^{2}} \\
&= \mathcal{O}\left(\frac{1}{mn}\right)\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega
(h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) \int_{\pi
-h_{1}}^{\pi }\frac{dt_{1}}{\left( t_{1}+h_{1}\right) ^{2}}\int_{h_{2}}^{\pi
}\frac{dt_{2}}{t_{2}^{2}} \\
&= \mathcal{O}\left( \frac{1}{m^{2}}\right) \left( v(|z_{1}|)+v(|z_{2}|)\right) \left(
\frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) ,
\end{align*}
while for $\Vert J_{441}^{\left( 32\right) }\Vert _{p}$ we obtain
\begin{align*}
\Vert J_{441}^{\left( 32\right) }\Vert _{p} &= \mathcal{O}\left(\frac{1}{mn}
\right)\int_{\pi -h_{1}}^{\pi }\!\!\int_{h_{2}}^{\pi }\left\Vert
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\right\Vert _{p} \\
& \quad \times \frac{1}{\left( t_{1}+h_{1}\right) ^{2}}\bigg(\frac{1}{t_{2}^{2}}-
\frac{1}{\left( t_{2}+h_{2}\right) ^{2}}\bigg)dt_{1}dt_{2} \\
&= \mathcal{O}\left(\frac{1}{mn}\right)\int_{\pi -h_{1}}^{\pi }\!\!\int_{h_{2}}^{\pi
}\left( v(|z_{1}|)+v(|z_{2}|)\right) \\
& \quad \times \left( \frac{\omega (t_{1}+h_{1})}{v(t_{1}+h_{1})}+\frac{\omega
(t_{2}+h_{2})}{v(t_{2}+h_{2})}\right) \frac{1}{\left( t_{1}+h_{1}\right) ^{2}
}\frac{h_{2}(2t_{2}+h_{2})}{t_{2}^{2}\left( t_{2}+h_{2}\right) ^{2}}
dt_{1}dt_{2} \\
&= \mathcal{O}\left(\frac{1}{mn^{2}}\right)\left( v(|z_{1}|)+v(|z_{2}|)\right) \bigg(\frac{\omega
(\pi +h_{1})}{v(\pi +h_{1})}\int_{h_{2}}^{\pi }\frac{dt_{2}}{t_{2}^{3}}
+\frac{\omega (2h_{2})}{2h_{2}v(h_{2})}\int_{h_{2}}^{\pi }\frac{dt_{2}}{
t_{2}^{2}}\bigg)h_{1} \\
&= \mathcal{O}\left( \frac{1}{m^{2}}\right) \left( v(|z_{1}|)+v(|z_{2}|)\right) \left(
\frac{\omega (\pi )}{v(\pi )}+\frac{\omega (h_{2})}{v(h_{2})}\right) \\
&= \mathcal{O}\left( \frac{1}{m}\right) \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{
\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) .
\end{align*}
Hence, we obtain
\begin{equation*}
\Vert J_{441}^{\left( 3\right) }\Vert _{p}=\mathcal{O}\left( \frac{1}{m}
\right) \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{
\omega (h_{2})}{v(h_{2})}\right) .
\end{equation*}
Consequently, we get
\begin{equation}
\Vert J_{44}^{\left( 3\right) }\Vert _{p}=\mathcal{O}\left( \frac{1}{m}
\right) \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{
\omega (h_{2})}{v(h_{2})}\right) . \label{eqb4}
\end{equation}
In the same way, we obtain
\begin{equation}
\Vert J_{44}^{\left( 5\right) }\Vert _{p}=\mathcal{O}\left( \frac{1}{n}
\right) \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{
\omega (h_{2})}{v(h_{2})}\right) . \label{eqb5}
\end{equation}
For $J_{44}^{\left( 6\right) }$, we have
\begin{align}\label{eqb6}
\begin{split}
\Vert J_{44}^{\left( 6\right) }\Vert _{p}& = \mathcal{O}\left( \frac{1}{mn}
\right) \int_{0}^{h_{1}}\!\!\int_{0}^{h_{2}}\left\Vert
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\right\Vert _{p}\frac{
(mnt_{1}t_{2})^{2}dt_{1}dt_{2}}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}
}\\
&= \mathcal{O}\left( mn\right)
\int_{0}^{h_{1}}\!\!\int_{0}^{h_{2}}\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{
\omega (t_{1}+h_{1})}{v(t_{1}+h_{1})}+\frac{\omega (t_{2}+h_{2})}{
v(t_{2}+h_{2})}\right) dt_{1}dt_{2} \\
&= \mathcal{O}\left( 1\right) \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega
(h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) .
\end{split}
\end{align}
As in the estimation of $J_{44}^{\left( 2\right) }$, we have
\begin{align*}
J_{44}^{\left( 7\right) } &= \frac{1}{2mn\pi ^{2}}\int_{0}^{h_{1}}
\int_{h_{2}}^{\pi }H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\frac{
S(t_{1},t_{2})}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2} \\
& \quad -\frac{1}{2mn\pi ^{2}}\int_{0}^{h_{1}}\int_{0}^{\pi
-h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+2h_{2})\frac{S(t_{1},t_{2})}{
\left( (t_{1}+h_{1})(t_{2}+2h_{2})\right) ^{2}}dt_{1}dt_{2} \\
&= \frac{1}{2mn\pi ^{2}}\bigg(\int_{0}^{h_{1}}\int_{h_{2}}^{\pi }\bigg(\frac{
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})}{\left(
(t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}-\frac{
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+2h_{2})}{\left(
(t_{1}+h_{1})(t_{2}+2h_{2})\right) ^{2}}\bigg) \\
&\qquad \times
S(t_{1},t_{2})dt_{1}dt_{2} \\
& \quad -\int_{0}^{h_{1}}
\int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+2h_{2})\frac{
S(t_{1},t_{2})}{\left( (t_{1}+h_{1})(t_{2}+2h_{2})\right) ^{2}}dt_{1}dt_{2}
\\
& \quad +\int_{0}^{h_{1}}\int_{\pi -h_{2}}^{\pi
}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+2h_{2})\frac{S(t_{1},t_{2})}{\left(
(t_{1}+h_{1})(t_{2}+2h_{2})\right) ^{2}}dt_{1}dt_{2}\bigg)
\end{align*}
and consequently
\begin{equation}
\Vert J_{44}^{\left( 7\right) }\Vert _{p}=\mathcal{O}\left( 1\right)
(v(|z_{1}|)+v(|z_{2}|))\left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega
(h_{2})}{v(h_{2})}\right) . \label{eqb7}
\end{equation}
Analogously,
\begin{equation}
\Vert J_{44}^{\left( 9\right) }\Vert _{p}=\mathcal{O}\left( 1\right)
(v(|z_{1}|)+v(|z_{2}|))\left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega
(h_{2})}{v(h_{2})}\right) . \label{eqb8}
\end{equation}
For $J_{44}^{\left( 8\right) }$, we have
\begin{align}\label{eqb9}
\begin{split}
\Vert J_{44}^{\left( 8\right) }\Vert _{p} &= \mathcal{O}\left( \frac{1}{mn}
\right) \int_{0}^{h_{1}}\!\!\int_{\pi -h_{2}}^{\pi }\left\Vert
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\right\Vert _{p}\frac{
(mt_{1})^{2}dt_{1}dt_{2}}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}} \\
&= \mathcal{O}\left( \frac{m}{n}\right) \int_{0}^{h_{1}}\!\!\int_{\pi
-h_{2}}^{\pi }(v(|z_{1}|)+v(|z_{2}|)) \\
& \quad \times \left( \frac{\omega (t_{1}+h_{1})}{v(t_{1}+h_{1})}+\frac{\omega
(t_{2}+h_{2})}{v(t_{2}+h_{2})}\right) \frac{dt_{1}dt_{2}}{\left(
t_{2}+h_{2}\right) ^{2}} \\
&= \mathcal{O}\left( \frac{m}{n}\right) (v(|z_{1}|)+v(|z_{2}|))
\left( \frac{\omega (h_{1})}{v(h_{1})}h_{2}+\int_{\pi -h_{2}}^{\pi }
\frac{\omega (t_{2}+h_{2})}{v(t_{2}+h_{2})}\frac{dt_{2}}{\left(
t_{2}+h_{2}\right) ^{2}}\right) h_{1} \\
&= \mathcal{O}\left( \frac{1}{n}\right) (v(|z_{1}|)+v(|z_{2}|))
\left( \frac{\omega (h_{1})}{v(h_{1})}h_{2}+\int_{\pi }^{\pi +h_{2}}
\frac{\omega (\theta _{2})}{v(\theta _{2})}\frac{d\theta _{2}}{\theta
_{2}^{2}}\right)\\
&= \mathcal{O}\left( \frac{1}{n^{2}}\right) (v(|z_{1}|)+v(|z_{2}|))\left(
\frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (\pi )}{v(\pi )}\right) \\
&= \mathcal{O}\left( \frac{1}{n}\right) (v(|z_{1}|)+v(|z_{2}|))\left( \frac{
\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) .
\end{split}
\end{align}
By analogy, we also get
\begin{equation}
\Vert J_{44}^{\left( 11\right) }\Vert _{p}=\mathcal{O}\left( \frac{1}{m}
\right) (v(|z_{1}|)+v(|z_{2}|))\left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{
\omega (h_{2})}{v(h_{2})}\right) . \label{eqb10}
\end{equation}
Now we need to give the upper bound for $\Vert J_{44}^{\left( 10\right)
}\Vert _{p}$. We have
\begin{align}\label{eqb11}
\begin{split}
\Vert J_{44}^{\left( 10\right) }\Vert _{p} &= \mathcal{O}\left( \frac{1}{mn}
\right) \int_{h_{1}}^{\pi }\!\!\int_{\pi -h_{2}}^{\pi }\left\Vert
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\right\Vert _{p}\frac{
dt_{1}dt_{2}}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}} \\
&= \mathcal{O}\left( \frac{1}{mn}\right) \int_{h_{1}}^{\pi }\!\!\int_{\pi
-h_{2}}^{\pi }(v(|z_{1}|)+v(|z_{2}|)) \\
& \quad \times \left( \frac{\omega (t_{1}+h_{1})}{v(t_{1}+h_{1})}+\frac{\omega
(t_{2}+h_{2})}{v(t_{2}+h_{2})}\right) \frac{dt_{1}dt_{2}}{\left(
t_{1}+h_{1}\right) ^{2}}\\
&= \mathcal{O}\left( \frac{1}{mn}\right) (v(|z_{1}|)+v(|z_{2}|))\left(
\frac{\omega (\pi )}{v(\pi )}\frac{m}{n}+m\int_{\pi }^{\pi +h_{2}}\frac{
\omega (u_{2})du_{2}}{v(u_{2})u_{2}^{2}}\right) \\
&= \mathcal{O}\left( \frac{1}{n^{2}}\right) (v(|z_{1}|)+v(|z_{2}|))\frac{
\omega (\pi )}{v(\pi )} \\
&= \mathcal{O}\left( \frac{1}{n}\right) (v(|z_{1}|)+v(|z_{2}|))\frac{\omega
(h_{2})}{v(h_{2})}.
\end{split}
\end{align}
By analogy, we obtain
\begin{equation}
\Vert J_{44}^{\left( 12\right) }\Vert _{p}=\mathcal{O}\left( \frac{1}{m}
\right) (v(|z_{1}|)+v(|z_{2}|))\frac{\omega (h_{1})}{v(h_{1})}.
\label{eqb12}
\end{equation}
Finally, we have
\begin{align}\label{eqb13}
\begin{split}
\Vert J_{44}^{\left( 13\right) }\Vert _{p} &= \mathcal{O}\left( \frac{1}{mn}
\right) \int_{\pi -h_{1}}^{\pi }\!\!\int_{\pi -h_{2}}^{\pi }\left\Vert
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\right\Vert _{p}\frac{
dt_{1}dt_{2}}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}} \\
&= \mathcal{O}\left( \frac{1}{mn}\right) \int_{\pi -h_{1}}^{\pi
}\!\!\int_{\pi -h_{2}}^{\pi }(v(|z_{1}|)+v(|z_{2}|)) \\
& \quad \times \left( \frac{\omega (t_{1}+h_{1})}{v(t_{1}+h_{1})}+\frac{\omega
(t_{2}+h_{2})}{v(t_{2}+h_{2})}\right) \frac{dt_{1}dt_{2}}{\left(
(t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}} \\
&= \mathcal{O}\left( \frac{1}{mn}\right) (v(|z_{1}|)+v(|z_{2}|)) \\
& \quad \times \left( h_{2}\int_{\pi -h_{1}}^{\pi }\frac{\omega (t_{1}+h_{1})}{
v(t_{1}+h_{1})}\frac{dt_{1}}{(t_{1}+h_{1})^{2}}+h_{1}\int_{\pi -h_{2}}^{\pi }
\frac{\omega (t_{2}+h_{2})}{v(t_{2}+h_{2})}\frac{dt_{2}}{(t_{2}+h_{2})^{2}}
\right) \\
&= \mathcal{O}\left( \frac{1}{mn}\right) (v(|z_{1}|)+v(|z_{2}|))\left(
h_{2}\int_{\pi }^{\pi +h_{1}}\frac{\omega (\theta _{1})}{v(\theta _{1})}
\frac{d\theta _{1}}{\theta _{1}^{2}}+h_{1}\int_{\pi }^{\pi +h_{2}}\frac{
\omega (\theta _{2})}{v(\theta _{2})}\frac{d\theta _{2}}{\theta _{2}^{2}}
\right) \\
&= \mathcal{O}\left( \frac{1}{\left( mn\right) ^{2}}\right)
(v(|z_{1}|)+v(|z_{2}|))\frac{\omega (\pi )}{v(\pi )}\\
& = \mathcal{O}\left( \frac{1}{mn}\right) (v(|z_{1}|)+v(|z_{2}|))\left( \frac{
\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) .
\end{split}
\end{align}
Hence, using \eqref{eqb1}--\eqref{eqb13}, we obtain
\begin{equation*}
\left\Vert J_{44}\right\Vert _{p}=\mathcal{O}\left( 1\right)
(v(|z_{1}|)+v(|z_{2}|))\left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega
(h_{2})}{v(h_{2})}\right) .
\end{equation*}
Combining partial estimates, we get
\begin{equation*}
\sup_{z_{1}\neq 0,\,\,z_{2}\neq 0}\frac{\Vert
D_{m,n}(x+z_{1},y+z_{2})-D_{m,n}(x,y)\Vert _{p}}{v(|z_{1}|)+v(|z_{2}|)}=
\mathcal{O}\left( 1\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{
\omega (h_{2})}{v(h_{2})}\right) .
\end{equation*}
Proceeding along similar lines, one can prove that
\begin{equation*}
\Vert D_{m,n}\Vert _{p}=\mathcal{O}\left( 1\right) \left( \omega
(h_{1})+\omega (h_{2})\right) .
\end{equation*}
Hence
\begin{equation*}
\Vert D_{m,n}\Vert _{p}^{(v,v)}=\mathcal{O}\left( 1\right) \left( \frac{
\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right)
\end{equation*}
which completes the proof.
\end{proof}}
Now we can finish this section by deriving some particular results. For this, we need to
specialize the functions $\omega(t)$ and $v(t)$ in our theorem. In fact, taking $\omega(t)=t^{\alpha}$ and $v(t)=t^{\beta}$, $0\leq \beta< \alpha\leq 1$, in Theorem \ref{the01} we get:
\begin{corollary}
If $f\in \text{Lip}(\alpha,\beta, p)$, $p\geq 1$, $0\leq \beta< \alpha\leq 1$, then
\begin{equation*}
\Vert \Sigma _{m,2m;n,2n}(f)-f\Vert _{p}^{(v,v)}= \mathcal{O} \left( \frac{1}{m^{\alpha -\beta}}+\frac{1}{n^{\alpha -\beta}}\right)
\end{equation*}
for all $m,n\in \mathbb{N}$.
\end{corollary}
For $p=\infty$ in the above corollary, we obtain the following.
\begin{corollary}
If $f\in \text{H}_{(\alpha,\beta)}$, $0\leq \beta< \alpha\leq 1$, then
\begin{equation*}
\Vert \Sigma _{m,2m;n,2n}(f)-f\Vert _{\alpha,\beta}= \mathcal{O} \left( \frac{1}{m^{\alpha -\beta}}+\frac{1}{n^{\alpha -\beta}}\right)
\end{equation*}
for all $m,n\in \mathbb{N}$.
\end{corollary}
\section{A generalization of Theorem \ref{the01}}
We denote by $W(L^p((-\pi,\pi)^2); \beta_1, \beta_2)$ the weighted space $L^p((-\pi,\pi)^2)$ with weight function $\left|\sin \left(\frac{x}{2}\right)\right|^{\beta_1 p}\left|\sin \left(\frac{y}{2}\right)\right|^{\beta_2 p}$, ($\beta_1,\beta_2\geq 0$), endowed with the norm
$$\|f\|_{p;\beta_1,\beta_2}:=\left(\frac{1}{(2\pi)^2}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi}\left|f(x,y)\right|^p\left|\sin \left(\frac{x}{2}\right)\right|^{\beta_1 p}\left|\sin \left(\frac{y}{2}\right)\right|^{\beta_2 p}dxdy\right)^{1/p}$$
for $1\leq p<\infty$, and
$$\|f\|_{p;\beta_1,\beta_2}:=\text{ess}\!\!\!\!\!\sup_{-\pi\leq x,y \leq \pi }\left\{\left|f(x,y)\right|\left|\sin \left(\frac{x}{2}\right)\right|^{\beta_1 }\left|\sin \left(\frac{y}{2}\right)\right|^{\beta_2 }\right\}$$
for $p=\infty$ (for $\beta_1=0,\beta_2=0$ see \cite{US}).
In the same manner, we define the space $H_{p;\beta_1,\beta_2}^{(\omega_1, \omega_2)}$ by
\begin{equation*}
H_{p;\beta_1,\beta_2}^{(\omega_1, \omega_2 )}:=\left\{f\in W(L^p((-\pi ,\pi )^2); \beta_1, \beta_2), p\geq 1:
A(f;\omega_1 , \omega_2 ;\beta_1 ,\beta_2 )<\infty \right\},
\end{equation*}
where
\begin{equation*}
A(f; \omega_1 , \omega_2 ;\beta_1 ,\beta_2 ):=\sup_{t_1\neq 0,\,\,t_2\neq 0 }\frac{\|f(x +t_1,y
+t_2)-f(x,y)\|_{p ;\beta_1 ,\beta_2 }}{\omega_1 (|t_1|)+\omega_2 (|t_2|)}
\end{equation*}
and the norm in the space $H_{p ;\beta_1 ,\beta_2 }^{(\omega_1, \omega_2)}$ is defined by
\begin{equation*}
\|f\|_{p ;\beta_1 ,\beta_2 }^{(\omega_1, \omega_2)}:=\|f\|_{p ;\beta_1 ,\beta_2 }+A(f;\omega_1, \omega_2 ;\beta_1 ,\beta_2 ).
\end{equation*}
We take $k=rm$ and $\ell =qn$ in (\ref{eq1}), to obtain
\begin{align*}
\Sigma_{m,rm;n,qn}:=&\left( 1+\frac{1}{r}\right)\left( 1+\frac{1}{q}
\right)\Sigma_{m(r+1)-1,n(q+1)-1}-\left( 1+\frac{1}{r}\right)\frac{1}{q}
\Sigma_{m(r+1)-1,n-1} \nonumber \\
& -\frac{1}{r}\left( 1+\frac{1}{q}\right)\Sigma_{m-1,n(q+1)-1}+\frac{1}{
rq}\Sigma_{m-1,n-1},
\end{align*}
where $r,q\in \{2,4,6,\dots\}$.
Hence, we call the mean $\Sigma _{m,rm;n,qn}(f;x,y)$ the {\it Double Even-Type Delayed Arithmetic Mean} for
$S_{k,\ell }(f;x,y)$; it can be represented in integral form as
\begin{equation*}
\Sigma_{m,rm;n,qn}(f;x,y)=\frac{1}{mn\pi^2}\int_{0}^{\pi}\int_{0}^{
\pi}h_{x,y}(t_1,t_2)L_{m,rm;n,qn}(t_1,t_2)dt_1dt_2,
\end{equation*}
where
$$L_{m,rm;n,qn}(t_1,t_2):=\frac{4}{rq}\frac{\sin \frac{\left(r+2\right)mt_1}{2}\sin \frac{rmt_1}{2}\sin \frac{\left(q+2\right)nt_2}{2}\sin \frac{qnt_2}{2}}{\left( 4\sin \frac{t_1}{2}\sin \frac{t_2
}{2}\right)^2}.$$
Further, we establish a more general theorem on the degree of approximation of a function $f$ belonging to $H_{p ;\beta_1 ,\beta_2 }^{(\omega, \omega)}$, with the norm $\|\cdot\|_{p ;\beta_1 ,\beta_2 }^{(v,v)}$, by the Double Even-Type Delayed Arithmetic Mean $\Sigma_{m,rm;n,qn}(f;x,y)$. It represents a twofold generalization of Theorem \ref{the01}, both in the space considered and in the mean.
\begin{theorem}
\label{the05} Let $\omega $ and $v$ be moduli of continuity so that $\frac{
\omega (t)}{v(t)}$ is non-decreasing in $t$. If $f\in H_{p ;\beta_1 ,\beta_2 }^{(\omega, \omega)}$, $p\geq 1$, then
\begin{equation*}
\Vert \Sigma_{m,rm;n,qn}(f)-f\Vert _{p ;\beta_1 ,\beta_2 }^{(v,v)}= \mathcal{O} \left( r\frac{\omega (h_{1})
}{v(h_{1})}+q\frac{\omega (h_{2})}{v(h_{2})}\right) ,
\end{equation*}
where $h_{1}=\frac{\pi }{m}$, $h_{2}=\frac{\pi }{n}$, $m,n\in \mathbb{N}$, and $r,q\in \{2,4,6,\dots\}$.
\end{theorem}
\begin{proof}
The proof proceeds along the same lines as the proof of Theorem \ref{the01}. We omit the details.
\end{proof}
\begin{remark}
For $\beta_1 =0$, $\beta_2=0$, $r=2$ and $q=2$, Theorem \ref{the05} reduces exactly to Theorem \ref{the01}.
\end{remark}
\section{Conclusions}
In one dimension, the degree of approximation of a function in the space $H_{p}^{(\omega)}$ has been studied by several authors; see \cite{DGR,DNR,D,UD2,KIM1,XhK,L,NDR,SS} and some other references already mentioned here. Inspired by these papers, we have given two corresponding results using the second (even) type double delayed arithmetic means of the Fourier series of a function from the space $H_{p}^{(\omega , \omega)}$ ($H_{p ;\beta_1 ,\beta_2 }^{(\omega, \omega)}$).
\label{lastpage}
\end{document}
\begin{document}
\footnote{{\it 2000 Mathematics Subject Classification.}
Primary:32F45.
{\it Key words and phrases.} Lempert function, Arakelian's
theorem.}
\begin{abstract}
We prove that the multipole Lempert function is monotone
under inclusion of
pole sets.
\end{abstract}
\maketitle
Let $D$ be a domain in $\mathbb C^n$ and let $A=(a_j)_{j=1}^l$,
$1\le
l\le\infty$, be a countable (i.e. $l=\infty$) or a
non-empty
finite subset of $D$ (i.e. $l\in\mathbb N$). Moreover, fix a
function
$\boldsymbol p:D\longrightarrow\mathbb R_+$ with
$$
|\bs{p}|:=\{a\in D:\boldsymbol p(a)>0\}=A.
$$
$\boldsymbol p$ is called a {\it pole function for $A$} on $D$ and
$|\bs{p}|$
its {\it pole set}. In case that $B\subset A$ is a
non-empty
subset we put $\boldsymbol p_B:=\boldsymbol p$ on $B$ and $\boldsymbol p_B:=0$ on
$D\setminus B$. $\boldsymbol p_B$ is a pole function for $B$.
For $z\in D$ we set
$$
l_D(\boldsymbol p,z)=\inf\{\prod_{j=1}^l|\lambda_j|^{\boldsymbol p(a_j)}\},
$$
where the infimum is taken over all subsets
$(\lambda_j)_{j=1}^l$
of $\mathbb D$ (in this paper, $\mathbb D$ is the open unit disc in
$\mathbb C$) for
which there is an analytic disc $\varphi\in\mathcal O(\mathbb D,D)$ with
$\varphi(0)=z$ and $\varphi(\lambda_j)=a_j$ for all $j$. Here we
call
$l_D(\boldsymbol p,\cdot)$ {\it the Lempert function with $\boldsymbol
p$-weighted
poles at $A$} \cite{Wik1,Wik2}; see also \cite{jarpfl},
where this
function is called {\it the Coman function for $\boldsymbol p$}.
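For orientation (this is not needed for the proof below): in the unit disc with a single pole $a$ of weight $1$, the Schwarz--Pick lemma shows that the infimum is attained by a M\"obius disc, so $l_{\mathbb D}(\chi_{\{a\}},z)$ equals the pseudo-hyperbolic distance $|z-a|/|1-\bar a z|$. A short Python sketch (the helper names are ours) exhibits such an optimal analytic disc:

```python
def mobius(z, lam):
    """Disc automorphism phi_z(lam) = (z - lam)/(1 - conj(z) lam); phi_z(0) = z."""
    return (z - lam) / (1 - z.conjugate() * lam)

def lempert_disc(a, z):
    """Single-pole Lempert function of the unit disc:
    the pseudo-hyperbolic distance |z - a| / |1 - conj(a) z|."""
    return abs(z - a) / abs(1 - a.conjugate() * z)

a, z = 0.3 + 0.4j, -0.2 + 0.1j             # pole and base point in the disc
lam = (z - a) / (1 - a * z.conjugate())    # parameter with phi_z(lam) = a
assert abs(mobius(z, 0) - z) < 1e-12       # the disc passes through z at 0 ...
assert abs(mobius(z, lam) - a) < 1e-12     # ... and through the pole a at lam
assert abs(abs(lam) - lempert_disc(a, z)) < 1e-12
```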
Recently, F.~Wikstr\"om \cite{Wik1} proved that if $A$ and
$B$ are finite
subsets of a convex domain $D\subset\mathbb C^n$ with
$\varnothing\neq B\subset A$
and if $\boldsymbol p$ is a pole function for $A$, then $l_D(\boldsymbol
p,\cdot)\le l_D(\boldsymbol
p_B,\cdot)$ on $D$.
On the other hand, in \cite{Wik2} F.~Wikstr\"om gave an
example of
a complex space for which this inequality fails to hold and
he
asked whether it remains true for an arbitrary domain
in
$\mathbb C^n$.
The main purpose of this note is to present a positive
answer to that question, even
for countable pole sets (in particular, it follows that the
infimum in the
definition of the Lempert function is always taken over a
non-empty set).
\begin{theorem} For any domain $D\subset\mathbb C^n$, any
countable or non-empty finite subset $A$
of $D$, and any pole function $\boldsymbol p$ for $A$ we have
$$
l_D(\boldsymbol p,\cdot)=\inf\{l_D(\boldsymbol p_B,\cdot):\varnothing\neq B
\text{ a
finite subset of } A\}.
$$
Therefore, $$ l_D(\boldsymbol p,\cdot)=\inf\{l_D(\boldsymbol
p_B,\cdot):\varnothing\neq
B\subset A\}. $$
\end{theorem}
\vskip 0.5cm
The proof of this result will be based on the following
\
\begin{theorem*}[Arakelian's Theorem \cite{Ara}] Let
$E\subset\Omega\subset\mathbb C$ be a relatively
closed subset of the domain $\Omega$. Assume that
$\Omega^*\setminus E$ is connected and
locally connected. (Here $\Omega^*$ denotes the one--point
compactification of $\Omega$.)
If $f$ is a complex-valued continuous function on $E$ that
is
holomorphic in the interior of $E$ and if $\varepsilon>0$,
then there is
a $g\in\mathcal O(\Omega)$ with $|g(z)-f(z)|<\varepsilon$ for any
$z\in E$.
\end{theorem*}
\
\begin{proof} Fix a point $z\in D$. First, we shall verify
the inequality
$$
l_D(\boldsymbol p,z)\le\inf\{l_D(\boldsymbol p_B,z):\varnothing\neq B \text{
a finite subset of } A\}.\leqno{(1)}
$$
Take a non-empty proper finite subset $B$ of $A$. Without
loss of
generality we may assume that $B=A_m:=(a_j)_{j=1}^m$ for a
certain
$m\in\mathbb N$, where $A=(a_j)_{j=1}^l$, $m<l\le\infty$.
Now, let $\varphi:\Bbb D\to D$ be an analytic disc with
$\varphi(\lambda_j)=a_j,$ $1\le j\le m,$ where $\lambda_0:=0$ and
$a_0:=z.$ Fix $t\in[\max_{0\le j\le m}|\lambda_j|,1)$ and put
$\lambda_j=1-\frac{1-t}{j^2}$,\; $j\in A(m)$, where
$A(m):=\{m+1,\dots,l\}$ if $l<\infty$, respectively
$A(m):=\{j\in\mathbb N:j>m\}$ if $l=\infty$. Consider a continuous curve
$\varphi_1:[t,1)\to D$ such that $\varphi_1(t)=\varphi(t)$ and
$\varphi_1(\lambda_j)=a_j$, $j\in A(m)$. Define
$$
f=\begin{cases}
\varphi|_{\overline{t\Bbb D}}\\
\varphi_1|_{[t,1)}
\end{cases}
$$
on the set $F_t=\overline{t\Bbb D}\cup[t,1)\subset\mathbb D$.
Observe that
$F_t$ satisfies the geometric condition in Arakelian's
Theorem.
Since $(\lambda_j)_{j=0}^l$ satisfy the Blaschke condition,
for any $k$ we
may find a Blaschke product $B_k$ with zero set
$(\lambda_j)_{j=0,j\neq
k}^l$. Moreover, we denote by $d$ the function
$\operatorname{dist}(\partial D,f)$ on
$F_t$, where the distance arises from the $l^\infty$--norm.
Let $\eta_1$,
$\eta_2$ be continuous real-valued functions on $F_t$ with
\begin{gather*}
\eta_1,\eta_2\le\log\frac{d}{9},\qquad
\eta_1,\eta_2=\min_{\overline{t\mathbb D}}
\log\frac{d}{9}\hbox{ on }\overline{t\mathbb D},\\
\text{ and }\quad
\eta_1(\lambda_j)=\eta_2(\lambda_j)+\log(2^{-j-1}|B_j(\lambda_j)|),\quad
j\in A(m).
\end{gather*}
Applying three times
Arakelian's theorem, we may find functions $\zeta_1,
\zeta_2\in\mathcal O(\mathbb D)$ and a holomorphic mapping $h$ on $\mathbb D$
such that
$$
|\zeta_1-\eta_1|\le 1,\;|\zeta_2-\eta_2|\le 1,\hbox{ and
}|h-f|\le
\varepsilon|e^{\zeta_1-1}|\le \varepsilon e^{\eta_1}\text{
on } F_t,
$$
where $\varepsilon:=
\min\{\tfrac{|B_j(\lambda_j)|}{2^{j+1}}:j=0,\dots,m\}<1$
(in the last case
apply Arakelian's theorem componentwise to the
mapping $e^{1-\zeta_1}f$).
In particular, we have
\begin{gather*}
|h-f|\le\frac{d}{9}\quad \text{ on } F_t,\\
|\gamma_j|\leq
e^{\eta_1(\lambda_j)}2^{-j-1}|B_j(\lambda_j)|
=e^{\eta_2(\lambda_j)}2^{-j-1}|B_j(\lambda_j)|,\quad
j=0,\dots,m,\\
|\gamma_j|\le e^{\eta_1(\lambda_j)}=
e^{\eta_2(\lambda_j)}2^{-j-1}|B_j(\lambda_j)|
,\;j\in A(m),
\end{gather*}
where $\gamma_j:=h(\lambda_j)-f(\lambda_j)$, $j\in\mathbb Z_+$ if
$l=\infty$, respectively $0\leq j\leq l$ if $l\in\mathbb N$.
Then, in virtue of
$e^{\eta_2(\lambda_j)}
\le e^{1+\operatorname{Re} \zeta_2(\lambda_j)}$, the function
$$
g:=e^{\zeta_2}\sum_{j=0}^l\frac{B_j
}{e^{\zeta_2(\lambda_j)}B_j(\lambda_j)}\gamma_j
$$
is
holomorphic on $\mathbb D$ with $g(\lambda_j)=\gamma_j$ and $$
|g|\le e^{\operatorname{Re}\zeta_2+1}\le
e^{\eta_2+2}\le\frac{e^2}{9}d\quad
\text{ on } F_t. $$ For $q_t:=h-g$ it follows that
$q_t(\lambda_j)=f(\lambda_j)$ and $$
|q_t-f|\le\frac{e^2+1}{9}d<d\quad\text{ on }F_t. $$
Thus we have found a holomorphic
mapping $q_t$ on $\mathbb D$ with $q_t(\lambda_j)=a_j$ and
$q_t(F_t)\subset D$.
Hence there is a simply connected domain
$E_t$ such that $F_t\subset E_t\subset\mathbb D$ and
$q_t(E_t)\subset D$.
Let $\rho_t:\mathbb D\to E_t$ be the Riemann mapping with
$\rho_t(0)=0,$ $\rho_t'(0)>0$ and
$\rho_t(\lambda_j^t)=\lambda_j$. Considering the analytic
disc
$q_t\circ\rho_t:\mathbb D\to D$, we get that
$$
l_D(\boldsymbol p,z)\le\prod_{j=1}^l|\lambda_j^t|^{\boldsymbol
p(a_j)}\leq\prod_{j=1}^m|\lambda_j^t|^{\boldsymbol p(a_j)}.
$$
Note that by the Carath\'eodory kernel theorem, $\rho_t$
tends, locally uniformly, to the
identity map of $\mathbb D$ as $t\to 1$. This shows that the last
product converges to
$\prod_{j=1}^m|\lambda_j|^{\boldsymbol p(a_j)}$. Since $\varphi$ was
an arbitrary
competitor for $l_D(\boldsymbol p_{A_m},z)$, the inequality (1)
follows.
On the other hand, the existence of an analytic disc whose
graph contains
$A$ and $z$ implies that
$$
l_D(\boldsymbol p,z)\ge\limsup_{m\to\infty}l_D(\boldsymbol p_{A_m},z),
$$
which completes the proof.
\end{proof}
\begin{remark}{\rm The above proof in fact establishes an
approximation and simultaneous interpolation result: the
constructed function $q_t$ both approximates and interpolates the given
function $f$.
We first mention that the proof of Theorem 1 could be
simplified
using a non-trivial result on interpolation sequences due
to
L.~Carleson (see, for example, Chapter 7, Theorem 3.1 in
\cite{and}).
Moreover, it is possible to prove the following general
result
which extends an approximation and simultaneous
interpolation
result by P.~M.~Gauthier, W.~Hengartner, and
A.~A.~Nersesyan (see \cite{gauhen}, \cite{ner}):
Let $D\subset\mathbb C$ be a domain, $E\subset D$ a relatively
closed
subset satisfying the condition in Arakelian's theorem,
$\Lambda\subset E$ such that $\Lambda$
has no accumulation point in $D$ and
$\Lambda\cap\operatorname{int} E$ is a finite set. Then for given
functions $f,h\in\mathcal C(E)\cap\mathcal O(\operatorname{int} E)$ there
exists a
$g\in\mathcal O(D)$ such that
$$
|g-f|<e^{\operatorname{Re} h} \text { on } E,\quad
\text{ and } \quad
f(\lambda)=g(\lambda),\;\lambda\in\Lambda.
$$
It is even possible to prescribe a finite number of the
derivatives for $g$ at all the points in $\Lambda$. }
\end{remark}
\
As a byproduct we get the following result.
\begin{corollary} Let $D\subset\mathbb C^n$ be a domain and let
$\boldsymbol p, \boldsymbol q:D\to\mathbb R_+$ be two
pole functions on $D$ with $\boldsymbol p\leq\boldsymbol q$, $|\bs{q}|$ at most
countable. Then
$l_D(\boldsymbol q,z)\leq l_D(\boldsymbol p,z)$, $z\in D$.
\end{corollary}
Hence, the Lempert function is monotone with respect to
pole functions with an
at most countable support.
\begin{remark}{\rm The Lempert function is, in general, non
strictly monotone under
inclusion of pole sets. In fact, take $D:=\mathbb D\times\mathbb D$,
$A:=\{a_1,a_2\}\subset\mathbb D$, where $a_1\neq a_2$, and
observe that
$l_D(\boldsymbol p,(0,0))=|a_1|$, where $\boldsymbol
p:=\chi|_{A\times\{a_1\}}$
(use the product property in \cite{dietra}, \cite{jarpfl},
\cite{nikzwo}).
}\end{remark}
\begin{remark}{\rm Let $D\subset\mathbb C^n$ be a domain, $z\in
D$, and let now $\boldsymbol
p:D\longrightarrow\mathbb R_+$ be a ``general'' pole function, i.e. $|\bs{p}|$ is
uncountable. Then there are two cases:
1) There is a $\varphi\in\mathcal O(\mathbb D,D)$ with
$\varphi(\lambda_{\varphi,a})=a$, $\lambda_{\varphi,a}\in\mathbb D$,
for all $a\in|\bs{p}|$ and $\varphi(0)=z$. Defining
\begin{multline*}
l_D(\boldsymbol
p,z):=\inf\{\prod|\lambda_{\psi,a}|^{\boldsymbol
p(a)}:\psi\in\mathcal O(\mathbb D,D) \text{ with
}\\ \psi(\lambda_{\psi,a})=a \text{ for all } a\in|\bs{p}|,
\psi(0)=z\}
\end{multline*}
then $l_D(\boldsymbol p,z)=0$.
Observe that $l_D(\boldsymbol p,z)=\inf\{l_D(\boldsymbol p_B,z):\varnothing
\neq B\text{ a finite
subset of } A\}$.
2) There is no analytic disc as in 1). In that case we may
define
$^{1)}$\footnote{1) Compare the definition of the Coman
function (for the second
case) in \cite{jarpfl}.}
$$
l_D(\boldsymbol p,z):=\inf\{l_D(\boldsymbol p_B,z):\varnothing\neq B\text {
a finite subset of
} A\}.
$$
Example \ref{ex} below may show that the definition in 2)
is more sensitive than
the one used in \cite{jarpfl}.
}
\end{remark}
\vskip 0.5cm
Before giving the example we use the above definition of
$l_D(\boldsymbol p,\cdot)$
for an arbitrary pole function $\boldsymbol p$ to conclude.
\begin{corollary} Let $D\subset\mathbb C^n$ be a domain and let
$\boldsymbol p, \boldsymbol q :D\longrightarrow\mathbb R_+$
be arbitrary pole functions with $\boldsymbol p\leq\boldsymbol q$. Then
$l_D(\boldsymbol q,\cdot)\leq
l_D(\boldsymbol p,\cdot)$.
\end{corollary}
\begin{example}\label{ex} {\rm Put $D:=\mathbb D\times\mathbb D$ and
let $A\subset\mathbb D$ be uncountable, e.g. $A=\mathbb D$.
Then there is no $\varphi\in\mathcal O(\mathbb D,D)$ passing through
$A\times\{0\}$ and
$(0,w)$, $w\in\mathbb D_*$. Put $\boldsymbol p:=\chi|_{A\times\{0\}}$ on
$D$ as a pole function.
Let $B\subset A$ be a non--empty finite subset. Then
applying the
product property (see \cite{dietra}, \cite{jarpfl},
\cite{nikzwo}), we get
$$
l_D(\boldsymbol
p_B,(0,w))=g_D(B\times\{0\},(0,w))=\max\{g_{\mathbb D}(B,0),g_{\mathbb D}(0,w)\},
$$
where $g_{\mathbb D}(A,\cdot)$ denotes the Green function in
$\mathbb D$ with respect to
the pole set $A$.
Therefore, $l_D(\boldsymbol p,(0,w))=|w|$ $^{1)}$.}
\end{example}
To conclude this note we mention that the Lempert function
is not
holomorphically contractible, even if the holomorphic map
is a proper
covering.
\begin{example} {\rm Let $\pi:\mathbb D_*\longrightarrow\mathbb D_*$,
$\pi(z):=z^2$. Obviously, $\pi$ is
proper and a covering. Fix two different points $a_1,
a_2\in\mathbb D_*$ with $a_1^2=a_2^2=:c$. For a point
$z\in\mathbb D_*$ we know that
$$
l_{\mathbb D_*}(\chi|_{\{c\}},z^2)=\min\{l_{\mathbb D_*}(\chi|_{\{a_1\}},z),l_{\mathbb D_*}(\chi|_{\{a_2\}},z)\}
\geq l_{\mathbb D_*}(\chi|_A,z),
$$
where $A:=\{a_j:j=1,2\}$ and $\chi|_B$ is the
characteristic function for the set $B\subset D_*$.
Assume that $l_{\mathbb D_*}(\chi|_{\{a_1\}},z)\leq
l_{\mathbb D_*}(\chi|_{\{a_2\}},z)$.
Then this left-hand side is nothing but the classical Lempert
function for the pair $(a_1,z)$.
Recalling how to calculate it via a covering map one easily
concludes that $l_{\mathbb D_*}(\chi|_A,z)<
l_{\mathbb D_*}(\chi|_{\{a_1\}},z)$. Hence $l_{\mathbb D_*}(\boldsymbol p
,\pi(z))>l_{\mathbb D_*}(\boldsymbol p\circ\pi,z)$, where $\boldsymbol
p:=\chi|_{\{c\}}$.
Therefore, the Lempert function with a multipole behaves
worse than the Green
function with a multipole.}
\end{example}
{\bf Acknowledgments.} This note was written during the
stay of
the first named author at the University of Oldenburg
supported by a
grant from the DFG (September--October 2004). He would like to
thank both
institutions for their support.
We thank the referee for pointing out an error in the
first version of this paper.
\end{document}
\begin{document}
\title{Reconciling Semiclassical and Bohmian Mechanics: \\
II. Scattering states for discontinuous potentials}
\author{Corey Trahan and Bill Poirier}
\affiliation{Department of Chemistry and Biochemistry, and
Department of Physics, \\
Texas Tech University, Box 41061,
Lubbock, Texas 79409-1061}
\email{[email protected]}
\begin{abstract}
In a previous paper [J. Chem. Phys. {\bf 121} 4501 (2004)]
a unique bipolar decomposition, $\Psi = \Psi_1 + \Psi_2$ was
presented for stationary bound states $\Psi$ of
the one-dimensional Schr\"odinger\ equation, such that the components
$\Psi_1$ and $\Psi_2$ approach their semiclassical WKB analogs in
the large action limit. Moreover, by applying the Madelung-Bohm
ansatz to the components rather than to $\Psi$ itself, the resultant
bipolar Bohmian mechanical formulation satisfies the correspondence
principle. As a result, the bipolar quantum trajectories are
classical-like and well-behaved, even when $\Psi$ has many nodes, or
is wildly oscillatory. In this paper, the previous decomposition
scheme is modified in order to achieve the same desirable properties
for stationary scattering states. Discontinuous potential systems
are considered (hard wall, step, square barrier/well), for which the
bipolar quantum potential is found to be {\em zero} everywhere,
except at the discontinuities. This approach leads to an exact
numerical method for computing stationary scattering states of any
desired boundary conditions, and reflection and transmission
probabilities. The continuous potential case will be considered in a
future publication.
\end{abstract}
\section{INTRODUCTION}
\label{intro}
Much attention has been directed by theoretical/computational
chemists towards developing reliable and accurate means for solving
dynamical quantum mechanics problems---i.e., for obtaining solutions
to the time-dependent Schr\"odinger\ equation---for molecular systems.
Insofar as ``exact'' quantum methods are concerned, two traditional
approaches have been used: (1) representation of the system
Hamiltonian in a finite, direct-product basis set; (2)
discretization of the wavefunction onto a rectilinear grid of
lattice points over the relevant region of configuration
space. Both approaches, however, suffer from the
drawback that the computational effort scales exponentially with
system dimensionality.\cite{bowman86,bacic89} Recently, a number of
promising new methods have emerged with the potential to alleviate
the exponential scaling problem once and for all. These include
various basis set optimization
methods,\cite{poirier99qcII,poirier00gssI,yu02b,wangx03b}
and build-and-prune methods,\cite{dawes04} such as those based on
wavelet techniques.\cite{poirier03weylI,poirier04weylII,poirier04weylIII}
On the other hand, a completely different approach to the
exponential scaling problem is to use basis sets or grid points,
that themselves evolve over time. The idea is that at any given
point in time, one need sample a much smaller Hilbert subspace, or
configuration space region, than would be required at all
times---thus substantially reducing the size of the calculation. For
basis set calculations, much progress along these lines has been
achieved by the multi-configurational time-dependent Hartree (MCTDH)
method, developed by Meyer, Manthe and co-workers.\cite{meyer90,manthe92}
More recently, time-evolving grid, or ``quantum trajectory''
methods\cite{lopreore99,mayor99,wyatt99,wyatt01b,wyatt01c,wyatt} (QTMs)
have also been developed, and for certain types of systems,
successfully applied at quite high dimensionalities.\cite{wyatt01c,wyatt}
QTMs are based on the hydrodynamical picture of quantum mechanics,
developed over half a century ago by Bohm\cite{bohm52a,bohm52b} and
Takabayasi,\cite{takabayasi54} who built on the earlier work of
Madelung\cite{madelung26} and van Vleck.\cite{vanvleck28} QTMs are
inherently appealing for a number of reasons. First, they offer an
intuitive, classical-like understanding of the underlying dynamics,
which is difficult-to-impossible to extract from more traditional
fixed grid/basis methods. In effect, quantum trajectories are like
ordinary classical trajectories, except that they evolve under a
modified potential $V+Q$, where $Q$ is the wavefunction-dependent
``quantum potential'' correction. Second, QTMs hold the promise of
delivering exact quantum mechanical results without exponential
scaling in computational effort. Third, they provide a pedagogical
understanding of entirely quantum mechanical effects such as
tunneling\cite{lopreore99,wyatt} and
interference.\cite{poirier04bohmI,zhao03} They have already been
used to solve a variety of different types of problems, including
barrier transmission,\cite{lopreore99} non-adiabatic
dynamics,\cite{wyatt01} and mode relaxation.\cite{bittner02b}
Several intriguing phase space generalizations have also
emerged,\cite{takabayasi54,shalashilin00,burghardt01a,burghardt01b}
of particular relevance for dissipative
systems.\cite{trahan03b,donoso02,bittner02a,hughes04}
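In one dimension the quantum potential mentioned above takes the standard form $Q=-(\hbar^2/2m)\,R''/R$, where $R=|\Psi|$ is the amplitude. The following Python sketch (the units and the Gaussian amplitude are illustrative choices of ours, not taken from the works cited) evaluates $Q$ by finite differences and checks it against the closed form for that Gaussian:

```python
import math

HBAR = MASS = 1.0  # illustrative units (our choice)

def R(x):
    """Sample amplitude: a Gaussian wavepacket envelope (our choice)."""
    return math.exp(-x * x / 2)

def quantum_potential(x, h=1e-4):
    """Bohm quantum potential Q = -(hbar^2/2m) R''(x)/R(x),
    with R'' approximated by a central finite difference."""
    d2R = (R(x + h) - 2 * R(x) + R(x - h)) / (h * h)
    return -(HBAR ** 2 / (2 * MASS)) * d2R / R(x)

# For this Gaussian, Q has the closed form (1 - x^2)/2.
for x in [-1.5, 0.0, 0.7, 2.0]:
    assert abs(quantum_potential(x) - (1 - x * x) / 2) < 1e-6
```

Note that $Q$ depends only on the shape of $R$, not its normalization, which is why nodes of the amplitude (where $R\to 0$) are the problematic regions discussed below.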
Despite this success, QTMs suffer from a significant numerical
drawback, which, to date, precludes a completely robust application
of these methods. Namely: QTMs are numerically unstable in the
vicinity of amplitude nodes. This ``node problem'' manifests in
several different ways:\cite{wyatt01b,wyatt} (1) infinite forces,
giving rise to kinky, erratic trajectories; (2)
compression/inflation of trajectories near wavefunction local
extrema/nodes, leading to (3) insufficient sampling for accurate
derivative evaluations. Nodes are usefully divided into two
categories,\cite{poirier04bohmI} depending on whether $Q$ is formally
well-behaved (``type one'' nodes) or singular (``type two'' nodes).
For stationary state solutions to the Schr\"odinger\ equation, for instance,
all nodes are type one nodes. In principle, type one nodes are
``gentler'' than type two nodes; however, from a numerical
standpoint, even type one nodes will give rise to the problems
listed above, because the slightest numerical error in the
evaluation of $Q$ is sufficient to cause instability.
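This amplification can be exhibited in a minimal numerical sketch (our own illustration, with $\hbar = m = 1$ and the amplitude $R(x) = \sin(kx)$, which has type one nodes): although $Q = -R''/2R$ is formally constant for this $R$, a $10^{-6}$ ripple added to $R$ destabilizes $Q$ near the node, where the $1/R$ factor is large.

```python
import numpy as np

# Minimal sketch (hbar = m = 1): the quantum potential Q = -R''/(2R)
# for the amplitude R(x) = sin(k x), which has (type one) nodes at the
# ends of the interval. The grid deliberately stops short of the nodes.
k = 2.0
x = np.linspace(0.01, np.pi / k - 0.01, 500)
dx = x[1] - x[0]
R = np.sin(k * x)

Q = -0.5 * np.gradient(np.gradient(R, dx), dx) / R
# For this R, Q = k^2/2 exactly -- formally well-behaved at the nodes.

# A tiny error in R (here a deterministic 1e-6 ripple) is amplified by
# the 1/R factor near the nodes:
R_noisy = R + 1e-6 * np.sin(500.0 * x)
Q_noisy = -0.5 * np.gradient(np.gradient(R_noisy, dx), dx) / R_noisy
print(np.max(np.abs(Q[2:-2] - k**2 / 2)),
      np.max(np.abs(Q_noisy[2:-2] - k**2 / 2)))
```

The clean $Q$ deviates from $k^2/2$ only at the level of the finite-difference error, while the perturbed $Q$ deviates by order unity near the nodes, despite the $10^{-6}$ size of the perturbation.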
In the best case, the node problem simply results in substantially
more trajectories and time steps than the corresponding classical
calculation; in the worst case, the QTM calculation may fail
altogether, beyond a certain point in time. Several numerical
methods, both ``exact'' and approximate, are currently being developed
to deal with this important problem. The latter category includes
the artificial viscosity\cite{kendrick03,pauler04} and linearized
quantum force methods,\cite{garashchuk04} both of which have
proven to be very stable.
While such approximate methods may not capture the hydrodynamic
fields with complete accuracy in nodal regions, they do allow for
continued evolution and long-time solutions, often unattainable via
use of a traditional QTM. The ``exact'' methods include the adaptive
hybrid methods,\cite{hughes03}
and the complex amplitude method.\cite{garashchuk04b}
In the adaptive hybrid methods,
hydrodynamic trajectories are evolved everywhere except in nodal
regions, where the time-dependent Schr\"odinger\ equation is solved
instead, to avoid node problems. Although they have been applied
successfully for some
problems, these methods are difficult to implement numerically,
since not only must the hydrodynamic fields be somehow
monitored for forming singularities, but there must also be an
accurate means for interfacing and coupling the two completely
different equations of motion. The complex amplitude method is
cleaner to implement, but is only exact for linear and quadratic
Hamiltonians.
In a recent paper,\cite{poirier04bohmI} hereinafter referred to as
``paper~I,'' one of the authors (Poirier) introduced a new strategy
for dealing with the node problem, based on a bipolar decomposition
of the wavefunction. The idea is to partition the wavefunction into
two (or in principle, more) component functions, i.e. $\Psi = \Psi_1
+ \Psi_2$. One then applies QTM propagation separately to $\Psi_1$
and $\Psi_2$, which can be linearly superposed to generate $\Psi$
itself at any desired later time. In essence, this works because the
Schr\"odinger\ equation itself is linear, but the equivalent Bohmian
mechanical, or quantum Hamilton's equations of motion (QHEM), are
not.\cite{poirier04bohmI} In principle, therefore, one may improve the
numerical performance of QTM calculations simply by judiciously
dividing up the initial wavepacket into pieces.
Although bipolar decompositions have been around for quite some
time,\cite{floyd94,brown02} their use as a tool for circumventing
the node problem for QTM calculations is quite recent. Two promising
new exact methods that seek to accomplish this are the so-called
``counter-propagating wave'' method (CPWM),\cite{poirier04bohmI}
and the ``covering function'' method (CFM).\cite{babyuk04}
In the CPWM,
the bipolar decomposition is chosen to correspond to the
semiclassical WKB approximation,\cite{poirier04bohmI} for which all of the
hydrodynamic field functions are smooth and classical-like, and the
component wavefunctions are node-free. Interference is achieved
naturally, via the superposition of left- and right-traveling (i.e.
positive- and negative-momentum) waves. For one-dimensional (1D)
stationary bound states, it can be shown that the resultant bipolar
quantum potential $q(x)$ becomes arbitrarily small in the large
action limit, even though the number of nodes becomes arbitrarily
large. (Note: in accord with the convention established in
\Ref{poirier04bohmI}, upper/lower case will be used to denote the
unipolar/bipolar field quantities). In the CFM, the idea is to
superpose some well-behaved large-amplitude wave, with the actual
ill-behaved (nodal or wildly oscillatory) wave, so as to ``dilute''
the undesirable numerical ramifications of the latter.
This paper is the second in a series designed to explore the CPWM
approach, introduced in paper~I. As discussed there in greater
detail, there are many motivations for this approach, but the
primary one is to reconcile the semiclassical and Bohmian theories,
in a manner that preserves the best features of both, and also
satisfies the correspondence principle. For our purposes, this means
that the Lagrangian manifolds (LMs) for the two theories should become
identical in the large action limit (Sec.~\ref{scattering}).
As described above, a key benefit of the CPWM decomposition is an
elegant treatment of interference, the chief source of nodes and
``quasi-nodes''\cite{wyatt} (i.e. rapid oscillations) in quantum
mechanical systems. An interesting perspective on the role of
interference in semiclassical and Bohmian contexts is to be found in
a recent article by Zhao and Makri.\cite{zhao03}
Whereas paper~I focused on stationary bound states for 1D systems,
the present paper (paper~II) and the next in the series
(paper~III)\cite{poirier05bohmIII} concern themselves with stationary
scattering states. The CPWM decomposition of paper~I is uniquely
specified for any arbitrary 1D state---bound or scattering---and in
the bound case, always satisfies the correspondence principle.
However, the non-$L^2$ nature of the scattering states is such that
the paper~I decomposition generally does {\em not} satisfy the
correspondence principle in this case. Simply put, the quantum
trajectories and LMs exhibit oscillatory behavior in at least one
asymptotic region (thereby manifesting reflection), whereas the
semiclassical LMs do not. This is not a limitation of the CPWM, but
is rather due to the fundamental failure of the basic WKB approximation
to predict any reflection whatsoever for above-barrier energies, as has
been previously well established.\cite{berry72,froman,heading} In
semiclassical theory, a modification must therefore be made to the
basic WKB approximation, in order to obtain meaningful scattering
quantities. As discussed in Sec.~\ref{scattering} and in paper~III,
our approach will be to apply a similar modification to the exact quantum
decomposition (actually, a {\em reverse} modification) such that the
correspondence principle remains satisfied, and the two theories
thus reconciled, even for scattering systems.
It will be shown that the modified CPWM gives rise to bipolar Bohmian LMs
that are {\em identical} to the semiclassical LMs, regardless of
whether or not the action is large. Put another way, this means
that the bipolar quantum potentials $q$ effectively {\em vanish},
so that the resultant quantum trajectory evolution is {\em completely
classical}. Moreover, the resultant component wavefunctions,
$\Psi_1(x)$ and $\Psi_2(x)$, correspond asymptotically to the
familiar ``incident,'' ``transmitted,'' and ``reflected'' waves of
traditional scattering theory. Thus, the modified CPWM implementation
of the bipolar Bohmian approach provides a natural generalization of
these conceptually fundamental entities {\em throughout all of
configuration space}, not just in the asymptotic regions, as is the
case in conventional quantum scattering theory.
The above conclusions will be demonstrated for both discontinuous and
continuous potential systems, in papers~II and~III, respectively.
Discontinuous potentials---e.g. the hard wall, the step potential,
and the square barrier/well---serve as a useful benchmark for the
modified CPWM approach, because the scattering component waves
(e.g. ``incident wave,'' etc.) in this case {\em are} well-defined
throughout all of configuration space, according to a conventional
scattering treatment. Although this is no longer true for continuous
potentials, the foundation laid here in paper~II can be extended to
the continuous (and also time-dependent) case as well, as described
in paper~III. Additional motivation for the development of a
scattering version of the CPWM, vis-\`a-vis the relevance for
chemical physics applications, is provided in paper~III. Additional
motivation for the consideration of discontinuous potentials is
provided in Sec.~\ref{scattering} of the present paper.
\section{THEORY}
\label{theory}
\subsection{Background}
\label{background}
\subsubsection{Bohmian mechanics}
\label{Bohmian}
According to the Bohmian formulation,\cite{wyatt,holland} the QHEM
are derived via substitution of the 1D (unipolar) wavefunction ansatz,
\begin{equation}
\Psi(x,t) = R(x,t) e^{i S(x,t)/\hbar}
\end{equation}
into the time-dependent Schr\"odinger\ equation. For the 1D Hamiltonian,
\begin{equation}
\hat H = -{\hbar^2 \over 2 m} {\partial^2 \over \partial x^2} + V(x),
\label{hameqn}
\end{equation}
this results in the coupled pair of nonlinear partial differential equations,
\ea{ \frac{\partial S(x,t)}{\partial t} & = & -\frac{S'^{2}}{2m} - V(x)
+ \frac{\hbar^{2}}{2m}\frac{R''}{R}, \nonumber \\
\frac{\partial R(x,t)}{\partial t} & = &
-\frac{1}{m}R'\,S'-\frac{1}{2m}R\,S'',}
where $m$ is the mass, $V(x)$ is the system potential, and primes denote
spatial partial differentiation.
The first of the two equations above is the quantum Hamilton-Jacobi
equation (QHJE), whose last term is equal to
$-Q(x,t)$, i.e. comprises the quantum potential correction.
The second equation is a continuity equation.
When combined with the quantum trajectory evolution equations,
i.e.
\ea{
P & = & m {dx \over dt} = S', \nonumber \\
{d P \over dt} & = & -(V'+Q'), }
the continuity equation ensures that the probability [i.e. density,
$R(x,t)^2$, times volume element] carried by individual quantum
trajectories is conserved over the course of their time evolution.
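This conservation property can be checked directly in a small sketch (our own illustration, with $\hbar = m = 1$), using the well-known analytic Bohmian trajectories of a free Gaussian wavepacket initially at rest, $x(t) = x(0)\,\sigma(t)/\sigma_0$: the probability between any two trajectories remains constant as the packet spreads.

```python
import numpy as np
from math import erf, sqrt

# Sketch (hbar = m = 1): for a free Gaussian wavepacket initially at
# rest, Psi(x,0) ~ exp(-x^2/4 sigma0^2), the Bohmian trajectories are
# known analytically: x(t) = x(0) * sigma(t)/sigma0, with
# sigma(t) = sigma0 * sqrt(1 + (t/(2 sigma0^2))^2).
sigma0 = 1.0

def sigma(t):
    return sigma0 * np.sqrt(1.0 + (t / (2.0 * sigma0**2))**2)

def traj(x0, t):
    return x0 * sigma(t) / sigma0

def prob_between(a, b, t):
    # probability in [a, b] for a Gaussian density of std. dev. sigma(t)
    s = sigma(t)
    return 0.5 * (erf(b / (s * sqrt(2))) - erf(a / (s * sqrt(2))))

x1, x2 = 0.3, 1.1   # labels of two trajectories at t = 0
p0 = prob_between(x1, x2, 0.0)
p5 = prob_between(traj(x1, 5.0), traj(x2, 5.0), 5.0)
print(p0, p5)       # equal: the probability rides along with the trajectories
```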
\subsubsection{CPWM decomposition for stationary states}
\label{bipolar}
In paper I, we derived a unique bipolar decomposition,
\begin{equation}
\Psi(x) = \Psi_+(x) + \Psi_-(x), \label{bipolardecomp}
\end{equation}
for stationary eigenstates $\Psi(x)$ of 1D Hamiltonians of the
\eq{hameqn} form, such that:
\begin{enumerate}
\item{$\Psi_\pm(x)$ are themselves (non-$L^2$) solutions to the Schr\"odinger\ equation,
with the same eigenvalue, $E$, as $\Psi(x)$ itself.}
\item{The invariant flux values, $\pm F$, of the two solutions,
$\Psi_\pm(x)$, equal those of the two semiclassical (WKB) solutions.}
\item{The median of the enclosed action, $x_0$, equals that of the
semiclassical solutions.}
\end{enumerate}
There are other important properties of the $\Psi_\pm(x)$,\cite{poirier04bohmI}
as discussed in Sec.~\ref{intro}, and in \Ref{poirier04bohmI}. Nevertheless,
the above three conditions are sufficient to uniquely specify the
decomposition. In the special case of bound (i.e. $L^2$) stationary
states, the real-valuedness of $\Psi(x)$ implies that the
$\Psi_\pm(x)$ are complex conjugates of each other.
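As a concrete illustration of such a decomposition (a particle-in-a-box example of our own choosing, not the general paper-I algorithm), the eigenstate $\Psi(x) = \sqrt{2/L}\sin(n\pi x/L)$ splits into complex-conjugate travelling waves that are individually node-free with constant amplitude, even though $\Psi$ itself has $n-1$ interior nodes.

```python
import numpy as np

# Illustrative example: for the particle-in-a-box eigenstate
# Psi(x) = sqrt(2/L) sin(n pi x/L), the bipolar components are the
# complex-conjugate travelling waves
# Psi_pm(x) = +-(1/(2i)) sqrt(2/L) exp(+-i k x), with k = n pi/L.
L, n = 1.0, 7
k = n * np.pi / L
x = np.linspace(0.0, L, 2001)
psi = np.sqrt(2.0 / L) * np.sin(k * x)

psi_p = np.sqrt(2.0 / L) * np.exp(1j * k * x) / 2j
psi_m = np.sqrt(2.0 / L) * np.exp(-1j * k * x) / (-2j)

# Psi has n-1 interior nodes, but each component is node-free with
# constant amplitude r_pm = sqrt(1/(2L)):
print(np.max(np.abs(psi_p + psi_m - psi)))   # superposition recovers Psi
print(np.ptp(np.abs(psi_p)))                 # |Psi_+| is constant
```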
\subsection{Scattering systems}
\label{scattering}
It is natural to ask to what extent the above analysis may be
generalized for scattering potentials. Certainly, $\Psi(x)$ itself
is no longer $L^2$, nor even real-valued, and there are generally
two linearly independent solutions of interest for each $E$, instead
of just one. Condition (1) above poses no difficulty for $\Psi_\pm(x)$,
as these component functions are non-$L^2$ and complex-valued, even
in the bound eigenstate case. In principle, condition (2) is not
difficult either; although the flux value depends on the
normalization of $\Psi$ itself, which is not $L^2$, certain
well-established normalization conventions for scattering states
exist, which can be applied equally well to semiclassical and exact
quantum solutions. There is no action median {\em per se} for
scattering states, as the action enclosed within the $\Psi_+(x)$ and
$\Psi_-(x)$ phase space Lagrangian
manifolds\cite{poirier04bohmI,keller60,maslov,littlejohn92} (LMs) is infinite;
however, the scattering analog of condition (3) is related to the asymptotic
boundary conditions, and it is here that one encounters difficulty.
Moreover, an additional concern is raised by the doubly-degenerate
nature of the continuum eigenstates, namely: should each scattering
$\Psi(x)$ have its {\em own} $\Psi_\pm(x)$ decomposition, or should
there be a single $\Psi_\pm(x)$ pair, from which all degenerate
$\Psi(x)$'s may be constructed via arbitrary linear superposition?
To resolve these issues, we will adopt the same general strategy
used in paper I, i.e. we will resort to semiclassical theory as our
guide, wherever possible. We will also exploit certain special
features of the scattering problem not found in generic bound state
systems, such as the asymptotic potential condition $V'(x) \rightarrow 0$ as
$x \rightarrow \pm \infty$ (where primes denote spatial differentiation),
and its usual implications for scattering theory and
applications.\cite{taylor}
The basic WKB solutions are given by
\begin{equation}
\Psi_\pm^{\rm sc}(x) = r_{\rm sc}(x) e^{\pm i s_{\rm sc}(x)/\hbar}, \label{scsoln}
\end{equation}
where
\begin{equation}
r_{\rm sc}(x) = \sqrt{{m F \over s'_{\rm sc}(x)}} \qquad\text{and}
\qquad s'_{\rm sc}(x) = \sqrt{2 m \left[E-V(x)\right]}. \label{scrs}
\end{equation}
The corresponding positive and negative momentum functions,
specifying the semiclassical LMs, are given by $p^{\rm sc}_\pm(x) = \pm
s'_{\rm sc}(x)$. Equations (\ref{scsoln}) and (\ref{scrs}) apply to both
bound and scattering cases; note that for both, $\Psi_\pm^{\rm sc}(x)$ are
complex conjugates of each other. The asymptotic potential condition
ensures that these approach exact quantum plane waves
asymptotically, with the usual scattering interpretations, i.e.
$\Psi_+(x)$ in the $x\rightarrow-\infty$ asymptotic region is the incoming
wave from the left (usually taken to be the incident wave),
$\Psi_+(x)$ as $x\rightarrow\infty$ is the outgoing wave from the left (the
usual transmitted wave), etc.
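The smoothness and asymptotic behavior of the semiclassical fields are easy to exhibit numerically. The sketch below (our own illustration, $\hbar = m = 1$) evaluates \eq{scrs} for a hypothetical smooth barrier $V(x) = V_0/\cosh^2 x$ at an above-barrier energy $E > V_0$.

```python
import numpy as np

# Sketch of Eqs. (scsoln)-(scrs), hbar = m = 1, for a hypothetical
# smooth barrier V(x) = V0/cosh(x)^2 at an above-barrier energy E > V0.
m, F = 1.0, 1.0
V0, E = 0.5, 2.0
x = np.linspace(-10.0, 10.0, 2001)
V = V0 / np.cosh(x)**2

sp = np.sqrt(2.0 * m * (E - V))   # s'_sc(x): the semiclassical momentum
r = np.sqrt(m * F / sp)           # r_sc(x): smooth, node-free amplitude

# Asymptotically V -> 0, so s'_sc -> sqrt(2mE) and r_sc -> const, i.e.
# the Psi_sc_pm of Eq. (scsoln) approach exact quantum plane waves.
print(sp[0], np.sqrt(2.0 * m * E))
```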
Insofar as determining the corresponding exact quantum solutions
$\Psi_\pm(x)$, the procedure described in paper I is still appropriate
for bound and semi-bound (i.e. bound on one side only) states, in that the
results satisfy the correspondence principle globally, as desired
(for semi-bound examples, consult the Appendix). For true
scattering states, however, this procedure fails, in the sense that
if $\Psi_+(x)$ is chosen to match the normalization and flux of
$\Psi^{\rm sc}_+(x)$ in the $x\rightarrow\infty$ asymptote, then it will
necessarily approach a nontrivial linear superposition of
$\Psi^{\rm sc}_+(x)$ and $\Psi^{\rm sc}_-(x)$ in the $x \rightarrow -\infty$
asymptote, and vice-versa. There is therefore an ambiguity as to how
the corresponding quantum $\Psi_\pm(x)$'s should be defined, i.e. which
asymptotic region should be used to effect the correspondence.
More significantly though, {\em either} choice will result in component
functions $\Psi_\pm(x)$ with substantial interference in one of the two
asymptotic regions. This is due to partial reflection of the exact
quantum scattering states, which is not predicted by the basic WKB
approximation. Thus, in the large action limit, the exact quantum
solutions manifest large-magnitude quantum potentials, $q_\pm(x)$,
and rapidly oscillating field functions $q_\pm(x)$, $r_\pm(x)$, and
$p_\pm(x)$---exactly the undesirable behavior that the CPWM was
introduced to avoid---whereas the corresponding basic WKB functions
are smooth, and asymptotically uniform.
The lack of any partial reflection is a well-understood shortcoming of
the WKB approximation\cite{berry72,froman,heading,poirier03capI}---i.e.,
the basic $\Psi_\pm^{\rm sc}(x)$ components, though elegantly constructed
from smooth classical functions $r_{\rm sc}(x)$ and $s_{\rm sc}(x)$, do
not in and of themselves correspond to any actual quantum
scattering solutions $\Psi(x)$. In light of the bipolar
decomposition ideas introduced in paper I, however,
our perspective is the reverse one: for any {\em actual}
quantum $\Psi(x)$, can one determine an \eq{bipolardecomp}
decomposition such that the resultant $\Psi_\pm(x)$ resemble their
well-behaved semiclassical counterparts, and is such a decomposition unique?
Among other properties,\cite{poirier04bohmI} the $\Psi_\pm(x)$ LMs should become
identical to the semiclassical LMs in the large action limit, so as
to satisfy the correspondence principle. Based on the considerations
of the previous paragraph, it is clear that the paper I decomposition
does not achieve this goal, when applied to stationary scattering
states.
We defer a full accounting of these issues---in the context of
completely arbitrary continuous potentials $V(x)$---to
paper~III, wherein it will be
demonstrated how to compute exact quantum reflection and
transmission probabilities (and stationary scattering states) using
only classical trajectories, and without the need for explicit
numerical differentiation of the wavefunction. In the present paper,
we lay the foundation for paper III, by focusing attention onto two
key aspects whose development comprises an essential prerequisite.
First, as the paper III approach treats $V(x)$ as a sequence of
steps,\cite{poirier05bohmIII} the present
paper~II will focus exclusively on the
step potential and related discontinuous potential systems, for
which $V(x) = \text{const}$ in between successive steps. Discontinuous
potentials are important for chemical physics, because they model
steep repulsive walls, and are used in statistical theories of liquids.
Moreover, they hold a special significance for QTM methods, for which they
serve as a ``worst-case scenario'' benchmark. Indeed, conventional QTM
techniques {\em always} fail when applied to discontinuous potentials.
To date, the only such calculations that have been
performed\cite{holland} have computed the quantum potential from a
completely separate time-dependent fixed-grid calculation (the
``analytical approach'')\cite{wyatt} rather than directly from
the quantum trajectories themselves. Even if one {\em could} propagate
trajectories for discontinuous systems using a traditional QTM, the
trajectories that would be generated would be very kinky and
erratic,\cite{holland} and a great many trajectories
and time steps would thus be required.
Second, since the new $\Psi_\pm(x)$ do {\em not} satisfy condition (1),
unlike the paper~I CPWM decomposition, the time evolution of these
two component functions is clearly not
that of the time-dependent Schr\"odinger\ equation. Moreover, since the
$|\Psi_\pm|^2$ are constant over time [because $|\Psi|^2$ itself is
stationary, and \eq{bipolardecomp} is presumed unique], {\em the two
$\Psi_\pm(x,t)$ time evolutions must be coupled together}. It is essential
that the nature of this coupling be completely understood, in order
that the present approach may be generalized to non-stationary state
situations---e.g. to wavepacket scattering, as will be discussed in
future publications.
The ramifications for QTMs
are equally important. Accordingly, the present paper focuses on
the QTM propagation of the wavefunction and its bipolar
components---with a keen eye towards generality and physical
interpretation---even though the states involved are stationary.
This approach leads to a pedagogically useful reinterpretation of
``incident,'' ``transmitted,'' and ``reflected'' waves---very
reminiscent of ray optics in electromagnetic theory---which is
applicable much more generally than traditional usage might suggest.
\subsection{Basic applications}
\label{timeevol}
The necessary theory will be developed over the course of a
consideration of various model application systems of increasing
complexity.
\subsubsection{free particle system}
\label{free}
Let us first consider the simplest case imaginable, the free
particle system, $V(x)=0$. In this case, the exact solutions
$\Psi_\pm(x) = \Psi_\pm^{\rm sc}(x)$ clearly satisfy the conditions of
Sec.~\ref{bipolar}, and the bipolar quantum potentials $q_\pm(x)$
are zero everywhere. Thus, the bipolar decomposition developed for
bound states in paper I can be used directly with this continuum
system, requiring only the slight modification that arbitrary linear
combinations of $\Psi_+^{\rm sc}(x)$ and $\Psi_-^{\rm sc}(x)$ are to be
allowed, in order to construct arbitrary scattering solutions
$\Psi(x)$. For convenience, the linear combination coefficients will
from here on out be directly incorporated into the amplitude
functions, $r_\pm(x)$, and phase functions, $s_\pm(x)$, so that
\eq{bipolardecomp} is still correct.
If from all solutions $\Psi(x)$ one considers only that which
satisfies the usual scattering boundary conditions (i.e. incident
wave incoming from the left) then the negative momentum wave
$\Psi_-$ vanishes, and $\Psi(x)=\Psi_+(x)$. There is zero
reflection, and $100\%$ transmission. Put another way, the incident
flux, $\lim_{x\rightarrow -\infty} j_+(x)$, is equal to the transmitted
flux, $\lim_{x\rightarrow +\infty} j_+(x)$, where
\ea {
j_\pm(x) & = & {\hbar \over 2 i m}
\left[\Psi_\pm^*(x) {d\Psi_\pm(x) \over dx} -
{d\Psi_\pm^*(x)\over dx} \Psi_\pm(x)\right] \nonumber \\
& = & \left[{p_\pm(x) \over m}\right] r_\pm^2(x), \label{fluxeq}}
[both flux values are equal to $F$, as in \eq{scrs}].
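A quick numerical check of \eq{fluxeq} for a plane wave (our own illustration, with $\hbar = m = 1$ and arbitrarily chosen $k$ and $r$): both lines of the equation give $j_+ = (p/m)\,r^2 = k r^2$.

```python
import numpy as np

# Numerical check of Eq. (fluxeq), hbar = m = 1: for the plane wave
# Psi_+(x) = r exp(i k x), the flux j_+ should equal (p/m) r^2 = k r^2.
hbar = m = 1.0
k, r = 1.7, 0.8
x = np.linspace(-5.0, 5.0, 4001)
psi = r * np.exp(1j * k * x)

dpsi = np.gradient(psi, x)                 # finite-difference d Psi/dx
j = (hbar / (2j * m)) * (np.conj(psi) * dpsi - np.conj(dpsi) * psi)
print(np.real(j[2000]), k * r**2)          # interior value vs. k r^2
```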
In the quantum trajectory description, flux manifests as
probability-transporting trajectories, which move along the LMs. For
the boundary conditions described above, there are only positive
momentum trajectories, moving uniformly from left to right with
momentum $p_+(x) = \sqrt{2mE}$. If a $\Psi_-(x)$ contribution were
present, its trajectories would move uniformly in the opposite
direction [$p_-(x)=-\sqrt{2mE}$]. Since the two components $\Psi_\pm(x)$
are in this case uncoupled, the positive and negative momentum
trajectories would have no interaction with each other.
\subsubsection{hard wall system}
\label{hardwall}
We next consider the hard wall system:
\begin{equation}
V(x) = {\cases {0 & for $x \le 0$; \cr
\infty & for $x>0$. \cr }}
\end{equation}
In the $x\le0$ region, the two $\Psi_\pm(x)$ components are exactly the
same as in the free particle case, except that the $\Psi(0)=0$
boundary condition imposes the additional constraints,
\begin{equation}
s_-(0) = s_+(0) + \pi \,\, {\text{mod}} (2 \pi) \qquad ; \qquad
r_-(0) = r_+(0). \label{hwconst}
\end{equation} This also results in only {\em one} linearly independent
solution instead of two, i.e. $\Psi(x) \propto \sin(k x)$, with $k =
\sqrt{2 m E}/\hbar$. Regarding the LMs and trajectories, in the
$x<0$ region, these are identical to those of Sec.~\ref{free}, e.g.
the $\Psi_+(x)$ LM trajectories move uniformly to the right, towards
the hard wall at $x=0$.
It is natural to ask what happens when the $\Psi_+(x)$ LM
trajectories actually reach $x=0$. There are two reasonable
interpretations. The first is that the trajectories keep moving
uniformly into the $x>0$ region of configuration space. This
approach treats the hard wall system as if it were the free particle
system, but with the $x>0$ region effectively
ignored.\cite{poirier00qcI} This underscores the fact that unlike
$\Psi(x)$ itself, the individual $\Psi_\pm(x)$ components {\em per se}
are unconstrained at the origin---though the \eq{hwconst} constraint
implies a unique correspondence between the two. This interpretation
also makes it clear that for the hard wall system, the paper I
decomposition is essentially identical to the present decomposition,
as is worked out in detail in the Appendix.
In the second interpretation, the effect of the hard wall at $x=0$
is to cause instantaneous elastic reflection of a $\Psi_+(x)$ LM
trajectory momentum, from $p = p_+ = +\sqrt{2 m E}$ to $p = p_- =
-\sqrt{2 m E}$. Afterwards, the reflected trajectory propagates
uniformly backward, along the $\Psi_-(x)$ LM. In this
interpretation, the trajectories never leave the allowed
configuration space, $x\le0$. However, wavepacket reflection is
essentially achieved via trajectory {\em hopping} from one LM to the
other---not unlike that previously considered, e.g., in the context
of non-adiabatic transitions.\cite{tully71}
The trajectory hopping interpretation is adopted in
the present paper, and in paper III, but the first interpretation
will also be reconsidered in later publications.
Note that for discontinuous potentials---and indeed more
generally\cite{poirier05bohmIII}---one can regard trajectory hopping as the
{\em source} of $\Psi_\pm(x)$ interaction coupling.
For the hard wall case, trajectory hopping only manifests at $x=0$,
the sink of all $\Psi_+(x)$ LM trajectories, and the source of all
$\Psi_-(x)$ LM trajectories. If these trajectories are to be
regarded as one and the same via hopping, then a unique field
transformation for $r$, $s$, and all spatial derivatives, must be
specified. Fortunately, the unique correspondence between
$\Psi_+(x)$ and $\Psi_-(x)$ described above enables one to do just
that. In particular, \eq{hwconst} specifies the correct
transformations for $r$ and $s$, as transported by the quantum
trajectories. All spatial derivatives of arbitrary orders can then
be obtained via spatial differentiation of \eq{scsoln}---although in
the hard wall case, only the $s'$ condition, $p_-(0) = - p_+(0)$, is
relevant, because all higher order derivatives are identically zero.
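The hopping rule can be sketched as a simple trajectory-ensemble calculation (our own illustration, $\hbar = m = 1$): each trajectory moves classically along the $+$ LM, reverses its momentum elastically on reaching $x = 0$, and carries the \eq{hwconst} phase shift of $\pi$ onto the $-$ LM.

```python
import numpy as np

# Sketch of hard-wall trajectory hopping (hbar = m = 1): trajectories
# move right along the + LM with p = +sqrt(2mE); on reaching x = 0 they
# hop to the - LM with p -> -p, and the transported phase picks up the
# Eq. (hwconst) shift of pi.
m, E = 1.0, 2.0
p_plus = np.sqrt(2.0 * m * E)

x = np.linspace(-10.0, -1.0, 19)      # initial trajectory positions
p = np.full_like(x, p_plus)
s_shift = np.zeros_like(x)            # accumulated hopping phase shift

dt, nsteps = 0.01, 1200
for _ in range(nsteps):
    x = x + (p / m) * dt
    hop = x > 0.0
    x[hop] = -x[hop]                  # elastic reflection at the wall
    p[hop] = -p[hop]                  # hop from the + LM to the - LM
    s_shift[hop] += np.pi             # phase transformation, Eq. (hwconst)

# By t = 12, every trajectory has reflected exactly once:
print(np.all(p < 0), np.all(x < 0))
```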
Since the magnitudes of the $p$ and $r$ fields associated with a
given quantum trajectory are unchanged as a result of the trajectory
hop, \eq{fluxeq} implies that the incident and reflected flux values
are the same (apart from sign), and so the scattering system
exhibits 100\% reflection and zero transmission (along each LM, the
flux is invariant\cite{poirier04bohmI}). These basic facts of the hard wall
system are of course well understood. The point, though, is that we
have now obtained the information in a time-{\em dependent} quantum
trajectory manner, rather than through the usual route of applying
boundary conditions to time-{\em independent} piecewise component
functions. In other words, \eq{hwconst} now refers to individual
{\em quantum trajectories}, rather than to wavefunctions.
This shift of emphasis is very important, and leads to quite a
number of conceptual and computational advantages. For instance, the
standard description of the hard wall stationary states would
decompose these into plane wave components interpreted as
``incident'' and ``reflected'' waves. This language suggests a
process, or change over time---i.e. a state that is initially
incident, at some later time is somehow transformed into a reflected
state. Nothing in the standard description, however, would seem to
render transparent the usage of such terminology, i.e. $\Psi(x)$ is
stationary, and so the reflected and transmitted components are in
fact {\em both} present for all times. Of course, a localized
superposition of stationary states, i.e. a wavepacket, may well
exhibit such an explicit transformation over the course of the time
evolution, as such a state is decidedly non-stationary. Indeed,
wavepackets are relied upon by the more rigorous formulations of
scattering theory, in order to justify the use of terms such as
``reflected wave,'' even in a stationary context.\cite{taylor} Such
formulations, though certainly legitimate, seem always to require a
clever use of limits, the subtle distinction between unitary and
isometric transformations, and other esoteric mathematical tricks.
On the other hand, the time-dependent bipolar quantum trajectory
hopping picture presented above provides a physicality to such
language that is immediately apparent. Over the course of the time
evolution, although the wavefunction as a whole is stationary, each
individual {\em trajectory} is first incident from the left, then
collides with the hard wall, and is subsequently reflected back
towards the left (i.e. towards $x\rightarrow-\infty$). The bipolar quantum
trajectories are all classical, as the bipolar quantum potentials,
$q_\pm(x)$, are zero everywhere except at the wall itself.
Interference arises naturally from the superposition of the two
LMs---i.e., from the trajectories that have already progressed to
the point of reflecting, vs. those that have not reflected yet. In
contrast, since $\Psi(x)$ itself exhibits very substantial
interference, and an infinite number of nodes, the traditional
unipolar QTM treatment would be very ill-behaved, i.e. $R(x)$ would
oscillate wildly in the large $k$ limit, and $Q(x)$ would be
numerically unstable near the nodes. Apart from these important
pragmatic drawbacks, the incident/reflected interpretation of the
quantum trajectories would also be lost.
The bipolar quantum trajectory description of the hard wall system
is very reminiscent of ray optics, as used to describe the
reflection of electromagnetic waves off of a perfectly reflecting
surface.\cite{jackson} Indeed, much can be gained from applying a
ray optics analogy to quantum scattering applications, especially where
discontinuous potentials are concerned. One can construct a simple
gedankenexperiment as follows. Let $x_L<0$ denote some effective
left edge of the system, well to the left of the interaction region.
At some initial time $t=0$, all trajectories on the positive LM
lying to the right of $x_L$ are {\em ignored}, as is the negative LM
altogether. One then evolves the retained trajectories over time,
and monitors the contribution that just these trajectories make to
the total wavefunction. In some respects, it is as if the point
$x_L$ were serving as the initial wavefront for some incoming wave,
that at $t=0$ had not yet reached the hard wall/reflecting surface.
Of course, if the actual wave were in fact truncated in this
fashion, then the discontinuity in the field functions at the
wavefront would result in a very non-trivial propagation over time,
owing to the high-frequency components implicitly present. For
actual waves, the precise nature of the wavefront is known to have a
tremendous impact on the resultant dynamics.\cite{jackson,brillouin14}
We avoid such complicating details by always interpreting the
``actual wave'' to be the full stationary wave itself, i.e. the
truncation is conceptual only.
In the ray optics analogy, the above situation is like a source of
light located at $x_L$, which is suddenly ``turned on'' at $t=0$. It
takes time for the wavefront to propagate to the reflecting surface,
and additional time for the reflected wavefront to make its way back
to $x=x_L$. Prior to the latter point in time, the evolution of the
truncated electromagnetic wave is decidedly {\em non}-stationary;
afterwards however, a stationary wave is achieved, at least within
the region of interest, $x_L\le x \le 0$, as the wavefront has by
this stage propagated beyond this region. The same qualitative
comments apply to the bipolar quantum case, although of course the
evolution equations are different.
A similar prescription may be used to achieve rudimentary
``wavepacket dynamics,'' even in the context of purely stationary
states. Instead of retaining {\em all} initial trajectories that lie
to the left of $x_L$, one retains only those that lie within some
finite interval. The resulting time evolution is analogous to a
light source that is turned on at $t=0$, and then turned off at some
later time (prior to when the wavefront arrives at the reflecting
surface). The initial ``wavepacket'' has uniform density, and moves
with uniform speed towards the hard wall. Interference fringes then
form after the foremost trajectories have been reflected onto the
negative LM. Eventually, all trajectories within the interval are
reflected, at which point interference ceases (the nodes are
``healed''\cite{wyatt}), uniform density is restored, and the
reflected wave travels with uniform speed in the reverse direction,
back towards the starting point $x_L$. Qualitatively, this behavior
is clearly similar to that undergone by actual wavepackets
reflecting off of barrier potentials.
\subsection{More complicated applications}
\label{complicated}
The ideas described above can be easily extended to more complicated
discontinuous potential systems, such as up- and down-step
potentials, and any combination of multiple steps, e.g. square
barriers and square wells. In paper III, they will even be extended
to arbitrary continuous potentials.\cite{poirier05bohmIII} In every case, the
ray optics analogy from electromagnetic theory may also be extended
accordingly. This approach provides a useful perspective on global
reflection and transmission in scattering systems, and in
particular, demonstrates how such quantities may be obtained from a
single, universal expression for local reflection and transmission.
\subsubsection{step potential system---above barrier energies}
\label{stepabove}
We next consider the step potential system:
\begin{equation}
V(x) = {\cases {0 & for $x \le 0$; \cr
V_0 & for $x>0$. \cr }} \label{steppot}
\end{equation}
Classically, this system exhibits 100\% transmission if
the trajectory energy is above the barrier (i.e. $E>V_0$), and
100\% reflection if the trajectory energy is below the barrier
($E<V_0$). Quantum mechanically, all above barrier trajectories
are found to exhibit partial reflection and partial transmission,
although there is a general increase in transmission probability with
increasing energy. The below barrier quantum trajectories exhibit 100\%
reflection, as in the classical case; however, they also manifest
tunneling into the classically forbidden $x>0$ region. Thus even
quantum mechanically, the above and below barrier cases must be
handled somewhat differently.
To begin with, we consider the above-barrier case.
Note that the LM's are unbounded in either direction, i.e. the classically
allowed region extends to both asymptotes, $x\rightarrow \pm \infty$.
Incoming trajectories can therefore originate from either asymptote,
thus giving rise to two linearly independent solutions, $\Psi(x)$.
This is in stark contrast to the hard wall system, for which incoming
trajectories could only originate from $x\rightarrow -\infty$, thus resulting
in only one linearly independent solution for $\Psi(x)$.
In the standard time-independent picture, one starts with the four
piecewise solutions,
\begin{equation}
\PApm(x) = e^{\pm i p_A x/\hbar} \quad\text{and}\quad
\PBpm(x) = e^{\pm i p_B x/\hbar}, \label{steppieces}
\end{equation}
where region $A$ corresponds to $x\le 0$, and region $B$ to
$x \ge 0$. The momenta values are classical, i.e.
\begin{equation}
p_A = \sqrt{2 m E}\quad\text{and}\quad
p_B = \sqrt{2 m (E-V_0)}. \label{steppees}
\end{equation}
Matching $\Psi(x)$ and $\Psi'(x)$ boundary conditions at $x=0$, and
specifying asymptotic boundary conditions for $\Psi(x)$, then enables
a unique determination of the four complex coefficients $A_\pm$ and $B_\pm$
in
\begin{equation}
\Psi(x) = {\cases {A_+ \PAp(x) + A_- \PAm(x) & for $x \le 0$; \cr
B_+ \PBp(x) + B_- \PBm(x) & for $x\ge0$. \cr }}
\label{stepwhole}
\end{equation}
In general, the solution coefficients depend on the particular
stationary solution of interest. For the usual scattering convention
of an incident wave incoming from the left (Fig.~\ref{stepabovefig})
the solutions are
\ea{
A_+ = 1 \quad & ; & \quad A_- = R = \of{{p_A - p_B \over p_A + p_B}}
\nonumber \\
B_+ = T = \of{{2 p_A \over p_A + p_B}} \quad & ; & \quad B_- = 0,
\label{stepcoeffs}
} where $R$ and $T$ are, respectively, the reflection and transmission
amplitudes. When flux is properly accounted for, the resultant
reflection and transmission probabilities (which add up to unity)
are given by \begin{equation}
P_{\text{refl}} = |R|^2 \qquad ; \qquad
P_{\text{trans}} = \of{{p_B \over p_A}} |T|^2.
\label{steprefltrans}
\end{equation} Note that \eqs{stepcoeffs}{steprefltrans} above are correct for
both an ``up-step'' and a ``down-step''---i.e. for $V_0$ positive or
negative. We can also apply these equations to the ``opposite''
boundary conditions, i.e. to an incident wave incoming from the right,
by simply transposing $A$ and $B$ subscripts, and $+$ and $-$ subscripts
($p_A$ and $p_B$ are still positive). This is important, because any
stationary solution $\Psi(x)$ can be obtained as some linear superposition of
left-incident and right-incident solutions.
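The left-incident amplitudes and flux-weighted probabilities above are simple enough to verify directly. The following minimal sketch (with assumed units $\hbar = m = 1$ and illustrative values of $E$ and $V_0$ not taken from the paper) evaluates \eq{stepcoeffs} and \eq{steprefltrans}, and confirms that the two probabilities sum to unity:

```python
import math

# Assumed units and illustrative parameters (not from the paper)
m, hbar = 1.0, 1.0
E, V0 = 2.0, 1.0                    # above barrier: E > V0

p_A = math.sqrt(2 * m * E)          # region A momentum
p_B = math.sqrt(2 * m * (E - V0))   # region B momentum (real, since E > V0)

# Left-incident coefficients
R = (p_A - p_B) / (p_A + p_B)       # reflection amplitude
T = 2 * p_A / (p_A + p_B)           # transmission amplitude

# Flux-weighted reflection and transmission probabilities
P_refl = R ** 2
P_trans = (p_B / p_A) * T ** 2

assert abs(P_refl + P_trans - 1.0) < 1e-12
```

The same identity holds for any $V_0$ of either sign, i.e. for both up-steps and down-steps.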
Regarding the time-dependent interpretation, it is evident that upon
reaching the step discontinuity, left-incident trajectories must be
partially reflected and partially transmitted. The trajectory is
suddenly split into two: one continues to propagate along the
positive LM of the transmitted $B$ region (i.e. the $B+$ LM), while
the other is instantaneously reflected down to the $A-$ LM.
Moreover, since probability carried by individual quantum
trajectories is conserved,\cite{wyatt,holland} this splitting
must be done in a manner that preserves both probability and flux.
In other words, the local splitting of the trajectory at $x=0$ must
correspond to \eq{stepcoeffs}, which is now regarded as a {\em
local} condition, giving rise to local reflection and transmission
amplitudes, $R$ and $T$. For the present step potential case, these
local quantities are directly related to the global
$P_{\text{refl}}$ and $P_{\text{trans}}$ values via
\eq{steprefltrans}. For multiple step potentials (Sec.~\ref{steps}),
the global expressions above [\eq{steprefltrans}] no longer apply;
however, a local, time-dependent trajectory version of
\eq{stepcoeffs} {\em does} turn out to be correct.
Such an expression, immediately applicable to all single and
multiple step systems, can be written as follows: \ea{
r_{\text{refl}} = \of{{p_{\text{i/r}} - p_{\text{trans}} \over
p_{\text{i/r}} + p_{\text{trans}}}}r_{\text{inc}}
\quad & ; & \quad
s_{\text{refl}} = s_{\text{inc}} \label{tdrefl}\\
r_{\text{trans}} = \of{{2 p_{\text{i/r}} \over
p_{\text{i/r}} + p_{\text{trans}}}} r_{\text{inc}} \quad & ; & \quad
s_{\text{trans}} = s_{\text{inc}}. \label{tdtrans}}
In the above equations, ``inc'' refers to any trajectory, locally
incident on some particular step from some particular direction,
which spawns both a locally reflected trajectory, ``refl,'' and a
locally transmitted trajectory, ``trans''. The quantity
$p_{\text{i/r}}$ is the (positive) momentum associated with the
locally incident/reflected trajectory; similarly, $p_{\text{trans}}$
(also positive) is associated with the locally transmitted
trajectory. For above-barrier incident trajectories, note that the
local reflection and transmission amplitudes are both real, thus
ensuring the reality of $r$ and $s$ for the spawned trajectories.
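A local spawning rule of this kind is easy to state in code. The sketch below (hypothetical function name; real, positive momenta assumed, as in the above-barrier case) applies \eqs{tdrefl}{tdtrans} to an incident trajectory, and then checks that the splitting conserves flux, $p\,r^2$:

```python
def spawn(r_inc, s_inc, p_ir, p_trans):
    # Locally reflected trajectory: amplitude rescaled, action unchanged
    r_refl = ((p_ir - p_trans) / (p_ir + p_trans)) * r_inc
    s_refl = s_inc
    # Locally transmitted trajectory
    r_trans = (2 * p_ir / (p_ir + p_trans)) * r_inc
    s_trans = s_inc
    return (r_refl, s_refl), (r_trans, s_trans)

# Illustrative check with assumed momenta:
# incident flux = reflected flux + transmitted flux
p_ir, p_t, r_inc = 2.0, 1.5, 0.7
(r_r, _), (r_t, _) = spawn(r_inc, 0.0, p_ir, p_t)
assert abs(p_ir * r_inc**2 - (p_ir * r_r**2 + p_t * r_t**2)) < 1e-12
```

The flux identity holds algebraically for any pair of positive momenta, which is precisely why the splitting preserves both probability and flux at every step.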
Returning to the step potential system, the ray optics picture can
once again shed some interesting light. The optical analog of the
step is an interface between two media with different indices of
refraction. Light incident on such an interface will partially
reflect back towards the original source, and partially refract
forwards into the new medium. The refraction is completely analogous
to the discontinuous change in momentum, $(p_B-p_A)$, that suddenly
occurs as one crosses the step (Fig.~\ref{trajfig}). In any event,
the ray optics gedankenexperiment described in Sec.~\ref{hardwall}
can also be applied to the step potential system, in order to obtain
a particular stationary solution $\Psi(x)$ with any desired boundary
conditions.
For instance, suppose one is interested in constructing the
left-incident wave solution, i.e. that of \eq{stepcoeffs}. At $t=0$,
only the $\PAp$ wave is considered, and only those trajectories for
which $x\le x_L$, as before. As the incident trajectories reach the
step, two new waves are dynamically created from the spawned
trajectories: a transmitted wave traveling to the right, and a
reflected wave traveling to the left. A plot of the overall density
$|\Psi(x,t)|^2$ so obtained will change over time, as the
transmitted and reflected wavefronts propagate into their respective
regions (Fig.~\ref{sbbelowfig}). Eventually, however, these
wavefronts will propagate beyond the region of interest, i.e. $x_L
\le x \le x_R$, where $x_R>0$ is the right edge of the region of
interest. When this occurs, the solution for $\Psi(x)$ obtained
within the region of interest will be exactly equal to the
stationary solution with the desired boundary condition.
As in the hard wall case, one can also perform step potential
``wavepacket dynamics'' by restricting consideration to just those
initial $\PAp$ trajectories lying within some coordinate interval.
The wavepacket will propagate towards the step with uniform density
and speed. As the first few trajectories hit the step, a uniform
transmitted wave will be formed in the $B$ region. In the $A$
region, the sudden appearance of a $\PAm$ wave will introduce
interference wiggles in the overall density plot (although no nodes
{\em per se}, owing to partial reflection only). Eventually, after
all trajectories have progressed beyond the step, well-separated
transmitted and reflected wavepackets emerge, propagating in their
respective spaces and directions. There is no longer any
interference in the $A$ region, as the incident wave is now gone,
having been completely divided into the two final contributions.
$P_{\text{refl}}$ and $P_{\text{trans}}$ values may be determined
via monitors placed at $x_L$ and $x_R$, either by integrating
probability over time as the respective wavepackets travel through,
or by recording the (constant) amplitude values $R$ and $T$, and
applying \eq{steprefltrans}.
\subsubsection{step potential system---below barrier energies}
\label{stepbelow}
The case for which the incident trajectory energies are below $V_0$
requires special discussion. In this case, the classical LMs and
trajectories are confined to the $A$ region only (i.e. to $x\le 0$),
as the entire $B$ region is classically forbidden. In the language
of Sec.~\ref{scattering}, these below barrier states are therefore
semi-bound, implying that there is only one linearly independent
stationary solution, $\Psi(x)$, which, without loss of generality,
must be real-valued. This in turn implies that the $\Ppm(x)$ are
complex conjugates of each other, as in the bound state case
discussed in paper~I. Indeed, one option is to simply apply the
paper~I decomposition to such problems. This approach is discussed
in detail in the Appendix, wherein it is shown to provide a natural
extension of classical trajectories into the tunneling region.
On the other hand, the trajectory hopping-based decomposition scheme
offers a different, but also very natural means to accomplish the
same task---which has the added advantage that all bipolar quantum
potentials {\em vanish}, except at $x=0$. The idea is simply to
treat all expressions in Sec.~\text{Re}f{stepabove} as being literally
correct for the below barrier case as well, with the understanding
that the requisite quantities need no longer be real. In particular,
$p_{\text{trans}} = i \hbar \kappa$ [\eq{kappaeqn}] becomes pure
positive imaginary, implying that the transmitted trajectories
``turn a corner'' in the complex plane, and start heading off in the
positive imaginary direction, with speed $\hbar \kappa/m$
(Fig.~\ref{complexfig}). Along this path, the transmitted wave is an
ordinary plane wave; however, when analytically continued to the
real axis in the $x>0$ region (via a $90^\circ$ clockwise rotation
in the complex plane), the familiar exponentially damped form
results (Fig.~\ref{stepbelowfig}).
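Treating the above-barrier expressions literally with a pure imaginary transmitted momentum can be checked numerically. In this sketch (assumed units $\hbar = m = 1$ and illustrative energies; the relation $\tan\delta = \hbar\kappa/p_A$ is our assumption for the phase angle, chosen to be consistent with the $2\delta$ shift discussed below), the local reflection amplitude comes out unimodular, i.e. total reflection with a phase shift:

```python
import cmath, math

# Assumed units and illustrative below-barrier parameters (E < V0)
m, hbar = 1.0, 1.0
E, V0 = 0.5, 1.0

p_A = math.sqrt(2 * m * E)
kappa = math.sqrt(2 * m * (V0 - E)) / hbar
p_trans = 1j * hbar * kappa           # pure positive imaginary momentum

# Apply the above-barrier reflection amplitude literally
R = (p_A - p_trans) / (p_A + p_trans)

assert abs(abs(R) - 1.0) < 1e-12      # total reflection: |R| = 1
delta = math.atan(hbar * kappa / p_A) # assumed phase angle (cf. the Appendix)
assert abs(cmath.phase(R) + 2 * delta) < 1e-12   # phase shift of magnitude 2*delta
```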
For the reflected wave, \eq{tdrefl} states that the reflected
``phase'' remains unchanged. However, $r_{\text{refl}}$ is now {\em
complex}, leading to an effective phase shift of $2\delta$, where
$\delta$ is defined in the Appendix [\eq{deltaeqn}]. For localized
wavepackets, a physical significance can be attributed to this phase
shift, in both quantum mechanics and electromagnetic theory; it is
the source of the Goos-H\"anchen
effect,\cite{jackson,hirschfelder74} a time delay observed in
conjunction with total internal reflection. Consequently, in the
time-dependent wavepacket context, it may be more appropriate to
associate the phase shift with {\em time}, rather than with $s$ or
$r$---specifically, with the delay time needed to accrue sufficient
action so as to compensate for the shift. For stationary states,
however, such a time delay would be inconsequential, because all
trajectories are identical apart from overall phase. Consequently,
we do not consider such time delays explicitly in this paper, though
we will return to this issue in future publications.
\subsubsection{multiple step systems}
\label{steps}
The most interesting case is that for which there are multiple
discontinuities, occurring at arbitrary locations $x_k$ (with
$k=1,2,\ldots,l$), and dividing up configuration space into $l+1$
regions, labeled $A$, $B$, $C$, etc. In each region, the potential
energy has a different constant value, i.e. $V(x) = V_A$ in region
$A$, etc. From an optics point of view, this system is analogous to
a stack of different materials, each with its own thickness, and
index of refraction. Our primary focus in this paper will be square
barrier/well systems for which $l=2$, and $V_A = V_C$. However, all
of the present analysis extends to the more general case described
above.
In the standard time-independent picture, the solution is obtained
via a straightforward generalization of Eqs. (\ref{steppieces}),
(\ref{steppees}), and (\ref{stepwhole}). However, even when
comparable left-incident boundary conditions are specified as in
Sec.~\ref{stepabove}---i.e. $A_+ = 1$, and (for $l=2$) $C_- =
0$---the remaining coefficient values are fundamentally different
from those of the single-step case. To begin with, only the $l$'th
step exhibits the characteristics of a (locally) left-incident
single-step solution; all other steps involve four non-zero
coefficients, corresponding locally to some superposition of left-
and right-incident waves. Even more importantly, however, the
expressions for the coefficient values as a function of system
parameters {\em in no way} resemble \eq{stepcoeffs}; in particular,
these now depend explicitly on the $x_k$ values, as well as on
$V_A$, $V_B$, etc. The same is also true for the global
$P_{\text{refl}}$ and $P_{\text{trans}}$ expressions, as compared
with \eq{steprefltrans}.
It is this dependence on the other steps that gives rise to the
global nature of the time-independent solutions; i.e. the
coefficient values at one step depend in principle on the properties
of all of the other steps, no matter how far away these might be
located. Consequently, a reflection probability as obtained from the
$A_-$ value associated with the first, $k=1$ step, cannot be
determined without extending the analysis out to the final step at
$x= x_l$, in the standard time-independent picture. On the other
hand, a primary goal of the time-{\em dependent} approach is to
construct a completely {\em local} theory, for which local
reflection and transmission amplitudes associated with any given
trajectory, as it encounters a given step $k$, depend {\em only} on
the properties of the $k$'th step (i.e. on $x_k$, and on the $p$ or
$V$ values to the immediate left and right of $x_k$). In fact, from
the point of view of the given trajectory, it must be immaterial
whether the potential contains other steps or not---implying that
the correct local relations for the spawned trajectories, if they
exist at all, must be exactly those already specified in
\eqs{tdrefl}{tdtrans}.
How is it possible that for stationary wavefunctions, whose time
evolution is presumably trivial, an inherently global problem can be
converted to a local one, simply by switching from a
time-independent to a time-dependent perspective? This is because of
the bipolar decomposition, which provides each step with not one,
but two sets of incident trajectories, one from the left, and one
from the right. When there are multiple steps, not only does this
result in a non-trivial superposition for the resultant locally
reflected and transmitted waves, but the trajectories themselves are
subject to multiple spawnings, which effectively enable them to
traverse back and forth over the same regions of configuration space
an arbitrary number of times (Fig.~\ref{trajfig}). This crucial
feature ultimately gives rise to the rich global scattering behavior
observed even in two-step systems. However, it is wholly missed by
any time-independent treatment, even a bipolar one, which can only
summarize the net superposition of all left-traveling and
right-traveling waves.
We now discuss how the local time-dependent theory described above
gives rise to the correct stationary solutions, which is readily
understood by invoking the ray optics description introduced
earlier. For simplicity and definiteness, we consider only the
square potential case, which is optically analogous to say, a single
pane of glass surrounded by vacuum. If a single step gives rise to a
single reflection, then two steps, like a pair of mirrors, results
in an {\em infinite} number of reflections. The same is true of a
pane of glass, within which a single beam of light will be reflected
back and forth at the edges an arbitrary number of times. Of course,
these reflections are not perfect; a portion of the incident flux
always escapes as transmission into the surrounding vacuum.
Consequently, each successive internal reflection is exponentially
damped, in accord with \eq{tdrefl}.
If the globally incident wave is incoming from the left, then at
$x_1$, there are two contributions to $\Psi_{B+}$. One contribution
is the portion of the left-incident $\Psi_{A+}$ wave that is {\em
locally transmitted} through the first step. Apart from a phase
factor, the resultant $B_+$ value would be given by \eq{stepcoeffs}
if this were the only contribution. However, there is also a
contribution that arises from the {\em locally reflected} part of
the right-incident wave, $\Psi_{B-}$. This contribution is zero for
a single step system, but of course non-zero in the multiple step
case. Although the second contributing wave is right-incident, we
can still use \eqs{tdrefl}{tdtrans} to compute the contribution to
$\Psi_{B+}$, as discussed in Sec.~\ref{stepabove}. For the second,
$k=l=2$ step at $x_2$, there are only left-incident waves;
consequently, $\Psi_{C+}$ and $\Psi_{B-}$ are obtained from a single
source each, i.e. $\Psi_{B+}$ (Fig.~\ref{trajfig}).
The above description refers to the stationary state result,
obtained by our gedankenexperiment in the large time limit only. In
practice this result would be achieved in stages. As in the previous
examples, we imagine that at time $t=0$, one retains only those
trajectories for which $x\le x_L < x_1$. This one-sided trajectory
restriction is somewhat analogous to continuous wave cavity
ring-down spectroscopy.\cite{wheeler98} When the wavefront first hits
the first interface at $x=x_1$, there is partial reflection and
transmission, exactly identical to what would happen for a single
step system. The reflected wavefront propagates beyond the left edge
of the region of interest at $x=x_L$, and for some time, the
reflected amplitude passing through this left edge is constant. The
initially transmitted wavefront eventually reaches the second step
at $x=x_2$ (i.e. the far side of the pane of glass), leading to a
second transmission into the $C$ region, and a second reflection
back through the $B$ region. Eventually, the second transmitted
wavefront reaches the right edge of interest at $x = x_R$, after
which the transmitted amplitude remains constant for some time.
Neither the globally transmitted nor the globally reflected
amplitude for the times indicated above, as determined via monitors
at $x=x_L$ and $x = x_R$ respectively, is yet correct, however, as
we have not yet described the steady state solution. To do so
requires an accounting of the
second reflected wavefront, which eventually reaches the $x=x_1$
step again, this time incident from the right. The resultant locally
transmitted wave becomes an instantaneous second contribution to
$\Psi_{A-}$, and the locally reflected wave plays the same role for
$\Psi_{B+}$. These new contributions give rise to discontinuities in
these waves, that subsequently propagate to the left and right,
respectively (Fig.~\ref{sbabovefig}). The new $\Psi_{A-}$ wave
discontinuity eventually reaches $x=x_L$, where it is recorded by
the monitor, giving rise to a sudden change in the reflection
probability value.
The $\Psi_{B+}$ discontinuity propagates to the second step, where
it spawns new discontinuities in $\Psi_{C+}$ and $\Psi_{B-}$. The
former constitutes the border between first- and second-order
transmitted waves, registered at sufficiently later time by the
monitor at $x=x_R$. The latter, second-order $\Psi_{B-}$ wave heads
back towards the first step, to give rise to third-order waves, with
commensurate discontinuities, etc. In principle, this process
continues indefinitely, resulting over time in global transmitted
and reflected waves of arbitrarily high order. However, \eq{tdrefl}
and the relation $P_{\text{refl}} + P_{\text{trans}} = 1$ ensure
that the result converges to a stationary solution exponentially
quickly. Moreover, since $C_-$ is necessarily zero throughout this
process, it is clear that the stationary state that is converged to
is indeed the one corresponding to the desired boundary condition of
a globally incident wave that is incoming from the left.
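The order-by-order convergence described above can be made concrete for a square barrier. In the sketch below (assumed units $\hbar = m = 1$; illustrative $E > V_0$ and barrier width $L$, with $p_C = p_A$), the global reflection amplitude is built up as a geometric sum of local reflections and transmissions, then compared with its closed-form limit; unitarity of the converged result is also verified:

```python
import cmath, math

# Assumed units and illustrative square barrier on 0 <= x <= L
m, hbar = 1.0, 1.0
E, V0, L = 2.0, 1.0, 3.0

p_A = math.sqrt(2 * m * E)            # regions A and C (V = 0)
p_B = math.sqrt(2 * m * (E - V0))     # region B (above barrier)

r_loc = lambda p, q: (p - q) / (p + q)   # local reflection amplitude
t_loc = lambda p, q: 2 * p / (p + q)     # local transmission amplitude

rt = cmath.exp(2j * p_B * L / hbar)      # round-trip phase across region B

# First internal-reflection contribution, then a geometric series of round trips
first = t_loc(p_A, p_B) * r_loc(p_B, p_A) * t_loc(p_B, p_A) * rt
R_series = r_loc(p_A, p_B) + sum(
    first * (r_loc(p_B, p_A) ** 2 * rt) ** n for n in range(40))

# Closed-form geometric sum, and the matching transmitted amplitude
den = 1 - r_loc(p_B, p_A) ** 2 * rt
R_tot = r_loc(p_A, p_B) + first / den
T_tot = t_loc(p_A, p_B) * t_loc(p_B, p_A) * cmath.exp(1j * p_B * L / hbar) / den

assert abs(R_series - R_tot) < 1e-12                     # rapid convergence
assert abs(abs(R_tot)**2 + abs(T_tot)**2 - 1.0) < 1e-12  # p_C = p_A: unit total flux
```

Each additional round trip multiplies the contribution by $|r_{BA}|^2 < 1$, which is the exponential damping of successive internal reflections described in the text.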
Note that in an actual optical system as described above, the
spatial dimensionality is three rather than one, and the incident
wave would usually be taken at some angle to the normal. If in
addition, the beam has a finite width, then one would observe
separate reflected beams for each order, of exponentially decreasing
brightness. The one-dimensional quantum case, however, is analogous
to a normal incident beam, for which all orders of reflection are
superposed. In addition to providing a pedagogical understanding of
the dynamics that is very much analogous to the optical example
provided, the picture above also suggests a practical numerical
method that may be used to obtain stationary scattering states of
any desired boundary condition (via superposition of globally left-
and right-incident wave solutions, obtained independently).
Note that the ``wavepacket dynamics'' version of the ray optics
analogy may also be applied. In this case, the resultant initial
square wavepacket is somewhat reminiscent of pulsed wave cavity
ring-down spectroscopy.\cite{wheeler98} Once the wavepacket
has penetrated the middle region $B$ (i.e. the pane of glass),
it reflects back and forth between the two edges, with each
reflection giving rise to a left- or right-propagating outgoing
square wavepacket in region $A$ or $C$, and a temporary interference
pattern in region $B$. The amplitude of the central wavepacket
dissipates exponentially in time. All of this complicated
behavior is indeed qualitatively observed in actual wavepacket
dynamics for such systems, but in the present context, is
reconstructed entirely from a single stationary state.
\section{Numerical Details}
\label{numerics}
In this section, we discuss several remaining issues pertaining to
the numerical methods used to generate and propagate the various
bipolar component waves, for the examples discussed in
Secs.~\ref{complicated} and~\ref{results}. For all of these
examples, the numerical algorithm used corresponds to the
gedankenexperiment with one-sided truncation, i.e. to continuous
wave cavity ring-down. In essence, this
consists of just two basic operations:
(1) each piece-wise bipolar component
of the wavefunction [i.e. $\PApm(x)$, $\PBpm(x)$, etc.] is
independently propagated in time over its appropriate region of
space, using the standard QHEMs and QTMs; (2) whenever a
trajectory reaches a turning point, it is immediately deleted, and
replaced with two new trajectories, spawned in the appropriate
locally transmitted and reflected component LMs.
The first operation above, i.e. QTM propagation of the wavefunction
components, is very straightforward. Note that for simplicity, we
have throughout this paper used time-independent expressions for
$\Psi(x)$ and its components, but in reality these evolve over
time---even for stationary states, via $\dot s = \partial s(x,t) /
\partial t = - E$. We have therefore been rather lax in
distinguishing Hamilton's principle function from Hamilton's
characteristic function, although from a trajectory standpoint, it
is always the former that is implied. Since each component is
stationary in its own right, the time evolution of the hydrodynamic
fields is governed by the quantum stationary Hamilton-Jacobi
equation (QSHJE), rather than the QHJE. Moreover, the fact that the
piecewise $r$ is constant implies that the component quantum
potentials are {\em zero}, resulting in classical HJE's and
trajectories. These conclusions are trivially correct for the
present paper, for which all components are plane waves; however,
the arguments also extend to arbitrary continuous potential
systems.\cite{poirier05bohmIII}
From a numerical perspective, the use of classical trajectories offers many
advantages over a conventional QTM propagation. To begin with, the trajectories
themselves are always smooth if $V(x)$ is smooth, resulting in far fewer
trajectories and larger time steps than would otherwise be the case. Even
more importantly, however, since the quantum potential is not required, there
is no need to compute on-the-fly numerical spatial derivatives of the local
hydrodynamic fields. Consequently, for a given component, the trajectories
are completely independent and need not communicate---again resulting in
fewer of them. Indeed, it is possible to perform an essentially exact
computation using only a {\em single trajectory per wavefunction component}.
This feature is particularly important for the very frontmost trajectory
of the initial ensemble, which for brief periods at later times, will
(via spawning) come to be the {\em only} trajectory to occupy a given
component LM. The subsequent evolution of these lone trajectories does not
require the presence of nearby trajectories.
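Since each plane-wave component carries zero quantum potential, a lone trajectory can indeed be propagated with purely classical updates. A minimal sketch (assumed units $\hbar = m = 1$ and an illustrative energy) accumulates Hamilton's principal function via $\dot s = pv - E$ along the path, with no neighboring trajectories required, and checks the result against the analytic $s(x,t) = px - Et$:

```python
# Assumed units and illustrative plane-wave component parameters
m, hbar = 1.0, 1.0
E = 2.0
p = (2 * m * E) ** 0.5                # component momentum

x0, dt, nsteps = -10.0, 0.01, 1000
x, s = x0, p * x0                     # s(x, 0) = p*x for this stationary component
for _ in range(nsteps):
    x += (p / m) * dt                 # classical trajectory: no neighbors needed
    s += (p * p / m - E) * dt         # ds/dt along the path = p*v - E

t = nsteps * dt
assert abs(x - (x0 + (p / m) * t)) < 1e-9
assert abs(s - (p * x - E * t)) < 1e-9
```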
The spawning of new trajectories, i.e. operation (2) above, also
bears further discussion. In principle, this is always achieved via
application of \eqs{tdrefl}{tdtrans}. For the above barrier case,
$R$ and $T$ are both real, ensuring the reality of $r$ and $s$ for
the spawned trajectories---although in the case of a down step,
$R<0$, resulting in a negative $r_{\text{refl}}$. This is in accord
with the conventions discussed in paper I. However, in this paper,
we find it numerically convenient to adopt the more usual $r>0$
convention. Thus, if \eq{tdrefl} yields a negative
$r_{\text{refl}}$, it is replaced with $-r_{\text{refl}}$, and $\pi$
is added to $s_{\text{refl}}$. A similar, but more complicated
modification is also applied to the below-barrier trajectories, for
which \eqs{tdrefl}{tdtrans} yield complex amplitudes. In this case,
the phase shift is $2 \delta$, as discussed in
Sec.~\ref{stepbelow} and the Appendix.
For a single step system, the algorithm is now essentially complete.
At the initial time $t=0$, a variable number of particles (or
synonymously, grid points) are distributed uniformly along the
$\Psi_{A+}$ manifold, to the left of $x= x_L$. The extent of these
points must be large enough that at the end of the propagation,
there are still $\Psi_{A+}$ grid points that have not yet reached
$x_L$. The grid spacing is mostly arbitrary, but must be small
enough that at sufficiently later times, there is always at least
one trajectory per component LM. The propagation is considered
complete when the reflected and transmitted wavefronts travel beyond
$x_L$ and $x_R$, respectively.
For multiple step systems, the situation is similar, but somewhat
more complex. The primary new feature is the {\em recombination} of
wavefunction components arising from two sources, i.e. from two
locally incident waves coming from opposite directions. The present
algorithm would seem to yield {\em two} subcomponent wavefunctions
for every component, each with its own set of trajectories. If left
``unchecked,'' this would lead to undesirable further
multifurcations for higher orders/later times. A simple solution
would be to propagate each subcomponent long enough that there is at
least one trajectory for each, then extrapolate the corresponding
subcomponent wavefunctions to a common position, where a new
trajectory is constructed for the superposed component wavefunction,
which is then propagated in lieu of the subcomponent trajectories.
This requires dynamical fitting (see below), or at the very least,
extrapolation. Although these numerical operations would be very
stable in the present context, to rule these out altogether as
sources of error in Sec.~\ref{results}, we have adopted a much
simpler approach---i.e. the grid spacing is chosen such that
trajectories from the two component waves incident on a given step
always arrive at the same time (Fig.~\ref{trajfig}). The
corresponding subcomponent wavefunction values are then simply added
together when forming the spawned trajectory. Adopting once again
the $r>0$ convention for the superposed component wave, $\Psi_\pm$,
the corresponding field values are then obtained via
$r=\sqrt{\Psi_\pm^{*}\Psi_\pm}$ and
$s=\arctan\of{\text{Im}(\Psi_\pm)/\text{Re}(\Psi_\pm)}$.
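The recombination step amounts to adding the two subcomponent values and re-extracting the field values under the $r>0$ convention. A minimal sketch (hypothetical subcomponent values; \texttt{atan2} is used as a branch-safe form of the arctangent expression, and $\hbar = 1$ is assumed so that phase and action coincide):

```python
import cmath, math

hbar = 1.0
# Hypothetical subcomponent values arriving at the same point and time
psi_1 = 0.3 * cmath.exp(1.2j)
psi_2 = 0.5 * cmath.exp(-0.4j)

psi = psi_1 + psi_2                          # superposed component wave

r = math.sqrt((psi.conjugate() * psi).real)  # r > 0 convention
s = hbar * math.atan2(psi.imag, psi.real)    # branch-safe arctangent of Im/Re

# The (r, s) pair reproduces the superposed wave exactly
assert abs(r * cmath.exp(1j * s / hbar) - psi) < 1e-12
```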
As discussed in Sec.~\ref{steps}, multiple step systems allow for
infinite reflections that perpetually modify $|\Psi(x,t)|^2$, in
principle for all time. In practice however, there is exponential
convergence within the region of interest, $x_L \le x \le x_R$, so
that one would not run the calculation indefinitely, but only until
the desired accuracy is reached. Accurate ``error bars'' on the
computed global $P_{\text{refl}}$ and $P_{\text{trans}}$ values are
conveniently provided by the magnitudes of the most recent
discontinuous jumps as recorded by the monitors at $x_L$ and $x_R$.
Note that the number of digits of accuracy scales only {\em
linearly} with propagation time. However, the rate of convergence
depends on the energy value. Near the barrier height, in particular,
convergence may take quite a long time, as the exponent is close to
zero. For all other energies, only a few ``cycles'' should be
required, depending on the level of accuracy desired.
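The error-bar bookkeeping can be illustrated with a synthetic monitor record that converges geometrically, mimicking the exponential convergence of the reflection cycles. The helper name and the jump ratio $q$ are illustrative assumptions, not values taken from the calculations below; for any jump ratio $q<1/2$, the magnitude of the most recent jump bounds the true remaining error:

```python
def error_bar(history):
    """Error estimate for a converging monitor record:
    the magnitude of the most recent discontinuous jump."""
    return abs(history[-1] - history[-2])

# synthetic monitor record converging geometrically to P = 1.0;
# digits of accuracy grow linearly with the cycle count n
q, P_inf = 0.3, 1.0
history = [P_inf * (1 - q**n) for n in range(1, 8)]

bar = error_bar(history)
true_err = abs(P_inf - history[-1])
assert true_err < bar    # last jump bounds the remaining error (q < 1/2)
```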
If in addition to reflection and transmission probabilities, the
actual stationary solution over the region of interest is also
desired, then it is necessary to reconstruct $\Psi(x)$. This is
obtained from the final grid, after the propagation is finished,
using a multiple step generalization of \eq{stepwhole}. The first
step is to reconstruct the component wavefunctions $\PApm(x)$,
$\PBpm(x)$, etc., via interpolation or fitting of the hydrodynamic
field values from the corresponding dynamical grid points onto a
much finer common grid (used e.g. for plotting purposes). The second
step is to linearly superpose the $\pm$ components onto the plotting
grid, and to assemble the pieces together over the coordinate range
of interest. For the discontinuous systems considered here, the
number of dynamical grid points per component can be as small as
one---i.e. much smaller, even, than the number of wavelengths! To
our knowledge, such performance has never been achieved previously
by a QTM; however, it does require that the plotting grid be much
finer than the dynamical grid, e.g. at least several points per
wavelength, in order to adequately represent the interference
fringes of the superposed solution, $\Psi(x)$.
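The two-stage reconstruction can be sketched for a pair of plane-wave components, illustrating the "fewer trajectories than wavelengths" point: two dynamical grid points per component suffice, while the fine plotting grid resolves the fringes. This is an illustrative sketch (the helper and grids are our own assumptions, not the production algorithm):

```python
import math, cmath

def interp(xg, yg, x):
    """Piecewise-linear interpolation of field values yg given on grid xg."""
    for i in range(len(xg) - 1):
        if xg[i] <= x <= xg[i + 1]:
            t = (x - xg[i]) / (xg[i + 1] - xg[i])
            return (1 - t) * yg[i] + t * yg[i + 1]
    raise ValueError("x outside grid")

k = 30.0                        # many wavelengths across the box (hbar = 1)
xg = [0.0, 1.0]                 # sparse dynamical grid: two points per component
r_field = [1.0, 1.0]            # r(x) = 1 for both plane-wave components
s_plus  = [k * x for x in xg]   # action field s(x) = +k x
s_minus = [-k * x for x in xg]  # action field s(x) = -k x

fine = [i / 200.0 for i in range(201)]      # fine plotting grid
psi = [interp(xg, r_field, x) * cmath.exp(1j * interp(xg, s_plus, x))
       + interp(xg, r_field, x) * cmath.exp(1j * interp(xg, s_minus, x))
       for x in fine]

# the superposition reproduces the interference fringes of 2*cos(k x)
err = max(abs(p - 2 * math.cos(k * x)) for p, x in zip(psi, fine))
assert err < 1e-9
```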
\section{RESULTS}
\label{results}
In this section, we apply the numerical algorithm previously
described to three different applications: the up-step potential,
the square barrier, and the square well.
\subsection{Up-step Potential}
\label{upstep}
The first system considered is the up-step potential, i.e.
\eq{steppot} with $V_0 >0$. Since there are no multiple reflections,
this is in principle a trivial application for the current
algorithm; it therefore serves as a useful numerical test. Both
above barrier (Sec.~\ref{stepabove}) and below barrier
(Sec.~\ref{stepbelow}) energies are considered. We choose
molecular-like values for the constants, i.e. $V_0 = 0.009$ hartree,
and $m=2000$ a.u.
The left and right edges of the region of interest are taken to be
$x_L = -1.0$ a.u. and $x_R = 1.0$ a.u., respectively. At the initial
time, $t=0$, 51 trajectory grid points are distributed uniformly
over the interval $-4 \le x \le -1$ (grid spacing of $0.06$ a.u.).
This number is far greater than what would be needed for dynamical
purposes, but is chosen so as to avoid construction of a separate
plotting grid (Sec.~\ref{numerics}). The hydrodynamic field
functions for the initial $\Psi_{A+}(x)$ wavepacket over the above
interval are taken to be $r(x)=1$ a.u.$^{-1}$ and $s(x) = \sqrt{2mE}\, x$.
For the above barrier calculation, the energy $E = 2 V_0 = 0.018$
hartree was used. The trajectory propagation and termination were
performed exactly as described in Sec.~\ref{numerics}. The real and
imaginary parts of all three resultant wavefunction components [i.e.
$\PApm(x)$ and $\PBp(x)$] at the final time, $t=550$ a.u., are
presented in Fig.~\ref{stepabovefig}. All three components exhibit
the desired plane wave behavior, e.g. no interference is evident
within a given component. The resultant $\Psi(x)$ does exhibit
interference in the $A$ region, however, arising from the
superposition of $\PAp$ and $\PAm$.
For the below barrier calculation, the system was given an energy
equal to one half of the barrier height, i.e. $E = V_0/2 = 0.0045$.
As per the discussion in Secs.~\ref{stepbelow} and~\ref{numerics},
tunneling into the forbidden region $B$ is achieved, not through a
quantum potential, but via analytic continuation. At sufficiently
large time ($t=1100$ a.u.) the final wavefunction is reconstructed
from the components, i.e. $\Psi_A(x) = \PAp(x) + \PAm(x)$, and
$\Psi_B(x) = \PBp(x)$. The real and imaginary parts of the
reconstructed wavefunction are presented in Fig.~\ref{stepbelowfig}.
In the figure, squares and circles denote the numerical results
obtained via the present algorithm, whereas the solid and dashed
lines represent the well-known analytic solutions. The agreement is
essentially exact. Note that the real and imaginary parts are in
phase throughout the coordinate range---i.e., $S(x)$ is a constant,
so apart from a phase factor, $\Psi(x)$ is real. Note also that the
tunneling region exhibits the desired exponential decay.
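The textbook matching conditions behind the "well-known analytic solutions" used for comparison can be checked directly. This is a sketch in atomic units ($\hbar=1$); the variable names are ours, and the amplitudes are the standard step-potential results, not output of the trajectory algorithm:

```python
import math

m, V0 = 2000.0, 0.009            # a.u., as in the text
E = V0 / 2                       # below-barrier energy
k     = math.sqrt(2 * m * E)          # wavevector in region A
kappa = math.sqrt(2 * m * (V0 - E))   # decay constant in region B

# matching at x = 0 for  psi_A = e^{ikx} + R e^{-ikx},  psi_B = T e^{-kappa x}
R = (k - 1j * kappa) / (k + 1j * kappa)
T = 2 * k / (k + 1j * kappa)

assert abs(abs(R) - 1) < 1e-12                    # total reflection: |R|^2 = 1
assert abs((1 + R) - T) < 1e-12                   # continuity of psi at the step
assert abs(1j * k * (1 - R) + kappa * T) < 1e-9   # continuity of psi'
```

Since $|R|=1$, the full solution is real apart from an overall phase, consistent with the in-phase real and imaginary parts noted above.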
\subsection{Square Barrier}
\label{squarebarrier}
The second system considered is the square barrier. This is a
two-step potential ($l=2$), with $V_A = V_C = 0$, and $V_B = V_0>0$.
The two steps comprise the left and right edges of the barrier, at
$x_1 = 0$ and $x_2=w$, respectively. The constants are chosen as
follows: $V_0 = 0.018$ hartree; $m=2000$ a.u.; $w=1$; $x_L=-1$ a.u.,
$x_R=2$ a.u. Initially, 75 trajectory grid points are distributed
uniformly over the interval $-5 \le x \le -1$ (grid spacing of
$0.05$ a.u.), which again, is far more than are dynamically
required. The same initial hydrodynamic field functions are used as
in Sec.~\ref{upstep}.
Both above barrier ($E>V_0$) and below barrier ($E<V_0$) energies
are considered. For the above barrier case, $E = 2 V_0 = 0.036$
hartree. Once again, the trajectory propagation and termination were
performed exactly as described in Sec.~\ref{numerics}. In order to
converge $P_{\text{trans}}$ to $10^{-4}$, a propagation time of $3000$
a.u. was required. This corresponds to 3 complete cycles, i.e. a 3rd-order
calculation. Figure~\ref{trajfig}
is a plot of the quantum trajectories for this calculation, in which
every fifth trajectory for each of the five component wavefunctions
is indicated. Trajectory spawning at the two steps is very clearly
evident, as is recombination of pairs of incident waves (indicated
by circles). On the whole, this figure demonstrates all of the
anticipated analogies with ray optics, i.e. parallel trajectories,
reflection and refraction.
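The ray-optics picture implies that the global transmission follows from summing the internal reflections as a geometric series, exactly as in Fabry--Perot optics. The sketch below (our own, using textbook single-step amplitudes rather than the trajectory algorithm) confirms that the summed series reproduces the closed-form square-barrier transmission probability:

```python
import cmath, math

m, V0, w = 2000.0, 0.018, 1.0        # a.u., as in the text
E = 2 * V0                            # above-barrier energy
kA = math.sqrt(2 * m * E)             # wavevector outside the barrier (hbar = 1)
kB = math.sqrt(2 * m * (E - V0))      # wavevector inside the barrier

# single-step plane-wave amplitudes
t1 = 2 * kA / (kA + kB)               # transmission into region B
t2 = 2 * kB / (kA + kB)               # transmission out of region B
r_in = (kB - kA) / (kB + kA)          # internal reflection at either edge

# geometric (multiple-reflection) series for the global transmission amplitude
phase = cmath.exp(1j * kB * w)
t_glob = t1 * t2 * phase / (1 - r_in**2 * phase**2)
P_trans = abs(t_glob)**2

# closed-form square-barrier result
P_exact = 1 / (1 + V0**2 * math.sin(kB * w)**2 / (4 * E * (E - V0)))
assert abs(P_trans - P_exact) < 1e-12
```

Each term of the series corresponds to one "cycle" of the calculation, which is why the convergence rate is set by $|r_{\rm in}|^2$.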
In Fig.~\ref{sbabovefig}, the time evolution of the superposition
state $\Psi(x,t)$ is represented, via snapshots of the real and
imaginary parts at seven different times. At $t=0$ a.u., the
incident wavefront is located at $x=-1$ a.u. By $t=250$ a.u., the
wavefront has spawned $\PAm(x)$ and $\PBp(x)$ trajectories; the
former gives rise to the kink (really a discontinuity) somewhat to
the left of the first step. By $t=550$ a.u. and $t=650$ a.u., the
$\PAm(x)$ wavefront has moved outside the region of interest, though
the $\PBp(x)$ wavefront has not quite reached the second step. After
it does so, two new wavefronts are propagated along the $\PCp(x)$
and $\PBm(x)$ LMs (e.g. $t=800$ a.u.), the former of which
propagates beyond the region of interest by $t=900$ a.u. Subsequent
discontinuity magnitudes become exponentially smaller, so that at
sufficiently large time (i.e. $t=2000$ a.u.), the resultant
$\Psi(x)$ has converged to the correct stationary solution.
For the below barrier case, $E = V_0/2 = 0.009$ hartree, $x_L=-0.5$
a.u., and the other parameters are as above except $w=0.5$ a.u.
Within the barrier, there are in principle an arbitrary number of
reflections back and forth as before. However, substantial amplitude
loss occurs due to tunneling, in addition to partial reflection, as
a result of which fewer cycles are required in order to achieve the
same $10^{-4}$ level of convergence ($t=1400$ a.u., or 2 cycles).
Fig.~\ref{complexfig} indicates how the tunneling dynamics is
achieved. After the wavefront hits
the first step, the transmitted $\PBp$ wave is propagated along the
imaginary axis ($iy$), until the point $y=w$ is reached. When this
occurs, it is necessary to analytically continue $\PBp$ down to the
real axis, in order to compute amplitudes for the new $\PCp(x)$ and
$\PBm(x)$ trajectories that are spawned at the second step. The
latter component propagates in the (negative) imaginary direction,
along $w - iy'$, until $y'=w$, at which point analytic continuation
is once more applied (this time for the first step), and the pattern
repeated.
Seven snapshots of the superposition density, $|\Psi(x,t)|^2$, are
displayed in Fig.~\ref{sbbelowfig}. The initial density---equal to
just the $|\PAp(x)|^2$ density---is uniform. After the wavefront
encounters the first step, a reflected $\PAm(x)$ emerges, giving
rise to clearly evident interference in the region $A$. The
$\Psi_B(x)=\PBp(x)$ wave is at this stage perfectly exponentially
damped. Upon encountering the second step, a second contribution,
$\PBm(x)$ emerges; however, this first-order correction is already
extremely small, owing to the large amount of tunneling that has
occurred. The global transmitted wave, $\PCp(x)$, though small, is
clearly seen to have uniform density.
In addition to the two detailed trajectory calculations described
above, we computed $P_{\text{refl}}$ and $P_{\text{trans}}$ for a
large range of $w$ and $E$ values, so as to fully explore (without
loss of generality) the entire range of the square barrier problem.
The numerical results are presented, and compared with known
analytical values,\cite{gasiorowitz} in Fig.~\ref{sbtransreflfig}.
Two aspects of this study bear comment. First, for all $w$ and $E$
values considered, the computed $P_{\text{refl}}$ and
$P_{\text{trans}}$ values agree with the exact values to within an
error comparable to that predicted by the level of numerical
convergence. In particular, the oscillatory energy dependence is
perfectly reproduced. Second, the closer the barrier peak is
approached from either above or below in energy, the longer the time
required to achieve a given level of convergence, as predicted in
Sec.~\ref{numerics}. In particular, for the calculations closest to
the barrier peak, 5-7 cycles were required in order to approximately
maintain a $10^{-4}$ convergence of the transmission probability.
Although a greater number of particles are required in this case,
this poses no great limitation in practice, since one would
presumably never require a calculation precisely at the peak energy.
\subsection{Square Well}
\label{squarewell}
As the final system, we consider the square-well potential, i.e. the
square barrier but with $V_0<0$. In the scattering state context,
there is no tunneling for this system, but in other respects it
resembles the square barrier. From an optics point of view, the
square well corresponds to a central medium with larger index of
refraction than its surroundings, whereas the square barrier
corresponds to a smaller index of refraction, giving rise to the
possibility of total internal reflection (i.e. tunneling). The
parameters are as in Sec.~\ref{squarebarrier}, except that
$V_0=0.009$ hartree, and three different $w$ values are considered:
$w=2$ a.u., $w=4$ a.u., and $w=16$ a.u.
As the time evolution and trajectory pictures are similar to those
of the previous sections, we focus only on the $P_{\text{refl}}$ and
$P_{\text{trans}}$ calculations, for which once again, a large range
of energies was considered ($0.0005<E<0.2$ hartree). The number of
initial trajectories ranged from 50 to 200, for the highest to the
lowest energies, respectively. The computed transmission/reflection
probabilities were again converged to $10^{-4}$.
The energy-resolved reflection and transmission probabilities are
presented in Fig.~\ref{swtransreflfig}.
As in the square barrier case, excellent agreement is achieved with
the exact analytical results, i.e. on the order of the level of
convergence. This is true despite the fact that the square well
energy curves are decidedly more oscillatory than the square barrier
curves---particularly for wide barriers, for which the $E$
dependence is very sensitive indeed. One particularly important
feature exhibited by the exact curves is the so-called
Ramsauer-Townsend effect,\cite{gasiorowitz} i.e. the phenomenon of 100\%
transmission and zero reflection, even at very low energies.
This occurs when
$\sin(2k_{B}\,w)=0$, and may be regarded as a purely quantum
mechanical resonance phenomenon. Yet it is reproduced perfectly here
in the bipolar decomposition, using {\em classical trajectories}.
Indeed, the bipolar picture provides an interesting physical
explanation, i.e. the right-incident and left-incident waves of the
first step give rise to spawned contributions to $\PAm(x)$ that
exactly cancel each other out via destructive interference.
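The Ramsauer--Townsend resonances can be located from the textbook square-well transmission formula. The sketch below is our own check (well depth and width as in the text, resonance index chosen so that $E>0$) that transmission is exactly 100\% when $k_B w$ is a multiple of $\pi$, one of the zeros of $\sin(2k_B w)$:

```python
import math

m, V0, w = 2000.0, 0.009, 2.0     # well depth V0 and width w, in a.u.
n = 4                              # resonance index chosen so that E > 0
kB = n * math.pi / w               # interior wavevector at the resonance
E = kB**2 / (2 * m) - V0           # corresponding incident energy (hbar = 1)
assert E > 0

# textbook square-well transmission probability
P_trans = 1 / (1 + V0**2 * math.sin(kB * w)**2 / (4 * E * (E + V0)))
assert abs(P_trans - 1) < 1e-12    # 100% transmission at the resonance
```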
\section{SUMMARY AND CONCLUSIONS}
\label{conclusion}
As described in paper~I, the Schr\"odinger\ equation is linear, yet the
equivalent QHEM---obtained via substitution of the Madelung-Bohm
ansatz into the Schr\"odinger\ equation---are not. This aspect of Bohmian
mechanics suggests that it can be beneficial, both from a
pedagogical and a computational perspective, to apply a suitable
bifurcation (or ``multifurcation'') to the wavefunction prior to
applying the QHEM. Indeed, following the paper~I CPWM bipolar
decomposition for 1D stationary bound states,\cite{poirier04bohmI} the
quantum trajectories become more well-behaved and classical-like,
in precisely the limit in which there are more nodes, and the usual
unipolar calculation breaks down. Moreover, the resultant component
LMs admit a natural physical interpretation in terms of the
corresponding semiclassical LMs.
In the generalization to the stationary scattering states considered
here, a somewhat different bipolar decomposition is found to be
required. The new decomposition is still unique, at least for
discontinuous potentials. However, the resultant components
$\Ppm(x)$ are no longer solutions to the Schr\"odinger\ equation in their
own right, as a result of which their time evolution is coupled. The
new scheme---though fundamentally different from the old
one---nevertheless bears a correspondence to a modified version of
semiclassical theory appropriate for scattering systems.
Curiously, this semiclassical modification is
{\em not} simply a higher order treatment in $\hbar$;\cite{berry72} if it
were, the corresponding exact quantum modification considered here
would not exist.
In any event, the new decomposition also gives rise to its own
physical interpretation, specific to the scattering context. In
particular, for right-incident boundary conditions, the left and
right asymptotes of $\Psi_+(x)$ respectively represent incident and
transmitted waves, whereas the left asymptote of $\Psi_-(x)$
represents the reflected wave [the right asymptote of $\Psi_-(x)$
approaches zero]. That these conceptually useful asymptotic bipolar
assignments---found even in the most elementary treatments of
scattering---may be extended {\em throughout configuration space}
[even for continuous potentials (paper~III)] represents an important
leap forward, especially for QTMs.
Another pedagogically and numerically useful development from this
approach is the inherently time-dependent ray optics interpretation
that naturally arises, particularly in the discontinuous potential
context. The ray optics approach is anticipated to be a relevant
guiding force in subsequent generalizations of the present
methodology, i.e. to continuous, multidimensional potential systems,
and---as per the discussion in Secs.~\ref{hardwall}
and~\ref{stepabove}---for non-stationary wavepacket dynamics. This
approach also provides a much simpler trajectory-based explanation
of scattering terminology as applied in a stationary context---e.g.
why the ``reflected wave'' is so-called, despite being present from
the earliest times---than those traditionally used.\cite{taylor}
The discontinuous potential applications considered in this
paper---hard wall (Sec.~\ref{hardwall}), step potential
(Secs.~\ref{stepabove}, \ref{stepbelow}, and~\ref{upstep}), and
square barrier/well (Secs.~\ref{steps}, \ref{squarebarrier},
and~\ref{squarewell})---are significant for several reasons. To
begin with, these are the first discontinuous applications of a
genuine QTM calculation that have ever been performed, to the
authors' knowledge. The singular derivatives associated with
discontinuous potential functions would wreak havoc with standard
numerical differentiation routines. Second, the use of a
time-{\em dependent} method for stationary, or time-{\em independent},
applications, is also significant. Ordinarily, the time dependence of
stationary states is regarded as trivial. In the present context,
this is true in a sense for the hard wall and step potential
systems, because the correct answer is ``built in'' to the method
itself. For multiple step systems, however, the dynamical truncated
wave approach, i.e. the gedankenexperiment introduced in
Sec.~\ref{hardwall} and further developed in later sections,
yields decidedly nontrivial results.
In particular, the algorithm uses only {\em single step} scattering
coefficients to obtain global scattering quantities for {\em
multiple step} systems. In effect, the time dependent nature of this
approach allows computation of {\em global} properties using a
completely {\em local} method. Not only were exact quantum results
obtained for a full range of system parameters, but the numerical
resources necessary to achieve this---i.e. the number of
trajectories and time steps---were decidedly minimal. Indeed, the
algorithm lives up to the promise made in paper~I, of performing an
accurate quantum calculation with {\em fewer trajectories than
nodes}---a prospect virtually unheard of in a unipolar context.
In future publications, we will naturally attempt to generalize the
methodology described here and in paper~III, for the type of
multidimensional time-dependent wavepacket dynamics relevant to
chemical physics applications. In this context, the scattering
version of the CPWM decomposition developed here is an absolutely
essential first step, as reactive scattering is the underpinning of
all chemical reactions. Additional discussion and motivation will be
provided in paper~III.
\section*{ACKNOWLEDGEMENTS}
This work was supported by awards from The Welch Foundation
(D-1523) and Research Corporation. The authors would like to acknowledge
Robert E. Wyatt and Eric R. Bittner for many stimulating discussions.
David J. Tannor and John C. Tully are also acknowledged.
\section*{Appendix: Bipolar decomposition of semi-bound states}
As discussed in Secs.~\ref{scattering}, \ref{hardwall},
and~\ref{stepbelow}, semi-bound stationary states in 1D are bounded
on one side only, as a result of which they are real-valued and
singly-degenerate, like bound states. Consequently, they are
amenable to the CPWM bipolar decomposition scheme introduced in
paper~I. In this appendix, we apply this decomposition to two
semi-bound systems: the hard wall system, and the below-barrier
up-step system.
\subsection{Hard wall system}
From \Ref{poirier04bohmI}, the most general bipolar decomposition of a hard
wall stationary state---corresponding to Sec.~\ref{bipolar}
condition (1) only---is found to satisfy
\begin{equation}
 -\cot\!\left[ s(x)/\hbar \right] =
 \left( {m F \over \hbar} \right) \left[ {-\cot(k x) \over k} + B \right],
\end{equation}
where
$s(x) = s_+(x) = -s_-(x)$, and $r_+(x) = r_-(x)$ is obtained from
$s(x)$ via \eq{scrs} (without ``sc'' subscripts). The arbitrary
parameters $F$ and $B$ are the invariant flux and median action
parameters associated with conditions~(2) and (3),
respectively,\cite{poirier04bohmI} although the definition of $F$ has been
changed slightly to account for the scattering normalization
convention, $r_+(x) = 1$. Note that {\em only} the semiclassical
values for these parameters yields a solution that satisfies the
correspondence principle in the large action (i.e. $k$) limit. In
particular, the choice $B=0$ and $F= \hbar k/m$ yields the desired
semiclassical result, $s(x) = \hbar k x$; all other choices exhibit
undesirable oscillatory behavior in $r_\pm(x)$, $s_\pm(x)$, and
$q_\pm(x)$.
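The semiclassical parameter choice can be verified numerically: with $B=0$ and $F=\hbar k/m$, the candidate $s(x)=\hbar k x$ satisfies the transcendental relation above identically. A sketch in atomic units ($k$ chosen arbitrarily; the sample points avoid the zeros of $\sin(kx)$, where the cotangents diverge):

```python
import math

hbar, m, k = 1.0, 2000.0, 6.0        # a.u.; k is arbitrary here
F, B = hbar * k / m, 0.0             # semiclassical flux and median-action choices

def s(x):
    return hbar * k * x              # candidate semiclassical action

for x in [0.1, 0.37, 1.2, 2.9]:      # avoid zeros of sin(k x)
    lhs = -1 / math.tan(s(x) / hbar)
    rhs = (m * F / hbar) * (-1 / math.tan(k * x) / k + B)
    assert abs(lhs - rhs) < 1e-9     # relation reduces to -cot(kx) = -cot(kx)
```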
\subsection{Up-step system}
For the hard wall system considered above---which is just the
special case of the up-step potential in the limit $V_0 \rightarrow
\infty$---exact agreement is achieved between semiclassical and
quantum LM's in the $x<0$ region. This is the only region of
interest for the hard wall system; however, for finite $V_0$
values---i.e. for general below-barrier up-step stationary
states---there is of course also tunneling into the forbidden
region, which must be accounted for. The paper~I bipolar
decomposition therefore results in LM's that span the {\em entire}
coordinate range $-\infty < x < \infty$. These LMs are given by the
following analytical expression:
\begin{equation}
 p(x) = \begin{cases}
 \hbar k & \text{for $x \le 0$;} \\[1.5ex]
 \dfrac{2 \hbar \kappa\, e^{2 \kappa x} \sin(2\delta)}
       {1 + e^{4 \kappa x} - 2 e^{2 \kappa x}\cos(2\delta)} & \text{for $x>0$,}
 \end{cases}
\label{ptun}
\end{equation}
where $\delta$ is given by
\begin{equation}
\tan \delta = {\kappa \over k} = \sqrt{{V_0 \over E} - 1},
\label{deltaeqn}
\end{equation}
and
\begin{equation}
\kappa = \sqrt{2 m (V_0 - E)}. \label{kappaeqn}
\end{equation}
The $p(x)$ LM function of \eq{ptun} is continuous everywhere,
including at the potential discontinuity at $x=0$. In the $A$
region, it agrees exactly with the semiclassical solution; in the
$B$ region, it decays exponentially to zero.
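A quick numerical check of the tunneling LM just described (atomic units, parameters as in the below-barrier up-step example; the function name is ours): $p(x)$ is continuous at the step, where both branches reduce to $\hbar k$, and decays exponentially in the forbidden region:

```python
import math

m, V0 = 2000.0, 0.009
E = V0 / 2
hbar = 1.0                               # atomic units
k     = math.sqrt(2 * m * E) / hbar
kappa = math.sqrt(2 * m * (V0 - E)) / hbar
delta = math.atan(kappa / k)             # tan(delta) = kappa / k

def p(x):
    """Bipolar LM momentum: hbar*k for x <= 0, tunneling branch for x > 0."""
    if x <= 0:
        return hbar * k
    e2 = math.exp(2 * kappa * x)
    return (2 * hbar * kappa * e2 * math.sin(2 * delta)
            / (1 + e2**2 - 2 * e2 * math.cos(2 * delta)))

# continuity across the potential discontinuity at x = 0
assert abs(p(1e-8) - hbar * k) < 1e-3
# exponential decay deep in the forbidden region
assert p(1.0) < 1e-3 * hbar * k
```

Analytically, the $x\rightarrow 0^+$ limit is $\hbar\kappa\cot\delta = \hbar k$, which is what the first assertion verifies.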
The paper~I approach thus yields a very natural way to extend
trajectories into the tunneling region. Note that the quantum
potential in this region is not zero; indeed, it exhibits a
discontinuity at $x=0$ that exactly balances that of $V(x)$ itself,
so that the bipolar modified potential is continuous across the
step. Unlike the above-barrier case, the paper~I solution does {\em
not} manifest oscillatory behavior in the large action limit, and so
this approach would at first glance appear to be ideal. There are
two reasons, however, why it is not pursued here. The first reason
is that $r(x)$ diverges asymptotically as $x\rightarrow\infty$, which
according to preliminary numerical investigations, appears to lead
to numerical instabilities for completely QTM-based propagation
schemes. Second, if the barrier were to fall off again at larger $x$
values, so that the tunneling region were finite, then the
asymptotic behavior would be once again undesirably oscillatory.
This would be the case, for example, for the below-barrier energies
of the square barrier system of Sec.~\ref{squarebarrier}.
\end{document} |
\begin{document}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{prop}[theorem]{Proposition}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{cor}[theorem]{Corollary}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{conj}[theorem]{Conjecture}
\newtheorem{rmk}[theorem]{Remark}
\newtheorem{claim}[theorem]{Claim}
\newtheorem{defth}[theorem]{Definition-Theorem}
\newcommand{\partial}{\partial}
\newcommand{\C}{{\mathbb C}}
\newcommand{{\mathbb Z}}{{\mathbb Z}}
\newcommand{{\mathbb N}}{{\mathbb N}}
\newcommand{{\mathbb Q}}{{\mathbb Q}}
\newcommand{{\mathbb R}}{{\mathbb R}}
\newcommand{{\mathbb P}}{{\mathbb P}}
\newcommand{{\mathbb L}}{{\mathbb L}}
\newcommand{{\mathbb T}}{{\mathbb T}}
\newcommand{{\mathbb P}}{{\mathbb P}}
\newcommand\AAA{{\mathcal A}}
\newcommand\BB{{\mathcal B}}
\newcommand\CC{{\mathcal C}}
\newcommand\DD{{\mathcal D}}
\newcommand\EE{{\mathcal E}}
\newcommand\FF{{\mathcal F}}
\newcommand\GG{{\mathcal G}}
\newcommand\HH{{\mathcal H}}
\newcommand\II{{\mathcal I}}
\newcommand\JJ{{\mathcal J}}
\newcommand\KK{{\mathcal K}}
\newcommand\LL{{\mathcal L}}
\newcommand\MM{{\mathcal M}}
\newcommand\NN{{\mathcal N}}
\newcommand\OO{{\mathcal O}}
\newcommand\PP{{\mathcal P}}
\newcommand\QQ{{\mathcal Q}}
\newcommand\RR{{\mathcal R}}
\newcommand\SSS{{\mathcal S}}
\newcommand\TT{{\mathcal T}}
\newcommand\UU{{\mathcal U}}
\newcommand\VV{{\mathcal V}}
\newcommand\WW{{\mathcal W}}
\newcommand\XX{{\mathcal X}}
\newcommand\YY{{\mathcal Y}}
\newcommand\ZZ{{\mathcal Z}}
\newcommand\CH{{\CC\HH}}
\newcommand\PEY{{\PP\EE\YY}}
\newcommand\MF{{\MM\FF}}
\newcommand\RCT{{{\mathcal R}_{CT}}}
\newcommand\PMF{{\PP\kern-2pt\MM\FF}}
\newcommand\FL{{\FF\LL}}
\newcommand\PML{{\PP\kern-2pt\MM\LL}}
\newcommand\GL{{\GG\LL}}
\newcommand\Pol{{\mathcal P}}
\newcommand\half{{\textstyle{\frac12}}}
\newcommand\Half{{\frac12}}
\newcommand\Mod{\operatorname{Mod}}
\newcommand\Area{\operatorname{Area}}
\newcommand\ep{\epsilon}
\newcommand\hhat{\widehat}
\newcommand\Proj{{\mathbf P}}
\newcommand\U{{\mathbf U}}
\newcommand\Hyp{{\mathbf H}}
\newcommand\D{{\mathbf D}}
\newcommand\Z{{\mathbb Z}}
\newcommand\R{{\mathbb R}}
\newcommand\Q{{\mathbb Q}}
\newcommand\E{{\mathbb E}}
\newcommand\til{\widetilde}
\newcommand\length{\operatorname{length}}
\newcommand\tr{\operatorname{tr}}
\newcommand\gesim{\succ}
\newcommand\lesim{\prec}
\newcommand\simle{\lesim}
\newcommand\simge{\gesim}
\newcommand{\asymp}{\asymp}
\newcommand{\simadd}{\mathrel{\overset{\text{\tiny $+$}}{\sim}}}
\newcommand{\setminus}{\setminus}
\newcommand{\diam}{\operatorname{diam}}
\newcommand{\pair}[1]{\langle #1\rangle}
\newcommand{\T}{{\mathbf T}}
\newcommand{\inj}{\operatorname{inj}}
\newcommand{\operatorname{\mathbf{pleat}}}{\operatorname{\mathbf{pleat}}}
\newcommand{\operatorname{\mathbf{short}}}{\operatorname{\mathbf{short}}}
\newcommand{\operatorname{vert}}{\operatorname{vert}}
\newcommand{\operatorname{\mathbf{collar}}}{\operatorname{\mathbf{collar}}}
\newcommand{\operatorname{\overline{\mathbf{collar}}}}{\operatorname{\overline{\mathbf{collar}}}}
\newcommand{\I}{{\mathbf I}}
\newcommand{\prect}{\prec_t}
\newcommand{\precf}{\prec_f}
\newcommand{\precb}{\prec_b}
\newcommand{\precp}{\prec_p}
\newcommand{\precpeq}{\preceq_p}
\newcommand{\precs}{\prec_s}
\newcommand{\preceqc}{\preceq_c}
\newcommand{\precc}{\prec_c}
\newcommand{\prectop}{\prec_{\rm top}}
\newcommand{\Topprec}{\prec_{\rm TOP}}
\newcommand{\mathrel{\scriptstyle\searrow}}{\mathrel{\scriptstyle\searrow}}
\newcommand{\mathrel{\scriptstyle\swarrow}}{\mathrel{\scriptstyle\swarrow}}
\newcommand{\mathrel{\scriptstyle\searrow}d}{\mathrel{{\scriptstyle\searrow}\kern-1ex^d\kern0.5ex}}
\newcommand{\mathrel{\scriptstyle\swarrow}d}{\mathrel{{\scriptstyle\swarrow}\kern-1.6ex^d\kern0.8ex}}
\newcommand{\mathrel{\scriptstyle\searrow}eq}{\mathrel{\raise-.7ex\hbox{$\overset{\searrow}{=}$}}}
\newcommand{\mathrel{\scriptstyle\swarrow}eq}{\mathrel{\raise-.7ex\hbox{$\overset{\swarrow}{=}$}}}
\newcommand{\tw}{\operatorname{tw}}
\newcommand{\operatorname{base}}{\operatorname{base}}
\newcommand{\operatorname{trans}}{\operatorname{trans}}
\newcommand{|_}{|_}
\newcommand{\overline}{\overline}
\newcommand{\operatorname{\UU\MM\LL}}{\operatorname{\UU\MM\LL}}
\newcommand{\mathcal{EL}}{\mathcal{EL}}
\newcommand{\tsum}{\sideset{}{'}\sum}
\newcommand{\tsh}[1]{\left\{\kern-.9ex\left\{#1\right\}\kern-.9ex\right\}}
\newcommand{\Tsh}[2]{\tsh{#2}_{#1}}
\newcommand{\mathrel{\approx}}{\mathrel{\approx}}
\newcommand{\Qeq}[1]{\mathrel{\approx_{#1}}}
\newcommand{\lesssim}{\lesssim}
\newcommand{\Qle}[1]{\mathrel{\lesssim_{#1}}}
\newcommand{\operatorname{simp}}{\operatorname{simp}}
\newcommand{\operatorname{succ}}{\operatorname{succ}}
\newcommand{\operatorname{pred}}{\operatorname{pred}}
\newcommand\fhalf[1]{\overrightarrow {#1}}
\newcommand\bhalf[1]{\overleftarrow {#1}}
\newcommand\sleft{_{\text{left}}}
\newcommand\sright{_{\text{right}}}
\newcommand\sbtop{_{\text{top}}}
\newcommand\sbot{_{\text{bot}}}
\newcommand\sll{_{\mathbf l}}
\newcommand\srr{_{\mathbf r}}
\newcommand\geod{\operatorname{\mathbf g}}
\newcommand\mtorus[1]{\partial U(#1)}
\newcommand\A{\mathbf A}
\newcommand\Aleft[1]{\A\sleft(#1)}
\newcommand\Aright[1]{\A\sright(#1)}
\newcommand\Atop[1]{\A\sbtop(#1)}
\newcommand\Abot[1]{\A\sbot(#1)}
\newcommand\boundvert{{\partial_{||}}}
\newcommand\storus[1]{U(#1)}
\newcommand\Momega{\omega_M}
\newcommand\nomega{\omega_\nu}
\newcommand\twist{\operatorname{tw}}
\newcommand\modl{M_\nu}
\newcommand\MT{{\mathbb T}}
\newcommand{\mathbf T}eich{{\mathcal T}}
\renewcommand{\Re}{\operatorname{Re}}
\renewcommand{\Im}{\operatorname{Im}}
\title{Moebius rigidity for compact deformations of negatively curved manifolds}
\author{Kingshook Biswas}
\address{Indian Statistical Institute, Kolkata, India. Email: [email protected]}
\begin{abstract} Let $(X, g_0)$ be a complete, simply connected Riemannian manifold with sectional curvatures $K_{g_0}$
satisfying $-b^2 \leq K_{g_0} \leq -1$ for some $b \geq 1$. Let $g_1$ be a Riemannian metric on $X$ such that $g_1 = g_0$ outside
a compact set in $X$, and with sectional curvatures $K_{g_1}$ satisfying $K_{g_1} \leq -1$. The identity map $id : (X, g_0) \to (X, g_1)$
is bi-Lipschitz, and hence induces a homeomorphism between the boundaries at infinity of $(X, g_0)$ and $(X, g_1)$, which we denote by
$\hat{id}_{g_0, g_1} : \partial_{g_0} X \to \partial_{g_1} X$. We show that if the boundary map $\hat{id}_{g_0, g_1}$ is Moebius
(i.e. preserves cross-ratios), then it extends to an isometry $F : (X, g_0) \to (X, g_1)$.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
In various rigidity problems for negatively curved spaces, the interplay between the geometry of the space and the geometry of its
boundary at infinity plays a prominent role. For a CAT(-1) space $X$, there is a positive function called the {\it cross-ratio} defined
on the space of quadruples of distinct points in the boundary $\partial X$, and a well-known problem asks whether the cross-ratio in fact
determines the space up to isometry. More precisely, if $f : \partial X \to \partial Y$ is a Moebius homeomorphism between boundaries of
CAT(-1) spaces $X, Y$ (i.e. a homeomorphism which preserves cross-ratios), then the question is whether $f$ extends to an isometry
$F : X \to Y$. It is a classical result that this holds when $X = Y = \mathbb{H}^n$, the real hyperbolic space, a fact which is often
used in rigidity theorems for hyperbolic manifolds, for example in the Mostow Rigidity theorem \cite{mostow-ihes}. More generally, Bourdon \cite{bourdon2}
showed that if $X$ is a rank one symmetric space of noncompact type with the metric normalized such that the maximum of the sectional curvatures
equals $-1$, and $Y$ is any CAT(-1) space, then any Moebius embedding $f : \partial X \to \partial Y$ extends to an isometric embedding
$F : X \to Y$. For general CAT(-1) spaces $X, Y$, the problem remains open.
We should remark that one of the main motivations for
studying this problem is its relation to the {\it marked length spectrum rigidity} conjecture of Burns and Katok, which asks whether
two closed negatively curved manifolds $X, Y$ with the same marked length spectrum are necessarily isometric. Otal \cite{otal2} and independently Croke
\cite{croke} proved that marked length spectrum rigidity holds in two dimensions. It is well known that in fact
$X, Y$ have the same marked length spectrum if and only if there is an equivariant Moebius map between the boundaries of the universal covers
$f : \partial \tilde{X} \to \partial \tilde{Y}$, so a positive answer to the problem of extending Moebius maps to isometries would
also give a solution to the marked length spectrum rigidity problem (see \cite{otal1}). Equality of marked length spectra is also known to be
equivalent to existence of a homeomorphism between the unit tangent bundles $\phi : T^1 X \to T^1 Y$
conjugating the geodesic flows of $X, Y$ (\cite{hamenstadt1}). Proofs of these equivalences may be found in \cite{biswas3}, section 5.
We remark that in related work, Beyrer, Fioravanti and Incerti-Medici have constructed a cross-ratio on the Roller boundary of any CAT(0) cube complex,
shown that any cross-ratio preserving bijection between geodesically complete cube complexes admits a unique extension to an
isomorphism of cube complexes, and proved a version of marked length spectrum rigidity for group actions on CAT(0) cube complexes
\cite{beyrerandall}.
In \cite{biswas3}, it was shown that a Moebius homeomorphism between the boundaries of proper, geodesically complete CAT(-1) spaces
extends to a $(1, \log 2)$-quasi-isometry between the spaces. For $X, Y$ complete, simply connected manifolds of pinched negative curvature
$-b^2 \leq K \leq -1$, this result was refined in \cite{biswas5} to show that the extension may be taken in this case to be a $(1, (1 - 1/b)\log 2)$-quasi-isometry.
In fact the quasi-isometric extension of \cite{biswas3} and \cite{biswas5} was shown to be given by a certain natural extension of Moebius maps called the
{\it circumcenter extension}, which is natural with respect to composition with isometries.
In \cite{biswas6}, it was shown that if $f : \partial X \to \partial Y$ and $g : \partial Y \to \partial X$ are mutually inverse
Moebius homeomorphisms between boundaries of complete, simply connected manifolds $X, Y$ of pinched negative curvature
$-b^2 \leq K \leq -1$, then the circumcenter extensions $F : X \to Y$ and $G : Y \to X$ of $f, g$ are $\sqrt{b}$-bi-Lipschitz homeomorphisms which are inverses
of each other.
In the present article we consider compactly supported deformations of the metric on a complete, simply connected manifold
$(X, g_0)$ of pinched negative
curvature $-b^2 \leq K_{g_0} \leq -1$, i.e. we consider metrics $g_1$ on $X$ such that $g_1 = g_0$ outside a compact set in $X$, and such that
the sectional curvature of $g_1$ is bounded above by $-1$. The identity map $id : (X, g_0) \to (X, g_1)$ is clearly bi-Lipschitz, hence it induces a homeomorphism between boundaries which we denote by $\hat{id}_{g_0, g_1} : \partial_{g_0} X \to \partial_{g_1} X$, and the problem in this context becomes the following: if $\hat{id}_{g_0, g_1}$
is Moebius, then does it extend to an isometry $F : (X, g_0) \to (X, g_1)$? Partial results for this problem were obtained in \cite{biswas4}, where local and
infinitesimal versions of the problem were considered, namely metrics $g_1$ such that the $C^2$ norm $||g_0 - g_1||_{C^2}$ is small, and one-parameter
families of metrics $(g_t)_{0 \leq t \leq 1}$, and in both cases it was shown that if the boundary maps are Moebius then they extend to isometries.
Our main theorem below gives a complete solution to this problem:
\begin{theorem} \label{mainthm} Let $(X, g_0)$ be a complete, simply connected manifold of pinched negative curvature $-b^2 \leq K_{g_0} \leq -1$.
Let $g_1$ be a metric on $X$ such that $g_1 = g_0$ outside a compact set in $X$, and such that the sectional curvature of $g_1$ satisfies $K_{g_1} \leq -1$.
Let $\hat{id}_{g_0, g_1} : \partial_{g_0} X \to \partial_{g_1} X$ denote the homeomorphism between boundaries induced by the identity map
$id : (X, g_0) \to (X, g_1)$. Suppose $\hat{id}_{g_0, g_1}$ is Moebius. Then the circumcenter extension of $\hat{id}_{g_0, g_1}$ is an
isometry $F : (X, g_0) \to (X, g_1)$.
\end{theorem}
The key to the proof of the above theorem is a further study of properties of the circumcenter extension. In section 2 we briefly recall some facts about
Moebius maps, geodesic conjugacies and circumcenter extensions. In section 3 we prove the results about the circumcenter extension which are used in the
proof of the main theorem, while section 4 is devoted to the proof of the main theorem.
\section{Preliminaries}
We give below a brief outline of the background on Moebius maps which we will need; for details and proofs of the
assertions below, the reader is referred to \cite{biswas3}, \cite{biswas5},
\cite{biswas6}.
Let $(Z, \rho_0)$ be a compact metric space of diameter one. The cross-ratio with respect to a metric $\rho$ on $Z$ is the function on quadruples of distinct points in $Z$
defined by
$$
[\xi, \xi', \eta, \eta']_{\rho} := \frac{\rho(\xi, \eta) \rho(\xi', \eta')}{\rho(\xi, \eta') \rho(\xi', \eta)} \ , \xi, \xi', \eta, \eta' \in Z
$$
Two metrics $\rho_1, \rho_2$ on $Z$ are said to be Moebius equivalent if their cross-ratios are equal, $[.,.,.,.]_{\rho_1} = [.,.,.,.]_{\rho_2}$.
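For concreteness, the following numerical sketch (an illustration only, not part of the argument) checks cross-ratio invariance in the simplest example, assuming the Poincar\'e disk model of $\mathbb{H}^2$, where the visual metric based at the origin is the chordal metric $|\xi - \eta|/2$ on the unit circle: boundary maps induced by isometries of the disk preserve the cross-ratio.

```python
import cmath

def cross_ratio(xi, xip, eta, etap):
    # Cross-ratio [xi, xi', eta, eta'] with respect to the chordal metric
    # rho(z, w) = |z - w|/2 on the unit circle (the factors 1/2 cancel).
    return (abs(xi - eta) * abs(xip - etap)) / (abs(xi - etap) * abs(xip - eta))

def disk_automorphism(a, z):
    # Isometry of the Poincare disk sending a to 0; it extends to a
    # homeomorphism of the boundary circle.
    return (z - a) / (1 - a.conjugate() * z)

# Four distinct boundary points and an arbitrary point a of the open disk.
pts = [cmath.exp(1j * t) for t in (0.3, 1.1, 2.5, 4.0)]
a = 0.4 + 0.2j

before = cross_ratio(*pts)
after = cross_ratio(*(disk_automorphism(a, z) for z in pts))
print(before, after)  # the two values agree
```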
A metric $\rho$ on $Z$ is said to be antipodal if it has diameter one and for all $\xi \in Z$ there exists $\eta \in Z$ such that $\rho(\xi, \eta) = 1$.
Assume that the metric $\rho_0$ is antipodal. We then define $\mathcal{M}(Z, \rho_0)$ to be the set of all antipodal metrics $\rho$ on $Z$ which
are Moebius equivalent to $\rho_0$. Then for any two metrics $\rho_1, \rho_2 \in \mathcal{M}(Z, \rho_0)$, there is a positive continuous function on $Z$
called the derivative of the metric $\rho_2$ with respect to the metric $\rho_1$, denoted by $\frac{d\rho_2}{d\rho_1}$, such that
$$
\rho_2(\xi, \eta)^2 = \frac{d\rho_2}{d\rho_1}(\xi) \frac{d\rho_2}{d\rho_1}(\eta) \rho_1(\xi, \eta)^2
$$
for all $\xi, \eta \in Z$. If $\xi$ is not an isolated point of $Z$, then
$$
\frac{d\rho_2}{d\rho_1}(\xi) = \lim_{\eta \to \xi} \frac{\rho_2(\xi, \eta)}{\rho_1(\xi, \eta)}
$$
Moreover
$$
\left(\max_{\xi \in Z} \frac{d\rho_2}{d\rho_1}(\xi)\right) \left(\min_{\xi \in Z} \frac{d\rho_2}{d\rho_1}(\xi)\right) = 1
$$
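As an illustration of the derivative and of the identity above, consider (a sketch under the assumption of the Poincar\'e disk model of $\mathbb{H}^2$) the visual metrics $\rho_O, \rho_x$ based at the origin and at a point $x$: the derivative $e^{B(O,x,\xi)}$ has the classical Poisson-kernel closed form used below, its maximum and minimum multiply to $1$, and the logarithm of its maximum recovers the hyperbolic distance $d(O,x)$.

```python
import cmath, math

def deriv(x, xi):
    # d(rho_x)/d(rho_O)(xi) = e^{B(O, x, xi)} in the Poincare disk model,
    # where the Busemann cocycle has the Poisson-kernel closed form
    # e^{B(O, x, xi)} = (1 - |x|^2) / |x - xi|^2.
    return (1 - abs(x) ** 2) / abs(x - xi) ** 2

x = 0.3 + 0.4j  # |x| = 1/2
vals = [deriv(x, cmath.exp(2j * math.pi * k / 4000)) for k in range(4000)]
mx, mn = max(vals), min(vals)
print(mx * mn)                               # approximately 1
print(math.log(mx), 2 * math.atanh(abs(x)))  # both approximately log 3 = d(O, x)
```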
This allows us to define a metric on the set $\mathcal{M}(Z, \rho_0)$ by
$$
d_{\mathcal{M}}(\rho_1, \rho_2) := \max_{\xi \in Z} \log \frac{d\rho_2}{d\rho_1}(\xi)
$$
The metric space $(\mathcal{M}(Z, \rho_0), d_{\mathcal{M}})$ is proper and complete. The following lemma follows from the proof of Lemma 2.6 of \cite{biswas3};
we include a proof for convenience:
\begin{lemma} \label{maxminantipodal} For $\rho_1, \rho_2 \in \mathcal{M}(Z, \rho_0)$, let $\xi, \eta \in Z$ be points where $\frac{d\rho_2}{d\rho_1}$ attains its
maximum and minimum values respectively. If $\xi' \in Z$ is such that $\rho_1(\xi, \xi') = 1$, then $\frac{d\rho_2}{d\rho_1}$ attains its minimum at $\xi'$, and
$\rho_2(\xi, \xi') = 1$. If $\eta' \in Z$ is such that $\rho_2(\eta, \eta') = 1$, then $\frac{d\rho_2}{d\rho_1}$ attains its maximum at $\eta'$, and
$\rho_1(\eta, \eta') = 1$.
\end{lemma}
\noindent{\bf Proof:} Let $\lambda, \mu > 0$ be the maximum and minimum values of $\frac{d\rho_2}{d\rho_1}$ respectively; then we know that $\lambda \cdot \mu = 1$.
For $\xi' \in Z$ such that $\rho_1(\xi, \xi') = 1$, we have
$$
1 \geq \rho_2(\xi, \xi')^2 = \frac{d\rho_2}{d\rho_1}(\xi) \frac{d\rho_2}{d\rho_1}(\xi') \rho_1(\xi, \xi')^2 \geq \lambda \cdot \mu \cdot 1 = 1
$$
so equality holds in the inequalities above, hence $\frac{d\rho_2}{d\rho_1}(\xi') = \mu$ and $\rho_2(\xi, \xi') = 1$.
For $\eta' \in Z$ such that $\rho_2(\eta, \eta') = 1$, we have
$$
1 = \rho_2(\eta, \eta')^2 = \frac{d\rho_2}{d\rho_1}(\eta) \frac{d\rho_2}{d\rho_1}(\eta') \rho_1(\eta, \eta')^2 \leq \mu \cdot \lambda \cdot 1 = 1
$$
so equality holds in the inequalities above, hence $\frac{d\rho_2}{d\rho_1}(\eta') = \lambda$ and $\rho_1(\eta, \eta') = 1$. $\diamond$
Let $f : (Z_1, \rho_1) \to (Z_2, \rho_2)$ be a homeomorphism between metric spaces. We say $f$ is Moebius if $f$ preserves cross-ratios
with respect to the metrics $\rho_1$ and $\rho_2$, i.e. $[f(\xi), f(\xi'), f(\eta), f(\eta')]_{\rho_2} = [\xi, \xi' ,\eta, \eta']_{\rho_1}$ for all
quadruples of distinct points $\xi,\xi', \eta, \eta'$ in $Z_1$. Then the metrics $\rho_1$ and
$f^* \rho_2$ (the pull-back of $\rho_2$ by $f$) are Moebius equivalent, and we define the derivative of the Moebius map $f$ with respect to
the metrics $\rho_1, \rho_2$ to be the function $\frac{df^* \rho_2}{d\rho_1}$.
Let $X$ be a proper, geodesically complete CAT(-1) space (this means that every finite geodesic segment in $X$ can be extended to a
bi-infinite geodesic), with boundary at infinity $\partial X$. The Busemann function of $X$ is the function $B : X \times X \times \partial X \to \mathbb{R}$
defined by
$$
B(x,y,\xi) := \lim_{z \to \xi} (d(x, z) - d(y,z)) \ , \ x,y \in X, \xi \in \partial X
$$
Note that $|B(x,y,\xi)| \leq d(x, y)$ for all $x,y \in X, \xi \in \partial X$. For $x \in X$ and $\xi, \eta \in \partial X$, we denote by $[x, \xi) \subset X$ the unique
geodesic ray joining $x$ to $\xi$, and we denote by $(\xi, \eta) \subset X$ the unique bi-infinite geodesic joining $\xi$ and $\eta$.
For every $x \in X$, there is a metric $\rho_x$ on $\partial X$ called the visual metric on $\partial X$ based at $x$, defined by $\rho_x(\xi, \eta) = e^{-(\xi|\eta)_x}$,
where $(\xi|\eta)_x$ is the Gromov inner product between $\xi,\eta \in \partial X$ with respect to the basepoint $x \in X$, defined by
$$
(\xi|\eta)_x := \lim_{y \to \xi, z \to \eta} \frac{1}{2} (d(x,y)+d(x,z)-d(y,z))
$$
The metric space $(\partial X, \rho_x)$ is compact of diameter one, and the metric $\rho_x$ is antipodal. We have $\rho_x(\xi, \eta) = 1$ if and only if
the point $x$ lies on the bi-infinite geodesic $(\xi, \eta)$.
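In the Poincar\'e disk model of $\mathbb{H}^2$ these quantities can be computed explicitly. The following sketch (an illustration, with the Gromov product approximated by points tending radially to the boundary) recovers the standard fact that the visual metric based at the origin is half the chordal distance, and checks antipodality at the origin.

```python
import cmath, math

def dist(z, w):
    # Hyperbolic distance in the Poincare disk model.
    return math.acosh(1 + 2 * abs(z - w) ** 2 /
                      ((1 - abs(z) ** 2) * (1 - abs(w) ** 2)))

xi, eta = cmath.exp(0.4j), cmath.exp(2.1j)  # two boundary points
t = 1 - 1e-6  # approximate xi, eta by the interior points t*xi, t*eta
gp = 0.5 * (dist(0, t * xi) + dist(0, t * eta) - dist(t * xi, t * eta))
rho = math.exp(-gp)
print(rho, abs(xi - eta) / 2)  # rho_O is the chordal metric divided by 2
# rho_O(xi, -xi) = 1: the origin lies on the geodesic (xi, -xi)
print(math.exp(-0.5 * (2 * dist(0, t * xi) - dist(t * xi, -t * xi))))
```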
Moreover, any two visual metrics $\rho_x, \rho_y$ on
$\partial X$ are Moebius equivalent, so there is a canonical cross-ratio function on quadruples of distinct points in $\partial X$, which we will denote by simply $[.,.,.,.]$.
The derivative $\frac{d\rho_y}{d\rho_x}$ is given by
$$
\frac{d\rho_y}{d\rho_x}(\xi) = e^{B(x,y,\xi)}
$$
The space $\mathcal{M}(\partial X, \rho_x)$ is independent of the choice of $x \in X$, and we will denote it by $\mathcal{M}(\partial X)$. The map
$i_X : X \to \mathcal{M}(\partial X), x \mapsto \rho_x$, is an isometric embedding, and the image is $\frac{1}{2}\log 2$-dense in $\mathcal{M}(\partial X)$.
For $x \in X$ and a subset $B \subset X$, we define the shadow of the set $B$ as seen from $x$ to be the subset of $\partial X$ defined by
$$
\mathcal{O}(x, B) := \{ \xi \in \partial X | \ [x, \xi) \cap B \neq \emptyset \}
$$
The following lemma will be useful:
\begin{lemma} \label{smallshadow} Let $x_0 \in X$ and $R > 0$. For $x \in X$, the diameter of the shadow $\mathcal{O}(x, B(x_0, R))$ with respect to
the visual metric $\rho_x$ tends to $0$ as $x \to \infty$. More precisely, for all $\xi, \eta \in \mathcal{O}(x, B(x_0, R))$,
$$
\rho_x(\xi, \eta) \leq e^{2R -d(x,x_0)}
$$
\end{lemma}
\noindent{\bf Proof:} Given $x \in X$ and $\xi \in \mathcal{O}(x, B(x_0, R))$, by definition there exists
$z \in [x, \xi) \cap B(x_0, R)$. Then we have
\begin{align*}
B(x, x_0, \xi) & = B(x, z, \xi) + B(z, x_0, \xi) \\
& \geq d(x, z) - d(z, x_0) \\
& \geq d(x, x_0) - 2R \\
\end{align*}
Thus for $\xi, \eta \in \mathcal{O}(x, B(x_0, R))$ we have
\begin{align*}
\rho_x(\xi, \eta)^2 & = \frac{d\rho_x}{d\rho_{x_0}}(\xi)\frac{d\rho_x}{d\rho_{x_0}}(\eta) \rho_{x_0}(\xi, \eta)^2 \\
& = e^{B(x_0, x, \xi)} e^{B(x_0, x, \eta)} \rho_{x_0}(\xi, \eta)^2 \\
& \leq e^{2R - d(x, x_0)} e^{2R - d(x,x_0)} \\
\end{align*}
and so
$$
\rho_x(\xi, \eta) \leq e^{2R - d(x,x_0)}
$$
$\diamond$
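To make the estimate concrete, here is a numerical sketch (an illustration only) in the Poincar\'e disk model of $\mathbb{H}^2$: it samples boundary points, tests membership in the shadow $\mathcal{O}(x, B(x_0, R))$ by sampling along geodesic rays, and checks the bound $\rho_x(\xi, \eta) \leq e^{2R - d(x, x_0)}$ with $x_0$ the origin. The closed forms for the hyperbolic distance and the visual metric $\rho_x$ are the standard ones for the disk model.

```python
import cmath, math

def dist(z, w):
    # Hyperbolic distance in the Poincare disk model.
    return math.acosh(1 + 2 * abs(z - w) ** 2 /
                      ((1 - abs(z) ** 2) * (1 - abs(w) ** 2)))

def rho(x, xi, eta):
    # Visual metric based at x:
    # rho_x(xi, eta) = (1 - |x|^2)|xi - eta| / (2|x - xi||x - eta|).
    return (1 - abs(x) ** 2) * abs(xi - eta) / (2 * abs(x - xi) * abs(x - eta))

R, x = 0.5, 0.99  # ball B(0, R) around the origin, observer x far away

def in_shadow(xi):
    # The ray [x, xi) is the image of a Euclidean radius of the disk under
    # the isometry w -> (w + x)/(1 + x w), which sends 0 to x.
    u = (xi - x) / (1 - x * xi)  # unit vector: preimage of xi
    return any(dist(0, (r * u + x) / (1 + x * r * u)) <= R
               for r in (k / 600 for k in range(600)))

circle = [cmath.exp(2j * math.pi * k / 400) for k in range(400)]
shadow = [xi for xi in circle if in_shadow(xi)]
diam = max(rho(x, p, q) for p in shadow for q in shadow)
print(diam, math.exp(2 * R - dist(0, x)))  # the diameter is below the bound
```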
The space of geodesics $\mathcal{G}X$ of $X$ is defined to be the space $\mathcal{G}X := \{ \gamma : \mathbb{R} \to X | \ \gamma \hbox{ is an isometric embedding } \}$
equipped with the topology of uniform convergence on compacts. We define continuous maps $\pi : \mathcal{G}X \to X$ and $p : \mathcal{G}X \to \partial X$ by
$\pi(\gamma) = \gamma(0) \in X$ and $p(\gamma) = \gamma(+\infty) \in \partial X$, and
for $x \in X$, we define $T^1_x X := \pi^{-1}(x) \subset \mathcal{G}X$.
The geodesic flow of the CAT(-1) space $X$ is the one-parameter group of homeomorphisms
$(\phi_t : \mathcal{G}X \to \mathcal{G}X)_{t \in \mathbb{R}}$ defined by $(\phi_t \gamma)(s) := \gamma(s+t)$. When $X$ is a simply connected, complete
Riemannian manifold of negative sectional curvature $K \leq -1$, then the map $\mathcal{G}X \to T^1 X, \gamma \mapsto \gamma'(0)$ is a
homeomorphism conjugating the geodesic flow on $\mathcal{G}X$ to the usual geodesic flow on $T^1 X$.
Let $Y$ be another proper, geodesically complete CAT(-1) space, and suppose there is a Moebius homeomorphism $f : \partial X \to \partial Y$. The Moebius map
$f$ induces a homeomorphism $\phi : \mathcal{G}X \to \mathcal{G}Y$ conjugating the geodesic flows, which is defined as follows: given $\gamma \in \mathcal{G}X$,
let $x = \gamma(0), \xi = \gamma(+\infty), \eta = \gamma(-\infty)$, then $\phi(\gamma)$ is defined to be the unique $\tilde{\gamma} \in \mathcal{G}Y$ such that
$\tilde{\gamma}(+\infty) = f(\xi), \tilde{\gamma}(-\infty) = f(\eta)$, and $\tilde{\gamma}(0) = y$, where $y$ is the unique point in the bi-infinite geodesic
$(f(\eta), f(\xi)) \subset Y$ such that $\frac{df^*\rho_y}{d\rho_x}(\xi) = 1$.
In a CAT(-1) space, any bounded set $B \subset X$ has a unique circumcenter $c(B) \in X$,
i.e. the unique point minimizing the function $x \in X \mapsto \sup_{y \in B} d(x,y)$.
For a compact set $K \subset \mathcal{G}X$ such that $p(K) \subset \partial X$ has at least two points, the limit of the circumcenters
$c(\pi(\phi_t(K)))$ exists as $t \to +\infty$; we call this limit the asymptotic circumcenter of the set $K$ and denote it by $c_{\infty}(K) \in X$.
The geodesic conjugacy $\phi : \mathcal{G}X \to \mathcal{G}Y$ induced by a Moebius map $f : \partial X \to \partial Y$ then allows us to define an extension
$F : X \to Y$ of $f$, called the circumcenter extension of $f$, by
$$
F(x) := c_{\infty}(\phi(T^1_x X)) \in Y
$$
The circumcenter extension is a $(1, \log 2)$-quasi-isometry and is
locally $1/2$-H\"older continuous. For $x \in X$, the point $F(x) \in Y$ can be characterized as the unique point in $Y$ minimizing the
function $y \in Y \mapsto d_{\mathcal{M}}(f_* \rho_x, \rho_y)$ (where $f_* \rho_x \in \mathcal{M}(\partial Y)$ is the push-forward of $\rho_x \in \mathcal{M}(\partial X)$
by the Moebius map $f$).
\section{Some properties of the circumcenter extension}
Throughout this section, $X, Y$ will denote two complete, simply connected manifolds with pinched negative curvature $-b^2 \leq K \leq -1$.
Suppose there is a Moebius homeomorphism $f : \partial X \to \partial Y$ with inverse $g : \partial Y \to \partial X$,
and let $F : X \to Y$ and $G : Y \to X$ be the circumcenter extensions of $f$ and $g$ respectively. Then from \cite{biswas6}, we have that
$F$ and $G$ are $\sqrt{b}$-bi-Lipschitz homeomorphisms which are inverses of each other. Define a function $r : X \to \mathbb{R}$ by
$$
r(x) := d_{\mathcal{M}}( f_* \rho_x, \rho_{F(x)}) = \sup_{\xi \in \partial X} \log \frac{df_* \rho_x}{d\rho_{F(x)}}(f(\xi))
$$
In the following, we identify $\mathcal{G}X, \mathcal{G}Y$ with $T^1 X, T^1 Y$ respectively, and we identify the geodesic
conjugacy $\phi : \mathcal{G}X \to \mathcal{G}Y$ with a geodesic conjugacy $\phi : T^1 X \to T^1 Y$. We also identify the maps
$\pi : \mathcal{G}X \to X, p : \mathcal{G}X \to \partial X$ with maps $\pi : T^1 X \to X, p : T^1 X \to \partial X$ respectively
(and similarly for the corresponding maps for $Y$). For $x \in X, \xi \in \partial X$ we denote by
$\overrightarrow{x\xi} \in T^1_x X$ the unit tangent vector at $x$ given by $\gamma'(0)$, where $\gamma$ is the unique geodesic satisfying $\gamma(0) = x,
\gamma(+\infty) = \xi$. The flip $T^1_x X \to T^1_x X, v \mapsto -v$, induces a continuous involution $i_x : \partial X \to \partial X$, defined by
requiring that $\overrightarrow{x\,i_x(\xi)} = - \overrightarrow{x\xi}$ for all $\xi \in \partial X$. Similarly for $y \in Y$ we have an involution
$i_y : \partial Y \to \partial Y$. The following lemma follows from Lemma 4.13 of \cite{biswas6}:
\begin{lemma} \label{derivformula} For $x \in X, y \in Y, \xi \in \partial X$, we have
$$
\log \frac{df_* \rho_x}{d\rho_y}(f(\xi)) = B(y, \pi(\phi(\overrightarrow{x\xi})), f(\xi))
$$
In particular,
$$
r(x) = \sup_{\xi \in \partial X} B(F(x), \pi(\phi(\overrightarrow{x\xi})), f(\xi))
$$
\end{lemma}
\begin{lemma} \label{rlipschitz} The function $r : X \to \mathbb{R}$ is $1$-Lipschitz.
\end{lemma}
\noindent{\bf Proof:} Let $x, y \in X$. Since $\phi : T^1 X \to T^1 Y$ conjugates the geodesic flows, we have, for any $\xi \in \partial X$,
$$
B(\pi(\phi(\overrightarrow{x\xi})), \pi(\phi(\overrightarrow{y\xi})), f(\xi)) = B(x, y, \xi)
$$
We then have, using Lemma \ref{derivformula} above,
\begin{align*}
r(x) = d_{\mathcal{M}}(f_* \rho_x, \rho_{F(x)}) & \leq d_{\mathcal{M}}( f_* \rho_x, \rho_{F(y)}) \\
& = \sup_{\xi \in \partial X} \log \frac{df_* \rho_x}{d\rho_{F(y)}}(f(\xi)) \\
& = \sup_{\xi \in \partial X} B(F(y), \pi(\phi(\overrightarrow{x\xi})), f(\xi)) \\
& = \sup_{\xi \in \partial X} B(F(y), \pi(\phi(\overrightarrow{y\xi})), f(\xi))
+ B(\pi(\phi(\overrightarrow{y\xi})), \pi(\phi(\overrightarrow{x\xi})), f(\xi)) \\
& = \sup_{\xi \in \partial X} B(F(y), \pi(\phi(\overrightarrow{y\xi})), f(\xi)) + B(y, x, \xi) \\
& \leq \sup_{\xi \in \partial X} B(F(y), \pi(\phi(\overrightarrow{y\xi})), f(\xi)) + d(x,y) \\
& = r(y) + d(x, y) \\
\end{align*}
Thus $r(x) - r(y) \leq d(x,y)$. Interchanging $x$ and $y$, the same argument gives $r(y) - r(x) \leq d(x,y)$, hence $|r(x) - r(y)| \leq d(x,y)$. $\diamond$
We say that a probability measure $\mu$ on $\partial X$ is {\it balanced} at a point $x \in X$ if the vector-valued integral
$\int_{\partial X} \overrightarrow{x\xi} \, d\mu(\xi) \in T_x X$ equals $0 \in T_x X$, or equivalently if $\int_{\partial X} \langle v, \overrightarrow{x\xi} \rangle \, d\mu(\xi) = 0$
for all $v \in T_x X$. If the compact $K \subset \partial X$ denotes the support of $\mu$, then it is shown in \cite{biswas6} that $\mu$ is balanced at $x$ if and only if
the convex hull in $T_x X$ of the compact set $\{ \overrightarrow{x\xi} : \xi \in K \}$ contains the origin of $T_x X$.
For $x \in X$, let $K_x \subset \partial X$ denote the set on which the function $\xi \in \partial X \mapsto \frac{df_* \rho_x}{d\rho_{F(x)}}(f(\xi))$
attains its maximum value. In \cite{biswas6}, it is shown that for any $x \in X$, there exists a probability measure $\mu_x$ on $\partial X$ with support
contained in $K_x$ such that the measure $\mu_x$ is balanced at $x$, and such that the measure $f_* \mu_x$ on $\partial Y$ is balanced at $F(x) \in Y$
(with a similar definition of balanced measures for measures on $\partial Y$ and points of $Y$).
The main result of this section is the following:
\begin{theorem} \label{rconstant} The function $r$ is constant.
\end{theorem}
\noindent{\bf Proof:} Since the function $r$ and the circumcenter map $F$ are both Lipschitz, they are differentiable almost everywhere, so the set of points $D \subset X$
at which both $r$ and $F$ are differentiable has full measure. Let $x \in D$ and let $\xi \in K_x$. Then for any $y \in X$,
\begin{align*}
r(y) & \geq B(F(y), \pi(\phi(\overrightarrow{y\xi})), f(\xi)) \\
& = B(F(y), F(x), f(\xi)) + B(F(x), \pi(\phi(\overrightarrow{x\xi})), f(\xi)) + B(\pi(\phi(\overrightarrow{x\xi})), \pi(\phi(\overrightarrow{y\xi})), f(\xi)) \\
& = B(F(y), F(x), f(\xi)) + r(x) + B(x, y, \xi) \\
\end{align*}
so
\begin{equation} \label{randb}
r(y) - r(x) \geq B(F(y), F(x), f(\xi)) + B(x, y, \xi)
\end{equation}
for all $y \in X, \xi \in K_x$. It is well-known that the gradient at $x$ of the function $y \in X \mapsto B(x, y, \xi)$ is given by the vector $\overrightarrow{x\xi}$,
while the gradient at $F(x)$ of the function $z \in Y \mapsto B(z, F(x), f(\xi))$ is given by the vector $-\overrightarrow{F(x)f(\xi)}$.
Let $v \in T_x X$ and $t > 0$, and let $y = \exp_x tv \in X$. Then as $t \to 0$, using the fact that $r$ and $F$ are differentiable at $x$, equation (\ref{randb}) above
gives
$$
dr_x(tv) + o(t) \geq -\langle dF_x(tv), \overrightarrow{F(x)f(\xi)} \rangle + \langle tv, \overrightarrow{x\xi} \rangle + o(t)
$$
so dividing by $t$ above and letting $t$ tend to $0$ gives
\begin{equation} \label{drdF}
dr_x(v) \geq \langle v, \overrightarrow{x\xi} \rangle - \langle dF_x(v), \overrightarrow{F(x)f(\xi)} \rangle
\end{equation}
for all $v \in T_x X$, $\xi \in K_x$. Integrating both sides of inequality (\ref{drdF}) above over the set $K_x$ with respect to the probability measure $\mu_x$,
and using the facts that the support of $\mu_x$ is contained in $K_x$, the measure $\mu_x$ is balanced at $x$ and the measure $f_* \mu_x$ is
balanced at $F(x)$, we obtain
\begin{align*}
dr_x(v) = \int_{\partial X} dr_x(v) d\mu_x(\xi) & = \int_{K_x} dr_x(v) d\mu_x(\xi) \\
& \geq \int_{K_x} \langle v, \overrightarrow{x\xi} \rangle \, d\mu_x(\xi) - \int_{K_x} \langle dF_x(v), \overrightarrow{F(x)f(\xi)} \rangle \, d\mu_x(\xi) \\
& = \int_{\partial X} \langle v, \overrightarrow{x\xi} \rangle \, d\mu_x(\xi) - \int_{\partial X} \langle dF_x(v), \overrightarrow{F(x)f(\xi)} \rangle \, d\mu_x(\xi) \\
& = \int_{\partial X} \langle v, \overrightarrow{x\xi} \rangle \, d\mu_x(\xi) - \int_{\partial Y} \langle dF_x(v), \overrightarrow{F(x)\eta} \rangle \, df_* \mu_x(\eta) \\
& = 0 \\
\end{align*}
Thus $dr_x(v) \geq 0$ for all $v \in T_x X$; replacing $v$ by $-v$ gives $dr_x(-v) \geq 0$, so $dr_x(v) \leq 0$ for all $v \in T_x X$, and hence $dr_x(v) = 0$ for all $v \in T_x X$.
Since $r$ is Lipschitz and $dr_x = 0$ for $x$ in the full measure set $D$, it follows that $r$ is constant. $\diamond$
A corollary of the proof of the above theorem is the following:
\begin{prop} \label{dFKx} Let $x \in X$ be a point of differentiability of $F$. Then for all $\xi \in K_x, v \in T_x X$ we have
$$
\langle dF_x(v), \overrightarrow{F(x)f(\xi)} \rangle = \langle v, \overrightarrow{x\xi} \rangle
$$
Equivalently,
$$
dF^*_x(\overrightarrow{F(x)f(\xi)}) = \overrightarrow{x\xi}
$$
for all $\xi \in K_x$.
\end{prop}
\noindent{\bf Proof:} By the previous theorem the function $r$ is constant, so the set $D$ in the proof of the previous theorem is just the
set of points of differentiability of $F$. Let $x \in D$, and $\xi \in K_x$. Since $r$ is constant, equation (\ref{drdF}) above
gives
$$
0 \geq \langle v, \overrightarrow{x\xi} \rangle - \langle dF_x(v), \overrightarrow{F(x)f(\xi)} \rangle
$$
for all $v \in T_x X$. Replacing $v$ by $-v$ in the above inequality gives
$$
0 \leq \langle v, \overrightarrow{x\xi} \rangle - \langle dF_x(v), \overrightarrow{F(x)f(\xi)} \rangle
$$
for all $v \in T_x X$. Combining the two gives $\langle dF_x(v), \overrightarrow{F(x)f(\xi)} \rangle = \langle v, \overrightarrow{x\xi} \rangle$ for all $v \in T_x X$. $\diamond$
\begin{lemma} \label{qisom} Let $M \geq 0$ denote the constant value of the function $r$. Then the circumcenter map $F : X \to Y$ is a $(1, 2M)$-quasi-isometry, i.e.
$$
d(x,y) - 2M \leq d(F(x), F(y)) \leq d(x,y) + 2M
$$
for all $x,y \in X$.
\end{lemma}
\noindent{\bf Proof:} Note that push-forward of metrics by $f$ gives an isometry $f_* : \mathcal{M}(\partial X) \to \mathcal{M}(\partial Y)$. So for $x,y \in X$,
we have
\begin{align*}
d(x,y) & = d_{\mathcal{M}}( \rho_x, \rho_y) \\
& = d_{\mathcal{M}}( f_* \rho_x, f_* \rho_y) \\
& \leq d_{\mathcal{M}}( f_* \rho_x, \rho_{F(x)}) + d_{\mathcal{M}}( \rho_{F(x)}, \rho_{F(y)}) + d_{\mathcal{M}}( \rho_{F(y)}, f_* \rho_y) \\
& = M + d(F(x), F(y)) + M \\
\end{align*}
Similarly,
\begin{align*}
d(F(x), F(y)) & = d_{\mathcal{M}}( \rho_{F(x)}, \rho_{F(y)}) \\
& \leq d_{\mathcal{M}}( \rho_{F(x)}, f_* \rho_x) + d_{\mathcal{M}}( f_* \rho_x, f_* \rho_y) + d_{\mathcal{M}}( f_* \rho_y, \rho_{F(y)}) \\
& = M + d(x, y) + M \\
\end{align*}
thus
$$
d(x,y) - 2M \leq d(F(x), F(y)) \leq d(x,y) + 2M
$$
$\diamond$
The following lemma is a straightforward consequence of Lemma \ref{maxminantipodal}:
\begin{lemma} \label{maxminflip} Let $x \in X$ and $y \in Y$. Then:
\noindent (i) The function $\frac{d\rho_x}{df^*\rho_y}$ attains its maximum at $\xi \in \partial X$ if and only if it attains its minimum at $i_x(\xi)$.
Moreover in this case $f(i_x(\xi)) = i_y(f(\xi))$, so $y$ lies on the bi-infinite geodesic $(f(\xi), f(i_x(\xi)))$.
\noindent (ii) If $\xi \in \partial X$ is a maximum of $\frac{d\rho_x}{df^*\rho_y}$ then the point $z = \pi(\phi(\overrightarrow{x\xi})) \in Y$ is the unique
point on the geodesic ray $[y, f(\xi)) \subset Y$ at a distance $d_{\mathcal{M}}(f_* \rho_x, \rho_y)$ from $y$.
\end{lemma}
\noindent{\bf Proof:} (i) We first note that since $X$ is a simply connected manifold of
negative curvature, for $\xi, \eta \in \partial X$ we have $\rho_x(\xi, \eta) = 1$ if and only if $\eta = i_x(\xi)$.
Let $\xi \in \partial X$ be a maximum of $\frac{d\rho_x}{df^*\rho_y}$. Let $\eta = f^{-1}(i_y(f(\xi))) \in \partial X$, then
$f^* \rho_y(\xi, \eta) = \rho_y(f(\xi), f(\eta)) = \rho_y(f(\xi), i_y(f(\xi))) = 1$, hence by Lemma \ref{maxminantipodal} we have that
$\eta$ is a minimum of $\frac{d\rho_x}{df^*\rho_y}$. Moreover, by Lemma \ref{maxminantipodal}, $\rho_x(\xi, \eta) = 1$, thus $\eta = i_x(\xi)$, so
$\frac{d\rho_x}{df^*\rho_y}$ attains its minimum at $i_x(\xi)$, and $f(i_x(\xi)) = i_y(f(\xi))$.
For the converse, suppose that $i_x(\xi) \in \partial X$ is a minimum of $\frac{d\rho_x}{df^*\rho_y}$. Then $\rho_x(\xi, i_x(\xi)) = 1$ implies by Lemma
\ref{maxminantipodal} that $\frac{d\rho_x}{df^*\rho_y}$ attains its maximum at $\xi$. Moreover, by Lemma \ref{maxminantipodal},
$f^* \rho_y(\xi, i_x(\xi)) = 1$, so $\rho_y(f(\xi), f(i_x(\xi))) = 1$, hence $f(i_x(\xi)) = i_y(f(\xi))$.
\noindent (ii) Let $\xi$ be a maximum of $\frac{d\rho_x}{df^*\rho_y}$. By definition of the geodesic conjugacy $\phi$, the point $z = \pi(\phi(\overrightarrow{x\xi})) \in Y$
lies on the bi-infinite geodesic $(f(\xi), f(i_x(\xi))) \subset Y$. By (i) above, the point $y$ also lies on the bi-infinite geodesic $(f(\xi), f(i_x(\xi)))$.
Since $\xi$ is a maximum of $\frac{d\rho_x}{df^*\rho_y}$ it follows that
$\log \frac{d\rho_x}{df^*\rho_y}(\xi) = d_{\mathcal{M}}( \rho_x, f^*\rho_y) = d_{\mathcal{M}}( f_* \rho_x, \rho_y)$ (note that push-forward of metrics by $f$ gives
an isometry $f_* : \mathcal{M}(\partial X) \to \mathcal{M}(\partial Y)$). Thus by Lemma \ref{derivformula} we have
\begin{align*}
B(y, z, f(\xi)) & = \log \frac{df_*\rho_x}{d\rho_y}(f(\xi)) \\
& = \log \frac{d\rho_x}{df^*\rho_y}(\xi) \\
& = d_{\mathcal{M}}( f_* \rho_x, \rho_y) \\
\end{align*}
Since $y,z$ both lie on the geodesic $(f(\xi), f(i_x(\xi)))$, it follows that $z$ is the unique point on the geodesic ray $[y, f(\xi))$ at a distance
$d_{\mathcal{M}}( f_* \rho_x, \rho_y)$ from $y$. $\diamond$
Finally, we need a lemma about Riemannian angles and comparison angles from \cite{biswas5}.
For $x \in X$ and $\xi, \eta \in \partial X$, let $\angle \xi x \eta \in [0, \pi]$
denote the Riemannian angle between the geodesic rays $[x, \xi)$ and $[x, \eta)$ at the point $x$. Then the following holds (this is Lemma 6.6
of \cite{biswas5}):
\begin{lemma} \label{anglecomp} For all $x \in X$ and $\xi, \eta \in \partial X$ we have
$$
\rho_x(\xi, \eta)^b \leq \sin\left(\frac{1}{2}\angle \xi x \eta \right) \leq \rho_x(\xi, \eta)
$$
\end{lemma}
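In constant curvature $-1$ (so $b = 1$) both inequalities become equalities. The following sketch (an illustration in the Poincar\'e disk model of $\mathbb{H}^2$, using the standard closed form of the visual metric) checks this numerically.

```python
import cmath, math

def angle_at(x, xi, eta):
    # Riemannian angle at x between the rays [x, xi) and [x, eta): move x to
    # the origin by the isometry w -> (w - x)/(1 - conj(x) w); rays from x
    # become Euclidean radii, so the angle is the angle between the images.
    u = (xi - x) / (1 - x.conjugate() * xi)
    v = (eta - x) / (1 - x.conjugate() * eta)
    return abs(cmath.phase(u / v))

def rho(x, xi, eta):
    # Visual metric based at x in the disk model.
    return (1 - abs(x) ** 2) * abs(xi - eta) / (2 * abs(x - xi) * abs(x - eta))

x = 0.5 + 0.2j
xi, eta = cmath.exp(0.7j), cmath.exp(2.9j)
lhs = math.sin(angle_at(x, xi, eta) / 2)
rhs = rho(x, xi, eta)
print(lhs, rhs)  # equal in curvature -1
```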
\section{Proof of main theorem}
Let $(X, g_0)$ be a complete, simply connected manifold of pinched negative curvature $-b^2 \leq K_{g_0} \leq -1$. Let $g_1$ be a metric on $X$
such that $g_1 = g_0$ outside a compact set in $X$, and suppose $g_1$ is negatively curved, $K_{g_1} \leq -1$. Then the metrics $g_0, g_1$ are bi-Lipschitz equivalent,
so the identity map $id : (X, g_0) \to (X, g_1)$ induces a homeomorphism between boundaries which we denote by
$f : \partial_{g_0} X \to \partial_{g_1} X$. Suppose the map $f$ is Moebius. Let $F : (X, g_0) \to (X, g_1)$ be the circumcenter extension of the Moebius map $f$.
Note that both metrics $g_0, g_1$ have pinched negative curvature (since $g_0$ does, and $g_1 = g_0$ outside a compact set), so the results of the previous
section apply to $F$. In particular, by Theorem \ref{rconstant}, the function $r(x) = d_{\mathcal{M}}( f_* \rho_x, \rho_{F(x)} )$ is constant, let
$M \geq 0$ denote its constant value. By Lemma \ref{qisom}, to show that the circumcenter map $F$ is an isometry, it suffices to show that $M = 0$.
Let $T^1 X_{g_0} \subset TX$ and $T^1 X_{g_1} \subset TX$ denote the unit tangent bundles with respect to the metrics $g_0, g_1$ respectively, and let
$\phi : T^1 X_{g_0} \to T^1 X_{g_1}$ denote the geodesic conjugacy induced by the Moebius map $f$. For $x \in X$, let $\rho^{g_0}_x$ and $\rho^{g_1}_x$
denote the visual metrics based at $x$ on the boundaries $\partial_{g_0} X$ and $\partial_{g_1} X$ of $(X, g_0)$ and $(X, g_1)$ respectively.
For $x \in X$ and $\xi, \eta \in \partial_{g_i} X$, let $(\xi, \eta)_i \subset X$ denote the bi-infinite $g_i$-geodesic with endpoints $\xi, \eta$, and
let $[x, \xi)_i \subset X$ denote the $g_i$-geodesic ray joining $x$ to $\xi$, and let $\overrightarrow{x\xi}^i \in T^1_x X_{g_i}$
denote the $g_i$-unit tangent vector to the $g_i$-geodesic ray $[x, \xi)_i$ at the point $x$, where $i = 0,1$.
For $x \in X$ and a compact $K \subset X$, let $\mathcal{O}_i(x, K) \subset \partial_{g_i} X$ denote the shadow of the set $K$ as seen from the point $x$
with respect to the metric $g_i$, where $i = 0,1$.
For $i = 0,1$ and $x \in X$, let $i^{g_i}_x : \partial_{g_i} X \to \partial_{g_i} X$ denote the involution of the boundary of $(X, g_i)$
as defined in the previous section.
\begin{lemma} \label{conjid} Let $K = \operatorname{supp}(g_1 - g_0)$ denote the support of the symmetric 2-tensor $g_1 - g_0$. Let $x \in X - K$.
If $\xi \in \partial_{g_0} X$ is such that $\xi, i^{g_0}_x(\xi) \in \partial_{g_0} X - \mathcal{O}_0(x, K)$,
then $\overrightarrow{x\xi}^0 = \overrightarrow{xf(\xi)}^1 \in T^1_x X_{g_0} \cap T^1_x X_{g_1}$ and
$\phi(\overrightarrow{x\xi}^0) = \overrightarrow{xf(\xi)}^1 = \overrightarrow{x\xi}^0$.
\end{lemma}
\noindent{\bf Proof:} The hypothesis on $\xi$ implies that the $g_0$-geodesic rays $[x, \xi)_0$ and $[x, i^{g_0}_x(\xi))_0$ are disjoint from $K$,
hence so is the bi-infinite $g_0$-geodesic $(\xi, i^{g_0}_x(\xi))_0$, thus it is also a $g_1$-geodesic, hence $(\xi, i^{g_0}_x(\xi))_0$ equals the
bi-infinite $g_1$-geodesic $(f(\xi), f(i^{g_0}_x(\xi)))_1$.
In particular $\overrightarrow{x\xi}^0 = \overrightarrow{xf(\xi)}^1 \in T^1_x X_{g_0} \cap T^1_x X_{g_1}$,
and $\phi(\overrightarrow{x\xi}^0)$ is tangent to $(\xi, i^{g_0}_x(\xi))_0$, so $\pi(\phi(\overrightarrow{x\xi}^0))$ lies on $(\xi, i^{g_0}_x(\xi))_0$.
Now we can choose a neighbourhood $U$ of $\xi$ in $\partial_{g_0} X$ which is disjoint from $\mathcal{O}_0(x, K)$, and such that for any $\eta \in U$, the $g_0$-geodesic
$(\xi, \eta)_0$ is disjoint from $K$ (by choosing $U$ small enough). Then for $\eta \in U$, the $g_0$-geodesics $[x, \xi)_0, [x, \eta)_0, (\xi, \eta)_0$ are
disjoint from $K$, hence they are $g_1$-geodesics as well, and it follows that $\rho^{g_0}_x(\xi, \eta) = \rho^{g_1}_x(f(\xi), f(\eta))$ for all $\eta \in U$.
Hence
$$
\frac{df^*\rho^{g_1}_x}{d\rho^{g_0}_x}(\xi) = \lim_{\eta \to \xi} \frac{f^*\rho^{g_1}_x(\xi, \eta)}{\rho^{g_0}_x(\xi, \eta)} = 1
$$
so it follows from the definition of $\phi$ that $\pi(\phi(\overrightarrow{x\xi}^0)) = x$, thus
$\phi(\overrightarrow{x\xi}^0) = \overrightarrow{xf(\xi)}^1 = \overrightarrow{x\xi}^0$. $\diamond$
For $i = 0,1$, let $d_{g_i}$ denote the distance function of $(X, g_i)$, and for $x \in X$ and $\xi, \eta \in \partial_{g_i} X$ let $\angle_i \xi x \eta$ denote the Riemannian
angle between the $g_i$-geodesic rays $[x, \xi)_i, [x, \eta)_i$ at the point $x$ with respect to the metric $g_i$.
We can now prove the main theorem:
\noindent{\bf Proof of Theorem \ref{mainthm}:} As remarked earlier, it suffices to show that the constant $M = 0$,
where $d_{\mathcal{M}}( f_* \rho^{g_0}_x, \rho^{g_1}_{F(x)} ) = M$ for all $x \in X$. Fix $\epsilon > 0$; we will show that $M \leq \epsilon$.
Fix a basepoint $x_0 \in X$ and choose $R > 0$
such that the support of $g_1 - g_0$ is contained in the $g_0$-ball of radius $R$ around $x_0$, and
let $B$ denote the closed $g_0$-ball of radius $R$ around $x_0$. Fix $\xi_0, \eta_0 \in \partial_{g_0} X$ such that $x_0 \in (\xi_0, \eta_0)_0$,
let $\gamma : \mathbb{R} \to X$ be the unique unit speed $g_0$-geodesic such that $\gamma(-\infty) = \xi_0, \gamma(0) = x_0, \gamma(+\infty) = \eta_0$.
For $t > R$ let $x_t \in X$ denote the point $\gamma(t)$, and define $\epsilon_t > 0$ by
$$
\epsilon_t := \sup \{ \angle_0 \xi x_t \xi_0 \mid \xi \in \mathcal{O}_0(x_t, B) \}
$$
Then it follows from Lemma \ref{smallshadow} and Lemma \ref{anglecomp} that $\epsilon_t \to 0$ as $t \to +\infty$.
Let $K_t \subset \partial_{g_0} X$ denote the set where the function $\frac{d\rho^{g_0}_{x_t}}{df^*\rho^{g_1}_{F(x_t)}}$ attains its maximum value $e^M$.
Let $C_t \subset T_{x_t} X$ denote the cone
$$
C_t := \{ v \in T_{x_t} X \mid \langle v, \overrightarrow{x_t \xi_0}^0 \rangle_{g_0} \geq \cos(\epsilon_t) \|v\|_{g_0} \}
$$
and let $D_t := \{ -v \in T_{x_t} X | v \in C_t \}$. Then for $\xi \in \partial_{g_0} X$, if $\overrightarrow{x_t \xi}^0 \notin C_t \cup D_t$,
then $\xi, i^{g_0}_{x_t}(\xi) \notin \mathcal{O}_0(x_t, B)$.
Moreover, for $v, w \in C_t$
and $\alpha, \beta \geq 0$ we have $\alpha v + \beta w \in C_t$. Now if $\xi, \eta \in \partial_{g_0} X$ are such that $\overrightarrow{x_t\xi}^0 \in C_t$
and $\overrightarrow{x_t\eta}^0 \in D_t$, then by the triangle inequality
$$
\rho^{g_0}_{x_t}(\xi, \eta) \geq 1 - \rho^{g_0}_{x_t}(\xi, \xi_0) - \rho^{g_0}_{x_t}(\eta, \eta_0)
$$
and by Lemma \ref{anglecomp} we have
$$
\rho^{g_0}_{x_t}(\xi, \xi_0)^b \leq \sin(\epsilon_t/2), \rho^{g_0}_{x_t}(\eta, \eta_0)^b \leq \sin(\epsilon_t/2),
$$
so since $\epsilon_t \to 0$ as $t \to +\infty$, by choosing $t > R$ large enough we may assume that
$$
\rho^{g_0}_{x_t}(\xi, \eta) \geq e^{-\epsilon}
$$
whenever $\xi, \eta \in \partial_{g_0} X$ are such that $\overrightarrow{x_t\xi}^0 \in C_t$
and $\overrightarrow{x_t\eta}^0 \in D_t$. We fix such a $t > R$ for which this holds.
As stated in section 3, there exists a probability measure $\mu$ on $\partial_{g_0} X$ with support contained in $K_t$ such that $\mu$ is balanced
at $x_t \in (X, g_0)$, equivalently the convex hull in $T_{x_t} X$ of the compact set $\{ \overrightarrow{x_t \xi}^0 | \xi \in K_t \}$ contains the
origin of $T_{x_t} X$. By the classical Carath\'eodory theorem on convex hulls, it follows that there exist distinct points $\xi_1, \dots, \xi_k \in K_t$ and
$\alpha_1, \dots, \alpha_k > 0$ such that $\alpha_1 \overrightarrow{x_t\xi_1}^0 + \dots + \alpha_k \overrightarrow{x_t\xi_k}^0 = 0$ and
$\alpha_1 + \dots + \alpha_k = 1$, where $1 \leq k \leq n+1$ (here $n$ is the dimension of $X$). Note that since the vectors $\overrightarrow{x_t \xi_i}^0$
are nonzero, we must have $k \geq 2$. We now consider various cases:
\noindent{\bf Case 1.} $k = 2$:
Then since $\overrightarrow{x_t \xi_1}^0, \overrightarrow{x_t\xi_2}^0$ are unit vectors, the relation
$\alpha_1 \overrightarrow{x_t\xi_1}^0 + \alpha_2 \overrightarrow{x_t\xi_2}^0 = 0$ implies that
$\overrightarrow{x_t \xi_1}^0 = - \overrightarrow{x_t \xi_2}^0$, hence $\xi_2 = i^{g_0}_{x_t}(\xi_1)$. By Lemma \ref{maxminflip}, the function
$\frac{d\rho^{g_0}_{x_t}}{df^*\rho^{g_1}_{F(x_t)}}$ attains its minimum at $\xi_2$, so since $\xi_2 \in K_t$, the maximum and minimum of the function
$\frac{d\rho^{g_0}_{x_t}}{df^*\rho^{g_1}_{F(x_t)}}$ are equal, thus $e^M = e^{-M}$, and so $M = 0$ as required.
\noindent{\bf Case 2.} $k \geq 3$, and there exist $1 \leq i \neq j \leq k$ such that $\overrightarrow{x_t \xi_i}^0, \overrightarrow{x_t \xi_j}^0 \in T_{x_t} X - (C_t \cup D_t)$:
In this case, $\xi_i, i^{g_0}_{x_t}(\xi_i), \xi_j, i^{g_0}_{x_t}(\xi_j) \in \partial_{g_0} X - \mathcal{O}_o(x_t, B)$.
It follows from Lemma \ref{conjid} that the points
$z_i := \pi(\phi(\overrightarrow{x_t\xi_i}^0)), z_j := \pi(\phi(\overrightarrow{x_t\xi_j}^0))$ satisfy $z_i = x_t = z_j$. Thus the $g_1$-geodesics $(f(\xi_i), f(i^{g_0}_{x_t}(\xi_i)))_1$ and
$(f(\xi_j), f(i^{g_0}_{x_t}(\xi_j)))_1$ intersect at the point $x_t$. On the other hand, by Lemma \ref{maxminflip}, the geodesics
$(f(\xi_i), f(i^{g_0}_{x_t}(\xi_i)))_1$ and
$(f(\xi_j), f(i^{g_0}_{x_t}(\xi_j)))_1$ intersect at the point $F(x_t)$. If $\xi_j \neq i^{g_0}_{x_t}(\xi_i)$, then the geodesics
$(f(\xi_i), f(i^{g_0}_{x_t}(\xi_i)))_1$ and $(f(\xi_j), f(i^{g_0}_{x_t}(\xi_j)))_1$ have a unique point of intersection, thus $x_t = F(x_t)$, and by
Lemma \ref{maxminflip} we have
$$
M = d_{\mathcal{M}}( f_* \rho^{g_0}_{x_t}, \rho^{g_1}_{F(x_t)} ) = d_{g_1}(z_i, F(x_t)) = d_{g_1}(x_t, x_t) = 0
$$
If on the other hand $\xi_j = i^{g_0}_{x_t}(\xi_i)$, then the same argument as in Case 1 above shows that $M = 0$. Thus in either case $M = 0$.
\noindent{\bf Case 3.} $k \geq 3$, and $\overrightarrow{x_t\xi_i}^0 \in T_{x_t} X - (C_t \cup D_t)$ for at most one $i \in \{ 1, \dots, k \}$:
Then relabelling the $\xi_i$'s if necessary, we may assume that $\overrightarrow{x_t\xi_1}^0, \dots, \overrightarrow{x_t\xi_{k - 1}}^0 \in C_t \cup D_t$.
Now if $\overrightarrow{x_t\xi_1}^0, \dots, \overrightarrow{x_t\xi_{k - 1}}^0 \in C_t$, then
$\alpha_1 \overrightarrow{x_t\xi_1}^0 + \dots + \alpha_{k-1} \overrightarrow{x_t\xi_{k - 1}}^0 \in C_t$ and it follows that $\overrightarrow{x_t \xi_k}^0 \in D_t$.
Similarly if $\overrightarrow{x_t\xi_1}^0, \dots, \overrightarrow{x_t\xi_{k - 1}}^0 \in D_t$, then we must have $\overrightarrow{x_t \xi_k}^0 \in C_t$. Thus either way,
there exist $1 \leq i \neq j \leq k$ such that $\overrightarrow{x_t \xi_i}^0 \in C_t$ and $\overrightarrow{x_t \xi_j}^0 \in D_t$. Let $\eta = i^{g_0}_{x_t}(\xi_i),
\eta' = i^{g_0}_{x_t}(\xi_j)$, then $\overrightarrow{x_t \eta}^0 \in D_t$ and $\overrightarrow{x_t \eta'}^0 \in C_t$, and by Lemma \ref{maxminflip}, the function
$\frac{d\rho^{g_0}_{x_t}}{df^*\rho^{g_1}_{F(x_t)}}$ attains its minimum value $e^{-M}$ at the points $\eta, \eta'$. Now by our hypothesis on $t$ we have
$$
\rho^{g_0}_{x_t}( \eta, \eta') \geq e^{-\epsilon}.
$$
We then have
\begin{align*}
e^{-2\epsilon} & \leq \rho^{g_0}_{x_t}( \eta, \eta')^2 \\
& = \frac{d\rho^{g_0}_{x_t}}{df^*\rho^{g_1}_{F(x_t)}}(\eta) \frac{d\rho^{g_0}_{x_t}}{df^*\rho^{g_1}_{F(x_t)}}(\eta') f^*\rho^{g_1}_{F(x_t)}(\eta, \eta')^2 \\
& \leq e^{-M} \cdot e^{-M} \cdot 1
\end{align*}
thus $e^{-2\epsilon} \leq e^{-2M}$, hence $M \leq \epsilon$.
Since Cases 1, 2, 3 above exhaust all possibilities, it follows that $M \leq \epsilon$ for any given $\epsilon > 0$, thus $M = 0$ as required. $\diamond$
\end{document} |
\begin{document}
\title{Precision-Weighted Federated Learning}
\author{Jonatan Reyes}
\email{[email protected]}
\affiliation{
\institution{Concordia University}
\city{Montreal}
\country{Canada}
}
\author{Lisa Di Jorio}
\email{[email protected]}
\affiliation{
\institution{Imagia Cybernetics Inc.}
\city{Montreal}
\country{Canada}
}
\author{Cecile Low-Kam}
\email{[email protected]}
\affiliation{
\institution{Imagia Cybernetics Inc.}
\city{Montreal}
\country{Canada}
}
\author{Marta Kersten-Oertel}
\email{[email protected]}
\affiliation{
\institution{Concordia University}
\city{Montreal}
\country{Canada}
}
\renewcommand{\shortauthors}{Reyes et al.}
\begin{abstract}
Federated Learning using the Federated Averaging algorithm has shown great advantages for large-scale applications that rely on collaborative learning, especially when the training data is either unbalanced or inaccessible due to privacy constraints. We hypothesize that Federated Averaging underestimates the full extent of heterogeneity of data when the aggregation is performed. We propose \emph{Precision-weighted Federated Learning}\footnote{A US provisional patent application has been filed for protecting at least one part of the innovation disclosed in this article}, a novel algorithm that takes into account the second raw moment (uncentered variance) of the stochastic gradient when computing the weighted average of the parameters of independent models trained in a Federated Learning setting. With Precision-weighted Federated Learning, we address the communication and statistical challenges for the training of distributed models with private data and provide an alternate averaging scheme that leverages the heterogeneity of the data when it has a large diversity of features in its composition. Our method was evaluated using three standard image classification datasets (MNIST, Fashion-MNIST, and CIFAR-10) with two different data partitioning strategies (independent and identically distributed (IID), and non-identical and non-independent (non-IID)) to measure the performance and speed of our method in resource-constrained environments, such as mobile and IoT devices. The experimental results demonstrate that we can obtain a good balance between computational efficiency and convergence rates with Precision-weighted Federated Learning. Our performance evaluations show $9\%$ better predictions with MNIST, $18\%$ with Fashion-MNIST, and $5\%$ with CIFAR-10 in the non-IID setting. Further reliability evaluations confirm the stability of our method, which reaches a 99\% reliability index with IID partitions and 96\% with non-IID partitions.
In addition, we obtained a $20x$ speedup on Fashion-MNIST with only 10 clients and up to $37x$ with 100 clients participating in the aggregation concurrently per communication round. The results indicate that Precision-weighted Federated Learning is an effective and faster alternative approach for aggregating private data, especially in domains where data is highly heterogeneous.
\end{abstract}
\maketitle
\keywords{Federated Learning, Federated Averaging, Precision-weighted Federated Learning, Aggregation Algorithms, Privacy and Security Preserving}
\section{Introduction}
Machine learning based on distributed deep neural networks (DNNs) has gained significant traction in both research and industry~\cite{lecun2015deep, najafabadi2015deep}, with many applications in IoT, for mobile devices, and in the automobile sector. For example, IoT devices and sensors can be protected from web attacks during the exchanging of data between the device and web services (or data stores) in the cloud \cite{tian2019distributed}. Mobile devices use distributed learning models to assist in vision tasks for automatic corner detection in photographs~\cite{rosten2006machine}, prediction tasks for text entry \cite{yang2018applied}, and recognition tasks for image matching and speech recognition \cite{lecun2015deep}. Alternatively, modern automobiles utilize distributed machine learning models to improve drivers' experience, vehicle's self-diagnostics and reporting capabilities \cite{johanson2014big}.
Despite the benefits provided by distributed machine learning, data privacy and data aggregation raise concerns in various resource-constrained domains. For example, the communication costs incurred when updating deep learning models on mobile devices are high for most users, as their internet bandwidth is typically low. In addition, the data used during the training of models on mobile devices is privacy-sensitive, and operations on raw data outside the device are susceptible to attack. One solution is to use secure protocols \cite{bonawitz2016practical} or differential-privacy guarantees \cite{dwork2014algorithmic, melis2019exploiting} to ensure that data is transferred between clients and servers safely. Another solution is to use data aggregation for distributed DNNs, mitigating the need to transfer data to a central data store. With this solution, learning occurs at the client level, where models are optimized locally across the distributed clients. This approach is termed \emph{Federated Learning} \cite{mcmahan2016communication}.
McMahan \emph{et al.} \cite{mcmahan2016communication} introduced the notion of Federated Learning in a distributed setting of mobile devices. Their developed \emph{Federated Averaging} algorithm uses numerous communication rounds where all participating devices send their local learning parameters, i.e. DNN weights, to be aggregated in a central server in order to create a global shared model. Once the global model is computed, it is distributed to every client replacing the current deep learning model. Since only the global model is communicated in these rounds, data aggregation is achieved, even though the client's raw data never leaves the device. Given such a setup, individual clients can collaboratively learn an averaged shared model without compromising confidentiality. This makes Federated Learning a promising solution to the analysis of privacy-sensitive data distributed across multiple clients.
In McMahan \emph{et al.'s} original paper, the local learning parameters on each client are aggregated by the central server and the global model is maintained with the \emph{weighted average} of these parameters \cite{mcmahan2016communication}. There are a few potential statistical shortcomings with this type of averaging method. If we consider that the aggregation of weights across multiple clients is similar to a meta-analysis, which synthesizes the effects of diversity across multiple studies, then variation across the population should be considered. Meta-analysis is a quantitative method that combines results from different studies on the same topic in order to draw a general conclusion and to evaluate the consistency among study findings \cite{hedges_olkin_1985, nakagawa2012methodological}. There is compelling evidence that combining data from different sources without accounting for variation across the sources leads to misleading interpretation of results and a reduction of statistical power \cite{ioannidis2007heterogeneity, bangdiwala2016statistical, lin2010meta}.
\subsection{Hypotheses}
In this paper, we build on the work of McMahan \emph{et al.}\cite{mcmahan2016communication}, and propose the \emph{Precision-weighted Federated Learning} algorithm, a novel \emph{variance-based} averaging scheme to aggregate model weights across clients. The proposed method penalizes the model uncertainty at the client level to improve the robustness of the centralized model, regardless of the data distribution: independent and identically distributed (IID) or non-identical and non-independent (non-IID). Our approach makes use of the uncentered variance of the gradient estimator from the Adam optimizer \cite{kingma2014adam} to compute the weighted average at each communication step (Figure \ref{fig:aggregation_weights}).
We hypothesize that the Federated Averaging algorithm underestimates the full extent of heterogeneity in domains where data is complex, with a large diversity of features in its composition. More specifically, we hypothesize that: (1) Precision-weighted Federated Learning can leverage individual intra-variability when averaging multiple sources to improve performance when the training data is highly heterogeneous across sources, and (2) it can harness individual intra-variability when averaging multiple sources to accelerate the learning process, especially when data is highly heterogeneous across sources.
To test these hypotheses, we compared the performance of the original Federated Averaging algorithm against the Precision-weighted method in a number of image classification tasks using the MNIST \cite{lecun1998gradient}, Fashion-MNIST \cite{xiao2017fashion}, and CIFAR-10 \cite{krizhevsky2009learning} datasets.
\begin{figure}
\caption{Aggregation of weights and variance via Precision-weighted Federated Learning: local models are trained across clients \emph{(Left)}
\label{fig:aggregation_weights}
\end{figure}
\subsection{Contributions}
The contributions of this paper are threefold: (1) We propose a novel algorithm for the averaging of distributed models using the estimated variance of the stochastic gradient computed independently by each client in a Federated Learning environment; (2) We provide extensive evaluations of the method using benchmark image classifications datasets demonstrating its robustness to unbalanced and non-IID data distributions; and (3) We compare the method to Federated Averaging on empirical experiments, and with fewer communication rounds we obtain comparable accuracy on IID distributions, greater accuracy on non-IID distributions, and more stable accuracy over communication rounds over all distributions.
\section{Related Work}
There is an increasing concern for aggregation of private data in the data mining domain, particularly when models require access to a client's data in order to improve their accuracy \cite{agrawal2000privacy, lindell2000privacy}. Data privacy and data aggregation are thus concerns that are actively being investigated for both centralized \cite{xu2014information, verykios2004state} and decentralized (or distributed) \cite{mcmahan2016communication} data environments.
The method proposed in this paper is dedicated to the aggregation of weights for DNNs with decentralized data. It is, therefore, important to observe the communication challenges addressed in previous work, mainly the security and protection of data, and the reduction of the steps needed in communication cycles. Bonawitz \emph{et al.} \cite{bonawitz2016practical} proposed a complementary approach to Federated Learning: a communication-efficient secure aggregation protocol for high-dimensional data. In Bonawitz \emph{et al.}'s work, Federated Learning was used in the training of DNN models for mobile devices using secure aggregation algorithms to protect the data residing on individual mobile devices. On the other hand, Kone\v{c}n\'{y} \emph{et al.} \cite{konevcny2016federated} presented two optimization algorithms (structural and sketched updates) to reduce the communication cost in the training of deep neural networks in a federation of participant mobile devices.
As well as communication challenges, the aggregation of data in decentralized environments is impacted by statistical challenges, especially when the training data is non-IID. Smith \emph{et al.} \cite{smith2017federated} highlighted the fact that data across a DNN is often non-IID distributed; that is, each participant updating the shared model in a Federated Learning setting generates a distinct distribution of data. One way to handle data heterogeneity is by using multi-task learning (MTL) frameworks. Smith \emph{et al.} created the MOCHA framework, which enables the analysis of data variability in a Federated MTL. However, as noted by Zhao \emph{et al.} \cite{zhao2018federated}, the Federated MTL is not comparable with the original work on Federated Learning as the proposed framework does not apply to non-convex deep learning models. In the same paper, Zhao \emph{et al.} proposed a data-sharing strategy to improve test-accuracy when data is non-IID. This method requires a small subset of data consisting of a uniform distribution to be shared across clients. Although promising results can be achieved with this method, the shared subset of data may not always be available, especially when data is highly sensitive in nature. Other methods explore the statistical challenges of Federated Learning by creating synthetic data using Dirichlet distributions with different concentration parameters. This technique allows the creation of more realistic non-IID data distributions at the client level, which are used to examine the effects on aggregations carried out with the Federated Learning algorithm \cite{zhao2018federated, hsu2019measuring}.
There is a diverse body of work that further explores collaborative learning, data sharing and data preservation across multiple data centers. Note that all of these methods are substantially different than the original work on Federated Learning. Although some yield comparable or even better results than Federated Learning they lack empirical observations with non-IID data. Chang \emph{et al.} \cite{chang2018distributed} addressed the problem of distributed learning on medical data and compared five heuristics: separate training on subsets, training on pooled data, weight averaging, and weight transfer (single and cyclical transfer). Of all these heuristics, training on pooled data has the best prediction performance and training on cyclical weight transfer achieved comparable testing accuracy to that of centrally trained models. Xu \emph{et al.} \cite{xu2018collaborative} introduced a collaborative deep learning (co-learning) method for the training of a shared global model using a cyclical learning rate schedule mixed with an incremental number of epochs. Their results demonstrate that the method is comparable with data centralized learning. Lalitha \emph{et al.} \cite{lalitha2018fullydecentralized} trained a model over a network of devices without a centralized controller. However, the users could communicate locally with their closest neighbors. The performance of the proposed algorithm on two users matches the performance of an algorithm trained by a central user with access to all data. They left a full empirical evaluation for future research. Chen \emph{et al.} \cite{chen2018federated} proposed a Federated Meta-Learning framework for the training of recommended systems. The framework permits data sharing at the algorithm level, preserves data privacy, and reports an increase of 12.4\% in accuracy compared with previous results. 
Kim \emph{et al.} \cite{kim2018keep} addressed the problem of catastrophic forgetting (the tendency of neural networks to discard knowledge of previously learned tasks while learning new ones) in a distributed learning environment on clinical data and introduced an approach for knowledge preservation. Similarly, Bui \emph{et al.} \cite{bui2018partitioned} unify continual learning and Federated Learning in a partitioned variational inference framework. Vepakomma \emph{et al.} \cite{vepakomma2018splitlearning} introduced \textit{split learning}, which addresses challenges specific to health data, such as different modalities across clients, no label sharing, and semi-supervised learning.
In the field of genetics, genome-wide association studies aim to identify genetic variants associated to phenotypes of interest. As the effect of these variants on phenotypes is usually moderate, individual hospital studies are under-powered to detect them with confidence and a growing number of consortia are created to combine data across studies. As patient genotypes are privacy sensitive, these consortia use \textit{meta-analyses} to aggregate summary statistics from multiple studies. This increases the statistical power of finding a mutation related to a phenotype, while protecting the privacy of individual genotypes. Lin and Zeng \cite{lin2010meta} demonstrated that meta-analyses achieve comparable efficiency as analyses of pooled individual participants under mild assumptions. This proximity with the distributed learning setting motivated us to create Precision-weighted Federated Learning, an averaging approach that applies a meta-analysis weighting scheme to aggregate the effects of the variances of the weights generated during training of the neural network.
\section{PRECISION-WEIGHTED FEDERATED LEARNING}
The Precision-weighted Federated Learning approach combines the weights from each client into a globally shared model, where the aggregation is achieved by averaging the weights, each weighted by the inverse of its estimated variance. We will use the same notation as the Federated Averaging algorithm \cite{mcmahan2016communication} to describe the implementation of the proposed method. We consider the general objective
\begin{eqnarray}
\min_{w \in \mathbb{R}^d} f(w) & \text{with} & f(w) = \frac{1}{n} \sum_{i=1}^n f_i(w),
\label{eq:objective}
\end{eqnarray}
where $n$ is the number of data examples and $f_i(w) = \ell(x_i, y_i; w)$ is the loss of the prediction on example $(x_i, y_i)$ made with model parameters $w$. If the data is partitioned over $K$ clients, McMahan \emph{et al.} rewrite the objective of Equation \ref{eq:objective} as
\begin{eqnarray}
f(w) = \sum_{k=1}^K \frac{n_k}{n} F_k(w) & \text{with} & F_k(w) = \frac{1}{n_k} \sum_{i \in \mathcal{P}_k} f_i(w),
\end{eqnarray}
where $\mathcal{P}_k$ is the set of indexes of data examples on client $k$ and $n_k = |\mathcal{P}_k |$.
Under a uniform distribution of training examples over the clients (the \emph{IID assumption}), the expectation of the client-specific loss $F_k(w)$ is $f(w)$. In a non-IID setting, however, this result does not hold \cite{mcmahan2016communication}.
The corresponding stochastic gradient descent for optimization with a fixed learning rate $\eta$ consists in computing the gradient
\begin{eqnarray}
g_k & = & \nabla F_k(w_t)
\end{eqnarray}
for each client $k$ at iteration $t$, and applying the two successive updates
\begin{eqnarray}
w_{t+1}^k & \leftarrow & w_t - \eta g_k \text{ for all } k
\end{eqnarray}
\begin{eqnarray}
w_{t+1} & \leftarrow & \sum_{k=1}^K \frac{n_k}{n} w_{t+1}^k.
\label{eq:global_update}
\end{eqnarray}
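As a toy illustration, the weighted global average above can be sketched in a few lines of NumPy (the client parameter vectors and dataset sizes below are made-up numbers, not results from the paper):

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """Global update of Federated Averaging: a weighted average of client
    parameter vectors, each weighted by its share n_k / n of the data."""
    n = sum(client_sizes)
    return sum((n_k / n) * w_k
               for w_k, n_k in zip(client_weights, client_sizes))

# Toy round with 3 clients, each holding a 2-parameter model.
client_weights = [np.array([1.0, 2.0]),
                  np.array([3.0, 4.0]),
                  np.array([5.0, 6.0])]
client_sizes = [100, 100, 200]   # n_k: number of examples per client

w_global = fedavg_aggregate(client_weights, client_sizes)
# Shares are 0.25, 0.25, 0.5, so w_global = [3.5, 4.5]
```

Each client's contribution is proportional to its share of the data, so the third client, holding half the examples, pulls the average halfway towards its parameters.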
With Precision-weighted Federated Learning the global update of Equation \ref{eq:global_update} is replaced by
\begin{eqnarray}
w_{t+1} \leftarrow \sum_{k=1}^K \frac{\left(v^k_{t+1}\right)^{-1}}{\sum_{k=1}^K \left(v^k_{t+1}\right)^{-1}} w_{t+1}^k
\label{eq:pw}
\end{eqnarray}
where $v^k_{t+1}$ denotes the variance of the maximum likelihood estimator of weight $w$ at iteration $t+1$ for client $k$. This inverse-variance weighting scheme used in Equation \ref{eq:pw} corresponds to the fixed-effect model used in meta-analyses. Intuitively, this method takes the uncertainty of each client into account in the aggregated result and uses the estimated variance to penalize model uncertainty at the client level: models with high estimated variance have a smaller impact on the aggregation result at the current communication round. Although $v^k_{t+1}$ is inversely proportional to the sample size, it is a more nuanced summary, as it captures additional uncertainty about the client's weights.
To estimate the inverse of the variance of the maximum likelihood estimator, we use the raw second moment estimate (uncentered variance) from the Adam optimizer \cite{kingma2014adam}, which approximates the diagonal of the Fisher information matrix \cite{pascanu2014natural}. Our experiments show that this approximation manages to capture the uncertainty of the weights in practice.
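A minimal sketch of the update in Equation \ref{eq:pw}, assuming per-parameter variance estimates are available from each client (the small `eps` guard against division by zero is our addition, not part of the paper's formula):

```python
import numpy as np

def precision_weighted_aggregate(client_weights, client_variances):
    """Average client parameters with per-parameter weights proportional
    to the inverse of the estimated variance v_k (Equation (pw) sketch)."""
    eps = 1e-12  # numerical guard, not in the paper's formula
    inv_var = [1.0 / (v + eps) for v in client_variances]
    total = sum(inv_var)
    return sum(iv / total * w for iv, w in zip(inv_var, client_weights))

# Two clients: the second is twice as uncertain about both parameters,
# so it contributes half as much to the average.
w = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
v = [np.array([1.0, 1.0]), np.array([2.0, 2.0])]
w_global = precision_weighted_aggregate(w, v)
# Inverse-variance weights are 2/3 and 1/3, so w_global is close to [1.0, 1.0]
```

Note that unlike Federated Averaging, the weights here are computed per parameter, so a client can dominate the average for the coordinates it estimates confidently while contributing little elsewhere.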
\section{METHODOLOGY}\label{sec:methodology}
We tested the Precision-weighted Federated Learning method under different data distributions for image classification tasks. The baseline we use is the Federated Averaging approach. Firstly, we explore the performance of our method in resource-constrained environments, applicable to areas where memory is limited, such as mobile and IoT devices. Next, we present a scenario in which we investigate the speedup of our method as a function of the number of clients participating in the aggregation of weights. Finally, we present the analysis for the generalization of the global model when the parameter variance is applied to the aggregation of parameters of all the models in the distributed learning process.
Since the statistics of the data are influenced by the way it is distributed across clients, we tested the proposed methodology with both IID and non-IID data distributions. To create these scenarios, we distributed the training data across individual clients in two configurations (see Section \ref{ss:4.2}). The complexity of the image recognition problems was increased in agreement with the methodology proposed by Scheidegger \emph{et al.} \cite{scheidegger2018efficient}, and therefore MNIST, Fashion-MNIST and CIFAR-10 were used as benchmarks. Furthermore, we utilized modest convolutional architectures to
compare training speed and optimal convergence between our method and Federated Averaging, and to explain the effects of variance in the generalization of the centralized model. All of the experiments were executed on an NVIDIA Tesla V100 Graphics Processing Unit.
\subsection{Datasets}
\noindent \textbf{MNIST}: The MNIST dataset consists of 70,000 gray-scale images (28 x 28 pixels in size), divided into 60,000 training and 10,000 test samples. The images are grouped into 10 classes corresponding to the handwritten digits from zero to nine.
\noindent \textbf{CIFAR-10}: The CIFAR-10 dataset consists of 60,000 color images (32 x 32 pixels in size), divided into a training set of 50,000 and a test set of 10,000 images. Images in CIFAR-10 are grouped into 10 mutually exclusive classes of animals and vehicles: airplanes, automobiles, birds, cats, deer, dogs, frogs, horses, ships, and trucks.
\noindent \textbf{Fashion-MNIST}: The Fashion-MNIST dataset contains the same number of samples, image dimensions, and number of classes in its training and test sets as MNIST; however, the images are of clothing items (e.g. t-shirts, coats, dresses, and sandals).
\subsection{Data Distributions}\label{ss:4.2}
\noindent \textbf{IID}: With IID data distribution the number of classes and the number of samples per class were assigned to clients with a uniform distribution. We shuffled the training data and created one partition per client with an equal number of samples per class. For example, 10 clients receive 600 samples per class. Figure \ref{fig:data_distribution} \textit{(Top)} shows an example with 5 clients and 4 classes.
\noindent \textbf{Non-IID}: With this data partition, at most two classes are assigned per client. This is similar to the partition shown in \cite{mcmahan2016communication} used to explore the limits of the Federated Averaging approach, which we now use to test and compare our algorithm under similar circumstances. In this extreme scenario, the number of samples per class per client is evenly distributed, creating a balanced scenario (Figure \ref{fig:data_distribution} \textit{(Bottom)}).
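The two partitioning strategies can be sketched as follows (a simplified illustration with toy labels; the shard-based non-IID split follows the spirit of McMahan \emph{et al.}'s construction rather than their exact procedure):

```python
import numpy as np

def partition_iid(labels, num_clients, seed=0):
    """IID split: shuffle all indices and divide them evenly across clients."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(labels))
    return np.array_split(idx, num_clients)

def partition_non_iid(labels, num_clients, shards_per_client=2, seed=0):
    """Non-IID split: sort indices by label, cut them into contiguous
    shards, and hand each client a few shards so it sees only a small
    number of classes."""
    order = np.argsort(labels, kind="stable")
    shards = np.array_split(order, num_clients * shards_per_client)
    rng = np.random.default_rng(seed)
    shard_ids = rng.permutation(len(shards))
    return [np.concatenate(
                [shards[s] for s in
                 shard_ids[c * shards_per_client:(c + 1) * shards_per_client]])
            for c in range(num_clients)]

# Toy setting matching the figure: 4 classes, 5 clients, 100 samples.
labels = np.repeat(np.arange(4), 25)
iid_parts = partition_iid(labels, 5)
noniid_parts = partition_non_iid(labels, 5)
```

Both functions return disjoint index lists covering the whole training set; only the label composition per client differs between the two schemes.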
\begin{figure}
\caption{IID}
\label{fig:subim1}
\caption{non-IID}
\label{fig:subim2}
\caption{Example of data distributions for 4 classes and 5 clients.}
\label{fig:data_distribution}
\end{figure}
\subsection{Convolutional Neural Networks}
The architectures used in our experiments were CNNs trained from scratch. All networks were based on the Keras sequential model, trained with the Adam optimizer and a categorical cross-entropy objective function.
For MNIST and Fashion-MNIST, the architecture of the first network consisted of two convolutional layers using 3x3 kernels (each with 32 convolution filters). A rectified linear unit (ReLU) activation is applied after each convolution, followed by 2x2 max pooling to reduce the spatial dimension, a dropout layer to prevent overfitting, and a fully-connected dense layer (with 128 units and a ReLU activation), leading to a final softmax output layer (600,810 total parameters). The network was trained from scratch using partitions of the training data, and the final model was evaluated on the test set.
A second network was used to train our models on the CIFAR-10 dataset. The architecture consisted of one 3x3 convolutional layer (with 32 filters and a ReLU activation), followed by 2x2 max pooling and a batch normalization layer; a second 3x3 convolutional layer (with 64 filters and a ReLU activation), followed by a batch normalization layer and 2x2 max pooling; a dropout layer; two fully-connected dense layers (with 1024 and 512 units, respectively, each with a ReLU activation); another dropout layer; and a final softmax output layer (4,225,354 total parameters).
\subsection{Adam and the Weighted-Variance Callback}
A key component in the formulation of the weighted average algorithm is the estimation of the individual intra-variability expressed during the training on local data. As the training of the model proceeds, we capture the weights' variances via the second raw moment (uncentered variance) of the stochastic gradient from the Adam optimizer and use it in the construction of the Precision-weighted Federated Learning algorithm. To access the internal statistics of the model during training, we use a callback function that averages the variance estimators over the second half of the last epoch. The last epoch is chosen as it provides a more accurate prediction of the variance of the final weights.
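For illustration, the raw second moment tracked by Adam is an exponential moving average of squared gradients. The following standalone NumPy sketch (not the paper's actual Keras callback) shows how a noisier parameter ends up with a larger second moment, and hence a smaller precision weight in the aggregation:

```python
import numpy as np

def adam_second_moment(grads, beta2=0.999):
    """Track Adam's raw second moment v_t (uncentered variance of the
    gradient): an exponential moving average of squared gradients,
    with the usual bias correction applied at the end."""
    v = np.zeros_like(grads[0])
    for g in grads:
        v = beta2 * v + (1.0 - beta2) * g ** 2
    T = len(grads)
    return v / (1.0 - beta2 ** T)   # bias-corrected estimate

# Simulate gradients for two parameters: the second coordinate is ten
# times noisier, so its second moment comes out roughly 100x larger.
rng = np.random.default_rng(0)
grads = [rng.normal(0.0, [0.1, 1.0]) for _ in range(1000)]
v_hat = adam_second_moment(grads)
# v_hat[1] is much larger than v_hat[0]
```

In the actual method, these per-parameter estimates are read out of the optimizer during the last epoch and sent to the server alongside the weights, where their inverses form the aggregation weights of Equation \ref{eq:pw}.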
\section{RESULTS}
This section presents the results of our model predictions trained with the two data partitioning strategies described in Section \ref{sec:methodology}, and demonstrates the limits and practical application of the proposed method. All of our experiments use different random seeds to randomize the order of observations during the training of the local models. As noted in McMahan \emph{et al.'s} paper, averaging federated models trained from different initial conditions leads to poor results. Thus, in order to avoid the drastic loss of accuracy observed with independent initialization of models for general non-convex objectives, each local model was trained using a shared random initialization for the first round of communication. After the first round of communication, all local models were initialized with the global model aggregated in the previous round.
\subsection{Evaluating Computational Resources}
Experiments with MNIST and Fashion-MNIST were conducted using 500 rounds of communication, 1 epoch, and batch sizes of 10, 25, 50, 100, and 200. Similarly, experiments with CIFAR-10 were executed for 500 rounds of communication with 10 epochs and the same batch sizes. All of the training samples of each dataset were distributed among 10 clients.
The comparison of test-accuracy between the Federated Averaging and Precision-weighted Federated Learning aggregation methods using IID partitions is given in Table~\ref{tab:iid}. Given this setup, test-accuracy scores are comparable with those obtained using Federated Averaging; however, our method is more stable. When we analyze the results for MNIST and Fashion-MNIST, we observe that test-accuracy values are consistent across batch sizes. The accuracy curves of Precision-weighted Federated Learning and Federated Averaging for these datasets are shown in Figure \ref{fig:iid_mnist_fashion} \emph{(a)} and \emph{(b)}. Alternatively, CIFAR-10 models trained with $B = 10$ using Precision-weighted Federated Learning show an improvement of 12\% (Figure \ref{fig:iid_mnist_fashion} \emph{(c)}) with more stable predictions. This improved accuracy on CIFAR-10 could indicate that there is greater heterogeneity in models trained on natural images than in models trained on grayscale images, even in an IID setting.
\begin{table*}[ht!]
\centering
\caption{Comparison of test-accuracy results (IID data distributions)}
\label{tab:iid}
\begin{tabular}{p{2cm}|cc|cc|cc}
\toprule
& \multicolumn{2}{c|}{MNIST} &
\multicolumn{2}{c|}{Fashion-MNIST} &
\multicolumn{2}{c}{CIFAR-10} \\
& FedAvg & PW &
FedAvg & PW &
FedAvg & PW \\
\midrule
B = 10 &
$0.99 \pm 0.002$ &
$0.99 \pm 0.002$ &
$0.93 \pm 0.009$ &
$0.93 \pm 0.008$ &
$0.69 \pm 0.045$ &
$\textbf{0.77} \pm \textbf{0.019}$ \\
B = 25 &
$0.99 \pm 0.002$ &
$0.99 \pm 0.002$ &
$0.93 \pm 0.010$ &
$0.93 \pm 0.010$ &
$0.77 \pm 0.004$ &
$0.77 \pm 0.018$ \\
B = 50 &
$0.99 \pm 0.003$ &
$0.99 \pm 0.003$ &
$0.93 \pm 0.011$ &
$0.93 \pm 0.011$ &
$0.76 \pm 0.023$ &
$0.76 \pm 0.013$ \\
B = 100 &
$0.99 \pm 0.004$ &
$0.99 \pm 0.004$ &
$0.93 \pm 0.013$ &
$0.93 \pm 0.012$ &
$0.76 \pm 0.014$ &
$0.76 \pm 0.011$\\
B = 200 &
$0.99 \pm 0.006$ &
$0.99 \pm 0.006$ &
$0.93 \pm 0.016$ &
$0.93 \pm 0.015$ &
$0.76 \pm 0.014$ &
$0.76 \pm 0.011$\\
\hline
\multicolumn{7}{r}{\small Averaged results using 1 epoch (MNIST and Fashion-MNIST) and 10 epochs (CIFAR-10) } \\
\end{tabular}
\end{table*}
\begin{figure}
\caption{Test-accuracy for Federated Averaging (FedAvg) and Precision-weighted Federated Learning (PW) using IID data distributions. \emph{(a)} MNIST (B = 50); \emph{(b)} Fashion-MNIST (B = 50); \emph{(c)} CIFAR-10 (B = 10).}
\label{fig:iid_mnist_fashion}
\end{figure}
As discussed in the introduction, we hypothesized improvements in the performance of models whose training data is highly heterogeneous in nature. The comparison of performance using non-IID data partitions is given in Table~\ref{tab:non-iid}. As we observe, both methods perform poorly with a batch size of $B=10$, most notably Precision-weighted Federated Learning, which is more sensitive to the noise present in the input images. This behavior of Federated Averaging is comparable with other related work in Federated Learning \cite{zhao2018federated}, and its effects are also visible in Precision-weighted Federated Learning. However, with larger batch sizes, higher test-accuracy and more stable predictions are obtained, starting from the first round regardless of the dataset (Figure \ref{fig:noniid}). This indicates that the variance estimates are effectively used to compute a weighted average, resulting in a more effective penalization of the model's uncertainty at the client level. For MNIST, our method obtains increases in test-accuracy of up to 9\% with $B = 200$. The results for Fashion-MNIST show the highest increment of 18\% in test-accuracy with $B = 200$. Similarly, the highest accuracy on CIFAR-10 improves by 5\% with $B = 100$. These results demonstrate that our first hypothesis is confirmed only when models are trained with a batch size of $B = 25$ or higher.
\begin{table*}[t!]
\caption{Comparison of test-accuracy results (non-IID data distributions)}
\label{tab:non-iid}
\begin{tabular}{p{2cm}|cc|cc|cc}
\toprule
& \multicolumn{2}{c|}{MNIST} &
\multicolumn{2}{c|}{Fashion-MNIST} &
\multicolumn{2}{c}{CIFAR-10} \\
& FedAvg & PW &
FedAvg & PW &
FedAvg & PW \\
\midrule
B = 10 &
$0.98 \pm 0.026$ &
$0.98 \pm 0.014$ &
$0.85 \pm 0.028$ &
$0.82 \pm 0.031$ &
$0.34 \pm 0.052$ &
$0.16 \pm 0.052$ \\
B = 25 &
$0.97 \pm 0.029$ &
$\textbf{0.98} \pm \textbf{0.028}$ &
$0.85 \pm 0.048$ &
$0.85 \pm 0.024$ &
$0.51 \pm 0.053$ &
$\textbf{0.53} \pm \textbf{0.054}$ \\
B = 50 &
$0.97 \pm 0.053$ &
$\textbf{0.98} \pm \textbf{0.035}$ &
$0.79 \pm 0.048$ &
$\textbf{0.86} \pm \textbf{0.035}$ &
$0.58 \pm 0.048$ &
$\textbf{0.60} \pm \textbf{0.027}$ \\
B = 100 &
$0.95 \pm 0.071$ &
$\textbf{0.98} \pm \textbf{0.055}$ &
$0.77 \pm 0.046$ &
$\textbf{0.86} \pm \textbf{0.040}$ &
$0.56 \pm 0.059$ &
$\textbf{0.59} \pm \textbf{0.041}$\\
B = 200 &
$0.90 \pm 0.083$ &
$\textbf{0.98} \pm \textbf{0.058}$ &
$0.73 \pm 0.052$ &
$\textbf{0.86} \pm \textbf{0.050}$ &
$0.59 \pm 0.07$ &
$0.59 \pm 0.049$ \\
\hline
\multicolumn{7}{r}{\small Averaged results using 1 epoch (MNIST and Fashion-MNIST) and 10 epochs (CIFAR-10) } \\
\end{tabular}
\end{table*}
\begin{figure}
\caption{Test-accuracy on CIFAR-10 increases with larger batch sizes under non-IID partitions. Aggregation methods: Federated Averaging (FedAvg) and Precision-weighted Federated Learning (PW).}
\label{fig:noniid}
\end{figure}
\subsection{Reliability}
The reliability index is an important element to consider when evaluating the performance of machine learning systems. In this study, we compute the reliability index defined in \cite{maniruzzaman2018accurate}, which is based on the ratio of the standard deviation of the test-accuracy to its mean value, as shown in Equation \ref{eq:reliability-index}.
\begin{eqnarray}
\xi_k(\%) = \left( 1 - \frac{\sigma_n}{\mu_n} \right) \times 100
\label{eq:reliability-index}
\end{eqnarray}
\noindent where $\sigma_n$ is the standard deviation and $\mu_n$ is the mean of the test-accuracy scores per batch. Consequently, the overall system reliability index can be computed by averaging all of the reliability indexes, as expressed in Equation \ref{eq:overall-reliability-index}. Table \ref{tab:reliability-index} reports the computed reliability index per batch size and the overall system stability, which confirms that Precision-weighted Federated Learning reaches optimal performance, except for CIFAR-10 in a non-IID setting. This is due to the sensitivity of our method to small batch sizes, which compromises performance.
\begin{eqnarray}
\xi(\%) = \frac{1}{K} \sum_{k=1}^{K} \xi_k
\label{eq:overall-reliability-index}
\end{eqnarray}
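For instance, the MNIST entry for $B = 10$ in Table \ref{tab:reliability-index} can be reproduced directly from the tabulated mean and standard deviation:

```python
def reliability_index(mean_acc, std_acc):
    """Per-batch reliability index: xi_k(%) = (1 - sigma/mu) * 100."""
    return (1.0 - std_acc / mean_acc) * 100.0

def overall_reliability(indexes):
    """Overall system reliability: the average of the per-batch indexes."""
    return sum(indexes) / len(indexes)

# MNIST, B = 10 under FedAvg: test-accuracy 0.99 +/- 0.002 (Table above)
xi = reliability_index(0.99, 0.002)
print(round(xi, 1))  # 99.8
```

A smaller spread relative to the mean drives the index toward 100\%, so the index rewards stable predictions rather than raw accuracy.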
\begin{table*}[ht]
\centering
\caption{Reliability index across batches (IID data distributions)}
\label{tab:reliability-index}
\begin{tabular}{p{2cm}|cc|cc|cc}
\toprule
& \multicolumn{2}{c|}{MNIST} &
\multicolumn{2}{c|}{Fashion-MNIST} &
\multicolumn{2}{c}{CIFAR-10} \\
& FedAvg & PW &
FedAvg & PW &
FedAvg & PW \\
\midrule
B = 10 &
$99.80$ &
$99.80$ &
$99.03$ &
$99.14$ &
$93.47$ &
$97.52$ \\
B = 25 &
$99.80$ &
$99.80$ &
$98.92$ &
$98.92$ &
$94.83$ &
$97.67$ \\
B = 50 &
$99.70$ &
$99.70$ &
$98.81$ &
$98.81$ &
$96.99$ &
$98.29$ \\
B = 100 &
$99.60$ &
$99.60$ &
$98.60$ &
$98.70$ &
$98.17$ &
$98.55$ \\
B = 200 &
$99.40$ &
$99.40$ &
$98.27$ &
$98.38$ &
$98.29$ &
$98.55$ \\
\hline
&
$99.66$ &
$99.66$ &
$98.73$ &
$\textbf{98.79}$ &
$96.35$ &
$\textbf{98.11}$ \\
\end{tabular}
\end{table*}
\begin{table*}[ht]
\centering
\caption{Reliability index across batches (non-IID data distributions)}
\label{tab:reliability-index-noniid}
\begin{tabular}{p{2cm}|cc|cc|cc}
\toprule
& \multicolumn{2}{c|}{MNIST} &
\multicolumn{2}{c|}{Fashion-MNIST} &
\multicolumn{2}{c}{CIFAR-10} \\
& FedAvg & PW &
FedAvg & PW &
FedAvg & PW \\
\midrule
B = 10 &
$97.34$ &
$98.57$ &
$96.71$ &
$96.22$ &
$84.62$ &
$67.70$ \\
B = 25 &
$97.02$ &
$97.13$ &
$94.34$ &
$97.17$ &
$89.57$ &
$89.87$ \\
B = 50 &
$94.54$ &
$96.42$ &
$93.95$ &
$95.94$ &
$91.72$ &
$95.49$ \\
B = 100 &
$92.56$ &
$94.37$ &
$94.00$ &
$95.37$ &
$89.52$ &
$93.09$ \\
B = 200 &
$90.75$ &
$94.06$ &
$92.89$ &
$94.16$ &
$88.03$ &
$91.75$ \\
\hline
&
$94.44$ &
$\textbf{96.11}$ &
$94.38$ &
$\textbf{95.77}$ &
$88.69$ &
$87.58$ \\
\end{tabular}
\end{table*}
\subsection{Increasing Participating Clients}
Inspired by McMahan \emph{et al.'s} original paper \cite{mcmahan2016communication}, we experiment with the client fraction $C$, which controls the amount of multi-client parallelism. In this regard, we investigate the number of communication rounds necessary to achieve target test-accuracies of 75\%, 80\%, and 85\% for models trained on Fashion-MNIST. For this purpose, the predictive models used a fixed batch size $B = 100$ and epoch $E = 1$. The training data was split into 100 participants, and we evaluated the speed with 10, 20, 50, and 100 clients participating in the aggregation in parallel.
Table \ref{tab:client-speedup} provides the number of communication rounds needed to reach the aforementioned test-accuracy scores, as well as the corresponding speedups. We observe a negative correlation: an increase in the number of participants reduces the number of communication rounds required. This behavior is in alignment with McMahan \emph{et al.'s} work in \cite{mcmahan2016communication}. Given this setup, Precision-weighted Federated Learning misses the first target with 10 and 50 clients, but it reaches subsequent target scores up to $20x$ faster with 10 clients and $37x$ faster with 100 clients participating concurrently. Thus, we see that with a small client fraction ($C = 0.1$; that is, 10 clients per round), a good balance between computational efficiency and convergence rate can be obtained.
\begin{table*}[t]
\caption{Number of rounds and speedup relative to Federated Averaging to reach different test-accuracy values on Fashion-MNIST.}
\label{tab:client-speedup}
\begin{tabular}{|l|c c|c c|c c|c c|}
\toprule
& \multicolumn{2}{c|}{C = 0.1} &
\multicolumn{2}{c|}{C = 0.2} &
\multicolumn{2}{c|}{C = 0.5} &
\multicolumn{2}{c|}{C = 1.0} \\
ACC & FedAvg & PW &
FedAvg & PW &
FedAvg & PW &
FedAvg & PW\\
\midrule
75\% & 47 & 50 &
65 & 35 (19x) &
21 & 23 &
17 & 12 (14x)\\
80\% & 149 & 125 (12x) &
134 & 66 (20x) &
153 & 57 (27x) &
44 & 27 (16x)\\
85\% & 641 & 319 (20x) &
671 & 225 (30x) &
473 & 279 (17x) &
286 & 78 (37x)\\
\bottomrule
\end{tabular}
\end{table*}
\subsection{Variance Analysis}
In this paper we demonstrated that combining widely disparate sources can hide important features useful for discrimination, leading to limitations in the collaborative learning experience. Owing to this, Precision-weighted Federated Learning uses the inverse of the estimated variance to compute a weighted average. Given Equation \ref{eq:pw}, the algorithm operates under the assumption that weights with large variance estimates across sources reduce the quality of the analysis and therefore should have a smaller impact on the aggregation.
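Equation \ref{eq:pw} is defined earlier in the paper; the sketch below assumes the standard inverse-variance (precision) weighting, in which each client's weight vector is scaled by the reciprocal of its estimated variance and the result is normalized.

```python
import numpy as np

def precision_weighted_average(client_weights, client_variances, eps=1e-8):
    """Aggregate per-client weight vectors, down-weighting clients whose
    weights carry large estimated variance (i.e., low precision)."""
    precisions = [1.0 / (v + eps) for v in client_variances]
    numerator = sum(p * w for p, w in zip(precisions, client_weights))
    denominator = sum(precisions)
    return numerator / denominator

# Two clients: with equal variances this reduces to a plain average.
w1, w2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
equal = precision_weighted_average([w1, w2], [np.ones(2), np.ones(2)])
print(equal)  # [0.5 0.5]

# A client with much larger variance contributes far less to the result.
skewed = precision_weighted_average([w1, w2], [np.ones(2), 100 * np.ones(2)])
print(skewed)  # dominated by w1
```

The small `eps` term guards against division by zero and is an implementation detail of this sketch, not part of the paper's formulation.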
To explain the effects of variance on the generalization of the global model using Precision-weighted Federated Learning, we trained 4 clients with a fixed batch size $B = 10$ and epoch $E = 1$ for 100 communication rounds. The CIFAR-10 training data was distributed among three clients with IID partitions and a single client with a non-IID partition (Client 1). In this setup, three clients receive a large number of training samples per class, whereas the remaining client receives a considerably smaller number of training samples (Figure \ref{fig:unbalanced-experiment}). This maximizes the expression of variation across clients.
After model training and before the aggregation, we average the inverse of the estimated variance of the stochastic gradient per client and plot it in Figure \ref{fig:variance}. Given its small number of training samples, the amount of intra-variability computed for Client 1 is significantly smaller than that of the other models. Consequently, the inverse of the variance for this client is high and the penalization of its weights is greater. This behavior is evident from the beginning of the learning cycle and causes a reduction of the inverse of the variance as training continues.
\begin{figure}
\caption{Class distribution per client. \emph{(a)} Client 1, using a non-IID unbalanced partition; \emph{(b)} Clients 2, 3, and 4, using IID partitions.}
\label{fig:unbalanced-experiment}
\end{figure}
As noted above, the inverse of the variance for Client 1 is high, so the penalization of its weights is greater (Equation \ref{eq:pw}). This behavior is evident from the first communication round and results in better generalization of the global model. Alternatively, models trained on larger numbers of samples provide weights of higher quality, and their penalization is minimal. Figure \ref{fig:cat-plot-layers} shows the inverse of the estimated variance per weight and client. In this view, we can identify \textit{conv2d/bias} and \textit{conv2d/kernel} as the layers with the highest mean inverse variance. This suggests that, for these layers, the Adam optimizer could not capture the most prominent characteristics of the training data due to the limited number of training passes.
\begin{figure}
\caption{Effect of variance in the generalization of the global model. Each data point represents the mean of the inverse variances per client at a given communication round. Data points in the "Mean of Estimated Variance" graph were normalized between 0 and 1.}
\label{fig:variance}
\end{figure}
\begin{figure}
\caption{Category plots showing the dispersion of clients per layer at the first round. Data points in the "Mean of Estimated Variance" graph were normalized between 0 and 1.}
\label{fig:cat-plot-layers}
\end{figure}
\section{DISCUSSION}
Federated Learning is a promising solution for the analysis of privacy-sensitive data distributed globally across clients. At the core of Federated Learning is Federated Averaging, an aggregation algorithm that consolidates the weighted average of distributed machine learning models into a global model shared with every client participating in the learning cycle. In this paper, we hypothesized that Federated Averaging underestimates the full extent of heterogeneity of data across participants, leading to a reduction in the statistical power and quality of predictions, and thus proposed Precision-weighted Federated Learning. Our method averages the weights of individual sources scaled by the inverse of the estimated variance. When weighting machine learning models differently, it must be noted that different aggregation algorithms may yield different results under different circumstances. Our method shows the greatest advantages when the data is highly heterogeneous across clients.
Our first hypothesis postulates that not accounting for variation across clients may lead to a reduction in statistical power when combining data from multiple sources. We confirmed this hypothesis by showing that models trained with batch size $B \geq 25$ and Precision-weighted Federated Learning can obtain a $2\%$ improvement on MNIST, $14\%$ on Fashion-MNIST, and $9\%$ on CIFAR-10 using non-IID partitions. Nevertheless, the presented algorithm can still be improved. With a batch size of $B = 10$, our method is sensitive to the noise introduced by individual sources, which degrades its performance. These results illustrate the limits of our algorithm. Alternatively, when we compare our method on models trained with IID partitions, it shows results comparable to those of Federated Averaging. This suggests that the variance estimates were small due to the large number of training samples and the uniform distribution of classes among participants, leading to more confident predictions.
Our second hypothesis addresses convergence speed and supports the idea that the estimated variance can capture a better representation of the intricate features dispersed across sources, resulting in an acceleration of the learning process. We confirmed this hypothesis by demonstrating that our method reaches test-accuracy targets faster, and with fewer communication rounds between targets, than Federated Averaging. On Fashion-MNIST, we also obtained a $24x$ speedup over Federated Averaging (with only 10 clients trained in parallel). This suggests that our method reduces the communication costs required between rounds. Although it is possible to achieve higher test-accuracy by using more complex state-of-the-art architectures, our goal in this study was to explore the statistical challenges, especially when the training data is non-IID. Therefore, we measured the performance of both aggregation methods with simple network architectures.
Although the aggregation of model parameters, rather than raw individual client data, represents a significant step towards privacy preservation, the Precision-weighted Federated Learning algorithm remains vulnerable to inference attacks, as the model parameters still contain information about the data. This is a limitation of the general Federated Learning protocol and is not exclusive to our approach. Recently, Geyer \emph{et al.} \cite{geyer2017differentiallyprivate} and Truex \emph{et al.} \cite{truex2018hybrid} introduced frameworks that preserve client-level differential privacy. However, Melis \emph{et al.} demonstrated that privacy guarantees at the client level are achieved at the expense of model performance and are only effective when the number of clients participating in the aggregation is significantly large, thousands or more \cite{melis2019exploiting}. Owing to this, in future work we will examine the behavior and performance of the Precision-weighted Federated Learning scheme combined with differentially private Federated Learning \cite{dwork2014algorithmic, wei2020federated}.
\section{CONCLUSION}
In this paper we presented a novel aggregation algorithm for computing the weighted average of distributed DNN models trained in a Federated Learning environment. It does not require sharing raw private data. Instead, the algorithm uses the second raw moment (uncentered variance) of the stochastic gradient, estimated by the Adam optimizer, to compute the weighted average of distributed machine learning models. Precision-weighted Federated Learning was benchmarked on MNIST, Fashion-MNIST, and CIFAR-10 using two data distribution strategies (IID and non-IID). When compared to Federated Averaging, the algorithm was shown to provide significant advantages when the data is highly heterogeneous across clients, and showed comparable test-accuracy when the data is uniformly distributed across clients. This demonstrates that including the variability across models in the aggregation results in a more effective and faster option for averaging distributed machine learning models trained on complex data with a large diversity of features. With these advantages, Precision-weighted Federated Learning shows promise for comprehensive exploratory analyses of sensitive biomedical data distributed across medical centers. Thus, in future work we will examine the feasibility of this method in medical image classification tasks.
\end{document} |
\begin{document}
\title{Guaranteed inference in topic models}
\author{\name Khoat Than \email [email protected] \\
\addr Hanoi University of Science and Technology, 1, Dai Co Viet road, Hanoi, Vietnam.
\AND
\name Tung Doan \email phongtung\[email protected] \\
\addr{Hanoi University of Science and Technology, 1, Dai Co Viet road, Hanoi, Vietnam.}
}
\editor{}
\maketitle
\begin{abstract}
One of the core problems in statistical models is the estimation of a posterior distribution. For topic models, the problem of posterior inference for individual texts is particularly important, especially when dealing with data streams, but is often intractable in the worst case \citep{SontagR11}. As a consequence, existing methods for posterior inference are approximate and have no guarantee on either quality or convergence rate. In this paper, we introduce a provably fast algorithm, namely \textit{Online Maximum a Posteriori Estimation (OPE)}, for posterior inference in topic models. OPE has more attractive properties than existing inference approaches, including theoretical guarantees on quality and a fast rate of convergence to a local maximal/stationary point of the inference problem. The discussions about OPE are very general and hence can be easily employed in a wide range of contexts. Finally, we employ OPE to design three methods for learning Latent Dirichlet Allocation from text streams or large corpora. Extensive experiments demonstrate some superior behaviors of OPE and of our new learning methods.
\end{abstract}
\begin{keywords}
Topic models, posterior inference, online MAP estimation, theoretical guarantee, stochastic methods, non-convex optimization
\end{keywords}
\section{Introduction}
Latent Dirichlet allocation (LDA) \citep{BNJ03} is the class of Bayesian networks that has arguably gained the most significant interest. It has found successful applications in a wide range of areas, including text modeling \citep{Blei2012introduction}, bioinformatics \citep{PritchardSD2000population,LiuLT10miRNA}, history \citep{Mimno2012historiography}, politics \citep{Grimmer2010Political,GerrishB2012vote}, and psychology \citep{Schwartz+2013Personality}, to name a few.
One of the core issues in LDA is the estimation of posterior distributions for individual documents. The research community has been studying many approaches for this estimation problem, such as variational Bayes (VB) \citep{BNJ03}, collapsed variational Bayes (CVB) \citep{TehNW2007collapsed}, CVB0 \citep{Asuncion+2009smoothing}, and collapsed Gibbs sampling (CGS) \citep{GriffithsS2004,MimnoHB12}. Those approaches enable us to easily work with millions of texts
\citep{MimnoHB12,Hoffman2013SVI,Foulds2013stochastic}. The quality of LDA in practice is determined by the quality of the inference method being employed. However, none of the mentioned methods has a theoretical guarantee on quality or convergence rate. This is a major drawback of existing inference methods.
Our first contribution in this paper is the introduction of a provably efficient algorithm, namely \emph{Online Maximum a Posteriori Estimation (OPE)}, for doing posterior inference of topic mixtures in LDA. This inference problem is in fact nonconvex and is NP-hard \citep{SontagR11,Arora+2016infer}. Our new algorithm is stochastic in nature and theoretically converges to a local maximal/stationary point of the inference problem. We prove that OPE converges at a rate of $O({1/T})$, which surpasses the best rate of existing stochastic algorithms for nonconvex problems \citep{Mairal2013stochasticNonconvex,Ghadimi2013stochasticNonconvex}, where $T$ is the number of iterations. Hence, OPE overcomes many drawbacks of VB, CVB, CVB0, and CGS. Those properties help OPE to be preferable in many contexts, and to provide us real benefits when using OPE in a wide class of probabilistic models.
The topic modeling literature has seen a fast growing interest in designing large-scale learning algorithms \citep{MimnoHB12,ThanH2012fstm,Broderick2013streaming,Foulds2013stochastic,Patterson2013stochastic,Hoffman2013SVI,ThanD14dolda,SatoN2015SCVB0}. Existing algorithms allow us to easily analyze millions of documents. Those developments are of great significance, even though the posterior estimation is often intractable. Note that the performance of a learning method heavily depends on its core inference subroutine. Therefore, existing large-scale learning methods seem to likely remain some of the drawbacks from VB, CVB, CVB0, and CGS.
Our second contribution in this paper is the introduction of 3 stochastic algorithms for learning LDA at a large scale: \emph{Online-OPE}, which is online learning; \emph{Streaming-OPE}, which is streaming learning; and \emph{ML-OPE}, which is regularized online learning.\footnote{A slight variant of ML-OPE was shortly presented in \citep{ThanD14dolda} under a different name of DOLDA.} These algorithms own the stochastic nature when learning global variables (topics), and employ OPE as the core for inferring local variables for individual texts, which is also stochastic. They overcome many drawbacks of existing large-scale learning methods owing to the preferable properties of OPE. From extensive experiments we find that Online-OPE, Streaming-OPE, and ML-OPE often reach a high predictiveness level very fast, and are able to consistently increase the predictiveness of the learned models as more data is observed. In particular, while Online-OPE surpasses the state-of-the-art methods, ML-OPE often learns tens to thousands of times faster than existing methods to reach the same predictiveness level. Therefore, our new methods are efficient tools for analyzing text streams or big collections.
\textsc{Organization:} in the next section we briefly discuss related work. In Section \ref{sec-LDA-infer-theta}, we present the OPE algorithm for doing posterior inference. We also analyze the convergence property. We further compare OPE with existing inference methods, and discuss how to employ it in other contexts. Section \ref{sec-Dolda} presents three stochastic algorithms for learning LDA from text streams or big text collections. Practical behaviors of large-scale learning algorithms and OPE will be investigated in Section \ref{sec-Evaluation}. The final section presents some conclusions and discussions.
\textsc{Notation:}
Throughout the paper, we use the following conventions and notations. Bold faces denote vectors or matrices. $x_i$ denotes the $i^{th}$ element of vector $\mbf{x}$, and $A_{ij}$ denotes the element at row $i$ and column $j$ of matrix $\mbf{A}$. The unit simplex in the $n$-dimensional Euclidean space is denoted as $\Delta_n = \{ \mbf{x} \in \mathbb{R}^n: \mbf{x} \ge 0, \sum_{k=1}^{n} x_k = 1 \}$, and its interior is denoted as $\overline{\Delta}_n$. We will work with text collections with $V$ dimensions (dictionary size). Each document $\mbf{d}$ will be represented as a frequency vector, $\mbf{d} = (d_1, ..., d_V)^T$ where $d_j$ represents the frequency of term $j$ in $\mbf{d}$. Denote $n_d$ as the length of $\mbf{d}$, i.e., $n_d = \sum_j d_j$. The inner product of vectors $\mbf{u}$ and $\mbf{v}$ is denoted as $\left<\mbf{u}, \mbf{v}\right>$.
\section{Related work}\label{sec-related-work}
Notable inference methods for probabilistic topic models include VB, CVB, CVB0, and CGS. Except VB \citep{BNJ03}, most other methods originally have been developed for learning topic models from data. Fortunately, one can adapt them to do posterior inference for individual documents \citep{ThanH15sparsity}. Other good candidates for doing posterior inference include \emph{Concave-Convex procedure} (CCCP) by \cite{Yuille2003cccp}, \emph{Stochastic Majorization-Minimization} (SMM) by \cite{Mairal2013stochasticNonconvex}, \emph{Frank-Wolfe} (FW) \citep{Clarkson2010}, Online Frank-Wolfe (OFW) \citep{Hazan2012OFW}, and \emph{Thresholded Linear Inverse} (TLI) which has been newly developed by \cite{Arora+2016infer}.
Few methods have an explicit theoretical guarantee on inference quality and convergence rate. In spite of being popularly used in topic modeling, we have not seen any theoretical analysis about how fast VB, CVB, CVB0, and CGS do inference for individual documents. One might employ CCCP \citep{Yuille2003cccp} and SMM \citep{Mairal2013stochasticNonconvex} to do inference in topic models. Those two algorithms are guaranteed to converge to a stationary point of the inference problem. However, the convergence rate of CCCP and SMM is unknown for non-convex problems which are inherent in LDA and many other models. Each iteration of CCCP has to solve a (non-linear) equation system, which is expensive and non-trivial in many cases. Furthermore, up to now those two methods have not been investigated rigorously in the topic modeling literature.
It is worth discussing about FW \citep{ThanH15sparsity}, OFW \citep{Hazan2012OFW}, and TLI \citep{Arora+2016infer}, the three methods with theoretical guarantees on quality. FW is a general method for convex programming \citep{Clarkson2010}. \cite{ThanH15sparsity,ThanH2012fstm} find that it can be effectively used to do inference for topic models. OFW is an online version of FW for convex problems whose objective functions come partly in an online fashion. One important property of FW and OFW is that they can converge fast and return sparse solutions. Nonetheless, FW and OFW only work with convex problems, and thus require some special settings/modifications for topic models. On the other hand, TLI has been proposed recently to do exact inference for individual texts. This is the only inference method which is able to recover solutions exactly under some assumptions. TLI requires that a document should be very long, and the topic matrix should have a small condition number. Those conditions might not always be present in practice. Therefore TLI is quite limited and should be improved further.
Two other algorithms for MAP estimation with provable guarantees are \emph{Particle Mirror Descent} (PMD) \citep{Dai2016pmd} and HAMCMC \citep{Simsekli2016stochastic}. Both algorithms are based on sampling to estimate a posterior distribution. Therefore they can be used to do posterior inference for topic models. PMD is shown to converge at a rate of $\mathcal{O}(T^{-1/2})$, while HAMCMC converges at a rate of $\mathcal{O}({T}^{-1/3})$ as suggested by \cite{Teh2016consistencySGLD}.\footnote{In fact \cite{Simsekli2016stochastic} provide an explicit bound on the error as $\mathcal{O}(1/\sum_{t=1}^T \epsilon_t)$, where $\epsilon_t$ defines the step-size of their algorithm. This error bound will go to zero as $T$ goes to infinity. However, the authors did not provide any explicit error bound which directly depends on $T$.} Those are significant developments for Bayesian inference. However, their effectiveness in topic modeling is unclear at the time of writing this article.
In this work, we propose OPE for doing posterior inference. Unlike CCCP and SMM, OPE is guaranteed to converge very fast to a local maximal/stationary point of the inference problem. The convergence rate of OPE is faster than that of PMD and HAMCMC. Each iteration of OPE requires modest arithmetic operations and thus OPE is significantly more efficient than CCCP, SMM, PMD, and HAMCMC. Having an explicit guarantee helps OPE to overcome many limitations of VB, CVB, CVB0, and CGS. Further, OPE is so general that it can be easily employed in a wide range of contexts, including MAP estimation and non-convex optimization. Therefore, OPE overcomes some drawbacks of FW, OFW, and TLI. Table \ref{table 1: theoretical comparison} presents more details to compare OPE and various inference methods.
\section{Posterior inference with OPE} \label{sec-LDA-infer-theta}
LDA \citep{BNJ03} is a generative model for modeling texts and discrete data. It assumes that a corpus is composed from $K$ topics $\mbf{\beta}_1, ..., \mbf{\beta}_K$, each of which is a sample from a $V$-dimensional Dirichlet distribution, $Dirichlet(\eta)$. A document $\mbf{d}$ arises from the following generative process:
\begin{enumerate}
\item Draw $\mbf{\theta}_d | \alpha \sim Dirichlet(\alpha)$
\item For the $n^{th}$ word of $\mbf{d}$:
\begin{itemize}
\item[-] draw topic index $z_{dn} | \mbf{\theta}_d \sim Multinomial(\mbf{\theta}_d)$
\item[-] draw word $w_{dn}| z_{dn}, \mbf{\beta} \sim Multinomial(\mbf{\beta}_{z_{dn}})$.
\end{itemize}
\end{enumerate}
Each topic mixture $\mbf{\theta}_d = (\theta_{d1}, ..., \theta_{dK})$ represents the contributions of topics to document $\mbf{d}$, while $\beta_{kj}$ shows the contribution of term $j$ to topic $k$. Note that $\mbf{\theta}_d \in \Delta_K, \mbf{\beta}_k \in \Delta_V, \forall k$. Both $\mbf{\theta}_d$ and $\mbf{z}_d$ are unobserved variables and are local for each document.
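For intuition, the generative process above can be sketched in a few lines of Python (a toy simulation with a random topic matrix, purely for illustration):

```python
import numpy as np

def generate_document(beta, alpha, n_words, rng):
    """Sample one document from the LDA generative process.

    beta  : (K, V) topic-word matrix, each row on the simplex Delta_V
    alpha : scalar parameter of a symmetric Dirichlet prior
    """
    K, V = beta.shape
    theta = rng.dirichlet(alpha * np.ones(K))        # topic mixture theta_d
    z = rng.choice(K, size=n_words, p=theta)         # topic index z_dn per token
    words = np.array([rng.choice(V, p=beta[k]) for k in z])  # word w_dn
    return theta, z, words

rng = np.random.default_rng(0)
beta = rng.dirichlet(0.1 * np.ones(20), size=5)      # K=5 toy topics over V=20 terms
theta, z, words = generate_document(beta, alpha=0.5, n_words=50, rng=rng)
```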
According to \cite{TehNW2007collapsed}, the task of \emph{Bayesian inference (learning)} given a corpus $\mathcal{C} = \{\mbf{d}_1, ..., \mbf{d}_M\}$ is to estimate the posterior distribution $p(\mbf{z, \theta, \beta} | \mathcal{C}, \alpha, \eta)$ over the latent topic indices $\mbf{z} = \{\mbf{z}_1, ..., \mbf{z}_M\}$, topic mixtures $\mbf{\theta} = \{\mbf{\theta}_1, ..., \mbf{\theta}_M\}$, and topics $\mbf{\beta} = (\mbf{\beta}_1, ..., \mbf{\beta}_K)$. \emph{The problem of posterior inference} for each document $\mbf{d}$, given a model $\{\mbf{\beta}, \alpha\}$, is to estimate the full joint distribution $p(\mbf{z}_d, \mbf{\theta}_d, \mbf{d} | \mbf{\beta}, \alpha)$. Direct estimation of this distribution is intractable, so existing approaches use different schemes. VB, CVB, and CVB0 try to estimate the distribution by maximizing a lower bound on the likelihood $p(\mbf{d} | \mbf{\beta}, \alpha)$, whereas CGS \citep{MimnoHB12} tries to estimate $p(\mbf{z}_d | \mbf{d}, \mbf{\beta}, \alpha)$. For a detailed discussion and comparison of those methods, the reader may refer to \cite{ThanH15sparsity}.
\subsection{MAP inference of topic mixtures}
We now consider the MAP estimation of topic mixture for a given document $\mbf{d}$:
\begin{equation} \label{eq1}
\mbf{\theta}^* = \arg \max_{\mbf{\theta} \in \Delta_K} \Pr(\mbf{\theta}, \mbf{d}|\mbf{\beta},\alpha) = \arg \max_{\mbf{\theta} \in \Delta_K} \Pr(\mbf{d}|\mbf{\theta},\mbf{\beta}) \Pr(\mbf{\theta}|\alpha).
\end{equation}
\cite{ThanH15sparsity} show that this problem is equivalent to the following one:
\begin{equation} \label{eq2}
\mbf{\theta}^* = \arg \max_{\mbf{\theta} \in \Delta_K} \sum_j d_j \log\sum_{k = 1}^K\theta_k\beta_{kj} + (\alpha - 1)\sum_{k = 1}^K \log\theta_k.
\end{equation}
\cite{SontagR11} showed that this problem is NP-hard in the worst case when $\alpha < 1$. When $\alpha \ge 1$, one can easily show that problem (\ref{eq2}) is concave, and therefore it can be solved in polynomial time. Unfortunately, in practical uses of LDA the parameter $\alpha$ is often small, say $\alpha < 1$, causing (\ref{eq2}) to be non-concave. That is why (\ref{eq2}) is intractable in the worst case.
We present a novel algorithm (OPE) for inferring the topic mixtures of documents. The idea of OPE is quite simple: it solves problem (\ref{eq2}) by iteratively finding a good vertex of $\overline{\Delta}_K = \{\mbf{x} \in \mathbb{R}^K: \sum_{k=1}^{K} x_k = 1, \mbf{x} \ge \epsilon > 0 \}$ to improve its solution. The vertex at each iteration is chosen by assessing stochastic approximations to the gradient of the objective function $f(\mbf{\theta})$. As the number of iterations goes to infinity, OPE approaches a local maximal/stationary point of problem (\ref{eq2}). Details of OPE are presented in Algorithm \ref{alg:OPE-infer}.
\begin{algorithm}[tp]
\caption{OPE: Online maximum a posteriori estimation}
\label{alg:OPE-infer}
\begin{algorithmic}
\STATE {\bfseries Input: } document $\boldsymbol{d}$, and model $\{\mbf{\beta}, \alpha\}$.
\STATE {\bfseries Output:} $\boldsymbol{\theta}$ that maximizes
$ f(\boldsymbol{\theta}) = \sum_j d_j \log \sum_{k=1}^K \theta_k \beta_{kj} + (\alpha-1) \sum_{k=1}^{K} \log \theta_k.$
\STATE Initialize $\mbf{\theta}_1$ arbitrarily in $\overline{\Delta}_K = \{\mbf{x} \in \mathbb{R}^K: \sum_{k=1}^{K} x_k = 1, \mbf{x} \ge \epsilon > 0 \}$.
\FOR{ $t = 1, ..., \infty$}
\STATE Pick $f_{t}$ uniformly from $\;\;\; \{\sum_j d_j \log \sum_{k=1}^K \theta_{k} \beta_{kj}; \;\; (\alpha-1) \sum_{k=1}^{K} \log \theta_k \}$
\STATE $F_{t} := \frac{2}{t} \sum_{h=1}^{t} f_h$
\STATE $\mbf{e}_t := \arg \max_{\mbf{x} \in \overline{\Delta}_K} \left<F'_{t}(\boldsymbol{\theta}_{t}), \mbf{x} \right> \;\;\;\;$ (the vertex of $\overline{\Delta}_K$ that follows the maximal gradient)
\STATE $\boldsymbol{\theta}_{t +1} := \boldsymbol{\theta}_{t} + (\boldsymbol{e}_{t} - \boldsymbol{\theta}_{t}) /t$
\ENDFOR
\end{algorithmic}
\end{algorithm}
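A minimal NumPy sketch of Algorithm \ref{alg:OPE-infer} might look as follows. For simplicity it uses the unit vertices of the simplex rather than the $\epsilon$-truncated ones, a fixed iteration budget, and a final renormalization; these are implementation choices of this sketch, not part of the algorithm's specification:

```python
import numpy as np

def ope(d_counts, beta, alpha, n_iters=50, eps=1e-10, rng=None):
    """MAP inference of a topic mixture by OPE (a sketch of Algorithm 1).

    d_counts : (V,) term-frequency vector of the document
    beta     : (K, V) topic matrix with rows on the simplex
    """
    if rng is None:
        rng = np.random.default_rng()
    K = beta.shape[0]
    theta = np.full(K, 1.0 / K)        # arbitrary start in the simplex
    counts = np.zeros(2)               # a_t, b_t: how often g1 / g2 were picked
    for t in range(1, n_iters + 1):
        counts[rng.integers(2)] += 1   # pick f_t uniformly from {g1, g2}
        # gradient of F_t = (2/t) * (a_t*g1 + b_t*g2) at theta_t
        g1_grad = beta @ (d_counts / (beta.T @ theta))  # gradient of the likelihood part
        g2_grad = (alpha - 1.0) / theta                 # gradient of the prior part
        grad = (2.0 / t) * (counts[0] * g1_grad + counts[1] * g2_grad)
        e = np.zeros(K)
        e[np.argmax(grad)] = 1.0       # vertex maximizing <F_t'(theta_t), x>
        theta += (e - theta) / t       # theta_{t+1} := theta_t + (e_t - theta_t)/t
        theta = np.maximum(theta, eps) # keep theta strictly positive
    return theta / theta.sum()

rng = np.random.default_rng(0)
beta = rng.dirichlet(np.ones(30), size=4)          # K=4 toy topics over V=30 terms
d = rng.integers(0, 5, size=30).astype(float)      # a toy term-count vector
theta = ope(d, beta, alpha=0.1, rng=rng)
```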
\subsection{Convergence analysis}
In this section, we prove the convergence of OPE, stated in Theorem \ref{the9}. We need the following observations.
\begin{lemma} \label{lem01}
Let $\{X_1, X_2, ...\}$ be a sequence of i.i.d.\ uniform random variables on $\{-1, 1\}$. (Each $X_i$ is also known as a Rademacher random variable.) The following hold for the sequence $S_n = X_1 + X_2 + \cdots + X_n$:
\begin{enumerate}
\item $\frac{S_n}{n} \rightarrow 0$ as $n \rightarrow +\infty$.
\item There exist constants $v \in [0, 1)$ and $N_0 > 1$ such that $\forall n \geq N_0, |S_n| \leq n^v$ (equivalently, $\log_n |S_n| \leq v$).
\end{enumerate}
\end{lemma}
\begin{proof}
Let $a_n$ (respectively $b_n$) be the number of times that $1$ (respectively $-1$) appears in the sum $X_1 + X_2 + \cdots + X_n$, so that $a_n + b_n = n$ and $S_n = a_n - b_n$.
Since each $X_i$ is picked uniformly from $\{-1, 1\}$, the law of large numbers gives $a_n/n \rightarrow 1/2$ and $b_n/n \rightarrow 1/2$ as $n \rightarrow +\infty$. Therefore $\frac{S_n}{n} = \frac{a_n - b_n}{n} \rightarrow 0$ as $n \rightarrow +\infty$.
We prove the second result by contradiction. Assume
\begin{equation} \label{lem1-eq01}
\forall v \in [0, 1), \forall N_0 >1, \exists n \geq N_0 \text{ such that } \log_n |S_n| > v.
\end{equation}
Take an infinite sequence $v_t \in [0, 1)$ such that $v_t \rightarrow 1$ as $t \rightarrow +\infty$. Then statement (\ref{lem1-eq01}) implies that $\forall t \ge 1, \exists n_t$ satisfying
\begin{eqnarray}
\nonumber
\log_{n_1} (n_1 +1) &>& \log_{n_1} |S_{n_1}| > v_1, \\
\label{lem1-eq02}
\log_{n_t} (n_t +1) &>& \log_{n_t} |S_{n_t}| > v_t, \\
\nonumber
n_t &>& n_{t-1} \text{ for } t \ge 2.
\end{eqnarray}
It is easy to see that $\log_{n_t} (n_t +1) \rightarrow 1$ as $t \rightarrow \infty$. Therefore $ \log_{n_t} |S_{n_t}| \rightarrow 1$ as $t \rightarrow \infty$; in other words, $|S_{n_t}|$ approaches $n_t$ as $t \rightarrow \infty$. This contradicts the first result. Hence the second result holds.
\end{proof}
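The lemma can also be checked empirically; the following simulation (an illustration, not a proof) tracks a Rademacher walk $S_n$, its ratio $S_n/n$, and the exponent $\log_n |S_n|$:

```python
import numpy as np

# Empirical sanity check of Lemma 1.
rng = np.random.default_rng(42)
x = rng.choice([-1, 1], size=100_000)   # i.i.d. Rademacher variables X_i
s = np.cumsum(x)                        # partial sums S_n
n = np.arange(1, len(x) + 1)

ratio = np.abs(s[-1]) / n[-1]           # S_n / n; shrinks toward 0
# log_n(|S_n| + 1) for large n; stays bounded by some v < 1
v = np.log(np.abs(s[1000:]) + 1) / np.log(n[1000:])
```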
\begin{theorem}[Convergence] \label{the9}
Consider the objective function $f(\mbf{\theta})$ in problem (\ref{eq2}), given fixed $\mbf{d},\mbf{\beta},\alpha$. For Algorithm~\ref{alg:OPE-infer}, the following hold:
\begin{enumerate}
\item For any $\mbf{\theta} \in \overline{\Delta}_K$, $F_{t}\left(\mbf{\theta}\right)$ converges to $f\left(\mbf{\theta}\right)$ as ${t} \rightarrow +\infty$,
\item $\mbf{\theta}_{t}$ converges to a local maximal/stationary point $\mbf{\theta}^*$ of $f$ at a rate of $\mathcal{O}(1 / t)$.
\end{enumerate}
\end{theorem}
\begin{proof}
Denote $g_1 = \sum_j d_j \log\sum_{k = 1}^K\theta_k\beta_{kj}$ and $g_2 = (\alpha - 1)\sum_{k = 1}^K \log\theta_k$, so that $f = g_1 + g_2$. Let $a_{t}$ and $b_{t}$ be the number of times that $g_1$ and $g_2$, respectively, have been picked after $t$ iterations.
Note that $a_t + b_t = t$. Therefore for any $\mbf{\theta} \in \overline{\Delta}_K$ we have
\begin{eqnarray}
\label{eq3--OPE}
F_t &=& \frac{2}{t}(a_t g_1 + b_t g_2) \\
\label{eq4--OPE}
F_t - f &=& \frac{a_t - b_t}{t} (g_1 - g_2) = \frac{S_t}{t} (g_1 - g_2) \\
\label{eq5--OPE}
F'_t - f' &=& \frac{a_t - b_t}{t} (g'_1 - g'_2) = \frac{S_t}{t} (g'_1 - g'_2),
\end{eqnarray}
where we have denoted $S_t = a_t - b_t$.
At each iteration $t$, OPE picks $f_t$ uniformly at random from $\{g_1, g_2\}$. Identifying the pick of $g_1$ with $X_t = 1$ and the pick of $g_2$ with $X_t = -1$ gives a one-to-one correspondence, so $S_t$ can be represented as $S_t = X_1 + \cdots + X_t$ with i.i.d.\ Rademacher variables $X_t$. Lemma \ref{lem01} shows that $S_t / t \rightarrow 0$ as $t \rightarrow \infty$. Combining this with (\ref{eq4--OPE}) we conclude that the sequence $\{F_t\}$ converges to $f$; due to (\ref{eq5--OPE}), the sequence $\{F'_t\}$ converges to $f'$. The convergence holds for any $\mbf{\theta} \in \overline{\Delta}_K$. This proves the first statement of the theorem.
It is easy to see that the sequence $\{\mbf{\theta}_1, \mbf{\theta}_2, ...\}$ converges to a point $\mbf{\theta}^* \in \overline{\Delta}_K$ at the rate $\mathcal{O}(1/t)$, due to the update of $\boldsymbol{\theta}_{t +1} := \boldsymbol{\theta}_{t} + (\boldsymbol{e}_{t} - \boldsymbol{\theta}_{t}) /t$. We next show that $\mbf{\theta}^*$ is a local maximal/stationary point of $f$.
Consider
\begin{eqnarray}
\label{eq7--OPE}
\left\langle F'_t(\mbf{\theta}_t) , \frac{\mbf{e}_{t}- \mbf{\theta}_t}{t} \right\rangle
&=& \left\langle F'_t(\mbf{\theta}_t) - f'(\mbf{\theta}_t), \frac{\mbf{e}_{t}- \mbf{\theta}_t}{t} \right\rangle + \left\langle f'(\mbf{\theta}_t), \frac{\mbf{e}_{t}- \mbf{\theta}_t}{t} \right\rangle \\
\label{eq8--OPE}
&=& \left\langle \frac{S_t}{t}(g'_1(\mbf{\theta}_t) - g'_2(\mbf{\theta}_t)), \frac{\mbf{e}_{t}- \mbf{\theta}_t}{t} \right\rangle + \left\langle f'(\mbf{\theta}_t), \frac{\mbf{e}_{t}- \mbf{\theta}_t}{t} \right\rangle \\
\label{eq9--OPE}
&=& \frac{S_t}{t^2} \left\langle g'_1(\mbf{\theta}_t) - g'_2(\mbf{\theta}_t), {\mbf{e}_{t}- \mbf{\theta}_t} \right\rangle + \left\langle f'(\mbf{\theta}_t), \frac{\mbf{e}_{t}- \mbf{\theta}_t}{t} \right\rangle
\end{eqnarray}
Note that the gradients $g'_1, g'_2$ are Lipschitz continuous on $\overline{\Delta}_K$, and hence so is $f'$. Therefore there exists a constant $L$ such that
\begin{eqnarray}
\left< f'(z), y - z \right> &\le& f(y) - f(z) + L || y- z ||^2, \forall z, y \in \overline{\Delta}_K
\end{eqnarray}
Applying this to (\ref{eq9--OPE}) with $z = \mbf{\theta}_t$ and $y = \mbf{\theta}_{t+1}$, and noting that $\mbf{\theta}_{t+1} - \mbf{\theta}_t = (\mbf{e}_t - \mbf{\theta}_t)/t$, we obtain
\begin{eqnarray}
\nonumber
\left\langle F'_t(\mbf{\theta}_t) , \frac{\mbf{e}_t- \mbf{\theta}_t}{t} \right\rangle
&\le& \frac{S_t}{t^2} \left\langle g'_1(\mbf{\theta}_t) - g'_2(\mbf{\theta}_t), {\mbf{e}_{t}- \mbf{\theta}_t} \right\rangle + f(\mbf{\theta}_{t+1}) - f(\mbf{\theta}_{t}) + \frac{L}{t^2} ||\mbf{e}_{t}- \mbf{\theta}_t ||^2.
\end{eqnarray}
Since $\mbf{e}_{t}$ and $\mbf{\theta}_t$ belong to $\overline{\Delta}_K$, the quantity $| \left\langle g'_1(\mbf{\theta}_t) - g'_2(\mbf{\theta}_t), {\mbf{e}_{t}- \mbf{\theta}_t} \right\rangle |$ is bounded above uniformly in $t$. Therefore, there exists a constant $c_2>0$ such that
\begin{eqnarray}
\label{eq11--OPE}
\left\langle F'_t(\mbf{\theta}_t) , \frac{\mbf{e}_t- \mbf{\theta}_t}{t} \right\rangle
&\le& \frac{c_2 | S_t |}{t^2} + f(\mbf{\theta}_{t+1}) - f(\mbf{\theta}_{t}) + \frac{c_2 L}{t^2}.
\end{eqnarray}
Summing both sides of (\ref{eq11--OPE}) for all $t$ we have
\begin{eqnarray}
\label{eq12--OPE}
\sum_{h=1}^{t} \frac{1}{h} \left\langle F'_h(\mbf{\theta}_h) , \mbf{e}_h- \mbf{\theta}_h \right\rangle
&\le& \sum_{h=1}^{t} \frac{c_2 | S_h |}{h^2} + f(\mbf{\theta}_{t+1}) - f(\mbf{\theta}_{1}) + \sum_{h=1}^{t}\frac{c_2 L}{h^2}.
\end{eqnarray}
As $t \rightarrow +\infty$ we note that $f(\mbf{\theta}_t) \rightarrow f(\mbf{\theta}^*)$ due to the continuity of $f$. As a result, inequality (\ref{eq12--OPE}) implies
\begin{eqnarray}
\label{eq13--OPE}
\sum_{h=1}^{+\infty} \frac{1}{h} \left\langle F'_h(\mbf{\theta}_h) , \mbf{e}_h- \mbf{\theta}_h \right\rangle
&\le& \sum_{h=1}^{+\infty} \frac{c_2 | S_h |}{h^2} + f(\mbf{\theta}^*) - f(\mbf{\theta}_{1}) + \sum_{h=1}^{+\infty}\frac{c_2 L}{h^2}.
\end{eqnarray}
According to Lemma \ref{lem01}, there exist constants $v \in [0, 1)$ and $T_0 > 1$ such that $\forall t \geq T_0, |S_t| \leq t^v$. Therefore
\begin{eqnarray}
\label{eq13a-OPE}
\sum_{h=1}^{+\infty} \frac{1}{h} \left\langle F'_h(\mbf{\theta}_h) , \mbf{e}_h- \mbf{\theta}_h \right\rangle &\le& c_2 \sum_{h=1}^{T_0} \frac{ | S_h |}{h^2} + c_2\sum_{h=T_0+1}^{+\infty} \frac{ h^{v}}{h^2} + f(\mbf{\theta}^*) - f(\mbf{\theta}_{1}) + \sum_{h=1}^{+\infty}\frac{c_2 L}{h^2}.
\end{eqnarray}
Note that the series $\sum_{h=T_0+1}^{+\infty} {h^{v} / h^2}$ converges since $v \in [0, 1)$, and $\sum_{h=1}^{T_0} {| S_h |}/{h^2}$ is bounded. Further, $\sum_{h=1}^{+\infty} {L}/{h^2} < \infty$. Hence the right-hand side of (\ref{eq13a-OPE}) is finite. In addition, $\left\langle F'_h(\mbf{\theta}_h) , \mbf{e}_h \right\rangle \ge \left\langle F'_h(\mbf{\theta}_h) , \mbf{\theta}_h \right\rangle$ for any $h>0$, since $\mbf{e}_{h} = \arg \max_{\mbf{x} \in \overline{\Delta}_K} \left\langle F'_h(\mbf{\theta}_h) , \mbf{x} \right\rangle$. Therefore we obtain
\begin{eqnarray}
\label{eq14--OPE}
0 \le \sum_{h=1}^{+\infty} \frac{1}{h} \left\langle F'_h(\mbf{\theta}_h) , \mbf{e}_h- \mbf{\theta}_h \right\rangle
&<& \infty.
\end{eqnarray}
In other words, the series $\sum_{h=1}^{+\infty} \frac{1}{h} \left\langle F'_h(\mbf{\theta}_h) , \mbf{e}_h- \mbf{\theta}_h \right\rangle$ converges to a finite constant.
Note that $0 \le \left\langle F'_h(\mbf{\theta}_h) , \mbf{e}_h- \mbf{\theta}_h \right\rangle$ for any $h$. If there existed a constant $c_3 >0$ satisfying $\left\langle F'_h(\mbf{\theta}_h) , \mbf{e}_h- \mbf{\theta}_h \right\rangle \ge c_3$ for infinitely many $h$'s, then the series $\sum_{h=1}^{+\infty} \frac{1}{h} \left\langle F'_h(\mbf{\theta}_h) , \mbf{e}_h- \mbf{\theta}_h \right\rangle$ could not converge to a finite constant, contradicting (\ref{eq14--OPE}). Therefore,
\begin{equation}
\label{eq15--OPE}
\left\langle F'_h(\mbf{\theta}_h) , \mbf{e}_h- \mbf{\theta}_h \right\rangle \rightarrow 0 \text{ as } h \rightarrow +\infty.
\end{equation}
In one case, $\|\mbf{e}_h- \mbf{\theta}_h\| \rightarrow 0$ as $h \rightarrow +\infty$. This means that one of the vertices of $\overline{\Delta}_K$ is a local maximal point of $f$, which proves the theorem. In the other case, the sequence $\mbf{e}_h- \mbf{\theta}_h$ diverges or converges to a nonzero limit. Then (\ref{eq15--OPE}) holds if and only if $F'_h(\mbf{\theta}_h)$ goes to 0 as $h \rightarrow +\infty$. Since $\mbf{\theta}_h \rightarrow \mbf{\theta}^*$, we have
\begin{equation}
\lim\limits_{h \rightarrow +\infty} || F'_h(\mbf{\theta}_h) || = \lim\limits_{h \rightarrow +\infty} || f'(\mbf{\theta}_h) || = || f'(\mbf{\theta}^*) ||= 0.
\end{equation}
In other words, $\mbf{\theta}^*$ is a stationary point of $f$.
\end{proof}
\subsection{Comparison with existing inference methods}
\begin{table*}[tp]
\caption{Theoretical comparison of 5 inference methods, given a document $\mbf{d}$ and model $\mathcal{M}$ with $K$ topics. MAP denotes maximum a posterior, ELBO denotes maximizing an evidence lower bound on the likelihood. $T$ denotes the number of iterations. $n_d$ and $\ell_d$ respectively are the number of different terms and number of tokens in $\mbf{d}$. `-' denotes \emph{`unknown'}. Note that $n_d \le \ell_d$.}
\begin{center}
\begin{scriptsize}
\begin{tabular}{llllll}
\hline
Method & OPE & VB & CVB & CVB0 & CGS \\
\hline
Posterior probability of interest & $\Pr(\mbf{\theta, d} | \mathcal{M})$ & $\Pr(\mbf{\theta, z, d} | \mathcal{M})$ & $\Pr(\mbf{z, d} | \mathcal{M})$ & $\Pr(\mbf{z, d} | \mathcal{M})$ & $\Pr(\mbf{z, d} | \mathcal{M})$ \\
Approach & MAP & ELBO & ELBO & ELBO & Sampling \\
Quality bound & Yes & -& -& -& - \\
Convergence rate & $O(1/T)$ & - & -& -& - \\
Iteration complexity & $O(K. n_d)$ & $O(K. n_d)$ & $O(K. \ell_d)$ & $O(K. \ell_d)$ & $O(K. \ell_d)$ \\
Storage & $O(K)$ & $O(K. n_d)$ & $O(K. \ell_d)$ & $O(K. \ell_d)$ & $O(K. \ell_d)$ \\
$Digamma$ evaluations & 0 & $O(K.n_d)$ & 0 & 0 & $O(K.n_d)$ \\
$Exp$ or $Log$ evaluations & $O(K.n_d)$ & $O(K.n_d)$ & $O(K. \ell_d)$ & 0 & $O(K.n_d)$ \\
Modification on global variables & No & No & Yes & Yes & No \\
\hline
\end{tabular}
\end{scriptsize}
\end{center}
\label{table 1: theoretical comparison}
\end{table*}
Compared with other inference approaches (including VB, CVB, CVB0, and CGS), our algorithm has many preferable properties, as summarized in Table \ref{table 1: theoretical comparison}.\footnote{A detailed analysis of VB, CVB, CVB0, and CGS can be found in \citep{ThanH15sparsity}.}
\begin{itemize}
\item[-] OPE has explicit theoretical guarantees on quality and a fast convergence rate. This is the most notable property of OPE, one which existing inference methods often lack.
\item[-] Its rate of convergence surpasses the best rate of existing stochastic algorithms for non-convex problems \citep{Ghadimi2013stochasticNonconvex,Mairal2013stochasticNonconvex}. Note that OPE can be easily modified to solve more general non-convex problems. Therefore, it is applicable to a wide range of contexts.
\item[-] OPE requires a very modest memory of $O(K)$ for storing temporary solutions and gradients, which is significantly more efficient than VB, CVB, CVB0, and CGS.
\item[-] Each iteration of OPE requires $O(Kn_d)$ computations for computing gradients and updating solutions. This is much more efficient than VB, CVB, CVB0, and CGS in practice.
\item[-] Unlike CVB and CVB0, OPE does not change the global variables when doing inference for individual documents. Hence OPE enables embarrassingly parallel inference, and is more beneficial than CVB and CVB0.
\end{itemize}
\subsection{Extension to MAP estimation and non-convex optimization}
It is worth realizing that employing OPE in other contexts is straightforward. The main step is to formulate the problem of interest as maximization of a function of the form $f(x) = g_1(x) + g_2(x)$. In the following, we demonstrate this step for two problems which appear in a wide range of contexts.
\emph{The MAP estimation problem} in many probabilistic models is the task of finding an
\[x^* = \arg \max_x \Pr(x | D) = \arg \max_x \Pr(D | x) \Pr(x)/ \Pr(D),\]
where $\Pr(D | x)$ denotes the likelihood of an observed variable $D$, $\Pr(x)$ denotes the prior of the hidden variable $x$, and $\Pr(D)$ denotes the marginal probability of $D$. Note that
\[x^* =\arg \max_x \Pr(D | x) \Pr(x) = \arg \max_x [\log \Pr(D | x) + \log \Pr(x)].\]
If the densities of the distributions over $x$ and $D$ can be described by some analytic functions,\footnote{The exponential family of distributions is an example.} then the MAP estimation problem turns out to be maximization of $f(x) = g_1(x) + g_2(x)$, where $g_1(x) =\log \Pr(D | x), g_2(x) =\log \Pr(x)$.
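As a concrete (hypothetical) instance of this decomposition, consider MAP estimation of a coin's success probability $x$ from $n$ flips with $h$ heads, under a $Beta(a, b)$ prior; the objective splits exactly into a log-likelihood term $g_1$ and a log-prior term $g_2$:

```python
import numpy as np

# Toy decomposition f = g1 + g2 for a Beta-Bernoulli MAP problem
# (illustrative choice of numbers: h=30 heads in n=100 flips, Beta(2, 2) prior).
def g1(x, h=30, n=100):          # g1(x) = log Pr(D | x)
    return h * np.log(x) + (n - h) * np.log(1.0 - x)

def g2(x, a=2.0, b=2.0):         # g2(x) = log Pr(x)
    return (a - 1.0) * np.log(x) + (b - 1.0) * np.log(1.0 - x)

def f(x):                        # the MAP objective
    return g1(x) + g2(x)

# Closed-form MAP for comparison: (h + a - 1) / (n + a + b - 2)
x_map = (30 + 2 - 1) / (100 + 2 + 2 - 2)
```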
Now consider a general \emph{non-convex optimization} problem $x^* =\arg \max_x f(x)$. Theorem~1 by \cite{Yuille2003cccp} shows that we can always decompose $f(x)$ into the sum of a convex function and a concave function, provided that $f(x)$ has bounded Hessian. This is one way to decompose $f$ into the sum of two functions. We can use many other ways to make a decomposition of $f = g_1 + g_2$ and then employ OPE, because the convergence proof of OPE does not require $g_1$ and $g_2$ to be concave/convex.
The analysis above demonstrates that OPE can be easily employed in a wide range of contexts, including posterior estimation and non-convex optimization. One may need to suitably modify the domain of $\mbf{\theta}$, in which case the step of finding $\mbf{e}_t$ becomes a linear program that can be solved efficiently. Compared with the non-linear steps in CCCP \citep{Yuille2003cccp} and SMM \citep{Mairal2013stochasticNonconvex}, OPE could be much more efficient. In this paper, we do not attempt a rigorous investigation of OPE in those contexts, and leave it open for future research.
\section{Stochastic algorithms for learning LDA} \label{sec-Dolda}
We have seen many attractive properties of OPE that other methods do not have. In this section we further show the simplicity of using OPE to design fast learning algorithms for topic models. More specifically, we design three algorithms: \emph{Online-OPE}, which learns LDA from large corpora in an online fashion; \emph{Streaming-OPE}, which learns LDA from data streams; and \emph{ML-OPE}, which enables us to learn LDA from either large corpora or data streams. These algorithms employ OPE to do MAP inference for individual documents, and the online scheme \citep{Bottou1998stochastic,Hoffman2013SVI} or streaming scheme \citep{Broderick2013streaming} to infer global variables (topics). Hence, stochasticity appears in both the local and global inference phases. Note that the MAP inference of local variables by OPE has theoretical guarantees on quality and convergence rate. Such a property might make the new large-scale learning algorithms more attractive than existing ones, which are based on VB, CVB, CVB0, and CGS.
\subsection{Regularized online learning}
Given a corpus $\mathcal{C}$ (with finite or infinite number of documents) and $\alpha > 0$, we will estimate the topics $\mbf{\beta}_1, ...,\mbf{\beta}_K$ that maximize
\begin{equation} \label{eq13}
\begin{split}
\mathcal{L}(\mbf{\beta}) &= \sum_{\mbf{d} \in \mathcal{C}} \log \Pr \left(\mbf{\theta}_d, \mbf{d} | \mbf{\beta},\alpha\right)\\
&= \sum_{\mbf{d} \in \mathcal{C}} \left(\sum_j d_j \log\sum_{k = 1}^K\theta_{dk}\beta_{kj} + (\alpha - 1)\sum_{k = 1}^K \log\theta_{dk}\right) + constant.
\end{split}
\end{equation}
To solve this problem, we use the online learning scheme by \cite{Bottou1998stochastic}. More specifically, we repeat the following steps:
\emph{\begin{itemize}
\item[-] Sample a subset $\mathcal{C}_t$ of documents from $\mathcal{C}$. Infer the local variables ($\mbf{\theta}_d$) for each document $\mbf{d} \in \mathcal{C}_t$, given the global variable $\mbf{\beta}^{t - 1}$ in the last step.
\item[-] Form an intermediate global variable $\hat{\mbf{\beta}}^t$ for $\mathcal{C}_t$.
\item[-] Update the global variable to be a weighted average of $\hat{\mbf{\beta}}^t$ and $\mbf{\beta}^{t - 1}$.
\end{itemize}}
Details of this learning algorithm are presented in Algorithm \ref{alg:ML-OPE}, where we have used the same arguments as \cite{ThanH2012fstm} to update the intermediate topics $\hat{\mbf{\beta}}^t$ from $\mathcal{C}_t$:
\begin{equation} \label{eq14}
\hat{\beta}^t_{kj} \propto \sum_{\mbf{d} \in \mathcal{C}_t} d_j \theta_{dk}.
\end{equation}
Note that in Algorithm \ref{alg:ML-OPE} the step-size $\rho_t = \left(t + \tau\right)^{-\kappa}$ satisfies two conditions: $\sum_{t = 1}^{\infty} \rho_t = \infty$ and $\sum_{t = 1}^{\infty} \rho_t^2 < \infty$. These conditions ensure that the learning algorithm converges to a stationary point \citep{Bottou1998stochastic}. $\kappa \in (0.5, 1]$ is the forgetting rate: the higher $\kappa$, the less weight the algorithm places on new data.
\begin{algorithm}[tp]
\caption{\textsf{ML-OPE} for learning LDA from massive/streaming data}
\label{alg:ML-OPE}
\begin{algorithmic}
\STATE {\bfseries Input: } data sequence, $K, \alpha, \tau > 0, \kappa \in (0.5, 1]$
\STATE {\bfseries Output: } $\mbf{\beta}$
\STATE Initialize $\mbf{\beta}^0$ randomly in $\Delta_V$
\FOR{ $t = 1, ..., \infty$}
\STATE Pick a set $\mathcal{C}_t$ of documents
\STATE Do inference by OPE for each $\mbf{d} \in \mathcal{C}_t$ to get $\mbf{\theta}_d$, given $\mbf{\beta}^{t-1}$
\STATE Compute intermediate topics $\hat{\mbf{\beta}}^t$ as:
\begin{equation}
\label{eq-ML-OPE-1}
\hat{\beta}_{kj}^t \propto \sum_{\mbf{d} \in \mathcal{C}_t} d_j \theta_{dk}
\end{equation}
\STATE Set step-size: $\rho_t = \left(t + \tau\right)^{-\kappa}$
\STATE Update topics: $\mbf{\beta}^t := \left(1 - \rho_t\right)\mbf{\beta}^{t-1} + \rho_t\hat{\mbf{\beta}}^t$
\ENDFOR
\end{algorithmic}
\end{algorithm}
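The body of the loop in Algorithm \ref{alg:ML-OPE} can be sketched as a single update function; the local inference routine is passed in as a parameter and is assumed to be an OPE-style procedure:

```python
import numpy as np

def ml_ope_step(batch, beta_prev, t, alpha, infer, tau=1.0, kappa=0.9):
    """One ML-OPE update (a sketch of Algorithm 2).

    batch     : list of (V,) term-count vectors (the minibatch C_t)
    beta_prev : (K, V) current topics, rows on the simplex
    infer     : local inference routine (d, beta, alpha) -> theta,
                e.g. the OPE procedure of Algorithm 1
    """
    K, V = beta_prev.shape
    beta_hat = np.zeros((K, V))
    for d in batch:
        theta = infer(d, beta_prev, alpha)
        beta_hat += np.outer(theta, d)               # beta_hat[k, j] ∝ sum_d d_j * theta_dk
    beta_hat /= beta_hat.sum(axis=1, keepdims=True)  # project rows back onto the simplex
    rho = (t + tau) ** (-kappa)                      # step size rho_t = (t + tau)^(-kappa)
    return (1.0 - rho) * beta_prev + rho * beta_hat  # weighted average of old and new topics

# Toy demo with a uniform local "inference" routine standing in for OPE:
rng = np.random.default_rng(0)
beta0 = rng.dirichlet(np.ones(10), size=3)                     # K=3, V=10
batch = [rng.integers(1, 4, size=10).astype(float) for _ in range(5)]
uniform_infer = lambda d, beta, alpha: np.full(beta.shape[0], 1.0 / beta.shape[0])
beta1 = ml_ope_step(batch, beta0, t=1, alpha=0.01, infer=uniform_infer)
```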
It is worth discussing some fundamental differences between ML-OPE and existing online/streaming methods. First, we do not need to know a priori how many documents are to be processed. Hence, ML-OPE can deal well with streaming/online environments in a realistic way. Second, ML-OPE learns the topics ($\mbf{\beta}$) directly, instead of learning the parameter ($\mbf{\lambda}$) of a variational distribution over topics as in SVI \citep{Hoffman2013SVI}, SSU \citep{Broderick2013streaming}, and the hybrid method by \cite{MimnoHB12}. While $\mbf{\beta}$ is regularized to lie in the unit simplex $\Delta_V$, $\mbf{\lambda}$ can grow arbitrarily. The uncontrolled growth of $\mbf{\lambda}$ might cause overfitting once sufficiently many documents have been processed. In contrast, the regularization on $\mbf{\beta}$ helps ML-OPE avoid overfitting and generalize better.
\subsection{Online and streaming learning}
In the existing literature on topic modeling, most methods for posterior inference try to estimate $\Pr(\mbf{\theta, z, d} | \mathcal{M})$ or $\Pr(\mbf{z, d} | \mathcal{M})$ given a model $\mathcal{M}$ and document $\mbf{d}$. Therefore, existing large-scale learning algorithms for topic models are often based on those probabilities. Some examples include SVI \citep{Hoffman2013SVI}, SSU \citep{Broderick2013streaming}, SCVB0 \citep{Foulds2013stochastic,SatoN2015SCVB0}, the hybrid of sampling and SVI \citep{MimnoHB12}, and Population-VB \citep{McInerney2015population}.
Unlike other approaches, OPE directly infers $\mbf{\theta}$ by maximizing the joint probability $\Pr(\mbf{\theta, d} | \mathcal{M})$. Following the same arguments as \cite{ThanH15sparsity}, one can easily exploit OPE to design new online and streaming algorithms for learning LDA. Online-OPE in Algorithm~\ref{alg:-online-OPE} and Streaming-OPE in Algorithm~\ref{alg:-stream-OPE} are two such exploitations of OPE. It is worth noting that Online-OPE and Streaming-OPE are hybrid combinations of OPE with SVI, similar in manner to the algorithm by \cite{MimnoHB12}. Further, Streaming-OPE and ML-OPE do not need to know a priori the number of documents to be processed in the future, and hence are suitable for working in a real streaming environment.
\begin{algorithm}[tp]
\caption{\textsf{Online-OPE} for learning LDA from massive data}
\label{alg:-online-OPE}
\begin{algorithmic}
\STATE {\bfseries Input: } training data $\mathcal{C}$ with $D$ documents, $K, \alpha, \eta, \tau > 0, \kappa \in (0.5, 1]$
\STATE {\bfseries Output:} $\mbf{\lambda}$
\STATE Initialize $\mbf{\lambda}^0$ randomly
\FOR{ $t = 1, ..., \infty$}
\STATE Sample a set $\mathcal{C}_t$ consisting of $S$ documents.
\STATE Use Algorithm~\ref{alg:OPE-infer} to do posterior inference for each document $\mbf{d} \in \mathcal{C}_t$, given the global variable $\mbf{\beta}^{t-1} \propto \mbf{\lambda}^{t - 1}$ in the last step, to get topic mixture $\mbf{\theta}_d$. Then compute $\mbf{\phi}_d$ as
\begin{equation}
\phi_{djk} \propto \theta_{dk} \beta_{kj}.
\end{equation}
\STATE For each $k \in \{1, 2, ..., K\}$, form an intermediate global variable $\hat{\mbf{\lambda}}_k$ for $\mathcal{C}_t$ by
\begin{equation}
\hat{\lambda}_{kj} = \eta + \frac{D}{S} \sum_{\mbf{d} \in \mathcal{C}_t} d_j \phi_{djk}.
\end{equation}
\STATE Update the global variable by, where $\rho_t = (t + \tau)^{-\kappa}$,
\begin{equation}
\mbf{\lambda}^{t} := (1-\rho_t) \mbf{\lambda}^{t-1} + \rho_t \hat{\mbf{\lambda}}.
\end{equation}
\ENDFOR
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[tp]
\caption{\textsf{Streaming-OPE} for learning LDA from massive/streaming data}
\label{alg:-stream-OPE}
\begin{algorithmic}
\STATE {\bfseries Input: } data sequence, $K, \alpha$
\STATE {\bfseries Output:} $\mbf{\lambda}$
\STATE Initialize $\mbf{\lambda}^0$ randomly
\FOR{ $t = 1, ..., \infty$}
\STATE Sample a set $\mathcal{C}_t$ of documents.
\STATE Use Algorithm~\ref{alg:OPE-infer} to do posterior inference for each document $\mbf{d} \in \mathcal{C}_t$, given the global variable $\mbf{\beta}^{t-1} \propto \mbf{\lambda}^{t - 1}$ in the last step, to get topic mixture $\mbf{\theta}_d$. Then compute $\mbf{\phi}_d$ as
\begin{equation}
\phi_{djk} \propto \theta_{dk} \beta_{kj}.
\end{equation}
\STATE For each $k \in \{1, 2, ..., K\}$, compute sufficient statistics $\hat{\mbf{\lambda}}_k$ for $\mathcal{C}_t$ by
\begin{equation}
\hat{\lambda}_{kj} = \sum_{\mbf{d} \in \mathcal{C}_t} d_j \phi_{djk}.
\end{equation}
\STATE Update the global variable by
\begin{equation}
\mbf{\lambda}^{t} := \mbf{\lambda}^{t-1} + \hat{\mbf{\lambda}}.
\end{equation}
\ENDFOR
\end{algorithmic}
\end{algorithm}
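The key structural difference between Online-OPE and Streaming-OPE lies in the global step. A sketch of the two $\mbf{\lambda}$ updates (the function names here are ours, for illustration):

```python
import numpy as np

def online_lambda_update(lam_prev, lam_hat, t, tau=1.0, kappa=0.9):
    """Online-OPE style global step: weighted average with decaying rho_t."""
    rho = (t + tau) ** (-kappa)
    return (1.0 - rho) * lam_prev + rho * lam_hat

def streaming_lambda_update(lam_prev, lam_hat):
    """Streaming-OPE style global step: simply accumulate sufficient statistics."""
    return lam_prev + lam_hat
```

The online update forgets old minibatches at a controlled rate, while the streaming update keeps accumulating evidence and thus needs no knowledge of the corpus size.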
\section{Empirical evaluation} \label{sec-Evaluation}
This section is devoted to investigating the practical behavior of OPE, and how useful OPE is when employed to design new algorithms for learning topic models at large scales. To this end, we take the following methods, datasets, and performance measures into investigation.
\textsc{Inference methods:}
\begin{itemize}
\item[-] \emph{Online MAP estimation} (OPE).
\item[-] \emph{Variational Bayes} (VB) \citep{BNJ03}.
\item[-] \emph{Collapsed variational Bayes} (CVB0) \citep{Asuncion+2009smoothing}.
\item[-] \emph{Collapsed Gibbs sampling} (CGS) \citep{MimnoHB12}.
\end{itemize}
CVB0 and CGS have been observed to work best in several previous studies \citep{Asuncion+2009smoothing,MimnoHB12,Foulds2013stochastic,GaoSWYZ15,SatoN2015SCVB0}. Therefore they can be considered state-of-the-art inference methods.
\textsc{Large-scale learning methods:}
\begin{itemize}
\item[-] Our new algorithms: \emph{ML-OPE}, \emph{Online-OPE}, \emph{Streaming-OPE}
\item[-] \emph{Online-CGS} by \cite{MimnoHB12}
\item[-] \emph{Online-CVB0} by \cite{Foulds2013stochastic}
\item[-] \emph{Online-VB} by \cite{Hoffman2013SVI}, which is often known as SVI
\item[-] \emph{Streaming-VB} by \cite{Broderick2013streaming} with original name to be SSU
\end{itemize}
Online-CGS \citep{MimnoHB12} is a hybrid algorithm, in which CGS is used to estimate the distribution of local variables ($\mbf{z}$) in a document, and VB is used to estimate the distribution of global variables ($\mbf{\lambda}$). Online-CVB0 \citep{Foulds2013stochastic} is an online version of the batch algorithm by \cite{Asuncion+2009smoothing}, where local inference for a document is done by CVB0. Online-VB \citep{Hoffman2013SVI} and Streaming-VB \citep{Broderick2013streaming} are two stochastic algorithms in which local inference for a document is done by VB. To avoid possible bias in our investigation, we implemented six of the methods in Python within a unified framework to the best of our ability, while Online-VB was taken from \url{http://www.cs.princeton.edu/~blei/downloads/onlineldavb.tar}.
\textsc{Data for experiments:} The following three large corpora were used in our experiments. \textit{Pubmed} consists of 8.2 million medical articles from PubMed Central; \textit{New York Times} consists of 300,000 news articles;\footnote{
The data were retrieved from \url{http://archive.ics.uci.edu/ml/datasets/}} and \emph{Tweet} consists of nearly 1.5 million tweets.\footnote{We crawled tweets from Twitter (\url{http://twitter.com/}) with 69 hashtags covering various kinds of topics. Each document is the text content of a tweet. All tweets then went through a preprocessing procedure including tokenizing, stemming, removing stopwords, removing low-frequency words (appearing in fewer than 3 documents), and removing extremely short tweets (fewer than 3 words). Details of this dataset can be found in \cite{Mai2016hdp}.} The vocabulary size ($V$) of New York Times and Pubmed is more than 110,000, while that of Tweet is more than 89,000. It is worth noting that the first two datasets contain long documents, while Tweet contains very short texts. The shortness of texts poses various difficulties \citep{Tang2014understandingLDA,Arora+2016infer,Mai2016hdp}. Therefore, using both long and short texts in our investigation gives more insight into the performance of the different methods. For each corpus we randomly set aside 1000 documents for testing, and used the remainder for learning.
\textsc{Parameter settings:}
\begin{itemize}
\item[-] \emph{Model parameters:} $K=100, \alpha = 1/K, \eta = 1/K$. Such a choice of $(\alpha, \eta)$ has been observed to work well in many previous studies \citep{GriffithsS2004,Hoffman2013SVI,Broderick2013streaming,Foulds2013stochastic}.
\item[-] \emph{Inference parameters:} at most 50 iterations were allowed for OPE and VB to do inference. We terminated VB early if the relative improvement of the lower bound on the likelihood was less than $10^{-4}$. 50 samples were used in CGS, of which the first 25 were discarded as burn-in and the remainder were used to approximate the posterior distribution. 50 iterations were used to do inference in CVB0, of which the first 25 were discarded as burn-in. These numbers of samples/iterations are often enough to get a good inference solution, according to \cite{MimnoHB12,Foulds2013stochastic}.
\item[-] \emph{Learning parameters:} minibatch size $S = | \mathcal{C}_t | =5000$, $\kappa = 0.9, \tau=1$. This choice of learning parameters has been found to result in competitive performance of Online-VB \citep{Hoffman2013SVI} and Online-CVB0 \citep{Foulds2013stochastic}. Therefore it was used in our investigation to avoid possible bias. We used default values for some other parameters in Online-CVB0.
\end{itemize}
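For reference, $\kappa$ and $\tau$ control the step size $\rho_t=(t+\tau)^{-\kappa}$ with which these stochastic methods blend the new minibatch estimate into the global variables, as in SVI \citep{Hoffman2013SVI}. A minimal sketch (the function name is ours):

```python
def step_size(t, kappa=0.9, tau=1.0):
    """Robbins-Monro step size rho_t = (t + tau)^(-kappa).

    With 0.5 < kappa <= 1 the sequence satisfies sum(rho_t) = inf and
    sum(rho_t^2) < inf, the usual condition in stochastic approximation.
    """
    return (t + tau) ** (-kappa)

# A global update then takes the form
#   lambda <- (1 - rho_t) * lambda + rho_t * lambda_hat
# where lambda_hat is the estimate computed from minibatch t.
```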
\textsc{Performance measures:} We used \textit{NPMI} and \textit{Predictive Probability} to evaluate the learning methods. NPMI \citep{Lau2014npmi} measures the semantic quality of individual topics. From extensive experiments, \cite{Lau2014npmi} found that NPMI agrees well with human evaluation on the interpretability of topic models. Predictive probability \citep{Hoffman2013SVI} measures the predictiveness and generalization of a model to new data. Detailed descriptions of these measures are presented in Appendix \ref{appendix--perp}.
\subsection{Performance of learning methods}
We first investigate the performance of the learning methods as more time is spent on learning from data. Figure~\ref{fig-OPE-perp-npmi-time} shows the results. We observe that OPE-based methods and Online-CGS are among the fastest methods, while Online-CVB0, Online-VB and Streaming-VB performed very slowly. Recall from Table \ref{table 1: theoretical comparison} that VB requires many evaluations of expensive functions (e.g., digamma, exp, log), while CVB0 needs to update a large number of statistics associated with each token in a document. That is the reason for the slow performance of Online-CVB0 and the VB-based methods. In contrast, each iteration of OPE and CGS is very efficient. Further, OPE can converge very fast to a good approximate solution of the inference problem. These reasons explain why OPE-based methods and Online-CGS learned significantly more efficiently than the others.
\begin{figure}
\caption{Predictiveness (log predictive probability) and semantic quality (NPMI) of the models learned by different methods as more time is spent on learning. Higher is better. Note that ML-OPE and Online-OPE often reach state-of-the-art performance. To reach the same predictiveness level, ML-OPE and Online-OPE work many times faster than Online-CGS, hundreds of times faster than Online-CVB0, and thousands of times faster than both Online-VB and Streaming-VB.}
\label{fig-OPE-perp-npmi-time}
\end{figure}
In terms of predictiveness, Streaming-OPE seems to perform worst, while the 6 other methods often perform well. It is worth observing that while ML-OPE consistently reached state-of-the-art performance, Online-OPE surpassed all other methods on the three datasets. Online-OPE even outperformed the others by a significant margin on Tweet, even though this dataset contains very short documents. ML-OPE and Online-OPE can quickly reach a high predictiveness level, while Online-VB, Online-CGS, Online-CVB0, and Streaming-VB need substantially more time. Those four methods can perform considerably well when given a long time for learning, but they did not perform well consistently. For example, Online-VB worked worse than the other methods on Pubmed, and Online-CVB0 worked worse than the others on Tweet. The consistently superior performance of Online-OPE and ML-OPE might be due to the fact that the solutions inferred by OPE are provably good, as guaranteed by Theorem \ref{the9}. However, the goodness of OPE seems not to be inherited well by Streaming-OPE.
In terms of semantic quality measured by NPMI, Figure \ref{fig-OPE-perp-npmi-time} shows the results from our experiments. It is easy to observe that Online-OPE often learns models with a good semantic quality, and is often among the top performers. Online-OPE and Online-CGS can return models of very high quality after a short learning time. In contrast, Streaming-OPE and Online-CVB0 were among the worst methods. Interestingly, Streaming-VB often reaches state-of-the-art performance when given a long time for learning, although many of its initial steps are quite bad. Unlike predictiveness, NPMI did not consistently increase as the learning methods were allowed more time to learn. NPMI on long texts seems to be more stable than on short texts, suggesting that learning from long texts often yields more coherent models than learning from short texts. In our experience with short texts, most methods return models with many incoherent and noisy topics during some initial steps.
\begin{figure}
\caption{Performance of different learning methods as more documents are seen. Higher is better. Online-OPE often surpasses all other methods, while ML-OPE performs comparably with existing methods in terms of predictiveness.}
\label{fig-OPE-perp-npmi}
\end{figure}
Figure \ref{fig-OPE-perp-npmi} shows another perspective on performance, where the 7 learning methods were fed more documents for learning and were allowed to pass over a dataset many times. We observe that most methods can reach a high predictiveness and semantic quality after reading a few tens of thousands of documents. All methods improve their predictiveness as they learn from more documents. The first pass over a dataset often helps the learning methods increase predictiveness drastically, whereas further passes improve predictiveness only slightly. We find that Streaming-VB often needs many passes over a dataset in order to reach comparable predictiveness. It is easy to see that Online-OPE surpassed all other methods as more documents were seen, while ML-OPE performed comparably. Even when fed more documents from Tweet, the existing methods could hardly reach the same predictiveness as Online-OPE.
These investigations suggest that ML-OPE and Online-OPE can perform comparably to, or even significantly better than, existing state-of-the-art methods for learning LDA at large scales. This demonstrates another benefit of OPE for topic modeling.
\textit{A sensitivity analysis:} We have seen the impressive performance of some OPE-based methods, and ML-OPE and Online-OPE appear potentially useful in the practice of large-scale modeling. We next make them more usable by analyzing their sensitivity to parameter settings.
\begin{figure}
\caption{Sensitivity of ML-OPE when changing its parameters. (a) Changing the minibatch size while fixing $\{ \kappa=0.9, \tau=1, T=50\}$.}
\label{fig:ML-OPE-sensitivity}
\end{figure}
We now consider the effects of the parameters on the performance of our new learning methods. The parameters include the forgetting rate $\kappa$, the delay $\tau$, the number $T$ of iterations for OPE, and the minibatch size. Inappropriate choices of these parameters might significantly affect performance. To see the effect of a parameter, we varied its value over a finite set while fixing the other parameters. We considered ML-OPE; the results of our experiments are depicted in Figure~\ref{fig:ML-OPE-sensitivity}.
We observe that $\kappa$ and $T$ did not significantly affect the performance of ML-OPE. This behavior is interesting and beneficial in practice: we do not need to worry much about the effect of the forgetting rate $\kappa$, so no expensive model selection is necessary. Figure~\ref{fig:ML-OPE-sensitivity}(b) reveals a much more interesting behavior of OPE. One easily observes that more iterations of OPE did not necessarily improve the performance of ML-OPE. Just $T=20$ iterations of OPE resulted in a predictiveness level comparable with $T=100$. This suggests that OPE converges very fast in practice, and that $T=20$ might be enough for practical deployments of OPE. This behavior is particularly beneficial for massive or streaming data.
$\tau$ and the minibatch size did affect ML-OPE significantly. Similar to the observation by \cite{Hoffman2013SVI} for SVI, we find that ML-OPE performed consistently better as the minibatch size increased; it can quickly reach a very high predictiveness level. In contrast, ML-OPE performed worse as $\tau$ increased, performing best at $\tau=1$. It is worth noting that the dependence of the performance of ML-OPE on \{$\tau$, minibatch size\} is monotonic. Such a behavior enables us to easily choose a good setting for the parameters of ML-OPE in practice.
\subsection{Speed of inference methods}
\begin{figure}
\caption{Average time to do inference for a document as the number of minibatches increases. Lower is faster. Note that OPE often performs many times faster than CGS, and hundreds of times faster than VB.}
\label{fig-OPE-inf-time}
\end{figure}
Next we investigate the speed of inference, considering VB, CVB0, CGS, and OPE. For each of these methods, we computed the average time to do inference for a document at every minibatch when learning LDA. Figure \ref{fig-OPE-inf-time} depicts the results. We find that among the four inference methods, OPE consumed a modest amount of time, while CGS needed slightly more. VB needed intensive time to do inference. The main reasons are that it requires many evaluations of expensive functions (e.g., log, exp, digamma), and that it needs to check convergence, which in our observation was often very expensive. Due to the maintenance/update of many statistics associated with each token in a document (see Table \ref{table 1: theoretical comparison}), CVB0 also consumed significant time. Note further that VB and CVB0 do not have any guarantee on their convergence rates, and hence in practice might converge slowly.
Figure \ref{fig-OPE-inf-time} suggests that OPE performs fastest among the inference methods compared. Our investigation in the previous subsection demonstrates that OPE can find very good solutions for the posterior estimation problem. These observations suggest that OPE is a good candidate for posterior inference in various situations.
\subsection{Convergence and stability of OPE in practice}
Our last investigation is about whether or not OPE performs stably in practice. This behavior deserves attention because there are two probabilistic steps in OPE: the initialization of $\mbf{\theta}_1$ and the random choice of $f_{t}$. To assess stability, we took 100 testing documents from New York Times and did inference given the 100-topic LDA model previously learned by ML-OPE. For each document, we did 10 random runs of OPE, saved the objective values of the last iterates, and then computed the standard deviation of those objective values.
Stability of OPE is assessed via the standard deviation of the objective values. The smaller, the more stable. Figure \ref{fig-OPE-stability} shows the histogram of the standard deviations computed from 100 functions. Each $T$ corresponds to a choice of the number of iterations for OPE.
\begin{figure}
\caption{Convergence and stability of OPE in practice. The top-right corner shows convergence of OPE as more iterations are allowed. The stability of OPE is measured by the standard deviation of the objective values ($f(\mbf{\theta})$) over 10 random runs.}
\label{fig-OPE-stability}
\end{figure}
Observing Figure \ref{fig-OPE-stability}, we find that the standard deviation is small ($\le 20$) for a large fraction of the functions. Compared with the mean value of $f(\mbf{\theta})$, which often lay in $[-2300, -2000]$, the deviation is very small in magnitude. This suggests that for each function, the objective values returned by OPE over 10 runs do not differ significantly from each other. In other words, OPE behaved very stably in our observation.
Figure \ref{fig-OPE-stability} also suggests that OPE converges very fast. The growth of $f(\mbf{\theta})$ seems to be close to a linear function. This agrees well with Theorem \ref{the9} on convergence rate of OPE.
\section{Conclusion} \label{sec-Conclusion}
We have discussed how posterior inference for individual texts in topic models can be done efficiently. Our novel algorithm (OPE) is the first to have a theoretical guarantee on quality and a fast convergence rate. In practice, OPE can do inference very fast, and can be easily extended to a wide range of contexts including MAP estimation and non-convex optimization. By exploiting OPE carefully, we have arrived at new efficient methods for learning LDA from data streams or large corpora: ML-OPE, Online-OPE, and Streaming-OPE. Among those, ML-OPE and Online-OPE reach state-of-the-art performance at high speed. Furthermore, Online-OPE surpasses all existing methods in terms of predictiveness, and works well with short texts. As a result, they are good candidates to help us deal with text streams and big data. The code for these methods is available at \url{http://github.com/Khoat/OPE/}.
\acks{This research is funded by Vietnam National Foundation for Science and Technology Development (NAFOSTED) under Grant Number 102.05-2014.28 and by the Air Force Office of Scientific Research (AFOSR), Asian Office of Aerospace Research \& Development (AOARD), and US Army International Technology Center, Pacific (ITC-PAC) under Award Number FA2386-15-1-4011.}
\appendix
\section{Predictive Probability}
\label{appendix--perp}
Predictive Probability shows the predictiveness and generalization of a model $\mathcal{M}$ on new data. We followed the procedure in \citep{Hoffman2013SVI} to compute this quantity. We randomly divided each document in a testing dataset into two disjoint parts, $\mbf{w}_{obs}$ and $\mbf{w}_{ho}$, with a ratio of 70:30. We next did inference for $\mbf{w}_{obs}$ to get an estimate of $\mathbb{E}(\mbf{\theta}^{obs})$. Then we approximated the predictive probability as
\[
\Pr(\mbf{w}_{ho} | \mbf{w}_{obs}, \mathcal{M}) \approx \prod_{w \in \mbf{w}_{ho} } \sum_{k=1}^{K} \mathbb{E}(\mbf{\theta}^{obs}_k) \mathbb{E}(\mbf{\beta}_{kw}),
\]
\[
\text{Log Predictive Probability} = \frac{\log \Pr(\mbf{w}_{ho} | \mbf{w}_{obs}, \mathcal{M})} {|\mbf{w}_{ho}|},
\]
where $\mathcal{M}$ is the model to be measured. We estimated $\mathbb{E}(\mbf{\beta}_k) \propto \mbf{\lambda}_k$ for the learning methods which maintain a variational distribution ($\mbf{\lambda}$) over topics. Log Predictive Probability was averaged over 5 random splits, each of 1,000 documents.
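For concreteness, the computation above can be sketched as follows (a minimal sketch; array names are ours, and \texttt{beta} is assumed to hold row-normalized topics):

```python
import numpy as np

def log_predictive_probability(w_ho, theta_obs, beta):
    """Per-token log predictive probability of held-out words.

    w_ho      : list of term indices in the held-out part of a document
    theta_obs : length-K array, E[theta] inferred from the observed part
    beta      : K x V array of topic distributions, each row sums to 1
    """
    # Pr(w | w_obs, M) ~= sum_k theta_k * beta_{k,w} for each held-out token
    per_word = theta_obs @ beta[:, w_ho]          # length |w_ho|
    return np.sum(np.log(per_word)) / len(w_ho)
```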
\section{NPMI}
\emph{NPMI} \citep{Aletras2013evaluating,Bouma2009NPMI} measures the coherence or semantic quality of individual topics. According to \cite{Lau2014npmi}, NPMI agrees well with human evaluation on the interpretability of topic models. For each topic $t$, we took the set $\{w_1, w_2, ..., w_n\}$ of the $n$ terms with highest probabilities and computed
\[
NPMI(t) = \frac{2}{n(n-1)} \sum_{j=2}^{n} \sum_{i=1}^{j-1} \frac{\log \frac{P(w_j, w_i)}{P(w_j) P(w_i)}}{- \log P(w_j, w_i)},
\]
where $P(w_i, w_j)$ is the probability that terms $w_i$ and $w_j$ appear together in a document. We estimated those probabilities from the training data. In our experiments, we chose top $n=10$ terms for each topic.
Overall, NPMI of a model with $K$ topics is averaged as:
\[
NPMI = \frac{1}{K} \sum_{t=1}^{K} NPMI(t).
\]
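The per-topic formula can be sketched as follows (a minimal sketch; function and variable names are ours, and the document frequencies and co-document frequencies are assumed precomputed from the training data):

```python
import numpy as np
from itertools import combinations

def npmi_topic(top_terms, doc_freq, co_doc_freq, n_docs):
    """NPMI of one topic from document (co-)occurrence counts.

    top_terms   : list of the top-n term ids of the topic
    doc_freq    : dict term -> number of documents containing the term
    co_doc_freq : dict (w_i, w_j) -> number of documents containing both
    n_docs      : total number of training documents
    """
    scores = []
    for wi, wj in combinations(top_terms, 2):
        p_i = doc_freq[wi] / n_docs
        p_j = doc_freq[wj] / n_docs
        p_ij = co_doc_freq.get((wi, wj), co_doc_freq.get((wj, wi), 0)) / n_docs
        if p_ij <= 0:
            scores.append(-1.0)   # PMI -> -inf; NPMI -> -1 by convention
            continue
        pmi = np.log(p_ij / (p_i * p_j))
        scores.append(pmi / (-np.log(p_ij)))
    return float(np.mean(scores))   # average over the n(n-1)/2 pairs
```

The model-level score is then the mean of `npmi_topic` over all $K$ topics.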
\end{document} |
\begin{document}
\title[Solitons, Boundaries, and Quantum Affine Algebras]{Solitons, Boundaries,\\ and Quantum Affine Algebras}
\author{Gustav W Delius}
\address{Department of Mathematics\\University of
York\\York YO10 5DD\\United Kingdom}
\email{[email protected]}
\urladdr{http://www.york.ac.uk/mathematics/physics/delius/}
\thanks{The
transparencies from this talk are available on the web at\\
\href{http://www.york.ac.uk/depts/maths/physics/delius/talks/iskmaa.pdf}{http://www.york.ac.uk/depts/maths/physics/delius/talks/iskmaa.pdf}
.}
\begin{abstract}
This is a condensed write-up of a talk delivered at the Ramanujan
International Symposium on Kac-Moody Lie algebras and Applications
in Chennai in January 2002. The talk introduces special coideal
subalgebras of quantum affine algebras which appear in physics
when solitons are restricted to live on a half-line by an
integrable boundary condition. We review how the quantum affine
symmetry determines the soliton S-matrix in affine Toda field
theory and then go on to use the unbroken coideal subalgebra on
the half-line to determine the soliton reflection matrix. This
gives a representation theoretic method for the solution of the
reflection equation (boundary Yang-Baxter equation) by reducing it
to a linear equation.
\end{abstract}
\maketitle
\section{Introduction}
{\bf Quantum affine algebras} are quantum deformations of the
universal enveloping algebras of affine Kac-Moody algebras,
introduced by Drinfeld \cite{Drinfeld:1988in} and Jimbo
\cite{Jimbo:1986ua}. They are therefore obviously related to the
topic of this conference.
{\bf Solitons} are localized finite-energy solutions of
relativistic wave equations with special properties. In the
quantum theory they lead to particle states. The relation between
classical solitons and Kac-Moody algebras was stressed already by
the Kyoto group \cite{MR2001a:37109}. More recently the role that
quantum affine algebras play in the quantum theory of solitons was
uncovered in \cite{Bernard:1991ys}.
By {\bf Boundaries} I mean spatial boundaries. In this talk space
will be taken to be one-dimensional, i.e., the real line.
Introducing a boundary by imposing a boundary condition at some
point restricts physics to the half-line.
In this talk we will knit these three topics together. We will do
so by using a particularly rich family of quantum field theories.
They are known as quantum affine Toda field theories and they
possess all three: a quantum affine symmetry algebra
\cite{Bernard:1991ys}, soliton solutions
\cite{Hollowood:1992by,Olive:1993cm}, and integrable boundary
conditions \cite{Bowcock:1995vp}.
The main topic of the talk will be certain coideal subalgebras of
quantum affine algebras that remain unbroken symmetry algebras
after imposing an integrable boundary condition
\cite{Delius:2001qh}. These algebras provide a new tool for the
solution of the reflection equation.
Because the first part of the talk is simply an introduction to
the known theory of solitons and quantum affine symmetry in affine
Toda field theory, we will be very brief in the next two sections.
We give references to the literature where more details can be
found. These references are not always to the original works but
often to more recent works because they may be more useful in the
given context.
\section{Solitons in Affine Toda field theory}\label{s:toda}
\subsection{Field equations}
Associated to every affine Kac-Moody algebra $\hat{g}$ there is a relativistic field
equation
\begin{equation}\label{feq}
\partial_x^2\phi-\partial_t^2\phi=\frac{1}{2i}\sum_{j=0}^n\,\eta_j\,\alpha_j^\vee\,
e^{i(\alpha_j^\vee,\,\phi)}
\end{equation}
where the field $\phi=\phi(x,t)$ takes values in the root space of
the underlying finite dimensional Lie algebra $g$,
$\alpha_0,\dots,\alpha_n$ are the simple roots of $\hat{g}$
projected onto the root space of $g$, $(\ ,\ )$ is the Killing
form, $\alpha_j^\vee=2\alpha_j/(\alpha_j,\alpha_j)$ and the
$\eta_j$ are coprime integers such that
$\sum_{j=0}^n\eta_j\alpha_j^\vee=0$. The sine-Gordon model is the
simplest example of an affine Toda field equation corresponding to
$\hat{g}=\hat{sl}_2$.
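To see this explicitly, take $g=sl_2$ with simple root $\alpha$, so that $\alpha_1^\vee=-\alpha_0^\vee=\alpha$, $(\alpha,\alpha)=2$ and $\eta_0=\eta_1=1$. Writing $\phi=\tfrac{\alpha}{2}\,u$ for a scalar field $u$, the right hand side of \eqref{feq} becomes $\frac{\alpha}{2i}\left(e^{iu}-e^{-iu}\right)=\alpha\sin u$, and \eqref{feq} reduces, in this normalisation, to the sine-Gordon equation
\begin{equation*}
\partial_x^2 u-\partial_t^2 u=2\sin u.
\end{equation*}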
The field equations \eqref{feq} are integrable in the sense that
they possess an infinite number of integrals of motion in
involution. This follows for example from the fact that they can
be written as the zero-curvature condition for a gauge-connection
taking its value in the affine Lie algebra $\hat{g}$
\cite{Olive:1985mb}.
\subsection{Soliton solutions}
The Toda field equations \eqref{feq} have degenerate vacuum
solutions with $\phi=2\pi\lambda$ for any fundamental weight
$\lambda$ of $g$. Thus there are kink solutions which interpolate
between these vacua. They are stable due to their nonzero
topological charge
\begin{equation}\label{tcharge}
T[\phi]=\phi(\infty)-\phi(-\infty)=2\pi\lambda.
\end{equation}
\begin{wrapfigure}{r}{4cm}
\includegraphics[width=4cm]{fig1.eps}
\end{wrapfigure}
Furthermore there are solutions that describe an arbitrary number
of kinks, each moving with its own velocity. Surprisingly, when
two of these kinks meet they pass through each other and reemerge
with their original shape restored. It is this property that
shows that these kinks are solitons. An example of a two-soliton
solution in the sine-Gordon model is shown on the right.
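For instance, in the normalisation $\partial_x^2u-\partial_t^2u=\sin u$ (reachable by a convenient rescaling of $x$ and $t$), the one-soliton solution moving with velocity $v$ is the familiar kink
\begin{equation*}
u(x,t)=4\arctan\exp\left(\frac{x-vt-x_0}{\sqrt{1-v^2}}\right),
\end{equation*}
which interpolates between the vacua $u=0$ at $x\to-\infty$ and $u=2\pi$ at $x\to+\infty$ and thus carries topological charge $2\pi$.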
The topological charges of the fundamental solitons are weights in
the fundamental representations of $g$. All solitons whose
topological charge lies in the same fundamental representation have
the same mass. A particle physicist would therefore say that they
form a multiplet.
As an introduction to solitons from the particle physicsts
viewpoint I recommend the book by Rajaraman
\cite{Rajaraman:1982is}. A more classical presentation is given in
\cite{MR2001a:37109}. For details on the solitons in affine Toda
theory see \cite{Hollowood:1992by,Olive:1993cm}.
\subsection{Quantum solitons}
In the quantum theory we associate particle states with the
soliton solutions. Let $V^\mu_\theta$ be the space spanned by the
solitons in multiplet $\mu$ with rapidity\footnote{The rapidity
$\theta$ parametrizes the energy and momentum of particles on the
mass shell. If the mass is $m$ then the energy is $E=m\cosh\theta$
and the momentum is $p=m\sinh\theta$.} $\theta$.
\begin{wrapfigure}{r}{4cm}
\setlength{\unitlength}{1.0mm}
\begin{picture}(66,15)(15,9)
\thinlines \drawpath{22.0}{22.0}{34.0}{10.0}
\drawpath{34.0}{22.0}{22.0}{10.0} \drawthickdot{28.0}{16.0}
\drawcenteredtext{22.0}{6.0}{$V^\mu_\theta$}
\drawcenteredtext{34.0}{6.0}{$V^\nu_{\theta'}$}
\drawcenteredtext{20.0}{26.0}{$V^\nu_{\theta'}$}
\drawcenteredtext{34.0}{26.0}{$V^\mu_\theta$}
\drawlefttext{34.0}{16.0}{$S^{\mu\nu}(\theta-\theta')$}
\end{picture}
\end{wrapfigure}
Asymptotic two-soliton states span tensor product spaces
$V^\mu_\theta\otimes V^\nu_{\theta'}$.
An incoming two-soliton state in $V^\mu_\theta\otimes
V^\nu_{\theta'}$ with $\theta>\theta'$ will evolve during
scattering into an outgoing state in $V^\nu_{\theta'}\otimes
V^\mu_{\theta}$ with scattering amplitude given by the entry of
the two-soliton S-matrix $S^{\mu\nu}(\theta-\theta')$. We
represent this graphically as in the figure on the right.
\subsection{S-matrix factorization}
Due to integrability, the multi-soliton S-matrix factorizes into a
product of two-soliton S-matrices. For the three-particle
scattering process this is represented graphically as follows:
\begin{equation}
\setlength{\unitlength}{0.8mm}
\begin{picture}(112,32)
\thinlines \drawpath{4.0}{28.0}{28.0}{4.0}
\drawpath{28.0}{28.0}{4.0}{4.0} \drawpath{16.0}{28.0}{16.0}{4.0}
\drawpath{44.0}{28.0}{68.0}{4.0} \drawpath{68.0}{28.0}{44.0}{4.0}
\drawpath{62.0}{28.0}{62.0}{4.0} \drawpath{84.0}{28.0}{108.0}{4.0}
\drawpath{108.0}{28.0}{84.0}{4.0} \drawpath{92.0}{28.0}{92.0}{4.0}
\drawcenteredtext{38.0}{16.0}{$=$}
\drawcenteredtext{78.0}{16.0}{$=$} \drawthickdot{16.0}{16.0}
\drawthickdot{56.0}{16.0} \drawthickdot{62.0}{22.0}
\drawthickdot{62.0}{10.0} \drawthickdot{92.0}{20.0}
\drawthickdot{96.0}{16.0} \drawthickdot{92.0}{12.0}
\end{picture}
\end{equation}
The compatibility of the two ways to factorize requires the
S-matrix to be a solution of the Yang-Baxter equation
\begin{multline}\label{ybe}
(1\otimes S^{\mu\nu}(\theta-\theta'))
(S^{\mu\lambda}(\theta-\theta'')\otimes 1)
(1\otimes S^{\nu\lambda}(\theta'-\theta''))
\\=
(S^{\nu\lambda}(\theta'-\theta'')\otimes 1)
(1\otimes S^{\mu\lambda}(\theta-\theta''))
(S^{\mu\nu}(\theta-\theta')\otimes 1).
\end{multline}
A classical reference on factorized S-matrices is
\cite{Zamolodchikov:1979xm}.
Quantum affine algebras provide the key technique for finding
solutions to the Yang-Baxter equation, as will be explained in the
next section.
\section{Quantum group symmetry}
\subsection{Non-local symmetry charges}
Quantum affine Toda theory has symmetry charges $T_i, Q_i,
\bar{Q}_i$, $i=0,\dots,n$ which generate the $U_q(\hat{g})$
algebra with relations
\begin{align}
&[T_i,Q_j]=\alpha_i\cdot\alpha_j\,Q_j,~~~~~~
[T_i,\bar{Q}_j]=-\alpha_i\cdot\alpha_j\,\bar{Q}_j\notag\\
&Q_i\bar{Q}_j-q^{-\alpha_i\cdot\alpha_j}\bar{Q}_j
Q_i=\delta_{ij}\,\frac{q^{2 T_i}-1}{q_i^{2}-1},
\end{align}
where
\begin{equation}
q=e^{2\pi i\frac{1-\hat{\beta}}{\hat{\beta}}}
\end{equation}
and $q_i=q^{\alpha_i\cdot\alpha_i/2}$. They also satisfy the Serre
relations. The charges $Q_i, \bar{Q}_i$ are non-local in the sense
that they are obtained as space integrals of the time components
of currents which are themselves non-local expressions. The
details can be found in \cite{Bernard:1991ys}.
\subsection{Action on solitons}
Each soliton multiplet $V^\mu_\theta$ carries a representation
$\pi^\mu_\theta: {\uqgh}\rightarrow \text{End}(V^\mu_\theta)$ of the
quantum affine algebra. The symmetry acts on the multi-soliton
states through the coproduct $\Delta:{\uqgh}\rightarrow
{\uqgh}\otimes {\uqgh}$.
\begin{align}\label{cop}
\Delta(Q_i)&=Q_i\otimes 1+q^{T_i}\otimes Q_i,\notag\\
\Delta(\overline{Q}_i)&=\overline{Q}_i\otimes 1+q^{T_i}\otimes \overline{Q}_i,\\
\Delta(T_i)&=T_i\otimes 1+1\otimes T_i.\notag
\end{align}
An explanation of why non-local symmetry charges have such a
non-cocommutative coproduct can be found in \cite{Bernard:1993mu}.
\subsection{S-matrix as intertwiner}
The S-matrix has to commute with the action of any symmetry charge
$Q\in{\uqgh}$,
\begin{equation}
\begin{CD}\label{sint}
V^\mu_{\theta}\otimes V^\nu_{\theta'}
@>(\pi^\mu_\theta\otimes\pi^\nu_{\theta'})(\Delta(Q))>>
V^\mu_{\theta}\otimes V^\nu_{\theta'}
\\
@VV{S^{\mu\nu}(\theta-\theta')}V
@VV{S^{\mu\nu}(\theta-\theta')}V
\\
V^\nu_{\theta'}\otimes V^\mu_{\theta}
@>(\pi^\nu_{\theta'}\otimes\pi^\mu_{\theta})(\Delta(Q))>>
V^\nu_{\theta'}\otimes V^\mu_{\theta}
\end{CD}
\end{equation}
This determines the S-matrix uniquely up to an overall factor.
Jimbo solved this intertwining equation for the vector
representations of all quantum affine algebras in
\cite{Jimbo:1986ua}. A practical technique for solving the
intertwining equations in a large number of cases is the tensor
product graph method, see e.g. \cite{Delius:1994ht}.
The scalar prefactor which is not fixed by the Yang-Baxter
equation can be determined by imposing other physical requirements
such as unitarity, crossing symmetry and closure of the bootstrap.
This has been done for a number of affine Toda theories, see e.g.,
\cite{Hollowood:1993sy,Gandenberger:1995gg,Gandenberger:1996cw}.
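As the simplest illustration, for the two-dimensional vector representation of $U_q(\hat{sl}_2)$, describing the sine-Gordon soliton doublet, the intertwining equation \eqref{sint} forces the S-matrix to be, up to the scalar prefactor, the six-vertex model R-matrix with Boltzmann weights
\begin{equation*}
a(u)=\sinh(u+i\gamma),\qquad b(u)=\sinh u,\qquad c(u)=\sinh(i\gamma),
\end{equation*}
where $q=e^{i\gamma}$ and $u$ is a suitably rescaled rapidity difference; the precise identification of parameters depends on conventions.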
\subsection{Yang-Baxter equation from Schur's lemma}
Because the tensor product modules are irreducible for generic
rapidities, Schur's lemma implies that the following diagram is
commutative up to a scalar factor:
\begin{equation}
\begin{CD}
V^\mu_{\theta}\otimes V^\nu_{\theta'}\otimes V^\lambda_{\theta''}
@>{S^{\mu\nu}(\theta-\theta')\,\otimes\,\text{id}}>>
V^\nu_{\theta'}\otimes V^\mu_{\theta}\otimes V^\lambda_{\theta''}
\\
@VV{\text{id}\otimes S^{\nu\lambda}(\theta'-\theta'')}V
@V{\text{id}\,\otimes\, S^{\mu\lambda}(\theta-\theta'')}VV
\\
V^\mu_{\theta}\otimes V^\lambda_{\theta''}\otimes V^\nu_{\theta'}
@.
V^\nu_{\theta'}\otimes V^\lambda_{\theta''}\otimes V^\mu_{\theta}
\\
@VV{S^{\mu\lambda}(\theta-\theta'')\,\otimes\,\text{id}}V
@V{S^{\nu\lambda}(\theta'-\theta'')\otimes\text{id}}VV
\\
V^\lambda_{\theta''}\otimes V^\mu_{\theta}\otimes V^\nu_{\theta'}
@>{\text{id}\,\otimes\, S^{\mu\nu}(\theta-\theta')}>>
V^\lambda_{\theta''}\otimes V^\nu_{\theta'}\otimes V^\mu_{\theta}
\end{CD}
\end{equation}
Thus any S-matrix which satisfies the intertwining property
\eqref{sint} automatically satisfies the Yang-Baxter equation (at
least up to a scalar factor).
\section{Soliton reflection}
\subsection{Integrable boundary conditions}
We now restrict the theory to the left half-line by imposing that
at the boundary
\begin{equation}\label{bc}
\partial_x{\phi} = i\sum_{j=0}^n \epsilon_j
\alpha_j^\vee\exp\left(\frac{i}{2}(\alpha_j^\vee,\phi)\right)
\end{equation}
where the $\epsilon_j$ are free parameters. These boundary
conditions were proposed by Corrigan et al.\ because they preserve
integrability for certain choices of the parameters
\cite{Bowcock:1995vp}. There are other integrable boundary
conditions \cite{Bowcock:1996gw,Delius:1998rf} and it will be
interesting to extend the observations from the next section to
these.
\subsection{Soliton reflection}
\begin{wrapfigure}{r}{3.5cm}
\setlength{\unitlength}{0.8mm}
\begin{picture}(62,21)(18,10)
\Thicklines \drawpath{40.0}{26.0}{40.0}{6.0} \thinlines
\drawpath{40.0}{16.0}{30.0}{26.0} \drawpath{40.0}{16.0}{30.0}{6.0}
\drawrighttext{28.0}{6.0}{$V^\mu_\theta$}
\drawrighttext{28.0}{26.0}{$V^{\bar{\mu}}_{-\theta}$}
\drawlefttext{42.0}{16.0}{$K^\mu(\theta)$}
\end{picture}
\end{wrapfigure}
For $a_n^{(1)}$ affine Toda theory the soliton solutions which
satisfy the boundary conditions \eqref{bc} were determined in
\cite{Delius:1998jw} by a nonlinear method of images. These
solutions describe an incoming soliton with rapidity $\theta$
being converted into an outgoing anti-soliton with rapidity
$-\theta$. In the quantum theory such reflection processes are
described by a reflection matrix $K^\mu(\theta)$ which maps from
the incoming multiplet $V^\mu_\theta$ to the outgoing conjugate
multiplet $V^{\bar{\mu}}_{-\theta}$.
The entries of the reflection
matrix are the reflection amplitudes.
\subsection{Factorization of reflection matrices}
If the boundary condition preserves integrability then the
multi-soliton reflection matrices have to factorize.
\begin{equation}
\setlength{\unitlength}{0.8mm}
\begin{picture}(98,38)
\Thicklines \drawpath{20.0}{34.0}{20.0}{4.0} \thinlines
\drawpath{20.0}{20.0}{6.0}{34.0} \drawpath{20.0}{20.0}{6.0}{6.0}
\drawpath{20.0}{20.0}{4.0}{28.0} \drawpath{20.0}{20.0}{4.0}{12.0}
\drawthickdot{20.0}{20.0} \drawcenteredtext{30.0}{20.0}{$=$}
\drawcenteredtext{66.0}{20.0}{$=$} \Thicklines
\drawpath{56.0}{34.0}{56.0}{4.0} \thinlines
\drawpath{56.0}{24.0}{36.0}{4.0} \drawpath{56.0}{24.0}{46.0}{34.0}
\drawpath{56.0}{18.0}{36.0}{28.0} \drawpath{56.0}{18.0}{36.0}{8.0}
\drawthickdot{52.0}{20.0} \drawthickdot{44.0}{12.0}
\drawthickdot{56.0}{18.0} \drawthickdot{56.0}{24.0} \Thicklines
\drawpath{92.0}{34.0}{92.0}{4.0} \thinlines
\drawpath{92.0}{14.0}{82.0}{4.0} \drawpath{92.0}{14.0}{72.0}{34.0}
\drawpath{92.0}{20.0}{72.0}{30.0}
\drawpath{92.0}{20.0}{72.0}{10.0} \drawthickdot{80.0}{26.0}
\drawthickdot{92.0}{20.0} \drawthickdot{88.0}{18.0}
\drawthickdot{92.0}{14.0}
\end{picture}
\end{equation}
The compatibility condition between the two ways of factorizing
the two-soliton reflection matrix is the {\it reflection equation}
\cite{Cherednik:1984vs}
\begin{multline}\label{req}
(1\otimes K^\nu(\theta'))
S^{\nu\bar{\mu}}(\theta+\theta')
(1\otimes K^\mu(\theta))
S^{\mu\nu}(\theta-\theta')
\\=
S^{\bar{\nu}\bar{\mu}}(\theta-\theta')
(1\otimes K^\mu(\theta))
S^{\mu\bar{\nu}}(\theta+\theta')
(1\otimes K^\nu(\theta')).
\end{multline}
This equation is also sometimes known as the {\it boundary
Yang-Baxter equation}. Like the Yang-Baxter equation \eqref{ybe}
it is a non-linear functional matrix equation. It was solved to
find the reflection matrix for the sine-Gordon solitons in
\cite{Ghoshal:1994tm} and then for the vector solitons in
$a_n^{(1)}$ affine Toda theory in \cite{Gandenberger:1999uw}.
Solutions of the reflection equation are not only needed to
describe the reflection of particles off a boundary. They are also
used to construct integrable quantum spin chains with a boundary
and they provide the boundary Boltzmann weights for solvable models
of statistical mechanics. Unfortunately the reflection equation
has proven difficult to solve directly except for matrices of very
small dimension \cite{Lima-Santos:1999,Liu:1998jd}, for diagonal
matrices \cite{deVega:1994sb,Batchelor:1996xa} and a few other
cases \cite{Ahn:1998,Gandenberger:1999uw,Lima-Santos:2001aa}. The
two known systematic techniques for the construction of reflection
matrices are the fusion procedure \cite{Behrend:1996en} and the
extended Hecke algebraic techniques \cite{Doikou:2002ry} but these
have so far only produced a small subset of the possible
solutions.
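To make the structure of the reflection equation concrete, here is a small numerical check (an illustrative sketch of ours, not taken from the talk) that Sklyanin's diagonal boundary K-matrix for the six-vertex model satisfies the reflection equation in the form $R(u-v)K_1(u)R(u+v)K_2(v)=K_2(v)R(u+v)K_1(u)R(u-v)$, which applies here because the six-vertex $R$-matrix is $P$-symmetric:

```python
import numpy as np

def R(u, eta):
    # symmetric six-vertex R-matrix on C^2 x C^2:
    # weights a = sinh(u+eta), b = sinh(u), c = sinh(eta)
    a, b, c = np.sinh(u + eta), np.sinh(u), np.sinh(eta)
    M = np.zeros((4, 4))
    M[0, 0] = M[3, 3] = a
    M[1, 1] = M[2, 2] = b
    M[1, 2] = M[2, 1] = c
    return M

def K(u, xi):
    # Sklyanin's diagonal boundary K-matrix with boundary parameter xi
    return np.diag([np.sinh(xi + u), np.sinh(xi - u)])

def reflection_defect(u, v, eta, xi):
    I2 = np.eye(2)
    K1 = lambda w: np.kron(K(w, xi), I2)   # K acting in the first tensor factor
    K2 = lambda w: np.kron(I2, K(w, xi))   # K acting in the second tensor factor
    lhs = R(u - v, eta) @ K1(u) @ R(u + v, eta) @ K2(v)
    rhs = K2(v) @ R(u + v, eta) @ K1(u) @ R(u - v, eta)
    return np.abs(lhs - rhs).max()

print(reflection_defect(0.37, -0.81, 0.59, 1.23))  # machine-precision zero
```

The defect vanishes for arbitrary rapidities and boundary parameter, illustrating how a single two-dimensional K-matrix already solves the full $16\times 16$ system of equations.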
A way to find solutions to the reflection equation with the help
of quantum affine algebras has only been discovered very recently
\cite{Delius:2001qh}. This method arose by studying the non-local
symmetries of affine Toda theory on the half-line, as will be
described in the next section.
\section{Boundary quantum group symmetry}
\subsection{Symmetry charges}
In the presence of the boundary the non-local charges of the bulk
theory are no longer conserved. Rather we found
\cite{Delius:2001qh} that the boundary condition \eqref{bc} breaks
the symmetry to a subalgebra ${\mathcal B}_\epsilon\subset{\uqgh}$
generated by
\begin{equation}\label{tcc}
\widehat{Q}_i = Q_i + \bar{Q}_i +
\hat{\epsilon}_i q^{T_i},~~~~~i=0,\dots,n.
\end{equation}
Note that the symmetry algebra depends on the boundary parameters
$\hat{\epsilon}_i$ (which are related to the $\epsilon_i$ in
\eqref{bc} by a rescaling). We derived these conserved charges
using first order boundary conformal perturbation theory but we
believe the result to hold exactly. For $\hat{g}=a_n^{(1)}$ we
were able to show that this is the case, at least on shell, as
explained in the next section. We can now report that recent
calculations performed in collaboration with Alan George establish
the symmetry also for $\hat{g}=d_n^{(1)}$. It is the subject of
ongoing work to extend this to all affine Toda field theories with
an integrable boundary condition.
\subsection{Reflection Matrix as Intertwiner}
The reflection matrix has to commute with the action of any
symmetry charge $\hat{Q}\in{\mathcal B}_\epsilon\subset{\uqgh}$,
i.e., the following diagram is commutative:
\begin{equation}\label{kint}
\begin{CD}
V^\mu_{\theta}
@>\pi^\mu_\theta(\hat{Q})>>
V^\mu_{\theta}
\\
@VV{K^{\mu}(\theta)}V
@VV{K^{\mu}(\theta)}V
\\
V^{\bar{\mu}}_{-\theta}
@>\pi^{\bar{\mu}}_{-\theta}(\hat{Q})>>
V^{\bar{\mu}}_{-\theta}
\end{CD}
\end{equation}
If the symmetry algebra ${\mathcal B}_\epsilon$ is ``large enough'' so that
$V^\mu_\theta$ and $V^{\bar{\mu}}_{-\theta}$ are irreducible modules of
${\mathcal B}_\epsilon$, then this linear equation determines the reflection matrix
uniquely up to an overall factor. As we will see below, this is
the case for example for the vector representation of $a_n^{(1)}$.
\subsection{Coideal property}
The residual symmetry algebra ${\mathcal B}_\epsilon$ turns out not to be a Hopf
algebra because it does not have a coproduct
$\Delta:{\mathcal B}_\epsilon\rightarrow{\mathcal B}_\epsilon\otimes{\mathcal B}_\epsilon$. Using the coproduct
\eqref{cop} of $\uqgh$ we find
\begin{equation}
\Delta(\hat{Q}_i)=(Q_i+\bar{Q}_i)\otimes 1+q^{T_i}\otimes
\hat{Q}_i.
\end{equation}
Thus the symmetry algebra ${\mathcal B}_\epsilon$ is a left coideal of ${\uqgh}$ in
the sense that
\begin{equation}
\Delta(\hat{Q})\in{\uqgh}\otimes{\mathcal B}_\epsilon~~~\text{ for all }\hat{Q}\in{\mathcal B}_\epsilon.
\end{equation}
This allows it to act on multi-soliton states in the presence of a
boundary because the coproduct can be used to define the action of
${\mathcal B}_\epsilon$ on any tensor product of a $\uqgh$-module (solitons) with a
${\mathcal B}_\epsilon$-module (boundary states). We would like to stress that this
is the first time that a symmetry algebra in physics has been
found that is not a bialgebra.
\subsection{The Reflection Equation from Schur's lemma}
If ${\mathcal B}_\epsilon$ is "large enough" so that the tensor
product modules are irreducible, then by Schur's lemma the
following diagram is commutative up to a scalar factor
\begin{equation}
\begin{CD}\label{cdre}
V^\mu_\theta\otimes V^\nu_{\theta'}
@>\text{id}\,\otimes K^\nu(\theta')>>
V^\mu_\theta\otimes V^{\bar{\nu}}_{-\theta'}
\\
@VVS^{\mu\nu}(\theta-\theta')V
@VS^{\mu\bar{\nu}}(\theta+\theta')VV
\\
V^\nu_{\theta'}\otimes V^\mu_{\theta}
@.
V^{\bar{\nu}}_{-\theta'}\otimes V^\mu_\theta
\\
@VV\text{id}\,\otimes K^\mu(\theta)V
@VV\text{id}\,\otimes K^\mu(\theta)V
\\
V^\nu_{\theta'}\otimes V^{\bar{\mu}}_{-\theta}
@.
V^{\bar{\nu}}_{-\theta'}\otimes V^{\bar{\mu}}_{-\theta}
\\
@VVS^{\nu\bar{\mu}}(\theta+\theta')V
@VS^{\bar{\nu}\bar{\mu}}(\theta-\theta')VV
\\
V^{\bar{\mu}}_{-\theta}\otimes V^\nu_{\theta'}
@>\text{id}\,\otimes K^\nu(\theta')>>
V^{\bar{\mu}}_{-\theta}\otimes V^{\bar{\nu}}_{-\theta'}
\end{CD}
\end{equation}
Thus any reflection matrix which satisfies the intertwining
property \eqref{kint} automatically satisfies the reflection
equation \eqref{req} (at least up to a scalar factor).
\subsection{Calculating Reflection Matrices}
Solving the quantum group intertwining property \eqref{kint} is
much easier than solving the reflection equation \eqref{req}
directly because the intertwining property is a linear equation.
To illustrate this new technique we present here the derivation of
the reflection matrix for the vector solitons in $a_n^{(1)}$ Toda
theory. Using the representation matrices
\begin{align}
\pi^\mu_\theta(\hat{Q}_i)&=x\,e^{i+1}{}_i+
x^{-1}\,e^{i}{}_{i+1}+
\hat{\epsilon}_i\,((q^{-1}-1)\,e^{i}{}_i+(q-1)\,e^{i+1}{}_{i+1}+\mathbf{1})
\end{align}
the intertwining property \eqref{kint} gives the following set of
linear equations for the entries of the reflection matrix:
\begin{align}
0&=\hat{\epsilon}_i(q^{-1}-q)K^i{}_i+x\,K^i{}_{i+1}-x^{-1}\,K^{i+1}{}_i,
\label{s3}\\
0&=K^{i+1}{}_{i+1}-K^i{}_i,\\
0&=\hat{\epsilon}_i\,q\,K^i{}_j+x^{-1}\,K^{i+1}{}_j,~~~j\neq
i,i+1,\\
0&=\hat{\epsilon}_i\,q^{-1}\,K^j{}_i+x\,K^j{}_{i+1},~~~j\neq
i,i+1.
\end{align}
If all $|\hat{\epsilon}_i|=1$ then one finds the solution
\begin{align}
K^i{}_i(\theta)&=\left(
q^{-1}\,(-q\,x)^{(n+1)/2}-
\hat{\epsilon}\,q\,(-q\,x)^{-(n+1)/2}
\right)
\frac{k(\theta)}{q^{-1}-q},\\
K^i{}_j(\theta)&=\hat{\epsilon}_i\cdots\hat{\epsilon}_{j-1}\,
(-q\,x)^{i-j+(n+1)/2}\,k(\theta),~~~~~~\text{for }j>i,\\
K^j{}_i(\theta)&=\hat{\epsilon}_i\cdots\hat{\epsilon}_{j-1}\hat{\epsilon}\,
(-q\,x)^{j-i-(n+1)/2}\,k(\theta),~~~~~\text{for }j>i,
\end{align}
that is unique up to an overall numerical factor $k(\theta)$. If
all $\hat{\epsilon}_i=0$ then the solution is diagonal. For other
values of the $\hat{\epsilon}_i$ there are no solutions. This is
in agreement with the restrictions on the boundary parameters
found by imposing classical integrability \cite{Bowcock:1995vp}.
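This solution can be verified numerically. The sketch below (ours, not from the talk) builds $K$ from the formulas above with $k(\theta)=1$ and matrix indices $1,\dots,n+1$, restricts the checks to the non-affine labels $i=1,\dots,n$, takes $\hat{\epsilon}_i=\pm1$, and treats the unindexed $\hat{\epsilon}$ as a free parameter, since its value drops out of the equations being tested:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3                                  # a_3^(1): vector representation of dimension n + 1 = 4
half = (n + 1) // 2                    # the exponent (n+1)/2, an integer for odd n
q, x = 0.7 + 0.4j, 1.3 - 0.2j          # generic values; x = e^theta
eps = {i: rng.choice([-1.0, 1.0]) for i in range(1, n + 1)}  # hat-epsilon_i = +/-1
eps_bar = 0.6 + 0.8j                   # unindexed hat-epsilon (drops out of these checks)
mqx = -q * x                           # the combination (-q x)

def prod_eps(i, j):
    # hat-epsilon_i * ... * hat-epsilon_{j-1}
    p = 1.0
    for l in range(i, j):
        p *= eps[l]
    return p

d = n + 1
K = np.zeros((d + 1, d + 1), dtype=complex)  # 1-based indices, k(theta) = 1
for i in range(1, d + 1):
    K[i, i] = (mqx**half / q - eps_bar * q * mqx**(-half)) / (1 / q - q)
    for j in range(i + 1, d + 1):
        K[i, j] = prod_eps(i, j) * mqx**(i - j + half)
        K[j, i] = prod_eps(i, j) * eps_bar * mqx**(j - i - half)

# check the linear equations (s3) and its companions for i = 1, ..., n
err = 0.0
for i in range(1, n + 1):
    err = max(err, abs(eps[i] * (1 / q - q) * K[i, i] + x * K[i, i + 1] - K[i + 1, i] / x))
    err = max(err, abs(K[i + 1, i + 1] - K[i, i]))
    for j in range(1, d + 1):
        if j not in (i, i + 1):
            err = max(err, abs(eps[i] * q * K[i, j] + K[i + 1, j] / x))
            err = max(err, abs(eps[i] / q * K[j, i] + x * K[j, i + 1]))
print(err)  # machine-precision zero
```

All residuals vanish to machine precision, illustrating how overdetermined the linear system is: a full matrix of entries is fixed by a handful of two-term relations.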
We have applied the same technique to derive the reflection
matrices for any representation of $\hat{sl}_2$
\cite{Delius:2002ad}. We are currently calculating the reflection
matrices for the vector representation for all families of affine
Lie algebras, mirroring the work done by Jimbo for R-matrices
\cite{Jimbo:1986ua}.
\subsection{Boundary Bound States}
Particles can bind to the boundary, creating multiplets of
boundary bound states. These span finite dimensional
representations $V^{[\lambda]}$ of the symmetry algebra ${\mathcal
B}_\epsilon$. The reflection of particles off boundary bound
states is described by intertwiners
\begin{equation}
K^{\mu[\lambda]}(\theta):V^\mu_\theta\otimes
V^{[\lambda]}\rightarrow V^{\bar{\mu}}_{-\theta}\otimes V^{[\lambda]}.
\end{equation}
Unfortunately nothing is known so far about the representation
theory of the algebras ${\mathcal B}_\epsilon$. One objective of this talk is to
encourage research into this direction. Ideally one would like to
be able to calculate the branching rules for representations of
$\uqgh$ into irreducible representations of ${\mathcal B}_\epsilon$.
\section{Reconstruction of symmetry}
In this last section I want to show how to reconstruct the
symmetry algebra of a theory with boundary from the knowledge of
at least one of its reflection matrices.
Let us assume that for one particular $\uqgh$ representation
$V^\mu_\theta$ we know the reflection matrix
$K^\mu(\theta):V^\mu_\theta\rightarrow V^{\bar{\mu}}_{-\theta}$.
We define the corresponding ${\uqgh}$-valued $L$-operators in terms
of the universal $R$-matrix ${\mathcal R}$ of ${\uqgh}$,
\begin{align}
L^\mu_\theta&=\left(\pi^\mu_\theta\otimes\text{id}\right)
({\mathcal R})\in\text{End}(V^\mu_\theta)\otimes{\uqgh},\\
\bar{L}^{\bar{\mu}}_\theta&=
\left(\pi^{\bar{\mu}}_{-\theta}\otimes\text{id}\right)
({\mathcal R}^{\text{op}})\in\text{End}(V^{\bar{\mu}}_{-\theta})\otimes{\uqgh}.
\end{align}
Here ${\mathcal R}^{\text{op}}$ is the opposite universal
R-matrix obtained by interchanging the two tensor factors.
From these $L$-operators we construct the matrices
\begin{equation}
B^\mu_\theta=\bar{L}^{\bar{\mu}}_\theta\,
(K^\mu(\theta)\otimes\,1)\,L^\mu_\theta\in
\text{End}(V^\mu_\theta,V^{\bar{\mu}}_{-\theta})\otimes{\uqgh}.
\end{equation}
In index notation this becomes
\begin{equation}
(B^\mu_\theta)^\alpha{}_\beta=
(\bar{L}^{\bar{\mu}}_\theta)^\alpha{}_\gamma
(K^\mu(\theta))^\gamma{}_\delta(L^\mu_\theta)^\delta{}_\beta
\in{\uqgh}.
\end{equation}
We find that for all $\theta$ the $(B^\mu_\theta)^\alpha{}_\beta$
are elements of a coideal subalgebra ${\mathcal B}$ which commutes
with the reflection matrices. It is easy to check the coideal
property:
\begin{equation}
\Delta\left((B^{\mu}_\theta)^\alpha{}_\beta\right)
=(\bar{L}^{\bar{\mu}}_\theta)^\alpha{}_\delta
(L^\mu_\theta)^\sigma{}_\beta
\otimes (B^{\mu}_\theta)^\delta{}_\sigma.
\end{equation}
Also any $K^\nu(\theta'):V^\nu_{\theta'}\rightarrow
V^{\bar{\nu}}_{-\theta'}$ which satisfies the appropriate reflection
equation commutes with the action of the elements
$(B^\mu_\theta)^\alpha{}_\beta$
\begin{equation}
K^\nu(\theta')\circ\pi^\nu_{\theta'}((B^\mu_\theta)^\alpha{}_\beta)=
\pi^{\bar{\nu}}_{-\theta'}((B^\mu_\theta)^\alpha{}_\beta)\circ K^\nu(\theta').
\end{equation}
Applying the above construction to the vector solitons in affine
Toda theory and expanding in powers of $x=e^\theta$ gives
\begin{equation}
B^\mu_\theta=B+x\,\sum_{l=0}^n(q^{-1}-q)\,e^{l+1}{}_l\otimes
\left(Q_l+\bar{Q}_l+\hat{\epsilon}_l\,q^{T_l}\right)+{\mathcal O}(x^2).
\end{equation}
This shows that the conserved charges \eqref{tcc} are correct to all orders.
The $B$-matrices can be shown to satisfy the quadratic relations
\begin{equation}
\check{P}R^{\bar{\nu}\bar{\mu}}(\theta-\theta')\check{P}\,
\overset{1}{B^\mu_\theta}\,
R^{\mu\bar{\nu}}(\theta+\theta')\,
\overset{2}{B^\nu_{\theta'}}
=\overset{2}{B^\nu_{\theta'}}\,
\check{P}R^{\nu\bar{\mu}}(\theta+\theta')\check{P}\,
\overset{1}{B^\mu_\theta}\,
R^{\mu\nu}(\theta-\theta').
\end{equation}
Thus they generate a {\it reflection equation algebra} in the
sense of Sklyanin \cite{Sklyanin:1988yz}.
\section{Summary and Discussion}
In the first half of the talk we reviewed how quantum affine
algebras are used to find the solutions of the Yang-Baxter
equation that describe soliton scattering in affine Toda theory.
In the second half we then described the recently discovered
analogous technique for soliton reflection off integrable
boundaries. The main points were the following
\begin{itemize}
\item A boundary condition breaks the symmetry to a subalgebra
${\mathcal B}_\epsilon$ of the quantum affine algebra $\uqgh$, depending on the
boundary parameters $\epsilon_i$.
\item The residual symmetry algebra ${\mathcal B}_\epsilon$ is not a Hopf algebra
but only a coideal subalgebra of $\uqgh$.
\item The symmetry implies that the reflection matrices are
intertwiners of ${\mathcal B}_\epsilon$ representations. Because for generic
rapidity the $\uqgh$ modules $V^\mu_\theta$ spanned by the solitons
are irreducible also as modules of the subalgebra ${\mathcal B}_\epsilon$, the
intertwining property determines the reflection matrices uniquely
up to a scalar factor.
\item Because the tensor product modules $V^\mu_\theta\otimes
V^\nu_{\theta'}$ are also irreducible as ${\mathcal B}_\epsilon$ modules, Schur's lemma
implies that the reflection matrices that satisfy the intertwining
property are also automatically solutions of the reflection
equation.
\item The intertwining property leads to a set of linear equations
for the entries of the reflection matrix which is relatively
straightforward to solve. We have thus obtained a very practical
method of finding solutions to the reflection equation.
\item At special imaginary values of the rapidity $\theta$ a
soliton multiplet $V^\mu_\theta$ can become reducible and boundary
bound states can form, spanning irreducible representations of
${\mathcal B}_\epsilon$. Thus the physically interesting study of boundary bound
states is connected to the study of the branching rules from
$\uqgh$ to ${\mathcal B}_\epsilon$.
\end{itemize}
While in this talk we concentrated on affine Toda field theories,
a similar story unfolds in the principal chiral models. There the
theory in the bulk has a Yangian symmetry, which in the presence
of a boundary is broken to a twisted Yangian. This was discovered
in \cite{Delius:2001yi} and was used to find reflection matrices
in \cite{Delius:2001yi,MacKay:2002at}.
We found the boundary quantum groups ${\mathcal B}_\epsilon$ by studying concrete
boundary quantum field theories. However the question can also be
approached purely mathematically. One would like to classify and
construct all coideal subalgebras of quantum affine algebras or
Yangians that qualify as boundary quantum groups in the sense that
their intertwiners are solutions to the boundary Yang-Baxter
equation. This would lead to a classification of solutions of the
boundary Yang-Baxter equation in terms of boundary quantum groups
in the same spirit as the classification of solutions of the
Yang-Baxter equation in terms of representations of quantum
groups.
I hope to have given you in this talk a glimpse of a whole new
field of enquiry related to Kac-Moody algebras with applications
in both physics and mathematics.
\noindent{\bf Note added:}
Since this contribution was written in May 2002 several
interesting papers have appeared on the subject of reflection
matrices and coideal subalgebras. Examples include
\cite{Baseilhac:2002kf,Molev:2002,Delius:2002}.
\noindent{\bf Acknowledgments}
The work presented here was performed in collaboration with Niall
MacKay and our graduate students Ben Short and Alan George. My
work is supported by an EPSRC advanced fellowship.
\end{document}
\begin{document}
\maketitle
\begin{abstract}
The instanton Floer homology of a knot in $S^{3}$ is a vector space
with a canonical mod $2$ grading. It carries a distinguished endomorphism of
even degree, arising from the $2$-dimensional homology class
represented by a Seifert surface. The Floer homology decomposes as
a direct sum of the generalized eigenspaces of this endomorphism.
We show that the Euler characteristics of these generalized
eigenspaces are the coefficients of the Alexander polynomial of the
knot. Among other applications, we deduce that instanton homology
detects fibered knots.
\end{abstract}
\section{Introduction}
For a knot $K\subset S^{3}$, the authors defined in
\cite{KM-sutures} a Floer homology group $\KHI(K)$, by a slight
variant of a construction that appeared first in \cite{Floer-Durham-paper}.
In brief, one takes the knot complement $S^{3} \setminus N^{\circ}(K)$
and forms from it a closed $3$-manifold $Z(K)$ by attaching to
$\partial N(K)$ the manifold $F\times S^{1}$, where $F$ is a genus-1
surface with one boundary component. The attaching is done in such a
way that $\{\text{point}\}\times S^{1}$ is glued to the meridian of
$K$ and $\partial F \times \{\text{point}\}$ is glued to the
longitude. The vector space $\KHI(K)$ is then defined by applying
Floer's instanton homology to the closed 3-manifold $Z(K)$. We will
recall the details in section~\ref{sec:background}. If $\Sigma$ is a
Seifert surface for $K$, then there is a corresponding closed surface
$\bar\Sigma$ in $Z(K)$, formed as the union of $\Sigma$ and one copy
of $F$. The homology class $\bar\sigma=[\bar\Sigma]$ in $H_{2}(Z(K))$
determines an
endomorphism $\mu(\bar\sigma)$ on the instanton homology of $Z(K)$,
and hence also an endomorphism of $\KHI(K)$. As was shown in
\cite{KM-sutures}, and as we recall below, the generalized
eigenspaces of $\mu(\bar\sigma)$ give a direct sum decomposition,
\begin{equation}\label{eq:eigenspace-decomposition}
\KHI(K) = \bigoplus_{j=-g}^{g} \KHI(K,j).
\end{equation}
Here $g$ is the genus of the Seifert surface. In this paper, we will
define a canonical $\Z/2$ grading on $\KHI(K)$, and hence on each
$\KHI(K,j)$, so that we may write
\[
\KHI(K,j) = \KHI_{0}(K,j) \oplus \KHI_{1}(K,j).
\]
This allows us to define the Euler characteristic $\chi(\KHI(K,j))$ as
the difference of the ranks of the even and odd parts. The main result
of this paper is the following theorem.
\begin{theorem}\label{thm:main}
For any knot in $S^{3}$,
the Euler characteristics $\chi(\KHI(K,j))$ of the summands
$\KHI(K,j)$ are minus the coefficients of
the symmetrized Alexander polynomial $\Delta_{K}(t)$, with Conway's
normalization. That is,
\[
\Delta_{K}(t) = - \sum_{j} \chi(\KHI(K,j)) t^{j}.
\]
\end{theorem}
The Floer homology group $\KHI(K)$ is supposed to be an ``instanton''
counterpart to the Heegaard knot homology of Ozsv\'ath-Szab\'o and
Rasmussen \cite{Ozsvath-Szabo-knotfloer,Rasmussen-thesis}. It is known
that the Euler characteristic of Heegaard knot homology gives the
Alexander polynomial; so the above
theorem can be taken as further evidence that the two theories are
indeed closely related.
\begin{figure}
\caption{The oriented skein triple $K_{+}$, $K_{-}$, $K_{0}$.}
\label{fig:Oriented-Skein}
\end{figure}
The proof of the theorem rests on Conway's skein relation for the
Alexander polynomial. To exploit the skein relation in this way, we
first extend the definition of $\KHI(K)$ to links. Then, given three
oriented knots or links $K_{+}$, $K_{-}$ and $K_{0}$ related by the
skein moves (see Figure~\ref{fig:Oriented-Skein}), we establish a long
exact sequence relating the instanton knot (or link) homologies of
$K_{+}$, $K_{-}$ and $K_{0}$. More precisely,
if for example $K_{+}$ and $K_{-}$ are knots and $K_{0}$ is a
$2$-component link, then we will show that there is a long exact
sequence
\[
\cdots \to \KHI(K_{+}) \to \KHI(K_{-}) \to \KHI(K_{0}) \to \cdots .
\]
(This situation is a little different when $K_{+}$ and $K_{-}$ are
$2$-component links and $K_{0}$ is a knot: see Theorem~\ref{thm:skein}.)
Skein exact sequences of
this sort for $\KHI(K)$ are not new. The definition of $\KHI(K)$
appears almost verbatim in Floer's paper
\cite{Floer-Durham-paper}, along with outline
proofs of just such a skein sequence. See in particular part ($2'$) of
Theorem 5 in \cite{Floer-Durham-paper}, which corresponds to
Theorem~\ref{thm:skein} in this paper.
The material of Floer's paper
\cite{Floer-Durham-paper} is also presented in \cite{Braam-Donaldson}. The
proof of the skein exact sequence which we shall describe is essentially
Floer's argument, as amplified in \cite{Braam-Donaldson},
though we shall present it in the context of sutured
manifolds. The new ingredient however is the decomposition
\eqref{eq:eigenspace-decomposition} of the instanton Floer homology,
without which one cannot arrive at the Alexander
polynomial.
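Conway's skein relation itself, $\Delta_{K_{+}}(t) - \Delta_{K_{-}}(t) = (t^{1/2}-t^{-1/2})\,\Delta_{K_{0}}(t)$, can be sanity-checked on the simplest triple. The snippet below (an illustration of ours, using the classical symmetrized polynomials of the right-handed trefoil, the unknot and the positive Hopf link) verifies the relation numerically:

```python
def delta_trefoil(t):
    return t - 1 + 1 / t          # symmetrized Alexander polynomial of the trefoil

def delta_unknot(t):
    return 1.0

def delta_hopf(t):
    return t**0.5 - t**(-0.5)     # positive Hopf link

def skein_defect(t):
    # K_+ = right-handed trefoil, K_- = unknot, K_0 = positive Hopf link
    z = t**0.5 - t**(-0.5)
    return abs((delta_trefoil(t) - delta_unknot(t)) - z * delta_hopf(t))

print(skein_defect(2.7))  # 0 up to rounding
```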
The structure of the remainder of this paper is as
follows. In section~\ref{sec:background}, we recall the construction
of instanton knot homology, as well as instanton homology for sutured
manifolds, following \cite{KM-sutures}. We take the opportunity here
to extend and slightly generalize our earlier results concerning these
constructions. Section~\ref{sec:skein} presents the proof of the main
theorem. Some applications are discussed in section~\ref{sec:applications}.
The relationship between $\Delta_{K}(t)$ and the instanton homology of
$K$ was conjectured in \cite{KM-sutures}, and the result provides the
missing ingredient to show that the $\KHI$ detects fibered knots.
Theorem~\ref{thm:main} also provides a lower bound for the rank of the
instanton homology group:
\begin{corollary}\label{cor:alexander-vs-rank}
If the Alexander polynomial of $K$ is $\sum_{-d}^{d}a_{j} t^{j}$,
then
the rank of $\KHI(K)$ is not less than $\sum_{-d}^{d}|a_{j}|$.
\qed
\end{corollary}
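For example, the right-handed trefoil has symmetrized Alexander polynomial (a standard computation)
\[
\Delta(t) = t - 1 + t^{-1},
\]
so Corollary~\ref{cor:alexander-vs-rank} gives a lower bound of $|1| + |{-1}| + |1| = 3$ for the rank of its instanton homology.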
The corollary can be used to draw conclusions about the existence of
certain representations of the knot group in $\SU(2)$.
\subparagraph{Acknowledgment.}
As this paper was being completed, the authors learned that
essentially the same result has been obtained simultaneously by Yuhan
Lim \cite{Lim}. The authors are grateful to the referee for pointing
out the errors in an earlier version of this paper, particularly
concerning the mod $2$ gradings.
\section{Background}
\label{sec:background}
\subsection{Instanton Floer homology}
Let $Y$ be a closed, connected, oriented $3$-manifold, and let $w\to Y$ be a
hermitian line bundle with the property that the pairing of $c_{1}(w)$
with some class in $H_{2}(Y)$ is odd. If $E\to Y$ is a $U(2)$ bundle
with $\Lambda^{2}E \cong w$, we write $\bonf(Y)_{w}$ for the
space of $\PU(2)$ connections in the adjoint bundle $\ad(E)$, modulo
the action of the gauge group consisting of automorphisms of $E$ with
determinant $1$. The instanton Floer homology group $I_{*}(Y)_{w}$ is
the Floer homology arising from the Chern-Simons functional on
$\bonf(Y)_{w}$. It has a relative grading by $\Z/8$.
Our notation for this Floer group follows
\cite{KM-sutures}; an exposition of its construction is in
\cite{Donaldson-book}. We will always use complex coefficients, so
$I_{*}(Y)_{w}$ is a complex vector space.
If $\sigma$ is a $2$-dimensional integral homology class in $Y$, then
there is a corresponding operator $\mu(\sigma)$ on $I_{*}(Y)_{w}$ of
degree $-2$. If $y\in Y$ is a point representing the generator of
$H_{0}(Y)$, then there is also a degree-$4$ operator $\mu(y)$.
The operators $\mu(\sigma)$, for $\sigma\in H_{2}(Y)$, commute with
each other and with $\mu(y)$.
As shown in \cite{KM-sutures} based on the calculations
of \cite{Munoz}, the simultaneous eigenvalues of the commuting
pair of operators
$(\mu(y),\mu(\sigma))$ all have the form
\begin{equation}\label{eq:eigenvalue-pairs}
(2, 2k) \qquad\text{or}\qquad (-2, 2k\sqrt{-1}),
\end{equation}
for even integers $2k$ in the range
\[
| 2k | \le |\sigma|.
\]
Here $|\sigma|$ denotes the Thurston norm of $\sigma$, the minimum
value of $-\chi(\Sigma)$ over all aspherical embedded surfaces
$\Sigma$ with $[\Sigma]=\sigma$.
\subsection{Instanton homology for sutured manifolds}
We recall the definition of the instanton Floer homology for a
balanced sutured manifold, as introduced in \cite{KM-sutures} with
motivation from the Heegaard counterpart defined in \cite{Juhasz-1}.
The reader is referred to \cite{KM-sutures} and \cite{Juhasz-1} for
background and details.
Let
$(M,\gamma)$ be a balanced sutured manifold. Its oriented boundary is a union,
\[
\partial M = R_{+}(\gamma) \cup A(\gamma) \cup
(- R_{-}(\gamma))
\]
where $A(\gamma)$ is a union of annuli, neighborhoods of the sutures
$s(\gamma)$. To define the instanton homology group $\SHI(M,\gamma)$
we proceed as follows. Let $([-1,1]\times T,\delta)$ be a product sutured
manifold, with $T$ a connected, oriented surface with boundary. The
annuli $A(\delta)$ are the annuli $[-1,1]\times \partial T$, and we
suppose these are in one-to-one correspondence with the annuli
$A(\gamma)$. We attach this product piece to $(M,\gamma)$ along the
annuli to obtain a manifold
\begin{equation}\label{eq:barM}
\bar{M} = M \cup \bigl( [-1,1]\times T \bigr).
\end{equation}
We write
\begin{equation}\label{eq:boundary-barM}
\partial \bar{M} = \bar{R}_{+} \cup (-\bar{R}_{-}).
\end{equation}
We can regard $\bar{M}$ as a sutured manifold (not balanced, because it
has no sutures). The surface $\bar{R}_{+}$ and $\bar{R}_{-}$ are both
connected and are diffeomorphic. We choose an orientation-preserving
diffeomorphism
\[
h : \bar{R}_{+} \to \bar{R}_{-}
\]
and then define $Z=Z(M,\gamma)$ as the quotient space
\[
Z = \bar{M}/\sim,
\]
where $\sim$ is the identification defined by $h$. The two surfaces
$\bar{R}_{\pm}$ give a single closed surface
\[
\bar{R}\subset Z.
\]
We need to impose a side condition on the choice of $T$ and $h$ in
order to proceed. We require that there is a closed curve $c$ in $T$
such that $\{1\}\times c$ and $\{-1\}\times c$ become non-separating
curves in $\bar{R}_{+}$ and $\bar{R}_{-}$ respectively; and we require
further that $h$ is chosen so as to carry $\{1\}\times c$ to
$\{-1\}\times c$ by the identity map on $c$.
\begin{definition}
We say that $(Z,\bar{R})$ is an admissible closure of $(M,\gamma)$
if it arises in this way, from some choice of $T$ and $h$,
satisfying the above conditions. \CloseDef
\end{definition}
\begin{remark}
In \cite[Definition~4.2]{KM-sutures}, there was an additional
requirement that $\bar{R}_{\pm}$ should have genus $2$ or more. This
was needed only in the context there of Seiberg-Witten Floer
homology, as explained in section~7.6 of
\cite{KM-sutures}. Furthermore, the notion of closure in
\cite{KM-sutures} did not require that $h$ carry $\{1\}\times c$
to $\{-1\}\times c$, hence the qualification ``admissible'' in the
present paper.
\end{remark}
In an admissible closure, the curve $c$ gives rise to a torus $S^{1}\times c$
in $Z$ which meets $\bar{R}$ transversely in a circle. Pick a point
$x$ on $c$. The
closed curve $S^{1}\times \{x\}$ lies on the torus $S^{1}\times c$ and
meets $\bar{R}$ in a
single point. We write
\[
w \to Z
\]
for a hermitian line bundle on $Z$ whose first Chern class is dual to
$S^{1}\times\{x\}$. Since $c_{1}(w)$ has odd evaluation on the closed
surface $\bar{R}$, the instanton homology group $I_{*}(Z)_{w}$ is
well-defined. As in \cite{KM-sutures}, we write
\[
I_{*}(Z|\bar{R})_{w} \subset I_{*}(Z)_{w}
\]
for the simultaneous generalized eigenspace of the pair of operators
\[(\mu(y),\mu(\bar{R}))\] belonging to the eigenvalues $(2,2g-2)$, where
$g$ is the genus of $\bar{R}$. (See \eqref{eq:eigenvalue-pairs}.)
\begin{definition}
For a balanced sutured manifold $(M,\gamma)$,
the instanton Floer homology group $\SHI(M,\gamma)$ is defined to
be $I_{*}(Z|\bar{R})_{w}$, where $(Z,\bar{R})$ is any admissible
closure of $(M,\gamma)$. \CloseDef
\end{definition}
It was shown in \cite{KM-sutures} that $\SHI(M,\gamma)$ is
well-defined, in the sense that any two choices of $T$ or $h$ will
lead to isomorphic versions of $\SHI(M,\gamma)$.
\subsection{Relaxing the rules on $T$}
\label{subsec:disconnected-T}
As stated, the definition of $\SHI(M,\gamma)$ requires that we form a
closure $(Z,\bar{R})$ using a \emph{connected} auxiliary surface $T$.
We can relax this condition on $T$, with a little care, and the extra
freedom gained will be convenient in later arguments.
So let $T$ be a possibly disconnected, oriented surface with boundary.
The number of boundary components of $T$ needs to be equal to the
number of sutures in $(M,\gamma)$. We then need to choose an
orientation-reversing
diffeomorphism between $\partial T$ and $\partial R_{+}(\gamma)$, so
as to be able to form a manifold $\bar{M}$ as in \eqref{eq:barM},
gluing $[-1,1]\times \partial T$ to the annuli $A(\gamma)$. We
continue to write $\bar{R}_{+}$, $\bar{R}_{-}$ for the ``top'' and
``bottom'' parts of the boundary of $\partial \bar{M}$, as at
\eqref{eq:boundary-barM}. Neither of these need be connected, although
they have the same Euler number. We shall impose the following
conditions.
\begin{enumerate}
\item On each connected component $T_{i}$ of $T$, there is an
oriented
simple closed curve $c_{i}$ such that the corresponding curves
$\{1\}\times c_{i}$ and $\{-1\}\times c_{i}$ are both
non-separating on $\bar{R}_{+}$ and $\bar{R}_{-}$ respectively.
\item \label{item:T-condition-2} There exists a diffeomorphism $h :
\bar{R}_{+}\to\bar{R}_{-}$ which carries $\{1\}\times c_{i}$ to
$\{-1\}\times c_{i}$ for all $i$, as oriented curves.
\item There is a $1$-cycle $c'$ on $\bar{R}_{+}$ which intersects
each curve $\{1\}\times c_{i}$ once.
\end{enumerate}
We then choose any $h$ satisfying \ref{item:T-condition-2} and use $h$
to identify the top and bottom, so forming a closed pair $(Z,\bar{R})$ as
before. The surface $\bar{R}$ may have more than one component (but no
more than the number of components of $T$). No component of $\bar{R}$
is a sphere, because each component contains a non-separating curve.
We may regard $T$ as a codimension-zero submanifold of $\bar{R}$ via
the inclusion of $\{1\}\times T$ in $\bar{R}_{+}$.
For each component $\bar{R}_{k}$ of $\bar{R}$, we now choose one
corresponding component $T_{i_{k}}$ of $T\cap\bar{R}_{k}$. We take
$w\to Z$
to be the complex line bundle with $c_{1}(w)$ dual to the sum of the
circles $S^{1}\times \{x_{k}\}\subset S^{1}\times c_{i_{k}}$. Thus
$c_{1}(w)$ evaluates to $1$ on each component
$\bar{R}_{k}\subset\bar{R}$. We may then consider the instanton Floer
homology group $I_{*}(Z|\bar{R})_{w}$.
\begin{lemma}\label{lem:relaxed-independence}
Subject to the conditions we have imposed, the Floer homology
group $I_{*}(Z|\bar{R})_{w}$ is independent of the choices made.
In particular, $I_{*}(Z|\bar{R})_{w}$ is isomorphic to
$\SHI(M,\gamma)$.
\end{lemma}
\begin{proof}
By a sequence of applications of the excision property of Floer
homology \cite{Floer-Durham-paper, KM-sutures}, we shall establish that
$I_{*}(Z|\bar{R})_{w}$ is isomorphic to $I_{*}(Z'|\bar{R}')_{w'}$,
where the latter arises from the same construction but with a
\emph{connected} surface $T'$. Thus $I_{*}(Z'|\bar{R}')_{w'}$ is
isomorphic to $\SHI(M,\gamma)$ by definition: its independence
of the choices made is proved in \cite{KM-sutures}.
We will show how to reduce the number of components
of $T$ by one. Following the argument of \cite[section
7.4]{KM-sutures}, we have an isomorphism
\begin{equation}\label{eq:u-to-w}
I_{*}(Z|\bar{R})_{w} \cong I_{*}(Z|\bar{R})_{u},
\end{equation}
where $u\to Z$ is the complex line bundle whose first Chern class
is dual to the cycle $c'\subset Z$.
We shall suppose in the first instance that at least one of $c_{i}$
or $c_{j}$ is non-separating in the corresponding component
$T_{i}$ or $T_{j}$.
Since $c_{1}(u)$ is odd on
the $2$-tori $S^{1}\times c_{i}$ and $S^{1}\times c_{j}$, we can
apply Floer's excision theorem (see also
\cite[Theorem~7.7]{KM-sutures}): we cut $Z$ open along these two
$2$-tori and glue back to obtain a new pair $(Z' | \bar{R}')$,
carrying a line bundle $u'$, and we have
\[
I_{*}(Z|\bar{R})_{u} \cong I_{*}(Z'|\bar{R}')_{u'}.
\]
Reversing the construction that led to the isomorphism
\eqref{eq:u-to-w}, we next have
\[
I_{*}(Z'|\bar{R}')_{u'} \cong I_{*}(Z'|\bar{R}')_{w'},
\]
where the line bundle
$w'$ is dual to a collection of circles $S^{1}\times\{x'_{k'}\}$,
one for each component of $\bar{R}'$.
The pair $(Z',\bar{R}')$ is obtained from the sutured
manifold $(M,\gamma)$ by the same construction that led to
$(Z,\bar{R})$, but with a surface $T'$ having one fewer component: the
components $T_{i}$ and $T_{j}$ have been joined into one component
by cutting open along the circles $c_{i}$ and $c_{j}$ and
regluing.
If both $c_{i}$ and $c_{j}$ are separating in $T_{i}$ and $T_{j}$
respectively, then the above argument fails, because $T'$ will
have the same number of components as $T$. In this case, we can
alter $T_{i}$ and $c_{i}$ to make a new $T'_{i}$ and $c'_{i}$,
with $c'_{i}$ non-separating in $T'_{i}$. For example, we may
replace $Z$ by the disjoint union $Z \amalg Z_{*}$, where $Z_{*}$
is a product $S^{1}\times T_{*}$, with $T_{*}$ of genus $2$. In
the same manner as above, we can cut $Z$ along $S^{1}\times c_{i}$
and cut $Z_{*}$ along $S^{1}\times c_{*}$, and then reglue,
interchanging the boundary components. The effect of this is to
replace $T_{i}$ by a surface $T'_{i}$ of genus one larger.
We can take $c'_{i}$ to be a non-separating curve on $T_{*}
\setminus c_{*}$.
\end{proof}
\subsection{Instanton homology for knots and links}
\label{subsec:inst-homology-link}
Consider a link $K$ in a closed oriented $3$-manifold $Y$. Following
Juh\'asz
\cite{Juhasz-1}, we can associate to $(Y,K)$ a sutured manifold
$(M,\gamma)$ by taking $M$ to be the link complement and taking the
sutures $s(\gamma)$ to consist of two oppositely-oriented meridional
curves on each of the tori in $\partial M$. As in \cite{KM-sutures},
where the case of knots was discussed, we take Juh\'asz'
prescription as a definition for the instanton knot (or link) homology
of the pair $(Y,K)$:
\begin{definition}[\textit{cf.} \cite{Juhasz-1}]
We define the instanton homology of the link $K\subset Y$ to be
the instanton Floer homology of the sutured manifold $(M,\gamma)$
obtained from the link complement as above. Thus,
\[
\KHI(Y,K) = \SHI(M,\gamma).
\]
\CloseDef
\end{definition}
Although we are free to choose any admissible closure $Z$ in
constructing $\SHI(M,\gamma)$, we can exploit the fact that we are
dealing with a link complement to narrow our choices. Let $r$ be the
number of components of the link $K$. Orient $K$ and choose a
longitudinal oriented curve $l_{i}\subset \partial M$ on the
peripheral torus of each component $K_{i}\subset K$. Let $F_{r}$ be a
genus-1 surface with $r$ boundary components, $\delta_{1},\dots,\delta_{r}$. Form a closed manifold
$Z$ by attaching $F_{r}\times S^{1}$ to $M$ along their
boundaries:
\begin{equation}\label{eq:special-closure}
Z = (Y\setminus N^{o}(K))
\cup (F_{r}\times S^{1}).
\end{equation}
The attaching is done so that the curve $p_{i}\times
S^{1}$ for $p_{i}\in \delta_{i}$ is attached to the meridian of
$K_{i}$ and $\delta_{i}\times \{q\}$ is attached to the chosen
longitude $l_{i}$. We can view $Z$ as a closure of $(M,\gamma)$ in
which the auxiliary surface $T$ consists of $r$ annuli,
\[
T = T_{1}\cup \dots \cup T_{r}.
\]
The two
sutures of the product sutured manifold
$[-1,1]\times T_{i}$ are attached to meridional sutures on the
components of $\partial M$ corresponding to $K_{i}$ and $K_{i-1}$ in
some cyclic ordering of the components.
Viewed this way, the corresponding surface
$\bar{R}\subset Z$ is the torus
\[
\bar{R} = \nu \times S^{1}
\]
where $\nu\subset F_{r}$ is a closed curve representing a generator
of the homology of the closed genus-1 surface obtained by adding disks
to $F_{r}$.
Because $\bar{R}$ is a torus,
the group
$I_{*}(Z|\bar{R})_{w}$ can be more simply described as the
generalized eigenspace of $\mu(y)$ belonging to the eigenvalue $2$,
for which we temporarily introduce the notation
$I_{*}(Z)_{w,+2}$. Thus we can write
\[
\KHI(Y,K) = I_{*}(Z)_{w,+2}.
\]
An important special case for us is when $K \subset Y$
is null-homologous in $Y$ with its given orientation. In this case,
we may choose a Seifert surface $\Sigma$, which we regard as a
properly embedded oriented surface in $M$ with oriented boundary a
union of longitudinal curves, one for each component of $K$. When a
Seifert surface is given, we have a \emph{uniquely preferred} closure $Z$,
obtained as above but using the longitudes provided by $\partial
\Sigma$. Let us fix a Seifert surface
$\Sigma$ and write $\sigma$ for its homology class in
$H_{2}(M,\partial M)$. The preferred closure of the sutured link
complement is entirely determined by $\sigma$.
\subsection{The decomposition into generalized eigenspaces}
We continue to suppose that $\Sigma$ is a Seifert surface for the
null-homologous oriented knot $K\subset Y$. We write $(M,\gamma)$ for
the sutured link complement and $Z$ for the preferred closure.
The homology class $\sigma = [\Sigma]$ in $H_{2}(M,\partial M)$
extends to a class $\bar\sigma = [\bar\Sigma]$ in $H_{2}(Z)$: the
surface $\bar\Sigma$ is formed from the Seifert surface $\Sigma$ and
$F_{r}$,
\[
\bar\Sigma = \Sigma\cup F_{r}.
\]
The homology class $\bar\sigma$ determines an endomorphism
\[
\mu(\bar\sigma) : I_{*}(Z)_{w,+2} \to I_{*}(Z)_{w,+2}.
\]
This endomorphism is traceless, a consequence of the relative $\Z/8$
grading: there is an endomorphism $\epsilon$ of $I_{*}(Z)_{w}$ given
by multiplication by $(\sqrt{-1})^{s}$ on the part of relative grading
$s$, and this $\epsilon$ commutes with $\mu(y)$ and anti-commutes
with $\mu(\bar\sigma)$. We write this traceless
endomorphism as
\begin{equation}\label{eq:mu-o}
\mu^{o}(\sigma) \in \sl( \KHI(Y,K)).
\end{equation}
Our notation hides the fact that the construction depends (a priori)
on the existence of the preferred closure $Z$, so that $\KHI(Y,K)$ can
be canonically identified with $I_{*}(Z)_{w,+2}$.
It now follows from \cite[Proposition~7.5]{KM-sutures} that the eigenvalues of
$\mu^{o}(\sigma)$ are even integers $2j$ in the range
$-2\bar{g}+2 \le 2j \le 2\bar{g}-2$, where $\bar{g}=g(\Sigma)+r $ is
the genus of $\bar{\Sigma}$. Thus:
\begin{definition}
For a null-homologous oriented link $K\subset Y$ with a chosen
Seifert surface $\Sigma$, we write
\[
\KHI(Y,K,[\Sigma], j)
\subset \KHI(Y,K)
\]
for the generalized eigenspace of $\mu^{o}([\Sigma])$ belonging to
the eigenvalue $2j$, so that
\[
\KHI(Y,K) =
\bigoplus_{j=-g(\Sigma)+1-r}^{g(\Sigma)-1+r}
\KHI(Y,K,[\Sigma], j),
\]
where $r$ is the number of components of $K$.
If $Y$ is a homology sphere, we may omit
$[\Sigma]$ from the notation; and if $Y$ is $S^{3}$ then we simply
write $\KHI(K,j)$. \CloseDef
\end{definition}
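For orientation, it may help to specialize the definition to a knot: if
$K\subset S^{3}$ is a knot of genus $g$, then $r=1$ and
$\bar{g}=g+1$, so the eigenvalues $2j$ of $\mu^{o}([\Sigma])$ lie in
the range $-2g\le 2j \le 2g$ and the decomposition reads
\[
\KHI(K) = \bigoplus_{j=-g}^{g} \KHI(K,j).
\]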
\begin{remark}
The authors believe that, for a general sutured manifold
$(M,\gamma)$, one can define a unique linear map
\[
\mu^{o} : H_{2}(M,\partial M) \to \sl ( \SHI(M,\gamma))
\]
characterized by the property that for any admissible closure
$(Z,\bar{R})$ and any $\bar{\sigma}$ in $H_{2}(Z)$
extending $\sigma \in H_{2}(M,\partial M)$ we have
\[ \mu^{o}(\sigma) = \text{traceless part of $\mu(\bar\sigma)$},
\]
under a suitable
identification of $I_{*}(Z| \bar{R})_{w}$ with $\SHI(M,\gamma)$. The
authors will return to this question in a future paper. For now, we
are exploiting the existence of a preferred closure $Z$ so as to
side-step the issue of whether $\mu^{o}$ would be well-defined,
independent of the choices made.
\end{remark}
\subsection{The mod 2 grading}
\label{subsec:mod-2-grading}
If $Y$ is a closed $3$-manifold, then the instanton homology group
$I_{*}(Y)_{w}$ has a canonical decomposition into parts of even and
odd grading mod $2$. For the purposes of this paper, we normalize our
conventions so that the two generators of $I_{*}(T^{3})_{w}=\C^{2}$
are in \emph{odd} degree. As in \cite[section~25.4]{KM-book}, the
canonical mod $2$ grading is then essentially determined by the
property that, for a cobordism $W$ from a manifold $Y_{-}$ to $Y_{+}$,
the induced map on Floer homology has even or odd grading according to
the parity of the integer
\begin{equation}\label{eq:iota-W} \iota(W) = \frac{1}{2}
\Bigl( \chi(W) + \sigma(W) +
b_1(Y_+) - b_0(Y_+) - b_1(Y_-) + b_0(Y_-)\Bigr).
\end{equation}
(In the case of connected manifolds $Y_{+}$ and $Y_{-}$,
this formula reduces to the one that appears in \cite{KM-book} for the monopole
case. There is more than one way to extend the formula to the case of
disconnected manifolds, and we have simply chosen one.)
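As an illustration of how \eqref{eq:iota-W} will be applied in
section~\ref{sec:skein}, consider a cobordism $W$ from a closed
connected $3$-manifold $Z$ to itself having the homology of the
cylinder $[-1,1]\times Z$ blown up at one point. Blowing up
increases $\chi$ by $1$ and contributes $-1$ to $\sigma$, while the
Betti-number terms cancel because $Y_{+}=Y_{-}=Z$; since
$\chi([-1,1]\times Z)=0$, we obtain
\[
\iota(W) = \tfrac{1}{2}\bigl( (0+1) + (-1) \bigr) = 0,
\]
so such a cobordism induces a map of even degree.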
By declaring that the generators for
$T^{3}$ are in odd degree, we ensure that the canonical mod $2$
gradings behave as expected for disjoint unions of the $3$-manifolds.
Thus, if $Y_{1}$ and $Y_{2}$ are the connected components of a
$3$-manifold $Y$ and $\alpha_{1}\otimes \alpha_{2}$ is a class on $Y$
obtained from $\alpha_{i}$ on $Y_{i}$, then $\gr(\alpha_{1}\otimes
\alpha_{2})$ is $\gr(\alpha_{1}) + \gr(\alpha_{2})$ in $\Z/2$ as
expected.
Since the Floer homology $\SHI(M,\gamma)$ of a sutured manifold
$(M,\gamma)$ is defined in terms of $I_{*}(Z)_{w}$ for an admissible
closure $Z$, it is tempting to try to define a canonical mod $2$
grading on $\SHI(M,\gamma)$ by carrying over the canonical mod $2$
grading from $Z$. This does not work, however, because the result will
depend on the choice of closure. This is illustrated by the fact that
the mapping torus of a Dehn twist on $T^{2}$ may have Floer homology
in \emph{even} degree in the canonical mod $2$ grading (depending on
the sign of the Dehn twist), despite the fact that both $T^{3}$ and
this mapping torus can be viewed as closures of the same sutured
manifold.
We conclude from this that, without auxiliary choices, there is no
\emph{canonical} mod $2$ grading on $\SHI(M,\gamma)$ in general: only
a relative grading. Nevertheless, in the special case of an oriented
null-homologous knot or link $K$ in a closed $3$-manifold $Y$, we
\emph{can} fix a convention that gives an absolute mod $2$ grading,
once a Seifert surface $\Sigma$ for $K$ is given. We simply take the
preferred closure $Z$ described above in
section~\ref{subsec:inst-homology-link}, using $\partial\Sigma$ again
to define the longitudes, so that $\KHI(Y,K)$ is identified with
$I_{*}(Z)_{w,+2}$, and we use the canonical mod $2$ grading from the
latter.
With this convention, the unknot $U$ has $\KHI(U)$ of rank $1$, with
a single generator in odd grading mod $2$.
\section{The skein sequence}
\label{sec:skein}
\subsection{The long exact sequence}
Let $Y$ be any closed, oriented $3$-manifold, and let $K_{+}$, $K_{-}$
and $K_{0}$ be any three oriented knots or links in $Y$ which are related by
the standard skein moves: that is, all three links coincide outside a
ball $B$ in
$Y$, while inside the ball they are as shown in
Figure~\ref{fig:Oriented-Skein}. There are two cases which occur
here: the two strands of $K_{+}$ in $B$ may belong to the same
component of the link, or to different components. In the first case
$K_{0}$ has one more component than $K_{+}$ or $K_{-}$, while in the
second case it has one fewer.
\begin{theorem}[\textit{cf.} Theorem 5 of \cite{Floer-Durham-paper}]
\label{thm:skein}
Let $K_{+}$, $K_{-}$ and $K_{0}$ be oriented links in $Y$ as
above. Then, in the case that $K_{0}$ has one more component than
$K_{+}$ and $K_{-}$, there is a long exact sequence relating the
instanton homology groups of the three links,
\begin{equation}\label{eq:skein-first}
\cdots\to \KHI(Y,K_{+}) \to \KHI(Y,K_{-}) \to \KHI(Y,K_{0}) \to
\cdots.
\end{equation}
In the case that $K_{0}$ has fewer components than $K_{+}$ and
$K_{-}$, there is a long exact sequence
\begin{equation}\label{eq:skein-second}
\cdots\to \KHI(Y,K_{+}) \to \KHI(Y,K_{-}) \to
\KHI(Y,K_{0})\otimes V^{\otimes 2} \to
\cdots
\end{equation}
where $V$ is a $2$-dimensional vector space arising as
the instanton Floer homology of the sutured manifold
$(M,\gamma)$, with $M$ the solid torus $S^{1}\times D^{2}$
carrying four parallel sutures $S^{1}\times \{p_{i}\}$ for four
points $p_{i}$ on $\partial D^{2}$ carrying alternating
orientations.
\end{theorem}
\begin{proof}
Let $\lambda$ be a standard circle in the complement of $K_{+}$
which encircles the two strands of $K_{+}$ with total linking
number zero, as shown in Figure~\ref{fig:K-plus-with-lambda}.
\begin{figure}
\caption{The curve $\lambda$ encircling the two strands of
$K_{+}$ with total linking number zero.}
\label{fig:K-plus-with-lambda}
\end{figure}
Let
$Y_{-}$ and $Y_{0}$ be the $3$-manifolds obtained from $Y$ by
$-1$-surgery and $0$-surgery on $\lambda$ respectively. Since
$\lambda$ is disjoint from $K_{+}$, a copy of $K_{+}$ lies in
each, and we have new pairs $(Y_{-},K_{+})$ and $(Y_{0},K_{+})$.
The pair $(Y_{-},K_{+})$ can be identified with $(Y,K_{-})$.
\begin{figure}
\caption{The sutured manifolds $(M_{+},\gamma_{+})$,
$(M_{-},\gamma_{-})$ and $(M_{0},\gamma_{0})$ associated to the
three skein-related links.}
\label{fig:Skein-Tubes}
\end{figure}
Let
$(M_{+},\gamma_{+})$, $(M_{-},\gamma_{-})$ and
$(M_{0},\gamma_{0})$ be the sutured manifolds associated to the
links $(Y,K_{+})$, $(Y,K_{-})$ and $(Y_{0},K_{0})$ respectively:
that is, $M_{+}$, $M_{-}$ and $M_{0}$ are the link complements of
$K_{+}\subset Y$, $K_{-}\subset Y$ and $K_{0}\subset Y_{0}$
respectively, and there are two sutures on each boundary
component. (See Figure~\ref{fig:Skein-Tubes}.)
The sutured manifolds $(M_{-},\gamma_{-})$ and
$(M_{0}, \gamma_{0})$ are obtained from $(M_{+},\gamma_{+})$ by
$-1$-surgery and $0$-surgery respectively on the circle
$\lambda\subset M_{+}$. If $(Z,\bar{R})$ is any admissible closure
of $(M_{+},\gamma_{+})$ then surgery on $\lambda\subset Z$ yields
admissible closures for the other two sutured manifolds. From
Floer's surgery exact triangle \cite{Braam-Donaldson}, it follows
that there is a long exact sequence
\begin{equation}\label{eq:SHI-long-exact}
\cdots\to \SHI(M_{+},\gamma_{+}) \to
\SHI(M_{-},\gamma_{-}) \to
\SHI(M_{0},\gamma_{0}) \to
\cdots
\end{equation}
in which the maps are induced by surgery cobordisms between
admissible closures of the sutured manifolds.
By definition, we have
\[
\begin{aligned}
\SHI(M_{+},\gamma_{+}) &= \KHI(Y,K_{+}) \\
\SHI(M_{-},\gamma_{-}) &= \KHI(Y,K_{-}) .
\end{aligned}
\]
\begin{figure}
\caption{Decomposing $(M_{0},\gamma_{0})$ along the product
annulus $S$ to obtain $(M'_{0},\gamma'_{0})$.}
\label{fig:Decompose-M0}
\end{figure}
However, the situation for $(M_{0}, \gamma_{0})$ is a little
different. The manifold $M_{0}$ is obtained by zero-surgery on the
circle $\lambda$ in $M_{+}$, as indicated in
Figure~\ref{fig:Skein-Tubes}. This sutured manifold contains a
product annulus $S$, consisting of the union of the
twice-punctured disk shown in Figure~\ref{fig:Decompose-M0} and
a disk $D^{2}$ in the surgery solid-torus $S^{1}\times D^{2}$. As
shown in the figure, sutured-manifold decomposition along the
annulus $S$ gives a sutured manifold $(M'_{0},\gamma'_{0})$ in
which $M'_{0}$ is the link complement of $K_{0}\subset Y$:
\[
(M_{0},\gamma_{0}) \decomp{S} (M'_{0}, \gamma'_{0}).
\]
By Proposition~6.7 of \cite{KM-sutures} (as adapted to the
instanton homology setting in section~7.5 of that paper), we
therefore have an isomorphism
\[
\SHI (M_{0},\gamma_{0})\cong \SHI (M'_{0},
\gamma'_{0}).
\]
We now have to separate cases according to the number of
components of $K_{+}$ and $K_{0}$. If the two strands of $K_{+}$
at the crossing belong to the same component, then every component
of $\partial M'_{0}$ contains exactly two, oppositely-oriented
sutures, and we therefore have
\[
\SHI (M'_{0},
\gamma'_{0}) = \KHI(Y, K_{0}).
\]
In this case, the sequence \eqref{eq:SHI-long-exact} becomes the
sequence in the first case of the theorem.
\begin{figure}
\caption{Reducing the number of sutures on a torus boundary
component by decomposing along a separating annulus.}
\label{fig:Remove-Sutures}
\end{figure}
If the two strands of $K_{+}$ belong to different components, then
the corresponding boundary components of $M_{+}$ each carry two
sutures. These two boundary components become one boundary
component in $M'_{0}$, and the decomposition along $S$ introduces
two new sutures; so the resulting boundary component in $M'_{0}$
carries six meridional sutures, with alternating orientations.
Thus $(M'_{0}, \gamma'_{0})$ fails to be the sutured manifold
associated to the link $K_{0}\subset Y$, on account of having four
additional sutures. As shown in Figure~\ref{fig:Remove-Sutures}
however, the number of sutures on a torus boundary component can
always be reduced by $2$ (as long as there are at least four to
start with) by using a decomposition along a separating annulus.
This decomposition results in a manifold with one additional
connected component, which is a solid torus with four longitudinal
sutures. This operation needs to be performed twice to reduce the
number of sutures in $M'_{0}$ by four, so we obtain two copies of
this solid torus. Denoting by $V$ the Floer homology of this
four-sutured solid-torus, we therefore have
\[
\SHI (M'_{0},
\gamma'_{0}) = \KHI(Y, K_{0})\otimes V\otimes V
\]
in this case. Thus the sequence \eqref{eq:SHI-long-exact} becomes
the second long exact sequence in the theorem.
At this point, all that remains is to show that $V$ is
$2$-dimensional, as asserted in the theorem. We will do this
indirectly, by identifying $V\otimes V$ as a $4$-dimensional
vector space. Let $(M_{4},\gamma_{4})$ be the sutured solid-torus
with $4$ longitudinal sutures, as described above, so that
$\SHI(M_{4},\gamma_{4})=V$. Let $(M,\gamma)$ be two disjoint
copies of $(M_{4},\gamma_{4})$,
so that
\[
\SHI(M,\gamma) = V\otimes V.
\]
We can describe an admissible closure of $(M,\gamma)$ (with a
disconnected $T$ as in section~\ref{subsec:disconnected-T}) by
taking $T$ to be four annuli: we attach $[-1,1]\times T$ to
$(M,\gamma)$ to form $\bar{M}$ so that $\bar{M}$ is $\Sigma\times
S^{1}$ with $\Sigma$ a four-punctured sphere. Thus
$\partial\bar{M}$ consists of four tori, two of which belong to
$\bar{R}_{+}$ and two to $\bar{R}_{-}$. The closure $(Y,\bar{R})$
is obtained by gluing the tori in pairs; and this can be done so
that $Y$ has the form $\Sigma_{2}\times S^{1}$, where $\Sigma_{2}$
is now a closed surface of genus $2$. The surface $\bar{R}$ in
$\Sigma_{2}\times S^{1}$ has the form $\gamma\times S^{1}$, where
$\gamma$ is a union of two disjoint closed curves in independent
homology classes. The line bundle $w$ has $c_{1}(w)$ dual to
$\gamma'$, where $\gamma'$ is a curve on $\Sigma_{2}$ dual to one
component of $\gamma$.
Thus we can identify $V\otimes V$ with the generalized eigenspace
of $\mu(y)$
belonging to the eigenvalue $+2$ in the Floer homology
$I_{*}(\Sigma_{2}\times S^{1})_{w}$,
\begin{equation}
\label{eq:VVisSigma2}
V\otimes V = I_{*}(\Sigma_{2} \times S^{1})_{w,+2},
\end{equation}
where $w$ is dual to a curve
lying on $\Sigma_{2}$. Our next task is therefore to identify this
Floer homology group. This was done (in slightly different
language) by Braam and Donaldson
\cite[Proposition~1.15]{Braam-Donaldson}. The
main point is to identify the relevant representation variety in
$\bonf(Y)_{w}$, for which we quote:
\begin{lemma}[{\cite{Braam-Donaldson}}]
\label{lem:Sigma2-calc}
For $Y=\Sigma_{2}\times S^{1}$ and $w$ as above,
the critical-point set of the Chern-Simons functional in
$\bonf(Y)_{w}$ consists of two disjoint $2$-tori. Furthermore,
the Chern-Simons functional is of Morse-Bott type along its
critical locus. \qed
\end{lemma}
To continue the calculation, following
\cite{Braam-Donaldson}, it now follows from the
lemma that $I_{*}(\Sigma_{2}\times S^{1})_{w}$ has dimension at most
$8$ and that the even and odd parts of this Floer group, with
respect to the relative mod 2 grading, have equal dimension:
each at most $4$. On the other hand, the group
$I_{*}(\Sigma_{2}\times S^{1}| \Sigma_{2})_{w}$ is non-zero.
So the generalized eigenspaces belonging to the
eigenvalue-pairs $((-1)^{r}2, i^{r}2)$, for $r=0,1,2,3$, are
all non-zero. Indeed, each of these generalized eigenspaces is
$1$-dimensional, by Proposition~7.9 of \cite{KM-sutures}.
These four 1-dimensional generalized eigenspaces all belong
to the same relative mod-2 grading. It follows that
$I_{*}(\Sigma_{2}\times S^{1})_{w}$ is 8-dimensional, and can
be identified as a vector space with the homology of the
critical-point set. The generalized eigenspace belonging to
$+2$ for the operator $\mu(y)$ is therefore $4$-dimensional;
and this is $V\otimes V$. This completes the argument.
\end{proof}
\subsection{Tracking the mod 2 grading}
Because we wish to examine the Euler characteristics, we need to know
how the canonical mod 2 grading behaves under the maps in
Theorem~\ref{thm:skein}. This is the content of the next lemma.
\begin{lemma}\label{lem:mod-2-sequence}
In the situation of Theorem~\ref{thm:skein}, suppose that the link
$K_{+}$ is null-homologous (so that $K_{-}$ and $K_{0}$ are
null-homologous also). Let $\Sigma_{+}$ be a Seifert surface for
$K_{+}$, and let $\Sigma_{-}$ and $\Sigma_{0}$ be Seifert surfaces
for the other two links, obtained from $\Sigma_{+}$ by a
modification in the neighborhood of the crossing. Equip the
instanton knot homology groups of these links with their canonical
mod $2$ gradings, as determined by the preferred closures arising
from these Seifert surfaces.
Then in the first case of the two cases of the
theorem, the map from $\KHI(Y,K_{-})$ to $\KHI(Y,K_{0})$ in the
sequence \eqref{eq:skein-first} has odd degree, while the other
two maps have even degree, with respect to the canonical mod 2
grading.
In the second case, if we grade the 4-dimensional vector space
$V\otimes V$ by identifying it with $I_{*}(\Sigma_{2}\times
S^{1})_{w,+2}$ as in \eqref{eq:VVisSigma2}, then the map from
$\KHI(Y,K_{0})\otimes V^{\otimes 2}$ to $\KHI(Y,K_{+})$
in \eqref{eq:skein-second}
has odd degree, while the other
two maps have even degree.
\end{lemma}
\begin{proof}
We begin with the first case. Let $Z_{+}$ be the preferred
closure of the sutured knot complement $(M_{+},\gamma_{+})$
obtained from the knot $K_{+}$, as defined by
\eqref{eq:special-closure}. In the notation of the proof of
Theorem~\ref{thm:skein}, the curve $\lambda$ lies in $Z_{+}$. Let
us write $Z_{-}$ and $Z_{0}$ for the manifolds obtained from
$Z_{+}$ by $-1$-surgery and $0$-surgery on $\lambda$ respectively.
It is a straightforward observation that $Z_{-}$ and $Z_{0}$ are
respectively the preferred closures of the sutured complements of
the links $K_{-}$ and $K_{0}$. The surgery cobordism $W$ from
$Z_{+}$ to $Z_{-}$ gives rise to the map from $\KHI(Y,K_{+})$ to
$\KHI(Y,K_{-})$. This $W$ has the same homology as the cylinder
$[-1,1]\times Z_{+}$ blown up at a single point. The quantity
$\iota(W)$ in \eqref{eq:iota-W} is therefore even, and it follows
that the map
\[
\KHI(Y,K_{+}) \to \KHI(Y,K_{-})
\]
has even degree. The surgery cobordism $W_{0}$ induces a map
\begin{equation}\label{eq:second-cobordism}
I_{*}(Z_{-})_{w} \to I_{*}(Z_{0})_{w}
\end{equation}
which has odd degree, by another application of \eqref{eq:iota-W}.
This concludes the proof of the first case.
In the second case of the theorem,
we still have a long exact
sequence
\[
\to I_{*}(Z_{+})_{w} \to I_{*}(Z_{-})_{w} \to
I_{*}(Z_{0})_{w} \to
\]
in which the map $I_{*}(Z_{-})_{w} \to I_{*}(Z_{0})_{w}$ is
odd and the other two are even.
However, it is no longer true that the manifold $Z_{0}$ is
the preferred closure of the sutured manifold obtained from
$K_{0}$. The manifold $Z_{0}$ can be described as being obtained
from the complement of $K_{0}$ by attaching $G_{r}\times S^{1}$,
where $G_{r}$ is a surface of genus $2$ with $r$ boundary
components. Here $r$ is the number of components of $K_{0}$, and
the attaching is done as before, so that the curves
$\partial G_{r}\times
\{q\}$ are attached to the longitudes and the curves
$\{p_{i}\}\times S^{1}$
are attached to the meridians. The \emph{preferred} closure, on
the other hand, is defined using a surface $F_{r}$ of genus
$1$, not genus $2$. We write $Z'_{0}$ for the preferred closure, and our
remaining task is to compare the instanton Floer homologies of
$Z_{0}$ and $Z'_{0}$, with their canonical $\Z/2$ gradings.
An application of Floer's excision theorem provides an
isomorphism
\[
I_{*}(Z_{0})_{w,+2} \to I_{*}(Z'_{0})_{w,+2} \otimes
I_{*}(\Sigma_{2}\times S^{1})_{w,+2}
\]
where (as before) the class $w$ in the last term is dual to a
non-separating curve in the genus-2 surface $\Sigma_{2}$.
\begin{figure}
\caption{The excision cobordism from $G_{r}\times S^{1}$ to
$(F_{r}\amalg \Sigma_{2})\times S^{1}$, with the $S^{1}$ factor
suppressed.}
\label{fig:F-and-G}
\end{figure}
(See
Figure~\ref{fig:F-and-G} which depicts the excision cobordism
from $G_{r}\times S^{1}$ to $(F_{r}\amalg \Sigma_{2})\times
S^{1}$, with the $S^{1}$ factor suppressed.) The
isomorphism is realized by an explicit cobordism $W$, with
$\iota(W)$ odd, which accounts for the difference between the
first and second cases and concludes the proof.
\end{proof}
\subsection{Tracking the eigenspace decomposition}
The next lemma is similar in spirit to Lemma~\ref{lem:mod-2-sequence},
but deals with eigenspace decomposition rather than the mod $2$
grading.
\begin{lemma}\label{lem:eigenspace-sequence}
In the situation of Theorem~\ref{thm:skein}, suppose again that
the links
$K_{+}$, $K_{-}$ and $K_{0}$ are
null-homologous. Let $\Sigma_{+}$ be a Seifert surface for
$K_{+}$, and let $\Sigma_{-}$ and $\Sigma_{0}$ be Seifert surfaces
for the other two links, obtained from $\Sigma_{+}$ by a
modification in the neighborhood of the crossing.
Then in the first case of the two cases of the theorem, the
maps in the long exact
sequence \eqref{eq:skein-first} intertwine the three operators
$\mu^{o}([\Sigma_{+}])$, $\mu^{o}([\Sigma_{-}])$ and
$\mu^{o}([\Sigma_{0}])$. In particular then, we have a long exact
sequence
\begin{equation*}
\to \KHI(Y,K_{+},[\Sigma_{+}],j) \to
\KHI(Y,K_{-},[\Sigma_{-}],j) \to
\KHI(Y,K_{0},[\Sigma_{0}],j) \to
\end{equation*}
for every $j$.
In the second case of Theorem~\ref{thm:skein}, the maps in the
long exact sequence \eqref{eq:skein-second} intertwine the
operators $\mu^{o}([\Sigma_{+}])$ and $\mu^{o}([\Sigma_{-}])$ on
the first two terms with the operator
\[
\mu^{o}([\Sigma_{0}]) \otimes 1 +
1 \otimes \mu([\Sigma_{2}])
\]
acting on
\[
\KHI(Y,K_{0})\otimes I_{*}(\Sigma_{2}\times
S^{1})_{w,+2}\cong \KHI(Y,K_{0})\otimes V^{\otimes 2}.
\]
\end{lemma}
\begin{proof}
The operator $\mu^{o}([\Sigma])$ on the knot homology groups is
defined in terms of the action of $\mu([\bar\Sigma])$ for a
corresponding closed surface $\bar\Sigma$ in the preferred closure
of the link complement. The maps in the long exact sequences arise
from cobordisms between the preferred closures. The lemma follows
from the fact that the corresponding closed surfaces are
homologous in these cobordisms.
\end{proof}
\subsection{Proof of the main theorem}
For a null-homologous link $K \subset Y$ with a chosen Seifert surface
$\Sigma$, let us write
\[
\begin{aligned}
\chi (Y,K,[\Sigma])&= \sum_{j}
\chi(\KHI(Y,K,[\Sigma],j))t^{j} \\
&= \sum_{j} \bigl ( \dim\KHI_{0}(Y,K,[\Sigma],j) -
\dim\KHI_{1}(Y,K,[\Sigma],j)\bigr) t^{j} \\
&= \str( t^{\mu^{o}(\Sigma)/2}),
\end{aligned}
\]
where $\str$ denotes the alternating trace.
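For example, for the unknot $U\subset S^{3}$ the group $\KHI(U)$
has a single generator, lying in odd degree
(section~\ref{subsec:mod-2-grading}) and in the eigenspace $j=0$,
so
\[
\chi(U) = -t^{0} = -1.
\]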
If $K_{+}$, $K_{-}$ and $K_{0}$ are three skein-related links with
corresponding Seifert surfaces $\Sigma_{+}$, $\Sigma_{-}$ and
$\Sigma_{0}$, then Theorem~\ref{thm:skein}, Lemma~\ref{lem:mod-2-sequence} and
Lemma~\ref{lem:eigenspace-sequence} together tell us that we have the
relation
\[
\chi (Y,K_{+},[\Sigma_{+}]) - \chi (Y,K_{-},[\Sigma_{-}]) + \chi
(Y,K_{0},[\Sigma_{0}]) = 0
\]
in the first case of Theorem~\ref{thm:skein}, and
\[
\chi (Y,K_{+},[\Sigma_{+}]) - \chi (Y,K_{-},[\Sigma_{-}]) - \chi
(Y,K_{0},[\Sigma_{0}]) r(t) = 0
\]
in the second case. Here $r(t)$ is the contribution from the term
$I_{*}(\Sigma_{2}\times S^{1})_{w,+2}$, so that
\[
r(t) = \str (t^{\mu([\Sigma_{2}])/2}).
\]
From the proof of Lemma~\ref{lem:Sigma2-calc} we can read off the
eigenvalues of $\mu([\Sigma_{2}])/2$: they are $1$, $0$ and $-1$, and the
$\pm 1$ eigenspaces are each $1$-dimensional. Thus
\[
r(t) = \pm (t - 2 + t^{-1}).
\]
To determine the sign of $r(t)$, we need to know the canonical $\Z/2$
grading of (say) the $0$-eigenspace of $\mu([\Sigma_{2}])$ in
$I_{*}(\Sigma_{2}\times S^{1})_{w,+2}$. The trivial $3$-dimensional
cobordism from $T^{2}$ to $T^{2}$ can be decomposed as $N^{+}\cup
N^{-}$, where $N^{+}$ is a cobordism from $T^{2}$ to $\Sigma_{2}$ and
$N^{-}$ is a cobordism the other way. The $4$-dimensional cobordisms
$W^{\pm}= N^{\pm}\times S^{1}$ induce isomorphisms on the
$0$-eigenspace of $\mu([T^{2}])=\mu([\Sigma_{2}])$; and
$\iota(W^{\pm})$ is odd. Since the generator for $T^{3}$ is in odd
degree, we conclude that the $0$-eigenspace of $\mu([\Sigma_{2}])$ is
in even degree, and that
\[
\begin{aligned}
r(t) &= - (t - 2 + t^{-1}) \\
&= - q(t)^{2}
\end{aligned}
\]
where
\[
q(t) = (t^{1/2}-t^{-1/2}).
\]
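Explicitly, with the eigenspace dimensions and parities just
determined, the supertrace is
\[
r(t) = \str\bigl(t^{\mu([\Sigma_{2}])/2}\bigr)
     = -t + 2 - t^{-1}
     = -\bigl(t^{1/2}-t^{-1/2}\bigr)^{2},
\]
the two $(\pm 1)$-eigenspaces (each $1$-dimensional, in odd
degree) contributing $-t$ and $-t^{-1}$, and the $0$-eigenspace
(the remaining $2$ dimensions of
$I_{*}(\Sigma_{2}\times S^{1})_{w,+2}$, in even degree)
contributing $+2$.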
We can roll the two cases of Theorem~\ref{thm:skein} into one by
defining the ``normalized'' Euler characteristic as
\begin{equation}\label{eq:renormalized}
\tilde\chi(Y,K,[\Sigma]) =
q(t)^{1-r}\chi(Y,K,[\Sigma])
\end{equation}
where $r$ is the number of components of the link $K$. With this
notation we have:
\begin{proposition}
For null-homologous skein-related links $K_{+}$, $K_{-}$ and
$K_{0}$ with corresponding Seifert surfaces $\Sigma_{+}$,
$\Sigma_{-}$ and $\Sigma_{0}$, the normalized Euler
characteristics \eqref{eq:renormalized} satisfy
\[
\tilde \chi (Y,K_{+},[\Sigma_{+}]) - \tilde \chi
(Y,K_{-},[\Sigma_{-}])= (t^{1/2}-t^{-1/2})\,\tilde \chi
(Y,K_{0},[\Sigma_{0}]).
\]
\qed
\end{proposition}
In the case of classical knots and links, we may write this simply as
\[
\tilde \chi (K_{+}) - \tilde\chi
(K_{-})= (t^{1/2}-t^{-1/2})\,\tilde\chi
(K_{0}).
\]
This is exactly the skein relation of the (single-variable)
normalized Alexander polynomial
$\Delta$. The latter is
normalized so that $\Delta=1$ for the unknot, whereas our $\tilde\chi$ is
$-1$ for the unknot because the generator of its knot homology is in
odd degree. We therefore have:
\begin{theorem}
For any link $K$ in $S^{3}$, we have
\[
\tilde\chi(K) = - \Delta_{K}(t),
\]
where $\tilde\chi(K)$ is the normalized Euler characteristic
\eqref{eq:renormalized} and $\Delta_{K}$ is the Alexander polynomial
of the link with Conway's normalization.\qed
\end{theorem}
In the case that $K$ is a knot, we have $\tilde\chi(K)=\chi(K)$, which
is the case given in Theorem~\ref{thm:main} in the introduction. \qed
\begin{remark}
The equality $r(t)= - q(t)^{2}$ can be interpreted as arising from
the isomorphism
\[
I_{*}(\Sigma_{2} \times S^{1})_{w,+2} \cong V\otimes V,
\]
with the additional observation that the isomorphism between
these two is odd with respect to the preferred $\Z/2$ gradings.
\end{remark}
\section{Applications}
\label{sec:applications}
\subsection{Fibered knots}
In \cite{KM-sutures}, the authors adapted the argument of Ni \cite{Ni-A}
to establish a criterion for a knot $K$ in $S^3$ to be a fibered knot: in
particular, Corollary~7.19 of \cite{KM-sutures} states that $K$
is fibered if the following three conditions hold:
\begin{enumerate}
\item the Alexander polynomial $\Delta_{K}(T)$ is monic, in the
sense that its leading coefficient is $\pm 1$;
\item the leading coefficient occurs in degree $g$, where
$g$ is the genus of the knot; and
\item the dimension of $\KHI(K,g)$ is $1$.
\end{enumerate}
It follows from our Theorem~\ref{thm:main} that the last of these
three conditions implies the other two. So we have:
\begin{proposition}\label{prop:fibered-knot}
If $K$ is a knot in $S^{3}$ of genus $g$, then $K$ is fibered if
and only if the dimension of $\KHI(K,g)$ is $1$. \qed
\end{proposition}
\subsection{Counting representations}
We describe some applications to representation varieties associated
to
classical knots $K\subset S^{3}$. The
instanton knot homology $\KHI(K)$ is defined in terms of the preferred
closure $Z=Z(K)$ described at \eqref{eq:special-closure}, and
therefore involves the flat connections
\[
\Rep(Z)_{w} \subset \bonf(Z)_{w}
\]
in the space of connections
$\bonf(Z)_{w}$: the quotient by the determinant-1 gauge group of the
space of all $\PU(2)$ connections in $\PP(E_{w})$, where $E_{w}\to Z$
is a $U(2)$ bundle with $\det(E)=w$. If the space of
these flat connections in $\bonf(Z)_{w}$ is non-degenerate in the
Morse-Bott sense when regarded as the set of critical points of the
Chern-Simons functional, then we have
\[
\dim I_{*}(Z)_{w} \le \dim H_{*}(\Rep(Z)_{w}).
\]
The generalized eigenspace $I_{*}(Z)_{w,+2}\subset I_{*}(Z)_{w}$ has
half the dimension of the total, so
\[
\dim \KHI(K) \le \frac{1}{2} \dim H_{*}(\Rep(Z)_{w}).
\]
As explained in \cite{KM-sutures}, the representation variety
$\Rep(Z)_{w}$ is closely related to the space
\[
\Rep(K,\bi) = \{ \, \rho: \pi_{1}(S^{3}
\setminus K) \to \SU(2) \mid \rho(m) =
\bi \,\},
\]
where $m$ is a chosen meridian and
\[
\bi =
\begin{pmatrix}
i & 0 \\ 0 & -i
\end{pmatrix}.
\]
More particularly, there is a two-to-one covering map
\begin{equation}\label{eq:covering}
\Rep(Z)_{w} \to \Rep(K,\bi).
\end{equation}
The circle subgroup $\SU(2)^{\bi}\subset \SU(2)$ which stabilizes $\bi$ acts on
$\Rep(K,\bi)$ by conjugation. There is a unique reducible element in
$\Rep(K,\bi)$ which is fixed by the circle action; the remaining
elements are irreducible and have stabilizer $\pm 1$. The most
non-degenerate situation that can arise, therefore, is that
$\Rep(K,\bi)$ consists of a point (the reducible) together with
finitely many circles, each of which is Morse-Bott. In such a case,
the covering \eqref{eq:covering} is trivial. As in
\cite{KM-knot-singular}, the corresponding non-degeneracy condition at
a flat connection $\rho$ can be interpreted as the condition that the
map
\[
H^{1}(S^{3}\setminus K; \g_{\rho}) \to H^{1}(m ;
\g_{\rho}) = \R
\]
is an isomorphism. Here $\g_{\rho}$ is the local system on the knot
complement with fiber $\su(2)$, associated to the representation
$\rho$. We therefore have:
\begin{corollary}
Suppose that the representation variety $\Rep(K,\bi)$ associated
to the complement of a classical knot $K\subset S^{3}$ consists of
the reducible representation and $n(K)$ conjugacy classes of
irreducibles, each of which is non-degenerate in the above sense.
Then
\[
\dim \KHI(K) \le 1 + 2n(K).
\]
\end{corollary}
\begin{proof}
Under the given hypotheses, the representation variety
$\Rep(K,\bi)$ is a union of a single point and $n(K)$ circles. Its
total Betti number is therefore $1 + 2 n(K)$. The representation
variety $\Rep(Z)_{w}$ is a trivial double cover \eqref{eq:covering},
so the total Betti number of $\Rep(Z)_{w}$ is twice as large, $2 +
4n(K)$. The Morse-Bott inequality
$\dim I_{*}(Z)_{w} \le \dim H_{*}(\Rep(Z)_{w})$, together with the
fact that $\KHI(K)$ has half the dimension of $I_{*}(Z)_{w}$, then
gives $\dim \KHI(K) \le 1 + 2n(K)$.
\end{proof}
Combining this with Corollary~\ref{cor:alexander-vs-rank}, we obtain:
\begin{corollary}
Under the hypotheses of the previous corollary, we have
\[
\sum_{j=-d}^{d} |a_{j}| \le 1 + 2n(K)
\]
where the $a_{j}$ are the coefficients of the Alexander
polynomial. \qed
\end{corollary}
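Read in the contrapositive, and granting the non-degeneracy hypotheses, the inequality forces $n(K)\ge(\sum_j|a_j|-1)/2$. A small illustrative computation (the coefficient data are those of the figure-eight knot):

```python
import math

def min_irreducibles(coeffs):
    """Lower bound on n(K) forced by sum|a_j| <= 1 + 2 n(K), valid
    under the non-degeneracy hypotheses of the corollary."""
    total = sum(abs(a) for a in coeffs.values())
    return math.ceil((total - 1) / 2)

# figure-eight knot 4_1: Delta(T) = -T + 3 - T^{-1}
print(min_irreducibles({1: -1, 0: 3, -1: -1}))   # 2
```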
Among all the irreducible elements of $\Rep(K,\bi)$, we can
distinguish the subset consisting of those $\rho$ whose image is
binary dihedral: contained, that is, in the normalizer of a circle
subgroup whose infinitesimal generator $J$ satisfies
$\mathrm{Ad}(\bi)(J)=-J$.
If $n'(K)$ denotes the number of such
irreducible binary dihedral representations, then one has
\[
| \det(K) | = 1 + 2n'(K)
\]
(see \cite{Klassen}). On the other hand, the determinant
$\det(K)$ can also be computed as the value of the Alexander
polynomial at $-1$: the alternating sum of the coefficients. Thus we
have:
\begin{corollary}
Suppose that the Alexander polynomial of $K$ fails to be
alternating, in the sense that
\[
\left| \sum_{j=-d}^{d} (-1)^{j} a_{j} \right|
< \sum_{j=-d}^{d} | a_{j}|.
\]
Then either $\Rep(K,\bi)$ contains some representations that
are not binary dihedral, or some of the binary-dihedral
representations are degenerate as points of this
representation variety. \qed
\end{corollary}
This last corollary is nicely illustrated by the torus knot
$T(4,3)$. This knot is the first non-alternating knot in Rolfsen's
tables \cite{Rolfsen}, where it appears as $8_{19}$. The Alexander
polynomial of $8_{19}$ is not alternating in the sense of the
corollary; and as the corollary suggests, the representation variety
$\Rep(8_{19}, \bi)$ contains representations that are not binary
dihedral. Indeed, there are representations whose image is the binary
octahedral group in $\SU(2)$.
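The numerical check is immediate from the standard symmetrized Alexander polynomial of $T(4,3)$: one finds $|\Delta(-1)|=\det(K)=3$, so $n'(K)=1$, while $\sum_j|a_j|=5$. A short Python verification:

```python
# Symmetrized Alexander polynomial of 8_19 = T(4,3):
#   Delta(T) = T^3 - T^2 + 1 - T^{-2} + T^{-3}
coeffs = {3: 1, 2: -1, 0: 1, -2: -1, -3: 1}

det_K = abs(sum(a * (-1) ** (j % 2) for j, a in coeffs.items()))  # |Delta(-1)|
abs_sum = sum(abs(a) for a in coeffs.values())
n_prime = (det_K - 1) // 2      # number of binary dihedral irreducibles

print(det_K, abs_sum, n_prime)  # 3 5 1: the strict inequality holds
```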
\end{document} |
\begin{document}
\title{Characterization of Strict Positive Definiteness on products of complex spheres}
\author{Mario H. Castro$^a$, Eugenio Massa$^b$, and Ana Paula Peron$^b$.
\\\scriptsize{$^a$Departamento de Matem\'{a}tica, UFU - Uberlândia (MG), Brazil. }\\ \scriptsize{
$^b$Departamento de Matem\'{a}tica, ICMC-USP - S\~{a}o Carlos, Caixa Postal 668, 13560-970 S\~{a}o Carlos (SP), Brazil.}\\ \scriptsize{
e-mails: [email protected],\,\,\,\, [email protected],\,\,\,\, [email protected].
}
}
\maketitle
\begin{abstract}
In this paper we consider Positive Definite functions on products $\Omega_{2q}\times\Omega_{2p}$ of complex spheres, and we obtain a condition, in terms of the coefficients in their disc polynomial expansions, which is necessary and sufficient for the function to be Strictly Positive Definite. The result also includes the more delicate cases in which $p$ and/or $q$ can be $1$ or $\infty$.
The condition we obtain states that a suitable set in ${\mathbb{Z}}^2$, containing the indexes of the strictly positive coefficients in the expansion, must intersect every product of arithmetic progressions.
\\
\noindent{\bf MSC:} 42A82; 42C10.
\\
\noindent{\bf Keywords:} Strictly Positive Definite Functions, Product of Complex Spheres, Generalized Zernike Polynomial.
\end{abstract}
\section{Introduction}
The main purpose of this paper is to obtain a characterization of Strictly Positive Definite functions on products of complex spheres, in terms of the coefficients in their disc polynomial expansions: these results are contained in Theorems \ref{th_main}, \ref{th_main_1p} and \ref{th_main_11}.
Positive Definiteness and Strict Positive Definiteness are important in many applications, for example, Strict Positive Definiteness is required in certain interpolation problems in order to guarantee the unicity of their solution.
From a theoretical point of view, the problem of characterizing both Positive Definiteness and Strict Positive Definiteness has been considered in many recent papers, in different contexts. More details on the applications and the literature related to this problem will be given in Section \ref{sec_liter}.
\par
Let $\varOmega$ be a nonempty set. A kernel $K: \varOmega \times \varOmega \to {\mathbb{C}}$ is called {\em Positive Definite} (PD in the following) on $\varOmega$ when
\begin{equation}\label{eq-quad-form-geral}
\sum_{\mu,\nu=1}^L c_\mu\overline{c_\nu} K(x_\mu,x_\nu) \geq 0,
\end{equation}
for any $L\ge1$, $c=(c_1,\ldots,c_L) \in {\mathbb{C}}^L$ and any subset $X:=\{x_1,\ldots,x_L\} $ of distinct points in $\varOmega$. Moreover, $K$ is {\em Strictly Positive Definite} (SPD in the following) when it is Positive Definite and the inequality above is strict for $c\neq0$.
If $S^q$ is the $q$-dimensional unit sphere in the Euclidean space ${\mathbb{R}}^{q+1}$, we say that a continuous function $f:[-1,1] \to {\mathbb{R}}$ is PD (resp. SPD) on $S^q$, when the associated kernel $K(v,v'):=f(v\cdot_{\mathbb{R}} v')$ is PD (resp. SPD) on $S^q$ (here ``~$\cdot_{\mathbb{R}}$~" is the usual inner product in ${\mathbb{R}}^{q+1}$).
In \cite{scho-42} it was proved that
a continuous function $f$ is PD on $S^q$, $q\geq1$, if, and only if, it admits an expansion in the form
\begin{equation}\label{eq-scho}\begin{array}{ccc}
&\displaystyle f(t)=\sum_{m\in{\mathbb{Z}}_+} a_mP_m^{(q-1)/2}(t),\quad t\in[-1,1],&\\&\mbox{where $\sum a_mP_m^{(q-1)/2}(1)<\infty$ and $a_m\geq0$ for all $m\in{\mathbb{Z}}_+$. }\end{array}
\end{equation}
In \pref{eq-scho}, $P_m^{(q-1)/2}$ are the Gegenbauer polynomials of degree $m$ associated to $(q-1)/2$ (see \cite[page 80]{szego}) and ${\mathbb{Z}}_+={\mathbb{N}}\cup\pg{0}$.
In
\cite{debao-sun-valdir} it was proved that the function $f$ in \pref{eq-scho} is also SPD on $S^q$, $q\geq2$, if, and only if, the set
$
\{m\in{\mathbb{Z}}_+: a_{m}>0\}
$
contains an infinite number of odd and of even numbers. This condition is equivalent to asking that
\begin{equation}\label{eq_inters_Sd}
\{m\in{\mathbb{Z}}_+: a_{m}>0\}\cap (2{\mathbb{N}}+x)\neq \emptyset \qquad\mbox{for every $x\in{\mathbb{N}}$}.
\end{equation}
The complex case is defined in a similar way: if $\Omega_{2q}$ is the unit sphere in ${\mathbb{C}}^q$, $q\geq2,$ and $\mathbb{D}$ is the unit closed disc in ${\mathbb{C}}$,
then a continuous function $f:\mathbb{D} \to {\mathbb{C}}$ is said to be PD (resp. SPD) on $\Omega_{2q}$ if the associated kernel $K(z,z'):=f(z\cdot z')$ is PD (resp. SPD) on $\Omega_{2q}$, where ``~$\cdot$~" is the usual inner product in ${\mathbb{C}}^q$.
As proved in \cite{P-valdir-pd-esfcompl}, a continuous function $f:\mathbb{D} \to {\mathbb{C}}$ is PD on $\Omega_{2q}$, $q\geq2$, if, and only if, it admits a series representation of the form
\begin{equation}\label{eq-pd-esfq}\begin{array}{ccc}
&\displaystyle f(\xi)=\sum_{m,n\in{\mathbb{Z}}_+} a_{m,n}R_{m,n}^{q-2}(\xi),\quad \xi\in{\mathbb{D}},&\\
&\mbox{where $\sum a_{m,n}<\infty$ and $a_{m,n}\geq0$ for all $m,n\in{\mathbb{Z}}_+$.}&
\end{array}
\end{equation}
The functions $R_{m,n}^{q-2}$ in \pref{eq-pd-esfq} are the {\em disc polynomials}, or {\em generalized Zernike polynomials} (see Equation \pref{eq-def-pol-disc}).
The condition for $f$ to be SPD was obtained in \cite{meneg-jean-traira,P-valdir-complexapproach}:
$f$ as in \pref{eq-pd-esfq} is SPD on $\Omega_{2q}$ if, and only if,
the set
$
\{m-n\in{\mathbb{Z}}: a_{m,n}>0\}
$
intersects every full arithmetic progression in ${\mathbb{Z}}$, that is,
\begin{equation}\label{eq_inters_Oq}
\{m-n\in{\mathbb{Z}}: a_{m,n}>0\}\cap (N{\mathbb{Z}}+x) \neq \emptyset
\qquad\mbox{for every $N,x\in{\mathbb{N}}$.}
\end{equation}
The characterization of SPD functions on the spheres $S^1$, $\Omega_2$, $S^\infty$ and $\Omega_\infty$ was also considered in \cite{P-valdir-claudemir-spd-compl,men-spd-analysis,meneg-jean-traira}, with similar results (see also Section \ref{sec_S1}).
Products of real spheres were considered in \cite{P-jean-men-pdSMxSm, jean-men, P-jean-menS1xSm, P-jean-menS1xS1}:
a continuous PD function on $S^q\times S^p$, $q,p\geq1$ can be written as
\begin{equation}\label{eq-pd-esfSd}\begin{array}{c}
\displaystyle f(t,s)=\sum_{m,k\in{\mathbb{Z}}_+} a_{m,k}P_m^{(q-1)/2}(t)P_k^{(p-1)/2}(s),\quad t,s\in[-1,1],\\
\mbox{where $\sum a_{m,k}P_m^{(q-1)/2}(1)P_k^{(p-1)/2}(1)<\infty$ and $a_{m,k}\geq0$ for all $m,k\in{\mathbb{Z}}_+$,}
\end{array}
\end{equation}
and, for $q,p\geq2$, it is also SPD on $S^q\times S^p$ if, and only if, the following condition, obtained in \cite{jean-men}, holds true: in each intersection of the set
$
\{(m,k)\in{\mathbb{Z}}_+^2: a_{m,k}>0\}
$
with the four sets $(2{\mathbb{Z}}_++x)\times(2{\mathbb{Z}}_++y),\ x,y\in\pg{0,1}$, there exists a sequence $(m_i,k_i)$ such that $m_i,k_i\to \infty$.
In fact, this condition is equivalent to the following one:
\begin{equation}\label{eq_inters_SpSq}
\{(m,k)\in{\mathbb{Z}}_+^2: a_{m,k}>0\}\cap (2{\mathbb{N}}+x)\times(2{\mathbb{N}}+y)\neq \emptyset
\qquad\mbox{for every $x,y\in{\mathbb{N}}$.}
\end{equation}
Again, when considering $S^1$ in the place of $S^q$ and/or $S^p$, similar (but not analogous) results are obtained: see \cite{ P-jean-menS1xSm, P-jean-menS1xS1} and Section \ref{sec_S1}.
\subsection{Main results}
The purpose of this paper is to consider the same kind of problems described above for the case of the products $\Omega_{2q}\times\Omega_{2p}$ of two complex spheres.
The characterization of Positive Definiteness in this setting was obtained
in \cite[Theorem 7.1]{P-berg-porcu_gelfand} for $ q,p\in{\mathbb{N}},\;q,p\geq2$: it was proved that a continuous function $f:\mathbb{D} \times \mathbb{D} \to {\mathbb{C}}$ is PD on $\Omega_{2q}\times\Omega_{2p}$ if, and only if, it admits an expansion in the form
\begin{equation}\label{eq-pd-prod-esf-compl}
\begin{array}{ccc}
&\displaystyle f(\xi,\eta)=\sum_{m,n,k,l\in{\mathbb{Z}}_+} a_{m,n,k,l}R_{m,n}^{q-2}(\xi)R_{k,l}^{p-2}(\eta),\quad (\xi,\eta)\in{\mathbb{D}}\times{\mathbb{D}},&\\
&\mbox{where $\sum a_{m,n,k,l}<\infty$ and $a_{m,n,k,l}\geq0$ for all $m,n,k,l\in{\mathbb{Z}}_+$.}&\end{array}
\end{equation}
If $p$ and/or $q$ can take the values $1$ or $\infty$, a characterization of Positive Definiteness is also known (see in Section \ref{sec_PD_on_prod}), except for the case $p=q=\infty$, which we address in Theorem \ref{th_PDinfty}.
In fact, if we define $ R^{\infty}_{m,n}(\xi):={\xi\vphantom{\overline\xi}}^{m}\overline\xi^{n},\ \xi\in\mathbb{D}$, then the characterization \pref{eq-pd-prod-esf-compl} holds for $ q,p\in{\mathbb{N}}\cup\pg{\infty},\;q,p\geq2$.
Our main results are contained in the following theorems, where we characterize SPD functions on the product of two complex spheres $\Omega_{2q}\times\Omega_{2p}$, $q,p\in{\mathbb{N}}\cup\pg{\infty}$, in terms of the coefficients in their expansions.
\begin{theorem}\label{th_main}
Let $ q,p\in{\mathbb{N}}\cup\pg{\infty},\ q,p\geq2$. A continuous function $f:\mathbb{D} \times \mathbb{D} \to {\mathbb{C}}$, which is PD on $\Omega_{2q}\times\Omega_{2p}$, is also SPD on $\Omega_{2q}\times\Omega_{2p}$ if, and only if, considering its expansion as in \pref{eq-pd-prod-esf-compl}, the set $$J':=\pg{(m-n,k-l)\in{\mathbb{Z}}^2:\ a_{m,n,k,l}>0}$$ intersects every product of full arithmetic progressions in ${\mathbb{Z}}$,
that is,
\begin{equation}\label{eq_inters_Opq}
J'\cap (N{\mathbb{Z}}+x)\times (M{\mathbb{Z}}+y)\neq \emptyset\qquad \mbox{for every $N,M,x,y\in{\mathbb{N}}$.}
\end{equation}
\end{theorem}
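Condition \pref{eq_inters_Opq} quantifies over all progressions, so it cannot be verified from finitely many coefficients; still, for a set $J'$ given by an explicit membership rule one can probe it over a finite range of moduli and a bounded search window. A Python sketch (the predicate interface, bounds, and helper name are ours):

```python
from itertools import product

def intersects_all_progressions(J_pred, max_NM, search_radius):
    """Finite probe of the intersection condition: for every pair of
    full arithmetic progressions NZ+x, MZ+y with N, M, x, y up to
    max_NM, search for a point of J' inside a bounded window.  This is
    only a necessary-condition sketch: the actual condition involves
    infinitely many progressions and an infinite set J'."""
    for N, M, x, y in product(range(1, max_NM + 1), repeat=4):
        hit = any(J_pred(N * s + x, M * t + y)
                  for s in range(-search_radius, search_radius + 1)
                  for t in range(-search_radius, search_radius + 1))
        if not hit:
            return False
    return True

# J' = Z^2 (all coefficients positive) passes, while a set confined to
# even first coordinates already fails for the progression 2Z + 1:
assert intersects_all_progressions(lambda a, b: True, 3, 5)
assert not intersects_all_progressions(lambda a, b: a % 2 == 0, 3, 5)
```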
It is worth noting the similarities between the characterizations of SPD in the various cases described here: indeed, they can always be reduced to a condition on the intersection between a set constructed with the indexes of the strictly positive coefficients in the expansion of the function, and certain arithmetic progressions or products of them: compare the conditions (\ref{eq_inters_Sd}-\ref{eq_inters_Oq}-\ref{eq_inters_SpSq}-\ref{eq_inters_Opq}).
\par
When $p$ and/or $q$ can take the value $1$,
we obtain the following characterizations.
\begin{theorem}\label{th_main_1p}
Let $2\leq p\in{\mathbb{N}}\cup\pg{\infty}$. A continuous function $f:\partial\mathbb{D} \times \mathbb{D} \to {\mathbb{C}}$, which is PD on $\Omega_2\times\Omega_{2p}$, is also SPD on $\Omega_2\times\Omega_{2p}$ if, and only if, considering its expansion as
\begin{equation}\label{eq-pd-prod-esf-complO2p}\begin{array}{rcl}
&\displaystyle f(\xi,\eta)=\sum_{m\in{\mathbb{Z}},\,k,l\in{\mathbb{Z}}_+} a_{m,k,l}\xi^mR_{k,l}^{p-2}(\eta),\quad (\xi,\eta)\in\partial{\mathbb{D}}\times{\mathbb{D}},&\\
&\mbox{where $\sum a_{m,k,l}<\infty$ and $a_{m,k,l}\geq0$ for all $m\in{\mathbb{Z}},\,k,l\in{\mathbb{Z}}_+$,}&
\end{array}
\end{equation} the set $$\pg{(m,k-l)\in{\mathbb{Z}}^2:\ a_{m,k,l}>0}$$ intersects every product of full arithmetic progressions in ${\mathbb{Z}}$.
\end{theorem}
\begin{theorem}\label{th_main_11}
A continuous function $f:\partial \mathbb{D} \times \partial\mathbb{D} \to {\mathbb{C}}$, which is PD on $\Omega_2\times\Omega_2$, is also SPD on $\Omega_2\times\Omega_2$ if, and only if, considering its expansion as
\begin{equation}\label{eq-pd-prod-esf-complO2}\begin{array}{rcl}
&\displaystyle f(\xi,\eta)=\sum_{m,k\in{\mathbb{Z}}} a_{m,k}\xi^m\eta^k,\quad (\xi,\eta)\in\partial{\mathbb{D}}\times\partial{\mathbb{D}},&\\
&\mbox{where $\sum a_{m,k}<\infty$ and $a_{m,k}\geq0$ for all $m,k\in{\mathbb{Z}}$,}&\end{array}
\end{equation} the set $$\pg{(m,k)\in{\mathbb{Z}}^2:\ a_{m,k}>0}$$ intersects every product of full arithmetic progressions in ${\mathbb{Z}}$.
\end{theorem}
We observe that the Theorems \ref{th_main_1p} and \ref{th_main_11}
will follow immediately from the same proof as Theorem \ref{th_main}, after rewriting the expansions \pref{eq-pd-prod-esf-complO2p} and \pref{eq-pd-prod-esf-complO2} so as to be formally identical to \pref{eq-pd-prod-esf-compl} (see Lemma \ref{lm_charDD}). This is a remarkable fact considering that, in the real case, when the product involves the sphere $S^1$ (see \cite{P-jean-menS1xS1,P-jean-menS1xSm}), one had to use quite different arguments from those needed in the higher-dimensional case in \cite{jean-men}.
We remark however that Theorem \ref{th_main_11} is not new, as it is a particular case of the main result in \cite{men-gue-toro}.
\par
This paper is organized in the following way. In Section \ref{sec_liter} we discuss some further literature related to our problem.
In Section \ref{sec_teoria} we set our notation and discuss some known results that will be used later.
Theorem \ref{th_main} is proved in Section \ref{sec_proofmain}.
In Section \ref{sec_infty} we state and prove the mentioned characterization of PD functions on $\Omega_\infty\times\Omega_\infty$. Finally, Section \ref{sec_S1} is devoted to showing how one can deduce, from Theorem \ref{th_main_11}, the characterization of SPD functions on $S^1\times S^1$ proved in \cite{P-jean-menS1xS1}.
\subsection{Literature}\label{sec_liter}
Since the first results on Positive Definite functions on real spheres were obtained by Schoenberg in his seminal paper \cite{scho-42}, such functions have been found to be relevant and have been studied in several distinct areas. In fact, they are both used by researchers directly interested in applied sciences, such as geostatistics, numerical analysis, approximation theory (cf. \cite{cheney-approx-pd, CheneyLight-book, gneiting-2013, porcu-bev-gent}), and by theoretical researchers aiming at further generalizations that, along with their theoretical importance, could become useful in other practical problems.
One important motivation for characterizing Strictly Positive Definite functions comes from certain interpolation problems, where the interpolating function is generated by a Positive Definite kernel. Actually, the unicity of the solution of the interpolation problem is guaranteed only if the generating kernel is also Strictly Positive Definite (cf. \cite{light-cheney,cheney-xu}): consider, for instance, the interpolation function
$$
F(x) = \sum_{j=1}^Lc_jK(x,x_j), \quad x\in \varOmega,
$$
where $X=\{x_1,\ldots,x_L\} \subset \varOmega$ is given and $K$ is a known Strictly Positive Definite kernel in $\varOmega$; then the matrix of the system obtained from the interpolation conditions $F(x_i)=\lambda_i$, $i=1,\ldots,L$, is the matrix $[K(x_i,x_j)]$, whose determinant is positive, thus giving a unique solution for the system.
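The argument above can be made concrete with any SPD kernel; the sketch below uses the Gaussian kernel on ${\mathbb{R}}$ as a stand-in for the spherical kernels discussed in the text (node positions and data values are arbitrary):

```python
import numpy as np

# Gaussian kernel: a standard SPD kernel on R, used here in place of
# the spherical kernels of the text.
def K(x, y):
    return np.exp(-np.subtract.outer(x, y) ** 2)

x_nodes = np.array([0.0, 0.7, 1.3, 2.0])   # distinct interpolation points
lam = np.array([1.0, -2.0, 0.5, 3.0])      # prescribed values

G = K(x_nodes, x_nodes)                    # the matrix [K(x_i, x_j)]
c = np.linalg.solve(G, lam)                # unique solution since G is SPD

def F(x):
    return K(np.atleast_1d(x), x_nodes) @ c

assert np.allclose(F(x_nodes), lam)        # interpolation conditions hold
```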
In particular, the case where $\varOmega$ is a real sphere is very important in applications where one needs to
assure unicity for interpolation problems with data given on the Earth's surface (which can be identified with the real sphere $S^2$).
Also, the case where $\varOmega$ is the product of a sphere with some other set
turns out to be of particular interest for its application to geostatistical problems in space and time, whose natural domain is $S^2\times {\mathbb{R}}$ (see \cite{porcu-bev-gent} and references therein).
Immediate applications in the case of complex spheres are less obvious: we refer to \cite{P-massa-porcu-montee}, where parametric families of Positive Definite functions on complex spheres are provided. It is also worth noting that the Zernike polynomials are used in applications such as optics and optical engineering (cf. \cite{ramos-et-al-zernike-opt,torre} and references therein).
Motivated by these and other applications, several papers appeared dealing with the theoretical problem of characterizing Positive Definiteness and Strict Positive Definiteness:
along with those already mentioned in the introduction, we cite \cite{musin-multi-pd}, where a characterization of real-valued multivariate Positive Definite functions on $S^q$ is obtained, and \cite{ yaglom,hannan,men-rafaela}, where matrix-valued Positive Definite functions are investigated.
In \cite{porcu-berg}, the characterization in \cite{scho-42} is extended to the case of Positive Definite functions on the cartesian product of $S^q$ times a locally compact group $G$,
which includes the mentioned case $S^q\times {\mathbb{R}}$
and also generalizes the result obtained in \cite{P-jean-men-pdSMxSm} about Positive Definite functions on products of real spheres.
Also, the Positive Definite functions on Gelfand pairs and on products of them were characterized in \cite{P-berg-porcu_gelfand},
while those on the product of a locally compact group with $\Omega_\infty$ in \cite{P-berg-porcu-Omega-inf}.
Concerning the characterization of Strictly Positive Definite functions, we cite also the cases of compact two-point homogeneous spaces and products of them (\cite{barbosa-men, men-victor-prod-esf-esp_homg})
and the case of a torus (\cite{men-gue-toro}).
\section{Notation and known results}\label{sec_teoria}
We first give a brief introduction on the {disc polynomials} that appear in the Equations \pref{eq-pd-esfq} and \pref{eq-pd-prod-esf-compl}:
for $2\leq q\in{\mathbb{N}}$, the function $R_{m,n}^{q-2}$, defined in the disc $\mathbb{D}=\pg{\xi\in{\mathbb{C}}:|\xi|\leq 1}$, is called {\em disc polynomial} (or {\em generalized Zernike polynomial}) of degree $m$ in $\xi$ and $n$ in $\overline\xi$ associated to ${q-2}$, and can be written as (see \cite{koor-II})
\begin{equation}\label{eq-def-pol-disc}
R_{m,n}^{{q-2}}(\xi)=r^{|m-n|}\,e^{i(m-n)\phi}\,R_{\min\{m,n\}}^{({q-2},\,|m-n|)}(2r^2 -1), \quad \xi=re^{i\phi}\in\mathbb{D},\ m,n\in{\mathbb{Z}}_+,\end{equation}
where $R_{k}^{(\alpha,\beta)}$ is the usual Jacobi polynomial of degree $k$ associated to the real numbers $\alpha,\beta>-1$ and normalized by $R_{k}^{(\alpha,\beta)}(1)=1$ (see \cite[page 58]{szego}).
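For integer $q\geq2$, the definition \pref{eq-def-pol-disc} can be evaluated numerically from the standard Jacobi three-term recurrence together with the normalization $R_{k}^{(\alpha,\beta)}(1)=1$, which for integer parameters amounts to dividing by $\binom{k+\alpha}{k}$. A Python sketch (helper names are ours) that also checks the value $R_{1,1}^{0}(\xi)=2|\xi|^{2}-1$:

```python
import math

def jacobi_normalized(k, a, b, x):
    """R_k^{(a,b)}(x) = P_k^{(a,b)}(x) / P_k^{(a,b)}(1), via the
    standard three-term recurrence; here a = q-2 and b = |m-n| are
    nonnegative integers, so P_k^{(a,b)}(1) = binom(k+a, k)."""
    if k == 0:
        return 1.0
    p_prev, p = 1.0, (a + 1) + (a + b + 2) * (x - 1) / 2
    for j in range(2, k + 1):
        c1 = 2 * j * (j + a + b) * (2 * j + a + b - 2)
        c2 = (2 * j + a + b - 1) * (
            (2 * j + a + b) * (2 * j + a + b - 2) * x + a * a - b * b)
        c3 = 2 * (j + a - 1) * (j + b - 1) * (2 * j + a + b)
        p_prev, p = p, (c2 * p - c3 * p_prev) / c1
    return p / math.comb(k + a, k)

def disc_polynomial(m, n, q, xi):
    """R_{m,n}^{q-2}(xi) for integer q >= 2, following the formula
    r^{|m-n|} e^{i(m-n)phi} R_{min(m,n)}^{(q-2,|m-n|)}(2r^2 - 1)."""
    r, phi = abs(xi), math.atan2(xi.imag, xi.real)
    jac = jacobi_normalized(min(m, n), q - 2, abs(m - n), 2 * r * r - 1)
    return (r ** abs(m - n)
            * complex(math.cos((m - n) * phi), math.sin((m - n) * phi))
            * jac)

# R_{1,1}^{0}(xi) = 2|xi|^2 - 1 on the disc, and R_{m,n}^{q-2}(1) = 1:
assert abs(disc_polynomial(1, 1, 2, complex(0.5, 0)) - (-0.5)) < 1e-12
assert abs(disc_polynomial(3, 1, 4, complex(1, 0)) - 1) < 1e-12
```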
For future use we also define
\begin{eqnarray}\label{eq_Rm1inf}
R^{\infty}_{m,n}(\xi)=R^{-1}_{m,n}(\xi)&:=&{\xi\vphantom{\overline\xi}}^{m}\overline\xi^{n},\qquad \xi\in\mathbb{D}\,.
\end{eqnarray}
It is well known (see \cite{koor-II,koor-london})
that the disc polynomials, as well as those defined in \pref{eq_Rm1inf}, satisfy, for $q\in{\mathbb{N}}\cup\{\infty\}$, $\xi\in\mathbb{D}$, and $m,n\in{\mathbb{Z}}_+$,
\begin{eqnarray} \label{eq_modulo-Rmn}
&R_{m,n}^{q-2}(1) = 1, \quad
|R^{q-2}_{m,n}(\xi)| \leq 1, &\quad
\\\label{eq_propridd_Rmn}
&R_{m,n}^{q-2} (e^{i\phi}\xi) = e^{i(m-n)\phi}R_{m,n}^{q-2} (\xi),& \quad \phi\in{\mathbb{R}},
\\\label{eq_proprconj_Rmn}
&R^{q-2}_{m,n}\pt{\,\overline\xi\,}=\overline{ R^{q-2}_{m,n}(\xi)}.&
\end{eqnarray}
Observe that, by \pref{eq_modulo-Rmn}, the series in \pref{eq-pd-esfq} and \pref{eq-pd-prod-esf-compl} converge uniformly in their domain. Moreover, the characterization in \pref{eq-pd-prod-esf-compl} implies that
the functions $(\xi,\eta)\mapsto R_{m,n}^{q-2}(\xi)R_{k,l}^{p-2}(\eta)$ are PD on $\Omega_{2q}\times\Omega_{2p}$ for all $m,n,k,l\in{\mathbb{Z}}_+$ (and, by \pref{eq-pd-esfq}, the functions $\xi\mapsto R_{m,n}^{q-2}(\xi)$ are PD on $\Omega_{2q}$).
Another important property is contained in the following lemma.
\begin{lemma}\label{lm_Rto0}
If $q\in{\mathbb{N}}\cup\pg{\infty}$ and $\xi\in\mathbb{D}'=\{\xi\in{\mathbb{C}}:|\xi|< 1\}$, then
\begin{equation}\label{eq_Rto0}
\lim_{\stackrel{m+n\to\infty}{m\neq n}}R_{m,n}^{q-2}(\xi)=0\,.
\end{equation}
If $q\in{\mathbb{N}}\cup\pg{\infty}$ and $\xi=e^{i\phi}\in\partial\mathbb{D}$ then
\begin{equation}\label{eq_Reitet}
R_{m,n}^{q-2}(e^{i\phi})=e^{i(m-n)\phi}\,.
\end{equation}
\end{lemma}
For the proof of \pref{eq_Rto0} when $q\geq2$ see \cite{meneg-jean-traira}.
It is worth noting that the limit holds even without the condition ${m\neq n}$, except in the special case $q=2$ and $\xi=0$.
On the other hand, \pref{eq_Reitet} follows from (\ref{eq_modulo-Rmn}-\ref{eq_propridd_Rmn}).\\
\subsection{Positive Definiteness on complex spheres}\label{sec_PD_on_sing}
As we anticipated in the introduction, it is known by \cite{P-valdir-pd-esfcompl} that a continuous function $f:\mathbb{D}\to{\mathbb{C}}$ is PD on $\Omega_{2q}$, $2\leq q\in{\mathbb{N}}$, if, and only if, the coefficients $a_{m,n}$ in the series representation \pref{eq-pd-esfq} satisfy $\sum a_{m,n}<\infty$ and $a_{m,n}\geq0$ for all $m,n\in{\mathbb{Z}}_+$.
In the case of the complex sphere $\Omega_2$, when associating a continuous function $f$ to a kernel via the formula $K(z,z'):=f(z\cdot z')$, one has $z\cdot z'\in\partial\mathbb{D}$ for every $z,z'\in\Omega_2$, so it becomes natural to consider functions $f$ defined on $\partial\mathbb{D}$.
The PD functions on $\Omega_2$ were also characterized in
\cite{P-valdir-pd-esfcompl}, namely, $f:\partial\mathbb{D}\to {\mathbb{C}}$ is PD on $\Omega_2$ if, and only if,
\begin{equation}\label{eq-pd-prod1}\begin{array}{ccc}
&\displaystyle f(\xi)=\sum_{m\in{\mathbb{Z}}} a_{m}\xi^m,\quad \xi\in{\partial \mathbb{D}},&\\
&\mbox{where $\sum a_{m}<\infty$ and $a_{m}\geq0$ for all $m\in{\mathbb{Z}}$.}&\end{array}
\end{equation}
In order to write this formula as \pref{eq-pd-esfq}, and then to be able to use the same expansion for all $q\in{\mathbb{N}}$, we use the polynomials $R_{m,n}^{-1}$ defined in \pref{eq_Rm1inf} and we rearrange the coefficients in \pref{eq-pd-prod1} so that
\begin{equation}\label{eq-pd-prod1mn}
f(\xi)
=\sum_{m,n\in{\mathbb{Z}}_+} a_{m,n}R_{m,n}^{-1}(\xi),\quad \xi\in\partial\mathbb{D},
\end{equation}
with the additional requirement that $a_{m,n}=0$ if $mn>0$, and setting
$$\begin{cases}
a_{m,0}:=a_m,&m\geq 0,\\ a_{0,m}:=a_{-m},&m\geq 0.
\end{cases}$$
In this way, $f$ is PD on $\Omega_2$ if, and only if, it satisfies the characterization \pref{eq-pd-esfq} with $a_{m,n}=0 $ for $mn>0$ and $\partial\mathbb{D}$ in the place of $\mathbb{D}$.
The complex sphere $\Omega_\infty$ is defined as the sphere of sequences of unit norm in the complex Hilbert space $\ell^2({\mathbb{C}})$.
In \cite{chris-ressel-pd}, it was proved that a continuous function $f:{\mathbb{D}}\to {\mathbb{C}}$ is PD on $\Omega_\infty$ if, and only if, it admits the series representation
\begin{equation}\label{eq-pd-esfi}\begin{array}{ccc}
&\displaystyle f(\xi)=\sum_{m,n\in{\mathbb{Z}}_+} a_{m,n}{\xi\vphantom{\overline\xi}}^m\overline{\xi}^n,\quad \xi\in{\mathbb{D}},&\\
&\mbox{where $\sum a_{m,n}<\infty$ and $a_{m,n}\geq0$ for all $m,n\in{\mathbb{Z}}_+$,}&\end{array}
\end{equation}
which becomes analogous to the characterization \pref{eq-pd-esfq} if we use the definition of $R_{m,n}^\infty$ in \pref{eq_Rm1inf}.
It is also worth noting that $f$ is PD on $\Omega_\infty$ if, and only if, $f$ is PD on $\Omega_{2q}$ for every $q\geq2$.
\subsection{Positive Definiteness on products of spheres}
\label{sec_PD_on_prod}
From now on, in order to simplify the exposition, we will use the symbol $\mathbb{D}D$ to designate either $\partial \mathbb{D}$ or $\mathbb{D}$, depending on whether we are considering, respectively, the sphere $\Omega_2$ or a higher dimensional sphere.
When considering products of spheres $\Omega_{2q}\times\Omega_{2p}$, $ q,p\in{\mathbb{N}}\cup\pg\infty$, a continuous function $f:\mathbb{D}D\times\mathbb{D}D \to {\mathbb{C}}$ is said to be PD (resp. SPD) on $\Omega_{2q}\times\Omega_{2p}$ if the associated kernel
\begin{equation}\label{eq-Kfromfprod}
K:[\Omega_{2q}\times\Omega_{2p}]\times[\Omega_{2q}\times\Omega_{2p}]\ni (\,(z,w),(z',w')\,)\mapsto f(z\cdot z',w\cdot w')
\end{equation}
is PD (resp. SPD) on $\Omega_{2q}\times\Omega_{2p}$.
In this section we will justify the following claim:
\begin{lemma}\label{lm_charDD}
A continuous function $f:\mathbb{D}D \times \mathbb{D}D \to {\mathbb{C}}$ is PD on $\Omega_{2q}\times\Omega_{2p}$, $ q,p\in{\mathbb{N}}\cup\pg\infty$, if and only if, it admits an expansion in the form
\begin{equation}\label{eq-pd-prod-esf-compl_DD}
\begin{array}{ccc}
&\displaystyle f(\xi,\eta)=\sum_{m,n,k,l\in{\mathbb{Z}}_+} a_{m,n,k,l}R_{m,n}^{q-2}(\xi)R_{k,l}^{p-2}(\eta),\quad (\xi,\eta)\in{\mathbb{D}D}\times{\mathbb{D}D},&\\
&\mbox{where $\sum a_{m,n,k,l}<\infty$ and $a_{m,n,k,l}\geq0$ for all $m,n,k,l\in{\mathbb{Z}}_+$,}&\end{array}
\end{equation}
adding the requirement that $a_{m,n,k,l}=0$ if $q=1$ and $mn>0$ (resp. $p=1$ and $kl>0$).
\end{lemma}
Lemma \ref{lm_charDD} is a generalization of the characterization \pref{eq-pd-prod-esf-compl} to include the cases when $q,p$ can take the values $1$ or $\infty$, replacing $\mathbb{D}$ with $\mathbb{D}D$ and redefining the coefficients in the series, where $p$ or $q$ is $1$, as we did in Equation \pref{eq-pd-prod1mn}.
In order to justify the claim, we will use results from \cite{P-berg-porcu_gelfand} and \cite{P-berg-porcu-Omega-inf}, which are stated in a more general setting.
Let $U(p)$ be the locally compact group of the unitary $p\times p$ complex matrices. A continuous
function
$\widetilde \Phi:U(p)\to {\mathbb{C}}$
is called Positive Definite on $U(p)$
if the kernel
$(A,B)\mapsto\widetilde\Phi(B^{-1}A)$ is
Positive Definite on $U(p)$ {(see \cite[page 87]{Berg})}.
The following remark will be useful to translate from this setting to the case of complex spheres in which we are interested
(see also \cite[Section 6]{P-berg-porcu_gelfand}).
\begin{remark}\label{rem_U_Om}
Let $\Phi:\mathbb{D}D\to{\mathbb{C}}$ and $\widetilde \Phi:U(p)\to{\mathbb{C}}$ be related by $\widetilde\Phi(A)=\Phi(Ae_p\cdot e_p)$, where $e_p= (1,0,\ldots,0)\in \Omega_{2p}$.
Then $\widetilde\Phi(A)$ depends only on the upper-left element $[A]_{1,1}$ and it can be seen by the definition of Positive Definiteness that $\widetilde\Phi$ is PD on $U(p)$ if, and only if, $\Phi$ is PD on $\Omega_{2p}$.
Moreover, $\widetilde\Phi$ is continuous if, and only if, $\Phi$ is, since $M:U(p)\to \mathbb{D}D:A\mapsto [A]_{1,1}$ is continuous and admits a continuous right
inverse $$M^-:\mathbb{D}D\to U(p):\xi\mapsto M^-(\xi)\ \text{ such that }\ [M^-(\xi)]_{1,1}=\xi\,.$$ \end{remark}
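For $p\geq2$, one explicit choice of such a continuous right inverse is a $2\times2$ block completed by the identity; a numerical sketch (the block construction is ours, not taken from the cited references):

```python
import numpy as np

def M_minus(xi, p):
    """A unitary p x p matrix whose (1,1) entry is xi (|xi| <= 1),
    depending continuously on xi: a 2 x 2 block plus the identity."""
    s = np.sqrt(max(0.0, 1.0 - abs(xi) ** 2))
    A = np.eye(p, dtype=complex)
    A[0, 0], A[0, 1] = xi, -s
    A[1, 0], A[1, 1] = s, np.conjugate(xi)
    return A

A = M_minus(0.3 + 0.4j, 3)
assert np.allclose(A.conj().T @ A, np.eye(3))   # A is unitary
assert np.isclose(A[0, 0], 0.3 + 0.4j)          # prescribed [A]_{1,1}
```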
Now Lemma \ref{lm_charDD} is obtained as follows:
\begin{enumerate}
\item When $ q,p\in{\mathbb{N}},\;q,p\geq2$, the lemma is exactly the characterization \pref{eq-pd-prod-esf-compl}.
\item When $q=1$ and $p\in{\mathbb{N}}$ (or vice-versa) we can use Corollary 3.5 in \cite{P-berg-porcu_gelfand}, observing that we can identify functions on $\Omega_2$ with periodic functions on ${\mathbb{R}}$, and we can take the locally compact group $L=U(p)$, obtaining a characterization for PD functions on $\Omega_2\times U(p)$. Then we can translate the characterization from $U(p)$ to $\Omega_{2p}$, using Remark \ref{rem_U_Om}.
\item When $q=\infty$ and $p\in{\mathbb{N}}$ (or vice-versa) we can use Theorem 1.3 in \cite{P-berg-porcu-Omega-inf}, taking the locally compact group $L=U(p)$ and proceeding as above.
\item When $q=p=\infty$ the claim is a consequence of Theorem \ref{th_PDinfty} in Section \ref{sec_infty}.
\end{enumerate}
\section{Proof of the main results}\label{sec_proofmain}
In the following we
will need to consider matrices whose elements are described by many indexes: for this we will write
$$
\pq{b_{i,j,k,l,...}}_{i=1,..,I,\; j=1,..,J,\; ...}^{k=1,..,K,\;l=1,..,L,\; ...}\,,
$$
where the indexes in the lower line are intended to be row indexes and those in the upper line are column indexes.
Also, we will specify the indexes alone when their ranges are clear.
Let $q,p\in {\mathbb{N}}\cup\pg{\infty}$. From \pref{eq-quad-form-geral} and \pref{eq-Kfromfprod}, the definition of Positive Definiteness on $\Omega_{2q}\times\Omega_{2p}$, for a
continuous function $f:\mathbb{D}D\times\mathbb{D}D\to {\mathbb{C}}$, takes the form
\begin{equation}\label{eq-quad-form}
\sum_{\mu,\nu=1}^Lc_\mu\overline{c_\nu}f(z_\mu\cdot z_\nu, w_\mu\cdot w_\nu) \geq0
\end{equation}
for all $L\geq1$, $(c_1,c_2,\ldots,c_L)\in{\mathbb{C}}^L$ and $X=\{(z_1,w_1),(z_2,w_2),\ldots,(z_L,w_L)\}\subset\Omega_{2q}\times\Omega_{2p}$.
As a consequence, if we define the matrix $A_X$ associated to the function $f$ and to the set $X$ by
\begin{equation}\label{eq-def-AX}
A_X:= [f(z_\mu\cdot z_\nu, w_\mu\cdot w_\nu)]^{\mu=1,\ldots, L}_{\nu=1,\ldots, L}\,,
\end{equation}
then:
\begin{itemize}
\item
$f$ is PD if, and only if, for every choice of $L$, $X$, and $c^t = (c_1,c_2,\ldots,c_L)$,
$$
\overline c^t A_X c \geq 0,
$$
that is, $A_X$ is a Hermitian and positive semidefinite matrix (see \cite[page 430]{horn-joh-matrix});
\item $f$ is also SPD if, and only if, for every choice of $L$ and $X$,
$$
\overline c^t A_X c = 0 \Longleftrightarrow c=0,
$$
that is, $A_X$ is a positive definite matrix.
\end{itemize}
Let now $f$ be a continuous function, PD on $\Omega_{2q}\times\Omega_{2p}$, which we can write uniquely as in Lemma \ref{lm_charDD}.
If we define the set
\begin{equation}\label{eq_defJ}
J=\pg{(m,n,k,l)\in{\mathbb{Z}}_+^4:\ a_{m,n,k,l}>0}\,,
\end{equation}
then, for a finite set $X=\{(z_1,w_1),(z_2,w_2),\ldots,(z_L,w_L)\}\subseteq\Omega_{2q}\times\Omega_{2p}$, we can write
\begin{equation}\label{eq_AxsumBx}
A_X=\sum_{(m,n,k,l)\in J} a_{m,n,k,l} B_X^{m,n,k,l}
\end{equation} where
\begin{equation}\label{eq_defBX}
B_X^{m,n,k,l}:= [R^{q-2}_{m,n}(z_\mu\cdot z_\nu)\,R^{p-2}_{k,l}( w_\mu\cdot w_\nu)]_{\nu=1,\ldots, L}^{\mu=1,\ldots, L}
\end{equation}
is the positive semidefinite matrix associated to $X$ and to the function $R_{m,n}^{q-2}(\xi)R_{k,l}^{p-2}(\eta)$.
With these definitions, the following lemma holds.
\begin{lemma}\label{lm_Ax_sistBx}
The matrix $A_X$ is a positive definite matrix if, and only if, the equivalence
\begin{equation}\label{eq_sist_iff_B}
\overline c^t B_X^{m,n,k,l} c = 0\ \ \forall\ (m,n,k,l)\in J\quad \Longleftrightarrow \quad c=0
\end{equation}
holds true.
\end{lemma}
Lemma \ref{lm_Ax_sistBx} is a consequence of the following one.
\begin{lemma}
Let $A= \sum_jA_j$, where the $A_j$ are positive semidefinite matrices. Then $A$ is positive semidefinite, and $A$ is positive definite if, and only if,
$$\overline c^tA_jc= 0 \ \ \forall j \quad \Longleftrightarrow \quad c=0\,. $$
\end{lemma}
\begin{proof}
First, $\overline c^tAc=\sum_j\overline c^tA_jc\geq 0$, so $A$ is positive semidefinite too.
\\If $A$ is positive definite and $\overline c^tA_jc=0$ for every $j$, then of course $\overline c^tAc=0$ and so $c=0$.
\\Finally, if $\overline c^tAc=0$ then, being a sum of nonnegative terms, $\overline c^tA_jc=0$ for every $j$; if this system implies $c=0$, then $A$ is positive definite.
\end{proof}
In the following proposition we prove one of the two implications of Theorem \ref{th_main}.
\begin{prop}\label{th_spd->progr}
Let $q,p\in{\mathbb{N}}\cup\pg{\infty}$, $f$ be a continuous function which is PD on $\Omega_{2q}\times\Omega_{2p}$ and consider
\begin{equation}\label{eq_defJ'}
J'=\pg{(m-n,k-l)\in{\mathbb{Z}}^2:\ (m,n,k,l)\in J}\,.
\end{equation}
If $f$ is SPD on $\Omega_{2q}\times\Omega_{2p}$ then
\begin{equation}\label{eq_inters}
J'\cap (N{\mathbb{Z}}+x)\times (M{\mathbb{Z}}+y)\neq \emptyset \ \text{ for every $N,M,x,y\in{\mathbb{N}}\,.$}
\end{equation}
\end{prop}
\begin{proof}
Assume $J'\cap (N{\mathbb{Z}}+x)\times (M{\mathbb{Z}}+y)= \emptyset$ for some $N,M,x,y\in{\mathbb{N}}$. Without loss of generality we may assume $M,N\geq2$.
\\
Fix a point $(z,w)\in \Omega_{2q}\times\Omega_{2p}$ and take the set of points $$X=\pg{(e^{i2\pi \tau/N}z,e^{i2\pi \sigma/M}w)\in\Omega_{2q}\times\Omega_{2p}:\ \tau=1,\ldots,N,\,\sigma=1,\ldots,M}\,;$$
then, using the Equations (\ref{eq_modulo-Rmn}-\ref{eq_propridd_Rmn}), the matrix in \pref{eq_defBX} reads as
$$B_X^{m,n,k,l}=\pq{e^{i2\pi (m-n)(\tau-\lambda)/N}e^{i2\pi (k-l)(\sigma-\zeta) /M}}^{\tau=1,\ldots,N,\,\,\sigma=1,\ldots,M}_{\lambda=1,\ldots,N,\,\,\zeta=1,\ldots,M}. $$
Observe that this matrix factors as the product $B_X^{m,n,k,l}=\overline b^tb$, where $b$ is the row vector $$b=\pq{e^{i2\pi (m-n)\tau/N}e^{i2\pi (k-l)\sigma /M}}^{\tau,\sigma}$$
(we omit the dependence on $X$ and $\pt{m,n,k,l}$ in the notation for $b$).
Then each equation of the system in \pref{eq_sist_iff_B} reads as $\overline c^tB_X^{m,n,k,l}c=\overline{ c}^t\overline b^tbc=0$ and is equivalent to $bc=0$.
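Indeed, from the factorization one gets
$$\overline c^tB_X^{m,n,k,l}c=\overline{c}^t\overline b^tbc=\overline{(bc)}\,(bc)=|bc|^2\,,$$
so the quadratic form vanishes precisely when $bc=0$.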
At this point we take $c=\pq{e^{-i2\pi \tau x/N}e^{-i2\pi \sigma y /M}}_{\tau,\sigma}$, so that
\begin{equation}\label{eq_bcsum}
bc=\sum_{\tau,\sigma} e^{i2\pi (m-n-x)\tau/N}e^{i2\pi (k-l-y)\sigma /M}=\sum_{\tau} e^{i2\pi (m-n-x)\tau/N}\sum_{\sigma}e^{i2\pi (k-l-y)\sigma /M}\,.
\end{equation}
By our assumption, for every $\pt{m,n,k,l}\in J$, either $m-n-x$ is not a multiple of $N$ or $k-l-y$ is not a multiple of $M$. This implies that one of the two sums in \pref{eq_bcsum} is zero and then $bc=0$.
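Here we used the classical geometric-sum identity: for $r\in{\mathbb{Z}}$ and $N\geq2$,
$$\sum_{\tau=1}^{N} e^{i2\pi r\tau/N}=\begin{cases} N & \text{if $N$ divides $r$,}\\ 0 & \text{otherwise,}\end{cases}$$
applied with $r=m-n-x$ (and, for the second sum, with $r=k-l-y$ and $N$ replaced by $M$).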
\\Then $c$ is a nontrivial solution of the system in \pref{eq_sist_iff_B}. We have thus proved that $J'\cap (N{\mathbb{Z}}+x)\times (M{\mathbb{Z}}+y)= \emptyset$ implies that $f$ is not SPD.
\end{proof}
The rest of this section is dedicated to proving the following proposition, which contains the remaining implication of Theorem \ref{th_main}.
\begin{prop}\label{th_progr->spd}
Let $q,p$, $f$ and $J'$ be as in Proposition \tref{th_spd->progr}. If condition \pref{eq_inters} holds true, then $f$ is SPD on $\Omega_{2q}\times\Omega_{2p}$.
\end{prop}
First of all, we prove the following consequence of condition \pref{eq_inters}.
\begin{lemma}\label{lm_int_inf_2}
If $A\subset{\mathbb{Z}}^2$ satisfies \begin{equation}\label{eq_inters_lm}
I_{M,N,x,y}:= A\cap (N{\mathbb{Z}}+x)\times (M{\mathbb{Z}}+y)\neq \emptyset \ \text{ for every $N,M,x,y\in{\mathbb{N}}\,,$}
\end{equation} then, for every $N,M,x,y\in{\mathbb{N}}$, the set $$\pg{\min\pg{|\alpha|,|\beta|}: (\alpha,\beta)\in I_{M,N,x,y}}$$ is unbounded and $I_{M,N,x,y}$ is infinite.
\end{lemma}
\begin{proof}
Suppose $\pg{\min\pg{|\alpha|,|\beta|}: (\alpha,\beta)\in I_{M,N,x,y}}\subseteq [0,C]$.\\
Let $(\widehat x,\widehat y) \in(N{\mathbb{Z}}+x)\times (M{\mathbb{Z}}+y)$ with $\widehat x,\widehat y>C$ and $D$ be a multiple of $M$ and of $N$ such that $\widehat x-D,\widehat y-D<-C$. Then $(D{\mathbb{Z}}+\widehat x)\times (D{\mathbb{Z}}+\widehat y)\cap I_{M,N,x,y}=\emptyset$ and $$(D{\mathbb{Z}}+\widehat x) \times (D{\mathbb{Z}}+\widehat y)\subseteq(N{\mathbb{Z}}+x)\times (M{\mathbb{Z}}+y) .$$ As a consequence $(D{\mathbb{Z}}+\widehat x)\times (D{\mathbb{Z}}+\widehat y)\cap A=\emptyset$, which contradicts \pref{eq_inters_lm}.
\end{proof}
The next step will be to prove that it suffices to verify Strict Positive Definiteness on certain special sets $X\subseteq\Omega_{2q}\times\Omega_{2p}$ (see Lemma \ref{lm_SDP_Xenh}).
In view of Lemma \ref{lm_Rto0}, when calculating $R^{q-2}_{m,n}(z_\mu\cdot z_\nu)$ and considering the limit for $m+n\to \infty$, the behavior is quite different depending on whether $|z_\mu\cdot z_\nu|<1$ or $|z_\mu\cdot z_\nu|=1$. In particular, we will have to treat carefully the cases where $|z_\mu\cdot z_\nu|=1$.
This happens either if $z_\mu=z_\nu$ (observe that the points in the set $X$ must be distinct, but they can have one of the two components in common), or if $z_\mu= e^{i\theta}z_\nu$ with $\theta\in(0,2\pi)$. In this last case we say that the two points $z_\mu,z_\nu\in \Omega_{2q}$ are {\em antipodal}.
Our strategy to deal with antipodal points is inspired by \cite{meneg-jean-traira}. We will say that a set of (distinct) points $Y=\pg{(z_\mu,w_\mu): \mu=1,\ldots, L}$ in $\Omega_{2q}\times\Omega_{2p}$ is {\em Antipodal Free} if the following property holds:
\begin{itemize}
\item[(AF)]\quad if $\mu\neq\nu$ then $|z_\mu\cdot z_\nu|<1$ unless $z_\mu=z_\nu$ and $|w_\mu\cdot w_\nu|<1$ unless $w_\mu=w_\nu$.
\end{itemize}
Of course, since the points in $Y$ are distinct, if $z_\mu=z_\nu$ then $|w_\mu\cdot w_\nu|<1$ (resp. if $w_\mu=w_\nu$ then $|z_\mu\cdot z_\nu|<1$).
\begin{remark}\label{rm_antip1}
Since two distinct points in $\Omega_2$ are always antipodal, if, for instance, $q=1$, then, in an antipodal free set $Y$ in $\Omega_2\times\Omega_{2p}$, all the $z_\mu$ are the same and then $|w_\mu\cdot w_\nu|<1$ for $\mu\neq\nu$. When $p=q=1$ then an antipodal free set $Y$ in $\Omega_2\times\Omega_2$ contains a unique point $(z,w)$.
\end{remark}
Consider now an antipodal free set $Y\subseteq \Omega_{2q}\times\Omega_{2p}$ and two sets of angles $\Theta=\pg{\theta_\tau: \tau=1,\ldots,t}$ and $\Delta=\pg{\delta_\sigma: \sigma=1,\ldots,s}$ in $[0,2\pi)$.
We define the {\em enhanced set associated to} $Y,\,\Theta$ and $\Delta$ as the set
\begin{equation}\label{eq_defX}
X=\pg{(e^{i\theta_\tau}z_\mu,e^{i\delta_\sigma}w_\mu):\, \mu=1,\ldots, L,\,\tau=1,\ldots,t,\,\sigma=1,\ldots,s}\,.
\end{equation}
Observe that, by construction, the points that appear in $X$ are all distinct (but now there exist many antipodal points among them).
The following lemma provides a sort of inverse construction.
\begin{lemma}\label{lm_SsubEnh}
Given a finite set $S\subseteq \Omega_{2q}\times\Omega_{2p}$ one can always obtain an antipodal free set $Y\subseteq \Omega_{2q}\times\Omega_{2p}$ and
two sets $\Theta$ and $\Delta$ of angles in $[0,2\pi)$, such that $S$ is contained in the enhanced set $X$ associated to $Y,\,\Theta$ and $\Delta$.
\end{lemma}
\begin{proof}
For a finite set $X_1\subseteq \Omega_{2q}$ one can select a maximal subset $Y_1$ not containing antipodal points and then define the set $\Theta$ containing $0$ and all the distinct $\theta\in (0,2\pi)$ that are needed to produce the remaining points as $e^{i\theta}z_\mu$ with $z_\mu\in Y_1$.
For the set $S\subseteq \Omega_{2q}\times\Omega_{2p}$, this algorithm produces a maximal subset $Y_1$ not containing antipodal points, along with a corresponding set of angles $\Theta$, from all the first coordinates $z$ in $S$; then a maximal subset $Y_2$ not containing antipodal points, along with a corresponding set of angles $\Delta$, from all the second coordinates $w$ in $S$.
Then $Y:=Y_1\times Y_2$ is such that $S$ is contained in the enhanced set associated to $Y,\,\Theta$ and $\Delta$.
\end{proof}
The following two lemmas will make clear why it is useful to consider antipodal free sets.
\begin{lemma}\label{lm_PDlimit}
Let $Y=\pg{(z_\mu,w_\mu): \mu=1,\ldots, L}$ in $\Omega_{2q}\times\Omega_{2p}$ be antipodal free. Then the matrix
$$ [R_{m,n}^{q-2}(z_\mu\cdot z_\nu)R_{k,l}^{p-2}(w_\mu\cdot w_\nu)]_\nu^\mu \quad $$ is positive definite provided $n\neq m,\,k\neq l$ and $m+n,k+l$ are large enough.
\end{lemma}
\begin{proof}
Indeed, the diagonal elements of the matrix are all equal to $R_{m,n}^{q-2}(1)R_{k,l}^{p-2}(1)=1$; moreover,
condition (AF) implies that if $z_\mu\cdot z_\nu=1$ then $|w_\mu\cdot w_\nu|<1$, and if $w_\mu\cdot w_\nu=1$ then $|z_\mu\cdot z_\nu|<1$.
As a consequence, the off-diagonal elements converge to zero by \pref{eq_Rto0} when $n\neq m,\,k\neq l$ and $\min\pg{m+n,k+l}\to \infty$. Then the matrix, which is Hermitian with real positive diagonal, becomes strictly diagonally dominant, thus positive definite (\cite[Theorem 6.1.10]{horn-joh-matrix}).
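For the reader's convenience, strict diagonal dominance of a Hermitian matrix $H=[h_{\mu\nu}]$ with positive diagonal means that
$$h_{\mu\mu}>\sum_{\nu\neq\mu}|h_{\mu\nu}|\quad\text{ for every }\mu\,,$$
which the convergence above guarantees once $m+n$ and $k+l$ are large enough.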
\end{proof}
\begin{lemma}\label{lm_SDP_Xenh}
Let $q,p\in{\mathbb{N}}\cup\pg{\infty}$ and $f$ be a continuous function which is PD on $\Omega_{2q}\times\Omega_{2p}$. Then the following assertions are equivalent:
\begin{itemize}
\item[(i)] $f$ is SPD on $\Omega_{2q}\times\Omega_{2p}$;
\item[(ii)] the matrix $A_X$ defined in \pref{eq-def-AX} is positive definite for every finite set $X$ which is the enhanced set associated to some antipodal free set $Y\subseteq \Omega_{2q}\times\Omega_{2p}$ and two sets $\Theta$ and $\Delta$ of angles in $[0,2\pi)$.
\end{itemize}
\end{lemma}
\begin{proof}
First observe that $(i)$ is equivalent to:
\begin{equation*}
\text{(iii) $A_S$ is a positive definite matrix for every finite set $S\subseteq \Omega_{2q}\times\Omega_{2p}$.}
\end{equation*}
The implication $(iii)\Longrightarrow(ii)$ is trivial. In order to prove that $(ii)\Longrightarrow(iii)$ observe that, given $S$, one can obtain $X$ as described in Lemma \ref{lm_SsubEnh}: since $S\subseteq X$, then $A_S$ is a principal submatrix of the positive definite matrix $A_X$ and then it is a positive definite matrix itself.
\end{proof}
At this point we can prove Proposition \ref{th_progr->spd}.
\begin{proof}[Proof of Proposition \ref{th_progr->spd}]
Let $X$ (finite) be the enhanced set associated to an antipodal free set $Y\subseteq \Omega_{2q}\times\Omega_{2p}$ and two sets $\Theta$ and $\Delta$ of angles in $[0,2\pi)$ and
consider the system
\begin{equation}\label{eq_sistB}
\overline c^tB_X^{m,n,k,l}c=0\ \text{for every $(m,n,k,l)\in J$}.
\end{equation}
In view of the Lemmas \ref{lm_Ax_sistBx} and \ref{lm_SDP_Xenh}, all we have to do is to prove that this system implies $c=0$.
Using the property in \pref{eq_propridd_Rmn}, with the notation introduced in \pref{eq_defX} for the elements of $X$, we have
$$B_X^{m,n,k,l}=\pq{e^{i (m-n)(\theta_\tau-\theta_\lambda)}e^{i(k-l)(\delta_\sigma-\delta_\zeta) }R_{m,n}^{q-2}(z_\mu\cdot z_\nu)R_{k,l}^{p-2}(w_\mu\cdot w_\nu)}^{\tau,\sigma,\mu}_{\lambda,\zeta,\nu}\,.$$
It is convenient to write this matrix as a block matrix as follows:
$$ B_X^{m,n,k,l}=[R_{m,n}^{q-2}(z_\mu\cdot z_\nu)R_{k,l}^{p-2}(w_\mu\cdot w_\nu)A^{m,n,k,l}]_\nu^\mu$$ where $$A^{m,n,k,l}=[e^{i (m-n)(\theta_\tau-\theta_\lambda)}e^{i(k-l)(\delta_\sigma-\delta_\zeta) }]^{\tau,\sigma}_{\lambda,\zeta}\,.$$
The vector $c$ will be correspondingly split as
$$c=[c_\mu]_\mu\quad\text{ where }\quad c_\mu=[c_\mu^{\tau\sigma}]_{\tau,\sigma}\,.$$
We have then $$\overline c^tB_X^{m,n,k,l}c=\sum_{\mu,\nu} R_{m,n}^{q-2}(z_\mu\cdot z_\nu)R_{k,l}^{p-2}(w_\mu\cdot w_\nu)\overline{c_\nu}^tA^{m,n,k,l}c_\mu.$$
As in the proof of Proposition \ref{th_spd->progr}, the matrix $A^{m,n,k,l}$ factors as $A^{m,n,k,l}=\overline{b}^tb$, where $$b=[{e^{i (m-n)\theta_\tau}e^{i (k-l)\delta_\sigma}}]^{\tau\sigma}\,,$$
then we may write
\begin{equation}\label{eq_cBc}
\overline c^tB_X^{m,n,k,l}c=\sum_{\mu,\nu} \overline{bc_\nu}^tbc_\mu R_{m,n}^{q-2}(z_\mu\cdot z_\nu)R_{k,l}^{p-2}(w_\mu\cdot w_\nu)\,.
\end{equation}
Observe that since $Y$ is antipodal free we will be able to use Lemma \ref{lm_PDlimit} in order to discuss this quadratic form.
We suppose now for the sake of contradiction that $c\neq0$. Without loss of generality we assume that $c_1^{1,1}\neq0$ and we first aim to prove that
\begin{equation}\label{eq_bcneq0}
bc_1=\sum_{\tau,\sigma}{e^{i (m-n)\theta_\tau}e^{i (k-l)\delta_\sigma}}c_1^{\tau,\sigma}\neq0
\end{equation}
for certain $(m,n,k,l)\in J$.
Actually, by Theorem 2.4 and Lemmas 2.5 and 2.6 in \cite{P-jean-menS1xS1}, which use the theory of linear recurrence sequences, and in particular a generalization of the Skolem-Mahler-Lech Theorem due to Laurent \cite[Theorem 1]{Laurent89} (see also \cite{pinkus-spd-herm}),
we know that, given the angles $\theta_\tau, \delta_\sigma$ and the vector $c_1$ with $c_1^{1,1}\neq0$, there exist $N,M,x,y\in{\mathbb{N}}$ such that the function defined on ${\mathbb{Z}}^2$ by
$$ L( \alpha,\beta):=\sum_{\tau,\sigma}{e^{i\, \alpha\,\theta_\tau}e^{i\, \beta\,\delta_\sigma}}c_1^{\tau,\sigma}$$
is nonzero at every $( \alpha,\beta)$ in the set $P:=(N{\mathbb{Z}}+x)\times (M{\mathbb{Z}}+y)$.
By Lemma \ref{lm_int_inf_2} applied to $J'$, there exists a sequence $S:=\pg{(\alpha_i,\beta_i)}\subseteq P\cap J'$ such that $|\alpha_i|,|\beta_i|\to\infty$.
As a consequence, \pref{eq_bcneq0} holds true for every $(m,n,k,l)\in J$ such that $(m-n,k-l)\in S$.
\\ Now we can select $(m-n,k-l)\in S$ with $|m-n|,|k-l|$ as large as we want (which implies that $m\neq n$, $k\neq l$ and that $m+n$ and $k+l$ are also large).
For the corresponding $(m,n,k,l)\in J$, the left-hand side of the equation in \pref{eq_sistB} cannot vanish, in view of Equation \pref{eq_bcneq0} and Lemma \ref{lm_PDlimit}.
We have thus proved that a nontrivial solution of system \pref{eq_sistB} cannot exist.
\end{proof}
\begin{remark}
Observe that in the case $p=q=1$, in view of Remark \ref{rm_antip1}, the sum in Equation \pref{eq_cBc} has only one term, namely $|bc_1|^2$, so the contradiction follows readily after proving \pref{eq_bcneq0}.
\end{remark}
At this point, Theorem \ref{th_main} is a consequence of Propositions \ref{th_spd->progr} and \ref{th_progr->spd}. Theorems \ref{th_main_1p} and \ref{th_main_11} follow from the same two propositions after
translating back from the expansion in Lemma \ref{lm_charDD} to the usual ones in Equations \pref{eq-pd-prod-esf-complO2p} and \pref{eq-pd-prod-esf-complO2} (see Sections \ref{sec_PD_on_sing} and \ref{sec_PD_on_prod}).
\section{Characterization of Positive Definiteness on $\Omega_\infty\times\Omega_\infty$}\label{sec_infty}
In this section we aim to prove the following:
\begin{theorem}\label{th_PDinfty} Let $f:\mathbb{D}\times \mathbb{D}\to{\mathbb{C}}$ be a continuous function. Then $f$ is PD on $\Omega_\infty\times\Omega_\infty$ if, and only if,
\begin{equation}\label{eq:expandcpinfty}
\begin{array}{c}
\begin{array}{rcll}
f(\xi,\eta)&=&\displaystyle\sum_{m,n,k,l\in{\mathbb{Z}}_+} a_{m,n,k,l}R_{m,n}^\infty(\xi)R_{k,l}^\infty(\eta)&\\
&=& \displaystyle\sum_{m,n,k,l\in{\mathbb{Z}}_+} a_{m,n,k,l} {\xi\vphantom{\overline\xi}}^m\overline{\xi}^n\eta^k\overline{\eta}^l,\quad& (\xi,\eta)\in{\mathbb{D}}\times{\mathbb{D}},\end{array}
\\\mbox{where $\sum a_{m,n,k,l}<\infty$ and $a_{m,n,k,l}\geq0$ for all $m,n,k,l\in{\mathbb{Z}}_+$.}
\end{array}
\end{equation}
Moreover, the series in Equation \pref{eq:expandcpinfty} is uniformly convergent on ${\mathbb{D}}\times {\mathbb{D}}$.
\end{theorem}
In the proof we will use ideas from \cite{P-berg-porcu-Omega-inf}
and we will need the following lemma, whose proof is analogous to that of Lemma 4.1 in \cite{P-berg-porcu-Omega-inf} and will be omitted.
\begin{lemma}\label{thm:technical}
Let $q,p\in{\mathbb{N}}\cup\pg\infty,\;q,p\geq2$ and $f:\mathbb{D}\times \mathbb{D}\to {\mathbb{C}}$ be a continuous and PD function on $\Omega_{2q}\times\Omega_{2p}$. Given points $w_1,\ldots,w_L\in\Omega_{2p}$ and numbers $c_1,\ldots,c_L\in {\mathbb{C}}$, the function $F:\mathbb{D}\to {\mathbb{C}}$ defined by
\begin{equation}\label{eq:sum1}
F(\xi)=\sum_{j,k=1}^L f(\xi,w_j\cdot w_k)c_j\overline{c_k}
\end{equation} is continuous and PD on $\Omega_{2q}$.
\end{lemma}
\begin{proof}[Proof of Theorem \ref{th_PDinfty}]
First observe that $f$ is PD on $\Omega_\infty\times\Omega_\infty$ if, and only if, $f$ is PD on $\Omega_{2q}\times\Omega_{2p}$ for every $q,p\geq2$.
It is also easy to see that the function $g(\xi)=\xi$, $\xi\in{\mathbb{D}}$,
is PD on $\Omega_{2q}$ for every $q\geq2$, as well as its conjugate.
By the Schur Product Theorem for Positive Definite kernels, cf. \cite[Theorem 3.1.12]{Berg}, one obtains that also $h(\xi)={\xi\vphantom{\overline\xi}}^{m}\overline{\xi}^{n}$ is PD on $\Omega_{2q}$ for $q\geq2$ and $m,n\in{\mathbb{Z}}_+$, and that ${\xi\vphantom{\overline\xi}}^m\overline{\xi}^n\eta^k\overline{\eta}^l$ is PD on $\Omega_{2q}\times\Omega_{2p}$ for $q,p\ge 2$ and $m,n,k,l\in{\mathbb{Z}}_+$.
As a consequence, any function of the form \pref{eq:expandcpinfty} is continuous and PD on $\Omega_{2q}\times\Omega_{2p}$ for every $q,p\geq2$, and then on $\Omega_{\infty}\times\Omega_\infty$ too.
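Concretely, each monomial is a pointwise product of PD factors, so the Schur Product Theorem applies term by term:
$${\xi\vphantom{\overline\xi}}^m\overline{\xi}^n\eta^k\overline{\eta}^l=\underbrace{g(\xi)\cdots g(\xi)}_{m}\,\underbrace{\overline{g(\xi)}\cdots\overline{g(\xi)}}_{n}\,\underbrace{g(\eta)\cdots g(\eta)}_{k}\,\underbrace{\overline{g(\eta)}\cdots\overline{g(\eta)}}_{l}\,.$$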
\par
Now let the continuous function $f:{\mathbb{D}}\times {\mathbb{D}}\to{\mathbb{C}}$ be PD on $\Omega_\infty\times\Omega_\infty$.
For $\eta\in\mathbb{D},\,c\in{\mathbb{C}}$, consider the special case of \pref{eq:sum1} with $L=2,q=\infty,p=2$, $w_1=(\eta,w), w_2=(1,0)\in\Omega_4$, $c_1=1, c_2=c$, that is,
\begin{equation}\label{eq:sum2}
F_{\eta,c}(\xi)=f(\xi,1)(1+|c|^2)+f(\xi,\eta)\overline{c}+f(\xi,\overline{\eta})c.
\end{equation}
By Lemma~\ref{thm:technical},
$F_{\eta,c}$ is a continuous PD function on $\Omega_\infty$. Then, using a theorem due to Christensen and Ressel, see \cite{chris-ressel-pd}, it can be written as
$$
F_{\eta,c}(\xi)=\sum_{m,n\in{\mathbb{Z}}_+} a_{m,n}\pt{\eta,c} {\xi\vphantom{\overline\xi}}^m\overline{\xi}^n,
$$
where $a_{m,n}\pt{\eta,c}\ge 0$ are uniquely determined and satisfy $\sum_{m,n\in{\mathbb{Z}}_+} a_{m,n}\pt{\eta,c}<\infty.$
By using $c=1,-1,i$ and proceeding as in the end of the proof of \cite[Theorem 1.2]{P-berg-porcu-Omega-inf}, one obtains that
\begin{equation}\label{eq-cara-f}
f(\xi,\eta)=\frac{1-i}4F_{\eta,1}(\xi)-\frac{1+i}4F_{\eta,-1}(\xi)+\frac{i}2F_{\eta,i}(\xi)=\sum_{m,n\in{\mathbb{Z}}_+} \varphi_{m,n}(\eta){\xi\vphantom{\overline\xi}}^m\overline{\xi}^n,
\end{equation}
where
$$
\varphi_{m,n}(\eta):=\frac{1-i}4a_{m,n}(\eta,1)-\frac{1+i}4a_{m,n}(\eta,-1)+\frac{i}2a_{m,n}(\eta,i), \quad \eta\in\mathbb{D}\,,
$$
and then
\begin{equation}\label{eq-serie-phi-finita}
\m{\sum_{m,n\in{\mathbb{Z}}_+}\varphi_{m,n}(\eta)}<\infty,\qquad \eta\in{\mathbb{D}}.
\end{equation}
Consider now $p\geq2$ and the function $\widetilde f_p:\mathbb{D}\times U(p)\to{\mathbb{C}}:(\xi,A)\mapsto f(\xi,A e_p\cdot e_p)$, where $e_p=(1,0,\ldots,0)\in \Omega_{2p}$.
By construction, $\widetilde f_p$ is continuous and PD on $\Omega_{\infty}\times U(p)$.
By Theorem 1.3 in \cite{P-berg-porcu-Omega-inf}, we can expand $\widetilde f_p$ as
$$\widetilde f_p(\xi, A )=\sum_{m,n\in{\mathbb{Z}}_+} \widetilde\varphi^{(p)}_{m,n}(A)R^{\infty}_{m,n}(\xi)=\sum_{m,n\in{\mathbb{Z}}_+} \widetilde\varphi^{(p)}_{m,n}(A){\xi\vphantom{\overline\xi}}^m\overline{\xi}^n\,,$$
where $\widetilde\varphi^{(p)}_{m,n}$ are continuous PD functions on $U(p)$.
By differentiation one has that
$$ \widetilde \varphi^{(p)}_{m,n}(A) =\frac1{m!n!}\frac{\partial^{m+n}\widetilde f_p(0,A)}{\partial {\xi\vphantom{\overline\xi}}^m\partial\overline{\xi}^n}$$
and
\begin{equation}\label{eq_phi=der}
\varphi_{m,n}(\eta) =\frac1{m!n!}\frac{\partial^{m+n}f(0,\eta)}{\partial {\xi\vphantom{\overline\xi}}^m\partial\overline{\xi}^n},
\end{equation}
but by construction
$$\widetilde \varphi^{(p)}_{m,n}(A)= \frac1{m!n!}\frac{\partial^{m+n}\widetilde f_p(0,A)}{\partial {\xi\vphantom{\overline\xi}}^m\partial\overline{\xi}^n}=\frac1{m!n!}\frac{\partial^{m+n}f(0,Ae_p\cdot e_p)}{\partial {\xi\vphantom{\overline\xi}}^m\partial\overline{\xi}^n}= \varphi_{m,n}(Ae_p\cdot e_p).
$$
By Remark \ref{rem_U_Om} we deduce that $\varphi_{m,n}$ is continuous and PD on $\Omega_{2p}$, for every $p\geq2$.
As a consequence, $\varphi_{m,n}$ is PD on $\Omega_\infty$ and thus we can again use the theorem by Christensen and Ressel, in order to conclude that for every $m,n$,
$$
\varphi_{m,n}(\eta) = \sum_{k,l\in{\mathbb{Z}}_+} a_{m,n,k,l}\,\eta^k\overline{\eta}^l, \quad \eta\in{\mathbb{D}}\,,
$$ where ${a_{m,n,k,l}}\geq0$, for every $k,l\in{\mathbb{Z}}_+$, and
$ \sum_{k,l\in{\mathbb{Z}}_+} a_{m,n,k,l}<\infty\,. $
Thus,
$$
f(\xi,\eta)=\sum_{m,n\in{\mathbb{Z}}_+}\sum_{k,l\in{\mathbb{Z}}_+} a_{m,n,k,l}\,{\xi\vphantom{\overline\xi}}^m\overline{\xi}^n\eta^k\overline{\eta}^l\,,
$$
and then $\displaystyle\sum_{m,n,k,l\in{\mathbb{Z}}_+}a_{m,n,k,l}<\infty.$
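Finally, the uniform convergence asserted in the statement follows from the Weierstrass M-test: since $|\xi|,|\eta|\leq1$ on ${\mathbb{D}}\times{\mathbb{D}}$,
$$\left|a_{m,n,k,l}\,{\xi\vphantom{\overline\xi}}^m\overline{\xi}^n\eta^k\overline{\eta}^l\right|\leq a_{m,n,k,l}\quad\text{ and }\quad \sum_{m,n,k,l\in{\mathbb{Z}}_+} a_{m,n,k,l}<\infty\,.$$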
\end{proof}
\section{A connection with the cases $S^1$ and $S^1\times S^1$}
\label{sec_S1}
In this section we aim to show
that one can deduce, from Theorem \ref{th_main_11}, the characterization of Strict Positive Definiteness on $S^1\times S^1$ proved in \cite{P-jean-menS1xS1},
namely, that
a continuous function $f:[-1,1] \times [-1,1] \to {\mathbb{C}}$
which is PD on $S^1\times S^1$, is also SPD on $S^1\times S^1$ if, and only if, considering its expansion as in \pref{eq-pd-esfSd},
the set
$
\{(m,k)\in{\mathbb{Z}}^2: a_{|m|,|k|}>0\}
$
intersects every product of full arithmetic progressions in ${\mathbb{Z}}$, that is,
\begin{equation}\label{eq_inters_S11}
\{(m,k)\in{\mathbb{Z}}^2: a_{|m|,|k|}>0\}\cap (N{\mathbb{Z}}+x)\times (M{\mathbb{Z}}+y)\neq \emptyset\qquad \mbox{for every $N,M,x,y\in{\mathbb{N}}$.}
\end{equation}
Actually, condition \pref{eq_inters_S11} has more similarities with the conditions we obtain here in the Theorems \ref{th_main}, \ref{th_main_1p} and \ref{th_main_11} for the complex spheres, where an intersection with every product of full arithmetic progressions in ${\mathbb{Z}}$ is required, rather than with the known conditions for real spheres in higher dimensions, where only progressions of step 2 are involved (see Equations \pref{eq_inters_Sd} and \pref{eq_inters_SpSq}).
The polynomials $P_m^0$ in \pref{eq-pd-esfSd} are also known as Tchebichef polynomials of the first kind (see \cite[page 29]{szego}) and can be written as $P_m^{0}(\cos\phi)=\cos(m\phi)$, $\phi\in[0,\pi]$. As a consequence, a way of writing \pref{eq-pd-esfSd} often used in the literature when $p=q=1$ is the following:
\begin{equation}\label{eq_expS1S1}
f(\cos(\phi),\cos(\psi))=\sum_{m,k\in{\mathbb{Z}}_+} a_{m,k}\cos(m\phi)\cos(k\psi),\quad \phi,\psi\in[0,\pi].
\end{equation}
Below, we will show that one can establish a correspondence between PD (and between SPD) functions on $S^1\times S^1$ and a subset of those on $\Omega_2\times\Omega_2$. We will do it first for the case of a single sphere.
\begin{lemma}\label{lm_bij_1}
There exists a bijection between PD (resp. SPD) functions on $S^1$ and PD (resp. SPD) functions on $\Omega_2$ which are invariant under conjugation, that is, $f(e^{i\phi})=f(e^{-i\phi}),\ \phi\in[0,2\pi)$.
\end{lemma}
\begin{proof}
Let $f:\partial\mathbb{D}\to {\mathbb{C}}$ be a PD function on $\Omega_2$ satisfying $f(e^{i\phi})=f(e^{-i\phi})$; then it is real valued and depends only on the real part of its argument.
\\
Consider the bijection
$$A:\Omega_2\to S^1 :e^{i\phi}\mapsto (\cos(\phi),\sin(\phi))$$
and the surjective map
$$C:\partial\mathbb{D}\to[-1,1]:e^{i\phi}\mapsto \cos(\phi)\,,$$
which admits a right inverse $C^-:x\mapsto e^{i\arccos(x)}$.
Then $C\circ C^-=id_{[-1,1]}$ and since $f$ only depends on the real part,
\begin{equation}\label{eq-fCC}
f(C^-\circ C(e^{i\phi}))=f(e^{i\phi}), \quad e^{i\phi}\in\partial \mathbb{D}.
\end{equation}
Also observe that
\begin{equation}\label{prod-int-w-compl}
C(w\cdot w')=Aw\cdot_{\mathbb{R}} Aw', \quad w,w'\in\Omega_2.
\end{equation}
Therefore, the bijection in the claim is the following:
$$B:f\mapsto \widehat f:=f\circ {C^-}\,,$$
whose inverse is given by $$B^{-1}: \widehat f\mapsto f:=\widehat f \circ C\,.$$
Actually, for kernels $K$ and $\widehat K$ associated, respectively, to $f$ and $\widehat f$, it holds, by (\ref{eq-fCC}-\ref{prod-int-w-compl}),
$$\widehat K(Aw,Aw')=\widehat f(Aw\cdot_{\mathbb{R}} Aw')=f(C^-\circ C (w\cdot w'))=f(w\cdot w')=K(w,w'),$$
then the definition of PD (resp. SPD) in \pref{eq-quad-form-geral} becomes equivalent for the two kernels.
\end{proof}
The case on a product of spheres is very similar.
\begin{lemma}\label{lm_bij_11}
There exists a bijection between PD (resp. SPD) functions on $S^1\times S^1$ and PD (resp. SPD) functions on $\Omega_2\times\Omega_2$ that
are invariant under conjugation in both variables, that is, $f(e^{i\phi},e^{i\psi})=f(e^{-i\phi},e^{i\psi})=f(e^{i\phi},e^{-i\psi}),\ \phi,\psi\in[0,2\pi)$.
\end{lemma}
\begin{proof}
Consider a function $f:\partial\mathbb{D}\times\partial\mathbb{D} \to {\mathbb{C}}$ which is PD on $\Omega_2\times\Omega_2$ and satisfies $f(e^{i\phi},e^{i\psi})=f(e^{-i\phi},e^{i\psi})=f(e^{i\phi},e^{-i\psi})$; then it is real valued and depends only on the real parts of both $e^{i\phi}$ and $e^{i\psi}$.
Thus the proof follows the same lines as that of Lemma \ref{lm_bij_1}, where now the bijection is defined as
$$B:f\mapsto \widehat f(\vartriangle,\star):=f(C^-(\vartriangle),C^-(\star))\,.$$
\end{proof}
Now we need to establish the correspondence between the coefficients in the expansions of $f$ and $\widehat f$.
For the single sphere case,
a continuous function $f$ which is PD on $\Omega_2$ can be written as in \pref{eq-pd-prod1}.
The condition that $f$ is invariant under conjugation, assumed in Lemma \ref{lm_bij_1},
is equivalent to $a_m=a_{-m},\ m\in{\mathbb{Z}}$, so we can rewrite $$f(e^{i\phi})=a_0+\sum_{m\in{\mathbb{N}}} {a_{m}} (e^{im\phi}+e^{-im\phi})= a_0+\sum_{m\in{\mathbb{N}}} {2a_{m}} \cos(m\phi)$$
and the function $\widehat f$ corresponding to $f$ in the bijection from Lemma \ref{lm_bij_1} can be written as
$$\widehat f(\cos(\phi))
=f(C^-(\cos(\phi)))=f(e^{i\phi})
= a_0+\sum_{m\in{\mathbb{N}}} {2a_{m}} \cos(m\phi)\,.$$
If we consider the Schoenberg coefficients $\widehat a_m,\,m\in{\mathbb{Z}}_+$, for $\widehat f$, that is,
$$ \widehat f(\cos(\phi))=\widehat a_0+\sum_{m\in{\mathbb{N}}} {\widehat a_{m}} \cos(m\phi)\,,$$
we obtain the relation
\begin{equation}\label{eq_relcoefS1O2}
a_{m}>0\Longleftrightarrow \widehat a_{|m|}>0,\ \ m\in{\mathbb{Z}}\,.
\end{equation}
Now, by \cite{P-valdir-claudemir-spd-compl}, the condition for $f$ to be SPD on $\Omega_2$ is
\begin{equation}\label{eq_intersO2}
\pg{m\in{\mathbb{Z}}:\ a_{m}>0} \cap (N{\mathbb{Z}}+x)\neq \emptyset\qquad \mbox{for every $N,x\in{\mathbb{N}}$,}
\end{equation}
which then translates, via \pref{eq_relcoefS1O2}, to the known condition (see \cite{P-valdir-claudemir-spd-compl,barbosa-men})
\begin{equation}
\pg{m\in{\mathbb{Z}}:\ a_{|m|}>0} \cap (N{\mathbb{Z}}+x)\neq \emptyset\qquad \mbox{for every $N,x\in{\mathbb{N}}$.}
\end{equation}
Again, when considering the product of two spheres, the argument is similar.
A continuous function $f$, which is PD on $\Omega_2\times\Omega_2$, is written as in \pref{eq-pd-prod-esf-complO2}
and the condition that $f$ is invariant under conjugation in both variables, assumed in Lemma \ref{lm_bij_11},
is equivalent to
$$a_{m,k}=a_{-m,k}=a_{m,-k}=a_{-m,-k},\ m,k\in{\mathbb{Z}}.$$
Then, proceeding as above, one obtains
\begin{eqnarray}
\nonumber f(e^{i\phi},e^{i\psi})&=& a_{0,0}+\sum_{m\in{\mathbb{N}}} {2a_{m,0}} \cos(m\phi)+\sum_{k\in{\mathbb{N}}} {2a_{0,k}} \cos(k\psi)+\sum_{m,k\in{\mathbb{N}}} {4a_{m,k}} \cos(m\phi)\cos(k\psi)\\
&=&\widehat f(\cos\phi,\cos\psi)\,.
\end{eqnarray}
If we denote by $\widehat a_{m,k},\,m,k\in{\mathbb{Z}}_+$, the coefficients in the expansion of $\widehat f$ as in \pref{eq_expS1S1},
we obtain the relation
\begin{equation}\label{eq_relcoefS1O2_quad}
a_{m,k}>0\Longleftrightarrow \widehat a_{|m|,|k|}>0,\ \ m,k\in{\mathbb{Z}}\,.
\end{equation}
Finally, by Theorem \ref{th_main_11}, the condition for $f$ to be SPD on $\Omega_2\times\Omega_2$ is
\begin{equation}\label{eq_intersO2O2}
\pg{(m,k)\in{\mathbb{Z}}^2:\ a_{m,k}>0} \cap (N{\mathbb{Z}}+x)\times (M{\mathbb{Z}}+y)\neq \emptyset\qquad \mbox{for every $N,M,x,y\in{\mathbb{N}}$,}
\end{equation}
which then translates, via \pref{eq_relcoefS1O2_quad}, to the condition \pref{eq_inters_S11} that we were seeking.
\end{document} |
\begin{document}
\title{Measurement in the de Broglie-Bohm interpretation: \\Double-slit, Stern-Gerlach and EPR-B}
\author{Michel Gondran}
\affiliation{University
Paris Dauphine, Lamsade, 75 016 Paris, France}
\email{[email protected]}
\author{Alexandre Gondran}
\affiliation{\'Ecole Nationale de l'Aviation Civile, 31000 Toulouse, France}
\email{[email protected]}
\begin{abstract}
We propose a pedagogical presentation of measurement in the de Broglie-Bohm interpretation. In this heterodox interpretation, the position of a quantum particle exists and is piloted by the phase of the wave function. We show how this position explains determinism and realism in the three most important experiments of quantum measurement: double-slit, Stern-Gerlach and EPR-B.
First, we demonstrate the conditions in which the de Broglie-Bohm interpretation can be assumed to be valid through continuity with classical mechanics.
Second, we present a numerical simulation of the double-slit experiment performed by Jönsson in 1961 with electrons. It demonstrates the continuity between classical mechanics and quantum mechanics: evolution of the probability density at various distances and convergence of the quantum trajectories to the classical trajectories when $h$ tends to 0.
Third, we present an analytic expression of the wave function in the Stern-Gerlach experiment. This explicit solution requires the calculation of a Pauli spinor with a spatial extension. This solution enables us to demonstrate the decoherence of the wave function and the three postulates of quantum measurement: quantization, the Born interpretation and wave function reduction. The spinor spatial extension also enables the introduction of the de Broglie-Bohm trajectories, which gives a very simple explanation of the particles' impact and of the measurement process.
Finally, we study the EPR-B experiment, the Bohm version of the Einstein-Podolsky-Rosen experiment. Its theoretical resolution in space and time shows that a causal interpretation exists where each atom has a position and a spin. This interpretation avoids the flaw of the previous causal interpretation. We recall that a physical explanation of non-local influences is possible.
\end{abstract}
\maketitle
\section{Introduction}
"\emph{I saw the impossible done}".\cite{Bell_1982} This is how John Bell describes his
inexpressible surprise in 1952 upon the publication of an article
by David Bohm~\cite{Bohm_1952}. The impossibility came from a
theorem by John von Neumann outlined in 1932 in his book \emph{The
Mathematical Foundations of Quantum Mechanics},\cite{vonNeumann}
which seemed to show the impossibility of adding "hidden
variables" to quantum mechanics. This impossibility, with its
physical interpretation, became almost a postulate of quantum
mechanics, based on von Neumann's indisputable authority as a
mathematician. As Bernard d'Espagnat notes in 1979:
"\emph{At the university, Bell had, like all of us, received from
his teachers a message which, later still, Feynman would
brilliantly state as follows: "No one can explain more than we
have explained here [...]. We don't have the slightest idea of a
more fundamental mechanism from which the former results (the
interference fringes) could follow". If indeed we are to believe
Feynman (and Banesh Hoffman, and many others, who expressed the
same idea in many books, both popular and scholarly), Bohm's
theory cannot exist. Yet it does exist, and is even older than
Bohm's papers themselves. In fact, the basic idea behind it was
formulated in 1927 by Louis de Broglie in a model he called "pilot
wave theory". Since this theory provides explanations of what, in
"high circles", is declared inexplicable, it is worth
consideration, even by physicists [...] who do not think it gives us the final answer to the
question "how reality really is.}"\cite{dEspagnat}
And in 1987, Bell wonders about his teachers' silence concerning
the de Broglie-Bohm pilot wave:
"\emph{But why then had Born not told me of this 'pilot wave'? If
only to point out what was wrong with it? Why did von Neumann not
consider it? More extraordinarily, why did people go on producing
"impossibility" proofs after 1952, and as recently as 1978?
While even Pauli, Rosenfeld, and Heisenberg could produce no more
devastating criticism of Bohm's version than to brand it as
"metaphysical" and "ideological"? Why is the pilot-wave picture
ignored in text books? Should it not be taught, not as the only
way, but as an antidote to the prevailing complacency? To show
that vagueness, subjectivity and indeterminism are not forced on
us by experimental facts, but through a deliberate theoretical
choice?}"\cite{Bell_1987}
More than thirty years after John Bell's questions, the
interpretation of the de Broglie-Bohm pilot wave is still ignored by
both the international community and the textbooks.
What is this pilot wave theory?
For de Broglie, a quantum particle is not only defined by its wave function.
He assumes that the quantum particle also has a position which is piloted by the wave function.\cite{Broglie_1927}
However only the probability density of this position is known.
The position exists in itself (ontologically) but is unknown to the observer.
It only becomes known during the measurement.
The goal of the present paper is to present the de Broglie-Bohm pilot wave through
the study of the three most important experiments of quantum measurement:
the double-slit experiment which is the crucial experiment of the
wave-particle duality, the Stern and Gerlach experiment with the
measurement of the spin, and the EPR-B experiment with the problem
of non-locality.
The paper is organized as follows. In section~\ref{sect:deBroglieInterpretation}, we demonstrate the conditions in which the de Broglie-Bohm interpretation
can be assumed to be valid through continuity with classical mechanics. This involves the de Broglie-Bohm interpretation
for a set of particles prepared in the same way. In section~\ref{sect:DoubleSlit},
we present a numerical simulation of the double-slit experiment
performed by Jönsson in 1961 with electrons~\cite{Jonsson}.
The method of Feynman path integrals allows us to calculate the time-dependent wave function. The evolution of the probability density
just outside the slits leads one to consider the dualism of the
wave-particle interpretation. The de Broglie-Bohm trajectories
provide an explanation for the impact positions of the particles. Finally, we show the continuity between
classical and quantum trajectories with the convergence of these trajectories to classical
trajectories when $h$ tends to $0$. In section~\ref{sect:SternGerlach}, we present an analytic expression of
the wave function in the Stern-Gerlach experiment. This explicit solution
requires the calculation of a Pauli spinor with a spatial extension.
This solution enables us to demonstrate the decoherence of the wave function and the three postulates of quantum measurement:
quantization, Born interpretation and wave function reduction.
The spinor spatial extension also enables the introduction of the de
Broglie-Bohm trajectories which gives a very simple explanation of
the particles' impact and of the measurement process. In section~\ref{sect:EPR-B}, we study the EPR-B experiment, the Bohm version of the
Einstein-Podolsky-Rosen experiment. Its theoretical resolution in
space and time shows that a causal interpretation exists where each
atom has a position and a spin. Finally, we recall that a physical explanation of
non-local influences is possible.
\section{The de Broglie-Bohm interpretation}
\label{sect:deBroglieInterpretation}
The de Broglie-Bohm interpretation is based on the following demonstration.
Let us consider a wave function $\Psi(\textbf{x},t)$ solution to the Schr\"odinger
equation:
\begin{eqnarray}\label{eq:schrodinger1}
i\hbar \frac{\partial \Psi(\mathbf{x},t) }{\partial t}=-\frac{\hbar ^{2}}{2m}
\triangle \Psi(\mathbf{x},t) +V(\mathbf{x})\Psi(\mathbf{x},t)\\
\label{eq:schrodinger2}
\Psi (\mathbf{x},0)=\Psi_{0}(\mathbf{x}).
\end{eqnarray}
With the variable change $\Psi(\mathbf{x},t)=\sqrt{\rho^{\hbar}(\mathbf{x},t)} \exp(i\frac{S^{\hbar}(\textbf{x},t)}{\hbar})$, the Schr\"odinger
equation can be decomposed into Madelung
equations~\cite{Madelung_1926} (1926):
\begin{equation}\label{eq:Madelung1}
\frac{\partial S^{\hbar}(\mathbf{x},t)}{\partial t}+\frac{1}{2m}
(\nabla S^{\hbar}(\mathbf{x},t))^2 +
V(\mathbf{x})-\frac{\hbar^2}{2m}\frac{\triangle
\sqrt{\rho^{\hbar}(\mathbf{x},t)}}{\sqrt{\rho^{\hbar}(\mathbf{x},t)}}=0
\end{equation}
\begin{equation}\label{eq:Madelung2}
\frac{\partial \rho^{\hbar}(\mathbf{x},t)}{\partial t}+ \mathrm{div}
\left(\rho^{\hbar}(\mathbf{x},t) \frac{\nabla
S^{\hbar}(\mathbf{x},t)}{m}\right)=0
\end{equation}
with initial conditions:
\begin{equation}\label{eq:Madelung3}
\rho^{\hbar}(\mathbf{x},0)=\rho^{\hbar}_{0}(\mathbf{x}) \qquad \text{and}
\qquad S^{\hbar}(\mathbf{x},0)=S^{\hbar}_{0}(\mathbf{x}) .
\end{equation}
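As an illustration not taken from the paper itself, the continuity equation (\ref{eq:Madelung2}) can be verified symbolically for the standard free Gaussian packet in one dimension. In assumed natural units $\hbar=m=\sigma_0=1$, this textbook solution has $\rho^{\hbar}(x,t)=(2\pi\sigma(t)^2)^{-1/2}e^{-x^2/2\sigma(t)^2}$ with $\sigma(t)^2=1+t^2/4$ and velocity field $\nabla S^{\hbar}/m=xt/(4+t^2)$:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
s2 = 1 + t**2 / 4                    # sigma(t)^2 in natural units (assumption)
rho = sp.exp(-x**2 / (2 * s2)) / sp.sqrt(2 * sp.pi * s2)
v = x * t / (4 + t**2)               # velocity field grad(S)/m of the free packet
# left-hand side of the Madelung continuity equation (eq:Madelung2)
continuity = sp.diff(rho, t) + sp.diff(rho * v, x)
print(sp.simplify(continuity))       # 0
```

The symbolic result confirms that the polar decomposition reproduces conservation of the probability density for this exact solution.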
The Madelung equations correspond to a set of non-interacting quantum particles all prepared
in the same way (same $\rho^{\hbar}_{0}(\mathbf{x})$
and $S^{\hbar}_{0}(\mathbf{x})$).
A quantum particle is said to be \textit{statistically
prepared} if its initial probability density $\rho^{\hbar}_{0}(\mathbf{x})$
and its initial action $S^{\hbar}_{0}(\mathbf{x})$ converge, when $\hbar\to 0$, to non-singular functions $\rho_{0}(\mathbf{x})$ and $S_{0}(\mathbf{x})$. This is the case for an
electronic or $C_{60}$ beam in the double-slit experiment or
an atomic beam in the Stern and Gerlach experiment. We will see that it is also the case for a beam of entangled particles in the EPR-B experiment.
Then, we have the following theorem:~\cite{Gondran2011,Gondran2012a}
\textit{For statistically
prepared quantum particles,
the probability density} $\rho^{\hbar}(\textbf{x},t)$ \textit{and the
action} $S^{\hbar}(\textbf{x},t)$, \textit{solutions to the Madelung
equations
(\ref{eq:Madelung1})(\ref{eq:Madelung2})(\ref{eq:Madelung3}),
converge, when} $\hbar\to 0$,\textit{ to the classical density
$\rho(\textbf{x},t)$ and the classical action} $S(\textbf{x},t)$,
\textit{solutions to the statistical Hamilton-Jacobi equations:}
\begin{eqnarray}\label{eq:statHJ1b}
\frac{\partial S\left(\textbf{x},t\right) }{\partial
t}+\frac{1}{2m}(\nabla S(\textbf{x},t) )^{2}+V(\textbf{x},t)=0\\
\label{eq:statHJ2b}
S(\textbf{x},0)=S_{0}(\textbf{x})\\
\label{eq:statHJ3b}
\frac{\partial \rho \left(\textbf{x},t\right) }{\partial
t}+ \mathrm{div} \left( \rho \left( \textbf{x},t\right) \frac{\nabla
S\left( \textbf{x},t\right) }{m}\right) =0\\
\label{eq:statHJ4b}
\rho(\mathbf{x},0)=\rho_{0}(\mathbf{x}).
\end{eqnarray}
We give some indications of the proof of this theorem when the
wave function $\Psi(\textbf{x},t)$ is written as a
function of the initial wave function $\Psi_{0}(\textbf{x})$ by
the Feynman paths integral \cite{Feynman}:
\begin{equation}\label{eq:interFeynman}
\Psi(\textbf{x},t)= \int F(t,\hbar)
\exp\left(\frac{i}{\hbar}S_{cl}(\textbf{x},t;\textbf{x}_{0})\right)
\Psi_{0}(\textbf{x}_{0})d\textbf{x}_0
\end{equation}
where $F(t,\hbar)$ is a function independent of $\textbf{x}$ and
$\textbf{x}_{0}$.
For a statistically prepared quantum particle, the wave function is written
$ \Psi(\textbf{x},t)= F(t,\hbar)\int\sqrt{\rho^{\hbar}_0(\mathbf{x}_0)}
\exp\left(\frac{i}{\hbar}( S^{\hbar}_0(\textbf{x}_0)+
S_{cl}(\textbf{x},t;\textbf{x}_{0}))\right) d\textbf{x}_0$. The theorem
of the stationary phase shows that, if $\hbar$ tends towards 0, we
have $ \Psi(\textbf{x},t)\sim
\exp\left(\frac{i}{\hbar}\min_{\textbf{x}_0}( S_0(\textbf{x}_0)+
S_{cl}(\textbf{x},t;\textbf{x}_{0}))\right)$, that is to say that the
quantum action $S^{\hbar}(\textbf{x},t)$ converges to the function
\begin{equation}\label{eq:solHJminplus}
S(\textbf{x},t)=\min_{\textbf{x}_0}( S_0(\textbf{x}_0)+
S_{cl}(\textbf{x},t;\textbf{x}_{0}))
\end{equation}
which is the solution to the Hamilton-Jacobi equation
(\ref{eq:statHJ1b}) with the initial condition (\ref{eq:statHJ2b}).
Moreover, as the quantum density $\rho^{\hbar}(\textbf{x},t)$
satisfies the continuity equation (\ref{eq:Madelung2}), we deduce,
since $S^{\hbar}(\textbf{x},t)$ tends towards $S(\textbf{x},t)$, that
$\rho^{\hbar}(\textbf{x},t)$ converges to the classical density
$\rho(\textbf{x},t)$, which satisfies the continuity equation
(\ref{eq:statHJ3b}). We obtain both announced convergences.
These statistical Hamilton-Jacobi equations (\ref{eq:statHJ1b},\ref{eq:statHJ2b},\ref{eq:statHJ3b},\ref{eq:statHJ4b}) correspond to a set of classical particles prepared in the same way (same $\rho_{0}(\mathbf{x})$ and $S_{0}(\mathbf{x})$). These classical particles follow the trajectories obtained in the Eulerian representation with the velocity field $\mathbf{v}\left( \mathbf{x},t\right) =\frac{\mathbf{\nabla }S\left( \mathbf{x},t\right) }{m}$, but the density and the action are not sufficient to describe a single particle completely. To know its position at time $t$, it is necessary to know its initial position. Because the Madelung equations converge to
the statistical Hamilton-Jacobi equations, it is logical to do the same in quantum mechanics. We conclude that a \textit{statistically prepared quantum particle} is not completely described by
its wave function. It is necessary to add its
initial position and an equation defining the evolution of this
position in time. This is the de Broglie-Bohm interpretation, where the position is called the "hidden variable".
The first two postulates of quantum mechanics, describing the
quantum state and its evolution,~\cite{CT_1977} must be completed in this heterodox interpretation.
At the initial time $t=0$, the state of the particle is
given by the initial wave function $ \Psi_{0}(\textbf{x})$ (a wave
packet) and its initial position $\textbf{X}(0)$; this is the new first postulate.
The new second postulate gives the evolution of the wave function and of the position.
For a single spin-less particle in a potential
$V(\textbf{x})$, the evolution of the wave function is given by the usual
Schrödinger equation (\ref{eq:schrodinger1},\ref{eq:schrodinger2})
and the evolution of the particle position is given by
\begin{equation}\label{eq:champvitesse}
\frac{d
\textbf{X}(t)}{dt}=\frac{\textbf{J}^{\hbar}(\textbf{x},t)}{\rho^{\hbar}(\textbf{x},t)}\Big|_{\textbf{x}=\textbf{X}(t)}=\frac{\nabla
S^{\hbar}(\textbf{x},t)}{m}\Big|_{\textbf{x}=\textbf{X}(t)}
\end{equation}
where
\begin{equation}\label{eq:courant}
\textbf{J}^{\hbar}(\textbf{x},t)=\frac{\hbar}{2 m i}
( \Psi^*(\textbf{x},t)\nabla\Psi(\textbf{x},t)-\Psi(\textbf{x},t)\nabla\Psi^*(\textbf{x},t))
\end{equation}
is the usual quantum current.
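The guidance equation (\ref{eq:champvitesse}) can be illustrated with a short numerical sketch (an illustration under assumed natural units $\hbar=m=\sigma_0=1$, not the paper's own simulation code). For the free 1D Gaussian packet, the exact Bohmian trajectories are $X(t)=X(0)\,\sigma(t)/\sigma_0$ with $\sigma(t)=\sqrt{1+t^2/4}$, and integrating $dX/dt=\textbf{J}^{\hbar}/\rho^{\hbar}$ numerically reproduces this scaling:

```python
import numpy as np

hbar = m = sigma0 = 1.0   # natural units (illustrative assumption)

def psi(x, t):
    """Free Gaussian packet, an exact solution of the Schrodinger equation."""
    a = 1.0 + 1j * hbar * t / (2 * m * sigma0**2)
    return (2 * np.pi * sigma0**2) ** -0.25 * a ** -0.5 * np.exp(-x**2 / (4 * sigma0**2 * a))

def velocity(x, t, dx=1e-6):
    """Guidance formula v = J/rho = (hbar/m) Im(psi'/psi), via finite differences."""
    dpsi = (psi(x + dx, t) - psi(x - dx, t)) / (2 * dx)
    return (hbar / m) * (dpsi / psi(x, t)).imag

def trajectory(x0, T, n=4000):
    """Integrate dX/dt = v(X, t) with the midpoint rule."""
    x, dt = x0, T / n
    for k in range(n):
        t = k * dt
        xm = x + 0.5 * dt * velocity(x, t)
        x = x + dt * velocity(xm, t + 0.5 * dt)
    return x

# exact Bohmian trajectory: X(t) = X(0) * sigma(t), with sigma(t) = sqrt(1 + t^2/4)
print(trajectory(1.0, 2.0), (1 + 2.0**2 / 4) ** 0.5)   # both approximately 1.414
```

The trajectory "surfs" on the spreading packet: each initial position is carried along a streamline of the quantum current.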
In the case of a particle with spin, as in the Stern and Gerlach
experiment, the Schrödinger equation must be replaced by the
Pauli or Dirac equations.
The third postulate of quantum mechanics, which describes the measurement
operator (the observable), can be kept. But the three postulates of
measurement are not necessary: the postulate of quantization, the Born
postulate of the probabilistic interpretation of the wave function and the
postulate of the reduction of the wave function. We will show in the
following that these measurement postulates can be explained in each
example.
We replace these three postulates by a single one, the "quantum equilibrium
hypothesis",~\cite{Durr_1992,Sanz,Norsen} that describes the interaction
between the initial wave function $\Psi_0(\textbf{x})$ and the initial particle position
$\textbf{X}(0)$:
For a set of identically prepared particles having the
wave function $\Psi_0(\textbf{x})$ at $t=0$, it is assumed that
the initial particle positions $\textbf{X}(0)$ are distributed
according to:
\begin{equation}\label{eq:quantumequi}
P[\textbf{X}(0)=\textbf{x}]\equiv
P(\textbf{x},0)=|\Psi_0(\textbf{x})|^2 =\rho^{\hbar}_0(\textbf{x}).
\end{equation}
It is the Born rule at the initial time.
Then, the probability distribution ($P(\textbf{x},t)\equiv
P[\textbf{X}(t)=\textbf{x}]$) for a set of particles moving
with the velocity field $\textbf{v}^{\hbar}(\textbf{x},t)=\frac{\nabla
S^{\hbar}(\textbf{x},t)}{m}$ satisfies the property of "equivariance" of the
$|\Psi(\textbf{x},t)|^2$ probability distribution:~\cite{Durr_1992}
\begin{equation}\label{eq:quantumequit}
P[\textbf{X}(t)=\textbf{x}]\equiv
P(\textbf{x},t)= |\Psi(\textbf{x},t)|^2 =\rho^{\hbar}(\textbf{x},t).
\end{equation}
It is the Born rule at time $t$.
The de Broglie-Bohm interpretation is thus based on a continuity between classical and quantum mechanics, where the quantum particles are statistically prepared with an initial probability density satisfying the "quantum equilibrium
hypothesis" (\ref{eq:quantumequi}). This is the case in the three experiments studied here.
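Equivariance can be checked by a small Monte Carlo sketch (illustrative, in assumed natural units $\hbar=m=\sigma_0=1$). For the free Gaussian packet the Bohmian flow is the exact map $X(t)=X(0)\,\sigma(t)$ with $\sigma(t)=\sqrt{1+t^2/4}$, so positions drawn from $|\Psi_0|^2$ remain distributed as $|\Psi(\cdot,t)|^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.normal(0.0, 1.0, 200_000)   # quantum equilibrium at t = 0: X(0) ~ |psi_0|^2
t = 2.0
sigma_t = np.sqrt(1 + t**2 / 4)      # width of |psi(.,t)|^2 for the free packet
xt = x0 * sigma_t                    # transport along the exact Bohmian flow
print(xt.std(), sigma_t)             # both approximately 1.414: Born rule at time t
```

The sample spread of the transported positions matches the width of $|\Psi(\cdot,t)|^2$, which is the content of equation (\ref{eq:quantumequit}).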
We will revisit these three measurement experiments through
mathematical calculations and numerical simulations. For each one,
we present the statistical interpretation that is common to the
Copenhagen interpretation and the de Broglie-Bohm pilot wave, then
the trajectories specific to the de Broglie-Bohm interpretation.
We show that the precise definition of the initial conditions,
i.e. the preparation of the particles, plays a fundamental
methodological role.
\section{Double-slit experiment with electrons}
\label{sect:DoubleSlit}
Young's double-slit experiment\cite{Young_1802} has long been the crucial experiment
for the interpretation of the wave-particle duality. It has since been realized with massive objects such as
electrons~\cite{Davisson,Jonsson},
neutrons~\cite{Halbon}, cold neutrons~\cite{Zeilinger_1988},
atoms~\cite{Estermann}, and more recently, with coherent ensembles
of ultra-cold atoms~\cite{Shimizu}, and even with
mesoscopic single quantum objects such as C$_{60}$ and
C$_{70}$~\cite{Arndt}. For Feynman, this experiment addresses
"\emph{the basic element of the mysterious behavior [of electrons] in its most strange form.
[It is] a phenomenon which is impossible, absolutely impossible to explain in any classical
way and which has in it the heart of quantum mechanics. In reality, it contains the only
mystery.}"~\cite{Feynman_1965}
The de Broglie-Bohm interpretation and the numerical simulation help us here to revisit the
double-slit experiment with electrons performed by Jönsson in 1961 and to provide an answer to
Feynman's mystery. These simulations~\cite{Gondran_2005a} follow those conducted in 1979 by Philippidis, Dewdney
and Hiley~\cite{Philippidis_1979}, which are classics today. However, those simulations~\cite{Philippidis_1979} have some limitations because
they did not consider realistic slits. Slits of width $2\beta$, which are clearly represented by a function
$G(y)$ with $G(y)=1$ for $-\beta\leq y \leq \beta$ and $G(y)=0$ for $|y|>\beta$,
were modeled by a Gaussian function $G(y)=e^{-y^2/2 \beta^2}$.
Interference was found, but the calculation could not account for
diffraction at the edge of the slits. Consequently, these simulations could not be used to defend the de Broglie-Bohm interpretation.
\begin{figure}
\caption{Diagram of the double-slit experiment performed by Jönsson.}
\label{fig:schema-Young}
\end{figure}
Figure~\ref{fig:schema-Young} shows a diagram of the double slit experiment by Jönsson.
An electron gun emits electrons one by one in the horizontal plane, through a hole
of a few micrometers, at a velocity $v = 1.8\times10^{8}$~m/s along the horizontal $x$-axis.
After traveling for $d_1 = 35$~cm, they encounter a plate pierced with two
horizontal slits A and B, each $0.2~\mu$m wide and spaced $1~\mu$m from each other.
A screen located at $d_2 = 35$~cm after the slits collects these electrons.
The impact of each electron appears on the screen as the experiment unfolds.
After thousands of impacts, we find that the distribution of electrons on the
screen shows interference fringes.
The slits are very long along the $z$-axis, so there is no effect of diffraction
along this axis. In the simulation, we therefore only consider the wave function
along the $y$-axis; the variable $x$ will be treated classically with $x=vt$.
Electrons emerging from an electron gun are represented by the same initial
wave function $\Psi_0(y)$.
\subsection{Probability density}
Figure~\ref{fig:schema-Young2} gives a general view of the evolution of the probability
density from the source to the detection screen (a lighter shade means that the density
is higher, i.e., the probability of presence is higher). The calculations were made using
the method of Feynman path integrals~\cite{Gondran_2005a}. The wave function after the slits ($t_1=d_1/v\simeq 2\times 10^{-11}~s < t < t_1+d_2/v \simeq 4\times 10^{-11}~s$) is deduced
from the values of the wave function at slits A and B: $\Psi(y,t)= \Psi_A(y,t)+ \Psi_B(y,t)$
with $\Psi_A(y,t)=\int_A K(y,t,y_a, t_1) \Psi(y_a, t_1) dy_a$, $\Psi_B(y,t)=\int_B
K(y,t,y_b, t_1) \Psi(y_b, t_1) dy_b$ and $K(y,t,y_\alpha, t_1)=
\left(m/2i\pi\hbar (t-t_1)\right)^{1/2} e^{im(y-y_\alpha)^2/2\hbar(t-t_1)}$.
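This propagation scheme can be sketched numerically (the geometry and natural units $\hbar=m=1$ below are simplified assumptions, not Jönsson's actual values). Summing the free kernel $K$ over each slit gives $\Psi_A$ and $\Psi_B$; at the symmetric midpoint of the screen, the interference intensity $|\Psi_A+\Psi_B|^2$ is exactly twice the incoherent sum $|\Psi_A|^2+|\Psi_B|^2$:

```python
import numpy as np

hbar = m = 1.0                        # natural units (illustrative assumption)
t = 50.0                              # propagation time after the slits (assumed)
n, dy_src = 400, 0.2 / 400
ya = 0.4 + dy_src * np.arange(n)      # slit A: interval [0.4, 0.6) (assumed geometry)
yb = -ya                              # slit B, mirror image of A

def propagate(y, y_src):
    """Psi(y,t) = sum over the slit of K(y,t;y_src,0) dy_src, free kernel K."""
    K = np.sqrt(m / (2j * np.pi * hbar * t)) * np.exp(1j * m * (y - y_src) ** 2 / (2 * hbar * t))
    return K.sum() * dy_src

psiA, psiB = propagate(0.0, ya), propagate(0.0, yb)
interference = abs(psiA + psiB) ** 2          # both slits open simultaneously
incoherent = abs(psiA) ** 2 + abs(psiB) ** 2  # slits open independently
print(interference / incoherent)              # 2: constructive fringe at the centre
```

Away from the midpoint the ratio oscillates between 0 and 2, which is the fringe pattern of Figure~\ref{fig:schema-Young5}.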
\begin{figure}
\caption{General view of the evolution of the probability density from the electron gun to the detection screen.}
\label{fig:schema-Young2}
\end{figure}
\begin{figure}
\caption{Close-up of the evolution of the probability density just after the slits.}
\label{fig:schema-Young3}
\end{figure}
Figure~\ref{fig:schema-Young3} shows a close-up of the evolution of the probability density
just after the slits. We note that interference will only occur a few centimeters after the slits.
Thus, if the detection screen is $1 cm$ from the slits, there is no interference and one can
determine by which slit each electron has passed. In this experiment, the measurement is performed by the detection
screen, which only reveals the existence or absence of the fringes.
\begin{figure}
\caption{Cross-sections of the probability density at various distances after the slits: interference $\vert \Psi_A +\Psi_B \vert^2$ (both slits open simultaneously) compared with the sum of two diffractions $\vert \Psi_A \vert^2+\vert\Psi_B \vert^2$ (slits open independently).}
\label{fig:schema-Young5}
\end{figure}
The calculation method enables us to compare the evolution of the cross-section of the probability
density at various distances after the slits ($0.35 mm$, $3.5 mm$, $3.5 cm$ and $35 cm$) where the
two slits A and B are open simultaneously (interference: $ \vert \Psi_A +\Psi_B \vert^2$) with the
evolution of the sum of the probability densities where the slits A and B are open independently
(the sum of two diffractions: $ \vert \Psi_A \vert^2+\vert\Psi_B \vert^2$). Figure~\ref{fig:schema-Young5}
shows that the difference between these two phenomena appears only a few centimeters after the slits.
\subsection{Impacts on screen and de Broglie-Bohm trajectories}
The interference fringes are observed after a certain period of time when the impacts of the electrons
on the detection screen become sufficiently numerous. Orthodox quantum theory only explains the impacts
of individual particles statistically.
However, in the de Broglie-Bohm interpretation, a particle has an initial position and follows a path whose
velocity at each instant is given by equation (\ref{eq:champvitesse}). On the basis of this assumption, we conduct
a simulation experiment by drawing random initial positions of the electrons in the initial wave packet
(the "quantum equilibrium hypothesis").
\begin{figure}
\caption{100 possible quantum trajectories of electrons passing through one of the two slits.}
\label{fig:traj-Young}
\end{figure}
Figure~\ref{fig:traj-Young} shows 100 possible quantum trajectories of
electrons, each starting from a random initial position and passing through one of the two slits; we have not represented the paths
of the electrons that are stopped by the first screen. Figure~\ref{fig:zoom-traj-Young} shows
a close-up of these trajectories just after they leave their slits.
\begin{figure}
\caption{Close-up of the quantum trajectories just after the slits.}
\label{fig:zoom-traj-Young}
\end{figure}
The different trajectories explain both the impact of electrons on the detection screen and the interference fringes.
This is the simplest and most natural interpretation to explain the impact positions: "The position of an impact is
simply the position of the particle at the time of impact." This was the view defended by Einstein at the Solvay
Congress of 1927. The position is the only measured variable of the experiment.
In the de Broglie-Bohm interpretation, the impacts on the screen are the real positions of the electron, as in classical mechanics, and the three measurement postulates of quantum mechanics can be trivially explained: the position is an eigenvalue of the position operator because the position variable is identical to its operator ($X\Psi = x\Psi$), the Born postulate is satisfied through the "equivariance" property, and the reduction of the wave packet is not necessary
to explain the impacts.
Through numerical simulations, we will demonstrate how, when the Planck constant $h$ tends to 0,
the quantum trajectories converge to the classical trajectories. Of course, a physical constant cannot actually tend to 0. The convergence to classical trajectories
is obtained when the term $ht/m\rightarrow0$; so $h\rightarrow0$ is equivalent to $m\rightarrow+\infty$
(i.e. the mass of the particle grows) or $t\rightarrow0$ (i.e. the slit-screen distance $d_2\rightarrow0$).
Figure~\ref{fig:traj-Young-converg} shows the 100 trajectories that start at the same 100 initial
points when Planck's constant is divided respectively by 10, 100, 1000 and 10000 (equivalent to multiplying the mass by 10, 100, 1000 and 10000).
We obtain quantum trajectories converging to the classical trajectories, when $h$ tends to 0.
\begin{figure}
\caption{Convergence of 100 quantum trajectories to the classical trajectories when Planck's constant $h$ is divided by 10, 100, 1000 and 10000.}
\label{fig:traj-Young-converg}
\end{figure}
The study of the slits clearly shows that, in the de Broglie-Bohm interpretation, there is no physical separation between quantum mechanics and classical mechanics. All particles have quantum properties, but specifically quantum behavior only appears in certain experimental conditions: here, when the ratio $ht/m$ is sufficiently large. Interference only appears gradually, and the quantum particle behaves at any time as both a wave and a particle.
\section{The Stern-Gerlach experiment}
\label{sect:SternGerlach}
In 1922, by studying the deflection of a beam of silver atoms in a
strongly inhomogeneous magnetic field (cf.
Figure~\ref{fig:schema-SetG}) Otto Stern and Walter Gerlach
\cite{SternGerlach} obtained an experimental result that
contradicts the common sense prediction: the beam, instead of
expanding, splits into two separate beams giving two spots of
equal intensity $N^+$ and $N^-$ on a detector, at equal
distances from the axis of the original beam.
\begin{figure}
\caption{Schematic configuration of the Stern-Gerlach experiment.}
\label{fig:schema-SetG}
\end{figure}
Historically, this is the experiment which helped establish
spin quantization. Theoretically, it is the seminal experiment
posing the problem of measurement in quantum mechanics. Today it
is the theory of decoherence with the diagonalization of the
density matrix that is put forward to explain the first part of
the measurement process \cite{Zeh}.
However, although these authors consider the Stern-Gerlach
experiment as fundamental, they do not propose a calculation of the spin decoherence time.
We present an analytical calculation of this
decoherence time and of the diagonalization of the density matrix.
This solution requires the calculation of a Pauli spinor with a
spatial extension:
\begin{equation}\label{eq:psi-0}
\Psi^{0}(z) = (2\pi\sigma_{0}^{2})^{-\frac{1}{2}}
e^{-\frac{z^2}{4\sigma_0^2}}
\left( \begin{array}{c}\cos \frac{\theta_0}{2}e^{ - i\frac{\varphi_0}{2}}
\\
\sin\frac{\theta_0}{2}e^{i\frac{\varphi_0}{2}}
\end{array}
\right).
\end{equation}
Quantum mechanics textbooks \cite{Feynman_1965, CT_1977, Sakurai,
LeBellac} do not take into account the spatial extension of the
spinor (\ref{eq:psi-0}) and simply use the simplified spinor
without spatial extension:
\begin{equation}\label{eq:psi-s}
\Psi^{0} = \left( \begin{array}{c}\cos \frac{\theta_0}{2}e^{ - i\frac{\varphi_0}{2}}
\\
\sin\frac{\theta_0}{2}e^{i\frac{\varphi_0}{2}}
\end{array}
\right).
\end{equation}
However, as we shall see, the different evolutions of the spatial
extension between the two spinor components will have a key role
in the explanation of the measurement process. This spatial
extension enables us, in following the precursory works of
Takabayasi~\cite{Takabayasi_1954}, Bohm~\cite{Bohm_1955,Bohm_1993},
Dewdney et al.~\cite{Dewdney_1986} and Holland~\cite{Holland_1993},
to revisit the Stern and Gerlach experiment, to explain the decoherence and to demonstrate
the three postulates of measurement: quantization, the Born statistical
interpretation and wave function reduction.
Silver atoms contained in the oven E (Figure~\ref{fig:schema-SetG})
are heated to a high temperature and escape through a narrow
opening. A second aperture, T, selects those atoms whose velocity,
$\textbf{v}_0$, is parallel to the $y$-axis. The atomic beam crosses
the gap of the electromagnet $A_{1}$ before condensing on the
detector, $P_{1}$ . Before crossing the electromagnet, the
magnetic moment of each silver atom is oriented randomly
(isotropically). In the beam, we represent each atom by its wave
function; one can assume that at the entrance to the
electromagnet, $A_{1}$, and at the initial time $t=0$, each atom
can be approximately described by a Gaussian spinor in $z$ given
by (\ref{eq:psi-0}) corresponding to a pure state. The variable $y$
will be treated classically with $y= vt$. $\sigma_0=10^{-4}m$
corresponds to the size of the slot T along the $z$-axis. The
approximation by a Gaussian initial spinor will allow explicit
calculations. Because the slot is much wider along the $x$-axis, the
variable $x$ will be also treated classically.
To obtain an explicit solution of the Stern-Gerlach experiment, we take
the numerical values used in the Cohen-Tannoudji
textbook~\cite{CT_1977}. For the silver atom,
we have $m = 1.8\times 10^{-25}$~kg, $v_0 = 500$~m/s
(corresponding to a temperature of $T=1000$~K).
In equation~(\ref{eq:psi-0}) and in figure~\ref{fig:spin},
$\theta_0$ and $\varphi_{0}$ are the polar angles characterizing the initial orientation of the
magnetic moment, $\theta_0$ corresponds to the angle with the
$z$-axis. The experiment is a statistical mixture of pure states
where $\theta_0$ and $\varphi_0$ are randomly chosen:
$\theta_0$ is drawn uniformly from $[0,\pi]$ and
$\varphi_0$ is drawn uniformly from $[0,2\pi]$.
\begin{figure}
\caption{Initial orientation of the magnetic moment, characterized by the polar angles $\theta_0$ and $\varphi_0$.}
\label{fig:spin}
\end{figure}
The evolution of the spinor $\Psi=\left( \begin{array}{c}\psi_{+}
\\
\psi_{-}
\end{array}
\right)$ in a magnetic field
$\textbf{B}$ is then given by the Pauli equation:
\begin{equation}\label{eq:Pauli}
i\hbar \left( \begin{array}{c} \frac{\partial \psi _{+}}{\partial t}
\\
\frac{\partial \psi _{-}}{\partial t}
\end{array}
\right)
=-\frac{\hbar ^{2}}{2m} \Delta
\left( \begin{array}{c} \psi _{+}
\\
\psi _{-}
\end{array}
\right)
+\mu _{B}\textbf{B}\sigma \left( \begin{array}{c} \psi _{+}
\\
\psi _{-}
\end{array}
\right)
\end{equation}
where $\mu_B=\frac{e\hbar}{2m_e}$ is the Bohr magneton and where
$\sigma=(\sigma_{x},\sigma_{y},\sigma_{z})$ corresponds to the
three Pauli matrices. The particle first enters an electromagnetic
field $\textbf{B}$ directed along the $z$-axis, $B_{x}=B'_0x$,
$B_{y}=0$, $B_{z}=B_{0} -B'_{0} z$, with $B_{0}=5$~T,
$B'_{0}=\left| \frac{\partial B}{\partial z}\right| = 10^3$~T/m
over a length $\Delta l=1$~cm. On exiting the magnetic
field, the particle is free until it reaches the detector $P_1$
placed at a $D=20~cm$ distance.
The particle stays within the magnetic field for a time $\Delta
t=\frac{\Delta l}{v}= 2\times 10^{-5} s$. During this time
$[0,\Delta t]$, the spinor is:~\cite{Platt_1992} (see Appendix A)
\begin{widetext}
\begin{equation}\label{eq:fonctiondanschampmagnétique}
\Psi (z,t) \simeq \left(
\begin{array}{c}
\cos \frac{\theta_0}{2}
(2\pi\sigma_0^2)^{-\frac{1}{2}}
e^{-\frac{(z-\frac{\mu_{B} B'_{0}}{2 m}t^{2})^2 }
{4\sigma_0^2}} e^{i\frac{\mu_{B} B'_{0}t z -\frac{\mu^2_{B} B'^2_{0}}{6 m}t^3 + \mu_B B_0 t + \frac{\hbar \varphi_0}{2}}{\hbar }}\\
i \sin \frac{\theta_0}{2}
(2\pi\sigma_0^2)^{-\frac{1}{2}}
e^{-\frac{(z+\frac{\mu_{B} B'_{0}}{2 m}t^{2})^2}
{4\sigma_0^2}} e^{i\frac{-\mu_B B'_{0}t z -\frac{\mu^2_{B} B'^2_{0}}{6
m}t^3 - \mu_B B_0 t -\frac{\hbar \varphi_0}{2}}{\hbar }}
\end{array}
\right).
\end{equation}
\end{widetext}
After the magnetic
field, at time $t+ \Delta t$ $(t \geq 0)$ in the free space, the
spinor becomes: \cite{Bohm_1993,Dewdney_1986,Holland_1993,Platt_1992,Gondran_2005b} (see Appendix A)
{\small
\begin{equation}\label{eq:fonctionapreschampmagnétique}
\Psi (z,t+\Delta t) \simeq\left(
\begin{array}{c}
\cos \frac{\theta_0}{2}
(2\pi\sigma_0^2)^{-\frac{1}{2}}
e^{-\frac{(z-z_{\Delta}- ut)^2 }
{4\sigma_0^2}} e^{i\frac{m u z + \hbar \varphi_+}{\hbar }} \\
\sin \frac{\theta_0}{2}
(2\pi\sigma_0^2)^{-\frac{1}{2}}
e^{-\frac{(z+z_{\Delta}+ ut)^2}
{4\sigma_0^2}} e^{i\frac{-
muz + \hbar \varphi_-}{\hbar }}
\end{array}
\right)
\end{equation}}
where
\begin{equation}\label{eq:zdeltavitesse}
z_{\Delta}=\frac{\mu_B B'_{0}(\Delta
t)^{2}}{2 m}=10^{-5}m,~~~~~~u =\frac{\mu_B B'_{0}(\Delta t)}{m}=1 m/s.
\end{equation}
Equation (\ref{eq:fonctionapreschampmagnétique}) takes into
account the spatial extension of the spinor and we note that the
two spinor components have very different $z$ values. All
interpretations are based on this equation.
\subsection{The decoherence time}
We deduce from (\ref{eq:fonctionapreschampmagnétique}) the
probability density of a pure state in the free space after the
electromagnet:
{\small
\begin{eqnarray}
\rho_{\theta_0}(z,t+ \Delta t) \simeq
(2\pi\sigma_0^2)^{-\frac{1}{2}}
&&\left(\cos^{2} \frac{\theta_0}{2} e^{-\frac{(z-z_{\Delta}- ut)^2}{2\sigma_0^2}}\right.\nonumber\\
&&\quad +\left.\sin^{2} \frac{\theta_0}{2}
e^{-\frac{(z+z_{\Delta}+ ut)^2}{2\sigma_0^2}}\right)\label{eq:densitéaprèschampmagnétiqueteta}
\end{eqnarray}}
Figure~\ref{fig:ddp-SetG} shows the probability density of a pure
state (with $\theta_0= \pi/3$) as a function of $z$ at several
values of $t$ (the plots are labeled $y = vt$). The beam separation
does not appear at the end of the magnetic field ($1$~cm), but $16$~cm
further along; this is the moment of decoherence.
\begin{figure}
\caption{Probability density of a pure state ($\theta_0=\pi/3$) as a function of $z$ at several values of $t$ (the plots are labeled $y=vt$).}
\label{fig:ddp-SetG}
\end{figure}
The decoherence time, where the two spots $N^{+}$ and $N^{-}$ are
separated, is then given by the equation:
\begin{equation}\label{eq:tempsdecoherence}
t_{D} \simeq \frac{3 \sigma_{0}-z_\Delta}{u}=3 \times
10^{-4}s.
\end{equation}
The decoherence time is usually defined as the time required to diagonalize
the marginal density matrix of the spin variables associated with a
pure state \cite{Roston}:
\begin{equation}\label{eq:matricedensite1}
\rho^{S}(t) =\left(
\begin{array}{cc}
\int|\psi_+(z,t)|^2 dz & \int\psi_+(z,t)\psi^*_-(z,t) dz\\
\int\psi_-(z,t)\psi^*_+(z,t) dz & \int|\psi_-(z,t)|^2 dz\\
\end{array}
\right)
\end{equation}
For $t\geq t_D$, the product $\psi_+(z,t+ \Delta t)\psi^*_-(z,t+
\Delta t)$ is numerically null, and the density matrix of the
initial pure state
(\ref{eq:fonctionapreschampmagnétique}) becomes diagonal:
\begin{equation}\label{eq:matricedensite3}
\rho^{S}(t+ \Delta t) = (2\pi\sigma_0^2)^{-1}\left(
\begin{array}{cc}
\cos^2\frac{\theta_0}{2} & 0\\
0 & \sin^2\frac{\theta_0}{2} \\
\end{array}
\right)
\end{equation}
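The off-diagonal elements of (\ref{eq:matricedensite1}) are bounded by the overlap of the two Gaussian envelopes of (\ref{eq:fonctionapreschampmagnétique}), centered at $\pm(z_\Delta+ut)$. The sketch below evaluates this overlap by a midpoint Riemann sum, checks it against the closed form $e^{-(z_\Delta+ut)^2/2\sigma_0^2}$, and verifies that it falls below $1\%$ at $t_D$ (the oscillatory phase $e^{2imuz/\hbar}$, ignored here, only suppresses it further).

```python
import math

# Overlap of the two Gaussian envelopes centered at +/- a(t), a(t) = z_Delta + u*t.
# This bounds the off-diagonal element of the spin density matrix.
sigma_0, z_delta, u = 1.0e-4, 1.0e-5, 1.0

def envelope_overlap(t, n=4000):
    a = z_delta + u * t
    lo, hi = -10 * sigma_0, 10 * sigma_0
    dz = (hi - lo) / n
    norm = (2 * math.pi * sigma_0**2) ** -0.25
    s = 0.0
    for i in range(n):
        z = lo + (i + 0.5) * dz   # midpoint rule
        gp = norm * math.exp(-(z - a) ** 2 / (4 * sigma_0**2))
        gm = norm * math.exp(-(z + a) ** 2 / (4 * sigma_0**2))
        s += gp * gm * dz
    return s

t_D = 3.0e-4
print(envelope_overlap(0.0))   # ~ exp(-z_delta^2 / (2 sigma_0^2)) ~ 0.995
print(envelope_overlap(t_D))   # < 0.01: the density matrix is diagonal
```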
\subsection{Proof of the postulates of quantum measurement}
We then obtain atoms with a spin oriented only along the $z$-axis (positively
or negatively). Let us consider the spinor $\Psi(z,t+ \Delta t) $
given by equation~(\ref{eq:fonctionapreschampmagnétique}).
Experimentally, we do not measure the spin directly, but the
$\widetilde{z}$ position of the particle impact on $P_1$ (Figure~\ref{fig:impacts-SetG}).
\begin{figure}
\caption{Impacts of the silver atoms on the detector $P_1$; $\widetilde{z}$ is the measured position of an impact.\label{fig:impacts-SetG}}
\end{figure}
If \ $\widetilde{z}\in N^+$, the term $\psi_-$ of
(\ref{eq:fonctionapreschampmagnétique}) is numerically equal to zero and the
spinor $\Psi$ is proportional to $\binom{1}
{0} $, one of the eigenvectors of the spin operator $S_z= \frac{\hbar}{2} \sigma_z $: $ \Psi (\tilde{z},t+\Delta t) \simeq
(2\pi\sigma_0^2)^{-\frac{1}{4}} \cos \frac{\theta_0}{2}
e^{-\frac{(\tilde{z}_1-z_{\Delta}- ut)^2 }
{4\sigma_0^2}} e^{i\frac{m u \tilde{z}_1 + \hbar \varphi_+}{\hbar }}\binom{1}
{0}$. Then, we have $S_z\Psi =\frac{\hbar}{2} \sigma_z \Psi= +\frac{\hbar}{2} \Psi$.
If $\widetilde{z}\in N^-$, the term $\psi_+$ of
(\ref{eq:fonctionapreschampmagnétique}) is numerically equal to zero and the
spinor $\Psi $ is proportional to $\binom{0}
{1}$, the other eigenvector of the spin operator $S_z$: $ \Psi (\tilde{z},t+\Delta t) \simeq
(2\pi\sigma_0^2)^{-\frac{1}{4}}\sin \frac{\theta_0}{2}
e^{-\frac{(\tilde{z}_2+z_{\Delta}+ ut)^2}
{4\sigma_0^2}} e^{i\frac{-
mu\tilde{z}_2 + \hbar \varphi_-}{\hbar }}\binom{0}
{1}$. Then, we have $S_z\Psi =\frac{\hbar}{2} \sigma_z \Psi= -\frac{\hbar}{2} \Psi$.
Therefore, the measured value of the spin corresponds to an eigenvalue of the spin operator;
this proves the postulate of quantization.
Equation (\ref{eq:matricedensite3}) gives the probability $
\cos^{2} \frac{\theta_0}{2} $ (resp. $\sin^{2} \frac{\theta_0}{2}
$) of measuring the particle in the spin state $+ \frac{\hbar}{2}$
(resp. $-\frac{\hbar}{2}$); this proves the Born probabilistic postulate.
By drilling a hole in the detector $P_1$ at the location of the
spot $N^+$ (Figure~\ref{fig:schema-SetG}), we select all the atoms that are in the spin
state $|+\rangle= \binom{1}{0}$. The new spinor of these atoms is obtained by
making the component $\Psi_-$ of the spinor $\Psi$ identically
zero (and not only numerically equal to zero) at the time when the
atom crosses the detector $P_1$; at this time the component $\Psi_-$
is indeed stopped by the detector $P_1$. The future trajectory of the
silver atom after crossing the detector $P_1$ is guided by this new
(normalized) spinor. The wave-function reduction is therefore not
caused by the electromagnet, but by the detector $P_1$, which
irreversibly eliminates the spinor component $\Psi_-$.
\subsection{Impacts and quantizations explained by de Broglie-Bohm trajectories}
Finally, it remains to explain the individual
impacts of the silver atoms. The spatial extension of the spinor
(\ref{eq:psi-0}) allows us to take into account the particle's
initial position $z_0$ and to introduce the de Broglie-Bohm
trajectories
\cite{Dewdney_1986,Holland_1993,Broglie_1927,Bohm_1952,Challinor},
which are the natural assumption for explaining the individual
impacts.
Figure~\ref{fig:SetG-10traj} presents, for a silver atom with the
initial spinor orientation $(\theta_0=\frac{\pi}{3},\varphi_0=0)$, a
plot in the $(Oyz)$ plane of a set of 10 trajectories whose initial
position $z_0$ has been randomly chosen from a Gaussian
distribution with standard deviation $\sigma_{0}$. The spin
orientations $\theta(z,t)$ are represented by arrows.
\begin{figure}
\caption{Ten silver-atom trajectories in the $(Oyz)$ plane for the initial spinor orientation $(\theta_0=\frac{\pi}{3},\varphi_0=0)$; the arrows represent the spin orientations $\theta(z,t)$.\label{fig:SetG-10traj}}
\end{figure}
The final orientation, obtained after the decoherence time
$t_{D}$, depends on the initial particle position $z_{0}$ in the
spatially extended spinor and on the initial angle
$\theta_{0}$ of the spin with the $z$-axis. We obtain
$+\frac{\pi}{2}$ if $z_0
> z^{\theta_{0}}$ and $-\frac{\pi}{2}$ if $z_0
< z^{\theta_{0}}$ with
\begin{equation}\label{eq:seuilpolarization}
z^{\theta_{0}}=\sigma_0 F^{-1}\left(\sin^{2}\frac{\theta_{0}}{2}\right)
\end{equation}
where $F$ is the cumulative distribution function of the standard
normal distribution. If we ignore the position of the atom within its
wave function, we lose the determinism given by equation
(\ref{eq:seuilpolarization}).
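Equation (\ref{eq:seuilpolarization}) can be evaluated directly with the inverse of the standard normal cumulative distribution function $F$; the sketch below also checks that the fraction of initial positions $z_0 > z^{\theta_0}$ under the Gaussian distribution reproduces the Born weight $\cos^2\frac{\theta_0}{2}$.

```python
import math
from statistics import NormalDist

# Threshold z^{theta_0} = sigma_0 * F^{-1}(sin^2(theta_0/2)), eq. (seuilpolarization).
sigma_0 = 1.0e-4
theta_0 = math.pi / 3

z_threshold = sigma_0 * NormalDist().inv_cdf(math.sin(theta_0 / 2) ** 2)
print(f"z^theta0 = {z_threshold:.3e} m")

# Consistency: P(z_0 > z^{theta_0}) for z_0 ~ N(0, sigma_0^2) is cos^2(theta_0/2).
p_up = 1 - NormalDist(0.0, sigma_0).cdf(z_threshold)
print(f"P(spin up) = {p_up:.3f}")   # 0.750 for theta_0 = pi/3
```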
In the de Broglie-Bohm interpretation with a realistic interpretation
of the spin, the "measured" value is not independent of the context of
the measurement: it is contextual. This conforms to the
Kochen-Specker theorem~\cite{Kochen 1967}: realism and
non-contextuality are inconsistent with certain predictions of
quantum mechanics.
Now let us consider a mixture of pure states in which the initial
orientation ($\theta_0,\varphi_0$) of the spinor has been
randomly chosen. These are the conditions of the original Stern and
Gerlach experiment. Figure~\ref{fig:SetG-10trajectoires}
shows a simulation of 10 quantum trajectories of silver atoms
whose initial positions $z_0$ are also randomly chosen.
\begin{figure}
\caption{Ten silver-atom trajectories for which the initial orientations $(\theta_0,\varphi_0)$ and the initial positions $z_0$ have been randomly chosen.\label{fig:SetG-10trajectoires}}
\end{figure}
Finally, the de
Broglie-Bohm trajectories provide a clear interpretation of the spin
measurement in quantum mechanics. There is an interaction with the
measuring apparatus, as is generally stated, and there is indeed a
minimum time required for the measurement. However, this measurement and
this time do not have the meaning that is usually attributed to them.
The result of the Stern-Gerlach experiment is not the measure of
the spin projection along the $z$-axis, but the orientation of the
spin either in the direction of the magnetic field gradient, or in
the opposite direction. It depends on the position of the particle
in the wave function. We have therefore a simple explanation for
the non-compatibility of spin measurements along different axes.
The measurement duration is then the time necessary for the
particle to point its spin in the final direction.
\section{EPR-B experiment}
\label{sect:EPR-B}
Nonseparability is one of the most puzzling aspects of quantum
mechanics. For over thirty years, the EPR-B, the spin version of
the Einstein-Podolsky-Rosen experiment~\cite{EPR} proposed by
Bohm~\cite{Bohm_1951}, the Bell theorem~\cite{Bell64}
and the BCHSH inequalities~\cite{Bell64,BCHSH,Bell_1987} have been
at the heart of the debate on hidden variables and non-locality.
Many experiments since Bell's paper have demonstrated violations
of these inequalities and have vindicated quantum
theory~\cite{Clauser_1972}.
Now, EPR pairs of massive atoms are also
considered~\cite{Beige}. The usual conclusion of these
experiments is to reject non-local realism for two reasons:
the impossibility of decomposing a pair of entangled atoms into
two states, one for each atom, and the impossibility of
interaction faster than the speed of light.
Here, we show that there exists a de Broglie-Bohm interpretation which
overcomes these two objections. To
demonstrate this non-local realism, two methodological conditions
are necessary. The first condition is the same as in the Stern-Gerlach
experiment: the solution for the entangled state is obtained by
solving the Pauli equation
from an initial singlet wave function with a spatial extension:
\begin{equation}\label{eq:7psi-0}
\Psi_{0}(\textbf{r}_A,\textbf{r}_B) =\frac{1}{\sqrt{2}}f(\textbf{r}_A) f(\textbf{r}_B)(|+_{A}\rangle |-_{B}\rangle - |-_{A}\rangle |
+_{B}\rangle),
\end{equation}
and not from a simplified wave function without spatial extension:
\begin{equation}\label{eq:7psi-1}
\Psi_{0}(\textbf{r}_A,\textbf{r}_B) =\frac{1}{\sqrt{2}}(|+_{A}\rangle |-_{B}\rangle - |-_{A}\rangle |
+_{B}\rangle).
\end{equation}
The function $f$ and the vectors $|\pm\rangle$ are defined below.
The resolution in space of the Pauli equation is essential: it
enables the spin measurement by spatial quantization and explains
the determinism and the disentangling process. To explain the
interaction and the evolution between the spin of the two
particles, we consider a two-step version of the EPR-B experiment.
It is our second methodological condition. A first causal
interpretation of EPR-B experiment was proposed in 1987 by
Dewdney, Holland and
Kyprianidis~\cite{Dewdney_1987b} using these two conditions.
However, this interpretation had a flaw~\cite{Holland_1993} (p.~418): the
spin modulus of each particle depends directly on the singlet wave
function, and thus varies during
the experiment from 0 to $\frac{\hbar}{2}$. We present a de
Broglie-Bohm interpretation that avoids this flaw~\cite{Gondran_2012}.
\begin{figure}
\caption{Schematic configuration of the Einstein-Podolsky-Rosen-Bohm experiment.\label{fig:expEPR}}
\end{figure}
Figure \ref{fig:expEPR} presents the Einstein-Podolsky-Rosen-Bohm
experiment. A source $S$ creates, at $O$, pairs of identical atoms A
and B with opposite spins. The atoms A and B separate along
the $y$-axis in opposite directions and head towards two identical
Stern-Gerlach apparatuses $\textbf{E}_\textbf{A}$ and $\textbf{E}_\textbf{B}$. The
electromagnet $\textbf{E}_\textbf{A}$ "measures" the spin of A along the
$z$-axis and the electromagnet $\textbf{E}_\textbf{B}$ "measures" the spin of
B along the $z'$-axis, which is obtained after a rotation by an
angle $\delta$ around the $y$-axis.
entangled state is the singlet state (\ref{eq:7psi-0}) where
$\textbf{r}=(x,z)$,
$f(\textbf{r})=(2\pi\sigma_{0}^{2})^{-\frac{1}{2}}
e^{-\frac{x^2 + z^2}{4\sigma_0^2}}$, $|\pm_{A}\rangle$ and $|\pm_{B}\rangle$ are the eigenvectors
of the operators $\sigma_{z_A}$ and $\sigma_{z_B}$: $\sigma_{z_A}
|\pm_{A}\rangle= \pm |\pm_{A}\rangle$, $\sigma_{z_B}
|\pm_{B}\rangle= \pm |\pm_{B}\rangle$. We treat the $y$-dependence
classically: speed $-v_y$ for A and $v_y$ for B. The wave
function $\Psi(\textbf{r}_A, \textbf{r}_B, t)$ of the two
identical particles A and B, electrically neutral and with
magnetic moments $\mu_0$, subject to magnetic fields
${\textbf{E}_\textbf{A}}$ and ${\textbf{E}_\textbf{B}}$, admits on the basis of
$|\pm_{A}\rangle$ and $|\pm_{B}\rangle$ four components
$\Psi^{a,b}(\textbf{r}_A, \textbf{r}_B, t)$ and satisfies the
two-body Pauli equation~\cite{Holland_1993} (p. 417):
\begin{eqnarray}
i\hbar \frac{\partial \Psi^{a,b}}{\partial t}
=&&\left(-\frac{\hbar^2}{2 m}\Delta_A -\frac{\hbar^2}{2 m}\Delta_B\right)\Psi^{a,b}\nonumber\\
&+&\mu B^{\textbf{E}_\textbf{A}}_j (\sigma_j)_{c}^{a}\Psi^{c,b}
+\mu B^{\textbf{E}_\textbf{B}}_j (\sigma_j)_{d}^{b}\Psi^{a,d}\label{eq:7Paulideuxcorps1}
\end{eqnarray}
with the initial conditions:
\begin{equation}\label{eq:7Paulideuxcorps2}
\Psi^{a,b}(\textbf{r}_A, \textbf{r}_B,
0)=\Psi_0^{a,b}(\textbf{r}_A, \textbf{r}_B)
\end{equation}
where $\Psi_0^{a,b}(\textbf{r}_A, \textbf{r}_B)$ corresponds to
the singlet state (\ref{eq:7psi-0}).
To obtain an explicit solution of the EPR-B experiment, we take
the numerical values of the Stern-Gerlach experiment.
One of the difficulties of the interpretation of the EPR-B
experiment is the existence of two simultaneous measurements. By
performing these measurements one after the other, the interpretation
of the experiment is made easier. That is the purpose of the
two-step version of the EPR-B experiment studied below.
\subsection{First step EPR-B: Spin measurement of A}
In the first step we make a Stern and Gerlach "measurement" for
atom A, on a pair of particles A and B in a singlet state. This
is the experiment first proposed in 1987 by Dewdney, Holland and
Kyprianidis~\cite{Dewdney_1987b}.
Consider that at time $t_0$ particle A arrives at the entrance
of the electromagnet $\textbf{E}_\textbf{A}$. At the exit of the magnetic
field $\textbf{E}_\textbf{A}$, at time $t_0+ \triangle t + t$, the wave
function (\ref{eq:7psi-0}) has become~\cite{Gondran_2012}:
{\small
\begin{eqnarray}
\Psi(\textbf{r}_A, \textbf{r}_B, t_0 + \triangle t+ t )=
\frac{1}{\sqrt{2}} f(\textbf{r}_B)\times (
&& f^{+}(\textbf{r}_A,t) |+_{A}\rangle | -_{B}\rangle \nonumber\\
&-& f^{-}(\textbf{r}_A,t) |-_{A}\rangle
| +_{B}\rangle)\nonumber\\&&\label{eq:7psiexperience1}
\end{eqnarray}}
with
\begin{equation}\label{eq:7fonction}
f^{\pm}(\textbf{r},t)\simeq f(x, z \mp z_\Delta \mp ut)
e^{i(\frac{\pm muz}{\hbar}+ \varphi^\pm (t))}
\end{equation}
where $ z_{\Delta} $ and $u $ are given by equation (\ref{eq:zdeltavitesse}).
The atomic density $\rho(z_A, z_B,t_0 + \Delta t + t)$ is obtained by
integrating $\Psi^{*}(\textbf{r}_A,\textbf{r}_B, t_0 + \triangle t
+ t )\Psi(\textbf{r}_A, \textbf{r}_B, t_0 + \triangle t+ t)$ over
$x_A$ and $x_B$:
\begin{eqnarray}
\rho(z_A, z_B,t_0 + \Delta t+ t) &=& \left((2\pi\sigma_0^2)^{-\frac{1}{2}}
e^{-\frac{z_B^2}{2\sigma_0^2}}\right)\label{eq:7densitéaprèschampmagnétiqueAB}\\
&\times&\left((2\pi\sigma_0^2)^{-\frac{1}{2}}
\frac{1}{2}\left(e^{-\frac{(z_A-z_{\Delta}- ut)^2}{2\sigma_0^2}}+
e^{-\frac{(z_A+z_{\Delta}+ ut)^2}{2\sigma_0^2}}\right)\right).\nonumber
\end{eqnarray}
We deduce that the beam of particle A is divided into two, while
the beam of particle B stays undivided:
\begin{itemize}
\item the density of A is the same, whether particle A is entangled with B or not;
\item the density of B is not affected by the "measurement" of A.
\end{itemize}
Our first conclusion is: the position of B does not depend
on the measurement of A; only the spins are involved. We conclude
from equation (\ref{eq:7psiexperience1}) that the spins of A and B
remain opposite throughout the experiment. These are the two
properties used in the causal interpretation.
\subsection{Second step EPR-B: Spin measurement of B}
The second step is a continuation of the first and corresponds to
the EPR-B experiment broken down into two steps. On a pair of
particles A and B in a singlet state, we first perform a Stern and
Gerlach measurement on atom A between $t_0$ and $t_0+
\triangle t+ t_D$, and then a Stern and Gerlach measurement on atom B,
with an electromagnet $\textbf{E}_\textbf{B}$ forming an angle $\delta$
with $\textbf{E}_\textbf{A}$, between $t_0 + \triangle t+ t_D$ and
$t_0+ 2( \triangle t + t_D)$.
At the exit of magnetic field $\textbf{E}_\textbf{A}$, at time $t_0 +
\triangle t +t_D$, the wave function is given by
(\ref{eq:7psiexperience1}). Immediately after the measurement of
A, still at time $t_0+ \triangle t+ t_D $, the wave function of B
depends on the result $\pm$ of the measurement of A:
\begin{equation}\label{eq:7psiexperience1BcondmesA}
\Psi_{B /\pm A}(\textbf{r}_B, t_0 + \triangle t+ t_D )=
f(\textbf{r}_B) |\mp_{B}\rangle.
\end{equation}
Then, the measurement of B at time $t_0+ 2( \triangle t + t_D)$
yields, in this two-step version of the EPR-B experiment, the same
results for spatial quantization and correlations of spins as in
the EPR-B experiment.
\subsection{Causal interpretation of the EPR-B experiment}
We assume, at the creation of the two entangled particles A and B,
that each of the two particles A and B has an initial wave
function with opposite spins: $\Psi_0^A(\textbf{r}_A, \theta^A_0,
\varphi^A_0)= f(\textbf{r}_A)
\left(\cos\frac{\theta^A_0}{2}|+_{A}\rangle +
\sin\frac{\theta^A_0}{2}e^{i \varphi^A_0}|-_{A}\rangle\right)$ and
$\Psi_0^B(\textbf{r}_B , \theta^B_0, \varphi^B_0)= f(\textbf{r}_B)
\left(\cos\frac{\theta^B_0}{2}|+_{B}\rangle +
\sin\frac{\theta^B_0}{2}e^{i \varphi^B_0}|-_{B}\rangle\right)$
with $\theta_0^B= \pi-\theta_0^A$ and $\varphi_0^B= \varphi_0^A
-\pi$. The two particles A and B are statistically prepared as in the Stern and Gerlach experiment. Then the Pauli principle tells us that the two-body wave
function must be antisymmetric; after calculation we find the same
singlet state (\ref{eq:7psi-0}):
\begin{eqnarray}
\Psi_0(\textbf{r}_A,\theta^A, \varphi^A,\textbf{r}_B,\theta^B, \varphi^B)=&-& e^{i \varphi^A} f(\textbf{r}_A) f(\textbf{r}_B)\\
&\times& \left(|+_{A}\rangle
|-_{B}\rangle - |-_{A}\rangle|+_{B}\rangle\right).\nonumber
\end{eqnarray}
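This calculation can be verified numerically: antisymmetrizing the tensor product of the two one-particle spinors with $\theta^B=\pi-\theta^A$ and $\varphi^B=\varphi^A-\pi$ reproduces the singlet state up to the global factor $-e^{i\varphi^A}$, for any orientation. The sketch below works in spin space only, the common factor $f(\textbf{r}_A)f(\textbf{r}_B)$ being omitted.

```python
import cmath, math

# Antisymmetrized product of two opposite-spin spinors vs the singlet state.
theta, phi = 0.7, 1.3   # arbitrary initial orientation of particle A

spinor_A = [math.cos(theta / 2), math.sin(theta / 2) * cmath.exp(1j * phi)]
tB, pB = math.pi - theta, phi - math.pi   # opposite orientation for B
spinor_B = [math.cos(tB / 2), math.sin(tB / 2) * cmath.exp(1j * pB)]

# Components ordered |++>, |+->, |-+>, |-->
psi = [spinor_A[a] * spinor_B[b] - spinor_B[a] * spinor_A[b]
       for a in range(2) for b in range(2)]
singlet = [0, -cmath.exp(1j * phi), cmath.exp(1j * phi), 0]  # -e^{i phi}(|+-> - |-+>)
print(all(abs(x - y) < 1e-12 for x, y in zip(psi, singlet)))  # True
```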
Thus, we can consider that the singlet wave function is the wave
function of a family of two fermions A and B with opposite spins:
the direction of the initial spins of A and B exists, but is not
\textit{known}. It is a local hidden variable which must therefore
be added to the initial conditions of the model.
This is not the interpretation followed by the Bohm
school~\cite{Dewdney_1987b,Dewdney_1986,Bohm_1993,Holland_1993} in the
interpretation of the singlet wave function; they do not assume the
existence of the wave functions $\Psi_0^A(\textbf{r}_A, \theta^A_0,
\varphi^A_0)$ and $\Psi_0^B(\textbf{r}_B, \theta^B_0, \varphi^B_0)$ for
each particle, but only the singlet state
$\Psi_0(\textbf{r}_A,\theta^A, \varphi^A,\textbf{r}_B,\theta^B,
\varphi^B)$. Consequently, they suppose a zero spin for each particle
at the initial time and a spin modulus that varies during the
experiment from 0 to $\frac{\hbar}{2}$~\cite{Holland_1993} (p.~418).
Here, we assume that at the initial time we know the spin of each
particle (given by each initial wave function) and the initial
position of each particle.
\textbf{Step 1: spin measurement of A}
In equation (\ref{eq:7psiexperience1}), particle A can be
considered independent of B. We can therefore assign it the wave
function
\begin{eqnarray}
\Psi^A(\textbf{r}_A, t_0+ \triangle t+ t )=&&
\cos\frac{\theta_0^A}{2} f^{+}(\textbf{r}_A,t)|+_{A}\rangle\nonumber\\ &&+
\sin\frac{\theta_0^A}{2}e^{i
\varphi_0^A}f^{-}(\textbf{r}_A,t)|-_{A}\rangle\label{eq:fonctiondondeA}
\end{eqnarray}
which is the wave function of a free particle in a Stern-Gerlach
apparatus whose initial spin is given by
($\theta_0^A,\varphi_0^A$). For an initial polarization
($\theta_0^A,\varphi_0^A$) and an initial position $z_0^A$, we
obtain, in the de Broglie-Bohm interpretation~\cite{Bohm_1993} of the
Stern and Gerlach experiment, an evolution of the position
$z_A(t)$ and of the spin orientation
$\theta^A(z_A(t),t)$ of A~\cite{Gondran_2005b}.
The case of particle B is different. B follows a rectilinear
trajectory with $y_B(t)= v_yt$, $z_B(t)=z_0^B$ and $x_B(t)=x_0^B$.
By contrast, the orientation of its spin follows the
orientation of the spin of A: $\theta^B(t)= \pi -
\theta^A(z_A(t),t)$ and $\varphi^B(t)= \varphi^A(z_A(t),t)- \pi$. We
can then associate with B the wave function:
\begin{eqnarray}
\Psi^B(\textbf{r}_B, t_0+ \triangle t+ t )=f(\textbf{r}_B)
&&\left( \cos\frac{\theta^B(t)}{2} |+_{B}\rangle \right.\label{eq:fonctiondondeB}\\
&&\quad\left.+
\sin\frac{\theta^B(t)}{2}e^{i
\varphi^B(t)}|-_{B}\rangle\right).\nonumber
\end{eqnarray}
This wave function is special, because it depends upon the initial
conditions of A (position and spin). The orientation of the spin of
particle B is driven by particle A \textit{through the
singlet wave function}. Thus, the singlet wave function is the
non-local variable.
\textbf{Step 2: Spin measurement of B }
At the time $t_0 + \Delta t + t_D$, immediately after the
measurement of A, $\theta^B(t_0 + \Delta t + t_D)= \pi$ or $0$,
depending on the value of $\theta^A(z_A(t),t)$, and the wave function
of B is given by~(\ref{eq:7psiexperience1BcondmesA}).
The frame $(Ox'yz')$ corresponds to the frame $(Oxyz)$
after a rotation of an angle $\delta$ around the $y$-axis.
$\theta^B$ corresponds to the B-spin angle with the $z$-axis,
and $\theta'^B$ to the B-spin angle with the $z'$-axis,
then $\theta'^B(t_0 + \Delta t + t_D)= \pi+\delta$ or $\delta$.
In this second step, we are exactly in the
case of a particle in a simple Stern and Gerlach experiment (with magnet $\textbf{E}_\textbf{B}$)
with a specific initial polarization equal to $\pi+\delta$ or
$\delta$, not random as in step 1. The
measurement of B, at time $t_0+ 2( \triangle t +t_D)$, then gives, in
this interpretation of the two-step version of the EPR-B
experiment, the same results as in the EPR-B experiment.
\subsection{Physical explanation of non-local influences}
From the wave function of the two entangled particles, we find the
spins, the trajectories and also a wave function for each of the two
particles. In this interpretation, the quantum particle has a local
position like a classical particle, but it also has a non-local
behavior through the wave function. It is thus the wave function that
creates the non-classical properties. We can keep a local realist view
of the world for the particle, but we must add to it a non-local
vision through the wave function. As we saw in step 1, the non-local
influences in the EPR-B experiment only concern the spin orientation,
not the motion of the particles themselves. Indeed, only the spins are
entangled in the wave function~(\ref{eq:7psi-0}), not the positions
and motions, unlike in the original EPR experiment. This is a key
point in the search for a physical explanation of non-local
influences.
The simplest explanation of this non-local influence is to
reintroduce the concept of the ether (or of a preferred frame), but in
the new form given by Lorentz and Poincaré and by Einstein in
1920~\cite{Einstein_1920}: "\textit{Recapitulating, we may say that according to the general theory
of relativity space is endowed with physical qualities; in this sense,
therefore, there exists an ether. According to the general theory of
relativity space without ether is unthinkable; for in such space there
not only would be no propagation of light, but also no possibility of
existence for standards of space and time (measuring-rods and clocks),
nor therefore any space-time intervals in the physical sense. But this
ether may not be thought of as endowed with the quality characteristic
of ponderable media, as consisting of parts which may be tracked
through time. The idea of motion may not be applied to it.}"
Taking into account the new experiments, especially Aspect's
experiments, Popper~\cite{Popper_1982} (p.~XVIII) defended a
similar view in 1982:
"\textit{I feel not quite convinced that the experiments are
correctly interpreted; but if they are, we just have to accept
action at a distance. I think (with J.P. Vigier) that this would
of course be very important, but I do not for a moment think that
it would shake, or even touch, realism. Newton and Lorentz were
realists and accepted action at a distance; and Aspect's
experiments would be the first crucial experiment between
Lorentz's and Einstein's interpretation of the Lorentz
transformations.}"
Finally, in the de Broglie-Bohm interpretation, the EPR-B
experiments on non-locality are therefore of great importance, not to
eliminate realism and determinism, but, as Popper said, to
rehabilitate the existence of a certain type of ether, like Lorentz's
ether and Einstein's ether of 1920.
\section{Conclusion}
In the three experiments presented in this article, the variable that is ultimately measured is the position of the particle, given by its impact on a screen.
In the double-slit experiment, the set of these positions gives the interference pattern;
in the Stern-Gerlach and EPR-B experiments, it is the position of
the particle impact that defines the spin value.
It is this position that the de Broglie-Bohm interpretation
adds to the wave function to define a complete state of the quantum particle.
The de Broglie-Bohm interpretation is then based only on the initial
conditions $\Psi^0(x) $ and $X(0)$ and the evolution
equations~(\ref{eq:schrodinger1}) and~(\ref{eq:champvitesse}).
If we add as initial condition the "quantum equilibrium hypothesis"~(\ref{eq:quantumequi}),
we have seen that we can deduce, for these three examples,
the three postulates of measurement.
These three postulates are not necessary if we solve the time-dependent Schrödinger equation (double-slit experiment)
or the Pauli equation with spatial extension (Stern-Gerlach and EPR-B experiments). However, these simulations enable us to understand those experiments better:
in the double-slit experiment, the interference pattern appears only a few centimeters after the slits, showing the continuity with classical mechanics;
in the Stern-Gerlach experiment, the spin up/down measurement also appears only after a given time, called the decoherence time; in the EPR-B experiment, only the spin of B is affected by the spin measurement of A, not its density.
Moreover, the de Broglie-Bohm
trajectories propose a clear explanation of the spin measurement in quantum mechanics.
However, we have seen two very different cases in the measurement process.
In the first case (double-slit experiment), there is no influence of
the measuring apparatus (the screen) on the quantum particle. In the second
case (Stern-Gerlach and EPR-B experiments), there is an interaction between the
measuring apparatus (the magnetic field) and the quantum particle. The result
of the measurement depends on the position of the particle in the wave function.
The measurement duration is then the time necessary for the stabilisation of the result.
This heterodox interpretation clearly explains experiments involving a set of quantum particles that are statistically prepared. These particles satisfy the "quantum equilibrium hypothesis", and the de Broglie-Bohm interpretation establishes continuity with classical mechanics. However, there is no reason why the de Broglie-Bohm interpretation should extend to quantum particles that are not statistically prepared. This situation occurs when the wave packet corresponds to a
quasi-classical coherent state, introduced in 1926 by
Schr\"odinger~\cite{Schrodinger_26}. The field quantum theory and
the second quantification are built on these coherent
states~\cite{Glauber_65}. It is also the case, for the hydrogen atom, of
localized wave packets whose motion are on the classical trajectory
(an old dream of Schr\"odinger's). Their existence was predicted in 1994 by
Bialynicki-Birula, Kalinski, Eberly, Buchleitner and
Delande~\cite{Bialynicki_1994, Delande_1995, Delande_2002}, and
discovered recently by Maeda and Gallagher~\cite{Gallagher} on
Rydberg atoms. For these non-statistically-prepared quantum particles, we have shown~\cite{Gondran2011,Gondran2012a} that the
natural interpretation is the Schrödinger interpretation proposed at the Solvay congress in 1927. Everything happens as if the interpretation of quantum mechanics depended on the preparation of the particles (statistically prepared or not). It is perhaps
a response to the "theory of the double solution" that Louis de Broglie had been seeking since 1927: "\textit{I introduced as the "double solution theory" the idea that
it was necessary to distinguish two different solutions that are both linked to the wave equation, one that I called
wave $u$, which was a real physical wave represented by a singularity as it was not normalizable due to a local anomaly defining the particle, the other one as Schr\"odinger's $\Psi$ wave, which is a probability representation as it is normalizable without singularities.}"~\cite{Broglie}
\appendix
\section{Calculating the spinor evolution in the Stern-Gerlach experiment}
In the magnetic field $B=(B_x,0,B_z)$, the Pauli equation
(\ref{eq:Pauli}) gives coupled Schrödinger equations for each
spinor component
\begin{eqnarray}\label{eq:Paulicomplet}
i\hbar \frac{\partial\psi_{\pm}}{\partial t}(x,z,t)=
&-& \frac{\hbar^2}{2 m} \nabla^{2} \psi_{\pm }(x,z,t) \nonumber\\
&\pm&\mu_B (B_0 - B_0' z) \psi_{\pm}(x,z,t) \nonumber\\
&\mp&i \mu_B B_0' x\psi_{\mp}(x,z,t).
\end{eqnarray}
If one effects the transformation \cite{Platt_1992}
\begin{equation}\nonumber
\psi_{\pm}(x,z,t)= \exp \left(\pm \frac{i \mu_B B_0 t}{\hbar}\right)
\overline{\psi}_{\pm}(x,z,t)
\end{equation}
equation~(\ref{eq:Paulicomplet}) becomes
\begin{eqnarray*}
i\hbar \frac{\partial\overline{\psi}_{\pm}}{\partial t}(x,z,t)=
&-&\frac{\hbar^{2}}{2 m} \nabla^{2} \overline{\psi}_{\pm}(x,z,t)\\
&\mp& \mu_B B'_0 z \overline{\psi}_{\pm}(x,z,t) \\
&\mp& i \mu_B B'_0 x \overline{\psi}_{\mp}(x,z,t)\exp\left(\pm i \frac{2 \mu_B B_0 t}{\hbar}\right)
\end{eqnarray*}
The coupling term oscillates rapidly at the Larmor frequency
$\omega_{L}= \frac{2 \mu_B B_0}{\hbar}=1.4 \times 10^{11}~\text{s}^{-1}$.
Since $|B_{0}|\gg |B'_{0}z|$ and $|B_{0}|\gg |B'_{0}x|$, the
period of oscillation is short compared to the motion of the wave
function. Averaging over a period that is long compared to the
oscillation period, the coupling term vanishes, which
entails\cite{Platt_1992}
\begin{equation}\label{eq:Paulimoyen}
i\hbar \frac{\partial\overline{\psi}_{\pm}}{\partial t}(x,z,t)= -\frac{\hbar^{2}}{2 m} \nabla^{2} \overline{\psi}_{\pm}(x,z,t)
\mp \mu_B
B'_0 z \overline{\psi}_{\pm}(x,z,t).
\end{equation}
Since the variable $x$ is not involved in this equation and
$\psi^{0}_{\pm}(x,z)$ does not depend on $x$,
$\overline{\psi}_{\pm}(x,z,t)$ does not depend on $x$:
$\overline{\psi}_{\pm}(x,z,t)\equiv\overline{\psi}_{\pm}(z,t)$.
Then we can explicitly solve the preceding equations for all $t$
in $[0, \Delta t]$ with $\Delta t=\frac{\Delta l}{v}=2 \times
10^{-5}~\text{s}$.
We obtain:
\begin{eqnarray*}
&\overline{\psi}_{+}(z,t)= \psi_{K}(z,t) \cos \frac{\theta_0}{2}e^{i\frac{\varphi_0}{2}}~~~&~~\text{and}~~K=-\mu_B B'_{0}\\
&\overline{\psi}_{-}(z,t)= \psi_{K}(z,t) i\sin\frac{\theta_0}{2}e^{-i\frac{\varphi_0}{2}}&~~\text{and}~~K=+\mu_B B'_{0}
\end{eqnarray*}
with $\sigma_t^2 = \sigma_0^2 + \left(\frac{\hbar
t}{2m\sigma_0}\right)^2$ and
{\small
\begin{eqnarray}
\psi_{K}(z,t)&=&(2\pi\sigma_{t}^{2})^{-\frac{1}{4}}
e^{-\frac{(z+\frac{K t^{2}}{2 m})^2}{4\sigma_t^2}}
\exp \frac{i}{\hbar}
\left[-\frac{\hbar}{2}\tan^{-1}\left(\frac{\hbar t}{2 m \sigma_{0}^{2}}\right)\right.\nonumber\\
&&\left.-Ktz-\frac{K^{2}t^{3}}{6 m}
+\frac{(z+\frac{K t^{2}}{2 m})^{2}\hbar^{2}t^{2}}{8 m \sigma_{0}^{2}
\sigma_{t}^{2}}\right].\label{eq:ondegauss}
\end{eqnarray}}
Equation~(\ref{eq:ondegauss}) is a classical result~\cite{Feynman}.
The experimental conditions give $\frac{\hbar \Delta t}{2 m
\sigma_0}=4 \times 10^{-11}~m \ll \sigma_{0}=10^{-4}~m$. We deduce
the approximations $\sigma_{t} \simeq \sigma_0$ and
\begin{equation}\label{eq:ondegaussapprox}
\psi_{K}(z,t) \simeq (2\pi\sigma_{0}^{2})^{-\frac{1}{4}}
e^{-\frac{(z+\frac{K t^{2}}{2 m})^2}{4\sigma_0^2}}\exp \frac{i}{\hbar}
\left[ -Ktz-\frac{K^{2}t^{3}}{6 m}\right].
\end{equation}
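The negligibility of the packet spreading that justifies this approximation can be checked numerically; the mass of the silver atom and the transit time used below are assumed values consistent with the experimental conditions quoted in the text.

```python
import math

# Dispersion of the Gaussian packet over the transit time: sigma_t ~ sigma_0.
hbar = 1.0546e-34    # reduced Planck constant (J s)
m = 1.8e-25          # mass of a silver atom (kg), assumed
sigma_0 = 1.0e-4     # initial packet width (m)
dt = 2.0e-5          # transit time (s), assumed

spread = hbar * dt / (2 * m * sigma_0)          # of order 1e-11 m << sigma_0
sigma_t = math.sqrt(sigma_0**2 + spread**2)
print(spread, sigma_t / sigma_0)
```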
At the end of the magnetic field, at time $\Delta t$, the spinor
is equal to
\begin{equation}\label{eq:ondeadeltat}
\Psi (z,\Delta t) = \left( \begin{array}{c}
\psi_{+}(z,\Delta t)\\
\psi_{-}(z,\Delta t)
\end{array}
\right)
\end{equation}
with
\begin{eqnarray*}
\psi_{+}(z,\Delta t)&=&(2\pi\sigma_{0}^{2})^{-\frac{1}{4}}
e^{-\frac{(z - z_{\Delta})^2}{4\sigma_0^2}+ \frac{i}{\hbar}m u z} \cos \frac{\theta_0}{2}e^{i \varphi_{+}}\\
\psi_{-}(z,\Delta t)&=& (2\pi\sigma_{0}^{2})^{-\frac{1}{4}}
e^{-\frac{(z + z_{\Delta})^2}{4\sigma_0^2}- \frac{i}{\hbar}m u z} i\sin\frac{\theta_0}{2}e^{i \varphi_{-}}
\end{eqnarray*}
with
\begin{eqnarray*}
&z_{\Delta}&=\frac{\mu_B B'_{0}(\Delta
t)^{2}}{2 m},~~~~~~u =\frac{\mu_B B'_{0}(\Delta t)}{m}~~~~\text{and}\\
&\varphi_{+}&=\frac{\varphi_{0}}{2} -\frac{\mu_{B}B_0 \Delta
t}{\hbar}-\frac{K^{2}(\Delta t)^{3}}{6 m \hbar};\\
&\varphi_{-}&=-\frac{\varphi_{0}}{2} +\frac{\mu_{B}B_0 \Delta
t}{\hbar}-\frac{K^{2}(\Delta t)^{3}}{6 m \hbar}.
\end{eqnarray*}
We remark that the passage through the magnetic field gives the
equivalent of a velocity $+u$ in the direction $Oz$ to the
component $\psi_+$ and a velocity $-u$ to the component $\psi_{-}$.
The particle then evolves freely with the initial wave
function~(\ref{eq:ondeadeltat}). Solving the Pauli equation
again yields $\psi_{\pm}(x,z,t)=\psi_{x}(x,t) \psi_{\pm}(z,t)$, and
under the experimental conditions we have $\psi_{x}(x,t) \simeq
(2\pi\sigma_{0}^{2})^{-\frac{1}{4}} e^{-\frac{x^2}{4\sigma_0^2}}$
and
\begin{eqnarray*}
\psi_{+}(z,t +\Delta t) &\simeq&(2\pi\sigma_{0}^{2})^{-\frac{1}{4}}\cos \frac{\theta_0}{2}\\
&&\times e^{-\frac{(z - z_{\Delta}- u t)^2}{4\sigma_0^2}+ \frac{i}{\hbar}(m u z -\frac{1}{2}m u^2 t + \hbar \varphi_{+})}\\
\psi_{-}(z,t + \Delta t) &\simeq&(2\pi\sigma_{0}^{2})^{-\frac{1}{4}}i\sin\frac{\theta_0}{2}\\
&&\times e^{-\frac{(z + z_{\Delta}+u t)^2 }{4\sigma_0^2}+\frac{i}{\hbar}(- m u z -\frac{1}{2}m u^2 t + \hbar \varphi_{-})}
\end{eqnarray*}
\end{document}
\begin{document}
\selectlanguage{english}
\begin{abstract}
We study how the optimal constants in strong and weak type bounds
depend on each other for the maximal functions corresponding to the
Hardy--Littlewood averaging operators over convex symmetric bodies
acting on $\mathbb R^d$ and $\mathbb Z^d$. Firstly, we show, in the
full range $p\in[1,\infty]$, that these optimal constants in
$L^p(\mathbb R^d)$ are never larger than their discrete analogues
in $\ell^p(\mathbb Z^d)$; we also show that equality holds for
the cubes in the case $p=1$. This in particular implies that the
best constant in the weak type $(1,1)$ inequality for the discrete
Hardy--Littlewood maximal function associated with centered cubes in
$\mathbb Z^d$ grows to infinity as $d\to\infty$, and that for $d=1$ it is
equal to the larger root of the quadratic equation $12C^2-22C+5=0$.
Secondly, we prove dimension-free estimates for the
$\ell^p(\mathbb Z^d)$ norms, $p\in(1,\infty]$, of the discrete
Hardy--Littlewood maximal operators with the restricted range of
scales $t\geq C_q d$ corresponding to $q$-balls,
$q\in[2,\infty)$. Finally, we extend the latter result, on
$\ell^2(\mathbb Z^d)$, to the maximal operators restricted to dyadic
scales $2^n\ge C_q d^{1/q}$.
\end{abstract}
\maketitle
\section{Introduction}
\subsection{A brief overview of the paper} Throughout this paper $d\in\mathbb{N}$ always denotes the dimension of the Euclidean space $\mathbb{R}^d$, and $G$ denotes a convex symmetric body in $\mathbb{R}^d$,
which is a bounded closed and symmetric convex subset of $\mathbb{R}^d$ with
nonempty interior. We shall consider convex bodies in two contexts,
continuous and discrete. Therefore, in
order to avoid unnecessary technicalities, we always assume that $G\subset \mathbb{R}^d$ is closed (whereas in the
literature it is usually assumed to be open). Among the most
classical examples of convex symmetric bodies are the $q$-balls
$B^q\subset \mathbb{R}^d$, $q\in[1,\infty]$, defined for $q \in [1,\infty)$ by
\begin{equation*}
B^q:=B^q(d):=\Big\{x\in\mathbb{R}^d: \vert x\vert_q:=\Big( \sum_{i=1}^d |x_i|^q\Big)^{1/q}\le 1 \Big\},
\end{equation*}
and for $q=\infty$ by
\begin{equation*}
B^\infty:=B^\infty(d):=\Big\{x\in\mathbb{R}^d: \vert x\vert_\infty:=\max_{1\leq i\leq d}|x_i|\le 1 \Big\}.
\end{equation*}
If $q=2$ then $B^2$ is the closed unit Euclidean ball in $\mathbb{R}^d$ centered at the
origin, and if $q=\infty$ then $B^{\infty}$ is the cube in $\mathbb{R}^d$ centered
at the origin and of side length $2$.
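For concreteness, here is a minimal Python sketch (ours, not part of the paper; the function names are our own) of the $q$-norms $|\cdot|_q$ and the membership tests for the closed balls $B^q$ defined above:

```python
import math

def q_norm(x, q):
    """|x|_q as in the definition of B^q: for q in [1, infinity) the usual
    q-norm, and for q = infinity the maximum of the coordinates' moduli."""
    if q == math.inf:
        return max(abs(c) for c in x)
    return sum(abs(c) ** q for c in x) ** (1.0 / q)

def in_q_ball(x, q):
    """Membership test for the closed unit ball B^q centered at the origin."""
    return q_norm(x, q) <= 1.0

print(in_q_ball((0.5, 0.5), 1))         # |x|_1 = 1, on the boundary of B^1
print(in_q_ball((0.9, 0.9), 2))         # |x|_2 = sqrt(1.62) > 1
print(in_q_ball((0.9, 0.9), math.inf))  # |x|_inf = 0.9 <= 1
```

The same point can lie inside $B^\infty$ but outside $B^2$, reflecting the strict inclusions $B^1\subset B^2\subset B^\infty$.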
We associate with a convex symmetric body $G\subset \mathbb{R}^d$ the families of
continuous $(M_t^G)_{t>0}$ and discrete $(\mathcal{M}_t^G)_{t>0}$ averaging operators given respectively by
\begin{equation}
\label{eq:18}
M^G_t F(x):=\frac{1}{|G_t|} \int_{G_t} F(x-y)\, {\rm d} y, \qquad F\in L^1_{\rm loc}(\mathbb{R}^d),
\end{equation}
and
\begin{equation}
\label{eq:19}
\mathcal{M}^G_t f(x):=\frac{1}{|G_t\cap\mathbb{Z}^d|} \sum_{y\in G_t\cap\mathbb{Z}^d } f(x-y), \qquad f\in \ell^\infty(\mathbb{Z}^d),
\end{equation}
where $G_t=\{y\in\mathbb{R}^d: t^{-1}y\in G\}$ is the dilate of $G\subset \mathbb{R}^d$. Moreover, we define the corresponding maximal functions by
\begin{equation*}
M_\ast^G F(x) :=\sup_{t>0} \big|M_t^G F(x)\big|,
\qquad \text{ and } \qquad
\mathcal{M}_\ast^G f(x) :=\sup_{t>0} \big|\mathcal{M}_t^G f(x)\big|.
\end{equation*}
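To make the definitions concrete, the following small Python sketch (ours, purely illustrative) implements $\mathcal{M}_t^G$ and a finitely truncated $\mathcal{M}_\ast^G$ in dimension $d=1$ with $G=B^\infty$, where $G_t\cap\mathbb{Z}=\{-\lfloor t\rfloor,\ldots,\lfloor t\rfloor\}$:

```python
import math

def discrete_cube_average(f, x, t):
    """M_t^G f(x) for G = B^infty and d = 1: the average of f over the
    2*floor(t) + 1 lattice points of [x - t, x + t]."""
    r = math.floor(t)
    return sum(f(x - y) for y in range(-r, r + 1)) / (2 * r + 1)

def discrete_maximal(f, x, scales):
    """sup_t |M_t^G f(x)| over a finite set of scales (the true maximal
    function takes the supremum over all t > 0)."""
    return max(abs(discrete_cube_average(f, x, t)) for t in scales)

# Example: for the delta function at 0, the supremum at x = 3 is attained
# at the smallest scale whose cube reaches 0, namely t = 3, giving 1/7.
delta = lambda n: 1.0 if n == 0 else 0.0
print(discrete_maximal(delta, 3, scales=[1, 2, 3, 4, 10]))
```

The example illustrates the familiar $\sim(2|x|+1)^{-1}$ decay of the maximal function of a point mass, which is exactly the behavior behind the failure of the strong type $(1,1)$ bound.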
It is well known that both maximal functions are of weak type $(1,1)$
and of strong type $(p,p)$ for any $p\in(1,\infty]$. Moreover, neither
of these maximal functions is of strong type $(1,1).$ Our primary interest is
focused on determining whether the constants arising in the weak and
strong type inequalities can be chosen independently of the dimension
$d$. Moreover, we shall compare the best constants in such
inequalities for $M_\ast^G$ and $\mathcal{M}_\ast^G$, respectively.
For
$p\in (1, \infty]$ we denote by $C(G,p)$ the smallest constant $0< C< \infty$ for
which the following strong type inequality holds
\begin{equation*}
\| M_\ast^G F \|_{L^p(\mathbb{R}^d)} \leq C \| F \|_{L^p(\mathbb{R}^d)}, \qquad F \in L^p(\mathbb{R}^d).
\end{equation*}
Similarly, $C(G,1)$ will stand for the smallest constant $0< C< \infty$ satisfying
\begin{equation*}
\sup_{\lambda>0}\lambda \, |\{ x \in \mathbb{R}^d : M_\ast^G F(x) > \lambda \}| \leq C \| F \|_{L^1(\mathbb{R}^d)}, \qquad F \in L^1(\mathbb{R}^d).
\end{equation*}
Analogously to $C(G,p)$, we define $\mathcal{C}(G,p)$ for any $p \in [1, \infty]$, referring to $\mathcal{M}_\ast^G$ in place of $M_\ast^G$.
Our main result of this paper can be formulated as follows.
\begin{theorem}\label{thm:T}
Fix $d \in \mathbb{N}$ and let $G \subset \mathbb{R}^d$ be a convex symmetric
body. Then for each $p \in [1, \infty]$ we have
\begin{align}
\label{eq:16}
C(G,p) \leq \mathcal{C}(G,p).
\end{align}
Moreover, for the $d$-dimensional cube
$B^\infty(d)\subset \mathbb{R}^d$ one has
\begin{align}
\label{eq:52}
C(B^\infty(d),1) = \mathcal{C}(B^\infty(d),1).
\end{align}
\end{theorem}
Some comments are in order:
\begin{itemize}
\item[(i)] Clearly, $C(G,\infty)=\mathcal{C}(G,\infty)=1$, since we are working with averaging operators.
\item[(ii)] Theorem \ref{thm:T} gives a quantitative dependence
between $C(G,p)$ and $\mathcal{C}(G,p)$. Inequality \eqref{eq:16} is consistent
with a well-known phenomenon in harmonic analysis: it is usually
harder to establish bounds for discrete operators than for
their continuous counterparts.
\item[(iii)] Formula \eqref{eq:52} was observed by the first author in
his master's thesis; however, it has not been published before. In
particular, it yields that
$\mathcal{C}(B^\infty(d),1)\to\infty$ as $d\to\infty$, in view
of the result of Aldaz \cite{Ald1} asserting that the optimal constant
$C(B^\infty(d),1)$ in the weak type $(1,1)$ inequality grows to
infinity as $d\to\infty$. Quantitative bounds for the constant
$C(B^\infty(d),1)$ were given by Aubrun \cite{Aub1}, who proved
$C(B^\infty(d),1)\gtrsim_{\varepsilon}(\log d)^{1-\varepsilon}$ for
every $\varepsilon>0$, and soon after that by Iakovlev and Str\"omberg
\cite{IakStr1} who considerably improved Aubrun's lower bound by
showing that $C(B^\infty(d),1)\gtrsim d^{1/4}$. The latter result also
ensures in the discrete setup that
$\mathcal{C}(B^\infty(d),1)\gtrsim d^{1/4}$.
\item[(iv)] If $d=1$, then \eqref{eq:52} combined with the result of Melas \cite{Mel} implies that
$\mathcal{C}(B^\infty(1),1)$ is equal to the larger root of the quadratic equation
$12C^2-22C+5=0$.
\item[(v)] The product structure of the cubes $B^\infty(d)$ and the
fact that one works with continuous/discrete norms for $p=1$ are
essential to prove $C(B^\infty(d),1) = \mathcal{C}(B^\infty(d),1)$. At this
moment it does not seem that our method can be used to attain the
equality in \eqref{eq:16} for $G=B^\infty(d)$ with
$p\in(1, \infty)$. In the general case, as we shall see later in this
paper, inequality \eqref{eq:16} cannot be reversed.
\item[(vi)] The proof of inequality \eqref{eq:16} relies on a suitable
generalization of the ideas described in the master's thesis of the first
author. The details are presented in Section \ref{sec:Tr}.
\end{itemize}
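As a quick numerical aside (ours, not from the paper), the larger root of $12C^2-22C+5=0$ mentioned in item (iv) can be computed directly; it has the closed form $(11+\sqrt{61})/12\approx 1.5675$:

```python
import math

# Larger root of 12 C^2 - 22 C + 5 = 0 via the quadratic formula;
# by Melas' theorem this is the sharp weak type (1,1) constant for d = 1.
a, b, c = 12.0, -22.0, 5.0
C = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
print(C)                                   # approximately 1.5675
print(abs(C - (11 + math.sqrt(61)) / 12))  # agrees with the closed form
```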
Systematic studies of dimension-free estimates in the continuous case were initiated
by Stein \cite{SteinMax}, who showed that $C(B^2,p)$ is bounded
independently of the dimension for all $p\in(1,\infty]$. Shortly
afterwards Bourgain \cite{B1} proved that $C(G,2)$ can be estimated by
an absolute constant independent of the dimension and the convex
symmetric body $G\subset \mathbb{R}^d$. This result was extended to the range
$p\in(3/2,\infty]$ in \cite{B2} and independently by Carbery in
\cite{Car1}. It is conjectured that one can estimate $C(G,p)$ by a
dimension-free constant for all $p\in(1,\infty]$. This was verified
for the $q$-balls $B^q$, $q\in[1,\infty)$, by M\"uller \cite{Mul1} and
for the cubes $B^\infty$ by Bourgain \cite{B3}. The latter result
exhibits an interesting phenomenon, which shows that the
dimension-free estimates on $L^p(\mathbb{R}^d)$ for $p\in(1, \infty]$ cannot
be extended to the weak type $(1, 1)$ endpoint. Namely, the optimal
constant $C(B^\infty(d),1)$ in the weak type $(1,1)$ inequality, as
Aldaz \cite{Ald1} proved, grows to infinity with the dimension.
Additionally, if the range of scales $t$ in the definition of the
maximal operator $M_\ast^G$ is restricted to the dyadic values ($t\in \mathbb D := \{2^n : n \in \mathbb{Z} \}$), then
the constants in the strong type $(p,p)$ inequalities with
$p\in(1,\infty]$ are bounded uniformly in $d$ for any $G$, see
\cite{Car1}. For a more detailed account of the subject in the
continuous case, its history and extensive literature we refer to
\cite{DGM1} or \cite{BMSW4}.
Surprisingly, in the discrete setting there is no hope for estimating
$\mathcal{C}(G,p)$ independently of the dimension and the convex symmetric
body. Fixing $1\leq \lambda_1<\cdots<\lambda_d<\cdots<\sqrt{2}$ and examining the ellipsoids
\begin{equation}
\label{eq:elip}
E(d):=\Big\{x\in \mathbb{R}^d : \sum_{j=1}^d \lambda_j^2\, x_j^2\,\le 1 \Big\},
\end{equation}
it was proved in \cite[Theorem~2]{BMSW3} that for every $p\in(1, \infty)$ there
is a constant $C_p>0$ such that for all $d\in \mathbb{N}$ we have
\begin{align}
\label{eq:17}
\mathcal C(E(d), p)\ge \sup_{\|f\|_{\ell^p(\mathbb{Z}^d)}\le 1}\big\|\sup_{0<t\le d }|\mathcal{M}_{t}^{E(d)} f|\big\|_{\ell^p(\mathbb{Z}^d)}
\ge
C_p(\log d)^{1/p}.
\end{align}
This inequality shows that if $p\in(3/2, \infty)$, then for sufficiently large $d$ the inequality
in \eqref{eq:16} with $G=E(d)$ is strict,
since from \cite{B2} and \cite{Car1} we know that there exists a
finite constant $C_p>0$ independent of the dimension such that
\begin{align}
\label{eq:32}
C(E(d), p)\le C_p.
\end{align}
On the other hand, for cubes $G=B^\infty(d)$, it
was also proved \cite[Theorem~3]{BMSW3} that for every
$p\in(3/2, \infty]$ there is a finite constant $C_p>0$ such that for
every $d\in\mathbb{N}$ one has
\begin{align}
\label{eq:31}
\mathcal{C}(B^\infty(d), p)\le C_p.
\end{align}
For $p\in(1, 3/2]$ it still remains open whether
$\mathcal{C}(B^\infty(d), p)$ can be estimated independently of the
dimension. In view of the second part of Theorem \ref{thm:T}
interpolation does not help, since
$\mathcal{C}(B^\infty(d),1)\to\infty$ as $d\to\infty$.
Inequalities \eqref{eq:17} and \eqref{eq:31} illustrate that the
dimension-free phenomenon in the discrete setting contrasts sharply
with the situation that we know from the continuous setup. However, as
it was shown in \cite[Theorem~2]{BMSW3}, if the dimension-free
estimates fail in the discrete setting it may only happen for small
scales, see also \eqref{eq:17}. To be more precise, define the discrete restricted maximal
function
\begin{align}
\label{eq:24}
\mathcal{M}_{\ast, > D}^G f(x) :=\sup_{t>D} \big|\mathcal{M}_t^G f(x)\big|; \qquad x\in\mathbb{Z}^d, \quad D\ge 0,
\end{align}
corresponding to the averages from \eqref{eq:19}. For $D=0$ we see that $\mathcal{M}_{\ast, > D}^G=\mathcal{M}_{\ast}^G$.
Then by \cite[Theorem~1]{BMSW3} one has for arbitrary convex symmetric bodies
$G\subset\mathbb{R}^d$ that there
exists $c(G)>0$ such that
\begin{align}
\label{eq:26}
\big\|\mathcal{M}_{\ast, > c(G)d}^G f\big\|_{\ell^p(\mathbb{Z}^d)}
\leq
e^6 C(G, p)\|f\|_{\ell^p(\mathbb{Z}^d)},
\qquad f\in \ell^p(\mathbb{Z}^d).
\end{align}
Specifically, $\frac{1}{2}d^{1/2}\le c(E(d))\le d^{1/2}$ for the ellipsoids \eqref{eq:elip}, and consequently
by \eqref{eq:26}, we also have
\begin{equation*}
\big\Vert\mathcal{M}_{\ast, > d^{3/2}}^{E(d)} f\big\Vert_{\ell^p(\mathbb{Z}^d)}\leq e^6C(E(d), p)\Vert f\Vert_{\ell^p(\mathbb{Z}^d)}, \qquad f\in \ell^p(\mathbb{Z}^d),
\end{equation*}
which ensures dimension-free estimates for any $p\in (3/2, \infty]$ thanks to \eqref{eq:32}.
In the case of $q$-balls $G=B^q(d)$, $q\in[1,\infty]$, in view of \cite{B3} and \cite{Mul1}, inequality \eqref{eq:26} comes down to
\begin{equation}
\label{eq:28}
\big\Vert\mathcal{M}_{\ast, > d^{1+1/q}}^G f\big\Vert_{\ell^p(\mathbb{Z}^d)}\leq C_{p,q}\Vert f\Vert_{\ell^p(\mathbb{Z}^d)}, \qquad f\in \ell^p(\mathbb{Z}^d),
\end{equation}
for all $p\in(1,\infty]$, where $C_{p,q}$ denotes a constant that
depends on $p$ and $q$ but not on the dimension $d$.
We close this discussion by gathering a few conjectures that arose upon completing
\cite{BMSW3}, \cite{BMSW4} and \cite{BMSW2}.
\begin{conjecture}
\label{con:1}
Let $d\in\mathbb{N}$ and let $B^q(d)\subset\mathbb{R}^d$ be a $q$-ball.
\begin{enumerate}[label*={\arabic*}.]
\item (Weak form) Is it true that for
every $p\in(1, \infty)$ and $q\in [1, \infty]$ there exist constants $C_{p, q}>0$ and $t_q>0$ such that
for
every $d\in\mathbb{N}$ we have
\begin{align}
\label{eq:25}
\sup_{\|f\|_{\ell^p(\mathbb{Z}^d)}\le 1}\big\|\mathcal{M}_{\ast, > t_q d}^{B^q(d)} f\big\|_{\ell^p(\mathbb{Z}^d)}\le C_{p, q} \, ?
\end{align}
\item (Strong form) Is it true that for
every $p\in(1, \infty)$ and $q\in[1, \infty]$ there exists a constant $C_{p, q}>0$ such that for
every $d\in\mathbb{N}$ we have
\begin{align}
\label{eq:33}
\mathcal{C}(B^q(d), p)\le C_{p, q} \, ?
\end{align}
\end{enumerate}
\end{conjecture}
A few comments about these conjectures are in order.
\begin{itemize}
\item[(i)] The first conjecture arose on the one hand in view of
the inequalities \eqref{eq:17} and \eqref{eq:28}, and on the other hand in
view of the result from \cite{BMSW4}, where it had been verified for
the Euclidean balls. Indeed, if $q=2$ then following \cite[Section
5]{BMSW4} one can see that \eqref{eq:28} holds with $ad$ in place of
$d^{1+1/q}$, where $a>0$ is a large absolute constant independent of
$d$. Since the dimension-free phenomenon may only break down for small
scales, the conjectured threshold from which we can expect
dimension-free estimates in the discrete setup is at the level of a
constant multiple of $d$.
\item[(ii)] The second conjecture says that one should expect
dimension-free estimates for $\mathcal{C}(B^q(d), p)$ corresponding to
$q$-balls as we have for their continuous counterparts $C(B^q(d), p)$.
It was verified in \cite{BMSW3} for the cubes, that is, for
$\mathcal{C}(B^{\infty}(d), p)$ with $p\in(3/2, \infty]$, as we have seen in
\eqref{eq:31}. Moreover, in \cite{BMSW2} the second and fourth author
in collaboration with Bourgain and Stein proved that the discrete
dyadic Hardy--Littlewood maximal function
$\sup_{n\in\mathbb{N}}|\mathcal{M}^{B^2}_{2^n} f|$ over the Euclidean balls
$B^2(d)$ has dimension-free estimates on $\ell^2(\mathbb{Z}^d)$. Although
this can be thought of as the first step towards establishing
\eqref{eq:33}, the general case seems to be very difficult even for
the Euclidean balls or cubes for $p\in(1, 3/2]$. This will surely
require new methods.
\end{itemize}
In the second main result of this paper we verify the first conjecture for the balls $B^q$ for all $q\in(2,\infty)$.
\begin{theorem}\label{thm:1}
For every $q\in(2,\infty)$, $a>0$ and $p\in(1,\infty)$ there exists $C(p,q,a)>0$ independent of the dimension $d\in\mathbb{N}$ such that
for all $f\in\ell^p(\mathbb{Z}^d)$ we have
\begin{equation*}
\big\|\sup_{N \geq a d}|\mathcal{M}_N^{B^q}f|\big\|_{\ell^p(\mathbb{Z}^d)}\leq C(p,q,a) \|f\|_{\ell^p(\mathbb{Z}^d)}.
\end{equation*}
\end{theorem}
The proof of Theorem \ref{thm:1} is presented in Section \ref{sec:Bq};
it relies on the methods developed in \cite[Section 5]{BMSW4}, Hanner's inequality \eqref{eq:Han_ineq} and
Newton's generalized binomial theorem. It follows from
\cite[Theorem~2]{BMSW4} that Theorem \ref{thm:1} remains true for
$q=2$, but only for sufficiently large $a>0$. Our proof of Theorem \ref{thm:1}
can be easily adapted to yield the same result; we point out the necessary changes in the proof. Since we rely on Hanner's inequality, our proof of Theorem \ref{thm:1} does not carry over to $q \in [1,2)$, because for such values of $q$ the inequality \eqref{eq:Han_ineq} is reversed.
Our final result concerns the dyadic maximal operator associated with $q$-balls.
\begin{theorem}
\label{thm:10}
Fix $q \in [2, \infty)$. Let $C_1, C_2>0$ and define
$\mathbb D_{C_1, C_2}:=\{N\in\mathbb D:C_1d^{1/q}\le N\le C_2d\}$. Then
there exists a constant $C_q>0$ independent of the dimension such that
for every $f\in \ell^2(\mathbb{Z}^d)$ we have
\begin{align}
\label{eq:20}
\big\|\sup_{N\in \mathbb D_{C_1, C_2}}|\mathcal M_N^{B^q}f|\big\|_{\ell^2(\mathbb{Z}^d)}\le C_q\|f\|_{\ell^2(\mathbb{Z}^d)}.
\end{align}
\end{theorem}
Theorem \ref{thm:10} is an incremental step towards establishing the
second conjecture. By adapting the ideas developed in \cite{BMSW2} we
are able to obtain dimension-free estimates for the discrete
restricted dyadic Hardy--Littlewood maximal functions over $q$-balls
for all $q\in[2, \infty)$. Theorem \ref{thm:10} generalizes
\cite[Theorem 2.2]{BMSW3}, which was stated for $q = 2$. The proof of
inequality \eqref{eq:20} as in \cite{BMSW2} exploits the invariance of
$B_N^q\cap\mathbb{Z}^d$ under the permutation group of $\mathbb{N}_d$. Then we can
efficiently use probabilistic arguments on the permutation group
of $\mathbb{N}_d$ that reduce the matter to the dimension reduction trick
from \cite{BMSW2}. The proof of Theorem \ref{thm:10} is a technical
elaboration of the methods from \cite{BMSW2}. However, for the
convenience of the reader, and mainly due to the intricate technicalities, we decided
to provide the necessary details in Section \ref{sec:4}. We remark that the condition $q \in [2, \infty)$ cannot be dropped in our proof of Theorem \ref{thm:10}, as it is required in the estimate at the origin from Proposition \ref{prop:1}.
\subsection{Notation} The following basic notation will be used
throughout the paper.
\begin{enumerate}[label*={\arabic*}.]
\item We will write $A \lesssim_{\delta} B$
($A \gtrsim_{\delta} B$) to say that there is a constant
$C_{\delta}>0$, which may depend on a parameter $\delta>0$, such that
$A\le C_{\delta}B$ ($A\ge C_{\delta}B$). We will write
$A \simeq_{\delta} B$ when $A \lesssim_{\delta} B$ and
$A\gtrsim_{\delta} B$ hold simultaneously. We shall omit the subscript $\delta$ when it is irrelevant.
\item
Let $\mathbb{N}:=\{1,2,\ldots\}$ be the set of positive integers and $\mathbb{N}_0 := \mathbb{N}\cup\{0\}$, and
$\mathbb D:=\{2^n: n\in\mathbb{Z}\}$ will denote the set of dyadic numbers.
We set $\mathbb{N}_N := \{1, 2, \ldots, N\}$ for any $N \in \mathbb{N}$.
\item For a measurable set
$A\subseteq \mathbb{R}^d$ we denote by $|A|$ the Lebesgue measure of $A$ and by
$|A\cap \mathbb{Z}^d|$ the number of lattice points in $A.$
\item
The Euclidean space $\mathbb{R}^d$
is endowed with the standard inner product
\[
x\cdot\xi:=\langle x, \xi\rangle:=\sum_{k=1}^dx_k\xi_k
\]
for every two vectors $x=(x_1,\ldots, x_d)$ and
$\xi=(\xi_1, \ldots, \xi_d)$ from $\mathbb{R}^d$. Let $|x|_2:=\sqrt{\langle x, x\rangle}$ denote the Euclidean norm of a vector
$x\in\mathbb{R}^d$. The closed Euclidean ball centered
at the origin with radius one will be denoted by $B^2$.
We shall abbreviate $B^2$ to $B$ and $|\cdot|_2$ to $|\cdot|$.
\item Let $(X, \mu)$ be a measure space with a $\sigma$-finite
measure $\mu$. The space of all measurable functions whose modulus is
integrable with $p$-th power is denoted by $L^p(X)$ for
$p\in(0, \infty)$, whereas $L^{\infty}(X)$ denotes the space of all
measurable essentially bounded functions. The space of all measurable
functions that are weak type $(1, 1)$ will be denoted by
$L^{1, \infty}(X)$. In our case we will usually have $X=\mathbb{R}^d$ or
$X=\mathbb{T}^d$ equipped with Lebesgue measure, and $X=\mathbb{Z}^d$ endowed
with counting measure. If $X$ is endowed with counting measure we
will abbreviate $L^p(X)$ to $\ell^p(X)$ and $L^{1, \infty}(X)$ to $\ell^{1, \infty}(X)$. If the context causes no
confusion we will also abbreviate $\|\cdot\|_{L^p(\mathbb{R}^d)}$ to
$\|\cdot\|_{L^p}$ and $\|\cdot\|_{\ell^p(\mathbb{Z}^d)}$ to
$\|\cdot\|_{\ell^p}$.
\item Let $\mathcal{F}$ denote the Fourier transform on $\mathbb{R}^d$ defined for any function
$f \in L^1\big(\mathbb{R}^d\big)$ as
\begin{align*}
\mathcal{F} f(\xi) := \int_{\mathbb{R}^d} f(x) e^{2\pi i \sprod{\xi}{x}} {\: \rm d}x \quad \text{for any}\quad \xi\in\mathbb{R}^d.
\end{align*}
If $f \in \ell^1\big(\mathbb{Z}^d\big)$ we define the discrete Fourier
transform by setting
\begin{align*}
\hat{f}(\xi) := \sum_{x \in \mathbb{Z}^d} f(x) e^{2\pi i \sprod{\xi}{x}} \quad \text{for any}\quad \xi\in\mathbb{T}^d,
\end{align*}
where $\mathbb{T}^d$ denotes the $d$-dimensional torus, which will be identified
with $Q:=[-1/2, 1/2]^d$. To simplify notation we will denote by
$\mathcal F^{-1}$ the inverse Fourier transform on $\mathbb{R}^d$ or the
inverse Fourier transform (Fourier coefficient) on the torus
$\mathbb{T}^d$. This will cause no confusion since the meaning will always be clear from the context.
\end{enumerate}
\section{Transference of strong and weak type inequalities: Proof of Theorem \ref{thm:T}}
\label{sec:Tr}
Here we elaborate the arguments from the master thesis of the first author to prove Theorem \ref{thm:T}. The general idea behind the proof of \eqref{eq:16} is as follows. We fix a non-negative bump function $F \colon \mathbb{R}^d \to \mathbb{R}$ for which the constant in the corresponding maximal inequality is almost $C(G,p)$. Since dilations are available in the continuous setting, $F$ can be taken to be very slowly varying. Then we sample the values of $F$ at lattice points to produce $f \colon \mathbb{Z}^d \to \mathbb{R}$. Because $F$ is regular, the norms of $F$ and $f$ are almost the same. Moreover, we deduce that $M_*^G F$ cannot be essentially larger than $\mathcal{M}_*^G f$. Indeed, for $F$ being slowly varying its maximal function is slowly varying as well. Also, given $n \in \mathbb{Z}^d$ we see that $M_t^G F(n)$ is certainly not much greater than $f(n)$, unless $t$ is very large. For large values of $t$, in turn, the sets $G_{t} \cap \mathbb{Z}^d$ are regular, making the quantities $M_t^G F(n)$ and $\mathcal{M}^G_{t} f(n)$ comparable to each other. The constant in the maximal inequality associated with $f$ is then at least not much smaller than $C(G,p)$. Thus, \eqref{eq:16} holds.
\begin{proof}[Proof of Theorem \ref{thm:T}]
Let $G \subset \mathbb{R}^d$ be a convex symmetric body. Let $r\in(0, 1)$ and $R>1$ be real numbers such that $B_r\subset G\subset B_R$, where $B_t$ is the Euclidean ball centered at the origin with radius $t>0$.
We may assume that $p\in[1, \infty)$, otherwise there is nothing to do. We now distinguish three cases. In the first two cases we prove \eqref{eq:16} for arbitrary $G$ and all $p\in[1, \infty)$. In the third case we show that the equality is attained in \eqref{eq:16} if $G=B^{\infty}$ and $p=1$.
\paragraph{\bf Case 1, when $p \in (1, \infty)$} Fix
$\eta \in (0,1)$ and take $F \in C_{\rm c}^\infty(\mathbb{R}^d)$ such that
$F \geq 0$ and
\begin{equation}\label{T1}
\| M^G_* F \|_{L^p(\mathbb{R}^d)} \geq (1-\eta) C(G,p) \|F\|_{L^p(\mathbb{R}^d)}.
\end{equation}
For each $K \in \mathbb{N}$ let us define $F_K$ by setting
$F_K(x) := F(\frac{x}{K})$. Note that
$\|F_K\|_{L^p(\mathbb{R}^d)} = K^{d/p} \, \|F\|_{L^p(\mathbb{R}^d)}$. Moreover,
since $M^G_* F_K (x) = M^G_* F (\frac{x}{K})$, we have
$\| M^G_* F_K \|_{L^p(\mathbb{R}^d)} = K^{d/p} \, \| M^G_* F \|_{L^p(\mathbb{R}^d)}$. Next,
we define $f_K \colon \mathbb{Z}^d \rightarrow [0, \infty)$ by setting
$f_K(n) := F_K(n)$ for $n \in \mathbb{Z}^d$. Then we immediately have
\begin{align}
\label{eq:34}
\|F\|_{L^p(\mathbb{R}^d)} = \lim_{K \rightarrow \infty} \Big( \frac{1}{K^d}\sum_{n \in \mathbb{Z}^d } f_K(n)^p \Big)^{1/p}.
\end{align}
Thus for all sufficiently large $K\in \mathbb{N}$ (say $K \geq K_1$) we see
\begin{equation}\label{T2}
\| f_K \|_{\ell^p(\mathbb{Z}^d)} \leq (1+\eta) K^{d/p} \|F\|_{L^p(\mathbb{R}^d)}.
\end{equation}
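The limit \eqref{eq:34} is a Riemann-sum statement; a small numerical illustration of ours (not part of the proof), with $d=1$, $p=2$, and a rapidly decaying Gaussian in place of a compactly supported bump:

```python
import math

# Check (K^{-d} * sum_n f_K(n)^p)^{1/p} -> ||F||_{L^p} for d = 1, p = 2,
# with profile F(x) = exp(-x^2), so that ||F||_{L^2} = (pi/2)^{1/4}.
p = 2.0
F = lambda x: math.exp(-x * x)
exact = (math.pi / 2.0) ** 0.25
for K in (1, 10, 100):
    # Riemann sum of F(x)^p with mesh 1/K, truncated where F is negligible.
    riemann = (sum(F(n / K) ** p for n in range(-20 * K, 20 * K + 1)) / K) ** (1.0 / p)
    print(K, riemann, exact)
```

Already for moderate $K$ the sampled $\ell^p$ quantity matches $\|F\|_{L^p(\mathbb{R})}$ to high accuracy, which is the mechanism behind \eqref{T2}.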
Choose $N \in \mathbb{N}$ such that
\begin{equation}\label{T3}
\| M^G_* F \cdot \ind{[-N,N]^d} \|_{L^p(\mathbb{R}^d)} \geq (1 - \eta) \, \|M^G_* F\|_{L^p(\mathbb{R}^d)}.
\end{equation}
In a similar way as in \eqref{eq:34}, we conclude that there exists $K_2$ such that for all $K \geq K_2$ we have
\begin{equation}\label{T4}
\Big( \frac{1}{K^d}\sum_{n \in \mathbb{Z}^d \cap [-NK, NK]^d} M^G_* F_K(n)^p \Big)^{1/p}
\geq (1 - \eta) \, \| M^G_* F \cdot \ind{[-N,N]^d} \|_{L^p(\mathbb{R}^d)}.
\end{equation}
Let $\kappa > 0$ be such that $M^G_*F_K(n) \geq \kappa$ for each
$K \in \mathbb{N}$ and $n \in \mathbb{Z}^d \cap [-NK,NK]^d$. We fix
$\varepsilon \in (0, \eta \kappa/2)$ and take $\delta > 0$ for which
$|x - y| < \delta$ implies $|F(x) - F(y)| < \varepsilon$. Since $G_t\subset B_{tR}$, we obtain
\begin{displaymath}
M^G_tF(x) \leq F(x)+\varepsilon, \qquad t \in (0, \delta R^{-1}),
\end{displaymath}
or, equivalently,
\begin{displaymath}
M^G_tF_K(x) \leq F_K(x)+\varepsilon, \qquad t \in (0, K \delta R^{-1}).
\end{displaymath}
Our goal is to prove that
\begin{equation}\label{T5}
\mathcal{M}_*^G f_K(n) \geq (1 - \eta) \, M^G_*F_K(n), \qquad n \in \mathbb{Z}^d \cap [-NK,NK]^d,
\end{equation}
if $K$ is large enough. To this end we shall show separately that
\begin{displaymath}
\mathcal{M}_*^G f_K(n) \geq M^G_t F_K(n) - \eta \kappa, \qquad t \in (0, K \delta R^{-1}),
\end{displaymath}
and
\begin{displaymath}
\quad \mathcal{M}_*^G f_K(n) \geq (1 - \eta/2) \, M^G_t F_K(n) - \eta \kappa / 2, \qquad t \geq K \delta R^{-1}.
\end{displaymath}
Fix $n \in \mathbb{Z}^d \cap [-NK,NK]^d$. Obviously, if
$t \in (0, K \delta R^{-1})$, then the first inequality follows:
\begin{displaymath}
M^G_tF_K(n) \leq F_K(n) + \varepsilon \leq \mathcal{M}_*^G f_K(n) + \varepsilon \leq \mathcal{M}_*^G f_K(n) + \eta \kappa.
\end{displaymath}
Hence, we are reduced to proving the second estimate for
$M^G_t F_K(n)$ in the case $t \geq K \delta R^{-1}$. Let $\rho \in (0,1)$ be
such that $|G_{1+2\rho} | \leq (1-\eta/2)^{-1} |G|$ and assume that
$K \geq K_3 := \sqrt{d}R / (r \delta \rho)$. Therefore, for each
$t \geq K \delta R^{-1}$ we have $t + \sqrt{d}/r \leq (1+ \rho)t$. Let $Q_m:=m+B^\infty_{1/2}$
be the cube centered at $m \in \mathbb{Z}^d$ and of side length $1$. If $Q_m \cap G_t \neq \emptyset$ for some
$m \in \mathbb{Z}^d$, then one can
easily see that
$Q_m \subseteq G_{t+\sqrt{d}/r} \subseteq G_{(1+\rho)t}$, provided
$t \geq K_3 \delta R^{-1}$. Consequently, we conclude
\begin{align*}
\mathcal{M}_{t + \sqrt{d}/r}^G f_K(n) & = \frac{1}{|G_{t + \sqrt{d}/r} \cap \mathbb{Z}^d|} \, \sum_{m \in G_{t + \sqrt{d}/r} \cap \mathbb{Z}^d} f_K(n-m) \\
& \geq \frac{1}{|G_{t + \sqrt{d}/r} \cap \mathbb{Z}^d|} \, \sum_{m \in G_{t + \sqrt{d}/r} \cap \mathbb{Z}^d} \Big( \int_{Q_m} F_K(n-x) \, {\rm d}x - \varepsilon \Big) \\
& \geq \Big( \frac{1}{|G_{t + \sqrt{d}/r} \cap \mathbb{Z}^d|} \, \int_{G_t} F_K(n-x) \, {\rm d} x \Big) - \varepsilon \\
& \geq \frac{|G_t|}{|G_{(1+\rho)t} \cap \mathbb{Z}^d|} M^G_t F_K(n) - \eta \kappa / 2,
\end{align*}
where in the first inequality we have used that
$K \geq K_3 \geq \sqrt{d} / \delta.$ Hence, it remains to show
\begin{displaymath}
|G_t| \geq (1-\eta/2) \ |G_{(1+\rho)t} \cap \mathbb{Z}^d|.
\end{displaymath}
We notice that if $m \in G_{(1+\rho)t} \cap \mathbb{Z}^d$, then
$Q_m \subseteq G_{(1+\rho)t + \sqrt{d}/r} \subseteq G_{(1+2\rho)t}$. Thus
we obtain
\begin{displaymath}
|G_{(1+\rho)t} \cap \mathbb{Z}^d| \leq |G_{(1+2\rho)t}| \leq (1-\eta/2)^{-1} \, |G_t|.
\end{displaymath}
Finally, combining \eqref{T1}, \eqref{T2}, \eqref{T3}, \eqref{T4}, and
\eqref{T5}, for any $K \geq \max\{K_1, K_2, K_3\}$ we have
\begin{displaymath}
\| \mathcal{M}^G_* f_K \|_{\ell^p(\mathbb{Z}^d)} \geq \frac{(1- \eta)^4}{1 + \eta} \, C(G,p) \, \|f_K \|_{\ell^p(\mathbb{Z}^d)}.
\end{displaymath}
Hence, since $\eta \in (0,1)$ was arbitrary, we conclude that
$\mathcal{C}(G,p) \geq C(G,p)$.
\paragraph{\bf Case 2, when $p =1$} The inequality
$\mathcal{C}(G,1) \geq C(G,1)$ can be deduced in a similar way as it was
done for $p \in (1, \infty)$. We only describe the necessary changes. Namely, as in \eqref{T1} we
fix
$\eta \in (0,1)$ and take $F \in C_{\rm c}^\infty(\mathbb{R}^d)$ such that
$F \geq 0$ and
\begin{equation}\label{T1'}
\| M^G_* F \|_{L^{1, \infty}(\mathbb{R}^d)} \geq (1-\eta) C(G,1) \|F\|_{L^1(\mathbb{R}^d)}.
\end{equation}
Then we choose $N \in \mathbb{N}$ such that
\begin{equation}\label{T3'}
\| M^G_* F \cdot \ind{[-N,N]^d} \|_{L^{1, \infty}(\mathbb{R}^d)} \geq (1 - \eta) \, \|M^G_* F\|_{L^{1, \infty}(\mathbb{R}^d)}.
\end{equation}
It is easy to see that for each $x_0, y_0 \in \mathbb{R}^d$ one has
\begin{displaymath}
| M^G_* F (x_0) - M^G_* F(y_0) | \leq \sup_{|x-y|=|x_0-y_0|} \ |F(x) - F(y)|.
\end{displaymath}
This allows us to deduce for sufficiently large $K\in\mathbb{N}$ that
\begin{align}
\label{eq:44}
\| M^G_* F_K \cdot \ind{\mathbb{Z}^d\cap[-NK,NK]^d} \|_{\ell^{1, \infty}(\mathbb{Z}^d)}
\geq (1 - \eta) K^d \, \| M^G_* F \cdot \ind{[-N,N]^d} \|_{L^{1, \infty}(\mathbb{R}^d)}
\end{align}
with the function $F_K$ as in the previous case. From now on we may
proceed in much the same way as in the previous case to establish
\eqref{T5}. Once \eqref{T5} is proved we combine \eqref{T1'}, \eqref{T2} (with $p=1$), \eqref{T3'}, \eqref{eq:44} and \eqref{T5} to obtain
\begin{displaymath}
\| \mathcal{M}^G_* f_K \|_{\ell^{1, \infty}(\mathbb{Z}^d)} \geq \frac{(1- \eta)^4}{1 + \eta} \, C(G,1) \, \|f_K \|_{\ell^1(\mathbb{Z}^d)}
\end{displaymath}
for any $K \geq \max\{K_1, K_2, K_3\}$. Since $\eta \in (0,1)$ was
arbitrary, we conclude that $\mathcal{C}(G,1) \geq C(G,1)$ as desired. This
completes the first part of Theorem \ref{thm:T}. We now turn our
attention to the case $G = B^\infty$ and show that the last inequality
can be reversed.
\paragraph{\bf Case 3, when $p =1$ and $G = B^\infty=[-1, 1]^d$}
Given $\eta \in (0,1)$ consider $f \in \ell^1(\mathbb{Z}^d)$ and
$\lambda > 0$ such that $f \geq 0$ and
\begin{displaymath}
\lambda \, |\{ n \in \mathbb{Z}^d : \mathcal{M}^{B^\infty}_* f(n) > \lambda \}| \geq (1- \eta) \, \mathcal{C}(B^\infty, 1) \, \| f \|_{\ell^1(\mathbb{Z}^d)}.
\end{displaymath}
Let $Q_\delta(x):=x+B^\infty_{\delta/2}$ denote
the cube centered at $x\in\mathbb{R}^d$ and of side length $\delta>0$. For
$\delta\in(0, 1)$, we set
\begin{displaymath}
F_\delta(x) := \sum_{n \in \mathbb{Z}^d} f(n) \, |Q_\delta(n)|^{-1}\ind{Q_\delta(n)}(x).
\end{displaymath}
Clearly $\|F_\delta\|_{L^1(\mathbb{R}^d)} = \| f \|_{\ell^1(\mathbb{Z}^d)}$; this is the place where it is essential that we are working with $p=1$.
We show that for each
$n \in \mathbb{Z}^d$ one has
\begin{equation}
\label{eq:Ta}
M^{B^\infty}_* F_\delta(x) \geq \mathcal{M}^{B^\infty}_* f(n), \qquad x \in Q_{1-\delta}(n).
\end{equation}
To prove \eqref{eq:Ta} we note that
\[
\mathcal{M}^{B^\infty}_* f(n)=\sup_{t>0}\mathcal{M}^{B^\infty}_t f(n)=\sup_{N\in\mathbb{N}_0}\mathcal{M}^{B^\infty}_N f(n)
\]
since
$|B_t^{\infty}\cap\mathbb{Z}^d|=|B_{\lfloor t\rfloor}^{\infty}\cap\mathbb{Z}^d|= (2\lfloor t\rfloor+1)^d$,
where $\mathcal{M}^{B^\infty}_0 f:=f$.
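The lattice-point count used here, $|B_t^\infty\cap\mathbb{Z}^d|=(2\lfloor t\rfloor+1)^d$, is easy to verify by brute force (an illustrative script of ours, not part of the proof):

```python
import itertools
import math

def cube_lattice_count(t, d):
    """Brute-force count of points of Z^d in B_t^infty = [-t, t]^d."""
    r = math.floor(t)
    grid = range(-r - 1, r + 2)  # one extra layer, to be safe
    return sum(1 for pt in itertools.product(grid, repeat=d)
               if max(abs(c) for c in pt) <= t)

for d in (1, 2, 3):
    for t in (0.5, 1.0, 2.7):
        assert cube_lattice_count(t, d) == (2 * math.floor(t) + 1) ** d
print("count matches (2*floor(t) + 1)^d")
```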
If $N=0$, then for each $n \in \mathbb{Z}^d$ and $x \in Q_{1-\delta}(n)$ we
obtain
\begin{align*}
\mathcal{M}^{B^\infty}_0 f(n)=f(n)= \int_{Q_{1}(x)}F_{\delta}(y)\mathrm{d} y \le M^{B^\infty}_* F_\delta(x).
\end{align*}
If $N \in \mathbb{N}$, then $n+ B_N^{\infty}\subseteq x+B_{N+\frac{1-\delta}{2}}^{\infty}$
for each $n \in \mathbb{Z}^d$ and
$x \in Q_{1-\delta}(n)$. Therefore,
\begin{align*}
\mathcal{M}^{B^\infty}_N f(n)&=
\frac{1}{|n+ B_N^{\infty}\cap\mathbb{Z}^d|} \sum_{k\in n+ B_N^{\infty}\cap\mathbb{Z}^d}f(k)\\
&\le
\frac{1}{|n+ B_N^{\infty}\cap\mathbb{Z}^d|} \sum_{k\in\mathbb{Z}^d }\int_{Q_{\delta}(k)}\ind{x+B_{N+\frac{1-\delta}{2}}^{\infty}}(k)f(k) |Q_{\delta}(k)|^{-1}\ind{Q_{\delta}(k)}(y)\mathrm{d} y\\
&\le \frac{1}{|x+B_{N+1/2}^{\infty}|}\int_{x+B_{N+1/2}^{\infty}} F_{\delta}(y)\,\mathrm{d} y
\le M^{B^\infty}_* F_\delta(x),
\end{align*}
since $|n+ B_N^{\infty}\cap\mathbb{Z}^d|=(2N+1)^d=|x+B_{N+1/2}^{\infty}|$.
Hence \eqref{eq:Ta} follows, and consequently we obtain
\begin{align*}
\lambda \, |\{ x \in \mathbb{R}^d : M^{B^\infty}_* F_\delta(x) > \lambda \}|
& \geq \lambda \, (1-\delta)^d \, |\{ n \in \mathbb{Z}^d : \mathcal{M}^{B^\infty}_* f(n) > \lambda \}| \\
& \geq (1-\eta) \, (1-\delta)^d \, \mathcal{C}(B^\infty, 1) \, \|F_\delta \|_{L^1(\mathbb{R}^d)}.
\end{align*}
Since $\eta$ and $\delta$ were arbitrary, we conclude that
$C(B^\infty, 1) \geq \mathcal{C}(B^\infty, 1)$. Finally, combining this
inequality with the previous result we obtain
$C(B^\infty, 1) = \mathcal{C}(B^\infty, 1)$ and the proof of Theorem \ref{thm:T} is completed.
\end{proof}
\section{Discrete maximal operator for $B^q$ and large scales: Proof of Theorem \ref{thm:1}}
\label{sec:Bq}
The purpose of this section is to prove Theorem \ref{thm:1}. We will follow the ideas from
\cite[Section 5]{BMSW4}. From now on we assume that $q\in(2,\infty)$ is fixed. By $Q := B^\infty_{1/2}$ we mean the unit cube centered at the origin.
\begin{lemma}\label{lem:1}
Let
$\tilde{C}_q := \max \big\{ \sum_{k=1}^\infty \big| {q \choose 2k} \big| \, 2^{-2k}, \big(\frac{3}{2}\big)^q \big\}$. If
$N\geq d^{\frac{1}{2}+\frac{1}{q}}$, then
\begin{equation*}
|B_N^q\cap\mathbb{Z}^d|\leq 2 |B_{N_1}^q|\leq 2 e^{\frac{\tilde{C}_q}{q}}|B_N^q|,
\end{equation*}
where $N_1:=N\big(1+ d^{-1} \tilde{C}_q \big)^{\frac{1}{q}}$.
\end{lemma}
\begin{proof}
Since $q > 2$, by Hanner's inequality for $x\in B_N^q\cap\mathbb{Z}^d$ and $y\in Q$ we obtain
\begin{equation}\label{eq:Han_ineq}
\norm{x+y}^q_q +\norm{x-y}^q_q\leq (\norm{x}_q+\norm{y}_q)^q
+|\norm{x}_q-\norm{y}_q|^q.
\end{equation}
Moreover, for every $x\in B_N^q$ we have
$|\{y\in Q : \norm{x+y}_q> \norm{x-y}_q\}| = |\{y\in Q : \norm{x+y}_q< \norm{x-y}_q\}|$,
which implies
$|\{y\in Q : \norm{x+y}_q\leq \norm{x-y}_q\}|\geq 1/2$. Hence,
\begin{align}
\label{eq:3}
\begin{split}
|B_N^q\cap \mathbb{Z}^d|=\sum_{x\in B_N^q\cap\mathbb{Z}^d} 1&\leq 2\sum_{x\in B_N^q\cap\mathbb{Z}^d}\int_Q \ind{\{y\in\mathbb{R}^d : \norm{x+y}_q\leq \norm{x-y}_q\}}(z)\,{\rm d}z\\
&\leq 2\sum_{x\in B_N^q\cap\mathbb{Z}^d}\int_Q \ind{\{y\in\mathbb{R}^d : 2\norm{x+y}_q^q\leq (\norm{x}_q+\norm{y}_q)^q+\abs{\norm{x}_q-\norm{y}_q}^q\}}(z)\,{\rm d}z.
\end{split}
\end{align}
We shall estimate
$(\norm{x}_q+\norm{y}_q)^q+|\norm{x}_q-\norm{y}_q|^q$ for
$x\in B_N^q\cap\mathbb{Z}^d$ and $y\in Q$. Let us firstly assume that
$\norm{x}_q\geq 2 \norm{y}_q$. By Newton's generalized binomial
theorem we have
\begin{align*}
(\norm{x}_q+\norm{y}_q)^q+|\norm{x}_q-\norm{y}_q|^q
&=\sum_{k=0}^\infty {q \choose k} \norm{y}_q^k \norm{x}_q^{q-k}+\sum_{k=0}^\infty {q \choose k} (-1)^k\norm{y}_q^k \norm{x}_q^{q-k}\\
&=2\sum_{k=0}^\infty {q \choose 2k} \norm{y}_q^{2k} \norm{x}_q^{q-2k}\\
& \leq 2 \bigg( \norm{x}_q^q + \norm{x}_q^{q-2} \norm{y}_q^2 \sum_{k=1}^\infty \bigg| {q \choose 2k} \bigg| \, 2^{-2k+2} \bigg) \\
& \leq 2 \bigg( N^q + N^{q-2} d^{\frac{2}{q}} \sum_{k=1}^\infty \bigg| {q \choose 2k} \bigg| \, 2^{-2k} \bigg) \\
& \leq 2N^q \bigg( 1 + d^{-1} \sum_{k=1}^\infty \bigg| {q \choose 2k} \bigg| \, 2^{-2k} \bigg).
\end{align*}
On the other hand, if $\norm{x}_q\leq 2\norm{y}_q$, then
\begin{equation*}
(\norm{x}_q+\norm{y}_q)^q+|\norm{x}_q-\norm{y}_q|^q
\leq 2\big(3\norm{y}_q \big)^q\leq 2 \bigg(\frac{3}{2}\bigg)^q d
\leq 2 N^q d^{-\frac{q}{2}} \bigg(\frac{3}{2}\bigg)^q \leq 2 N^q d^{-1} \bigg(\frac{3}{2}\bigg)^q.
\end{equation*}
Combining the above gives
\begin{equation}\label{eq:4}
(\norm{x}_q+\norm{y}_q)^q+|\norm{x}_q-\norm{y}_q|^q\leq 2N^q\big(1+ d^{-1} \tilde{C}_q\big)=2N_1^q.
\end{equation}
Finally, by \eqref{eq:3} and \eqref{eq:4} we obtain
\begin{align*}
\abs{B_N^q\cap \mathbb{Z}^d}&\leq 2\sum_{x\in B_N^q\cap\mathbb{Z}^d}\int_Q \ind{\{y\in\mathbb{R}^d : \norm{x+y}_q\leq N(1+d^{-1} \tilde{C}_q )^{1/q}\}}(z)\,{\rm d}z\\
&=2\sum_{x\in B_N^q\cap\mathbb{Z}^d}\int_Q \ind{B^q_{N(1+d^{-1}\tilde{C}_q)^{1/q}}}(x+z)\,{\rm d}z\\
&\leq 2|B_{N_1}^q|\\
&=2 \big(1+d^{-1} \tilde{C}_q \big)^{\frac{d}{q}} |B_N^q|\\
&\leq 2 e^{\frac{\tilde{C}_q}{q}}|B_N^q|,
\end{align*}
which finishes the proof.
\end{proof}
We remark that Lemma \ref{lem:1} holds for $q = 2$ without any changes
with $\tilde{C}_2=9/4,$ so that $e^{\tilde{C}_2/2}=e^{9/8}$.
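The value $\tilde{C}_2=9/4$ can be checked numerically (a sketch, not part of the proof; the function name is ours). The series is summed via the recursion for generalized binomial coefficients, which avoids overflow:

```python
def C_tilde(q, terms=60):
    """Evaluate max{ sum_{k>=1} |C(q,2k)| 4^{-k}, (3/2)^q } numerically."""
    s, c = 0.0, 1.0  # c tracks the generalized binomial C(q, 2k)
    for k in range(1, terms):
        c *= (q - (2 * k - 2)) * (q - (2 * k - 1)) / ((2 * k - 1) * (2 * k))
        s += abs(c) / 4 ** k
    return max(s, (3 / 2) ** q)

# For q = 2 the series collapses to C(2,2)/4 = 1/4, so C_tilde(2) = (3/2)^2 = 9/4.
assert abs(C_tilde(2) - 9 / 4) < 1e-12
# For q > 2 the maximum is attained by (3/2)^q as well, e.g. for q = 3:
assert abs(C_tilde(3) - (3 / 2) ** 3) < 1e-12
```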
\begin{lemma}\label{lem:2}
Let $ a >0$. Take $N\geq a d$, $0\leq j\leq N-1$, and $x\in\mathbb{R}^d$ such
that
\begin{equation}\label{eq:1}
N\bigg(1+\frac{j}{N}\bigg)^{\frac{1}{q}}\leq \norm{x}_q\leq N\bigg(1+\frac{j+1}{N}\bigg)^{\frac{1}{q}}.
\end{equation}
If $d\in\mathbb{N}$ is sufficiently large (depending only on $ a $ and $q$),
then
\begin{equation*}
\abs{Q\cap(B_N^q-x)}=\abs{\{y\in Q : x+y\in B_N^q\}}\leq e^{-\frac{7}{128q^2} j^2}.
\end{equation*}
\end{lemma}
\begin{proof}
Note that the claim is trivial for $j=0$. Therefore, let
$1\leq j\leq N-1$ and take $x\in\mathbb{R}^d$ such that \eqref{eq:1}
holds. Assume that $y\in Q$ and $x+y\in B_N^q$. Then
\begin{equation}
\label{eq:45}
\norm{x}_q^q -\norm{x+y}_q^q \geq N^q\bigg(1+\frac{j}{N}\bigg) -N^q=jN^{q-1}.
\end{equation}
We shall also estimate the expression on the left-hand side of the above inequality from above. Let $I_i:=\abs{x_i}^q-\abs{x_i+y_i}^q$; then
\begin{equation*}
\norm{x}_q^q-\norm{x+y}_q^q=\sum_{i=1}^d I_i.
\end{equation*}
For $\abs{x_i}>2 \abs{y_i}$ by Newton's generalized binomial theorem
we have
\begin{align*}
I_i=\abs{x_i}^q-\abs{\abs{x_i}+\mathrm{sgn}(x_i)y_i}^q&=\abs{x_i}^q -\abs{\sum_{k=0}^\infty {q \choose k} \abs{x_i}^{q-k} \mathrm{sgn}(x_i)^k y_i^k}\\
&\leq \abs{x_i}^q- \abs{\abs{x_i}^q+q\abs{x_i}^{q-1}\mathrm{sgn}(x_i)y_i}+\sum_{k=2}^\infty \abs{{q \choose k}} \abs{x_i}^{q-k} \abs{y_i}^k\\
&\leq -q\abs{x_i}^{q-1}\mathrm{sgn}(x_i)y_i+\sum_{k=2}^\infty \abs{{q \choose k}} \abs{x_i}^{q-k} \abs{y_i}^k\\
&\leq -q\abs{x_i}^{q-1}\mathrm{sgn}(x_i)y_i+ \abs{x_i}^{q-2} \abs{y_i}^2 \sum_{k=2}^\infty \abs{{q \choose k}} 2^{2-k}.
\end{align*}
On the other hand, if $\abs{x_i}\leq 2\abs{y_i}\le 1$, and $A_q:=1 + \big(\frac{3}{2}\big)^q+q$, then
\begin{align*}
I_i\leq (2 \abs{y_i})^q +(3\abs{y_i})^q\leq 1 + \bigg(\frac{3}{2}\bigg)^q
\leq -q\abs{x_i}^{q-1}\mathrm{sgn}(x_i)y_i+A_q.
\end{align*}
Let
$\tilde{x}=\big(\abs{x_1}^{q-1}\mathrm{sgn}(x_1),\ldots,\abs{x_d}^{q-1}\mathrm{sgn}(x_d)\big).$
Combining the above we obtain by H\"{o}lder's inequality
\begin{align*}
\norm{x}_q^q -\norm{x+y}_q^q&\leq -q\langle \tilde{x},y\rangle +A_qd +\sum_{k=2}^\infty \abs{{q \choose k}} 2^{2-k} \sum_{i=1}^d \abs{x_i}^{q-2} \abs{y_i}^2\\
&\leq -q\langle \tilde{x},y\rangle +A_qd +\sum_{k=2}^\infty \abs{{q \choose k}} 2^{2-k} \norm{x}_q^{q-2} \norm{y}_q^{2}\\
&\leq -q\langle \tilde{x},y\rangle +A_qd + N^{q-2}d^{\frac{2}{q}}\sum_{k=2}^\infty \abs{{q \choose k}} 2^{-k+1}\\
&\leq -q\langle \tilde{x},y\rangle +\bigg(A_q + \sum_{k=2}^\infty \abs{{q \choose k}} 2^{-k+1} \bigg) N^{q-2}d^{\frac{2}{q}};
\end{align*}
to ensure the validity of the last inequality above we take $d$ so
large that $ad\ge d^{1/q}.$
By \eqref{eq:45} and the previous display, since $q>2$, we obtain
\begin{equation}\label{eq:7}
-q\langle \tilde{x},y\rangle\geq jN^{q-1}-\bigg(A_q + \sum_{k=2}^\infty \abs{{q \choose k}} 2^{-k+1} \bigg) N^{q-2}d^{\frac{2}{q}}
\geq \frac{1}{2} jN^{q-1},
\end{equation}
provided that $d$ is so large that
\begin{equation}
\label{eq:lem:2:1}
\bigg(A_q + \sum_{k=2}^\infty \abs{{q \choose k}} 2^{-k+1} \bigg) a^{-1}d^{-1+\frac{2}{q}}\le \frac12.
\end{equation}
Note that
\begin{equation*}
\norm{\tilde{x}}_2=\norm{x}_{2q-2}^{q-1}\leq \norm{x}_q^{q-1}\leq 2 N^{q-1}.
\end{equation*}
Hence, for $y\in Q$ and $x+y\in B_N^q$ we obtain
\begin{equation*}
\left\langle -\frac{\tilde{x}}{\norm{\tilde{x}}_2},y \right\rangle\geq \frac{j}{4q}.
\end{equation*}
We know from \cite[Inequality (5.6)]{BMSW4} that for every unit vector
$z\in\mathbb{R}^d$ and for every $s>0$ we have
\begin{align*}
|\{y\in Q: \langle z, y\rangle\ge
s\}|\le e^{-\frac78s^2}.
\end{align*}
Applying this inequality for
$z=-\frac{\tilde{x}}{\norm{\tilde{x}}_2}$ and $s=j/(4q)$ we arrive at
\begin{align*}
\abs{\{y\in Q : x+y\in B_N^q\}}\leq
\Big|\Big\{y\in Q : \bigg\langle -\frac{\tilde{x}}{\norm{\tilde{x}}_2},y \bigg\rangle\geq \frac{j}{4q}\Big\}\Big|
\leq e^{-\frac{7j^2}{128q^2} }.
\end{align*}
This concludes the proof of the lemma.
\end{proof}
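The elementary facts used at the end of the proof, the identity $\norm{\tilde{x}}_2=\norm{x}_{2q-2}^{q-1}$ and the monotonicity $\norm{x}_{2q-2}\le\norm{x}_q$ for $q\ge2$, can be confirmed numerically (a sketch, not part of the argument; the helper name is ours):

```python
import math, random

def lp_norm(x, p):
    return sum(abs(t) ** p for t in x) ** (1 / p)

random.seed(0)
q = 3.0
for _ in range(100):
    x = [random.uniform(-2, 2) for _ in range(8)]
    # the vector x~ = (|x_i|^{q-1} sgn(x_i))_i from the proof
    xt = [math.copysign(abs(t) ** (q - 1), t) for t in x]
    # identity ||x~||_2 = ||x||_{2q-2}^{q-1}
    assert abs(lp_norm(xt, 2) - lp_norm(x, 2 * q - 2) ** (q - 1)) < 1e-9
    # monotonicity of l^p norms: ||x||_{2q-2} <= ||x||_q since 2q-2 >= q
    assert lp_norm(x, 2 * q - 2) <= lp_norm(x, q) + 1e-12
```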
A comment is in order here. For $q=2$, a version of Lemma \ref{lem:2} holds for all $d\in \mathbb{N}$ under the restriction
\begin{equation}\label{eq:8}
a \geq 2 \, \bigg( 1 + \bigg(\frac{3}{2}\bigg)^2+2 + \sum_{k=2}^\infty \abs{{2 \choose k}} 2^{-k+1} \bigg) =2(1+(3/2)^2+2+1/2)=\frac{23}{2}.
\end{equation}
The above ensures that \eqref{eq:lem:2:1} (and hence also \eqref{eq:7}) is satisfied for all $d \in \mathbb{N}$.
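The arithmetic behind the constant $23/2$ in \eqref{eq:8} can be double-checked numerically (a sketch, not part of the proof; the helper name is ours):

```python
import math

def gen_binom(q, k):
    """Generalized binomial coefficient C(q, k) for real q, integer k >= 0."""
    num = 1.0
    for i in range(k):
        num *= (q - i)
    return num / math.factorial(k)

q = 2
A_q = 1 + (3 / 2) ** q + q                               # A_2 = 21/4
tail = sum(abs(gen_binom(q, k)) * 2 ** (-k + 1) for k in range(2, 50))
assert abs(tail - 1 / 2) < 1e-12                         # only k = 2 survives
assert abs(2 * (A_q + tail) - 23 / 2) < 1e-12            # the value in (eq:8)
```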
\begin{lemma}\label{lem:3}
Let $a>0$. If $d\in\mathbb{N}$ is sufficiently large (depending on $a$ and
$q$) then there exists a constant $C_q'>0$ depending only on $q$
such that for all $N\ge ad$ one has
\begin{equation*}
|B_N^q|\leq C_q' |B_N^q\cap \mathbb{Z}^d|.
\end{equation*}
\end{lemma}
\begin{proof}
For each $a>0$ we take $J := J_{q,a} \in \mathbb{N}$ satisfying
\begin{align}\label{eq:def_J}
\sum_{j\ge J} e^{-\frac{7 j^2}{128q^2}}e^{2(j+1)/(aq)}\le \frac{1}{4 }e^{-\frac{\tilde{C}_q}{q}},
\end{align}
where $\tilde{C}_q>0$ is the constant specified in Lemma \ref{lem:1}.
It suffices to show that for every $M \geq \frac{a d}{2} $ we have
\begin{equation}\label{eq:2}
|B^q_M|\le 2|B^q_{M(1+J/M)^{1/q}}\cap\mathbb{Z}^d|.
\end{equation}
Indeed, take $d$ so large that $ad\ge 2 J.$ Then for $N \geq a d$ we
can find $M\in[\frac{ad}{2},N]$ such that $N=M(1+J/M)^{1/q}$
and thus \eqref{eq:2} gives
\begin{align*}
|B^q_N|= (1+J/M)^{d/q}|B^q_M|\leq 2e^{2J/(aq)}|B^q_{N}\cap\mathbb{Z}^d|.
\end{align*}
Our aim now is to prove \eqref{eq:2}. Define
$U_j=\big\{x\in\mathbb{R}^d: M\big(1+\frac{j}{M}\big)^{1/q}< \norm{x}_q\leq M\big(1+\frac{(j+1)}{M}\big)^{1/q}\big\}$
for $j \geq 0$ and observe that
\begin{align*}
|B^q_M|
&=\sum_{x\in\mathbb{Z}^d}\int_{Q}\ind{B^q_M}(x+y){\rm d}y\\
&=\sum_{x\in B^q_M\cap\mathbb{Z}^d}\int_{Q}\ind{B^q_M}(x+y){\rm
d}y+\sum_{0\le j<J}\sum_{x\in U_j\cap\mathbb{Z}^d}\int_{Q}\ind{B^q_M}(x+y){\rm d}y\\
&\hspace{4.5cm} + \sum_{j\ge J}\sum_{x\in U_j\cap\mathbb{Z}^d}\int_{Q}\ind{B^q_M}(x+y){\rm d}y\\
&\le |B^q_{M(1+J/M)^{1/q}}\cap\mathbb{Z}^d|+\sum_{j=J}^{M-1}\sum_{x\in U_j\cap\mathbb{Z}^d}|Q\cap (B^q_M-x)|.
\end{align*}
Indeed, if $j\ge M$, then $|Q\cap (B^q_M-x)|=0$ holds for each
$x\in U_j$; here we take $d$ so large that
$ad(2^{1/q}-1)\ge d^{1/q}$. Clearly
$M \geq \frac{a d}{2} \geq d^{\frac{1}{2} + \frac{1}{q}}$ if $d$ is
large enough. Now applying Lemma \ref{lem:2} (with $M$ in place of $N$
and $a/2$ in place of $a$) together with Lemma \ref{lem:1} we see that
for sufficiently large $d$ one has
\begin{align*}
\sum_{j=J}^{M-1}\sum_{x\in U_j\cap\mathbb{Z}^d}|Q\cap (B^q_M-x)|&\leq \sum_{j=J}^{M-1}\sum_{x\in U_j\cap\mathbb{Z}^d} e^{-\frac{7 j^2}{128q^2}}\\
&\leq \sum_{j=J}^{M-1}\big|B^q_{M(1+\frac{j+1}{M})^{1/q}}\cap\mathbb{Z}^d\big| e^{-\frac{7 j^2}{128q^2}}\\
&\leq 2e^{\frac{\tilde{C}_q}{q}} \sum_{j=J}^{\infty}\abs{B^q_{M}}\bigg(1+\frac{j+1}{M}\bigg)^{d/q} e^{-\frac{7 j^2}{128q^2}}\\
&\leq 2e^{\frac{\tilde{C}_q}{q}} \abs{B^q_{M}} \sum_{j=J}^{\infty} e^{2(j+1)/(aq)} e^{-\frac{7 j^2}{128q^2}}.
\end{align*}
Hence, by \eqref{eq:def_J} we have
\begin{equation*}
|B^q_M|\leq |B^q_{M(1+J/M)^{1/q}}\cap \mathbb{Z}^d|+\frac{1}{2}|B^q_M|,
\end{equation*}
which finishes the proof.
\end{proof}
We make a remark similar to the one below Lemma \ref{lem:2}. For $q=2$,
the claim of Lemma \ref{lem:3} holds for all $d\in\mathbb{N}$ provided that $a>0$
is large enough. In this case it suffices to take
\[
a\geq \max(23,2J_{2,23}),\]
where $J:=J_{2,23}$ is a non-negative integer satisfying
\[
\sum_{j\ge J} e^{-\frac{7 j^2}{512}}e^{(j+1)/23}\le \frac{1}{4}e^{-\frac{9}{8}}.
\]
Then the implied constant $C'_2$ is equal to $2 e^{J/23}.$ Finally, in
the proof of Theorem \ref{thm:1} below it suffices to take
$N \geq C'_2 d$. We leave the details to the interested reader.
\begin{proof}[Proof of Theorem \ref{thm:1}]
Fix $p\in(1,\infty)$. It is well known that for any $q\in(2,\infty)$ and any $d\in\mathbb{N}$ one has
\begin{equation*}
\big\|\sup_{N >0}|\mathcal{M}_N^{B^q}f|\big\|_{\ell^p(\mathbb{Z}^d)}\leq C_d(p,q) \|f\|_{\ell^p(\mathbb{Z}^d)},
\end{equation*}
with a constant $C_d(p,q)$ which depends on the dimension $d.$ Thus we may assume that the dimension
$d$ is large enough, in fact larger than any fixed number $d_0$ (which may
depend on $q\in (2,\infty)$ and $a>0$).
Let $f \colon \mathbb{Z}^d\to \mathbb{C}$ be a non-negative function. Define
$F \colon \mathbb{R}^d\to \mathbb{C}$ by setting
\[
F(x):=\sum_{y\in\mathbb{Z}^d}f(y)\ind{y+Q}(x).
\]
Clearly $\|F\|_{L^p(\mathbb{R}^d)}=\|f\|_{\ell^p(\mathbb{Z}^d)}$.
Fix $a > 0$ and let $\tilde{C}_q>0$ be the constant specified in Lemma
\ref{lem:1}. Take $N\ge a d$ and define
$N_1:=N\big(1+d^{-1} \tilde{C}_q \big)^{1/q}$. Observe that
$ad \geq d^{\frac{1}{2}+\frac{1}{q}}$ when $d$ is large enough.
Hence, by \eqref{eq:4} and \eqref{eq:Han_ineq}, for $z\in Q$
and $y\in B^q_{N}$ we have the estimate
\[
\norm{y+z}_q\le N_1
\]
on the set $\{z\in Q : \norm{y+z}_q\le \norm{y-z}_q\}$, and the
Lebesgue measure of this set is at least $1/2$. Then by Lemma \ref{lem:3} for
sufficiently large dimension $d$ and all $x\in\mathbb{Z}^d$ we obtain
\begin{align}
\label{eq:5}
\begin{split}
\mathcal M_N^{B^q}f(x)&
=\frac1{|B^q_N\cap\mathbb{Z}^d|}\sum_{y\in B^q_N\cap\mathbb{Z}^d}f(x+y)\ind{B^q_N}(y)\\
&=\frac1{|B^q_N\cap\mathbb{Z}^d|}\sum_{y\in B^q_N\cap\mathbb{Z}^d}f(x+y)\int_Q \ind{B^q_N}(y)\,{\rm d}z\\
&\lesssim_q \frac1{|B^q_N|}\sum_{y\in\mathbb{Z}^d}f(x+y)\int_{Q}\ind{B^q_{N_1}}(y+z) \, {\rm d}z\\
&=\frac1{|B^q_N|}\sum_{y\in\mathbb{Z}^d}f(y)\int_{x+B^q_{N_1}}\ind{y+Q}(z) \, {\rm d}z\\
&=\frac1{|B^q_N|}\int_{x+B^q_{N_1}}F(z) \, {\rm d}z\\
&=\bigg(\frac{N_1}{N}\bigg)^d\frac{1}{|B^q_{N_1}|}\int_{B^q_{N_1}}F(x+z) \, {\rm d}z\\
&\lesssim_q\frac{1}{|B^q_{N_1}|}\int_{B^q_{N_1}}F(x+z) \, {\rm d}z\\
&=M_{N_1}^{B^q}F(x).
\end{split}
\end{align}
Let us now take $N_2:=N_1\big(1+d^{-1} \tilde{C}_q \big)^{1/q}.$
Similarly as above, for $y\in Q$ and $z\in B^q_{N_1}$ we have
\begin{equation*}
\norm{y+z}_q\leq N_2
\end{equation*}
on the set $\{y\in Q : \norm{y+z}_q\le \norm{y-z}_q\}$, and the
Lebesgue measure of this set is at least $1/2$. Therefore, Fubini's theorem leads to
\begin{align}
\label{eq:6}
\begin{split}
M_{N_1}^{B^q}F(x)&=\frac{1}{|B^q_{N_1}|}\int_{B^q_{N_1}}F(x+z) \, {\rm d}z\\
&\leq \frac2{|B^q_{N_1}|}
\int_{\mathbb{R}^d}F(x+z)\ind{B^q_{N_1}}(z)\int_{Q}\ind{B^q_{N_2}}(z+y) \, {\rm d}y{\rm d}z\\
&=2\Big(\frac{N_2}{N_1}\Big)^d \int_{Q} \frac{1}{|B^q_{N_2}|}
\int_{\mathbb{R}^d}F(x+z-y)\ind{B^q_{N_1}}(z-y)\ind{B^q_{N_2}}(z) \, {\rm d}z{\rm d}y\\
&\lesssim \int_{x+Q}\frac1{|B^q_{N_2}|}\int_{\mathbb{R}^d}F(z-y)\ind{B^q_{N_2}}(z) \, {\rm d}z{\rm d}y\\
&= \int_{x+Q}M_{N_2}^{B^q}F(y) \, {\rm d}y.
\end{split}
\end{align}
Denote $C_{d,q} := a \, \big(1+d^{-1}\tilde{C}_q\big)^{2/q} d.$
Combining \eqref{eq:5} with \eqref{eq:6} and applying H\"{o}lder's
inequality, we obtain
\begin{align*}
\big\|\sup_{N\ge a d}|\mathcal M_N^{B^q}f|\big\|_{\ell^p(\mathbb{Z}^d)}^p
&\lesssim_q\sum_{x\in\mathbb{Z}^d} \Big(\int_{x+Q} \sup_{N\ge
C_{d,q}}|M_N^{B^q}F(y)| \, {\rm d}y\Big)^p \\
&\leq \sum_{x\in\mathbb{Z}^d} \int_{x+Q}\sup_{N\ge
C_{d,q}}\big| M_N^{B^q}F(y)\big|^p \, {\rm d}y \\
&=\big\|\sup_{N\ge C_{d,q}}\big|M_N^{B^q}F\big|\big\|_{L^p(\mathbb{R}^d)}^p.
\end{align*}
By the dimension-free $L^p(\mathbb{R}^d)$ boundedness of the maximal operator $M_*^{B^q}$ (proved in \cite{Mul1}), we obtain
\begin{equation*}
\big\|\sup_{N\ge C_{d,q}}\big|M_N^{B^q}F\big|\big\|_{L^p(\mathbb{R}^d)}^p\lesssim_q\|F\|_{L^p(\mathbb{R}^d)}^p=\|f\|_{\ell^p(\mathbb{Z}^d)}^p.
\end{equation*}
This proves Theorem \ref{thm:1}.
\end{proof}
\section{Decrease dimension trick: Proof of Theorem \ref{thm:10}}
\label{sec:4}
We now prove Theorem \ref{thm:10} by adapting the methods
introduced in \cite[Section 2]{BMSW2} to the case of $q$-balls. In
fact, this section is a technical elaboration of \cite[Section
2]{BMSW2}. However, we have decided to provide the necessary details
because of the intricate technicalities involved. Throughout this section we will abbreviate
$\|\cdot\|_{\ell^p(\mathbb{Z}^d)}$ to $\|\cdot\|_{\ell^p}$ and
$\|\cdot\|_{L^p(\mathbb{R}^d)}$ to $\|\cdot\|_{L^p}$. We fix
$q \in [2, \infty)$ and recall that $\mathcal M_N^{B^q}$ is the
operator whose multiplier is given by
$$
\mathfrak m^{B^q}_N(\xi):
=\frac{1}{|B^q_N\cap\mathbb{Z}^d|}
\sum_{x\in B^q_N\cap\mathbb{Z}^d}e^{2\pi i \xi\cdot x}, \qquad \xi\in\mathbb{T}^d\equiv[-1/2, 1/2)^d.
$$
For each $\xi\in\mathbb{T}^d$ we will write
$\|\xi\|^2:=\|\xi_1\|^2+\ldots+\|\xi_d\|^2$, where
$\|\xi_j\|=\operatorname{dist}(\xi_j, \mathbb{Z})$ for any $j\in\mathbb{N}_d$. Since we identify
$\mathbb{T}^d$ with $[-1/2, 1/2)^d$, it is easy to see that the norm
$\|\cdot\|$ coincides with the Euclidean norm $|\cdot|_2$ restricted
to $[-1/2, 1/2)^d$. It is also very well known that $\|\eta\|\simeq|\sin(\pi\eta)|$
for every $\eta\in\mathbb{T}$, since
$|\sin(\pi\eta)|=\sin(\pi\|\eta\|)$ and for $0\le|\eta|\le 1/2$ we
have
\begin{align}
\label{eq:103}
2|\eta|\le|\sin(\pi\eta)|\le \pi|\eta|.
\end{align}
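The elementary bound \eqref{eq:103} is easy to confirm on a grid (a numerical sketch, not part of the argument; the helper name is ours):

```python
import math

def sine_bounds_hold(eta):
    """Check 2|eta| <= |sin(pi*eta)| <= pi|eta| for a single eta in [-1/2, 1/2]."""
    s = abs(math.sin(math.pi * eta))
    return 2 * abs(eta) <= s + 1e-12 and s <= math.pi * abs(eta) + 1e-12

# grid of [-1/2, 1/2]; equality 2|eta| = |sin(pi*eta)| holds at eta = ±1/2
assert all(sine_bounds_hold(i / 1000) for i in range(-500, 501))
```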
The proof of Theorem \ref{thm:10} is based on Proposition
\ref{prop:1} and Proposition \ref{prop:2}, which give respectively estimates of
the multiplier $\mathfrak m^{B^q}_N(\xi)$ at the origin and at
infinity. These estimates will be described in terms of the
proportionality constant
\begin{align*}
\kappa(d, N):= \kappa_q(d, N):=Nd^{-1/q}.
\end{align*}
\begin{proposition}
\label{prop:1}
For every $d, N \in \mathbb{N}$ and for every $\xi \in \mathbb{T}^d$ we have
\begin{align}
\label{eq:22}
|\mathfrak{m}^{B^q}_N(\xi) - 1| \leq 2 \pi^2 \kappa(d,N)^2 \| \xi \|^2.
\end{align}
\end{proposition}
\begin{proposition}
\label{prop:2}
There is a constant $C_q>0$ such that for any $d, N\in\mathbb{N}$ if $10\le \kappa(d, N) \leq 50qd^{1-1/q}$, then for all $\xi\in\mathbb{T}^d$ we have
\begin{align}
\label{eq:23}
|\mathfrak{m}^{B^q}_N(\xi)|\le C_q \big((\kappa(d, N)\|\xi\|)^{-1}+\kappa(d, N)^{-\frac{1}{7}}\big).
\end{align}
\end{proposition}
Before we prove Proposition \ref{prop:1} and Proposition \ref{prop:2} we show how
\eqref{eq:20} follows from \eqref{eq:22} and
\eqref{eq:23}.
\begin{proof}[Proof of Theorem \ref{thm:10}]
Since $\mathbb D_{C_1, C_2}$ is a subset of the dyadic set $\mathbb D$,
we may assume, without loss of generality, that $C_1=C_2=10$. For every $t>0$ let $P_t$ be the
semigroup with the multiplier
\begin{align*}
\mathfrak p_t(\xi):=e^{-t\sum_{i=1}^d\sin^2(\pi\xi_i)}, \qquad \xi\in\mathbb{T}^d.
\end{align*}
It follows from \cite{Ste1} (see also \cite{BMSW3} for more details)
that for every $p\in(1, \infty)$ there is $C_p>0$ independent of
$d\in\mathbb{N}$ such that for every $f\in\ell^p(\mathbb{Z}^d)$ we have
\begin{align*}
\Big\|\sup_{t>0}|P_tf|\Big\|_{\ell^p}\le C_p \|f\|_{\ell^p}.
\end{align*}
It suffices to compare the averages $\mathcal M^{B^q}_N$ with
$P_{N^2/d^{2/q}}$. Namely, the proof of \eqref{eq:20} will be
completed if we obtain the dimension-free estimate on $\ell^2(\mathbb{Z}^d)$
for the following square function
\[
Sf(x):=\Big(\sum_{N\in \mathbb D_{C_1, C_2}}|\mathcal M_N^{B^q}f(x)-P_{N^2/d^{2/q}}f(x)|^2\Big)^{1/2}, \qquad x\in\mathbb{Z}^d.
\]
By Plancherel's formula, \eqref{eq:22}, and \eqref{eq:23}, we can
estimate $\|S(f)\|_{\ell^2}^2$ by
\begin{align*}
C_q' \int_{\mathbb{T}^d}\,
\bigg(\sum_{\substack{m\in\mathbb{Z}:\\10d^{1/q}\le 2^m\le 10d}}
\min\bigg\{\frac{2^{2m}}{d^{2/q}}\|\xi\|^2,\bigg(\frac{2^{2m}}{d^{2/q}}\|\xi\|^2\bigg)^{-1}\bigg\}+d^{\frac{2}{7q}}\sum_{\substack{m\in\mathbb{Z}:\\10d^{1/q}\le 2^m\le 10d}}
2^{-2m/7}\bigg) |\hat{f}(\xi)|^2{\rm d}\xi,
\end{align*}
where $ C_q'$ is a constant that depends only on $q.$ To complete the
proof we note that the integral above is clearly bounded by
$C \|f\|_{\ell^2}^2$ for a suitable constant $C>0$ independent of $d$.
\end{proof}
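The geometric-series fact behind the last step, namely that $\sum_{m\in\mathbb{Z}}\min\{4^m t, (4^m t)^{-1}\}$ is bounded uniformly in $t>0$ (by $8/3$, say), can be checked numerically (a sketch, not part of the proof; the helper name is ours):

```python
def dyadic_sum(t, M=60):
    """Truncated dyadic sum sum_m min(4^m t, 1/(4^m t)); the tail is negligible."""
    return sum(min(4 ** m * t, 1 / (4 ** m * t)) for m in range(-M, M + 1))

# each side of the split point is a geometric series with ratio 1/4
for t in (1e-6, 0.01, 0.3, 1.0, 7.0, 1e5):
    assert dyadic_sum(t) <= 8 / 3 + 1e-9
```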
The rest of this section is devoted to proving Proposition
\ref{prop:1} and Proposition \ref{prop:2}. We emphasize that in the proof of Proposition \ref{prop:1} the assumption $q\geq 2$ is crucial.
\begin{proof}[Proof of Proposition \ref{prop:1}]
Since the balls $B_N^q$ are symmetric under permutations and sign
changes we may repeat the proof of \cite[Proposition 2.1]{BMSW2}
reaching
$$|\mathfrak{m}^{B^q}_N(\xi) - 1| \le \frac{2}{|B^q_N \cap \mathbb{Z}^d |} \sum_{j=1}^d \sin^2(\pi \xi_j) \sum_{x \in B^q_N \cap \mathbb{Z}^d} \frac{|x|_2^2}{d}.$$
Observe that $|x|_2^2\le |x|_q^2\cdot d^{1-2/q}$. Indeed, for $q=2$ this is simply an equality, and for $q>2$ it suffices to apply H\"older's inequality for the pair $(\frac{q}{2},\frac{q}{q-2})$. Consequently,
\begin{displaymath}
|\mathfrak{m}^{B^q}_N(\xi) - 1| \leq \frac{2}{|B^q_N \cap \mathbb{Z}^d |} \sum_{j=1}^d \sin^2(\pi \xi_j)
\sum_{x \in B^q_N \cap \mathbb{Z}^d} \frac{|x|_q^2}{d^{2/q}} \leq 2 \pi^2 \kappa(d,N)^2 \|\xi\|^2,
\end{displaymath}
which gives the claim.
\end{proof}
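In low dimensions the estimate \eqref{eq:22} can also be verified directly from the definition of the multiplier (a numerical sketch, not part of the proof; the helper names are ours):

```python
import itertools, cmath, math

d, q, N = 2, 3, 3
# lattice points of the q-ball B_N^q in Z^d
pts = [x for x in itertools.product(range(-N, N + 1), repeat=d)
       if sum(abs(c) ** q for c in x) <= N ** q]

def multiplier(xi):
    """The multiplier m_N^{B^q}(xi) computed from its definition."""
    return sum(cmath.exp(2j * math.pi * sum(a * b for a, b in zip(xi, x)))
               for x in pts) / len(pts)

kappa = N * d ** (-1 / q)
for xi in [(0.1, 0.0), (0.05, -0.2), (0.5, 0.5), (0.01, 0.02)]:
    norm2 = sum(t ** 2 for t in xi)  # here |xi_j| <= 1/2, so ||xi_j|| = |xi_j|
    assert abs(multiplier(xi) - 1) <= 2 * math.pi ** 2 * kappa ** 2 * norm2 + 1e-12
```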
The proof of Proposition \ref{prop:2} can be deduced from a series of
auxiliary lemmas, which we formulate and prove below. In what follows
for an integer $1\le r\le d$ and a radius $R>0$ we let
$$B_{R}^{q,(r)}=\{x\in\mathbb{R}^r: \vert x\vert_q\le R \}\qquad\textrm{and}\qquad S_{R}^{q,(r)}=\{x\in\mathbb{R}^r:|x|_q=R \}$$
be the $r$-dimensional ball and sphere of radius $R>0,$ respectively.
\begin{lemma}
\label{lem:4}
For all $d, N \in \mathbb{N}$ we have
\begin{displaymath}
(2 \lfloor \kappa(d,N) \rfloor + 1)^d \leq |B^q_N \cap \mathbb{Z}^d|
\leq |B^q_{N + d^{1/q}}|
= \frac{2^d \Gamma(1+\frac{1}{q})^d}{\Gamma(1 + \frac{d}{q})} \Big(N + d^{1/q} \Big)^d.
\end{displaymath}
\end{lemma}
\begin{proof}
The lower bound follows from the inclusion $[-\kappa(d, N), \kappa(d, N)]^d\cap\mathbb{Z}^d\subseteq B^q_N\cap\mathbb{Z}^d$, while the upper bound is a simple consequence of the triangle inequality.
\end{proof}
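Both bounds of Lemma \ref{lem:4} can be confirmed by brute force in low dimensions (a numerical sketch, not part of the proof; the helper names are ours):

```python
import itertools, math

def count_ball(N, q, d):
    """Count lattice points of B_N^q in Z^d by brute force."""
    R = int(N) + 1
    return sum(1 for x in itertools.product(range(-R, R + 1), repeat=d)
               if sum(abs(c) ** q for c in x) <= N ** q)

def vol_ball(R, q, d):
    """Lebesgue measure of {x in R^d : |x|_q <= R}."""
    return (2 * math.gamma(1 + 1 / q)) ** d / math.gamma(1 + d / q) * R ** d

q = 3
for d in (1, 2, 3):
    for N in (1, 2, 4, 6):
        kappa = N * d ** (-1 / q)
        cnt = count_ball(N, q, d)
        assert (2 * math.floor(kappa) + 1) ** d <= cnt
        assert cnt <= vol_ball(N + d ** (1 / q), q, d) + 1e-9
```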
\begin{lemma}
\label{lem:5}
Given $\varepsilon_1, \varepsilon_2\in(0, 1]$ we define for every
$d, N\in\mathbb{N}$ the set
\[
E=\big\{x\in B^q_N\cap\mathbb{Z}^d : |\{i\in\mathbb{N}_d : |x_i|\ge \varepsilon_2\kappa(d, N)\}|\le\varepsilon_1 d\big\}.
\]
If $\varepsilon_1, \varepsilon_2\in(0, 1/(10q)]$ and
$\kappa(d,N)\ge10$, then we have
\begin{align*}
|E| \le 2e^{-\frac{d}{10}}|B^q_N\cap\mathbb{Z}^d|.
\end{align*}
\end{lemma}
\begin{proof}
As in \cite[Lemma 2.4]{BMSW2} we can estimate
\begin{equation}
\label{eq:9}
|E| \le (2\varepsilon_2\kappa(d, N)+1)^{d}+ \sum_{1\le m\le \varepsilon_1d}{{d}\choose{m}}(2\varepsilon_2\kappa(d,N)+1)^{d-m}\big|B_N^{q,(m)}\cap\mathbb{Z}^{m}\big|.
\end{equation}
Since $\kappa(d, N) \geq 10$ and $\varepsilon_2 \leq 1/10$, using the lower bound from Lemma \ref{lem:4} gives
\begin{equation}
\label{eq:10}
(2\varepsilon_2\kappa(d, N)+1)^{d} \leq e^{-\frac{16d}{19}} |B^q_N\cap\mathbb{Z}^{d}|
\end{equation}
as in the case $q=2$. On the other hand, the upper bound from Lemma \ref{lem:4} can be applied to estimate the sum appearing in \eqref{eq:9} by
\begin{align}
\label{eq:11}
\sum_{1\le m\le \varepsilon_1d} \frac{d^m}{m!}(2\varepsilon_2\kappa(d,N)+1)^{d-m} \frac{2^m}{\Gamma(1+\frac{m}{q})} d^{m/q} \big( \kappa(d,N) + 1 \big)^m.
\end{align}
Now observe that
\begin{equation}
\label{eq:12}
\frac{1}{\Gamma(1+\frac{m}{q})} \leq \frac{4q e^{m/q}}{(m/(2q))^{-1+m/q}}.
\end{equation}
Indeed, if $m/q \geq 2$, then
\begin{displaymath}
\frac{1}{\Gamma(1+\frac{m}{q})} \leq \frac{1}{\lfloor \frac{m}{q} \rfloor !} \leq
\frac{e^{\lfloor \frac{m}{q} \rfloor}}{\lfloor \frac{m}{q} \rfloor^{\lfloor \frac{m}{q} \rfloor}} \leq \frac{e^{\frac{m}{q}}}{(m/(2q))^{-1+m/q}},
\end{displaymath}
where in the second inequality we have used that $\frac{1}{n!} \leq \frac{e^n}{n^n}$ holds for any $n \in \mathbb{N}$. If $m/q \leq 2$, then
\begin{displaymath}
\frac{1}{\Gamma(1+\frac{m}{q})} \leq 2 \leq \frac{4qm}{2q} (2e)^{m/q} (m/q)^{-m/q} = \frac{4q e^{\frac{m}{q}}}{(m/(2q))^{-1+m/q}}.
\end{displaymath}
In the first inequality we used the fact that the gamma function is estimated from below by $1/2$ on the interval $[1,3]$.
Applying \eqref{eq:12} we get
\begin{align*}
\frac{d^{m+m/q} 2^m}{m! \, \Gamma(1 + \frac{m}{q})} \leq \frac{d^{m+m/q} 2^m e^m 4q e^{m/q}}{m^m (m/(2q))^{-1+m/q}} = \Big( \frac{2de}{m}\Big)^{m(1+1/q)} 2m q^{m/q} \leq \Big( \frac{2deq}{m}\Big)^{m(1+1/q)}.
\end{align*}
Combining this with \eqref{eq:9}, \eqref{eq:10}, and \eqref{eq:11}, and repeating the argument used in \cite[(2.16)]{BMSW2}, we arrive at
\begin{align}
\label{eq:14}
\begin{split}
|E| & \leq e^{-\frac{16d}{19}} |B^q_N\cap\mathbb{Z}^{d}| + \sum_{m=1}^{\lfloor \varepsilon_1 d \rfloor} \Big( \frac{2deq}{m}\Big)^{m(1+1/q)} (2\varepsilon_2\kappa(d,N)+1)^{d-m} \big( \kappa(d,N) + 1 \big)^m \\
&\leq \Big(e^{-\frac{16d}{19}}+\sum_{m=1}^{\lfloor \varepsilon_1 d \rfloor} \Big( \frac{2deq}{m}\Big)^{m(1+1/q)}\Big(\frac{2\varepsilon_2\kappa(d,N)+1}{2 \lfloor \kappa(d,N) \rfloor + 1}\Big)^{d-m}\Big)|B^q_N\cap\mathbb{Z}^{d}| \\
& \leq \Big( e^{-\frac{16d}{19}} + e^{- \frac{72d}{95}} \sum_{m=1}^{\lfloor \varepsilon_1 d \rfloor} e^{ \varphi(m) } \Big) |B^q_N\cap\mathbb{Z}^{d}|,
\end{split}
\end{align}
where $\varphi(x):= (1+1/q) x \log(\frac{2eqd}{x})$, $x\ge0.$
For $x\in [0, d/(10q)]$ we
have
$$\varphi'(x)=(1+1/q)\log\bigg(\frac{2eqd}{x}\bigg)-(1+1/q)\ge \log\bigg(\frac{2qd}{x}\bigg)\ge \log 3.$$
Hence, $\varphi$ is increasing on
$[0,\lfloor \varepsilon_1 d \rfloor],$ and arguing as in the proof of
\cite[Lemma 2.4]{BMSW2} we get
\begin{equation*}
\sum_{m=1}^{\lfloor \varepsilon_1 d \rfloor} e^{ \varphi(m) } \leq e^{\varphi(\frac{d}{10q})} \sum_{m=1}^{\lfloor \varepsilon_1 d \rfloor}e^{-(\lfloor \varepsilon_1 d \rfloor-m)\log 3}\le \frac{3}{2} e^{\varphi(\frac{d}{10q})} \leq \frac{3}{2} (20eq^2)^{d/(5q)} \le \frac{3}{2} e^{\frac{2d}{5e}} e^{\frac{4d}{5q}} < \frac{3}{2} e^{\frac{3}{5}d},
\end{equation*}
since $q^{1/q} \leq e^{1/e}$ and $20 < e^3$. Then \eqref{eq:14} gives
\begin{displaymath}
\frac{|E|}{|B^q_N\cap\mathbb{Z}^{d}|} \leq e^{-\frac{16d}{19}}
+e^{-\frac{72d}{95}}\sum_{m=1}^{\lfloor \varepsilon_1 d \rfloor}
e^{\varphi(m)}
\le e^{-\frac{16d}{19}}+ \frac{3}{2}e^{-\frac{3d}{19}}
\le e^{-\frac{d}{10}}\Big(e^{-\frac{141}{190}}+\frac{3}{2}\Big)
\le2e^{-\frac{d}{10}},
\end{displaymath}
which finishes the proof.
\end{proof}
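The gamma-function estimate \eqref{eq:12} used in the proof above can be checked numerically over a wide range of parameters (a sketch, not part of the proof; the helper name is ours):

```python
import math

def gamma_bound_holds(m, q):
    """Check 1/Gamma(1 + m/q) <= 4q e^{m/q} (m/(2q))^{1 - m/q}, i.e. (eq:12)."""
    lhs = 1 / math.gamma(1 + m / q)
    rhs = 4 * q * math.exp(m / q) * (m / (2 * q)) ** (1 - m / q)
    return lhs <= rhs * (1 + 1e-12)

assert all(gamma_bound_holds(m, q)
           for q in (2.0, 2.5, 4.0, 10.0) for m in range(1, 200))
```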
Recall that ${\rm Sym}(d)$ denotes the permutation group on
$\mathbb{N}_d$. We will also write
$\sigma\cdot x=(x_{\sigma(1)}, \ldots, x_{\sigma(d)})$ for every
$x\in\mathbb{R}^d$ and $\sigma\in{\rm Sym}(d)$. Let $\mathbb P$ be the
uniform distribution on ${\rm Sym}(d)$, i.e. $\mathbb P(A)={|A|}/{d!}$
for any $A\subseteq{\rm Sym}(d)$, since $|{\rm Sym}(d)|=d!$. The
expectation $\mathbb E$ will always be taken with respect to the
uniform distribution $\mathbb P$ on ${\rm Sym}(d)$. We will need two
lemmas from \cite{BMSW2}.
\begin{lemma}
\label{lem:6}
Assume that $I, J\subseteq\mathbb{N}_d$ and $|J|=r$ for some
$0\le r\le d$. Then
\begin{align*}
\mathbb P[\{\sigma\in{\rm Sym}(d) : |\sigma(I)\cap J|\le {r|I|}/{(5d)}\}]\le e^{-\frac{r|I|}{10d}}.
\end{align*}
In particular, if $\delta_1, \delta_2\in(0, 1]$ satisfy
$5\delta_2\le\delta_1$ and $\delta_1d\le |I|\le d$, then we have
\begin{align*}
\mathbb P[\{\sigma\in{\rm Sym}(d) : |\sigma(I)\cap
J|\le \delta_2 r\}]\le e^{-\frac{\delta_1r}{10}}.
\end{align*}
\end{lemma}
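For very small $d$ the probability in Lemma \ref{lem:6} can be computed exactly by enumerating ${\rm Sym}(d)$, which confirms the bound in those cases (a numerical sketch, not part of the proof; the helper name is ours):

```python
import itertools, math

def intersection_small_prob(d, I, J):
    """Exact P[ |sigma(I) ∩ J| <= r|I|/(5d) ] for uniform sigma in Sym(d)."""
    r = len(J)
    thresh = r * len(I) / (5 * d)
    bad = sum(1 for sigma in itertools.permutations(range(1, d + 1))
              if len({sigma[i - 1] for i in I} & set(J)) <= thresh)
    return bad / math.factorial(d)

d = 6
for I in [(1, 2, 3), (1, 2, 3, 4)]:
    for J in [(1, 2, 3), (4, 5, 6)]:
        bound = math.exp(-len(J) * len(I) / (10 * d))
        assert intersection_small_prob(d, I, J) <= bound
```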
\begin{lemma}
\label{lem:7}
Assume that we have a finite decreasing sequence
$0\le u_d\le\ldots\le u_2\le u_1\le(1-\delta_0)/2$ for some
$\delta_0\in(0, 1)$. Suppose that $I\subseteq\mathbb{N}_d$ satisfies
$\delta_1d\le |I|\le d$ for some $\delta_1\in(0, 1]$. Then for every
$J=(d_0, d]\cap\mathbb{Z}$ with $0\le d_0\le d$ we have
\begin{align*}
\mathbb E\Big[\exp\Big({-\sum_{j\in\sigma(I)\cap J}u_j}\Big)\Big]
\le 3\exp\Big({-\frac{\delta_0\delta_1}{20}\sum_{j\in J}u_j}\Big).
\end{align*}
\end{lemma}
\begin{lemma}
\label{lem:8}
For $d, N\in\mathbb{N}$, $\varepsilon\in (0, 1/(50q)]$ and an integer
$1\le r\le d$ we define
\[
E=\{x\in B^q_N\cap\mathbb{Z}^d : \sum_{i=1}^r|x_i|^q<\varepsilon^{q+1}\kappa(d, N)^q r\}.
\]
If $\kappa(d, N)\ge10$, then we have
\begin{align}
\label{eq:37}
|E|\le 4e^{-\frac{\varepsilon r}{10}}|B^q_N\cap\mathbb{Z}^d|.
\end{align}
As a consequence, $B^q_N\cap\mathbb{Z}^d$ can be written as a disjoint sum
\begin{align}
\label{eq:38}
\begin{split}
B^q_N\cap\mathbb{Z}^d &= \Big( \bigcup_{\varepsilon^{q+1}\kappa(d, N)^q r\leq l\le N^q}\big(B_{l^{1/q}}^{q,(r)}\cap\mathbb{Z}^r\big) \times\big(S^{q,(d-r)}_{(N^q-l)^{1/q}}\cap\mathbb{Z}^{d-r}\big) \Big) \cup E',
\end{split}
\end{align}
where $E' \subset \mathbb{Z}^d$ satisfies $|E'|\le 4e^{-\frac{\varepsilon r}{10}}|B^q_N\cap\mathbb{Z}^d|$.
\end{lemma}
\begin{proof}
Let $\delta_1\in(0, 1/(10q)]$ be such that $\delta_1\ge5\varepsilon$,
and define $I_x=\{i\in\mathbb{N}_d: |x_i|\ge\varepsilon\kappa(d, N)\}$. We
have $E\subseteq E_1\cup E_2$, where
\begin{align*}
E_1&=\{x\in B^q_N\cap\mathbb{Z}^d : \sum_{i\in I_x\cap\mathbb{N}_r}|x_i|^q<\varepsilon^{q+1}\kappa(d, N)^q r\ \text{ and }\ |I_x|\ge\delta_1d\},\\
E_2&=\{x\in B^q_N\cap\mathbb{Z}^d : |I_x|<\delta_1d\}.
\end{align*}
By Lemma \ref{lem:5} (with $\varepsilon_1=\delta_1$ and
$\varepsilon_2=\varepsilon$) we have
$|E_2|\le2e^{-\frac{d}{10}}|B^q_N\cap\mathbb{Z}^d|$, provided that
$\kappa(d, N)\ge10$. Observe that
\begin{align*}
|E_1|&=\sum_{x\in B^q_N\cap\mathbb{Z}^d}\frac{1}{d!}\sum_{\sigma\in{\rm Sym}(d)}\ind{E_1}(\sigma^{-1}\cdot x)\\
&=\sum_{x\in B^q_N\cap\mathbb{Z}^d}\mathbb P[\{\sigma\in{\rm Sym}(d) :
\sum_{i\in \sigma(I_x)\cap\mathbb{N}_r}|x_{\sigma^{-1}(i)}|^q<\varepsilon^{q+1}\kappa(d, N)^q r\ \text{ and }\ |\sigma(I_x)|\ge\delta_1d\}],
\end{align*}
since $I_{\sigma^{-1}\cdot x}=\sigma(I_x)$. Now by Lemma \ref{lem:6}
(with $J=\mathbb{N}_r$, $\delta_2=\frac{\delta_1}{5}$ and $\delta_1$ as
above) we obtain, for every $x\in B^q_N\cap\mathbb{Z}^d$ such that
$|I_x|\ge \delta_1 d$, the estimate
\begin{align*}
\mathbb P[\{\sigma\in&{\rm Sym}(d) :
\sum_{i\in \sigma(I_x)\cap\mathbb{N}_r}|x_{\sigma^{-1}(i)}|^q<\varepsilon^{q+1}\kappa(d, N)^q r\ \text{ and }\ |\sigma(I_x)|\ge\delta_1d\}]\\
&\le
\mathbb P[\{\sigma\in{\rm Sym}(d)
: |\sigma(I_x)\cap\mathbb{N}_r|\le\delta_2r\}]\le 2e^{-\frac{\delta_1 r}{10}},
\end{align*}
since
\[
\{\sigma\in{\rm Sym}(d) : \sum_{i\in \sigma(I_x)\cap\mathbb{N}_r}|x_{\sigma^{-1}(i)}|^q<\varepsilon^{q+1}\kappa(d, N)^q r\ \text{
and }\ |\sigma(I_x)|\ge\delta_1d \ \text{ and
}\ |\sigma(I_x)\cap\mathbb{N}_r|>\delta_2r\}=\emptyset.
\]
Thus $|E_1|\le 2e^{-\frac{\varepsilon r}{2}}|B^q_N\cap \mathbb{Z}^d|$, which
proves \eqref{eq:37}. To prove \eqref{eq:38} we write
\[
B^q_N\cap\mathbb{Z}^d=\bigcup_{l=0}^{N^q}\big(B_{l^{1/q}}^{q,(r)}\cap\mathbb{Z}^r\big) \times \big(S^{q,d-r}_{(N^q-l)^{1/q}}\cap\mathbb{Z}^{d-r}\big).
\]
Then we see that
\[
\Big(\bigcup_{l=0}^{N^q}\big(B_{l^{1/q}}^{q,(r)}\cap\mathbb{Z}^r\big) \times\big(S^{q,(d-r)}_{(N^q-l)^{1/q}}\cap\mathbb{Z}^{d-r}\big)\Big)\cap E^{\bf c} =\Big(\bigcup_{l\ge\varepsilon^{q+1}\kappa(d, N)^q r}\big(B_{l^{1/q}}^{q,(r)}\cap\mathbb{Z}^r\big) \times\big(S^{q,(d-r)}_{(N^q-l)^{1/q}}\cap\mathbb{Z}^{d-r}\big)\Big)\cap E^{\bf c},
\]
and consequently we obtain \eqref{eq:38} with some $E'\subseteq E$.
The proof is complete.
\end{proof}
\begin{lemma}
\label{lem:9}
For $d, N\in\mathbb{N}$ and $\varepsilon\in(0, 1/(50q)]$, if
$\kappa(d, N)\ge10$, then for every $1\le r\le d$ and $\xi\in\mathbb{T}^d$ we
have
\begin{align*}
|\mathfrak m^{B^q}_N(\xi)|\le\sup_{l\ge\varepsilon^{q+1}\kappa(d, N)^q r}|\mathfrak
m_{l^{1/q}}^{B^q,(r)}(\xi_1,\ldots, \xi_r)|+4e^{-\frac{\varepsilon r}{10}},
\end{align*}
where
\begin{align*}
\mathfrak m_{R}^{B^q,(r)}(\eta):=\frac{1}{|B_{R}^{q,(r)}\cap\mathbb{Z}^r|}\sum_{x\in
B_{R}^{q,(r)}\cap\mathbb{Z}^r}e^{2\pi i \eta\cdot x},\qquad \eta\in \mathbb{T}^r,
\end{align*}
is the lower-dimensional multiplier with $r\in \mathbb{N}$ and $R>0$.
\end{lemma}
\begin{proof}
We identify $\mathbb{R}^d\equiv\mathbb{R}^r\times\mathbb{R}^{d-r}$ and $\mathbb{T}^d\equiv\mathbb{T}^r\times\mathbb{T}^{d-r}$ and we will write $\mathbb{R}^d\ni x=(x^1, x^2)\in \mathbb{R}^r\times\mathbb{R}^{d-r}$ and $\mathbb{T}^d\ni \xi=(\xi^1, \xi^2)\in \mathbb{T}^r\times\mathbb{T}^{d-r}$ respectively.
Invoking \eqref{eq:38} one obtains
\begin{align*}
&|\mathfrak m^{B^q}_N(\xi)|\\
&\le\frac{1}{|B^q_N\cap\mathbb{Z}^d|}\sum_{l\ge\varepsilon^{q+1}\kappa(d, N)^q r}\;\sum_{x^2\in
S_{(N^q-l)^{1/q}}^{q,d-r}\cap\mathbb{Z}^{d-r}}|B_{l^{1/q}}^{q,(r)}\cap\mathbb{Z}^r|\frac{1}{|B_{l^{1/q}}^{q,(r)}\cap\mathbb{Z}^r|}\Big|\sum_{x^1\in
B_{l^{1/q}}^{q,(r)}\cap\mathbb{Z}^r}e^{2\pi i \xi^1\cdot
x^1}\Big|+4e^{-\frac{\varepsilon r}{10}}\\
& \le\sup_{l\ge\varepsilon^{q+1}\kappa(d, N)^q r}|\mathfrak
m_{l^{1/q}}^{B^q,(r)}(\xi_1,\ldots, \xi_r)|+4e^{-\frac{\varepsilon r}{10}}.
\end{align*}
In the last inequality the disjointness in the decomposition from \eqref{eq:38} has been used.
\end{proof}
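For small parameters the lower-dimensional multiplier $\mathfrak m_R^{B^q,(r)}$ can be evaluated by direct enumeration. The following sketch (function names are ours, not from the paper) computes the normalised exponential sum and illustrates the basic properties $\mathfrak m_R^{B^q,(r)}(0)=1$ and $|\mathfrak m_R^{B^q,(r)}(\eta)|\le1$; by the symmetry of the lattice ball the value is real.

```python
import itertools, math, cmath

def lattice_ball(q, r, R):
    """All x in Z^r with |x_1|^q + ... + |x_r|^q <= R^q (brute force; small r, R only)."""
    rng = range(-int(R), int(R) + 1)
    return [x for x in itertools.product(rng, repeat=r)
            if sum(abs(t) ** q for t in x) <= R ** q]

def multiplier(q, r, R, eta):
    """The normalised exponential sum m_R^{B^q,(r)}(eta) over the lattice ball."""
    pts = lattice_ball(q, r, R)
    s = sum(cmath.exp(2j * math.pi * sum(e * t for e, t in zip(eta, x)))
            for x in pts)
    return s / len(pts)
```

This brute-force evaluation is of course exponential in $r$ and only meant as a sanity check of the normalisation.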
The next two lemmas give information on the size difference between the balls $B_R^{q,(r)}$ and their shifts $z+B_R^{q,(r)}$ for $z\in \mathbb{R}^r$.
\begin{lemma}
\label{lem:10}
Let $R\ge1$ and let $r\in\mathbb{N}$ be such that $r\le R^{\delta}$ for some
$\delta\in(0, q/(q+1))$. Then for every $z\in\mathbb{R}^r$ we have
\begin{align}
\label{eq:40}
\big||(z+B_R^{q,(r)})\cap\mathbb{Z}^r|-|B_R^{q,(r)}|\big|\le |B_R^{q,(r)}|r^{(q+1)/q}R^{-1}e^{r^{(q+1)/q}/R}\le e|B_R^{q,(r)}|R^{-1+(q+1)\delta/q}.
\end{align}
\end{lemma}
\begin{proof}
For the proof we refer to \cite[Lemma 2.9]{BMSW2}.
\end{proof}
\begin{lemma}
\label{lem:11}
Let $R\ge1$ and let $r\in\mathbb{N}$ be such that $r\le R^{\delta}$ for some
$\delta\in(0, q/(q+1))$. Then for every $z\in\mathbb{R}^r$ we have
\begin{align*}
\big|\big(B_{R}^{q,(r)}\cap\mathbb{Z}^r\big)\triangle\big((z+B_{R}^{q,(r)})\cap\mathbb{Z}^r\big)\big|&\le
4e\big(r|z|R^{-1}e^{r|z|R^{-1}}+e^{r|z|R^{-1}}R^{-1+(q+1)\delta/q}\big)|B_R^{q,(r)}|\\
&\le
4e\big(|z|R^{-1+\delta}e^{|z|R^{-1+\delta}}+
e^{|z|R^{-1+\delta}}R^{-1+(q+1)\delta/q}\big)|B_R^{q,(r)}|.
\end{align*}
\end{lemma}
\begin{proof}
For the proof we refer to \cite[Lemma 2.10]{BMSW2}.
\end{proof}
We now recall the dimension-free estimates for the multipliers $m^{B^q_R}(\xi):=|B^q_R|^{-1}\mathcal F(\ind{B^q_R})(\xi)$ for $\xi\in\mathbb{R}^d$.
\begin{lemma}[{\cite[Lemma 2.11]{BMSW2}}]
\label{lem:19}
There exist constants $c_q,C>0$ independent of $d$ and such that for
every $R>0$ and $\xi\in\mathbb{R}^d$ we have
\begin{equation*}
|m^{B^q_R}(\xi)|\leq C (c_q R d^{-1/q} |\xi|)^{-1},
\quad \text{ and }\quad
|m^{B^q_R}(\xi)-1|\leq C (c_q R d^{-1/q} |\xi|).
\end{equation*}
\end{lemma}
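In dimension $d=1$ the ball $B^q_R$ is the interval $[-R,R]$ for every $q$, and the multiplier is explicit: $m^{B^q_R}(\xi)=\sin(2\pi R\xi)/(2\pi R\xi)$. The sketch below checks both estimates of the lemma numerically in this one-dimensional case, with the explicit constants $C=1$ and $c_q=2\pi$ (which are specific to this illustration, not the constants of the lemma).

```python
import math

def m_interval(R, xi):
    """Multiplier of [-R, R] in d = 1: the average of e^{2 pi i xi y} over the interval."""
    x = 2 * math.pi * R * xi
    return 1.0 if x == 0 else math.sin(x) / x
```

The first bound holds because $|\sin|\le1$, and the second because $|e^{i\theta}-1|\le|\theta|$ averaged over the interval.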
Lemma \ref{lem:11} and Lemma \ref{lem:19} are essential in proving the following estimate.
\begin{lemma}
\label{lem:20}
There exists a constant $C_q>0$ such that for every $\delta\in(0, 1/2)$ and for all
$r\in\mathbb{N}$ and $R>0$ satisfying $1\le r\le R^{\delta}$ we have
\begin{align*}
|\mathfrak{m}^{B^q, (r)}_R(\eta)|\le C_q \big(\kappa(r, R)^{-\frac{1}{3}+\frac{2\delta}{3}}
+r\kappa(r, R)^{-\frac{1+\delta}{3}}+\big(\kappa(r, R)\|\eta\|\big)^{-1}\big)
\end{align*}
for every $\eta\in\mathbb{T}^r$.
\end{lemma}
\begin{proof}
The inequality is obvious when $R\le 16$, so it suffices to consider $R>16$.
Firstly, we assume that
$\max\{\|\eta_1\|,\ldots,\|\eta_r\|\}>\kappa(r, R)^{-\frac{1+\delta}{3}}$. Let $M=\big\lfloor
\kappa(r, R)^{\frac{2-\delta}{3}}\big\rfloor$ and assume without loss of generality that $\|\eta_1\|>\kappa(r, R)^{-\frac{1+\delta}{3}}$. Then
\begin{align}
\label{eq:48}
\begin{split}
|\mathfrak{m}^{B^q, (r)}_R(\eta)|&\le
\frac{1}{|B_{R}^{q,(r)}\cap\mathbb{Z}^r|}\sum_{x\in
B_{R}^{q,(r)}\cap\mathbb{Z}^r}\frac{1}{M}\Big|\sum_{s=1}^Me^{2\pi i
(x+se_1)\cdot\eta}\Big|\\
&\quad +\frac{1}{M}\sum_{s=1}^M\frac{1}{|B_{R}^{q,(r)}\cap\mathbb{Z}^r|}\Big|\sum_{x\in
B_{R}^{q,(r)}\cap\mathbb{Z}^r}e^{2\pi i
x\cdot\eta}-e^{2\pi i
(x+se_1)\cdot\eta}\Big|.
\end{split}
\end{align}
Since $\kappa(r,R)\ge 1$ we now see that
\begin{align}
\label{eq:49}
\frac{1}{M}\Big|\sum_{s=1}^Me^{2\pi i
(x+se_1)\cdot\eta}\Big|\le M^{-1}\|\eta_1\|^{-1}\le
2\kappa(r, R)^{-\frac{1}{3}+\frac{2\delta}{3}}.
\end{align}
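The estimate \eqref{eq:49} rests on the standard geometric-sum bound $\big|\sum_{s=1}^M e^{2\pi i s\theta}\big|=\big|\frac{\sin(\pi M\theta)}{\sin(\pi\theta)}\big|\le\frac{1}{2\|\theta\|}$, so that the average is at most $M^{-1}\|\theta\|^{-1}$. A quick numeric confirmation (helper names are ours):

```python
import cmath

def avg_exp_sum(M, theta):
    """(1/M) * | sum_{s=1..M} e^{2 pi i s theta} |, as on the left of (49)."""
    return abs(sum(cmath.exp(2j * cmath.pi * s * theta)
                   for s in range(1, M + 1))) / M

def dist_to_Z(theta):
    """||theta||: the distance from theta to the nearest integer."""
    return abs(theta - round(theta))
```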
We have assumed that $r\le R^{\delta}$, thus by Lemma \ref{lem:11}, with $z=se_1$
and $s\le M\le \kappa(r, R)^{\frac{2-\delta}{3}}$, we obtain
\begin{align}
\label{eq:50}
\begin{split}
\frac{1}{|B_{R}^{q,(r)}\cap\mathbb{Z}^r|}\Big| & \sum_{x\in B_{R}^{q,(r)}\cap\mathbb{Z}^r}e^{2\pi i x\cdot\eta} - e^{2\pi i
(x+se_1)\cdot\eta}\Big|\\
&\le\frac{1}{|B_{R}^{q,(r)}\cap\mathbb{Z}^r|}\big|\big(B_R^{q,(r)}\cap\mathbb{Z}^r\big)\triangle\big((se_1+B_R^{q,(r)})\cap\mathbb{Z}^r\big)\big|\\
&\le 8e\big(srR^{-1}e^{srR^{-1}}+e^{srR^{-1}}R^{-1+(q+1)\delta/q}\big)\\
&\le 16e^2\kappa(r, R)^{-\frac{1}{3}+\frac{2\delta}{3}},
\end{split}
\end{align}
since $srR^{-1}\le \kappa(r, R)^{\frac{2-\delta}{3}}R^{-1+\delta}\le\kappa(r, R)^{-\frac{1}{3}+\frac{2\delta}{3}}\le1$ and $R^{-1+(q+1)\delta/q} \leq R^{-1+3\delta/2}\le R^{-\frac{1}{3}+\frac{2\delta}{3}}$, and for $R>16$ we also have
$$
|B_{R}^{q,(r)}\cap\mathbb{Z}^r| \geq |B_{R-r^{1/q}}^{q,(r)}| \geq |B_{R-r^{1/2}}^{q,(r)}| = |B_R^{q,(r)}|\bigg(1-\frac{r^{1/2}}{R}\bigg)^r \geq |B_R^{q,(r)}|\big(1-r^{3/2}R^{-1}\big)\ge|B_R^{q,(r)}|/2.
$$
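The last display compares the lattice-point count with the volume. For $q=2$ and $r=2$ this is easy to check directly: the inequality $|B_{R}^{q,(r)}\cap\mathbb{Z}^r|\ge|B_R^{q,(r)}|/2$ can be confirmed numerically for discs (an illustration only; the argument above is what gives the general case).

```python
import math

def disc_lattice_count(R):
    """Number of points of Z^2 in the closed Euclidean disc of radius R (integer R)."""
    total = 0
    for x in range(-R, R + 1):
        ymax = math.isqrt(R * R - x * x)  # largest y >= 0 with x^2 + y^2 <= R^2
        total += 2 * ymax + 1
    return total
```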
Combining \eqref{eq:48} with
\eqref{eq:49} and \eqref{eq:50} we obtain
\[
|\mathfrak{m}^{B^q, (r)}_R(\eta)|\le (16e^2+2)\kappa(r, R)^{-\frac{1}{3}+\frac{2\delta}{3}}.
\]
Secondly, we assume that
$\max\{\|\eta_1\|,\ldots,\|\eta_r\|\}\le\kappa(r, R)^{-\frac{1+\delta}{3}}$. Observe that
by \eqref{eq:40} we have
\begin{align*}
\bigg| \frac{1}{|B_{R}^{q,(r)}\cap\mathbb{Z}^r|}-
\frac{1}{|B_{R}^{q,(r)}|}\bigg|\le\frac{eR^{-1+(q+1)\delta/q}}{|B_{R}^{q,(r)}\cap\mathbb{Z}^r|}
\le\frac{2e\kappa(r, R)^{-\frac{1}{3}+\frac{2\delta}{3}}}{|B_{R}^{q,(r)}|}.
\end{align*}
Then $|\mathfrak{m}^{B^q, (r)}_R(\eta)|$ is bounded by
\begin{align}
\label{eq:42}
\begin{split}
\Big|\mathfrak{m}^{B^q, (r)}_R(\eta)&-\frac{1}{|B_{R}^{q,(r)}|}\mathcal
F(\ind{B_R^{q,(r)}})(\eta)\Big|+\frac{1}{|B_{R}^{q,(r)}|}\big|\mathcal
F(\ind{B_R^{q,(r)}})(\eta)\big|\\
&\le 2e\kappa(r, R)^{-\frac{1}{3}+\frac{2\delta}{3}}+\frac{1}{|B_{R}^{q,(r)}\cap\mathbb{Z}^r|}\Big|\sum_{x\in
B_{R}^{q,(r)}\cap\mathbb{Z}^r}e^{2\pi i
x\cdot\eta}-\int_{B_R^{q,(r)}}e^{2\pi i
y\cdot\eta}\mathrm{d} y\Big| \\ & \quad + \frac{1}{|B_{R}^{q,(r)}|}\big|\mathcal
F(\ind{B_R^{q,(r)}})(\eta)\big|.
\end{split}
\end{align}
Let $Q^{(r)}=[-1/2, 1/2]^r$ and note that by Lemma \ref{lem:11} with
$z=t\in Q^{(r)}$ we obtain
\begin{align}
\label{eq:51}
\begin{split}
\frac{1}{|B_{R}^{q,(r)}\cap\mathbb{Z}^r|}\Big| & \sum_{x\in
B_{R}^{q,(r)}\cap\mathbb{Z}^r}e^{2\pi i
x\cdot\eta}-\int_{B_R^{q,(r)}}e^{2\pi i
y\cdot\eta}\mathrm{d} y\Big|\\
&=\frac{1}{|B_{R}^{q,(r)}\cap\mathbb{Z}^r|}\Big|\sum_{x\in
\mathbb{Z}^r}\int_{Q^{(r)}}e^{2\pi i
x\cdot\eta}\ind{B_R^{q,(r)}}(x)-e^{2\pi i
(x+t)\cdot\eta}\ind{B_R^{q,(r)}}(x+t)\mathrm{d} t\Big|\\
&\le\frac{1}{|B_{R}^{q,(r)}\cap\mathbb{Z}^r|}\int_{Q^{(r)}}\big|(B_R^{q,(r)}\cap\mathbb{Z}^r)\triangle\big((t+B_R^{q,(r)})\cap\mathbb{Z}^r\big)\big|\mathrm{d}
t\\
&\quad +
\frac{1}{|B_{R}^{q,(r)}\cap\mathbb{Z}^r|}\sum_{x\in
\mathbb{Z}^r}\ind{B_R^{q,(r)}}(x)\int_{Q^{(r)}}|e^{2\pi i
x\cdot\eta}-e^{2\pi i
(x+t)\cdot\eta}|\mathrm{d} t\\
& \le16e^2\kappa(r, R)^{-\frac{1}{3}+\frac{2\delta}{3}}+2\pi\big(\|\eta_1\|+\ldots+\|\eta_r\|\big)\\
&\le16e^2\kappa(r, R)^{-\frac{1}{3}+\frac{2\delta}{3}}+2\pi r\kappa(r, R)^{-\frac{1+\delta}{3}}.
\end{split}
\end{align}
Finally, by Lemma \ref{lem:19} we obtain
\begin{align*}
\frac{1}{|B_R^{q,(r)}|}|\mathcal F(\ind{B_R^{q,(r)}})(\eta)|\le C\big(c_q\kappa(r, R)\|\eta\|\big)^{-1}.
\end{align*}
Combining this with \eqref{eq:42} and \eqref{eq:51} we conclude
\[
|\mathfrak{m}^{B^q, (r)}_R(\eta)|\le(16e^2+2e)\kappa(r, R)^{-\frac{1}{3}+\frac{2\delta}{3}}+2\pi r\kappa(r, R)^{-\frac{1+\delta}{3}}
+C c_q^{-1}\big(\kappa(r, R)\|\eta\|\big)^{-1},
\]
which completes the proof.
\end{proof}
\begin{lemma}
\label{lem:13}
For every $\delta\in(0, 1/2)$ and $\varepsilon\in(0, 1/(50q)]$ there is a
constant $C_{q,\delta, \varepsilon}>0$ such
that for every $d, N\in\mathbb{N}$, if $r$ is an integer such that $1\le r\le d$ and
$\max\{1, \varepsilon^{\frac{(q+1)\delta}{q}}\kappa(d, N)^{\delta}/2\}\le r\le \max\{1,\varepsilon^{\frac{(q+1)\delta}{q}}\kappa(d, N)^{\delta}\}$,
then for every $\xi=(\xi_1,\ldots,\xi_d)\in\mathbb{T}^d$ we have
\begin{align*}
|\mathfrak{m}^{B^q}_N(\xi)|\le C_{q,\delta, \varepsilon}\big(\kappa(d, N)^{-\frac{1}{3}+\frac{2\delta}{3}}+(\kappa(d, N)\|\eta\|)^{-1}\big),
\end{align*}
where $\eta=(\xi_1,\ldots, \xi_r)$.
\end{lemma}
\begin{proof}
If $\kappa(d, N)\le \varepsilon^{-\frac{q+1}{q}}$, then there is nothing to do, since the implied constant in question is allowed to depend on $q$, $\delta$, and $\varepsilon$. We will assume that $\kappa(d, N)\ge \varepsilon^{-\frac{q+1}{q}}$, which ensures that $\kappa(d, N)\ge10$.
In view of Lemma \ref{lem:9} we have
\begin{align*}
|\mathfrak{m}^{B^q}_N(\xi)|\le\sup_{R\ge \varepsilon^{(q+1)/q}\kappa(d, N) r^{1/q}}|\mathfrak{m}_{R}^{B^q,(r)}(\eta)|+4e^{-\frac{\varepsilon r}{10}},
\end{align*}
where $\eta=(\xi_1,\ldots,\xi_r)$. By Lemma \ref{lem:20}, since $r\le \varepsilon^{\frac{(q+1)\delta}{q}}\kappa(d, N)^{\delta}\le \kappa(r, R)^{\delta}\le R^{\delta}$, we obtain
\begin{align*}
|\mathfrak
m_{R}^{B^q,(r)}(\eta)|\lesssim_q \kappa(r, R)^{-\frac{1}{3}+\frac{2\delta}{3}}
+r\kappa(r, R)^{-\frac{1+\delta}{3}}+\big(\kappa(r, R)\|\eta\|\big)^{-1}.
\end{align*}
Combining the two estimates above with our assumptions we obtain the desired claim.
\end{proof}
We have prepared all necessary tools to
prove inequality \eqref{eq:23}. We shall be working under the
assumptions of Lemma \ref{lem:13} with $\delta=2/7$.
\begin{proof}[Proof of Proposition \ref{prop:2}]
Assume that $\varepsilon=1/(50q)$. If
$\kappa(d, N)\le2^{\frac{7}{2}}\cdot (50q)^{\frac{q+1}{q}}$ then
clearly \eqref{eq:23} holds. Therefore, we can assume that
$d, N\in\mathbb{N}$ satisfy
$2^{\frac{7}{2}}\cdot(50q)^{\frac{q+1}{q}} \le \kappa(d, N)\le 50qd^{1-1/q}$. We
choose an integer $1\le r\le d$ satisfying
$(50q)^{-\frac{2(q+1)}{7q}}\kappa(d, N)^{\frac{2}{7}}/2\le r\le (50q)^{-\frac{2(q+1)}{7q}}\kappa(d, N)^{\frac{2}{7}}$,
this is possible since
$(50q)^{-\frac{2(q+1)}{7q}}\kappa(d, N)^{\frac{2}{7}} \geq 2$ and
$(50q)^{-\frac{2(q+1)}{7q}}\kappa(d, N)^{\frac{2}{7}} \le d^{\frac{2(1-1/q)}{7}}\le d$.
By symmetry we may also assume that $\|\xi_1\|\ge\ldots\ge\|\xi_d\|$
and we shall distinguish two cases. Suppose first that
\begin{align*}
\|\xi_1\|^2+\ldots+\|\xi_r\|^2\ge\frac{1}{4}\|\xi\|^2.
\end{align*}
Then in view of Lemma \ref{lem:13} (with $\delta=2/7$ and
$r\simeq_q \kappa(d, N)^{\frac{2}{7}}$) we obtain
\begin{align*}
|\mathfrak m^{B^q}_N(\xi)|\le C_q\big(\kappa(d, N)^{-\frac{1}{7}}+(\kappa(d, N)\|\xi\|)^{-1}\big),
\end{align*}
and we are done. So we can assume that
\begin{align}
\label{eq:54}
\|\xi_1\|^2+\ldots+\|\xi_r\|^2\le\frac{1}{4}\|\xi\|^2.
\end{align}
Let $\varepsilon_1=1/10$ and assume first that
\begin{align}
\label{eq:55}
\|\xi_j\|\le\frac{\varepsilon_1^{1/q}}{10\kappa(d,N)}\quad\text{ for all } \quad r\le
j\le d.
\end{align}
We use the symmetries of $B^q_N\cap\mathbb{Z}^d$ to write
\begin{align*}
\mathfrak{m}^{B^q}_N(\xi)=\frac{1}{|B^q_N\cap\mathbb{Z}^d|}\sum_{x\in B^q_N\cap\mathbb{Z}^d}\prod_{j=1}^d \cos(2\pi x_j \xi_j).
\end{align*}
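The displayed identity uses only the invariance of $B^q_N\cap\mathbb{Z}^d$ under sign changes of each coordinate: averaging over sign flips turns every exponential into a product of cosines. For small parameters this can be checked directly (function names are ours):

```python
import itertools, math, cmath

def ball(q, d, N):
    """All x in Z^d with sum |x_j|^q <= N^q (brute force; small d, N only)."""
    rng = range(-N, N + 1)
    return [x for x in itertools.product(rng, repeat=d)
            if sum(abs(t) ** q for t in x) <= N ** q]

def m_exp(pts, xi):
    """Average of e^{2 pi i x . xi} over the point set."""
    return sum(cmath.exp(2j * math.pi * sum(a * b for a, b in zip(x, xi)))
               for x in pts) / len(pts)

def m_cos(pts, xi):
    """Average of prod_j cos(2 pi x_j xi_j) over the point set."""
    return sum(math.prod(math.cos(2 * math.pi * a * b) for a, b in zip(x, xi))
               for x in pts) / len(pts)
```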
Applying the Cauchy--Schwarz inequality, $\cos^2(2\pi x_j \xi_j)=1-\sin^2(2\pi x_j \xi_j)$ and $1-x\le e^{-x}$, we obtain
\begin{align}
\label{eq:56}
\begin{split}
|\mathfrak{m}^{B^q}_N(\xi)|^2\le \frac{1}{|B^q_N\cap\mathbb{Z}^d|}\sum_{x\in B^q_N\cap\mathbb{Z}^d}\exp\Big(-\sum_{j=r+1}^d\sin^2(2\pi x_j \xi_j)\Big).
\end{split}
\end{align}
For $x\in B^q_N\cap\mathbb{Z}^d$ we define
\begin{align*}
I_x&=\{i\in\mathbb{N}_d : \varepsilon\kappa(d, N)\le |x_i|\le 2\varepsilon_1^{-1/q}\kappa(d,N) \},\\
I_x'&=\{i\in\mathbb{N}_d : 2\varepsilon_1^{-1/q}\kappa(d,N)< |x_i| \},\\
I_x''&=\{i\in\mathbb{N}_d : \varepsilon\kappa(d,N)\le |x_i|\}=I_x\cup I_x',
\end{align*}
and
\[
E=\big\{x\in B^q_N\cap\mathbb{Z}^d : |I_x|\ge\varepsilon_1 d/2\big\}.
\]
Observe that
\begin{align*}
E^{\bf c}=&\big\{x\in B^q_N\cap\mathbb{Z}^d : |I_x|<\varepsilon_1 d/2\big\}
=\big\{x\in B^q_N\cap\mathbb{Z}^d : |I_x''|<\varepsilon_1 d/2+|I_x'|\big\}\\
\subseteq&\big\{x\in B^q_N\cap\mathbb{Z}^d : |I_x''|<\varepsilon_1 d/2+|I_x'|\text{
and } |I_x'|\le \varepsilon_1 d/2\big\}
\cup
\big\{x\in B^q_N\cap\mathbb{Z}^d : |I_x'|> \varepsilon_1 d/2\big\}.
\end{align*}
Then it is not difficult to see that
\[
E^{\bf c}\subseteq \big\{x\in B^q_N\cap\mathbb{Z}^d : |I_x''|<\varepsilon_1 d\big\},
\]
since $ \big\{x\in B^q_N\cap\mathbb{Z}^d : |I_x'|> \varepsilon_1 d/2\big\}=\emptyset$.
Then by Lemma \ref{lem:5} with $\varepsilon_2=\varepsilon$,
we obtain
\begin{align*}
|E^{\bf c}|\le |\big\{x\in B^q_N\cap\mathbb{Z}^d : |I_x''|<\varepsilon_1 d\big\}|\le 2e^{-\frac{d}{10}}|B^q_N\cap\mathbb{Z}^d|.
\end{align*}
Therefore, by \eqref{eq:56} we have
\begin{align*}
|\mathfrak{m}^{B^q}_N(\xi)|^2
&\le \frac{1}{|B^q_N\cap\mathbb{Z}^d|}\sum_{x\in
B^q_N\cap\mathbb{Z}^d}\exp\Big(-\sum_{j\in I_x \cap J_r}\sin^2(2\pi x_j
\xi_j)\Big)\ind{E}(x)+2e^{-\frac{d}{10}},
\end{align*}
where $J_r=\{r+1,\ldots, d\}$. Using \eqref{eq:103} and the definition of $I_x$ we have
\[
\sin^2(2\pi x_j \xi_j)\ge 16|x_j|^2\|\xi_j\|^2\ge 16\varepsilon^2\kappa(d,N)^2\|\xi_j\|^2,
\]
since $2|x_j|\|\xi_j\|\le 1/2$ by \eqref{eq:55}, and consequently we obtain
\begin{align}
\label{eq:58}
\begin{split}
\frac{1}{|B^q_N\cap\mathbb{Z}^d|}&\sum_{x\in
B^q_N\cap\mathbb{Z}^d}\exp\Big(-\sum_{j\in I_x\cap J_r}\sin^2(2\pi x_j
\xi_j)\Big)\ind{E}(x)\\
&\le
\frac{1}{|B^q_N\cap\mathbb{Z}^d|}\sum_{x\in
B^q_N\cap\mathbb{Z}^d\cap E}\exp\Big(-16\varepsilon^2\kappa(d,N)^2\sum_{j\in I_x\cap
J_r}\|\xi_j\|^2\Big)\le Ce^{-c\kappa(d,N)^2\|\xi\|^2}
\end{split}
\end{align}
for some constants $C, c>0$. In order to get the last inequality in \eqref{eq:58} observe that
\begin{align*}
\frac{1}{|B^q_N\cap\mathbb{Z}^d|}&\sum_{x\in B^q_N\cap\mathbb{Z}^d\cap E}
\exp\Big(-16\varepsilon^2\kappa(d,N)^2\sum_{j\in I_x\cap J_r}\|\xi_j\|^2\Big)\\
&=\frac{1}{|B^q_N\cap\mathbb{Z}^d|}\sum_{x\in
B^q_N\cap\mathbb{Z}^d\cap E}\frac{1}{d!}\sum_{\sigma\in{\rm Sym}(d)}\exp\Big(-16\varepsilon^2\kappa(d,N)^2\sum_{j\in \sigma(I_x)\cap
J_r}\|\xi_j\|^2\Big)\\
&=\frac{1}{|B^q_N\cap\mathbb{Z}^d|}\sum_{x\in
B^q_N\cap\mathbb{Z}^d\cap E}\mathbb E\bigg[\exp\Big(-16\varepsilon^2\kappa(d,N)^2\sum_{j\in \sigma(I_x)\cap
J_r}\|\xi_j\|^2\Big)\bigg],
\end{align*}
since $\sigma\cdot (B^q_N\cap\mathbb{Z}^d\cap E)=B^q_N\cap\mathbb{Z}^d\cap E$ for every $\sigma\in{\rm Sym}(d)$.
We now apply Lemma \ref{lem:7} with $\delta_1=\varepsilon_1/2$, $d_0=r$, $I=I_x$, $\delta_0=3/5$, and
\begin{displaymath}
u_j = \left\{ \begin{array}{rl}
16\varepsilon^2\kappa(d,N)^2 \|\xi_r\|^2 & \textrm{for } 1 \leq j \leq r, \\
16\varepsilon^2\kappa(d,N)^2 \|\xi_j\|^2 & \textrm{for } r+1 \leq j \leq d, \end{array} \right.
\end{displaymath}
noting that $16\varepsilon^2\kappa(d,N)^2 \|\xi_r\|^2 \leq 1/5$ by \eqref{eq:55}. We conclude that
\begin{align*}
\mathbb E\bigg[\exp\Big(-16\varepsilon^2\kappa(d,N)^2\sum_{j\in \sigma(I_x)\cap
J_r}\|\xi_j\|^2\Big)\bigg]\le 3 \exp\Big(-c'\kappa(d,N)^2\sum_{j=r+1}^d\|\xi_j\|^2\Big),
\end{align*}
holds for some $c'>0$ and for all $x\in B^q_N\cap\mathbb{Z}^d\cap E$.
This proves \eqref{eq:58} since by
\eqref{eq:54} we obtain
\begin{align*}
\exp\Big(-c'\kappa(d,N)^2\sum_{j=r+1}^d\|\xi_j\|^2\Big)\le \exp\Big(-\frac{c'\kappa(d,N)^2}{4}\sum_{j=1}^d\|\xi_j\|^2\Big).
\end{align*}
Assume now that \eqref{eq:55} does not hold. Then
\begin{align*}
\|\xi_j\|\ge\frac{\varepsilon_1^{1/q}}{10\kappa(d,N)}\quad\text{ for all
} \quad 1\le j\le r.
\end{align*}
Hence
\begin{align*}
\|\xi_1\|^2+\ldots+\|\xi_r\|^2\ge\frac{\varepsilon_1^{2/q}r}{100\kappa(d,N)^2}.
\end{align*}
Therefore, we invoke Lemma \ref{lem:13} with $\eta=(\xi_1,\ldots, \xi_r)$ again and obtain
\begin{align*}
|\mathfrak{m}^{B^q}_N(\xi)|&\lesssim_q \kappa(d, N)^{-\frac{1}{7}}+(\kappa(d, N)\|\eta\|)^{-1}\\
&\lesssim_q \kappa(d, N)^{-\frac{1}{7}},
\end{align*}
since $r\simeq_q \kappa(d, N)^{\frac{2}{7}}$. This completes the proof of Proposition \ref{prop:2}.
\end{proof}
\end{document}
\begin{document}
\title{A Solution to the 1-2-3 Conjecture}
\author{Ralph Keusch \\ \small{[email protected]}}
\date{\today}
\maketitle
\begin{abstract}
We show that for every graph without isolated edges, the edges can be assigned weights from $\{1,2,3\}$ so that no two neighbors receive the same sum of incident edge weights. This solves a conjecture of Karo\'{n}ski, {\L}uczak, and Thomason from 2004.
\end{abstract}
\section{Introduction}\label{section:introduction}
Let $G=(V,E)$ be a simple graph. A $k$-edge-weighting is a function $\omega: E \rightarrow \{1, \ldots, k\}$. Given an edge-weighting $\omega$, for each vertex $v \in V$ we denote by $s_{\omega}(v):=\sum_{w \in N(v)} \omega(\{v,w\})$ its \emph{weighted degree}. We say that two vertices $v,w \in V$ have a coloring conflict if $s_{\omega}(v)=s_{\omega}(w)$ and $\{v,w\} \in E$. If there is no coloring conflict in the graph, $\omega$ is called \emph{vertex-coloring}. We are interested in finding the smallest integer $k$ that admits a vertex-coloring $k$-edge-weighting for the graph $G$. This question arose as the local variant of the graph irregularity strength problem, where one seeks to find a $k$-edge-weighting so that \emph{all} nodes receive different weighted degrees \cite{chartrand1986irregular}. In 2004, Karo\'{n}ski, {\L}uczak, and Thomason conjectured that for each connected graph with at least two edges, a vertex-coloring $3$-edge-weighting exists \cite{karonski2004edge}. Soon after, the problem became known as the 1-2-3 Conjecture and gained a lot of attention due to its elegant statement.
Karo\'{n}ski et al.\ verified the conjecture for $3$-colorable graphs \cite{karonski2004edge}. Afterwards, Addario-Berry, Dalal, McDiarmid, Reed, and Thomason provided the first finite, general upper bound of $k=30$ \cite{addarioberry2007vertexcolouring}. The general result was improved to $k=16$ by Addario-Berry et al.\ \cite{addarioberry2008degree} and further to $k=13$ by Wang and Yu \cite{wang2008onvertex}. In 2010, Kalkowski, Karo\'{n}ski, and Pfender made a big step and proved upper bounds of $k=6$ and $k=5$, using a simple algorithmic argument \cite{kalkowski2009vertex, kalkowski2010vertexcoloring}.
More results have been dedicated to specific graph classes. For $d$-regular graphs, a bound of $k=4$ has been proven for $d \le 3$ \cite{karonski2004edge}, for $d=5$ \cite{bensmail2018result}, and then in general \cite{przybylo2021theconjecture}. Furthermore, Przyby{\l}o gave an affirmative answer to the conjecture for $d$-regular graphs, given that $d \ge 10^8$ \cite{przybylo2021theconjecture}. In addition, the conjecture was confirmed by Zhong for ultra-dense graphs, i.e., for all graphs $G=(V,E)$ where the minimum degree is at least $0.99985|V|$ \cite{zhong2018theconjecture}. Recently, Przyby{\l}o established the statement as well for all graphs where the minimum degree is sufficiently large \cite{przybylo2020conjecture}. Concretely, by applying the Lov\'{a}sz Local Lemma, he proved that there exists a constant $C>0$ such that the conjecture holds for all graphs with $\delta(G)\ge C\log\Delta(G)$. However, $3$ weights are not always necessary. For instance, a random graph $G(n,p)$ asymptotically almost surely admits a $2$-edge-weighting without coloring conflicts \cite{addarioberry2008degree}. Chang et al.\ have shown that $k=2$ is possible as well for all $d$-regular bipartite graphs with $d \ge 3$ \cite{chang2011vertex}. Regarding the computational complexity, Dudek and Wajc proved that it is NP-complete to determine whether a given graph $G$ supports a vertex-coloring $2$-edge-weighting \cite{dudek2011complexity}, whereas the same decision problem is in $P$ for bipartite graphs \cite{thomassen2016flow}.
Many closely related problems have been analyzed. A natural variant are total weightings as introduced by Przyby{\l}o and Wo\'{z}niak \cite{przybylo2010conjecture}, where the edges receive weights from $\{1, \ldots, k\}$ as before, but additionally each vertex gets a weight from $\{1, \ldots, \ell\}$. The weighted degree of a vertex is then defined as the sum of all incident edge weights, plus the weight that it received itself. Przyby{\l}o and Wo\'{z}niak conjectured that for each graph there exists a vertex-coloring total weighting with vertices and edges weighted by the set $\{1,2\}$. While this question is still open, a simple argument from Kalkowski shows that each graph admits a vertex-coloring total weighting with vertex weights from $\{1,2\}$ and edge weights from $\{1,2,3\}$ \cite{kalkowski2009note}.
A weaker version of vertex-coloring edge-weightings can be obtained by defining the vertex colors as multisets of incident edge-weights, instead of sums \cite{karonski2004edge, addarioberry2005vertex}. Recently, Vu\v{c}kovi\'{c} reached the optimal bound $k=3$ for this variant \cite{vuckovic2018multiset}. Conversely, a harder variant is list colorings, where each edge $e \in E$ has its own list $L(e)$ of allowed edge-weights \cite{bartnicki2009weight}. In fact, the application of Alon's Nullstellensatz \cite{alon1999combinatorial} led to significant results on this intriguing problem \cite{seamone2014bounding, cao2021total, zhu2022every} and on its variant for total weightings \cite{przybylo2011total, wong2011total, wong2014every}. Many more variations of vertex-coloring edge-weightings have been studied, e.g., variations for hypergraphs \cite{kalkowski2017conjecture, bennett2016weak} or directed graphs \cite{bartnicki2009weight, khatirinejad2011digraphs}. For a general overview of the progress on the $1$-$2$-$3$ Conjecture and on related problems, we refer to the early survey of Seamone \cite{seamone2012theconjecture} and to the recent survey of Grytczuk \cite{grytczuk2020from}.
Turning back to the original question, the general upper bound was recently reduced to $k=4$ \cite{keusch2022vertex}. With the present paper, we close the final gap and confirm the conjecture.
\begin{theorem}\label{thm:main}
Let $G=(V,E)$ be a graph without connected component isomorphic to $K_2$. Then there exists an edge-weighting $\omega:E \rightarrow \{1,2,3\}$ such that for each edge $\{v,w\} \in E$,
$$\sum_{u \in N(v)}\omega(\{u,v\}) \neq \sum_{u \in N(w)}\omega(\{u,w\}).$$
\end{theorem}
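On small graphs the theorem can be verified by exhaustive search over all $3^{|E|}$ weightings. The sketch below (helper name is ours) returns a conflict-free weighting or \texttt{None}; note that $K_2$ admits none, which is why it is excluded in the statement.

```python
import itertools

def find_weighting(vertices, edges, weights=(1, 2, 3)):
    """Brute-force search for a vertex-coloring edge-weighting.
    Returns a dict edge -> weight, or None. Exponential in |E|; small graphs only."""
    edges = [tuple(e) for e in edges]
    for assignment in itertools.product(weights, repeat=len(edges)):
        s = {v: 0 for v in vertices}          # weighted degrees
        for (u, v), w in zip(edges, assignment):
            s[u] += w
            s[v] += w
        # accept iff no edge joins two vertices of equal weighted degree
        if all(s[u] != s[v] for u, v in edges):
            return dict(zip(edges, assignment))
    return None
```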
In Section~\ref{section:mainideas}, we give an overview of the proof strategy, describe how the proof is elaborated upon the ideas from \cite{keusch2022vertex}, and collect several auxiliary results. Afterwards, in Section~\ref{section:main} we formally prove the theorem. Finally, we conclude with a few remarks in Section~\ref{section:remarks}.
\section{Main ideas and proof preparations}\label{section:mainideas}
We use the following notation. Let $G=(V,E)$ be a graph, let $W \subseteq V$, and let $C=(S,T)$ be a cut. Then we denote by $E(W)$ the edge set of the induced subgraph $G[W]$ and by $E(S,T)$ the subset of edges having one endpoint in $S$ and the other in $T$ (the cut edges of $C$). For a vertex $v \in V$, $N(v)$ stands for its neighborhood and $\deg_W(v) := |N(v) \cap W|$ is the number of neighbors in $W$. Finally, for two disjoint subsets $S, T \subseteq V$, denote by $G(S,T)$ the bipartite subgraph with vertex set $S \cup T$ and edge set $E(S,T)$.
As a starting point, let us summarize the strategy that was introduced in \cite{keusch2022vertex} to construct a conflict-free edge-weighting with weights $\{1,2,3,4\}$. There, we started with a maximum cut $C=(S,T)$ and initial weights from $\{2,3\}$, making the weighted degrees of nodes in $S$ even and those of nodes in $T$ odd. Depending on the remaining coloring conflicts, an auxiliary flow problem on $G(S,T)$ was carefully designed. Then, the resulting maximum flow yielded a collection of edge-disjoint paths, along which the edge-weights could be changed in order to make the edge-weighting vertex-coloring.
We are going to extend that approach as follows. We partition the vertex set into two sets $R$ and $B$ of \emph{red} and \emph{blue} nodes, where the red vertices form an independent set. We start by giving each edge weight $2$ and apply the strategy from \cite{keusch2022vertex} only to the subgraph $G[B]$, consequently only taking a maximum cut $C=(S,T)$ of $G[B]$. However, when putting the weights onto $E(B)$, we do not yet finalize the vertex-weights of the blue nodes. Instead, we only ensure that there are no coloring conflicts inside $S$ and inside $T$, which is possible with the weight set $\{1,2,3\}$. Afterwards, we cautiously construct a weighting for $E(R,B)$ such that the weighted degrees of vertices in $R$ remain even, but the weighted degrees of the blue nodes become odd. More precisely, the weighted degree of nodes in $S$ should obtain values $1 \pmod{4}$ and those of $T$ obtain values $3 \pmod{4}$. Consequently, all coloring conflicts will be resolved and the edge-weighting becomes vertex-coloring.
Unfortunately, our construction requires the set $B$ to be of even cardinality, forcing us to handle several different situations when proving Theorem~\ref{thm:main} in Section~\ref{section:main}. In some cases, the described strategy only works when one or even two vertices are removed from the graph. Afterwards, when re-inserting the nodes, extending the edge-weighting to the full graph sometimes requires an additional round of weight modifications, for instance along a path $p$.
We now start with the formal preparations for the proof. To find a suitable independent set $R$ of red nodes, we will apply the following simple result.
\begin{lemma}\label{lemma:divide}
Let $G=(V,E)$ be a connected graph, let $v,w \in V$, and let $p$ be a shortest $v$-$w$-path. Then there exists an independent set $R \subseteq V$ such that
\begin{enumerate}[(i)]
\item the graph $G(R,V \setminus R)$ is connected, and
\item the path $p$ is alternating between $R$ and $V \setminus R$, where we can choose whether $v \in R$ or $v \in V \setminus R$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $p=\{v_1:=v, v_2, \ldots, v_k:=w\}$ be a shortest $v$-$w$-path in $G$. If $v=v_1$ is required to be in $R$, we start with $R:=\{v_1\}$, otherwise we start with $R:=\emptyset$. Next, we put every second vertex of $p$ into $R$. Because $p$ is a shortest $v$-$w$-path, $R$ remains an independent set and the required properties hold at least for the induced subgraph $G[\{v_1, \ldots, v_k\}]$.
Let $v_{k+1}, \ldots, v_n$ be an ordering of the remaining vertices of $V$ (if there are any), such that for each $i>k$, $v_i$ has at least one neighbor $v_j$ with $j<i$. We are going to process the remaining vertices one after another, thereby extending $R$, and prove by induction that for each $i>k$, the graph $G[\{v_1, \ldots, v_{i}\}]$ satisfies property (i).
Consider a vertex $v_i$ and assume that $G[\{v_1, \ldots, v_{i-1}\}]$ satisfies the precondition. If $v_i$ already has a neighbor $v_j \in R$, we do not extend $R$, otherwise we extend the set $R$ by adding the node $v_i$. In both cases, $R$ obviously remains an independent set and $G(R, \{v_1, \ldots, v_i\} \setminus R)$ is connected, thus the statement follows by induction.
\end{proof}
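The construction in the proof can be sketched as follows: a BFS tree gives a shortest $v$-$w$-path, the path is alternated, and the remaining vertices are processed in an order where each has an earlier neighbor, joining $R$ exactly when they have no red neighbor yet. All names are ours; this is an illustration of the lemma, not code from the paper.

```python
from collections import deque

def red_set(adj, v, w, v_red=True):
    """Sketch of the construction in the lemma. adj maps each vertex to its
    neighbor list; v, w are the path endpoints; v_red decides if v goes to R."""
    # BFS tree rooted at v yields a shortest v-w path
    parent = {v: None}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for x in adj[u]:
            if x not in parent:
                parent[x] = u
                queue.append(x)
    path, u = [], w
    while u is not None:
        path.append(u)
        u = parent[u]
    path.reverse()
    # alternate the path, starting with v iff v_red
    R = {path[i] for i in range(0 if v_red else 1, len(path), 2)}
    # order the remaining vertices so that each has an earlier neighbor
    seen, order, queue = set(path), [], deque(path)
    while queue:
        u = queue.popleft()
        for x in adj[u]:
            if x not in seen:
                seen.add(x)
                order.append(x)
                queue.append(x)
    # a vertex joins R exactly when it has no red neighbor yet
    for u in order:
        if not any(x in R for x in adj[u]):
            R.add(u)
    return R
```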
Once having partitioned the vertex set into the independent set $R$ of red nodes and the set $B:=V \setminus R$ of blue nodes, we will start assigning weights to the edges $E(B)$. At the same time, we introduce an odd-valued \emph{designated color} $f(v)$ for each blue vertex $v \in B$, so that the function $f$ is a proper vertex-coloring. We thereby take into account that the blue-red-edges contribute to the weighted degrees as well. Because we only have edge-weights $\{1,2,3\}$ available, our capabilities are limited and we have to keep the weighted degrees in $B$ even for the moment. But we can construct the edge-weighting of $E(B)$ so that the current weights and the designated colors almost coincide and only differ by $1$. Later, we will overcome the remaining differences when carefully assigning edge-weights to $E(R,B)$.
With the following lemma, we adapt the key ideas from \cite{keusch2022vertex} to our setting. We will typically apply it to the subgraph $G[B]$ and to the function $h(v):=2\deg_R(v)$, to obtain an edge-weighting $\omega$ of $E(B)$ and a function of designated colors $f: B \rightarrow \{1,3,5, \ldots\}$. Recall that we denote by $s_{\omega}(v)$ the weighted degree of a vertex $v$ under the weighting $\omega$.
\begin{lemma}\label{lemma:weighting}
Let $G=(V,E)$ be a not necessarily connected graph and let $h:V \rightarrow \{0, 2, 4, \ldots\}$ be a function attaining only even values. Then there exists an edge-weighting $\omega: E \rightarrow \{1,2,3\}$ and a function $f:V \rightarrow \{1, 3, 5, \ldots\}$ attaining only odd values such that
\begin{enumerate}[(i)]
\item $f(v) \neq f(w)$ for each edge $\{v,w\} \in E$, and
\item $|s_{\omega}(v) + h(v) - f(v)| = 1$ for all $v \in V$.
\end{enumerate}
\end{lemma}
We prove the lemma with the flow-based strategy that was introduced in \cite{keusch2022vertex}. A key step towards the statement is therefore the following auxiliary result.
\begin{lemma}[Lemma~2 in \cite{keusch2022vertex}]\label{lemma:flow}
Let $G=(V,E)$ be a graph, let $C=(S,T)$ be a maximum cut of $G$, let $F\subseteq E(S) \cup E(T)$, and let $\sigma$ be an orientation of the edge set $F$. Furthermore, let $G_{C, F, \sigma}$ be the auxiliary directed multigraph network constructed as follows.
\begin{enumerate}[(i)]
\item As vertex set, take $V$, and add a source node $s$ and a sink node $t$.
\item For each edge $\{u,v\} \in E(S,T)$, insert the two arcs $(u,v)$ and $(v,u)$, both with capacity $1$.
\item For each edge $\{u,v\} \in F$ with corresponding orientation $(u,v) \in \sigma$, insert arcs $(s, u)$ and $(v, t)$, both with capacity $1$, potentially creating multi-arcs. Do not insert $(u,v)$.
\end{enumerate}
Then in the network $G_{C, F, \sigma}$, there exists an $s$-$t$-flow of value $|F|$.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lemma:weighting}]Let $G=(V,E)$ be a graph and let $h:V \rightarrow \{0, 2, 4, \ldots\}$. In a first step, we define designated colors $f(v)$ of odd parity such that two neighbors always receive distinct colors. Afterwards, we construct the edge-weighting $\omega$ that satisfies (ii).
Let $\{v_1, \ldots, v_n\}$ be an arbitrary ordering of $V$ and let $C=(S,T)$ be a maximum cut of $G$. We assign the designated colors $f(v_i)$ to all vertices one after another. We aim to define designated colors such that all $v_i \in S$ receive a color $f(v_i) \equiv 1 \pmod{4}$ and each $v_i \in T$ receives a color $f(v_i) \equiv 3 \pmod{4}$.
Consider a vertex $v_i \in V$, assume that $v_1, \ldots, v_{i-1}$ have already received their designated colors, and let $s(v_i) := h(v_i)+2\deg(v_i)$. Denote by $k_i \ge 0$ the number of neighbors $v_j$ with $j<i$ that are on the same side of the cut as $v_i$. To define a suitable value $f(v_i)$, we consider the following set of $2k_i+2$ odd values:
$$S(v_i) := \{s(v_i)-2k_i-1, \ldots, s(v_i)-1, s(v_i)+1, \ldots, s(v_i)+2k_i+1\}.$$
We shall choose a value for $f(v_i)$ from $S(v_i)$. Because the value of $f(v_i) \pmod{4}$ is already determined, half of the elements from $S(v_i)$ are not allowed, so $k_i+1$ potential choices are remaining. At most $k_i$ of them are blocked by values $f(v_j)$ from the neighbors $v_j$ of $v_i$ with a smaller index. Hence, there exists at least one suitable value $f(v_i) \in S(v_i)$ with the following properties:
\begin{itemize}
\item $f(v_i) \neq f(v_j)$ for all neighbors $v_j$ of $v_i$ with $j<i$,
\item $f(v_i) \equiv 1 \pmod{4}$ if $v_i \in S$ and $f(v_i) \equiv 3 \pmod{4}$ if $v_i \in T$, and
\item $|f(v_i)-s(v_i)| \leq 2k_i+1.$
\end{itemize}
Fix a value for $f(v_i)$ from $S(v_i)$ that satisfies these three properties simultaneously. We repeat this procedure for all vertices one after another to achieve property (i) of the statement. It remains to define the edge-weighting $\omega$ that fulfills (ii).
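The greedy choice of the designated colors can be sketched in code. This is an illustration of the counting argument above, not code from the paper; the adjacency-dictionary interface and the \texttt{side} labels are our own conventions.

```python
def assign_designated_colors(adj, side, h):
    """Greedy pass over vertices 0..n-1: pick an odd f(v_i) with residue
    1 (mod 4) on side 'S' and 3 (mod 4) on side 'T', distinct from the
    colors of earlier neighbors, and within distance 2*k_i+1 of s(v_i)."""
    f = {}
    for i in range(len(adj)):
        s_i = h[i] + 2 * len(adj[i])          # s(v_i) = h(v_i) + 2 deg(v_i)
        k_i = sum(1 for j in adj[i] if j < i and side[j] == side[i])
        target = 1 if side[i] == 'S' else 3
        blocked = {f[j] for j in adj[i] if j < i}
        # the 2*k_i + 2 odd candidates s_i - 2k_i - 1, ..., s_i + 2k_i + 1;
        # half have the required residue mod 4, at most k_i are blocked
        for c in range(s_i - 2 * k_i - 1, s_i + 2 * k_i + 2, 2):
            if c % 4 == target and c not in blocked:
                f[i] = c
                break
    return f
```

On the path $0$--$1$--$2$ with $h \equiv 0$ and the cut $S=\{0,2\}$, $T=\{1\}$, all three properties above are satisfied by the chosen colors.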
Let $V^+ := \{v_i \in V: f(v_i) > s(v_i)\}$ and $V^- := \{v_i \in V: f(v_i) < s(v_i)\}$. Moreover, for all $v_i \in V$ let
$$g(v_i) := \begin{cases}
\frac{1}{2} (f(v_i)-s(v_i)-1), & \text{if } v_i \in V^+, \\
\frac{1}{2} (f(v_i)-s(v_i)+1), & \text{if } v_i \in V^-.
\end{cases}$$
Observe that for all $1 \le i \le n$, $|g(v_i)| \le k_i$ by construction. In order to apply Lemma~\ref{lemma:flow}, we construct a subset $F \subseteq E[S] \cup E[T]$ and an orientation $\sigma$ of $F$ as follows.
For each vertex $v_i \in S$, choose $|g(v_i)|$ neighbors $v_j \in S$ with smaller index (i.e., $j<i$) and add the $|g(v_i)|$ edges $\{v_i, v_j\}$ to $F$. If $v_i \in V^+$, increase the weight of these $|g(v_i)|$ edges $\{v_i, v_j\}$ to $3$ and add the orientations $(v_i, v_j)$ to $\sigma$. Conversely, if $v_i \in V^-$, decrease the weight of the edges $\{v_i,v_j\}$ to $1$ and add the orientations $(v_j, v_i)$ to $\sigma$. After performing these modifications for all vertices in $S$, the weighted degree of each $v_i \in S$ may have changed, both through the edges chosen for $v_i$ itself and through the edges chosen by vertices with higher index; its current value is
$$t(v_i) := 2\deg(v_i) + 2g(v_i) + |\{w:(w,v_i) \in \sigma\}| - |\{w:(v_i,w) \in \sigma\}|,$$
no matter whether $v_i \in V^+$ or $v_i \in V^-$.
For each vertex $v_i \in T$, likewise choose $|g(v_i)|$ neighbors $v_j \in T$ with $j<i$ and add the $|g(v_i)|$ edges $\{v_i, v_j\}$ to $F$. If $v_i \in V^+$, change the weight of these edges to $3$ and always add the orientation $(v_j, v_i)$ to $\sigma$. If $v_i \in V^-$, always add the orientation $(v_i, v_j)$ to $\sigma$ and change the edge weights to $1$. Note that the orientations are chosen oppositely to those for $S$. Under the modified weighting, the current weighted degree of $v_i \in T$ is
$$t(v_i) := 2\deg(v_i) + 2g(v_i) + |\{w:(v_i,w) \in \sigma\}| - |\{w:(w,v_i) \in \sigma\}|.$$
Having defined $F$ and $\sigma$, we proceed by constructing the auxiliary multigraph $G_{C,F,\sigma}$ as specified in the statement of Lemma~\ref{lemma:flow}. Thereby, each edge of $F$ leads to exactly one arc incident to $s$ and one arc incident to $t$, where $s$ and $t$ are the two additional nodes inserted into the graph. For each node $v_i$, the construction is such that the number of arcs from $s$ to $v_i$ is $|\{w:(v_i,w) \in \sigma\}|$ and the number of arcs from $v_i$ to $t$ is $|\{w:(w,v_i) \in \sigma\}|$.
We now apply Lemma~\ref{lemma:flow} and obtain an $s$-$t$-flow of value $|F|$ in the auxiliary multigraph $G_{C,F,\sigma}$. As all arcs have capacity $1$, there are $|F|$ edge-disjoint $s$-$t$-paths in $G_{C,F,\sigma}$. Consider such a directed path $p=(s, u_1, \ldots, u_m, t)$, and let $p' = \{u_1, \ldots, u_m\}$ be its induced, undirected subpath in the bipartite graph $G(S,T)$. Unless $u_1 = u_m$ (in which case $p'$ contains no edges), we modify the weighting $\omega$ of each edge $\{u_i, u_{i+1}\} \in p'$ as follows: increase the weight to $3$ if $u_i \in S$, and decrease the weight to $1$ if $u_i \in T$. In other words, we alternately increase and decrease the edge weights along the path. The weighted degrees of the internal nodes $u_2, \ldots, u_{m-1}$ thereby do not change, in contrast to those of $u_1$ and $u_m$. The weighted degree of $u_1$ increases by $1$ if $u_1 \in S$, and decreases by $1$ if $u_1 \in T$. Regarding $u_m$, its weighted degree increases by $1$ if $u_m \in T$, and decreases by $1$ if $u_m \in S$. When $u_1=u_m$, the weighted degree of this node does not change. We repeat the described modification of $\omega$ for all $|F|$ paths provided by Lemma~\ref{lemma:flow}, and denote by $\omega$ the resulting edge-weighting.
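The alternating modification along one such path can be sketched in a few lines (a toy illustration with our own data layout; as in the proof, all cut edges are assumed to start at weight $2$):

```python
def modify_along_path(weights, path, side):
    """Walk the undirected subpath u_1, ..., u_m and set each edge
    {u_i, u_{i+1}} to 3 if u_i lies in S and to 1 if u_i lies in T.
    Internal vertices gain +1 on one incident edge and -1 on the other,
    so only the weighted degrees of the endpoints change."""
    for u, v in zip(path, path[1:]):
        weights[frozenset((u, v))] = 3 if side[u] == 'S' else 1
    return weights
```

For the path $a$-$b$-$c$ with $a,c \in S$ and $b \in T$, the two edge weights become $3$ and $1$, and the weighted degree of the internal vertex $b$ stays at $4$.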
As we found $|F|$ edge-disjoint $s$-$t$-paths in the auxiliary network $G_{C,F,\sigma}$, each arc leaving $s$ and each arc entering $t$ is contained in exactly one of the $s$-$t$-paths along which we modified edge weights. Consider a vertex $v_i \in S$. Each of the arcs $(s,v_i)$ is contained in some $s$-$t$-path $p$, and for each such arc we increased the weighted degree of $v_i$ by $1$ when handling $p$. Similarly, for each arc $(v_i,t)$ we decreased the weighted degree of $v_i$ by $1$ when handling the $s$-$t$-path containing that arc. The only exceptions are $s$-$t$-paths of the form $(s, v_i, t)$ or $(s, v_i, \ldots, v_i, t)$, on which we did not modify any weights; but there, the arcs $(s,v_i)$ and $(v_i,t)$ cancel each other out. Summing up the changes in the weighted degree of $v_i$, we deduce that
$$s_{\omega}(v_i) = t(v_i) + |\{w:(v_i,w) \in \sigma\}| - |\{w:(w,v_i) \in \sigma\}| = 2\deg(v_i) + 2g(v_i).$$
Conversely, consider a vertex $v_i \in T$. By the same argument, for each arc $(s,v_i)$ we decreased the weighted degree of $v_i$ by $1$, and for each arc $(v_i,t)$ we increased it by $1$. Adding up the changes, we again have
$$s_{\omega}(v_i) = t(v_i) + |\{w:(w,v_i) \in \sigma\}| - |\{w:(v_i,w) \in \sigma\}| = 2\deg(v_i) + 2g(v_i).$$
Putting everything together and plugging in first the definition of $s(v_i)$ and then the definition of $g(v_i)$, we conclude that for each $v_i \in V$,
$$
|s_{\omega}(v_i)+h(v_i)-f(v_i)| =|2\deg(v_i)+2g(v_i)+h(v_i)-f(v_i)|
= |2g(v_i)+s(v_i)-f(v_i)|
= 1.
$$
\end{proof}
With Lemma~\ref{lemma:weighting}, we get an edge-weighting for $E(B)$ and designated weights $f(v)$ for the vertices $v \in B$. Each blue node $v$ still lacks some amount $\alpha(v)$ of incident edge weight before its weighted degree actually reaches its designated color. This additional weight $\alpha(v)$ will be gained via the edges between $B$ and $R$. With Lemma~\ref{lemma:edgesbetween} below, we indeed find an edge-weighting for the subgraph $G(R,B)$ in which the weighted degree of each $v \in B$ is exactly $\alpha(v)$.
As discussed above, we also have to cover some uncomfortable cases in which some vertices are removed from the graph, whereupon the situation becomes more complex. In some of these situations, there will be a set $R' \subseteq R$ of vertices that should attain a weighted degree of odd parity (instead of even parity, as expected). Moreover, for a few blue vertices $v \in B$, $\alpha(v)$ may be even. In some other cases, the edge-weighting is forced to satisfy certain requirements along a fixed path $p$, to guarantee that the weights can later be changed along the edges of $p$ without destroying the entire structure. Therefore, Lemma~\ref{lemma:edgesbetween} contains several additional properties that will help us to handle these exceptional cases.
\begin{lemma}\label{lemma:edgesbetween}
Let $G=(V,E)$ be a connected bipartite graph with parts $B$ and $R$. Let $\alpha:B \rightarrow \mathbb N$ be a function such that $\alpha(v) \in \{2\deg(v)-1, 2\deg(v), 2\deg(v)+1\}$ for all $v \in B$. Moreover, let $R' \subseteq R$ such that $|R'| + \sum_{v \in B} \alpha(v)$ is even. Then there exists an edge-weighting $\omega: E \rightarrow \{1,2,3\}$ such that
\begin{enumerate}[(i)]
\item $s_{\omega}(v)$ is even for all $v \in R \setminus R'$,
\item $s_{\omega}(v)$ is odd for all $v \in R'$, and
\item $s_{\omega}(v) = \alpha(v)$ for all $v \in B$.
\end{enumerate}
Moreover, let $p=\{v_1, \ldots, v_k\}$ be a fixed path with $k \ge 3$ and $v_1, v_k \in B$. Then $\omega$ satisfies in addition
\begin{enumerate}
\item[(iv)] $\omega(\{v_1,v_2\}) \neq 1$ if $\alpha(v_1)=2\deg(v_1)+1$ and $\omega(\{v_1,v_2\}) \neq 3$ otherwise,
\item[(v)] $\omega(\{v_{i-1},v_i\})+\omega(\{v_{i},v_{i+1}\}) \in \{3,4,5\}$ for each $1 < i < k$ with $v_i \in B$, and
\item[(vi)] $\omega(\{v_{k-1},v_k\}) \neq 1$ if $\alpha(v_k)=2\deg(v_k)+1$ and $\omega(\{v_{k-1},v_k\}) \neq 3$ otherwise.
\end{enumerate}
\end{lemma}
\begin{proof}
We start by setting the initial weight of all edges to $2$. Let $T$ be a spanning tree of $G$ that includes the fixed path $p$. We construct $\omega$ by only changing the weights of a subset $E_o$ of $T$-edges. Consider $T$ as a rooted tree with arbitrary root $r$ and denote for each $v \neq r$ by $par(v)$ its parent in the rooted tree.
Let $V_o := R' \cup \{v \in B: \alpha(v) \neq 2\deg(v)\}$ be the set of all vertices that shall obtain an odd weighted degree. Note that the assumption on $R'$ guarantees that $|V_o|$ is even. While constructing the set $E_o$, denote for each $v \in V$ by $E_o(v)$ the subset of edges from $E_o$ that are incident to $v$. We want to arrange the set $E_o$ so that for all nodes $v \in V$, $|E_o(v)|$ is odd if and only if $v \in V_o$.
We start with the leaves of $T$. For each leaf $\ell$, put $\{\ell, par(\ell)\}$ into $E_o$ if and only if $\ell \in V_o$. We then proceed to the internal nodes of $T$ and repeat the idea: we consider each node $v$ only after having handled all its children, and then decide whether to put $\{v, par(v)\}$ into $E_o$, thereby always ensuring that $|E_o(v)|\equiv 1 \pmod{2}$ if and only if $v \in V_o$. For the root $r$, this argument does not work, since $r$ has no parent node. However, because each edge from $E_o$ contributes to two sets $E_o(v)$, the sum $\sum_{v \in V}|E_o(v)|$ must be even. Thus,
$$\sum_{v \in V}|E_o(v)| \equiv |E_o(r)| + |V_o \setminus \{r\}| \equiv 0 \pmod{2},$$
and since $|V_o|$ is even, we see that also for the root $r$, the value $|E_o(r)|$ is odd if and only if $r \in V_o$.
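The bottom-up selection of $E_o$ can be sketched as follows. This is an illustration under our own data conventions; as in the proof, it assumes that $|V_o|$ is even.

```python
def select_odd_edge_set(tree_adj, root, V_o):
    """Process a rooted tree leaves-first: add {v, par(v)} to E_o exactly
    when the edges already chosen below v leave |E_o(v)| with the wrong
    parity. The root needs no decision: its parity works out automatically."""
    parent, order, stack = {root: None}, [], [root]
    while stack:                            # root-first traversal order
        u = stack.pop()
        order.append(u)
        for w in tree_adj[u]:
            if w not in parent:
                parent[w] = u
                stack.append(w)
    E_o = set()
    count = {v: 0 for v in parent}          # count[v] = |E_o(v)| so far
    for v in reversed(order):               # children before parents
        if v != root and (count[v] % 2 == 1) != (v in V_o):
            E_o.add(frozenset((v, parent[v])))
            count[v] += 1
            count[parent[v]] += 1
    return E_o, count
```

On the path tree $0$-$1$-$2$-$3$ rooted at $0$ with $V_o=\{2,3\}$, only the edge $\{2,3\}$ enters $E_o$, and every vertex ends with the required parity of $|E_o(v)|$.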
We are now going to modify the weighting of $E_o$. As the graph $G$ is bipartite with parts $R$ and $B$, it is sufficient to only consider sets $E_o(v)$ where $v \in B$. For each $v \in B$, we change the weights of the edges in $E_o(v)$ according to the following rules.
If $\alpha(v) = 2\deg(v)-1$, decrease the weights of $\tfrac{1}{2}(|E_o(v)|+1)$ edges in $E_o(v)$ to $1$, and increase the weights of all other edges in $E_o(v)$ to $3$. Then indeed the weighted degree of $v$ becomes
$$2\big(\deg(v)-|E_o(v)|\big)+\tfrac{1}{2}\big(|E_o(v)|+1\big)+\tfrac{3}{2}\big(|E_o(v)|-1\big)=2\big(\deg(v)-|E_o(v)|\big)+2|E_o(v)|-1=\alpha(v).$$
If $v$ is not a vertex of the fixed path $p$, we can distribute these weight modifications arbitrarily among $E_o(v)$, otherwise there are some restrictions. In situations where $v$ is an internal vertex of $p$ and both incident edges are contained in $E_o(v)$, we ensure that one of the two edges gets weight $1$ and the other $3$. If $v=v_1$ is the starting vertex of $p$ and $\{v_1,v_2\} \in E_o(v_1)$, set $\omega(\{v_1,v_2\})=1$ and distribute the other weights (if there are any) arbitrarily among $E_o(v)$. Similarly, if $v=v_k$ is the ending vertex of $p$ and $\{v_{k-1},v_k\} \in E_o(v_k)$, our only restriction is to put weight $1$ on the edge $\{v_{k-1},v_k\}$.
If $\alpha(v) =2\deg(v)+1$, decrease the weights of $\tfrac{1}{2}(|E_o(v)|-1)$ edges of $E_o(v)$ to $1$, and increase all other weights of edges of $E_o(v)$ to $3$. Then, again it holds $s_{\omega}(v)=\alpha(v)$. Similarly as above, ensure that if $v$ is an internal node of $p$, not both edges on $p$ incident to $v$ get the same odd weight. Moreover, if $v$ is the starting or ending vertex of $p$, ensure that the edge on $p$ incident to $v$ does not receive weight $1$ if it is contained in $E_o(v)$.
Finally, if $\alpha(v)=2\deg(v)$, then $|E_o(v)|$ is even and we assign weight $1$ to one half of the edges in $E_o(v)$ and weight $3$ to the other half, ensuring $s_{\omega}(v)=\alpha(v)$. Once more, if $v$ is an internal vertex of $p$, ensure that the two incident edges on $p$ do not both get the same odd weight. Furthermore, if $v$ is the starting or ending vertex of $p$, ensure that the edge on $p$ incident to $v$ does not receive weight $3$.
With the described weight modifications, the edge-weighting $\omega$ is defined and (iii)--(vi) are fulfilled. Regarding the red vertices: for each node $v \in R$, $|E_o(v)|$ is odd if and only if $v \in R'$, and exactly those incident edges contained in $E_o(v)$ have received an odd weight. Thus, $\omega$ achieves properties (i) and (ii) as well.
\end{proof}
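The arithmetic behind the three cases of the proof can be verified with a few lines. This is our own toy encoding: \texttt{case} is $-1$, $0$, or $+1$ according to whether $\alpha(v)$ equals $2\deg(v)-1$, $2\deg(v)$, or $2\deg(v)+1$.

```python
def odd_weight_split(m, case):
    """Number of edges of E_o(v) (of size m) that receive weight 1 and
    weight 3, respectively, in the three cases of the proof."""
    ones = {-1: (m + 1) // 2, 0: m // 2, 1: (m - 1) // 2}[case]
    return ones, m - ones

def weighted_degree(deg, m, case):
    """Resulting weighted degree of v; edges outside E_o(v) keep weight 2."""
    ones, threes = odd_weight_split(m, case)
    return 2 * (deg - m) + ones + 3 * threes
```

In each case the weighted degree lands exactly on $\alpha(v)$, e.g.\ $2\deg(v)-1$ for \texttt{case} $=-1$.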
\section{Proof of Theorem~\ref{thm:main}}\label{section:main}
In Section~\ref{section:mainideas}, we prepared the proof with several auxiliary results. The plan is now to define an independent set $R$ with Lemma~\ref{lemma:divide}, to define $B:=V \setminus R$, to use Lemma~\ref{lemma:weighting} to find an edge-weighting for $G[B]$, and finally to apply Lemma~\ref{lemma:edgesbetween} when extending the edge-weighting to the remaining edges. The crucial complication in this strategy is that the set $B$ is required to have even cardinality, which makes the proof significantly more technical.
We therefore describe three basic situations regarding the sets $R$ and $B$ and, in some cases, one additional vertex $v_0 \notin R \cup B$. For each situation, we demonstrate that a vertex-coloring edge-weighting with weights $\{1,2,3\}$ can be constructed, always using Lemma~\ref{lemma:weighting} and Lemma~\ref{lemma:edgesbetween} in combination. Afterwards, in the actual proof of Theorem~\ref{thm:main}, we show by a case distinction that for each graph the problem can be reduced to one of the three basic situations.
We use the following definition to annotate a partition $V=R \cup B$ that achieves all required properties.
\begin{definition}\label{definition:partition}
Let $G=(V,E)$ be a connected graph and let $V=R \cup B$ be a partition of the vertex set into two disjoint subsets of red and blue nodes. We say that $(R,B)$ is a good $R$-$B$-partition of $G$ if $R$ is an independent set, the bipartite subgraph $G(R,B)$ is connected, and $|B| \equiv 0 \pmod{2}$.
\end{definition}
Obviously, the ideal situation occurs when there is a good $R$-$B$-partition known for the entire graph. Lemma~\ref{lemma:basicsituation} shows how we then find a suitable edge-weighting.
\begin{lemma}\label{lemma:basicsituation}
Let $G=(V,E)$ be a connected graph and let $(R,B)$ be a good $R$-$B$-partition of $G$. Then there exists an edge-weighting $\omega:E \rightarrow \{1,2,3\}$ such that the weighted degrees $s_{\omega}$ yield a proper vertex coloring of $G$ and such that $s_{\omega}(v)$ is even if and only if $v \in R$.
\end{lemma}
\begin{proof}
For all $v \in B$, let $h(v) := 2\deg_R(v)$. We apply Lemma~\ref{lemma:weighting} to $G[B]$ and to $h$, and obtain a weighting $\omega_1$ of $E(B)$ together with a function $f$ on $B$, standing for the designated final weighted degrees of the nodes.
Next, for each $v \in B$ let $\alpha(v) := f(v) - s_{\omega_1}(v)$ be the difference between the designated weighted degree and the already received incident edge weights. By Lemma~\ref{lemma:weighting}~(ii), we know that $\alpha(v) \in \{2\deg_R(v)-1, 2\deg_R(v)+1\}$. Moreover, by putting $R' := \emptyset$ and using the assumption that $|B|$ is even, it follows that the value of $|R'| + \sum_{v \in B} \alpha(v)$ is even.
We apply Lemma~\ref{lemma:edgesbetween} to the bipartite subgraph $G(R,B)$ and to $\alpha$, without considering any path $p$, to obtain a weighting $\omega_2$ for $G(R,B)$ where each vertex $v \in R$ receives an even-valued weighted degree $s_{\omega}(v) := s_{\omega_2}(v)$. Each vertex $v \in B$ gets an additional weight of $\alpha(v)$, hence combining the two weightings $\omega_1$ and $\omega_2$, for $v \in B$ we have $s_{\omega}(v):=s_{\omega_1}(v)+s_{\omega_2}(v)=f(v)$. By Lemma~\ref{lemma:weighting}, $f$ only attains odd values and for any two neighbors $v,w \in B$ we have $f(v) \neq f(w)$. Because $R$ is an independent set, the weighted degrees $s_{\omega}$ indeed properly color the vertices of the graph $G$.
\end{proof}
In the next situation, there exists an additional vertex $v_0$ which is not included in $R$ or $B$, i.e., we only have a good $R$-$B$-partition of $G[V \setminus \{v_0\}]$. Consequently, the weighted degree of $v_0$ must be even and coloring conflicts between $v_0$ and its neighbors in $R$ can arise. To solve these conflicts, we put odd weights on the edges between $v_0$ and $R$. Carefully choosing weights $1$ or $3$, we can ensure that the weighted degree of $v_0$ is different from all of its neighbors. However, the argument only works when $v_0$ has at least $2$ neighbors in $R$. Furthermore, if $\deg_R(v_0)$ is odd, $v_0$ is required to have at least one neighbor in $B$, as the respective edge is needed for making $s_{\omega}(v_0)$ even-valued.
\begin{lemma}\label{lemma:remainingvertex}
Let $G=(V,E)$ be a graph and let $v_0 \in V$ be a vertex such that $G[V \setminus \{v_0\}]$ is connected. Moreover, let $(R,B)$ be a good $R$-$B$-partition of $G[V \setminus \{v_0\}]$ such that $\deg_R(v_0) \ge 2$ and such that either $\deg_R(v_0)$ is even or $\deg_B(v_0) \ge 1$. Then there exists a vertex-coloring edge weighting $\omega:E \rightarrow \{1,2,3\}$ of $G$.
\end{lemma}
\begin{proof}
Let $h(v) := 2|N(v) \setminus B|$ for all $v \in B$. We will construct the weighting $\omega$ in three steps: a weighting $\omega_1$ for $E(B)$, a weighting $\omega_2$ for the edges between $R$ and $B$, and a weighting $\omega_3$ for the edges incident to $v_0$. The final weighting $\omega$ is then the combination of $\omega_1$, $\omega_2$, and $\omega_3$.
We first apply Lemma~\ref{lemma:weighting} to $G[B]$ and $h$ to obtain a weighting $\omega_1$ of the edge set $E(B)$, together with designated final weighted degrees $f(v)$ for the blue nodes. By Lemma~\ref{lemma:weighting}~(ii) and our choice of $h$, for all $v \in B \setminus N(v_0)$ it holds that
$$f(v)-s_{\omega_1}(v) = h(v) \pm 1 \in \{2\deg_R(v)+1, 2\deg_R(v)-1\},$$
whereas for all $v \in N(v_0) \cap B$ we have
$$f(v)-s_{\omega_1}(v) = h(v) \pm 1 \in \{2\deg_R(v)+3, 2\deg_R(v)+1\}.$$
Since the edges between $v_0$ and $R$ will receive an odd weight, we put $R' := N(v_0) \cap R$, ensuring that the weighted degrees of these nodes will be even-valued at the end. Next, for all $v \in B \setminus N(v_0)$ we let $\alpha(v) := f(v) - s_{\omega_1}(v)$. Regarding $N(v_0) \cap B$, we distinguish two cases.
\begin{itemize}
\item If $|R'|$ is even, we put $\alpha(v) := f(v)-s_{\omega_1}(v)-2$ for all $v \in N(v_0) \cap B$. Note that by construction, it holds $\alpha(v) \in \{2\deg_R(v)+1,2\deg_R(v)-1\}$.
\item If $|R'|$ is odd, then by assumption $\deg_B(v_0) > 0$. Fix a vertex $u_0 \in N(v_0) \cap B$ and set $\alpha(u_0) := 2\deg_R(u_0)$. For all $v \in B \cap N(v_0) \setminus \{u_0\}$, let again $\alpha(v) := f(v)-s_{\omega_1}(v)-2$.
\end{itemize}
Because $|B|$ is even, we ensured in both cases that $|R'| + \sum_{v \in B} \alpha(v)$ is even. We apply Lemma~\ref{lemma:edgesbetween} to the graph $G(R,B)$ and to $\alpha$ (but without any path $p$) and obtain a weighting $\omega_2$ for the bipartite graph such that for a vertex $v \in R$, $s_{\omega_2}(v)$ is odd if and only if $v \in R'$. Moreover, for each vertex $v \in B$, it holds $s_{\omega_2}(v)=\alpha(v)$.
We now introduce the third weighting $\omega_3$ for the edges that are incident to $v_0$. For all $v \in N(v_0) \cap R$ put $\omega_3(\{v,v_0\}) := 1$. If $\deg_R(v_0)$ is odd, we have specified a distinct vertex $u_0 \in N(v_0) \cap B$. Set $\omega_3(\{v_0,u_0\}) := f(u_0)-s_{\omega_1}(u_0)-s_{\omega_2}(u_0)$ and observe that this value is indeed either $1$ or $3$. For all remaining edges $e$ between $v_0$ and $B$, set $\omega_3(e) := 2$. In both of our cases, the value $s_{\omega_3}(v_0)$ thereby becomes even.
We combine $\omega := \omega_1 + \omega_2 + \omega_3$ into a full edge-weighting of $G$. For $v \in B$ we then have $s_{\omega}(v) = f(v)$, no matter whether $v$ is connected to $v_0$ or not. By Lemma~\ref{lemma:weighting}, $f$ attains only odd values on $B$ and for any two neighbors $v,w \in B$ we have $f(v) \neq f(w)$. For $v \in R$, $s_{\omega_2}(v)$ is odd if and only if $v \in R'$, by Lemma~\ref{lemma:edgesbetween}. However, for all $v \in R'$ we have $s_{\omega_3}(v)=1$, so $s_{\omega}(v)=s_{\omega_2}(v)+s_{\omega_3}(v)$ is even again. Because $s_{\omega}(v_0)=s_{\omega_3}(v_0)$ is even and $R$ is an independent set, it remains to guarantee that there are no coloring conflicts between $v_0$ and its neighbors in the set $R$.
Let $N_R(v_0) := N(v_0) \cap R = \{v_1, \ldots, v_k\}$, where $k \ge 2$ by assumption, and assume w.l.o.g.\ that $s_{\omega}(v_1) \le s_{\omega}(v_2) \le \ldots \le s_{\omega}(v_k)$. If $s_{\omega}(v_i) \neq s_{\omega}(v_0)$ for all $1 \le i \le k$, there are no coloring conflicts and we are done. Otherwise, we increase some edge-weights. Let $x > 0$ be the smallest integer such that $s_{\omega}(v_0)+2x$ is different from all values $s_{\omega}(v_1), \ldots, s_{\omega}(v_k)$, and let $i' \le k$ be maximal such that $s_{\omega}(v_{i'}) < s_{\omega}(v_0) + 2x$. Because at least one $s_{\omega}(v_i)$ is equal to $s_{\omega}(v_0)$, the index $i'$ is well-defined.
First consider the case $i' \le k-x$. For each $v_i \in N_R(v_0)$ with $i > k-x$, increase the weight of $\{v_i, v_0\}$ from $1$ to $3$ and denote by $\omega'$ the resulting edge-weighting.
Then, for $i > k-x \ge i'$, it holds
$$s_{\omega'}(v_i) > s_{\omega}(v_i) > s_{\omega}(v_0) + 2x,$$
whereas for $i \le k-x$, we have
$$s_{\omega'}(v_i)=s_{\omega}(v_i) \neq s_{\omega}(v_0) + 2x.$$
Hence, $s_{\omega'}(v_0)=s_{\omega}(v_0)+2x$ is different from $s_{\omega'}(v_i)$ for all $1 \le i \le k$.
Next, consider the case $i' > k-x$ and $x<k$. We change the weight from $1$ to $3$ for all edges $\{v_0,v_i\}$ where $v_i \in N_R(v_0)$ and $i \ge k-x$. Again, denote by $\omega'$ the resulting edge weighting. For $i \ge k-x$ we then have
$$s_{\omega'}(v_i)=s_{\omega}(v_i)+2 \neq s_{\omega}(v_0)+2x+2,$$
whereas for all $i <k-x$ it holds
$$s_{\omega'}(v_i)=s_{\omega}(v_i) \le s_{\omega}(v_{i'}) < s_{\omega}(v_0)+2x.$$
Thus, for all $1 \le i \le k$ we achieved $s_{\omega'}(v_0) = s_{\omega}(v_0) + 2x+2 \neq s_{\omega'}(v_i)$.
It remains to consider the case $x=k$. Here, for each $0\le y < k$, the value $s_{\omega}(v_0)+2y$ is attained by some $s_{\omega}(v_i)$, so $s_{\omega}(v_i)=s_{\omega}(v_0)+2i-2$ for all $1 \le i \le k$. We only increase the weight of $\{v_0,v_2\}$ to $3$. For the new edge-weighting $\omega'$, it holds that $s_{\omega'}(v_0)=s_{\omega}(v_0)+2$, but $s_{\omega'}(v_1)=s_{\omega}(v_0)$, $s_{\omega'}(v_2)=s_{\omega}(v_2)+2=s_{\omega}(v_0)+4$, and, for all $i \ge 3$,
$$s_{\omega'}(v_i)=s_{\omega}(v_i)=s_{\omega}(v_0)+2i-2 \ge s_{\omega}(v_0)+4.$$
Hence, also in this case, $s_{\omega'}(v_0)$ differs from the weighted degrees of all neighbors of $v_0$ in $R$, and $s_{\omega'}$ is a vertex-coloring edge-weighting.
\end{proof}
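The conflict-resolution step of the preceding proof can be traced in code. The following sketch uses our own conventions: \texttt{vals} holds the sorted even weighted degrees $s_{\omega}(v_1) \le \ldots \le s_{\omega}(v_k)$, \texttt{base} is $s_{\omega}(v_0)$, and every edge $\{v_0,v_i\}$ is assumed to carry weight $1$ before the step.

```python
def resolve_conflict(base, vals):
    """Raise some edges {v_0, v_i} from weight 1 to 3 so that the new
    weighted degree of v_0 differs from those of all neighbors v_i."""
    k = len(vals)
    if base not in vals:
        return base, vals[:]                 # no conflict to begin with
    # x: smallest shift avoiding all neighbor values; i1 = i' of the proof
    x = next(x for x in range(1, k + 1) if base + 2 * x not in vals)
    i1 = max(i for i in range(1, k + 1) if vals[i - 1] < base + 2 * x)
    new = vals[:]
    if i1 <= k - x:
        bump, new_base = range(k - x, k), base + 2 * x          # raise last x edges
    elif x < k:
        bump, new_base = range(k - x - 1, k), base + 2 * x + 2  # raise last x+1 edges
    else:                                    # x == k: raise only {v_0, v_2}
        bump, new_base = [1], base + 2
    for i in bump:
        new[i] += 2                          # weight 1 -> 3 adds 2 to s(v_i)
    return new_base, new
```

In all three branches, the returned degree of $v_0$ avoids the (updated) degrees of its neighbors in $R$.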
The last of our three basic situations is again given by a vertex $v_0$ which is not contained in $B \cup R$. In contrast to the setting of Lemma~\ref{lemma:remainingvertex}, this time we have $\deg_R(v_0)=1$. Here, a coloring conflict can only arise between $v_0$ and its single neighbor $u_0 \in R$. If this conflict occurs, we repair it by changing the weights along a cycle that includes $v_0$, so that the weighted degree of $v_0$ changes but that of $u_0$ remains the same.
\begin{lemma}\label{lemma:rearranging}
Let $G=(V,E)$ be a graph and let $v_0 \in V$ be a vertex such that the induced subgraph $G[V \setminus \{v_0\}]$ is connected. Let $(R,B)$ be a good $R$-$B$-partition of $G[V \setminus \{v_0\}]$ and let $u_0 \in R$ such that $N(v_0) \cap R = \{u_0\}$. Suppose that there exists a non-trivial path in $G(R,B)$ that starts and ends in $N(v_0) \cap B$ and does not include $u_0$. Then there exists an edge weighting $\omega:E \rightarrow \{1,2,3\}$ such that the weighted degrees $s_{\omega}$ yield a proper vertex coloring of $G$.
\end{lemma}
\begin{proof}
For all $v \in B$, put $h(v) := 2|N(v) \setminus B|$. We apply Lemma~\ref{lemma:weighting} to $G[B]$ and to $h$, and receive a weighting $\omega_1$ of $E(B)$, together with a function $f$ on $B$ where $f(v)-s_{\omega_1}(v)=h(v)\pm 1$ holds for all $v \in B$. At the end, for each $v \in B$ its weighted degree $s_{\omega}(v)$ shall coincide with $f(v)$.
Let $k \ge 3$ and let $p=\{v_1,v_2, \ldots, v_k\}$ be a path in $G(R,B)$ that does not contain $u_0$ and whose endpoints $v_1, v_k$ lie in $N(v_0) \cap B$. Next, for all $v \in N(v_0) \cap B$ define $\beta(v) := f(v)-s_{\omega_1}(v)-2\deg_R(v)$. Since $v_1$ and $v_k$ are connected to $v_0$, we have $\beta(v_1),\beta(v_k) \in \{1,3\}$. In the next step, we define a subset $R' \subseteq R$ and a weighting $\omega_2$ for the edges incident to $v_0$, thereby considering two cases.
\begin{enumerate}[(a)]
\item If $\beta(v_1)=\beta(v_k)$, let $R':=\emptyset$ and set $\omega_2(e)=2$ for each edge $e$ incident to $v_0$.
\item If $\beta(v_1)\neq \beta(v_k)$, assume w.l.o.g.\ that $\beta(v_1)=3$ and $\beta(v_k)=1$. Set $\omega_2(\{v_0,u_0\}):=3$ and $\omega_2(\{v_0,v_1\}):=3$. For all other edges $e$ incident to $v_0$ (including in particular $\{v_0,v_k\}$), put $\omega_2(e):=2$. Finally, let $R' := \{u_0\}$.
\end{enumerate}
Moreover, for all $v \in B \cap N(v_0)$ we define $\alpha(v) := f(v)-s_{\omega_1}(v)-s_{\omega_2}(v)$, whereas for all $v \in B \setminus N(v_0)$ we set $\alpha(v) := f(v)-s_{\omega_1}(v)$. Note that for all blue vertices we have $\alpha(v) \in \{2\deg_R(v)+1,2\deg_R(v)-1\}$, except for $v_1$ in case (b) where $\alpha(v_1)=2\deg_R(v_1)$. We claim that $|R'| + \sum_{v \in B} \alpha(v)$ is even in both cases (a) and (b). Indeed, in case~(a), this follows because $\alpha(v)$ is odd for all $v \in B$, $|B|$ is even, and $R'$ is empty. In case~(b), it is true because $|B|$ is still even, $|R'|$ is odd, but $s_{\omega_2}(v_1)$ is odd and thus $\alpha(v_1)$ is even. We apply Lemma~\ref{lemma:edgesbetween} to $G(R,B)$, $\alpha$, and $p$, and obtain a weighting $\omega_3$ for the bipartite graph with various properties. Having $\omega_3$ on hand, let $\omega$ be the edge-weighting that combines $\omega_1$, $\omega_2$, and $\omega_3$.
For each vertex $v \in R$, $s_{\omega_3}(v)$ is odd if and only if $v \in R'$, implying that for all $v \in R \setminus N(v_0)$, $s_{\omega}(v)=s_{\omega_3}(v)$ is even. Moreover, for the only vertex $u_0$ in $R \cap N(v_0)$, $s_{\omega}(u_0) = s_{\omega_2}(u_0)+s_{\omega_3}(u_0)$ is even as well, in both cases (a) and (b). Therefore, $s_{\omega}$ indeed attains even values on the red vertices. Regarding the blue vertices, we have $s_{\omega_3}(v)=\alpha(v)$ for all $v \in B$ and thus $s_{\omega}(v) = s_{\omega_1}(v)+s_{\omega_2}(v)+s_{\omega_3}(v)=f(v)$. By Lemma~\ref{lemma:weighting}, $f$ only attains odd values on $B$ and for any two neighbors $v,w \in B$ we have $f(v) \neq f(w)$. All together, under $\omega$ a coloring conflict can only arise between $v_0$ and $u_0$.
If $s_{\omega}(v_0) \neq s_{\omega}(u_0)$, there is nothing more to do and $s_{\omega}$ properly colors the vertices of $G$. So assume that the two values are equal. Our goal is to create a modified edge-weighting $\omega'$ without coloring conflicts, by only changing the weights on $p$, on $\{v_0, v_1\}$, and on $\{v_0, v_k\}$. We start with the edges that are incident to $v_1$ or $v_k$. There are three sub-cases.
\begin{itemize}
\item If $\beta(v_1)=\beta(v_k)=1$, we decrease the weights of $\{v_0,v_1\}$ and $\{v_0,v_k\}$ by $1$, i.e., we put $\omega'(\{v_0,v_1\}):=\omega'(\{v_0,v_k\}) := 1$. However, we also have $\alpha(v_1)=2\deg_R(v_1)-1$ and $\alpha(v_k)=2\deg_R(v_k)-1$, implying $\omega(\{v_1,v_2\}) \neq 3$ and $\omega(\{v_{k-1},v_k\}) \neq 3$ by properties (iv) and (vi) of Lemma~\ref{lemma:edgesbetween}. Hence we can \emph{increase} the weights of those two edges by $1$ in order to achieve $s_{\omega'}(v_1)=s_{\omega}(v_1)$ and $s_{\omega'}(v_k)=s_{\omega}(v_k)$. At the same time, we have $s_{\omega'}(v_0) = s_{\omega}(v_0)-2$.
\item If $\beta(v_1)=\beta(v_k)=3$, we put $\omega'(\{v_0,v_1\}):=\omega'(\{v_0,v_k\}) := 3$. Observe that here, $\alpha(v_1)=2\deg_R(v_1)+1$ and $\alpha(v_k)=2\deg_R(v_k)+1$. By Lemma~\ref{lemma:edgesbetween}~(iv) and (vi), the weights of $\{v_1,v_2\}$ and $\{v_{k-1},v_k\}$ are not $1$, hence we can \emph{decrease} the weights of these two edges by $1$ to keep the weighted degrees of $v_1$ and $v_k$ the same, whereas $s_{\omega'}(v_0) = s_{\omega}(v_0)+2$.
\item If $\beta(v_1)\neq \beta(v_k)$, we assumed w.l.o.g.\ that $\beta(v_1)=3$ and $\beta(v_k)=1$. Recall that in this situation we put $\omega_2(\{v_0,v_1\})=3$ and $\omega_2(\{v_0,v_k\})=2$, implying $\alpha(v_1)=2\deg_R(v_1)$ and $\alpha(v_k)=2\deg_R(v_k)-1$. Then Lemma~\ref{lemma:edgesbetween} yields $\omega(\{v_1,v_2\}) \neq 3$ and $\omega(\{v_{k-1},v_k\}) \neq 3$. We now \emph{decrease} the weights of $\{v_0,v_1\}$ and $\{v_0,v_k\}$ both by $1$ and \emph{increase} the weights of $\{v_1,v_2\}$ and $\{v_{k-1},v_k\}$ by $1$. Again, the weighted degrees of $v_1$ and $v_k$ remain the same, while $s_{\omega'}(v_0) = s_{\omega}(v_0)-2$.
\end{itemize}
In all three cases we achieved $s_{\omega'}(v_0) = s_{\omega}(v_0) \pm 2$. It remains to consider the internal nodes of $p$. So far, we changed the weighted degrees of $v_2$ and $v_{k-1}$ by $+1$ or $-1$ (or, if $k=3$, we changed the weighted degree of $v_2=v_{k-1}$ by $+2$ or $-2$). For each internal node $v_i \in B$ of $p$ (if there are any), it holds that $\omega(\{v_{i-1},v_i\})+\omega(\{v_i,v_{i+1}\}) \in \{3,4,5\}$ by Lemma~\ref{lemma:edgesbetween}~(v). Therefore, there is always a choice to modify the weights of both $\{v_{i-1},v_i\}$ and $\{v_i,v_{i+1}\}$ by $1$ while the weighted degree of $v_i$ remains the same. After these modifications, it holds that $|\omega'(e)-\omega(e)|=1$ for each edge $e \in p$. Because $R$ is an independent set, the weighted degree of each internal node $v \in R$ of $p$ remains even and we did not create any new coloring conflicts. Since $u_0 \notin p$, it holds in particular that $s_{\omega'}(u_0)=s_{\omega}(u_0)$, and we conclude that the coloring conflict between $v_0$ and $u_0$ has been resolved by changing the edge weights along the cycle $(v_0,v_1, v_2, \ldots, v_k,v_0)$.
\end{proof}
Before starting with the main proof, we need one last minor lemma.
\begin{lemma}\label{lemma:blocks}
Let $G=(V,E)$ be a connected graph with minimum degree at least $2$. Then there exist two vertices $x,y \in V$ such that $\{x,y\} \in E$ and $G[V \setminus \{x,y\}]$ is connected.
\end{lemma}
\begin{proof}
We do a depth-first-search on $G$ with arbitrary starting vertex $r$ and consider a leaf $x$ on the resulting search tree $T$ with largest depth, i.e., with largest distance on $T$ to the root $r$ among all nodes. Let $y$ be the parent of $x$ in $T$. We claim that $G[V \setminus \{x, y\}]$ is connected.
Clearly, removing $x$ does not disconnect $G$. All other children of $y$ (if there are any) are leaves of $T$ as well, due to our choice of $x$. As they all have degree at least $2$ in $G$, each of them is connected to at least one node other than $y$. However, because $T$ is a depth-first search tree, all such neighbors must be ancestors of $y$ in $T$. Hence, we can also remove $y$ without destroying the connectivity of the remaining graph.
\end{proof}
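The proof of Lemma~\ref{lemma:blocks} is constructive; a sketch in code (our own encoding, using a recursive depth-first search so that, as the proof requires, every non-tree edge is a back edge):

```python
import sys

def removable_edge(adj):
    """Return an edge {x, y}: x is a deepest leaf of a DFS tree and y its
    parent. For a connected graph with minimum degree >= 2, removing both
    keeps the graph connected."""
    sys.setrecursionlimit(10 ** 5)
    root = next(iter(adj))
    depth, parent = {root: 0}, {root: None}

    def dfs(u):
        for w in adj[u]:
            if w not in depth:
                depth[w] = depth[u] + 1
                parent[w] = u
                dfs(w)

    dfs(root)
    children = {u: [w for w in adj[u] if parent.get(w) == u] for u in adj}
    x = max((u for u in adj if not children[u]), key=lambda u: depth[u])
    return x, parent[x]

def is_connected_without(adj, removed):
    """DFS check that the graph minus `removed` is still connected."""
    keep = [v for v in adj if v not in removed]
    seen, stack = {keep[0]}, [keep[0]]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in removed and w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(keep)
```

On the $5$-cycle, for instance, the deepest DFS leaf and its parent form an edge whose removal leaves a connected path.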
\begin{proof}[Proof of Theorem~\ref{thm:main}]
Assume w.l.o.g.\ that $G$ is connected and contains at least three vertices. The plan is to show that it is always possible to find a good $R$-$B$-partition of either the entire graph $G$ or a large portion of it. The good $R$-$B$-partition then allows us to apply either Lemma~\ref{lemma:basicsituation}, Lemma~\ref{lemma:remainingvertex}, or Lemma~\ref{lemma:rearranging} to find a vertex-coloring edge-weighting. However, the fact that $|B|$ needs to be even requires careful preparation, leading to a subtle case distinction.
We first assume that there exists $x \in V$ with $\deg(x)=1$. Let $y$ be its only neighbor. Define $V' := V \setminus \{x\}$ and $G' := G[V']$. By Lemma~\ref{lemma:divide} applied to $G'$ and to the trivial path $p:=\{y\}$, there exists an independent set $R$ such that $y \in R$ and $G(R,V' \setminus R)$ is connected. Let $B := V' \setminus R$. We distinguish two sub-cases.
\begin{itemize}
\item If $|B|$ is odd, we can add $x$ to the set $B$ and make $|B|$ even. Afterwards, $(R,B)$ is a good $R$-$B$-partition of $G$, so we can apply Lemma~\ref{lemma:basicsituation} directly to find a vertex-coloring edge-weighting $\omega$ of $G$.
\item If $|B|$ is even, then $(R,B)$ is a good $R$-$B$-partition of $G'$. We apply Lemma~\ref{lemma:basicsituation} to $G'$ and obtain a vertex-coloring edge-weighting $\omega$ of $G'$ where $s_{\omega}(y)$ is even. To extend $\omega$ to the full edge set $E$, we put $\omega(\{x,y\}):=2$. Clearly, the weighted degree of $y$ remains even, and the weighted degrees of all vertices in $B$ remain odd. As $R$ is an independent set, the only potential coloring conflict is between $x$ and $y$. But $\deg(x)=1$ and $\deg(y)>1$, hence $s_{\omega}(x)=2$, $s_{\omega}(y) > 2$, and $s_{\omega}$ is a proper vertex coloring of the full graph $G$.
\end{itemize}
For the remainder, we can assume that the minimum degree of $G$ is at least $2$. By Lemma~\ref{lemma:blocks}, there are two vertices $x,y \in V$ sharing an edge such that removing them does not disconnect the graph. We define $V'' := V \setminus \{x,y\}$ and $G'' := G[V'']$.
We first study the special situation where $\deg(x)=\deg(y)=2$. Let $z_x \in N(x) \setminus \{y\}$ and $z_y \in N(y) \setminus \{x\}$ be the two other neighbors of $x$ and $y$, where $z_x=z_y$ is possible. We apply Lemma~\ref{lemma:divide} to $G''$ and to the trivial path $p := \{z_x\}$, thereby requiring $z_x \in R$, and get an independent set $R$ such that $G(R,V'' \setminus R)$ is connected. We set $B := V'' \setminus R$ and distinguish four different sub-cases.
\begin{itemize}
\item If $|B|$ is even and $z_y \in R$, we add both $x$ and $y$ to $B$. $|B|$ remains even, $(R,B)$ is a good $R$-$B$-partition, and with Lemma~\ref{lemma:basicsituation}, we find an edge-weighting $\omega$ without coloring conflicts.
\item If $|B|$ is even and $z_y \in B$ (and thus $z_x \neq z_y$), we put $y$ into $R$, so both neighbors of $x$ are in $R$ while $R$ remains an independent set. By Lemma~\ref{lemma:remainingvertex} applied to $G$ and $v_0 := x$, there exists a vertex-coloring edge-weighting $\omega$ without coloring conflicts.
\item If $|B|$ is odd and $z_y \in B$, we put $x$ into $B$ and $y$ into $R$ to obtain a good $R$-$B$-partition of $G$. Then we are done with Lemma~\ref{lemma:basicsituation}.
\item If $|B|$ is odd but $z_y \in R$, we create an auxiliary graph $\tilde{G}=(\tilde{V},\tilde{E})$ by adding an additional vertex $w$ and setting $\tilde{V} := V'' \cup \{w\}$ and $\tilde{E} := E(V'') \cup \{\{w,z_x\}\}$. After adding $w$ to $B$, $|B|$ becomes even. We apply Lemma~\ref{lemma:basicsituation} to $\tilde{G}$ and get a vertex-coloring edge-weighting $\tilde{\omega}$ for $\tilde{G}$ where $s_{\tilde{\omega}}(v)$ is even if and only if $v \in R$. Note that $\tilde{\omega}(\{w,z_x\})$ must be odd since $w \in B$.
Going back to the original graph $G$, we set $\omega(e) := \tilde{\omega}(e)$ for all edges $e \in E(V'')$. Moreover, let $\omega(\{y,z_y\}) := 2$ and $\omega(\{x,z_x\}) := 1$, making $s_{\omega}(z_x)$ and $s_{\omega}(z_y)$ even and thus keeping $V''$ conflict-free. Finally, if $s_{\omega}(z_x)=2$, we put $\omega(\{x,y\}):=3$, in all other cases we set $\omega(\{x,y\}):=1$. Then $s_{\omega}(x)$ is even, $s_{\omega}(y)$ is odd, and $s_{\omega}(x) \neq s_{\omega}(z_x)$, so $s_{\omega}$ is indeed vertex-coloring.
\end{itemize}
It remains to handle the situations where $\deg(x)+\deg(y) > 4$. We assume w.l.o.g.\ that $\deg(x)>2$ and again define specific vertices $z_x$ and $z_y$, this time together with a $z_x$-$z_y$-path $p$. Let $z_x \in N(x) \setminus \{y\}$ and $z_y \in N(y) \setminus \{x\}$, but now we additionally require that either $z_x = z_y$ (in which case $p$ is the trivial path), or $\{x,z_y\} \notin E$, $\{y,z_x\} \notin E$, and there exists a shortest $z_x$-$z_y$-path $p$ in $G''$ no internal node of which is connected to $x$ or $y$. The path $p$ is only needed for solving a few exceptional sub-cases. To obtain the independent set $R$, we apply Lemma~\ref{lemma:divide} to $G''$ and to the path $p$, this time requiring $z_x \notin R$. Having received the independent set $R$, we set $B := V'' \setminus R$, thus $z_x \in B$. Moreover, by Lemma~\ref{lemma:divide}~(ii), the path $p$ alternates between $B$ and $R$.
We start with the cases where $|B|$ is odd. First, we assume that at most one vertex of $x$ and $y$ has neighbors in $R$.
\begin{itemize}
\item If $\deg_R(x)=0$, no matter whether $\deg_R(y)=0$ or not, we put $x$ into $R$ and $y$ into $B$, consequently making $|B|$ even and keeping $R$ an independent set. Moreover, the graph $G(R,B)$ is connected thanks to the edge $\{x,y\}$. Hence, we have a good $R$-$B$-partition of $G$ and with Lemma~\ref{lemma:basicsituation} we directly find a suitable edge-weighting $\omega$.
\item If $\deg_R(y)=0$ and $\deg_R(x) \neq 0$, we put $y$ into $R$ and $x$ into $B$. Again, we find a suitable edge-weighting $\omega$ with Lemma~\ref{lemma:basicsituation}.
\end{itemize}
Next, suppose that $|B|$ is odd, $\deg_R(x) \ge 1$, $\deg_R(y) \ge 1$, and $\deg_R(x)+\deg_R(y) \ge 3$.
\begin{itemize}
\item If $\deg_R(x) \ge 2$, put $y$ into $B$. Then the bipartite subgraph $G(R,B)$ is connected and $\deg_B(x) \ge 1$, so we can apply Lemma~\ref{lemma:remainingvertex} to $G$ and to $v_0:=x$, no matter whether $\deg_R(x)$ is even or odd, and find a suitable edge-weighting $\omega$.
\item If $\deg_R(y) \ge 2$, the same argument as above applies by exchanging $x$ and $y$. Note that here, the assumption $z_x \in B$ does not have any impact.
\end{itemize}
We now assume that $|B|$ is odd, $\deg_R(x)=\deg_R(y)=1$, and $x$ and $y$ have a common neighbor $q$ in $R$.
\begin{itemize}
\item If $N(x)\cap N(y) \cap R = \{q\}$, we create an auxiliary graph $\tilde{G}=(\tilde{V},\tilde{E})$ as follows. Let $\tilde{V} := V \cup \{w\}$, where $w$ is an additional vertex, and set $\tilde{E} := (E \setminus \{\{x,y\},\{x,q\}\}) \cup \{\{w,y\}\}$. Furthermore, put $x$ and $w$ into $R$ and $y$ into $B$. Because we removed the edge $\{x,q\}$, the set $R$ is an independent set. Moreover, thanks to $y$, $|B|$ becomes even, thus $(R,B)$ is a good $R$-$B$-partition of $\tilde{G}$. We apply Lemma~\ref{lemma:basicsituation} to $\tilde{G}$ and obtain a vertex-coloring edge-weighting $\tilde{\omega}$ such that $s_{\tilde{\omega}}$ attains odd values exactly on $B$. Because $\deg_{\tilde{G}}(w)=1$, we have $\tilde{\omega}(\{w,y\})=2$. We now construct an edge-weighting $\omega$ for the original graph $G$ as follows.
For each edge $e \in E \cap \tilde{E}$, we let $\omega(e):=\tilde{\omega}(e)$. On the two edges $\{x,q\}$ and $\{x,y\}$, we put weight $2$. We observe that $s_{\omega}(y)$ now has the same value in $G$ as $s_{\tilde{\omega}}(y)$ had in $\tilde{G}$. Moreover, $s_{\omega}(q)$ is still even, hence the only potential coloring conflict is between $x$ and $q$. If $s_{\omega}(x) \neq s_{\omega}(q)$, we are done. Otherwise, let $\mu := \omega(\{y,q\})$.
If $\mu = 2$, we can modify $\omega$ by resetting $\omega(\{y,q\}):=3$, $\omega(\{x,y\}):=1$, and $\omega(\{x,q\}):=1$. The effect is that the weighted degrees of $y$ and $q$ remain the same whereas $s_{\omega}(x)$ decreases by $2$, which resolves the coloring conflict between $q$ and $x$.
If $\mu$ is odd, we adapt $\omega$ by resetting $\omega(\{x,y\}):=\mu$, $\omega(\{x,q\}):=\mu$, and $\omega(\{y,q\}):=2$. Thereby, the weighted degree of $x$ changes by $\pm 2$ whereas all other weighted degrees remain the same. Thus, in both situations, the coloring conflict is resolved.
\end{itemize}
Under the assumption that $|B|$ is odd, it remains to handle the situations where $x$ and $y$ both have exactly one neighbor in $R$ but no common neighbor therein. Denote by $z_x'$ the neighbor of $x$ in $R$. Recall that in $G''$, there exists a $B$-$R$-alternating path $p$ from $z_x$ to $z_y$ without internal vertices that are connected to $x$ or $y$.
\begin{itemize}
\item If $z_y \in R$, put $y$ into $B$ and observe that $(R,B)$ is a good $R$-$B$-partition of $G[V \setminus \{x\}]$. Let $p' := p \cup \{y,z_y\}$. Then the preconditions of Lemma~\ref{lemma:rearranging} are satisfied with $v_0:=x$, $u_0:=z_x'$, and $p'$, hence we find a vertex-coloring edge-weighting $\omega$.
\item If $z_y \in B$, let $T$ be a spanning tree of $G(R,B)$ containing $p$. By Lemma~\ref{lemma:divide}~(i), $G(R,B)$ is connected, so indeed this spanning tree exists. Let $z_y' \neq z_x'$ be the neighbor of $y$ in $R$. By construction, the path $p$ does not contain $z_x'$ or $z_y'$ as internal node. Therefore, on $T$ there exists a $z_y$-$z_x'$-path without $z_y'$ as internal vertex, or there exists a $z_x$-$z_y'$-path without $z_x'$ as internal vertex. Assume w.l.o.g.\ that we are in the first situation and denote by $p'$ our $z_y$-$z_x'$-path. Define $p'' := p' \cup \{x,z_x'\}$, put $x$ into $B$, and observe that by setting $v_0 := y$ and $u_0 := z_y'$, with the path $p''$ all preconditions of Lemma~\ref{lemma:rearranging} are fulfilled. Again, there exists an edge-weighting $\omega$ that properly colors the vertices of $G$.
\end{itemize}
We now turn to the cases where $|B|$ is even. For the remainder of the proof, we no longer need the path $p$, but we still use the vertex $z_x \in N(x) \cap B$. We start with the sub-cases where $\deg_R(x) \ge 1$.
\begin{itemize}
\item If $\deg_R(y) \ge 1$, we simply put $x$ and $y$ to $B$ to get a good $R$-$B$-partition of $G$. We directly find $\omega$ with Lemma~\ref{lemma:basicsituation} applied to $G$.
\item If $\deg_R(y) = 0$, we add $y$ to $R$ and achieve $\deg_R(x) \ge 2$. Thanks to $z_x$, $\deg_B(x)$ is at least $1$, allowing us to use Lemma~\ref{lemma:remainingvertex} with $v_0:=x$ no matter whether $\deg_R(x)$ is even or odd. Again, we find a vertex-coloring edge-weighting $\omega$.
\end{itemize}
Finally, we are left with the situations where $|B|$ is even and the set $N(x) \cap R$ is empty. Due to the assumption that $\deg(x)>2$, we have $\deg_B(x) \ge 2$. We distinguish the following four sub-cases.
\begin{itemize}
\item If $\deg_R(y)=0$, we add $y$ to $R$. Let $z_x' \in N(x) \setminus \{y,z_x\}$. Since $G(R,B)$ is connected, there exists a $z_x$-$z_x'$-path $p'$ in $G''$ that alternates between $B$ and $R$. We apply Lemma~\ref{lemma:rearranging} to $G$, to the fixed path $p'$, to $v_0:=x$, and to $u_0:=y$ and see that there exists an edge-weighting $\omega$ without coloring conflicts.
\item If $y$ has both neighbors in $R$ and in $B$, we add $x$ to $R$. Then $\deg_R(y) \ge 2$ and $\deg_B(y) \ge 1$, so we can apply Lemma~\ref{lemma:remainingvertex} with $v_0:=y$, no matter whether $\deg_R(y)$ is even or odd, and find a vertex-coloring edge-weighting $\omega$.
\item If $\deg_B(y)=0$ and $\deg(y)$ is even, we put $x$ into $R$. Observe that the value of $\deg_R(y)$ must be even in this sub-case. Thus, by Lemma~\ref{lemma:remainingvertex} applied to $v_0:=y$, there is an edge-weighting $\omega$ with the desired properties.
\item At last, if $\deg_B(y)=0$ and $\deg(y)$ is odd (and thus at least $3$), we have to be careful. Note that here, we have $N(x) \cap N(y) = \emptyset$. We create an auxiliary graph $\tilde{G}$ as follows. Let $\tilde{E} := (E \setminus \{\{x,y\},\{y,z_y\}\}) \cup \{\{x,z_y\}\}$ and let $\tilde{G} := (V,\tilde{E})$. We keep $R$ as it is, but we add both $x$ and $y$ to $B$. Observe that $\tilde{G}$ is connected, because the degree of $y$ was sufficiently large. Moreover, $(R,B)$ is a good $R$-$B$-partition of $\tilde{G}$ because $x \in B$ is connected to $R$ thanks to the edge $\{x,z_y\}$. With Lemma~\ref{lemma:basicsituation} we then obtain a vertex-coloring edge-weighting $\tilde{\omega}$ for $\tilde{G}$. Note that since $y \in B$, $s_{\tilde{\omega}}(y)$ is odd, so there exists at least one edge $\tilde{e}$ incident to $y$ where $\tilde{\omega}(\tilde{e})$ is odd. This edge $\tilde{e}$ will be helpful below.
Let $\mu := \tilde{\omega}(\{x,z_y\})$. We use $\tilde{\omega}$ to construct a weighting $\omega$ for $G$ as follows. We set $\omega(\{x,y\}) := \omega(\{y,z_y\}) := \mu$, whereas for all edges $e \in E \cap \tilde{E}$, we keep the edge-weight from $\tilde{\omega}$. Consequently, for all $v \in V \setminus \{y\}$ we have $s_{\omega}(v)=s_{\tilde{\omega}}(v)$, whereas $s_{\omega}(y)=s_{\tilde{\omega}}(y)+2\mu$ remains odd. Because $N(y) \cap B = \{x\}$, the only potential coloring conflict is between the two nodes $x$ and $y$. However, if $x$ and $y$ attain the same weighted degree, we can replace the weight of edge $\tilde{e}$ by the other possible odd value. Then the conflict vanishes, and since the other endpoint of $\tilde{e}$ lies in the independent set $R$ of even-weighted nodes, we did not create any new conflicts.
\end{itemize}
Regardless of the structure of the graph at $x$ and $y$, we have seen by an exhaustive case distinction that it is always possible to reduce the situation to one of the three basic situations covered by Lemma~\ref{lemma:basicsituation}, Lemma~\ref{lemma:remainingvertex}, and Lemma~\ref{lemma:rearranging}. Hence, for each connected graph $G=(V,E)$ with $|V| \ge 3$, the presented approach yields a vertex-coloring edge-weighting $\omega$ using only weights from $\{1,2,3\}$.
\end{proof}
\section{Concluding remarks}\label{section:remarks}
With this paper, we present a constructive solution to the 1-2-3 Conjecture that is built on top of the ideas from \cite{keusch2022vertex} and uses the independent set $R$ of red nodes as an additional key ingredient. We mentioned in Section~\ref{section:introduction} that the decision problem whether there exists a vertex-coloring edge-weighting only from $\{1,2\}$ is NP-hard \cite{dudek2011complexity}. In our proof, the critical step in terms of algorithmic complexity is to find a maximum cut $C=(S,T)$ in the proof of Lemma~\ref{lemma:weighting}. Finding a maximum cut is NP-complete, but our strategy can be adjusted as follows.
Instead of starting directly with a maximum cut, we could begin with an arbitrary cut. Then we find either a sufficiently large flow, or a strictly larger cut, as demonstrated in the proof of Lemma~\ref{lemma:flow} in \cite{keusch2022vertex}. Because the size of a cut is bounded by $|E|$, repeating the argument yields, after polynomially many iterations, a cut $C'=(S',T')$ that leads to a suitable flow. We therefore believe that using the ideas of our proof, a polynomial-time algorithm for the construction of a suitable edge-weighting can be derived.
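The termination argument behind such an improvement loop can be illustrated by its simplest variant, plain local search: move a single vertex across the cut whenever this strictly enlarges it. Each move increases the number of crossing edges, which is at most $|E|$, so the loop runs in polynomial time. (This Python sketch only illustrates the termination argument; it is not the flow-based refinement step from \cite{keusch2022vertex}.)

```python
def cut_size(adj, S):
    """Number of edges crossing the cut (S, V - S); vertices are integers."""
    return sum(1 for v in adj for w in adj[v]
               if v < w and ((v in S) != (w in S)))

def local_search_cut(adj):
    """Start from an arbitrary cut and repeatedly move one vertex across
    it while that strictly increases the cut size.  Each move raises the
    cut by at least 1 and the cut is bounded by |E|, so the loop
    terminates after at most |E| improvements."""
    S = set()
    improved = True
    while improved:
        improved = False
        for v in adj:
            same = sum(1 for w in adj[v] if (w in S) == (v in S))
            other = len(adj[v]) - same
            if same > other:  # flipping v gains same - other crossing edges
                if v in S:
                    S.remove(v)
                else:
                    S.add(v)
                improved = True
    return S
```

The resulting cut is only locally optimal, which is exactly the point: one never needs a true maximum cut to make progress.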
\end{document} |
\begin{document}
\title[Bijections and Lawrence polytope]{A framework unifying some bijections for graphs and its connection to Lawrence polytopes}
\author[Changxin Ding]{Changxin Ding}
\address{Georgia Institute of Technology \\ School of Mathematics \\
Atlanta, GA 30332-0160}
\email{[email protected]}
\begin{abstract}
Let $G$ be a connected graph. The Jacobian group (also known as the Picard group or sandpile group) of $G$ is a finite abelian group whose cardinality equals the number of spanning trees of $G$. The Jacobian group admits a canonical simply transitive action on the set $\mathcal{G}(G)$ of cycle-cocycle reversal classes of orientations of $G$. Hence one can construct combinatorial bijections between spanning trees of $G$ and $\mathcal{G}(G)$ to build connections between spanning trees and the Jacobian group. The BBY bijections and the Bernardi bijections are two important examples. In this paper, we construct a new family of such bijections that include both. Our bijections depend on a pair of atlases (different from the ones in manifold theory) that abstract and generalize certain common features of the two known bijections. The definitions of these atlases are derived from triangulations and dissections of the Lawrence polytopes associated to $G$. The acyclic cycle signatures and cocycle signatures used to define the BBY bijections correspond to regular triangulations. Our bijections can extend to subgraph-orientation correspondences. Most of our results hold for regular matroids. We present our work in the language of fourientations, which are a generalization of orientations.
\end{abstract}
\maketitle
Key words: sandpile group; cycle-cocycle reversal class; Lawrence polytope; triangulation; dissection; fourientation
Declarations of interest: none
\section{Introduction}
In this introduction, we provide all of the relevant definitions and main results.
\subsection{Overview}\label{introduction}
Given a connected graph $G$, we build a new family of bijections between the set $\mathcal{T}(G)$ of spanning trees of $G$ and the set $\mathcal{G}(G)$ of equivalence classes of orientations of $G$ up to cycle and cocycle reversals. The new family of bijections includes the \emph{BBY bijection} (also known as the geometric bijection) constructed by Backman, Baker, and Yuen \cite{BBY}, and the \emph{Bernardi bijection}\footnote{The Bernardi bijection in \cite{Bernardi} is a subgraph-orientation correspondence. In this paper, by the Bernardi bijection we always mean its restriction to spanning trees.} in \cite{Bernardi}.
These bijections are closely related to the \emph{Jacobian group} (also known as the \emph{Picard group} or \emph{sandpile group}) $\text{Jac}(G)$ of $G$. The group $\text{Jac}(G)$ and the set $\mathcal{T}(G)$ of spanning trees are equinumerous. Recently, many efforts have been devoted to making $\mathcal{T}(G)$ a \emph{torsor} for $\text{Jac}(G)$, i.e., defining a simply transitive action of $\text{Jac}(G)$ on $\mathcal{T}(G)$. In \cite{BW}, Baker and Wang interpreted the Bernardi bijection as a bijection between $\mathcal{T}(G)$ and \emph{break divisors}. Since the set of break divisors is a canonical torsor for $\text{Jac}(G)$, the Bernardi bijection induces the \emph{Bernardi torsor}. In \cite{Yuen}, Yuen defined the geometric bijection between $\mathcal{T}(G)$ and break divisors of $G$. Later, this work was generalized in \cite{BBY} where Backman, Baker, and Yuen defined the BBY bijection between $\mathcal{T}(G)$ and the cycle-cocycle reversal classes $\mathcal{G}(G)$. The set $\mathcal{G}(G)$ was introduced by Gioan \cite{G1} and is known to be a canonical torsor for $\text{Jac}(G)$ \cite{B, BBY}. Hence any bijection between $\mathcal{T}(G)$ and $\mathcal{G}(G)$ makes $\mathcal{T}(G)$ a torsor.
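As a quick sanity check of the equality $|\text{Jac}(G)|=|\mathcal{T}(G)|$, one can compare the determinant of a reduced Laplacian (which computes $|\text{Jac}(G)|$ by the matrix-tree theorem) against a brute-force spanning tree count. The Python sketch below is purely illustrative and not part of the constructions in this paper:

```python
from fractions import Fraction
from itertools import combinations

def jacobian_order(n, edges):
    """|Jac(G)| via the matrix-tree theorem: the absolute determinant of the
    Laplacian with one row and column deleted.  Vertices are 0..n-1;
    `edges` is a list of (u, v) pairs (multi-edges allowed, no loops)."""
    L = [[0] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1
        L[v][v] += 1
        L[u][v] -= 1
        L[v][u] -= 1
    A = [[Fraction(L[i][j]) for j in range(1, n)] for i in range(1, n)]
    det, m = Fraction(1), n - 1
    for c in range(m):  # exact Gaussian elimination over the rationals
        piv = next((r for r in range(c, m) if A[r][c] != 0), None)
        if piv is None:
            return 0
        if piv != c:
            A[c], A[piv] = A[piv], A[c]
            det = -det
        det *= A[c][c]
        for r in range(c + 1, m):
            f = A[r][c] / A[c][c]
            for k in range(c, m):
                A[r][k] -= f * A[c][k]
    return abs(int(det))

def spanning_tree_count(n, edges):
    """Brute-force |T(G)|: subsets of n-1 edges connecting all n vertices."""
    count = 0
    for sub in combinations(range(len(edges)), n - 1):
        parent = list(range(n))
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        ok = True
        for i in sub:
            ru, rv = find(edges[i][0]), find(edges[i][1])
            if ru == rv:
                ok = False
                break
            parent[ru] = rv
        count += ok
    return count
```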
From the point of view in \cite{BBY}, replacing break divisors with $\mathcal{G}(G)$ provides a more general setting. In particular, we may also view the Bernardi bijection as a bijection between $\mathcal{T}(G)$ and $\mathcal{G}(G)$ and define the Bernardi torsor.
Our work puts all the above bijections into the same framework. This is surprising because the BBY bijection and the Bernardi bijection rely on totally different parameters. The main ingredients to define the BBY bijection
are an \emph{acyclic cycle signature} $\sigma$ and an \emph{acyclic cocycle signature} $\sigma^*$ of $G$. The BBY bijection sends spanning trees to $(\sigma,\sigma^*)$-compatible orientations, which are representatives of $\mathcal{G}(G)$. The Bernardi bijection relies on a ribbon structure on the graph $G$ together with a vertex and an edge as initial data. Although for planar graphs, the Bernardi bijection becomes a special case of the BBY bijection, they are different in general \cite{Yuen, BBY}. The main ingredients to define our new bijections are a \emph{triangulating atlas} and a \emph{dissecting atlas} of $G$. These atlases (different from the ones in manifold theory) abstract and generalize certain common features of the two known bijections. They are derived from \emph{triangulations} and \emph{dissections} of the \emph{Lawrence polytopes} associated to graphs. The acyclic cycle signatures and cocycle signatures used to define the BBY bijections correspond to \emph{regular} triangulations.
Our bijections extend to subgraph-orientation correspondences. The construction is similar to the one that extends the BBY bijection in \cite{D2}. The extended bijections have nice specializations to forests and connected subgraphs.
Our results are also closely related to and motivated by Kalm\'an's work \cite{K}, Kalm\'an and T\'othm\'er\'esz's work \cite{KT1}, and Postnikov's work \cite{P} on \emph{root polytopes} of \emph{hypergraphs}, where the hypergraphs specialize to graphs, and the Lawrence polytopes generalize the root polytopes in the case of graphs.
Most of our results hold for \emph{regular matroids} as in \cite{BBY}. Regular matroids are a well-behaved class of matroids which contains graphic matroids and co-graphic matroids. The paper will be written in the setting of regular matroids.
We present our theory using the language of \emph{fourientations}, which are a generalization of orientations introduced by Backman and Hopkins \cite{BH}.
Our paper is organized as follows.
\begin{enumerate}
\item[\ref{regular}] We review some basics of regular matroids.
\item[\ref{four}] We introduce fourientations.
\item[\ref{new framework}] We use fourientations to build the framework: a pair of atlases and the induced map. We also recall the BBY bijection and the Bernardi bijection as examples.
\item[\ref{bijection and atlas}] We define triangulating atlases and dissecting atlases and present our bijections.
\item[\ref{sign}] We use our theory to study signatures. In particular, we generalize acyclic signatures to triangulating signatures.
\item[\ref{Lawrence-intro}] We build the connection between the geometry of the Lawrence polytopes and the combinatorics of the regular matroid.
\item[\ref{intro-motivation}] We explain the motivation by showing how our work is related to \cite{BBY, KT1, P}.
\item[\ref{combinatorial results}] We prove the results in Section~\ref{bijection and atlas} except the two examples.
\item[\ref{signature}] We prove the results in Section~\ref{sign} and the two examples in Section~\ref{bijection and atlas}.
\item[\ref{Lawrence}] We prove the results in Section~\ref{Lawrence-intro}.
\end{enumerate}
\subsection{Notation and terminology: regular matroids}\label{regular}
In this section, we introduce the definition of regular matroids, signed circuits, signed cocircuits, orientations, etc; see also \cite{BBY} and \cite{D2}. We assume that the reader is familiar with the basic theory of matroids; some standard references include \cite{O}.
A matrix is called \emph{totally unimodular} if every square submatrix has determinant $0$, $1$, or $-1$. A matroid is called \emph{regular} if it can be represented by a totally unimodular matrix over $\mathbb{R}$.
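The definition of total unimodularity can be checked directly on small matrices by enumerating all square submatrices. The following brute-force Python sketch (exponential time, intended only for toy examples) is our own illustration:

```python
from itertools import combinations

def det(A):
    """Determinant of a square integer matrix by Laplace expansion."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(n))

def is_totally_unimodular(M):
    """Check that every square submatrix has determinant 0, 1, or -1."""
    rows, cols = len(M), len(M[0])
    for k in range(1, min(rows, cols) + 1):
        for R in combinations(range(rows), k):
            for C in combinations(range(cols), k):
                if abs(det([[M[i][j] for j in C] for i in R])) > 1:
                    return False
    return True
```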
Let $\mathcal{M}$ be a regular matroid with ground set $E$. We call the elements in $E$ \emph{edges}. Without loss of generality, we may assume $\mathcal{M}$ is represented by an $r\times n$ totally unimodular matrix $M$, where $r=\text{rank}(M)$ and $n=|E|$. Here we require $r>0$ to avoid an empty matrix. For the case $r=0$, most of our results are trivially true.
For any circuit $C$ of the regular matroid $\mathcal{M}$, there are exactly two $\{0, \pm 1\}$-vectors in $\ker_\mathbb{R}(M)$ with support $C$. We call them \emph{signed circuits} of $\mathcal{M}$, typically denoted by $\overrightarrow{C}$. Dually, for any cocircuit $C^*$, there are exactly two $\{0, \pm 1\}$-vectors in $\operatorname{im}_\mathbb{R}(M^T)$ with support $C^*$. We call them \emph{signed cocircuits} of $\mathcal{M}$, typically denoted by $\overrightarrow{C^*}$. The notions of signed circuit and signed cocircuit are intrinsic to $\mathcal{M}$, independent of the choice of $M$ up to \emph{reorientations}. By a reorientation, we mean multiplying some columns of $M$ by $-1$. For the proofs, see \cite{SW}.
These signed circuits make $\mathcal{M}$ an \emph{oriented matroid} \cite[Chapter 1.2]{BVSWZ}, so regular matroids are in particular oriented matroids.
It is well known that the \emph{dual matroid} $\mathcal{M}^*$ of a regular matroid $\mathcal{M}$ is also regular. There exists a totally unimodular matrix $M^*_{(n-r)\times n}$ such that the signed circuits and signed cocircuits of $\mathcal{M^*}$ are the signed cocircuits and signed circuits of $\mathcal{M}$, respectively. For the details, see \cite{SW}. The matrix $M^*$ should be viewed as a dual representation of $M$ in addition to a representation of $\mathcal{M^*}$. In particular, if we multiply some columns of $M$ by $-1$, then we should also multiply the corresponding columns of $M^*_{(n-r)\times n}$ by $-1$.
For any edge $e\in E$, we define an \emph{arc} $\overrightarrow{e}$ of $\mathcal{M}$ to be an $n$-tuple in the domain $\mathbb{R}^E$ of $M$, where the $e$-th entry is $1$ or $-1$ and the other entries are zero. We make the notion of arcs intrinsic to $\mathcal{M}$ in the following sense. If we multiply the $e$-th column of $M$ by $-1$, then an arc $\overrightarrow{e}$ will have the opposite sign with respect to the new matrix, but it is still the same arc of $\mathcal{M}$. So, the matrix $M$ provides us with a reference orientation for $E$ so that we know for the two opposite arcs associated with one edge which one is labeled by ``$1$''. The signed circuits $\overrightarrow{C}$ and signed cocircuits $\overrightarrow{C^*}$ can be viewed as sets of arcs in a natural way. An \emph{orientation} of $\mathcal{M}$, typically denoted by $\overrightarrow{O}$, is a set of arcs where each edge appears exactly once. It makes sense to write $\overrightarrow{e}\in\overrightarrow{C}$, $\overrightarrow{C^*}\subseteq\overrightarrow{O}$, etc. In these cases we say the arc $\overrightarrow{e}$ is \emph{in} the signed circuit $\overrightarrow{C}$, the signed cocircuit $\overrightarrow{C^*}$ is \emph{in} the orientation $\overrightarrow{O}$, etc.
Now we recall the notion of \emph{circuit-cocircuit reversal (equivalence) classes} of orientations of $\mathcal{M}$ introduced by Gioan \cite{G1, G2}.
If $\overrightarrow{C}$ is a signed circuit in an orientation $\overrightarrow{O}$ of $\mathcal{M}$, then a \emph{circuit reversal} replaces $\overrightarrow{C}$ with the opposite signed circuit $-\overrightarrow{C}$ in $\overrightarrow{O}$. The equivalence relation generated by circuit reversals defines the \emph{circuit reversal classes} of orientations of $\mathcal{M}$. Similarly, we may define the \emph{cocircuit reversal classes}. The equivalence relation generated by circuit and cocircuit reversals defines the circuit-cocircuit reversal classes. We denote by $[\overrightarrow{O}]$ the circuit-cocircuit reversal class containing $\overrightarrow{O}$. It is proved in \cite{G2} that the number of circuit-cocircuit reversal classes of $\mathcal{M}$ equals the number of bases of $\mathcal{M}$.
Let $B$ be a basis of $\mathcal{M}$ and $e$ be an edge. If $e\notin B$, then we call the unique circuit in $B\cup \{e\}$ the \emph{fundamental circuit} of $e$ with respect to $B$, denoted by $C(B,e)$; if $e\in B$, then we call the unique cocircuit in $(E\backslash B)\cup \{e\}$ the \emph{fundamental cocircuit} of $e$ with respect to $B$, denoted by $C^*(B,e)$.
\emph{Graphic matroids} are important examples of regular matroids. Let $G$ be a connected finite graph with nonempty edge set $E$, where \emph{loops} and \emph{multiple edges} are allowed. By fixing a reference orientation of $G$, we get an \emph{oriented incidence matrix} of $G$. By deleting any row of the matrix, we get a totally unimodular matrix $M$ of full rank representing the graphic matroid $\mathcal{M}_G$ associated to $G$; see \cite{O} for details. Then edges, bases, signed circuits, signed cocircuits, arcs, orientations, and circuit-cocircuit reversal classes of $\mathcal{M}_G$ are edges, spanning trees, directed cycles, directed cocycles (bonds), arcs, orientations, and cycle-cocycle reversal classes of $G$, respectively.
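For concreteness, the representation just described can be assembled for a triangle: build the oriented incidence matrix for a chosen reference orientation, delete one row, and observe that a signed circuit lies in the kernel of the resulting matrix. The reference orientation in this Python sketch is an arbitrary illustrative choice:

```python
def oriented_incidence(n, arcs):
    """Oriented incidence matrix: entry (v, e) is +1 if reference arc e
    leaves vertex v, -1 if it enters v, and 0 otherwise."""
    M = [[0] * len(arcs) for _ in range(n)]
    for e, (u, v) in enumerate(arcs):
        M[u][e] += 1
        M[v][e] -= 1
    return M

def representation_matrix(n, arcs):
    """Delete one row to get a full-rank totally unimodular
    representation of the graphic matroid."""
    return oriented_incidence(n, arcs)[1:]
```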
\subsection{Notation and terminology: fourientations}\label{four}
It is convenient to introduce our theory in terms of \emph{fourientations}. Fourientations of graphs were systematically studied by Backman and Hopkins \cite{BH}. We will only make use of the basic notions, but we define them for regular matroids. A fourientation $\overrightarrow{F}$ of the regular matroid $\mathcal{M}$ is a subset of the set of all the arcs. Symbolically, $\overrightarrow{F}\subseteq\{\pm\overrightarrow{e}: e\in E\}$. Intuitively, a fourientation is a choice, for each edge of $\mathcal{M}$, of whether to make it \emph{one-way oriented}, leave it \emph{unoriented}, or \emph{biorient} it.
We denote by $-\overrightarrow{F}$ the fourientation obtained by reversing all the arcs in $\overrightarrow{F}$. In particular, the bioriented edges remain bioriented. We denote by $\overrightarrow{F}^c$ the set complement of $\overrightarrow{F}$, which is also a fourientation. Sometimes we use the notation $-\overrightarrow{F}^c$, which switches the unoriented edges and the bioriented edges in $\overrightarrow{F}$. See Figure~\ref{fourientation1} for examples of fourientations.
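The three operations $-\overrightarrow{F}$, $\overrightarrow{F}^c$, and $-\overrightarrow{F}^c$ are easy to compute once arcs are encoded as (edge, sign) pairs; the edge labels in this small Python sketch are illustrative:

```python
def neg(F):
    """Reverse all arcs of a fourientation; bioriented edges stay bioriented."""
    return {(e, -s) for (e, s) in F}

def complement(F, edges):
    """Set complement of F inside the set of all 2|E| arcs."""
    return {(e, s) for e in edges for s in (1, -1)} - F

# edge 'a' one-way oriented, edge 'b' bioriented, edge 'c' unoriented
edges = {'a', 'b', 'c'}
F = {('a', 1), ('b', 1), ('b', -1)}
minus_F_c = neg(complement(F, edges))
```

On this example, $-\overrightarrow{F}^c$ keeps the one-way edge $a$ and swaps the roles of the bioriented edge $b$ and the unoriented edge $c$.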
\begin{figure}
\caption{Examples of fourientations}
\label{fourientation1}
\end{figure}
A \emph{potential circuit} of a fourientation $\overrightarrow{F}$ is a signed circuit $\overrightarrow{C}$ such that $\overrightarrow{C}\subseteq \overrightarrow{F}$. A \emph{potential cocircuit} of a fourientation $\overrightarrow{F}$ is a signed cocircuit $\overrightarrow{C^*}$ such that $\overrightarrow{C^*}\subseteq -\overrightarrow{F}^c$.
\begin{figure}
\caption{A potential circuit and a potential cocircuit of $\protect\overrightarrow{F}$}
\label{fourientation2}
\end{figure}
\subsection{New framework: a pair of atlases $(\mathcal{A}, \mathcal{A^*})$ and the induced map $f_{\mathcal{A},\mathcal{A^*}}$}\label{new framework}
The BBY bijection studied in \cite{BBY} relies upon a pair consisting of an acyclic circuit signature and an acyclic cocircuit signature. We will generalize this work by building a new framework where the signatures are replaced by \emph{atlases} and the BBY bijection is replaced by a map $f_{\mathcal{A},\mathcal{A^*}}$. This subsection will introduce these new terminologies.
\begin{definition}\label{def1}
Let $B$ be a basis of $\mathcal{M}$. \begin{enumerate}
\item We call the edges in $B$ \emph{internal} and the edges not in $B$ \emph{external}.
\item An \emph{externally oriented basis} $\overrightarrow{B}$ is a fourientation where all the internal edges are bioriented and all the external edges are one-way oriented.
\item An \emph{internally oriented basis} $\overrightarrow{B^*}$ is a fourientation where all the external edges are bioriented and all the internal edges are one-way oriented.
\item An \emph{external atlas} $\mathcal{A}$ of $\mathcal{M}$ is a collection of externally oriented bases $\overrightarrow{B}$ such that each basis of $\mathcal{M}$ appears exactly once.
\item An \emph{internal atlas} $\mathcal{A^*}$ of $\mathcal{M}$ is a collection of internally oriented bases $\overrightarrow{B^*}$ such that each basis of $\mathcal{M}$ appears exactly once.
\end{enumerate}
\end{definition}
Given an external atlas $\mathcal{A}$ (resp. internal atlas $\mathcal{A^*}$) and a basis $B$, by $\overrightarrow{B}$ (resp. $\overrightarrow{B^*}$) we always mean the oriented basis in the atlas although the notation does not refer to the atlas.
\begin{definition}\label{def2}
For a pair of atlases $(\mathcal{A},\mathcal{A^*})$, we define the following map
\begin{align*}
f_{\mathcal{A},\mathcal{A^*}}:\{\text{bases of }\mathcal{M}\} & \to \{\text{orientations of }\mathcal{M}\} \\
B & \mapsto \overrightarrow{B} \cap \overrightarrow{B^*} \text{ (where } \overrightarrow{B}\in\mathcal{A},\overrightarrow{B^*}\in\mathcal{A^*}\text{).}
\end{align*}
\end{definition}
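To make the map $f_{\mathcal{A},\mathcal{A^*}}$ concrete, here is a toy computation on a triangle with ground set $\{a,b,c\}$; the particular arc choices below are illustrative only and are not derived from any signature:

```python
def all_arcs(edges):
    """Both arcs of every edge in the given set (i.e., biorient them all)."""
    return {(e, s) for e in edges for s in (1, -1)}

# basis B = {a, b}, external edge c
B, external = {'a', 'b'}, {'c'}
# externally oriented basis: internal edges bioriented, external edges one-way
B_ext = all_arcs(B) | {('c', 1)}
# internally oriented basis: external edges bioriented, internal edges one-way
B_int = all_arcs(external) | {('a', 1), ('b', -1)}
f_of_B = B_ext & B_int  # f_{A,A*}(B) = intersection of the two oriented bases
```

The intersection picks, for each internal edge, the arc chosen by $\overrightarrow{B^*}$ and, for each external edge, the arc chosen by $\overrightarrow{B}$, so the result is always an orientation.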
\begin{figure}
\caption{An example for Definition~\ref{def1}}
\label{mapexample}
\end{figure}
We remark that, in the other direction, for any map $f$ from bases to orientations, there exists a unique pair of atlases $(\mathcal{A},\mathcal{A}^*)$ such that $f=f_{\mathcal{A},\mathcal{A^*}}$. So, the pair of atlases merely lets us view the map $f$ from a different perspective. However, from the main results of this paper, one will see why this new perspective interests us.
Now we put the BBY bijection and the Bernardi bijection in our framework.
\begin{example}[Atlases $\mathcal{A_\sigma},\mathcal{A^*_{\sigma^*}}$ and the BBY map (bijection)]\label{sigma atlas} Here we assume readers are familiar with the basic terminology of signatures in \cite{BBY}; those who are new to this area may consult Definition~\ref{def3}. Let $\sigma$ be a circuit signature and $\sigma^*$ a cocircuit signature of $\mathcal{M}$. We may construct an external atlas $\mathcal{A_\sigma}$ from $\sigma$ such that for each externally oriented basis $\overrightarrow{B}\in\mathcal{A_\sigma}$, each external arc $\overrightarrow{e}\in\overrightarrow{B}$ is oriented according to the orientation of the fundamental circuit $C(B,e)$ in $\sigma$. Similarly, we may construct an internal atlas $\mathcal{A}^*_{\sigma^*}$ from $\sigma^*$ such that for each internally oriented basis $\overrightarrow{B^*}\in\mathcal{A}^*_{\sigma^*}$, each internal arc $\overrightarrow{e}\in\overrightarrow{B^*}$ is oriented according to the orientation of the fundamental cocircuit $C^*(B,e)$ in $\sigma^*$. When the two signatures are acyclic, the map $f_{\mathcal{A_\sigma},\mathcal{A^*_{\sigma^*}}}$ is exactly the BBY map defined in \cite{BBY}.
\end{example}
\begin{example}[Atlases $\mathcal{A_\text{B}}, \mathcal{A}_q^*$ and the Bernardi map (bijection)]\label{B atlas}
The Bernardi bijection is defined for a connected graph $G$ equipped with a \emph{ribbon structure} and with initial data $(q,e)$, where $q$ is a vertex and $e$ is an edge incident to it; see \cite{B} for details or \cite{BW} for a nice introduction. Here we use an example (Figure~\ref{mapexample2}) to recall the construction of the bijection in the atlas language. The Bernardi bijection is a map from spanning trees to certain orientations. The construction makes use of the \emph{Bernardi tour}, which starts with $(q,e)$ and goes around a given tree $B$ according to the ribbon structure. We may construct an external atlas $\mathcal{A_\text{B}}$ of $\mathcal{M}_G$ as follows. Observe that the Bernardi tour cuts each external edge twice. We orient each external edge toward the endpoint cut first, biorient all the internal edges of $B$, and hence get an externally oriented basis $\overrightarrow{B}$. All such externally oriented bases form the atlas $\mathcal{A_\text{B}}$.
The internal atlas $\mathcal{A}_q^*$ of $\mathcal{M}_G$ is constructed as follows. For any tree $B$, we orient each internal edge away from $q$, biorient external edges, and hence get $\overrightarrow{B^*}\in\mathcal{A}_q^*$. We remark that $\mathcal{A}_q^*$ is a special case of $\mathcal{A^*_{\sigma^*}}$, where $\sigma^*$ is an acyclic cocycle signature \cite[Example 5.1.5]{Yuen2}.
The map $f_{\mathcal{A}_\text{B},\mathcal{A}_q^*}$ is exactly the Bernardi map.
\begin{figure}
\caption{An example for the Bernardi map. The tree $B$ is in red.}
\label{mapexample2}
\end{figure}
\end{example}
\subsection{Bijections and the two atlases}\label{bijection and atlas}
We will see in this subsection that the map $f_{\mathcal{A},\mathcal{A^*}}$ induces a bijection between the bases of $\mathcal{M}$ and the circuit-cocircuit reversal classes of $\mathcal{M}$ when the two atlases satisfy certain conditions, which we call \emph{dissecting} and \emph{triangulating}. Furthermore, we will extend the bijection as in \cite{D2}.
The following definitions play a central role in our paper. Although the definitions are combinatorial, they were derived from Lawrence polytopes; see Section~\ref{Lawrence-intro}.
\begin{definition}\label{key def}
Let $\mathcal{A}$ be an external atlas and $\mathcal{A^*}$ be an internal atlas of $\mathcal{M}$.
\begin{enumerate}
\item We call $\mathcal{A}$ \emph{dissecting} if for any two distinct bases $B_1$ and $B_2$, the fourientation $\overrightarrow{B_1}\cap(-\overrightarrow{B_2})$ has a potential cocircuit.
\item We call $\mathcal{A}$ \emph{triangulating} if for any two distinct bases $B_1$ and $B_2$, the fourientation $\overrightarrow{B_1}\cap(-\overrightarrow{B_2})$ has no potential circuit.
\item We call $\mathcal{A^*}$ \emph{dissecting} if for any two distinct bases $B_1$ and $B_2$, the fourientation $(\overrightarrow{B_1^*}\cap(-\overrightarrow{B_2^*}))^c$ has a potential circuit.
\item We call $\mathcal{A^*}$ \emph{triangulating} if for any two distinct bases $B_1$ and $B_2$, the fourientation $(\overrightarrow{B_1^*}\cap(-\overrightarrow{B_2^*}))^c$ has no potential cocircuit.
\end{enumerate}
\end{definition}
\begin{Rem}
Being triangulating is stronger than being dissecting due to Lemma~\ref{3-painting}.
\end{Rem}
Now we are ready to present the first main result in this paper.
\begin{theorem}\label{main1}
Given a pair of dissecting atlases $(\mathcal{A},\mathcal{A^*})$ of a regular matroid $\mathcal{M}$, if at least one of the atlases is triangulating, then the map
\begin{align*}
\overline{f}_{\mathcal{A},\mathcal{A^*}}:\{\text{bases of }\mathcal{M}\} & \to \{\text{circuit-cocircuit reversal classes of }\mathcal{M}\} \\
B & \mapsto [\overrightarrow{B} \cap \overrightarrow{B^*}]
\end{align*}
is bijective, where $[\overrightarrow{B} \cap \overrightarrow{B^*}]$ denotes the circuit-cocircuit reversal class containing the orientation $\overrightarrow{B} \cap \overrightarrow{B^*}$.
\end{theorem}
\begin{example}[Example~\ref{sigma atlas} continued]\label{mainex}
One of the main results in \cite{BBY} is that the BBY map induces a bijection between bases and circuit-cocircuit reversal classes. We will see that both $\mathcal{A_\sigma}$ and $\mathcal{A^*_{\sigma^*}}$ are triangulating (Lemma~\ref{acyclic-tri}). Thus Theorem~\ref{main1} recovers this result.
\end{example}
\begin{example}[Example~\ref{B atlas} continued]
Theorem~\ref{main1} also recovers the bijectivity of the Bernardi map for trees in \cite{Bernardi}. In \cite{Bernardi}, it is proved that the Bernardi map is a bijection between spanning trees and the \emph{$q$-connected outdegree sequences}. Baker and Wang \cite{BW} observed that the $q$-connected outdegree sequences are essentially the same as the \emph{break divisors}. Later in \cite{BBY}, the break divisors are equivalently replaced by cycle-cocycle reversal classes. We will see that the external atlas $\mathcal{A}_\text{B}$ is dissecting (Lemma~\ref{bernardidissect}). The internal atlas $\mathcal{A}_q^*$ is triangulating because it equals $\mathcal{A^*_{\sigma^*}}$ for some acyclic signature $\sigma^*$. Hence the theorem applies.
\end{example}
In \cite{D2}, the BBY bijection is extended to a bijection $\varphi$ between subsets of $E$ and orientations of $\mathcal{M}$ in a canonical way. We also generalize this work by extending $f_{\mathcal{A},\mathcal{A^*}}$ to $\varphi_{\mathcal{A},\mathcal{A^*}}$.
\begin{definition}[The map $\varphi_{\mathcal{A},\mathcal{A^*}}$]\label{def-ext}
We will define a map $\varphi_{\mathcal{A},\mathcal{A^*}}$ from orientations to subsets of the ground set such that $\varphi_{\mathcal{A},\mathcal{A^*}}\circ f_{\mathcal{A},\mathcal{A^*}}$ is the identity map, and hence $\varphi_{\mathcal{A},\mathcal{A^*}}$ extends $f^{-1}_{\mathcal{A},\mathcal{A^*}}$. We start with an orientation $\overrightarrow{O}$. By Theorem~\ref{main1}, we get a basis $B=\overline{f}_{\mathcal{A},\mathcal{A^*}}^{-1}([\overrightarrow{O}])$. Since $\overrightarrow{O}$ and $f_{\mathcal{A},\mathcal{A^*}}(B)$ are in the same circuit-cocircuit reversal class, one can obtain either of them from the other by reversing disjoint signed circuits $\{\overrightarrow{C_i}\}_{i\in I}$ and cocircuits $\{\overrightarrow{C_j^*}\}_{j\in J}$ (see Lemma~\ref{orientation1}). Define $\varphi_{\mathcal{A},\mathcal{A^*}}(\overrightarrow{O})=(B\cup \biguplus\limits_{i\in I}C_i)\backslash \biguplus\limits_{j\in J}C_j^*$.
\end{definition}
The remarkable fact here is that $\varphi_{\mathcal{A},\mathcal{A^*}}$ is a bijection, and it has two nice specializations besides $f^{-1}_{\mathcal{A},\mathcal{A^*}}$.
\begin{theorem}\label{main2}
Fix a pair of dissecting atlases $(\mathcal{A},\mathcal{A^*})$ of $\mathcal{M}$ with ground set $E$. Suppose at least one of the atlases is triangulating.
(1) The map
\begin{align*}
\varphi_{\mathcal{A},\mathcal{A^*}}:\{\text{orientations of }\mathcal{M}\} & \to \{\text{subsets of }E\} \\
\overrightarrow{O} & \mapsto (B\cup \biguplus_{i\in I}C_i)\backslash \biguplus_{j\in J}C_j^*
\end{align*}
is a bijection, where $B$ is the unique basis such that $f_{\mathcal{A},\mathcal{A^*}}(B)\in [\overrightarrow{O}]$, and the orientations $f_{\mathcal{A},\mathcal{A^*}}(B)$ and $\overrightarrow{O}$ differ by disjoint signed circuits $\{\overrightarrow{C_i}\}_{i\in I}$ and cocircuits $\{\overrightarrow{C_j^*}\}_{j\in J}$.
(2) The image of the independent sets of $\mathcal{M}$ under the bijection $\varphi^{-1}_{\mathcal{A},\mathcal{A^*}}$ is a representative set of the circuit reversal classes of $\mathcal{M}$.
(3) The image of the spanning sets of $\mathcal{M}$ under the bijection $\varphi^{-1}_{\mathcal{A},\mathcal{A^*}}$ is a representative set of the cocircuit reversal classes of $\mathcal{M}$.
\end{theorem}
\begin{Rem}
We can apply Theorem~\ref{main2} to extend and generalize the Bernardi bijection; see Corollary~\ref{Bernardi-extend} for a formal statement. In \cite{Bernardi}, the Bernardi bijection is also extended to a subgraph-orientation correspondence. However, Bernardi's extension is different from the bijection $\varphi_{\mathcal{A}_\text{B},\mathcal{A}_q^*}$ in Theorem~\ref{main2} in general.
\end{Rem}
\subsection{Signatures and the two atlases}\label{sign}
In Section~\ref{bijection and atlas}, we have seen that acyclic signatures $\sigma$ and $\sigma^*$ induce triangulating atlases $\mathcal{A}_\sigma$ and $\mathcal{A^*_{\sigma^*}}$, respectively, and hence we may apply our main theorems to the BBY bijection. In this section, we will define a new class of signatures, called \emph{triangulating signatures}, which are in one-to-one correspondence with triangulating atlases and generalize acyclic signatures. Note that in \cite{BBY}, the BBY map is proved to be bijective onto \emph{$(\sigma,\sigma^*)$-compatible orientations}, which are representatives of the circuit-cocircuit reversal classes. We will also generalize this result. In particular, we will reformulate Theorem~\ref{main1} and Theorem~\ref{main2} in terms of the signatures and the compatible orientations (for triangulating atlases).
First we recall the definitions of circuit (resp. cocircuit) signatures, acyclic circuit (resp. cocircuit) signatures, and compatible orientations from \cite{BBY}.
\begin{definition}\label{def3}
Let $\mathcal{M}$ be a regular matroid.
\begin{enumerate}
\item A \emph{circuit signature} $\sigma$ of $\mathcal{M}$ is the choice of a direction for each circuit of $\mathcal{M}$. For each circuit $C$, we denote by $\sigma(C)$ the signed circuit we choose for $C$. By abusing notation, we also view $\sigma$ as the set of the signed circuits we choose: $\{\sigma(C):C\text{ is a circuit}\}$.
\item The circuit signature $\sigma$ is said to be \emph{acyclic} if whenever $a_C$ are nonnegative reals with $\sum_C a_C\sigma(C)=0$ in $\mathbb{R}^E$ we have $a_C=0$ for all $C$, where the sum is over all circuits of $\mathcal{M}$.
\item An orientation of $\mathcal{M}$ is said to be \emph{$\sigma$-compatible} if any signed circuit in the orientation is in $\sigma$.
\item Cocircuit signatures $\sigma^*$, acyclic cocircuit signatures, and $\sigma^*$-compatible orientations are defined similarly.
\item An orientation is said to be \emph{($\sigma,\sigma^*$)-compatible} if it is both $\sigma$-compatible and $\sigma^*$-compatible.
\end{enumerate}
\end{definition}
Recall in Example~\ref{sigma atlas} that from signatures $\sigma$ and $\sigma^*$, we may construct atlases $\mathcal{A_\sigma}$ and $\mathcal{A^*_{\sigma^*}}$. It is natural to ask: (1) Which signatures induce triangulating atlases? (2) Is any triangulating atlas induced by a signature?
The following definition and theorem answer these questions.
\begin{definition}
\begin{enumerate}
\item A circuit signature $\sigma$ is said to be \emph{triangulating} if for any $\overrightarrow{B}\in\mathcal{A_\sigma}$ and any signed circuit $\overrightarrow{C}\subseteq\overrightarrow{B}$, $\overrightarrow{C}$ is in the signature $\sigma$.
\item A cocircuit signature $\sigma^*$ is said to be \emph{triangulating} if for any $\overrightarrow{B^*}\in\mathcal{A^*_{\sigma^*}}$ and any signed cocircuit $\overrightarrow{C^*}\subseteq\overrightarrow{B^*}$, $\overrightarrow{C^*}$ is in the signature $\sigma^*$.
\end{enumerate}
\end{definition}
\begin{Rem}\label{atlas-free}
In an atlas-free manner, the definition of triangulating circuit signatures is as follows: a circuit signature $\sigma$ is said to be triangulating if for any basis $B$, any signed circuit that is the sum of signed fundamental circuits (for $B$) in $\sigma$ is also in $\sigma$ (see Lemma~\ref{fundamental}). A similar definition works for the cocircuit signatures.
\end{Rem}
\begin{theorem}\label{trig-intro}
The maps
\begin{align*}
\alpha:\{\text{triangulating circuit sig. of }\mathcal{M}\} & \to \{\text{triangulating external atlases of }\mathcal{M}\} \\
\sigma & \mapsto \mathcal{A_\sigma}
\end{align*}
and
\begin{align*}
\alpha^*:\{\text{triangulating cocircuit sig. of }\mathcal{M}\} & \to \{\text{triangulating internal atlases of }\mathcal{M}\} \\
\sigma^* & \mapsto \mathcal{A}^*_{\sigma^*}
\end{align*}
are bijections.
\end{theorem}
\begin{Rem}\label{rem-nonexample}
For a dissecting external atlas $\mathcal{A}$, it is possible for there to be no circuit signature $\sigma$ such that $\mathcal{A}_\sigma=\mathcal{A}$. We can actually find a graph such that $\mathcal{A}_{\text{B}}$ (which is necessarily dissecting) gives the desired example; see Figure~\ref{nonexampleBernardi}.
\begin{figure}
\caption{A dissecting external atlas $\mathcal{A}_{\text{B}}$ that is not induced by any circuit signature}
\label{nonexampleBernardi}
\end{figure}
\end{Rem}
\begin{Rem}
Acyclic signatures are all triangulating; see Lemma~\ref{acyclic-tri}. There exists a triangulating signature that is not acyclic; see Proposition~\ref{nonex}.
\end{Rem}
A nice feature of acyclic signatures is that their compatible orientations form representatives of the orientation classes. The following properties of triangulating signatures generalize the acyclic counterparts proved in \cite{BBY}.
\begin{proposition}
Suppose $\sigma$ and $\sigma^*$ are \emph{triangulating} signatures.
\begin{enumerate}
\item The set of $(\sigma, \sigma^*)$-compatible orientations is a representative set of the circuit-cocircuit reversal classes of $\mathcal{M}$.
\item The set of $\sigma$-compatible orientations is a representative set of the circuit reversal classes of $\mathcal{M}$.
\item The set of $\sigma^*$-compatible orientations is a representative set of the cocircuit reversal classes of $\mathcal{M}$.
\end{enumerate}
\end{proposition}
To reformulate Theorem~\ref{main1} and Theorem~\ref{main2} in terms of signatures and compatible orientations, we write
\[\text{BBY}_{\sigma,\sigma^*}=f_{\mathcal{A_\sigma},\mathcal{A^*_{\sigma^*}}}\text{ and }\varphi_{\sigma,\sigma^*}=\varphi_{\mathcal{A_\sigma},\mathcal{A^*_{\sigma^*}}}.\]
They are exactly the BBY bijection in \cite{BBY} and the extended BBY bijection in \cite{D2} when the two signatures are acyclic. By the two theorems and a bit of extra work, we obtain the following theorems, which generalize the work in \cite{BBY} and \cite{D2}, respectively.
\begin{theorem}\label{main1-sign}
Suppose $\sigma$ and $\sigma^*$ are \emph{triangulating} signatures of a regular matroid $\mathcal{M}$. The map $\text{BBY}_{\sigma,\sigma^*}$ is a bijection between the bases of $\mathcal{M}$ and the $(\sigma,\sigma^*)$-compatible orientations of $\mathcal{M}$.
\end{theorem}
\begin{theorem}\label{main2-sign}
Suppose $\sigma$ and $\sigma^*$ are \emph{triangulating} signatures of a regular matroid $\mathcal{M}$ with ground set $E$.
(1) The map
\begin{align*}
\varphi_{\sigma,\sigma^*}:\{\text{orientations of }\mathcal{M}\} & \to \{\text{subsets of } E\} \\
\overrightarrow{O} & \mapsto (\text{BBY}_{\sigma,\sigma^*}^{-1}(\overrightarrow{O^{cp}})\cup \biguplus_{i\in I}C_i)\backslash \biguplus_{j\in J}C_j^*
\end{align*}
is a bijection, where $\overrightarrow{O^{cp}}$ is the (unique) ($\sigma,\sigma^*$)-compatible orientation obtained by reversing disjoint signed circuits $\{\overrightarrow{C_i}\}_{i\in I}$ and signed cocircuits $\{\overrightarrow{C_j^*}\}_{j\in J}$ in $\overrightarrow{O}$.
(2) The map $\varphi_{\sigma,\sigma^*}$ specializes to the bijection
\begin{align*}
\varphi_{\sigma,\sigma^*}: \{\sigma\text{-compatible orientations}\} & \to \{\text{independent sets}\} \\
\overrightarrow{O} & \mapsto \text{BBY}_{\sigma,\sigma^*}^{-1}(\overrightarrow{O^{cp}})\backslash \biguplus_{j\in J}C_j^*.
\end{align*}
(3) The map $\varphi_{\sigma,\sigma^*}$ specializes to the bijection
\begin{align*}
\varphi_{\sigma,\sigma^*}:\{\sigma^*\text{-compatible orientations}\} & \to \{\text{spanning sets}\} \\
\overrightarrow{O} & \mapsto \text{BBY}_{\sigma,\sigma^*}^{-1}(\overrightarrow{O^{cp}})\cup \biguplus_{i\in I}C_i.
\end{align*}
\end{theorem}
The definition of triangulating signatures is somewhat indirect. However, in the case of \emph{graphs}, we have the following nice description for the triangulating \emph{cycle} signatures, the proof of which is due to Gleb Nenashev. We do not know whether a similar statement holds for regular matroids.
\begin{theorem}\label{triangular}
A \emph{cycle} signature $\sigma$ of a \emph{graph} $G$ is triangulating if and only if for any three directed cycles in $\sigma$, their sum (as vectors in $\mathbb{Z}^E$) is not zero.
\end{theorem}
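Theorem~\ref{triangular} reduces to a finite check over triples of directed cycles. The sketch below tests this criterion on a hypothetical example, the theta graph on two vertices with parallel edges $a,b,c$, whose three cycles are recorded as vectors in $\mathbb{Z}^3$; the two signatures shown are hypothetical choices of direction, and checking only \emph{distinct} triples suffices since cycle vectors have entries in $\{0,\pm1\}$.

```python
from itertools import combinations

def is_triangulating(signature):
    """A cycle signature passes iff no three distinct directed cycles
    (as vectors in Z^E) sum to the zero vector."""
    return all(tuple(x + y + z for x, y, z in zip(c1, c2, c3)) != (0, 0, 0)
               for c1, c2, c3 in combinations(signature, 3))

# theta graph: cycles {a,b}, {b,c}, {a,c} directed two different ways
bad  = [(1, -1, 0), (0, 1, -1), (-1, 0, 1)]   # the three cycles sum to zero
good = [(1, -1, 0), (0, 1, -1), (1, 0, -1)]   # no triple sums to zero
assert not is_triangulating(bad)
assert is_triangulating(good)
```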
\subsection{Lawrence polytopes and the two atlases}\label{Lawrence-intro}
In this subsection, we will introduce a pair of \emph{Lawrence polytopes} $\mathcal{P}$ and $\mathcal{P^*}$ associated to a regular matroid $\mathcal{M}$. We will see that dissections and triangulations of the Lawrence polytopes correspond to the dissecting atlases and triangulating atlases, respectively, which is actually how we derived Definition~\ref{key def}. We will also see that regular triangulations correspond to acyclic signatures.
Readers can find some information on Lawrence polytopes in the paper \cite{BS} and the books \cite{BVSWZ,Z}. The Lawrence polytopes defined for regular matroids in this paper were rediscovered by the author in attempts to define a dual object to the root polytope studied in \cite{P}; see Section~\ref{intro-motivation} for details.
Recall that $M_{r\times n}$ is a totally unimodular matrix representing $\mathcal{M}$.
\begin{definition}
\begin{enumerate}
\item We call \[\begin{pmatrix} M_{r\times n} & {\bf 0} \\ I_{n\times n} & I_{n\times n} \end{pmatrix}\] the \emph{Lawrence matrix}, where $I_{n\times n}$ is the identity matrix. The columns of the Lawrence matrix are denoted by $P_1, \cdots, P_n$, $P_{-1}, \cdots, P_{-n}\in\mathbb{R}^{n+r}$ in order.
\item The \emph{Lawrence polytope} $\mathcal{P}\subseteq \mathbb{R}^{n+r}$ of $\mathcal{M}$ is the convex hull of the points $P_1, \cdots, P_n, P_{-1}, \cdots, P_{-n}$.
\item If we replace the matrix $M$ in (1) with $M^*_{(n-r)\times n}$ (see Section~\ref{regular}), then we get the Lawrence polytope $\mathcal{P^*}\subseteq\mathbb{R}^{2n-r}$. We use the labels $P_i^*$ for the points generating $\mathcal{P^*}$.
\item We further assume that $\mathcal{M}$ is loopless when defining $\mathcal{P}$ and that $\mathcal{M}$ is coloopless when defining $\mathcal{P}^*$, to avoid duplicate columns of the Lawrence matrix.
\end{enumerate}
\end{definition}
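The Lawrence matrix construction can be sketched directly as a block matrix $\begin{pmatrix} M & \mathbf{0} \\ I & I \end{pmatrix}$; the matrix $M$ below (an oriented incidence matrix of a triangle with one redundant row dropped) is a hypothetical example, not data from the paper.

```python
def lawrence_matrix(M):
    """Return the (r+n) x 2n Lawrence matrix [[M, 0], [I, I]]
    for an r x n matrix M, given as a list of rows."""
    r, n = len(M), len(M[0])
    top = [row + [0] * n for row in M]                       # [ M | 0 ]
    bottom = [[1 if (j % n) == i else 0 for j in range(2 * n)]
              for i in range(n)]                             # [ I | I ]
    return top + bottom

# a totally unimodular 2 x 3 example
M = [[1, 0, -1],
     [-1, 1, 0]]
L = lawrence_matrix(M)
assert len(L) == 5 and all(len(row) == 6 for row in L)
# columns are P_1, P_2, P_3, P_{-1}, P_{-2}, P_{-3} in order
assert [row[0] for row in L] == [1, -1, 1, 0, 0]    # P_1
assert [row[3] for row in L] == [0, 0, 1, 0, 0]     # P_{-1}
```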
\begin{Rem}
We only need the assumption in (4) for the geometric results in this subsection. In particular, we do not need the assumption for atlases. One can use ``point configurations'' \cite{DRS} to replace polytopes so that the assumption is unnecessary.
\end{Rem}
\begin{Rem}
Our definition of the Lawrence polytope certainly depends on the matrix we choose. We still speak of the Lawrence polytope $\mathcal{P}$ (or $\mathcal{P^*}$) of $\mathcal{M}$ for the following two reasons. First, if we fix a total order on the ground set $E$ and fix a reference orientation, then the matrix $M$ is unique up to left multiplication by a matrix in $\mathrm{SL}(r,\mathbb{Z})$; see \cite{SW}. Hence the resulting Lawrence polytope is also unique in a similar sense. Second, our results involving the Lawrence polytope do not depend on the choice of $M$.
\end{Rem}
We introduce some basic notions in discrete geometry.
\begin{definition}
A \emph{simplex} $S$ is the convex hull of some affinely independent points. A \emph{face} of $S$ is a simplex generated by a subset of these points, which could be $S$ or $\emptyset$.
\end{definition}
\begin{definition}\label{tri-diss-def}
Let $\mathcal{P}$ be a polytope of dimension $d$. \begin{enumerate}
\item If $d+1$ of the vertices of $\mathcal{P}$ form a $d$-dimensional simplex, we call such a simplex a \emph{maximal simplex} of $\mathcal{P}$.
\item A \emph{dissection} of $\mathcal{P}$ is a collection of maximal simplices of $\mathcal{P}$ such that \begin{enumerate}
\item[(I)] the union is $\mathcal{P}$, and
\item[(II)] the relative interiors of any two distinct maximal simplices in the collection are disjoint.
\end{enumerate}
\item If we replace the condition (II) in (2) with the condition (III) that any two distinct maximal simplices in the collection intersect in a common face (which could be empty), then we get a \emph{triangulation}. (See Figure~\ref{tri-dis}.)
\end{enumerate}
\end{definition}
\begin{figure}
\caption{A triangulation and a dissection of an octahedron}
\label{tri-dis}
\end{figure}
The next two theorems build the connection between the geometry of the Lawrence polytopes and the combinatorics of the regular matroid. To state them, we need to label the $2|E|$ arcs of $\mathcal{M}$. Recall that given the matrix $M$, the arcs of $\mathcal{M}$ are the standard unit vectors and their opposites. We denote them by $\overrightarrow{e_1}, \cdots, \overrightarrow{e_n}$ and $\overrightarrow{e_{-1}}, \cdots, \overrightarrow{e_{-n}}$. In particular, $\overrightarrow{e_i}=-\overrightarrow{e_{-i}}$.
\begin{theorem}\label{3-fold}
We have the following threefold bijections, all of which are denoted by $\chi$. (It should be clear from the context which one we are referring to.)
\begin{enumerate}
\item The Lawrence polytope $\mathcal{P}\subseteq\mathbb{R}^{n+r}$ is an $(n+r-1)$-dimensional polytope whose vertices are exactly the points $P_1, \cdots, P_n, P_{-1}, \cdots, P_{-n}$. Hence we may define a bijection
\begin{align*}
\chi:\{\text{vertices of }\mathcal{P}\} & \to \{\text{arcs of }\mathcal{M}\} \\
P_i & \mapsto \overrightarrow{e_i}
\end{align*}
\item The map $\chi$ in (1) induces a bijection
\begin{align*}
\chi:\{\text{maximal simplices of }\mathcal{P}\} & \to \{\text{externally oriented bases of }\mathcal{M}\} \\
\begin{gathered}
\text{a maximal simplex}\\
\text{with vertices }\{P_i:i\in I\}
\end{gathered} & \mapsto \text{the fourientation }\{\chi(P_i):i\in I\}.
\end{align*}
\item The map $\chi$ in (2) induces two bijections
\begin{align*}
\chi:\{\text{triangulations of }\mathcal{P}\} & \to \{\text{triangulating external atlases of }\mathcal{M}\} \\
\begin{gathered}
\text{a triangulation with} \\
\text{maximal simplices }\{S_i:i\in I\}
\end{gathered} & \mapsto \text{the external atlas }\{\chi(S_i):i\in I\},
\end{align*}
and \begin{align*}
\chi:\{\text{dissections of }\mathcal{P}\} & \to \{\text{dissecting external atlases of }\mathcal{M}\} \\
\begin{gathered}
\text{a dissection with}\\\text{maximal simplices }\{S_i:i\in I\}
\end{gathered} & \mapsto \text{the external atlas }\{\chi(S_i):i\in I\}.
\end{align*}
\item In (1), (2) and (3), if we replace the Lawrence polytope $\mathcal{P}$ with $\mathcal{P^*}$, the points $P_i$ with $P_i^*$, $\chi$ with $\chi^*$, and every word ``external'' with ``internal'', then the statement also holds.
\end{enumerate}
\end{theorem}
Recall that the map $\alpha:\sigma\mapsto\mathcal{A}_\sigma$ is a bijection between triangulating circuit signatures and triangulating external atlases of $\mathcal{M}$. See Section~\ref{regular-sign} for the definition of regular triangulations.
\begin{theorem}\label{main3}
The restriction of the bijection $\chi^{-1}\circ\alpha$ to the set of acyclic circuit signatures of $\mathcal{M}$ is bijective onto the set of regular triangulations of $\mathcal{P}$. In other words, a circuit signature $\sigma$ is acyclic if and only if the triangulation $\chi^{-1}(\mathcal{A}_\sigma)$ is regular. Dually, the restriction of the bijection $(\chi^*)^{-1}\circ\alpha^*$ to the set of acyclic cocircuit signatures of $\mathcal{M}$ is bijective onto the set of regular triangulations of $\mathcal{P}^*$.
\end{theorem}
We conclude this subsection with Table~\ref{table-summarize}.
\begin{table}[h!]
\centering
\begin{tabular}{ |c|c|c| }
\hline
circuit signature $\sigma$ & external atlas $\mathcal{A}$& Lawrence polytope $\mathcal{P}$\\
\hline
(may not exist) & dissecting & dissection\\
\hline
triangulating & triangulating & triangulation\\
\hline
acyclic & (no good description) & regular triangulation\\
\hline
\end{tabular}
\caption{A summary of the correspondences among signatures, atlases, and the Lawrence polytopes via $\alpha$ and $\chi$. We omit the dual part. }
\label{table-summarize}
\end{table}
\subsection{Motivation and root polytopes}\label{intro-motivation}
We explain how our work is motivated by and related to the work \cite{K}, \cite{KT1}, and \cite{P} on \emph{root polytopes} of \emph{hypergraphs} and the work \cite{BBY} on the BBY bijections.
The story began with a question by O. Bernardi when he was my advisor. He asked whether the bijection in \cite[Theorem 12.9]{P} is a BBY bijection. We now explain this question.
Let $G=(V,E)$ be a connected graph without loops, where $V=\{v_i:i\in I\}$ is the vertex set and $E=\{e_j:j\in J\}$ is the edge set. By adding a vertex to the midpoint of each edge of $G$, we obtain a bipartite graph $\text{Bip}(G)$ with vertex classes $V$ and $E$, where we use $e_j$ to label the midpoint of $e_j$ by abusing notation. We remark that this is a special case of constructing the bipartite graph $\text{Bip}(H)$ associated with a hypergraph $H$; see \cite{K}.
Let $\{\mathbf{v}_i:i\in I\}\cup\{\mathbf{e}_j:j\in J\}$ be the coordinate vectors in $\mathbb{R}^{|V|+|E|}$.
The \emph{root polytope} associated to $\text{Bip}(G)$ is
\begin{align*}\mathcal{Q}&=\text{ConvexHull}(\mathbf{v}_i-\mathbf{e}_j:v_i \text{ is incident to }e_j \text{ in }G)\\
(&=\text{ConvexHull}(\mathbf{v}_i-\mathbf{e}_j:\{v_i,e_j\}\text{ is an edge of Bip}(G))).
\end{align*}
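The vertex set of $\mathcal{Q}$ is easy to generate explicitly. The sketch below uses a hypothetical example, the triangle graph with vertices $v_1,v_2,v_3$ and edges $e_{12},e_{13},e_{23}$, and builds one vector $\mathbf{v}_i-\mathbf{e}_j\in\mathbb{R}^{|V|+|E|}$ per edge of $\text{Bip}(G)$.

```python
V = ['v1', 'v2', 'v3']
E = {'e12': ('v1', 'v2'), 'e13': ('v1', 'v3'), 'e23': ('v2', 'v3')}
coords = {x: i for i, x in enumerate(V + sorted(E))}  # coordinates of R^{|V|+|E|}

def unit(x):
    """Coordinate vector of a vertex or edge-midpoint label."""
    u = [0] * len(coords)
    u[coords[x]] = 1
    return u

# one generator v_i - e_j per incidence, i.e. per edge of Bip(G)
vertices = [[a - b for a, b in zip(unit(v), unit(e))]
            for e, ends in sorted(E.items()) for v in ends]
assert len(vertices) == 2 * len(E)   # one generator per arc of G
```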
The maximal simplices of $\mathcal{Q}$ are characterized by the following lemma.
\begin{lemma}\cite[Lemma 12.5]{P} Any maximal simplex of $\mathcal{Q}$ is of the form \[\Delta_T=\text{ConvexHull}(\mathbf{v}_i-\mathbf{e}_j:\{v_i,e_j\}\text{ is an edge of }T),\] where $T$ is a spanning tree of $\text{Bip}(G)$.
\end{lemma}
For a spanning tree $T$ of $\text{Bip}(G)$, we define the \emph{right degree vector} to be $RD(T)=(d_j-1:j\in J)$, where $d_j$ is the degree of $e_j$ in $T$; we define the \emph{left degree vector} to be $LD(T)=(d_i-1:i\in I)$, where $d_i$ is the degree of $v_i$ in $T$.
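The degree vectors above can be computed directly from a tree given as a list of incidences. Below is a small sketch on a hypothetical spanning tree $T$ of $\text{Bip}(G)$ for the triangle graph; it also checks the fact, used later, that the edges with $d_j=2$ form a spanning tree of $G$.

```python
from collections import Counter

V = ['v1', 'v2', 'v3']
E = ['e12', 'e13', 'e23']
# a hypothetical spanning tree T of Bip(G), as (vertex, edge-midpoint) pairs
T = [('v1', 'e12'), ('v2', 'e12'), ('v2', 'e23'), ('v3', 'e23'), ('v3', 'e13')]

deg = Counter(x for pair in T for x in pair)   # degree in T of each label
RD = [deg[e] - 1 for e in E]                   # right degree vector
LD = [deg[v] - 1 for v in V]                   # left degree vector
assert RD == [1, 0, 1] and LD == [0, 1, 1]
# midpoints of degree 2 give the induced spanning tree B of G
assert [e for e in E if deg[e] == 2] == ['e12', 'e23']
```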
Theorem 12.9 of \cite{P} implies the following result.
\begin{theorem}\label{Post1}
For any triangulation $\{\Delta_{T_1}, \cdots, \Delta_{T_s}\}$ of $\mathcal{Q}$, \begin{enumerate}
\item the set $R=\{RD(T_1), \cdots, RD(T_s)\}$ does not depend on the triangulation,
\item the set $L=\{LD(T_1), \cdots, LD(T_s)\}$ does not depend on the triangulation, and
\item the map $f:RD(T_i)\mapsto LD(T_i)$ is a bijection from $R$ to $L$.
\end{enumerate}
\end{theorem}
From the point of view of \cite{K}, $R$ is the set of \emph{hypertrees} of $G$ (viewed as a hypergraph). Since $G$ is a graph in our case, $R$ is the set of spanning trees of $G$. To be precise, the spanning tree $B$ of $G$ induced by the vector $RD(T)$ is $B=\{e_j:d_j=2\}$. From the point of view of \cite{KT1}, $L$ is in bijection with \emph{break divisors} of $G$. From the point of view of \cite{AKBS, BBY}, the break divisors are in bijection with the indegree sequences of $q$-connected orientations, where $q$ is a root vertex of $G$, and hence the break divisors are in bijection with the circuit-cocircuit reversal classes. Here we remark that the break divisors are canonical representatives for the set $\text{Pic}^g(G)$, and $\text{Pic}^g(G)$ is a canonical torsor for the group $\text{Jac}(G)(=\text{Pic}^0(G))$ \cite{AKBS}. Therefore, the map $f$ in Theorem~\ref{Post1} induces a bijection between spanning trees of $G$ and the circuit-cocircuit reversal classes of $G$. Thus we can ask whether it is a BBY bijection.
Meanwhile, the work \cite{KT1, KT2} shows that the Bernardi bijection induces a dissection of $\mathcal{Q}$. These results strongly suggest that the dissections and triangulations of $\mathcal{Q}$ are intimately related to these types of bijections. The mysterious part was that the root polytope is only related to the external edges of trees, so the theory for the bijection $f$ and the one for the BBY bijection have an essential difference. This was why we were looking for a dual object to the root polytope, and the dual object turns out to be the Lawrence polytope $\mathcal{P}^*$.
To see how our work on atlases and Lawrence polytopes implies all the results above, we build the connections between the terminologies related to the root polytope $\mathcal{Q}$ and the ones we use for the Lawrence polytope $\mathcal{P}$.
Firstly, the geometric objects $\mathcal{Q}$ and $\mathcal{P}$ differ by an invertible linear transformation. Indeed, for any $e_j\in E$ and its two endpoints $v_{i_1},v_{i_2}\in V$, the pair $(\mathbf{v}_{i_1}-\mathbf{e}_j, \mathbf{v}_{i_2}-\mathbf{e}_j)$ can be transformed to $(\mathbf{e}_j+\mathbf{v}_{i_2}-\mathbf{v}_{i_1}, \mathbf{e}_j)$, via which $\mathcal{Q}$ is transformed to the Lawrence polytope $\mathcal{P}$ associated to the oriented incidence matrix $M$ of $G$ (ignoring one redundant row of $M$). Secondly, the combinatorial object $\text{Bip}(G)$ can be viewed as a fourientation $\overrightarrow{G}$ where each edge of $G$ is bioriented if we view the edges $\{\mathbf{v}_{i_1},\mathbf{e}_j\}$ and $\{\mathbf{v}_{i_2},\mathbf{e}_j\}$ in $\text{Bip}(G)$ as the arcs $(v_{i_1}, v_{i_2})$ and $(v_{i_2}, v_{i_1})$ in $\overrightarrow{G}$, respectively. Via this correspondence, a spanning tree $T\subseteq\text{Bip}(G)$ corresponds to an externally oriented basis $\overrightarrow{B}\subseteq\overrightarrow{G}$, and $RD(T)$ corresponds to the underlying tree of $\overrightarrow{B}$. Recall that $LD(T)$ corresponds to the indegree sequence of a $q$-connected orientation, which we did not specify. Now we point out that, in our language, this orientation is $\overrightarrow{B}\cap\overrightarrow{B^*}$, where $\overrightarrow{B^*}\in\mathcal{A}^*_q$. Hence the map $f$ in Theorem~\ref{Post1} corresponds to the map $\overline{f}_{\mathcal{A},\mathcal{A}^*_q}$, where $\mathcal{A}=\chi(\{\Delta_{T_1}, \cdots, \Delta_{T_s}\})$ (by identifying $\mathcal{P}$ and $\mathcal{Q}$).
Now it is clear that the root polytope $\mathcal{Q}$ (or the Lawrence polytope $\mathcal{P}$) only deals with the external atlas $\mathcal{A}$, and it can induce the bijection $f$ only because we implicitly use the internal atlas $\mathcal{A}^*_q$. The dual object $\mathcal{P}^*$ deals with the internal atlas and makes the bijection $f$ ``symmetric''.
We point out that \cite[Lemma 12.5]{P} and \cite[Lemma 12.6]{P} correspond to Theorem~\ref{3-fold}(2) and the triangulation part of Theorem~\ref{3-fold}(3), respectively. However, we still prove them because we need to deal with dissections and regular matroids.
In \cite{P}, the author asks whether a triangulation of $\mathcal{Q}$ can be reconstructed from the bijection $f$. This question has been fully answered by Galashin, Nenashev, and Postnikov in \cite{GNP}. In particular, they construct a new combinatorial object called a \emph{trianguloid} and prove that trianguloids are in bijection with triangulations of $\mathcal{Q}$ \cite[Theorem 5.6]{GNP}. Moreover, they make use of trianguloids to prove that different triangulations induce different bijections $f$ \cite[Theorem 5.7]{GNP}. In this paper, we characterize the triangulations of the Lawrence polytope $\mathcal{P}$ in terms of circuit signatures; see Table~\ref{table-summarize} and Theorem~\ref{triangular}. However, we can only apply our results to the root polytopes associated to $\text{Bip}(G)$, where $G$ is a graph rather than a hypergraph. Even for this case, it is unclear how our characterization is related to theirs.
Going back to Bernardi's question, we can answer it now. If we view the BBY bijection and $f$ as maps to orientations, then the map $f$ is not BBY in general because there exists a triangulating cycle signature that is not acyclic (Proposition~\ref{nonex}). If we view the two bijections as maps to the circuit-cocircuit reversal classes, then the answer is still no by \cite[Theorem 5.7]{GNP}. A harder question is whether they induce the same \emph{torsor} (see Section~\ref{introduction} for the definition). We do not know the answer.
Another important way to relate the Lawrence polytope (or the root polytope) to the BBY bijection is via zonotopal subdivisions. The \emph{zonotope} $Z(M)$ (resp. $Z(M^*)$) is the Minkowski sum of the line segments spanned by the columns of $M$ (resp. $M^*$). Their subdivisions are used to construct the BBY bijection in \cite[Section 3.4]{BBY}. In particular, every acyclic circuit signature (resp. cocircuit signature) induces a subdivision of $Z(M)$ (resp. $Z(M^*)$) indexed by the bases of $\mathcal{M}$.
We may view the zonotope $Z(M)$ (resp. $Z(M^*)$) as a section of the Lawrence polytope $\mathcal{P}$ (resp. $\mathcal{P}^*$), which is an example of a more general phenomenon known as the \emph{Cayley trick}; see \cite{D,HRS,P,Z}. To be precise, denote the columns of $M$ by $M_1,\cdots,M_n$, and recall that the columns of the Lawrence matrix \[\begin{pmatrix} M_{r\times n} & {\bf 0} \\ I_{n\times n} & I_{n\times n} \end{pmatrix}\] are denoted by $P_1, \cdots, P_n$, $P_{-1}, \cdots, P_{-n}$. Then we have \[Z(M)=\{\sum_{i=1}^n k_iM_i:k_i\in [0,1]\text{ for all }i\},\]
and\[\mathcal{P}=\{\sum_{i=1}^n (k_iP_i+k_{-i}P_{-i}):k_i,k_{-i}\geq 0\text{ for all }i\text{ and }\sum_{i=1}^n (k_i+k_{-i})=1\}.\]
We take the section $y_1=\cdots=y_n=1/n$ of $\mathcal{P}\subseteq\mathbb{R}^{n+r}$, where $y_1,\cdots,y_n$ denote the last $n$ coordinates of $\mathbb{R}^{n+r}$. A direct computation shows that the zonotope $Z(M)$ is exactly the $n$-th dilate of this section.
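This direct computation can also be checked numerically. The sketch below (in Python, using a made-up $2\times 3$ totally unimodular matrix $M$, not one taken from the paper) builds a point of the section $y_1=\cdots=y_n=1/n$ of $\mathcal{P}$ from coefficients $t_i\in[0,1]$ and verifies, with exact rational arithmetic, that its $n$-th dilate is the corresponding point $\sum_i t_iM_i$ of $Z(M)$.

```python
from fractions import Fraction

# A small totally unimodular example matrix M (r = 2, n = 3); made up for illustration.
M = [[1, 0, 1],
     [0, 1, 1]]
r, n = 2, 3

# Columns of the Lawrence matrix [[M, 0], [I, I]] in R^{r+n}:
# P_i = (M_i, e_i) for sign > 0 and P_{-i} = (0, e_i) for sign < 0.
def P(i, sign):
    col = [Fraction(M[a][i]) if sign > 0 else Fraction(0) for a in range(r)]
    col += [Fraction(0)] * n
    col[r + i] = Fraction(1)
    return col

# A generic point of Z(M): coefficients t_i in [0, 1].
t = [Fraction(1, 2), Fraction(0), Fraction(3, 10)]
zono_point = [sum(t[i] * M[a][i] for i in range(n)) for a in range(r)]

# The corresponding point of the section y_1 = ... = y_n = 1/n of P:
# take k_i = t_i / n and k_{-i} = 1/n - k_i, so k_i + k_{-i} = 1/n for each i
# and the coefficients are nonnegative with total sum 1.
point = [Fraction(0)] * (r + n)
for i in range(n):
    k_pos, k_neg = t[i] / n, Fraction(1, n) - t[i] / n
    for a in range(r + n):
        point[a] += k_pos * P(i, +1)[a] + k_neg * P(i, -1)[a]

# The point lies in the section, and its n-th dilate recovers the Z(M) point.
assert all(point[r + i] == Fraction(1, n) for i in range(n))
assert [n * point[a] for a in range(r)] == zono_point
print("n * (section point) =", [n * point[a] for a in range(r)])
```

Varying the $t_i$ over $[0,1]$ sweeps out all of $Z(M)$, which is the content of the dilation claim.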
If we restrict a triangulation $\chi^{-1}(\mathcal{A}_\sigma)$ of $\mathcal{P}$ to the (dilated) section $Z(M)$, we obtain a subdivision of $Z(M)$. When the signature $\sigma$ is acyclic, it is easy to check that the subdivision of $Z(M)$ is exactly the one induced by $\sigma$ in \cite{BBY}.
\iffalse
\subsection{A tiling of a simplex induced by the bijection $\varphi_{\mathcal{A},\mathcal{A^*}}$}\label{tile-intro}
In \cite[Section 4]{D2}, the bijection $\varphi_{\sigma,\sigma^*}$, which is generalized to $\varphi_{\mathcal{A},\mathcal{A^*}}$ in Theorem~\ref{main2}, induces a tiling of the hypercube $[0,1]^E$, where $[0,1]^E$ is the set of \emph{continuous orientations} of $\mathcal{M}$. The tiling is a decomposition of $[0,1]^E$ into half-open cells such that each cell contains exactly one discrete orientation $\overrightarrow{O}$ and the vectors spanning the cell correspond to the edges in $\varphi_{\sigma,\sigma^*}(\overrightarrow{O})$.
The hypercube has two projections, the zonotopes $Z(M)$ and $Z(M^*)$ generated by the columns of $M$ and $M^*$, respectively; see Figure~\ref{diag}. Zonotopal subdivisions of are used to construct the BBY bijection in \cite[Section 3.4]{BBY}. Every acyclic circuit signature (resp. cocircuit signature) induces a subdivision of $Z(M)$ (resp. $Z(M^*)$) indexed by the bases of $\mathcal{M}$. The tiling of the hypercube \cite{D2} contains the information of these zonotopal subdivisions. Roughly speaking, if we project part of the tiling to the zonotope $Z(M)$ (resp. $Z(M^*)$), then we get a refinement of the subdivision, which is indexed by the independent sets (resp. spanning sets).
We may view the zonotope $Z(M)$ (resp. $Z(M^*)$) as a section of the Lawrence polytope $\mathcal{P}$ (resp. $\mathcal{P}^*$), which is an example of a more general phenomenon known as the \emph{Cayley trick}; see \cite{D,HRS,P,Z}. To be precise, denote the columns of $M$ by $M_1,\cdots,M_n$, and recall that the columns of the Lawrence matrix \[\begin{pmatrix} M_{r\times n} & {\bf 0} \\ I_{n\times n} & I_{n\times n} \end{pmatrix}\] are denoted by $P_1, \cdots, P_n$, $P_{-1}, \cdots, P_{-n}$. Then we have \[Z(M)=\{\sum_{i=1}^n k_iM_i:k_i\in [0,1]\text{ for all }i\},\]
and\[\mathcal{P}=\{\sum_{i=1}^n (k_iP_i+k_{-i}P_{-i}):\sum_{i=1}^n (k_i+k_{-i})=1\}.\]
We take the section $y_1=\cdots=y_n=1/n$ of $\mathcal{P}$, where $y_1,\cdots,y_n$ denote the last $n$ coordinates. A direct computation shows that the zonotope $Z(M)$ is exactly the $n$-th dilate of this section. If we restrict a triangulation $\chi^{-1}({A}_\sigma)$ of $\mathcal{P}$ to the (dilated) section $Z(M)$, we will certainly obtain a subdivision of $Z(M)$. When the signature $\sigma$ is acyclic, it is easy to check that the subdivision of $Z(M)$ is exactly the one induced by $\sigma$ in \cite{BBY}.
\begin{figure}
\caption{A commutative diagram}
\label{diag}
\end{figure}
We have introduced five objects in Figure~\ref{diag}. This suggests that there might be a ``universal'' geometric object that contains the hypercube as a section and can be projected to the Lawrence polytopes. Moreover, we expect this object to have a tiling property as the hypercube. The target of this section is to define this object $\mathcal{S}$ and build the tiling.
We fix a space $\mathbb{R}_{\geq 0}^{2n}$ whose points are written as \[x=(x(1),\cdots, x(n),x(-1),\cdots,x(-n)),\] where every entry is nonnegative. Recall that the arcs of $\mathcal{M}$ are denoted by $\overrightarrow{e_1}, \cdots, \overrightarrow{e_n}$, $\overrightarrow{e_{-1}}, \cdots, \overrightarrow{e_{-n}}$. The entry $x(i)$ should be thought of as corresponded to $\overrightarrow{e_i}$.
Let \[\mathcal{S}=\{x\in\mathbb{R}_{\geq 0}^{2n}: \sum_{i=1}^n(x(i)+x(-i))=1\}.\]
Clearly, the simplex $\mathcal{S}$ are projected to $\mathcal{P}$ and $\mathcal{P}^*$ via the two Lawrence matrices, respectively. The section $\{x\in\mathcal{S}: x(i)+x(-i)=1/n\}$ of $\mathcal{S}$ is a hypercube (equivalent to $[0,1]^E$).
Now we construct the tiling. In \cite{D2}, the tiling induced by $\varphi_{\sigma,\sigma^*}$ is a decomposition of the hypercube into half-open cells. Here we need half-open simplices as defined below.
Let $\overrightarrow{O}$ be an orientation of $\mathcal{M}$. For each edge $e_i$, we define the number $\overrightarrow{O}_i$ to be \[
\overrightarrow{O}_i=
\begin{cases}
i, & \text{if } \overrightarrow{e_i}\in\overrightarrow{O};\\
-i, & \text{if } \overrightarrow{e_{-i}}\in\overrightarrow{O}.
\end{cases}
\]
Given an orientation $\overrightarrow{O}$ and a subset $A\subseteq E$, we consider the \emph{half-open simplices} \[\text{hoc}(\mathcal{S},\overrightarrow{O},A)=\left\{x\in\mathcal{S}: \begin{gathered}
x(\overrightarrow{O}_i)\neq 0,\text{ for }e_i\in A;\\ x(-\overrightarrow{O}_i)= 0,\text{ for }e_i\notin A.
\end{gathered}
\right\}\]
\begin{theorem}\label{main4}
Let $(
\mathcal{A},\mathcal{A^*})$ be a pair of dissecting atlases of $\mathcal{M}$, and at least one of the atlases is triangulating. Then
\[\mathcal{S}=\biguplus_{\overrightarrow{O}}\text{hoc}(\mathcal{S},\overrightarrow{O},\varphi_{\mathcal{A},\mathcal{A^*}}(\overrightarrow{O})),\] where the disjoint union is over all the orientations of $\mathcal{M}$.
\end{theorem}
\begin{example}
The dimension of $\mathcal{S}$ is $2n-1$, so we can only draw pictures for $n\leq 2$.
\end{example}
\begin{Rem}
The tiling in Theorem~\ref{main4} contains the information of the triangulation or the dissection $\chi^{-1}(\mathcal
{A})$ in the following sense. \DCX{not sure}
\end{Rem}
\fi
\section{The Proofs of Theorem~\ref{main1} and Theorem~\ref{main2}}\label{combinatorial results}
We will prove Theorem~\ref{main1} and Theorem~\ref{main2} in this section.
\subsection{Preliminaries}\label{Pre}
In this subsection, we will introduce some lemmas and notations. Some of them will also be used in other sections.
Let $\mathcal{M}$ be a regular matroid. We start with three lemmas which hold for oriented matroids and hence for regular matroids. In the case of graphs, one can find the latter two results in \cite[Lemma 2.4 and Proposition 2.5]{BH}.
The following lemma is known as the \emph{orthogonality axiom} \cite[Theorem 3.4.3]{BVSWZ}.
\begin{lemma}\label{orientation2}
Let $\overrightarrow{C}$ be a signed circuit and $\overrightarrow{C^*}$ be a signed cocircuit of $\mathcal{M}$. If $C\cap C^*\neq\emptyset$, then there exist an edge in $C\cap C^*$ on which $\overrightarrow{C}$ and $\overrightarrow{C^*}$ agree and another edge in $C\cap C^*$ on which they disagree.
\end{lemma}
\begin{lemma}\label{exclusive} Let $\overrightarrow{F}$ be a fourientation of $\mathcal{M}$. Then for any potential circuit $\overrightarrow{C}$ and any potential cocircuit $\overrightarrow{C^*}$ of $\overrightarrow{F}$, their underlying edges satisfy $C\cap C^*=\emptyset$.
\end{lemma}
\begin{proof}
Assume $E_0=C\cap C^*$ is nonempty. Then each edge in $E_0$ must be one-way oriented in $\overrightarrow{F}$, in the direction prescribed by both $\overrightarrow{C}$ and $\overrightarrow{C^*}$. Hence $\overrightarrow{C}$ and $\overrightarrow{C^*}$ agree on every edge of $E_0$, which contradicts Lemma~\ref{orientation2}.
\end{proof}
The following lemma is known as the \emph{3-painting axiom}; see \cite[Theorem 3.4.4]{BVSWZ}.
\begin{lemma}\label{3-painting} Let $\overrightarrow{F}$ be a fourientation of $\mathcal{M}$ and $\overrightarrow{e}$ be a one-way oriented edge in $\overrightarrow{F}$.
Then either $\overrightarrow{e}$ belongs to some potential circuit of $\overrightarrow{F}$ or $\overrightarrow{e}$ belongs to some potential cocircuit of $\overrightarrow{F}$, but not both.
\end{lemma}
We also need the following lemma and definition. Recall that $M$ is a totally unimodular matrix representing the regular matroid $\mathcal{M}$.
\begin{lemma}\cite[Lemma 6.7]{Z}\label{conformal}
(1) Let $\overrightarrow{u}\in\ker_\mathbb{R}(M)$. Then $\overrightarrow{u}$ can be written as a sum of signed circuits with positive coefficients $\sum k_i\overrightarrow{C_i}$ where for each edge $e$ of each $C_i$, the sign of $e$ in $\overrightarrow{C_i}$ agrees with the sign of $e$ in $\overrightarrow{u}$.
(2) Let $\overrightarrow{u^*}\in\operatorname{im}_\mathbb{R}(M^T)$. Then $\overrightarrow{u^*}$ can be written as a sum of signed cocircuits with positive coefficients $\sum k_i\overrightarrow{C_i^*}$ where for each edge $e$ of each $C_i^*$, the sign of $e$ in $\overrightarrow{C_i^*}$ agrees with the sign of $e$ in $\overrightarrow{u^*}$.
\end{lemma}
\begin{definition}\label{component}
In Lemma~\ref{conformal}, we call the signed circuit $\overrightarrow{C_i}$ a \emph{component} of $\overrightarrow{u}$ and the signed cocircuit $\overrightarrow{C_i^*}$ a \emph{component} of $\overrightarrow{u^*}$.
\end{definition}
\begin{Rem}\label{component2}
In Lemma~\ref{conformal}, the linear combination might not be unique. However, once we fix a linear combination, it is clear that the set of underlying edges of $\overrightarrow{u}$ (i.e., $\{e:\text{the }e\text{-th entry of }\overrightarrow{u}\text{ is nonzero}\}$) is the union of the underlying edges of its components in the linear combination. See also \cite[Lemma 4.1.1]{BBY} for an integral version of Lemma~\ref{conformal}.
\end{Rem}
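To make Lemma~\ref{conformal} and Remark~\ref{component2} concrete, here is a toy verification in Python on a small graph, two triangles sharing the edge $e_3$; the graph and the chosen signed circuits are illustrative assumptions, not data from the paper.

```python
# Oriented incidence matrix of a small graph (vertices 1..4; edges
# e1=(1,2), e2=(2,3), e3=(1,3), e4=(3,4), e5=(1,4)).
# Rows index vertices, columns index edges; tails get +1, heads get -1.
M = [
    [ 1,  0,  1,  0,  1],   # vertex 1
    [-1,  1,  0,  0,  0],   # vertex 2
    [ 0, -1, -1,  1,  0],   # vertex 3
    [ 0,  0,  0, -1, -1],   # vertex 4
]

C1 = [1, 1, -1, 0, 0]    # signed circuit 1 -> 2 -> 3 -> 1
C2 = [0, 0, -1, -1, 1]   # signed circuit 1 -> 4 -> 3 -> 1
u = [C1[e] + C2[e] for e in range(5)]   # a conformal sum: both coefficients are 1

def in_kernel(M, v):
    return all(sum(row[e] * v[e] for e in range(len(v))) == 0 for row in M)

assert in_kernel(M, C1) and in_kernel(M, C2) and in_kernel(M, u)

# Conformality: wherever a component is nonzero, its sign agrees with u.
for C in (C1, C2):
    assert all(C[e] == 0 or (C[e] > 0) == (u[e] > 0) for e in range(5))

# As in the remark: the underlying edges of u are the union of those of C1, C2.
support = lambda v: {e for e in range(5) if v[e] != 0}
assert support(u) == support(C1) | support(C2)
print("u =", u)
```

Note that the two circuits share $e_3$ with the same sign, which is exactly what makes the sum conformal; reversing one of them would instead cancel the $e_3$ entries.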
The following lemma is crucial when we deal with circuit-cocircuit reversal classes. One can find a proof in the proof of \cite[Theorem 3.3]{GY} or see \cite[Section 2.1]{D2}.
\begin{lemma}\label{orientation1}
Let $\overrightarrow{O_1}$ and $\overrightarrow{O_2}$ be two orientations in the same circuit-cocircuit reversal class of $\mathcal{M}$. Then $\overrightarrow{O_2}$ can be obtained by reversing disjoint signed circuits and signed cocircuits in $\overrightarrow{O_1}$.
\end{lemma}
Lastly, we introduce some useful notation. Recall that $E$ is the ground set of $\mathcal{M}$. Let $E_0$ be a subset of $E$ and $\overrightarrow{F}$ be a fourientation. We denote by $\overrightarrow{F}|_{E_0}$ the fourientation obtained by restricting $\overrightarrow{F}$ to the ground set $E_0$, i.e., $\overrightarrow{F}|_{E_0}=\overrightarrow{F}\cap\{\pm\overrightarrow{e}: e\in E_0\}$. When $E_0$ consists of a single edge $e$, we simply write $\overrightarrow{F}|_e$. In particular, when $e$ is unoriented in $\overrightarrow{F}$, we have $\overrightarrow{F}|_e=\emptyset$, and when $e$ is bioriented in $\overrightarrow{F}$, we write $\overrightarrow{F}|_e=\updownarrow$.
\subsection{Proof of Theorem~\ref{main1}}
We first recall some basic settings. We fix a regular matroid $\mathcal{M}$ with ground set $E$. Let $\mathcal{A}$ be an external atlas and $\mathcal{A^*}$ be an internal atlas, which means for every basis $B$ of $\mathcal{M}$, there exists a unique externally oriented basis $\overrightarrow{B}\in\mathcal{A}$ and a unique internally oriented basis $\overrightarrow{B^*}\in\mathcal{A^*}$. The pair $(\mathcal{A},\mathcal{A^*})$ of atlases induces the following map
\begin{align*}
f_{\mathcal{A},\mathcal{A^*}}:\{\text{bases}\} & \to \{\text{orientations}\} \\
B & \mapsto \overrightarrow{B} \cap \overrightarrow{B^*}.
\end{align*}
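The intersection $\overrightarrow{B}\cap\overrightarrow{B^*}$ can be computed concretely once the oriented bases are given as arc sets. The Python sketch below (arcs encoded as signed integers $\pm e$; the particular arcs chosen are made up for illustration) follows the convention visible in Table~\ref{table1}: the externally oriented basis $\overrightarrow{B}$ biorients the edges of $B$ and one-way orients the others, while $\overrightarrow{B^*}$ does the opposite, so the intersection is a full orientation.

```python
# Arcs encoded as signed integers +-e; a fourientation is a set of arcs.
E = {1, 2, 3}
B = {1, 2}                        # a basis

B_ext = {+1, -1, +2, -2, +3}      # externally oriented basis: edges of B bioriented
B_int = {-1, +2, +3, -3}          # internally oriented basis: edges of E \ B bioriented

O = B_ext & B_int                 # f_{A,A*}(B) = the intersection of the two
assert O == {-1, +2, +3}
# O selects exactly one arc per edge, i.e., it is a full orientation:
assert {abs(a) for a in O} == E and len(O) == len(E)
```

On a basis edge the direction comes from $\overrightarrow{B^*}$, and on a non-basis edge it comes from $\overrightarrow{B}$, which is why the intersection always yields an orientation.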
Let $B_1$ and $B_2$ be two arbitrary bases (not necessarily distinct). Let $\overrightarrow{O_1}$, $\overrightarrow{O_2}$ and $\overrightarrow{F}$, $\overrightarrow{F^*}$ be two orientations and two fourientations given by the following formulas: \[\overrightarrow{O_i}=f_{\mathcal{A},\mathcal{A^*}}(B_i), i\in\{1,2\},\] \[\overrightarrow{F}=\overrightarrow{B_1}\cap(-\overrightarrow{B_2}),\] \[\overrightarrow{F^*}=(\overrightarrow{B_1^*}\cap(-\overrightarrow{B_2^*}))^c.\]
Now we compute the two fourientations $\overrightarrow{F}$ and $\overrightarrow{F^*}$ in terms of $\overrightarrow{O_1}$ and $\overrightarrow{O_2}$, which is summarized in Table~\ref{table1}. For example, when $e\in B_2\backslash B_1$, we have $\overrightarrow{F}|_e=\overrightarrow{O_1}|_e$. This is because $\overrightarrow{F}=\overrightarrow{B_1}\cap(-\overrightarrow{B_2})$, $\overrightarrow{B_2}|_e=\updownarrow$, and $\overrightarrow{B_1}|_e=\overrightarrow{O_1}|_e$ (due to $\overrightarrow{O_1}=\overrightarrow{B_1}\cap\overrightarrow{B_1^*}$). All the other results can be derived similarly. A direct consequence of this table is the following lemma.
\begin{table}[h!]
\centering
\bgroup
\def1.5{1.5}
\begin{tabular}{ |c|c|c|c|c| }
\hline
position of edge $e$ & $B_1\cap B_2$ & $B_1\backslash B_2$ & $B_2\backslash B_1$ & $B_1^c\cap B_2^c$\\
\hline
$\overrightarrow{F}|_{e}$ & $\updownarrow$ & $-\overrightarrow{O_2}$ & $\overrightarrow{O_1}$ & $ \overrightarrow{O_1}\cap(-\overrightarrow{O_2})$ \\
\hline
$\overrightarrow{F^*}|_{e}$ &$(-\overrightarrow{O_1})\cup\overrightarrow{O_2}$ & $-\overrightarrow{O_1}$ &$\overrightarrow{O_2}$ & $\emptyset$\\
\hline
\end{tabular}
\egroup
\caption{The table expresses $\protect\overrightarrow{F}$ and $\protect\overrightarrow{F^*}$ in terms of $\protect\overrightarrow{O_1}$ and $\protect\overrightarrow{O_2}$. The edges $e$ of $\mathcal{M}$ are partitioned into $4$ classes according to whether $e\in B_1$ and whether $e\in B_2$. We view $\protect\overrightarrow{O_1}$ and $\protect\overrightarrow{O_2}$ as sets of arcs of $\mathcal{M}$ so that the union and intersection make sense. We omit ``$|_e$'' after the $\protect\overrightarrow{O_i}$'s. E.g., when $e\in B_1^c\cap B_2^c$, $\protect\overrightarrow{F}|_e=\protect\overrightarrow{O_1}|_e\cap(-\protect\overrightarrow{O_2}|_e)$.}
\label{table1}
\end{table}
\begin{lemma}\label{theorem1lemma} Let $E_\rightrightarrows$ be the set of edges where $\overrightarrow{O_1}$ and $\overrightarrow{O_2}$ agree. Let $E_\rightleftarrows=E\backslash E_\rightrightarrows$.
\begin{enumerate}
\item If $E_0\subseteq E_\rightrightarrows$, then $\overrightarrow{F}|_{E_0}=\overrightarrow{F^*}|_{E_0}$.
\item If $E_0\subseteq E_\rightleftarrows$, then $\overrightarrow{O_1}|_{E_0}\subseteq\overrightarrow{F}|_{E_0}$ and $\overrightarrow{F^*}|_{E_0}\subseteq\overrightarrow{O_2}|_{E_0}$.
\end{enumerate}
\end{lemma}
Now we are ready to prove Theorem~\ref{main1}. By \cite[Theorem 3.10]{G2}, the number of circuit-cocircuit reversal classes equals the number of bases. Thus it is enough to prove that the map $\overline{f}_{\mathcal{A},\mathcal{A^*}}$ in Theorem~\ref{main1} is injective, which is the following proposition.
\begin{proposition}\label{th1prop}
Let $B_1$ and $B_2$ be two distinct bases of $\mathcal{M}$. If either of the following two assumptions holds, then the orientations $\overrightarrow{O_1}=f_{\mathcal{A},\mathcal{A^*}}(B_1)$ and $\overrightarrow{O_2}=f_{\mathcal{A},\mathcal{A^*}}(B_2)$ are in distinct circuit-cocircuit reversal classes.
\begin{enumerate}
\item The external atlas $\mathcal{A}$ is dissecting and the internal atlas $\mathcal{A^*}$ is triangulating.
\item The external atlas $\mathcal{A}$ is triangulating and the internal atlas $\mathcal{A^*}$ is dissecting.
\end{enumerate}
\end{proposition}
\begin{proof}
Assume by contradiction that $\overrightarrow{O_1}$ and $\overrightarrow{O_2}$ are in the same circuit-cocircuit reversal class. By Lemma~\ref{orientation1}, there exist disjoint signed circuits $\{\overrightarrow{C_i}\}_{i\in I}$ and signed cocircuits $\{\overrightarrow{C^*_{j}}\}_{j\in J}$ in $\overrightarrow{O_1}$ whose reversal yields $\overrightarrow{O_2}$.
(1) Because $\mathcal{A}$ is dissecting, the fourientation $\overrightarrow{F}$ has a potential cocircuit $\overrightarrow{D^*}$. We will show that $\overrightarrow{D^*}$ is also a potential cocircuit of $\overrightarrow{F^*}$, which contradicts that $\mathcal{A^*}$ is triangulating.
Consider applying Lemma~\ref{theorem1lemma}. Note that $E_\rightleftarrows$ is the disjoint union of $\{C_i\}_{i\in I}$ and $\{C^*_{j}\}_{j\in J}$. For any $j\in J$, let $E_0=C^*_j$ and apply Lemma~\ref{theorem1lemma}(2). Then we get $\overrightarrow{F^*}|_{C^*_j}\subseteq\overrightarrow{O_2}|_{C^*_j}=-\overrightarrow{C^*_j}$. By definition, this implies that $-\overrightarrow{C^*_j}$ is a potential cocircuit of $\overrightarrow{F^*}$, which contradicts that $\mathcal{A^*}$ is triangulating. So, $J=\emptyset$. For any $i\in I$, let $E_0=C_i$ and apply Lemma~\ref{theorem1lemma}(2). Then we get $\overrightarrow{C_i}=\overrightarrow{O_1}|_{C_i}\subseteq\overrightarrow{F}|_{C_i}$. This means $\overrightarrow{C_i}$ is a potential circuit of $\overrightarrow{F}$. Because $\overrightarrow{D^*}$ is a potential cocircuit of $\overrightarrow{F}$, by Lemma~\ref{exclusive}, $D^*\cap C_i=\emptyset$. Hence $D^*\subseteq E_\rightrightarrows$. By Lemma~\ref{theorem1lemma}(1), $\overrightarrow{D^*}$ is a potential cocircuit of $\overrightarrow{F^*}$, which gives the desired contradiction.
(2) This part of the proof is dual to the previous one. To be precise, because $\mathcal{A^*}$ is dissecting, the fourientation $\overrightarrow{F^*}$ has a potential circuit $\overrightarrow{D}$. Then by applying Lemma~\ref{theorem1lemma}, we may prove that $I=\emptyset$, that $-\overrightarrow{C^*_j}$ is a potential cocircuit of $\overrightarrow{F^*}$, that $D\subseteq E_\rightrightarrows$, and that $\overrightarrow{D}$ is a potential circuit of $\overrightarrow{F}$. The last claim contradicts the assumption that $\mathcal{A}$ is triangulating.
\end{proof}
\begin{Rem}
If we just want to show that the map $f_{\mathcal{A},\mathcal{A^*}}$ is injective under the assumption of Proposition~\ref{th1prop}, the proof is short and works even for oriented matroids. Indeed, assume by contradiction that $B_1\neq B_2$ but $\overrightarrow{O_1}=\overrightarrow{O_2}$. Then by Lemma~\ref{theorem1lemma}(1), $\overrightarrow{F}=\overrightarrow{F^*}$, which contradicts the definitions of the triangulating atlas and the dissecting atlas.
\end{Rem}
\subsection{Proof of Theorem~\ref{main2}}
We will prove Theorem~\ref{main2} in this subsection. For the construction of $\varphi_{\mathcal{A},\mathcal{A^*}}$, see Definition~\ref{def-ext}.
We will prove that $\varphi_{\mathcal{A},\mathcal{A^*}}$ has the following property, which is stronger than bijectivity.
\begin{definition}
Let $\varphi$ be a map from the set of orientations of $\mathcal{M}$ to the set of subsets of $E$. We say the map $\varphi$ is \emph{tiling} if for any two distinct orientations $\overrightarrow{O_1}$ and $\overrightarrow{O_2}$, there exists an edge $e$ such that $\overrightarrow{O_1}|_e\neq\overrightarrow{O_2}|_e$ and $e\in \varphi(\overrightarrow{O_1})\bigtriangleup \varphi(\overrightarrow{O_2})$.
\end{definition}
\begin{Rem}
In \cite[Section 4]{D2}, it is shown that $\varphi$ is tiling if and only if it canonically induces a half-open decomposition of the hypercube $[0,1]^E$, where $[0,1]^E$ is viewed as the set of continuous orientations of $\mathcal{M}$.
\end{Rem}
\begin{lemma}
If $\varphi$ is tiling, then $\varphi$ is bijective.
\end{lemma}
\begin{proof}
The property $e\in \varphi(\overrightarrow{O_1})\bigtriangleup \varphi(\overrightarrow{O_2})$ in the definition implies $\varphi(\overrightarrow{O_1})\neq\varphi(\overrightarrow{O_2})$. Hence $\varphi$ is injective. The domain and codomain of $\varphi$ both have cardinality $2^{|E|}$, so $\varphi$ is bijective.
\end{proof}
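The tiling property can be checked by brute force on small examples. In the following Python sketch, the map $\varphi$ is a made-up toy example on a two-edge ground set, not one of the maps constructed in the paper; the code tests the definition and illustrates the lemma that tiling implies injectivity.

```python
from itertools import product

# Brute-force check of the tiling property on a tiny ground set.
E = (1, 2)
orientations = list(product([+1, -1], repeat=len(E)))   # one sign per edge

def is_tiling(phi):
    for O1 in orientations:
        for O2 in orientations:
            if O1 == O2:
                continue
            # need an edge where O1, O2 differ that separates phi(O1) and phi(O2)
            if not any(O1[i] != O2[i] and ((E[i] in phi[O1]) != (E[i] in phi[O2]))
                       for i in range(len(E))):
                return False
    return True

# phi(O) = set of edges oriented +1 in O: a simple tiling map.
phi = {O: {E[i] for i in range(len(E)) if O[i] == +1} for O in orientations}
assert is_tiling(phi)
# A tiling map is in particular injective:
assert len({frozenset(s) for s in phi.values()}) == len(orientations)
```

For this toy $\varphi$, any edge on which two orientations differ lands in exactly one of the two image sets, so every pair of distinct orientations is separated, as the definition requires.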
Now we begin to show $\varphi_{\mathcal{A},\mathcal{A^*}}$ is tiling.
\begin{theorem}\label{strong injectivity}
If either of the following two assumptions holds, then the map $\varphi_{\mathcal{A},\mathcal{A^*}}$ is tiling. In particular, $\varphi_{\mathcal{A},\mathcal{A^*}}$ is bijective.
\begin{enumerate}
\item The external atlas $\mathcal{A}$ is dissecting and the internal atlas $\mathcal{A^*}$ is triangulating.
\item The external atlas $\mathcal{A}$ is triangulating and the internal atlas $\mathcal{A^*}$ is dissecting.
\end{enumerate}
\end{theorem}
\begin{proof}
Let $\overrightarrow{O_A}$ and $\overrightarrow{O_B}$ be two different orientations of $\mathcal{M}$. Assume by contradiction that the desired edge $e$ does not exist. So,
\begin{center}
for edges $e\in \varphi_{\mathcal{A},\mathcal{A^*}}(\overrightarrow{O_A})\bigtriangleup \varphi_{\mathcal{A},\mathcal{A^*}}(\overrightarrow{O_B})$, we have $\overrightarrow{O_A}|_e=\overrightarrow{O_B}|_e$. ($\dagger$)
\end{center}
By the construction of $\varphi_{\mathcal{A},\mathcal{A^*}}$, we can find bases $B_1$ and $B_2$ such that $\overrightarrow{O_A}$ is obtained by reversing disjoint signed circuits $\{\overrightarrow{C_{1,i}}\}_{i\in I_1}$ and signed cocircuits $\{\overrightarrow{C^*_{1,j}}\}_{j\in J_1}$ in $\overrightarrow{O_1}:=f_{\mathcal{A},\mathcal{A^*}}(B_1)$, $\overrightarrow{O_B}$ is obtained by reversing disjoint signed circuits $\{\overrightarrow{C_{2,i}}\}_{i\in I_2}$ and signed cocircuits $\{\overrightarrow{C^*_{2,j}}\}_{j\in J_2}$ in $\overrightarrow{O_2}:=f_{\mathcal{A},\mathcal{A^*}}(B_2)$, \[\varphi_{\mathcal{A},\mathcal{A^*}}(\overrightarrow{O_A})=(B_1\cup C_1) \backslash C_1^*,\] and \[\varphi_{\mathcal{A},\mathcal{A^*}}(\overrightarrow{O_B})=(B_2\cup C_2) \backslash C_2^*,\]
where $C_k$ is the set of underlying edges of $\{\overrightarrow{C_{k,i}}\}_{i\in I_k}$ and $C_k^*$ is the set of underlying edges of $\{\overrightarrow{C_{k,j}^*}\}_{j\in J_k}$ for $k=1,2$. We also denote $\overrightarrow{C_k}=\biguplus_{i\in I_k}\overrightarrow{C_{k,i}}$ and $\overrightarrow{C_k^*}=\biguplus_{j\in J_k}\overrightarrow{C_{k,j}^*}$.
We retain the notation \[\overrightarrow{F}=\overrightarrow{B_1}\cap(-\overrightarrow{B_2}), \quad \overrightarrow{F^*}=(\overrightarrow{B_1^*}\cap(-\overrightarrow{B_2^*}))^c,\] introduced in the previous subsection.
We compute $\overrightarrow{F}$ and $\overrightarrow{F^*}$ in terms of $\overrightarrow{C_1},\overrightarrow{C_2},\overrightarrow{C^*_1}$, and $\overrightarrow{C^*_2}$, and the results are summarized in Table~\ref{table2}. The next two paragraphs will explain the table.
\begin{table}
\centering
\bgroup
\def1.5{1.5}
\begin{tabular}{ |c|c|c|c|c| }
\hline
position of $e$ and label & $\alpha$: $B_1\cap B_2$ & $\beta$: $B_1\backslash B_2$ & $\gamma$: $B_2\backslash B_1$ & $\delta$: $B_1^c\cap B_2^c$\\
\hline
\multirow{2}{3cm}{1: $C_1\cap C_2$} & $\overrightarrow{F}=\updownarrow$ & $\overrightarrow{F}=-\overrightarrow{C_2}$ & $\overrightarrow{F}=\overrightarrow{C_1}$ & $\overrightarrow{F}=\overrightarrow{C_1}\cap(-\overrightarrow{C_2})$\\
& & $\overrightarrow{F^*}=-\overrightarrow{C_1}$ & $\overrightarrow{F^*}=\overrightarrow{C_2}$ & $\overrightarrow{F^*}=\emptyset$\\
\hline
\multirow{2}{3cm}{2: $C^*_1\cap C^*_2$} & $\overrightarrow{F}=\updownarrow$ & $\overrightarrow{F}=-\overrightarrow{C^*_2}$ & $\overrightarrow{F}=\overrightarrow{C^*_1}$ & \\
& $\overrightarrow{F^*}=(-\overrightarrow{C^*_1})\cup\overrightarrow{C^*_2}$ & $\overrightarrow{F^*}=-\overrightarrow{C^*_1}$ & $\overrightarrow{F^*}=\overrightarrow{C^*_2}$ & $\overrightarrow{F^*}=\emptyset$\\
\hline
3: $C^*_1\backslash(C_2\cup C^*_2)$ & $\overrightarrow{F^*}=-\overrightarrow{C^*_1}$ $_\dagger$& $\overrightarrow{F^*}=-\overrightarrow{C^*_1}$ & $\overrightarrow{F^*}=-\overrightarrow{C^*_1}$ $_\dagger$& $\overrightarrow{F^*}=\emptyset$\\
\hline
4: $C^*_2\backslash(C_1\cup C^*_1)$ & $\overrightarrow{F^*}=\overrightarrow{C^*_2}$ $_\dagger$ & $\overrightarrow{F^*}=\overrightarrow{C^*_2}$ $_\dagger$ & $\overrightarrow{F^*}=\overrightarrow{C^*_2}$ & $\overrightarrow{F^*}=\emptyset$\\
\hline
5: $C_1\backslash(C_2\cup C^*_2)$ & $\overrightarrow{F}=\updownarrow$ & $\overrightarrow{F}=\overrightarrow{C_1}$ $_\dagger$& $\overrightarrow{F}=\overrightarrow{C_1}$ & $\overrightarrow{F}=\overrightarrow{C_1}$ $_\dagger$\\
\hline
6: $C_2\backslash(C_1\cup C^*_1)$ & $\overrightarrow{F}=\updownarrow$ & $\overrightarrow{F}=-\overrightarrow{C_2}$ & $\overrightarrow{F}=-\overrightarrow{C_2}$ $_\dagger$& $\overrightarrow{F}=-\overrightarrow{C_2}$ $_\dagger$\\
\hline
7: $(C_1\cup C_2\cup C^*_1\cup C^*_2)^c$ & $\overrightarrow{F}=\updownarrow$ & $\overrightarrow{F}=\overrightarrow{F^*}$ $_\dagger$& $\overrightarrow{F}=\overrightarrow{F^*}$ $_\dagger$& $\overrightarrow{F^*}=\emptyset$\\
\hline
\end{tabular}
\egroup
\caption{The computational results used in the proof of Theorem~\ref{strong injectivity}.}
\label{table2}
\end{table}
All the edges $e$ are partitioned into $28$ classes according to whether $e$ is in $B_1$ and/or in $B_2$ (columns), and whether $e$ is in $C_1$, $C_2$, $C_1^*$, and/or $C_2^*$ (rows). Regarding the rows, we start with 4 large classes $(C_1\cup C^*_1) \cap (C_2 \cup C^*_2)$, $(C_1\cup C^*_1)^c \cap (C_2 \cup C^*_2)$, $(C_1\cup C^*_1) \cap (C_2 \cup C^*_2)^c$, and $(C_1\cup C^*_1)^c \cap (C_2 \cup C^*_2)^c$. Using $C_1\cap C^*_1=C_2\cap C^*_2=\emptyset$, we may partition these 4 large classes into small classes. However, two items $C_1\cap C_2^*$ and $C_1^*\cap C_2$ are missing in the table. This is because they are empty. Indeed, if one of them, say $C_1\cap C_2^*$, is not empty, then by Lemma~\ref{orientation2}, there exists an edge $e\in C_1\cap C_2^*$ such that $\overrightarrow{C_1}|_e\neq\overrightarrow{C_2^*}|_e$. This implies $\overrightarrow{O_A}|_e\neq\overrightarrow{O_B}|_e$. By the definition of $\varphi_{\mathcal{A},\mathcal{A^*}}$, $e\in C_1\cap C_2^*$ implies $e\in\varphi_{\mathcal{A},\mathcal{A^*}}(\overrightarrow{O_A})\bigtriangleup \varphi_{\mathcal{A},\mathcal{A^*}}(\overrightarrow{O_B})$. By the assumption ($\dagger$), we have $\overrightarrow{O_A}|_e=\overrightarrow{O_B}|_e$, which gives the contradiction. So, the rows of the table cover all the cases.
In the table, we view $\overrightarrow{C_1},\overrightarrow{C_2},\overrightarrow{C^*_1}$, and $\overrightarrow{C^*_2}$ as sets of arcs, so the union and intersection make sense. We omit ``$|_e$'' as in Table~\ref{table1}. We only record the results that we will use, so some cells are left partially blank. The computation is straightforward. If there is no $\dagger$ in the cell, then we can get the result by making use of Table~\ref{table1}, where $\overrightarrow{O_k}$ can be replaced by $\overrightarrow{C_k}$ when $e\in C_k$ and by $\overrightarrow{C_k^*}$ when $e\in C_k^*$, for $k=1,2$. If there is a $\dagger$ in the cell, then the computation makes use of the assumption ($\dagger$). For example, for cells $3\alpha$ and $3\gamma$, since $e\in C_1^*$ and $e\in B_2$, we have $e\notin\varphi_{\mathcal{A},\mathcal{A^*}}(\overrightarrow{O_A})$ and $e\in\varphi_{\mathcal{A},\mathcal{A^*}}(\overrightarrow{O_B})$. By ($\dagger$), we have $\overrightarrow{O_A}|_e=\overrightarrow{O_B}|_e$, and hence $\overrightarrow{C_1^*}|_e=\overrightarrow{O_1}|_e=-\overrightarrow{O_2}|_e$. Combining this formula and Table~\ref{table1}, we obtain the formulas in $3\alpha$ and $3\gamma$. Similarly, we obtain the formulas in cells with $\dagger$ in rows 4, 5, and 6. For cells $7\beta$ and $7\gamma$, we still have $\overrightarrow{O_A}|_e=\overrightarrow{O_B}|_e$ due to ($\dagger$), which implies $\overrightarrow{O_1}|_e=-\overrightarrow{O_2}|_e$. Then by Table~\ref{table1}, we get $\overrightarrow{F}=\overrightarrow{F^*}$.
Now we use the table to prove two claims.
\textbf{Claim 1}: if $\overrightarrow{C_1}-\overrightarrow{C_2}\neq 0$, then each of its components (see Definition~\ref{component}) is a potential circuit of $\overrightarrow{F}$.
By Lemma~\ref{conformal}, it suffices to check that for any arc $\overrightarrow{e}\in\overrightarrow{C_1}\cup(-\overrightarrow{C_2})$ that is not cancelled in $\overrightarrow{C_1}-\overrightarrow{C_2}$, we have $\overrightarrow{F}|_e=\overrightarrow{e}$ or $\overrightarrow{F}|_e=\updownarrow$. This follows directly from rows 1, 5, and 6 in Table~\ref{table2} (in row 1, the arc survives only when $\overrightarrow{C_1}|_e=-\overrightarrow{C_2}|_e$).
Similarly, we can prove the other claim.
\textbf{Claim 2}: if $\overrightarrow{C_2^*}-\overrightarrow{C_1^*}\neq 0$, then each of its components is a potential cocircuit of $\overrightarrow{F^*}$.
We are ready to complete the proof.
When $B_1=B_2$, by definition $\overrightarrow{F}$ has no potential circuit and $\overrightarrow{F^*}$ has no potential cocircuit. By Claim 1 and Claim 2, $\overrightarrow{C_1}=\overrightarrow{C_2}$ and $\overrightarrow{C_2^*}=\overrightarrow{C_1^*}$, which implies $\overrightarrow{O_A}=\overrightarrow{O_B}$, a contradiction.
From now on we assume $B_1\neq B_2$. We will apply the dissecting and triangulating conditions (1) or (2) to get contradictions.
(1) Because $\mathcal{A}$ is dissecting and $\mathcal{A^*}$ is triangulating, there exists a potential cocircuit $\overrightarrow{D^*}$ of $\overrightarrow{F}$, and there is no potential cocircuit of $\overrightarrow{F^*}$. The latter implies that $\overrightarrow{C_2^*}=\overrightarrow{C_1^*}$ by Claim 2. So, rows 3 and 4 in Table~\ref{table2} can be ignored in this case, and in cells $2\alpha$, $2\beta$, and $2\gamma$, we have $\overrightarrow{F}=\overrightarrow{F^*}$.
Now we claim that the potential cocircuit $\overrightarrow{D^*}$ of $\overrightarrow{F}$ is also a potential cocircuit of $\overrightarrow{F^*}$, which gives the contradiction. Indeed, on one hand, the edges $e$ in rows 5 and 6, together with the edges $e$ in row 1 such that $\overrightarrow{C_1}|_e=-\overrightarrow{C_2}|_e$, are exactly the underlying edges of $\overrightarrow{C_1}-\overrightarrow{C_2}$; hence by Claim 1, Lemma~\ref{exclusive}, and Remark~\ref{component2}, the potential cocircuit $\overrightarrow{D^*}$ of $\overrightarrow{F}$ contains none of these edges. On the other hand, for the remaining edges $e$, which are those in rows 2 and 7, and those in row 1 such that $\overrightarrow{C_1}|_e=\overrightarrow{C_2}|_e$, we have either $\overrightarrow{F}|_e=\overrightarrow{F^*}|_e$ or $\overrightarrow{F^*}|_e=\emptyset$. So, $\overrightarrow{D^*}$ is also a potential cocircuit of $\overrightarrow{F^*}$.
(2) This part can be proved by a similar argument.
\end{proof}
This concludes the proof of Theorem~\ref{main2}(1). It remains to show (2) and (3).
\begin{proposition}\label{main2(2)(3)}
Under either of the assumptions of Proposition~\ref{strong injectivity} on the atlases $\mathcal{A}$ and $\mathcal{A^*}$, we have the following properties of $\varphi_{\mathcal{A},\mathcal{A^*}}$.
(1) The image of the independent sets of $\mathcal{M}$ under the bijection $\varphi^{-1}_{\mathcal{A},\mathcal{A^*}}$ is a representative set of the circuit reversal classes of $\mathcal{M}$.
(2) The image of the spanning sets of $\mathcal{M}$ under the bijection $\varphi^{-1}_{\mathcal{A},\mathcal{A^*}}$ is a representative set of the cocircuit reversal classes of $\mathcal{M}$.
\end{proposition}
\begin{proof}
Recall that the map
\begin{align*}
\varphi_{\mathcal{A},\mathcal{A^*}}:\{\text{orientations of }\mathcal{M}\} & \to \{\text{subsets of }E\} \\
\overrightarrow{O} & \mapsto (B\cup \biguplus_{i\in I}C_i)\backslash \biguplus_{j\in J}C_j^*
\end{align*}
is a bijection, where $B$ is the unique basis such that $f_{\mathcal{A},\mathcal{A^*}}(B)\in [\overrightarrow{O}]$, and the orientations $f_{\mathcal{A},\mathcal{A^*}}(B)$ and $\overrightarrow{O}$ differ by disjoint signed circuits $\{\overrightarrow{C_i}\}_{i\in I}$ and cocircuits $\{\overrightarrow{C_j^*}\}_{j\in J}$.
Let $A=\varphi_{\mathcal{A},\mathcal{A^*}}(\overrightarrow{O})$. Then $A$ is an independent set $\Leftrightarrow$ $I=\emptyset$ (by Lemma~\ref{orientation1}) $\Leftrightarrow$ the orientations $\overrightarrow{O}$ and $f_{\mathcal{A},\mathcal{A^*}}(B)$ are in the same cocircuit reversal class.
Because the set $\{f_{\mathcal{A},\mathcal{A^*}}(B):B\text{ is a basis}\}$ is a representative set of the circuit-cocircuit reversal classes (Theorem~\ref{main1}), the set $\{\varphi^{-1}_{\mathcal{A},\mathcal{A^*}}(A):A\text{ is independent}\}$ is a representative set of the circuit-reversal classes.
This proves (1). Similarly, (2) also holds.
\end{proof}
This completes the proof of Theorem~\ref{main2}.
\section{Signatures, the BBY bijection, and the Bernardi bijection}\label{signature}
In this section we will use our theory to recover and generalize the work in \cite{BBY}, \cite{D2}, and \cite{Bernardi}. To do this, we will build the connection between circuit signatures (resp. cocircuit signatures) and external atlases (resp. internal atlases) of the regular matroid $\mathcal{M}$. We will also see how the BBY bijection (resp. the extended BBY bijection) and the Bernardi bijection become a special case of Theorem~\ref{main1} (resp. Theorem~\ref{main2}). In particular, the acyclic signatures used to define the BBY bijection will be generalized to \emph{triangulating signatures}.
\subsection{Signatures and atlases}
For the definitions of circuit signatures $\sigma$, cocircuit signatures $\sigma^*$, and triangulating signatures, see Section~\ref{sign}.
Recall that given a circuit signature $\sigma$, we may construct the external atlas $\mathcal{A_\sigma}$ from $\sigma$ such that for each externally oriented basis $\overrightarrow{B}\in\mathcal{A_\sigma}$, each external arc $\overrightarrow{e}\in\overrightarrow{B}$ is oriented according to the orientation of the fundamental circuit $C(B,e)$ in $\sigma$. Similarly, we may construct the internal atlas $\mathcal{A}^*_{\sigma^*}$.
We now show that all the triangulating atlases can be obtained in this way. Moreover, they must come from triangulating signatures. The following lemma is trivial but useful.
\begin{lemma}\label{funda}
Every circuit of $\mathcal{M}$ is a fundamental circuit $C(B,e)$ for some basis $B$ and some edge $e$. Dually, every cocircuit of $\mathcal{M}$ is a fundamental cocircuit $C^*(B,e)$ for some basis $B$ and some edge $e$.
\end{lemma}
\begin{theorem}[Theorem~\ref{trig-intro}]\label{trig}
\begin{enumerate}
\item The map \begin{align*}
\alpha:\{\text{triangulating circuit sig. of }\mathcal{M}\} & \to \{\text{triangulating external atlases of }\mathcal{M}\} \\
\sigma & \mapsto \mathcal{A_\sigma}
\end{align*}
is a bijection.
\item The map \begin{align*}
\alpha^*:\{\text{triangulating cocircuit sig. of }\mathcal{M}\} & \to \{\text{triangulating internal atlases of }\mathcal{M}\} \\
\sigma^* & \mapsto \mathcal{A}^*_{\sigma^*}
\end{align*}
is a bijection.
\end{enumerate}
\end{theorem}
\begin{proof}
We only prove (1) because the same method can be used to prove (2).
First we check that the atlas $\mathcal{A_\sigma}$ is triangulating when $\sigma$ is triangulating. Assume for contradiction that there exist distinct bases $B_1$ and $B_2$ such that $\overrightarrow{B_1}\cap(-\overrightarrow{B_2})$ has a potential circuit $\overrightarrow{C}$. Then $\overrightarrow{C}\subseteq \overrightarrow{B_1}$ and $-\overrightarrow{C}\subseteq \overrightarrow{B_2}$. By the definition of $\sigma$ being triangulating, $\overrightarrow{C}\in \sigma$ and $-\overrightarrow{C}\in \sigma$, which gives the contradiction.
The map $\alpha$ is injective. Indeed, given two different signatures $\sigma_1$ and $\sigma_2$, there exists a signed circuit $\overrightarrow{C}$ such that $\overrightarrow{C}\in\sigma_1$ and $-\overrightarrow{C}\in\sigma_2$. By Lemma~\ref{funda}, $C$ is a fundamental circuit $C(B, e)$. Then the two externally oriented bases associated to $B$ in $\mathcal{A}_{\sigma_1}$ and in $\mathcal{A}_{\sigma_2}$ have different signs on $e$.
The map $\alpha$ is surjective. Given a triangulating external atlas $\mathcal{A}$, we need to find a triangulating signature $\sigma$ such that $\mathcal{A}=\mathcal{A}_\sigma$. By Lemma~\ref{funda}, any circuit $C$ is a fundamental circuit $C(B, e)$. Then we define $\sigma(C)$ to be the signed circuit $\overrightarrow{C}$ in $\overrightarrow{B}\in\mathcal{A}$. This is well-defined. Indeed, if from two different bases $B_1$ and $B_2$ we get two opposite signed circuits $\overrightarrow{C}$ and $-\overrightarrow{C}$, then $\overrightarrow{C}\subseteq\overrightarrow{B_1}$ and $-\overrightarrow{C}\subseteq\overrightarrow{B_2}$. Hence $\overrightarrow{C}\subseteq\overrightarrow{B_1}\cap(-\overrightarrow{B_2})$, which contradicts $\mathcal{A}$ being triangulating. It is obvious that $\mathcal{A}=\mathcal{A}_\sigma$. It remains to show that $\sigma$ is triangulating. For any $\overrightarrow{B_1}\in\mathcal{A_\sigma}$ and any signed circuit $\overrightarrow{C}\subseteq\overrightarrow{B_1}$, we need to show $\overrightarrow{C}\in \sigma$. If $C$ is a fundamental circuit with respect to $B_1$, then we are done. Otherwise, by Lemma~\ref{funda}, $C=C(B_2,e)$ for some other basis $B_2$. Then either $\overrightarrow{C}\subseteq\overrightarrow{B_2}$ or $-\overrightarrow{C}\subseteq\overrightarrow{B_2}$. The second option is impossible because $\overrightarrow{B_1}\cap(-\overrightarrow{B_2})$ does not contain any signed circuit. Thus $\overrightarrow{C}\subseteq\overrightarrow{B_2}$ and hence $\overrightarrow{C}\in\sigma$.
\end{proof}
\subsection{Acyclic signatures} In this subsection, we prove that acyclic signatures are triangulating. This is essentially \cite[Lemma~2.10 and Lemma~2.9]{D2}. For the reader's convenience, we give a proof here, which consists of two lemmas.
Let $\overrightarrow{e}$ be an arc. We denote by $\overrightarrow{C}(B,\overrightarrow{e})$ the fundamental circuit oriented according to $\overrightarrow{e}$ when $e\in B$, and denote by $\overrightarrow{C^*}(B,\overrightarrow{e})$ the fundamental cocircuit oriented according to $\overrightarrow{e}$ when $e\notin B$.
\begin{lemma}\label{fundamental}
Fix a basis $B$ of $\mathcal{M}$.
(1) For any signed circuit $\overrightarrow{C}$, \[\overrightarrow{C}=\sum_{e\notin B,\overrightarrow{e}\in\overrightarrow{C}}\overrightarrow{C}(B,\overrightarrow{e}).\]
(2) For any signed cocircuit $\overrightarrow{C^*}$, \[\overrightarrow{C^*}=\sum_{e\in B,\overrightarrow{e}\in\overrightarrow{C^*}}\overrightarrow{C^*}(B,\overrightarrow{e}).\]
\end{lemma}
\begin{proof}
We only prove (1) since the method works for (2).
Note that the signed fundamental circuits with respect to $B$ (with an arbitrary orientation chosen for each circuit) form a basis of $\ker_\mathbb{R}(M)$ \cite{SW}. Hence we can write $\overrightarrow{C}\in\ker_\mathbb{R}(M)$ as a linear combination of these fundamental circuits with real coefficients:
\[\overrightarrow{C}=\sum_{e\notin B}k_e\overrightarrow{C}(B,\overrightarrow{e}).\]
By comparing the coefficients of $e\notin B$ on both sides, we get the desired formula.
\end{proof}
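Lemma~\ref{fundamental}(1) can be checked concretely by viewing signed circuits as vectors in $\mathbb{Z}^E$. The following is a minimal sketch on a hypothetical small graph (not taken from the paper): a square $1,2,3,4$ with a chord, and a spanning tree consisting of three of the square's edges.

```python
# Sketch of Lemma "fundamental"(1) on a hypothetical graph (not from the
# paper): vertices 1..4, edges a=(1,2), b=(2,3), c=(3,4), d=(4,1), and a
# chord f=(1,3), with spanning tree B = {a, b, c}.  Signed circuits are
# vectors in Z^E with coordinate order (a, b, c, d, f).

# The directed cycle C traversing 1 -> 3 -> 4 -> 1 uses f, c, d positively.
C = [0, 0, 1, 1, 1]

# Fundamental circuits of the non-tree arcs used by C, oriented to agree
# with C:
# C(B, f): f plus the tree path 3 -> 2 -> 1 (edges b, a reversed),
C_B_f = [-1, -1, 0, 0, 1]
# C(B, d): d plus the tree path 1 -> 2 -> 3 -> 4 (edges a, b, c forward).
C_B_d = [1, 1, 1, 1, 0]

decomposition = [x + y for x, y in zip(C_B_f, C_B_d)]
print(decomposition == C)  # True: the fundamental circuits sum to C
```

The tree edges $a,b$ cancel in the sum, exactly as in the proof: only the coefficients of the non-tree edges survive the comparison.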
\begin{lemma}\label{acyclic-tri}
Let $\sigma$ be an acyclic circuit signature and $\sigma^*$ be an
acyclic cocircuit signature. Then $\sigma$ and $\sigma^*$ are triangulating. (Equivalently, $\mathcal{A}_\sigma$ and $\mathcal{A}^*_{\sigma^*}$ are triangulating atlases.)
\end{lemma}
\begin{proof}
We only give the proof for $\sigma$.
By definition, for any $\overrightarrow{B}\in\mathcal{A_\sigma}$ and any signed circuit $\overrightarrow{C}\subseteq\overrightarrow{B}$, we need to show $\overrightarrow{C}\in \sigma$.
By Lemma~\ref{fundamental}, \[\overrightarrow{C}=\sum_{e\notin B,\overrightarrow{e}\in\overrightarrow{C}}\overrightarrow{C}(B,\overrightarrow{e}).\]
Since $\overrightarrow{B}\in\mathcal{A_\sigma}$, every signed circuit in the right-hand side is in $\sigma$. By the definition of $\sigma$ being acyclic, we have $\overrightarrow{C}\in\sigma$. So, $\sigma$ is triangulating.
\end{proof}
There exists a triangulating circuit signature that is not acyclic. See Section~\ref{nonexample} for an example together with a nice description of the triangulating \emph{cycle} signatures of \emph{graphs}.
\subsection{The BBY bijection and compatible orientations}
Given a pair $(\sigma,\sigma^*)$ of triangulating signatures, we write \[\text{BBY}_{\sigma,\sigma^*}=f_{\mathcal{A_\sigma},\mathcal{A^*_{\sigma^*}}}\text{ and }\varphi_{\sigma,\sigma^*}=\varphi_{\mathcal{A_\sigma},\mathcal{A^*_{\sigma^*}}}.\]
They are exactly the BBY bijection in \cite{BBY} and the extended BBY bijection in \cite{D2} when the two signatures are acyclic. By the results in the previous two subsections, we may apply Theorem~\ref{main1} and Theorem~\ref{main2} to these two maps and hence generalize the counterpart results in \cite{BBY} and \cite{D2}.
Compared with atlases, signatures allow us to talk about compatible orientations; see Section~\ref{sign} for the definition. The maps $\text{BBY}_{\sigma,\sigma^*}$ and $\varphi_{\sigma,\sigma^*}$ are shown in \cite{BBY,D2} to be bijections onto compatible orientations, in addition to orientation classes. Here is an example.
\begin{theorem}\label{BBYTh}\cite[Theorem 1.3.1]{BBY}
Suppose $\sigma$ and $\sigma^*$ are \emph{acyclic} signatures of $\mathcal{M}$.
\begin{enumerate}
\item The map $\text{BBY}_{\sigma,\sigma^*}$ is a bijection between the bases of $\mathcal{M}$ and the $(\sigma,\sigma^*)$-compatible orientations of $\mathcal{M}$.
\item The set of $(\sigma, \sigma^*)$-compatible orientations is a representative set of the circuit-cocircuit reversal classes of $\mathcal{M}$.
\end{enumerate}
\end{theorem}
We will also generalize these results by reformulating Theorem~\ref{main1} and Theorem~\ref{main2} in terms of signatures and compatible orientations. We first prove a lemma.
\begin{lemma}\label{compatibleimage}
Suppose $\sigma$ and $\sigma^*$ are \emph{triangulating} signatures of $\mathcal{M}$. Then for any basis $B$, the orientation $\text{BBY}_{\sigma,\sigma^*}(B)$ is $(\sigma, \sigma^*)$-compatible.
\end{lemma}
\begin{proof}
For any signed circuit $\overrightarrow{C}\subseteq\text{BBY}_{\sigma,\sigma^*}(B)=\overrightarrow{B}\cap\overrightarrow{B^*}$, where $\overrightarrow{B}\in\mathcal{A_\sigma}$ and $\overrightarrow{B^*}\in\mathcal{A^*_{\sigma^*}}$, we have $\overrightarrow{C}\subseteq\overrightarrow{B}$, and hence $\overrightarrow{C}$ is in the signature $\sigma$ because $\sigma$ is triangulating. Similarly, for any signed cocircuit in the orientation $\text{BBY}_{\sigma,\sigma^*}(B)$, it is in $\sigma^*$. So, the orientation $\text{BBY}_{\sigma,\sigma^*}(B)$ is $(\sigma, \sigma^*)$-compatible.
\end{proof}
Now we generalize Theorem~\ref{BBYTh}.
\begin{theorem}\label{tri-theorem}
Suppose $\sigma$ and $\sigma^*$ are \emph{triangulating} signatures of $\mathcal{M}$. \begin{enumerate}
\item (Theorem~\ref{main1-sign}) The map $\text{BBY}_{\sigma,\sigma^*}$ is a bijection between the bases of $\mathcal{M}$ and the $(\sigma,\sigma^*)$-compatible orientations of $\mathcal{M}$.
\item The set of $(\sigma, \sigma^*)$-compatible orientations is a representative set of the circuit-cocircuit reversal classes of $\mathcal{M}$.
\end{enumerate}
\end{theorem}
\begin{proof}
It is a direct consequence of the following three facts. \begin{itemize}
\item By Theorem~\ref{main1}, the image of $\text{BBY}_{\sigma,\sigma^*}$ forms a representative set of the circuit-cocircuit reversal classes.
\item By Lemma~\ref{compatibleimage}, the image of $\text{BBY}_{\sigma,\sigma^*}$ is contained in the set of $(\sigma, \sigma^*)$-compatible orientations.
\item By Lemma~\ref{orientation1}, each circuit-cocircuit reversal class contains at most one $(\sigma, \sigma^*)$-compatible orientation.
\end{itemize}
\end{proof}
Theorem~5.2 in \cite{D2} says that the following result on the extended BBY bijection holds for acyclic signatures. Now we prove that it holds for triangulating signatures.
\begin{theorem}[Theorem~\ref{main2-sign}]\label{tri-theorem2}
Suppose $\sigma$ and $\sigma^*$ are \emph{triangulating} signatures of a regular matroid $\mathcal{M}$ with ground set $E$.
(1) The map
\begin{align*}
\varphi_{\sigma,\sigma^*}:\{\text{orientations of }\mathcal{M}\} & \to \{\text{subsets of } E\} \\
\overrightarrow{O} & \mapsto (\text{BBY}_{\sigma,\sigma^*}^{-1}(\overrightarrow{O^{cp}})\cup \biguplus_{i\in I}C_i)\backslash \biguplus_{j\in J}C_j^*
\end{align*}
is a bijection, where $\overrightarrow{O^{cp}}$ is the unique ($\sigma,\sigma^*$)-compatible orientation obtained by reversing disjoint signed circuits $\{\overrightarrow{C_i}\}_{i\in I}$ and signed cocircuits $\{\overrightarrow{C_j^*}\}_{j\in J}$ in $\overrightarrow{O}$.
(2) The map $\varphi_{\sigma,\sigma^*}$ specializes to the bijection
\begin{align*}
\varphi_{\sigma,\sigma^*}: \{\sigma\text{-compatible orientations}\} & \to \{\text{independent sets}\} \\
\overrightarrow{O} & \mapsto \text{BBY}_{\sigma,\sigma^*}^{-1}(\overrightarrow{O^{cp}})\backslash \biguplus_{j\in J}C_j^*.
\end{align*}
(3) The map $\varphi_{\sigma,\sigma^*}$ specializes to the bijection
\begin{align*}
\varphi_{\sigma,\sigma^*}:\{\sigma^*\text{-compatible orientations}\} & \to \{\text{spanning sets}\} \\
\overrightarrow{O} & \mapsto \text{BBY}_{\sigma,\sigma^*}^{-1}(\overrightarrow{O^{cp}})\cup \biguplus_{i\in I}C_i.
\end{align*}
\end{theorem}
\begin{proof}
(1) This is a direct consequence of Theorem~\ref{main2}(1) and Theorem~\ref{tri-theorem}.
(2)
Let $A=\varphi_{\sigma,\sigma^*}(\overrightarrow{O})$. Then $A$ is an independent set $\Leftrightarrow$ $I=\emptyset$ (by Lemma~\ref{orientation1}) $\Leftrightarrow$ $\overrightarrow{O}$ is $\sigma$-compatible.
(3) The proof is similar to the one of (2).
\end{proof}
The following result is a direct consequence of Theorem~\ref{main2} and Theorem~\ref{tri-theorem2}.
\begin{corollary}
For a \emph{triangulating} signature $\sigma$ (resp. $\sigma^*$), the $\sigma$-compatible (resp. $\sigma^*$-compatible) orientations form a representative set of the circuit reversal classes (resp. cocircuit reversal classes).
\end{corollary}
\begin{Rem}
We can further generalize Theorem~\ref{tri-theorem2}(2)(3) a bit. From the proof of Theorem~\ref{tri-theorem2}(2) (including the preceding lemmas), it is clear that if $\sigma$ is a triangulating signature and $\mathcal{A^*}$ is a dissecting atlas, then $\varphi_{\mathcal{A_\sigma},\mathcal{A^*}}$ specializes to a bijection between $\{\sigma\text{-compatible orientations}\}$ and $\{\text{independent sets}\}$. The dual statement also holds.
\end{Rem}
So far, we have proved every claim in Section~\ref{sign} except Theorem~\ref{triangular}, which we will prove next.
\subsection{Triangulating cycle signatures of graphs}\label{nonexample}
As introduced in Section~\ref{sign}, for graphs, we have a nice description (Theorem~\ref{triangular}) for the triangulating cycle signatures. We will prove the result and use it to check an example where a triangulating cycle signature is not acyclic.
Let $G$ be a graph where multiple edges are allowed (loops are of no interest here). By cycles of $G$, we mean simple cycles. When we add cycles, we view them as vectors in $\mathbb{Z}^E$.
We start with a basic lemma. We could not find a reference, so we give a brief proof.
\begin{lemma}\label{theta}
If the sum of two directed cycles $\overrightarrow{C_1}$ and $\overrightarrow{C_2}$ of $G$ is a directed cycle, then their common edges $C_1\cap C_2$ form a path (which is directed in opposite ways in the two directed cycles).
\end{lemma}
\begin{proof}
Clearly $C_1\cap C_2$ contains a path. Take a maximal path and consider its two endpoints $v_1$ and $v_2$. We put a chip $c$ at $v_1$ and move $c$ along $\overrightarrow{C_1}$. Without loss of generality, we may assume that $c$ leaves the path (and hence leaves $C_2$). We claim that the next vertex of $C_2$ that $c$ reaches is $v_2$, which finishes the proof of the lemma. Indeed, if $c$ reaches a common vertex of $C_1$ and $C_2$ other than $v_2$, then we can move $c$ back to $v_1$ along $\overrightarrow{C_2}$, and hence the route of $c$ forms a directed cycle that is strictly contained in $\overrightarrow{C_1}+\overrightarrow{C_2}$. This contradicts that $\overrightarrow{C_1}+\overrightarrow{C_2}$ is a cycle.
\end{proof}
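Both the vector viewpoint on cycle addition and the conclusion of Lemma~\ref{theta} can be illustrated mechanically. The sketch below uses a hypothetical graph (not from the paper): a square with one chord, with cycles encoded as vectors in $\mathbb{Z}^E$.

```python
# Sketch of Lemma "theta" on a hypothetical graph (not from the paper):
# vertices 1..4, edges a=(1,2), b=(2,3), c=(3,4), d=(4,1), chord f=(1,3).
# Directed cycles are vectors in Z^E, coordinate order (a, b, c, d, f).

C1 = [1, 1, 0, 0, -1]  # triangle 1 -> 2 -> 3 -> 1 (traverses f backwards)
C2 = [0, 0, 1, 1, 1]   # triangle 1 -> 3 -> 4 -> 1 (traverses f forwards)

total = [x + y for x, y in zip(C1, C2)]
print(total)  # [1, 1, 1, 1, 0]: the directed square 1 -> 2 -> 3 -> 4 -> 1

# The common edges of C1 and C2 form the single-edge path {f}, and f is
# traversed in opposite directions by the two cycles, as the lemma predicts.
common = [e for e, (x, y) in enumerate(zip(C1, C2)) if x != 0 and y != 0]
print(common)  # [4]: only the chord f (index 4) is shared
```

Here the common path consists of a single edge; in general the shared edges of the two cycles form a longer path, directed oppositely in the two cycles so that it cancels in the sum.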
Our goal is to prove the following result. The proof relies on a technical lemma, which we state and prove right after the result. The proof (including the technical lemma) is due to Gleb Nenashev.
\begin{theorem}[Theorem~\ref{triangular}]\label{tri-sign}
Let $\sigma$ be a \emph{cycle} signature of a \emph{graph} $G$. Then the following are equivalent.
\begin{enumerate}
\item $\sigma$ is triangulating.
\item For any three directed cycles in $\sigma$, the sum is not zero.
\item For any two directed cycles in $\sigma$, if their sum is a cycle, then the sum is in $\sigma$.
\end{enumerate}
\end{theorem}
\begin{proof}
The equivalence of (2) and (3) is trivial. Without loss of generality, we may assume $G$ is connected.
Next we prove that (1) implies (3). Denote the two directed cycles by $\overrightarrow{C_1}$ and $\overrightarrow{C_2}$, and their sum by $\overrightarrow{C}$. By Lemma~\ref{theta}, $C_1\cap C_2$ is a path $P$. Hence we can get a forest from $C_1\cup C_2$ by removing one edge in $C_1\backslash P$ and one edge in $C_2\backslash P$. We extend the forest to a spanning tree $B$ of $G$. Then $C_1$ and $C_2$ are both fundamental cycles of $G$ with respect to $B$. Consider the externally oriented basis $\overrightarrow{B}\in \mathcal{A}_\sigma$ associated to $B$. Because $\overrightarrow{C_1}, \overrightarrow{C_2}\in\sigma$, we have $\overrightarrow{C_1}, \overrightarrow{C_2}\subseteq \overrightarrow{B}$, and hence $\overrightarrow{C}\subseteq\overrightarrow{B}$. Because $\sigma$ is triangulating, $\overrightarrow{C}\in\sigma$.
The difficult part is (3) implies (1). For any $\overrightarrow{B}\in\mathcal{A_\sigma}$ and any signed circuit $\overrightarrow{C}\subseteq\overrightarrow{B}$, we want to show $\overrightarrow{C}\in\sigma$. By Lemma~\ref{Gleb}, we can write $\overrightarrow{C}$ as the sum of directed fundamental cycles with a complete parenthesization such that each time we add two directed cycles up, the sum is always a directed cycle. Because $\overrightarrow{C}\subseteq\overrightarrow{B}$, all the directed fundamental cycles in the summation are in $\sigma$. Due to (3), $\overrightarrow{C}\in\sigma$.
\end{proof}
\begin{lemma}[Theorem~\ref{tri-sign} continued]\label{Gleb}
Let $B$ be a spanning tree of a connected graph $G$ and $\overrightarrow{C}$ be a directed cycle. Denote by $\overrightarrow{e_1},\cdots,\overrightarrow{e_m}$ the external arcs that appear in $\overrightarrow{C}$ in order (with an arbitrary starting point). By Lemma~\ref{fundamental}, \[\overrightarrow{C}=\sum_{i=1}^m\overrightarrow{C}(B,\overrightarrow{e_i}).\]
Then the summation can be completely parenthesized so that every partial sum produced during the summation is a directed cycle. (For example, $\overrightarrow{C}=(\overrightarrow{C_1}+\overrightarrow{C_2})+(\overrightarrow{C_3}+(\overrightarrow{C_4}+\overrightarrow{C_5}))$ is completely parenthesized, and we require $\overrightarrow{C_1}+\overrightarrow{C_2}$, $\overrightarrow{C_4}+\overrightarrow{C_5}$, and $\overrightarrow{C_3}+(\overrightarrow{C_4}+\overrightarrow{C_5})$ to all be directed cycles.)
\end{lemma}
\begin{proof}
Without loss of generality, we may assume that any two vertices in $G$ are adjacent, because adding an edge to $G$ does not affect the result.
We use induction on $m$. When $m\leq 2$, the statement is trivial. Assume the statement holds for some integer $m\geq 2$, and we need to show it holds for $m+1$.
Denote $\overrightarrow{C}$ by $(\overrightarrow{e_1},\overrightarrow{P_1},\overrightarrow{e_2},\overrightarrow{P_2}, \cdots,\overrightarrow{e_{m+1}}, \overrightarrow{P_{m+1}})$, where $\overrightarrow{P_i}\subseteq\overrightarrow{C}$ is the directed (internal) path connecting $\overrightarrow{e_i}$ and $\overrightarrow{e_{i+1}}$. See Figure~\ref{Gleb-fig}.
\begin{figure}
\caption{The pictures used in Lemma~\ref{Gleb}}
\label{Gleb-fig}
\end{figure}
We denote the vertices in an object by $V(\text{object})$. The set $V(\overrightarrow{P_i})$ includes the two endpoints of the path. When $\overrightarrow{P_i}$ contains no arc, we define $V(\overrightarrow{P_i})$ to be the head of $\overrightarrow{e_i}$, which is also the tail of $\overrightarrow{e_{i+1}}$.
We take a vertex $r$ of $G$, viewed as the root of the tree $B$. Define the \emph{height} of a vertex $v$ to be the number of edges in the unique path in $B$ connecting $v$ and $r$. For an (internal) path $\overrightarrow{P_i}$, there exists a unique vertex in $\overrightarrow{P_i}$ with the \emph{minimum} height. We denote this vertex by $r_i$ and define the \emph{height} of $\overrightarrow{P_i}$ to be the height of $r_i$. Let $\overrightarrow{P_k}$ be a path having the \emph{maximum} height among all $\overrightarrow{P_i}$. We remove the vertex $r_k(\neq r)$ together with the incident edges from the tree $B$ and denote the connected component containing $r$ by $B'$. Then $B'$ is a tree not containing any vertex in $\overrightarrow{P_k}$ but containing the vertices in $V(C)\backslash V(\overrightarrow{P_k})$. We will see that the construction of $\overrightarrow{P_k}$ and $B'$ is crucial to our proof.
Without loss of generality, we may assume $1<k<m+1$. Let $\overrightarrow{e_k}'$ be the arc directed from the tail of $\overrightarrow{e_k}$ to the head of $\overrightarrow{e_{k+1}}$. Denote by $\overrightarrow{C_0}$ the directed cycle $(\overrightarrow{e_k},\overrightarrow{P_k},\overrightarrow{e_{k+1}},-\overrightarrow{e_k}')$. Let $\overrightarrow{C}'=\overrightarrow{C}-\overrightarrow{C_0}$. Note that $\overrightarrow{C}'$ is the directed cycle obtained from $\overrightarrow{C}$ by replacing the path $(\overrightarrow{e_k},\overrightarrow{P_k},\overrightarrow{e_{k+1}})$ with the arc $\overrightarrow{e_k}'$. By Lemma~\ref{fundamental}, we have
\[\overrightarrow{C}'=\sum_{i=1}^{k-1}\overrightarrow{C}(B,\overrightarrow{e_i})+\overrightarrow{C}(B,\overrightarrow{e_k}')+\sum_{i=k+2}^{m+1}\overrightarrow{C}(B,\overrightarrow{e_i}).\]
Now we apply the induction hypothesis to $\overrightarrow{C}'$ and get a way to completely parenthesize the summation so that the parenthesization has the desired property for $\overrightarrow{C}'$.
We rewrite $\overrightarrow{C}$ as
\begin{align*}
\overrightarrow{C}=\overrightarrow{C}'+\overrightarrow{C_0} & = \sum_{i=1}^{k-1}\overrightarrow{C}(B,\overrightarrow{e_i})+(\overrightarrow{C}(B,\overrightarrow{e_k}')+\overrightarrow{C_0})+\sum_{i=k+2}^{m+1}\overrightarrow{C}(B,\overrightarrow{e_i}) \\
& = \sum_{i=1}^{k-1}\overrightarrow{C}(B,\overrightarrow{e_i})+(\overrightarrow{C}(B,\overrightarrow{e_k})+\overrightarrow{C}(B,\overrightarrow{e_{k+1}}))+\sum_{i=k+2}^{m+1}\overrightarrow{C}(B,\overrightarrow{e_i}).
\end{align*}
We completely parenthesize the summation for $\overrightarrow{C}$ in the same way as we just did for $\overrightarrow{C}'$ by adding up $(\overrightarrow{C}(B,\overrightarrow{e_k})+\overrightarrow{C}(B,\overrightarrow{e_{k+1}}))$ first and then treating it as the summand $\overrightarrow{C}(B,\overrightarrow{e_k}')$ in $\overrightarrow{C}'$.
We claim this gives us the desired parenthesization. Indeed, for any new directed cycle $\overrightarrow{D}$ produced in the summation of $\overrightarrow{C}$, there are two cases. \begin{itemize}
\item If $\overrightarrow{D}$ does not use $\overrightarrow{e_k}$ (and hence does not use $\overrightarrow{e_{k+1}}$), then $\overrightarrow{D}$ also appears in the summation of $\overrightarrow{C}'$. Thus $\overrightarrow{D}$ is a directed cycle.
\item If $\overrightarrow{D}$ uses $\overrightarrow{e_k}$, then the corresponding term in the summation of $\overrightarrow{C}'$ is $\overrightarrow{D}'=\overrightarrow{D}-\overrightarrow{C_0}$, where $\overrightarrow{D}'$ could be $\overrightarrow{C}(B,\overrightarrow{e_k}')$ or a newly produced directed cycle containing $\overrightarrow{e_k}'$. Note that the endpoints of all the external edges in $\overrightarrow{C}'$ are in $B'$. So all the fundamental cycles in the summation of $\overrightarrow{C}'$ only use vertices in $B'$, and hence $\overrightarrow{D}'$ does not use any vertex in $\overrightarrow{P_k}$. Thus $\overrightarrow{D}=\overrightarrow{D}'+\overrightarrow{C_0}$ is a directed cycle.
\end{itemize}
\end{proof}
Now we present an example showing that a circuit signature being acyclic is strictly stronger than being triangulating. (We used a computer program to find the example.)
\begin{proposition}\label{nonex}
There exists a planar graph that admits a triangulating but not acyclic cycle signature.
\end{proposition}
\begin{proof}
We remove one edge in the complete graph on $5$ vertices and denote the new graph by $G$.
\begin{figure}
\caption{The graph $G$ used in Proposition~\ref{nonex}}
\label{nonfig}
\end{figure}
The graph $G$ is planar, which allows us to present its directed cycles using regions. We denote by $C_i$ the cycle that bounds the region labeled by $i$ in Figure~\ref{nonfig}, where $i=1,2,3,4,5$. By orienting them counterclockwise, we obtain five directed cycles $\overrightarrow{C_1},\cdots, \overrightarrow{C_5}$.
Let the cycle signature $\sigma$ be the set of the following directed cycles. The counterclockwise ones are
$2,3,5,23,25,123,235,245,345,1235,2345$, and the clockwise ones are $-1,-4,-12,-13,-34,-45,-125,-134,-234,-1234,-12345$, where ``$23$'' means $\overrightarrow{C_2}+\overrightarrow{C_3}$, ``$-234$'' means $-\overrightarrow{C_2}-\overrightarrow{C_3}-\overrightarrow{C_4}$, etc. There are twenty-two cycles in all.
The signature $\sigma$ is not acyclic because the sum of the directed cycles $123$, $245$, $-234$, and $-125$ is zero.
It is straightforward to check that $\sigma$ is triangulating by Theorem~\ref{tri-sign}(2). (This can be done by hand in minutes. We remark that it is much harder to check that $\sigma$ or $\mathcal{A}_\sigma$ is triangulating directly from the definition, since there are $75$ spanning trees.)
\end{proof}
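Both checks in the proof above can be automated. The sketch below encodes each of the twenty-two directed cycles of $\sigma$ by its coefficient vector over the counterclockwise region cycles $\overrightarrow{C_1},\dots,\overrightarrow{C_5}$; since the five bounded regions form a basis of the cycle space, a sum of cycles vanishes if and only if the corresponding sum of region vectors does. (The encoding as $0/\pm1$ region vectors is our bookkeeping device, not notation from the paper.)

```python
from itertools import combinations

# The cycles of Proposition "nonex", written as strings of region labels:
# "245" means C2 + C4 + C5 oriented counterclockwise; the cw list is negated.
ccw = ["2", "3", "5", "23", "25", "123", "235", "245", "345", "1235", "2345"]
cw = ["1", "4", "12", "13", "34", "45", "125", "134", "234", "1234", "12345"]

def vec(word, sign):
    # Coefficient vector over (C1, ..., C5); bool * int gives 0 or +-1.
    return tuple(sign * (str(i) in word) for i in range(1, 6))

sigma = [vec(w, +1) for w in ccw] + [vec(w, -1) for w in cw]
print(len(sigma))  # 22 directed cycles in total

add = lambda *vs: tuple(map(sum, zip(*vs)))

# Non-acyclicity witness: 123 + 245 + (-234) + (-125) = 0.
witness = add(vec("123", 1), vec("245", 1), vec("234", -1), vec("125", -1))
print(witness == (0, 0, 0, 0, 0))  # True

# Condition (2) of Theorem "tri-sign": no three members of sigma sum to zero.
bad = [t for t in combinations(sigma, 3) if add(*t) == (0, 0, 0, 0, 0)]
print(len(bad))  # 0, so sigma is triangulating
```

The brute force over $\binom{22}{3}=1540$ triples replaces the "minutes by hand" check, and is far cheaper than testing all $75$ spanning trees against the definition.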
\subsection{The Bernardi bijection}\label{Bernardi}
We will apply our theory to recover and generalize some features of the Bernardi bijection in this subsection. For the definition of the Bernardi bijection $f_{\mathcal{A}_\text{B},\mathcal{A}_q^*}$, see Example~\ref{B atlas}.
Note that the internal atlas $\mathcal{A}_q^*$ is a special case of $\mathcal{A^*_{\sigma^*}}$, where $\sigma^*$ is an acyclic cocycle signature \cite[Example 5.1.5]{Yuen2}, so $\mathcal{A}_q^*$ is triangulating. The external atlas $\mathcal{A}_\text{B}$ is not triangulating in general (Remark~\ref{rem-nonexample}). However, it is always dissecting. This fact was discovered and proved by Kalm\'an and T\'othm\'er\'esz \cite{KT1,KT2} in a different language; see Section~\ref{intro-motivation}. For readers' convenience, we give a proof here.
\begin{lemma}\cite{KT1,KT2}\label{bernardidissect}
The external atlas $\mathcal{A}_\text{B}$ is dissecting.
\end{lemma}
\begin{proof}
By definition, we need to check $\overrightarrow{F}=\overrightarrow{B_1}\cap(-\overrightarrow{B_2})$ has a potential cocircuit, where $B_1$ and $B_2$ are two different spanning trees.
Consider the first edge $e_0$ where the Bernardi processes for $B_1$ and $B_2$ differ. Without loss of generality, we may assume $e_0\in B_1$ and $e_0\notin B_2$. Consider the fundamental cocircuit $C^*$ of $e_0$ with respect to $B_1$. We orient it away from $q$ and get the signed cocircuit $\overrightarrow{C^*}$. We will prove that $\overrightarrow{C^*}$ is a potential cocircuit of $\overrightarrow{F}$. See Figure~\ref{lemmapic}.
\begin{figure}
\caption{The figure used in the proof of Lemma~\ref{bernardidissect}}
\label{lemmapic}
\end{figure}
Note that the Bernardi tour for $B_1$ uses $e_0$ twice. When it visits $e_0$ the second time, every external edge $f$ in $C^*$ has been cut at least once. Recall that the notation $\overrightarrow{B_1}|_f$ means (the set of) the arc $\overrightarrow{f}$ induced by the tour. There are two cases.
(a) If the tour cuts $f$ before its first visit to $e_0$, then $-\overrightarrow{B_1}|_f\subseteq \overrightarrow{C^*}$.
(b) If the tour cuts $f$ after its first visit to $e_0$, then $\overrightarrow{B_1}|_f\subseteq \overrightarrow{C^*}$. Hence $\overrightarrow{F}|_f\subseteq \overrightarrow{C^*}$.
Now we look at the Bernardi process for $B_2$. We know the following two cases.
(c) For any edge $f$ in (a), we have $\overrightarrow{B_2}|_f=\overrightarrow{B_1}|_f$ because the two tours coincide until they reach $e_0$. Hence $\overrightarrow{F}|_f=\emptyset\subseteq \overrightarrow{C^*}$.
(d) For the edge $e_0$, which is external with respect to $B_2$, we have $-\overrightarrow{B_2}|_{e_0}\subseteq \overrightarrow{C^*}$, and hence $\overrightarrow{F}|_{e_0}\subseteq \overrightarrow{C^*}$.
By (b), (c), and (d), the signed cocircuit $\overrightarrow{C^*}$ is a potential cocircuit of $\overrightarrow{F}$.
\end{proof}
Now we may apply Theorem~\ref{main1} and Theorem~\ref{main2} to $f_{\mathcal{A}_\text{B},\mathcal{A}_q^*}$ and get the following results, where Corollary~\ref{Bernardi-extend}(1) recovers the bijectivity of $\overline{f}_{\mathcal{A}_\text{B},\mathcal{A}_q^*}$ proved in \cite{Bernardi}, (2) extends it, and (3) generalizes it.
\begin{corollary}\label{Bernardi-extend} Let $G$ be a connected ribbon graph.
\begin{enumerate}
\item The Bernardi map $f_{\mathcal{A}_\text{B},\mathcal{A}_q^*}$ induces a bijection $\overline{f}_{\mathcal{A}_\text{B},\mathcal{A}_q^*}:B\mapsto [\beta_{(q,e)}]$ between the spanning trees of $G$ and the cycle-cocycle reversal classes of $G$.
\item The Bernardi map $f_{\mathcal{A}_\text{B},\mathcal{A}_q^*}$ can be extended to a bijection $\varphi_{\mathcal{A}_\text{B},\mathcal{A}_q^*}$ between subgraphs and orientations in the sense of Theorem~\ref{main2}.
\item Let $\sigma^*$ be any triangulating cocycle signature. Then the modified Bernardi map $f_{\mathcal{A}_\text{B},\mathcal{A}_{\sigma^*}^*}$ still has the properties (1) and (2).
\end{enumerate}
\end{corollary}
\section{Lawrence polytopes}\label{Lawrence}
In this section, we will prove Theorem~\ref{3-fold} and Theorem~\ref{main3} introduced in Section~\ref{Lawrence-intro} together with some basic properties of Lawrence polytopes.
For the definitions, see Section~\ref{Lawrence-intro}. Here we recall that $M_{r\times n}$ is a totally unimodular matrix representing a \emph{loopless} regular matroid $\mathcal{M}$. The Lawrence matrix is
\[\begin{pmatrix} M_{r\times n} & {\bf 0} \\ I_{n\times n} & I_{n\times n} \end{pmatrix},\]
whose columns are denoted by $P_1, \cdots, P_n, P_{-1}, \cdots, P_{-n}$ in order. The \emph{Lawrence polytope} $\mathcal{P}\subseteq\mathbb{R}^{n+r}$ is the convex hull of the points $P_1, \cdots, P_n, P_{-1}, \cdots, P_{-n}$.
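The block structure is easy to assemble numerically. Below is a minimal sketch using a hypothetical totally unimodular matrix $M$ (the reduced incidence matrix of a directed triangle, $r=2$, $n=3$; this example is ours, not the paper's); it also confirms the affine-subspace property of Lemma~\ref{affinesubspace}.

```python
import numpy as np

# Hypothetical totally unimodular M: reduced incidence matrix of a directed
# triangle with edges 1->2, 2->3, 3->1 (rows = vertices 1 and 2).
M = np.array([[1, 0, -1],
              [-1, 1, 0]])
r, n = M.shape

# Lawrence matrix [[M, 0], [I, I]]; its 2n columns are P_1..P_n, P_-1..P_-n.
L = np.block([[M, np.zeros((r, n), dtype=int)],
              [np.eye(n, dtype=int), np.eye(n, dtype=int)]])
print(L.shape)  # (5, 6), i.e. (r + n, 2n)

# The last n coordinates (the y-coordinates) of every column sum to 1, so
# all the points P_i lie in the affine subspace sum(y_i) = 1.
print(L[r:, :].sum(axis=0))  # [1 1 1 1 1 1]
```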
Due to duality, we will only prove Theorem~\ref{3-fold} and Theorem~\ref{main3} for $\mathcal{P}$. The proof is long, so we divide the section into three parts.
\subsection{A single maximal simplex of the Lawrence polytope}
The target of this subsection is to characterize maximal simplices of the Lawrence polytope $\mathcal{P}$.
We start with three basic lemmas, and the proofs are omitted. We denote by $(x_1, \cdots, x_r, y_1, \cdots, y_n)$ the coordinates of the Euclidean space $\mathbb{R}^{n+r}$ containing $\mathcal{P}$.
\begin{lemma}\label{affinesubspace}
The Lawrence polytope $\mathcal{P}$ is in the affine subspace $\sum\limits_{i=1}^{n}y_{i}=1$, and the affine subspace does not contain the origin and is of dimension $n+r-1$.
\end{lemma}
\begin{lemma}\label{affinesimplex}
The convex hull of $k+1$ points $Q_1, \cdots, Q_{k+1}$ in an affine subspace that does not pass through the origin is a $k$-dimensional simplex if and only if the corresponding vectors $Q_1, \cdots, Q_{k+1}$ are linearly independent.
\end{lemma}
\begin{lemma}\label{linear}
The linear combination $\sum\limits_{i=1}^{n}(a_iP_i+a_{-i}P_{-i})$ is zero if and only if $a_i=-a_{-i}$ for all $i$ and $\sum\limits_{i=1}^{n} a_iM_i=0$, where $M_i$ is the $i$-th column of $M$.
\end{lemma}
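As an illustration of Lemma~\ref{linear}, consider the toy example (ours) with $M=(1\ 1)$, $r=1$, $n=2$, whose Lawrence matrix has columns $P_1,P_2,P_{-1},P_{-2}$.

```latex
% With M = (1 1), choosing a_1 = 1 and a_2 = -1 gives a_1 M_1 + a_2 M_2 = 0,
% so the lemma forces a_{-1} = -1 and a_{-2} = 1; indeed
\[
P_1-P_{-1}-P_2+P_{-2}
=\begin{pmatrix}1\\1\\0\end{pmatrix}
-\begin{pmatrix}0\\1\\0\end{pmatrix}
-\begin{pmatrix}1\\0\\1\end{pmatrix}
+\begin{pmatrix}0\\0\\1\end{pmatrix}
=\mathbf{0}.
\]
```

This single linear relation (up to scaling) reflects the fact that $\{e_1,e_2\}$ is the only circuit of $U_{1,2}$.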
\begin{proposition}\label{vertex}
The vertices of $\mathcal{P}$ are the points $P_1, \cdots, P_n, P_{-1}, \cdots, P_{-n}$.
\end{proposition}
\begin{proof}
It suffices to show that any point $P_i$, where $i$ could be positive or negative, cannot be expressed as a convex combination of the other points. Assume by contradiction that we can do so for some $P_i$. Then by Lemma~\ref{linear}, $P_{-i}$ must have coefficient one in the convex combination, and hence $P_i=P_{-i}$. This contradicts our assumption that $\mathcal{M}$ is loopless.
\end{proof}
Recall that the arcs of $\mathcal{M}$ are denoted by $\overrightarrow{e_1}, \cdots, \overrightarrow{e_n}$ and $\overrightarrow{e_{-1}}, \cdots, \overrightarrow{e_{-n}}$. We denote the underlying edge of the arc $\overrightarrow{e_{i}}$ by $e_i$, where $i>0$. We define the bijection
\begin{align*}
\chi:\{\text{vertices of }\mathcal{P}\} & \to \{\text{arcs of }\mathcal{M}\} \\
P_i & \mapsto \overrightarrow{e_i}
\end{align*}
We need the following lemma to characterize the maximal simplices of $\mathcal{P}$.
\begin{lemma}\label{linear2} Let $I\subseteq\{1, \cdots, n, -1, \cdots, -n\}$. Then the vectors $\{P_i: i\in I\}$ are linearly dependent if and only if there exists a bioriented circuit in $\{\overrightarrow{e_i}: i\in I\}$, where a bioriented circuit is the union of two opposite signed circuits (as sets of arcs).
\end{lemma}
\begin{proof}
This is due to Lemma~\ref{linear} and the fact that a collection of columns $M_i$ of $M$ is linearly dependent if and only if the corresponding edges $e_i$ contain a circuit.
\end{proof}
\begin{corollary}\label{dim}
(1) The Lawrence polytope $\mathcal{P}$ has dimension $n+r-1$.
(2) The map $\chi$ induces a bijection (still denoted by $\chi$)
\begin{align*}
\chi:\{\text{maximal simplices of }\mathcal{P}\} & \to \{\text{externally oriented bases of }\mathcal{M}\} \\
\begin{gathered}
\text{a maximal simplex}\\
\text{with vertices }\{P_i:i\in I\}
\end{gathered} & \mapsto \text{the fourientation }\{\chi(P_i):i\in I\}.
\end{align*}
\end{corollary}
\begin{proof}
Clearly if a set $\overrightarrow{F}$ of arcs of $\mathcal{M}$ does not contain a bioriented circuit, then its cardinality satisfies $|\overrightarrow{F}|\leq n+r$, and the equality holds if and only if $\overrightarrow{F}$ is an externally oriented basis of $\mathcal{M}$.
By Lemma~\ref{linear2}, Lemma~\ref{affinesubspace}, and Lemma~\ref{affinesimplex}, the corollary holds.
\end{proof}
This finishes the proof of Theorem~\ref{3-fold}(1)(2).
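To illustrate Corollary~\ref{dim}, take again the toy matroid $U_{1,2}$ with $M=(1\ 1)$ (our example, not from the text). The only circuit is $\{e_1,e_2\}$, so the only bioriented circuit is the full arc set $\{\overrightarrow{e_1},\overrightarrow{e_{-1}},\overrightarrow{e_2},\overrightarrow{e_{-2}}\}$, and hence every $3$-element subset of the vertices spans a maximal simplex.

```latex
% The four triangles of the quadrilateral P correspond under chi to the four
% externally oriented bases of U_{1,2} (basis edge bioriented, non-basis edge
% one-way oriented):
\[
\{P_1,P_{-1},P_2\},\ \{P_1,P_{-1},P_{-2}\}
\ \longleftrightarrow\ \text{basis }\{e_1\}\text{ with }e_2\text{ oriented as }
\overrightarrow{e_2}\text{ or }\overrightarrow{e_{-2}},
\]
\[
\{P_2,P_{-2},P_1\},\ \{P_2,P_{-2},P_{-1}\}
\ \longleftrightarrow\ \text{basis }\{e_2\}\text{ with }e_1\text{ oriented as }
\overrightarrow{e_1}\text{ or }\overrightarrow{e_{-1}}.
\]
```

In particular each triangle has $n+r=3$ vertices, as the cardinality count in Corollary~\ref{dim} predicts.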
\subsection{Two maximal simplices of the Lawrence polytope}
To show Theorem~\ref{3-fold}(3), which characterizes the triangulations and dissections of $\mathcal{P}$, we first prove Proposition~\ref{local}, which characterizes when two maximal simplices satisfy (II) and (III) in Definition~\ref{tri-diss-def}, respectively.
Note that when we say two simplices or two fourientations, they might be identical.
We need some preparations.
\begin{definition}
(1) If the vertices of a simplex $S$ are some of the vertices of $\mathcal{P}$, then $S$ is called a \emph{simplex of $\mathcal{P}$}.
(2) The relative interior of $S$ is denoted by $S^\circ$.
\end{definition}
The following lemma is basic, and the proof is omitted.
\begin{lemma}\label{unique}
Let $S$ be a simplex and $x\in S$. Then the point $x$ can be uniquely written as a convex combination of the vertices of $S$. Moreover,
$x\in S^\circ$ if and only if each vertex of $S$ has a nonzero coefficient in the convex combination.
\end{lemma}
The following lemma gives an equivalent description of (III) in Definition~\ref{tri-diss-def}. The book \cite[Definition 2.3.1]{DRS} uses it as a definition. The proof is omitted.
\begin{lemma}\label{tri-lemma}
Let $S_1$ and $S_2$ be two maximal simplices of $\mathcal{P}$. Then $S_1$ and $S_2$ intersect in a common face if and only if for any face $A_1$ of $S_1$ and any face $A_2$ of $S_2$ such that $A_1^\circ\cap A_2^\circ\neq\emptyset$, we have $A_1=A_2$.
\end{lemma}
We aim to describe the condition $A_1^\circ\cap A_2^\circ\neq\emptyset$ in terms of fourientations (Lemma~\ref{hard lemma}). Before this, we introduce a piece of notation and a simple lemma, whose proof is omitted. For a fourientation $\overrightarrow{F}$, we let
\[E(\overrightarrow{F})=\{e\in E: \overrightarrow{F}|_e\neq\emptyset\}.\]
\begin{lemma}\label{stupid}
Let $\overrightarrow{F_1}$ and $\overrightarrow{F_2}$ be two fourientations of $\mathcal{M}$ such that $E(\overrightarrow{F_1})=E(\overrightarrow{F_2})$. Then $\overrightarrow{F_1}\neq\overrightarrow{F_2}$ if and only if $\overrightarrow{F}=\overrightarrow{F_1}\cap(-\overrightarrow{F_2})$ contains a one-way oriented edge.
\end{lemma}
\begin{lemma}\label{hard lemma}
Assume $S_1$ and $S_2$ are two simplices of $\mathcal{P}$ (not necessarily maximal). Let $\overrightarrow{F_k}$ be the fourientation $\chi(S_k)$ for $k=1,2$, and denote $\overrightarrow{F}=\overrightarrow{F_1}\cap(-\overrightarrow{F_2})$. Then $S_1^\circ\cap S_2^\circ\neq\emptyset$ if and only if $E(\overrightarrow{F_1})=E(\overrightarrow{F_2})$ and any one-way oriented edge in $\overrightarrow{F}$ belongs to a potential circuit of $\overrightarrow{F}$.
\end{lemma}
\begin{proof}
By Lemma~\ref{stupid}, when $S_1=S_2$, the statement holds. We only consider the case $S_1\neq S_2$.
We first prove the ``only if'' part. Let $x\in S_1^\circ\cap S_2^\circ$. Throughout the proof, $k\in\{1,2\}$. We denote by $F_k$ the set of \emph{indices} of the edges in $E(\overrightarrow{F_k})$ (so $F_k\subseteq\{1,\cdots, n\}$).
By Lemma~\ref{unique}, we may write \[x=\sum_{i\in F_1}(w_{1i}^+P_i+w_{1i}^-P_{-i})=\sum_{i\in F_2}(w_{2i}^+P_i+w_{2i}^-P_{-i}),\]where the nonnegative coefficients $w_{ki}^+, w_{ki}^-$ sum up to $1$ for each $k$, and one of $w_{ki}^+$ and $w_{ki}^-$ is zero precisely when the edge $e_i$ is one-way oriented in $\overrightarrow{F_k}$.
Now we compare the two convex combinations of $x$ (recall Lemma~\ref{linear}). From the lower half of the Lawrence matrix, we get $F_1=F_2$ and $w_{1i}^++ w_{1i}^-=w_{2i}^++w_{2i}^-$ for $i\in F_1$. Denote $w_i=w_{1i}^++ w_{1i}^-$. It is clear that $\sum\limits_{i\in F_1}w_i=1$ and each summand $w_i>0$.
Now we focus on the upper half of the Lawrence matrix. The computational results are summarized in Table~\ref{table3}, which compares the two convex combinations of $x$ restricted to the top $r$ entries. Denote by $a_{ki}\in\mathbb{R}^r$ the top $r$ entries of the vector $w_{ki}^+P_i+w_{ki}^-P_{-i}$ for $i\in F_1$. For every $i\in F_1$, according to the status of the edge $e_i$ in $\overrightarrow{F}$, there are $4$ possible types given in the first column of the table. We omit ``$|_{e_i}$'' after $\overrightarrow{F}$ and $\overrightarrow{F_k}$ (e.g.\ $\overrightarrow{F}=\updownarrow$ means $\overrightarrow{F}|_{e_i}=\updownarrow$). These $4$ types are further divided into $9$ types according to how $\overrightarrow{F_1}$ and $\overrightarrow{F_2}$ orient $e_i$. Note that neither $\overrightarrow{F_1}$ nor $\overrightarrow{F_2}$ can be empty over $e_i$. Then for each of the $9$ types, we know whether $P_i$ and $P_{-i}$ are in $S_k$ because $\overrightarrow{F_k}=\chi(S_k)$. For example, when $e_i$ is of the 4th type, $P_i\in S_1$ and $P_{-i}\notin S_1$, and hence $w_{1i}^+P_i+w_{1i}^-P_{-i}=w_iP_i$. So, the vector $a_{1i}$, which consists of the top $r$ entries of $w_{1i}^+P_i+w_{1i}^-P_{-i}$, is $w_iM_i$ for the 4th type. Similarly, one can get all the other results in the table. Because $S_1\neq S_2$ and $F_1=F_2$, by Lemma~\ref{stupid}, there exists an edge in rows $4$ to $9$.
\begin{table}
\centering
\bgroup
\def1.5{1.5}
\begin{tabular}{ |m{2.2cm}|m{4cm}|c|c|c| }
\hline
type of $e_i$ in terms of $\overrightarrow{F}$ & type of $e_i$ in terms of $\overrightarrow{F_k}$ and label & $a_{1i}$ & $a_{2i}$ & $a_{1i}-a_{2i}$\\
\hline
$\overrightarrow{F}=\updownarrow$ & 1: $\overrightarrow{F_1}=\overrightarrow{F_2}=\updownarrow$ & $w_{1i}^+M_i$ & $w_{2i}^+M_i$ & $(w_{1i}^+-w_{2i}^+)M_i$ \\
\hline
\multirow{2}{2.2cm}{$\overrightarrow{F}=\emptyset$} &2: $\overrightarrow{F_1}=\overrightarrow{F_2}=\overrightarrow{e_i}$ & $w_iM_i$ & $w_iM_i$ & $0$ \\
\cline{2-5}
&3: $\overrightarrow{F_1}=\overrightarrow{F_2}=\overrightarrow{e_{-i}}$ & $0$ & $0$ & $0$ \\
\hline
\multirow{3}{2.2cm}{$\overrightarrow{F}=\overrightarrow{e_i}$} & 4: $\overrightarrow{F_1}=\overrightarrow{e_i}$, $\overrightarrow{F_2}=\updownarrow$ & $w_iM_i$ & $w_{2i}^+M_i$ & $w_{2i}^-M_i$ \\
\cline{2-5}
& 5: $\overrightarrow{F_1}=\updownarrow$, $\overrightarrow{F_2}=\overrightarrow{e_{-i}}$ & $w_{1i}^+M_i$ & $0$ & $w_{1i}^+M_i$ \\
\cline{2-5}
& 6: $\overrightarrow{F_1}=\overrightarrow{e_i}$, $\overrightarrow{F_2}=\overrightarrow{e_{-i}}$ & $w_iM_i$ & $0$ & $w_iM_i$ \\
\hline
\multirow{3}{2.2cm}{$\overrightarrow{F}=\overrightarrow{e_{-i}}$} & 7: $\overrightarrow{F_1}=\overrightarrow{e_{-i}}$, $\overrightarrow{F_2}=\updownarrow$ & $0$ & $w_{2i}^+M_i$ & $-w_{2i}^+M_i$ \\
\cline{2-5}
& 8: $\overrightarrow{F_1}=\updownarrow$, $\overrightarrow{F_2}=\overrightarrow{e_i}$ & $w_{1i}^+M_i$ & $w_iM_i$ & $-w_{1i}^-M_i$ \\
\cline{2-5}
& 9: $\overrightarrow{F_1}=\overrightarrow{e_{-i}}$, $\overrightarrow{F_2}=\overrightarrow{e_i}$ & $0$ & $w_iM_i$ & $-w_iM_i$ \\
\hline
\end{tabular}
\egroup
\caption{The computational results used in Lemma~\ref{hard lemma}. Here $M_i$ is the $i$-th column of $M$, and $a_{ki}\in\mathbb{R}^r$ is the top $r$ entries of the vector $w_{ki}^+P_i+w_{ki}^-P_{-i}$.}
\label{table3}
\end{table}
By definition, \[\sum_{i\in F_1}(a_{1i}-a_{2i})=0.\]
Denote the coefficients of $M_i$ in the last column of the table by $u_i\in\mathbb{R}$ and let $u\in\mathbb{R}^n$ be the column vector $(u_i)_{1\leq i\leq n}$, where we set $u_i=0$ for $i\in\{1,\cdots,n\}\backslash F_1$, so
\[Mu=\sum_{i\in F_1}u_iM_i=\sum_{i\in F_1}(a_{1i}-a_{2i})=0.\]
By applying Lemma~\ref{conformal} to $u\in\ker_\mathbb{R}(M)$, we may decompose $u$ into a linear combination $\sum k_j\overrightarrow{C_j}$ of signed circuits, where $k_j>0$ and for each edge $e_i$ of each $C_j$, the sign of $e_i$ in $\overrightarrow{C_j}$ agrees with the sign of $u_i$. However, by the table, for each edge $e_i$ that is one-way oriented in $\overrightarrow{F}$ (rows $4$ to $9$), the sign of $u_i$ agrees with $\overrightarrow{F}|_{e_i}$ (comparing the first column with the last column). So, as sets of arcs, $\overrightarrow{C_j}\subseteq\overrightarrow{F}$. Hence any one-way oriented edge in $\overrightarrow{F}$ belongs to a potential circuit $\overrightarrow{C_j}$ of $\overrightarrow{F}$.
For the ``if'' part, our proof strategy is to reverse the proof of the ``only if'' part. Note that the second column of the table still lists all the possible types of the edges $e_i$ with $i\in F_1\,(=F_2)$, and there exists at least one edge in rows $4$ to $9$ by Lemma~\ref{stupid}. Because any one-way oriented edge in $\overrightarrow{F}$ belongs to a potential circuit of $\overrightarrow{F}$, by adding these signed circuits up, we get a vector $u=(u_i)\in\ker_\mathbb{R}(M)$ such that for each edge $e_i$ that is one-way oriented in $\overrightarrow{F}$, the sign of $u_i$ agrees with $\overrightarrow{F}|_{e_i}$. Intuitively, $u$ agrees with the sign pattern (including zero) of the last column of the table from row 2 to row 9, except that the weights ``$w$'' are not normalized. Then for each $i\in F_1$, we write $u_iM_i$ as $\Tilde{a}_{1i}-\Tilde{a}_{2i}$ such that $\Tilde{a}_{ki}$ is a multiple of $M_i$ and agrees with the sign pattern (including zero) of $a_{ki}$ in the table, which is always feasible because the coefficients of $M_i$ are allowed to be larger than $1$. Then we find nonnegative numbers $\Tilde{w}_{ki}^+$ and $\Tilde{w}_{ki}^-$ such that \begin{enumerate}
\item $\Tilde{a}_{ki}$ is the top $r$ entries of the vector $\Tilde{w}_{ki}^+P_i+\Tilde{w}_{ki}^-P_{-i}$;
\item $\Tilde{w}_{1i}^++ \Tilde{w}_{1i}^-=\Tilde{w}_{2i}^++\Tilde{w}_{2i}^-$;
\item the sign pattern agrees with the second column of the table (i.e., $\Tilde{w}_{ki}^+=0\Leftrightarrow \overrightarrow{F_k}|_{e_i}=\overrightarrow{e_{-i}}$ and $\Tilde{w}_{ki}^-=0\Leftrightarrow\overrightarrow{F_k}|_{e_i}=\overrightarrow{e_i}$).
\end{enumerate}
It is straightforward to check this is also feasible. Lastly, we normalize the weights $\Tilde{w}_{ki}^+$ and $\Tilde{w}_{ki}^-$ to obtain $w_{ki}^+$ and $w_{ki}^-$ such that the total sum is $1$ for each $k$. The point $x=\sum\limits_{i\in F_1}(w_{1i}^+P_i+w_{1i}^-P_{-i})=\sum\limits_{i\in F_2}(w_{2i}^+P_i+w_{2i}^-P_{-i})$ is in $S_1^\circ\cap S_2^\circ$.
\end{proof}
We are ready to prove the main result of this subsection.
\begin{proposition}\label{local}
Let $S_1$ and $S_2$ be two maximal simplices of $\mathcal{P}$. Let $\overrightarrow{B_k}$ be the externally oriented basis $\chi(S_k)$ for $k=1,2$, and denote $\overrightarrow{F}=\overrightarrow{B_1}\cap(-\overrightarrow{B_2})$.
\begin{enumerate}
\item $S_1^\circ\cap S_2^\circ=\emptyset$ if and only if $\overrightarrow{F}$ has a potential cocircuit.
\item $S_1$ and $S_2$ intersect in a common face if and only if $\overrightarrow{F}$ has no potential circuit.
\item If one of the equivalent conditions in (1) or (2) holds, then $S_1\neq S_2$ implies $B_1\neq B_2$.
\end{enumerate}
\end{proposition}
\begin{proof}
First we prove (3). Assume by contradiction that $B_1=B_2$. Then $\overrightarrow{F}$ is a fourientation where the internal edges are all bioriented and there exists one external edge that is one-way oriented due to $\overrightarrow{B_1}\neq \overrightarrow{B_2}$. So, $\overrightarrow{F}$ has no potential cocircuit and has a potential circuit, which gives the contradiction.
For (1), we apply Lemma~\ref{hard lemma} to $S_1$ and $S_2$. Since $E(\overrightarrow{B_1})=E(\overrightarrow{B_2})$ always holds, $S_1^\circ\cap S_2^\circ=\emptyset$ if and only if there exists a one-way oriented edge in $\overrightarrow{F}$ that does not belong to any potential circuit of $\overrightarrow{F}$. By Lemma~\ref{3-painting}, we find a potential cocircuit of $\overrightarrow{F}$.
For (2), we apply Lemma~\ref{tri-lemma}. The maximal simplices $S_1$ and $S_2$ do not intersect in a common face if and only if there exist two distinct faces $A_1$ of $S_1$ and $A_2$ of $S_2$ such that $A_1^\circ\cap A_2^\circ\neq\emptyset$, which by Lemma~\ref{hard lemma} is equivalent to
\begin{center}
($\star$) there exist two distinct fourientations $\overrightarrow{F_1}\subseteq\overrightarrow{B_1}$ and $\overrightarrow{F_2}\subseteq\overrightarrow{B_2}$ such that $E(\overrightarrow{F_1})=E(\overrightarrow{F_2})$ and any one-way oriented edge in $\overrightarrow{F_0}:=\overrightarrow{F_1}\cap(-\overrightarrow{F_2})$ belongs to a potential circuit of $\overrightarrow{F_0}$.
\end{center}
It remains to show ($\star$) is equivalent to $\overrightarrow{F}$ having a potential circuit. If ($\star$) holds, then $\overrightarrow{F_0}\subseteq\overrightarrow{F}$, and hence a potential circuit of $\overrightarrow{F_0}$ is also a potential circuit of $\overrightarrow{F}$. By Lemma~\ref{stupid}, there is indeed a one-way oriented edge in $\overrightarrow{F_0}$. Thus $\overrightarrow{F}$ has a potential circuit. Conversely, if $\overrightarrow{F}$ has a potential circuit $\overrightarrow{C}$, then there must be a one-way oriented edge in $\overrightarrow{F}|_C$ (because in general $\overrightarrow{B_1}\cap(-\overrightarrow{B_2})$ does not contain bioriented circuits). Set $\overrightarrow{F_1}=\overrightarrow{B_1}|_C$ and $\overrightarrow{F_2}=\overrightarrow{B_2}|_C$. Clearly, we have $E(\overrightarrow{F_1})=E(\overrightarrow{F_2})$ and $\overrightarrow{F_0}=\overrightarrow{F}|_C$. By Lemma~\ref{stupid}, $\overrightarrow{F_1}\neq\overrightarrow{F_2}$. So, ($\star$) holds.
\end{proof}
\subsection{Volume of the Lawrence polytope and Theorem~\ref{3-fold}(3)}
Proposition~\ref{local} is close to Theorem~\ref{3-fold}(3). It remains to show the maximal simplices coming from a dissecting atlas or a triangulating atlas indeed cover the Lawrence polytope $\mathcal{P}$. Unfortunately, we cannot find a direct proof showing that any point of $\mathcal{P}$ is in some maximal simplex that comes from the given atlas. Instead, we make use of volume.
Recall that $\mathcal{P}$ is in the affine space $\sum\limits_{i=1}^ny_i=1$, and the affine space is in $\mathbb{R}^{n+r}$ with coordinate system $(x_1, \cdots, x_r, y_1, \cdots, y_n)$.
For a polytope $S$, we denote by $\text{vol}(S)$ the volume of $S$.
We first compute the volume of a maximal simplex of $\mathcal{P}$.
\begin{lemma}\label{volume-simplex}
Let $S$ be a maximal simplex of $\mathcal{P}$. Then \[\text{vol}(S)=\frac{\sqrt{n}}{(n+r-1)!},\]
where $n$ is the number of edges and $n+r-1$ is the dimension of $\mathcal{P}$. In particular, all the maximal simplices of $\mathcal{P}$ have the same volume.
\end{lemma}
\begin{proof}
Consider the pyramid $\Tilde{S}$ with base $S$ and apex the origin $O$.
The height of $\Tilde{S}$ is the distance from $O$ to the affine hyperplane $\sum\limits_{i=1}^ny_i=1$, so \[\text{vol}(\Tilde{S})=\frac{1}{\dim(\Tilde{S})}\cdot\text{base}\cdot\text{height}=\frac{1}{n+r}\cdot \text{vol}(S)\cdot\frac{1}{\sqrt{n}}.\]
Another way to compute $\text{vol}(\Tilde{S})$ is using a determinant. Note that $\Tilde{S}$ is a simplex and one of its vertices is $O$. The coordinates of the $n+r$ vertices of $S$ are the corresponding columns of the Lawrence matrix \[\begin{pmatrix} M_{r\times n} & {\bf 0} \\ I_{n\times n} & I_{n\times n} \end{pmatrix}.\]
Thus they form an $(n+r)\times(n+r)$ submatrix $N$. Hence
\[\text{vol}(\Tilde{S})=\frac{1}{\dim(\Tilde{S})!}\cdot|\det(N)|.\]
Because $M$ is totally unimodular, appending a standard unit vector to it still results in a totally unimodular matrix. By doing this repeatedly, we see that the Lawrence matrix is totally unimodular. Hence $\det(N)=\pm 1$ and \[\text{vol}(\Tilde{S})=\frac{1}{(n+r)!}.\]
By combining the two formulas of $\text{vol}(\Tilde{S})$, we get the desired formula.
\end{proof}
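As a sanity check of Lemma~\ref{volume-simplex}, consider the toy example (ours) with $M=(1\ 1)$, $r=1$, $n=2$, whose Lawrence polytope has vertices $P_1=(1,1,0)$, $P_2=(1,0,1)$, $P_{-1}=(0,1,0)$, $P_{-2}=(0,0,1)$.

```latex
% Inside the plane y_1 + y_2 = 1, the polytope is a rectangle: the edges
% P_1 P_{-1} and P_1 P_2 are orthogonal, of lengths 1 and sqrt(2).
% A maximal simplex is half of it, so
\[
\operatorname{vol}(S)=\frac{1}{2}\cdot 1\cdot\sqrt{2}=\frac{\sqrt{2}}{2}
=\frac{\sqrt{n}}{(n+r-1)!}\qquad(n=2,\ r=1),
\]
% in agreement with the formula of the lemma.
```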
Then we try to find one triangulating atlas and one triangulation of $\mathcal{P}$. The existence of a triangulation is proved by constructing regular triangulations; see \cite[Section 2.2.1]{DRS}. We will compute regular triangulations of $\mathcal{P}$ in Section~\ref{regular-sign}.
\begin{lemma}\label{exist-triangulation}
There exists a triangulation of $\mathcal{P}$.
\end{lemma}
To show the existence of a triangulating atlas, we show the existence of acyclic signatures, which is implicitly proved in \cite{BBY} by making use of the following equivalent definition of acyclic signatures.
\begin{lemma}\cite[Lemma 3.1.1]{BBY}\label{geometric signature}
Let $\sigma$ be a circuit signature of $\mathcal{M}$. Then $\sigma$ is acyclic if and only if there exists $\overrightarrow{w}\in\mathbb{R}^E$ such that $\overrightarrow{w}\cdot \overrightarrow{C}>0$ for each signed circuit $\overrightarrow{C}\in\sigma$, where the product is the usual inner product.
\end{lemma}
\begin{lemma}\label{exist}
Let $\mathcal{M}$ be a regular matroid.
\begin{enumerate}
\item \cite{BBY} There exists an acyclic circuit signature if $\mathcal{M}$ has at least one circuit.
\item There exists a triangulating external atlas.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) We can always find $\overrightarrow{w}\in\mathbb{R}^E$ such that $\overrightarrow{w}\cdot \overrightarrow{C}\neq 0$ for any signed circuit $\overrightarrow{C}$. Then put the signed circuits with positive inner products into $\sigma$. By Lemma~\ref{geometric signature}, $\sigma$ is acyclic.
(2) If $\mathcal{M}$ has at least one circuit, then by Lemma~\ref{acyclic-tri}, the atlas $\mathcal{A}_\sigma$ associated to the acyclic signature $\sigma$ from (1) is triangulating. If $\mathcal{M}$ has no circuit, then $\mathcal{M}$ has only one basis $E$, and the unique external atlas is triangulating by definition.
\end{proof}
Now we play a trick to find the volume of $\mathcal{P}$ and hence prove Theorem~\ref{3-fold}(3).
\begin{proposition}
(1) The volume of the Lawrence polytope $\mathcal{P}$ is \[\text{vol}(\mathcal{P})=(\text{the number of bases of }\mathcal{M})\cdot\frac{\sqrt{n}}{(n+r-1)!}.\]
(2) The map $\chi$ induces two bijections
\begin{align*}
\chi:\{\text{triangulations of }\mathcal{P}\} & \to \{\text{triangulating external atlases of }\mathcal{M}\} \\
\begin{gathered}
\text{a triangulation with} \\
\text{maximal simplices }\{S_i:i\in I\}
\end{gathered} & \mapsto \text{the external atlas }\{\chi(S_i):i\in I\},
\end{align*}
and \begin{align*}
\chi:\{\text{dissections of }\mathcal{P}\} & \to \{\text{dissecting external atlases of }\mathcal{M}\} \\
\begin{gathered}
\text{a dissection with}\\\text{maximal simplices }\{S_i:i\in I\}
\end{gathered} & \mapsto \text{the external atlas }\{\chi(S_i):i\in I\}.
\end{align*}
\end{proposition}
\begin{proof}
(1) We denote the number of bases of $\mathcal{M}$ by $b$.
By Lemma~\ref{volume-simplex}, the number of maximal simplices used in any dissection (and hence any triangulation) of $\mathcal{P}$ is a constant $t$, and we need to show $t=b$.
Because there exists a triangulating external atlas $\mathcal{A}$ (Lemma~\ref{exist}), by Proposition~\ref{local}(2), the volume of $\mathcal{P}$ is not less than the total volume of the corresponding maximal simplices (via $\chi$). Thus $t\geq b$.
Because there exists a triangulation of $\mathcal{P}$ (Lemma~\ref{exist-triangulation}), by Proposition~\ref{local}(2)(3), the externally oriented bases corresponding to the maximal simplices in the triangulation have \emph{distinct underlying bases}. Thus $t\leq b$. Therefore $t=b$.
(2) This is a direct consequence of Proposition~\ref{local}(1)(2) and part (1).
\end{proof}
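For the toy matroid $U_{1,2}$ with $M=(1\ 1)$ (our example, not from the text), $b=2$, and the rectangle $\mathcal{P}$ has exactly two triangulations, one for each diagonal, each using $t=2$ of the four maximal simplices.

```latex
% Cutting along the diagonal P_1 P_{-2} gives the triangulation
% { {P_1, P_2, P_{-2}},  {P_1, P_{-1}, P_{-2}} }, and
\[
\operatorname{vol}(\mathcal{P})=2\cdot\frac{\sqrt{2}}{2}=\sqrt{2}
=b\cdot\frac{\sqrt{n}}{(n+r-1)!}\qquad(b=2,\ n=2,\ r=1).
\]
```

Note that the two triangles of this triangulation have underlying bases $\{e_2\}$ and $\{e_1\}$ respectively, which are indeed distinct, as required by the proof above.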
\subsection{Regular triangulations and acyclic signatures}\label{regular-sign}
For the basics of regular triangulations, we refer the readers to \cite{LS} and \cite[Chapter 2]{DRS}. Here we recall the construction of regular triangulations of a polytope $\mathcal{P}\subseteq\mathbb{R}^n$ with the vertex set $V$.
\begin{enumerate}
\item[(i)] Pick a \emph{height function} $h:V\rightarrow \mathbb{R}$. \emph{Lift} each vertex $v\in V$ in $\mathbb{R}^n$ to $\mathbb{R}^{n+1}$ by appending $h(v)$ to the coordinate of $v$. Take the convex hull of the lifted vertices and get a lifted polytope $\mathcal{P}'$.
\item[(ii)] Project the \emph{lower facets} of $\mathcal{P}'$ onto $\mathbb{R}^n$. Here, a lower facet is a facet that is visible from below (i.e., a facet whose outer normal vector has its last coordinate negative).
\item[(iii)] When all the lower facets are simplices, the projected facets form a triangulation of $\mathcal{P}$, called a \emph{regular triangulation}. (See \cite{DRS} for a proof.)
\end{enumerate}
Recall the map $\alpha:\sigma\mapsto\mathcal{A}_\sigma$ is a bijection between triangulating circuit signatures and triangulating external atlases of $\mathcal{M}$, and the map $\chi$ is a bijection between triangulations of $\mathcal{P}$ and triangulating external atlases.
\begin{theorem}[Theorem~\ref{main3}]
The restriction of the bijection $\chi^{-1}\circ\alpha$ to the set of acyclic circuit signatures of $\mathcal{M}$ is a bijection onto the set of regular triangulations of $\mathcal{P}$.
\end{theorem}
\begin{proof}
In this proof, we use bold letters to represent column vectors. In particular, the vertices $P_i$ of $\mathcal{P}$ will be denoted by $\mathbf{P}_i$ instead. Recall that a circuit signature $\sigma$ is acyclic if and only if there exists $\mathbf{w}\in\mathbb{R}^E$ such that $\mathbf{w}^T \cdot \mathbf{C}>0$ for each signed circuit $\mathbf{C}\in\sigma$ (Lemma~\ref{geometric signature}).
First, we prove that a regular triangulation can always be obtained from an acyclic signature. Recall that $(x_1, \cdots, x_r, y_1, \cdots, y_n)$ denotes the coordinate of a point in $\mathbb{R}^{n+r}$ and $\mathcal{P}$ spans the affine subspace $\sum\limits_{i=1}^{n}y_{i}=1$ (Lemma~\ref{affinesubspace} and Corollary~\ref{dim}). To lift the vertices $\mathbf{P}_i$ of $\mathcal{P}$, we use the space $\mathbb{R}^{n+r}$ and lift $\mathbf{P}_i$ along the normal vector $\mathbf{n}_P=(0, \cdots, 0, 1, \cdots, 1)^T$ of the affine subspace that $\mathcal{P}$ lives in. To be precise, we lift $\mathbf{P}_i$ to $\mathbf{P}_i+h_i\cdot\mathbf{n}_P$, for $i=1, \cdots, n, -1, \cdots, -n$, and get the lifted polytope $\mathcal{P}'$.
For any maximal simplex $S$ of $\mathcal{P}$, let $S'$ be the lifted $S$. It is easy to check $S'$ must be a simplex. We use the following method to decide whether $S'$ is a lower facet of $\mathcal{P}'$. Let $H$ be the unique hyperplane of $\mathbb{R}^{n+r}$ that contains $S'$. Then $S'$ is a lower facet of $\mathcal{P}'$ if and only if for any vertex $\mathbf{P}_j$ of $\mathcal{P}$ not in $S$, we have $h_j>\overline{h}_j$, where $\overline{h}_j$ is the unique number such that $\mathbf{P}_j+\overline{h}_j\cdot\mathbf{n}_P\in H$. Intuitively, we use the new number $\overline{h}_j$ to lift $\mathbf{P}_j$ so that it lies in $H$, so $h_j>\overline{h}_j$ means if we lift $\mathbf{P}_j$ by $h_j$, then it is higher than $H$. It is not hard to check that this method is valid.
Set $\mathbf{w}=(h_1-h_{-1}, \cdots, h_n-h_{-n})^T\in\mathbb{R}^n$. Let $\overrightarrow{B}=\chi(S)$.
\textbf{Claim}: For any $\mathbf{P}_j$ that is not a vertex of $S$, $h_j>\overline{h}_j$ if and only if $\mathbf{w}^T\cdot\mathbf{C}_j>0$, where $\mathbf{C}_j$ is the signed fundamental circuit with respect to $B$ and $\overrightarrow{e_{j}}=\chi(\mathbf{P}_{j})$.
Now we prove the claim. The idea is that $\overline{h}_j$ should be determined by $\{h_i:\mathbf{P}_i\in S\}$. Denote the equation of $H$ by
\[H:\mathbf{n}_H^T\cdot \mathbf{x}=c,\] where $\mathbf{n}_H=(a_1, \cdots, a_r, b_1, \cdots, b_n)^T$ is the normal vector and $\mathbf{x}\in\mathbb{R}^{n+r}$. Note that $H$ cannot be perpendicular to the affine space that $\mathcal{P}$ spans. So $\sum\limits_{i=1}^nb_i=\mathbf{n}_H^T\cdot\mathbf{n}_P\neq 0$. Without loss of generality, we may assume $\sum\limits_{i=1}^nb_i=1$.
Because $H$ contains $S'$, we have equalities
\begin{equation}\label{eq1}
c=\mathbf{n}_H^T\cdot (\mathbf{P}_i+h_i\cdot\mathbf{n}_P)=\mathbf{n}_H^T\cdot \mathbf{P}_i+h_i,
\end{equation}
for all the vertices $\mathbf{P}_i$ of $S$. We also have
\begin{equation}\label{eq2}
c=\mathbf{n}_H^T\cdot \mathbf{P}_j+\overline{h}_j.
\end{equation}
Because $\mathbf{C}_j$ is a signed circuit, we have
\begin{align}
& M\cdot \mathbf{C}_j = 0 \nonumber \\
\Rightarrow \quad & (\mathbf{P}_1-\mathbf{P}_{-1},\mathbf{P}_2-\mathbf{P}_{-2}, \cdots, \mathbf{P}_n-\mathbf{P}_{-n})\cdot \mathbf{C}_j = 0 \nonumber \\
\Rightarrow \quad & \mathbf{n}_H^T\cdot(\mathbf{P}_1-\mathbf{P}_{-1},\mathbf{P}_2-\mathbf{P}_{-2}, \cdots, \mathbf{P}_n-\mathbf{P}_{-n})\cdot \mathbf{C}_j = 0. \label{eq3}
\end{align}
In the last equality, we focus on the vertices $\mathbf{P}_i$ and $\mathbf{P}_{-i}$ such that the $i$-th entry of $\mathbf{C}_j$ is non-zero since the other terms contribute zero to the left-hand side. These vertices correspond to the arcs in $\mathbf{C}_j$ and $-\mathbf{C}_j$. By the definitions of $\mathbf{C}_j$ and $\overrightarrow{B}$, these vertices are all in $S$ except $\mathbf{P}_j$. Thus by (\ref{eq1}), (\ref{eq2}), and (\ref{eq3}), we have
\[(h_1-h_{-1},\cdots,\overline{h}_j-h_{-j}, \cdots, h_n-h_{-n})\cdot \mathbf{C}_j = 0, \text{ for }j>0;\]
\[(h_1-h_{-1},\cdots,h_{-j}-\overline{h}_{j}, \cdots, h_n-h_{-n})\cdot \mathbf{C}_j = 0, \text{ for }j<0.\]
Note that the $|j|$-th entry of $\mathbf{C}_j$ has the same sign as $j$. Therefore $h_j>\overline{h}_j$ if and only if $\mathbf{w}^T\cdot\mathbf{C}_j>0$. (End of the proof of the claim)
By the claim, the lower facets of $\mathcal{P}'$ correspond to the externally oriented bases in $\mathcal{A}_\sigma$, where $\sigma$ is the acyclic signature induced by $-\mathbf{w}=(h_{-1}-h_1, \cdots, h_{-n}-h_n)^T$. Thus any regular triangulation comes from an acyclic signature.
Conversely, if we have an acyclic circuit signature induced by some vector $-\mathbf{w}$, then we may construct the heights $h_i$ such that $\mathbf{w}=(h_1-h_{-1}, \cdots, h_n-h_{-n})^T$, and get a lifted polytope $\mathcal{P}'$. Still by the claim, the lower facets of $\mathcal{P}'$ come from the maximal simplices $S=\chi^{-1}(\overrightarrow{B})$ of $\mathcal{P}$, where $\overrightarrow{B}\in\mathcal{A}_\sigma$. So, the triangulation $\chi^{-1}(\mathcal{A}_\sigma)$ is regular.
\end{proof}
This completes the proofs of the results in Section~\ref{Lawrence-intro}.
\iffalse
\section{A tiling induced by the main bijection $\varphi_{\mathcal{A},\mathcal{A^*}}$}\label{tile}
In this section, we make use of some results in \cite{D2} to prove Theorem~\ref{main4}, which builds a tiling of the simplex $\mathcal{S}$; for the introduction and definitions see Section~\ref{tile-intro}.
First we need to recall some notions and results in \cite{D2}. Consider the hypercube \[\mathcal{C}=\{x\in\mathbb{R}_{\geq 0}^{2n}: x(i)+x(-i)=1,\text{ for all }i\}\]
and the half-open cell
\[\text{hoc}(\mathcal{C},\overrightarrow{O},A)=\{x\in\mathcal{C}: x(\overrightarrow{O}_i)\neq 0,\text{ for }e_i\in A; x(-\overrightarrow{O}_i)= 0,\text{ for }e_i\notin A\},\]
where $\overrightarrow{O}$ is an orientation and $A\subseteq E$.
Let \[\varphi:\{\text{orientations of }\mathcal{M}\}\to\{\text{subsets of }E\}\] be a map.
\begin{proposition}\label{abcd}\cite[Proposition 4.2]{D2}
The map $\varphi$ is tiling if and only if \[\mathcal{C}=\biguplus_{\overrightarrow{O}}\text{hoc}(\mathcal{C},\overrightarrow{O},\varphi(\overrightarrow{O})),\] where the disjoint union is over all the orientations of $\mathcal{M}$.
\end{proposition}
The proposition has the following weighted version. Let \[w=(w_1,\cdots, w_n)\in\mathbb{R}_{\geq 0}^{n}\] be a weight vector. Let \[\mathcal{C}_w=\{x\in\mathbb{R}_{\geq 0}^{2n}: x(i)+x(-i)=w_i,\text{ for all }i\},\]
\[\text{hoc}(\mathcal{C}_w,\overrightarrow{O},A)=\{x\in\mathcal{C}_w: x(\overrightarrow{O}_i)\neq 0,\text{ for }e_i\in A; x(-\overrightarrow{O}_i)= 0,\text{ for }e_i\notin A\}.\]
Then we have the following corollary of Proposition~\ref{abcd}, which can be proved by a simple dilation argument.
\begin{corollary}\label{weithted tiling}
If $w_i\neq 0$ for \emph{all} $i$, then $\varphi$ is tiling if and only if \[\mathcal{C}_w=\biguplus_{\overrightarrow{O}}\text{hoc}(\mathcal{C}_w,\overrightarrow{O},\varphi(\overrightarrow{O})).\]
\end{corollary}
Recall that \[\mathcal{S}=\{x\in\mathbb{R}_{\geq 0}^{2n}: \sum_{i=1}^n(x(i)+x(-i))=1\},\] \[\text{hoc}(\mathcal{S},\overrightarrow{O},A)=\{x\in\mathcal{S}: x(\overrightarrow{O}_i)\neq 0,\text{ for }e_i\in A; x(-\overrightarrow{O}_i)= 0,\text{ for }e_i\notin A\}. \]
\begin{lemma}\label{local bijectivity}
Suppose the map $\varphi$ is tiling and let $A$ be a nonempty subset of $E$. Define a map\begin{align*}
\psi_A: \{\text{subsets of }A\} & \to \{\text{orientations of }A\} \\
H & \mapsto (\varphi^{-1}(H))|_A.
\end{align*}
\begin{enumerate}
\item $\psi_A$ is a bijection.
\item The map \[\varphi_{A}=\psi_A^{-1}\] is tiling.
\item The domain of $\varphi_{A}$ equals $\{\overrightarrow{O}|_A:\varphi(\overrightarrow{O})\subseteq A\}$.
\item For any orientation $\overrightarrow{O}$ of $E$ such that $\varphi(\overrightarrow{O})\subseteq A$, we have $\varphi_{A}(\overrightarrow{O}|_A)=\varphi(\overrightarrow{O})$.
\end{enumerate}
\end{lemma}
\begin{proof}
Because $\varphi$ is tiling, for any two distinct subsets $H_1$ and $H_2$ of $A$, there exists an edge $e$ such that $e\in H_1\bigtriangleup H_2$ and $\varphi^{-1}(H_1)|_e\neq\varphi^{-1}(H_2)|_e$. Note that $e\in A$, so $\varphi^{-1}(H_1)|_A\neq\varphi^{-1}(H_2)|_A$. This implies (1) and (2).
The surjectivity of $\psi_A$ implies (3).
To get (4), we let $H=\varphi(\overrightarrow{O})$, so that $\psi_A(\varphi(\overrightarrow{O}))=\overrightarrow{O}|_A$. Applying $\varphi_{A}=\psi_A^{-1}$ to both sides gives the claim.
\end{proof}
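The content of the lemma can be checked mechanically on a toy assignment (hypothetical, not the paper's general setting) in which $\varphi$ sends an orientation to its set of positively oriented edges; the sketch below verifies (1) and (4) for $E=\{e_1,e_2,e_3\}$ and $A=\{e_1,e_2\}$:

```python
import itertools

edges = [1, 2, 3]
A = [1, 2]   # a nonempty subset of E (hypothetical choice)

orientations = list(itertools.product([+1, -1], repeat=len(edges)))

def phi(O):
    # Toy map: the set of positively oriented edges of O.
    return frozenset(i for i, s in zip(edges, O) if s == +1)

def phi_inverse(H):
    # phi happens to be a bijection here, so invert it by search.
    return next(O for O in orientations if phi(O) == H)

def restrict(O, S):
    return tuple(s for i, s in zip(edges, O) if i in S)

# psi_A sends a subset H of A to the restriction of phi^{-1}(H) to A.
subsets_of_A = [frozenset(S) for r in range(len(A) + 1)
                for S in itertools.combinations(A, r)]
psi_A = {H: restrict(phi_inverse(H), A) for H in subsets_of_A}

# (1): psi_A is a bijection onto the 2^|A| orientations of A.
assert len(set(psi_A.values())) == 2 ** len(A)

# (4): if phi(O) is contained in A, then phi_A(O|_A) = phi(O).
phi_A = {v: k for k, v in psi_A.items()}
for O in orientations:
    if phi(O) <= frozenset(A):
        assert phi_A[restrict(O, A)] == phi(O)
```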
\begin{theorem}
The map $\varphi$ is tiling if and only if
\[\mathcal{S}=\biguplus_{\overrightarrow{O}}\text{hoc}(\mathcal{S},\overrightarrow{O},\varphi(\overrightarrow{O})),\] where the disjoint union is over all the orientations of $\mathcal{M}$.
\end{theorem}
\begin{proof}
Let $W=\{w\in\mathbb{R}_{\geq 0}^{n}:\sum\limits_{i=1}^nw_i=1\}.$
Because \[\mathcal{S}=\biguplus_{w\in W}\mathcal{C}_w,\]we have
\begin{align}
& \mathcal{S}=\biguplus_{\overrightarrow{O}}\text{hoc}(\mathcal{S},\overrightarrow{O},\varphi(\overrightarrow{O})) \label{arg1}\\
\Leftrightarrow \quad & \forall w\in W,\mathcal{C}_w=\biguplus_{\overrightarrow{O}}\text{hoc}(\mathcal{C}_w,\overrightarrow{O},\varphi(\overrightarrow{O})). \label{arg2}
\end{align}
So, if we have (\ref{arg1}), then $\varphi$ is tiling by Corollary~\ref{weithted tiling}.
Conversely, if $\varphi$ is tiling, we want to show (\ref{arg2}) holds for any $w\in W$ possibly with some $w_i=0$. Fix $w\in W$ and let $E_w=\{e_i\in E: w_i\neq 0\}\subseteq E$. Note that if $\varphi(\overrightarrow{O})\nsubseteq E_w$, then $\text{hoc}(\mathcal{C}_w,\overrightarrow{O},\varphi(\overrightarrow{O}))=\emptyset$, because $x(\overrightarrow{O}_i)\neq 0$ cannot hold when $e_i\in\varphi(\overrightarrow{O})\backslash E_w$. Hence we only need to show \begin{equation}\label{arg3}
\mathcal{C}_w=\biguplus_{\overrightarrow{O}:\varphi(\overrightarrow{O})\subseteq E_w}\text{hoc}(\mathcal{C}_w,\overrightarrow{O},\varphi(\overrightarrow{O})).
\end{equation}
We write points $x=(x(i):i=\pm 1,\cdots, \pm n)\in\mathbb{R}_{\geq 0}^{2n}$ as the Cartesian product $(x(i):e_{|i|}\in E_w)\times(x(i):e_{|i|}\notin E_w) $. Hence \[\mathcal{C}_w=\mathcal{C}_{\widetilde{w}}\times \mathbf{0},\] where $\widetilde{w}$ is the vector obtained from $w$ by removing the zero entries, $\mathcal{C}_{\widetilde{w}}$ is a hypercube in $\mathbb{R}_{\geq 0}^{2|E_w|}$, and $\mathbf{0}$ is the zero vector in $\mathbb{R}_{\geq 0}^{2n-2|E_w|}$. We also have
\begin{align*}
& \biguplus_{\overrightarrow{O}:\varphi(\overrightarrow{O})\subseteq E_w}\text{hoc}(\mathcal{C}_w,\overrightarrow{O},\varphi(\overrightarrow{O})) &\\
=& \biguplus_{\overrightarrow{O}:\varphi(\overrightarrow{O})\subseteq E_w}\text{hoc}(\mathcal{C}_{\widetilde{w}},\overrightarrow{O}|_{E_w},\varphi(\overrightarrow{O}))\times \mathbf{0} & \\
= & \biguplus_{\overrightarrow{O}:\varphi(\overrightarrow{O})\subseteq E_w}\text{hoc}(\mathcal{C}_{\widetilde{w}},\overrightarrow{O}|_{E_w},\varphi_{E_w}(\overrightarrow{O}|_{E_w})) \times \mathbf{0} & \text{(by Lemma~\ref{local bijectivity}(4) with $A=E_w$)} \\
= & \biguplus_{\text{orientation }\overrightarrow{O}_{E_w}\text{ of }E_w}\text{hoc}(\mathcal{C}_{\widetilde{w}},\overrightarrow{O}_{E_w},\varphi_{E_w}(\overrightarrow{O}_{E_w})) \times \mathbf{0} & \text{(by Lemma~\ref{local bijectivity}(3))} \\
= & \quad \mathcal{C}_{\widetilde{w}}\times \mathbf{0} & \text{(by Corollary~\ref{weithted tiling})}.
\end{align*}
Thus (\ref{arg3}) holds.
\end{proof}
The above theorem and Theorem~\ref{strong injectivity} imply Theorem~\ref{main4}.
\fi
\end{document} |
\begin{document}
\title{Canonical Stratifications along Bisheaves}
\begin{abstract}A theory of bisheaves has been recently introduced to measure the homological stability of fibers of maps to manifolds. A bisheaf over a topological space is a triple consisting of a sheaf, a cosheaf, and compatible maps from the stalks of the sheaf to the stalks of the cosheaf. In this note we describe how, given a bisheaf constructible (i.e., locally constant) with respect to a triangulation of its underlying space, one can explicitly determine the coarsest stratification of that space for which the bisheaf remains constructible.\end{abstract}
\section*{Introduction}
The space of continuous maps from a compact topological space $\mathscr{X}$ to a metric space $\mathscr{M}$ carries a natural metric structure of its own --- the distance between $f,g:\mathscr{X}\to\mathscr{M}$ is given by $\sup_{x \in \mathscr{X}} d_\mathscr{M}[f(x),g(x)]$, where $d_\mathscr{M}$ is the metric on $\mathscr{M}$. It is natural to ask how sensitive the fibers $f^{-1}(p)$ over points $p \in \mathscr{M}$ are to perturbations of $f$ in this metric space of maps $\mathscr{X}\to\mathscr{M}$. The case $\mathscr{M}= \mathbb{R}$ (endowed with its standard metric) is already interesting, and lies at the heart of both Morse theory \cite{milnor} and the stability of persistent homology \cite{interlevelsets, cdsgo, cseh}.
The theory of {\bf bisheaves} was introduced in \cite{macpat} to provide stable lower bounds on the homology groups of such fibers in the case where $f$ is a reasonably tame (i.e., Thom-Mather stratified) map. The fibers of $f$ induce two algebraic structures generated by certain basic open subsets $\mathscr{U} \subset \mathscr{M}$ --- their Borel-Moore homology $\bH^\text{\tiny BM}_\bullet(f^{-1}(\mathscr{U})) = \text{H}_\bullet(\mathscr{X},\mathscr{X}-f^{-1}(\mathscr{U}))$ naturally forms a sheaf of abelian groups, whereas their singular homology $\text{H}_\bullet(f^{-1}(\mathscr{U}))$ naturally forms a cosheaf. If $\mathscr{M}$ is a $\mathbb{Z}$-orientable manifold, then its fundamental class---let's call it $o \in \text{H}^m_c(\mathscr{M})$---restricts to a generator $o_\mathscr{U}$ of the top compactly-supported cohomology $\text{H}^m_c(\mathscr{U})$ of basic open subsets $\mathscr{U} \subset \mathscr{M}$. The cap product \cite[Sec 3.3]{hatcher} with its pullback $f^*(o_\mathscr{U}) \in \text{H}^m_c(f^{-1}(\mathscr{U}))$ therefore induces group homomorphisms
\[
\bH^\text{\tiny BM}_{m+\bullet}(f^{-1}(\mathscr{U})) {\longrightarrow} \text{H}_\bullet(f^{-1}(\mathscr{U}))
\]
from the ($m$-shifted) Borel-Moore to the singular homology over $\mathscr{U}$. These maps commute with restriction maps of the sheaf and extension maps of the cosheaf by naturality of the cap product. This data, consisting of a sheaf plus a cosheaf along with such maps, is the prototypical and motivating example of a bisheaf.
Fix an arbitrary open set $\mathscr{U} \subset \mathscr{M}$ and restrict the bisheaf described above to $\mathscr{U}$.
We replace the restricted Borel-Moore sheaf with its largest sub {\em episheaf}
(i.e., a sheaf whose restriction maps on basic opens are all surjective), and similarly, we replace the restricted singular cosheaf with its largest quotient {\em monocosheaf} (i.e., a cosheaf whose extension maps on basic opens are all injective).
It is not difficult to confirm that even after the above alterations, one can induce canonical maps from the episheaf to the monocosheaf which form a new bisheaf over $\mathscr{U}$. The stalkwise-images of the maps from the episheaf to the monocosheaf in this new bisheaf form a {\em local system} over $\mathscr{U}$ --- this may be viewed as either a sheaf or a cosheaf depending on taste, since all of its restriction/extension maps are invertible. The authors of \cite{macpat} call this the {\bf persistent local system} of $f$ over $\mathscr{U}$.
The persistent local system of $f$ over $\mathscr{U}$ is a collection of subquotients of $\text{H}_\bullet(f^{-1}(p))$ for all $p \in \mathscr{U}$
and provides a principled lower bound for the fiberwise homology of $f$ over $\mathscr{U}$ which is stable to perturbations. For a sufficiently small $\epsilon > 0$, let $\mathscr{U}_\epsilon$ be the shrinking of $\mathscr{U}$ by $\epsilon$. For all tame maps $g: \mathscr{X} \to \mathscr{M}$ within $\epsilon$ of $f$, the persistent local system of $f$ over $\mathscr{U}$ restricted to $\mathscr{U}_\epsilon$ is a fiberwise subquotient of the persistent local system of $g$ over $\mathscr{U}_\epsilon$.
The goal of this paper is to take the first concrete steps towards rendering this new theory of bisheaves amenable to explicit machine computation. In Sec \ref{sec:simpbish} we introduce the notion of a {\bf simplicial bisheaf}, i.e., a bisheaf which is constructible with respect to a fixed triangulation of the underlying manifold $\mathscr{M}$. Such bisheaves over simplicial complexes are not much harder to represent on computers than the much more familiar cellular (co)sheaves --- if we work with field coefficients rather than integers, for instance, a simplicial bisheaf amounts to the assignment of one matrix to each simplex $\sigma$ of $\mathscr{M}$ and two matrices to each face relation $\sigma \leq \sigma'$, subject to certain functoriality constraints --- more details can be found in Sec \ref{sec:simpbish} below.
On the other hand, bisheaves are profoundly different from (co)sheaves in certain fundamental ways --- as noted in \cite{macpat}, the category of bisheaves, simplicial or otherwise, over a manifold $\mathscr{M}$ is not abelian. Consequently, we have no direct recourse to bisheafy analogues of basic (co)sheaf invariants such as sheaf cohomology and cosheaf homology. Even so, some of the ideas which produced efficient algorithms for computing cellular sheaf cohomology \cite{cgn} can be suitably adapted towards the task of extracting the persistent local system from a given simplicial bisheaf. One natural way to accomplish this is to find the coarsest partition of the simplices of $\mathscr{M}$ into regions so that over each region the cap product map relating the Borel-Moore stalk to the singular costalk is locally constant. This idea is made precise in Sec \ref{sec:strat}.
Our main construction is described in Sec \ref{sec:main}. Following \cite{strat}, we use the bisheaf data over an $m$-dimensional simplicial complex $\mathscr{M}$ to explicitly construct a stratification by simplicial subcomplexes
\[
\varnothing = \mathscr{M}_{-1} \subset \mathscr{M}_0 \subset \cdots \subset \mathscr{M}_{m-1} \subset \mathscr{M}_m = \mathscr{M},
\]
called the {\bf canonical stratification of $\mathscr{M}$} along the given bisheaf; the connected components of each $\mathscr{M}_d - \mathscr{M}_{d-1}$, called the {\em canonical $d$-strata}, enjoy three remarkably convenient properties for our purposes.
\begin{enumerate}
\item {\em Constructibility}: if two simplices lie in the same stratum, then the cap-product maps assigned to them by the bisheaf are related by invertible transformations.
\item {\em Homogeneity}: if two adjacent simplices $\sigma \leq \sigma'$ of $\mathscr{M}$ lie in different strata, then the (isomorphism class of the) bisheaf data assigned to the face relation $\sigma \leq \sigma'$ in $\mathscr{M}$ depends only on those strata.
\item {\em Universality:} this is the coarsest stratification (i.e., the one with fewest strata) satisfying both constructibility and homogeneity.
\end{enumerate}
Armed with the canonical stratification of $\mathscr{M}$ along a bisheaf, one can reduce the computational burden of building the associated persistent local system as follows. Rather than extracting an episheaf and monocosheaf for {\em every} simplex and face relation, one only has to perform these calculations for each canonical stratum. The larger the canonical strata are, the more computationally beneficial this strategy becomes.
{\footnotesize
\section{Bisheaves around Simplicial Complexes} \label{sec:simpbish}
Let $\mathscr{M}$ be a simplicial complex and let $\category{Ab}$ denote the category of abelian groups. By a {\bf {sheaf} over $\mathscr{M}$} we mean a functor
\[
\shf{F}:\mathbb{F}ace(\mathscr{M}) \to \category{Ab}
\]
from the poset of simplices in $\mathscr{M}$ ordered by the face relation to the abelian category $\category{Ab}$. In other words, each simplex $\sigma$ of $\mathscr{M}$ is assigned an abelian group $\shf{F}(\sigma)$ called the {\em stalk} of $\shf{F}$ over $\sigma$, while each face relation $\sigma \leq \sigma'$ among simplices is assigned a group homomorphism $\shf{F}(\sigma \leq \sigma'):\shf{F}(\sigma) \to \shf{F}(\sigma')$ called its {\em restriction map}. These assignments of objects and morphisms are constrained by the usual functor-laws of associativity and identity. A morphism $\shf{\alpha}:\shf{F} \to \shf{G}$ of sheaves over $\mathscr{M}$ is prescribed by a collection of group homomorphisms $\{\shf{\alpha}_\sigma:\shf{F}(\sigma) \to \shf{G}(\sigma)\}$, indexed by simplices of $\mathscr{M}$, which must commute with restriction maps.
The dual notion is that of a {\bf {cosheaf} under $\mathscr{M}$}, which is a functor
\[
\csh{F}:\mathbb{F}ace(\mathscr{M})^\text{op} \to \category{Ab};
\]
this assigns to each simplex $\sigma$ an abelian group $\csh{F}(\sigma)$ called its {\em costalk}, and to each face relation $\sigma \leq \sigma'$ a contravariant group homomorphism $\csh{F}(\sigma \leq \sigma'):\csh{F}(\sigma') \to \csh{F}(\sigma)$, called the {\em extension map}. As before, a morphism $\csh{\alpha}:\csh{F} \to \csh{G}$ of cosheaves under $\mathscr{M}$ is a simplex-indexed collection of abelian group homomorphisms $\{\csh{\alpha}_\sigma:\csh{F}(\sigma) \to \csh{G}(\sigma)\}$ which must commute with extension maps. For a thorough introduction to cellular (co)sheaves, the reader should consult \cite{curry}.
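On a computer such functors are conveniently stored as dictionaries keyed by simplices and face relations. The following Python sketch (with illustrative stalks and maps, not data from the text) records sheaf and cosheaf matrices on part of the face poset of a 2-simplex and checks functoriality along two chains:

```python
def matmul(A, B):
    # Product of matrices given as nested lists.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Stalks of a toy sheaf on faces of the 2-simplex {a, b, c}: dimensions
# of Q^d, with restriction maps stored as matrices.
stalk = {"a": 2, "ab": 1, "ac": 1, "abc": 1}
rest = {
    ("a", "ab"):   [[1, 0]],   # F(a <= ab): Q^2 -> Q^1
    ("a", "ac"):   [[1, 0]],
    ("ab", "abc"): [[1]],
    ("ac", "abc"): [[1]],
}

# Functoriality: the chains a <= ab <= abc and a <= ac <= abc must
# induce the same composite restriction F(a <= abc).
via_ab = matmul(rest[("ab", "abc")], rest[("a", "ab")])
via_ac = matmul(rest[("ac", "abc")], rest[("a", "ac")])
assert via_ab == via_ac == [[1, 0]]

# Cosheaf data points the other way: extension maps go from the costalk
# of the larger simplex to that of its face, and compose contravariantly.
ext = {("a", "ab"): [[1], [0]], ("ab", "abc"): [[1]]}
composite = matmul(ext[("a", "ab")], ext[("ab", "abc")])
assert composite == [[1], [0]]
```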
\subsection{Definition}
The following algebraic-topological object (see \cite[Def 5.1]{macpat}) coherently intertwines sheaves with cosheaves.
\begin{definition}
\label{def:bisheaf}
A {\bf {bisheaf} around $\mathscr{M}$} is a triple $\bsh{F} = (\shf{F},\csh{F},F)$ defined as follows. Here $\shf{F}$ is a sheaf over $\mathscr{M}$, while $\csh{F}$ is a cosheaf under $\mathscr{M}$, and
\[
F = \{F_\sigma:\shf{F}(\sigma) \to \csh{F}(\sigma)\}
\] is a collection of abelian group homomorphisms indexed by the simplices of $\mathscr{M}$ so that the following diagram, denoted $\bsh{F}(\sigma \leq \sigma')$, commutes for each face relation $\sigma \leq \sigma'$:
\[
\xymatrixcolsep{1in}
\xymatrix{
\shf{F}(\sigma) \ar@{->}[d]_{F_\sigma}\ar@{->}[r]^{\shf{F}(\sigma \leq \sigma')}& \shf{F}(\sigma') \ar@{->}[d]^{F_{\sigma'}} \\
\csh{F}(\sigma) & \csh{F}(\sigma') \ar@{->}[l]^{\csh{F}(\sigma \leq \sigma')}\\
}
\]
(The right-pointing map is the restriction map of the sheaf $\shf{F}$, while the left-pointing map is the extension map of the cosheaf $\csh{F}$).
\end{definition}
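With field coefficients, checking this definition on a machine amounts to one matrix identity per face relation: traversing the square right, then down, then left must agree with the vertical map at $\sigma$. A minimal sketch with illustrative matrices (not data from the text):

```python
def matmul(A, B):
    # Product of matrices given as nested lists.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# One face relation sigma <= tau over Q; every matrix below is toy data.
sheaf_rest  = [[1, 0]]          # sheaf restriction  F(sigma) -> F(tau)
cosheaf_ext = [[1], [0]]        # cosheaf extension  F^(tau) -> F^(sigma)
F_sigma     = [[1, 0], [0, 0]]  # vertical map at sigma
F_tau       = [[1]]             # vertical map at tau

# The bisheaf square commutes iff going right-then-down-then-left
# equals going straight down at sigma.
around = matmul(cosheaf_ext, matmul(F_tau, sheaf_rest))
assert around == F_sigma
```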
\subsection{Bisheaves from Fibers}
The following construction is adapted from \cite[Ex 5.3]{macpat}. Consider a map $f:\mathscr{X}\to\mathscr{M}$ whose target space $\mathscr{M}$ is a connected, triangulated manifold of dimension $m$. Let $o$ be a generator of the top compactly-supported cohomology group $\text{H}_\text{\tiny c}^m(\mathscr{M})$. Our assumptions on $\mathscr{M}$ imply $\text{H}_\text{\tiny c}^m(\mathscr{M}) \simeq \mathbb{Z}$, so $o$ corresponds to $\pm 1$ under this isomorphism. Now the inclusion $\category{st } \sigma \subset \mathscr{M}$ of the open star\footnote{The open star of $\sigma \in \mathscr{M}$ is given by $\category{st } \sigma = \{\tau \in \mathscr{M} \mid \sigma \leq \tau\}$.} of any simplex $\sigma$ in $\mathscr{M}$ induces an isomorphism on $m$-th compactly supported cohomology, so let $o|_\sigma$ be the image of $o$ in $\text{H}^m_c(\category{st } \sigma)$ under this isomorphism. Since $f$ restricts to a map $f^{-1}(\category{st } \sigma) \to \category{st } \sigma$, the generator $o|_\sigma$ pulls back to a class $f^*(o|_\sigma)$ in $\text{H}^m_c(f^{-1}(\category{st } \sigma))$. The cap product with $f^*(o|_\sigma)$ therefore constitutes a map
\[
\xymatrixcolsep{0.7in}
\xymatrix{
\text{H}_{m+\bullet}^\text{\tiny BM}\left(f^{-1}(\category{st } \sigma)\right) \ar@{->}[r]^-{\frown f^*(o|_\sigma)} & \text{H}_\bullet\left(f^{-1}(\category{st } \sigma)\right)
}
\]
from the Borel-Moore homology to the singular homology of the fiber $f^{-1}(\category{st } \sigma)$. We note that the former naturally forms a sheaf over $\mathscr{M}$ while the latter forms a cosheaf; as mentioned in the Introduction, the above data constitutes the primordial example of a bisheaf.
\section{Stratifications along Bisheaves} \label{sec:strat}
Throughout this section, we will assume that $\bsh{F} = (\shf{F},\csh{F},F)$ is a bisheaf of abelian groups over some simplicial complex $\mathscr{M}$ of dimension $m$ in the sense of Def \ref{def:bisheaf}. We do not require this $\mathscr{M}$ to be a manifold.
\begin{definition}
\label{def:fstrat}
An {\bf $\bsh{F}$-{stratification} of $\mathscr{M}$} is a filtration $\mathscr{K}_\bullet$ by subcomplexes:
\[
\varnothing = \mathscr{K}_{-1} \subset \mathscr{K}_0 \subset \cdots \subset \mathscr{K}_{m-1} \subset \mathscr{K}_m = \mathscr{M},
\]
so that connected components of the (possibly empty) difference $\mathscr{K}_d - \mathscr{K}_{d-1}$, called the {\em $d$-dimensional strata} of $\mathscr{K}_\bullet$, obey the following axioms.
\begin{enumerate}
\item{\bf Dimension:} The maximum dimension of simplices lying in a $d$-stratum should precisely equal $d$ (but we do not require every simplex in a $d$-stratum $S$ to be the face of some $d$-simplex in $S$).
\item{\bf Frontier:} The transitive closure of the following binary relation $\prec$ on the set of all strata forms a partial order: we say $S \prec S'$ if there exist simplices $\sigma \in S$ and $\sigma' \in S'$ with $\sigma \leq \sigma'$. Moreover, this partial order is graded in the sense that $S \prec S'$ implies $\dim S \leq \dim S'$, with equality of dimension occurring if and only if $S = S'$.
\item {\bf Constructibility:} $\bsh{F}$ is {locally constant} on each stratum. Namely, if two simplices $\sigma \leq \tau$ of $\mathscr{M}$ lie in the same stratum, then $\shf{F}(\sigma \leq \tau)$ and $\csh{F}(\sigma \leq \tau)$ are both isomorphisms.
\end{enumerate}
\end{definition}
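Given a candidate filtration, these axioms can be tested mechanically once one records, for each covering face relation, whether the bisheaf assigns isomorphisms to it. The sketch below uses a toy input with hypothetical names, and checks the frontier axiom only in its graded form on covering relations:

```python
# Toy input (hypothetical): simplices with dimensions, covering face
# relations, a candidate stratum label per simplex, and the set of face
# relations on which both bisheaf maps are isomorphisms.
dim     = {"v": 0, "w": 0, "e": 1}
faces   = [("v", "e"), ("w", "e")]            # v <= e and w <= e
stratum = {"v": "S0", "w": "S1", "e": "S1"}   # candidate stratification
iso     = {("w", "e")}                        # relations carrying isomorphisms

def stratum_dim(S):
    # A stratum's dimension is the top dimension among its simplices.
    return max(d for s, d in dim.items() if stratum[s] == S)

# Constructibility: face relations within a single stratum must be isos.
constructible = all((s, t) in iso
                    for (s, t) in faces if stratum[s] == stratum[t])

# Frontier (graded form, on covering relations only): crossing from one
# stratum into another must strictly increase stratum dimension.
frontier = all(stratum_dim(stratum[s]) < stratum_dim(stratum[t])
               for (s, t) in faces if stratum[s] != stratum[t])

assert constructible and frontier
```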
\begin{remark}
It follows from constructibility (and the fact that strata must be connected) that the commuting diagram $\bsh{F}(\sigma \leq \sigma')$ assigned to simplices $\sigma \leq \sigma'$ of $\mathscr{M}$ depends, up to isomorphism, only on the strata containing $\sigma$ and $\sigma'$. That is, given any other pair $\tau \leq \tau'$ so that $\sigma$ and $\tau$ lie in the same stratum $S$ while $\sigma'$ and $\tau'$ lie in the same stratum $S'$, there exist four isomorphisms (depicted as dashed vertical arrows) which make the following cube of abelian groups commute up to isomorphism:
\[
\xymatrixcolsep{.4in}
\xymatrixrowsep{0.05in}
\xymatrix{
& \shf{F}(\sigma) \ar@{-->}[dddd]_\sim \ar@{->}[rr]^{\shf{F}(\sigma \leq \sigma')} \ar@{->}[dl]_{F_\sigma} & & \shf{F}(\sigma') \ar@{-->}[dddd]^\sim \ar@{->}[dr]^{F_{\sigma'}} &\\
\csh{F}(\sigma) & & & & \csh{F}(\sigma') \ar@{->}[llll]^{\csh{F}(\sigma \leq \sigma')} \\
& & & & \\
& & & & \\
& \shf{F}(\tau) \ar@{->}[rr]^{\shf{F}(\tau \leq \tau')} \ar@{->}[dl]_{F_\tau} & & \shf{F}(\tau') \ar@{->}[dr]^{F_{\tau'}} &\\
\csh{F}(\tau) \ar@{-->}[uuuu]^\sim & & & & \csh{F}(\tau') \ar@{->}[llll]^{\csh{F}(\tau \leq \tau')} \ar@{-->}[uuuu]_\sim
}
\]
These vertical isomorphisms are not unique, but rather depend on choices of paths lying in $S$ (from $\sigma$ to $\tau$) and in $S'$ (from $\sigma'$ to $\tau'$).
\end{remark}
\begin{example} The first example of an $\bsh{F}$-stratification of $\mathscr{M}$ that one might consider is the {\bf skeletal} stratification, where the $d$-strata are simply the $d$-simplices.
\end{example}
Since we are motivated by computational concerns, we seek an $\bsh{F}$-stratification with as few strata as possible. To make this notion precise, note that the set of all $\bsh{F}$-stratifications of $\mathscr{M}$ admits a partial order --- we say that $\mathscr{K}_\bullet$ {\em refines} another $\bsh{F}$-stratification $\mathscr{K}'_\bullet$ if every stratum of $\mathscr{K}_\bullet$ is contained inside some stratum of $\mathscr{K}'_\bullet$ (when both are viewed as subspaces of $\mathscr{M}$). The skeletal stratification refines all the others, and serves as the maximal object in this poset; the object that we wish to build here lies at the other end of this hierarchy.
\begin{definition}
\label{def:canstr}
The {\bf canonical} $\bsh{F}$-stratification of $\mathscr{M}$ is the minimal object in the poset of $\bsh{F}$-stratifications of $\mathscr{M}$ ordered by refinement --- every other stratification is a refinement of the canonical one.
\end{definition}
The reader may ask why this object is well-defined at all --- why should the poset of all $\bsh{F}$-stratifications admit a minimal element, and even if it does, why should that element be unique? Taking this definition as provisional for now, we will establish the existence and uniqueness of the {canonical $\bsh{F}$-stratification} of $\mathscr{M}$ via an explicit construction in the next section.
\section{The Main Construction} \label{sec:main}
As before, we fix a bisheaf $\bsh{F} = (\shf{F},\csh{F},F)$ on an $m$-dimensional simplicial complex $\mathscr{M}$. Our goal is to construct the canonical $\bsh{F}$-stratification, which was described in Def \ref{def:canstr} and will be denoted here by $\mathscr{M}_\bullet$:
\[
\varnothing = \mathscr{M}_{-1} \subset \mathscr{M}_0 \subset \cdots \subset \mathscr{M}_{m-1} \subset \mathscr{M}_m = \mathscr{M}.
\] We will establish the existence and uniqueness of this stratification by constructing the strata in reverse-order: the $m$-dimensional canonical strata will be identified before the $(m-1)$-dimensional canonical strata, and so forth. There is a healthy precedent for such top-down constructions that dates back to work of Whitney \cite{whitney} and Goresky-MacPherson \cite[Sec 4.1]{gm2}.
\subsection{Localizations of the Face Poset}
The key ingredient here, as in \cite{strat}, is the ability to {\em localize} \cite[Ch I.1]{gabzis} the poset $\mathbb{F}ace(\mathscr{M})$ about a special sub-collection $W$ of face relations that is {closed} in the following sense: if $(\sigma \leq \tau)$ and $(\tau \leq \nu)$ both lie in $W$ then so does $(\sigma \leq \nu)$.
\begin{definition}
\label{def:loc}
Let $W$ be a closed collection of face relations in $\mathbb{F}ace(\mathscr{M})$ and let $W^+$ denote the union of $W$ with all equalities of the form $(\sigma = \sigma)$ for $\sigma$ ranging over simplices in $\mathscr{M}$. The {\bf localization} of $\mathbb{F}ace(\mathscr{M})$ about $W$ is a category $\mathbb{F}ace_W(\mathscr{M})$ whose objects are the simplices of $\mathscr{M}$, while morphisms from $\sigma$ to $\tau$ are given by equivalence classes of finite (but arbitrarily long) $W$-{\em zigzags}. These have the form
\[
(\sigma \leq \tau_0 \geq \sigma_0 \leq \cdots \leq \tau_k \geq \sigma_k \leq \tau), \text{ where:}
\]
\begin{enumerate}
\item only relations in $W^+$ can point backwards (i.e., $\geq$),
\item composition is given by concatenation, and
\item the trivial zigzag $(\sigma = \sigma)$ represents the identity morphism of each simplex $\sigma$.
\end{enumerate}
The equivalence between $W$-zigzags is generated by the transitive closure of the following basic relations. Two such zigzags are related
\begin{itemize}
\item {\em horizontally} if one is obtained from the other by removing internal equalities, e.g.:
\begin{align*}
\left(\cdots \leq \tau_0 \geq \sigma_0 = \sigma_0 \geq \tau_1 \leq \cdots \right) &\sim \left(\cdots \leq \tau_0 \geq \tau_1 \leq \cdots\right), \\
\left(\cdots \geq \sigma_0 \leq \tau_1 = \tau_1 \leq \sigma_1 \geq \cdots \right) &\sim \left(\cdots \geq \sigma_0 \leq \sigma_1 \geq \cdots\right),
\end{align*}
\item or {\em vertically}, if they form the rows of a grid:
\begin{alignat*}{11}
\sigma~ & ~\leq~ & ~\tau_0~ & ~\geq~ & ~\sigma_0~ & ~\leq~ & ~\cdots~ & ~\geq~ & ~\sigma_k~ & ~\leq~ & ~\tau \\
\roteq~ & & \leqdn~ & & \leqdn~ & & & & \leqdn~ & & \roteq \\
\sigma~ & ~\leq~ & ~\tau'_0~ & ~\geq~ & ~\sigma'_0~ & ~\leq~ & ~\cdots~ & ~\geq~ & ~\sigma'_k~ & ~\leq~ & ~\tau
\end{alignat*}
whose vertical face relations (also) lie in $W^+$.
\end{itemize}
\end{definition}
\begin{remark}
These horizontal and vertical relations are designed to render invertible all the face relations $(\sigma \leq \tau)$ that lie in $W$. The backward-pointing $\tau \geq \sigma$ which may appear in a $W$-zigzag serves as the formal inverse to its forward-pointing counterpart $\sigma \leq \tau$ --- one can use a vertical relation followed by a horizontal relation to achieve the desired cancellations whenever $(\cdots \geq \sigma \leq \tau \geq \sigma \leq \cdots)$ or $(\cdots \leq \tau \geq \sigma \leq \tau \geq \cdots)$ are encountered as substrings of a $W$-zigzag.
\end{remark}
\subsection{Top Strata}
Consider the subset of face relations in $\mathbb{F}ace(\mathscr{M})$ to which $\bsh{F}$ assigns invertible maps, i.e.,
\begin{align}
\label{eq:E}
E = \{(\sigma \leq \tau) \text{ in } \mathbb{F}ace(\mathscr{M}) \mid \shf{F}(\sigma \leq \tau) \text{ and }\csh{F}(\sigma \leq \tau) \text{ are isomorphisms}\}.
\end{align}
One might expect, in light of the constructibility requirement of Def \ref{def:fstrat}, that finding canonical strata would amount to identifying isomorphism classes in the localization of $\mathbb{F}ace(\mathscr{M})$ about $E$. Unfortunately, this does not work --- the pieces of $\mathscr{M}$ obtained in such a manner do not obey the frontier axiom in general. To rectify this defect, we must suitably modify $E$. Define the set of simplices
\begin{align*}
U = \{\sigma \in \mathbb{F}ace(\mathscr{M}) \mid (\sigma \leq \tau) \in E \text{ for all } \tau \in \category{st } \sigma\},
\end{align*}
and consider the subset $W \subset E$ given by
\begin{align}
\label{eq:W}
W = \{(\sigma \leq \tau) \in E \mid \sigma \in U\}.
\end{align}
Thus, a pair of adjacent simplices $(\sigma \leq \tau)$ of $\mathscr{M}$ lies in $W$ if and only if the sheaf $\shf{F}$ and cosheaf $\csh{F}$ assign isomorphisms not only to $(\sigma \leq \tau$) itself, but also to {\em all other face relations} encountered among simplices in the open star of $\sigma$. For our purposes, it is important to note that $U$ is {\em upward closed} as a subposet of $\mathbb{F}ace(\mathscr{M})$, meaning that $\sigma \in U$ and $\sigma' \geq \sigma$ implies $\sigma' \in U$.
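The sets $E$, $U$ and $W$ are directly computable from matrix data. In the Python sketch below (toy one-edge complex with illustrative matrices; invertibility over $\mathbb{Q}$ is tested by Gaussian elimination), the sheaf map over $(b \leq ab)$ fails to be invertible, so $b$ is excluded from $U$:

```python
from fractions import Fraction

def is_invertible(M):
    """Gaussian elimination over Q: square matrix of full rank."""
    n = len(M)
    if any(len(row) != n for row in M):
        return False
    A = [[Fraction(x) for x in row] for row in M]
    for col in range(n):
        pivot = next((r for r in range(col, n) if A[r][col] != 0), None)
        if pivot is None:
            return False
        A[col], A[pivot] = A[pivot], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return True

# Toy 1-dimensional complex: edge "ab" with vertices "a" and "b", with
# illustrative sheaf restriction and cosheaf extension matrices.
relations = [("a", "ab"), ("b", "ab")]
shf  = {("a", "ab"): [[1]], ("b", "ab"): [[0]]}
csh  = {("a", "ab"): [[1]], ("b", "ab"): [[1]]}
star = {"a": ["a", "ab"], "b": ["b", "ab"], "ab": ["ab"]}

# E: face relations on which both maps are isomorphisms.
E = {r for r in relations if is_invertible(shf[r]) and is_invertible(csh[r])}

# U: simplices all of whose face relations into their open star lie in E.
U = {s for s in star if all((s, t) in E for t in star[s] if t != s)}

# W: relations of E emanating from U.
W = {(s, t) for (s, t) in E if s in U}

assert E == {("a", "ab")}
assert U == {"a", "ab"} and W == {("a", "ab")}
```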
\begin{proposition}
\label{prop:isostrat}
Every simplex $\tau$ lying in an $m$-stratum of any $\bsh{F}$-stratification of $\mathscr{M}$ must be isomorphic in $\mathbb{F}ace_{W}(\mathscr{M})$ to an $m$-dimensional simplex of $\mathscr{M}$.
\end{proposition}
\begin{proof}
Assume $\tau$ lies in an $m$-dimensional stratum $S$ of an $\bsh{F}$-stratification of $\mathscr{M}$. By the dimension axiom, $S$ contains at least one $m$-simplex, which we call $\sigma$. Since $S$ is connected, there exists a zigzag of simplices lying entirely in $S$ that links $\sigma$ to $\tau$, say
\[
\zeta = (\sigma \leq \tau_0 \geq \sigma_0 \leq \cdots \leq \tau_k \geq \sigma_k \leq \tau).
\]
By the constructibility requirement of Def \ref{def:fstrat}, every face relation in sight (whether $\leq$ or $\geq$) lies in $E$. And by the frontier requirement of that same definition, membership in $m$-strata is upward closed, so in particular all the $\sigma_\bullet$'s lie in $U$. Finally, since $\sigma$ is top-dimensional and $\tau_0 \geq \sigma$, we must have $\tau_0 = \sigma$. Thus, not only is our $\zeta$ a $W$-zigzag, but it also represents an invertible morphism in $\mathbb{F}ace_{W}(\mathscr{M})$. Indeed, a $W$-zigzag representing its inverse can be obtained simply by traversing backwards:
\[
\zeta^{-1} = (\tau \leq \tau \geq \sigma_k \leq \tau_k \geq \cdots \leq \tau_0 \geq \sigma \leq \sigma).
\]
This confirms that $\sigma$ and $\tau$ are isomorphic in $\mathbb{F}ace_{W}(\mathscr{M})$, as desired.
\end{proof}
Given the preceding result, the coarsest $m$-strata that one could hope to find are isomorphism classes of $m$-dimensional simplices in $\mathbb{F}ace_{W}(\mathscr{M})$.
\begin{proposition}
\label{prop:topstrat}
The canonical $m$-strata of $\mathscr{M}_\bullet$ are precisely the isomorphism classes of $m$-dimensional simplices in $\mathbb{F}ace_{W}(\mathscr{M})$.
\end{proposition}
\begin{proof}
Let $\sigma$ be an $m$-simplex of $\mathscr{M}$. We will show that the set $S$ of all $\tau$ which are isomorphic to $\sigma$ forms an $m$-stratum by verifying the frontier and constructibility axioms from Def \ref{def:fstrat} --- the dimension axiom is trivially satisfied since $\sigma \in S$. Note that for any $\tau \in S$ there exists some $W$-zigzag whose simplices all lie in $S$, and which represents an isomorphism from $\sigma$ to $\tau$ in $\mathbb{F}ace_{W}(\mathscr{M})$. (The existence of these zigzags shows that $S$ is connected). So let us fix for each $\tau \in S$ such a zigzag
\[
\zeta_\tau = (\sigma \leq \tau_0 \geq \sigma_0 \leq \cdots \geq \sigma_k \leq \tau),
\]
and assume it is horizontally reduced in the sense that none of its order relations (except possibly the first and last $\leq$) are equalities. Thus, all the $\sigma_d$'s in $\zeta_\tau$ lie in $U$. Upward closure of $U$ now forces simplices in $\category{st } \sigma_k$, which contains $\category{st } \tau$, to also lie in $S$. This shows that $S$ satisfies the frontier axiom, because any simplex of $\mathscr{M}$ with a face in $S$ must itself lie in $S$. We now turn to establishing constructibility. Since $\sigma$ is top-dimensional, we know that $\tau_0 = \sigma$, so in fact the first $\leq$ in $\zeta_\tau$ must be an equality. Consider the bisheaf data $\bsh{F}(\zeta_\tau)$ living over our zigzag:
\[
\xymatrix{
\shf{F}(\sigma) \ar@{->}[r] \ar@{->}[d] & \shf{F}(\tau_0) \ar@{->}[d] & \shf{F}(\sigma_0) \ar@{->}[d] \ar@{->}[r] \ar@{->}[l] & \cdots \ar@{->}[r] & \shf{F}(\tau_k) \ar@{->}[d] & \shf{F}(\sigma_k) \ar@{->}[d] \ar@{->}[l]\ar@{->}[r]& \shf{F}(\tau) \ar@{->}[d]\\
\csh{F}(\sigma) & \csh{F}(\tau_0) \ar@{->}[l]\ar@{->}[r]& \csh{F}(\sigma_0) & \cdots \ar@{->}[l] & \csh{F}(\tau_k)\ar@{->}[l] \ar@{->}[r] & \csh{F}(\sigma_k) &\ar@{->}[l]\csh{F}(\tau)
}
\]
(All horizontal homomorphisms in the top row are restriction maps of $\shf{F}$, all horizontal homomorphisms in the bottom row are extension maps of $\csh{F}$, and the vertical morphism in the column of a simplex $\nu$ is $F_\nu$). By definition of $W$ (and the fact that $\sigma = \tau_0)$, all horizontal maps in sight are isomorphisms, so in particular we may replace all left-pointing arrows in the top row and all the right-pointing arrows in the bottom row by their inverses to get abelian group isomorphisms $\phi_\tau:\shf{F}(\sigma) \to \shf{F}(\tau)$ and $\psi_\tau:\csh{F}(\tau) \to \csh{F}(\sigma)$ that fit into a commuting square with $F_\sigma$ and $F_\tau$. Now given any other simplex $\tau' \geq \tau$ lying in $S$, one can repeat the argument above with the bisheaf data $\bsh{F}\left(\zeta_{\tau'} \circ \zeta_\tau^{-1}\right)$ to confirm that
\[ \shf{F}(\tau \leq \tau') = \phi_{\tau'} \circ \phi_\tau^{-1} \quad \text{and} \quad
\csh{F}(\tau \leq \tau') = \psi_\tau^{-1} \circ \psi_{\tau'}.
\]
Thus both maps are isomorphisms, as desired.
\end{proof}
\subsection{Lower Strata}
Our final task is to determine which simplices lie in canonical strata of dimension $< m$. This is accomplished by iteratively modifying both the simplicial complex $\mathscr{M} = \mathscr{M}_m$ and the set of face relations $W = W_m$ which was defined in (\ref{eq:W}) above.
\begin{definition}
\label{def:subpair}
Given $d \in \{0,1,\ldots,m-1\}$, assume we have the pair $(\mathscr{M}_{d+1},W_{d+1})$ consisting of a simplicial complex $\mathscr{M}_{d+1}$ of dimension $\leq (d+1)$ and a collection $W_{d+1}$ of face relations in $\mathbb{F}ace(\mathscr{M})$. The subsequent pair $(\mathscr{M}_{d},W_{d})$ is defined as follows.
\begin{enumerate}
\item The set $\mathscr{M}_{d}$ is obtained from $\mathscr{M}_{d+1}$ by removing all the simplices which are isomorphic to some $(d+1)$-simplex in the localization $\mathbb{F}ace_{W_{d+1}}(\mathscr{M})$.
\item To define $W_{d}$, first consider the collection of simplices
\[
U_{d} = \{\sigma \in \mathbb{F}ace(\mathscr{M}_{d}) \mid (\sigma \leq \tau) \in E \text{ for all } \tau \in \textbf{st}_{d}~\sigma\};
\]
here $\textbf{st}_{d}~\sigma$ is the open star of $\sigma$ in $\mathscr{M}_{d}$ (i.e., the collection of all $\tau \in \mathscr{M}_{d}$ satisfying $\tau \geq \sigma$), while $E$ is the set of face relations defined in (\ref{eq:E}). Now, set
\[
W_{d} = W_{d+1} \cup \{(\sigma \leq \tau) \mid \sigma \in U_{d} \text{ and }\tau \in \textbf{st}_{d}~\sigma\}.
\]
\end{enumerate}
\end{definition}
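To make the two-step update in this definition concrete, here is an illustrative Python sketch (ours, not part of the paper). Simplices are modelled as frozensets of vertices, and the sets $W_{d+1}$ and $E$ as sets of ordered pairs $(\sigma,\tau)$ encoding face relations $\sigma \leq \tau$. As a simplifying assumption, two simplices are treated as isomorphic in the localization exactly when they are joined by a zigzag of relations in $W_{d+1}$, which reduces the isomorphism test to union--find connectivity:

```python
class UnionFind:
    """Connected components of the relation W (models zigzag connectivity)."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def refine(M_next, W_next, E, d):
    """One step (M_{d+1}, W_{d+1}) -> (M_d, W_d) of the definition.

    Simplices are frozensets of vertices; W_next and E are sets of
    ordered pairs (sigma, tau) encoding face relations sigma <= tau.
    """
    uf = UnionFind()
    for s, t in W_next:
        uf.union(s, t)
    # Step 1: drop every simplex W-connected to some (d+1)-simplex.
    top_classes = {uf.find(s) for s in M_next if len(s) == d + 2}
    M_d = {s for s in M_next if uf.find(s) not in top_classes}
    # Step 2: U_d = simplices whose entire open star (within M_d) lies in E.
    def open_star(sigma):
        return [t for t in M_d if sigma <= t]
    U_d = {s for s in M_d
           if all((s, t) in E for t in open_star(s) if t != s)}
    W_d = set(W_next) | {(s, t) for s in U_d for t in open_star(s) if t != s}
    return M_d, W_d

# Tiny example: the closed edge {a, b} with constant data (all relations in W).
a, b = frozenset({'a'}), frozenset({'b'})
ab = a | b
M0, W0 = refine({a, b, ab}, {(a, ab), (b, ab)}, set(), 0)
assert M0 == set()   # everything already belonged to the top stratum
```

Here the open star is computed inside $\mathscr{M}_d$ and reflexive relations are omitted, so the condition defining $U_d$ is imposed only on proper cofaces.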
\begin{proposition}
The sequence $\mathscr{M}_\bullet$ described in Def \ref{def:subpair} constitutes a filtration of the original simplicial complex $\mathscr{M}$ by subcomplexes with the property that $\dim \mathscr{M}_d \leq d$ for each $d \in \{0,1,\ldots,m\}$.
\end{proposition}
\begin{proof}
Since $\mathscr{M}_m = \mathscr{M}$ is manifestly its own $m$-dimensional subcomplex, it suffices by induction to show that if $\mathscr{M}_{d+1}$ is a simplicial complex of dimension $\leq (d+1)$, then the simplices in $\mathscr{M}_{d} \subset \mathscr{M}_{d+1}$ constitute a subcomplex of dimension $\leq d$. To this end, we will confirm that the difference $\mathscr{M}_{d+1} - \mathscr{M}_{d}$ satisfies two properties --- it must:
\begin{itemize}
\item contain all the $(d+1)$-simplices in $\mathscr{M}_{d+1}$, and
\item be upward closed with respect to the face partial order of $\mathscr{M}_{d+1}$.
\end{itemize} Since every $(d+1)$-simplex is isomorphic to itself in $\mathbb{F}ace_{W_{d+1}}(\mathscr{M})$ via the identity morphism, the first requirement is immediately met. And by definition of $W_{d+1}$, if an arbitrary simplex $\sigma$ of $\mathscr{M}_{d+1}$ is isomorphic to a $(d+1)$-simplex in $\mathbb{F}ace_{W_{d+1}}(\mathscr{M})$, then so are all the simplices that lie in its open star $\textbf{st}_{d+1}~\sigma \subset \mathscr{M}_{d+1}$. Thus, our second requirement is also satisfied and the desired conclusion follows.
\end{proof}
The structure of the sets $W_{\bullet}$ from Def \ref{def:subpair} enforces a convenient monotonicity among morphisms in the localization $\mathbb{F}ace_{W_\bullet}(\mathscr{M})$.
\begin{lemma}
\label{lem:mono}
For each $d \in \{0,1,\ldots,m\}$, there are no morphisms in the localization $\mathbb{F}ace_{W_d}(\mathscr{M})$ from any simplex $\sigma$ in the difference $\mathscr{M} - \mathscr{M}_d$ to a simplex $\tau$ of $\mathscr{M}_{d}$.
\end{lemma}
\begin{proof}
Any putative morphism from $\sigma$ to $\tau$ in $\mathbb{F}ace_{W_d}(\mathscr{M})$ would have to be represented by a $W_{d}$-zigzag, say
\[
\zeta = (\sigma \leq \tau_0 \geq \sigma_0 \leq \cdots \leq \tau_k \geq \sigma_k \leq \tau).
\] Note that all face relations appearing here, except possibly the first $(\sigma \leq \tau_0)$, must lie in $W_{d}$ by upward closure. Since $\sigma \in \mathscr{M} - \mathscr{M}_d$, it must be isomorphic in $\mathbb{F}ace_{W_{i}}(\mathscr{M})$ to an $i$-simplex in $\mathscr{M}_{i}$ for some $i > d$. But the very existence of a zigzag representing such an isomorphism requires the bisheaf $\bsh{F}$ to be constant on the open star $\textbf{st}_{i}~\sigma$, meaning that $(\sigma \leq \tau_0)$ must lie in $W_{i} \subset W_d$. Thus, all the face relations ($\leq$ and $\geq$) encountered in $\zeta$ lie in $W_d$, whence $\zeta$ must be an isomorphism in the localization $\mathbb{F}ace_{W_d}(\mathscr{M})$ (with its inverse being given by backwards traversal). But now, $\tau$ would also be isomorphic to some $i$-simplex in $\mathbb{F}ace_{W_{i}}(\mathscr{M})$ with $i > d$, which forces the contradiction $\tau \not\in \mathscr{M}_d$.
\end{proof}
Here is our main result.
\begin{theorem}
The sequence $\mathscr{M}_\bullet$ of simplicial complexes described in Def \ref{def:subpair} is the canonical $\bsh{F}$-stratification of $\mathscr{M}$. Moreover, for each $d \in \{0,1,\ldots,m\}$, the canonical $d$-strata of $\mathscr{M}_\bullet$ are isomorphism classes of $d$-simplices from $\mathscr{M}_d$ in the localization $\mathbb{F}ace_{W_d}(\mathscr{M})$.
\end{theorem}
\begin{proof}
We proceed by reverse-induction on $d$, with the base case $d=m$ being given by Prop \ref{prop:topstrat}. So we assume that the statement holds up to $(d+1)$, and establish that the canonical $d$-strata must be isomorphism classes of $d$-simplices from $\mathscr{M}_d$ in the localization $\mathbb{F}ace_{W_d}(\mathscr{M})$. Let $S$ denote the isomorphism class of a $d$-simplex $\sigma_*$ in $\mathscr{M}_d$. We will establish that $S$ satisfies all three axioms of Def \ref{def:fstrat}.
\begin{itemize}
\item {\bf Dimension:} clearly, $S$ contains a simplex $\sigma_*$ of dimension $d$; moreover, since $\dim \mathscr{M}_d \leq d$, all simplices of $\mathscr{M}$ with dimension $> d$ lie in $\mathscr{M} - \mathscr{M}_{d}$. None of these can be isomorphic in $\mathbb{F}ace_{W_d}(\mathscr{M})$ to $\sigma_*$ without contradicting Lem \ref{lem:mono}.
\item {\bf Frontier:} it suffices to check antisymmetry of the relation $\prec$: there should be no simplices $\sigma \leq \sigma'$ with $\sigma \in \mathscr{M}-\mathscr{M}_d$ and $\sigma' \in S$. But the existence of such a $\sigma \leq \sigma'$ would result in a $W_d$-zigzag from $\sigma$ to $\sigma_*$, which is prohibited by Lem \ref{lem:mono}.
\item {\bf Constructibility:} it is straightforward to adapt the argument from the proof of Prop \ref{prop:topstrat} --- given simplices $\tau \leq \tau'$ both in $S$, one can find $W_d$-zigzags from $\sigma_*$ to $\tau$ and to $\tau'$ which guarantee that $\shf{F}(\tau \leq \tau')$ and $\csh{F}(\tau \leq \tau')$ are both isomorphisms.
\end{itemize}
To confirm that the strata obtained in this fashion are canonical, one can re-use the argument from the proof of Prop \ref{prop:isostrat} to show that a simplex which lies in a $d$-stratum of any $\bsh{F}$-stratification is isomorphic in $\mathbb{F}ace_{W_d}(\mathscr{M})$ to a $d$-simplex from $\mathscr{M}_d$, meaning that the strata cannot be any larger than these isomorphism classes.
\end{proof}
Finally, we remark that since the sets $W_\bullet$ defined in Def \ref{def:subpair} form a sequence that increases as $d$ decreases, every $W_d$-zigzag is also a $W_{d-1}$-zigzag, and so forth. Therefore, successive localization of $\mathbb{F}ace(\mathscr{M})$ about these $W_\bullet$'s creates a nested sequence of categories:
\[
\mathbb{F}ace_{W_m}(\mathscr{M}) \hookrightarrow \mathbb{F}ace_{W_{m-1}}(\mathscr{M}) \hookrightarrow \cdots \hookrightarrow \mathbb{F}ace_{W_1}(\mathscr{M}) \hookrightarrow \mathbb{F}ace_{W_0}(\mathscr{M}).
\]
And thanks to the monotonicity guaranteed by Lem \ref{lem:mono}, isomorphism classes of $d$-simplices from $\mathscr{M}_d$ in $\mathbb{F}ace_{W_d}(\mathscr{M})$ are stable under the inclusion into $\mathbb{F}ace_{W_i}(\mathscr{M})$ for $i \leq d$, since no simplex of $\mathscr{M}_i$ can ever become isomorphic to a simplex from $\mathscr{M}_d - \mathscr{M}_i$ in this entire sequence of categories. Consequently, we can extract all the canonical strata just by examining isomorphism classes in a single category.
\begin{corollary}
The $d$-dimensional strata of the canonical $\bsh{F}$-stratification of $\mathscr{M}$ are isomorphism classes of $d$-simplices from $\mathscr{M}_{d}$ in $\mathbb{F}ace_{W_0}(\mathscr{M})$.
\end{corollary}
\end{document} |
\begin{document}
\title{Virtual Specht stability for $FI$-modules in positive characteristic}
\author{{Nate Harman} \\
\textit{\small Department of Mathematics} \\
\textit{\small Massachusetts Institute of Technology} \\
\textit{\small Cambridge, MA, 02139, USA}\\
\texttt{\small [email protected]}}
\maketitle
\begin{abstract}
We define a notion of virtual Specht stability which is a relaxation of the Church-Farb notion of representation stability for sequences of symmetric group representations. Using a structural result of Nagpal, we show that $FI$-modules over fields of positive characteristic exhibit virtual Specht stability.
\end{abstract}
2010 {\it Mathematics Subject Classification:} 20C30.
{\it Keywords:} representation stability, $FI$-modules
\begin{section}{Introduction}
Church and Farb defined the notion of representation stability for a sequence of symmetric group representations \cite{CF} over a field of characteristic zero. Most notably, it has been shown that the sequence of representations defined by a finitely generated $FI$-module over a field of characteristic zero exhibits representation stability \cite{CEF}.
$FI$-modules can be defined over fields of positive characteristic and have been studied in that setting fairly extensively in \cite{CEFN} and \cite{Nagpal}; however, so far there has been no replacement for the notion of representation stability that holds in this context. The purpose of this note is to define a notion of virtual Specht stability which is a relaxation of representation stability that holds for finitely generated $FI$-modules over fields of arbitrary characteristic.
We'll briefly note that the theory of finitely generated $FI$-modules has been applied in numerous situations in geometry, topology, and algebra. Theorem \ref{virtstab} will immediately imply that virtual Specht stability holds in many of those applications. In particular virtual Specht stability will hold for the mod-$p$ cohomology of configuration spaces $\text{Conf}_n(M)$, as well as for the homology of congruence subgroups $\Gamma_n(p)$ with coefficients in $\mathbb{F}_p$. See \cite{CEF} and \cite{CEFN} for a discussion of these and other examples.
\begin{subsection}{$FI$-modules and representation stability}
Let $FI$ denote the category where the objects are finite sets, and morphisms are injections. An $FI$-module over a commutative ring $k$ is a covariant functor $V$ from $FI$ to the category of modules over $k$.
Since the set of $FI$-endomorphisms of the set $[n] = \{1,2,\dots,n\}$ is the symmetric group $S_n$, an $FI$-module $V$ can be thought of as a sequence of representations $V_n$ (the image of $[n]$ under the functor) of $S_n$ for each $n$ along with compatibility maps from $V_n$ to $V_m$ for every injection from $[n]$ to $[m]$. An $FI$-module is said to be finitely generated in degree at most $d$ if all the $V_n$ are finitely generated as $k$-modules and $V_n$ is spanned by the images of $V_d$ under all injections from $[d]$ to $[n]$ for all $n > d$.
$FI$-modules were introduced by Church, Ellenberg, and Farb in \cite{CEF}, where it was shown that over a field of characteristic zero the sequence of symmetric group representations defined by a finitely generated $FI$-module exhibits the phenomenon known as representation stability as defined by Church and Farb in \cite{CF}.
If $\lambda = (\lambda_1, \lambda_2, \dots, \lambda_\ell)$ is a partition, then for $n \ge |\lambda|+\lambda_1$ let $\lambda[n] = (n - |\lambda|, \lambda_1, \lambda_2, \dots, \lambda_\ell)$. In other words, $\lambda[n]$ is the partition obtained by taking $\lambda$ and adding a large first part to make it have size $n$. Explicitly, the stability result of Church, Ellenberg, and Farb can be stated as follows:
\begin{theorem}{\textbf{(\cite{CEF} Proposition 3.3.3)}}\label{stab}
Let $V$ be a finitely generated $FI$-module over a field of characteristic zero. There exist non-negative integer constants $c_\lambda$ independent of $n$ and nonzero for finitely many partitions such that for all $n \gg 0$
$$V_n = \bigoplus_\lambda c_\lambda S^{\lambda[n]}$$
as a representation of the symmetric group $S_n$, where $S^{\lambda[n]}$ denotes the irreducible Specht module associated to the partition $\lambda[n]$.
\end{theorem}
In positive characteristic, representations of symmetric groups do not in general decompose into a direct sum of irreducible representations. So one might expect that for $FI$-modules over a field $k$ of characteristic $p > 0$ things are more complicated, and indeed that is the case.
As an example, consider the natural $FI$-module $V$ sending a finite set $S$ to $k[S]$, the space of formal linear combinations of elements of $S$. In this example we have that $V_n \cong k^n$ with the action of $S_n$ permuting the coordinates in the usual way. This is a direct sum of two irreducible representations if $p \nmid n$, and is an indecomposable representation with three irreducible composition factors whenever $p\mid n$ (and $n >2$). So even in this basic example, we see that things are more complicated in positive characteristic.
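As a toy illustration of this $FI$-module (our sketch, not from the paper), one can implement it directly: an injection $f\colon [n] \to [m]$ sends the basis vector $e_i$ of $V_n = k^n$ to $e_{f(i)}$, and reducing coordinates mod $p$ realizes everything over $\mathbb{F}_p$:

```python
def apply_injection(f, vec, m, p):
    """Image of vec in V_m = F_p^m under the map induced by the injection
    f: [n] -> [m], given as a tuple of distinct indices in range(m)."""
    assert len(set(f)) == len(f)  # f must be injective
    out = [0] * m
    for i, c in enumerate(vec):
        out[f[i]] = (out[f[i]] + c) % p
    return out

p = 3
v = [1, 1, 1]                  # the all-ones vector in V_3 over F_3
assert sum(v) % p == 0         # since p | n, it lies in the sum-zero subspace
w = apply_injection((0, 2, 4), v, 5, p)
assert w == [1, 0, 1, 0, 1]    # its image in V_5 is no longer S_5-invariant
```

For $p \mid n$ the invariant line spanned by $(1,\dots,1)$ sits inside the sum-zero subspace, which is the source of the extra composition factor noted above.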
\end{subsection}
\begin{subsection}{Specht modules and Specht stability}
Specht modules are a well behaved class of symmetric group representations defined over arbitrary commutative rings. While in general they are not irreducible, they play an important role in the representation theory of symmetric groups. We will briefly review some facts about Specht modules that will be important for our purposes. All of the following results can be found in James's book on the representation theory of symmetric groups \cite{James}.
\begin{itemize}
\item For a partition $\lambda$ of $n$ the integral Specht module $S^\lambda_\mathbb{Z}$ is a representation of $S_n$ over $\mathbb{Z}$ which is free of finite rank as a $\mathbb{Z}$-module. For an arbitrary commutative ring $k$ the Specht module $S^\lambda_k$ over $k$ is isomorphic to $S^\lambda_\mathbb{Z} \otimes_\mathbb{Z} k$. From now on we will drop the subscript when the base ring is clear.
\item $S^\lambda_\mathbb{Z}$ is naturally equipped with a nondegenerate $S_n$-equivariant symmetric bilinear form. This gives rise to a (possibly degenerate) $S_n$-equivariant symmetric bilinear form on $S^\lambda_k$ over any ring $k$. Let $S^{\lambda \perp}_k$ denote the radical of this form, which is naturally an $S_n$-invariant subspace of $S^\lambda_k$.
\item Over a field $k$ of characteristic $p$ the quotient $S^\lambda / S^{\lambda \perp}$ is non-zero if and only if $\lambda$ is $p$-regular (meaning $\lambda$ does not have $p$ parts of the same size), in which case it is irreducible. For $p$-regular partitions $\lambda$, let $D^\lambda$ denote this irreducible quotient.
\item Over a field of characteristic $p$ the $D^\lambda$ form a complete set of pairwise non-isomorphic irreducible representations of $S_n$ as $\lambda$ runs over the set of all $p$-regular partitions of $n$.
\item Over a field of characteristic $p$ the irreducible composition factors of $S^\lambda$ are all of the form $D^\mu$ with $\mu \ge \lambda$ in the dominance order. Moreover, for $p$-regular $\lambda$, $D^\lambda$ occurs in $S^\lambda$ with multiplicity one.
\end{itemize}
Specht modules are much better behaved and easier to work with than irreducible representations, so one could hope to generalize representation stability in terms of them. Putman defined a notion of \emph{Specht stability} \cite{Putman} for a sequence of symmetric group representations in which for all sufficiently large $n$ the $n$th term $V_n$ admits a filtration $$ 0 = V_n^{0} \subset V_n^1 \subset V_n^2 \subset \dots \subset V_n^d = V_n$$ where the graded pieces $V_n^i / V_n^{i-1}$ are isomorphic to Specht modules $S^{\lambda_i[n]}$ with $\lambda_i$ not depending on $n$.
The previous example sending a set $S$ to $k[S]$ can easily be seen to be Specht stable (of length $2$) by letting $V_n^1$ be the space of formal linear combinations of elements of $[n]$ whose coefficients sum to zero. Putman showed that any $FI$-module presented in sufficiently small degree relative to the characteristic of $k$ will exhibit Specht stability (\cite{Putman} Theorem E). However, without the restriction on presentation degree, Putman's theorem fails and there are $FI$-modules which do not exhibit Specht stability.
\end{subsection}
\begin{subsection}{Grothendieck groups}
Recall that the Grothendieck group $G_0(\mathcal{C})$ of an (essentially small) abelian category $\mathcal{C}$ is the abelian group generated by symbols $[X]$ for every object $X$ of $\mathcal{C}$, subject to the relation $[X] - [Y] + [Z] = 0$ for every short exact sequence $$0 \to X \to Y \to Z \to 0$$
of objects in $\mathcal{C}$. If every object of $\mathcal{C}$ has finite length (i.e. admits a finite Jordan-H\"older filtration) then $G_0(\mathcal{C})$ is the free abelian group generated by the classes $[X]$ of simple objects $X$. In this case, given an arbitrary object $Y$, its class $[Y]$ in $G_0(\mathcal{C})$ is a formal linear combination of irreducible classes $[X]$ with coefficients corresponding to the multiplicity of $X$ in a Jordan-H\"older series for $Y$.
One could ask whether, for a finitely generated $FI$-module $V$, the expression for $[V_n]$ in terms of irreducible classes $[D^\lambda]$ in the Grothendieck group of the category of finite dimensional representations of $S_n$ over $k$ stabilizes if we identify $[D^{\lambda[n]}]$ for different values of $n$. However, in our example from before of sending a finite set $S$ to $k[S]$ we have the following inside the Grothendieck group for all $n > 2$:
\[
[V_n]=
\begin{cases}
\hphantom{2}[D^{(n)}] + [D^{(n-1,1 )}] &\text{if $p \nmid n$},\\[2ex]
2[D^{(n)}] + [D^{(n-1,1 )}] &\text{if $p \mid n$}.
\end{cases}
\]
This example illustrates a general fact about finitely generated $FI$-modules in characteristic $p$. Rather than stabilizing as in characteristic zero, the expression for $[V_n]$ in terms of irreducible classes becomes periodic with period a power of $p$ (see \cite{Harman} section 3.2). While this sort of periodicity is certainly interesting, one often still wants to think of the sequence of symmetric group representations coming from a finitely generated $FI$-module as being stable in some sense.
\end{subsection}
\begin{subsection}{Virtual Specht stability}
The main new idea in this paper is to combine the two ideas above and work in terms of Specht modules inside the Grothendieck groups to obtain a version of representation stability for finitely generated $FI$-modules over fields of arbitrary characteristic. Our main result is the following virtual relaxation of Theorem \ref{stab}, which says that we see stability when expressing $[V_n]$ in terms of Specht classes in the Grothendieck group:
\begin{theorem}\label{virtstab}
Let $V$ be a finitely generated $FI$-module over a field $k$ of positive characteristic. There exist (possibly negative) integer constants $c_\lambda$ independent of $n$ and nonzero for only finitely many partitions such that for all $n \gg 0$
$$[V_n] = \sum_\lambda c_\lambda [S^{\lambda[n]}]$$
inside the Grothendieck group of the category of finite dimensional representations of $S_n$ over $k$, where $S^{\lambda[n]}$ denotes the Specht module associated to the partition $\lambda[n]$.
\end{theorem}
We say that a sequence of symmetric group representations $V_n$ (whether or not it comes from an $FI$-module) satisfying the conclusion of Theorem \ref{virtstab} exhibits \emph{virtual Specht stability}. In particular, any sequence of representations exhibiting Putman's notion of Specht stability clearly also exhibits virtual Specht stability.
We'll note that in positive characteristic the Specht modules are in general not irreducible, and moreover their classes do not even form a basis for the Grothendieck group (they do span, but there are linear relations between them). So it is possible that each $[V_n]$ can be expressed in terms of Specht classes in multiple ways. Virtual Specht stability a priori just requires that, among these ways of writing $[V_n]$ in terms of Specht classes, there is one that is consistent for all sufficiently large $n$.
Moreover we'll note that for a fixed $FI$-module the choice of the coefficients $c_\lambda$ in the theorem is not unique. So while this definition of virtual Specht stability will prove easy to work with for our purposes, we'd like to be able to make a statement that involves fewer choices and has a unique output.
Instead of using all Specht modules, we could consider just those corresponding to $p$-regular partitions $\lambda$, which do form a basis of the Grothendieck group (unitriangular relative to the basis of irreducibles). Instead of asking for the existence of some consistent way of expressing $[V_n]$ in terms of Specht classes, we could ask the stronger question of whether the unique expression for $[V_n]$ as a linear combination of $p$-regular Specht classes stabilizes. Our next theorem says that, in fact, this is equivalent to our (seemingly) weaker notion of virtual Specht stability. Note that $\lambda[n]$ is $p$-regular for $n > 2|\lambda|$ if and only if $\lambda$ is $p$-regular.
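The final observation above is easy to check mechanically: for $n > 2|\lambda|$ the new first part $n - |\lambda|$ is strictly larger than every part of $\lambda$, so it cannot create a run of $p$ equal parts. A small illustrative helper (our names, not from the paper):

```python
def is_p_regular(lam, p):
    """lam is p-regular if no part occurs p (or more) times."""
    return all(lam.count(part) < p for part in set(lam))

def pad(lam, n):
    """lam[n] = (n - |lam|, lam_1, ..., lam_l), defined for n >= |lam| + lam_1."""
    size = sum(lam)
    assert n >= size + (lam[0] if lam else 0)
    return (n - size,) + tuple(lam)

# For n > 2|lam|, padding preserves p-regularity in both directions.
for p in (2, 3, 5):
    for lam in [(), (1,), (1, 1), (2, 1), (3, 3, 1), (2, 2, 2)]:
        for n in range(2 * sum(lam) + 1, 2 * sum(lam) + 6):
            assert is_p_regular(pad(lam, n), p) == is_p_regular(lam, p)
```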
\begin{theorem} \label{unique}
A sequence of representations $V_n$ of $S_n$ over a field $k$ of characteristic $p$ exhibits virtual Specht stability if and only if there exist (possibly negative) integer constants $b_\lambda$ independent of $n$ and nonzero for only finitely many $p$-regular partitions $\lambda$ such that for all $n \gg 0$
$$[V_n] = \sum_\lambda b_\lambda [S^{\lambda[n]}]$$
inside the Grothendieck group of the category of finite dimensional representations of $S_n$ over $k$.
\end{theorem}
So in particular, finitely generated $FI$-modules over a field of characteristic $p$ also exhibit this ``stronger'' version of virtual Specht stability. In fact we will prove a slightly stronger version of Theorem \ref{unique} which also gives control over how two equivalent stable expressions can differ, but we will wait until Section \ref{sectequiv} to state the stronger result.
\end{subsection}
\end{section}
\begin{section}*{Acknowledgments}
This result was formulated, proved, and subsequently presented by the author while at the American Institute of Mathematics (AIM) workshop on representation stability in June 2016. Thanks to Andrew Putman, Steven Sam, Andrew Snowden, David Speyer, and the AIM staff for organizing the workshop and for inviting me to speak. Thanks as well to Pavel Etingof, Benson Farb, Andrew Putman, and Jordan Ellenberg for helpful comments and conversations. This work was partially supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1122374.
\end{section}
\begin{section}{Virtual Specht stability for $FI$-modules}
In this section we will prove Theorem \ref{virtstab}. The main tool we will need is a powerful structural result for $FI$-modules over arbitrary Noetherian rings due to Nagpal. Before stating the result, we will review a few definitions.
Let $W$ be a representation of $S_m$ over an arbitrary Noetherian ring $k$ (for our purposes, $k$ will be a field of positive characteristic). The $FI$-module $M(W)$ is the functor that takes a finite set $S$ to the vector space $k[\text{Hom}_{FI}([m],S)] \otimes_{k[S_m]} W$. In other words, $M(W)_n$ is the representation of $S_n$ obtained by first extending $W$ to an $S_m \times S_{n-m}$ representation by letting the second factor act trivially, and then inducing up to $S_n$.
Such modules, and direct sums thereof, are exactly those $FI$-modules which admit the structure of an $FI\sharp$-module (See \cite{CEF}, section 4). We say that an $FI$-module $V$ is $\sharp$-filtered if it admits a filtration $$0 = V^0 \subset V^1 \subset \ldots \subset V^d = V$$ of $FI$-submodules such that the graded pieces $V^i / V^{i-1}$ are of the form $M(W)$ for some $W$. Informally, the result of Nagpal we will need says that, up to torsion, every $FI$-module admits a resolution by $\sharp$-filtered modules. More precisely, it says:
\begin{theorem}{\textbf{(\cite{Nagpal} Theorem A)}}\label{rohit} For an arbitrary finitely generated $FI$-module $V$, there exist $\sharp$-filtered $FI$-modules $J^1, J^2, J^3, \dots, J^N$ and maps of $FI$-modules $\phi^0: V \to J^1, \ \phi^1: J^1 \to J^2, \ \dots, \ \phi^{N-1}: J^{N-1} \to J^N$ such that for all $n \gg 0$
$$0 \to V_n \to J^1_n \to J^2_n \to \dots \to J^N_n \to 0$$
is an exact sequence of $S_n$ representations.
\end{theorem}
In addition to Nagpal's result on $FI$-modules, we will need one standard fact from the modular representation theory of symmetric groups. Informally, the following theorem says that at the level of Grothendieck groups induction between Specht modules behaves the same as in characteristic zero.
\begin{theorem} \textbf{(\cite{James} Corollary 17.14)} \label{spechtfilt} For arbitrary partitions $\lambda \vdash n, \ \mu \vdash m$ and over an arbitrary field $k$, the induced representation $V = Ind_{S_n \times S_m}^{S_{n+m}}(S^\lambda \boxtimes S^\mu)$ has a filtration $$0 = V^0 \subset V^1 \subset \ldots \subset V^d = V$$ such that the graded pieces $V^i / V^{i-1}$ are isomorphic to Specht modules $S^\nu$, with $S^\nu$ occurring with multiplicity equal to the Littlewood-Richardson coefficient $c_{\lambda,\mu}^{\nu}$.
\end{theorem}
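In the only case needed below, where the second factor $S^\mu = S^{(n-m)}$ corresponds to a single row, the Littlewood-Richardson coefficients specialize to the Pieri rule: $c_{\lambda,(r)}^{\nu}$ is $1$ if $\nu/\lambda$ is a horizontal strip of $r$ boxes (at most one box per column) and $0$ otherwise. A short illustrative enumeration of these $\nu$ (our sketch, not from the paper):

```python
def pieri(lam, r):
    """Partitions nu obtained from lam by adding a horizontal strip of r
    boxes: nu contains lam rowwise, nu_{i+1} <= lam_i (one box per column),
    and |nu| = |lam| + r."""
    lam = tuple(lam)
    out = []
    def grow(i, prev, rem, acc):
        if i == len(lam) + 1:          # at most one new row can appear
            if rem == 0:
                out.append(tuple(x for x in acc if x > 0))
            return
        lo = lam[i] if i < len(lam) else 0
        hi = min(prev, lam[i - 1] if i >= 1 else lo + rem, lo + rem)
        for v in range(lo, hi + 1):
            grow(i + 1, v, rem - (v - lo), acc + [v])
    grow(0, sum(lam) + r, r, [])
    return out

# Ind from S_3 x S_3 to S_6 of S^{(2,1)} boxtimes S^{(3)} has a Specht
# filtration with graded pieces indexed by exactly these four partitions:
assert sorted(pieri((2, 1), 3)) == [(3, 2, 1), (4, 1, 1), (4, 2), (5, 1)]
```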
\noindent \textbf{Proof of Theorem \ref{virtstab}:} The proof will be by a series of reductions to a known case from characteristic zero. First, Nagpal's result allows us to replace $[V_n]$ by an alternating sum of the $[J^i_n]$'s inside the Grothendieck group for all sufficiently large $n$. Hence it is enough to prove the result for $\sharp$-filtered $FI$-modules.
Next, we know by definition that $\sharp$-filtered modules admit a filtration with quotients of the form $M(W)$, where $W$ is a representation of some $S_m$. At the level of Grothendieck groups such filtrations just become sums, so it is enough to prove the result for the modules of the form $M(W)$.
Now since the functor $W \mapsto M(W)_n$ (which again is given by extending $W$ to an $S_m \times S_{n-m}$ representation by letting $S_{n-m}$ act trivially and then inducing up to $S_n$) is exact, it descends to a linear map on Grothendieck groups, so it is enough to prove the claim for a collection of representations $W$ that span the Grothendieck groups of symmetric groups.
We know that the Specht modules $S^\lambda$ are exactly such a class of representations. Finally, Theorem \ref{spechtfilt} tells us that $M(S^\lambda)_n = Ind_{S_m \times S_{n-m}}^{S_{n}}(S^\lambda \boxtimes S^{(n-m)})$ has a filtration by Specht modules with the same multiplicities as in characteristic zero, where we know stabilization occurs. $\square$
One consequence of representation stability in characteristic zero is that the characters $\chi_{V_n}$ of the $S_n$ representations $V_n$ are eventually polynomial functions in the number of cycles of given lengths (\cite{CEF} Theorem 1.5). Since Specht modules are defined over the integers, their Brauer characters agree with their usual characteristic-zero characters. This, together with Theorem \ref{virtstab}, immediately implies the following positive characteristic analog of this fact:
\begin{corollary}
If $V$ is a finitely generated $FI$-module over a field of positive characteristic then the sequence of Brauer characters $\hat{\chi}_{V_n}$ of the $S_n$ representations $V_n$ is eventually polynomial.
\end{corollary}
\end{section}
\begin{section}{Equivalent stable presentations}\label{sectequiv}
For this section all representations are over a fixed field $k$ of characteristic $p > 0$. We will refer to ``the Grothendieck group for $S_n$'', which should be taken to mean ``the Grothendieck group of the category of finite dimensional representations of $S_n$ over $k$''.
For a fixed value of $n$ we know that the Specht classes $[S^\lambda]$ span the Grothendieck group for $S_n$, but since there are more Specht modules than irreducible representations there are linear relations among the Specht classes.
One might hope that when we pass to the stable setting that these relations go away, since after all we are requiring a relation to hold in infinitely many Grothendieck groups simultaneously. Unfortunately, there is an easy source of stable linear relations between Specht classes that we will outline now.
Let $\sum c_\lambda[S^\lambda] = 0$ be a linear relation between Specht classes inside the Grothendieck group for $S_m$. Then we know that $$\sum c_\lambda [M(S^\lambda)_n] = \sum c_\lambda [Ind_{S_m \times S_{n-m}}^{S_{n}}(S^\lambda \boxtimes S^{(n-m)})] =0$$
in the Grothendieck group for $S_n$ for all $n > m$. If we then expand each term into Specht classes using Theorem \ref{spechtfilt}, we obtain a stable expression of the form $\sum d_\lambda[S^{\lambda[n]}] = 0$ with $d_\lambda$ not depending on $n$, $d_\lambda = c_\lambda$ if $|\lambda| = m$, and $d_\lambda = 0$ if $|\lambda|>m$, which holds for all $n > 2m$. We will call such an expression an \emph{induced expression for zero}.
In particular this implies that the stable expression for a finitely generated $FI$-module in terms of Specht classes guaranteed by Theorem \ref{virtstab} is far from unique: we can just add an induced expression for zero as constructed above to obtain another equivalent stable expression.
We'd like a uniqueness statement about the expressions in terms of Specht classes, and there are (at least) two ways we could try to obtain one. First, we can restrict to those Specht classes corresponding to $p$-regular $\lambda$ as in the statement of Theorem \ref{unique}, and ask if the unique expression in terms of these stabilizes. Alternatively, we could ask if a stable expression for an $FI$-module (or other virtually Specht stable sequence) is unique up to a linear combination of induced expressions for zero.
The following proposition does both of these simultaneously, immediately implying both Theorem \ref{unique} and that any two stable expressions for the same $FI$-module differ by a linear combination of induced expressions for zero.
\begin{proposition}\label{unique2}
For any expression of the form $\sum c_\lambda[S^{\lambda[n]}]$ there is an expression $\sum d_\lambda[S^{\lambda[n]}]$ equivalent to the first inside the Grothendieck group for $S_n$ for all sufficiently large $n$ such that $d_\lambda = 0$ for $p$-singular $\lambda$, and the difference $\sum c_\lambda[S^{\lambda[n]}] - \sum d_\lambda[S^{\lambda[n]}]$ is a linear combination of induced expressions for zero.
\end{proposition}
\noindent \textbf{Proof:} Let $\lambda_0$ be such that $\lambda_0[n]$ is minimal in the dominance order among those $p$-singular partitions $\lambda$ with $c_\lambda \ne 0$, and let $m = |\lambda_0|$. Inside the Grothendieck group for $S_{m}$ we know we can write:
$$[S^{\lambda_0}] = \sum_{\lambda > \lambda_0 } b_\lambda [S^\lambda]$$
for some integer coefficients $b_\lambda$. Applying the functor $M(*)_n$ to both sides, expanding in terms of Specht classes using Theorem \ref{spechtfilt}, and rearranging gives an induced expression for zero of the form $$[S^{\lambda_0[n]}] - \sum_{\lambda[n] > \lambda_0[n] } b'_\lambda [S^{\lambda[n]}] = 0$$
for all sufficiently large $n$. If we subtract off $c_{\lambda_0}$ times this from our expression $\sum c_\lambda[S^{\lambda[n]}]$ we get an equivalent expression $\sum c'_\lambda[S^{\lambda[n]}]$ where $c'_{\lambda_0} = 0$ differing from the first by Specht classes corresponding to partitions $\lambda[n]$ larger than $\lambda_0[n]$ in the dominance order.
Since the (stable) dominance order has no infinite ascending chains, we can repeat this process until there are no non-zero coefficients for Specht classes corresponding to $p$-singular partitions. $\square$
\end{section}
\begin{section}{Examples}
Finally we will finish by giving a couple of examples to illustrate our results.
\noindent \textbf{Example 1:} We return to the example from the first section: the $FI$-module $V$ that sends a finite set $S$ to $k[S]$. Inside $V_n$ there is the subspace of formal linear combinations of elements of $[n]$ where the sum of the coefficients is zero. This space is isomorphic to the Specht module $S^{(n-1,1)}$. The quotient of $V_n$ by this subspace is a copy of the trivial representation, which is itself the Specht module $S^{(n)}$. Hence we see that $[V_n] = [S^{(n-1,1)}]+[S^{(n)}]$ for all $n \ge 2$. This holds even when the characteristic of $k$ divides $n$, in which case the Specht module $S^{(n-1,1)}$ is not irreducible as it contains a $1$-dimensional invariant subspace. Of course this example also satisfies Putman's stronger notion of (non-virtual) Specht stability.
\noindent \textbf{Example 2:} Let $k$ be a field of characteristic $5$, and consider the standard representation of $S_5$ acting on $k^5$ by permuting the coordinates. This contains two proper $S_5$ invariant subspaces: the $4$-dimensional space of vectors with coordinate sum zero (i.e. the Specht module $S^{(4,1)}$), and the $1$-dimensional space of invariants spanned by the vector $(1,1,1,1,1)$. Since we are in characteristic $5$, this vector $(1,1,1,1,1)$ has coordinate sum zero and therefore the space of invariants is contained in the Specht module. Let $W = D^{(4,1)}$ be the $3$-dimensional quotient of $S^{(4,1)}$ by this invariant subspace.
Now consider the $FI$-module $M(W)$. Unlike the first example, the representations $M(W)_n$ in general do not admit a filtration where the successive quotients are Specht modules. Nevertheless, by construction we see that in the Grothendieck group $[W] = [S^{(4,1)}] - [S^{(5)}]$. Applying the Pieri rule to this expression termwise and simplifying we obtain an expression for $[M(W)_n]$ for all $n \ge 10$: $$[M(W)_n] = [S^{(n-5,4,1)}] + [S^{(n-4,3,1)}] + [S^{(n-3,2,1)}] + [S^{(n-2,1,1)}] - [S^{(n-5,5)}] - [S^{(n)}]$$
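Since Specht module dimensions are characteristic-independent (given by the hook length formula), the Pieri-rule computation in Example 2 admits a quick numerical sanity check: the alternating sum of Specht dimensions must equal $\dim M(W)_n = \binom{n}{5}\dim W = 3\binom{n}{5}$. The following sketch is our own illustration and not part of the paper; the function names are ours.

```python
from itertools import product
from math import comb, factorial

def dim_specht(mu):
    """Dimension of the Specht module S^mu via the hook length formula
    (the dimension is the same in every characteristic)."""
    n = sum(mu)
    hooks = 1
    for i, row in enumerate(mu):
        for j in range(row):
            arm = row - j - 1
            leg = sum(1 for r in mu[i + 1:] if r > j)
            hooks *= arm + leg + 1
    return factorial(n) // hooks

def pieri(lam, m):
    """All partitions mu such that mu/lam is a horizontal strip of m boxes."""
    k = len(lam)
    total = sum(lam) + m
    # row i (for 1 <= i < k) must satisfy lam[i] <= mu_i <= lam[i-1];
    # a possible new last row satisfies 0 <= mu_k <= lam[k-1]
    ranges = [range(lam[i], lam[i - 1] + 1) for i in range(1, k)]
    ranges.append(range(0, lam[k - 1] + 1))
    out = []
    for tail in product(*ranges):
        mu0 = total - sum(tail)
        if mu0 >= lam[0] and mu0 >= tail[0]:
            out.append((mu0,) + tuple(t for t in tail if t > 0))
    return out

def dim_M_W(n):
    """Alternating sum of Specht dimensions from [W] = [S^{(4,1)}] - [S^{(5)}]."""
    plus = sum(dim_specht(mu) for mu in pieri((4, 1), n - 5))
    minus = sum(dim_specht(mu) for mu in pieri((5,), n - 5))
    return plus - minus
```

Running `dim_M_W(n)` for several $n \ge 10$ confirms the dimension identity.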
\noindent \textbf{Example 3:} Let $W = S^{(1,1,1)}$ be the sign representation of $S_3$ over a field of characteristic $3$. We know that $M(W)$ exhibits Specht stability by Theorem \ref{spechtfilt}, and at the level of Grothendieck groups we have that $$[M(W)_n]= [S^{(n-2,1,1)}]+[S^{(n-3,1,1,1)}]$$
for $n \ge 4$. This satisfies our definition of virtual Specht stability; however, it involves a $3$-singular partition. Theorem \ref{unique} tells us that we should also see stability when expressing $[M(W)_n]$ in terms of Specht classes for $3$-regular partitions. In this case we have that $$[S^{(1,1,1)}] = [D^{(2,1)}] = [S^{(2,1)}]-[S^{(3)}]$$ which by the Pieri rule tells us that
$$[M(W)_n] = [S^{(n-3,2,1)}]+[S^{(n-2,1,1)}] - [S^{(n-3,3)}] - [S^{(n)}]$$
\end{section}
\end{document} |
\begin{document}
\setlength{\jot}{0pt}
\title{On two questions by Finch and Jones about Perfect Order Subset Groups}
\date{\today}
\author{Bret Benesh}
\address{
Department of Mathematics,
College of Saint Benedict and Saint John's University,
37 College Avenue South,
Saint Joseph, MN 56374-5011, USA
}
\email{[email protected]}
\begin{abstract}
A finite group $G$ is said to be a {\it POS-group} if the number of elements of every order occurring in $G$ divides $|G|$. We answer two questions by Finch and Jones in~\cite{finch2002curious} by providing an infinite family of nonabelian POS-groups with orders not divisible by $3$.
\end{abstract}
\maketitle
Let $G$ be a finite group, and define the {\it order subset of an element $x \in G$} to be $\{g \in G \mid o(g)=o(x)\}$, where $o(x)$ denotes the order of $x$. We say that $G$ has {\it perfect order subsets} if the number of elements in every order subset divides $|G|$; in this case, we say that $G$ is a POS-group. It is easy to see that $\mathbb{Z}_2$, $\mathbb{Z}_4$, and the symmetric group $S_3$ are POS-groups, whereas $\mathbb{Z}_3$, $\mathbb{Z}_5$, and $S_4$ are not.
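These small cases are easy to verify by brute force. The following sketch (ours, not part of the original note) counts element orders in the symmetric group $S_n$ and checks the POS property:

```python
from itertools import permutations
from math import factorial

def compose(p, q):
    """Composition of permutations given as tuples: (p o q)(i) = p[q[i]]."""
    return tuple(p[i] for i in q)

def perm_order(p):
    """Order of a permutation, by repeated composition."""
    ident = tuple(range(len(p)))
    power, k = p, 1
    while power != ident:
        power, k = compose(p, power), k + 1
    return k

def is_pos_symmetric(n):
    """Brute-force POS check for the symmetric group S_n."""
    counts = {}
    for p in permutations(range(n)):
        o = perm_order(p)
        counts[o] = counts.get(o, 0) + 1
    return all(factorial(n) % c == 0 for c in counts.values())
```

For example, `is_pos_symmetric(3)` is true (order subsets of sizes $1$, $3$, $2$ all divide $6$), while `is_pos_symmetric(4)` is false ($S_4$ has $9$ elements of order $2$, and $9 \nmid 24$).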
This definition is due to Finch and Jones, who worked with abelian groups~\cite{finch2002curious} and direct products of abelian groups with $S_3$~\cite{finch2003nonabelian}. They posed the following open questions at the end of~\cite{finch2002curious}.
\begin{enumerate}
\item Are there nonabelian POS-groups other than the symmetric group $S_3$?
\item If the order of a POS-group is not a power of $2$, is the order necessarily divisible by $3$? This is Conjecture 5.1 of~\cite{finch2003nonabelian}, also by Finch and Jones.
\end{enumerate}
The answers are ``yes'' and ``no,'' respectively. The first question was answered by Finch and Jones in~\cite{finch2003nonabelian}, although all of their examples were direct products of $S_3$ with an abelian group. Das~\cite{das2009finite} answered both questions by proving that there exists an action $\theta$ such that the semidirect product $\mathbb{Z}_{p^{k}} \rtimes_{\theta} \mathbb{Z}_{2^{l}}$ is a POS-group, where $p$ is a Fermat prime, $k \geq 1$, and $2^{l} \geq p-1$. Feit also answered both questions by indicating that a Frobenius group of order $p(p-1)$ for a prime $p>3$ is a POS-group~\cite{finch2003nonabelian}.
We now provide an infinite family of groups that simultaneously answers both questions. Let $n \geq 1$, and consider the group $\mathbb{Z}_4 \rtimes \mathbb{Z}_{2 \cdot 5^{n}}$ with the inversion action. The order of this group is $2^3 \cdot 5^{n}$, which is not divisible by $3$. Consequently, $S_3$ cannot appear as a subgroup or quotient of any of these groups, as $3$ divides the order of $S_3$.
Table~\ref{tab:OrderTable} summarizes the easy calculations required to find the size of each order subset of $\mathbb{Z}_4 \rtimes \mathbb{Z}_{2 \cdot 5^{n}}$ (one can use geometric sums to verify that all elements of the group are accounted for). The number of elements of each order divides the order of the group, thereby proving that $\mathbb{Z}_4 \rtimes \mathbb{Z}_{2 \cdot 5^{n}}$ is a POS-group.
\begin{table}[h]
\begin{tabular}{lllllll}\toprule
\multicolumn{7}{l}{$\mathbb{Z}_{4} \rtimes \mathbb{Z}_{2\cdot 5^{n}}$, $|G|=2^3 \cdot 5^{n}$} \\
\midrule
Order: & $1$ & $2$ & $4$ & $5^{m}$ & $2 \cdot 5^{m}$ & $4 \cdot 5^{m}$ \\
Elements: & $1$ & $5$ & $2$ & $4 \cdot 5^{m-1}$ &$4 \cdot 5^{m}$ &$8 \cdot 5^{m-1}$ \\
\bottomrule
\end{tabular}
\caption{\label{tab:OrderTable}Here $n \geq 1$ and $1 \leq m \leq n$.}
\end{table}
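The entries of Table~\ref{tab:OrderTable} can also be confirmed by exhaustive computation. The sketch below (our own; not part of the note) builds $\mathbb{Z}_4 \rtimes \mathbb{Z}_{2 \cdot 5^{n}}$ with the inversion action directly as pairs and counts element orders:

```python
def semidirect_order_counts(n):
    """Order-subset sizes for Z_4 x| Z_{2*5^n} with the inversion action.
    Elements are pairs (a, b) with
    (a1, b1)(a2, b2) = (a1 + (-1)^{b1} a2 mod 4, b1 + b2 mod 2*5^n)."""
    m = 2 * 5 ** n

    def mul(x, y):
        (a1, b1), (a2, b2) = x, y
        sign = -1 if b1 % 2 else 1
        return ((a1 + sign * a2) % 4, (b1 + b2) % m)

    def order(g):
        e, power, k = (0, 0), g, 1
        while power != e:
            power, k = mul(g, power), k + 1
        return k

    counts = {}
    for a in range(4):
        for b in range(m):
            o = order((a, b))
            counts[o] = counts.get(o, 0) + 1
    return counts
```

For $n = 1$ this returns the sizes $\{1\!:\!1,\ 2\!:\!5,\ 4\!:\!2,\ 5\!:\!4,\ 10\!:\!20,\ 20\!:\!8\}$, each dividing $40$, in agreement with the table.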
\end{document} |
\begin{document}
\title{Closed systems refuting quantum speed limit hypotheses}
\author{Niklas H{\"o}rnedal\,\orcidlink{0000-0002-2005-8694}\,}
\email{[email protected]}
\affiliation{Department of Physics and Materials Science, University of Luxembourg, L-1511 Luxembourg, Luxembourg}
\author{Ole S{\"o}nnerborn\,\orcidlink{0000-0002-1726-4892}\,}
\email{[email protected]}
\affiliation{Department of Mathematics and Computer Science, Karlstad University, 651 88 Karlstad, Sweden}
\affiliation{Department of Physics, Stockholm University, 106 91 Stockholm, Sweden}
\begin{abstract}
Quantum speed limits for isolated systems that take the form of a distance divided by a speed extend straightforwardly to closed systems. This is, for example, the case with the well-known Mandelstam-Tamm quantum speed limit. Margolus and Levitin derived an equally well-known and ostensibly related quantum speed limit, and it seems to be widely believed that the Margolus-Levitin quantum speed limit can be similarly extended to closed systems. However, a recent geometrical examination of this limit reveals that it differs significantly from most quantum speed limits. In this paper, we show that, contrary to common belief, the Margolus-Levitin quantum speed limit does not extend to closed systems in an obvious way. More precisely, we show that there exist closed systems that evolve between states with any given fidelity in an arbitrarily short time while keeping the normalized expected energy fixed at any chosen value. We also show that for isolated systems, the Mandelstam-Tamm quantum speed limit and a slightly weakened version of this limit that we call the Bhatia-Davies quantum speed limit always saturate simultaneously. Both of these evolution time estimates extend straightforwardly to closed systems. We demonstrate that there are closed systems that saturate the Mandelstam-Tamm quantum speed limit but not the Bhatia-Davies quantum speed limit.
\end{abstract}
\date{\today}
\maketitle
\section{Introduction}
Many quantum speed limits (QSLs) for isolated systems take the form of a distance divided by a speed \cite{PiCiCeAdSo-Pi2016,Fr2016,DeCa2017}. Such evolution time estimates can be straightforwardly extended to closed systems.\footnote{An \emph{isolated system} is a system that evolves according to the von Neumann equation with a time-independent Hamiltonian, and a \emph{closed system} is one that evolves according to the von Neumann equation with a time-varying Hamiltonian.} The famous Mandelstam-Tamm QSL is an estimate of this kind \cite{MaTa1945,AnAh1990}. The Mandelstam-Tamm QSL states that the time it takes for an isolated system to evolve between two fully distinguishable states is bounded from below by\footnote{\emph{State} will always refer to a pure quantum state, that is, a state that can be represented by a density operator of rank $1$.}\textsuperscript{,}\footnote{All quantities are expressed in units such that $\hbar=1$.}
\begin{equation}\label{MT}
\tau_\textsc{mt}
= \frac{\pi}{2\Delta H},
\end{equation}
where $\Delta H$ is the energy uncertainty. More generally,
the time it takes for an isolated system to evolve between two states with fidelity $\delta$ is bounded from below by\footnote{The \emph{fidelity} or \emph{overlap} between two states $\rho_1$ and $\rho_2$ is $\operatorname{tr}(\rho_1\rho_2)$.}
\begin{equation}\label{isolatedMT}
\tau_\textsc{mt}(\delta)
= \frac{\arccos\sqrt{\delta}}{\Delta H}.
\end{equation}
This estimate is also due to Mandelstam and Tamm but was rediscovered and formulated more concisely in \cite{Fl1973}.
The Mandelstam-Tamm QSL can be extended to closed systems by replacing the denominator in \eqref{isolatedMT} with the corresponding time average. Thus, the evolution time of a closed system evolving between two states with fidelity $\delta$ is bounded from below by
\begin{equation}\label{closedMT}
\bar\tau_\textsc{mt}(\delta)
= \frac{\arccos\sqrt{\delta}}{\langle\hspace{-2pt}\langle\Delta H_t\rangle\hspace{-2pt}\rangle},
\end{equation}
with $\langle\hspace{-2pt}\langle\Delta H_t\rangle\hspace{-2pt}\rangle$ being the time average of the energy uncertainty. Since the Fubini-Study distance between two states with fidelity $\delta$ is $\arccos\sqrt{\delta}$ and the Fubini-Study speed with which a state evolves is $\Delta H_t$ \cite{AnAh1990,HoAlSo2022}, the Mandelstam-Tamm QSL is saturated if and only if the state follows a Fubini-Study geodesic in the projective Hilbert space. Mandelstam and Tamm's QSL has been extended to systems in mixed states \cite{Uh1992, DeLu2013a, AnHe2014, HoAlSo2022}.
Margolus and Levitin \cite{MaLe1998} derived a seemingly similar evolution time estimate. The Margolus-Levitin QSL states that the time it takes for an isolated system to evolve between two fully distinguishable states is greater than or equal to
\begin{equation}\label{ML}
\tau_\textsc{ml}=\frac{\pi}{2\langle H-\epsilon_\mathrm{min}\rangle},
\end{equation}
where $\langle H-\epsilon_\mathrm{min}\rangle$ is the expected energy $\langle H\rangle$ shifted by the smallest occupied energy $\epsilon_\mathrm{min}$, hereafter called the normalized expected energy. A more general result states that the time it takes for an isolated system to evolve between two states with fidelity $\delta$ is lower bounded by
\begin{equation}\label{extML}
\tau_\textsc{ml}(\delta)
=\frac{\alpha(\delta)}{\langle H-\epsilon_\mathrm{min}\rangle},
\end{equation}
where
\begin{equation}\label{alpha}
\alpha(\delta)
=\min_{z^2\leq \delta}\frac{1+z}{2}\arccos\Big(\frac{2\delta-1-z^2}{1-z^2}\Big).
\end{equation}
Like $\tau_\textsc{mt}(\delta)$, the bound $\tau_\textsc{ml}(\delta)$ is tight, and $\tau_\textsc{ml}(0)=\tau_\textsc{ml}$. The bound $\tau_\textsc{ml}(\delta)$ was established numerically in \cite{GiLlMa2003} and derived analytically in \cite{HoSo2023}. Reference \cite{HoSo2023} also contains a geometric interpretation of $\tau_\textsc{ml}(\delta)$ and a complete description of the systems that reach the bound.
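The minimization in \eqref{alpha} is one-dimensional and easy to evaluate numerically. The following grid-search sketch is our own illustration (not from the paper); it recovers the limiting values $\alpha(0)=\pi/2$ and $\alpha(1)=0$:

```python
from math import acos, pi, sqrt

def alpha(delta, samples=20001):
    """Numerical evaluation of alpha(delta) by a grid search
    over z in [-sqrt(delta), sqrt(delta)]."""
    if delta == 0:
        return 0.5 * acos(-1.0)  # only z = 0 is allowed; arccos(-1) = pi
    s = sqrt(delta)
    best = float("inf")
    for i in range(samples):
        z = -s + 2 * s * i / (samples - 1)
        if abs(1 - z * z) < 1e-12:
            continue  # skip endpoints where the formula degenerates
        arg = (2 * delta - 1 - z * z) / (1 - z * z)
        arg = max(-1.0, min(1.0, arg))  # clamp rounding errors
        best = min(best, 0.5 * (1 + z) * acos(arg))
    return best
```

One finds that $\alpha$ decreases from $\pi/2$ at $\delta=0$ to $0$ at $\delta=1$, consistent with $\tau_\textsc{ml}(0)=\tau_\textsc{ml}$.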
A natural guess is that the Margolus-Levitin QSL is also valid for closed systems, provided one puts the time average of the normalized expected energy in the denominator. More generally, one might expect that the evolution time of a closed system is lower bounded by a quantity of the form $\mathcal{L}(\delta)/\langle\hspace{-2pt}\langle H_t-\epsilon_{\mathrm{min},t}\rangle\hspace{-2pt}\rangle$ where $\mathcal{L}$ is some positive function that depends only on the fidelity $\delta$ between the initial and the final state. In the next section, we show that this is not the case:
\noindent \emph{We show that for each state $\rho$ and $0\leq \delta\leq 1$, there exists a Hamiltonian $H_t$ that evolves $\rho$ to a state with fidelity $\delta$ relative to $\rho$ in an arbitrarily short time while keeping the normalized expected energy fixed at an arbitrary predetermined value.}
Liu et al.\ \cite{LiMiFuWa2021} used the Bhatia-Davies inequality to transform the Mandelstam-Tamm QSL into an upper bound for a proposed operationally defined QSL \cite{ShLiZhYuLi2020}. This upper bound is a new QSL that we call the Bhatia-Davies QSL, although one should rightly attribute it to the authors of \cite{LiMiFuWa2021}. The Bhatia-Davies QSL states that the time it takes for an isolated system to evolve between two states with fidelity $\delta$ is bounded from below by
\begin{equation}\label{isolatedBD}
\tau_\textsc{bd}(\delta) = \frac{\arccos\sqrt{\delta}}{\sqrt{\langle\epsilon_\mathrm{max} - H\rangle\langle H - \epsilon_\mathrm{min}\rangle}},
\end{equation}
where $\epsilon_\mathrm{max}$ is the largest and $\epsilon_\mathrm{min}$ is the smallest occupied energy. The Bhatia-Davies QSL also extends straightforwardly to closed systems:
\begin{equation}\label{closedBD}
\bar\tau_\textsc{bd}(\delta)
= \frac{\arccos\sqrt{\delta}}{\langle\hspace{-2pt}\langle\sqrt{\langle\epsilon_{\mathrm{max},t} - H_t\rangle\langle H_t- \epsilon_{\mathrm{min},t}\rangle}\,\rangle\hspace{-2pt}\rangle}.
\end{equation}
The Bhatia-Davies QSL is weaker than that of Mandelstam and Tamm in the sense that $\bar\tau_\textsc{mt}(\delta)\geq \bar\tau_\textsc{bd}(\delta)$ with a strict inequality in general for both isolated and closed systems. We show that the Mandelstam-Tamm and the Bhatia-Davies QSLs are always saturated simultaneously for isolated systems but that this need not be the case for closed systems:
\noindent\emph{We provide an example of a closed system that saturates the Mandelstam-Tamm but not the Bhatia-Davies QSL.}
\section{Time-dependent systems that disprove common belief}
One obtains a relatively simple type of time-dependent Hamiltonian if one conjugates a time-independent Hamiltonian $H$ with a one-parameter group of unitaries generated by a Hermitian operator $A$:
\begin{equation}
H_t=e^{-iAt}H e^{iAt}.
\end{equation}
Such a group action will preserve the eigenvalues but rotate the eigenvectors of $H$. If a state $\rho$ evolves under the influence of $H_t$,
\begin{equation}
\dot\rho_t=-i[H_t,\rho_t],\qquad \rho_0=\rho,
\end{equation}
the state in the rotating frame picture,
\begin{equation}
\rho^{\textsc{rf}}_t=e^{iAt} \rho_t e^{-iAt},
\end{equation}
evolves as if the time-independent Hamiltonian $A-H$ governed the dynamics:
\begin{equation}\label{rotating}
\dot\rho^{\textsc{rf}}_t=-i[A-H,\rho^{\textsc{rf}}_t],\qquad \rho^{\textsc{rf}}_0=\rho.
\end{equation}
As a consequence, in the Schrödinger picture,
\begin{equation}\label{evolution}
\rho_t=e^{-iAt} e^{-i(A-H)t} \rho e^{i(A-H)t} e^{iAt}.
\end{equation}
In general, the behavior of $\rho_t$ can be quite complex even though $H_t$ has a relatively simple time dependence. However, equation \eqref{evolution} tells us that if $\rho$ commutes with $A-H$, the evolving state will behave as if the time-independent `effective' Hamiltonian $A$ generates it:
\begin{equation}
\rho_t=e^{-iAt} \rho e^{iAt}.
\end{equation}
This observation will be of central importance below.
The eigenvectors of $H$ will also evolve with $A$ as effective Hamiltonian: If $\ket{j}$ is an eigenvector of $H$ with eigenvalue $\epsilon_j$, then $\ket{j;t}=e^{-iAt}\ket{j}$ is an eigenvector of $H_t$ with the eigenvalue $\epsilon_j$. As a result, the occupations of the energy levels are constant over time:
\begin{equation}
\bra{j;t}\rho_t\ket{j;t}=\bra{j}\rho\ket{j}.
\end{equation}
This means that the expected energy $\langle H_t\rangle$, the energy uncertainty $\Delta H_t$, and the normalized expected energy $\langle H_t-\epsilon_{\mathrm{min},t}\rangle$ and its `dual' $\langle \epsilon_{\mathrm{max},t}-H_t\rangle$ are conserved quantities; see \cite{NeAlSa2022,HoSo2023} for a QSL involving the dual of the normalized expected energy.
Another important fact is that $\rho_t$ is a Fubini-Study geodesic if $A\rho+\rho A=A$; see Appendix A in \cite{HoAlSo2022}. If such is the case, the Mandelstam-Tamm QSL is saturated, and the system evolves between two states with fidelity $\mathrm{d}elta$ in time $\bar\tau_\textsc{mt}(\mathrm{d}elta)$. Interestingly, given an initial state $\rho$ and a Hamiltonian $H$, there is an elegant way to construct an $A$ such that $[A-H,\rho]=0$ and $A\rho+\rho A=A$: Write $\rho=\ketbra{u}{u}$, let $\epsilon = \bra{u}H\ket{u}$, and define
\begin{equation}\label{elegant}
A=(H-\epsilon)\ketbra{u}{u}+\ketbra{u}{u}(H-\epsilon).
\end{equation}
Below we show how to disprove two hypotheses about QSLs with appropriate choices of $\rho$ and $H$, and $A$ defined as in \eqref{elegant}.
\subsection{The non-existence of a time-dependent Margolus-Levitin QSL}\label{sec:No timeMLQSL}
The Mandelstam-Tamm and Margolus-Levitin QSLs say that if one requires the state to follow a geodesic, one cannot modify a time-independent Hamiltonian in such a way that the energy uncertainty takes on an arbitrarily large value without the normalized expected energy also doing so. Interestingly, this does not hold for time-dependent Hamiltonians. Below we give an example of a closed system whose state follows a geodesic and in which the energy uncertainty decouples from the normalized expected energy so that one can let the energy uncertainty assume arbitrarily large values while the normalized expected energy remains at a fixed, predetermined value.
\noindent \emph{The consequence is that one can make the system evolve between two states with a given fidelity $\delta$ in an arbitrarily short time and, at the same time, keep the normalized expected energy fixed at a finite value.}
\noindent Consider a quantum system in a state $\rho=\ketbra{u}{u}$. Let $H$ be a Hamiltonian, to be specified, and define $A$ as in \eqref{elegant}. Further, let $H_t=e^{-iAt}He^{iAt}$ and let $\rho_t$ be the state at time $t$ generated from $\rho$ by $H_t$. Then $\rho_t=e^{-iAt}\rho e^{iAt}$, and $\rho_t$ follows a Fubini-Study geodesic.
\begin{figure}
\caption{Graphs illustrating the dependence of the normalized expected energy (red) and energy uncertainty (blue) on the angle $\theta$. The requirement that the normalized expected energy be constant forces the energy uncertainty to grow toward infinity with decreasing angle.}
\label{fig0}
\end{figure}
To specify $H$ let $\ket{v}$ be a unit vector perpendicular to $\ket{u}$ and define the Pauli operators $X$ and $Z$ as
\begin{align}
X &= \ketbra{u}{u}-\ketbra{v}{v}, \label{X}\\
Z &= \ketbra{u}{v}+\ketbra{v}{u}. \label{Z}
\end{align}
Fix the value $E>0$ that the normalized expected energy should have, let $\mu$ be a positive function on the interval $0 < \theta <\pi$, and define
\begin{equation}
H=\mu(\theta)(\sin\theta Z - \cos\theta X).
\end{equation}
The largest and the smallest eigenvalues of $H$, and thus of $H_t$, are $\mu(\theta)$ and $-\mu(\theta)$, respectively, both of which are occupied by $\rho_t$. Furthermore, the normalized expected energy and the energy uncertainty are
\begin{align}
&\langle H_t - \epsilon_{\mathrm{min},t}\rangle = \mu(\theta)(1-\cos\theta), \\
&\Delta H_t = \mu(\theta)\sin\theta.
\end{align}
Since we want the normalized expected energy to be $E$, we must define $\mu$ as $\mu(\theta)=E/(1-\cos\theta)$, implying that
\begin{equation}
\Delta H_t = E\cot(\theta/2).
\end{equation}
Figure \ref{fig0} shows how the normalized expected energy and the energy uncertainty depend on the angle $\theta$.
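The algebraic identities underlying this construction are easy to check numerically. The sketch below (our own verification, with illustrative values of $\theta$ and $E$) builds $H$, $A=(H-\epsilon)\ketbra{u}{u}+\ketbra{u}{u}(H-\epsilon)$, and confirms that $[A-H,\rho]=0$, that $A\rho+\rho A=A$, that $\langle H-\epsilon_\mathrm{min}\rangle=E$, and that $\Delta H=E\cot(\theta/2)$:

```python
import numpy as np

theta, E = 0.3, 1.0                      # illustrative choices: angle and target energy
mu = E / (1 - np.cos(theta))             # mu(theta) = E / (1 - cos theta)

u = np.array([1.0, 0.0]); v = np.array([0.0, 1.0])
P = np.outer(u, u)                       # rho = |u><u|
X = np.outer(u, u) - np.outer(v, v)      # the paper's X
Z = np.outer(u, v) + np.outer(v, u)      # the paper's Z
H = mu * (np.sin(theta) * Z - np.cos(theta) * X)

eps = u @ H @ u                          # expected energy <u|H|u>
A = (H - eps * np.eye(2)) @ P + P @ (H - eps * np.eye(2))

comm = (A - H) @ P - P @ (A - H)         # rho commutes with A - H ...
assert np.allclose(comm, 0)
assert np.allclose(A @ P + P @ A, A)     # ... and the geodesic condition holds

norm_energy = eps - np.linalg.eigvalsh(H).min()   # <H - eps_min>
var = u @ H @ H @ u - eps ** 2                    # (Delta H)^2
print(norm_energy, np.sqrt(var))
```

Rerunning with smaller $\theta$ leaves `norm_energy` pinned at $E$ while the energy uncertainty grows like $E\cot(\theta/2)$, exactly the decoupling exploited in the text.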
\begin{figure}
\caption{Bloch vector representations of $H$, $A$, and $\rho$. The vector representing $\rho$ points along the positive $x$-axis, and the vector representing $H$ makes the angle $\theta$ with the negative $x$-axis. The purple circle represents the expected energy level to which $\rho$ belongs. As time passes, the state and the Hamiltonian rotate around the $z$-axis with the same angular velocity. The dashed vectors represent $H_t$ and $\rho_t$ at a time $t>0$. The expected energy level rotates with the state.}
\label{fig1}
\end{figure}
Let $\tau(\delta)$ be the first time the system reaches a state having fidelity $\delta$ with the initial state $\rho$. Since the state follows a Fubini-Study geodesic, the Mandelstam-Tamm QSL is saturated:
\begin{equation}
\tau(\delta)=\bar\tau_\textsc{mt}(\delta)=\frac{\arccos\sqrt{\delta}}{E\cot(\theta/2)}.
\end{equation}
This evolution time can be made arbitrarily small by choosing $\theta$ sufficiently close to $0$. However, regardless of the value of $\theta$, the normalized expected energy is preserved with the prescribed value $E$. We conclude that irrespective of the required fidelity $\delta$ between the initial and the final states, a Hamiltonian exists that evolves the system between two states with fidelity $\delta$ in an arbitrarily short time and along a trajectory such that the normalized expected energy is conserved with a prescribed value.
\noindent\emph{Consequently, the Margolus-Levitin QSL does not obviously extend to closed systems.}
In Figure \ref{fig1}, we have represented $H$, $A$, and $\rho$ as Bloch vectors relative to $X$, $Y$, and $Z$, with $Y=i(\ketbra{u}{v}-\ketbra{v}{u})$. The angle between $H$ and the negative $x$-axis is $\theta$. As time passes, the state and the Hamiltonian rotate around the $z$-axis with the same angular velocity. Note that $\rho_t$ moves along the equator in the Bloch sphere and thus is a Fubini-Study geodesic. The dashed vectors represent the state and the Hamiltonian at a time $t>0$.
The purple circle, formed by intersecting the Bloch sphere with a plane perpendicular to the extension of the vector representing $H$, represents the expected energy level to which $\rho$ belongs. This circle rotates together with $H_t$ and always lies in a plane perpendicular to the vector representing $H_t$. The key observation is that this circle corresponds to the normalized expected energy $E$ irrespective of the value of the angle $\theta$, and $\rho_t$ will evolve together with that circle.
Most initial states will not evolve in such a well-behaved manner as those located on the equator of the Bloch sphere. In Figure \ref{fig2}, we have drawn the evolution curve of a state not on the equator.
\begin{figure}
\caption{An evolution curve starting from a state not on the equator of the Bloch sphere. In this case, $\theta=30^\circ$ and $E=1$. The left figure shows the evolution curve in the Schrödinger picture, and the right figure shows the same curve in the rotating frame picture. The warmer colors indicate more recent times, and the blue arrow represents the state at the final time.}
\label{fig2}
\end{figure}
In the rotating frame picture \eqref{rotating}, the evolution curve forms a circle around the $x$-axis. This is because $A-H\propto X$.
\subsection{The Bhatia-Davies QSL}
The example in the previous section shows that the normalized expected energy alone does not necessarily limit the evolution time from below for closed systems. In the example, however, an arbitrary width of the energy spectrum was permitted. If we require that the spectral width does not exceed a given value, the evolution time cannot be made arbitrarily small. This is because the energy uncertainty cannot exceed the spectral width.
The Bhatia-Davies inequality \cite{BhDa2000} provides a tighter bound on the energy uncertainty than the spectral width. The Bhatia-Davies inequality states that the variance of any observable $B$ is bounded from above according to
\begin{equation}\label{Bhatia-Davies ineq}
\Delta^2 B\leq \langle b_{\mathrm{max}} - B\rangle\langle B - b_{\mathrm{min}}\rangle,
\end{equation}
with $b_{\mathrm{max}}$ and $b_{\mathrm{min}}$ being the largest and the smallest occupied eigenvalues of $B$. Consequently, the evolution time of an isolated system is bounded by $\tau_\textsc{bd}(\delta)$ defined in \eqref{isolatedBD}, and the evolution time of a closed system is bounded by $\bar\tau_\textsc{bd}(\delta)$ defined in \eqref{closedBD}.
Equality holds in the Bhatia-Davies inequality if and only if the state occupies at most two eigenvalues of $B$. Since the state of an isolated system saturating the Mandelstam-Tamm QSL occupies only two energy levels \cite{Br2003,HoAlSo2022}, the Mandelstam-Tamm and Bhatia-Davies QSLs are always saturated simultaneously for isolated systems.
The Mandelstam-Tamm and Bhatia-Davies QSLs generalize to closed systems as in \eqref{closedMT} and \eqref{closedBD}, respectively, and a natural guess would be that also these QSLs are always saturated simultaneously. However, as we will see, a time-dependent Hamiltonian can evolve a state at a constant speed along a Fubini-Study geodesic in such a way that the state during the entire evolution occupies more than two energy levels. Such an evolution will saturate the Mandelstam-Tamm QSL but not the Bhatia-Davies QSL. This is because the Bhatia-Davies inequality will be strict over the entire evolution time interval, which means that the denominator in \eqref{closedBD} is strictly greater than the denominator in \eqref{closedMT}.
\subsection{A non-saturation of the Bhatia-Davies QSL}
Let $H$ be a Hamiltonian for a system with at least three distinct eigenvalues, and let $\rho=\ketbra{u}{u}$ be any state occupying at least three of those eigenvalues. Define $A$ as in \eqref{elegant}, let $H_t=e^{-iAt}He^{iAt}$, and let $\rho_t$ be the state at time $t$ generated from $\rho$ by $H_t$. Since $[A-H,\rho]=0$ and $A\rho+\rho A=A$, the Mandelstam-Tamm QSL is saturated, and the system will evolve between two states with fidelity $\delta$ in time $\bar\tau_\textsc{mt}(\delta)$. Furthermore, since $\rho_t$ always occupies at least three different energy levels,
\begin{equation}
\Delta^2H_t<\langle\epsilon_{\mathrm{max},t} - H_t\rangle\langle H_t - \epsilon_{\mathrm{min},t}\rangle.
\end{equation}
Therefore, $\bar\tau_\textsc{mt}(\delta)>\bar\tau_\textsc{bd}(\delta)$, and the Mandelstam-Tamm QSL is saturated but not the Bhatia-Davies QSL.
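The strictness of the Bhatia-Davies inequality for a state occupying three levels is immediate to check numerically. The Hamiltonian and state below are our own illustrative choices:

```python
import numpy as np

H = np.diag([0.0, 1.0, 3.0])                    # three distinct eigenvalues
u = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)    # a state occupying all three levels

mean = u @ H @ u                                # <H> = 4/3
var = u @ H @ H @ u - mean ** 2                 # (Delta H)^2 = 14/9
emax, emin = 3.0, 0.0                           # largest/smallest occupied energies
bd_bound = (emax - mean) * (mean - emin)        # Bhatia-Davies bound = 20/9
print(var, bd_bound)
```

Here the variance ($14/9$) is strictly below the Bhatia-Davies bound ($20/9$), so the denominator in \eqref{closedBD} strictly exceeds the one in \eqref{closedMT}.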
\section{Summary}
A common view is that the Margolus-Levitin quantum speed limit extends to an evolution time estimate for closed systems of the form $\mathcal{L}(\delta)/\langle\hspace{-2pt}\langle H_t-\epsilon_{\mathrm{min},t}\rangle\hspace{-2pt}\rangle$ where $\mathcal{L}$ is a positive function that only depends on the fidelity $\delta$ between the initial and final states and $\langle\hspace{-2pt}\langle H_t-\epsilon_{\mathrm{min},t}\rangle\hspace{-2pt}\rangle$ is the time average of the normalized expected energy. We have shown with a counterexample that this is not the case. More precisely, we have constructed a closed system that evolves between two states with fidelity $\delta$ in an arbitrarily short time while keeping the normalized expected energy fixed at an arbitrary predetermined value.
We have also considered a QSL for isolated systems called the Bhatia-Davies QSL. This QSL extends straightforwardly to closed systems. We have shown that the Bhatia-Davies and Mandelstam-Tamm QSLs are always simultaneously saturated for isolated systems but that this need not be the case for closed systems.
\end{document} |
\begin{document}
\def\spacingset#1{\renewcommand{\baselinestretch}
{#1}\small\normalsize} \spacingset{1}
\if11
{
\title{\bf Order Determination for Spiked Models}
\author{Yicheng Zeng
and
Lixing Zhu\thanks{
The authors gratefully acknowledge
the support from a grant from the University Grants Council of Hong Kong, Hong Kong, and an SNFC grant (NSFC11671042) from the National Natural Science Foundation of China. }\hspace{.2cm}\\
Department of Mathematics, Hong Kong Baptist University, Hong Kong}
\maketitle
} \fi
\if01
{
\begin{center}
{\LARGE\bf Order Determination for Large Dimensional Matrices}
\end{center}
} \fi
\begin{abstract}
Motivated by dimension reduction in regression analysis and signal detection, we investigate order determination for large dimensional matrices, including spiked models in which the number of covariates is proportional to the sample size. Because the asymptotic behaviour of the estimated eigenvalues of the corresponding matrices differs completely from that in fixed dimension scenarios, we discuss the largest possible number of spikes we can identify and introduce a ``valley-cliff'' criterion. We propose two versions of the criterion: one based on the original differences of eigenvalues and the other based on transformed differences, which reduces the effect of ridge selection in the former. This generic method is easy to implement and computationally inexpensive, and it can be applied to various matrices. As examples, we focus on spiked population models, spiked Fisher matrices and factor models with auto-covariance matrices. Numerical studies are conducted to examine the method's finite sample performances and to compare it with existing methods.
\end{abstract}
\noindent
{\it Keywords:} Auto-covariance matrix, factor model, finite-rank perturbation, Fisher matrix, phase transition, ridge ratio, spiked population model.
\spacingset{1.45}
\section{Introduction}
In diverse research fields, one often needs to determine the order of a large dimensional matrix in order to reduce dimensionality. Examples include spiked population models proposed by \cite{johnstone2001distribution};
spiked Fisher matrices, which are motivated by signal detection and hypothesis testing for covariances; canonical correlation analysis; factor models; and target matrices in sufficient dimension reduction for regression analysis (see \cite{li1991sliced}; \cite{zhu2010sufficient}). \cite{luo2016combining} is a useful reference on order determination that proposed a ladle estimator for several models. We first use spiked population models as an example to describe the problem under study in this paper and propose a method that can be extended to handle other models. For a spiked population model, the population covariance matrix $\Sigma_p$ can be written as a finite-rank perturbation of the identity matrix: $\Sigma_p=\sigma^2\textbf{I}_p+\Delta_p$, where $\operatorname{rank}(\Delta_p)=q$ is the fixed number of spikes and $p$ is the dimension of the matrix.
Thus, determining the number of spikes is equivalent to determining the order of the matrix $\Delta_p$ defined above. For other large dimensional matrices, such as sample auto-covariance matrices and spiked Fisher matrices, the problem can be formulated in a similar manner.
The literature includes several proposals for the fixed dimension case, such as the classic Akaike Information Criterion and Bayesian Information Criterion. Several methods have been developed for sufficient dimension reduction that can also be used for the models mentioned above. These methods include the sequential testing method (\cite{li1991sliced}), the BIC-type criterion (\cite{zhu2006sliced}), ridge ratio estimation (\cite{xia2015consistently}) and ladle estimation (\cite{luo2016combining}). Some of them can even handle divergent dimension settings in which $p/n\to 0$ at certain rates.
However, when the dimension $p$ is proportional to the sample size $n$, that is, $p/n \to c$ for some $0<c<\infty$, the problem becomes much more challenging. Thus, some efforts have been devoted to this problem with the use of large dimensional random matrix theory (see, for example, \cite{kritchman2008determining}; \cite{onatski2009testing}). Again, consider spiked population models. When $p/n\rightarrow c$ for a constant $c>0$, using the results derived by \cite{baik2006eigenvalues}, \cite{passemier2012determining} introduced a criterion that counts the number of differences between two consecutive eigenvalues below a predetermined positive constant threshold. However, when there are equal spikes, the corresponding differences can also be smaller than the threshold they designed, and the criterion then very easily yields an estimator smaller than the true number.
\cite{passemier2014estimation} further modified this method to suit cases with multiple spikes. However, underestimation remains an issue when, say, there are three or more equal spikes. In addition to the problem caused by spike multiplicity, domination by a couple of the largest eigenvalues also results in underestimation: when a couple of eigenvalues are very large, the other spikes are close to $\sigma^2$ and the differences between these small spikes are also very small.
For the number of factors from a factor model for high-dimensional time series, \cite{li2017identifying} proposed a similar criterion to that of \cite{passemier2014estimation}.
For spiked Fisher matrices, \cite{wang2017extreme} used the classical scree plot to determine the number of spikes when a threshold is selected in a delicate manner. Underestimation is still an issue.
We demonstrate this phenomenon in the numerical studies below. Relevant references include \cite{lam2012factor} and \cite{xia2015consistently}.
In this paper, we introduce a novel and generic criterion for the case in which the dimension $p$ is proportional to the sample size $n$. The criterion is based on eigenvalue difference-based ridge ratios with the following features. First, the criterion can handle the spike multiplicity problem and alleviates the large eigenvalue dominance problem. Second, the criterion has a nice ``valley-cliff'' pattern such that the consistent estimator is at the ``valley bottom'' facing the ``cliff'' upon which all the next ratios exceed a threshold.
Third, adding ridge values plays a very important role in making the ratios more stable and creating the ``valley-cliff'' pattern. Fourth, to reduce the sensitivity of the criterion to ridge selection, we suggest another version that uses transformed eigenvalues. Fifth, we also discuss in detail reducing the effect of model scale in the construction. The new method is also very efficient in computation.
The remainder of this paper is organised as follows. In Section \ref{sec2}, we propose a VAlley-CLiff Estimation (VACLE) and provide an optimal lower bound to show what order can be identified. In Section~\ref{sec3}, we show that the VACLE can be improved by a transformation-based valley-cliff estimation (TVACLE), which alleviates the criterion's sensitivity to the designed ridge value. In this section, we also discuss in detail the methods to select the transformation. In Section~\ref{sec4}, we give spiked population models, factor models with auto-covariance matrices and spiked Fisher matrices as applications. Section~\ref{sec5} contains numerical studies and compares the VACLE and the TVACLE with existing competitors. A real data example is analysed in Section~\ref{sec6}. Some concluding remarks are in Section~\ref{sec7}, and the proofs of the theoretical results are contained in the supplementary materials.
\section{Criterion construction and properties}\label{sec2}
In this section, we describe our motivation in detail and provide the construction steps and its properties.
\subsection{ Motivation }
Consider a simple spiked population model: a $p\times p$ matrix $\Sigma_p=\sigma^2\textbf{I}_p+\Delta_p$ with eigenvalues $\lambda_{1}\geq\cdots\geq \lambda_{q_1}> \lambda_{q_1+1}=\cdots =\lambda_{p}=\sigma^2$, where $q_1$ is a fixed number and the scale parameter $\sigma^2$ is either known or unknown.
Let $\tilde\lambda_{i}$ be the eigenvalues of $\Delta_p$ and then $\lambda_{i}=\tilde \lambda_{i}+\sigma^2$, $1\le i\le p$,
with $\tilde \lambda_{1} \ge \cdots \ge \tilde\lambda_{q_1}> \tilde \lambda_{q_1+1}=\cdots = \tilde\lambda_{p}=0$.
When $p$ is proportional to $n$, the estimate of $\lambda_i-\sigma^2$ is no longer consistent for $0$ when $i>q_1$.
Thus, we do not directly use either $\lambda_{i}$ or $\tilde \lambda_{i}$ but rather $\delta_{i}=\lambda_{i}- \lambda_{i+1}=\tilde \lambda_{i}- \tilde \lambda_{i+1}$, which is nonnegative for $i=1, \cdots, q_1$ and equal to $0$ for $i=q_1+1, \cdots, p-1$.
Consider a sequence of ratios as
$r_{i}:=\delta_{i+1}/{\delta_{i}},\ 1\leq i\leq p-2$.
These ratios are scale-invariant and have the following property when $i\le q_1$:
\begin{equation}\label{2.4}
r_{i}=\dfrac{\delta_{i+1}}{\delta_{i}}=\dfrac{\delta_{i+1}/\sigma^2}{\delta_{i}/\sigma^2}=\left\{\begin{array}{lcl}
\ge 0,&&\text{for } i<q_1,\\ =0,&&\text{for } i=q_1.
\end{array}\right.
\end{equation}
For any $ q_1+1\leq i\leq p-2$, $r_{i}=0/0$ is not well defined: the values could vary dramatically and thus be unstable.
Due to the non-monotonicity of the $\delta_i$'s, some ratios $r_{i}$, even for $1\le i\le q_1$, could be of the form $*/0$, $0/*$, $0/0$ or $*/*$, where $*$ stands for a positive value that could differ at each appearance.
This instability also occurs at the sample level. Thus, we cannot simply use this sequence of ratios to construct a criterion.
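A toy numerical sketch (ours, not part of the paper's procedure) illustrates this instability: two spikes at $5$ and $3$ above $\sigma^2=1$, with tiny perturbations mimicking sampling fluctuations in the bulk.

```python
import numpy as np

# Toy eigenvalues: q1 = 2 spikes above sigma^2 = 1, with tiny
# perturbations (3e-8, 1e-8) mimicking noise beyond the spikes.
lam = np.array([5.0, 3.0, 1.0 + 3e-8, 1.0 + 1e-8, 1.0, 1.0])

delta = lam[:-1] - lam[1:]          # consecutive differences delta_i
ratios = delta[1:] / delta[:-1]     # raw ratios r_i = delta_{i+1} / delta_i

# r_1 is approximately 1 and r_2 (the valley at i = q1) is tiny, but the
# later ratios are of 0/0 type: here they happen to be 0.5 and 0, and they
# would change wildly with the perturbation sizes.
print(ratios)
```

Changing the perturbations `3e-8`, `1e-8` changes the trailing ratios arbitrarily, which is exactly why the raw ratios cannot be used directly.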
Taking these into consideration, we define a sequence of ridge type ratios:
\begin{equation}\label{2.3.1}
r^R_{i}:=\dfrac{\delta_{i+1}/\sigma^2+c_n}{\delta_{i}/\sigma^2+c_n}, 1\leq i\leq p-2.
\end{equation}
Note that in the construction of $r_i^R$ we use $\delta_i/\sigma^2$ instead of $\delta_i$ in order to keep the selection of $c_n$ independent of the scale parameter $\sigma^2$.
With appropriately selected $c_n\to 0$, these ratios have the following property:
\begin{equation*}\label{2.4.1}
r^R_{i}=\dfrac{\delta_{i+1}/\sigma^2+c_n}{\delta_{i}/\sigma^2+c_n}=\left\{\begin{array}{lcl}
\ge 0,&&\text{for } i<q_1,\\ =c_n/(\delta_{q_1}/\sigma^2+c_n)\to 0,&&\text{for } i=q_1, \\ c_n/c_n=1,&&\text{for } q_1+1\leq i\leq p-2.
\end{array}\right.
\end{equation*}
These ratios have a very useful ``valley-cliff'' pattern,
because $q_1$ is the index at which $ r^R_{q_1}\to 0$ sits at a ``valley bottom'' facing the ``cliff'', valued at $1$, formed by all subsequent ratios $r^R_{i}$ for $i>q_1$. This nice pattern gives us a good opportunity to accurately identify $q_1$, although we will show later that in the setting in which $p$ is proportional to the sample size $n$, the identifiability of $q_1$ at the sample level remains a serious issue.
We also note that
the ratios
depend on $\sigma^2$ and $c_n$.
Under the scale transformation $\hat\lambda_{i}\mapsto (\sigma^2)^{-1} \hat\lambda_{i}$, where $\hat\lambda_{i}$ are the estimated eigenvalues,
\begin{equation*}
\hat r_{i}^R=\dfrac{\hat \delta_{i+1}/\sigma^2+c_n}{\hat\delta_i/\sigma^2+c_n}=\dfrac{(\sigma^2)^{-1}\hat \lambda_{i+1}-(\sigma^2)^{-1}\hat\lambda_{i+2}+c_n}{(\sigma^2)^{-1}\hat \lambda_{i}-(\sigma^2)^{-1}\hat\lambda_{i+1}+c_n}=\dfrac{\hat \lambda_{i+1}-\hat\lambda_{i+2}+\sigma^2 c_n}{\hat \lambda_{i}-\hat\lambda_{i+1}+\sigma^2 c_n}.
\end{equation*}
Later, however, we will show that the range for selecting $c_n$ is rather wide, and thus the criterion is not seriously affected when $\sigma^2$ is estimated, as the numerical studies we conduct later confirm.
In addition, we have a brief discussion about the estimation of $\sigma^2$ in Section~\ref{sec5}.
\subsection{ Valley-cliff criterion and estimation consistency}
Let $\textbf{T}_n$ be a target sample matrix of $\Sigma_p$ and $\hat \lambda_{1}\geq \cdots\geq \hat \lambda_{p}$ be its eigenvalues.
Here the notations $\hat\lambda_i$ and $\hat\delta_i$ depend on the sample size $n$, although the subscript $n$ is omitted for brevity.
Define their sample versions $\hat{r}^R_{i}$ of $r_{i}^{R}$ in (\ref{2.3.1}) with $\hat{\delta}_{i}=\hat{\lambda}_{i}-\hat{\lambda}_{i+1}$ as
\begin{equation}
\hat r_{i}^R:=\dfrac{\hat\delta_{i+1}/\sigma^2+c_n}{\hat\delta_{i}/\sigma^2+c_n},\ 1\leq i \leq p-2,
\end{equation}
where $\sigma^2$ should be replaced by $\hat\sigma^2$ when $\sigma^2$ is unknown.
However, completely unlike the case with fixed $p$, even in the simple spiked population model case, $\hat{\lambda}_{i}$ is not consistent for $\lambda_{i}$, so these ratios cannot simply converge to those in (\ref{2.4}). The number $q_1$ is generally unidentifiable. In the following, we give the largest possible order we can identify.
Define
\begin{equation}\label{q}
q:=\# \{i: 1\le i\le q_1, \, \lambda_i>U(F)>\sigma^2 \}
\end{equation}
for some constant $U(F)$ where $F$ is the limiting spectral distribution (LSD) of all estimated eigenvalues $\hat \lambda_i$'s with the support $(a(F), b(F))$.
The constant $U(F)$ is the phase transition point (see \cite{baik2006eigenvalues}) and also the optimal bound for identifiability.
We still use a spiked population model as a typical example. From \cite{baik2005phase} and \cite{baik2006eigenvalues},
any spike with strength not stronger than $(1+\sqrt c)\sigma^2$ is not identifiable. In this case, $U(F)$ refers to the critical value $(1+\sqrt c)\sigma^2$.
More details are included in Section~\ref{sec4}.
Selecting an appropriate sequence $c_n$ plays an important role for estimation efficiency.
When it is selected according to the principle that $\hat{\delta}_{i}=o_p(c_n)$ for $q+1\le i \le p-1$,
$\hat r_{i}^{R}$ still have a nice ``valley-cliff'' pattern at $i=q$ as
\begin{equation}
\lim_{n\to+\infty}\hat r_{i}^{R}=\left\{\begin{array}{lcl}
0,&&\text{for } i=q,\\ 1,&&\text{for } q+1\leq i\leq L-1,
\end{array} \right.
\end{equation}
where $L$ is a prefixed upper bound for $q$.
Taking advantage of this pattern, we define a thresholding valley-cliff estimator as, for a constant $\tau$ with $0< \tau <1$,
\begin{equation}\label{10}
\hat{q}_n^{VACLE}=\max_{1\leq i\leq L-2}\left\{i:\hat r_{i}^{R}\leq\tau\right\}.
\end{equation}
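A minimal sketch of the estimator in (\ref{10}), assuming a known $\sigma^2$; the function name `vacle` and the toy spectrum are ours, not from the paper.

```python
import numpy as np

def vacle(eigvals, sigma2, c_n, tau=0.5, L=20):
    """Valley-cliff estimator: the largest i <= L-2 whose ridge ratio
    hat r_i^R is at most tau; returns 0 if no ratio falls below tau."""
    lam = np.sort(eigvals)[::-1] / sigma2         # decreasing, rescaled
    delta = lam[:-1] - lam[1:]                    # hat delta_i
    r = (delta[1:] + c_n) / (delta[:-1] + c_n)    # ridge ratios hat r_i^R
    idx = [i + 1 for i in range(min(L - 2, len(r))) if r[i] <= tau]
    return max(idx) if idx else 0

# Toy spectrum: q = 3 spikes, bulk eigenvalues just above sigma^2 = 1.
lam = np.array([8.0, 6.0, 4.0] + [1.0 + 0.001 * k for k in range(17, 0, -1)])
print(vacle(lam, sigma2=1.0, c_n=0.05))  # → 3
```

The ridge $c_n=0.05$ here swamps the bulk gaps (of size $0.001$), so the trailing ratios sit on the ``cliff'' at $1$ while the ratio at $i=3$ is the ``valley''.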
To handle more general models, we consider the large dimensional matrices with the following model features.
\begin{MF}\label{1}
There exists a bound $U(F)$ such that the number $q$ defined in (\ref{q}) is a fixed constant and satisfies: \\
(A1) there is a value $d$ such that $\hat\lambda_{q}/\sigma^2- d=o_p(1)$ as $n\rightarrow \infty$;\\
(A2) for a large fixed value $L$ satisfying $q+1<L< p$, there is a constant $e<d$ and a sequence $\tilde c_n\rightarrow 0$ such that $\hat\lambda_{i}/\sigma^2-e=O_p(\tilde c_n)$, for $q+1\le i\le L$.
\end{MF}
\begin{remark}
Model Feature \ref{1} describes features of the model structure at the sample level, which essentially require
certain assumptions at the population level.
Condition $(A1)$ corresponds to the so-called {\it phase transition phenomenon} for the extreme eigenvalues, and $(A2)$ further focuses on the fluctuations of those that stick to the boundary of the support of the LSD.
General theory about the phase transitions and fluctuations can be found, for example, in \cite{peche2006largest}, \cite{benaych2011fluctuations}, \cite{benaych2011eigenvalues} and \cite{knowles2017anisotropic}. The details about how these features can be exhibited in three types of models are given in Section~\ref{sec4}.
\end{remark}
The estimation consistency is stated as follows.
\begin{theorem}\label{2}
When a model satisfies Model Feature~\ref{1}, and $\tilde c_n=o( c_n)$, then $\mathbb{P} (\hat{q}_n^{VACLE}=q)\to 1$ as $n\to \infty$.
\end{theorem}
\begin{remark}
The convergence rate of $\hat\lambda_{i}$ to a constant $e$, for $q+1\leq i\leq L-1$, is often $O_p(n^{-2/3})$, namely $\tilde c_n=n^{-2/3}$. The references include \cite{benaych2011fluctuations}, \cite{bao2015universality}, \cite{han2016tracy} and \cite{han2018unified} for several models discussed in Section~\ref{sec4}. For a spiked auto-covariance matrix, however, this rate has not yet been formally derived, so we state our result under the assumption, as \cite{li2017identifying} did, that it can be achieved. In this paper, the ridge $c_n \to 0$ is only restricted by $\hat\delta_{q+1}=o_p(c_n)$. Such a wide range for the ridge selection alleviates, to a great extent, the influence from $\sigma^2$ when it needs to be estimated. The estimation issue will be discussed in Section~\ref{sec5}.
\end{remark}
\section{Modification of the VACLE}\label{sec3}
Although Theorem~\ref{2} provides estimation consistency, some numerical studies
that are not presented in this paper indicate that the performance of $\hat{q}_n^{VACLE}$ is sometimes somewhat sensitive to the choice of the ridge $c_n$ in finite sample cases.
To be specific, when $d-e$ in Model Feature~\ref{1} is small, the ratio at $q$ could be close to $1$, and we would then easily underestimate. Therefore, a small ridge $c_n$ is desirable. On the other hand, a small $c_n$ would result in the instability caused by $0/0$ type ratios, and overestimation would be possible. Thus, a trade-off exists between underestimation and overestimation in the choice of the ridge $c_n$. We now attempt to alleviate this dilemma by using transformed eigenvalues.
Considering a transformation (depending on $n$) $f_n(\cdot)$, define
\begin{equation}
\hat\delta_{i}^*=f_n(\hat\lambda_{i}/\sigma^2)-f_n(\hat\lambda_{i+1}/\sigma^2),\ i=1,2,\cdots,p-1.
\end{equation}
The ratios are defined as
\begin{equation}\label{ratio-TR}
\hat r_{i}^{TR}:=\dfrac{\hat\delta_{i+1}^*+c_n}{\hat\delta_{i}^*+c_n},\ 1\leq i\leq p-2.
\end{equation}
The estimator of $q$ is defined as
\begin{equation}
\hat{q}_n^{TVACLE}=\max_{1\leq i\leq L-2}\left\{i:\hat r_{i}^{TR}\leq \tau\right\},
\end{equation}
where $c_n$ and $\tau$ have the same definitions as before. We call this criterion the transformation-based valley-cliff estimation (TVACLE).
For any transformation $f_n$, we wish that $\hat r_{i}^{TR}$ remains close to $1$ for $i>q$, and $\hat r_{q}^{TR}$ is closer to zero than $\hat r_{q}^R$. To achieve these objectives, we consider a transformation that can satisfy the following requirements $(i)-(iii)$:\\
\indent (i) $\mathbb{P}\{\hat\delta_{q}^*\geq\hat\delta_{q}/\sigma^2\}\rightarrow 1$;\\
\indent (ii) $\mathbb{P}\{\hat\delta_{i}^*\leq\hat\delta_{i}/\sigma^2\}\rightarrow 1$ for $q+1\leq i\leq p-2$;\\
\indent (iii) ${\hat\delta_{q+1}^*}/{\hat\delta_{q}^*}\leq {\hat\delta_{q+1}}/{\hat\delta_{q}}$.
\begin{remark}
Conditions $(i)$ and $(ii)$ ensure that the transformation pulls up the value of $\hat\delta_{q}$ and brings down that of $\hat\delta_{i}$ for $q+1\leq i\leq p-2$. Condition (iii) is critical to ensuring that the ``valley'' could be closer to its limit ``0'' and then be better separated from the ``cliff'' after the transformation.
\end{remark}
The following conditions $(a)-(c)$ ensure that $f_n:\mathbb{R}\rightarrow \mathbb{R}$ satisfies the above requirements $(i)-(iii)$, letting $f_n'(x)$ be the derivative of $f_n(x)$ with respect to $x$:\\
\indent (a) $f_n$ is differentiable and $f_n'\geq 0$ on $\mathbb{R}$;\\
\indent (b) $f_n'$ is increasing and nonnegative on $\mathbb{R}$;\\
\indent (c) there exists a sequence $\kappa_n>0$ with $\hat\lambda_{i}/\sigma^2-e=o_p(\kappa_n)$ for $q+1\leq i\leq L-1$ such that $f_n'(x)=1$ for all $x\in (e-\kappa_n,e+\kappa_n)$.
\begin{lemma}\label{4}
Conditions $(a)-(c)$ imply Requirements $(i)-(iii)$ for $\{\hat\delta_{i}^*\}$ and $\{\hat\delta_{i}\}$ defined as above.
\end{lemma}
\begin{remark}
In Condition $(c)$, $\kappa_n$ can take a wide range of values, as long as it satisfies the condition that $\hat \lambda_i/\sigma^2-e=o_p(\kappa_n)$ for $q+1\leq i\leq L-1$. Specifically, it can take a constant value, converge to zero or even converge to infinity. Let $f_n'$ take value $1$ in a neighbourhood of the value $e$ with radius $\kappa_n$, so that all $\hat\lambda_{i}/\sigma^2$, for $q+1\le i\le L-1$, fall into this neighbourhood.
Thus, the ratios $\hat r_i^{TR}$, for $q+1\le i\le L-2$, remain unaffected by the transformation $f_n$. Besides, the selection of $\kappa_n$ is independent of $c_n$.
\end{remark}
We now give a piecewise quadratic function for this purpose as follows:
\begin{equation}
f_n(x)=\left\{
\begin{array}{lcl}
L_n-\frac{1}{2k_1},&&x< L_n-\frac{1}{k_1}\\
\frac{1}{2}k_1x^2+(1-k_1L_n)x+\frac{1}{2}k_1L_n^2, &&L_n-\frac{1}{k_1}\leq x<L_n\\
x,&&L_n\leq x<R_n\\
\frac{1}{2}k_2x^2+(1-k_2R_n)x+\frac{1}{2}k_2R_n^2,&&x\geq R_n
\end{array} \right.
\end{equation}
where $k_1$ and $k_2$ are the slopes of $f'_n$ to be determined, $L_n=e-\kappa_n$ and $R_n=e+\kappa_n$.
It is clear that when $k_1$ and $k_2$ are $0$, the TVACLE degenerates to the VACLE.
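A direct implementation of this piecewise quadratic transformation; the constructor `make_fn` and the parameter values below are ours, chosen for illustration only.

```python
import numpy as np

def make_fn(e, kappa_n, k1=5.0, k2=5.0):
    """Piecewise quadratic f_n with f_n'(x) = 1 on (L_n, R_n),
    where L_n = e - kappa_n and R_n = e + kappa_n."""
    Ln, Rn = e - kappa_n, e + kappa_n

    def fn(x):
        x = np.asarray(x, dtype=float)
        out = np.empty_like(x)
        left  = x < Ln - 1.0 / k1
        mid_l = (x >= Ln - 1.0 / k1) & (x < Ln)
        ident = (x >= Ln) & (x < Rn)
        right = x >= Rn
        out[left]  = Ln - 1.0 / (2.0 * k1)                 # constant piece
        out[mid_l] = (0.5 * k1 * x[mid_l] ** 2
                      + (1 - k1 * Ln) * x[mid_l] + 0.5 * k1 * Ln ** 2)
        out[ident] = x[ident]                              # identity on (L_n, R_n)
        out[right] = (0.5 * k2 * x[right] ** 2
                      + (1 - k2 * Rn) * x[right] + 0.5 * k2 * Rn ** 2)
        return out

    return fn

fn = make_fn(e=1.0, kappa_n=0.1)
# Identity on (L_n, R_n) = (0.9, 1.1): bulk eigenvalues are untouched,
# while values above R_n are pulled up and values below L_n are flattened.
print(fn(np.array([0.5, 0.95, 1.05, 1.2])))
```

One can check continuity at the breakpoints directly: for example, $f_n(R_n)=R_n$ and $f_n(1.2)>1.2$, so differences above the bulk are stretched.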
The consistency of $\hat{q}_n^{TVACLE}$ is stated in the following theorem.
\begin{theorem}\label{3}
Under the same conditions of Theorem \ref{2}, the estimator $\hat{q}_n^{TVACLE}$ with the above transformation $f_n$ is equal to $q$ with a probability going to 1.
\end{theorem}
\begin{remark}
Although selecting an optimal transformation is desirable, we doubt that one exists, as there is a large class of functions that could satisfy the conditions. Thus, such an issue is beyond the scope of this paper.
\end{remark}
\section{Applications to several models}\label{sec4}
In this section, we introduce several special models to which our method can be applied.\\
\textbf{Spiked population models}\quad The model can also be motivated by the signal detection problem (see, e.g. \cite{nadler2010nonparametric}):
\begin{equation}\label{12}
x_i=\mathbf{A}u_i+\varepsilon_i,\ 1\le i\le n,
\end{equation}
where $u_i\in \mathbb{R}^q$ is a $q$-dimensional random signal vector with zero mean components, $\varepsilon_i\in \mathbb{R}^p$ is a $p$-dimensional random vector with mean zero and covariance matrix $\sigma^2\mathbf{I}_p$, $\mathbf{A}\in \mathbb{R}^{p\times q}$ is the steering matrix whose $q$ columns are linearly independent, and $x_i\in \mathbb{R}^p$ is the observed vector on the $p$ sensors. Assume that the covariance matrix of the signals is of full rank, with the $q$ largest eigenvalues $\lambda_1\ge \cdots\ge \lambda_q>\sigma^2>0$. In the typical high dimensional setting in which $p/n\rightarrow c\in (0,+\infty)$, the population covariance matrix $\Sigma_p$ of $x_i$ has the structure of a spiked population model:
\begin{equation}
\text{spec}(\Sigma_p)=\{\lambda_1,\cdots,\lambda_q,\sigma^2,\cdots,\sigma^2\},
\end{equation}
Theoretically, the spiked population model allows the existence of small spikes (i.e. $\lambda_i<\sigma^2$), but this case is not discussed in this paper.
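This spiked structure can be checked numerically: with a steering matrix whose columns are orthonormal (a convenient choice for this sketch; the sizes and spike values below are hypothetical), the population covariance of $x_i$ has exactly $q$ eigenvalues above the bulk at $\sigma^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
p, q, sigma2 = 50, 3, 1.0
spikes = np.array([8.0, 6.0, 4.0])            # lambda_1 > ... > lambda_q > sigma^2

# Steering matrix with orthonormal columns (hypothetical choice).
A = np.linalg.qr(rng.standard_normal((p, q)))[0]
cov_u = np.diag(spikes - sigma2)              # signal covariance cov(u_i)
Sigma = A @ cov_u @ A.T + sigma2 * np.eye(p)  # population covariance of x_i

eig = np.sort(np.linalg.eigvalsh(Sigma))[::-1]
print(eig[:q])   # approximately [8, 6, 4]
print(eig[q])    # approximately 1.0 (the bulk at sigma^2)
```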
\begin{theorem}\label{4.1}
Suppose that $\lambda_q>U(F)=\sigma^2(1+\sqrt c)$. This bound is optimal for the identifiability of $q$. Model Feature~\ref{1} then holds, and the results of Theorems~\ref{2} and \ref{3} hold true.
\end{theorem}
\begin{remark}
From the proof in the supplementary materials, we see the optimality of the lower bound $U(F)=\sigma^2(1+\sqrt c)$ because it is not possible to identify any eigenvalue $\lambda_i$ such that $\sigma^2(1+\sqrt c) >\lambda_i>\sigma^2$ for any $q< i \le L$.
\end{remark}
\textbf{Large-dimensional spiked Fisher matrix}\quad Again consider the signal detection problem discussed above,
\begin{equation}
x_i=\mathbf{A}u_i+\varepsilon_i,\ 1\le i\le n,
\end{equation}
where $x_i$, $\mathbf{A}$ and $u_i$ share the same settings of (\ref{12}), whilst $\varepsilon_i$ is a noise vector with a general covariance matrix $\Sigma_2$. Denote the population covariance matrix of $x_i$ by $\Sigma_1$ such that
$ \Sigma_1=\Sigma_2+\Delta,$
where $\Delta=\mathbf{A}\operatorname{cov}(u_i)\mathbf{A}^T$ is a non-negative definite matrix with fixed rank $q$, provided that $\operatorname{cov}(u_i)$ is of full rank. If $\Sigma_2=\sigma^2\mathbf{I}_p$, the model degenerates to the spiked population model.
Otherwise, we note that $\Sigma_1\Sigma_2^{-1}$ has a spiked structure as
\begin{equation}
\text{spec}(\Sigma_1\Sigma_2^{-1})=\{\lambda_1,\cdots,\lambda_{q},1,\cdots,1\},
\end{equation}
where $\lambda_1\geq\cdots\ge\lambda_{q}>1$ and the number of spikes $q$ is fixed.
Let $\mathbf{S}_1$ and $\mathbf{S}_2$
be the sample covariance matrices that correspond to $\Sigma_1$ and $\Sigma_2$ with respective sample sizes of $n$ and $T$, where $p/n\rightarrow c>0$ and $p/T\rightarrow y\in (0,1)$ as $n\to \infty$ and $T\to \infty$ respectively.
Note that there are two different sample sizes $n$ and $T$ because the sample covariance matrix $\mathbf{S}_2$ comes from another sequence of pure noise observations, say $\{e_i\}_{1\le i\le T}$, with a different sample size $T$. Denote $\mathbf{S}_1=\frac{1}{n}\sum_{i=1}^nx_ix_i^{\top}$ and $\mathbf{S}_2=\frac{1}{T}\sum_{i=1}^Te_ie_i^{\top}$. When $\mathbf{S}_2$ is invertible, the random matrix $\mathbf{F}_n=\mathbf{S}_1 \mathbf{S}_2^{-1}$ is called a Fisher matrix,
whose motivation comes from the following hypothesis testing problem:
\begin{equation}
H_0:\ \Sigma_1=\Sigma_2\quad \text{vs.}\quad
H_1:\ \Sigma_1=\Sigma_2+\Delta.
\end{equation}
See \cite{wang2017extreme} as an example. Denote the eigenvalues of $\mathbf{F}_n$ as $\hat\lambda_{1}\geq\cdots\ge\hat\lambda_{p}$. The difference between the two hypotheses then relies upon those extreme eigenvalues of $\mathbf{F}_n$.
We consider a more general Fisher matrix with the spiked structure
\begin{equation}
\text{spec}(\Sigma_1\Sigma_2^{-1})=\{\lambda_1,\cdots,\lambda_{q},\sigma^2,\cdots,\sigma^2\}.
\end{equation}
This is motivated by the hypothesis testing problem:
\begin{equation}
H_0:\ \Sigma_1=\sigma^2\Sigma_2\quad \text{vs.}\quad
H_1:\ \Sigma_1=\sigma^2\Sigma_2+\Delta.
\end{equation}
Also, by using the simple transformation $\hat\lambda_{i}\mapsto (\sigma^2)^{-1}\hat\lambda_{i}$, we can obtain the results from the case of $\sigma^2=1$ in a similar manner.
\begin{theorem}\label{4.3}
Suppose that $\lambda_q>U(F)=\sigma^2\gamma(1+\sqrt{c+y-cy})$, where $\gamma=(1-y)^{-1}$. This is the optimal lower bound for identifiability of $q$, and Model Feature~\ref{1} holds. The results of Theorems~\ref{2} and \ref{3} hold true.
\end{theorem}
In the following, we consider the auto-covariance matrix. This matrix has a much more complicated structure at the sample level, so we provide some more discussion that does not exactly follow the examination of Model Feature~\ref{1} in Theorems~\ref{2} and \ref{3}. In addition, because the theoretical analysis for the estimated matrix is not as complete as that for the spiked population and Fisher matrices, we must add some extra assumptions on the convergence rate of the estimated eigenvalues to derive the estimation consistency, although the rate would hold true, as reasonably conjectured by \cite{li2017identifying}. A rigorous proof is beyond the scope of this paper, and we therefore leave it to a further study. In this paper, however, we provide a proposition that assumes the convergence rate can be achieved and use numerical studies to verify the usefulness of our method in practice.\\
\textbf{Large dimensional auto-covariance matrix}\quad Consider a factor model:
\begin{equation}\label{auto-model}
y_t=\mathbf{A}x_t+\varepsilon_t,
\end{equation}
where, for a fixed number $q_0$, $x_t$ is a $q_0$-dimensional common factor time series, $\mathbf{A}$ is the $p\times q_0$ factor loading matrix, $\{\varepsilon_t\}$ is a sequence of Gaussian noise independent of $x_t$, and $y_t$ is the $t$-th column of the $p\times T$ observed matrix $\mathbf{Y}$, namely the $p$-dimensional observation at time $t$.
Let $\Sigma_y=\operatorname{cov}(y_t,y_{t-1})$ be the lag-1 auto-covariance matrix of $y_t$, and let $\hat{\Sigma}_y=\frac{1}{T}\sum_{t=2}^{T+1}y_ty_{t-1}^{\top}$ be its sample version.
Let $\mu$ be a finite measure on the real line $\mathbb{R}$ with support $\operatorname{supp}(\mu)$, and let $\mathbb{C}\backslash \operatorname{supp}(\mu)$ denote the complex plane $\mathbb{C}$ with the set $\operatorname{supp}(\mu)$ removed. For any $z \in \mathbb{C}\backslash \operatorname{supp}(\mu)$, the Stieltjes transformation and $T$-transformation of $\mu$ are respectively defined as
\begin{equation}\label{trans}
\mathcal{S}(z)=\int\frac{1}{t-z}d\mu(t),\ \mathcal{T}(z)=\int\frac{t}{z-t}d\mu(t).
\end{equation}
When $\mu$ is supported on an interval, say $\operatorname{supp}(\mu)=[A,B]$, and $z$ is a real value, the $T$-transformation $\mathcal{T}(\cdot)$ is a decreasing homeomorphism from $(-\infty, A)$ onto $(\mathcal{T}(A-), 0)$ and from $(B, +\infty)$ onto $(0, \mathcal{T}(B+))$, where
$$\mathcal{T}(A-):=\lim_{z\in \mathbb{R}, z\rightarrow A-}\mathcal{T}(z), \
\mathcal{T}(B+):=\lim_{z\in \mathbb{R}, z\rightarrow B+}\mathcal{T}(z).$$
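For a discrete measure the integrals in (\ref{trans}) reduce to sums, which gives a quick way to verify the monotonicity just described. A toy sketch (ours) with a two-point measure:

```python
import numpy as np

# T-transformation of the toy measure mu = 0.5*delta_1 + 0.5*delta_2,
# evaluated to the right of supp(mu) = {1, 2}.
t = np.array([1.0, 2.0])   # atoms
w = np.array([0.5, 0.5])   # weights

def T(z):
    # Discrete analogue of T(z) = int t/(z - t) d mu(t).
    return np.sum(w * t / (z - t))

# T should be positive and decreasing on (B, +infty) with B = 2.
zs = np.array([2.5, 3.0, 5.0, 50.0])
vals = np.array([T(z) for z in zs])
print(vals)
```

For instance $\mathcal{T}(3) = 0.5\cdot\frac{1}{2} + 0.5\cdot\frac{2}{1} = 1.25$, and the values decrease toward $0$ as $z\to+\infty$, matching the homeomorphism property.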
We make the following assumptions on the time series $\{x_t\}_{1\le t\le T}$ and $\{\varepsilon_t\}_{1\le t\le T}$ (\cite{li2017identifying}).
\begin{assumption}\label{auto-assumption1}
$\{x_t\}_{1\le t\le T}$ is a $q_0$-dimensional stationary time series, where $q_0$ is a fixed number. Every component is independent of the others and admits the representation
$$x_{i,t}=\sum_{l=0}^\infty\alpha_{i,l}\eta_{i,t-l},\ i=1,\cdots,q_0,\ t=1,\cdots,T,$$
where $\{\eta_{i,k}\}$ is a real-valued and weakly stationary white noise with mean $0$ and variance $\sigma_i^2$. Denote $\gamma_0(i)$ and $\gamma_1(i)$ as the variance and lag-1 auto-covariance of $\{x_{i,t}\}$, respectively.
\end{assumption}
\begin{assumption}\label{auto-assumption2}
$\{\varepsilon_t\}$ is a $p$-dimensional real-valued random vector independent of $\{x_t\}$ and with independent components $\varepsilon_{i,t}$, satisfying $\mathbb{E}(\varepsilon_{i,t})=0,\ \mathbb{E}(\varepsilon_{i,t}^2)=\sigma^2,$
and $\forall \eta>0$,
$$\frac{1}{\eta^4pT}\sum_{i=1}^p\sum_{t=1}^{T+1}\mathbb{E}\left(|\varepsilon_{i,t}|^4I_{(|\varepsilon_{i,t}|\ge\eta T^{1/4})}\right)\longrightarrow 0\quad \text{as } pT\rightarrow \infty.$$
\end{assumption}
In the high dimensional setting with $p/T\rightarrow y>0$, the following result holds.
\begin{prop}\label{4.2}
Denote $\mathcal{T}(\cdot)$ as the $T$-transformation of the limiting spectral distribution for matrix $\hat {\mathbf{M}}_y/\sigma^4\equiv \hat{\Sigma}_y\hat{\Sigma}_y^{\top}/\sigma^4$. Suppose that the above assumptions are satisfied. Let $q=\#\{i: 1\le i\le q_0, \ \mathcal{T}_1(i)<\mathcal{T}(b_1+)\}$, where
$$\mathcal{T}_1(i)=\frac{2y\sigma^2\gamma_0(i)+\gamma_1(i)^2-\sqrt{(2y\sigma^2\gamma_0(i)+\gamma_1(i)^2)^2-4y^2\sigma^4(\gamma_0(i)^2-\gamma_1(i))^2}}{2\gamma_0(i)^2-2\gamma_1(i)^2},$$
$$b_1=(-1+20y+8y^2+(1+8y)^{3/2})/8,\ \mathcal{T}(b_1+)=\lim_{z\in \mathbb{R}, z\rightarrow b_1+}\mathcal{T}(z).$$
Then $q$ is the largest number of common factors that are identifiable.
\end{prop}
\begin{remark} Although the constraint $\mathcal{T}_1(i)<\mathcal{T}(b_1+)$ does not have as simple a formulation as that in Model Feature~\ref{1}, it also provides the optimal bound.
\end{remark}
\begin{prop}\label{4.31}
If the estimated eigenvalues $\hat \lambda_i$ for $i>q$ have a convergence rate of order $O_p(n^{-2/3})$ with the assumptions in Proposition~\ref{4.2}, Model Feature~\ref{1} then holds, and Theorems~\ref{2} and \ref{3} hold true.
\end{prop}
\begin{remark} As we commented before, \cite{li2017identifying} considered a criterion with a reasonable conjecture on the convergence rate of order $O_p(n^{-2/3})$, although without a rigorous proof. We have not provided this result either, and thus we regard the above result as a proposition, rather than a theorem. We will see that it works well in the numerical studies.
\end{remark}
\section{Numerical Studies}\label{sec5}
\subsection{ Scale estimation}\label{sec5.1}
\cite{passemier2012determining} estimated $\sigma^2$ by simply taking the average over $\{\hat{\lambda}_{i}\}_{q+1\le i\le p}$, and \cite{passemier2017estimation} established its consistency and further introduced a refined version by subtracting the bias. That, however, involves an iteration procedure because the number $q$ must be estimated. To construct a robust estimator, \cite{ulfarsson2008dimension} and \cite{johnstone2009consistency} used the median of the sample eigenvalues $\{\hat{\lambda}_{i}:\hat{\lambda}_{i}\le b\}$ and the sample variances $\{\frac{1}{n}\sum_{i=1}^nx_{ij}^2\}_{1\le j\le p}$, respectively. The former still requires a crude estimator of the right edge $b=\sigma^2(1+\sqrt c)^2$ in advance, which is equivalent to giving a rough initial estimator of $\sigma^2$.
In this section, we propose
a one-step procedure that could be regarded as a simplified version of the method of \cite{ulfarsson2008dimension}. With spiked population models, the empirical spectral distribution of $\mathbf{S}_n$ almost surely converges to a Marcenko-Pastur distribution $F_{c,\sigma^2}(x)$ (see details in Supplementary Materials). For $0<\alpha<1$, their $\alpha$-quantiles are denoted $\hat\xi_{c,\sigma^2}^{(n)}(\alpha)$ and $\xi_{c,\sigma^2}(\alpha)$, respectively:
\begin{equation}
\hat\xi_{c,\sigma^2}^{(n)}(\alpha):=\hat{\lambda}_{p-[p\alpha]},\quad
\xi_{c,\sigma^2}(\alpha):=\inf\{x:F_{c,\sigma^2}(x)\ge\alpha\}.
\end{equation}
It then follows that $\hat\xi_{c,\sigma^2}^{(n)}(\alpha)\rightarrow\xi_{c,\sigma^2}(\alpha)$ as $n\rightarrow\infty$.
Note that
$\xi_{c,\sigma^2}(\alpha)=\sigma^2\xi_{c,1}(\alpha)$. Approximating a certain quantile, say $\xi_{c,\sigma^2}(\alpha)$, of the M-P distribution by its sample counterpart $\hat\xi_{c,\sigma^2}^{(n)}(\alpha)$, we obtain an estimator of $\sigma^2$,
\begin{equation}\label{13}
\hat{\sigma}^2=\hat\xi_{c,\sigma^2}^{(n)}(\alpha)\cdot{\xi_{c,1}(\alpha)}^{-1}.
\end{equation}
The consistency of $\hat \sigma^2$ is equivalent to that of $\hat\xi_{c,\sigma^2}^{(n)}(\alpha)$, which can hold under certain conditions.
Further, the rigidity of the eigenvalues of the covariance matrix (see Theorem~3.3 in \cite{pillai2014universality}) implies that the convergence rate of $\hat \sigma^2$ is of order $n^{-1+\varepsilon}$ for any $\varepsilon>0$. Thus, the consistencies of the VACLE and the TVACLE still hold when we replace $\hat\lambda_i/\sigma^2$ by $\hat\lambda_i/\hat \sigma^2$, as $\hat\sigma^2$ converges faster than the extreme eigenvalues $\hat\lambda_{i},\ 1\le i\le L$, for any fixed $L$.
Practically, for the sake of simplicity and stability, we let $\alpha=0.5$ for $0<c<1$ and $\alpha=1-(2c)^{-1}$ for $c\ge 1$; that is, $\alpha=1-(2\max\{1,c\})^{-1}$. The sample quantile $\hat\xi_{c,\sigma^2}^{(n)}(\alpha)$ divides all positive eigenvalues of $\mathbf{S}_n$ into two equal parts. The estimator $\hat{\sigma}^2$ is then less sensitive to extreme eigenvalues of $\mathbf{S}_n$. Its performance is examined in the following numerical studies.
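A sketch of the one-step estimator (\ref{13}) for $0<c<1$, computing the Marcenko-Pastur quantile $\xi_{c,1}(\alpha)$ by numerical integration of the density; the helper names and the simulation sizes are ours.

```python
import numpy as np

def mp_quantile(c, alpha, grid=100001):
    """alpha-quantile xi_{c,1}(alpha) of the Marcenko-Pastur law
    (sigma^2 = 1, 0 < c < 1), via numerical integration of its density."""
    a, b = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2
    x = np.linspace(a, b, grid)[1:-1]
    dens = np.sqrt((b - x) * (x - a)) / (2 * np.pi * c * x)
    cdf = np.cumsum(dens) * (x[1] - x[0])        # crude CDF on the grid
    return x[np.searchsorted(cdf, alpha)]

def estimate_sigma2(eigvals, c, alpha=None):
    """One-step scale estimator: sample quantile over the MP quantile."""
    if alpha is None:
        alpha = 1 - 1 / (2 * max(1.0, c))
    lam = np.sort(eigvals)                       # increasing order
    sample_q = lam[int(len(lam) * alpha)]        # hat lambda_{p - [p alpha]}
    return sample_q / mp_quantile(c, alpha)

# Sanity check on simulated pure noise with sigma^2 = 2 (hypothetical sizes).
rng = np.random.default_rng(1)
n, p = 400, 100
X = np.sqrt(2.0) * rng.standard_normal((n, p))
S = X.T @ X / n
print(estimate_sigma2(np.linalg.eigvalsh(S), c=p / n))  # close to 2
```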
\subsection{ Simulations about spiked population models}\label{sec5.2}
In this subsection, we consider the comparisons between VACLE and TVACLE defined as $\hat q_n^{VACLE}$ and $\hat q_n^{TVACLE}$ and the method defined as $\hat q_n^{PY}$ developed and refined by \cite{passemier2012determining} and \cite{passemier2014estimation}. Because estimating the number of spikes $q$ is the main focus, we conduct the simulations mainly with given $\sigma^2$. For unknown $\sigma^2$, we give a brief discussion, and a simple one-step estimator of $\sigma^2$ is introduced in Section~\ref{sec5.1}. In all simulation experiments, we conduct $500$ independent replications.
Furthermore, recalling the definition of $c=p/n$, we report the results for three scenarios, $c=0.25, 1$ and $2$, to represent the cases in which the dimension $p$ is smaller than, equal to, and larger than the sample size $n$, respectively.
The estimator $\hat q_n^{PY}$ is defined by
\begin{equation}\label{}
\hat{q}_n^{PY}=\min\{i\in \{1,\cdots ,L\}:\hat\delta_{i+1}<d_n \ \text{and}\ \hat\delta_{i+2}<d_n\},
\end{equation}
where $L>q$ is a prefixed bound that is large enough, $d_n=o(n^{-1/2})$ and $n^{2/3}d_n\rightarrow+\infty$.
\subsubsection*{Models and parameters selections: the known $\sigma^2$ case.}
For $\hat{q}_n^{PY}$, the sequence $d_n=Cn^{-2/3}\sqrt{2\log\log n}$ with $C$ adjusted by an automatic procedure identical to that in \cite{passemier2014estimation}. The estimators $\hat{q}_n^{VACLE}$ and $\hat{q}_n^{TVACLE}$ share the same threshold $\tau=0.5$ but have different ridges $c_n$. Theoretically speaking, the selection range of $c_n$ is very wide, needing only to meet the requirement that $c_n\rightarrow 0$ and $n^{2/3}c_n\rightarrow +\infty$. Here, we give an automatic procedure for ridge calibration by pure-noise simulations.
For given $(p,n)$, we conduct 500 independent pure-noise simulations and obtain the $\alpha$-quantile $\text{q}_{p,n}(\alpha)$ and sample mean $\text{m}_{p,n}$ of the difference $\hat \lambda_1- \hat \lambda_2$, where $\hat \lambda_1$ and $\hat \lambda_2$ are the two largest eigenvalues of the noise matrix. By the results in \cite{benaych2011fluctuations}, for such a no-spike matrix, we can use $\hat \lambda_1-\hat \lambda_2$ to approximate $\hat \delta_{q+1}$:
\begin{eqnarray*}
&&\mathbb{P}\{\text{q}_{p,n}(0.01)-m_{p,n}<\hat\delta_{q+1}-m_{p,n}<\text{q}_{p,n}(0.99)-m_{p,n}\}\nonumber\\
&\approx& \mathbb{P}\{\text{q}_{p,n}(0.01)-m_{p,n}<\hat \lambda_1-\hat \lambda_2-m_{p,n}<\text{q}_{p,n}(0.99)-m_{p,n}\}\approx 0.98.
\end{eqnarray*}
Thus, the ratio $\left[\hat\delta_{q+2}-\text{m}_{p,n}+[\text{q}_{p,n}(0.99)-\text{q}_{p,n}(0.01)]\right]\left[\hat\delta_{q+1}-\text{m}_{p,n}+[\text{q}_{p,n}(0.99)-\text{q}_{p,n}(0.01)]\right]^{-1}$ would be dominated by the term $\text{q}_{p,n}(0.99)-\text{q}_{p,n}(0.01)-\text{m}_{p,n}$ and would then remain close to the ``cliff'' value $1$ with high probability. To ensure the convergence rate, we select the ridge $c_n^{(1)}=\log\log n\cdot[\text{q}_{p,n}(0.95)-\text{q}_{p,n}(0.05)]-\text{m}_{p,n}$ for the VACLE and a smaller one, $c_n^{(2)}=\sqrt{\log\log n}\cdot[\text{q}_{p,n}(0.95)-\text{q}_{p,n}(0.05)]-\text{m}_{p,n}$, for the TVACLE. Note that $\text{q}_{p,n}(\alpha)$ and $\text{m}_{p,n}$ converge to zero at the same rate as $\hat\lambda_{q+1}$, which is slightly faster than the rates of $c_n^{(1)}$ and $c_n^{(2)}$.
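The pure-noise calibration described above can be sketched as follows, assuming Gaussian white noise; the function name and the reduced number of replications in the call are ours, chosen for illustration only.

```python
import numpy as np

def calibrate_ridges(p, n, reps=500, seed=0):
    """Pure-noise calibration of the ridges c_n^(1) and c_n^(2):
    simulate white-noise data sets, record the gap between the two
    largest sample covariance eigenvalues, and combine its quantiles
    and sample mean as in the text."""
    rng = np.random.default_rng(seed)
    gaps = np.empty(reps)
    for r in range(reps):
        X = rng.standard_normal((p, n))
        ev = np.linalg.eigvalsh(X @ X.T / n)  # sample covariance spectrum
        gaps[r] = ev[-1] - ev[-2]             # gap of the two largest eigenvalues
    q05, q95 = np.quantile(gaps, [0.05, 0.95])
    m = gaps.mean()
    c1 = np.log(np.log(n)) * (q95 - q05) - m           # ridge for the VACLE
    c2 = np.sqrt(np.log(np.log(n))) * (q95 - q05) - m  # smaller ridge for the TVACLE
    return c1, c2

# A small illustration (reps reduced to keep the runtime negligible).
c1, c2 = calibrate_ridges(p=50, n=200, reps=50)
```

Since $\log\log n>\sqrt{\log\log n}$ for the sample sizes used here, the VACLE ridge $c_n^{(1)}$ is always the larger of the two.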
In addition, we manually determine the sequence $\kappa_n$. Details of the parameter selections are reported in Table~\ref{spiked parameter1}.
Following the calibration procedure of \cite{passemier2014estimation}, we obtain the value of $C$ for various $c=p/n$, as shown in Table~\ref{spiked parameter2}.
\begin{table}[H]\footnotesize
\caption{\footnotesize Parameters settings for the three methods.}
\centering
\renewcommand\arraystretch{0.8}
\begin{tabular}{cccccccc}
\toprule
Method & $d_n$ &$\tau$& $c_n$ & $\kappa_n$ &$k_1$ &$k_2$ &$L$\\
\midrule
PY & $C\cdot n^{-2/3}\sqrt{2\log\log n}$ & -- &--&--&--&--&20\\
VACLE & -- & 0.5 &$c_n^{(1)}$&--&--&--&20\\
TVACLE & -- & 0.5 &$c_n^{(2)}$&$\log\log p \cdot p^{-2/3}$&5&5&20\\
\bottomrule
\end{tabular}
\label{spiked parameter1}
\end{table}
\begin{table}[H]\footnotesize
\caption{\footnotesize Values of $C$.}
\centering
\renewcommand\arraystretch{0.8}
\begin{tabular}{p{25pt}p{40pt}p{40pt}p{40pt}p{40pt}}
\toprule
{\it c=p/n} & 0.25
&1&2 \\
\midrule
C &5.5226
&6.3424&7.6257\\
\bottomrule
\end{tabular}
\label{spiked parameter2}
\end{table}
\begin{remark}
Note that we select different ridges $c_n$ in $\hat{q}_n^{VACLE}$ and $\hat{q}_n^{TVACLE}$. As described above, a relatively small ridge $c_n$ could give $\hat r_{q+1}^R$ better separation from $\hat r_{q}^R$, but it might also make $\hat r_{i}^R$ unstable for $i>q+1$, so a relatively large ridge is needed for $\hat r_{i}^R$. However, $\hat r_{i}^{TR}$ is much less sensitive to the ridge. Thus, we choose a smaller ridge for $\hat{q}_n^{TVACLE}$. Moreover, the ridges $c_n^{(1)}$ and $c_n^{(2)}$ are generated by an automatic procedure rather than by manual selection. This calibration procedure depends only on $(p,n)$.
\end{remark}
We consider three models. For a fair comparison, Models~\ref{model1} and \ref{model2} are taken from \cite{passemier2012determining}, with dispersed spikes and with closely spaced but unequal spikes, respectively, and Model~\ref{model3} has two equal spikes:
\begin{model}\label{model1}
$q=5$, $\left(\lambda_1,\cdots,\lambda_5\right)=(259.72,17.97,11.04,7.88,4.82)$,
\end{model}
\begin{model}\label{model2}
$q=4$, $\left(\lambda_1,\cdots,\lambda_4\right)=(7,6,5,4)$,
\end{model}
\begin{model}\label{model3}
$q=4$, $\left(\lambda_1,\cdots,\lambda_4\right)=(5,4,3,3)$.
\end{model}
Furthermore, we compare $\hat{q}_n^{TVACLE}$ with $\hat{q}_n^{PY}$ on a model with greater multiplicity of spikes:
\begin{model}\label{model4}
$q=6$, $\left(\lambda_1,\cdots,\lambda_6\right)=(5,5,5,5,5,5)$.
\end{model}
We set $\sigma^2=1$. When $\sigma^2$ is regarded as unknown, we use the one-step method in (\ref{13}) to estimate it. We conduct the same simulations for $\hat{q}_n^{VACLE}$ and $\hat{q}_n^{TVACLE}$ as those with known $\sigma^2$, but we do not report the results of $\hat{q}_n^{PY}$ with unknown $\sigma^2$ because we found that the results and conclusions are very similar.
\subsubsection*{Numerical Performance for Known $\sigma^2$ Case}
\begin{table}[H]\footnotesize
\caption{\footnotesize
Mean, mean squared error and misestimation rates of $\hat{q}_n^{PY}$, $\hat{q}_n^{VACLE}$ and $\hat{q}_n^{TVACLE}$ over 500 independent replications for Models~\ref{model1}-\ref{model3}, with known $\sigma^2=1$.}
\centering
{\scriptsize
\renewcommand{\baselinestretch}{1}\tabcolsep 0.1cm
\renewcommand\arraystretch{0.8}
\begin{tabular}{c|c|ccc|ccc|ccc}
\hline
& & \multicolumn{3}{c}{$\hat{q}_n^{PY}$} & \multicolumn{3}{c}{$\hat{q}_n^{VACLE}$} &\multicolumn{3}{c}{$\hat{q}_n^{TVACLE}$} \\
& $(p,n)$& \multicolumn{1}{c}{Mean}&\multicolumn{1}{c}{MSE}&\multicolumn{1}{c}{$\hat{q}_n^{PY}\neq q$}&\multicolumn{1}{c}{Mean}&\multicolumn{1}{c}{MSE}&\multicolumn{1}{c}{$\hat{q}_n^{VACLE}\neq q$}&\multicolumn{1}{c}{Mean}&\multicolumn{1}{c}{MSE} &\multicolumn{1}{c}{$\hat{q}_n^{TVACLE}\neq q$} \\
\hline
\multirow{6}{*}{Model~\ref{model1}}
&$(50,200)$ &5.022 &0.022 &0.022 &5.004 &0.004 &0.004 &5.024 &0.024 &0.024\\
&$(200,800)$ &5.012 &0.012 &0.012 &5.002 &0.002 &0.002 &5.016 &0.016 &0.016 \\
\cline{2-11}
&$(100,100)$ &5.016 &0.02 &0.02 &4.97 &0.046 &0.046 &4.998 &0.002 &0.002 \\
&$(200,200)$ &5.026 &0.03 &0.024 &5.01 &0.01 &0.01 &5.004 &0.004 &0.004 \\
\cline{2-11}
&$(100,50)$ &4.846 &0.218 &0.212 &4.484 &1.296 &0.41 &4.782 &0.222 &0.216 \\
&$(200,100)$ &4.99 &0.074 &0.074 &4.758 &0.486 &0.194 &4.954 &0.046 &0.046 \\
\hline
\multirow{6}{*}{Model~\ref{model2}}
&$(50,200)$ &4.018 &0.058 &0.028 &4.006 &0.006 &0.006 &4.016 &0.016 &0.016 \\
&$(200,800)$ &4.016 &0.02 &0.014 &4.004 &0.004 &0.004 &4.032 &0.04 &0.028 \\
\cline{2-11}
&$(100,100)$ &3.922 &0.246 &0.074 &3.416 &2.112 &0.22 &3.968 &0.036 &0.036 \\
&$(200,200)$ &4.014 &0.014 &0.014 &3.92 &0.304 &0.048 &4.006 &0.006 &0.006 \\
\cline{2-11}
&$(200,100)$ &3.558 &0.83 &0.342 &2.452 &5.144 &0.584 &3.712 &0.304 &0.28 \\
&$(400,200)$ &3.906 &0.162 &0.118 &3.046 &3.138 &0.364 &3.958 &0.05 &0.044 \\
\hline
\multirow{6}{*}{Model~\ref{model3}}
&$(50,200)$ &3.994 &0.118 &0.032 &3.772 &0.804 &0.08 &4.024 &0.024 &0.024 \\
&$(200,800)$ &4.018 &0.018 &0.018 &4 &0 &0 &4.036 &0.036 &0.036 \\
\cline{2-11}
&$(200,200)$ &3.456 &0.92 &0.414 &1.94 &6.684 &0.734 &3.614 &0.518 &0.326 \\
&$(400,400)$ &3.904 &0.18 &0.122 &2.7 &4.152 &0.478 &3.898 &0.142 &0.112 \\
\cline{2-11}
&$(400,200)$ &2.222 &3.81 &0.952 &1.08 &9.736 &0.968 &2.648 &2.296 &0.91 \\
&$(800,400)$ &2.626 &2.482 &0.844 &1.588 &7.104 &0.954 &3.022 &1.558 &0.7 \\
\hline
\end{tabular}
}
\label{table1}
\end{table}
From Table~\ref{table1}, we have the following observations. For Model~\ref{model1}, all three methods work well, with high accuracy and small MSEs, in the cases where the dimension $p$ is smaller than $n$ ($c=p/n=0.25$).
When either $c=1$ or $c=2$, $\hat{q}_n^{TVACLE}$ is the best, and $\hat{q}_n^{PY}$ also has smaller MSEs than $\hat{q}_n^{VACLE}$. In short, all three methods perform satisfactorily, but the performance of $\hat{q}_n^{TVACLE}$ is the most stable across the ratios $c=p/n$. For Model~\ref{model2}, $\hat{q}_n^{VACLE}$ is sensitive to the ratio $c$, particularly in its MSE. Although $\hat{q}_n^{TVACLE}$ may slightly underestimate the true number when $c=2$, the underestimation is less serious than that of $\hat{q}_n^{PY}$.
For Model~\ref{model3} with two equal spikes, $\hat{q}_n^{TVACLE}$ works much better than both $\hat{q}_n^{PY}$ and $\hat{q}_n^{VACLE}$, which significantly underestimate $q$.
\begin{table}[H]\footnotesize
\caption{\footnotesize Mean, mean squared error and empirical distribution of $\hat{q}_n^{PY}$ and $\hat{q}_n^{TVACLE}$ over 500 independent replications for Model~\ref{model4} $(q=6)$, with known $\sigma^2=1$.}
\centering
{\scriptsize
\renewcommand{\baselinestretch}{1}\tabcolsep 0.1cm
\renewcommand\arraystretch{0.8}
\begin{tabular}{c|c|c|c|cccccccc}
\hline
&$(p,n)$&Mean&MSE&$\hat q=0$ &$\hat q=1$ &$\hat q=2$ &$\hat q=3$ &$\hat q=4$ &$\hat q=5$&$\hat q=6$&$\hat q\ge 7$\\
\hline
\multirow{6}{*}{$\hat{q}_n^{PY}$}
&$(50,200)$ &5.358&2.874&0.018&0.04&0.042&0.06&0&0&\textbf{0.826}&0.014\\
&$(200,800)$ &5.816&0.868&0.002&0.014&0.02&0.012&0&0&\textbf{0.94}&0.012\\
\cline{2-12}
&$(100,100)$ &4.436&6.904&0.06&0.072&0.118&0.106&0.01&0.048&\textbf{0.572}&0.014\\
&$(200,200)$ &4.964&4.772&0.042&0.052&0.082&0.07&0&0&\textbf{0.742}&0.012\\
\cline{2-12}
&$(400,200)$ &3.858&9.794&0.078&0.138&0.164&0.094&0.008&0.032&\textbf{0.484}&0.002\\
&$(800,400)$ &4.406&7.558&0.068&0.11&0.098&0.086&0&0&\textbf{0.626}&0.012\\
\hline
\multirow{6}{*}{$\hat{q}_n^{TVACLE}$}
&$(50,200)$ &6.006&0.006&0&0&0&0&0&0&\textbf{0.994}&0.006\\
&$(200,800)$ &6.024&0&0&0&0&0&0&0&\textbf{0.976}&0.024\\
\cline{2-12}
&$(100,100)$ &5.886&0.122&0&0&0&0&0.004&0.106&\textbf{0.89}&0.11\\
&$(200,200)$ &6&0&0&0&0&0&0&0&\textbf{1}&0\\
\cline{2-12}
&$(400,200)$ &5.952&0.06&0&0&0&0&0.004&0.042&\textbf{0.952}&0.002\\
&$(800,400)$ &6.002&0.002&0&0&0&0&0&0&\textbf{0.998}&0.002\\
\hline
\end{tabular}
}
\label{table2}
\end{table}
To further confirm this phenomenon, we report the results for Model~\ref{model4} with more equal spikes. The results in Table~\ref{table2} suggest that $\hat{q}_n^{TVACLE}$ overall performs better than $\hat{q}_n^{PY}$ in terms of estimation accuracy and MSE. The estimator $\hat{q}_n^{PY}$ suffers from underestimation because its searching procedure stops early once the differences between consecutive eigenvalues corresponding to equal spikes fall below the threshold $d_n$.
This conclusion can be drawn from its empirical distribution in Table~\ref{table2}. In contrast, $\hat{q}_n^{TVACLE}$ largely avoids this problem. To better illustrate this fact, we plot in Figure~\ref{figure4} the first 40 differences $\hat\delta_i$ for $\hat q_n^{PY}$
and the first 40 ratios $\hat r_i^{TR}$ for $\hat q_n^{TVACLE}$.
The left subfigure shows that three differences $\hat \delta_i$, $i=3,4,5$, are very close to the threshold line $y=d_n$, which causes the underestimation problem shown in Table~\ref{table2}. In contrast, the right subfigure shows that the ``valley'' $\hat r_q^{TR}$ and the ``cliff'' $\hat r_{q+1}^{TR}$ are well separated by the threshold line $\tau=0.5$.
\begin{figure}
\caption{\footnotesize Plots of the first 40 differences and ratios: the left is for the differences $\hat\delta_i$, $1\le i\le 40$, in $\hat q_n^{PY}$; the right is for the transformed ridge ratios $\hat r_i^{TR}$, $1\le i\le 40$, in $\hat q_n^{TVACLE}$.}
\label{figure4}
\end{figure}
As we claimed in Sections~\ref{sec2} and \ref{sec3}, the VACLE can be somewhat sensitive to the ridge selection. The results reported in Table~\ref{table1} confirm this claim.
To explore how the ridge $c_n$ affects both the VACLE and the TVACLE, Figure~\ref{figure1} presents, for Model~\ref{model2} with $(p,n)=(400,200)$, the boxplots of the first 7 ratios without ridge, $\hat r_{i}$; the first 7 ridge ratios, $\hat r_{i}^R$; and the first 7 transformed ridge ratios, $\hat r_{i}^{TR}$.
From the left to the right subfigure of Figure~\ref{figure1}, we can see that for $i>q=4$, $\hat r_{i}$ fluctuates much more than $\hat r_{i}^R$, and that $\hat r_{4}^{TR}$ is separated more significantly from $\hat r_{i}^{TR}$, $i>4$. This confirms that a ridge is necessary to stabilise the ratios $\hat r_{i}^R$ and that the transformation enhances the estimation accuracy.
\begin{figure}
\caption{\footnotesize Boxplots of the first 7 ratios: the left is for the ratios without ridge, $\hat r_{i}$; the middle is for the ridge ratios, $\hat r_{i}^R$; the right is for the transformed ridge ratios, $\hat r_{i}^{TR}$.}
\label{figure1}
\end{figure}
\subsubsection*{The unknown $\sigma^2$ Case.}
We use Models~\ref{model2} and \ref{model4} and regard $\sigma^2$ as unknown. These two models represent the cases without and with equal spikes, respectively. Because the conclusions are very similar to those with known $\sigma^2$, we report only the results for $\hat{q}_n^{VACLE}$ and $\hat{q}_n^{TVACLE}$ to further confirm the advantages of $\hat{q}_n^{TVACLE}$. The numerical results are shown in Table~\ref{table3}. The results in the last two columns show that the one-step estimator $\hat\sigma^2$ performs well in terms of accuracy and robustness.
\begin{table}[h]\footnotesize
\caption{\footnotesize Mean and mean squared error of $\hat{q}_n^{VACLE}$, $\hat{q}_n^{TVACLE}$ and $\hat\sigma^2$, and the misestimation rates of $\hat{q}_n^{VACLE}$ and $\hat{q}_n^{TVACLE}$ over 500 independent replications for Models~\ref{model2} and \ref{model4}, with unknown $\sigma^2$ whose true value is 1.}
\centering
{\scriptsize
\renewcommand{\baselinestretch}{1}\tabcolsep 0.1cm
\renewcommand\arraystretch{0.8}
\begin{tabular}{c|c|ccc|ccc|cc}
\hline
& & \multicolumn{3}{c}{$\hat{q}_n^{VACLE}$} &\multicolumn{3}{c}{$\hat{q}_n^{TVACLE}$}&\multicolumn{2}{c}{$\hat\sigma^2$} \\
&$(p,n)$& \multicolumn{1}{c}{Mean}&\multicolumn{1}{c}{MSE}&\multicolumn{1}{c}{$\hat{q}_n^{VACLE}\neq q$}&\multicolumn{1}{c}{Mean}&\multicolumn{1}{c}{MSE} &\multicolumn{1}{c}{$\hat{q}_n^{TVACLE}\neq q$} &\multicolumn{1}{c}{Mean}&\multicolumn{1}{c}{MSE}\\
\hline
\multirow{6}{*}{Model \ref{model2}}
&$(50,200)$ &4.002 &0.002 &0.002 &4.012 &0.012 &0.012 &1.0513 &0.0033 \\
&$(200,800)$ &4.002 &0.002 &0.002 &4.014 &0.014 &0.014 &1.0119 &0.0002 \\
\cline{2-10}
&$(100,100)$ &3.326 &2.346 &0.258 &3.966 &0.038 &0.038 &1.0326 &0.0022 \\
&$(200,200)$ &3.96 &0.176 &0.04 &4.006 &0.006 &0.006 &1.0169 &0.0006 \\
\cline{2-10}
&$(200,100)$ &2.334 &5.726 &0.616 &3.71 &0.306 &0.282 &1.0205 &0.0008 \\
&$(400,200)$ &3.266 &2.454 &0.292 &3.962 &0.038 &0.038 &1.0094 &0.0002 \\
\hline
\multirow{6}{*}{Model \ref{model4}}
&$(50,200)$ &6.01 &0.01 &0.01 &6.01 &0.01 &0.01 &1.0788 &0.0069 \\
&$(200,800)$ &6.002 &0.002 &0.002 &6.022 &0.022 &0.022 &1.0181 &0.0004 \\
\cline{2-10}
&$(100,100)$ &4.082 &10.362 &0.388 &5.878 &0.142 &0.112 &1.0555 &0.0042 \\
&$(200,200)$ &5.846 &0.938 &0.034 &6 &0 &0 &1.0256 &0.0009 \\
\cline{2-10}
&$(400,200)$ &4.524 &8.064 &0.306 &5.958 &0.042 &0.042 &1.0165 &0.0004 \\
&$(800,400)$ &5.822 &0.966 &0.034 &6.01 &0.01 &0.01 &1.0079 &$9\times 10^{-5}$ \\
\hline
\end{tabular}
}
\label{table3}
\end{table}
\subsection{Numerical studies on large dimensional auto-covariance matrix}
To estimate the number of factors in Model (\ref{auto-model}), \cite{li2017identifying} introduced the following ratio-based estimator,
\begin{equation}\label{LWY estimator}
\hat q_T^{LWY}=\min\{i\ge 1:\ {\hat\lambda_{i+1}}/{\hat\lambda_{i}}>1-d_T\ \text{and}\ {\hat\lambda_{i+2}}/{\hat\lambda_{i+1}}>1-d_T\}-1,
\end{equation}
where the eigenvalues $\hat\lambda_{i}$, $1\le i\le p$, are arranged in descending order and $d_T$ is a tuning parameter selected as in Section 3.1 of \cite{li2017identifying}.
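A minimal sketch of this ratio rule (our own illustration, not the implementation of \cite{li2017identifying}) reads as follows; the toy spectrum and tuning value are hypothetical.

```python
import numpy as np

def q_hat_lwy(eigvals, d_T, L=20):
    """Ratio-based factor-number estimator: the first index i at which
    the two consecutive eigenvalue ratios lambda_{i+1}/lambda_i and
    lambda_{i+2}/lambda_{i+1} both exceed 1 - d_T, minus one."""
    lam = np.sort(np.asarray(eigvals))[::-1]  # descending order
    ratio = lam[1:] / lam[:-1]                # ratio[j] = lam[j+1] / lam[j]
    for i in range(1, L + 1):
        if ratio[i - 1] > 1 - d_T and ratio[i] > 1 - d_T:
            return i - 1
    return L

# Toy example: 2 spikes (9 and 4) above a slowly decaying bulk.
lam = np.array([9.0, 4.0] + [1.0 - 0.01 * k for k in range(25)])
print(q_hat_lwy(lam, d_T=0.2))  # 2
```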
To examine the performance of $\hat{q}_T^{TVACLE}$, we use $\hat q_T^{LWY}$ as the competitor. For the ratio
$p/T=y$, we consider two values, $y=0.5$ and $y=2$. The dimension $p$ takes values $100, 200, 300, 400$ and $500$. In each case, we repeat the experiment 500 times. To be fair and concise, we conduct the simulation with the following two models. The model structure is the same as in \cite{lam2012factor} and \cite{li2017identifying}: for $1\le t\le T$,
\begin{equation}
y_t=\mathbf{A}x_t+\varepsilon_t,\ \varepsilon_t\sim \mathcal{N}_p(\mathbf{0},\mathbf{I}_p),\ x_t=\Theta x_{t-1}+e_t,\ e_t\sim \mathcal{N}_q(\mathbf{0},\Gamma),
\end{equation}
where $\mathbf{A}\in \mathbb{R}^{p\times q}$ is the factor loading matrix and $\{\varepsilon_t\}$ is a white noise sequence with unit variance $\sigma^2=1$. As in \cite{li2017identifying}, $\mathbf{A}$ and $\Gamma$ take the forms $\mathbf{A}=(\mathbf{I}_q, \mathbf{O}_{q\times(p-q)})^{\top}$ and $\Gamma=\operatorname{diag}(2,2,\cdots,2)$.
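The data-generating process above can be sketched as follows. This is a simplified illustration: we start the factor process at $x_0=0$ rather than from its stationary distribution, and the function name is ours.

```python
import numpy as np

def simulate_factor_data(p, T, theta_diag, seed=0):
    """Generate y_t = A x_t + eps_t with a q-dimensional AR(1) factor
    x_t = Theta x_{t-1} + e_t, where A = (I_q, O)^T, Gamma = 2 I_q and
    sigma^2 = 1. Returns a (p, T) data matrix."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta_diag, float)  # diagonal of Theta
    q = theta.size
    A = np.vstack([np.eye(q), np.zeros((p - q, q))])
    x = np.zeros(q)                        # illustrative start x_0 = 0
    Y = np.empty((p, T))
    for t in range(T):
        x = theta * x + rng.standard_normal(q) * np.sqrt(2.0)  # e_t ~ N(0, 2 I_q)
        Y[:, t] = A @ x + rng.standard_normal(p)               # eps_t ~ N(0, I_p)
    return Y

# One draw with the Theta of Scenario III below.
Y = simulate_factor_data(p=100, T=200, theta_diag=[0.6, -0.5, 0.3])
```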
We manipulate the strength of factors by adjusting the matrix $\Theta$ in different models as follows:
\begin{model}\label{model5}
This model is the same as \emph{Scenario III} in \cite{li2017identifying}. There are $q=3$ factors, whose theoretical limits
equal $(7.726,5.496,3.613)$ in the case $y=0.5$ and $(23.744,20.464,17.970)$ in the case $y=2$. The upper edges $b_1$ of the supports in these two cases are 2.773 and 17.637, respectively. All $q=3$ factors are identifiable, and $\Theta=\operatorname{diag}(0.6,-0.5,0.3)$.
\end{model}
\begin{model}\label{model6}
This model has more factors. There are $q=6$ factors with identical strength, and their theoretical limits equal 5.496 in the case $y=0.5$ and $20.464$ in the case $y=2$. Because these limits exceed the corresponding upper edges $b_1$, all $q=6$ factors are identifiable in theory, with $\Theta=\operatorname{diag}(0.5,0.5,0.5,0.5,0.5,0.5)$.
\end{model}
All parameters in the simulations share the settings used in Section~\ref{sec5.2}, where we conducted the numerical studies for spiked population models. The parameters in the TVACLE estimator are shown in Table~\ref{auto parameter}.
\begin{table}[h]\footnotesize
\centering
\caption{\footnotesize Parameters in the TVACLE estimator.}
\renewcommand\arraystretch{0.8}
\begin{tabular}{cccccc}
\toprule
$\tau$& $c_T$ & $\kappa_T$ &$k_1$ &$k_2$ &$L$\\
\midrule
0.5 &$\sqrt{\log\log T}\cdot[\text{q}_{p,T}(0.95)-\text{q}_{p,T}(0.05)]-\text{m}_{p,T}$&$\log\log p \cdot p^{-2/3}$&5&5&20\\
\bottomrule
\end{tabular}\label{auto parameter}
\end{table}
\begin{table}[H]\footnotesize
\caption{\footnotesize Mean, mean squared error and empirical distribution of $\hat{q}_T^{LWY}$ and $\hat{q}_T^{TVACLE}$ over 500 independent replications for Model \ref{model5}.}
\centering
{\scriptsize
\renewcommand\arraystretch{0.8}
\begin{tabular}{c|cccccc|cccccc}
\hline
&$p$ &100 &200 &300 &400 &500 &$p$ &100 &200 &300 &400 &500 \\
&$T=2p$ &200 &400 &600 &800 &1000 &$T=0.5p$ &50 &100 &150 &200 &250\\
\hline
\multirow{7}{*}{$\hat{q}_T^{LWY}$}
&$\hat q= 0$ &0.024&0.002&0&0&0&$\hat q= 0$ &0.53&0.238&0.234&0.138&0.054\\
&$\hat q= 1$ &0.028&0&0&0&0&$\hat q=1$ &0.326&0.412&0.38&0.36&0.282\\
&$\hat q = 2$ &0.384&0.138&0.05&0.014&0.008&$\hat q= 2$ &\textbf{0.136} &\textbf{0.32} &\textbf{0.356} &\textbf{0.464} &\textbf{0.572}\\
&$\hat q = 3$ &\textbf{0.544} &\textbf{0.85} &\textbf{0.948} &\textbf{0.976} &\textbf{0.986} &$\hat q= 3$ &\textbf{0.008} &\textbf{0.03} &\textbf{0.03} &\textbf{0.036} &\textbf{0.092}\\
&$\hat q \ge 4$ &0.02&0.01&0.002&0.01&0.006&$\hat q\ge 4$ &0&0&0&0.002&0\\
&Mean &2.508&2.866&2.952&2.996&2.998& Mean &0.622&1.142&1.182&1.404&1.702\\
&MSE &0.732&0.166&0.052&0.024&0.014&MSE &6.21&4.11&3.982&3.148&2.186\\
\hline
\multirow{7}{*}{$\hat{q}_T^{TVACLE}$}
&$\hat q=0$ &0&0&0&0&0&$\hat q= 0$ &0.02&0.002&0&0.002&0\\
&$\hat q= 1$ &0&0&0&0&0&$\hat q= 1$ &0.332&0.182&0.116&0.104&0.054\\
&$\hat q = 2$ &0.196&0.02&0.014&0.008&0.002&$\hat q= 2$ &\textbf{0.584} &\textbf{0.698} &\textbf{0.688} &\textbf{0.73} &\textbf{0.676}\\
&$\hat q = 3$ &\textbf{0.782} &\textbf{0.948} &\textbf{0.974} &\textbf{0.964} &\textbf{0.974} &$\hat q= 3$ &\textbf{0.062} &\textbf{0.116} &\textbf{0.196} &\textbf{0.16} &\textbf{0.268}\\
&$\hat q \ge 4$ &0.022&0.032&0.012&0.028&0.024 &$\hat q\ge 4$ &0.002&0.002&0&0.004&0.002\\
&Mean &2.826&3.012&2.998&3.02&3.022 &Mean &1.694&1.934&2.08&2.06&2.218\\
&MSE &0.218&0.052&0.026&0.036&0.026 &MSE &2.094&1.446&1.152&1.168&0.894\\
\hline
\end{tabular}
}
\label{table6}
\end{table}
\begin{table}[H]\footnotesize
\caption{\footnotesize Mean, mean squared error and empirical distribution of $\hat{q}_T^{LWY}$ and $\hat{q}_T^{TVACLE}$ over 500 independent replications for Model \ref{model6}.}
\centering
{\scriptsize
\renewcommand\arraystretch{0.8}
\begin{tabular}{c|cccccc|cccccc}
\hline
&$p$ &100 &200 &300 &400 &500 &$p$ &100 &200 &300 &400 &500 \\
&$T=2p$ &200 &400 &600 &800 &1000 &$T=0.5p$ &50 &100 &150 &200 &250\\
\hline
\multirow{10}{*}{$\hat{q}_T^{LWY}$}
&$\hat q = 0$ &0.156&0.104&0.072&0.098&0.054 &$\hat q= 0$ &0.226&0.202&0.26&0.24&0.134\\
&$\hat q = 1$ &0.178&0.154&0.124&0.146&0.076 &$\hat q= 1$ &0.418&0.35&0.326&0.304&0.28\\
&$\hat q = 2$ &0.19&0.154&0.114&0.104&0.062 &$\hat q= 2$ &0.296&0.314&0.262&0.236&0.312\\
&$\hat q = 3$ &0.162&0.112&0.068&0.106&0.044 &$\hat q= 3$ &0.06&0.122&0.134&0.17&0.188\\
&$\hat q = 4$ &0.13&0.006&0&0&0 &$\hat q= 4$ &0&0.012&0.016&0.05&0.07\\
&$\hat q = 5$ &0.112&0.072&0&0&0 &$\hat q= 5$ &0&0&0.002&0&0.014\\
&$\hat q = 6$ &\textbf{0.072} &\textbf{0.394} &\textbf{0.62} &\textbf{0.542} &\textbf{0.754} &$\hat q= 6$ &\textbf{0} &\textbf{0} &\textbf{0} &\textbf{0} &\textbf{0.002}\\
&$\hat q \ge 7 $ &0&0.004&0.002&0.004&0.01 &$\hat q\ge 7$ &0&0&0&0&0\\
&Mean &2.556&3.574&4.29&3.954&4.926 &Mean &1.19&1.392&1.326&1.486&1.83\\
&MSE &15.196&11.166&8.13&9.806&5.242 &MSE &23.862&22.192&22.974&21.746&18.802\\
\hline
\multirow{10}{*}{$\hat{q}_T^{TVACLE}$}
&$\hat q = 0$ &0 &0 &0 &0 &0 &$\hat q= 0$ &0 &0 &0 &0 &0\\
&$\hat q = 1$ &0 &0 &0 &0 &0 &$\hat q= 1$ &0.01 &0 &0 &0 &0\\
&$\hat q = 2$ & 0&0 &0 &0 &0 &$\hat q= 2$ &0.206&0.07&0.03&0.008&0.008\\
&$\hat q = 3$ & 0.004&0 &0 &0 &0 &$\hat q= 3$ &0.586&0.496&0.33&0.224&0.13\\
&$\hat q = 4$ &0.066&0.002&0&0&0 &$\hat q= 4$ &0.19&0.414&0.546&0.574&0.554\\
&$\hat q = 5$ &0.418&0.038&0&0&0 &$\hat q= 5$ &0.008&0.02&0.094&0.188&0.294\\
&$\hat q = 6$ & \textbf{0.51}&\textbf{0.946} &\textbf{0.99} &\textbf{0.984} &\textbf{0.97} &$\hat q= 6$ &\textbf{0} &\textbf{0} &\textbf{0} &\textbf{0.006} &\textbf{0.014}\\
&$\hat q \ge 7 $ &0.002&0.014&0.01&0.016&0.03 &$\hat q\ge 7$ &0 &0 &0 &0 &0\\
&Mean &5.44&5.972&6.01&6.016&6.03 &Mean &2.98&3.384&3.704&3.96&4.176\\
&MSE &0.72&0.06&0.01&0.016&0.03 &MSE &9.588&7.26&5.728&4.628&3.808\\
\hline
\end{tabular}
}
\label{table7}
\end{table}
From Table~\ref{table6}, we can see that $\hat q_T^{LWY}$ works well when $T=2p$. That is, when $T$ is large, $\hat q_T^{LWY}$ shows good performance, whilst when $T$ is not large, it tends to underestimate the true number $q$. Our method clearly outperforms $\hat q_T^{LWY}$: although it somewhat underestimates the true value when $T$ is small, the estimate is still two or greater with high proportion. Table~\ref{table7} shows that for Model~\ref{model6} with equal spikes, the performance of $\hat q_T^{LWY}$ is not encouraging when $T=2p$, and when $T=0.5p$, the underestimation problem becomes very serious, with a very high proportion of estimates satisfying $\hat q_T^{LWY} \le 2$. In contrast, our method performs well when $T=2p$; when $T=0.5p$, underestimation still occurs, but it is much less serious than for $\hat q_T^{LWY}$, in the sense that $\hat q> 2$ with high proportion. Overall, our estimator $\hat{q}_T^{TVACLE}$ is superior to $\hat q_T^{LWY}$ in these limited simulations.
\subsection{Numerical studies on large dimensional spiked Fisher matrix}
Because the TVACLE has been demonstrated to outperform the VACLE overall, we only consider the comparison between $\hat q_n^{TVACLE}$ and the estimator $\hat q_n^{WY}$ introduced by \cite{wang2017extreme}. Using the notation of Section~\ref{sec4}, the
estimator $\hat q_n^{WY}$ can be written as
\begin{equation}
\hat q_n^{WY}=\max\{i:\hat\lambda_{i}\ge b_2+d_n\},
\end{equation}
where $d_n$ was recommended to be $(\log\log p)p^{-2/3}$ in their paper.
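A sketch of this counting rule (ours, not the authors' code; the toy spectrum is hypothetical):

```python
import numpy as np

def q_hat_wy(eigvals, b2, p):
    """Count the Fisher-matrix eigenvalues exceeding the upper edge b2
    of the bulk plus the threshold d_n = (log log p) * p**(-2/3)."""
    d_n = np.log(np.log(p)) * p ** (-2.0 / 3.0)
    lam = np.sort(np.asarray(eigvals))[::-1]
    # Since lam is in descending order, the indices satisfying
    # lam_i >= b2 + d_n form a prefix, so the count equals the max index.
    return int(np.sum(lam >= b2 + d_n))

# Toy spectrum: three spikes above a bulk with upper edge b2 = 3.55.
lam = np.array([11.0, 6.0, 6.0] + list(np.linspace(3.5, 0.5, 40)))
print(q_hat_wy(lam, b2=3.55, p=200))  # 3
```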
Because a Fisher matrix $\mathbf{F}_n=\mathbf{S}_1\mathbf{S}_2^{-1}$ involves two random matrices $\mathbf{S}_1$ and $\mathbf{S}_2$, its eigenvalues are more dispersed, with a wider support, than those of the spiked sample covariance and auto-covariance matrices.
The aforementioned automatic procedure for ridge selection would then generate a larger $c_n$, which in turn increases the value at the ``valley''. Hence,
we use a larger threshold $\tau=0.8$ to avoid underestimation. Further, in the following Model~\ref{model7}, we set the ridge $c_n^{(3)}=\sqrt{\log\log p}\cdot[\text{q}_{p,n}(0.95)-\text{q}_{p,n}(0.05)]-\text{m}_{p,n}$, whilst for Model~\ref{model8}, whose extreme eigenvalues fluctuate dramatically, we set $c_n^{(3)}=\sqrt{\log\log p}\cdot[\text{q}_{p,n}(0.8)-\text{q}_{p,n}(0.05)]-\text{m}_{p,n}$ to avoid an overly large ridge. The parameters in $\hat q_n^{TVACLE}$ are shown in Table~\ref{Fisher parameters}.
\begin{table}[h]\footnotesize
\caption{\footnotesize Parameters in the TVACLE estimator.}
\centering
\renewcommand\arraystretch{0.8}
\begin{tabular}{cccccc}
\toprule
$\tau$& $c_n$ & $\kappa_n$ &$k_1$ &$k_2$ &$L$\\
\midrule
0.8 &$c_n^{(3)}$&$\log\log p\cdot p^{-2/3}$&5&5&20\\
\bottomrule
\end{tabular}
\label{Fisher parameters}
\end{table}
Again, for a fair comparison, we design two models: one that was used by \cite{wang2017extreme} and another with weaker spikes. For $y=p/T$ and $c=p/n$,
we set $(y,c)=(0.5,0.2)$ and $(0.2,0.5)$ for the respective models. The dimension $p$ takes values of $50, 100, 150, 200$ and $250$. For each combination $(p,T,n)$, the experiment is repeated $500$ times. We take the number of spikes to be $q=3$ and $\mathbf{A}$ to be the $p\times 3$ matrix
\begin{eqnarray}
\left(
\begin{array}{cccccc}
\sqrt {\alpha_1} & 0 & 0 & 0 & \cdots & 0 \\
0 & \sqrt{\frac{\alpha_2}{2}} & \sqrt{\frac{\alpha_2}{2}} & 0 & \cdots & 0 \\
0 & \sqrt{\frac{\alpha_3}{2}} & -\sqrt{\frac{\alpha_3}{2}} & 0 & \cdots & 0 \\
\end{array}
\right)_{3\times p}^{\top},
\end{eqnarray}
where $\alpha=(\alpha_1,\alpha_2,\alpha_3)$ assumes different values in the two models. We take the covariance matrix $\operatorname{Cov}(u_i)=\mathbf{I}_3$ and $\Sigma_2=\operatorname{diag}(1,\cdots,1,2,\cdots,2)$, where ``1'' and ``2'' both have multiplicity $p/2$. The two models are:
\begin{model}\label{model7}
Let $\alpha=(10,5,5)$ and $(y,c)=(0.5,0.2)$, which is Model 1 in \cite{wang2017extreme}. The matrix $\Sigma_1\Sigma_2^{-1}$ has three spikes $\lambda_1=11$ and $\lambda_2=\lambda_3=6$, which are all significantly larger than the upper bound $b_2=\frac{1+\sqrt{c+y-cy}}{1-y}\approx 3.55$ of the support of the distribution.
\end{model}
\begin{model}\label{model8}
Let $\alpha=(10,2,2)$ and $(y,c)=(0.2,0.5)$. The matrix $\Sigma_1\Sigma_2^{-1}$ then also has three spikes, $\lambda_1=11$ and $\lambda_2=\lambda_3=3$, larger than the upper bound $b_2=\frac{1+\sqrt{c+y-cy}}{1-y}\approx 2.22$ of the support of the distribution. The spikes $\lambda_2=\lambda_3=3$ are relatively more difficult to detect.
\end{model}
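As an illustration, the loading matrix $\mathbf{A}$ displayed above and the matrix $\Sigma_2$ can be assembled as follows; this is a numpy sketch, and the function name is ours.

```python
import numpy as np

def build_A_sigma2(p, alpha):
    """Build the p x 3 loading matrix A (transpose of the 3 x p matrix
    displayed above) and Sigma_2 = diag(1,...,1,2,...,2), where the
    values 1 and 2 each have multiplicity p/2."""
    a1, a2, a3 = alpha
    A = np.zeros((p, 3))
    A[0, 0] = np.sqrt(a1)
    A[1, 1] = A[2, 1] = np.sqrt(a2 / 2)
    A[1, 2] = np.sqrt(a3 / 2)
    A[2, 2] = -np.sqrt(a3 / 2)
    Sigma2 = np.diag(np.r_[np.ones(p // 2), 2.0 * np.ones(p // 2)])
    return A, Sigma2

A, Sigma2 = build_A_sigma2(p=50, alpha=(10, 5, 5))
# The columns of A are orthogonal with squared norms alpha = (10, 5, 5).
```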
\begin{table}[H]\footnotesize
\caption{\footnotesize Mean, mean squared error and empirical distribution of $\hat{q}_n^{WY}$ and $\hat{q}_n^{TVACLE}$
for Model \ref{model7}.}
\centering
{\scriptsize
\renewcommand\arraystretch{0.8}
\begin{tabular}{c|c|c|c|ccccc}
\hline
&$(p,T,n)$&Mean&MSE&$\hat q=0$ &$\hat q=1$ &$\hat q=2$ &$\hat q=3$ &$\hat q=4$\\
\hline
&$(50,100,250)$ &2.344 &0.732 &0 &0.034 &0.592 &\textbf{0.37} &0.004\\
&$(100,200,500)$ &2.672 &0.352 &0 &0.004 &0.328 &\textbf{0.66} &0.008\\
$\hat{q}_n^{WY}$& $(150,300,750)$ &2.822 &0.194 &0 &0 &0.186 &\textbf{0.806} &0.008\\
&$(200,400,1000)$ &2.964 &0.092 &0 &0 &0.064 &\textbf{0.908} &0.028\\
&$(250,500,1250)$ &2.96 &0.068 &0 &0 &0.054 &\textbf{0.932} &0.014\\
\hline
&$(50,100,250)$ &2.364 &0.7 &0 &0.028 &0.584 &\textbf{0.384} &0.004\\
&$(100,200,500)$ &2.688 &0.336 &0 &0.004 &0.312 &\textbf{0.676} &0.008\\
$\hat{q}_n^{TVACLE}$ &$(150,300,750)$ &2.842 &0.182 &0 &0 &0.17 &\textbf{0.818} &0.012\\
&$(200,400,1000)$ &2.974 &0.082 &0 &0 &0.054 &\textbf{0.918} &0.028\\
&$(250,500,1250)$ &2.964 &0.064 &0 &0 &0.05 &\textbf{0.936} &0.014\\
\hline
\end{tabular}
}
\label{table4}
\end{table}
\begin{table}[H]\footnotesize
\caption{\footnotesize Mean, mean squared error and empirical distribution of $\hat{q}_n^{WY}$ and $\hat{q}_n^{TVACLE}$
for Model \ref{model8}.}
\centering
{\scriptsize
\renewcommand\arraystretch{0.8}
\begin{tabular}{c|c|c|c|ccccc}
\hline
&$(p,T,n)$&Mean&MSE&$\hat q=0$ &$\hat q=1$ &$\hat q=2$ &$\hat q=3$ &$\hat q=4$\\
\hline
&$(50,250,100)$ &2.114 &1.07 &0 &0.09 &0.708 &\textbf{0.2} &0.002\\
&$(100,500,200)$ &2.302 &0.79 &0 &0.046 &0.606 &\textbf{0.348} &0\\
$\hat{q}_n^{WY}$ &$(150,750,300)$ &2.498 &0.538 &0 &0.018 &0.466 &\textbf{0.516} &0\\
&$(200,1000,400)$ &2.622 &0.394 &0 &0.006 &0.368 &\textbf{0.624} &0.002\\
&$(250,1250,500)$ &2.692 &0.324 &0 &0.004 &0.304 &\textbf{0.688} &0.004\\
\hline
&$(50,250,100)$ &2.238 &0.898 &0 &0.064 &0.638 &\textbf{0.294} &0.004\\
&$(100,500,200)$ &2.462 &0.602 &0 &0.03 &0.48 &\textbf{0.488} &0.002\\
$\hat{q}_n^{TVACLE}$ &$(150,750,300)$&2.71 &0.314 &0 &0 &0.302 &\textbf{0.686} &0.012\\
&$(200,1000,400)$ &2.82 &0.232 &0 &0.002 &0.2 &\textbf{0.774} &0.024\\
&$(250,1250,500)$ &2.904 &0.164 &0 &0 &0.13 &\textbf{0.836} &0.034\\
\hline
\end{tabular}
}
\label{table5}
\end{table}
The results reported in Tables~\ref{table4} and \ref{table5} show that $\hat q_n^{TVACLE}$ performs better overall than $\hat q_n^{WY}$.
In particular, for Model~\ref{model8}, $\hat q_n^{TVACLE}$ is superior to $\hat q_n^{WY}$ when the signals are relatively weak.
\section{Real data example}\label{sec6}
Consider a data set of the daily prices of 100 stocks (see \cite{li2017identifying}). This data set includes the stock prices of the S\&P500 from 2005-01-03 to 2006-12-29.
After removing incomplete records, every stock has 502 observations of log returns. Thus, $T=502$, $p=100$, and hence $c=p/T\approx0.2$.
Denote by $y_t\in \mathbb{R}^p$, $1\le t\le T$, the $t$-th observation of the log returns of these 100 stocks; we then obtain its lag-1 sample auto-covariance matrix $\hat \Sigma_y$ and the matrix $\hat {\mathbf{M}}_y=\hat \Sigma_y\hat \Sigma_y^{\top}$ as formulated in Section~\ref{sec4}. We use $\hat q^{TVACLE}$ and the estimator $\hat q^{LWY}$ of \cite{li2017identifying} to determine the number of factors. All parameters in these two methods share the same settings.
We can see that the two largest eigenvalues of $\hat {\mathbf{M}}_y$ are $7.17\times 10^{-7}$ and $2.01\times 10^{-7}$, and the third to the $40$-th eigenvalues are shown in Figure~\ref{figure2}.
\begin{figure}
\caption{\footnotesize Eigenvalues of $\hat {\mathbf{M}}_y$: the third to the $40$th largest.}
\label{figure2}
\end{figure}
Figure~\ref{figure3} shows that $\hat q^{LWY}=5$. However, as shown in Figure~\ref{figure2}, the gap between the 5th eigenvalue and the several following eigenvalues is evidently insignificant.
Note that $\hat q^{LWY}$ is based on the magnitudes of the next two consecutive ratios; if eigenvalue multiplicity occurs, $\hat q^{LWY}$ is likely to select a value smaller than the true number.
\begin{figure}
\caption{\footnotesize Ratios $\hat\lambda_{i+1}/\hat\lambda_{i}$ of the eigenvalues of $\hat {\mathbf{M}}_y$.}
\label{figure3}
\end{figure}
When the TVACLE is used, $\hat q^{TVACLE}=10$.
Figure~\ref{figure3} shows that the $11$th ratio is much larger than the $10$th ratio, although some later values become smaller. Note that in this example, $c\approx 0.2$ and the ridge is relatively small, so it does not fully dominate the differences between the eigenvalues; thus, some oscillating values remain after the 10th ratio.
We therefore believe that $\hat q^{LWY}$ neglects several factors and likely underestimates the true number. For a real data example, we usually cannot give a definitive answer. However, our method provides an estimate that may be relatively conservative but is useful, particularly in the initial stage of data analysis; otherwise, an excessively parsimonious model could lead to misleading conclusions.
\section{Concluding remarks}\label{sec7}
In this paper, we propose a valley-cliff criterion for spiked models. The method can be applied to other order determination problems in which the dimension is proportional to the sample size, such as those in sufficient dimension reduction, provided the corresponding asymptotics can be established. The method handles the case of a fixed order $q$; an extension to the case of diverging $q$ will be pursued in future work.
\vskip 14pt
\noindent {\large\bf Supplementary Materials}
\noindent Proofs and technical details are contained in the supplementary materials.
\par
\end{document} |
\begin{document}
\title{{\LARGE\bf Effective Maxwell's equations in general periodic
microstructures}}
\pagestyle{myheadings}
\thispagestyle{plain}
\markboth{B.\,Schweizer, M.\,Urban}{}
\begin{center}
\vskip4mm
\begin{minipage}[c]{0.86\textwidth}
{\small {\bfseries Abstract:} We study the time harmonic Maxwell
equations in a meta-material consisting of perfect conductors and
void space. The meta-material is assumed to be periodic with
period $\eta>0$; we study the behaviour of solutions $(E^{\eta}, H^{\eta})$
in the limit $\eta\to 0$ and derive an effective system. In
geometries with a non-trivial topology, the limit system implies
that certain components of the effective fields
vanish. We identify the corresponding effective system and can predict,
from topological properties of the meta-material, whether or not it permits the
propagation of waves.
\\[2mm]
{\bfseries Key-words:} Maxwell's equations, homogenization,
diffraction, periodic structure, meta-material}
\\[2mm]
{\bfseries MSC:} 35Q61, 35B27, 78M40, 78A45
\end{minipage}\\[3mm]
\end{center}
\section{Introduction}
We are interested in transmission properties of meta-materials. In
this context, a meta-material is a periodic assembly of perfect
conductors, and our question concerns the behaviour of electromagnetic
fields when the period of the meta-material tends to zero. We fix a
frequency $\omega>0$ and investigate solutions to the time harmonic
Maxwell equations. Denoting the period of the meta-material by
$\eta>0$, we study the behaviour of solutions $(E^{\eta}, H^{\eta})$ to the
system
\begin{subequations}\label{eq:MaxSysSeq-intro}
\begin{align}\label{eq:MaxSeq1-intro}
\operatorname{curl} \, E^{\eta} &= \phantom{-}\operatorname{i} \omega \mu_0 H^{\eta}
\quad\textrm{ in } \Omega\, ,\\
\label{eq:MaxSeq2-intro}
\operatorname{curl} \, H^{\eta} &= - \operatorname{i} \omega \varepsilon_0 E^{\eta} \quad \:
\textrm{ in } \Omega \setminus \overline{\Sigma}_{\eta}\, ,\\
\label{eq:MaxSeq3-intro}
E^{\eta} &= H^{\eta} = 0 \qquad\ \textrm{ in } \Sigma_{\eta}\, ,
\end{align}
\end{subequations}
in the limit $\eta\to 0$. In this model, we assume that the perfect
conductor fills the subset $\Sigma_\eta\subset \Omega$ of the domain
$\Omega\subset\mathbb{R}^3$.
In general, meta-materials for Maxwell's equations are described with
two periodic material parameters $\varepsilon$ and $\mu$ (permittivity and
permeability). We study here perfectly conducting inclusions, which
formally amount to setting $\varepsilon = \infty$. In this case, the electric
and the magnetic field vanish in the inclusions; see
\eqref{eq:MaxSeq3-intro}. The material parameters in the other equations are
given by $\varepsilon_0>0$ and $\mu_0>0$, the coefficients of vacuum.
Imposing \eqref{eq:MaxSeq1-intro} encodes boundary conditions: the
magnetic field $H$ has a vanishing normal component and the electric
field $E$ has vanishing tangential components on the boundary
$\partial\Sigma_\eta$.
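These boundary conditions can be made precise with jump relations; the following is a sketch of the standard argument (our summary, not spelled out in this form in the text). Since \eqref{eq:MaxSeq1-intro} is imposed distributionally in all of $\Omega$, the field $E^{\eta}$ lies in $H(\operatorname{curl} \,, \Omega)$ and $H^{\eta} = (\operatorname{i} \omega \mu_0)^{-1} \operatorname{curl} \, E^{\eta}$ is divergence-free, whence
\begin{equation*}
[E^{\eta} \times \nu] = 0 \quad \textrm{ and } \quad [H^{\eta} \cdot \nu] = 0 \qquad \textrm{ on } \partial\Sigma_{\eta}\, ,
\end{equation*}
where $\nu$ is the outer normal of $\Sigma_{\eta}$ and $[\,\cdot\,]$ denotes the jump across $\partial\Sigma_{\eta}$. Together with $E^{\eta} = H^{\eta} = 0$ in $\Sigma_{\eta}$, this yields the stated vanishing of the tangential components of $E^{\eta}$ and of the normal component of $H^{\eta}$.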
We ask: Can electromagnetic waves propagate in the periodic medium?
Are there components of the effective fields that necessarily vanish?
What is the effective system that describes the remaining components?
Our theory yields the following results as particular applications: In
a geometry with perfectly conducting plates, transmission through the
meta-material is possible in two directions. By contrast, in a geometry
with long and thin holes in the metal, no transmission is possible.
\subsection{Geometry and assumptions}\label{section:geometry and assumptions}
We are interested in studying general geometries
$\Sigma_\eta$. Nevertheless, we remain in the framework of standard
periodic homogenization, i.e., the set $\Sigma_\eta$ of inclusions is
locally periodic. The microscopic structure is given by a perfectly
conducting part $\Sigma \subset Y$ of a single periodicity cell $Y$,
where $Y \coloneqq [-1/2, 1/2]^3$.
We assume that the set $\Sigma$ is non-empty and open with a Lipschitz
boundary as a subset of the $3$-torus.
Our aim is to study electromagnetic waves in an open subset
$\Omega \subset \mathbb{R}^3$. The meta-material is located in a second domain
$R \subset \subset \Omega$. In $\Omega \setminus R$, we have relative
permeability and relative permittivity equal to unity.
The microscopic structure in $R$ is defined using indices $k \in \mathbb{Z}^3$
and shifted cubes $Y_k^{\eta} \coloneqq \eta (k + Y)$, where
$\eta > 0$. By $\mathcal{K}$ we denote the index set
$\mathcal{K} \coloneqq \set{ k \in \mathbb{Z}^3 \given Y_k^{\eta} \subset R}$. We define the
meta-material $\Sigma_{\eta}$ by
\begin{equation}\label{eq:definition of Sigma eta}
\Sigma_{\eta} \coloneqq \bigcup_{k \in \mathcal{K}} \eta (k +
\Sigma) \subset R\, .
\end{equation}
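For a concrete choice of $R$, the index set $\mathcal{K}$ can be enumerated directly. The following sketch (our illustration; it assumes an axis-aligned cube $R = [r_{\min}, r_{\max}]^3$, whereas the text allows a general domain $R$) selects all $k \in \mathbb{Z}^3$ with $Y_k^{\eta} \subset R$:

```python
import itertools
import math

def index_set_K(eta, r_min, r_max):
    """Enumerate K = {k in Z^3 : Y_k^eta subset R} for the cube
    R = [r_min, r_max]^3 (illustrative assumption; the text allows a
    general domain R). The shifted cell Y_k^eta spans
    eta*(k_i - 1/2) .. eta*(k_i + 1/2) along each axis i."""
    k_lo = math.floor(r_min / eta)   # coarse search window for each index
    k_hi = math.ceil(r_max / eta)
    return [
        k
        for k in itertools.product(range(k_lo, k_hi + 1), repeat=3)
        if all(eta * (ki - 0.5) >= r_min and eta * (ki + 0.5) <= r_max
               for ki in k)
    ]
```

For $\eta = 1/2$ and $R = [-1, 1]^3$, this yields the $27$ indices $k \in \{-1, 0, 1\}^3$.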
Even in the above periodic framework, quite general geometries and
topologies can be generated. The simplest non-trivial example occurs
if we study a cylinder $\Sigma \subset Y$ that connects two opposite
faces of $Y$; see Figure \ref {fig:the metal cylinder}. The cylinder $\Sigma$
generates a set $\Sigma_\textrm{e}ta$ that is the union of disjoint long and thin
fibers. In a similar way, we can generate the macroscopic geometry of
large metallic plates for which length and width are of macroscopic
size and the thickness is of order $\textrm{e}ta$; for the corresponding local
geometry $\Sigma$ see Figure \ref {fig:the metal plate}.
We investigate distributional solutions
$(E^{\eta}, H^{\eta}) \in L^2(\Omega; \mathbb{C}^3) \times L^2(\Omega; \mathbb{C}^3)$
to~\eqref{eq:MaxSysSeq-intro}. The number $\omega > 0$ denotes the
frequency, and $\mu_0, \varepsilon_0 > 0$ are the permeability and the
permittivity in vacuum, respectively.
We assume that we are given a sequence $(E^{\eta}, H^{\eta})_{\eta}$
of solutions to~\eqref{eq:MaxSysSeq-intro} that satisfies the
energy bound
\begin{equation}
\label{eq:energy-bound}
\sup_{\eta > 0}\int_{\Omega} \Big(\abs{H^{\eta}}^2 + \abs{E^{\eta}}^2 \Big)< \infty\, .
\end{equation}
If~\eqref{eq:energy-bound} holds, then, by reflexivity of
$L^2(\Omega; \mathbb{C}^3)$, we find two vector fields
$E, H \in L^2(\Omega; \mathbb{C}^3)$ and subsequences such that
$E^{\eta} \rightharpoonup E$ in $L^2(\Omega ; \mathbb{C}^3)$ and $H^{\eta} \rightharpoonup H$ in
$L^2(\Omega; \mathbb{C}^3)$. Due to the compactness properties of
two-scale convergence, we may additionally assume, for fields
$E_{0}, H_{0} \in L^2(\Omega \times Y; \mathbb{C}^3)$, the two-scale convergence
\begin{equation}\label{eq:(Eeta, Heta) converges in two scale to (E 0,
H 0)}
E^{\eta} \xrightharpoonup{\; 2 \;} E_0 \quad \textrm{ and } \quad H^{\eta} \xrightharpoonup{\; 2 \;} H_0\, .
\end{equation}
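For the reader's convenience, we recall the notion of two-scale convergence from \cite{Allaire1992} (stated here componentwise for vector fields): $u^{\eta} \xrightharpoonup{\; 2 \;} u_0$ means that
\begin{equation*}
\int_{\Omega} u^{\eta}(x) \, \psi\Big(x, \frac{x}{\eta}\Big) \, \mathrm{d}x \; \longrightarrow \; \int_{\Omega} \int_{Y} u_0(x, y) \, \psi(x, y) \, \mathrm{d}y \, \mathrm{d}x \qquad \textrm{ as } \eta \to 0
\end{equation*}
for every admissible test function, e.g.\ $\psi \in C_c^{\infty}\big(\Omega; C_{\sharp}(Y)\big)$.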
\subsection{Main results}
We obtain an effective system of equations that describes the limits
$E$ and $H$. The effective system depends on topological properties of
the microscopic geometry $\Sigma \subset Y$. We denote by
$\mathcal{N}_{\Sigma}\subset \set{1,2,3}$ those directions for which no curve in
$\Sigma$ exists that connects corresponding opposite faces of $Y$
(the notation $\mathcal{N}_{\Sigma}$ indicates that there is \enquote{no loop in $\Sigma$}; for a
precise definition see~\eqref{eq:index sets for H}). One of our results
is that the number of non-trivial components of the effective electric
field is given by $\abs{\mathcal{N}_{\Sigma}}$ (see~Proposition~\ref{proposition:
dimension of solution space to the cell problem of E}). Similarly,
the number of non-trivial components of the effective magnetic field
is given by $\abs{\mathcal{L}_{Y \setminus \overline{\Sigma}}}$, where $\mathcal{L}_{Y \setminus \overline{\Sigma}}\subset \set{1,2,3}$ are those
directions for which a curve in $Y\setminus \overline{\Sigma}$ exists
that connects corresponding opposite faces of $Y$ (the notation $\mathcal{L}_{Y \setminus \overline{\Sigma}}$ indicates
that there is a \enquote{loop in $Y \setminus \overline{\Sigma}$}; for
the result see Proposition~\ref{proposition:for every element of the
set L E there is a solution to the cell problem}).
With the two index sets $\mathcal{N}_{\Sigma}, \mathcal{L}_{Y \setminus \overline{\Sigma}} \subset \{1,2,3\}$, we can formulate
the effective system of Maxwell's equations. In the meta-material,
there holds
\begin{subequations}\label{eq:effective system - introduction}
\begin{align}
\operatorname{curl} \, \hat{E} &=\phantom{-} \operatorname{i} \omega \mu_0 \hat{\mu} \hat{H}\,,\\
(\operatorname{curl} \, \hat{H})_k &= -\operatorname{i} \omega \varepsilon_0 (\hat{\varepsilon} \hat{E})_k \qquad
\textrm{ for every } k \in \mathcal{N}_{\Sigma}\, , \\
\hat{E}_k &= 0 \qquad \qquad \qquad \: \:\: \: \textrm{ for every }
k \in \set{1,2,3} \setminus \mathcal{N}_{\Sigma}\, , \\
\hat{H}_k &= 0 \qquad \qquad \qquad \: \:\: \: \textrm{ for every }
k \in \set{1,2,3} \setminus \mathcal{L}_{Y \setminus \overline{\Sigma}} \, .
\end{align}
\end{subequations}
In this set of equations, the effective relative permittivity
$\hat{\varepsilon}$ and the effective relative permeability $\hat{\mu}$ are
defined through cell-problems. Our main result is the derivation of
these effective equations; see Theorem~\ref{theorem:macroscopic
equations} below. Essentially, the theorem states the following:
Let $(E^{\eta}, H^{\eta})_{\eta}$ be a bounded sequence of solutions
to~\eqref{eq:MaxSysSeq-intro} satisfying~\eqref{eq:energy-bound}, let
limit fields $(\hat{E}, \hat{H})$ be defined as weak and geometric
limits of $(E^{\eta}, H^{\eta})_{\eta}$, and let $\hat{\varepsilon}$ and
$\hat{\mu}$ be the effective coefficients defined by cell-problems
(see~\eqref{eq:definition effective permittivity and permeability}). In
this situation, the limit $(\hat{E}, \hat{H})$ is a solution to the
effective system, which coincides with~\eqref{eq:effective system -
introduction} in the meta-material.
Theorem~\ref{theorem:macroscopic equations} also specifies the
interface conditions along the boundary $\partial R$ of the
meta-material. The result allows us to
determine, by checking topological properties of $\Sigma$,
whether the meta-material supports propagating waves. To give an example:
In the case $\mathcal{N}_{\Sigma} = \emptyset$ (that is, $\Sigma$ connects all opposite
faces of $Y$), the electric field $\hat E$ necessarily vanishes
identically in the meta-material and waves cannot propagate.
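A complementary example: for the cylinder geometry of Figure~\ref{fig:the metal cylinder}, with the cylinder axis taken in direction $e_3$ (our reading of the figure; the axis direction is an assumption here), curves in $\Sigma$ connect opposite faces of $Y$ only in the axis direction, while the complement $Y \setminus \overline{\Sigma}$ admits loops in all three directions. Hence
\begin{equation*}
\mathcal{N}_{\Sigma} = \set{1,2} \qquad \textrm{ and } \qquad \mathcal{L}_{Y \setminus \overline{\Sigma}} = \set{1,2,3}\, ,
\end{equation*}
so that, in the meta-material, $\hat E_3 = 0$ while the components $\hat E_1, \hat E_2$ and all components of $\hat H$ may be non-trivial.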
Of particular interest are those cases in which some components of
$\hat E$ and/or $\hat H$ vanish while the other components satisfy
certain blocks of Maxwell's equations. This occurs, e.g., in wire and
in plate structures. Our analysis is much more general: the effect
occurs when the solution spaces to the cell-problems are not
three-dimensional but of lower dimension (a drop of dimension).
\subsection{Literature}
From the perspective of applications, our contribution is closely
related to \cite {BouchitteSchweizer-Plasmons}, which is concerned
with an interesting experimental observation: Light can propagate well
in a structure made of thin silver plates; even nearly perfect
transmission through such a sub-wavelength structure was
experimentally observed. The mathematical analysis of \cite
{BouchitteSchweizer-Plasmons} explains the effect with a resonance
phenomenon. While \cite{BouchitteSchweizer-Plasmons} is purely
two-dimensional, the present contribution investigates which
genuinely three-dimensional structures are capable of showing similar
transmission properties.
From the perspective of methods, we follow other contributions more
closely. We deal with the homogenization of Maxwell's equations in
periodic structures. This mathematical task already has some history:
The book \cite{ZhikovMR1329546} contains the homogenization of the
equations in a standard setting (i.e., periodic and uniformly bounded
coefficient sequences $\varepsilon_\eta$ and $\mu_\eta$); for this case see
also \cite{MR2029130}.
The first homogenization result for Maxwell's equations in a singular
periodic structure appeared in \cite{BouchitteSchweizer-Max}: Small
split rings with a large absolute value of $\varepsilon_\eta$ were analyzed,
and a limit system with effective permittivity $\varepsilon_0 \hat \varepsilon$ and
effective permeability $\mu_0 \hat \mu$ was derived. The key point is
that the coefficient $\hat\mu$ of the limit system can have a negative
real part, due to resonance of the micro-structure. A similar result
was obtained in \cite{BouchitteBourel2009} with a simpler local
geometry; the effect of a negative $\operatorname{Re} \hat\mu$ is obtained there
through Mie-resonance. An extension to flat rings was performed in
\cite{Lamacz-Schweizer-Max}. The construction of a negative index
material (with negative permittivity \emph{and} negative permeability)
was successfully achieved in \cite{Lamacz-Schweizer-2016} with the
additional inclusion of long and thin wires. For a recent overview we
refer to \cite{Schweizer-resonances-survey-2016}.
The step towards perfectly conducting materials was done in
\cite{Lipton-Schweizer}, in which~\eqref{eq:MaxSysSeq-intro} was also used.
The result of \cite {Lipton-Schweizer} is a limit system that takes
the usual form of Maxwell's equations, again with effective
permittivity $\varepsilon_0 \hat \varepsilon$, effective permeability
$\mu_0 \hat \mu$, and negative $\operatorname{Re} \hat\mu$. Once more, the negative
coefficient is possible since the periodic structure $\Sigma_\eta$ has
a singular (torus-like) geometry.
Compared to the results described above, the work at hand takes a
different perspective: We are not interested in a negative
$\operatorname{Re} \hat\mu$, but rather in whether or not certain
components of the effective fields have to vanish (due to geometrical
properties of the microstructure). If some components vanish, we want
to extract the equations for the remaining components. The effect of
vanishing components is always a result of geometries in which the
substructure $\Sigma$ of the periodicity cell $Y$ touches two opposite
faces of $Y$. We recall that such substructures also enabled the
effect of a negative index in \cite{Lamacz-Schweizer-2016}. However,
in all contributions mentioned above (apart from \cite
{BouchitteSchweizer-Plasmons} and \cite{Lamacz-Schweizer-2016}) the
resonant structure $\Sigma$ is assumed to be compactly contained in
the cell $Y$.
It is worth mentioning that the study of wires (as a particular
example of a periodic microstructure with macroscopic dimensions) has
a longer history. Bouchitté and Felbacq showed that wire structures
with extreme coefficient values can lead to the effect of a negative
effective permittivity; see \cite{MR2262964, MR1444123}. Related
wire-constructions have been analyzed by Chen and Lipton; see
\cite{ChenLipton2010, ChenLipton2013}.
Our results concern the scattering properties of periodic media. We
emphasize that, in contrast to many classical contributions, we treat
only the case that the period is small compared to the wave-length of
the incident wave (prescribed by the frequency $\omega$). Also in the
case that the period and the wave-length are of the same order, one
can observe interesting transmission properties, e.g., due to the
existence of guided modes in the periodic structure. The
corresponding results are known as \textrm{e}nquote{diffraction theory} or
\textrm{e}nquote{grating theory}. For a fundamental analysis of existence and
uniqueness questions in such a diffraction problem we mention \cite
{MR1273315}; see also \cite {MR1961652}. Regarding classical methods
we mention \cite {MR1160941}, where the transmission properties of a
periodically perturbed interface are studied by means of integral
methods. A more recent contribution regarding a similar periodic
interface is \cite {MR3335171}. Closer to our analysis is \cite
{MR1694448}, where a three-dimensional layer of a periodic material is
studied (the material is periodic in two directions); see also the
overview \cite {MR1335399}.
\subsection{Methods and organization}
We use the tool of two-scale convergence of~\cite{Allaire1992} and
consider the two-scale limits $E_0 = E_0(x, y)$ and $H_0=H_0(x, y)$ of
the fields $E^{\eta}$ and $H^{\eta}$. By standard arguments, we obtain cell
problems for $E_0(x, \cdot)$ and $H_0(x, \cdot)$. We then characterise
the solution spaces of these cell problems---that is, we determine
bases of the solution spaces in terms of the index sets $\mathcal{N}_{\Sigma}$ and
$\mathcal{L}_{Y \setminus \overline{\Sigma}}$. The crucial observation is the following: if the dimension of
one of the solution spaces is less than three, the standard procedure
to define homogenized coefficients no longer works. Hence the
form of the effective system is not clear. However, once the
homogenized coefficients are carefully defined, the derivation of the effective
system is rather standard; see~\cite{Lamacz-Schweizer-2016,
Lipton-Schweizer}. Note that in \cite{BouchitteSchweizer-Max,
Lamacz-Schweizer-Max, Lipton-Schweizer}, a full torus geometry was
considered; the fact that the complement of the torus is not simply
connected leads to a $4$-dimensional solution space in the cell
problem for $H$. We, however, are interested in the opposite effect:
geometries that generate solution spaces of dimension smaller than
$3$.
In Section~\ref{sec:preliminary geometric results} we introduce the
notions of simple Helmholtz domains, $k$-loops, and the geometric
average, and prove auxiliary results. The derivation of the cell
problems and the characterisation of their solution spaces is carried
out in Section~\ref{sec:cell problems}. In Section~\ref{sec:derivation
of the effective equations} we prove the main result, i.e., we
derive the effective system~\textrm{e}qref{eq:effective system -
introduction}. Section~\ref{section:examples} is devoted to the
discussion of some examples of microstructures.
\section{Preliminary geometric results}\label{sec:preliminary geometric results}
\subsection{Periodic functions, Helmholtz-domains, and $k$-loops}
Let $Y$ denote the closed cube $[-1/2, 1/2]^3$ in $\mathbb{R}^3$. We
define an equivalence relation $\backsim$ on $Y$ by identifying
opposite sides of the cube: $y_a \backsim y_b$ whenever
$y_a - y_b \in \mathbb{Z}^3$. The quotient space $Y / \backsim$ is
identified with the flat $3$-torus $\mathbb{T}^3$; the canonical projection
$Y \to \mathbb{T}^3$ is denoted by $\iota$.
A map $u \colon \mathbb{R}^3 \to \mathbb{C}^n$ is called \emph{$Y$-periodic} if
$u(\cdot + l) = u(\cdot)$ for all $l \in \mathbb{Z}^3$. For $m \in \mathbb{N} \cup
\set{\infty}$ and $n \in \mathbb{N}$, the function space $C_{\sharp}^m(Y ; \mathbb{C}^n)$
consists of the restrictions to $Y$ of $Y$-periodic maps $\mathbb{R}^3 \to \mathbb{C}^n$ of class $C^m$; we may identify
this function space with $C^m(\mathbb{T}^3; \mathbb{C}^n)$. Similarly, we define
\begin{equation*}
H_{\sharp}^m(Y ; \mathbb{C}^n) \coloneqq \set[\big]{u |_Y \colon Y \to \mathbb{C}^n \given u \in H^m_{\textrm{loc}}(\mathbb{R}^3; \mathbb{C}^n) \textrm{ is } Y \textrm{-periodic}} \, ,
\end{equation*}
which can be identified with $H^m(\mathbb{T}^3 ; \mathbb{C}^n)$.
We note that $L_{\sharp}^2(Y; \mathbb{C}^n) = H_{\sharp}^0(Y ; \mathbb{C}^n) = L^2(Y ;
\mathbb{C}^n)$.
For a subset $U \subset Y$ such that $\iota(U) \subset \mathbb{T}^3$ is open,
we set $ H^m_{\sharp}(U ; \mathbb{C}^n)\coloneqq H^m(\iota(U);
\mathbb{C}^n)$. We denote by $H_{\sharp}(\operatorname{curl} \,, U)$ and $H_{\sharp}(\operatorname{div} \,, U)$
the spaces of all vector fields $u \in L^2(\iota(U); \mathbb{C}^3)$
such that the distributional curl and the distributional
divergence satisfy $\operatorname{curl} \, u \in L^2(\iota(U); \mathbb{C}^3)$ and $\operatorname{div} \, u \in
L^2(\iota(U))$, respectively. For brevity, we write $C_{\sharp}^k(U)$,
$L_{\sharp}^2(U)$, and $H_{\sharp}^1(U)$ if no confusion about the
target space is possible.
\begin{definition}[Simple Helmholtz domain]
Let $U \subset Y$ be such that $\iota(U)$ is open in $\mathbb{T}^3$. We say
that $U$ is a \emph{simple Helmholtz domain} if for every vector
field $u \in L_{\sharp}^2(U; \mathbb{C}^3)$ with $\operatorname{curl} \, u = 0$ in $U$ there exist a potential $\Theta \in H_{\sharp}^1(U)$ and a constant $c_0 \in \mathbb{C}^3$ such that $u = \nabla \Theta + c_0$ in $U$.
\end{definition}
\begin{remark}\label{remark:constant in Helmholtz domain is not unique}
Note that, in general, the constant $c_0$ is not unique. Take, for
instance, $\Sigma \coloneqq B_r(0)$ with $r \in (0, 1/2)$. Then
$\Theta(y) \coloneqq \lambda y_k$ and $c_0 \coloneqq - \lambda e_k$
yields a representation of $u =0$ for every $\lambda \in \mathbb{C}$ and $k
\in \set{1,2,3}$. For a generalisation see Lemma~\ref{lemma:existence of potentials in case of no k-loop}.
\end{remark}
In what follows, we consider curves $\gamma \colon [0, 1] \to Y$ (not
necessarily continuous in $Y$) such that
$\iota \circ \gamma \colon [0, 1] \to \mathbb{T}^3$ is continuous (see
Figure~\ref{fig:k-loop althoug Sigma not connected} for a subset
$U = \Sigma_1 \cup \Sigma_2 \subset Y$ that admits a discontinuous
path $\gamma$ in $U$ so that $\iota \circ \gamma$ is continuous). For
such a curve there exists a lift $\tilde{\gamma}$; that is, a
continuous curve $\tilde{\gamma} \colon [0, 1] \to \mathbb{R}^3$ with
$\pi \circ \tilde{\gamma} = \iota \circ \gamma$, where $\pi$ denotes the universal
covering $\mathbb{R}^3 \to \mathbb{T}^3$, $x \mapsto x \bmod \mathbb{Z}^3$.
\begin{definition}[$k$-loop]
Let $U \subset Y$ be non-empty and such that $\iota(U) \subset \mathbb{T}^3$ is open. Let $e_1, e_2, e_3 \in \mathbb{R}^3$ be the standard basis
vectors, and let $k \in \set{1,2,3}$. We say that a map $\gamma \colon [0, 1] \to
\mathbb{T}^3$ is a \emph{$k$-loop in $\iota(U)$} if the path $ \gamma \colon [0, 1]
\to \iota(U)$ is continuous and piecewise continuously differentiable, $\gamma(1) = \gamma(0)$, and $\scalarproduct{\tilde{\gamma}(1) - \tilde{\gamma}(0)}{e_k} \neq 0$, where $\tilde{\gamma} \colon [0, 1] \to \mathbb{R}^3$ is a lift of $\gamma$.
\end{definition}
\begin{remark}
\textit{a)} For a lift $\tilde{\gamma}$ of the $k$-loop $\gamma$, we
have that $\tilde{\gamma}(1) - \tilde{\gamma}(0) \in \mathbb{Z}^3$ by
$\gamma(1) = \gamma(0)$.\\
\textit{b)} By abuse of notation, we refer to
$\gamma \colon [0,1] \to Y$ as a $k$-loop in $U$ when the curve
$\iota \circ \gamma \colon [0, 1] \to \iota(U)$ is a $k$-loop in $\iota(U)$.
\end{remark}
For a subset $U$ of $Y$, we introduce the following index sets:
\begin{subequations} \label{eq:index sets for H}
\begin{align}
\mathcal{L}_{U} &\coloneqq \set[\big]{k \in \set{1,2,3} \given \textrm{there is a $k$-loop in } U} \label{eq:index sets for H 1}\, ,\\
\mathcal{N}_{U} &\coloneqq \set[\big]{ k \in \set{1,2,3} \given \textrm{there is no $k$-loop in } U} \, . \label{eq:index sets for H 2}
\end{align}
\end{subequations}
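The index sets can also be computed for a voxelised cell: a $k$-loop corresponds to a cycle on the discrete torus whose lift has a non-zero displacement in direction $k$. The following sketch (our discrete illustration with $6$-neighbour connectivity; it approximates, but is not, the continuous definition above) performs a breadth-first search that tracks lift offsets and records the winding directions of the cycles it closes:

```python
from collections import deque

def loop_directions(U):
    """Directions k in {0, 1, 2} (0-based) that admit a 'k-loop' in the
    voxel set U on the discrete 3-torus.
    U: nested list, U[x][y][z] is True for occupied voxels, cubic shape.
    6-neighbour connectivity; a discrete sketch, not the continuous notion."""
    n = len(U)
    occupied = {(x, y, z)
                for x in range(n) for y in range(n) for z in range(n)
                if U[x][y][z]}
    offset = {}          # voxel -> lift offset in Z^3
    loops = set()
    for start in occupied:
        if start in offset:
            continue
        offset[start] = (0, 0, 0)
        queue = deque([start])
        while queue:
            v = queue.popleft()
            for axis in range(3):
                for step in (-1, 1):
                    w = list(v)
                    shift = [0, 0, 0]
                    w[axis] += step
                    if w[axis] < 0:           # wrap around the torus
                        w[axis] += n
                        shift[axis] = -1
                    elif w[axis] >= n:
                        w[axis] -= n
                        shift[axis] = 1
                    w = tuple(w)
                    if w not in occupied:
                        continue
                    new_off = tuple(offset[v][i] + shift[i] for i in range(3))
                    if w not in offset:
                        offset[w] = new_off
                        queue.append(w)
                    else:
                        # closing a cycle: the offset mismatch is its winding
                        for i in range(3):
                            if offset[w][i] != new_off[i]:
                                loops.add(i)
    return loops
```

A voxel column that wraps around the torus in direction $0$ yields the single loop direction $\{0\}$, while an isolated voxel yields the empty set.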
Note that $\mathcal{L}_U \, \dot{\cup} \, \mathcal{N}_U = \set{1,2,3}$. We
turn to the analysis of potentials defined on $U$.
\begin{lemma}\label{lemma:existence of potentials in case of no k-loop}
Let $U \subset Y$ be non-empty and such that $\iota(U) \subset \mathbb{T}^3$ is open. Assume further that $U$ has only finitely many connected components. Let $k$ be an element of $\set{1,2,3}$. If there is no $k$-loop in $U$, then there exists a potential $\Theta_k \in H_{\sharp}^1(U)$ such that $\nabla \Theta_k = e_k$.
\end{lemma}
\begin{proof}
We may assume that $\iota(U)$ is connected; otherwise each connected component is treated separately. We fix $k \in \set{1,2,3}$ and consider the two opposite faces
\begin{equation*}
\Gamma_{k}^{(l)} \coloneqq \set{y \in Y \given y_k =-1/2} \quad \textrm{ and } \quad \Gamma_{k}^{(r)} \coloneqq \set{y \in Y \given y_k = +1/2}\, .
\end{equation*}
\begin{figure}
\centering
\begin{subfigure}[t]{0.49\textwidth}
\hspace{1cm}
\begin{tikzpicture}[scale=3]
\draw[->] (-.6, -.6) -- (-.4, -.6);
\draw[->] (-.6, -.6) -- (-.6, -.4);
\coordinate (A) at (-0.5, -0.5);
\coordinate (B) at (0.5,-0.5);
\coordinate (C) at (0.5,0.5);
\coordinate (D) at (-0.5, 0.5);
\coordinate (E) at (-.15, -.5);
\coordinate (F) at (-.15, .5);
\coordinate (G) at (.15, .5);
\coordinate (H) at (.15, -.5);
\coordinate (I) at (.2, .2);
\draw[black] (A) -- (B) -- (C) -- (D) -- cycle;
\fill[gray!20!white] (A) -- (B) -- (C) -- (D) -- cycle;
\draw[black] (E) -- (F) -- (G) -- (H) -- cycle;
\fill[gray!75!white] (E) -- (F) -- (G) -- (H) -- cycle;
\node[] at (-.3, -.6) {$e_k$};
\node[] at (-.6, -.3) {$e_m$};
\node[] at (0, 0) {$U$};
\node[] at (-.7, 0) {$\Gamma_k^{(l)}$};
\node[] at (.7, 0) {$\Gamma_k^{(r)}$};
\end{tikzpicture}
\caption{ }
\label{fig:proof of technical lemma - first case}
\end{subfigure}
\begin{subfigure}[t]{0.49\textwidth}
\hspace{1cm}
\begin{tikzpicture}[scale = 3]
\draw[->] (-.6, -.6) -- (-.4, -.6);
\draw[->] (-.6, -.6) -- (-.6, -.4);
\coordinate (A) at (-0.5, -0.5);
\coordinate (B) at (0.5,-0.5);
\coordinate (C) at (0.5,0.5);
\coordinate (D) at (-0.5, 0.5);
\coordinate (E) at (-.5,-.4);
\coordinate (F) at (.5,.1);
\coordinate (G) at (.5, .4);
\coordinate (H) at (-.5, -.1);
\coordinate (I) at (-.5, .1);
\coordinate (J) at (-.5, .1);
\coordinate (K) at (.25, .5);
\coordinate (L) at (-.25, .5);
\coordinate (M) at (-.5, .4);
\coordinate (N) at (-.25, -.5);
\coordinate (N1) at (-.1, -.3);
\coordinate (O) at (.25, -.5);
\coordinate (P) at (.5, -.4);
\coordinate (P1) at (.3, -.35);
\coordinate (Q) at (.5, -.1);
\draw[black] (A) -- (B) -- (C) -- (D) -- cycle;
\fill[gray!20!white] (A) -- (B) -- (C) -- (D) -- cycle;
\draw[black] (E)--(F) -- (G) -- (H) -- cycle;
\fill[gray!75!white] (E)--(F) -- (G) -- (H) -- cycle;
\draw[black] (J) --(K) -- (L) -- (M) -- cycle;
\fill[gray!75!white] (J) --(K) -- (L) -- (M) -- cycle;
\draw[black] (N) to [out=30, in=200] (N1)to[out=30, in=100] (O);
\fill[gray!75!white] (N) to [out=30, in=200] (N1) to[out=30, in=100] (O);
\draw[black] (P) to [out=200, in=290] (P1) to [out=120,in=220] (Q);
\fill[gray!75!white](P) to [out=200, in=290] (P1) to [out=120,in=220] (Q);
\node[] at (-.3, -.6) {$e_k$};
\node[] at (-.6, -.3) {$e_m$};
\node[] at (-.25, .35) {$U_2$};
\node[] at (-.05, -.05) {$U_3$};
\node[] at (.4, -.3) {$U_4$};
\node[] at (0, -.42) {$U_1$};
\node[] at (-.7, 0) {$\Gamma_k^{(l)}$};
\node[] at (.7, 0) {$\Gamma_k^{(r)}$};
\end{tikzpicture}
\caption{ }
\label{fig:proof of technical lemma - second case}
\end{subfigure}
\caption{The sketches show the projection of the periodicity cell $Y$
onto the $e_k$-$e_m$-plane. We assume that the geometry is
independent of the third coordinate. (a) The set $U$ touches neither of the
boundaries $\Gamma_k^{(l)}$ and $\Gamma_k^{(r)}$, and hence $k \in
\mathcal{N}_U$. On the other hand, $m \in \mathcal{L}_U$ since an $m$-loop in
$U$ exists. (b) There are connected components of $U = U_1 \cup
\dots \cup U_4$ that touch $\Gamma_k^{(l)}$ and $\Gamma_k^{(r)}$,
but neither $k$-loops nor $m$-loops in $U$ exist. That is, $k, m
\in \mathcal{N}_U$.}
\label{fig:1}
\end{figure}
\textit{Idea of the proof.} By assumption, $U$ has only finitely
many connected components $U_1, \dotsc, U_N$. Assume that none of
the connected components touches the boundaries $\Gamma_k^{(l)}$ and
$\Gamma_k^{(r)}$; that is, $U_i \cap \Gamma_k^{(l)} = \emptyset$ and
$U_i \cap \Gamma_k^{(r)} = \emptyset$ for all
$i \in \set{1, \dotsc, N}$ (as in Figure~\ref{fig:proof of technical
lemma - first case}). In this case, we can define
$\Theta_k \colon U \to \mathbb{C}$ by $\Theta_k(y) \coloneqq y_k$ and obtain
a well-defined function $\Theta_k \in H_{\sharp}^1(U)$.
Let us now assume that there are connected components
$U_1, \dotsc, U_N$ such that $\iota(U_i \cup U_{i+1})$ is connected
for all $i \in \set{1, \dotsc, N-1}$. The potential
$\Theta_k \colon U_1 \cup \dots \cup U_N \to \mathbb{C}$ is defined as
follows: On $U_1$, we set $\Theta_k(y) \coloneqq y_k$. On $U_2$, we
define $\Theta_k(y) \coloneqq y_k + d_2$ for some $d_2 \in \mathbb{Z}$ such
that $\Theta_k$ is continuous on $\iota(U_1 \cup U_2)$. This
procedure can be continued until $\Theta_k$ is a continuous function
on $U_1 \cup \dots \cup U_N$; the continuity of $\Theta_k$ is a
consequence of the non-existence of a $k$-loop.
\textit{Rigorous proof.} We denote by $\pi \colon \mathbb{R}^3 \to \mathbb{T}^3$, $x \mapsto x \bmod \mathbb{Z}^3$ the universal covering of $\mathbb{T}^3$. A lift $ [0, 1] \to \mathbb{R}^3$ of a continuous path $\gamma \colon [0, 1] \to \iota(U)$ is denoted by $\tilde{\gamma}$; that is, $\gamma = \pi \circ \tilde{\gamma}$.
Fix a point $x_0 \in \iota(U)$. For every $x \in \iota(U)$ choose a
piecewise smooth path $\gamma \colon [0, 1] \to \iota(U)$ joining
$\gamma(0) = x_0$ and $\gamma(1) = x$. Let
$\tilde{\gamma} \colon [0,1] \to \mathbb{R}^3$ be a lift of $\gamma$, and
define $\Theta_k \colon \iota(U) \to \mathbb{C}$ by
$\Theta_k(x) \coloneqq \scalarproduct{\tilde{\gamma}(1) -
\tilde{\gamma}(0)}{e_k}$. Note that the non-existence of a $k$-loop
ensures that $\Theta_k$ is well defined. We further observe that the
difference $\tilde{\gamma}(1) - \tilde{\gamma}(0)$ is independent of
the chosen lift $\tilde{\gamma}$; indeed, for two lifts
$\tilde{\gamma}$ and $\tilde{\delta} $ of $\gamma$, there is
$l \in \mathbb{Z}^3$ such that $\tilde{\gamma} = \tilde{\delta} + l$.
\end{proof}
\begin{corollary}\label{corollary:helmholtz decomposition with vanishing components of c_0}
Let $U \subset Y$ be a simple Helmholtz domain. If $u \in L_{\sharp}^2(Y ;
\mathbb{C}^3)$ satisfies $\operatorname{curl} \, u = 0$ in $U$, then there are a potential $\Theta \in H_{\sharp}^1(U)$ and a constant $c_0 \in \mathbb{C}^3$ such that $u = \nabla \Theta + c_0$ in $U$ and $\scalarproduct{c_0}{e_k} = 0$ for every $k\in \mathcal{N}_U$.
\end{corollary}
\begin{proof}
By the very definition of a simple Helmholtz domain, we find a potential $\Phi \in H_{\sharp}^1(U)$ and a constant $d_0 \in \mathbb{C}^3$ such that $u = \nabla \Phi + d_0$ in $U$. Due to Lemma~\ref{lemma:existence of potentials in case of no k-loop}, we find for every $k \in \mathcal{N}_U$ a potential $\Theta_k \in H_{\sharp}^1(U)$ such that $\nabla \Theta_k = e_k$. Setting $\Theta \coloneqq \Phi + \sum_{k \in \mathcal{N}_U} \scalarproduct{d_0}{e_k} \Theta_k$ provides us with an element of $H_{\sharp}^1(U)$. Moreover,
\begin{equation*}
\nabla \Theta = u - d_0 + \sum_{k \in \mathcal{N}_U} \scalarproduct{d_0}{e_k}e_k = u - \sum_{k \in \mathcal{L}_U} \scalarproduct{d_0}{e_k}e_k\, .
\end{equation*}
Setting $c_0 \coloneqq \sum_{k \in \mathcal{L}_U}
\scalarproduct{d_0}{e_k}e_k$, we find the desired representation of $u$.
\end{proof}
In the following, we need line integrals of curl-free $L_{\sharp}^2(Y; \mathbb{C}^3)$-vector fields.
\begin{LemmaAndDefinition}[Line integral]\label{PropositonAndDefinition:line integral}
Let $U \subset Y$ be such that $\iota(U)$ is an open subset of $\mathbb{T}^3$ with Lipschitz boundary. Assume that $\gamma \colon [0, 1] \to Y$ is such that $\iota \circ \gamma$ is a $k$-loop in $\iota(U)$. There exists a unique linear and continuous map
\begin{equation*}
I_{\gamma} \colon \set[\big]{u \in L_{\sharp}^2(Y; \mathbb{C}^3) \given \operatorname{curl} u = 0 \textrm{ in } U} \to \mathbb{C} \, , \quad u \mapsto I_{\gamma}(u)
\end{equation*}
such that $I_{\gamma}(u)$ coincides with the usual line integral
\begin{equation}\label{e:definition usual line integral}
I_{\gamma}(u) = \lineint{u} \coloneqq \int_0^1 \scalarproduct[\big]{u\big(\gamma(t)\big)}{\dot{\gamma}(t)} \, \mathrm{d} t\, ,
\end{equation}
for fields $u \in C_{\sharp}^1(Y; \mathbb{C}^3)$ that are curl free in $U$.
The map $I_{\gamma}$ is called the \emph{line integral of $u$ along $\gamma$}, and we write, by abuse of notation, $\lineint{u}$ instead of $I_{\gamma}(u)$ for all $u \in L_{\sharp}^2(Y; \mathbb{C}^3)$ that are curl free in $U$.
\end{LemmaAndDefinition}
\begin{proof}[Idea of a proof]
The space $V \coloneqq \set{u \in C^1_{\sharp}(Y; \mathbb{C}^3) \given \operatorname{curl} u = 0 \textrm{ in } U}$ is dense in $X \coloneqq \set{u \in L_{\sharp}^2(Y; \mathbb{C}^3) \given \operatorname{curl} u = 0 \textrm{ in } U}$, which can be shown with a sequence of mollifiers $(\rho_{\eta})_{\eta}$: setting $u_{\eta} \coloneqq u * \rho_{\eta}$ provides us with a family $(u_{\eta})_{\eta}$ in $V$ with $u_{\eta} \to u$ in $L_{\sharp}^2(Y; \mathbb{C}^3)$. The map $\tilde{I}_{\gamma} \colon V \to \mathbb{C}$ defined by $\tilde{I}_{\gamma}(u) \coloneqq \lineint{u}$ is linear and continuous (because $u$ is curl free in $U$); by density of $V$ in $X$, the claim follows.
\end{proof}
We note the following: If $\gamma$ is a $k$-loop in $U$ and
$\tilde{\gamma}$ is one of its lifts, then
$\lineint{(u \circ \iota)} = \int_{\tilde{\gamma}} u$ for all fields
$u \in C_{\sharp}^1(U; \mathbb{C}^3)$. Indeed,
$\dot{\tilde{\gamma}} = \dot{\gamma}$, and
$u \circ \iota^{-1} \circ \gamma = u \circ \tilde{\gamma}$ for a
periodic function $u$. In particular, the line integral along $\gamma$ and
the line integral along $\tilde{\gamma}$ coincide.
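The mechanism behind this observation can be checked numerically. The following sketch (not part of the text; the periodic potential $\Theta$ and the constant $c_0$ are hypothetical choices for illustration) integrates a curl-free field $u = \nabla\Theta + c_0$ along the edge path $\gamma_1(t) = (t - 1/2, -1/2, -1/2)$: the gradient part drops out by periodicity, and only $\scalarproduct{c_0}{e_1}$ survives.

```python
import numpy as np

# Hypothetical example: u = grad(Theta) + c0 on Y = [-1/2, 1/2)^3 with
# Theta(x) = sin(2*pi*x1) * cos(2*pi*x2), a smooth Y-periodic potential.
c0 = np.array([0.7, -0.2, 1.3])
two_pi = 2.0 * np.pi

def u(x):
    # gradient of Theta plus the constant c0
    g = np.array([
        two_pi * np.cos(two_pi * x[0]) * np.cos(two_pi * x[1]),
        -two_pi * np.sin(two_pi * x[0]) * np.sin(two_pi * x[1]),
        0.0,
    ])
    return g + c0

# Line integral along gamma_1(t) = (t - 1/2, -1/2, -1/2); gamma_1'(t) = e_1.
# Midpoint rule over one full period (spectrally accurate for periodic data).
N = 2000
ts = (np.arange(N) + 0.5) / N
vals = np.array([u(np.array([t - 0.5, -0.5, -0.5]))[0] for t in ts])
I = float(vals.mean())

print(I)  # close to <c0, e_1> = 0.7; the gradient part integrates to zero
```

The same computation with any other periodic $\Theta$ yields the same value, which is the well-definedness used throughout this section.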
\subsection{The geometric average}
The notion of a geometric average was first introduced by Bouchitté, Bourel, and Felbacq in~\cite{BouchitteBourel2009}. The notion turned out to be very useful and was extended in~\cite{Lipton-Schweizer} to more general geometries. Although we focus on simple Helmholtz domains in the main part of this work, we define the geometric average for general geometries.
In the subsequent definition of a geometric average, we need three
special curves $\gamma_1, \gamma_2, \gamma_3 \colon [0, 1] \to Y$,
which represent paths along the edges---that is, $\gamma_1(t)
\coloneqq (t - 1/2, -1/2, -1/2)$, $\gamma_{2}(t) \coloneqq (-1/2,
t-1/2, -1/2)$, and $\gamma_3(t) \coloneqq (-1/2, -1/2, t-1/2)$. We
furthermore use the index set $\mathcal{L}_U$ defined in~\eqref{eq:index sets for H 1}.
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\linewidth}
\hspace{1cm}
\begin{tikzpicture}[scale=3]
\draw[->] (-.6, -.6) -- (-.4, -.6);
\draw[->] (-.6, -.6) -- (-.6, -.4);
\coordinate (A) at (-0.5, -0.5);
\coordinate (B) at (0.5,-0.5);
\coordinate (C) at (0.5,0.5);
\coordinate (D) at (-0.5, 0.5);
\coordinate (E) at (-.5, -.1);
\coordinate (F) at (.1, .5);
\coordinate (G) at (-.5, .1);
\coordinate (H) at (-.1, .5);
\coordinate (I) at (0.1, -.5);
\coordinate (J) at (.5, -.1);
\coordinate (K) at (.5, .1);
\coordinate (L) at (-.1,-.5);
\draw[black] (A) -- (B) -- (C) -- (D) -- cycle;
\fill[gray!20!white] (A) -- (B) -- (C) -- (D) -- cycle;
\draw[black] (E) to [out=0, in=270] (F) to (H) to [out=270, in=0] (G);
\fill[gray!75!white] (E) to [out=0, in=270] (F) to (H) to [out=270, in=0] (G);
\draw[black] (I) to [out=90, in=180] (J) to (K) to [out=180, in=90] (L);
\fill[gray!75!white] (I) to [out=90, in=180] (J) to (K) to [out=180, in=90] (L);
\node[] at (-.3, -.6) {$e_k$};
\node[] at (-.6, -.3) {$e_m$};
\node[] at (-.7, 0) {$\Gamma_k^{(l)}$};
\node[] at (.7, 0) {$\Gamma_k^{(r)}$};
\node[] at (-.09, .2) {$U_1$};
\node[] at (.07, -.25) {$U_2$};
\end{tikzpicture}
\caption{ }
\label{fig:k-loop althoug Sigma not connected}
\end{subfigure}
\begin{subfigure}[b]{0.49\linewidth}
\hspace{1.3cm}
\begin{tikzpicture}[scale = 3]
\draw[->] (-.6, -.6) -- (-.4, -.6);
\draw[->] (-.6, -.6) -- (-.6, -.4);
\coordinate (A) at (-0.5, -0.5);
\coordinate (B) at (0.5,-0.5);
\coordinate (C) at (0.5,0.5);
\coordinate (D) at (-0.5, 0.5);
\coordinate (E) at (-.5,-.4);
\coordinate (F) at (.5,.1);
\coordinate (G) at (.5, .4);
\coordinate (H) at (-.5, -.1);
\coordinate (I) at (-.5, .1);
\coordinate (J) at (-.5, .1);
\coordinate (K) at (.25, .5);
\coordinate (L) at (-.25, .5);
\coordinate (M) at (-.5, .4);
\coordinate (N) at (-.25, -.5);
\coordinate (O) at (.25, -.5);
\coordinate (P) at (.5, -.4);
\coordinate (P1) at (.3, -.35);
\coordinate (Q) at (.5, -.1);
\draw[black] (A) -- (B) -- (C) -- (D) -- cycle;
\fill[gray!20!white] (A) -- (B) -- (C) -- (D) -- cycle;
\draw[black] (E) -- (F) -- (G) -- (H) -- cycle;
\fill[gray!75!white] (E) -- (F) -- (G) -- (H) -- cycle;
\draw[black] (J) -- (K) -- (L) -- (M) -- cycle;
\fill[gray!75!white] (J) -- (K) -- (L) -- (M) -- cycle;
\draw[black] (N) to (Q);
\fill[gray!75!white] (N) to (O);
\draw[black] (P) -- (O) -- (N) -- (Q) -- cycle;
\fill[gray!75!white] (P) -- (O) -- (N) -- (Q) -- cycle;
\node[] at (-.3, -.6) {$e_k$};
\node[] at (-.6, -.3) {$e_m$};
\node[] at (-.25, .35) {$U_2$};
\node[] at (-.09, -.05) {$U_1$};
\node[] at (.2, -.4) {$U_3$};
\end{tikzpicture}
\caption{ }
\label{fig:there is a discontinuous k-loop}
\end{subfigure}
\caption{ (a) There is a $k$-loop in $U = U_1
\cup U_2$ connecting $\Gamma_k^{(l)}$ and $\Gamma_k^{(r)}$ although $U$ is not
connected. (b) There is a $k$-loop $\gamma$ in $U$. Note that
$\lineint{u} = 3 (\oint u)_k$ for all fields $u \in L_{\sharp}^2(Y;
\mathbb{C}^3)$
that are curl free in $U$.}
\end{figure}
\begin{definition}[Geometric average]\label{definition:geometric average}
Let $U \subset Y$ be non-empty and such that $\iota(U) \subset \mathbb{T}^3$
is open. Assume $u \colon U \to \mathbb{C}^3$ is an $L_{\sharp}^2(U;
\mathbb{C}^3)$-vector field that is curl free in $\iota(U)$. We define its \emph{geometric average} $\oint u \in \mathbb{C}^3$ as follows:
\begin{enumerate}
\item[1)] If $U$ is a simple Helmholtz domain, then the vector field
$u$ can be written as $u = \nabla \Theta + c_0$, where $\Theta \in
H_{\sharp}^1(U)$ and $c_0 \in \mathbb{C}^3$. In this case, for $k \in
\set{1,2,3}$, we set
\begin{equation*}
\bigg( \oint u \bigg)_k \coloneqq
\begin{cases}
\scalarproduct{c_0}{e_k} & \textrm{ for } k \in \mathcal{L}_U \, , \\
0 & \textrm{ otherwise}\, .
\end{cases}
\end{equation*}
\item[2)] If for all $k \in \set{1,2,3}$ the path $\gamma_k$ along the
edge is a $k$-loop in $U$, then, for $k \in \set{1,2,3}$, we set
\begin{equation*}
\bigg( \oint u \bigg)_k \coloneqq \int_{\gamma_k} u \, .
\end{equation*}
\end{enumerate}
\end{definition}
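The second branch of the definition is entirely computable: each component of $\oint u$ is a line integral along one edge path. The following sketch (an illustration with a hypothetical test field, not taken from the text) evaluates $(\oint u)_k = \int_{\gamma_k} u$ for a field $u = \nabla\Theta + c_0$ that is curl free in all of $Y$, and recovers $c_0$ componentwise.

```python
import numpy as np

# Hypothetical test field: u = grad(Theta) + c0 on Y = [-1/2, 1/2)^3,
# Theta(x) = sin(2*pi*x1) * cos(2*pi*x3) (smooth and Y-periodic).
c0 = np.array([0.7, -0.2, 1.3])
two_pi = 2.0 * np.pi

def u(x):
    g = np.array([
        two_pi * np.cos(two_pi * x[0]) * np.cos(two_pi * x[2]),
        0.0,
        -two_pi * np.sin(two_pi * x[0]) * np.sin(two_pi * x[2]),
    ])
    return g + c0

def geometric_average(u, n=2000):
    # Branch 2): (oint u)_k = int_{gamma_k} u with gamma_k along the k-th edge,
    # gamma_k'(t) = e_k; midpoint rule over the full period.
    ts = (np.arange(n) + 0.5) / n - 0.5
    avg = np.zeros(3)
    for k in range(3):
        for t in ts:
            x = np.array([-0.5, -0.5, -0.5])
            x[k] = t
            avg[k] += u(x)[k] / n
    return avg

print(geometric_average(u))  # close to c0 = [0.7, -0.2, 1.3]
```

The gradient part integrates to zero along each closed edge path, so only the constant $c_0$ is seen, in line with part 1) of the definition.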
In later sections, we consider fields $u \colon Y \to \mathbb{C}^3$ that are curl free in $\iota(U)$ and vanish in $Y \setminus \overline{U}$, where $U \subset Y$ is non-empty and proper. To define the geometric average of such fields, we restrict $u$ to the subset $U$ and apply the above definition.
\begin{remark}[The geometric average is well defined]
\textit{a)} Let $U$ be a simple Helmholtz domain. Fix $k \in \mathcal{L}_U$, and let $\gamma \colon [0,1]
\to \mathbb{T}^3$ be a $k$-loop in $\iota(U)$. Using that $u = \nabla \Theta + c_0$ in
$U$ as well as the periodicity of $\Theta$, we find that
\begin{equation*}
\lineint{u} = \lineint{c_0} = \scalarproduct{c_0}{e_k}
\scalarproduct{\tilde{\gamma}(1) - \tilde{\gamma}(0)}{e_k}\, ,
\end{equation*}
and hence the number $\scalarproduct{c_0}{e_k}$ in \textit{1)} is well defined (despite our observation in Remark~\ref{remark:constant in Helmholtz domain is not unique}).
\textit{b)} The two definitions \textit{1)} and \textit{2)} coincide
when both can be applied. To see this, fix an index $k \in
\set{1,2,3}$. The domain $U$ is a simple Helmholtz domain, and hence
we find a potential $\Theta \in H_{\sharp}^1(U)$ and a constant $c_0
\in \mathbb{C}^3$ such that $u = \nabla \Theta + c_0$ in $U$. We may assume
that $\Theta \in C_{\sharp}^1(U)$; otherwise we approximate by smooth
functions. We then find, for the path $\gamma_k$ along the edge, that
\begin{equation*}
\int_{\gamma_k} u
= \int_{\gamma_k} \nabla \Theta +
\scalarproduct{c_0}{e_k}
= \Theta(\gamma_k(1)) - \Theta(\gamma_k(0)) + \scalarproduct{c_0}{e_k}
= \scalarproduct{c_0}{e_k}\, .
\end{equation*}
\end{remark}
\begin{remark}
\textit{a)} Let $U$ be a simple Helmholtz domain, and let $\gamma$
be a $k$-loop in $U$. We remark that $(\oint u)_k \neq \lineint{u}$
in general. To give an example, let $\tilde{\gamma}$ be a lift of
the $k$-loop $\gamma$ that travels the distance $2$ in the $k$th
direction, that is,
$\scalarproduct{\tilde{\gamma}(1) - \tilde{\gamma}(0)}{e_k} = 2$
(cf.\ Figure~\ref{fig:there is a discontinuous k-loop}, where the distance is $3$). In this case,
$\lineint{u} = 2 (\oint u)_k$. Nevertheless, there always holds
$\lineint{u} = \lambda_k (\oint u)_k$ for some
$\lambda_k \in \mathbb{Z} \setminus \set{0}$.
\textit{b)} There are domains to which definition \textit{1)} applies, but definition \textit{2)} does not. Indeed, $U \coloneqq B_r(0)$ is a simple Helmholtz domain for $r \in (0, 1/2)$. However, $\gamma_k(t) \notin U$ for all $t \in [0, 1]$ and all $k \in \set{1,2,3}$.
\textit{c)} On the other hand, choosing $Y \setminus U$ to be a $3$-dimensional full torus $\mathbb{S}^1 \times \mathbb{D}^2 \subset \subset Y$, we find that $\gamma_1$, $\gamma_2$, and $\gamma_3$ are $1$-, $2$-, and $3$-loops in $U$, respectively. The domain $U$, however, is not a simple Helmholtz domain.
\end{remark}
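Part \textit{a)} of the remark can be illustrated numerically: for a lift that travels the distance $2$ in the $k$th direction, the line integral doubles the geometric average. The sketch below (hypothetical field and lift, for illustration only) uses the lift $\tilde{\gamma}(t) = (2t - 1/2, -1/2, -1/2)$, so $\lambda_1 = 2$.

```python
import numpy as np

# Hypothetical curl-free periodic field: first component of u = grad(Theta) + c0
# with Theta(x) = sin(2*pi*x1) * cos(2*pi*x2); here (oint u)_1 = c0[0].
c0 = np.array([0.7, -0.2, 1.3])
two_pi = 2.0 * np.pi

def u1(x):
    # first component of grad(Theta) plus <c0, e_1>
    return two_pi * np.cos(two_pi * x[0]) * np.cos(two_pi * x[1]) + c0[0]

# Lift gamma_tilde(t) = (2t - 1/2, -1/2, -1/2): it wraps twice around the torus
# in the first direction, and gamma_tilde'(t) = 2 e_1.
n = 4000
ts = (np.arange(n) + 0.5) / n
vals = [2.0 * u1(np.array([2.0 * t - 0.5, -0.5, -0.5])) for t in ts]
I = float(np.mean(vals))

print(I)  # close to 2 * (oint u)_1 = 2 * 0.7 = 1.4
```

Replacing the factor $2$ in the lift by any nonzero integer $\lambda_1$ scales the result accordingly, matching $\lineint{u} = \lambda_k (\oint u)_k$.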
\textbf{Properties of the geometric average.} We introduce the function space
\begin{equation}\label{eq:definition of X}
X \coloneqq \set{ \varphi \in L_{\sharp}^2(Y; \mathbb{C}^3) \given \varphi = 0
\textrm{ in } Y \setminus \overline{U} \textrm{ and } \operatorname{div} \varphi = 0 \textrm{ in } Y }\, .
\end{equation}
For a simple Helmholtz domain $U$ and under slightly stronger assumptions on the vector field $u \colon Y \to \mathbb{C}^3$, Bouchitt\'e, Bourel, and Felbacq~\cite{BouchitteBourel2009} define the geometric average $\oint u$ by the identity~\eqref{eq:identity for geometric average} below.
We show that our definition and theirs agree when both can be applied.
\begin{lemma}\label{lemma:both notions of geometric average coincide}
Let $U \subset Y$ be a simple Helmholtz domain. If $u \colon Y \to \mathbb{C}^3$ is a vector field of class $L_{\sharp}^2(Y; \mathbb{C}^3)$ that is curl free in $U$, then the identity
\begin{equation}\label{eq:identity for geometric average}
\int_Y \scalarproduct{u}{\varphi} = \scalarproduct[\bigg]{\oint u}{\int_Y \varphi}
\end{equation}
holds for all $\varphi \in X$, where the function space $X$ is defined in~\eqref{eq:definition of X}.
\end{lemma}
\begin{proof}
As $U$ is a simple Helmholtz domain, by Corollary~\ref{corollary:helmholtz decomposition with vanishing components of c_0}, we find a potential $\Theta \in H_{\sharp}^1(U)$ and a constant $c_0 \in \mathbb{C}^3$ such that $u = \nabla \Theta + c_0$ in $U$ and $\scalarproduct{c_0}{e_k} = 0$ for all $k \in \mathcal{N}_U$. Substituting this decomposition into the left-hand side of~\eqref{eq:identity for geometric average}, we find that
\begin{equation*}
\int_Y \scalarproduct{u}{\varphi} = \int_U \scalarproduct{\nabla \Theta}{\varphi} + \int_U \scalarproduct{c_0}{\varphi} = \int_U \scalarproduct{\nabla \Theta}{\varphi} + \sum_{k \in \mathcal{L}_U} \scalarproduct{c_0}{e_k} \scalarproduct[\bigg]{e_k}{\int_Y \varphi}\, .
\end{equation*}
Note that the function space $X$ is a subset of $H_{\sharp}(\operatorname{div}, Y)$. We are thus allowed to integrate by parts in the first integral on the right-hand side. Using that $\varphi$ is divergence free in $Y$ and vanishes in $Y \setminus \overline{U}$, we find that
\begin{equation*}
\int_Y \scalarproduct{u}{\varphi} = \sum_{k \in \mathcal{L}_U}
\scalarproduct{c_0}{e_k} \scalarproduct[\bigg]{e_k}{\int_Y \varphi}
= \scalarproduct[\bigg]{\oint u}{\int_Y \varphi}\, ,
\end{equation*}
and hence the claimed equality holds.
\end{proof}
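The identity of the lemma can be tested on a grid. The following toy check (not from the text; $\Theta$, $\psi$, and the choice of $U$ essentially equal to the whole cell are hypothetical simplifications) uses $u = \nabla\Theta + c_0$, so that $\oint u = c_0$, and the periodic divergence-free test field $\varphi = e_1 + \operatorname{curl}(\psi\, e_3)$.

```python
import numpy as np

two_pi = 2.0 * np.pi
N = 32
s = (np.arange(N) + 0.5) / N - 0.5            # midpoint grid on [-1/2, 1/2)
x1, x2, x3 = np.meshgrid(s, s, s, indexing="ij")

c0 = np.array([0.7, -0.2, 1.3])

# u = grad(Theta) + c0 with Theta = sin(2*pi*x1) * cos(2*pi*x2) (hypothetical)
u = np.stack([
    two_pi * np.cos(two_pi * x1) * np.cos(two_pi * x2) + c0[0],
    -two_pi * np.sin(two_pi * x1) * np.sin(two_pi * x2) + c0[1],
    np.full_like(x1, c0[2]),
])

# phi = e_1 + curl(psi e_3) = e_1 + (d2 psi, -d1 psi, 0),
# psi = sin(2*pi*x1) * sin(2*pi*x2); phi is periodic and divergence free.
phi = np.stack([
    1.0 + two_pi * np.sin(two_pi * x1) * np.cos(two_pi * x2),
    -two_pi * np.cos(two_pi * x1) * np.sin(two_pi * x2),
    np.zeros_like(x1),
])

vol = 1.0 / N**3
lhs = np.sum(u * phi) * vol                           # int_Y <u, phi>
rhs = np.dot(c0, np.sum(phi, axis=(1, 2, 3)) * vol)   # <oint u, int_Y phi>

print(lhs, rhs)  # both close to <c0, e_1> = 0.7
```

The gradient part of $u$ contributes nothing because $\varphi$ is divergence free, exactly as in the integration-by-parts step of the proof.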
When we derive effective equations, we need the following property of
the geometric average, which is a consequence of
Lemma~\ref{lemma:both notions of geometric average coincide}.
\begin{corollary}\label{corollary:property of geometric average}
Assume that $U \subset Y$ is a simple Helmholtz domain, and let $u \colon Y \to \mathbb{C}^3$ be a vector field of class $L_{\sharp}^2(Y; \mathbb{C}^3)$ that is curl free in $U$. If $E \colon Y \to \mathbb{C}^3$ is another field of class $L_{\sharp}^2(Y; \mathbb{C}^3)$ that is curl free and that vanishes in $Y \setminus \overline{U}$, then
\begin{equation}\label{eq:property of the geometric average}
\int_{U} (u \wedge E) = \bigg( \oint u \bigg) \wedge \bigg( \int_{Y} E \bigg) \, .
\end{equation}
\end{corollary}
\begin{proof}
Fix $k \in \set{1,2,3}$. Defining the field $\varphi \colon Y \to \mathbb{C}^3$ by $\varphi \coloneqq E \wedge e_k$ provides us with an element of $X$; indeed, $\varphi$ is of class $L_{\sharp}^2(Y; \mathbb{C}^3)$ and vanishes in $Y \setminus \overline{U}$. Moreover, for every $\phi \in C_c^{\infty}(Y)$, we calculate that
\begin{equation*}
\int_Y \scalarproduct{\varphi}{\nabla \phi}
= - \int_Y \scalarproduct{E}{\nabla \phi \wedge e_k}
= - \int_Y \scalarproduct{E}{\operatorname{curl}(\phi \, e_k)} = 0.
\end{equation*}
That is, $\varphi$ is divergence free in $Y$. We can thus apply Lemma~\ref{lemma:both notions of geometric average coincide} and obtain that
\begin{equation*}
\int_{U} \scalarproduct{u \wedge E}{e_k}
= \int_{Y} \scalarproduct{u}{\varphi}
= \scalarproduct[\bigg]{\oint u}{\int_Y (E \wedge e_k)}
= \scalarproduct[\bigg]{\oint u}{\bigg(\int_Y E\bigg) \wedge e_k}\, .
\end{equation*}
As $k \in \set{1,2,3}$ was chosen arbitrarily, this yields~\eqref{eq:property of the geometric average}.
\end{proof}
\begin{remark}
The statement of the corollary remains true when we replace the assumption that $U$ is a simple Helmholtz domain by the assumption that $\overline{U} \cap \partial Y = \emptyset$. The geometric average $\oint u$ is then given by the second part of Definition~\ref{definition:geometric average}. A proof of the corollary in this situation is given in~\cite{Lipton-Schweizer}.
\end{remark}
\section{Cell problems and their analysis}\label{sec:cell problems}
In this section, we study sequences $(E^{\eta})_{\eta}$ and
$(H^{\eta})_{\eta}$ of solutions to~\eqref{eq:MaxSysSeq-intro} and their two-scale limits $E_0$
and $H_0$.
\subsection{Cell problem for $E_0$}
\begin{lemma}[Cell problem for $E_0$]\label{lemma:cell problem E_0}
Let $R \subset \subset \Omega \subset \mathbb{R}^3$ and $\Sigma \subset
Y$ be as in Section~\ref{section:geometry and assumptions}, and let
$(E^{\eta}, H^{\eta})_{\eta}$ be a sequence of solutions
to~\eqref{eq:MaxSysSeq-intro} that satisfies the
energy bound~\eqref{eq:energy-bound}. The
two-scale limit $E_0 \in L^2(\Omega \times Y; \mathbb{C}^3)$ satisfies the following:
\begin{enumerate}
\item[i)]
For almost all $x \in R$, the field $E_0 = E_0(x, \cdot)$ is an element of $H_{\sharp}(\operatorname{curl}, Y)$ and a distributional solution to
\begin{subequations}\label{eq:cell problem E}
\begin{align}
\operatorname{curl}_y E_0 &= 0 \quad \textrm{ in } Y\, , \label{eq:cell problem E 1}\\
\operatorname{div}_y E_0 &= 0 \quad \textrm{ in } Y \setminus \overline{\Sigma}\, , \label{eq:cell problem E 2}\\
E_0 &= 0 \quad \textrm{ in } \Sigma\, . \label{eq:cell problem E 3}
\end{align}
\end{subequations}
Outside the meta-material, the two-scale limit $E_0$ is
$y$-independent; that is, $E_0(x, y) = E_0(x) = E(x)$ for a.e.\ $x
\in \Omega \setminus R$ and a.e.\ $y \in Y$.
\item[ii)]
Given $c \in \mathbb{C}^3$, there exists at most one solution $u
\in L_{\sharp}^2(Y; \mathbb{C}^3)$ to~\eqref{eq:cell problem E} with
cell average $\Xint-_{Y} u(y) \, \mathrm{d} y = c$.
\end{enumerate}
\end{lemma}
\begin{proof}
\textit{i)} The derivation of the cell problem is by now standard and can, for instance, be found in~\cite{Lamacz-Schweizer-2016}. To give some ideas, fix $x \in \Omega$ and $\eta > 0$, and set $\varphi_{\eta}(x) \coloneqq \varphi(x, x/\eta)$, where $\varphi \in C_c^{\infty}(\Omega; C_{\sharp}^{\infty}(Y; \mathbb{C}^3))$. Using integration by parts and the two-scale convergence of $(E^{\eta})_{\eta}$, we obtain that
\begin{equation*}
\lim_{\eta \to 0} \int_{\Omega} \scalarproduct[\big]{\eta \operatorname{curl} E^{\eta}(x)}{\varphi_{\eta}(x)} \, \mathrm{d} x = \int_{\Omega} \int_{Y} \scalarproduct[\big]{E_0(x, y)}{\operatorname{curl}_y \varphi(x, y)} \, \mathrm{d} y \, \mathrm{d} x \, .
\end{equation*}
From this and~\eqref{eq:MaxSeq1-intro}, we deduce that $E_0$ is a distributional solution to~\eqref{eq:cell problem E 1}.
Equation~\eqref{eq:cell problem E 2} is a consequence of~\eqref{eq:MaxSeq2-intro}, and~\eqref{eq:cell problem E 3} follows from~\eqref{eq:MaxSeq3-intro}.
Due to~\eqref{eq:cell problem E 1}, there holds $E_0 \in H_{\sharp}(\operatorname{curl}, Y)$.
For a.e.\ $x \in \Omega \setminus R$, the cell problem for $E_0 = E_0(x, \cdot)$ coincides with~\eqref{eq:cell problem E} with $\Sigma$ replaced by the empty set $\emptyset$. This problem, however, admits only constant solutions.
\textit{ii)}
Let $u \in L_{\sharp}^2(Y; \mathbb{C}^3)$ be a distributional solution to~\eqref{eq:cell problem E} with $\Xint-_{Y} u = 0$.
We claim that the field $u$ vanishes identically in $Y$.
Indeed,~\eqref{eq:cell problem E 1} implies the existence of a potential $\Theta \in H_{\sharp}^1(Y)$ and of a constant $c_0 \in \mathbb{C}^3$ such that $u = \nabla \Theta + c_0$ in $Y$. We may assume that $\Xint-_Y \Theta = 0$. As $u$ has vanishing average, we conclude that $c_0 = 0$. On account of~\eqref{eq:cell problem E 2} and~\eqref{eq:cell problem E 3}, the potential $\Theta$ is a distributional solution to
\begin{align*}
- \Delta \Theta &= 0 \quad \textrm{ in } Y \setminus \overline{\Sigma}\, ,\\
\Theta &= d \quad \textrm{ in } \Sigma\, ,
\end{align*}
for some constant $d \in \mathbb{C}$. As $\Theta \in H_{\sharp}^1(Y)$, the potential $\Theta$ does not jump across the boundary $\partial \Sigma$. Consequently, $\Theta = d$ in $Y$ and thus $u$ vanishes in $Y$.
\end{proof}
While we obtained the uniqueness result immediately, the existence
statement is more involved. To investigate the solution space of the
cell problem~\eqref{eq:cell problem E}, we use the two index sets
from~\eqref{eq:index sets for H}:
$\mathcal{L}_{\Sigma} = \set[\big]{k \in \set{1,2,3} \given
\textrm{there is a $k$-loop in } \Sigma}$ and
$\mathcal{N}_{\Sigma} = \set[\big]{k \in \set{1,2,3} \given
\textrm{there is no $k$-loop in } \Sigma} = \set{1,2,3}
\setminus \mathcal{L}_{\Sigma}$. We claim that $\abs{\mathcal{N}_{\Sigma}}$ coincides
with the dimension of the solution space of~\eqref{eq:cell problem E}.
\begin{lemma}[Connection between the $E_0$-problem~\eqref{eq:cell problem E} and $\mathcal{N}_{\Sigma}$]\label{lemma:characterisation of index set N E}
For $k \in \mathcal{N}_{\Sigma}$ there exists a unique solution $E^k$
to~\eqref{eq:cell problem E} with volume average $e_k$. On the
other hand, if $c \in \mathbb{C}^3$ is such that $c \notin \operatorname{span} \set{e_k
\given k \in \mathcal{N}_{\Sigma}}$, then there is no solution $E$ to
\eqref{eq:cell problem E} with volume average $c$.
\end{lemma}
\begin{proof}
\textit{Part 1.} Let $k$ be an element of $\mathcal{N}_{\Sigma}$. Our aim is to show
that a solution $E^k$ exists. Due to Lemma~\ref{lemma:existence of
potentials in case of no k-loop}, there exists a potential
$\tilde{\Theta}_k \in H_{\sharp}^1(\Sigma)$ such that $\nabla
\tilde{\Theta}_k = - e_k$.
We extend $\tilde{\Theta}_k$ to all of $Y$ as follows: Let $\Theta_k \in H_{\sharp}^1(Y)$ be the weak solution to
\begin{align*}
- \Delta \Theta_k &= 0 \quad \: \: \: \, \textrm{ in } Y \setminus \overline{\Sigma} \, , \\
\Theta_k &= \tilde{\Theta}_k \quad \textrm{ on } \overline{\Sigma}\, .
\end{align*}
By setting $E^k \coloneqq \nabla \Theta_k + e_k$, we obtain a
solution to~\eqref{eq:cell problem E} whose cell average is $e_k$.
\textit{Part 2.}
Let $c \in \mathbb{C}^3$ be such that $c \notin \operatorname{span} \set{e_k
\given k \in \mathcal{N}_{\Sigma}}$. Assume that there is a solution $E$ to
\eqref{eq:cell problem E} with $\Xint-_Y E = c$.
By Part 1, for every $k \in \mathcal{N}_{\Sigma}$
there is a solution $E^k$ to~\eqref{eq:cell problem E} with
$\Xint-_Y E^k = e_k$.
Consider the field
\begin{equation*}
v \coloneqq E - \sum_{k \in \mathcal{N}_{\Sigma}} \scalarproduct{c}{e_k} E^k \, .
\end{equation*}
This field is a solution to~\eqref{eq:cell problem E} with
$\Xint-_Y v = \sum_{k \in \mathcal{L}_{\Sigma}} \scalarproduct{c}{e_k} e_k$. As
$c \notin \operatorname{span} \set{e_k \given k \in \mathcal{N}_{\Sigma}}$, there holds
$\Xint-_Y v \neq 0$. Since $\operatorname{curl} v = 0$ in $Y$, we find a potential
$\Phi \in H_{\sharp}^1(Y)$ such that
$v = \nabla \Phi + \sum_{l \in \mathcal{L}_{\Sigma}} \scalarproduct{c}{e_l} e_l$ in
$Y$. Fix an index $k \in \mathcal{L}_{\Sigma}$, and let $\gamma$ be a $k$-loop in
$\Sigma$. By~\eqref{eq:cell problem E 3}, we have that
$\nabla \Phi \in C_{\sharp}^0(\Sigma; \mathbb{C}^3)$. As $v = 0$ in
$\Sigma$, we calculate, by exploiting the periodicity of $\Phi$ in
$Y$,
\begin{equation}\label{eq:proof characterisation of solution space for E - 2}
0 = \lineint{v} = \lineint{\nabla \Phi} + \sum_{l \in \mathcal{L}_{\Sigma}} \scalarproduct{c}{e_l}
\lineint{e_l} = \scalarproduct{c}{e_k} \scalarproduct{\tilde{\gamma}(1) -
\tilde{\gamma}(0)}{e_k} \, .
\end{equation}
Note that $\scalarproduct{\tilde{\gamma}(1) -
\tilde{\gamma}(0)}{e_k} \neq 0$ since $\gamma$ is a $k$-loop. Thus $\scalarproduct{c}{e_k} = 0$ for all $k \in \mathcal{L}_{\Sigma}$.
This, however, contradicts $\Xint-_Y v \neq 0$.
\end{proof}
Thanks to this lemma, we have the following result.
\begin{proposition}[Characterisation of the solution space of the $E_0$-problem]\label{proposition: dimension of solution space to the cell problem of E}
For every index $k \in \mathcal{N}_{\Sigma}$, let $E^k = E^k(y)$ be the solution
to~\eqref{eq:cell problem E} from Lemma~\ref{lemma:characterisation
of index set N E}. Then every solution $u$ to~\eqref{eq:cell
problem E} can be written as
\begin{equation}\label{eq:characterisation of the solution space to E problem}
u = \sum_{k \in \mathcal{N}_{\Sigma}} \alpha_k E^k\,
\end{equation}
with constants $\alpha_k \in \mathbb{C}$ for $k \in \mathcal{N}_{\Sigma}$.
In particular, the dimension of the solution space coincides with $\abs{\mathcal{N}_{\Sigma}}$.
\end{proposition}
\begin{proof}
We need to prove that
every solution $u$ to~\eqref{eq:cell problem E} can be written as in~\eqref{eq:characterisation of the solution space to E problem}.
Let $u \in H_{\sharp}(\operatorname{curl}, Y)$ be an arbitrary solution
to~\eqref{eq:cell problem E}. On account of~\eqref{eq:cell problem E
1}, we find a potential $\Theta \in H_{\sharp}^1(Y)$ and $c_0 \in
\mathbb{C}^3$ such that $u = \nabla \Theta + c_0$ in $Y$. For each $k \in
\set{1,2,3}$, we set $\alpha_k \coloneqq \scalarproduct{c_0}{e_k}$.
Consider the field
\begin{align}
v \coloneqq u - \sum_{k \in \mathcal{N}_{\Sigma}} \alpha_k E^k\, . \nonumber
\shortintertext{This field is a solution to~\eqref{eq:cell problem E} with}
\label{eq:proof characterisation of solution space for E - 1}
\Xint-_Y v = \sum_{l \in \mathcal{L}_{\Sigma}} \alpha_l e_l \, .
\end{align}
By the second statement of Lemma~\ref{lemma:characterisation of
index set N E}, the coefficients $\alpha_l$ vanish for all $l \in \mathcal{L}_{\Sigma}$. Hence $v = 0$
in $Y$ by the uniqueness result from Lemma~\ref{lemma:cell problem E_0}. This provides
\eqref{eq:characterisation of the solution space to E problem}.
\end{proof}
\begin{remark}\label{remark:the solution space of the cell problem of E is at most three dimensional}
The previous proposition implies, in particular, that the solution space of the cell
problem for $E_0$ is at most three-dimensional. Note that no
assumption (such as simple connectedness) on the domain $\Sigma$ was imposed here.
\end{remark}
\subsection{Cell problem for $H_0$}
\begin{lemma}[Cell problem for $H_0$]
\label{lemma:cell problem for H_0}
Let $R \subset \subset \Omega \subset \mathbb{R}^3$ and $\Sigma \subset
Y$ be as in Section~\ref{section:geometry and assumptions}, and let
$(E^{\eta}, H^{\eta})_{\eta}$ be a sequence of solutions
to~\eqref{eq:MaxSysSeq-intro} that satisfies the
energy bound~\eqref{eq:energy-bound}. The
two-scale limit $H_0 \in L^2(\Omega \times Y; \mathbb{C}^3)$ satisfies the following:
\begin{enumerate}
\item[i)]
For almost all $x \in R$, the field $H_0 = H_0(x, \cdot)$ is an element of $H_{\sharp}(\operatorname{div}, Y)$ and a distributional solution to
\begin{subequations}\label{eq:cell problem H_0}
\begin{align}
\label{eq:cell problem H_0 1}
\operatorname{curl}_y H_0 &= 0 \quad \textrm{ in } Y \setminus \overline{\Sigma}\, , \\
\label{eq:cell problem H_0 2}
\operatorname{div}_y H_0 &= 0 \quad \textrm{ in } Y\, ,\\
\label{eq:cell problem H_0 3}
H_0 &= 0 \quad \textrm{ in } \Sigma\, .
\end{align}
\end{subequations}
Outside the meta-material, the two-scale limit $H_0$ is
$y$-independent; that is, $H_0(x, y) = H_0(x) = H(x)$ for a.e.\ $x
\in \Omega \setminus R$ and a.e.\ $y \in Y$.
\item[ii)]
If $Y \setminus \overline{\Sigma}$ is a simple Helmholtz domain,
then for every $c \in \mathbb{C}^3$ there is at most one solution $u \in
H_{\sharp}(\operatorname{div}, Y)$ to~\eqref{eq:cell problem H_0} with geometric
average $\oint u = c$.
\end{enumerate}
\end{lemma}
\begin{proof}
\textit{i) $H_0$ is a distributional solution to~\textrm{e}qref{eq:cell problem H_0}.}
Exploiting the two-scale convergence of $(H^{\textrm{e}ta})_{\textrm{e}ta}$ and
$(E^{\textrm{e}ta})_{\textrm{e}ta}$, we deduce~\textrm{e}qref{eq:cell problem H_0 1} by
Maxwell's
equation~\textrm{e}qref{eq:MaxSeq2-intro}. By~\textrm{e}qref{eq:MaxSeq1-intro}, each
$H^{\textrm{e}ta}$ is a divergence-free field in $\Omega$, and hence $\operatorname{div}_y \, H_0 = 0$ in $Y$.
On account of~\textrm{e}qref{eq:MaxSeq3-intro}, the field $H^{\textrm{e}ta}\,
\operatorname{i}ndicator{\Sigma_{\textrm{e}ta}} $ vanishes identically in $R$. Thus, $0 =
H^{\textrm{e}ta} \, \operatorname{i}ndicator{\Sigma_{\textrm{e}ta}} \xrightharpoonup{\; 2 \;} H_0 \,
\operatorname{i}ndicator{\Sigma}$ implies that $H_0(x, y) = 0$ for almost all $(x,
y) \operatorname{i}n R \times \Sigma$, and hence~\textrm{e}qref{eq:cell problem H_0 3}.
Outside the meta material, the fields $H_0 = H_0(x, \cdot)$ and $H$ coincide since the corresponding cell problem admits only constant solutions.
\textit{ii) Uniqueness.} Let $Y \setminus \overline{\Sigma}$ be a
simple Helmholtz domain, and let $u \in L_{\sharp}^2(Y; \mathbb{C}^3)$ be a
solution to~\eqref{eq:cell problem H_0} with vanishing geometric
average, $\oint u = 0$. We claim that $u$ vanishes identically in
$Y$. As $u$ is curl free in the simple Helmholtz domain $Y \setminus
\overline{\Sigma}$, we find a potential $\Theta \in H_{\sharp}^1(Y
\setminus \overline{\Sigma})$ and a constant $c_0 \in \mathbb{C}^3$ such
that $u = \nabla \Theta + c_0$ in $Y \setminus \overline{\Sigma}$. For
an index $k \in \mathcal{L}_{Y \setminus \overline{\Sigma}}$, we can apply the first part of
Definition~\ref{definition:geometric average} and find that
$(\oint u)_k = \scalarproduct{c_0}{\textrm{e}_k}$. By assumption, $\oint u =
0$, and hence $\scalarproduct{c_0}{\textrm{e}_k} = 0$ for all $k \in \mathcal{L}_{Y \setminus \overline{\Sigma}}$. Due
to Lemma~\ref{lemma:existence of potentials in case of no k-loop}, for
every $k \in \mathbb{N}H$, we find a potential $\Theta_k \in H_{\sharp}^1(Y
\setminus \overline{\Sigma})$
such that $\nabla \Theta_k = \textrm{e}_k$. The function $\tilde{\Theta} \coloneqq
\Theta + \sum_{k \in \mathbb{N}H} \scalarproduct{c_0}{\textrm{e}_k} \Theta_k$ is an
element of $H_{\sharp}^1(Y \setminus \overline{\Sigma})$. Moreover, $u
= \nabla \Theta + \sum_{k \in \mathbb{N}H} \scalarproduct{c_0}{\textrm{e}_k}\, \textrm{e}_k =
\nabla \tilde{\Theta}$ in $Y \setminus \overline{\Sigma}$.
Equations~\eqref{eq:cell problem H_0 2} and~\eqref{eq:cell problem H_0
3} imply $0 = \scalarproduct{u}{\nu} = \partial_{\nu} \tilde{\Theta}$ on
$\partial \Sigma$, where $\nu$ is the outward unit normal vector. We conclude that $\tilde{\Theta}$ is a weak solution to
\begin{align*}
- \Delta \tilde{\Theta} &= 0 \quad \textrm{ in } Y \setminus \overline{\Sigma}\, ,\\
\partial_{\nu} \tilde{\Theta} &= 0 \quad \textrm{ on } \partial \Sigma\,.
\end{align*}
Testing this Neumann problem with $\overline{\tilde{\Theta}}$ and integrating by
parts (the boundary terms on $\partial Y$ cancel by periodicity, those on
$\partial \Sigma$ vanish by the Neumann condition) yields
$\int_{Y \setminus \overline{\Sigma}} \abs{\nabla \tilde{\Theta}}^2 = 0$.
Solutions to this Neumann boundary problem are therefore constant since $Y
\setminus \overline{\Sigma}$ is a domain, so $u = \nabla \tilde{\Theta} = 0$
in $Y \setminus \overline{\Sigma}$; as $u$ vanishes in $\Sigma$
by~\eqref{eq:cell problem H_0 3}, we conclude $u = 0$ in $Y$.
\end{proof}
\begin{remark}
Note that, in contrast to the $E_0$-problem (see
Lemma~\ref{lemma:cell problem E_0}), the uniqueness statement of
\textit{ii)} is false if we do not assume that
$Y \setminus \overline{\Sigma}$ is a simple Helmholtz
domain. Indeed, in~\cite{Lipton-Schweizer}, a $3$-dimensional full
torus $\Sigma$ is studied and a non-trivial solution with vanishing
geometric average is found.
\end{remark}
\begin{lemma}[Connection between the $H_0$-problem \eqref{eq:cell problem H_0} and $\mathcal{L}_{Y \setminus \overline{\Sigma}}$]\label{lemma:for each k in L H there is a solution with geometric average equal to e k}
Let $Y \setminus \overline{\Sigma}$ be a simple Helmholtz domain. If
the index $k \in \set{1,2,3}$ is an element of $\mathcal{L}_{Y \setminus \overline{\Sigma}}$, then there
exists a unique solution $H^k$ to~\eqref{eq:cell problem H_0} with
geometric average $\textrm{e}_k$.
\end{lemma}
\begin{proof}
Fix $k \in \mathcal{L}_{Y \setminus \overline{\Sigma}}$ and let $\Theta_k \in H_{\sharp}^1(Y \setminus \overline{\Sigma})$ be a distributional solution to
\begin{align*}
- \Delta \Theta_k &= 0 \qquad \qquad \textrm{ in } Y \setminus \overline{\Sigma}\, , \\
\partial_{\nu} \Theta_k &= - \scalarproduct{\textrm{e}_k}{\nu} \quad \textrm{ on } \partial \Sigma\, .
\end{align*}
We define $H^k \colon Y \to \mathbb{C}^3$ by
\begin{equation*}
H^k \coloneqq
\begin{cases}
\nabla \Theta_k + \textrm{e}_k & \textrm{ in } Y \setminus \overline{\Sigma} \, ,\\
0 & \textrm{ in } \Sigma\, .
\end{cases}
\end{equation*}
In this way, we obtain an $L_{\sharp}^2(Y ; \mathbb{C}^3)$-vector field $H^k$
that is a distributional solution to~\eqref{eq:cell problem
H_0}. As $k \in \mathcal{L}_{Y \setminus \overline{\Sigma}}$, we obtain, using the definition of the geometric average, $\oint H^k = \textrm{e}_k$.
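In more detail, $H^k$ is divergence free in all of $Y$ in the distributional
sense because the normal traces of the two one-sided fields match along
$\partial \Sigma$: from inside $\Sigma$ the trace vanishes, while from
outside the Neumann condition gives
\begin{equation*}
\scalarproduct{H^k}{\nu} = \partial_{\nu} \Theta_k + \scalarproduct{\textrm{e}_k}{\nu} = 0 \quad \textrm{ on } \partial \Sigma\, .
\end{equation*}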
\end{proof}
\begin{proposition}[Characterisation of the solution space to the $H_0$-problem]\label{proposition:for every element of the set L E there is a solution to the cell problem}
Let $Y \setminus \overline{\Sigma}$ be a simple Helmholtz domain. For
every index $k \in \mathcal{L}_{Y \setminus \overline{\Sigma}}$, let $H^k = H^k(y)$ be the solution
to~\eqref{eq:cell problem H_0} from Lemma~\ref{lemma:for each k in L
H there is a solution with geometric average equal to e k}. Then
every solution $u$ to~\eqref{eq:cell problem H_0} can be
written as
\begin{equation}\label{eq:solution to H-problem can be written as
linear combination of special fields}
u = \sum_{k \in \mathcal{L}_{Y \setminus \overline{\Sigma}}} \alpha_k H^k\, ,
\end{equation}
with constants $\alpha_k \coloneqq (\oint u)_k \in \mathbb{C}$ for $k \in \mathcal{L}_{Y \setminus \overline{\Sigma}}$.
In particular, the dimension of the solution space coincides with $\abs{\mathcal{L}_{Y \setminus \overline{\Sigma}}}$.
\end{proposition}
\begin{proof}
We use the solutions $H^k$ of Lemma~\ref{lemma:for each k in L H there is a solution with
geometric average equal to e k}. The set $\set{H^k
\given k \in \mathcal{L}_{Y \setminus \overline{\Sigma}}}$ is linearly independent since the geometric
averages of the $H^k$ are linearly independent. We need to prove that every
solution $u$ to~\eqref{eq:cell problem H_0} can be written as in~\eqref{eq:solution to H-problem can be written as
linear combination of special fields}.
Let $u \in H_{\sharp}(\operatorname{div} \,, Y)$ be a solution to~\eqref{eq:cell
problem H_0}. We define
\begin{align*}
v \coloneqq u - \sum_{k \in \mathcal{L}_{Y \setminus \overline{\Sigma}}} \bigg(\oint u\bigg)_k H^k\, .
\shortintertext{The field $v$ is also a solution to~\eqref{eq:cell
problem H_0} and has the geometric average}
\oint v = \sum_{k \in \mathbb{N}H} \bigg( \oint u \bigg)_k \textrm{e}_k \, .
\end{align*}
As those components of the geometric average that correspond to directions without a
loop in $Y \setminus \overline{\Sigma}$ vanish by the first part of
the definition of the geometric average, the right-hand side
vanishes. Due to the uniqueness statement of Lemma~\ref{lemma:cell
problem for H_0}, $v$ vanishes in $Y$. This proves~\eqref{eq:solution to H-problem can be written as
linear combination of special fields}.
\end{proof}
\begin{remark}
As a consequence of the previous proposition, we find that the solution space to the cell problem of $H_0$ is at most three-dimensional if $Y \setminus \overline{\Sigma}$ is a simple Helmholtz domain.
\end{remark}
\begin{remark}
Geometric intuition suggests that $\abs{\mathcal{L}_{Y \setminus \overline{\Sigma}}} \geq \abs{\mathbb{N}E}$; there
is, however, no obvious proof of this fact. As a consequence of this
inequality, we find that the dimensions $d_E$ and $d_H$ of the
solution spaces to the $E_0$-problem~\eqref{eq:cell problem E} and
to the $H_0$-problem~\eqref{eq:cell problem H_0} satisfy $d_H \geq d_E$.
\end{remark}
\section{Derivation of the effective equations}\label{sec:derivation of the effective equations}
Our aim in this section is to derive the effective Maxwell system. We
assume that $\Omega \subset \mathbb{R}^3$, the subdomain $R \subset \subset
\Omega$, and $\Sigma \subset Y$ are as in
Section~\ref{section:geometry and assumptions}. We work with the two
index sets $\mathbb{N}E$ and $\mathcal{L}_{Y \setminus \overline{\Sigma}}$ defined in~\eqref{eq:index sets for H 2}
and \eqref{eq:index sets for H 1}, respectively.
We define the matrices $\varepsilon_{\textrm{eff}}$, $\mu_{\textrm{eff}} \in
\mathbb{R}^{3 \times 3}$ by setting, for $k, l \in \set{1,2,3}$,
\begin{align}
(\varepsilon_{\textrm{eff}})_{kl} &\coloneqq \label{eq:definition eps eff}
\begin{cases}
\scalarproduct[\big]{E^k}{E^l}_{L^2(Y; \mathbb{C}^3)} & \textrm{ if } k, l \in \mathbb{N}E\, ,\\
0 & \:\, \textrm{otherwise} \, ,
\end{cases}
\shortintertext{and}
\mu_{\textrm{eff}} \, \textrm{e}_k &\coloneqq \label{eq:definition mu eff}
\begin{cases}
\int_Y H^k \quad \textrm{ if } k \in \mathcal{L}_{Y \setminus \overline{\Sigma}}\, ,\\
0 \qquad \quad \textrm{ otherwise}\, .
\end{cases}
\end{align}
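We note that, on the subspace $\operatorname{span} \set{\textrm{e}_k \given k \in \mathbb{N}E}$,
the matrix $\varepsilon_{\textrm{eff}}$ is the Gram matrix of the family
$(E^k)_{k \in \mathbb{N}E}$: for every real vector $\xi$ in this subspace,
\begin{equation*}
\scalarproduct{\varepsilon_{\textrm{eff}} \, \xi}{\xi}
= \scalarproduct[\Big]{\sum_{k \in \mathbb{N}E} \xi_k E^k}{\sum_{l \in \mathbb{N}E} \xi_l E^l}_{L^2(Y; \mathbb{C}^3)} \geq 0\, .
\end{equation*}
In particular, $\varepsilon_{\textrm{eff}}$ is positive semidefinite on this
subspace, and positive definite whenever the fields $E^k$ are linearly
independent.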
The \emph{effective permittivity} $\hat{\varepsilon} \colon \Omega \to \mathbb{R}^{3\times 3}$ and the \emph{effective permeability} $\hat{\mu} \colon \Omega \to \mathbb{R}^{3 \times 3}$ are defined by
\begin{equation}\label{eq:definition effective permittivity and permeability}
\hat{\varepsilon}(x) \coloneqq
\begin{cases}
\varepsilon_{\textrm{eff}} \quad \textrm{if } x \in R\, ,\\
\operatorname{id}_{3} \:\: \textrm{ otherwise}\, ,
\end{cases}
\qquad
\hat{\mu}(x) \coloneqq
\begin{cases}
\mu_{\textrm{eff}} \quad \textrm{if } x \in R\, ,\\
\operatorname{id}_{3} \:\: \,\textrm{ otherwise}\, ,
\end{cases}
\end{equation}
where $\operatorname{id}_3 \in \mathbb{R}^{3 \times 3}$ is the identity matrix.
Let $(E^{\eta}, H^{\eta})_{\eta}$ be a sequence of solutions
to~\eqref{eq:MaxSysSeq-intro} that satisfies the
energy bound~\eqref{eq:energy-bound}; the corresponding two-scale
limits are denoted by $E_0$, $H_0 \in L^2(\Omega \times Y;
\mathbb{C}^3)$. Assume that $Y \setminus \overline{\Sigma}$ is such that we
can define the geometric average. We then define the \emph{limit
fields} $\hat{E},\hat{H} \colon \Omega \to \mathbb{C}^3$ by
\begin{equation}\label{eq:limit fields}
\hat{E}(x) \coloneqq \int_Y E_0(x, y) \, \textrm{d} y \quad \textrm{ and } \quad \hat{H}(x) \coloneqq \oint H_0(x, \cdot)\, .
\end{equation}
We recall that $H_0(x, \cdot)$ solves~\eqref{eq:cell problem H_0}; it is
therefore curl free in $Y \setminus \overline{\Sigma}$ and vanishes in $\Sigma$.
The two fields $\hat{E}$ and $\hat{H}$ are of class $L^2(\Omega; \mathbb{C}^3)$.
\begin{theorem}[Macroscopic equations]\label{theorem:macroscopic equations}
Let $(E^{\eta}, H^{\eta})_{\eta}$ be a sequence of solutions
to~\eqref{eq:MaxSysSeq-intro} that satisfies the energy
bound~\eqref{eq:energy-bound}. Assume that the geometry
$\Sigma_{\eta} \subset R \subset \Omega$ is as in
Section~\ref{section:geometry and assumptions}. Let
$Y \setminus \overline{\Sigma}$ be a simple Helmholtz domain, and
let $\mathbb{N}E$ and $\mathcal{L}_{Y \setminus \overline{\Sigma}}$ be the corresponding index sets. Let the
effective permittivity $\hat{\varepsilon}$ and the effective permeability
$\hat{\mu}$ be defined as in~\eqref{eq:definition effective
permittivity and permeability}, and let the limit fields
$(\hat{E}, \hat{H})$ be defined as in~\eqref{eq:limit fields}. Then
$\hat{E}$ and $\hat{H}$ are distributional solutions to
\begin{subequations}\label{eq:effective equations - derivation of effective equations}
\begin{align}
\label{eq:effective equations 1 - derivation of effective equations}
\operatorname{curl} \, \hat{E} &=\phantom{-} \operatorname{i} \omega \mu_0 \hat{\mu} \hat{H} \quad \:\:\: \:\textrm{ in } \Omega\, ,\\
\label{eq:effective equations 3 - derivation of effective equations}
\operatorname{curl} \, \hat{H} &= - \operatorname{i} \omega \varepsilon_0 \hat{\varepsilon} \hat{E} \quad
\: \: \: \: \: \, \textrm{ in } \Omega \setminus R\, , \\
\label{eq:effective equations 2 - derivation of effective equations}
(\operatorname{curl} \, \hat{H})_k &= -\operatorname{i} \omega \varepsilon_0 (\hat{\varepsilon} \hat{E})_k
\quad \textrm{ in } \Omega\, , \textrm{ for
every } k \in \mathbb{N}E \, , \\
\label{eq:effective equations 4 - derivation of effective equations}
\hat{E}_k &= 0 \qquad \qquad \quad \: \: \: \, \textrm{ in } R\, , \textrm{ for every } k \in \set{1,2,3} \setminus \mathbb{N}E\, , \\
\label{eq:effective equations 5 - derivation of effective equations}
\hat{H}_k &= 0 \qquad \qquad \quad \: \: \: \, \textrm{ in } R \, , \textrm{ for every } k \in \mathbb{N}H \, .
\end{align}
\end{subequations}
\end{theorem}
\begin{proof}
Thanks to the preparations
of the last section, we can essentially
follow~\cite{Lipton-Schweizer} to derive~\eqref{eq:effective equations 1 - derivation of effective equations}--\eqref{eq:effective equations 2 - derivation of effective equations}. The remaining
relations \eqref{eq:effective equations 4 - derivation of effective
equations} and \eqref{eq:effective equations 5 - derivation of
effective equations} follow from the characterisation of the
solution spaces of the cell problems.
\textit{Step 1: Derivation of~\eqref{eq:effective equations 1 -
derivation of effective equations}.}
The distributional limit of~\eqref{eq:MaxSeq1-intro} reads
\begin{equation}\label{eq:distributional limit - proof of effective equations}
\operatorname{curl} \, E = \operatorname{i} \omega \mu_0 H \quad \textrm{ in } \Omega\, .
\end{equation}
We recall that $E$ and $H$ are the weak
$L^2(\Omega;\mathbb{C}^3)$-limits of $(E^{\eta})_{\eta}$ and
$(H^{\eta})_{\eta}$, respectively. By the definition of the limit
$\hat{E}$ in \eqref{eq:limit fields} and the volume-average property of the two-scale limit
$E_0$, we find that
\begin{equation*}
\hat{E}(x) = \int_Y E_0(x, y) \, \textrm{d} y = E(x)\, ,
\end{equation*}
for almost every $x \in \Omega$.
Thus, $\operatorname{curl} \, E = \operatorname{curl} \, \hat{E}$.
On the other hand, using that $Y \setminus \overline{\Sigma}$ is a
simple Helmholtz domain, for almost all $(x,y) \in R\times Y$, the
two-scale limit $H_0$ can be written with coefficients $H_k(x)$ as
\begin{equation}\label{eq:proof of effective equations - 2}
H_0(x,y) = \sum_{k \in \mathcal{L}_{Y \setminus \overline{\Sigma}}} H_k(x) H^k(y) \, ,
\end{equation}
by Proposition~\ref{proposition:for every element of the set L E there
is a solution to the cell problem}. The averaging property of the
two-scale limit, the identity~\eqref{eq:proof of effective equations - 2}, and the definition of $\mu_{\textrm{eff}}$ imply that
\begin{equation}\label{eq:proof effective equations - 1}
H(x) = \int_Y H_0(x, y) \, \textrm{d} y
= \sum_{k \in \mathcal{L}_{Y \setminus \overline{\Sigma}}}H_k(x) \int_Y H^k(y) \, \textrm{d} y
= \mu_{\textrm{eff}} \sum_{k \in \mathcal{L}_{Y \setminus \overline{\Sigma}}} H_k(x) \textrm{e}_k\, .
\end{equation}
Using the definition of the limit field $\hat{H}$ in~\eqref{eq:limit fields} and
identity~\eqref{eq:proof of effective equations - 2}, we conclude
from~\eqref{eq:proof effective equations - 1} that
\begin{equation}\label{eq:proof of effective equations - 6}
H(x) = \mu_{\textrm{eff}} \hat{H}(x) \, .
\end{equation}
Outside the meta-material, the $L^2(\Omega;\mathbb{C}^3)$-weak limit $H$ and the
two-scale limit $H_0$ coincide due to Lemma~\ref{lemma:cell problem
for H_0}. Moreover, $\hat{\mu}$ equals the identity there, and
hence~\eqref{eq:proof of effective equations - 6} holds also in $\Omega \setminus R$. From~\eqref{eq:proof of effective equations - 6}
and~\eqref{eq:distributional limit - proof of effective equations}
we conclude \eqref{eq:effective equations 1 - derivation of effective equations}.
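To summarise Step 1 in a single chain of identities: in $R$,
\begin{equation*}
\operatorname{curl} \, \hat{E} = \operatorname{curl} \, E = \operatorname{i} \omega \mu_0 H
= \operatorname{i} \omega \mu_0 \mu_{\textrm{eff}} \hat{H}
= \operatorname{i} \omega \mu_0 \hat{\mu} \hat{H}\, ;
\end{equation*}
in $\Omega \setminus R$ the same chain holds with $\mu_{\textrm{eff}}$ replaced by $\hat{\mu} = \operatorname{id}_3$.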
\textit{Step 2: Derivation of~\eqref{eq:effective equations 3 - derivation of effective equations}, \eqref{eq:effective equations 4 - derivation of effective equations} and~\eqref{eq:effective equations 5 - derivation of effective equations}.}
To prove~\eqref{eq:effective equations 3 - derivation of effective equations}, we first observe that $\Omega \setminus R \subset \Omega \setminus \Sigma_{\eta}$. We can therefore take the distributional limit in~\eqref{eq:MaxSeq2-intro} as $\eta$ tends to zero and obtain that
\begin{equation*}
\operatorname{curl} \, H = - \operatorname{i} \omega \varepsilon_0 E \quad \textrm{ in } \Omega \setminus R \, .
\end{equation*}
This shows~\eqref{eq:effective equations 3 - derivation of effective equations}, since $\hat{H} = H$ and $\hat{E} = E$ in $\Omega \setminus R$.
By Proposition~\ref{proposition: dimension of solution space to the
cell problem of E}, the two-scale limit $E_0$ can be written with
coefficients $E_k(x)$ as $E_0(x, y) = \sum_{k \in \mathbb{N}E} E_k(x)
E^k(y)$. Due to the definition of the limit field $\hat{E}$
in~\eqref{eq:limit fields}, we find that
\begin{equation}\label{eq:proof effective equations - 7}
\hat{E}(x) = \int_{Y} \sum_{k \in \mathbb{N}E} E_k(x) E^k(y) \, \textrm{d} y = \sum_{k \in \mathbb{N}E} E_k(x) \textrm{e}_k\, ,
\end{equation}
for $x \in R$. Consequently, equation~\eqref{eq:effective equations 4 - derivation of effective equations} holds: the right-hand side has no component in the directions $\textrm{e}_k$ with $k \notin \mathbb{N}E$. Similarly, the definition of $\hat{H}$ and Proposition~\ref{proposition:for every element of the set L E there is a solution to the cell problem} imply that
\begin{equation*}
\hat{H}(x) = \oint \sum_{k \in \mathcal{L}_{Y \setminus \overline{\Sigma}}} H_k(x) H^k = \sum_{k \in \mathcal{L}_{Y \setminus \overline{\Sigma}}} H_k(x) \textrm{e}_k\, ,
\end{equation*}
for almost all $x \in R$. This proves~\eqref{eq:effective equations 5 - derivation of effective equations}.
\textit{Step 3: Derivation of~\eqref{eq:effective equations 2 -
derivation of effective equations}.} We use the defining property
of two-scale convergence and appropriate oscillating test
functions. For $k \in \mathbb{N}E$ and $\theta \in C_c^{\infty}(\Omega; \mathbb{R})$,
we set $\varphi(x, y) \coloneqq \theta(x) E^k(y)$ for $x \in \Omega$ and
$y \in Y$. We use $\varphi_{\eta}(x) \coloneqq \varphi(x, x/ \eta)$ for
$x \in \Omega$. From the two-scale convergence of $(H^{\eta})_{\eta}$, we
obtain that
\begin{align*}
\lim_{\eta \to 0}\int_{\Omega}\scalarproduct{H^{\eta}}{\operatorname{curl} \, \varphi_{\eta}}
&= \int_{\Omega} \int_Y \scalarproduct[\big]{H_0(x,y)}{\nabla \theta(x) \wedge E^k(y)}\, \textrm{d} y\, \textrm{d} x\\
&=- \int_{\Omega} \scalarproduct[\Big]{\nabla \theta (x)}{\int_Y H_0(x,y) \wedge E^k(y) \, \textrm{d} y} \, \textrm{d} x \, .
\end{align*}
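The first equality uses the expansion
\begin{equation*}
\operatorname{curl} \, \varphi_{\eta}(x) = \nabla \theta(x) \wedge E^k(x/\eta) + \frac{1}{\eta}\, \theta(x) \, (\operatorname{curl}_y E^k)(x/\eta)\, ,
\end{equation*}
in which the singular second term is absent, the fields $E^k$ being curl free in $y$ (this is part of the cell problem~\eqref{eq:cell problem E}).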
Thanks to Corollary~\ref{corollary:property of geometric average} on the geometric average, the identity
\begin{equation*}
\int_Y H_0(x,y) \wedge E^k(y) \, \textrm{d} y = \oint H_0(x, \cdot) \wedge \textrm{e}_k
\end{equation*}
holds for almost all $x \in \Omega$. Consequently,
\begin{align}
\lim_{\eta \to 0}\int_{\Omega}\scalarproduct{H^{\eta}}{\operatorname{curl} \, \varphi_{\eta}}
&= -\int_{\Omega} \scalarproduct[\Big]{\nabla \theta(x)}{\oint H_0(x, \cdot) \wedge \textrm{e}_k} \, \textrm{d} x \nonumber\\
&= \int_{\Omega} \scalarproduct[\big]{\hat{H}(x)}{\operatorname{curl} \, \big(\theta(x) \textrm{e}_k \big)} \, \textrm{d} x\, . \label{eq:proof of effective equations - 3}
\end{align}
On the other hand, $H^{\eta}$ is a distributional solution to Maxwell's
equation~\eqref{eq:MaxSeq2-intro}. Thus, using~\eqref{eq:proof effective equations - 7},
\begin{align}
\lim_{\eta \to 0} \int_{\Omega} \scalarproduct{H^{\eta}}{\operatorname{curl} \, \varphi_{\eta}}
&= - \operatorname{i} \omega \varepsilon_0 \lim_{\eta \to 0} \int_{\Omega} \scalarproduct{E^{\eta}}{\varphi_{\eta}} \nonumber\\
&= - \operatorname{i} \omega \varepsilon_0 \int_{\Omega} \Big(\int_Y \scalarproduct{E_0(x, y)}{ E^k(y)} \, \textrm{d} y \Big) \theta(x)\, \textrm{d} x \nonumber\\
&= - \operatorname{i} \omega \varepsilon_0 \sum_{l \in \mathbb{N}E} \int_{\Omega}(\varepsilon_{\textrm{eff}})_{kl} E_l(x) \theta(x) \, \textrm{d} x \nonumber \\
&= \int_{\Omega} \scalarproduct[\big]{- \operatorname{i} \omega \varepsilon_0 \varepsilon_{\textrm{eff}} \hat{E}(x)}{ \theta(x)\textrm{e}_k} \, \textrm{d} x\, . \label{eq:proof of effective equations - 4}
\end{align}
As $\theta \in C_c^{\infty}(\Omega; \mathbb{R})$ was chosen arbitrarily, \eqref{eq:proof of effective equations - 3} and~\eqref{eq:proof of effective equations - 4} imply that
\begin{equation*}
(\operatorname{curl} \, \hat{H})_k = - \operatorname{i} \omega \varepsilon_0 (\varepsilon_{\textrm{eff}} \hat{E})_k \quad \textrm{ for all } k \in \mathbb{N}E\, .
\end{equation*}
This shows~\eqref{eq:effective equations 2 - derivation of effective
equations}, and hence the theorem is proved.
\end{proof}
\begin{remark}[Well-posedness of system~\eqref{eq:effective equations - derivation of effective equations}]
We claim that the effective Maxwell system \eqref{eq:effective
equations - derivation of effective equations} forms a complete
set of equations. To be more precise, we show the following: Let $R
\subset \Omega$ be a cuboid that is parallel to the axes and let
$\Omega$ be a bounded
Lipschitz domain. Assume that $\hat{\mu}$
is real and positive definite on
$V \coloneqq \operatorname{span} \set{\textrm{e}_k \given k \in \mathcal{L}_{Y \setminus \overline{\Sigma}}}$---that is, there is
a constant $\alpha > 0$ such that
$\scalarproduct{\hat{\mu} \xi}{\xi} \geq \alpha \abs{\xi}^2$ for all
$\xi \in V$. We also assume that $\hat{\varepsilon}$ is real and positive
definite on $\operatorname{span} \set{\textrm{e}_k \given k \in \mathbb{N}E}$, and that
$\mu_0 > 0$, $\operatorname{Re} \varepsilon_0 > 0$, and $\operatorname{Im} \varepsilon_0 > 0$. Let
$(\hat{E}, \hat{H})$ be a solution to~\eqref{eq:effective equations
- derivation of effective equations} with boundary condition
$\hat{H} \wedge \nu = 0$ on $\partial \Omega$, and assume that both
$\hat{E}$ and $\hat{H}$ are of class $H^1$ in $R$ and in
$\Omega \setminus R$. We claim that $\hat{E}$ and $\hat{H}$ are
trivial.
To prove that $\hat{E} = \hat{H} = 0$, we first show that the integration by parts
formula
\begin{equation}\label{eq:remark step 2}
\int_{\Omega} \scalarproduct[\big]{\operatorname{curl} \, \hat{H}}{\hat{E}} = \int_{\Omega} \scalarproduct[\big]{\hat{H}}{\operatorname{curl} \, \hat{E}}
\end{equation}
holds. Equation~\eqref{eq:remark step 2} is a consequence of an
integration by parts provided that the integral
$\int_{\partial R} \jump{\hat{H}} \cdot (\hat{E} \wedge \nu)$ vanishes, where
$\jump{\hat{H}}$ is the jump of the field $\hat{H}$ across the
boundary $\partial R$ and $\nu$ is the outward unit normal vector on
$\partial R$. As $\hat{E} \in H(\operatorname{curl} \,, \Omega)$, there holds
$\jump{\hat{E} \wedge \nu} = 0$ (i.e., the tangential components of $\hat{E}$ do
not jump).
Let $\Gamma$ be one face of $R$. To prove
$\int_{\Gamma} \jump{\hat{H}} \cdot (\hat{E} \wedge \nu) = 0$, it suffices
to show that for all $k,l \in \set{1,2,3}$ with $k \neq l$ and
$\scalarproduct{\textrm{e}_k}{\nu} = \scalarproduct{\textrm{e}_l}{\nu} = 0$, we have
that $\hat{E}_k \jump{\hat{H}_l}_{\Gamma} = 0$. We obtain this
relation from~\eqref{eq:effective equations 2 - derivation of
effective equations} and \eqref{eq:effective equations 4 -
derivation of effective equations}: Indeed, for $k \notin \mathbb{N}E$,
there holds $\hat{E}_k = 0$ because of~\eqref{eq:effective equations 4
- derivation of effective equations}. On the other hand,
by~\eqref{eq:effective equations 2 - derivation of effective
equations}, for $k \in \mathbb{N}E$ there holds
$\partial_m \hat{H}_l - \partial_l \hat{H}_m = \mp \operatorname{i} \omega \varepsilon_0 (\hat{\varepsilon}
\hat{E})_k$ in the distributional sense, where $\nu = \textrm{e}_m$. This
implies that $\jump{\hat{H}_l}_{\Gamma} = 0$.
We now show that, for almost all $x \in \Omega$,
\begin{equation}\label{eq:remark step 1}
\scalarproduct[\big]{\operatorname{curl} \, \hat{H}(x)}{\hat{E}(x)} = - \operatorname{i} \omega \varepsilon_0 \scalarproduct[\big]{\hat{\varepsilon} \hat{E}(x)}{\hat{E}(x)}\, .
\end{equation}
In $\Omega \setminus R$, this identity is a consequence
of~\eqref{eq:effective equations 3 - derivation of effective
equations}. In $R$, on the other hand, we conclude
from~\eqref{eq:effective equations 4 - derivation of effective
equations} and~\eqref{eq:effective equations 2 - derivation of
effective equations} that, for $x \in R$,
\begin{equation*}
\scalarproduct[\big]{\operatorname{curl} \, \hat{H}(x)}{\hat{E}(x)} = \sum_{k \in \mathbb{N}E} ( \operatorname{curl} \, \hat{H}(x) )_k \, \overline{\hat{E}_k(x)} = - \operatorname{i} \omega \varepsilon_0 \sum_{k \in \mathbb{N}E} (\varepsilon_{\textrm{eff}} \hat{E}(x))_k \, \overline{\hat{E}_k(x)}\, .
\end{equation*}
Applying again~\eqref{eq:effective equations 4 - derivation of effective equations}, we obtain~\eqref{eq:remark step 1}.
From~\eqref{eq:effective equations 1 - derivation of effective
equations}, the integration by parts formula~\eqref{eq:remark step
2}, and~\eqref{eq:remark step 1}, we obtain
\begin{equation*}
\int_{\Omega} \scalarproduct[\big]{\hat{\mu} \hat{H}}{\hat{H}} =
-\frac{\operatorname{i}}{\omega \mu_0} \int_{\Omega} \scalarproduct[\big]{\operatorname{curl} \,
\hat{E}}{\hat{H}} = -\frac{\operatorname{i}}{\omega \mu_0} \int_{\Omega}
\scalarproduct[\big]{\hat{E}}{\operatorname{curl} \, \hat{H}}
= \frac{\varepsilon_0}{\mu_0} \int_{\Omega} \scalarproduct[\big]{\hat{\varepsilon}\hat{E}}{\hat{E}}\, .
\end{equation*}
As $\hat{\mu}$ and $\mu_0$ are assumed to be real, by taking the imaginary part, we
find that
$\int_{\Omega} \operatorname{Im} \varepsilon_0 \scalarproduct{\hat{\varepsilon} \hat{E}}{\hat{E}}
= 0$, and hence $\scalarproduct{\hat{\varepsilon} \hat{E}}{\hat{E}} = 0$
almost everywhere in $\Omega$. This implies that $\hat{E} = 0$ in
$\Omega \setminus R$ and, taking into account that $\hat{\varepsilon}$ is
positive definite on $\operatorname{span} \set{\textrm{e}_k \given k \in \mathbb{N}E}$, we also
find $\hat{E} = 0$ in $R$. We can therefore conclude
from~\eqref{eq:effective equations 1 - derivation of effective
equations} that $\hat{H}$ vanishes in $\Omega \setminus R$ and that
$\hat{H}_k = 0$ in $R$ for all $k \in \mathcal{L}_{Y \setminus \overline{\Sigma}}$. On account
of~\eqref{eq:effective equations 5 - derivation of effective
equations}, $\hat{H} = 0$ in $\Omega$. This shows the uniqueness of
solutions.
\end{remark}
\section{Discussion of examples} \label{section:examples}
In this section, we apply Theorem~\ref{theorem:macroscopic equations} to some examples.
In what follows, $d_E$ and $d_H$ denote the dimensions of
the solution spaces to the cell problem~\eqref{eq:cell problem E} of
$E_0$ and \eqref{eq:cell problem H_0} of $H_0$,
respectively. Due to Propositions \ref{proposition: dimension of
solution space to the cell problem of E} and~\ref{proposition:for
every element of the set L E there is a solution to the cell
problem}, we find that $d_E = \abs{\mathbb{N}E}$ and $d_H = \abs{\mathcal{L}_{Y \setminus \overline{\Sigma}}}$.
\subsection{The metal ball}\label{example:metal ball}
To define the metallic ball structure, we fix a number $r \in (0, 1/2)$ and set
\begin{equation}\label{eq:definition metal ball}
\Sigma \coloneqq \set{y = (y_1, y_2, y_3) \in Y \given y_1^2 + y_2^2 + y_3^2 < r^2 }\, .
\end{equation}
A sketch of the periodicity cell is given in Figure~\ref{fig:the metal
ball}. We note that $Y \setminus \overline{\Sigma}$ is a simple
Helmholtz domain. As any two opposite faces of $Y$ can be connected
by a loop in $Y
\setminus \overline{\Sigma}$, we have that $\mathcal{L}_{Y \setminus \overline{\Sigma}} = \set{1,2,3}$. On the
other hand, we find no loop in $\Sigma$ that connects two opposite
faces of $Y$; hence $\mathbb{N}E = \set{1,2,3}$. To summarise, we have that
\begin{center}
\begin{tabular}{ c | c | c | c }
$\mathbb{N}E$ & $d_E$ & $\mathcal{L}_{Y \setminus \overline{\Sigma}}$ & $d_H$\\ \hline
$\set{1,2,3}$ & $3$ & $\set{1,2,3}$ & $3$ \\
\end{tabular}\,.
\end{center}
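Indeed, with $\mathbb{N}E = \set{1,2,3}$, relation~\eqref{eq:effective equations 4 - derivation of effective equations} imposes no constraint and~\eqref{eq:effective equations 2 - derivation of effective equations} holds for every component $k$; moreover, as every direction admits a loop in $Y \setminus \overline{\Sigma}$, the index set $\mathbb{N}H$ is empty and~\eqref{eq:effective equations 5 - derivation of effective equations} is void as well.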
The Maxwell system~\eqref{eq:effective equations - derivation of effective equations} is of the usual form; to be more precise, the following result holds.
\begin{corollary}[Macroscopic equations of the metal ball]
For $\Sigma$ as in~\eqref{eq:definition metal ball}, a sequence
$(E^{\eta}, H^{\eta})_{\eta}$, and limits $\hat{E}$ and $\hat{H}$ as in Theorem~\ref{theorem:macroscopic equations}, the macroscopic equations read
\begin{subequations}\label{eq:effective equations for metal ball}
\begin{align}
\label{eq:effective equations for metal ball - 1}
\operatorname{curl} \, \hat{E} &= \phantom{-} \operatorname{i} \omega \mu_0 \hat{\mu} \hat{H} \quad \textrm{ in } \Omega\, ,\\
\label{eq:effective equations for metal ball - 2}
\operatorname{curl} \, \hat{H}&= - \operatorname{i} \omega \varepsilon_0 \hat{\varepsilon}\hat{E} \: \,\quad \textrm{ in } \Omega \, .
\end{align}
\end{subequations}
\end{corollary}
\begin{figure}
\centering
\begin{subfigure}[t]{0.3\textwidth}
\begin{tikzpicture}[scale=2.5]
\coordinate (A) at (-0.5, -0.5, 0.5);
\coordinate (B) at (0.5,-0.5,0.5);
\coordinate (C) at (0.5,0.5,0.5);
\coordinate (D) at (-0.5, 0.5, 0.5);
\coordinate (E) at (-0.5, 0.5, -0.5);
\coordinate (F) at (0.5, 0.5, -0.5);
\coordinate (G) at (0.5, -0.5, -0.5);
\coordinate (H) at (-0.5, -0.5, -0.5);
\fill[gray!20!white, opacity=.5] (A) -- (B) -- (C) -- (D) -- cycle;
\fill[gray!20!white, opacity=.5] (E) -- (F) -- (G) --(H) -- cycle;
\fill[gray!20!white, opacity=.5] (D) -- (E) -- (H) -- (A)-- cycle;
\fill[gray!20!white, opacity=.5] (B) -- (C) -- (F) -- (G) -- cycle;
\fill[gray!75!white] (0,0) circle (.35cm);
\draw[] (A) -- (B) -- (C) -- (D) --cycle (E) -- (D) (F) -- (C) (G) -- (B);
\draw[] (E) -- (F) -- (G) ;
\draw[densely dashed] (E) -- (H) (H) -- (G) (H) -- (A);
\draw[] (0,0) circle (.35cm);
\draw[] (-.35,0) arc (180:360:.35cm and .175cm);
\draw[dashed] (-.35,0) arc (180:0:.35cm and .175cm);
\draw[] (0,.35) arc (90:270:0.175cm and 0.35cm);
\draw[dashed] (0,.35) arc (90:-90:0.175cm and 0.35cm);
\draw[->] ( -.8, -.7, .5)--( -.4, -.7, .5 );
\draw[->] (-.8, -.7, .5) -- (-.8, -.3, .5);
\draw[->] (-.8, -.7, .5) -- (-.8, -.7, .2);
\node[] at (0,0,0) {$\Sigma$};
\node[] at (-.4, -.8, .5){$\textrm{e}_1$};
\node[] at (-.68, -.3, .5){$\textrm{e}_3$};
\node[] at (-.8, -.8, -.1){$\textrm{e}_2$};
\end{tikzpicture}
\caption{ }
\label{fig:the metal ball}
\end{subfigure}
\begin{subfigure}[t]{0.3\textwidth}
\begin{tikzpicture}[scale=2.5]
\coordinate (A) at (-0.5, -0.5, 0.5);
\coordinate (B) at (0.5,-0.5,0.5);
\coordinate (C) at (0.5,0.5,0.5);
\coordinate (D) at (-0.5, 0.5, 0.5);
\coordinate (E) at (-0.5, 0.5, -0.5);
\coordinate (F) at (0.5, 0.5, -0.5);
\coordinate (G) at (0.5, -0.5, -0.5);
\coordinate (H) at (-0.5, -0.5, -0.5);
\coordinate (M1) at (-0.15, -0.5, 0.5);
\coordinate (M2) at (0.15, -0.5, 0.5);
\coordinate (M3) at (0.15, 0.5, 0.5);
\coordinate (M4) at (-0.15, 0.5, 0.5);
\coordinate (M5) at (-0.15, 0.5, -0.5);
\coordinate (M6) at (0.15,0.5,-0.5);
\coordinate (M7) at (0.15, -0.5, -0.5);
\coordinate (M8) at (-0.15, -0.5, -0.5);
\fill[gray!20!white, opacity=.5] (A) -- (B) -- (C) -- (D) -- cycle;
\fill[gray!20!white, opacity=.5] (E) -- (F) -- (G) --(H) -- cycle;
\fill[gray!20!white, opacity=.5] (D) -- (E) -- (H) -- (A)-- cycle;
\fill[gray!20!white, opacity=.5] (B) -- (C) -- (F) -- (G) -- cycle;
\fill[gray!75!white, opacity=.9] (M4) -- (M5) -- (M6) -- (M3) -- cycle;
\fill[gray!75!white, opacity=.9] (M1) -- (M4) -- (M5) -- (M8) -- cycle;
\fill[gray!75!white, opacity=.9] (M2) -- (M3) -- (M6) -- (M7) -- cycle;
\fill[gray!75!white, opacity=.9] (M1) -- (M2) -- (M7) -- (M8) -- cycle;
\draw[] (A) -- (B) -- (C) -- (D) -- cycle (E) -- (D) (F) -- (C) (G) -- (B);
\draw[] (E) -- (F) -- (G) ;
\draw[densely dashed] (E) -- (H) (H) -- (G) (H) -- (A);
\draw[] (M1) -- (M2) -- (M3) -- (M4) -- cycle;
\draw[] (M4) -- (M5) -- (M6) -- (M3) -- cycle;
\draw[dashed] (M5) -- (M8) (M7) -- (M6);
\draw[dashed] (M1) -- (M8) (M7) -- (M2);
\draw[->] (-.8, -.7, .5) -- (-.4, -.7, .5);
\draw[->] (-.8, -.7, .5) -- (-.8, -.3, .5);
\draw[->] (-.8, -.7, .5) -- (-.8, -.7, .2);
\node[] at (-0.18, -.15, 0) {$\Sigma$};
\node[] at (-.4, -.8, .5){$\mathrm{e}_1$};
\node[] at (-.68, -.3, .5){$\mathrm{e}_3$};
\node[] at (-.8, -.8, -.1){$\mathrm{e}_2$};
\end{tikzpicture}
\caption{ }
\label{fig:the metal plate}
\end{subfigure}
\begin{subfigure}[t]{0.3\textwidth}
\begin{tikzpicture}[scale=2.5]
\coordinate (A) at (-0.5, -0.5, 0.5);
\coordinate (B) at (0.5,-0.5,0.5);
\coordinate (C) at (0.5,0.5,0.5);
\coordinate (D) at (-0.5, 0.5, 0.5);
\coordinate (E) at (-0.5, 0.5, -0.5);
\coordinate (F) at (0.5, 0.5, -0.5);
\coordinate (G) at (0.5, -0.5, -0.5);
\coordinate (H) at (-0.5, -0.5, -0.5);
\fill[gray!75!white, opacity=.9] (A) -- (B) -- (C) -- (D) -- cycle;
\fill[gray!75!white, opacity=.9] (E) -- (F) -- (G) --(H) -- cycle;
\fill[gray!75!white, opacity=.9] (D) -- (E) -- (H) -- (A)-- cycle;
\fill[gray!75!white, opacity=.9] (B) -- (C) -- (F) -- (G) -- cycle;
\fill[gray!20!white] (0,0) circle (.35cm);
\draw[] (A) -- (B) -- (C) -- (D) -- cycle (E) -- (D) (F) -- (C) (G) -- (B);
\draw[] (E) -- (F) -- (G) ;
\draw[densely dashed] (E) -- (H) (H) -- (G) (H) -- (A);
\draw[] (0,0) circle (.35cm);
\draw[] (-.35,0) arc (180:360:.35cm and .175cm);
\draw[dashed] (-.35,0) arc (180:0:.35cm and .175cm);
\draw[] (0,.35) arc (90:270:0.175cm and 0.35cm);
\draw[dashed] (0,.35) arc (90:-90:0.175cm and 0.35cm);
\draw[->] (-.8, -.7, .5) -- (-.4, -.7, .5);
\draw[->] (-.8, -.7, .5) -- (-.8, -.3, .5);
\draw[->] (-.8, -.7, .5) -- (-.8, -.7, .2);
\node[] at (-.25, -0.5, .1) {$\Sigma$};
\node[] at (-.4, -.8, .5){$\mathrm{e}_1$};
\node[] at (-.68, -.3, .5){$\mathrm{e}_3$};
\node[] at (-.8, -.8, -.1){$\mathrm{e}_2$};
\end{tikzpicture}
\caption{ }
\label{fig:the air ball}
\end{subfigure}
\caption{The periodicity cell $Y$ is represented by the cube. (a) The
metal ball of Example~\ref{example:metal ball}. (b) The metal plate
of Example~\ref{example:metal plate}. (c) A sketch of the air ball
(see Remark~\ref{remark:effective equations of air ball}).}
\end{figure}
\subsection{The metal plate}\label{example:metal plate}
To define the metal plate structure, fix a number $\gamma \in (0, 1/2)$ and set
\begin{equation}\label{eq:definition metal plate}
\Sigma \coloneqq \set{y = (y_1, y_2, y_3) \in Y \given y_1 \in (- \gamma, \gamma) }\, .
\end{equation}
We refer to Figure~\ref{fig:the metal plate} for a sketch of the periodicity cell $Y$. Observe that $Y \setminus \overline{\Sigma}$ is a simple Helmholtz domain. We obtain the table
\begin{center}
\begin{tabular}{ c | c | c | c }
$\NE$ & $d_E$ & $\mathcal{L}_{Y \setminus \overline{\Sigma}}$ & $d_H$\\ \hline
$\set{1}$ & $1$ & $\set{2,3}$ & $2$ \\
\end{tabular}\, .
\end{center}
In fact, we not only know the dimensions of the solution spaces
to~\eqref{eq:cell problem E} and \eqref{eq:cell problem H_0} but also
bases for these spaces. Indeed, for the volume fraction
$\alpha \coloneqq \abs{Y \setminus \overline{\Sigma}}$, the field
$E^1 \colon Y \to \mathbb{C}^3$ given by
$E^1(y) \coloneqq \mathrm{e}_1 \alpha^{-1} \indicator{Y \setminus
\overline{\Sigma}}(y)$ is a solution to~\eqref{eq:cell problem E}
with $\Xint-_Y E^1 = \mathrm{e}_1$. On the other hand, for
$k \in \set{2,3}$, the field $H^k \colon Y \to \mathbb{C}^3$,
$H^k(y) \coloneqq \mathrm{e}_k \indicator{Y \setminus \overline{\Sigma}}(y)$
is a solution to~\eqref{eq:cell problem H_0}. By the first part of the definition
of the geometric average, $\oint H^k = \mathrm{e}_k$ and hence
$\set{H^2, H^3}$ is a basis of the solution space to~\eqref{eq:cell
problem H_0}. Having bases for the solution spaces at hand, we
can compute $\varepsilon_{\operatorname{eff}}$ and $\mu_{\operatorname{eff}}$ defined in
\eqref{eq:definition eps eff} and \eqref{eq:definition mu eff}: we have that
$\varepsilon_{\operatorname{eff}} = \alpha^{-1} \operatorname{diag}(1, 0, 0)$ and
$\mu_{\operatorname{eff}} = \alpha \, \operatorname{diag}(0, 1, 1)$. An application of
Theorem~\ref{theorem:macroscopic equations} yields the effective
equations for the metal plate.
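Since the periodicity cell is the unit cube $Y = (-1/2, 1/2)^3$, the volume fraction is $\alpha = 1 - 2\gamma$ and the effective tensors follow directly. A minimal numerical sanity check; the helper \texttt{effective\_tensors} is our own illustration, not part of the paper:

```python
def effective_tensors(gamma):
    """Return (eps_eff, mu_eff) as 3x3 nested lists for the metal plate
    of half-width gamma; assumes the unit cell Y = (-1/2, 1/2)^3, so the
    air volume fraction is alpha = 1 - 2*gamma."""
    assert 0 < gamma < 0.5
    alpha = 1.0 - 2.0 * gamma                      # |Y \ closure(Sigma)|
    eps = [[1.0 / alpha, 0, 0], [0, 0, 0], [0, 0, 0]]   # alpha^{-1} diag(1,0,0)
    mu = [[0, 0, 0], [0, alpha, 0], [0, 0, alpha]]      # alpha diag(0,1,1)
    return eps, mu

# Example: gamma = 1/4 gives alpha = 1/2, hence eps_eff = diag(2,0,0)
# and mu_eff = diag(0, 1/2, 1/2).
eps, mu = effective_tensors(0.25)
```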
\begin{corollary}[Macroscopic equations for the metal plate]
Let $\Sigma$ be as in~\eqref{eq:definition metal plate} and let $\alpha \coloneqq \abs{Y \setminus
\overline{\Sigma}}$. For a sequence
$(E^{\eta}, H^{\eta})_{\eta}$, and limit fields $\hat{E}$ and $\hat{H}$ as
in Theorem~\ref{theorem:macroscopic equations}, the macroscopic equations read
\begin{subequations}\label{eq:effective equations for metal plate}
\begin{align}
\label{eq:effective equations for metal plate - 1}
\operatorname{curl} \, \hat{E} &= \phantom{-} \operatorname{i} \omega \mu_0 \hat{H} \phantom{-}\qquad \: \: \;\textrm{ in } \Omega \setminus R\, ,\\
\label{eq:effective equations for metal plate - 2}
\operatorname{curl} \, \hat{H}&= - \operatorname{i} \omega \varepsilon_0 \hat{E} \: \qquad
\:\:\;\phantom{-} \textrm{ in } \Omega \setminus R\, , \\
\label{eq:effective equations for metal plate - 6}
(\partial_3, -\partial_2 ) \hat{E}_1 &=\phantom{-} \operatorname{i} \omega \mu_0\alpha (\hat{H}_2, \hat{H}_3) \: \,\textrm{ in } R \, ,\\
\label{eq:effective equations for metal plate - 3}
\partial_2 \hat{H}_3- \partial_3 \hat{H}_2 &= - \operatorname{i} \omega \varepsilon_0 \alpha^{-1} \hat{E}_1\phantom{-}\quad \, \textrm{ in } R \, ,\\
\label{eq:effective equations for metal plate - 4}
\hat{E}_2=\hat{E}_3 &= 0 \qquad \qquad \quad \: \: \: \: \: \;\phantom{-}\textrm{ in } R\, , \\
\label{eq:effective equations for metal plate - 5}
\hat{H}_1&= 0 \qquad \qquad \quad \:\:\:\: \: \;\phantom{-} \textrm{ in } R \, .
\end{align}
\end{subequations}
\end{corollary}
\subsection{The air cylinder}\label{example:air cylinder}
To define the metallic box with a cylinder removed, we fix a number $r \in (0, 1/2)$ and set
\begin{equation*}
\Sigma \coloneqq Y \setminus \set{y = (y_1, y_2, y_3) \in Y \given y_1^2 + y_2^2 < r^2 }\, .
\end{equation*}
A sketch of the periodicity cell is given in Figure~\ref{fig:the air cylinder}. The air cylinder $Y \setminus \overline{\Sigma}$ is a simple Helmholtz domain. We obtain
\begin{center}
\begin{tabular}{ c | c | c | c }
$\NE$ & $d_E$ & $\mathcal{L}_{Y \setminus \overline{\Sigma}}$ & $d_H$\\ \hline
$\emptyset$ & $0$ & $\set{3}$ & $1$ \\
\end{tabular}\, .
\end{center}
Once again, we not only know the dimension of the solution space to~\eqref{eq:cell problem H_0} but also a basis. Indeed, the field $H^3 \colon Y \to \mathbb{C}^3$ given by $H^3(y) \coloneqq \mathrm{e}_3 \indicator{Y \setminus \overline{\Sigma}}(y)$ is a solution to~\eqref{eq:cell problem H_0}. We can thus compute $\varepsilon_{\operatorname{eff}}$ and $\mu_{\operatorname{eff}}$; for $\alpha \coloneqq \abs{Y \setminus \overline{\Sigma}}$, we find that $\varepsilon_{\operatorname{eff}} = 0$ and $\mu_{\operatorname{eff}} =\alpha \, \operatorname{diag}(0, 0, 1)$.
Although the solution space to the cell problem of $H_0$ is not
trivial, there is only the trivial solution to the effective equations
in $R$; that is,
\begin{equation*}
\hat{E} = \hat{H}= 0 \quad \textrm{ in } R\, .
\end{equation*}
Note that $\hat{H}_3= 0$ is a consequence
of~\eqref{eq:MaxSeq1-intro} and $\hat{E} = 0$ in $R$.
\begin{remark}\label{remark:effective equations of air ball}
Instead of the air cylinder, we can also consider the air ball (see
Figure~\ref{fig:the air ball} for a sketch) and find that there are
also only the trivial solutions $\hat{E} = \hat{H}= 0$ in $R$.
\end{remark}
\begin{figure}
\centering
\begin{subfigure}[t]{0.3\textwidth}
\begin{tikzpicture}[scale=2.5]
\coordinate (A) at (-0.5, -0.5, 0.5);
\coordinate (B) at (0.5,-0.5,0.5);
\coordinate (C) at (0.5,0.5,0.5);
\coordinate (D) at (-0.5, 0.5, 0.5);
\coordinate (E) at (-0.5, 0.5, -0.5);
\coordinate (F) at (0.5, 0.5, -0.5);
\coordinate (G) at (0.5, -0.5, -0.5);
\coordinate (H) at (-0.5, -0.5, -0.5);
\fill[gray!75!white, opacity=.9] (A) -- (B) -- (C) -- (D) -- cycle;
\fill[gray!75!white, opacity=.9] (E) -- (F) -- (G) --(H) -- cycle;
\fill[gray!75!white, opacity=.9] (D) -- (E) -- (H) -- (A)-- cycle;
\fill[gray!75!white, opacity=.9] (B) -- (C) -- (F) -- (G) -- cycle;
\fill[gray!20!white] (-.25, -.5) -- (-.25,.5)
arc(180:0:0.25cm and 0.125cm) -- (0.25, -0.5)
(-0.25, -.5) arc(180:360:0.25cm and 0.125cm);
\filldraw[gray!20!white] (-.25,-.5) -- (0.25,-.5);
\draw[] (A) -- (B) -- (C) -- (D) -- cycle (E) -- (D) (F) -- (C) (G) -- (B);
\draw[] (E) -- (F) -- (G) ;
\draw[densely dashed] (E) -- (H) (H) -- (G) (H) -- (A);
\draw[] (-.25,0.5) arc (180:-180:0.25cm and 0.125cm);
\draw[dashed] (-.25,-.5) arc (180:360:0.25cm and 0.125cm);
\draw[dashed] (-.25,-.5) arc (180:0:0.25cm and 0.125cm);
\draw[dashed] (-0.25, -.5) -- (-0.25, .5);
\draw[dashed] (.25,-.5) -- (.25,.5);
\draw[->] (-.8, -.7, .5) -- (-.4, -.7, .5);
\draw[->] (-.8, -.7, .5) -- (-.8, -.3, .5);
\draw[->] (-.8, -.7, .5) -- (-.8, -.7, .2);
\node[] at (-.25, .1, .55) {$\Sigma$};
\node[] at (-.4, -.8, .5){$\mathrm{e}_1$};
\node[] at (-.68, -.3, .5){$\mathrm{e}_3$};
\node[] at (-.8, -.8, -.1){$\mathrm{e}_2$};
\end{tikzpicture}
\caption{ }
\label{fig:the air cylinder}
\end{subfigure}
\begin{subfigure}[t]{0.3\textwidth}
\begin{tikzpicture}[scale=2.5]
\coordinate (A) at (-0.5, -0.5, 0.5);
\coordinate (B) at (0.5,-0.5,0.5);
\coordinate (C) at (0.5,0.5,0.5);
\coordinate (D) at (-0.5, 0.5, 0.5);
\coordinate (E) at (-0.5, 0.5, -0.5);
\coordinate (F) at (0.5, 0.5, -0.5);
\coordinate (G) at (0.5, -0.5, -0.5);
\coordinate (H) at (-0.5, -0.5, -0.5);
\fill[gray!20!white, opacity=.5] (A) -- (B) -- (C) -- (D) -- cycle;
\fill[gray!20!white, opacity=.5] (E) -- (F) -- (G) --(H) -- cycle;
\fill[gray!20!white, opacity=.5] (D) -- (E) -- (H) -- (A)-- cycle;
\fill[gray!20!white, opacity=.5] (B) -- (C) -- (F) -- (G) -- cycle;
\fill[gray!75!white] (-.25, -.5) -- (-.25,.5)
arc(180:0:0.25cm and 0.125cm) -- (0.25, -0.5)
(-0.25, -.5) arc(180:360:0.25cm and 0.125cm);
\filldraw[gray!75!white] (-.25,-.5) -- (0.25,-.5);
\draw[] (A) -- (B) -- (C) -- (D) -- cycle (E) -- (D) (F) -- (C) (G) -- (B);
\draw[] (E) -- (F) -- (G) ;
\draw[densely dashed] (E) -- (H) (H) -- (G) (H) -- (A);
\draw[] (-.25,0.5) arc (180:-180:0.25cm and 0.125cm);
\draw[dashed] (-.25,-.5) arc (180:360:0.25cm and 0.125cm);
\draw[dashed] (-.25,-.5) arc (180:0:0.25cm and 0.125cm);
\draw[dashed] (-0.25, -.5) -- (-0.25, .5);
\draw[dashed] (.25,-.5) -- (.25,.5);
\draw[->] (-.8, -.7, .5) -- (-.4, -.7, .5);
\draw[->] (-.8, -.7, .5) -- (-.8, -.3, .5);
\draw[->] (-.8, -.7, .5) -- (-.8, -.7, .2);
\node[] at (0,0,0) {$\Sigma$};
\node[] at (-.4, -.8, .5){$\mathrm{e}_1$};
\node[] at (-.68, -.3, .5){$\mathrm{e}_3$};
\node[] at (-.8, -.8, -.1){$\mathrm{e}_2$};
\end{tikzpicture}
\caption{ }
\label{fig:the metal cylinder}
\end{subfigure}
\begin{subfigure}[t]{0.3\textwidth}
\begin{tikzpicture}[scale=2.5]
\coordinate (A) at (-0.5, -0.5, 0.5);
\coordinate (B) at (0.5,-0.5,0.5);
\coordinate (C) at (0.5,0.5,0.5);
\coordinate (D) at (-0.5, 0.5, 0.5);
\coordinate (E) at (-0.5, 0.5, -0.5);
\coordinate (F) at (0.5, 0.5, -0.5);
\coordinate (G) at (0.5, -0.5, -0.5);
\coordinate (H) at (-0.5, -0.5, -0.5);
\fill[gray!20!white, opacity=.5] (A) -- (B) -- (C) -- (D) -- cycle;
\fill[gray!20!white, opacity=.5] (E) -- (F) -- (G) --(H) -- cycle;
\fill[gray!20!white, opacity=.5] (D) -- (E) -- (H) -- (A)-- cycle;
\fill[gray!20!white, opacity=.5] (B) -- (C) -- (F) -- (G) -- cycle;
\draw[rotate=0] (0,0) ellipse (15pt and 7.5pt);
\fill[gray!75!white] (0,0) ellipse (15pt and 7.5pt);
\draw[black] (-.2,0,0) to[bend left] (.2,0,0);
\fill[gray!20!white] (-.2,0,0) to[bend left] (.2,0,0);
\fill[gray!20!white] (-.2,0, 0) to[out=346, in=198] (.2,0,0);
\draw[black] (-.35,.07) to[bend right] (.35,.07);
\draw[] (A) -- (B) -- (C) -- (D) -- cycle (E) -- (D) (F) -- (C) (G) -- (B);
\draw[] (E) -- (F) -- (G) ;
\draw[densely dashed] (E) -- (H) (H) -- (G) (H) -- (A);
\draw[->] (-.8, -.7, .5) -- (-.4, -.7, .5);
\draw[->] (-.8, -.7, .5) -- (-.8, -.3, .5);
\draw[->] (-.8, -.7, .5) -- (-.8, -.7, .2);
\node[] at (-0.15, -.13, 0) {$\Sigma$};
\node[] at (-.4, -.8, .5){$\mathrm{e}_1$};
\node[] at (-.68, -.3, .5){$\mathrm{e}_3$};
\node[] at (-.8, -.8, -.1){$\mathrm{e}_2$};
\end{tikzpicture}
\caption{ }
\label{fig:torus as metal part}
\end{subfigure}
\caption{The periodicity cell $Y$ is represented by the cube. (a) The
air cylinder $Y \setminus \overline{\Sigma}$ of Example~\ref{example:air
cylinder}. (b) The metal cylinder $\Sigma$ of Example~\ref{example:metal cylinder}. (c) $\Sigma$ is a $2$-dimensional full torus. In this case, $Y \setminus \overline{\Sigma}$ is not a simple Helmholtz domain.}
\end{figure}
\subsection{The metal cylinder}\label{example:metal cylinder}
Fix a number $r \in (0, 1/2)$. To model a metallic cylinder, $\Sigma$ is defined as the set
\begin{equation}\label{eq:definition metal cylinder}
\Sigma \coloneqq \set{y = (y_1, y_2, y_3) \in Y \given y_1^2 + y_2^2 < r^2 }\, .
\end{equation}
A sketch of the periodicity cell is given in Figure~\ref{fig:the metal cylinder}.
We claim that $Y \setminus \overline{\Sigma}$ is a simple Helmholtz domain.
Indeed, there are only the three standard nontrivial loops in $Y
\setminus \overline{\Sigma}$; namely, $\gamma_1, \gamma_2$, and
$\gamma_3$, which are given by $\gamma_1(t) \coloneqq (t - 1/2, -1/2,
-1/2)$, $\gamma_{2}(t) \coloneqq (-1/2, t-1/2, -1/2)$, and
$\gamma_3(t) \coloneqq (-1/2, -1/2, t-1/2)$ for $t \in [0, 1]$. Thus, every $L_{\sharp}^2(Y; \mathbb{C}^3)$-vector field $u$ that is curl free in $Y \setminus \overline{\Sigma}$ can be written as $u = \nabla \Theta + c_0$ for $\Theta \in H_{\sharp}^1(Y \setminus \overline{\Sigma})$ and $c_0 \in \mathbb{C}^3$.
We find the table
\begin{center}
\begin{tabular}{ c | c | c | c }
$\NE$ & $d_E$ & $\mathcal{L}_{Y \setminus \overline{\Sigma}}$ & $d_H$\\ \hline
$\set{1,2}$ & $2$ & $\set{1,2,3}$ & $3$ \\
\end{tabular}\, .
\end{center}
As for the metal plate, we find an interesting non-trivial limit system.
\begin{corollary}[Macroscopic equations for the metal cylinder]
For $\Sigma$ as in~\eqref{eq:definition metal cylinder}, a
sequence $(E^{\eta}, H^{\eta})_{\eta}$, and limit fields $\hat{E}$ and $\hat{H}$ as in Theorem~\ref{theorem:macroscopic equations}, the macroscopic equations read
\begin{subequations}\label{eq:effective equations for metal cylinder}
\begin{align}
\label{eq:effective equations for metal cylinder - 1}
\operatorname{curl} \, \hat{E} &= \phantom{-} \operatorname{i} \omega \mu_0 \hat{\mu} \hat{H} \quad \:\:\:\, \textrm{ in } \Omega \, ,\\
\label{eq:effective equations for metal cylinder - 2}
\operatorname{curl} \, \hat{H}&= - \operatorname{i} \omega \varepsilon_0 \hat{E} \: \qquad \:\: \textrm{ in } \Omega \setminus R\, , \\
\label{eq:effective equations for metal cylinder - 3}
\partial_2 \hat{H}_3- \partial_3 \hat{H}_2 &= - \operatorname{i} \omega \varepsilon_0 (\hat{\varepsilon} \hat{E})_1 \quad \textrm{ in } \Omega \, ,\\
\label{eq:effective equations for metal cylinder - 4}
\partial_3 \hat{H}_1 - \partial_1 \hat{H}_3 &= - \operatorname{i} \omega \varepsilon_0 (\hat{\varepsilon} \hat{E})_2\quad \textrm{ in } \Omega\, , \\
\label{eq:effective equations for metal cylinder - 5}
\hat{E}_3 &= 0 \qquad \qquad \quad \:\:\: \, \textrm{ in } R\, .
\end{align}
\end{subequations}
\end{corollary}
\begin{proof}
We have that $\mathcal{L}_{Y \setminus \overline{\Sigma}} = \set{1,2,3}$, $\NE = \set{1,2}$, and $\hat{\varepsilon} = \operatorname{id}_3$ in $\Omega \setminus R$. Thus,~\eqref{eq:effective equations for metal cylinder - 1} and~\eqref{eq:effective equations for metal cylinder - 2} follow from~\eqref{eq:effective equations 1 - derivation of effective equations} and~\eqref{eq:effective equations 3 - derivation of effective equations}, respectively. Equations~\eqref{eq:effective equations for metal cylinder - 3} and~\eqref{eq:effective equations for metal cylinder - 4} follow from~\eqref{eq:effective equations 2 - derivation of effective equations}, and~\eqref{eq:effective equations for metal cylinder - 5} is a consequence of~\eqref{eq:effective equations 4 - derivation of effective equations}.
\end{proof}
\section*{Funding}
Support of both authors by DFG grant Schw 639/6-1 is gratefully
acknowledged.
\end{document}
\begin{document}
\begin{center}
\bf Geometric angle structures on triangulated surfaces
\end{center}
\begin{center}
Ren Guo
\end{center}
\noindent {\bf Abstract} In this paper we characterize a function
defined on the set of edges of a triangulated surface such that there is a
spherical angle structure having the function as the edge invariant (or Delaunay invariant).
We also characterize a function such that there is a hyperbolic angle structure
having the function as the edge invariant.
\noindent \S 1. {\bf Introduction}
Suppose $S$ is a closed surface and $T$ is a triangulation of $S$.
Here by a triangulation we mean the following: take a finite
collection of triangles and identify their edges in pairs by
homeomorphism. Let $V, E, F$ be the sets of all vertices, edges
and triangles in $T$ respectively. If $a, b$ are two simplices in
triangulation $T$, we use $a<b$ to denote that $a$ is a face of
$b$. Let $C(S,T)=\{ (e, f) | e \in E, f \in F,$ such that $e <
f\}$ be the set of all \it corners \rm of the triangulation. An \it
angle structure \rm on a triangulated surface $(S,T)$ assigns each
corner of $(S,T)$ a number in $(0, \pi)$. A \it Euclidean (or
hyperbolic, or spherical) angle structure \rm is an angle
structure so that each triangle with the angle assignment is
Euclidean (or hyperbolic, or spherical). More precisely, a
Euclidean angle structure is a map $x: C(S,T)\to (0, \pi)$
assigning every corner $i$ (for simplicity of notation, we use one
letter to denote a corner) a positive number $x_i$ such that
$x_i+x_j+x_k=\pi$ whenever $i,j,k$ are three corners of a
triangle. A hyperbolic angle structure is a map $x: C(S,T)\to (0,
\pi)$ such that $x_i+x_j+x_k<\pi$. A spherical angle structure is
a map $x: C(S,T)\to (0, \pi)$ such that
\begin{equation} \label{1}
\left\{
\begin{array}{ccc}
x_i+x_j+x_k>\pi\\
x_j+x_k-x_i<\pi.
\end{array}
\right. \end{equation}
Actually it is proved in {\bf [B]} that positive numbers
$x_i,x_j,x_k$ are three inner angles of a spherical triangle if
and only if they satisfy conditions $(1)$.
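Conditions $(1)$ are straightforward to test numerically. A minimal sketch (the function name is our own; we check the second inequality of $(1)$ for each of the three corners, which we take to be the intended symmetric reading):

```python
import math

def is_spherical_angle_triple(xi, xj, xk):
    """Test conditions (1): each angle in (0, pi), angle sum > pi, and
    x_j + x_k - x_i = s - 2*x_i < pi for every corner i."""
    angles = (xi, xj, xk)
    if not all(0.0 < a < math.pi for a in angles):
        return False
    s = xi + xj + xk
    return s > math.pi and all(s - 2.0 * a < math.pi for a in angles)

# The spherical triangle with three right angles qualifies; the flat
# equilateral triple (pi/3, pi/3, pi/3) has angle sum exactly pi and fails.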
Given an angle structure $x: C(S,T)\to (0, \pi)$, we define its
\it edge invariant \rm which is a function $D_x: E \to (0,2\pi)$
such that $D_x(e)=x_i+x_{i'}$ where $i=(e,f),i'=(e,f')$ are two
opposite corners facing the edge $e$. We also define its \it
Delaunay invariant \rm which is a function $\mathcal{D}_x: E \to
(-2\pi,2\pi)$ such that
$\mathcal{D}_x(e)=x_j+x_k+x_{j'}+x_{k'}-x_i-x_{i'}$ where
$i=(e,f),i'=(e,f')$ are two opposite corners facing the edge $e$
and $j,k$(or $j',k'$) are the other two corners of the triangle
$f$ (or $f'$).
For simplicity of notation, we use $G$ to denote a fixed geometry,
where $G=E,H$ or $S$ means the Euclidean, hyperbolic or spherical geometry
respectively. Now given a function $D: E \to (0,2\pi)$ (or
$\mathcal{D}: E \to (-2\pi,2\pi)$), we use $AG(S,T;D)$ (or $AG(S,T;\mathcal{D})$)
to denote the set of all $G$ angle structures having $D$ (or
$\mathcal{D}$) as the edge (or Delaunay) invariant.
The motivation of considering these sets is the study of \it
geometric cone metrics \rm with prescribed edge invariant or
Delaunay invariant on triangulated surfaces from the variational point of view.
A \it Euclidean (or
hyperbolic, or spherical) cone metric \rm assigns each edge in $T$
a positive number such that the numbers on any three edges of a triangle in $T$
form the three edge lengths of a Euclidean (or
hyperbolic, or spherical) triangle. The variational method contains a variational problem
and a linear programming problem. The variational problem is to
show that the unique maximal point of a convex ``capacity'' defined on the
set $AG(S,T;D)$ (or $AG(S,T;\mathcal{D})$) gives the unique geometric
cone metric. The linear programming problem is to characterize the function $D$ (or
$\mathcal{D}$) such that the set $AG(S,T;D)$ (or $AG(S,T;\mathcal{D})$) is nonempty.
For Euclidean angle
structures, the Delaunay invariant and the edge invariant are
related by $2D_x(e)+\mathcal{D}_x(e)=2\pi$ for any $e$. Thus given
two functions $D$ and $\mathcal{D}$ satisfying
$2D(e)+\mathcal{D}(e)=2\pi$ for any $e$, we have
$AE(S,T;D)=AE(S,T;\mathcal{D}).$ Therefore the problem of
Euclidean cone metric with given edge invariant is equivalent to
the problem of Euclidean cone metric with given Delaunay invariant.
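The relation $2D_x(e)+\mathcal{D}_x(e)=2\pi$ can be spot-checked numerically; the angle values below are arbitrary admissible choices for the two triangles meeting at $e$, not from the paper:

```python
import math

# Two Euclidean triangles (i, j, k) and (i', j', k') glued along edge e:
# the third angle of each is determined by the angle sum pi.
xi, xj = 0.7, 1.1
xk = math.pi - xi - xj
xi2, xj2 = 0.9, 1.3
xk2 = math.pi - xi2 - xj2

D = xi + xi2                              # edge invariant D_x(e)
Dcal = xj + xk + xj2 + xk2 - xi - xi2     # Delaunay invariant Dcal_x(e)

# 2*D(e) + Dcal(e) = 2*pi for any Euclidean angle structure.
assert abs(2 * D + Dcal - 2 * math.pi) < 1e-12
```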
Rivin {\bf[Ri1]} {\bf[Ri2]} worked out the variational problem and the
linear programming problem about $AE(S,T;D).$
Leibon {\bf[Le]} worked out the variational problem and the
linear programming problem about $AH(S,T;\mathcal{D}).$
Luo {\bf[Lu]} worked out the variational problem about $AS(S,T;D)$;
the corresponding linear programming problem will be solved in this paper (theorem 1).
Although the variational problems about $AH(S,T;D)$ and $AS(S,T;\mathcal{D})$ are still open,
we will solve the linear programming problem about them in this paper (theorem 2 and 3).
The main results are the following.
For a triangulated surface $(S,T),$ a subset $X\subseteq F,$ we
use $|X|$ to denote the number of triangles in $X$ and we use
$E(X)$ to denote the set of all edges of triangles in $X.$
\noindent {\bf Theorem 1.} \it Given a triangulated surface
$(S,T)$ and a function $D: E\to (0,\pi)$, the set $AS(S,T;D)$ is
nonempty if and only if for any subset $X\subseteq F,$
$$\pi |X|< \sum _{e\in E(X)}D(e).$$ \rm
\noindent {\bf Theorem 2.} \it Given a triangulated surface
$(S,T)$ and a function $D: E\to (0,2\pi)$, the set $AH(S,T;D)$ is
nonempty if and only if for any subset $X\subset F,$
$$\pi(|F|-|X|)> \sum _{e\notin E(X)}D(e).$$ \rm
\noindent {\bf Theorem 3.} \it Given a triangulated surface
$(S,T)$ and a function $\mathcal{D}: E\to (-2\pi,2\pi)$, the set
$AS(S,T;\mathcal{D})$ is nonempty if and only if for any subset
$X\subset F,$
$$\pi(|F|-|X|)> \sum _{e\notin E(X)}(\pi-\frac12\mathcal{D}(e)).$$ \rm
The paper is organized as follows. In section 2, we prove theorem
1 by using Leibon's result. In section 3, we recall the duality
theorem in linear programming. In section 4, following Rivin's
method, we prove theorem 2 and 3 by using the duality theorem.
\noindent{\bf Acknowledgement} I wish to thank my advisor,
Professor Feng Luo, for suggesting this problem and for fruitful
discussion.
\noindent \S 2. {\bf Proof of theorem 1}
First let us recall Leibon's result on the characterization of the
function $\mathcal{D}$ such that the set $AH(S,T;\mathcal{D})$ is
nonempty.
\noindent {\bf Theorem 4.}(Leibon){\bf[Le]} \it Given a
triangulated surface $(S,T)$ and a function $\mathcal{D}: E\to
(0,2\pi)$, the set $AH(S,T;\mathcal{D})$ is nonempty if and only if
for any subset $X\subseteq F,$
$$\pi |X|< \sum _{e\in E(X)}(\pi-\frac12\mathcal{D}(e)).$$ \rm
\noindent {\bf Proof of theorem 1.} To show the conditions are
necessary, for any $X\subseteq F$, we have $\sum _{e\in
E(X)}D(e)=\sum_{e\in E(X)}(x_i+x_{i'}),$ where $i,i'$ are two
opposite corners facing the edge $e$. It turns out that the right
hand side of the equation is equal to $\sum_{f\in
X}(x_i+x_j+x_k)+\sum x_h,$ where the corner $h=(e,f^*)$ with $e\in
E(X)$ and $f^*\notin X.$ Hence $\sum _{e\in E(X)}D(e)\geq
\sum_{f\in X}(x_i+x_j+x_k)> \sum_{f\in X}\pi= \pi|X|.$
To show the conditions are sufficient, let us define a function
$\mathcal{D}: E \to (0,2\pi)$ by setting
$\mathcal{D}(e)=2\pi-2D(e).$ Thus the conditions $\pi |X|< \sum
_{e\in E(X)}D(e)$ are equivalent to $\pi |X|< \sum _{e\in
E(X)}(\pi-\frac12\mathcal{D}(e))$ which guarantee
$AH(S,T;\mathcal{D})$ is nonempty by theorem 4. It follows that
there is a solution for the inequalities
$$
\left\{
\begin{array}{ccc}
x_i+x_j+x_k< \pi & \ i,j,k\ \mbox{are three corners of a triangle}\\
x_j+x_k+x_{j'}+x_{k'}-x_i-x_{i'}=\mathcal{D}(e) \\
x_i> 0
\end{array}
\right.
$$
Let us define new variables $y_i$ for all $i \in C(S,T)$ by setting
$$y_i=\frac{\pi+x_i-x_j-x_k}{2}$$ provided $i,j,k$ are three corners
of a triangle. Since $\mathcal{D}(e)=2\pi-2D(e),$
the inequalities above are equivalent to
$$
\left\{
\begin{array}{ccc}
y_i+y_j+y_k> \pi& \ i,j,k\ \mbox{are three corners of a triangle}\\
y_i+y_{i'}=D(e)& \ i,i'\ \mbox{are two
opposite corners facing an edge}\ e \\
y_j+y_k< \pi & \ j,k\ \mbox{are two corners of
a triangle}\\
\end{array}
\right.
$$
This solution obviously satisfies
$$\left\{
\begin{array}{ccc}
y_i+y_j+y_k> \pi& \ i,j,k\ \mbox{are three corners of a triangle}\\
y_i+y_{i'}=D(e)& \ i,i'\ \mbox{are two
opposite corners facing an edge}\ e \\
y_j+y_k-y_i< \pi & \ i,j,k\ \mbox{are three corners
of a triangle}\\
y_i> 0\\
\end{array}
\right.
$$
Thus we obtain an angle structure in $AS(S,T;D)$. QED
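The identities behind this change of variables can be verified numerically within a single triangle; the sample angles below are our own illustration, not from the paper:

```python
import math

# A hyperbolic angle triple: x_i + x_j + x_k < pi.
x = (0.5, 0.6, 0.7)

# The substitution y_i = (pi + x_i - x_j - x_k) / 2 from the proof.
y = tuple((math.pi + x[i] - x[(i + 1) % 3] - x[(i + 2) % 3]) / 2
          for i in range(3))

assert sum(x) < math.pi
# sum(y) = (3*pi - sum(x)) / 2, so sum(x) < pi forces sum(y) > pi ...
assert sum(y) > math.pi
# ... and y_j + y_k = pi - x_i < pi since every x_i > 0.
assert all(sum(y) - y[i] < math.pi for i in range(3))
```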
\noindent \S 3. {\bf Duality Theorem}
We fix the notations as follows: $x=(x_1, ..., x_n)^t$ is a column
vector in $ \mathbf{R}^n$. The standard inner product in $
\mathbf{R}^n$ is denoted by $a^t x$. If $A: \mathbf{R}^n \to
\mathbf{R}^m$ is a linear transformation, we denote its transpose
by $A^t: \mathbf{R}^m \to \mathbf{R}^n.$ Given two vectors $x, a$
in $ \mathbf{R}^n$, we say $x \geq a$ if $x_i \geq a_i$ for all
indices $i$. Also $x>a$ means $x_i > a_i$ for all indices $i$.
A linear programming problem $(P)$ is to minimize an \it objective
function \rm $z = a^tx$ subject to the \it constraint conditions\rm
$$
\left\{
\begin{array}{ccc}
Ax=b\\
x \geq 0
\end{array}
\right.
$$ where $x \in \mathbf{R}^n$, $b\in \mathbf{R}^m$ and $A:
\mathbf{R}^n \to \mathbf{R}^m$ is a linear transformation. We call
a point $x$ satisfying the constraint conditions a \it feasible
solution \rm and denote the set of all the feasible solutions by
$D(P)=\{x \in \mathbf{R}^n|Ax=b,x \geq 0\}.$ An \it optimal
solution \rm $x$ for $(P)$ is a feasible solution so that the
objective function $z$ realizes the minimal value. The \it dual
problem \rm $(P^*)$ of $(P)$ is to maximize $z = b^t y$ subject to
$ A^t y \leq a , y\in \mathbf{R}^m$. Let us recall the duality
theorem in linear programming. The proof of the theorem can be
found in the book {\bf[KB]}.
\noindent {\bf Theorem 5.} \it The following statements are
equivalent.
\noindent (a) Problem (P) has an optimal solution.
\noindent (b) $D(P) \neq \emptyset$ and $D(P^*) \neq \emptyset$.
\noindent (c) Both problem $(P)$ and problem $(P^*)$ have optimal solutions so
that the minimal value of $(P)$ is equal to the maximal value of
$(P^*)$. \rm
In the applications we are interested in, there is a special case
where the objective function of $(P)$ is $z = 0$. Then an optimal
solution exists if and only if $D(P) \neq \emptyset$, and we
obtain the following corollary.
\noindent {\bf Corollary 6.} For $A: \mathbf{R}^n \to
\mathbf{R}^m$ and $b\in \mathbf{R}^m,$ the set $\{ x \in
\mathbf{R}^n |Ax=b, x \geq 0\} \neq \emptyset$ if and only if the
maximal value of $z = b^ty$ on $\{ y \in \mathbf{R}^m | A^t y \leq 0\}$ is
non-positive.
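Corollary 6 can be illustrated on a toy one-constraint instance (a hypothetical example of ours, not from the paper): for $A = (1, 1)$ and $b = 1$, the primal set $\{x \geq 0 : x_1 + x_2 = 1\}$ is nonempty, and the dual objective $b^t y = y$ on $\{y : A^t y \leq 0\} = \{y \leq 0\}$ attains its maximum $0$, which is non-positive, as the corollary predicts.

```python
def primal_feasible(x1, x2):
    """Membership test for {x >= 0 : x1 + x2 = 1} (A = (1,1), b = 1)."""
    return abs(x1 + x2 - 1.0) < 1e-12 and x1 >= 0 and x2 >= 0

# The primal set is nonempty: e.g. x = (0.3, 0.7).
assert primal_feasible(0.3, 0.7)

# Dual side: b^t y = y over a sample of the feasible set {y <= 0};
# the supremum is attained at y = 0 and is non-positive.
dual_max = max(y for y in (0.0, -0.5, -1.0))
assert dual_max <= 0
```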
\noindent \S 4. {\bf Proofs of theorems 2 and 3}
By following Rivin's method in {\bf [Ri2]}, we will prove a lemma
about the closure of $AH(S,T;D)$ in $ \mathbf{R}^{3|F|}=\{(x_i)^t, i\in C(S,T)\}$. The
closure of $AH(S,T;D)$ consists of all the points satisfying
$$
\left\{
\begin{array}{ccc}
x_i+x_j+x_k \leq \pi& \ i,j,k\ \mbox{are three corners of a triangle}\\
x_i+x_{i'}=D(e) & \ i,i'\ \mbox{are two opposite corners facing an edge}\ e\\
x_i\geq 0 \\
\end{array}
\right.
$$
\noindent {\bf Lemma 7.} Given a triangulated surface $(S,T)$ and
a function $D: E\to [0,2\pi]$, the closure of $AH(S,T;D)$ is
nonempty if and only if for any subset $X\subset F,$
$$\pi(|F|-|X|)\geq \sum _{e\notin E(X)}D(e).$$ \rm
\noindent {\bf Proof.} The linear programming problem $(P)$ with
variables $x=(...,x_i,...,t_f,...)$ indexed by $C(S,T)\cup F$ is
to minimize the objective function $z = 0$ subject to the constraint
conditions
$$
\left\{
\begin{array}{ccc}
x_i+x_j+x_k+ t_f=\pi& \ i,j,k\ \mbox{are three corners of a triangle} f\\
x_i+x_{i'}=D(e)& \ i,i'\ \mbox{are two opposite
corners facing an edge}\ e\\
x_i\geq 0 \\
t_f\geq 0
\end{array}
\right.
$$
The dual problem $(P^*)$ with variable $y =( ...,y_f, ..., y_e,
...)$ indexed by $E \cup F$ is to maximize the objective function
$z = \sum_{f \in F} \pi y_f+ \sum_{e \in E} D(e) y_e$ subject to
the constraint conditions
$$
\left \{
\begin{array}{ccc}
y_f \leq 0&\\
y_f + y_e \leq 0&\ \mbox{whenever}\ e < f.
\end{array}
\right.
$$
Since the closure of $AH(S,T;D)$ being nonempty is equivalent to the
set $D(P)$ being nonempty, by corollary 6, the latter is equivalent
to the maximal value of the objective function of $(P^*)$ being
non-positive.
To show the conditions $\pi(|F|-|X|)\geq \sum _{e\notin E(X)}D(e)$
for any $X\subset F$ are necessary, for any $X\subset F,$ let
$$y_f=
\left\{
\begin{array}{ccc}
0&\mbox{if}\ f\in X \\
-1&\mbox{if}\ f\notin X
\end{array}
\right.\ \mbox{and}\ y_e= \left\{
\begin{array}{ccc}
0&\mbox{if}\ e\in E(X) \\
1&\mbox{if}\ e\notin E(X)
\end{array}
\right.
$$
We claim that $(y_f, y_e)$ is a feasible solution. In fact, given
a pair $e<f,$ if $f\in X$, we must have $e\in E(X)$, then $y_f +
y_e=0.$ If $f\notin X$, then $y_f + y_e=-1+y_e \leq 0$.
By the assumption that the maximal value of the objective function
of $(P^*)$ is non-positive, since $(y_f, y_e)$ is feasible, we
have $0\geq z(y_f, y_e) = \sum_{f \notin X} \pi y_f+ \sum_{e
\notin E(X)} D(e) y_e = \pi(|X|-|F|)+ \sum _{e\notin E(X)}D(e).$
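The feasibility claim for $(y_f, y_e)$ can be checked mechanically on a small incidence structure; the faces and edges below are a hypothetical toy example, not a real triangulation:

```python
# Toy incidence structure: faces F = {0, 1}, edges E = {0, 1, 2},
# with face -> list of its edges encoding the relation e < f.
incidence = {0: [0, 1], 1: [1, 2]}
X = {0}                                          # a subset of F
EX = {e for f in X for e in incidence[f]}        # E(X) = {0, 1}

# The assignment from the necessity argument: y_f = 0 on X, -1 off X;
# y_e = 0 on E(X), 1 off E(X).
y_f = {f: (0 if f in X else -1) for f in incidence}
y_e = {e: (0 if e in EX else 1) for e in (0, 1, 2)}

# Dual feasibility: y_f <= 0, and y_f + y_e <= 0 whenever e < f.
assert all(y_f[f] <= 0 for f in incidence)
assert all(y_f[f] + y_e[e] <= 0 for f in incidence for e in incidence[f])
```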
To show the conditions are sufficient, take an arbitrary feasible
solution $(y_f, y_e)$. If $y_f=0$ for all $f$, from $y_f+y_e\leq
0$, we know $y_e\leq 0$. Hence $z(y_f,y_e) = \sum _{e\in
E}D(e)y_e \leq 0$, since $D(e)\in [0,2\pi].$ Otherwise, define
$X=\{f\in F | y_f = 0\}\subset F$, and let $a=\max\{y_f : f\notin
X\}$. We have $a < 0$. Define
$$y_f^{(1)}=
\left\{
\begin{array}{ccc}
y_f=0&\mbox{if}\ f\in X \\
y_f-a&\mbox{if}\ f\notin X
\end{array}
\right.\ \mbox{and}\ y_e^{(1)}= \left\{
\begin{array}{ccc}
y_e&\mbox{if}\ e\in E(X) \\
y_e+a&\mbox{if}\ e\notin E(X)
\end{array}
\right.
$$
We claim that $(y_f^{(1)},y_e^{(1)})$ is a feasible solution. In
fact, $y_f^{(1)}\leq 0$. Given a pair $e<f,$ if $f\in X$, we must
have $e\in E(X)$, and then $y_f^{(1)}+y_e^{(1)}=y_f+y_e\leq 0$. If $f\notin X$
and $e\notin E(X)$, then $y_f^{(1)}+y_e^{(1)}=y_f-a+y_e+a\leq 0$. If $f\notin
X$ but $e\in E(X)$, there exists another triangle $f'\in X$ with
$e<f'$; since $y_{f'}=0$, feasibility gives $y_e=y_e+y_{f'}\leq 0$. Therefore
$y_f^{(1)}+y_e^{(1)} = y_f-a+y_e\leq y_f-a \leq 0$, since $a$ is
the maximum.
Now the value of the objective function is
$z(y_f^{(1)},y_e^{(1)})= z(y_f,y_e)+ a(\pi (|X|-|F|)+\sum
_{e\notin E(X)}D(e))\geq z(y_f,y_e)$, according to the conditions.
Note that the number of $0$'s in $\{y_f^{(1)}\}$ is greater than that in
$\{y_f\}$. Repeating the procedure, after finitely many steps it
terminates at a feasible solution $(y_f^{(n)}=0,y_e^{(n)})$, for which
$z(y_f^{(n)},y_e^{(n)})\leq 0$. Since the value of the objective
function does not decrease at any step, $0\geq
z(y_f^{(n)},y_e^{(n)})\geq \ldots \geq z(y_f^{(1)},y_e^{(1)}) \geq
z(y_f, y_e)$. QED
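The iterative shift in the sufficiency argument can also be traced numerically. The sketch below again uses the tetrahedron boundary as an assumed example, with $D(e)=1.5$ chosen so that the conditions $\pi(|F|-|X|)\geq\sum_{e\notin E(X)}D(e)$ hold; starting from a feasible dual point with all $y_f<0$, each shift by $a=\max\{y_f: f\notin X\}$ is verified to preserve feasibility while the objective does not decrease.

```python
from itertools import combinations
import math

# Boundary of the tetrahedron (assumed example)
V, pi = range(4), math.pi
F = [frozenset(c) for c in combinations(V, 3)]
E = [frozenset(c) for c in combinations(V, 2)]
D = {e: 1.5 for e in E}   # chosen so pi(|F|-|X|) >= sum_{e not in E(X)} D(e)

def z(yf, ye):
    return sum(pi * yf[f] for f in F) + sum(D[e] * ye[e] for e in E)

def feasible(yf, ye):
    return (all(yf[f] <= 0 for f in F) and
            all(yf[f] + ye[e] <= 0 for e in E for f in F if e < f))

# an arbitrary feasible starting point with every y_f < 0
yf = {f: -float(i + 1) for i, f in enumerate(F)}
ye = {e: 1.0 for e in E}
assert feasible(yf, ye)
values = [z(yf, ye)]
while any(v < 0 for v in yf.values()):
    X = {f for f in F if yf[f] == 0}
    EX = {e for e in E for f in X if e < f}
    a = max(yf[f] for f in F if f not in X)          # a < 0
    yf = {f: yf[f] if f in X else yf[f] - a for f in F}
    ye = {e: ye[e] if e in EX else ye[e] + a for e in E}
    assert feasible(yf, ye)                          # the shift stays feasible
    values.append(z(yf, ye))
steps = len(values) - 1
# the objective is non-decreasing and ends non-positive
assert all(values[i] <= values[i + 1] + 1e-9 for i in range(steps))
assert values[-1] <= 0
```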
\noindent {\bf Proof of theorem 2.} Let $x_i=a_i+\varepsilon$ for any $i \in C(S,T),$
where $a_i\geq 0$ and $\varepsilon\geq 0.$ The linear programming
problem $(P)$ with variables $\{\ldots,a_i,\ldots,\varepsilon\}$ is to
minimize the objective function $z = -\varepsilon$ subject to the
constraints
$$
\left\{
\begin{array}{ccc}
a_i+a_j+a_k+3\varepsilon\leq\pi& \ i,j,k\ \mbox{are three corners of a triangle}\\
a_i+a_j+2\varepsilon=D(e)& \ i,j\ \mbox{are two
opposite corners facing an edge}\ e \\
a_i\geq 0\\
\varepsilon \geq 0
\end{array}
\right.
$$
The dual problem $(P^*)$ with variables $y =( \ldots,y_f, \ldots, y_e,
\ldots)$ indexed by $E \cup F$ is to maximize the objective function
$z = \sum_{f \in F} \pi y_f+ \sum_{e \in E} D(e) y_e$ subject to
the constraints
$$
\left\{
\begin{array}{ccc}
y_f \leq 0\\
y_f + y_e \leq 0&\ \mbox{whenever}\ e < f\\
3\sum_{f \in F} y_f + 2 \sum_{e \in E} y_e \leq -1
\end{array}
\right.
$$
By theorem 5(c), the maximal value of the objective function
of $(P^*)$ being negative is equivalent to the minimal value of
the objective function of $(P)$ being negative. The latter is
equivalent to the existence of a feasible solution with $a_i\geq 0,
\varepsilon> 0$, and therefore to the set $AH(S,T;D)$ being nonempty.
It remains to show that the maximal value of the objective
function of $(P^*)$ is negative if and only if
$\pi(|F|-|X|)> \sum _{e\notin E(X)}D(e)$ for any $X\subset F.$
To show the conditions are necessary, for any $X\subset F,$ we
have $2|E(X)|>3|X|$, that is, $2|E(X)|\geq 3|X|+1$. Let
$$y_f=
\left\{
\begin{array}{ccc}
0&\mbox{if}\ f\in X \\
-1&\mbox{if}\ f\notin X
\end{array}
\right.\ \mbox{and}\ y_e= \left\{
\begin{array}{ccc}
0&\mbox{if}\ e\in E(X) \\
1&\mbox{if}\ e\notin E(X)
\end{array}
\right.
$$
We claim that $(y_f, y_e)$ is a feasible solution. In fact, as in
lemma 7, we can check $y_f+ y_e\leq 0$ for any pair $e<f.$
Furthermore $$3\sum_{f \in F} y_f + 2 \sum_{e \in E} y_e
=3\sum_{f\notin X}(-1)+2\sum_{e\notin
E(X)}1=3(|X|-|F|)+2(|E|-|E(X)|)$$
$$=3|X|-2|E(X)|+2|E|-3|F|=3|X|-2|E(X)|\leq-1$$
since $2|E|=3|F|$. Now $(y_f,y_e)$ being feasible
implies that $z(y_f,y_e)<0$, which is equivalent to $\pi(|F|-|X|)>
\sum _{e\notin E(X)}D(e).$
To show the conditions are sufficient, note that by the proof of
lemma 7 the maximal value of the objective function of $(P^*)$ is
$\leq 0$ under the conditions. We show it cannot be $0$.
Assume that $(y_f,y_e)$ is a feasible solution satisfying
$z(y_f,y_e)=0.$ We claim that $y_f=0$ for all $f$. Otherwise, as
in the proof of lemma 7, we can find another feasible solution
$(y_f^{(1)},y_e^{(1)})$ and we can check that
$z(y_f^{(1)},y_e^{(1)}) = z(y_f,y_e)+ a(\pi (|X|-|F|)+\sum
_{e\notin E(X)}D(e)) > z(y_f,y_e)= 0$, according to the
conditions. This is a contradiction, since the maximal value of the
objective function of $(P^*)$ is $\leq 0.$
Now from $y_f=0$ for all $f$ we see $y_e\leq 0$. Since $0=z(y_f,y_e)=\sum_{e
\in E} D(e) y_e$ and $D(e)>0$, we get $y_e=0$ for all $e$ and therefore $(y_f,y_e)= (0,0).$
But $(y_f,y_e)= (0,0)$ does not satisfy $3\sum_{f \in F} y_f + 2
\sum_{e \in E} y_e \leq -1.$ This is a contradiction, since
$(y_f,y_e)$ was assumed to be a feasible solution. This proves that the
maximal value of the objective function of $(P^*)$ is negative.
QED
\noindent {\bf Proof of theorem 3.} Given two functions
$D:E\to (0, 2\pi)$ and $\mathcal{D}: E\to (-2\pi,2\pi)$ satisfying
$2D(e)+\mathcal{D}(e)=2\pi$ for any $e$, we claim that
$AH(S,T;D)\neq \emptyset$ is equivalent to $AS(S,T;\mathcal{D})\neq \emptyset.$
By this claim, theorem 3 follows
as a corollary of theorem 2.
In fact, $AS(S,T;\mathcal{D})$ is the set of solutions for the inequalities
$$
\left\{
\begin{array}{ccc}
x_i+x_j+x_k > \pi& \ i,j,k\ \mbox{are three corners of a triangle}\\
x_j+x_k-x_i < \pi & \ i,j,k\ \mbox{are three corners of a triangle}\\
x_j+x_k+x_{j'}+x_{k'}-x_i-x_{i'}=\mathcal{D}(e) & \ i,i'\ \mbox{are two opposite corners facing an edge}\ e \\
x_i > 0
\end{array}
\right.
$$
Let us define new variables $y_i$ for all $i\in C(S,T)$ by setting
$$y_i=\frac{\pi+x_i-x_j-x_k}{2}$$ provided $i,j,k$ are three corners
of a triangle. Since $2D(e)+\mathcal{D}(e)=2\pi,$ we see that the
inequalities above are equivalent to
$$
\left\{
\begin{array}{ccc}
y_i+y_j+y_k < \pi& \ i,j,k\ \mbox{are three corners of a triangle}\\
y_i> 0 \\
y_i+y_{i'}=D(e) & \ i,i'\ \mbox{are two opposite corners facing an edge}\ e\\
y_j+y_k<\pi & \ j,k\ \mbox{are two corners of a triangle}\\
\end{array}
\right.
$$
Since $y_i+y_j+y_k < \pi$ implies $y_j+y_k<\pi$, we can omit the
latter one. Equivalently, we get
$$
\left\{
\begin{array}{ccc}
y_i+y_j+y_k < \pi& \ i,j,k\ \mbox{are three corners of a triangle}\\
y_i> 0 \\
y_i+y_{i'}=D(e) & \ i,i'\ \mbox{are two opposite corners facing an edge}\ e\\
\end{array}
\right.
$$
Now the set of solutions of the inequalities above is exactly $AH(S,T;D).$
Thus we see that $AH(S,T;D)\neq \emptyset$ is equivalent to $AS(S,T;\mathcal{D})\neq \emptyset.$ QED
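The change of variables $y_i=(\pi+x_i-x_j-x_k)/2$ used above can be checked numerically. The sketch below, with randomly generated corner values (an assumed illustration, not tied to an actual triangulated surface), verifies that the triangle inequalities of the $AS$ system transform into those of the $AH$ system, and that the edge constraint value transforms as $y_i+y_{i'}=(2\pi-\mathcal{D}(e))/2=D(e)$.

```python
import math
import random

pi = math.pi
random.seed(0)

def corner(x_own, x_other1, x_other2):
    # the substitution y_i = (pi + x_i - x_j - x_k)/2 from the proof
    return (pi + x_own - x_other1 - x_other2) / 2

n_checked = 0
for _ in range(1000):
    xi, xj, xk = (random.uniform(0, 2 * pi) for _ in range(3))
    yi, yj, yk = corner(xi, xj, xk), corner(xj, xk, xi), corner(xk, xi, xj)
    # the AS triangle inequalities become the AH ones
    assert (yi + yj + yk < pi) == (xi + xj + xk > pi)
    assert (yi > 0) == (xj + xk - xi < pi)
    # glue a second triangle (i',j',k') along an edge e facing corners i, i'
    xi2, xj2, xk2 = (random.uniform(0, 2 * pi) for _ in range(3))
    yi2 = corner(xi2, xj2, xk2)
    Dcal = xj + xk + xj2 + xk2 - xi - xi2   # the AS edge value
    Dval = (2 * pi - Dcal) / 2              # since 2D(e) + Dcal(e) = 2*pi
    assert abs((yi + yi2) - Dval) < 1e-9
    n_checked += 1
```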
\noindent {\bf References}

[B] Marcel Berger, Geometry II. Springer-Verlag, 1987.

[KB] Bernard Kolman \& Robert Beck, Elementary
Linear Programming with Applications. Academic Press, 2nd edition,
1995.

[Le] Gregory Leibon, Characterizing the Delaunay
decompositions of compact hyperbolic surfaces. Geom. Topol.
6 (2002), 361--391.

[Lu] Feng Luo, A characterization of spherical
polyhedral surfaces.\\
http://front.math.ucdavis.edu/math.GT/0408112

[Ri1] Igor Rivin, Euclidean structures on simplicial
surfaces and hyperbolic volume. Ann. of Math. (2) 139 (1994), no.
3, 553--580.

[Ri2] Igor Rivin, Combinatorial optimization in
geometry. Adv. in Appl. Math. 31 (2003), no. 1, 242--271.
\noindent Department of Mathematics\\ Rutgers University\\
Piscataway, NJ 08854, USA
\noindent Email: renguo$@$math.rutgers.edu
\end{document}
\begin{document}
\title{On the role of the overall effect in exponential families}
\author{Anna Klimova \\
{\small{National Center for Tumor Diseases (NCT), Partner Site Dresden, and}}\\
{\small{Institute for Medical Informatics and Biometry,}}\\
{\small{Technical University, Dresden, Germany} }\\
{\small \texttt{[email protected]}}\\
{}\\
\and
Tam\'{a}s Rudas \\
{\small{Center for Social Sciences, Hungarian Academy of Sciences, and}}\\
{\small{Department of Statistics, E\"{o}tv\"{o}s Lor\'{a}nd University, Budapest, Hungary}}\\
{\small \texttt{[email protected]}}\\
}
\date{}
\maketitle
\begin{abstract}
\noindent Exponential families of discrete probability distributions when the normalizing constant (or overall effect) is added or removed are compared in this paper. The latter setup, in which the exponential family is curved, is particularly relevant when the sample space is an incomplete Cartesian product or when it is very large, so that the computational burden is significant. The lack or presence of the overall effect has a fundamental impact on the properties of the exponential family. When the overall effect is added, the family becomes the smallest regular exponential family containing the curved one. The procedure is related to the homogenization of an inhomogeneous variety discussed in algebraic geometry, of which a statistical interpretation is given as an augmentation of the sample space. The changes in the kernel basis representation when the overall effect is included or removed are derived. The geometry of maximum likelihood estimates, also allowing zero observed frequencies, is described with and without the overall effect, and various algorithms are compared. The importance of the results is illustrated by an example from cell biology, showing that routinely including the overall effect leads to estimates which are not in the model intended by the researchers.
\end{abstract}
\begin{keywords}
algebraic variety, contingency table, independence, log-linear model, maximum likelihood estimation, overall effect, relational model
\end{keywords}
\section{Introduction}
This paper deals with exponential families of probability distributions over discrete sample spaces. When defining such families, usually, a normalizing constant, which of course, is constant over the sample space but not over the family, is included. The presence of the normalizing constant implies that the parameter space may be an open set, which, in turn, is necessary for asymptotic normality of estimates and for the applicability of standard testing procedures. The normalizing constant, from an applied perspective, may be interpreted as a baseline or common effect, present everywhere on the sample space and is, therefore, also called the overall effect. The focus of the present work is to better understand the implications of having or not having an overall effect in such families, in particular how adding or removing the overall effect affects the properties of discrete exponential families.
Motivated by a number of important applications, \cite{KRD11, KRbm, KRextended} developed the theory of relational models, which generalize discrete exponential families, also called log-linear models, to situations when the sample space is not necessarily a full Cartesian product, the statistics defining the exponential family are not necessarily indicators of cylinder sets, and the overall effect is not necessarily present. Exponential families without the overall effect are particularly relevant, sometimes necessary, when the sample space is a proper subset of a Cartesian product. Several real examples, when certain combinations of the characteristics were either not possible logically or were left out from the design of the experiment, were discussed in \cite{KRD11}. A real problem of this structure from cell biology is analyzed in this paper, too. When the overall effect is not present, the standard normalization procedure to obtain probability distributions cannot be applied, because the family is curved \cite{KRD11}. When, in spite of this, the standard normalization procedure is applied, as was done in this analysis, the resulting estimates do not possess the fundamental model properties.
The standardization of the estimates in exponential families is also an issue when the size of the problem is very large and the computational burden is significant. Some neural probabilistic language models are relational models. Due to the high-dimensional sample space, the evaluation of the partition function, which is needed for normalization, may be intractable. Some of the methods of parameter estimation under such models are based on the removal of the partition function, that is, the removal of the overall effect from the model, and performing model training using the models without the overall effect. Approximations of estimates with and without the overall effect were studied, for example, by \cite{MnihTeh2012} and \cite{AndreasKlein2015}, among others. A different approach to avoiding global normalization (i.e., having an overall effect) is described in \cite{ProbGraphM}. However, the implications of the removal of the overall effect are not discussed in the existing literature.
Another area where removing or including the overall effect is relevant is context specific independence models, see, e.g., \cite{HosgaardCSImodels} and \citep*{NymanCSI}. When the sample space is an incomplete Cartesian product, removing the overall effect, as described in this paper, specifies different variants of conditional independence in the parts of the sample space, depending on whether or not the part is affected by the missing cells.
While including the overall effect in the definition of the statistical model to be investigated is seen by many researchers as ``natural'' or ``harmless'', we show in this paper that adding or removing the overall effect may dramatically change the characteristics of the exponential family, up to the point of altering the fundamental model property intended by the researcher.
The main results of the paper include showing that allowing the overall effect expands the curved exponential family to the smallest regular exponential family which contains it. When the overall effect is removed, the sample space may have to be reduced (if there were cells which contained the overall effect only), and the changes in the structure of the generalized odds ratios defining the model are described in both cases. In the language of algebraic geometry, the procedure of removing the overall effect is identical to the dehomogenization of the variety defining the model \citep*{Cox}. An important area of applications of the results presented here is the case when several binary features are observed, but the combination that no feature is present is either logically impossible or is possible but was left out from the study design. The converse of dehomogenization, that is, homogenizing a variety, involves including a new variable, and it is shown that in some cases this can be identified, from a statistical perspective, with augmenting the sample space by a cell which is characterized by no feature being present. For example, the Aitchison -- Silvey independence \citep{AitchSilvey60,KRipf1} is homogenized, through the augmentation of the sample space, into the standard independence model.
The paper is organized as follows. Section \ref{SectionDefinition} gives a canonical definition of relational models using homogeneous generalized odds ratios and, if the overall effect is not included, one inhomogeneous generalized odds ratio, called the dual representation, and shows that including the overall effect is identical to omitting the inhomogeneous generalized odds ratio from it.
Section \ref{SectionInfluence} contains the result that including the overall effect expands the curved exponential family into the smallest regular one containing it. For the case of the removal of the overall effect, the dual representation of the model is given, and the relevance of certain results in algebraic geometry to the statistical problem is discussed. In particular, the homogenization of a variety through including a new variable is identified with augmenting the sample space with a cell where no feature is present, when this is meaningful. It is proved that the homogenization of the Aitchison -- Silvey (in the sequel, AS) independence model, which is defined on sample spaces where all combinations of features, except for the ``no feature present'' combination, are possible, is the usual model of mutual independence on the full Cartesian product obtained after augmenting the sample space with the missing cell. The relationship of these results with context specific independence is also described.
Section \ref{MLEsection} compares the maximum likelihood (ML) estimates in geometrical terms for relational models with and without the overall effect and, based on the insight obtained, a modification of the algorithm proposed in \cite{KRipf1} is given. It is illustrated that the ML estimates under two models which differ only in the lack or presence of the overall term may be very different, up to the point of the existence or nonexistence of positive ML estimates, when the data contain observed zeros. However, when the MLE exists in the model containing the overall effect, it also does in the model obtained after the removal of the overall effect.
Finally, Section \ref{hemato} discusses an example of applications of relational models in cell biology. The equal loss of potential model in hematopoiesis \citep*{Perie2014} is a relational model without the overall effect. The published analysis of this model added the overall effect to it, to simplify calculations, and with this changed the properties of the model so that the published estimates do not fulfill the fundamental model property.
\section{A canonical form of relational models}\label{SectionDefinition}
Let $Y_1, \dots, Y_K$ be random variables taking values in finite sets $\mathcal{Y}_1, \dots, \mathcal{Y}_K$, respectively. Let the sample space $\mathcal{I}$ be a non-empty, proper or improper, subset of $\mathcal{Y}_1 \times \dots \times \mathcal{Y}_K$, written as a sequence of length $I=|\cg I|$ in the lexicographic order. Assume that the population distribution is parameterized by cell probabilities $\boldsymbol p =(p_1, \dots, p_I)$, where $p_i \in (0,1)$ and $\sum_{i=1}^I p_{i} = 1$, and denote by $\cg{P}$ the set of all strictly positive distributions on $\cg{I}$. For simplicity of exposition, a distribution in $\cg {P}$ will be identified with its parameter, $\boldsymbol p$, and $\cg P = \{\bd p > \bd 0: \,\, \bd 1^{\prime} \bd p = 1\}$.
Let $\mathbf{A}$ be a $J \times I$ matrix of full row rank with 0--1 elements and no zero columns.
A relational model for probabilities $RM(\mathbf{A})$ generated by $\mathbf{A}$ is the subset of $\mathcal{P}$ that satisfies:
\begin{equation}\label{genll}
RM(\mathbf{A}) = \{\bd p \in \mathcal{P}: \,\, \log \bd{p} = \mathbf{A}^{\prime} \bd{\theta}\},
\end{equation}
where $\bd \theta$ = $(\theta_1, \dots, \theta_J)^{\prime}$ is the vector of log-linear parameters of the model. A dual representation of a relational model can be obtained using a matrix, $\mathbf{D}$, whose rows form a basis of $Ker(\mathbf{A})$, and thus, $\mathbf{D}\mathbf{A}^{\prime}$ = $\mathbf O$:
\begin{equation}\label{multFam}
RM(\mathbf{A}) = \{\bd p \in \cg P: \,\, {\mathbf D} \log \bd p = \bd 0\}.
\end{equation}
The number of degrees of freedom of the model, $K$, is equal to $dim(Ker(\mathbf{A}))$. In the sequel, $\bd d_1^{\prime}, \bd d_2^{\prime}, \dots, \bd d_K^{\prime}$ denote the rows of $\mathbf D$. The dual representation can also be expressed in terms of the generalized odds ratios:
\begin{equation}\label{pDRatio}
\boldsymbol p^{\boldsymbol d_1^+}/\boldsymbol p^{\boldsymbol d_1^-} = 1, \quad
\boldsymbol p^{\boldsymbol d_2^+} /\boldsymbol p^{\boldsymbol d_2^-} = 1, \quad
\cdots \quad
\boldsymbol p^{\boldsymbol d_K^+} /\boldsymbol p^{\boldsymbol d_K^-} = 1,
\end{equation}
or in terms of the cross-product differences:
\begin{equation}\label{pDiff}
\boldsymbol p^{\boldsymbol d_1^+} - \boldsymbol p^{\boldsymbol d_1^-} = 0, \quad
\boldsymbol p^{\boldsymbol d_2^+} - \boldsymbol p^{\boldsymbol d_2^-} = 0, \quad
\cdots \quad
\boldsymbol p^{\boldsymbol d_K^+} - \boldsymbol p^{\boldsymbol d_K^-} = 0,
\end{equation}
where ${\boldsymbol {d^+}}$ and ${\boldsymbol {d^-}}$ stand for, respectively, the positive and negative parts of a vector $\boldsymbol d$.
The following dual representation is invariant of the choice of the kernel basis.
Let $\mathcal{X}_{\mathbf A}$ denote the polynomial variety associated with $\mathbf{A}$ \citep{SturBook}:
\begin{equation}\label{variety}
\mathcal{X}_{\mathbf A} = \left\{\boldsymbol p \in \mathbb{R}^{|\mathcal{I}|}_{\geq 0}: \,\, \boldsymbol p^{\boldsymbol d^+} = \boldsymbol p^{\boldsymbol d^-}, \,\, \forall \boldsymbol d \in Ker(\mathbf{A}) \right \}.
\end{equation}
The relational model generated by $\mathbf{A}$ is the following set of distributions:
\begin{equation}\label{ExtMprob}
RM(\mathbf{A}) = \mathcal{X}_{\mathbf A} \cap int(\Delta_{I-1}),
\end{equation}
where $int(\Delta_{I-1})$ is the interior of the $(I-1)$-dimensional simplex.
Notice that the variety $\mathcal{X}_{\mathbf A}$ includes elements $\boldsymbol p$ with zero components as well and can be used to extend the definition of the model to allow zero probabilities. The extended relational model, $\xbar{RM}(\mathbf{A})$, is the intersection of the variety $\mathcal{X}_{\mathbf A}$ with the probability simplex:
\begin{equation}\label{ExtMprobExt}
\xbar{RM}(\mathbf{A}) = \mathcal{X}_{\mathbf A} \cap \Delta_{I-1}.
\end{equation}
See \cite{KRextended} for more detail on the extended relational models.
Let $\bd 1^{\prime}$ = $(1, \dots,1)$ be the row of $1$'s of length $I$. If $\boldsymbol 1^{\prime}$ does not belong to the space spanned by the rows of $\mathbf A$, the relational model $RM(\mathbf{A})$ is said to be a model without the overall effect. Such models are specified using homogeneous and at least one non-homogeneous generalized odds ratios, and the corresponding variety $\mathcal{X}_{\mathbf A}$ is non-homogeneous \citep{KRD11}.
\begin{proposition}\label{oneORtheorem}
Let $RM(\mathbf A)$ be a model without the overall effect. There exists a kernel basis matrix $\mathbf{D}$ whose rows satisfy:
\begin{equation}\label{KernelRows}
\bd d_1^{\prime} \bd 1 \neq 0, \quad \bd d_2^{\prime} \bd 1= 0, \quad \dots, \,\, \bd d_K^{\prime} \bd 1 = 0.
\end{equation}
\end{proposition}
\begin{proof}
A relational model does not contain the overall effect if and only if it can be written using non-homogeneous (and possibly homogeneous) generalized odds ratios \citep{KRD11}. Therefore, $\mathbf{D}$ has at least one row, say $\bd d_1^{\prime}$, that is not orthogonal to $\bd 1$: $C_1 = \bd d_1^{\prime} \bd 1\neq 0$.
Suppose there exists another row, say $\boldsymbol d_2^{\prime}$, that is not orthogonal to $\boldsymbol 1$, so that $C_2 = \boldsymbol d_2^{\prime}\boldsymbol 1 \neq 0$. The vectors $\boldsymbol d_1$ and $\boldsymbol d_2$ are linearly independent, and so are the vectors
$\boldsymbol d_1$ and $C_2\boldsymbol d_1 - C_1 \boldsymbol d_2$; the latter is orthogonal to $\boldsymbol 1$. Substitute the row $\boldsymbol d_2^{\prime}$ with the row $C_2\boldsymbol d_1^{\prime} - C_1 \boldsymbol d_2^{\prime}$, and proceed in the same way with every remaining row that is not orthogonal to $\boldsymbol 1$.
\end{proof}
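The reduction in the proof of Proposition \ref{oneORtheorem} can be sketched as follows; the function name and the sample basis are illustrative assumptions, not part of the paper.

```python
# A sketch of Proposition 1's reduction: transform a kernel basis so that
# only the first row is non-homogeneous (C_i below denotes a row sum d_i'1)
def one_nonhomogeneous_row(rows):
    sums = [sum(r) for r in rows]
    k = next(i for i, s in enumerate(sums) if s != 0)   # assumed to exist
    rows[0], rows[k] = rows[k], rows[0]
    c1 = sum(rows[0])
    for i in range(1, len(rows)):
        ci = sum(rows[i])
        if ci != 0:
            # replace d_i' by C_i d_1' - C_1 d_i', which is orthogonal to 1
            rows[i] = [ci * a - c1 * b for a, b in zip(rows[0], rows[i])]
    return rows

basis = [[-1, 0, 1, -1],        # d_1'1 = -1
         [-2, 1, 1, -1]]        # d_2'1 = -1 as well
out = one_nonhomogeneous_row([r[:] for r in basis])
assert sum(out[0]) != 0                     # one non-homogeneous row remains
assert all(sum(r) == 0 for r in out[1:])    # the rest are homogeneous
```

Since each new row is a linear combination of kernel vectors, the output rows still lie in $Ker(\mathbf{A})$ and remain linearly independent.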
\noindent It is assumed in the sequel that $\boldsymbol 1^{\prime}$ is not in the row space of $\mathbf{A}$. Notice that, because $\mathbf{A}$ is a 0--1 matrix without zero columns, this is only possible when $2 \leq J = rank(\mathbf{A}) < I-1$. Throughout the entire paper, the kernel basis matrix $\mathbf D$ is assumed to satisfy (\ref{KernelRows}), and, without loss of generality, $\bd d_1^{\prime}\bd 1=-1$.
Some consequences of adding the overall effect to a relational model will be investigated by comparing the properties of the relational model generated by $\mathbf{A}$ and the model generated by the matrix $\bar{\mathbf A}$ obtained by augmenting the model matrix $\mathbf{A}$ with the row $\boldsymbol 1'$:
$$\bar{\mathbf A} = \left( \begin{array}{c} \boldsymbol 1' \\ \mathbf{A} \end{array} \right).$$
Let $RM(\bar{\mathbf{A}})$ be the relational model generated by $\bar{\mathbf{A}}$. Because $\boldsymbol 1'$ is a row of $\bar{\mathbf{A}}$, the corresponding polynomial variety $\mathcal{X}_{\bar{\mathbf A}}$ is homogeneous \citep[cf.][]{SturBook}.
\begin{theorem}\label{OEnonhom}
The dual representation of $RM(\bar{\mathbf{A}})$ can be obtained from the dual representation of $RM({\mathbf{A}})$ by removing the constraint specified by a non-homogeneous odds ratio from the latter.
\end{theorem}
\begin{proof}
Write the dual representation of $RM({\mathbf{A}})$ in terms of the generalized log odds ratios:
\begin{equation}\label{multFam1d}
\bd d_1^{\prime}\log \bd p = 0, \quad \bd d_2^{\prime}\log \bd p = 0, \dots, \quad \bd d_K^{\prime}\log \bd p = 0, \quad \mbox{ for any } \,\, \boldsymbol p \in RM({\mathbf{A}}).
\end{equation}
By the previous assumption, $\bd d_1^{\prime}\bd 1=-1$, and thus, the constraint $\bd d_1^{\prime}\log \bd p = 0$ is specified by a non-homogeneous odds ratio. Define $\bar{\mathbf{D}}$ as:
$$\bar{\mathbf{D}} = \left( \begin{array}{c} \boldsymbol d_2^{\prime} \\ \vdots \\ \boldsymbol d_K^{\prime} \end{array} \right).$$
Because, from (\ref{KernelRows}), $\bd d_2, \dots, \bd d_K \in Ker(\mathbf{A})$,
$$\bar{\mathbf D} \bar{\mathbf A}^{\prime} = \left( \begin{array}{c} \boldsymbol d_2^{\prime} \\ \vdots \\ \boldsymbol d_K^{\prime} \end{array} \right) \left( \begin{array}{cc} \boldsymbol 1 & {\mathbf A}^{\prime} \end{array} \right) = \left( \begin{array}{cc} \boldsymbol d_2^{\prime} \boldsymbol 1 & \boldsymbol d_2^{\prime} \mathbf{A}' \\ \vdots & \vdots \\ \boldsymbol d_K^{\prime} \boldsymbol 1 & \boldsymbol d_K^{\prime} \mathbf{A}' \end{array} \right) = \mathbf O,$$
and thus, $\bd d_2, \dots, \bd d_K \in Ker(\bar{\mathbf{A}})$. Finally, as $rank(\bar{\mathbf{D}}) = K-1$, $\bd d_2, \dots, \bd d_K$ is a basis of $Ker(\bar{\mathbf{A}})$, and therefore,
\begin{equation}\label{multFam1d1}
\bd d_2^{\prime}\log \bd p = 0, \dots, \quad \bd d_K^{\prime}\log \bd p = 0, \quad \mbox{ for any } \,\, \boldsymbol p \in RM(\bar{\mathbf{A}}).
\end{equation}
\end{proof}
\section{The influence of the overall effect on the model structure}\label{SectionInfluence}
The consequences of adding or removing the overall effect will be studied separately. The changes in the model structure after the overall effect is added are considered first.
Let $RM(\mathbf{A})$ be a relational model without the overall effect and $RM(\bar{\mathbf{A}})$ be the corresponding augmented model. Let $\mathbf{A} = (a_{ji})$ for $j = 1, \dots, J$, $i = 1, \dots, I$. For any $\boldsymbol p \in RM(\bar{\mathbf{A}})$:
$$\log p_i = \theta_0 + a_{1i}\theta_1 + \dots + a_{Ji} \theta_J,$$
where $\theta_j = \theta_j(\boldsymbol p)$, $j = 0, 1,\dots, J$, are the log-linear parameters of $\boldsymbol p$. In particular, $\theta_0(\boldsymbol p)$ is the overall effect of $\boldsymbol p$.
\begin{theorem}\label{newTheoremVarieties}
The augmented model, $RM(\bar{\mathbf{A}})$, is the minimal regular exponential family which contains $RM(\mathbf{A})$, and
$$ RM(\mathbf{A}) = \{\boldsymbol p \in RM(\bar{\mathbf{A}}): \,\, \theta_0(\boldsymbol p) = 0 \}.$$
\end{theorem}
\begin{proof}
The second claim is proved first. Denote $\mathcal{M}_{0} = \{\boldsymbol p \in RM(\bar{\mathbf{A}}): \,\, \theta_0(\boldsymbol p) = 0 \}$. Let $\mathbf{D}$ be a kernel basis matrix of $\mathbf{A}$, having the form (\ref{KernelRows}), and notice that
$$ \mathbf{D} \log \boldsymbol p = \left(\begin{array}{c}\bd d_1\\[3pt] \bar{\mathbf{D}}\end{array}\right)\log \boldsymbol p=\left(\begin{array}{c}\theta_0(\boldsymbol {p})\bd d_1^{\prime} \bd 1\\[3pt] \bar{\mathbf{D}}\mathbf{A}^{\prime} \boldsymbol \theta\end{array}\right) = \left(\begin{array}{c}- \theta_0(\boldsymbol {p})\\[3pt] \bd 0\end{array}\right).$$
Therefore, any $\boldsymbol p \in \mathcal{M}_0$, satisfies $ \mathbf{D} \log \boldsymbol p = \boldsymbol 0$, and thus, belongs to $ RM(\mathbf{A})$. On the other hand, for any $\boldsymbol p \in RM({\mathbf{A}})$, both $\bar{\mathbf{D}}\log\boldsymbol p = \boldsymbol 0$ and $\theta_0(\boldsymbol {p}) = 0$ must hold, which immediately implies that $\boldsymbol p \in \mathcal{M}_0$.
The first claim is proved next.
The relational model $RM({\mathbf{A}})$ is a curved exponential family parameterized by
\begin{equation*}
\Theta = \{\bd \theta^{\prime} = (\theta_1, \dots, \theta_J)^{\prime} \in \mathbb{R}^{J}: \,\, \bd 1^{\prime} \exp\{\mathbf{A}^{\prime} \bd{\theta}\} = 1\}.
\end{equation*}
If the overall effect is added to $RM({\mathbf{A}})$, the parameter space gets an additional parameter:
\begin{equation*}
\Theta_1 = \{(\theta_0, \bd \theta^{\prime}) = (\theta_0, \theta_1, \dots, \theta_J)^{\prime} \in \mathbb{R}^{J+1}: \,\, \theta_0 = -\log(\bd 1^{\prime} \exp\{\mathbf{A}^{\prime} \bd{\theta}\})\}.
\end{equation*}
The set $\Theta_1$ is the graph of a smooth function of $\bd \theta \in \mathbb{R}^{J}$, so the augmented family is parameterized by the open set $\mathbb{R}^{J}$ and is thus regular; as it is obtained by adding the single parameter $\theta_0$, it is the minimal regular exponential family containing $RM(\mathbf{A})$. This family is, in fact, $RM(\bar{\mathbf{A}})$.
\end{proof}
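The role of $\theta_0$ as a normalizing constant can be illustrated numerically: for any $\bd\theta\in\mathbb{R}^J$, setting $\theta_0 = -\log(\bd 1^{\prime}\exp\{\mathbf{A}^{\prime}\bd\theta\})$ yields a probability distribution. The matrix and parameter values below are assumptions made purely for illustration.

```python
import math

# a small 0-1 model matrix (assumed example) and an arbitrary theta in R^J
A = [[1, 1, 1, 0],
     [0, 0, 1, 1]]
theta = [0.3, -0.7]
cols = list(zip(*A))                      # columns of A index the cells
unnorm = [math.exp(sum(a * t for a, t in zip(col, theta))) for col in cols]
theta0 = -math.log(sum(unnorm))           # the overall effect
p = [math.exp(theta0) * u for u in unnorm]
assert abs(sum(p) - 1) < 1e-12
# p lies in RM(A-bar); it lies in RM(A) exactly when theta0 = 0
```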
\begin{example}\label{NewExample}
The relational models generated by the matrices
$$
\mathbf{A}= \left(\begin{array}{rrrr}
1& 1& 1& 0\\
0& 0& 1& 1\\
\end{array}\right), \qquad \bar{\mathbf{A}}= \left(\begin{array}{rrrr}
1& 1& 1& 1\\
1& 1& 1& 0\\
0& 0& 1& 1\\
\end{array}\right)
$$
consist of the positive probability distributions which can be written in the following forms:
$$\left\{\begin{array}{l}
p_1 = \alpha_1,\\
p_2 = \alpha_1,\\
p_3 = \alpha_1\alpha_2,\\
p_4 = \alpha_2,
\end{array} \right. \qquad
\left\{\begin{array}{l}
p_1 = \beta_0\beta_1,\\
p_2 = \beta_0\beta_1,\\
p_3 = \beta_0\beta_1\beta_2,\\
p_4 = \beta_0\beta_2,
\end{array} \right.
$$
where $\beta_0$ is the overall effect. The dual representations can be written in the log-linear form, using $\bd d_1 = (-1,0,1,-1)^{\prime} , \bd d_2 = (-1,1,0,0)^{\prime} \in Ker(\mathbf{A})$:
$$\left\{\begin{array}{l}
\bd d_1^{\prime} \log \bd p = 0,\\
\bd d_2^{\prime} \log \bd p = 0,
\end{array} \right. \qquad
\left\{\begin{array}{l}\bd d_2^{\prime} \log \bd p = 0. \\
\end{array} \right.
$$
By Theorem \ref{OEnonhom}, after the overall effect is added, the model specification does not include the non-homogeneous constraint anymore. In terms of the generalized odds ratios:
$$\left\{\begin{array}{l}
p_3 /( p_1p_4) = 1,\\
p_1 / p_2 = 1,
\end{array} \right. \qquad
\left\{\begin{array}{l}p_1 / p_2 = 1. \\
\end{array} \right.
$$
The second model may be defined using restrictions only on homogeneous odds ratios, and there is no need to place an explicit restriction on the non-homogeneous odds ratio.
\qed
\end{example}
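The example above can be checked numerically. In the sketch below, $\alpha_1$ and $\alpha_2$ are chosen (as an assumed illustration) so that the probabilities sum to $1$; the code verifies that $\bd d_1, \bd d_2 \in Ker(\mathbf{A})$ and that both generalized odds ratios of the resulting distribution equal $1$.

```python
import math

# model matrix A and kernel vectors d1, d2 from the example (cells 1..4)
A  = [[1, 1, 1, 0],
      [0, 0, 1, 1]]
d1 = [-1, 0, 1, -1]    # non-homogeneous: entries sum to -1
d2 = [-1, 1, 0, 0]     # homogeneous: entries sum to 0
for d in (d1, d2):
    assert all(sum(a * x for a, x in zip(row, d)) == 0 for row in A)

# a distribution in RM(A): p = (a1, a1, a1*a2, a2); a1, a2 picked so that
# the probabilities sum to 1 (2*a1 + a1*a2 + a2 = 1)
a1, a2 = 0.25, 0.4
p = [a1, a1, a1 * a2, a2]
assert abs(sum(p) - 1) < 1e-12
# both generalized log odds ratios vanish ...
for d in (d1, d2):
    assert abs(sum(di * math.log(pi) for di, pi in zip(d, p))) < 1e-12
# ... equivalently, p3/(p1*p4) = 1 and p1/p2 = 1
assert abs(p[2] / (p[0] * p[3]) - 1) < 1e-12
assert abs(p[0] / p[1] - 1) < 1e-12
```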
The changes in the model structure after the overall effect is removed are examined next.
A relational model with the overall effect can be reparameterized so that its model matrix has a row of $1$'s, and because of full row rank, this vector is not spanned by the other rows.
The implications of the removal of the overall effect will be investigated using a model matrix of this structure, say $\bar{\mathbf{A}}_1$. By the removal of the row $\boldsymbol 1^{\prime}$, one may obtain a different model matrix on the same sample space, but it may happen that there exists a cell $i_0$, whose only parameter is the overall effect, and after its removal, the $i_0$-th column contains zeros only. In such cases, to have a proper model matrix, such columns, that is such cells, need to be removed. Write $\mathcal{I}_0$ for the set of all such cells $i_0$, and let $I_0 = |\mathcal{I}_0|$. Then, the reduced model matrix, $\mathbf{A}_1$, is obtained from $\bar{\mathbf{A}}_1$ after removing the row of $1$'s and deleting the columns which, after this, contain only zeros. This is a model matrix on $\mathcal{I} \setminus \mathcal{I}_0$. Without loss of generality, the matrix $\bar{\mathbf{A}}_1$ can be written as:
$$\bar{\mathbf{A}}_1 = \left(\begin{array}{cc} \boldsymbol 1_{(I - I_0)}^{\prime} &\boldsymbol 1_{I_0}^{\prime} \\
\mathbf{A}_1 &\mathbf{O}_{(J-1)\times I_0} \end{array} \right).$$
If the sample spaces of $RM(\bar{\mathbf{A}}_1)$ and $RM({\mathbf{A}}_1)$ are the same, that is, when $\mathcal{I}_0$ is empty, the reduced model is the subset of the original one consisting of the distributions whose overall effect is zero, see Theorem \ref{newTheoremVarieties}. If the sample space is reduced, the relationship between the kernel basis matrices is described in the next result.
\begin{theorem}\label{thConjecture1}
The following holds:
\begin{enumerate}[(i)]
\item $dim(Ker(\mathbf{A}_1))=dim(Ker(\bar{\mathbf{A}}_1))-I_0+1$.
\item The kernel basis matrix $\mathbf{D}_1$ of $\mathbf{A}_1$ may be obtained from the kernel basis matrix $\bar{\mathbf{D}}_1$ of $\bar{\mathbf{A}}_1$ by deleting the columns in $ \mathcal{I}_0$ and then leaving out the redundant rows.
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{enumerate}[(i)]
\item Because $\bar{\mathbf{A}}_1$ is a $J \times I$ matrix of full row rank, $dim(Ker(\bar{\mathbf{A}}_1)) = I-J$. The linear independence of its rows implies that the rows of $\mathbf{A}_1$ are also linearly independent. Therefore, because $\mathbf{A}_1$ is a $(J-1) \times (I-I_0)$ matrix, $dim(Ker(\mathbf{A}_1)) = I - I_0 - (J-1)$, which implies the result.
\item Let $\boldsymbol d_1, \boldsymbol d_2, \dots, {\boldsymbol d}_{I-J}$ be a kernel basis of $\bar{\mathbf{A}}_1$. Write
$$\boldsymbol d_i = (\boldsymbol{u}_i^{\prime},\boldsymbol{v}_i^{\prime})^{\prime}, \quad \mbox{ for } \quad i = 1,\dots, I-J.$$
Then,
$$\boldsymbol 0 = \bar{\mathbf{A}}_1\boldsymbol d_i = \left(\begin{array}{cc} \boldsymbol 1_{(I - I_0)}^{\prime} &\boldsymbol 1_{I_0}^{\prime} \\
\mathbf{A}_1 &\mathbf{O} \end{array} \right) \left(\begin{array}{c}\boldsymbol{u}_i\\ \boldsymbol{v}_i\end{array}\right), \quad \mbox{ for } \quad i = 1,\dots, I-J,$$
which implies that
\begin{eqnarray}\label{basisID}
\boldsymbol 1_{(I - I_0)}^{\prime}\boldsymbol u_i + \boldsymbol 1_{I_0}^{\prime} \boldsymbol v_i = 0, \quad \mathbf{A}_1 \boldsymbol u_i = \boldsymbol 0, \quad \mbox{ for } \quad i = 1,\dots, I-J.
\end{eqnarray}
Suppose $\mathbf{A}_1$ does not have the overall effect. Notice that each $\boldsymbol v_i$ has length $I_0$, and therefore, one can apply a non-singular linear transformation to the basis vectors $\boldsymbol d_1, \boldsymbol d_2, \dots, {\boldsymbol d}_{I-J}$ to reduce them to the form:
\begin{eqnarray*}
\boldsymbol d_1&=& (\boldsymbol u_1^{\prime}, 1, 0,\dots, 0)^{\prime}\\
\boldsymbol d_2 &=&(\boldsymbol u_2^{\prime}, 0, 1,\dots, 0)^{\prime}\\
\cdots&\\
\boldsymbol d_{I_0} &=&(\boldsymbol u_{I_0}^{\prime}, 0, 0,\dots, 1)^{\prime}\\
\boldsymbol d_{I_0+1} &=&(\boldsymbol u_{I_0+1}^{\prime}, 0, 0,\dots, 0)^{\prime}\\
\boldsymbol d_{I_0+2} &=&(\boldsymbol u_{I_0+2}^{\prime}, 0, 0,\dots, 0)^{\prime}\\
\cdots& \\
\boldsymbol d_{I-J} &=&(\boldsymbol u_{I-J}^{\prime}, 0, 0,\dots, 0)^{\prime}.
\end{eqnarray*}
The equations (\ref{basisID}) imply that
\begin{equation*}
\boldsymbol 1_{(I - I_0)}^{\prime}\boldsymbol u_i =-1, \,\, \mbox{ for } i = 1, \dots, I_0, \qquad \boldsymbol 1_{(I - I_0)}^{\prime}\boldsymbol u_i = 0, \,\, \mbox{ for } i = I_0+1, \dots, I-J,
\end{equation*}
$$ \mbox{ and }\quad \mathbf{A}_1 \boldsymbol u_i = \boldsymbol 0, \,\, \mbox{ for } i = 1, \dots, I-J.$$
The linear independence of $\boldsymbol d_{I_0+1}, \dots, \boldsymbol d_{I-J}$ in $\mathbb{R}^I$ entails the linear independence of
$\boldsymbol u_{I_0+1}, \dots, \boldsymbol u_{I-J}$ in $\mathbb{R}^{I-I_0}$. Notice that $\boldsymbol u_{1}, \dots, \boldsymbol u_{I_0}$ are jointly linearly independent from $\boldsymbol u_{I_0+1}, \dots, \boldsymbol u_{I-J}$, but not necessarily linearly independent from each other. A kernel basis of $\mathbf{A}_1$ comprises $I-J-I_0+1$ linearly independent vectors in $Ker(\mathbf{A}_1)$, and, for example, $\boldsymbol u_{I_0},\boldsymbol u_{I_0+1}, \dots, \boldsymbol u_{I-J}$ form such a basis. Therefore, ${\mathbf{D}}_1$ can be derived from a kernel basis matrix of $\bar{\mathbf{A}}_1$ by removing the columns for $\mathcal{I}_0$ and leaving out the $I_0 - 1$ redundant rows.
Suppose $\mathbf{A}_1$ has the overall effect; without loss of generality, $\boldsymbol 1^{\prime}$ is a row of $\mathbf{A}_1$. In this case, (\ref{basisID}) implies that both $\boldsymbol 1_{(I - I_0)}^{\prime}\boldsymbol u_i = 0$ and $\boldsymbol 1_{I_0}^{\prime} \boldsymbol v_i = 0$, for $i = I_0+1, \dots, I-J$. Because $\boldsymbol u_i$'s and $\boldsymbol v_i$'s vary independently from each other, the linear independence of $\boldsymbol d_1, \boldsymbol d_2, \dots, {\boldsymbol d}_{I-J}$ will imply that $(\boldsymbol u_i, \boldsymbol 0)$, for $i = I_0+1, \dots, I-J$, are also linearly independent in $\mathbb{R}^I$. Consequently, any $I-J-I_0+1$ vectors among $\boldsymbol u_i$'s are linearly independent in $\mathbb{R}^{I-I_0}$ and can form a kernel basis of $\mathbf{A}_1$. Thus, as in the previous case, ${\mathbf{D}}_1$ can be derived from a kernel basis matrix of $\bar{\mathbf{A}}_1$ by removing the columns for $\mathcal{I}_0$ and leaving out the $I_0 - 1$ redundant rows.
\end{enumerate}
\end{proof}
The next two examples illustrate the theorem.
\begin{example}\label{removeOEexample1}
Let $RM(\bar{\mathbf{A}}_1)$ be the relational model generated by
$$
\bar{\mathbf{A}}_1= \left(\begin{array}{rrrrrr}
1& 1& 1& 1 & 1&1\\
1& 0& 1& 0& 0 & 0\\
0& 1& 1& 1&0&0\\
\end{array}\right).
$$
Here, $\mathcal{I}_0 = \{5,6\}$. In terms of the generalized odds ratios the model can be written as:
$$\left\{\begin{array}{l}
p_3p_5 / (p_1 p_2)= 1,\\
p_3 p_6/( p_1p_2) = 1,\\
p_2/p_4 = 1.
\end{array} \right. $$
Remove the row $\boldsymbol 1^{\prime}$ and the last two columns and consider the reduced matrix:
$$
{\mathbf{A}}_1= \left(\begin{array}{rrrr}
1& 0& 1& 0\\
0& 1& 1& 1\\
\end{array}\right).
$$
The model $RM({\mathbf{A}}_1)$ does not have the overall effect and can be specified by two generalized odds ratios:
$$\left\{\begin{array}{l}
p_3 /( p_1p_2) = 1,\\
p_2 /p_4 = 1.
\end{array} \right. $$
These odds ratios are defined on the smaller probability space, and may be obtained by removing $p_5$ and $p_6$, and the redundant odds ratio, from the odds ratio specification of the original model.
\qed
\end{example}
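The dimension count in Theorem \ref{thConjecture1}(i) can be verified numerically for the matrices of this example; the following is a minimal sketch, assuming numpy is available (the variable names are illustrative only):

```python
import numpy as np

# bar A_1: model matrix with the overall effect (3 x 6, full row rank)
A_bar = np.array([[1, 1, 1, 1, 1, 1],
                  [1, 0, 1, 0, 0, 0],
                  [0, 1, 1, 1, 0, 0]])

# A_1: the row of 1's removed, and the all-zero columns (cells 5 and 6) deleted
A_red = np.array([[1, 0, 1, 0],
                  [0, 1, 1, 1]])
I0 = 2  # |I_0|, the number of deleted cells

def kernel_dim(M):
    """dim Ker(M) = number of columns minus the rank of M."""
    return M.shape[1] - np.linalg.matrix_rank(M)

# Theorem (i): dim Ker(A_1) = dim Ker(bar A_1) - I_0 + 1
assert kernel_dim(A_red) == kernel_dim(A_bar) - I0 + 1  # 2 == 3 - 2 + 1
```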
\begin{example}\label{removeOEexample2}
Consider the relational model $RM(\bar{\mathbf{A}}_1)$ generated by
$$
\bar{\mathbf{A}}_1= \left(\begin{array}{rrrr}
1 & 1& 1& 1\\
1& 1& 1& 0\\
1& 0& 1& 0\\
\end{array}\right).
$$
In terms of the generalized odds ratios, the model specification is $p_1/p_3 = 1$. Notice that $\bar{\mathbf{A}}_1$ is row equivalent to $$\bar{\mathbf{A}}_2= \left(\begin{array}{rrrr}
0 & 0& 0& 1\\
1& 1& 1& 0\\
1& 0& 1& 0\\
\end{array}\right).
$$
Because every $\boldsymbol d$ in $Ker(\bar{\mathbf{A}}_1)$ is orthogonal to $(0, 0, 0, 1)$, its last component has to be zero: $d_4 = 0$. Therefore, in any specification of $RM(\bar{\mathbf{A}}_1)$ in terms of the generalized odds ratios, $p_4$ will not be present.
Set
$$
{\mathbf{A}}_1= \left(\begin{array}{rrr}
1& 1& 1\\
1& 0& 1\\
\end{array}\right).
$$
The model $RM({\mathbf{A}}_1)$ has the overall effect and can be specified by exactly the same generalized odds ratio as the model $RM(\bar{\mathbf{A}}_1)$: $p_1/p_3 = 1$.
As a further illustration, take
$$
\bar{\mathbf{A}}_1= \left(\begin{array}{rrrrrr}
1 & 1& 1& 1&1&1\\
1& 1& 1& 0& 0& 0\\
1& 0& 1& 0 & 0& 0\\
\end{array}\right).
$$
In this case,
$$
\bar{\mathbf{A}}_2= \left(\begin{array}{rrrrrr}
0 & 0& 0& 1&1&1\\
1& 1& 1& 0& 0& 0\\
1& 0& 1& 0 & 0& 0\\
\end{array}\right),
$$
and $\mathbf{A}_1$ is the same as above. In this case, $RM(\bar{\mathbf{A}}_1)$ is specified by
$p_1/p_3 = 1, p_4/p_5 = 1, p_4/p_6 = 1$, and $RM({\mathbf{A}}_1)$ is described as previously: $p_1/p_3 = 1$.
\qed
\end{example}
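The observation that $d_4 = 0$ for every kernel vector can be checked directly; a small sketch, assuming numpy (the SVD-based null-space computation is one standard choice):

```python
import numpy as np

# bar A_1 of the example (3 x 4, full row rank)
A_bar = np.array([[1, 1, 1, 1],
                  [1, 1, 1, 0],
                  [1, 0, 1, 0]])

# An orthonormal basis of Ker(bar A_1): the rows of Vt beyond the rank
_, _, Vt = np.linalg.svd(A_bar)
null_basis = Vt[np.linalg.matrix_rank(A_bar):]

assert null_basis.shape == (1, 4)      # kernel dimension 4 - 3 = 1
assert abs(null_basis[0][3]) < 1e-10   # d_4 = 0: p_4 cannot appear
# The basis vector is proportional to (1, 0, -1, 0), i.e. p_1/p_3 = 1
assert np.allclose(null_basis[0] / null_basis[0][0], [1, 0, -1, 0])
```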
The polynomial variety $\mathcal{X}_{\bar{\mathbf{A}}_1}$ defining the model $RM(\bar{\mathbf{A}}_1)$ is homogeneous. If the removal of the cells comprising $\mathcal{I}_0$ leads to a model without the overall effect, the variety $\mathcal{X}_{\bar{\mathbf{A}}_1}$ is dehomogenized, yielding the affine variety $\mathcal{X}_{{\mathbf{A}}_1}$ \citep[cf.][]{Cox}.
The converse procedure, homogenization of an affine variety, is also studied in algebraic geometry, and is performed by introducing a new variable in such a way that all equations defining the variety become homogeneous. The essence of this procedure is that all probabilities are multiplied by this new variable. This leaves the homogeneous odds ratios unchanged, as the new variable cancels out. The value of a non-homogeneous odds ratio becomes, instead of $1$, the reciprocal of the new variable. For example, the odds ratio in Example \ref{removeOEexample1},
$p_3/(p_1p_2)=1$, becomes $vp_3/(vp_1vp_2)=1/v$, where $v$ is the new variable. If now $v$ could be seen as the probability of an additional cell, say $p_0$, then this would be a homogeneous odds ratio, $p_0p_3/(p_1p_2)=1$.
Although homogenization is a straightforward procedure in algebraic geometry, it does not necessarily have a clear interpretation in statistical inference. Introducing a new variable and a new cell for the purpose of homogenization can be made meaningful in some situations, if the sample space may be extended by one cell and the new variable is the parameter (probability) of this cell. Homogenization requires this new variable to appear in every cell, so the parameter may be seen as the overall effect. The new cell has only the overall effect, thus no feature is present in this cell.
The augmentation of the sample space by an additional cell does make sense if that cell exists in the population but was not observed because of the design of the data collection procedure, as in Example \ref{crabs}. The additional cell has the overall effect only and is thus a ``no feature present'' cell.
\begin{example}\label{crabs}
In the study of swimming crabs by \cite*{Kawamura1995}, three types of baits were used in traps to catch crabs: fish alone, sugarcane alone, fish-sugarcane combination. The sample space consists of three cells, $\mathcal{I} = \{(0, 1), (1,0), (1,1)\}$, and the cell $(0,0)$ is absent by design, because there were no traps without any bait. Under the AS independence, the cell parameter associated with both bait types present is the product of the parameters associated with the other two cells. This is a relational model without the overall effect, generated by the matrix \citep[cf.][]{KRipf1}:
$$\mathbf{A} = \left(
\begin{array}{ccc}
1 & 0 & 1 \\
0 & 1 & 1 \\
\end{array}
\right).$$
In fact, the overall effect cannot be included in this situation, because it would saturate the model. The affine variety associated with this model can be homogenized by including a new variable. The new variable may only be interpreted as the parameter associated with no bait present, and calls for an additional cell in the sample space (to avoid the saturation of the model), which may only be interpreted as setting up a trap without any bait; this would also be a plausible research design. The resulting model is generated by $\mathbf{A}_0$:
$$\mathbf{A}_0 = \left(
\begin{array}{cccc}
1& 1& 1& 1\\
0& 1 & 0 & 1 \\
0& 0 & 1 & 1 \\
\end{array}
\right),$$
and indeed, is the model of traditional independence on the complete $2 \times 2$ contingency table. \qed
\end{example}
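The rank computations behind this example can be spelled out; a brief sketch, assuming numpy:

```python
import numpy as np

# AS independence on the three observed cells (0,1), (1,0), (1,1)
A = np.array([[1, 0, 1],
              [0, 1, 1]])

# Appending the overall effect on the same three cells saturates the model:
# the augmented matrix has rank 3, the number of cells, so Ker is trivial
A_with_overall = np.vstack([np.ones(3, dtype=int), A])
assert np.linalg.matrix_rank(A) == 2
assert np.linalg.matrix_rank(A_with_overall) == 3

# Augmenting the sample space with a "no bait" cell instead gives A_0,
# the model of independence on the complete 2 x 2 table (rank 3 < 4 cells)
A0 = np.array([[1, 1, 1, 1],
               [0, 1, 0, 1],
               [0, 0, 1, 1]])
assert np.linalg.matrix_rank(A0) == 3
```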
For situations like the one in Example \ref{crabs}, the AS independence is a natural model, but it also applies to cases when the ``no feature present'' situation is logically impossible (like market basket analysis, or records of traffic violations, see \cite{KRD11, KRipf1}, and also the biological example in Section \ref{hemato}), and in such cases the cell augmentation procedure is not meaningful. There are, however, situations when the existence of the ``no feature present'' cell is not logically impossible, but its actual existence in the population is dubious.
For a more general discussion of the homogenization of AS independence,
let $\boldsymbol d_1, \dots, \boldsymbol d_K$ be a kernel basis of $\mathbf{A}$, satisfying (\ref{KernelRows}) with $\boldsymbol d_1^{\prime} \boldsymbol 1 = -1$. The polynomial ideal $\mathscr{I}_{\mathbf{A}}$ associated with the matrix $\mathbf{A}$ is generated by one non-homogeneous polynomial, $\boldsymbol p^{\boldsymbol d_1^+} - \boldsymbol p^{\boldsymbol d_1^-}$, and $K-1$ homogeneous polynomials, $\boldsymbol p^{\boldsymbol d_k^+} - \boldsymbol p^{\boldsymbol d_k^-}$, for $k = 2, \dots, K$. Notice that, because $(\boldsymbol d_1^+)^{\prime} \boldsymbol 1 - (\boldsymbol d_1^-)^{\prime} \boldsymbol 1 = (\boldsymbol d_1^+ - \boldsymbol d_1^-)^{\prime} \boldsymbol 1 = \boldsymbol d_1^{\prime} \boldsymbol 1 = -1$, the difference in the degrees of the monomials $\boldsymbol p^{\boldsymbol d_1^+}$ and $\boldsymbol p^{\boldsymbol d_1^-}$ is $-1$. Therefore, the polynomial $\boldsymbol p^{\boldsymbol d_1^+} - \boldsymbol p^{\boldsymbol d_1^-}$ can be homogenized by multiplying the first monomial by one additional variable, say $p_0$:
$$p_0\boldsymbol p^{\boldsymbol d_1^+} - \boldsymbol p^{\boldsymbol d_1^-}.$$
The polynomial ideal generated by
$$p_0\boldsymbol p^{\boldsymbol d_1^+} - \boldsymbol p^{\boldsymbol d_1^-}, \boldsymbol p^{\boldsymbol d_2^+} - \boldsymbol p^{\boldsymbol d_2^-}, \dots, \boldsymbol p^{\boldsymbol d_K^+} - \boldsymbol p^{\boldsymbol d_K^-}$$
and the corresponding variety are homogeneous, and can be described by the matrix of size $(J+1)\times (I+1)$ of the following structure:
$${\mathbf{A}}_0 = \left(\begin{array}{cc} 1 & \boldsymbol 1^{\prime}_I \\
\boldsymbol 0_J & \mathbf{A} \end{array} \right).$$
Here, $\boldsymbol 1^{\prime}_I$ is the row of $1$'s of length $I$, and $\boldsymbol 0_J$ is the column of zeros of length $J$.
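This block construction is easy to carry out explicitly; a sketch assuming numpy, using the matrix of Example \ref{crabs} as an illustration:

```python
import numpy as np

def homogenize(A):
    """Build A_0 from A as in the displayed block structure:
    a new first row (1, 1_I') and a new first column (1, 0_J')."""
    J, I = A.shape
    top = np.concatenate([[1], np.ones(I, dtype=A.dtype)])
    bottom = np.hstack([np.zeros((J, 1), dtype=A.dtype), A])
    return np.vstack([top, bottom])

# Matrix of the crabs example: AS independence on three cells
A = np.array([[1, 0, 1],
              [0, 1, 1]])
A0 = homogenize(A)

# The result is the model matrix of traditional independence
# on the complete 2 x 2 table, with the new cell placed first
assert (A0 == np.array([[1, 1, 1, 1],
                        [0, 1, 0, 1],
                        [0, 0, 1, 1]])).all()
```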
In fact, the homogeneous variety $\mathcal{X}_{{\mathbf{A}}_0}$ is the projective closure of the affine variety $\mathcal{X}_{{\mathbf{A}}}$ \citep{Cox}. The latter can be obtained from the former by dehomogenization, that is, by setting $p_0 = 1$.
The homogenization of the model of AS independence for three features is discussed next.
\begin{example}\label{Example0}
Consider the model of AS independence for three attributes, $A$, $B$, and $C$, described in \cite{KRD11}:
\begin{equation}\label{indepAtr}
{p_{110}}={p_{100}p_{010}}, \,\,{ p_{101}}={p_{100}p_{001}}, \,\, {p_{011}}={p_{010}p_{001}}, \,\,{p_{111}}={p_{100}p_{010}p_{001}}.
\end{equation}
Here $p_{ijk} = \mathbb{P}(A = i, B=j, C=k)$ for $i, j, k \in \{0,1\}$, where the combination $(0,0,0)$ does not exist, and $\sum_{ijk} p_{ijk} = 1$. The equations (\ref{indepAtr}) specify the relational model generated by
$$
\mathbf{A}= \left(\begin{array}{rrrrrrr}
1& 0& 0& 1& 1& 0& 1\\
0& 1& 0& 1& 0& 1& 1\\
0& 0& 1& 0& 1& 1& 1\\
\end{array}\right).
$$
Consider the following kernel basis matrix which is of the form (\ref{KernelRows}):
$$\mathbf{D} = \left(\begin{array} {rrrrrrr}
-1&-1&0&1&0&0&0\\
-1&0&1&1&0&-1&0 \\
0&-1&1&1&-1&0&0\\
0&0&-1&0&1&1&-1 \\
\end{array}\right).$$
The corresponding polynomial ideal is:
$$\mathscr{I}_{\mathbf{A}} =\langle \,\, p_{110} - p_{100} p_{010}, \,\,p_{110}p_{001} - p_{011}p_{100}, \,\, p_{110}p_{001} - p_{101}p_{010}, \,\, p_{111}p_{001} - p_{101}p_{011} \,\, \rangle.$$
The generating set of $\mathscr{I}_{\mathbf{A}}$ includes at least one non-homogeneous polynomial, due to $\boldsymbol d_1$, and can be homogenized by introducing a new variable, say $p_{000}$. The resulting ideal,
$$\mathscr{I}_{\mathbf{A}_0} = \langle \,\, p_{000}p_{110} - p_{100} p_{010}, \,\,p_{110}p_{001} - p_{011}p_{100}, \,\, p_{110}p_{001} - p_{101}p_{010}, \,\, p_{111}p_{001} - p_{101}p_{011} \,\, \rangle,$$
is homogeneous, and its zero set
\begin{equation}\label{variety0}
\mathcal{X}_{\mathbf A_0} = \left\{\boldsymbol p \in \mathbb{R}^{|\mathcal{I}|+1}_{\geq 0}: \,\, \boldsymbol p^{\boldsymbol d^+} = \boldsymbol p^{\boldsymbol d^-}, \,\, \forall \boldsymbol d \in Ker(\mathbf{A}_0) \right \},
\end{equation}
where
$$
\mathbf{A}_0= \left(\begin{array}{rrrrrrrr}
1& 1& 1& 1& 1& 1& 1& 1\\
0& 1& 0& 0& 1& 1& 0& 1\\
0& 0& 1& 0& 1& 0& 1& 1\\
0& 0& 0& 1& 0& 1& 1& 1\\
\end{array}\right),
$$
is thus a homogeneous variety. The relational model $RM(\mathbf{A}_0)$ is defined on a larger sample space, namely $\mathcal{I}\cup\{(0,0,0)\}$. The model has the overall effect and is the following set of distributions:
\begin{equation}\label{ExtMprob0}
\boldsymbol p \in \mathcal{X}_{\mathbf A_0} \cap int(\Delta_{I}).
\end{equation}
The rows of $\mathbf{A}_0$ are the indicators of the cylinder sets of the total (the row of $1$'s), and of the $A$, $B$, and $C$ marginals. Therefore, the relational model $RM(\mathbf{A}_0)$ is the traditional model of mutual independence.
\qed
\end{example}
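The kernel basis matrix of this example can be verified mechanically; a sketch assuming numpy, with the cells ordered $(100),(010),(001),(110),(101),(011),(111)$:

```python
import numpy as np

A = np.array([[1, 0, 0, 1, 1, 0, 1],
              [0, 1, 0, 1, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 1]])
D = np.array([[-1, -1,  0, 1,  0,  0,  0],
              [-1,  0,  1, 1,  0, -1,  0],
              [ 0, -1,  1, 1, -1,  0,  0],
              [ 0,  0, -1, 0,  1,  1, -1]])

# Every row of D lies in Ker(A), and the rows are linearly independent,
# so D is a kernel basis matrix: dim Ker(A) = 7 - rank(A) = 4
assert np.all(A @ D.T == 0)
assert np.linalg.matrix_rank(D) == 4 == 7 - np.linalg.matrix_rank(A)
```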
The next theorem states in general what was seen in the example. Let $X_1, \dots, X_T$ be random variables taking values in $\{0,1\}$. Write $\mathcal{I}^0$ for the Cartesian product of their ranges, and let $\mathcal{I} = \mathcal{I}^0 \setminus \{(0,\dots,0)\} $.
\begin{theorem}
Let $RM(\mathbf{A})$ be the model of AS independence of $X_1, \dots, X_T$ on the sample space $\mathcal{I}$. The interior of the projective closure of this model is the log-linear model of mutual independence of $X_1, \dots, X_T$ on the sample space $\mathcal{I}^0$.
\end{theorem}
\begin{proof}
Let $\mathbf{A}$ be the model matrix for the AS independence:
$$
\mathbf{A}= \left(\begin{array}{rrrrrrrrr}
1& 0& 0&\dots & 0& 1& 1& \dots &1\\
0& 1& 0&\dots& 0& 1& 0& \dots&1\\
0& 0& 1&\dots& 0& 0& 1& \dots &1\\
\vdots& \vdots & \vdots &\ddots & \vdots & \vdots& \vdots & \ddots & \vdots \\
0& 0& 0&\dots&1& 0& 0& \dots &1\\
\end{array}\right).
$$
The number of columns of $\mathbf{A}$ is equal to the number of cells in the sample space $\mathcal{I}$, $I = 2^T - 1$.
The model $RM(\mathbf{A})$ is the intersection of the polynomial variety $\mathcal{X}_{\mathbf{A}}$ and the interior of the simplex $\Delta_{I - 1}$. The variety $\mathcal{X}_{\mathbf{A}}$ is non-homogeneous, because among its generators there is at least one non-homogeneous polynomial. In order to obtain the projective closure of $\mathcal{X}_{\mathbf{A}}$ \citep[cf.][]{Cox}, add the ``no feature present'' cell, indexed by $0$, to the sample space, choose a Gr{\"o}bner basis of the ideal $\mathscr{I}_{\mathbf{A}}$, and homogenize all non-homogeneous polynomials in this basis using the cell probability $p_{0}$. Because the projective closure of $\mathcal{X}_{\mathbf{A}}$ is the minimal homogeneous variety in the projective space whose dehomogenization is $\mathcal{X}_{\mathbf{A}}$ \citep{Cox}, Theorem \ref{thConjecture1}(ii) implies that this closure can be described using the matrix
$$\mathbf{A}_0= \left(\begin{array}{rrrrrrrrrrr}
1&1 & 1& 1& \dots & 1& 1& 1& \dots& 1\\
0&1& 0& 0&\dots & 0& 1& 1& \dots &1\\
0&0& 1& 0&\dots& 0& 1& 0& \dots&1\\
0& 0& 0& 1&\dots& 0& 0& 1& \dots &1\\
\vdots& \vdots& \vdots & \vdots &\ddots & \vdots & \vdots& \vdots & \ddots & \vdots \\
0& 0& 0& 0&\dots&1& 0& 0& \dots &1\\
\end{array}\right).$$
Each distribution in $RM(\mathbf{A})$ has the multiplicative structure prescribed by $\mathbf{A}$ \citep{KRextended}, and during the homogenization, is mapped to a positive distribution in $\mathcal{X}_{\mathbf{A}_0}$. Because all strictly positive distributions in $\mathcal{X}_{\mathbf{A}_0}$ have the multiplicative structure prescribed by $\mathbf{A}_0$, they comprise the relational model $RM(\mathbf{A}_0)$. The matrix $\mathbf{A}_0$ describes the model of mutual independence between $X_1, \dots, X_T$ in the effect coding, and the proof is complete.
\end{proof}
The homogenization (in the language of algebraic geometry) or regularization (in the language of the exponential families) leads to a simpler structure, which allows a simpler calculation of the MLE. However, the additional cell was not observed in these cases, and assuming its frequency is zero is ungrounded and may lead to wrong inference.
The framework developed here may also be used to define context-specific independence, so that conditional independence holds in one context and AS independence in another. To illustrate, let $X_1$, $X_2$, $X_3$ be random variables taking values in $\{0,1\}$. Assume that the $(0,0,0)$ outcome is impossible, so the sample space can be expressed as:
$$
\begin{tabular}{ccccc}
& \multicolumn{2}{c}{$X_3=0$}&\multicolumn{2}{c}{$X_3=1$}\\
\cmidrule(lr){2-3} \cmidrule(lr){4-5}
& $X_2=0$ & $X_2=1$ & $X_2=0$ & $X_2=1$ \\ [3pt]
\hline \\
$X_1=0$ &- & $p_{010}$ & $p_{001}$ & $p_{011}$\\
$X_1=1$ & $p_{100}$ & $p_{110}$ & $p_{101}$ & $p_{111}$\\
\end{tabular}
$$
Let $\boldsymbol p = (p_{001}, p_{010}, p_{011}, p_{100}, p_{101}, p_{110}, p_{111})$, and consider the relational model without the overall effect generated by
\begin{equation}\label{modMxCIAS}
\mathbf{A}_0 = \left(\begin{array}{ccccccc}
0&0&0&1&1&1&1\\
0&1&1&0&0&1&1\\
1&0&1&0&1&0&1\\
0&0&0&0&1&0&1\\
0&0&1&0&0&0&1\\
\end{array} \right).
\end{equation}
The kernel basis matrix is equal to:
\begin{equation}\label{kernelCIconventional}
\mathbf{D}_0 = \left(\begin{array}{rrrrrrr}
0& -1& 0& -1& 0& 1&0 \\
1&0&-1&0&-1&0&1\\
\end{array} \right),
\end{equation}
and thus, the model can be specified in terms of the following two generalized odds ratios:
$$\mathcal{COR}(X_1X_2 \mid X_3 = 0) = \frac{p_{110}}{p_{010}p_{100}} = 1, \qquad \mathcal{COR}(X_1X_2 \mid X_3 = 1) = \frac{p_{001}p_{111}}{p_{011}p_{101}} = 1.$$
The second constraint expresses the (conventional) context-specific independence of $X_1$ and $X_2$ given $X_3 = 1$. The first odds ratio is non-homogeneous, and the corresponding constraint may be seen as the context-specific AS independence of $X_1$ and $X_2$ given $X_3 = 0$.
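That $\mathbf{D}_0$ is indeed a kernel basis matrix of the matrix in (\ref{modMxCIAS}) can be checked numerically; a sketch assuming numpy, with the cells ordered $(001),(010),(011),(100),(101),(110),(111)$:

```python
import numpy as np

A0 = np.array([[0, 0, 0, 1, 1, 1, 1],
               [0, 1, 1, 0, 0, 1, 1],
               [1, 0, 1, 0, 1, 0, 1],
               [0, 0, 0, 0, 1, 0, 1],
               [0, 0, 1, 0, 0, 0, 1]])
D0 = np.array([[0, -1,  0, -1,  0, 1, 0],
               [1,  0, -1,  0, -1, 0, 1]])

# A0 has full row rank 5, so dim Ker(A0) = 7 - 5 = 2;
# the two rows of D0 lie in the kernel and are independent
assert np.all(A0 @ D0.T == 0)
assert np.linalg.matrix_rank(A0) == 5
assert np.linalg.matrix_rank(D0) == 2
```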
\section{ML estimation with and without the overall effect}\label{MLEsection}
The properties of the ML estimates under relational models, discussed in detail in \cite{KRD11} and \cite{KRextended}, are summarized here in the language of the linear and multiplicative families defined by the model matrix and its kernel basis matrix. The conditions of existence of the MLE are reviewed first.
Let $\boldsymbol a_1, \dots, \boldsymbol a_{|\mathcal{I}|}$ denote the columns of $\mathbf{A}$, and let $C_{\mathbf{A}} = \{ \boldsymbol t \in \mathbb{R}^J_{\geq 0}: \,\, \exists \boldsymbol p \in \mathbb{R}^{|\mathcal{I}|}_{\geq 0} \quad \boldsymbol t = \mathbf{A}\boldsymbol p\}$ be the polyhedral cone whose relative interior consists of those $\boldsymbol t \in \mathbb{R}^J_{> 0}$ for which there exists a $\boldsymbol p > \boldsymbol 0$ satisfying $\boldsymbol t = \mathbf{A}\boldsymbol p$. A set of indices $F = \{i_1, i_2, \dots,i_f\}$ is called facial if the columns $\boldsymbol a_{i_1}, \boldsymbol a_{i_2}, \dots, \boldsymbol a_{i_f}$ are affinely independent and span a proper face of $C_{\mathbf{A}}$ \citep*[cf.][]{GrunbaumConvex, GeigerMeekSturm2006,FienbergRinaldo2012}. It can be shown that a set $F$ is facial if and only if there exists a $\boldsymbol c \in \mathbb{R}^J$, such that $\boldsymbol c'\boldsymbol a_i = 0$ for every $i \in F$ and $\boldsymbol c'\boldsymbol a_i > 0$ for every $i \notin F$.
Let $\boldsymbol q \in \mathcal{P}$ and let $\cg K$ be the set of $\kappa > 0$, such that, for a fixed $\kappa$, the linear family
\begin{equation}\label{LinFam}
\cg F(\mathbf A, \bd q,\kappa) = \{\bd r \in \cg P: \,\, \mathbf{A} \bd r = \kappa\mathbf{A}\bd q\}
\end{equation}
is not empty, and let $\cg F(\mathbf A, \bd q) = \bigcup_{\cg K} \cg F(\mathbf A, \bd q,\kappa)$. For each $\kappa > 0$, the linear family $\cg{F}(\mathbf{A}, \bd q, \kappa)$ is a polyhedron in the cone $C_{\mathbf{A}}$.
\begin{theorem}\label{MLEextendTHnew}\citep{KRextended}
Let $RM(\mathbf{A})$ be a relational model, with or without the overall effect, and let $\boldsymbol q$ be the observed distribution.
\begin{enumerate}
\item The MLE $\hat{\boldsymbol p}_{q}$ given $\boldsymbol q$ exists if and only if:
\begin{enumerate}[(i)]
\item $supp(\boldsymbol q) = \mathcal{I}$, or
\item $supp(\boldsymbol q) \subsetneq \mathcal{I}$ and,
for all facial sets $F$ of $\mathbf{A}$, $supp(\boldsymbol q) \not\subseteq F$.
\end{enumerate}
In either case, $\hat{\bd p}_q$ is the unique element of $\cg F(\mathbf A, \bd q) \cap int(\mathcal{X}_{\mathbf{A}})$, and there exists a unique constant $\gamma_q > 0$, also depending on $\mathbf A$, such that:
$$ \mathbf{A}\hat{\bd p}_q = \gamma_q \mathbf{A} \bd q, \quad \bd 1^{\prime} \hat{\bd{p}}_q = 1.$$
\item The MLE under the extended model $\,\xbar{RM}(\mathbf{A})$, defined in (\ref{ExtMprob}), always exists and is the unique point of $\mathcal{X}_{\mathbf{A}}$ which satisfies:
\begin{align}
&\mathbf{A}\boldsymbol{p} = \gamma_q \mathbf{A } \boldsymbol q, \mbox{ for some } \gamma_q > 0; \label{A} \\
&\boldsymbol 1'\boldsymbol p = 1. \nonumber
\end{align}
\end{enumerate}
\end{theorem}
The statements follow from Theorem 4.1 in \cite{KRextended} and Corollary 4.2 in \cite{KRD11}, and the proof is thus omitted. The constant $\gamma_q$, called the adjustment factor, is the ratio between the subset sums of the MLE, $\mathbf{A}\hat{\bd p}_q$, and the subset sums of the observed distribution, $\mathbf{A}\bd q$. If the overall effect is present in the model, $\gamma_q = 1$ for all $\boldsymbol q$.
Let $\mathbf{A}$ be a model matrix whose row space does not contain $\boldsymbol 1^{\prime}$, and let $\bar{\mathbf{A}}$ be the matrix obtained by augmenting $\mathbf{A}$ with the row $\boldsymbol 1^{\prime}$. It will be shown in the proof of the next theorem that every facial set of $\mathbf{A}$ is facial for $\bar{\mathbf{A}}$. If the observed $\boldsymbol q$ is positive, the MLEs $\hat{\bd p}_{q}$ and $\bar{\bd p}_q$ under the models $RM({\mathbf A})$ and $RM(\bar{\mathbf A})$, respectively, exist. However, as implied by the relationship between the facial sets of ${\mathbf A}$ and $\bar{\mathbf{A}}$, if $\boldsymbol q$ has some zeros, the MLE may exist under $RM({\mathbf A})$ but not under $RM(\bar{\mathbf A})$, or neither of the MLEs exists.
\begin{theorem}
Let $\mathbf{A}$ be a model matrix whose row space does not contain $\boldsymbol 1^{\prime}$, and let $\bar{\mathbf{A}}$ be the matrix obtained by augmenting $\mathbf{A}$ with the row $\boldsymbol 1^{\prime}$. Let $\boldsymbol q$ be the observed distribution. If, given $\boldsymbol q$, the MLE under $RM(\bar{\mathbf A})$ exists, so does the MLE under $RM({\mathbf A})$.
\end{theorem}
\begin{proof}
If $\boldsymbol q > \boldsymbol 0$, both MLEs exist.
Assume that $\boldsymbol q$ has some zeros, that is, $supp(\boldsymbol q) \subsetneq \mathcal{I}$, and that the MLE under $RM(\bar{\mathbf{A}})$ exists. It will be shown next that for any facial set $F$ of $\mathbf{A}$, $supp(\boldsymbol q) \not\subseteq F$.
The proof is by contradiction. Let $F_0$ be a facial set of $\mathbf{A}$, such that $supp(\boldsymbol q) \subseteq F_0$. Then there exists a $\boldsymbol c \in \mathbb{R}^J$, such that $\boldsymbol c'\boldsymbol a_i = 0$ for every $i \in F_0$ and $\boldsymbol c'\boldsymbol a_i > 0$ for every $i \notin F_0$.
Denote by $\bar{\boldsymbol a}_1, \dots, \bar{\boldsymbol a}_I$ the columns of $\bar{\mathbf{A}}$. By construction, $\bar{\boldsymbol a}_i = (1, \boldsymbol a_i^{\prime})^{\prime}$, $\,i = 1, \dots, I$. Let $\bar{\boldsymbol c} = (0,\boldsymbol c^{\prime})^{\prime}$. Then,
$$\bar{\boldsymbol c}^{\prime}\bar{\boldsymbol a}_i = 0\cdot 1+\boldsymbol c'\boldsymbol a_i \;\left\{ \begin{array}{ll} = 0, & \mbox{ for } i \in F_0, \\
> 0, & \mbox{ for } i \notin F_0, \end{array} \right.
$$
and thus, $F_0$ is a facial set of $\bar{\mathbf{A}}$. Because $supp(\boldsymbol q) \subset F_0$, the MLE under $RM(\bar{\mathbf{A}})$, given $\boldsymbol q$, does not exist, which contradicts the initial assumption. This completes the proof.
\end{proof}
\textbf{Example \ref{Example0}} (revisited):
Let $\boldsymbol q_1 = (0,0,0,0,0,0,1)'$ be the observed distribution. Because $supp(\boldsymbol q_1) = \{7\}$ is not a subset of any facial sets of $\mathbf{A}$, the MLE exists:
$$\hat{\boldsymbol p}_{q_1} = \left(\sqrt[3]{2} - 1, \sqrt[3]{2} - 1, \sqrt[3]{2} - 1, (\sqrt[3]{2} - 1)^2, (\sqrt[3]{2} - 1)^2, (\sqrt[3]{2} - 1)^2, (\sqrt[3]{2} - 1)^3\right)',$$
with $\hat{\gamma}_q = 2 - \sqrt[3]{4}$.
On the other hand, the set of indices $F = \{1,4,5,7\}$ is facial for $\bar{\mathbf{A}}$, and $supp(\boldsymbol q_1) \subsetneq F$. In this case, the MLE exists only in the extended model $\,\xbar{RM}(\bar{\mathbf{A}})$, and is equal to $\boldsymbol q_1$ itself.
Let $\boldsymbol q_2 = (1,0,0,0,0,0,0)'$. Because $supp(\boldsymbol q_2) = \{1\}$ is a subset of a facial set of $\mathbf{A}$ and of a facial set of $\bar{\mathbf{A}}$, the MLEs exist only in the corresponding extended models.
\qed
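The closed-form expressions for $\boldsymbol q_1$ can be verified with a few lines of arithmetic; a quick check (plain Python, no dependencies):

```python
# MLE for q_1 = (0,...,0,1)' under AS independence of three attributes:
# single-feature cells get probability t, pairs t^2, the triple t^3
t = 2 ** (1 / 3) - 1
p_hat = [t, t, t, t**2, t**2, t**2, t**3]

# Normalization: 3t + 3t^2 + t^3 = (1 + t)^3 - 1 = 2 - 1 = 1
assert abs(sum(p_hat) - 1) < 1e-12

# Each subset sum of p_hat is t + 2t^2 + t^3 = t(1 + t)^2, while each
# subset sum of q_1 is 1, so the adjustment factor is t(1 + t)^2 = 2 - 4^(1/3)
gamma = t * (1 + t) ** 2
assert abs(gamma - (2 - 4 ** (1 / 3))) < 1e-12
```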
Further properties of the adjustment factor, including its geometrical meaning, are described next, relying on the following result:
\begin{theorem}\label{ThFiber}
Let $\mathbf{A}$ be a model matrix whose row space does not contain $\boldsymbol 1^{\prime}$, and let $\bar{\mathbf{A}}$ be the matrix obtained by augmenting $\mathbf{A}$ with the row $\boldsymbol 1^{\prime}$. For any $\bd r_1,\:\bd r_2 \in \mathcal{P}$, $\bd r_1 \neq \bd r_2$, the following holds:
\begin{enumerate}
\item \begin{enumerate}[(i)]
\item The MLEs under $RM(\mathbf A)$, given they exist, are equal if and only if the subset sums entailed by $\mathbf{A}$ are proportional:
$$
\hat{\bd p}_{r_1} = \hat{\bd p}_{r_2} \quad \Leftrightarrow \quad \mathbf{A} \bd r_1 = \kappa \mathbf{A} \bd r_2 \quad \mbox{for some } \kappa\in \cg K.
$$
\item The adjustment factors of these MLEs satisfy $\kappa\hat\gamma_{r_1} = \hat\gamma_{r_2}$.
\end{enumerate}
\item The MLEs under $RM(\bar{\mathbf A})$, given they exist, are equal if and only if the subset sums entailed by $\mathbf{A}$ coincide:
$$
\bar{\bd p}_{r_1} = \bar{\bd p}_{r_2} \quad \Leftrightarrow \quad \mathbf{A} \bd r_1 = \mathbf{A} \bd r_2.
$$
\end{enumerate}
\end{theorem}
The statements are a reformulation of Theorem 4.4 in \cite{KRD11}, and no proofs are provided here. The relationship between the adjustment factors is obvious.
The theorem implies that $\cg{F}(\mathbf A, \bd q)$ is an equivalence class in $\mathcal{P}$, in the sense that, for any $\bd r \in \cg{F}(\mathbf A, \bd q)$, the MLE under $RM(\mathbf{A})$ satisfies $\hat{\bd p}_{r}$ = $\hat{\bd p}_{q}$. Each sub-family $\cg F(\mathbf A, \bd q,\kappa)$ is characterized by its unique adjustment factor under $RM(\mathbf{A})$. That is, for every $ \boldsymbol r_1, \boldsymbol r_2 \in \mathcal{F}(\mathbf{A}, \boldsymbol q, \kappa)$, $\,\,\boldsymbol r_1 \neq \boldsymbol r_2$,
$$\hat{\boldsymbol p}_{r_1} = \hat{\boldsymbol p}_{r_2} = \hat{\boldsymbol p}_{q}, \qquad \hat\gamma_{r_1} = \hat\gamma_{r_2} =\hat\gamma_{q}/\kappa.$$
In addition, $\,\,\bar{\boldsymbol p}_{r_1} = \bar{\boldsymbol p}_{r_2}$ for any $ \boldsymbol r_1, \boldsymbol r_2 \in \mathcal{F}(\mathbf{A}, \boldsymbol q, \kappa)$, and therefore, for a fixed $\kappa$, $\cg F(\mathbf A, \bd q,\kappa)$ is an equivalence class under $RM(\bar{\mathbf{A}})$.
From a geometrical point of view, $\cg{F}(\mathbf{A}, \bd q)$ is a polyhedron which decomposes into the polyhedra $\cg F(\mathbf{A}, \bd q,\kappa)$, with $\kappa>0$; clearly, $\bd q\in \cg F(\mathbf{A},\bd q,1)$. The MLE under $RM(\bar{\mathbf A})$ given $\bd r\in \cg F(\mathbf{A}, \bd q,\kappa)$ is the unique point common to the polyhedron $\cg F(\mathbf{A}, \bd q,\kappa)$ and the variety $\mathcal{X}_{\bar{\mathbf{A}}}$. Among the feasible values of $\kappa$ there exists a unique one, say $\hat\kappa$, such that the MLE $\bar{\bd p}_{r}$, $\forall\bd r\in \cg F(\mathbf{A}, \bd q,\hat \kappa)$, coincides with the MLE of $\bd q$ under $RM(\mathbf{A})$, $\hat{\bd p}_{q}$. This happens when $\hat\gamma_{r}=1$, so that, by Theorem \ref{ThFiber}, $\hat\kappa = \hat\gamma_{q}$. This latter point, $\hat{\bd p}_{q}$, is the intersection of $\cg F(\mathbf{A}, \bd q)$ and the non-homogeneous variety $\mathcal{X}_{{\mathbf{A}}}$. This specific value, $\hat\kappa = \hat\gamma_q$, is the adjustment factor of the MLE under $RM(\mathbf{A})$ given $\bd q$. An illustration is given next.
Relational models for probabilities without the overall effect are curved exponential families, and the computation of the MLE under such models is not straightforward. An extension of the iterative proportional fitting procedure, G-IPF, which can be used for models both with and without the overall effect, was proposed in \cite{KRipf1} and is implemented in \cite{gIPFpackage}. Alternatively, the MLEs can be computed, for instance, using the Newton--Raphson algorithm or the algorithm of \cite{EvansForcina11}. One of the algorithms described in \cite{Forcina2017} suggested a possible modification of G-IPF. A brief description of the original and modified versions of G-IPF is given below:
\begin{center}
\begin{tabular}{lcr}
G-IPF & & G-IPFm \\
\hline
& &\\
Fix $\gamma> 0$ & & Fix $\gamma > 0$ \\
& &\\
\multicolumn{3}{c}{Run IPF($\gamma$) to obtain $\boldsymbol p_{\gamma}$, where} \\
& &\\
$\mathbf{A} \boldsymbol p_{\gamma} = \gamma \mathbf{A}\boldsymbol q$ & & $\bar{\mathbf{A}} \boldsymbol p_{\gamma} = \left(\begin{array}{c} 1 \\ \gamma \mathbf{A}\boldsymbol q\end{array}\right)$ \\
& & \\
$\mathbf{D} \log \boldsymbol p_{\gamma} = \boldsymbol 0$ & & $\bar{\mathbf{D}} \log \boldsymbol p_{\gamma} = \boldsymbol 0$\\
& &\\
\multicolumn{3}{c}{Adjust $\gamma$, to approach the solution of} \\
& &\\
$\boldsymbol 1^{\prime}\boldsymbol p_{\gamma} = 1$ & & $ \boldsymbol d_1^{\prime} \log \boldsymbol p_{\gamma} = 0$ \\
& &\\
\multicolumn{3}{c}{{Iterate with the new $\gamma$ }} \\
& &\\
\end{tabular}
\end{center}
\begin{theorem}\label{newGIPFconvTh}
If $\boldsymbol q > \boldsymbol 0$, the G-IPFm algorithm converges, and its limit is equal to $\hat{\boldsymbol{p}}_q$, the ML estimate of $\boldsymbol p$ under $RM(\mathbf{A})$.
\end{theorem}
\begin{proof}
The convergence of one iteration of G-IPFm, when $\gamma$ is fixed, can be proved similarly to Theorem 3.2 in \cite{KRipf1}. The limit is positive, $\tilde{\boldsymbol p}_{\gamma} > \boldsymbol 0$, and thus, by Lemma 1 in \cite{Forcina2017}, $f(\gamma) = \boldsymbol d_1^{\prime} \log{\tilde{\bd p}}_{\gamma}$ is a strictly increasing and differentiable function of $\gamma$. So one can update $\gamma$ until, for some ${\gamma}_q$, the G-IPFm limit satisfies $f(\gamma_q) =\boldsymbol d_1^{\prime} \log \tilde{\boldsymbol p}_{{\gamma}_q} = 0$.
Because, in this case,
$$ \bd A \tilde{\bd{p}}_{\gamma_q} = {\gamma}_q \bd A\bd q, \quad \bd D \log \tilde{\bd{p}}_{\gamma_q} = \bd 0, \quad \bd 1^{\prime} \tilde{\bd{p}}_{\gamma_q} =1,$$
the uniqueness of the MLE implies that $\tilde{\bd{p}}_{\gamma_q} = \hat{\bd p}_q$ and ${\gamma}_{q} = \hat{\gamma}_q$.
\end{proof}
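The adjustment of $\gamma$ described above can be sketched numerically. In the fragment below (a sketch only: the function name is ours, and a damped Newton step on the convex moment-matching problem is used in place of the inner IPF), the equations $\mathbf{A} \boldsymbol p = \gamma \mathbf{A}\boldsymbol q$ are solved over the family $\boldsymbol p=\exp(\mathbf{A}^{\prime}\boldsymbol\theta)$ for fixed $\gamma$, and $\gamma$ is then bisected until $\boldsymbol 1^{\prime}\boldsymbol p_{\gamma}=1$, as in the original G-IPF.

```python
import numpy as np

def mle_no_overall_effect(A, q, lo=1e-3, hi=1e3, tol=1e-10):
    """Sketch of the gamma-adjustment in G-IPF.  For fixed gamma, the moment
    equations A p = gamma * A q over p = exp(A' theta) are solved by damped
    Newton (used here in place of the inner IPF); gamma is then bisected
    until 1'p = 1, relying on the monotonicity of the total mass in gamma."""
    Aq = A @ q

    def inner(gamma):
        theta = np.zeros(A.shape[0])
        # convex objective whose gradient is A p - gamma * A q
        obj = lambda th: np.sum(np.exp(A.T @ th)) - gamma * Aq @ th
        for _ in range(200):
            p = np.exp(A.T @ theta)
            grad = A @ p - gamma * Aq
            if np.max(np.abs(grad)) < 1e-12:
                break
            step = np.linalg.solve(A @ np.diag(p) @ A.T, grad)
            t = 1.0
            while obj(theta - t * step) > obj(theta) and t > 1e-12:
                t /= 2                      # step halving for stability
            theta = theta - t * step
        return np.exp(A.T @ theta)

    for _ in range(200):
        gamma = np.sqrt(lo * hi)            # bisection on a log scale
        s = np.sum(inner(gamma))
        if abs(s - 1.0) < tol:
            break
        lo, hi = (gamma, hi) if s < 1.0 else (lo, gamma)
    return inner(gamma), gamma
```

Run on the ELP matrix of Section \ref{hemato} with $\boldsymbol q$ equal to the estimates in (\ref{MLE1}), the routine returns a distribution that sums to one, satisfies the multiplicative constraints (\ref{Ploss}) by construction, and has margins $\hat\gamma\,\mathbf{A}\boldsymbol q$.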
The original G-IPF can be used whether or not $\boldsymbol q$ has some zeros, and it computes a sequence whose elements are the unique intersections of the variety $\mathcal{X}_{\mathbf{A}}$ and each of the polyhedra defined by $\mathbf{A}\tilde{\bd \tau} = \gamma \mathbf{A} \bd q$ for different $\gamma$. This sequence converges, and its limit belongs to the hyperplane $\boldsymbol 1^{\prime} \boldsymbol \tau = 1$ \citep{KRextended}. G-IPFm produces a sequence whose elements are the unique intersections of the interior of the homogeneous variety $\mathcal{X}_{\bar{\mathbf{A}}}$ and each of the polyhedra $\cg F(\mathbf{A}, \bd q,\gamma)$. The limit of this sequence belongs to the interior of the non-homogeneous variety $\mathcal{X}_{\mathbf{A}}$. To ensure the existence, differentiability, and monotonicity of $f(\gamma)$ described above, the G-IPFm algorithm should be applied only when $\boldsymbol q > \boldsymbol 0$. If $\boldsymbol q$ has some zero components, the positive MLE $\hat{\bd p}_q$ may still exist, see Theorem \ref{MLEextendTHnew}(ii). However, because, in general, the matrices $\mathbf{A}$ and $\bar{\mathbf{A}}$ have different facial sets, for some such $\boldsymbol q$ no strictly positive $\boldsymbol p_{\gamma}$ would satisfy $\bar{\mathbf{A}} \boldsymbol p_{\gamma} = \left(\begin{array}{c} 1 \\ \gamma \mathbf{A}\boldsymbol q\end{array}\right)$.
Some limitations and advantages of using the generalized IPF were addressed in \cite{KRipf1}, Section 2. In particular, while the assumption that the model matrix has full row rank can be relaxed for G-IPF, it is one of the major assumptions for the Newton-Raphson and the Fisher scoring algorithms. The algorithms proposed in \cite{Forcina2017} also require the model matrix to have full row rank, and their convergence relies on the positivity of the observed distribution.
\section{Loss of potentials in hematopoiesis}\label{hemato}
Hematopoietic stem cells (HSC) are able to become progenitors that, in turn, may develop into mature blood cells. Understanding the process of forming mature blood cells, called hematopoiesis, is one of the most important aims of cell biology, as it may help to develop new cancer treatments. The HSC progenitors can proliferate (produce cells of the same type) or differentiate (produce cells of different types). Multiple experiments suggest that HSC progenitors are multipotent cells and differentiate by losing one of the potentials. The mature blood cells are unipotent; they neither proliferate nor differentiate. The differentiation is believed to be a hierarchical process, with HSC progenitors and mature blood cells at the highest and the lowest levels, respectively.
The models discussed below apply to the steady state of hematopoiesis, under the assumption that cells neither proliferate nor die and can undergo only the first phase of differentiation. Various hierarchical models of differentiation have been proposed \citep*[cf.][]{Kawamoto2010,YeCells}. The equal loss of potentials (ELP) model was introduced in \cite{Perie2014} and is described next. Denote by $MDB$ the three-potential HSC progenitor of the $M$, $D$, and $B$ mature blood cell types. During the first phase of differentiation, an $MDB$ progenitor can differentiate by losing either one or two potentials at the same time, and thus produce a cell of one of the six types: $M$, $D$, $B$, $MD$, $MB$, $DB$.
Let $\bd p$ be the vector of probabilities of losing the corresponding potentials from $MDB$:
$$\bd p = (p_{*DB}, p_{M*B},p_{MD*},p_{**B},p_{*D*},p_{M**})^{\prime}.$$
For example, $p_{*DB}$ is the probability of losing the $M$ potential from $MDB$, $p_{M*B}$ is the probability of losing the $D$ potential from $MDB$, $p_{**B}$ is the probability of losing the $M$ and $D$ potentials from $MDB$ at the same time, and so on.
The ELP model assumes that ``the probability to lose two potentials at the same time is the product of
the probability of losing each of the potentials'' \citep[see Caption to Fig 3A,][]{Perie2014}:
\begin{equation}\label{Ploss}
p_{**B} = p_{*DB} \cdot p_{M*B}, \qquad p_{M**} = p_{MD*} \cdot p_{M*B}, \qquad p_{*D*} = p_{*DB} \cdot p_{MD*}.
\end{equation}
The model specified by (\ref{Ploss}) is the relational model generated by the matrix
\begin{equation}\label{mxELP}
\mathbf A = \begin{pmatrix}
1& 0& 0& 1& 1& 0\\
0& 1& 0& 1& 0& 1\\
0& 0& 1& 0& 1& 1
\end{pmatrix},
\end{equation}
or, in a parametric form,
\begin{align}\label{ELPour}
p_{*DB} &= {\alpha_M}, \quad p_{M*B} = {\alpha_D}, \quad p_{MD*} = {\alpha_B},\nonumber\\
p_{**B} &= {\alpha_M\alpha_D},\quad p_{*D*} = {\alpha_M\alpha_B},\quad p_{M**} = {\alpha_D\alpha_B},
\end{align}
where, using the notation in \cite{Perie2014}, $\alpha_M,\alpha_D, \alpha_B$ are the parameters associated with the loss of the corresponding potential from $MDB$. It can easily be verified that the relational model generated by (\ref{mxELP}) does not have the overall effect, so the normalization has to be added as a separate condition:
$$Z = p_{*DB} + p_{M*B} + p_{MD*} + p_{**B} + p_{*D*} + p_{M**} = 1.$$
\cite{Perie2014} define the ELP model in the following parametric form:
\begin{align}\label{ELP}
p_{*DB} &= {\alpha_M}/{Z}, \quad p_{M*B} = {\alpha_D}/{Z}, \quad p_{MD*} = {\alpha_B}/{Z},\nonumber\\
p_{**B} &= {\alpha_M\alpha_D}/{Z},\quad p_{*D*} = {\alpha_M\alpha_B}/{Z},\quad p_{M**} = {\alpha_D\alpha_B}/{Z}.
\end{align}
That is, the authors rescaled the loss probabilities to force them to sum to $1$. In fact, (\ref{ELP}) is also a relational model; it is generated by
\begin{equation}\label{ELPmatrix}
\bar{\mathbf{A}} = \left(\begin{array}{rrrrrr}
1&1&1&1&1&1\\
1& 0& 0& 1& 1& 0\\
0& 1& 0& 1& 0& 1\\
0& 0& 1& 0& 1& 1
\end{array}\right),
\end{equation}
and can be obtained by adding the overall effect to the model defined by (\ref{mxELP}). Because the original model does not have the overall effect, adding the row of $1$'s changed the model. One can check by substitution that the probabilities in (\ref{ELP}) do not satisfy the multiplicative constraints (\ref{Ploss}). The estimates of the probabilities of loss of potentials from the $MDB$ cells are shown in Figure 3B of \cite{Perie2014}. In the notation used here,
\begin{eqnarray}\label{MLE1}
&& \hat{p}_{*DB} = 0.35, \quad \hat p_{M*B} = 0.08, \quad \hat{p}_{MD*} = 0.49, \nonumber \\
&&\hat p_{**B} =0.01, \quad \hat p_{*D*} = 0.06, \quad \hat p_{M**} = 0.01.
\end{eqnarray}
These probabilities sum to $1$, but do not satisfy (\ref{Ploss}) either.
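Both statements can be verified directly; a short numerical check (values copied from (\ref{MLE1}), variable names ours) also confirms that the unnormalized parametrization (\ref{ELPour}) satisfies the constraints by construction:

```python
# Estimates from (MLE1), in the order p_*DB, p_M*B, p_MD*, p_**B, p_*D*, p_M**
p_sDB, p_MsB, p_MDs, p_ssB, p_sDs, p_Mss = 0.35, 0.08, 0.49, 0.01, 0.06, 0.01
assert abs((p_sDB + p_MsB + p_MDs + p_ssB + p_sDs + p_Mss) - 1.0) < 1e-12
# the multiplicative constraints (Ploss) fail for these values:
assert abs(p_ssB - p_sDB * p_MsB) > 0.01    # 0.01 vs 0.028
assert abs(p_sDs - p_sDB * p_MDs) > 0.1     # 0.06 vs 0.1715
assert abs(p_Mss - p_MDs * p_MsB) > 0.02    # 0.01 vs 0.0392
# the unnormalized parametrization (ELPour) satisfies them exactly:
aM, aD, aB = 0.3, 0.5, 0.7                  # arbitrary positive parameters
q = [aM, aD, aB, aM * aD, aM * aB, aD * aB]
assert q[3] == q[0] * q[1] and q[4] == q[0] * q[2] and q[5] == q[1] * q[2]
```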
\section*{Acknowledgments}
The authors wish to thank Antonio Forcina for his thought-provoking discussions, Ingmar Glauche and Christoph Baldow for their help with understanding the main concepts of hematopoiesis, and Wicher Bergsma. The second author is also a Recurrent Visiting Professor at the Central European University and the moral support received is acknowledged.
\section{Differentiation towards cell lineages}\label{LineagesDiff}
The next example will demonstrate the difference between the MLEs under a model without the overall effect and the model obtained by adding the overall effect, based on the same data.
In the cell development process, the collection of cells with the same progenitor is referred to as a clone.
\cite{Ramos2010} studied cell differentiation toward the endothelial ($E$), myeloid ($M$), and lymphoid ($L$) lineages, and aimed to show the existence of a cell that is able to differentiate toward all three lineages. The frequency distribution of $85$ clones with regard to differentiation
potential towards the endothelial, myeloid, and lymphoid lineages is shown in Table \ref{cloneData}.
\begin{table}
\centering
\begin{tabular}{c|c|c|c|c|c|c}
$E$ & $M$ & $L$ & $ML$ & $EL$ & $EM$ & $EML$ \\
\hline
0.025 & 0& 0& 0.165 & 0.07 & 0.045 & 0.695
\end{tabular}
\caption{Lineage sharing distribution of 85 clones \citep{Ramos2010}.}
\label{cloneData}
\end{table}
Among the $85$ observed clones, none appeared only in the $M$ or only in the $L$ lineage. Because the existence of such clones is biologically plausible, the corresponding zero entries in Table \ref{cloneData} can be classified as \textit{observed} zeros rather than \textit{structural} zeros.
\begin{figure}
\centering
\noindent\includegraphics[scale=0.44]{ClonalAnalysis1.pdf}
\end{figure}
Because a fixed number of clones were involved in the experiment, one can assume that multinomial sampling was used, and a relational model for probabilities is relevant. Let $\boldsymbol p = (p_{E}, p_{M}, p_{L}, p_{EM}, p_{EL}, p_{ML}, p_{EML})^{\prime}$ denote the vector of probabilities of presence in the corresponding combination of lineages, that is, of \textit{keeping} the corresponding potentials. The hypothesis of independence between the lineages, which might be expected to hold for clone differentiation, is defined as:
$$p_{EM} = p_{E} p_{M}, \quad p_{EL} = p_{E}p_{L}, \quad p_{ML} = p_{M}p_{L}, \quad p_{EML} = p_{E}p_{M}p_{L}.$$
This is a special case of the model of AS independence, see Example \ref{Example1}, and is the relational model generated by
$$
\mathbf{A}= \left(\begin{array}{rrrrrrr}
1& 0& 0& 1& 1& 0& 1\\
0& 1& 0& 1& 0& 1& 1\\
0& 0& 1& 0& 1& 1& 1\\
\end{array}\right).
$$
Augmenting $\mathbf{A}$ with the row of $1$'s,
\begin{equation*}
\bar{\mathbf{A}} = \left(\begin{array}{rrrrrrr}
1& 1& 1& 1& 1& 1& 1\\
1& 0& 0& 1& 1& 0& 1\\
0& 1& 0& 1& 0& 1& 1\\
0& 0& 1& 0& 1& 1& 1\\
\end{array}\right)
\end{equation*}
leads to the model with the overall effect, which can be specified using the multiplicative constraints:
$$p_E /p_{EM} = p_{EL}/p_{EML}, \qquad p_M/ p_{EM} = p_{ML}/p_{EML}, \qquad p_L/ p_{EL} = p_{ML}/ p_{EML}.$$
Under this model, the odds of keeping one potential out of two are independent of whether or not the third potential is present.
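A quick numerical check (hypothetical parameter values, names ours) confirms that any distribution of the log-linear form $\boldsymbol p\propto\exp(\bar{\mathbf{A}}^{\prime}\boldsymbol\theta)$ satisfies these odds constraints:

```python
import numpy as np

# the augmented matrix with the overall effect (columns: E, M, L, EM, EL, ML, EML)
Abar = np.array([[1, 1, 1, 1, 1, 1, 1],
                 [1, 0, 0, 1, 1, 0, 1],
                 [0, 1, 0, 1, 0, 1, 1],
                 [0, 0, 1, 0, 1, 1, 1]], dtype=float)
theta = np.random.default_rng(0).normal(size=4)   # arbitrary parameters
p = np.exp(Abar.T @ theta)
p /= p.sum()                                      # overall effect absorbs normalization
pE, pM, pL, pEM, pEL, pML, pEML = p
# the odds of keeping one potential out of two do not depend on the third one:
assert np.isclose(pE / pEM, pEL / pEML)
assert np.isclose(pM / pEM, pML / pEML)
assert np.isclose(pL / pEL, pML / pEML)
```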
The model of AS independence, generated by $\mathbf{A}$, does not fit well: the deviance is $G^2 = 446.41$ on four degrees of freedom. Thus, the data provide evidence against independence of the proportions of lineage sharing. The estimated adjustment factor given the data is $\hat{\gamma} = 0.4635$.
However, AS independence formulated for intensities gives a better fit: the deviance is $G^2 = 7.8$ on four degrees of freedom. The data thus support the independence structure for the intensities of lineage sharing.
Because the overall effect is present, the models for probabilities and for intensities generated by $\bar{\mathbf{A}}$ are equivalent \citep{KRD11}. These models fit rather well: the deviance is $G^2 = 7.86$ on three degrees of freedom.
In practice, the data collection procedure is closer to the Poisson sampling, and thus, relational models for intensities are more appropriate for the analyses. \qed
\begin{table}
\centering
\begin{tabular}{l|l|l|l|l|l|l|l}
The potentials kept: & $E$ & $M$ & $L$ & $EM$ & $EL$ & $ML$ & $EML$ \\
\hline
&&&&&&\\ [-6pt]
Observed & 0.025 & \multicolumn{1}{c|}{{0}} & \multicolumn{1}{c|}{{0}} & 0.045 & 0.07 & 0.165 & 0.695 \\
&&&&&&\\ [-3pt]
\multicolumn{1}{l|}{Models for probabilities:} & & & & & &\\
&&&&&&\\ [-6pt]
MLE without OE &20.3936 & 22.5570 & 23.3513 & 5.4120 & 5.6025 & 6.1969 & 1.4868 \\
MLE with OE & 0.48 & 0.91 & 1.26 & 4.56 & 6.33 & 11.86 & 59.60 \\
& & & & & & \\ [-3pt]
\multicolumn{1}{l|}{Models for intensities:} & & & & & &\\
&&&&&&\\ [-6pt]
MLE without OE & 2.72 & 3.83 & 4.40 & 10.43 & 11.98 & 16.83 & 45.84 \\
MLE with OE & 0.48 & 0.91 & 1.26 & 4.56 & 6.33 & 11.86 & 59.60 \\
\end{tabular}
\caption{The observed and estimated frequencies of the lineage sharing among 85 clones.}
\label{MLEclones}
\end{table}
\end{document} |
\begin{document}
\title{Particle systems with coordination}
\begin{abstract}
We consider a generalization of spatial branching coalescing processes in which the behaviour of individuals is not (necessarily) independent; on the contrary, individuals tend to take simultaneous actions. We show that these processes have moment duals, which turn out to be multidimensional diffusions with jumps. Moment duality provides a general framework to study structural properties of the processes in this class. We present some conditions under which the expectation of the process is not affected by coordination and comment on the effect of coordination on the variance. We analyse several examples in more detail, including the nested coalescent, the peripatric coalescent with selection and coordinated migration, and the Parabolic Anderson Model.
\end{abstract}
\noindent
{\it Keywords and phrases.} Interacting particle system, branching coalescing process, coordination, moment duality, nested coalescent, coming down from infinity, peripatric coalescent, Parabolic Anderson Model, Feynman-Kac formula, branching random walk.
\noindent
{\it MSC 2010.} 60J27, 60J80, 92D15, 92D25.
\section{Introduction}
Spatial branching coalescing processes and their duals have received considerable attention in the literature. For example, in \cite{AS} particle systems on a lattice are considered, where particles undergo migration, death, branching and (pair) coalescence, independently of one another. These processes are dual to certain interacting diffusions used in the modelling of spatially interacting populations with mutation, selection and resampling. One of the many questions under investigation is the long-time behaviour of such processes.
On the other hand, \emph{coordinated} transitions of several particles have led to interesting processes in a number of models which are already well-studied in the literature, such as multiple merger coalescents \cite{pitman, MS}. More recently, the seed-bank coalescent with simultaneous switching \cite{BGKW18} has shown qualitatively different features compared to its non-coordinated version. In both these cases, the effects of coordinated vs.~independent actions of particles may lead to drastically different long term behaviour, reflected for example in the question of \lq coming down from infinity\rq.
In this paper, we present a unified framework of spatial branching coalescing interacting particle systems, in which all of the transition types may occur in a coordinated manner. For simplicity, we restrict our presentation to the case of finitely many spatial locations, except for a few remarks in the last section. In our construction, the size of a coordinated transition is determined according to a measure on $[0,1]$. The individuals then \lq decide\rq\ independently, according to the size of the transition, whether or not to participate. This construction is reminiscent of Lambda-coalescents or of the seed-bank coalescent with simultaneous switching, and leads (under suitable conditions on the measures involved) to a continuous time Markov chain with finite jump rates.
As a first result in Section \ref{sec1}, we prove moment duality for this class of coordinated processes. The
SDEs arising as dual processes will then be interpreted in terms of population genetics. We discuss a number of examples of processes in the (recent) mathematical literature, and construct their duals, some of which seem to be new. Via duality, we also provide some results on the long time behaviour of these models, such as a criterion for coming down from infinity for the so-called nested coalescent \cite{BDLS} and for models exhibiting death but no coalescence, and almost sure fixation in a variant of the peripatric coalescent \cite{LM} with non-coordinated selection and coordinated migration. Examples extend to situations not generally looked at from the point of view of population models, such as the famous Parabolic Anderson Model (PAM).
In Section~\ref{sec-E} we show that in the absence of coalescence, the expectation of the coordinated branching coalescing process is the unique solution of a system of ODEs depending only on the total mass of the defining measures. As an example, we consider the PAM branching process and provide a straightforward new proof of the well-known Feynman-Kac formula based on our observation. In Section~\ref{sec-Var}, also in the coalescence-free case, we identify the choice of reproduction, death and migration measures that maximize or minimize the variance of the process, given the total masses of these measures. We use this to provide an upper bound on the variance of the PAM branching process. In Section~\ref{sec-infinity}, we extend some of our results, in particular our proof for the Feynman-Kac formula, to a class of infinite graphs. We point out that a prominent example of a process of our class on infinite graphs is the binary contact path process \cite{Griffeath}, a simple function of which is the contact process. Finally, as another application of the invariance of expectation, we provide a probabilistic interpretation of the expectation process of a branching random walk on an infinite uniform rooted tree.
\section{Coordinated branching coalescing processes}\label{sec-modeldef}
In this section, we present the general framework of this paper. Without coordination,
spatial branching coalescing particle systems in the setup of \cite{AS} are continuous time Markov chains with transitions according to the following definition:
\begin{definition}\label{def:fixed_rate}
Consider a finite set $V$. We write $e_v$ for the unit vector with 1 at the $v$-th coordinate, $v\in V.$ For each $v\in V$ fix the following parameters for
\begin{itemize}
\item pair-coalescence: $c_v\geq 0$,
\item death: $d_v\geq 0$,
\end{itemize}
and for each pair $(v,u)\in V\times V$ fix parameters for
\begin{itemize}
\item branching: $r_{vu}\geq 0$, and
\item migration: $m_{vu}\geq 0$.
\end{itemize}
A \emph{structured branching coalescing process} on $V$ with these parameters is the continuous time Markov chain $(Z_t)_{t\geq 0}$ taking values in $\mathbb N_0^V$ with the following transitions:
\begin{equation}\label{eq:jumps_fixed}
z \mapsto
\begin{cases}
z- e_v+ e_u, & \textrm{ at rate }z_v m_{vu}, v,u\in V \\
z+ e_u, & \textrm{ at rate }z_v r_{vu}, v,u\in V \\
z- e_v, & \textrm{ at rate } c_v\binom{z_v}{2}+d_vz_v.
\end{cases}
\end{equation}
\end{definition}
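For a state $z\in\mathbb N_0^V$ the possible jumps and their rates can be enumerated directly; the following sketch (container and function names are ours) does exactly that for the transitions in \eqref{eq:jumps_fixed}:

```python
from math import comb

def transition_rates(z, m, r, c, d):
    """One-jump transitions and rates of the (non-coordinated) structured
    branching coalescing process: migration at rate z_v*m_vu, branching at
    rate z_v*r_vu, loss of one particle at rate c_v*binom(z_v,2) + d_v*z_v.
    z: dict vertex -> count; m, r: dicts (v,u) -> rate; c, d: dicts v -> rate."""
    rates = {}
    for v in z:
        for u in z:
            if u != v:
                rates[('migrate', v, u)] = z[v] * m.get((v, u), 0.0)
            rates[('branch', v, u)] = z[v] * r.get((v, u), 0.0)
        rates[('lose', v)] = c.get(v, 0.0) * comb(z[v], 2) + d.get(v, 0.0) * z[v]
    return rates
```

From such a rate table one can, for example, run a standard Gillespie simulation of the chain.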
The set $V$ may be chosen countably infinite, see \cite{AS}, but for this paper we will restrict ourselves to finite sets. We assume $m_{vv}=0.$ Binary branching may be extended to more general reproduction mechanisms as in \cite{GCSW}, and more general pairwise interactions, see \cite{GCPP}.
In order to include coordination into such models, we replace the positive real-valued parameters of Definition \ref{def:fixed_rate} with measures on $[0,1]$. Denote by $ \mathcal{M}[0,1]$ the space of finite measures on $[0,1]$.
\begin{definition}\label{def:random_rate}
Fix a finite set $V$. For each $v\in V$ fix measures
\begin{itemize}
\item coalescence: $\Lambda_v\in \mathcal{M}[0,1]$,
\item death: $D_v\in \mathcal{M}[0,1]$,
\end{itemize}
and for each pair $(v,u)\in V\times V$ fix measures
\begin{itemize}
\item reproduction: $R_{vu}\in \mathcal{M}[0,1]$,
\item migration: $M_{vu}\in \mathcal{M}[0,1]$.
\end{itemize}
A \emph{structured branching coalescing process with coordination} with these parameters is the continuous time Markov chain $(Z_t)_{t\geq 0}=(Z_t^{(v)})_{t \geq 0, v \in V}$ with values in the set $\mathbb N_0^V$ of functions having domain $V$ and values in $\mathbb N_0$ such that
\begin{equation}\label{eq:jumps_random}
z \mapsto
\begin{cases}
z- ie_v+ ie_u, & \textrm{ at rate }\int_0^1\binom{z_v}{i}y^i(1-y)^{z_v-i}\frac{1}{y}M_{vu}(\mathrm{d} y), \, u,v\in V, 1\leq i\leq z_v\\
z- ie_v, & \textrm{ at rate } \int_0^1\binom{z_v}{i}y^i(1-y)^{z_v-i}\frac{1}{y}D_{v}(\mathrm{d} y),\, v\in V, 1\leq i\leq z_v\\
z+ ie_u, & \textrm{ at rate } \int_0^1\binom{z_v}{i}y^i(1-y)^{z_v-i}\frac{1}{y}R_{vu}(\mathrm{d} y),\, u,v\in V, 1\leq i\leq z_v\\
z- (i-1)e_v, & \textrm{ at rate } \int_0^1\binom{z_v}{i}y^i(1-y)^{z_v-i}\frac{1}{y^2}\Lambda_{v}(\mathrm{d} y),\, v\in V, 2\leq i\leq z_v.
\end{cases}
\end{equation}
\end{definition}
The rates of this process may be interpreted as individuals deciding independently to participate in an event, leading to a binomial number of affected individuals. The probability to participate in, say, a migration event from $v$ to $u$ is determined by the measure $y^{-1}M_{vu}(\mathrm{d} y)$. This measure has a singularity at $y=0$ and is not necessarily finite, but (with only slight abuse of notation) \[ \int_{\{0\}}\binom{z_v}{i}y^i(1-y)^{z_v-i}\frac{1}{y}M_{vu}(\mathrm{d} y)=z_v\mathds 1_{\{i=1\}}M_{vu}(\{0\}) \numberthis\label{delta0} \]
is finite; the death and reproduction measures are treated analogously. For the coalescence, similarly,
\[ \int_{\{0\}}\binom{z_v}{i}y^i(1-y)^{z_v-i}\frac{1}{y^2}\Lambda_{v}(\mathrm{d} y)=\binom{z_v}{2}\mathds 1_{\{i=2\}}\Lambda_{v}(\{0\}). \numberthis\label{delta0Lambda} \]
We will further discuss the role of these singularities below.
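For Dirac measures the integrals above collapse to a closed form; the following sketch (a hypothetical helper, names ours) checks the binomial shape of the migration rates, the linear bound $z_v m_{vu}$ on the total rate used later in the proof of Lemma \ref{lem:markov}, and the $\delta_0$ limit from \eqref{delta0}:

```python
from math import comb

def migration_rate(z_v, i, m, w):
    """Rate of moving i of z_v particles from v to u when M_vu = m*delta_w,
    w in (0,1]: the integral collapses to (m/w) * P(Bin(z_v, w) = i)."""
    return (m / w) * comb(z_v, i) * w**i * (1.0 - w)**(z_v - i)

z_v, m, w = 5, 2.0, 0.3
total = sum(migration_rate(z_v, i, m, w) for i in range(1, z_v + 1))
# total migration rate: (m/w) * (1 - (1-w)^{z_v}), which is at most z_v * m
assert abs(total - m * (1 - (1 - w)**z_v) / w) < 1e-12
assert total <= z_v * m
# as w -> 0, only i = 1 contributes, at rate z_v * m: the delta_0 case
assert abs(migration_rate(z_v, 1, m, 1e-9) - z_v * m) < 1e-6
```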
We abbreviate the total masses of $\Lambda_v,D_v,R_{vu}$ and $M_{vu}$ by $c_v$, $d_v$, $r_{vu}$ and $m_{vu}$ respectively, where we again assume that $m_{vv}=0$. Throughout the paper, $\delta_a$ denotes the Dirac measure in $a\in[0,1]$. Thus, in terms of Definition~\ref{def:random_rate}, the process defined in Definition~\ref{def:fixed_rate} corresponds to each of these measures being equal to the corresponding total mass times $\delta_0$. We refer to the points $v \in V$ as \emph{vertices}, which we often interpret as \emph{islands} in a spatial population model with multiple islands. Another possible interpretation of $V$ may be that of a type space, leading to multitype branching coalescing processes with coordination. The (undirected) \emph{interaction graph} associated to the measures defined in Definition~\ref{def:random_rate} is given as $G=(V,E)$, where
\[ E=\{ (u,v) \in V \times V \colon u \neq v \text{ and } \max \{ r_{uv}, r_{vu}, m_{uv}, m_{vu} \}>0 \}, \]
i.e., the vertex set of $G$ equals $V$ and we connect two different vertices $u,v \in V$ by an edge whenever there is interaction by migration or reproduction between $u$ and $v$.
We prove in Lemma \ref{lem:markov} below that \eqref{eq:jumps_random} indeed yields a Markov chain. Its infinitesimal generator is expressed below in \eqref{eq:generator}.
Coordination of interactions in the above sense may be interpreted by means of suitable Poisson processes. We illustrate this by a first example.
\begin{example}\label{ex:bpre}
Consider the non-spatial case $V=\{ 1 \}$ without migration, and with $\Lambda_1=D_1=0$ fixed. We first let $R_1=r_1\delta_0$ for some $r_1>0$. Then the rate of a branching event producing $i$ offspring when there are presently $z$ particles is given by
\[r_1\int_0^1 \binom{z}{i}y^{i-1}(1-y)^{z-i}\delta_0(\mathrm{d} y)=r_1z\mathds 1_{\{i=1\}},\]
thus $(Z_t)_{t\geq 0}$ is a binary branching process where particles reproduce independently at rate $r_1,$ i.e.\ a Yule process. If on the other hand $R_1=r_1\delta_1$, then the reproduction rate is given by
\[r_1\int_0^1 \binom{z}{i}y^{i-1}(1-y)^{z-i}\delta_1(\mathrm{d} y)=r_1\mathds 1_{\{i=z\}}.\]
In this case we may look at the process from the following viewpoint: Reproduction events happen according to a Poisson process with intensity $r_1$, and at each arrival time of the Poisson process, every particle produces exactly one new particle. That is, the resulting process is given by $(2^{N_t^{(r_1)}})_{t\geq 0}$ where for $\lambda>0$, $(N^{(\lambda)}_t)_{t\geq 0}$ denotes a Poisson process with intensity $\lambda$. The main difference between the two cases considered here is independence vs.~coordination. The Dirac measure $R_1=\delta_0$ gives full independence, while for $R_1=\delta_1$ the reproduction events are fully coordinated. The choice $R_1=r_1\delta_w$ for some $w\in (0,1)$ leads to a model in which reproduction events arrive according to a Poisson process with intensity $r_1/w$ and at each reproduction event each individual reproduces with probability $w$. It is interesting to observe that in all these cases $$\ensuremath{\mathbb{E}}_n[Z_t]=n\ensuremath{\mathbb{E}}\Big[(1+w)^{N^{(r_1/w)}_t}\Big]=n\,\mathrm e^{r_1t}. $$
As we will see in Lemma \ref{lem:expectationequality}, the invariance of the expectation is not a coincidence. In the general case $R_{1}\in \mathcal{M}[0,1]$, the process will also have expectation $\mathrm e^{r_1 t}$.
When we take the reproduction measure such that $R_{1}(dy)/y\in \mathcal{M}[0,1]$ we observe a relation to branching processes in a random environment in the sense of \cite{AK}. To make this connection precise, let $\{(t_n,y_n)\}_{n\in\mathbb N}$ be the times at which an event occurs and their impacts. Then $Y_n=\sum_{i=1}^{Y_{n-1}}X_i^{(n)}=Z_{t_n}$ is a branching process in random environment where for every $i\in\mathbb N$, $\mathbb P(X_i^{(n)}=2)=1-\mathbb P(X_i^{(n)}=1)=y_n$, and given $\{(t_n,y_n)\}_{n\in\mathbb N}$, the random variables $(X_i^{(n)})_{n\in\mathbb N}$ are independent.
\end{example}
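The expectation identity for a single initial individual, $\ensuremath{\mathbb{E}}_1[Z_t]=\ensuremath{\mathbb{E}}\big[(1+w)^{N^{(r_1/w)}_t}\big]=\mathrm e^{r_1t}$, can be checked numerically by truncating the Poisson series; the following sketch (function name and parameter values ours) does so for several $w$:

```python
from math import exp

def expected_Z(t, r1, w, n_terms=200):
    """E[(1+w)^{N_t}] for N_t ~ Poisson(r1*t/w), computed by truncating the
    Poisson series; each reproduction event multiplies the expected
    population size by (1+w)."""
    lam = r1 * t / w
    term, total = exp(-lam), 0.0           # term = P(N_t = k) * (1+w)^k
    for k in range(n_terms):
        total += term
        term *= lam * (1.0 + w) / (k + 1)  # recursive update avoids factorials
    return total

# independent of the coordination level w, the expectation is e^{r1*t}
for w in (0.25, 0.5, 1.0):
    assert abs(expected_Z(1.0, 0.8, w) - exp(0.8)) < 1e-9
```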
In order to study the processes rigorously, the following definitions are useful. Let
\[ C_0=\{f \colon \mathbb N_0^V\to \mathbb{R} \colon \lim_{i\rightarrow \infty}f(z_i)=0, \forall (z_i)_{i\in \mathbb N}\text{ such that} \lim_{i\rightarrow \infty}|z_i|=\infty\}. \]
We say that $g \in \widetilde C_0$ if $g=c+f$ for some $c\in \mathbb{R}$ and $f \in C_0$. We consider the process $(Z_t)_{t\geq 0}$ as a process in the one-point compactification $\bar{\mathbb N}_0^V$ of $\mathbb{N}_0^V$ by taking the minimal extension, that is to say that the generator applied to any function, evaluated at the point at infinity, is zero.
\begin{lemma}\label{lem:markov}
$(Z_t)_{t\geq 0}$ is a well-defined continuous time Markov chain with state space $\bar{\mathbb N}_0^V.$ Further, it is a conservative process, in the sense that for all $T>0$
\[ \mathbb P(Z_t\in \mathbb{N}_0^V, \forall t\in[0,T]\big|Z_0 \in \mathbb N_0^V)=1, \numberthis\label{conservativity} \] and the domain of its extended generator includes $\widetilde C_0$.
\end{lemma}
We recall that the domain of the extended generator is equal to the set of functions corresponding to the associated martingale problem, which will be spelt out in the proof of the lemma below. Note that if \mathrm eqref{conservativity} holds for all $T>0$, then this together with the continuity of measures implies
\[ \mathbb P\big(Z_t\in \mathbb{N}_0^V, \forall t\in[0,\infty)\big|Z_0 \in \mathbb N_0^V \big)=1, \]
i.e., almost surely, the process $(Z_t)_{t \geq 0}$ does not explode within finite time.
\begin{proof}[Proof of Lemma~\ref{lem:markov}]
The core of the proof is a domination argument and a control of the overall jump intensities (as required in \cite[Chapter 4, (11.9)]{EK}) in order to prevent explosion.
In order to verify that the process is conservative, we first show that the rate at which the process leaves any state $z\in \mathbb N_0^V$ is finite. Thus \eqref{eq:jumps_random} yields a $Q$-matrix, which generates the semigroup of a Markov chain on $\mathbb N_0^V.$ Observe that for all $u,v\in V, z_v\in \mathbb N_0$
\begin{align*}
\sum_{i=1}^{z_v}\int_0^1 \binom{z_v}{i}y^i(1-y)^{z_v-i}\frac{1}{y}M_{vu}(\mathrm{d} y)=&\int_0^1 (1-(1-y)^{z_v})\frac{1}{y}M_{vu}(\mathrm{d} y)\\
\leq &\int_0^1 z_v y\cdot \frac{1}{y}M_{vu}(\mathrm{d} y)=z_v m_{vu}.
\end{align*}
Similar calculations hold for the reproduction and death. For the coalescence we get
\begin{align*}
\sum_{i=2}^{z_v}\int_0^1 \binom{z_v}{i}y^i(1-y)^{z_v-i}\frac{1}{y^2}\Lambda_{v}(\mathrm{d} y)=&\int_0^1 (1-(1-y)^{z_v}-z_vy(1-y)^{z_v-1})\frac{1}{y^2}\Lambda_{v}(\mathrm{d} y)\\
\leq &z_v(z_v-1)c_v.
\end{align*}
Thus the total rate at which the process jumps out of state $z\in \mathbb N_0^V$ is bounded from above by
\[\sum_{v \in V} z_v \Big[\sum_{u \in V}( m_{vu}+r_{vu})+d_{v}+(z_v-1)c_v\Big]<\infty.
\]
The fact that the process is conservative can be proved directly. However, we use a simple stochastic domination argument. Let $(Y_t)_{t \geq 0}$ be a one-dimensional process in our class with only one non-zero parameter which is $R=\sum_{v \in V} \sum_{u \in V} R_{vu}$. Then one can couple the processes $(Y_t)_{t \geq 0}$ and $(Z_t)_{t \geq 0}$ in such a way that
\[ \mathbb P(|Z_t| \leq Y_t,~ \forall t \geq 0)=1. \]
As we saw in Example \ref{ex:bpre}, $\mathbb{E}[Y_t]=\mathrm e^{rt}$ where $r=\sum_{v \in V} \sum_{u \in V} r_{vu}$.
The fact that $(Y_t)_{t \geq 0}$ is increasing together with the finiteness of its expectation imply that $\mathbb P(Y_t<\infty, \forall t\in [0,T])=1$ for all $T>0$. From the stochastic domination we conclude that $(Z_t)_{t \geq 0}$ is conservative.
Finally, let us study the extended generator of $(Z_t)_{t \geq 0}$ (and the associated martingale problem). We observe that for any $(a^{(i)})_{i \in V}$, $(m^{(i)})_{i \in V}$ such that $m^{(i)}\geq 0$ and $
a^{(i)}\in \mathbb{R}$ for all $i \in V$,
$$
\Big(\sum_{i\in V} a^{(i)}\mathrm e^{-m^{(i)}Z^{(i)}_t}-\int_0^t A \big(\sum_{i\in V} a^{(i)} \mathrm e^{-m^{(i)}Z^{(i)}_s}\big)ds\Big)_{t \geq 0}
$$
is a martingale, where $A$ is the pointwise generator of $(Z_t)_{t \geq 0}$ that we describe in Equation \eqref{eq:generator}. The martingale property follows from \cite[Chapter 4, Problem 15 (page 263)]{EK}, since condition \cite[Chapter 4, (11.9)]{EK} is satisfied. It follows that all functions of the form $f(z)=\sum_{k=1}^r\sum_{i\in V}a^{(i)}_k\mathrm e^{-m^{(i,k)}z^{(i)}}$, for $r\in \mathbb{N}$, $i \in V$, $m^{(i,k)}>0$ and $a^{(i)}_k\in \mathbb{R}$, are in the domain of the extended generator of $(Z_t)_{t \geq 0}$. Finally, as this family of functions is dense in $C_0$, an application of the Stone--Weierstrass theorem allows us to conclude that for every $f\in C_0$, writing
$$
M_t^f=f(Z_t)-\int_0^t A f(Z_s)ds,
$$
$(M^f_t)_{t \geq 0}$ is a martingale. Now, if $g=c+f$ for $c \in \mathbb{R}$, $M_t^g=g(Z_t)-\int_0^t A g(Z_s) \mathrm{d} s=M_t^f+c$ and thus $(M_t^g)_{t \geq 0}$ is also a martingale. Hence, $g$ is in the domain of the extended generator of $(Z_t)_{t \geq 0}$.
\end{proof}
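The domination argument can be illustrated numerically. The following sketch (our own illustration, not part of the model definitions; all function names are ours) runs a Gillespie-type simulation of the dominating pure-birth process and confirms $\mathbb{E}[Y_t]=\mathrm e^{rt}$ up to Monte Carlo error.

```python
import math
import random

def simulate_yule(r, t_max, rng):
    """Gillespie simulation of a pure-birth (Yule) process started from one
    particle: each particle branches at rate r, so E[Y_t] = exp(r * t)."""
    y, t = 1, 0.0
    while True:
        # total branching rate is r * y, so the waiting time is exponential
        t += rng.expovariate(r * y)
        if t > t_max:
            return y
        y += 1

def mean_population(r, t_max, n_runs, seed=0):
    """Monte Carlo estimate of E[Y_{t_max}]."""
    rng = random.Random(seed)
    return sum(simulate_yule(r, t_max, rng) for _ in range(n_runs)) / n_runs
```

With $r=1$ and $t=0.5$ the estimate is close to $\mathrm e^{0.5}\approx 1.65$; since the simulated population only grows, finiteness of this mean is exactly the property used above.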
Besides the branching processes in random environment briefly discussed in~Example \ref{ex:bpre}, several members of this class of processes are known:
\begin{enumerate}
\item Choosing $\Lambda_v\in \mathcal{M}[0,1]$, $D_v=0$, $R_v=0$ and $M_{uv}=m_{uv}\delta_0$ leads to the block-counting process of the structured $\Lambda$-coalescent, see \cite{pitman, MS, LSt}.
\item\label{example-Felix}For $V=\{v\}$, $\Lambda_v= 0$, $D_v=\delta_p$, $R_v=r \delta_0$, $r>0$, we obtain a branching process with binomial disasters as discussed in \cite{HP}.
\item\label{example-simultaneousswitching} For $V=\{1,2\}$, $\Lambda_1=\delta_0$, $\Lambda_2=0$, $M_{12},M_{21} \in \mathcal M[0,1]$, and for all $v \in V$, $D_v=R_v=0$, we get the seed-bank coalescent with simultaneous migration \cite{BGKW18}.
\item\label{example-spatialseedbank} Fix $n \in \mathbb N$, $K>0$, $V=\{ v_i \colon i \in \{1,\ldots,n\} \} \cup \{ w_i \colon i \in \{1,\ldots,n\} \}$, $(e_i)_{i=1}^n \in \mathbb{R}^n$, $(d_i)_{i=1}^n \in \mathbb{R}^n$ and $(a(i,j))_{i,j \in \{1,\ldots,n\},i \neq j} \in \mathbb{R}^{n\times n}$. Then for $\Lambda_{v_i}= d_i \delta_0$, $\Lambda_{w_i}=0$, $M_{v_i v_j}= a(i,j) \delta_0$, $M_{v_i w_i}=e_i\delta_0$ and $M_{w_i v_i} = K e_i \delta_0$, $i,j=1,\ldots,n$, we obtain the moment dual of a spatial seed bank model \cite[Model 1]{GdHO20}. Further, \cite[Model 2]{GdHO20}, the moment dual of the multi-layer seed-bank model, also satisfies Definition~\ref{def:random_rate}, but with different migration measures. These examples are even included in Definition~\ref{def:fixed_rate}, whereas different choices of the measures $M_{v_i v_j}$ yield spatial variants of the seed-bank coalescent with simultaneous migration (fulfilling Definition~\ref{def:random_rate} only).
\item\label{example-subordinatorenv} For $V=\{v\}$, $\Lambda_v\in \mathcal{M}[0,1]$, $D_v=0$ and $R_v \in \mathcal{M}[0,1],$ we obtain coordinated branching coalescing processes \cite{GCSW}, which arise as the moment dual of the Wright--Fisher model with selection in a (subordinator) random environment in the sense of \cite{BCM}. Indeed, this is the process $(Z_t)_{t \geq 0}$ in \cite[Theorem 3.2]{BCM} with the following choice of parameters: $b_1=\sigma_E=0$, $p(z,w)=z$, and the Lévy process $(Y_t)_{t \geq 0}$ being a decreasing Lévy process (with a nonpositive drift), i.e., the negative of a subordinator.
\item\label{example-hierarchical} For a general finite graph $V$, and for all $v,u \in V$ with $u \neq v$ and for some $R, D \in \mathcal M [0,1]$ and $c,c',c''>0$, the choice $R_v=R$, $D_v=D$ and $\Lambda_v=c\delta_0$, further, $M_{uv} = c'\delta_0+c''\delta_1$ yields the moment dual of the hierarchical Moran model introduced in \cite[Section 2.1]{Dawson}, in the case when there is no selection on the level of colonies. In the particular case $R=D=0$, this process is Kingman's coalescent with erosion \cite{FLS}.
\item $V=[K]^d \subset \mathbb Z^d$ where $[K]=\{ 1,\ldots,K\}$, and for all $v \in [K]^d$, $\Lambda_v=0$, $D_v=\xi_v^{-} \delta_0$, $R_v=\xi_v^+ \delta_0$ and $M_{vu}=\delta_0 \mathds 1_{ \{ |v-u| =1 \}}$, where $\{\xi_v^+\}_{v \in [K]^d}$ and $\{\xi_v^-\}_{v \in [K]^d}$ are two families of independent and identically distributed random variables in $[0,\infty)$, leads to a branching process whose expectation is a solution of the Parabolic Anderson Model (PAM), see \cite{KPAM}. In the context of the PAM, coordination and some consequences will be discussed in Sections~\ref{sec:PAM} and~\ref{sec-Var}, which will partially be extended to infinite graphs in Section~\ref{sec-infinity}.
\end{enumerate}
The first example is classical, and so is the interpretation as a coordinated process: coalescence events happen according to an underlying Poisson point process, and at each event, each block decides independently, with a participation probability $y\in[0,1]$ determined by $\Lambda$, whether or not to take part in the merger.
Examples \ref{example-Felix}, \ref{example-simultaneousswitching}, \ref{example-spatialseedbank}, \ref{example-subordinatorenv} and \ref{example-hierarchical} are recent in the literature. In these cases, coordination (of migration respectively death and reproduction) was used to construct models that include interesting features. For all five models, moment duality results were proved. The PAM is a well-understood model with a large literature. Despite having a moment dual, it is not usually included in the class of models that can be studied using the techniques of population genetics. We will introduce below \textit{the coordinated processes associated to the PAM}; all the members of this family have the same expectation, but radically different behaviour.
Another classical example of a process of our class is the \emph{binary contact path process} \cite{Griffeath}, which is strongly related to the contact process. Since these processes are usually studied on infinite graphs, we will recall them in this context in Section~\ref{sec-infinity}.
\section{Moment Duality}\label{sec1}
Since the process $(Z_t)_{t\geq 0}$ is a pure jump Markov process with finite rates, it is straightforward to identify its generator, which we denote by $A.$ It acts on bounded measurable functions $f:\mathbb N_0^V\to\mathbb{R}$ by
\begin{equation}
Af( z)=\sum_{u,v\in V} A_{M_{vu}}f(z)+\sum_{u,v\in V} A_{R_{vu}}f( z)+\sum_{v\in V}A_{D_{v}}f(z)+\sum_{v\in V} A_{\Lambda_v}f(z),
\end{equation}
where
\begin{align*}
A_{M_{vu}}f(z) & = \int_0^1\sum_{i=1}^{z_v}\binom{z_v}{i}[f(z- ie_v+ ie_u)-f( z)]y^i(1-y)^{z_v-i}\frac{1}{y}M_{vu}(\mathrm{d} y), \\
A_{R_{vu}}f( z) & = \int_0^1 \sum_{i=1}^{z_v}\binom{z_v}{i}[f( z+ ie_u)-f( z)]y^i(1-y)^{z_v-i}\frac{1}{y}R_{vu}(\mathrm{d} y), \\
A_{D_{v}}f(z) &= \int_0^1\sum_{i=1}^{z_v}\binom{z_v}{i}[f(z- ie_v)-f( z)]y^i(1-y)^{z_v-i}\frac{1}{y}D_v(\mathrm{d} y), \\
A_{\Lambda_{v}}f(z) &= \int_0^1\sum_{j=2}^{z_v}\binom{z_v}{j}[f(z- (j-1)e_v)-f(z)]y^j(1-y)^{z_v-j}\frac{1}{y^2}\Lambda_{v}(\mathrm{d} y). \numberthis\label{eq:generator}
\end{align*}
Our goal is to derive a moment duality. As a first step, we derive a generator duality. Define for functions $f \in \mathcal C^2([0,1]^V,\mathbb{R})$
\begin{equation}
Bf(x):=\sum_{u,v\in V} B_{M_{vu}}f(x)+\sum_{u,v\in V} B_{R_{vu}}f(x)+\sum_{v\in V}B_{D_{v}}f(x)+\sum_{v\in V} B_{\Lambda_v}f(x),
\end{equation}
where
\begin{equation}\label{eq:dual-migration}
B_{M_{vu}}f(x):= \int_0^1[f(x+ e_vy(x_u-x_v))-f(x)]\frac{1}{y}M_{vu}(\mathrm{d} y),
\end{equation}
\begin{equation}\label{eq:dual-branching}
B_{R_{vu}}f(x):= \int_0^1[f(x+e_vyx_v(x_u-1))-f(x)]\frac{1}{y}R_{vu}(\mathrm{d} y),
\end{equation}
\begin{equation}\label{eq:dual-death}
B_{D_{v}}f(x):= \int_0^1[f(x+e_vy(1-x_v))-f(x)]\frac{1}{y}D_v(\mathrm{d} y)
\end{equation}
and
\begin{equation}\label{eq:dual-coalescence}
B_{\Lambda_{v}}f(x):= \int_0^1[x_vf(x+e_vy(1-x_v))+(1-x_v)f(x-e_vyx_v)-f(x)]\frac{1}{y^2}\Lambda_{v}(\mathrm{d} y).
\end{equation}
Formulas \eqref{eq:dual-migration}--\eqref{eq:dual-coalescence} are valid for measures that have no atom at zero; they are extended to measures having an atom at zero via a standard abuse of notation, which we now explain. We first clarify how to understand these formulas if one of these measures equals $\delta_0$. If $M_{vu}=\delta_0$, then \eqref{eq:dual-migration} is to be understood as
\[ B_{M_{vu}} f(x)= (x_u-x_v) \frac{\partial f}{\partial x_v}(x).
\numberthis\label{eq:independent-migration} \]
If $R_{vu}=\delta_0$, then \eqref{eq:dual-branching} degenerates to
\[ B_{R_{vu}} f(x)= x_v(x_u-1) \frac{\partial f}{\partial x_v}(x). \numberthis\label{eq:independent-branching}\]
If $D_v=\delta_0$, then \eqref{eq:dual-death} reads as
\[ B_{D_v} f(x)=(1-x_v) \frac{\partial f}{\partial x_v}(x). \numberthis\label{eq:independent-death} \]
Finally, if $\Lambda_v=\delta_0$, then we obtain the generator of the corresponding Wright--Fisher diffusion as the limit of \eqref{eq:dual-coalescence}:
\[ B_{\Lambda_v} f(x)=\frac{1}{2} x_v(1-x_v) \frac{\partial^2 f}{\partial x_v ^2 }(x). \numberthis\label{eq:WFdiffusion}\]
A general measure $\mu \in \mathcal M[0,1]$ can always be decomposed as $\mu=c\delta_0 + \mu'$ where $c\geq 0$ and $\mu'(\{0\})=0$. The general formulas for $B_{M_{vu}}$, $B_{R_{vu}}$, $B_{D_v}$ and $B_{\Lambda_v}$ are then obtained as linear combinations of \eqref{eq:dual-migration} and \eqref{eq:independent-migration}, of \eqref{eq:dual-branching} and \eqref{eq:independent-branching}, of \eqref{eq:dual-death} and \eqref{eq:independent-death}, and of \eqref{eq:dual-coalescence} and \eqref{eq:WFdiffusion}, respectively. \\
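The degenerate formula for independent migration is precisely the small-$y$ limit of the difference quotient in \eqref{eq:dual-migration}, which is easy to check numerically. A minimal sketch (the helper name, test function and state below are illustrative choices of ours):

```python
def B_M_atom(f, x, v, u, y):
    """B_{M_vu} f(x) for the Dirac measure delta_y with y > 0:
    the difference quotient [f(x + e_v * y * (x_u - x_v)) - f(x)] / y."""
    xp = list(x)
    xp[v] = x[v] + y * (x[u] - x[v])
    return (f(xp) - f(x)) / y
```

As $y\downarrow 0$ this converges to $(x_u-x_v)\,\partial f/\partial x_v$, i.e., to the independent-migration drift term.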
Let $H:[0,1]^V\times \mathbb N_0^V \rightarrow [0,1]$ be such that for any $z \in \mathbb N_0^V$ and $x \in[0,1]^V$
\[
H(x, z)=\prod_{v \in V} x_{v}^{z_v}
\]
with the convention that $0^0=1$. For $ z \in \mathbb N_0^V$ and $ x \in[0,1]^V$ we write $H_{x}(z):=H(x, z)$ and $H_{z}(x):=H(x, z)$ in order to indicate the coordinate functions. Clearly, $H_x$ for $x\in[0,1]^V$ is in the domain of the generator $A$, and $H_z$ for $z \in \mathbb N_0^V$ is in the domain of the generator $B$.
\begin{theorem}\label{thm:generator_duality}
For $H$ defined above,
\[A H_x(z)=BH_z(x)\quad \forall z\in\mathbb N_0^V, x\in [0,1]^V.\]
\end{theorem}
\begin{proof} We check this for the four parts of the generator.
Migration:
\begin{eqnarray*}
A_{M_{vu}}H_{x}( z)&=&H_{x}( z)\int_0^1\sum_{i=0}^{z_v}\binom{z_v}{i}[x_v^{-i}x_u^i-1]y^i(1-y)^{z_v-i}\frac{M_{vu}(\mathrm{d} y)}{y}\\
&=&H_{x}(z)x_v^{-z_v}\int_0^1\sum_{i=0}^{z_v}\binom{z_v}{i}[x_v^{z_v-i}x_u^i-x_v^{z_v}]y^i(1-y)^{z_v-i}\frac{M_{vu}(\mathrm{d} y)}{y}\\
&=&H_{ x}(z)x_v^{-z_v}\int_0^1[(x_v(1-y)+x_uy)^{z_v}-x_v^{z_v}]\frac{M_{vu}(\mathrm{d} y)}{y}\\
&=&\int_0^1[H_{z}(x+e_vy(x_u-x_v))-H_{z}(x)]\frac{M_{vu}(\mathrm{d} y)}{y}\\
&=& B_{M_{vu}}H_{z}(x).
\end{eqnarray*}
Reproduction:
\begin{eqnarray*}
A_{R_{vu}}H_{x}(z)&=&H_{x}( z)\int_0^1\sum_{i=0}^{z_v}\binom{z_v}{i}[x_u^i-1]y^i(1-y)^{z_v-i}\frac{R_{vu}(\mathrm{d} y)}{y}\\
&=&H_{x}( z)\int_0^1[(x_uy+(1-y))^{z_v}-1]\frac{R_{vu}(\mathrm{d} y)}{y}\\
&=&\int_0^1[H_{ z}(x+e_vyx_v(x_u-1))-H_{ z}( x)]\frac{R_{vu}(\mathrm{d} y)}{y}\\
&=& B_{R_{vu}}H_{z}( x)
\end{eqnarray*}
where in the third equality we used $x_v^{z_v}(x_uy+(1-y))^{z_v}=(x_v(1-y)+x_vx_uy)^{z_v}=(x_v-yx_v(1-x_u))^{z_v}$.\\
Death:
\begin{eqnarray*}
A_{D_{v}}H_{ x}( z)&=&H_{x}( z)\int_0^1\sum_{i=0}^{z_v}\binom{z_v}{i}[x_v^{-i}-1]y^i(1-y)^{z_v-i}\frac{D_{v}(\mathrm{d} y)}{y}\\
&=&H_{x}( z)x_v^{-z_v}\int_0^1\sum_{i=0}^{z_v}\binom{z_v}{i}[x_v^{z_v-i}-x_v^{z_v}]y^i(1-y)^{z_v-i}\frac{D_{v}(\mathrm{d} y)}{y}\\
&=&H_{x}(z)x_v^{-z_v}\int_0^1[(x_v(1-y)+y)^{z_v}-x_v^{z_v}]\frac{D_{v}(\mathrm{d} y)}{y}\\
&=&\int_0^1[H_{z}(x+e_vy(1-x_v))-H_{z}(x)]\frac{D_{v}(\mathrm{d} y)}{y}\\
&=& B_{D_{v}}H_{ z}( x).
\end{eqnarray*}
Coalescence (this calculation is well-known, but we include it for completeness):
\begin{eqnarray*}
A_{\Lambda_{v}}H_{ x}(z)&=&H_{ x}( z)x_v^{-z_v}\int_0^1\sum_{i=0}^{z_v-1}\binom{z_v}{i}[x_v^{i+1}-x_v^{z_v}](1-y)^iy^{z_v-i}\frac{\Lambda_{v}(\mathrm{d} y)}{y^2}\\
&=&H_{ x}(z)x_v^{-z_v}\int_0^1[x_v(x_v(1-y)+y)^{z_v}+(1-x_v)x_v^{z_v}(1-y)^{z_v}-x_v^{z_v}]\frac{\Lambda_{v}(\mathrm{d} y)}{y^2}\\
&=&H_{ x}( z)x_v^{-z_v}\int_0^1[x_v(x_v+y(1-x_v))^{z_v}+(1-x_v)(x_v-yx_v)^{z_v}-x_v^{z_v}]\frac{\Lambda_{v}(\mathrm{d} y)}{y^2}\\
&=&\int_0^1[x_vH_{z}(x+e_vy(1-x_v))+(1-x_v)H_{ z}(x -e_vyx_v)-H_{ z}( x)]\frac{\Lambda_{v}(\mathrm{d} y)}{y^2}\\
&=& B_{\Lambda_{v}}H_{ z}( x).
\end{eqnarray*}
\end{proof}
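The generator duality can be sanity-checked numerically when all measures are Dirac masses $\delta_y$ with $y\in(0,1]$, since then every integral collapses to a single term. The following sketch (our own illustration; the state, vertex pair and the values of $y$ are arbitrary choices, not from the text) evaluates both sides of the identity for each of the four mechanisms:

```python
from math import comb

def H(x, z):
    """Moment duality function H(x, z) = prod_v x_v^{z_v}, with 0^0 = 1."""
    out = 1.0
    for xv, zv in zip(x, z):
        out *= xv ** zv
    return out

def shift(vec, v, delta):
    """Return a copy of vec with coordinate v shifted by delta."""
    out = list(vec)
    out[v] += delta
    return out

def A_term(z, x, v, u, y, kind):
    """A_. H_x(z) for a single Dirac measure delta_y, y > 0, of the given kind."""
    zv = z[v]
    if kind == "M":   # coordinated migration from v to u
        return sum(comb(zv, i) * (H(x, shift(shift(z, v, -i), u, i)) - H(x, z))
                   * y**i * (1 - y)**(zv - i) / y for i in range(1, zv + 1))
    if kind == "R":   # coordinated reproduction, offspring placed at u
        return sum(comb(zv, i) * (H(x, shift(z, u, i)) - H(x, z))
                   * y**i * (1 - y)**(zv - i) / y for i in range(1, zv + 1))
    if kind == "D":   # coordinated death at v
        return sum(comb(zv, i) * (H(x, shift(z, v, -i)) - H(x, z))
                   * y**i * (1 - y)**(zv - i) / y for i in range(1, zv + 1))
    if kind == "L":   # Lambda-coalescence at v: j participants merge into one
        return sum(comb(zv, j) * (H(x, shift(z, v, -(j - 1))) - H(x, z))
                   * y**j * (1 - y)**(zv - j) / y**2 for j in range(2, zv + 1))

def B_term(z, x, v, u, y, kind):
    """B_. H_z(x) for the same Dirac measure delta_y."""
    xv, xu = x[v], x[u]
    if kind == "M":
        return (H(shift(x, v, y * (xu - xv)), z) - H(x, z)) / y
    if kind == "R":
        return (H(shift(x, v, y * xv * (xu - 1)), z) - H(x, z)) / y
    if kind == "D":
        return (H(shift(x, v, y * (1 - xv)), z) - H(x, z)) / y
    if kind == "L":
        return (xv * H(shift(x, v, y * (1 - xv)), z)
                + (1 - xv) * H(shift(x, v, -y * xv), z) - H(x, z)) / y**2
```

Agreement to machine precision for every mechanism and every tested state reflects the binomial identities used in the proof above.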
We denote by $(X_t)_{t \geq 0}$ the Markov process on $[0,1]^V$ with infinitesimal generator $B$.
\begin{corollary}\label{duality}
$(X_t)_{t\geq 0}$ and the process $(Z_t)_{t\geq 0}$ of Definition \ref{def:random_rate} are moment duals, that is, for all $z\in\mathbb N_0^V, x\in[0,1]^V, t\geq 0$ we have
\begin{equation}\label{eq:duality}
\ensuremath{\mathbb{E}}_{x}[H(X_t,z)]= \ensuremath{\mathbb{E}}_{z}[H(x, Z_t)].
\end{equation}
\end{corollary}
\begin{proof}
This follows from Proposition 1.2 of \cite{JK} and Theorem \ref{thm:generator_duality}, since by our assumptions the rates are finite. Alternatively, using Theorem \ref{thm:generator_duality} and Lemma \ref{lem:markov}, one readily verifies the conditions of Theorem 4.11 in \cite{EK}.
\end{proof}
The dual Markov process $(X_t)_{t\geq 0}$ can be explicitly represented as a $|V|$-dimensional jump diffusion. Fix the parameters $\Lambda_v$, $D_v$, $R_{vu}$ and $M_{vu}$ in $\mathcal{M}[0,1].$ We define the following Poisson point processes (PPPs).\label{PPP's} For $v\in V$ let $N^{D_{v}}$ be a PPP on $(0,\infty)\times (0,1]$ with intensity measure $\frac{D_{v}(\mathrm{d} y)}{y} \otimes \mathrm{d} t $, and $N^{\Lambda_{v}}$ a PPP on $(0,\infty)\times [0,1] \times (0,1] $ with intensity measure $ \frac{\Lambda_{v}(\mathrm{d} y)}{y^2} \otimes \mathrm{d} t\otimes \mathrm{d} \theta$. For $(v,u)\in V\times V$ let $N^{M_{vu}}$ be a PPP on $(0,\infty)\times (0,1]$ with intensity measure $\frac{M_{vu}(\mathrm{d} y)}{y} \otimes \mathrm{d} t $, and $N^{R_{vu}}$ a PPP on $(0,\infty)\times (0,1]$ with intensity measure $ \frac{R_{vu}(\mathrm{d} y)}{y} \otimes \mathrm{d} t$. Here, $\mathrm{d} t$, $\mathrm{d} y$ and $\mathrm{d} \theta$ denote the Lebesgue measure on $[0,\infty)$, $(0,1]$ and $[0,1]$, respectively. All PPPs involved are independent of each other and independent for different $v\in V$ respectively $(v,u)\in V\times V$. Let $(B_t)_{t\geq 0}=(B^{(v)}_t)_{t\geq 0, v\in V}$ be a $|V|$-dimensional standard Brownian motion independent of the PPPs. Then $(X_t)_{t\geq 0}=(X_t^{(v)})_{t\geq 0, v\in V}$ solves the system of SDEs
\begin{align}\label{eq:diffusion}
\mathrm{d} X_t^{(v)}=&\sum_{u\in V}\int_{y \in (0,1]} \big(y(X_{t-}^{(u)}-X_{t-}^{(v)})\big) N^{M_{vu}}(\mathrm{d} y,\mathrm{d} t)\\ \nonumber
&+\sum_{u\in V} \int_{y \in (0,1]}\big( yX_{t-}^{(v)}(X_{t-}^{(u)}-1) \big) N^{R_{vu}}(\mathrm{d} y,\mathrm{d} t) \\ \nonumber
&+ \int_{y \in (0,1]} \big(y(1-X_{t-}^{(v)})\big) N^{D_{v}}(\mathrm{d} y,\mathrm{d} t)\\ \nonumber
&+\int_{y \in (0,1]}\int_{\theta \in [0,1]} \big(y(\mathds 1_{\{\theta<X_{t-}^{(v)}\}}-X_{t-}^{(v)}) \big) N^{\Lambda_v}(\mathrm{d} y, \mathrm{d} \theta, \mathrm{d} t)\\ \nonumber
&+\sum_{u\in V, u \neq v} (X_t^{(u)}-X_t^{(v)}) M_{vu}(\{0\})\mathrm{d} t+\sum_{u\in V} X_t^{(v)}(X_t^{(u)}-1) R_{vu}(\{0\})\mathrm{d} t\\ \nonumber
&+(1-X_t^{(v)}) D_{v}(\{0\})\mathrm{d} t+\sqrt{\Lambda_v(\{0\})X_t^{(v)}(1-X_t^{(v)})}\, \mathrm{d} B^{(v)}_t, \quad t\geq 0
\end{align}
where $v\in V,$ with initial condition $X_0=(x^{(v)})_{v\in V}\in [0,1]^V.$
The above initial value problem defines a $|V|$-dimensional jump diffusion with non-Lipschitz coefficients. Existence and uniqueness results for such systems have recently drawn considerable interest; we refer to \cite{K, K1, BLP, xi2017jump} for existence and strong uniqueness results.
This dual process can be interpreted as a frequency process in the sense of population genetics; this interpretation is classical at least in the case without coordination. In that case, \eqref{eq:diffusion} reduces to
\begin{eqnarray}\label{difclassic}
\mathrm{d} X_t^{(v)}&=&\sum_{u\in V, u\neq v} (X_{t}^{(u)}-X_{t}^{(v)}) m_{vu}\mathrm{d} t-\sum_{u\in V} X_{t}^{(v)}(1-X_{t}^{(u)}) r_{vu}\mathrm{d} t\\& &
+(1-X_{t}^{(v)}) d_{v}\mathrm{d} t+\sqrt{c_v X_{t}^{(v)}(1-X_{t}^{(v)})}\, \mathrm{d} B^{(v)}_t, \quad v\in V, t\geq 0.\nonumber
\end{eqnarray}
The solution $(X_t)_{t\geq 0}$ of \eqref{difclassic} can then be understood as the frequency of one genetic type in a two-type population living in a structured environment of $|V|$ islands. More precisely, it is the stepping stone model with mutation and selection, see \cite{Kimura}. Denote the two types by $-$ and $+$. Then \eqref{difclassic} describes the dynamics under the following assumptions:
\begin{enumerate}
\item $ m_{vu}$ is the rate at which individuals of island $v$ migrate to island $u$ (migration).
\item $ r_{vu}$ measures the selective disadvantage of type $-$ individuals situated on island $v$ against type $+$ individuals situated on island $u$ (selection). The term $r_{vu}$ for $v \neq u$ is less classical than $r_{vv}$, but it admits an analogous interpretation.
\item $ d_{v}$ is the rate at which individuals of type $+$ change into individuals of type $-$ on island $v$ (mutation from $+$ to $-$).
\item $ c_{v}$ measures the strength of the random genetic drift in island $v$.
\end{enumerate}
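In the simplest instance of the classical dynamics above, a single island with mutation only ($m=r=c=0$), the SDE degenerates to the ODE $\mathrm{d} X_t=(1-X_t)\,d\,\mathrm{d} t$ with closed-form solution $X_t=1-(1-x_0)\mathrm e^{-dt}$. A minimal Euler scheme (our own sketch, with illustrative parameter names) reproduces it:

```python
import math

def euler_mutation_only(x0, d, t_max, n_steps):
    """Euler scheme for dX_t = (1 - X_t) * d * dt, the single-island case of
    the classical frequency dynamics with mutation only (m = r = c = 0).
    Closed form for comparison: X_t = 1 - (1 - x0) * exp(-d * t)."""
    dt = t_max / n_steps
    x = x0
    for _ in range(n_steps):
        x += (1.0 - x) * d * dt
    return x
```

For a fine enough step size the scheme agrees with the closed form up to a discretisation error of order $1/n$.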
In our dual process in \eqref{eq:diffusion}, the role of general measures, as opposed to Dirac measures at $0$, is compatible with this classical interpretation, which is now enriched by the possibility of large events that affect a positive fraction of the population. Large migration events are considered for example in the seed-bank model with simultaneous switching, see \cite{BGKW18}.
\subsection{The nested coalescent and its dual}\label{sec:nested}
The nested coalescent is an object introduced recently \cite{BDLS}, which has already received some attention \cite{BRSS, D, LS}. Its purpose is to integrate speciation events and individual reproduction in the same model, in order to be able to trace ancestry at the level of species. Species can be regarded as islands (in the sense of a classical structured coalescent), meaning that individual ancestral lines inside each species coalesce according to some measure $\Lambda$, for example at pairwise rate one, just as in the Kingman coalescent. The difference is that species also perform a Kingman coalescent of their own, and when two species coalesce, the ancestral lines inside them are allowed to coalesce, again at pairwise rate one. Thus the nested coalescent consists of (independent) coalescents at individual level, nested inside an \lq external\rq\ coalescent at species level.
In our framework, the block-counting process of the nested coalescent is given by choosing $\Lambda_v\in \mathcal{M}[0,1]$, $D_v=0$, $v\in V$, $R_{vu}=0$ and $M_{vu}=\delta_1, v\neq u$ in Definition \ref{def:random_rate}. The resulting process $(Z_t)_{t\geq 0}$ on $\mathbb N_0^V$ is given by the jumps
\begin{equation}
z \mapsto \left\{ \begin{array}{ll}
z+z_v(-e_v+ e_u), & \textrm{ at rate }1, \text{ for each } v,u\in V, v\neq u \\
z- e_v, & \textrm{ at rate } \Lambda_v(\{0\})\binom{z_v}{2}+\int_{(0,1]}\binom{z_v}{2}y^2(1-y)^{z_v-2}\frac{\Lambda_{v}(\mathrm{d} y)}{y^2} \\
z- je_v, & \textrm{ at rate } \int_{(0,1]}\binom{z_v}{j+1}y^{j+1}(1-y)^{z_v-j-1}\frac{\Lambda_{v}(\mathrm{d} y)}{y^2}, \qquad \textrm{ for }j \geq 2.
\end{array}
\right.
\end{equation}
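For measures of the form $\Lambda_v=c\,\delta_0+\sum_k w_k\delta_{y_k}$ with $y_k\in(0,1]$, the integrals collapse to finite sums and the jump rates can be tabulated directly. The following sketch (our own helper, implementing the standard $\Lambda$-coalescent rule that $j+1$ participating blocks, each marked with probability $y$, merge into one) also checks the total rate against the binomial identity $\sum_{k=2}^{z_v}\binom{z_v}{k}y^k(1-y)^{z_v-k}=1-(1-y)^{z_v}-z_vy(1-y)^{z_v-1}$:

```python
from math import comb

def nested_coalescence_rates(z_v, lam0, atoms):
    """Jump rates z -> z - j * e_v of the coalescence part on one island,
    for Lambda_v = lam0 * delta_0 + sum of atoms (y, w) with y in (0, 1].
    Returns a dictionary {j: rate}."""
    # losing one block: Kingman part plus the pairwise part of the atoms
    rates = {1: lam0 * comb(z_v, 2) + sum(
        w * comb(z_v, 2) * y**2 * (1 - y)**(z_v - 2) / y**2 for y, w in atoms)}
    for j in range(2, z_v):
        # j + 1 blocks participate and merge into one, so j blocks are lost
        rates[j] = sum(w * comb(z_v, j + 1) * y**(j + 1)
                       * (1 - y)**(z_v - j - 1) / y**2 for y, w in atoms)
    return rates
```

For the pure Kingman case ($\Lambda_v=\delta_0$) only pairwise mergers occur, at total rate $\binom{z_v}{2}$.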
If we impose $\Lambda_v=\Lambda$ for all $v\in V$ and ignore the empty islands, then up to labelling, $(Z_t)_{t\geq 0}$ is the block-counting process of a nested coalescent with individual $\Lambda$-coalescent and species Kingman coalescent. We now define the moment dual of the nested coalescent, which to our knowledge has not yet been introduced in the literature.
\begin{definition}[The nested Moran model]\label{def:nested_moran}
Fix parameters $\Lambda_v\in \mathcal{M}[0,1]$, $D_v=0$, $R_{vu}=0$ and $M_{vu}=\delta_1$. Let $(X_t)_{t \geq 0}=(X_t^{(v)})_{t\geq 0, v\in V}$ be the solution of
\begin{eqnarray}\label{diffusionmoran}
\mathrm{d} X_t^{(v)}&=&\sum_{u \colon u\neq v} (X_{t-}^{(u)}-X_{t-}^{(v)}) \widetilde N^{M_{vu}}(\mathrm{d} t)+\int_{y \in (0,1]} \int_{\theta \in [0,1]} y(\mathds 1_{\{\theta<X_{t-}^{(v)}\}}-X_{t-}^{(v)}) N^{\Lambda_v}(\mathrm{d} y, \mathrm{d} \theta,\mathrm{d} t) \nonumber
\\&&+\sqrt{\Lambda_v(\{0\})X_t^{(v)}(1-X_t^{(v)})}\, \mathrm{d} B^{(v)}_t, \quad t\geq 0
\end{eqnarray}
for $v\in V,$ where $N^{\Lambda_v}$ are independent Poisson point processes as in \eqref{eq:diffusion}, $\widetilde N^{M_{vu}}$ are independent Poisson point processes on $[0,\infty)$ with intensity $\mathrm{d} t$ that are also independent of $\{ N^{\Lambda_v} \colon v \in V \}$, and $(B_t^{(v)})_{t\geq 0, v\in V}$ is a standard $|V|$-dimensional Brownian motion independent of these Poisson point processes. We call $(X_t)_{t\geq 0}$ the \emph{nested Moran model} with parameters $\Lambda_v, v\in V$.
\end{definition}
The relation between the nested Moran model and the classical Moran model becomes clear if one considers an initial condition $X_0=(X_0^{(v)})\in\{0,1\}^V$. Observe that in this case equation \eqref{diffusionmoran} reduces to
\begin{eqnarray}
\mathrm{d} X_t^{(v)}&=&\sum_{u: u\neq v} (X_{t-}^{(u)}-X_{t-}^{(v)}) \widetilde N^{M_{vu}} (\mathrm{d} t).
\end{eqnarray}
Indeed, $\frac{1}{|V|}\sum_{v\in V} X_t^{(v)}$ is then precisely the frequency process of a Moran model with population size $|V|$.
Further, considering Example~\ref{example-hierarchical} in Section~\ref{sec-modeldef}, we see that the nested Moran model is similar to the hierarchical Moran model in case all branching and death measures are zero. A substantial difference is that in the hierarchical case, there is also independent migration apart from the completely coordinated one, i.e., $M_{vu}=c'\delta_0+c''\delta_1$ for some positive $c',c''$.
The word \emph{nested} may seem slightly misleading in the context of Definition \ref{def:nested_moran}, as the object we introduce is not a family of Moran models correlated by a Moran model, but rather a family of jump diffusions correlated by a Moran model. We use the name \emph{nested Moran model} in order to emphasize that it arises as the moment dual of the nested coalescent. This is the content of the next result, which is an immediate consequence of Corollary \ref{duality}.
\begin{corollary}[The nested coalescent and its dual]
Fix parameters $\Lambda_v\in {\mathcal{M}}[0,1]$, $D_v=0$, $R_{vu}=0$ and $M_{vu}=\delta_1$. Then the block-counting process of the nested coalescent $(Z_t)_{t\geq 0}$ and the nested Moran model $(X_t)_{t\geq 0}$ with these parameters are moment duals, that is, for every $x \in [0,1]^V$, $z\in \mathbb N_0^V$ and $t>0$
\begin{equation}\label{duality_nested}
\ensuremath{\mathbb{E}}_{x}[\prod_{v\in V} (X_t^{(v)})^{z_v}]= \ensuremath{\mathbb{E}}_{z}[\prod_{v\in V} (x_v)^{Z_t^{(v)}}].
\end{equation}
\end{corollary}
\begin{remark}
It seems plausible to generalize this construction to species $\Lambda$-coalescents and even to more general nested coalescents (see \cite{D}) considering the Poisson processes governing the migration to be exchangeable instead of independent.
\end{remark}
We now provide a criterion for the nested coalescent to come down from infinity, whose proof is based on the moment duality. Let us first recall the following notions related to coming down from infinity. We set $\bar z:=(z,z,\ldots,z)$ and $\bar x:=(x,x,\ldots,x)$ for some $z\in\mathbb N$ and $x\in [0,1]$, where both $\bar z$ and $\bar x$ have $|V|$ coordinates. Further, for $w \in \mathbb N_0^V$ we put $|w|:=\Vert w \Vert_1=\sum_{v\in V} w_v$.
\begin{definition}\label{def:comingdown}
We say that the structured branching coalescing process $(Z_t)_{t \geq 0}$ \emph{immediately comes down from infinity} if $ \lim_{m\rightarrow \infty} \lim_{z\rightarrow \infty} \mathbb P_{\bar z}(|Z_t|<m)=1$ for every $t>0$, it \emph{comes down from infinity} if $ \lim_{m\rightarrow \infty} \lim_{z\rightarrow \infty} \mathbb P_{\bar z}(|Z_t|<m)>0$ for every $t> 0$ and it \emph{does not come down from infinity} if $ \lim_{m\rightarrow \infty} \lim_{z\rightarrow \infty} \mathbb P_{\bar z}(|Z_t|<m)=0$ for every $t>0$.
\end{definition}
Thanks to the strong Markov property, these three cases cover all possibilities, i.e., it cannot happen that $ \lim_{m\rightarrow \infty} \lim_{z\rightarrow \infty} \mathbb P_{\bar z}(|Z_t|<m)$ is zero for small $t$ but positive for large $t$, and it is also impossible that it is less than one for small $t$ but equal to one for large $t$. Further, thanks to the strong Markov property and the fact that the total number of particles of the nested coalescent is decreasing in $t$, coming down from infinity for this process implies that $\mathbb P(\limsup_{t \to \infty} |Z_t|<\infty)=1$.
Let us now again consider the particular case when $(Z_t)_{t \geq 0}$ is the nested coalescent, let $(X_t)_{t\geq 0}$ be the nested Moran model, and define
\[\tau:=\inf\{t>0: X_t=(1,1,...,1)\}.\]
For any measurable set $A \subseteq [0,\infty]$ and for any $t>0$, we write $\mathbb P_{\infty}(|Z_t|\in A)=\lim_{z\rightarrow \infty}\mathbb P_{\bar z}(|Z_t|\in A)$. We have the following lemma.
\begin{lemma}\label{lem:nocomingdown}
For all $t >0$, $\mathbb P_{\infty}(|Z_t|<\infty)= \sup_{x\in (0,1)} \mathbb P_{\bar x}(\tau<t).$
\end{lemma}
\begin{proof}
Note that for $x \in (0,1)$,
$$
\mathbb P_{\bar x}(\tau<t)=\mathbb P_{\bar x}(X_t=(1,...,1))=\lim_{z\rightarrow \infty}\ensuremath{\mathbb{E}}_{\bar x}[\prod_{v\in V} (X_t^{(v)})^{z}]=\lim_{z\rightarrow \infty}\ensuremath{\mathbb{E}}_{\bar z}[\prod_{v\in V} x^{Z_t^{(v)}}]
$$
where we have used the duality in the last equation. This implies that
$$
\lim_{z\rightarrow \infty}\mathbb P_{\bar z}(|Z_t|<m)+x^m\geq \mathbb P_{\bar x}(\tau<t)\geq x^m\lim_{z\rightarrow \infty}\mathbb P_{\bar z}(|Z_t|<m).
$$
Taking $x=1-m^{-1/2}$, one gets that
$$
\lim_{z\rightarrow \infty}\mathbb P_{\bar z}(|Z_t|<m)+\mathrm e^{-\sqrt{m}} \geq \mathbb P_{\overline{ 1-m^{-1/2}}}(\tau<t).
$$
After observing that $\mathbb P_{\bar x}(\tau<t)$ is an increasing function of $x$, we conclude that
$$
\liminf_{m\rightarrow \infty} \lim_{z\rightarrow \infty} \mathbb P_{\bar z}(|Z_t|<m)\geq \sup_{x\in (0,1)} \mathbb P_{\bar x}(\tau<t).
$$
Now take $x=1-m^{-2}$ and observe that
$$\mathbb P_{\overline{1-m^{-2}}}(\tau<t)\geq (1-m^{-2})^m\lim_{z\rightarrow \infty}\mathbb P_{\bar z}(|Z_t|<m).$$
Since $(1-m^{-2})^m\to 1$ as $m \to \infty$, this implies that
$$ \sup_{x\in (0,1)} \mathbb P_{\bar x}(\tau<t)\geq \limsup_{m\rightarrow \infty}\lim_{z\rightarrow \infty}\mathbb P_{\bar z}(|Z_t|<m).$$
\end{proof}
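The argument rests on two elementary estimates for the two choices of $x$: $(1-m^{-1/2})^m\le \mathrm e^{-\sqrt m}$, and $(1-m^{-2})^m\ge 1-1/m$ by Bernoulli's inequality, so the latter tends to $1$. A quick numerical confirmation (the function name is ours):

```python
import math

def proof_bounds_hold(m):
    """Check the two elementary estimates used for the two choices of x:
    (1 - m^{-1/2})^m <= exp(-sqrt(m))   (upper bound, x = 1 - m^{-1/2}),
    (1 - m^{-2})^m   >= 1 - 1/m         (lower bound, x = 1 - m^{-2})."""
    upper_ok = (1 - m ** -0.5) ** m <= math.exp(-math.sqrt(m)) + 1e-12
    lower_ok = (1 - m ** -2.0) ** m >= 1 - 1.0 / m - 1e-12
    return upper_ok and lower_ok
```

Both inequalities follow from $\log(1-u)\le -u$ and Bernoulli's inequality, respectively; the numerical check is only a safeguard against sign slips.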
Now the proof of the following corollary is trivial.
\begin{corollary}\label{cor:nocomingdown}
The nested coalescent immediately comes down from infinity if, for every $t>0$, $\sup_{x\in (0,1)} \mathbb P_{\bar x}(\tau<t)=1$, comes down from infinity if, for every $t>0$, $\sup_{x\in (0,1)} \mathbb P_{\bar x}(\tau<t)>0$ and does not come down from infinity if, for every $t>0$, $\sup_{x\in (0,1)} \mathbb P_{\bar x}(\tau<t)=0$.
\end{corollary}
\begin{proof}
The corollary follows directly from Lemma~\ref{lem:nocomingdown}. Indeed, by continuity of measures,
\[ \mathbb P_{\infty}(|Z_t|<\infty)=\lim_{m\to\infty}\mathbb P_{\infty}(|Z_t|<m)=\lim_{m\to\infty}\lim_{z\to\infty} \mathbb P_{\bar z}(|Z_t|<m). \]
\end{proof}
\begin{comment}{
\begin{proof}
The assumption that the $\Lambda$-coalescent does not come down from infinity implies that the nested coalescent with $\Lambda$-coalescent at the individual level does not come down from infinity.
To see this, one can couple the $\Lambda$-coalescent with the nested coalescent with $\Lambda$-coalescent at the individual level, by saying that if some individual that started in island 1 coalesces with some individuals that started in some other island, the individual that is not discarded by the coalescence event is always one that started in island 1. Then the individuals that started in island 1 perform a $\Lambda$-coalescent which has almost surely fewer blocks than the nested coalescent with $\Lambda$-coalescent at the individual level, and the claim follows.
To finish the proof, the claim that the point $(1,1,1...,1)$ is not reached follows from Theorem~\ref{thm:generator_duality} and Lemma~\ref{lem:nocomingdown}, and for the boundary point $(0,0,...,0)$ it follows by symmetry.
\end{proof}}\end{comment}
We note that in \cite{BDLS}, an equivalent condition for coming down from infinity for the nested coalescent was found, where the marginal coalescent of species can be an arbitrary $\Lambda$-coalescent instead of a Kingman one. There, the only assumption on the measures corresponding to the $\Lambda$-coalescent of individual ancestral lines (called `marginal gene coalescent' in that paper) and the one of species is that they have no mass at one, i.e., the probability that all species or all individuals of a given species merge simultaneously is zero. It was shown in \cite[Proposition 6.1]{BDLS} that such a nested coalescent comes down from infinity if and only if the marginal gene coalescent and the marginal species coalescent both come down from infinity. Our Corollary~\ref{cor:nocomingdown} provides a simple alternative condition for coming down from infinity in the case when the marginal species coalescent is Kingman, using moment duality, and it also covers the case $\Lambda_v(\{ 1\})>0$. We expect that this result extends to the case of a general $\Lambda$-coalescent for the species, but we refrain from presenting details.
This approach via duality may be applied to other models in the literature, like the Kingman coalescent with erosion \cite{FLS} that was introduced in Example~\ref{example-hierarchical} in Section~\ref{sec-modeldef}, but we defer such investigations to future work.
\subsection{Fixation of the advantageous trait in models with selection}
Being able to manipulate different mechanisms in the same mathematical framework allows us to translate known results about one mechanism into new results about a different mechanism, and also to come up with comparison arguments involving seemingly unrelated behaviours.
First, we provide a simple equivalent condition for coming down from infinity for one-dimensional processes that exhibit death but no coalescence, using a comparison between these two effects. Thanks to moment duality, this has implications regarding fixation in the moment dual of the process. Second, by comparing migration with death and using moment duality, we show that there are several examples of selection that lead to almost sure fixation in a structured population. The latter result is new and seems to be interesting from a biological perspective.
\begin{example}[Coming down from infinity without coalescence]
In \cite{S}, a necessary and sufficient condition for coming down from infinity for $\Lambda$-coalescents was provided. For structured processes with coalescence, the results of \cite{BDLS,BGKW18} show that different coalescence mechanisms at different vertices and coordinated migration can change the behaviour of the process in this respect radically, see also Corollary~\ref{cor:nocomingdown} in the present paper. It is natural to ask whether coming down from infinity is possible for structured branching coalescing processes exhibiting no coalescence but only death, migration and reproduction. While we expect that the answer to this question is still nontrivial due to a competition between death and migration if the set $V$ has at least two elements, for $V=\{ v \}$ the answer is straightforward and easy to verify, based on a comparison between death and coalescence.
Let us recall the notion of coming down from infinity, coming down from infinity immediately, and not coming down from infinity from Definition~\ref{def:comingdown}. We are interested in the case $V=\{ v \}$, where we ignore the index $v$ in the nomenclature of the corresponding measures and the process, writing simply $D,R,\Lambda$ and $ (Z_t)_{t \geq 0}$.
\begin{proposition}\label{prop:nocomingdownD}
Assume that $|V|=1$ and $\Lambda=0$. Then the structured branching coalescing process $(Z_t)_{t \geq 0}$ comes down from infinity if and only if $D$ has an atom at one.
\end{proposition}
Before proving this proposition, let us explain some of its consequences. In case $D=\delta_1$, starting from an infinite number of particles, the process is extinguished (i.e., absorbed at zero) after an exponentially distributed time. Thanks to Proposition~\ref{prop:nocomingdownD} and the linearity of rates in \eqref{eq:jumps_random} with respect to $D=D_v$, it follows that the same holds whenever $D$ has an atom at one; in particular, coming down from infinity never happens immediately if $\Lambda=0$. Let us note that coming down from infinity up to time $t$ with positive probability is equivalent to reaching 1 with positive probability up to time $t$ for the moment dual $(X_t)_{t \geq 0}$ starting from any initial condition in $(0,1)$. Indeed, for $x \in (0,1)$ and $t>0$ we have
\[ \ensuremath{\mathbb{E}}_{\infty} [x^{Z_t}]=\lim_{n\to\infty} \ensuremath{\mathbb{E}}_{n} [x^{Z_t}] = \lim_{n\to\infty} \ensuremath{\mathbb{E}}_{x}[X_t^{n}] = \mathbb P_x(X_t=1), \numberthis\label{CDFI}\]
which is positive if and only if the process comes down from infinity. In case there is selection in the model (i.e., $R \neq 0$), reaching 1 can be interpreted as fixation of the advantageous trait.
\begin{proof}[Proof of Proposition~\ref{prop:nocomingdownD}]
We will study three cases: when there is mass at one, when the mass is concentrated at zero, and when there is mass at neither one nor zero. We will combine these cases in order to obtain the whole spectrum.
First, it is clear that if $D$ has an atom at 1, then the process $(Z_t)_{t \geq 0}$
gets extinguished after an exponentially distributed time, and hence in particular it comes down from infinity (further, if $D$ is a multiple of $\delta_1$, then it stays infinite until the extinction). Thus, the condition of the proposition is sufficient for coming down from infinity.
For the rest of the proof, let us assume that $D$ has no atom at 1. Our goal is to show that the process does not come down from infinity. Since reproduction can only increase the value of the process, we assume without loss of generality that $R=0$.
Now, let us first consider the case when $D=\delta_0$. Our process $(Z_t)_{t \geq 0}$ is a pure death chain with jumps $n \to n-1$ at rate $n$. Hence, by \eqref{eq:diffusion}, the dual process $(X_t)_{t \geq 0}$ is deterministic: it equals the unique solution $(x(t))_{t \geq 0}$ of the ODE
\[ \frac{\mathrm{d}}{\mathrm{d} t} x(t)=1-x(t) \]
with $x(t)=(1-(1-x(0))\mathrm e^{-t})$, $t \geq 0$.
We observe that $\ensuremath{\mathbb{E}}_{n} [(x(0))^{Z_t}]=\ensuremath{\mathbb{E}}_{x(0)}[X_t^n]=(x(0)\mathrm e^{-t}+(1-\mathrm e^{-t}))^n$, and thus $Z_t$ is a binomial random variable with parameters $n$ and $\mathrm e^{-t}$. Either from this or from Equation \eqref{CDFI}, we conclude that the process $(Z_t)_{t \geq 0}$ does not come down from infinity.
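The binomial law of $Z_t$ in the case $D=\delta_0$ can also be checked by direct simulation: each individual carries an independent standard exponential lifetime, so $Z_t$ counts the lifetimes exceeding $t$. The following is a minimal sketch of this check (the function names are ours, not from the text).

```python
import math
import random

def pure_death_sample(n, t, rng):
    # D = delta_0: each of the n individuals carries an independent Exp(1)
    # lifetime, so Z_t counts the lifetimes exceeding t
    return sum(1 for _ in range(n) if rng.expovariate(1.0) > t)

def pure_death_mean(n, t, reps=20000, seed=1):
    # empirical mean of Z_t, to be compared with n * exp(-t)
    rng = random.Random(seed)
    return sum(pure_death_sample(n, t, rng) for _ in range(reps)) / reps
```

With $n=50$ and $t=1$ the empirical mean concentrates around $50\,\mathrm e^{-1}\approx 18.4$, in line with the $\mathrm{Bin}(n,\mathrm e^{-t})$ law derived above.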
Second, if $D$ has no atom at zero, then we define a coalescence measure $\widehat \Lambda$ according to
\[ \frac{\widehat \Lambda(\mathrm{d} y)}{y^2} = \frac{D(\mathrm{d} y)}{y}, \qquad y \in (0,1]. \numberthis\label{DLambdacoupling} \]
Then, since $D$ is a finite measure, we have $\int_{(0,1]} \frac{\widehat \Lambda(\mathrm{d} y)}{y}<\infty$. Consequently, by \cite[Theorem 8]{pitman}, the $\widehat \Lambda$-coalescent does not come down from infinity.
According to the proof of \cite[Corollary 2]{S}, it is even true that if we define a process $(Y_t)_{t \geq 0}$ similarly to the block-counting chain of the $\widehat \Lambda$-coalescent but having downward jumps of size $k$ instead of $k-1$ at rate $\int_0^1 x^{k-2} (1-x)^{b-k} \widehat \Lambda(\mathrm{d} x) $, given that there are $b \geq k$ blocks, then $(Y_t)_{t \geq 0}$ does not come down from infinity. Now, according to \eqref{eq:jumps_random} and \eqref{DLambdacoupling}, when there are currently $b \in \mathbb N$ blocks, the rate at which $(Y_t)_{t \geq 0}$ jumps to $b-k$, $k=2,\ldots,b$, is
\[ \binom{b}{k} \int_0^1 x^{k-2} (1-x)^{b-k} \widehat \Lambda(\mathrm{d} x) = \binom{b}{k} \int_0^1 x^{k-1} (1-x)^{b-k} D(\mathrm{d} x). \]
Now, thanks to \eqref{eq:jumps_random}, started from $b \in \mathbb N$, our process $(Z_t)_{t \geq 0}$ has the same jump rates as $(Y_t)_{t \geq 0}$, plus additionally downward jumps of size 1 at rate
\[ b \int_0^1 (1-x)^{b-1} D(\mathrm{d} x). \numberthis\label{singledeath} \]
This additional rate, however, depends only linearly on $b$. Hence, using the same arguments as for $D=\delta_0$, one can easily verify that $(Z_t)_{t \geq 0}$ does not come down from infinity. The case of a general $D$ (without an atom at 1) follows from the linearity of the transition rates in the second line of \eqref{eq:jumps_random} with respect to the death measure $D=D_v$.
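The equality of the two displayed jump rates is easy to verify numerically. In the following sketch (the helper names are ours) we take the hypothetical choice $D(\mathrm{d} x)=\mathrm{d} x$, the uniform death measure, so that the coupled measure is $\widehat\Lambda(\mathrm{d} x)=x\,\mathrm{d} x$; for this choice both rates reduce to the same beta integral, with common value $1/k$.

```python
import math

def integrate(f, n=100000):
    # midpoint rule on (0, 1); a crude quadrature helper of our own
    h = 1.0 / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h

def death_rate(b, k):
    # rate of a jump b -> b-k driven by the death measure D(dx) = dx
    return math.comb(b, k) * integrate(lambda x: x ** (k - 1) * (1 - x) ** (b - k))

def coalescent_rate(b, k):
    # the same jump driven by the coupled measure hatLambda(dx) = x D(dx)
    return math.comb(b, k) * integrate(lambda x: x ** (k - 2) * (1 - x) ** (b - k) * x)
```

Since $\binom{b}{k}\int_0^1 x^{k-1}(1-x)^{b-k}\,\mathrm{d} x = \binom{b}{k}B(k,b-k+1)=1/k$, both functions should return $1/k$ for any $b \geq k \geq 2$.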
\end{proof}
\end{example}
\begin{remark}
As we have seen, given $D$, the construction \eqref{DLambdacoupling} results in a coalescence measure $\widehat \Lambda$ satisfying $\int_{(0,1]} \frac{\widehat \Lambda(\mathrm{d} y)}{y}<\infty$. \cite[Theorem 8]{pitman} implies that the associated $\widehat \Lambda$-coalescent has dust, i.e.~if it is started from an infinite number of blocks, then there is a positive proportion of singletons for any $t>0$. This is a stronger condition than not coming down from infinity. E.g., the Bolthausen--Sznitman coalescent $\widehat \Lambda(\mathrm{d} y)=\mathrm{d} y$ has no dust, yet it does not come down from infinity either (cf.~\cite{S}).
Conversely, one could think of defining a death measure $D$ according to \eqref{DLambdacoupling} given a coalescence measure $\widehat \Lambda$ having no atom at zero. Then, for a process satisfying Definition~\ref{def:random_rate} and having zero death measure and coalescence measure $\widehat\Lambda$, we can consider the process where instead of coalescence according to $\widehat \Lambda$ there is death according to $D$, whereas reproduction is unchanged. This process, if it is well-defined, dominates the process with coalescence stochastically from below. However, the measure $D$ is only finite if $\int_{(0,1]} \frac{\widehat \Lambda(\mathrm{d} y)}{y}<\infty$. Otherwise, the death rate \eqref{singledeath} of single individuals equals $+\infty$, and one can intuitively say that the process with death instead of coalescence is the degenerate process jumping to zero immediately after time $t=0$, whatever the initial condition is.
\end{remark}
\begin{example}\label{example-peripatric}
Now, consider the peripatric coalescent $(Z_t^{(1)},Z_t^{(2)})_{t\geq 0}$ defined analogously to \cite{LM}, but with coordinated migration (instead of independent migration). This is one of the processes satisfying Definition~\ref{def:random_rate}, with $V=\{ 1, 2 \}$, where for some $c>0$, we have $\Lambda_1=c \delta_0$, $\Lambda_2=0$, $M_{21},M_{12} \in \mathcal M(0,1]$, $R_{11}=\alpha'\delta_0$ and $R_{22}=\alpha\delta_0$ for some $\alpha',\alpha\geq 0$, and all other measures are equal to zero. The moment dual of the arising process can be interpreted as follows: the vertex 2 is a continent with a large population and the vertex 1 is an island with a smaller one, there is migration in both directions and selection at both locations, but random genetic drift plays a role only on the island.
We are interested in sufficient conditions under which $Z_t^{(2)}$ tends to infinity almost surely, which is equivalent to almost sure fixation of the fitter type in the dual process. Such an assertion follows as soon as we can verify that $Z_t \to \infty$ almost surely as $t \to\infty$ for a process $(Z_t)_{t \geq 0}$ satisfying
\[ Z_t^{(2)} \geq Z_t, \qquad \forall t \geq 0 \numberthis\label{peripatricdomination} \]
realizationwise, given that $Z_0^{(2)}=Z_0$. In order to construct a process that dominates $(Z_t^{(2)})_{t \geq 0}$ from below in this sense, we can remove migration from vertex 1 to vertex 2 from the model. Then, the population on vertex 1 does not influence the one on vertex 2, and hence we can ignore the population on vertex 1 and consider migration from vertex 2 to 1 as death. This gives rise to the one-dimensional process $(Z_t)_{t \geq 0}$ on vertex set $\{ 2 \}$ with the following measures: $\Lambda_2=0$ for the coalescence, $R_{22}=\alpha \delta_0$ for the reproduction (as for $(Z_t^{(2)})_{t \geq 0}$), $M_{22}=0$ for the migration and $D_2=M_{21}$ for the death. It is easy to see that $(Z_t)_{t \geq 0}$ satisfies \eqref{peripatricdomination} realizationwise.
For a general migration measure $M_{21} \in \mathcal M[0,1]$, proving that $Z_t \to \infty$ almost surely as $t \to\infty$ may be involved. However, such results are available for Dirac measures. Note that for $D_2=p \delta_p$, $p \in (0,1)$, according to \eqref{eq:dual-death}, the part of the generator of the dual corresponding to death reads as
\[ B_{D_2} f(x)= \int_0^1 [f(x+y(1-x))-f(x)] \frac{1}{y} D_2(\mathrm{d} y) = f(x+p(1-x))-f(x), \]
where we wrote $x$ instead of $x_2$ everywhere for simplicity. Now note that if a Markov process with values in $[0,1]$ jumps from $x \in [0,1]$ to $x+p(1-x)$ at rate 1, then one minus this process jumps from $1-x$ to $(1-p)(1-x)$ at the same rate. This together with \eqref{eq:independent-branching} yields that if $(N_t)_{t \geq 0}$ denotes the dual of $(Z_t)_{t \geq 0}$, then $(1-N_t)_{t \geq 0}$ has generator
\[ \mathcal L f(x)= \alpha x (1-x) f'(x) + f((1-p)x)-f(x). \]
This relates our setting to the one of \cite{HP}, where processes with generators of the form
\[ \mathcal L f (x) = \alpha (x) f'(x) + f((1-p)x)-f(x) \numberthis\label{pgenerator} \]
were studied, where the domain of the generator $\mathcal L$ consists of continuously differentiable functions $f \colon [0,\nu) \to \mathbb{R}$ for $\nu>0$ fixed, and $\alpha \colon [0,\nu) \to \mathbb{R}$ differentiable. It was shown in \cite[Theorem 1]{HP} that if we have $\sup_{x \in (0,\nu)} \frac{\alpha(x)}{x} < -\log (1-p)$, then the process tends to zero almost surely. Now, if we choose $\nu=1$ and $\alpha(x)=\alpha x(1-x)$, $\alpha>0$, then this condition is equivalent to
\[ \alpha < -\log (1-p), \numberthis\label{alphacond} \]
which can always be guaranteed by a suitable choice of $\alpha>0$ as long as $p \in (0,1)$. Thus, if $\alpha$ and $p$ satisfy \eqref{alphacond}, then it follows by \cite[Theorem 1]{HP} that $N_t \to 1$ almost surely as $t \to \infty$. This implies almost sure fixation of the fitter type in the peripatric coalescent with selection and coordinated migration thanks to \eqref{peripatricdomination}.
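The extinction criterion \eqref{alphacond} can be illustrated by a crude Euler scheme for a process with generator $\alpha x(1-x) f'(x) + f((1-p)x)-f(x)$: between jumps the state follows the logistic drift, and at the arrival times of a unit-rate Poisson clock it is multiplied by $1-p$. This is only a hypothetical illustration; the function name and the discretisation parameters are ours.

```python
import random

def jump_logistic(alpha, p, x0=0.5, t_max=200.0, dt=1e-3, seed=0):
    # Euler scheme: logistic drift alpha*x*(1-x) between jumps, and
    # multiplication by (1-p) at the arrivals of a unit-rate Poisson
    # clock, approximated on the time grid
    rng = random.Random(seed)
    x = x0
    for _ in range(int(round(t_max / dt))):
        x += alpha * x * (1.0 - x) * dt
        if rng.random() < dt:
            x *= 1.0 - p
    return x
```

For $\alpha=0.3$ and $p=1/2$ we have $\alpha < -\log(1-p)\approx 0.693$, and a long simulated path indeed collapses to zero, consistent with \cite[Theorem 1]{HP}.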
This can be generalized to the case when $D_2=M_{21}$ is compactly supported within $(0,1]$, i.e., there exists $p^->0$ such that $D_2([0,p^-)) = 0$. In this case, we define $\bar c= \int_{[0,1]} \frac{D_2(\mathrm{d} y)}{y}$; this number is finite by assumption. Then the process $(N_t)_{t \geq 0}$ is stochastically dominated from below by the moment dual of the process having death measure $D_2^- = \bar c p^- \delta_{p^-}$ and all the other measures equal to the ones corresponding to $(Z_t)_{t \geq 0}$. Hence, if $\alpha$ is such that \eqref{alphacond} holds with $p=p^-$, then $N_t \to 1$ almost surely as $t \to\infty$, which again implies almost sure fixation.
\end{example}
\section{Coordination and expectation}\label{sec-E}
In this section we show that for the process $(Z_t)_{t \geq 0}$ introduced in Definition~\ref{def:random_rate}, for $v \in V$, the expectation process $t \mapsto \ensuremath{\mathbb{E}}[Z_t^{(v)}]$ equals the unique solution of a linear differential equation depending only on the total mass of the underlying measures $M_{uw}$, $D_{w}$ and $R_{uw}$, $u,w \in V$. This is true under the assumption that there is no coalescence (i.e., $\Lambda_v=0$ for all $v \in V$), but we will also provide some extensions to the case of nonzero coalescence.
\begin{lemma}\label{lem:expectationequality}
Let the collections of measures $(D_v)_{v \in V}$, $(R_{vu})_{v,u \in V}$, $(M_{vu})_{v,u \in V}$ and $(\Lambda_v)_{v \in V}$ satisfy Definition~\ref{def:random_rate} with $\Lambda_v=0$ for all $v \in V$. Define $(f(t,v))_{t \in [0,\infty), v \in V}=(\ensuremath{\mathbb{E}}[Z_t^{(v)}])_{t\in [0,\infty), v \in V}$. Then $(f(t,v))_{t \in [0,\infty), v \in V}$ is the unique solution of
\begin{equation}\label{magic}
\frac{\mathrm d}{\mathrm d t} f(t,v) = \sum_{u \in V}( f(t,u) m_{uv}-f(t,v) m_{vu}) - f(t,v) d_{v} + \sum_{u \in V} f(t,u) r_{uv}, \qquad v \in V, t\in [0,\infty)
\end{equation}
with initial condition $f(0,v)=z_0^{(v)}=Z_0^{(v)} \in \mathbb{R}$, $v \in V$.
\end{lemma}
\begin{proof}
The ODE in \eqref{magic} is linear with continuous coefficients and thus has a unique solution.
We fix $v\in V$ for the proof. From the proof of Lemma \ref{lem:markov} we know that $\ensuremath{\mathbb{E}}[|Z_t|]<\infty$ for all $t>0$ and from Lemma \ref{lem:markov} that $f^k_v(z)=\min \{z_v,k \}$ is in the extended domain for all $k \in \mathbb N$ and $v \in V$. From monotone convergence we conclude that $f_v(z)=z_v$ is also in the extended domain, for all $v \in V$. Applying the form \eqref{eq:generator} of the generator for $f_v(z)=z_v$, Dynkin's formula together with the linearity of expectation implies
\[ \begin{aligned}
\ensuremath{\mathbb{E}}&\big[Z_t^{(v)}\big]-Z_0^{(v)} = \ensuremath{\mathbb{E}} \Big[ \int_0^t \Big( \sum_{u \in V} \Big[ \int_0^1 \, \sum_{i=1}^{Z_s^{(u)}} \binom{Z^{(u)}_s}{i} iy^i(1-y)^{Z_s^{(u)}-i} \frac{1}{y} M_{uv}(\mathrm{d} y) \\ & \quad - \int_0^1 \sum_{i=1}^{Z_s^{(v)}} \binom{Z^{(v)}_s}{i}i y^i(1-y)^{Z_s^{(v)}-i} \frac{1}{y} M_{vu}(\mathrm{d} y) \Big] \\ & \quad - \int_0^1 \sum_{i=1}^{Z^{(v)}_s} \binom{Z^{(v)}_s}{i} i y^i(1-y)^{Z_s^{(v)}-i} \frac{1}{y}D_{v}(\mathrm{d} y)
+ \sum_{u \in V} \int_0^1 \sum_{i=1}^{Z_s^{(u)}} \binom{Z_s^{(u)}}{i} i y^i (1-y)^{Z_s^{(u)}-i} \frac{1}{y} R_{uv}(\mathrm{d} y) \Big) \mathrm{d} s \Big]
\\
& = \ensuremath{\mathbb{E}} \Big[ \int_0^t \Big( \sum_{u \in V} \Big[ \int_0^1 \, \ensuremath{\mathbb{E}}\big[\mathrm{Bin}(Z_s^{(u)},y) | Z_s\big] \frac{1}{y}M_{uv}(\mathrm{d} y)-\int_0^1 \, \ensuremath{\mathbb{E}}\big[\mathrm{Bin}(Z_s^{(v)},y) | Z_s\big] \frac{1}{y}M_{vu}(\mathrm{d} y) \Big] \\ & \quad - \int_0^1 \, \ensuremath{\mathbb{E}}\big[\mathrm{Bin}(Z^{(v)}_s,y) | Z_s\big] \frac{1}{y}D_{v}(\mathrm{d} y)
+ \sum_{u \in V} \int_0^1 \, \ensuremath{\mathbb{E}}\big[\mathrm{Bin}(Z^{(u)}_s,y) | Z_s\big] \frac{1}{y}R_{uv}(\mathrm{d} y)\Big) \mathrm{d} s \Big],
\end{aligned} \numberthis\label{eq:expectations}
\]
where $\mathrm{Bin}(n,p)$ denotes a binomially distributed random variable with parameters $n \in \mathbb N$ and $p \in [0,1]$.
Let us show that for example the term for fixed $u \in V$ corresponding to migration between $u$ and $v$ depends only on the total mass of $M_{uv}$ and $M_{vu}$ (using that $\ensuremath{\mathbb{E}}[\mathrm{Bin}(n,y)]=ny$):
\[ \int_0^1 \, \ensuremath{\mathbb{E}}\big[\mathrm{Bin}(Z_s^{(u)},y)|Z_s\big] \frac{1}{y}M_{uv}(\mathrm{d} y)-\int_0^1 \, \ensuremath{\mathbb{E}}[\mathrm{Bin}(Z_s^{(v)},y)|Z_s] \frac{1}{y}M_{vu}(\mathrm{d} y) = Z^{(u)}_s m_{uv}-Z^{(v)}_s m_{vu}. \]
Analogously, we obtain for the death term
\[ - \int_0^1 \, \ensuremath{\mathbb{E}}\big[\mathrm{Bin}(Z^{(v)}_s,y) | Z_s\big] \frac{1}{y}D_{v}(\mathrm{d} y) = -Z_s^{(v)} d_{v}\]
and for the reproduction term for fixed $u \in V$ with offspring in $v \in V$
\[ \int_0^1 \, \ensuremath{\mathbb{E}}\big[\mathrm{Bin}(Z^{(u)}_s,y) | Z_s\big] \frac{1}{y}R_{uv}(\mathrm{d} y) = Z_s^{(u)} r_{uv}. \]
Since our process $(Z_t)_{t \geq 0}$ is nonnegative and $\ensuremath{\mathbb{E}}[\int_0^t Z_s^{(u)} m_{uv} \mathrm{d} s]<\infty$ for all $u \in V$, the Fubini--Tonelli theorem implies that we can interchange the outermost expectation with the integration from 0 to $t$ in the migration terms, and similarly for the death and reproduction terms. Performing this and differentiating with respect to $t$, we obtain that $(f(t,w))_{t \geq 0, w \in V}=(\ensuremath{\mathbb{E}}[Z_t^{(w)}])_{t \geq 0, w \in V}$ is a solution to \eqref{magic}.
\end{proof}
Since the system of linear ODEs has a unique solution, the previous lemma implies that the expectation is invariant under coordination of migration, birth and death, as long as the coalescence measures are zero and the total masses of the other measures are unchanged.
The previous lemma works for coordinating events that affect single individuals. As calculating the expectation for the fully coordinated process is in general simpler, this provides a general machinery to calculate expectations, see the following example and (in the context of the PAM) Example~\ref{ex:intermediatePAM}.
\begin{example}
Consider a one-dimensional pure death process $(Z_t)_{t \geq 0}$ with $D=d\delta_0$ (where $D$ and $R$ denote the single death and reproduction measures, respectively). Our approach to calculate its expectation is to consider the fully coordinated process $(\bar Z_t)_{t \geq 0}$ with $D=d\delta_1$, where all the individuals die simultaneously at a random time $\mathcal T$ which is exponentially distributed with parameter $d$. Then for $t \geq 0$, $\ensuremath{\mathbb{E}}_n[Z_t]=\ensuremath{\mathbb{E}}_n[\bar Z_t]=n\mathbb P(\mathcal T>t)=ne^{-dt}$.
The same principle can be used to calculate the expectation of a Yule process $(Z_t)_{t \geq 0}$ with branching parameter $r>0$, i.e., $R=r\delta_0$ and all other parameters equal to zero. As we saw in Example 1, in this case the fully coordinated process ($R=r\delta_1$) admits the representation $\bar Z_t=n 2^{W_{rt}}$ where $(W_t)_{t \geq 0}$ is a standard Poisson process. This implies that $\ensuremath{\mathbb{E}}_n[\bar Z_t]=n\ensuremath{\mathbb{E}}[2^{W_{rt}}]$. Now, using that the probability generating function of a Poisson random variable with rate parameter $rt$ evaluated at $x$ is $\ensuremath{\mathbb{E}}[x^{W_{rt}}]=e^{rt(x-1)}$ we conclude that $\ensuremath{\mathbb{E}}_n[Z_t]=\ensuremath{\mathbb{E}}_n[\bar Z_t]=ne^{rt}$.
If we now change the notation and consider a process $(Z_t)_{t \geq 0}$ with $D=d\delta_0$, $R=r\delta_0$ and all other parameters equal to zero, we can combine the previous examples to observe that the fully coordinated process $(\bar Z_t)_{t \geq 0}$ defined via $\bar Z_t=n 2^{W_{rt}}\mathds 1_{\{\mathcal T>t\}}$ satisfies $\ensuremath{\mathbb{E}}_n[Z_t]=\ensuremath{\mathbb{E}}_n[\bar Z_t]=n\ensuremath{\mathbb{E}}[2^{W_{rt}}]\mathbb P(\mathcal T>t)=ne^{(r-d)t}$. It is interesting to see that the fully coordinated process and the birth-death branching process have the same expectation at any deterministic time $t\geq 0,$ but very different path behaviour. The first one will be extinguished almost surely at time $\mathcal T$ regardless of the reproduction rate, while if $r-d>0$ the birth-death branching process tends to infinity with positive probability.
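The identity $\ensuremath{\mathbb{E}}_n[\bar Z_t]=n\mathrm e^{(r-d)t}$ for the fully coordinated process can be checked by simulation. The sketch below (function names are ours) samples $\bar Z_t=n 2^{W_{rt}}\mathds 1_{\{\mathcal T>t\}}$ directly.

```python
import math
import random

def coordinated_sample(n, r, d, t, rng):
    # barZ_t = n * 2^{W_{rt}} * 1{T > t}: the whole population doubles at the
    # arrival times of a rate-r Poisson clock and dies together at T ~ Exp(d)
    if rng.expovariate(d) <= t:
        return 0
    doublings, s = 0, 0.0
    while True:
        s += rng.expovariate(r)
        if s > t:
            break
        doublings += 1
    return n * 2 ** doublings

def coordinated_mean(n, r, d, t, reps=100000, seed=7):
    rng = random.Random(seed)
    return sum(coordinated_sample(n, r, d, t, rng) for _ in range(reps)) / reps
```

For $n=1$, $r=1$, $d=1/2$, $t=1$ the empirical mean is close to $\mathrm e^{1/2}\approx 1.65$, despite the heavy-tailed fluctuations of the coordinated process.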
\end{example}
The case $\Lambda_v\neq 0$ does not allow such a clean representation. It is clear that no such result can hold in general, i.e., coordination has an effect on the expectation in the presence of pairwise interaction. To see this, think of the expectation of the block-counting process of a Kingman coalescent (no coordination). It is known that, started from infinity, its expectation is finite at any $t>0$ (see \cite{BB} and the references therein), while the expectation of a star-shaped coalescent (full coordination, \cite{pitman}) started from infinity is always infinite. It is not hard to see that starting both processes with 3 blocks will already lead to processes with different expectations. However, it is still possible to use the idea of Lemma \ref{lem:expectationequality} to some extent. We state a result for the Kingman case $\Lambda_v=c_v\delta_0$.
\begin{proposition}\label{lem:expectationequality2}
Let the collections of measures $(D_v)_{v \in V}$, $(R_{vu})_{v,u \in V}$, $(M_{vu})_{v,u \in V}$ and $(\Lambda_v)_{v \in V}$ satisfy Definition~\ref{def:random_rate} with $\Lambda_v=c_v\delta_0$. Let $(f(t,v))_{t \geq 0, v \in V}$ be any solution of
\begin{eqnarray}\label{magic2}
\frac{\mathrm{d}}{\mathrm{d} t} f(t,v) &=& \sum_{u \in V}( f(t,u) m_{uv}-f(t,v) m_{vu}) - f(t,v) d_{v} + \sum_{u \in V} f(t,u) r_{uv}\nonumber\\&&-(f(t,v)^2-f(t,v))\frac{c_v}{2}, \qquad v \in V, t >0,
\end{eqnarray}
with $f(0,v)=z_0^{(v)} \in \mathbb{R}$, $v \in V$.
Then $\ensuremath{\mathbb{E}}[Z_t^{(v)}]\leq f(t,v)$ for all $t \geq 0$ and $v \in V$.
\end{proposition}
\begin{proof}
The proof follows from the proof of Lemma \ref{lem:expectationequality} together with Jensen's inequality. Indeed,
$$
-\ensuremath{\mathbb{E}} \Big[ \binom{Z_s^{(v)}}{2} \Big]=-\frac{1}{2}\ensuremath{\mathbb{E}} \Big[( Z_s^{(v)})^2- Z_s^{(v)}\Big]\leq -\frac{1}{2}\ensuremath{\mathbb{E}} [ Z_s^{(v)}]^2+\frac{1}{2}\ensuremath{\mathbb{E}} [ Z_s^{(v)}],
$$
which deals with the additional term.
\end{proof}
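In the single-vertex case with $\Lambda=c\delta_0$, $R=r\delta_0$ and no death or migration, the bound of Proposition~\ref{lem:expectationequality2} reads $\frac{\mathrm d}{\mathrm d t}f = rf-\frac{c}{2}(f^2-f)$, and it can be compared with a Gillespie simulation of the branching coalescing chain (birth rate $rz$, coalescence rate $c\binom{z}{2}$). The sketch below is ours, with hypothetical parameter choices.

```python
import random

def ode_bound(r, c, z0, t, dt=1e-4):
    # Euler solve of f' = r f - (c/2)(f^2 - f), the one-vertex case of (magic2)
    f = float(z0)
    for _ in range(int(round(t / dt))):
        f += (r * f - 0.5 * c * (f * f - f)) * dt
    return f

def gillespie_mean(r, c, z0, t, reps=10000, seed=3):
    # Gillespie simulation: birth at rate r*z, coalescence at rate c*z*(z-1)/2;
    # the state never reaches 0, since from z = 1 only births are possible
    rng = random.Random(seed)
    total = 0
    for _ in range(reps):
        z, s = z0, 0.0
        while True:
            rate = r * z + 0.5 * c * z * (z - 1)
            s += rng.expovariate(rate)
            if s > t:
                break
            if rng.random() * rate < r * z:
                z += 1
            else:
                z -= 1
        total += z
    return total / reps
```

For $r=c=1$, $z_0=10$, $t=2$, the ODE bound is $\approx 3.11$, and the simulated mean settles noticeably below it, in line with the Jensen step in the proof.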
\subsection{Variations of the PAM branching process}\label{sec:PAM}
The Parabolic Anderson Model (PAM) is a classical mathematical object that has attracted a lot of attention in recent decades. In this context, we work on the subgraph of $\mathbb Z^d$ spanned by the vertex set $V=[K]^d$, where $[K]=\{ 1,\ldots, K \}$ for $K \in \mathbb N$. Consider the Cauchy problem for the heat equation with random coefficients and localized initial datum: Let $\bar\xi^+=\{\xi_v^+\}_{v\in [K]^d}$ and $\bar\xi^-=\{\xi_v^-\}_{v\in [K]^d}$ be two families of independent and identically distributed random variables with values in $\mathbb{R}^+$. For $v \in [K]^d$ let us define $\xi_v := \xi^+_v-\xi^-_v$, and let us put $\bar \xi=\{ \xi_v \}_{v \in [K]^d}$. Then the Cauchy problem is
\begin{eqnarray}
\frac{ \mathrm{d}}{\mathrm{d} t} f(t,v)&=&\sum_{u \colon |u-v|=1} [f(t,u)-f(t,v)]+\xi_vf(t,v) \label{PAM}\\
f(0,v)&=&\mathds 1_{\{v=\bar 0\}}.\nonumber
\end{eqnarray}
It is well-known that, conditionally on $(\bar\xi^+,\bar \xi^-)$, the branching process $(Z_t)_{t \geq 0}$ which jumps from the state $\bar z$ to the state $\bar z-e_v+e_u$ at rate $z_v$ if $u$, $v$ are neighbouring vertices, to the state $\bar z+e_v$ at rate $z_v \xi^+_v$ and to the state $\bar z-e_v$ at rate $z_v \xi^-_v$, where $\xi^+_v-\xi^-_v=\xi_v$, is such that, under mild conditions on $\bar\xi=\{\xi_v\}_{v\in [K]^d}$, $f(t,v)= \ensuremath{\mathbb{E}}[Z_t^{(v)}]$ is a solution to the PAM \cite{GM}. (Note that conditional on $\bar \xi$, this solution is unique according to Lemma~\ref{lem:expectationequality}.) For this reason $(Z_t)_{t \geq 0}$ is studied in \cite{OR1,OR2,OR3}. We note that the results of the present section remain valid if we replace $[K]^d$ with a discrete torus, i.e., if the notion of neighbouring vertices is taken with respect to periodic boundary conditions.
As the process $(Z_t)_{t \geq 0}$ is a branching process, one can use moment duality, for example, to estimate the probability that there is at least one individual (in the branching process) at a certain position, using an ODE. Indeed, imagine that the branching process starts with one individual at position $\bar 0$ and we are interested in knowing whether at a fixed time $t>0$ there is some individual at position $v$. Taking $\bar x= (1,1,...,1)-e_v(1-\varepsilon)$ and $\bar z= e_v$, we obtain
\begin{equation}\label{extinction}
\mathbb{P}_{e_v}(Z_t^{(v)}=0)= \lim_{\varepsilon\rightarrow 0} \ensuremath{\mathbb{E}}_{e_v}[\varepsilon^{Z_t^{(v)}}]= \lim_{\varepsilon\rightarrow 0}\ensuremath{\mathbb{E}}_{\bar x}[X_t^{(v)}]
\mathrm end{equation}
and
\begin{equation}\label{occupancy}
\mathbb{P}_{e_u}(Z_t^{(v)}=0)=\lim_{\varepsilon\rightarrow 0} \ensuremath{\mathbb{E}}_{e_u}[\varepsilon^{Z_t^{(v)}}]= \ensuremath{\mathbb{E}}_{(1,1,...,1)-e_v}[X_t^{(u)}].
\mathrm end{equation}
This approach seems not to have been explored yet in the PAM literature. Indeed, the behaviour of this branching process is a classical problem that has been solved to a great extent only recently \cite{OR1,OR2,OR3} using different techniques. Note that in the case without death, $(X_t^{(v)})_{v\in V,t\geq 0}$ is the solution of a system of ordinary differential equations
\begin{eqnarray}
\frac{ \mathrm{d}}{\mathrm{d} t}X_t^{(v)}&=&\sum_{u \colon |u-v|=1} [X_t^{(u)}-X_t^{(v)}]+\xi_v X_t^{(v)}(1-X_t^{(v)}) \label{PAMdual}
\end{eqnarray}
which is easy to solve numerically and seems plausible to study mathematically (see Figure \ref{imagen}).
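As an illustration of the preceding remark, the following sketch integrates \eqref{PAMdual} (without death) with an explicit Euler scheme in $d=1$, mimicking the setup of Figure \ref{imagen} with initial condition $(1,\ldots,1)-e_4$; the function name and parameters are ours.

```python
def pam_dual_euler(xi, x0, t, dt=1e-3):
    # explicit Euler for dX_v/dt = sum_{u ~ v} [X_u - X_v] + xi_v X_v (1 - X_v)
    # on the path graph with K = len(xi) vertices
    K = len(xi)
    x = list(x0)
    for _ in range(int(round(t / dt))):
        g = x[:]
        for v in range(K):
            lap = 0.0
            if v > 0:
                lap += g[v - 1] - g[v]
            if v < K - 1:
                lap += g[v + 1] - g[v]
            x[v] = g[v] + (lap + xi[v] * g[v] * (1.0 - g[v])) * dt
    return x
```

The solution stays in $[0,1]^K$ (for a small enough step size), and the coordinate started at $0$ is pushed towards $1$ by both the Laplacian and the logistic term.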
\begin{figure} \centering
\includegraphics[scale=.467]{Imagen.png}\caption{\small In these pictures we observe the process $X_t$ with $V=\mathbb N$, $R_{v,v}=r\delta_0$, $M_{v,v+1}=m\delta_0$ and all other parameters being zero. This is the dual of a branching random walk in which particles branch at rate 1 and migrate from state $v$ to $v+1$ at rate $m$. In a), b) and c) the starting condition is $(1,1,...)-e_4$ and in d) it is $(1,1,...)-e_{40}$. As observed in Equation \eqref{occupancy}, the black line $(X_0)$ is the graph of the probability that there is no particle at position 4 (resp.\ 40) at time $t>0$, starting the branching random walk with one particle at position zero at time zero.}\label{imagen}\end{figure}
Probably the most important technique used in the study of the PAM is the following assertion.
\begin{proposition}[Feynman--Kac formula]\label{thm:feynmankac}
Let $(Y_t)_{t \geq 0}$ be a simple symmetric random walk in $[K]^d$. Under the moment condition
\[ \ensuremath{\mathbb{E}} \Big[ \big(\frac{\max \{ \xi_v, 2 \}}{\log(\max \{ \xi_v,2 \})}\big)^d \Big]<\infty, \qquad \forall v \in V, \numberthis\label{PAMmomentcond} \]
we have that
\[ f(t,v)=\ensuremath{\mathbb{E}}[\mathrm{e}^{\int_0^t \xi_{Y_s}\mathrm{d} s}\mathds 1_{\{Y_t=v\}}]=\ensuremath{\mathbb{E}}[\mathrm{e}^{\int_0^t \xi_{Y_s}^+\mathrm{d} s}\mathds 1_{\{Y_t=v\}} \mathds 1_{\{ t < \mathcal T \}}], \qquad t>0, v \in [K]^d \]
is a solution to the PAM, where $(M_t)_{t \geq 0}$ is a Poisson process that is independent of $(Y_t)_{t \geq 0}$, and
\[ \mathcal T = \inf \{ t \geq 0 \colon M_{\int_0^t \xi_{Y_s}^- \mathrm{d} s} \geq 1 \}. \numberthis\label{Tdef} \]
\end{proposition}
The original proof is analytic and can be found in \cite{GM}. Our construction provides a straightforward proof of this formula using the \lq lonely walker representation\rq~(see Remark 2.7 of \cite{KPAM}).
\begin{proof}
Since in the PAM there is no coalescence, Lemma~\ref{lem:expectationequality} provides a straightforward way to compute $\ensuremath{\mathbb{E}}[Z_t^{(v)}]$, $v \in V$. Let us first consider the case without death. Then, thanks to the lemma and the uniqueness of the solution of the PAM under \eqref{PAMmomentcond} (cf.~\cite[Theorem 1.2]{KPAM}), for $t \geq 0$, $Z_t^{(v)}$ has the same expectation as $Z'^{(v)}_t$ where the \lq fully coordinated PAM\rq~process $(Z'_t)_{t \geq 0}$ is such that $\Lambda_v=D_v=0$, further, $R_{uv}$ and $M_{vu}$ are replaced by their total masses times $\delta_1$. In the process $(Z'_t)_{t \geq 0}$, all individuals move together and reproduce simultaneously according to a Poisson process $(N_t)_{t \geq 0}$ time-changed by $(\xi^+_v)_{v \in V}$ evaluated along the random walk path $(Y_t)_{t \geq 0}$.
To be more precise, let us define $\tau=\int_0^t \xi^+_{Y_s} \mathrm{d} s$. Then
$(Z'_t)_{t \geq 0}=(Z'^{(v)}_{t})_{t \geq 0,v \in V}$ is defined as
\[ Z'^{(v)}_t = 2^{N_{\tau}} \mathds 1_{\{ Y_t = v \}}. \]
Using the probability generating function of a Poisson random variable, one computes
\[ \ensuremath{\mathbb{E}}\big[Z_t^{(v)}\big]=\ensuremath{\mathbb{E}}\big[Z'^{(v)}_t\big] = \ensuremath{\mathbb{E}}\big[ 2^{N_\tau} \mathds 1_{\{Y_t = v \}} \big] = \ensuremath{\mathbb{E}} \big[ \ensuremath{\mathbb{E}} \big[ 2^{N_\tau} \big| \sigma\big(\bar \xi^+, (Y_s)_{0 < s \leq t}\big) \big] \mathds 1_{\{ Y_t = v \}} \big] = \ensuremath{\mathbb{E}} \big[ \mathrm e^\tau \mathds 1_{\{ Y_t = v \}} \big], \]
which finishes the proof.
In case there is also death in the model, in the fully coordinated process all individuals die simultaneously
at the first arrival time of a Poisson process $(M_t)_{t \geq 0}$, independent of $(N_t)_{t \geq 0}$, time-changed by $(\xi^-_v)_{v \in V}$. To be more precise, for $t>0$,
\[ \{ Z'_t = 0 \} = \{ t \geq \mathcal T \}, \]
where $\mathcal T$ is defined according to \eqref{Tdef}.
Thus, we have
\[
\begin{aligned}
\ensuremath{\mathbb{E}}\big[Z_t^{(v)}\big] & = \ensuremath{\mathbb{E}} \big[ \mathrm e^\tau \mathds 1_{\{ Y_t=v \}} \mathds 1_{\{ t< \mathcal T \}} \big]=\ensuremath{\mathbb{E}} \big[ \mathrm e^{\int_0^t \xi^+_{Y_s} \mathrm{d} s} \mathds 1_{\{ Y_t=v \}} \mathds 1_{\{ t< \mathcal T \}} \big]=\ensuremath{\mathbb{E}} \big[ \mathrm e^{\int_0^t (\xi^+_{Y_s}- \xi^-_{Y_s}) \mathrm{d} s} \mathds 1_{\{ Y_t=v \}}\big]
\\ & =\ensuremath{\mathbb{E}} \big[ \mathrm e^{\int_0^t \xi_{Y_s} \mathrm{d} s} \mathds 1_{\{ Y_t=v \}}\big].
\end{aligned}\]
\end{proof}
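The Feynman--Kac representation can also be checked numerically in a toy case: on two sites, a Monte Carlo average of $\mathrm e^{\int_0^t \xi_{Y_s}\mathrm{d} s}\mathds 1_{\{Y_t=v\}}$ over walk paths should match a direct Euler solve of the corresponding linear system. The sketch below (all names and parameters are ours) does this for a walk flipping between two sites at rate 1.

```python
import math
import random

def fk_estimate(xi, jump_rate, t, v, reps=20000, seed=11):
    # Monte Carlo for E[exp(int_0^t xi_{Y_s} ds) 1{Y_t = v}], with Y a walk
    # flipping between the two sites {0, 1} at rate jump_rate, started at 0
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(reps):
        y, s, integral = 0, 0.0, 0.0
        while True:
            hold = rng.expovariate(jump_rate)
            if s + hold >= t:
                integral += (t - s) * xi[y]
                break
            integral += hold * xi[y]
            s += hold
            y = 1 - y
        if y == v:
            acc += math.exp(integral)
    return acc / reps

def pam_two_site(xi, jump_rate, t, dt=1e-4):
    # Euler solve of f' = jump_rate * (graph Laplacian) f + diag(xi) f, f(0) = e_0
    f = [1.0, 0.0]
    for _ in range(int(round(t / dt))):
        g = f[:]
        f[0] = g[0] + (jump_rate * (g[1] - g[0]) + xi[0] * g[0]) * dt
        f[1] = g[1] + (jump_rate * (g[0] - g[1]) + xi[1] * g[1]) * dt
    return f
```

For a bounded potential the weight $\mathrm e^{\int_0^t \xi_{Y_s}\mathrm{d} s}$ is bounded, so the Monte Carlo average converges quickly to the solution of the two-site system.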
\begin{example}\label{ex:intermediatePAM}
Since the PAM has a unique solution under the condition \eqref{PAMmomentcond}, there is an uncountable family of coordinated processes $(Z''_t)_{t \geq 0}$ such that $\ensuremath{\mathbb{E}}[Z''^{(v)}_t]$ equals $\ensuremath{\mathbb{E}}[Z_t^{(v)}]=\ensuremath{\mathbb{E}}[Z'^{(v)}_t]$ from the proof of Proposition~\ref{thm:feynmankac}. For example, this is the case for $(Z''_t)_{t \geq 0}$ where the birth and the death rates are the same as for the branching process $(Z_t)_{t \geq 0}$, further, $\Lambda_v=0$ and $M_{uv}=m_{uv}\delta_{\frac{1}{2}}$. In the process $(Z''_t)_{t \geq 0}$, for any $(u,v) \in V \times V$ with $u \neq v$, migration events from $u$ to $v$ happen according to a homogeneous Poisson process, independently of all the other pairs of vertices, and at a migration event each individual situated at $u$ migrates to $v$ independently with probability $1/2$.
\end{example}
\section{Coordination and variance}\label{sec-Var}
In this section, we further analyse the processes that turn out to have the same expectation thanks to Lemma~\ref{lem:expectationequality}. In the case of spatial branching processes with migration, i.e., in the case where there is no coalescence but reproduction, death and migration are possibly present in the model, we compute the variance of the processes. We show that given the total masses of the reproduction, death and migration measures, the variance is maximal in the completely coordinated case and minimal in the independent case.
We say that the collection of measures
\[ \big\{ (D_v)_{v \in V}, (R_{vu})_{v,u \in V}, (M_{vu})_{v,u \in V}, (\Lambda_v)_{v\in V} \big\} \]
is of type
\[ \big\{ (d_v)_{v \in V}, (r_{vu})_{v,u \in V}, (m_{vu})_{v,u \in V}, (c_v)_{v\in V} \big\} \]
if $D_v[0,1]=d_v$, $R_{vu}[0,1]=r_{vu}$, $M_{vu}[0,1]=m_{vu}$ and $\Lambda_v[0,1]=c_v$. In case the structured branching coalescing process $(Z_t)_{t \geq 0}$ (defined according to Definition~\ref{def:random_rate}) has parameters of this type, we write $(Z_t)_{t \geq 0} \in \mathcal{K}((d_v)_{v}, (r_{vu})_{u,v}, (m_{vu})_{u,v}, (c_v)_{v} )$.
\begin{lemma}\label{lem:varianceequality}
Let $d_v, r_{vu},m_{vu}\geq 0$ and $c_v=0$ for all $u,v \in V$. Then, \[ \sup_{(Z_t)_{t \geq 0} \in \mathcal{K}((d_v)_{v}, (r_{vu})_{u,v}, (m_{vu})_{u,v}, (0)_v )}\mathrm{Var}[Z^{(v)}_s]=\mathrm{Var}[\bar Z^{(v)}_s] \] for all $s \geq 0$ and $v \in V$, where $(\bar Z_t)_{t \geq 0}$ is the process such that for all $u,w \in V$, $M_{uw}=m_{uw}\delta_1$, $R_{uw}=r_{uw}\delta_1$ and $D_{u}=d_u\delta_1$, and \[ \inf_{(Z_t)_{t \geq 0}\in \mathcal{K}((d_v)_{v}, (r_{vu})_{u,v}, (m_{vu})_{u,v}, (0)_v )}\mathrm{Var}[Z^{(v)}_s]=\mathrm{Var}[\underline Z^{(v)}_s] \] for all $s \geq 0$ and $v \in V$, where $(\underline Z_t)_{t \geq 0}$ is the process such that for all $u,w \in V$, $M_{uw}=m_{uw}\delta_0$, $R_{uw}=r_{uw}\delta_0$ and $D_{u}=d_u\delta_0$. Here, $(0)_v$ denotes the collection of $|V|$ instances of the zero measure.
\end{lemma}
\begin{proof}
Applying the form \eqref{eq:generator} of the generator to $f_v(z)=z_v^2$, we obtain, using Dynkin's formula,
\begin{align*}
\mathbb{E}\Big[{Z_t^{(v)}}^2\Big]-{Z_0^{(v)}}^2& = \ensuremath{\mathbb{E}} \Big[ \int_0^t \Big(\sum_{u \in V} \int_0^1\sum_{i=1}^{Z_s^{(v)}} (-2 Z_s^{(v)}i + i^2) \binom{Z_s^{(v)}}{i} y^i (1-y)^{Z^{(v)}_s-i} \frac{1}{y} M_{vu}(\mathrm{d} y)
\\ & \qquad + \sum_{u \in V} \int_0^1 \sum_{i=1}^{Z_s^{(u)}} (2 Z_s^{(v)}i + i^2) \binom{Z_s^{(u)}}{i} y^i (1-y)^{Z^{(u)}_s-i} \frac{1}{y} M_{uv}(\mathrm{d} y)\\
& \qquad + \int_0^1 \sum_{i=1}^{Z_s^{(v)}}(-2 Z_s^{(v)} i+i^2) \binom{Z_s^{(v)}}{i} y^i (1-y)^{Z_s^{(v)}-i} \frac{1}{y} D_v(\mathrm{d} y) \\
& \qquad + \sum_{u \in V} \int_0^1 \sum_{i=1}^{Z_s^{(u)}} (2 Z_s^{(v)} i+i^2)\binom{Z_s^{(u)}}{i} y^i (1-y)^{Z_s^{(u)}-i} \frac{1}{y} R_{uv}(\mathrm{d} y)
\Big) \mathrm{d} s \Big]. \numberthis\label{eq:lastline}
\end{align*}
Thus, recalling that a binomial random variable $X$ with parameters $n,p$ satisfies $\ensuremath{\mathbb{E}}[X^2]=np(1-p+np)$, and using \eqref{eq:expectations} together with the Fubini--Tonelli theorem, we can interchange the outermost expectation with the integration from 0 to $t$ on the right-hand side of \eqref{eq:lastline} by the same arguments as in the proof of Lemma \ref{lem:expectationequality} and write the equation in differential form as follows:
\[ \numberthis\label{eq:Dynkinsquare}
\begin{aligned}
\frac{\mathrm{d}}{\mathrm{d} t} \mathbb{E}& \Big[ {Z_t^{(v)}}^2 \Big]= \ensuremath{\mathbb{E}} \Big[ \sum_{u \in V} \int_0^1 Z_t^{(v)} (1-y+Z_t^{(v)} y-2Z_t^{(v)}) \, M_{vu}(\mathrm{d} y) \Big]
\\ & \qquad + \ensuremath{\mathbb{E}} \Big[\sum_{u \in V} \int_0^1 \big( Z_t^{(u)} (1-y+Z_t^{(u)} y)+2 Z_t^{(v)} Z_t^{(u)} \big) \, M_{uv}(\mathrm{d} y) \Big]
\\ & \qquad + \ensuremath{\mathbb{E}} \Big[\int_0^1 Z_t^{(v)} (1-y+Z_t^{(v)} y-2 Z_t^{(v)}) \, D_v(\mathrm{d} y) \Big] \\
& \qquad + \ensuremath{\mathbb{E}} \Big[\sum_{u \in V} \int_0^1 \big( Z_t^{(u)}(1-y+Z_t^{(u)}y)+2 Z_t^{(v)} Z_t^{(u)} \big) \, R_{uv}(\mathrm{d} y) \Big].
\end{aligned}
\]
Now, $\mathrm{Var}[Z_t^{(v)}] = \ensuremath{\mathbb{E}}[{Z_t^{(v)}}^2]-\ensuremath{\mathbb{E}}[Z_t^{(v)}]^2 $, and Lemma~\ref{lem:expectationequality} implies that $\ensuremath{\mathbb{E}}[Z_t^{(v)}]^2$ is constant given the total masses of all migration, death and reproduction measures. Hence, in order to maximize (minimize) $\mathrm{Var}[Z_t^{(v)}]$ given these total masses, it suffices to maximize (minimize) the right-hand side of \eqref{eq:Dynkinsquare}. Note that for all $u \in V$, $Z_t^{(u)}$ takes nonnegative integer values, and on the event that the relevant population is zero, the corresponding integrand vanishes. Further, the coefficient of $y$ in each integrand is nonnegative, so each integrand is nondecreasing in $y$. It follows that given the total masses, any term on the right-hand side of \eqref{eq:Dynkinsquare} is maximal for the corresponding measure being a constant multiple of $\delta_1$ and minimal for the measure being a constant multiple of $\delta_0$. This proves the lemma.
\end{proof}
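To illustrate the lemma in the simplest setting (one vertex, pure reproduction of total mass $r$, no death or migration; the parameter values below are our own choices, not taken from the paper), one can compare Monte Carlo estimates of the two extreme cases. Under full coordination ($R=r\delta_1$) the population is $2^{N_{rt}}$ for a rate-$r$ Poisson process $N$, while in the independent case ($R=r\delta_0$) it is a Yule process, whose value at time $t$ is geometric with parameter $\mathrm{e}^{-rt}$. Both have mean $\mathrm{e}^{rt}$, but the coordinated variance dominates:

```python
import numpy as np

rng = np.random.default_rng(0)
r, t, n = 1.0, 0.5, 200_000   # reproduction mass, time horizon, sample size

# Fully coordinated reproduction (R = r*delta_1): all individuals reproduce
# simultaneously at rate r, so Z_t = 2**N with N ~ Poisson(r*t).
z_coord = 2.0 ** rng.poisson(r * t, size=n)

# Independent reproduction (R = r*delta_0): a Yule process, whose value at
# time t is geometric with success probability exp(-r*t).
z_indep = rng.geometric(np.exp(-r * t), size=n).astype(float)

mean_coord, mean_indep = z_coord.mean(), z_indep.mean()
var_coord, var_indep = z_coord.var(), z_indep.var()
# Both means estimate exp(r*t); the coordinated sample variance (analytically
# exp(3rt) - exp(2rt)) exceeds the independent one (exp(2rt) - exp(rt)).
```

The sample variances line up with the analytic values $\mathrm{e}^{3rt}-\mathrm{e}^{2rt}$ and $\mathrm{e}^{2rt}-\mathrm{e}^{rt}$, matching the ordering asserted by the lemma.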
The previous result allows us to bound the variance of all the processes whose expectation solves the PAM.
\begin{corollary}
Assume that $(Z_t)_{t \geq 0}$ is a coordinated branching process such that $\ensuremath{\mathbb{E}}[Z_t^{(v)}]$ is a solution of equation \eqref{PAM}. Let $(Y_t)_{t \geq 0}$ be a simple symmetric random walk on $[K]^d.$ Then, recalling $\xi_v=\xi_v^+-\xi_v^-$, for $v\in V$,
\begin{align*}
{\rm Var}\big[Z_t^{(v)}\big]\leq &\ensuremath{\mathbb{E}}\Big[\exp\Big(\int_0^t \xi^+_{Y_s}\mathrm{d} s\Big)\Big(\exp \big(2\int_0^t \xi^+_{Y_s}\mathrm{d} s\big)-1\Big)\mathds 1_{\{Y_t=v\}} \mathds 1_{\{ t < \mathcal T \}} \Big]\\
=&\ensuremath{\mathbb{E}}\Big[\exp\Big(\int_0^t \xi_{Y_s}\mathrm{d} s\Big)\Big(\exp\big(2\int_0^t \xi^+_{Y_s}\mathrm{d} s\big)-1\Big)\mathds 1_{\{Y_t=v\}} \Big].
\end{align*}
Here, $(M_t)_{t \geq 0}$ is a Poisson process independent of $(Z_t)_{t \geq 0}$, and $\mathcal T=\inf \{ t \geq 0 \colon M_{\int_0^t \xi^{-}_{Y_s} \mathrm{d} s} \geq 1 \}$.
\end{corollary}
\begin{proof}
The right-hand side of the asserted identity is the variance of $\bar Z_t^{(v)}$ in the notation of Lemma \ref{lem:varianceequality}, so the statement follows directly from that lemma. To calculate the variance, we compute the second moment as follows:
\begin{align*}
\ensuremath{\mathbb{E}}\big[(Z_t^{(v)})^2\big]
= &\ensuremath{\mathbb{E}}\big[ 4^{N_{\int_0^t \xi_{Y_s}\mathrm{d} s}}\mathds 1_{\{Y_t=v\}} \mathds 1_{\{ t < \mathcal T \}} \big]=\ensuremath{\mathbb{E}} \big[ \mathrm{e}^{\ln(4)N_{\int_0^t \xi_{Y_s}\mathrm{d} s}}\mathds 1_{\{Y_t=v\}} \mathds 1_{\{ t < \mathcal T \}} \big] \\
= &\ensuremath{\mathbb{E}}\big[\mathrm{e}^{3\int_0^t \xi_{Y_s}\mathrm{d} s}\mathds 1_{\{Y_t=v\}} \mathds 1_{\{ t < \mathcal T \}} \big],
\end{align*}
where in the last equality we used the formula for the moment generating function of a Poisson random variable. As shown in Proposition \ref{thm:feynmankac}, $\ensuremath{\mathbb{E}}[\bar Z_t^{(v)}]^2=\ensuremath{\mathbb{E}}[\mathrm{e}^{2\int_0^t \xi_{Y_s}\mathrm{d} s}\mathds 1_{\{Y_t=v\}} \mathds 1_{\{ t < \mathcal T \}}]$, which together with the definition of $\mathcal T$ implies
$$
{\rm Var}\big[\bar Z_t^{(v)}\big]=\ensuremath{\mathbb{E}} \big[\mathrm{e}^{2\int_0^t \xi_{Y_s}\mathrm{d} s}(\mathrm{e}^{\int_0^t \xi_{Y_s}\mathrm{d} s} -1) \mathds 1_{\{Y_t=v\}} \mathds 1_{ \{ t < \mathcal T \} } \big]=\ensuremath{\mathbb{E}}\big[\mathrm{e}^{\int_0^t \xi_{Y_s}\mathrm{d} s}(\mathrm{e}^{2\int_0^t \xi^+_{Y_s}\mathrm{d} s}-1)\mathds 1_{\{Y_t=v\}} \big].
$$
\end{proof}
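The Poisson moment identity $\ensuremath{\mathbb{E}}[4^{N_\lambda}]=\mathrm{e}^{3\lambda}$ used in the computation above follows from the moment generating function and can be checked numerically (the value of $\lambda$ below is an arbitrary test choice):

```python
import math

lam = 0.7   # arbitrary test value of the Poisson parameter
# E[4^N] for N ~ Poisson(lam), summed term by term, against the closed form
# exp(lam*(e^{ln 4} - 1)) = exp(3*lam) from the moment generating function.
series = sum(4.0 ** k * math.exp(-lam) * lam ** k / math.factorial(k)
             for k in range(60))
closed_form = math.exp(3.0 * lam)
```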
\section{Extensions to infinite graphs}\label{sec-infinity}
The main results of the present paper concern the case in which $G=(V,E)$ is a finite graph (recall that this graph was defined in Section~\ref{sec-modeldef}). Let us now discuss under what conditions these statements can be extended to infinite graphs.
Let $G=(V,E)$ be an infinite, connected, locally finite graph. Choose a collection
\[
\mathfrak M=\{ R_{uv},D_w, M_{uv}, \Lambda_w \colon w \in V, (u,v) \in E \}
\]
of elements of $\mathcal M[0,1]$ interpreted similarly to Definition~\ref{def:random_rate} for the case of a finite graph, and a collection
\[
\mathfrak P=\{ N^{R_{uv}}, N^{D_w}, N^{M_{uv}}, N^{\Lambda_w} \colon w \in V, (u,v) \in E \}
\]
of independent Poisson point processes, defined analogously to the case of a finite graph (cf.~page~\pageref{PPP's}), involving the measures contained in $\mathfrak M$.
Let us fix $v_0 \in V$. For $v,w \in V$ let $d(v,w)$ denote the graph distance of $v$ and $w$, i.e., the length of the shortest path of edges connecting $v$ and $w$ in the graph. Then for $N \in \mathbb N_0$ we define
\[ V^N=\{ v \in V \colon d(v,v_0) \leq N \} \]
as the set of vertices situated at graph distance at most $N$ from $v_0$. We also put $V^{-1}=E^{-1}=\emptyset$. We denote by $G^N=(V^N,E^N)$ the subgraph of $G$ spanned by $V^N$. Then we let $(Z_{N,t})_{t \geq 0}=(Z^{(v)}_{N,t})_{t \geq 0,v \in V}$ be the process defined according to the Poisson point process representation for finite graphs involving the measures contained in the set
\[
\mathfrak M^N=\{ R_{uv},D_w, M_{uv}, \Lambda_w \colon w \in V^N, (u,v) \in E^N \}
\]
and the corresponding Poisson point processes included in the set
\[
\mathfrak P^N=\{ N^{R_{uv}}, N^{D_w}, N^{M_{uv}}, N^{\Lambda_w} \colon w \in V^N, (u,v) \in E^N \}.
\]
Now, for $N\in \mathbb N_0$ we let
\[ \tau_N = \inf \{ t \geq 0 \colon \exists v \in V^N \setminus V^{N-1} \colon Z_{N,t}^{(v)} >0 \}. \]
The crucial assumption in order to define a limiting process on the infinite graph is the following nonexplosion condition:
\[ \lim_{N \to \infty} \tau_{N} = \infty, \numberthis\label{nastycondition} \]
almost surely given the initial condition $Z_0^{(v)}=\mathds 1_{\{ v=v_0\}}$, $v \in V$.
Indeed, then for $t \geq 0$, the random variable
\[ N_t = \inf \{ N \in \mathbb N_0 \colon t < \tau_N \} \]
is almost surely finite. Hence, if we define
\[ Z_t = \lim_{N \to \infty} Z_{N,t \wedge \tau_N}, \]
then $Z_t=Z_{N_t,t}$, almost surely,
and for all $t \geq 0$, $Z_{N_t,t}$ is almost surely well-defined according to the Poissonian construction for finite graphs.
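To illustrate why $N_t$ is almost surely finite under \eqref{nastycondition} (this sketch is ours, not part of the construction), the following simulates an independent branching random walk on the integers, started from one particle at $0$ standing in for $v_0$, with branching rate $r$, nearest-neighbour jump rate $\mu$ and no death (all parameter values are illustrative), and records the first time the population reaches each distance level, which plays the role of $\tau_N$:

```python
import random

random.seed(1)
r, mu, N_MAX = 0.5, 1.0, 5   # branching rate, jump rate, largest level

# Independent branching random walk on the integers, one particle at 0;
# with no death the process survives forever.
counts = {0: 1}              # site -> number of particles
total, t = 1, 0.0
tau = {}                     # tau[N]: first time a particle reaches distance N
steps = 0
while N_MAX not in tau and steps < 200_000:
    steps += 1
    t += random.expovariate((r + mu) * total)
    sites = list(counts)
    v = random.choices(sites, weights=[counts[s] for s in sites])[0]
    if random.random() < r / (r + mu):   # branching: one extra particle at v
        counts[v] += 1
        total += 1
    else:                                # jump to a uniform nearest neighbour
        counts[v] -= 1
        if counts[v] == 0:
            del counts[v]
        u = v + random.choice((-1, 1))
        counts[u] = counts.get(u, 0) + 1
        if 1 <= abs(u) <= N_MAX and abs(u) not in tau:
            tau[abs(u)] = t

hitting_times = [tau[N] for N in sorted(tau)]
```

Since a nearest-neighbour jump changes the distance by one, the levels are crossed in order and the recorded times increase in $N$; finiteness of $N_t$ for fixed $t$ corresponds to the front not yet having passed level $N$ at time $t$.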
Condition \eqref{nastycondition} is difficult to check. Although it should hold in many interesting cases, it is not hard to construct examples in which condition \eqref{nastycondition} fails even though all measures have bounded total mass. It remains an open question to find easily verifiable sufficient conditions under which the results presented in this paper extend to infinite graphs.
Lemma \ref{lem:expectationequality} can be extended to the infinite case under assumption \eqref{nastycondition}.
\begin{corollary}\label{cor:Emonconv}
Assume that for every $N$, the process $(Z_{N,t})_{t \geq 0}$ constructed above fulfills the conditions of Lemma \ref{lem:expectationequality}, and that assumption \eqref{nastycondition} holds. Then, for all $t \geq 0$,
$$
\ensuremath{\mathbb{E}}[Z_t]=\lim_{N\rightarrow \infty} \ensuremath{\mathbb{E}}[Z_{N,t \wedge \tau_N}].
$$
\end{corollary}
\begin{proof}
After observing that for every $t>0$, $Z_{N,t \wedge \tau_N}$ is nondecreasing in $N$, the claim follows by monotone convergence.
\end{proof}
Next, we recall three classical processes that are closely related to each other, two of which belong to our class of processes: an independent one (the branching random walk) and a fully coordinated one (the binary contact path process); the third is the contact process. These processes are usually studied on infinite graphs such as $\mathbb Z^d$ or uniform trees (see~\cite{Liggett} for details).
\begin{example}[Contact process, binary contact path process and branching random walk]
Let $D,R>0$, let $G=(V,E)$ be a (possibly infinite) graph and let $(\bar Z_t)_{t \geq 0}=(Z_t^{(v)})_{v\in V, t\geq 0}$ be the $\mathbb N_0^{|V|}$-valued Markov process with transitions
\begin{equation}
z \mapsto
\begin{cases}
z- z_ve_v, & \textrm{ at rate } D,\, v\in V,\\
z+ z_ve_u, & \textrm{ at rate }R \mathds 1_{(v,u)\in E},\, u,v\in V.
\end{cases}
\end{equation}
It is clear that the process $(\bar Z_t)_{t \geq 0}$ satisfies Condition \eqref{nastycondition}. It is called the \emph{binary contact path process}, and it was first studied in \cite{Griffeath}. Now consider $\bar C_t=(C_t^{(v)})_{v\in V}=(\mathds 1_{Z_t^{(v)}>0})_{v\in V}$, $t\geq 0$, and observe that $(\bar C_t)_{t \geq 0}$ is the contact process on the graph $G$ with parameters $D$ and $R$ (cf.~\cite[Section 2]{BG}). Let further $(\bar N_t)_{t \geq 0}=(N_t^{(v)})_{v\in V,t\geq 0}$ be the $\mathbb N_0^{|V|}$-valued Markov process with transitions
\begin{equation}
z \mapsto
\begin{cases}
z- e_v, & \textrm{ at rate } D z_v,\, v\in V,\\
z+ e_u, & \textrm{ at rate } R z_v \mathds 1_{(v,u)\in E} ,\, u,v\in V.
\end{cases}
\end{equation}
The process $(\bar N_t)_{t \geq 0}=(N_t^{(v)})_{v\in V,t\geq 0}$ is the \emph{branching random walk} associated to the contact process. We observe that by Lemma~\ref{lem:expectationequality},
\[ \ensuremath{\mathbb{E}}_{\bar z}[ Z_t^{(v)}]=\ensuremath{\mathbb{E}}_{\bar z}[N_t^{(v)}], \qquad \forall v \in V, t \geq 0, \bar z \in \mathbb N_0^{|V|}, \numberthis\label{contactexpectation}\]
and thus, $(\bar N_t)_{t \geq 0}$ also satisfies Condition \eqref{nastycondition}.
It is easy to see that if $C_0^{(v)} \leq N_0^{(v)}$ for all $v \in V$, then $C_t^{(v)} \leq N_t^{(v)}$ for all $t \geq 0$ and $v \in V$; in this sense, the contact process can be seen as a branching random walk in which particles at the same site (vertex) coalesce. These assertions can be found in \cite[page 32]{Liggett}. A well-known consequence \cite[page 43]{Liggett} of this comparison is that if all degrees of the graph $G$ are bounded by $d \in \mathbb N$, then $\lim_{t \to \infty} |C_t|=0$ holds almost surely whenever the branching random walk is subcritical, i.e., $dR-D < 0$. Further, in the critical case $dR=D$, since particles of this branching random walk can die with positive probability, the branching random walk and hence also the contact process die out.
\end{example}
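The expectation equality \eqref{contactexpectation} can be illustrated numerically on the smallest nontrivial graph: two vertices joined by one edge, with $D=R=1$ and one initial particle at vertex $0$ (these parameter choices are ours). For both processes the expectations solve $m_0'=-Dm_0+Rm_1$, $m_1'=-Dm_1+Rm_0$, so $\ensuremath{\mathbb{E}}[Z_t^{(0)}]=\mathrm{e}^{-Dt}\cosh(Rt)$; a Monte Carlo sketch:

```python
import math
import random

random.seed(2)
D, R, T, RUNS = 1.0, 1.0, 0.5, 100_000
# Expected mass at vertex 0 for both processes: exp(-D*T) * cosh(R*T).
analytic = math.exp(-D * T) * math.cosh(R * T)

def contact_path_mean():
    """Binary contact path process: whole piles die or are copied."""
    acc = 0.0
    for _ in range(RUNS):
        z, t = [1, 0], 0.0
        while True:
            t += random.expovariate(2 * D + 2 * R)   # constant total rate
            if t > T:
                break
            e = random.randrange(4)
            if e < 2:                  # death of the whole pile at vertex e
                z[e] = 0
            else:                      # copy the pile at vertex e-2 across
                z[3 - e] += z[e - 2]
        acc += z[0]
    return acc / RUNS

def brw_mean():
    """Branching random walk: particles die or give birth independently."""
    acc = 0.0
    for _ in range(RUNS):
        z, t = [1, 0], 0.0
        while z[0] + z[1] > 0:
            t += random.expovariate((D + R) * (z[0] + z[1]))
            if t > T:
                break
            v = 0 if random.random() * (z[0] + z[1]) < z[0] else 1
            if random.random() < D / (D + R):
                z[v] -= 1              # one particle dies
            else:
                z[1 - v] += 1          # one offspring on the neighbour
        acc += z[0]
    return acc / RUNS

m_coord, m_indep = contact_path_mean(), brw_mean()
```

Both sample means agree with each other and with the analytic value, while the underlying dynamics are completely different.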
We now present two applications of the results of the present paper that extend to infinite graphs thanks to Corollary~\ref{cor:Emonconv}. To start with, the proof that we provided for the Feynman--Kac formula (Proposition~\ref{thm:feynmankac}) remains valid for infinite graphs satisfying \eqref{nastycondition}. Using classical results on the PAM, we can provide a more explicit sufficient condition for \eqref{nastycondition} as follows.
\begin{corollary}
Proposition~\ref{thm:feynmankac} remains true for $V=\mathbb Z^d$, $d \in \mathbb N$ (instead of $V=[K]^d$ for $K \in \mathbb N$), in case the moment condition~\eqref{PAMmomentcond} holds.
\end{corollary}
\begin{proof}
As already mentioned, according to \cite[Theorem 1.2]{KPAM}, \eqref{PAM} has a unique solution in $\mathbb Z^d$ given that \eqref{PAMmomentcond} holds. Using Corollary~\ref{cor:Emonconv}, we conclude that $f(t,v)= \ensuremath{\mathbb{E}}[Z_t^{(v)}]$ equals this solution. Finally, thanks to Proposition~\ref{thm:feynmankac} and monotone convergence, this solution must be equal to $f(t,v)=\ensuremath{\mathbb{E}}[\mathrm{e}^{\int_0^t \xi_{Y_s}^+\mathrm{d} s}\mathds 1_{\{Y_t=v\}} \mathds 1_{\{ t < \mathcal T \}}]$. This implies the corollary.
\end{proof}
Finally, as an additional application of the invariance of expectation on infinite graphs, we consider a branching random walk on a $d$-uniform rooted tree, and we provide a probabilistic interpretation of its expectation process in terms of the underlying ``fully coordinated'' process.
\begin{remark}\label{ex:tree}
Let the graph $G=(V,E)$ be a $d$-uniform rooted tree for some $d \in \mathbb N$. That is, there is a distinguished vertex $o$ called the root, which is the only vertex in the 0th generation of the vertex set $V$, vertex generations are pairwise disjoint, and for $n \in \mathbb N_0$, each vertex in generation $n$ is connected by an edge to precisely $d$ vertices in generation $n+1$, so that each vertex in generation $n+1$ has precisely one neighbour from generation $n$. For each $n \in \mathbb N_0$, let us fix an arbitrary indexing $v_{(n,1)},\ldots, v_{(n,d^n)}$ of the vertices of generation $n$; in particular, $v_{(0,1)}=o$. Then we fix $r,\mu>0$ and define a branching random walk, i.e., a particle system $(Z_t)_{t \geq 0}$ on $G$ according to Definition~\ref{def:random_rate} with the following rates: $\Lambda_v = D_v=0$ for all $v \in V$, $R_{vv}=r \delta_0$ for all $v\in V$ and $R_{vu} = 0$ for all $u,v \in V, u \neq v$; further, $M_{vu}=\frac{1}{d} \mu \delta_0$ in case $(v,u) \in E$ and $u$ belongs to the generation one higher than that of $v$, and $M_{vu}=0$ otherwise. In words, particles create an offspring at rate $r$ and jump to a uniformly chosen neighbouring vertex in the next generation at rate $\mu$, independently of all other particles.
Let $(\bar Z_t)_{t \geq 0}$ be the associated fully coordinated process, i.e., the process that exhibits also no coalescence or death, and whose reproduction and migration measures $\bar R_{vu},\bar M_{vu}, u,v\in V$ are obtained by replacing $\delta_0$ with $\delta_1$ in the definition of the corresponding measure $R_{vu}$ respectively $M_{vu}$ of $(Z_t)_{t \geq 0}$. In this process, all particles move simultaneously at rate $\mu$ to one of the neighbours of the present vertex in the next generation, and at rate $r$ they reproduce simultaneously. In order to simplify the notation, we use the notations $Z_t^{(k,i)}$ and $\bar Z_t^{(k,i)}$ instead of $Z_t^{(v_{(k,i)})}$ resp.\ $\bar Z_t^{(v_{(k,i)})}$ for the coordinates of $Z_t$ resp.\ $\bar Z_t$ corresponding to the vertex $v_{(k,i)}$, $k \in \mathbb N_0$, $i=1,\ldots, d^k$. For $k \in \mathbb N$, $j \in \{0,1,\ldots,k-1\}$ and $i=1,\ldots,d^k$, let us write $a(j,k,i)$ for the ancestor of $(k,i)$ in generation $j$, i.e., for the unique vertex of the form $(j,\cdot)$ connected by a path (i.e., sequence of edges) to $(k,i)$. In particular, $a(0,k,i)=o$ for all $i=1,\ldots,d^k$.
Then, it is easy to check that the nonexplosion condition~\mathrm eqref{nastycondition} is satisfied, and hence Corollary~\ref{cor:Emonconv} is applicable. Together with Lemma~\ref{lem:expectationequality}, we obtain that starting from one single particle at $o$ at time zero, for $t\geq 0$, $k \in \mathbb N_0$ and $i=1,\ldots,d^k$ we have
\begin{align*}
\ensuremath{\mathbb{E}}\big[ Z_t^{(k,i)} \big] = \ensuremath{\mathbb{E}}\big[ \bar Z_t^{(k,i)} \big] = \mathrm{e}^{rt} \frac{(t\mu)^k}{k!} \mathrm{e}^{-t\mu}\frac{1}{d^k} = \mathrm{e}^{t(r-\mu)} \Big(\frac{t\mu}{d}\Big)^k \frac{1}{k!}. \numberthis\label{eqexpbranching}
\end{align*}
This yields a probabilistic interpretation for the system of ODEs
\[ \begin{aligned} \frac{\mathrm{d}}{\mathrm{d} t} f(t,(k,i)) &= (r-\mu) f(t,(k,i)) + \frac{\mu}{d} f(t,a(k-1,k,i)), \quad k \in \mathbb N, i=1,\ldots,d^k, \\
\frac{\mathrm{d}}{\mathrm{d} t} f(t,o) & = (r-\mu) f(t,o), \\
f(0,(k,i))&=\delta_{0,k}, \end{aligned} \numberthis\label{reactiondiffusion} \]
where it is straightforward to check that the unique solution $f(t,(k,i))$ equals $\ensuremath{\mathbb{E}}[Z_t^{(k,i)}]$ from \eqref{eqexpbranching}.
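That \eqref{eqexpbranching} satisfies \eqref{reactiondiffusion} is easy to verify by hand; the following sketch double-checks the ODE numerically with a central finite difference (the rate values and sample points are arbitrary test choices):

```python
import math

r, mu, d = 0.8, 1.2, 3   # hypothetical rates and branching degree

def f(t, k):
    # Candidate solution exp(t*(r-mu)) * (t*mu/d)**k / k!
    return math.exp(t * (r - mu)) * (t * mu / d) ** k / math.factorial(k)

# Verify df/dt = (r - mu)*f(t,k) + (mu/d)*f(t,k-1) for k >= 1 by a central
# finite difference at a few sample points.
h = 1e-6
max_err = 0.0
for t in (0.3, 1.0, 2.5):
    for k in (1, 2, 5):
        lhs = (f(t + h, k) - f(t - h, k)) / (2 * h)
        rhs = (r - mu) * f(t, k) + (mu / d) * f(t, k - 1)
        max_err = max(max_err, abs(lhs - rhs))
```

The initial condition $f(0,(k,i))=\delta_{0,k}$ also holds, since the factor $(t\mu/d)^k$ vanishes at $t=0$ for $k\geq 1$.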
Note that the representation \eqref{eqexpbranching} of the solution of the system~\eqref{reactiondiffusion} by a fully coordinated process is in analogy to the one provided for the solution of the PAM in the proof of Proposition~\ref{thm:feynmankac}. Hence, \eqref{eqexpbranching} can also be interpreted as an explicit Feynman--Kac representation.
By Lemma~\ref{lem:expectationequality}, for all structured branching-coalescing processes with coordination on $G$ where all the measures of the form $R_{vv}$ and $M_{vu}$ have the same total mass as for the branching random walk and all other measures are zero, the expectation of the process solves \eqref{reactiondiffusion}.
\end{remark}
\begin{thebibliography}{99}
\bibitem{AK} K. B. Athreya and S. Karlin. On Branching Processes with Random Environments: I: Extinction Probabilities. \emph{Ann. Math. Statist.} \textbf{42:5}, 1499--1520 (1971).
\bibitem{AS} S. R. Athreya and J. M. Swart. Branching-coalescing particle systems. \emph{Probab. Theory Relat. Fields} \textbf{131}, 376--414 (2005).
\bibitem{BCM} V. Bansaye, M.-E. Caballero and S. Méléard. Scaling limits of general population processes -- Wright--Fisher and branching processes in random environment. \emph{Electron. J. Probab.} \textbf{24}, paper no. 19, 38 pp. (2019).
\bibitem{BDLS} A. Blancas, J.-J. Duchamps, A. Lambert and A. Siri-Jégousse. Trees within trees: Simple nested coalescents. \emph{Electron. J. Probab.} \textbf{23}, paper no. 94, 27 pp. (2018).
\bibitem{BRSS} A. B. Benítez, T. Rogers, J. Schweinsberg and A. Siri-Jégousse. The Nested Kingman Coalescent: Speed of Coming Down from Infinity. \emph{Ann. Appl. Probab.} \textbf{29:3}, 1808--1836 (2019).
\bibitem{BG} C. Bezuidenhout and G. R. Grimmett. The critical contact process dies out. \emph{Ann. Probab.} \textbf{18}, 1462--1482 (1990).
\bibitem{BGKW18} J. Blath, A. González Casanova, N. Kurt and M. Wilke-Berenguer. The seed bank coalescent with simultaneous switching. \emph{Electron.~J.~Probab.} \textbf{25}, paper no.~27, 21 pp. (2020).
\bibitem{BB} J. Berestycki and N. Berestycki. Kingman's coalescent and Brownian motion. \emph{ALEA Lat. Am. J. Probab. Math. Stat.} \textbf{6}, 239--259 (2009).
\bibitem{D} J.-J. Duchamps. Trees within trees II: Nested fragmentations. \emph{Ann. Inst. H. Poincaré Probab. Statist.} \textbf{56:2}, 1203--1229 (2020).
\bibitem{Dawson} D. Dawson. Multilevel mutation-selection systems and set-valued duals. \emph{J. Math. Biol.} \textbf{76}, 295--378 (2018).
\bibitem{EK} S. N. Ethier and T. G. Kurtz. \emph{Markov Processes. Characterization and Convergence.} Wiley Series in Probability and Mathematical Statistics. John Wiley \& Sons, Inc., New York (1986).
\bibitem{FLS} F. Foutel-Rodier, A. Lambert and E. Schertzer. Kingman's coalescent with erosion. \emph{Electron. J. Probab.} \textbf{25}, paper no. 56, 1--33 (2020).
\bibitem{GM} J. Gärtner and S. Molchanov. Parabolic problems for the Anderson model I. Intermittency and related topics. \emph{Commun. Math. Phys.} \textbf{132}, 613--655 (1990).
\bibitem{GCPP} A. González Casanova, J. C. Pardo and J. L. Perez. Branching processes with interactions: the subcritical cooperative regime. \emph{arXiv:1704.04203} (2017).
\bibitem{GCSW} A. Gonz\'alez Casanova, D. Span\`o and M. Wilke-Berenguer. The effective strength of selection in random environment. \emph{arXiv:1903.12121} (2019).
\bibitem{GdHO20} A. Greven, F. den Hollander and M. Oomen. Spatial populations with seed-bank: well-posedness, duality and equilibrium. \emph{arXiv:2004.14137} (2020).
\bibitem{Griffeath} D. Griffeath. The binary contact path process. \emph{Ann. Probab.} \textbf{11:3}, 692--705 (1983).
\bibitem{HP} F. Hermann and P. Pfaffelhuber. Markov branching processes with disasters: extinction, survival and duality to p-jump processes. \emph{Stoch. Process. Their Appl.} \textbf{130:4}, 2488--2518 (2020).
\bibitem{JK} S. Jansen and N. Kurt. On the notion(s) of duality for Markov processes. \emph{Probab. Surv.} \textbf{11}, 59--120 (2014).
\bibitem{Kimura} M. Kimura. ``Stepping stone'' model of population. \emph{Ann. Rep. Nat. Inst. Genet. Japan} \textbf{3}, 62--63 (1953).
\bibitem{Kingman2} J. F. C. Kingman. On the genealogy of large populations. \emph{J. Appl. Probab.} \textbf{19} (\emph{Essays in Statistical Science}), 27--43 (1982).
\bibitem{K} T. G. Kurtz. The Yamada-Watanabe-Engelbert theorem for general stochastic equations and inequalities. \emph{Electron. J. Probab.} \textbf{12}, paper no. 33, 951--965 (2007).
\bibitem{K1} T. G. Kurtz. Weak and strong solutions of general stochastic models. \emph{Electron. Commun. Probab.} \textbf{19}, paper no. 58, 16 pp. (2014).
\bibitem{KPAM} W. König. \emph{The Parabolic Anderson Model. Random Walk in Random Potential}. Birkhäuser (2016).
\bibitem{LM} A. Lambert and C. Ma. The coalescent in peripatric metapopulations. \emph{J. Appl. Probab.} \textbf{52:2}, 538--557 (2015).
\bibitem{LS} A. Lambert and E. Schertzer. Coagulation-transport equations and the nested coalescents. \emph{Probab. Theory Relat. Fields} \textbf{176}, 77--147 (2020).
\bibitem{Liggett} T. M. Liggett. \emph{Stochastic Interacting Systems: Contact, Voter and Exclusion Processes}. Grundlehren der Mathematischen Wissenschaften (Fundamental Principles of Mathematical Sciences), volume 324, Springer (1999).
\bibitem{LSt} V. Limic and A. Sturm. The spatial $\Lambda$-coalescent. \emph{Electron. J. Probab.} \textbf{11}, 363--393 (2006).
\bibitem{MS} M. M\"ohle and S. Sagitov. A classification of coalescent processes for haploid exchangeable population models. \emph{Ann. Probab.} \textbf{29:4}, 1547--1562 (2001).
\bibitem{pitman} J. Pitman. Coalescents with multiple collisions. \emph{Ann. Probab.} \textbf{27:4}, 1870--1902 (1999).
\bibitem{BLP} M. Barczy, Z. Li and G. Pap. Yamada-Watanabe results for stochastic differential equations with jumps. \emph{International Journal of Stochastic Analysis} \textbf{2015}, Article ID 460472, 23 pp. (2015).
\bibitem{OR1} M. Ortgiese and M. I. Roberts. Scaling limit and ageing for branching random walk in Pareto environment. \emph{Ann. Inst. H. Poincaré Probab. Statist.} \textbf{54:3}, 1291--1313 (2018).
\bibitem{OR2} M. Ortgiese and M. I. Roberts. Intermittency for branching random walk in Pareto environment. \emph{Ann. Probab.} \textbf{44:3}, 2198--2263 (2016).
\bibitem{OR3} M. Ortgiese and M. I. Roberts. One-point localization for branching random walk in Pareto environment. \emph{Electron. J. Probab.} \textbf{22}, paper no. 6, 20 pp. (2017).
\bibitem{S} J. Schweinsberg. A necessary and sufficient condition for the $\Lambda$-coalescent to come down from infinity. \emph{Electron. Commun. Probab.} \textbf{5}, paper no.~1, 1--11 (2000).
\bibitem{T} H. Trotter. On the product of semigroups of operators. \emph{Proc. Amer. Math. Soc.} \textbf{10}, 545--551 (1959).
\bibitem{xi2017jump} F. Xi and C. Zhu. Jump type stochastic differential equations with non-Lipschitz coefficients: Non confluence, Feller and strong Feller properties, and exponential ergodicity. \emph{Journal of Differential Equations} \textbf{266:8}, 4668--4711 (2019).
\end{thebibliography}
\end{document}
\begin{document}
\marktrue
\title{Toward Scalable and Unified Example-based Explanation and Outlier Detection}
\author{Penny~Chong,
Ngai-Man Cheung,
Yuval~Elovici,
and~Alexander~Binder
\thanks{P. Chong and N.-M. Cheung are with the Information Systems Technology and Design Pillar, Singapore University of Technology and Design, 487372, Singapore. (e-mail: penny\[email protected]; ngaiman\[email protected])}
\thanks{Y. Elovici is with the Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva, 84105, Israel. (email: [email protected])}
\thanks{A. Binder is with the Department of Informatics, University of Oslo, 0373 Oslo, Norway. (email: [email protected])}
}
\maketitle
\begin{abstract}
When neural networks are employed for high-stakes decision-making, it is desirable that they provide explanations for their predictions so that we can understand the features that contributed to a decision. At the same time, it is important to flag potential outliers for in-depth verification by domain experts.
In this work we propose to unify two differing aspects of explainability with outlier detection.
We argue for a broader adoption of prototype-based student networks capable of providing an example-based explanation for their prediction and at the same time identify regions of similarity between the predicted sample and the examples. The examples are real prototypical cases sampled from the training set via our novel iterative prototype replacement algorithm.
Furthermore, we propose to use the prototype similarity scores for identifying outliers.
We compare the performance of our proposed network with other baselines in terms of classification, explanation quality, and outlier detection. We show that our prototype-based networks, which go beyond similarity kernels, deliver meaningful explanations and promising outlier detection results without compromising classification accuracy.
\end{abstract}
\begin{IEEEkeywords}
prototypes, explainability, LRP, outlier detection, pruning, image classification.
\end{IEEEkeywords}
\IEEEpeerreviewmaketitle
\section{Introduction}
\IEEEPARstart{D}{eep} neural networks (DNNs) are widely used in various fields because of their superior performance. Explainable AI, which aims at transparency of predictions \cite{simonyan2013deep,zeiler2014visualizing,montavon2019layer}, has gained research attention in recent years.
Since humans are known to generalize quickly from a few examples, it is natural to explain the prediction for a test sample via a set of similar examples.
Such an example-based explanation can be achieved by a similarity search over an available dataset, using a metric defined by the feature maps of the trained model: one searches for the nearest candidates to the test sample in feature map space.
This yields an informative visualization of the feature embedding learned by the model, yet the similar examples found do not participate in the prediction of the test sample. Even if these similar examples share the same prediction with the test sample and are close in feature space, it is not obvious to what \textit{quantifiable extent} the model shares the same reasoning between the examples and the test sample, or which parts of the test sample and its nearest neighbors share the same features.
An alternative to achieve such explanation by examples is to employ kernel-based predictors \cite{cortes1995support}.
These methods compute a weighted sum of similarities between the test and the training samples, and thus the impact of each training sample on the prediction of the test sample is naturally quantifiable.
Although neural networks perform very well these days without the need for kernel-based setups, one may consider training a prototype-based student network from an arbitrarily structured teacher network to provide an example-based explanation\footnote{The teacher network is a typical convolutional neural network (CNN) for image classification tasks. The soft labels obtained from the teacher will be used for student-teacher learning to train a prototype-based student network.}. The prototypes in the student network participate in the network prediction, which resembles the participation of training samples in kernel-based predictions.
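To make the analogy with dual-form kernel predictors concrete, the following minimal sketch (our illustration, not the paper's exact head architecture; the Gaussian kernel, dimensions, bandwidth, and weights are assumed stand-ins) computes class logits as importance-weighted similarities between an embedded input and stored prototype embeddings, so each prototype's contribution to the prediction is directly quantifiable:

```python
import numpy as np

rng = np.random.default_rng(0)
n_proto, dim, n_class = 6, 8, 3

# Prototype embeddings would be feature vectors of real training examples
# selected by some replacement scheme (here: random stand-ins).
prototypes = rng.normal(size=(n_proto, dim))
weights = rng.normal(size=(n_proto, n_class))   # learned importance weights
gamma = 0.5                                     # RBF bandwidth (assumed)

def prototype_head(x):
    """Similarities to all prototypes, then weighted class logits."""
    sq_dist = ((prototypes - x) ** 2).sum(axis=1)
    sims = np.exp(-gamma * sq_dist)             # in (0, 1], 1 = identical
    return sims, sims @ weights                 # per-prototype evidence, logits

x = rng.normal(size=dim)
sims, logits = prototype_head(x)
top2 = np.argsort(sims)[::-1][:2]               # the 2 most similar prototypes
```

The per-prototype similarities double as the example-based explanation, and the same quantities can later be reused for outlier scoring.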
In this work we propose to unify two aspects of explanation with outlier detection, independent of the structure of the original model, by employing a prototype-based student network to address the three goals.
The explanation for predictions is given by the network in these two aspects:
i) the top-$k$ most similar prototypical examples and
ii) regions of similarity between the prediction sample and the top-$k$ prototypes.
Contrasting previous prototype learning approaches \cite{yang2018robust,li2017deep,ming2019interpretable,chen2019looks,hase2019interpretable,rymarczyk2020protopshare,nauta2020neural}, we refrain from directly training prototype vectors in the latent space to avoid reconstruction errors when visualizing the learned prototype vectors in the input space.
In lieu of this, we introduce an auxiliary output branch in the network for iterative prototype replacement, along the lines of prototype selection methods \cite{gurumoorthy2019efficient, kim2016examples, arik2020protoattend}, to select representative training examples for prediction, inspired by dual kernel support vector machines (SVMs) \cite{cortes1995support}. Our method avoids the need to map or decode prototype vectors and guarantees an explanation in terms of real examples present in the training distribution. The prototype importance weights learned via the prototype replacement auxiliary loss also enable the pruning of uninformative prototypes.
With the proposed approach, it is imperative to consider the question of performance tradeoff when converting a standard CNN teacher network into a prototype-based student network. We quantify the performance of the predictors in the following three aspects: the prediction accuracy, the quality of explanation of the student network compared to the teacher network, and finally the competence of each predictor in quantifying the outlierness of a given sample. The first two aspects are the natural tradeoffs to be considered when training any surrogate model, while the last aspect measures the outlier sensitivity of predictors in high-stakes environments.
It is necessary for predictors deployed in high-stakes settings to be equipped with the capability to flag inputs that are either anomalous or poorly represented by the training set for additional validation by experts. Prototype-based approaches deliver this property naturally by the sequence of sorted similarities between the test sample and the prototypes even though they are not trained primarily for outlier detection. The following summarizes our contribution in this work:
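The outlier-flagging idea can be sketched as follows (a hypothetical calibration for illustration, not the procedure evaluated later in the paper): score each sample by its mean top-$k$ prototype similarity and flag samples whose score falls below a low quantile of the training scores:

```python
import numpy as np

rng = np.random.default_rng(1)
protos = rng.normal(size=(20, 8))   # stand-in prototype embeddings

def topk_similarity(x, k=5, gamma=0.5):
    """Mean of the k largest RBF similarities between x and the prototypes."""
    sims = np.exp(-gamma * ((protos - x) ** 2).sum(axis=1))
    return float(np.sort(sims)[-k:].mean())

# Calibrate a threshold on stand-in training embeddings: anything scoring
# below the 5th percentile of training scores is flagged for expert review.
train_scores = [topk_similarity(e) for e in rng.normal(size=(500, 8))]
threshold = float(np.quantile(train_scores, 0.05))

inlier_score = topk_similarity(protos[0])            # a prototype itself
outlier_score = topk_similarity(np.full(8, 10.0))    # far from all prototypes
```

Because the threshold is a quantile of scores the model already produces on training data, no separate outlier detector or extra hyperparameter search on test data is needed.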
\begin{enumerate}
\item We argue for unified interpretable models capable of explaining and detecting outliers with prototypical examples.
We demonstrate that a student network based on prototypes is capable of performing two types of explanation tasks and one outlier task simultaneously. We select prototypes from the training set that guarantee that the explanation based on examples comes from the true data distribution.
The prototype network provides two layers of explanation, first by identifying the top-$k$ nearest prototypical examples, and second, by showing the pixel evidence of similarity between each of these examples and the test sample. The latter is achieved by backpropagating scores from similarity layers using the Layer-wise Relevance Propagation (LRP) approach for explainability.
\item We introduce a novel iterative prototype replacement algorithm relying on training with masks and a widely reusable auxiliary loss term. We demonstrate that this prototype replacement approach scales to larger datasets such as LSUN and PCam.
\item We propose various generic head architectures that compute prototype-based predictions from a generic CNN feature extractor and evaluate their prediction accuracy, explanation quality, and outlier detection performance.
\item We revisit a method to quantify the degree of outlierness for a sample
based on the sorted prototype similarity scores.
By doing so, we avoid the complex issue of hyperparameter selection in unsupervised outlier detection methods without resorting to problematic tuning of test sets. A smaller contribution lies in the derivation of the prototype similarity scores depending on the network architectures.
\end{enumerate}
\section{Related Work}
\textbf{Prototype Learning}
One of the earliest works in prototype learning is the k-nearest neighbor (k-NN) classifier \cite{altman1992introduction}.
In recent years, researchers have proposed combining DNNs with prototype-based classifiers. The work \cite{yang2018robust} proposed a framework known as convolutional prototype learning (CPL), where the prototype vectors are optimized to encourage representations that are intraclass compact and interclass separable. Subsequently, interpretable prototype networks \cite{li2017deep,ming2019interpretable} were proposed to provide explanations based on similar prototypes for image and sequence classification tasks. The former used a decoder, while the latter projected each prototype to its closest embedding from the training set. Another work \cite{chen2019looks} introduced a prototypical part network (ProtoPNet) that is able to present prototypical cases that are similar to the parts the network looks at. These explanations are used during classification and not created post hoc. HPnet \cite{hase2019interpretable} classifies objects using hierarchically organized prototypes that are predefined in a taxonomy and is a generalization of ProtoPNet. Explanation-by-examples such as the use of prototypical examples is often preferred over superimposition explanations \cite{chattopadhay2018grad,simonyan2013deep,lundberg2017unified} for nontechnical end-users \cite{jeyakumar2020can}. Further works in this domain include
\cite{rymarczyk2020protopshare,nauta2020neural}.
In contrast to the aforementioned methods, we select prototypes directly from the training set and do not use trainable prototype vectors. Related works on prototype selection are \cite{gurumoorthy2019efficient, kim2016examples}, which involve mining representative samples such as prototypes and criticism samples to provide an optimally compact representation of the underlying data distribution.
Our work is related to ProtoAttend \cite{arik2020protoattend}, which selects input-dependent prototypes via an attention mechanism applied between the input and the database of prototype candidates. A notable difference between our work and ProtoAttend is that our prototypes are global prototypes (fixed after network convergence), whereas they learn input-dependent prototypes that require a sufficiently large prototype candidate database and high memory consumption during training to obtain good performance \cite{arik2020protoattend}.
In terms of explanation, our method is related to \cite{chen2019looks,hase2019interpretable,rymarczyk2020protopshare,nauta2020neural}. These methods explicitly focused on a patchwise representation of prototypes to deliver prototypical part explanation, whereas in our approach, a prototype represents the entire sample and we present a more fine-grained explanation in terms of pixel similarity between the prototype and test sample to explain their respective contribution to the membership of the predicted class using a post hoc explainability method \cite{montavon2019layer}.
\textbf{Outlier Detection.}
We revisit the idea of quantifying outliers
using statistics from the set of top-$k$ similarity scores. For this, k-NN-based distance methods have been frequently considered in the form of summed distances \cite{10.1007/3-540-45681-3_2}, local outlier factors \cite{10.1145/342009.335388,DBLP:conf/mldm/LateckiLP07}, combinations of distance and density estimates \cite{kim2006prototype}, or kernel-based statistics \cite{10.1007/978-3-642-13062-5_16}, and such methods are still of recent interest \cite{NIPS2019_9274} due to favorable theoretical properties. Unlike these works, we are interested in the performance of similarities that do not define a kernel, but rather emerge due to attention-based operations on feature maps.
A simple baseline for out-of-distribution detection is the maximum softmax probability method \cite{hendrycks2016baseline} which assumes that erroneously classified samples or outliers exhibit lower probability scores for the predicted class than the correctly classified example. Several other works related to the detection of out-of-distribution samples are \cite{liang2017enhancing,hsu2020generalized,hendrycks2018deep}. Generalized ODIN \cite{hsu2020generalized} improved upon the ODIN \cite{liang2017enhancing} method, which is based on temperature scaling and adding small perturbations, by proposing to decompose confidence scores and a modified input preprocessing method that eliminates the need to tune hyperparameters with out-of-distribution samples.
Our formulation of the outlier score using the sum of prototype similarity scores is based on the idea in \cite{10.1007/978-3-642-13062-5_16} and is also the generalization of the confidence score in ProtoAttend \cite{arik2020protoattend}.
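The summed-similarity idea can be illustrated with a minimal numpy sketch, assuming prototype similarities in $[0,1]$ and using the negated sum of the $k$ largest similarities as the outlier score (the exact score used in this work is derived later from the head architectures; the toy similarity vectors below are illustrative):

```python
import numpy as np

def outlier_score(similarities, k=5):
    """Outlier score as the negated sum of the top-k prototype
    similarities: low total similarity -> high outlier score."""
    top_k = np.sort(similarities)[-k:]  # k largest similarities
    return -float(np.sum(top_k))

# a sample close to several prototypes scores lower (less outlying)
inlier = outlier_score(np.array([0.9, 0.8, 0.85, 0.1, 0.2]), k=3)
outlier = outlier_score(np.array([0.2, 0.1, 0.15, 0.05, 0.1]), k=3)
```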
\section{Methodology}
In a practical development task, many researchers prefer to start with a lower-complexity baseline. As of 2021, this baseline for image classification is still, for many practitioners, a conventional CNN, since most have experience training CNNs. Once a baseline is trained, a teacher is available to provide soft labels for free. Furthermore, it is anecdotally reported that training with soft labels outperforms training with hard labels \cite{hinton2015distilling}; thus, we find it natural to adopt teacher-student training in this work. We employ teacher-student learning with soft labels for our proposed prototype-based network to transfer knowledge from a standard teacher CNN to a prototype-based student network and thereby retain the predictive performance of the teacher.
In this section, we first introduce our prototype network architectures, the iterative prototype replacement algorithm, and the learning objectives. We also present the explainability methods for the network and the formulation of the outlier score with respect to the student architectures.
\begin{figure}
\caption{Taxonomy of the different categories of head architectures proposed for the student network.
}
\label{taxanomy}
\end{figure}
Our proposed architecture for the student network is inspired by the dual formulation of SVMs \cite{cortes1995support}, which solves the optimization problem
$
\max_{\alpha} \sum^{N}_{i=1}\alpha_i - \frac{1}{2}\sum^{N}_{j=1} \sum^{N}_{k=1} \alpha_j \alpha_k y_j y_k K(\vec{x_j}, \vec{x_k}),
$
subject to the constraints $0\leq\alpha_i\leq C \ \forall i$ and $\sum^{N}_{i=1} \alpha_i y_i=0$.
For a binary problem, the dual classifier takes the form $f(\vec{x})=\sum^{N}_{i=1}\alpha_i y_i K(\vec{x_i},\vec{x})+ b$,
where $K(\vec{x_i},\vec{x})$ is a positive definite kernel. The solution is based on the set of similarities $\{K(\vec{x_i},\vec{x}) \ | \ 1 \leq i \leq N \}$ between the training samples and the test sample $\vec{x}$, which can be used to compute statistics that quantify outliers \cite{10.1007/978-3-642-13062-5_16}. We employ architectures based on attention-weighted similarities that go beyond Mercer kernels.
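As a concrete illustration of the dual form, a minimal numpy sketch of the decision function $f(\vec{x})=\sum_i \alpha_i y_i K(\vec{x_i},\vec{x})+b$ with an RBF kernel (the toy data, $\alpha$ values, and $\gamma$ below are illustrative assumptions, not learned):

```python
import numpy as np

def rbf_kernel(x1, x2, gamma=1.0):
    # K(x1, x2) = exp(-gamma * ||x1 - x2||^2), a positive definite kernel
    return float(np.exp(-gamma * np.sum((x1 - x2) ** 2)))

def dual_svm_decision(x, X_train, y_train, alphas, b, gamma=1.0):
    """f(x) = sum_i alpha_i y_i K(x_i, x) + b (dual form).
    Also returns the similarity vector used for outlier statistics."""
    sims = np.array([rbf_kernel(xi, x, gamma) for xi in X_train])
    return float(np.dot(alphas * y_train, sims) + b), sims

X = np.array([[0.0, 0.0], [1.0, 1.0]])
y = np.array([1.0, -1.0])
alphas = np.array([0.5, 0.5])
f, sims = dual_svm_decision(np.array([0.1, 0.0]), X, y, alphas, b=0.0)
# the test point lies near the positive-class support vector, so f > 0
```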
\begin{figure*}
\caption{Head I: Similarity using features after spatial pooling. The $y_{mask}$ denotes the auxiliary output for prototype replacement.}
\label{head1}
\caption{Head II: Similarity using features before spatial pooling. The $y_{mask}$ denotes the auxiliary output for prototype replacement.}
\label{head2}
\caption{Head III: Similarity with attention using features before spatial pooling. The $y_{mask}$ denotes the auxiliary output for prototype replacement.}
\label{head3}
\end{figure*}
\subsection{Proposed Architecture for the Student Network}\label{Network Architecture}
For the purpose of providing meaningful explanations and detecting outliers, we introduce three generic head architectures\footnote{The term head architecture refers to the architecture used in the prototype network which is the portion of the network located after the CNN encoder. Figures \ref{head1}-\ref{head3} illustrate this.} for our prototype student network. Each architecture is evaluated for its performance in providing explanation and detecting outliers in Section \ref{Results}. For simplicity, we assume in this section that the prototypical examples are already selected based on our iterative prototype replacement algorithm. We will introduce the algorithm in Section \ref{Prototype Selection} later.
Let $f(\cdot\ ;\theta_1):\mathbb{R}^{C_0\times H_0 \times W_0} \rightarrow \mathbb{R}^{C\times H \times W}$ be the CNN feature extractor before the spatial pooling layer with the learnable parameters $\theta_1$.
Let $e(\cdot\ ;\theta_2):\mathbb{R}^{C\times H \times W} \rightarrow \mathbb{R}^{\tilde{C}} $ be the prototype network with the learnable parameters $\theta_2$, and let $\tilde{C}$ be the number of classes in the classification task. The overall student network is represented by $e \ \circ f$. For simplicity, we omit the parameters $\theta_1$ and $\theta_2$ and denote the feature map representation of the input sample $x$ as $f(x)$ in this section.
To make a prediction, we have to solve the problem of computing a similarity between the feature map $f(x)$ for a test sample $x$ and the feature $f(p)$ for a prototype $p$. The feature maps $f_{c,h,w}(\cdot)$ have a channel dimension $c$ and spatial dimensions, usually the two $h,w$. We observe that three larger categories of similarities can be defined, which correspond to the head classes \textbf{I}, \textbf{II} and \textbf{III}. The \textbf{Head I} follows the simplest idea to compute a similarity without any spatial dimensions; thus, it pools over the spatial dimensions before computing an inner product over the channel dimension. The \textbf{Head II} class computes similarities that incorporate the spatial dimensions. The difference between the two heads in the Head II class is whether one matches each spatial coordinate $(h,w)$ in the test sample to the same coordinate in the prototype as in Head II-A or whether one searches for every coordinate in the test sample for the best matching coordinate in the prototype using a maximum-operation as in Head II-B. Finally, the \textbf{Head III} class also uses the spatial dimensions of feature maps but additionally learns an attention weight over the spatial coordinates of the features in the test sample. This serves to weight the spatial regions in the test samples in the final similarity measures, whereas the \textbf{Head II} class treats each spatial coordinate $(h,w)$ of the test sample feature map with equal weight.
Figure \ref{taxanomy} shows the taxonomy of the various head architectures for the student network, and Figures \ref{head1}-\ref{head3} visualize them. Our task differs from standard image retrieval: we use prototypes from the training set that participate in the prediction and thus share quantifiable prediction reasoning with the test sample, whereas image retrieval performs an exhaustive nearest-neighbor search over a database that does not participate in the prediction and thus offers no guarantee of similar prediction reasoning between the test sample and its nearest neighbor.
Our proposed head architectures do not require any modification to the CNN encoder compared to \cite{arik2020protoattend}, which requires additional linear layers for queries and keys that add to the overhead during training.
The \textbf{Head I} architecture is based on a spatial average pooling of the feature maps $f(\cdot)$, resulting in a vector $g(\cdot)$ that consists of only $C$ feature channels. This is followed by a cosine similarity over the feature channels $g(\cdot)$, as seen in Figure \ref{head1}:
\begin{equation}
s^{\text{(I)}}(p_k)=\frac{g(x)^\top g(p_k)}{\|g(x)\| \|g(p_k)\|}.
\end{equation}
The similarity output $s^{\text{(I)}}(p_k)$ falls in the range of $[-1,1]$ where a higher value indicates higher similarity between the input sample $x$ and the prototype $p_k$.
The linear classification layer
$h(x,\{p_1,p_2,...,p_K\})= \sum^{K}_{k=1} w_k \,\mathrm{ReLU}(s^{\text{(I)}}(p_k))+b$
then predicts the output logit $y$. For this head architecture, the learnable prototype network parameters $\theta_2$ denote the parameters of the linear layer $h(\cdot)$.
The linear layer $h(\cdot)$ in Head I is consistent with the formulation of the dual kernel SVM.
The linear layers $h(\cdot)$ in Heads II and III as shown in Figures \ref{head2} and \ref{head3} will have similar dependency on a set of prototypes $\{p_1,p_2,...,p_K\}$, but they no longer correspond to kernels.
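A minimal numpy sketch of the Head I similarity (spatial average pooling $g(\cdot)$ followed by cosine similarity over channels); the feature-map shape and random inputs are illustrative, not the trained network:

```python
import numpy as np

def head_one_similarity(fx, fp):
    """Head I: spatial average pooling followed by cosine similarity.
    fx, fp: feature maps of shape (C, H, W)."""
    gx = fx.mean(axis=(1, 2))  # g(x), shape (C,)
    gp = fp.mean(axis=(1, 2))  # g(p_k), shape (C,)
    return float(gx @ gp / (np.linalg.norm(gx) * np.linalg.norm(gp)))

rng = np.random.default_rng(0)
fmap = rng.normal(size=(8, 4, 4))
s_same = head_one_similarity(fmap, fmap)  # identical maps -> 1.0
s_other = head_one_similarity(fmap, rng.normal(size=(8, 4, 4)))
```

The output lies in $[-1,1]$, matching the range stated for $s^{\text{(I)}}(p_k)$.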
\begin{algorithm*}[!ht]
\caption{Iterative Prototype Replacement Algorithm}
\label{alg:protoselect}
\renewcommand{\thealgorithm}{}
\floatname{algorithm}{}
\begin{algorithmic}[1]
\State \textbf{Initialization}: Let $S=D\cup P$ be the original training set with $n(S)=n(D)+n(P)=N+K$, where $D$ is the new training set and $P=\{p_1,...,p_k,...,p_K \}$ is the set of all prototypes.
The initial set $P$ consists of an equal number of samples per class, randomly sampled from the original training set $S$.
Let $\theta_1$ and $\theta_2$ be the parameters of the CNN feature extractor and the prototype network, respectively, and $\mathcal{M} \in \mathbb{R}^K$ be the prototype importance weight used for replacement.
\While{$e\leq$ number of epochs}
\While{ $t \leq$ number of iterations}
\State Perform a forward pass to obtain input features $f(x)$, set of prototype features $\{f(p_k) \}$, corresponding input features \indent \ \ \ $\{z(p_k)\}$ of the linear classification layer $h(\cdot)$, and the output logits $y$. (For Head I, we compute $g(x)$ and $\{g(p_k) \}$ \indent \ \ \ instead of $f(x)$ and $\{f(p_k) \}$.)
\State Compute threshold $\tau_t$ as the $p$-th smallest weight in $\mathcal{M}$.
\State Compute binary prototype mask $\mathcal{M}'(\tau_t)$ from $\mathcal{M}$ as shown in Eq. \eqref{mask_eqn}.
\State Perform a forward pass in $h(\cdot)$ using the masked input to obtain the auxiliary output $y_{mask}$ as shown in Eq. \eqref{masked_output_eqn}.
\State Optimize parameters $\theta_1,\theta_2$ and $ \mathcal{M}$ based on the loss function \eqref{total_loss}.
\EndWhile
\State $P\gets P\setminus P_{replace} \cup D_{rand}$, where $P_{replace}$ is the set of $p$ prototypes with the smallest weight in $\mathcal{M}$, and $D_{rand}$ is the \hspace*{4mm} set of random samples comprising the replaced prototype classes sampled from $D$ to replace $P_{replace}$.
\State $D\gets D\setminus D_{rand} \cup P_{replace}$
\State Reinitialize only the weights in $\mathcal{M}$ where prototypes are replaced.
\EndWhile
\end{algorithmic}\end{algorithm*}
The \textbf{Head II} architecture in Figure \ref{head2} uses the features before spatial pooling to compute similarity scores. Therefore, the similarity layer computes $s^{\text{(II)}}(p_k) \in \mathbb{R}^{H\times W}$, i.e.,~one similarity score for each spatial position $(h,w)$. We explore two types of similarity functions for this head architecture: A) similarity between the input and prototype at the same spatial position $(h,w)$ and
B) the maximum similarity between input and prototype at a given \emph{input spatial location} $(h,w)$.
We refer to the former as Head II-A and the latter as Head II-B. The similarity operation for Head II-A (Eq.\eqref{Head II-A}) and Head II-B (Eq.\eqref{Head II-B}) at the spatial location $(h,w)$ is defined as:
\begin{align}
\label{Head II-A}
s^{\text{(II-A)}}_{h,w}(p_k)&=\sum_c \hat{f}_{c,h,w}(x)\cdot \hat{f}_{c,h,w}(p_k),\\
\label{Head II-B}
s^{\text{(II-B)}}_{h,w}(p_k)&=\max_{h',w'} \sum_c \hat{f}_{c,h,w}(x) \cdot \hat{f}_{c,h',w'}(p_k),\\
\hat{f}_{c,h,w}(\cdot)&=\frac{f_{c,h,w}(\cdot)}{\|f_{\cdot,h,w}(\cdot)\|_2} = \frac{f_{c,h,w}(\cdot)}{\sqrt{\sum_c [f_{c,h,w}(\cdot)]^2}},
\end{align}
where $f_{c,h,w}(x)$ and $f_{c,h,w}(p_k)$ are the $c$-th features at the spatial location $(h,w)$ from the feature maps $f(x)$ and $f(p_k)$, respectively. Head II-A compares the pairwise similarity of the spatial features between $f(x)$ and $f(p_k)$ along the same spatial location, whereas Head II-B considers all spatial locations of $f(p_k)$ to find the location that is most similar to the specified spatial location in $f(x)$. Therefore, Eq. \eqref{Head II-B} ensures that the similarity at the spatial location $(h,w)$ is computed only between the location $(h,w)$ in $x$ and the most relevant location $(h',w')$ in $p_k$, i.e., the pair that locally gives the highest cosine similarity score. Its similarity operation is spatially invariant, as it considers all spatial locations of prototype $p_k$ in identifying the most similar feature to the input $x$.
Head II-A (Eq. \eqref{Head II-A}) can be viewed as a special case of Head II-B (Eq. \eqref{Head II-B}). The operation of Head II-B reduces to Head II-A when the maximum similarity between input and prototype for a given \emph{input spatial location} $(h,w)$, also occurs at the prototype location $(h,w)$. Thus, the Head II-B architecture is considered the general case of Head II-A.
For this category of architectures, the learnable prototype network parameters $\theta_2$ denote the parameters of the linear layer $h(\cdot)$.
Note that Head II-B, as a maximum over inner products, no longer defines a kernel, as the maximum of two positive definite matrices is not guaranteed to be positive definite. Thus, we employ a nonkernel-based similarity.
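The two Head II variants can be sketched in numpy as follows; the einsum-based implementation and the toy shapes are illustrative, not the exact training code. Note that II-B upper-bounds II-A, since the maximum over all prototype locations includes the same-location match:

```python
import numpy as np

def normalize(f):
    # L2-normalize each spatial column over the channel axis (f-hat)
    return f / np.linalg.norm(f, axis=0, keepdims=True)

def head_two_similarities(fx, fp):
    """Head II-A/II-B similarity maps of shape (H, W) for feature
    maps fx, fp of shape (C, H, W)."""
    fx_n, fp_n = normalize(fx), normalize(fp)
    C, H, W = fx.shape
    # II-A: inner product at the same spatial location (h, w)
    s_a = np.einsum('chw,chw->hw', fx_n, fp_n)
    # II-B: best-matching prototype location for every input location
    all_pairs = np.einsum('chw,cuv->hwuv', fx_n, fp_n)
    s_b = all_pairs.reshape(H, W, -1).max(axis=-1)
    return s_a, s_b

rng = np.random.default_rng(1)
fx, fp = rng.normal(size=(8, 3, 3)), rng.normal(size=(8, 3, 3))
s_a, s_b = head_two_similarities(fx, fp)
```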
The \textbf{Head III} architecture employs a common attention mechanism to compute weighted similarities. For the Head III architecture as shown in Figure \ref{head3}, the similarity scores from Eq. \eqref{Head II-A} and \eqref{Head II-B} are passed into a softmax $\sigma(\cdot)$ to obtain the following attention maps:
\begin{align}
a^{\text{(III-A)}}_{h,w}(p_k) &= \sigma(s^{\text{(II-A)}}_{h,w}(p_k)),\\
\label{attention_headIII-B}
a^{\text{(III-B)}}_{h,w}(p_k) &= \sigma(s^{\text{(II-B)}}_{h,w}(p_k)).
\end{align}
Head III-A and Head III-B are the attention-augmented counterparts of Head II-A and Head II-B, respectively:
\begin{align}
\label{Head III-A}
s^{\text{(III-A)}}_{c}(p_k)&=\sum_{h,w} a^{\text{(III-A)}}_{h,w}(p_k)\cdot f_{c,h,w}(x)\cdot f_{c,h,w}(p_k),
\end{align}
\begin{align}
\label{Head III-B} s^{\text{(III-B)}}_{c}(p_k)&=\sum_{h,w} a^{\text{(III-B)}}_{h,w}(p_k)\cdot f_{c,h,w}(x)\cdot f_{c,h_{\max},w_{\max}}(p_k).
\end{align}
The indices $h_{\max}$ and $w_{\max}$ in Eq. \eqref{Head III-B} are the selected $h'$ and $w'$ indices used in the computation of $s^{\text{(II-B)}}_{h,w}(p_k)$.
The attention map $a(p_k) \in \mathbb{R}^{H\times W}$ is applied over the spatial features $f(x) \in \mathbb{R}^{C\times H \times W}$ and $f(p_k) \in \mathbb{R}^{C\times H \times W}$ to give more weight to the important regions (i.e., spatial regions between $f(x)$ and $f(p_k)$ with high similarity) for the neural network to learn and focus on these regions.
The output of the attended similarity layer, which represents a weighted spatial similarity for every channel,
is passed into a convolutional-1D layer with kernel size $C$ to compute a weighted sum of the $C$ features. Since $C \gg H\times W$, we avoid simple average spatial pooling over the $C$ features in order to obtain better explanation heatmaps.
The convolutional-1D layer has a minimum weight clipping at zero to ensure the outputs are always nonnegative in order to apply the explainability method described in Section \ref{XAI}.
Additionally, the use of individual attention maps for the respective features $f(x)$ and $f(p_k)$ is also investigated in Head III-C:
\begin{align}
a^{\text{(III-C)}}_{h,w}(p_k) &= \sigma \left(\max_{u,v}\sum_c \hat{f}_{c,u,v}(x) \cdot \hat{f}_{c,h,w}(p_k)\right),\label{attn_fpk}\\
\label{Head III-C}
s^{\text{(III-C)}}_{c}(p_k)&=\sum_{h,w} a^{\text{(III-B)}}_{h,w}(p_k)\cdot f_{c,h,w}(x) \cdot a^{\text{(III-C)}}_{h,w}(p_k)\cdot f_{c,h,w}(p_k),
\end{align}
where $a^{\text{(III-B)}}_{h,w}$ is the attention score from Eq. \eqref{attention_headIII-B} for $f(x)$ at the \emph{input spatial location} $(h,w)$, whereas $a^{\text{(III-C)}}_{h,w}$ is the attention score for $f(p_k)$ at the \emph{prototype spatial location} $(h,w)$, as can be seen by the computation of $\max_{u,v}(\cdot)$ over the spatial locations of $\hat{f}(x)$.
Eq. \eqref{attn_fpk} computes an attention map by considering all spatial locations of $f(x)$ to find the most similar location to the specified location in $f(p_k)$ as opposed to the attention map based on Eq. \eqref{Head II-B}. The attention map $a^{\text{(III-B)}}(p_k) \in \mathbb{R}^{H\times W} $ with weights denoting the importance of the spatial positions in $f(x)$ based on its similarity with $f(p_k)$,
is applied over the feature $f(x)$. On the other hand, the attention map $a^{\text{(III-C)}}(p_k) \in \mathbb{R}^{H\times W}$ with importance weights for the spatial positions in $f(p_k)$ based on its similarity with $f(x)$, is applied over the feature $f(p_k)$.
For the head architectures in this category, the learnable prototype network parameters $\theta_2$ are the parameters of the convolutional-1D and the linear classification $h(\cdot)$ layers.
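A numpy sketch of the Head III-A computation (attention map from the II-A similarities, then the per-channel attention-weighted similarity of Eq. III-A); inputs and shapes are illustrative:

```python
import numpy as np

def softmax2d(s):
    # softmax over all spatial positions of a (H, W) score map
    e = np.exp(s - s.max())
    return e / e.sum()

def head_three_a(fx, fp):
    """Head III-A: attention from Eq. (II-A) similarities, then a
    per-channel attention-weighted similarity (shape (C,))."""
    fx_n = fx / np.linalg.norm(fx, axis=0, keepdims=True)
    fp_n = fp / np.linalg.norm(fp, axis=0, keepdims=True)
    s_a = np.einsum('chw,chw->hw', fx_n, fp_n)       # Eq. (II-A)
    attn = softmax2d(s_a)                            # a^(III-A)
    return np.einsum('hw,chw,chw->c', attn, fx, fp)  # Eq. (III-A)

rng = np.random.default_rng(2)
out = head_three_a(rng.normal(size=(8, 3, 3)), rng.normal(size=(8, 3, 3)))
# one attention-weighted similarity per channel, ready for the conv-1D layer
```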
\subsection{Auxiliary Output Branch for Iterative Prototype Replacement}\label{Prototype Selection}
For the selection of representative prototypes, one can directly learn an importance mask over the entire training set
and select those samples with high importance as prototypes. This approach, however, does not scale well to large sample sizes as the number of parameters that must be learned increases proportionally. For approaches such as \cite{arik2020protoattend}, the database of prototype candidates used must be sufficiently large for successful training as reported in their paper.
Instead of learning an importance mask over the entire training set or a large candidate database, we propose to learn a smaller importance mask over a fixed number of $K$ prototypes. We note that $K$ is much smaller than the size of the candidate database \cite{arik2020protoattend} used during training or inference. Our approach scales well to large sample sizes since the number of parameters to be learned can now be constrained by $K$, which also corresponds to the final number of prototypes used during inference.
We introduce an auxiliary output branch to guide the network to learn a prototype replacement scheme that replaces uninformative prototypes (out of the $K$ prototypes) with new samples based on the importance or contribution of the current prototypes to the decision boundary of the network.
Prototypes that are assigned larger importance weights have higher importance and are unlikely to be replaced. On the other hand, prototypes with smaller importance weights are likely to be replaced with other samples from the training set due to their minimal contribution to the decision boundary of the network.
During training, the network learns a prototype importance weight $\mathcal{M} \in \mathbb{R}^K$ via an auxiliary loss to quantify the importance of the $K$ prototypes based on their contribution to the prediction.
The auxiliary loss imposes a penalty when informative prototypes that influence the decision boundary of the network are removed, i.e., masked out.
The network parameters $\theta_1$ and $\theta_2$ and the prototype importance weight $\mathcal{M}$ are jointly optimized. The formulation of the auxiliary loss will be elaborated further in
Section \ref{Learning Objectives}.
The full algorithm for the iterative prototype replacement method is presented in Algorithm \ref{alg:protoselect}.
We compute the binary prototype mask $\mathcal{M}'(\tau_t)$ from the prototype importance weight $\mathcal{M}$ using the threshold $\tau_t$ at iteration $t$ as:
\begin{align}\label{mask_eqn}
\mathcal{M}'(\tau_t)= \frac{\max \ (0,\mathcal{M} - \tau_t )}{|\mathcal{M}-\tau_t|} \in \{0,1\}.
\end{align}
We use a continuous ReLU activation with normalization, inspired by the hard shrinkage operation in \cite{gong2019memorizing}, to avoid issues with backpropagating through a discontinuous 0-1 step function.
For simplicity, we assign $\tau_t$ as the $p$-th smallest weight in $\mathcal{M}$ to zero out a fixed number of $p$ prototypes (in the input of the linear classification layer $h(\cdot)$) with the smallest weight in $\mathcal{M}$.
To compute the auxiliary output $y_{mask}$, we multiply the binary prototype mask $\mathcal{M}'(\tau_t)$ elementwise with the features $z(p_k) \in \mathbb{R}$ at the input of the final linear classification layer $h(\cdot)$ corresponding to each prototype as follows:
\begin{align}
\label{masked_output_eqn} y_{mask} =h(x,\{p_1,p_2,...,p_K\})= \sum^{K}_{k=1} w_k m'_k z(p_k)+b ,
\end{align}
where $m'_k \in \{0,1\}$ is the weight in $\mathcal{M}'(\tau_t)$, and $w_k \in \mathbb{R}$ and $b \in \mathbb{R}$ are the parameters of the linear layer $h(\cdot)$.
At the end of each epoch, we replace the $p$ prototypes that have the smallest weight in $\mathcal{M}$ with new samples
from the training set $D$. The new set of prototypes $P$ also maintains an equal number of samples per class during training.
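A numpy sketch of the binary mask of Eq. \eqref{mask_eqn} and the masked output of Eq. \eqref{masked_output_eqn}; as a sketch-level deviation we offset $\tau$ by a tiny epsilon so the element equal to the threshold is masked rather than producing $0/0$, and the toy weights are illustrative:

```python
import numpy as np

def binary_mask(M, p, eps=1e-12):
    """Eq. (mask): zero out the p prototypes with the smallest
    importance weight; tau is the p-th smallest weight in M."""
    tau = np.sort(M)[p - 1] + eps  # eps masks the element equal to tau
    return np.maximum(0.0, M - tau) / np.abs(M - tau)

def masked_logit(w, z, M, p, b=0.0):
    """Auxiliary output y_mask = sum_k w_k m'_k z(p_k) + b."""
    return float(np.dot(w * binary_mask(M, p), z) + b)

M = np.array([0.9, 0.1, 0.5, 0.3])  # prototype importance weights
mask = binary_mask(M, p=2)          # drops the 2 least important prototypes
y_mask = masked_logit(np.ones(4), M, M, p=2)
```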
The prototype importance weight $\mathcal{M}$ can also be used to prune prototypes and relevant neurons. This will be discussed in Section \ref{Pruning of Prototypes and Relevant Neurons}.
We remark that it is not of interest to iterate through all the training samples when choosing optimal prototypes, as this will not scale well to large datasets. Our algorithm is based on the idea that one can find prototypes that may be similar to a number of training samples and thus can avoid iterating through all training samples when selecting prototypes. In future work, we will consider presorting the prototypes based on their similarity to maintain an order over training samples during learning and speed up model convergence. The prototype-based Head III architectures, being architecturally more complex than Heads I and II, incur greater training overhead than a standard CNN teacher network due to the computation of attention maps, and their training time increases proportionally with the number of prototypes used. Additional details are provided in Section S-I of the Supplementary Material.
\subsection{Learning Objective}\label{Learning Objectives}
We define the set of input training samples with their ground truth to be $D=\{(x_i,\tilde{y}_i) \ |\ 1\leq i \leq N \}$ and the set of prototypes with their class labels to be $P=\{(p_k, \tilde{q}_k) \ | \ 1\leq k \leq K \}$.
For an input sample $x$, we denote the output logits of the teacher network as $y_T$ and the logits and auxiliary output of the student network as $y$ and $y_{mask}$, respectively. The predicted class label from the student network for the input sample $x$ is represented by $y_{pred}=\arg\max y$.
The following defines the objective function for our student network:
\begin{multline}\label{total_loss}
\min_{\theta_1, \theta_2, \mathcal{M}} \ \mathcal{L}(\tilde{y}, \sigma(y)) + \lambda_1\mathcal{L}(\sigma(y_{T}), \sigma(y)) \\ + \lambda_2\mathcal{L}(y_{pred}, \sigma(y_{mask})) +
\lambda_3\mathcal{J}\left((x,\tilde{y}), (p, \tilde{q})\right),
\end{multline}
where $\mathcal{L}(\cdot)$ is the cross-entropy loss, $\mathcal{J}(\cdot)$ is a loss term based on the squared $L_2$-norm, and $\sigma(\cdot)$ is the softmax.
The first two terms are the losses commonly employed in the training of a student network. The second term, also known as the distillation loss, is the cross-entropy loss with the teacher soft labels as ground truth to encourage the student network to learn predictions resembling the teacher network.
The third term $\mathcal{L}(y_{pred}, \sigma(y_{mask}))$ is used for the learning of prototype importance weight $\mathcal{M} \in \mathbb{R}^K$. We use the standard cross-entropy loss as the auxiliary loss but with the predicted class label and the probability $\sigma(y_{mask})$ as the input argument to the standard cross-entropy loss rather than using the ground truth class $\tilde{y}$ and the probability output $\sigma(y)$ from the main network branch.
The auxiliary loss enables the network to learn a prototype importance weight $\mathcal{M} \in \mathbb{R}^K$ that assigns larger weights to prototypes whose removal would cause a large change in the decision boundary of the network, and smaller weights to prototypes whose removal would cause only a small change. Our proposed solution removes uninformative prototypes with minimal disruption to the decision boundary, using the predicted class label $y_{pred}$ instead of the ground truth label $\tilde{y}$ to guide the learning of $\mathcal{M}$, since we believe the network learns better by replacing uninformative prototypes with other, potentially more informative ones. This allows lower-ranked prototypes, which have less impact on prediction accuracy, to be replaced in the prototype replacement step.
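The first three terms of the objective can be sketched in numpy as follows; the fourth distance term $\mathcal{J}$ is omitted here for brevity, and the toy logits are illustrative:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def cross_entropy(target_probs, logits):
    # CE between a target distribution and the softmax of the logits
    return float(-np.sum(target_probs * np.log(softmax(logits) + 1e-12)))

def student_loss(y, y_T, y_mask, y_true, lam1=1.0, lam2=1.0):
    """First three terms of the objective: hard-label CE, distillation
    CE against teacher soft labels, and the auxiliary CE on the masked
    output against the predicted (not ground truth) class."""
    onehot = np.eye(len(y))[y_true]
    pred = np.eye(len(y))[int(np.argmax(y))]  # y_pred as one-hot target
    return (cross_entropy(onehot, y)
            + lam1 * cross_entropy(softmax(y_T), y)
            + lam2 * cross_entropy(pred, y_mask))

loss = student_loss(np.array([2.0, 0.5]), np.array([1.5, 0.2]),
                    np.array([1.8, 0.4]), y_true=0)
```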
The fourth loss term $\mathcal{J}\left((x,\tilde{y}), (p, \tilde{q})\right)$ encourages the feature space of the input sample $x$ to be close to the set of prototypes from the same class but farther away from the other classes. The formulation of this objective term varies slightly between the different head architectures depending on their similarity operation.
For the Head I architecture, the objective term $\mathcal{J}\left((x,\tilde{y}), (p, \tilde{q})\right)$ is defined as:
\begin{equation*} \label{head1-distloss}
\frac{1}{|D||P|} \sum_{i=1}^{|D|} \sum_{k=1}^{|P|} ( \ \| \hat{g}(x_i)-\hat{g}(p_k) \|_2^2 \ )^{\alpha},
\end{equation*}
\begin{equation}
\text{where} \ \alpha=1, \ \text{if} \ \tilde{y}_i=\tilde{q}_k, \ \text{else} \ \alpha=-1.
\end{equation}
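A numpy sketch of this Head I objective term, treating $\hat{g}$ as the normalized pooled features and using illustrative toy vectors; same-class pairs contribute $d^2$ (pull together) and cross-class pairs contribute $1/d^2$ (push apart):

```python
import numpy as np

def head_one_distance_loss(G_x, y, G_p, q):
    """Head I J-term: mean of signed-exponent squared distances between
    normalized pooled features; alpha=+1 for same-class pairs,
    alpha=-1 otherwise."""
    total = 0.0
    for gx, yi in zip(G_x, y):
        for gp, qk in zip(G_p, q):
            d2 = np.sum((gx - gp) ** 2)
            total += d2 if yi == qk else 1.0 / d2
    return total / (len(G_x) * len(G_p))

G_x = np.array([[1.0, 0.0], [0.0, 1.0]])  # normalized pooled features g-hat(x)
G_p = np.array([[0.9, 0.1], [0.1, 0.9]])  # prototype features g-hat(p)
loss = head_one_distance_loss(G_x, [0, 1], G_p, [0, 1])
```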
For the Head II-A and Head III-A architectures, the objective function $\mathcal{J} \left((x,\tilde{y}), (p, \tilde{q})\right)$ is defined as:
\begin{equation*}
\frac{1}{|D||P|} \sum_{i=1}^{|D|} \sum_{k=1}^{|P|} \bigg[\frac{1}{HW} \sum^{H,W,C}_{h,w,c}(\hat{f}_{c,h,w}(x_i)-\hat{f}_{c,h,w}(p_k))^2\bigg]^{\alpha},
\end{equation*}
\begin{equation}\label{Head II-A and Head III-A}
\text{where} \ \alpha=1, \ \text{if} \ \tilde{y}_i=\tilde{q}_k, \ \text{else} \ \alpha=-1.
\end{equation}
The term inside the square bracket is simply the average squared $L_2$-norm computed for the channel dimension over all spatial positions $(h,w)$.
Similarly, the objective function $\mathcal{J} \left((x,\tilde{y}), (p, \tilde{q})\right)$ for the Head II-B and Head III-B architectures is defined as:
\begin{equation*}
\frac{1}{|D||P|} \sum_{i=1}^{|D|} \sum_{k=1}^{|P|}\bigg[\frac{1}{HW} \sum^{\mathclap{H,W,C}}_{\mathclap{h,w,c}} (\hat{f}_{c,h,w}(x_i) -\hat{f}_{c,h_{\max},w_{\max}}(p_k))^2 \bigg]^{\alpha} ,
\end{equation*}
\begin{equation}\label{Head II-B and Head III-B}
\text{where} \ \alpha=1, \ \text{if} \ \tilde{y}_i=\tilde{q}_k, \ \text{else} \ \alpha=-1.
\end{equation}
The different formulation for Eq. \eqref{Head II-A and Head III-A} and Eq. \eqref{Head II-B and Head III-B} is due to the two different types of similarity operations introduced in the head architectures, as discussed in Section \ref{Network Architecture}.
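The difference between the two bracket terms can be made concrete with a small sketch (NumPy, with feature maps of assumed shape $(C,H,W)$; the selection of $(h_{\max}, w_{\max})$ by the similarity operation is taken as given here):

```python
import numpy as np

def bracket_a(f_x, f_p):
    """Head II-A / III-A bracket: channel-wise squared difference at matching
    spatial positions, averaged over all H*W positions. Shapes (C, H, W)."""
    return ((f_x - f_p) ** 2).sum(0).mean()

def bracket_b(f_x, f_p, h_max, w_max):
    """Head II-B / III-B bracket: every position of f_x is compared against a
    single prototype position (h_max, w_max) chosen by the max-similarity op."""
    proto_vec = f_p[:, h_max, w_max][:, None, None]  # broadcast over (H, W)
    return ((f_x - proto_vec) ** 2).sum(0).mean()
```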
For Head III-C architecture, the objective function $\mathcal{J}(\cdot)$ is expressed as
\begin{equation}
\mathcal{J} \left((x,\tilde{y}), (p, \tilde{q})\right) = \mathcal{J}_x \left((x,\tilde{y}), (p, \tilde{q})\right) + \mathcal{J}_p \left((x,\tilde{y}), (p, \tilde{q})\right),
\end{equation}
where the first term is equivalent to Eq. \eqref{Head II-B and Head III-B} and the second term is defined analogously based on the $a^{\text{(III-C)}}_{h,w}(p_k)$ operation for $f(p_k)$ in Head III-C, i.e., in the loss $\mathcal{J}_p(\cdot)$ we use $\hat{f}_{c,h_{\max},w_{\max}}(x_i)$ instead of $\hat{f}_{c,h,w}(x_i)$ and vice versa for $p_k$.
\subsection{Explanation for Top-$k$ Nearest Prototypical Examples}\label{XAI}
The student network provides a prediction explanation in the form of:
\begin{enumerate*}
\item a prediction explanation with top-$k$ nearest prototypical examples and
\item pixel evidence of similarity between the prediction sample and the prototypical examples.
\end{enumerate*}
We define a prototype similarity score $u_k$ that lies in $[0,1]$ to identify the top-$k$ nearest prototypical examples for a given input sample.
We elaborate on the formulation of the prototype similarity score $u_k$ in Section \ref{outlier} as the scores are also used to formulate the outlier score. In the following paragraph, we recapitulate the explanation method to compute pixel evidence of similarity between each pair of input-prototype.
To show evidence of similarity between the input sample and the prototypes in the pixel space, we use a post hoc explainability method, the Layer-wise Relevance Propagation (LRP) algorithm \cite{montavon2019layer,bach2015pixel}, to compute a pair of LRP heatmaps for each input-prototype pair.
For a pair of input-prototype, their respective heatmaps comprise positive and negative relevance scores (also known as evidence) in the pixel space. The relevance scores represent the pixels' contribution to the prediction score.
For the 2D-convolutional layers, we used the following LRP-$\alpha\beta$ rule to compute relevance $R_d^{(l)}$ for the neuron $d$ at the current layer $l$:
\begin{equation}
R_d^{(l)}=\sum_j \left( \ \alpha \ \frac{(a_dw_{dj})^+}{\sum_{0,d}(a_d w_{dj})^+} -\beta \ \frac{(a_dw_{dj})^-}{\sum_{0,d}(a_d w_{dj})^-} \ \right) R_j^{(l+1)},
\end{equation}
where $a_d$ is the lower-layer activation from the input neuron $d$ at layer $l$, $w_{dj}$ is the weight connecting the neuron $d$ at layer $l$ to the higher-layer neuron $j$ at the succeeding layer $l+1$, $R_j^{(l+1)}$ is the relevance for the neuron $j$ at the succeeding layer $l+1$, $\alpha-\beta=1$, $\alpha >0$, and $\beta \geq 0$.
For the other layers, including the similarity layer (Heads I and II) and the attended similarity layer (Head III), we use the following LRP-$\epsilon$ rule:
\begin{equation}
R_d^{(l)}=\sum_j \frac{a_dw_{dj}}{\epsilon+\sum_{0,d} a_d w_{dj}} \ R_j^{(l+1)}.
\end{equation}
The similarity/attended similarity layer is treated as a linear layer, which enables the easy application of the LRP-$\epsilon$ rule in the standard manner without resorting to more complex LRP methods such as BiLRP \cite{eberle2020building}. For Head III architectures, the relevance scores flow through the attended similarity layer branch only while treating the attention weights from the similarity layer as constants.
From the last linear layer up to the similarity/attended similarity layer $l$, we compute the relevance scores $R_{(\cdot)}^{(l)}(x,p_k)$ for each pair of $(x, p_k)$.
Using $R_{(\cdot)}^{(l)}(x, p_k)$ at the similarity/attended similarity layer $l$, we compute the LRP heatmap for $x$ by backpropagating the relevances to the input space of $x$ through the CNN encoder. The heatmap for $p_k$ is also computed in a similar manner (for Head II-B and Head III-B architectures with $\max_{h',w'}(\cdot)$ operation, the relevance scores are accumulated at the relevant $h'$ and $w'$ indices before backpropagating to the input space of $p_k$).
We refer the reader to \cite{montavon2019layer} for more information on LRP algorithms.
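For reference, the two propagation rules can be sketched for a single dense layer as follows (our notation and naming; $\alpha=1.7$ and $\beta=0.7$ match the setting used in our experiments, and the small stabiliser in the $\alpha\beta$ rule is an implementation detail, not part of the rule):

```python
import numpy as np

def lrp_alpha_beta(a, W, R_up, alpha=1.7, beta=0.7, eps=1e-9):
    """LRP-alpha-beta rule sketch for a dense layer: a (D,) lower activations,
    W (D, J) weights, R_up (J,) relevances at the upper layer."""
    z = a[:, None] * W                                 # contributions a_d * w_dj
    zp, zn = np.clip(z, 0, None), np.clip(z, None, 0)  # positive / negative parts
    Rp = zp / (zp.sum(0, keepdims=True) + eps)         # normalise each part
    Rn = zn / (zn.sum(0, keepdims=True) - eps)         # (both ratios are >= 0)
    return ((alpha * Rp - beta * Rn) * R_up[None, :]).sum(1)

def lrp_epsilon(a, W, R_up, eps=1e-3):
    """LRP-epsilon rule sketch for the remaining layers."""
    z = a[:, None] * W
    return (z / (eps + z.sum(0, keepdims=True)) * R_up[None, :]).sum(1)
```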
\subsection{Quantifying the Degree of Outlierness of a Sample}\label{outlier}
Our proposed prototype-based architectures can naturally quantify the degree of outlierness of an input sample based on the sequence of sorted similarities
even though they are not trained primarily for outlier detection. We revisit the method in \cite{10.1007/978-3-642-13062-5_16} and generalize the confidence score in \cite{arik2020protoattend} to compute an outlier score based on the sum of similarity scores.
We use the sum of the prototype similarity scores $u_{k}$ to compute the outlier score. While we see the main contribution in unifying the two aspects of explanation and outlier detection, a smaller contribution lies in the different formulations of the prototype similarity scores $u_{k}$ with respect to the student architecture (refer to the output branch for the outlier score in Figures \ref{head1}-\ref{head3}).
For each input sample $x$, we define the set of corresponding prototype similarity scores as $U=\{u_{k} \ | \ 1 \leq k \leq K \}$. For all Head I and Head II architectures, the $u_{k}$ score is assigned as
$
u_{k}=z(p_k),
$
where $z(p_k)$ is the input to the linear layer $h(\cdot)$ as shown in Figures \ref{head1} and \ref{head2}. For
Head III-A and Head III-B architectures, the $u_{k}$ score is computed as
$u_{k}= \frac{1}{HW} \sum_{h=1,w=1}^{H,W} r_{h,w}(p_k).
$
For the Head III-C architecture, the $u_{k}$ score is computed as $ u_{k}= \max_{h,w}(r_{h,w}(p_k)).$
With the placement of ReLU layers in the CNN, the elements of $U$ lie in the range $[0,1]$, where a higher value indicates a larger similarity between the input sample $x$ and the prototype $p_k$. From the set $U$, the $k'$ highest scores are selected regardless of the prototype class.
For sample $x$, the outlier score $o(x)$ is defined as:
\begin{equation}\label{outlier_eqn_score}
o(x)= 1- \frac{1}{k'}\sum_{k=1}^{k'} u_k.
\end{equation}
Note that $u_k$ is in general not an inner product; thus, we cannot compute true metric distances and hence follow the idea in \cite{10.1007/978-3-642-13062-5_16}. In this formulation, we assume that the prototype similarity scores $u_k$ for outliers are smaller than those for normal samples. The above formulation is the general case, which considers prototypes from all classes in the summation, unlike the formulation of the confidence score in \cite{arik2020protoattend}, which considers only the set of prototypes from the predicted class.
We show only the results using Eq. \eqref{outlier_eqn_score}, which gives better performance, and omit results using only prototypes from the same predicted class due to lack of space.
A larger outlier score $o(x)$ indicates a stronger anomaly.
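The outlier score computation reduces to a few lines (a sketch under our naming; `u` collects the $K$ prototype similarity scores of one sample):

```python
import numpy as np

def outlier_score(u, k_prime):
    """One minus the mean of the top-k' prototype similarity scores,
    taken over prototypes of all classes (the general case)."""
    top = np.sort(np.asarray(u))[::-1][:k_prime]  # k' highest scores
    return 1.0 - top.mean()
```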
\section{Experimental Settings}\label{exp_setup}
\textbf{Dataset.}
We used the LSUN \cite{yu2015lsun} dataset consisting of $10$ classes with an image size of $128\times128$ pixels, the PCam \cite{bejnordi2017diagnostic,veeling2018rotation} dataset consisting of $2$ classes with an image size of $96\times96$ pixels, and the Stanford Cars \cite{KrauseStarkDengFei-Fei_3DRR2013} dataset with image size $224\times224$ pixels, where we consider only samples in the first 50 cars classes instead of the total 196 classes due to time constraints.
For the LSUN dataset, we subsampled $10,000$ samples per class from the given training set to save training and computational time. We used the $80:20$ split for training and validation. The original validation split was used as the test set. For the Stanford Cars dataset, we used fewer training samples because we split 10\% of the given training set for each class to be our validation set, whereas the setup in \cite{chen2019looks} used all of the class samples in the given training set to train the model. We performed offline data augmentation following \cite{chen2019looks} on our training set and used the augmented training set for training. Since each sample in the training set is augmented 29 times, using 10\% fewer base training samples will have a larger impact on the model performance, as the complete training set used for training consists of both augmented and base samples. In our setup, we used the performance on the validation set as an early stopping criterion for training and to select the best model, as opposed to using the training accuracy, cluster cost, and separation cost to stop training as done in \cite{chen2019looks}.
Performance for the Stanford Cars dataset is reported on its official test set using only samples from the first 50 classes. For the PCam dataset, we used the given dataset splits as they are. We refrained from the common usage of CIFAR-10, MNIST, or Fashion MNIST classes, which are known to cluster very easily and offer little potential for aggregating spatial similarities due to their low spatial resolutions. These datasets do not provide an accurate performance benchmark, as we explain later in Section \ref{Prediction Performance}.
\textbf{LRP Perturbation.} We subsampled only $100$ samples per class from our LSUN test set and $500$ samples per class from our PCam test set to reduce computational time.
We set $\alpha=1.7$, $\beta=0.7$, and $\epsilon=1e-3$ for the LRP-$\alpha\beta$ and LRP-$\epsilon$ rules, respectively.
\textbf{Outlier Setup.} Three different types of outlier setups were used in our experiments. Setup A consists of the LSUN test set, which is labeled the normal sample set, and the test set from the Oxford Flowers 102 \cite{nilsback2008automated} dataset, which is labeled the outlier sample set. For Setup B, we created a synthetic outlier counterpart for each test sample in our test set (LSUN/PCam/Stanford Cars) by drawing random \textit{strokes} of thickness $M$ on the original image \cite{chong2020simple}.
In Setup C, we created an outlier counterpart for each test sample by manipulating the color of the image, by increasing the minimum saturation and value components, and then randomly rotating the hue component. The manipulated image has abnormal colors.
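A sketch of this Setup C transform (the hue offset and the minimum saturation/value thresholds below are illustrative placeholders, not the values used in our experiments):

```python
import colorsys
import numpy as np

def make_color_outlier(rgb, hue_shift, s_min=0.6, v_min=0.6):
    """Setup C sketch: raise the minimum saturation and value components and
    rotate the hue of an RGB image with channel values in [0, 1]."""
    h, w, _ = rgb.shape
    out = np.empty_like(rgb)
    for i in range(h):
        for j in range(w):
            hh, ss, vv = colorsys.rgb_to_hsv(*rgb[i, j])
            hh = (hh + hue_shift) % 1.0   # rotate hue
            ss = max(ss, s_min)           # raise minimum saturation
            vv = max(vv, v_min)           # raise minimum value
            out[i, j] = colorsys.hsv_to_rgb(hh, ss, vv)
    return out
```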
Due to the large number of test samples in the PCam dataset, $1500$ samples per class were sampled from the test set for the outlier detection experiments.
We showed a comparison with the common baseline method based on the maximum probability class \cite{hendrycks2016baseline}, the classic isolation forest (IF) \cite{liu2008isolation}, generalized ODIN (GODIN) \cite{hsu2020generalized}, and Outlier Exposure (OE) \cite{hendrycks2018deep} to evaluate the potential of our prototype networks in distinguishing outliers from the normal sample distribution.
The outlier score for the baseline method \cite{hendrycks2016baseline} is the negative probability of the predicted class such that a sample with a smaller predicted class probability has a higher outlier score. For the training of GODIN layers, i.e., $g(x)$ and $h_i(x)$, we used a learning rate of $1e-3$, and we either fixed or fine-tuned the teacher network layers using a learning rate of $1e-4$ with an early stopping training criterion using the performance on the in-distribution validation set. For the OE method, we fine-tuned the teacher network with a learning rate of $1e-3$ on one type of outlier and tested it on the other types of outliers. For fine-tuning with the out-of-distribution Oxford Flowers 102 dataset, we used the entire dataset comprising training, validation, and test samples for training. For the hyperparameters of IF and other hyperparameters of GODIN and OE that are not stated here, we followed the hyperparameters used by the authors. We evaluated the outlier performance of the models using the area under the receiver operating characteristic (AUROC) curve, false positive rate at 95\% true positive rate (FPR95), and the area under the precision-recall (AUPR) curve metrics. We reported in Section S-II of the Supplementary Material both AUPRout and AUPRin metrics, where the former treats the outlier class as the positive class, while the latter treats the inlier class as the positive class.
\textbf{Model and Hyperparameter.} We employed ResNet-50 and ResNet-34 architectures \cite{he2016deep} pretrained on ImageNet as the CNN encoder. The former is used for the LSUN and PCam datasets, and the latter is used for the Stanford Cars dataset.
We used the SGD optimizer with momentum $0.9$ and weight decay of $1e-4$. For all head architecture networks, except Head III-B trained on the Stanford Cars dataset, the learning rates of $1e-3$ and $1e-4$ were used in the prototype network and the CNN encoder, respectively. A decay learning rate with a step size of $10$ epochs and $\gamma=0.1$ was also adopted. To reduce overfitting for the Head III-B network on the Stanford Cars dataset, we used smaller learning rates of $5e-5$ and $1e-5$ for the prototype network and the CNN encoder, respectively, with learning rate decay every $20$ epochs and $\gamma=0.2$. For the learning objectives, we set $\lambda_1=\lambda_2=1$ and $\lambda_3=0.1$. For the iterative prototype replacement algorithm, we set $p=30\%$ of the number of prototypes (Line 5 of Algorithm \ref{alg:protoselect}). We used PyTorch \cite{paszke2017automatic} for implementation.
We compared our methods with the teacher network, ProtoDNN \cite{li2017deep}, ProtoPNet \cite{chen2019looks}, and the $\chi^2$-kernel SVM \cite{vedaldi2012efficient}.
To control architectural effects, we used the same ResNet architecture pretrained on ImageNet as the CNN encoder for the aforementioned methods.
We followed the hyperparameters used by the authors except for the learning rate of the encoder in ProtoDNN, which we set to $1e-5$.
The $\chi^2$-kernel SVM used histogram CNN features pretrained on ImageNet. Since kernel SVMs do not scale to large datasets, we used a $\chi^2$ approximation kernel \cite{vedaldi2012efficient} and the SGD classifier with hinge loss at an optimal learning rate and tolerance of $1e-4$. We performed a grid search on the validation set with the hyperparameters $\{1e-5, 1e-4,..., 1 \}$ to determine the regularization constant in the SGD classifier. We used $10$ prototypes per class in the LSUN and Stanford Cars experiments, and $20$ prototypes per class in the PCam experiments.
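This baseline can be assembled directly from scikit-learn (a sketch; `sample_steps`, `alpha`, and `random_state` below are illustrative choices, with `alpha` selected from the stated grid in practice):

```python
import numpy as np
from sklearn.kernel_approximation import AdditiveChi2Sampler
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

# Additive chi^2 feature map followed by a linear SGD classifier with hinge
# loss, as a scalable stand-in for an exact chi^2-kernel SVM.
clf = make_pipeline(
    AdditiveChi2Sampler(sample_steps=2),
    SGDClassifier(loss="hinge", learning_rate="optimal", tol=1e-4,
                  alpha=1e-4, random_state=0),
)
```

The chi$^2$ feature map requires non-negative inputs, which histogram CNN features satisfy by construction.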
To ensure reproducibility, we provide the instructions to prepare the datasets and splits used in our experiments including the codes to create the synthetic outliers in GitHub\footnote{\url{https://github.com/pennychong94/Project_Towards-Scalable-and-Unified-Example-based-Explanation-and-OD}}.
\section{Experimental Results} \label{Results}
\subsection{Prediction Performance}\label{Prediction Performance}
We begin the comparison with the predictive aspect of our proposed architectures and the other methods on both datasets.
Based on Table \ref{performance-LSUN}, it can be observed that the proposed student architectures outperform the teacher network and are better than the other methods on the LSUN dataset. The Head III-B student network, which computes an attention map based on the maximum similarity between input and prototype, has the best classification performance among the other student networks.
The Head III-C student network, which computes individual attention maps for the input and the prototype, has similar performance to Head III-B but is computationally more expensive to train due to the additional attention map used.
It can be observed that ProtoDNN underperforms severely on the LSUN dataset.
A manual inspection of the prototypes from ProtoDNN reveals that the prototypes do not resemble any training samples and that the autoencoder used for prototype decoding fails to reconstruct the LSUN samples, resulting in poor classification performance. This implies that the joint optimization for the layers in ProtoDNN performs well only on simple grayscale datasets such as MNIST and Fashion MNIST \cite{li2017deep} but is not generalizable to complex datasets such as LSUN.
Our student network does not rely on autoencoders for prototype visualization, since the prototypes are actual training samples selected from the training set.
On the basis of our own observation with outlier detection on simpler datasets and the inability of ProtoDNN to generalize to complex datasets such as LSUN, we refrain from using common and simple datasets such as MNIST and Fashion MNIST in our evaluation to provide a more challenging performance benchmark.
\begin{table}[ht!]
\setlength\tabcolsep{25pt}
\centering
\caption{Classification performance on LSUN test set.}
\label{performance-LSUN}
\begin{tabular}{|c|c|}
\hline
\textbf{Model} & \textbf{ \ Acc. (\%)} \\ \hline
\ $\chi^2$ SVM \cite{vedaldi2012efficient} \ & 76.8 \\
\ ProtoDNN \cite{li2017deep} \ & 54.2 \\
\ ProtoPNet \cite{chen2019looks} \ & 82.5 \\
\ Teacher (\textit{baseline}) \ & 80.1 \\ \hhline{|=|=|}
Student Head I \ & 82.9 \\ \cline{1-2}
Student Head II-A & 82.9 \\
Student Head II-B & 82.5 \\ \cline{1-2}
Student Head III-A \ & 82.8 \\
Student Head III-B \ & \textbf{83.5}\\
Student Head III-C \ & 83.4 \\
\hline
\end{tabular}
\end{table}
\begin{table}[ht!]
\setlength\tabcolsep{12pt}
\centering
\caption{Classification performance on PCam test set.}
\label{performance-Pcam}
\begin{tabular}{|c|c|c|}
\hline
\textbf{Model} & \textbf{ \ Acc. (\%)} & \textbf{ \ AUROC \ } \\ \hline
\ $\chi^2$ SVM \cite{vedaldi2012efficient} \ & 80.2 & 0.8875\\
\ ProtoDNN \cite{li2017deep} \ & 76.7 & 0.8180\\
\ ProtoPNet \cite{chen2019looks} \ &\textbf{83.4} & 0.8822\\
\ Teacher (\textit{baseline}) \ & 80.5 & 0.9142\\ \hhline{|=|=|=|}
Student Head I \ & 83.3 & \textbf{0.9216} \\ \cline{1-3}
Student Head II-B & 82.5 & 0.9111 \\ \cline{1-3}
Student Head III-B \ & 81.0 & 0.9205 \\ \hline
\end{tabular}
\end{table}
For the PCam and Stanford Cars datasets, as shown in Tables \ref{performance-Pcam} and \ref{performance-stanfordcar}, we show only one type of head per category. For the Head II and III categories, we show only performances of Head II-B and Head III-B architectures as they are the general cases of Head II-A and Head III-A, respectively, and have lower computational complexity than the Head III-C architecture.
Based on Table \ref{performance-Pcam}, ProtoPNet and our student network with the Head I architecture demonstrate equally good performance on the PCam dataset. The former achieves the highest test accuracy (only marginally better than Head I), and the latter achieves the best AUROC score. The PCam dataset, with only 2 classes and similar image statistics, has smaller intraclass variations than the LSUN dataset with its 10 classes. In this case, Head I, which computes similarity using spatially pooled features, overfits less and thus performs better than more complex architectures such as Head II-B and Head III-B.
In Table \ref{performance-stanfordcar}, we compare the different head architectures only with the most competent state-of-the-art model, i.e., ProtoPNet. For the student networks trained on the Stanford Cars dataset, we observe a performance trend similar to that of the student networks trained on the PCam dataset, since the Stanford Cars classification task is a fine-grained car model classification with significantly smaller intra- and interclass variations (similar to PCam samples) than the LSUN dataset. The Head I architecture has the best classification performance, surpassing the performance of ProtoPNet, while the attention-guided Head III-B architecture with higher complexity underperforms on this dataset. We used smaller learning rates for the Head III-B network on this dataset, as the model overfits severely with higher learning rates, as observed in our initial experiment.
In an attempt to improve the performance of the model on this dataset, we also investigated several Head III-B variants, such as parameterized spatial attention and uniform attention variants, but obtained unsatisfactory classification performance compared to the Head I and Head II-B networks. For more information, we refer the reader to Section S-V of the Supplementary Material.
Due to small sample variations in the fine-grained dataset and a larger number of learnable parameters in the Head III-B architecture (the additional parameters in the 1D-CNN layer), the model has a higher tendency to overfit compared to simpler architectures such as Head I and Head II-B. Since fine-grained classification is a task that involves distinguishing between very similar objects, the network may need to focus on a smaller region of the image sample. Thus, Head III-B can potentially be improved further by sampling small prototype patches from the training set, as inspired by the learning of prototype latent patches in ProtoPNet, which permits the attention map in Head III-B to be applied more precisely over a smaller target region. We note that there is a performance gap between the performance of ProtoPNet in Table \ref{performance-stanfordcar} and the performance reported in \cite{chen2019looks} (Table 1 of their Supplementary Material). The performance gap is due to our experimental setup, which is different from \cite{chen2019looks} as described earlier in Section \ref{exp_setup}, and the classification problem of the first $50$ classes that are naturally more challenging than the remaining Stanford Cars classes. For a complete discussion on this matter, we refer the reader to Section S-IV of the Supplementary Material.
For the remaining experiments using the PCam and Stanford Cars datasets, we evaluated only Head II-B and Head III-B for the Head II and Head III categories.
\begin{table}[ht!]
\setlength\tabcolsep{25pt}
\centering
\caption{Classification performance on Stanford Cars test set using only samples from the first 50 classes.
The Head III-B network for this dataset is trained using lower learning rates than the other head architectures. Refer to Section \ref{exp_setup} for more details on the experimental setting.
}
\label{performance-stanfordcar}
\begin{tabular}{|c|c|}
\hline
\textbf{Model} & \textbf{ \ Acc. (\%)} \\ \hline
\ ProtoPNet \cite{chen2019looks} \ & 69.2 \\
\ Teacher (\textit{baseline}) \ & 70.0 \\ \hhline{|=|=|}
Student Head I \ & \textbf{72.1} \\ \cline{1-2}
Student Head II-B & 70.7 \\ \cline{1-2}
Student Head III-B \ & 60.6 \\
\hline
\end{tabular}
\end{table}
\begin{table}[ht!]
\setlength\tabcolsep{25pt}
\centering
\caption{Performance of the state-of-the-art models compared to our teacher model on LSUN and PCam test sets. $^*$Our teacher network is trained on a subset of the full LSUN training set using only 8000 samples per class. The reported results for LSUN are the performances on the original validation split (consisting of 300 images per category) that we used as our test set. }
\label{sota_perf}
\begin{tabular}{|c|c|}
\hline
\textbf{LSUN Dataset} & \textbf{ \ Acc. (\%)} \\ \hline
\ $^*$Teacher (\textit{baseline}) \ & 80.1 \\
Context-CNN \cite{javed2017object}\ & 89.0 \\
SIAT\_MMLAB \cite{wang2017knowledge} \ & \textbf{91.8}\\ \hline
\textbf{PCam Dataset} & \textbf{ \ AUROC} \\\hline
\ Teacher (\textit{baseline}) & 0.914 \\
\ VF-CNN \cite{marcos2017rotation} & 0.951 \\
\ G-CNN \cite{bekkers2018roto,lafarge2021roto} & 0.968\\
\ DSF-CNN \cite{graham2020dense} & \textbf{0.975}\\
\hline
\end{tabular}
\end{table}
We conclude that using prototype-based students generally results in models that are as competitive in prediction performance as the teacher network, except for Head III-B on the Stanford Cars dataset. Employing the same underlying architecture and using soft labels from the teacher network to train the prototype networks allows unconstrained modeling of the original task. We note that our proposed models are built on top of a typical CNN model and thus have lower performance than more advanced state-of-the-art methods that are curated to perform well on the task. To achieve performance comparable to Table \ref{sota_perf}, one may use any state-of-the-art model as the teacher network and use the soft labels and the same underlying architecture to train the prototype networks. We do not include the state-of-the-art performance on the Stanford Cars dataset in Table \ref{sota_perf} since we used only the first 50 car classes in our experiments due to time constraints.
\subsection{Top-$k$ Nearest Example and Pixelwise Explanation}
We provide an explanation using the top-$k$ nearest prototypical examples from different classes and show the pixel evidence of similarity between the prototype and test sample.
We first compare the quality of the pixelwise LRP heatmap explanation generated by the different student architectures with the teacher network and then show examples of explanation in the two forms (refer to Section \ref{XAI}).
\begin{figure}
\caption{LRP heatmap quality evaluation on LSUN (left) and PCam (right) datasets. The LRP heatmap perturbation for the student will use the LRP score computed between each test sample and its top-1 prototype.}
\label{perturbfig}
\end{figure}
\begin{figure*}
\caption{LRP heatmaps from Head III-B on the LSUN dataset for the prediction \textit{classroom}.}
\label{lsun_cropped_heatmaps}
\end{figure*}
\begin{figure}
\caption{LRP heatmaps from Head III-B on the PCam dataset.
The first two columns are for a true negative (tumor absent),
the third and fourth columns are for a true positive (tumor present), and the last two columns are for a false negative.
}
\label{heatmaps_headIII-b_pcam}
\end{figure}
We assess the quality of the pixelwise LRP heatmap explanation for the network by performing a series of region perturbations in the input space of the test sample. The LRP heatmap comprises relevance scores associated with the degree of importance for the predicted class. Performing a perturbation on the image pixel associated with high relevance scores will destroy the network logit for the predicted class. In other words, we expect a sharp decrease in the logit score of the predicted class as the relevant features in the input sample are gradually removed in decreasing order of importance. We adopt the generalized region perturbation approach in \cite{samek2016evaluating}.
We perform a series of perturbations starting from the most relevant region (based on the LRP relevance scores) to the least relevant region, as shown in Figure \ref{perturbfig}. A steep decrease suggests a good pixel explanation.
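The protocol can be sketched as follows (a minimal single-channel version under our naming; the non-overlapping patch grid and the uniform-noise fill are simplifications of \cite{samek2016evaluating}):

```python
import numpy as np

def perturbation_curve(image, heatmap, model_logit, region=8, steps=20, seed=0):
    """Repeatedly destroy the most relevant region x region patch (ranked by
    summed heatmap relevance) and record the logit of the predicted class after
    each step. `model_logit` is any callable image -> scalar; a steep decline
    indicates a faithful heatmap. Assumes H and W are divisible by `region`."""
    rng = np.random.default_rng(seed)
    img = image.copy()
    H, W = heatmap.shape
    # summed relevance of each non-overlapping patch, most relevant first
    patches = [(heatmap[r:r + region, c:c + region].sum(), r, c)
               for r in range(0, H, region) for c in range(0, W, region)]
    patches.sort(reverse=True)
    scores = [float(model_logit(img))]
    for _, r, c in patches[:steps]:
        img[r:r + region, c:c + region] = rng.uniform(size=(region, region))
        scores.append(float(model_logit(img)))
    return scores
```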
Based on the subplot for the LSUN dataset in Figure \ref{perturbfig}, the Head III-B student architecture with the best classification performance also has the best heatmap explanation, as shown by the largest decline in the logit scores. The attention modules in Head III-A and Head III-B show evidently better explanation qualities than the standard teacher network, as suggested by their respective curves.
The Head I architecture shows a poorer explanation than the teacher network due to the early average pooling operation, which may have removed distinctive features that are useful in providing an informative pixel explanation.
For the PCam dataset, the subplot suggests better heatmap quality for the Head III-B architecture in the first 5 perturbation steps only, after which it is overtaken by the teacher network and the Head II-B student network.
Due to the lower diversity of objects in a patch (small cell types, stromal tissue, and background) in the PCam dataset compared to the LSUN dataset, there may be less need for attention weighting in the explanation; thus, Head II-B shows a better LRP explanation than its attention-augmented counterpart Head III-B when an increasing number of relevant regions are perturbed. Likewise, the Head I architecture shows a poor LRP explanation on the PCam dataset although it achieves the best classification performance in Table \ref{performance-Pcam}.
Generally, architectures with an attention mechanism provide better LRP explanations for datasets with large object diversity, while datasets with low object diversity require less attention weighting for explanation. In summary, the Head II and Head III architectures produce heatmap quality comparable to the teacher network.
We show only LRP heatmaps from architecture with the best explanation (Head III-B) in Figures \ref{lsun_cropped_heatmaps} and \ref{heatmaps_headIII-b_pcam}.
We show pairs of heatmaps between the input sample and the top-1 nearest prototype (based on $u_k$ score) for a few selected classes.
The positive and negative evidence in the heatmaps is represented by red and blue pixels, respectively.
We obtain nonidentical input heatmaps (the columns in the top half of Figures \ref{lsun_cropped_heatmaps} and \ref{heatmaps_headIII-b_pcam}) when comparing the same input sample with different prototype samples due to the different similarity outputs computed and thus unequal relevances for each input-prototype pair.
In the left part of Figure \ref{lsun_cropped_heatmaps}, the heatmaps for almost all classes are dimmed out (the third row in the upper and lower part) after scaling due to their low similarity $u_{k}$ scores, except class $3$ (\textit{classroom}), which is the predicted class. The red pixels are the regions of similarity between the input-prototype pair that have contributed positively to the prediction \textit{classroom}.
The input heatmaps in the second row in columns 1, 4, and 5 look similar and they
correspond to the similarity of the input image (\textit{classroom}) to prototype examples belonging to the categories bedroom, classroom, and conference room, respectively. The categories bedroom, classroom, and conference room are all indoor scenes containing somewhat similar objects and indoor lighting. Since for each class we look at the nearest prototype to an image (rather than a randomly drawn prototype), we will likely find similar sceneries for these three indoor categories in a large dataset such as LSUN. Thus, the input heatmaps, which are computed by looking at the nearest prototype for each category, appear to reflect a plausible similarity among these three similar indoor scene examples.
In the lower right of Figure \ref{lsun_cropped_heatmaps} showing the top-1 prototypes from different classes,
the regions that resemble furniture in the dining room, such as chairs, tables, and stools, generally light up in the prototype heatmaps for the prediction \textit{dining room}.
They mainly correlate with the input regions containing a dining table, chair, and chandelier, as shown in the upper right of Figure \ref{lsun_cropped_heatmaps}.
\begin{table*}[!ht]
\centering
\caption{Outlier detection performance reported using the \textbf{AUROC}$\vert$\textbf{FPR95} metric for LSUN networks. Higher ($\uparrow$) AUROC values indicate better outlier detection performance, whereas lower ($\downarrow$) FPR95 values indicate better outlier detection performance.}
\setlength\tabcolsep{3.5pt}
\label{outlier_lsun_alldata}
\begin{tabular}{|c|ccc|ccc|ccc|}
\hline
\multirow{1}{*}{\textbf{Model}} & \multicolumn{3}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Setup A\\ Flowers\end{tabular}}} & \multicolumn{3}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Setup B\\ LSUN strokes\_5\end{tabular}}} & \multicolumn{3}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Setup C\\ LSUN altered color\end{tabular}}} \\ \hhline{|=|===|===|===|}
& \multicolumn{3}{c|}{\textit{\begin{tabular}[c]{@{}c@{}}Anomaly Score $^*$ \end{tabular}} } & \multicolumn{3}{c|}{\textit{\begin{tabular}[c]{@{}c@{}}Anomaly Score $^*$\end{tabular}} } & \multicolumn{3}{c|}{\textit{\begin{tabular}[c]{@{}c@{}}Anomaly Score $^*$ \end{tabular}} } \\ \cline{2-10}
IF \cite{liu2008isolation} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}} 0.678$\vert$0.661 \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}} 0.687$\vert$0.811 \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.482$\vert$0.936 \end{tabular}} \\
$\textrm{Teacher}^1$ + GODIN \cite{hsu2020generalized} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}} 0.992$\vert$0.041 \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}} 0.715$\vert$0.759 \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}\ 0.695$\vert$0.807 \end{tabular}}\\
$\textrm{Teacher}^2$ + GODIN \cite{hsu2020generalized} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}} \textbf{0.995}$\vert$\textbf{0.019} \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}} 0.690$\vert$0.786 \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.692$\vert$0.824 \end{tabular}}\\
\hhline{|=|===|===|===|}
& \multicolumn{3}{c|}{\textit{\begin{tabular}[c]{@{}c@{}}Max Prob. \end{tabular}} \cite{hendrycks2016baseline}} & \multicolumn{3}{c|}{\textit{\begin{tabular}[c]{@{}c@{}}Max Prob. \end{tabular}} \cite{hendrycks2016baseline}} & \multicolumn{3}{c|}{\textit{\begin{tabular}[c]{@{}c@{}}Max Prob. \end{tabular}} \cite{hendrycks2016baseline}} \\ \cline{2-10}
\ $\chi^2$ SVM \cite{vedaldi2012efficient} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.921$\vert$0.313 \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.742$\vert$0.737 \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.691$\vert$0.820 \end{tabular}} \\
\ ProtoDNN \cite{li2017deep} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.553$\vert$0.935 \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.497$\vert$0.961 \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.520$\vert$0.913 \end{tabular}} \\
\ ProtoPNet \cite{chen2019looks} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.847$\vert$0.478 \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.645$\vert$0.898 \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.683$\vert$0.847 \end{tabular}} \\
\ Teacher (\textit{baseline})& \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.934$\vert$0.284 \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.677$\vert$0.800 \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.690$\vert$0.798 \end{tabular}} \\
Teacher + OE \cite{hendrycks2018deep} (flowers) & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}} NA \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}} 0.679$\vert$0.812 \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.653$\vert$0.868 \end{tabular}}\\
Teacher + OE \cite{hendrycks2018deep} ($\textrm{strokes}\_5$) & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}} 0.911$\vert$0.274 \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}} NA \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.710$\vert$0.821 \end{tabular}}\\
Teacher + OE \cite{hendrycks2018deep} (color) & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}} 0.949$\vert$0.179 \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}} 0.663$\vert$0.890 \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}NA \end{tabular}}\\
\hhline{|=|===|===|===|}
& \textit{Top-1} & \textit{Top-20} & \textit{All proto.} & \textit{Top-1} & \textit{Top-20} & \textit{All proto.} & \textit{Top-1} & \textit{Top-20} & \textit{All proto.}\\ \cline{2-10}
\ Student Head I \ & 0.990$\vert$0.042 & 0.993$\vert$0.025 & 0.739$\vert$0.511 & 0.712$\vert$0.832 & 0.692$\vert$0.846 & 0.473$\vert$0.928 & 0.782$\vert$0.774 & \textbf{0.852}$\vert$\textbf{0.593} & 0.630$\vert$0.734 \\ \hline
\ Student Head II-A \ & 0.970$\vert$0.129 & 0.965$\vert$0.151 & 0.786$\vert$0.632 & 0.764$\vert$0.719 & 0.738$\vert$0.799 & 0.611$\vert$0.858 & 0.809$\vert$0.661 & 0.835$\vert$0.631 & 0.710$\vert$0.826 \\
\ Student Head II-B \ & 0.972$\vert$0.151 & 0.984$\vert$0.082 & 0.986$\vert$0.068 & 0.763$\vert$0.749 & \textbf{0.765}$\vert$0.793 & 0.677$\vert$0.902 & 0.779$\vert$0.770 & 0.790$\vert$0.799 & 0.778$\vert$0.892 \\ \hline
\ Student Head III-A \ & 0.922$\vert$0.315 & 0.943$\vert$0.221 & 0.885$\vert$0.328 & 0.726$\vert$0.748 & 0.735$\vert$\textbf{0.701} & 0.683$\vert$0.772 & 0.672$\vert$0.822 & 0.691$\vert$0.766 & 0.636$\vert$0.853 \\
\ Student Head III-B \ & 0.855$\vert$0.657 & 0.906$\vert$0.542 & 0.909$\vert$0.434 & 0.692$\vert$0.837 & 0.702$\vert$0.842 & 0.718$\vert$0.766 & 0.674$\vert$0.852 & 0.707$\vert$0.834 & 0.731$\vert$0.772\\
\ Student Head III-C \ & 0.970$\vert$0.118 & 0.972$\vert$0.071 & 0.914$\vert$0.434 & 0.726$\vert$0.769 & 0.727$\vert$0.772 & 0.509$\vert$0.890 & 0.763$\vert$0.763 & 0.786$\vert$0.690 & 0.680$\vert$0.804 \\
\hline
\multicolumn{10}{l}{\begin{tabular}[c]{@{}c@{}} \footnotesize{$^*$ The anomaly detection algorithm returns an anomaly score for every input sample, and such an algorithm is not used for classification.}\\
\end{tabular}}\\
\multicolumn{10}{l}{\begin{tabular}[c]{@{}c@{}} \footnotesize{$^1$ The parameters in the trained layers of the teacher network are fixed, and optimization involves only the GODIN layers, i.e., $g(x)$ and $h_i(x)$.}\\
\end{tabular}}\\
\multicolumn{10}{l}{\begin{tabular}[c]{@{}c@{}} \footnotesize{$^2$ The parameters in the trained layers of the teacher network are fine-tuned with a learning rate that is an order smaller than the learning}\\
\end{tabular}}\\
\multicolumn{10}{l}{\begin{tabular}[c]{@{}c@{}} \footnotesize{rate for the GODIN layers, i.e., $g(x)$ and $h_i(x)$.}\\
\end{tabular}}\\
\multicolumn{10}{c}{\begin{tabular}[c]{@{}c@{}} \end{tabular}}
\end{tabular}
\end{table*}
\begin{table*}[!ht]
\centering
\caption{Outlier detection performance reported using the \textbf{AUROC}$\vert$\textbf{FPR95} metric for PCam networks. Higher ($\uparrow$) AUROC values indicate better outlier detection performance, whereas lower ($\downarrow$) FPR95 values indicate better outlier detection performance.}
\label{outlier_pcam_alldata}
\begin{tabular}{|c|ccc|ccc|}
\hline
\multirow{1}{*}{\textbf{Model}} & \multicolumn{3}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Setup B\\ PCam strokes\_5\end{tabular}}} & \multicolumn{3}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Setup C\\ PCam altered color\end{tabular}}} \\ \hhline{|=|===|===|}
& \multicolumn{3}{c|}{\textit{\begin{tabular}[c]{@{}c@{}}Anomaly Score $^*$ \end{tabular}} } & \multicolumn{3}{c|}{\textit{\begin{tabular}[c]{@{}c@{}}Anomaly Score $^*$\end{tabular}} } \\ \cline{2-7}
IF \cite{liu2008isolation} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}} \textbf{0.723}$\vert$\textbf{0.712} \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.580$\vert$0.754 \end{tabular}} \\
$\textrm{Teacher}^1$ + GODIN \cite{hsu2020generalized} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}} 0.696$\vert$0.824 \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.585$\vert$0.967 \end{tabular}} \\
$\textrm{Teacher}^2$ + GODIN \cite{hsu2020generalized} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}} 0.579$\vert$0.929 \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.788$\vert$0.857 \end{tabular}} \\
\hhline{|=|===|===|}
& \multicolumn{3}{c|}{\textit{\begin{tabular}[c]{@{}c@{}}Max Prob. \end{tabular}} \cite{hendrycks2016baseline}} & \multicolumn{3}{c|}{\textit{\begin{tabular}[c]{@{}c@{}}Max Prob. \end{tabular}} \cite{hendrycks2016baseline}} \\ \cline{2-7}
\ $\chi^2$ SVM \cite{vedaldi2012efficient} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.521$\vert$0.938 \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.519$\vert$0.933 \end{tabular}} \\
\ ProtoDNN \cite{li2017deep} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.441$\vert$0.976 \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.087$\vert$0.991 \end{tabular}} \\
\ ProtoPNet \cite{chen2019looks} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.514$\vert$0.930 \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.450$\vert$0.654 \end{tabular}} \\
\ Teacher (\textit{baseline}) & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.527$\vert$0.905 \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.326$\vert$0.910 \end{tabular}} \\
\ Teacher + OE \cite{hendrycks2018deep} ($\textrm{strokes}\_5$) & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}NA \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.703$\vert$0.640 \end{tabular}} \\
\ Teacher + OE \cite{hendrycks2018deep} (color) & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.561$\vert$1.00 \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}NA \end{tabular}} \\
\hhline{|=|===|===|}
& \textit{Top-1} & \textit{Top-20} & \textit{All proto.} & \textit{Top-1} & \textit{Top-20} & \textit{All proto.} \\ \cline{2-7}
\ Student Head I \ & 0.576$\vert$0.817 & 0.574$\vert$0.816 & 0.511$\vert$0.960 & 0.350$\vert$0.822 & 0.358$\vert$0.853 & 0.820$\vert$0.427\\ \
\ Student Head II-B \ & 0.678$\vert$0.753 & 0.661$\vert$0.730 & 0.636$\vert$0.882 & 0.377$\vert$0.929 & 0.394$\vert$0.870 & \textbf{0.823}$\vert$\textbf{0.325}\\ \
\ Student Head III-B \ & 0.687$\vert$0.778 & 0.631$\vert$0.829 & 0.541$\vert$0.835 & 0.329$\vert$0.908 & 0.249$\vert$0.955 & 0.224$\vert$0.959 \\
\hline
\multicolumn{7}{l}{\begin{tabular}[c]{@{}c@{}} \footnotesize{$^*$ The anomaly detection algorithm returns an anomaly score for every input sample, and such an algorithm is not used for}\\
\end{tabular}}\\
\multicolumn{7}{l}{\begin{tabular}[c]{@{}c@{}} \footnotesize{classification.}\\
\end{tabular}}\\
\multicolumn{7}{l}{\begin{tabular}[c]{@{}c@{}} \footnotesize{$^1$ The parameters in the trained layers of the teacher network are fixed, and optimization involves only the GODIN layers,}\\
\end{tabular}}\\
\multicolumn{7}{l}{\begin{tabular}[c]{@{}c@{}} \footnotesize{i.e., $g(x)$ and $h_i(x)$.}\\
\end{tabular}}\\
\multicolumn{7}{l}{\begin{tabular}[c]{@{}c@{}} \footnotesize{$^2$ The parameters in the trained layers of the teacher network are fine-tuned with a learning rate that is an order smaller}\\
\end{tabular}}\\
\multicolumn{7}{l}{\begin{tabular}[c]{@{}c@{}} \footnotesize{than the learning rate for the GODIN layers, i.e., $g(x)$ and $h_i(x)$.}\\
\end{tabular}}\\
\multicolumn{7}{c}{\begin{tabular}[c]{@{}c@{}} \end{tabular}}
\end{tabular}
\end{table*}
\begin{table*}[!ht]
\centering
\caption{Outlier detection performance reported using the \textbf{AUROC}$\vert$\textbf{FPR95} metric for Stanford Cars networks. Higher ($\uparrow$) AUROC values indicate better outlier detection performance, whereas lower ($\downarrow$) FPR95 values indicate better outlier detection performance.}
\label{outlier_stanfordcar_alldata}
\begin{tabular}{|c|ccc|ccc|}
\hline
\multirow{1}{*}{\textbf{Model}} & \multicolumn{3}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Setup B\\ Stanford Cars strokes\_5\end{tabular}}} & \multicolumn{3}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Setup C\\ Stanford Cars altered color\end{tabular}}} \\ \hhline{|=|===|===|}
& \multicolumn{3}{c|}{\textit{\begin{tabular}[c]{@{}c@{}}Anomaly Score $^*$ \end{tabular}} } & \multicolumn{3}{c|}{\textit{\begin{tabular}[c]{@{}c@{}}Anomaly Score $^*$\end{tabular}} } \\ \cline{2-7}
IF \cite{liu2008isolation} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}} \textbf{0.874}$\vert$\textbf{0.533} \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}} 0.854$\vert$0.531 \end{tabular}} \\
$\textrm{Teacher}^1$ + GODIN \cite{hsu2020generalized} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}} 0.622$\vert$0.940 \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.503$\vert$0.994 \end{tabular}} \\
$\textrm{Teacher}^2$ + GODIN \cite{hsu2020generalized} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}} 0.466$\vert$0.987 \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.529$\vert$0.955 \end{tabular}} \\
\hhline{|=|===|===|}
& \multicolumn{3}{c|}{\textit{\begin{tabular}[c]{@{}c@{}}Max Prob. \end{tabular}} \cite{hendrycks2016baseline}} & \multicolumn{3}{c|}{\textit{\begin{tabular}[c]{@{}c@{}}Max Prob. \end{tabular}} \cite{hendrycks2016baseline}} \\ \cline{2-7}
\ ProtoPNet \cite{chen2019looks} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.728$\vert$1.000 \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.792$\vert$0.592 \end{tabular}} \\
\ Teacher (\textit{baseline}) & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.760$\vert$0.725 \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.789$\vert$0.635 \end{tabular}} \\
\ Teacher + OE \cite{hendrycks2018deep} ($\textrm{strokes}\_5$) & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}NA \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}\textbf{0.998}$\vert$\textbf{0.100} \end{tabular}} \\
\ Teacher + OE \cite{hendrycks2018deep} (color) & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}0.851$\vert$0.569 \end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}NA \end{tabular}} \\
\hhline{|=|===|===|}
& \textit{Top-1} & \textit{Top-20} & \textit{All proto.} & \textit{Top-1} & \textit{Top-20} & \textit{All proto.} \\ \cline{2-7}
\ Student Head I \ & 0.815$\vert$0.706 & 0.808$\vert$0.723 & 0.627$\vert$0.844 & 0.838$\vert$0.675 & 0.859$\vert$0.551 & 0.725$\vert$0.729\\ \
\ Student Head II-B \ & 0.818$\vert$0.687 & 0.799$\vert$0.770 & 0.393$\vert$0.953 & 0.810$\vert$0.662 & 0.782$\vert$0.741 & 0.265$\vert$0.975 \\
\ Student Head III-B \ & 0.700$\vert$0.857 & 0.661$\vert$0.899& 0.551$\vert$0.890 & 0.913$\vert$0.361& 0.931$\vert$0.307 & 0.903$\vert$0.346 \\
\hline
\multicolumn{7}{l}{\begin{tabular}[c]{@{}c@{}} \footnotesize{$^*$ The anomaly detection algorithm returns an anomaly score for every input sample, and such an algorithm is not used for}\\
\end{tabular}}\\
\multicolumn{7}{l}{\begin{tabular}[c]{@{}c@{}} \footnotesize{classification.}\\
\end{tabular}}\\
\multicolumn{7}{l}{\begin{tabular}[c]{@{}c@{}} \footnotesize{$^1$ The parameters in the trained layers of the teacher network are fixed, and optimization involves only the GODIN layers,}\\
\end{tabular}}\\
\multicolumn{7}{l}{\begin{tabular}[c]{@{}c@{}} \footnotesize{i.e., $g(x)$ and $h_i(x)$.}\\
\end{tabular}}\\
\multicolumn{7}{l}{\begin{tabular}[c]{@{}c@{}} \footnotesize{$^2$ The parameters in the trained layers of the teacher network are fine-tuned with a learning rate that is an order smaller}\\
\end{tabular}}\\
\multicolumn{7}{l}{\begin{tabular}[c]{@{}c@{}} \footnotesize{than the learning rate for the GODIN layers, i.e., $g(x)$ and $h_i(x)$.}\\
\end{tabular}}\\
\multicolumn{7}{c}{\begin{tabular}[c]{@{}c@{}} \end{tabular}}
\end{tabular}
\end{table*}
For the explanation on the PCam dataset in Figure \ref{heatmaps_headIII-b_pcam}, we show
explanations for a true negative, a true positive, and a false negative.
The true negative prediction shows positive relevance scores (in red) on dense clusters of lymphocytes (dark regular nuclei), which is evidence supporting the absence of tumors.
For the false negative prediction, the input, which is a positive sample (as indicated by the ground truth), resembles more of the negative class prototype (class 0) rather than the positive class prototype (class 1) in the red pixel areas.
One usually looks at the prototype nearest to the test sample for explanation, in addition to assessing the heatmap computed for the test sample. In the PCam case, interpreting the prototype together with its LRP heatmap requires domain knowledge, e.g., from a pathologist trained to recognize certain morphological structures.
\begin{figure}
\caption{Input sample and its heatmap for a false negative prediction extracted from the first two rows and the fifth column of Figure \ref{heatmaps_headIII-b_pcam}.}
\label{extracted_input_heatmap_FN}
\end{figure}
In Figure \ref{extracted_input_heatmap_FN}, which depicts the false negative explanation extracted from Figure \ref{heatmaps_headIII-b_pcam} for the input sample with respect to a negative class prototype, one can see in the upper left of the first row (displaying the input sample) a few faint, likely cancerous nuclei. The heatmap in Figure \ref{extracted_input_heatmap_FN} shows very low scores at these cancer nuclei locations (almost black) and high scores in the directly surrounding embedding tissue matrix.
Moreover, one sees high heatmap scores in some of the black dots, which are tissue-invading lymphocytes (TiLs) also present in the prototype; some but not all of them appear similar to the TiLs in the prototype (Figure \ref{heatmaps_headIII-b_pcam}, column 5, row 3).
By looking at the prototype and its heatmap in the lower part of Figure \ref{heatmaps_headIII-b_pcam}, column 5, one can see that the similarities are found in a part of the tissue invading lymphocytes and a part of the embedding stromal tissue matrix.
The explanations in Figures \ref{lsun_cropped_heatmaps} and \ref{heatmaps_headIII-b_pcam} allow domain experts to easily validate the reasoning process of the neural networks by verifying that the nearest prototypical training samples are correct and the contributing features are relevant to the predicted class. By analyzing contradictory cases of prediction and explanation, domain experts understand the model limitation and are able to make an informed judgment. We have included more LRP explanation examples in Section S-III of the Supplementary Material.
The prototype similarity scores $u_k$ that are used to identify the top-$k$ nearest prototypical examples can also be employed to gauge the explanation confidence for a given input sample. The optimal value of $k$ used for explanation depends on the task at hand. We can consider an explanation based on prototypical examples to be of sufficient confidence if the top-$k$ nearest prototypes (i.e., those with the top-$k$ highest scores $u_k$) from the predicted class are responsible for a high fixed ratio (e.g., 90\%) of the prediction mass; such prototypes can be considered to have sufficiently high similarity scores. If their total score falls below this fixed ratio, the top-$k$ nearest prototypes are considered to have insufficient similarity scores.
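To make the mass-ratio criterion concrete, here is a minimal sketch (illustrative only; the score layout and the names `sim_scores`, `class_proto_idx`, and `mass_ratio` are our assumptions, not the paper's implementation):

```python
import numpy as np

def explanation_is_confident(sim_scores, class_proto_idx, k=20, mass_ratio=0.9):
    """Check whether the top-k nearest prototypes of the predicted class
    account for a sufficient share of that class's total similarity mass.

    sim_scores      : similarity scores u_k over all prototypes
    class_proto_idx : indices of prototypes belonging to the predicted class
    """
    class_scores = np.sort(sim_scores[class_proto_idx])[::-1]  # descending
    total = class_scores.sum()
    if total <= 0:
        return False
    # Share of the class similarity mass captured by the k nearest prototypes
    return class_scores[:k].sum() / total >= mass_ratio
```

With `mass_ratio=0.9`, an explanation built from the top-$k$ prototypes is accepted only when those prototypes carry at least 90\% of the class similarity mass.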
Alternatively, one can also interpret the similarity scores in another manner.
We consider the set of top-$k$ nearest prototypes from the predicted class to be too small if the removal of the prototype with the smallest similarity score would change the prediction label such that it differed from the prediction with the full set of prototypes. This is related to the principle of pertinent positives \cite{dhurandhar2018explanations}, but performed on the prototype level rather than the level of subsets of an input sample.
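The prototype-level pertinent-positive check described above can be sketched as follows, assuming a hypothetical linear readout `W` that maps the similarity-score vector to class logits (the actual heads in the paper differ; this only illustrates the removal test):

```python
import numpy as np

def topk_set_is_too_small(u, W, proto_class, k):
    """Pertinent-positive-style check at the prototype level.

    u           : similarity scores over all prototypes
    W           : assumed linear readout from scores to class logits
    proto_class : class assignment of each prototype
    Returns True if dropping the weakest of the top-k prototypes of the
    predicted class flips the prediction relative to the full set.
    """
    pred = np.argmax(W @ u)
    own = np.where(proto_class == pred)[0]          # prototypes of the predicted class
    topk = own[np.argsort(u[own])[::-1][:k]]        # k nearest among them
    weakest = topk[np.argmin(u[topk])]
    u_drop = u.copy()
    u_drop[weakest] = 0.0                           # remove the weakest top-k prototype
    return np.argmax(W @ u_drop) != pred
```

If the function returns True, the top-$k$ set is too small in the sense above: every one of its members is needed to sustain the prediction.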
\begin{figure}
\caption{Impact on performance using different numbers of prototypes. The x-axis is the total number of prototypes from all classes.}
\label{varynumproto}
\end{figure}
\begin{table}[ht!]
\centering
\caption{Classification performance without and with pruning.}
\label{pruning-LSUN}
\begin{tabular}{|c|c|c||c|c|}
\hline
\multirow{2}{*}{\textbf{Model}} & \multirow{2}{*}{\textbf{Prune}}& \textbf{LSUN}& \multicolumn{2}{c|}{\begin{tabular}[c]{@{}c@{}}\textbf{PCam}\end{tabular}}\\ \cline{3-5}
& & \textbf{ Acc. (\%)} & \textbf{ Acc. (\%)} & \textbf{AUROC} \\ \hline
\multirow{2}{*}{Head I} & \ding{55} & \textbf{82.9} & \textbf{83.3} & \textbf{0.9216}\\
&\ding{51} & 82.7 & 82.6 & 0.9177\\ \hline
\multirow{2}{*}{Head II-B} &\ding{55} & 82.5 & 82.5& 0.9111 \\
&\ding{51} & \textbf{82.6} & \textbf{83.3} & 0.9111\\ \hline
\multirow{2}{*}{Head III-B} &\ding{55}& \textbf{83.5} & 81.0 & 0.9205\\
&\ding{51} & 83.2 & \textbf{81.8} & \textbf{0.9244}\\ \hline
\end{tabular}
\end{table}
\begin{table*}[!ht]
\centering
\setlength\tabcolsep{2.5pt}
\caption{Outlier detection performance reported using the AUROC metric for networks without and with pruning.
}
\label{outlier_lsun_alldata_pruning}
\begin{tabular}{|c|c|ccc|ccc|ccc||ccc|ccc|}
\hline
\multirow{2}{*}{\textbf{Model}} &\multirow{2}{*}{\textbf{Prune}}& \multicolumn{9}{c||}{\textbf{\begin{tabular}[c]{@{}c@{}}LSUN\end{tabular}}} & \multicolumn{6}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}PCam\end{tabular}}} \\ \cline{3-17}
& & \multicolumn{3}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Setup A\\ Flowers\end{tabular}}} & \multicolumn{3}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Setup B\\ LSUN strokes\_5\end{tabular}}} & \multicolumn{3}{c||}{\textbf{\begin{tabular}[c]{@{}c@{}}Setup C\\ LSUN altered color\end{tabular}}} & \multicolumn{3}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Setup B\\ PCam strokes\_5\end{tabular}}} & \multicolumn{3}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}Setup C\\ PCam altered color\end{tabular}}}\\
\hhline{|=|=|===|===|===||===|===|}
& & \textit{Top-1} & \textit{Top-20} & \textit{All proto.} & \textit{Top-1} & \textit{Top-20} & \textit{All proto.} & \textit{Top-1} & \textit{Top-20} & \textit{All proto.}& \textit{Top-1} & \textit{Top-20} & \textit{All proto.} & \textit{Top-1} & \textit{Top-20} & \textit{All proto.}\\ \cline{3-17}
\multirow{2}{*}{Head I} \ & \ding{55} & 0.990 & \textbf{0.993} & 0.739 & \textbf{0.712} & 0.692 & 0.473 & 0.782 & \textbf{0.852} & 0.630 & 0.576 & 0.574 & 0.511 & 0.350 & 0.358 & \textbf{0.820} \\
& \ding{51} & 0.986 & 0.982 & 0.704 & 0.701 & 0.626 & 0.477 & 0.773 & 0.821 & 0.635 & \textbf{0.579} & 0.506 & 0.467 & 0.367 & 0.323 & 0.584\\
\hline
\multirow{2}{*}{Head II-B} \ & \ding{55} & 0.972 & 0.984 & \textbf{0.986} & 0.763 & 0.765 & 0.677 & 0.779 & 0.790 & 0.778 & 0.678 & 0.661 & 0.636 & 0.377 & 0.394 & 0.823\\
& \ding{51} & 0.968 & 0.983 & 0.977 & 0.749 & 0.765 & 0.672 & 0.757 & \textbf{0.812} & 0.777 & 0.673 & \textbf{0.701} & 0.676 & 0.443 & 0.552 & \textbf{0.825} \\
\hline
\multirow{2}{*}{Head III-B} \ & \ding{55} & 0.855 & 0.906 & 0.909 & 0.692 & 0.702 & 0.718 & 0.674 & 0.707 & 0.731 & \textbf{0.687} & 0.631 & 0.541 & \textbf{0.329} & 0.249 & 0.224 \\
& \ding{51} & 0.821 & \textbf{0.948} & 0.933 & 0.689 & \textbf{0.754} & 0.742 & 0.677 & \textbf{0.780} & 0.774 & 0.641 & 0.622 & 0.623 & 0.234 & 0.211 & 0.257 \\\hline
\end{tabular}
\end{table*}
\subsection{Detecting Outliers for Additional Validation}
The prototype student networks can naturally quantify the degree of outlierness using the set of similarity scores.
The GODIN method shows the best outlier detection performance in the flowers setup, as shown in Table \ref{outlier_lsun_alldata}, due to significant differences in the data distribution between the LSUN and flowers datasets. GODIN does not perform well on more challenging outlier setups such as strokes and altered color setups in which anomalies are created via the manipulation of normal samples.
The Head I student architecture in Table \ref{outlier_lsun_alldata} achieves the second-best performance for the flowers setup and top performance for the altered color setup using the top-$20$ prototype similarity scores $u_k$. Since the image statistics of the flowers dataset and the altered color anomalies are substantially different from the normal LSUN samples, the use of early spatial pooling before computing similarity in Head I does not remove distinctive features of the anomalies but is effective in detecting outliers while offering the least potential for overfitting. For the strokes outlier setup, Head II-B
generally has the best detection performance, as shown in Table \ref{outlier_lsun_alldata} and Table I of the Supplementary Material. The placement of the spatial pooling operation before the similarity operation in Head I may have removed traces of strokes on the image, while Head II-B retains the traces of strokes during the similarity computation.
The Head III-B architecture has a higher architectural complexity than Head I and Head II and is thus less robust to outliers and prone to overfitting.
We show the outlier performance of LSUN networks under the AUPRout and AUPRin metrics in Section S-II of the Supplementary Material due to space constraints.
For the PCam and Stanford Cars datasets, we report detection only on the more challenging setups (strokes and altered color), as suggested by the average detection scores in Table \ref{outlier_lsun_alldata}.
In Table \ref{outlier_pcam_alldata}, the IF method generally achieves the best detection results on the strokes setup. Likewise, the student architectures with similarity layers before averaging (Head II-B and Head III-B) perform better in detecting strokes artifacts on the PCam dataset, which is consistent with the LSUN networks. For the altered color setup, as shown in Table \ref{outlier_pcam_alldata}, student networks with lower architectural complexity, such as Head I and Head II-B, perform better than the more complex Head III-B.
The Head II-B and Head I networks using all prototypes are equally competent on the PCam altered color setting.
The outliers created from the PCam dataset in this setting are easier to detect than those created from the LSUN dataset due to the significant differences between the color distributions of the outliers and normal PCam samples.
We also observe that the ProtoDNN underperforms on the altered color setup, as a majority of the outliers have a higher maximum class probability than the normal samples.
We show the outlier performance of PCam networks under the AUPRout and AUPRin metrics in Section S-II of the Supplementary Material due to space constraints.
For the Stanford Cars experiments in Table \ref{outlier_stanfordcar_alldata}, we also observe that the IF model performs best in the strokes setup, similar to the PCam experiments. The competing OE method achieves comparable performance to IF on the Stanford Cars strokes setup and superior performance on the altered color setup. We deduce that exposing the network to an outlier set during training can be highly beneficial when the in-distribution set has low variability among samples, as the model can then easily learn a conservative concept of inliers, in contrast to a setting with highly varied in-distribution samples.
We also observe that the Head III-B network with poor classification performance in Table \ref{performance-stanfordcar} performs better than the other head architectures in the altered color setup but not in the strokes setup as opposed to its performance on PCam outliers in Table \ref{outlier_pcam_alldata}.
Due to lack of space, we report the outlier performance of Stanford Cars networks under the AUPRout and AUPRin metrics in Section S-II of the Supplementary Material.
In conclusion, except for the PCam altered color setup, using 20 prototypes with the Head II and Head III architectures generally yields reasonable outlier detection performance, mostly better than or comparable to the IF, GODIN, and OE methods.
This supports our argument for unified prototype-based models with generic head architectures that are interpretable and robust to outliers.
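For reference, the AUROC$\vert$FPR95 metric reported throughout the tables can be computed directly from the two score sets; the sketch below assumes the convention that a higher score means more outlier-like (e.g., a negated maximum similarity $u_k$) and does not handle tied scores:

```python
import numpy as np

def auroc_fpr95(scores_in, scores_out):
    """AUROC and FPR at 95% TPR for outlier detection.

    scores_in  : detection scores of in-distribution (normal) samples
    scores_out : detection scores of outlier samples (higher = more outlier-like)
    """
    scores = np.concatenate([scores_out, scores_in])
    n_out, n_in = len(scores_out), len(scores_in)
    labels = np.concatenate([np.ones(n_out), np.zeros(n_in)])
    # AUROC via the rank-sum (Mann-Whitney U) formulation; ties not averaged
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    auroc = (ranks[labels == 1].sum() - n_out * (n_out + 1) / 2) / (n_out * n_in)
    # FPR95: fraction of inliers above the threshold that keeps 95% of outliers
    thr = np.percentile(scores_out, 5)
    fpr95 = float(np.mean(scores_in >= thr))
    return auroc, fpr95
```

A perfect detector separates the two score sets completely, yielding AUROC 1.0 and FPR95 0.0.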
\section{Discussion}\label{Discussion}
\subsection{Varying Number of Prototypes}
We investigate the impact on classification performance using different numbers of prototypes. We use an equal number of prototypes per class. We discuss only the performance for student networks with simple (Head I) and complex (Head III-B) architectures due to a lack of space.
Based on Figure \ref{varynumproto}, only a marginal change in performance is observed for the simple Head I architecture on the LSUN dataset, whereas a large decline in performance occurs for the more complex Head III-B architecture when using a large number of prototypes (150 prototypes), due to its greater tendency to overfit.
Likewise, only slight performance differences are reported for the PCam dataset with different numbers of prototypes. For the relatively simple PCam dataset, using a small number of prototypes (20 prototypes) is sufficient and less likely to overfit, as shown by the high test accuracy for both architectures.
Hence, using a small number of prototypes for simple datasets and complex network architectures is a good practice.
\subsection{Pruning of Prototypes and Relevant Neurons}\label{Pruning of Prototypes and Relevant Neurons}
Since the prototype importance weight $\mathcal{M}$ defines the importance
of the $K$ prototypes in the network prediction, we can prune the set of prototypes with the least weight in $\mathcal{M}$ to evaluate the impact on classification and outlier performances. We do not evaluate the LRP explanation of the pruned networks, as the general architecture of the networks remains largely unchanged.
We prune $p=30\%$ (the same as $p$ used in the iterative prototype replacement algorithm) of the prototypes and the relevant neurons, and then we fine-tune the network. Alternatively, one can set the pruning factor based on the classification performance on the validation set.
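A minimal sketch of the prototype-selection step of this pruning procedure (illustrative only; $\mathcal{M}$ is taken as a per-prototype importance vector, and the subsequent removal of the associated neurons and the fine-tuning pass are not shown):

```python
import numpy as np

def prune_prototypes(M, p=0.30):
    """Return sorted indices of the prototypes kept after pruning the
    fraction p with the smallest importance weights in M."""
    n_prune = int(np.floor(p * len(M)))
    order = np.argsort(M)            # ascending: least important first
    return np.sort(order[n_prune:])  # drop the n_prune smallest
```

The kept indices then determine which prototype vectors and downstream weights survive before the network is fine-tuned.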
Based on Table \ref{pruning-LSUN}, we observe marginal changes in the classification performance after pruning. The pruned network with Head II-B shows a slight increase in accuracy for both datasets.
The outlier detection performance of the pruned networks is reported in Table \ref{outlier_lsun_alldata_pruning}. The pruned networks exhibit a larger change in the outlier performance compared to the prediction accuracy. For the LSUN dataset, the student network with the Head III-B architecture consistently performs better with pruning, as indicated by the significant gain in the detection performance. Since the pruning of prototypes and relevant neurons reduces the size of the network, the Head III-B network overfits less
after fine-tuning and is more robust to outliers. We do not observe the same benefit of pruning for Head III-B on the PCam outlier setups. For the strokes setup, an evident gain in performance can be observed for the Head II-B network trained on the PCam dataset. A smaller gain is seen for the Head II-B network in the altered color setup for both datasets. Thus, one may consider pruning uninformative prototypes if the network is prone to overfitting.
\section{Conclusion}
Various student architectures that compute prototype-based predictions were investigated for their suitability to provide example-based explanations and detect outliers.
The selection of prototypical examples from the training set via the iterative prototype replacement method guaranteed meaningful explanations. The prototype similarity scores can also naturally be used to quantify the outlierness of a sample.
However, the architecture with the best LRP explanation did not achieve the best outlier detection performance.
\section*{Acknowledgment}
This research is supported by both ST Engineering Electronics and NRF, Singapore, under its Corporate Lab @ University Scheme (Title: STEE Infosec-SUTD Corporate Laboratory) and the Research Council of Norway, via the SFI Visual Intelligence (Grant Number: 309439).
\ifCLASSOPTIONcaptionsoff
\fi
\vskip -2\baselineskip plus -1fil
\begin{IEEEbiographynophoto}{Penny Chong}
is a research scientist at IBM Research, Singapore.
She received her Ph.D. in Information Systems Technology and Design from Singapore University of Technology and Design (SUTD) in 2021, under the supervision of Alexander Binder and Ngai-Man Cheung. Her research interests include machine learning and explainable AI and its applications.
\end{IEEEbiographynophoto}
\vskip -2\baselineskip plus -1fil
\begin{IEEEbiographynophoto}{Ngai-Man Cheung}
is an associate professor at the Singapore University of Technology and Design (SUTD). He received his Ph.D. degree in Electrical Engineering from the University of Southern California (USC), Los Angeles, CA, in 2008. His research interests include image and signal processing, computer vision and AI.
\end{IEEEbiographynophoto}
\vskip -2\baselineskip plus -1fil
\begin{IEEEbiographynophoto}{Yuval Elovici}
is the director of the Telekom Innovation Laboratories at Ben-Gurion University of the Negev (BGU), head of the BGU Cyber Security Research Center and a professor in the Department of Software and Information Systems Engineering at BGU.
He holds a Ph.D. in Information Systems from Tel-Aviv University.
His primary research interests are computer and network security, cyber security, web intelligence, information warfare, social network analysis, and machine learning.
\end{IEEEbiographynophoto}
\vskip -2\baselineskip plus -1fil
\begin{IEEEbiographynophoto}{Alexander Binder}
is an associate professor at the University of Oslo (UiO). He received a Dr.rer.nat. from Technische Universit\"at Berlin, Germany in 2013. He was an assistant professor at the Singapore University of Technology and Design (SUTD) from 2015 to 2020.
His research interests include explainable deep learning among other topics.
\end{IEEEbiographynophoto}
\end{document} |
\begin{document}
\title{Quantile regression for longitudinal data: unobserved heterogeneity and informative missingness. \ Supplementary material}
\onehalfspacing
\section{ML estimation}
Parameter estimates for the \textit{lqmHMM} and the \textit{lqHMM+LDO} are obtained by using the Baum-Welch algorithm \citep{McDonaldZucchini1997}.
As before, we suppress the $\tau$ indexing of the model parameters. We refer to LDO classes with the generic term ``components'', to align the terminology of the \textit{lqmHMM} and the \textit{lqHMM+LDO}, and use $\pi_{ig}$ in place of $\pi_{ig} (T_i \mid \boldsymbol \lambda, \tau)$ for the \textit{lqHMM+LDO} formulation, while $\pi_{ig} = \pi_{g}, \forall i = 1, \dots, n$ for the \textit{lqmHMM}.
Let $u_{it}(h) = \mathbb I \left[S_{it} = h \right]$ denote the indicator variable for the $i$-th individual being in the $h$-th state at time occasion $t$, and let $u_{it}(k,h) = \mathbb I \left[ S_{it-1} = k, S_{it} = h \right]$ indicate whether individual $i$ moves from the $k$-th state at time occasion $t-1$ to the $h$-th one at time occasion $t$.
Moreover, $\zeta_{ig}$ is the indicator variable for the $i$-th unit belonging to the $g$-th component. Starting from the definition of the complete data log-likelihood,
\begin{align} \label{auxiliary_NPML}
\ell_c(\boldsymbol \Phi \mid \textbf y, \textbf T, \textbf S, \boldsymbol \zeta,\tau) & \propto
\sum _{i = 1} ^ n \bigg\{
\sum_{h = 1}^m u_{i1}(h) \log \delta_{h} +
\sum_{t=2}^{T_i} \sum_{h,k=1}^m u_{it}(k,h) \log q_{kh} +
\sum_{g = 1}^G \zeta_{ig} \log \pi_{ig} +
\nonumber \\[0.1cm]
& - T_i \log (\sigma) -
\sum _{t = 1}^{T_i} \sum_{h=1}^m
\sum_{g = 1}^G u_{it}(h) \zeta_{ig}
\rho_{\tau} \left[
\frac{ y_{it} - \mu_{it}[S_{it} = h, \textbf b_g] }{\sigma}
\right]
\bigg\},
\end{align}
parameter estimates are derived by using an EM algorithm.
In the E-step we take the expected value of the complete data log-likelihood \eqref{auxiliary_NPML}, given the observed data and the current parameter estimates; we refer to such quantity as $Q(\boldsymbol \Phi \mid \boldsymbol \Phi^{(r)})$. This amounts to replacing the indicator variables by the corresponding (posterior) expected values given by the following expressions
\begin{align*}
& \hat u_{it}(h \mid \tau) = {\rm E}_{{\boldsymbol \Phi}^{(r)}}\left[u_{it}(h)\mid \textbf y_{i}\right] \\
& \hat u_{it}(k,h \mid \tau) = {\rm E}_{{\boldsymbol \Phi}^{(r)}}\left[u_{it}(k,h )\mid \textbf y_{i}\right] \\
& \hat{\zeta}_{ig}(\tau) = {\rm E}_{{\boldsymbol \Phi}^{(r)}}\left[\zeta_{ig} \mid {\bf y}_{i}\right] \\
& \hat u_{it}(h \mid g, \tau) = {\rm E}_{{\boldsymbol \Phi}^{(r)}}\left[u_{it}(h)\mid \zeta_{ig}=1, \textbf y_{i}\right].
\end{align*}
The calculation of these terms is greatly simplified by considering forward and backward variables; see \cite{Baum1970}. In the present framework, forward variables, $a_{it} (h, g \mid \tau)$, are defined as the joint density of the longitudinal measures up to time $t$, for a generic individual ending up in the $h$-th state, conditional on the $g$-th mixture component:
\begin{align}\label{cap5_fwdRecursion}
a_{it}(h,g \mid \tau) =
f \left[y_{i1:t}, S_{it} = h \mid g, \tau \right ].
\end{align}
Similarly, the backward variables $b_{it} (h, g \mid \tau)$ represent the density of the longitudinal sequence from occasion $t+1$ to the last observation $T_i$, conditional on being in the $h$-th state at time $t$ and on the $g$-th mixture component:
\begin{align}\label{cap5_bwdRecursion}
b_{it}(h,g \mid \tau) =
f \big[ y_{it+1:T_i} \mid S_{it} = h, g, \tau \big].
\end{align}
Both terms can be computed recursively as
\begin{align*}
& a_{i1}(h,g \mid \tau) = \delta_h f_{y \mid sb} \big[ y_{i1} \mid S_{i1} = h, g, \tau \big] \\
&a_{it}(h,g \mid \tau) =
\sum_{k=1}^m a_{it-1}(k,g \mid \tau) q_{kh} f_{y \mid sb} \big[ y_{it} \mid S_{it} = h, g, \tau \big] \\
& b_{iT_{i}}(h,g \mid \tau) = 1 \\
& b_{it-1}(h,g \mid \tau) = \sum _{k=1}^m
b_{it}(k,g \mid \tau) q_{hk} f_{y \mid sb} \big[ y_{it} \mid S_{it} = k, g, \tau \big].
\end{align*}
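These recursions are straightforward to implement. The sketch below (in Python, with hypothetical argument names) computes the forward and backward variables for a single sequence and a fixed mixture component, given a matrix of state-dependent densities; in practice the recursions should be run on a log scale, or with per-step scaling, to avoid numerical underflow for long sequences.

```python
import numpy as np

def forward_backward(delta, Q, dens):
    """Forward-backward recursions for one longitudinal sequence.

    delta : (m,) initial state probabilities
    Q     : (m, m) transition matrix, Q[k, h] = P(S_t = h | S_{t-1} = k)
    dens  : (T, m) state-dependent densities f(y_t | S_t = h), for a
            fixed mixture component g

    Returns the forward variables a[t, h] and backward variables b[t, h].
    """
    T, m = dens.shape
    a = np.zeros((T, m))
    b = np.ones((T, m))
    a[0] = delta * dens[0]                       # initial step
    for t in range(1, T):
        a[t] = (a[t - 1] @ Q) * dens[t]          # forward recursion
    for t in range(T - 2, -1, -1):
        b[t] = Q @ (dens[t + 1] * b[t + 1])      # backward recursion
    return a, b
```

A useful sanity check is that $\sum_h a_{it}(h,g \mid \tau)\, b_{it}(h,g \mid \tau)$ is constant in $t$ and equals the likelihood of the whole sequence for the given component.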
Once these variables have been defined, the E-step can be performed straightforwardly. Considering the indicator variables for both the hidden Markov process and the finite mixture as missing data, conditional expected values, given the observed data and the current parameter estimates, can be easily computed as
\begin{align}
& \hat u_{it}(h \mid \tau) = \frac
{
\sum_g
a_{it}(h,g \mid \tau)
b_{it}(h,g \mid \tau)
\pi_g
}
{
\sum_{h} \sum_g
a_{it}(h,g \mid \tau)
b_{it}(h,g \mid \tau)
\pi_g
}, \nonumber \\[0.4cm]
& \hat u_{it}(k,h \mid \tau) =
\frac{
\sum_g
a_{it-1}(k,g \mid \tau)
q_{kh} \,
f_{y \mid sb}\left(y_{it} \mid S_{it} = h, g, \tau \right)
b_{it}(h,g \mid \tau)
\pi_g
} {
\sum_{h,k} \sum_g
a_{it-1}(k,g \mid \tau)
q_{kh} \,
f_{y \mid sb}\left(y_{it} \mid S_{it} = h,g, \tau \right)
b_{it}(h,g \mid \tau)
\pi_g
}, \nonumber \\[0.4cm]
& \hat{\zeta}_{ig}(\tau) = \frac{\sum_{h = 1}^m a_{iT_{i}}(h,g \mid \tau) \pi_g}{\sum_{l = 1}^G \sum_{h = 1}^m a_{iT_{i}}(h,l \mid \tau) \pi_l} \nonumber \\[0.4cm]
& \hat u_{it}(h \mid g, \tau) = \frac{a_{it}(h,g \mid \tau) b_{it}(h,g \mid \tau)}{\sum_{k = 1}^m a_{it}(k,g \mid \tau) b_{it}(k,g \mid \tau)}.
\end{align}
The first three expressions correspond to the posterior expected values of the state and component membership indicators described so far, while the last one represents the conditional posterior probability of being in state $h$ at time occasion $t$, given that the individual belongs to the $g$-th component of the finite mixture.
In the M-step, model parameter estimates are derived by maximizing $Q(\boldsymbol \Phi \mid \boldsymbol \Phi^{(r)})$ with respect to $\boldsymbol \Phi$. Given the separability of the parameter spaces for the longitudinal and the missing data process, the maximization can be partitioned into independent sub-problems, thus considerably simplifying the computation.
Standard results for basic HMMs are valid for the initial and the transition probability estimates,
\begin{align}
\hat \delta _h = \frac{\sum _{i = 1}^n \hat u_{i1}(h \mid \tau)}{n},
\quad
\hat q_{kh} = \frac {\sum _{i = 1}^n \sum_{t=2}^{T_i} \hat u_{it}(k,h \mid \tau)}{\sum _{i = 1}^n \sum_{t=2}^{T_i} \sum _{h = 1}^m \hat u_{it}(k,h \mid \tau)}, \quad h, k = 1, \dots, m.
\end{align}
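As a minimal illustration, assuming the posterior quantities are stored as arrays (names hypothetical), these closed-form updates can be coded as:

```python
import numpy as np

def update_delta_Q(u1, u_trans):
    """Closed-form M-step updates for the initial and transition probabilities.

    u1      : (n, m) posterior memberships at the first occasion,
              i.e. hat u_{i1}(h | tau)
    u_trans : (n, m, m) posterior transition quantities summed over time,
              i.e. sum_t hat u_{it}(k, h | tau), rows indexed by origin k
    """
    delta = u1.sum(axis=0) / u1.shape[0]          # hat delta_h
    num = u_trans.sum(axis=0)                     # (m, m) aggregated counts
    Q = num / num.sum(axis=1, keepdims=True)      # normalise each row over h
    return delta, Q
```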
\\
Longitudinal model parameters, say $\boldsymbol \psi$, are estimated by solving a weighted estimating equation, with weights given by the posterior probabilities of the hidden Markov process and the finite mixture. More specifically, the updates are obtained as
\begin{align}
\hat{\boldsymbol \psi}=\arg \min_{\boldsymbol \psi}
\sum_{i=1}^n \sum _{t = 1}^{T_i} \sum_{h=1}^m \sum_{g = 1}^G
\hat \zeta_{ig}(\tau) \hat u_{it}(h \mid g, \tau)
\rho_{\tau} \left[ \frac{y_{it} - \mu_{it} [S_{it} = h, \textbf b_g]
}
{\sigma} \right].
\label{eq_longEstim_NPML}
\end{align}
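For scalar location parameters, such as the state-dependent intercepts or the component locations when $\textbf z_{it} = 1$, the weighted minimisation above has an explicit solution: the weighted $\tau$-quantile of the relevant residuals. A sketch, assuming a vector of residuals and a vector of posterior weights (names hypothetical):

```python
import numpy as np

def check_loss(u, tau):
    # rho_tau(u) = u * (tau - 1{u < 0})
    return u * (tau - (u < 0.0))

def weighted_quantile(y, w, tau):
    """Minimiser of sum_i w_i rho_tau(y_i - theta) over a scalar theta.

    The solution is a weighted tau-quantile: sort the observations and
    return the first value at which the cumulative weight reaches tau
    times the total weight.
    """
    order = np.argsort(y)
    y, w = np.asarray(y, float)[order], np.asarray(w, float)[order]
    cum = np.cumsum(w)
    return y[np.searchsorted(cum, tau * cum[-1])]
```

The vector-valued update for $\boldsymbol \beta$ solves the same weighted check-loss problem with regressors $\textbf x_{it}$, for which a linear-programming or interior-point quantile regression solver would be used in practice.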
By extending the block algorithm proposed by \cite{Farcomeni2012} for the \textit{lqHMM}, three different steps have to be alternated. Noticing that $S_{it} = h$ implies $\boldsymbol\alpha_{s_{it}} = \boldsymbol\alpha_h$, fixed parameters $\boldsymbol \beta$ are estimated by solving
\begin{equation}
\hat{\boldsymbol \beta}=\arg \min_{\boldsymbol \beta} \sum_{i=1}^n \sum _{t = 1}^{T_i} \sum_{h=1}^m \sum_{g = 1}^G \hat \zeta_{ig}(\tau) \hat u_{it}(h \mid g, \tau)
\rho_{\tau} \left[ \tilde y_{it} - \textbf x_{it} ^\prime \boldsymbol \beta \right],
\end{equation}
where $\tilde y_{it} = [y_{it} - \textbf z_{it} ^\prime \hat{\textbf b}_g - \textbf w_{it} ^\prime \hat{\boldsymbol \alpha}_h]$.
\\
State-dependent parameters ${\boldsymbol \alpha}_h, h = 1, ...,m$ are updated via
\begin{equation}
\hat{\boldsymbol \alpha}_h=\arg \min_{\boldsymbol \alpha_h} \sum_{i=1}^n \sum _{t = 1}^{T_i} \sum_{g = 1}^G \hat \zeta_{ig}(\tau) \hat u_{it}(h \mid g, \tau)
\rho_{\tau} \left[ \tilde y_{it} - \textbf w_{it} ^\prime \boldsymbol \alpha_h \right],
\end{equation}
where $\tilde y_{it} = [y_{it} - \textbf x_{it} ^\prime \hat{\boldsymbol \beta} - \textbf z_{it} ^\prime \hat{\textbf b}_g]$.
\\
The locations $\textbf b_g$ are computed by solving
\begin{equation}
\hat{\textbf b}_g=\arg \min_{\textbf b_g} \sum_{i=1}^n \sum _{t = 1}^{T_i} \sum_{h=1}^m \hat \zeta_{ig}(\tau) \hat u_{it}(h \mid g, \tau)
\rho_{\tau} \left[ \tilde y_{it} - \textbf z_{it} ^\prime \textbf b_g \right],
\end{equation}
with $\tilde y_{it} = [y_{it} - \textbf x_{it} ^\prime \hat{\boldsymbol \beta} - \textbf w_{it} ^\prime \hat{\boldsymbol \alpha}_h]$.
\\
With regard to the component probabilities, a closed-form expression is available for
the \textit{lqmHMM} formulation:
\begin{equation}
\hat \pi_g = \frac{1}{n} \sum_{i = 1}^n \hat \zeta_{ig}(\tau).
\label{eq_pgEstim}
\end{equation}
For the \textit{lqHMM+LDO} specification, parameters in the LDO class model are estimated via a constrained optimization of the weighted likelihood for the cumulative logit model, that is
\begin{align}
\hat{\boldsymbol \lambda}=& \arg \max_{\boldsymbol \lambda} \sum_{i=1}^n \sum_{g=1}^G \hat{\zeta}_{ig}(\tau)
\log \left\{
\left[\frac{\exp(\lambda_0 + \lambda_g T_i)}{1+ \exp(\lambda_0 + \lambda_g T_i)}
\right] -
\left[\frac{\exp(\lambda_0 + \lambda_{g-1} T_i)}{1+ \exp(\lambda_0 + \lambda_{g-1} T_i)}
\right] \right\}.
\end{align}
Finally, the scale parameter of the AL distribution is estimated by
\begin{equation}
\hat \sigma = \frac{1}{\sum_{i=1}^n T_i} \sum_{i = 1}^n \sum_{t=1}^{T_i} \sum_{h=1}^m \sum_{g=1}^G \hat \zeta_{ig}(\tau) \hat{u}_{it}(h\mid g, \tau) \rho_\tau
\left(
y_{it} - \mu_{it}(S_{it} = h, \textbf b_g)
\right).
\end{equation}
The E- and M-steps of the algorithm are iterated until convergence, that is, until the relative difference between subsequent log-likelihood values falls below an arbitrarily small threshold $\varepsilon > 0$.
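Schematically, the whole procedure can be organised as a generic EM loop with a relative-likelihood stopping rule; the function arguments below are placeholders for the E- and M-steps described above.

```python
import numpy as np

def em(loglik_and_estep, mstep, phi0, eps=1e-8, max_iter=500):
    """Generic EM iteration with a relative-likelihood stopping rule.

    loglik_and_estep(phi) -> (loglik, posteriors)  # the E-step
    mstep(posteriors)     -> updated parameters    # the M-step
    Iterations stop once |l_r - l_{r-1}| / |l_{r-1}| < eps.
    """
    phi, ll_old = phi0, -np.inf
    for _ in range(max_iter):
        ll, post = loglik_and_estep(phi)
        if np.isfinite(ll_old) and abs(ll - ll_old) / abs(ll_old) < eps:
            break
        phi = mstep(post)
        ll_old = ll
    return phi, ll
```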
\section{Simulation study, scenario 1: complete data}
To investigate the empirical behaviour of the proposed \textit{lqmHMM} in the presence of time-constant and time-varying sources of unobserved heterogeneity, we have implemented a large-scale simulation study. Different experimental scenarios have been considered, and the performance of the proposed approach has been compared with that of the \textit{lqHMM}, obtained by setting $G = 1$ (this has been done to avoid extra variability due to incoherence between the code we have developed and that of \citealp{Farcomeni2012}).\\
Data have been generated from a two state mixed HMM ($m = 2)$ with initial/transition probabilities given by
\begin{align}
\boldsymbol \delta = \left(0.7, 0.3 \right) \quad \text{and} \quad \textbf Q = \left(
\begin{matrix}
0.8 & 0.2 \\
0.2 & 0.8
\end{matrix} \right).
\end{align}
The following longitudinal regression model holds for the $h$-th state of the Markov chain:
\begin{equation}\label{cap5_simul_model}
Y_{it} = \alpha_{h} + (b_i +\beta_{1})\, x_{it1} + \beta_{2} \, x_{it2} + \varepsilon_{it},
\end{equation}
where $x_{it1} \sim \text{Unif}[-10,10]$, $x_{it2} = x_{i2} \sim \text{Bin}(1, 0.5)$, $\beta_1 = 2$ and $\beta_{2}=-0.8$. Different probability distributions have been considered to generate the measurement error $\varepsilon_{it}$: a standard Gaussian, and a chi-square distribution with $\nu = 2$ degrees of freedom to allow for skewed data.
The time-constant random slopes $b_i$ capture individual departures from the marginal effect $\beta_1$ and represent IID draws from a standard Gaussian ($\sigma_b = 1$) or a Student $t_3$ ($\sigma_b = \sqrt 3$) distribution. The state-dependent intercepts have been set to $\alpha_1 = 100$ and $\alpha_2 = 110$.
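For concreteness, the data-generating process of this scenario can be sketched as follows (Gaussian case for both $b_i$ and $\varepsilon_{it}$; the function name is hypothetical):

```python
import numpy as np

def simulate_lqmhmm(n, T, seed=0):
    """Simulate from the two-state mixed HMM of the simulation study:
    Y_it = alpha_{S_it} + (b_i + beta1) x_it1 + beta2 x_i2 + eps_it,
    with delta = (0.7, 0.3), the transition matrix Q below, beta1 = 2,
    beta2 = -0.8, alpha = (100, 110), standard Gaussian b_i and eps_it.
    """
    rng = np.random.default_rng(seed)
    delta = np.array([0.7, 0.3])
    Q = np.array([[0.8, 0.2], [0.2, 0.8]])
    alpha = np.array([100.0, 110.0])
    beta1, beta2 = 2.0, -0.8
    b = rng.standard_normal(n)              # time-constant random slopes
    x2 = rng.binomial(1, 0.5, size=n)       # time-constant covariate
    Y = np.empty((n, T))
    S = np.empty((n, T), dtype=int)
    for i in range(n):
        S[i, 0] = rng.choice(2, p=delta)    # hidden Markov chain
        for t in range(1, T):
            S[i, t] = rng.choice(2, p=Q[S[i, t - 1]])
        x1 = rng.uniform(-10, 10, size=T)
        eps = rng.standard_normal(T)
        Y[i] = alpha[S[i]] + (b[i] + beta1) * x1 + beta2 * x2[i] + eps
    return Y, S
```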
The proposed simulation design follows the one discussed by \cite{LiuBottai2009} with a twofold aim. First, different distributions for the time-constant random parameters have been considered to study the robustness of the NPML approach as we move away from Gaussianity; second, different choices for the measurement error allow us to study the performance of the proposed model in the presence of asymmetric errors.
\\
Comparing results obtained when fitting the \textit{lqHMM} with those from the \textit{lqmHMM}, we expect a reduced efficiency of $\hat \beta_1$ in the former case, due to the extra-variability associated with $b_i$ which is not taken into consideration by this model specification; also, we expect this will produce some bias in the state-dependent intercepts and the associated latent Markovian structure.
\\
For each scenario we have generated $B = 250$ samples with varying sample sizes $(n = 100, 200)$ and lengths of longitudinal sequences $(T = 5, 10)$. Model parameters have been estimated for $\tau = (0.25, 0.50, 0.75)$. While we consider balanced designs only, the algorithm can be readily applied to unbalanced designs as well. For each simulated dataset, we have estimated a \textit{lqHMM} and a \emph{lqmHMM} with $m = 2$ hidden states; for the \emph{lqmHMM} specification, we have considered a varying number of mixture components ($G = 1, \dots, 15$) and retained the model with the best BIC value. To evaluate the performance of the analysed models, we have computed the average bias and root mean square error (RMSE) for each model parameter. Simulation results for the parameters in the longitudinal model are reported in Tables \ref{simul_q25}-\ref{simul_q75}.
\begin{center}
Tables \ref{simul_q25}-\ref{simul_q75} ABOUT HERE
\end{center}
We may notice that the precision of parameter estimates is higher at the center of the distribution when compared to other quantiles both for \textit{lqHMM} and \textit{lqmHMM}. Such differences are due to the reduced amount of information at the tails and tend to decrease with increasing sample sizes and number of measurement occasions.
\textbf{This result is particularly evident for the state-dependent intercepts. Indeed, based on the simulation scheme we have implemented, intercepts are directly related to the estimated quantiles, while the other parameters are constant when moving from $\tau = 0.25$ to $\tau = 0.75$. The precision of $\hat \alpha_1$ and $\hat \alpha_2$ is, therefore, generally higher when estimating the center of the distribution than when focusing on the tails.} \\
The results we have obtained for $\tau = 0.25$ are worth some discussion. Given the values used for $\boldsymbol \alpha, \boldsymbol \delta$ and $\textbf Q$, the first quartile of the conditional distribution mainly entails units in the first state of the Markov chain, with intercept $\alpha_1$. Therefore, $\alpha_2$ is always estimated with lower precision than $\alpha_1$ owing to the sparser information; an exception is the case $b_i \sim t_3$ when fitting the \textit{lqHMM}, as $\alpha_2$ is less influenced by the heavy left tail of the random effect distribution (not accounted for by the \textit{lqHMM}).
However, when the number of repeated measurements $T$ increases, we have a higher chance of observing transitions towards the second state, and the quality of the results clearly improves when fitting the \textit{lqmHMM}; bias and higher variability persist when fitting the \textit{lqHMM}.
When comparing the two modelling approaches, it is clear that estimating a \textit{lqHMM} when both time-constant and time-varying sources of unobserved heterogeneity affect the longitudinal responses results in lower precision for all model parameters. A higher bias is observed for the state-dependent intercepts: the extra variability not taken into account by the \textit{lqHMM} formulation is mostly captured by the latent Markov chain.
However, given that $x_{i2}$ is time-constant, sources of unobserved heterogeneity that do not evolve over time may also inflate the variability of $\hat \beta_2$. RMSEs associated with $\hat \beta_2$ are generally higher (compared with those for $\hat \beta_1$) also when fitting the \textit{lqmHMM}, possibly because of some aliasing between this parameter and the discrete mixture-component locations.
The impact of this issue seems to decrease with increasing $T$ for the \textit{lqmHMM}, while it persists for the \textit{lqHMM}.
\\
Looking at the estimated standard deviation of the time-constant, individual-specific random parameter $b_i$ ($\sigma_b$), we may observe a reduced RMSE when this parameter is drawn from a standard Gaussian rather than from a $t_3$ distribution, which suggests that some extra variability has been absorbed by the quantile regression. In fact, the estimated $\sigma_b$ is always lower than the true one ($\sigma_b = \sqrt 3$).
\\
To conclude, by looking at the results reported in Tables \ref{simul_q25}-\ref{simul_q75}, it can be noticed that the estimates obtained by the \textit{lqmHMM} show a clear and consistent pattern, with precision increasing with $T$ and $n$, regardless of the measurement error distribution. As regards the random parameter distribution, some extra variability in the parameter estimates is generally observed, especially for small sample sizes and short longitudinal sequences.
On the other hand, the estimates obtained under the \textit{lqHMM} formulation are persistently biased and appear somewhat influenced by the measurement error distribution and, more substantially, by the random parameter distribution.
\section*{Tables}
\begin{table}[!ht]
\centering
\caption{Simulation study for $\tau = 0.25$. Bias and RMSE for longitudinal parameter estimates under the \textit{lqHMM} and \textit{lqmHMM} formulation.}
\scalebox{0.90}{
\begin{tabular}{llrrrrrrrrrrrrrrrrrrrrrrrrr}
\toprule
& & \multicolumn{2}{c}{$b_i \sim N(0,1), \epsilon_{it} \sim N(0,1)$} &
\multicolumn{2}{c}{$b_i \sim N, \epsilon_{it} \sim \chi_2^2$} &
\multicolumn{2}{c}{$b_i \sim t_3, \epsilon_{it} \sim \chi_2^2$} \\[0.2cm]
&& \multicolumn{1}{c}{\textit{lqHMM}} & \multicolumn{1}{c}{\textit{lqmHMM}} & \multicolumn{1}{c}{\textit{lqHMM}} & \multicolumn{1}{c}{\textit{lqmHMM}}& \multicolumn{1}{c}{\textit{lqHMM}} &\multicolumn{1}{c}{\textit{lqmHMM}}\\
\hline
\multirow{14}{*}{\rotatebox{90}{$n = 100$}}\\
&&\multicolumn{7}{l}{$\mathtt{T = 5}$}\\
& $\alpha_1$ & -4.970 (6.25) & -0.229 (0.42) & 1.224 (1.71) & 0.167 (0.36) & -8.909 (13.53) & 0.012 (0.71) \\
& $\alpha_2$ & -6.784 (7.73) & -2.220 (4.34) & 1.535 (1.82) & 0.270 (0.47) & -7.330 (8.10) & -3.231 (5.36) \\
& $\beta_1$ & 0.015 (0.12) & 0.008 (0.10) & 0.014 (0.12) & 0.008 (0.10) & 0.010 (0.16) & 0.020 (0.16) \\
& $\beta_2$ & 0.290 (1.68) & 0.058 (0.29) & -0.168 (1.08) & -0.112 (0.42) & 0.048 (1.51) & 0.113 (0.68) \\
& $\sigma_b$ & & -0.100 (0.15) & & -0.107 (0.14) & & -0.340 (0.43) \\
\cline{2-10}
& &\multicolumn{7}{l}{$\mathtt{T = 10}$} \\
& $\alpha_1$ & -3.897 (5.16) & -0.176 (0.22) & -1.949 (3.13) & -0.007 (0.10) & -9.482 (13.12) & -0.069 (0.13) \\
& $\alpha_2$ & -4.497 (5.99) & -0.246 (1.00) & -1.825 (3.31) & 0.002 (0.10) & -5.729 (7.03) & -0.049 (0.12) \\
& $\beta_1$ & 0.008 (0.12) & 0.007 (0.10) & 0.004 (0.12) & 0.003 (0.10) & 0.016 (0.15) & 0.022 (0.17) \\
& $\beta_2$ & -0.102 (1.93) & 0.041 (0.14) & 0.014 (1.28) & 0.016 (0.11) & 0.217 (1.64) & 0.045 (0.12) \\
& $\sigma_b$ & & -0.068 (0.11) & & -0.070 (0.10) & & -0.318 (0.42) \\
\hline
\hline
\multirow{15}{*}{\rotatebox{90}{$n = 200$}}\\
&&\multicolumn{7}{l}{$\mathtt{T = 5}$}\\
& $\alpha_1$ & -6.105 (7.09) & -0.257 (0.35) & -2.679 (4.11) & -0.004 (0.10) & -12.994 (17.12) & 0.018 (0.30) \\
& $\alpha_2$ & -7.211 (8.10) & -0.914 (2.54) & -3.139 (4.88) & -0.007 (0.11) & -7.623 (8.38) & -1.599 (3.76) \\
& $\beta_1$ & 0.007 (0.10) & 0.000 (0.07) & 0.005 (0.10) & 0.003 (0.07) & 0.006 (0.12) & 0.009 (0.12) \\
& $\beta_2$ & 0.032 (0.90) & 0.030 (0.18) & 0.046 (0.92) & 0.015 (0.11) & -0.094 (0.71) & 0.026 (0.20) \\
& $\sigma_b$ & & -0.091 (0.12) & & -0.067 (0.09) & & -0.266 (0.42) \\
\cline{2-10}
&&\multicolumn{7}{l}{$\mathtt{T = 10}$} \\
& $\alpha_1$ & -3.862 (5.10) & -0.108 (0.13) & -1.285 (1.80) & 0.014 (0.06) & -12.395 (16.85) & -0.030 (0.07) \\
& $\alpha_2$ & -3.995 (5.67) & -0.096 (0.12) & -0.918 (1.76) & 0.018 (0.07) & -6.190 (7.41) & -0.006 (0.07) \\
& $\beta_1$ & 0.003 (0.09) & 0.001 (0.07) & 0.011 (0.09) & 0.000 (0.07) & -0.001 (0.10) & 0.007 (0.12) \\
& $\beta_2$ & 0.129 (1.54) & 0.007 (0.08) & 0.031 (0.81) & 0.003 (0.07) & 0.048 (1.57) & 0.009 (0.08) \\
& $\sigma_b$ & & -0.047 (0.07) & & -0.054 (0.08) & & -0.272 (0.41) \\
\bottomrule
\bottomrule
\end{tabular}\label{simul_q25}
}
\end{table}
\begin{table}[!ht]
\centering
\caption{Simulation study for $\tau = 0.50$. Bias and RMSE for longitudinal parameter estimates under the \textit{lqHMM} and \textit{lqmHMM} formulation.}
\scalebox{0.90}{
\begin{tabular}{llrrrrrrrrrrrrrrrrrrrrrrrrr}
\toprule
& & \multicolumn{2}{c}{$b_i \sim N(0,1), \epsilon_{it} \sim N(0,1)$} &
\multicolumn{2}{c}{$b_i \sim N, \epsilon_{it} \sim \chi_2^2$} &
\multicolumn{2}{c}{$b_i \sim t_3, \epsilon_{it} \sim \chi_2^2$} \\[0.2cm]
&& \multicolumn{1}{c}{\textit{lqHMM}} & \multicolumn{1}{c}{\textit{lqmHMM}} & \multicolumn{1}{c}{\textit{lqHMM}} & \multicolumn{1}{c}{\textit{lqmHMM}}& \multicolumn{1}{c}{\textit{lqHMM}} &\multicolumn{1}{c}{\textit{lqmHMM}}\\
\hline
\multirow{14}{*}{\rotatebox{90}{$n = 100$}}\\
&&\multicolumn{7}{l}{$\mathtt{T = 5}$}\\
& $\alpha_1$ & -0.131 (0.36) & -0.028 (0.12) & 0.150 (0.48) & 0.145 (0.26) & 0.366 (0.62) & 0.200 (0.32) \\
& $\alpha_2$ & -0.003 (0.43) & -0.002 (0.13) & 0.196 (0.60) & 0.133 (0.27) & 0.064 (0.63) & 0.134 (0.28) \\
& $\beta_1$ & 0.018 (0.12) & 0.012 (0.10) & 0.017 (0.13) & 0.008 (0.10) & 0.003 (0.15) & 0.012 (0.16) \\
& $\beta_2$ & 0.073 (0.43) & 0.027 (0.15) & 0.086 (0.57) & 0.011 (0.26) & 0.023 (0.64) & -0.034 (0.29) \\
& $\sigma_b$ & & -0.104 (0.13) & & -0.109 (0.14) & & -0.459 (0.51) \\
\cline{2-10}
& &\multicolumn{7}{l}{$\mathtt{T = 10}$} \\
& $\alpha_1$ & -0.082 (0.31) & -0.016 (0.09) & 0.202 (0.40) & 0.116 (0.18) & 0.359 (0.54) & 0.105 (0.17) \\
& $\alpha_2$ & 0.031 (0.28) & -0.003 (0.10) & 0.259 (0.44) & 0.100 (0.19) & 0.148 (0.47) & 0.135 (0.20) \\
& $\beta_1$ & 0.013 (0.11) & 0.002 (0.07) & 0.011 (0.11) & 0.001 (0.07) & 0.023 (0.14) & 0.022 (0.16) \\
& $\beta_2$ & 0.021 (0.33) & 0.018 (0.10) & -0.007 (0.42) & 0.016 (0.16) & 0.019 (0.44) & -0.004 (0.17) \\
& $\sigma_b$ & & -0.075 (0.09) & & -0.074 (0.10) & & -0.402 (0.45) \\
\hline
\hline
\multirow{15}{*}{\rotatebox{90}{$n = 200$}}\\
&&\multicolumn{7}{l}{$\mathtt{T = 5}$}\\
& $\alpha_1$ & -0.076 (0.25) & 0.002 (0.06) & 0.186 (0.36) & 0.089 (0.12) & 0.383 (0.55) & 0.160 (0.22) \\
& $\alpha_2$ & 0.062 (0.31) & 0.003 (0.06) & 0.235 (0.46) & 0.091 (0.13) & 0.099 (0.46) & 0.125 (0.21) \\
& $\beta_1$ & 0.006 (0.09) & 0.002 (0.07) & 0.009 (0.09) & 0.001 (0.07) & -0.001 (0.11) & 0.010 (0.12) \\
& $\beta_2$ & 0.024 (0.32) & -0.007 (0.06) & 0.013 (0.40) & -0.004 (0.10) & -0.007 (0.45) & -0.005 (0.17) \\
& $\sigma_b$ & & -0.056 (0.08) & & -0.054 (0.08) & & -0.388 (0.43) \\
\cline{2-10}
&&\multicolumn{7}{l}{$\mathtt{T = 10}$} \\
& $\alpha_1$ & -0.080 (0.21) & 0.002 (0.06) & 0.192 (0.32) & 0.116 (0.18) & 0.335 (0.43) & 0.085 (0.12) \\
& $\alpha_2$ & 0.055 (0.20) & 0.003 (0.06) & 0.261 (0.39) & 0.100 (0.19) & 0.123 (0.33) & 0.106 (0.14) \\
& $\beta_1$ & 0.009 (0.09) & 0.002 (0.07) & 0.013 (0.08) & 0.001 (0.07) & 0.005 (0.10) & 0.011 (0.12) \\
& $\beta_2$ & 0.009 (0.22) & -0.007 (0.06) & -0.008 (0.31) & 0.016 (0.16) & 0.026 (0.32) & 0.011 (0.12) \\
& $\sigma_b$ & & -0.056 (0.08) & & -0.074 (0.10) & & -0.328 (0.38) \\
\bottomrule
\bottomrule
\end{tabular}\label{simul_q50}
}
\end{table}
\begin{table}[!ht]
\centering
\caption{Simulation study for $\tau = 0.75$. Bias and RMSE for longitudinal parameter estimates under the \textit{lqHMM} and \textit{lqmHMM} formulation.}
\scalebox{0.90}{
\begin{tabular}{llrrrrrrrrrrrrrrrrrrrrrrrrr}
\toprule
& & \multicolumn{2}{c}{$b_i \sim N(0,1), \epsilon_{it} \sim N(0,1)$} &
\multicolumn{2}{c}{$b_i \sim N, \epsilon_{it} \sim \chi_2^2$} &
\multicolumn{2}{c}{$b_i \sim t_3, \epsilon_{it} \sim \chi_2^2$} \\[0.2cm]
&& \multicolumn{1}{c}{\textit{lqHMM}} & \multicolumn{1}{c}{\textit{lqmHMM}} & \multicolumn{1}{c}{\textit{lqHMM}} & \multicolumn{1}{c}{\textit{lqmHMM}}& \multicolumn{1}{c}{\textit{lqHMM}} &\multicolumn{1}{c}{\textit{lqmHMM}}\\
\hline
\multirow{14}{*}{\rotatebox{90}{$n = 100$}}\\
&&\multicolumn{7}{l}{$\mathtt{T = 5}$}\\
& $\alpha_1$ & 1.109 (1.45) & 0.170 (0.23) & 1.741 (2.41) & 0.158 (0.39) & 3.015 (3.92) & 0.389 (0.61) \\
& $\alpha_2$ & 1.632 (1.90) & 0.213 (0.28) & 2.045 (2.54) & 0.343 (0.53) & 3.792 (5.95) & 0.384 (0.59) \\
& $\beta_1$ & 0.008 (0.12) & 0.007 (0.10) & 0.023 (0.13) & 0.022 (0.10) & 0.003 (0.16) & 0.014 (0.16) \\
& $\beta_2$ & -0.107 (0.95) & -0.045 (0.20) & -0.287 (1.36) & -0.174 (0.46) & -0.116 (1.91) & -0.151 (0.73) \\
& $\sigma_b$ & & -0.108 (0.14) & & -0.102 (0.13) & & -0.375 (0.48) \\
\cline{2-10}
& &\multicolumn{7}{l}{$\mathtt{T = 10}$} \\
& $\alpha_1$ & 1.020 (1.16) & 0.120 (0.16) & 1.388 (1.92) & -0.056 (0.21) & 3.907 (5.12) & 0.023 (0.27) \\
& $\alpha_2$ & 1.488 (1.58) & 0.129 (0.16) & 1.869 (2.28) & 0.111 (0.25) & 7.327 (16.36) & 0.229 (0.36) \\
& $\beta_1$ & 0.010 (0.12) & 0.006 (0.09) & 0.006 (0.12) & 0.003 (0.10) & 0.017 (0.15) & 0.022 (0.16) \\
& $\beta_2$ & -0.104 (0.70) & -0.016 (0.12) & -0.122 (1.03) & -0.064 (0.27) & 0.196 (2.04) & -0.041 (0.29) \\
& $\sigma_b$ & & -0.079 (0.11) & & -0.062 (0.10) & & -0.320 (0.40) \\
\hline
\hline
\multirow{15}{*}{\rotatebox{90}{$n = 200$}}\\
&&\multicolumn{7}{l}{$\mathtt{T = 5}$}\\
& $\alpha_1$ & 0.950 (1.04) & 0.104 (0.14) & 1.161 (1.32) & 0.003 (0.23) & 2.593 (3.39) & 0.181 (0.35) \\
& $\alpha_2$ & 1.507 (1.59) & 0.136 (0.18) & 1.689 (1.79) & 0.162 (0.31) & 3.843 (6.76) & 0.274 (0.43) \\
& $\beta_1$ & 0.010 (0.10) & 0.006 (0.07) & 0.009 (0.09) & 0.003 (0.08) & 0.005 (0.12) & 0.015 (0.12) \\
& $\beta_2$ & -0.057 (0.53) & 0.003 (0.11) & -0.085 (0.69) & -0.062 (0.27) & -0.213 (1.66) & -0.096 (0.36) \\
& $\sigma_b$ & & -0.083 (0.10) & & -0.069 (0.09) & & -0.310 (0.39) \\
\cline{2-10}
&&\multicolumn{7}{l}{$\mathtt{T = 10}$} \\
& $\alpha_1$ & 0.931 (0.99) & 0.094 (0.11) & 1.183 (1.48) & -0.129 (0.19) & 3.393 (4.63) & -0.062 (0.17) \\
& $\alpha_2$ & 1.511 (1.55) & 0.111 (0.13) & 1.732 (1.93) & 0.076 (0.18) & 5.919 (10.44) & 0.142 (0.22) \\
& $\beta_1$ & 0.010 (0.09) & 0.002 (0.07) & 0.005 (0.08) & 0.003 (0.07) & 0.009 (0.10) & 0.015 (0.12) \\
& $\beta_2$ & -0.073 (0.45) & -0.016 (0.08) & -0.075 (0.60) & -0.027 (0.18) & 0.090 (1.44) & 0.004 (0.19) \\
& $\sigma_b$ & & -0.059 (0.08) & & -0.048 (0.07) & & -0.257 (0.42) \\
\bottomrule
\bottomrule
\end{tabular}\label{simul_q75}
}
\end{table}
\end{document} |
\begin{document}
\noindent
\title{Efficient routing in Poisson small-world networks}
\author{ M. {\sc Draief} \thanks{Statistical Laboratory, Centre for
Mathematical Sciences, Wilberforce Road, Cambridge CB3 0WB UK
E-mail: {\tt [email protected]}} ~and~A. {\sc Ganesh}
\thanks{Microsoft Research, 7 J.J. Thomson Avenue, Cambridge CB3 0FB E-mail: {\tt [email protected]} }}
\date{}
\maketitle
\begin{abstract}
\noindent
In recent work, Jon Kleinberg considered a small-world network
model consisting of a $d$-dimensional lattice augmented with
shortcuts. The probability of a shortcut being present between two
points decays as a power, $r^{-\alpha}$, of the distance $r$
between them. Kleinberg showed that greedy routing is efficient if
$\alpha = d$ and that there is no efficient decentralised routing
algorithm if $\alpha \neq d$. The results were extended to a
continuum model by Franceschetti and Meester. In our work, we
extend the result to more realistic models constructed from a
Poisson point process, wherein each point is connected to all its
neighbours within some fixed radius, as well as possessing random
shortcuts to more distant nodes as described above.
\end{abstract}
\section{Introduction}
A classical random graph model introduced by Erd\H{o}s and R\'enyi
consists of $n$ nodes, with the edge between any pair of vertices
being present with probability $p(n)$, independent of other pairs.
Recently, there has been considerable interest in alternative
models where the nodes are given coordinates in a Euclidean
space, and the probability of an edge between a pair of nodes
$u$ and $v$ is given by a function $g(\cdot)$ of the distance $r(u,v)$
between the nodes; edges between different node pairs are independent.
Such `random connection' or `spatial random graph' models and variants
thereof arise, for instance, in the study of wireless communication networks.
The ``small-world phenomenon'' (the principle that all people are linked
by short chains of acquaintances), which has long been a matter of
folklore, was inaugurated as an area of experimental study in the
social sciences through the pioneering work of Stanley Milgram \cite{Mil67}.
Recent works have suggested that the phenomenon is pervasive in networks
arising in nature and technology, and motivated interest in mathematical
models of such networks. While Erd\H{o}s-R\'enyi random graphs possess
the property of having a small diameter (smaller than logarithmic in
the number of nodes, above the connectivity threshold for $p(n)$),
they are not good models for social networks because of the independence
assumption. On the other hand, spatial random graphs are better at
capturing clustering because of the implicit dependence between edges
induced by the connection function $g(\cdot)$.
Watts and Strogatz \cite{WaSt98} conducted a set of re-wiring experiments
on graphs, and observed that by re-wiring a few random links in finite
lattices, the average path length was reduced drastically (approaching
that of random graphs). This led them to propose a model of ``small-world
graphs'', which essentially consists of a lattice augmented with random
links acting as shortcuts, which play an important role in shrinking
the average path length. By the length of a path we mean the number of
edges on it, and distance refers to graph distance (length of shortest path)
unless otherwise specified.
The diameter of the Watts-Strogatz model in the 1-dimensional case was
obtained by Barbour and Reinert \cite{BR}. Benjamini and Berger \cite{BB}
considered a variant of this 1-dimensional model wherein the shortcut
between any pair of nodes, instead of being present with constant
probability, is present with probability given by a connection function
$g(\cdot)$; they specifically considered connection functions of the form
$g(r) \sim \beta r^{-\alpha}$, where $\beta$ and $\alpha$ are given constants,
and $r(u,v)$ is the graph distance between $u$ and $v$ in the underlying
lattice (i.e., the $L_1$ distance).
The general $d$-dimensional version of this model, on the finite lattice
with $n^d$ points, was studied by Coppersmith et al. \cite{CGS}. They
showed that the diameter of the graph is (i) $\Theta(\log n/\log \log n)$
if $\alpha=d$, (ii) at most polylogarithmic in $n$ if $d<\alpha<2d$, and
(iii) at least polynomial in $n$ if $\alpha>2d$. Finally, it was shown by
Benjamini et al. \cite{BKPS} that the diameter is a constant if $\alpha<d$.
The sociological experiments of Milgram demonstrated not only that
there is a short chain of acquaintances between strangers but also
that they are able to find such chains. What sort of graph models
have this property? Specifically, when can decentralised routing
algorithms (which we define later) find a short path between
arbitrary source and destination nodes?
This question was addressed by Jon Kleinberg \cite{Klei00} for the
class of finite $d$-dimensional lattices augmented with shortcuts,
where the probability of a shortcut being present between two nodes
decays as a power $r^{-\alpha}$ of the distance $r$ between them.
Kleinberg showed that greedy routing is efficient if $\alpha = d$
and that there is no efficient decentralised routing algorithm if
$\alpha \neq d$. The results were extended to a continuum model by
Franceschetti and Meester \cite{FrMee04}. Note that these results
show that decentralised algorithms cannot find short routes when
$\alpha \neq d$, even though such routes are present for $\alpha<2d$
by the results of Benjamini et al. and Coppersmith et al. cited
above; when $\alpha > 2d$, no short routes are present.
\section{Our Model}
In this work, we consider a model constructed from a Poisson point
process on a finite square, wherein each point is connected to all
its neighbours within some fixed radius, as well as possessing random
shortcuts to more distant nodes. More precisely:
\begin{itemize}
\item We consider a sequence of graphs indexed by $n\in \mathbb{N}$.
\item Nodes form a Poisson process of rate $1$ on the square
$[0,\sqrt{n}]^2$.
\item Each node $x$ is linked to all nodes that are at distance less
than $r_n=\sqrt{c\log n}$, for a sufficiently large constant $c$.
In particular, if $c>1/\pi$, then this graph is connected with high
probability (abbreviated whp, and meaning with probability going to
1 as $n$ tends to infinity); see \cite{penrose}.
These links are referred to as local edges and the corresponding nodes
as the local contacts of $x$.
\item For two nodes $u$ and $v$ such that ${r}(u,v) > \sqrt{c\log n}$,
the edge $(u,v)$ is present with probability $a_n{r}(u,v)^{-\alpha}
\wedge 1$. Such edges are referred to as shortcuts. The parameter $a_n$
is chosen so that the expected number of shortcuts per node is equal to
some specified constant, ${\overline d}$.
\end{itemize}
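For readers who want to experiment with the model, here is a minimal simulation sketch (not part of the paper; the Poisson process is approximated by exactly $n$ uniform points, and the normalisation $a_n = \overline{d}/(\pi \log n)$ is an illustrative choice for the case $\alpha = 2$):

```python
import math
import random

def torus_dist(p, q, side):
    """L2 distance on the torus obtained by identifying opposite edges
    of the square [0, side]^2 (edge effects ignored, as in the analysis)."""
    dx = abs(p[0] - q[0]); dx = min(dx, side - dx)
    dy = abs(p[1] - q[1]); dy = min(dy, side - dy)
    return math.hypot(dx, dy)

def build_small_world(n, c=5.0, alpha=2.0, dbar=1.0, seed=0):
    """One realisation of the model: n (approximately Poisson-many)
    uniform points on [0, sqrt(n)]^2, local links within
    r_n = sqrt(c log n), and shortcuts with probability
    min(1, a_n * r^(-alpha))."""
    rng = random.Random(seed)
    side = math.sqrt(n)
    r_n = math.sqrt(c * math.log(n))
    a_n = dbar / (math.pi * math.log(n))  # illustrative normalisation
    pts = [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(n)]
    local = {i: set() for i in range(n)}      # local contacts of each node
    shortcut = {i: set() for i in range(n)}   # shortcut neighbours
    for i in range(n):
        for j in range(i + 1, n):
            r = torus_dist(pts[i], pts[j], side)
            if r < r_n:
                local[i].add(j); local[j].add(i)
            elif rng.random() < min(1.0, a_n * r ** (-alpha)):
                shortcut[i].add(j); shortcut[j].add(i)
    return pts, local, shortcut
```

For small $n$ the $O(n^2)$ pair scan is adequate; a serious simulation would use a spatial index for the local links.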
The objective is to route a message from an arbitrary source node $s$
to an arbitrary destination $t$ using a small number of hops. We are
interested in decentralised routing algorithms, which do not require
global knowledge of the graph topology. It is assumed throughout that
each node knows its location (co-ordinates) on the plane, as well as
the location of all its neighbours, both local and shortcut, and of
the destination $t$. We show that efficient decentralised routing is
possible only if $\alpha = 2$. More precisely, we show the following:
\begin{itemize}
\item $\alpha=2$: there is a greedy decentralised algorithm
to route a message from source to destination in $O(\log^2 n)$ hops.
\item $\alpha<2$: any decentralised routing needs more than
$n^{\gamma}$ hops on average, for any $\gamma$ such that
$\gamma<(2-\alpha)/6$.
\item $\alpha>2$: any decentralised routing needs more than
$n^{\gamma}$ hops on average, for any $\gamma<\frac{\alpha-2}{2(\alpha-1)}$.
\end{itemize}
As noted by Kleinberg for the lattice model, the case $\alpha=2$
corresponds to a ``scale-free" network: the expected number of shortcuts
from a node $x$ to nodes which lie between distance $r$ and $2r$ from it
is the same for any $r$. It was observed by Franceschetti and Meester in
their continuum model that this property is related to the impossibility
of efficient decentralised routing when $\alpha \neq 2$ through the fact
that shortcuts can't make sufficient progress towards the destination
when $\alpha > 2$ (they are too short) while they can't home in on small
enough neighbourhoods of the destination when $\alpha < 2$ (they are too
long). Similar remarks apply to our model as well.
A model very similar to ours was considered by Sharma and Mazumdar
\cite{SM} who use it to describe an ad-hoc sensor network. The
sensors are located at the points of a Poisson process and can
communicate with nearby sensors through wireless links (corresponding
to local contacts). In addition, it is possible to deploy a small number
of wired links (corresponding to shortcuts), and the question they
address is that of how to place these wired links in order to enable
efficient decentralised routing.
In the analysis presented below, we ignore edge effects for ease of
exposition. This is equivalent to considering distances as being defined
on the torus obtained by identifying opposite edges of the square.
\section{Efficiency of greedy routing when $\alpha=2$}
When $\alpha=2$, we show that the following \emph{approximately}
greedy algorithm succeeds whp in reaching the destination in a number
of hops which is polylogarithmic in $n$, the expected number of nodes.
Denote by $C(u,r)$ the circle of radius $r$ centred at node $u$.
If there is no direct link from the source $s$ to the destination
$t$, then the message is passed via intermediate nodes as follows.
At each stage, the message carries the address (co-ordinates) of
the destination $t$, as well as a radius $r$ which is initialised
to ${r}(s,t)$, the distance between $s$ and $t$. Suppose the message
is currently at node $x$ and has radius $r > \sqrt{c\log n}$.
(If $r \le \sqrt{c\log n}$, then the node which updated $r$ would have
contained $t$ in its local contact list and delivered the message
immediately.) If node $x$ has a shortcut to some node $y \in A(t,r)$,
where the annulus $A(t,r)$ is defined as $A(t,r)= C(t,\frac{r}{2})
\setminus C(t,\frac{r}{4})$, then $x$ forwards the message to
$y$. If there is more than one such node, the choice can be arbitrary.
Otherwise, it forwards the message to one of its local
contacts which is closer to $t$ than itself. When a node $y$
receives a message, it updates $r$ to $r/2$ if
${r}(y,t)\le r/2$, and leaves $r$ unchanged otherwise.
In other words, if $x$ can find a shortcut which reduces the distance to
the destination by at least a half but by no more than three-quarters, it
uses such a shortcut. Otherwise, it uses a local contact to reduce the
distance to the destination. In that sense, the algorithm is approximately
greedy. The reason for considering such an algorithm rather than a greedy
algorithm that would minimize the distance to the destination at each
step is to preserve independence, which greatly simplifies the analysis.
Note that if a greedy step from $x$ takes us to $y$ (i.e., of all nodes
to which $x$ possesses a shortcut, $y$ is closest to $t$), then the
conditional law of the point process in the circle $C(t,r(t,y))$ is
no longer unit rate Poisson. The fact that there are no shortcuts from
$x$ to nodes within this circle biases the probability law and greatly
complicates the analysis. Our approximate greedy algorithm gets around
this problem.
Observe that if the message passes through a node $x$, the value of
$r$ immediately after visiting $x$ lies between ${r}(x,t)$ and
$2{r}(x,t)$.
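The forwarding rule described above can be sketched as follows (a hypothetical implementation, not from the paper; the toroidal metric is restated so the sketch is self-contained, and ties between eligible shortcuts or local contacts are broken arbitrarily, as the algorithm permits):

```python
import math

def torus_dist(p, q, side):
    # L2 distance on the torus [0, side]^2 (edge effects ignored).
    dx = abs(p[0] - q[0]); dx = min(dx, side - dx)
    dy = abs(p[1] - q[1]); dy = min(dy, side - dy)
    return math.hypot(dx, dy)

def approx_greedy_route(pts, local, shortcut, s, t, side, max_hops=10**6):
    """Approximately greedy routing: prefer a shortcut into the annulus
    A(t, r) = C(t, r/2) minus C(t, r/4); otherwise forward to any local
    contact strictly closer to t.  Returns the hop sequence, or None if
    no closer local contact exists (unlikely, by the lemma below)."""
    path, x = [s], s
    r = torus_dist(pts[s], pts[t], side)
    while x != t and len(path) <= max_hops:
        if t in local[x] or t in shortcut[x]:
            path.append(t)
            break
        nxt = None
        for y in shortcut[x]:
            if r / 4 <= torus_dist(pts[y], pts[t], side) <= r / 2:
                nxt = y                  # shortcut lands in A(t, r)
                break
        if nxt is None:
            d_x = torus_dist(pts[x], pts[t], side)
            closer = [y for y in local[x]
                      if torus_dist(pts[y], pts[t], side) < d_x]
            if not closer:
                return None              # stuck: no closer local contact
            nxt = closer[0]
        x = nxt
        path.append(x)
        if torus_dist(pts[x], pts[t], side) <= r / 2:
            r /= 2                       # a phase ends: halve the radius
    return path
```

On a toy chain of nodes with only local links, the message simply walks node by node towards the destination, and the carried radius halves exactly when the remaining distance drops below $r/2$.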
We have implicitly assumed that any node can find a local contact closer
to $t$ than itself. We first show that this assumption holds whp if
$c$ is chosen sufficiently large.
Fix $c>0$ and $n\in \mathbb{N}$.
For two points $x$ and $y$ in the square $[0,\sqrt{n}]^2$, and a
realisation $\omega$ of the unit rate Poisson process on the square,
define the properties
$$
{\cal P}_n(x,y,\omega) = \{ \exists \ u\in \omega: {r}(u,y) < {r}(x,y)
\quad \mbox{and} \quad {r}(u,x) \le \sqrt{c\log n} \},
$$
and
$$
{\cal P}_n(\omega) = \bigwedge_{(x,y):{r}(x,y) \ge \sqrt{c\log n}}
{\cal P}_n(x,y,\omega).
$$
\begin{lem} \label{lem:good_local_contact}
If $c>0$ is sufficiently large, then
$P({\cal P}_n(\cdot)) \to 1$ as $n$ tends to infinity.
\end{lem}
In words, with high probability, any two points $x$ and $y$ in the square
$[0,\sqrt{n}]^2$ with ${r}(x,y) > \sqrt{c\log n}$ have the property
that there is a point $u$ of the unit rate Poisson process within
distance $\sqrt{c\log n}$ of $x$ which is closer than $x$ to $y$.
In particular, if $x$ and $y$ are themselves points of the Poisson
process, then $u$ is a local contact of $x$ which is closer to $y$.
The key point to note about the lemma is that it gives a probability
bound which is uniform over all such node pairs.
\begin{proof} Suppose ${r}(x,t)\ge \sqrt{c\log n}$. Consider the
circle $C_1$ of radius $\sqrt{c\log n}$ centred at $x$ and the
circle $C_2$ of radius ${r}(x,t)$ centred at $t$. For any point
$y\neq x$ in their intersection, ${r}(y,t) < {r}(x,t)$.
Moreover, the intersection contains a sector of $C_1$ of angle
$2\pi/3$. Denote this sector $D_1$. Now consider a tessellation of
the square $[0,\sqrt{n}]^2$ by small squares of side $\beta
\sqrt{c\log n}$.
Note that for a sufficiently small geometrical constant $\beta$ that
doesn't depend on $c$ or $n$ ($\beta =1/2$ suffices), the sector $D_1$
fully contains at least one of the smaller squares.
Hence, if every small square contains at least one point of the Poisson
process, then every node at distance greater than $\sqrt{c\log n}$ from
$t$ can find at least one local contact which is closer to $t$.
Number the small squares in some order and let $X_i$ denote the number
of nodes in the $i^{\rm th}$ small square, $i=1,\ldots,n/(\beta^2 c\log n)$.
The number of squares is assumed to be an integer for simplicity.
Clearly, the $X_i$ are iid Poisson random variables with mean
$\beta^2 c\log n$. Hence, by the union bound,
$$
P(\exists \ i: X_i=0) \le \sum_{i=1}^{n/(\beta^2 c\log n)} P(X_i=0)
= \frac{n}{\beta^2 c\log n} e^{-\beta^2 c\log n},
$$
which goes to zero as $n$ tends to infinity, provided that $\beta^2 c>1$.
In particular, $c>4$ suffices since we can take $\beta=1/2$.
\end{proof}
We now state the main result of this section.
\begin{thm}
\label{thm:beta2} Consider the small world random graph described above
with $\alpha =2$, expected node degree $\overline{d}=1$, and $c>0$
sufficiently large, as required by Lemma \ref{lem:good_local_contact}.
Then, the number of hops for message delivery between any pair of nodes
is of order $\log^2 n$ whp.
\end{thm}
\begin{proof} We first evaluate the normalisation constant
$a_n$ by noting that the expected degree, $\overline{d}$, of
a node located at the centre of the square satisfies
$$
\overline{d} \le a_n \int_{\sqrt{c\log n}}^{\sqrt{n/2}}
x^{-2} 2\pi x dx = \pi a_n (\log n-\log \log n - \log (2c)),
$$
and so
\begin{equation} \label{eq:normalisation1}
a_n \ge \frac{1}{\log n},
\end{equation}
for all $n$ sufficiently large, by the assumption that $\overline{d}=1$.
Next, we compute the probability of finding a suitable shortcut at
each step of the greedy routing algorithm. We think of the routing
algorithm as proceeding in phases. The value of $r$ is halved at
the end of each phase. The value of $r$ immediately after the
message reaches a node $x$ satisfies the relation ${r}(x,t) \in
(r/2,r]$ at each step of the routing algorithm.
We suppose that $r > k \sqrt{c\log n}$, for some large constant $k$.
Denote by $N_A$ the number of nodes in the annulus $A(t,r)$ and observe
that $N_A$ is Poisson with mean $3\pi r^2/16$. The distance from
$x$ to any of these nodes is bounded above by $3r/2$, and so the
probability that a shortcut from $x$ is incident on a particular one
of these nodes is bounded below by $a_n (3r/2)^{-2}$. Thus,
conditional on $N_A$, the probability that $x$ has a shortcut to one
of the $N_A$ nodes in $A(t,r)$ is bounded below by
\begin{equation} \label{eq:good_hop_prob1}
p(r,N_A) = 1 - \Bigl( 1- \frac{4a_n}{9r^2} \Bigr)^{N_A}.
\end{equation}
If $x$ doesn't have such a shortcut, the message is passed via local contacts
which are successively closer to $t$, and hence satisfy the same lower bound
on the probability of a shortcut to $A(t,r)$. Consequently, the number of
local steps $L_x$ until a shortcut is found is bounded above by a
geometric random variable with conditional mean $1/p(r,N_A)$.
Since $N_A \sim \mbox{Pois}(3\pi r^2/16)$, we have by a standard
application of the Chernoff bound that
$$
P(N_A \le \gamma r^2/16) \le \exp \Bigl( -\frac{(3\pi-\gamma)r^2}{16}
+ \frac{\gamma r^2}{16}\log\frac{3\pi}{\gamma} \Bigr),
$$
for any $\gamma<3\pi$.
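For completeness, this is the usual exponential-moment argument for a Poisson variable: for any $\theta > 0$,

```latex
P(N_A \le a) \;\le\; e^{\theta a}\, E\bigl[e^{-\theta N_A}\bigr]
\;=\; \exp\Bigl(\theta a + \tfrac{3\pi r^2}{16}\bigl(e^{-\theta}-1\bigr)\Bigr),
```

and taking $a = \gamma r^2/16$ with the optimal $\theta = \log(3\pi/\gamma)$ (positive precisely when $\gamma < 3\pi$) gives the displayed bound.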
Recall that $r \ge k\sqrt{c\log n}$ for the large constant $k$ fixed above.
Taking $\gamma = 3\pi/2$, we obtain
\begin{equation} \label{eq:node_number_bd}
P \Bigl( N_A \le \frac{3\pi r^2}{32} \Bigr) \le \exp \Bigl(
-\frac{3\pi k^2 c\log n}{32} (1-\log 2) \Bigr).
\end{equation}
Suppose now that $N_A < 3\pi r^2/32$. The number of local hops,
$L_x$, to route the message from $x$ to $A$ is bounded above by the
number of nodes outside $A$, since the distance to $t$ is strictly
decreasing after each hop. Hence,
\begin{equation} \label{eq:localhops_condbd1}
E \Bigl[ L_x \Bigm| N_A < \frac{3\pi r^2}{32} \Bigr] \le
n- \mbox{area}(A) \le n.
\end{equation}
Next, if $N_A \ge 3\pi r^2/32$, then we have by
(\ref{eq:good_hop_prob1}) and (\ref{eq:normalisation1}) that
$$
p(r,N_A) \ge 1 - \exp \Bigl( -\frac{\pi a_n}{24} \Bigr) \ge
1 - \exp \Bigl( -\frac{\pi}{24\log n} \Bigr) \ge \frac{\pi}{48\log n},
$$
where the last inequality holds for all $n$ sufficiently large.
Since the number of hops to reach $A$ is bounded above by a geometric
random variable with mean $1/p(r,N_A)$, we have
\begin{equation} \label{eq:localhops_condbd2}
E \Bigl[ L_x \Bigm| N_A \ge \frac{3\pi r^2}{32} \Bigr] \le
\frac{48}{\pi}\log n.
\end{equation}
Finally, we obtain from (\ref{eq:node_number_bd}),
(\ref{eq:localhops_condbd1}) and (\ref{eq:localhops_condbd2}) that
$$
E[L_x] \le n\exp \Bigl( -\frac{3\pi k^2 c(1-\log 2)}{32} \log n \Bigr)
+ \frac{48}{\pi}\log n.
$$
The first term in the sum above can be made arbitrarily small by
choosing $k$ large enough, so $E[L_x]=O(\log n)$. It can also be seen
from the arguments above that $L_x=O(\log n)$ whp. In other words,
while $r\ge k\sqrt{c\log n}$, the number of hops during each phase
is of order $\log n$. Moreover, the number of such phases is of order
$\log n$ since the initial value of $r$ is at most $\sqrt{2n}$, and
$r$ halves at the end of each phase.
Hence, the total number of hops until $r < k\sqrt{c\log n}$ is of
order $\log^2 n$. Once the message reaches a node $x$ with ${r}(x,t)
< k\sqrt{c\log n}$, the number of additional hops to reach $t$ is bounded
above by the total number of nodes in the circle $C(t,k\sqrt{c\log n})$.
By using the Chernoff bound for a Poisson random variable, it can be shown
that this number is of order $\log n$ whp. This completes the
proof of the theorem.
\end{proof}
\section{Impossibility of efficient routing when $\alpha \neq 2$}
We now show that if $\alpha<2$, then no decentralised algorithm can
route between arbitrary source-destination pairs in time which is
polylogarithmic in $n$. In fact, the number of routing hops is
polynomial in $n$ with some fractional power that depends on $\alpha$.
We now make precise what we mean by a decentralised routing
algorithm. As specified earlier, each node knows the locations of
all its local contacts within distance $\sqrt{c\log n}$ and of all
its shortcut neighbours, as well as other nodes (if any) from which
shortcuts are incident to it. A routing algorithm
specifies a (possibly random) sequence of nodes $s=x_0, x_1,
\ldots, x_k = t, x_{k+1}=t, \ldots$, where the only requirement is
that each node $x_i$ be chosen from among the local or shortcut
contacts of nodes $\{ x_0,\ldots,x_{i-1} \}$. (This is the same
definition as used by Kleinberg \cite{Klei00}).
\begin{thm}
\label{thm:betaless2} Consider the small world random graph
described above with $\alpha < 2$, and arbitrarily large constants $c$
and $\overline{d}$. Suppose the source $s$ and destination $t$ are
chosen uniformly at random from the node set. Then,
the number of hops for message delivery in any decentralised
algorithm exceeds $n^{\gamma}$ whp, for any $\gamma < (2-\alpha)/6$.
\end{thm}
It is not important that the source and destination be chosen uniformly
but only that the distance between them be of order $n^a$ whp for some
$a>0$.
\begin{proof} We first evaluate the normalisation constant
$a_n$ by noting that the expected degree satisfies
$$
\overline{d} \ge a_n \int_{\sqrt{c\log n}}^{\sqrt{n}/2}
x^{-\alpha} 2\pi x dx = \frac{2 \pi a_n}{2-\alpha} \Bigl(
\frac{n^{(2-\alpha)/2}}{2^{2-\alpha}} - (c\log n)^{(2-\alpha)/2} \Bigr),
$$
which, on simplification, yields that
\begin{equation} \label{eq:normalisation2}
a_n \le \frac{4\overline{d}}{n^{(2-\alpha)/2}},
\end{equation}
for all $n$ sufficiently large. Note that $a_n$ is an upper bound
on the probability that there is a shortcut between any pair of nodes.
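In more detail, the simplification goes as follows (assuming $0 \le \alpha < 2$): since $(c\log n)^{(2-\alpha)/2} = o(n^{(2-\alpha)/2})$, for all $n$ large enough

```latex
(c\log n)^{(2-\alpha)/2} \le \frac12 \cdot \frac{n^{(2-\alpha)/2}}{2^{2-\alpha}},
\qquad\text{and hence}\qquad
\overline{d} \ge \frac{\pi a_n}{2-\alpha} \cdot \frac{n^{(2-\alpha)/2}}{2^{2-\alpha}},
```

so that $a_n \le \frac{(2-\alpha)\,2^{2-\alpha}}{\pi}\,\overline{d}\,n^{-(2-\alpha)/2} \le 4\overline{d}\,n^{-(2-\alpha)/2}$, using $(2-\alpha)\,2^{2-\alpha} < 8 < 4\pi$.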
Suppose that the source $s$ and destination $t$ are chosen uniformly from
all nodes on $[0,\sqrt{n}]^2$. Fix $\delta \in (\gamma,1/2)$ and define
$C_{\delta} = C(t,n^{\delta})$ to be the circle of radius $n^{\delta}$
centred at $t$. It is clear that, for any $\epsilon>0$, the distance
${r}(s,C_{\delta})$ from $s$ to the circle $C_{\delta}$ is bigger than
$n^{(1/2)-\epsilon}$ whp. Suppose now that this inequality holds, but
that there is a routing algorithm which can route from $s$ to $t$ in
fewer than $n^{\gamma}$ hops. Denote by $s=x_0, x_1, \ldots, x_m=t$,
the sequence of nodes visited by the routing algorithm, with
$m\le n^{\gamma}$. We claim that there must be a shortcut from at least
one of the nodes $x_0,x_1,\ldots, x_{m-1}$ to the set $C_{\delta}$.
Indeed, if there is no such shortcut, then $t$ must be reached starting
from some node outside $C_{\delta}$ and using only local links. Since
the length of each local link is at most $\sqrt{c\log n}$ and the number
of hops is at most $n^{\gamma}$, the total distance traversed by local
hops is strictly smaller than $n^{\delta}$ (for large enough $n$, by
the assumption that $\delta>\gamma$), which yields a contradiction.
We now estimate the probability that there is a shortcut from one of
the nodes $x_0,\ldots,x_{m-1}$ to the set $C_{\delta}$.
The number of nodes in the circle $C_{\delta}$, denoted $N_C$,
is Poisson with mean $\pi n^{2\delta}$, so $N_C < 4n^{2\delta}$ whp.
Now, by (\ref{eq:normalisation2}) and the union bound,
$$
P(\exists \mbox{ shortcut between $u$ and $C_{\delta}$}|N_C <
4n^{2\delta}) \le 16 \ \overline{d} \ n^{(4\delta+\alpha-2)/2},
$$
for any node $u$. Applying this bound repeatedly for each of the nodes
$x_0, x_1, \ldots, x_{m-1}$ generated by the routing algorithm, we get,
\begin{equation} \label{eq:shortcut_bd2}
P(\exists \mbox{ shortcut to $C_{\delta}$ within $n^{\gamma}$
hops} | N_C < 4n^{2\delta}) \le 16 \ \overline{d} \ n^{(2\gamma +
4\delta+\alpha-2)/2}.
\end{equation}
Now $\gamma < (2-\alpha)/6$ by assumption, and $\delta>\gamma$ can be
chosen arbitrarily. In particular, we can choose $\delta$ so that
$2\gamma + 4\delta + \alpha -2$ is strictly negative, in which case the
conditional probability of a shortcut to $C_{\delta}$ goes to zero
as $n\to \infty$. Since $P(N_C \ge 4n^{2\delta})$ also goes to
zero, we conclude that the probability of finding an $s-t$ route
with fewer than $n^{\gamma}$ hops also goes to zero. This
concludes the proof of the theorem.
\end{proof}
{\bf Remarks:} The theorem continues to hold if we assume 1-step lookahead.
By this, we mean that when a node decides where to send the message at the
next step, it can not only use the locations of all its local and shortcut
contacts, but also the locations of their contacts. All this means is that
after visiting $n^{\gamma}$ nodes, the algorithm has knowledge about
$O(n^{\gamma} \log n)$ nodes. If none of these nodes has a shortcut into
the set $C_{\delta}$, which is the case whp, then the arguments above still
apply. The same is true for $k$-step lookahead, for any constant $k$.
\begin{thm}
\label{thm:betamore2} Consider the small world random graph
described above with $\alpha > 2$, and arbitrarily large constants $c$
and $\overline{d}$. Suppose the source $s$ and destination $t$ are
chosen uniformly at random from the node set. Then,
the number of hops for message delivery in any decentralised
algorithm exceeds $n^{\gamma}$ whp, for any $\gamma <
(\alpha-2)/(2(\alpha-1))$.
\end{thm}
\begin{proof} For a node $u$, the probability that a randomly
generated shortcut has length bigger than $r$ is bounded above by
$$
\frac{ \int_r^{\infty} x^{-\alpha} 2\pi xdx }{ \int_{\sqrt{c\log
n}}^{\sqrt{n}/2} x^{-\alpha} 2\pi xdx} \le \mbox{const. }r^{2-\alpha} (\log
n)^{(\alpha-2)/2},
$$
for all $n$ sufficiently large. Since there are $2\overline{d}$ shortcuts
per node on average, the probability that two nodes $u$ and $v$ separated
by distance $r$ or more possess a shortcut between them is bounded above
by the same function, but with the constant suitably modified.
Now, for randomly chosen nodes $s$ and $t$, ${r}(s,t) > n^{(1/2)-\epsilon}$
whp, for any $\epsilon>0$. Hence, there can be a path of length $n^{\gamma}$
hops between $s$ and $t$ only if at least one of the hops is a shortcut
of length $n^{(1/2)-\epsilon-\gamma}$ or more. By the above and the union
bound, the probability of there being such a shortcut is bounded above by
$$
\mbox{const. } n^{\gamma} \Bigl( n^{(1/2)-\epsilon-\gamma}
\Bigr)^{2-\alpha} (\log n)^{(\alpha-2)/2}.
$$
The exponent of $n$ in the above expression is
$$
\frac{2-\alpha}{2}(1-2\epsilon) + \gamma(\alpha-1).
$$
The exponent above is negative for sufficiently small $\epsilon>0$
provided $\gamma < (\alpha-2)/(2(\alpha-1))$. In other words, if this
inequality is satisfied, then the probability of finding a route
with fewer than $n^{\gamma}$ hops goes to zero as $n\to \infty$.
This establishes the claim of the theorem.
\end{proof}
\end{document}
\begin{document}
\title[Explicit]{Comparing homological invariants for mapping classes of surfaces}
\begin{abstract}
We compare two different types of mapping class invariants: the Hochschild homology of an $A_\infty$ bimodule coming from bordered Heegaard Floer homology, and fixed point Floer cohomology. We first compute the bimodule invariants and their Hochschild homology in the genus two case. We then compare the resulting computations to fixed point Floer cohomology, and make a conjecture that the two invariants are isomorphic. We also discuss a construction of a map potentially giving the isomorphism. It comes as an open-closed map in the context of a surface being viewed as a $0$-dimensional Lefschetz fibration over the complex plane.
\end{abstract}
\maketitle
\section{Introduction}
Denote by $\Sigma$ a compact oriented genus $g$ surface, possibly with boundary. We will be studying elements of a strongly based mapping class group $MCG_0(\Sigma,\partial\Sigma=S^1)$, which consists of orientation-preserving self-diffeomorphisms $\phi\colon\thinspace (\Sigma,\partial\Sigma) \rightarrow (\Sigma,\partial\Sigma)$ fixing the boundary, up to isotopy.
\subsection{Overview}
Suppose we are given a mapping class $\phi \in MCG_0(\Sigma,\partial\Sigma=S^1)$. We will be studying two homological invariants associated to $\phi$:
\begin{enumerate}
\item An $A_\infty$ bimodule $N(\phi)$ (or, more precisely, its $A_\infty$ homotopy equivalence class), and its Hochschild homology $HH_*(N(\phi))$, which is a $\mathbb Z_2$-graded vector space over ${\mathbb F}_2$. The bimodule $N(\phi)$ comes from the bordered Heegaard Floer theory: see~\cite{LOT-mcg}, where the bimodule is also denoted by $N(\phi)$, and the original paper~\cite{LOT-bim}, where the bimodule is denoted by $\widehat{CFDA}(\phi,-g+1)$. In Section~\ref{sec:bimodule from bordered} we cover the original construction of the bimodule $N(\phi)$, whereas in Section~\ref{sec:bimodule_via_Fukaya} we also describe an equivalent construction of this bimodule, using the partially wrapped Fukaya category of the surface. This construction will be useful in understanding the connection with the next invariant.
\item Suppose for a moment that $\phi$ is a mapping class of a closed surface, and pick a generic area-preserving representative $\phi$ in that mapping class. Then we consider $HF^*(\phi)$, a $\mathbb Z_2$-graded vector space over ${\mathbb F}_2$ defined as the cohomology of the chain complex $CF^*(\phi)$. The generators of $CF^*(\phi)$ are non-degenerate constant sections of the mapping torus $T_\phi \rightarrow S^1$ (i.e. non-degenerate fixed points), and the differentials count pseudo-holomorphic cylinder sections of $T_\phi \times {\mathbb R} \rightarrow S^1 \times {\mathbb R}$. The same theory can be set up using fixed points as generators and pseudo-holomorphic discs in the Lagrangian Floer cohomology of the graphs of $\id$ and $\phi$ as differentials, but we will use the sections-and-cylinders approach. The invariant $HF^*(\phi)$ is called \emph{fixed point Floer cohomology}, or \emph{symplectic Floer cohomology}. It is $\mathbb Z_2$-graded by the sign of $\det(d\phi - \id )$ at the fixed points of $\phi$.
In order to generalize this construction to mapping classes fixing the boundary, we have to specify in which direction to twist the boundary slightly to eliminate degenerate fixed points. There are two choices (we call them $+$ and $-$) for each boundary, see Figure~\ref{fig:perturbation_conventions} for the conventions.
In our case of a mapping class $\phi \in MCG_0(\Sigma,\partial\Sigma=S^1=U_1)$, we actually consider the induced mapping class $\tilde{\phi}\colon\thinspace (\tilde{\Sigma},\partial \tilde{\Sigma}=U_1 \cup U_2) \rightarrow (\tilde{\Sigma},\partial \tilde{\Sigma}=U_1 \cup U_2)$, where the surface $\tilde{\Sigma}=\Sigma \setminus D^2$ is obtained by removing a disc in a small enough neighborhood of the boundary $U_1$ on which $\phi$ is the identity. We then consider the fixed point Floer cohomology $HF^{*}(\tilde{\phi}\:;\: U_2^+,U_1^-)$ with two different perturbation twists on the two boundaries.
\end{enumerate}
The bimodule invariant $N(\phi)$ was computed for mapping classes of the genus one surface in~\cite[Section~10]{LOT-bim}. In Section~\ref{subsec:bim_comps}, we compute $N(\phi)$ in the genus two case; we do so by explicitly describing the bimodules associated to Dehn twists $\tau_l$, which generate the mapping class group. For that we write down the bimodules based on a holomorphic curve count, and then use the description of arc-slide type $DD$ bimodules from~\cite{LOT-arc} to prove that the bimodules $N(\tau_l)$ are the correct ones.
We also describe how to compute Hochschild homology of the bimodule in the genus two case. We encountered an obstacle here: none of the smallest models of the bimodules $N(\phi)$ for the Dehn twists is bounded, and thus their Hochschild complexes are infinitely generated. We write down a certain bounded identity bimodule $[\mathbb{I}]^b$ in the genus two case, so that the bimodule $[\mathbb{I}]^b \boxtimes N(\phi) \boxtimes [\mathbb{I}]^b$ is bounded and belongs to the same $A_\infty$ homotopy equivalence class as $N(\phi)$. Replacing $N(\phi)$ with $[\mathbb{I}]^b \boxtimes N(\phi) \boxtimes [\mathbb{I}]^b$ thus resolves the unboundedness problem.
Based on the computations of Hochschild homology $HH_*(N(\phi))$ in the genus two case, and the corresponding computations of the fixed point Floer cohomology, we make the following conjecture.
\begin{conjecture} \label{conj:in_intro} For every mapping class $\phi \in MCG_0(\Sigma,\partial\Sigma=S^1=U_1)$ there is an isomorphism of $\mathbb{Z}_2$-graded vector spaces
$$
HH_*(N(\phi^{-1})) \cong HF^{*+1}(\tilde{\phi}\:;\: U_2^+,U_1^-).$$
\end{conjecture}
\begin{theorem}\label{thm:int}
The conjecture above is true in the following two cases of mapping classes of genus two surface $\Sigma_2$:
\begin{itemize}
\item The identity mapping class $\phi=\id \lefttorightarrow \Sigma_2$.
\item Single Dehn twist along an arbitrary curve $\phi=\tau \lefttorightarrow \Sigma_2$.
\end{itemize}
\end{theorem}
\noindent The result above is based on computer calculations using~\cite{Pyt}. In fact, we confirmed Conjecture~\ref{conj:in_intro} in many other cases; in Section~\ref{sec:conj_iso} we prove Theorem~\ref{thm:int}, and list some of the other relevant computations.
We further explain where an isomorphism $HH_*(N(\phi^{-1})) \cong HF^{*+1}(\tilde{\phi}\:;\: U_2^+,U_1^-)$ may come from. In Section~\ref{sec:bordered<->Fuk}, we describe the spectacular connection, due to Auroux, between bordered Heegaard Floer theory and the partially wrapped Fukaya categories of punctured surfaces and their symmetric products. In the light of this connection the bimodule $N(\phi)$ becomes a graph bimodule $N_{\F_z}(\phi)$, associated to an automorphism of the partially wrapped Fukaya category of a surface $\F_z(\Sigma)$ induced by $\phi$. Moreover, there is a natural \emph{open-closed} map from the Hochschild homology of the graph bimodule into fixed point Floer cohomology, provided that we consider the same kind of Hamiltonian perturbations for both invariants. Based on these Fukaya categorical structures, in Section~\ref{sec:LF} we obtain theoretical evidence for Conjecture~\ref{conj:in_intro}. Namely, we show how the double basepoint version of Conjecture~\ref{conj:in_intro} can be viewed as an instance of a more general conjecture of Seidel~\cite[Conjecture 7.18]{Sei3II}, which states that the open-closed map in the Fukaya-Seidel category of a Lefschetz fibration is an isomorphism.
The diagram below concisely describes the main invariants we study in this paper and the relationships between them:
$$
\begin{tikzcd}[row sep=50pt,column sep=25pt]
&
\text{Mapping class }\phi \lefttorightarrow (\Sigma, \partial \Sigma )
\arrow[dl, rightsquigarrow, "\text{Section~\ref{alt_bim}}" left ]
\arrow[d, rightsquigarrow, "\text{Section~\ref{sec:bimodule from bordered}}" left]
\arrow[ddr, rightsquigarrow, "\text{Section~\ref{sec:fixed point Floer}}"]
&
\\
N_{\F_z}(\phi)
\arrow[d, rightsquigarrow, "\text{Section~\ref{sec:hf_bim}}" right]
&
N(\phi)
\arrow[l, leftrightarrow, "\simeq" above, "\text{Prop.~\ref{prop:bims_are_eq}}" below]
\arrow[d, rightsquigarrow, "\text{Section~\ref{sec:hf_bim}}" left]
&
\\
HH_*(N_{\F_z}(\phi))
\arrow[rr, bend right=25, "\text{Section~\ref{subsec:O-C map}}" above, "\text{(in the double basepoint case)}" below]
&
HH_*(N(\phi))
\arrow[l, leftrightarrow, "\cong" above, "\text{Corollary~\ref{cor}}" below]
\arrow[r, leftrightarrow, "\overset{?}{\cong}" above, "\text{Conjecture~\ref{conj}}" below]
&
HF^*(\tilde{\phi}\:;\: U_2^+,U_1^-)
\end{tikzcd}
$$
In addition to stating the conjecture in Section~\ref{sec:statement}, performing lots of computations supporting the conjecture in Section~\ref{supporting_comps}, and discussing theoretical evidence for the conjecture in Section~\ref{sec:LF}, some of the main contributions of the present paper are the new bimodule and Hochschild homology computations, made possible by a software package~\cite{Pyt}. From the point of view of low-dimensional topology, in Section~\ref{subsec:bim_comps} we compute the bordered Heegaard Floer bimodule invariants for mapping classes of the genus two surface (before they were only computed for the torus). From the point of view of symplectic geometry, the computed invariants are the graph bimodules for the partially wrapped Fukaya category of the genus two surface. These bimodule computations, while tedious, follow from the paper~\cite{LOT-arc}
\footnote{In~\cite{LOT-arc} the authors compute the $\widehat{CFDD}$ bimodules for all arc-slides, and so one can tensor those bimodules with $\widehat{CFAA}(\id)$ (which was studied in~\cite{Boh2}) to obtain $\widehat{CFDA}$ bimodules for arc-slides. In Section~\ref{subsec:bim_comps} we took a slightly different, reverse approach: we guessed (based on holomorphic theory) the $\widehat{CFDA}$ bimodules for arc-slides, and then proved that those are the right ones by tensoring with $\widehat{CFDD}(\id)$.}. More importantly, having computed those bimodules, in Section~\ref{alg_hoh} we introduce an algorithm for computing their Hochschild homologies. Computations with the bimodules and Hochschild homologies quickly get too complicated to do by hand, and so we decided to develop a Python package~\cite{Pyt}. This program is the only software which computes the $\widehat{CFDA}$ bimodules in the second lowest $\text{Spin}^c$ structure, although only for the genus one and two cases\footnote{Zhan's Python package~\cite{Pyt-Zha} allows one to work with bimodules $\widehat{CFDA}(\phi,0)$, which are the middle $\text{Spin}^c$ summands.}. Moreover, it also computes the Hochschild homology of bimodules, and thus makes new computations of knot Floer homology possible. Namely, given the factorization of a mapping class $\phi$ into Dehn twists, the program can compute knot Floer homology of the binding of the open book corresponding to monodromy $\phi$, in the second lowest Alexander grading; see~\cite[Showcase~1]{Pyt}. This is based on the fact that the Hochschild homology of these bimodules is equal to the knot Floer homology of the binding.
Also, assuming Conjecture~\ref{conj:in_intro} is true, our computational methods allow one to compute effectively the number of fixed points of a mapping class $\tilde{\phi}$ by simply running a program, even in the pseudo-Anosov case. For example, given the mapping class $\psi=\tau_A \tau_B^{-1}$ (see Figure~\ref{fig:Sigma_2}) as input, the program~\cite[Showcase~2]{Pyt} outputs:
\begin{center}
\begin{tabular}{ |c|c| }
\hline
Automorphism & Number of fixed points \\
\hline
$\psi$ & 5 \\
$\psi^2$ & 9 \\
$\psi^3$ & 20 \\
$\psi^4$ & 49 \\
$\psi^5$ & 125 \\
\hline
\end{tabular}
\end{center}
As a byproduct of the simplicity of such computations, we can determine whether the mapping class is periodic, reducible with all components periodic, or pseudo-Anosov: the rank of $HH_*(N(\phi^{n}))$ is respectively bounded, grows linearly, or grows exponentially in $n$ (see~\cite[Corollary 4.2]{LOT-mcg},~\cite[Corollary 1.7]{CC}).
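With only finitely many powers computed, this trichotomy can be tested only heuristically. The following sketch (our own illustration, not part of~\cite{Pyt}; the function name and the finite-data criterion are ours) applies such a heuristic to the fixed point counts from the table above:

```python
def classify_growth(ranks):
    """Heuristic Nielsen--Thurston type from ranks of HH_*(N(phi^n)), n = 1, 2, ...

    Bounded ranks suggest a periodic class, eventually constant differences
    suggest linear growth (reducible with all components periodic), and
    anything faster suggests a pseudo-Anosov class.  A finite list of ranks
    can of course only suggest, not prove, the growth type.
    """
    diffs = [b - a for a, b in zip(ranks, ranks[1:])]
    if diffs[-1] == 0 and diffs[-2] == 0:
        return "periodic (bounded)"
    if diffs[-1] == diffs[-2]:
        return "reducible, components periodic (linear growth)"
    return "pseudo-Anosov (exponential growth)"

# Fixed point counts of psi = tau_A tau_B^{-1} from the table above:
print(classify_growth([5, 9, 20, 49, 125]))   # -> pseudo-Anosov (exponential growth)
```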
\subsection{Context}
Here we discuss possible generalizations of Conjecture~\ref{conj:in_intro}, and other closely related constructions.
First, let us explain why we work only with the $1$-strand moving summand $N(\phi)=\widehat{CFDA}(\phi,-g+1)$ of the full bimodule $\bigoplus_{0 \leq k \leq 2g} \widehat{CFDA}(\phi;-g+k)$. There is a Fukaya categorical interpretation of $\widehat{CFDA}(\phi;-g+1)$ as a graph bimodule of $\phi$, which we discuss in~Section~\ref{sec:bimodule_via_Fukaya}. This suggests that the Fukaya categorical interpretation of $\widehat{CFDA}(\phi;-g+k)$ is a graph bimodule of the induced symplectomorphism $Sym^{k}(\phi)\colon Sym^{k}(\Sigma) \rightarrow Sym^{k}(\Sigma)$ (see~\cite[Section~1.1]{Perutz_LMII} for how to obtain a canonical symplectomorphism $Sym^{k}(\phi)$). Thus, in light of the existence of the open-closed map (see Section~\ref{subsec:O-C map} for the $k=1$ case), the Hochschild homology $HH_*(\widehat{CFDA}(\phi, -g+k))$ should be isomorphic to some version of fixed point Floer cohomology of $Sym^{k}(\phi)$. While this invariant can be defined, for $k\ge 2$ there are no effective tools to compute it. In the $k=1$ case, however, there is a wealth of methods to compute fixed point Floer cohomology for mapping classes of surfaces (see Section~\ref{subsec:exist_comp_methods}). This allows us to compare (a specific version of) the fixed point Floer cohomology of $\phi \lefttorightarrow \Sigma$ to $HH_*(N(\phi))$.
Second, there is another isomorphism of invariants of mapping classes of surfaces which is directly related to our work. Consider a $3$-manifold $Y^3_\phi$, where the subscript $\phi$ indicates that the manifold fibers over the circle $Y^3_\phi \rightarrow S^1$ with monodromy $\phi \lefttorightarrow \Sigma$, where $\Sigma$ is a closed surface. The statement is that the Heegaard Floer homology in the second lowest $\text{Spin}^c$ structures (those evaluating to $-2g+4$ on the fiber) is isomorphic to the fixed point Floer cohomology of the corresponding monodromy: $HF^+_*(Y^3_\phi\:;\: -2g+4) \cong HF^*(\phi)$. Assuming the genus is greater than $2$, this isomorphism follows from the following chain of isomorphisms:
\begin{itemize}
\item $HF^*(\phi) \cong HP^*_{degree=1}(\phi)$, \\ where $HF^*(\phi)$ is the fixed point Floer cohomology of the mapping class $\phi$ of a closed surface, see~\cite{Sei2}, and $HP^*_{degree=1}(\phi)$ is periodic Floer cohomology, see~\cite{HS}. This isomorphism is addressed in~\cite[Appendix B]{LT}.
\item $HP^*_{degree=1}(\phi) \cong \widecheck{HM}_*(Y^3_\phi,c_+ \:;\: -2g+4)$, \\ where $Y^3_\phi$ is the $3$-manifold fibered over the circle with monodromy $\phi$, and $\widecheck{HM}_*(Y^3_\phi,c_+ \:;\: -2g+4)$ denotes an invariant of a $3$-manifold called monopole Floer homology, defined in~\cite{KM}. The $c_+$ indicates the version of $\widecheck{HM}_*$ with a monotone positive perturbation. The isomorphism was proved in~\cite{LT}. The positivity of the perturbation comes from the genus being greater than 2.
\item $\widecheck{HM}_*(Y^3_\phi,c_+ \:;\: -2g+4) \cong \widecheck{HM}_\bullet(Y^3_\phi,c_+ \:;\: -2g+4)$, \\
where $\bullet$ indicates the negative completion of the coefficient ring. The definition of $\widecheck{CM}_*$ does not depend on the negative completion, so the above groups are isomorphic, see~\cite[p.~606]{KM}.
\item $\widecheck{HM}_\bullet(Y^3_\phi,c_+ \:;\: -2g+4) \cong \widecheck{HM}_\bullet(Y^3_\phi \:;\: -2g+4)$, \\
where the absence of $c_+$ indicates exactness of the perturbation in the definition of $\widecheck{HM}_\bullet$. The isomorphism is proved in~\cite[Theorem 31.1.2]{KM}.
\item $\widecheck{HM}_\bullet(Y^3_\phi \:;\: -2g+4) \cong \widecheck{HM}_\bullet(Y^3_\phi,c_b \:;\: -2g+4)$, \\
where $c_b$ indicates the balanced perturbation in the definition of $\widecheck{HM}_\bullet$. The isomorphism is proved in~\cite[Theorem 31.1.1]{KM}.
\item $\widecheck{HM}_\bullet(Y^3_\phi,c_b \:;\: -2g+4) \cong \widecheck{HM}_*(Y^3_\phi,c_b \:;\: -2g+4) $, \\
which again follows from the fact that the negative completion of the coefficient ring does not affect $\widecheck{HM}_*$.
\item $\widecheck{HM}_*(Y^3_\phi,c_b \:;\: -2g+4) \cong HF^+_*(Y_\phi^3\:;\: -2g+4)$, \\
which is a deep and very difficult theorem, despite the fact that the definition of Heegaard Floer homology was inspired by monopole Floer homology type constructions. It was proved by passing through another invariant, called embedded contact homology (ECH), in~\cite{KLT,KLT2,KLT3,KLT4,KLT5} and~\cite{CGH,CGH1,CGH2,CGH3}.
\end{itemize}
Our Conjecture~\ref{conj:in_intro} is analogous to the proved isomorphism $HF^+(Y^3_\phi\:;\: -2g+4) \cong HF(\phi)$. We work in a slightly different $3$-manifold. Suppose we fix a lift of $\phi$ from the mapping class group of a closed surface $MCG(\Sigma)$ to the strongly based mapping class group $MCG_0(\Sigma,\partial \Sigma=U_1)$. Then, instead of the fibered manifold $Y^3_\phi$, we consider the open book corresponding to $\phi$; denote this open book by $M_\phi^\circ$ and its binding by $K$. The manifolds $Y^3_\phi$ and $M_\phi^\circ$ are related: $M_\phi^\circ$ is obtained from $Y^3_\phi$ by $0$-surgery on the constant section of $Y^3_\phi \rightarrow S^1$, which comes from the lift of $\phi$, and $Y^3_\phi$ is obtained from $M_\phi^\circ$ by $0$-surgery on $K$. Instead of $HF^+(Y^3_\phi\:;\: -2g+4)$ we consider the knot Floer homology of the binding (in the second lowest Alexander grading) $\widehat{HFK}(M_\phi^\circ,K \:;\: -g+1)$. It is equal to the Hochschild homology $HH_*(N(\phi))$ (see~\cite[Theorem~14]{LOT-bim}), with which we actually work in this paper. The relevant version of fixed point Floer cohomology turns out to be $HF^{*}(\tilde{\phi}\:;\: U_2^+,U_1^-)$. It is possible that Conjecture~\ref{conj:in_intro} can be deduced from the proved isomorphism $HF^+(Y^3_\phi\:;\: -2g+4) \cong HF(\phi)$.
It is also interesting to compare our results to the work of Spano~\cite{Sp}. He develops the full version of embedded contact knot homology
$ECK(Y,K, \alpha)$, and conjectures it to be isomorphic to $HFK^-(Y,K)$. In~\cite[Section~3.3.1]{Sp}, the connection to symplectic Floer homology is explained. Namely, in the case where the knot is the binding of an open book, the embedded contact knot homology is equal to a certain periodic Floer homology, see~\cite[Theorem~3.19]{Sp}. In the degree one case, it follows that if $ECK(Y, K, \alpha)\cong HFK^-(Y,K)$, then $HF(\phi\:;\: U_1^+)\cong HFK^-(M_\phi^\circ,K \:;\: -g+1)$. Thus our Conjecture~\ref{conj:in_intro} can be viewed as the \say{hat} version of the conjecture of Spano.
Working in the \say{hat} version allows us to consider the Hochschild homology of the bimodule $HH_*(N(\phi))$ instead of the knot Floer homology $\widehat{HFK}(M_\phi^\circ,K \:;\: -g+1)$. This transition is quite powerful, because two things become possible: computations using bordered Floer theory, and the connection to the partially wrapped Fukaya category of a surface, specifically to twisted open-closed maps there. The latter connection provides hope that Conjecture~\ref{conj:in_intro} can be proved by more algebraic methods, using the structure of the Fukaya category. In this direction see~\cite{Gan}, where it is proved that the untwisted open-closed map is an isomorphism for the non-degenerate wrapped Fukaya category, in the exact setting.
\subsection{Outline of the paper}
This paper is organized as follows; for clarity, we also comment on the novelty of the content of each part.
\begin{description}
\item[Section 2.1] we cover the background on bordered Heegaard Floer theory, which is relevant to the construction of the bimodule invariant $N(\phi)$. This is an exposition of results from~\cite{LOT-bim}.
\item[Section 2.2] we perform computations of the bimodule invariant $N(\phi)$ in the genus two case. These computations are new.
\item[Section 2.3] we describe a method to compute Hochschild homology $HH_*(N(\phi))$ of bimodules in the genus two case, which we implemented in the program~\cite{Pyt}. The method and the computations, made possible by~\cite{Pyt}, are new.
\item[Section 3.1] we sketch the definition of fixed point Floer cohomology $HF^*(\phi)$, in both cases $\partial \Sigma = \emptyset$ and $\partial \Sigma \neq \emptyset$. This is an exposition of results from~\cite{Flo,DS,Sei2}, based on other expositions from~\cite{Gau,CC,Sei3II,Ul}.
\item[Section 3.2] we describe the existing methods to compute fixed point Floer cohomology. This is an exposition of results from~\cite{Sei1,Gau,Eft,CC}.
\item[Section 4.1] we state our main conjecture. This is new.
\item[Section 4.2] we perform various computations in the genus two case, and notice the equality of ranks of the groups $HF^{*}(\tilde{\phi}\:;\: U_2^+,U_1^-)$ and $HH_*(N(\phi))$, which confirms the conjecture. We also prove Theorem~\ref{thm:int}. The results are new.
\item[Section 5.1] we describe Auroux's symplectic geometric interpretation of bordered Heegaard Floer homology in terms of partially wrapped Fukaya categories. This is an exposition of~\cite{Aur1,Aur2}.
\item[Section 5.2] we explain the Fukaya categorical interpretation of the bimodule $N(\phi)$. This interpretation was known, see~\cite[Lemma~4.2]{AGW14}.
\item[Section 6.1] we explain the generalization of the bimodule $N(\phi)$ to the double basepoint version $N^\text{2bp}(\phi)$. This is an exposition of known constructions, which are based on~\cite{Zarev}.
\item[Section 6.2] we introduce the double basepoint generalization $HF^{*}(\dtilde{\phi}\:;\: U_3^+,U_2^+,U_1^-)$ of the fixed point Floer cohomology $HF^{*}(\tilde{\phi}\:;\: U_2^+,U_1^-)$, and state the double basepoint version of our main conjecture. This material is new.
\item[Section 6.3] we describe the background material on Lefschetz fibrations and the Fukaya-Seidel category. This is an exposition of results from~\cite{Sei4,Sei5,Sei3I,Sei3II}.
\item[Section 6.4] we show how our conjecture is a special case of the conjecture of Seidel~\cite[Conjecture 7.18]{Sei3II}, which states that the open-closed map in the Fukaya-Seidel category is an isomorphism. While the discussion of the open-closed map is an exposition of~\cite[Section~7]{Sei3II}, the connection between our work and~\cite{Sei3II} is new.
\end{description}
{\setlength{\parindent}{0pt} \bf Assumptions and conventions.}
\begin{itemize}
\item By $\phi$ we will usually denote not only the diffeomorphism, but also the mapping class which it represents.
\item We will use the convention $\omega(X_H,\cdot)=-dH$ for Hamiltonian vector fields.
\item Every homological invariant we consider will be defined over the field ${\mathbb F}_2$.
\item We will be working with the fixed point Floer $co$homology, rather than $ho$mology.
\end{itemize}
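For concreteness, the Hamiltonian convention above fixes the sign of $X_H$; the following short check in coordinates is our own illustration of the convention, not part of the constructions in this paper:

```latex
% With $\omega = dx\wedge dy$ and $X_H = a\,\partial_x + b\,\partial_y$, the convention
% $\omega(X_H,\cdot)=-dH$ reads $a\,dy - b\,dx = -H_x\,dx - H_y\,dy$, so
\[
  X_H \;=\; -\frac{\partial H}{\partial y}\,\partial_x \;+\; \frac{\partial H}{\partial x}\,\partial_y .
\]
% For example, for $H=(x^2+y^2)/2$ this gives $X_H = -y\,\partial_x + x\,\partial_y$,
% which generates counterclockwise rotation.
```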
{\setlength{\parindent}{0pt} \bf Acknowledgements.}
I am thankful to my adviser Zoltán Szabó for his guidance and support during this project. I thank Nick Sheridan for suggesting I read one of the key references~\cite{Sei3II}. I also would like to thank Denis Auroux, Sheel Ganatra, Peter Ozsváth, Paul Seidel, András Stipsicz, Mehdi Yazdi, and Bohua Zhan for helpful conversations. Finally, I thank the referees for their helpful comments and suggestions.
\Needspace{8\baselineskip}
\section{Bimodule invariant coming from bordered Heegaard Floer homology}
Everything in this section is based on bordered Heegaard Floer theory, developed by Lipshitz, Ozsváth and Thurston in~\cite{LOT-main} and~\cite{LOT-bim}. We refer to those papers for the theory of $A_\infty$ algebras, modules and bimodules, for how such objects arise in Heegaard Floer theory, and for the proofs of the propositions we state and use along the way.
\subsection{Background: bimodules and their Hochschild homology}\label{sec:bimodule from bordered}
\subsubsection{Pointed matched circles}\label{sec:pmc}
We will be considering surfaces with one boundary component. Moreover, it is useful to consider parameterized surfaces, i.e. surfaces with a specified $1$-handle decomposition. Thus let us start with the following definition.
\begin{definition}
A \emph{pointed matched circle} is an oriented circle $\Zpmc$, equipped with a basepoint $z$ and $4g$ additional points, matched in $2g$ pairs (and distinct from each other and from $z$), such that performing surgery on all $2g$ pairs results in a single circle.
\end{definition}
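The single-circle condition is easy to verify by machine: with one standard reglueing convention, surgery on a matched pair reroutes the arc arriving at a point so that it continues from the arc leaving its partner, and the circles of the surgered $1$-manifold are the cycles of this successor map. A minimal sketch (the matchings below are our own examples, not taken from the figures of this paper):

```python
def circles_after_surgery(matching, n_points):
    """Count the circles obtained by surgering every matched pair at once.

    Points 0..n_points-1 sit on the circle in cyclic order, and `matching`
    sends each point to its partner.  Arc i runs from point i to point i+1;
    after surgery, the arc ending at a point continues from the arc that
    starts at the partner point, so circles = cycles of that successor map.
    """
    seen, circles = set(), 0
    for start in range(n_points):
        if start in seen:
            continue
        circles += 1
        arc = start
        while arc not in seen:
            seen.add(arc)
            arc = matching[(arc + 1) % n_points]   # jump to the partner point
    return circles

# The genus one matching {0,2},{1,3} and the "split" genus two matching
# {0,2},{1,3},{4,6},{5,7} both pass the test (one circle after surgery):
assert circles_after_surgery({0: 2, 2: 0, 1: 3, 3: 1}, 4) == 1
assert circles_after_surgery({0: 2, 2: 0, 1: 3, 3: 1,
                              4: 6, 6: 4, 5: 7, 7: 5}, 8) == 1
# Matching adjacent pairs {0,1},{2,3} fails: surgery yields three circles.
assert circles_after_surgery({0: 1, 1: 0, 2: 3, 3: 2}, 4) == 3
```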
\begin{construction}[The surface associated to a pointed matched circle]
Given a pointed matched circle $\Zpmc$, we can associate a surface whose boundary is the circle $\Zpmc$, viewing the $2g$ pairs of points as feet of $1$-handles. Specifically, we thicken $\Zpmc$ into a band $\Zpmc \times [0,1]$, then glue the $1$-handles to $\Zpmc \times \{1\}$, and then cap off the boundary component which is not $\Zpmc \times \{0\}$ (see Figure~\ref{fig:pointed_matched_circle} below and~\cite[Figure 1.1]{LOT-main}). We denote this surface by $F^{\circ}(\Zpmc)$; the orientation on it is induced from the boundary via the usual rule \say{outward normal first}. Let $F(\Zpmc)$ denote the result of filling in a disc $D_\Zpmc$ along the boundary component of $F^{\circ}(\Zpmc)$ (so mapping classes of $F^{\circ}(\Zpmc)$ fixing the boundary naturally correspond to mapping classes of $F(\Zpmc)$ fixing the disc $D_\Zpmc$). Note that any two surfaces specified by the same pointed matched circle are homeomorphic, via a homeomorphism which is uniquely determined up to isotopy.
\end{construction}
\begin{example}
In Figure~\ref{fig:pointed_matched_circle}, we provide an example of a pointed matched circle in the $g=2$ case, together with its corresponding surface of genus two. In our computations of the mapping class invariant we will be using this pointed matched circle, which we denote by $\Zpmc_2$. Notice that there are other pointed matched circles for a genus two surface, not isomorphic to $\Zpmc_2$, which we could equally well have used. Thus here we make a particular choice, which can be understood as a choice of a parameterization of the surface by specifying a $0$-handle (the preferred disc) and $1$-handles.
\end{example}
\begin{figure}
\caption{An example of a pointed matched circle for $g=2$, and its corresponding genus two surface.}
\label{fig:pointed_matched_circle}
\end{figure}
Given a pointed matched circle $\Zpmc$, we denote by $-\Zpmc$ the same circle with the reversed orientation. The corresponding surfaces also have opposite orientations: $F^{\circ}(-\Zpmc)= -F^{\circ}(\Zpmc)$.
Consider now \emph{the genus $g$ strongly based mapping class groupoid}, the category whose objects are pointed matched circles with $4g$ points, and whose morphism sets are
$$MCG_0(\Zpmc_L,\Zpmc_R)=\{\phi\colon F^{\circ}(\Zpmc_L)\xrightarrow{\cong} F^{\circ}(\Zpmc_R) \ | \ \phi(z_L)=z_R\} /\sim,$$
i.e. orientation preserving diffeomorphisms respecting the boundary and the basepoint, considered up to isotopy. For any pointed matched circle $\Zpmc$ with $4g$ points the corresponding group of self-diffeomorphisms $MCG_0(\Zpmc,\Zpmc) \cong MCG_0(\Sigma,\partial \Sigma)$ is the mapping class group of the genus $g$ surface with one boundary component.
Our goal for the rest of Section~\ref{sec:bimodule from bordered} is to explain how to associate a (homotopy equivalence class of) type $DA$ bimodule to an element $\phi \in MCG_0(\Zpmc_L,\Zpmc_R)$. This uses a number of correspondences, which are outlined in Figure~\ref{fig:plan_2}. If we take $\Zpmc_L=\Zpmc_R$, then we will produce an invariant of a mapping class of a surface. We now proceed to explaining the different pieces of the diagram.
\begin{figure}\label{fig:plan_2}
\end{figure}
\subsubsection{Mapping cylinders}
We need the notion of a \emph{strongly bordered $3$-manifold} with two boundary components. It consists of the following data (following~\cite[Definition 5.1]{LOT-bim}):
\begin{enumerate}
\item an oriented $3$-manifold $Y$ with two boundary components $\partial_1 Y$ and $\partial_2 Y$;
\item a preferred disc and a basepoint (on the boundary of that disc) in each boundary component;
\item parameterizations of each boundary component by some fixed surfaces $\psi_i\colon(F_i, D_i, z_i)\rightarrow\partial_i Y$, respecting the distinguished discs and basepoints;
\item a framed arc connecting the basepoints such that the framing on the boundaries points into the distinguished discs.
\end{enumerate}
Given parameterizations of the boundaries of $Y$ by surfaces $F_1$ and $F_2$, there is a natural notion of isomorphism of strongly bordered $3$-manifolds --- a diffeomorphism of the underlying $3$-manifolds which respects every piece of the additional data, i.e. the parameterizations of the boundaries, the arcs connecting the basepoints, and their framings.
Given a strongly based mapping class we wish to form a strongly bordered $3$-manifold, which is called \emph{the corresponding mapping cylinder}.
\begin{construction}[Mapping cylinder]\label{mapping cylinder}
Fix pointed matched circles $\Zpmc_L$, $\Zpmc_R$ and a mapping class $\phi\colon (F(\Zpmc_L),D_L,z_L)\rightarrow (F(\Zpmc_R),D_R,z_R)$. We can form the \emph{mapping cylinder} $M_\phi={}_\phi([0, 1]\times F(\Zpmc_R))_{\id}$, which is a strongly bordered $3$-manifold with the following data:
\begin{enumerate}
\item The $3$-manifold and its boundary components are
$$Y = [0, 1] \times F(\Zpmc_R), \qquad \partial_L Y = \{0\} \times -F(\Zpmc_R), \qquad \partial_R Y =\{1\} \times F(\Zpmc_R)$$
\item a parametrization of its boundary given by
$$\psi_L=-\phi\colon -F(\Zpmc_L) \rightarrow \partial_L Y, \qquad \psi_R=\id \colon F(\Zpmc_R) \rightarrow \partial_R Y ;$$
\item two distinguished discs $\{0\}\times D_R$ in $\partial_L Y$ and $\{1\}\times D_R$ in $\partial_R Y$;
\item a framed path $\gamma_z=[0,1]\times \{z_R\}$ between $z_L \in \partial_L Y$ and $z_R \in \partial_R Y$ such that the framing points into the distinguished discs $D_R$ at every fiber $\{t\} \times F(\Zpmc_R)$. See Figure~\ref{fig:mapping_cylinder}.
\end{enumerate}
\end{construction}
\begin{figure}
\caption{Mapping cylinder of $\phi\colon (F(\Zpmc_L),D_L,z_L)\rightarrow (F(\Zpmc_R),D_R,z_R)$.}
\label{fig:mapping_cylinder}
\end{figure}
The following lemma allows us to talk about mapping cylinders instead of mapping classes (and vice versa), i.e. it explains the first correspondence in Figure~\ref{fig:plan_2}.
\begin{lemma}[\text{\cite[Lemma~5.29]{LOT-bim}}]
Fix pointed matched circles $\Zpmc_L$ and $\Zpmc_R$. Then any strongly bordered $3$-manifold $Y$, whose boundary is parameterized by $F(\Zpmc_L)$ and $F(\Zpmc_R)$, and whose underlying space can be identified with the product of a surface with an interval (so that the arc $\gamma_z$ is identified with the product of a point with the interval, respecting the framing), is of the form $M_\phi$ for some strongly based mapping class $\phi\colon F^\circ(\Zpmc_L) \rightarrow F^\circ(\Zpmc_R)$. Moreover, two such strongly bordered $3$-manifolds are isomorphic if and only if they represent the same strongly based mapping class.
\end{lemma}
\subsubsection{Heegaard diagrams}\label{sec:hd}
Now, having constructed the mapping cylinder $M_\phi$, we would like to have a $2$-dimensional presentation of it.
\begin{definition}
An \emph{arced bordered Heegaard diagram with two boundary components} is a quadruple $(\ov{\Sigma},\ov{\b\alpha},\b{\beta},\b{z})$ where
\begin{itemize}
\item $\ov{\Sigma}$ is an oriented compact surface of genus $g$ with two boundary components, $\partial_L \ov{\Sigma}$ and $\partial_R \ov{\Sigma}$;
\item $\ov{\b{\alpha}}=\{\ov{\alpha}^\text{arc,left}_1,\dots,\ov{\alpha}^\text{arc,left}_{2l},\ov{\alpha}^\text{arc,right}_1,\dots,\ov{\alpha}^\text{arc,right}_{2r},\alpha_1^\text{curve},\dots,\alpha_{g-l-r}^\text{curve}\}$ is a collection of pairwise disjoint curves: $2l$ embedded arcs with boundary on $\partial_L \ov{\Sigma}$, $2r$ embedded arcs with boundary on $\partial_R \ov{\Sigma}$, and $g-l-r$ circles in the interior (in particular $g \geq l+r$);
\item $\b{\beta}=\{\beta_1,\cdots,\beta_g\}$ is a $g$-tuple of pairwise disjoint curves in the interior of $\ov{\Sigma}$;
\item $\b z$ is a path in $\ov{\Sigma} \setminus (\ov{\b \alpha} \cup \b \beta )$ between $\partial_L \ov{\Sigma}$ and $\partial_R \ov{\Sigma}$;
\end{itemize}
These are required to satisfy:
\begin{itemize}
\item $\ov{\Sigma} \setminus \ov{\b \alpha}$ and $ \ov{\Sigma} \setminus \b \beta$ are connected;
\item the curves in $\ov{\b \alpha}$ intersect the curves in $\b \beta$ transversely.
\end{itemize}
\end{definition}
\begin{figure}
\caption{A Heegaard diagram for $M_{\id}$.}
\label{fig:heegard_diagram_id}
\end{figure}
Notice that the two boundaries of any arced bordered Heegaard diagram specify two pointed matched circles. In Figure~\ref{fig:heegard_diagram_id} we provide an example of a Heegaard diagram of the mapping cylinder of $\id\colon\Sigma_2 \rightarrow \Sigma_2$ in the genus two case.
The following proposition provides the second correspondence in Figure~\ref{fig:plan_2}.
\begin{proposition}[\text{\cite[Construction~5.6, Propositions~5.10 and~5.11]{LOT-bim}}] \label{moves_thm}
Any arced bordered Heegaard diagram with two boundary components gives rise to a strongly bordered $3$-manifold. For the other direction, suppose a strongly bordered $3$-manifold has a boundary parameterized by $F(\Zpmc_1)$ and $F(\Zpmc_2)$. Then this $3$-manifold has an arced bordered Heegaard diagram with boundary pointed matched circles $\Zpmc_1$ and $\Zpmc_2$. The diffeomorphism type of this Heegaard diagram is unique up to certain moves that transform one diagram to another.
\end{proposition}
\noindent
In one direction, the procedure for obtaining a strongly bordered $3$-manifold from an arced bordered Heegaard diagram consists of thickening the surface, attaching $2$-handles along the circles (along the $\beta$-circles from one side, and along the $\alpha$-circles from the other side), and then carefully analyzing what happens on the $\alpha$ side of the boundary. There we have two surfaces of genera $l$ and $r$ with parameterizations coming from the Heegaard diagram, connected by an annulus containing the path $\b z$. This annulus, along with the path $\b z$ in it, specifies the framed arc $\gamma_z$ connecting the two boundary surfaces in the definition of a strongly bordered $3$-manifold. In the other direction, the existence of a Heegaard diagram follows from Morse theory, see~\cite[Proposition~5.10]{LOT-main}. To obtain uniqueness, Heegaard diagrams connected by certain moves are declared equivalent, and we refer the reader to~\cite[Proposition~5.11]{LOT-bim} for the list of those moves and the proof.
\subsubsection{Bimodules} \label{sec:hf_bim}
Here the original definition of the bimodule invariant is described, following~\cite{LOT-bim}. Let us note that there is an alternative, more elementary approach, studied in~\cite{LOT-mcg} and~\cite{Sie}. It is similar in spirit to the Fukaya categorical interpretation of the bimodule invariant, which is described in Section~\ref{sec:bimodule_via_Fukaya}.
As a prerequisite to the subsequent material, we refer the reader to the excellent sources~\cite[Section~2]{LOT-main} and~\cite[Section~2]{LOT-bim} for the algebraic theory of \emph{$dg$ algebras}, \emph{$A_\infty$ modules}, and \emph{$A_\infty$ bimodules}. The most important algebraic object for us will be a \emph{type $DA$ bimodule over $dg$ algebras}, see \cite[Definition~2.2.43]{LOT-bim}. Following~\cite{LOT-bim}, we will use subscripts and superscripts to indicate the algebras of type $DA$ bimodules: if the algebra $\mathcal A_1$ is on the $D$-side and the algebra $\mathcal A_2$ is on the $A$-side of a type $DA$ bimodule $N$, then we write ${}^{\mathcal A_1}N_{\mathcal A_2}$. We will also occasionally make use of \emph{type $AA$} and \emph{type $DD$ bimodules}, with the notations ${}_{\mathcal A_1}N_{\mathcal A_2}$ and ${}^{\mathcal A_1}N^{\mathcal A_2}$ respectively.
The first step in constructing any bordered Heegaard Floer invariants is always specifying the algebra.
\begin{construction}[The $dg$ algebra associated to a pointed matched circle]\label{construction:dg-algebra}
To a pointed matched circle $\Zpmc$ we associate a $dg$ algebra $\B(\Zpmc)$, which is $\A(\Zpmc,-g+1)$ in the notation of~\cite{LOT-bim} (i.e. we have only \say{one strand moving}). First, given a pointed matched circle $\Zpmc$, we construct a directed graph (\emph{quiver} for short) $\Gamma(\Zpmc)$. The vertices of $\Gamma(\Zpmc)$ are the matched pairs of points in the pointed matched circle, and the edges are the length one chords between the points which do not cross the basepoint $z$. We illustrate this in Figure~\ref{fig:Z_2--B-Z_2-}, where we draw an example of a pointed matched circle $\Zpmc_2$ and, below it inside the parentheses, the corresponding quiver $\Gamma(\Zpmc_2)$.
To the quiver $\Gamma(\Zpmc)$ we associate a path algebra ${\mathbb F}_2(\Gamma(\Zpmc))$. The generators of the underlying $\mathbb{F}_2$-vector space are all the paths of the quiver (including the constant paths). If edges are denoted by letters $\rho_j$, we use the notation $\rho_{j_1 \dots j_k}$ for a path $(\rho_{j_1}, \ldots, \rho_{j_k})$. The multiplication in the algebra is given by concatenating the paths, if possible: for example, in ${\mathbb F}_2(\Gamma(\Zpmc))$ from Figure~\ref{fig:Z_2--B-Z_2-}, we have $i_1 \cdot \rho_4 \cdot \rho_5=\rho_{45}$. If paths do not concatenate, we declare the product to be zero.
Finally, to obtain the algebra $\B(\Zpmc)$ we quotient the algebra ${\mathbb F}_2(\Gamma(\Zpmc))$ by setting certain paths to be zero. These are the paths in which non-consecutive chords are concatenated. For example, in ${\mathbb F}_2(\Gamma(\Zpmc))$ from Figure~\ref{fig:Z_2--B-Z_2-}, although $\rho_{32}$ is a valid path in the quiver, the chord $\rho_2$ does not come right after $\rho_3$ in the pointed matched circle $\Zpmc_2$, and thus we set $\rho_{32}=0$. The full set of relations for $\B(\Zpmc_2)$ is given at the bottom of Figure~\ref{fig:Z_2--B-Z_2-}. The differential on the $dg$ algebra $\B(\Zpmc)$ is always set to be trivial. All the constant paths of $\Gamma(\Zpmc)$ are idempotents in the algebra $\B(\Zpmc)$, and they correspond to the matched pairs of points in $\Zpmc$, i.e. the $1$-handles of $F^\circ(\Zpmc)$. We usually denote these idempotents by $i_k$; there are four of them in $\B(\Zpmc_2)$ from Figure~\ref{fig:Z_2--B-Z_2-}. The sum of all the idempotents is the unit of $\B(\Zpmc)$.
\end{construction}
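The multiplication rule of Construction~\ref{construction:dg-algebra} is simple enough to model in a few lines. The toy sketch below keeps only the relation described in the text --- a product of chord sequences survives exactly when the second sequence starts with the chord immediately following the last chord of the first ($\rho_1,\rho_2,\dots$ label consecutive chords) --- and omits the quiver-incidence and idempotent data from Figure~\ref{fig:Z_2--B-Z_2-}, so it is an illustration rather than the full $\B(\Zpmc_2)$:

```python
class B:
    """F2-span of chord sequences rho_{j1...jk}, with concatenation product.

    Only the relation from the text is encoded: rho_i can be followed by
    rho_j in a nonzero product exactly when j = i + 1 (consecutive chords).
    """
    def __init__(self, *paths):
        self.paths = frozenset(tuple(p) for p in paths)   # F2 sum of basis paths

    def __add__(self, other):                # characteristic 2: x + x = 0
        return B(*(self.paths ^ other.paths))

    def __mul__(self, other):
        out = set()
        for p in self.paths:
            for q in other.paths:
                if q[0] == p[-1] + 1:        # chords concatenate consecutively
                    out ^= {p + q}           # coefficients in F2
        return B(*out)

    def __eq__(self, other):
        return self.paths == other.paths

def rho(*js):
    """The basis element rho_{j1...jk}."""
    return B(js)

assert rho(4) * rho(5) == rho(4, 5)   # rho_4 . rho_5 = rho_45
assert rho(3) * rho(2) == B()         # rho_32 = 0: rho_2 is not the next chord
```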
\begin{figure}
\caption{A genus two example of how to associate a $dg$ algebra to a pointed matched circle.}
\label{fig:Z_2--B-Z_2-}
\end{figure}
Now we turn to the construction of type $DA$ bimodule invariant associated to a mapping class.
\begin{construction}[The type $DA$ bimodule associated to a mapping class]\label{constr_bim}
Fix a surface with one boundary component $(\Sigma,\partial \Sigma = S^1)=F^\circ(\Zpmc)$. A mapping class $\phi \in MCG_0(\Sigma,\partial \Sigma)= MCG_0(\Zpmc,\Zpmc)$ gives rise to a strongly bordered $3$-manifold $M_\phi$, the mapping cylinder of $\phi$. We then consider a Heegaard diagram $\H(M_\phi)$ representing $M_\phi$, with the pointed matched circle $\Zpmc$ on both boundaries. To such a Heegaard diagram we associate a type $DA$ bimodule ${}^{\B(\Zpmc)}N(\phi)_{\B(\Zpmc)}$, over the algebra $\B(\Zpmc)$ on both sides.
The generators of the underlying $\mathbb{F}_2$-vector space of the bimodule are tuples $\x$ of intersection points between $\alpha$- and $\beta$-curves such that every $\alpha$- and $\beta$-circle contains exactly one point of $\x$, exactly one $\alpha$-arc on the right boundary contains a point (denote this $\alpha$-arc by $\alpha^\text{arc,right}_{\x}$), and all except one $\alpha$-arc on the left boundary contain one point (denote the remaining $\alpha$-arc by $\alpha^\text{arc,left}_{\x}$). See an example in Figure~\ref{fig:heeg_diag_dehn_twist}, where we mark all the generators on a Heegaard diagram.
The idempotent subalgebra of $\B(\Zpmc)$ acts on these generators in the following way. By Construction~\ref{construction:dg-algebra}, to an $\alpha$-arc there is an associated idempotent of $\B(\Zpmc)$, which we denote by $i(\alpha^\text{arc})$. For a generator $\x$, we have the actions $i(\alpha^\text{arc,left}_{\x}) \cdot \x= \x $ and $\x \cdot i(\alpha^\text{arc,right}_{\x})= \x $, and all other idempotent actions are zero.
Higher $A_\infty$ type $DA$ actions on these generators are defined by counting pseudo-holomorphic curves; for this definition, analytic in nature, we refer the reader to~\cite[Section~6.3]{LOT-bim} and the references therein. Let us note an important point: in the course of the definition, an analytic choice of a family of almost complex structures on a certain space has to be made. If one makes a different analytic choice, or chooses a different Heegaard diagram for $M_\phi$, the resulting type $DA$ bimodule is the same up to $A_\infty$ homotopy equivalence of type $DA$ bimodules.
\end{construction}
This finishes the explanation of how, to a mapping class $\phi \in MCG_0(\Sigma,\partial \Sigma)=MCG_0(\Zpmc,\Zpmc)$, we can associate an $A_\infty$ homotopy equivalence class of type $DA$ bimodules ${}^{\B(\Zpmc)}N(\phi)_{\B(\Zpmc)}$. An important feature of this mapping class invariant is that there is an operation on bimodules which corresponds to multiplication in the mapping class group. This operation is called \emph{box tensor product}~\cite[Definitions~2.3.2 and~2.3.9]{LOT-bim}, and is denoted by $\boxtimes$.
\begin{theorem}[\text{\cite[Theorem~12]{LOT-bim}}]\label{DA pairing}
Suppose $\phi_1$ and $\phi_2$ are two elements in the mapping class group $MCG_0(\Zpmc,\Zpmc)$. Then we have the following homotopy equivalence of bimodules:
$${}^{\B(\Zpmc)}N(\phi_1 \phi_2)_{\B(\Zpmc)} \simeq {}^{\B(\Zpmc)}N(\phi_2 )_{\B(\Zpmc)} \boxtimes {}^{\B(\Zpmc)}N(\phi_1)_{\B(\Zpmc)}.$$
\end{theorem}
Along the way of associating the bimodule invariant to $\phi \in MCG_0(\Sigma,\partial \Sigma)=MCG_0(\Zpmc,\Zpmc)$, we made a choice of a particular pointed matched circle $\Zpmc$, and an identification of $F^\circ(\Zpmc)$ with $(\Sigma, \partial \Sigma)$. It turns out that for us these choices are not important. Given two different pointed matched circles $\Zpmc$, $\Zpmc'$, and two different parameterizations of the surface $(\Sigma, \partial \Sigma) \cong_1 F^\circ (\Zpmc)$ and $(\Sigma, \partial \Sigma) \cong_2 F^\circ (\Zpmc')$, the two mapping class groups $MCG_0(F^\circ (\Zpmc),F^\circ (\Zpmc))$ and $MCG_0(F^\circ (\Zpmc'),F^\circ (\Zpmc'))$ can be bijectively identified via conjugation by some element $a \in MCG_0(F^\circ (\Zpmc'),F^\circ (\Zpmc))$:
\begin{align*}
MCG_0(F^\circ (\Zpmc'),F^\circ (\Zpmc')) &\rightarrow MCG_0(F^\circ (\Zpmc),F^\circ (\Zpmc)) \\
\psi&\mapsto a \psi a^{-1}
\end{align*}
Conveniently, the bimodules are also related: if $\phi=a \psi a^{-1}$, then, according to Theorem~\ref{DA pairing}, we have
$$N(\phi) \simeq N(a^{-1}) \boxtimes N(\psi) \boxtimes N(a).$$
More importantly, the main invariant for us is going to be the Hochschild homology of a bimodule, which is invariant with respect to conjugation of a mapping class.
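One way to see this invariance algebraically is the following sketch, which assumes, beyond Theorem~\ref{DA pairing}, the trace property of Hochschild homology, $HH_*(P \boxtimes Q) \cong HH_*(Q \boxtimes P)$:

```latex
\begin{align*}
HH_*\bigl(N(a \psi a^{-1})\bigr)
  &\cong HH_*\bigl(N(a^{-1}) \boxtimes N(\psi) \boxtimes N(a)\bigr)
     && \text{by Theorem~\ref{DA pairing}} \\
  &\cong HH_*\bigl(N(a) \boxtimes N(a^{-1}) \boxtimes N(\psi)\bigr)
     && \text{trace property} \\
  &\cong HH_*\bigl(N(\id) \boxtimes N(\psi)\bigr)
     \cong HH_*\bigl(N(\psi)\bigr).
\end{align*}
```

Here $N(a) \boxtimes N(a^{-1}) \simeq N(a^{-1}a)=N(\id)$ by Theorem~\ref{DA pairing}, and tensoring with the identity bimodule has no effect up to homotopy equivalence.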
Lastly, we quote~\cite[Theorem~4]{LOT-bim}, which says that ${}^{\B(\Zpmc)}N(\id)_{\B(\Zpmc)} \simeq {}^{\B(\Zpmc)}[\mathbb{I}]_{\B(\Zpmc)}$ is the \emph{identity type $DA$ bimodule}, i.e. it has generators ${}_{i_k}(i_k)_{i_k}$ for every idempotent of $\B(\Zpmc)$, and actions $({}_{i_k}(i_k)_{i_k},a) \rightarrow a \o {}_{i_l}(i_l)_{i_l}$ for every element $i_k a i_l = a$ in the algebra $\B(\Zpmc)$; see~\cite[Definition 2.2.48]{LOT-bim}. From the definition it follows that the operations $P\boxtimes{}^{\B(\Zpmc)}[\mathbb{I}]_{\B(\Zpmc)}$ and ${}^{\B(\Zpmc)}[\mathbb{I}]_{\B(\Zpmc)} \boxtimes P$ do not affect the bimodule $P$, up to isomorphism; this will be important later.
\subsubsection{Hochschild homology}\label{sec:hoch}
One of the main goals of this paper is to relate the bimodule $N(\phi)$ to fixed point Floer cohomology (defined in Section~\ref{sec:fixed point Floer}). It turns out that for this we need to apply a certain algebraic operation to the bimodule, called \emph{Hochschild homology} and denoted by $HH_*(N(\phi))$. In general, it is a homology theory associated to type $AA$ bimodules ${}_{\mathcal A_1}N_{\mathcal A_2}$ and type $DA$ bimodules ${}^{\mathcal A_1}N_{\mathcal A_2}$ that satisfy $\mathcal A_1=\mathcal A_2$. Hochschild homology should be thought of as self-tensoring of the bimodule, pairing the left side to the right side. We refer to~\cite[Section~2.3.5]{LOT-bim} for the algebraic definitions, basic properties, and a way to compute Hochschild homology for bounded type $DA$ bimodules.
There are two important points about this algebraic structure. First, Hochschild homology depends only on the $A_\infty$ homotopy equivalence class of the bimodule. Thus the Hochschild homology $HH_*({}^{\B(\Zpmc)}N(\phi)_{\B(\Zpmc)})$ gives us an invariant of a mapping class. Second, $HH_*({}^{\B(\Zpmc)}N(\phi)_{\B(\Zpmc)})$ can be identified with knot Floer homology.
\begin{theorem}[\text{\cite[Theorem~7]{LOT-bim}}]
Given a mapping class $\phi \in MCG(\Sigma,\partial \Sigma)$, denote by $M_\phi^\circ$ the open book with monodromy $\phi\colon (\Sigma,\partial \Sigma) \rightarrow (\Sigma, \partial \Sigma)$ and binding $K$. Then the following two vector spaces are isomorphic:
$$HH_*(N(\phi))\cong \widehat{HFK}(M_\phi^\circ,K \:;\: -g+1),$$
where the latter is the Alexander grading $(-g+1)$ summand of knot Floer homology of $K$.
\end{theorem}
\begin{remark}\label{rmk:inv_conj}
The open book $M_\phi^\circ$ is invariant with respect to conjugation of $\phi$, which implies that $HH_*(N(\phi))$ is invariant with respect to conjugation of $\phi$.
\end{remark}
There is a $\mathbb{Z}_2$-grading on $\widehat{HFK}(M_\phi^\circ,K \:;\: -g+1)$, coming from the sign of intersections of tori $\mathbb{T}_\alpha$ and $\mathbb{T}_\beta$ in the definition of generators of knot Floer homology. In conjunction with the theorem above, this endows Hochschild homology with a $\mathbb{Z}_2$-grading.
\subsubsection{Cancellation}
Let us finish this section by describing a process called \emph{cancellation}; for details we refer the reader to~\cite[Section~2.6]{Levine} (where it is called the \say{edge reduction algorithm}) and~\cite[Section~3.1]{Boh}. Suppose there are two generators $_i x_j$ and $_i y_j$ (the subscripts indicate their left and right idempotents) in a type $DA$ bimodule $P$ such that the only action between them is $\delta^1_1(_i x_j)=i \o _i y _j$ (we will label such actions by $1$, see the arrow between the two generators $x_2$ and $t_{12}$ in Figure~\ref{fig:C_inv_bimodule}). Then we can cancel these two generators, i.e. erase $x$ and $y$ and the arrows involving them from the bimodule, and then add some other arrows between the remaining generators, guided by the following cancellation rule: for every \say{zigzag} configuration of arrows
$$z_2 \xleftarrow{b \o (d_1,\ldots,d_l)} x \xrightarrow{1} y \xleftarrow{a \o (c_1,\ldots,c_k)} z_1$$
in the initial bimodule $P$ we will add an arrow
$$z_2 \xleftarrow{a\cdot b \o (c_1,\ldots,c_k,d_1,\ldots,d_l)} z_1$$
The outcome is a new bimodule $P'$ with fewer generators, which is homotopy equivalent to the original one: $P'\simeq P$.
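For concreteness, here is a minimal Python sketch of one cancellation step, assuming a toy encoding of a bimodule as a list of arrows (source, target, outgoing algebra element, tuple of incoming elements); over $\mathbb{F}_2$, arrows produced an even number of times cancel. This encoding is our own illustration, not the data structure of~\cite{Pyt}:

```python
def cancel(arrows, x, y):
    """Cancel generators x, y along the unique arrow x --1--> y."""
    # arrows into y from generators other than x: labels a o (c_1,...,c_k)
    into_y = [(s, o, i) for (s, d, o, i) in arrows if d == y and s != x]
    # arrows out of x other than the canceled one: labels b o (d_1,...,d_l)
    out_of_x = [(d, o, i) for (s, d, o, i) in arrows if s == x and d != y]
    # arrows not touching x or y survive unchanged
    kept = [(s, d, o, i) for (s, d, o, i) in arrows
            if x not in (s, d) and y not in (s, d)]
    # zigzag rule: z1 -> y, x --1--> y, x -> z2 produce a new arrow
    # z1 -> z2 labeled a*b o (c_1,...,c_k,d_1,...,d_l)
    new = [(z1, z2, a + "*" + b, cs + ds)
           for (z1, a, cs) in into_y
           for (z2, b, ds) in out_of_x]
    # F_2 coefficients: an arrow created twice vanishes
    result = kept[:]
    for arr in new:
        if arr in result:
            result.remove(arr)
        else:
            result.append(arr)
    return result
```

Iterating this step until no more $1$-labeled arrows remain is exactly how the program referenced later reduces the tensor products of bimodules.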
\subsection{New bimodule computations}\label{subsec:bim_comps}
In this section, we compute the type $DA$ bimodules $N(\phi)$ for mapping classes of the genus two surface; the computations are based on the arc-slide bimodules studied in~\cite{LOT-arc}.
First, fix a genus two surface $\Sigma_2$ with one boundary component, and a set of curves on it as in Figure~\ref{fig:Sigma_2}.
\begin{figure}
\caption{Dehn twists along these curves generate $MCG_0(\Sigma_2)$.}
\label{fig:Sigma_2}
\end{figure}
The following is a presentation of the mapping class group of the genus two surface with one boundary component (see~\cite[Theorem~2]{W}):
\begin{equation}
\begin{aligned}\label{g2_mcg_presentationg}
MCG_0(\Sigma_2,\partial \Sigma_2)= \langle \tau_A,\tau_B,\tau_C,\tau_D,\tau_E | \ \ &\text{commuting relations}, \ \ \text{braid relations}, \\
&(\tau_E \tau_D \tau_C \tau_B)^5=\tau_A\tau_B\tau_C\tau_D\tau_E \tau_E\tau_D\tau_C\tau_B \tau_A \rangle,
\end{aligned}
\end{equation}
where $\tau_l$ is a right-handed Dehn twist along the curve $l$. By the commuting relations we mean that if curves $l_1$ and $l_2$ do not intersect, then $\tau_{l_1} \tau_{l_2} =\tau_{l_2} \tau_{l_1}$. By the braid relations we mean that if curves $l_1$ and $l_2$ intersect transversely at a single point, then $\tau_{l_2} \tau_{l_1} \tau_{l_2} = \tau_{l_1} \tau_{l_2} \tau_{l_1}$.
Now we explain how to compute $N(\phi)$ for any $\phi \in MCG_0(\Sigma_2,\partial \Sigma_2)$. Every such mapping class can be represented by a product of the Dehn twists $\tau_A,\tau_B,\tau_C,\tau_D,\tau_E$ and their inverses. Theorem~\ref{DA pairing} governs how the bimodules $N(\phi)$ behave with respect to composition of mapping classes: the corresponding operation is the box tensor product. Thus it is enough to compute $N(\tau)$ only for these five Dehn twists and their inverses:
\begin{equation}\label{ten_bimodules}
N(\tau_A),\ N(\tau_B),\ N(\tau_C),\ N(\tau_D),\ N(\tau_E), N(\tau^{-1}_A),\ N(\tau^{-1}_B),\ N(\tau^{-1}_C),\ N(\tau^{-1}_D),\ N(\tau^{-1}_E)
\end{equation}
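Concretely, Theorem~\ref{DA pairing} dictates the order in which the generator bimodules are tensored: for $\phi=\phi_1\phi_2$ we have $N(\phi) \simeq N(\phi_2) \boxtimes N(\phi_1)$, so the Dehn-twist word must be reversed. The following Python sketch records this bookkeeping symbolically (the function name and string encoding are our own illustration, not taken from~\cite{Pyt}):

```python
def bimodule_word(factorization):
    """Given a Dehn-twist word, left-to-right in composition order,
    return the box tensor product expression for N(phi) as a string.
    By Theorem "DA pairing", N(t_1 t_2 ... t_n) is homotopy equivalent
    to N(t_n) box ... box N(t_1), so the word is reversed first."""
    factors = ["N({})".format(t) for t in reversed(factorization)]
    return " ⊠ ".join(factors)
```

For example, `bimodule_word(["tau_A", "tau_C", "tau_E^-1"])` produces the expression `N(tau_E^-1) ⊠ N(tau_C) ⊠ N(tau_A)`, matching the example in the remark below.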
\begin{remark}
There is a very interesting strategy, pioneered by Zhan~\cite{Boh}, to assign Floer-theoretic invariants to topological objects without referring to pseudo-holomorphic theory. Assuming the ten bimodules~\eqref{ten_bimodules} are computed (we will proceed to computing them after the remark), this strategy can be applied in our case: we can redefine the bimodule invariant $N(\phi)$ for $\phi \in MCG_0(\Sigma_2,\partial \Sigma_2)$ in the following combinatorial way. To a factorization of $\phi$ into the ten Dehn twists~\eqref{ten_bimodules} --- say $\phi=\tau_A \tau_C \tau^{-1}_E$, for example --- we associate the tensor product of the corresponding bimodules $N(\tau^{-1}_E) \boxtimes N(\tau_C) \boxtimes N(\tau_A)$. To make sure this is an invariant of the mapping class group element up to conjugation, we need to check the mapping class group relations. For example, for the relation $\tau_{A} \tau_{B} \tau_{A} = \tau_{B} \tau_{A} \tau_{B}$ we need to check the following homotopy equivalence: $N(\tau_A) \boxtimes N(\tau_B) \boxtimes N(\tau_A) \simeq N(\tau_B) \boxtimes N(\tau_A) \boxtimes N(\tau_B)$. Indeed, in our case, after the computation of the ten bimodules~\eqref{ten_bimodules}, a computer can verify that all the relations in presentation~\eqref{g2_mcg_presentationg} are satisfied; see~\cite[Showcase~3]{Pyt} for an illustration.
In~\cite{Boh}, Zhan gives a combinatorial definition of $\widehat{CFDA}(\phi, 0)$; it is an analogue of our invariant $\widehat{CFDA}(\phi, -g+1)=N(\phi)$, where generators occupy $g$ arcs on the left and $g$ arcs on the right boundary of the Heegaard diagram. In the definition he uses arc-slides (as opposed to Dehn twists) as generators of the mapping class groupoid. Using this definition for $\widehat{CFDA}(\phi,0)$, he then gives a combinatorial definition of the \say{hat} version of Heegaard Floer homology of a $3$-manifold $\widehat{HF}(Y^3)$.
\end{remark}
We now compute the ten bimodules~\eqref{ten_bimodules}. First we need to fix a parameterization of our surface $(\Sigma_2,\partial \Sigma_2) \cong F^\circ(\Zpmc)$. This will specify a $dg$ algebra. We use the pointed matched circle $\Zpmc_2$ and its corresponding algebra $\B(\Zpmc_2)$ from Figure~\ref{fig:Z_2--B-Z_2-}. For an identification $(\Sigma_2,\partial \Sigma_2) \cong F^\circ(\Zpmc_2)$ see Figure~\ref{fig:Sigma_2_parameterized}.
\begin{figure}
\caption{Parameterization of the surface $\Sigma_2 \cong F^\circ(\Zpmc_2)$.}
\label{fig:Sigma_2_parameterized}
\end{figure}
For every Dehn twist $\tau_l \in MCG_0(\Sigma_2,\partial \Sigma_2)$ we need to specify a Heegaard diagram for the mapping cylinder $M_{\tau_l}$. Following~\cite[Section~5.3]{LOT-bim}, consider first the standard Heegaard diagram $\H(M_{\id})$ for $\id\colon F^\circ(\Zpmc_2) \rightarrow F^\circ(\Zpmc_2)$, see Figure~\ref{fig:curves_on_heeg_diag}. There is a shaded region of the diagram on the right that is identified with the right boundary $F^\circ(\Zpmc_2)\setminus D^2$ of the mapping cylinder. Analogously, there is a shaded region on the left part of the diagram which is identified with $-(F^\circ(\Zpmc_2)\setminus D^2)$. There are also curves $A_l,~B_l,~C_l,~D_l,~E_l$ on the left surface, and $A_r,~B_r,~C_r,~D_r,~E_r$ on the right surface, via the identification $F^\circ(\Zpmc_2) \cong \Sigma_2$ specified above.
\begin{figure}
\caption{Heegaard diagram $\H(M_{\id})$.}
\label{fig:curves_on_heeg_diag}
\end{figure}
Now, suppose we want to draw a Heegaard diagram for $M_{\tau_E}$. Then it is enough to change all the red arcs of the left side of $\H(M_{\id})$ by applying $\tau_{E}$. This corresponds to a parameterization $-\tau_E:-F^\circ(\Zpmc_2) \rightarrow \partial_L M_{\tau_E}$. So we apply the right handed Dehn twist $\tau_{E_l}$ to the red arcs.
Alternatively we can apply $\tau_{E_l}^{-1}$ to all the blue curves (this corresponds to applying self-diffeomorphism $\tau_{E_l}^{-1}$ to the Heegaard diagram). We also could have applied $\tau_{E_r}$ to all the red arcs, or $\tau_{E_r}^{-1}$ to all the blue curves. All these possibilities are depicted in Figure~\ref{fig:heeg_diag_dehn_twists_1-4}. All of the four diagrams are equivalent up to the equivalence moves from Proposition~\ref{moves_thm} and self-diffeomorphisms applied to the diagrams. The resulting Heegaard diagrams here are analogous to the ones for the genus one case in~\cite[Section~10.2]{LOT-bim}.
\begin{remark}
The orientation convention (essentially the signs of the Dehn twists on the diagrams) is chosen so that the map $\phi$ goes \say{from left to right} on the mapping cylinder and the Heegaard diagram, see~\cite[Appendix A]{LOT-mor}. This ensures the desired behavior with respect to gluing: $\H_{\phi_1 \phi_2} \cong \H_{\phi_2} {}_{\partial_R}{\cup}_{\partial_L} \H_{\phi_1}$.
\end{remark}
\begin{figure}
\caption{1st type.}
\caption{2nd type.}
\caption{3rd type.}
\caption{4th type.}
\caption{Four Heegaard diagrams for a Dehn twist along the curve E.}
\label{fig:heeg_diag_dehn_twists_1-4}
\end{figure}
The Heegaard diagrams for Dehn twists along the curves $E,~D,~A,~B$ are very similar in their complexity and structure. In contrast, as one can see from Figure~\ref{fig:two_Dehn}, the Heegaard diagrams for $\tau_E$ and $\tau_{C}^{-1}$ are very different, which results in the bimodule $N(\tau_C^{-1})$ having many more generators than the bimodule $N(\tau_{E})$. Thus we have two fundamentally different computations of the bimodules to perform: for the Dehn twists $\tau_E$ and $\tau_{C}$ (or their inverses). Below, we compute the bimodule $N(\tau_E)$ via the second type of Heegaard diagram for $\tau_{E}$, and the bimodule $N(\tau_C^{-1})$ via the third type of diagram for $\tau_{C}^{-1}$. The remaining eight bimodules can be computed analogously: the computations for $\tau_{E}^{-1},\tau_{D},\tau_{D}^{-1}$ are analogous to the one for $\tau_E$, while the remaining five bimodule invariants for $\tau_{A}^{-1},\tau_{A},\tau_{B}^{-1},\tau_{B},\tau_{C}$ can be deduced from the previous five by reflection about the $x$-axis and changing the orientation of the Heegaard diagram. More precisely, to obtain $N(\tau_{A}^{-1})$ from $N(\tau_{E})$ for instance, one has to apply a certain involution to the algebra $\B(\Zpmc_2)$, reverse the direction of arrows, and reverse the order of algebra elements on the $A$ side. The involution is given by $i_k \leftrightarrow i_{3-k}$ and $\rho_k \leftrightarrow \rho_{8-k}$ (in particular, it does not preserve the multiplication). For example, the action
$${}_{i_0}(x_0)_{i_0} \o (\rho_{3},\rho_{234}) \rightarrow \rho_{34} \o {}_{i_2}(x_2)_{i_2}\quad \text{ in }N(\tau_{E})$$
results in action
$${}_{i_1}(x_2)_{i_1} \o (\rho_{456}, \rho_{5}) \rightarrow \rho_{45} \o {}_{i_3}(x_0)_{i_3} \quad \text{ in } N(\tau_{A}^{-1}).$$
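As a sanity check, this relabeling can be encoded in a few lines of Python. The encoding of an action as a tuple (index of the source idempotent, source generator, incoming chords, outgoing chord, index of the target idempotent, target generator), with chords as strings of digits, is a hypothetical convenience of ours, not notation from~\cite{LOT-bim}:

```python
def inv_idem(k):
    # the involution on idempotents: i_k <-> i_{3-k}
    return 3 - k

def inv_rho(digits):
    # the involution on chords: rho_k <-> rho_{8-k}; e.g. rho_{234}
    # maps to rho_{654}, written with increasing indices as rho_{456}
    return "".join(sorted(str(8 - int(d)) for d in digits))

def involute_action(action):
    """Apply the involution to one type DA action, reversing the arrow
    and the order of the incoming algebra elements on the A side."""
    src_idem, src, ins, out, dst_idem, dst = action
    return (inv_idem(dst_idem), dst,                    # arrow is reversed
            tuple(inv_rho(r) for r in reversed(ins)),   # A-side order flips
            inv_rho(out),
            inv_idem(src_idem), src)
```

Applied to the encoded action ${}_{i_0}(x_0)_{i_0} \o (\rho_{3},\rho_{234}) \rightarrow \rho_{34} \o {}_{i_2}(x_2)_{i_2}$ of $N(\tau_E)$, this reproduces the action of $N(\tau_A^{-1})$ displayed above.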
All the ten bimodules~\eqref{ten_bimodules} are explicitly described in the program~\cite[Showcase~0]{Pyt}. For a general mapping class we first factorize it into Dehn twists, and then box tensor multiply all the bimodules associated to these Dehn twists.
While the actual technical computations of bimodules $N(\tau_E),~N(\tau_C^{-1})$ are done in Computations~\ref{comp: dehn twist, the easier one} and~\ref{harder one}, here we first describe the resulting bimodules. Figure~\ref{fig:heeg_diag_dehn_twist} shows the Heegaard diagram $\H (M_{\tau_E})$ along with the marked generators of the bimodule, and also the idempotents corresponding to the arcs.
The type $DA$ bimodule $N(\tau_E)$ is depicted in Figure~\ref{fig:E_bimodule}. The subscripts of the generators represent the underlying left and right idempotents; the arrow between generators $\x$ and $\y$ with the label $a \o (b,c)$ means that there is a type $DA$ action $\x \o (b, c) \rightarrow a \o \y$; if there is $1$ in the label on the right, it means that there are no incoming algebra elements in that action; if there is $1$ in the label on the left, it means that the corresponding action's outgoing algebra element is an idempotent. The $1$ notation refers to the fact that the sum of the idempotents in the algebra is a unit.
Figure~\ref{fig:heeg_diag_dehn_twist_connecting_curve} shows the Heegaard diagram $\H (M_{\tau_C^{-1}})$ along with the marked generators of the bimodule. Because there are many generators, for the generators $t_i$ we only mark an intersection point on the right side of the diagram; the corresponding $2g-1=3$ intersections on the left are uniquely determined. The bimodule $N(\tau_C^{-1})$ is depicted in Figure~\ref{fig:C_inv_bimodule}. On some arrows (which are drawn a little lighter in the picture) we did not write the actions; those actions correspond to all the rectangles in the rectangular area with vertices $t_0,t_{12}$ and the right edge $\rho_{23456}$.
We now proceed to proving that the bimodules from Figures~\ref{fig:E_bimodule} and~\ref{fig:C_inv_bimodule} are indeed the bimodules $N(\tau_E)$ and $N(\tau_C^{-1})$ arising from the pseudo-holomorphic curve counts for the Heegaard diagrams in Figures~\ref{fig:heeg_diag_dehn_twist} and~\ref{fig:heeg_diag_dehn_twist_connecting_curve}.
\begin{computation}[$N(\tau_E)$] \label{comp: dehn twist, the easier one}
Let us denote for the moment our candidate bimodule from Figure~\ref{fig:E_bimodule} by $N'(\tau_E)$, while $N(\tau_E)$ will denote the bimodule corresponding to the Heegaard diagram in Figure~\ref{fig:heeg_diag_dehn_twist}. Thus we want to prove that $N'(\tau_E)\simeq N(\tau_E)$. There are two ways to do this. One is to use the definition of the $A_\infty$ actions via pseudo-holomorphic curves directly. In the genus one case such computations were done in~\cite[Section~10.2]{LOT-bim}, and it is possible to generalize them to compute $N(\tau_E)$. However, it is more difficult to do this for $N(\tau_C^{-1})$, which is our next computation. Thus we choose another approach, which we will also use to compute $N(\tau_C^{-1})$.
Before proceeding to the proof of $N'(\tau_E)\simeq N(\tau_E)$, let us describe the necessary background material.
The algebra $\B(\Zpmc)$ is a direct summand of a bigger algebra
$$\B(\Zpmc)=\A(\Zpmc, -g +1) \quad \subset \quad \A(\Zpmc)=\bigoplus\limits_{-g \leq k \leq g}\A(\Zpmc, k),$$
see~\cite[Section~3]{LOT-main} for the definition of $\A(\Zpmc)$.
To a Heegaard diagram of the mapping cylinder $\H(M_\phi)$ one can associate not only a
$$\text{type $DA$ bimodule}\quad {}^{\B(\Zpmc)}N(\phi)_{\B(\Zpmc)}={}^{\B(\Zpmc)}\widehat{CFDA}(\phi,-g+1)_{\B(\Zpmc)},$$
but also
\begin{align*}
\text{type $DD$ bimodule}\quad &{}^{\B(\Zpmc)}\widehat{CFDD}(\phi,-g+1)^{\A(\Zpmc, g -1)}, \text{ and} \\
\text{type $AA$ bimodule}\quad &{}_{\A(\Zpmc, g -1)}\widehat{CFAA}(\phi,-g+1)_{\B(\Zpmc)},
\end{align*}
see~\cite[Section~6]{LOT-bim} for the definitions. The sets of generators for these three bimodules are the same, but the $A_\infty$ actions are different. Note that all three bimodules are direct summands of more general ones
\begin{align*}
{}^{\A(\Zpmc)}\widehat{CFDA}(\phi)_{\A(\Zpmc)}=&\bigoplus_{0\leq j\leq 2g} {}^{\A(\Zpmc, -g +j )}\widehat{CFDA}(\phi,-g+j)_{\A(\Zpmc, -g +j )}, \\
^{\A(\Zpmc)}\widehat{CFDD}(\phi)^{\A(\Zpmc)}=&\bigoplus_{0\leq j\leq 2g} {}^{\A(\Zpmc, -g +j)}\widehat{CFDD}(\phi,-g+j)^{\A(\Zpmc, g -j)}, \\
_{\A(\Zpmc)}\widehat{CFAA}(\phi)_{\A(\Zpmc)}=&\bigoplus_{0\leq j\leq 2g} {}_{\A(\Zpmc, g -j)}\widehat{CFAA}(\phi,-g+j)_{\A(\Zpmc, -g +j)}.
\end{align*}
The $(-g+1)$-summands, which are relevant for us, are characterized by the number of arcs generators occupy on the left and the right side of a Heegaard diagram --- for us it is $2g-1$ on the left, and $1$ on the right. Equivalently, we can say that the $(-g+1)$-summands consist of generators in those $\text{Spin}^c$ structures of $M_\phi$ whose Chern class evaluates to $-2g+2$ on each of the boundaries of $M_\phi$. As we noted in the introduction, the reason why we focus our attention on the $(-g+1)$-summands is that those are the ones corresponding to the fixed point Floer theory of the surface. Since we will only be interested in the $(-g+1)$-summands, we will omit the index $-g+1$ for these bimodules below.
To complement Theorem~\ref{DA pairing}, we describe below the relationship between the type $AA$, $DD$ and $DA$ bimodules, which follow from \cite[Theorem~12]{LOT-bim}:
\begin{equation}\label{prrr}
\begin{split}
^{\B(\Zpmc)}\widehat{CFDD}(\phi)^{\A(\Zpmc, g -1)} \boxtimes {}_{\A(\Zpmc, g -1)}\widehat{CFAA}(\psi)_{\B(\Zpmc)} \simeq {}^{\B(\Zpmc)}\widehat{CFDA}(\psi \phi)_{\B(\Zpmc)}, \\
^{\B(\Zpmc)}\widehat{CFDA}(\phi)_{\B(\Zpmc)} \boxtimes {}^{\B(\Zpmc)}\widehat{CFDD}(\psi)^{\A(\Zpmc, g -1)} \simeq {}^{\B(\Zpmc)}\widehat{CFDD}(\psi \phi)^{\A(\Zpmc, g -1)}.
\end{split}
\end{equation}
(We use here the notation $\widehat{CFDA}(\phi)$ instead of $N(\phi)$ just to emphasize that $D$ sides are paired with $A$ sides.)
\emph{Arc-slides} are generators of the mapping class groupoid, see~\cite[Figure 3]{LOT-arc} for a definition of an arc-slide. In particular, they generate Dehn twists. In practice, \cite[Lemma~2.1]{LOT-arc} gives a standard way to decompose a Dehn twist $\tau_l\colon(\Sigma,\partial \Sigma) \rightarrow (\Sigma,\partial \Sigma)$ into a product of arc-slides; let us describe it. First, we pick a parameterization of the surface $(\Sigma,\partial \Sigma) \cong F^\circ(\Zpmc)$ such that $l$ is isotopic to an arc $\alpha \subset F^\circ(\Zpmc)$ whose ends are connected along the part of the boundary which does not contain the basepoint --- we denote this part by $I_\alpha$. Then we consider the composition of arc-slides, each of which moves a point on $I_\alpha$, in turn, once along $\alpha$. This composition is the desired Dehn twist. For example, see Figure~\ref{fig:Sigma_2_parameterized}, where the curve $E$ is in the correct position with respect to the arc $\alpha_E$. Thus the Dehn twist $\tau_E$ is equal to the single arc-slide over the arc $\alpha_E$, which is indicated in the picture.
There is also a standard way to produce a Heegaard diagram for an arc-slide. In our example of $\tau_E$, this is the 3rd type of the diagram in Figure~\ref{fig:heeg_diag_dehn_twists_1-4}. Based on these standard Heegaard diagrams, the type $DD$ bimodules for all arc-slides were explicitly computed in~\cite{LOT-arc}.
Let us return to the proof of $N'(\tau_E) \simeq N(\tau_E)$. We first prove the following isomorphism:
\begin{equation}\label{eqqqq}
{}^{\B(\Zpmc)}N'(\tau_E)_{\B(\Zpmc)} \boxtimes {}^{\B(\Zpmc)}\widehat{CFDD}(\id)^{\A(\Zpmc, g -1)} \cong {}^{\B(\Zpmc)}\widehat{CFDD}(\tau_E)^{\A(\Zpmc, g -1)}.
\end{equation}
On the left-hand side, the bimodule $N'(\tau_E)$ is our candidate bimodule from Figure~\ref{fig:E_bimodule}. The bimodule ${}^{\B(\Zpmc)}\widehat{CFDD}(\id)^{\A(\Zpmc, g -1)}$, according to~\cite[Theorem~1]{LOT-arc}, is the identity type $DD$ bimodule described in~\cite[Definition~1.3]{LOT-arc}. It consists of four generators
$${}_{i_0}(i_0)_{\bar{i}_0}, {}_{i_1}(i_1)_{\bar{i}_1}, {}_{i_2}(i_2)_{\bar{i}_2}, {}_{i_3}(i_3)_{\bar{i}_3},$$
where ${\bar{i}_k}\in \A(\Zpmc, g -1)$ is the idempotent complementary to ${i}_k\in \B(\Zpmc)$. The type $DD$ actions are
\begin{equation}\label{act}
\partial(i)=\sum_{ \substack {(i \xrightarrow{\rho} j )\in \B(\Zpmc) \\ i\neq j }} a(\rho)\big|_{\B(\Zpmc)} \o j \o a(\rho)\big|_{\A(\Zpmc, g -1)},
\end{equation}
where $a(\rho)$ is the notation from~\cite[Definition~3.23]{LOT-main}, and $a(\rho)\big|_{\A(\Zpmc, g -1)}$ stands for the projection of $a(\rho)$ onto the summand $\A(\Zpmc, g -1)$ (in general $a(\rho)\in \A(\Zpmc)$).
Note that, in our case of $\Zpmc=\Zpmc_2$, the property $i\neq j$ implies that in the type $DD$ actions the chord $\rho$ cannot be equal to $\rho_{12}, \rho_{23}, \rho_{56}$ or $\rho_{67}$.
Next, we describe in detail the right-hand side of Isomorphism~\eqref{eqqqq}. According to~\cite[Theorem~2]{LOT-arc}, the bimodule ${}^{\B(\Zpmc)}\widehat{CFDD}(\tau_E)^{\A(\Zpmc, g -1)}$ is an arc-slide type $DD$ bimodule described in~\cite[Definition~1.7]{LOT-arc}. Note that our arc-slide $\tau_E$ is an \emph{under-slide} (see~\cite[Definition~4.2]{LOT-arc}), and thus the actions of the corresponding bimodule are described explicitly in~\cite[Definition~4.19]{LOT-arc}. Figure~\ref{fig:arc-slide_DD_bimodule} depicts the resulting bimodule; let us pause to describe the notation. An arrow between generators $\x$ and $\y$ with the label $a \o b$ means that there is a type $DD$ action $\x \rightarrow a \o \y \o b$. For elements of the algebra $\A(\Zpmc, g -1)$ we use the strand diagram notation from~\cite[Section~3]{LOT-main}, with the difference that our strands are the horizontal lines numbered by $0$ to $7$, while in~\cite{LOT-main} the strands are numbered by $1$ to $8$. As an example of our notation, $|(0, 2), (1, 3), (4 \rightarrow 5)|$ represents an element of the algebra corresponding to the strand going from the 4th to the 5th line supplemented with idempotents $(0, 2)$ and $(1, 3)$ (this makes it an element of the summand of the algebra with three occupied strands). In the notation of~\cite[Definition~3.23]{LOT-main}:
$$|(0, 2), (1, 3), (4 \rightarrow 5)|=a(\rho_5)\big|_{\A(\Zpmc, g -1)}$$
In the notation of~\cite[Definition~3.25]{LOT-main}:
$$|(0, 2), (1, 3), (4 \rightarrow 5)|=\begin{bmatrix}4 & 0 &1 \\ 5\end{bmatrix}$$
Also, every action in arc-slide bimodules has its type, see~\cite[Definition 4.19]{LOT-arc} for the relevant here case of an under-slide. In Figure~\ref{fig:arc-slide_DD_bimodule}, we specify the types of the actions by the superscripts U-n.
Given the complete description of all the bimodules involved, it is straightforward to check that Isomorphism~\eqref{eqqqq} holds. The actions in Figures~\ref{fig:E_bimodule} and~\ref{fig:arc-slide_DD_bimodule} are intentionally spaced in a similar way, to suggest how the isomorphism works. Taking an action from Figure~\ref{fig:E_bimodule} and, if possible, box tensoring it with the actions of ${}^{\B(\Zpmc)}\widehat{CFDD}(\id)^{\A(\Zpmc, g -1)}$ (described in Equation~\eqref{act}) precisely results in the corresponding action from Figure~\ref{fig:arc-slide_DD_bimodule}.
Now, box tensoring both sides of Isomorphism~\eqref{eqqqq} by ${}_{\A(\Zpmc, g -1)}\widehat{CFAA}(\id)_{\B(\Zpmc)}$ gives
\begin{equation*}
\begin{split}
{}^{\B(\Zpmc)}N'(\tau_E)_{\B(\Zpmc)} \boxtimes {}^{\B(\Zpmc)}\widehat{CFDD}(\id)^{\A(\Zpmc, g -1)} &\boxtimes {}_{\A(\Zpmc, g -1)}\widehat{CFAA}(\id)_{\B(\Zpmc)} \cong \\
&\cong
{}^{\B(\Zpmc)}\widehat{CFDD}(\tau_E)^{\A(\Zpmc, g -1)} \boxtimes {}_{\A(\Zpmc, g -1)}\widehat{CFAA}(\id)_{\B(\Zpmc)}.
\end{split}
\end{equation*}
In view of the pairings~\eqref{prrr}, the previous isomorphism can be transformed into
\begin{equation*}
{}^{\B(\Zpmc)}N'(\tau_E)_{\B(\Zpmc)} \boxtimes {}^{\B(\Zpmc)}\widehat{CFDA}(\id)_{\B(\Zpmc)} \simeq {}^{\B(\Zpmc)}\widehat{CFDA}(\tau_E)_{\B(\Zpmc)}.
\end{equation*}
The bimodule ${}^{\B(\Zpmc)}\widehat{CFDA}(\id)_{\B(\Zpmc)}$ is the identity bimodule (\cite[Theorem~4]{LOT-bim}), and thus tensoring with it has no effect. Thus, recalling that ${}^{\B(\Zpmc)}N(\tau_E)_{\B(\Zpmc)}={}^{\B(\Zpmc)}\widehat{CFDA}(\tau_E)_{\B(\Zpmc)}$, the above homotopy equivalence becomes
\begin{equation*}
{}^{\B(\Zpmc)}N'(\tau_E)_{\B(\Zpmc)} \simeq {}^{\B(\Zpmc)}N(\tau_E)_{\B(\Zpmc)},
\end{equation*}
which finishes the proof.
\end{computation}
\begin{computation}[$N(\tau_C^{-1})$]\label{harder one}
Let us denote for the moment our candidate bimodule from Figure~\ref{fig:C_inv_bimodule} by $N'(\tau_C^{-1})$, while $N(\tau_C^{-1})$ will denote the bimodule corresponding to the Heegaard diagram in Figure~\ref{fig:heeg_diag_dehn_twist_connecting_curve}. Thus we want to prove that $N'(\tau_C^{-1}) \simeq N(\tau_C^{-1})$.
First, we factorize the Dehn twist $\tau_C^{-1}$ into a product of arc-slides. In Figure~\ref{fig:Sigma_2_parameterized}, consider the slide of the arc $\alpha_E$ over the arc $\alpha_A$, and let us call this arc-slide $\eta$. We then get a new parameterization of the surface, where instead of the arc $\alpha_E$ we have a new arc $\alpha'_E$, which becomes isotopic to the curve $C' \sim \eta(C)$ once we connect the ends of the arc $\alpha'_E$. According to~\cite[Lemma~2.1]{LOT-arc}, the Dehn twist $\tau_{C'}^{-1}$ can be factorized into four arc-slides $\mu_1,\mu_2,\mu_3,\mu_4$ along the arc $\alpha'_E$, which we picture in Figure~\ref{fig:arcslides_sequence}. Now, by the mapping class group relation $f\tau_l f^{-1}=\tau_{f(l)}$ we obtain the desired factorization:
$$
\tau_C^{-1}=\tau_{\eta^{-1}(C')}^{-1}=\eta^{-1} \tau_{C'}^{-1} \eta =\eta^{-1} \mu_4\mu_3\mu_2\mu_1 \eta.
$$
\begin{figure}
\caption{Composition of the above six arc-slides gives a left-handed Dehn twist along the curve $C$ in Figure~\ref{fig:Sigma_2_parameterized}.}
\label{fig:arcslides_sequence}
\end{figure}
From this factorization we get
$$
N(\eta)\boxtimes N(\mu_1) \boxtimes N(\mu_2) \boxtimes N(\mu_3) \boxtimes N(\mu_4) \boxtimes N(\eta^{-1}) \simeq N(\tau_{C}^{-1}),
$$
and so to compute $N(\tau_{C}^{-1})$ it remains to compute the bimodules for the six arc-slides. These are computed using exactly the same method we used in the previous computation of $N(\tau_{E})$; see \cite[Showcase~10]{Pyt} for explicit descriptions of all six bimodules. (The strand notation for algebra elements in the program is the same as the one used in Figure~\ref{fig:arc-slide_DD_bimodule}.)
For computing $N(\eta)\boxtimes N(\mu_1) \boxtimes N(\mu_2) \boxtimes N(\mu_3) \boxtimes N(\mu_4) \boxtimes N(\eta^{-1})$ we used the program~\cite[Showcase~10]{Pyt}: tensoring all six arc-slide bimodules, and then performing all possible cancellations, results in a bimodule isomorphic to the one in Figure~\ref{fig:C_inv_bimodule} with the differential $x_2 \rightarrow t_{12}$ canceled. This proves the desired homotopy equivalence:
$$
N'(\tau_{C}^{-1}) \simeq N(\eta)\boxtimes N(\mu_1) \boxtimes N(\mu_2) \boxtimes N(\mu_3) \boxtimes N(\mu_4) \boxtimes N(\eta^{-1}) \simeq N(\tau_{C}^{-1}).
$$
\end{computation}
\begin{figure}
\caption{Heegaard diagram $\H (M_{\tau_E})$.}
\label{fig:heeg_diag_dehn_twist}
\caption{Heegaard diagram $\H (M_{\tau_C^{-1}})$.}
\label{fig:heeg_diag_dehn_twist_connecting_curve}
\end{figure}
\begin{figure}
\caption{Bimodule ${}^{\B(\Zpmc_2)}N(\tau_E)_{\B(\Zpmc_2)}$.}
\label{fig:E_bimodule}
\end{figure}
\begin{figure}
\caption{Bimodule ${}^{\B(\Zpmc_2)}N(\tau_C^{-1})_{\B(\Zpmc_2)}$.}
\label{fig:C_inv_bimodule}
\end{figure}
\begin{figure}
\caption{Bimodule ${}^{\B(\Zpmc_2)}\widehat{CFDD}(\tau_E)^{\A(\Zpmc_2, g -1)}$.}
\label{fig:arc-slide_DD_bimodule}
\end{figure}
\subsection{Method to compute Hochschild homology}\label{alg_hoh}
\begin{wrapfigure}{r}{0.4\textwidth}
\includegraphics[width=0.4\textwidth]{./figures/heegard_diagram_id_bounded-eps-converted-to.pdf}
\caption{Heegaard diagram for $M_{\id}$ with no periodic domains, where $\id\colon F^\circ(\Zpmc_2) \rightarrow F^\circ(\Zpmc_2)$ is the identity mapping class.}
\label{fig:heegard_diagram_id_bounded}
\end{wrapfigure}
It is the Hochschild homology of a bimodule that we are going to equate with a version of fixed point Floer cohomology. Thus we would like to be able to compute it. The method from~\cite[Section~2.3.5]{LOT-bim} for computing Hochschild homology for type $DA$ bimodules works well, as long as the bimodule is bounded (see~\cite[Definition 2.2.46]{LOT-bim}). We will be computing Hochschild homology for many bimodules that are \emph{not bounded}, including all bimodules~\eqref{ten_bimodules}, and so the method does not apply off the shelf. To address this we multiply a given bimodule by a certain bounded bimodule on the left and on the right such that the $A_\infty$ homotopy equivalence class does not change:
{\color{white} this is a} $\displaystyle [\mathbb{I}]^b \boxtimes {} N(\phi)\boxtimes [\mathbb{I}]^b \simeq N(\phi).$
\noindent We now describe the construction of ${}^{\B(\Zpmc)}[\mathbb{I}]^b_{ \B(\Zpmc)}$.
Tensoring with the identity bimodule ${}^{\B(\Zpmc)}N(\id)_{\B(\Zpmc)} \simeq {}^{\B(\Zpmc)}[\mathbb{I}]_{\B(\Zpmc)}$ (see~\cite[Definition~2.2.48]{LOT-bim}) does not change the $A_\infty$ homotopy equivalence class. Thus to obtain the bimodule $[\mathbb{I}]^b$ it is enough to find a way to change ${}^{\B(\Zpmc)}[\mathbb{I}]_{\B(\Zpmc)}$ such that it becomes bounded, but does not change its $A_\infty$ homotopy equivalence class.
In the genus two case, we claim that the needed bimodule ${}^{\B(\Zpmc_2)}[\mathbb{I}]^b_{ \B(\Zpmc_2)}$ is depicted in Figure~\ref{fig:id_bounded_bimodule}. The graph in that figure does not have any cycles, and thus the bimodule is bounded. The program~\cite[Showcase~11]{Pyt} finds that canceling the four differentials $c_1 \rightarrow c_2$, $t_1 \rightarrow t_2$, $z_1 \rightarrow z_2$, and $w_1 \rightarrow w_2$ in ${}^{\B(\Zpmc_2)}[\mathbb{I}]^b_{ \B(\Zpmc_2)}$ gives ${}^{\B(\Zpmc_2)}[\mathbb{I}]_{\B(\Zpmc_2)}$, hence ${}^{\B(\Zpmc_2)}[\mathbb{I}]^b_{\B(\Zpmc_2)} \simeq {}^{\B(\Zpmc_2)}[\mathbb{I}]_{\B(\Zpmc_2)}$. Using this bounded identity bimodule, as well as~\cite[Proposition~2.3.54]{LOT-bim}, in~\cite{Pyt} we implemented an algorithm that finds the Hochschild homology of any type $DA$ bimodule over $\B(\Zpmc_2)$. The resulting computations of Hochschild homology became the basis of Conjecture~\ref{conj:in_intro}.
The bounded model was guessed based on the diagram in Figure~\ref{fig:heegard_diagram_id_bounded} (three intersections on the left side of the diagram are omitted for generators $z_1,z_2,c_1,c_2,t_1,t_2,w_1,w_2$), which is the Heegaard diagram from Figure~\ref{fig:heegard_diagram_id} but with perturbed blue curves so that there are no periodic domains.
\begin{figure}
\caption{Bimodule ${}^{\B(\Zpmc_2)}[\mathbb{I}]^b_{\B(\Zpmc_2)}$.}
\label{fig:id_bounded_bimodule}
\end{figure}
\Needspace{8\baselineskip}
\section{Background on fixed point Floer cohomology}\label{sec:fixed point Floer}
In this section, we sketch the definition of \emph{fixed point Floer cohomology}, and describe the existing computational methods.
Fixed point Floer cohomology was initially defined by Floer in~\cite{Flo} for symplectomorphisms which are Hamiltonian isotopic to the identity. It was extended to other symplectomorphisms by Dostoglou and Salamon in~\cite{DS}. Seidel in~\cite{Sei1} studied fixed point Floer cohomology of Dehn twists on surfaces. In~\cite{Sei2} Seidel defined fixed point Floer cohomology for any mapping class $\phi$ of a surface, proving that the choice of the symplectic representative of $\phi$ does not matter.
\Needspace{8\baselineskip}
\subsection{Case of a closed surface}
Consider a closed oriented surface $\Sigma$ with genus $g>1$. The construction of fixed point Floer cohomology $HF(\phi)$ for orientation preserving mapping classes $\phi \in MCG(\Sigma)$ works as follows:
\begin{enumerate}
\item We first choose a symplectic area form $\omega$ on $\Sigma$, and an area-preserving monotone representative $\phi$ of the mapping class. See~\cite{Sei2} for the definition of a \emph{monotone} representative, and for the technical discussion of why Floer cohomology does not depend on the choice of such a representative.
\item Next, we want to make sure that $\phi$ has only non-degenerate fixed points, i.e. at the fixed points we want $\det(d\phi - \id )\neq 0$. This is usually achieved by perturbing $\phi$ by the time-one isotopy $\psi^1_{X_{H_t}}$ along the Hamiltonian vector field $X_{H_t}$, where $H_t:\Sigma \rightarrow {\mathbb R}$ is a time-dependent generic Hamiltonian.
\item Now, the chain complex $CF_*(\phi)$ is generated over $\mathbb{F}_2$ by the fixed points of $\phi$. Non-degeneracy of fixed points implies that they are isolated, and so $CF_*(\phi)$ is finitely generated. Note that the fixed points of $\phi$ correspond to the constant sections of the fiber bundle $T_{\phi}\xrightarrow{\Sigma} S^1 $, where $T_{\phi}$ is the mapping torus $T_{\phi}=\Sigma \times [0,1] / (\phi(p),0)\sim (p,1)$.
\item Next we need to pick an almost complex structure $J$ on $T_{\phi} \times {\mathbb R}$. First, we pick a generic time-dependent almost complex structure $j$ on $\Sigma$, and then extend it to the rest of the tangent space of $T_{\phi} \times {\mathbb R}$ naturally, i.e. the direction of the circle inside $T_{\phi}$ and the direction of $\mathbb{R}$ are interchanged by $J$.
\item The differential $\partial: CF_*(\phi) \rightarrow CF_{*-1}(\phi)$ is now defined by counting the number of points in the moduli space $\mathcal M_1(x,y)$ of pseudo-holomorphic cylinder sections of the fiber bundle $T_\phi \times {\mathbb R} \xrightarrow{\Sigma} S^1 \times {\mathbb R} $, which limit to the constant sections $x$ and $y$ of $T_{\phi}\xrightarrow{\Sigma} S^1 $ at $+\infty$ and $-\infty$ respectively. Namely, if $x_i$ are all constant sections of $T_{\phi}\xrightarrow{\Sigma} S^1 $ (remember that these correspond to fixed points of $\phi$), then
$$\partial (x_j) = \sum_{x_i} \#( \mathcal M_1(x_j,x_i) /{\mathbb R} ) \cdot x_i.$$
Note that only the index one cylinders are counted (i.e. those which come in a $1$-dimensional family), up to translation along $\R$. The differential goes from $+\infty$ to $-\infty$, as in Morse homology.
\item This differential satisfies $\partial^2=0$, and passing to the homology of the dual complex $HF^*(\phi)=H(CF^*(\phi),d)$ gives \emph{fixed point Floer cohomology} --- an invariant, which depends only on the mapping class $\phi \in MCG(\Sigma)$. The $\mathbb{Z}_2$-grading on this invariant is provided by the sign of $\det(d\phi - \id )$ at fixed points.
\end{enumerate}
In the construction above we omitted a lot of deep and hard pseudo-holomorphic theory: one has to study the moduli space $\mathcal M(x_j,x_i)$, prove that the sum $\partial (x_j) = \sum_{x_i} \#( \mathcal M(x_j,x_i) /{\mathbb R} ) \cdot x_i$ is finite, prove that $\partial^2=0$, and also prove that the homotopy equivalence class of $CF^*(\phi)$ does not depend on all of the choices made: generic area-preserving monotone representative $\phi$, and the complex structure $J$. We refer the reader to papers~\cite{Gau,CC,Sei3II,Ul} and references therein for discussion of these issues, and further details.
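On the purely algebraic side, once the holomorphic counts have produced a boundary matrix $\partial$ over $\mathbb{F}_2$ with $\partial^2=0$, extracting the total dimension of the homology is simple linear algebra: it equals the number of generators minus twice the rank of $\partial$. A minimal sketch with made-up data (our illustration, not tied to any particular $\phi$):

```python
# Toy bookkeeping over F_2: for a square boundary matrix d (rows indexed
# by generators, row i = the differential of generator i) with d*d = 0,
# dim H = (#generators) - 2 * rank_{F_2}(d).
def rank_f2(rows):
    rows = [r[:] for r in rows]
    rank = 0
    for col in range(len(rows[0]) if rows else 0):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def homology_dim(d):
    return len(d) - 2 * rank_f2(d)

# Four generators with the single differential x1 -> x2: homology has dimension 2.
d = [[0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
```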
\subsection{Case of a surface with boundary}
There is a natural generalization of the above construction to surfaces with boundary. Suppose $\Sigma$ is an oriented surface of any genus with $n\neq 0$ boundary components $U_1 \cup U_2 \cup \dots \cup U_n$.
We will consider orientation preserving mapping classes $\phi \in MCG_0(\Sigma)$, pointwise fixing the boundary.
\begin{enumerate}
\item We first choose a symplectic area form $\omega$ on $\Sigma$, and an exact area-preserving representative $\phi$ of a mapping class. See~\cite[Appendix C]{Gau}, or~\cite[Lemma~3.3]{Ul} for an explanation of why the construction does not depend on this choice.
\item As in the closed case, we want every fixed point to be non-degenerate, and therefore isolated. Here comes the key difference from the closed surface case: because $\phi|_{\partial \Sigma}=\id|_{\partial \Sigma}$, we will need to perturb $\phi$ near the boundary. Thus, in order to specify a perturbation, as an input we will also take a decoration of every boundary component with a sign, which tells us how the perturbation behaves near that boundary component. If $U_i$ is decorated by $(+)$, then the perturbation in the neighborhood of $U_i$ should be a twist in the direction of the natural orientation of $U_i$ (or, equivalently, along the Reeb flow on the boundary, in the terminology of contact geometry). This corresponds to perturbation along a Hamiltonian vector field with the Hamiltonian $H$ having a time-independent local maximum on $U_i$, see Figure~\ref{fig:perturbation_conventions}. If $U_j$ is decorated by $(-)$, then the perturbation should be a twist against the natural orientation, i.e. the Hamiltonian should have a local minimum on $U_j$. These twists near the boundary should be small enough, i.e. $\le 2\pi$ if one full twist is $2\pi$. The unions of positively and negatively decorated components will be denoted
\begin{equation}\label{dec}
U^+ = \bigcup_{\substack{\text{positively}\\\text{decorated}}} U_i^+ \qquad \text{and} \qquad U^- = \bigcup_{\substack{\text{negatively}\\\text{decorated}}} U_j^-
\end{equation}
\item The rest of the construction is the same as in the closed surface case, and we denote the resulting fixed point Floer cohomology by
$$HF^*(\phi\:;\: U^+,U^-).$$
\end{enumerate}
\begin{figure}
\caption{Perturbation twists near the boundary.}
\label{fig:perturbation_conventions}
\end{figure}
\begin{remark}
Notice that the naming of the twists comes from comparing the direction of the twist with the orientation of the boundary. It is not related to positive or negative Dehn twists: in fact, the positive $(+)$ direction of twisting corresponds to left handed twisting, which appears in negative (left handed) Dehn twists, while the $(-)$ direction of twisting corresponds to right handed twisting, which appears in positive (right handed) Dehn twists.
\end{remark}
\Needspace{8\baselineskip}
\subsection{Existing computational methods}\label{subsec:exist_comp_methods}
In the case of the identity mapping class the Floer cohomology is the same as Morse cohomology with respect to the Hamiltonian we use for perturbing $\id$. See~\cite[Lemma~3.9]{Sei3II} for a proof and references to the original works. Thus we have
$$HF^*(\id\:;\: U^+,U^-)=MH^*(\Sigma \:;\ H \! \! \uparrow^{U^+}_{U^-}),$$
where by $MH^*$ we denote Morse cohomology. It is equal to the relative cohomology
$$H^*(\Sigma, \ U^-),$$
because the Hamiltonian has local minimum on $U^-$ and local maximum on $U^+$.
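For example (a standard long exact sequence computation, included here as an illustration), let $\Sigma$ have genus $g$ and two boundary circles, with $U^-=U_1$:

```latex
\[
0 \to H^0(\Sigma,U_1) \to H^0(\Sigma) \to H^0(U_1)
  \to H^1(\Sigma,U_1) \to H^1(\Sigma) \to H^1(U_1) \to H^2(\Sigma,U_1) \to 0.
\]
```

Here $H^0(\Sigma)\to H^0(U_1)$ is an isomorphism, and $H^1(\Sigma)=(\mathbb{F}_2)^{2g+1}$ surjects onto $H^1(U_1)=\mathbb{F}_2$ because the boundary circle is homologically essential, so $H^*(\Sigma,U_1)=(\mathbb{F}_2)^{2g}$ is concentrated in degree $1$; for $g=2$ this recovers the rank four answer appearing in Section~\ref{supporting_comps}.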
The first computations for non-trivial mapping classes were done by Seidel in~\cite{Sei1}. Suppose $\phi$ is a composition of right handed Dehn twists along curves $R=\{R_1,R_2,\dots,R_l\}$ and left handed Dehn twists along curves $L=\{L_1,L_2,\dots,L_k\}$. Suppose that all the curves are disjoint, their complement has no disc components, and that no $L_i$ is homotopic to $R_j$. Then
$$HF^*(\phi\:;\: U^+,U^-)=H^*(\Sigma - L, \ R \cup U^-)$$
for arbitrary decorations of boundary components, as in~\eqref{dec}.
This result is again achieved via reducing the computation to the Morse cohomology, where the Hamiltonian has local minimum on the curves $R_i$ and local maximum on the curves $L_j$.
Then Gautschi in~\cite{Gau} computed fixed point Floer cohomology for algebraically finite mapping classes (these include periodic mapping classes, and also reducible mapping classes which restrict to periodic classes on each component). Eftekhary in~\cite{Eft} then generalized Seidel's work to Dehn twists along curves $\{R_1,R_2,\dots,R_l\}$ which form a forest, i.e. the intersection graph (with $R_i$ being vertices and intersection points $R_i\cap R_j$ being edges) does not contain cycles; this is the most useful result for us, and so we quote it in detail below. The last computations were done by Cotton-Clay in~\cite{CC}, where he showed how to compute fixed point Floer cohomology for all pseudo-Anosov mapping classes and for all reducible mapping classes (including those with pseudo-Anosov components). Thus there is a way to compute fixed point Floer cohomology for any mapping class.
The following is a generalization of Eftekhary's work to the case with boundary.
\begin{theorem}[Eftekhary]\label{Eftekhary's theorem}
Suppose $\Sigma$ is a surface, whose boundary, if non-empty, is divided into positively and negatively decorated components $U=U^+ \sqcup U^-$, as in~\eqref{dec}. Suppose $\phi \in MCG_0(\Sigma, \partial \Sigma = U)$ is a mapping class equal to a composition of right handed Dehn twists along the curves $R=\{R_1,R_2,\dots,R_l\}$, and left handed Dehn twists along the curves $L=\{L_1,L_2,\dots,L_m\}$. Assume that
\begin{itemize}
\item $R$ is a forest;
\item $L$ is a forest;
\item $L \cap R = \emptyset $;
\item no $L_i$ is homotopic to $R_j$;
\item all the curves $L_i$ and $R_j$ are homologically essential.
\end{itemize}
Then
$$
HF^*(\phi\:;\: U^+,U^-)=H^*(\Sigma - L, \ R \cup U^-).
$$
\end{theorem}
We will use this theorem to perform computations of fixed point Floer cohomology in Section~\ref{supporting_comps}.
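The forest hypotheses of the theorem are purely combinatorial and can be checked mechanically. A minimal union-find sketch (illustrative helper, not part of~\cite{Pyt}), where each geometric intersection point contributes a separate edge:

```python
# Check that the intersection graph of a set of curves is a forest:
# vertices are the curves, and every intersection point is a separate
# edge (so two curves meeting in two points already create a cycle).
def is_forest(curves, intersection_points):
    parent = {c: c for c in curves}

    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]  # path halving
            c = parent[c]
        return c

    for a, b in intersection_points:
        ra, rb = find(a), find(b)
        if ra == rb:  # the new edge closes a cycle
            return False
        parent[ra] = rb
    return True

# The genus two chain A, B, C, D, E with consecutive single intersections.
chain = [('A', 'B'), ('B', 'C'), ('C', 'D'), ('D', 'E')]
```

Note that two curves intersecting in two points give a doubled edge, which the check correctly rejects as a cycle.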
\Needspace{8\baselineskip}
\section{Conjectural isomorphism}\label{sec:conj_iso}
\Needspace{8\baselineskip}
\subsection{Statement}\label{sec:statement}
Here we state our main conjecture, around which the paper revolves. As in Section~\ref{sec:bimodule from bordered}, we consider the strongly based mapping class group of the genus $g$ surface with one boundary component. Because we want to be able to take fixed point Floer cohomology of $\phi$, we assume that the genus is greater than one.
Given a mapping class $\phi\colon (\Sigma,\partial \Sigma=U_1=S^1) \rightarrow (\Sigma,\partial \Sigma=U_1=S^1)$, let us consider the induced mapping class $\tilde{\phi}\colon (\tilde{\Sigma},\partial \tilde{\Sigma}=U_1 \cup U_2) \rightarrow (\tilde{\Sigma},\partial \tilde{\Sigma}=U_1 \cup U_2)$, where $\tilde{\Sigma}=\Sigma \setminus D^2$ is obtained by removing a disc in a small enough neighborhood of the boundary, such that $\phi$ is the identity on that neighborhood.
\begin{conjecture}\label{conj}
For every mapping class $\phi \in MCG_0(\Sigma,\partial\Sigma=S^1=U_1)$ there is an isomorphism of $\mathbb{Z}_2$-graded vector spaces
$$
HH_*(N(\phi^{-1})) \cong HF^{*+1}(\tilde{\phi}\:;\: U_2^+,U_1^-).$$
\end{conjecture}
In the rest of the paper we will investigate the conjecture from different angles. First, from the practical and concrete point of view: by performing many computations in Section~\ref{supporting_comps} we will obtain strong evidence that the conjecture is true. Then, in Section~\ref{sec:bordered<->Fuk}, we will outline the symplectic geometric interpretation of bordered Heegaard Floer theory. Based on that, in Section~\ref{sec:LF}, we will reinterpret Conjecture~\ref{conj} in the context of the Fukaya category: we will see that it is a special case of a more general conjecture of Seidel~\cite[Conjecture 7.18]{Sei3II}, which states that the open-closed map in the context of the Fukaya-Seidel category of a Lefschetz fibration is an isomorphism.
\Needspace{8\baselineskip}
\subsection{Supporting computations}\label{supporting_comps}
To support Conjecture~\ref{conj}, we perform computations in the genus two case. From the bimodule side, the key results and tools we use are the computation of the Dehn twist bimodules in Section~\ref{subsec:bim_comps}, and the computer program~\cite{Pyt} we wrote to tensor bimodules and compute their Hochschild homologies. From the fixed point Floer cohomology side, we heavily exploit Theorem~\ref{Eftekhary's theorem}. We performed many computations, and all of them confirmed the conjecture; below we describe five of them, in each case showing that the invariants $HH_*(N(\phi^{-1}))$ and $HF^{*+1}(\tilde{\phi}\:;\: U_2^+,U_1^-)$ are isomorphic.
As in Section~\ref{subsec:bim_comps}, we fix a set of curves generating the mapping class group as in Figure~\ref{fig:Sigma_2}, and use a parameterization $\Sigma_2 \cong F^\circ(\Zpmc_2)$ as in Figure~\ref{fig:Sigma_2_parameterized}.
\begin{computation}[$\phi=\id$]\label{comp:id}
Taking the identity bimodule $N(\id)=[\mathbb I]$ as an input, the program~\cite[Showcase~4]{Pyt} finds that Hochschild homology is
$$HH_*(N(\id))=(\mathbb{F}_2)^4_{(0)},$$
concentrated in grading 0. Applying Theorem~\ref{Eftekhary's theorem}, we obtain that fixed point Floer cohomology in this case is also four-dimensional:
$$HF^*(\tilde{\id}\:;\: U_2^+,U_1^-)=MH^*( \tilde{\Sigma}_2;H \! \! \uparrow^{U_2^+}_{U_1^-})=H^*(\tilde{\Sigma}_2, \ U_1)=(\mathbb{F}_2)^4_{(1)},$$
concentrated in grading 1, see Figure~\ref{fig:HF-id-} for an illustration. These computations confirm Conjecture~\ref{conj} for $\phi=\id\colon(\Sigma_2,\partial \Sigma_2) \rightarrow (\Sigma_2,\partial \Sigma_2)$.
\begin{figure}
\caption{Computation of $HF^*(\tilde{\id}\:;\: U_2^+,U_1^-)$.}
\label{fig:HF-id-}
\end{figure}
\end{computation}
\begin{computation}[$\phi=\tau_l$]\label{comp:DehnTwist}
Suppose $\phi = \tau_l$ is a right handed Dehn twist along any of the curves $A,~B,~C,~D,$ or $E$. Then the program~\cite[Showcase~5]{Pyt} finds that Hochschild homology is
$$HH_*(N(\tau_l^{-1}))=(\mathbb{F}_2)^4_{(0)}.$$
Applying Theorem~\ref{Eftekhary's theorem}, we obtain that fixed point Floer cohomology in this case is
$$HF^*(\tilde{\tau_l}\:;\: U_2^+,U_1^-)\overset{(1)}{=}H^*(\tilde{\Sigma}_2,l \cup U_1)=(\mathbb{F}_2)^4_{(1)}.$$
Equality $(1)$ is obtained by cutting $\tilde{\Sigma}_2$ along $l$ and computing Morse cohomology with respect to a Hamiltonian as in Figure~\ref{fig:HF-one_right_twist-}. These computations confirm Conjecture~\ref{conj} for $\phi=\tau_l\colon(\Sigma_2,\partial \Sigma_2) \rightarrow (\Sigma_2,\partial \Sigma_2)$.
\begin{figure}
\caption{Computation of $HF^*(\tilde{\tau_l}\:;\: U_2^+,U_1^-)$.}
\label{fig:HF-one_right_twist-}
\end{figure}
\end{computation}
\begin{computation}[$\phi=\tau_l^{-1}$]\label{comp:invDehnTwist}
For left handed Dehn twists we have the same answers: the program~\cite[Showcase~6]{Pyt} finds
$$HH_*(N(\tau_l))=(\mathbb{F}_2)^4_{(0)},$$
and Theorem~\ref{Eftekhary's theorem} implies
$$HF^*(\tilde{\tau_l}^{-1}\:;\: U_2^+,U_1^-)\overset{(1)}{=}H^*(\tilde{\Sigma}_2-l,U_1)=(\mathbb{F}_2)^4_{(1)}.$$
Equality $(1)$ is obtained by cutting $\tilde{\Sigma}_2$ along $l$ and computing Morse cohomology with respect to a Hamiltonian as in Figure~\ref{fig:HF-one_left_twist-}. These computations confirm Conjecture~\ref{conj} for $\phi=\tau_l^{-1}\colon(\Sigma_2,\partial \Sigma_2) \rightarrow (\Sigma_2,\partial \Sigma_2)$.
\begin{figure}
\caption{Computation of $HF^*(\tilde{\tau_l}^{-1}\:;\: U_2^+,U_1^-)$.}
\label{fig:HF-one_left_twist-}
\end{figure}
\end{computation}
\begin{proof}[Proof of Theorem~\ref{thm:int}]
Based on the computations above, we prove the isomorphism $HH_*(N(\phi^{-1})) \cong HF^{*+1}(\tilde{\phi}\:;\: U_2^+,U_1^-)$ for the identity mapping class $\phi=\id \lefttorightarrow \Sigma_2$, and also any Dehn twist $\phi=\tau \lefttorightarrow \Sigma_2$.
Computation~\ref{comp:id} proves the conjecture for $\phi=\id$, while Computations~\ref{comp:DehnTwist} and~\ref{comp:invDehnTwist} prove the conjecture for Dehn twists along curves $A,~B,~C,~D,~E$ from Figure~\ref{fig:Sigma_2}.
We now prove the isomorphism for any Dehn twist along a \emph{non-separating} curve $m$. Any such curve $m$ can be mapped to the curve $A$ by a mapping class $f$, and so any Dehn twist $\tau_m$ is conjugate to the Dehn twist $\tau_A$: $f\tau_A f^{-1}=\tau_{f(A)} =\tau_{m} $. Both fixed point Floer cohomology (this follows from the definition) and Hochschild homology (see Remark~\ref{rmk:inv_conj}) are invariant under conjugation, and thus the isomorphism $HH_*(N(\tau_A^{-1})) \cong HF^{*+1}(\tilde{\tau_A}\:;\: U_2^+,U_1^-)$ implies the isomorphism for any Dehn twist $\tau_m$.
The case of a Dehn twist along a non-trivial \emph{separating} curve $m$ is done similarly. Any such curve $m$ either separates $\Sigma_2$ into two genus one surfaces, or into a genus two surface and an annulus. Therefore $m$ can be mapped to either the curve $G_1=\partial \big( Nbd_\epsilon (A\cup B) \big)$, or the boundary curve $ G_2=\partial \big( Nbd_\epsilon(A\cup B \cup C \cup D) \big) \sim \partial \Sigma_2$. Thus it is enough to prove the conjecture for Dehn twists along $G_1$ and $G_2$. Seidel's result~\cite{Sei1} implies
\[ HF^{*}(\tilde{\tau^{\pm 1}_{G_1}}\:;\: U_2^+,U_1^-) = (\mathbb F_2)^5_{(1)}\oplus (\mathbb F_2)_{(0)}, \qquad HF^{*}(\tilde{\tau^{\pm 1}_{G_2}}\:;\: U_2^+,U_1^-) = (\mathbb F_2)^5_{(1)}\oplus (\mathbb F_2)_{(0)}. \]
By leveraging the chain relations in the mapping class group $\tau_{G_1}=(AB)^6, \ \tau_{G_2}=(ABCD)^{10}$, the program~\cite[Showcase~7]{Pyt} finds
\[ HH_*(N(\tau^{\pm 1}_{G_1}))= (\mathbb F_2)^5_{(0)}\oplus (\mathbb F_2)_{(1)}, \qquad HH_*(N(\tau^{\pm 1}_{G_2})) = (\mathbb F_2)^5_{(0)}\oplus (\mathbb F_2)_{(1)}, \]
which finishes the proof of Theorem~\ref{thm:int} in case of a Dehn twist along a separating curve.
\end{proof}
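The chain relations used in the proof are easy to unpack into sequences of generating Dehn twists, which is the form in which a mapping class word is handed to a bimodule-composition routine. A trivial sketch (illustrative helper names, not the interface of~\cite{Pyt}):

```python
# Expand the chain relations tau_{G_1} = (AB)^6 and tau_{G_2} = (ABCD)^{10}
# into plain sequences of generating Dehn twists.
def expand(word, power):
    return list(word) * power

tau_G1 = expand("AB", 6)      # 12 twists, alternating along A and B
tau_G2 = expand("ABCD", 10)   # 40 twists along A, B, C, D
```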
Experimenting with mapping classes arising from two disjoint forests of curves (i.e. those to which Eftekhary's Theorem~\ref{Eftekhary's theorem} applies), we always found the ranks of the two homologies to be equal. Let us highlight two more examples.
\begin{computation}[$\phi=\tau_A \tau_B \tau_C \tau_D$]
This mapping class is a monodromy of an open book on $S^3$ with the binding being the $(5,2)$ torus knot, and the page being the genus two surface with boundary. Either by remembering that $\widehat{HFK}(S^3, T_{(5,2)})=(\mathbb{F}_2)^5$ contributing $\mathbb{F}_2$ to each of the five Alexander gradings $-2,-1,0,1,2$, or by invoking the program~\cite[Showcase~8]{Pyt}, we get
$$\widehat{HFK}(T_{(5,2)} \:;\: -g+1)=HH_*(N(\tau_D^{-1} \tau_C^{-1} \tau_B^{-1} \tau_A^{-1}))=(\mathbb{F}_2)_{(0)}.$$
Applying Theorem~\ref{Eftekhary's theorem} gives
$$HF^*(\widetilde{\tau_A \tau_B \tau_C \tau_D}\:;\: U_2^+,U_1^-)=H^*(\tilde{\Sigma}_2,U_1 \cup A \cup B \cup C\cup D)=(\mathbb{F}_2)_{(1)},$$ confirming Conjecture~\ref{conj}. It is the lowest rank that we observed in our computations.
\end{computation}
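The rank count above can be cross-checked against the Alexander polynomial: torus knots are L-space knots, so the rank of $\widehat{HFK}$ in each Alexander grading equals the absolute value of the corresponding coefficient of $\Delta(t)$, and $\Delta_{T(p,q)}(t)=(t^{pq}-1)(t-1)/\big((t^p-1)(t^q-1)\big)$. A small self-contained sketch (our illustration):

```python
# Compute Delta_{T(5,2)}(t) = (t^10 - 1)(t - 1) / ((t^5 - 1)(t^2 - 1))
# by exact polynomial division; coefficient lists are lowest degree first.
def mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def divexact(num, den):  # den is monic; the remainder must vanish
    num = num[:]
    quo = [0] * (len(num) - len(den) + 1)
    for k in range(len(quo) - 1, -1, -1):
        c = num[k + len(den) - 1]
        quo[k] = c
        for j, b in enumerate(den):
            num[k + j] -= c * b
    assert all(x == 0 for x in num)
    return quo

t = lambda n: [-1] + [0] * (n - 1) + [1]  # the polynomial t^n - 1
alexander = divexact(mul(t(10), t(1)), mul(t(5), t(2)))
# alexander == [1, -1, 1, -1, 1]: one generator in each of the five
# Alexander gradings, matching the total rank 5 of HFK-hat(T_(5,2)).
```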
\begin{remark}
Interestingly, it is known that $\rk(\widehat{HFK}(\text{fibered knot} \:;\: -g+1))=\rk(HH_*(N(\phi)))>0$
is always true, see~\cite{BV}.
\end{remark}
\begin{computation}[$\phi=\tau_A^5 \tau_B \tau_C \tau_D \tau_E^5$]\label{ps-a}
This mapping class is pseudo-Anosov if viewed as a mapping class of a closed genus two surface (see~\cite{Eft}). In this case the program~\cite[Showcase~9]{Pyt} finds
$$HH_*(N(\phi^{-1}))=(\mathbb{F}_2)^{9}_{(1)}\oplus (\mathbb{F}_2)_{(0)},$$
and Theorem~\ref{Eftekhary's theorem} implies
$$HF^*(\tilde{\phi}\:;\: U_2^+,U_1^-)=H^*(\tilde{\Sigma}_2,U_1 \cup 5A \cup B \cup C \cup D \cup 5E)=(\mathbb{F}_2)^{9}_{(0)}\oplus (\mathbb{F}_2)_{(1)},$$
confirming Conjecture~\ref{conj}.
\end{computation}
\Needspace{8\baselineskip}
\section{Background on bordered theory vs Fukaya categories}\label{sec:bordered<->Fuk}
This section covers background material, which is needed for the subsequent Section~\ref{sec:LF}.
Let us recall how we associated a bimodule to a mapping class in Section~\ref{sec:hf_bim}. To a surface we associate a $dg$ algebra. To a mapping class we associate an $A_\infty$ bimodule over that algebra. Composition of mapping classes corresponds to the box tensor product of bimodules (Theorem~\ref{DA pairing}). In Section~\ref{alt_bim} below, we describe another geometric construction of the same mapping class invariant. The basis for that construction is the reinterpretation of bordered Heegaard Floer theory in terms of the partially wrapped Fukaya category of the surface, which is due to Auroux~\cite{Aur1,Aur2}, and which we recall in Section~\ref{aur_constr}.
\Needspace{8\baselineskip}
\subsection{Auroux's construction}\label{aur_constr}
Consider a surface $\Sigma$ with one or more boundary components. Fix a set of points (\say{stops}) $Z$ on the boundary $ \partial \Sigma$, and fix a Liouville domain structure on $\Sigma$; the latter consists of an exact symplectic form $\omega = d \theta$ such that the Liouville vector field $X_\theta$ (defined by the equation $\omega(X_\theta,\cdot)=\theta$) points outwards through the boundary. Assume that every boundary component contains at least one point of $Z$. To all this data we can associate the partially wrapped Fukaya category $\F_Z(\Sigma)$ (Auroux also considers Fukaya categories of $Sym^k(\Sigma)$, but we only need the $k=1$ case). We break down the construction of $\F_Z(\Sigma)$ into several steps:
\begin{enumerate}
\item We first need to pass from $\Sigma$ to $\hat{\Sigma}$, which is a Liouville manifold obtained by completing the surface $\Sigma$ with a cylindrical end. In more detail, $\hat{\Sigma}$ is constructed by taking the symplectization of the boundary $([0,+\infty) \times \partial \Sigma, d(r \cdot \theta) )$, and gluing its $0 < r \leq 1$ part to the Liouville-flow collar neighborhood of $\partial \Sigma$ via $i\colon((0,1] \times \partial \Sigma, d(r \cdot \theta) ) \rightarrow ((-\infty,0]\times \partial \Sigma, \omega) \subset \Sigma $, such that $i((e^r,x))=(r,x)$.
\item Objects of the partially wrapped Fukaya category $\F_Z(\Sigma)$ are curves of two types:
\begin{itemize}
\item Closed exact embedded Lagrangian curves in $\hat{\Sigma}$;
\item Non-compact properly embedded curves (arcs for short) such that the ends at infinity of $\hat{\Sigma}$ stabilize to be rays $(r,0)$ in the cylindrical end.
\end{itemize}
\item Morphism spaces are Lagrangian Floer cochain complexes $hom_{\F_Z(\Sigma)}(L_1,L_2)=CF_\text{Lagr}^*(\widetilde{L_1},L_2)$, where $\widetilde{L_1}$ is the Lagrangian submanifold $L_1$ perturbed by a generic Hamiltonian. Because we consider non-compact Lagrangians, the behavior of the Hamiltonian perturbation at infinity of $\hat{\Sigma}$ affects $hom_{\F_Z(\Sigma)}(L_1,L_2)$ in an essential way. Auroux constructs a specific Hamiltonian, which wraps the ray of the arc around the cylindrical end until it reaches a stop, i.e. one of the rays in $Z \times [1,+\infty)$; the best way to understand the resulting perturbation is to look at Figure~\ref{fig:partially_wrapped_perturbation}, where $Z=\{z\}$. See~\cite{Aur1} for the details of the construction of this Hamiltonian.
\item $A_\infty$ operations
\begin{equation}\label{a_inf_op}
\mu_d: hom_{\F_Z(\Sigma)}(L_0,L_1)\o \dots \o
hom_{\F_Z(\Sigma)}(L_{d-1},L_d) \rightarrow
hom_{\F_Z(\Sigma)}(L_0,L_d)
\end{equation}
are given by counting holomorphic discs with $d+1$ marked points on the boundary (as usual for Fukaya categories). In this step one needs to pick consistent perturbations and complex structures, see~\cite{Aur1} for the details.
\item The final step is to prove that $A_\infty$ operations satisfy $A_\infty$ relations, and that the quasi-equivalence class of $\F_Z(\Sigma)$ does not depend on all the choices made in the definition (perturbations and complex structures). Again, for this we refer the reader to the original article~\cite{Aur1}.
\end{enumerate}
\begin{figure}
\caption{Perturbation near infinity for the partially wrapped Fukaya category of a surface.}
\label{fig:partially_wrapped_perturbation}
\end{figure}
As shown in~\cite[Theorem~1]{Aur2}, if $\alpha_1,\dots,\alpha_k \in Ob(\F_Z(\Sigma))$ are non-intersecting arcs, and their complement is a set of discs each containing one point from $Z$, then they \emph{generate} $\F_Z(\Sigma)$. This means that every object $L\in Ob(\F_Z(\Sigma))$, considered as an object in a certain enlarged category of twisted complexes $L\in Ob(Tw \F_Z(\Sigma))$, is quasi-isomorphic to a direct summand of iterated mapping cones between the generating objects $\alpha_1,\dots,\alpha_k$, see~\cite[Section~3]{Aur3} for the details. The deep theorem behind this fact is that Lefschetz thimbles generate the Fukaya-Seidel category associated to a Lefschetz fibration~\cite[Theorem~18.24]{Sei5}. The Fukaya-Seidel category is closely related to the partially wrapped Fukaya category, see Section~\ref{sec:LF}.
Notice that the surface $F^\circ(\Zpmc)$ from Section~\ref{sec:bimodule from bordered}, associated to a genus $g$ pointed matched circle $\Zpmc$, contains a basepoint $z$ in its boundary, which can serve as a stop, giving rise to the Fukaya category $\F_z(F^\circ(\Zpmc))$. Moreover, $F^\circ(\Zpmc)$ contains a distinguished set of generators of $\F_z(F^\circ(\Zpmc))$, corresponding to the matched pairs of points in $\Zpmc$; see the surface on the right of Figure~\ref{fig:pointed_matched_circle}.
\begin{theorem}[Auroux]\label{thm:aur}
Suppose $\alpha_1,\dots,\alpha_{2g}$ are arcs in $F^\circ(\Zpmc)=\Sigma$ corresponding to the matched pairs in $\Zpmc$. Then the bordered one-strand-moving $dg$ algebra is quasi-isomorphic to the $A_\infty$ $hom$-algebra of the partially wrapped Fukaya category with respect to the generating set $\alpha_1,\dots,\alpha_{2g}$, i.e.
$$
\B(\Zpmc) \simeq \A(\F_z(\Sigma)) \coloneqq \bigoplus\limits_{1\leq i,j\leq 2g} hom_{\F_z(\Sigma)} (\alpha_i,\alpha_j).
$$
\end{theorem}
\begin{remark}
The full statement of Auroux's theorem involves all the summands of the bordered algebra and the Fukaya categories of symmetric products: for $0\leq k \leq 2g$ one has
$$ \A(\Zpmc,-g+k) \simeq \A(\F_z(Sym^k(\Sigma))) = \bigoplus\limits_{1\leq i,j\leq C_{2g}^k} hom_{\F_z(Sym^k(\Sigma))} (\lambda_i,\lambda_j),$$ where $\{ \lambda_i \}$ is a set of generators coming from the products of $k$ arcs in $\alpha_1,\dots,\alpha_{2g}$.
\end{remark}
Although we will not need this, let us mention that Auroux reinterpreted in terms of the Fukaya categories not only the bordered algebras, but also the type $A$ bordered Heegaard Floer modules, via the Yoneda embedding construction.
\begin{example}[Torus algebra]
Let us illustrate the above theorem on a torus. The way to get the torus bordered algebra $\B(\Zpmc_1)$ from a pointed matched circle $\Zpmc_1$ is pictured in Figure~\ref{fig:Z_1--B-Z_1-}. The Fukaya categorical interpretation of the torus bordered algebra $\B(\Zpmc_1)$ is pictured in Figure~\ref{fig:bordered=partially_wrapped}. The elements of the algebra appear as generators of Lagrangian Floer complexes. Note that we cannot see the product structure (i.e. holomorphic triangles) on this picture, because for that we need to consistently pick perturbations for the three Lagrangians involved in the product operation.
\begin{figure}
\caption{Torus bordered algebra, constructed from a pointed matched circle.}
\label{fig:Z_1--B-Z_1-}
\end{figure}
\begin{figure}
\caption{Elements of the torus bordered algebra $\B(\Zpmc_1)$, viewed as elements of morphism spaces between generating objects $\alpha_1,\alpha_2$ of the partially wrapped Fukaya category of the torus.}
\label{fig:bordered=partially_wrapped}
\end{figure}
\end{example}
\Needspace{8\baselineskip}
\subsection{Alternative bimodule construction via Fukaya categories}\label{sec:bimodule_via_Fukaya}\label{alt_bim}
Suppose we are given an exact self-diffeomorphism $\phi$ of $(\Sigma, \partial \Sigma = S^1)$, pointwise fixing the boundary. Suppose that the partially wrapped Fukaya category with one stop $\F_z(\Sigma)$ is generated by $\alpha_1,\ldots,\alpha_{2g}$. Then there is a standard way to associate to $\phi$ a \emph{graph} bimodule: an $A_\infty$ bimodule of $AA$ type $_{\A(\F_z(\Sigma))}N_{\F_z}(\phi)_{\A(\F_z(\Sigma))}$ over the $hom$-algebra $\A(\F_z(\Sigma)) = \bigoplus\limits_{1\leq i,j\leq 2g} hom_{\F_z(\Sigma)} (\alpha_i,\alpha_j)$.
\begin{enumerate}
\item The bimodule as a vector space is equal to
\begin{equation} \label{eq:bimodule from Fuk}
N_{\F_z}(\phi)=\bigoplus\limits_{1\leq i,j\leq {2g}} hom_{\F_z(\Sigma)} (\alpha_i,\hat{\phi}(\alpha_j)),
\end{equation}
where $\hat{\phi}$ is an exact compactly supported self-diffeomorphism of the completion $\hat{\Sigma}$ induced by $\phi$. From now on we will abuse notation and use $\phi$ for $\hat{\phi}$.
\item The higher actions are given using the $A_\infty$ operations~\eqref{a_inf_op}. For example, the action $m^{1|1|1}\colon \A(\F_z(\Sigma)) \o N_{\F_z}(\phi) \o \A(\F_z(\Sigma)) \rightarrow N_{\F_z}(\phi)$ is given via the following operation counting holomorphic discs with four marked points:
\begin{align*}
{\scriptstyle
\mu_3: \left( \bigoplus\limits_{1\leq i,j\leq {2g}} hom_{\F_z(\Sigma)} (\alpha_i,\alpha_j) \right) }
& {\scriptstyle \o \left( \bigoplus\limits_{1\leq i,j\leq {2g}} hom_{\F_z(\Sigma)} (\alpha_i,\phi(\alpha_j)) \right) \o \left( \bigoplus\limits_{1\leq i,j\leq {2g}} hom_{\F_z(\Sigma)} (\phi(\alpha_i),\phi(\alpha_j)) \right) \rightarrow } \\
& {\scriptstyle \rightarrow \bigoplus\limits_{1\leq i,j\leq {2g}} hom_{\F_z(\Sigma)} (\alpha_i,\phi(\alpha_j)) }.
\end{align*}
\end{enumerate}
Note that the bimodule $_{\A(\F_z(\Sigma))}N_{\F_z}(\phi)_{\A(\F_z(\Sigma))}$ is of $AA$ type in the bordered theory terminology, whereas $^{\B(\Zpmc)}N(\phi)_{\B(\Zpmc)}$ from Section~\ref{sec:bimodule from bordered} was of $DA$ type. Remember that if $F^\circ(\Zpmc)=\Sigma$, by Theorem~\ref{thm:aur} we have $\B(\Zpmc)\simeq \A(\F_z(\Sigma))$, and so both bimodules are over the same algebra $\B(\Zpmc)$. We now unify the two constructions of bimodules.
\begin{proposition}\label{prop:bims_are_eq}
Suppose $\alpha_1,\dots,\alpha_{2g}$ are arcs in $F^\circ(\Zpmc)=\Sigma$ corresponding to the matched pairs in the pointed matched circle $\Zpmc$. Suppose $\phi \in MCG_0(F^\circ(\Zpmc),\partial F^\circ(\Zpmc))$. Then the modulification of the type $DA$ bimodule $^{\B(\Zpmc)}N(\phi)_{\B(\Zpmc)}$ is homotopy equivalent to $_{\B(\Zpmc)}N_{\F_z}(\phi)_{\B(\Zpmc)}$:
$$ {}_{\B(\Zpmc)}{\B(\Zpmc)}_{\B(\Zpmc)} \boxtimes {}^{\B(\Zpmc)}N(\phi)_{\B(\Zpmc)} \simeq {}_{\B(\Zpmc)}N_{\F_z}(\phi)_{\B(\Zpmc)}.$$
\end{proposition}
For the proof we refer the reader to~\cite[Lemma~4.2]{AGW14}. The main idea is to use $\alpha$-$\beta$-bordered Heegaard diagrams introduced in~\cite{LOT-mor}.
\begin{corollary}\label{cor}
Hochschild homologies of the two bimodules are isomorphic:
$$HH_*({}^{\B(\Zpmc)}N(\phi)_{\B(\Zpmc)})\cong HH_*( {}_{\A(\F_z(\Sigma))}N_{\F_z}(\phi)_{\A(\F_z(\Sigma))} ).$$
\end{corollary}
This follows from Proposition~\ref{prop:bims_are_eq} and~\cite[Proposition~2.3.54]{LOT-bim}.
\Needspace{8\baselineskip}
\section{Theoretical evidence for the conjecture}\label{sec:LF}
Let us step back and see what we have accomplished thus far. To a surface associated to a pointed matched circle, $(\Sigma, \partial \Sigma )=F^\circ(\Zpmc)$, we associated two quasi-isomorphic algebras:
$$
\begin{tikzcd}[column sep=15pt,row sep=45pt]
& (\Sigma, \partial \Sigma )=F^\circ(\Zpmc) \arrow[dr, rightsquigarrow, "\text{Section~\ref{sec:hf_bim}}" right ] \arrow[dl, rightsquigarrow, "\text{Section~\ref{aur_constr}}" left]&
\\
\A(\F_z(\Sigma)) = \bigoplus\limits_{1\leq i,j\leq 2g} hom_{\F_z(\Sigma)} (\alpha_i,\alpha_j)&&\B(\Zpmc) \arrow[ll, leftrightarrow, "\simeq" above, "\text{Theorem~\ref{thm:aur}}" below]
\end{tikzcd}
$$
To the mapping class $\phi \lefttorightarrow (\Sigma, \partial \Sigma )=F^\circ(\Zpmc)$ we associated two homotopy equivalent bimodules, and a version of fixed point Floer cohomology. We conjectured that Hochschild homology of either of the bimodules is isomorphic to fixed point Floer cohomology, and supported Conjecture~\ref{conj} by computations in Section~\ref{supporting_comps}.
$$
\begin{tikzcd}[row sep=50pt,column sep=15pt]
&
\text{Mapping class }\phi \lefttorightarrow (\Sigma, \partial \Sigma )=F^\circ(\Zpmc)
\arrow[dl, rightsquigarrow, "\text{Section~\ref{alt_bim}}" left]
\arrow[d, rightsquigarrow, "\text{Section~\ref{sec:bimodule from bordered}}" left]
\arrow[ddr, rightsquigarrow, "\text{Section~\ref{sec:fixed point Floer}}"]
&
\\
_{\A(\F_z(\Sigma))}N_{\F_z}(\phi)_{\A(\F_z(\Sigma))}
\arrow[d, rightsquigarrow, "\text{Section~\ref{sec:hoch}}" right]
&
^{\B(\Zpmc)}N(\phi)_{\B(\Zpmc)}
\arrow[l, leftrightarrow, "\simeq" above, "\text{Prop.~\ref{prop:bims_are_eq}}" below]
\arrow[d, rightsquigarrow, "\text{Section~\ref{sec:hoch}}" left]
&
\\
HH_*(N_{\F_z}(\phi))
\arrow[rr, bend right=25, "\text{Section~\ref{subsec:O-C map}}" above, "\text{(in the double basepoint case)}" below]
&
HH_*(N(\phi))
\arrow[l, leftrightarrow, "\cong" above, "\text{Corollary~\ref{cor}}" below]
\arrow[r, leftrightarrow, "\overset{?}{\cong}" above, "\text{Conjecture~\ref{conj}}" below]
&
HF^*(\tilde{\phi}\:;\ U_2^+,U_1^-)
\end{tikzcd}
$$
In contrast to Section~\ref{supporting_comps}, where we performed concrete computations supporting Conjecture~\ref{conj}, in this section we describe some more theoretical evidence. We are going to construct the map corresponding to the lowest arrow in the diagram above. For that we will need mild generalizations of all the invariants: namely, we will need to use two basepoints instead of one in all the constructions. In Section~\ref{1bp_to_2bp_bim} we describe the known two-basepoint generalizations $N_{\F_{\{z_1,z_2\}}}(\phi)$ and $N^\text{2bp}(\phi)$ of the bimodules $N_{\F_z}(\phi)$ and $N(\phi)$. In Section~\ref{1bp_to_2bp_hf} we first describe some heuristics, which explain why $HF^*(\tilde{\phi}\:;\:U_2^+,U_1^-)$ corresponds to the one basepoint case. Based on this we introduce the double basepoint version $HF^*(\dtilde{\phi}\:;\ U_2^+,U_3^+, U_1^-)$. After that we state a double basepoint version of Conjecture~\ref{conj}:
$$HH_*(N^\text{2bp}(\phi)) \ \overset{?}{\cong} \ HF^*(\dtilde{\phi}\:;\: U_2^+,U_3^+, U_1^-).$$
In Section~\ref{LF} we cover the background material on how the Lefschetz fibration structure on a surface gives rise to a special type of Fukaya category, the Fukaya-Seidel category. Finally, in Section~\ref{subsec:O-C map} we describe the so-called open-closed map, which results in a map $HH_*(N^\text{2bp}(\phi)) \ \rightarrow \ HF^*(\dtilde{\phi}\:;\:U_2^+,U_3^+, U_1^-)$. This map is widely believed to be an isomorphism, see~\cite[Conjecture 7.18]{Sei3II}, and if that is true, it would prove the double basepoint version of Conjecture~\ref{conj}.
\Needspace{8\baselineskip}
\subsection{From one basepoint to two: bimodule}\label{1bp_to_2bp_bim}
Let us explain how to modify our previous constructions of bimodules if we want to have two basepoints instead of one. First of all, in Section~\ref{aur_constr} the partially wrapped Fukaya category was defined for any number of basepoints. In Figure~\ref{fig:partially_wrapped_2bpts} we draw a generating set of Lagrangians (red curves) in the case of two basepoints on the genus two surface, together with their perturbations (purple curves). Now we have five Lagrangian arcs as generators, instead of four in the one basepoint case (Figure~\ref{fig:pointed_matched_circle}). In general, for a genus $g$ surface $(\Sigma,\partial \Sigma)$ the number of generating arcs will be $2g+1$ and $2g$ for the two and one basepoint cases, respectively. We denote the double basepoint partially wrapped Fukaya category by $\F_{\{z_1,z_2\}}(\Sigma)$.
\begin{figure}
\caption{Generators of the partially wrapped Fukaya category $\F_{\{z_1,z_2\}}(\Sigma_2)$.}
\label{fig:partially_wrapped_2bpts}
\end{figure}
For the corresponding double basepoint version of pointed matched circle, as well as the corresponding algebra, see Figure~\ref{fig:Z_2-2bp--B-Z_2-2bp-}.
\begin{figure}
\caption{A genus two double basepoint example of how to get a $dg$ algebra out of a pointed matched circle. Paths consisting of different color arrows are prohibited.}
\label{fig:Z_2-2bp--B-Z_2-2bp-}
\end{figure}
Just as in Section~\ref{alt_bim}, we can define a graph bimodule for a mapping class $\phi \in MCG_0(\Sigma, \partial \Sigma)$ via the double basepoint partially wrapped Fukaya category $\F_{\{z_1,z_2\}}(\Sigma)$:
$$_{\A(\F_{\{z_1,z_2\}}(\Sigma))}N_{\F_{\{z_1,z_2\}}}(\phi)_{\A(\F_{\{z_1,z_2\}}(\Sigma))}=\bigoplus\limits_{1\leq i,j\leq k} hom_{\F_{\{z_1,z_2\}}(\Sigma)} (\alpha_i,\phi(\alpha_j)),$$
where $k=2g+1$ is the number of generating arcs.
Auroux's Theorem~\ref{thm:aur} works for any number of basepoints, and so if $(\Sigma,\partial \Sigma)=F^\circ(\Zpmc^\text{2bp})$, then $\A(\F_{\{z_1,z_2\}}(\Sigma)) \simeq \B(\Zpmc^\text{2bp})$.
Despite the fact that we constructed $_{\B(\Zpmc^\text{2bp})}N_{\F_{\{z_1,z_2\}}}(\phi)_{\B(\Zpmc^\text{2bp})}$, for computations we would prefer to have a type $DA$ bimodule $^{\B(\Zpmc^\text{2bp})}N^\text{2bp}(\phi)_{\B(\Zpmc^\text{2bp})}$ generalizing $N(\phi)=N^\text{1bp}(\phi)$. Such a bimodule, as in the one basepoint case, comes from a Heegaard diagram for the mapping cylinder, but equipped with two basepoints on each boundary and two arcs connecting them. The necessary machinery of bordered Heegaard diagrams with multiple basepoints was invented by Zarev in~\cite{Zarev}. In Figure~\ref{fig:heegard_diagram_id_2bpts} we draw two diagrams for the identity mapping class in the genus two case: on the right there is Zarev's bordered sutured Heegaard diagram, and on the left we drew a double basepoint Heegaard diagram, which would be a natural generalization of the one basepoint diagram. The two diagrams carry the same holomorphic information, and the bimodules coming from them are the same. They look different only because Zarev used the language of sutured manifolds and their bordered versions. In order to go from the left diagram to the right, instead of drawing basepoints and basepoint arcs, one deletes their neighborhoods and then declares the boundary coming from these neighborhoods (drawn in green) to be forbidden for holomorphic discs.
The generalization of Figure~\ref{fig:heegard_diagram_id_2bpts} to non-trivial mapping classes of genus $g$ surfaces is completely analogous to the one basepoint case. As in the one basepoint case, Proposition~\ref{prop:bims_are_eq} holds true:
$$ {}_{\B(\Zpmc^\text{2bp})}{\B(\Zpmc^\text{2bp})}_{\B(\Zpmc^\text{2bp})} \boxtimes {}^{\B(\Zpmc^\text{2bp})}N^\text{2bp}(\phi)_{\B(\Zpmc^\text{2bp})} \simeq {}_{\B(\Zpmc^\text{2bp})}N_{\F_{\{z_1,z_2\}}}(\phi)_{\B(\Zpmc^\text{2bp})}.$$
\begin{figure}
\caption{Double basepoint diagrams for the mapping cylinder of $\id\colon\Sigma_2 \rightarrow \Sigma_2$.}
\label{fig:heegard_diagram_id_2bpts}
\end{figure}
Let us now discuss the relationship between one and two basepoint cases. It turns out that the Hochschild homologies of one and two basepoint bimodules are related, namely
\begin{equation}\label{HH_for_1bp_and_2bp}
\rk(HH_*(N^\text{2bp}(\phi)))=\rk(HH_*(N(\phi)))+1.
\end{equation}
The reason is that Hochschild homology of the double basepoint bimodule is equal to knot Floer homology of the binding of the corresponding open book in the second lowest Alexander grading, where the knot in the Heegaard diagram is specified by four basepoints instead of two: $HH_*(N^\text{2bp}(\phi))=\widehat{HFK}^\text{4bp}(M_\phi^\circ,K \:;\: -g)$. The difference between the four basepoint and the usual two basepoint knot Floer homologies is known: $\widehat{HFK}^\text{4bp}(M_\phi^\circ,K)=(\mathbb{F}_2)^2 \otimes \widehat{HFK}(M_\phi^\circ,K)$, where the Alexander gradings of the two generators of $(\mathbb{F}_2)^2$ are $0$ and $-1$. Thus we have
\begin{align*}
&\rk(HH_*(N^\text{2bp}(\phi)))=\rk(\widehat{HFK}^\text{4bp}(M_\phi^\circ,K \:;\: -g))= \\
= \ &\rk \big((\mathbb{F}_2)_{(\text{A}=-1)} \otimes \widehat{HFK}(M_\phi^\circ,K \:;\: -g+1) \oplus (\mathbb{F}_2)_{(\text{A}=0)} \otimes \widehat{HFK}(M_\phi^\circ,K \:;\: -g) \big)= \\
= \ &\rk \big( \widehat{HFK}(M_\phi^\circ,K \:;\: -g+1) \oplus \widehat{HFK}(M_\phi^\circ,K \:;\: -g) \big)=\\
= \ & \rk(HH_*(N(\phi))) + 1,
\end{align*}
because the knot Floer homology of a fibered knot (i.e.\ the binding of an open book) in the lowest Alexander grading $-g$ always has rank one.
\Needspace{8\baselineskip}
\subsection{From one basepoint to two: fixed point Floer cohomology} \label{sec:HF_2bpts}\label{1bp_to_2bp_hf}
Let us first explain why the choice of fixed point Floer cohomology $HF^*(\tilde{\phi}\:;\:U_2^+,U_1^-)$ corresponds to one basepoint. The point is that there is a version of fixed point Floer cohomology $HF^\text{1bp}(\phi)$, which is equal to $HF^{*}(\tilde{\phi}\:;\: U_2^+,U_1^-)$, but is defined without deleting a second disc from the surface. In Section~\ref{sec:fixed point Floer}, we decided not to give a rigorous definition of $HF^\text{1bp}(\phi)$,
and chose to use existing methods and work with $HF^{*}(\tilde{\phi}\:;\: U_2^+,U_1^-)$.
But, because we would like to have the double basepoint version, we now indicate the setup for $HF^\text{1bp}(\phi)$. It can be defined only for infinite area surfaces with a cylindrical end, rather than compact surfaces with boundary, and so we have to work with the induced compactly supported exact self-diffeomorphisms $\phi$ on the completion $\hat{\Sigma}$. Analogously to how we chose the specific Hamiltonian perturbations near $\partial \Sigma$ in the definition of $HF^{*}(\tilde{\phi}\:;\: U_2^+,U_1^-)$, we now need to specify the behavior of the Hamiltonian perturbation near $\infty$ on $\hat{\Sigma}$. Inspired by~\cite[Section~6]{Sei3II}, we indicate the behavior of the Hamiltonian at $\infty$ on the left side of Figure~\ref{fig:1bpt_perturbation}. Upwards and downwards the Hamiltonian is linear with respect to the radial coordinate. Comparing the left and the right side of the figure (on the right we glued the blue boundaries together), we can see why such a Hamiltonian perturbation is equivalent to the one considered in the definition of $HF^{*}(\tilde{\phi}\:;\: U_2^+,U_1^-)$ --- the generators (fixed points) and the differentials in Floer cohomology on the left side and on the right side are in $1-1$ correspondence.
\begin{figure}
\caption{Left: the behavior of the Hamiltonian perturbation one needs to consider in the one basepoint version $HF^\text{1bp}(\phi)$.}
\label{fig:1bpt_perturbation}
\end{figure}
We call $HF^\text{1bp}(\phi)$ the one basepoint version because the Hamiltonian on the left side of Figure~\ref{fig:1bpt_perturbation} could be used in the definition of the partially wrapped Fukaya category with one basepoint. As indicated by the orange curve and its perturbation in Figure~\ref{fig:1bpt_perturbation}, putting the basepoint in the bottom part of infinity, and allowing Lagrangian arcs to go only to the upper part of infinity, one obtains perturbations sending all the arcs to the left, i.e.\ to the basepoint\footnote{Technically, one has to take the limit of these perturbations, because the Hamiltonian is linear with respect to the radial coordinate, as opposed to the quadratic one used in~\cite{Aur2}.}.
Now we consider the double basepoint counterpart $HF^\text{2bp}(\phi)$ of the above construction. The corresponding behavior of the Hamiltonian near $\infty$ on $\hat{\Sigma}$ is pictured on the left of Figure~\ref{fig:2bpt_perturbation}. The cohomology theory $HF^\text{2bp}(\phi)$ was developed in~\cite[Section~6]{Sei3II}, viewing $\hat{\Sigma}$ as a double branched cover of $\C$, or equivalently, as the total space of a $0$-dimensional Lefschetz fibration over $\C$. A different, but equivalent version of Floer cohomology (where the Hamiltonian is constant on boundary components) is depicted on the right --- instead of one disc we need to take out two discs this time, and we denote the resulting Floer cohomology by $HF^*(\tilde{\tilde{\phi}}\:;\: U_2^+,U_3^+,U_1^-)$.
\begin{figure}
\caption{Left: the behavior of the Hamiltonian perturbation one needs to consider in the double basepoint version $HF^\text{2bp}(\phi)$.}
\label{fig:2bpt_perturbation}
\end{figure}
\begin{remark}
The one and two basepoint versions of fixed point Floer cohomology should be related: in all the cases we considered, as in the rank relationship of equation~\ref{HH_for_1bp_and_2bp} between Hochschild homologies, we had $\rk(HF^*(\tilde{\tilde{\phi}}\:;\: U_2^+,U_3^+,U_1^-))=\rk(HF^*(\tilde{\phi}\:;\: U_2^+,U_1^-))+1.$ We did not find a general explanation for this. The reason might be that if one compares the cochain complexes, then they are identical except that $CF^*(\tilde{\tilde{\phi}}\:;\: U_2^+,U_3^+,U_1^-)$ has one more generator $x$, depicted on the right of Figure~\ref{fig:2bpt_perturbation}. This generator does not have any differentials going out of it, as those would be gradient lines going up from $x$ for a suitable Hamiltonian, and there are no generators above $x$. It is likely that it also does not have any differentials going in (or, rather, one can arrange the Hamiltonian in such a way).
\end{remark}
Now we are ready to state a double basepoint version of Conjecture~\ref{conj}. Namely, the following should be true:
\begin{conjecture}\label{conj_2bp}
For every mapping class $\phi \in MCG_0(\Sigma,\partial\Sigma=S^1=U_1)$ there is an isomorphism of $\mathbb{Z}_2$-graded vector spaces
$$HH_*(N^{\text{2bp}} (\phi^{-1})) \cong HF^{*+1}(\tilde{\tilde{\phi}}\:;\: U_2^+,U_3^+,U_1^-).$$
\end{conjecture}
We performed many computations using~\cite{Pyt}, and all of them support this conjecture. These computations are completely analogous to the ones in Section~\ref{supporting_comps}, and so we choose not to describe them. Rather, below we describe more theoretical evidence for Conjecture~\ref{conj_2bp}.
\Needspace{8\baselineskip}
\subsection{\texorpdfstring{$\Sigma$ as Lefschetz fibration and the Fukaya-Seidel category}{Σ as Lefschetz fibration and the Fukaya-Seidel category}}\label{LF}
Take an area preserving double branched cover $f\colon\hat{\Sigma}\rightarrow \C$ of an exact surface $\hat{\Sigma}$ with cylindrical end over the complex numbers. For instance, one may take the quotient by the hyperelliptic involution, which we draw in Figure~\ref{fig:LF} below. We can view this cover as an exact symplectic fibration with singularities, as in~\cite[Setup 5.1]{Sei3II}. This fibration is in fact a $0$-dimensional Lefschetz fibration, with $2g+1$ critical points.
We assume that the critical values $p_1,\dots,p_{2g+1}$ all satisfy $\Re(p_i)=0$ and $\Im(p_1)<\dots<\Im(p_{2g+1})$. The genus two case satisfying these properties is drawn in Figure~\ref{fig:LF}.
\begin{figure}
\caption{0-dimensional Lefschetz fibration structure on the genus two surface.}
\label{fig:LF}
\end{figure}
It turns out that the map $f\colon\hat{\Sigma}\rightarrow \C$ can produce a specific version of the fixed point Floer cohomology of $\phi \lefttorightarrow \Sigma$, and a specific version of the Fukaya category of $\Sigma$; these versions are equal to $HF^\text{2bp}(\phi)$ and $\F_{\{z_1,z_2\}}(\Sigma)$, respectively. We discuss this below.
Following~\cite[Section~6]{Sei3II}, having an exact compactly supported self-diffeomorphism $\phi\colon\hat{\Sigma} \rightarrow \hat{\Sigma}$ of a $0$-dimensional Lefschetz fibration, we can consider fixed point Floer cohomology $HF^*(\phi,\delta > 0,\epsilon)$, where $\epsilon$ is not important to us because fibers of $f\colon\hat{\Sigma}\rightarrow \C$ do not have boundary, and $\delta$ is responsible for the perturbation at infinity by the Hamiltonian
\begin{equation}\label{hamiltonian_2bp}
H:\hat{\Sigma}\rightarrow \mathbb R, \ x \mapsto \delta \Re(f(x)).
\end{equation}
This theory depends only on the sign of $\delta$, and from the definitions of Hamiltonians it follows that
$$HF^\text{2bp}(\phi)=HF^*(\phi,\delta > 0).$$
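To see concretely how $\delta$ enters, here is a short sign computation (a sketch under the convention $\iota_{X_H}\omega=dH$; with the opposite convention the direction reverses). Near infinity $f$ is an unbranched covering, so we may work in a base coordinate $w=u+iv$ with symplectic form $du\wedge dv$, up to a positive factor:
$$
H=\delta\,\Re(w)=\delta u,\qquad dH=\delta\,du,\qquad
\iota_{X_H}(du\wedge dv)=dH \ \Longrightarrow\ X_H=-\delta\,\partial_v .
$$
Thus the perturbing flow at infinity translates in the imaginary direction of the base, with direction determined by the sign of $\delta$ --- consistent with the statement above that the theory depends only on that sign.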
The Lefschetz fibration structure over the complex plane can also be used to define a special type of $A_\infty$ category $\F_f(\hat{\Sigma})$, called the Fukaya-Seidel category; see~\cite{Sei4,Sei5}, and the more recent articles~\cite{Sei3I,Sei3II} for a setup more relevant to us. In our case of the double branched cover $f\colon\hat{\Sigma}\rightarrow \C$, the objects of the category are compact exact Lagrangians in $\hat{\Sigma}$, and also non-compact ones which are Lefschetz thimbles associated to admissible arcs\footnote{Admissible arcs in $\C$ are proper rays which start at a critical value of $f$, do not pass over other critical values, and eventually stabilize to horizontal rays oriented to the right. In our case of $f\colon\hat{\Sigma}\rightarrow \C$ the Lefschetz thimbles are just preimages of admissible arcs.} in $\C$. Perturbation at infinity is defined using the same Hamiltonian~\ref{hamiltonian_2bp}, but making sure that $\delta \gg 0$ is large enough.
It turns out that the Fukaya-Seidel category $\F_f(\hat{\Sigma})$ is quasi-equivalent to the partially wrapped Fukaya category $\F_{\{z_1,z_2\}}(\Sigma)$, despite the fact that the non-compact objects allowed are different. For a setup, which mediates between the partially wrapped category with two basepoints and the Fukaya-Seidel category, see~\cite[Section~3.2]{Aur1}.
It was proved in~\cite{Sei5} that Lefschetz thimbles (one for each critical point) generate the Fukaya-Seidel category. The following are several consequences of the quasi-equivalence $\F_f(\hat{\Sigma}) \simeq \F_{\{z_1,z_2\}}(\Sigma)$. If we choose a generating set of thimbles for the category $\F_f(\hat{\Sigma})$, then these thimbles also generate $\F_{\{z_1,z_2\}}(\Sigma)$. An example of a generating set of Lefschetz thimbles in the genus two case is drawn in Figure~\ref{fig:LF}, and the same set of generators for $\F_{\{z_1,z_2\}}(\Sigma)$ was drawn in Figure~\ref{fig:partially_wrapped_2bpts}.
The $hom$-algebras for the two categories are also the same:
$$\bigoplus\limits_{1\leq i,j\leq k} hom_{\F_f(\hat{\Sigma})} (\alpha_i,\alpha_j) \simeq \bigoplus\limits_{1\leq i,j\leq k} hom_{\F_{\{z_1,z_2\}}(\Sigma)} (\alpha_i,\alpha_j) \simeq \B (\Zpmc^\text{2bp}),$$
and the bimodules corresponding to exact automorphisms
$\phi\colon\hat{\Sigma} \rightarrow \hat{\Sigma}$ are also the same:
$$N_{\F_f}(\phi)\coloneq\bigoplus\limits_{1\leq i,j\leq k} hom_{\F_f(\hat{\Sigma})} (\alpha_i,\phi(\alpha_j)) \simeq \bigoplus\limits_{1\leq i,j\leq k} hom_{\F_{\{z_1,z_2\}}(\Sigma)} (\alpha_i,\phi(\alpha_j)) \simeq N^\text{2bp}(\phi).$$
\Needspace{8\baselineskip}
\subsection{Open-closed map}\label{subsec:O-C map}
The open-closed map is a map from the Hochschild homology of a graph bimodule to a fixed point Floer cohomology, $OC\colon HH_*(N_{\text{Fuk}}(\phi)) \rightarrow HF(\phi)$. The key point is that this map can be defined only if the Hamiltonian perturbations near $\infty$ on $\hat{\Sigma}$ are the same for $\text{Fuk}$ and $HF(\phi)$. Because the Hamiltonian perturbations for $\F_f(\hat{\Sigma})$ and $HF^\text{2bp}(\phi)$ are the same, one expects to have a map
$$OC:HH_*(N_{\F_f}(\phi)) \rightarrow HF^\text{2bp}(\phi),$$
which would define a map in one direction of Conjecture~\ref{conj_2bp} (remember that $HH_*(N_{\F_f}(\phi)) \cong HH_*(N^\text{2bp}(\phi))$ and $HF^\text{2bp}(\phi) \cong HF^{*+1}(\tilde{\tilde{\phi}}\:;\: U_2^+,U_3^+,U_1^-)$).
Indeed, such a map was constructed by Seidel in~\cite{Sei3II}, and we describe the construction details below.
In~\cite[Section~7]{Sei3II} Seidel constructed the open-closed map in the case where the symplectic manifold is an exact symplectic fibration with singularities over $\C$, which includes Lefschetz fibrations, and in particular double branched covers $f\colon\hat{\Sigma} \rightarrow \C$. In our case of $f\colon\hat{\Sigma} \rightarrow \C$ the open-closed map counts isolated points in the moduli space of holomorphic maps from a Riemann surface drawn in Figure~\ref{fig:o-c} to $\hat{\Sigma}$, with a twist $\phi$ along the gray line (compare with~\cite[Figure 3]{Sei3II}). These maps have the following boundary conditions: a twisted orbit of the Hamiltonian vector field $X_H$ on one end, which is equivalent to a constant section of the mapping torus $T_{\phi \circ \psi^1_{X_{H}}}$, and a chain of Lagrangians on the other, with consistent perturbations. Along the gray line the map has a twist $\phi$. So the strip end with the gray line limits to an intersection point of $\phi \circ \psi^1_{X_{H}}(L_3) \cap L_1$ to the left of the gray line, and to the intersection point $L_3 \cap (\phi \circ \psi^1_{X_{H}})^{-1}(L_1)$ to the right of the gray line.
\begin{figure}
\caption{The open-closed map counts such holomorphic objects inside $\hat{\Sigma}$.}
\label{fig:o-c}
\end{figure}
In this setting Seidel in~\cite[Equation 7.15]{Sei3II} defines a bimodule $\mathcal P(\phi,\delta, \epsilon)=\bigoplus\limits_{1\leq i,j\leq k} hom (\psi^1_{X_H}(\phi(\alpha_i)),\alpha_j)$. The $\epsilon$ does not play any role for us, because in our case of the $0$-dimensional Lefschetz fibration $f\colon\hat{\Sigma} \rightarrow \C$ there is no boundary in a fiber. The $\delta$ is responsible for the Hamiltonian $H(x)=\delta \Re(f(x))$ which is used to perturb $\phi$ at $\infty$ of $\hat{\Sigma}$.
If we assume $\delta \gg 0$ so that the generating Lagrangians are wrapped enough to intersect each other at $\infty$,
then this bimodule is
\begin{align*}
&\mathcal P(\phi,\delta \gg 0)=\bigoplus\limits_{1\leq i,j\leq k} hom (\psi^1_{X_H}(\phi(\alpha_i)),\alpha_j)= \bigoplus\limits_{1\leq i,j\leq k} hom_{\F_f(\hat{\Sigma})} (\phi(\alpha_i),\alpha_j)=\\
&=\bigoplus\limits_{1\leq i,j\leq k} hom_{\F_{\{z_1,z_2\}}(\Sigma)} (\phi(\alpha_i),\alpha_j)=
\bigoplus\limits_{1\leq i,j\leq k} hom_{\F_{\{z_1,z_2\}}(\Sigma)} (\alpha_i,\phi^{-1}(\alpha_j))=N^\text{2bp}(\phi^{-1}).
\end{align*}
Now we turn our attention to~\cite[Conjecture 7.18]{Sei3II}. In our case this amounts to the following --- there is an open-closed map which gives an isomorphism:
$$
OC\colon HH_*(N^\text{2bp}(\phi^{-1})) \ \xrightarrow{\overset{?}{\cong}} \ HF_\text{2bp}^{*+1}(\phi).
$$
As a consequence, our double basepoint Conjecture~\ref{conj_2bp} is a special case of Seidel's conjecture. The one basepoint version, i.e. Conjecture~\ref{conj}, most likely fits in a similar framework, where instead of the Fukaya-Seidel category $\F_f(\Sigma)\simeq \F_{\{z_1,z_2\}}(\Sigma)$ one should work with the one basepoint partially wrapped Fukaya category $\F_z(\Sigma)$, and construct there an appropriate version of a twisted open-closed map.
\newcommand*{\arxivPreprint}[1]{ArXiv preprint \href{http://arxiv.org/abs/#1}{#1}}
\newcommand*{\arxiv}[1]{(ArXiv:\ \href{http://arxiv.org/abs/#1}{#1})}
\end{document} |
\begin{document}
\maketitle
\begin{abstract} In this paper we develop methods to extend the minimal hypersurface
approach to positive scalar curvature problems to all dimensions. This includes a proof
of the positive mass theorem in all dimensions without a spin assumption. It also includes
statements about the structure of compact manifolds of positive scalar curvature extending
the work of \cite{sy1} to all dimensions. The technical
work in this paper is to construct minimal slicings and associated weight functions in the
presence of small singular sets and to show that the singular sets do not become too
large in the lower dimensional slices. It is shown that the singular set in any slice is a
closed set with Hausdorff codimension at least three. In particular for arguments which
involve slicing down to dimension $1$ or $2$ the method is successful. The arguments
can be viewed as an extension of the minimal hypersurface regularity theory to this setting of
minimal slicings.
\end{abstract}
\setcounter{secnumdepth}{1}
\setcounter{section}{0}
\section{\bf Introduction}
The study of manifolds of positive scalar curvature has a long history in
both differential geometry and general relativity. The theorems involved
include the positive mass theorem, the topological classification of manifolds
of positive scalar curvature, and the local geometric study of metrics of
positive scalar curvature. There are two methods which
have been successful in this study in general situations: the Dirac operator
method and the minimal hypersurface method. Both of these methods have
restrictions on their applicability. The Dirac operator method requires the
topological assumption that the manifold be spin, and the minimal hypersurface
method has been restricted to the case of manifolds with dimension at most $8$
because of the possibility of singularities which might occur in the hypersurfaces.
The purpose of this paper is to extend the minimal hypersurface method to
all dimensions.
The Dirac operator method was pioneered by A. Lichnerowicz \cite{lich} and
M. Atiyah, I. Singer \cite{as} in the early 1960s. It was extended by N. Hitchin \cite{h}
and then systematically developed by M. Gromov and H. B. Lawson in \cite{gl1}, \cite{gl2}, and
\cite{gl3}. Surgery methods for manifolds of positive scalar curvature were developed
in \cite{sy1} and \cite{gl2}. For simply connected manifolds $M^n$ with $n\geq 5$ Gromov
and Lawson conjectured necessary and sufficient conditions for $M$ to have a metric of
positive scalar curvature (related to the index of the Dirac operator in the spin case).
The conjecture was solved in the affirmative by S. Stolz \cite{st}. The Dirac operator method
was used by E. Witten \cite{w} to prove the positive mass theorem for spin manifolds (see
also \cite{pt}).
The minimal hypersurface method originated in \cite{sy4} for the three dimensional case and
was extended to higher dimensions in \cite{sy1}. The extension to the positive mass theorem was initiated in \cite{sy2} and carried out in higher dimensions in \cite{sy5} and \cite{sc}. In this paper we extend
the minimal hypersurface argument to all dimensions, at least as regards the applications
to the positive mass theorem and results which can be proven by slicing down to dimension
two.
The basic objects of study in this paper are called {\it minimal $k$-slicings} and we now
describe them. We start with a compact oriented Riemannian manifold $M$ which will be our
top dimensional slice $\Sigma_n$. We choose an oriented volume minimizing hypersurface
$\Sigma_{n-1}$. Since $\Sigma_{n-1}$ is stable, the second variation form $S_{n-1}(\varphi,\varphi)$
has first eigenvalue which is non-negative. We choose a positive first eigenfunction $u_{n-1}$
and we use it as a weight $\rho_{n-1}$ for the volume functional on $n-2$ cycles which are
contained in $\Sigma_{n-1}$. We assume we have a $\Sigma_{n-2}\subset\Sigma_{n-1}$ which
minimizes the weighted volume $V_{\rho_{n-1}}(\cdot)$. The second variation $S_{n-2}(\varphi,\varphi)$ for the weighted volume on $\Sigma_{n-2}$ then has non-negative first eigenvalue and
we let $u_{n-2}$ be a positive first eigenfunction. We then define $\rho_{n-2}=u_{n-2}\rho_{n-1}$
and we continue this process. That is, if $\Sigma_{j+1}\subset\Sigma_{j+2}\subset\ldots\subset
\Sigma_n$ have been constructed, we choose $\Sigma_j$ to be a minimizer of the weighted
volume $V_{\rho_{j+1}}(\cdot)$. Such a nested family $\Sigma_k\subset\Sigma_{k+1}\subset\ldots\subset
\Sigma_n$ is called a {\it minimal $k$-slicing}.
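Schematically, the induction just described can be summarized as follows (a restatement of the construction above, with the convention $\rho_n\equiv 1$ and writing the weighted volume against $j$-dimensional Hausdorff measure):
\[
\rho_n\equiv 1;\qquad \Sigma_j\ \text{minimizes}\ V_{\rho_{j+1}}(\Sigma)=\int_\Sigma\rho_{j+1}\,d{\mathcal H}^j\ \text{among cycles in}\ \Sigma_{j+1};\qquad \rho_j=u_j\rho_{j+1},
\]
where $u_j>0$ is a first eigenfunction of the second variation form of $V_{\rho_{j+1}}$ on $\Sigma_j$, for $j=n-1,n-2,\ldots,k$.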
The basic geometric theorem about minimal $k$-slicings which is generalized in Section 2
is the statement that if $\Sigma_n$ has positive scalar curvature then for any minimal $k$-slicing
we have that $\Sigma_k$ is Yamabe positive and so admits a metric of positive scalar curvature.
In particular if $k=2$ then $\Sigma_2$ must be diffeomorphic to $S^2$ and there can be no
minimal $1$-slicing.
If we start with $\Sigma_n$ with $n\geq 8$, there might be a closed singular set ${\cal S}_{n-1}$ of Hausdorff dimension at most $n-8$ in $\Sigma_{n-1}$. In this paper we develop methods to
carry out the construction of minimal $k$-slicings allowing for the possibility that the $\Sigma_j$
may have nonempty singular sets ${\cal S}_j$. In order to do this it is necessary to extend
the existence and regularity theory for minimal hypersurfaces to this setting. To do this requires
maintaining some integral control of the geometry of the $\Sigma_j$ in the ambient manifold $\Sigma_n$,
and also constructing the eigenfunctions $u_j$ so that they are bounded in appropriate weighted
Sobolev spaces. This control is gotten by carefully exploiting the terms which are left over
in the geometry of the second variation at each stage of the slicing. This is done by modifying
the second variation form $S_j$ to a larger form $Q_j$. The form $Q_j$ is more coercive
and can be diagonalized with respect to the weighted $L^2$ norm even in the presence of
small singular sets. We can then construct the next slice using the first eigenfunction for the
form $Q_j$ to modify the weight. This procedure only works if the singular sets ${\cal S}_j$
do not become too large. We prove that for a minimal $k$-slicing the Hausdorff dimension
of the singular set ${\cal S}_k$ is at most $k-3$. The regularity theorem is proven by
establishing appropriate compactness theorems for minimal $k$-slicings and showing that
at a singular point there is a homogeneous minimal $k$-slicing gotten by rescaling and
using appropriate monotonicity theorems (volume monotonicity and monotonicity of an
appropriate frequency function). A homogeneous minimal $k$-slicing is one in ${\mathbb R}^n$
for which all of the $\Sigma_j$ are cones and all of the $u_j$ are homogeneous of some degree.
It is then possible to show that if we had a $\Sigma_{k+1}$ with singular set of codimension
at least $3$, but $\Sigma_k$ had a singular set of Hausdorff dimension larger than $k-3$, then there
would exist a nontrivial homogeneous $2$-slicing with $\Sigma_2$ having an isolated singularity
at the origin. We show that no such homogeneous slicings exist to conclude that if ${\cal S}_{k+1}$
has codimension at least $3$ in $\Sigma_{k+1}$, then ${\cal S}_k$ has codimension at least $3$
in $\Sigma_k$. In particular if $k=2$ then $\Sigma_2$ is regular.
We now state the main theorems of the paper beginning with the positive mass theorem.
A manifold $M^n$ is called asymptotically flat if there is a compact set $K\subset M$
such that $M\setminus K$ is diffeomorphic to the exterior of a ball in ${\mathbb R}^n$
and there are coordinates near infinity $x^1,\ldots, x^n$ so that the metric components
$g_{ij}$ satisfy
\[ g_{ij}=\delta_{ij}+O(|x|^{-p}),\ |x||\partial g_{ij}|+|x|^2|\partial^2g_{ij}|=O(|x|^{-p})
\]
for some $p>\frac{n-2}{2}$. We also require the scalar curvature $R$ to satisfy
\[ |R|=O(|x|^{-q})
\]
for some $q>n$. Under these assumptions
the ADM mass is well defined by the formula (see \cite{sc} for the $n$ dimensional case)
\[ m=\frac{1}{4(n-1)\omega_{n-1}}\lim_{\sigma\to\infty}\int_{S_\sigma}\sum_{i,j}(g_{ij,i}-g_{ii,j})\nu_j\ d\xi(\sigma)
\]
where $S_\sigma$ is the euclidean sphere of radius $\sigma$ in the $x$ coordinates, $\omega_{n-1}=Vol(S^{n-1}(1))$, and
the unit normal and volume integral are with respect to the euclidean metric. The positive
mass theorem is as follows.
\begin{thm}
Assume that $M$ is an asymptotically flat manifold with $R\geq 0$. Then the ADM mass is nonnegative. Furthermore, if the mass is zero, then $M$ is isometric to ${\mathbb R}^n$.
\end{thm}
This is proven in Section 5, using results of \cite{sy3} to simplify the asymptotic behavior and
an observation of J. Lohkamp which allows us to compactify the manifold while keeping the
scalar curvature positive. The result which is needed for compact manifolds follows.
\begin{thm} If $M_1$ is any closed manifold of dimension $n$, then $M_1\#T^n$ does not
have a metric of positive scalar curvature.
\end{thm}
Both of these theorems were known if either $n\leq 8$ or, for any $n$, if the manifold
is a spin manifold. Actually for $n=8$ there may be isolated singularities, but in this dimension
a result of N. Smale \cite{sm} shows that there is a dense set of ambient metrics for which
the singularities do not occur. Using this result the eight dimensional case can also be handled
without dealing with singularities. In this paper we remove the dimensional and spin assumptions.
Finally we prove the following more precise theorem about compact manifolds with positive
scalar curvature.
\begin{thm}
Assume that $M$ is a compact oriented $n$-manifold with a metric of positive
scalar curvature. If $\alpha_1,\ldots,\alpha_{n-2}$ are classes in $H^1(M,{\mathbb Z})$ with the property that the
class $\sigma_2$ given by
$\sigma_2=\alpha_{n-2}\cap\alpha_{n-3}\cap\cdots\cap\alpha_1\cap[M]\in H_2(M,{\mathbb Z})$ is nonzero, then the class
$\sigma_2$ can be represented by a sum of smooth two spheres.
If $\alpha_{n-1}$ is any class in $H^1(M,{\mathbb Z})$, then we must have $\alpha_{n-1}\cap\sigma_2=0$. In particular,
if $M$ has classes $\alpha_1,\ldots,\alpha_{n-1}$ with $\alpha_{n-1}\cap\cdots\cap\alpha_1\cap[M]\neq 0$,
then $M$ cannot carry a metric of positive scalar curvature.
\end{thm}
We also point out the recent series of papers by J. Lohkamp \cite{lo1}, \cite{lo2}, \cite{lo3},
and \cite{lo4}. These papers also present an approach to the high dimensional positive mass
theorem by extending the minimal hypersurface approach to all dimensions. Our approach
seems quite different both conceptually and technically, and is more in the classical spirit of
the calculus of variations. In any case we feel that, for such a fundamental result, it is of
value to have multiple approaches.
\bigskip
\section{\bf Terminology and statements of main theorems}
We begin by introducing the notation involved in the construction of a {\it minimal $k$-slicing}; that is,
a nested family of hypersurfaces beginning with a smooth manifold $\Sigma_n$ of dimension
$n$ and going down to $\Sigma_k$ of dimension $k\leq n-1$. This consists of $\Sigma_k\subset\Sigma_{k+1}
\subset\ldots \subset \Sigma_n$ where each $\Sigma_j$ will be constructed as a volume minimizer
of a certain weighted volume in $\Sigma_{j+1}$.
Let $\Sigma_n$ be a properly embedded $n$-dimensional submanifold in an open set $\Omega$
contained in ${\mathbb R}^N$. We will consider a minimal slicing of $\Sigma_n$ defined in an inductive manner.
First, let $u_n=1$, and let $\Sigma_{n-1}$ be a volume minimizing hypersurface in $\Sigma_n$.
Of course, it may happen that
$\Sigma_{n-1}$ has a singular set ${\cal S}_{n-1}$ which is a closed subset of Hausdorff dimension
at most $n-8$. On $\Sigma_{n-1}$ we will construct a positive definite quadratic form $Q_{n-1}$
on functions by suitably modifying the index form associated to the second variation of
volume. We will then construct a positive function $u_{n-1}$ on $\Sigma_{n-1}$ which is a
least eigenfunction of $Q_{n-1}$. We then define $\rho_{n-1}=u_{n-1}u_n$, and we let $\Sigma_{n-2}$
be a hypersurface in $\Sigma_{n-1}$ which is a minimizer of the $\rho_{n-1}$-weighted volume
$V_{\rho_{n-1}}(\Sigma)=\int_\Sigma\rho_{n-1}d\mu_{n-2}$ over $(n-2)$-dimensional submanifolds $\Sigma$ of
$\Sigma_{n-1}$, where $\mu_j$ denotes the $j$-dimensional Hausdorff measure. Inductively,
assume that we have constructed a slicing down to dimension $k+1$; that is, we have a nested
family of hypersurfaces, quadratic forms, and positive functions $(\Sigma_j,Q_j,u_j)$ for
$j=k+1,\ldots,n$ such that $\Sigma_j$ minimizes the $\rho_{j+1}$-weighted volume where
$\rho_{j+1}=u_{j+1}u_{j+2}\cdots u_n$, $Q_j$ is a positive definite quadratic form related to
the second variation of the $\rho_{j+1}$-weighted volume (see (\ref{eqn:qform}) below), and $u_j$ is a lowest eigenfunction of $Q_j$ with eigenvalue $\lambda_j\geq 0$. We will always take $\lambda_j$ to be the lowest Dirichlet eigenvalue (if $\partial\Sigma_j\neq\emptyset$) of $Q_j$ with respect to the weighted $L^2$ norm and we take $u_j$ to be a corresponding eigenfunction. We will show in Section 3 that such $\lambda_j$ and $u_j$ exist. We then inductively
construct $(\Sigma_k,Q_k,u_k)$ by letting $\Sigma_k$ be a minimizer of the $\rho_{k+1}$-weighted volume
where $\rho_{k+1}=u_{k+1}u_{k+2}\cdots u_n$, $Q_k$ a positive definite quadratic form described
below, and $u_k$ a positive eigenfunction of $Q_k$.
Note that if $\Sigma_j$ is a leaf in a minimal $k$-slicing, then choosing a unit
normal vector $\nu_j$ to $\Sigma_j$ in $\Sigma_{j+1}$ gives us an orthonormal basis
$\nu_k,\nu_{k+1},\ldots,\nu_{n-1}$ for the normal bundle of $\Sigma_k$ defined on the regular
set ${\cal R}_k$. Thus the second fundamental form of $\Sigma_k$ in $\Sigma_n$ consists of the
scalar forms $A_k^{\nu_j}=\langle A_k,\nu_j\rangle$ for $j=k,\ldots,n-1$ and we have
$|A_k|^2=\sum_{j=k}^{n-1}|A_k^{\nu_j}|^2$.
Now if we have a minimal $k$-slicing, we let $g_k$ denote the metric induced on $\Sigma_k$ from
$\Sigma_n$, and we let $\hat{g}_k$ denote the metric $\hat{g}_k=g_k+\sum_{p=k}^{n-1}u_p^2dt_p^2$
on $\Sigma_k\times(S^1)^{n-k}$ where we use $S^1$ to denote a circle of length $1$, and we denote
by $t_p$ a coordinate on the $p$th circle factor. We then note that the volume measure
of the metric $\hat{g}_k$ is given by $\rho_k d\mu_k$, where we have suppressed the $t_p$ variables since we will consider only objects which do not depend on them; for example, the $\rho_k$-weighted volume of $\Sigma_k$ is the volume of the $n$-dimensional manifold $\Sigma_k\times T^{n-k}$. We will need to introduce another metric $\tilde{g}_k$ on $\Sigma_k\times(S^1)^{n-k-1}$. This is defined by
$\tilde{g}_k= g_k+\sum_{p=k+1}^{n-1}u_p^2\ dt_p^2$. Note that $\tilde{g}_k$ is the metric
induced on $\Sigma_k\times(S^1)^{n-k-1}$ by $\hat{g}_{k+1}$. We also let $\tilde{A}_k$ denote the
second fundamental form of $\Sigma_k\times(S^1)^{n-k-1}$ in $(\Sigma_{k+1}\times(S^1)^{n-k-1}, \hat{g}_{k+1})$. The following lemma computes this second fundamental form.
\begin{lem} \label{lem:2ff} We have $\tilde{A}_k=A_k^{\nu_k}-\sum_{p=k+1}^{n-1}u_p\nu_k(u_p)dt_p^2$,
and the square length with respect to $\tilde{g}_k$ is given by
$|\tilde{A}_k|^2=|A_k^{\nu_k}|^2+\sum_{p=k+1}^{n-1}(\nu_k(\log u_p))^2$.
\end{lem}
\begin{pf} If we consider a hypersurface $\Sigma$ in a Riemannian manifold with unit normal
$\nu$, then we can consider the parallel hypersurfaces parametrized on $\Sigma$ by
$F_\epsilon(x)=\exp(\epsilon\nu(x))$ for small $\epsilon$ and $x\in\Sigma$. We then have a family of induced metrics $g_\epsilon$ from $F_\epsilon$ on $\Sigma$, and the second fundamental form is given by $A=-\frac{1}{2}\dot{g}$
where $\dot{g}$ denotes the $\epsilon$ derivative of $g_\epsilon$ at $\epsilon=0$.
If we let $\exp$ denote the exponential map of $\Sigma_k$ in $\Sigma_{k+1}$, then since $\Sigma_{k+1}$
is totally geodesic in $\Sigma_{k+1}\times T^{n-k-1}$, we have
\[ F_\epsilon(x,t)=(\exp(\epsilon\nu_k(x)),t)
\]
for $(x,t)\in \Sigma_k\times T^{n-k-1}$, and the induced family of metrics is given by
\[ \tilde{g}_\epsilon=(g_k)_\epsilon+\sum_{p=k+1}^{n-1}u_p(\exp(\epsilon\nu_k))^2\ dt_p^2.
\]
Thus we have
\[ \dot{\tilde{g}}=-2A_k^{\nu_k}+2\sum_{p=k+1}^{n-1}u_p\nu_k(u_p)\ dt_p^2
\]
since $A_k^{\nu_k}$ is the second fundamental form of $\Sigma_k$ in $\Sigma_{k+1}$. It follows that
$\tilde{A}_k=A_k^{\nu_k}-\sum_{p=k+1}^{n-1}u_p\nu_k(u_p)dt_p^2$, and taking the square norm
with respect to the metric $\tilde{g}_k$ then gives the desired formula for $|\tilde{A}_k|^2$.
\end{pf}
We now describe the choice we will make for $Q_j$. Let $S_j$ be the second variation
form for the weighted volume $V_{\rho_{j+1}}$ at $\Sigma_j$, and define
\begin{eqnarray}
\label{eqn:qform}
Q_j(\varphi,\varphi)&=&S_j(\varphi,\varphi)+\frac{3}{8}\int_{\Sigma_j}
(|\tilde{A}_j|^2\nonumber\\ &+&\frac{1}{3n} \sum_{p=j+1}^n(|\nabla_j\log u_p|^2+|\tilde{A}_p|^2))\varphi^2
\rho_{j+1}\ d\mu_j
\end{eqnarray}
where, for now, $\varphi$ is a function supported in the regular set ${\cal R}_j$ and we define
$\tilde{A}_n=0,\ u_n=1$. We will discuss an extended domain for $Q_j$ in Section 3.
Up to this point our discussion is formal because we have not discussed issues related to
the singularities of the $\Sigma_j$ in a minimal slicing. We first define the {\it regular set} ${\cal R}_j$
of $\Sigma_j$ to be the set of points $x$ for which there is a neighborhood of $x$ in ${\mathbb R}^N$ in which
all of $\Sigma_j,\Sigma_{j+1},\ldots, \Sigma_n$ are smooth embedded submanifolds of ${\mathbb R}^N$. The {\it singular
set} ${\cal S}_j$ is then defined to be the complement of ${\cal R}_j$ in $\Sigma_j$. Thus ${\cal S}_j$ is
a closed set by definition. The following result follows from the standard minimizing hypersurface
regularity theory. In this paper $\dim(A)$ always refers to the Hausdorff dimension of a subset
$A\subset {\mathbb R}^N$.
\begin{prop}
\label{prop:topreg}
For $j\leq n-1$ we have $\dim({\cal S}_j\sim {\cal S}_{j+1})\leq j-7$, and in particular we
have $\dim({\cal S}_{n-1})\leq n-8$.
\end{prop}
In light of this result, we see that our main task in controlling singularities is to control
the size of the set ${\cal S}_j\cap {\cal S}_{j+1}$. We will do this by extending the minimal
hypersurface regularity theory to this slicing setting. In order to do this we need to establish
the relevant compactness and tangent cone properties, and this requires establishing
suitable bounds on the slicings. To begin this process we make the following definition.
\begin{defn} For a constant $\Lambda>0$, a {\bf $\Lambda$-bounded minimal $k$-slicing}
is a minimal $k$-slicing satisfying the following bounds
$$ \lambda_j\leq \Lambda,\ {\rm Vol}_{\rho_{j+1}}(\Sigma_j)\leq \Lambda,\ \int_{\Sigma_j}(1+|A_j|^2+
\sum_{p=j+1}^n|\nabla_j\log u_p|^2)u_j^2\rho_{j+1}\ d\mu_j\leq \Lambda
$$
for $j=k,k+1,\ldots, n-1$, where $\mu_j$ is Hausdorff measure, $\nabla_j$ is taken on (the
regular set of) $\Sigma_j$, and $A_j$ is the second fundamental form of $\Sigma_j$ in ${\mathbb R}^N$.
\end{defn}
The minimal $k$-slicings we will consider in this paper will always be $\Lambda$-bounded
for some $\Lambda$. We have the following regularity theorem.
\begin{thm}
\label{thm:reg} Given any $\Lambda$-bounded minimal $k$-slicing, we have for each
$j=k,k+1,\ldots, n-1$ the bound $\dim({\cal S}_j)\leq j-3$ on the singular set.
\end{thm}
We now formulate an existence theorem for minimal $k$-slicings in $\Sigma_n$. We consider the
case in which $\Sigma_n$ is a closed oriented manifold. We assume that there is a closed
oriented $k$-dimensional manifold $X^k$ and a smooth map $F:\Sigma_n\to X\times T^{n-k}$ of
non-zero degree $s$. We let $\Omega$ denote a $k$-form on $X$ with $\int_X\Omega=1$, and we denote by $dt^{k+1},\ldots, dt^n$ the basic one forms on $T^{n-k}$, where we assume the periods are
equal to one. We introduce the notation $\Theta=F^*\Omega$ and $\omega^p=F^*(dt^p)$
for $p=k+1,\ldots, n$.
We can now state our first existence theorem. A more refined existence theorem is given
by Theorem \ref{thm:exst2}, which we will not state here.
\begin{thm}
\label{thm:exst} For a manifold $M=\Sigma_n$ as described above, there is a $\Lambda$-bounded,
partially regular, minimal $k$-slicing. Moreover, if $k\leq j\leq n-1$ and $\Sigma_j$ is regular, then
$\int_{\Sigma_j}\Theta\wedge\omega^{k+1}\wedge\ldots\wedge\omega^j=s$.
\end{thm}
The proofs of Theorems \ref{thm:reg} and \ref{thm:exst} will be given in Sections 3 and 4.
In the
remainder of this section we discuss the quadratic forms $Q_j$ in more detail and derive
important geometric consequences for minimal $1$-slicings and $2$-slicings under the
assumption that $\Sigma_n$ has positive scalar curvature. Consequences of these results, which
are the main geometric theorems of the paper, will be given in Section 5.
Recall that in general if $\Sigma$ is a stable two-sided (trivial normal bundle) minimal hypersurface in a Riemannian manifold $M$, then we may choose a globally defined unit normal vector $\nu$,
and we may parametrize normal deformations by functions $\varphi\cdot\nu$. The second variation
of volume then becomes the quadratic form
\begin{equation}
\label{eqn:secvar}
S(\varphi,\varphi)=\int_\Sigma[|\nabla\varphi|^2-\frac{1}{2}(R_M-R_\Sigma+|A|^2)\varphi^2]\ d\mu
\end{equation}
where $R_M$ and $R_\Sigma$ are the scalar curvature functions of $M$ and $\Sigma$ and $A$
denotes the second fundamental form of $\Sigma$ in $M$.
We have the following result which
computes the scalar curvature $\tilde{R}_k$ of $\tilde{g}_k$.
\begin{lem}
\label{lem:rcalc} The scalar curvature of the metric $\tilde{g}_k$ is given by
\[ \tilde{R}_k=R_k-2\sum_{p=k+1}^{n-1}u_p^{-1}\Delta_ku_p-2\sum_{k+1\leq p<q\leq n-1}
\langle\nabla_k \log u_p,\nabla_k \log u_q\rangle
\]
where $\Delta_k$ and $\nabla_k$ denote the Laplace and gradient operators with respect to
$g_k$.
\end{lem}
\begin{pf} The calculation is a finite induction using the formula
\[ \tilde{R}=R-2u^{-1}\Delta u
\]
for the scalar curvature of the metric $\tilde{g}=g+u^2dt^2$.
For $j=k,\ldots,n-1$
let $\bar{g}_j=g_k+\sum_{p=j}^{n-1}u_p^2dt_p^2$. Note that $\bar{g}_k=\hat{g}_k$
and $\bar{g}_{k+1}=\tilde{g}_k$. We prove the formula
\[ \bar{R}_j=R_k-2\sum_{p=j}^{n-1}u_p^{-1}\Delta_k u_p-2\sum_{j\leq p<q\leq n-1}
\langle\nabla_k \log u_p,\nabla_k \log u_q\rangle
\]
by a finite reverse induction on $j$. First note that for $j=n-1$ the formula follows from
the one above. Now assume the formula is correct for $\bar{g}_{j+1}$.
We then apply the formula above to obtain
\[ \bar{R}_j=\bar{R}_{j+1}-2u_j^{-1}\bar{\Delta}_j u_j.
\]
Since $u_j$ does not depend on the extra variables $t_p$, we have
\[ u_j^{-1}\bar{\Delta}_j u_j=u_j^{-1}\rho_{j+1}^{-1}{\rm div}_k(\rho_{j+1}\nabla_k u_j)=
u_j^{-1}\Delta_k u_j
+\sum_{p=j+1}^{n-1}\langle\nabla_k \log u_p, \nabla_k \log u_j\rangle
\]
where, as above, $\rho_{j+1}=u_{j+1}\cdots u_n$ (recall $u_n=1$). The statement now follows from
the inductive assumption. Since $\bar{g}_{k+1}=\tilde{g}_k$, we have proven the required
statement.
\end{pf}
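The base formula $\tilde{R}=R-2u^{-1}\Delta u$ for the warped metric $g+u^2dt^2$ can be sanity-checked symbolically in a simple special case: for $dx^2+dy^2+u(x)^2dt^2$ over a flat base one has $\Delta u=u''$, so the formula predicts $\tilde{R}=-2u''/u$. The following standalone check (an illustration only, assuming sympy is available; it is not part of the paper's argument) computes the scalar curvature from the Christoffel symbols directly:

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
u = sp.Function('u', positive=True)(x)
coords = [x, y, t]
g = sp.diag(1, 1, u**2)     # the warped metric dx^2 + dy^2 + u(x)^2 dt^2
ginv = g.inv()
n = 3

def Gamma(a, b, c):
    # Christoffel symbol Gamma^a_{bc} of the metric g
    return sum(ginv[a, d]*(sp.diff(g[d, b], coords[c])
                           + sp.diff(g[d, c], coords[b])
                           - sp.diff(g[b, c], coords[d])) for d in range(n))/2

def Ric(b, c):
    # Ricci tensor from the standard coordinate formula
    return sum(sp.diff(Gamma(a, b, c), coords[a]) - sp.diff(Gamma(a, a, c), coords[b])
               + sum(Gamma(a, a, d)*Gamma(d, b, c) - Gamma(a, b, d)*Gamma(d, a, c)
                     for d in range(n)) for a in range(n))

R = sum(ginv[b, c]*Ric(b, c) for b in range(n) for c in range(n))
# The base is flat, so the lemma's formula predicts R = -2 u''/u.
residual = sp.simplify(R + 2*sp.diff(u, x, 2)/u)
print(residual)  # 0
```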
We now consider consequences of having a minimal $k$-slicing of a manifold of positive
scalar curvature.
\begin{thm}
\label{thm:eval} Assume that the scalar curvature of $\Sigma_n$ is bounded below by a constant
$\kappa$. If $\Sigma_k$ is a leaf in a minimal $k$-slicing, then we have the following scalar curvature
formula and eigenvalue estimate
\[ \hat{R}_k= R_n+2\sum_{p=k}^{n-1}\lambda_p+\frac{1}{4}\sum_{p=k}^{n-1}(|\tilde{A}_p|^2
-\frac{1}{n}\sum_{q=p+1}^n(|\nabla_p\log u_q|^2+|\tilde{A}_q|^2))
\]
\[ \int_{\Sigma_k}(\kappa+\frac{3}{4}\sum_{j=k+1}^n|\nabla_k\log u_j|^2-R_k)\varphi^2\ d\mu_k\leq 4\int_{\Sigma_k}|\nabla_k\varphi|^2\ d\mu_k
\]
where $\varphi$ is any smooth function with compact support in ${\cal R}_k$.
\end{thm}
\begin{pf} First note that from (\ref{eqn:qform}) and (\ref{eqn:secvar}) we have
\begin{eqnarray*}
Q_j(\varphi,\varphi)&=&\int_{\Sigma_j}[|\nabla_j\varphi|^2-
\frac{1}{2}(\hat{R}_{j+1}-\tilde{R}_j)\varphi^2 \\
&-&\frac{1}{8}(|\tilde{A}_j|^2-\frac{1}{n}\sum_{p=j+1}^n(|\nabla_j\log u_p|^2+|\tilde{A}_p|^2))\varphi^2]\rho_{j+1}\ d\mu_j,
\end{eqnarray*}
and therefore $u_j$ satisfies the equation $L_ju_j=-\lambda_ju_j$ where
\begin{equation}
\label{eqn:operator} L_j=\tilde{\Delta}_j+\frac{1}{2}(\hat{R}_{j+1}-\tilde{R}_j)+\frac{1}{8}(|\tilde{A}_j|^2-\frac{1}{n}\sum_{p=j+1}^n(|\nabla_j\log u_p|^2+|\tilde{A}_p|^2)).
\end{equation}
We derive the scalar curvature formula by a finite downward induction beginning with $k=n-1$.
In this case
the eigenvalue estimates follow from the standard stability inequality (\ref{eqn:secvar}) since
$\rho_n=u_n=1$ and $\tilde{R}_{n-1}=R_{n-1}$. We also have from Lemma \ref{lem:rcalc} that
$\hat{R}_{n-1}=R_{n-1}-2u_{n-1}^{-1}\Delta_{n-1}u_{n-1}$. The equation satisfied by $u_{n-1}$ is
\[ \Delta_{n-1}u_{n-1}+\frac{1}{2}(R_n-R_{n-1})u_{n-1}+\frac{1}{8}|\tilde{A}_{n-1}|^2u_{n-1}=-\lambda_{n-1}u_{n-1}
\]
and so we have
$\hat{R}_{n-1}=R_n+2\lambda_{n-1}+\frac{1}{4}|\tilde{A}_{n-1}|^2$. This proves the result for $k=n-1$.
Now we assume the conclusions are true for integers $k$ and larger, and we will derive them
for $k-1$. We first observe that $\hat{g}_{k-1}=\tilde{g}_{k-1}+u_{k-1}^2\ dt_{k-1}^2$ and so
$\hat{R}_{k-1}=\tilde{R}_{k-1}-2u_{k-1}^{-1}\tilde{\Delta}_{k-1}u_{k-1}$. On the other hand
from (\ref{eqn:operator}) applied with $j=k-1$ we see that $u_{k-1}$ satisfies the equation
\begin{eqnarray*}
\tilde{\Delta}_{k-1}u_{k-1}&+&\frac{1}{2}(\hat{R}_k-\tilde{R}_{k-1})u_{k-1}+\frac{1}{8}(|\tilde{A}_{k-1}|^2
\\ &-&\frac{1}{n}\sum_{p=k}^n(|\nabla_{k-1}\log u_p|^2+|\tilde{A}_p|^2))u_{k-1}=-\lambda_{k-1}u_{k-1}.
\end{eqnarray*}
Substituting this above we have
\begin{eqnarray*}
\hat{R}_{k-1}&=&\tilde{R}_{k-1}+2[\lambda_{k-1}+\frac{1}{2}(\hat{R}_k-\tilde{R}_{k-1}) \\
&+&\frac{1}{8}(|\tilde{A}_{k-1}|^2-\frac{1}{n}\sum_{q=k}^n(|\nabla_{k-1}\log u_q|^2+|\tilde{A}_q|^2))],
\end{eqnarray*}
so we have
\[ \hat{R}_{k-1}=2\lambda_{k-1}+\hat{R}_k+\frac{1}{4}(|\tilde{A}_{k-1}|^2-\frac{1}{n}\sum_{q=k}^n(|\nabla_{k-1}\log u_q|^2+|\tilde{A}_q|^2)).
\]
Using the inductive hypothesis we get the desired formula
\[\hat{R}_{k-1}= R_n+2\sum_{p=k-1}^{n-1}\lambda_p+\frac{1}{4}\sum_{p=k-1}^{n-1}(|\tilde{A}_p|^2
-\frac{1}{n}\sum_{q=p+1}^n(|\nabla_p \log u_q|^2+|\tilde{A}_q|^2)).
\]
Now observe that, since the gradient on $\Sigma_p$ decomposes as the gradient on $\Sigma_k$ plus the normal components $\nu_r$, $r=k,\ldots,p-1$,
\begin{eqnarray*}
\sum_{p=k}^{n-1}(n|\tilde{A}_p|^2&-&\sum_{q=p+1}^n(|\nabla_p \log u_q|^2+|\tilde{A}_q|^2)) \\
&\geq& \sum_{p=k}^{n-1}(\sum_{r=k}^n|\tilde{A}_r|^2-\sum_{q=p+1}^n(|\nabla_p \log u_q|^2+|\tilde{A}_q|^2)) \\
&\geq& \sum_{p=k}^{n-1}\sum_{q=p+1}^n(\sum_{r=k}^{p-1}(\nu_r\log u_q)^2-|\nabla_p\log u_q|^2)\\
&=&-\sum_{p=k}^{n-1}\sum_{q=p+1}^n|\nabla_k\log u_q|^2\geq -n\sum_{q=k}^n|\nabla_k\log u_q|^2.
\end{eqnarray*}
This formula implies that for each $k$ we have
\begin{equation}\label{eqn:scbound} \hat{R}_k\geq\kappa-\frac{1}{4}\sum_{j=k}^n|\nabla_k\log u_j|^2
\end{equation}
and so the following eigenvalue estimate follows from (\ref{eqn:secvar})
\[ \int_{\Sigma_k}(\kappa-\frac{1}{4}\sum_{j=k+1}^n|\nabla_k \log u_j|^2-\tilde{R}_k)\varphi^2\rho_{k+1}\ d\mu_k
\leq 2\int_{\Sigma_k}|\nabla_k\varphi|^2\rho_{k+1}\ d\mu_k.
\]
The remainder of the proof derives the eigenvalue estimate from this one. Since $\varphi$
is arbitrary we may replace $\varphi$ by $\varphi(\rho_{k+1})^{-1/2}$ to obtain
\begin{eqnarray*}
\int_{\Sigma_k}(\kappa-\frac{1}{4}\sum_{j=k+1}^n|\nabla_k\log u_j|^2-\tilde{R}_k)\varphi^2\ d\mu_k&\leq&
2\int_{\Sigma_k}|\nabla_k(\varphi/\sqrt{\rho_{k+1}})|^2\rho_{k+1}\ d\mu_k \\
&\leq& 4\int_{\Sigma_k}|\nabla_k(\varphi/\sqrt{\rho_{k+1}})|^2\rho_{k+1}\ d\mu_k
\end{eqnarray*}
where we used the inequality $2\leq 4$. After expanding, the term on the right becomes
\[ 4\int_{\Sigma_k}(|\nabla_k\varphi|^2-\varphi\langle\nabla_k\varphi,\nabla_k\log \rho_{k+1}\rangle
+\frac{1}{4}\varphi^2|\nabla_k \log \rho_{k+1}|^2)\ d\mu_k.
\]
Rewriting the middle term in terms of $\nabla_k(\varphi^2)$ and integrating by parts, the term
becomes
\[ 4\int_{\Sigma_k}(|\nabla_k\varphi|^2+\frac{1}{2}\varphi^2[\sum_{p=k+1}^{n-1}(u_p^{-1}\Delta_k u_p
-|\nabla_k \log u_p|^2)+\frac{1}{2}|\nabla_k \log \rho_{k+1}|^2])\ d\mu_k.
\]
Now recall from Lemma \ref{lem:rcalc} that
\[ \tilde{R}_k=R_k-2\sum_{p=k+1}^{n-1}u_p^{-1}\Delta_k u_p-2\sum_{k+1\leq p<q\leq n-1}
\langle\nabla_k \log u_p,\nabla_k \log u_q\rangle.
\]
Thus we see that the terms involving $\Delta_k u_p$ cancel out, and note also that
\[ |\nabla_k \log \rho_{k+1}|^2=\sum_{p=k+1}^{n-1}|\nabla_k \log u_p|^2+2\sum_{k+1\leq p<q\leq n-1}
\langle\nabla_k \log u_p,\nabla_k \log u_q\rangle
\]
so the second term also cancels. Thus we are left with
\begin{eqnarray*}
\int_{\Sigma_k}(\kappa-\frac{1}{4}\sum_{j=k+1}^n|\nabla_k\log u_j|^2&-&R_k)\varphi^2\ d\mu_k \\
&\leq& 4\int_{\Sigma_k}(|\nabla_k\varphi|^2-\frac{1}{4} \sum_{j=k+1}^n|\nabla_k \log u_j|^2\varphi^2)\ d\mu_k.
\end{eqnarray*}
This gives the desired eigenvalue estimate.
\end{pf}
This theorem will be central to the regularity proof in the next section, and it also has an
important geometric consequence which is the main tool in the applications of Section 5.
\begin{thm}
\label{thm:12slicing} Assume that $R_n\geq \kappa>0$. If $\Sigma_k$ is regular, then
$(\Sigma_k,g_k)$ is a Yamabe positive conformal manifold. If $\Sigma_2$ lies in a minimal
$2$-slicing, $\Sigma_2$ is regular, and $\partial\Sigma_2=\emptyset$, then each connected
component of $\Sigma_2$ is homeomorphic to the two sphere. If $\Sigma_1$ lies in a minimal
$1$-slicing and $\Sigma_1$ is regular, then each component of $\Sigma_1$ is an arc of length
at most $2\pi/\sqrt{\kappa}$.
\end{thm}
\begin{pf} Recall that the condition that $g_k$ be Yamabe positive is that the lowest eigenvalue
of the conformal Laplacian $-\Delta_k+c(k)R_k$ be positive, where $c(k)=\frac{k-2}{4(k-1)}$. In
variational form this condition says
\[ -\int_{\Sigma_k}R_k\varphi^2\ d\mu_k<c(k)^{-1}\int_{\Sigma_k}|\nabla_k\varphi|^2\ d\mu_k
\]
for all nonzero functions $\varphi$ which vanish on $\partial\Sigma_k$ (if $\Sigma_k$ has a boundary).
Since $4<c(k)^{-1}$ we see that this follows from the eigenvalue estimate of Theorem \ref{thm:eval}.
Now consider $\Sigma_2$, and apply the eigenvalue estimate of Theorem
\ref{thm:eval} with $\varphi=1$ to a component $S$ of $\Sigma_2$ to see that
$\int_S R_2\ d\mu_2>0$. It then follows from the Gauss-Bonnet Theorem that $S$ is
homeomorphic to the two sphere (note that $S$ is orientable).
Finally, if $\gamma$ is a connected component of $\Sigma_1$ of length $l$, then the
eigenvalue estimate of Theorem \ref{thm:eval} implies that the lowest Dirichlet eigenvalue
of $\gamma$ is at least $\kappa/4$. Thus $\kappa/4\leq \pi^2/l^2$ and $l\leq 2\pi/\sqrt{\kappa}$
as claimed.
\end{pf}
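The length bound at the end of the proof uses the classical fact that the lowest Dirichlet eigenvalue of $-d^2/dx^2$ on an interval of length $l$ is $\pi^2/l^2$, with eigenfunction $\sin(\pi x/l)$. A quick finite-difference check of this fact (an illustration only, assuming numpy is available):

```python
import numpy as np

# Lowest Dirichlet eigenvalue of -d^2/dx^2 on (0, l), approximated by the
# standard second-difference matrix on N interior grid points.
l, N = 2.0, 400
h = l / (N + 1)
# Tridiagonal matrix for -u'' with Dirichlet conditions u(0) = u(l) = 0.
T = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h**2
lam_min = np.linalg.eigvalsh(T)[0]
exact = (np.pi / l)**2
rel_err = abs(lam_min - exact) / exact
print(lam_min, exact, rel_err)  # rel_err is of order (h/l)^2
```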
\bigskip
\section{\bf Compactness and regularity of minimal $k$-slicings}
The main goal of this section is to prove Theorem \ref{thm:reg}. In order
to do this we first must clarify some analytic issues concerning the domain of the quadratic form
$Q_j$. We let $L^2(\Sigma_j)$ denote the space of square integrable
functions on $\Sigma_j$ with respect to the measure $\rho_{j+1}\mu_j$. We let
\[ \|\varphi\|^2_{0,j}=\int_{\Sigma_j}\varphi^2 \rho_{j+1}\ d\mu_j
\]
denote the square norm on $L^2(\Sigma_j)$. We define $P_j$ to be
the function on $\Sigma_j$ given by
\[ P_j=|A_j|^2+\sum_{p=j+1}^n|\nabla_j\log u_p|^2.
\]
We will say that a minimal $k$-slicing in an open set $\Omega$ is {\it partially regular} if
$\dim({\cal S}_j)\leq j-3$ for $j=k,\ldots,n-1$. It follows from Proposition \ref{prop:topreg} that
if the $(k+1)$-slicing associated to a minimal $k$-slicing is partially regular, then
$\dim({\cal S}_k)\leq \max\{\dim({\cal S}_{k+1}),k-7\}\leq k-2$.
For functions $\varphi$ which are Lipschitz (with respect to ambient distance) on $\Sigma_j$ with compact support in ${\cal R}_j\cap\bar{\Omega}$, we define a square norm by
\[ \|\varphi\|_{1,j}^2=\|\varphi\|^2_{0,j}+ \int_{\Sigma_j}(|\nabla_j\varphi|^2+P_j\varphi^2)\rho_{j+1}\ d\mu_j.
\]
We let ${\cal H}_j$ denote the Hilbert space which is the completion with respect to this norm. Note
that functions in ${\cal H}_j$ are clearly locally in $W^{1,2}$ on ${\cal R}_j$. We will
assume from now on that $u_j\in {\cal H}_j$ for $j\geq k$; in fact, we take this as part of the
definition of a bounded minimal $k$-slicing. We define ${\cal H}_{j,0}$ to be the closed subspace of ${\cal H}_j$ consisting of the completion of the Lipschitz functions with compact support in ${\cal R}_j\cap\Omega$.
In order to handle boundary effects we also assume that there is a larger domain $\Omega_1$ which contains $\bar{\Omega}$ as a compact subset and that the $k$-slicing is defined and boundaryless in $\Omega_1$. Note that this is automatic if $\partial\Sigma_j=\emptyset$. Thus ${\cal H}_{j,0}$ consists of those functions in ${\cal H}_j$ with $0$ boundary data on $\Sigma_j\cap\partial\Omega$. The existence of eigenfunctions $u_j$ in this
space will be discussed in the next section. The following estimate of the $L^2(\Sigma_k)$ norm
near the singular set will be used both in this section and the next. The result may be thought
of as a non-concentration result for the weighted $L^2$ norm near the singular set in case
the ${\cal H}_k$ norm is bounded.
\begin{prop}
\label{prop:l2con} Let ${\cal S}$ be a closed subset of $\Omega_1$ with zero $(k-1)$-dimensional Hausdorff measure. Let $\Sigma_k$ be a member of a bounded minimal $k$-slicing such that $\Sigma_{k+1}$ is partially regular in $\Omega_1$. For any $\eta>0$ there
exists an open set $V\subset \Omega_1$ containing ${\cal S}\cap\bar{\Omega}$ such that whenever
${\cal S}_k\cap\bar{\Omega}\subset V$ we have the following
estimate
\[ \int_{\Sigma_k\cap V}\varphi^2\rho_{k+1}\ d\mu_k\leq
\eta\int_{\Sigma_k\cap\Omega}[|\nabla_k\varphi|^2+(1+P_k)\varphi^2]\rho_{k+1}\ d\mu_k
\]
for all $\varphi\in {\cal H}_{k,0}$.
\end{prop}
\begin{pf} Let $\epsilon>0,\ \delta>0$ be given. We may choose a finite covering of the compact set
${\cal S}\cap \bar{\Omega}$ by balls $B_{r_\alpha}(x_\alpha)$ with $r_\alpha\leq \delta/5$ such that
\[ \sum_\alpha r_\alpha^{k-1}\leq \epsilon.
\]
We let $V$ denote the union of the balls, $V=\cup_\alpha B_{r_\alpha}(x_\alpha)$.
Assume that ${\cal S}_k\cap\bar{\Omega}\subset V$ and let $\varphi\in {\cal H}_{k,0}$.
We may extend $\varphi$ to $\Sigma_k\cap\Omega_1$ by taking $\varphi=0$ in $\Omega_1\sim\Omega$.
By a standard first variation argument for submanifolds of ${\mathbb R}^N$, for a nonnegative function we have
\begin{eqnarray*}
k\int_{\Sigma_k\cap B_r}\varphi^2\rho_{k+1}\ d\mu_k&\leq& r\int_{\Sigma_k\cap B_r}(|\nabla_k
(\varphi^2\rho_{k+1})|+|H_k|\varphi^2\rho_{k+1})\ d\mu_k \\
&+&r\int_{\Sigma_k\cap\partial B_r} \varphi^2\rho_{k+1}\ d\mu_{k-1}.
\end{eqnarray*}
Let $L_\alpha(r)=\int_{\Sigma_k\cap B_r(x_\alpha)}\varphi^2\rho_{k+1}\ d\mu_k$ and
\[M_\alpha(r)=\int_{\Sigma_k\cap B_r(x_\alpha)}(|\nabla_k
(\varphi^2\rho_{k+1})|+|H_k|\varphi^2\rho_{k+1})\ d\mu_k.
\]
The above inequality then implies
\[ kL_\alpha(r)\leq rM_\alpha(r)+r\frac{d}{dr}(L_\alpha(r)).
\]
Now for any $\alpha$ and a small constant $\epsilon_0$ we consider two cases. Case (1): there exists $r$ with
$r_\alpha\leq r\leq \delta/5$ such that the inequality
\[ \epsilon_0L_\alpha(5r)\leq rM_\alpha(r)
\]
holds; we denote such a choice of $r$ by $r_\alpha'$. Case (2): for all $r$ with
$r_\alpha\leq r\leq \delta/5$ we have
\[ rM_\alpha(r)< \epsilon_0L_\alpha(5r).
\]
The collection of $\alpha$ for which the first case holds will be labeled $A_1$, and that for which the
second holds $A_2$. We will handle the two cases separately.
For the collection of balls with radius $r_\alpha'$ indexed by $A_1$ we may apply the five times
covering lemma to extract a subset $A_1'\subseteq A_1$ for which the balls in $A_1'$ are
disjoint and such that
\[ V_1\equiv \cup_{\alpha\in A_1}B_{r_\alpha}(x_\alpha)\subseteq \cup_{\alpha\in A_1}B_{r_\alpha'}(x_\alpha)\subseteq
\cup_{\alpha\in A_1'}B_{5r_\alpha'}(x_\alpha).
\]
From the inequality of case (1) above applied for $\alpha\in A_1'$ we have
\[ L_\alpha(r_\alpha)\leq L_\alpha(5r_\alpha')\leq \epsilon_0^{-1}r_\alpha'M_\alpha(r_\alpha')\leq \epsilon_0^{-1}\delta M_\alpha(r_\alpha').
\]
Summing over $\alpha\in A_1'$ and using disjointness of the balls we have
\begin{equation}
\label{eqn:case1} \int_{\Sigma_k\cap V_1}\varphi^2\rho_{k+1}\ d\mu_k\leq
\epsilon_0^{-1}\delta\int_{\Sigma_k\cap\Omega}(|\nabla_k (\varphi^2\rho_{k+1})|+|H_k|\varphi^2\rho_{k+1})\ d\mu_k.
\end{equation}
Now for $\alpha\in A_2$ we have
\[ kL_\alpha(r)\leq \epsilon_0L_\alpha(5r)+r\frac{d}{dr}(L_\alpha(r))
\]
for $r_\alpha\leq r\leq \delta/5$. For $j=0,1,2,\ldots$ define $\sigma_j=5^jr_\alpha$ and let $p$ be the
positive integer such that $\sigma_{p-1}<\delta/5\leq \sigma_p$. We define $\Lambda_j$ by
$\Lambda_j=L_\alpha(\sigma_j)$ for $j=0,1,\ldots,p$. For $\sigma_j\leq r\leq \sigma_{j+1}$ we then have
\[ kL_\alpha(r)\leq \epsilon_0\Lambda_{j+2}\Lambda_j^{-1}L_\alpha(r)+r\frac{d}{dr}(L_\alpha(r)).
\]
Integrating we find
\[ \Lambda_{j+1}\Lambda_j^{-1}\geq 5^{k-\epsilon_0\Lambda_{j+2}\Lambda_j^{-1}}.
\]
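To spell out the integration step, note that since $L_\alpha>0$ the differential inequality may be rewritten as
\[ \frac{d}{dr}\log L_\alpha(r)\geq \frac{k-\epsilon_0\Lambda_{j+2}\Lambda_j^{-1}}{r}
\]
on $\sigma_j\leq r\leq \sigma_{j+1}$, and integrating over this interval, on which $\sigma_{j+1}/\sigma_j=5$, gives
\[ \log(\Lambda_{j+1}\Lambda_j^{-1})\geq (k-\epsilon_0\Lambda_{j+2}\Lambda_j^{-1})\log 5,
\]
which exponentiates to the bound just stated.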
Setting $R_j=\Lambda_{j+1}\Lambda_j^{-1}$ we have shown
\[ R_j\geq 5^{k-\epsilon_0R_jR_{j+1}}.
\]
Now if $R_j\leq 5^{k-1}$ then we would have $5^{k-1}\geq 5^{k-\epsilon_0R_jR_{j+1}}$,
which in turn implies $\epsilon_05^{k-1}R_{j+1}\geq \epsilon_0R_jR_{j+1}\geq 1$. Thus if we
choose $\epsilon_0=5^{-3k+3}$ we find $R_{j+1}\geq 5^{2(k-1)}$ and hence it follows
that $R_jR_{j+1}\geq 5^{2(k-1)}$. Thus we have shown that for any $j=0,1,\ldots, p-1$
we either have $R_j\geq 5^{k-1}$ or $R_jR_{j+1}\geq 5^{2(k-1)}$. This implies that
$\Lambda_p\Lambda_0^{-1}\geq 5^{(p-1)(k-1)}\geq 5^{1-k}(\delta/r_\alpha)^{k-1}$ and therefore
we have $L_\alpha(r_\alpha)\leq c(r_\alpha/\delta)^{k-1}L_\alpha(\sigma_p)$ for each $\alpha\in A_2$. Summing this over
these $\alpha$ and using the choice of the covering we have
\[ \int_{\Sigma_k\cap V_2}\varphi^2\rho_{k+1}\ d\mu_k\leq c\epsilon\delta^{1-k}\int_{\Sigma_k\cap\Omega}\varphi^2\rho_{k+1}\ d\mu_k.
\]
Combining this with (\ref{eqn:case1}), and absorbing $\epsilon_0^{-1}$ into the constant $c$ since $\epsilon_0$ is now fixed, we finally obtain
\[ \int_{\Sigma_k\cap V}\varphi^2\rho_{k+1}\ d\mu_k\leq c\epsilon\delta^{1-k}\int_{\Sigma_k\cap\Omega}\varphi^2\rho_{k+1}\ d\mu_k+c\delta\int_{\Sigma_k\cap\Omega}(|\nabla_k \varphi^2\rho_{k+1}|+|H_k|\varphi^2\rho_{k+1})\ d\mu_k.
\]
We can estimate the second term on the right using
\[ |\nabla_k \varphi^2\rho_{k+1}|+|H_k|\varphi^2\rho_{k+1}\leq (\varphi^2+|\nabla_k\varphi|^2)\rho_{k+1}
+\frac{1}{2}\varphi^2(2+|\nabla_k\log \rho_{k+1}|^2+|H_k|^2)\rho_{k+1}.
\]
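This pointwise estimate follows from the Leibniz rule together with the elementary inequality $2ab\leq a^2+b^2$:
\[ |\nabla_k \varphi^2\rho_{k+1}|\leq 2\varphi|\nabla_k\varphi|\rho_{k+1}+\varphi^2|\nabla_k\log \rho_{k+1}|\rho_{k+1}\leq (\varphi^2+|\nabla_k\varphi|^2)\rho_{k+1}+\frac{1}{2}\varphi^2(1+|\nabla_k\log \rho_{k+1}|^2)\rho_{k+1},
\]
while $|H_k|\varphi^2\rho_{k+1}\leq \frac{1}{2}\varphi^2(1+|H_k|^2)\rho_{k+1}$.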
This implies the bound
\[ \int_{\Sigma_k\cap V}\varphi^2\rho_{k+1}\ d\mu_k\leq c(\epsilon\delta^{1-k}+\delta)\int_{\Sigma_k\cap\Omega}\varphi^2\rho_{k+1}\ d\mu_k+c\delta\int_{\Sigma_k\cap\Omega}[|\nabla_k \varphi|^2+P_k\varphi^2]\rho_{k+1}\ d\mu_k.
\]
The desired conclusion now follows by choosing $\delta$ so that $c\delta=\eta/2$ and then choosing
$\epsilon$ so that $c\epsilon\delta^{1-k}=\eta/2$. This completes the proof.
\end{pf}
The following coercivity bound will be useful both in this section and in the next. We assume
here that we have a partially regular minimal $k$-slicing.
\begin{prop}
\label{prop:coercive} Assume that our $k$-slicing is bounded. There is a constant $c$ such that
for $\varphi\in {\cal H}_{k,0}$ we have
\begin{equation*}
c^{-1}\int_{\Sigma_k}[|\nabla_k\varphi|^2+(P_k+|\nabla_k\log u_k|^2)\varphi^2] \rho_{k+1}\ d\mu_k
\leq Q_k(\varphi,\varphi)+\int_{\Sigma_k}\varphi^2\rho_{k+1}\ d\mu_k.
\end{equation*}
Moreover we have the bound
\[ c^{-1}\int_{\Sigma_k}(|\nabla_k(\varphi\sqrt{\rho_{k+1}})|^2+|A_k|^2\varphi^2\rho_{k+1})\ d\mu_k\leq
Q_k(\varphi,\varphi)+\int_{\Sigma_k}\varphi^2\rho_{k+1}\ d\mu_k.
\]
\end{prop}
\begin{pf} We can see from (\ref{eqn:qform}) that
\[ Q_k(\varphi,\varphi)\geq S_k(\varphi,\varphi)+\frac{1}{8n}\int_{\Sigma_k}(\sum_{p=k}^n|\tilde{A}_p|^2+\sum_{p=k+1}^n|\nabla_k\log u_p|^2)\varphi^2\rho_{k+1}\ d\mu_k.
\]
Using the stability of $\Sigma_k$ we have
\begin{equation} \label{eqn:qbound}
Q_k(\varphi,\varphi)\geq \frac{1}{8n}\int_{\Sigma_k}(\sum_{p=k}^n|\tilde{A}_p|^2+\sum_{p=k+1}^n|\nabla_k\log u_p|^2)\varphi^2\rho_{k+1}\ d\mu_k.
\end{equation}
Finally we use Lemma \ref{lem:2ff} to conclude that (note that $\tilde{A}_n=0$)
\[ \sum_{p=k}^n|\tilde{A}_p|^2\geq \sum_{p=k}^{n-1}|A_p^{\nu_p}|^2\geq \sum_{p=k}^{n-1}|A_k^{\nu_p}|^2=|A_k|^2,
\]
and thus we have
\[ Q_k(\varphi,\varphi)\geq \frac{1}{8n}\int_{\Sigma_k}P_k\varphi^2\rho_{k+1}\ d\mu_k.
\]
Recall that $S_k(\varphi,\varphi)=\int_{\Sigma_k}(|\nabla_k\varphi|^2-q_k\varphi^2)\rho_{k+1}\ d\mu_k$ where
\[ q_k=\frac{1}{2}(|\tilde{A}_k|^2+\hat{R}_{k+1}-\tilde{R}_k).
\]
Here $\hat{R}_{k+1}$ is given in Theorem \ref{thm:eval} and $\tilde{R}_k$ is given in Lemma
\ref{lem:rcalc}. We will need an upper bound on $q_k$, so we first see from Theorem \ref{thm:eval}
with $k$ replaced by $k+1$ that
\[ q_k\leq c+\frac{1}{2}\sum_{p=k}^{n-1}|\tilde{A}_p|^2-\frac{1}{2}\tilde{R}_k
\]
where the constant bounds the curvature of $\Sigma_n$ and the eigenvalues. Now from Lemma \ref{lem:rcalc} we can obtain the bound
\[ -\frac{1}{2}\tilde{R}_k\leq \frac{1}{2}|R_k|+\sum_{p=k+1}^{n-1}|\nabla_k\log u_p|^2+div_k({\cal X}_k)
\]
where ${\cal X}_k=\sum_{p=k+1}^{n-1}\nabla_k \log u_p$. We observe that the Gauss equation
implies that $|R_k|\leq c(1+|A_k|^2)$, and so we have
\[ q_k\leq c+c\sum_{p=k}^{n-1}|\tilde{A}_p|^2+\sum_{p=k+1}^{n-1}|\nabla_k\log u_p|^2+div_k({\cal X}_k).
\]
Now observe that $Q_k\geq S_k$ and so we have
\[ \int_{\Sigma_k}(|\nabla_k\varphi|^2+\frac{1}{8n}P_k\varphi^2)\rho_{k+1}\ d\mu_k\leq 2Q_k(\varphi,\varphi)
+\int_{\Sigma_k}q_k\varphi^2\rho_{k+1}\ d\mu_k.
\]
We want to bound the second term on the right by a constant times the first plus a constant times
the square of the $L^2$ norm of $\varphi$, so we use the bound for $q_k$ to obtain
\begin{eqnarray*} \int_{\Sigma_k}q_k\varphi^2\rho_{k+1}\ d\mu_k&\leq& c\int_{\Sigma_k}(1+\sum_{p=k}^{n-1}|\tilde{A}_p|^2+\sum_{p=k+1}^{n-1}|\nabla_k\log u_p|^2)\varphi^2\rho_{k+1}\ d\mu_k\\
&+&\int_{\Sigma_k}div_k({\cal X}_k)\varphi^2\rho_{k+1}\ d\mu_k.
\end{eqnarray*}
Now since $\varphi$ has compact support we have
\[ \int_{\Sigma_k}div_k({\cal X}_k)\varphi^2\rho_{k+1}\ d\mu_k=-\int_{\Sigma_k}\langle {\cal X}_k,\nabla(\varphi^2\rho_{k+1})\rangle\ d\mu_k.
\]
Easy estimates then imply the bound
\[ |\int_{\Sigma_k}div_k({\cal X}_k)\varphi^2\rho_{k+1}\ d\mu_k|\leq \frac{1}{2}
\int_{\Sigma_k}|\nabla_k\varphi|^2\rho_{k+1}\ d\mu_k+c\int_{\Sigma_k}(\sum_{p=k+1}^{n-1}|\nabla_k\log u_p|^2)\varphi^2\rho_{k+1}\ d\mu_k.
\]
We may now absorb the first term back to the left and use (\ref{eqn:qbound}) to obtain the bound
\[ \int_{\Sigma_k}(|\nabla_k\varphi|^2+P_k\varphi^2)\rho_{k+1}\ d\mu_k\leq cQ_k(\varphi,\varphi)+
\int_{\Sigma_k}\varphi^2\rho_{k+1}\ d\mu_k.
\]
To bound the term involving $|\nabla_k\log u_k|^2$ we recall that on the regular set
we have
\[ \tilde{\Delta}_ku_k+q_ku_k=-\lambda_ku_k
\]
where $\lambda_k\geq 0$. This implies by direct calculation
\[ \tilde{\Delta}_k\log u_k=-q_k-\lambda_k-|\nabla_k\log u_k|^2.
\]
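Indeed, the chain rule gives
\[ \tilde{\Delta}_k\log u_k=\frac{\tilde{\Delta}_ku_k}{u_k}-|\nabla_k\log u_k|^2,
\]
and substituting $\tilde{\Delta}_ku_k=-(q_k+\lambda_k)u_k$ from the equation above yields the identity.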
(Note that $\tilde{\nabla}_k=\nabla_k$ on functions which do not depend on the extra variables
$t_p$.) Now if $\varphi$ has compact support in ${\cal R}_k$, we multiply by $\varphi^2$ and
integrate by parts to obtain
\[ \int_{\Sigma_k}(|\nabla_k\log u_k|^2+q_k)\varphi^2\rho_{k+1}\ d\mu_k\leq 2\int_{\Sigma_k}\varphi\langle\nabla_k\varphi, \nabla_k\log u_k\rangle\rho_{k+1}\ d\mu_k.
\]
By the arithmetic-geometric mean inequality
\begin{eqnarray*}
\int_{\Sigma_k}(|\nabla_k\log u_k|^2+q_k)\varphi^2\rho_{k+1}\ d\mu_k&\leq& \frac{1}{2}\int_{\Sigma_k}(|\nabla_k\log u_k|^2+q_k)\varphi^2\rho_{k+1}\ d\mu_k \\
&+&2\int_{\Sigma_k}|\nabla_k\varphi|^2\rho_{k+1}\ d\mu_k.
\end{eqnarray*}
This implies
\[ \frac{1}{2}\int_{\Sigma_k}|\nabla_k\log u_k|^2\varphi^2\rho_{k+1}\ d\mu_k\leq\frac{1}{2}Q_k(\varphi,\varphi)
+\frac{3}{2}\int_{\Sigma_k}|\nabla_k\varphi|^2\rho_{k+1}\ d\mu_k.
\]
The first inequality then follows from this and our previous estimate.
The second conclusion follows since $|\nabla_k \log \rho_{k+1}|^2\leq cP_k$, and
so the integrand on the left, $|\nabla_k(\varphi\sqrt{\rho_{k+1}})|^2+|A_k|^2\varphi^2\rho_{k+1}$,
is bounded pointwise by a constant times $(|\nabla_k\varphi|^2+P_k\varphi^2)\rho_{k+1}$.
\end{pf}
Recall that an important analytic step in the minimal hypersurface regularity theory is
the local reduction to the case in which the hypersurface is the boundary of a set. This
makes comparisons particularly simple and reduces consideration to a multiplicity one
setting. We will need an analogous reduction in our situation. Since the leaves of a $k$-slicing
can be singular, we must consider the possibility that local topology comes into play and
prohibits such a reduction to boundaries of sets. What saves us here is the fact that $k$-slicings
come with a natural trivialization of the normal bundle (on the regular set). We have the following
result.
\begin{prop}
\label{prop:boundary} Assume that $U$ is compactly contained in $\Omega$, and that $U\cap\Sigma_n$ is diffeomorphic to a ball. Assume that we have a minimal $k$-slicing in $\Omega$ such that the associated
$(k+1)$-slicing is partially regular. Let $\hat{\Sigma}_k$ denote the closure of any connected
component of $\Sigma_k\cap U\cap{\cal R}_{k+1}$. Then it follows that $\hat{\Sigma}_k$ divides
the corresponding connected component (denoted $\hat{\Sigma}_{k+1}$) of $\Sigma_{k+1}$ into a union of two relatively open subsets, and choosing the one, denoted $U_{k+1}$,
for which the unit normal of $\hat{\Sigma}_k$ points outward, we have $\hat{\Sigma}_k=\partial U_{k+1}$
as a point set boundary in $\hat{\Sigma}_{k+1}$, and as an oriented boundary in ${\cal R}_{k+1}$.
\end{prop}
\begin{pf} Since $\hat{\Sigma}_k\cap{\cal R}_{k+1}$ and $\hat{\Sigma}_{k+1}\cap{\cal R}_{k+1}$ are
connected, it follows that the complement of $\hat{\Sigma}_k\cap{\cal R}_{k+1}$ in
$\hat{\Sigma}_{k+1}\cap{\cal R}_{k+1}$ has either $1$ or $2$ connected components. These consist
of the connected components of points lying near $\hat{\Sigma}_k$ on either side. Locally these are
separate components, but they may reduce globally to a single connected component. If this were
to happen, then since $dim({\cal S}_{k+1})\leq k-2$, we could find a smooth embedded closed curve $\gamma(t)$ parametrized
by a periodic variable $t\in [0,1]$ with $\gamma(0)\in\hat{\Sigma}_k\cap{\cal R}_{k+1}$ and
$\gamma(t)\in {\cal R}_{k+1}\sim \hat{\Sigma}_k$ for $t\neq 0$. We may also assume that
$\gamma'(0)$ is transverse to $\hat{\Sigma}_k$. We choose local coordinates $x^1,\ldots, x^k$
for $\hat{\Sigma}_k$ in a neighborhood $V$ of $\gamma(0)$ and we may find an embedding
$F$ of $V\times S^1$ in ${\cal R}_{k+1}$ with the property that $F(0,t)=\gamma(t)$, $F(x,0)\in \hat{\Sigma}_k$, $F(x,t)\not\in \hat{\Sigma}_k$ for $t\neq 0$, and $\frac{\partial F}{\partial t}(x,0)$ is transverse
to $\hat{\Sigma}_k$. The $k$-form $\omega=\zeta(x)dx^1\wedge\ldots\wedge dx^k$, where $\zeta$
is a nonnegative and nonzero function with compact support in $V$, is a closed form which has positive integral over
$\hat{\Sigma}_k$. Since the image $V_1=F(V\times S^1)$ is compactly contained in ${\cal R}_{k+1}$ and the normal bundle of $\hat{\Sigma}_{k+1}$ is trivial, we may choose coordinates $x^{k+2},\ldots, x^n$ for a normal disk, and the coordinates $x^1,\ldots, x^k,t,x^{k+2},\ldots, x^n$ are then coordinates on a
neighborhood of $V_1$ in $\Sigma_n$. We may then extend $\omega$ to an $(n-1)$-form on this neighborhood by setting
\[ \omega_1=\omega\wedge \zeta_1(x^{k+2},\ldots,x^n)dx^{k+2}\wedge\ldots\wedge dx^n
\]
where $\zeta_1$ is a nonzero, nonnegative function with compact support in the domain of
$x^{k+2},\ldots,x^n$. Thus $\omega_1$ is a closed $(n-1)$-form with compact support in $U\cap \Sigma_n$
which has positive integral on $\hat{\Sigma}_{n-1}$, the connected component of $\Sigma_{n-1}$
containing $\gamma(0)$. This contradicts the condition that each connected component
of $\Sigma_{n-1}$ must divide the ball $U\cap \Sigma_n$ into $2$ connected components and
is the oriented boundary of one of them, say $\hat{\Sigma}_{n-1}=\partial U_n$, since
Stokes' theorem would imply that $\int_{\hat{\Sigma}_{n-1}}\omega_1=\int_{U_n}d\omega_1=0$ (note that
$\omega_1$ has compact support in $U\cap\Sigma_n$).
\end{pf}
We will prove a boundedness theorem which will be needed in the proof of the
compactness theorem. Note that we will obtain the partial regularity theorem by finite
induction down from dimension $n-1$, so we may assume in the following theorems that
we have already established partial regularity for $(k+1)$-slicings. In the following result
we will consider the restriction of a $k$-slicing to a small ball $B_\sigma(x)$ where $x\in {\Bbb R}^N$.
We consider the rescaled $k$-slicing of the unit ball given by $\Sigma_{j,\sigma}=\sigma^{-1}(\Sigma_j-x)$
with $u_{j,\sigma}(y)=a_ju_j(x+\sigma y)$ with $a_j$ chosen so that $\int_{\Sigma_{j,\sigma}}(u_{j,\sigma})^2\rho_{j+1,\sigma}\ d\mu_j=1$. We note that by Proposition \ref{prop:boundary} we may assume that each $\Sigma_j$ in $B_\sigma(x)$ is the oriented boundary
of a relatively open set $O_{j+1}\subseteq \Sigma_{j+1}$. We take $O_{j+1,\sigma}$ to be the rescaled open set.
The following result implies that the rescaled $k$-slicing remains $\Lambda$-bounded for
a suitably chosen $\Lambda$.
\begin{thm}
\label{thm:bdness} Assume that all bounded $(k+1)$-slicings are partially regular. If
we take any bounded minimal $k$-slicing $(\Sigma_j,u_j)$ in $\Omega$ and a ball $B_\sigma(x)$
compactly contained in $\Omega$, then there is a $\Lambda$ depending only on $\Sigma_n$ such that
$(\Sigma_{j,\sigma},u_{j,\sigma})$, $j=k,\ldots,n-1$ is $\Lambda$-bounded in $B_{1/2}(0)$.
\end{thm}
\begin{pf} The proof is by a finite induction beginning with $k=n-1$. The boundedness of
$\mu_{n-1}(\Sigma_{n-1,\sigma})$ follows by comparison with a portion of the sphere of radius $1$
in a standard way (see a similar argument below). We normalize
$\int_{\Sigma_{n-1,\sigma}}(u_{n-1,\sigma})^2\ d\mu_{n-1}=1$, so it remains to show
\[ \int_{\Sigma_{n-1,\sigma}\cap B_{1/2}(0)}|A_{n-1,\sigma}|^2u_{n-1,\sigma}^2\ d\mu_{n-1}\leq \Lambda.
\]
To see this, we use stability with the variation $\zeta u_{n-1,\sigma}$ to obtain
\[ \frac{1}{4}\int_{\Sigma_{n-1,\sigma}}|A_{n-1,\sigma}|^2\zeta^2u_{n-1,\sigma}^2\ d\mu_{n-1}\leq
Q_{n-1,\sigma}(\zeta u_{n-1,\sigma},\zeta u_{n-1,\sigma}).
\]
Now we have by direct calculation for any $W_{1,2}(\Sigma_{n-1,\sigma})$ function $v$
\[ Q_{n-1,\sigma}(\zeta v,\zeta v)=Q_{n-1,\sigma}(\zeta^2 v,v)+
\int_{\Sigma_{n-1,\sigma}}v^2|\nabla_{n-1,\sigma}\zeta|^2\ d\mu_{n-1}.
\]
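This identity is a short direct computation: expanding the gradient terms,
\[ |\nabla_{n-1,\sigma}(\zeta v)|^2=\zeta^2|\nabla_{n-1,\sigma}v|^2+2\zeta v\langle\nabla_{n-1,\sigma}\zeta,\nabla_{n-1,\sigma}v\rangle+v^2|\nabla_{n-1,\sigma}\zeta|^2,
\]
while $\langle\nabla_{n-1,\sigma}(\zeta^2 v),\nabla_{n-1,\sigma}v\rangle=\zeta^2|\nabla_{n-1,\sigma}v|^2+2\zeta v\langle\nabla_{n-1,\sigma}\zeta,\nabla_{n-1,\sigma}v\rangle$, and the zeroth order part of the quadratic form contributes the same term $-q\zeta^2v^2$ (writing $q$ for the potential of the form) to both sides, so the two sides differ exactly by $\int_{\Sigma_{n-1,\sigma}}v^2|\nabla_{n-1,\sigma}\zeta|^2\ d\mu_{n-1}$.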
Taking $v=u_{n-1,\sigma}$ and choosing $\zeta$ to be a function which is $1$ on $B_{1/2}(0)$ with
support in $B_1(0)$ and with bounded gradient we find
\[ \int_{\Sigma_{n-1,\sigma}}|A_{n-1,\sigma}|^2u_{n-1,\sigma}^2\ d\mu_{n-1}\leq 4\lambda_{n-1,\sigma}+c\leq \Lambda
\]
for a constant $\Lambda$, where we have used the eigenvalue condition
\[ Q_{n-1,\sigma}(\zeta^2 u_{n-1,\sigma},u_{n-1,\sigma})=\lambda_{n-1,\sigma}\int_{\Sigma_{n-1,\sigma}}\zeta^2u_{n-1,\sigma}^2\ d\mu_{n-1}
\]
and the obvious relation $\lambda_{n-1,\sigma}=\sigma^2\lambda_{n-1}$. This proves $\Lambda$-boundedness for $k=n-1$.
Now assume that we have $\Lambda$-boundedness for $j\geq k+1$ in $B_{3/4}(0)$. Thus it follows
that $\int_{\Sigma_{k+1,\sigma}\cap B_{3/4}(0)}(1+(u_{k+1,\sigma})^2)\rho_{k+2,\sigma}\ d\mu_{k+1}$ is bounded
and hence $\int_{\Sigma_{k+1,\sigma}\cap B_{3/4}(0)}\rho_{k+1,\sigma}\ d\mu_{k+1}$ is bounded. We may then
use the coarea formula to find a radius $r\in (1/2,3/4)$ so that
\[ \int_{\Sigma_{k+1,\sigma}\cap \partial B_r(0)}\rho_{k+1,\sigma}\ d\mu_k\leq \Lambda.
\]
Using the portion of $\Sigma_{k+1,\sigma}\cap \partial B_r(0)$ lying outside $O_{k+1,\sigma}$ as a comparison
surface we find
\[ Vol_{\rho_{k+1,\sigma}}(\Sigma_{k,\sigma}\cap B_{1/2}(0))\leq Vol_{\rho_{k+1,\sigma}}(\Sigma_{k+1,\sigma}\cap \partial B_r(0))\leq
\Lambda.
\]
Finally we prove the bound
\[ \int_{\Sigma_{k,\sigma}\cap B_{1/2}(0)}(|A_{k,\sigma}|^2+\sum_{p=k+1}^n|\nabla_{k,\sigma} \log u_{p,\sigma}|^2)u_{k,\sigma}^2
\rho_{k+1,\sigma}\ d\mu_k\leq \Lambda
\]
by the use of stability as we did above for the case $k=n-1$.
\end{pf}
We will now formulate and prove a compactness theorem for minimal $k$-slicings under
the assumption that the associated
$(k+1)$-slicings for the sequence are partially regular. We will say that a $\Lambda$-bounded sequence of $k$-slicings $(\Sigma_j^{(i)},u_j^{(i)})$, $j=k,\ldots, n-1$ {\it converges} to a minimal
$k$-slicing $(\Sigma_j,u_j)$ in an open set $U$ if $\Sigma_j^{(i)}$ converges in $C^2$ norm to $\Sigma_j$ in $\bar{U}$ locally on the complement of the singular set (of the limit) ${\cal S}_j$, and such that for
$j=k,\ldots,n-1$
\begin{equation}
\label{eqn:conv1} \lim_{i\to\infty}V_{\rho_{j+1}^{(i)}}(\Sigma_j^{(i)}\cap U_i)=V_{\rho_{j+1}}(\Sigma_j\cap U),
\end{equation}
\begin{eqnarray}
\label{eqn:conv2}
\lim_{i\to\infty}\|u^{(i)}_j\|^2_{0,j,U_i}&=&\|u_j\|_{0,j,U}^2 \\
\lim_{i\to\infty} \int_{\Sigma_j^{(i)}\cap U_i}(|\nabla_ju_j^{(i)}|^2+P_j^{(i)}(u_j^{(i)})^2)\rho_{j+1}^{(i)}\ d\mu_j&=&\int_{\Sigma_j\cap U}(|\nabla_ju_j|^2+P_ju_j^2)\rho_{j+1}\ d\mu_j \nonumber
\end{eqnarray}
where $U_i$ is a sequence of compact subdomains of $U$ with $U_i\subseteq U_{i+1}\subseteq U$
and $U=\cup_iU_i$.
Making precise the meaning of convergence on compact subsets for this
problem involves some subtlety, since changing the $u_p$, $p\geq j+1$, by multiplication by a positive constant has no effect on the $\Sigma_j$, so in order to get nontrivial limits for the $u_p$ we must normalize
them appropriately. In case $\Sigma_j\cap U$ has multiple components this normalization must be
done on each component. If $(\Sigma_j,u_j)$ is a minimal $k$-slicing with $\Sigma_j$ being partially regular
for $j\geq k+1$, then we call a compact subdomain $U$ of $\Omega$ {\it admissible for $(\Sigma_j,u_j)$}
if $U$ is a smooth domain which meets $\partial\Sigma_j$ transversally and $dim(\partial U\cap{\cal S}_j)\leq j-3$. It follows from the coarea formula that any smooth domain can be
perturbed to be admissible. We make the following definition.
\begin{defn} We say that a sequence of $k$-slicings $(\Sigma_j^{(i)},u_j^{(i)})$ {\it converges
on compact subsets} to a $k$-slicing $(\Sigma_j,u_j)$ if for any compact subdomain $U$ of $\Omega$ which
is admissible for $(\Sigma_j,u_j)$ and for any admissible domains $U_i$ for $(\Sigma_j^{(i)},u_j^{(i)})$
with $U_i\subseteq U_{i+1}\subseteq U$ compactly contained in $U$ it is true that each connected component of $\Sigma_j\cap{\cal R}_{j+1}\cap U$ is a limit of
connected components of $\Sigma_j^{(i)}\cap{\cal R}_{j+1}^{(i)}\cap U_i$ in the sense of (\ref{eqn:conv1}) and (\ref{eqn:conv2}) with $u_j$ appropriately normalized on each connected component.
\end{defn}
\begin{rem} Because of the connectedness of the regular set and the Harnack inequality, we may normalize the $u_j$ to be equal to $1$ at a point $x_0\in {\cal R}_k$ about which we have a uniform ball on which the $\Sigma_j$ have bounded curvature. That this normalization suffices for the connected component of $\Sigma_k\cap U$, for any compact admissible domain for $(\Sigma_j,u_j)$, is a consequence of
the compactness theorem below.
\end{rem}
The following compactness and regularity theorem includes Theorem \ref{thm:reg} as
a special case.
\begin{thm}
\label{thm:cptness} Assume that all bounded minimal $(k+1)$-slicings are partially regular.
Given a $\Lambda$-bounded sequence of $k$-slicings, there is a subsequence which
converges to a $\Lambda$-bounded $k$-slicing on compact open subsets of $\Omega$.
Furthermore $\Sigma_k$ is partially regular.
\end{thm}
\begin{pf} We will proceed as usual by downward induction beginning with $k=n-1$. We
will break the proof into two separate steps, the first establishing the statement
(\ref{eqn:conv1}) for convergence of the $\Sigma_k$ and the second showing the other
two statements (\ref{eqn:conv2}) involving convergence of the $u_k$. For $k=n-1$ the first step follows
from the usual compactness theorem for volume minimizing hypersurfaces (see \cite{simon}).
To complete the proof we will need to develop some monotonicity ideas both for the
$\Sigma_j$ and for the $u_j$. We digress on this topic and return to the proof below.
We now prove a version of the monotonicity of the frequency-type function. This idea
is due to F. Almgren \cite{almgren}, and it gives a method to prove that solutions
of variationally defined elliptic equations are approximately homogeneous on a small scale.
The importance of this method for us is that it works in the presence of singularities provided
certain integrals are defined.
We will apply this to show that the $u_k$ become homogeneous upon rescaling at a given
singular point. Assume that $C$ is a $k$ dimensional cone in ${\Bbb R}^n$ which is regular
except for a set $\cal S$ with $dim({\cal S})\leq k-3$. Assume that $Q$ is a quadratic form on $C$ of
the form
\[ Q(\varphi,\varphi)=\int_C(|\nabla\varphi|^2-q(x)\varphi^2)\rho\ d\mu
\]
where $\rho$ is a homogeneous weight function on $C$ of degree $p$; i.e. assume that
$\rho(\lambda x)=\lambda^p\rho(x)$ for $x\in C$ and $\lambda>0$. Assume also that
$\rho$ is smooth and positive on the regular set ${\cal R}$ of $C$ and that $\rho$ is locally
$L^1$ on $C$. Assume also that $q$ is smooth on ${\cal R}$ and is homogeneous of
degree $-2$; i.e. assume that
$q(\lambda x)=\lambda^{-2}q(x)$ for $x\in C$ and $\lambda>0$. Finally assume that
$u$ is a minimizer for $Q$ in a neighborhood of $0$ and in particular that $u$ is smooth and
positive on ${\cal R}$. Assume also that $q=div({\cal X})+\bar{q}$ where $|{\cal X}|^2+|\bar{q}|\leq P$
for some positive function $P$ and that the following integral bound holds
\[ \int_C[|\nabla u|^2+(1+|\nabla \log \rho|^2+P)u^2]\rho\ d\mu<\infty.
\]
Under these conditions we may define
the frequency function $N(\sigma)$, which is a function of a radius $\sigma>0$ such that $B_\sigma(0)$
is contained in the domain of definition of $u$. It is defined by
\begin{equation}
\label{eqn:freqfcn} N(\sigma)=\frac{\sigma Q_{\sigma}(u)}{I_{\sigma}(u)}
\end{equation}
where $Q_\sigma(u)$ and $I_\sigma(u)$ are defined by
\[ Q_\sigma(u)=\int_{C\cap B_\sigma(0)}(|\nabla u|^2-q(x)u^2)\rho\ d\mu_k,\ I_\sigma(u)=\int_{C\cap \partial B_\sigma(0)} u^2\rho\ d\mu_{k-1}
\]
where the last integral is taken with respect to $k-1$ dimensional Hausdorff measure. We
may now prove the following monotonicity result for $N(\sigma)$.
\begin{thm}
\label{thm:freq} Assume that $u$ is a critical point of $Q$ which is integrable as above. The function $N(\sigma)$ is monotone increasing in $\sigma$, and for almost all $\sigma$ we have
\[ N'(\sigma)=\frac{2\sigma}{I_\sigma(u)^2}(I_\sigma(u_r)I_\sigma(u)-\langle u_r, u\rangle_\sigma^2)
\]
where $u_r$ denotes the radial derivative of $u$ and $\langle\cdot,\cdot\rangle_\sigma$ denotes
the $\rho$-weighted $L^2$ inner product taken on $C\cap\partial B_\sigma(0)$. The limit of $N(\sigma)$
as $\sigma$ goes to $0$ exists and is finite. The function
$N(\sigma)$ is equal to a constant $N(0)$ if and only if $u$ is homogeneous of degree $N(0)$.
\end{thm}
\begin{pf} The argument can be done variationally and combines two distinct deformations of
the function $u$. The first involves a radial deformation of $C$; precisely, let $\zeta(r)$ be
a function which is nonnegative, decreasing, and has support in $B_\sigma(0)$. Let $X$ denote
the vector field on ${\Bbb R}^n$ given by $X=\zeta(r) x$ where $x$ denotes the position vector. The flow
$F_t$ of $X$ then preserves $C$, and we may write
\[ Q_\sigma(u\circ F_t)=\int_{C\cap B_\sigma(0)}(|\nabla_t u|^2-(q\circ F_{t})u^2)\rho\circ F_{t}\ d\mu_t
\]
where we have used a change of variable and $\nabla_t$ and $\mu_t$ denote the gradient
operator and volume measure with respect to $F_t^*(g)$ where $g$ is the induced metric on $C$
from ${\Bbb R}^n$. Differentiating with respect to $t$ and setting $t=0$ we obtain
\[ 0=\int_C\{(\langle-{\cal L}_Xg,du\otimes du\rangle-X(q)u^2)\rho+(|\nabla u|^2-qu^2)
(X(\rho)+\rho\ div(X))\}\ d\mu
\]
where ${\cal L}$ denotes the Lie derivative. By direct calculation we have $X(q)=-2\zeta q$,
$X(\rho)=p\zeta \rho$, $div(X)=r\zeta'(r)+k\zeta$, and ${\cal L}_X g=2r\zeta'(r)(dr\otimes dr)+2\zeta g$.
Substituting in this information and collecting terms we have
\[ 0=\int_C\{(p+k-2)\zeta(|\nabla u|^2-qu^2)+r\zeta'(|\nabla u|^2-2u_r^2-qu^2)\}\ \rho\ d\mu.
\]
Letting $\zeta$ approach the characteristic function of $B_\sigma(0)$ this implies
\begin{eqnarray*}
(p+k-2)Q_\sigma(u)&=&\sigma\int_{C\cap\partial B_\sigma(0)}(|\nabla u|^2-2u_r^2-qu^2)\ \rho\ d\mu_{k-1}\\
&=&\sigma\frac{dQ_\sigma(u)}{d\sigma}-2\sigma\int_{C\cap\partial B_\sigma(0)}u_r^2\rho\ d\mu_{k-1}.
\end{eqnarray*}
The second ingredient we need comes from the deformation $u_t=(1+t\zeta(r))u$ where
$\zeta$ is as above. Since $\dot{u}=\zeta u$ this deformation implies
\[ 0=\int_C(\langle \nabla u,\nabla(\zeta u)\rangle-q\zeta u^2)\rho\ d\mu.
\]
Expanding this and letting $\zeta$ approach the characteristic function of $B_\sigma(0)$ we have
\[ Q_\sigma(u)=\int_{C\cap\partial B_\sigma(0)}uu_r\ \rho\ d\mu_{k-1}.
\]
The proof will now follow by combining these. First we have
\[ N'(\sigma)=I_\sigma(u)^{-2}\{(Q_\sigma+\sigma Q'_\sigma)I_\sigma-\sigma Q_\sigma I'_\sigma\}.
\]
Substituting in for the terms involving derivatives, and using $\sigma I'_\sigma=(p+k-1)I_\sigma+2\sigma Q_\sigma$, this implies
\begin{eqnarray*} N'(\sigma)&=&I_\sigma^{-2}\{(Q_\sigma+(p+k-2)Q_\sigma)I_\sigma
-Q_\sigma(p+k-1)I_\sigma\} \\
&+&2\sigma I_\sigma^{-2}\{I_\sigma\int_{C\cap\partial B_\sigma(0)} u_r^2\rho\ d\mu_{k-1}-Q_\sigma^2\}.
\end{eqnarray*}
Since the first term on the right is $0$, we may write this as
\[ N'(\sigma)=2\sigma I_\sigma(u)^{-2}(I_\sigma(u)I_\sigma(u_r)-\langle u_r, u\rangle_\sigma^2)
\]
which is the desired formula.
To see that $N(\sigma)$ is bounded from below as $\sigma$ goes to $0$ we can observe that
\[ N(\sigma)=\frac{1}{2}\sigma\frac{d}{d\sigma}\log(\bar{I}_\sigma(u)),\ \bar{I}_\sigma(u)=\frac{\int_{C\cap\partial B_\sigma(0)}u^2\rho\ d\mu_{k-1}}{\int_{C\cap\partial B_\sigma(0)}\rho\ d\mu_{k-1}},
\]
and the monotonicity expresses the condition that the function $\log \bar{I}_\sigma(u)$ is a convex function
of $t=\log \sigma$. This function is defined for all $t\leq 0$, and by the coarea formula, for any $\sigma_1>0$ there is a $\sigma\in [\sigma_1,2\sigma_1]$ so that $I_\sigma(u)\leq c\sigma^{-1}$. It follows that there is a sequence
$t_i=\log \sigma_i$
tending to $-\infty$ such that $\bar{I}_{\sigma_i}(u)\leq c\sigma_i^{-K}$ for some $K>0$, and thus
$\log \bar{I}_{\sigma_i}(u)\leq -ct_i$. It follows that the slope (that is, $N(\sigma)$) of the convex function
$\log\bar{I}_\sigma(u)$ is bounded from below as $t$ tends to $-\infty$.
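The identity for $N(\sigma)$ used above may be verified as follows. Since $\rho$ is homogeneous of degree $p$ and $C$ is a cone, $\int_{C\cap\partial B_\sigma(0)}\rho\ d\mu_{k-1}=c\sigma^{p+k-1}$, while differentiating $I_\sigma(u)$ and using $Q_\sigma(u)=\int_{C\cap\partial B_\sigma(0)}uu_r\rho\ d\mu_{k-1}$ gives
\[ \frac{d}{d\sigma}\log I_\sigma(u)=\frac{p+k-1}{\sigma}+\frac{2Q_\sigma(u)}{I_\sigma(u)},
\]
so that $\frac{1}{2}\sigma\frac{d}{d\sigma}\log \bar{I}_\sigma(u)=\sigma Q_\sigma(u)/I_\sigma(u)=N(\sigma)$.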
Now if $N(\sigma)=N(0)$ is constant, we must have equality in the Schwarz inequality for
each $\sigma$, and hence we would have $u_r=f(r)u$ for some function $f(r)$. Now
this implies that $Q_\sigma=f(\sigma)I_\sigma$ and hence we have $rf(r)=N(0)$. Therefore it follows
that $f(r)=r^{-1}N(0)$, and $ru_r=N(0)u$, so $u$ is homogeneous of degree $N(0)$ by
Euler's formula.
\end{pf}
We will need to extend the usual monotonicity formula for the volume of minimal
submanifolds to the setting in which the submanifold under consideration minimizes
a weighted volume with a homogeneous weight function within a partially regular cone.
Precisely, let $C$ be a $k+1$ dimensional cone in ${\Bbb R}^n$ with a singular set $\cal S$ of Hausdorff
dimension at most $k-2$. Let $\rho$ be a positive weight function which is homogeneous
of degree $p$; i.e. we have $\rho(\lambda x)=\lambda^p\rho(x)$ for $x\in C$ and $\lambda>0$.
Assume that $\rho$ is smooth and positive on the regular set of $C$, and that $\rho$ is locally
integrable with respect to Hausdorff measure on $C$.
\begin{thm}
\label{thm:mncty} Let $\Sigma$ be a hypersurface in a $k+1$ dimensional cone $C$ which
minimizes the weighted volume $V_\rho$ for a homogeneous weight function $\rho$. We then
have the monotonicity formula
\[ \frac{d}{d\sigma}(\sigma^{-k-p}Vol_\rho(\Sigma\cap B_\sigma(0)))=
\int_{\Sigma\cap\partial B_\sigma(0)}r^{-p-k-2}|x^\perp|^2\rho\ d\mu_{k-1}
\]
where $x^\perp$ denotes the component of the position vector $x$ perpendicular to $\Sigma$.
\end{thm}
\begin{pf} We take a function $\zeta(r)$ which is decreasing, nonnegative, and equal to $0$
for $r>\sigma$, and we consider the vector field $X=\zeta x$ where $x$ denotes the position vector.
The first variation formula for the $\rho$-weighted volume then implies
\[ 0=\int_\Sigma(X(\rho)+div_\Sigma(X)\rho)\ d\mu_k.
\]
Since $\rho$ is homogeneous we have $X(\rho)=p\zeta \rho$, and by direct
calculation $div_\Sigma(X)=k\zeta+r^{-1}\zeta'|x^T|^2$ where $x^T$ denotes the component of $x$ tangential
to $\Sigma$. Thus we have
\[ 0=\int_\Sigma\{(p+k)\zeta+r^{-1}\zeta'|x^T|^2\}\rho\ d\mu_k.
\]
Taking $\zeta$ to approximate the characteristic function of $B_\sigma(0)$ we may write this as
\[ (p+k)Vol_\rho(\Sigma\cap B_\sigma(0))=\sigma\frac{d}{d\sigma}Vol_\rho(\Sigma\cap B_\sigma(0))-
\int_{\Sigma\cap\partial B_\sigma(0)} r^{-1}|x^\perp|^2\rho\ d\mu_{k-1}
\]
where $x^\perp$ is the component of $x$ normal to $\Sigma$ in $C$. Note that
$r^2=|x^T|^2+|x^\perp|^2$ because $C$ is a cone and so $x$ is tangential to $C$. This may
be rewritten as the desired monotonicity formula.
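Explicitly, dividing the last display by $\sigma^{k+p+1}$ gives
\[ \frac{d}{d\sigma}(\sigma^{-k-p}Vol_\rho(\Sigma\cap B_\sigma(0)))=\sigma^{-k-p-1}\left(\sigma\frac{d}{d\sigma}Vol_\rho(\Sigma\cap B_\sigma(0))-(p+k)Vol_\rho(\Sigma\cap B_\sigma(0))\right)=\sigma^{-k-p-1}\int_{\Sigma\cap\partial B_\sigma(0)}r^{-1}|x^\perp|^2\rho\ d\mu_{k-1},
\]
and since $r=\sigma$ on $\partial B_\sigma(0)$ the last integral equals $\int_{\Sigma\cap\partial B_\sigma(0)}r^{-p-k-2}|x^\perp|^2\rho\ d\mu_{k-1}$, which is the stated formula.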
\end{pf}
We now show that there can be no tangent minimal $2$-slicing with $C_2$ having an
isolated singularity at $\{0\}$.
\begin{thm}
\label{thm:2dcone} If $C_2$ is a cone lying in a tangent minimal $2$-slicing such that
$C_2\sim\{0\}\subseteq{\cal R}_2$, then $C_2$ is a plane and ${\cal R}_2=C_2$.
\end{thm}
\begin{pf} From the eigenvalue estimate of Theorem \ref{thm:eval} we have
\[ \int_{C_2}(\frac{3}{4}\sum_{j=3}^n|\nabla_2\log u_j|^2-R_2)\varphi^2\ d\mu_2\leq 4\int_{C_2}|\nabla_2\varphi|^2\ d\mu_2
\]
for test functions $\varphi$ with compact support in $C_2\sim \{0\}$. Since $C_2$ is a two
dimensional cone we have $R_2=0$ away from the origin, and hence we have
\[ \int_{C_2}\sum_{j=3}^n|\nabla_2\log u_j|^2\varphi^2\ d\mu_2\leq c\int_{C_2}|\nabla_2\varphi|^2\ d\mu_2.
\]
Letting $r$ denote the distance to the origin, we take $\epsilon$ and $R$ so that $0<\epsilon<<R$ and
choose $\varphi$ to be a function of $r$ which is equal to $0$ for $r\leq \epsilon^2$, equal to $1$
for $\epsilon\leq r\leq R$, and equal to $0$ for $r\geq R^2$. In the range $\epsilon^2\leq r\leq \epsilon$ we
choose
\[ \varphi(r)=\frac{\log(\epsilon^{-2}r)}{\log(\epsilon^{-1})}
\]
and for $R\leq r\leq R^2$
\[ \varphi(r)=\frac{\log(R^2r^{-1})}{\log R}.
\]
Thus for $\epsilon^2\leq r\leq \epsilon$ we have $|\nabla_2\varphi|^2=(r|\log \epsilon|)^{-2}$ and for
$R\leq r\leq R^2$ we have $|\nabla_2\varphi|^2=(r\log R)^{-2}$. It thus follows
that
\[ \int_{C_2}|\nabla_2\varphi|^2\ d\mu_2\leq c(|\log \epsilon|^{-1}+(\log R)^{-1}).
\]
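This integral bound is the standard logarithmic cutoff computation on a two dimensional cone; here $\ell$ denotes the length of the link of $C_2$ (a quantity not named in the text), and we write $d\mu_2=r\,dr\,ds$ in polar coordinates:

```latex
% Logarithmic cutoff computation on the 2-dimensional cone C_2,
% with s the arclength parameter on the link (total length \ell):
\begin{align*}
\int_{C_2\cap\{\epsilon^2\leq r\leq\epsilon\}}|\nabla_2\varphi|^2\ d\mu_2
 &= \ell\,|\log\epsilon|^{-2}\int_{\epsilon^2}^{\epsilon} r^{-1}\,dr
  = \ell\,|\log\epsilon|^{-1},\\
\int_{C_2\cap\{R\leq r\leq R^2\}}|\nabla_2\varphi|^2\ d\mu_2
 &= \ell\,(\log R)^{-2}\int_{R}^{R^2} r^{-1}\,dr
  = \ell\,(\log R)^{-1},
\end{align*}
% so the constant c in the bound may be taken to be \ell.
```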
Thus we may let $\epsilon$ tend to $0$ and $R$ tend to $\infty$ to conclude that the functions
$u_3,\ldots, u_n$ are constant on $C_2$. This implies that $C_2$ has zero mean curvature and
hence is a plane. If all of the cones $C_3,\ldots, C_{n-1}$ are regular near the origin, then it
follows that $0\in {\cal R}_2$, and we have completed the proof. Otherwise we let $C_m$ for some
$m\geq 3$ denote the largest dimensional cone in the minimal $2$-slicing for which the
origin is a singular point. It follows that $C_m$ is a volume minimizing cone in
${\Bbb R}^{m+1}=C_{m+1}$, and hence $u_m$ must be homogeneous of a negative degree (see Lemma \ref{lem:negdeg} below) contradicting
the fact that $u_m$ is constant along $C_2$. This completes the proof.
\end{pf}
{\it Completion of proof of Theorem \ref{thm:cptness}:} We first prove the compactness of the
$\Sigma_k$ in the sense of (\ref{eqn:conv1}) under the assumption that we have the partial regularity of
bounded minimal $(k+1)$-slicings and the compactness (both (\ref{eqn:conv1}) and (\ref{eqn:conv2}))
for $j\geq k+1$. We need the following lemma.
\begin{lem}
\label{lem:locbd} Assume that both the compactness and partial regularity hold for $(k+1)$-slicings. Given any $x\in {\cal S}_{k+1}$, there are constants $c$ and $r_0$ (depending on $x$ and $\Sigma_{k+1}$)
so that for $r\in (0,r_0]$ we have
\[ \int_{\Sigma_{k+1}\cap B_{2r}(x)} u_{k+1}^2\rho_{k+2}\ d\mu_{k+1}\leq
cr^2\int_{\Sigma_{k+1}\cap B_r(x)}P_{k+1}u_{k+1}^2\rho_{k+2}\ d\mu_{k+1},
\]
and
\[ Vol_{\rho_{k+2}}(\Sigma_{k+1}\cap B_{2r}(x))\leq cVol_{\rho_{k+2}}(\Sigma_{k+1}\cap B_r(x)).
\]
\end{lem}
\begin{pf} Since the left hand side of the inequality is continuous under convergence and the
right hand side is lower semicontinuous (Fatou's theorem), it is enough to establish the inequality
for $r=1$ on a cone $C_{k+1}$. This we can do by a compactness argument since we can
normalize
\[ \int_{C_{k+1}\cap B_1(0)} u_{k+1}^2\rho_{k+2}\ d\mu_{k+1}=1,
\]
and if we had a sequence
of singular cones for which the right hand side tends to zero we would have a limiting cone $C_{k+1}$
on which $P_{k+1}=0$. It follows that $u_{k+2},\ldots, u_{n-1}$ are constant on $C_{k+1}$. Note
that the highest dimensional {\it singular} cone $C_{n_0}$ in the slicing is minimal and hence $u_{n_0}$ is homogeneous of a negative degree (see Lemma \ref{lem:negdeg} below). Therefore if
$n_0>k+1$ we have a contradiction. Therefore
we conclude that $C_{k+1}$ is minimal and $C_{k+2},\ldots, C_{n-1}$ are planes. Thus it follows
that $\tilde{A}_{k+1}=A_{k+1}=0$ and hence $C_{k+1}$ is also a plane. Thus the cones are regular
sufficiently far out in the sequence; a contradiction. The second inequality follows easily by reduction
to cones. This proves the bounds.
\end{pf}
Given a sequence $(\Sigma_j^{(i)},u_j^{(i)})$ of $\Lambda$-bounded minimal $k$-slicings, we
may apply the inductive assumption to obtain a subsequence (with the same notation) for
which the corresponding sequence of $(k+1)$-slicings converges in the sense of (\ref{eqn:conv1})
and (\ref{eqn:conv2}). By standard compactness theorems we may assume that $\Sigma_k^{(i)}$
converges on compact subsets of $\Omega\sim {\cal S}_{k+1}$ to a limiting submanifold $\Sigma_k$
which minimizes $Vol_{\rho_k}$ (and is therefore regular outside a closed set of dimension at most
$k-7$). To establish (\ref{eqn:conv1}) we choose a neighborhood $U$ of ${\cal S}_{k+1}$ such
that
\[ Vol_{\rho_{k+2}}(\Sigma_{k+1}\cap \bar{U})<\epsilon.
\]
We apply Lemma \ref{lem:locbd} and compactness to find a finite collection of points
$x_\alpha\in {\cal S}_{k+1}$ and balls $B_{r_\alpha}(x_\alpha)\subset U$ so that
\[ \int_{\Sigma_{k+1}\cap B_{2r_\alpha}(x_\alpha)} u_{k+1}^2\rho_{k+2}\ d\mu_{k+1}<
cr_\alpha^2\int_{\Sigma_{k+1}\cap B_{r_\alpha}(x_\alpha)}P_{k+1}u_{k+1}^2\rho_{k+2}\ d\mu_{k+1}
\]
and
\[ Vol_{\rho_{k+2}}(\Sigma_{k+1}\cap B_{2r_\alpha}(x_\alpha))< cVol_{\rho_{k+2}}(\Sigma_{k+1}\cap B_{r_\alpha}(x_\alpha)).
\]
Now apply the Besicovitch covering
lemma to extract a finite number of disjoint collections ${\cal B}_\alpha$, $\alpha=1,\ldots, K$ of such balls
whose union covers ${\cal S}_{k+1}$. If $V$ denotes the union of these balls, then $V$ is a neighborhood
of ${\cal S}_{k+1}$, and hence for
$i$ sufficiently large we have ${\cal S}_{k+1}^{(i)}\subset V$. Because of convergence of the left sides
and lower semicontinuity of the right side, we have for $i$ sufficiently large
\[ \int_{\Sigma_{k+1}^{(i)}\cap B_{2r_\alpha}(x_\alpha)} (u_{k+1}^{(i)})^2\rho_{k+2}^{(i)}\ d\mu_{k+1}<
cr_\alpha^2\int_{\Sigma_{k+1}^{(i)}\cap B_{r_\alpha}(x_\alpha)} P_{k+1}^{(i)}(u_{k+1}^{(i)})^2\rho_{k+2}^{(i)}\ d\mu_{k+1}
\]
and
\[ Vol_{\rho_{k+2}^{(i)}}(\Sigma_{k+1}^{(i)}\cap B_{2r_\alpha}(x_\alpha))< cVol_{\rho_{k+2}^{(i)}}(\Sigma_{k+1}^{(i)}\cap B_{r_\alpha}(x_\alpha)).
\]
By the coarea formula, for each such
ball $B_{r_0}(x)$ we may find $s\in [r_0,2r_0]$ ($s$ depending on $i$) so that
\[ Vol_{\rho_{k+1}^{(i)}}(\Sigma_{k+1}^{(i)}\cap\partial B_s(x))\leq 2r_0^{-1}\int_{\Sigma_{k+1}^{(i)}\cap B_{2r_0}(x)}
u_{k+1}^{(i)}\rho_{k+2}^{(i)}\ d\mu_{k+1}.
\]
Using the minimizing property of $\Sigma_k^{(i)}$ and simple inequalities we find
\begin{eqnarray*}
Vol_{\rho_{k+1}^{(i)}}(\Sigma_k^{(i)}\cap B_{r_0}(x))&\leq& \epsilon_1^{-1}\int_{\Sigma_{k+1}^{(i)}\cap B_{2r_0}(x)}\rho_{k+2}^{(i)}\ d\mu_{k+1} \\
&+&\epsilon_1r_0^{-2}\int_{\Sigma_{k+1}^{(i)}\cap B_{2r_0}(x)}(u_{k+1}^{(i)})^2\rho_{k+2}^{(i)}\ d\mu_{k+1}
\end{eqnarray*}
for any $\epsilon_1>0$. Applying the inequalities above and summing over the balls (using disjointness and
a bound on $K$) we find
\[ Vol_{\rho_{k+1}^{(i)}}(\Sigma_k^{(i)}\cap V)\leq c\epsilon_1^{-1}Vol_{\rho_{k+2}^{(i)}}(\Sigma_{k+1}^{(i)}\cap \bar{U})
+c\epsilon_1\int_{\Sigma_{k+1}^{(i)}}P_{k+1}^{(i)}(u_{k+1}^{(i)})^2\rho_{k+2}^{(i)}\ d\mu_{k+1}.
\]
For $i$ sufficiently large this implies
\[ Vol_{\rho_{k+1}^{(i)}}(\Sigma_k^{(i)}\cap V)\leq c\epsilon_1^{-1}\epsilon+c\epsilon_1,
\]
so that we may fix $\epsilon_1$ sufficiently small and then choose $\epsilon$ as small as we wish to
make the right hand side smaller than any preassigned amount. Since we have
\[ \lim_{i\to\infty}Vol_{\rho_{k+1}^{(i)}}(\Sigma_k^{(i)}\sim V)=Vol_{\rho_{k+1}}(\Sigma_k\sim V),
\]
we can conclude that $\lim_{i\to\infty}Vol_{\rho_{k+1}^{(i)}}(\Sigma_k^{(i)})=Vol_{\rho_{k+1}}(\Sigma_k)$,
establishing (\ref{eqn:conv1}).
Now assume that we have established the
partial regularity of all bounded minimal $(k+1)$-slicings and that we have proven the compactness
for the $\Sigma_k$ in the sense of (\ref{eqn:conv1}). We can then use the results we have obtained above together
with dimension reduction to prove partial regularity for $\Sigma_k$. Precisely, we have $dim({\cal S}_k)\leq
k-2$, and if $dim({\cal S}_k)>k-3$, then we can choose a number $d$ with
\[ k-3<d<dim({\cal S}_k),
\]
and go to a point $x\in {\cal S}_k$ of density for the measure ${\cal H}^d_\infty$ (since
${\cal H}^d_\infty({\cal S}_k)>0$). Taking successive tangent cones in the standard way and using the upper semicontinuity
of ${\cal H}^d_\infty({\cal S}_k)$ we would eventually produce a minimal $2$-slicing by cones such that
$C_2\times {\Bbb R}^{k-2}$ has singular set with Hausdorff dimension at most $k-2$ (by partial regularity
of $(k+1)$-slicings) and greater than $k-3$. Therefore the cone $C_2$ must have an isolated singularity at the origin. This in turn contradicts Theorem \ref{thm:2dcone}. Therefore it follows that $dim({\cal S}_k)\leq k-3$ and $\Sigma_k$ is partially regular.
The final step of the proof is to show that the compactness statement holds for the $u_k$ under
the assumption that it holds for $(\Sigma_j,u_j)$ for $j\geq k+1$ and also for $\Sigma_k$ (as
established above). Assume that we have a sequence of minimal $k$-slicings such that the associated
$(k+1)$-slicings and $\Sigma_k^{(i)}$ converge on compact subsets in the sense of (\ref{eqn:conv1}) and (\ref{eqn:conv2}). We choose a compact domain $U$ which is admissible for $(\Sigma_j,u_j)$ and
a nested sequence of domains $U_i$ admissible for $(\Sigma_j^{(i)},u_j^{(i)})$. We work with
a connected component of $\Sigma_k\cap U$ which by abuse of notation we call by the same name $\Sigma_k$.
We may assume that the $u_k^{(i)}$ converge uniformly to $u_k$ on compact subsets of
$\Omega\sim {\cal S}_k$ (where we can write $\Sigma_k^{(i)}$ locally as a normal graph over
$\Sigma_k$ and compare corresponding values of $u_k^{(i)}$ to $u_k$). In particular, if
$W$ is a compact subdomain of $\Omega\cap{\cal R}_k$ we have convergence of weighted
$L^2$ norms of $u_k^{(i)}$ to the corresponding $L^2$ norm of $u_k$ on $W$. If $U$
is any compact subdomain of $\Omega$ and
$\eta>0$, then by Proposition \ref{prop:l2con} applied with ${\cal S}={\cal S}_k$ we can
find an open neighborhood $V$ of ${\cal S}\cap\bar{U}$ so that for $i$ sufficiently large
${\cal S}_k^{(i)}\cap\bar{U}\subset V$, and
\[ \int_{\Sigma_k^{(i)}\cap V}(u_k^{(i)})^2 \rho_{k+1}^{(i)}\ d\mu_k\leq \eta\int_{\Sigma_k^{(i)}\cap\Omega}
[|\nabla_ku_k^{(i)}|^2+(1+P_k^{(i)})(u_k^{(i)})^2]\rho_{k+1}^{(i)}\ d\mu_k.
\]
The same inequality holds for the limit, and by the boundedness of the sequence the integral
on the right is uniformly bounded. Thus by choosing $\eta$ small enough we can make the
right hand side less than any prescribed $\epsilon>0$. On the other hand if we take
$W=U\setminus\bar{V}$ we then have convergence of the weighted $L^2$ norms on $W$,
so we can make the difference as small as we wish on $W$. It follows that the difference of
$L^2$ norms can be made arbitrarily small on $U$.
This completes the proof that the weighted $L^2$ integrals converge.
Completing the proof will require the construction of a proper locally Lipschitz function
$\Psi_k$ on ${\cal R}_k$
such that $u_k|\nabla_k\Psi_k|$ is bounded in $L^2(\Sigma_k)$. We give the construction of such a function in
Proposition \ref{prop:proper} below. It also follows that we may construct a subsequence
so that the $\Psi_k^{(i)}$ are uniformly close to $\Psi_k$ on compact subsets of ${\Bbb R}^N\sim{\cal S}_k$
for $i$ large. We can now prove the second part of the convergence (\ref{eqn:conv2}). Assume
that $U\subset U_1\subset \Omega$ are compact domains. Given $\epsilon>0$ we may choose a neighborhood $V$ of ${\cal S}_k$ so small that $\int_{V\cap\bar{U_1}}u_k^2\rho_{k+1}\ d\mu_k<\epsilon$. Because $\Psi_k$ is proper on ${\cal R}_k$, we may choose $\Lambda$ sufficiently large that
$E_k(\Lambda)\subset V$ where $E_k(\Lambda)$ is the
subset of $\Sigma_k$ on which $\Psi_k>\Lambda$. We now let $\gamma(t)$ be a nondecreasing
Lipschitz function such that $\gamma(t)=0$ for $t<\Lambda$, $\gamma(t)=1$ for $t>2\Lambda$,
and $\gamma'(t)\leq \Lambda^{-1}$. We let $\varphi$ be a spatial cutoff function which is $1$
on $U$, $0$ outside $U_1$, and has bounded gradient. We then have by Proposition
\ref{prop:coercive} the inequality
\[ \int_{\Sigma_k^{(i)}}(|\nabla_k\psi_k^{(i)}|^2+
P_k^{(i)}(\psi_k^{(i)})^2)\rho_{k+1}^{(i)}\ d\mu_k\leq cQ_k(\psi_k^{(i)},\psi_k^{(i)})
\]
where $\psi_k^{(i)}=\varphi(\gamma\circ\Psi_k^{(i)})u_k^{(i)}$. Since the support of $\psi_k^{(i)}$ is
contained in $V$ for $i$ sufficiently large we then have
\[ \int_{\Sigma_k^{(i)}}(|\nabla_k\psi_k^{(i)}|^2+P_k^{(i)}(\psi_k^{(i)})^2)\rho_{k+1}^{(i)}\ d\mu_k\leq
c\int_{\Sigma_k^{(i)}\cap V}(1+\Lambda^{-2}|\nabla_k\Psi_k^{(i)}|^2)(u_k^{(i)})^2\rho_{k+1}^{(i)}\ d\mu_k.
\]
Since we have convergence of the $L^2$ norms of $u_k^{(i)}$ and boundedness of the $L^2$
norms of $u_k^{(i)}|\nabla_k\Psi_k^{(i)}|$, we then conclude that
\[ \int_{\Sigma_k^{(i)}}(|\nabla_k\psi_k^{(i)}|^2+
P_k^{(i)}(\psi_k^{(i)})^2)\rho_{k+1}^{(i)}\ d\mu_k\leq c\epsilon+c\Lambda^{-2}.
\]
If we let $V_1$ be a neighborhood of ${\cal S}_k$ such that
$\Sigma_k\cap V_1\subset E_k(3\Lambda)$, then for
$i$ sufficiently large we will have $\Sigma_k^{(i)}\cap V_1\subset E_k^{(i)}(2\Lambda)$
and hence
\[ \int_{\Sigma_k^{(i)}\cap V_1}(|\nabla_ku_k^{(i)}|^2+
P_k^{(i)}(u_k^{(i)})^2)\rho_{k+1}^{(i)}\ d\mu_k\leq c\epsilon+c\Lambda^{-2}.
\]
Since this can be made arbitrarily small, we have shown (\ref{eqn:conv2}) and completed the
proof of Theorem \ref{thm:cptness}.
\end{pf}
We will need the following lemma concerning minimal cones $C_m\subset {\Bbb R}^{m+1}$.
\begin{lem}
\label{lem:negdeg} Assume that $C_m$ is a volume minimizing cone in ${\Bbb R}^{m+1}$ and that
$u_m$ is a positive minimizer for $Q_m$ which is homogeneous of degree $d$ on $C_m$. There
is a positive constant $c$ depending only on $m$ so that $d\leq -c$.
\end{lem}
\begin{pf} We write $u_m=r^dv(\xi)$ where $\xi\in S^m$. If we let $\Sigma=C_m\cap S^m$,
then $v$ satisfies the eigenvalue equation $\Delta v+\frac{5}{8}|A_m|^2v=-\mu v$ where we must have
$d(d+m-2)=\mu$. This implies that $d=\frac{1}{2}(2-m+\sqrt{(m-2)^2+4\mu})$ or
$d=\frac{1}{2}(2-m-\sqrt{(m-2)^2+4\mu})$. Since $v$ and $|\nabla v|$ are in $L^2(\Sigma)$ we must
have $\mu<0$ and this implies that $d<0$.
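The indicial computation behind these formulas is the standard separation of variables on a cone; the following routine check (in the notation above) verifies the stated roots and the sign conclusion:

```latex
% For u_m=r^d v(\xi) on the cone C_m one has
% \Delta_{C_m}(r^d v)=r^{d-2}\bigl(d(d+m-2)\,v+\Delta_\Sigma v\bigr),
% and |A_m|^2 is homogeneous of degree -2, so restricting the equation for
% u_m to \Sigma=C_m\cap S^m yields the eigenvalue equation for v with
\[ \mu=d(d+m-2), \qquad\text{i.e.}\qquad d^2+(m-2)d-\mu=0, \]
% whose roots are
\[ d=\tfrac{1}{2}\Bigl(2-m\pm\sqrt{(m-2)^2+4\mu}\Bigr). \]
% If \mu<0, the two roots have product -\mu>0 and sum 2-m<0 (for m>2),
% so both roots are negative; in particular d<0.
```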
To prove the negative upper bound on $d$, recall that the set of volume minimizing cones
is a compact set, and we have proven the compactness theorem above for the $L^2$ norms,
so if we had a sequence $(C_m^{(i)},u_m^{(i)})$ such that $d^{(i)}$ tends to $0$ we could extract
a subsequence of the $(\Sigma^{(i)},v^{(i)})$ which converges to $(\Sigma,v)$ where
we could normalize $\int_{\Sigma^{(i)}}(v^{(i)})^2\ d\mu_{m-1}=1$ (hence $\int_\Sigma v^2\ d\mu_{m-1}=1$).
Since we have smooth convergence on compact subsets of the complement of the singular set
of $\Sigma$ we would then have $\Delta v+\frac{5}{8}|A_m|^2v=0$ and therefore we would have $\mu=0$ for
the limiting cone, a contradiction.
\end{pf}
As the final topic of this section we construct the proper functions which were used in the
proof of Theorem \ref{thm:cptness}. This result will also be used in the next section.
\begin{prop}
\label{prop:proper} Suppose we have a $\Lambda$-bounded minimal $k$-slicing in $\Omega$.
There exists a positive function $\Psi_k$ which is locally Lipschitz on ${\cal R}_k$ and such that
for any domain $U$ compactly contained in $\Omega$, the function $\Psi_k$ is proper on
${\cal R}_k\cap\bar{U}$. Moreover, the function $u_k|\nabla_k\Psi_k|$ is bounded in
$L^2(\Sigma_k\cap U)$ for any domain $U$ compactly contained in $\Omega$.
\end{prop}
\begin{pf} We define $\Psi_k=\max\{1,\log u_k,\log u_{k+1},\ldots,\log u_{n-1}\}$ and we show that
it has the properties claimed. First note that $\Psi_k$ is locally Lipschitz on ${\cal R}_k$ since
it is the maximum of a finite number of smooth functions on ${\cal R}_k$. The bound
\[ \int_{\Sigma_k\cap U}(u_k|\nabla_k\Psi_k|)^2\rho_{k+1}\ d\mu_k\leq
\max_{k\leq j\leq n-1} \int_{\Sigma_k\cap U}(u_k|\nabla_k\log u_j|)^2\rho_{k+1}\ d\mu_k
\]
together with Proposition \ref{prop:coercive} implies the $L^2(\Sigma_k)$ bound claimed on
$\Psi_k$. (Note that we may replace $\varphi$ by $\varphi u_k$ in the first inequality of Proposition
\ref{prop:coercive} where $\varphi$ is a cutoff function which is equal to $1$ on $U$.)
It remains to prove that $\Psi_k$ is proper on ${\cal R}_k\cap\bar{U}$. Since $\bar{U}$
is compact it suffices to show that for any $x_0\in {\cal S}_k\cap\bar{U}$ we have
\[ \lim_{x\to x_0}\Psi_k(x)=\infty.
\]
If we let $m\geq k$ be the largest integer such that $\Sigma_m$ is singular at $x_0$, then
there is an open neighborhood $V$ of $x_0$ in which $\Sigma_m$ is a volume minimizing
hypersurface in a smooth Riemannian manifold. We will show that $u_m$ tends to infinity
at $x_0$ by first showing that this is true for any homogeneous approximation of $u_m$
at $x_0$. In order to construct homogeneous approximations we need to have the
compactness theorem for this top dimensional case, but our proof of compactness used
the result we are trying to prove, so we must find another argument for establishing
(\ref{eqn:conv2}) since (\ref{eqn:conv1}) is a standard result for volume minimizing hypersurfaces
in smooth manifolds. Our proof of the first part of (\ref{eqn:conv2}) did not require the
function $\Psi_k$, so we need only deal with the second part. First recall that $dim({\cal S}_m)\leq m-7$, so it follows from a standard result that given any $\epsilon,\delta>0$ and $a\in (0,7)$ we can find a Lipschitz function $\psi$ so that $\psi=1$ in a neighborhood of ${\cal S}_m$, $\psi(x)=0$ for points $x$
with $dist(x,{\cal S}_m)\geq \delta$, and
\[ \int_{\Sigma_m\cap V}|\nabla_m\psi|^a\ d\mu_m<\epsilon^a.
\]
We show that
\[ \int_{\Sigma_m\cap V}|\nabla_m\psi|^2u_m^2\ d\mu_m\leq c\epsilon^2.
\]
If we can establish this inequality,
then we can complete the proof of compactness for $k=m$ in the set $V$ as in the proof of
Theorem \ref{thm:cptness}. To establish
the inequality, we observe that the equation satisfied by $u_m$ is of the form
\[ \Delta_m u_m+\frac{5}{8}|A_m|^2u_m+qu_m=0
\]
where $q$ is a bounded function (since $\Sigma_m$ is volume minimizing in a smooth manifold). On the
other hand the stability implies that
\[ \int_{\Sigma_m} |A_m|^2\varphi^2\ d\mu_m\leq \int_{\Sigma_m} (|\nabla\varphi|^2+c\varphi^2)\ d\mu_m.
\]
We may then replace $\varphi$ by $u_m^{8/5}\varphi$ and use the equation for $u_m$ to
obtain
\[ \int_{\Sigma_m}|\nabla_m(u_m)^{8/5}|^2\varphi^2\ d\mu_m\leq c\int_{\Sigma_m}u_m^{16/5}(|\nabla_m\varphi|^2+\varphi^2)\ d\mu_m.
\]
We may then apply the Sobolev inequality for minimal submanifolds to conclude that $u_m$
satisfies
\[ \int_{\Sigma_m\cap V}u_m^{\frac{16m}{5(m-2)}}\ d\mu_m\leq c.
\]
We then apply the H\"older inequality to obtain
\[ \int_{\Sigma_m\cap V}|\nabla_m\psi|^2u_m^2\ d\mu_m\leq \|\nabla_m\psi\|_{\frac{16m}{3m+10}}^2
\|u_m\|_{\frac{16m}{5(m-2)}}^2.
\]
Setting $a=\frac{16m}{3m+10}<7$ we have from above
\[ \int_{\Sigma_m\cap V}|\nabla_m\psi|^2u_m^2\ d\mu_m\leq c\epsilon^2
\]
as desired.
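The exponents in the H\"older step can be checked directly (a routine verification in the notation above):

```latex
% With p=\frac{8m}{3m+10} and q=\frac{8m}{5(m-2)} the exponents are conjugate:
\[ \frac{1}{p}+\frac{1}{q}=\frac{3m+10}{8m}+\frac{5(m-2)}{8m}=\frac{8m}{8m}=1, \]
% so H\"older applied to |\nabla_m\psi|^2\cdot u_m^2 gives
\[ \int_{\Sigma_m\cap V}|\nabla_m\psi|^2u_m^2\ d\mu_m
 \leq \bigl\||\nabla_m\psi|^2\bigr\|_p\,\bigl\|u_m^2\bigr\|_q
 = \|\nabla_m\psi\|_{2p}^2\,\|u_m\|_{2q}^2, \]
% with 2p=\frac{16m}{3m+10}=a and 2q=\frac{16m}{5(m-2)}.
% Finally a<7 since 16m<21m+70 for all m\geq 1, so a\in(0,7) as required.
```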
Thus we have the compactness theorem for $(\Sigma_m,u_m)$ in $V$ and we can construct
tangent cones to $\Sigma_m$ at $x_0$ and homogeneous approximations to $u_m$ at $x_0$. By
Lemma \ref{lem:negdeg} any such homogeneous approximation $v_m$ has strictly negative degree $d\leq -c$ on its cone $C_m$ of definition. If we let ${\cal R}_m(C)$ denote the regular set
of $C$, then it follows that for any $\mu>1$ we have
\[ \inf_{{\cal R}_m(C)\cap B_{\alpha\sigma}(0)}v_m\geq \mu\inf_{{\cal R}_m(C)\cap B_\sigma(0)}v_m
\]
for a fixed constant $\alpha\in (0,1)$ depending on $\mu$, but independent of which cone and which homogeneous approximation we choose. Note that $\Delta_mu_m\leq cu_m$ and
$\Delta_mv_m\leq 0$, so by the mean value inequality on volume minimizing hypersurfaces
(see \cite{bg}) we have
\[ u_m(x)\geq cr^{-m}\int_{\Sigma_m\cap B_r(x)}u_m\ d\mu_m,\ v_m(x)\geq
cr^{-m}\int_{C_m\cap B_r(x)}v_m\ d\mu_m
\]
for any $r$ so that $B_r(x_0)$ is compactly contained in $V$. It follows that the essential infima of
both $u_m$ and $v_m$ are positive on any compact subset. We now show that there exists
$\alpha\in (0,1)$ such that
\[ \inf_{{\cal R}_m\cap B_{\alpha\sigma}(x_0)}u_m\geq 2\inf_{{\cal R}_m\cap B_\sigma(x_0)}u_m
\]
for $\sigma$ sufficiently small. If we establish this, we have finished the proof that
$u_m$ tends to infinity at $x_0$ and hence we will have the desired properness
conclusion for $\Psi_k$. To establish this inequality we observe that if $(\Sigma_m^{(i)},u_m^{(i)})$
is a sequence converging to $(\Sigma_m,u_m)$ in the sense of (\ref{eqn:conv1}) and (\ref{eqn:conv2})
and $K$ is a compact set such that ${\cal R}_m\cap K\neq \emptyset$, then we have
\[ \inf_{{\cal R}_m\cap K}u_m\leq \liminf_{i\to\infty}\inf_{{\cal R}_m^{(i)}\cap K}u_m^{(i)}\leq
\limsup_{i\to\infty}\inf_{{\cal R}_m^{(i)}\cap K}u_m^{(i)}\leq c\inf_{{\cal R}_m\cap K}u_m
\]
for a fixed constant $c$. The first and second inequalities are obvious, and to get the third we
observe that for a small radius $r$ and any $x\in {\cal R}_m\cap K$ we have from above
\[ u_m(x)\geq cr^{-m}\int_{\Sigma_m\cap B_r(x)}u_m\ d\mu_m,
\]
and hence for $i$ sufficiently large
\[ u_m(x)\geq cr^{-m}\int_{\Sigma_m^{(i)}\cap B_r(x)}u_m^{(i)}\ d\mu_m\geq \epsilon_0\inf_{\Sigma_m^{(i)}\cap B_r(x)} u_m^{(i)}
\]
for a positive constant $\epsilon_0$. This establishes the third inequality. The proof can now be completed
by using rescalings at $x_0$ which converge to $(C_m,v_m)$ for some cone and homogeneous
function together with the corresponding result for the homogeneous case.
\end{pf}
\bigskip
\section{\bf Existence of minimal $k$-slicings}
The main purpose of this section is to prove Theorem \ref{thm:exst}. We begin with the construction
of the eigenfunction $u_k$ assuming that $\Sigma_k$ has already been constructed and is partially
regular in the sense that $dim({\cal S}_k)\leq k-3$. We define the Hilbert spaces ${\cal H}_k$
and ${\cal H}_{k,0}$ as in the last section, namely, ${\cal H}_k$ (respectively ${\cal H}_{k,0}$) is the
completion in $\|\cdot\|_{0,1}$ of the Lipschitz functions with compact support in ${\cal R}_k\cap\bar{\Omega}$ (respectively ${\cal R}_k\cap\Omega$).
In order to handle boundary effects we also assume that there is a larger domain $\Omega_1$ which contains $\bar{\Omega}$ as a compact subset and that the $k$-slicing is defined and boundaryless in $\Omega_1$. Note that this is automatic if $\partial\Sigma_j=\emptyset$. Thus ${\cal H}_{k,0}$ consists of those functions in ${\cal H}_k$ with $0$ boundary data on $\Sigma_k\cap\Omega$. The quadratic form $Q_k$ is nonnegative
definite on the Lipschitz functions
with compact support in ${\cal R}_k\cap\Omega$, and so the standard Schwarz inequality holds
for any pair of such functions $\varphi,\psi$:
\begin{equation}
\label{eqn:schwartz} Q_k(\varphi,\psi)\leq \sqrt{Q_k(\varphi,\varphi)}\sqrt{Q_k(\psi,\psi)}.
\end{equation}
We now have the following result.
\begin{thm}
\label{thm:qcomplete} The function $Q_k(\varphi,\psi)$ is continuous with respect to the norm
$\|\cdot\|_{0,1}$ in both variables and therefore extends as a continuous nonnegative definite
bilinear form on ${\cal H}_{k,0}$. The Schwarz inequality (\ref{eqn:schwartz}) holds for
$\varphi,\psi\in {\cal H}_{k,0}$. The function $Q_k(\varphi,\varphi)$ is strongly continuous and
weakly lower semicontinuous on ${\cal H}_{k,0}$.
\end{thm}
\begin{pf} From Proposition \ref{prop:coercive} we have for $\varphi_1,\varphi_2$ Lipschitz functions with compact support in ${\cal R}_k\cap\Omega$
\[ Q_k(\varphi_1-\varphi_2,\varphi_1-\varphi_2)\leq c\|\varphi_1-\varphi_2\|_{1,k}^2,
\]
so it follows from (\ref{eqn:schwartz}) that
\[ |Q_k(\varphi_1,\psi)-Q_k(\varphi_2,\psi)|\leq \sqrt{Q_k(\varphi_1-\varphi_2,\varphi_1-\varphi_2)}\sqrt{Q_k(\psi,\psi)}.
\]
Combining these we see that $Q_k$ is continuous in the first slot, and since it is symmetric, in
both slots. Therefore $Q_k$ extends as a continuous nonnegative definite bilinear form on
${\cal H}_{k,0}$ and the Schwarz inequality holds on ${\cal H}_{k,0}$ by continuity.
To complete the proof we must prove that $Q_k(\varphi,\varphi)$ is weakly lower semicontinuous
on ${\cal H}_{k,0}$. Note that the square norm $\|\varphi\|_{0,k}^2+Q_k(\varphi,\varphi)$ is equivalent to $\|\varphi\|_{1,k}^2$ by Proposition \ref{prop:coercive}. Therefore these have the same bounded
linear functionals and hence determine the same weak topology on ${\cal H}_{k,0}$. Assume we have a sequence $\varphi_i\in {\cal H}_{k,0}$ which converges weakly to $\varphi\in{\cal H}_{k,0}$. We then have for any $\psi\in{\cal H}_{k,0}$
\[ Q_k(\varphi,\psi)=\lim_{i\to\infty}Q_k(\varphi_i,\psi).
\]
This implies that for $i$ sufficiently large
\[ Q_k(\varphi,\varphi)=Q_k(\varphi-\varphi_i,\varphi)+Q_k(\varphi_i,\varphi)\leq \epsilon+
\sqrt{Q_k(\varphi_i,\varphi_i)}\sqrt{Q_k(\varphi,\varphi)}
\]
for any chosen $\epsilon>0$. It follows that
\[ Q_k(\varphi,\varphi)\leq \sqrt{Q_k(\varphi,\varphi)}\liminf_{i\to\infty}\sqrt{Q_k(\varphi_i,\varphi_i)}
\]
which implies the desired weak lower semicontinuity.
\end{pf}
In order to construct a lowest eigenfunction $u_k$ we will need the following Rellich-type
compactness theorem.
\begin{thm}
\label{thm:rellich} The inclusion of ${\cal H}_{k,0}$ into $L^2(\Sigma_k)$ is compact in the
sense that any bounded sequence in ${\cal H}_{k,0}$ has a convergent subsequence
in $L^2(\Sigma_k)$.
\end{thm}
\begin{pf} This statement follows from Proposition \ref{prop:l2con} and the standard
Rellich theorem.
Assume that we have a bounded sequence $\varphi_i\in {\cal H}_{k,0}$; that is,
$\|\varphi_i\|_{1,k}^2\leq c$.
We may extend the $\varphi_i$ to $\Omega_1$ by taking $\varphi_i=0$ in $\Omega_1\sim\Omega$, and by the
standard Rellich compactness theorem we may assume by extracting a subsequence
that the $\varphi_i$ converge in $L^2$ norm on compact subsets of $\bar{\Omega}\sim {\cal S}_k$ and
weakly in ${\cal H}_{k,0}$ to a limit $\varphi\in {\cal H}_{k,0}$. We show that $\varphi_i$ converges
to $\varphi$ in $L^2(\Sigma_k)$.
Given any $\epsilon_1>0$, we can choose $\epsilon>0,\ \delta>0$ in Proposition \ref{prop:l2con} so that for each
$i$ we have
\[ (\int_{\Sigma_k\cap V}\varphi_i^2\rho_{k+1}\ d\mu_k)^{1/2}\leq \epsilon_1/3
\]
where $V$ is an open neighborhood of ${\cal S}_k\cap\bar{\Omega}$. The Fatou theorem then implies
\[ (\int_{\Sigma_k\cap V}\varphi^2\rho_{k+1}\ d\mu_k)^{1/2}\leq \epsilon_1/3.
\]
Since $K=(\Sigma_k\sim V)\cap\bar{\Omega}$
is a compact subset of $\bar{\Omega}\sim {\cal S}_k$, we have for $i$ sufficiently large
\[ (\int_K(\varphi_i-\varphi)^2\rho_{k+1}\ d\mu_k)^{1/2}\leq \epsilon_1/3.
\]
Combining these bounds we find
\[ \|\varphi_i-\varphi\|_0\leq (\int_K(\varphi_i-\varphi)^2\rho_{k+1}\ d\mu_k)^{1/2}+
(\int_{\Sigma_k\cap V}(\varphi_i-\varphi)^2\rho_{k+1}\ d\mu_k)^{1/2}\leq \epsilon_1
\]
for $i$ sufficiently large. This completes the proof.
\end{pf}
We are now ready to prove the existence, positivity, and uniqueness of $u_k$ on $\Sigma_k\cap\Omega$.
\begin{thm}
\label{thm:spectrum} The quadratic form $Q_k$ on ${\cal H}_{k,0}$ has discrete spectrum with
respect to the $L^2(\Sigma_k)$
inner product and may be diagonalized in an orthonormal basis for $L^2(\Sigma_k)$. The eigenfunctions
are smooth on ${\cal R}_k\cap\Omega$, and if we choose a first eigenfunction $u_k$, then $u_k$
is nonzero on ${\cal R}_k\cap\Omega$ and is therefore either strictly positive or strictly negative
since ${\cal R}_k\cap\Omega$ is connected. Furthermore any first eigenfunction is a multiple of $u_k$, which
we may take to be positive.
\end{thm}
\begin{pf} This follows from the standard minmax variational procedure for defining eigenvalues
and constructing eigenfunctions. For example, to construct the lowest eigenvalue and eigenfunction
we let
\[ \lambda_k=\inf\{Q_k(\varphi,\varphi):\ \varphi\in {\cal H}_{k,0},\ \|\varphi\|_{0,k}=1\}.
\]
By Theorem \ref{thm:rellich} and Theorem \ref{thm:qcomplete} we may achieve this infimum
with a function $u_k\in{\cal H}_{k,0}$ with $\|u_k\|_{0,k}=1$. The Euler-Lagrange equation
for $u_k$ is then the eigenfunction equation with eigenvalue $\lambda_k$. The higher eigenvalues
and eigenfunctions can be constructed by imposing orthogonality constraints with respect to
the $L^2(\Sigma_k)$ inner product. We omit the standard details. The smoothness on ${\cal R}_k\cap\Omega$
follows from elliptic regularity theory.
The fact that a lowest eigenfunction $u$ is nonzero follows from
the fact that if $u\in {\cal H}_{k,0}$ then $|u|\in {\cal H}_{k,0}$ and $Q_k(u,u)=Q_k(|u|,|u|)$, a property
which can be easily checked on the dense subspace of Lipschitz functions with compact support
in ${\cal R}_k\cap\Omega$ and which then follows by continuity. The multiplicity one property of the lowest
eigenspace follows from this property in the usual way. We omit the details.
\end{pf}
We now come to the existence results. We first discuss Theorem \ref{thm:exst}
and then generalize the existence proof to a more precise form. Suppose $X$ is
a closed $k$-dimensional oriented manifold with $k<n$. We assume that $\Sigma_n$
is a closed oriented $n$-manifold and that there is a smooth map $F:\Sigma_n\to X\times T^{n-k}$
of degree $s\neq 0$. We let $\Omega$ denote a (unit volume) volume form on $X$ and let $\Theta=F^*\Omega$,
so that $\Theta$ is a closed $k$-form on $\Sigma_n$. We let $t^p$ for $p=k+1,\ldots, n$
denote the coordinates on the circle factors, and we assume they are periodic with period $1$.
For $p=k+1,\ldots, n$ we let $\omega^p$ be the closed $1$-form $\omega^p=F^*(dt^p)$. The
assumption on the degree of $F$ implies that $\int_{\Sigma_n}\Theta\wedge\omega^{k+1}\wedge\ldots\wedge
\omega^n=s$.
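The degree identity can be checked numerically in the lowest-dimensional case. This is a hedged toy computation with $n=1$, $k=0$: for a degree-$s$ map $F$ of the circle to itself, the pullback $F^*(dt)$ integrates to $s$ over the circle.

```python
import numpy as np

# Toy check of the degree identity: a degree-3 circle map F, written via a
# lift so that F^*(dt) = d(lift), integrates to exactly the degree s = 3.
s = 3
t = np.linspace(0.0, 1.0, 20001)
lift = s * t + 0.1 * np.sin(2 * np.pi * t)   # lift of a degree-3 map S^1 -> S^1
integral = np.diff(lift).sum()               # integral over S^1 of F^*(dt)
assert abs(integral - s) < 1e-9
```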
We will need the following elementary lemma.
\begin{lem} \label{lem:exact}
Suppose $N^m$ is a closed oriented Riemannian manifold and let $\Omega$ be its volume form.
Given any open set $U$ of $N$ which is not dense in $N$, the form $\Omega$ is exact on $U$. Moreover,
given an open set $V$ compactly contained in $U$, we can find a closed $m$-form
$\Omega_1$ which agrees with $\Omega$ on $N\setminus U$ and such that $\Omega_1=0$ in $V$.
\end{lem}
\begin{pf}
Let $f$ be a smooth function which is equal to $1$ in $U$ and such that $\int_Nf\,\Omega=0$.
Let $u$ be a solution of $\Delta u=f$ and let $\theta$ be the $(m-1)$-form $\theta=*du$. We then
have $d\theta=d*du=(\Delta u)\Omega$, so we have $d\theta=\Omega$ on $U$.
To prove the last statement, we let $\zeta$ be a smooth cutoff function which is equal to $1$
in $V$ and has compact support in $U$. We then define $\Omega_1=\Omega-d(\zeta*du)$. We then
have $\Omega_1=0$ in $V$ and $\Omega_1$ differs from $\Omega$ by an exact form.
\end{pf}
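The construction in the proof can be reproduced numerically in one dimension. This is a hedged sketch on $N=S^1$ with $\Omega=dx$ (using a piecewise-constant $f$ rather than a smooth one, for simplicity): choose $f$ equal to $1$ on $U$ with zero mean, solve $\Delta u=f$ spectrally, and check that $d\theta=u''\,dx$ agrees with $\Omega$ on $U$.

```python
import numpy as np

# 1-D model of Lemma (exactness on a non-dense open set): on S^1 with
# Omega = dx, solve u'' = f with f = 1 on U and mean zero, then verify
# that u'' = 1 on U, i.e. d(*du) = Omega there.
n = 1024
x = np.arange(n) / n
U = (x > 0.2) & (x < 0.4)                            # open, non-dense set U
f = np.where(U, 1.0, -U.mean() / (1.0 - U.mean()))   # equals 1 on U, zero mean
k = 2j * np.pi * np.fft.fftfreq(n, d=1.0 / n)
fh = np.fft.fft(f)
uh = np.zeros(n, dtype=complex)
uh[1:] = fh[1:] / (k[1:] ** 2)                       # solvable since mean f = 0
u = np.fft.ifft(uh).real
upp = np.fft.ifft(k ** 2 * np.fft.fft(u)).real       # u''
assert np.allclose(upp[U], 1.0, atol=1e-8)           # d(theta) = Omega on U
```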
We now restate the existence theorem.
\begin{thm}
For a manifold $M=\Sigma_n$ as described above, there is a $\Lambda$-bounded,
partially regular, minimal $k$-slicing. Moreover, if $k\leq j\leq n-1$ and $\Sigma_j$ is regular, then
$\int_{\Sigma_j}\Theta\wedge\omega^{k+1}\wedge\ldots\wedge\omega^j=s$.
\end{thm}
\begin{pf} We begin with the $1$-form $\omega^n$ and we integrate to get a map $u_n:\Sigma_n\to S^1$
so that $\omega^n=du_n$. Let $t$ be a regular value of $u_n$ and consider the hypersurface $S_n=u_n^{-1}(t)$. Because the map $F$ has degree $s$ and we have normalized our forms on
$X\times T^{n-k}$ to have integral $1$, we see that $\int_{S_n}\Theta\wedge\omega^{k+1}\wedge\ldots
\wedge\omega^{n-1}=s$. Let $\Sigma_{n-1}$ be a least volume cycle in $\Sigma_n$ with the property that
$\int_{\Sigma_{n-1}}\Theta\wedge\omega^{k+1}\wedge\ldots\wedge\omega^{n-1}=s$. The existence follows from
standard results of geometric measure theory.
Now suppose for $j\geq k$ we have constructed a partially regular minimal $(j+1)$-slicing
with the property that there is a form $\Theta_{j+1}$ of compact support which is
cohomologous to $\Theta\wedge\omega^{k+1}\wedge\ldots\wedge\omega^{j+1}$ such that
$\int_{\Sigma_{j+1}}\Theta_{j+1}=s$. Since the slicing is partially regular, the
Hausdorff dimension of ${\cal S}_{j+1}$ is at most $j-2$, so it follows that the image
$F_j({\cal S}_{j+1})$ under the projection map $F_j:\Sigma_n\to X\times T^{j-k}$ is a compact
set of Hausdorff dimension at most $j-2$. It follows from Lemma \ref{lem:exact} that the form
$\Omega\wedge dt^{k+1}\wedge\ldots\wedge dt^j$ is exact in a neighborhood $U$ of $F_j({\cal S}_{j+1})$,
and, given a neighborhood $V$ of $F_j({\cal S}_{j+1})$ which is compactly contained in $U$, we can find a form
$\Omega_j$ which is cohomologous to $\Omega\wedge dt^{k+1}\wedge\ldots\wedge dt^j$ and vanishes in $V$. Pulling back, we see that $\Theta_j=F^*\Omega_j$ vanishes in a neighborhood of ${\cal S}_{j+1}$ and is cohomologous to $\Theta\wedge\omega^{k+1}\wedge\ldots\wedge\omega^j$. We let $u_{j+1}$ be the map gotten
by integrating $\omega^{j+1}$ and consider its restriction to $\Sigma_{j+1}$. Since $u_{j+1}$ is in $L^2$
with respect to the weight $\rho_{j+2}$, we see that $\rho_{j+1}=u_{j+1}\rho_{j+2}$ is integrable on
$\Sigma_{j+1}$. It then follows from the coarea formula that we can find a regular value $t$
of $u_{j+1}$ in ${\cal R}_{j+1}$ so that the hypersurface $S_j\subset \Sigma_{j+1}$ given by
$S_j=u_{j+1}^{-1}(t)$ has finite $\rho_{j+1}$-weighted volume and satisfies $\int_{S_j}\Theta_j=s$.
We can then solve the minimization problem for the $\rho_{j+1}$-weighted volume among
integer multiplicity rectifiable currents $T$ with support in $\Sigma_{j+1}$, with no boundary in
${\cal R}_{j+1}$, and with $T(\Theta_j)=s$. A minimizer for this problem gives us $\Sigma_j$
and completes the inductive step of the existence proof.
\end{pf}
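The coarea formula invoked above to select a good regular value $t$ can be checked numerically in a model case. This is a hedged sketch: for $u(x)=|x|$ on the planar annulus $1<|x|<2$ we have $|\nabla u|=1$, the level set $\{u=t\}$ is a circle of length $2\pi t$, and both sides of $\int_\Omega|\nabla u|\,dA=\int_1^2 {\cal H}^1(\{u=t\})\,dt$ equal $3\pi$.

```python
import numpy as np

# Coarea check on the annulus 1 < |x| < 2:
#   LHS = integral of |grad u| = area of annulus (since |grad u| = 1),
#   RHS = integral over t of the level-set length 2*pi*t.
n = 2000
xs = np.linspace(-2.0, 2.0, n)
X, Y = np.meshgrid(xs, xs)
R = np.hypot(X, Y)
cell = (xs[1] - xs[0]) ** 2
lhs = np.count_nonzero((R > 1) & (R < 2)) * cell   # grid estimate of the area
ts = np.linspace(1.0, 2.0, 1001)
lengths = 2 * np.pi * ts
rhs = np.sum(0.5 * (lengths[1:] + lengths[:-1]) * np.diff(ts))  # trapezoid rule
assert abs(lhs - 3 * np.pi) < 0.05
assert abs(rhs - 3 * np.pi) < 1e-9
```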
\begin{rem} The existence proof above does not specify the homology class of the minimizers,
even if the minimizers are smooth, since we are minimizing among cycles for which the
integral of $\Theta_j$ is fixed. In general there may be homology classes for which the integral
of $\Theta_j$ vanishes. We have chosen the class to do the minimization in order to avoid
a precise discussion of the homology of the singular spaces in which we are working. In
the following we give a more precise existence theorem which specifies the homology
classes and allows them to be general integral homology classes, possibly torsion classes.
\end{rem}
We now formulate and prove a more general existence theorem for minimal $k$-slicings.
In the theorem we let $[\Sigma_n]$ denote the fundamental homology class in $H_n(\Sigma_n,\mathbb Z)$
and, for a cohomology class $\alpha\in H^p(\Sigma_n,\mathbb Z)$, we let $\alpha\cap [\Sigma_n]$ denote
its Poincar\'e dual in $H_{n-p}(\Sigma_n,\mathbb Z)$.
\begin{thm} \label{thm:exst2} Let $\Sigma_n$ be a smooth oriented manifold of dimension $n$
and let $k$ be an integer with $1\leq k\leq n-1$. Let $\alpha^1,\ldots,\alpha^{n-k}$ be cohomology classes
in $H^1(\Sigma_n, \mathbb Z)$, and suppose that $\alpha^{n-k}\cap\alpha^{n-k-1}\cap\ldots\cap\alpha^1\cap[\Sigma_n]
\neq 0$ in $H_k(\Sigma_n,\mathbb Z)$. There exists a partially regular minimal $k$-slicing
with $\Sigma_j$ representing the homology class $\alpha^{n-j}\cap\ldots\cap\alpha^1\cap [\Sigma_n]$.
\end{thm}
\begin{pf}
Assume that we are given a partially regular $\Lambda$-bounded minimal $(k+1)$-slicing
which represents the classes $\alpha^1,\ldots,\alpha^{n-k-1}$. We thus have the weight
function $\rho_{k+1}$ defined on $\Sigma_{k+1}$ which we use to produce $\Sigma_k$. From the
partial regularity, the singular set ${\cal S}_{k+1}$ of $\Sigma_{k+1}$ has Hausdorff dimension
at most $k-2$.
We consider the class of integer multiplicity rectifiable currents which are relative cycles in
$H_k(\Sigma_n,{\cal S}_{k+1},\mathbb Z)$; that is, for any $(k-1)$-form $\theta$ of compact support in
$\Sigma_{k+1}\setminus{\cal S}_{k+1}$ we have $T(d\theta)=0$. Because the set ${\cal S}_{k+1}$
has zero $(k-1)$-dimensional Hausdorff measure we have $H_k(\Sigma_n,{\mathbb Z})=
H_k(\Sigma_n,{\cal S}_{k+1},\mathbb Z)$. This follows because a current $T$ which is a relative
cycle in $\Sigma_n\setminus{\cal S}_{k+1}$ is also a cycle in $\Sigma_n$, since $\partial T$
is unchanged by adding a set of zero $(k-1)$-dimensional measure and hence vanishes.
We use the $\rho_{k+1}$-weighted volume to set up a minimization problem. We consider the
class of relative cycles $T$ with support contained in $\Sigma_{k+1}$ which have finite weighted mass;
that is, $T=(S_k,\Theta,\xi)$ where $S_k$ is
a countably $k$-rectifiable set, $\Theta$ a $\mu_k$-measurable integer valued function on $S_k$,
and $\xi$ a $\mu_k$-measurable map from $S_k$ to $\wedge^k\mathbb R^N$ such that $\xi(x)$ is a
unit simple vector for $\mu_k$-a.e.\ $x\in S_k$. Such a $k$-current $T$ is $\rho_{k+1}$-finite if
\[ Vol_{\rho_{k+1}}(T)\equiv\int_{S_k}\rho_{k+1}|\Theta|\ d\mu_k<\infty.
\]
Since we have already constructed $\Sigma_{k+1}$ so that it is $\Lambda$-bounded we have
\[ \int_{\Sigma_{k+1}}\rho_{k+1}\ d\mu_{k+1}\leq \Lambda.
\]
Now we can find a smooth closed hypersurface $H_k$ which is Poincar\'e dual to $\alpha^{n-k}$, and
we may perturb
it and use the coarea formula in a standard way to arrange that $\bar{\Sigma}_k\equiv\Sigma_{k+1}\cap H_k$
is a smooth embedded submanifold away from ${\cal S}_{k+1}$ and
\[ \int_{\bar{\Sigma}_k}\rho_{k+1}\ d\mu_k\leq c.
\]
In particular the associated current $\bar{T}_k\equiv(\bar{\Sigma}_k,1,\bar{\xi})$ (where $\bar{\xi}$ is the oriented unit tangent plane of $\bar{\Sigma}_k$) is $\rho_{k+1}$-finite and is a competitor in our
variational problem.
The standard theory of integral currents now allows us to construct a minimizer for our variational
problem, which gives us the next slice $\Sigma_k$; the minimizer could be disconnected and carry integer
multiplicity. Thus $\Sigma_k$ represents the homology class
$\alpha^{n-k}\cap\ldots\cap\alpha^1\cap [\Sigma_n]$. This completes the proof of Theorem \ref{thm:exst2}.
\end{pf}
\bigskip
\section{\bf Application to scalar curvature problems}
In this section we prove two theorems for manifolds with positive scalar curvature. The first of these
is for compact manifolds and the second is the Positive Mass Theorem for asymptotically flat
manifolds. The first theorem, which we will need in the proof of the Positive Mass Theorem, is the following.
\begin{thm} \label{thm:psc0} Let $M_1$ be any closed oriented $n$-manifold. The manifold
$M=M_1\#T^n$ does not have a metric of positive scalar curvature.
\end{thm}
\begin{pf} Such a manifold $M$ admits a map $F:M\to T^n$ of degree $1$, and so by
Theorem \ref{thm:exst} there exists a closed minimal $1$-slicing of $M$, in contradiction to
Theorem \ref{thm:12slicing}.
\end{pf}
We also prove the following more general theorem.
\begin{thm}
\label{thm:psc1} Assume that $M$ is a compact oriented $n$-manifold with a metric of positive
scalar curvature. If $\alpha_1,\ldots,\alpha_{n-2}$ are classes in $H^1(M,\mathbb Z)$ with the property that the
class $\sigma_2$ given by
$\sigma_2=\alpha_{n-2}\cap\alpha_{n-3}\cap\ldots\cap\alpha_1\cap[M]\in H_2(M,\mathbb Z)$ is nonzero, then the class
$\sigma_2$ can be represented by a sum of smooth two-spheres.
If $\alpha_{n-1}$ is any class in $H^1(M,\mathbb Z)$, then we must have $\alpha_{n-1}\cap\sigma_2=0$. In particular,
if $M$ has classes $\alpha_1,\ldots,\alpha_{n-1}$ with $\alpha_{n-1}\cap\ldots\cap\alpha_1\cap[M]\neq 0$,
then $M$ cannot carry a metric of positive scalar curvature.
\end{thm}
\begin{pf} By the existence and regularity results of Sections 3 and 4, there is a minimal
$2$-slicing so that $\Sigma_2\in\sigma_2$ is regular and satisfies the eigenvalue bound of Theorem \ref{thm:eval}. Choosing $\varphi=1$ on any given component of $\Sigma_2$ and applying the Gauss--Bonnet
theorem, we see that each component must be topologically $S^2$.
In particular it follows that for any other class $\alpha_{n-1}\in H^1(M,\mathbb Z)$, the cap product $\alpha_{n-1}\cap\sigma_2$
is a class in $H_1(\Sigma_2,\mathbb Z)$, and therefore is zero.
\end{pf}
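The topological step in the proof can be recorded schematically; this is only a sketch, with the precise quadratic form $Q_2$ and eigenvalue bound being those of Theorem \ref{thm:eval}, which we do not restate here:

```latex
0 < \lambda(\Sigma') \le Q_2(1,1)
\quad\Longrightarrow\quad
\int_{\Sigma'} K \, dA > 0
\quad\Longrightarrow\quad
2\pi\chi(\Sigma') > 0
\quad\Longrightarrow\quad
\Sigma' \cong S^2
```

for each connected component $\Sigma'$ of $\Sigma_2$, where the middle implication is Gauss--Bonnet and the last uses the fact that a closed oriented surface of positive Euler characteristic is a two-sphere.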
We now prove a Riemannian version of the positive mass theorem. Assume that $M$ is a
complete manifold with the property that there is a compact subset $K\subset M$ such that
$M\setminus K$ is a union of a finite number of connected components, each of which is an
asymptotically flat end. This means that each of the components is diffeomorphic to the
exterior of a compact set in $\mathbb R^n$ and admits asymptotically flat coordinates $x^1,\ldots,x^n$
in which the metric $g_{ij}$ satisfies
\begin{equation}
\label{eqn:af}
g_{ij}=\delta_{ij}+O(|x|^{-p}),\ |x||\partial g_{ij}|+|x|^2|\partial^2g_{ij}|=O(|x|^{-p}),\ |R|=O(|x|^{-q})
\end{equation}
where $p>(n-2)/2$ and $q>n$. Under these assumptions
the ADM mass is well defined by the formula (see \cite{sc} for the $n$-dimensional case)
\[ m=\frac{1}{4(n-1)\omega_{n-1}}\lim_{\sigma\to\infty}\int_{S_\sigma}\sum_{i,j}(g_{ij,i}-g_{ii,j})\nu_j\ d\xi(\sigma)
\]
where $S_\sigma$ is the euclidean sphere in the $x$ coordinates, $\omega_{n-1}=Vol(S^{n-1}(1))$, and
the unit normal and volume integral are with respect to the euclidean metric. We may now state
the Positive Mass Theorem.
\begin{thm}
\label{thm:psc2} Assume that $M$ is an asymptotically flat manifold with $R\geq 0$. For each end,
the ADM mass is nonnegative. Furthermore, if any of the masses is zero, then $M$ is isometric to $\mathbb R^n$.
\end{thm}
\begin{pf} The theorem can be reduced to the case when there is a single end by capping off
the other ends while keeping the scalar curvature nonnegative. We will show only that $m\geq 0$;
the equality statement can be derived from this (see \cite{sy2}). We will reduce the proof
to the compact case using results of \cite{sy3} and an observation of J. Lohkamp.
\begin{prop}
If the mass of $M$ is negative, there is a metric of nonnegative scalar curvature on $M$ which
is euclidean outside a compact set. This produces a metric of positive scalar curvature on a manifold
$\hat{M}$ which is gotten by replacing a ball in $T^n$ by the interior of a large ball in $M$.
\end{prop}
\begin{pf} Results of \cite{sy3} and \cite{sc} imply that if $m<0$ we can construct a new metric
on $M$ with nonnegative scalar curvature, negative mass, and which is conformally flat and
scalar flat near infinity. In particular, we have $g=u^{4/(n-2)}\delta$ near infinity, where $u$ is a
euclidean harmonic function which is asymptotic to $1$. Thus $u$ has the expansion
\[ u(x)=1+\frac{m}{|x|^{n-2}}+O(|x|^{1-n})
\]
where $m$ is the mass. Now we use an observation of Lohkamp \cite{lohkamp}. Since $m<0$, we can choose
$0<\epsilon_2<\epsilon_1$ and $\sigma$ sufficiently large so that we have $u(x)<1-\epsilon_1$ for $|x|=\sigma$ and
$u(x)>1-\epsilon_2$ for $|x|\geq 2\sigma$. If we define $v(x)=u(x)$ for $|x|\leq \sigma$ and $v(x)=\min\{1-\epsilon_2,u(x)\}$
for $|x|>\sigma$, then we see that $v(x)$ is weakly superharmonic for $|x|\geq \sigma$, and so may be
approximated by
a smooth superharmonic function with $v(x)=u(x)$ for $|x|\leq \sigma$ and $v(x)=1-\epsilon_2$ for $|x|$
sufficiently large. The metric
which agrees with the original inside $S_\sigma$ and is given by $v^{4/(n-2)}\delta$ outside then has
nonnegative scalar curvature and is euclidean near infinity.
By extending this metric periodically we then produce a metric on $\hat{M}$ with nonnegative scalar
curvature which is not Ricci flat. Therefore the metric can be perturbed to have positive
scalar curvature.
\end{pf}
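The elementary fact behind the truncation $v=\min\{1-\epsilon_2,u\}$ is that the minimum of two superharmonic functions is superharmonic in the weak sense, since the mean value inequality is inherited from whichever branch attains the minimum:

```latex
\Delta u \le 0,\quad \Delta w \le 0
\quad\Longrightarrow\quad
v=\min\{u,w\}\ \text{satisfies}\quad
v(x)\;\ge\;\frac{1}{|\partial B_r(x)|}\int_{\partial B_r(x)} v\, d\sigma
\quad\text{for small } r
```

(if, say, $v(x)=u(x)$, then $v(x)\ge$ the spherical mean of $u$, which dominates the spherical mean of $v$). Mollification with a radial kernel preserves this inequality, which is the smoothing step used above.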
Using this result, the theorem follows from Theorem \ref{thm:psc1}, since the standard $1$-forms
on $T^n$ can be pulled back to $\hat{M}$ to produce the classes $\alpha_1,\ldots,\alpha_{n-1}$ of that
theorem. This completes the proof of Theorem \ref{thm:psc2}.
\end{pf}
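The mass normalization used in the proof (the expansion $u(x)=1+m/|x|^{n-2}$ paired with the prefactor $1/(4(n-1)\omega_{n-1})$) can be checked numerically in a model case. This is a hedged sanity check, not part of the argument: in dimension $n=3$, for the conformally flat metric $g=u^4\delta$ with $u=1+m/|x|$, evaluating the surface integral at a large radius should recover the coefficient $m$.

```python
import numpy as np

# Numerical evaluation of the ADM mass formula for g = u^4 * delta,
# u = 1 + m/|x|, in dimension n = 3; the exact answer at radius sigma is
# m * u(sigma)^3, which tends to m as sigma -> infinity.
m_true, h, sigma = 0.7, 1e-4, 200.0

def g(x):                                  # metric components at the point x
    u = 1.0 + m_true / np.linalg.norm(x)
    return u ** 4 * np.eye(3)              # exponent 4/(n-2) = 4 for n = 3

def dg(x, i):                              # central difference d g / d x^i
    e = np.zeros(3); e[i] = h
    return (g(x + e) - g(x - e)) / (2 * h)

nth, nph = 100, 200                        # midpoint rule on the sphere |x| = sigma
total = 0.0
for t in (np.arange(nth) + 0.5) * np.pi / nth:
    for p in (np.arange(nph) + 0.5) * 2 * np.pi / nph:
        nu = np.array([np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)])
        D = [dg(sigma * nu, i) for i in range(3)]   # D[i][j, k] = g_{jk,i}
        integrand = sum((D[i][i, j] - D[j][i, i]) * nu[j]
                        for i in range(3) for j in range(3))
        total += integrand * sigma ** 2 * np.sin(t)
total *= (np.pi / nth) * (2 * np.pi / nph)
m_adm = total / (4 * (3 - 1) * 4 * np.pi)  # 4(n-1) * omega_{n-1}, omega_2 = 4*pi
assert abs(m_adm - m_true) < 0.05
```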
\bigskip
\begin{thebibliography}{XXX}
\bibitem[A]{almgren} Almgren, F., {\it Q valued functions minimizing Dirichlet's integral and the
regularity of area minimizing rectifiable currents up to codimension two}, World Scientific Monograph Series in Mathematics, 1, World Scientific Publishing Co., Inc., River Edge, NJ, 2000. xvi+955 pp.
\bibitem[AS]{as} Atiyah, M., Singer, I., {\it The index of elliptic operators on compact manifolds},
Bull. Amer. Math. Soc. {\bf 69} (1963), 422--433.
\bibitem[BG]{bg} Bombieri, E., Giusti, E., {\it Harnack's inequality for elliptic differential equations
on minimal surfaces}, Invent. Math. {\bf 15} (1972), 24--46.
\bibitem[F]{federer} Federer, H., {\it Geometric Measure Theory}, Springer-Verlag, New York, 1969.
\bibitem[GL1]{gl1} Gromov, M., Lawson, H. B., {\it Spin and scalar curvature in the presence of a fundamental group. I}, Ann. of Math. {\bf 111} (1980), no. 2, 209--230.
\bibitem[GL2]{gl2} Gromov, M., Lawson, H. B., {\it The classification of simply connected manifolds of positive scalar curvature}, Ann. of Math. {\bf 111} (1980), no. 3, 423--434.
\bibitem[GL3]{gl3} Gromov, M., Lawson, H. B., {\it Positive scalar curvature and the Dirac operator on complete Riemannian manifolds}, Inst. Hautes Études Sci. Publ. Math. No. 58 (1983), 83--196.
\bibitem[H]{h} Hitchin, N., {\it Harmonic spinors}, Advances in Math. {\bf 14} (1974), 1--55.
\bibitem[Li]{lich} Lichnerowicz, A., {\it Spineurs harmoniques} (French), C. R. Acad. Sci. Paris {\bf 257} (1963), 7--9.
\bibitem[Lo1]{lo1} Lohkamp, J., {\it Skin Structures on Minimal Hypersurfaces}, arXiv:1512.08249.
\bibitem[Lo2]{lo2} Lohkamp, J., {\it Hyperbolic Geometry and Potential Theory on Minimal Hypersurfaces}, arXiv:1512.08251.
\bibitem[Lo3]{lo3} Lohkamp, J., {\it Skin Structures in Scalar Curvature Geometry}, arXiv:1512.08252.
\bibitem[Lo4]{lo4} Lohkamp, J., {\it The Higher Dimensional Positive Mass Theorem II}, arXiv:1612.07505.
\bibitem[PT]{pt} Parker, T., Taubes, C., {\it On Witten's proof of the positive energy theorem},
Comm. Math. Phys. {\bf 84} (1982), no. 2, 223--238.
\bibitem[Sc]{sc} Schoen, R., {\it Variational theory for the total scalar curvature functional for Riemannian metrics and related topics}, Topics in calculus of variations (Montecatini Terme, 1987), 120--154, Lecture Notes in Math., 1365, Springer, Berlin, 1989.
\bibitem[SY1]{sy1} Schoen, R., Yau, S. T., {\it On the structure of manifolds with positive scalar curvature}, Manuscripta Math. {\bf 28} (1979), no. 1-3, 159--183.
\bibitem[SY2]{sy2} Schoen, R., Yau, S. T., {\it On the proof of the positive mass conjecture in general relativity}, Comm. Math. Phys. {\bf 65} (1979), no. 1, 45--76.
\bibitem[SY3]{sy3} Schoen, R., Yau, S. T., {\it Positivity of the total mass of a general space-time}, Phys. Rev. Lett. {\bf 43} (1979), no. 20, 1457--1459.
\bibitem[SY4]{sy4} Schoen, R., Yau, S. T., {\it Existence of incompressible minimal surfaces and the topology of three-dimensional manifolds with nonnegative scalar curvature}, Ann. of Math. {\bf 110} (1979), no. 1, 127--142.
\bibitem[SY5]{sy5} Schoen, R., Yau, S. T., {\it Complete manifolds with nonnegative scalar curvature and the positive action conjecture in general relativity}, Proc. Nat. Acad. Sci. U.S.A. {\bf 76} (1979), no. 3, 1024--1025.
\bibitem[Si]{simon} Simon, L., {\it Lectures on Geometric Measure Theory}, Australian National University, Proceedings of the Centre for Mathematical Analysis, Volume 3, 1983.
\bibitem[Sm]{sm} Smale, N., {\it Generic regularity of homologically area minimizing hypersurfaces in eight-dimensional manifolds}, Comm. Anal. Geom. {\bf 1} (1993), no. 2, 217--228.
\bibitem[St]{st} Stolz, S., {\it Simply connected manifolds of positive scalar curvature}, Ann. of Math.
{\bf 136} (1992), no. 3, 511--540.
\bibitem[W]{w} Witten, E., {\it A new proof of the positive energy theorem}, Comm. Math. Phys. {\bf 80} (1981), no. 3, 381--402.
\end{thebibliography}
\end{document}
\begin{document}
\title{ADAPT: Mitigating Idling Errors in Qubits\\ via Adaptive Dynamical Decoupling}
\author{Poulami Das}
\authornote{Both authors contributed equally to this research. The corresponding authors can be reached at [email protected] and [email protected].}
\affiliation{
\institution{Georgia Tech}
\city{Atlanta}
\country{USA}
}
\author{Swamit Tannu}
\authornotemark[1]
\affiliation{
\institution{University of Wisconsin}
\city{Madison}
\country{USA}}
\author{Siddharth Dangwal}
\affiliation{
\institution{IIT Delhi}
\city{New Delhi}
\country{India}
}
\author{Moinuddin Qureshi}
\affiliation{
\institution{Georgia Tech}
\city{Atlanta}
\country{USA}
}
\renewcommand{\shortauthors}{Das, Tannu, Dangwal, and Qureshi}
\begin{abstract}
The fidelity of applications on near-term quantum computers is limited by hardware errors. In addition to errors that occur during gate and measurement operations, a qubit is susceptible to {\em idling errors}, which occur when the qubit is idle and not actively undergoing any operations. To mitigate idling errors, prior works in the quantum devices community have proposed {\em Dynamical Decoupling (DD)}, that reduces stray noise on idle qubits by continuously executing a specific sequence of single-qubit operations that effectively behave as an identity gate. Unfortunately, existing DD protocols have been primarily studied for individual qubits and their efficacy at the application-level is not yet fully understood.
Our experiments show that naively enabling DD for every idle qubit does not necessarily improve fidelity. While DD reduces the idling error-rates for some qubits, it increases the overall error-rate for others due to the additional operations of the DD protocol. Furthermore, idling errors are program-specific and the set of qubits that benefit from DD changes with each program. To enable robust use of DD, we propose {\em Adaptive Dynamical Decoupling (ADAPT)}, a software framework that estimates the efficacy of DD for each qubit combination and judiciously applies DD only to the subset of qubits that provide the most benefit. ADAPT
employs a {\em Decoy Circuit}, which is structurally similar to the original program but with a known solution, to identify the DD sequence that maximizes the fidelity. To avoid the exponential search of all possible DD combinations, ADAPT employs a localized algorithm that has linear complexity in the number of qubits. Our experiments on IBM quantum machines (with 16-27 qubits) show that ADAPT improves the application fidelity by 1.86x on average and up-to 5.73x compared to no DD and by 1.2x compared to DD on all qubits.
\end{abstract}
\ccsdesc[500]{Computer systems organization~Quantum computing}
\keywords{Quantum computing, Idling errors, Dynamical decoupling, NISQ}
\maketitle
\section{Introduction}
\label{sec:intro}
Quantum hardware available today with fifty-plus qubits can already outperform the world's most advanced supercomputer for certain problems~\cite{QCSup}. In the near-term, we can expect quantum computers with few hundreds of qubits to solve certain domain-specific applications~\cite{qaoa1,emani2019quantum,qaoa,vqe,orus2019quantum}. Unfortunately, the fidelity of applications executed on these {\em Noisy Intermediate Scale Quantum (NISQ) }~\cite{preskill2018quantum} computers is limited by the high error-rates of the physical qubit devices. The probability of encountering an error on NISQ computers increases with the size of the program. Therefore, developing software solutions that can reduce the impact of hardware errors and improve the fidelity of NISQ applications is an active area of research.
A qubit can encounter errors while performing gate or measurement operations. Additionally, a qubit can also accumulate errors while it is idle and not performing any operations. These errors, referred to as {\em idling errors}, are observed on both superconducting~\cite{IBMDD,chen2021exponential} and ion-trap hardware~\cite{pokharel2018demonstration}. Under certain circumstances, the idling error-rate of a qubit can exceed the error-rate from gate operations. Furthermore, idling errors can increase significantly in the presence of other active qubits in the vicinity, as the idle qubit accumulates phase noise due to crosstalk generated by the on-going gate operations. Our experiments on IBMQ hardware show that an idle qubit is almost 10x more vulnerable to errors when two-qubit gate operations are scheduled adjacent to it. Thus, idling errors can significantly degrade the fidelity of quantum programs and we focus on mitigating these errors in this paper.
\begin{figure*}
\caption{(a) Baseline circuit -- qubits q[0] and q[2] have significant idle time, whereas q[1] remains busy (b) Applying DD on all idle qubits (c) Applying DD on qubit q[0] (d) Applying DD on qubit q[2] (e) Reliability of DD sequences compared to no DD (Note: Figure is for illustration purposes only). }
\label{fig:intro}
\end{figure*}
The characteristics of quantum programs and NISQ hardware cause a large number of program qubits to remain idle during the execution. There are three key reasons for qubits to remain idle: (1)~limited parallelism, (2)~high latency of two-qubit gates, and (3)~additional data movement due to SWAP operations. Quantum programs have limited operational parallelism as complex multi-qubit operations are translated into a highly serial sequence of two-qubit CNOT gates. Similarly, there exists significant non-uniformity in the latency of different operations on NISQ computers. For example, the latency of a CNOT gate on IBMQ hardware ($\approx$ 400 ns) is almost an order of magnitude higher than the latency of a single qubit gate. Also, CNOT gates on the same hardware incur different latencies. For example, while the latency of a CNOT gate is 440 ns on average, it can be as high as 860 ns on IBMQ-Toronto. Therefore, even if a program can orchestrate parallel operations on different qubits, some with single-qubit operations and others with two-qubit operations, the qubits with single-qubit operations finish execution earlier than the qubits with two-qubit operations and remain idle. Similarly, parallel two-qubit gates with variable latencies finish execution at different times. Finally, architectural constraints can also cause idle times in programs as current machines do not have all-to-all connectivity. When two unconnected qubits need to perform a two-qubit operation, the compiler inserts SWAP instructions (typically performed as a sequence of 3 CNOT instructions) to perform data movement, which causes serialization and large idle periods due to the long latency of these operations.
By keeping the idle qubits busy with a specific sequence of gates, their susceptibility to spurious noise can be reduced.
{\em Dynamical Decoupling (DD)}~\cite{oliver,DDLidarAPS,DDLidarPeriodic} is a well-known technique from the quantum devices community that uses this insight to mitigate idling errors. DD is implemented by the repeated execution of a sequence of single-qubit operations that return the qubit to its original state. Thus, DD operations do not change the overall state of the qubits as they collectively behave as an identity gate that suppresses noise. DD is widely used in device-level characterization experiments to remove systematic noise~\cite{oliver}. More recently, DD has been adopted in quantum volume~\cite{IBMDD} and quantum error correction circuits~\cite{chen2021exponential}.
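Why a DD sequence preserves the qubit state while suppressing noise can be seen with a minimal sketch in plain NumPy (no quantum SDK assumed): the XY4 sequence composes to the identity up to a global phase, and a static $Z$ over-rotation accumulated between the pulses, a simple dephasing model, is echoed out.

```python
import numpy as np

# XY4 = X, Y, X, Y (applied right-to-left in matrix form) acts as identity
# up to a global phase, so it can be inserted into any idle window.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
xy4 = Y @ X @ Y @ X
phase = xy4[0, 0]
assert abs(abs(phase) - 1) < 1e-12
assert np.allclose(xy4, phase * np.eye(2))

# Toy echo: a small static Z-rotation (dephasing) between pulses is cancelled
# by the sequence, while plain idling accumulates it.
def rz(theta):
    return np.array([[np.exp(-1j * theta / 2), 0], [0, np.exp(1j * theta / 2)]])

eps = 0.1
idle = rz(eps) @ rz(eps) @ rz(eps) @ rz(eps)
dd = rz(eps) @ Y @ rz(eps) @ X @ rz(eps) @ Y @ rz(eps) @ X
plus = np.array([1, 1]) / np.sqrt(2)          # state sensitive to dephasing
fid_idle = abs(plus.conj() @ (idle @ plus)) ** 2
fid_dd = abs(plus.conj() @ (dd @ plus)) ** 2
assert np.isclose(fid_dd, 1.0)                # static dephasing fully echoed out
assert fid_dd > fid_idle
```

Real device noise is of course not a static $Z$; this only illustrates the echo mechanism that DD exploits.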
So far, the use of DD has mainly been limited to single-qubit experiments or specific circuits~\cite{IBMDD,chen2021exponential}. Most importantly, at the application level, prior studies have relied on enabling DD for all idle qubits. However, our experiments on IBMQ systems show that such indiscriminate application of DD often results in sub-optimal improvements in fidelity: while DD can reduce idling errors, it can still increase the effective error-rate of a qubit if the errors introduced by the extra operations outweigh the benefits.
Moreover, the effectiveness of DD depends on the structure of the program, qubit state, and device characteristics. We explain this dependency with an example. Figure~\ref{fig:intro}(a) shows a quantum program with 3-qubits. Note the difference in instruction latencies (40ns for $H$ versus 400ns and 600ns for the $CNOT$s) and that qubits $\mathsf{q}[0]$ and $\mathsf{q}[2]$ remain idle for significant periods of time. Figure~\ref{fig:intro}(b-d) shows the three options for applying DD -- to all idle qubits, only $\mathsf{q}[0]$, and only $\mathsf{q}[2]$, respectively. Figure~\ref{fig:intro}(e) compares the reliability of the baseline with the three options for DD. While applying DD for all idle qubits provides some reliability benefits, the highest improvement comes when DD is applied to only a subset of the qubits (only $\mathsf{q}[2]$ in this example). Thus, to mitigate idling errors at the application-level, DD must be applied robustly and judiciously for the most optimal performance. To that end, this paper proposes {\em Adaptive Dynamical Decoupling (ADAPT)}, a software framework to reliably use DD for NISQ applications by identifying the subset of qubits that are most likely to benefit from DD.
\begin{figure*}
\caption{(a) Qubit-rotation operation on a single qubit (b) Dynamical Decoupling (DD) using XYXY (XY4) sequence.}
\label{fig:back}
\end{figure*}
ADAPT employs a trial-and-error method to search for the subset of qubits that maximizes the application fidelity. However, the output of the program must be known a-priori for this search to be effective, which is not possible. We observe that the effectiveness of DD depends on the program structure and two programs with similar structures tend to have similar fidelity when executed on identical physical qubits. ADAPT leverages this insight and searches for the optimal DD sequence using a {\em Decoy Circuit} that is structurally similar to the given program but built using Clifford operations (and only a handful of non-Clifford instructions). Clifford gates (CNOT, H, X, Z, S) can be efficiently simulated on conventional computers~\cite{Knill} and therefore, the noise-free output of the decoy circuit can be estimated. The decoy circuit shows a similar trend in idling errors as the input circuit because it uses identical CNOT operations as the input circuit. ADAPT performs the trial-and-error search on the decoy circuit by inserting different DD sequences on the subset of the qubits and selecting the pattern that maximizes the likelihood of getting the correct answer on the decoy circuit. While the number of possible DD combinations increases exponentially with the number of qubits, ADAPT employs a divide-and-conquer approach where only 4 qubits are evaluated exhaustively at any time, keeping the search tractable. This results in a linear complexity in the number of qubits.
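The divide-and-conquer search can be sketched as follows. This is a simplified illustration, not the exact implementation: `run_decoy` is a hypothetical stand-in for executing the decoy circuit with a given DD on/off mask and returning the probability of its known correct answer, and the window size of 4 matches the exhaustive evaluation described above.

```python
from itertools import product

WINDOW = 4  # qubits evaluated exhaustively at a time (2^4 = 16 trials)

def adapt_search(num_qubits, run_decoy):
    """Greedy window-by-window search: linear in the number of qubits."""
    mask = [False] * num_qubits
    for start in range(0, num_qubits, WINDOW):
        qubits = range(start, min(start + WINDOW, num_qubits))
        best, best_fid = None, -float("inf")
        for bits in product([False, True], repeat=len(qubits)):
            trial = list(mask)
            for q, b in zip(qubits, bits):
                trial[q] = b
            fid = run_decoy(trial)          # decoy-circuit fidelity for this mask
            if fid > best_fid:
                best, best_fid = trial, fid
        mask = best                         # keep the best choice for this window
    return mask

# Toy oracle (stands in for hardware runs): DD helps qubits 2 and 5, hurts others.
helps = {2, 5}
def toy_oracle(mask):
    return sum((+1 if q in helps else -1) * int(on) for q, on in enumerate(mask))

found = {q for q, on in enumerate(adapt_search(8, toy_oracle)) if on}
assert found == helps
```

In ADAPT itself the oracle is the decoy-circuit execution on the target machine; the toy oracle merely encodes which qubits benefit so the search behavior can be checked.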
We evaluate the effectiveness of ADAPT using IBMQ systems, ranging from 16 to 27 qubits, and two different DD protocols. Our evaluations show that ADAPT is robust and improves fidelity for both types of DD sequences, making it generalizable to other systems and DD protocols. ADAPT improves the fidelity of key NISQ benchmarks on average by 1.86x and by up to 5.73x compared to the baseline without DD, and by 1.2x compared to applying DD to all qubits. The software for ADAPT and the datasets for the evaluations in this paper are available at this \href{https://github.com/pdas36/ADAPT}{\underline{link}}.
Overall, this paper makes the following contributions:
\begin{enumerate}[leftmargin=0cm,itemindent=.5cm,labelwidth=\itemindent,labelsep=0cm,align=left, itemsep=0.08cm, listparindent=0.3cm]
\item To the best of our knowledge, this is the first paper to evaluate DD at an application level. We show that while DD is beneficial in general, applying DD indiscriminately to all the idle qubits does not provide the highest fidelity.
\item We propose ADAPT, a software framework that applies DD judiciously by estimating the subset of qubits that are likely to provide the highest reliability with DD.
\item We present a Clifford-based decoy circuit approach and an efficient search algorithm to practically implement ADAPT.
\end{enumerate}
\section{Background}
\subsection{Quantum Bits and Gates}
The state of a qubit is denoted as a superposition of its basis states $\ket{0}$ and $\ket{1}$, i.e., $\ket{\psi} = \alpha\ket{0} + \beta\ket{1}$. It can be represented as a point on the Bloch sphere, as shown in Figure~\ref{fig:back}(a). When a qubit with state $\ket{\psi}$ is measured, it produces ``0'' with probability $|\alpha|^2$ and ``1'' with probability $|\beta|^2$. A single-qubit gate rotates the qubit state from one point on the sphere to another, as shown in Figure~\ref{fig:back}(a).
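The measurement rule above can be checked numerically; the following is a small illustrative sketch (our own code, not part of the paper's tooling), which normalizes an arbitrary amplitude pair and returns the outcome probabilities $|\alpha|^2$ and $|\beta|^2$:

```python
import numpy as np

# Illustrative sketch: outcome probabilities for |psi> = alpha|0> + beta|1>.
# Probabilities are |alpha|^2 and |beta|^2 (normalized), not alpha^2/beta^2.
def measurement_probs(alpha: complex, beta: complex):
    norm = abs(alpha) ** 2 + abs(beta) ** 2
    return abs(alpha) ** 2 / norm, abs(beta) ** 2 / norm

# Equal superposition (e.g., after a Hadamard): 50/50 outcomes.
p0, p1 = measurement_probs(1 / np.sqrt(2), 1 / np.sqrt(2))
```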
\subsection{NISQ Hardware Errors}
\label{sec:nisqerrors}
For a qubit in superposition, even a tiny change in its energy can produce a valid but \textit{erroneous} state. The likelihood of undesirable changes in the state of a qubit is defined as the qubit error rate. Qubit errors can broadly be classified into:
\noindent{\textbf{Active Errors: }}These errors are caused when the qubit is performing gate or measurement operations. On the current generation of quantum computers from IBM, the error rate is approximately 0.1\% for single-qubit operations, 1\%-2\% for two-qubit operations, and 4\% for measurement operations.
\noindent{\textbf{Idling Errors: }}These errors occur when a qubit is idle and not performing any operations. For example, coherence errors can cause qubits to naturally decay to the lowest energy state ($\ket{0}$) within a short time ($10-100$ $\mu\textrm{seconds}$). Moreover, qubits can lose their phase information due to their interactions with the environment (dephasing). Similarly, undesirable crosstalk can cause operations on other active qubits to affect the state of neighboring (spectator) idle qubits. Idling errors can also be caused by environmental noise.
\ignore{
\begin{figure*}
\caption{(a) Circuit to evolve qubit q[0] freely (b) Circuit to evolve q[0] with DD (c) Fidelity of q[0] with free evolution and DD (d) Circuit to evolve q[0] in presence of cross-talk (e) Circuit to evolve q[0] with DD in presence of cross-talk (f) Fidelity of q[0] in presence of cross-talk, with and without DD}
\label{fig:motiv}
\end{figure*}
}
\subsection{NISQ Model for Quantum Computing}
Hardware errors can cause a quantum program to produce an incorrect answer. Unfortunately, in the near term, quantum computers will not have enough resources to perform error correction (which can incur 20x-100x overhead). However, some quantum algorithms can tolerate limited hardware errors at the algorithmic level and can be used to solve practical problems using the {\em Noisy Intermediate-Scale Quantum (NISQ)} model, wherein the program is run thousands of times to identify the correct answer. NISQ systems can solve problems that are beyond the reach of existing computers~\cite{GoogleQ,preskillNISQ}. However, the ability to infer the correct answer on NISQ machines depends on the error-rates and program length.
Recent works have proposed software solutions to reduce the length of programs~\cite{li2018tackling,DACWille,wille2014optimal,CGO} and use error characteristics to improve program fidelity~\cite{tannu2019not,murali2019noise,murali2020software}. Although useful in mitigating measurement errors, gate errors, and CNOT-CNOT crosstalk errors~\cite{murali2020software}, these schemes do not focus on idling errors. In this paper, we focus on software policies to mitigate idling errors at the application level.
\subsection{Idling Errors in NISQ Applications}
The characteristics of quantum programs cause many of the qubits to remain idle for a significant period of time during program execution. There are three key reasons for qubits to remain idle: (1)~limited parallelism, as quantum programs get decomposed into sequences of instructions with data dependencies, (2)~the high latency of two-qubit gates compared to single-qubit gates, and (3)~additional data movement or SWAP operations. For example, consider the 4-qubit Bernstein-Vazirani (BV) circuit shown in Figure~\ref{fig:bv_variable}(a). Due to low parallelism, the CNOT gates of this circuit must be scheduled serially. Consequently, qubit Q0 remains idle while CNOTs \circled{B} and \circled{C} are executed. Although existing compilers minimize idle times by scheduling instructions \textit{as late as possible}~\cite{qiskit,murali2020software}, this optimization is not feasible for all qubits as the computation must make forward progress. For example, late initialization ensures that Q2 does not experience any idle time, but it cannot prevent Q1 from remaining idle during the execution of CNOT \circled{C}. The long latency of CNOT gates exacerbates the idle times. Even if a program can orchestrate parallel operations on different qubits, qubits with single-qubit operations (10x faster) finish execution earlier than qubits with two-qubit gate operations and then remain idle. Furthermore, CNOT gates on the same hardware incur different latencies. For example, the worst-case CNOT gate latency on IBMQ-Toronto is 1.95x the average latency. Consequently, parallel CNOT gates with variable latencies finish at different times. Finally, NISQ compilers insert SWAP instructions to overcome limited device connectivity, causing serialization and long idle periods. For example, Figure~\ref{fig:bv_variable}(b) shows that the idle time of qubit Q0 for different BV circuits is about an order of magnitude higher on IBMQ-Toronto compared to a machine with similar error rates but all-to-all connectivity.
\begin{figure}
\caption{(a) 4-qubit Bernstein-Vazirani circuit (b) Impact of SWAPs on the idle time of Q0 when BV circuits with increasing sizes are executed on IBMQ-Toronto.}
\label{fig:bv_variable}
\end{figure}
\begin{table}[htp]
\centering
\begin{small}
\caption{ Idling Times for Programs on IBMQ-Rome}
\setlength{\tabcolsep}{0.1cm}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{ | c || c || c | c | c | c | c || c | c |}
\hline
Workload & Program & \multicolumn{5}{c||}{Idle Fraction (\%)} & \multicolumn{2}{c|}{Fidelity} \\
\cline{3-9}
Name & Latency & $Q_0$ & $Q_1$ & $Q_2$ & $Q_3$ & $Q_4$ & No DD & DD on all\\
\hline \hline
QFT-5 & $13.1\ \mu s$ & 92 & 38 & 17 & 33 & 63 & 0.18 & 0.41
\\ \hline
QAOA-5 & $2.50\ \mu s$ & 82 & 37 & 35 & 63 & 79 & 0.64 & 0.80
\\ \hline
Adder & $9.90\ \mu s$ & 41 & 42 & 9 & 42 & 75 & 0.40 & 0.37
\\ \hline
\end{tabular}
\label{tab:idle}
\end{small}
\end{table}
Table~\ref{tab:idle} shows the program latency and the percentage of time for which each qubit in three different five-qubit programs remains idle on IBMQ-Rome. Note that across these workloads, qubits remain idle, on average, for more than 50\% of the time, and for as much as 92\%.
\begin{figure*}
\caption{Circuit to evolve qubit q[0] (a) freely (b) with DD. (c) Fidelity of q[0] with free evolution and with DD. Circuit to evolve q[0] (d) freely (e) with DD in presence of crosstalk from on-going CNOT operations. (f) Fidelity of q[0] in the presence of crosstalk, with and without DD. Distribution of fidelity of the idle qubit (g) without and (h) with DD when circuits (d-e) are executed on every qubit-link combination on IBMQ-Guadalupe.}
\label{fig:motiv}
\end{figure*}
\subsection{Dynamical Decoupling for Idling Errors}
To minimize the impact of idling errors, experimentalists have proposed {\em Dynamical Decoupling (DD)}~\cite{oliver,DDLidarAPS,DDLidarPeriodic}. As shown in Figure~\ref{fig:back}(b), DD keeps an idle qubit active by continuously rotating its state using single-qubit operations, which suppresses the coupling between environmental noise and the qubit. DD is implemented by repeated execution of a sequence of single-qubit operations that returns the qubit to its original state (for example, a sequence of XYXY operations, as shown in Figure~\ref{fig:back}(b)). Thus, DD operations do not change the qubit's overall state as they collectively behave as an identity gate that suppresses the noise. DD is widely used in qubit characterization and has been shown to be effective on IBM~\cite{IBMDD}, Rigetti~\cite{pokharel2018demonstration}, and Google quantum hardware~\cite{chen2021exponential}.
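The claim that the DD pulses collectively behave as an identity can be verified directly on the gate matrices. The snippet below is a minimal check we add for illustration (our code, not the paper's): the operator product of the XYXY sequence equals the identity up to a global phase, so it preserves any qubit state.

```python
import numpy as np

# Pauli gate matrices.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

# Gates applied in the order X, Y, X, Y correspond to the operator Y.X.Y.X.
seq = Y @ X @ Y @ X

# The product is -I: the identity up to an unobservable global phase.
assert np.allclose(seq, -np.eye(2))
```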
\subsection{The Drawback of Dynamical Decoupling}
While DD can reduce idling errors, the extra operations introduced by it can cause gate errors. If the error-rate of these extra operations exceeds the reduction in idling errors, then employing DD can reduce fidelity. Thus far, DD has primarily been limited to device-level studies characterizing single qubits, and its efficacy in reducing idling errors at the application level is not yet fully understood.
Table~\ref{tab:idle} shows the application fidelity for two cases: (a)~when the program does not employ DD and (b)~when all qubits employ DD during the idle periods. We observe that applying DD on all the qubits can improve fidelity. However, our characterization studies show that even when DD improves application fidelity, we may obtain even higher fidelity by applying DD to only a select subset of qubits. While DD has been effective at the single-qubit level, it is unclear how to apply DD robustly at the application level.
\subsection{Goal: Software for Robust Use of DD}
The goal of this paper is to develop a software framework that can enable robust use of DD at the application level. As idling errors are program specific, the subset of qubits that benefits from DD is unique to each program. Therefore, our framework must identify, for each program, the sequence of DD activations by taking the program structure into account. To this end, we propose {\em Adaptive Dynamical Decoupling (ADAPT)}, which tries to learn the subset of qubits that must use DD to obtain the highest fidelity for a given program. Before we describe our solution, we present some characterization data and explain the challenges in applying DD at the application level to motivate the use of DD and our solution.
\section{Characterizing the Effectiveness of Dynamical Decoupling}
\label{sec:motivation}
\subsection{Mitigating Idling Errors using DD}
We quantify idling errors and the impact of DD in mitigating these errors using the characterization circuits shown in Figure~\ref{fig:motiv}(a) and (b). In the first circuit, shown in Figure~\ref{fig:motiv}(a), we initialize the qubit $\mathsf{q}[0]$ in an arbitrary state by rotating along the Y-axis, using an $R_y({\theta})$ gate, and allow it to evolve freely over the idle period. This is achieved on IBMQ systems by inserting Delay or Identity gates. In the end, we bring the qubit back to state $\ket{0}$ by performing an inverse rotation, $R_y^{-1}({\theta})$, and measure it. To understand if the errors are more than just natural decoherence, we perform a similar experiment. However, now we use XY pulses, typically used for DD, throughout the idle period, as shown in Figure~\ref{fig:motiv}(b). Consequently, qubit $\mathsf{q}[0]$ does not remain idle in this circuit.
We prepare multiple initial states by choosing different values of $\theta$ and study the circuits for $1.2\ \mu s$ (typical latency of 3 CNOTs). Figure~\ref{fig:motiv}(c) shows the fidelity of this circuit for a qubit on IBMQ-London. We observe that the fidelity of qubit $\mathsf{q}[0]$ improves significantly when DD is applied.
\subsection{Effectiveness of DD under Crosstalk}
\label{sec:motivation2}
In quantum programs, a qubit typically remains idle when other qubits are actively performing gate operations. Prior studies show that on-going two-qubit CNOT operations generate crosstalk that can significantly lower the fidelity of concurrent CNOT operations~\cite{murali2020software,harper}. To study the impact of crosstalk from concurrent CNOT operations on idling errors, we characterize the fidelity of an idle qubit by executing the circuits shown in Figure~\ref{fig:motiv}(d-e). In the first circuit, qubit $\mathsf{q}[0]$ evolves freely in the presence of CNOT operations on neighboring qubits, whereas in the second circuit, qubit $\mathsf{q}[0]$ evolves in the presence of the XY DD gate sequence. Running these circuits on IBMQ-London for five different quantum states of qubit $\mathsf{q}[0]$ and an idle time period of $2.4\ \mu s$, we observe that the fidelity of the idle qubit $\mathsf{q}[0]$ drops to as low as 34\% in the presence of concurrent CNOT operations, as shown in Figure~\ref{fig:motiv}(f). Thus, idling errors get amplified significantly in the presence of crosstalk, making quantum programs extremely vulnerable to these errors. We also observe DD to be effective even in the presence of crosstalk from on-going operations, as the fidelity improves from 34\% to 75\%.
We also characterize idling errors on the 16-qubit IBMQ-Guadalupe. We map the idle qubit $\mathsf{q}[0]$ to every physical qubit. Further, for each idle qubit, the active qubits $\mathsf{q}[1]$ and $\mathsf{q}[2]$ are mapped to any of the remaining fifteen qubits that are physically connected. On IBMQ-Guadalupe, there are 224 such possible combinations, and for each qubit-link combination, we run the two circuits (without and with DD) for five different initial states (initialized using $\theta$ in $[0,\pi]$). We also increase the idle time to $8\ \mu s$ to understand idling errors and the effectiveness of DD in the context of large programs. Note that $8\ \mu s$ is reasonable, as even a small program with 6-8 qubits and 20 serial CNOTs can experience such large idle periods. Figure~\ref{fig:motiv}(g-h) shows the distribution of the fidelity of these 2240 ($= 224\times 5\times 2$) circuits, without and with DD respectively. We observe that when the qubit evolves freely, its fidelity drops to 84.5\% on average and to as low as 13.6\% in the worst case. With DD, the fidelity improves to 91.3\% on average and 57.7\% in the worst case.
\subsection{Factors Influencing Idling Errors and DD}
To understand factors influencing idling errors and the effectiveness of DD, we repeat the characterization experiments discussed in Section~\ref{sec:motivation2} on other IBMQ systems across multiple calibration cycles. For example, on 27-qubit IBMQ-Toronto, there are 700 qubit-link combinations and we characterize each of them using a total of 7000 circuits. We make the following key observations:
While DD generally improves the fidelity of the idle qubit $\mathsf{q}[0]$, there are many instances where DD worsens the fidelity. For example, Figure~\ref{fig:histogram} shows the histogram of the relative fidelity of qubit $\mathsf{q}[0]$ in the presence of DD. While DD improves the fidelity by up to 3.95x in the best case, it lowers the fidelity to 0.21x in the worst case.
\begin{figure}
\caption{Distribution of the relative fidelity of $\mathsf{q}[0]$ with DD.}
\label{fig:histogram}
\end{figure}
\begin{tcolorbox}
Applying DD pulses to every qubit during each idle time window can adversely impact the fidelity of a program.
\end{tcolorbox}
Idling errors depend on the state of qubits and concurrent CNOTs. Further, the effectiveness of DD changes across calibration cycles. For example, Figure~\ref{fig:downsideofdd} shows the relative fidelity of Qubit-12 when CNOT operations are performed on Link:17-18 for two different calibration cycles. In the first cycle, DD improves the fidelity by up to 1.27x, whereas in the second cycle, DD degrades the fidelity down to 0.35x. Our experiments also show that idling errors exist between qubit-link pairs that may not be present in the same on-chip neighborhood, making localized characterization approaches~\cite{murali2020software} inadequate. We make similar observations using other (a)~systems such as IBMQ-Paris and IBMQ-Casablanca and (b)~DD pulses~\cite{IBMDD}.
\begin{figure}
\caption{Relative Fidelity of Qubit-12 in the presence of CNOTs on Link:17-18 for different calibration cycles.}
\label{fig:downsideofdd}
\end{figure}
\begin{figure*}
\caption{Overview of ADAPT with key building blocks}
\label{fig:overview}
\end{figure*}
\begin{tcolorbox}
It is impractical to estimate the effectiveness of DD for all possible quantum states with respect to each qubit-link combination in large systems every calibration cycle.
\end{tcolorbox}
The characterization complexity increases further when simultaneous CNOT operations on multiple links are considered. Our experiments show that although CNOT operations on multiple links can result in higher idling errors in general, such an additive effect does not always exist. In some cases, one of the active links dictates the overall idling error. In rare situations, multiple CNOTs can reduce idling errors.
To summarize, our evaluations using IBMQ systems show complex trends in idling errors and in the effectiveness of DD. We confirm that (1)~on-going CNOTs increase idling errors, (2)~the idling error-rate and the effectiveness of DD depend on the CNOT patterns and the combination of idle and active qubits, and (3)~idling errors grow with the idle duration.
\subsection{Impact of DD on Application Fidelity}
The most straightforward method to enable dynamical decoupling at the application level is to insert DD pulses wherever feasible. To implement this design, we can identify all program regions where each qubit is idle and insert DD sequences. However, naively inserting DD sequences for all qubits may not always be beneficial, as we have already observed from the characterization experiments. To demonstrate this effect at the application level, we execute two 6-qubit benchmarks, {\em Quantum Fourier Transform (QFT)} and {\em Bernstein-Vazirani (BV)}, on the 27-qubit IBMQ-Toronto, with DD applied in all 64 ($2^6$) possible qubit combinations.
Figure~\ref{fig:qft_adder_all_dd} shows the fidelity (likelihood of obtaining the correct answer) for all 64 DD combinations, with 0 (000000) being the baseline where no DD is applied and 63 (111111) being the case where DD is applied on all six qubits. We observe that both benchmarks show significant variation in fidelity across DD sequences. For \textit{QFT}, enabling DD for all qubits increases the fidelity by 2.6x; however, the fidelity can be improved by up to 6.6x by choosing sequence ``010100''. For \textit{BV}, applying DD on all qubits degrades fidelity to 0.88x, and we may deem DD counter-effective for this benchmark. However, using the DD sequence ``010100'' improves the fidelity by 1.1x compared to no-DD and by up to 1.26x compared to DD-for-all. Note that the best DD sequence depends on the workload characteristics and the physical qubits used to run the program.
\begin{figure}
\caption{Fidelity of QFT and BV benchmarks with all possible DD sequences on IBMQ-Toronto. The sequence 0 \texttt{(000000)} denotes no DD, and 63 \texttt{(111111)} denotes DD on all qubits.}
\label{fig:qft_adder_all_dd}
\end{figure}
\begin{tcolorbox}
Applying DD on all qubits can reduce program fidelity. To effectively mitigate idling errors, DD should be applied judiciously only to the qubits that benefit from DD in a quantum circuit.
\end{tcolorbox}
\section{Adaptive Dynamical Decoupling }
To enable the robust use of DD at the application level, we propose {\em Adaptive Dynamical Decoupling (ADAPT)}. ADAPT identifies the combination of DD sequences that suppresses idling errors and maximizes the application fidelity. ADAPT is implemented as a compiler pass that can be easily integrated with existing and future quantum compiler tool flows. In this section, we provide an overview of ADAPT, discuss the design issues in estimating the optimal DD sequence, and propose a scalable search algorithm.
\subsection{Overview of ADAPT}
ADAPT identifies all idle qubit slots in a quantum circuit and applies DD gate sequences during these idle periods. However, the optimal subset of qubits on which DD must be applied is neither known prior to program execution nor practical to obtain through extensive device characterization. To overcome this challenge, ADAPT relies on a \textit{Decoy Circuit} that is structurally similar to the input program but has a known solution. Furthermore, to limit the complexity of the search for the optimal DD sequence, ADAPT employs a localized algorithm. Figure~\ref{fig:overview} shows an overview of ADAPT, which accepts a quantum circuit as input and outputs the circuit with the best DD sequence found. We discuss the specific design details of ADAPT next.
\subsection{Clifford Decoy Circuits (CDC)}
If the outcome of a quantum circuit is known, we can apply DD on different subsets of qubits and assess their effectiveness in improving fidelity. Unfortunately, practical applications are hard to simulate using conventional computers, and their correct outcome is unknown. To overcome this challenge in estimating the optimal DD sequence, we leverage the following insights.
\noindent \textit{Insight \#1: Not all quantum circuits are hard to simulate; circuits comprising only Clifford gates can be simulated efficiently on conventional computers~\cite{Knill,Aaronson_2004}.}
\noindent \textit{Insight \#2: Our characterization experiments show that crosstalk from CNOT operations is a dominant source of idling errors. Thus, two circuits with similar CNOT structures encounter similar idling errors.}
ADAPT uses these two insights \circled{1}~to generate an efficient {\em Clifford Decoy Circuit (CDC)} that preserves the structure of the input program. \circled{2}~Next, ADAPT applies different DD combinations to this decoy circuit and \circled{3}~selects the DD sequence that maximizes the fidelity of the decoy circuit. \circled{4}~Finally, ADAPT applies this optimal DD sequence to the input circuit and executes it.
\subsubsection{Design of Clifford Decoy Circuits (CDC)}
\\
\label{sec:cliffordreplacement}
\noindent ADAPT relies on Clifford Decoy Circuits generated using gates from the Clifford group -- {\em CNOT, X, Y, Z, H, S}. To create the CDC of a circuit, ADAPT replaces the non-Clifford gates of the circuit using the closest Clifford gates.
To measure the closeness of a non-Clifford gate in the program to a Clifford gate, we use the operator norm, a distance measure used in the literature for approximating one unitary with another:
\begin{align}
\|U-V\|_{\infty}:=\max _{|\psi\rangle \neq 0} \frac{\|(U-V)|\psi\rangle \|_{2}}{\||\psi\rangle \|_{2}}
\end{align}
For example, under the operator norm, the U1 gate is replaced by either a Z or an S gate, whereas the U2 and U3 gates are replaced by the closest Clifford gates depending on their Euler angles. As CNOT is a Clifford gate, the structure and usage of these gates are identical between the CDC and the input program; therefore, the CDC encounters similar crosstalk from CNOT operations. This also ensures that the qubits in the CDC experience idle times similar to those in the original circuit.
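As an illustration of this replacement rule, the sketch below (our code; the helper names are ours, and the paper does not publish its implementation) computes the operator norm as the largest singular value of $U-V$ and picks the nearest single-qubit Clifford for a phase gate:

```python
import numpy as np

# Operator (spectral) norm ||U - V||_inf: the largest singular value of U - V.
def op_dist(U, V):
    return np.linalg.norm(U - V, ord=2)

def u1(theta):
    # Phase gate diag(1, e^{i*theta}); non-Clifford for generic theta.
    return np.diag([1.0, np.exp(1j * theta)])

# A few diagonal single-qubit Cliffords to choose from (illustrative subset).
CLIFFORDS = {
    "I": np.eye(2, dtype=complex),
    "Z": np.diag([1.0, -1.0]).astype(complex),
    "S": np.diag([1.0, 1j]),
    "Sdg": np.diag([1.0, -1j]),
}

def closest_clifford(U):
    # Replace U by the Clifford gate minimizing the operator-norm distance.
    return min(CLIFFORDS, key=lambda name: op_dist(U, CLIFFORDS[name]))
```

For instance, `u1(pi/2)` coincides with the S gate and `u1(pi)` with Z, so the distance-minimizing replacement recovers them exactly.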
\subsubsection{Effectiveness of Clifford Decoy Circuits}
\\
\noindent To test the effectiveness of decoy circuits, we compare the fidelity of a 4-qubit quantum ADDER benchmark and its corresponding CDC for all possible DD sequence combinations. This four-qubit program has 16 ($2^4$) possible DD sequence combinations, where the combination ``0 (0000)'' indicates that DD is not applied on any of the qubits, whereas the combination ``15 (1111)'' indicates that DD is applied to all of the qubits. We also compute Spearman's correlation coefficient to quantify the agreement between the input program and the CDC. Figure~\ref{fig:adder_decoy_correlation} shows the trend in program fidelity for the actual circuit and the CDC, and we observe that the program fidelity is strongly correlated with the fidelity of the CDC (Spearman's correlation coefficient = 0.78).
\begin{figure}
\caption{Correlation between the Fidelity of a 4-qubit Adder circuit on IBMQ-Guadalupe and corresponding Clifford decoy circuit. We observe strong correlation.}
\label{fig:adder_decoy_correlation}
\end{figure}
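The agreement metric used above can be reproduced in a few lines. The sketch below (ours, not the paper's code) implements Spearman's rank correlation as the Pearson correlation of ranks, ignoring ties for simplicity, on hypothetical fidelity values:

```python
import numpy as np

def ranks(x):
    # Rank of each element (0 = smallest); ties are broken arbitrarily.
    order = np.argsort(x)
    r = np.empty(len(x))
    r[order] = np.arange(len(x))
    return r

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the rank vectors.
    rx, ry = ranks(np.asarray(x)), ranks(np.asarray(y))
    return np.corrcoef(rx, ry)[0, 1]

# Hypothetical fidelities across DD combinations for a program and its decoy;
# a monotone relationship yields rho = 1.
prog = [0.18, 0.41, 0.35, 0.52]
decoy = [0.30, 0.61, 0.50, 0.70]
```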
\subsubsection{Overcoming the Limitations of CDCs: Seeded Decoy Circuits}
\\
ADAPT generates sub-optimal DD sequences when there is a mismatch between the fidelity trend of the input program and that of its CDC. Our experiments show that a CDC with a high-entropy output distribution can be insensitive to changes in idling errors, and relying on it may result in sub-optimal DD sequences. For example, if a CDC produces a uniform output distribution, executing it with different DD sequences does not significantly change the output distribution in the presence of idling errors.
To tackle this problem, we propose {\em Seeded Clifford Decoy Circuits (SDC)} that generate output distributions with low entropy, making them sensitive to idling errors. While simply removing all the single-qubit gates from an input program and preserving only the CNOT structure, as shown in Figure~\ref{fig:sdc}(c), can generate a decoy circuit, it does not truly mirror the fidelity trends of the input circuit because it does not capture phase errors. To ensure that the output entropy is reduced while the decoy remains representative of the input circuit, SDCs use a very limited number of non-Clifford gates: they apply an initial layer of non-Clifford gates on a few qubits and replace the remaining non-Clifford gates with Clifford gates. For example, unlike the CDC shown in Figure~\ref{fig:sdc}(b), which uses only Clifford gates, the SDC shown in Figure~\ref{fig:sdc}(d) uses non-Clifford gates in the first layer of the circuit and Clifford gates in the later layers.
\begin{figure}
\caption{(a) Example quantum circuit. Decoy circuit construction using (b) Clifford gates (c) only CNOT (d) Mostly Clifford and few non-Clifford gates.}
\label{fig:sdc}
\end{figure}
Our experiments show that SDCs can produce a rich state evolution while generating low-entropy outputs. Although the simulation cost of an SDC is slightly higher than that of a CDC, it is still far cheaper than a full quantum simulation, which requires exponential resources. Moreover, because SDCs produce low-entropy outputs, further optimizations to reduce the sampling cost can be deployed as well. For the simulations, we use the Qiskit Extended Stabilizer Simulator (based on~\cite{bravyi2019simulation}). Table~\ref{tab:SDC} shows the effectiveness of SDCs in improving the correlation between the decoy and original circuits, and the time required to simulate the SDCs for 64,000 shots. To test scalability, we simulate a 100-qubit QAOA SDC, which requires 330 seconds for 100,000 shots. Note that CDCs and SDCs require simulation only once because applying different DD sequences does not alter the output on a noise-free simulator.
\begin{table}[htb]
\centering
\begin{small}
\caption{Correlation between Decoy and Input Circuits (higher is better)}
\setlength{\tabcolsep}{0.05cm}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{ | c | c | c | c | c | }
\hline
Benchmark & Adder & QFT-6 & QAOA-8 & QAOA-10 \\ \hline
Platform & IBMQ-Rome & IBMQ-Paris & IBMQ-Paris & IBMQ-Paris \\ \hline \hline
\textbf{CDC-Correlation} & 0.76 & 0.53 & 0.22 & -0.012 \\ \hline
\textbf{SDC-Correlation} & 0.81 & 0.68 & 0.74 & 0.62 \\ \hline \hline
\textbf{SDC-SimTime} & 1.2 Sec & 5.4 Sec & 12.8 Sec & 22.2 Sec \\ \hline
\end{tabular}
\label{tab:SDC}
\end{small}
\end{table}
\begin{figure*}
\caption{Overall Workflow of ADAPT}
\label{fig:adapt_schematic}
\end{figure*}
\subsection{Managing Search Complexity}
The state-space of possible DD sequences scales exponentially with the problem size. For a program with $N$ qubits, there are $2^{N}$ possible combinations of qubits on which DD pulses can be applied when the qubits are idle. The combination ``$00\ldots0$'' ($N$ zeros) represents DD applied to none of the qubits, whereas ``$11\ldots1$'' represents DD applied to all the qubits whenever they are idle. Unfortunately, we cannot search this design space exhaustively for large programs even with CDCs. Instead, we perform a localized search, whereby we first find the best subset of qubits in a neighborhood of 4 qubits (searching all 16 combinations) before moving to the next neighborhood of 4 qubits. Therefore, for a circuit with $N$ qubits, we use at most $4 \times N$ decoy circuits to estimate the best DD sequence for the input circuit. Thus, the search complexity increases linearly with the number of qubits. To accommodate the limitations of decoy circuits and obtain a more robust DD sequence, we take a conservative estimate by combining the top two sequences predicted by ADAPT. For example, if the two best predictions are ``1001'' and ``1011'', the chosen sequence is ``1011''.
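The localized search and the conservative merge described above can be sketched as follows (our reconstruction, not the paper's released code; `fidelity` stands in for running the decoy circuit on hardware, and all function names are ours):

```python
from itertools import product

def _apply(mask, qubits, bits):
    # Return a copy of the DD on/off mask with the given window bits set.
    out = list(mask)
    for q, b in zip(qubits, bits):
        out[q] = b
    return out

def localized_search(n_qubits, fidelity, window=4):
    # Evaluate all 2^window DD patterns within each window of qubits,
    # keeping other qubits fixed: at most (2^window / window) * n_qubits
    # decoy-circuit evaluations, i.e. linear in the number of qubits.
    mask = [0] * n_qubits
    for start in range(0, n_qubits, window):
        qubits = range(start, min(start + window, n_qubits))
        best = max(product([0, 1], repeat=len(qubits)),
                   key=lambda bits: fidelity(_apply(mask, qubits, bits)))
        mask = _apply(mask, qubits, best)
    return mask

def conservative_merge(seq_a, seq_b):
    # Bitwise union of the top two predictions, e.g. "1001" + "1011" -> "1011".
    return "".join("1" if a == "1" or b == "1" else "0"
                   for a, b in zip(seq_a, seq_b))
```

With a toy oracle that rewards matching a hidden best mask, the window-by-window search recovers that mask while evaluating only 16 patterns per window.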
\subsection{Design Implementation}
Figure~\ref{fig:adapt_schematic} shows the workflow of ADAPT and how it fits into the existing NISQ execution model.
\subsubsection{Integration in the Compiler Tool-Flow}
\\
ADAPT is applied at the end of existing compiler passes, after a program has been compiled into the machine-specific instructions of the target hardware. Typically, quantum compilers decompose a program into two-qubit CNOT and single-qubit gates in the first pass and then map the program qubits onto the physical devices. During both of these compiler passes, redundant gates are eliminated. Moreover, the mapping pass schedules the gates to maximize operational concurrency and minimize the number of SWAPs~\cite{li2018tackling}. Additionally, the mapping pass accounts for the error-rates of the qubits and gates during qubit mapping and instruction scheduling~\cite{murali2019noise,tannu2019not}. ADAPT is orthogonal to these existing optimizations and is applied after all the compiler passes. Therefore, it can be seamlessly integrated with any existing NISQ compiler.
\subsubsection{Finding Idle Qubits}
\\
To find idle qubits in a program, we translate the executable obtained from the compiler into an intermediate representation termed the {\em Gate Sequence Table (GST)}, as shown in Figure~\ref{fig:adapt_schematic}. The GST slices the compiled circuit into layers and captures the data dependencies between the qubits in time. Note that the typical circuit representation used by the decomposition and mapping passes does not capture idle cycles because gate latencies are not embedded in circuit representations. In contrast, the GST uses timestamps, generated from the physical gate latencies available in the machine calibration data, to indicate the start and end times of each gate. By querying the GST, we can identify the exact idle period for any qubit in a quantum program and insert the DD gate sequences.
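A minimal sketch of this idle-window query (our illustration; the actual GST format is not specified in this excerpt) derives each qubit's idle windows from per-gate start/end timestamps:

```python
from collections import defaultdict

def idle_windows(gates, total_duration):
    # gates: list of (qubits, start_ns, end_ns) entries from the timed schedule.
    # Returns, per qubit, the list of (start, end) gaps between its operations.
    busy = defaultdict(list)
    for qubits, start, end in gates:
        for q in qubits:
            busy[q].append((start, end))
    windows = {}
    for q, spans in busy.items():
        spans.sort()
        gaps, t = [], 0
        for start, end in spans:
            if start > t:           # gap before this gate begins
                gaps.append((t, start))
            t = max(t, end)
        if t < total_duration:      # trailing idle time until circuit end
            gaps.append((t, total_duration))
        windows[q] = gaps
    return windows
```

For instance, for three serialized CNOTs on qubit pairs (0,1), (1,2), (2,3) of 300 ns each, qubit 0 is idle from 300 ns to the end of the circuit, mirroring the BV example above.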
\subsubsection{Pulses for Implementing DD}
\\
In theory, any sequence of instructions that is effectively an identity gate can be used to implement DD. For example, we could simply use XX or YY pulses, as each pair results in an identity operation. In this paper, we study two different DD protocols: the XY-4 sequence and the IBMQ-DD sequence, as both have been shown to be effective for superconducting qubit devices in general~\cite{pokharel2018demonstration,chen2021exponential}, and particularly on IBMQ systems~\cite{pokharel2018demonstration,IBMDD}. The XY-4 sequence, shown in Figure~\ref{fig:dd_sequences}(a), is the repeated execution of ``X-Y-X-Y'' gates, which takes about $210\ ns$ on most IBMQ machines with the optimal decomposition shown in Figure~\ref{fig:dd_sequences}(b). Note that since ADAPT adds DD sequences to the already compiled executable, it must add the DD gates in the machine-compliant instruction format. Each ``X'' and ``SX'' gate takes about $35\ ns$, and the ``RZ'' gate is performed in software~\cite{mckay2017efficient}. For this protocol, ADAPT inserts DD gates in any idle window longer than $210\ ns$, and repeats the XY-4 sequence to fill larger idle windows. For the IBMQ-DD sequence, ADAPT uses an approach similar to prior work~\cite{IBMDD} and inserts the decomposition of ``X($\pi$)'' and ``X($-\pi$)'' gates evenly during the idle period, as shown in Figure~\ref{fig:dd_sequences}. We also use a 10-nanosecond free-evolution buffer after each X and Y gate, consistent with a prior study on IBM systems~\cite{pokharel2018demonstration}.
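Filling an idle window with XY-4 repetitions reduces to simple arithmetic; the sketch below (ours, with the $210\ ns$ figure taken from the text) computes how many repetitions fit and how much residual delay remains:

```python
# Duration of one XY-4 repetition on IBMQ machines, per the decomposition
# described in the text (~210 ns).
XY4_NS = 210

def schedule_xy4(idle_ns, xy4_ns=XY4_NS):
    # Number of full XY-4 repetitions that fit in the idle window, plus the
    # leftover time that stays as a plain delay. Windows shorter than one
    # repetition receive no DD gates (reps = 0).
    reps = idle_ns // xy4_ns
    residual_delay = idle_ns - reps * xy4_ns
    return reps, residual_delay
```

For a $1.2\ \mu s$ idle window (the duration used in the characterization circuits), five repetitions fit with 150 ns of residual delay.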
\begin{figure}
\caption{(a) XY-4 DD sequence, (b) decomposition of XY-4 sequence on IBMQ systems, (c) IBMQ-DD using X($\pi$)-- X($-\pi$) gates, and (d) decomposition of IBMQ-DD sequence inserted in the idle window between two gates.}
\label{fig:dd_sequences}
\end{figure}
\section{Experimental Methodology}
\label{sec:evaluation}
In this section, we discuss the evaluation infrastructure used to estimate the effectiveness of ADAPT.
\subsection{Compiler}
We use IBM's Qiskit tool-chain to compile the benchmarks~\cite{qiskit}. We use the Qiskit transpiler with "{\em noise adaptive}" mapping~\cite{murali2019noise,tannu2018case}, the "{\em sabre routing}" policy~\cite{li2018tackling}, and the optimization level 3 flag. We ensure an identical mapping and sequence of CNOT gate operations across all the policies evaluated for each benchmark. While we use this compiler tool-chain, ADAPT is independent of the compiler being used, so any other compiler may be used as well. As ADAPT integrates with existing compilers as a post-compile step, we add the optimal decomposition of the DD gates to obtain the final compiled quantum object. For the implementation of the DD pulses, we use both the XY-4 and IBMQ-DD (XX) sequences because they have been previously studied for IBMQ systems and represent the broad categories of DD protocols available. Although beyond the scope of this paper, ADAPT can be implemented using other DD sequences and specialized gate pulses as well.
\subsection{Quantum Hardware Platforms}
For all our evaluations, we use three different quantum computers from IBM. The details of each machine and its average error trends are listed in Table~\ref{tab:ibmqplatforms}. We also perform all evaluations of a given benchmark on a given machine within the same calibration cycle to establish a fair comparison.
\begin{table}[htp]
\centering
\begin{small}
\caption{Error Characteristics of IBMQ Hardware}
\setlength{\tabcolsep}{0.15cm}
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{ | c || c || c | c || c | c | }
\hline
\multirow{2}{*}{Machine Name} & Num. of & \multicolumn{2}{c|}{Error Rate (in \%)} & T1 & T2\\
\cline{3-4}
& Qubits & CNOT & Measurement & ($\mu s$) & ($\mu s$) \\
\hline \hline
\texttt{IBMQ-Guadalupe} & {16} & 1.27 & 1.86 & 71.7 & 85.5 \\ \hline
\texttt{IBMQ-Paris} & {27} & 1.28 & 2.47 & 80.8 & 83.4 \\ \hline
\texttt{IBMQ-Toronto} & {27} & 1.52 & 4.42 & 105 & 114 \\ \hline
\end{tabular}
\label{tab:ibmqplatforms}
\end{small}
\end{table}
\ignore{
We also describe the latencies of the CNOT operations on each hardware in Table~\ref{tab:latency} as it dictates overall idle times experienced by a program in general.
\begin{table}[htp]
\centering
\begin{small}
\caption{Operational Latency on IBMQ Hardware}
\setlength{\tabcolsep}{0.15cm}
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{ | c || c || c | c | c|}
\hline
\multirow{2}{*}{Machine Name} & Num. of & \multicolumn{3}{c|}{CNOT Latency (in ns)}\\
\cline{3-5}
& Physical Links & Min & Mean & Max \\
\hline \hline
\texttt{IBMQ-Guadalupe} & {16}& 263 & 394 & 583 \\ \hline
\texttt{IBMQ-Paris} & {28} & 263 & 431 & 761 \\ \hline
\texttt{IBMQ-Toronto} & {28} & 242 & 441 & 860 \\ \hline
\end{tabular}
\label{tab:latency}
\end{small}
\end{table}
}
\subsection{Benchmarks}
We use benchmarks of different sizes and structures to test the effectiveness of ADAPT. Table~\ref{tab:benchmarks} summarizes the benchmarks used in this paper. The size and type of benchmarks are derived from prior works on software mitigation of hardware errors~\cite{li2018tackling,tannu2019not,murali2019noise,micro1,nishio,li2020qasmbench}. Additionally, we run some of these benchmarks with different initial conditions. For example, QFT-7A and QFT-7B have identical structures, but compute the Fourier transform of two different quantum states. This tests the effectiveness of decoy circuits for the evolution of different quantum states.
\begin{table}[htp]
\centering
\begin{small}
\caption{ Quantum Benchmark Characteristics}
\setlength{\tabcolsep}{0.1cm}
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{ | c || c || c | c | c | c | }
\hline
Benchmark & Benchmark & Num & Total & Circ & Avg. Idle\\
Description & Name & Qubits & Gates & Depth & Time ($\mu s$) \\
\hline \hline
\multirow{2}{*}{Bernstein Vazirani} & BV-7 & 7 & 95 & 30 & 8.8 \\
\cline{2-6}
& BV-8 & 8 & 132 & 55 & 15.2 \\
\hline
\hline
\multirow{4}{*}{Fourier Transform} & QFT-6A & 6 & 102 & 47 & 14.6 \\
\cline{2-6}
& QFT-6B & 6 & 228 & 148 & 27.9 \\
\cline{2-6}
& QFT-7A & 7 & 360& 202&50.0 \\
\cline{2-6}
& QFT-7B & 7 & 354 & 207 & 49.7 \\
\hline
\hline
\multirow{4}{*}{Approx. Optimization}
& QAOA-8A & 8 & 32 & 19 & 4.6 \\
\cline{2-6}
& QAOA-8B & 8 & 60 & 30 & 8.1 \\
\cline{2-6}
& QAOA-10A & 10 & 77 & 37 & 13.5 \\
\cline{2-6}
& QAOA-10B & 10 & 180 & 50 & 7.7 \\
\hline
\hline
Phase Estimation & QPEA-5 & 5 & 175 & 94 & 11.9 \\
\hline
\end{tabular}
\label{tab:benchmarks}
\end{small}
\end{table}
\subsection{ Reliability Metrics}
To quantify program reliability, we compare the output obtained on a real machine against that of an idealized (error-free) machine, i.e., the results obtained on an error-free simulator. We quantify program reliability by computing the program fidelity based on the distance between probability distributions. We use the {\em Total Variation Distance (TVD)}~\cite{tvd:wiki} to evaluate the fidelity by measuring the distance between the ideal output probability distribution ($P$) and the output distribution of the real experiment ($Q$). A fidelity of 1 means identical distributions, whereas 0 means completely different distributions. Therefore, a higher fidelity is desirable.
\begin{align}
TVD\,(P, Q) &= \frac{1}{2} \, \sum_{i} |P_i - Q_i| \\
Fidelity &= 1 - TVD\,(P, Q)
\end{align}
Prior works have used similar metrics such as Success Probability~\cite{tannu2018case,murali2019noise,nishio}. We use a distance-based metric because the output of a quantum program can be a probability distribution with multiple correct answers. Also, TVD closely matches the prior metrics.
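A minimal implementation of this metric, assuming the output distributions are given as dictionaries mapping bitstrings to probabilities:

```python
# Total variation distance between the ideal distribution P and the measured
# distribution Q, and the derived fidelity used as the reliability metric.
def tvd(p, q):
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

def fidelity(p, q):
    return 1.0 - tvd(p, q)

ideal = {"00": 0.5, "11": 0.5}
noisy = {"00": 0.4, "11": 0.4, "01": 0.1, "10": 0.1}
print(round(fidelity(ideal, noisy), 3))  # -> 0.8
```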
\subsection{Number of Trials}
We perform experiments with up to 32,000 shots, depending on the program size, to obtain the output probability distributions. The largest benchmark in our study uses ten qubits (QAOA-10), so it can produce a maximum of 1024 ($2^{10}$) unique solutions; the number of samples used in our study is therefore sufficiently large for these workloads.
\begin{figure*}
\caption{Relative Fidelity of different policies on 27-qubit IBMQ-Toronto using (a) XY4 and (b) IBMQ-DD sequences.}
\label{fig:toronto_result}
\end{figure*}
\subsection{Competing Policies}
For our evaluations, we use four competing policies, described next:
\begin{enumerate}[leftmargin=0cm,itemindent=.5cm,labelwidth=\itemindent,labelsep=0cm,align=left, itemsep=0.08cm, listparindent=0.3cm]
\item \textbf{No DD (Baseline)}: DD is not applied to any idle qubit.
\item \textbf{All-DD}: DD is applied on all the program qubits during any time period when they are idle.
\item \textbf{ADAPT}: The optimal DD sequence is obtained from a structured search using decoy circuits and applied.
\item \textbf{Runtime Best}: Evaluates the program with all possible DD sequences ($2^N$ for an $N$-qubit program) and selects the sequence with the highest fidelity at runtime.
\end{enumerate}
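The difference in search cost between Runtime-Best and an ADAPT-style localized search can be made concrete with a small sketch; the enumeration below mirrors the $2^N$ Runtime-Best search space, while the scoring function is only a stand-in for running a decoy circuit on hardware:

```python
# Enumerate every possible set of qubits to protect with DD (the Runtime-Best
# search space) and pick the one with the highest decoy-circuit score.
from itertools import chain, combinations

def all_dd_subsets(n_qubits):
    """All 2^N candidate qubit subsets for DD insertion."""
    qubits = range(n_qubits)
    return list(chain.from_iterable(
        combinations(qubits, r) for r in range(n_qubits + 1)))

def runtime_best(n_qubits, score):
    """Exhaustive search: the subset whose (stand-in) score is highest."""
    return max(all_dd_subsets(n_qubits), key=score)

print(len(all_dd_subsets(10)))                   # 1024, as noted for QAOA-10
print(runtime_best(3, lambda s: len(s)))         # toy score favours DD on all qubits
```

ADAPT avoids this exponential enumeration by scoring only subsets within a small neighborhood (up to four qubits at a time in the default design).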
\section{Results}
In this section, we provide the evaluation results for ADAPT across three different quantum computers: 27-qubit IBMQ-Paris, 27-qubit IBMQ-Toronto, and 16-qubit IBMQ-Guadalupe.
\subsection{Results for IBMQ-Paris}
Figure~\ref{fig:Paris} shows the fidelity of four benchmarks for the XY4 protocol compared to the baseline (without DD). The number below each benchmark label specifies the baseline fidelity. We observe that, on average, DD improves fidelity. Applying DD on all qubits improves the fidelity by 1.97x on average and by up to 2.89x, whereas ADAPT improves the application fidelity by 3.27x on average and by up to 5.7x. We also observe that the effectiveness of DD increases with program size, which is expected because larger programs have more operations and greater depth, leaving room for longer idle time windows. We also observe that the optimal sequence at runtime outperforms both ADAPT and All-DD. This is due to the limitations of the decoy circuits and the limited search space explored by ADAPT. Our default ADAPT design searches for the best sequence in a neighborhood of up to four qubits at a time. For example, for QAOA-10, ADAPT uses only 36 decoy circuits, rather than the entire space of 1024 possible decoy circuits. Nonetheless, we observe that the fidelity improvement of ADAPT is close to the runtime best and higher than All-DD for these workloads.
We were unable to perform experiments using the IBMQ-DD protocol for IBMQ-Paris because of changes in the basis gates and subsequent retirement of the machine.
\begin{figure}
\caption{Relative fidelity of different policies on 27-qubit IBMQ-Paris using XY4 DD sequence.}
\label{fig:Paris}
\end{figure}
\begin{figure*}
\caption{Relative Fidelity on 16-qubit IBMQ-Guadalupe using (a) XY4 and (b) IBMQ-DD sequences.}
\label{fig:guadalupe_result}
\end{figure*}
\subsection{Results for IBMQ-Toronto}
Figure~\ref{fig:toronto_result} shows the fidelity of the All-DD, ADAPT, and Runtime-Best policies on the 27-qubit IBMQ-Toronto machine, relative to the baseline, for the two DD protocols. The number below each benchmark label specifies the baseline fidelity of the application. The structure of the QFT circuit causes qubits to remain idle for substantial time periods. For example, in \texttt{QFT-6B}, Qubit-0 is idle for 90\% of the total execution time. Although a long sequence of DD gates adds a significant amount of single-qubit gate error, it is still effective in improving the overall fidelity. Overall, ADAPT outperforms the baseline and improves the fidelity by 1.52x on average and by up to 3.1x for the XY4 protocol. Compared to All-DD, ADAPT improves the fidelity by 1.3x on average and by up to 1.89x. For the IBMQ-DD scheme, ADAPT improves the fidelity by 1.47x on average and by up to 2.67x compared to the baseline. Thus, ADAPT is a generalized technique that identifies the qubits most vulnerable to idling errors at runtime, and it is applicable irrespective of the DD protocol.
\subsection{Results for IBMQ-Guadalupe}
Figure~\ref{fig:guadalupe_result} shows the fidelity of the three DD policies on the 16-qubit IBMQ-Guadalupe machine, normalized to the No-DD baseline. Note that this is one of the most recently released IBMQ systems, with significantly reduced gate latencies and error rates and improved coherence times. To test the robustness of ADAPT, we therefore run slightly larger workloads (in terms of number of qubits, two-qubit operations, and circuit depth) on this machine. Here too, the number below each benchmark label specifies the baseline fidelity of the application. We observe that applying DD on all idle qubits for such large programs slightly degrades the fidelity in specific cases (\texttt{QFT-7A}, for example). Note that the dominant source of errors in these circuits is not idling errors but gate and measurement errors. However, we observe that ADAPT is more robust and generally outperforms the All-DD policy. For example, the fidelity of the \texttt{QAOA-10B} benchmark improves by 3.1x compared to the baseline (without DD) and by 4.65x compared to All-DD. We also observe that in all these cases, the optimal DD sequence at runtime outperforms the All-DD policy.
Table~\ref{tab:summary} summarizes the minimum, average, and maximum fidelity of the three dynamical decoupling policies, normalized to the baseline, on the three IBMQ machines ranging from 16 to 27 qubits. Overall, ADAPT improves application fidelity by 1.7x on average and by up to 5.73x compared to No-DD.
\begin{table}[htp]
\centering
\begin{small}
\caption{Summary of Results}
\setlength{\tabcolsep}{0.05cm}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{ | c || c | c | c || c | c | c|| c | c | c |}
\hline
\multirow{2}{*}{Machine} & \multicolumn{3}{c||}{All-DD/ XY4} & \multicolumn{3}{c||}{ADAPT/ XY-4} & \multicolumn{3}{c|}{ADAPT/ IBMQ-DD} \\
\cline{2-10}
& Min & GMean & Max & Min & GMean & Max & Min & GMean & Max \\
\hline \hline
Paris & 1.25 & 1.97 & 2.89 & 1.55 & 3.27 & 5.73 & -- & -- &-- \\
\hline
Toronto & 0.58 & 1.17& 2.61& 0.68 & 1.23 &3.06 & 0.99 &1.42 &2.67 \\
\hline
Guadalupe & 0.67 & 1.10 & 1.57 & 1.1 & 1.31 & 3.10 & 0.92 & 1.23 & 1.33\\
\hline
\end{tabular}
\label{tab:summary}
\end{small}
\end{table}
\ignore{
\subsection{Results for Approximation Ratio Gap}
We use Approximation Ratio Gap (ARG) to determine the effectiveness of ADAPT in the context of the QAOA benchmarks in particular. ARG determines the percentage difference between the approximation ratios on an ideal simulator and noisy outputs obtained on real hardware. A lower ARG denotes higher performance and closeness to the ideal output.~\cite{alam2020circuit}. Table~\ref{tab:eev_comparison} shows the ARG for some of the QAOA benchmarks evaluated in this paper for different machines.
\begin{table}[htp]
\centering
\begin{small}
\caption{Approximation Ratio Gap for XY4 Sequence (Lower is better)}
\setlength{\tabcolsep}{0.05cm}
\renewcommand{\arraystretch}{1.25}
\begin{tabular}{ | c || c || c | c | c | c | }
\hline
{Machine} & Program & No-DD & All-DD & ADAPT & Runtime Best \\
\hline \hline
\multirow{4}{*}{IBMQ-Guadalupe} & QAOA-8-B & 14.88 & 19.97 & 19.32 & 12.64 \\
\cline{2-6}
& QAOA-10 & 17.41 & 19.47 & 13.36 & 13.36 \\
\cline{2-6}
& QAOA-8 & 14.62 & 19.86 & 18.20 & 13.53 \\
\cline{2-6}
& QAOA-9-B & 13.85 & 23.53 & 19.16 & 12.65 \\
\hline
\end{tabular}
\label{tab:benchmarks}
\end{small}
\end{table}
}
\subsection{Impact of DD Pulse Type}
We compare the standalone effectiveness of the two protocols studied in this paper using an additional characterization experiment: the XY-4 sequence versus the state-of-the-art XX sequence recently shown to be effective on IBMQ systems~\cite{IBMDD}. In the experiment, we prepare three different circuits for variable idle times (T), as shown in Figure~\ref{fig:dd_sequence_comparison}. In each circuit, a quantum state is prepared by performing a single-qubit rotation, and the associated qubit is then kept idle. Throughout the idle time, CNOT operations are repeatedly performed on a specific physical link of the device that is not connected to the qubit under study. Finally, the qubit under study is brought back to the ground state by performing the inverse rotation. In the first circuit, no DD pulses are inserted, whereas in the second circuit XY4 DD pulses are inserted throughout the idle period. The third circuit uses IBM's DD sequence, in which X($\pi$) and X($-\pi$) are evenly placed by waiting for specific delay slots, as shown in Figure~\ref{fig:dd_sequence_comparison}(c). The time period of each individual delay slot is computed from the difference between the idle time and the length of the two X rotations, as described in Equation~\eqref{eq:delayslot}. Note that we use the optimal decomposition for both DD pulses to ensure a fair comparison.
\begin{equation}
\label{eq:delayslot}
\textrm{Delay}\left(\frac{\tau}{4}\right) = \frac{T - \textrm{length of X}(\pi) - \textrm{length of X}(-\pi)}{4}
\end{equation}
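As a numeric reading of Equation~\eqref{eq:delayslot}, a small helper can be used, assuming the roughly $35\ ns$ X-gate latency quoted earlier in the paper (the function name is ours):

```python
# Duration tau/4 of each delay slot around the two X pulses in the
# IBMQ-DD sequence, per Equation (eq:delayslot).
def delay_slot_ns(idle_ns, x_len_ns=35.0):
    """Each of the four free-evolution slots for an idle window of idle_ns."""
    return (idle_ns - 2 * x_len_ns) / 4.0

print(delay_slot_ns(1000))  # -> 232.5 ns per slot for a 1 microsecond window
```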
We run these circuits for each qubit and physical link combination (a total of 224 possible combinations) on the 16-qubit IBMQ-Guadalupe, and Figure~\ref{fig:dd_sequence_comparison}(d) shows the average fidelity of each circuit as the idle time is increased. We observe that, on average, the XY4 sequence outperforms the IBMQ DD sequence as idle time increases. Similar results are reported for the Google Sycamore hardware~\cite{chen2021exponential,ai2021exponential} as well as in other works~\cite{ahmed2013robustness}. This is because when idle periods are longer, there is still sufficient delay between the two X rotations of the IBMQ-DD sequence during which errors can accumulate. While such long idle periods may not be observed in the random circuits used in Quantum Volume experiments, they exist in many other practical quantum applications (QFT, for example). For circuits with long idling periods, we observe that the IBMQ DD protocol often performs worse than applying XY4 continuously, because the latter does not experience large delays between DD pulses. To account for this in our application-level evaluations, we use the IBMQ-DD sequence in a more conservative manner by inserting the DD gate sequence multiple times for large idle periods. This enables a fair assessment of the DD protocol.
\begin{figure}
\caption{A characterization circuit (a) without any DD (b) with the XY4 DD sequence (c) IBMQ DD sequence. (d) Mean fidelity from individual DD sequences on 16-qubit IBMQ Guadalupe when the idle time is increased.}
\label{fig:dd_sequence_comparison}
\end{figure}
\ignore{
We also study the impact of both the DD sequences at the application level when DD sequences are inserted on all the qubits and when applied to selected qubits only (optimal chosen from the runtime results obtained from all possible qubit combinations). Figure~\ref{fig:geomean} shows the geometric mean fidelity of 50 different benchmarks (up-to size 7) executed on multiple IBMQ hardware of quantum volume 32, relative to when no DD is applied. We observe that on an average XY4 performs equally well as IBM's state-of-the-art DD sequence and the optimal XY4 sequence outperforms the optimal IBMQ DD sequence \textcolor{red}{placeholder image for now}.
\begin{figure}
\caption{Average impact of XY4 and IBMQ XX DD sequences on application fidelity when DD is applied to all qubits vs. when selectively applied to the best possible qubit combination (based on runtime data)}
\label{fig:geomean}
\end{figure}
In general, DD and ADAPT can be implemented with other pulses as well. Many advanced DD sequences exist in addition to the two sequences studies above, but most require custom pulse control or longer pulse duration, which limits applicability. For example, the XY-4 pulse latency is 320 nanoseconds, so for any idle slot larger than 320 nanoseconds, ADAPT can insert XY-4 to mitigate idling error. In theory, and for single qubit experiments, concatenated DD (CDD) pulses like XY-16 (XYXY-YXYX-XYXY-YXYX) are more effective in decoupling phase noise~\cite{souza2011robust}. However, XY-16 requires a latency of 1.6 microseconds, which means these pulses can only be applied for idle periods that are longer than 1.6 microseconds. In most NISQ applications, significant number of idle slots tend to have time duration less than one microsecond. Thus, DD protocols with long pulse duration are not as appealing for NISQ applications. }
\section{Related Work}
\noindent{\textbf{NISQ Compilers.}} With the increasing number of qubits and improving device quality of near-term quantum architectures, quantum software can play a vital role in improving the reliability of NISQ applications~\cite{CCS,FCNature,NAE}. To improve application fidelity, existing quantum compilers focus on minimizing the number of gates~\cite{CGO,guerreschi,li2018tackling,DACWille} by searching for the best possible qubit mappings and sequences of SWAP operations. Moreover, recent works on noise-aware compilation use the underlying error characteristics of the hardware to avoid specific physical qubits and links, such that the impact of worst-case errors on application fidelity is reduced~\cite{tannu2018case,murali2019noise,finigan2018qubit,nishio,murali2019full,patel2020disq,patel2020ureqa,patel2021qraft,murali2019noiseadaptive,micro1,micro3}. Other works specifically target certain types of errors, such as crosstalk between concurrent CNOT operations and measurement errors~\cite{FNM,murali2020software,li2018tackling}. However, to the best of our knowledge, no compiler optimization so far focuses on mitigating idling errors at the application level. As programs grow in size, their vulnerability to idling errors increases, and our paper focuses on tackling these errors.
\noindent{\textbf{Dynamical Decoupling:}} This is a well-studied noise mitigation technique that applies to a variety of qubit technologies~\cite{DDLidarAPS,DDLidarPeriodic,oliver,PhysRevLett.82.2417,paz,souza2011robust}. DD is ubiquitously used in qubit benchmarking and other small-scale experiments. More recently, Tripathi et al. discussed the presence of crosstalk due to inter-qubit couplings and the role of DD in suppressing these errors on superconducting qubits~\cite{tripathi2021suppression}. However, that study does not consider the crosstalk from CNOT operations, which exists at the application level. Also, a generic theoretical framework for DD exists that can be used to integrate DD with fault-tolerant and noise mitigation protocols~\cite{paz}. While effective for calibration and well-studied from a theoretical perspective, the trade-offs of using DD at the application level have not been fully studied.
\noindent{\textbf{Dynamical Decoupling for NISQ:}}
Our work is inspired by the experimental demonstration of DD on IBM and Rigetti hardware in suppressing phase errors and improving coherence time (T2) using XY-4 and XX pulses~\cite{pokharel2018demonstration,IBMDD}. However, the use of DD to mitigate idling errors due to operational crosstalk, especially crosstalk generated by long latency CNOT gates, is not well explored. The recent study on IBMQ hardware only considers crosstalk from inter-qubit ZZ couplings~\cite{tripathi2021suppression}.
Recently, IBM demonstrated a milestone achievement of reaching a quantum volume of 64~\cite{IBMDD} and subsequently 128 for their superconducting machine, with DD as one of the components used in benchmarking the system. IBM used a custom DD protocol, wherein the compiler inserts DD pulses on all qubits wherever possible. This is similar to the {\em DD-on-All Qubits} configuration that we study as one of our baselines. We show that ADAPT is more effective than DD-on-All. DD on all qubits is also used in the study of quantum error correction codes (more specifically repetition codes) experiments on Google Sycamore device~\cite{ai2021exponential}.
Strikis et al.~\cite{arm2020learningbased} proposed an error mitigation strategy that inserts an extra gate before and after each operation to reduce both active and idle errors. They propose a learning scheme to identify the type of extra gates for error mitigation. Similarly, Zlokapa et al.~\cite{alex2020deep} propose to train a deep neural network to learn the noise characteristics of a 5-qubit machine and use this network to identify the best DD pulses. ADAPT is orthogonal to these existing approaches in that it tries to identify the subset of qubits that should use DD at runtime depending on the program and device characteristics. ADAPT can use the best DD sequence for that subset of qubits. While we use decoy circuits to find the best DD sequence in ADAPT, decoy circuits have been used in other scenarios as well, such as verifying cryptographic protocols~\cite{DecoyPR}.
\section{Conclusion}
The quality and size of near-term quantum computers are improving. However, the device error rates are still quite high and limit the fidelity of applications executed on them. In addition to facing errors while performing gate or measurement operations, qubits can also accumulate errors while remaining idle. Such idling errors present significant challenges in executing large programs.
In this paper, we focus on mitigating idling errors for NISQ applications.
Prior works have used {\em Dynamical Decoupling (DD)} to reduce idling errors by applying a sequence of DD gates while a qubit is idle. While DD has been shown to be effective at small scale, its applicability at the application level has not yet been fully studied. We show that applying DD to all the qubits in a program is sub-optimal and may even degrade the application fidelity in some cases, because DD is implemented by introducing additional quantum gates. If the collective error rate of these additional operations surpasses the idling error rate, DD can adversely impact the overall fidelity at the application level. Thus, to reduce the impact of idling errors at the application level, dynamical decoupling must be applied judiciously. We propose a software framework, {\em Adaptive Dynamical Decoupling (ADAPT)}, to identify the subset of qubits that provides the highest reliability with DD. ADAPT uses decoy circuits and a localized search algorithm to perform a trial-and-error search that identifies the best subset of qubits on which to apply DD. We evaluate ADAPT on three quantum computers from IBM using two types of DD protocols and show that ADAPT improves fidelity by 1.86x on average and by up to 5.73x compared to not using DD. Compared to DD on all qubits, ADAPT improves the fidelity of applications by 1.2x.
\begin{acks}
We thank Cody Jones for helpful discussions and feedback. Poulami Das was funded by the Microsoft Research PhD Fellowship. We also thank Sanjay Kariyappa and Keval Kamdar for editorial suggestions. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
\end{acks}
\end{document} |
\begin{document}
\pagenumbering{gobble}
\title[Identification Methods for Ordinal Potential Differential Games]{Identification Methods for Ordinal Potential Differential Games}
\author*{\fnm{Balint} \sur{Varga}$^*$} \email{[email protected]}
\author{\fnm{Da} \sur{Huang}}
\author{\fnm{S\"oren} \sur{Hohmann}}
\affil{\orgdiv{ Institute of Control Systems (IRS)}, \orgname{Karlsruhe Institute of Technology}, \orgaddress{\city{Karlsruhe}, \postcode{76131}, \country{Germany}}}
\abstract{
This paper introduces two new identification methods for the class of ordinal potential differential games (OPDGs). Potential games offer benefits such as the existence and computability of the Nash equilibrium. While the analysis of static ordinal potential games has been explored in previous literature, their applicability to various engineering applications is limited. Although the core idea of OPDGs has been introduced previously, the question of how to systematically identify a potential game for a given differential game remains unanswered. To address this issue, this paper proposes two identification methods that provide the potential cost function for a given differential game. Both identification methods are based on linear matrix inequalities.
The first identification method aims to minimize the condition number of the potential cost function's parameters, providing a faster and more precise technique compared to earlier solutions. Additionally, a feasibility analysis for system structure requirements is provided.
The second identification method has a less strict formulation, allowing it to identify an OPDG in cases in which the first identification method fails to provide a solution.
These two novel identification methods are verified through simulations. The results demonstrate their advantages and potential in designing and analyzing cooperative systems and controllers.}
\keywords{Potential Games, Nash Equilibrium, LMI Optimization, Differential Games, Engineering Application}
\maketitle
\thispagestyle{firstpage}
\pagestyle{empty}
\section{Introduction}
Game theory is commonly used to model the interactions between rational agents~\cite{1998_DynamicNoncooperativeGame_basar,2005_LQDynamicOptimization_engwerda,2016_PotentialGameTheory_la}, which arise in a wide range of applications, such as modeling the behavior of companies in the stock market \cite{2001_GameTheoryBusiness_chatterjee}, solving routing problems in communication networks \cite{2006_AdaptiveChannelAllocation_nie}, and studying human-machine cooperation \cite{2014_NecessarySufficientConditions_flad}.
Utilizing game theory's systematic modeling and design techniques, effective solutions for various applications have been analyzed and designed, see e.g.~\cite{2021_OnlineInverseLinearQuadratic_inga}.
One of the core solution concepts of game theory is the Nash equilibrium (NE), cf.~\cite{1998_DynamicNoncooperativeGame_basar}. In an NE of a game, each player attains their optimal utility, taking the optimal actions of all other players into account. If the players find themselves in an NE, there is no rational reason for any of them to change their action.
The first idea of replacing the original structure of a non-cooperative strategic game with $N$ players by a fictitious function was given by \textit{Rosenthal} \cite{1973_ClassGamesPossessing_rosenthal}.
Based on this idea, the formal definitions of potential games were first introduced by \textit{Monderer and Shapley} \cite{1996_PotentialGames_monderer}. The general concept of \textit{potential games} is presented visually in Figure~\ref{fig:PG_general_idae}: The original non-cooperative strategic game\footnote{Note that it is assumed that the original game is given; therefore the term \textit{given game} is used interchangeably to emphasize this property of the original game.} with $N$ players and their cost functions $J\play{i}$, $i=1,\dots,N$, is replaced with one single potential function. This potential function provides a single mapping of the strategy space $\mathcal{U} = \mathcal{U}\play{1} \times \dots \times \mathcal{U}\play{N}$ of the original game to the real numbers
\begin{equation} \label{eq:chap4_pot_intro}
J\play{p} : \mathcal{U} \rightarrow \mathbb{R},
\end{equation}
instead of $N$ mappings of the combined strategy set of the players to the set of real numbers
\begin{equation} \label{eq:chap4_J_all_i}
J\play{i} : \mathcal{U} \rightarrow \mathbb{R}, \forall i \in \mathcal{P},
\end{equation}
where $\mathcal{U}\play{i}$ represents the strategy space of player $i$. The potential function therefore serves as a substitute model for the original game, while retaining all its essential information. Intuitively, the NE of the non-cooperative strategic game can be computed more easily using the potential function~\eqref{eq:chap4_pot_intro} than via the coupled optimization of \eqref{eq:chap4_J_all_i} in the original game. A further beneficial feature of potential games is that they possess at least one NE. Furthermore, if the potential function is strictly concave and bounded, the NE is unique. These advantageous properties make potential games a highly appealing tool for strategic game analysis, cf.~\cite{2016_SurveyStaticDynamic_gonzalez-sanchez}, \cite[Section 2.2]{2016_PotentialGameTheory_la}.
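For reference, the sign-preservation condition that characterizes ordinal potential games in Monderer and Shapley's static setting, and which the OPDG concept lifts to differential games, can be sketched as follows. This is a standard formulation written in the notation of this paper, where $u\play{-i}$ denotes the strategies of all players other than $i$:

```latex
% Ordinal potential condition (static case): the potential function preserves
% the sign of every unilateral change in a player's cost.
J\play{i}\!\left(u\play{i}, u\play{-i}\right) - J\play{i}\!\left(v\play{i}, u\play{-i}\right) > 0
\;\Longleftrightarrow\;
J\play{p}\!\left(u\play{i}, u\play{-i}\right) - J\play{p}\!\left(v\play{i}, u\play{-i}\right) > 0,
\qquad \forall i \in \mathcal{P},\;\; \forall\, u\play{i}, v\play{i} \in \mathcal{U}\play{i}.
```

Any maximizer (or minimizer, depending on the convention) of $J\play{p}$ is then an NE of the original game, which is what makes the identification of such a function valuable.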
Potential games have been extensively analyzed in the literature~\cite{1997_CharacterizationOrdinalPotential_voorneveld, 1999_PotentialGamesPurely_kukushkin,2012_StateBasedPotential_marden}. However, the focus of these works is primarily on \textit{potential static games}, which do not incorporate underlying dynamical systems.
In contrast, \textit{differential games} are particularly valuable for modeling and controlling cooperative systems in engineering applications, as they account for dynamic systems, see e.g.~\cite{2021_OnlineInverseLinearQuadratic_inga}. However, the existing literature on \textit{potential differential games} deals exclusively with \textit{exact potential differential games}, whose definition limits their general applicability.
As a result, exact potential differential games can only describe games with specific system structures or utility functions of the players, for instance games in which the cost functions of the players are symmetric \cite{2016_SurveyStaticDynamic_gonzalez-sanchez, 2018_PotentialDifferentialGames_fonseca-morales}. Consequently, the general usage of exact potential differential games is limited.
To allow a broader usage, the subclass of \textit{ordinal potential differential games} (OPDGs) has been previously introduced in literature \cite{2021_PotentialDifferentialGames_varga}. While the core idea of this subclass has already been discussed, a systematic identification method for OPDGs is still absent from the literature. Identification is a crucial step in the analysis and design of potential games, as it enables us to derive the potential function for a given game structure. The absence of a systematic identification method for OPDGs limits their applicability in engineering and other fields. Therefore, there is a need for a new identification method that can provide a potential function for a given differential game structure and extend the reach of OPDGs to various applications.
\begin{figure}
\caption{The illustration of the general idea of potential games: the original game is replaced by a (fictitious) potential function, whose optimum provides the NE of the original game.}
\label{fig:PG_general_idae}
\end{figure}
Therefore, the contribution of this paper is the development of two identification methods to find the potential function of an OPDG. In both, a potential function is identified for a given LQ-differential game using linear matrix inequalities (LMIs), which provides a fast and accurate solution. The first identification method requires only the cost functions of the players; no information about the system state trajectories of the differential game is needed to determine the potential function. However, in some cases, no feasible solution exists since the constraints of the LMI are violated. Therefore, a second identification method is proposed, in which the constraints are softened. To construct the potential function with this second method, the system state trajectories of the differential game must be available, which has the disadvantage that it is less robust against measurement noise of the trajectories than the first method.
The paper is structured as follows: Section II provides an overview of LQ-differential games. Then, the identification methods are given in Section III. Section~IV presents the application of the proposed identification methods to two examples in simulation. Finally, Section V offers a brief summary and outlook.
\section{Preliminaries - Differential Games}
First, this section provides a short overview of linear-quadratic (LQ) differential games. Then, the core idea of potential games and the class of exact potential differential games are presented. Finally, the core notion of OPDGs is provided and the limitations of the state of the art are discussed.
\subsection{Linear-Quadratic Differential Games}
The general idea of a game is that numerous players interact with each other and try to optimize their own cost functions\footnote{In the game-theoretic literature, the formulation as a minimization problem is usual \cite[Chapter 1-3.]{2007_AlgorithmicGameTheory_nisan}. Since minimization problems are also prevalent in optimal control theory \cite{2015_OptimierungStatischeDynamische_papageorgiu}, the optimization problems in this paper are formulated as minimizations.}. If the players additionally interact with a dynamic system, the game is called a \textit{differential game}, see \cite{1998_DynamicNoncooperativeGame_basar, 2016_NonzeroSumDifferentialGames_basar}. In this case, the players have to carry out dynamic optimizations.
Numerous practical engineering applications can be characterized sufficiently well with LQ models, e.g.~\cite{2021_FeedbackSystemsIntroduction_astrom, 2021_InverseDynamicGame_ingacharaja, 2022_FeedbackControlSystems_rahmani-andebili}. An LQ-differential game ($\Gamma_\text{LQ}$) is defined as a tuple~of:
\begin{itemize}
\item a set $\mathcal{P}$ of players $i=1,\dots,N$ with their cost functions $J\play{i}$,
\item inputs of the players $\sv{u}(t) = \left[\sv{u}\play{1}(t), \dots,\sv{u}\play{N}(t)\right],$
\item a dynamic system $\sv{f}(t,\sv{x}(t), \sv{u}(t)) = \dot{\sv{x}}(t)$,
\item the system states $\sv{x}(t) \in \mathbb{R}^n$.
\end{itemize}
The linear dynamic system is
\begin{align} \label{eq:linear_system}
\dot{\sv{x}} &= \mathbf{A}\sv{x}+ \sum_{i=1}^{N} \vek{B}\play{i}\sv{u}\play{i}, \\ \nonumber
\sv{x}&(t_0) = \sv{x}_0,
\end{align}
where $\mathbf{A}$ and $\mathbf{B}^{(i)}$ are the system matrix and the input matrices of each player, respectively\footnote{Note that for the sake of simplicity, the time dependency of $\sv{x}(t)$ and $\sv{u}(t)$ is omitted in the following.}. Furthermore, the cost function of each player $i$ is given in a quadratic form:
\begin{equation} \label{eq:each_player_quad_cost_function}
J\play{i}=\frac{1}{2}\int_{t_0}^{\infty} \sv{x}^\mathsf{T} \vek{Q}\play{i}\sv{x} + \sum_{j=1}^{N} {\sv{u}\play{j}}^\mathsf{T} \vek{R}\play{ij}\sv{u}\play{j} \text{d}t,
\end{equation}
where matrices $\mathbf{Q}^{(i)}$ and $\mathbf{R}^{(ij)}$ denote the penalty factor on the state variables and on the inputs, respectively. The matrices $\mathbf{Q}^{(i)}$ are positive semi-definite and $\mathbf{R}^{(ij)}$ are positive definite.
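Although not part of the paper's method, the cost \eqref{eq:each_player_quad_cost_function} can be evaluated numerically along a closed-loop trajectory; the following minimal sketch uses forward-Euler quadrature on fully illustrative system matrices, gains and weights (all assumptions, none taken from the paper):

```python
import numpy as np

# Illustrative two-player LQ game data (assumptions, not from the paper).
A = np.array([[-1.0,  0.2],
              [ 0.0, -2.0]])
Bs = [np.array([[1.0], [0.0]]), np.array([[0.0], [1.0]])]   # B^{(1)}, B^{(2)}
Ks = [np.array([[0.5, 0.0]]), np.array([[0.0, 0.3]])]       # assumed feedback gains
Q  = np.eye(2)                                              # Q^{(i)} of one player
Rs = [np.array([[1.0]]), np.array([[0.5]])]                 # R^{(i1)}, R^{(i2)}

# Forward-Euler quadrature of J^{(i)} = 1/2 int x'Qx + sum_j u_j' R_ij u_j dt
dt, x, J = 1e-3, np.array([1.0, -1.0]), 0.0
for _ in range(20000):                                      # integrate to t = 20
    us = [-K @ x for K in Ks]                               # u^{(j)} = -K^{(j)} x
    J += 0.5 * (x @ Q @ x + sum(u @ R @ u for u, R in zip(us, Rs))) * dt
    x  = x + (A @ x + sum(B @ u for B, u in zip(Bs, us))) * dt
```

Since the assumed gains stabilize the system, the state decays and the integral converges.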
A common solution concept of games is the NE, see e.g.~\cite{2005_LQDynamicOptimization_engwerda}, which is defined in the following:
\begin{deff} [Nash-Equilibrium]
The solution of
\begin{align}
\sv{u}\Splay{i} = \underset{\sv{u}\play{i}}{\mathrm{arg min}} \;J\play{i}, \; \; \; \forall i \in \mathcal{P}, \; \; \; \; \rm{s.t.} \; \text{(\ref{eq:linear_system})}
\end{align}
is called the Nash-Equilibrium of the differential game composed by \eqref{eq:linear_system} and \eqref{eq:each_player_quad_cost_function}.
\end{deff}
The NE is a stable solution of the game in the sense that no player can reduce their cost by unilaterally deviating from this equilibrium solution.
In LQ-games, the NE can be computed via the coupled algebraic Riccati equations of the differential game, and can be characterized by the resulting players' inputs $\sv{u}\play{i}$. In the case of a feedback structure, the control law $\sv{u}\play{i} = - \vek{K}\play{i}\sv{x}$ can be assumed. The feedback gains follow from the coupled algebraic Riccati equations (see e.g.~\cite{2005_LQDynamicOptimization_engwerda})
\begin{align} \label{eq:coup_Ric}
\vek{0} = &\vek{A}^\mathsf{T} \vek{P}\play{i} + \vek{P}\play{i} \vek{A} + \vek{Q}\play{i} -
\sum_{j \in \mathcal{P}} \vek{P}\play{i}\vek{S}\play{j}\vek{P}\play{j} \\ \nonumber
&-\sum_{j \in \mathcal{P}} \vek{P}\play{j}\vek{S}\play{j}\vek{P}\play{i} +
\sum_{j \in \mathcal{P}} \vek{P}\play{j}\vek{S}\play{ij}\vek{P}\play{j} , \; \forall i \in \mathcal{P},
\end{align}
where
\begin{align*}
\vek{S}\play{j} =& \vek{B}\play{j} {\vek{R}\play{jj}}^{-1} {\vek{B}\play{j}}^\mathsf{T}, \; j \in \mathcal{P}, \\
\vek{S}\play{ij} =& \vek{B}\play{j} {\vek{R}\play{jj}}^{-1} \vek{R}\play{ij} {\vek{R}\play{jj}}^{-1} {\vek{B}\play{j}}^\mathsf{T}, \; j \in \mathcal{P}, \; i \neq j.
\end{align*}
From the solutions $\vek{P}\play{i}$, the feedback gains of the players are computed as
\begin{equation} \label{eq:KfromRicc_res}
\vek{K}\play{i} = {\vek{R}\play{ii}}^{-1} \vek{B}\Tplay{i} \vek{P}\play{i}.
\end{equation}
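As a numerical illustration of \eqref{eq:coup_Ric} and \eqref{eq:KfromRicc_res}, the coupled equations can be approximated by Lyapunov iterations, each Lyapunov equation being solved via Kronecker vectorization. This is a minimal sketch under illustrative two-player game data (all matrices are assumptions):

```python
import numpy as np

def lyap(Acl, Q):
    """Solve Acl^T P + P Acl + Q = 0 via Kronecker vectorization."""
    n = Acl.shape[0]
    M = np.kron(np.eye(n), Acl.T) + np.kron(Acl.T, np.eye(n))
    return np.linalg.solve(M, -Q.flatten(order="F")).reshape(n, n, order="F")

def coupled_riccati(A, Bs, Qs, Rs, iters=100):
    """Rs[i][j] = R^{(ij)}; returns the P^{(i)} of the coupled Riccati equations."""
    N, n = len(Bs), A.shape[0]
    Ps = [np.zeros((n, n)) for _ in range(N)]
    for _ in range(iters):
        # K^{(i)} = R^{(ii)^-1} B^{(i)T} P^{(i)}, cf. (eq:KfromRicc_res)
        Ks = [np.linalg.solve(Rs[i][i], Bs[i].T @ Ps[i]) for i in range(N)]
        Acl = A - sum(Bs[j] @ Ks[j] for j in range(N))       # closed loop
        Ps = [lyap(Acl, Qs[i] + sum(Ks[j].T @ Rs[i][j] @ Ks[j]
                                    for j in range(N))) for i in range(N)]
    return Ps

# Illustrative stable two-player game (assumed data).
A  = np.diag([-1.0, -2.0])
Bs = [np.array([[1.0], [0.0]]), np.array([[0.0], [1.0]])]
Qs = [np.eye(2), np.eye(2)]
Rs = [[np.eye(1), np.zeros((1, 1))], [np.zeros((1, 1)), np.eye(1)]]
Ps = coupled_riccati(A, Bs, Qs, Rs)
```

At a fixed point, the closed-loop Lyapunov residual of each player vanishes, which is equivalent to \eqref{eq:coup_Ric}.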
\subsection{Potential Differential Games}
As mentioned in the introduction, potential games have a very useful property: the computation of the NE can be reduced to a single optimization problem of one (fictitious) cost function $J\play{p}$. Such a potential game can be considered a substitute optimal control problem for the associated LQ-differential game $\Gamma_\text{LQ}$.
The potential function is assumed to be quadratic
\begin{subequations} \label{eq:odpg_obj}
\begin{equation}
J^{(p)}=\frac{1}{2} \int_{t_0}^{\infty} \sv{x}^\mathsf{T} \mathbf{Q}^{(p)} \sv{x}+\sv{u}^\mathsf{T} \mathbf{R}^{(p)} \sv{u} \, \mathrm{d} t,
\end{equation}
\end{subequations}
where the matrix $\mathbf{Q}^{(p)}$ is positive semi-definite and $\mathbf{R}^{(p)}$ is positive definite. The vector $\sv{u}= \left[\sv{u}\play{1}, ...,\sv{u}\play{N}\right]$ collects all the players' inputs. In the LQ-case, the Hamiltonian of the potential function is
\begin{equation} \label{eq:hamiltionian_of_pot_game}
H\play{p} = \frac{1}{2} \sv{x}^{T} \mathbf{Q}^{(p)} \sv{x}+\frac{1}{2} \sv{u}^{T} \mathbf{R}^{(p)} \sv{u}+\sv{\lambda}^{(p)^{T}} \dot{\sv{x}}
\end{equation}
and the Hamiltonian of the player $i$ of $\Gamma_\text{LQ}$ is
\begin{align} \label{eq:hamiltionian_of_player_i}
H\play{i} = \frac{1}{2} \sv{x}^\mathsf{T} \vek{Q}\play{i}\sv{x} + \frac{1}{2} \displaystyle \sum_{j=1}^{N} {\sv{u}\play{j}}^\mathsf{T} \vek{R}\play{ij}\sv{u}\play{j} +\sv{\lambda}^{(i)^{T}} \! \! \dot{\sv{x}} \; \; \forall i \in \mathcal{P}.
\end{align}
\begin{deff} [Exact Potential Differential Game \cite{1996_PotentialGames_monderer}] \label{def:EPDG}
An exact potential differential game is a game in which the NE can be computed from \eqref{eq:odpg_obj} such that
\begin{align} \label{eq:def_pot_game}
\frac{\partial H^{(p)}}{\partial \boldsymbol{u}^{(i)}}=\frac{\partial H^{(i)}}{\partial \boldsymbol{u}^{(i)}} \; \; \forall i \in \mathcal{P},
\end{align}
where $H\play{p}$ and $H\play{i}$ are given in (\ref{eq:hamiltionian_of_pot_game}) and in (\ref{eq:hamiltionian_of_player_i}), respectively. If (\ref{eq:def_pot_game}) holds, the optimization
\begin{align} \label{eq:EPDG_optim}
\sv{u} =& \underset{\sv{u}}{\mathrm{arg min}} \;J\play{p}, \\ \nonumber
& \rm{s.t.} \; \text{(\ref{eq:linear_system})},
\end{align}
yields the NE of $\Gamma_{\text{LQ}}$.
\end{deff}
In \cite{2016_SurveyStaticDynamic_gonzalez-sanchez} and \cite{2018_PotentialDifferentialGames_fonseca-morales}, exact potential differential games are analyzed, and identification methods are provided to find the exact potential function of a given differential game. Using the optimization \eqref{eq:EPDG_optim}, the stabilizing feedback control law
\begin{equation}
\sv{u} = -\vek{K}\play{p}\sv{x}
\end{equation}
of the potential games is obtained.
\subsection{Ordinal Potential Differential Games}
The main drawback of exact potential differential games is their limited applicability to general engineering problems, see e.g.~\cite{2016_DynamicPotentialGames_zazo}. Due to their definition, exact potential differential games can only be applied to games
\begin{itemize}
\item with special system dynamics (e.g.~$\sv{u}\play{i}$ has an impact only on $\sv{x}_i$, see the assumptions of Theorem~1 and Corollary~1 from \cite{2018_PotentialDifferentialGames_fonseca-morales}) or
\item for which the cost functions of the players have a particular structure, e.g.~the elements of the main diagonal are identical for all players, see~\cite{2016_SurveyStaticDynamic_gonzalez-sanchez} or Example 4 in~\cite{2018_PotentialDifferentialGames_fonseca-morales}.
\end{itemize}
To allow a broader usage, the subclass of OPDGs has been introduced in~\cite{2021_PotentialDifferentialGames_varga}.
\begin{deff}[Ordinal Potential Differential Game \cite{2021_PotentialDifferentialGames_varga}]
If there exists an ordinal potential cost function $J\play{p}$ for a given $\Gamma_\text{LQ}$, for which
\begin{equation} \label{eq:opdg_def}
{\rm sgn}\left(\frac{\partial H^{(p)}}{\partial \boldsymbol{u}^{(i)}}\right)
=
{\rm sgn}\left(\frac{\partial H^{(i)}}{\partial \boldsymbol{u}^{(i)}}\right)\; \; \forall i \in \mathcal{P},
\end{equation}
holds, then the game is an OPDG.
\end{deff}
While the core idea of the subclass of OPDGs has been presented, a systematic identification method for OPDGs, which is a crucial step in the analysis and design of potential games, is still absent from the literature. Thus, Problem \ref{prob:1} is defined as follows:
\begin{prob_stat} \label{prob:1}
Given an LQ-differential game $\Gamma_\text{LQ}$, how can an OPDG, characterized by a quadratic potential function $J\play{p}$ as given in (\ref{eq:odpg_obj}), be identified for this game?
\end{prob_stat}
\section{Identification of OPDG}
In this section, the main results of the paper are presented: two novel methods to identify an OPDG for a given LQ-differential game.
\subsection{Trajectory-free LMI Optimization}
The identification method presented in this section is referred to as \textit{Trajectory-Free Optimization} (TFO).
The TFO identifies an ordinal potential LQ-differential game by solving an LMI optimization problem based on the idea from \cite{2015_SolutionsInverseLQR_priess}. This has the advantage that the ordinal potential cost function can be computed without determining the trajectories of the original game, which makes the optimization robust against measurement noise and disturbances.
According to \cite{2021_PotentialDifferentialGames_varga}, the condition for an OPDG, cf.~(\ref{eq:opdg_def}), can be reformulated, leading to Lemma~\ref{thm-2}.
\begin{lem}[Sufficient Condition of an OPDG~\cite{2021_PotentialDifferentialGames_varga}] \label{thm-2}
If for a two-player linear-quadratic game with scalar inputs,
\begin{align} \label{eq:cond_ccat2021}
\left( \vek{B}\Tplay{i}\vek{P}\play{p}\sv{x} \right)^\mathsf{T} \cdot \left( \vek{B}\Tplay{i}\vek{P}\play{i}\sv{x}\right) \geq 0
\end{align}
holds $\forall i \in \mathcal{P}$ and $\forall \sv{x}$, then the game is an \textit{ordinal potential differential game} with the potential function given by (\ref{eq:odpg_obj}).
\end{lem}
\begin{proof}
See \cite{2021_PotentialDifferentialGames_varga}.
\end{proof}
To verify whether the differential game is an OPDG, condition \eqref{eq:cond_ccat2021} has to be checked for all possible trajectories $\sv{x}$ of the original game. Consequently, using (\ref{eq:cond_ccat2021}) as constraint of an optimization problem indicates a trajectory dependency of the identification. Therefore, in the following, the elimination of this trajectory dependency is discussed. First, the following notation is introduced:
$$ \sv{v}\play{p}:= \mathbf{B}^{(i)^{T}} \mathbf{P}^{(p)} \text{ and }\sv{v}\play{i}:= \mathbf{B}^{(i)^{T}} \mathbf{P}^{(i)}.$$
Using this notation, (\ref{eq:cond_ccat2021}) can be rewritten as
\begin{align} \label{necessary_opdg_simplified}
\left(\sv{v}\play{p} \sv{x} \right)^\mathsf{T} \cdot \left(\sv{v}\play{i} \sv{x} \right)\geq 0.
\end{align}
In order to drop $\sv{x}$ from the constraint, we recall that (\ref{necessary_opdg_simplified}) must hold $\forall \sv{x}$, which means that both terms in (\ref{necessary_opdg_simplified}) have the same sign regardless of the actual $\sv{x}$, leading to the new condition
\begin{equation} \label{eq:paralell_optim_1}
\omega\play{i}\sv{v}\play{p}-\sv{v}\play{i} = \sv{0},
\end{equation}
where $\omega\play{i}>0$ is a scaling factor.
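A quick numerical sanity check of this reformulation, with purely illustrative random vectors (assumptions, not data from a game): if $\sv{v}\play{i} = \omega\play{i}\sv{v}\play{p}$ with $\omega\play{i}>0$, the product in \eqref{necessary_opdg_simplified} is non-negative for every $\sv{x}$, whereas an anti-parallel $\sv{v}\play{i}$ violates it:

```python
import numpy as np

# With v_i = omega * v_p and omega > 0, the product
# (v_p x)^T (v_i x) = omega * (v_p x)^2 is non-negative for all x.
rng = np.random.default_rng(0)
v_p = rng.normal(size=3)        # stands in for a row of B^{(i)T} P^{(p)}
omega = 2.5                     # positive scaling factor omega^{(i)}
v_i = omega * v_p               # condition (paralell_optim_1) holds
products = [(v_p @ x) * (v_i @ x) for x in rng.normal(size=(1000, 3))]
```

Conversely, choosing $\sv{v}\play{i} = -\sv{v}\play{p}$ makes the product $-(\sv{v}\play{p}\sv{x})^2 < 0$ for generic $\sv{x}$, so the sign condition fails.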
Figure~\ref{fig:method_1} shows the system state vector $\sv{x}$ at two different time instants $t_1$ and $t_2$, together with the vectors $\sv{v}\play{i}$ and $\sv{v}\play{p}$, cf.~\eqref{eq:paralell_optim_1}. In this three-dimensional example, the scaling factor is a single scalar value; for higher dimensions, it would be a vector. Since in Figure~\ref{fig:method_1}, $\sv{v}\play{p}$ and $\sv{v}\play{i}$ point in the same direction, condition \eqref{eq:paralell_optim_1} is fulfilled for all time instants.
Intuitively, if $\sv{x}$ has a lower dimension than $\sv{v}\play{p}$ and $\sv{v}\play{i}$, there are more possible combinations of $\sv{v}\play{p}$ and $\sv{v}\play{i}$ which fulfill condition \eqref{eq:paralell_optim_1}. For instance, if $\sv{x}$ is scalar and lies along $\vek{e}^1$, then any vectors $\sv{v}\play{p}$ and $\sv{v}\play{i}$ in the plane $\vek{e}^2$-$\vek{e}^3$ fulfill condition \eqref{eq:paralell_optim_1}.
To construct an LMI optimization for finding an OPDG, the idea from \cite{2015_SolutionsInverseLQR_priess} is applied. This leads to a reformulation of the original problem statement (cf.~Problem \ref{prob:1}) as follows:
\hspace*{1mm}
\begin{prob_stat} \label{prob_modifed}
Let it be assumed that a stabilizing feedback control $\sv{K}\play{p}$ and the system dynamics (\ref{eq:linear_system}) are given. Find (at least) one cost function \eqref{eq:odpg_obj} whose minimization with respect to the system dynamics leads to the optimal control law $\sv{K}\play{p}$, such that \eqref{eq:opdg_def} holds.
\end{prob_stat}
\hspace*{1mm}
\begin{collar}
Problem \ref{prob_modifed} raises the question of uniqueness: if the parameters $[\tilde{\vek{Q}}\play{p},\,\tilde{\vek{R}}\play{p}]$ of (\ref{eq:odpg_obj}) solve Problem \ref{prob_modifed}, then $[\gamma \cdot \tilde{\vek{Q}}\play{p},\,\gamma \cdot \tilde{\vek{R}}\play{p}]$ with a scalar $\gamma>0$ solve Problem \ref{prob_modifed} as well, since they lead to the same optimal control law $\vek{K}\play{p}$. To overcome this issue and to ensure a unique solution, a further criterion must be provided.
\end{collar}
For the solution of the associated inverse problem, the bound $\alpha$ on the condition number of the weighting matrices is minimized,
$$
\mathbf{I} \preceq
\begin{bmatrix}
\mathbf{Q}^{(p)} & \boldsymbol{0} \\
\boldsymbol{0} & \mathbf{R}^{(p)}
\end{bmatrix}
\preceq \alpha \mathbf{I},
$$
such that a unique solution is obtained. The main difference to other inverse control problems (see e.g.~\cite{2015_SolutionsInverseLQR_priess} or \cite{2019_InverseDiscountedbasedLQR_el-hussieny}) is that (\ref{eq:cond_ccat2021}), or the reformulated condition \eqref{eq:paralell_optim_1}, needs to hold additionally. In summary, the following optimization problem is constructed:
\begin{subequations} \label{eq:optim1_lmi}
\begin{align}
\hat{\mathbf{P}}^{(p)},\hat{\mathbf{Q}}^{(p)}, \hat{\mathbf{R}}^{(p)}&=\arg \min _{\mathbf{P}^{(p)},\mathbf{Q}^{(p)}, \mathbf{R}^{(p)}}\alpha^{2}
\label{first_method_obj}\\
{\rm s.t.} \quad \mathbf{A}^{T} \mathbf{P}^{(p)}+\mathbf{P}^{(p)} \mathbf{A}-\mathbf{P}^{(p)} \mathbf{B} \mathbf{K}\play{p}+\mathbf{Q}^{(p)}&=\mathbf{0}
\label{first_method_con1}\\
\mathbf{B}^\mathsf{T} \mathbf{P}^{(p)}-\mathbf{R}^{(p)} \mathbf{K}\play{p} &= \mathbf{0}
\label{first_method_con2}\\
\omega\play{i} \mathbf{B}^{(i)^\mathsf{T}}\mathbf{P}^{(i)}-
\mathbf{B}^{(i)^\mathsf{T}}\mathbf{P}^{(p)} &= \mathbf{0} \; \; \forall i\in \mathcal{P}
\label{first_method_con3}\\
\mathbf{I} \preceq
\begin{bmatrix}
\mathbf{Q}^{(p)} & \boldsymbol{0} \\
\boldsymbol{0} & \mathbf{R}^{(p)}
\end{bmatrix}
&\preceq \alpha \mathbf{I}
\label{first_method_con4}\\
\quad \mathbf{P}^{(p)} &\succeq \boldsymbol{0},
\label{first_method_con5}
\end{align}
\end{subequations}
where $\vek{B} = [\vek{B}\play{1},...,\vek{B}\play{N}]$. The constraints (\ref{first_method_con1}), (\ref{first_method_con2}) and (\ref{first_method_con5}) are necessary for $\sv{K}\play{p}$ to be optimal for the identified quadratic cost function.
The constraint (\ref{first_method_con4}) ensures the uniqueness of the solution.
The constraint (\ref{first_method_con3}) restricts the identified cost function to an ordinal potential function. Thus, if \eqref{eq:optim1_lmi} is feasible, then Problem \ref{prob_modifed} is solved and the original game is an OPDG.
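In practice, \eqref{eq:optim1_lmi} would be handed to an SDP/LMI solver; independently of the solver, a candidate solution can be validated by checking the equality constraints directly. A minimal sketch, with hand-computed scalar data as an illustrative assumption:

```python
import numpy as np

# Check the TFO equality constraints for a candidate (P, Q, R) instead of
# running an SDP solver.  The scalar data below are illustrative
# assumptions chosen so that the constraints hold exactly.
def tfo_constraints_hold(A, B, Kp, Pp, Qp, Rp, tol=1e-8):
    c1 = A.T @ Pp + Pp @ A - Pp @ B @ Kp + Qp        # (first_method_con1)
    c2 = B.T @ Pp - Rp @ Kp                          # (first_method_con2)
    psd = np.all(np.linalg.eigvalsh(Pp) >= -tol)     # (first_method_con5)
    return bool(np.linalg.norm(c1) < tol and np.linalg.norm(c2) < tol and psd)

# Scalar example: A=-1, B=1, Q=3, R=1 gives P=1 from the Riccati equation.
A, B = np.array([[-1.0]]), np.array([[1.0]])
Pp, Qp, Rp = np.array([[1.0]]), np.array([[3.0]]), np.array([[1.0]])
Kp = np.linalg.solve(Rp, B.T @ Pp)                   # K^{(p)} = R^{-1} B^T P
```

Perturbing $\vek{Q}\play{p}$ breaks constraint \eqref{first_method_con1}, so the check rejects the candidate.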
\begin{figure}
\caption{A schematic representation of the trajectory independence of the optimization in a three-dimensional space $\left(\sv{e}^1, \sv{e}^2, \sv{e}^3\right)$}
\label{fig:method_1}
\end{figure}
\subsection{Feasibility Analysis}
This section provides a feasibility analysis of \eqref{eq:optim1_lmi} for a $\Gamma_\text{LQ}$ with two players.
Due to the constraints in \eqref{eq:optim1_lmi}, there are differential games for which \eqref{eq:optim1_lmi} does not yield a feasible solution since the constraints are violated; such a problem is called \textit{infeasible}. Thus, in the following, we discuss conditions for the feasibility of the proposed LMI optimization problem \eqref{eq:optim1_lmi}.
Without loss of generality, it is assumed that the system has $n$ states and that the input matrices of player $1$ and player $2$, $\mathbf{B}^{(i)}, \; i\in\{1,2\}$, have the dimensions $n \times p_1$ and $n \times p_2$, respectively. The TFO can provide feasible solutions if the following requirements are satisfied:
\begin{lem}[Necessary Condition for the feasibility of the TFO for OPDGs] \label{lem:feas}
The TFO (\ref{eq:optim1_lmi}) for two players can be feasible only~if
\begin{itemize} \label{conditio:OPDG_LMI}
\item[A)] the columns of the input matrices $\mathbf{B}^{(i)}, \; \forall i \in \mathcal{P}$ are linearly independent and
\item[B)] the system dimensions satisfy
\begin{equation} \label{eq:lemma1_condition_opdg}
\frac{1}{2}(1+n)-(p_1+p_2)>0,
\end{equation}
where $p_1$ and $p_2$ are the input dimensions of players 1 and 2 with $\vek{B}\play{1} \in \mathbb{R}^{n \times p_1}$ and $\vek{B}\play{2} \in \mathbb{R}^{n \times p_2}$, respectively.
\end{itemize}
\end{lem}
\begin{proof}
To prove the conditions, constraint (\ref{first_method_con3}) is rewritten as
$$
\vek{\omega}\play{i} \sv{v}\play{i}- \mathbf{B}^{(i)^\mathsf{T}}\mathbf{P}^{(p)} = \mathbf{0}, \forall i \in \mathcal{P},
$$
which can be vectorized such that
\begin{align} \label{eq:pr_step2}
\mathrm{vec}\left(\vek{\omega}\play{i} \sv{v}\play{i}\right) - \mathrm{vec}\left({\vek{B}\play{i}}^\mathsf{T} \vek{P}\play{p}\right) = \sv{0}, \forall i \in \mathcal{P},
\end{align}
and rearranged to
\begin{equation} \label{eq:chap4_proof_feasibly}
\underbrace{\left(\vek{E}_n \otimes {\vek{B}\play{i}}^\mathsf{T} \right)}_{\tilde{\vek{A}}} \underbrace{\mathrm{vec} \left(\vek{P}\play{p} \right)}_{\tilde{\vek{x}}} = \underbrace{\mathrm{vec}\left(\vek{\omega}\play{i} \sv{v}\play{i}\right)}_{\tilde{\vek{b}}}, \forall i \in \mathcal{P},
\end{equation}
where $\mathrm{vec}(\cdot)$ represents the column vectorization of a matrix and $\otimes$ is the Kronecker product of two matrices. In \eqref{eq:chap4_proof_feasibly}, the classical form of a system of linear equations $
\tilde{\vek{A}} \tilde{\vek{x}} = \tilde{\vek{b}}
$ is given in the underbraces.
Condition A is necessary for the consistency of the system of linear equations \eqref{eq:chap4_proof_feasibly}, for which
$$
\mathrm{rank} \left(\vek{E}_{n} \otimes {\vek{B}\play{i}}^\mathsf{T} \right) = \mathrm{rank} \left(\vek{E}_{n} \otimes {\vek{B}\play{i}}^\mathsf{T} \, \Big| \, \mathrm{vec}\left(\vek{\omega}\play{i} \sv{v}\play{i}\right) \right), \; \forall i \in \mathcal{P}
$$
must hold, since inconsistency of \eqref{first_method_con3} leads to an infeasible LMI.
Condition B is necessary for the following reasons.
If \eqref{eq:chap4_proof_feasibly} yields a single solution for a given $\mathrm{vec}\left(\vek{\omega}\play{i} \sv{v}\play{i}\right)$, then \eqref{eq:chap4_proof_feasibly} completely determines $\mathrm{vec}(\vek{P}\play{p})$. Therefore, it is not possible to modify $\vek{P}\play{p}$ to satisfy (\ref{eq:optim1_lmi}b) and (\ref{eq:optim1_lmi}c), and as a result, (\ref{eq:optim1_lmi}) cannot be feasible. On the other hand, if \eqref{eq:chap4_proof_feasibly} has multiple solutions, the constraints of the optimization problem \eqref{eq:optim1_lmi} have additional degrees of freedom. This consideration requires a rank analysis of \eqref{eq:chap4_proof_feasibly} to show that Condition B holds, see e.g.~\cite[Chapter 5]{2014_LinearAlgebraMatrix_banerjee}.
Due to the fact that the columns of the input matrix $\mathbf{B}^{(i)}, \; \forall i \in \mathcal{P}$ are linearly independent,
\begin{equation} \label{eq:feas_rank_cond_fin}
\mathrm{rank} \left(\vek{E}_{n} \otimes {\vek{B}\play{i}}^\mathsf{T} \right) < \mathrm{dim}\left( \mathrm{vec} \left(\vek{P}\play{p} \right) \right), \; \forall i \in \mathcal{P}
\end{equation}
must hold, where the dimension of a vector is denoted by $\mathrm{dim}$. Condition \eqref{eq:feas_rank_cond_fin} means that the number of rows of the coefficient matrix of \eqref{eq:chap4_proof_feasibly} needs to be smaller than the number of columns, cf. \cite{2005_RowRankEquals_wardlaw, 2014_LinearAlgebraMatrix_banerjee}. For two players, \eqref{eq:chap4_proof_feasibly} leads to
\begin{equation} \label{eq:2_player}
\tilde{\vek{A}} =
\begin{bmatrix}
\vek{E} \otimes {\vek{B}\play{1}}^\mathsf{T} \\
\vek{E} \otimes {\vek{B}\play{2}}^\mathsf{T}
\end{bmatrix} \; \mathrm{and} \; \tilde{\vek{b}} = \begin{bmatrix}
\omega\play{1} \sv{v}\play{1} \\
\omega\play{2} \sv{v}\play{2}
\end{bmatrix}.
\end{equation}
The dimension of $\tilde{\vek{A}}$ is $n(p_1+p_2) \times n^2,$ where $p_1$ and $p_2$ are the input dimensions of players 1 and 2, respectively. If $\vek{P}\play{p}$ were not a symmetric matrix, the condition for a manifold of solutions would be $n^2 > n(p_1+p_2)$. Due to the symmetric structure of $\vek{P}\play{p}$, the degrees of freedom of $\mathrm{vec}\left(\vek{P}\play{p}\right)$ are reduced to $\frac{1}{2}(1+n)n$, see \cite[Chapter 14]{2014_LinearAlgebraMatrix_banerjee}. Thus, condition \eqref{eq:feas_rank_cond_fin} changes~to
\begin{equation}
\frac{1}{2}(1+n) > (p_1+p_2),
\end{equation}
which completes the proof.
\end{proof}
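Both steps of the proof can be checked numerically: the column-major vectorization identity behind \eqref{eq:chap4_proof_feasibly} and the counting argument behind Condition B. A minimal sketch with illustrative random matrices (assumptions, not game data):

```python
import numpy as np

# 1) Vectorization step (eq:chap4_proof_feasibly): vec(B^T P) equals
#    (E_n kron B^T) vec(P), with vec the column-major stacking.
rng = np.random.default_rng(1)
n, p = 4, 2
B = rng.normal(size=(n, p))
P = rng.normal(size=(n, n))
lhs = (B.T @ P).flatten(order="F")
rhs = np.kron(np.eye(n), B.T) @ P.flatten(order="F")

# 2) Counting argument behind Condition B: the free entries of the
#    symmetric P^{(p)} must exceed the n(p1+p2) stacked equations,
#    which is equivalent to (1+n)/2 > p1 + p2.
def condition_B(n, p1, p2):
    unknowns = n * (n + 1) // 2      # free entries of symmetric P^{(p)}
    equations = n * (p1 + p2)        # rows of the stacked system (eq:2_player)
    return unknowns > equations
```

For example, $n=6$ with scalar inputs ($p_1=p_2=1$) satisfies Condition B, while $n=3$ with scalar inputs does not.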
\begin{collar} \label{remark1}
The constraint (\ref{first_method_con3}) is more restrictive than condition~(\ref{eq:opdg_def}) from the definition of an OPDG. Therefore, it is possible that (\ref{eq:optim1_lmi}) is infeasible even though the original game is an OPDG.
\end{collar}
\subsection{Weakly Trajectory-Dependent Optimization}
If not all constraints of \eqref{eq:optim1_lmi} can be fulfilled, the TFO yields an \textit{infeasible solution}. This can be the case even if a potential function exists for the given differential game, cf.~Remark~\ref{remark1}. To overcome this issue and still identify a potential game, a second identification method is introduced, which uses only the relevant parts of the trajectory information in the constraints of the LMI. The optimization is called \textit{Weakly Trajectory-Dependent Optimization} (WTDO), since the constraints for an OPDG (\ref{eq:optim1_lmi}b~-~\ref{eq:optim1_lmi}f) are reformulated and the constraint of an exact solution to the Riccati equation \eqref{eq:coup_Ric} is softened. Instead of optimizing the condition number, the WTDO minimizes the remaining error of (\ref{eq:optim1_lmi}b). The constraints are reformulated such that the closest points around the zero crossings of~$\vek{B}\Tplay{i}\vek{P}\play{i}\sv{x}$, denoted $\boldsymbol{x}^*_+$ and $\boldsymbol{x}^*_-$, are computed, for which
\begin{align}
&\mathbf{B}^{(i)^{T}} \mathbf{P}^{(i)}\boldsymbol{x}^*_+ > \boldsymbol{0} \; \forall i \in \mathcal{P}, \\
&\mathbf{B}^{(i)^{T}} \mathbf{P}^{(i)}\boldsymbol{x}^*_- < \boldsymbol{0} \; \forall i \in \mathcal{P}
\end{align}
hold. This is a reasonable reformulation of constraint (\ref{first_method_con3}), since the zero crossings --~the points where the signs in \eqref{eq:opdg_def} change~-- are the points of interest for fulfilling condition \eqref{eq:cond_ccat2021}. The constraint (\ref{first_method_con1}) from the TFO is softened and used for minimization\footnote{Note that reformulating hard constraints as soft constraints in an identification method is also used in the literature, see e.g.~\cite{2020_InverseOpenLoopNoncooperative_molloya}.}. The WTDO is formulated as an LMI optimization,
\begin{subequations} \label{eq:optim2_lmi}
\begin{align}
\hat{\mathbf{P}}^{(p)},\hat{\mathbf{Q}}^{(p)},& \hat{\mathbf{R}}^{(p)}=\arg \min _{\mathbf{P}^{(p)},\mathbf{Q}^{(p)}, \mathbf{R}^{(p)}}\eta^{2} \\
&{\rm s.t.} \; \text{(\ref{first_method_con2}),} \text{ and (\ref{first_method_con5})}\\
&\mathbf{I} \preceq
\begin{bmatrix}
\mathbf{Q}^{(p)} & \boldsymbol{0} \\
\boldsymbol{0} & \mathbf{R}^{(p)}
\end{bmatrix}\\
&\mathbf{B}^{(i)^{T}}
\mathbf{P}^{(p)}\boldsymbol{x}^*_+ > \boldsymbol{0} \; \forall i \in \mathcal{P},
\label{second_method_con2}\\
&\mathbf{B}^{(i)^{T}} \mathbf{P}^{(p)}\boldsymbol{x}^*_- < \boldsymbol{0} \; \forall i \in \mathcal{P},
\label{second_method_con3}
\end{align}
\end{subequations}
where
\begin{equation} \label{eq:trace_WTO}
\eta = \text{tr}\left(\mathbf{A}^{T} \mathbf{P}^{(p)}+\mathbf{P}^{(p)} \mathbf{A}-\mathbf{P}^{(p)} \mathbf{B} \mathbf{K}\play{p}+\mathbf{Q}^{(p)}\right).
\end{equation}
Thus, the constraint (\ref{first_method_con1}) is turned into the optimization objective of the WTDO. Furthermore, (\ref{second_method_con2}) and (\ref{second_method_con3}) ensure the condition of the OPDGs. Since the WTDO is also formulated as an LMI, an efficient calculation is guaranteed. Note that the usage of the trace \eqref{eq:trace_WTO} ensures the NE of the original game in a soft-constrained manner.
\section{Applications}
This section provides an academic and an engineering example to demonstrate the applicability of the proposed identification methods. Furthermore, the results are compared to the state-of-the-art solution.
\subsection{Input Dependent Optimization}
To establish a baseline for the analysis, we utilize the identification method proposed in \cite{2021_PotentialDifferentialGames_varga} and compare it with the two novel approaches. This state-of-the-art identification is referred to as \textit{Input-Dependent Optimization} (IDO). The IDO approach measures the error between the inputs resulting from the potential function and those of the original game ($\sv{x}^*$), which correspond to its NE:
\begin{equation} \label{eq:error_optim}
\sv{e}_{u} = \sv{u}\play{p}(t,\sv{x},\vek{Q}\play{p},\vek{R}\play{p}) - \sv{u}(t,\sv{x}^*).
\end{equation}
To identify the parameters of the OPDG, this error is minimized, which happens through the following optimization:
\begin{subequations} \label{eq:optim_find_pot_games}
\begin{align}
&\; \; \; \hat{\vek{Q}}\play{p}, \hat{\vek{R}}\play{p} = \text{arg } \underset{\vek{Q}\play{p},\vek{R}\play{p}}{\text{min }} \left| \sv{e}_{u} \right|^2 \\
\text{s.t. }&\vek{A}^\mathsf{T} \vek{P}\play{p} + \vek{P}\play{p} \vek{A} + \vek{Q}\play{p} - \vek{P}\play{p}\vek{S}\play{p}\vek{P}\play{p} = \sv{0} \\
&\left( \vek{B}\Tplay{i}\vek{P}\play{p}\sv{x} \right)^\mathsf{T} \cdot \left( \vek{B}\Tplay{i}\vek{P}\play{i}\sv{x}\right) \geq 0 \; \forall i \in \mathcal{P},
\end{align}
\end{subequations}
where $\vek{S}\play{p} = \vek{B}\play{p}\vek{R}\Invplay{p}\vek{B}\Tplay{p}$.
The minimization of (\ref{eq:optim_find_pot_games}a) ensures the necessary condition of an OPDG, cf.~\cite{2021_PotentialDifferentialGames_varga}. The constraint (\ref{eq:optim_find_pot_games}b) is necessary for the optimality of $J\play{p}$, meaning that $\sv{u}\play{p}$ is the result of the optimal control problem. The constraint (\ref{eq:optim_find_pot_games}c) guarantees the sufficient condition of Lemma~\ref{thm-2}.
The optimization is carried out with an interior-point algorithm provided by MATLAB \cite{MATLAB:R2019b_u5}.
In the next section, two examples are presented. The first one is a general example: neither the players' cost functions nor the system dynamics are special, in contrast to the state-of-the-art examples. The second one is a specific engineering application, in which a human-machine interaction is modeled as an OPDG.
\subsection{General Example}
\subsubsection{The LQ-differential game}
The LTI model has the system and input matrices:
\begin{equation*}
\mathbf{A} \! =\! \begin{bmatrix}
-1.20 & 0.00 & 0.00 & 0.00 & 0.00 & 1.75 \\
0.00 & 2.10 & 0.00 & 0.00 & 0.00 & 0.00 \\
-1.00 & 0.00 & 2.95 & 0.00 & 0.00 & 0.00 \\
0.00 & 0.00 & 0.00 & 2.05 & 0.00 & 1.50 \\
2.00 & 0.00 & 0.00 & 0.00 & 1.00 & -4.15 \\
0.00 & 0.00 & 0.00 & 0.00 & 0.00 & 1.85
\end{bmatrix}\!, \;
\mathbf{B}\play{1} =\!
\begin{bmatrix}
1.0 & 1.0 \\
3.0 & 0.0 \\
2.1 & 4.0 \\
0.0 & 0.0 \\
0.1 & 2.0 \\
1.0 & 0.9
\end{bmatrix}
\text{and} \,
\mathbf{B}\play{2} = \!
\begin{bmatrix}
1.3 & 1.0 \\
1.0 & -1.1 \\
0.0 & 0.0 \\
2.0 & -1.0 \\
0.0 & -2.0 \\
4.0 & 2.1
\end{bmatrix}\hspace*{-1mm}.
\end{equation*}
Note that the system and input matrices have no special structure.
Both players have a quadratic cost function, cf. (\ref{eq:each_player_quad_cost_function}). The penalty factors of the first player are
\begin{align*}
\mathbf{Q}^{(1)} &= \mathrm{diag}([10,4,2,3,4,4]),\nonumber \\
\mathbf{R}^{(11)} &= \mathrm{diag}([1.5,1.0]), \; \nonumber
\mathbf{R}^{(12)} = \mathrm{diag}([0,0]) \nonumber
\end{align*}
and the factors of the second player are
\begin{align*}
\mathbf{Q}^{(2)} &= \mathrm{diag}([8,1,5,1,3,2]), \nonumber \\
\mathbf{R}^{(21)} &= \mathrm{diag}([0.1,0]), \; \nonumber
\mathbf{R}^{(22)} = \mathrm{diag}([1,1]). \nonumber
\end{align*}
The initial values of the simulation are $\sv{x}_0 = [-0.5,\,-1.9,\,0.8, \, -0.6, \, 2.9, \,-0.1]^\mathsf{T}.$
The control laws of the players are computed by the coupled optimization of~(\ref{eq:each_player_quad_cost_function}), with $i=1,2$. The obtained feedback gains of the players are
\begin{align*}
\vek{K}\play{1} &= \begin{bmatrix}
-0.90 & 2.26 & 1.03 & -0.55 & -0.80 & 0.40 \\
-2.94 & -1.04 & 3.91 & 1.43 & -0.81 & 0.89
\end{bmatrix}, \\
\vek{K}\play{2} &= \begin{bmatrix}
-0.92 & -0.25 & 0.69 & 2.71 & -1.44 & 2.04 \\
-0.45 & -0.55 & 0.65 & -0.78 & -1.53 & 1.31
\end{bmatrix}.
\end{align*}
\subsubsection{Results}
For the evaluation and comparison of the identification methods, the error in the state trajectory is defined as
\begin{align} \label{eq:results_error}
e^{x} \; &=\max \left\{e^{x_1}, e^{x_2}, \ldots, e^{x_n}\right\}, \\ \nonumber
e^{x_i} &=\left\|\frac{{\sv{x}}_{i}\play{p}}{\left\|{\sv{x}}_{i}\play{p}\right\|_{\max }}-\frac{{\sv{x}}_{i}^*}{\left\|{\sv{x}}_{i}\play{p}\right\|_{\max }}\right\|_{\max },
\;\forall i \in\{1, \ldots, n\},
\end{align}
where $\sv{x}_{i}^*$ and $\sv{x}_{i}\play{p}$ are the trajectories generated by the original game (OG) and by the OPDG, respectively. Furthermore, the computation time necessary for the identification is used for the evaluation. The numerical results of $\vek{Q}\play{p}$ and $\vek{R}\play{p}$ using (\ref{eq:optim1_lmi}) are
\begin{equation*}
\vek{Q}\play{p} = \begin{bmatrix}
16.75 & -1.26 & -2.62 & -3.88 & 0.73 & 4.11 \\
-1.26 & 5.16 & 1.17 & 0.70 & 0.56 & 0.85 \\
-2.62 & 1.17 & 6.10 & 0.43 & -0.26 & -0.15 \\
-3.88 & 0.70 & 0.43 & 6.72 & 0.56 & 1.91 \\
0.73 & 0.56 & -0.26 & 0.56 & 11.21 & -1.02 \\
4.11 & 0.85 & -0.15 & 1.91 & -1.02 & 2.87
\end{bmatrix}
\end{equation*}
and
\begin{equation*}
\vek{R}\play{p} =
\begin{bmatrix}
2.12 & 0.39 & 0.32 & -0.08 \\
0.39 & 2.06 & -0.07 & -0.10 \\
0.32 & -0.07 & 3.22 & -0.87 \\
-0.08 & -0.10 & -0.87 & 6.84
\end{bmatrix}.
\end{equation*}
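The error index of \eqref{eq:results_error} is straightforward to implement; a minimal sketch follows (the array layout, one row per state and one column per time sample, is an assumption):

```python
import numpy as np

def trajectory_error(x_og, x_pg):
    """e^x from the error definition: for each state i, the max-norm
    difference between the OPDG trajectory x_pg[i] and the OG trajectory
    x_og[i], both normalised by the max-norm of x_pg[i]; then the max
    over all states i."""
    scale = np.max(np.abs(x_pg), axis=1, keepdims=True)   # ||x_i^(p)||_max
    e_i = np.max(np.abs(x_pg - x_og) / scale, axis=1)     # per-state errors
    return float(np.max(e_i))
```

For identical trajectories the index is zero; a deviation of $0.4$ on a state whose peak magnitude is $4$ yields $e^x = 0.1$.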
Figure~\ref{fig:sys_states13} and Figure~\ref{fig:sys_states46} show the resulting trajectories of the original game with two players (solid lines) and of the substituting potential game (dashed lines), which result from \eqref{eq:optim2_lmi}. It can be seen that the trajectories of the controller designed via the potential cost function deviate only insignificantly from those of the original game.
\begin{figure}
\caption{Comparison of the 1.--3. system state trajectories of the original game (solid lines) and of the potential game (dashed lines)}
\label{fig:sys_states13}
\end{figure}
\begin{figure}
\caption{Comparison of the 4.--6. system state trajectories of the original game (solid lines) and of the potential game (dashed lines)}
\label{fig:sys_states46}
\end{figure}
\begin{figure}
\caption{The evolution of the derivatives of the Hamiltonian functions of the two players}
\label{fig:ham1}
\end{figure}
Figure~\ref{fig:ham1} shows the values of the derivatives of the Hamiltonian functions
$$\frac{\partial H^{(p)}}{\partial u^{(i)}_j} \quad \text{and} \quad \frac{\partial H^{(i)}}{\partial u^{(i)}_j}.$$
It can be seen that all the zero-crossing points for the OG and the identified OPDG occur at the same time. This verifies the ordinal potential structure of $\Gamma_{\text{LQ}}$ meaning that (\ref{eq:opdg_def}) holds $\forall t$.
Table \ref{table:three2compare} compares the results of TFO and WTDO with IDO: the fastest and most accurate results are generated by the novel TFO. The trajectory errors $e^x$ of IDO and WTDO do not differ significantly. On the other hand, WTDO requires less computation time than the state-of-the-art IDO. Thus, the two novel identification methods outperform the state-of-the-art solution.
\begin{table}[!h]
\centering
\normalsize
\caption{The trajectory error and the computational times of the three identification methods}
\begin{tabular}{c|ccc}
& \; \; TFO \; \; & \; \; WTDO \; \; &\; \; IDO \; \;\\
\hline
\hline
$e^x$ [-]& 0.019 &0.077 & 0.076\\
\hline
$t_\mathrm{comp}$ in s& 0.15 & 28.15 & 212.1
\end{tabular}
\label{table:three2compare}
\end{table}
\subsection{Engineering Example}
This example presents the longitudinal control model of a vehicle-manipulator \cite{2019_ModelPredictiveControl_varga}. Such systems are used for road maintenance works, in which a human operator and the automation control the system.
This human-machine interaction can be formulated as a differential game $\Gamma_{\text{LQ}}$ with a linear system and two players enabling a systematic cooperative controller design.
\subsubsection{The two-player differential game}
The system matrix and the input matrices of the longitudinal model are:
\begin{equation*}
\mathbf{A}= \begin{bmatrix}
0.0 & 1.0 & 0.0 \\
0.0 & 0.0 & 0.0 \\
0.0 & 1.0 & 0.0
\end{bmatrix}\!, \, \mathbf{B}^{(\mathrm{h})} =
\begin{bmatrix}
0.00 \\
0.00 \\
0.14
\end{bmatrix} \, \mathrm{and} \,
\mathbf{B}^{(\mathrm{a})} =
\begin{bmatrix}
0.0 \\
1.0 \\
0.0
\end{bmatrix}.
\end{equation*}
The initial states are ${\sv{x}_0 = [-1.2, \; -0.95, \; 0.5]}$. The penalty factors $\mathbf{Q}^{(i)}$ and $\mathbf{R}^{(i)}$ in the cost function (\ref{eq:each_player_quad_cost_function}) of each player are
\begin{align*}
\mathbf{Q}^{(\mathrm{h})} &= \mathrm{diag}([1,1,5]),\nonumber \\
\mathbf{R}^{(\mathrm{h})} &= 1, \; \nonumber
\mathbf{R}^{(\mathrm{ha})} = 0.25 \\
\mathbf{Q}^{(\mathrm{a})} &= \mathrm{diag}([0.344, \; 0.076, \; 1.409]), \nonumber \\
\mathbf{R}^{(\mathrm{ah})} &= 0.19, \; \nonumber
\mathbf{R}^{(\mathrm{a})} = 1. \nonumber
\end{align*}
The feedback control law of the original game is calculated from the coupled optimization of \eqref{eq:each_player_quad_cost_function}, leading to the feedback gains of the human and the automation
\begin{equation}
K\play{\mathrm{h}} = \begin{bmatrix}
-0.78 & 0.26 & 1.42
\end{bmatrix}, \; \; K\play{\mathrm{a}} = \begin{bmatrix}
0.42 & 1.59 & 0.83
\end{bmatrix}.
\end{equation}
Since \eqref{eq:lemma1_condition_opdg} does not hold for this model, the TFO cannot be applied. Thus, WTDO and IDO are used to compute the potential function of the game. Furthermore, a white Gaussian noise
\begin{equation*}
\tilde{\sv{x}}(t) = \sv{x}^*(t) + \sv{\sigma}(t)
\end{equation*}
is added to the state signal in order to analyze the robustness of the WTDO and compare with IDO.
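A minimal sketch of the noise injection follows; the SNR-to-variance conversion is an assumption, since the text does not specify how $\sv{\sigma}(t)$ is scaled:

```python
import numpy as np

def add_awgn(x, snr_db, seed=None):
    """Add zero-mean white Gaussian noise to the signal x such that the
    signal-to-noise ratio equals snr_db, using the empirical signal power."""
    rng = np.random.default_rng(seed)
    p_signal = np.mean(np.asarray(x) ** 2)
    p_noise = p_signal / 10.0 ** (snr_db / 10.0)   # SNR = 10 log10(Ps/Pn)
    return x + rng.normal(0.0, np.sqrt(p_noise), size=np.shape(x))
```

On a sufficiently long signal, the empirically achieved SNR of the output matches the prescribed value to within a fraction of a dB.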
\subsubsection{Results}
The identification with (\ref{eq:optim2_lmi}) leads to the following matrices of the potential function
\begin{align}
\vek{Q}\play{p} = \begin{bmatrix}
0.82 & 0.24 & -0.48 \\
0.24 & 0.59 & -1.01 \\
-0.48 & -1.01 & 2.15
\end{bmatrix} \; \mathrm{and} \; \vek{R}\play{p} = \begin{bmatrix}
1.00 & -0.05 \\
-0.05 & 1.60
\end{bmatrix}.
\end{align}
To compare the performance of the WTDO with IDO, their error indices \eqref{eq:results_error} are computed at different noise levels. The results are shown in Table \ref{table:res_traj_noise}: the novel WTDO demonstrates superior performance compared to the state-of-the-art IDO at all signal noise levels. Figure~\ref{fig:sys_states2Ex} shows the system trajectories at $10\,$dB SNR, where the WTDO still provides a reliable solution.
\begin{table}[!t]
\normalsize
\caption{Results with different white Gaussian noise levels}
\centering
\begin{tabular}{c|ccccc}
SNR in dB&$10$ & $20$ & $30$ & $40$ & $\infty$\\
\hline
\hline
$e^x_\mathrm{WTDO} \, [-]$ & 0.314& 0.107& 0.029& 0.027&0.002\\
\hline
$e^x_\mathrm{IDO} \, [-]$& 0.603& 0.265& 0.104& 0.047& 0.026
\end{tabular}
\label{table:res_traj_noise}
\end{table}
\begin{figure}
\caption{Comparison of the system state trajectories of the original game (solid lines) and the trajectories of the potential game (dashed lines)}
\label{fig:sys_states2Ex}
\end{figure}
\subsection{Discussion}
As the first simulation example showed, the TFO outperforms both WTDO and the state-of-the-art IDO. Both novel methods can provide an OPDG for the given differential game faster and more accurately compared to IDO. Furthermore, the second example explores the robustness of WTDO under varying noise levels. The findings demonstrate its potential for practical applications, such as modeling human-machine interaction as an OPDG, thereby indicating its suitability for real-world scenarios.
However, a theoretical analysis of TFO regarding its computational complexity is not provided in this paper. Lemma \ref{lem:feas} provides only the necessary condition for the existence of an OPDG. Furthermore, the limitations of WTDO are not investigated in this work. Thus, these open research questions need to be addressed in further research.
\vspace*{1cm}
\section{Conclusion and Outlook}
This paper has presented two systematic identification methods for finding an ordinal potential differential game corresponding to a given LQ-differential game, addressing a gap in the existing literature. Both identification methods utilize linear matrix inequality optimization techniques. The first method -- referred to as trajectory-free optimization -- leverages only the cost functions of the original differential game to identify the ordinal potential game. In cases where trajectory-free optimization is infeasible, the second method, referred to as weakly trajectory-dependent optimization, is proposed as an alternative. Simulation results demonstrate that both identification methods effectively reconstruct the trajectories of the original game while satisfying the conditions of an ordinal potential differential game. Moreover, they exhibit superior speed, accuracy, and robustness compared to a previously proposed identification method from the literature.
In future work, we plan to employ the proposed algorithms for designing cooperative learning controllers, see e.g.~ \cite{2023_LimitedInformationShared_varga}. Additionally, we aim to validate the effectiveness of the proposed identification methods through measurements obtained from human-machine interactions.
\input{varga_App_Math_Opt_arxiv_fin.bbl}
\section*{Funding}
This work was supported by the Federal Ministry for Economic Affairs and Climate Action, in the New Vehicle and System Technologies research initiative with Project number 19A21008D.
\end{document} |
\begin{document}
\title{Local Density of Solutions to Fractional Equations\thanks{Supported by
the Australian Research Council Discovery Project 170104880 NEW ``Nonlocal
Equations at Work'',
the DECRA Project
DE180100957 ``PDEs, free boundaries
and applications'' and the Fulbright Foundation.
The authors are members of INdAM/GNAMPA.}}
\maketitle
\tableofcontents
\chapter*{Preface}
The study
of nonlocal operators of fractional type possesses a long tradition,
motivated both by mathematical curiosity and by real world applications.
Though this line of research presents some similarities and analogies with
the study of operators of integer order, it also presents a number
of remarkable differences, one of the greatest being the recently discovered
phenomenon that {\em
all functions are (locally) fractionally
harmonic (up to a small error)}. This feature is quite
surprising, since it is in sharp contrast with the case of classical
harmonic functions, and it reveals a genuinely nonlocal peculiarity.
More precisely, it has been proved in~\cite{MR3626547} that
given any $C^k$-function~$f$ in a bounded domain~$\Omega$
and given any~$\epsilon>0$, there exists a function~$f_\epsilon$ which
is fractionally harmonic in~$\Omega$ and such that the $C^k$-distance in~$\Omega$
between~$f$ and~$f_\epsilon$ is less than~$\epsilon$.
\begin{figure}
\caption{\footnotesize\it All functions are fractional harmonic, at different scales (scale of the original function).}
\label{FIGSCAL}
\end{figure}
Interestingly, this kind of results can be also applied at any scale,
as shown in Figures~\ref{FIGSCAL}, \ref{FIGSCAL2} and~\ref{FIGSCAL3}.
Roughly speaking, given {\em any} function, without any special geometric
prescription, in a given bounded domain (as in Figure~\ref{FIGSCAL}), one can ``complete''
the function outside the domain in such a way that the resulting object
is fractionally harmonic. That is, one can endow the function given in the bounded domain
with a number of suitable oscillations outside the domain in order
to make an integro-differential
operator of fractional type vanish. This idea is depicted in Figure~\ref{FIGSCAL2}.
As a matter of fact, Figure~\ref{FIGSCAL2} must be considered just a ``qualitative''
picture of this method, and does not claim to be ``realistic''.
On the other hand, even if Figure~\ref{FIGSCAL2} did not provide a correct
fractional harmonic extension of the given function outside the given domain,
the result can be repeated at a larger scale, as in Figure~\ref{FIGSCAL3},
adding further remote oscillations in order to obtain a fractional harmonic function.
\begin{figure}
\caption{\footnotesize\it All functions are fractional harmonic, at different scales
(``first''
scale of exterior oscillations).}
\label{FIGSCAL2}
\end{figure}
In this sense, this type of results really says that whatever graph we draw on a sheet of paper,
it is fractionally harmonic (more rigorously, it can be shadowed
with an arbitrary precision by another graph, which can be appropriately continued
outside the sheet of paper in a way which makes it fractionally harmonic).
\begin{figure}
\caption{\footnotesize\it All functions are fractional harmonic, at different scales
(``second''
scale of exterior oscillations).}
\label{FIGSCAL3}
\end{figure}
This book contains a {\em new result}
in this line of investigation, stating that {\em every
function lies in the kernel of every linear equation involving some fractional operator,
up to a small error}.
That is,
{\em any given function can be smoothly approximated by
functions lying in the kernel of a linear operator
involving at least one fractional component}.
The setting in which this result holds true is very general, since it takes into account
anomalous diffusion, with possible
fractional components
in both space and time. The operators taken into account comprise
the case of
the sum of classical and fractional Laplacians, possibly of different
orders, in the space variables, and classical or fractional derivatives
in the time variables.
Namely, the equation can be of any order, it does not need any
structure (it needs no ellipticity or parabolicity conditions), and the fractional
behaviour is either in time or space, or in both.
In a sense, this type of approximation results
reveals the true power of fractional equations, independently of the structural
``details'' of the single equation under consideration,
and
shows that {\em space-fractional and time-fractional
equations exhibit a variety of solutions which is much richer and more
abundant than in the case of classical diffusion}.
Though space- and time-fractional diffusions can be seen
as related aspects of nonlocal phenomena, they arise in different
contexts and present important structural differences.
The paradigmatic example of space-fractional diffusion is embodied by the fractional Laplacian,
that is a fractional root of the classical Laplace operator. This
setting often surfaces from stochastic processes presenting jumps
and it exhibits the classical spatial symmetries such as invariance under
translations and rotations, plus a scale invariance of the integral
kernel defining the operator. Differently from this,
time-fractional diffusion is typically related to memory effects,
therefore it distinguishes very strongly between the ``past''
and the ``future'', and the arrow of time plays a major role (in particular,
since the past influences the future, but not vice versa, time-fractional
diffusion does not possess the same type of symmetries of the space-fractional one).
In these pages, we will be able to consider operators which arise
as superpositions of both space- and time-fractional diffusion, possibly taking
into account classical derivatives as well (the cases of diffusion which
is fractional just in either space or time are comprised as special situations
of our general framework). Interestingly, we will also consider fractional
operators of any order, showing, in a sense, that some properties
related to fractional diffusion persist also when higher order operators
come into play,
differently from what happens in the classical case, in which the theory
available for the Laplacian operator presents significant differences with respect
to the case of polyharmonic operators.
To achieve the original result presented here, we
develop a broad theory of some fundamental facts about space- and time-fractional
equations.
Some of these additional results were known in the literature,
at least in some particular cases, but some others are new and interesting
in themselves, and, in developing these
auxiliary theories,
this monograph presents a completely self-contained
approach to a number of basic questions, such as:
\begin{itemize}
\item Boundary behaviour for the time-fractional eigenfunctions,
\item Boundary behaviour for the time-fractional harmonic functions,
\item Green representation formulas,
\item Existence and regularity for the first eigenfunction of the (possibly higher order)
fractional Laplacian,
\item Boundary asymptotics of the first eigenfunctions of the (possibly higher order)
fractional Laplacian,
\item Boundary behaviour of (possibly higher order) fractional harmonic functions.
\end{itemize}
We now dive into the technical details of this matter.
\chapter{Introduction: why fractional derivatives?}\label{WHY}
The goal of this introductory chapter is to provide a series
of simple examples in which fractional diffusion and fractional derivatives
naturally arise and give a glimpse on how
to use analytical methods to attack simple problems
arising in concrete situations. Some of the examples that we present are original,
some are modifications of known ones, all will be treated
in a {\em fully elementary} way that requires essentially no
prerequisites.
Other very interesting
motivations can be already found in the specialized literature, see e.g.~\cite{comb, MR1658022, MR2218073, MR2584076, MR2639369, MR2676137, MR3089369, claudia, MR3557159, 2017arXiv171203347G}
(also, in Chapter~\ref{DUEE} we will recall some other, somehow more advanced,
applications).
Some disclaimers here are mandatory. First of all, the motivations
that we provide do not aim at being fully exhaustive, since
the number of possible applications of fractional calculus is so large
that it is virtually impossible to discuss them all in one shot. Moreover
(differently from the rest of this monograph), while providing these motivations
we do not aim at maximal mathematical rigor (e.g. all functions
will be implicitly assumed to be smooth and to decay suitably at infinity,
limits will be freely taken and interchanged, etc.),
but rather at showing natural contexts in which fractional objects appear
in an almost unavoidable way.
\begin{figure}
\caption{\footnotesize\it A material point sliding down.}
\label{FIGSLi}
\end{figure}
\begin{example}[Sliding time and tautochrone problem\index{problem!tautochrone}]\label{TAUr}
{\rm Let us consider a point mass subject to gravity, sliding down on a curve
without friction. We suppose that the particle starts its motion
with zero velocity at height~$h$ and it reaches its minimal position
at height~$0$ at time~$T(h)$. Our objective is to describe~$T(h)$
in terms of the shape of the slide. To this end, see Figure~\ref{FIGSLi},
we follow an approach introduced by N. H. Abel (see pages~11-27
in~\cite{MR1191901})
and use coordinates $(x,y)\in{\mathbb{R}}^2$ to describe the slide as a function
in the vertical variable, namely~$x=f(y)$. It is also convenient
to introduce the arclength parameter
\begin{equation}\label{VEL0}\phi(y):=\sqrt{|f'(y)|^2+1}\end{equation}
and to describe the position of the particle by the notation~$p(t):=(f(y(t)),y(t))$.
The velocity of the particle is therefore
\begin{equation}\label{VEL} v(t):=\dot p(t)=(f'(y(t)),1)\,\dot y(t).\end{equation}
By construction, we know that~$y(0)=h$ and~$y(T(h))=0$, and
moreover~$v(0)=0$. Accordingly, by the Energy Conservation Principle,
for all~$t\in[0,T(h)]$,
$$ \frac{m |v(t)|^2}{2}+mg\,y(t)=\frac{m |v(0)|^2}{2}+mg\,y(0)=mgh,$$
where~$m$ is the mass of the particle and~$g$ is the gravity acceleration.
As a consequence, simplifying~$m$ and recalling~\eqref{VEL0} and~\eqref{VEL}
(and that the particle is sliding downwards),
$$ -\phi(y(t))\,\dot y(t)=\sqrt{|f'(y(t))|^2+1}\,|\dot y(t)|=|v(t)|=\sqrt{2g(h-y(t))}.
$$
Hence, separating the variables,
$$ T(h)=-\int_0^{T(h)}\frac{\phi(y(t))\,\dot y(t)}{\sqrt{2g(h-y(t))}}\,dt=
-\int_h^{0}\frac{\phi(y)}{\sqrt{2g(h-y)}}\,dy,
$$
that is
\begin{equation}\label{8sEQ}
T(h)=\int^h_{0}\frac{\phi(y)}{\sqrt{2g(h-y)}}\,dy.
\end{equation}
We observe that~$\phi(y)\ge1$, thanks to~\eqref{VEL0}, therefore~\eqref{8sEQ}
gives that
\begin{equation}\label{Th1} T(h)\ge\sqrt{\frac{2h}{g}},\end{equation}
which corresponds to the free fall, in which~$f$ is constant and the particle
drops down vertically.
Interestingly, the relation in~\eqref{8sEQ} can be seen as a fractional equation.
For instance, if we exploit the Caputo notation of fractional derivative
of order~$1/2$, as it will be discussed in detail in the forthcoming
formula~\eqref{defcap}, one can write~\eqref{8sEQ} in the fractional form
\begin{equation}\label{FRAcha} T(h) = \sqrt{\frac{\pi}{2g}}\,D_{h,0}^{1/2}\Phi(h),
\end{equation}
where~$\Phi$ is a primitive of~$\phi$, say
$$ \Phi(H):=\int_0^H\phi(y)\,dy.$$
It is instructive to solve the relation~\eqref{8sEQ} by obtaining explicitly~$\phi$
in terms of~$T(h)$. Of course, fractional calculus, operator algebra
and the theory of Volterra-type integral equations provide general
tools to deal with equations such as the one in~\eqref{FRAcha}, but for the scopes
of these pages, we try to perform our analysis using only elementary
computations. To this end, it is convenient to take advantage of
the natural scaling of the problem and convolve~\eqref{8sEQ}
against the kernel~$\frac{1}{\sqrt{h}}$. In this way, we obtain that
\begin{equation}\label{BRAbxdis}
\begin{split}
\int_0^H\frac{T(h)}{\sqrt{H-h}}\,dh\,&=\int_0^H\left[
\int^h_{0}\frac{\phi(y)}{\sqrt{2g(h-y)}}\,dy\right]\frac{dh}{\sqrt{H-h}}\\
&=\frac{1}{\sqrt{2g}}\int_0^H\phi(y)\,\left[
\int_y^H\frac{dh}{\sqrt{(h-y)(H-h)}}\right]\,dy.
\end{split}
\end{equation}
Using the change of variable~$\eta:=\frac{h-y}{H-y}$ we see that
$$ \int_y^H\frac{dh}{\sqrt{(h-y)(H-h)}}=\int_0^1\frac{d\eta}{\sqrt{\eta(1-\eta)}}=\pi,
$$
and hence~\eqref{BRAbxdis} becomes
\begin{equation}\label{BRAbxdis:2}
\int_0^H\frac{T(h)}{\sqrt{H-h}}\,dh=\frac{\pi}{\sqrt{2g}}\int_0^H\phi(y)\,dy=
\frac{\pi}{\sqrt{2g}}\Phi(H).\end{equation}
The main application of this formula consists in a quick solution of
the {\em tautochrone problem}, that is the determination
of the shape of the slide for which the sliding time~$T(h)$ is constant
(and therefore independent on the initial height~$h$). In this case,
we set~$T(h)=T$ and then~\eqref{BRAbxdis:2} gives that
$$ 2T\sqrt{H}=\frac{\pi}{\sqrt{2g}}\Phi(H),$$
and thus, differentiating in~$H$,
$$ \frac{T}{\sqrt{H}}=\frac{\pi}{\sqrt{2g}}\phi(H),$$
which, recalling \eqref{VEL0}, leads to
\begin{equation}\label{Cyah892}
|f'(y)|^2={\phi^2(y)-1}={\frac{2gT^2}{\pi^2\,y}-1}={\frac{2r-y}{y}},
\end{equation}
with~$r:=\frac{gT^2}{\pi^2}$, which is the equation of the {\em cycloid}\index{cycloid}.
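As a numerical sanity check (a sketch that is not part of the original argument; the values of $g$ and $T$ are illustrative), one can verify that the cycloid profile $\phi(y)=\sqrt{2r/y}$ indeed yields a sliding time independent of the initial height~$h$:

```python
import numpy as np
from scipy.integrate import quad

g = 9.81                          # gravity acceleration (illustrative value)
T_const = 2.0                     # desired constant sliding time (assumed value)
r = g * T_const**2 / np.pi**2     # r = g T^2 / pi^2, as in the text

def sliding_time(h):
    """T(h) = int_0^h phi(y) / sqrt(2 g (h - y)) dy with the cycloid
    profile phi(y) = sqrt(2 r / y); the substitution y = h sin^2(t)
    removes both endpoint singularities of the integrand."""
    def integrand(t):
        y = h * np.sin(t) ** 2
        phi = np.sqrt(2.0 * r / y)
        dy_dt = 2.0 * h * np.sin(t) * np.cos(t)
        return phi / np.sqrt(2.0 * g * (h - y)) * dy_dt
    val, _ = quad(integrand, 0.0, np.pi / 2)
    return val
```

Up to quadrature error, `sliding_time(h)` equals $\pi\sqrt{r/g}=T$ for every $h>0$, confirming the tautochrone property.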
Another interesting application of~\eqref{BRAbxdis:2}
surfaces when the sliding time~$T(h)$ behaves like a square root, say
\begin{equation}\label{Th2} T(h)=\kappa\sqrt{h},\end{equation}
for some~$\kappa>0$. In this case, formula~\eqref{BRAbxdis:2}
gives that
$$ \frac{\kappa\pi H}2=\kappa\int_0^H\sqrt{\frac{h}{H-h}}\,dh=\frac{\pi}{\sqrt{2g}}\Phi(H),$$
that says that
$$ \phi(H)=\Phi'(H)=\sqrt{\frac{g}{2}}\,\kappa,$$
and then, by~\eqref{VEL0},
$$ |f'(y)|^2=\phi^2(y)-1=\frac{g}{2}\,\kappa^2-1,$$
which is constant (and well-posed when~$\kappa\ge\sqrt{\frac2g}$,
coherently with~\eqref{Th1}). In this case~$f$ is linear
and therefore the slide corresponds to the classical
{\em inclined plane}.
}\end{example}
\begin{example}[Sliding time and brachistochrone problem\index{problem!brachistochrone}]\label{TAUr2} {\rm Formula~\eqref{8sEQ}, as
well as its explicit fractional formulation in~\eqref{FRAcha}, is also useful to
solve the {\em brachistochrone problem}, that is detecting the curve
of fastest descent between two given points. In this setting,
the mathematical formulation of Example~\ref{TAUr} can be modified as follows.
The initial height is fixed, hence we can assume that~$h>0$ is a given parameter.
Instead, we can optimize the shape of the slide as given by the function~$f$.
To this end, it is convenient to write~$\phi=\phi_f$ in~\eqref{VEL0}, in order
to stress its dependence on~$f$. Similarly, the fall time in~\eqref{8sEQ}
and~\eqref{FRAcha} can be denoted by~$T(f)$ to emphasize its dependence on~$f$.
In this way, we write~\eqref{8sEQ} as
\begin{equation}\label{LIgh1}
T(f)=\int^h_{0}\frac{\phi_f(y)}{\sqrt{2g(h-y)}}\,dy.
\end{equation}
Now, given~$\epsilon\in(-1,1)$, we consider a perturbation~$\eta\in C^\infty_0([0,h])$
of an optimal function~$f$, and we define
$$ f_\epsilon(y):=f(y)+\epsilon \eta(y).$$
Since~$f_\epsilon(0)=f(0)$ and~$f_\epsilon(h)=f(h)$, we have that the endpoints
of the slide described by~$f_\epsilon$ are the same as the ones
described by~$f$ and therefore the minimality of~$f$ gives that
\begin{equation}\label{LIgh2}
T(f_\epsilon)=T(f)+o(\epsilon).
\end{equation}
In addition, by~\eqref{VEL0},
\begin{eqnarray*}&& \phi_{f_\epsilon}(y)=\sqrt{|f'_\epsilon(y)|^2+1}=
\sqrt{|f'(y)+\epsilon \eta'(y)|^2+1}\\&&\qquad=
\sqrt{|f'(y)|^2+2\epsilon f'(y)\eta'(y)+o(\epsilon)+1}\\&&\qquad=
\sqrt{|f'(y)|^2+1}+\frac{\epsilon f'(y)\eta'(y)}{\sqrt{|f'(y)|^2+1}}+o(\epsilon)\\
&&\qquad=\phi_f(y)+\frac{\epsilon f'(y)\eta'(y)}{\sqrt{|f'(y)|^2+1}}+o(\epsilon).
\end{eqnarray*}
Therefore, in light of~\eqref{LIgh1} and~\eqref{LIgh2},
\begin{eqnarray*}
o(\epsilon)&=&
T(f_\epsilon)-T(f)\\
&=& \int^h_{0}\frac{\phi_{f_\epsilon}(y)-\phi_f(y)}{\sqrt{2g(h-y)}}\,dy\\
&=& \epsilon\int^h_{0}\frac{ f'(y)\eta'(y)}{\sqrt{2g(h-y)(|f'(y)|^2+1)}}\,dy+o(\epsilon),
\end{eqnarray*}
and consequently
$$ \int^h_{0}\frac{ f'(y)\eta'(y)}{\sqrt{2g(h-y)(|f'(y)|^2+1)}}\,dy=0.$$
Accordingly, since~$\eta$ is an arbitrary compactly supported perturbation,
we obtain the optimality condition
\begin{equation*}
\frac{d}{dy}\left( \frac{ f'(y)}{\sqrt{2g(h-y)(|f'(y)|^2+1)}}\right)=0
\end{equation*}
and then
\begin{equation*}
\frac{ f'(y)}{\sqrt{2g(h-y)(|f'(y)|^2+1)}}=-c,
\end{equation*}
for some~$c>0$.
This gives that
\begin{equation}\label{fpri} |f'(y)|^2=\frac{2c^2g(h-y)}{1-2c^2g(h-y)}.\end{equation}
It is now expedient to consider a suitable translation of the slide, by considering
the rigid motion described by the function
$$ \zeta(y):=\frac{y-1+2c^2gh}{2c^2 g}$$
and we define
$$ \tilde f(y):=2c^2 g f(\zeta(y)).$$
We point out that~$\zeta'(y)=\frac1{2c^2g}$ and
$$ \frac{2c^2g(h-\zeta(y))}{1-2c^2g(h-\zeta(y))}=
\frac{2c^2g h-2c^2g\zeta(y)}{1-2c^2gh+2c^2g\zeta(y)}=
\frac{2c^2g h-(y-1+2c^2gh)}{1-2c^2gh+(y-1+2c^2gh)}=\frac{1-y}{y}.$$
Consequently, by~\eqref{fpri},
$$ |\tilde f'(y)|^2= (2c^2 g)^2 (\zeta'(y))^2 |f'(\zeta(y))|^2
=\frac{2c^2g(h-\zeta(y))}{1-2c^2g(h-\zeta(y))}=\frac{1-y}{y},$$
which is again an equation describing a {\em cycloid} (compare with~\eqref{Cyah892}).}\end{example}
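For the reader's convenience (a short computation not spelled out in the text), the equation $|\tilde f'(y)|^2=\frac{1-y}{y}$ can be integrated explicitly, recovering the standard cycloid parametrization via the substitution $y=\sin^2(\theta/2)$:

```latex
% With y = sin^2(theta/2) = (1 - cos theta)/2, one has
% dy = sin(theta/2) cos(theta/2) d(theta) and |f'(y)| = cot(theta/2), so
\[
dx \,=\, |f'(y)|\,dy
   \,=\, \frac{\cos(\theta/2)}{\sin(\theta/2)}\,
     \sin\frac{\theta}{2}\,\cos\frac{\theta}{2}\,d\theta
   \,=\, \frac{1+\cos\theta}{2}\,d\theta.
\]
% Integrating, up to horizontal translations,
\[
x(\theta)=\frac{\theta+\sin\theta}{2},
\qquad
y(\theta)=\frac{1-\cos\theta}{2},
\]
% i.e. the cycloid traced by a circle of radius 1/2.
```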
Some additional comments about Examples~\ref{TAUr} and~\ref{TAUr2}.
The tautochrone problem was first solved by
Christiaan Huygens
in his book {\em
Horologium Oscillatorium: sive de motu pendulorum ad horologia aptato demonstrationes
geometricae} in 1673.
Interestingly, the tautochrone problem
is also alluded in a famous passage from the 1851 novel {\em Moby Dick} by Herman Melville.
As for the brachistochrone problem,
Galileo was probably the first to
take it into account in~1638
in Theorem~XXII and Proposition~XXXVI of his book {\em Discorsi e Dimostrazioni Matematiche
intorno a due nuove scienze}, in which he seemed to
argue that the fastest motion from one end to the other does not take
place along the shortest line but along a circular arc
(now we know that ``circular arc'' should have been replaced
by ``cycloid'' to make the statement fully correct, and in fact the name
of cycloid is likely to go back to Galileo in its meaning of ``resembling a circle'').
The brachistochrone problem was then
posed explicitly in~1696
by Johann Bernoulli
in {\em Acta Eruditorum}. Although he probably
knew how to solve it himself, Johann Bernoulli
challenged all others to solve it, addressing his community with a rather bombastic
dare, such as {\em I, Johann Bernoulli, address the most brilliant
mathematicians in the world. Nothing is more attractive to intelligent
people than an honest, challenging problem, whose possible solution will
bestow fame and remain as a lasting monument.}
The coincidence of the solutions of the tautochrone and the brachistochrone
problems appears to be a surprising mathematical fact and to reveal
deep laws of physics: in the words of
Johann Bernoulli, {\em
Nature always tends to act in the simplest way, and so it here lets one curve
serve two different functions}.
For a detailed historical introduction to these problems, see e.g.~\cite{MR1281370}
and the references therein.
\begin{example}[Thermal insulation problems, Dirichlet-to-Neumann problems\index{Dirichlet-to-Neumann},
and the fractional Laplacian]\label{74747TGSCO}
{\rm Let us consider a room whose perimeter is given by walls of two different types.
One kind of walls is such that we can fix the temperature there,
i.e. by some heaters or coolers endowed with a thermostat.
The other kind of walls is made by insulating material which prevents the
heat flow to go through. The question that we want to address is:
\begin{equation}\label{QUE1}
{\mbox{{\em what is
the temperature at the insulating kind of walls?}}}
\end{equation}
We will see that the question in~\eqref{QUE1} can be conveniently set into a natural
fractional Laplacian framework.
To formalize this question, and address it at least in its simplest
possible formulation, let us consider the case in which the room
is modeled by the half-space~${\mathbb{R}}^{n+1}_+:={\mathbb{R}}^n\times(0,+\infty)$
(while the rooms in the real life are considered to be three-dimensional,
hence $n$ would be equal to~$2$ in this model,
we can also take into account the case of a general~$n$ in this discussion).
The walls of this room are given by~${\mathbb{R}}^n\times\{0\}$.
We suppose that the insulating material is placed in a nice bounded
domain~$\Omega\subset{\mathbb{R}}^n\times\{0\}$
and the temperature is prescribed at the remaining part of the walls~$({\mathbb{R}}^n\times\{0\}
)\setminus\Omega$, see Figure~\ref{2234CxC6783C}.
\begin{figure}
\caption{\footnotesize\it
The thermal insulation problem in Example \ref{74747TGSCO}.}
\label{2234CxC6783C}
\end{figure}
The temperature of the room at the point~$x=(x_1,\dots,x_{n+1})\in{\mathbb{R}}^n\times[0,+\infty)$
will be described by a function~$u=u(x)$. At the equilibrium, no heat flow occurs inside the room.
Taking the classical ansatz that the heat flow is produced by the gradient of the temperature,
since no heat sources are placed inside the room,
we obtain that for any ball~$B\Subset{\mathbb{R}}^{n+1}_+$ the heat
flux through the boundary of~$B$ is necessarily zero and therefore, by the Divergence Theorem,
$$ 0=\int_B {\rm div}(\nabla u)=\int_B \Delta u,$$
which gives that~$\Delta u=0$ in~${\mathbb{R}}^{n+1}_+$.
Complementing this equation with the prescriptions along the walls, we thereby obtain that
the room temperature~$u$ satisfies
\begin{equation}\label{UNja}
\begin{cases}
\Delta u(x)=0& {\mbox{ for all $x\in{\mathbb{R}}^{n+1}_+$,}}\\
\partial_{x_{n+1}}u(x)=0& {\mbox{ for all $x\in\Omega$,}}\\
u(x)=u_0(x_1,\dots,x_n)& {\mbox{ for all $x\in({\mathbb{R}}^{n}\times\{0\})\setminus\Omega$.}}\\
\end{cases}
\end{equation}
As a matter of fact, the setting in~\eqref{UNja} lacks uniqueness, since
if~$u$ is a solution of~\eqref{UNja}, then so are
$$ u(x)+x_{n+1},\qquad u(x)+x_1 x_{n+1},\qquad u(x)+e^{x_1}\sin x_{n+1} ,$$
and so on. Hence, to avoid uniqueness issues, we implicitly assume that the solution
of~\eqref{UNja} is constructed by energy minimization:
in this way, the strict convexity
of the energy functional
$$\int_{{\mathbb{R}}^{n+1}_+}|\nabla u(x)|^2\,dx$$
guarantees that the solution is unique.
The question in~\eqref{QUE1} is therefore reduced to find the value of~$u$
in~$\Omega$. To do so, one can observe that the model links the Neumann and the Dirichlet
boundary data of an elliptic problem: namely,
given on the boundary the homogeneous Neumann datum in $\Omega$ and
the (possibly inhomogeneous) Dirichlet datum in~$({\mathbb{R}}^{n}\times\{0\})\setminus\Omega$,
one can consider the (minimal energy) harmonic function satisfying these conditions
and then calculate its Dirichlet datum in~$\Omega$ to give an answer to~\eqref{QUE1}.
Computationally, it is convenient to observe that equation~\eqref{UNja} is linear
and therefore can be efficiently solved by Fourier transform\index{transform!Fourier}.
Indeed, we write~$\hat u=\hat u(\xi,x_{n+1})$ as the Fourier transform of~$u$
in the variables~$(x_1,\dots,x_n)$, that is, up to normalizing constants that
we neglect for the sake of simplicity, for any~$\xi=(\xi_1,\dots,\xi_n)\in{\mathbb{R}}^n$
and any~$x_{n+1}>0$ we define
$$ \hat u(\xi,x_{n+1}):=\int_{ {\mathbb{R}}^n } u(x_1,\dots,x_n,x_{n+1})
\exp\left(- i\sum_{j=1}^n x_j\xi_j\right)\,dx_1\dots dx_n.$$
Hence, integrating by parts, one sees that, for all~$k\in\{1,\dots,n\}$,
\begin{eqnarray*}
&&\int_{ {\mathbb{R}}^n } \partial_{x_k} u(x_1,\dots,x_n,x_{n+1})
\exp\left( -i\sum_{j=1}^n x_j\xi_j\right)\,dx_1\dots dx_n\\&=&i\xi_k
\int_{ {\mathbb{R}}^n } u(x_1,\dots,x_n,x_{n+1})
\exp\left(- i\sum_{j=1}^n x_j\xi_j\right)\,dx_1\dots dx_n\\&=&i\xi_k\, \hat u(\xi,x_{n+1})
.\end{eqnarray*}
Iterating this argument, one obtains that, for all~$k\in\{1,\dots,n\}$,
\begin{equation}\label{a78sddxvvvu}
\begin{split}
&\int_{ {\mathbb{R}}^n } \partial_{x_k}^2 u(x_1,\dots,x_n,x_{n+1})
\exp\left( -i\sum_{j=1}^n x_j\xi_j\right)\,dx_1\dots dx_n\\&\qquad=(
i\xi_k)^2 \hat u(\xi,x_{n+1})=-\xi_k^2\,
\hat u(\xi,x_{n+1}),\end{split}\end{equation}
and hence, summing over~$k$ and taking into account also the derivatives in~$x_{n+1}$,
\begin{eqnarray*}
0&=&\widehat{\Delta u}(\xi,x_{n+1})\\&=& \sum_{k=1}^{n+1}
\int_{ {\mathbb{R}}^n } \partial_{x_k}^2 u(x_1,\dots,x_n,x_{n+1})
\exp\left( -i\sum_{j=1}^n x_j\xi_j\right)\,dx_1\dots dx_n
\\&=&- \sum_{k=1}^n\xi_k^2
\hat u(\xi,x_{n+1})+
\int_{ {\mathbb{R}}^n } \partial_{x_{n+1}}^2 u(x_1,\dots,x_n,x_{n+1})
\exp\left( -i\sum_{j=1}^n x_j\xi_j\right)\,dx_1\dots dx_n
\\ &=& -|\xi|^2 \hat u(\xi,x_{n+1})+\partial^2_{x_{n+1}}
\hat u(\xi,x_{n+1}).
\end{eqnarray*}
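The rule just used (differentiation in~$x_k$ becoming multiplication by~$i\xi_k$ on the Fourier side, hence by~$-\xi_k^2$ for second derivatives) can be checked numerically. The sketch below is only an illustration, not part of the derivation: it samples a Gaussian on a grid, approximates the transform by the FFT, and compares the transform of the analytic second derivative with the multiplier~$-\xi^2$ applied to the transform of the function. All grid parameters are chosen for convenience.

```python
import numpy as np

# Grid on [-L/2, L/2); the Gaussian below is numerically negligible at the ends.
N, L = 1024, 40.0
dx = L / N
x = -L / 2 + dx * np.arange(N)

u = np.exp(-x ** 2 / 2)                    # sample function
d2u = (x ** 2 - 1) * np.exp(-x ** 2 / 2)   # its second derivative, computed analytically

xi = 2 * np.pi * np.fft.fftfreq(N, d=dx)   # continuum (angular) frequencies on the grid
lhs = np.fft.fft(d2u)                      # transform of the second derivative
rhs = -xi ** 2 * np.fft.fft(u)             # multiplier -xi^2 on the transform of u
err = dx * np.max(np.abs(lhs - rhs))       # identical phase factors cancel in the comparison
```

Both sides agree up to discretization error, which is superexponentially small for a well-localized Gaussian.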
Consequently, for any~$x_{n+1}>0$,
\begin{equation*}
\begin{split}
-\widehat{ \partial_{x_{n+1}} u(\cdot,0)}
&=-\partial_{x_{n+1}} \hat u(\xi,0)
\\ &=-\partial_{x_{n+1}} \hat u(\xi,x_{n+1})+\int_0^{x_{n+1}}
\partial^2_{x_{n+1}}
\hat u(\xi,y)\,dy\\
&=-\partial_{x_{n+1}} \hat u(\xi,x_{n+1})+\int_0^{x_{n+1}}
|\xi|^2 \hat u(\xi,y)\,dy.
\end{split}
\end{equation*}
Among the solutions of this equation, the one selected by energy minimization (which decays as~$x_{n+1}\to+\infty$) is
\begin{equation*}
\hat u(\xi,x_{n+1})=\hat u_0(\xi) \,e^{-|\xi|\,x_{n+1}},\end{equation*}
and hence
\begin{equation}\label{FT:X} \partial_{x_{n+1}}\hat u(\xi,x_{n+1})=-|\xi|\,\hat u_0(\xi) \,e^{-|\xi|\,x_{n+1}}=
-|\xi|\,\hat u(\xi,x_{n+1}).\end{equation}
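For $n=1$, the multiplier~$e^{-|\xi|\,x_{n+1}}$ appearing above corresponds, on the spatial side, to convolution with the Poisson kernel of the half-plane, $P_y(x)=\frac{y}{\pi(x^2+y^2)}$, whose Fourier transform is~$e^{-|\xi| y}$. This allows for a numerical cross-check of the formula; the sketch below is only an illustration (the datum and all grid parameters are ad hoc choices).

```python
import numpy as np

# Wall R x {0} with n = 1, a localized temperature datum, extension height y.
N, L, y = 1024, 40.0, 1.0
dx = L / N
x = -L / 2 + dx * np.arange(N)
u0 = np.exp(-x ** 2)

# Harmonic extension to height y via the Fourier multiplier e^{-|xi| y}.
xi = 2 * np.pi * np.fft.fftfreq(N, d=dx)
u_fft = np.fft.ifft(np.exp(-np.abs(xi) * y) * np.fft.fft(u0)).real

# Same extension via convolution with the Poisson kernel y / (pi (x^2 + y^2)).
diff = x[:, None] - x[None, :]
u_conv = dx * (y / (np.pi * (diff ** 2 + y ** 2))) @ u0

err = np.max(np.abs(u_fft - u_conv))
```

The small residual discrepancy comes from periodization and Riemann-sum errors; the extension is also strictly below the maximum of the datum, as expected from averaging.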
Combining this with the homogeneous Neumann condition in~\eqref{UNja} we obtain
that, if~${\mathcal{F}}^{-1}$
denotes the inverse Fourier transform, then
\begin{equation}\label{7ajsAHHBAB}
{\mathcal{F}}^{-1}\Big(|\xi|\,\hat u(\cdot,0)\Big)
=0\qquad{\mbox{ in }}\Omega.
\end{equation}
It is convenient to write this in terms of the fractional Laplacian.
As a matter of fact, for every~$s\in[0,1]$ and a (sufficiently smooth and decaying)
function~$w:{\mathbb{R}}^n\to{\mathbb{R}}$, one can define
\begin{equation}\label{7A-DE}
(-\Delta)^s w:= {\mathcal{F}}^{-1} \Big(|\xi|^{2s} \hat w\Big).
\end{equation}
We observe that when~$s=1$ this definition, up to normalization constants,
gives the classical operator~$-\Delta$, thanks to~\eqref{a78sddxvvvu}.
By a direct computation (see e.g. Proposition~3.3 in~\cite{MR2944369})
one also sees that for every~$s\in(0,1)$
the operator in~\eqref{7A-DE}
can be written, up to a positive normalizing constant which we again omit, in integral form as
\begin{equation}\label{7A-DE2}
(-\Delta)^s w(x)=\int_{{\mathbb{R}}^n}\frac{2w(x)-w(x+y)-w(x-y)}{|y|^{n+2s}}\,dy.
\end{equation}
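Since we drop all normalizing constants, a convenient numerical sanity check of~\eqref{7A-DE2} against~\eqref{7A-DE} is scale invariant: applying the singular integral to~$w(x)=\cos(kx)$ at~$x=0$ and comparing~$k=2$ with~$k=1$ should produce the ratio~$2^{2s}$ predicted by the multiplier~$|\xi|^{2s}$, independently of the omitted constant. The sketch below is only an illustration (cutoffs and grids are ad hoc choices).

```python
import numpy as np

s = 0.5  # fractional order; the integrand is then bounded near y = 0

def int_form_cos_at_0(k, Y=4000.0, n_pts=2_000_000):
    # The integral in the text applied to w(x) = cos(k x), evaluated at x = 0:
    # the numerator 2 w(0) - w(y) - w(-y) equals 2 (1 - cos(k y)).
    y = np.linspace(Y / n_pts, Y, n_pts)
    vals = 2.0 * (1.0 - np.cos(k * y)) / y ** (1.0 + 2.0 * s)
    dy = y[1] - y[0]
    half_line = np.sum(0.5 * (vals[1:] + vals[:-1])) * dy   # trapezoid rule on (0, Y]
    half_line += 1.0 / (s * Y ** (2.0 * s))                 # tail: the cosine averages out
    return 2.0 * half_line                                  # the integrand is even in y

# Spectral prediction: the ratio |2|^{2s} / |1|^{2s} = 2^{2s},
# independent of the omitted normalizing constant.
ratio = int_form_cos_at_0(2.0) / int_form_cos_at_0(1.0)
```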
Comparing~\eqref{7ajsAHHBAB} with~\eqref{7A-DE}, we obtain that
equation~\eqref{UNja} can be written as a fractional equation for a function of~$n$
variables (rather than a classical, local equation for
a function of~$n+1$
variables): indeed, if we set~$v(x_1,\dots,x_n):=u(x_1,\dots,x_n,0)$,
then
\begin{equation}\label{QUE1BB}
\begin{cases}
\sqrt{-\Delta} \,v =0 &{\mbox{ in $\Omega_0$}},\\
v =u_0 &{\mbox{ in ${\mathbb{R}}^n\setminus\Omega_0$}},
\end{cases}
\end{equation}
where~$\Omega_0\subset{\mathbb{R}}^n$ is such that~$\Omega=\Omega_0\times\{0\}$.
The solution of the question posed in~\eqref{QUE1} can then
be obtained by considering the values of the solution~$v$ of~\eqref{QUE1BB}
in~$\Omega_0$.
The equivalence between~\eqref{UNja} and~\eqref{QUE1BB} (which can also
be extended and generalized in many forms)
is often quite useful, since it connects classical
equations and fractional equations and permits the methods typical of one research context
to be applied in the other.
}\end{example}
\begin{example}[The thin obstacle problem\index{thin obstacle}]\label{THIN}
{\rm The classical obstacle problem considers an elastic membrane possibly subject to an external
force field
which is constrained above an obstacle.
The elasticity of the membrane makes its graph a
supersolution in the whole domain and a solution wherever it does not touch the obstacle.
For instance, if the vertical force field is denoted by~$h:{\mathbb{R}}^n\to{\mathbb{R}}$
and the obstacle is given by the subgraph of a function~$\varphi:{\mathbb{R}}^n\to{\mathbb{R}}$,
considering a global problem for the sake of simplicity,
the discussion above formalizes in the system of equations
\begin{equation*}
\begin{cases}
\Delta u\le h & {\mbox{ in }}{\mathbb{R}}^n,\\
u\ge\varphi & {\mbox{ in }}{\mathbb{R}}^n,\\
\Delta u=h & {\mbox{ in }}{\mathbb{R}}^n\cap\{u>\varphi\}.
\end{cases}
\end{equation*}
As a variation of this problem, one can consider the case in which
the obstacle is ``thin'', i.e. it is supported along a manifold of smaller dimension -- concretely,
in our case, of codimension~$1$.
For concreteness, one can consider the case in which the obstacle
is supported on the hyperplane~$\{x_n=0\}$. In this case,
one considers the subgraph in~$\{x_n=0\}$ of a function~$\varphi:{\mathbb{R}}^{n-1}
\to{\mathbb{R}}$ and requires the solution to lie above it, see Figure~\ref{TH}.
\begin{figure}
\caption{\footnotesize\it
The thin obstacle problem.}
\label{TH}
\end{figure}
Combining the thin
obstacle constraint with the elasticity of the membrane, we model this problem
by the system of equations
\begin{equation}\label{CO:AK}
\begin{cases}
\Delta u\le h & {\mbox{ in }}{\mathbb{R}}^n,\\
u\ge\varphi & {\mbox{ in }}\{x_n=0\},\\
\Delta u=h & {\mbox{ in }}{\mathbb{R}}^n\setminus (\{x_n=0\}\cap\{u=\varphi\}).
\end{cases}
\end{equation}
For concreteness, we will take~$h:=0$ from now on
(the general case can be reduced to this by subtracting a particular solution).
Also, given the structure of \eqref{CO:AK},
we
will focus on the case of even solutions with respect to the hyperplane~$\{x_n=0\}$,
namely
$$u(x_1,\dots,x_{n-1},-x_n)=u(x_1,\dots,x_{n-1},x_n).$$
We observe that, if~$\psi\in C^\infty_0({\mathbb{R}}^n,\,[0,+\infty))$, then
\begin{eqnarray*}
\int_{ {\mathbb{R}}^n } \nabla u(x)\cdot\nabla\psi(x)\,dx&=&
\int_{ \{x_n>0\} } \nabla u(x)\cdot\nabla\psi(x)\,dx+
\int_{ \{x_n<0\} } \nabla u(x)\cdot\nabla\psi(x)\,dx\\
&=&
\int_{ \{x_n>0\} } {\rm div}\,\big(\psi(x)\,\nabla u(x)\big)\,dx+
\int_{ \{x_n<0\} } {\rm div}\,\big(\psi(x)\,\nabla u(x)\big)\,dx
\\ &=&-2\int_{ \{x_n=0\} } \psi(x)\,\frac{\partial u}{\partial x_n}(x_1,\dots,x_{n-1},0^+)
\,d{\mathcal{H}}^{n-1}(x),
\end{eqnarray*}
where~${\mathcal{H}}^{n-1}$ denotes the standard
Hausdorff measure of codimension~$1$.
Therefore, the condition~$\Delta u\le0$ in~\eqref{CO:AK}
is distributionally equivalent to
$$ \frac{\partial u}{\partial x_n}(x_1,\dots,x_{n-1},0^+)\le0.$$
Similarly, given any point~$p\in \{x_n=0\}\cap\{u>\varphi\}$,
one can consider functions~$\psi\in C^\infty_0(B_\rho(p))$,
with~$\rho>0$ sufficiently small such that~$B_\rho(p)\subseteq\{u>\varphi\}$,
and thus find that
$$ \frac{\partial u}{\partial x_n}(x_1,\dots,x_{n-1},0^+)=0\qquad{\mbox{in}}\qquad
\{x_n=0\}\cap\{u>\varphi\}.$$
Hence, dropping the notation~$0^+$ for the sake of brevity,
one can write~\eqref{CO:AK} in the form
\begin{equation}\label{CO:AK2}
\begin{cases}
\Delta u=0 & {\mbox{ in }}{\mathbb{R}}^n\setminus (\{x_n=0\}\cap\{u=\varphi\}),\\
u\ge\varphi & {\mbox{ in }}\{x_n=0\},\\
\displaystyle\frac{\partial u}{\partial x_n}\le0& {\mbox{ in }}\{x_n=0\},\\
\displaystyle\frac{\partial u}{\partial x_n}=0& {\mbox{ in }}\{x_n=0\}\cap\{u>\varphi\}.
\end{cases}
\end{equation}
Interestingly, equation~\eqref{CO:AK2} can be written in a fractional Laplacian form.
Indeed, by using the Fourier transform in the variables~$(x_1,\dots,x_{n-1})$
(see e.g.~\eqref{FT:X}
and~\eqref{7A-DE}), we know that, up to normalization constants,
$$ \frac{\partial u}{\partial x_n}(x_1,\dots,x_{n-1},0^+)=-{\mathcal{F}}^{-1}
\left(
|(\xi_1,\dots,\xi_{n-1})| \hat u(\xi_1,\dots,\xi_{n-1},0)\right)=-\sqrt{-\Delta} u(x_1,\dots,x_{n-1},0).$$
Consequently, writing~$v:=u(x_1,\dots,x_{n-1},0)$, we can interpret~\eqref{CO:AK2}
as a fractional equation on~${\mathbb{R}}^{n-1}$, namely
\begin{equation*}
\begin{cases}
v\ge\varphi & {\mbox{ in }}{\mathbb{R}}^{n-1},\\
\sqrt{-\Delta}v\ge0& {\mbox{ in }}{\mathbb{R}}^{n-1},\\
\sqrt{-\Delta}v=0& {\mbox{ in }}\{v>\varphi\}.
\end{cases}
\end{equation*}
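A minimal numerical sketch of this kind of variational inequality, not taken from the text: we work on a periodic grid (a toy analogue of the problem above, with exterior datum~$u_0=0$ outside an arc~$\Omega_0$), realize~$\sqrt{-\Delta}$ through its Fourier multiplier~$|k|$, and minimize the associated quadratic energy by projected gradient descent. The obstacle, grid size and iteration count are purely illustrative choices.

```python
import numpy as np

# Spectral realization of sqrt(-Delta) on the 2*pi-periodic grid.
N = 256
x = 2 * np.pi * np.arange(N) / N
absk = np.abs(np.fft.fftfreq(N, d=1.0 / N))  # integer frequencies |k|

def sqrt_minus_lap(v):
    return np.fft.ifft(absk * np.fft.fft(v)).real

# Omega_0 is an arc, the exterior datum is 0, the obstacle a parabolic cap.
inside = np.abs(x - np.pi) < 1.0
phi = 0.5 - (x - np.pi) ** 2   # positive only near the center of Omega_0

def project(v):
    # Projection onto { v = 0 outside Omega_0, v >= phi inside Omega_0 }.
    return np.where(inside, np.maximum(v, phi), 0.0)

# Projected gradient descent for the energy (1/2) <v, sqrt(-Delta) v>.
tau = 1.0 / absk.max()          # step size: inverse of the largest multiplier
v = project(np.zeros(N))
for _ in range(20000):
    v = project(v - tau * sqrt_minus_lap(v))

res = sqrt_minus_lap(v)         # should vanish on the non-contact region
```

At convergence one expects the discrete analogue of the complementarity conditions above: $v\ge\varphi$ in~$\Omega_0$, $\sqrt{-\Delta}\,v\ge0$ there, and $\sqrt{-\Delta}\,v\approx0$ where~$v>\varphi$.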
We do not go into the details of the classical and recent
developments of the mathematical theory of the thin obstacle problem
and of the many topics related to it:
for this, see e.g.~\cites{MR2962060, MR3709717}.}\end{example}
\begin{example}[The Signorini problem\index{problem!Signorini}]
{\rm In 1959, Antonio Signorini posed an engineering
problem consisting in finding the equilibrium configuration of
an elastic body, resting on a
rigid frictionless
surface (see~\cite{MR0118021}).
Gaetano Fichera provided a rigorous mathematical framework
in which the problem is well-posed
in 1963, see~\cite{MR0176661}. Interestingly, this solution
was found just a few weeks before the death of Signorini,
who spent some of his last words to celebrate this discovery
as his {\em greatest contentment}.
The historical description of these moments is commemorated
in the survey
{\em La nascita della teoria delle disequazioni variazionali ricordata dopo trent'anni},
of the 1995 {\em Atti dei Convegni Lincei}.
Here, we recall a simplified version of the problem, its relation with
the thin obstacle problem, and its link with the fractional Laplace operator.
To this end, we introduce some notation from linear elasticity.
Namely, given a material body~$A\subset{\mathbb{R}}^n$ at rest,
one describes its equilibrium configuration
under suitable forces by~$B=y(A)$, where~$y:{\mathbb{R}}^n\to{\mathbb{R}}^n$
is a (suitably regular and invertible) map (or, equivalently, one can consider~$B$ the
set at rest and~$A$ the equilibrium configuration in the ``real world'', up
to replacing~$y$ with its inverse).
In this setting, the displacement vector is defined by
\begin{equation}\label{Upda}
U(x):=y(x)-x.\end{equation}
We denote the components of~$U$ by~$U^{(1)},\dots,U^{(n)}$.
The ansatz of the linear elasticity theory is that, as a consequence
of Hooke's Law\index{law!Hooke's}, the infinitesimal elastic energy is proportional to the symmetrized
gradient\index{symmetrized gradient} of~$U$. That is, for every~$i,j\in\{1,\dots,n\}$, one defines the strain
tensor\index{strain tensor}
\begin{equation}\label{CALD} ({\mathcal{D}}U)_{ij}:=\frac{\partial{U^{(i)}}}{\partial x_j}+
\frac{\partial{U^{(j)}}}{\partial x_i}\end{equation}
and sets
$$ |{\mathcal{D}}U|^2:=\sum_{i,j=1}^n|({\mathcal{D}}U)_{ij}|^2=
\sum_{i,j=1}^n\left(
\frac{\partial{U^{(i)}}}{\partial x_j}+\frac{\partial{U^{(j)}}}{\partial x_i}\right)^2.$$
With this notation, the elastic component of the energy is
\begin{equation}\label{781:19haj} {\mathcal{E}}(U):=\frac12
\int_{A} |{\mathcal{D}}U(x)|^2\,dx.\end{equation}
The differential operator governing the elastostatic equations
(known in the literature as the Navier--Cauchy equations, at least in the special
form of them that we take into account in our simplified approach)
is obtained from the first variation of the elastic energy functional in~\eqref{781:19haj}.
Since
\begin{eqnarray*}&&\frac{
|{\mathcal{D}}(U+\epsilon\Phi)|^2 -|{\mathcal{D}}U|^2}{2}\\&=&\frac12
\sum_{i,j=1}^n\left(
\frac{\partial{U^{(i)}}}{\partial x_j}+\frac{\partial{U^{(j)}}}{\partial x_i}+\epsilon
\left(\frac{\partial{\Phi^{(i)}}}{\partial x_j}+
\frac{\partial{\Phi^{(j)}}}{\partial x_i}\right)\right)^2 -\frac{|{\mathcal{D}}U|^2}2\\&=&
\epsilon\sum_{i,j=1}^n
\left(
\frac{\partial{U^{(i)}}}{\partial x_j}+\frac{\partial{U^{(j)}}}{\partial x_i}\right)
\left(\frac{\partial{\Phi^{(i)}}}{\partial x_j}+
\frac{\partial{\Phi^{(j)}}}{\partial x_i}\right)+o(\epsilon)\\&=&
\epsilon\sum_{i,j=1}^n
\left(
\frac{\partial{U^{(i)}}}{\partial x_j}
\frac{\partial{\Phi^{(i)}}}{\partial x_j}+
\frac{\partial{U^{(j)}}}{\partial x_i}
\frac{\partial{\Phi^{(i)}}}{\partial x_j}+
\frac{\partial{U^{(i)}}}{\partial x_j}
\frac{\partial{\Phi^{(j)}}}{\partial x_i}+
\frac{\partial{U^{(j)}}}{\partial x_i}
\frac{\partial{\Phi^{(j)}}}{\partial x_i}
\right)+o(\epsilon)\\
&=&
\epsilon\sum_{i,j=1}^n
\left(
\frac{\partial{U^{(i)}}}{\partial x_j}
\frac{\partial{\Phi^{(i)}}}{\partial x_j}+
\frac{\partial{U^{(j)}}}{\partial x_i}
\frac{\partial{\Phi^{(i)}}}{\partial x_j}+
\frac{\partial{U^{(j)}}}{\partial x_i}
\frac{\partial{\Phi^{(i)}}}{\partial x_j}+
\frac{\partial{U^{(i)}}}{\partial x_j}
\frac{\partial{\Phi^{(i)}}}{\partial x_j}
\right)+o(\epsilon)\\&=&
2\epsilon\sum_{i,j=1}^n
\left(
\frac{\partial{U^{(i)}}}{\partial x_j}+
\frac{\partial{U^{(j)}}}{\partial x_i}\right)\frac{\partial{\Phi^{(i)}}}{\partial x_j}
+o(\epsilon),
\end{eqnarray*}
it follows from~\eqref{781:19haj} that, for all~$\Phi\in C^\infty_0(A,{\mathbb{R}}^n)$,
\begin{equation}\label{DElap}
\begin{split}
\langle D{\mathcal{E}}(U),\Phi\rangle\,&=
2\sum_{i,j=1}^n\int_A
\left(
\frac{\partial{U^{(i)}}}{\partial x_j}(x)+
\frac{\partial{U^{(j)}}}{\partial x_i}(x)\right)\frac{\partial{\Phi^{(i)}}}{\partial x_j}(x)\,dx\\
&=
-2\sum_{i,j=1}^n\int_A\frac{\partial}{\partial x_j}
\left(
\frac{\partial{U^{(i)}}}{\partial x_j}(x)+
\frac{\partial{U^{(j)}}}{\partial x_i}(x)\right)\;\Phi^{(i)}(x)\,dx\\
&=
-2\sum_{i=1}^n\int_A
\left(
\Delta{U^{(i)}}(x)+
\frac{\partial}{\partial x_i}{{\rm div }}\,U(x)\right)\;\Phi^{(i)}(x)\,dx.
\end{split}
\end{equation}
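Since ${\mathcal{E}}$ is quadratic, the formula for~$\langle D{\mathcal{E}}(U),\Phi\rangle$ can be verified exactly (up to roundoff) by a centered difference quotient in~$\epsilon$, provided both sides are computed with the same discrete derivatives. The check below is an illustration on a periodic grid in dimension~$2$; the fields~$U$ and~$\Phi$ are arbitrary smooth choices.

```python
import numpy as np

# 2D periodic grid; centered differences play the role of partial derivatives.
N = 64
h = 2 * np.pi / N
grid = h * np.arange(N)
X, Y = np.meshgrid(grid, grid, indexing="ij")

def d(f, j):
    # Centered periodic difference approximating d f / d x_{j+1}.
    return (np.roll(f, -1, axis=j) - np.roll(f, 1, axis=j)) / (2 * h)

# A displacement U and a perturbation Phi (smooth periodic fields, illustrative).
U = [np.sin(X) * np.cos(Y), np.cos(2 * X) * np.sin(Y)]
Phi = [np.cos(X) * np.sin(2 * Y), np.sin(X + Y)]

def energy(V):
    # E(V) = (1/2) int |D V|^2 with (D V)_{ij} = dV^i/dx_j + dV^j/dx_i.
    tot = sum(np.sum((d(V[i], j) + d(V[j], i)) ** 2)
              for i in range(2) for j in range(2))
    return 0.5 * tot * h * h

# The formula derived above: <D E(U), Phi> = 2 sum_{ij} int (D U)_{ij} dPhi^i/dx_j.
first_var = 2.0 * h * h * sum(
    np.sum((d(U[i], j) + d(U[j], i)) * d(Phi[i], j))
    for i in range(2) for j in range(2))

# E is quadratic, so a centered difference quotient in epsilon is exact.
eps = 1e-3
Ep = energy([U[i] + eps * Phi[i] for i in range(2)])
Em = energy([U[i] - eps * Phi[i] for i in range(2)])
diff_quot = (Ep - Em) / (2 * eps)
```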
One can also take into account the effect of a force field~$f=(f^{(1)},\dots,f^{(n)}):
{\mathbb{R}}^n\to
{\mathbb{R}}^n$ which is acting on the material body. If we think that the map~$y$
is the outcome of the deformation produced by the force field,
one can consider the infinitesimal work associated to this force
as given approximatively, for small displacements,
by the quantity
$$ f(x)\cdot (y(x)-x)\,dx=f(x)\cdot U(x)\,dx,$$
where the setting in~\eqref{Upda} has been exploited.
We obtain in this way a potential energy of the
form
$$ {\mathcal{P}}(U):=\int_A f(x)\cdot U(x)\,dx.$$
We see that
\begin{equation}\label{ed:AK} \langle\nabla{\mathcal{P}}(U),\Phi\rangle=\int_A
f(x)\cdot\Phi(x)\,dx,\end{equation}
for all $\Phi\in C^\infty_0(A,{\mathbb{R}}^n)$.
{F}rom this and~\eqref{DElap}, imposing that~$U$ is a critical point of the total energy~${\mathcal{E}}+{\mathcal{P}}$, we obtain the elastic equation
\begin{equation}\label{ELAEQ}
\begin{split}&
\Delta{U^{(i)}}(x)+
\frac{\partial}{\partial x_i}{{\rm div }}\,U(x)
=\frac12\,f^{(i)}(x),\\&{\mbox{for all $i\in\{1,\dots,n\}$ and all~$x\in A$.}}\end{split}
\end{equation}
We also assume that~$y(\partial A)=\partial B$ and that~$B$
rests on a rigid surface, say a frictionless table.
In this case, we can write that~$A\subset\{x_n\ge0\}$
and~$B\subset\{ y_n\ge0\}$, that is~$y^{(n)}(x)\ge0$
for all~$x\in A$, and therefore~$U^{(n)}(x)+x_n\ge0$ for all~$x\in A$.
The contact set between~$B$ and the table is described by the points lying in~$T:=(\partial B)
\cap\{y_n=0\}$, and the contact set between~$A$ and the table is described by
the points lying in~$S:=(\partial A)
\cap\{x_n=0\}$. We describe~$S$ by distinguishing two classes of points,
namely the points~$S_1$
which do not leave the contact set,
and the points~$S_2$
which leave the contact set, namely
\begin{equation}\label{S1S2s}
\begin{split}
S_1\,&:=\{ x\in S {\mbox{ s.t. }} y^{(n)}(x)=0\}\\
&=\{ x\in S {\mbox{ s.t. }} U^{(n)}(x)+x_n=0\}\\
&=\{ x\in S {\mbox{ s.t. }} U^{(n)}(x)=0\}\\
{\mbox{and }}\qquad
S_2\,&:=\{ x\in S {\mbox{ s.t. }} y^{(n)}(x)>0\}\\
&=\{ x\in S {\mbox{ s.t. }} U^{(n)}(x)+x_n>0\}\\
&=\{ x\in S {\mbox{ s.t. }} U^{(n)}(x)>0\}.
\end{split}\end{equation}
We take into account the effect of a surface tension, acting by means
of a force field~$g:{\mathbb{R}}^n\to{\mathbb{R}}^n$.
In this case, the infinitesimal work, for small displacements, is approximately given by
$$ g(x)\cdot (y(x)-x)\,d{\mathcal{H}}^{n-1}(x)=g(x)\cdot U(x)\,d{\mathcal{H}}^{n-1}(x).$$
Therefore, we describe the surface tension energy
effect by an energy functional of the form
$$ {\mathcal{S}}(U):=
\int_{\partial A} g(x)\cdot U(x)\,d{\mathcal{H}}^{n-1}(x).
$$
We stress that if~$x_0\in S_2$ there exist~$\rho>0$ and~$\epsilon_0>0$ such that for all~$\epsilon
\in[-\epsilon_0,\epsilon_0]$
and~$\Phi\in C^\infty_0( B_\rho(x_0),{\mathbb{R}}^n)$
the perturbation~$U+\epsilon \Phi$ is admissible,
in the sense that it maps~$A$ into~$y(A)\subset\{y_n\ge0\}$.
On the other hand, if~$\Phi\in C^\infty_0( {\mathbb{R}}^n,{\mathbb{R}}^n)$
and~$\Phi^{(n)}\ge0$ we have that
the perturbation~$U+\epsilon \Phi$ is admissible for all~$\epsilon\ge0$,
since, for every~$x\in A$,
$$ 0\le y^{(n)}(x)=U^{(n)}(x)+x_n\le U^{(n)}(x)+\epsilon \Phi^{(n)}(x)+x_n.$$
Furthermore,
$$ \langle D{\mathcal{S}}(U),\Phi\rangle
=\int_{\partial A} g(x)\cdot\Phi(x)\,d{\mathcal{H}}^{n-1}(x),$$
for all~$\Phi\in C^\infty_0({\mathbb{R}}^n,{\mathbb{R}}^n)$.
Therefore, recalling~\eqref{DElap}, \eqref{ed:AK}
and~\eqref{ELAEQ}, it follows that
the variation of the full energy with respect to a boundary
perturbation~$\Phi\in C^\infty_0({\mathbb{R}}^n,{\mathbb{R}}^n)$
is equal to
\begin{eqnarray*}
&&
2\sum_{i,j=1}^n\int_A
\left(
\frac{\partial{U^{(i)}}}{\partial x_j}(x)+
\frac{\partial{U^{(j)}}}{\partial x_i}(x)\right)\frac{\partial{\Phi^{(i)}}}{\partial x_j}(x)\,dx
\\ &&\qquad+
\int_A
f(x)\cdot\Phi(x)\,dx+
\int_{\partial A} g(x)\cdot\Phi(x)\,d{\mathcal{H}}^{n-1}(x)\\
&=&
2\sum_{i,j=1}^n\int_A \frac{\partial}{\partial x_j}\left(
\left(
\frac{\partial{U^{(i)}}}{\partial x_j}(x)+
\frac{\partial{U^{(j)}}}{\partial x_i}(x)\right)\,\Phi^{(i)}(x)\right)\,dx
\\ &&\qquad+
\int_{\partial A} g(x)\cdot\Phi(x)\,d{\mathcal{H}}^{n-1}(x)
\\ &=&
2\sum_{i=1}^n\int_A {\rm div}\,\left( \Phi^{(i)}(x)\,
\left(
\nabla U^{(i)}(x)+
\frac{\partial{U}}{\partial x_i}(x)\right)\right)\,dx
\\ &&\qquad+
\int_{\partial A} g(x)\cdot\Phi(x)\,d{\mathcal{H}}^{n-1}(x)
\\ &=&
2\sum_{i=1}^n\int_{\partial A} \Phi^{(i)}(x)\,
\left(
\nabla U^{(i)}(x)+
\frac{\partial{U}}{\partial x_i}(x)\right)\cdot\nu(x)\,d{\mathcal{H}}^{n-1}(x)
\\ &&\qquad+\sum_{i=1}^n
\int_{\partial A} g^{(i)}(x)\;\Phi^{(i)}(x)\,d{\mathcal{H}}^{n-1}(x)
\\ &=&
2\sum_{i,k=1}^n\int_{\partial A} \Phi^{(i)}(x)\,({\mathcal{D}} U)_{ik}(x)\,
\nu_k(x)\,d{\mathcal{H}}^{n-1}(x)
\\ &&\qquad+\sum_{i=1}^n
\int_{\partial A} g^{(i)}(x)\;\Phi^{(i)}(x)\,d{\mathcal{H}}^{n-1}(x),
\end{eqnarray*}
where~$\nu$ is the exterior normal of~$A$.
This and the admissibility discussion of the perturbation
gives the boundary conditions
\begin{equation}\label{RA11}\begin{split}&
\sum_{k=1}^n({\mathcal{D}} U)_{nk}(x)\,
\nu_k(x)
=-\frac12\,
g^{(n)}(x)\qquad
{\mbox{for all $x\in S_2$}}\\
{\mbox{and }}\qquad &\sum_{k=1}^n({\mathcal{D}} U)_{nk}(x)\,
\nu_k(x)
\ge -\frac12\,
g^{(n)}(x)\qquad
{\mbox{for all $x\in S_1$.}}
\end{split}\end{equation}
If the surface forces are tangential to the boundary of the material body,
we have that~$g^{(n)}=0$ on the surface of the table, and
also the normal at these points is vertical,
therefore~\eqref{RA11}
reduces to
\begin{equation}\label{RA1167}\begin{split}&
({\mathcal{D}} U)_{nn}(x)
=0\qquad
{\mbox{for all $x\in S_2$}}\\
{\mbox{and }}\qquad &({\mathcal{D}} U)_{nn}(x)
\le0\qquad
{\mbox{for all $x\in S_1$.}}
\end{split}\end{equation}
Interestingly, the problem is naturally endowed with
``ambiguous'' boundary conditions\index{ambiguous boundary conditions}, in the sense that
there are two alternative boundary conditions in~\eqref{RA1167}
that are prescribed at the boundary, in terms of either equalities
or inequalities, and it is not a priori known
which condition is satisfied at each point. Also, by~\eqref{CALD},
we can write~\eqref{RA1167} in the form
\begin{equation}\label{RA1167-BIS}\begin{split}&
\frac{\partial U^{(n)}}{\partial x_n}(x)
=0\qquad
{\mbox{for all $x\in S_2$}}\\
{\mbox{and }}\qquad &\frac{\partial U^{(n)}}{\partial x_n}(x)
\le0\qquad
{\mbox{for all $x\in S_1$.}}
\end{split}\end{equation}
We now consider a magnified version of this picture at ``nice'' contact points.
For this, for simplicity we suppose that
\begin{equation}\label{Gh781-48}
\begin{split}&
0\in\partial A,\qquad B_{\rho}(\rho e_n)\subseteq A,\qquad
B_{\rho}(-\rho e_1)\cap \{x_n=0\}\subseteq S_1\\&\qquad{\mbox{and}}\qquad
B_{\rho}(\rho e_1)\cap \{x_n=0\}\subseteq S_2,
\end{split}\end{equation}
for some~$\rho>0$, see Figure~\ref{CON}.
\begin{figure}
\caption{\footnotesize\it
Points leaving the contact set.}
\label{CON}
\end{figure}
Given~$\epsilon>0$, to be taken small in what follows,
we consider the transformation
$$ \Theta_\epsilon(x):=\left( \frac{x_1}{\epsilon},\dots,\frac{x_{n-1}}{\epsilon},
\frac{\sqrt{2}\;x_n}{\epsilon}\right).$$
By~\eqref{Gh781-48}, we have that, locally,
\begin{equation}\label{ConaVAU}
\begin{split}&
{\mbox{$\Theta_\epsilon(A)$
converges to~$\{x_n>0\}$,}}\\&
{\mbox{$\Theta_\epsilon(S_1)$
converges to~$\{x_n=0\}\cap\{x_1<0\}$}}\\
{\mbox{and }}\;&{\mbox{$\Theta_\epsilon(S_2)$
converges to~$\{x_n=0\}\cap\{x_1>0\}$,}} \end{split}\end{equation}
up to negligible sets.
Moreover, for each~$i\in\{1,\dots,n\}$, we define
\begin{equation} \label{89AK0} V_\epsilon^{(i)}(x):=
\epsilon^2\,\left[ U^{(i)}\big(\Theta_\epsilon(x)\big)-U^{(i)}(0)-
\sum_{k=1}^n \frac{\partial U^{(i)}}{\partial x_k}(0)\,\Theta^{(k)}_\epsilon(x)
\right].\end{equation}
Under a quadratic bound on~$U$, one can make the ansatz that~$V_\epsilon^{(i)}$
behaves well as~$\epsilon\searrow0$, and we will indeed assume that~$V_\epsilon=(
V_\epsilon^{(1)}, \dots,V_\epsilon^{(n)})$
converges smoothly to some~$V=(V^{(1)},\dots,V^{(n)})$. We also
set~$v:=V^{(n)}$.
We point out that, by~\eqref{RA1167-BIS} and~\eqref{Gh781-48},
we can take sequences of points~$p_j\in S_1$ and~$q_j\in S_2$ which converge to the origin as~$j\to+\infty$,
and, assuming that the solution is regular enough, write that
\begin{equation}\label{89AK}
\begin{split}&
0=\lim_{j\to+\infty}{U^{(n)}}(p_j)=U^{(n)}(0)
\\{\mbox{and }}\qquad& 0=\lim_{j\to+\infty}\frac{\partial U^{(n)}}{\partial x_n}(q_j)
=\frac{\partial U^{(n)}}{\partial x_n}(0).
\end{split}\end{equation}
As a consequence, if~$\Theta_\epsilon(x)\in S_1\cup S_2$,
\begin{equation*}
\begin{split}
V^{(n)}_\epsilon(x)\,&=
\epsilon^2\,\left[ U^{(n)}\big(\Theta_\epsilon(x)\big)-
\sum_{k=1}^{n-1} \frac{\partial U^{(n)}}{\partial x_k}(0)\,\Theta^{(k)}_\epsilon(x)
\right]\\&\ge
-\epsilon^2\,
\sum_{k=1}^{n-1} \frac{\partial U^{(n)}}{\partial x_k}(0)\,\Theta^{(k)}_\epsilon(x)
\\&=-
\epsilon\,
\sum_{k=1}^{n-1} \frac{\partial U^{(n)}}{\partial x_k}(0)\,x_k,
\end{split}
\end{equation*}
and therefore, recalling~\eqref{ConaVAU}, we obtain that
\begin{equation}\label{1-32}
v(x_1,\dots,x_{n-1},0)\ge 0,\qquad{\mbox{for all }}(x_1,\dots,x_{n-1})\in{\mathbb{R}}^{n-1}.
\end{equation}
By~\eqref{89AK}, it also follows that
\begin{eqnarray*} \frac{\partial V_\epsilon^{(n)}}{\partial x_n}(x)&=&\sqrt{2}\;
\epsilon\,\left[ \frac{\partial U^{(n)}}{\partial x_n}
\big(\Theta_\epsilon(x)\big)-
\frac{\partial U^{(n)}}{\partial x_n}(0)
\right]\\&=&
\sqrt{2}\;
\epsilon\,\frac{\partial U^{(n)}}{\partial x_n}
\big(\Theta_\epsilon(x)\big).\end{eqnarray*}
Hence, by~\eqref{RA1167-BIS},
\begin{equation}\label{RA15100295683}\begin{split}&
\frac{\partial V_\epsilon^{(n)}}{\partial x_n}(x)
=0\qquad
{\mbox{if $\Theta_\epsilon(x)\in S_2$}}\\
{\mbox{and }}\qquad &\frac{\partial V_\epsilon^{(n)}}{\partial x_n}(x)
\le0\qquad
{\mbox{if $\Theta_\epsilon(x)\in S_1$.}}
\end{split}\end{equation}
This and \eqref{ConaVAU} formally lead to
\begin{equation}\label{RA1167-TRIS}\begin{split}&
\frac{\partial v}{\partial x_n}(x)
=0\qquad
{\mbox{in $\{x_n=0\}\cap\{x_1>0\}$}}\\
{\mbox{and }}\qquad &\frac{\partial v}{\partial x_n}(x)
\le0\qquad
{\mbox{in $\{x_n=0\}\cap\{x_1<0\}$.}}
\end{split}\end{equation}
One can make~\eqref{RA1167-TRIS} more precise in light of~\eqref{1-32}.
Namely, we claim that
\begin{equation}\label{RA1167-QUADRIS}
\frac{\partial v}{\partial x_n}(x)
=0\qquad
{\mbox{in $\{x_n=0\}\cap\{v>0\}$.}}
\end{equation}
To check this, let us take~$x\in\{x_n=0\}$ with~$a:=v(x)>0$
and let~$\epsilon>0$ be so small that~$V^{(n)}_\epsilon(x) \geq\frac{a}{2}$
and~$\left| \frac{\partial U^{(n)}}{\partial x_k}(0)\right|\,|x|\le\frac{a}{4\epsilon\,(n-1)}$,
for all~$k\in\{1,\dots,n-1\}$. Then, by~\eqref{89AK0}
and~\eqref{89AK}, we have that
$$ \epsilon^2 U^{(n)}
\big(\Theta_\epsilon(x)\big)=V^{(n)}_\epsilon(x)+\epsilon^2
\sum_{k=1}^{n-1} \frac{\partial U^{(n)}}{\partial x_k}(0)\,\Theta^{(k)}_\epsilon(x)
\ge\frac{a}{2}-\frac{a}{4}>0.$$
This and~\eqref{S1S2s} give that~$\Theta_\epsilon(x)\in S_2$
and consequently~\eqref{RA1167-QUADRIS} follows
from~\eqref{RA15100295683}.
We also introduce the notation
$$ \sigma_k:=\begin{cases}
1 & {\mbox{ if }} k\ne n,\\
{\sqrt2} & {\mbox{ if }} k= n.
\end{cases}$$
Then, we see that, for each~$m\in\{1,\dots,n\}$,
\begin{eqnarray*}
\frac{\partial^2 V_\epsilon^{(n)}}{\partial x_m^2}(x)&=&
\sigma_m^2\,
\frac{\partial^2 U^{(n)}}{\partial x_m^2}\big(\Theta_\epsilon(x)\big)
,\end{eqnarray*}
and accordingly, for every~$x\in{\mathbb{R}}^{n-1}\times(0,+\infty)$
such that~$\Theta_\epsilon(x)\in A$,
\begin{eqnarray*}
\Delta V_\epsilon^{(n)} (x)&=&\sum_{m=1}^n
\sigma_m^2\,
\frac{\partial^2 U^{(n)}}{\partial x_m^2}\big(\Theta_\epsilon(x)\big)
\\&=&
\sum_{m=1}^{n-1}
\frac{\partial^2 U^{(n)}}{\partial x_m^2}\big(\Theta_\epsilon(x)\big)
+2\,
\frac{\partial^2 U^{(n)}}{\partial x_n^2}\big(\Theta_\epsilon(x)\big).
\end{eqnarray*}
Hence, since, by~\eqref{ELAEQ},
\begin{eqnarray*}
\frac12\,f^{(n)}(x)&=&
\Delta{U^{(n)}}(x)+
\frac{\partial}{\partial x_n}{{\rm div }}\,U(x)\\
&=&\sum_{m=1}^n \frac{\partial^2 U^{(n)}}{\partial x_m^2}(x)+
\sum_{m=1}^n \frac{\partial^2 U^{(m)}}{\partial x_m\partial x_n}(x)\\
&=&\sum_{m=1}^{n-1} \frac{\partial^2 U^{(n)}}{\partial x_m^2}(x)+
\sum_{m=1}^{n-1} \frac{\partial^2 U^{(m)}}{\partial x_m\partial x_n}(x)
+2\,\frac{\partial^2 U^{(n)}}{\partial x_n^2}(x),
\end{eqnarray*}
we find that, if~$\Theta_\epsilon(x)\in A$,
\begin{eqnarray*}
\frac12\,f^{(n)}\big(\Theta_\epsilon(x)\big)&=&
\Delta V_\epsilon^{(n)} (x)+
\sum_{m=1}^{n-1} \frac{\partial^2 U^{(m)}}{\partial x_m\partial x_n}
\big(\Theta_\epsilon(x)\big).
\end{eqnarray*}
In particular, if the force field is due to vertical gravity, we have that~$f^{(n)}=-g$
for some constant~$g$, and accordingly
$$ \Delta V_\epsilon^{(n)} (x)=h(x),$$
as long as~$\Theta_\epsilon(x)\in A$, with
$$ h(x):=-\frac{g}2-\sum_{m=1}^{n-1} \frac{\partial^2 U^{(m)}}{
\partial x_m\partial x_n}\big(\Theta_\epsilon(x)\big).$$
Hence, by~\eqref{ConaVAU}, we can write
$$ \Delta v(x)=h(x),$$
for all~$x\in{\mathbb{R}}^{n-1}\times(0,+\infty)$. Combining this with~\eqref{1-32},
\eqref{RA1167-TRIS} and~\eqref{RA1167-QUADRIS},
we can write the system of equations
\begin{equation}\label{sau83eysap}\begin{cases}
& \Delta v(x)=h(x)\qquad
{\mbox{in $\{x_n>0\}$,}}\\
& v\ge 0\qquad
{\mbox{on $\{x_n=0\}$,}}\\
&
\displaystyle\frac{\partial v}{\partial x_n}(x)
\le0\qquad
{\mbox{on $\{x_n=0\}$}},\\
&\displaystyle\frac{\partial v}{\partial x_n}(x)
=0\qquad
{\mbox{on $\{x_n=0\}\cap\{v>0\}$.}}
\end{cases}\end{equation}
By taking even reflection, one can also define
$$ u(x)=u(x_1,\dots,x_n)=\begin{cases}
v(x_1,\dots,x_{n-1},x_n) & {\mbox{ if }} x_n\geq0,\\
v(x_1,\dots,x_{n-1},-x_n) & {\mbox{ if }} x_n<0.
\end{cases}$$
Similarly, we define~$h$ in~$\{x_n<0\}$ by even reflection,
and in this way
$$\Delta u(x)=\Delta v(x_1,\dots,x_{n-1},|x_n|)=h(x_1,\dots,x_{n-1},|x_n|)=h(x),$$
as long as~$x_n\ne0$.
We also observe that if~$\varphi\in C^\infty_0({\mathbb{R}}^n)$
is such that~$\varphi=0$ in~$\{x_n=0\}\cap\{v=0\}$, then
\begin{eqnarray*}&&
\int_{ {\mathbb{R}}^n } \nabla u(x)\cdot\nabla\varphi(x)\,dx+
\int_{ {\mathbb{R}}^n } h(x)\,\varphi(x)\,dx
\\
&=& \int_{ {\mathbb{R}}^n \cap\{x_n>0\}} \nabla u(x)\cdot\nabla\varphi(x)\,dx
+ \int_{ {\mathbb{R}}^n \cap\{x_n<0\}} \nabla u(x)\cdot\nabla\varphi(x)\,dx\\
&&\qquad+
\int_{ {\mathbb{R}}^n \cap\{x_n>0\}} h(x)\,\varphi(x)\,dx+
\int_{ {\mathbb{R}}^n \cap\{x_n<0\}} h(x)\,\varphi(x)\,dx
\\
&=& \int_{ {\mathbb{R}}^n \cap\{x_n>0\}} {\rm div}\big(\varphi(x)\,\nabla u(x)\big)\,dx
+ \int_{ {\mathbb{R}}^n \cap\{x_n<0\}} {\rm div}\big(\varphi(x)\,\nabla u(x)\big)\,dx\\
&=&
-\int_{ \{x_n=0\}} \varphi(x)\,\frac{\partial u}{\partial x_n}(x_1,\dots,x_{n-1},0^+)\,d
{\mathcal{H}}^{n-1}(x)\\&&\qquad
+
\int_{ \{x_n=0\}} \varphi(x)\,\frac{\partial u}{\partial x_n}(x_1,\dots,x_{n-1},0^-)\,d
{\mathcal{H}}^{n-1}(x)\\
&=&
-2\int_{ \{x_n=0\}\cap\{ v>0\}} \varphi(x)\,\frac{\partial v}{\partial x_n}(x_1,\dots,x_{n-1},0^+)\,d
{\mathcal{H}}^{n-1}(x)\\&=&0,
\end{eqnarray*}
thanks to~\eqref{sau83eysap}.
This says that
\begin{equation*}
{\mbox{$\Delta u=h$ in~${\mathbb{R}}^n\setminus
(\{x_n=0\}\cap\{u=0\})$.}}\end{equation*}
This and~\eqref{sau83eysap} lead to the system
\begin{equation*}\begin{cases}
& \Delta u(x)=h(x)\qquad
{\mbox{in ${\mathbb{R}}^n\setminus
(\{x_n=0\}\cap\{u=0\})$,}}\\
& u\ge 0\qquad
{\mbox{on $\{x_n=0\}$,}}\\
&\displaystyle\frac{\partial u}{\partial x_n}(x)
\le 0\qquad
{\mbox{on $\{x_n=0\}$,}}\\
&\displaystyle\frac{\partial u}{\partial x_n}(x)
=0\qquad
{\mbox{on $\{x_n=0\}\cap\{u>0\}$,}}
\end{cases}\end{equation*}
which is in the form of the thin obstacle problem\footnote{As a matter of fact,
in the recent mathematical jargon,
there is some linguistic confusion about the ``Signorini problem'',
since this name is often used also for the
thin obstacle problem in
Example~\ref{THIN}.
In a sense,
the thin obstacle problem should be properly referred to
as the ``scalar'' Signorini problem, but the adjective ``scalar''
is often omitted.
Of course, the ``original'' Signorini problem is technically even more demanding than
the thin obstacle problem, due to the vectorial nature of the question.
For the optimal regularity and the free boundary analysis of
the original Signorini problem and some important links with the
scalar Signorini problem, we refer to~\cite{MR3480553}
(see in particular Section~5 there).}
discussed in Example~\ref{THIN}
(compare with~\eqref{CO:AK2}).
}\end{example}
\begin{example}[Gamma function, Balakrishnan formula\index{formula!Balakrishnan}, the method of semigroups, and the Heaviside operational calculus]
{\rm Let us start with the classical definition of
Euler's Gamma function\index{function!Gamma}:
$$ \Gamma (z)=\int _{0}^{+\infty }\tau^{z-1}\,e^{-\tau}\,d\tau. $$
We compute it at the point~$z:=1-s$, with~$s\in(0,1)$, and we integrate by parts,
thus obtaining
\begin{eqnarray*} \Gamma (1-s)&=&
\int_{0}^{+\infty }\tau^{-s}\,e^{-\tau}\,d\tau\\ &=&
-\int_{0}^{+\infty }\tau^{-s}\,\frac{d}{d\tau}(e^{-\tau}-1)\,d\tau\\&=&
-s\int_{0}^{+\infty }\tau^{-s-1}\,(e^{-\tau}-1)\,d\tau,
\end{eqnarray*}
which, recalling that~$\Gamma(1-s)=-s\,\Gamma(-s)$, can be written as
$$ \Gamma(-s)=\int_{0}^{+\infty }\tau^{-s-1}\,(e^{-\tau}-1)\,d\tau.$$
Now, we take~$\delta>0$
and make the substitution~$t:=\delta^{-1} \tau$. In this way, we obtain that
\begin{equation}\label{7uAKKm1eee}
\delta^{s}=\frac1{\Gamma(-s)}\int_{0}^{+\infty }t^{-s-1}\,(e^{-t\delta }-1)
\,dt.\end{equation}
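Formula~\eqref{7uAKKm1eee} is easy to test numerically for a scalar~$\delta$; the sketch below (with ad hoc quadrature choices) evaluates the integral on a logarithmic grid, which resolves both the singularity at~$t=0$ and the heavy tail, and then adds an explicit estimate for the tail, where~$e^{-t\delta}$ is negligible. Note that~$\Gamma(-s)<0$ for~$s\in(0,1)$, matching the sign of the integrand.

```python
import math
import numpy as np

s, delta = 0.5, 2.0

# Log-spaced grid resolving the singularity at t = 0 and the slowly decaying tail.
t = np.logspace(-12.0, 8.0, 400001)
vals = t ** (-s - 1.0) * np.expm1(-t * delta)   # expm1 is accurate near t = 0
integral = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(t))  # trapezoid rule
integral += -(t[-1] ** (-s)) / s   # tail of t^{-s-1} (e^{-t delta} - 1) ~ -t^{-s-1}

# math.gamma accepts negative non-integer arguments; Gamma(-1/2) = -2 sqrt(pi).
approx = integral / math.gamma(-s)
```

The result should reproduce $\delta^s=\sqrt{2}$ up to quadrature error.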
It turns out that one can make sense of this formula not only for a given
real parameter~$\delta>0$, but also when~$\delta$ is replaced by a suitably nice
operator such as the Laplacian (with the minus sign to make it positive).
Namely, formally taking~$\delta:=-\Delta$ in~\eqref{7uAKKm1eee},
one finds that
\begin{equation}\label{7uAKKm1eee2}
(-\Delta)^{s}=\frac1{\Gamma(-s)}\int_{0}^{+\infty }t^{-s-1}\,(e^{t\Delta}-1)
\,dt.\end{equation}
In spite of the sloppy way in which formula~\eqref{7uAKKm1eee2} was derived here,
it is possible to give a rigorous proof of it using operator theory.
Indeed, formula~\eqref{7uAKKm1eee2} was established by
Alampallam V. Balakrishnan in~\cite{MR0115096}.
Its meaning in the operator sense is that applying
the operator on the left hand
side of~\eqref{7uAKKm1eee2} to a nice (say, smooth and rapidly decreasing)
function
is equivalent to applying to it the operator on the right hand
side, namely
\begin{equation}\label{7uAKKm1eee3}
(-\Delta)^{s}u(x)=\frac1{\Gamma(-s)}\int_{0}^{+\infty }t^{-s-1}\,
\big(e^{t\Delta}u(x)-u(x)\big)
\,dt.\end{equation}
The meaning of~$e^{t\Delta}u(x)$ is also understood in the sense of operators.
To understand this notation one can set~$\Phi_u(x,t):=e^{t\Delta}u(x)$
and observe that, formally,
\begin{eqnarray*} &&\partial_t \Phi_u(x,t)=\partial_t\big(e^{t\Delta}u(x)\big)=
\Delta e^{t\Delta}u(x) =\Delta \Phi_u(x,t)\\
{\mbox{and }}&&\Phi_u(x,0)= e^{0\cdot\Delta}u(x)=e^0 u(x)=u(x).
\end{eqnarray*}
These observations can be formalized by ``going backwards'' in the computation
and defining~$e^{t\Delta}u(x):=\Phi_u(x,t)$, where the latter is the solution
of the heat equation with initial datum~$u$, that is
$$ \begin{cases}
\partial_t \Phi_u(x,t)=\Delta \Phi_u(x,t) & {\mbox{ for all $x\in{\mathbb{R}}^n$ and~$t>0$,}}\\
\Phi_u(x,0)=u(x) & {\mbox{ for all $x\in{\mathbb{R}}^n$.
}}\end{cases}$$
With this notation, we can write~\eqref{7uAKKm1eee3} as
\begin{equation}\label{7uAKKm1eee4}
(-\Delta)^{s}u(x)=\frac1{\Gamma(-s)}\int_{0}^{+\infty }t^{-s-1}\,
\big(\Phi_u(x,t)-u(x)\big)
\,dt.\end{equation}
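One can also test~\eqref{7uAKKm1eee4} concretely on the Gaussian~$u(x)=e^{-x^2/2}$, whose heat evolution is explicit (a Gaussian of variance~$1+2t$, up to the amplitude factor), and compare the result with the Fourier definition of the fractional Laplacian, whose symbol is~$|\xi|^{2s}$. A minimal Python sketch, with helper names of our own choosing:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

s = 0.5
u = lambda x: np.exp(-x**2 / 2.0)
# explicit heat evolution of the Gaussian: Phi_u(x,t) = e^{t Laplacian} u(x)
Phi = lambda x, t: np.exp(-x**2 / (2.0 * (1.0 + 2.0*t))) / np.sqrt(1.0 + 2.0*t)

def frac_lap_semigroup(x):
    """(-Laplacian)^s u(x) via the subordination formula (7uAKKm1eee4)."""
    integrand = lambda t: t**(-s - 1.0) * (Phi(x, t) - u(x))
    a, _ = quad(integrand, 0.0, 1.0)
    b, _ = quad(integrand, 1.0, np.inf)
    return (a + b) / gamma(-s)

def frac_lap_fourier(x):
    """(-Laplacian)^s u(x) via the Fourier symbol |xi|^{2s}."""
    integrand = lambda xi: xi**(2.0*s) * np.exp(-xi**2 / 2.0) * np.cos(xi * x)
    val, _ = quad(integrand, 0.0, np.inf)
    return 2.0 * val / np.sqrt(2.0 * np.pi)

for x in (0.0, 0.7, 1.5):
    assert abs(frac_lap_semigroup(x) - frac_lap_fourier(x)) < 1e-6
```

At~$x=0$ both quantities reduce to~$-2\sqrt2/\Gamma(-1/2)=\sqrt{2/\pi}$, which can serve as an additional closed-form cross-check.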
The power of formula~\eqref{7uAKKm1eee4} is apparent, since it reduces
a nonlocal, and in principle rather complicated, operator
such as the fractional Laplacian to the superposition
of classical heat semigroups, and can be exploited as a ``subordination
identity'' in which the well-established knowledge of the classical heat flow
leads to new results for the fractional setting, see e.g.~\cite{MR2858052}.
The importance and broad range of applicability of formula~\eqref{7uAKKm1eee4}
and of its various extensions is
very clearly and extensively discussed in~\cites{2017arXiv171203347G, 2018arXiv180805159S}.
Now we make some historical comments about operator calculus and its successful
attempts to transform identities valid for real numbers into rigorous formulas
involving operators, under appropriate assumptions. Without aiming
at reconstructing here the full history of the subject,
we recall that one of the first
attempts in the important directions sketched here was made by
Oliver Heaviside at the end of the 19th century, who also tried to write
the solution of the heat equation in operator form. Given the difficulty of the arguments
treated and the lack of mathematical technology at that time,
some of the original arguments in the literature were probably not fully justified
and required the introduction of a brand new branch of mathematical analysis,
which Heaviside himself greatly contributed to create and promote, see e.g.~\cite{MR555103}.
It is however plausible that the pioneering, albeit somewhat unrigorous, intuitions
of Heaviside were not always well-appreciated by the mathematical community
at that time. A footprint of this historical controversy has remained
in the work by Heaviside published in Volume~34 of the
periodical and scientific journal
{\em The Electrician}, in which Heaviside states that
{\em What one has a right to expect, however, is a fair field,
and that the want of sympathy should be kept in a neutral state, so as
not to lead to unnecessary obstruction. For even men who are not Cambridge
mathematicians deserve justice, which I very much fear they do not always get,
especially the meek and lowly}.
For an extensive treatment of the theory of operator calculus,
see e.g.~\cites{MR0105594, MR0361633, MR2244037, MR3468941} and the references therein.}\end{example}
\begin{example}[Fractional viscoelastic models, springs and dashpots]\label{LAD}
{\rm
A classical application of fractional derivatives occurs
in the phenomenological description of viscoelastic fluids. This is of course
a very advanced topic and we do not aim at fully covering it in these few pages:
see e.g. Section~10.2 of~\cite{MR1658022}, where
a number of fractional models for viscoelasticity\index{viscoelasticity} are discussed in detail.
Roughly speaking, a basic idea used in this context is that the viscoelastic effects
arise as a suitable ``ideal'' superposition of ``purely elastic'' and ``purely viscous''
phenomena, which are better understood when treated separately but whose
combined effect becomes quite difficult to capture in classical equations.
On the one hand, the elastic effects are well described by a classical
spring subject to Hooke's Law, in which the elongation of the spring
is proportional to the force applied to it. That is, if~$\varepsilon$ denotes
the elongation of the spring and~$\sigma$ the force applied to it, one writes
\begin{equation}\label{VIS:000}
\sigma=\kappa\varepsilon ,
\end{equation}
for a suitable elastic coefficient~$\kappa>0$.
On the other hand, the viscous effects of fluids are classically described by
Newton's Law\index{law!Newton's}, according to which
forces are related to velocities, as in the formula
\begin{equation}\label{VIS:001}
\sigma=\nu\dot\varepsilon ,
\end{equation}
where the ``dot'' denotes the derivative with respect to time
and~$\nu>0$ is a suitable viscous coefficient.
The rationale sustaining~\eqref{VIS:001} can be understood by thinking about the free fall
of an object. In this case, if one takes into account the gravity and
the viscous friction of the air, the vertical position of a falling object is described
by the equation
\begin{equation}\label{8iwjs1324367000923}
mg-\nu\dot\varepsilon= m\ddot\varepsilon.
\end{equation}
For long times, the falling body reaches asymptotically a terminal velocity,
which can be guessed from~\eqref{8iwjs1324367000923} by formally imposing that the limit
acceleration is zero: in this limit regime, one thus obtains the velocity equation
\begin{equation}\label{008iwjs1324367000923}
mg-\nu\dot\varepsilon= 0,
\end{equation}
which formally coincides with equation~\eqref{VIS:001} when the force is
the gravitational one. Hence, in a sense, comparing~\eqref{VIS:001} with~\eqref{008iwjs1324367000923},
one can think that
Newton's Law for viscid fluids describes the asymptotic velocity
in a regime in which the viscous effects are dominant and after a sufficient
amount of time (after which the acceleration effects become negligible).
Roughly speaking, the idea of viscoelasticity is that, in general,
fluids are neither perfectly elastic nor perfectly viscid, therefore
an accurate formulation of the problem requires the study of an operator which
interpolates between the ``derivative of order zero'' appearing
in~\eqref{VIS:000} and
the ``derivative of order one'' appearing
in~\eqref{VIS:001}, and of course fractional derivatives seem to perfectly fit
such a scope.
{F}rom the point of view of the notation, in the description of fluids,
the displacement function~$\varepsilon$ typically represents the ``strain'',
while the normalized force function~$\sigma$ typically represents the ``stress''
acting on the fluid particles.
\begin{figure}
\caption{\footnotesize\it Schematic representation of a spring (left) and a dashpot (right).}
\label{DASH}
\end{figure}
To give a concrete feeling of this superposition of elastic and viscid effects, we recall
here a purely mechanical model which was proposed in~\cite{Schiessel}
(we actually simplify the discussion presented there, since we
do not aim here at fully justified general statements).
The idea proposed by~\cite{Schiessel} is to take into account a system of springs,
which react to forces elastically according to
Hooke's Law, and
dashpots, or viscous dampers, in which strains and stresses are related by
Newton's Law.
\begin{figure}
\caption{\footnotesize\it The spring-dashpot ladder of Example~\ref{LAD}.}
\label{DASH2}
\end{figure}
To clarify the setting, we will schematically draw springs and dashpots as in Figure~\ref{DASH}.
Following~\cite{Schiessel}, the model that we discuss here
consists of a ladder-like structure with springs along
one of the struts and dashpots on the rungs of the ladder, see Figure~\ref{DASH2}.
The system contains~$n$ springs and~$n-1$ dashpots and we will formally
consider the limit as~$n\to+\infty$.
The superindexes~$d$ and~$s$ will refer to the dashpots and the springs, respectively,
while the subindexes will refer to the position in the ladder.
In particular, we can consider the elongation of the springs, denoted by~$\varepsilon^s_0,\dots,
\varepsilon^s_{n-1}$ and the ones of the dashpots, denoted by~$\varepsilon^d_0,\dots,
\varepsilon^d_{n-2}$. {F}rom Figure~\ref{DASH2}, we see that
\begin{equation}\label{EL:1}
\varepsilon^d_k=\varepsilon^s_{k+1}+\varepsilon^d_{k+1}.
\end{equation}
Moreover, if $\sigma^s_0,\dots,\sigma^s_{n-1}$ denote the stresses on the springs
and~$\sigma^d_0,\dots,\sigma^d_{n-2}$ the ones on the dashpots,
the parallel arrangement on the ladder gives that
\begin{equation}\label{EL:2}
\sigma^s_k=\sigma^s_{k+1}+\sigma^d_k.\end{equation}
Also, the total elongation of the system is given by
\begin{equation}\label{EL:TOT}
\varepsilon=\varepsilon^s_0+\varepsilon^d_0,\end{equation}
and the stress at the end of the ladder is given by the one of the first spring, namely
\begin{equation}\label{EL:TOT2}
\sigma=\sigma^s_0.\end{equation}
We will consider springs with the same elastic coefficient, that we normalize in such a way
that Hooke's Law reads in this case as
\begin{equation}\label{EL:3}
\varepsilon^s_k=2\sigma^s_k.
\end{equation}
Also, all the dashpots will be taken with the same viscous coefficient
and, in this way, Newton's Law reads
\begin{equation}\label{EL:4}
\dot\varepsilon^d_k=\sigma^d_k.
\end{equation}
The case of different springs and dashpots can also be taken into account
and it would produce a quantitatively different analysis, see~\cite{Schiessel}
for details, but even this simpler case in which all the mechanical elements are
the same produces some interesting effects that we now analyze.
First of all, by~\eqref{EL:1} and~\eqref{EL:3},
\begin{equation}\label{EL:5}
\varepsilon^d_k=2\sigma^s_{k+1}+\varepsilon^d_{k+1}.
\end{equation}
Similarly, by~\eqref{EL:2} and~\eqref{EL:4},
\begin{equation}\label{EL:6}
\sigma^s_k=\sigma^s_{k+1}+\dot\varepsilon^d_k.
\end{equation}
It is now appropriate to consider the Laplace transform\index{transform!Laplace} of a function~$u$,
that we denote by
$$ \bar u(\omega):=
\int_{0}^{+\infty} u(t) \,e^{-t\omega}\,dt.$$
For further reference, we observe that
\begin{equation}\label{87ujJ8929ppPQ}
\begin{split}
\bar{\dot{u}}(\omega)\,&=\int_{0}^{+\infty} \dot u(t) \,e^{-t\omega}\,dt\\&=
\int_{0}^{+\infty}\left( \frac{d}{dt}\big( u(t) \,e^{-t\omega}\big)
+\omega u(t) \,e^{-t\omega}\right)\,dt\\&=-u(0)+\omega\bar u(\omega).
\end{split}
\end{equation}
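The rule in~\eqref{87ujJ8929ppPQ} is easy to confirm numerically, e.g. for~$u(t)=\cos t$, for which~$u(0)=1$ and the transforms are explicit. A small Python check (the helper \texttt{laplace}, which truncates the transform at a large time, is our own):

```python
import math
from scipy.integrate import quad

def laplace(f, w, T=200.0):
    """Truncated Laplace transform: int_0^T f(t) e^{-t w} dt."""
    val, _ = quad(lambda t: f(t) * math.exp(-t * w), 0.0, T, limit=500)
    return val

# check L[u'](w) = -u(0) + w * L[u](w) for u(t) = cos(t), u(0) = 1
for w in (0.5, 1.0, 2.0):
    lhs = laplace(lambda t: -math.sin(t), w)
    rhs = -1.0 + w * laplace(math.cos, w)
    assert abs(lhs - rhs) < 1e-6
    # cross-check against the closed form L[cos](w) = w/(1+w^2)
    assert abs(laplace(math.cos, w) - w / (1.0 + w**2)) < 1e-6
```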
In a similar way,
considering the fractional derivative notation
$$ D^{1/2}_{t,0} u(t):=\int_0^t \frac{
\dot u(\tau)}{\sqrt{t-\tau}}\,d\tau,$$
to be compared with the general setting in the forthcoming formula~\eqref{defcap},
we have that
\begin{equation}\label{YTtatra}
\begin{split}
\overline{D^{1/2}_{t,0} u}(\omega)\,&=\int_0^{+\infty}\left[
\int_0^t \frac{
\dot u(\tau)}{\sqrt{t-\tau}}\,d\tau\right]\,e^{-t\omega}\,dt\\
&=\int_0^{+\infty}\left[\dot u(\tau)\int_\tau^{+\infty} \frac{e^{-t\omega}
}{\sqrt{t-\tau}}\,dt\right]\,d\tau\\
&=\int_0^{+\infty}\left[\dot u(\tau)e^{-\tau\omega}\int_0^{+\infty} \frac{e^{-\zeta\omega}
}{\sqrt{\zeta}}\,d\zeta\right]\,d\tau
\\&=\frac1{\sqrt{\omega}}
\int_0^{+\infty}\left[\dot u(\tau)e^{-\tau\omega}\int_0^{+\infty} \frac{e^{-\mu}
}{\sqrt{\mu}}\,d\mu\right]\,d\tau\\&=\frac{C}{\sqrt{\omega}}
\int_0^{+\infty}\dot u(\tau)e^{-\tau\omega}\,d\tau\\
&=\frac{C}{\sqrt{\omega}}
\int_0^{+\infty}\left(\frac{d}{d\tau}\Big(u(\tau)e^{-\tau\omega}\Big)+
\omega u(\tau)e^{-\tau\omega}
\right)
\,d\tau\\
&=-\frac{Cu(0)}{\sqrt{\omega}}+C\sqrt{\omega}
\int_0^{+\infty}u(\tau)e^{-\tau\omega}\,d\tau\\
&=-\frac{Cu(0)}{\sqrt{\omega}}+C\sqrt{\omega}\bar u(\omega),
\end{split}
\end{equation}
for a suitable~$C>0$.
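Note that the computation above gives~$C=\int_0^{+\infty}e^{-\mu}\mu^{-1/2}\,d\mu=\Gamma(1/2)=\sqrt\pi$. One can test~\eqref{YTtatra} on the explicit example~$u(t)=t$, for which~$u(0)=0$, $D^{1/2}_{t,0}u(t)=2\sqrt t$ and~$\bar u(\omega)=1/\omega^2$. A small Python check (the helper \texttt{laplace}, which truncates the transform at a large time, is our own):

```python
import math
from scipy.integrate import quad

def laplace(f, w, T=200.0):
    """Truncated Laplace transform: int_0^T f(t) e^{-t w} dt."""
    val, _ = quad(lambda t: f(t) * math.exp(-t * w), 0.0, T, limit=500)
    return val

C = math.sqrt(math.pi)  # C = Gamma(1/2), from the integral of e^{-mu}/sqrt(mu)
# for u(t) = t: u(0) = 0, D^{1/2}u(t) = 2*sqrt(t), ubar(w) = 1/w^2
for w in (0.5, 1.0, 3.0):
    lhs = laplace(lambda t: 2.0 * math.sqrt(t), w)
    rhs = C * math.sqrt(w) * (1.0 / w**2)
    assert abs(lhs - rhs) < 1e-6
```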
Now, taking the Laplace transform of~\eqref{EL:5}, we find that
\begin{equation*}
\bar\varepsilon^d_k=2\bar\sigma^s_{k+1}+\bar\varepsilon^d_{k+1},
\end{equation*}
and therefore
\begin{equation}\label{EL:7}
\frac{\bar\varepsilon^d_k}{2\bar\sigma^s_{k+1}}=1+\frac{\bar\varepsilon^d_{k+1}}{2\bar\sigma^s_{k+1}}.
\end{equation}
Instead, taking the Laplace transform of~\eqref{EL:6}
and recalling~\eqref{87ujJ8929ppPQ}, assuming that the initial displacement vanishes,
we find that
\begin{equation}\label{7uAJJA93eirjj}
\bar\sigma^s_k=\bar\sigma^s_{k+1}+\omega\bar\varepsilon^d_k.
\end{equation}
We write this identity as
\begin{equation*}
\bar\sigma^s_{k+1}=\bar\sigma^s_{k+2}+\omega\bar\varepsilon^d_{k+1}
\end{equation*}
and we substitute it in the right
hand side of~\eqref{EL:7}, concluding that
\begin{equation}\label{EL:8}\begin{split}
\rho_k\,&:=
\frac{\bar\varepsilon^d_k}{2\bar\sigma^s_{k+1}}\\&=1+
\frac{\bar\varepsilon^d_{k+1}}{2(\bar\sigma^s_{k+2}+\omega\bar\varepsilon^d_{k+1})}\\&=
1+\frac{\rho_{k+1}}{1+\omega\frac{\bar\varepsilon^d_{k+1}}{\bar\sigma^s_{k+2}}}\\&=
1+\frac{\rho_{k+1}}{1+2\omega\rho_{k+1}}\\
&=1+\frac{1}{2\omega+\frac{1}{\rho_{k+1}}}.
\end{split}
\end{equation}
We can iterate~\eqref{EL:8} and then find that
\begin{equation*}
\rho_k=1+\frac{1}{2\omega+\frac{1}{1+\frac{1}{2\omega+\frac{1}{\rho_{k+2}}}}}=
1+\frac{1}{2\omega+\frac{1}{1+\frac{1}{2\omega+\frac{1}{
1+\frac{1}{2\omega+\frac{1}{\rho_{k+3}}}
}}}},
\end{equation*}
and so on. That is, in the formal limit of infinitely many springs and dashpots,
\begin{equation}\label{EL:10}
\frac{\bar\varepsilon^d_0}{2\bar\sigma^s_{1}}=
\rho_0=1+\frac{1}{2\omega+\frac{1}{1+\frac{1}{2\omega+\frac{1}{\rho_2}}}}=
1+\frac{1}{2\omega+\frac{1}{1+\frac{1}{2\omega+\frac{1}{
1+\frac{1}{2\omega+\frac{1}{\ddots}}
}}}},
\end{equation}
which is an infinite continued fraction.
We observe that
\begin{equation}\label{EL:11}
{\mbox{the right hand side of~\eqref{EL:10} is equal to }}\frac{1+\sqrt{1+
\displaystyle\frac2\omega}}{2}.
\end{equation}
Indeed, the right hand side of~\eqref{EL:10} is a positive number,
say~$X$, and it satisfies that
$$ X=1+\frac{1}{2\omega+\frac{1}{X}}.$$
Solving this relation for~$X$ (and taking the positive root), we obtain~\eqref{EL:11}, as desired.
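The value in~\eqref{EL:11} can also be confirmed numerically, by iterating the map~$\rho\mapsto 1+\frac{1}{2\omega+1/\rho}$, which is a contraction on the positive half-line. A minimal Python sketch (the helper name and the iteration depth are our own choices):

```python
import math

def continued_fraction(omega, depth=200):
    # evaluate (EL:10) by backward recursion, starting from an arbitrary
    # positive seed for the (forgotten) tail of the fraction
    rho = 1.0
    for _ in range(depth):
        rho = 1.0 + 1.0 / (2.0 * omega + 1.0 / rho)
    return rho

for omega in (0.1, 1.0, 10.0):
    closed_form = (1.0 + math.sqrt(1.0 + 2.0 / omega)) / 2.0
    assert abs(continued_fraction(omega) - closed_form) < 1e-10
```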
Then, from~\eqref{EL:10} and~\eqref{EL:11}, we conclude that
\begin{equation}\label{EL:13}
\frac{\bar\varepsilon^d_0}{2\bar\sigma^s_{1}}=
\frac{1+\sqrt{1+
\displaystyle\frac2\omega}}{2}.
\end{equation}
On the other hand, recalling~\eqref{EL:TOT}, \eqref{EL:TOT2}, \eqref{EL:3} and~\eqref{7uAJJA93eirjj},
\begin{equation*}
\begin{split}
\frac{\bar\varepsilon(\omega)}{2\bar\sigma(\omega)}\,&=
\frac{\bar\varepsilon^s_0+\bar\varepsilon^d_0}{2\bar\sigma^s_0}\\&=
\frac{\bar\varepsilon^s_0}{2\bar\sigma^s_0}+
\frac{\bar\varepsilon^d_0}{2\bar\sigma^s_0}\\&=
1+\frac{\bar\sigma^s_{1}}{\bar\sigma^s_1+\omega
\bar\varepsilon^d_0}\times\frac{\bar\varepsilon^d_0}{2\bar\sigma^s_{1}},
\end{split}\end{equation*}
which combined with~\eqref{EL:13} gives that
\begin{equation}\label{EL:14}
\frac{\bar\varepsilon(\omega)}{2\bar\sigma(\omega)}=1+
\frac{\bar\sigma^s_{1}}{\bar\sigma^s_1+\omega
\bar\varepsilon^d_0}\times\frac{1+\sqrt{1+
\displaystyle\frac2\omega}}{2}.
\end{equation}
For small~$\omega$, we see that~\eqref{EL:14}
becomes
\begin{equation}\label{EL:15}
\frac{\bar\varepsilon(\omega)}{\bar\sigma(\omega)}\simeq\sqrt{\frac2\omega}
.\end{equation}
We observe that, at least formally, the regime of small~$\omega$ corresponds to
that of large~$t$: a rigorous justification
of these asymptotics is likely to rely on a suitable use of an
Abelian-Tauberian Theorem (see e.g.~\cite{MR1015374})
and goes well beyond the scope of our heuristic argument (but see formulas~(38)
and~(39) in~\cite{Schiessel} for a quantitative analysis of these asymptotics).
Hence, from~\eqref{EL:15},
taking the initial displacement to be null, i.e. assuming that~$\varepsilon(0)=0$,
and normalizing the constants to be unitary
for the sake of simplicity,
in light of~\eqref{YTtatra}, we can write, for small~$\omega$,
that
\begin{equation*}
\bar\sigma(\omega)\simeq \sqrt{\omega}\,\bar\varepsilon(\omega)=
\overline{D^{1/2}_{t,0} \varepsilon}(\omega),\end{equation*}
and therefore, for large~$t$, that
$$ D^{1/2}_{t,0} \varepsilon(t)\simeq \sigma(t).$$
This provides a heuristic, but rather convincing, motivation
showing how fractional derivatives naturally surface in complex mechanical models
involving springs and dashpots, and suggests that similar phenomena can arise
in models in which both elastic and viscid effects contribute effectively to the macroscopic
behaviour of the system.
}\end{example}
\begin{example}[Diffusion along a comb structure]\label{TGSCO}
{\rm In this example we show how the geometry of the diffusion medium
can naturally produce fractional equations.
We take into account a diffusion model along a comb structure.
The diffusion along the backbone of the comb can be either classical or
of space-fractional type (the model that we present
was indeed introduced in~\cite{comb} in the case of classical diffusion
along the backbone, but we will present here an even more general setting
that comprises space-fractional diffusion as well).
The ramified medium that we take into account in this example
is a ``comb'', with a backbone on the horizontal axis and a fine grid
of vertical fingers which are located at mutual distance~$\epsilon>0$, see Figure~\ref{CCC}.
\begin{figure}
\caption{\footnotesize\it
The comb structure along which diffusion takes place in Example~\ref{TGSCO}.}
\label{CCC}
\end{figure}
We consider the superposition of a horizontal diffusion along the backbone
driven by~$-(-\Delta)^s_x$, for
some~$s\in(0,1]$, the case~$s=1$ corresponding to classical diffusion
and the case~$s\in(0,1)$ to space-fractional diffusion
as in~\eqref{7A-DE} and~\eqref{7A-DE2}, and a vertical diffusion along the fingers of classical type.
To make the model attain a significant limit in the continuous approximation
in which~$\epsilon\searrow0$, since the fingers are infinitely
many and their distance tends to zero, it is convenient to assume
that the vertical diffusion is subject to a diffusive coefficient of order~$\epsilon$.
The initial position is taken for simplicity to be a concentrated mass
at the origin. This model translates into the following mathematical formulation:
\begin{equation}\label{EQUco0}
\begin{cases}
\partial_t u(x,y,t)=-\delta_0(y)\,(-\Delta)^s_x u(x,y,t)+\epsilon\displaystyle\sum_{j\in{\mathbb{Z}}}
\delta_0(x-\epsilon j)\,\partial^2_y u(x,y,t)& \\ \qquad\qquad
{\mbox{ for all $(x,y)\in{\mathbb{R}}^2$ and $t>0$,}}\\
u(x,y,0)=\delta_0(x)\,\delta_0(y).
\end{cases}
\end{equation}
As customary, the notation~$\delta_0$ denotes here
the Dirac Delta function centered at the origin.
We point out that
\begin{equation}\label{sedisy}
\lim_{\epsilon\searrow0}\epsilon\sum_{j\in{\mathbb{Z}}}
\delta_0(x-\epsilon j)=1,
\end{equation}
in the distributional sense. Indeed, if~$\varphi\in C^\infty_0({\mathbb{R}})$,
\begin{eqnarray*}
\epsilon\sum_{j\in{\mathbb{Z}}}\int_{\mathbb{R}}
\delta_0(x-\epsilon j)\,\varphi(x)\,dx=\epsilon\sum_{j\in{\mathbb{Z}}}\varphi(\epsilon j).
\end{eqnarray*}
Since the latter can be seen as a Riemann sum, we find that
$$\lim_{\epsilon\searrow0}
\epsilon\sum_{j\in{\mathbb{Z}}}\int_{\mathbb{R}}
\delta_0(x-\epsilon j)\,\varphi(x)\,dx=
\int_{\mathbb{R}}\varphi(x)\,dx,$$
which gives~\eqref{sedisy}.
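The Riemann-sum mechanism behind~\eqref{sedisy} can be tested numerically, e.g. for the Gaussian~$\varphi(y)=e^{-y^2}$, whose integral over the real line equals~$\sqrt\pi$. A minimal Python sketch (the helper name \texttt{riemann\_sum} and the truncation of the sum at large~$|j|$ are our own choices):

```python
import math

def riemann_sum(eps, phi=lambda y: math.exp(-y**2), J=10**5):
    # eps * sum_j phi(eps*j): a Riemann sum for the integral of phi over R
    return eps * sum(phi(eps * j) for j in range(-J, J + 1))

exact = math.sqrt(math.pi)  # integral of e^{-y^2} over the real line
for eps in (0.5, 0.1, 0.01):
    assert abs(riemann_sum(eps) - exact) < 1e-3
```

For this rapidly decreasing~$\varphi$ the convergence is in fact much faster than the generous tolerance used above, as Poisson summation suggests.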
Hence, by~\eqref{sedisy}, one can take into account the continuous limit
of~\eqref{EQUco0} as~${\epsilon\searrow0}$, which we can write in the form
\begin{equation}\label{EQUco}
\begin{cases}
\partial_t u(x,y,t)=-\delta_0(y)\,(-\Delta)^s_x u(x,y,t)+\partial^2_y u(x,y,t)& {\mbox{ for all $(x,y)\in{\mathbb{R}}^2$ and $t>0$,}}\\
u(x,y,0)=\delta_0(x)\,\delta_0(y).
\end{cases}
\end{equation}
We consider the effective transport along the backbone~$\{y=0\}$, given by the function
\begin{equation}\label{5INT-U} U(x,t):=\int_{\mathbb{R}} u(x,y,t)\,dy.\end{equation}
Our claim is that~$U$ satisfies a time-fractional equation given by
\begin{equation}\label{EQUcoU}
\begin{cases}
D^{1/2}_{t,0} U(x,t)= -(-\Delta)^s_x U(x,t)
& {\mbox{ for all $x\in{\mathbb{R}}$ and $t>0$,}}\\
U(x,0)=\delta_0(x).
\end{cases}
\end{equation}
Here, up to normalization constants, we are taking
\begin{eqnarray*} D^{1/2}_{t,0} U(x,t)&:=&
\int_0^t \frac{\partial_tU(x,\tau)}{\sqrt{t-\tau}}\,d\tau\\&=&
\int_0^t \frac{\partial_tU(x,t-\tau)}{\sqrt{\tau}}\,d\tau,\end{eqnarray*}
and one can compare this expression with the similar ones
in~\eqref{8sEQ} and~\eqref{FRAcha}, as well as with the more general
setting that we will introduce in~\eqref{defcap}.
To prove~\eqref{EQUcoU}, we first check the initial condition. For this,
using~\eqref{EQUco}, we see that
\begin{equation}\label{9AHK-2oakak} U(x,0)=
\int_{\mathbb{R}} u(x,y,0)\,dy=\int_{\mathbb{R}} \delta_0(x)\,\delta_0(y)\,dy=
\delta_0(x).\end{equation}
Having checked the initial condition in~\eqref{EQUcoU}, we now aim
at proving the validity of the evolution equation in~\eqref{EQUcoU}.
To this end, it is convenient
to consider
the Fourier-Laplace transform of a function~$v=v(x,t)$, namely
the Fourier transform\index{transform!Fourier-Laplace} in the variable~$x$ combined with the Laplace transform
in the variable~$t$.
More precisely, up to normalization constants that we omit,
we define
$$ {\mathcal{E}}_v(\xi,\omega):=
\iint_{ {\mathbb{R}} \times(0,+\infty)} v(x,t) \,e^{-i x\xi-t\omega}\,dx\,dt.$$
We observe that, for every~$x\in{\mathbb{R}}$,
\begin{eqnarray*}&&
\int_0^{+\infty} D^{1/2}_{t,0} U(x,t)\,e^{-t\omega}\,dt \\&=&
\int_0^{+\infty}\left[
\int_0^t \frac{\partial_tU(x,t-\tau)
\;e^{-t\omega}
}{\sqrt{\tau}}\,d\tau\right]\,dt\\&=&
\int_0^{+\infty}\left[
\int_\tau^{+\infty} \frac{\partial_tU(x,t-\tau)
\;e^{-t\omega}
}{\sqrt{\tau}}\,dt\right]\,d\tau\\&=&
\int_0^{+\infty}\left[
\int_\tau^{+\infty} \Big( \partial_t\big( U(x,t-\tau)
\;e^{-t\omega}\big) +\omega U(x,t-\tau)\;e^{-t\omega}
\Big)\,dt\right]\,\frac{d\tau}{\sqrt{\tau}}\\&=&
\int_0^{+\infty}\left[
- U(x,0)\;e^{-\tau\omega} +\int_\tau^{+\infty}\omega U(x,t-\tau)\;e^{-t\omega}
\,dt\right]\,\frac{d\tau}{\sqrt{\tau}}\\
&=&-\sqrt{\frac{\pi}{\omega}}\;U(x,0)+\omega\int_0^{+\infty}\left[
\int_0^{+\infty} U(x,\sigma)\;e^{-(\sigma+\tau)\omega}\,d\sigma
\right]\,\frac{d\tau}{\sqrt{\tau}}\\
&=&-\sqrt{\frac{\pi}{\omega}}\;U(x,0)+\sqrt{\pi\omega}
\int_0^{+\infty} U(x,\sigma)\;e^{-\sigma\omega}\,d\sigma.
\end{eqnarray*}
As a consequence,
\begin{equation}\label{841016810412}
\begin{split}
{\mathcal{E}}_{D^{1/2}_{t,0} U}(\xi,\omega)\,&=\int_{\mathbb{R}}
\left[-\sqrt{\frac{\pi}{\omega}}\;U(x,0)+\sqrt{\pi\omega}
\int_0^{+\infty} U(x,\sigma)\;e^{-\sigma\omega}\,d\sigma\right]
e^{-ix\xi}\,dx\\&=
-\sqrt{\frac{\pi}{\omega}}\;\hat U(\xi,0)+
\sqrt{\pi\omega}\,
{\mathcal{E}}_{U}(\xi,\omega),
\end{split}\end{equation}
where~$\hat U$ denotes the Fourier transform of~$U$ in the variable~$x$.
Moreover, by \eqref{7A-DE}, up to normalizing constants we can write that
\begin{eqnarray*}
{\mathcal{E}}_{(-\Delta)^s_x U}(\xi,\omega)&=&\int_0^{+\infty}
|\xi|^{2s} \,\hat U(\xi,t)\,
e^{-t\omega}\,dt\\&=&|\xi|^{2s}\,{\mathcal{E}}_{U}(\xi,\omega).
\end{eqnarray*}
By this and~\eqref{841016810412}, we see that, to check the validity
of the evolution equation in~\eqref{EQUcoU}, recalling~\eqref{9AHK-2oakak},
up to normalizing constants
we need to establish that
\begin{equation}\label{EQUcoU-F}
|\xi|^{2s}\,{\mathcal{E}}_{U}(\xi,\omega)-\sqrt{\frac{1}{\omega}}+
\sqrt{\omega}\,
{\mathcal{E}}_{U}(\xi,\omega)=0.
\end{equation}
To this end, we consider the Fourier-Laplace transform of the function~$u$
that solves~\eqref{EQUco}. Namely,
we define
\begin{equation}\label{5INT-W} W(\xi,y,\omega):={\mathcal{E}}_u(\xi,y,\omega)=
\iint_{ {\mathbb{R}} \times(0,+\infty)} u(x,y,t) \,e^{-i x\xi-t\omega}\,dx\,dt.\end{equation}
In view of~\eqref{EQUco}, we point out that
\begin{eqnarray*} &&
-\delta_0(y)\,|\xi|^{2s}W(\xi,y,\omega)+\partial^2_yW(\xi,y,\omega)\\
&=&-\delta_0(y)\iint_{ {\mathbb{R}} \times(0,+\infty)}
|\xi|^{2s}\,u(x,y,t) \,e^{-i x\xi-t\omega}\,dx\,dt
+\partial^2_yW(\xi,y,\omega)\\
&=&-\delta_0(y)\int_0^{+\infty} |\xi|^{2s}\,
\hat u(\xi,y,t) \,e^{-t\omega}\,dt
+\partial^2_yW(\xi,y,\omega)\\&=&
{\mathcal{E}}_{-\delta_0(y)\,(-\Delta)^s_x u+\partial^2_y u}(\xi,y,\omega)\\&=&
{\mathcal{E}}_{\partial_tu}(\xi,y,\omega)\\&=&
\iint_{ {\mathbb{R}} \times(0,+\infty)} \partial_t u(x,y,t) \,e^{-i x\xi-t\omega}\,dx\,dt\\
&=&
\iint_{ {\mathbb{R}} \times(0,+\infty)} \Big(
\partial_t \big(u(x,y,t) \,e^{-i x\xi-t\omega}\big)+
\omega u(x,y,t) \,e^{-i x\xi-t\omega}
\Big)
\,dx\,dt\\
&=&
-\int_{ {\mathbb{R}} }
u(x,y,0)\,e^{-ix\xi}\,dx
+\omega\iint_{ {\mathbb{R}} \times(0,+\infty)} u(x,y,t) \,e^{-i x\xi-t\omega}
\,dx\,dt\\
&=&-\delta_0(y)+\omega W(\xi,y,\omega).
\end{eqnarray*}
This gives that the map~${\mathbb{R}}\ni y\mapsto W(\xi,y,\omega)$
satisfies
\begin{equation}\label{6gW}
\partial^2_y W=a^2 W-b(2a+c)\delta_0(y)+c\delta_0(y)\,W,\end{equation}
where~$a=a(\omega):=\sqrt\omega$, $b=b(\xi,\omega):=\frac{1}{2\sqrt\omega+|\xi|^{2s}}$
and~$c=c(\xi):=|\xi|^{2s}$.
Hence, fixing~$\xi\in{\mathbb{R}}$ and~$\omega>0$, one considers~\eqref{6gW}
as an equation for a function of~$y\in{\mathbb{R}}$, in which~$a$, $b$
and~$c$ are coefficients independent of~$y$.
With this in mind, we observe that, if~$a>0$, $b\in{\mathbb{R}}$,
and~$g(y):=b\,e^{-a|y|}$,
it holds that
\begin{equation}\label{7jALLAPP}
g''(y)=a^2 g(y)-b(2a+c)\delta_0(y)+c\delta_0(y)\,g(y),
\end{equation}
for any~$c\in{\mathbb{R}}$. To check this, let~$\varphi\in C^\infty_0({\mathbb{R}})$.
Then, we compute
\begin{eqnarray*}&&
\int_{\mathbb{R}} \big( g''(y)-a^2 g(y)+b(2a+c)\delta_0(y)-c\delta_0(y)\,g(y)\big)
\varphi(y)\,dy\\ &=&
\int_{\mathbb{R}} g(y)\varphi''(y)\,dy
-a^2\int_{\mathbb{R}}g(y)\varphi(y)\,dy
+b(2a+c)\varphi(0)-cg(0)\varphi(0)\\
&=&b\int_{\mathbb{R}}e^{-a|y|}\varphi''(y)\,dy
-a^2b\int_{\mathbb{R}} e^{-a|y|}\varphi(y)\,dy
+b(2a+c)\varphi(0)-bc\varphi(0)\\
&=&
b\int_{0}^{+\infty} e^{-ay}\varphi''(y)\,dy
+b\int_{-\infty}^0 e^{ay}\varphi''(y)\,dy\\&&\qquad
-a^2b\int_{0}^{+\infty} e^{-ay}\varphi(y)\,dy
-a^2b\int_{-\infty}^0 e^{ay}\varphi(y)\,dy\\&&\qquad
+b(2a+c)\varphi(0)-bc\varphi(0)\\&=&
-b\varphi'(0)+
ab\int_{0}^{+\infty} e^{-ay}\varphi'(y)\,dy
+
b\varphi'(0)
-ab\int_{-\infty}^0 e^{ay}\varphi'(y)\,dy\\&&\qquad
-a^2b\int_{0}^{+\infty} e^{-ay}\varphi(y)\,dy
-a^2b\int_{-\infty}^0 e^{ay}\varphi(y)\,dy\\&&\qquad
+b(2a+c)\varphi(0)-bc\varphi(0)\\&=&
-b\varphi'(0)-
ab \varphi(0)
+
a^2b\int_{0}^{+\infty} e^{-ay}\varphi(y)\,dy\\&&\qquad
+
b\varphi'(0)
-ab\varphi(0)
+a^2b\int_{-\infty}^0 e^{ay}\varphi(y)\,dy\\&&\qquad
-a^2b\int_{0}^{+\infty} e^{-ay}\varphi(y)\,dy
-a^2b\int_{-\infty}^0 e^{ay}\varphi(y)\,dy\\&&\qquad
+b(2a+c)\varphi(0)-bc\varphi(0)\\&=&0,
\end{eqnarray*}
which proves~\eqref{7jALLAPP}.
In the light of~\eqref{7jALLAPP}, we can write a solution of~\eqref{6gW} in the form
$$ W(\xi,y,\omega)=b(\xi,\omega) \,e^{-a(\omega)\,|y|}=
\frac{e^{-\sqrt\omega\,|y|}}{2\sqrt\omega+|\xi|^{2s}}.$$
As a consequence, recalling~\eqref{5INT-U} and~\eqref{5INT-W},
\begin{eqnarray*}
\frac{2}{ \sqrt\omega\big(2\sqrt\omega+|\xi|^{2s}\big) }&=&
\int_{\mathbb{R}}
\frac{e^{-\sqrt\omega\,|y|}}{2\sqrt\omega+|\xi|^{2s}}\,dy\\&=&
\int_{\mathbb{R}} W(\xi,y,\omega)\,dy\\&=&{\mathcal{E}}_U(\xi,\omega).
\end{eqnarray*}
Hence, we see that
$$\big(|\xi|^{2s}+2
\sqrt{\omega}\big)\,{\mathcal{E}}_{U}(\xi,\omega)=\frac{2}{ \sqrt\omega}.$$
With this, we have checked the validity of~\eqref{EQUcoU-F}
(up to the normalizing constants that we are systematically neglecting)
and thus we have established the fractional equation
in~\eqref{EQUcoU}.
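As a further consistency check, the identity~$\big(|\xi|^{2s}+2\sqrt\omega\big)\,{\mathcal{E}}_U(\xi,\omega)=\frac{2}{\sqrt\omega}$ obtained above can be verified numerically by integrating the explicit expression of~$W$ in~$y$. A small Python sketch (the values of~$s$, $\xi$ and~$\omega$ are arbitrary choices of ours):

```python
import math
from scipy.integrate import quad

s = 0.75  # arbitrary fractional exponent in (0, 1]

def E_U(xi, w):
    # integrate W(xi, y, w) = e^{-sqrt(w)|y|} / (2 sqrt(w) + |xi|^{2s}) in y;
    # the truncation at |y| = 60 is harmless given the exponential decay
    W = lambda y: math.exp(-math.sqrt(w) * abs(y)) / (2.0 * math.sqrt(w) + abs(xi)**(2.0 * s))
    val, _ = quad(W, -60.0, 60.0)
    return val

for xi in (0.5, 2.0):
    for w in (0.3, 1.5):
        lhs = (abs(xi)**(2.0 * s) + 2.0 * math.sqrt(w)) * E_U(xi, w)
        assert abs(lhs - 2.0 / math.sqrt(w)) < 1e-6
```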
}\end{example}
\begin{example}[Fractional operators and option pricing]
{\rm One of the most common problems in mathematical finance is option pricing.
Suppose that a holder wants
to buy some good, say, a ticket to the final match of the Champions League
(an ``underlying''\index{underlying} in financial jargon), which
will take place at time~$T$.
The price of this ticket depends on time: say that
at time~$t\in[0,T]$ this ticket costs~$S_t$
(we are using here a common notation in finance, in which the
subindex~$t$ denotes the value of a function at a given time~$t$).
Rather than buying directly the ticket for the price~$S_t$,
he/she can buy an option\footnote{For simplicity,
we are discussing here the case of \emph{European options}\index{options!European},
in which
the holder can only
exercise the option exactly at the given time~$T$.
Instead, in the {\em American options}\index{options!American}
the holder has the right, but not the obligation, to buy the underlying
at an agreed-upon price at any time~$t\in[0,T]$
(and, of course,
the seller has an obligation to sell the underlying if the option is exercised).} that allows him/her
to buy the ticket at time~$T$ for a fixed price~$K$
(called in jargon ``strike price''\index{strike price}).
The question is: should the holder buy the ticket or the option?
Or, more precisely: what is the value that such an option has in the market?
We suppose that the value~$V$ of the option depends
on the
time $t$ and on the cost of the ticket~$S$, therefore we will write~$V=V(S,t)$.
What is obvious is the final value of the option, since
\begin{equation}\label{FIN}
V(S,T):=\max\{ S-K, 0\}.
\end{equation}
Indeed, at the final time, one can decide to either buy the ticket at a certain price~$S$
or the option at price~$V(S,T)$ and then the ticket at the strike price~$K$,
and these two operations should be equivalent, hence~$S=V(S,T)+K$ (this is the case if~$S\ge K$;
otherwise the option simply has no value, since anyone can just buy
the ticket directly at a more convenient price, hence confirming~\eqref{FIN}).
Typically, in order to determine the option price, one can prove that it solves
some partial differential equation. For instance, one can follow the Black-Scholes model,
or a variation of it, see e.g.~\cite{MR1904936}.
In our framework, the risk-neutral dynamics of the asset (the price of the ticket)
is
given by an exponential model
$$
S_t=S_0\exp(\mu t+\sigma W_t),
$$
where $\mu\in\mathbb{R}$
denotes a drift which
measures the expected yield of the underlying,
$\sigma\in(0,+\infty)$ is a diffusion coefficient
which measures the degree of variation of the trading
price of the asset (in jargon, ``volatility''),
and $W_t$ is a ``reasonable'' stochastic process
modeling the unpredictable oscillations of the market.
In the classical case, $W_t$ was taken to be simply a
Brownian motion, but recently fractional Brownian motions
and jump processes have been taken into account to
model possibly different evolutions of the market.
Without going into technical details, following e.g.~\cite{MR2584076},
we will suppose that the stochastic motion
arises from the superposition of a Brownian motion with a jump process.
Concretely, we assume that the ``infinitesimal generator'' of
the process is of the form
\begin{equation}\label{0-0usdjfcnv}
{\mathcal{A}}:= a\partial^2 -b (-\Delta)^s,\qquad{\mbox{ for some }}s\in(0,1),\end{equation}
with~$a$, $b\ge0$,
that is, if we denote by~${\mathbb{E}}$ the expected value, we assume that
\begin{equation}\label{CRAS} \lim_{\tau\to0} \frac{
{\mathbb{E}}\big( f(S_{t+\tau})\big)-f(S_t)
}{\tau}= {\mathcal{A}} f(S_t).\end{equation}
The heuristic rationale of this formula (say, with~$t=0$) is as follows:
in the appropriate scale, the probability
density of a particle traveling under
the superposition of a Brownian motion with a jump process satisfies a heat
equation in which the classical Laplacian is replaced by the operator~${\mathcal{A}}$,
see e.g.~\cite{MR2584076}, hence, if~$S_{t,x}$ denotes the
stochastic evolution starting at~$x$ (i.e. $S_{0,x}=x$), and~$
u(x,t):=
{\mathbb{E}}\big( f(S_{t,x})\big)$ we expect that~$u$ is a solution of
$$ \begin{cases}
\partial_t u(x,t)={\mathcal{A}}u(x,t), & {\mbox{ if }} t>0,\\
u(x,0)=f(x).
\end{cases}$$
Hence we can expect that
\begin{eqnarray*}&&
\lim_{\tau\to0} \frac{{\mathbb{E}}\big( f(S_{\tau,x})\big)-f(S_{0,x})}{\tau}=
\lim_{\tau\to0} \frac{u(x,\tau)-f(x)}{\tau}\\&&\qquad=
\lim_{\tau\to0} \frac{u(x,\tau)-u(x,0)}{\tau}
=\partial_t u(x,0)\\&&\qquad=
{\mathcal{A}}u(x,0)
={\mathcal{A}}f(x)\\&&\qquad={\mathcal{A}}f(S_{0,x}),
\end{eqnarray*}
which can provide a justification for~\eqref{CRAS}.
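To accompany this heuristic, here is a minimal numerical sketch of~\eqref{CRAS} in the purely Brownian case, that is, assuming that~$W_t$ is a standard Brownian motion, so that~${\mathcal{A}}=\frac12\,\partial^2$ (the choice~$a=\frac12$, $b=0$); the test function~$f=\sin$ and all parameters below are illustrative choices, not taken from the text.

```python
import numpy as np

# Sanity check of the generator formula (CRAS) in the purely Brownian case,
# i.e. assuming W_t is a standard Brownian motion, so that A = (1/2) d^2/dx^2
# (the choice a = 1/2, b = 0 in the notation of the text).
# E[f(x + W_tau)], with W_tau ~ N(0, tau), is computed by Gauss-Hermite
# quadrature: E[g(Z)] = (1/sqrt(pi)) sum_i w_i g(sqrt(2) t_i) for Z ~ N(0,1).

f = np.sin                       # illustrative smooth test function
x, tau = 0.7, 1e-4               # base point and small time increment

nodes, weights = np.polynomial.hermite.hermgauss(60)
expectation = np.sum(weights * f(x + np.sqrt(2.0 * tau) * nodes)) / np.sqrt(np.pi)

difference_quotient = (expectation - f(x)) / tau
generator = 0.5 * (-np.sin(x))   # (1/2) f''(x) for f = sin

print(difference_quotient, generator)
```

For small~$\tau$, the difference quotient matches~$\frac12 f''(x)$ up to an error of order~$\tau$.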
We also introduce a suitable version of It\^{o}'s formula, of the form
\begin{equation}\label{ITO}
\frac{d}{dt} f(S_t) = f'(S_t)\,\frac{dS_t}{dt}+{\mathcal{A}}f(S_t).
\end{equation}
To try to justify~\eqref{ITO}, one can notice that the stochastic process
can move ``indifferently'' up or down, that is, $S_{t+\tau}-S_t$
has no ``definite sign'', hence
$$ \lim_{\tau\to0} \frac{
{\mathbb{E}}\big( f'(S_t)(S_{t+\tau}-S_t)\big)
}{\tau}=0.$$
That is, formally Taylor expanding, and recalling~\eqref{CRAS},
\begin{eqnarray*}
{\mathcal{A}} f(S_t)&=&
\lim_{\tau\to0} \left[\frac{{\mathbb{E}}\big( f(S_{t+\tau})\big)-f(S_{t+\tau})}{\tau}
+\frac{f(S_{t+\tau})-f(S_t)}{\tau}\right]\\
&=&\lim_{\tau\to0} \frac{{\mathbb{E}}\big( f(S_t)+
f'(S_t)(S_{t+\tau}-S_t)
\big)-\big(
f(S_t)+
f'(S_t)(S_{t+\tau}-S_t)
\big)}{\tau}
+\frac{d}{dt} f(S_t)\\
&=&\lim_{\tau\to0} -\frac{f'(S_t)(S_{t+\tau}-S_t)}{\tau}
+\frac{d}{dt} f(S_t)\\
&=& -f'(S_t)\,\frac{dS_t}{dt}
+\frac{d}{dt} f(S_t),
\end{eqnarray*}
which gives a heuristic justification of~\eqref{ITO}.
Clearly, formula~\eqref{ITO} can be extended to
functions which also depend explicitly on time, thus yielding
\begin{equation}\label{ITO2}
\frac{d}{dt} f(S_t,t) =
\frac{\partial f}{\partial t}(S_t,t)+
\frac{\partial f}{\partial S}(S_t,t)\,\frac{dS_t}{dt}+{\mathcal{A}}f(S_t,t).
\end{equation}
Now, the idea is to try to determine an equation for the value~$V$
of the option, under reasonable assumptions on the market.
It would be desirable to free~$V$ from the randomness
and uncertainty of the market. In a sense, the oscillations of
the market could affect the price~$S_t$ of the ticket at time $t$,
but we would like to know~$V=V(S,t)$ in a way which is independent
of this randomness (only in dependence of the time~$t$
and any possible price of the ticket~$S$), and then evaluate~$V(S_t,t)$
to have the value of the option at time~$t$, for the price~$S_t$ of the ticket.
To do so, we try to build a ``risk-free'' portfolio.
Given the oscillations of the market, the strategy of possessing only
a certain number of options is highly subject to the market uncertainties
and it would be safer for a holder, or for a company, not only
to buy or sell one or more options but also to buy or sell one or more tickets
for the Champions League final. That is, a suitable portfolio should be of the form
\begin{equation}\label{PORT} P(S,t)=V(S,t)+\delta S,\end{equation}
with~$\delta\in{\mathbb{R}}$ giving the number of tickets
to possess with respect to
the option to make the total portfolio as ``stable''
as possible (negative values of~$\delta$ would
correspond to selling tickets, rather than buying them). To choose~$\delta$ in such a way that~$P$ becomes
as close to risk-free as possible, it is desirable to reduce the oscillations
of~$P$ with respect to the variations of~$S$.
To this end, we make the simplifying assumption that~$V(0,t)=0$,
i.e. the value of the option is null if so is the price of the ticket,
and we observe that
$$ V(S,t)=V(S,t)-V(0,t)=\frac{\partial V}{\partial S}(0,t)\,S+O(S^2),$$
that is
$$ V(S,t)-\frac{\partial V}{\partial S}(0,t)\,S=O(S^2).$$
Comparing this with~\eqref{PORT}, we see that
a reasonable possibility to make the portfolio as independent as possible
of the fluctuating value~$S$ is to choose~$\delta:=-\frac{\partial V}{\partial S}(0,t)$
in~\eqref{PORT}, which leads to
\begin{equation}\label{PORT2} P(S,t)=V(S,t)-\frac{\partial V}{\partial S}(0,t)\, S.\end{equation}
Let us make a brief comment on the minus sign in~\eqref{PORT2}.
One can suppose, for instance, that~$V$ is monotone increasing with
respect to~$S$ (the more the ticket costs, the more the option is valuable)
and thus~$\frac{\partial V}{\partial S}(0,t)>0$.
In the setting of~\eqref{PORT2}, this means that, to maintain
a balanced portfolio, if one buys options it is appropriate to sell tickets.
One then makes the ansatz that~\eqref{PORT2}
gives indeed a risk-free portfolio. Under this assumption,
the time evolution of~$P$ is governed purely by the interest rate, and one can write
that~$P(S_t,t)$ evolves simply as~$P_0\,e^{rt}$, for
some~$r$, $P_0\in{\mathbb{R}}$.
As a consequence,
$$ \frac{d}{dt} P(S_t,t)=\frac{d}{dt}(P_0\,e^{rt})=
r\,P_0\,e^{rt}=r\,P(S_t,t).$$
Hence, recalling~\eqref{PORT2},
\begin{equation}\label{7:AKs}
\frac{d}{dt}\left( V(S_t,t)-\frac{\partial V}{\partial S}(0,t)\, S_t\right)=
r\,V(S_t,t)-r\,\frac{\partial V}{\partial S}(0,t)\, S_t.
\end{equation}
On the other hand, by~\eqref{ITO2},
\begin{eqnarray*}\frac{d}{dt}V(S_t,t) =
\frac{\partial V}{\partial t}(S_t,t)+
\frac{\partial V}{\partial S}(S_t,t)\,\frac{dS_t}{dt}+{\mathcal{A}}V(S_t,t),\end{eqnarray*}
and so~\eqref{7:AKs} becomes
\begin{equation}\label{67123498dgwfsu}\begin{split}&
\frac{\partial V}{\partial t}(S_t,t)+
\frac{\partial V}{\partial S}(S_t,t)\,\frac{dS_t}{dt}+{\mathcal{A}}V(S_t,t)
-\frac{ d}{dt}\left(\frac{\partial V}{\partial S}(0,t)\, S_t\right)\\ =\;&
r\,V(S_t,t)-r\,\frac{\partial V}{\partial S}(0,t)\, S_t.\end{split}\end{equation}
It is also customary to neglect the
dependence of~$\frac{\partial V}{\partial S}(0,t)$ on~$t$
(say, the values of the options for very cheap tickets
are more or less the same, independently of time), and thus
replace the term~$\frac{ d}{dt}\left(\frac{\partial V}{\partial S}(0,t)\, S_t\right)$
with~$ \frac{\partial V}{\partial S}(0,t)\, \frac{ dS_t}{dt}$. With this
approximation, one obtains from~\eqref{67123498dgwfsu}, after
simplifying two terms, that
\begin{equation}\label{BSS}
\frac{\partial V}{\partial t}(S_t,t)+{\mathcal{A}}V(S_t,t)=
r\,V(S_t,t)-r\,\frac{\partial V}{\partial S}(0,t)\, S_t.\end{equation}
Recalling~\eqref{0-0usdjfcnv}, when~$b=0$ one obtains
from~\eqref{BSS} the classical Black-Scholes equation
$$
\frac{\partial V}{\partial t}+rS_t\frac{\partial V}{\partial S_t}+\frac{\sigma^2}{2}S_t^2\frac{\partial^2 V}{\partial S_t^2}-rV=0\quad\text{in}\quad(0,+\infty)\times(0,T].
$$
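As a quick numerical sanity check of the classical case, the sketch below evaluates the standard closed-form European call price and verifies by finite differences that it solves the Black-Scholes equation displayed above; the parameters~$K$, $T$, $r$, $\sigma$ and the evaluation point are illustrative choices made for this example only.

```python
import math

# Finite-difference check that the closed-form European call price solves
# the classical Black-Scholes equation. All parameters are illustrative.

def Phi(x):
    # standard normal cumulative distribution function via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def V(S, t, K=1.0, T=1.0, r=0.05, sigma=0.2):
    # standard Black-Scholes European call value V(S, t)
    tau = T - t
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return S * Phi(d1) - K * math.exp(-r * tau) * Phi(d2)

S, t, h = 1.2, 0.3, 1e-4
r, sigma = 0.05, 0.2
V_t  = (V(S, t + h) - V(S, t - h)) / (2 * h)          # central differences
V_S  = (V(S + h, t) - V(S - h, t)) / (2 * h)
V_SS = (V(S + h, t) - 2 * V(S, t) + V(S - h, t)) / h**2
residual = V_t + r * S * V_S + 0.5 * sigma**2 * S**2 * V_SS - r * V(S, t)
print(residual)  # numerically close to 0
```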
In general, when~$b\ne0$ (and possibly~$a=0$)
one obtains in~\eqref{BSS} a nonlocal evolution equation
of fractional type. Such an equation
is complemented with the terminal condition in~\eqref{FIN}.}
\end{example}
\begin{example}[Complex analysis and Hilbert transform]{\rm
Given a (nice) function~$u:{\mathbb{R}}\to{\mathbb{R}}$,
the Hilbert transform\index{transform!Hilbert} of~$u$ is defined by
$$ H_u (t):=-{\frac {1}{\pi }}\int _{-\infty }^{\infty }{\frac {u(t)-u(\tau ) }{t-\tau }}\,d\tau .$$
We observe that
$$ \int _{|t-\tau|\in (r,R) }{\frac {u(t) }{t-\tau }}\,d\tau =u(t)\,
\int _{|\vartheta|\in (r,R) }{\frac {d\vartheta }{\vartheta }}=
u(t)\,\left(
\int _{r }^R{\frac {d\vartheta }{\vartheta }}+
\int _{-R}^{-r}{\frac {d\vartheta }{\vartheta }}\right)=
0,$$
for all~$0<r<R$. Hence,
in the principal value sense, after cancellations,
one can also write
$$ H_u (t):={\frac {1}{\pi }}\int _{-\infty }^{\infty }{\frac {u(\tau ) }{t-\tau }}\,d\tau .$$
Among others, a natural application of
the Hilbert transform occurs in complex analysis.
Indeed, identifying points~$(x,y)\in{\mathbb{R}}\times[0,+\infty)$
of the real upper half-plane with points~$x+iy\in\{ z\in{\mathbb{C}} {\mbox{ with }} \Im z\ge0\}$
of the complex upper half-plane, one has that, {\em
on the boundary of the half-plane, the two harmonic conjugate functions of a holomorphic function are
related via the Hilbert transform}. More precisely,
if~$f$ is holomorphic in the complex upper half-plane,
we write~$z=x+iy$ with~$x\in{\mathbb{R}}$ and~$y\ge0$, and
$$ f(z)=u(x,y)+i v(x,y).$$
We also set~$u_0(x):=u(x,0)$ and~$v_0(x):=v(x,0)$.
Then, under natural regularity assumptions, we have that
\begin{equation}\label{HIL}
v_0(x)=H_{u_0}(x),\qquad{\mbox{ for all }}x\in{\mathbb{R}}.
\end{equation}
For rigorous complex analysis
results, we refer e.g.
to Theorem~93 on page~125 of~\cite{MR942661} and
to Theorems~3 and~4
on pages~77--78 of~\cite{MR1363489}. See also Sections~2.6--2.9
in~\cite{MR1363489} for a number of concrete applications of
the Hilbert transform.
We sketch two arguments to establish~\eqref{HIL},
one based on classical complex methods, and one exploiting
fractional calculus.
The first argument goes as follows. For any~$z=x+iy$ with~$y>0$, we consider a complex-variable version of the Hilbert transform of~$u_0$ (up to normalization constants),
namely we set
$$ F(z):=\frac1{\pi i} \int_{\mathbb{R}}
\frac{u_0(t)}{t-z}\,dt.$$
Then, $F$ is holomorphic in the upper half-plane. Moreover,
\begin{equation}\label{DeSTA}
\begin{split}
F(z)\,&=\frac1{\pi i} \int_{\mathbb{R}}
\frac{u_0(t)}{t-x-iy}\,dt
\\ &=\frac1{\pi } \int_{\mathbb{R}}
\frac{u_0(t)\, (i(x-t)+y)}{(t-x)^2+y^2}\,dt.
\end{split}\end{equation}
We let~$\tilde u(x,y):= \Re F(x+iy)$
and~$\tilde v(x,y):= \Im F(x+iy)$.
Then, using the substitution~$w:=(t-x)/y$,
\begin{eqnarray*}
\lim_{y\searrow0}\tilde u(x,y)&=&
\lim_{y\searrow0}
\frac1{\pi } \int_{\mathbb{R}}
\frac{u_0(t)\, y}{(t-x)^2+y^2}\,dt
\\
&=&
\lim_{y\searrow0}
\frac1{\pi } \int_{\mathbb{R}}
\frac{u_0(x+wy)}{w^2+1}\,dw
\\&=&
\frac{u_0(x)}{\pi } \int_{\mathbb{R}}
\frac{dw}{w^2+1}
\\&=& u_0(x).
\end{eqnarray*}
That is, the real part of~$F$ coincides with
the harmonic extension of~$u_0$ to the upper half-plane,
up to harmonic functions vanishing on the trace.
Therefore (reducing to finite energy solutions), we suppose that~$\tilde u=u$. Since~$\tilde v$ and~$v$ are the
conjugate harmonic functions of~$\tilde u$ and~$u$, respectively, from the Cauchy–Riemann equations we thereby find that
\begin{eqnarray*}&&
0=\partial_y (\tilde u-u)=-\partial_x (\tilde v-v)\\
{\mbox{and }}&&0=\partial_x (\tilde u-u)=\partial_y (\tilde v-v).
\end{eqnarray*}
Hence (restricting to functions with finite energy), we have that~$\tilde v=v$.
{F}rom these observations and~\eqref{DeSTA}, we find that
\begin{eqnarray*}
v_0(x)&=&\lim_{y\searrow0} \tilde v(x,y)
\\&=& \lim_{y\searrow0}
\frac1{\pi } \int_{\mathbb{R}}
\frac{u_0(t)\, (x-t)}{(t-x)^2+y^2}\,dt
\\&=& \lim_{y\searrow0}
\frac1{\pi } \int_{\mathbb{R}}
\frac{(u_0(t)-u_0(x))\, (x-t)}{(t-x)^2+y^2}\,dt
\\&=&
\frac1{\pi } \int_{\mathbb{R}}
\frac{(u_0(t)-u_0(x))\, (x-t)}{(t-x)^2}\,dt
\\&=&
-\frac1{\pi } \int_{\mathbb{R}}
\frac{u_0(t)-u_0(x)}{t-x}\,dt,\end{eqnarray*}
that gives~\eqref{HIL}.
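Before turning to the second argument, here is a minimal numerical sketch of~\eqref{HIL} for the illustrative choice~$f(z)=\frac{1}{1-iz}$, which is holomorphic in the upper half-plane (its pole sits at~$z=-i$) and whose boundary values are~$u_0(x)=\frac{1}{1+x^2}$ and~$v_0(x)=\frac{x}{1+x^2}$; the quadrature parameters are hypothetical choices.

```python
import numpy as np

# Numerical check of (HIL) for f(z) = 1/(1 - i z), whose boundary values are
# u0(x) = 1/(1+x^2) and v0(x) = x/(1+x^2). After the substitutions
# tau = t - theta and tau = t + theta, the principal value becomes the
# absolutely convergent integral
#   H_{u0}(t) = (1/pi) * int_0^infty (u0(t-theta) - u0(t+theta))/theta dtheta.

u0 = lambda x: 1.0 / (1.0 + x**2)

def hilbert(t, R=200.0, h=1e-3):
    theta = np.arange(h / 2, R, h)          # midpoint grid on (0, R)
    integrand = (u0(t - theta) - u0(t + theta)) / theta
    return np.sum(integrand) * h / np.pi

t = 1.0
approx = hilbert(t)
exact = t / (1.0 + t**2)                    # v0(t) from the holomorphic f
print(approx, exact)
```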
We now present another argument to establish~\eqref{HIL}
based on fractional calculus.
Since~$u$ is harmonic in the upper half-plane,
using the Fourier transform in the~$x$ variable,
and dropping normalization constants for the
sake of simplicity,
we see that, for all~$y\in(0,+\infty)$,
$$ 0 ={\mathcal{F}} (\Delta u)(\xi,y)=-|\xi|^2 \hat u(\xi,y)+\partial^2_y \hat u(\xi,y),$$
and thus, selecting the solution which stays bounded as~$y\to+\infty$, $\hat u(\xi,y)= C(\xi)\,e^{-|\xi| y}$, for some function~$C=C(\xi)$.
This gives that
$$ \partial_y \hat u(\xi,0)= -|\xi|\,C(\xi) = -|\xi| \,\hat u(\xi,0)=-|\xi|\,\hat u_0(\xi).
$$
This and~\eqref{7A-DE} give that
$$ \partial_{y} u(x,0)=
{\mathcal{F}}^{-1}\Big(
\partial_{y}\hat u(\xi,0) \Big)
=
-{\mathcal{F}}^{-1}\Big(|\xi|\,\hat u_0(\xi)\Big)=
-\sqrt{-\Delta}u_0(x).$$
Hence, recalling~\eqref{7A-DE2},
dropping normalization constants for the
sake of simplicity, and using a parity cancellation, we see that
\begin{eqnarray*}
\partial_{y} u(x,0)&=&\frac12\,
\int_{{\mathbb{R}}}\frac{u_0(x+\theta)+
u_0(x-\theta)-2u_0(x)}{\theta^{2}}\,d\theta\\
&=&\lim_{\delta\searrow0}
\int_{{\mathbb{R}}\setminus(-\delta,\delta)}\frac{u_0(x+\theta)-u_0(x)}{\theta^{2}}\,d\theta\\
&=& \lim_{\delta\searrow0}
\int_{{\mathbb{R}}\setminus(-\delta,\delta)}\frac{u_0(x+\theta)
-u_0(x)-u_0'(x)\theta\Psi(\theta)
}{\theta^{2}}\,d\theta\\
&=& \int_{{\mathbb{R}}}\frac{u_0(x+\theta)
-u_0(x)-u_0'(x)\theta\Psi(\theta)
}{\theta^{2}}\,d\theta\\
&=& \int_{{\mathbb{R}}}\frac{u_0(\tau)
-u_0(x)
-u_0'(x)(\tau-x)\Psi(\tau-x)
}{(\tau-x)^{2}}\,d\tau,
\end{eqnarray*}
where~$\Psi\in C^\infty_0([-2,2], [0,1])$ is an even function such that~$
\Psi=1$ in $[-1,1]$. Hence, from the Cauchy–Riemann equations,
$$ -\partial_{x}v_0(x)=-\partial_{x} v(x,0)=
\int_{{\mathbb{R}}}\frac{u_0(\tau)
-u_0(x)
-u_0'(x)(\tau-x)\Psi(\tau-x)
}{(\tau-x)^{2}}\,d\tau.$$
As a consequence,
\begin{equation}\label{701-39485x}
\begin{split}-
v_0(x)\,&= -\int_{-\infty}^x \partial_{x}v_0(t)\,dt\\&=
\int_{-\infty}^x\left[
\int_{{\mathbb{R}}}\frac{u_0(\tau)
-u_0(t)
-u_0'(t)(\tau-t)\Psi(\tau-t)
}{(\tau-t)^{2}}\,d\tau
\right]\,dt\\&=\lim_{R\to+\infty}
\int_{-\infty}^x\left[
\int_{-R+t}^{R+t}\frac{u_0(\tau)
-u_0(t)
-u_0'(t)(\tau-t)\Psi(\tau-t)
}{(\tau-t)^{2}}\,d\tau
\right]\,dt.
\end{split}\end{equation}
By an integration by parts, we notice that
\begin{eqnarray*}&&
\int_{-\infty}^x \frac{u_0(\tau)
-u_0(t) -u_0'(t)(\tau-t)\Psi(\tau-t)
}{(\tau-t)^{2}}\,dt\\
&=&
\int_{-\infty}^x \Big( u_0(\tau)
-u_0(t)
-u_0'(t)(\tau-t)\Psi(\tau-t)
\Big)\,
\frac{d}{dt}\left(\frac{1}{\tau-t}\right)\,dt\\&
=&\frac{ u_0(\tau)
-u_0(x)-u_0'(x)(\tau-x)\Psi(\tau-x)
}{\tau-x}\\ &&\quad+
\int_{-\infty}^x \frac{
u_0'(t)+
u_0''(t)(\tau-t)\Psi(\tau-t)-
u_0'(t)\Psi(\tau-t)-
u_0'(t)(\tau-t)\Psi'(\tau-t)
}{\tau-t}\,dt\\
&=&\frac{ u_0(\tau)
-u_0(x)}{\tau-x}
-u_0'(x)\Psi(\tau-x)
\\ &&\quad+
\int_{-\infty}^x
\left(
\frac{u_0'(t)}{\tau-t}+
u_0''(t)\Psi(\tau-t)-\frac{
u_0'(t)\Psi(\tau-t)}{\tau-t}-
u_0'(t)\Psi'(\tau-t)\right)\,dt\\
&=&\frac{ u_0(\tau)
-u_0(x)}{\tau-x}
-u_0'(x)\Psi(\tau-x)
\\ &&\quad+
\int_{-\infty}^x
\left(
\frac{u_0'(t)-u_0'(t)\Psi(\tau-t)}{\tau-t}+\frac{d}{dt}\Big(
u_0'(t)\Psi(\tau-t)\Big)\right)\,dt\\
&=&\frac{ u_0(\tau)
-u_0(x)}{\tau-x}
+
\int_{-\infty}^x u_0'(t)\Phi(\tau-t)\,dt,
\end{eqnarray*}
where we defined the odd function
$$ \Phi(r):=\frac{1-\Psi(r)}{r}.$$
By inserting this information into~\eqref{701-39485x}
and exchanging integrals, one obtains that
\begin{equation*}
\begin{split}-
v_0(x)\,&=\lim_{R\to+\infty}
\int_{-R+t}^{R+t}\left[
\frac{ u_0(\tau)
-u_0(x)}{\tau-x}
+
\int_{-\infty}^x u_0'(t)\Phi(\tau-t)\,dt
\right]\,d\tau
\\ &=\int_{\mathbb{R}}\frac{ u_0(\tau)
-u_0(x)}{\tau-x}\,d\tau
+\lim_{R\to+\infty}
\int_{-\infty}^x u_0'(t)\;\left[ \int_{-R}^{R} \Phi(\theta)\,d\theta\right]\,dt
\\ &=\int_{\mathbb{R}}\frac{ u_0(\tau)
-u_0(x)}{\tau-x}\,d\tau
.\end{split}\end{equation*}
This gives~\eqref{HIL}
(up to the normalizing constant that we have neglected).
}\end{example}
\begin{example}[Caputo derivatives and the fractional Laplacian]
{\rm The time-fractional diffusion that we model by the Caputo derivative
has fundamental differences with respect to the space-fractional diffusion
driven by the fractional Laplacian, since the latter possesses many invariances,
such as the ones under rotations and translations, that are not valid in the time-fractional
setting given by the Caputo derivative, in view of a memory effect which clearly distinguishes
between ``the past'' and ``the future'' and determines
the ``time's arrow''.
On the other hand, the sum of two Caputo derivatives with
opposite time directions reduces to the fractional Laplacian,
as we now discuss in detail.
Let $\alpha\in(0,1)$ and let $u$ be sufficiently smooth. An integration by parts\footnote{As a historical
remark, we observe that
formulas such as in~\eqref{eq:fromcaptomar} naturally relate different
definitions of time-fractional derivatives, such as the Caputo derivative
and the Marchaud\index{fractional!derivative!Marchaud} fractional derivative.}
allows us to write the (left) Caputo derivative as
\begin{equation}
\label{eq:fromcaptomar}
D^{\alpha}_{t,a,+}u(t):=\frac{u(t)-u(a)}{\Gamma(1-\alpha)(t-a)^{\alpha}}+\frac{\alpha}{\Gamma(1-\alpha)}\int_a^t\frac{u(t)-u(\tau)}{(t-\tau)^{\alpha+1}} d\tau.
\end{equation}
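A quick numerical sketch of~\eqref{eq:fromcaptomar}: with the illustrative choices~$u(t)=t^2$, $a=0$, $\alpha=\frac12$ and~$t=1$ (so that, in particular, $u(a)=0$), both sides can be evaluated by elementary quadrature, and the common value turns out to be~$\frac{8}{3\sqrt{\pi}}$.

```python
import numpy as np
from math import gamma, sqrt, pi

# Numerical check of (eq:fromcaptomar) for u(t) = t^2 on [0,1], alpha = 1/2,
# t = 1. Left side: the Caputo derivative written with u'; right side: the
# integrated-by-parts form (boundary term plus singular integral).

alpha, a, t = 0.5, 0.0, 1.0
u  = lambda s: s**2
du = lambda s: 2.0 * s

h = 1e-6
tau = np.arange(a + h / 2, t, h)            # midpoint grid on (a, t)

caputo = np.sum(du(tau) / (t - tau)**alpha) * h / gamma(1.0 - alpha)

# boundary term at tau = a (here u(a) = 0)
boundary = (u(t) - u(a)) / (gamma(1.0 - alpha) * (t - a)**alpha)
marchaud = alpha / gamma(1.0 - alpha) * np.sum((u(t) - u(tau)) / (t - tau)**(alpha + 1.0)) * h

print(caputo, boundary + marchaud, 8.0 / (3.0 * sqrt(pi)))
```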
Choosing $a:=-\infty$, formula \eqref{eq:fromcaptomar} reduces to
\begin{equation}
\label{eq:marleft}
D^{\alpha}_{t,-\infty,+}u(t)=\frac{\alpha}{\Gamma(1-\alpha)}\int_{-\infty}^t\frac{u(t)-u(\tau)}{(t-\tau)^{\alpha+1}} d\tau=\frac{\alpha}{\Gamma(1-\alpha)}\int_0^{+\infty}\frac{u(t)-u(t-\tau)}{\tau^{\alpha+1}}d\tau.
\end{equation}
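As a sanity check of~\eqref{eq:marleft}, one can test it on~$u(t)=e^{\lambda t}$ with~$\lambda>0$, for which the classical identity~$D^{\alpha}_{t,-\infty,+}e^{\lambda t}=\lambda^{\alpha}e^{\lambda t}$ holds. The sketch below uses the illustrative values~$\lambda=2$, $\alpha=\frac12$ and~$t=0$, and treats the far tail of the integral in closed form.

```python
import numpy as np
from math import gamma

# Numerical check of (eq:marleft): for u(t) = exp(lambda t) with lambda > 0,
# one has D^alpha_{t,-infty,+} u(t) = lambda^alpha u(t).
# Over (R, infinity) one has u(t) - u(t - tau) ~ u(t), so that tail is added
# in closed form: int_R^inf tau^(-1-alpha) dtau = R^(-alpha)/alpha.

alpha, lam, t = 0.5, 2.0, 0.0
u = lambda s: np.exp(lam * s)

h, R = 1e-4, 100.0
tau = np.arange(h / 2, R, h)                 # midpoint grid on (0, R)
head = np.sum((u(t) - u(t - tau)) / tau**(alpha + 1.0)) * h
tail = u(t) * R**(-alpha) / alpha

approx = alpha / gamma(1.0 - alpha) * (head + tail)
exact = lam**alpha * u(t)
print(approx, exact)
```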
Since, in the core of this monograph, we will be interested in fractional
derivatives with a prescribed arrow of time, we will focus
on this type of definition (and, in fact, also on higher order ones,
as in~\eqref{defcap}). Nevertheless, by an inversion of the time's arrow,
one can also define a notion of right Caputo derivative, which, when $a:=+\infty$, can be written as
\begin{equation}
\label{eq:marright}
D^{\alpha}_{t,+\infty,-}u(t):=\frac{\alpha}{\Gamma(1-\alpha)}\int_t^{+\infty}\frac{u(t)-u(\tau)}{(\tau-t)^{\alpha+1}} d\tau=\frac{\alpha}{\Gamma(1-\alpha)}\int_0^{+\infty}\frac{u(t)-u(t+\tau)}{\tau^{\alpha+1}}d\tau.
\end{equation}
See the forthcoming footnote on page~\pageref{NOTION}
for further comments on the notion of left and right fractional derivatives.
Summing up \eqref{eq:marleft} and \eqref{eq:marright}, and dropping normalization constants
for simplicity, we have that
\begin{eqnarray*}
D^{\alpha}_{t,-\infty,+}u(t)+D^{\alpha}_{t,+\infty,-}u(t)&=&
\int_0^{+\infty}\frac{2u(t)-u(t+\tau)-u(t-\tau)}{\tau^{\alpha+1}}d\tau \\
&=&\frac12\,\int_{-\infty}^{+\infty}\frac{2u(t)-u(t+\tau)-u(t-\tau)}{|\tau|^{\alpha+1}}d\tau \\
&=&\left(-\Delta_t\right)^{\alpha/2}u(t),
\end{eqnarray*}
where we used the integral representation of the fractional Laplacian
(recall~\eqref{7A-DE2}) and the
obvious one-dimensional notation
$$ \Delta_t:=\frac{\partial^2}{\partial t^2}.$$
Therefore, the sum of the left and right Caputo derivatives with initial points $-\infty$ and $+\infty$, respectively, gives, up to a multiplicative constant, the one-dimensional fractional Laplacian $\left(-\Delta_t\right)^{\alpha/2}$, which is an operator of order $\alpha$.
}\end{example}
Other applications of fractional equations will be discussed in Appendix~\ref{APPEA}.
\chapter{Main results}\label{DUEE}
\label{s:first}
After having discussed in detail several motivations for fractional equations
in Chapter~\ref{WHY},
we begin here the mathematically rigorous part of this monograph,
presenting the main original mathematical
results contained in this book and their relation with the existing literature.
In this work we prove the local density of functions
which annihilate a linear operator built by classical and
fractional derivatives, both in space and time.
Nonlocal operators of fractional type
present a variety of challenging problems in pure mathematics,
also in connection with long-range phase transitions and nonlocal
minimal surfaces,
and are nowadays commonly exploited in a large number of models
describing complex phenomena related
to anomalous diffusion and boundary reactions
in physics, biology and material sciences (see e.g.~\cite{claudia}
for several examples, for instance in atom dislocations in crystals and
water waves models).
Furthermore, anomalous diffusion in the space variables can be seen
as the natural counterpart of discontinuous
Markov processes, thus providing important connections
with problems in probability and statistics, and several applications to
economics and finance (see e.g.~\cite{MR0242239,MR3235230} for pioneering works relating
anomalous diffusion and financial models).
On the other hand, the development of time-fractional derivatives
began at the end of the seventeenth century, also in view of
contributions by mathematicians
such as Leibniz, Euler, Laplace, Liouville, Abel, Heaviside, and many others,
see e.g.~\cite{MR2624107, MR0444394, MR860085, MR125162492, ferrari} and the references therein for several
interesting scientific and historical discussions.
{F}rom the point of view of the applications, time-fractional derivatives
naturally provide a model to comprise memory effects in the description of the
phenomena under consideration.
In this work, the time-fractional derivative will be mostly described
in terms of the so-called
Caputo fractional derivative (see~\cite{MR2379269}), which
induces a natural ``direction'' in
the time variable, distinguishing between ``past'' and ``future''. In particular, the time direction encoded in this setting
allows the analysis of ``non-anticipative systems'', namely phenomena in which
the state at a given time depends on past events, but not on future ones.
The Caputo derivative is also related to other types of time-fractional derivatives,
such as the Marchaud fractional derivative, which has
applications in modeling anomalous time
diffusion, see e.g.~\cite{MR3488533, AV, ferrari}.
See also~\cite{MR1219954, MR1347689} for more details on fractional
operators and several applications.
Here, we will take advantage of the nonlocal structure
of a very general linear operator containing fractional derivatives
in some variables (say, either time, or space, or both),
in order to approximate, in the smooth sense and with arbitrary precision,
any prescribed function. Remarkably, {\em no structural assumption}
needs to be made on the prescribed function: therefore
this approximation property reveals a {\em truly nonlocal behaviour},
since it is
in contrast with the rigidity of the functions that lie in the kernel
of classical linear
operators (for instance, harmonic functions cannot approximate
a function with interior maxima or minima, functions with null first derivatives
are necessarily constant, and so on).
Approximation
results by means of solutions of nonlocal equations were first established
in~\cite{MR3626547}
for the case of the fractional Laplacian,
and since then widely studied under different perspectives,
including harmonic analysis,
see~\cite{MR3774704, 2016arXiv160909248G, 2017arXiv170804285R, 2017arXiv170806294R, 2017arXiv170806300R}.
The approximation result for the
one-dimensional case of a fractional derivative of Caputo type
has been considered in~\cite{MR3716924, CDV18}, and
operators involving classical time derivatives and additional classical derivatives
in space have been studied in~\cite{DSV1}.
The great flexibility of solutions of fractional problems established
by this type of approximation results
also has consequences that go beyond
purely mathematical curiosity.
For example, these results
can be applied to study the evolution of
biological populations, showing how a nonlocal hunting or dispersive
strategy can be
more convenient than one based on classical diffusion,
in order to avoid waste of resources and to optimize the search for food
in a sparse environment, see~\cite{MR3590678, MR3579567}.
Interestingly, the theoretical descriptions
provided in this setting can be compared with a series of concrete
biological data and real world experiments, confirming
anomalous diffusion behaviours in many biological species, see~\cite{ALBA}.
Another interesting application of time-fractional derivatives
arises in neuroscience, for instance in view of the anomalous diffusion
which has been experimentally measured in neurons, see e.g.~\cite{SANTA}
and the references therein.
In this case, the anomalous diffusion could be seen as the effect
of the highly ramified structure of the biological cells
taken into account, see~\cite{comb, DV1}.
In many applications, it is also natural to consider the case in
which different types of diffusion take
place in different variables: for instance, classical diffusion
in space variables could be
naturally combined with anomalous diffusion with respect to variables
which take into account genetic information, see~\cite{GEN1, GEN2}.
Now, to state the main original results of this work, we introduce some notation.
In what follows, we will denote the ``local variables''
with the symbol $x$, the ``nonlocal variables''
with $y$, the ``time-fractional variables''
with $t$.
Namely, we consider the variables
\begin{equation}\label{1.0}\begin{split}
&x=\left(x_1,\ldots,x_n\right)\in\mathbb{R}^{p_1}\times\ldots\times\mathbb{R}^{p_n},
\\&
y=\left(y_1,\ldots,y_M\right)\in\mathbb{R}^{m_1}
\times\ldots\times\mathbb{R}^{m_M}\\
{\mbox{and }}\;&
t=\left(t_1,\ldots,t_l\right)\in\mathbb{R}^l,\end{split}\end{equation}
for some $p_1,\dots,p_n$, $M$, $m_1,\dots,m_M$, $l \in\mathbb{N}$, and we let
$$\left(x,y,t\right)\in\mathbb{R}^N,\qquad{\mbox{
where }}\;N:=p_1+\ldots+p_n+m_1+\ldots+m_M+l.$$
When necessary, we will use the notation $B_R^k$ to denote the $k$-dimensional
ball of radius $R$, centered at the origin in $\mathbb{R}^k$; otherwise, when there are no ambiguities, we will use the usual notation $B_R$.
Given $r=\left(r_1,\ldots,r_n\right)\in\mathbb{N}^{p_1}\times\ldots\times\mathbb{N}^{p_n}$,
with~$|r_i|\ge1$ for each~$i\in\{1,\dots, n \}$,
and~${\mbox{\large{\wedn{a}}}}=\left({\mbox{\large{\wedn{a}}}}_1,\ldots,{\mbox{\large{\wedn{a}}}}_n\right)\in\mathbb{R}^n$, we consider the local operator
acting on the variables~$x=(x_1,\dots,x_n)$ given by
\begin{equation}\label{ILPOAU-1}
\mathfrak{l}:=\sum_{i=1}^n {{\mbox{\large{\wedn{a}}}}_i\partial^{r_i}_{x_i}},
\end{equation}
where the multi-index notation has been used.
Furthermore, given ${\mbox{\large{\wedn{b}}}}=\left({\mbox{\large{\wedn{b}}}}_1,\ldots,{\mbox{\large{\wedn{b}}}}_M\right)\in\mathbb{R}^M$ and
$s=\left(s_1,\ldots,s_M\right)\in\left(0,+\infty\right)^M$, we consider the operator
\begin{equation}\label{ILPOAU-2}
\mathcal{L}:=\sum_{j=1}^M {{\mbox{\large{\wedn{b}}}}_j(-\Delta)^{s_j}_{y_j}},
\end{equation}
where each operator~$(-\Delta)^{s_j}_{y_j}$
denotes the fractional Laplacian\index{fractional!Laplacian} of order $2s_j$ acting
on the set of space variables~$y_j\in\mathbb{R}^{m_j}$. More precisely,
for any~$j\in\{1,\dots,M\}$,
given $s_j>0$, we let $h_j\in\mathbb{N}$ be the smallest integer for which $s_j\in(0,h_j)$, that is $h_j:=\min\{q_j\in\mathbb{N} {\mbox{ s.t. }} s_j\in(0,q_j)\}$,
in the spirit of~\cite{AJS2}, we consider
the operator
\begin{equation}
\label{nonlocop}
(-\Delta)^{s_j}_{y_j}u\left(x,y,t\right):=\int_{\mathbb{R}^{m_j}}
{\frac{\left(\delta_{h_j} u\right)
\left(x,y,t,Y_j\right)}{|Y_j|^{m_j+2s_j}} \,dY_j},
\end{equation}
where
\begin{equation}\label{898989ksdc}
\left(\delta_{h_j} u\right)\left(x,y,t,Y_j\right):=
\sum_{k=-h_j}^{h_j} {\left(-1\right)^k \binom{2h_j}{h_j-k}u
\left(x,y_1,\ldots,y_{j-1},y_j+kY_j,y_{j+1},\ldots,y_M,t\right)}.\end{equation}
In particular, when $h_j:=1$, this setting comprises the case
of the fractional Laplacian~$\left(-\Delta\right)^{s_j}_{y_j}$ of order~$2s_j\in(0,2)$, given by
\begin{equation*}\begin{split}
\left(-\Delta\right)^{s_j}_{y_j}u\left(x,y,t\right)
&\, := c_{m_j,s_j}\;
\int_{\mathbb{R}^{m_j}}
\Big(2u(x,y,t)-
u(x,y_1,\ldots,y_{j-1},y_j+Y_j,y_{j+1},\ldots,y_M,t)\\&\qquad\qquad-
u(x,y_1,\ldots,y_{j-1},y_j-Y_j,y_{j+1},\ldots,y_M,t)
\Big)\;
\frac{dY_j}{|Y_j|^{m_j+2s_j}},\end{split}
\end{equation*}
where $s_j\in(0,1)$ and $c_{m_j,s_j}$ denotes a multiplicative normalizing constant
(see e.g. formula~(3.1.10) in~\cite{claudia}).
It is interesting to recall that
if $h_j=2$ and $s_j=1$ the setting in~\eqref{nonlocop}
provides a nonlocal representation for the classical Laplacian,
see \cite{AV}.
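The remark above can be tested numerically. In the sketch below (one space dimension, with~$u=\cos$ and hypothetical quadrature parameters), the quantity in~\eqref{nonlocop} with~$h_j=2$ and~$s_j=1$ is checked to be proportional to~$-u''$, with the same proportionality constant at different base points.

```python
import numpy as np

# Numerical illustration: for h_j = 2 and s_j = 1 (in one dimension, m_j = 1),
# the operator (nonlocop) reproduces the classical Laplacian up to a constant.
# We apply it to u = cos and check that the result is proportional to
# -u''(x) = cos(x), with the same ratio at different base points.

u = np.cos

def delta2_integral(x, R=200.0, h=1e-3):
    theta = np.arange(h / 2, R, h)           # midpoint grid; integrand is even
    # (delta_2 u)(x, theta) with weights (-1)^k binom(4, 2-k): 1, -4, 6, -4, 1
    d2 = (u(x + 2 * theta) - 4 * u(x + theta) + 6 * u(x)
          - 4 * u(x - theta) + u(x - 2 * theta))
    return 2.0 * np.sum(d2 / theta**3) * h   # kernel |theta|^(m_j + 2 s_j) = |theta|^3

x1, x2 = 0.3, 1.0
r1 = delta2_integral(x1) / np.cos(x1)        # -u'' = cos, so the ratio is constant
r2 = delta2_integral(x2) / np.cos(x2)
print(r1, r2)
```

Both ratios agree, confirming that the operator acts as a constant multiple of the classical Laplacian on this test function (the precise value of the constant is immaterial for the remark).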
In our general framework,
we take into account also nonlocal operators of time-fractional type.
To this end,
for any~$\alpha>0$, letting $k:=[\alpha]+1$ and $a\in\mathbb{R}\cup\left\{-\infty\right\}$,
one can introduce the left\footnote{In the literature, one often finds also \label{NOTION}
the notion of right Caputo fractional derivative, defined for $t<a$ by
$$ \frac{(-1)^k}{\Gamma\left(k-\alpha\right)}\int^{a}_{t}
\frac{\partial_{t}^{k} u(\tau)}{\left(\tau-t\right)^{\alpha-k+1}}
\, d\tau.$$
Since the right time-fractional derivative boils down to the left one
(by replacing~$t$ with~$2a-t$), in this work we focus only
on the case of left derivatives.
Also, though there are several time-fractional derivatives that are studied
in the literature under different perspectives,
we focus here on the Caputo derivative, since it possesses well-posedness properties
with respect to classical initial
value problems, differently from other time-fractional derivatives, such as the
Riemann-Liouville derivative\index{fractional!derivative!Riemann-Liouville},
in which the initial value setting involves data containing
derivatives of
fractional order.}
Caputo fractional derivative\index{fractional!derivative!Caputo} of order $\alpha$
and initial point\index{initial point} $a$, defined, for~$t>a$,
as
\begin{equation}
\label{defcap}
D_{t,a}^{\alpha}u(t):=\frac{1}{\Gamma(k-\alpha)}\int_a^t \frac{\partial_{t}^{k} u\left(\tau\right)}{(t-\tau)^{\alpha-k+1}} d\tau,
\end{equation}
where\footnote{For notational simplicity, we will often denote~$\partial_t^k u
=u^{(k)}$.} $\Gamma$ denotes Euler's Gamma function.
In this framework,
fixed ${\mbox{\large{\wedn{c}}}}\,=\left({\mbox{\large{\wedn{c}}}}\,_1,\ldots,{\mbox{\large{\wedn{c}}}}\,_l\right)\in\mathbb{R}^l$,
$\alpha=(\alpha_1,\dots,\alpha_l)\in(0,+\infty)^l$
and~$a=(a_1,\dots,a_l)\in(\mathbb{R}\cup\left\{-\infty\right\})^l$,
we set
\begin{equation}\label{ILPOAU-3}
\mathcal{D}_a:=\sum_{h=1}^l {\mbox{\large{\wedn{c}}}}\,_h \,D_{t_h, a_h}^{\alpha_h}\,.
\end{equation}
Then, in the notation introduced in~\eqref{ILPOAU-1}, \eqref{ILPOAU-2}
and~\eqref{ILPOAU-3}, we consider here the superposition of the
local, the space-fractional, and the time-fractional operators, that is, we set
\begin{equation}\label{1.6BIS}
\Lambda_a:=\mathfrak{l}+\mathcal{L}+\mathcal{D}_a.
\end{equation}
With this, the statement of our main result goes as follows:
\begin{theorem}\label{theone}
Suppose that
\begin{equation}\label{NOTVAN}\begin{split}&
{\mbox{either there exists~$i\in\{1,\dots,M\}$ such that~${\mbox{\large{\wedn{b}}}}_i\ne0$
and~$s_i\not\in{\mathbb{N}}$,}}\\
&{\mbox{or there exists~$i\in\{1,\dots,l\}$ such that~${\mbox{\large{\wedn{c}}}}\,_i\ne0$ and $\alpha_i\not\in{\mathbb{N}}$.}}\end{split}
\end{equation}
Let $\ell\in\mathbb{N}$, $f:\mathbb{R}^N\rightarrow\mathbb{R}$,
with $f\in C^{\ell}\big(\overline{B_1^N}\big)$. Given $\epsilon>0$,
there exist
\begin{equation}\label{EX:eps}\begin{split}&
u=u_\epsilon\in C^\infty\left(B_1^N\right)\cap C\left(\mathbb{R}^N\right),\\
&a=(a_1,\dots,a_l)=(a_{1,\epsilon},\dots,a_{l,\epsilon})
\in(-\infty,0)^l,\\ {\mbox{and }}\quad&
R=R_\epsilon>1\end{split}\end{equation} such that
\begin{equation}\label{MAIN EQ}\left\{\begin{matrix}
\Lambda_a u=0 &\mbox{ in }\;B_1^N, \\
u=0&\mbox{ in }\;\mathbb{R}^N\setminus B_R^N,
\end{matrix}\right.\end{equation}
and
\begin{equation}\label{IAzofm}
\left\|u-f\right\|_{C^{\ell}(B_1^N)}<\epsilon.
\end{equation}
\end{theorem}
We observe that the initial points of the Caputo type operators
in Theorem~\ref{theone} also depend on~$\epsilon$, as detailed in~\eqref{EX:eps}
(but the other parameters, such as the orders of the operators
involved, are fixed arbitrarily).
We also stress that condition~\eqref{NOTVAN} requires that
the operator~$\Lambda_a$ contains at least one nonlocal operator
among its building blocks in~\eqref{ILPOAU-1}, \eqref{ILPOAU-2}
and~\eqref{ILPOAU-3}. This condition cannot be avoided, since
approximation results in the same spirit as Theorem~\ref{theone}
cannot hold for classical differential operators.
Theorem~\ref{theone} comprises, as particular cases,
the nonlocal approximation results established in the recent literature on this
topic. Indeed, when
\begin{eqnarray*}
&&{\mbox{\large{\wedn{a}}}}_1=\dots={\mbox{\large{\wedn{a}}}}_n={\mbox{\large{\wedn{b}}}}_1=\dots={\mbox{\large{\wedn{b}}}}_{M-1}={\mbox{\large{\wedn{c}}}}\,_1=\dots={\mbox{\large{\wedn{c}}}}\,_l=0,\\
&& {\mbox{\large{\wedn{b}}}}_M=1,\\
{\mbox{and }}&& s_M\in(0,1)
\end{eqnarray*}
we see that Theorem~\ref{theone} recovers the main result
in~\cite{MR3626547}, giving the local density of $s$-harmonic functions
vanishing outside a compact set.
Similarly, when
\begin{eqnarray*}&&
{\mbox{\large{\wedn{a}}}}_1=\dots={\mbox{\large{\wedn{a}}}}_n={\mbox{\large{\wedn{b}}}}_1=\dots={\mbox{\large{\wedn{b}}}}_{M}={\mbox{\large{\wedn{c}}}}\,_1=\dots={\mbox{\large{\wedn{c}}}}\,_{l-1}=0,\\
&&{\mbox{\large{\wedn{c}}}}\,_l=1,\\{\mbox{and }}&&\mathcal{D}_a=D_{t,a}^{\alpha},
\quad{\mbox{ for some~$\alpha>0$, $a<0$}}\end{eqnarray*}
we have that
Theorem~\ref{theone} reduces to
the main results in~\cite{MR3716924} for~$\alpha\in(0,1)$
and in~\cite{CDV18} for~$\alpha>1$, in which such an approximation result
was established for Caputo-stationary functions, i.e., functions that annihilate
the Caputo fractional derivative.
Also, when
\begin{eqnarray*}
&&p_1=\dots=p_{n}=1,\\
&&{\mbox{\large{\wedn{c}}}}\,_1=\dots={\mbox{\large{\wedn{c}}}}\,_{l}=0,\\ {\mbox{and }}
&&s_j\in(0,1),\quad{\mbox{for every~$j\in\{1,\dots,M\},$}}\end{eqnarray*}
we have that
Theorem~\ref{theone} recovers the cases taken into account in~\cite{DSV1},
in which approximation results have been established
for the superposition of a local operator
with a superposition of fractional Laplacians of order~$2s_j<2$.
In this sense, Theorem~\ref{theone} not only comprises the existing literature,
but also goes beyond it, since it combines
classical derivatives, fractional Laplacians and Caputo fractional derivatives
all at once.
In addition, it comprises the cases in which the space-fractional Laplacians
taken into account are of order
greater than~$2$.
As a matter of fact, this point is also a novelty
introduced by Theorem~\ref{theone} here
with respect to the previous literature.
Theorem~\ref{theone} was announced in~\cite{CDV18}; we have also
just received the very interesting preprint~\cite{2018arXiv181007648K},
which considers the case of
different, not necessarily fractional, powers of the Laplacian,
using a different and innovative methodology.
The rest of this book is organized as follows.
Chapter~\ref{CH3} focuses on time-fractional operators.
More precisely, in Sections~\ref{s:second}
and~\ref{s:grf0}
we study the boundary behaviour of the eigenfunctions of the
Caputo derivative and of functions with vanishing Caputo derivative, respectively,
detecting their singular
boundary behaviour in terms of
explicit representation formulas. These types of results are also
interesting in themselves and can find further applications.
Chapter~\ref{CH4} is devoted to some properties
of the higher order fractional Laplacian.
More precisely, Section~\ref{s:grf} provides
a representation formula for the solution
of~$(-\Delta)^s u=f$ in a ball, with~$u=0$ outside this ball, for all~$s>0$,
and extends
the Green formula methods introduced
in~\cite{MR3673669} and \cite{AJS3}.
Then, in Section~\ref{SEC:eigef} we
study the boundary behaviour of the first Dirichlet
eigenfunction of higher order fractional equations, and in Section~\ref{sec5}
we give some precise asymptotics
at the boundary for the first Dirichlet eigenfunction\index{Dirichlet!eigenfunction}
of~$(-\Delta)^s$ for any~$s>0$.
Section~\ref{s:hwb} is devoted to
the analysis of the asymptotic behaviour of $s$-harmonic
functions, with a ``spherical bump function'' as exterior Dirichlet datum.
Chapter~\ref{CH5} is devoted to the proof of our main result. To this end,
Section~\ref{s:fourthE} contains an auxiliary statement, namely
Theorem~\ref{theone2}, which will imply
Theorem~\ref{theone}. This is technically convenient,
since the operator~$\Lambda_a$ depends in principle on the initial
point~$a$: this has the disadvantage that if~$\Lambda_a u_a=0$
and~$\Lambda_b u_b=0$ in some domain, the function~$u_a+u_b$
is not, in principle, a solution of any single such operator, unless~$a=b$.
To overcome such a difficulty, in Theorem~\ref{theone2}
we will reduce to the case in which~$a=-\infty$, exploiting
a polynomial extension that we have introduced and used in~\cite{CDV18}.
In Section~\ref{s:fourth0}
we make the main step towards the proof of Theorem~\ref{theone2}.
Here, we prove that functions in the kernel
of nonlocal operators such as the one in~\eqref{1.6BIS}
span with their derivatives a maximal Euclidean space.
This fact is special for the nonlocal case
and its proof is based on the boundary analysis of the fractional operators in both
time and space.
Due to the general form of the operator in~\eqref{1.6BIS},
we have to distinguish here several cases,
taking advantage of either the time-fractional
or the space-fractional components of the operators.
Finally, in Section~\ref{s:fourth} we
complete the proof of
Theorem~\ref{theone2}, using the previous approximation
results and suitable rescaling arguments.
The final appendix provides concrete examples in which our main result can be applied.
\chapter{Boundary behaviour of solutions of time-fractional equations}\label{CH3}
In this chapter, we give precise asymptotics for the boundary behaviour of
solutions of time-fractional equations. The cases of the eigenfunctions
and of the Dirichlet problem with vanishing forcing term
will be studied in detail (the latter will be often referred to
as the time-fractional harmonic case, borrowing a terminology from
elliptic equations, with a slight abuse of notation in our case).
\section{Sharp boundary behaviour\index{boundary behaviour} for the time-fractional eigenfunctions}\label{s:second}
In this section we show that the eigenfunctions of the Caputo fractional derivative
in~\eqref{defcap}
have
an explicit representation via the Mittag-Leffler function\index{function!Mittag-Leffler}.
To this end, given $\alpha$, $\beta\in\mathbb{C}$ with $\Re\left(\alpha\right)>0$,
we recall that the Mittag-Leffler function is defined, for any $z\in\mathbb{C}$, as
\begin{equation}\label{Mittag}
E_{\alpha,\beta}\left(z\right):=\sum_{j=0}^{+\infty} {\frac{z^j}{\Gamma
\left(\alpha j+\beta\right)}}.
\end{equation}
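As an elementary sanity check (a minimal numerical sketch added here, not part of the original argument), one can truncate the series in~\eqref{Mittag} and recover the classical special cases $E_{1,1}(z)=e^z$ and $E_{2,1}(z^2)=\cosh z$; the cutoff `terms` is an arbitrary choice for illustration.

```python
import math

def mittag_leffler(alpha, beta, z, terms=60):
    # Truncated series for E_{alpha,beta}(z); 'terms' is an illustrative cutoff.
    return sum(z**j / math.gamma(alpha * j + beta) for j in range(terms))

# Classical special cases: E_{1,1}(z) = e^z and E_{2,1}(z^2) = cosh(z).
print(abs(mittag_leffler(1, 1, 2.0) - math.exp(2.0)) < 1e-9)
print(abs(mittag_leffler(2, 1, 4.0) - math.cosh(2.0)) < 1e-9)
```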
The Mittag-Leffler function plays an important role in equations
driven by the Caputo derivatives, replacing the exponential function
for classical differential equations, as given by the following well-established result
(see \cite{MR3244285} and the references therein):
\begin{lemma}\label{lemma1}
Let~$\alpha\in(0,1]$,
$\lambda\in{\mathbb{R}}$, and
$a\in\mathbb{R}\cup\left\{-\infty\right\}$.
Then, the unique solution of the boundary value problem
\begin{equation*}\left\{
\begin{matrix}
D_{t,a}^{\alpha}u(t)=\lambda\, u(t) &
\mbox{ for any }t\in (a,+\infty),\\
u(a)=1 &
\end{matrix}\right.
\end{equation*}
is given by $E_{\alpha,1}\left(\lambda \left(t-a\right)^\alpha\right)$.
\end{lemma}
Lemma~\ref{lemma1} can actually be generalized\footnote{It is easily seen that
for~$k:=1$ Lemma~\ref{MittagLEMMA} boils down
to Lemma~\ref{lemma1}.} to any
fractional order of differentiation~$\alpha$:
\begin{lemma}\label{MittagLEMMA}
Let $\alpha\in(0,+\infty)$, with~$\alpha\in(k-1,k]$ and~$k\in\mathbb{N}$,
$a\in\mathbb{R}\cup\left\{-\infty\right\}$, and~$\lambda\in{\mathbb{R}}$. Then,
the unique
continuous solution of the boundary value problem
\begin{equation}\label{CHE:0}\left\{
\begin{matrix}
D_{t,a}^{\alpha}u(t)=\lambda \,u(t) &
\mbox{ for any }t\in (a,+\infty),\\
u(a)=1 ,\\
\partial^m_t u(a)=0&\mbox{ for any }
m\in\{1,\dots,k-1\}
\end{matrix}\right.
\end{equation}
is given by $u\left(t\right)=E_{\alpha,1}\left(\lambda \left(t-a\right)^\alpha\right)$.
\begin{proof}
For the sake of simplicity we take $a=0$.
Also, the case in which~$\alpha\in{\mathbb{N}}$ can be checked with a direct computation,
so we focus on the case~$\alpha\in(k-1,k)$, with~$k\in{\mathbb{N}}$.
We
let $u\left(t\right):=E_{\alpha,1}\left(\lambda t^\alpha\right)$.
It is straightforward to see that~$ u(t)=1+{\mathcal{O}}(t^\alpha)$ and therefore, since~$\alpha>k-1$,
\begin{equation}\label{CHE:1}
u(0)=1 \qquad{\mbox{and}}\qquad
\partial^m_t u(0)=0\;\mbox{ for any }\;
m\in\{1,\dots,k-1\}.
\end{equation}
We also claim that
\begin{equation}\label{CHE:2}
D_{t,a}^{\alpha}u(t)=\lambda \,u(t) \;
\mbox{ for any }\;t\in (0,+\infty).
\end{equation}
To check this,
we recall~\eqref{defcap} and~\eqref{Mittag} (with~$\beta:=1$),
and we have that
\begin{eqnarray*}&&
D_{t,a}^{\alpha} u\left(t\right) \\&= &
\frac{1}{\Gamma\left(k-\alpha\right)}\int_0^t {\frac{u^{(k)}\left(\tau\right)}{
\left(t-\tau\right)^{\alpha-k+1}}\, d\tau} \\
&=& \frac{1}{\Gamma\left(k-\alpha\right)}
\int_0^t {\left(\sum_{j=1}^{+\infty} {\lambda^j\frac{\alpha j\left(\alpha j
-1\right)\ldots\left(\alpha j-k+1\right)}{\Gamma\left(\alpha j
+1\right)} \tau^{\alpha j-k}}\right)\frac{d\tau}{\left(t-\tau\right)^{\alpha-k+1}}} \\
&=& \sum_{j=1}^{+\infty} {\lambda^j\,\frac{\alpha j\left(\alpha j-1\right)
\ldots\left(\alpha j-k+1\right)}{\Gamma\left(k-\alpha\right)
\Gamma\left(\alpha j+1\right)}}
\int_0^t {\tau^{\alpha j-k}\left(t-\tau\right)^{k-\alpha-1} \,d\tau}.
\end{eqnarray*}
Hence, using the change of variable $\tau=t\sigma$, we obtain that
\begin{equation}\label{H:1}
D_{t,a}^{\alpha} u\left(t\right) =
\sum_{j=1}^{+\infty} {\lambda^j\,
\frac{\alpha j\left(\alpha j-1\right)\ldots\left(\alpha j-k+1\right)}{
\Gamma\left(k-\alpha\right)\Gamma\left(\alpha j+1\right)}}
t^{\alpha j-\alpha}\int_0^1 {\sigma^{\alpha j-k}\left(1-\sigma\right)^{k-\alpha-1}
\, d\sigma}.\end{equation}
On the other hand, from the basic properties of the Beta function, it is known
that if~$\Re(z)$, $\Re(w)>0$, then
\begin{equation}\label{H:2} \int_0^1 {\sigma^{z-1}\left(1-\sigma\right)^{w-1}\, d\sigma}
=\frac{\Gamma\left(z\right)\Gamma\left(w\right)}{\Gamma\left(z+w\right)}.
\end{equation}
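As a quick numerical illustration of~\eqref{H:2} (an added sketch with sample values, not part of the original proof), a midpoint quadrature for $z=2$, $w=3$ matches $\Gamma(2)\Gamma(3)/\Gamma(5)=1/12$:

```python
import math

# Midpoint-rule check of the Beta integral identity for z = 2, w = 3.
z, w, N = 2, 3, 100000
h = 1.0 / N
integral = h * sum(((i + 0.5) * h) ** (z - 1) * (1 - (i + 0.5) * h) ** (w - 1)
                   for i in range(N))
exact = math.gamma(z) * math.gamma(w) / math.gamma(z + w)  # = 1/12
print(abs(integral - exact) < 1e-8)
```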
In particular, taking~$z:=\alpha j-k+1\in(\alpha-k+1,+\infty)\subseteq(0,+\infty)$
and~$w:=k-\alpha\in(0,+\infty)$, and substituting~\eqref{H:2} into~\eqref{H:1},
we conclude that
\begin{equation}\label{H:3}\begin{split}
D_{t,a}^{\alpha} u\left(t\right)=
& \sum_{j=1}^{+\infty} {\lambda^j\frac{\alpha j\left(\alpha j-1\right)\ldots\left(\alpha j-k+1\right)}{\Gamma\left(k-\alpha\right)\Gamma\left(\alpha j+1\right)}}\frac{\Gamma\left(\alpha j-k+1\right)\Gamma\left(k-\alpha\right)}{\Gamma\left(\alpha j-\alpha+1\right)}\,t^{\alpha j-\alpha} \\
=& \sum_{j=1}^{+\infty} {\lambda^j\frac{\alpha j\left(\alpha j-1\right)\ldots\left(\alpha j-k+1\right)}{\Gamma\left(\alpha j+1\right)}}\frac{\Gamma\left(\alpha j-k+1\right)}{\Gamma\left(\alpha j-\alpha+1\right)}\,t^{\alpha j-\alpha}.
\end{split}\end{equation}
Now we use the fact that $z\Gamma\left(z\right)=\Gamma\left(z+1\right)$
for any $z\in\mathbb{C}$ with $\Re\left(z\right)>-1$, so, we have
\begin{equation*}
\alpha j\left(\alpha j-1\right)\ldots\left(\alpha j-k+1\right)\Gamma\left(\alpha j-k+1\right)=\Gamma\left(\alpha j+1\right).
\end{equation*}
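The telescoping of the Gamma factors can also be spot-checked numerically (an illustrative sketch with sample values, not part of the original proof), here for $\alpha=2.5$ and $k=3$:

```python
import math

# Check alpha*j*(alpha*j - 1)*...*(alpha*j - k + 1) * Gamma(alpha*j - k + 1)
# == Gamma(alpha*j + 1) for a sample alpha in (k-1, k).
alpha, k = 2.5, 3
for j in range(1, 6):
    x = alpha * j
    falling = math.prod(x - m for m in range(k))
    lhs = falling * math.gamma(x - k + 1)
    rhs = math.gamma(x + 1)
    print(abs(lhs - rhs) / rhs < 1e-12)
```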
Plugging this information into~\eqref{H:3}, we thereby find that
\begin{equation*}
D_{t,a}^{\alpha} u\left(t\right)=
\sum_{j=1}^{+\infty} {\frac{\lambda^j}{\Gamma\left(\alpha j-\alpha+1\right)}t^{\alpha j-\alpha}}=\sum_{j=0}^{+\infty} {\frac{\lambda^{j+1}}{\Gamma\left(\alpha j+1\right)}t^{\alpha j}}=\lambda u(t).
\end{equation*}
This proves~\eqref{CHE:2}.
Then, in view of~\eqref{CHE:1} and~\eqref{CHE:2} we obtain that~$u$
is a solution of~\eqref{CHE:0}.
Hence, to complete the proof of the desired result, we have to show
that such a solution is unique. To this end, supposing that we have two
solutions of~\eqref{CHE:0}, we consider their difference~$w$, and
we observe that~$w$ is a solution of
\begin{equation*}\left\{
\begin{matrix}
D_{t,0}^{\alpha}w(t)=\lambda \,w(t) &
\mbox{ for any }t\in (0,+\infty),\\
\partial^m_t w(0)=0&\mbox{ for any }
m\in\{0,\dots,k-1\}.
\end{matrix}\right.
\end{equation*}
By Theorem~4.1 in~\cite{MR3563609}, it follows that~$w$ vanishes
identically, and this proves the desired uniqueness result.
\end{proof}
\end{lemma}
The boundary behaviour of the Mittag-Leffler function for different values
of the fractional parameter~$\alpha$ is depicted in Figure~\ref{FIG1}.
In light of~\eqref{Mittag},
we notice in particular that, near~$z=0$,
$$ E_{\alpha,\beta}\left(z\right)=
\frac{1}{\Gamma\left(\beta\right)}+\frac{z}{\Gamma\left(\alpha +\beta\right)}+O(z^2)$$
and therefore, near~$t=a$,
\begin{equation*}
E_{\alpha,1}\left(\lambda \left(t-a\right)^\alpha\right)
=1+\frac{\lambda \left(t-a\right)^\alpha}{\Gamma\left(\alpha +1\right)}+O\big(\lambda^2\left(t-a\right)^{2\alpha}\big).
\end{equation*}
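The first-order expansion above can be confirmed numerically (an added sketch, with illustrative sample values $\alpha=1/2$, $\lambda=2$): the increment quotient tends to $\lambda/\Gamma(\alpha+1)$ as $t\to a$.

```python
import math

def E_alpha1(alpha, z, terms=60):
    # Truncated Mittag-Leffler series E_{alpha,1}(z).
    return sum(z**j / math.gamma(alpha * j + 1) for j in range(terms))

# (E_{alpha,1}(lam * eps^alpha) - 1) / eps^alpha -> lam / Gamma(alpha+1) as eps -> 0.
alpha, lam = 0.5, 2.0
for eps in (1e-6, 1e-8):
    quotient = (E_alpha1(alpha, lam * eps**alpha) - 1) / eps**alpha
    print(abs(quotient - lam / math.gamma(alpha + 1)) < 1e-2)
```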
\begin{figure}
\caption{\footnotesize\it Behaviour of the Mittag-Leffler
function $E_{\alpha,1}$ for different values of the fractional parameter~$\alpha$.}
\label{FIG1}
\end{figure}
\section{Sharp boundary behaviour\index{boundary behaviour} for the time-fractional harmonic functions}\label{s:grf0}
In this section, we detect the optimal boundary behaviour of time-fractional
harmonic functions and of their derivatives.
The result that we need for our purposes is the following:
\begin{lemma}\label{LF}
Let~$\alpha\in(0,+\infty)\setminus\mathbb{N}$.
There exists a function~$\psi:\mathbb{R}\to\mathbb{R}$ such
that~$\psi\in C^\infty((1,+\infty))$ and
\begin{eqnarray}
\label{LAp1}&&D^\alpha_0\psi(t)=0 \qquad{\mbox{for all }}t\in(1,+\infty),\\
\label{LAp2}{\mbox{and }}&&\lim_{\epsilon\searrow0}
\epsilon^{\ell-\alpha} \partial^\ell\psi(1+\epsilon t)=\kappa_{\alpha,\ell}\, t^{\alpha-\ell},
\qquad{\mbox{for all }}\ell\in\mathbb{N},
\end{eqnarray}
for some~$\kappa_{\alpha,\ell}\in\mathbb{R}\setminus\{0\}$, where~\eqref{LAp2}
is taken in the sense of distributions for~$t\in(0,+\infty)$.
\end{lemma}
\begin{proof} We use Lemma~2.5
in~\cite{CDV18}, according to which
(see in particular formula~(2.16)
in~\cite{CDV18}) the claim in~\eqref{LAp1}
holds true. Furthermore (see formulas~(2.19)
and~(2.20) in~\cite{CDV18}), we can write that, for all~$t>1$,
\begin{equation}\label{VBVBHJSnb}
\psi(t)=-\frac1{\Gamma(\alpha)\Gamma([\alpha]+1-\alpha)}
\iint_{[1,t]\times[0,3/4]}\partial^{[\alpha]+1}\psi_0(\sigma)\,(\tau-\sigma)^{[\alpha]-\alpha}\,
(t-\tau)^{\alpha-1}\,d\tau\,d\sigma,
\end{equation}
for a suitable~$\psi_0\in C^{[\alpha]+1}([0,1])$.
In addition, by Lemma~2.6 in~\cite{CDV18}, we can write that
\begin{equation}\label{0oLAMJA}
\lim_{\epsilon\searrow0}
\epsilon^{ -\alpha} \psi(1+\epsilon)=\kappa,
\end{equation}
for some~$\kappa\ne0$.
Now we set
$$ (0,+\infty)\ni t\mapsto f_\epsilon(t):=
\epsilon^{\ell-\alpha} \partial^\ell\psi(1+\epsilon t).$$
We observe that, for any~$\varphi\in C^\infty_0((0,+\infty))$,
\begin{equation}\label{UHAikAJ678OKA}
\begin{split}& \int_0^{+\infty} f_\epsilon(t)\,\varphi(t)\,dt=
\epsilon^{\ell-\alpha} \int_0^{+\infty}\partial^\ell\psi(1+\epsilon t)\varphi(t)\,dt\\
&=
\epsilon^{-\alpha} \int_0^{+\infty}\frac{d^\ell}{dt^\ell}\big(\psi(1+\epsilon t)\big)\varphi(t)\,dt
=(-1)^\ell\,
\epsilon^{-\alpha} \int_0^{+\infty}\psi(1+\epsilon t)\,\partial^\ell\varphi(t)\,dt.
\end{split}\end{equation}
Also, in view of~\eqref{VBVBHJSnb},
\begin{eqnarray*}&&
\epsilon^{-\alpha}|\psi(1+\epsilon t)|\\
&=&\left|\frac{\epsilon^{-\alpha}}{\Gamma(\alpha)\Gamma([\alpha]+1-\alpha)}
\iint_{[1,1+\epsilon t]\times[0,3/4]}\partial^{[\alpha]+1}\psi_0(\sigma)\,(\tau-\sigma)^{[\alpha]-\alpha}\,
(1+\epsilon t-\tau)^{\alpha-1}\,d\tau\,d\sigma\right|
\\&\le&C\,\epsilon^{-\alpha}\,
\int_{[1,1+\epsilon t]}(1+\epsilon t-\tau)^{\alpha-1}\,d\tau
\\ &=& Ct^\alpha,
\end{eqnarray*}
which is locally bounded in~$t$, where~$C>0$ here above may vary from line to line.
As a consequence, we can pass to the limit in~\eqref{UHAikAJ678OKA}
and obtain that
$$\lim_{\epsilon\searrow0} \int_0^{+\infty} f_\epsilon(t)\,\varphi(t)\,dt
=
(-1)^\ell\, \int_0^{+\infty} \lim_{\epsilon\searrow0}
\epsilon^{-\alpha}
\psi(1+\epsilon t)\,\partial^\ell\varphi(t)\,dt.
$$
This and~\eqref{0oLAMJA} give that
$$\lim_{\epsilon\searrow0} \int_0^{+\infty} f_\epsilon(t)\,\varphi(t)\,dt
=
(-1)^\ell\, \kappa\,\int_0^{+\infty} t^\alpha\,
\partial^\ell\varphi(t)\,dt=\kappa\,\alpha\dots(\alpha-\ell+1)\int_0^{+\infty} t^{\alpha-\ell}\,\varphi(t)\,dt,$$
which establishes~\eqref{LAp2}.
\end{proof}
\chapter{Boundary behaviour of solutions of space-fractional equations}\label{CH4}
In this chapter, we give precise asymptotics for the boundary behaviour of
solutions of space-fractional equations. The cases of the eigenfunctions
and of the Dirichlet problem with vanishing forcing term
will be studied in detail. To this end, we will also
exploit useful representation formulas of the solutions
in terms of suitable Green functions.
\section{Green representation formulas and solution of $(-\Delta)^s u=f$ in $B_1$ with homogeneous Dirichlet datum}
\label{s:grf}
Our goal is to provide some representation results on the solution
of~$(-\Delta)^s u=f$ in a ball, with~$u=0$ outside this ball, for all~$s>0$.
Our approach is an extension of the Green formula methods introduced
in~\cite{MR3673669} and \cite{AJS3}:
differently from the previous literature, we are not assuming here that~$f$
is regular in the whole of the ball, but merely that it is H\"older continuous
near the boundary and sufficiently integrable inside. Given the type of singularity
of the Green function, these assumptions are sufficient to obtain meaningful
representations, which in turn will be useful to deal with
the eigenfunction problem in the subsequent Section~\ref{SEC:eigef}.
\subsection{Solving $(-\Delta)^s u=f$ in $B_1$ for discontinuous~$f$ vanishing near $\partial B_1$}
Now, we want to extend the representation
results of \cite{MR3673669} and \cite{AJS3} to
the case in which the right hand side is not H\"older continuous,
but merely lies in a Lebesgue space, with the additional
property of vanishing near the boundary of the domain.
To this end, having fixed~$s>0$,
we consider the polyharmonic Green\index{function!Green}
function in $B_1\subset{\mathbb{R}}^n$, given,
for every~$x\ne y\in\mathbb{R}^n$, by
\begin{equation}\label{GREEN}
\begin{split}&
\mathcal{G}_s\left(x,y\right):=\frac{k(n,s)}{\left|x-y\right|^{n-2s}}\,
\int_0^{r_0\left(x,y\right)}
\frac{\eta^{s-1}}{\left(\eta+1\right)^{\frac{n}{2}}} \,d\eta,
\\ {\mbox{where }}\quad&r_0\left(x,y\right):=
\frac{\left(1-\left|x\right|^2\right)_+\left(1-\left|y\right|^2\right)_+}{\left|x-y\right|^2}, \\
{\mbox{with }}\quad&
k(n,s):=\frac{\Gamma\left(\frac{n}{2}\right)}{
\pi^{\frac{n}{2}}\,4^s\Gamma^2\left(s\right)}.
\end{split}
\end{equation}
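As an illustration of~\eqref{GREEN} (a numerical sketch based on our own computation, not part of the original text): for $n=1$ and $s=1/2$ one has $k(1,1/2)=1/(2\pi)$ and $|x-y|^{n-2s}=1$, and the substitution $\eta=\sinh^2\theta$ gives the closed form $\mathcal{G}_{1/2}(x,y)=\frac1\pi\log\frac{\sqrt{(1-x^2)(1-y^2)}+1-xy}{|x-y|}$, which a direct quadrature of the $\eta$-integral reproduces:

```python
import math

def r0(x, y):
    return (1 - x*x) * (1 - y*y) / (x - y)**2

def green_quadrature(x, y, N=200000):
    # G_s(x,y) from (GREEN) with n = 1, s = 1/2: k(1,1/2) = 1/(2*pi),
    # |x-y|^{n-2s} = 1, integrand eta^{-1/2} (eta+1)^{-1/2}, midpoint rule.
    R = r0(x, y)
    h = R / N
    integral = h * sum(((i + 0.5) * h) ** -0.5 * ((i + 0.5) * h + 1) ** -0.5
                       for i in range(N))
    return integral / (2 * math.pi)

def green_closed(x, y):
    # Closed form obtained from the substitution eta = sinh^2(theta).
    return math.log((math.sqrt((1 - x*x) * (1 - y*y)) + 1 - x*y) / abs(x - y)) / math.pi

print(abs(green_quadrature(0.3, -0.2) - green_closed(0.3, -0.2)) < 5e-3)
```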
Given~$x\in B_1$, we also set
\begin{equation}\label{GREEN2} d(x):=1-|x|.\end{equation}
In this setting, we have:
\begin{proposition}\label{LONTANO}
Let~$r\in(0,1)$ and~$f\in L^2(B_1)$, with~$f=0$ in~${\mathbb{R}}^n\setminus B_r$. Let
\begin{equation}\label{DEF uG} u(x):=
\begin{cases}
\displaystyle\int_{B_1} \mathcal{G}_s\left(x,y\right)\,f(y)\,dy & {\mbox{ if }}x\in B_1,\\
0&{\mbox{ if }}x\in{\mathbb{R}}^n\setminus B_1.
\end{cases}
\end{equation}
Then:
\begin{equation}
\label{LON1}
u\in L^1(B_1), {\mbox{ and }} \|u\|_{L^1(B_1)}\le C\,\|f\|_{L^1(B_1)},
\end{equation}
\begin{equation}\label{LON2}
{\mbox{for every $R\in(r,1)$, }} \sup_{x\in B_1\setminus B_R}
d^{-s}(x)\,|u(x)|\le C_R\,\|f\|_{L^1(B_1)},\end{equation}
\begin{equation}\label{LON3}
{\mbox{$u$ satisfies }}(-\Delta)^s u=f{\mbox{ in }}B_1 {\mbox{ in the sense of
distributions,}}
\end{equation}
and
\begin{equation}\label{LON4}
u\in W^{2s,2}_{loc}(B_1).
\end{equation}
Here above,
$C>0$ is a constant depending on~$n$, $s$ and~$r$,
$C_R>0$ is a constant depending on~$n$, $s$, $r$ and~$R$
and $C_\rho>0$ is a constant depending on~$n$, $s$, $r$ and~$\rho$.
\end{proposition}
When~$f\in C^\alpha(B_1)$ for some~$\alpha\in(0,1)$,
Proposition~\ref{LONTANO} boils down to the main results of \cite{MR3673669}
and~\cite{AJS3}.
\begin{proof}[Proof of Proposition~\ref{LONTANO}]
We recall the following useful estimate, see Lemma~3.3 in
\cite{AJS3}:
for any~$\epsilon\in\left(0,\,\min\{n,s\}\right)$, and any~$\bar R$, $\bar r>0$,
$$ \frac1{\bar R^{n-2s}}\,\int_0^{\bar r/\bar R^2}\frac{\eta^{s-1}}{
\left(\eta+1\right)^{\frac{n}{2}}} \,d\eta\le
\frac{2}{s}\;\frac{\bar r^{s-(\epsilon/2)}}{\bar R^{n-\epsilon}},$$
and so, by~\eqref{GREEN} and~\eqref{GREEN2},
for every~$x$, $y\in B_1$,
$$ \mathcal{G}_s\left(x,y\right)\le
\frac{C\,d^{s-(\epsilon/2)}(x)\,d^{s-(\epsilon/2)}(y)}{|x-y|^{n-\epsilon}}
$$
for some~$C>0$. Hence, recalling~\eqref{DEF uG},
\begin{eqnarray*}
\int_{B_1} |u(x)|\,dx&\le& \int_{B_1}
\left(\int_{B_1} \mathcal{G}_s\left(x,y\right)\,|f(y)|\,dy\right)\,dx
\\ &\le& C \int_{B_1}
\left(\int_{B_1} \frac{|f(y)|}{|x-y|^{n-\epsilon}}\,dy\right)\,dx\\&=&
C \int_{B_1}
\left(\int_{B_1} \frac{|f(y)|}{|x-y|^{n-\epsilon}}\,dx\right)\,dy\\&=&
C \int_{B_1}|f(y)|\,dy,
\end{eqnarray*}
up to renaming~$C>0$ line after line, and this proves~\eqref{LON1}.
Now, if~$x\in B_1\setminus B_R$ and~$y\in B_r$, with~$0<r<R<1$, we have that
$$|x-y|\ge |x|-|y|\ge R-r$$
and accordingly
$$ r_0\left(x,y\right)\le
\frac{2d(x)}{(R-r)^2},$$
which in turn implies that
$$ \mathcal{G}_s\left(x,y\right)\le\frac{k(n,s)}{\left|x-y\right|^{n-2s}}\,
\int_0^{{2d(x)}/{(R-r)^2}}
\frac{\eta^{s-1}}{\left(\eta+1\right)^{\frac{n}{2}}} \,d\eta
\le {C\,d^s(x)},$$
for some~$C>0$.
As a consequence,
since~$f$ vanishes outside~$B_r$, we see that, for any~$x\in B_1\setminus B_R$,
\begin{eqnarray*}
|u(x)|\le
\int_{B_r} \mathcal{G}_s\left(x,y\right)\,|f(y)|\,dy\le C\,d^s(x)
\int_{B_r} |f(y)|\,dy,
\end{eqnarray*}
which proves~\eqref{LON2}.
Now, we fix~$\hat r\in(r,1)$ and consider a mollification of~$f$,
that we denote by~$f_j\in C^\infty_0(B_{\hat r})$, with~$f_j\to f$
in~$L^2(B_1)$ as~$j\to+\infty$.
We also write~$\mathcal{G}_s * f$ as a short notation for the right hand side
of~\eqref{DEF uG}. Then, by \cite{MR3673669} and \cite{AJS3},
we know that~$u_j:=\mathcal{G}_s * f_j$ is a (locally smooth, hence distributional) solution of~$(-\Delta)^s u_j=f_j$.
Furthermore, if we set~$\tilde u_j:=u_j-u$ and~$\tilde f_j:=f_j-f$
we have that
$$ \tilde u_j=\mathcal{G}_s * (f_j-f)=\mathcal{G}_s * \tilde f_j,$$
and therefore, by~\eqref{LON1},
$$ \|\tilde u_j\|_{L^1(B_1)}\le C\,\|\tilde f_j\|_{L^1(B_1)},$$
which is infinitesimal as~$j\to+\infty$. This says that~$u_j\to u$
in~$L^1(B_1)$ as~$j\to+\infty$, and consequently, for any~$\varphi\in C^\infty_0(B_1)$,
\begin{eqnarray*}
&&\int_{B_1} u(x)\,(-\Delta)^s\varphi(x)\,dx=\lim_{j\to+\infty}
\int_{B_1} u_j(x)\,(-\Delta)^s\varphi(x)\,dx
\\&&\qquad=\lim_{j\to+\infty}
\int_{B_1} f_j(x)\,\varphi(x)\,dx=\int_{B_1} f(x)\,\varphi(x)\,dx,
\end{eqnarray*}
thus completing the proof of~\eqref{LON3}.
Now, to prove~\eqref{LON4}, we can suppose that~$s\in(0,+\infty)\setminus{\mathbb{N}}$,
since the case of integer~$s$ is classical, see e.g. \cite{MR1814364}.
First of all, we claim that
\begin{equation}\label{08}
{\mbox{\eqref{LON4} holds true for every~$s\in(0,1)$.}}
\end{equation}
For this, we first claim that
if~$g\in C^\infty(B_1)$ and~$v$ is a (locally smooth) solution of~$(-\Delta)^s v=g$
in~$B_1$, with~$v=0$ outside~$B_1$, then~$v\in W^{2s,2}_{loc}(B_1)$,
and, for any~$\rho\in(0,1)$,
\begin{equation}\label{BIX}
\|v\|_{W^{2s,2}(B_\rho)}\le C_\rho\,\|g\|_{L^2(B_1)}.
\end{equation}
This claim can be seen as a localization of
Lemma~3.1 of~\cite{MR2863859},
or a quantification of the last claim in Theorem~1.3 of~\cite{MR3641649}.
To prove~\eqref{BIX}, we let~$R_-<R_+\in(\rho,1)$,
and consider~$\eta\in C^\infty_0(B_{R_+})$ with~$\eta=1$ in~$B_{R_-}$.
We let~$v^*:=v\eta$, and we recall formulas~(3.2), (3.3)
and~(A.5) in~\cite{MR3641649}, according to which
\begin{equation*}\begin{split}&
(-\Delta)^s v^*-\eta(-\Delta)^s v=g^* \quad{\mbox{ in }}{\mathbb{R}}^n,\\
{\mbox{with }}\quad&\| g^*\|_{L^2({\mathbb{R}}^n)}\le C\,\|v\|_{W^{s,2}({\mathbb{R}}^n)}
,\end{split}\end{equation*}
for some~$C>0$.
Moreover,
using a notation taken from \cite{MR3641649}
we denote by~$W^{s,2}_0(\overline{B_1})$
the space of functions in~$W^{s,2}({\mathbb{R}}^n)$ vanishing outside~$B_1$
and we consider the dual space~$
W^{-s,2}_0(\overline{B_1})$. We remark that if~$h\in L^2(B_1)$
we can naturally identify~$h$ as an element of~$
W^{-s,2}_0(\overline{B_1})$ by considering the action of~$h$
on any~$\varphi\in W^{s,2}_0(\overline{B_1})$ as defined by
$$ \int_{B_1} h(x)\,\varphi(x)\,dx.$$
With respect to this, we have that
\begin{equation}\label{DUE} \|h\|_{W^{-s,2}_0(\overline{B_1})}=\sup_{{\varphi\in
W^{s,2}_0(\overline{B_1})}\atop{\|\varphi\|_{W^{s,2}_0(\overline{B_1})}=1}}
\int_{B_1}h(x)\,\varphi(x)\,dx\le\|h\|_{L^2(B_1)}.\end{equation}
We notice also that
$$ \|v\|_{W^{s,2}({\mathbb{R}}^n)}\le C\,\| g\|_{W^{-s,2}_0(\overline{B_1})},$$
in light of Proposition~2.1 of~\cite{MR3641649}. This and~\eqref{DUE}
give that
$$ \|v\|_{W^{s,2}({\mathbb{R}}^n)}\le C\,\| g\|_{L^2(B_1)}.$$
Then, by Lemma~3.1 of~\cite{MR2863859}
(see in particular formula~(3.2) there, applied here with~$\lambda:=0$),
we obtain that
\begin{equation}\label{BIX0}
\begin{split}
\|v^*\|_{W^{2s,2}({\mathbb{R}}^n)}\,&\le C\,\|
\eta(-\Delta)^s v+
g^*\|_{L^2({\mathbb{R}}^n)}\\
&\le C\,\big( \| (-\Delta)^s v\|_{L^2(B_{R_+})}+\|g^*\|_{L^2({\mathbb{R}}^n)}\big)
\\&=C\,\big( \| g\|_{L^2(B_{R_+})}+\|g^*\|_{L^2({\mathbb{R}}^n)}\big)\\
&\le C\,\big(\| g\|_{L^2(B_{1})}+
\|v\|_{W^{s,2}({\mathbb{R}}^n)}\big)\\
&\le C\, \| g\|_{L^2(B_{1})}
,\end{split}\end{equation}
up to renaming~$C>0$ step by step.
On the other hand, since~$v^*=v$ in~$B_\rho$,
$$ \|v\|_{W^{2s,2}(B_\rho)}=
\|v^*\|_{W^{2s,2}(B_\rho)}\le \|v^*\|_{W^{2s,2}({\mathbb{R}}^n)}.$$
{F}rom this and~\eqref{BIX0}, we obtain \eqref{BIX}, as desired.
Now, we let~$f_j$, $\tilde f_j$, $u_j$ and~$\tilde u_j$
as above and make use
of~\eqref{BIX} to write
\begin{equation}\label{qwe89:BBA}
\begin{split}
&\|u_j\|_{W^{2s,2}(B_\rho)}\le C_\rho\,\|f_j\|_{L^2(B_1)}
\\ {\mbox{and }}\quad&
\|\tilde u_j\|_{W^{2s,2}(B_\rho)}\le C_\rho\,\|\tilde f_j\|_{L^2(B_1)}.
\end{split}
\end{equation}
As a consequence,
taking the limit as~$j\to+\infty$
we obtain that
$$ \|u\|_{W^{2s,2}(B_\rho)}\le C_\rho\,\|f\|_{L^2(B_1)},$$
that is~\eqref{LON4} in this case, namely the claim in~\eqref{08}.
Now, to prove~\eqref{LON4}, we argue by induction on
the integer part of~$s$. When the integer part of~$s$
is zero, the basis of the induction is warranted by~\eqref{08}.
Then, to perform the inductive step, given~$s\in(0,+\infty)\setminus{\mathbb{N}}$,
we suppose that~\eqref{LON4} holds true for~$s-1$,
namely
\begin{equation}\label{0lL:AN1}
\mathcal{G}_{s-1} * f
\in W^{2s-2,2}_{loc}(B_1).
\end{equation}
Then, following~\cite{AJS3},
it is convenient to introduce the notation
$$ [x,y]:=\sqrt{|x|^2|y|^2-2x\cdot y+1}$$
and consider
the auxiliary kernel given, for every~$x\ne y\in B_1$, by
\begin{equation}
\label{aux}
P_{s-1}(x,y):=\frac{(1-|x|^2)^{s-2}_+(1-|y|^2)^{s-1}_+(1-|x|^2|y|^2)}{[x,y]^n}.
\end{equation}
We point out that if~$x\in B_r$ with~$r\in(0,1)$,
then
\begin{equation}
\label{klop}
[x,y]^2\ge|x|^2|y|^2-2|x|\,| y|+1=(1-|x|\,|y|)^2\ge(1-r)^2>0.
\end{equation}
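The lower bound in~\eqref{klop} only uses $x\cdot y\le|x|\,|y|$; it can be spot-checked on random points (an added numerical sketch in dimension $n=2$):

```python
import math, random

def bracket_sq(x, y):
    # [x,y]^2 = |x|^2 |y|^2 - 2 x.y + 1 for x, y in R^2.
    dot = x[0]*y[0] + x[1]*y[1]
    nx2 = x[0]**2 + x[1]**2
    ny2 = y[0]**2 + y[1]**2
    return nx2 * ny2 - 2 * dot + 1

def check_klop(samples=1000, seed=0):
    # Verify [x,y]^2 >= (1 - |x||y|)^2 on random points of the unit ball.
    rng = random.Random(seed)
    for _ in range(samples):
        x = [rng.uniform(-0.7, 0.7) for _ in range(2)]
        y = [rng.uniform(-0.7, 0.7) for _ in range(2)]
        n = math.sqrt((x[0]**2 + x[1]**2) * (y[0]**2 + y[1]**2))
        if bracket_sq(x, y) < (1 - n)**2 - 1e-12:
            return False
    return True

print(check_klop())
```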
Consequently, since~$f$ is supported in~$B_r$,
\begin{equation}\label{0lL:AN2}
P_{s-1}*f\in C^\infty({\mathbb{R}}^n).\end{equation}
Then,
we recall that
\begin{equation}\label{0lL:AN3}
-\Delta_x\mathcal{G}_s(x,y)=\mathcal{G}_{s-1}(x,y)-CP_{s-1}(x,y),\end{equation}
for some~$C\in{\mathbb{R}}$, see Lemma~3.1 in \cite{AJS3}.
As a consequence, in view of~\eqref{0lL:AN1}, \eqref{0lL:AN2}, \eqref{0lL:AN3},
we conclude that
$$ -\Delta (\mathcal{G}_s*f)= (-\Delta_x\mathcal{G}_s)*f
\in W^{2s-2,2}_{loc}(B_1).$$
This and the classical elliptic regularity theory (see e.g. \cite{MR1814364})
give that~$\mathcal{G}_s*f\in W^{2s,2}_{loc}(B_1)$, which
completes the inductive proof and establishes~\eqref{LON4}.
\end{proof}
\subsection{Solving $(-\Delta)^s u=f$ in $B_1$ for~$f$ H\"older continuous near $\partial B_1$}
Our goal is now
to extend the representation
results of \cite{MR3673669} and \cite{AJS3} to
the case in which the right hand side is not H\"older continuous
in the whole of the ball,
but merely in a neighborhood of the boundary.
This result is obtained here by superposing the
results in \cite{MR3673669} and \cite{AJS3}
with Proposition~\ref{LONTANO} here, taking advantage of the linear
structure of the problem.
\begin{proposition} \label{LEJOS}
Let~$f\in L^2(B_1)$.
Let~$\alpha$, $r\in(0,1)$ and assume that
\begin{equation}\label{CHlaIA}
f\in C^\alpha(B_1\setminus B_r).\end{equation}
In the notation of~\eqref{GREEN}, let
\begin{equation} \label{0olwsKA}
u(x):=
\begin{cases}
\displaystyle\int_{B_1} \mathcal{G}_s\left(x,y\right)\,f(y)\,dy & {\mbox{ if }}x\in B_1,\\
0&{\mbox{ if }}x\in{\mathbb{R}}^n\setminus B_1.
\end{cases}
\end{equation}
Then, in the notation of~\eqref{GREEN2}, we have that:
\begin{equation}\label{VIC2}
{\mbox{for every $R\in(r,1)$, }} \sup_{x\in B_1\setminus B_R}
d^{-s}(x)\,|u(x)|\le C_R\,\big(\|f\|_{L^1(B_1)}+
\|f\|_{L^\infty(B_1\setminus B_r)}\big)
,\end{equation}
\begin{equation}\label{VIC3}
{\mbox{$u$ satisfies }}(-\Delta)^s u=f{\mbox{ in }}B_1 {\mbox{ in the sense of
distributions,}} \end{equation}
and
\begin{equation}\label{VIC4}
u\in W^{2s,2}_{loc}(B_1).
\end{equation}
Here above,
$C>0$ is a constant depending on~$n$, $s$ and~$r$,
$C_R>0$ is a constant depending on~$n$, $s$, $r$ and~$R$
and $C_\rho>0$ is a constant depending on~$n$, $s$, $r$ and~$\rho$.
\end{proposition}
\begin{proof} We take~$r_1\in(r,1)$ and~$\eta\in C^\infty_0(B_{r_1})$
with~$\eta=1$ in~$B_r$.
Let also
$$f_1:=f\eta\qquad{\mbox{and}}\qquad f_2:=f-f_1.$$
We observe that~$f_1\in L^2(B_1)$, and that~$f_1=0$ outside~$B_{r_1}$.
Therefore, we are in the position of applying Proposition~\ref{LONTANO}
and find a function~$u_1$ (obtained by convolving~$\mathcal{G}_s$
against~$f_1$) such that
\begin{eqnarray}
&& \label{XLON2}
{\mbox{for every $R\in(r_1,1)$, }} \sup_{x\in B_1\setminus B_R}
d^{-s}(x)\,|u_1(x)|\le C_R\,\|f_1\|_{L^1(B_1)},\\
\label{XLON3}
&& {\mbox{$u_1$ satisfies }}(-\Delta)^s u_1=f_1{\mbox{ in }}B_1 {\mbox{ in the sense of
distributions,}}
\\
\label{XLON4}{\mbox{and }}
&& u_1\in W^{2s,2}_{loc}(B_1).
\end{eqnarray}
On the other hand, we have that~$f_2=f(1-\eta)$
vanishes outside~$B_1\setminus B_r$
and it is H\"older continuous. Accordingly,
we can apply Theorem~1.1 of~\cite{AJS3}
and find a function~$u_2$ (obtained by convolving~$\mathcal{G}_s$
against~$f_2$) such that
\begin{eqnarray}
&& \label{YLON2}
{\mbox{for every $R\in(r_1,1)$, }} \sup_{x\in B_1\setminus B_R}
d^{-s}(x)\,|u_2(x)|\le C_R\,\|f_2\|_{L^\infty(B_1)},\\
\label{YLON3}
&& {\mbox{$u_2$ satisfies }}(-\Delta)^s u_2=f_2{\mbox{ in }}B_1 {\mbox{ in the sense of
distributions,}}
\\
\label{YLON4}
{\mbox{and }}&& u_2\in C^{2s+\alpha}_{loc}(B_1).
\end{eqnarray}
Then, $f=f_1+f_2$, and thus,
in view of~\eqref{0olwsKA}, we have that~$
u=u_1+u_2$. Also, $u$ satisfies~\eqref{VIC2},
thanks to~\eqref{XLON2} and~\eqref{YLON2},
\eqref{VIC3},
thanks to~\eqref{XLON3} and~\eqref{YLON3}, and~\eqref{VIC4},
thanks to~\eqref{XLON4} and~\eqref{YLON4}.
\end{proof}
\section{Existence and regularity for the first eigenfunction of the higher order fractional Laplacian}\label{SEC:eigef}
The goal of these pages is
to study the boundary behaviour of the first Dirichlet
eigenfunction of higher order fractional equations.
For this, writing~$s=m+\sigma$, with~$m\in{\mathbb{N}}$ and~$\sigma\in(0,1)$,
we define the energy space
\begin{equation}
\label{energy}
H_0^s\left(B_1\right):=\left\{u\in H^s\left(\mathbb{R}^n\right);\; u=0 \;\text{in}\; \mathbb{R}^n\setminus B_1\right\},
\end{equation}
endowed with the Hilbert norm
\begin{equation}
\label{energynorm} \left\|u\right\|_{H_0^s\left(B_1\right)}:=
\left(\sum_{\left|\alpha\right|\leq m} {\left\|\partial^\alpha u
\right\|^2_{L^2\left(B_1\right)}}+\mathcal{E}_s
\left(u,u\right)\right)^{\frac{1}{2}},\end{equation}
where
\begin{equation}\label{ENstut}
\mathcal{E}_s\left(u,v\right)=\int_{\mathbb{R}^n} {
\left|\xi\right|^{2s}\mathcal{F}u\left(\xi\right)\overline{\mathcal{F}v\left(\xi\right)} \,
d\xi},\end{equation}
where~$\mathcal{F}$ denotes the Fourier transform and~$\overline{z}$ denotes
the complex conjugate of a complex number~$z$.
In this setting, we consider~$u\in H^s_0(B_1)$ to be
such that
\begin{equation}\label{dirfun}\begin{cases}
\left(-\Delta\right)^s u=\lambda_1 u &\quad\text{ in }B_1, \\
u=0 &\quad\text{ in } \mathbb{R}^n\setminus\overline{B_1},
\end{cases}\end{equation}
for every~$s>0$, with~$\lambda_1$ as small as possible.
The existence of solutions of \eqref{dirfun} is ensured
via variational techniques, as stated in the following result:
\begin{lemma}\label{VARIA}
The functional~$\mathcal{E}_s\left(u,u\right)$
attains its minimum~$\lambda_1$ on the functions in~$H^s_0(B_1)$
with unit norm in~$L^2(B_1)$.
The minimizer satisfies~\eqref{dirfun}.
In addition, $\lambda_1>0$.
\begin{proof} The proof is based on the direct method
in the calculus of variations. We provide some details for completeness.
Let~$s=m+\sigma$,
with~$m\in\mathbb{N}$ and~$\sigma\in(0,1)$.
Let us consider a minimizing sequence~$u_j\in H^s_0(B_1)\subseteq H^m({\mathbb{R}}^n)$
such that~$\|u_j\|_{L^2(B_1)}=1$ and
$$ \lim_{j\to+\infty}\mathcal{E}_s\left(u_j,u_j\right)=\inf_{{u\in H^s_0(B_1)}\atop{
\|u\|_{L^2(B_1)}=1}}\mathcal{E}_s\left(u,u\right).$$
In particular, we have that~$u_j$ is bounded in~$H^s_0(B_1)$ uniformly in~$j$,
so, up to a subsequence, it converges to some~$u_\star$
weakly in~$H^s_0(B_1)$ and strongly in~$L^2(B_1)$
as~$j\to+\infty$.
The strong convergence in~$L^2(B_1)$ gives that~$\|u_\star\|_{L^2(B_1)}=1$,
and the weak lower semicontinuity of the seminorm $\mathcal{E}_s\left(\cdot,\cdot\right)$
then implies that~$u_\star$ is the desired minimizer.
Then, given~$\phi\in C^\infty_0(B_1)$, the minimality of~$u_\star$ gives that
$$ \mathcal{E}_s\left(u_\star+\epsilon\phi,u_\star+\epsilon\phi\right)
\ge\lambda_1\,\left\|u_\star+\epsilon\phi\right\|_{L^2(B_1)}^2,$$
for every~$\epsilon\in{\mathbb{R}}$, with equality at~$\epsilon=0$.
Hence the derivative in~$\epsilon$ vanishes at~$\epsilon=0$, and this gives that~\eqref{dirfun} is
satisfied in the sense of distributions,
and also in the classical sense, by elliptic regularity theory.
Finally, we have that~$\mathcal{E}_s\left(u_\star,u_\star\right)>0$,
since~$u_\star$ (and thus~${\mathcal{F}}u_\star$) does not vanish identically.
Consequently,
$$ \lambda_1=\frac{ \mathcal{E}_s
\left(u_\star,u_\star\right)}{\|u_\star\|_{L^2(B_1)}^2}=
\mathcal{E}_s\left(u_\star,u_\star\right)>0,$$
as desired.
\end{proof}
\end{lemma}
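For the classical case $s=1$ in dimension one, the minimum in Lemma~\ref{VARIA} is attained on $B_1=(-1,1)$ by $u(x)=\cos(\pi x/2)$ with $\lambda_1=\pi^2/4$; this known value can be recovered with a standard finite-difference discretization, as in the following purely illustrative sketch.

```python
import numpy as np
from math import pi

# Second-order finite differences for -u'' on (-1, 1) with u(±1) = 0:
# the smallest eigenvalue of the discrete operator approaches π²/4.
N = 400
h = 2.0 / (N + 1)                                   # interior grid spacing
A = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h ** 2
lam1 = float(np.linalg.eigvalsh(A)[0])              # smallest eigenvalue
```

The discretization error is $O(h^2)$, so with $400$ interior points the computed value agrees with $\pi^2/4$ to roughly five digits; it is, in particular, strictly positive, consistently with Lemma~\ref{VARIA}.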
Our goal is now to apply Proposition~\ref{LEJOS}
to solutions of~\eqref{dirfun}, taking~$f:=\lambda u$. To this end,
we have to check that condition~\eqref{CHlaIA}
is satisfied, namely that solutions of~\eqref{dirfun} are
H\"older continuous in $B_1\setminus B_r$, for any $0<r<1$.
To this aim, we prove that polyharmonic operators of any order~$s>0$
always admit
a first eigenfunction in the ball which does not change sign
and which is radially symmetric. For this, we start discussing
the sign property:
\begin{lemma}\label{ikAHHPKAK}
There exists a nontrivial solution of~\eqref{dirfun} that does not change sign.
\begin{proof} We exploit a method explained in detail in
Section~3.1 of~\cite{MR2667016}. As a matter of fact,
when~$s\in{\mathbb{N}}$, the desired result is exactly Theorem~3.7
in~\cite{MR2667016}.
Let~$u$ be as in
Lemma~\ref{VARIA}.
If either~$u\ge0$ or~$u\le0$, then the desired result is proved.
Hence, we argue by contradiction,
assuming that~$u$ attains strictly positive and strictly negative values.
We define
$$ {\mathcal{K}}:=\{ w:{\mathbb{R}}^n\to{\mathbb{R}} {\mbox{ s.t.
$\mathcal{E}_s\left(w,w\right)<+\infty$, and
$w\ge0$ in $B_1$}} \}.$$
Also, we set
$$ {\mathcal{K}}^\star
:=\{ w\in H^s_0(B_1) {\mbox{ s.t. }}
\mathcal{E}_s\left(w,v\right)\le0
{\mbox{ for all }}v\in {\mathcal{K}}\}.$$
We claim that
\begin{equation}\label{kstar po}
{\mbox{if~$w\in{\mathcal{K}}^\star$, then $w\le0$}}.
\end{equation}
To prove this, we recall the notation in~\eqref{GREEN},
take~$\phi\in C^\infty_0(B_1)\cap{\mathcal{K}}$,
and let
$$ v_\phi(x):=
\begin{cases}
\displaystyle\int_{B_1} \mathcal{G}_s\left(x,y\right)\,\phi(y)\,dy & {\mbox{ if }}x\in B_1,\\
0&{\mbox{ if }}x\in{\mathbb{R}}^n\setminus B_1.
\end{cases}
$$
Then~$v_\phi\in{\mathcal{K}}$ and it satisfies~$
\left(-\Delta\right)^s v_\phi=\phi$ in~$B_1$, thanks
to~\cite{MR3673669} or~\cite{AJS3}.
Consequently, we can write, for every~$x\in B_1$,
$$ \phi(x)={\mathcal{F}}^{-1}(|\xi|^{2s}{\mathcal{F}}v_\phi)(x).$$
Hence, for every~$w\in{\mathcal{K}}^\star$,
\begin{eqnarray*}
0&\ge&\mathcal{E}_s\left(w,v_\phi\right)\\
&=&
\int_{\mathbb{R}^n}
\left|\xi\right|^{2s}\mathcal{F}v_\phi\left(\xi\right)
\overline{\mathcal{F}w\left(\xi\right)} \,d\xi
\\&=&
\int_{\mathbb{R}^n} {\mathcal{F}}^{-1}(
\left|\xi\right|^{2s}\mathcal{F}v_\phi)(x)\,
w\left(x\right) \,dx\\&=&
\int_{B_1} {\mathcal{F}}^{-1}(
\left|\xi\right|^{2s}\mathcal{F}v_\phi)(x)\,
w\left(x\right) \,dx\\&=&
\int_{B_1}\phi(x)\,w\left(x\right) \,dx.
\end{eqnarray*}
Since~$\phi$ is arbitrary and nonnegative, this gives that~$w\le0$,
and this establishes~\eqref{kstar po}.
Furthermore, by Theorem~3.4 in~\cite{MR2667016}, we can write
$$ u=u_1+u_2,$$
with~$u_1\in {\mathcal{K}}\setminus\{0\}$,
$u_2\in{\mathcal{K}}^\star\setminus\{0\}$, and~$\mathcal{E}_s\left(u_1,u_2\right)=0$.
We observe that
$$ \mathcal{E}_s\left(u_1-u_2,u_1-u_2\right)=
\mathcal{E}_s\left(u_1,u_1\right)+\mathcal{E}_s\left(u_2,u_2\right)
-2\mathcal{E}_s\left(u_1,u_2\right)=\mathcal{E}_s\left(u_1,u_1\right)+\mathcal{E}_s\left(u_2,u_2\right).$$
In the same way,
$$ \mathcal{E}_s\left(u,u\right)=
\mathcal{E}_s\left(u_1+u_2,u_1+u_2\right)=
\mathcal{E}_s\left(u_1,u_1\right)+\mathcal{E}_s\left(u_2,u_2\right),$$
and therefore
\begin{equation}\label{7yhbAxcvTFV}
\mathcal{E}_s\left(u_1-u_2,u_1-u_2\right)=\mathcal{E}_s\left(u,u\right).
\end{equation}
On the other hand,
\begin{eqnarray*}
\| u_1-u_2\|_{L^2(B_1)}^2-\| u\|_{L^2(B_1)}^2&=&\| u_1-u_2\|_{L^2(B_1)}^2-\| u_1+u_2\|_{L^2(B_1)}^2\\
&=& -4\int_{B_1} u_1(x)\,u_2(x)\,dx.
\end{eqnarray*}
As a consequence, since~$u_2\le0$ in view of~\eqref{kstar po},
we conclude that
$$ \| u_1-u_2\|_{L^2(B_1)}^2-\| u\|_{L^2(B_1)}^2\ge0.$$
This and~\eqref{7yhbAxcvTFV} say that the function~$u_1-u_2$
is also a minimizer for the variational problem in Lemma~\ref{VARIA}.
Since now~$u_1-u_2\ge0$, the desired result follows.
\end{proof}
\end{lemma}
Now, we define the spherical mean\index{spherical mean} of a function~$v$ by
$$ v_\sharp(x):=
\frac{1}{\left|\mathbb{S}^{n-1}\right|}
\int_{\mathbb{S}^{n-1}} v({\mathcal{R}}_\omega\,x)
\,d{\mathcal{H}}^{n-1}(\omega)
$$
where~${\mathcal{R}}_\omega$ is the rotation corresponding to the solid angle~$\omega
\in{\mathbb{S}^{n-1}}$, ${\mathcal{H}}^{n-1}$ is the standard
Hausdorff measure, and~$\left|\mathbb{S}^{n-1}\right|=
{\mathcal{H}}^{n-1}(\mathbb{S}^{n-1})$.
Notice that~$v_\sharp(x)=v_\sharp({\mathcal{R}}_\varpi x)$
for any~$\varpi \in\mathbb{S}^{n-1}$, that is, $v_\sharp$
is rotationally invariant.
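In dimension two, where rotations are parametrized by a single angle, the spherical mean can be sketched numerically as follows (an illustration only, with a test function chosen by hand): for $v(x,y)=x^2$ the spherical mean is $(x^2+y^2)/2$, which is manifestly rotationally invariant.

```python
import numpy as np

def spherical_mean_2d(v, x, n_theta=4096):
    # average v over all planar rotations of the point x
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    c, s = np.cos(theta), np.sin(theta)
    return float(np.mean(v(c * x[0] - s * x[1], s * x[0] + c * x[1])))

v = lambda a, b: a ** 2        # test function; its spherical mean is (a² + b²)/2
m1 = spherical_mean_2d(v, (1.0, 0.0))
m2 = spherical_mean_2d(v, (0.0, 1.0))   # rotational invariance: m1 == m2
```

Both values equal $1/2$, the spherical mean of $x^2$ on the unit circle.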
Then, we have:
\begin{lemma}
\label{lapsfercom}
Any positive power of the Laplacian commutes
with the spherical mean, that is
$$ \big( (-\Delta)^s v\big)_\sharp(x)=(-\Delta)^s v_\sharp(x).$$
\begin{proof} By density,
it suffices to prove the claim for a function~$v$ in the
Schwartz space of smooth, rapidly decreasing functions.
In this setting, writing~${\mathcal{R}}_\omega^T$
to denote the transpose of the rotation~${\mathcal{R}}_\omega$,
and changing variable~$\eta:={\mathcal{R}}_\omega^T\,\xi$,
we have that
\begin{equation}\label{RFA2}
\begin{split} (-\Delta)^s v({\mathcal{R}}_\omega\,x)\,&=
\int_{{\mathbb{R}}^n} |\xi|^{2s} {\mathcal{F}}v(\xi)\,
e^{2\pi i {\mathcal{R}}_\omega\,x\cdot\xi}\,d\xi\\ &=
\int_{{\mathbb{R}}^n} |\xi|^{2s} {\mathcal{F}}v(\xi)\,
e^{2\pi i x\cdot{\mathcal{R}}_\omega^T\,\xi}\,d\xi\\
&=
\int_{{\mathbb{R}}^n} |\eta|^{2s}
{\mathcal{F}}v({\mathcal{R}}_\omega\,\eta)\,
e^{2\pi i x\cdot\eta}\,d\eta.
\end{split}\end{equation}
On the other hand, using the substitution~$y:={\mathcal{R}}_\omega^T\,x$,
\begin{eqnarray*}
{\mathcal{F}}v({\mathcal{R}}_\omega\,\eta)
&=&\int_{{\mathbb{R}}^n} v(x)\,
e^{-2\pi i x\cdot{\mathcal{R}}_\omega\,\eta}\,dx\\
&=& \int_{{\mathbb{R}}^n} v(x)\,
e^{-2\pi i {\mathcal{R}}_\omega^T\,x\cdot\eta}\,dx\\
&=& \int_{{\mathbb{R}}^n} v({\mathcal{R}}_\omega\,y)\,
e^{-2\pi i y\cdot\eta}\,dy,
\end{eqnarray*}
and therefore, recalling~\eqref{RFA2},
$$ (-\Delta)^s v({\mathcal{R}}_\omega\,x)=
\iint_{{\mathbb{R}}^n\times{\mathbb{R}}^n} |\eta|^{2s}
v({\mathcal{R}}_\omega\,y)\,
e^{2\pi i (x-y)\cdot\eta}\,dy\,d\eta.$$
As a consequence,
\begin{eqnarray*}
\big( (-\Delta)^s v\big)_\sharp(x)&=&
\frac{1}{\left|\mathbb{S}^{n-1}\right|}
\int_{\mathbb{S}^{n-1}} (-\Delta)^s v({\mathcal{R}}_\omega\,x)
\,d{\mathcal{H}}^{n-1}(\omega)
\\ &=&
\frac{1}{\left|\mathbb{S}^{n-1}\right|}
\iiint_{\mathbb{S}^{n-1}\times{\mathbb{R}}^n\times{\mathbb{R}}^n}
|\eta|^{2s}
v({\mathcal{R}}_\omega\,y)\,
e^{2\pi i (x-y)\cdot\eta}
\,d{\mathcal{H}}^{n-1}(\omega)\,dy\,d\eta\\
&=& \iint_{{\mathbb{R}}^n\times{\mathbb{R}}^n}
|\eta|^{2s}
v_\sharp(y)\,
e^{2\pi i (x-y)\cdot\eta}\,dy\,d\eta\\&=&
\int_{{\mathbb{R}}^n}
|\eta|^{2s} {\mathcal{F}} (v_\sharp)(\eta)\,
e^{2\pi i x\cdot\eta}\,d\eta\\&=&(-\Delta)^s v_\sharp(x),
\end{eqnarray*}
as desired.
\end{proof}
\end{lemma}
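The commutation property of Lemma~\ref{lapsfercom} can be verified symbolically in the classical case $s=1$ in dimension two; the following sketch (illustrative only) checks it on the polynomial $v=x^4$.

```python
import sympy as sp

x, y, th = sp.symbols('x y theta', real=True)

def sph_mean(expr):
    # planar spherical mean: average expr over the rotation (x, y) -> R_θ(x, y)
    rot = expr.subs({x: x * sp.cos(th) - y * sp.sin(th),
                     y: x * sp.sin(th) + y * sp.cos(th)}, simultaneous=True)
    return sp.integrate(rot, (th, 0, 2 * sp.pi)) / (2 * sp.pi)

def minus_lap(expr):
    return -(sp.diff(expr, x, 2) + sp.diff(expr, y, 2))

v = x ** 4
lhs = sp.expand(sph_mean(minus_lap(v)))   # ((-Δ)v)_♯ = -6x² - 6y²
rhs = sp.expand(minus_lap(sph_mean(v)))   # (-Δ)(v_♯)
```

Here $v_\sharp=\tfrac38(x^2+y^2)^2$, and both sides reduce to $-6(x^2+y^2)$, in agreement with the lemma.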
It is also useful to observe that the spherical mean is compatible with the energy bounds.
In particular we have the following observation:
\begin{lemma}
We have that
\begin{equation}\label{ENblaoe1}
\mathcal{E}_s\left(v_\sharp,v_\sharp\right)\le
\mathcal{E}_s\left(v,v\right).\end{equation}
Moreover,
\begin{equation}\label{ENblaoe2}
{\mbox{if~$v\in H^s_0(B_1)$, then~$v_\sharp\in H^s_0(B_1)$ as well.}}\end{equation}
\begin{proof} We see that
\begin{eqnarray*}
{\mathcal{F}}(v_\sharp)(\xi)&=&
\int_{\mathbb{R}^n} v_\sharp(x)\,e^{-2\pi ix\cdot\xi}\,dx\\
&=&
\frac{1}{\left|\mathbb{S}^{n-1}\right|}
\iint_{\mathbb{S}^{n-1}\times{\mathbb{R}^n}}
v({\mathcal{R}}_\omega\,x)\,e^{-2\pi ix\cdot\xi}
\,d{\mathcal{H}}^{n-1}(\omega)\,dx
\end{eqnarray*}
and therefore, taking the complex conjugate,
$$ \overline{ {\mathcal{F}}(v_\sharp)(\xi) }=
\frac{1}{\left|\mathbb{S}^{n-1}\right|}
\iint_{\mathbb{S}^{n-1}\times{\mathbb{R}^n}}
v({\mathcal{R}}_\omega\,x)\,e^{2\pi ix\cdot\xi}
\,d{\mathcal{H}}^{n-1}(\omega)\,dx.$$
Hence, by~\eqref{ENstut}, and exploiting the changes of variables~$y:=
{\mathcal{R}}_{\omega}\,x$ and~$\tilde y:=
{\mathcal{R}}_{\tilde\omega}\,\tilde x$,
\begin{eqnarray*}&&
\mathcal{E}_s\left(v_\sharp,v_\sharp\right)\\&=&\int_{\mathbb{R}^n}
\left|\xi\right|^{2s}\mathcal{F}(v_\sharp)\left(\xi\right)
\overline{\mathcal{F}(v_\sharp)\left(\xi\right)} \,d\xi\\
&=&\frac{1}{\left|\mathbb{S}^{n-1}\right|^2}
\iiiint\!\!\!\int_{\mathbb{S}^{n-1}\times\mathbb{S}^{n-1}\times{\mathbb{R}^n}\times{\mathbb{R}^n}\times{\mathbb{R}^n}}
|\xi|^{2s}v({\mathcal{R}}_\omega\,x)\,v({\mathcal{R}}_{\tilde\omega}\,\tilde x)\,e^{2\pi i (\tilde x-x)\cdot\xi}
\,d{\mathcal{H}}^{n-1}(\omega)
\,d{\mathcal{H}}^{n-1}(\tilde\omega)\,dx\,d\tilde x\,d\xi\\
&=& \frac{1}{\left|\mathbb{S}^{n-1}\right|^2}
\iiiint\!\!\!\int_{\mathbb{S}^{n-1}\times\mathbb{S}^{n-1}\times{\mathbb{R}^n}\times{\mathbb{R}^n}\times{\mathbb{R}^n}}
|\xi|^{2s}v(y)\,v(\tilde y)\,e^{2\pi i \tilde y\cdot{\mathcal{R}}_{\tilde\omega}\,\xi}
e^{-2\pi i y\cdot{\mathcal{R}}_\omega\,\xi}
\,d{\mathcal{H}}^{n-1}(\omega)
\,d{\mathcal{H}}^{n-1}(\tilde\omega)\,dy\,d\tilde y\,d\xi\\
&=& \frac{1}{\left|\mathbb{S}^{n-1}\right|^2}
\iiint_{\mathbb{S}^{n-1}\times\mathbb{S}^{n-1}\times{\mathbb{R}^n}}
|\xi|^{2s}\,{\mathcal{F}}v({\mathcal{R}}_\omega\,\xi)\,\overline{
{\mathcal{F}}v({\mathcal{R}}_{\tilde\omega}\,\xi)}\,
\,d{\mathcal{H}}^{n-1}(\omega)
\,d{\mathcal{H}}^{n-1}(\tilde\omega)\,d\xi.
\end{eqnarray*}
Consequently, using the Cauchy-Schwarz inequality,
and the substitutions~$\eta:=
{\mathcal{R}}_{\omega}\,\xi$ and~$\tilde\eta:=
{\mathcal{R}}_{\tilde\omega}\,\xi$,
\begin{eqnarray*}
\mathcal{E}_s\left(v_\sharp,v_\sharp\right)&\le&
\frac{1}{\left|\mathbb{S}^{n-1}\right|^2}
\iiint_{\mathbb{S}^{n-1}\times\mathbb{S}^{n-1}\times{\mathbb{R}^n}}
|\xi|^{2s}\,\big|{\mathcal{F}}v({\mathcal{R}}_\omega\,\xi)\big|
\,\big|
{\mathcal{F}}v({\mathcal{R}}_{\tilde\omega}\,\xi)\big|\,
\,d{\mathcal{H}}^{n-1}(\omega)
\,d{\mathcal{H}}^{n-1}(\tilde\omega)\,d\xi
\\ &\le&
\frac{1}{\left|\mathbb{S}^{n-1}\right|^2}
\left(
\iiint_{\mathbb{S}^{n-1}\times\mathbb{S}^{n-1}\times{\mathbb{R}^n}}
|\xi|^{2s}\,\big|{\mathcal{F}}v({\mathcal{R}}_\omega\,\xi)\big|^2
\,d{\mathcal{H}}^{n-1}(\omega)
\,d{\mathcal{H}}^{n-1}(\tilde\omega)\,d\xi\right)^{\frac12}\\&&\qquad\cdot
\left(
\iiint_{\mathbb{S}^{n-1}\times\mathbb{S}^{n-1}\times{\mathbb{R}^n}}
|\xi|^{2s}\,
\big|{\mathcal{F}}v({\mathcal{R}}_{\tilde\omega}\,\xi)\big|^2
\,d{\mathcal{H}}^{n-1}(\omega)
\,d{\mathcal{H}}^{n-1}(\tilde\omega)\,d\xi\right)^{\frac12}
\\ &=&
\frac{1}{\left|\mathbb{S}^{n-1}\right|^2}
\left(
\iiint_{\mathbb{S}^{n-1}\times\mathbb{S}^{n-1}\times{\mathbb{R}^n}}
|\eta|^{2s}\,\big|{\mathcal{F}}v(\eta)\big|^2
\,d{\mathcal{H}}^{n-1}(\omega)
\,d{\mathcal{H}}^{n-1}(\tilde\omega)\,d\eta\right)^{\frac12}\\&&\qquad\cdot
\left(
\iiint_{\mathbb{S}^{n-1}\times\mathbb{S}^{n-1}\times{\mathbb{R}^n}}
|\tilde\eta|^{2s}\,\big|
{\mathcal{F}}v(\tilde\eta)\big|^2
\,d{\mathcal{H}}^{n-1}(\omega)
\,d{\mathcal{H}}^{n-1}(\tilde\omega)\,d\tilde\eta\right)^{\frac12}
\\ &=&
\left(
\int_{{\mathbb{R}^n}}
|\eta|^{2s}\,\big|{\mathcal{F}}v(\eta)\big|^2
\,d\eta\right)^{\frac12}
\left(
\int_{\mathbb{R}^{n}}
|\tilde\eta|^{2s}\,\big|
{\mathcal{F}}v(\tilde\eta)\big|^2
\,d\tilde\eta\right)^{\frac12}
\\&=&
\mathcal{E}_s\left(v,v\right).
\end{eqnarray*}
This proves~\eqref{ENblaoe1}.
Now, we prove~\eqref{ENblaoe2}. For this, we observe that
$$ \frac{\partial^\ell v_\sharp}{\partial x_{j_1}\dots\partial x_{j_\ell}}(x)=
\frac{1}{\left|\mathbb{S}^{n-1}\right|}\sum_{k_1,\dots,k_\ell=1}^n
\int_{\mathbb{S}^{n-1}}
\frac{\partial^\ell v}{\partial x_{k_1}\dots\partial x_{k_\ell}}
({\mathcal{R}}_\omega\,x)\;{\mathcal{R}}_\omega^{k_1j_1}\dots{\mathcal{R}}_\omega^{k_\ell j_\ell}
\;d{\mathcal{H}}^{n-1}(\omega),$$
for every $\ell\in{\mathbb{N}}$ and~$j_1,\dots,j_\ell\in\{1,\dots,n\}$,
where~${\mathcal{R}}_\omega^{jk}$ denotes the $(j,k)$ component of the matrix~${\mathcal{R}}_\omega$.
In particular,
$$ \left|\frac{\partial^\ell v_\sharp}{\partial x_{j_1}\dots\partial x_{j_\ell}}(x)\right|\le
C\,\sum_{k_1,\dots,k_\ell=1}^n
\int_{\mathbb{S}^{n-1}}
\left|\frac{\partial^\ell v}{\partial x_{k_1}\dots\partial x_{k_\ell}}
({\mathcal{R}}_\omega\,x)\right|
\;d{\mathcal{H}}^{n-1}(\omega),$$
for some~$C>0$ only depending on~$n$ and~$\ell$, and hence
\begin{eqnarray*} \left\|\frac{\partial^\ell v_\sharp}{\partial x_{j_1}\dots
\partial x_{j_\ell}}\right\|_{L^2(B_1)}^2&\le&
C\,\sum_{k_1,\dots,k_\ell=1}^n
\iint_{\mathbb{S}^{n-1}\times B_1}
\left|\frac{\partial^\ell v}{\partial x_{k_1}\dots\partial x_{k_\ell}}
({\mathcal{R}}_\omega\,x)\right|^2
\;d{\mathcal{H}}^{n-1}(\omega)\,dx\\&=&
C\,\sum_{k_1,\dots,k_\ell=1}^n
\iint_{\mathbb{S}^{n-1}\times B_1}
\left|\frac{\partial^\ell v}{\partial x_{k_1}\dots\partial x_{k_\ell}}
(y)\right|^2
\;d{\mathcal{H}}^{n-1}(\omega)\,dy\\&=&
C\,\sum_{k_1,\dots,k_\ell=1}^n
\left\|\frac{\partial^\ell v}{\partial x_{k_1}\dots\partial x_{k_\ell}}
\right\|_{L^2(B_1)}^2,
\end{eqnarray*}
up to renaming~$C$.
This, together with~\eqref{energynorm} and~\eqref{ENblaoe1},
gives~\eqref{ENblaoe2}, as desired.
\end{proof}
\end{lemma}
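In dimension one the spherical mean reduces to the even part $v_\sharp(x)=\frac{v(x)+v(-x)}{2}$, and the inequality~\eqref{ENblaoe1} can be checked numerically on a grid, as in the following illustrative sketch with ad hoc parameters.

```python
import numpy as np

L, N, s = 20.0, 4096, 0.7
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N

def energy(u):
    # discrete version of E_s(u,u) = ∫ |ξ|^{2s} |Fu(ξ)|² dξ
    Fu = dx * np.fft.fft(u)
    xi = np.fft.fftfreq(N, d=dx)
    return float(np.sum(np.abs(xi) ** (2 * s) * np.abs(Fu) ** 2) / L)

v = np.exp(-np.pi * (x - 0.3) ** 2)                     # a non-symmetric profile
v_sharp = 0.5 * (v + np.exp(-np.pi * (x + 0.3) ** 2))   # its even part
E_v, E_sharp = energy(v), energy(v_sharp)
```

For this shifted Gaussian one has $\mathcal{F}v_\sharp(\xi)=\cos(0.6\pi\xi)\,e^{-\pi\xi^2}$ up to a phase, so the averaging strictly decreases the weighted spectral energy, consistently with~\eqref{ENblaoe1}.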
With this preliminary work, we can now find a nontrivial,
nonnegative
and radial solution of~\eqref{dirfun}.
\begin{proposition}\label{2.52.5}
There exists a solution of~\eqref{dirfun} in~$H^s_0(B_1)$
which is radial, nonnegative
and with unit norm in~$L^2(B_1)$.
\begin{proof} Let~$u$ be a nontrivial solution of~\eqref{dirfun} that does not change sign,
whose existence is warranted by Lemma~\ref{ikAHHPKAK}; up to replacing~$u$ with~$-u$,
we can assume that~$u\ge0$.
Then, we have that~$u_\sharp\ge0$.
Moreover,
\begin{eqnarray*}&& \int_{B_1}u_\sharp(x)\,dx=
\frac{1}{\left|\mathbb{S}^{n-1}\right|}
\iint_{\mathbb{S}^{n-1}\times B_1} u({\mathcal{R}}_\omega\,x)
\,d{\mathcal{H}}^{n-1}(\omega)\,dx\\&&\qquad=
\frac{1}{\left|\mathbb{S}^{n-1}\right|}
\iint_{\mathbb{S}^{n-1}\times B_1} u(y)
\,d{\mathcal{H}}^{n-1}(\omega)\,dy=
\int_{ B_1} u(y)
\,dy>0,
\end{eqnarray*}
and therefore~$u_\sharp$ does not vanish identically.
As a consequence, we can define
$$u_\star:=\frac{u_\sharp}{\|u_\sharp\|_{L^2(B_1)}}.$$
We know that~$u_\star\in H^s_0(B_1)$, due to~\eqref{ENblaoe2}.
Moreover, in view of Lemma~\ref{lapsfercom},
$$ (-\Delta)^s u_\star=
\frac{(-\Delta)^s u_\sharp}{\|u_\sharp\|_{L^2(B_1)}}=
\frac{\big((-\Delta)^s u\big)_\sharp}{\|u_\sharp\|_{L^2(B_1)}}
=\frac{\lambda_1\,u_\sharp}{\|u_\sharp\|_{L^2(B_1)}}
=\lambda_1\,u_\star,$$
which gives the desired result.
\end{proof}
\end{proposition}
Now we are in a position to prove the following result.
\begin{lemma}
\label{onesob}
Let $s\ge1$ and~$r\in(0,1)$. If $u\in H_0^s\left(B_1\right)$ and $u$ is radial, then $u\in C^\alpha\left({\mathbb{R}}^n\setminus B_r\right)$ for any $\alpha\in\left[0,\frac{1}{2}\right]$.
\begin{proof}
We write
\begin{equation}
u\left(x\right)=u_0\left(\left|x\right|\right),\qquad\mbox{for some }
\;u_0:[0,+\infty)\rightarrow\mathbb{R}
\end{equation}
and we observe that~$u\in H_0^s\left(B_1\right)\subset H^1\left({\mathbb{R}}^n\right) $.
Accordingly, for any $0<r<1$, we have
\begin{equation}
\label{funz}
\infty>\int_{{\mathbb{R}}^n\setminus B_r} |u(x)|^2 \,dx
=\int_r^{+\infty} |u_0(\rho)|^2\rho^{n-1} \,d\rho\geq
r^{n-1}\int_r^{+\infty}|u_0(\rho)|^2 \,d\rho
\end{equation}
and
\begin{equation}
\label{grad}
\infty>\int_{{\mathbb{R}}^n\setminus B_r} {|\nabla u(x)|^2 \,dx}=\int_r^{+\infty}
{\left|u_0' (\rho)\right|^2\rho^{n-1} \,d\rho}
\geq r^{n-1}\int_r^{+\infty}{\left|u_0' (\rho)\right|^2 \,d\rho}.
\end{equation}
Thanks to \eqref{funz} and \eqref{grad} we have that $u_0\in H^1\left(\left(r,+\infty\right)\right)$,
with~$u_0=0$ in~$[1,+\infty)$.
Then, from
the Morrey Embedding Theorem, it follows that
$u_0\in C^\alpha\left(\left(r,+\infty\right)\right)$ for any
$\alpha\in\left[0,\frac{1}{2}\right]$, which leads to the desired result.
\end{proof}
\end{lemma}
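The one-dimensional Hölder estimate behind the proof of Lemma~\ref{onesob}, namely $|u_0(a)-u_0(b)|\le\|u_0'\|_{L^2((r,+\infty))}\,|a-b|^{1/2}$, which follows from the Cauchy-Schwarz inequality applied to $\int_a^b u_0'$, can be illustrated numerically on the explicit profile $u_0(\rho)=\max\{0,1-\rho\}$ (an illustrative sketch only):

```python
import numpy as np

# Hölder bound |u0(a) - u0(b)| ≤ ||u0'||_{L²((r,∞))} |a - b|^{1/2},
# checked on the radial profile u0(ρ) = max(0, 1 - ρ).
r = 0.25
u0 = lambda t: np.maximum(0.0, 1.0 - t)
norm_du = np.sqrt(1.0 - r)       # ||u0'||_{L²((r,∞))}: u0' = -1 on (r,1), 0 beyond
rng = np.random.default_rng(0)
a = r + 3.0 * rng.random(1000)   # random sample points in (r, r + 3)
b = r + 3.0 * rng.random(1000)
holder_ok = bool(np.all(np.abs(u0(a) - u0(b))
                        <= norm_du * np.sqrt(np.abs(a - b)) + 1e-12))
```

The bound holds for every sampled pair, as guaranteed by the Cauchy-Schwarz argument.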
\begin{corollary}\label{QUESTCP} Let~$s\in(0,+\infty)$.
There exists a radial, nonnegative
and nontrivial solution of \eqref{dirfun}
which belongs to~$H^s_0(B_1)\cap C^{\alpha}({\mathbb{R}}^n\setminus B_{1/2})$,
for some~$\alpha\in(0,1)$.
\begin{proof} If~$s\in(0,1)$, the desired claim follows from
Corollary~8 in~\cite{DSV1}.
If instead~$s\ge1$, we obtain the desired result as a consequence of
Proposition \ref{2.52.5} and Lemma \ref{onesob}.
\end{proof}
\end{corollary}
\section{Boundary asymptotics\index{boundary behaviour} of the first eigenfunctions of~$(-\Delta)^s$}
\label{sec5}
In Lemma 4 of~\cite{DSV1}, precise asymptotics
at the boundary for the first Dirichlet eigenfunction
of~$(-\Delta)^s$ were established in the range~$s\in(0,1)$.
Here, we obtain a related expansion in the range~$s>0$
for the eigenfunction provided in Corollary~\ref{QUESTCP}.
The result that we obtain is the following:
\begin{proposition}
\label{sharbou}
There exists a nontrivial solution $\phi_*$ of \eqref{dirfun}
which belongs to~$H^s_0(B_1)\cap C^{\alpha}
({\mathbb{R}}^n\setminus B_{1/2})$,
for some~$\alpha\in(0,1)$, and such that, for every~$e\in\partial B_1$
and~$\beta=(\beta_1,\dots,\beta_n)\in{\mathbb{N}}^n$,
\[\lim_{\epsilon\searrow 0}\epsilon^{\left|\beta\right|-s}\partial^\beta\phi_*\left(e+\epsilon X\right)=\left(-1\right)^{\left|\beta\right|}k_*\, s\left(s-1\right)\ldots\left(s-\left|\beta\right|+1\right)e_1^{\beta_1}\ldots e_n^{\beta_n}\left(-e\cdot X\right)_+^{s-\left|\beta\right|},\]
in the sense of distributions, with~$|\beta|:=\beta_1+\dots+\beta_n$
and~$k_*>0$.\end{proposition}
The proof of Proposition~\ref{sharbou} relies on Proposition~\ref{LEJOS}
and some auxiliary computations on the Green function in~\eqref{GREEN}.
We start with the following result:
\begin{lemma}
\label{lklkl}
Let $0<r<1$, $e\in\partial B_1$, $s>0$,
$f\in C^\alpha(\mathbb{R}^n\setminus B_r)\cap L^2(\mathbb{R}^n)$
for some $\alpha\in(0,1)$, and $f=0$ outside $B_1$. Then the integral
\begin{equation}
\label{I1I2}
\int_{B_1} f(z)\frac{(1-|z|^2)^s}{s|z-e|^n} \,dz
\end{equation}
is finite.
\begin{proof}
We denote by~$I$ the integral in~\eqref{I1I2}. We let
$$ I_1:=
\int_{B_1\setminus B_r} f(z)\frac{(1-|z|^2)^s}{s|z-e|^n} \,dz
\qquad{\mbox{and}}\qquad I_2:=
\int_{B_r} f(z)\frac{(1-|z|^2)^s}{s|z-e|^n} \,dz.$$
Then, we have that
\begin{equation}\label{SsalKAM:1} I=I_1+I_2.\end{equation}
Now, if $z\in B_1\setminus B_r$, we have that
\begin{equation*}
|f(z)|=|f(z)-f(e)|\leq C|z-e|^\alpha,
\end{equation*}
since~$f(e)=0$, and therefore
\begin{equation}\label{SsalKAM:2}
|I_1|\leq C\int_{B_1\setminus B_r}\frac{(1-|z|^2)^s}{s|z-e|^{n-\alpha}} \,dz<\infty.
\end{equation}
If instead~$z\in B_r$,
\begin{equation*}
|z-e|\geq 1-r>0,
\end{equation*}
and consequently
\begin{equation}\label{SsalKAM:3}
|I_2|\leq \frac{1}{s\,(1-r)^n}\int_{B_r}|f(z)|\, dz<\infty.
\end{equation}
The desired result follows from~\eqref{SsalKAM:1},
\eqref{SsalKAM:2} and~\eqref{SsalKAM:3}.
\end{proof}
\end{lemma}
The next result gives
a precise boundary behaviour of the Green function
for any $s>0$ (the case in which~$s\in(0,1)$ and~$f\in
C^\alpha(\mathbb{R}^n)$ was considered in Lemma~6 of~\cite{DSV1};
in fact, the proof presented here also simplifies the one in
Lemma~6 of~\cite{DSV1} in the setting considered there).
\begin{lemma}
\label{lemsix}
Let $e$, $\omega\in\partial B_1$, $\epsilon_0>0$ and~$r\in(0,1)$.
Assume that
\begin{equation}\label{CHIAmahdfn}
e+\epsilon\omega\in B_1,\end{equation}
for any $\epsilon\in(0,\epsilon_0]$. Let $f\in C^\alpha(\mathbb{R}^n\setminus B_r)\cap L^2(\mathbb{R}^n)$ for some $\alpha\in(0,1)$, with $f=0$ outside $B_1$. \\
Then
\begin{equation}
\lim_{\epsilon\searrow 0}\epsilon^{-s}\int_{B_1} f(z)
\mathcal{G}_s(e+\epsilon\omega,z) \,dz=k(n,s)\,
\int_{B_1} f(z)\frac{(-2e\cdot\omega)^s(1-|z|^2)^s}{s|z-e|^n}\, dz,
\end{equation}
for a suitable normalizing constant~$k(n,s)>0$.
\begin{proof} In light of~\eqref{CHIAmahdfn}, we have that
\begin{equation*}1>
|e+\epsilon\omega|^2=1+\epsilon^2+2\epsilon e\cdot\omega,
\end{equation*}
and therefore
\begin{equation}\label{RGHHCicj}
-e\cdot\omega>\frac{\epsilon}{2}>0.
\end{equation}
Moreover, if $r_0$ is as given in \eqref{GREEN}, we have that, for all~$z\in B_1$,
\begin{equation}
\label{kfg}
r_0(e+\epsilon\omega,z)=\frac{\epsilon(-\epsilon-2e\cdot\omega)(1-|z|^2)}{|z-e-\epsilon\omega|^2}\leq\frac{3\epsilon}{|z-e-\epsilon\omega|^2}.
\end{equation}
Also, a Taylor series representation allows us to write, for any~$t\in(0,1)$,
\begin{equation}\label{7uJAJMMA.aA}
\frac{t^{s-1}}{(t+1)^{\frac{n}{2}}}=\sum_{k=0}^\infty \binom{-n/2}{k}t^{k+s-1}.
\end{equation}
We also notice that
\begin{equation}
\label{bound}\begin{split}&
\left|\binom{-n/2}{k}\right|=
\left|
\frac{-\frac{n}2
\left(-\frac{n}2-1\right)\,...\,\left(-\frac{n}2-k+1\right)
}{k!}
\right|
=
\frac{\frac{n}2
\left(\frac{n}2+1\right)\,...\,\left(\frac{n}2+(k-1)\right)
}{k!}\\
&\qquad\le\frac{n
\left(n+1\right)\,...\,\left(n+(k-1)\right)
}{k!}
\le\frac{\left(n+(k-1)\right)!
}{k!}= (k+1)\,...\,\left(n+(k-1)\right)\\&\qquad
\le (n+k+1)^{n+1}.
\end{split}\end{equation}
This and the Root Test
give that the series in~\eqref{7uJAJMMA.aA}
is uniformly convergent on compact subsets of~$(0,1)$.
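Both the series representation~\eqref{7uJAJMMA.aA} and the polynomial bound~\eqref{bound} are elementary to test numerically; the sketch below (with illustrative choices of $n$, $k$, $s$ and $t$) does so, and is of course not part of the argument.

```python
from math import factorial, prod

def gen_binom(a, k):
    # generalized binomial coefficient a(a-1)···(a-k+1)/k!
    return prod(a - j for j in range(k)) / factorial(k)

# the polynomial bound |binom(-n/2, k)| ≤ (n+k+1)^{n+1}
bound_ok = all(
    abs(gen_binom(-n / 2.0, k)) <= (n + k + 1) ** (n + 1)
    for n in range(1, 8) for k in range(40)
)

# partial sums of the series reproduce t^{s-1}/(1+t)^{n/2}
t, s, n = 0.3, 1.5, 3
series = sum(gen_binom(-n / 2.0, k) * t ** (k + s - 1) for k in range(60))
exact = t ** (s - 1) / (1 + t) ** (n / 2)
```

For $t=0.3$ the terms decay geometrically, so sixty terms of the series already match the closed form to machine precision.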
As a consequence, if we set
\begin{equation}
\label{min}
r_1(x,z):=\min\left\{\frac{1}{2},r_0(x,z)\right\},
\end{equation}
we can switch integration and summation signs and obtain that
\begin{equation}
\label{mmno}
\int_0^{r_1(x,z)} \frac{t^{s-1}}{(t+1)^{\frac{n}{2}}}\, dt=\sum_{k=0}^\infty c_k(r_1(x,z))^{k+s},
\end{equation}
where
\begin{equation*}
c_k:=\frac{1}{k+s}\binom{-n/2}{k}.
\end{equation*}
Once again, the bound in~\eqref{bound}, together with~\eqref{min},
give that the series in~\eqref{mmno} is convergent.
Now, we omit for simplicity the normalizing constant~$k(n,s)$
in the definition of the Green function in~\eqref{GREEN},
and
we define
\begin{equation}\label{GNAKDDEF}
\mathcal{G}(x,z):=|z-x|^{2s-n}\sum_{k=0}^\infty c_k(r_1(x,z))^{k+s}
\end{equation}
and
\begin{equation*}
g(x,z):=|z-x|^{2s-n}\int_{r_1(x,z)}^{r_0(x,z)} \frac{t^{s-1}}{(t+1)^{\frac{n}{2}}} \,dt.
\end{equation*}
Using~\eqref{GREEN}
and \eqref{mmno}, and dropping dimensional constants for the sake of brevity,
we can write
\begin{equation}
\label{splitgreen}
\mathcal{G}_s(x,z)=\mathcal{G}(x,z)+g(x,z).
\end{equation}
Now, we show that
\begin{equation}\label{VKLACL}
g(x,z)\leq
\begin{cases} C\chi(x,z)\,|z-x|^{2s-n}&\quad\text{if}\quad n>2s, \\
C\chi(x,z)\,\log r_0(x,z)&\quad\text{if}\quad n=2s, \\
C\chi(x,z)\,|z-x|^{2s-n}(r_0(x,z))^{s-\frac{n}{2}}&\quad\text{if}\quad n<2s, \end{cases}
\end{equation}
where~$\chi(x,z)=1$ if~$r_0(x,z)> \frac{1}{2}$
and~$\chi(x,z)=0$ if~$r_0(x,z)\leq \frac{1}{2}$.
To check this,
we notice that if~$r_0(x,z)\leq \frac{1}{2}$ we have that~$r_1(x,z)=r_0(x,z)$,
due to~\eqref{min}, and therefore~$g(x,z)=0$.
On the other hand, if $r_0(x,z)>\frac{1}{2}$,
we deduce from~\eqref{min} that~$r_1(x,z)=\frac12$, and consequently
\begin{equation*}
g(x,z)\leq |z-x|^{2s-n}\int_{1/2}^{r_0(x,z)} t^{s-\frac{n}{2}-1} dt\leq
\begin{cases} C|z-x|^{2s-n}&\quad\text{if}\quad n>2s, \\
C\log r_0(x,z)&\quad\text{if}\quad n=2s, \\
C|z-x|^{2s-n}(r_0(x,z))^{s-\frac{n}{2}}&\quad\text{if}\quad n<2s, \end{cases}
\end{equation*}
for some constant $C>0$. This completes the proof of~\eqref{VKLACL}.
Now, we exploit the bound in~\eqref{VKLACL} when $x=e+\epsilon\omega$.
For this, we notice that
if~$r_0(e+\epsilon\omega,z)>\frac12$,
recalling \eqref{kfg}, we find that
\begin{equation}\label{75g8676769}
|z-e-\epsilon\omega|^2\leq 6\epsilon<9\epsilon,
\end{equation}
and therefore $z\in B_{3\sqrt{\epsilon}}(e+\epsilon\omega)$.
Hence, using~\eqref{VKLACL},
\begin{equation}
\label{estimates1}
\begin{split}
&\left|\int_{B_1} f(z)g(e+\epsilon\omega,z) dz\right|\leq
\int_{B_{3\sqrt{\epsilon}}(e+\epsilon\omega)}
|f(z)||g(e+\epsilon\omega,z)| dz \\ &
\leq\begin{cases} C\displaystyle\int_{B_{3\sqrt{\epsilon}}(e+\epsilon\omega)}
|f(z)||z-e-\epsilon\omega|^{2s-n} dz&\quad\text{if}\quad n>2s,\\
C\displaystyle\int_{B_{3\sqrt{\epsilon}}(e+\epsilon\omega)}|f(z)|\log r_0(
e+\epsilon\omega,z) dz&\quad\text{if}\quad n=2s, \\
C\displaystyle\int_{B_{3\sqrt{\epsilon}}(e+\epsilon\omega)}
|f(z)||z-e-\epsilon\omega|^{2s-n}(r_0(e+\epsilon\omega,z))^{
s-\frac{n}{2}} dz &\quad\text{if}\quad n<2s .\end{cases}
\end{split}
\end{equation}
Now, if $z\in B_{3\sqrt{\epsilon}}(e+\epsilon\omega)$,
then \begin{equation}\label{z0okdscxi}
|z-e|\leq |z-e-\epsilon\omega|+|\epsilon\omega|
\leq 3\sqrt{\epsilon}+\epsilon<4\sqrt{\epsilon}.\end{equation}
Furthermore,
for a given~$r\in(0,1)$, we have that~$
B_{3\sqrt{\epsilon}}(e+\epsilon\omega)\subseteq{\mathbb{R}}^n\setminus
B_r$, provided that~$\epsilon$ is sufficiently small.
Hence, if~$z\in B_{3\sqrt{\epsilon}}(e+\epsilon\omega)$,
we can exploit the regularity of~$f$ and deduce that
$$ |f(z)|=|f(z)-f(e)|\leq C|z-e|^\alpha.$$
This and~\eqref{z0okdscxi} lead to
\begin{equation}
\label{su}
|f(z)|\leq C\epsilon^{\frac{\alpha}{2}},
\end{equation}
for every~$z\in B_{3\sqrt{\epsilon}}(e+\epsilon\omega)$.
Thanks to \eqref{kfg}, \eqref{estimates1} and \eqref{su}, we have that
\begin{equation*}
\begin{split}
\left|\int_{B_1} f(z)g(e+\epsilon\omega,z) dz\right|&\leq\begin{cases} C\epsilon^{\frac{\alpha}{2}}
\displaystyle\int_{B_{3\sqrt{\epsilon}}(e+\epsilon\omega)}|z-e-\epsilon\omega|^{2s-n} dz&\quad\text{if}\quad n>2s,\\ C\epsilon^{\frac{\alpha}{2}}
\displaystyle\int_{B_{3\sqrt{\epsilon}}(e+\epsilon\omega)}\log \frac{3\epsilon}{|z-e-\epsilon\omega|^2} dz&\quad\text{if}\quad n=2s ,\\ C\epsilon^{\frac{\alpha}{2}+s-\frac{n}{2}}\displaystyle\int_{B_{3\sqrt{\epsilon}}(e+\epsilon\omega)} dz &\quad\text{if}\quad n<2s \end{cases} \\
&\leq C\epsilon^{\frac{\alpha}{2}+s},
\end{split}
\end{equation*}
up to renaming~$C$.
This and \eqref{splitgreen} give that
\begin{equation}
\label{alalal}
\int_{B_1} f(z)\mathcal{G}_s(e+\epsilon\omega,z) dz=\int_{B_1} f(z)\mathcal{G}(e+\epsilon\omega,z) dz+o(\epsilon^s).
\end{equation}
Now, we consider the series in~\eqref{GNAKDDEF},
and we split the contribution coming from the index $k=0$ from the ones coming from the indices $k>0$, namely
we write
\begin{equation}\label{IN16p}
\begin{split}
&\mathcal{G}(x,z)=\mathcal{G}_0(x,z)+\mathcal{G}_1(x,z), \\
\quad\text{with}\quad &\mathcal{G}_0(x,z):=\frac{|z-x|^{2s-n}}{s}(r_1(x,z))^s \\
\quad\text{and}\quad &\mathcal{G}_1(x,z):=|z-x|^{2s-n}\sum_{k=1}^{+\infty} c_k\,(r_1(x,z))^{k+s}.
\end{split}
\end{equation}
Firstly, we consider the contribution given by the term $\mathcal{G}_1$. Thanks to \eqref{min} and \eqref{su}, we have that
\begin{equation}
\label{estimates5}
\begin{split}
&\left|\int_{B_1\cap B_{3\sqrt{\epsilon}}(e+\epsilon\omega)}f(z)\mathcal{G}_1(e+\epsilon\omega,z) dz\right|\leq \int_{B_{3\sqrt{\epsilon}}(e+\epsilon\omega)} |f(z)|\mathcal{G}_1(e+\epsilon\omega,z) dz \\
&\quad\quad\leq C\epsilon^{\frac{\alpha}{2}}\int_{B_{3\sqrt{\epsilon}}(e+\epsilon\omega)}|z-e-\epsilon\omega|^{2s-n}\sum_{k=1}^{+\infty} |c_k|\,(r_1(e+\epsilon\omega,z))^{k+s} dz \\
&\quad\quad\leq C\epsilon^{\frac{\alpha}{2}}\int_{B_{3\sqrt{\epsilon}}(e+\epsilon\omega)}|z-e-\epsilon\omega|^{2s-n}\sum_{k=1}^{+\infty} |c_k|\,\left(\frac{1}{2}\right)^{k+s} dz \\
&\quad\quad\leq C\epsilon^{\frac{\alpha}{2}}\int_{B_{3\sqrt{\epsilon}}(e+\epsilon\omega)}|z-e-\epsilon\omega|^{2s-n} dz \\
&\quad\quad\leq C\epsilon^{\frac{\alpha}{2}+s},
\end{split}
\end{equation}
up to renaming the constant $C$ step by step.
On the other hand, for every~$z\in{\mathbb{R}}^n$,
\[|z|=|e+\epsilon\omega+z-e-\epsilon\omega|
\geq|e+\epsilon\omega|-|z-e-\epsilon\omega|
\geq 1-\epsilon-|z-e-\epsilon\omega|.\]
Therefore,
for every~$z\in B_1\setminus\left(B_r\cup B_{3\sqrt{\epsilon}}
(e+\epsilon\omega)\right)$, we can take~$e_*:=\frac{z}{|z|}$ and obtain that
\begin{equation}
\label{estimate2}
\begin{split}&
|f(z)|=|f(z)-f(e_*)|\leq
C|z-e_*|^\alpha= C(1-|z|)^\alpha
\\&\qquad\leq C(\epsilon+|z-e-\epsilon\omega|)^\alpha
\leq C|z-e-\epsilon\omega|^\alpha,
\end{split}\end{equation}
up to renaming~$C>0$.
Also, using \eqref{kfg}, we see that, for any $k>0$,
\begin{equation}
\label{estimates3}
\begin{split}
&(r_0(e+\epsilon\omega,z))^{s+\frac{\alpha}{4}}\left(\frac{1}{2}\right)^{k-\frac{\alpha}{4}}\leq\frac{C\epsilon^{s+\frac{\alpha}{4}}}{2^k|z-e-\epsilon\omega|^{2s+\frac{\alpha}{2}}}.
\end{split}
\end{equation}
This, \eqref{min} and \eqref{estimate2} give that if $z\in B_1\setminus\left(B_r\cup B_{3\sqrt{\epsilon}}(e+\epsilon\omega)\right)$, then
\begin{equation*}
\begin{split}
|f(z)\mathcal{G}_1(e+\epsilon\omega,z)| \,&
\leq C|z-e-\epsilon\omega|^{\alpha+2s-n}
\sum_{k=1}^{+\infty}|c_k|\,(r_1(e+\epsilon\omega,z))^{k+s} \\
&=C|z-e-\epsilon\omega|^{\alpha+2s-n}
\sum_{k=1}^{+\infty}|c_k|\,(r_1(e+\epsilon\omega,z))^{s+\frac\alpha4}
(r_1(e+\epsilon\omega,z))^{k-\frac\alpha4}\\
&\le C|z-e-\epsilon\omega|^{\alpha+2s-n}
\sum_{k=1}^{+\infty}|c_k|\,(r_0(e+\epsilon\omega,z))^{s+\frac\alpha4}
\left(\frac12\right)^{k-\frac\alpha4}\\
&\leq C\epsilon^{s+\frac{\alpha}{4}}|z-e-\epsilon\omega|^{\frac{\alpha}{2}-n}\sum_{k=1}^{+\infty}\frac{|c_k|}{2^k},
\end{split}
\end{equation*}
where the latter series is absolutely convergent thanks to \eqref{bound}.
This implies that, if we
set $E:=B_1\setminus\left(B_r\cup B_{3\sqrt{\epsilon}}
(e+\epsilon\omega)\right)$, it holds that
\begin{equation}
\label{estimates4}
\begin{split}
&\left|\int_E f(z)\mathcal{G}_1(e+\epsilon\omega,z) dz\right|\leq C\epsilon^{s+\frac{\alpha}{4}}\int_E |z-e-\epsilon\omega|^{\frac{\alpha}{2}-n} dz \\
\quad\quad &\qquad\qquad\leq C\epsilon^{s+\frac{\alpha}{4}}\int_{B_1} |z-e-\epsilon\omega|^{\frac{\alpha}{2}-n} dz\leq C\epsilon^{s+\frac{\alpha}{4}}\int_{B_3} |z|^{\frac{\alpha}{2}-n} dz \leq C\epsilon^{s+\frac{\alpha}{4}}.
\end{split}
\end{equation}
Moreover, if $z\in B_r$, we have that
\begin{equation*}
|e+\epsilon\omega-z|\geq 1-\epsilon-r,
\end{equation*}
and therefore, recalling~\eqref{estimates3},
\begin{equation*}
\begin{split}
|\mathcal{G}_1(e+\epsilon\omega,z)|\,&\leq |z-e-\epsilon\omega|^{2s-n}\sum_{k=1}^{+\infty}
|c_k|\,\big(r_1(e+\epsilon\omega,z)\big)^{s+\frac\alpha4}
\big(r_1(e+\epsilon\omega,z)\big)^{k-\frac\alpha4}\\&\le
|z-e-\epsilon\omega|^{2s-n}\sum_{k=1}^{+\infty}
|c_k|\,\big(r_0(e+\epsilon\omega,z)\big)^{s+\frac\alpha4}
\left(\frac12\right)^{k-\frac\alpha4}
\\ &\le C\,\epsilon^{s+\frac{\alpha}{4}}\,
|z-e-\epsilon\omega|^{-n-\frac\alpha2}\sum_{k=1}^{+\infty}
\frac{|c_k|}{2^k}
\\ &\le
C(1-\epsilon-r)^{-n-\frac{\alpha}{2}}\,
\epsilon^{s+\frac{\alpha}{4}},
\end{split}\end{equation*}
for every~$z\in B_r$,
up to renaming~$C$.
As a consequence, we find that
\begin{equation}
\label{estimates6}
\begin{split}
\left|\int_{B_r} f(z)\mathcal{G}_1(e+\epsilon\omega,z) dz\right|
&\leq
\sup_{z\in B_r} |\mathcal{G}_1
(e+\epsilon\omega,z)|\,
\left\|f\right\|_{L^1(B_r)}
\\ &\leq \left\|f\right\|_{L^1(B_r)}(1-\epsilon-r)^{-n-\frac{\alpha}{2}}\epsilon^{s+\frac{\alpha}{4}}
\\
&\leq \left\|f\right\|_{L^1(B_r)}2^{n+\frac{\alpha}{2}}(1-r)^{-n-\frac{\alpha}{2}}\epsilon^{s+\frac{\alpha}{4}}
\\
&=C\epsilon^{s+\frac{\alpha}{4}},
\end{split}
\end{equation}
as long as $\epsilon$ is suitably
small with respect to $r$, and $C$ is a positive constant
which depends on $\|f\|_{L^1(B_r)}$, $r$, $n$ and~$\alpha$.
Then, by \eqref{estimates5}, \eqref{estimates4} and \eqref{estimates6} we conclude that
\begin{equation*}
\int_{B_1} f(z)\mathcal{G}_1(e+\epsilon\omega,z) dz=o(\epsilon^s).
\end{equation*}
Inserting this information into \eqref{alalal},
and recalling~\eqref{IN16p},
we obtain
\begin{equation}
\label{ololol}
\int_{B_1} f(z)\mathcal{G}_s(e+\epsilon\omega,z) dz=\int_{B_1} f(z)\mathcal{G}_0(e+\epsilon\omega,z) dz+o(\epsilon^s).
\end{equation}
Now, we define \[\mathcal{D}_1:=\left\{z\in B_1\quad\text{s.t.}\quad r_0(e+\epsilon\omega,z)>1/2\right\}\] and \[\mathcal{D}_2:=\left\{z\in B_1\quad\text{s.t.}\quad r_0(e+\epsilon\omega,z)\leq 1/2\right\}.\]
If $z\in\mathcal{D}_1$, then $z\in B_1\setminus B_r$, thanks to~\eqref{75g8676769},
and hence we can use \eqref{estimates1} and \eqref{su} and write
\[|f(z)\mathcal{G}_0(e+\epsilon\omega,z)|\leq
C\epsilon^{\frac{\alpha}{2}}|z-e-\epsilon\omega|^{2s-n}.\]
Then, recalling again \eqref{estimates1},
\begin{equation}
\label{D1}
\left|\int_{\mathcal{D}_1} f(z)\mathcal{G}_0(e+\epsilon\omega,z) dz\right|\leq C\epsilon^{\frac{\alpha}{2}}\int_{B_{3\sqrt{\epsilon}}(e+\epsilon\omega)} |z-e-\epsilon\omega|^{2s-n} dz=C\epsilon^{\frac{\alpha}{2}+s},
\end{equation}
up to renaming the constant $C>0$. This information and \eqref{ololol} give that
\begin{equation*}
\int_{B_1} f(z)\mathcal{G}_s(e+\epsilon\omega,z) dz=
\int_{\mathcal{D}_2} f(z)\mathcal{G}_0(e+\epsilon\omega,z) dz+o(\epsilon^s).
\end{equation*}
Now, by \eqref{kfg} and \eqref{min}, if $z\in\mathcal{D}_2$,
\begin{equation*}
\mathcal{G}_0(e+\epsilon\omega,z)=\frac{|z-e-\epsilon\omega|^{2s-n}}{s}\,\big(r_0(e+\epsilon\omega,z)\big)^s=\frac{\epsilon^s(-\epsilon-2e\cdot\omega)^s(1-|z|^2)^s}{s|z-e-\epsilon\omega|^n}.
\end{equation*}
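For the reader's convenience, we point out how the latter identity can be obtained, assuming, for concreteness, the normalization $r_0(x,z)=\frac{(1-|x|^2)(1-|z|^2)}{|x-z|^2}$, consistently with the notation introduced before. Indeed, one computes
\begin{equation*}
1-|e+\epsilon\omega|^2=1-\big(1+2\epsilon\, e\cdot\omega+\epsilon^2\big)=\epsilon\,(-\epsilon-2e\cdot\omega),
\end{equation*}
and therefore
\begin{equation*}
\big(r_0(e+\epsilon\omega,z)\big)^s=\frac{\epsilon^s(-\epsilon-2e\cdot\omega)^s(1-|z|^2)^s}{|z-e-\epsilon\omega|^{2s}},
\end{equation*}
which, multiplied by $\frac{|z-e-\epsilon\omega|^{2s-n}}{s}$, gives precisely the right hand side of the formula above.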
Hence, we have
\begin{equation}
\label{sesa}
\begin{split}
&\lim_{\epsilon\searrow 0}\epsilon^{-s}\int_{B_1} f(z)\mathcal{G}_s(e+\epsilon\omega,z) dz \\
=\;&\lim_{\epsilon\searrow 0}\epsilon^{-s}\int_{\mathcal{D}_2} f(z)\mathcal{G}_0(e+\epsilon\omega,z) dz
\\
=\;&\lim_{\epsilon\searrow 0} \int_{\left\{2\epsilon(-\epsilon-2e\cdot\omega)(1-|z|^2)\leq|z-e-\epsilon\omega|^2\right\}} f(z)\frac{(-\epsilon-2e\cdot\omega)^s(1-|z|^2)^s}{s|z-e-\epsilon\omega|^n} dz.
\end{split}
\end{equation}
Now we set
\begin{equation}\label{H7uJA78JsadA}
F_\epsilon(z):=\begin{cases}f(z)
\displaystyle\frac{(-\epsilon-2e\cdot\omega)^s(1-|z|^2)^s}{
s|z-e-\epsilon\omega|^n}&\quad\text{if}\quad 2\epsilon
(-\epsilon-2e\cdot\omega)(1-|z|^2)\leq|z-e-\epsilon\omega|^2, \\
0&\quad\text{otherwise}, \end{cases}
\end{equation}
and we prove that for any $\eta>0$ there exists $\delta>0$ independent
of~$\epsilon$ such that, for any $E\subset\mathbb{R}^n$ with $|E|\leq\delta$, we have
\begin{equation}
\label{eta}
\int_{B_1\cap E} |F_\epsilon(z)| dz\leq\eta.
\end{equation}
To this aim, given~$\eta$ and~$E$ as above,
we define
\begin{equation} \label{RGANrho}
\rho:= \min\left\{
\epsilon (-\epsilon-2e\cdot\omega),\,
\sqrt{{ 2\epsilon (-\epsilon-2e\cdot\omega)}(1-r)},\,
\left(
\frac{
2^{s+\alpha} s^2\,\epsilon^{s+\alpha}\,(-\epsilon-2e\cdot\omega)^\alpha
\eta}{3^{2s}\,\|f\|_{C^\alpha(B_1\setminus B_r)}\,|\partial B_1|}
\right)^{\frac1{2\alpha}}
\right\}.\end{equation}
We stress that the above definition is well-posed, thanks to~\eqref{RGHHCicj}.
In addition, using the integrability of~$f$, we take~$\delta>0$
such that if~$A\subseteq B_1$ and~$|A|\le\delta$ then
\begin{equation}\label{PPKAjnaOP} \int_{A} |f(x)|\,dx\le \frac{s\rho^n\eta}{2\cdot 3^s}.
\end{equation}
We set
\begin{equation}\label{9ikjendE} E_1:=E\cap B_{\rho}(e+\epsilon\omega)\qquad{\mbox{and}}\qquad
E_2:=E\setminus B_{\rho}(e+\epsilon\omega).\end{equation}
{F}rom~\eqref{H7uJA78JsadA}, we see that
$$ |F_\epsilon(z)|\le
\frac{|f(z)|\,\chi_\star(z)}{2^s s\,\epsilon^s
|z-e-\epsilon\omega|^{n-2s}},$$
where
$$ \chi_\star(z):=\begin{cases}
1 & \quad\text{if}\quad 2\epsilon
(-\epsilon-2e\cdot\omega)(1-|z|^2)\leq|z-e-\epsilon\omega|^2, \\
0&\quad\text{otherwise},
\end{cases}$$
and therefore
\begin{equation}\label{THAnaoa}
\int_{B_1\cap E_1} |F_\epsilon(z)|\,dz
\le \int_{B_1\cap E_1}\frac{|f(z)|\,\chi_\star(z)}{2^s s\,\epsilon^s
|z-e-\epsilon\omega|^{n-2s}}\,dz.
\end{equation}
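We observe that the pointwise bound on~$F_\epsilon$ exploited here is a direct consequence of~\eqref{H7uJA78JsadA}: indeed, at any point at which $\chi_\star(z)\ne0$ (recalling that $-\epsilon-2e\cdot\omega>0$ in the regime considered here, thanks to~\eqref{RGHHCicj}),
\begin{equation*}
(-\epsilon-2e\cdot\omega)^s(1-|z|^2)^s=\frac{\big(2\epsilon(-\epsilon-2e\cdot\omega)(1-|z|^2)\big)^s}{(2\epsilon)^s}\le\frac{|z-e-\epsilon\omega|^{2s}}{2^s\epsilon^s},
\end{equation*}
and consequently
\begin{equation*}
|F_\epsilon(z)|\le\frac{|f(z)|}{s\,|z-e-\epsilon\omega|^{n}}\cdot\frac{|z-e-\epsilon\omega|^{2s}}{2^s\epsilon^s}=\frac{|f(z)|}{2^s s\,\epsilon^s\,|z-e-\epsilon\omega|^{n-2s}}.
\end{equation*}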
Now, for every~$z\in B_1\cap E_1\subseteq B_{\rho}(e+\epsilon\omega)$ for which~$\chi_\star(z)\ne0$,
we have that
$$ 2\epsilon
(-\epsilon-2e\cdot\omega)(1-|z|^2)\le|z-e-\epsilon\omega|^2\le\rho^2,$$
and hence
$$ |z|\ge \sqrt{1-\frac{\rho^2}{ 2\epsilon (-\epsilon-2e\cdot\omega)}}
\ge 1-\frac{\rho^2}{ 2\epsilon (-\epsilon-2e\cdot\omega)},$$
which in turn gives that~$|z|\ge r$, recall~\eqref{RGANrho}.
{F}rom this and~\eqref{THAnaoa} we deduce that
\begin{equation}\label{9ikjendE2}
\begin{split}&
\int_{B_1\cap E_1} |F_\epsilon(z)|\,dz
\le \int_{1-\frac{\rho^2}{ 2\epsilon (-\epsilon-2e\cdot\omega)}\le|z|<1
}\frac{\|f\|_{C^\alpha(B_1\setminus B_r)}\,(1-|z|)^\alpha}{2^s s\,\epsilon^s
|z-e-\epsilon\omega|^{n-2s}}\,dz\\&\qquad
\le \frac{\|f\|_{C^\alpha(B_1\setminus B_r)}}{2^s s\,\epsilon^s}\,
\left(\frac{\rho^2}{ 2\epsilon (-\epsilon-2e\cdot\omega)}\right)^\alpha
\int_{1-\frac{\rho^2}{ 2\epsilon (-\epsilon-2e\cdot\omega)}\le|z|<1
}\frac{dz}{|z-e-\epsilon\omega|^{n-2s}}\\&\qquad
\le \frac{\|f\|_{C^\alpha(B_1\setminus B_r)}}{2^s s\,\epsilon^s}\,
\left(\frac{\rho^2}{ 2\epsilon (-\epsilon-2e\cdot\omega)}\right)^\alpha
\int_{B_3}\frac{dx}{|x|^{n-2s}}
\\&\qquad=\frac{3^{2s}\,\|f\|_{C^\alpha(B_1\setminus B_r)}\,
|\partial B_1|}{
2^{s+\alpha+1} s^2\,\epsilon^{s+\alpha}\,(-\epsilon-2e\cdot\omega)^\alpha }\,
\;\rho^{2\alpha}\\&\qquad\le
\frac{\eta}{2},
\end{split}\end{equation}
where~\eqref{RGANrho} has been exploited in the last inequality.
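We also remark that, in the chain of inequalities above, we have used the elementary polar coordinate computation
\begin{equation*}
\int_{B_3}\frac{dx}{|x|^{n-2s}}=|\partial B_1|\int_0^3\rho^{2s-n}\,\rho^{n-1}\,d\rho=|\partial B_1|\int_0^3\rho^{2s-1}\,d\rho=\frac{3^{2s}\,|\partial B_1|}{2s},
\end{equation*}
which accounts for the factors~$3^{2s}$ and~$|\partial B_1|$, and for the additional powers of~$2$ and~$s$, in the last identity of~\eqref{9ikjendE2}.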
We also point out that, by~\eqref{H7uJA78JsadA},
\eqref{PPKAjnaOP} and~\eqref{9ikjendE},
\begin{eqnarray*}
\int_{B_1\cap E_2}|F_\epsilon(z)|\,dz
&\le&\int_{(B_1\setminus
B_{\rho}(e+\epsilon\omega))\cap E}
|f(z)|\,\frac{(-\epsilon-2e\cdot\omega)^s(1-|z|^2)^s}{
s|z-e-\epsilon\omega|^n}\,dz\\
&\le&\frac{3^s}{s\rho^n}\int_{B_1\cap E}|f(z)|\,dz\\&\le&\frac{\eta}{2}.
\end{eqnarray*}
This, \eqref{9ikjendE} and~\eqref{9ikjendE2} give~\eqref{eta},
as desired.
Notice also that, for every $z\in B_1$, $F_\epsilon(z)$ converges, as $\epsilon\searrow 0$,
to the function \[F(z):=f(z)\frac{(-2e\cdot\omega)^s(1-|z|^2)^s}{s|z-e|^n}.\]
Hence \eqref{sesa}, \eqref{eta} and the Vitali Convergence Theorem allow us to conclude that
\begin{equation}\label{lkl5678kl}
\begin{split}
\lim_{\epsilon\searrow 0}\int_{B_1} f(z)\mathcal{G}_s(e+\epsilon\omega,z) dz&=\lim_{\epsilon\searrow 0}\int_{B_1} F_\epsilon(z) dz \\
&=\int_{B_1} f(z)\frac{(-2e\cdot\omega)^s(1-|z|^2)^s}{s|z-e|^n} dz,
\end{split}
\end{equation}
which establishes the claim of Lemma \ref{lemsix}
(notice that the finiteness of the latter quantity in~\eqref{lkl5678kl}
follows from~\eqref{lklkl}).
\end{proof}
\end{lemma}
With this preliminary work,
we can now establish the boundary behaviour of solutions which is needed
in our setting. As a matter of fact, from Lemma~\ref{lemsix} we immediately deduce that:
\begin{corollary}
\label{propsev}
Let $e$, $\omega\in\partial B_1$, $\epsilon_0>0$
and~$r\in(0,1)$.
Assume that $e+\epsilon\omega\in B_1$, for any $\epsilon\in(0,\epsilon_0]$. Let $f\in C^\alpha(\mathbb{R}^n\setminus B_r)\cap L^2(\mathbb{R}^n)$ for some $\alpha\in(0,1)$, with $f=0$ outside $B_1$.
Let $u$ be as in~\eqref{0olwsKA}.
Then,
\begin{equation*}
\lim_{\epsilon\searrow 0}\epsilon^{-s}u(e+\epsilon\omega)=k(n,s)(-2e\cdot\omega)^s\int_{B_1} f(z)\frac{(1-|z|^2)^s}{s|z-e|^n} dz,
\end{equation*}
where $k(n,s)$ denotes a positive normalizing constant.
\end{corollary}
Now we apply the previous results to detect the
boundary growth of a suitable
first eigenfunction. For our purposes, the statement that we need
is the following:
\begin{corollary}\label{8iJJAUMPAAAxc}
There exists a nontrivial solution $\phi_*$ of \eqref{dirfun}
which belongs to~$H^s_0(B_1)\cap C^{\alpha}
({\mathbb{R}}^n\setminus B_{1/2})$,
for some~$\alpha\in(0,1)$, and such that, for every~$e\in\partial B_1$,
\begin{equation}\label{CHaert}
\lim_{\epsilon\searrow 0}\epsilon^{-s}\phi_*(e+\epsilon\omega)=k_*\,
(-e\cdot\omega)^s_+,
\end{equation}
for a suitable constant~$k_*>0$.
Furthermore,
for every $R\in(r,1)$, there exists~$C_R>0$ such that
\begin{equation}\label{COSI}
\sup_{x\in B_1\setminus B_R}
d^{-s}(x)\,|\phi_*(x)|\le C_R.\end{equation}
\begin{proof}
Let~$\alpha\in(0,1)$ and~$\phi\in H^s_0(B_1)\cap C^{\alpha}({\mathbb{R}}^n\setminus B_{1/2})$
be the nonnegative
and nontrivial solution of \eqref{dirfun}, as given in Corollary~\ref{QUESTCP}.
In the spirit of~\eqref{0olwsKA},
we define
$$
\phi_*(x):=
\begin{cases}
\displaystyle\lambda_1\int_{B_1} \mathcal{G}_s\left(x,y\right)\,\phi(y)\,dy & {\mbox{ if }}x\in B_1,\\
0&{\mbox{ if }}x\in{\mathbb{R}}^n\setminus B_1.
\end{cases}$$
We stress that we can use Proposition~\ref{LEJOS}
in this context, with~$f:=\lambda_1\phi$,
since condition~\eqref{CHlaIA} is satisfied in this case.
Then, from~\eqref{VIC2} and~\eqref{VIC4}, we know that~$\phi_*\in H^s_0(B_1)$
and, from~\eqref{VIC3},
$$ (-\Delta)^s \phi_*=\lambda_1\,\phi{\mbox{ in }}B_1.$$
In particular, we have that~$(-\Delta)^s (\phi-\phi_*)=0$ in~$B_1$,
and~$\phi-\phi_*\in H^s_0(B_1)$, which give, by the uniqueness of the solution
of the homogeneous Dirichlet problem in~$H^s_0(B_1)$, that~$\phi-\phi_*$ vanishes identically.
Hence, we can write that~$\phi=\phi_*$, and thus~$\phi_*$
is a solution of~\eqref{dirfun}.
Now, we check~\eqref{CHaert}.
For this, we distinguish two cases.
If~$e\cdot\omega\ge 0$, we have that
$$ |e+\epsilon\omega|^2 =1+2\epsilon e\cdot\omega+\epsilon^2>1,$$
for all~$\epsilon>0$. Then, in this case~$e+\epsilon\omega\in
{\mathbb{R}}^n\setminus B_1$, and therefore~$\phi_*(e+\epsilon\omega)=0$.
This gives that, in this case,
\begin{equation}\label{GIANDIACNKS}
\lim_{\epsilon\searrow 0}\epsilon^{-s}\phi_*(e+\epsilon\omega)=0.\end{equation}
If instead~$e\cdot\omega<0$, we see that
$$ |e+\epsilon\omega|^2 =1+2\epsilon e\cdot\omega+\epsilon^2<1,$$
for all~$\epsilon>0$ sufficiently small. Hence, we can exploit
Corollary~\ref{propsev} and find that
\begin{equation}\label{GIANDIACNKS2}
\lim_{\epsilon\searrow 0}\epsilon^{-s}\phi_*(e+\epsilon\omega)=\lambda_1\,
k(n,s)(-2e\cdot\omega)^s\int_{B_1} \phi(z)\frac{(1-|z|^2)^s}{s|z-e|^n} \,dz,\end{equation}
with~$k(n,s)>0$. Then, we define
$$ k_*:=2^s\,\lambda_1\,k(n,s)\int_{B_1} \phi(z)\frac{(1-|z|^2)^s}{s|z-e|^n} \,dz.$$
We observe that~$k_*$ is positive by construction,
since~$\lambda_1$, $k(n,s)>0$. Also, in light of~\eqref{lklkl},
we know that $k_*$ is finite.
Hence, from~\eqref{GIANDIACNKS} and~\eqref{GIANDIACNKS2}
we obtain~\eqref{CHaert}, as desired.
It only remains to check~\eqref{COSI}.
For this, we use~\eqref{VIC3}, and we see that,
for every~$R\in(r,1)$,
$$ \sup_{x\in B_1\setminus B_R}
d^{-s}(x)\,|\phi_*(x)|\le C_R\,\lambda_1\,\big(\|\phi\|_{L^1(B_1)}+
\|\phi\|_{L^\infty(B_1\setminus B_r)}\big),
$$
and this gives~\eqref{COSI} up to renaming~$C_R$.
\end{proof}
\end{corollary}
Now, we can complete the proof of Proposition~\ref{sharbou}, by arguing as follows.
\begin{proof}[Proof of Proposition~\ref{sharbou}]
Let $\psi$ be a test function in $C^\infty_0(\mathbb{R}^n)$.
Let also~$R:=\frac{r+1}{2}\in(r,1)$ and
$$ g_\epsilon(X):=
\epsilon^{-s}\phi_*(e+\epsilon X)\partial^{\beta}\psi(X).$$
We claim that
\begin{equation}\label{7UHSNs9oKN}
\sup_{{X\in{\mathbb{R}}^n}}|g_\epsilon(X)|\le C,\end{equation}
for some~$C>0$ independent of~$\epsilon$.
To prove this, we distinguish three cases.
If~$e+\epsilon X\in{\mathbb{R}}^n\setminus B_1$,
we have that~$\phi_*(e+\epsilon X)=0$ and thus~$g_\epsilon(X)=0$.
If instead~$e+\epsilon X\in B_R$,
we observe that
$$ R>|e+\epsilon X|\ge 1-\epsilon|X|,$$
and therefore~$|X|\ge \frac{1-R}{\epsilon}$. In particular,
in this case~$X$ falls outside the support of~$\psi$, as long as~$\epsilon>0$
is sufficiently small, and consequently~$\partial^{\beta}\psi(X)=0$
and~$g_\epsilon(X)=0$.
Hence, to complete the proof of~\eqref{7UHSNs9oKN},
we are only left with the case in which~$
e+\epsilon X\in B_1\setminus B_R$. In this situation,
we make use of~\eqref{COSI} and we find that
\begin{eqnarray*}
&& |\phi_*(e+\epsilon X)|\le C\,d^{s}(e+\epsilon X)=
C\,(1-|e+\epsilon X|)^s\\&&\qquad
\le C\,(1-|e+\epsilon X|)^s(1+|e+\epsilon X|)^s=
C\,(1-|e+\epsilon X|^2)^s\\&&\qquad=
C\,\epsilon^s(-2e\cdot X-\epsilon|X|^2)^s\le C\epsilon^s,
\end{eqnarray*}
for some~$C>0$ possibly varying from line to line,
and this completes the proof of~\eqref{7UHSNs9oKN}.
Now, from~\eqref{7UHSNs9oKN} and the
Dominated Convergence Theorem, we obtain that
\begin{equation}\label{eq567a8s81n} \lim_{\epsilon\searrow0}\int_{\mathbb{R}^n}
\epsilon^{-s}\phi_*(e+\epsilon X)\partial^{\beta}\psi(X) dX
=\int_{\mathbb{R}^n} \lim_{\epsilon\searrow0}
\epsilon^{-s}\phi_*(e+\epsilon X)\partial^{\beta}\psi(X) dX.\end{equation}
On the other hand, by Corollary~\ref{8iJJAUMPAAAxc},
used here with~$\omega:=\frac{X}{|X|}$, we know that
\begin{eqnarray*}&& \lim_{\epsilon\searrow0}
\epsilon^{-s}\phi_*(e+\epsilon X)
=\lim_{\epsilon\searrow0}
\epsilon^{-s}\phi_*(e+\epsilon |X|\omega)=|X|^s
\lim_{\epsilon\searrow 0}\epsilon^{-s}\phi_*(e+\epsilon\omega)\\&&\qquad=k_*\,|X|^s\,
(-e\cdot\omega)^s_+=k_*\,(-e\cdot X)^s_+.
\end{eqnarray*}
Substituting this into~\eqref{eq567a8s81n}, we thus find that
$$ \lim_{\epsilon\searrow0}\int_{\mathbb{R}^n}
\epsilon^{-s}\phi_*(e+\epsilon X)\partial^{\beta}\psi(X) dX
=k_*\,\int_{\mathbb{R}^n} (-e\cdot X)^s_+\partial^{\beta}\psi(X) dX.$$
As a consequence, integrating by parts twice,
\begin{equation*}
\begin{split}
&\lim_{\epsilon\searrow 0}\epsilon^{|\beta|-s}\int_{\mathbb{R}^n}
\partial^\beta\phi_*(e+\epsilon X)\psi(X) dX=
\lim_{\epsilon\searrow 0}\int_{\mathbb{R}^n}\partial^\beta
\Big(\epsilon^{-s}\phi_*(e+\epsilon X)\Big)\psi(X) dX \\
&\qquad=(-1)^{|\beta|}\lim_{\epsilon\searrow 0}\int_{\mathbb{R}^n}
\epsilon^{-s}\phi_*(e+\epsilon X)\partial^{\beta}\psi(X) dX \\
&\qquad=(-1)^{|\beta|}\,k_*\,\int_{\mathbb{R}^n} (-e\cdot X)^s_+\partial^{\beta}\psi(X) dX\\
&\qquad=k_*\,\int_{\mathbb{R}^n} \partial^{\beta}(-e\cdot X)^s_+\psi(X) dX
\\
&\qquad=(-1)^{|\beta|}\,k_*\, s(s-1)\ldots(s-|\beta|+1)e_1^{\beta_1}\ldots e_n^{\beta_n}\int_{\mathbb{R}^n}(-e\cdot X)^{s-|\beta|}_+\psi(X) dX.
\end{split}
\end{equation*}
Since the test function $\psi$ is arbitrary, the claim in
Proposition~\ref{sharbou} is proved.
\end{proof}
\section{Boundary behaviour\index{boundary behaviour} of~$s$-harmonic functions\index{function!$s$-harmonic}}
\label{s:hwb}
In this section we analyze the asymptotic behaviour of $s$-harmonic
functions, with a ``spherical bump function'' as exterior Dirichlet datum.
The result needed for our purpose is the following:
\begin{lemma}
\label{hbump}
Let $s>0$. Let~$m\in\mathbb{N}_0$
and~$\sigma\in(0,1)$ be such that~$s=m+\sigma$.
Then, there exists
\begin{equation}\label{0oHKNSSH013oe2urjhfe}
{\mbox{$\psi\in H^s(\mathbb{R}^n)\cap C^s_0(\mathbb{R}^n)$
such that $
(-\Delta)^s \psi=0$ in~$B_1$,}}\end{equation} and, for
every $x\in\partial B_{1-\epsilon}$,
\begin{equation}\label{0oHKNSSH013oe2urjhfe:2}
\psi(x)=k\,\epsilon^s+o(\epsilon^s),\end{equation}
as $\epsilon\searrow 0$, for some $k>0$.
\begin{proof}
Let~$\overline{\psi}\in C^\infty(\mathbb{R},\,[0,1])$ be such that $\overline{\psi}=0$ in $\mathbb{R}\setminus(2,3)$ and $\overline{\psi}>0$ in $(2,3)$, and let $\psi_0(x):=(-1)^m\overline{\psi}(|x|)$.
We recall the Poisson kernel\index{kernel!Poisson}
$$
\Gamma_s(x,y):=(-1)^m\frac{\gamma_{n,\sigma}}{
|x-y|^n}\frac{(1-|x|^2)^s_+}{(|y|^2-1)^s},$$
for $x\in\mathbb{R}^n$, $y\in\mathbb{R}^n\setminus\overline{B_1}$, and
a suitable normalization constant~$\gamma_{n,\sigma}>0$ (see formulas~(1.10) and~(1.30)
in~\cite{ABX}).
We define
$$ \psi(x):=
\displaystyle\int_{{\mathbb{R}}^n\setminus B_1} \Gamma_s(x,y)\,\psi_0(y)\,dy+\psi_0(x).$$
Notice that~$\psi_0=0$ in~$B_{3/2}$ and therefore we can
exploit the results
in~\cite{ABX} and obtain that~\eqref{0oHKNSSH013oe2urjhfe}
is satisfied
(notice also that~$\psi=\psi_0$ outside~$B_1$, hence~$\psi$ is compactly supported).
Furthermore, to prove~\eqref{0oHKNSSH013oe2urjhfe:2}
we borrow some ideas from Lemma 2.2 in~\cite{MR3626547}
and we see that, for any $x\in \partial B_{1-\epsilon}$,
\begin{equation*}
\begin{split}
\psi(x)
&=c(-1)^m\int_{\mathbb{R}^n\setminus B_1} \frac{\psi_0(y)(1-|x|^2)^s}{(|y|^2-1)^s|x-y|^n} dy+\psi_0(x) \\
&=c(-1)^m\int_{\mathbb{R}^n\setminus B_1} \frac{\psi_0(y)(1-|x|^2)^s}{(|y|^2-1)^s|x-y|^n} dy \\
&=c\,(1-|x|^2)^s\int_2^3\left[\int_{\mathbb{S}^{n-1}} \frac{\rho^{n-1}\overline{\psi}(\rho)}{(\rho^2-1)^s|x-\rho\omega|^n} d\omega\right] d\rho
\\&=c\,(2\epsilon-\epsilon^2)^s\int_2^3\left[\int_{\mathbb{S}^{n-1}} \frac{\rho^{n-1}\overline{\psi}(\rho)}{(\rho^2-1)^s|(1-\epsilon)e_1-\rho\omega|^n} d\omega\right] d\rho \\
&=2^sc\,\epsilon^s\int_2^3\left[\int_{\mathbb{S}^{n-1}} \frac{\rho^{n-1}\overline{\psi}(\rho)}{(\rho^2-1)^s|e_1-\rho\omega|^n} d\omega\right] d\rho+o(\epsilon^s) \\
&=c\epsilon^s+o(\epsilon^s),
\end{split}
\end{equation*}
where~$c>0$ is a constant possibly varying from line to line, and this establishes~\eqref{0oHKNSSH013oe2urjhfe:2}.
\end{proof}
\end{lemma}
\begin{remark}\label{RUCAPSJD} {\rm
As in Proposition~\ref{sharbou}, one can extend~\eqref{0oHKNSSH013oe2urjhfe:2}
to higher derivatives (in the distributional sense), obtaining, for any~$e\in\partial B_1$
and~$\beta\in\mathbb{N}^n$,
$$ \lim_{\epsilon\searrow0} \epsilon^{|\beta|-s}\partial^\beta\psi(e+\epsilon X)=k_\beta\,
e_1^{\beta_1}\dots e_n^{\beta_n}(-e\cdot X)_+^{s-|\beta|}
,$$
for some~$k_\beta\ne0$.}\end{remark}
Using Lemma \ref{hbump}, in the spirit of \cite{MR3626547}, we
can construct a sequence of $s$-harmonic functions
approaching~$(x\cdot e)^s_+$ for a fixed unit vector $e$,
by using a blow-up argument. Namely, we prove the following:
\begin{corollary}
\label{lapiog}
Let $e\in\partial B_1$. There exists a sequence $v_{e,j}\in H^s(\mathbb{R}^n)\cap C^s(\mathbb{R}^n)$ such that $(-\Delta)^s v_{e,j}=0$ in $B_1(e)$, $v_{e,j}=0$ in $\mathbb{R}^n\setminus B_{4j}(e)$, and \[v_{e,j}\to\kappa(x\cdot e)^s_+\quad\mbox{in}\quad L^1(B_1(e)),\] as $j\to+\infty$, for some $\kappa>0$.
\begin{proof}
Let $\psi$ be as in Lemma \ref{hbump} and
define \[v_{e,j}(x):=j^s\psi\left(\frac{x}{j}-e\right).\]
The $s$-harmonicity and the property of being compactly supported follow
from the corresponding properties of $\psi$. We now prove the convergence.
To this aim, given $x\in B_1(e)$, we write $p_j:=\frac{x}{j}-e$ and $\epsilon_j:=1-|p_j|$. Notice that, since $x\in B_1(e)$, we have $|x-e|^2<1$, which implies that $|x|^2<2x\cdot e$, and in particular $x\cdot e>0$.
As a consequence, \[|p_j|^2=\left|\frac{x}{j}-e\right|^2=
\frac{|x|^2}{j^2}+1-2\frac{x}{j}\cdot e=1-\frac{2}{j}(x\cdot e)_+
+o\left(\frac{1}{j}\right),\]
and so \[\epsilon_j=\frac{1+o(1)}{j}\,(x\cdot e)_+.\]
Therefore, using \eqref{0oHKNSSH013oe2urjhfe:2},
\begin{equation*}
\begin{split}
v_{e,j}(x)&=j^s\psi(p_j) \\ &=j^s\kappa(\epsilon_j^s+o(\epsilon^s_j)) \\
&=j^s\left(\frac{\kappa}{j^s}(x\cdot e)_+^s+o\left(\frac{1}{j^s}\right)\right) \\
&=\kappa(x\cdot e)^s_+ +o(1).
\end{split}
\end{equation*}
Integrating over $B_1(e)$, we obtain the desired $L^1$-convergence.
\end{proof}
\end{corollary}
Now, we show that, as in the case $s\in (0,1)$ proved in
Theorem~3.1 of \cite{MR3626547}, we can find an $s$-harmonic function
with an arbitrarily large number of derivatives prescribed at some point.
\begin{proposition}
\label{maxhlapspan}
For any $\beta\in\mathbb{N}^n$, there exist~$p\in\mathbb{R}^n$, $R>r>0$, and~$v\in H^s(\mathbb{R}^n)\cap C^s(\mathbb{R}^n)$ such that
\begin{equation}
\label{csi}
\begin{cases}
(-\Delta)^s v=0&\quad\text{in}\quad B_r(p), \\
v=0&\quad\text{in}\quad\mathbb{R}^n\setminus B_R(p),
\end{cases}
\end{equation}
\begin{equation*}
D^\alpha v(p)=0\quad{\mbox{ for any }}\; \alpha\in\mathbb{N}^n\quad{\mbox{ with }}\;|\alpha|\leq|\beta|-1,\end{equation*}
\begin{equation*}
D^\alpha v(p)=0\quad{\mbox{ for any }}\; \alpha\in\mathbb{N}^n\quad{\mbox{ with }}\;|\alpha|=|\beta|\quad{\mbox{ and }}\;\alpha\neq\beta\end{equation*}
and
\begin{equation*}
D^\beta v(p)=1.\end{equation*}
\begin{proof}
Let $\mathcal{Z}$ be the set of all pairs~$(v,x)$ in which~$v\in H^s(\mathbb{R}^n)\cap C^s(\mathbb{R}^n)$ satisfies \eqref{csi} for some $R>r>0$ and $p\in\mathbb{R}^n$, and~$x\in B_r(p)$.
To each pair $(v,x)\in\mathcal{Z}$ we associate the vector
$\left(D^\alpha v(x)\right)_{|\alpha|\leq|\beta|}\in\mathbb{R}^{K'}$, for some $K'=K'_{|\beta|}$, and we
consider~$\mathcal{V}$ to be the vector space spanned by these vectors, namely
we set
$$\mathcal{V}:=\Big\{\left(D^\alpha v(x)\right)_{|\alpha|\leq|\beta|},
\quad{\mbox{ with }}\; (v,x)\in\mathcal{Z}
\Big\}.$$
We claim that
\begin{equation}\label{CLSPAZ}
\mathcal{V}=\mathbb{R}^{K'}.\end{equation}
To check this, we suppose by contradiction that~$\mathcal{V}$ lies in a
proper subspace of~$\mathbb{R}^{K'}$. Then, $\mathcal{V}$ must lie in a
hyperplane, hence there exists
\begin{equation}
\label{cnonull0}
c=(c_\alpha)_{|\alpha|\leq|\beta|}\in\mathbb{R}^{K'}\setminus\left\{0\right\}
\end{equation}
which is orthogonal to any vector $\left(D^\alpha v(x)\right)_{|\alpha|\leq|\beta|}$
with~$(v,x)\in\mathcal{Z}$, that is
\begin{equation}
\label{prp18}
\sum_{|\alpha|\leq|\beta|} c_\alpha D^\alpha v(x)=0.
\end{equation}
We notice that the pair $(v_{e,j},x)$, with $v_{e,j}$ as in Corollary \ref{lapiog},
$e\in\partial B_1$
and $x\in B_1(e)$, belongs to~$\mathcal{Z}$. Consequently,
for fixed~$\xi\in\mathbb{R}^n\setminus B_{1/2}$,
setting~$e:=\frac{\xi}{|\xi|}$, we have that~\eqref{prp18} holds true when~$v:=v_{e,j}$ and $x\in B_1(e)$, namely
$$
\sum_{|\alpha|\leq|\beta|} c_\alpha D^\alpha v_{e,j}(x)=0.
$$
Let now~$\varphi\in C_0^\infty(B_1(e))$.
Integrating by parts,
by Corollary \ref{lapiog} and the Dominated Convergence Theorem,
we have that
\begin{eqnarray*}
&&0=\lim_{j\to+\infty}\int_{\mathbb{R}^n}\sum_{|\alpha|\leq|\beta|}
c_\alpha D^\alpha v_{e,j}(x)\varphi(x)\,dx
=\lim_{j\to+\infty}\int_{\mathbb{R}^n}\sum_{|\alpha|\leq|\beta|}(-1)^{|\alpha|}
c_\alpha v_{e,j}(x)D^\alpha\varphi(x)\,dx\\
&&\qquad=\kappa\int_{\mathbb{R}^n}\sum_{|\alpha|\leq|\beta|}(-1)^{|\alpha|}
c_\alpha(x\cdot e)^s_+D^\alpha\varphi(x)\,dx
=\kappa\int_{\mathbb{R}^n}\sum_{|\alpha|\leq|\beta|}c_\alpha
D^\alpha(x\cdot e)^s_+\varphi(x)\,dx.\end{eqnarray*}
This gives that, for every $x\in B_1(e)$,
\[\sum_{|\alpha|\leq|\beta|}c_\alpha D^\alpha(x\cdot e)^s_+=0.\]
Moreover, for every $x\in B_1(e)$,
\[D^\alpha(x\cdot e)^s_+=s(s-1)\ldots(s-|\alpha|+1)
(x\cdot e)^{s-|\alpha|}_+e_1^{\alpha_1}\ldots e_n^{\alpha_n}.\]
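We recall that this differentiation formula can be checked by induction over~$|\alpha|$: for instance, when~$|\alpha|=1$, say~$\alpha_k=1$ and~$\alpha_j=0$ for~$j\ne k$, one has, in the open region~$\{x\cdot e>0\}$ (which contains~$B_1(e)$),
\begin{equation*}
\partial_{x_k}(x\cdot e)^s_+=s\,(x\cdot e)^{s-1}_+\,\partial_{x_k}(x\cdot e)=s\,(x\cdot e)^{s-1}_+\,e_k,
\end{equation*}
and the general case follows by iterating this computation.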
In particular, for $x=\frac{e}{|\xi|}\in B_1(e)$,
\[D^\alpha(x\cdot e)^s_+\big|_{x=e/|\xi|}=s(s-1)
\ldots(s-|\alpha|+1)|\xi|^{-s}\xi_1^{\alpha_1}\ldots \xi_n^{\alpha_n}.\]
Hence, multiplying by~$|\xi|^s$ and using the standard multi-index notation~$\xi^\alpha:=\xi_1^{\alpha_1}\ldots\xi_n^{\alpha_n}$, we obtain
\begin{equation}
\label{sampspal}
\sum_{|\alpha|\leq|\beta|}c_\alpha s(s-1)\ldots(s-|\alpha|+1)
\xi^\alpha=0,
\end{equation}
for any $\xi\in\mathbb{R}^n\setminus B_{1/2}$.
The identity \eqref{sampspal} describes
a polynomial in $\xi$ which vanishes for any~$\xi$ in an open subset of $\mathbb{R}^n$. As a result, the Identity Principle
for polynomials leads to
$$ c_\alpha s(s-1)\ldots(s-|\alpha|+1)=0,$$
for all~$|\alpha|\leq|\beta|$.
Consequently, since $s\in\mathbb{R}\setminus\mathbb{N}$,
the product $s(s-1)\ldots(s-|\alpha|+1)$ never vanishes,
and so the coefficients $c_\alpha$ are forced to be null for any $|\alpha|\leq|\beta|$.
This is in contradiction with~\eqref{cnonull0}, and therefore the proof
of~\eqref{CLSPAZ} is complete.
{F}rom this, the desired claim in
Proposition~\ref{maxhlapspan} plainly follows.
\end{proof}
\end{proposition}
\chapter{Proof of the main result}\label{CH5}
This chapter is devoted to the proof of the main result in Theorem~\ref{theone}.
This will be accomplished by an auxiliary result of purely nonlocal type
which will allow us to prescribe an arbitrarily large number of derivatives
at a point for the solution of a fractional equation.
\section{A result which implies Theorem \ref{theone}}\label{s:fourthE}
We will use the notation
\begin{equation}\label{NEOAAJKin1a}
\Lambda_{-\infty}:=\Lambda_{(-\infty,\dots,-\infty)},\end{equation}
that is we exploit~\eqref{1.6BIS} with~$a_1:=\dots:=a_l:=-\infty$.
This section presents the following statement:
\begin{theorem}\label{theone2}
Suppose that
\begin{equation*}\begin{split}&
{\mbox{either there exists~$i\in\{1,\dots,M\}$ such that~${\mbox{\large{\wedn{b}}}}_i\ne0$
and~$s_i\not\in{\mathbb{N}}$,}}\\
&{\mbox{or there exists~$i\in\{1,\dots,l\}$ such that~${\mbox{\large{\wedn{c}}}}\,_i\ne0$ and $\alpha_i\not\in{\mathbb{N}}$.}}\end{split}
\end{equation*}
Let $\ell\in\mathbb{N}$, $f:\mathbb{R}^N\rightarrow\mathbb{R}$,
with $f\in C^{\ell}\big(\overline{B_1^N}\big)$. Given any $\epsilon>0$,
there exist
\begin{equation*}\begin{split}&
u=u_\epsilon\in C^\infty\left(B_1^N\right)\cap C\left(\mathbb{R}^N\right),\\
&a=(a_1,\dots,a_l)=(a_{1,\epsilon},\dots,a_{l,\epsilon})
\in(-\infty,0)^l,\\ {\mbox{and }}\quad&
R=R_\epsilon>1\end{split}\end{equation*} such that:
\begin{itemize}
\item for every~$h\in\{1,\dots,l\}$ and every~$(x,y,t_1,\dots,t_{h-1},t_{h+1},\dots,t_l)\in\mathbb{R}^{N-1}$,
\begin{equation}\label{SPAZIO}
{\mbox{the map ${\mathbb{R}}\ni t_h\mapsto u(x,y,t)$
belongs to~$C^{k_h,\alpha_h}_{-\infty}$,}}
\end{equation}
in the notation of formula~(1.4) of~\cite{CDV18},
\item it holds that
\begin{equation}\label{MAIN EQ:2}\left\{\begin{matrix}
\Lambda_{-\infty} u=0 &\mbox{ in }\;B_1^{N-l}\times(-1,+\infty)^l, \\
u(x,y,t)=0&\mbox{ if }\;|(x,y)|\ge R,
\end{matrix}\right.\end{equation}
\begin{equation}\label{ESTENSIONE}
\partial^{k_h}_{t_h} u(x,y,t)=0\qquad{\mbox{if }}t_h\in(-\infty,a_h),\qquad{\mbox{for all }}h\in\{1,\dots,l\},
\end{equation}
and
\begin{equation}\label{IAzofm:2}
\left\|u-f\right\|_{C^{\ell}(B_1^N)}<\epsilon.
\end{equation}\end{itemize}
\end{theorem}
The proof of Theorem~\ref{theone2} will basically occupy the
rest of this work, and this will lead us to the completion of the
proof of Theorem~\ref{theone}. Indeed, we have that:
\begin{lemma}\label{GRAT}
If the statement of Theorem~\ref{theone2} holds true,
then the statement in Theorem~\ref{theone} holds true.
\end{lemma}
\begin{proof} Assume that the claims
in Theorem~\ref{theone2} are satisfied. Then, by~\eqref{SPAZIO} and~\eqref{ESTENSIONE},
we are in the position of exploiting Lemma~A.1 in~\cite{CDV18}
and conclude that, in~$B_1^{N-l}\times(-1,+\infty)^l$,
$$ D^{\alpha_h}_{t_h ,a_h} u=D^{\alpha_h}_{t_h,-\infty} u,$$
for every~$h\in\{1,\dots,l\}$. This and~\eqref{MAIN EQ:2}
give that
\begin{equation} \label{33ujNAKS}
\Lambda_{a}u=\Lambda_{-\infty} u=0 \qquad\mbox{ in }\;B_1^{N-l}
\times(-1,+\infty)^l.\end{equation}
We also define
$$\underline{a}:=\min_{h\in\{1,\dots,l\}} a_h$$
and take~$\tau\in C^\infty_0 ([\underline{a}-2,3])$ with~$\tau=1$
in~$[\underline{a}-1,1]$. Let
\begin{equation}\label{UJNsdA} U(x,y,t):=u(x,y,t)\,\tau(t_1)\dots\tau(t_l).\end{equation}
Our goal is to prove that~$U$ satisfies the theses of Theorem~\ref{theone}.
To this end, we observe that~$u=U$ in~$B^N_1$, therefore~\eqref{IAzofm}
for~$U$
plainly follows from~\eqref{IAzofm:2}.
In addition, from~\eqref{defcap}, we see that~$
D^{\alpha_h}_{t_h,a_h}$ at a point~$t_h\in(-1,1)$
only depends on the values of the function
between~$a_h$ and~$1$. Since the cutoffs in~\eqref{UJNsdA} do not
alter these values, we see that~$D^{\alpha_h}_{t_h,a_h}U=D^{\alpha_h}_{t_h,a_h}u$
in~$B_1^N$, and accordingly~$\Lambda_a U=\Lambda_a u$ in~$B_1^N$.
This and~\eqref{33ujNAKS} say that
\begin{equation}\label{9OAJA}
\Lambda_a U=0\qquad{\mbox{in }}B_1^N.\end{equation}
Also, since~$u$ in Theorem~\ref{theone2} is compactly supported
in the variable~$(x,y)$, we see from~\eqref{UJNsdA} that~$U$
is compactly supported in the variables~$(x,y,t)$.
This and~\eqref{9OAJA} give that~\eqref{MAIN EQ} is satisfied by~$U$
(up to renaming~$R$).
\end{proof}
\section{A pivotal span result towards the proof of Theorem \ref{theone2}}\label{s:fourth0}
In what follows, we let~$\Lambda_{-\infty}$
be as in~\eqref{NEOAAJKin1a}, we recall the setting in~\eqref{1.0},
and we
use the following multi-index notation:
\begin{equation}\label{mulPM}
\begin{split}
& \iota=\left(i,I,\mathfrak{I}\right)=\left(i_1,\ldots,i_n,I_1,\ldots,I_M,\mathfrak{I}_1,
\ldots,\mathfrak{I}_l\right)\in\mathbb{N}^N\\
{\mbox{and }} &
\partial^\iota w=\partial^{i_1}_{x_1}\ldots\partial^{i_n}_{x_n}
\partial^{I_1}_{y_1}\ldots\partial^{I_M}_{y_M}\partial^{\mathfrak{I}_1}_{t_1}
\ldots\partial^{\mathfrak{I}_l}_{t_l}w.
\end{split}\end{equation}
Inspired by Lemma 5 of~\cite{DSV1},
we consider the span of the derivatives, up to a fixed order $K\in{\mathbb{N}}$,
of the functions in~$\ker\Lambda_{-\infty}$.
We want to prove that the derivatives of such functions span
a maximal vector space.
For this, we denote by $\partial^K w(0)$
the vector with entries given,
in some prescribed order,
by~$
\partial^\iota w(0)$ with $\left|\iota\right|\leq K$.
We notice that
\begin{equation}\label{8iokjKK}
{\mbox{$\partial^K w(0)\in\mathbb{R}^{K'}$ for some $K'\in{\mathbb{N}}$,}}
\end{equation}
with~$K'$ depending on~$K$.
Now, we adopt the notation in formula~(1.4) of~\cite{CDV18},
and
we denote by~$
\mathcal{A}$ \label{CALSASS}
the set of all functions~$w=w(x,y,t)$
such that for all~$h\in\{1,\ldots, l\}$ and all~$
(x,y,t_1,\ldots,t_{h-1},t_{h+1},\ldots,t_l)\in\mathbb{R}^{N-1}$,
the map~${\mathbb{R}}\ni t_h\mapsto w(x,y,t)$ belongs to~$
C^{\infty}((a_h,+\infty))\cap C^{k_h,\alpha_h}_{-\infty}$,
and~\eqref{ESTENSIONE} holds true for some~$a_h\in (-2,0)$.
We also set
\begin{equation*}
\begin{split}
\mathcal{H}:=\Big\{w\in C(\mathbb{R}^N)
\cap C_0(\mathbb{R}^{N-l})\cap C^\infty(\mathcal{N})\cap\mathcal{A},
\text{ for some neighborhood
$\mathcal{N} $
of the origin, } \\
\text{ such that }
\Lambda_{-\infty} w=0 \text{ in }\mathcal{N}\Big\}
\end{split}
\end{equation*}
and we let $\mathcal{V}_K$ be the vector space spanned by the vectors $\partial^K w(0)$, as $w$ ranges in $\mathcal{H}$.
By \eqref{8iokjKK}, we know that~$\mathcal{V}_K\subseteq\mathbb{R}^{K'}$.
In fact, we show that equality holds in this inclusion, as
stated in the following\footnote{Notice that results
analogous to Lemma~\ref{lemcin}
cannot hold for solutions of local operators: for instance,
pure second derivatives of harmonic functions have to satisfy
a linear equation, so they are forced to lie in a proper subspace.
In this sense, results such as Lemma~\ref{lemcin} here reveal a truly nonlocal
phenomenon.}
result:
\begin{lemma}
\label{lemcin}
It holds that $\mathcal{V}_K=\mathbb{R}^{K'}$.
\end{lemma}
The proof of Lemma~\ref{lemcin} is
by contradiction. Namely, if $\mathcal{V}_K$ does not exhaust the whole of $\mathbb{R}^{K'}$, then there exists
\begin{equation}
\label{tetaort}
\theta\in\partial B_1^{K'}
\end{equation}
such that
\begin{equation}
\label{aza}
\mathcal{V}_K\subseteq\left\{\zeta\in\mathbb{R}^{K'}
\text{ s.t. } \theta\cdot\zeta=0\right\}.
\end{equation}
In coordinates, recalling~\eqref{mulPM},
we write~$\theta$
as~$\theta_\iota=\theta_{i,I,\mathfrak{I}}$,
with~$i\in\mathbb{N}^{p_1+\dots+p_n}$,
$I\in\mathbb{N}^{m_1+\dots+m_M}$
and~$\mathfrak{I}\in\mathbb{N}^l$.
We consider
\begin{equation}\label{IBARRA}
\begin{split}&
{\mbox{a multi-index $\overline{I}\in\mathbb{N}^{m_1+\dots+m_M}$
such that it maximizes~$|I|$}}\\
&{\mbox{among all the multi-indexes~$(i,I,\mathfrak{I})$
for which~$\left|i\right|+\left|I\right|+|\mathfrak{I}|\leq K$}}\\
&{\mbox{and~$\theta_{i,I,\mathfrak{I}}\ne0$
for some~$(i,\mathfrak{I})$.}}
\end{split}\end{equation}
Let us make some comments on the setting\label{FFOAK}
in~\eqref{IBARRA}. We stress that, by~\eqref{tetaort},
the set~$\mathcal{S}$
of indexes~$I$ for which there exist indexes~$
(i,\mathfrak{I})$ such that~$|i|+|I|+|\mathfrak{I}|\le K$
and~$\theta_{i,I,\mathfrak{I}}\ne0$ is not empty.
Therefore, since~${\mathcal{S}}$ is a finite set,
we can take
$$ S:=\sup_{I\in {\mathcal{S}}} |I|=\max_{I\in {\mathcal{S}}}
|I|\in{\mathbb{N}}\cap [0,K].$$
Hence, we consider a multi-index $\overline{I}$ for
which~$|\overline I|=S$ to obtain the
setting in~\eqref{IBARRA}. By construction, we have that
\begin{itemize}
\item $|i|+|\overline I|+|\mathfrak{I}|\le K$,
\item if~$|I|>|\overline I|$, then $\theta_{i,I,\mathfrak{I}}=0$,
\item there exist multi-indexes~$i$ and~$\mathfrak{I}$
such that~$\theta_{i,\overline I,\mathfrak{I}}\ne0$.\end{itemize}
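The selection of~$\overline{I}$ in~\eqref{IBARRA} is a purely combinatorial step. The following small sketch illustrates it; the multi-indexes and coefficients below are hypothetical illustrations, not taken from the text:

```python
# Hypothetical sketch: select a multi-index I_bar maximizing |I|
# among all triples (i, I, J) with |i| + |I| + |J| <= K and theta != 0.
# Multi-indexes are encoded as tuples; |.| is the sum of the entries.

def order(multi_index):
    return sum(multi_index)

K = 4
# theta: hypothetical nonzero coefficients indexed by triples (i, I, J)
theta = {
    ((1, 0), (0, 1), (0,)): 2.0,
    ((0, 0), (2, 0), (1,)): -1.5,
    ((0, 1), (1, 1), (0,)): 0.7,
}

# admissible triples: theta nonzero and total order at most K
admissible = [
    (i, I, J) for (i, I, J), value in theta.items()
    if order(i) + order(I) + order(J) <= K and value != 0
]

# S = max |I| over the admissible triples, as in the definition of I_bar
S = max(order(I) for (_, I, _) in admissible)
I_bar = next(I for (_, I, _) in admissible if order(I) == S)
```

The same selection with the roles of~$I$ and~$\mathfrak{I}$ exchanged produces~$\overline{\mathfrak{I}}$ as in~\eqref{IBARRA2}.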
As a variation of the setting in~\eqref{IBARRA},
we can also consider
\begin{equation}\label{IBARRA2}
\begin{split}&
{\mbox{a multi-index $\overline{\mathfrak{I}}\in\mathbb{N}^{l}$
that maximizes~$|\mathfrak{I}|$}}\\
&{\mbox{among all the multi-indexes~$(i,I,\mathfrak{I})$
for which~$\left|i\right|+\left|I\right|+|\mathfrak{I}|\leq K$}}\\
&{\mbox{and~$\theta_{i,I,\mathfrak{I}}\ne0$
for some~$(i,I)$.}}
\end{split}\end{equation}
In the setting of~\eqref{IBARRA} and~\eqref{IBARRA2},
we claim that there exists an open set
of~$\mathbb{R}^{p_1+\ldots+p_n}\times\mathbb{R}^{m_1+\ldots+m_M}\times\mathbb{R}^{l}$
such that for every~$(\underline{\mbox{\large{\wedn{x}}}}\,,\underline{\mbox{\large{\wedn{y}}}}\,,\underline{\mbox{\large{\wedn{t}}}}\,)$ in this open set we have that
\begin{equation}
\label{ipop}\begin{split}
{\mbox{either }}\qquad&
0=\sum_{{|i|+|I|+|\mathfrak{I}|\le K}\atop{|I| = |\overline{I}|}}
C_{i,I,\mathfrak{I}}\;\theta_{i,I,\mathfrak{I}}\;
\underline{\mbox{\large{\wedn{x}}}}\,^i \underline{\mbox{\large{\wedn{y}}}}\,^{{I}}\underline{\mbox{\large{\wedn{t}}}}\,^{\mathfrak{I}},\qquad{\mbox{ with }}\qquad
C_{i,I,\mathfrak{I}}\ne0,\\
{\mbox{or }}\qquad&
0=\sum_{{|i|+|I|+|\mathfrak{I}|\le K}\atop{|\mathfrak{I}| = |\overline{\mathfrak{I}}|}}
C_{i,I,\mathfrak{I}}\;\theta_{i,I,\mathfrak{I}}\;
\underline{\mbox{\large{\wedn{x}}}}\,^i \underline{\mbox{\large{\wedn{y}}}}\,^{{I}}\underline{\mbox{\large{\wedn{t}}}}\,^{\mathfrak{I}},\qquad{\mbox{ with }}\qquad
C_{i,I,\mathfrak{I}}\ne0.\end{split}
\end{equation}
In our framework, the claim in~\eqref{ipop} will be pivotal
towards the completion of the proof of Lemma~\ref{lemcin}.
Indeed, let us suppose for the moment that~\eqref{ipop}
is established and let us complete the proof of Lemma~\ref{lemcin}
by arguing as follows.
Formula \eqref{ipop} says that $\theta\cdot\partial^K w(0)$
is a polynomial which vanishes for any triple $(\underline{\mbox{\large{\wedn{x}}}}\,,\underline{\mbox{\large{\wedn{y}}}}\,,\underline{\mbox{\large{\wedn{t}}}}\,)$
in an open subset of $\mathbb{R}^{p_1+\ldots+p_n}\times\mathbb{R}^{m_1+\ldots+m_M}\times\mathbb{R}^{l}$.
Hence, using the identity principle of polynomials, we have that each $C_{i,I,\mathfrak{I}}\;\theta_{i,I,\mathfrak{I}}$ is equal to zero
whenever~$|i|+|I|+|\mathfrak{I}|\le K$
and either~$|I|=|\overline I|$ (if the first identity in~\eqref{ipop}
holds true) or~$|\mathfrak{I}|=|\overline{\mathfrak{I}}|$
(if the second identity in~\eqref{ipop}
holds true). Then, since~$C_{i,I,\mathfrak{I}}\neq 0$,
we conclude that each $\theta_{i,I,\mathfrak{I}}$ is zero
as long as either~$|I|=|\overline{I}|$ (in the first case)
or~$|\mathfrak{I}|=|\overline{\mathfrak{I}}|$
(in the second case), but this contradicts either
the definition of $\overline{I}$
in~\eqref{IBARRA} (in the first case)
or the definition of~$\overline{\mathfrak{I}}$
in~\eqref{IBARRA2} (in the second case). This would therefore complete the proof
of Lemma~\ref{lemcin}.
In view of the discussion above,
it remains to prove~\eqref{ipop}.
To this end, we distinguish the following four
cases:
\begin{enumerate}
\item\label{itm:case1} there exist $i\in\{1,\dots,n\}$ and
$j\in\{1,\dots,M\}$ such that~${\mbox{\large{\wedn{a}}}}_i\ne0$ and~${\mbox{\large{\wedn{b}}}}_j\ne0$,
\item\label{itm:case2} there exist $i\in\{1,\dots,n\}$ and
$h\in\{1,\dots,l\}$ such that~${\mbox{\large{\wedn{a}}}}_i\ne0$ and~${\mbox{\large{\wedn{c}}}}\,_h\ne0$,
\item\label{itm:case3} we have that~${\mbox{\large{\wedn{a}}}}_1=\dots={\mbox{\large{\wedn{a}}}}_n=0$,
and there exists~$j\in\{1,\dots,M\}$ such that~${\mbox{\large{\wedn{b}}}}_j\ne0$,
\item\label{itm:case4} we have that~${\mbox{\large{\wedn{a}}}}_1=\dots={\mbox{\large{\wedn{a}}}}_n=0$,
and there exists~$h\in\{1,\dots,l\}$ such that~${\mbox{\large{\wedn{c}}}}\,_h\ne0$.
\end{enumerate}
Notice that cases~\ref{itm:case1} and~\ref{itm:case3}
deal with the case in which space-fractional diffusion is present
(and in case~\ref{itm:case1} one also has classical
derivatives, while in case~\ref{itm:case3}
the classical derivatives are absent).
Similarly, cases~\ref{itm:case2} and~\ref{itm:case4}
deal with the case in which time-fractional diffusion is present
(and in case~\ref{itm:case2} one also has classical
derivatives, while in case~\ref{itm:case4}
the classical derivatives are absent).
Of course, the case in which both space- and time-fractional diffusion occur is already covered by the
previous cases (namely, it is included in
both cases~\ref{itm:case1} and~\ref{itm:case2}
if classical derivatives are also present,
and in both cases~\ref{itm:case3}
and~\ref{itm:case4} if classical derivatives are absent).
\begin{proof}[Proof of \eqref{ipop}, case \ref{itm:case1}]
For any $j\in\left\{1,\ldots,M\right\}$ we denote by $\tilde{\phi}_{\star,j}$
the first eigenfunction for $(-\Delta)^{s_j}_{y_j}$
vanishing outside $B_1^{m_j}$ given in Corollary \ref{QUESTCP}.
We normalize it such that $ \|\tilde{\phi}_{\star,j}\|_{L^2(\mathbb{R}^{m_j})}=1$,
and we write $\lambda_{\star,j}\in(0,+\infty)$ to indicate the corresponding first eigenvalue
(which now depends on~$s_j$), namely we write
\begin{equation}
\label{lambdastarj}
\begin{cases}
(-\Delta)^{s_j}_{y_j}\tilde{\phi}_{\star,j}=\lambda_{\star,j}\tilde{\phi}_{\star,j}&\quad\text{in}\,B_1^{m_j} ,\\
\tilde{\phi}_{\star,j}=0&\quad\text{in}\,\mathbb{R}^{m_j}\setminus\overline{B_1^{m_j}}.
\end{cases}
\end{equation}
Up to reordering the variables and/or
taking the operators to the other side of the equation,
given the assumptions of case~\ref{itm:case1},
we can suppose that
\begin{equation}\label{AGZ}
{\mbox{${\mbox{\large{\wedn{a}}}}_1\ne0$}}\end{equation}
and
\begin{equation}\label{MAGGZ}
{\mbox{${\mbox{\large{\wedn{b}}}}_M>0$}}.\end{equation}
In view of~\eqref{AGZ}, we can define
\begin{equation}\label{MAfghjkGGZ} R:=\left(
\frac{ 1 }{|{\mbox{\large{\wedn{a}}}}_1|}\displaystyle\left(\sum_{j=1}^{M-1}{|{\mbox{\large{\wedn{b}}}}_j|\lambda_{\star,j}}
+\sum_{h=1}^l|{\mbox{\large{\wedn{c}}}}\,_h|\right)\right)^{1/|r_{1}|}.\end{equation}
Now, we fix two sets of free parameters
\begin{equation}\label{FREExi}
\underline{\mbox{\large{\wedn{x}}}}\,_1\in(R+1,R+2)^{p_1},\ldots,\underline{\mbox{\large{\wedn{x}}}}\,_n\in(R+1,R+2)^{p_n},\end{equation}
and
\begin{equation}\label{FREEmustar}
\underline{\mbox{\large{\wedn{t}}}}\,_{\star,1}\in\left(\frac12,1\right),\dots,\underline{\mbox{\large{\wedn{t}}}}\,_{\star,l}\in\left(\frac12,1\right).\end{equation}
We also set
\begin{equation}\label{1.6md}
{\mbox{$\lambda_j:=\lambda_{\star,j}$ for $j\in\left\{1,\ldots,M-1\right\}$, }}\end{equation}
where $\lambda_{\star,j}$ is defined as in \eqref{lambdastarj}, and
\begin{equation}
\label{alp}
\lambda_M\,:=\,\frac{1}{{\mbox{\large{\wedn{b}}}}_M}\left(
\sum_{j=1}^n {\left|{\mbox{\large{\wedn{a}}}}_j\right|\underline{\mbox{\large{\wedn{x}}}}\,_j^{r_j}}-\sum_{j=1}^{M-1}
{{\mbox{\large{\wedn{b}}}}_j\lambda_j}-\sum_{h=1}^l{\mbox{\large{\wedn{c}}}}\,_h\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}\right).\end{equation}
Notice that this definition is well-posed, thanks to~\eqref{MAGGZ}.
In addition, from~\eqref{FREExi}, we can write~$\underline{\mbox{\large{\wedn{x}}}}\,_{j}=(
\underline{\mbox{\large{\wedn{x}}}}\,_{j1},\dots,\underline{\mbox{\large{\wedn{x}}}}\,_{jp_j})$, and
we know that~$\underline{\mbox{\large{\wedn{x}}}}\,_{j\ell}>R+1$
for any~$j\in\{1,\dots,n\}$ and any~$\ell\in\{1,\dots,p_j\}$.
Therefore,
\begin{equation}\label{1.15bis} \underline{\mbox{\large{\wedn{x}}}}\,_j^{r_j}= \underline{\mbox{\large{\wedn{x}}}}\,_{j1}^{r_{j1}}\dots\underline{\mbox{\large{\wedn{x}}}}\,_{jp_j}^{r_{jp_j}}\ge0.\end{equation}
{F}rom this, \eqref{MAfghjkGGZ} and~\eqref{FREEmustar}, we deduce that
\begin{eqnarray*}&& \sum_{j=1}^n {\left|{\mbox{\large{\wedn{a}}}}_j\right|\underline{\mbox{\large{\wedn{x}}}}\,_j^{r_j}}
\ge \left|{\mbox{\large{\wedn{a}}}}_1\right|\underline{\mbox{\large{\wedn{x}}}}\,_1^{r_1}\ge
\left|{\mbox{\large{\wedn{a}}}}_1\right| (R+1)^{|r_1|}>
\left|{\mbox{\large{\wedn{a}}}}_1\right| R^{|r_1|}\\&&\qquad
=\sum_{j=1}^{M-1}
{|{\mbox{\large{\wedn{b}}}}_j|\lambda_j}+\sum_{h=1}^l |{\mbox{\large{\wedn{c}}}}\,_h|\geq\sum_{j=1}^{M-1}
{{\mbox{\large{\wedn{b}}}}_j\lambda_j}+\sum_{h=1}^l {\mbox{\large{\wedn{c}}}}\,_h\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h},\end{eqnarray*}
and consequently, by~\eqref{alp},
\begin{equation}
\label{alp-0}
\lambda_M>0.\end{equation}
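As a concrete sanity check of~\eqref{MAfghjkGGZ}, \eqref{alp} and~\eqref{alp-0}, one can run the computation with hypothetical coefficients; all numerical values below are illustrative assumptions (here with~$n=1$, $M=2$ and~$l=2$):

```python
# Hypothetical numerical check that lambda_M in (alp) is positive when
# R is chosen as in (MAfghjkGGZ) and x_1 lies in (R+1, R+2) as in (FREExi).
a1, bM = 2.0, 1.0            # a_1 != 0 and b_M > 0, as in (AGZ) and (MAGGZ)
b = [0.5]                    # b_j for j = 1, ..., M-1 (here M = 2)
lam_star = [3.0]             # first Dirichlet eigenvalues lambda_{*,j}
c = [0.25, -0.75]            # coefficients c_h (here l = 2)
t_star = [0.6, 0.8]          # parameters in (1/2, 1), as in (FREEmustar)
r1 = 3                       # |r_1|, total order of the derivative in x_1

# R as in (MAfghjkGGZ), with absolute values of the coefficients
R = ((sum(abs(bj) * lj for bj, lj in zip(b, lam_star))
     + sum(abs(ch) for ch in c)) / abs(a1)) ** (1.0 / r1)

x1 = R + 1.5                 # a point of the interval (R + 1, R + 2)

# lambda_M as in (alp); positivity reproduces (alp-0)
lam_M = (abs(a1) * x1 ** r1
         - sum(bj * lj for bj, lj in zip(b, lam_star))
         - sum(ch * th for ch, th in zip(c, t_star))) / bM
```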
We also set
\begin{equation}\label{OMEj}
\omega_j:=\begin{cases}1&\quad\text{if }\,j=1,\dots,M-1 ,\\
\displaystyle\frac{\lambda_{\star,M}^{1/2s_M}}{
\lambda_M^{1/2s_M}}&\quad\text{if }\,j=M.\end{cases}
\end{equation}
Notice that this definition is well-posed, thanks to~\eqref{alp-0}.
In addition,
by~\eqref{lambdastarj}, we have that,
for any $j\in\{1,\dots,M\}$, the functions
\begin{equation}
\label{autofun1}
\phi_j\left(y_j\right):=\tilde{\phi}_{\star,j}\left(\frac{y_j}{\omega_j}\right)
\end{equation}
are eigenfunctions of $(-\Delta)^{s_j}_{y_j}$ in $B_{\omega_j}^{m_j}$
with external homogeneous Dirichlet boundary condition\index{external!boundary condition},
and eigenvalues\index{Dirichlet!eigenvalues} $\lambda_j$:
namely, we can rewrite~\eqref{lambdastarj} as
\begin{equation}\label{REGSWYS-A}
\begin{cases}
(-\Delta)^{s_j}_{y_j} {\phi}_{j}=\lambda_{j} {\phi}_{j}&\quad\text{in}\,B_{\omega_j}^{m_j} ,\\
{\phi}_{j}=0&\quad\text{in}\,\mathbb{R}^{m_j}\setminus\overline{B_{\omega_j}^{m_j}}.
\end{cases}
\end{equation}
Now, we define
\begin{equation}
\label{chosofpsistar}
\psi_{\star,h}(t_h):=E_{\alpha_h,1}(t_h^{\alpha_h}),
\end{equation}
where~$E_{\alpha_h,1}$ denotes the Mittag-Leffler function
with parameters $\alpha:=\alpha_h$ and $\beta:=1$ as defined
in \eqref{Mittag}.
Moreover,
we consider~$a_h\in(-2,0)$, for every~$h=1,\dots,l$,
to be chosen appropriately in what follows
(the precise choice will be performed in~\eqref{pata7UJ:AKK}),
and, recalling~\eqref{FREEmustar},
we let
\begin{equation}\label{TGAdef}
\underline{\mbox{\large{\wedn{t}}}}\,_h:=\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}^{1/{\alpha_h}},\end{equation} and we define
\begin{equation}
\label{autofun2}
\psi_h(t_h):=\psi_{\star,h}\big(\underline{\mbox{\large{\wedn{t}}}}\,_h (t_h-a_h)\big)=
E_{\alpha_h,1}\big(\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h} (t_h-a_h)^{\alpha_h}\big).
\end{equation}
We point out that, thanks to Lemma~\ref{MittagLEMMA}, the function in~\eqref{autofun2} solves
\begin{equation}
\label{jhjadwlgh}
\begin{cases}
D^{\alpha_h}_{t_h,a_h}\psi_h(t_h)=\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}\psi_h(t_h)&\quad\text{in }\,(a_h,+\infty), \\
\psi_h(a_h)=1, \\
\partial^m_{t_h}\psi_h(a_h)=0&\quad\text{for every }\,m\in\{1,\dots,[\alpha_h] \}.
\end{cases}
\end{equation}
Moreover, for any $h\in\{1,\ldots, l\}$, we define
\begin{equation}
\label{starest}
\psi^{\star}_h(t_h):=\begin{cases}
\psi_h(t_h)\qquad\text{ if }\,t_h\in[a_h,+\infty) \\
1\qquad\qquad\text{ if }\,t_h\in(-\infty,a_h).\end{cases}
\end{equation}
Thanks to \eqref{jhjadwlgh} and Lemma A.3 in \cite{CDV18} applied here with $b:=a_h$, $a:=-\infty$, $u:=\psi_h$, $u_\star:=\psi^{\star}_h$, we have that $\psi^{\star}_h\in C^{k_h,\alpha_h}_{-\infty}$, and
\begin{equation}\label{DOBACHA}
D^{\alpha_h}_{t_h,-\infty}\psi_h^\star(t_h)=D^{\alpha_h}_{t_h,a_h}\psi_h(t_h)
=\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}\psi_h(t_h)
=\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}\psi_h^\star(t_h)\,\text{ in every interval }\,I\Subset(a_h,+\infty).
\end{equation}
We observe that the setting in \eqref{starest} is compatible with the ones in \eqref{SPAZIO} and \eqref{ESTENSIONE}.
{F}rom~\eqref{Mittag} and~\eqref{autofun2}, we see that
\begin{equation*}
\psi_h(t_h)=
\sum_{j=0}^{+\infty} {\frac{
\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}^j\, (t_h-a_h)^{\alpha_h j}}{\Gamma\left(\alpha_h j+1\right)}}.
\end{equation*}
Consequently, for every~${\mathfrak{I}_h}\in\mathbb{N}$,
we have that
\begin{equation}\label{STAvca}
\partial^{\mathfrak{I}_h}_{t_h}\psi_h(t_h)=
\sum_{j=0}^{+\infty} {\frac{
\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}^j\, \alpha_h j(\alpha_h j-1)\dots(\alpha_h j-
\mathfrak{I}_h+1)
(t_h-a_h)^{\alpha_h j-\mathfrak{I}_h}}{\Gamma\left(\alpha_h j+1\right)}}
.\end{equation}
Now, we define, for any $i\in\left\{1,\ldots,n\right\}$,
\begin{equation*}
\overline{{\mbox{\large{\wedn{a}}}}}_i:=
\begin{cases}
\displaystyle\frac{{\mbox{\large{\wedn{a}}}}_i}{\left|{\mbox{\large{\wedn{a}}}}_i\right|}\quad \text{ if }{\mbox{\large{\wedn{a}}}}_i\neq 0 ,\\
1 \quad \text{ if }{\mbox{\large{\wedn{a}}}}_i=0.
\end{cases}
\end{equation*}
We notice that
\begin{equation}\label{NOZABA}
{\mbox{$\overline{{\mbox{\large{\wedn{a}}}}}_i\neq 0$ for all~$i\in\left\{1,\ldots,n\right\}$,}}\end{equation}
and
\begin{equation}\label{NOZABAnd}
{{\mbox{\large{\wedn{a}}}}}_i\overline{{\mbox{\large{\wedn{a}}}}}_i=|{{\mbox{\large{\wedn{a}}}}}_i|.\end{equation}
Now, for each~$i\in\{1,\dots,n\}$, we consider
the multi-index~$r_i=(r_{i1},\dots,r_{i p_i})\in\mathbb{N}^{p_i}$.
This multi-index acts on~$\mathbb{R}^{p_i}$,
whose variables are denoted by~$x_i=(x_{i1},\dots,x_{ip_i})\in\mathbb{R}^{p_i}$.
We let~$\overline{v}_{i1}$ be the solution of
the Cauchy problem
\begin{equation}\label{CAH1}
\begin{cases}
\partial^{r_{i1}}_{x_{i1}}\overline{v}_{i1}=-\overline{{\mbox{\large{\wedn{a}}}}}_i\overline{v}_{i1} \\
\partial^{\beta_1}_{x_{i1}}\overline{v}_{i1}\left(0\right)=1\quad
\text{ for every } \beta_1\leq r_{i1}-1.
\end{cases}
\end{equation}
We notice that
the solution of the Cauchy problem in~\eqref{CAH1}
exists at least in a neighborhood of the origin of the form
$[-\rho_{i1},\rho_{i1}]$ for a suitable $\rho_{i1}>0$.
Moreover, if~$p_i\ge2$,
for any $\ell\in\{2,\dots, p_i\}$, we consider
the solution of the following Cauchy problem:
\begin{equation}\label{CAH2}
\begin{cases}
\partial^{r_{i\ell}}_{x_{i\ell}}\overline{v}_{i\ell}=\overline{v}_{i\ell} \\
\partial^{\beta_\ell}_{x_{i\ell}}\overline{v}_{i\ell}\left(0\right)=1\quad
\text{ for every } \beta_\ell\leq r_{i\ell}-1.
\end{cases}
\end{equation}
As above, these solutions
are well-defined at least in a neighborhood of the origin of the form $[-\rho_{i\ell},\rho_{i\ell}]$,
for a suitable $\rho_{i\ell}>0$.
Then, we define
\[\overline{\rho}_i:=\min\{ \rho_{i1},\dots,\rho_{i p_i}\}=\min_{\ell\in\{1,\dots,p_i\}}\rho_{i\ell}.\]
In this way, for every~$x_i=(x_{i1},\dots,x_{ip_i})\in B^{p_i}_{\overline{\rho}_i}$,
we set
\begin{equation}\label{vba}
\overline{v}_i(x_i):=\overline{v}_{i1}(x_{i1})\ldots
\overline{v}_{i{p_i}}(x_{ip_i}).\end{equation}
By~\eqref{CAH1} and~\eqref{CAH2}, we have that
\begin{equation}
\label{cau}
\begin{cases}
\partial^{r_i}_{x_i}\overline{v}_i=-\overline{{\mbox{\large{\wedn{a}}}}}_i\overline{v}_i \\
\partial^{\beta}_{x_i}\overline{v}_i\left(0\right)=1\quad
\begin{matrix}
&\text{ for every $\beta=(\beta_1,\dots,\beta_{p_i})\in{\mathbb{N}}^{p_i}$}\\
&\text{ such that~$\beta_{\ell}\leq r_{i\ell}-1$ for each~$\ell\in\{1,\dots,p_i\}$.}\end{matrix}
\end{cases}
\end{equation}
Now, we define \[\rho:=\min\{ \overline{\rho}_1,\dots,\overline{\rho}_n\}
=\min_{i\in\{1,\dots,n\}}\overline{\rho}_i.\]
We take
\begin{equation*}
\overline{\tau}\in C_0^\infty\left(B^{p_1+\ldots+p_n}_{\rho/(R+2)}\right),
\end{equation*}
with $\overline{\tau}=1$ in $B^{p_1+\ldots+p_n}_{\rho/(2(R+2))}$, and,
for every~$x=(x_1,\dots,x_n)\in{\mathbb{R}}^{p_1}\times\dots\times{\mathbb{R}}^{p_n}$, we set
\begin{equation}
\label{otau1}\tau_1\left(x_1,\ldots,x_n\right):=\overline{\tau}\left(\underline{\mbox{\large{\wedn{x}}}}\,_1 \otimes x_1,\ldots,\underline{\mbox{\large{\wedn{x}}}}\,_n \otimes x_n\right).\end{equation}
We recall that the free parameters~$\underline{\mbox{\large{\wedn{x}}}}\,_1,\dots,\underline{\mbox{\large{\wedn{x}}}}\,_n$
have been introduced in~\eqref{FREExi}, and we have used here the notation
$$ \underline{\mbox{\large{\wedn{x}}}}\,_i\otimes x_i = (\underline{\mbox{\large{\wedn{x}}}}\,_{i1},\dots,\underline{\mbox{\large{\wedn{x}}}}\,_{ip_i})\otimes(x_{i1},\dots,x_{ip_i}):= (\underline{\mbox{\large{\wedn{x}}}}\,_{i1}x_{i1},\dots,\underline{\mbox{\large{\wedn{x}}}}\,_{ip_i}x_{ip_i})\in{\mathbb{R}}^{p_i},$$
for every~$i\in\{1,\dots,n\}$.
We also set, for any~$i\in\{1,\dots,n\}$,
\begin{equation}
\label{vuggei}
v_i\left(x_i\right):=\overline{v}_i\left(\underline{\mbox{\large{\wedn{x}}}}\,_i\otimes x_i\right).
\end{equation}
We point out that if~$x_i\in B^{p_i}_{\overline\rho_i/(R+2)}$ we have that
$$ |\underline{\mbox{\large{\wedn{x}}}}\,_i\otimes x_i|^2=\sum_{\ell=1}^{p_i}(\underline{\mbox{\large{\wedn{x}}}}\,_{i\ell}x_{i\ell})^2\le
(R+2)^2\sum_{\ell=1}^{p_i}x_{i\ell}^2<\overline\rho_i^2,$$
thanks to~\eqref{FREExi}, and therefore the setting in~\eqref{vuggei}
is well-defined for every~$x_i\in B^{p_i}_{\overline\rho_i/(R+2)}$.
Recalling~\eqref{cau} and~\eqref{vuggei}, we see that, for any~$i\in\{1,\dots,n\}$,
\begin{equation}\label{QYHA0ow1dk} \partial_{x_i}^{r_i}v_i(x_i)=
\underline{\mbox{\large{\wedn{x}}}}\,_i^{r_i}\partial_{x_i}^{r_i}\overline{v}_i\left(\underline{\mbox{\large{\wedn{x}}}}\,_i\otimes x_i\right)
=
-\overline{\mbox{\large{\wedn{a}}}}_i \underline{\mbox{\large{\wedn{x}}}}\,_i^{r_i}\overline{v}_i\left(\underline{\mbox{\large{\wedn{x}}}}\,_i\otimes x_i\right)
=
-\overline{\mbox{\large{\wedn{a}}}}_i \underline{\mbox{\large{\wedn{x}}}}\,_i^{r_i} {v}_i( x_i).\end{equation}
We take $e_1,\ldots,e_M$, with
\begin{equation}
\label{econ}
e_j\in\partial B_{\omega_j}^{m_j},
\end{equation}
and we introduce an additional set of free parameters
$Y_1,\ldots,Y_M$ with
\begin{equation}
\label{eq:FREEy}
Y_j\in\mathbb{R}^{m_j}\qquad{\mbox{
and }}\qquad e_j\cdot Y_j<0. \end{equation}
We let~$\epsilon>0$ be a small quantity, to be chosen conveniently,
possibly in dependence of the free parameters $e_j$, $Y_j$ and $\underline{\mbox{\large{\wedn{t}}}}\,_h$,
and we define
\begin{equation}
\label{svs}
\begin{split}
w\left(x,y,t\right):
=&\tau_1\left(x\right)v_1\left(x_1\right)\cdot
\ldots \cdot
v_n\left(x_n\right)\phi_1\left(y_1+e_1+\epsilon Y_1\right)\cdot
\ldots\cdot
\phi_M\left(y_M+e_M+\epsilon Y_M\right) \\
&\times\psi^\star_1(t_1)\cdot\ldots\cdot\psi^\star_l(t_l),
\end{split}
\end{equation}
where the setting in~\eqref{autofun1},
\eqref{starest}, \eqref{otau1} and~\eqref{vuggei} has been exploited.
We also notice that $w\in C\left(\mathbb{R}^N\right)\cap C_0\left(\mathbb{R}^{N-l}\right)\cap\mathcal{A}$. Moreover,
if
\begin{equation}\label{pata7UJ:AKK}
a=(a_1,\dots,a_l):=\left(-\frac{\epsilon}{\underline{\mbox{\large{\wedn{t}}}}\,_1},\dots,-\frac{\epsilon}{\underline{\mbox{\large{\wedn{t}}}}\,_l}
\right)\in(-\infty,0)^l\end{equation}
and~$(x,y)$ is sufficiently close to the origin
and~$t\in(a_1,+\infty)\times\dots\times(a_l,+\infty)$, we have that
\begin{eqnarray*}&&
\Lambda_{-\infty} w\left(x,y,t\right)\\&=&
\left( \sum_{i=1}^n {\mbox{\large{\wedn{a}}}}_i \partial^{r_i}_{x_i}
+\sum_{j=1}^{M} {\mbox{\large{\wedn{b}}}}_j (-\Delta)^{s_j}_{y_j}+
\sum_{h=1}^{l} {\mbox{\large{\wedn{c}}}}\,_h D^{\alpha_h}_{t_h,-\infty}\right) w\left(x,y,t\right)\\
&=&
\sum_{i=1}^n {\mbox{\large{\wedn{a}}}}_i
v_1\left(x_1\right)
\ldots
v_{i-1}\left(x_{i-1}\right)
\partial^{r_i}_{x_i}v_{i}\left(x_{i}\right)v_{i+1}\left(x_{i+1}\right)
\ldots v_n\left(x_n\right)\\
&&\qquad\times\phi_1\left(y_1+e_1+\epsilon Y_1\right)
\ldots\phi_M\left(y_M+e_M+\epsilon Y_M\right)\psi^\star_1\left(t_1\right)
\ldots\psi^\star_l\left(t_l\right)\\
&&+\sum_{j=1}^{M} {\mbox{\large{\wedn{b}}}}_j
v_1\left(x_1\right)
\ldots v_n\left(x_n\right)\phi_1\left(y_1+e_1+\epsilon Y_1\right)
\ldots\phi_{j-1}\left(y_{j-1}+e_{j-1}+\epsilon Y_{j-1}\right)\\&&\qquad\times
(-\Delta)^{s_j}_{y_j}\phi_j\left(y_j+e_j+\epsilon Y_j\right)
\phi_{j+1}\left(y_{j+1}+e_{j+1}+\epsilon Y_{j+1}\right)
\ldots\phi_M\left(y_M+e_M+\epsilon Y_M\right)\\
&&\qquad\times\psi^\star_1\left(t_1\right)
\ldots\psi^\star_l\left(t_l\right)\\
&&+\sum_{h=1}^{l} {\mbox{\large{\wedn{c}}}}\,_h
v_1\left(x_1\right)
\ldots v_n\left(x_n\right)\phi_1\left(y_1+e_1+\epsilon Y_1\right)\ldots\phi_M\left(y_M+e_M+\epsilon Y_M\right)\psi^\star_1\left(t_1\right)
\ldots\psi^\star_{h-1}\left(t_{h-1}\right)\\
&&\qquad\times D^{\alpha_h}_{t_h,-\infty}\psi^\star_h\left(t_h\right)
\psi^\star_{h+1}(t_{h+1})\ldots\psi^\star_l\left(t_l\right)\\
&=&
-\sum_{i=1}^n {\mbox{\large{\wedn{a}}}}_i \overline{\mbox{\large{\wedn{a}}}}_i\underline{\mbox{\large{\wedn{x}}}}\,_i^{r_i}
v_1\left(x_1\right)
\ldots v_n\left(x_n\right)\phi_1\left(y_1+e_1+\epsilon Y_1\right)
\ldots\phi_M\left(y_M+e_M+\epsilon Y_M\right)\psi^\star_1(t_1)\ldots\psi^\star_l(t_l)\\
&&+\sum_{j=1}^{M} {\mbox{\large{\wedn{b}}}}_j \lambda_j
v_1\left(x_1\right)
\ldots v_n\left(x_n\right)\phi_1\left(y_1+e_1+\epsilon Y_1\right)
\ldots\phi_M\left(y_M+e_M+\epsilon Y_M\right)\psi^\star_1(t_1)\ldots\psi^\star_l(t_l)
\\
&&+\sum_{h=1}^{l} {\mbox{\large{\wedn{c}}}}\,_h \underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}
v_1\left(x_1\right)
\ldots v_n\left(x_n\right)\phi_1\left(y_1+e_1+\epsilon Y_1\right)
\ldots\phi_M\left(y_M+e_M+\epsilon Y_M\right)\psi^\star_1(t_1)\ldots\psi^\star_l(t_l)
\\
&=&\left( -\sum_{i=1}^n {\mbox{\large{\wedn{a}}}}_i \overline{\mbox{\large{\wedn{a}}}}_i\underline{\mbox{\large{\wedn{x}}}}\,_i^{r_i}
+\sum_{j=1}^{M} {\mbox{\large{\wedn{b}}}}_j \lambda_j+\sum_{h=1}^l {\mbox{\large{\wedn{c}}}}\,_h\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}\right) w(x,y,t),
\end{eqnarray*}
thanks to \eqref{REGSWYS-A}, \eqref{DOBACHA} and~\eqref{QYHA0ow1dk}.
Consequently, making use of~\eqref{1.6md}, \eqref{alp} and~\eqref{NOZABAnd},
if~$(x,y)$ lies near the origin and~$t\in(a_1,+\infty)\times\dots\times(a_l,+\infty)$,
we have that
\begin{eqnarray*}&&
\Lambda_{-\infty} w\left(x,y,t\right)=
\left( -\sum_{i=1}^n |{\mbox{\large{\wedn{a}}}}_i |\underline{\mbox{\large{\wedn{x}}}}\,_i^{r_i}
+\sum_{j=1}^{M-1} {\mbox{\large{\wedn{b}}}}_j \lambda_{j}+{\mbox{\large{\wedn{b}}}}_M\lambda_M+\sum_{h=1}^l {\mbox{\large{\wedn{c}}}}\,_h\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}\right) w(x,y,t)
\\&&\qquad=
\left( -\sum_{i=1}^n |{\mbox{\large{\wedn{a}}}}_i |\underline{\mbox{\large{\wedn{x}}}}\,_i^{r_i}
+\sum_{j=1}^{M-1} {\mbox{\large{\wedn{b}}}}_j \lambda_{\star,j}+{\mbox{\large{\wedn{b}}}}_M\lambda_M+\sum_{h=1}^l {\mbox{\large{\wedn{c}}}}\,_h\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}\right) w(x,y,t)=0.
\end{eqnarray*}
This says that~$w\in\mathcal{H}$. Thus, in light of~\eqref{aza} we have that
\begin{equation}
\label{eq:ort}
0=\theta\cdot\partial^K w\left(0\right)=
\sum_{\left|\iota\right|\leq K}
{\theta_{\iota}\partial^\iota w\left(0\right)}=\sum_{|i|+|I|+|\mathfrak{I}|\le K}
\theta_{i,I,\mathfrak{I}}\,\partial_{x}^{i}\partial_{y}^{I}\partial_{t}^{\mathfrak{I}}w\left(0\right)
.\end{equation}
Now, we recall~\eqref{vba} and we claim that,
for any $j\in\{1,\dots,n\}$,
any~$\ell\in\{1,\dots, p_j\}$ and any~$
i_{j\ell}\in\mathbb{N}$, we have that
\begin{equation}
\label{notnull}
\partial^{i_{j\ell}}_{x_{j\ell}}\overline{v}_{j\ell}(0)\neq 0.
\end{equation}
We prove it by induction over~$i_{j\ell}$. Indeed, if
$i_{j\ell}\in\left\{0,\ldots,r_{j\ell}-1\right\}$, then
the initial condition in \eqref{CAH1} (if~$\ell=1$)
or~\eqref{CAH2} (if~$\ell\ge2$) gives that~$
\partial^{i_{j\ell}}_{x_{j\ell}}\overline{v}_{j\ell}\left(0\right)=1$, and
so~\eqref{notnull}
is true in this case.
To perform the inductive step,
let us now
suppose that the claim in~\eqref{notnull}
still holds for all $i_{j\ell}\in\left\{0,\ldots,i_0\right\}$
for some~$i_0$ such that $i_0\geq r_{j\ell}-1$.
Then, using the equation in~\eqref{CAH1} (if~$\ell=1$)
or in~\eqref{CAH2} (if~$\ell\ge2$), we have that
\begin{equation}\label{8JAMANaoaksd}
\partial^{i_0+1}_{x_{j\ell}}\overline{v}_{j\ell}=
\partial^{i_0+1-r_{j\ell}}_{x_{j\ell}}\partial^{r_{j\ell}}_{x_{j\ell}}
\overline{v}_{j\ell}=-\tilde{a}_{j\ell}\,\partial^{i_0+1-r_{j\ell}}_{x_{j\ell}}\overline{v}_{j\ell},
\end{equation}
with
$$\tilde{a}_{j\ell}:=\begin{cases}
\overline{{\mbox{\large{\wedn{a}}}}}_j & {\mbox{ if }}\ell=1,\\
-1 & {\mbox{ if }}\ell\ge2.
\end{cases}$$
Notice that~$\tilde{a}_{j\ell}\ne0$, in view of~\eqref{NOZABA},
and~$\partial^{i_0+1-r_{j\ell}}_{x_{j\ell}}\overline{v}_{j\ell}\left(0\right)\neq 0$,
by the inductive assumption. These considerations and~\eqref{8JAMANaoaksd}
give that~$\partial^{i_0+1}_{x_{j\ell}}\overline{v}_{j\ell}\left(0\right)\neq 0$,
and this proves~\eqref{notnull}.
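The induction above reduces to the observation that the recursion $\partial^{m}v(0)=c\,\partial^{m-r}v(0)$, with a nonzero constant~$c$, propagates the nonvanishing initial data of~\eqref{CAH1} and~\eqref{CAH2}. A minimal numerical sketch, with a hypothetical order~$r$ and coefficient~$c$ mimicking~$-\overline{{\mbox{\large{\wedn{a}}}}}_i$:

```python
# Sketch of the induction behind (notnull): for the ODE
#     d^r v / dx^r = c * v,   v^(beta)(0) = 1 for beta <= r - 1,
# differentiating the equation m - r times gives the recursion
#     v^(m)(0) = c * v^(m-r)(0),
# so every derivative at the origin is a nonzero power of c when c != 0.

def derivatives_at_zero(r, c, up_to):
    """Values v^(m)(0) for m = 0, ..., up_to, via the ODE recursion."""
    d = [1.0] * r                      # initial conditions as in (CAH1)/(CAH2)
    for m in range(r, up_to + 1):
        d.append(c * d[m - r])         # recursion from the equation
    return d

# hypothetical choice: r = 3 and c = -1 (the sign of -a_bar in (CAH1))
vals = derivatives_at_zero(r=3, c=-1.0, up_to=12)
```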
Now, using \eqref{vba} and~\eqref{notnull} we have that,
for any $j\in\{1,\dots,n\}$ and any~$
i_{j}\in\mathbb{N}^{p_j}$,
\begin{equation*}
\partial^{i_j}_{x_{j}}\overline{v}_{j}(0)\neq 0.
\end{equation*}
This, \eqref{FREExi} and the computation in~\eqref{QYHA0ow1dk} give that,
for any $j\in\{1,\dots,n\}$ and any~$
i_{j}\in\mathbb{N}^{p_j}$,
\begin{equation}
\label{eq:adoajfiap}
\partial^{i_j}_{x_{j}} {v}_{j}(0)=\underline{\mbox{\large{\wedn{x}}}}\,_j^{i_j}\partial^{i_j}_{x_{j}}\overline{v}_{j}(0)\neq 0.
\end{equation}
We also notice that, in light of~\eqref{starest}, \eqref{svs} and~\eqref{eq:ort},
\begin{equation}\label{7UJHASndn2weirit}\begin{split}&
0=\sum_{|i|+|I|+|\mathfrak{I}|\le K}
\theta_{i,I,\mathfrak{I}}\,
\partial^{i_1}_{x_{1}} {v}_{1}(0)\ldots
\partial^{i_n}_{x_{n}} {v}_{n}(0)
\,\partial_{y_1}^{I_1}\phi_1\left(e_1+\epsilon Y_1\right)
\ldots\partial_{y_M}^{I_M}\phi_M\left(e_M+\epsilon Y_M\right)\\
&\qquad\qquad\qquad\times\partial^{\mathfrak{I}_1}_{t_1}
\psi_1(0)\ldots\partial^{\mathfrak{I}_l}_{t_l}\psi_l(0).\end{split}\end{equation}
Now, by~\eqref{autofun1}
and Proposition \ref{sharbou}
(applied to $s:=s_j$, $\beta:=I_j$, $e:=\frac{e_j}{\omega_j}\in\partial B_1^{m_j}$, due to~\eqref{econ}, and $X:=\frac{Y_j}{\omega_j}$),
we see that, for any $j=1,\ldots, M$,
\begin{equation}
\label{alppoo}
\begin{split}\omega_j^{|I_j|}
\lim_{\epsilon\searrow 0}\epsilon^{|I_j|-s_j}\partial_{y_j}^{I_j}
\phi_j\left( e_j+\epsilon Y_j \right)
\; =\;&
\lim_{\epsilon\searrow 0}\epsilon^{|I_j|-s_j}\partial_{y_j}^{I_j}
\tilde\phi_{\star,j}\left(\frac{e_j+\epsilon Y_j}{\omega_j}\right)
\\=\;& \kappa_j \frac{e_j^{I_j}}{\omega_j^{|I_j|}}\left(-\frac{e_j}{\omega_j}\cdot\frac{Y_j}{\omega_j}\right)_+^{s_j-|I_j|}
,\end{split}\end{equation}
with~$\kappa_j\ne0$, in the sense of distributions (in the coordinates~$Y_j$).
Moreover, using~\eqref{STAvca} and \eqref{pata7UJ:AKK},
it follows that
\begin{eqnarray*}
\partial^{\mathfrak{I}_h}_{t_h}\psi_h(0)&=&
\sum_{j=0}^{+\infty} {\frac{
\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}^j\, \alpha_h j(\alpha_h j-1)\dots(\alpha_h j-
\mathfrak{I}_h+1)
(0-a_h)^{\alpha_h j-\mathfrak{I}_h}}{\Gamma\left(\alpha_h j+1\right)}}\\
&=& \sum_{j=0}^{+\infty} {\frac{
\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}^j\, \alpha_h j(\alpha_h j-1)\dots(\alpha_h j-
\mathfrak{I}_h+1)
\,{\epsilon}^{\alpha_h j-\mathfrak{I}_h}}{\Gamma\left(\alpha_h j+1\right)\;
{\underline{\mbox{\large{\wedn{t}}}}\,_h}^{\alpha_h j-\mathfrak{I}_h}}}\\
&=& \sum_{j=1}^{+\infty} {\frac{
\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}^j\, \alpha_h j(\alpha_h j-1)\dots(\alpha_h j-
\mathfrak{I}_h+1)
\,{\epsilon}^{\alpha_h j-\mathfrak{I}_h}}{\Gamma\left(\alpha_h j+1\right)\;
{\underline{\mbox{\large{\wedn{t}}}}\,_h}^{\alpha_h j-\mathfrak{I}_h}}}.
\end{eqnarray*}
Accordingly, recalling~\eqref{TGAdef}, we find that
\begin{equation}
\label{limmittl}\begin{split}&
\lim_{\epsilon\searrow 0}\epsilon^{\mathfrak{I}_h-\alpha_h}
\partial^{\mathfrak{I}_h}_{t_h}\psi_h(0)
=\lim_{\epsilon\searrow 0}
\sum_{j=1}^{+\infty} {\frac{
\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}^j\, \alpha_h j(\alpha_h j-1)\dots(\alpha_h j-
\mathfrak{I}_h+1)
\,{\epsilon}^{\alpha_h (j-1)}}{\Gamma\left(\alpha_h j+1\right)\;
{\underline{\mbox{\large{\wedn{t}}}}\,_h}^{\alpha_h j-\mathfrak{I}_h}}}
\\&\qquad=
{\frac{ \underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}\,\alpha_h (\alpha_h -1)\dots(\alpha_h -\mathfrak{I}_h+1)
}{\Gamma\left(\alpha_h +1\right)\;
\underline{\mbox{\large{\wedn{t}}}}\,_h^{\alpha_h -\mathfrak{I}_h}}} =
{\frac{ \underline{\mbox{\large{\wedn{t}}}}\,_h^{\mathfrak{I}_h}\,\alpha_h (\alpha_h -1)\dots(\alpha_h -\mathfrak{I}_h+1)
}{\Gamma\left(\alpha_h +1\right)}}.
\end{split}\end{equation}
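The limit in~\eqref{limmittl} can also be checked numerically by truncating the series in~\eqref{STAvca}; the parameters below are hypothetical choices consistent with~\eqref{FREEmustar} and~\eqref{TGAdef}:

```python
import math

# Numerical sketch of the limit in (limmittl): with alpha in (0, 1),
# t_star in (1/2, 1) and derivative order frak_I, the scaled derivative
# eps^(frak_I - alpha) * psi^(frak_I)(0) approaches
# t_underline^frak_I * alpha (alpha-1) ... (alpha - frak_I + 1) / Gamma(alpha + 1).

alpha, t_star, frak_I = 0.5, 0.75, 2
t_underline = t_star ** (1.0 / alpha)   # as in (TGAdef)

def falling(a, m):
    """Product a (a-1) ... (a-m+1); the empty product is 1."""
    out = 1.0
    for k in range(m):
        out *= a - k
    return out

def scaled_derivative(eps, terms=60):
    """eps^(frak_I - alpha) times the truncated series (STAvca) at t = 0."""
    total = 0.0
    for j in range(1, terms):
        total += (t_star ** j * falling(alpha * j, frak_I)
                  * eps ** (alpha * (j - 1))
                  / (math.gamma(alpha * j + 1)
                     * t_underline ** (alpha * j - frak_I)))
    return total

limit = t_underline ** frak_I * falling(alpha, frak_I) / math.gamma(alpha + 1)
approx = scaled_derivative(1e-8)
```

In this sketch only the term with~$j=1$ survives as $\epsilon\searrow0$, in agreement with the computation above.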
Also, recalling~\eqref{IBARRA}, we can write~\eqref{7UJHASndn2weirit} as
\begin{equation}
\label{eq:ort:X}\begin{split}&
0=\sum_{{|i|+|I|+|\mathfrak{I}|\le K}\atop{|I|\le|\overline{I}|}}
\theta_{i,I,\mathfrak{I}}\,
\partial^{i_1}_{x_{1}} {v}_{1}(0)\ldots
\partial^{i_n}_{x_{n}} {v}_{n}(0)
\,\partial_{y_1}^{I_1}\phi_1\left(e_1+\epsilon Y_1\right)
\ldots\partial_{y_M}^{I_M}\phi_M\left(e_M+\epsilon Y_M\right)\\&\qquad\qquad\times
\partial^{\mathfrak{I}_1}_{t_1}\psi_{1}(0)\ldots
\partial^{\mathfrak{I}_l}_{t_l}\psi_{l}(0).\end{split}
\end{equation}
Moreover, we define
\begin{equation*}
\Xi:=\left|\overline{I}\right|-\sum_{j=1}^M {s_j}+|\mathfrak{I}|-\sum_{h=1}^l {\alpha_h}.
\end{equation*}
Then, we
multiply~\eqref{eq:ort:X} by $\epsilon^{\Xi}\in(0,+\infty)$, and we
send~$\epsilon$ to zero. In this way, we obtain from~\eqref{alppoo},
\eqref{limmittl}
and~\eqref{eq:ort:X} that
\begin{eqnarray*}
0&=&\lim_{\epsilon\searrow0}
\epsilon^{\Xi}
\sum_{{|i|+|I|+|\mathfrak{I}|\le K}\atop{|I|\le|\overline{I}|}}
\theta_{i,I,\mathfrak{I}}\,
\partial^{i_1}_{x_{1}} {v}_{1}(0)\ldots
\partial^{i_n}_{x_{n}} {v}_{n}(0)
\,\partial_{y_1}^{I_1}\phi_1\left(e_1+\epsilon Y_1\right)
\ldots\partial_{y_M}^{I_M}\phi_M\left(e_M+\epsilon Y_M\right)
\\ &&\qquad\times\partial^{\mathfrak{I}_1}_{t_1}\psi_1(0)\ldots\partial^{\mathfrak{I}_l}_{t_l}\psi_l(0)
\\ &=&\lim_{\epsilon\searrow0}
\sum_{{|i|+|I|+|\mathfrak{I}|\le K}\atop{|I|\le|\overline{I}|}}
\epsilon^{|\overline{I}|-|I|}
\theta_{i,I,\mathfrak{I}}\,
\partial^{i_1}_{x_{1}} {v}_{1}(0)\ldots
\partial^{i_n}_{x_{n}} {v}_{n}(0)\\
&&\qquad\times\epsilon^{|I_1|-s_1}\partial_{y_1}^{I_1}\phi_1\left(e_1+\epsilon Y_1\right)
\ldots\epsilon^{|I_M|-s_M}\partial_{y_M}^{I_M}\phi_M\left(e_M+\epsilon Y_M\right)
\\ &&\qquad\times\epsilon^{\mathfrak{I}_1-\alpha_1}\partial^{\mathfrak{I}_1}_{t_1}\psi_1(0)\ldots\epsilon^{\mathfrak{I}_l-\alpha_l}\partial^{\mathfrak{I}_l}_{t_l}\psi_l(0)
\\&=&
\sum_{{|i|+|I|+|\mathfrak{I}|\le K}\atop{|I| = |\overline{I}|}}
\tilde C_{i,I,\mathfrak{I}}\,\theta_{i,I,\mathfrak{I}}\,
\partial^{i_1}_{x_{1}} {v}_{1}(0)\ldots
\partial^{i_n}_{x_{n}} {v}_{n}(0)\\
&&\qquad\times e_1^{I_1}\ldots e_M^{I_M}\,
\left(- e_1 \cdot Y_1 \right)_+^{s_1-|I_1|}
\ldots
\left(- e_M \cdot Y_M\right)_+^{s_M-|I_M|}\underline{\mbox{\large{\wedn{t}}}}\,_1^{\mathfrak{I}_1}\ldots\underline{\mbox{\large{\wedn{t}}}}\,_l^{\mathfrak{I}_l}
,\end{eqnarray*}
for a suitable~$\tilde C_{i,I,\mathfrak{I}}\ne0$
(strictly speaking, the above identity holds
in the sense of distributions with respect to the coordinates~$Y$
and~$\underline{\mbox{\large{\wedn{t}}}}\,$, but since the left hand side vanishes,
we can consider it also as a pointwise identity).
Hence, recalling~\eqref{eq:adoajfiap},
\begin{equation}\label{GANAfai}\begin{split}
0&\,=\,\sum_{{|i|+|I|+|\mathfrak{I}|\le K}\atop{|I| = |\overline{I}|}}
C_{i,I,\mathfrak{I}}\,\theta_{i_1,\dots,i_n,I_1,\dots,I_M,\mathfrak{I}_1,\dots,\mathfrak{I}_l}\;
\underline{\mbox{\large{\wedn{x}}}}\,_1^{i_1}\ldots\underline{\mbox{\large{\wedn{x}}}}\,_n^{i_n}\\&\qquad\qquad\times
e_1^{I_1}\ldots e_M^{I_M}\,
\left(- e_1 \cdot Y_1 \right)_+^{s_1-|I_1|}
\ldots
\left(- e_M \cdot Y_M\right)_+^{s_M-|I_M|}
\underline{\mbox{\large{\wedn{t}}}}\,_1^{\mathfrak{I}_1}\ldots\underline{\mbox{\large{\wedn{t}}}}\,_l^{\mathfrak{I}_l}\\
&\,=\,
\left(- e_1 \cdot Y_1 \right)_+^{s_1}
\ldots
\left(- e_M \cdot Y_M\right)_+^{s_M}\\&\qquad\qquad\times
\sum_{{|i|+|I|+|\mathfrak{I}|\le K}\atop{|I| = |\overline{I}|}}
C_{i,I,\mathfrak{I}}\,\theta_{i,I,\mathfrak{I}}\;
\underline{\mbox{\large{\wedn{x}}}}\,^i\,e^{I}\,
\left(- e_1 \cdot Y_1 \right)_+^{-|I_1|}
\ldots
\left(- e_M \cdot Y_M\right)_+^{-|I_M|}
\underline{\mbox{\large{\wedn{t}}}}\,^{\mathfrak{I}}
,\end{split}\end{equation}
for a suitable~$C_{i,I,\mathfrak{I}}\ne0$.
We observe that the equality in~\eqref{GANAfai}
is valid for any choice of the free parameters~$(\underline{\mbox{\large{\wedn{x}}}}\,,Y,\underline{\mbox{\large{\wedn{t}}}}\,)$
in an open subset of~$\mathbb{R}^{p_1+\dots+p_n}\times
\mathbb{R}^{m_1+\dots+m_M}\times\mathbb{R}^l$,
as prescribed in~\eqref{FREExi},~\eqref{FREEmustar}
and~\eqref{eq:FREEy}.
Now, we take new free parameters $\underline{\mbox{\large{\wedn{y}}}}\,_1,\ldots,\underline{\mbox{\large{\wedn{y}}}}\,_M$, with $\underline{\mbox{\large{\wedn{y}}}}\,_j\in\mathbb{R}^{m_j}\setminus\{0\}$, and we
define
\begin{equation}\label{COMPATI}
e_j:=\frac{\omega_j\underline{\mbox{\large{\wedn{y}}}}\,_j}{|\underline{\mbox{\large{\wedn{y}}}}\,_j|}\quad {\mbox{ and }}\quad
Y_j:=-\frac{\underline{\mbox{\large{\wedn{y}}}}\,_j}{|\underline{\mbox{\large{\wedn{y}}}}\,_j|^2}.\end{equation}
We stress that the setting in~\eqref{COMPATI} is compatible with that in~\eqref{eq:FREEy}, since
$$ e_j\cdot Y_j=-\frac{\omega_j\underline{\mbox{\large{\wedn{y}}}}\,_j}{|\underline{\mbox{\large{\wedn{y}}}}\,_j|}\cdot
\frac{\underline{\mbox{\large{\wedn{y}}}}\,_j}{|\underline{\mbox{\large{\wedn{y}}}}\,_j|^2}=-\frac{\omega_j}{|\underline{\mbox{\large{\wedn{y}}}}\,_j|}<0,$$
thanks to~\eqref{OMEj}. We also notice that, for all~$j\in\{1,\dots,M\}$,
$$ e_j^{I_j}\left(- e_j \cdot Y_j \right)_+^{-|I_j|}=
\frac{\omega_j^{|I_j|}\underline{\mbox{\large{\wedn{y}}}}\,_j^{I_j}}{|\underline{\mbox{\large{\wedn{y}}}}\,_j|^{|I_j|}}\,
\frac{|\underline{\mbox{\large{\wedn{y}}}}\,_j|^{|I_j|}}{\omega_j^{|I_j|}}=\underline{\mbox{\large{\wedn{y}}}}\,_j^{I_j},
$$
and hence
$$ e^{I}\,
\left(- e_1 \cdot Y_1 \right)_+^{-|I_1|}
\ldots
\left(- e_M \cdot Y_M\right)_+^{-|I_M|}=
\underline{\mbox{\large{\wedn{y}}}}\,^I.$$
Plugging this into formula \eqref{GANAfai},
we obtain the first identity in~\eqref{ipop},
as desired.
Hence,
the proof of~\eqref{ipop} in case~\ref{itm:case1}
is complete.
\end{proof}
\begin{proof}[Proof of \eqref{ipop}, case \ref{itm:case2}]
Thanks to the assumptions given in case~\ref{itm:case2}, we can suppose
that formula~\eqref{AGZ} still holds, and also that
\begin{equation}\label{MAGGZC}
{\mbox{${\mbox{\large{\wedn{c}}}}\,_l>0$}}.\end{equation} In addition,
for any $j\in\{1,\ldots,M\}$, we consider $\lambda_j$
and~$\phi_j$ as in \eqref{REGSWYS-A}.
Then, we define
\begin{equation}\label{MAfghjkGGZC} R:=\left(
\frac{ 1 }{|{\mbox{\large{\wedn{a}}}}_1|}\displaystyle\left(\sum_{h=1}^{l-1}|{\mbox{\large{\wedn{c}}}}\,_h|+\sum_{j=1}^M|{\mbox{\large{\wedn{b}}}}_j|\lambda_j\right)\right)^{1/|r_{1}|}.\end{equation}
We notice that, in light of \eqref{AGZ}, the setting in~\eqref{MAfghjkGGZC} is well-defined.
Now, we fix two sets of free parameters $\underline{\mbox{\large{\wedn{x}}}}\,_1,\ldots,\underline{\mbox{\large{\wedn{x}}}}\,_n$
as in~\eqref{FREExi}
and~$\,\underline{\mbox{\large{\wedn{t}}}}\,_{\star,1},\dots,\underline{\mbox{\large{\wedn{t}}}}\,_{\star,l}$ as in~\eqref{FREEmustar}, here taken with~$R$
as in~\eqref{MAfghjkGGZC}.
Moreover, we define
\begin{equation}
\label{alpc}
\lambda\,:=\,\frac{1}{{\mbox{\large{\wedn{c}}}}\,_l\,\underline{\mbox{\large{\wedn{t}}}}\,_{\star,l}}\left(
\sum_{j=1}^n {\left|{\mbox{\large{\wedn{a}}}}_j\right|\underline{\mbox{\large{\wedn{x}}}}\,_j^{r_j}}-\sum_{j=1}^M{\mbox{\large{\wedn{b}}}}_j\lambda_j-\sum_{h=1}^{l-1}{\mbox{\large{\wedn{c}}}}\,_h\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}\right).\end{equation}
We notice that \eqref{alpc} is well-defined,
thanks to~\eqref{FREEmustar}
and~\eqref{MAGGZC}.
Furthermore, recalling~\eqref{FREExi}, \eqref{1.15bis} and~\eqref{MAfghjkGGZC},
we find that
\begin{equation*}
\begin{split}
\sum_{i=1}^n&|{\mbox{\large{\wedn{a}}}}_i|\underline{\mbox{\large{\wedn{x}}}}\,_i^{r_i}\ge|{\mbox{\large{\wedn{a}}}}_1|\underline{\mbox{\large{\wedn{x}}}}\,_1^{r_1}>
|{\mbox{\large{\wedn{a}}}}_1|(R+1)^{|r_1|}>|{\mbox{\large{\wedn{a}}}}_1|R^{|r_1|} \\
&=\sum_{h=1}^{l-1}|{\mbox{\large{\wedn{c}}}}\,_h|+\sum_{j=1}^M|{\mbox{\large{\wedn{b}}}}_j|\lambda_j
\ge\sum_{h=1}^{l-1}{\mbox{\large{\wedn{c}}}}\,_h\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}+\sum_{j=1}^M{\mbox{\large{\wedn{b}}}}_j\lambda_j.
\end{split}
\end{equation*}
Consequently, by~\eqref{alpc},
\begin{equation}
\label{alp-0c}
\lambda>0.\end{equation}
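We also observe, for later use, that the definition in~\eqref{alpc} can be equivalently written as the cancellation
$$ -\sum_{j=1}^n \left|{\mbox{\large{\wedn{a}}}}_j\right|\underline{\mbox{\large{\wedn{x}}}}\,_j^{r_j}+\sum_{j=1}^M{\mbox{\large{\wedn{b}}}}_j\lambda_j+\sum_{h=1}^{l-1}{\mbox{\large{\wedn{c}}}}\,_h\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}+\lambda\,{\mbox{\large{\wedn{c}}}}\,_l\underline{\mbox{\large{\wedn{t}}}}\,_{\star,l}=0,$$
and it is precisely this identity that will force the function constructed below to be annihilated by~$\Lambda_{-\infty}$.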
Hence, we can define
\begin{equation}\label{OVlam}
\overline{\lambda}:=\lambda^{1/{\alpha_l}}.\end{equation}
Moreover,
we consider~$a_h\in(-2,0)$, for every~$h\in\{ 1,\dots,l\}$,
to be chosen appropriately in what follows
(the exact choice will be performed in~\eqref{pata7UJ:AKKcc}),
and, using the notation in~\eqref{chosofpsistar}
and~\eqref{TGAdef}, we define
\begin{equation}
\label{autofun2c}
\psi_h(t_h):=\psi_{\star,h}\big(\underline{\mbox{\large{\wedn{t}}}}\,_h (t_h-a_h)\big)=
E_{\alpha_h,1}\big(\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h} (t_h-a_h)^{\alpha_h}\big)\quad\text{if }\,h\in\{1,\dots,l-1\}
\end{equation}
and
\begin{equation}
\label{psielle}
\psi_{l}(t_l):=\psi_{\star,l}\big(\overline{\lambda}\,\underline{\mbox{\large{\wedn{t}}}}\,_l (t_l-a_l)\big)=
E_{\alpha_l,1}\big(\lambda\,\underline{\mbox{\large{\wedn{t}}}}\,_{\star,l} (t_l-a_l)^{\alpha_l}\big).
\end{equation}
We recall that, thanks to Lemma~\ref{MittagLEMMA}, the function in~\eqref{autofun2c}
solves~\eqref{jhjadwlgh} and satisfies~\eqref{STAvca} for any $h\in\{1,\dots,l-1\}$, while the function
in~\eqref{psielle} solves
\begin{equation}
\label{jhjadwlghplop}
\begin{cases}
D^{\alpha_l}_{t_l,a_l}\psi_l(t_l)=\lambda\,\underline{\mbox{\large{\wedn{t}}}}\,_{\star,l}\psi_l(t_l)&\quad\text{in }\,(a_l,+\infty), \\
\psi_l(a_l)=1, \\
\partial^m_{t_l}\psi_l(a_l)=0&\quad\text{for every }\,m\in\{1,\dots,[\alpha_l] \}.
\end{cases}
\end{equation}
As in~\eqref{starest}, we extend the functions~$\psi_{h}$
constantly in~$(-\infty,a_h)$, calling~$\psi^\star_h$ this
extended function. In this way,
Lemma~A.3 in~\cite{CDV18} translates~\eqref{jhjadwlghplop} into
\begin{equation}
\label{jhjadwlghplop2}
D^{\alpha_l}_{t_l,-\infty}\psi_l^\star(t_l)=\lambda\,\underline{\mbox{\large{\wedn{t}}}}\,_{\star,l}\psi_l(t_l)
=\lambda\,\underline{\mbox{\large{\wedn{t}}}}\,_{\star,l}\psi_l^\star(t_l)\,\text{ in every interval }\,I\Subset(a_l,+\infty).
\end{equation}
Now, we let~$\epsilon>0$, to be taken small
possibly depending on the free parameters,
and we exploit the functions defined in \eqref{otau1} and \eqref{vuggei},
provided that
one replaces the positive constant $R$ defined in \eqref{MAfghjkGGZ}
with the one in \eqref{MAfghjkGGZC}, when necessary.
With this idea in mind, for any $j\in\{1,\ldots,M\}$, we let\footnote{Comparing~\eqref{freee2}
with \eqref{econ}, we observe that~\eqref{econ} reduces to~\eqref{freee2} with the choice~$\omega_j:=1$.}
\begin{equation}
\label{freee2}
e_j\in\partial B^{m_j}_1,
\end{equation}
and we define
\begin{equation}
\label{svsc}
\begin{split}
w\left(x,y,t\right):
=&\tau_1\left(x\right)v_1\left(x_1\right)\cdot
\ldots \cdot
v_n\left(x_n\right)\phi_1\left(y_1+e_1+\epsilon Y_1\right)\cdot
\ldots\cdot
\phi_M\left(y_M+e_M+\epsilon Y_M\right) \\
&\times\psi^\star_1(t_1)\cdot\ldots\cdot\psi^\star_l(t_l),
\end{split}
\end{equation}
where the setting in~\eqref{REGSWYS-A},
\eqref{otau1},
\eqref{vuggei}, \eqref{eq:FREEy}, \eqref{autofun2c}
and~\eqref{psielle} has been exploited.
We also notice that $w\in C\left(\mathbb{R}^N\right)\cap C_0(\mathbb{R}^{N-l})\cap\mathcal{A}$. Moreover,
if
\begin{equation}\label{pata7UJ:AKKcc}
a=(a_1,\dots,a_l):=\left(-\frac{\epsilon}{\underline{\mbox{\large{\wedn{t}}}}\,_1},\dots,-\frac{\epsilon}{\,\underline{\mbox{\large{\wedn{t}}}}\,_l}\right)
\in(-\infty,0)^l\end{equation}
and~$(x,y)$ is sufficiently close to the origin
and~$t\in(a_1,+\infty)\times\dots\times(a_l,+\infty)$, we have that
\begin{eqnarray*}&&
\Lambda_{-\infty} w\left(x,y,t\right)\\&=&
\left( \sum_{i=1}^n {\mbox{\large{\wedn{a}}}}_i \partial^{r_i}_{x_i}
+\sum_{j=1}^{M} {\mbox{\large{\wedn{b}}}}_j (-\Delta)^{s_j}_{y_j}+
\sum_{h=1}^{l} {\mbox{\large{\wedn{c}}}}\,_h D^{\alpha_h}_{t_h,-\infty}\right) w\left(x,y,t\right)\\
&=&
\sum_{i=1}^n {\mbox{\large{\wedn{a}}}}_i
v_1\left(x_1\right)
\ldots
v_{i-1}\left(x_{i-1}\right)
\partial^{r_i}_{x_i}v_{i}\left(x_{i}\right)v_{i+1}\left(x_{i+1}\right)
\ldots v_n\left(x_n\right)\\
&&\qquad\times\phi_1\left(y_1+e_1+\epsilon Y_1\right)
\ldots\phi_M\left(y_M+e_M+\epsilon Y_M\right)\psi^\star_1\left(t_1\right)
\ldots\psi^\star_{l-1}\left(t_{l-1}\right)\psi^\star_l\left(t_l\right)\\
&&+\sum_{j=1}^{M} {\mbox{\large{\wedn{b}}}}_j
v_1\left(x_1\right)
\ldots v_n\left(x_n\right)\phi_1\left(y_1+e_1+\epsilon Y_1\right)
\ldots\phi_{j-1}\left(y_{j-1}+e_{j-1}+\epsilon Y_{j-1}\right)\\&&\qquad\times
(-\Delta)^{s_j}_{y_j}\phi_j\left(y_j+e_j+\epsilon Y_j\right)
\phi_{j+1}\left(y_{j+1}+e_{j+1}+\epsilon Y_{j+1}\right)
\ldots\phi_M\left(y_M+e_M+\epsilon Y_M\right)\\
&&\qquad\times\psi^\star_1\left(t_1\right)
\ldots\psi^\star_{l-1}\left(t_{l-1}\right)\psi^\star_l\left(t_l\right)\\
&&+\sum_{h=1}^{l} {\mbox{\large{\wedn{c}}}}\,_h
v_1\left(x_1\right)
\ldots v_n\left(x_n\right)\phi_1\left(y_1+e_1+\epsilon Y_1\right)\ldots\phi_M\left(y_M+e_M+\epsilon Y_M\right)\psi^\star_1\left(t_1\right)
\ldots\psi^\star_{h-1}\left(t_{h-1}\right)\\
&&\qquad\times D^{\alpha_h}_{t_h,-\infty}\psi^\star_h\left(t_h\right)
\psi^\star_{h+1}(t_{h+1})\ldots\psi^\star_{l-1}\left(t_{l-1}\right)\psi^\star_l\left(t_l\right)\\
&=&
-\sum_{i=1}^n {\mbox{\large{\wedn{a}}}}_i \overline{\mbox{\large{\wedn{a}}}}_i\underline{\mbox{\large{\wedn{x}}}}\,_i^{r_i}
v_1\left(x_1\right)
\ldots v_n\left(x_n\right)\phi_1\left(y_1+e_1+\epsilon Y_1\right)
\ldots\phi_M\left(y_M+e_M+\epsilon Y_M\right)\\
&&\qquad\times\psi^\star_1(t_1)\ldots\psi^\star_{l-1}(t_{l-1})\psi^\star_l(t_l)\\
&&+\sum_{j=1}^{M} {\mbox{\large{\wedn{b}}}}_j \lambda_j
v_1\left(x_1\right)
\ldots v_n\left(x_n\right)\phi_1\left(y_1+e_1+\epsilon Y_1\right)
\ldots\phi_M\left(y_M+e_M+\epsilon Y_M\right)\\&&\qquad\times\psi^\star_1(t_1)\ldots\psi^\star_{l-1}\left(t_{l-1}\right)\psi^\star_l(t_l)
\\
&&+\sum_{h=1}^{l-1} {\mbox{\large{\wedn{c}}}}\,_h \underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}
v_1\left(x_1\right)
\ldots v_n\left(x_n\right)\phi_1\left(y_1+e_1+\epsilon Y_1\right)
\ldots\phi_M\left(y_M+e_M+\epsilon Y_M\right)\\&&\qquad\times\psi^\star_1(t_1)\ldots\psi^\star_{l-1}(t_{l-1})\psi^\star_l(t_l)
\\
&&+{\mbox{\large{\wedn{c}}}}\,_l\lambda\underline{\mbox{\large{\wedn{t}}}}\,_{\star,l}v_1\left(x_1\right)
\ldots v_n\left(x_n\right)\phi_1\left(y_1+e_1+\epsilon Y_1\right)
\ldots\phi_M\left(y_M+e_M+\epsilon Y_M\right)\\
&&\qquad\times\psi^\star_1(t_1)\ldots\psi^\star_{l-1}(t_{l-1})\psi^\star_l(t_l) \\
&=&\left( -\sum_{i=1}^n {\mbox{\large{\wedn{a}}}}_i \overline{\mbox{\large{\wedn{a}}}}_i\underline{\mbox{\large{\wedn{x}}}}\,_i^{r_i}
+\sum_{j=1}^M{\mbox{\large{\wedn{b}}}}_j\lambda_j+\sum_{h=1}^{l-1} {\mbox{\large{\wedn{c}}}}\,_h\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}+{\mbox{\large{\wedn{c}}}}\,_l\lambda\underline{\mbox{\large{\wedn{t}}}}\,_{\star,l}\right) w(x,y,t),
\end{eqnarray*}
thanks to \eqref{REGSWYS-A}, \eqref{jhjadwlgh}, \eqref{QYHA0ow1dk}
and~\eqref{jhjadwlghplop2}.
Consequently, making use of \eqref{NOZABAnd} and~\eqref{alpc}, when~$(x,y)$ is
near the origin and~$t\in(a_1,+\infty)\times\dots\times(a_l,+\infty)$,
we have that
$$
\Lambda_{-\infty} w\left(x,y,t\right)=
\left( -\sum_{i=1}^n |{\mbox{\large{\wedn{a}}}}_i |\underline{\mbox{\large{\wedn{x}}}}\,_i^{r_i}
+\sum_{j=1}^M{\mbox{\large{\wedn{b}}}}_j\lambda_j+\sum_{h=1}^{l-1} {\mbox{\large{\wedn{c}}}}\,_h\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}+\lambda{\mbox{\large{\wedn{c}}}}\,_l\underline{\mbox{\large{\wedn{t}}}}\,_{\star,l}\right) w(x,y,t)=0
.$$
This says that~$w\in\mathcal{H}$. Thus, in light of~\eqref{aza} we have that
\begin{equation*}
0=\theta\cdot\partial^K w\left(0\right)=
\sum_{\left|\iota\right|\leq K}
{\theta_{\iota}\partial^\iota w\left(0\right)}=\sum_{|i|+|I|+|\mathfrak{I}|\le K}
\theta_{i,I,\mathfrak{I}}\,\partial_{x}^{i}\partial_{y}^I\partial_{t}^{\mathfrak{I}}w\left(0\right)
.\end{equation*}
Hence, in view of~\eqref{eq:adoajfiap} and~\eqref{svsc},
\begin{equation}\label{1.64bis}
\begin{split}
0\,&=\,\sum_{|i|+|I|+|\mathfrak{I}|\le K}
\theta_{i,I,\mathfrak{I}}\,
\partial^{i_1}_{x_{1}} {v}_{1}(0)\ldots
\partial^{i_n}_{x_{n}} {v}_{n}(0)\\&\qquad\times
\partial^{I_1}_{y_{1}} {\phi}_{1}(e_1+\epsilon Y_1)\ldots
\partial^{I_M}_{y_{M}} {\phi}_{M}(e_M+\epsilon Y_M)
\,\partial_{t_1}^{\mathfrak{I}_1}\psi_1(0)
\ldots\partial_{t_l}^{\mathfrak{I}_l}\psi_l(0)
\\&=
\,\sum_{|i|+|I|+|\mathfrak{I}|\le K}
\theta_{i,I,\mathfrak{I}}\,\underline{\mbox{\large{\wedn{x}}}}\,_1^{r_1}\ldots\underline{\mbox{\large{\wedn{x}}}}\,_n^{r_n}\,
\partial^{i_1}_{x_{1}} \overline{v}_{1}(0)\ldots
\partial^{i_n}_{x_{n}} \overline{v}_{n}(0)\\&\qquad\times
\partial^{I_1}_{y_{1}} {\phi}_{1}(e_1+\epsilon Y_1)\ldots
\partial^{I_M}_{y_{M}} {\phi}_{M}(e_M+\epsilon Y_M)
\,\partial_{t_1}^{\mathfrak{I}_1}\psi_1(0)
\ldots\partial_{t_l}^{\mathfrak{I}_l}\psi_l(0)
.\end{split}\end{equation}
Moreover, using~\eqref{Mittag}, \eqref{psielle}
and \eqref{pata7UJ:AKKcc},
it follows that
\begin{eqnarray*}
\partial^{\mathfrak{I}_l}_{t_l}\psi_l(0)&=&
\sum_{j=0}^{+\infty} {\frac{
\lambda^j\,\underline{\mbox{\large{\wedn{t}}}}\,_{\star,l}^j\, \alpha_l j(\alpha_l j-1)\dots(\alpha_l j-
\mathfrak{I}_l+1)
(0-a_l)^{\alpha_l j-\mathfrak{I}_l}}{\Gamma\left(\alpha_l j+1\right)}}\\
&=& \sum_{j=0}^{+\infty} {\frac{
\lambda^j\,\underline{\mbox{\large{\wedn{t}}}}\,_{\star,l}^j\, \alpha_l j(\alpha_l j-1)\dots(\alpha_l j-
\mathfrak{I}_l+1)
\,{\epsilon}^{\alpha_l j-\mathfrak{I}_l}}{\Gamma\left(\alpha_l j+1\right)\;
{\underline{\mbox{\large{\wedn{t}}}}\,_l}^{\alpha_l j-\mathfrak{I}_l}}}\\
&=& \sum_{j=1}^{+\infty} {\frac{
\lambda^j\,\underline{\mbox{\large{\wedn{t}}}}\,_{\star,l}^j\, \alpha_l j(\alpha_l j-1)\dots(\alpha_l j-
\mathfrak{I}_l+1)
\,{\epsilon}^{\alpha_l j-\mathfrak{I}_l}}{\Gamma\left(\alpha_l j+1\right)\;
{\underline{\mbox{\large{\wedn{t}}}}\,_l}^{\alpha_l j-\mathfrak{I}_l}}}.
\end{eqnarray*}
Accordingly, by~\eqref{TGAdef}, we find that
\begin{equation}
\label{limmittlc}\begin{split}&
\lim_{\epsilon\searrow 0}\epsilon^{\mathfrak{I}_l-\alpha_l}
\partial^{\mathfrak{I}_l}_{t_l}\psi_l(0)
=\lim_{\epsilon\searrow 0}
\sum_{j=1}^{+\infty} {\frac{
\lambda^j\,\underline{\mbox{\large{\wedn{t}}}}\,_{\star,l}^j\, \alpha_l j(\alpha_l j-1)\dots(\alpha_l j-
\mathfrak{I}_l+1)
\,{\epsilon}^{\alpha_l (j-1)}}{\Gamma\left(\alpha_l j+1\right)\;
{\underline{\mbox{\large{\wedn{t}}}}\,_l}^{\alpha_l j-\mathfrak{I}_l}}}
\\&\qquad=
{\frac{ \lambda\,\underline{\mbox{\large{\wedn{t}}}}\,_{\star,l}\,\alpha_l (\alpha_l -1)\dots(\alpha_l -\mathfrak{I}_l+1)
}{\Gamma\left(\alpha_l +1\right)\;
\underline{\mbox{\large{\wedn{t}}}}\,_l^{\alpha_l -\mathfrak{I}_l}}} =
{\frac{ \lambda\,\underline{\mbox{\large{\wedn{t}}}}\,_l^{\mathfrak{I}_l}\,\alpha_l (\alpha_l -1)\dots(\alpha_l -\mathfrak{I}_l+1)
}{\Gamma\left(\alpha_l +1\right)}}.
\end{split}\end{equation}
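As a concrete instance of~\eqref{limmittlc}, when~$\mathfrak{I}_l=1$ only the summand with~$j=1$ survives the limit, since every summand with~$j\ge2$ carries the vanishing factor~$\epsilon^{\alpha_l(j-1)}$, and, recalling that~$\Gamma(\alpha_l+1)=\alpha_l\,\Gamma(\alpha_l)$, one finds
$$ \lim_{\epsilon\searrow 0}\epsilon^{1-\alpha_l}\,\partial_{t_l}\psi_l(0)=
\frac{ \lambda\,\underline{\mbox{\large{\wedn{t}}}}\,_l\,\alpha_l}{\Gamma\left(\alpha_l +1\right)}
=\frac{ \lambda\,\underline{\mbox{\large{\wedn{t}}}}\,_l}{\Gamma\left(\alpha_l\right)}.$$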
Hence, recalling~\eqref{IBARRA2}, we can write~\eqref{1.64bis} as
\begin{equation}
\label{eq:ort:Xc}\begin{split}&
0=\sum_{{|i|+|I|+|\mathfrak{I}|\le K}\atop{|\mathfrak{I}|\le|\overline{\mathfrak{I}}|}}
\theta_{i,I,\mathfrak{I}}\,\underline{\mbox{\large{\wedn{x}}}}\,_1^{r_1}\ldots\underline{\mbox{\large{\wedn{x}}}}\,_n^{r_n}\,
\partial^{i_1}_{x_{1}} \overline{v}_{1}(0)\ldots
\partial^{i_n}_{x_{n}} \overline{v}_{n}(0)\\
&\qquad\qquad\times
\partial^{I_1}_{y_{1}} {\phi}_{1}(e_1+\epsilon Y_1)\ldots
\partial^{I_M}_{y_{M}} {\phi}_{M}(e_M+\epsilon Y_M) \partial^{\mathfrak{I}_1}_{t_1}\psi_{1}(0)\ldots
\partial^{\mathfrak{I}_l}_{t_l}\psi_{l}(0).\end{split}
\end{equation}
Moreover, we define
\begin{equation*}
\Xi:=|\overline{\mathfrak{I}}|-\sum_{h=1}^l {\alpha_h}+\left|I\right|-
\sum_{j=1}^M {s_j}.
\end{equation*}
Then, we
multiply~\eqref{eq:ort:Xc} by $\epsilon^{\Xi}\in(0,+\infty)$, and we
send~$\epsilon$ to zero. In this way, we obtain from~\eqref{limmittl}, used here for~$h\in\{1,\ldots, l-1\}$,
\eqref{limmittlc} and~\eqref{eq:ort:Xc} that
\begin{eqnarray*}
0&=&\lim_{\epsilon\searrow0}
\epsilon^{\Xi}
\sum_{{|i|+|I|+|\mathfrak{I}|\le K}\atop{|\mathfrak{I}|\le|\overline{\mathfrak{I}}|}}
\theta_{i,I,\mathfrak{I}}\,
\,\underline{\mbox{\large{\wedn{x}}}}\,_1^{r_1}\ldots\underline{\mbox{\large{\wedn{x}}}}\,_n^{r_n}\,
\partial^{i_1}_{x_{1}} \overline{v}_{1}(0)\ldots
\partial^{i_n}_{x_{n}} \overline{v}_{n}(0)\\
&&\qquad\times\partial_{y_1}^{I_1}\phi_1\left(e_1+\epsilon Y_1\right)
\ldots\partial_{y_M}^{I_M}\phi_M\left(e_M+\epsilon Y_M\right)
\\ &&\qquad\times\partial^{\mathfrak{I}_1}_{t_1}\psi_1(0)\ldots\partial^{\mathfrak{I}_l}_{t_l}\psi_l(0)
\\ &=&\lim_{\epsilon\searrow0}
\sum_{{|i|+|I|+|\mathfrak{I}|\le K}\atop{|\mathfrak{I}|\le|\overline{\mathfrak{I}}|}}
\epsilon^{|\overline{\mathfrak{I}}|-|\mathfrak{I}|}
\theta_{i,I,\mathfrak{I}}
\,\underline{\mbox{\large{\wedn{x}}}}\,_1^{r_1}\ldots\underline{\mbox{\large{\wedn{x}}}}\,_n^{r_n}\,
\partial^{i_1}_{x_{1}} \overline{v}_{1}(0)\ldots
\partial^{i_n}_{x_{n}} \overline{v}_{n}(0)\\
&&\qquad\times\epsilon^{|I_1|-s_1}\partial_{y_1}^{I_1}\phi_1\left(e_1+\epsilon Y_1\right)
\ldots\epsilon^{|I_M|-s_M}\partial_{y_M}^{I_M}\phi_M\left(e_M+\epsilon Y_M\right)
\\ &&\qquad\times\epsilon^{\mathfrak{I}_1-\alpha_1}\partial^{\mathfrak{I}_1}_{t_1}\psi_1(0)\ldots\epsilon^{\mathfrak{I}_l-\alpha_l}\partial^{\mathfrak{I}_l}_{t_l}\psi_l(0)
\\&=&
\sum_{{|i|+|I|+|\mathfrak{I}|\le K}\atop{|\mathfrak{I}| = |\overline{\mathfrak{I}}|}}
\lambda\,\tilde C_{i,I,\mathfrak{I}}\,\theta_{i,I,\mathfrak{I}}\,\underline{\mbox{\large{\wedn{x}}}}\,_1^{r_1}\ldots\underline{\mbox{\large{\wedn{x}}}}\,_n^{r_n}\,
\partial^{i_1}_{x_{1}} \overline{v}_{1}(0)\ldots
\partial^{i_n}_{x_{n}} \overline{v}_{n}(0)\\
&&\qquad\times e_1^{I_1}\ldots e_M^{I_M}\,
\left(- e_1 \cdot Y_1 \right)_+^{s_1-|I_1|}
\ldots
\left(- e_M \cdot Y_M\right)_+^{s_M-|I_M|}\underline{\mbox{\large{\wedn{t}}}}\,_1^{\mathfrak{I}_1}\ldots\underline{\mbox{\large{\wedn{t}}}}\,_l^{\mathfrak{I}_l}
,\end{eqnarray*}
for a suitable~$\tilde C_{i,I,\mathfrak{I}}$. We stress that~$\tilde C_{i,I,\mathfrak{I}}\ne0$,
thanks also to~\eqref{alppoo},
applied here with~$\omega_j:=1$, $\tilde{\phi}_{\star,j}:=\phi_j$
and $e_j$ as in \eqref{freee2} for any $j\in\{1,\ldots,M\}$.
Hence, recalling~\eqref{alp-0c},
\begin{equation}\label{GANAfaic}
\begin{split}
0&\,=\,\sum_{{|i|+|I|+|\mathfrak{I}|\le K}\atop{|\mathfrak{I}| = |\overline{\mathfrak{I}}|}}
C_{i,I,\mathfrak{I}}\,\theta_{i_1,\dots,i_n,I_1,\dots,I_M,\mathfrak{I}_1,\dots,\mathfrak{I}_l}\;
\underline{\mbox{\large{\wedn{x}}}}\,_1^{i_1}\ldots\underline{\mbox{\large{\wedn{x}}}}\,_n^{i_n}\\&\qquad\qquad\times
e_1^{I_1}\ldots e_M^{I_M}\,
\left(- e_1 \cdot Y_1 \right)_+^{s_1-|I_1|}
\ldots
\left(- e_M \cdot Y_M\right)_+^{s_M-|I_M|}
\underline{\mbox{\large{\wedn{t}}}}\,_1^{\mathfrak{I}_1}\ldots\underline{\mbox{\large{\wedn{t}}}}\,_l^{\mathfrak{I}_l}\\
&\,=\,
\left(- e_1 \cdot Y_1 \right)_+^{s_1}
\ldots
\left(- e_M \cdot Y_M\right)_+^{s_M}\\&\qquad\qquad\times
\sum_{{|i|+|I|+|\mathfrak{I}|\le K}\atop{|\mathfrak{I}| = |\overline{\mathfrak{I}}|}}
C_{i,I,\mathfrak{I}}\,\theta_{i,I,\mathfrak{I}}\;
\underline{\mbox{\large{\wedn{x}}}}\,^i\,e^{I}\,
\left(- e_1 \cdot Y_1 \right)_+^{-|I_1|}
\ldots
\left(- e_M \cdot Y_M\right)_+^{-|I_M|}
\underline{\mbox{\large{\wedn{t}}}}\,^{\mathfrak{I}}
,\end{split}\end{equation}
for a suitable~$C_{i,I,\mathfrak{I}}\ne0$.
We observe that the equality in~\eqref{GANAfaic}
is valid for any choice of the free parameters~$(\underline{\mbox{\large{\wedn{x}}}}\,,Y,\underline{\mbox{\large{\wedn{t}}}}\,)$
in an open subset of~$\mathbb{R}^{p_1+\dots+p_n}\times
\mathbb{R}^{m_1+\ldots+m_M}\times\mathbb{R}^l$,
as prescribed in~\eqref{FREExi}, \eqref{FREEmustar}
and~\eqref{eq:FREEy}.
Now, we take new free parameters $\underline{\mbox{\large{\wedn{y}}}}\,_j$, with $\underline{\mbox{\large{\wedn{y}}}}\,_j\in\mathbb{R}^{m_j}\setminus\{0\}$ for any $j\in\{1,\ldots,M\}$, and we perform in \eqref{GANAfaic} the same change of variables as in \eqref{COMPATI}, obtaining that
$$
0=\sum_{{|i|+|I|+|\mathfrak{I}|\le K}\atop{|\mathfrak{I}| = |\overline{\mathfrak{I}}|}}
C_{i,I,\mathfrak{I}}\,\theta_{i,I,\mathfrak{I}}\;
\underline{\mbox{\large{\wedn{x}}}}\,^i\underline{\mbox{\large{\wedn{y}}}}\,^I\underline{\mbox{\large{\wedn{t}}}}\,^{\mathfrak{I}},
$$
for some $C_{i,I,\mathfrak{I}}\ne 0$.
Hence,
the second identity in~\eqref{ipop} is obtained, as desired, and the proof of Lemma \ref{lemcin} in case~\ref{itm:case2}
is complete.
\end{proof}
\begin{proof}[Proof of \eqref{ipop}, case \ref{itm:case3}]
We divide the proof of case \ref{itm:case3} into two subcases, namely
either
\begin{equation}\label{SC-va1}
{\mbox{there exists $h\in\{1,\ldots,l\}$ such that ${\mbox{\large{\wedn{c}}}}\,_h\ne 0$,}}\end{equation}
or
\begin{equation}\label{SC-va2}
{\mbox{${\mbox{\large{\wedn{c}}}}\,_h=0\,$ for every $h\in\{1,\ldots,l\}$.}}\end{equation}
We start by dealing with the case in~\eqref{SC-va1}.
Up to relabeling and reordering the coefficients ${\mbox{\large{\wedn{c}}}}\,_h$, we can assume that
\begin{equation}\label{CNONU}
{\mbox{${\mbox{\large{\wedn{c}}}}\,_1\ne 0$}}.\end{equation}
Also, thanks to the assumptions given in case~\ref{itm:case3}, we can suppose that
\begin{equation}\label{NEGB}
{\mbox{${\mbox{\large{\wedn{b}}}}_M<0$}},\end{equation}
and, for any $j\in\{1,\ldots,M\}$, we consider $\lambda_{\star,j}$ and $\tilde{\phi}_{\star,j}$ as in \eqref{lambdastarj}.
Then, we take~$\omega_j:=1$ and~$\phi_j$ as in~\eqref{autofun1},
so that~\eqref{REGSWYS-A} is satisfied.
In particular, here we have that
\begin{equation}\label{7yHSSIKnNSJS}
\lambda_{j}=
\lambda_{\star,j}\qquad{\mbox{ and }}\qquad\phi_j=\tilde\phi_{\star,j}.\end{equation}
We define
\begin{equation}\label{MAfghjkGGZ3} R:=
\frac{ 1 }{|{\mbox{\large{\wedn{c}}}}\,_1|}\displaystyle\sum_{j=1}^{M-1}|{\mbox{\large{\wedn{b}}}}_j|\lambda_{\star,j}.\end{equation}
We notice that, in light of \eqref{CNONU},
the setting in~\eqref{MAfghjkGGZ3} is well-defined.
Now, we fix a set of free parameters
\begin{equation}\label{FREEmustar3}
\underline{\mbox{\large{\wedn{t}}}}\,_{\star,1}\in(R+1,R+2),\;\dots,\;\underline{\mbox{\large{\wedn{t}}}}\,_{\star,l}\in(R+1,R+2).\end{equation}
Moreover, we define
\begin{equation}
\label{alp3}
\lambda_M\,:=\,\frac{1}{{\mbox{\large{\wedn{b}}}}_M}\left(
-\sum_{j=1}^{M-1}{\mbox{\large{\wedn{b}}}}_j\lambda_{\star,j}-\sum_{h=1}^{l}|{\mbox{\large{\wedn{c}}}}\,_h|\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}\right).\end{equation}
We notice that \eqref{alp3} is well-defined thanks to \eqref{NEGB}.
{F}rom~\eqref{MAfghjkGGZ3} we deduce that
\begin{equation*}
\begin{split}
\sum_{h=1}^l&|{\mbox{\large{\wedn{c}}}}\,_h|\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}+\sum_{j=1}^{M-1}{\mbox{\large{\wedn{b}}}}_j\lambda_{\star,j}\ge
|{\mbox{\large{\wedn{c}}}}\,_1|\underline{\mbox{\large{\wedn{t}}}}\,_{\star,1}-\sum_{j=1}^{M-1}|{\mbox{\large{\wedn{b}}}}_j|\lambda_{\star,j} \\
&>|{\mbox{\large{\wedn{c}}}}\,_1|R-\sum_{j=1}^{M-1}|{\mbox{\large{\wedn{b}}}}_j|\lambda_{\star,j}=0.
\end{split}
\end{equation*}
Consequently, by~\eqref{NEGB} and~\eqref{alp3},
\begin{equation}
\label{alp-03}
\lambda_M>0.\end{equation}
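We also remark that, by the very definition in~\eqref{alp3},
$$ \sum_{j=1}^{M-1}{\mbox{\large{\wedn{b}}}}_j\lambda_{\star,j}+{\mbox{\large{\wedn{b}}}}_M\lambda_M+\sum_{h=1}^{l}|{\mbox{\large{\wedn{c}}}}\,_h|\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}=0,$$
and this cancellation will be exploited to check that the function constructed below is annihilated by~$\Lambda_{-\infty}$.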
Now, we define, for any $h\in\left\{1,\ldots,l\right\}$,
\begin{equation*}
\overline{{\mbox{\large{\wedn{c}}}}\,}_h:=
\begin{cases}
\displaystyle\frac{{\mbox{\large{\wedn{c}}}}\,_h}{\left|{\mbox{\large{\wedn{c}}}}\,_h\right|}\quad& \text{ if }{\mbox{\large{\wedn{c}}}}\,_h\neq 0 ,\\
1 \quad &\text{ if }{\mbox{\large{\wedn{c}}}}\,_h=0.
\end{cases}
\end{equation*}
We notice that
\begin{equation}\label{ahflaanhuf}
{\mbox{$\overline{{\mbox{\large{\wedn{c}}}}\,}_h\neq 0$ for all~$h\in\left\{1,\ldots,l\right\}$,}}\end{equation}
and
\begin{equation}\label{ahflaanhuf3}
{{\mbox{\large{\wedn{c}}}}\,}_h\overline{{\mbox{\large{\wedn{c}}}}\,}_h=|{{\mbox{\large{\wedn{c}}}}\,}_h|.\end{equation}
Moreover,
we consider~$a_h\in(-2,0)$, for every~$h=1,\dots,l$,
to be chosen appropriately in what follows (see~\eqref{pata7UJ:AKK3}
for a precise choice).
Now, for every $h\in\{1,\ldots,l\}$, we define
\begin{equation}
\label{autofun3}
\psi_h(t_h):=E_{\alpha_h,1}(\overline{{\mbox{\large{\wedn{c}}}}\,}_h\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}(t_h-a_h)^{\alpha_h}),
\end{equation}
where~$E_{\alpha_h,1}$ denotes the Mittag-Leffler function
with parameters $\alpha:=\alpha_h$ and $\beta:=1$ as defined
in \eqref{Mittag}. By Lemma~\ref{MittagLEMMA},
we know that
\begin{equation}
\label{HFloj}
\begin{cases}
D^{\alpha_h}_{t_h,a_h}\psi_h(t_h)=\overline{{\mbox{\large{\wedn{c}}}}\,}_h\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}\psi_h(t_h)\quad\text{in }\,(a_h,+\infty) ,\\
\psi_h(a_h)=1 ,\\
\partial^m_{t_h}\psi_h(a_h)=0\quad\text{for every }\,m\in\{1,\dots,[\alpha_h]\},
\end{cases}
\end{equation}
and we consider again the extension $\psi^\star_h$ given in \eqref{starest}.
By Lemma~A.3 in~\cite{CDV18}, we know that~\eqref{HFloj}
translates into
\begin{equation}\label{HFloj2}
D^{\alpha_h}_{t_h,-\infty}\psi_h^\star(t_h)
=\overline{{\mbox{\large{\wedn{c}}}}\,}_h\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}\psi_h^\star(t_h)\,\text{ in every interval }\,I\Subset(a_h,+\infty).
\end{equation}
Now, we
consider auxiliary parameters~$\underline{\mbox{\large{\wedn{t}}}}\,_h$, $e_j$ and $Y_j$
as in \eqref{TGAdef}, \eqref{econ} and~\eqref{eq:FREEy}.
Moreover, we introduce an additional set of free parameters
\begin{equation}
\label{FREEXXXXX}
\underline{\mbox{\large{\wedn{x}}}}\,=(\underline{\mbox{\large{\wedn{x}}}}\,_1,\ldots,\underline{\mbox{\large{\wedn{x}}}}\,_n)\in
\mathbb{R}^{p_1}\times\ldots\times\mathbb{R}^{p_n}.
\end{equation}
We
let~$\epsilon>0$, to be taken small
possibly depending on the
free parameters.
We take~$\tau\in C^\infty(\mathbb{R}^{p_1+\ldots+p_n},[0,+\infty))$ such that
\begin{equation}
\label{otau3}
\tau(x):=\begin{cases}
\exp\left({\,\underline{\mbox{\large{\wedn{x}}}}\,\cdot x}\right)&\quad\text{if }\,x\in B_1^{p_1+\ldots+p_n} ,\\
0&\quad\text{if }\,x\in \mathbb{R}^{p_1+\ldots+p_n}\setminus B_2^{p_1+\ldots+p_n},
\end{cases}
\end{equation}
where
$$ \underline{\mbox{\large{\wedn{x}}}}\,\cdot x:=\sum_{i=1}^{n} \underline{\mbox{\large{\wedn{x}}}}\,_i\cdot x_i$$
denotes the standard scalar product.
We notice that, for any $i\in\mathbb{N}^{p_1}\times\ldots\times\mathbb{N}^{p_n}$,
\begin{equation}
\label{resczzz}
\partial^{i}_{x}\tau(0)=\partial^{i_1}_{x_1}\ldots\partial^{i_n}_{x_n}\tau(0)=
\underline{\mbox{\large{\wedn{x}}}}\,^{i_{11}}_{11}\ldots\underline{\mbox{\large{\wedn{x}}}}\,^{i_{1p_1}}_{1p_1}\ldots\underline{\mbox{\large{\wedn{x}}}}\,^{i_{n1}}_{n1}\ldots\underline{\mbox{\large{\wedn{x}}}}\,^{i_{np_n}}_{np_n}
=\underline{\mbox{\large{\wedn{x}}}}\,^i.
\end{equation}
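For instance, if~$n=1$ and~$p_1=1$, formula~\eqref{resczzz} simply states that
$$ \partial^{i}_{x}\,e^{\underline{\mbox{\large{\wedn{x}}}}\,x}\Big|_{x=0}=\underline{\mbox{\large{\wedn{x}}}}\,^{i}\qquad{\mbox{for all }}i\in\mathbb{N}.$$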
We define
\begin{equation}
\label{svs3}
w\left(x,y,t\right):=\tau(x)\phi_1\left(y_1+e_1+\epsilon Y_1\right)\cdot
\ldots\cdot
\phi_M\left(y_M+e_M+\epsilon Y_M\right)
\psi^\star_1(t_1)\cdot\ldots\cdot\psi^\star_l(t_l),
\end{equation}
where the setting in~\eqref{REGSWYS-A}
has also been exploited.
We also notice that $w\in C\left(\mathbb{R}^N\right)\cap C_0\left(\mathbb{R}^{N-l}\right)\cap\mathcal{A}$. Moreover,
if
\begin{equation}\label{pata7UJ:AKK3}
a=(a_1,\dots,a_l):=\left(-\frac{\epsilon}{\underline{\mbox{\large{\wedn{t}}}}\,_1},\dots,
-\frac{\epsilon}{\,\underline{\mbox{\large{\wedn{t}}}}\,_l}\right)\in(-\infty,0)^l\end{equation}
and~$\left(x,y\right)$ is sufficiently close to the origin
and~$t\in(a_1,+\infty)\times\dots\times(a_l,+\infty)$, we have that
\begin{eqnarray*}&&
\Lambda_{-\infty} w\left(x,y,t\right)\\&=&
\left( \sum_{j=1}^{M} {\mbox{\large{\wedn{b}}}}_j (-\Delta)^{s_j}_{y_j}+
\sum_{h=1}^{l} {\mbox{\large{\wedn{c}}}}\,_h D^{\alpha_h}_{t_h,-\infty}\right) w\left(x,y,t\right)\\
&=&\sum_{j=1}^{M} {\mbox{\large{\wedn{b}}}}_j
\tau(x)\phi_1\left(y_1+e_1+\epsilon Y_1\right)
\ldots\phi_{j-1}\left(y_{j-1}+e_{j-1}+\epsilon Y_{j-1}\right)(-\Delta)^{s_j}_{y_j}\phi_j\left(y_j+e_j+\epsilon Y_j\right)\\&&\qquad\times
\phi_{j+1}\left(y_{j+1}+e_{j+1}+\epsilon Y_{j+1}\right)
\ldots\phi_M\left(y_M+e_M+\epsilon Y_M\right)\psi^\star_1\left(t_1\right)
\ldots\psi^\star_l\left(t_l\right)\\
&&+\sum_{h=1}^{l} {\mbox{\large{\wedn{c}}}}\,_h\tau(x)\phi_1\left(y_1+e_1+\epsilon Y_1\right)\ldots\phi_M\left(y_M+e_M+\epsilon Y_M\right)\psi^\star_1\left(t_1\right)
\ldots\psi^\star_{h-1}\left(t_{h-1}\right)\\
&&\qquad\times D^{\alpha_h}_{t_h,-\infty}\psi^\star_h\left(t_h\right)
\psi^\star_{h+1}(t_{h+1})\ldots\psi^\star_l\left(t_l\right)\\
&=&\sum_{j=1}^{M} {\mbox{\large{\wedn{b}}}}_j \lambda_j
\tau(x)\phi_1\left(y_1+e_1+\epsilon Y_1\right)
\ldots\phi_M\left(y_M+e_M+\epsilon Y_M\right)\psi^\star_1(t_1)\ldots\psi^\star_l(t_l)
\\
&&+\sum_{h=1}^{l} {\mbox{\large{\wedn{c}}}}\,_h\overline{{\mbox{\large{\wedn{c}}}}\,}_h\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}
\tau(x)\phi_1\left(y_1+e_1+\epsilon Y_1\right)
\ldots\phi_M\left(y_M+e_M+\epsilon Y_M\right)\psi^\star_1(t_1)\ldots\psi^\star_l(t_l) \\
&=&\left( \sum_{j=1}^M{\mbox{\large{\wedn{b}}}}_j\lambda_j+\sum_{h=1}^{l} {\mbox{\large{\wedn{c}}}}\,_h\overline{{\mbox{\large{\wedn{c}}}}\,}_h\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}\right) w(x,y,t),
\end{eqnarray*}
thanks to \eqref{REGSWYS-A} and~\eqref{HFloj2}.
Consequently, making use of \eqref{7yHSSIKnNSJS}, \eqref{alp3} and~\eqref{ahflaanhuf3},
if~$(x,y)$ is near the origin and~$t\in(a_1,+\infty)\times\dots\times(a_l,+\infty)$,
we have that
$$
\Lambda_{-\infty} w\left(x,y,t\right)=
\left( \sum_{j=1}^{M-1}{\mbox{\large{\wedn{b}}}}_j\lambda_{\star,j}+{\mbox{\large{\wedn{b}}}}_M\lambda_M+\sum_{h=1}^{l} |{\mbox{\large{\wedn{c}}}}\,_h|\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}\right) w(x,y,t)=0
.$$
This says that~$w\in\mathcal{H}$. Thus, in light of~\eqref{aza} we have that
\begin{equation*}
0=\theta\cdot\partial^K w\left(0\right)=
\sum_{\left|\iota\right|\leq K}
{\theta_{\iota}\partial^\iota w\left(0\right)}=\sum_{|i|+|I|+|\mathfrak{I}|\le K}
\theta_{i,I,\mathfrak{I}}\,\partial_{x}^i\partial_{y}^I\partial_{t}^{\mathfrak{I}}w\left(0\right)
.\end{equation*}
{F}rom this and~\eqref{svs3}, we obtain that
\begin{equation}\label{eq:orthsi3}
0=\sum_{|i|+|I|+|\mathfrak{I}|\le K}
\theta_{i,I,\mathfrak{I}}\,
\partial^{i}_x\tau(0)
\partial^{I_1}_{y_{1}} {\phi}_{1}(e_1+\epsilon Y_1)\ldots
\partial^{I_M}_{y_{M}} {\phi}_{M}(e_M+\epsilon Y_M)
\,\partial_{t_1}^{\mathfrak{I}_1}\psi_1(0)\ldots\partial_{t_l}^{\mathfrak{I}_l}\psi_l(0).\end{equation}
Moreover, using~\eqref{autofun3} and \eqref{pata7UJ:AKK3},
it follows that, for every $\mathfrak{I}_h\in\mathbb{N}$,
\begin{eqnarray*}
\partial^{\mathfrak{I}_h}_{t_h}\psi_h(0)&=&
\sum_{j=0}^{+\infty} {\frac{
\overline{{\mbox{\large{\wedn{c}}}}\,}_h^j\,\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}^j\, \alpha_h j(\alpha_h j-1)\dots(\alpha_h j-
\mathfrak{I}_h+1)
(0-a_h)^{\alpha_h j-\mathfrak{I}_h}}{\Gamma\left(\alpha_h j+1\right)}}\\
&=& \sum_{j=0}^{+\infty} {\frac{
\overline{{\mbox{\large{\wedn{c}}}}\,}_h^j\,\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}^j\, \alpha_h j(\alpha_h j-1)\dots(\alpha_h j-
\mathfrak{I}_h+1)
\,{\epsilon}^{\alpha_h j-\mathfrak{I}_h}}{\Gamma\left(\alpha_h j+1\right)\;
{\underline{\mbox{\large{\wedn{t}}}}\,_h}^{\alpha_h j-\mathfrak{I}_h}}}\\
&=& \sum_{j=1}^{+\infty} {\frac{
\overline{{\mbox{\large{\wedn{c}}}}\,}_h^j\,\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}^j\, \alpha_h j(\alpha_h j-1)\dots(\alpha_h j-
\mathfrak{I}_h+1)
\,{\epsilon}^{\alpha_h j-\mathfrak{I}_h}}{\Gamma\left(\alpha_h j+1\right)\;
{\underline{\mbox{\large{\wedn{t}}}}\,_h}^{\alpha_h j-\mathfrak{I}_h}}}.
\end{eqnarray*}
Accordingly, recalling~\eqref{TGAdef}, we find that
\begin{equation}
\label{limmittl3}\begin{split}&
\lim_{\epsilon\searrow 0}\epsilon^{\mathfrak{I}_h-\alpha_h}
\partial^{\mathfrak{I}_h}_{t_h}\psi_h(0)
=\lim_{\epsilon\searrow 0}
\sum_{j=1}^{+\infty} {\frac{
\overline{{\mbox{\large{\wedn{c}}}}\,}_h^j\,\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}^j\, \alpha_h j(\alpha_h j-1)\dots(\alpha_h j-
\mathfrak{I}_h+1)
\,{\epsilon}^{\alpha_h (j-1)}}{\Gamma\left(\alpha_h j+1\right)\;
{\underline{\mbox{\large{\wedn{t}}}}\,_h}^{\alpha_h j-\mathfrak{I}_h}}}
\\&\qquad=
{\frac{ \overline{{\mbox{\large{\wedn{c}}}}\,}_h\,\underline{\mbox{\large{\wedn{t}}}}\,_{\star,h}\,\alpha_h (\alpha_h -1)\dots(\alpha_h -\mathfrak{I}_h+1)
}{\Gamma\left(\alpha_h +1\right)\;
\underline{\mbox{\large{\wedn{t}}}}\,_h^{\alpha_h -\mathfrak{I}_h}}} =
{\frac{ \overline{{\mbox{\large{\wedn{c}}}}\,}_h\,\underline{\mbox{\large{\wedn{t}}}}\,_h^{\mathfrak{I}_h}\,\alpha_h (\alpha_h -1)\dots(\alpha_h -\mathfrak{I}_h+1)
}{\Gamma\left(\alpha_h +1\right)}}.
\end{split}\end{equation}
Also, recalling~\eqref{IBARRA}, we can write~\eqref{eq:orthsi3} as
\begin{equation}
\label{eq:ort:X3}
0=\sum_{{|i|+|I|+|\mathfrak{I}|\le K}\atop{|I|\le|\overline{I}|}}
\theta_{i,I,\mathfrak{I}}\,\partial^{i}_x\tau(0)
\partial^{I_1}_{y_{1}} {\phi}_{1}(e_1+\epsilon Y_1)\ldots
\partial^{I_M}_{y_{M}} {\phi}_{M}(e_M+\epsilon Y_M)
\partial^{\mathfrak{I}_1}_{t_1}\psi_{1}(0)\ldots
\partial^{\mathfrak{I}_l}_{t_l}\psi_{l}(0).
\end{equation}
Moreover, we define
\begin{equation*}
\Xi:=\left|\overline{I}\right|-\sum_{j=1}^M {s_j}+|\mathfrak{I}|-\sum_{h=1}^l {\alpha_h}
.\end{equation*}
Then, we
multiply~\eqref{eq:ort:X3} by $\epsilon^{\Xi}\in(0,+\infty)$, and we
send~$\epsilon$ to zero. In this way, we obtain from~\eqref{alppoo},
\eqref{resczzz}, \eqref{limmittl3} and~\eqref{eq:ort:X3} that
\begin{eqnarray*}
0&=&\lim_{\epsilon\searrow0}
\epsilon^{\Xi}
\sum_{{|i|+|I|+|\mathfrak{I}|\le K}\atop{|I|\le|\overline{I}|}}
\theta_{i,I,\mathfrak{I}}\,\partial_{x}^i\tau(0)
\partial_{y_1}^{I_1}\phi_1\left(e_1+\epsilon Y_1\right)
\ldots\partial_{y_M}^{I_M}\phi_M\left(e_M+\epsilon Y_M\right)
\partial^{\mathfrak{I}_1}_{t_1}\psi_1(0)\ldots\partial^{\mathfrak{I}_l}_{t_l}\psi_l(0)
\\ &=&\lim_{\epsilon\searrow0}
\sum_{{|i|+|I|+|\mathfrak{I}|\le K}\atop{|I|\le|\overline{I}|}}
\epsilon^{|\overline{I}|-|I|}
\theta_{i,I,\mathfrak{I}}\,\partial_x^i\tau(0)
\epsilon^{|I_1|-s_1}\partial_{y_1}^{I_1}\phi_1\left(e_1+\epsilon Y_1\right)
\ldots\epsilon^{|I_M|-s_M}\partial_{y_M}^{I_M}\phi_M\left(e_M+\epsilon Y_M\right)
\\ &&\qquad\times\epsilon^{\mathfrak{I}_1-\alpha_1}\partial^{\mathfrak{I}_1}_{t_1}\psi_1(0)\ldots\epsilon^{\mathfrak{I}_l-\alpha_l}\partial^{\mathfrak{I}_l}_{t_l}\psi_l(0)
\\&=&
\sum_{{|i|+|I|+|\mathfrak{I}|\le K}\atop{|I| = |\overline{I}|}}
C_{i,I,\mathfrak{I}}\,\theta_{i,I,\mathfrak{I}}\,
\underline{\mbox{\large{\wedn{x}}}}\,_1^{i_1}\ldots\underline{\mbox{\large{\wedn{x}}}}\,_n^{i_n}\,
e_1^{I_1}\ldots e_M^{I_M}\,
\left(- e_1 \cdot Y_1 \right)_+^{s_1-|I_1|}
\ldots
\left(- e_M \cdot Y_M\right)_+^{s_M-|I_M|}\underline{\mbox{\large{\wedn{t}}}}\,_1^{\mathfrak{I}_1}\ldots\underline{\mbox{\large{\wedn{t}}}}\,_l^{\mathfrak{I}_l}
\\&=&
\left(- e_1 \cdot Y_1 \right)_+^{s_1}
\ldots
\left(- e_M \cdot Y_M\right)_+^{s_M}\\&&\qquad\qquad\times
\sum_{{|i|+|I|+|\mathfrak{I}|\le K}\atop{|I| = |\overline{I}|}}
C_{i,I,\mathfrak{I}}\,\theta_{i,I,\mathfrak{I}}\;
\underline{\mbox{\large{\wedn{x}}}}\,^i\,e^{I}\,
\left(- e_1 \cdot Y_1 \right)_+^{-|I_1|}
\ldots
\left(- e_M \cdot Y_M\right)_+^{-|I_M|}
\underline{\mbox{\large{\wedn{t}}}}\,^{\mathfrak{I}}
,\end{eqnarray*}
for a suitable~$C_{i,I,\mathfrak{I}}\ne0$.
We observe that the latter equality
is valid for any choice of the free parameters~$(\underline{\mbox{\large{\wedn{x}}}}\,,Y,\underline{\mbox{\large{\wedn{t}}}}\,)$
in an open subset of~$\mathbb{R}^{p_1+\ldots+p_n}\times\mathbb{R}^{m_1+\ldots+m_M}\times\mathbb{R}^l$,
as prescribed in~\eqref{eq:FREEy},~\eqref{FREEmustar3} and~\eqref{FREEXXXXX}.
Now, we take new free parameters $\underline{\mbox{\large{\wedn{y}}}}\,_j$ with $\underline{\mbox{\large{\wedn{y}}}}\,_j\in\mathbb{R}^{m_j}\setminus\{0\}$ for any $j=1,\ldots,M$, and we perform in the latter identity the same change of variables as in \eqref{COMPATI}, obtaining that
$$
0=\sum_{{|i|+|I|+|\mathfrak{I}|\le K}\atop{|I| = |\overline{I}|}}
C_{i,I,\mathfrak{I}}\,\theta_{i,I,\mathfrak{I}}\;
\underline{\mbox{\large{\wedn{x}}}}\,^i\underline{\mbox{\large{\wedn{y}}}}\,^I\underline{\mbox{\large{\wedn{t}}}}\,^{\mathfrak{I}},
$$
for some $C_{i,I,\mathfrak{I}}\ne 0$.
This completes the proof of~\eqref{ipop} in case~\eqref{SC-va1} is satisfied.
Hence, we now focus on the case in which~\eqref{SC-va2} holds true.
For any $j\in\{1,\ldots,M\}$,
we consider the function~$\psi\in H^{s_j}(\mathbb{R}^{m_j})\cap C^{s_j}_0(\mathbb{R}^{m_j})$
constructed in Lemma \ref{hbump} and we call such function~$\phi_j$,
to make its dependence on~$j$ explicit in this case.
We recall that
\begin{equation}
\label{harmony}
(-\Delta)^{s_j}_{y_j}\phi_j(y_j)=0\quad\text{ in }\,B_1^{m_j}.
\end{equation}
Also, for every $j\in\{1,\ldots,M\}$, we let $e_j$
and~$Y_j$ be as in \eqref{econ} and \eqref{eq:FREEy}.
Thanks to Lemma~\ref{hbump} and Remark~\ref{RUCAPSJD},
for any $I_j\in\mathbb{N}^{m_j}$, we know that
\begin{equation}
\label{psveind}
\lim_{\epsilon\searrow 0}\epsilon^{|I_j|-s_j}\partial_{y_j}^{I_j}\phi_j(e_j+\epsilon Y_j)=\kappa_{s_j}e_j^{I_j}(-e_j\cdot Y_j)_+^{s_j-|I_j|},
\end{equation}
for some $\kappa_{s_j}\ne 0$.
Moreover, for any $h=1,\ldots, l$, we define $\overline{\tau}_h(t_h)$ as
\begin{equation}
\label{tita}
\overline{\tau}_h(t_h):=\begin{cases}
e^{\underline{\mbox{\large{\wedn{t}}}}\,_h t_h}\quad&\text{if}\quad t_h\in[-1,+\infty), \\
\displaystyle e^{-\underline{\mbox{\large{\wedn{t}}}}\,_h}\sum_{i=0}^{k_h-1}\frac{\underline{\mbox{\large{\wedn{t}}}}\,_h^i}{i!}(t_h+1)^i\quad&\text{if}\quad t_h\in(-\infty,-1),\end{cases}
\end{equation}
where
$\underline{\mbox{\large{\wedn{t}}}}\,=(\underline{\mbox{\large{\wedn{t}}}}\,_1,\ldots,\underline{\mbox{\large{\wedn{t}}}}\,_l)\in(1,2)^l$ are free parameters.
We notice that, for any $h\in\{1,\ldots, l\}$ and $\mathfrak{I}_h\in\mathbb{N}$,
\begin{equation}
\label{rescttt}
\partial^{\mathfrak{I}_h}_{t_h}\overline{\tau}_h(0)=\underline{\mbox{\large{\wedn{t}}}}\,_h^{\mathfrak{I}_h}.
\end{equation}
Now, we
define
\begin{equation}
\label{quiquoqua}
w(x,y,t):=\tau(x)\phi_1(y_1+e_1+\epsilon Y_1)\ldots\phi_M(y_M+e_M+\epsilon Y_M)\overline{\tau}_1(t_1)\ldots\overline{\tau}_l(t_l),
\end{equation}
where the setting of~\eqref{autofun1}, \eqref{otau3} and \eqref{tita} has been exploited.
We have that~$w\in\mathcal{A}$. Moreover, we point out that, since $\tau$, $\phi_1,\ldots,\phi_M$ are
compactly supported, we have that~$w\in C(\mathbb{R}^N)\cap C_0(\mathbb{R}^{N-l})$, and, using Proposition \ref{maxhlapspan}, for any $j\in\{1,\ldots,M\}$, it holds that~$\phi_j\in C^{\infty}(\mathcal{N}_j)$ for some neighborhood~$\mathcal{N}_j$
of the origin in $\mathbb{R}^{m_j}$.
Hence $w\in C^{\infty}(\mathcal{N})$.
Furthermore,
using \eqref{harmony}, when~$y$
is in a neighborhood of the origin we have that
\begin{equation*}
\begin{split}
\Lambda_{-\infty} w(x,y,t)&=\tau(x)\left({\mbox{\large{\wedn{b}}}}_1(-\Delta)^{s_1}_{y_1}\phi_1(y_1+e_1+\epsilon Y_1)\right)\ldots\phi_M(y_M+e_M+\epsilon Y_M)\overline{\tau}_1(t_1)\ldots\overline{\tau}_l(t_l) \\
&+\ldots+\tau(x)\phi_1(y_1+e_1+\epsilon Y_1)\ldots\left({\mbox{\large{\wedn{b}}}}_M(-\Delta)^{s_M}_{y_M}\phi_M(y_M+
e_M+\epsilon Y_M)\right)\overline{\tau}_1(t_1)\ldots\overline{\tau}_l(t_l)=0,
\end{split}
\end{equation*}
which gives that~$w\in\mathcal{H}$.
In addition, using~\eqref{IBARRA}, \eqref{resczzz} and \eqref{rescttt}, we have that
\begin{eqnarray*}&&
0=\theta\cdot\partial^K w(0)=\sum_{|\iota|\leq K}\theta_{i,I,\mathfrak{I}}
\partial_x^i \partial_y^I \partial_t^{\mathfrak{I}} w(0)
=\sum_{{|\iota|\leq K}\atop{|I| \le |\overline{I}|}}\theta_{i,I,\mathfrak{I}}
\partial_x^i \partial_y^I \partial_t^{\mathfrak{I}} w(0)\\
&&\qquad\qquad=\sum_{{|\iota|\leq K}\atop{|I| \le |\overline{I}|}}\theta_{i,I,\mathfrak{I}}
\,\underline{\mbox{\large{\wedn{x}}}}\,^i\partial_{y_1}^{I_1} \phi_1(e_1+\epsilon Y_1)\ldots
\partial_{y_M}^{I_M}\phi_M(e_M+\epsilon Y_M)\,\underline{\mbox{\large{\wedn{t}}}}\,^\mathfrak{I}.
\end{eqnarray*}
Hence, we set
$$ \Xi:=|\overline{I}|-\sum_{j=1}^M s_j,$$
we multiply the latter identity by~$\epsilon^\Xi$
and we exploit~\eqref{psveind}. In this way, we find that
\begin{eqnarray*}
0 &=& \lim_{\epsilon\searrow0}
\sum_{{|\iota|\leq K}\atop{|I| \le |\overline{I}|}}\epsilon^{|\overline I|-|I|}\theta_{i,I,\mathfrak{I}}
\,\underline{\mbox{\large{\wedn{x}}}}\,^i\,\epsilon^{|I_1|-s_1}\partial_{y_1}^{I_1} \phi_1(e_1+\epsilon Y_1)\ldots\epsilon^{|I_M|-s_M}
\partial_{y_M}^{I_M}\phi_M(e_M+\epsilon Y_M)\,\underline{\mbox{\large{\wedn{t}}}}\,^\mathfrak{I}\\&=&
\sum_{{|\iota|\leq K}\atop{|I| = |\overline{I}|}}\theta_{i,I,\mathfrak{I}}\,\kappa_{s_1}\dots\kappa_{s_M}\,
\,\underline{\mbox{\large{\wedn{x}}}}\,^i\,e^{I}\,(-e_1\cdot Y_1)_+^{s_1-|I_1|}\ldots(-e_M\cdot Y_M)_+^{s_M-|I_M|}
\,\underline{\mbox{\large{\wedn{t}}}}\,^\mathfrak{I}\\
&=&
(-e_1\cdot Y_1)_+^{s_1}\ldots(-e_M\cdot Y_M)_+^{s_M}
\sum_{{|\iota|\leq K}\atop{|I| = |\overline{I}|}}\theta_{i,I,\mathfrak{I}}\,\kappa_{s_1}\dots\kappa_{s_M}\,
\,\underline{\mbox{\large{\wedn{x}}}}\,^i\,e^{I}\,(-e_1\cdot Y_1)_+^{-|I_1|}\ldots(-e_M\cdot Y_M)_+^{-|I_M|}
\,\underline{\mbox{\large{\wedn{t}}}}\,^\mathfrak{I},
\end{eqnarray*}
and consequently
\begin{equation}\label{7UJHAnAXbansdo}
0=\sum_{{|\iota|\leq K}\atop{|I| = |\overline{I}|}}\theta_{i,I,\mathfrak{I}}\,\kappa_{s_1}\dots\kappa_{s_M}\,
\,\underline{\mbox{\large{\wedn{x}}}}\,^i\,e^{I}\,(-e_1\cdot Y_1)_+^{-|I_1|}\ldots(-e_M\cdot Y_M)_+^{-|I_M|}
\,\underline{\mbox{\large{\wedn{t}}}}\,^\mathfrak{I}.
\end{equation}
Now we take
free parameters $\underline{\mbox{\large{\wedn{y}}}}\,\in\mathbb{R}^{m_1+\ldots+m_M}\setminus\{0\}$
and we perform the same change of variables in \eqref{COMPATI}.
In this way, we deduce from~\eqref{7UJHAnAXbansdo} that
\begin{eqnarray*}
0=\sum_{{|i|+|I|+|\mathfrak{I}|\le K}\atop{|I| = |\overline{I}|}}C_{i,I,\mathfrak{I}}\theta_{i,I,\mathfrak{I}}\underline{\mbox{\large{\wedn{x}}}}\,^i\underline{\mbox{\large{\wedn{y}}}}\,^I\underline{\mbox{\large{\wedn{t}}}}\,^\mathfrak{I},
\end{eqnarray*}
for some $C_{i,I,\mathfrak{I}}\ne 0$, and the first claim in \eqref{ipop} is proved
in this case as well.
\end{proof}
\begin{proof}[Proof of \eqref{ipop}, case \ref{itm:case4}]
Notice that if there exists $j\in\{1,\ldots,M\}$ such that ${\mbox{\large{\wedn{b}}}}_j\ne 0$, we are in the setting of case \ref{itm:case3}.
Therefore, we assume that ${\mbox{\large{\wedn{b}}}}_j=0$ for every $j\in\{1,\ldots,M\}$.
We let $\psi$
be the function constructed in Lemma~\ref{LF}.
For each $h\in\{1,\dots,l\}$,
we let~$\overline\psi_h(t_h):=\psi(t_h)$,
to make the dependence on $h$ clear and explicit.
Then, by formulas~\eqref{LAp1} and~\eqref{LAp2},
we know that
\begin{equation}
\label{cidivu}
D^{\alpha_h}_{t_h,0}\overline{\psi}_h(t_h)=0\quad\text{ in }\,(1,+\infty)
\end{equation}
and, for every~$\ell\in\mathbb{N}$,
\begin{equation}\label{FOR2.50}
\lim_{\epsilon\searrow0} \epsilon^{\ell-\alpha_h}\partial^\ell_{t_h}
\overline{\psi}_h(1+\epsilon t_h)=\kappa_{h,\ell}\; t_h^{\alpha_h-\ell}
,\end{equation}
in the sense of distributions,
for some~$\kappa_{h,\ell}\ne0$.
Now,
we introduce a set of auxiliary parameters $\underline{\mbox{\large{\wedn{t}}}}\,=(\underline{\mbox{\large{\wedn{t}}}}\,_1,\ldots,\underline{\mbox{\large{\wedn{t}}}}\,_l)
\in(1,2)^l$, and we fix $\epsilon>0$ sufficiently small,
possibly in dependence on these parameters. Then, we define
\begin{equation}
\label{keanzzz}
a=(a_1,\dots,a_l):=\left(-\frac{\epsilon}{\underline{\mbox{\large{\wedn{t}}}}\,_1}-1,\ldots,-\frac{\epsilon}{\underline{\mbox{\large{\wedn{t}}}}\,_l}-1\right)
\in(-2,0)^l,\end{equation}
and
\begin{equation}
\label{translation}
\psi_h(t_h):=\overline{\psi}_h(t_h-a_h).
\end{equation}
With a simple computation we have that the function
in~\eqref{translation} satisfies
\begin{equation}\label{cgraz}
D^{\alpha_h}_{t_h,a_h}\psi_h(t_h)=D^{\alpha_h}_{t_h,0}
\overline{\psi}_h(t_h-a_h)=0 \quad \text{in }\,(1+a_h,+\infty)=\left(-\frac{\epsilon}{\underline{\mbox{\large{\wedn{t}}}}\,_h},+\infty\right),
\end{equation}
thanks to~\eqref{cidivu}.
In addition, for every~$\ell\in\mathbb{N}$, we have that~$\partial^\ell_{t_h}\psi_h(t_h)=
\partial^\ell_{t_h}\overline{\psi}_h(t_h-a_h)$, and therefore,
in light of~\eqref{FOR2.50} and~\eqref{keanzzz},
\begin{equation}\label{FOR2.5}
\epsilon^{\ell-\alpha_h}\partial^\ell_{t_h}\psi_h(0)=\epsilon^{\ell-\alpha_h}
\partial^\ell_{t_h}\overline{\psi}_h(-a_h)=
\epsilon^{\ell-\alpha_h}\partial^\ell_{t_h}\overline{\psi}_h\left(
1+\frac{\epsilon}{\underline{\mbox{\large{\wedn{t}}}}\,_h}
\right)\to\kappa_{h,\ell}\;\underline{\mbox{\large{\wedn{t}}}}\,_h^{\ell-\alpha_h},
\end{equation}
in the sense of distributions, as~$\epsilon\searrow0$.
Moreover, since for any $h=1,\ldots, l$, $\psi_h\in C^{k_h,\alpha_h}_{a_h}$, we can consider the extension
\begin{equation}\label{estar}
\psi^\star_h(t_h):=\begin{cases}
\psi_h(t_h)&\quad\text{ if }\,t_h\in[a_h,+\infty), \\
\displaystyle\sum_{i=0}^{k_h-1}\frac{\psi_h^{(i)}(a_h)}{i!}(t_h-a_h)^i&\quad\text{ if }\,t_h\in(-\infty,a_h),
\end{cases}
\end{equation}
and, using Lemma A.3 in \cite{CDV18} with $u:=\psi_h$, $a:=-\infty$, $b:=a_h$ and $u_\star:=\psi^\star_h$, we have that
\begin{equation}
\label{tuck}
\psi^\star_h\in C^{k_h,\alpha_h}_{-\infty}\quad\text{and}\quad D^{\alpha_h}_{t_h,-\infty}\psi_h^\star=D^{\alpha_h}_{t_h,a_h}\psi_h=0\quad\text{in every interval }\,I\Subset\left(-\frac{\epsilon}{\underline{\mbox{\large{\wedn{t}}}}\,_h},+\infty\right).
\end{equation}
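For orientation, in the simplest case~$k_h=1$ (a purely illustrative choice), the extension in~\eqref{estar} reduces to a constant continuation, namely
\begin{equation*}
\psi^\star_h(t_h)=\begin{cases}
\psi_h(t_h)&\quad\text{ if }\,t_h\in[a_h,+\infty), \\
\psi_h(a_h)&\quad\text{ if }\,t_h\in(-\infty,a_h),
\end{cases}
\end{equation*}
that is, $\psi_h$ is prolonged to the left of~$a_h$ by its boundary value, which keeps~$\psi^\star_h$ continuous across~$t_h=a_h$.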
Now, we fix a set of free parameters $\underline{\mbox{\large{\wedn{y}}}}\,=\left(\underline{\mbox{\large{\wedn{y}}}}\,_1,\ldots,\underline{\mbox{\large{\wedn{y}}}}\,_M\right)\in\mathbb{R}^{m_1+\ldots+m_M}$, and consider~$\overline\tau\in C^\infty
(\mathbb{R}^{m_1+\ldots+m_M})$, such that
\begin{equation}
\label{otau5}
\overline{\tau}(y):=\begin{cases}
\exp\left({\underline{\mbox{\large{\wedn{y}}}}\,\cdot y}\right)\quad&\text{if }\quad y\in B_1^{m_1+\ldots+m_M} ,\\
0\quad&\text{if }\quad y\in \mathbb{R}^{m_1+\ldots+m_M}\setminus B_2^{m_1+\ldots+m_M},\end{cases}
\end{equation}
where
$$
\underline{\mbox{\large{\wedn{y}}}}\,\cdot y=\sum_{j=1}^M\underline{\mbox{\large{\wedn{y}}}}\,_j\cdot y_j
$$
denotes the standard scalar product.
We notice that, for any multi-index $I\in\mathbb{N}^{m_1+\ldots+m_M}$,
\begin{equation}
\label{rescyyy}
\partial^{I}_{y}\overline{\tau}(0)=\underline{\mbox{\large{\wedn{y}}}}\,^I,
\end{equation}
where the multi-index notation has been used.
Now, we define
\begin{equation}
\label{sinaloa}
w(x,y,t):=\tau(x)\overline{\tau}(y)\psi^\star_1(t_1)\ldots\psi^\star_l(t_l),
\end{equation}
where the setting in \eqref{otau3},
\eqref{estar} and \eqref{otau5} has been exploited.
Using~\eqref{tuck}, we have that, for any $(x,y)$
in a neighborhood of the origin and~$t\in\left(-\frac{\epsilon}{2},+\infty\right)^l$,
\begin{equation*}
\begin{split}
\Lambda_{-\infty} w(x,y,t)&=\tau(x)\overline{\tau}(y)\left({\mbox{\large{\wedn{c}}}}\,_1 D^{\alpha_1}_{t_1,-\infty}\psi^\star_1(t_1)\right)
\ldots\psi^\star_l(t_l) \\
&+\ldots+\tau(x)\overline{\tau}(y)\psi^\star_1(t_1)\ldots\left({\mbox{\large{\wedn{c}}}}\,_l D^{\alpha_l}_{t_l,-\infty}\psi^\star_l(t_l)\right)=0.
\end{split}
\end{equation*}
We have that~$w\in\mathcal{A}$, and, since $\tau$ and $\overline{\tau}$ are compactly supported, we
also have that $w\in C(\mathbb{R}^N)\cap C_0(\mathbb{R}^{N-l})$.
Also, from Lemma~\ref{LF}, for any $h\in\{1,\ldots,l\}$, we know that~$\overline{\psi}_h\in C^{\infty}((1,+\infty))$, hence $\psi_h \in C^{\infty}\left(\left(
-\frac{\epsilon}{\underline{\mbox{\large{\wedn{t}}}}\,_h},+\infty\right)\right)$.
Thus, $w\in C^{\infty}(\mathcal{N})$, and consequently~$w\in\mathcal{H}$.
Recalling~\eqref{IBARRA2}, \eqref{resczzz}, and \eqref{rescyyy}, we have that
\begin{equation}\label{LATTER}\begin{split}&
0=\theta\cdot\partial^K w(0)=\sum_{|\iota|\leq K}\theta_{i,I,\mathfrak{I}}
\partial_x^i \partial_y^I \partial_t^{\mathfrak{I}} w(0)
=\sum_{{|\iota|\leq K}\atop{|\mathfrak{I}| \le |\overline{\mathfrak{I}}|}}\theta_{i,I,\mathfrak{I}}
\partial_x^i \partial_y^I \partial_t^{\mathfrak{I}} w(0)\\
&\qquad\qquad=\sum_{{|\iota|\leq K}\atop{|\mathfrak{I}| \le |\overline{\mathfrak{I}}|}}\theta_{i,I,\mathfrak{I}}
\,\underline{\mbox{\large{\wedn{x}}}}\,^i\underline{\mbox{\large{\wedn{y}}}}\,^I\partial_{t_1}^{\mathfrak{I}_1}\psi_1(0)\ldots\partial_{t_l}^{\mathfrak{I}_l}\psi_l(0).
\end{split}\end{equation}
Hence, we set
$$ \Xi:=|\overline{\mathfrak{I}}|-\sum_{h=1}^l \alpha_h
,$$
we multiply the identity in~\eqref{LATTER} by~$\epsilon^\Xi$
and we exploit~\eqref{FOR2.5}. In this way, we find that
\begin{eqnarray*}
0 &=& \lim_{\epsilon\searrow0}
\sum_{{|\iota|\leq K}\atop{|\mathfrak{I}| \le |\overline{\mathfrak{I}}|}}\epsilon^{|\overline{\mathfrak{I}}|-|\mathfrak{I}|}\theta_{i,I,\mathfrak{I}}
\,\underline{\mbox{\large{\wedn{x}}}}\,^i\,\underline{\mbox{\large{\wedn{y}}}}\,^I\,\epsilon^{\mathfrak{I}_1-\alpha_1}\partial_{t_1}^{\mathfrak{I}_1} {\psi}_1(0)\ldots\epsilon^{\mathfrak{I}_l-\alpha_l}\partial_{t_l}^{\mathfrak{I}_l} {\psi}_l(0)\\&=&
\sum_{{|\iota|\leq K}\atop{|\mathfrak{I}| = |\overline{\mathfrak{I}}|}}\theta_{i,I,\mathfrak{I}}\;
\kappa_{1,\mathfrak{I}_1}\dots
\kappa_{l,\mathfrak{I}_l}\;
\underline{\mbox{\large{\wedn{x}}}}\,^i\,\underline{\mbox{\large{\wedn{y}}}}\,^I\,\underline{\mbox{\large{\wedn{t}}}}\,_1^{\mathfrak{I}_1-\alpha_1}\ldots\underline{\mbox{\large{\wedn{t}}}}\,_l^{\mathfrak{I}_l-\alpha_l}\\
&=&
\underline{\mbox{\large{\wedn{t}}}}\,_1^{-\alpha_1}\ldots\underline{\mbox{\large{\wedn{t}}}}\,_l^{-\alpha_l}
\sum_{{|\iota|\leq K}\atop{|\mathfrak{I}| = |\overline{\mathfrak{I}}|}}\theta_{i,I,\mathfrak{I}}
\;\kappa_{1,\mathfrak{I}_1}\dots
\kappa_{l,\mathfrak{I}_l}\;
\underline{\mbox{\large{\wedn{x}}}}\,^i\,\underline{\mbox{\large{\wedn{y}}}}\,^I\,\underline{\mbox{\large{\wedn{t}}}}\,_1^{\mathfrak{I}_1}\ldots\underline{\mbox{\large{\wedn{t}}}}\,_l^{\mathfrak{I}_l},
\end{eqnarray*}
and consequently
\begin{equation*}
0=\sum_{{|\iota|\leq K}\atop{|\mathfrak{I}| = |\overline{\mathfrak{I}}|}}
\theta_{i,I,\mathfrak{I}}\;
\kappa_{1,\mathfrak{I}_1}\dots
\kappa_{l,\mathfrak{I}_l}\;
\,\underline{\mbox{\large{\wedn{x}}}}\,^i\,\underline{\mbox{\large{\wedn{y}}}}\,^I
\,\underline{\mbox{\large{\wedn{t}}}}\,^\mathfrak{I},
\end{equation*}
and the second claim in \eqref{ipop} is proved
in this case as well.
\end{proof}
\section{Every function is locally $\Lambda_{-\infty}$-harmonic up to a small error,
and completion of the proof of Theorem \ref{theone2}}
\label{s:fourth}
In this section we complete the proof of Theorem \ref{theone2}
(which in turn implies Theorem \ref{theone} via Lemma~\ref{GRAT}).
By standard approximation arguments we can reduce to
the case in which $f$ is a polynomial, and hence, by the linearity
of the operator~$\Lambda_{-\infty}$, to the case in which $f$ is a monomial.
The details of the proof are as follows:
\subsection{Proof of Theorem \ref{theone2} when $f$ is a monomial}\label{7UHASGBSBSBBSB}
We prove Theorem~\ref{theone2} under the initial assumption
that $f$ is a monomial, that is
\begin{equation}\label{iorade}
f\left(x,y,t\right)=\frac{x_1^{i_1}\ldots x_n^{i_n}y_1^{I_1}\ldots y_M^{I_M}t_1^{\mathfrak{I}_1}\ldots t_l^{\mathfrak{I}_l}}{\iota!}=\frac{x^iy^It^{\mathfrak{I}}}{\iota!}=
\frac{(x ,y,t)^{\iota}}{\iota!},
\end{equation}
where $\iota!:=i_1!\ldots i_n!\,I_1!\ldots I_M!\,\mathfrak{I}_1!\ldots\mathfrak{I}_l!$, with $I_\beta!:=I_{\beta,1}!\ldots I_{\beta,m_\beta}!$ for all $\beta=1,\ldots,M$ and $i_\chi!:=i_{\chi,1}!\ldots i_{\chi,p_\chi}!$ for all $\chi=1,\ldots,n$.
To this end, we argue as follows.
We
consider $\eta\in\left(0,1\right)$, to be taken
sufficiently small with respect to the
parameter~$\epsilon>0$ which has been fixed
in the statement of Theorem~\ref{theone2}, and we define
$$ {\mathcal{T}}_\eta(x,y,t):=\left(
\eta^{\frac{1}{r_1}}x_1,\ldots,\eta^{\frac{1}{r_n}}x_n,\eta^{\frac{1}{2s_1}}y_1,\ldots,\eta^{\frac{1}{2s_M}}y_M,\eta^{\frac{1}{\alpha_1}}t_1,\ldots,\eta^{\frac{1}{\alpha_l}}t_l\right).$$
We also define
\begin{equation}\label{iorade2}
\gamma:=\sum_{j=1}^n {\frac{|i_j|}{r_j}}+\sum_{j=1}^M {\frac{\left|I_j\right|}{2s_j}}+\sum_{j=1}^l {\frac{\mathfrak{I}_j}{\alpha_j}}
,\end{equation}
and
\begin{equation}
\label{am}
\delta:=\min\left\{\frac{1}{r_1},\ldots,\frac{1}{r_n},\frac{1}{2s_1},\ldots,\frac{1}{2s_M},\frac{1}{\alpha_1},\ldots,\frac{1}{\alpha_l}\right\}.
\end{equation}
We also take $K_0\in\mathbb{N}$ such that
\begin{equation}
\label{blim}
K_0\geq\frac{\gamma+1}{\delta}
\end{equation}
and we let
\begin{equation}
\label{blo}
K:=K_0+\left|i\right|+\left|I\right|+\left|\mathfrak{I}\right|+\ell=
K_0+\left|\iota\right|+\ell,
\end{equation}
where $\ell$ is the fixed integer given in the statement of Theorem~\ref{theone2}.
By Lemma \ref{lemcin}, there exist a neighborhood~$\mathcal{N}$
of the origin and a function~$w\in C\left(\mathbb{R}^N\right)\cap C_0\left(\mathbb{R}^{N-l}\right)\cap C^\infty\left(\mathcal{N}\right)\cap\mathcal{A}$ such that
\begin{equation}\label{7UJHAanna}
{\mbox{$\Lambda_{-\infty} w=0$ in $\mathcal{N}$, }}\end{equation}
and such that
\begin{equation}\label{9IKHAHSBBNSBAA}
\begin{split}&
{\mbox{all the derivatives of $w$ in 0 up to order $K$ vanish,}}\\
&{\mbox{with the exception of $\partial^\iota w \left(0\right)$ which equals~$1$,}}\end{split}\end{equation}
being~$\iota$ as in~\eqref{iorade}.
Recalling the definition of~$\mathcal{A}$ on page~\pageref{CALSASS},
we also know that
\begin{equation}\label{SPE12129dd}
{\mbox{$\partial^{k_h}_{t_h}w=0$
in~$(-\infty,a_h)$, }}\end{equation}
for suitable~$a_h\in(-2,0)$,
for all~$h\in\{1,\dots,l\}$.
In this way, setting
\begin{equation}
\label{quaqui}
g:=w-f,
\end{equation}
we deduce from~\eqref{9IKHAHSBBNSBAA} that
$$
\partial^\sigma g\left(0\right)=0\quad \text{ for any }
\sigma\in\mathbb{N}^N \text{ with }\left|\sigma\right|\leq K.
$$
Accordingly, in $\mathcal{N}$ we can write
\begin{equation}
\label{quaqua}
g\left(x,y,t\right)=\sum_{\left|\tau\right|\geq K+1} {x^{\tau_1}y^{\tau_2}t^{\tau_3}h_\tau\left(x,y,t\right)},
\end{equation}
for some $h_\tau$ smooth in $\mathcal{N}$, where the multi-index
notation $\tau=(\tau_1,\tau_2,\tau_3)$ has been used.
Now, we define
\begin{equation}\label{JAncasxciNasd}
u\left(x,y,t\right):=\frac{1}{\eta^\gamma}
w\left({\mathcal{T}}_\eta(x,y,t)\right).
\end{equation}
In light of~\eqref{SPE12129dd}, we notice that
$\partial^{k_h}_{t_h}u=0$
in~$(-\infty,a_h/\eta^{\frac1{\alpha_h}})$, for all~$h\in\{1,\dots,l\}$,
and therefore~$u\in C\left(\mathbb{R}^N\right)\cap C_0(\mathbb{R}^{N-l})\cap C^\infty
\left({\mathcal{T}}_\eta(\mathcal{N})\right)\cap\mathcal{A}$. We also claim that
\begin{equation}\label{DEN}
{\mathcal{T}}_\eta([-1,1]^{N-l}\times(a_1,+\infty)\times\ldots\times(a_l,+\infty))\subseteq {\mathcal{N}}.
\end{equation}
To check this, let~$(x,y,t)\in[-1,1]^{N-l}\times(a_1,+\infty)\times\ldots\times(a_l,+\infty)$
and~$(X,Y,T):={\mathcal{T}}_\eta(x,y,t)$.
Then, we have that~$|X_1|=\eta^{\frac{1}{r_1}}|x_1|\le \eta^{\frac{1}{r_1}}$,
~$|Y_1|=\eta^{\frac{1}{2s_1}}|y_1|\le \eta^{\frac{1}{2s_1}}$,
~$T_1=\eta^{\frac{1}{\alpha_1}}t_1> a_1\eta^{\frac{1}{\alpha_1}}>-1$,
provided~$\eta$ is small enough.
Repeating this argument, we obtain that, for small~$\eta$,
\begin{equation}\label{CLOS}
{\mbox{$(X,Y,T)$ is as
close to the origin as we wish.}}\end{equation}
{F}rom \eqref{CLOS} and the fact that~${\mathcal{N}}$
is an open set, we infer that~$(X,Y,T)\in{\mathcal{N}}$,
and this proves~\eqref{DEN}.
Thanks to~\eqref{7UJHAanna} and~\eqref{DEN}, we have that, in~$B_1^{N-l}\times(-1,+\infty)^l$,
\begin{align*}
&\eta^{\gamma-1}\,\Lambda_{-\infty} u\left(x,y,t\right) \\
=\;&\sum_{j=1}^n {{\mbox{\large{\wedn{a}}}}_j\partial_{x_j}^{r_j}w
\left({\mathcal{T}}_\eta(x,y,t)\right)}
+\sum_{j=1}^M {{\mbox{\large{\wedn{b}}}}_j(-\Delta)^{s_j}_{y_j} w
\left({\mathcal{T}}_\eta(x,y,t)\right)}
+\sum_{j=1}^l {{\mbox{\large{\wedn{c}}}}\,_jD_{t_j,-\infty}^{\alpha_j}w
\left({\mathcal{T}}_\eta(x,y,t)\right) }\\=\;&\Lambda_{-\infty}w
\left({\mathcal{T}}_\eta(x,y,t)\right)\\
=\;&0.
\end{align*}
These observations establish that $u$ solves the equation in $B_1^{N-l}\times(-1,+\infty)^l$ and that $u$ vanishes when $|(x,y)|\ge R$,
for some~$R>1$, and thus the claims in~\eqref{MAIN EQ:2}
and~\eqref{ESTENSIONE}
are proved.
Now we prove that $u$ approximates $f$, as claimed in~\eqref{IAzofm:2}. For this, using the monomial structure of $f$ in~\eqref{iorade}
and the definition of $\gamma$ in~\eqref{iorade2}, we have,
in a multi-index notation,
\begin{equation}
\label{monsca}
\begin{split}
&\frac{1}{\eta^\gamma}f\left({\mathcal{T}}_\eta(x,y,t)\right)=\frac{1}{\eta^\gamma\,\iota!} \;
(\eta^{\frac{1}{r}}x)^i (\eta^{\frac{1}{2s}}y)^I \big(\eta^{\frac{1}{\alpha}}t\big)^\mathfrak{I}
=\frac{1}{\iota!} x^i y^I t^\mathfrak{I}=f(x,y,t).
\end{split}
\end{equation}
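To fix ideas, the scaling identity above can be checked in a minimal concrete instance, in which the values $n=M=l=1$, $r_1=2$, $s_1=\frac12$, $\alpha_1=1$ and $\iota=(1,1,1)$ are chosen for illustration only: in this case~\eqref{iorade2} gives $\gamma=\frac12+1+1=\frac52$, and indeed
\begin{equation*}
\frac{1}{\eta^{5/2}}\,f\left(\eta^{1/2}x,\eta y,\eta t\right)
=\frac{\eta^{1/2}\,\eta\,\eta}{\eta^{5/2}}\,xyt=xyt=f(x,y,t).
\end{equation*}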
Consequently, by~\eqref{quaqui}, \eqref{quaqua}, \eqref{JAncasxciNasd} and~\eqref{monsca},
\begin{align*}
u\left(x,y,t\right)-f\left(x,y,t\right)
&=\frac{1}{\eta^\gamma}g\left(\eta^{\frac{1}{r_1}}x_1,\ldots,\eta^{\frac{1}{r_n}}x_n,\eta^{\frac{1}{2s_1}}y_1,\ldots,\eta^{\frac{1}{2s_M}}y_M,\eta^{\frac{1}{\alpha_1}}t_1,\ldots,\eta^{\frac{1}{\alpha_l}}t_l\right) \\
&=\sum_{\left|\tau\right|\geq K+1} {\eta^{\left|\frac{\tau_1}{r}\right|+\left|\frac{\tau_2}{2s}\right|+\left|\frac{\tau_3}{\alpha}\right|-\gamma}x^{\tau_1}y^{\tau_2}t^{\tau_3}h_\tau\left(\eta^{\frac{1}{r}}x,\eta^{\frac{1}{2s}}y,\eta^{\frac{1}{\alpha}}t\right)}
,\end{align*}
where a multi-index notation has been used, e.g. we have written
$$ \frac{\tau_1}{r}:=\left( \frac{\tau_{1,1}}{r_1},\dots,\frac{\tau_{1,n}}{r_n}\right)
\in\mathbb{R}^n.$$
Therefore, for any multi-index $\beta=\left(\beta_1,\beta_2,\beta_3\right)$ with $\left|\beta\right|\leq \ell$,
\begin{equation}
\label{eq:quaquo}\begin{split}
&\partial^\beta\left(u\left(x,y,t\right)-f\left(x,y,t\right)\right)
\\=\,&\partial^{\beta_1}_{x}\partial^{\beta_2}_{y}\partial^{\beta_3}_{t}\left(u\left(x,y,t\right)-f\left(x,y,t\right)\right)
\\=\,&\sum_{\substack{\left|\beta'_1\right|+\left|\beta''_1\right|=\left|\beta_1\right| \\ \left|\beta'_2\right|+\left|\beta''_2\right|=\left|\beta_2\right| \\ \left|\beta'_3\right|+\left|\beta''_3\right|=\left|\beta_3\right| \\ \left|\tau\right|\geq K+1}} {c_{\tau,\beta}\;\eta^{\kappa_{\tau,\beta}}\; x^{\tau_1-\beta'_1}y^{\tau_2-\beta'_2}t^{\tau_3-\beta'_3}\partial_{x}^{\beta''_1}\partial_{y}^{\beta''_2}\partial_{t}^{\beta''_3}h_\tau\left(\eta^{\frac{1}{r}}x,\eta^{\frac{1}{2s}}y,\eta^{\frac{1}{\alpha}}t\right)},
\end{split}\end{equation}
where
$$
\kappa_{\tau,\beta}:=\left|\frac{\tau_1}{r}\right|+\left|\frac{\tau_2}{2s}\right|+\left|\frac{\tau_3}{\alpha}\right|-\gamma+\left|\frac{\beta''_1}{r}\right|+\left|\frac{\beta''_2}{2s}\right|+\left|\frac{\beta''_3}{\alpha}\right|,
$$
for suitable coefficients $c_{\tau,\beta}$. Thus, to complete the proof of~\eqref{IAzofm:2}, we need to show that this quantity is small for small $\eta$.
To this aim, we use~\eqref{am}, \eqref{blim}
and~\eqref{blo} to see that
\begin{eqnarray*}\kappa_{\tau,\beta}
&\geq&\left|\frac{\tau_1}{r}\right|+\left|\frac{\tau_2}{2s}\right|
+\left|\frac{\tau_3}{\alpha}\right|-\gamma \\
& \geq&\delta\left(\left|\tau_1\right|+\left|\tau_2\right|
+\left|\tau_3\right|\right)-\gamma\\&\geq& K\delta
-\gamma\\&\geq& K_0\delta-\gamma\\&\geq& 1.
\end{eqnarray*}
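As a sanity check on these exponents, in the illustrative setting $r_1=2$, $s_1=\frac12$ and $\alpha_1=1$ (so that $\delta=\frac12$, by~\eqref{am}) with $\gamma=\frac52$, condition~\eqref{blim} forces $K_0\ge 7$, and then
\begin{equation*}
\kappa_{\tau,\beta}\geq K_0\delta-\gamma\geq 7\cdot\frac12-\frac52=1,
\end{equation*}
consistently with the chain of inequalities above.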
Consequently, we deduce from \eqref{eq:quaquo} that $\left\|u-f\right\|_{C^\ell\left(B_1^N\right)}\leq C\eta$ for some $C>0$. By choosing $\eta$ sufficiently small with respect to $\epsilon$, this implies the claim in~\eqref{IAzofm:2}. This completes the proof of
Theorem \ref{theone2} when $f$ is a monomial.
\subsection{Proof of Theorem \ref{theone2} when $f$ is a polynomial}\label{AUJNSLsxcrsd}
Now, we consider the case in which~$f$ is a polynomial. In this case, we can write $f$ as
$$
f\left(x,y,t\right)=\sum_{j=1}^J c_jf_j\left(x,y,t\right),
$$
where each $f_j$ is a monomial, $J\in\mathbb{N}$ and $c_j\in\mathbb{R}$ for all $j=1,\ldots, J$.
Let $$c:=\max_{j\in\{1,\dots,J\}} |c_j|.$$ Then,
by the work done in Subsection~\ref{7UHASGBSBSBBSB},
we know that the claim
in Theorem \ref{theone2} holds true for each $f_j$, and so we can find $a_j\in(-\infty,0)^l$, $u_j\in C^\infty\left(B_1^N\right)\cap C\left(\mathbb{R}^N\right)
\cap\mathcal{A}$
and $R_j>1$ such that $\Lambda_{-\infty} u_j=0$ in $B_1^{N-l}\times(-1,+\infty)^l$, $\left\|u_j-f_j\right\|_{C^\ell\left(B_1^N\right)}\leq\epsilon$ and $u_j=0$ if $|(x,y)|\ge R_j$.
Hence, we set
$$
u\left(x,y,t\right):=\sum_{j=1}^J c_ju_j\left(x,y,t\right),
$$
and we see that
\begin{equation}
\label{pazxc}
\left\|u-f\right\|_{C^\ell\left(B_1^N\right)}\leq\sum_{j=1}^J {\left|c_j\right|\left\|u_j-f_j\right\|_{C^\ell\left(B_1^N\right)}}\leq cJ\epsilon.
\end{equation}
Also, by the linearity of $\Lambda_{-\infty}$, we have that $\Lambda_{-\infty} u=0$ in $B_1^{N-l}\times(-1,+\infty)^l$. Finally, $u$ is supported in $B_R^{N-l}$ in the variables $(x,y)$, being $$R:=\max_{j\in\{1,\dots,J\}} R_j.$$ This proves
Theorem \ref{theone2} when $f$ is a polynomial (up to replacing $\epsilon$ with $cJ\epsilon$).
\subsection{Proof of Theorem \ref{theone2} for a general $f$}
Now we deal with the case of a general~$f$. To this end, we exploit
Lemma~2 in~\cite{MR3626547} and we see that
there exists a polynomial $\tilde{f}$ such that
\begin{equation}\label{6ungfbnreog}
\|f-\tilde{f}\|_{C^\ell(B_1^N)}\leq\epsilon.\end{equation}
Then, applying the result already proven in Subsection~\ref{AUJNSLsxcrsd}
to the polynomial $\tilde{f}$,
we can find $a\in(-\infty,0)^l$, $u\in C^\infty\left(B_1^N\right)
\cap C\left(\mathbb{R}^N\right)\cap\mathcal{A}$ and $R>1$ such that
\begin{eqnarray*}
&& \Lambda_{-\infty} u=0 \quad{\mbox{ in }}B_1^{N-l}\times(-1,+\infty)^l, \\&&
u=0 \qquad\quad{\mbox{ if }}|(x,y)|\ge R,\\&&
\partial^{k_h}_{t_h} u=0 \quad\quad{\mbox{if }}t_h\in(-\infty,a_h),\qquad{\mbox{ for all }}h\in\{1,\dots,l\},
\\ {\mbox{and }}&&\|u-\tilde{f}\|_{C^\ell(B_1^N)}
\leq\epsilon.\end{eqnarray*}
Then, recalling~\eqref{6ungfbnreog}, we see that
$$ \|u-f\|_{C^\ell(B_1^N)}\leq\|u-\tilde{f}
\|_{C^\ell(B_1^N)}+\|f-\tilde{f}
\|_{C^\ell(B_1^N)}\leq2\epsilon.$$ Hence, the proof
of Theorem~\ref{theone2} is complete.
\qed
\begin{appendix}
\appendixpage
\noappendicestocpagenum
\addappheadtotoc
\chapter{Some applications}\label{APPEA}
In this appendix we give
some applications of the approximation results obtained and discussed
in this book. These examples exploit particular cases of the
operator $\Lambda_a$, namely, when $s\in(0,1)$ and $\Lambda_a$ is the fractional Laplacian $(-\Delta)^s$, or the fractional heat operator $\partial_t+(-\Delta)^s$.
Similar applications have been pointed
out in~\cite{MR3579567, AV, 2017arXiv170806300R}.
\begin{example} [The classical Harnack inequality\index{inequality!Harnack} fails for $s$-harmonic functions]{\rm
The Harnack inequality, in its classical formulation, says that if $u$ is a nontrivial and nonnegative harmonic function in $B_1$, then, for any $0<r<1$, there exists $c=c(n,r)>0$ such that
\begin{equation}
\label{appendeq}
\sup_{B_r}u\leq c\inf_{B_r}u.
\end{equation}
The same result is not true for $s$-harmonic functions. To construct a counterexample, consider the smooth function $f(x)=|x|^2$, and, for a small $\epsilon>0$, let $v=v_\epsilon$ be the function
provided by Theorem \ref{theone}, where we choose $\ell=0$.
Notice that, if $x\in B_1\setminus B_{r/2}$,
$$
v(x)\geq f(x)-\left\|v-f\right\|_{L^{\infty}(B_1)}\geq \frac{r^2}{4}-\epsilon>\frac{r^2}{8},
$$
provided $\epsilon$ is small enough, while
$$
v(0)\leq f(0)+\left\|v-f\right\|_{L^{\infty}(B_1)}\leq\epsilon<\frac{r^2}{8}.
$$
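For concreteness, the smallness conditions on~$\epsilon$ in the last two estimates can be made fully explicit: both hold as soon as
\begin{equation*}
\frac{r^2}{4}-\epsilon>\frac{r^2}{8}
\quad\Longleftrightarrow\quad
\epsilon<\frac{r^2}{8},
\end{equation*}
so that, choosing for instance $r=\frac12$ (an arbitrary illustration), any $\epsilon\in\left(0,\frac{1}{32}\right)$ works.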
Hence, we have that $v(0)<v(x)$ for any $x\in B_1\setminus B_{r/2}$, and therefore the minimum of $v$ in $B_1$ is attained in some point $\overline{x}\in \overline{B_{r/2}}$. Then, we define
$$
u(x):=v(x)-v(\overline{x}).
$$
Notice that $u$ is $s$-harmonic in $B_1$, since so is~$v$.
Also, $u\geq 0$ in $B_1$ by construction, and~$u>0$ in $B_1\setminus B_{r/2}$. On the other hand, since $\overline{x}\in B_r$,
$$
\inf_{B_r} u=u(\overline{x})=0,
$$
which implies that $u$ cannot satisfy an inequality such as \eqref{appendeq}.
As a matter of fact, in the fractional case, the analogue of the Harnack inequality requires~$u$
to be nonnegative in the whole of~${\mathbb{R}}^n$, hence a ``global'' condition
is needed to obtain a ``local'' oscillation bound. See e.g.~\cite{MR2817382}
and the references therein for a complete discussion
of nonlocal Harnack inequalities.
}\end{example}
\begin{example}[A logistic equation with nonlocal interactions]{\rm
We consider the logistic equation taken into account in \cite{MR3579567}
\begin{equation}
\label{eq:logistic}
-(-\Delta)^s u+(\sigma-\mu u)u+\tau(J*u)=0,
\end{equation}
where $s\in(0,1]$, $\tau\in[0,+\infty)$ and $\sigma,\mu, J$ are nonnegative functions. The symbol $*$ denotes as usual the convolution product between $J$ and $u$. Moreover, the convolution kernel $J$ is assumed to be of unit mass and even, namely
$$
\int_{\mathbb{R}^n} J(x)dx=1
$$
and
$$
J(-x)=J(x)\quad\text{for any } x\in\mathbb{R}^n.
$$
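As a concrete example of an admissible kernel (not needed in what follows), one can keep in mind the Gaussian
$$
J(x):=(4\pi)^{-n/2}\,e^{-|x|^2/4},
$$
which is even and has unit mass, in view of the standard identity $\int_{\mathbb{R}^n}e^{-|x|^2/4}\,dx=(4\pi)^{n/2}$.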
In this framework, the solution $u$ denotes the density of a population living in some environment $\Omega\subseteq\mathbb{R}^n$, while the functions $\sigma$ and $\mu$ model, respectively, the growth and death effects on the population.
The equation is driven by the fractional Laplacian, which models a nonlocal dispersal strategy that has been observed experimentally in nature and may be related to optimal hunting strategies and to adaptation to the environment stimulated by natural selection.
We state here a result expressing the fact that a
population with a nonlocal strategy can plan the distribution of resources in a strategic region better than a population with a local one.
Namely, fixing $\Omega=B_1$, one can find a solution of a slightly perturbed version
of~\eqref{eq:logistic} in~$B_1$, compactly supported in a larger ball~$B_{R_\epsilon}$,
where~$\epsilon\in(0,1)$ denotes the perturbation.
The strategic plan consists in properly adjusting
the resources in $B_{R_{\epsilon}}\setminus B_1$ (that is,
a bounded region in which the equation is not satisfied) in order to consume almost
all the given resources in $B_1$.
The detailed statement goes as follows:
\begin{theorem}\label{VECH}
Let $s\in(0,1)$ and $\ell\in\mathbb{N}$, $\ell\geq 2$. Assume that
$\sigma,\mu\in C^{\ell}(\overline{B_2})$, with
$$
\inf_{\overline{B_2}}\mu>0,\qquad\inf_{\overline{B_2}}\sigma>0.
$$
For any fixed $\epsilon\in(0,1)$, there exist
a nonnegative function $u_\epsilon$,
$R_\epsilon>2$ and $\sigma_\epsilon\in C^{\ell}(\overline{B_1})$ such that
\begin{equation*}
(-\Delta)^s u_\epsilon=(\sigma_\epsilon-\mu u_\epsilon)u_\epsilon+
\tau(J*u_\epsilon)\quad\mbox{ in } B_1,
\end{equation*}
\begin{equation*}
u_\epsilon=0\quad\text{in } \mathbb{R}^n\setminus B_{R_{\epsilon}},
\end{equation*}
\begin{equation*}
\left\|\sigma_\epsilon-\sigma\right\|_{C^{\ell}(\overline{B_1})}\leq\epsilon,
\end{equation*}
\begin{equation*}
u_\epsilon\geq\mu^{-1}\sigma_\epsilon\quad\text{in } B_1.
\end{equation*}
\end{theorem}
}
\end{example}
\begin{example}{\rm Higher order nonlocal equations also appear naturally
in several contexts, see e.g.~\cite{MR3051400} for a nonlocal version
of the Cahn-Hilliard phase coexistence model.
Higher order operators have also appeared in connection with logistic equations,
see e.g.~\cite{MR3578282}. In this spirit, we point out a version
of Theorem~\ref{VECH} which is new and relies
on Theorem~\ref{theone}. Its content is that nonlocal logistic equations
(of any order and with nonlocality given in either time or space, or both)
admit solutions which can adapt arbitrarily well to any given resource.
The precise statement is the following:
\begin{theorem}
Let $s\in(0,+\infty)$, $\alpha\in(0,+\infty)$ and $\ell\in\mathbb{N}$, $\ell\geq 2$.
Assume that
\begin{equation}\label{EFVA}
{\mbox{either~$s\not\in{\mathbb{N}}$ or~$\alpha\not\in{\mathbb{N}}$.}}\end{equation}
Let~$\sigma,\mu\in C^{\ell}(\overline{B_2})$, with
\begin{equation}\label{INAsdH}
\inf_{\overline{B_1}}\mu>0.
\end{equation}
For any fixed $\epsilon\in(0,1)$, there exist
a nonnegative function $u_\epsilon$,
$R_\epsilon>2$, $a_\epsilon<0$,
and $\sigma_\epsilon\in C^{\ell}(\overline{B_1})$ such that
\begin{equation}
\label{eq:primeq77}\begin{split}&
D^\alpha_{t,a_\epsilon} u_\epsilon(x,t)+
(-\Delta)^s u_\epsilon(x,t)=\Big(\sigma_\epsilon(x,t)-\mu (x,t)u_\epsilon(x,t)\Big)\,u_\epsilon(x,t)\\
&\qquad\mbox{ for all $(x,t)\in{\mathbb{R}}^p\times{\mathbb{R}}$ with $|(x,t)|<1$},\end{split}
\end{equation}
\begin{equation}
\label{eq:secondeq77}
u_\epsilon(x,t)=0\quad\text{if } |(x,t)|\ge R_{\epsilon},
\end{equation}
\begin{equation}
\label{eq:terzeq77}
\left\|\sigma_\epsilon-\sigma\right\|_{C^{\ell}(\overline{B_1})}\leq\epsilon,
\end{equation}
\begin{equation}
\label{eq:quarteq77}
u_\epsilon=\mu^{-1}\sigma_\epsilon\geq\mu^{-1}\sigma-\epsilon\quad\text{in } B_1.
\end{equation}
\end{theorem}
\begin{proof}
We use Theorem \ref{theone} in the case in which $\Lambda_a:=
D^\alpha_{t,a}+(-\Delta)^s$. Let~$f:=\sigma/\mu$. Then,
by Theorem~\ref{theone}, which can be exploited here in view of~\eqref{EFVA},
we obtain the existence of suitable~$u_\epsilon$,
$R_\epsilon>2$ and $a_\epsilon<0$ satisfying~\eqref{eq:secondeq77},
\begin{equation}
\label{eq:primeq7781}\begin{split}&
D^\alpha_{t,a_\epsilon} u_\epsilon(x,t)+
(-\Delta)^s u_\epsilon(x,t)=0\\
&\qquad\mbox{ for all $(x,t)\in{\mathbb{R}}^p\times{\mathbb{R}}$ with $|(x,t)|<1$},\end{split}
\end{equation}
and
\begin{equation}
\label{eq:primeq7789}
\left\|u_\epsilon-f\right\|_{C^{\ell}(\overline{B_1})}\leq\epsilon.\end{equation}
Then, we set~$\sigma_\epsilon:=\mu u_\epsilon$, and then, by~\eqref{eq:primeq7789},
\begin{equation}\label{7hS823} \begin{split}
\left\|\sigma_\epsilon-\sigma\right\|_{C^{\ell}(\overline{B_1})}\,&\leq C\,
\|\mu\|_{C^{\ell}(\overline{B_1})}\,
\left\|\frac{\sigma_\epsilon}{\mu}-\frac{\sigma}{\mu}\right\|_{C^{\ell}(\overline{B_1})}\\
&= C\,
\|\mu\|_{C^{\ell}(\overline{B_1})}\,
\left\|u_\epsilon-f\right\|_{C^{\ell}(\overline{B_1})}\\&\le C\,
\|\mu\|_{C^{\ell}(\overline{B_1})}\,\epsilon,
\end{split}\end{equation}
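The first inequality in~\eqref{7hS823} relies on a Leibniz-type product estimate in the $C^\ell$ norm, which can be checked term by term via the Leibniz rule: writing~$\sigma_\epsilon-\sigma=\mu\,g$ with~$g:=\frac{\sigma_\epsilon}{\mu}-\frac{\sigma}{\mu}$, one has, for a constant~$C=C(\ell,n)>0$,
$$
\left\|\mu\,g\right\|_{C^{\ell}(\overline{B_1})}\leq C\,\|\mu\|_{C^{\ell}(\overline{B_1})}\,\|g\|_{C^{\ell}(\overline{B_1})}.
$$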
which gives~\eqref{eq:terzeq77}, up to renaming~$\epsilon$.
Moreover, if~$|(x,t)|<1$,
$$ (\sigma_\epsilon-\mu u_\epsilon)u_\epsilon=0=
D^\alpha_{t,a_\epsilon} u_\epsilon+
(-\Delta)^s u_\epsilon,$$
thanks to~\eqref{eq:primeq7781}, and this proves~\eqref{eq:primeq77}.
In addition, recalling~\eqref{7hS823} and~\eqref{INAsdH},
$$ u_\epsilon=\mu^{-1}\sigma_\epsilon\ge \mu^{-1}\sigma-
\frac{1}{\inf_{\overline{B_1}}\mu}\|\sigma-\sigma_\epsilon\|_{L^\infty(B_1)}
\ge\mu^{-1}\sigma-
\frac{\|\mu\|_{C^{\ell}(\overline{B_1})}\,\epsilon}{\inf_{\overline{B_1}}\mu},$$
in~$B_1$, which proves~\eqref{eq:quarteq77}, up to renaming~$\epsilon$.
\end{proof}
}\end{example}
\end{appendix}
\begin{bibdiv}
\begin{biblist}
\bib{AJS2}{article}{
author={Abatangelo, Nicola},
author={Jarohs, Sven},
author={Salda\~na, Alberto},
title={Positive powers of the Laplacian: From hypersingular integrals to
boundary value problems},
journal={Commun. Pure Appl. Anal.},
volume={17},
date={2018},
number={3},
pages={899--922},
issn={1534-0392},
review={\MR{3809107}},
doi={10.3934/cpaa.2018045},
}
\bib{AJS3}{article}{
author={Abatangelo, Nicola},
author={Jarohs, Sven},
author={Salda\~na, Alberto},
title={Green function and Martin kernel for higher-order fractional
Laplacians in balls},
journal={Nonlinear Anal.},
volume={175},
date={2018},
pages={173--190},
issn={0362-546X},
review={\MR{3830727}},
doi={10.1016/j.na.2018.05.019},
}
\bib{AJS1}{article}{
author={Abatangelo, Nicola},
author={Jarohs, Sven},
author={Salda\~na, Alberto},
title={On the loss of maximum principles for higher-order fractional Laplacians},
journal={to appear in Proc. Amer. Math. Soc.},
doi={10.1090/proc/14165},
}
\bib{ABX}{article}{
author={Abatangelo, Nicola},
author={Jarohs, Sven},
author={Salda\~na, Alberto},
title={Integral representation of solutions to higher-order fractional
Dirichlet problems on balls},
journal={to appear in Commun. Contemp. Math.},
doi={10.1142/S0219199718500025},
}
\bib{AV}{article}{
author={Abatangelo, Nicola},
author={Valdinoci, Enrico},
title={Getting acquainted with the fractional Laplacian},
journal={Springer INdAM Series},
date={2019},
}
\bib{MR1191901}{book}{
author={Abel, Niels Henrik},
title={\OE uvres compl\`etes. Tome I},
language={French},
note={Edited and with a preface by L. Sylow and S. Lie;
Reprint of the second (1881) edition},
publisher={\'{E}ditions Jacques Gabay, Sceaux},
date={1992},
pages={viii+621},
isbn={2-87647-073-X},
review={\MR{1191901}},
}
\bib{MR3488533}{article}{
author={Allen, Mark},
author={Caffarelli, Luis},
author={Vasseur, Alexis},
title={A parabolic problem with a fractional time derivative},
journal={Arch. Ration. Mech. Anal.},
volume={221},
date={2016},
number={2},
pages={603--630},
issn={0003-9527},
review={\MR{3488533}},
doi={10.1007/s00205-016-0969-z},
}
\bib{MR3557159}{article}{
author={Almeida, Ricardo},
author={Bastos, Nuno R. O.},
author={Monteiro, M. Teresa T.},
title={Modeling some real phenomena by fractional differential equations},
journal={Math. Methods Appl. Sci.},
volume={39},
date={2016},
number={16},
pages={4846--4855},
issn={0170-4214},
review={\MR{3557159}},
doi={10.1002/mma.3818},
}
\bib{MR3480553}{article}{
author={Andersson, John},
title={Optimal regularity for the Signorini problem and its free
boundary},
journal={Invent. Math.},
volume={204},
date={2016},
number={1},
pages={1--82},
issn={0020-9910},
review={\MR{3480553}},
doi={10.1007/s00222-015-0608-6},
}
\bib{comb}{article}{
author={Arkhincheev, V. E.},
author={Baskin, \'E. M.},
title={Anomalous diffusion and drift in a comb model of percolation clusters},
journal={J. Exp. Theor. Phys.},
volume={73},
date={1991},
pages={161--165},
}
\bib{MR0115096}{article}{
author={Balakrishnan, A. V.},
title={Fractional powers of closed operators and the semigroups generated
by them},
journal={Pacific J. Math.},
volume={10},
date={1960},
pages={419--437},
issn={0030-8730},
review={\MR{0115096}},
}
\bib{MR3578282}{article}{
author={Bhakta, Mousomi},
title={Solutions to semilinear elliptic PDE's with biharmonic operator
and singular potential},
journal={Electron. J. Differential Equations},
date={2016},
pages={Paper No. 261, 17},
issn={1072-6691},
review={\MR{3578282}},
}
\bib{MR3641649}{article}{
author={Biccari, Umberto},
author={Warma, Mahamadi},
author={Zuazua, Enrique},
title={Local elliptic regularity for the Dirichlet fractional Laplacian},
journal={Adv. Nonlinear Stud.},
volume={17},
date={2017},
number={2},
pages={387--409},
issn={1536-1365},
review={\MR{3641649}},
doi={10.1515/ans-2017-0014},
}
\bib{MR1904936}{book}{
author={Boyarchenko, Svetlana I.},
author={Levendorski\u{\i}, Sergei Z.},
title={Non-Gaussian Merton-Black-Scholes theory},
series={Advanced Series on Statistical Science \& Applied Probability},
volume={9},
publisher={World Scientific Publishing Co., Inc., River Edge, NJ},
date={2002},
pages={xxii+398},
isbn={981-02-4944-6},
review={\MR{1904936}},
doi={10.1142/9789812777485},
}
\bib{MR3461641}{article}{
author={Bucur, Claudia},
title={Some observations on the Green function for the ball in the
fractional Laplace framework},
journal={Commun. Pure Appl. Anal.},
volume={15},
date={2016},
number={2},
pages={657--699},
issn={1534-0392},
review={\MR{3461641}},
doi={10.3934/cpaa.2016.15.657},
}
\bib{MR3716924}{article}{
author={Bucur, Claudia},
title={Local density of Caputo-stationary functions in the space of
smooth functions},
journal={ESAIM Control Optim. Calc. Var.},
volume={23},
date={2017},
number={4},
pages={1361--1380},
issn={1292-8119},
review={\MR{3716924}},
doi={10.1051/cocv/2016056},
}
\bib{claudia}{book}{
author={Bucur, Claudia},
author={Valdinoci, Enrico},
title={Nonlocal diffusion and applications},
series={Lecture Notes of the Unione Matematica Italiana},
volume={20},
publisher={Springer, [Cham]; Unione Matematica Italiana, Bologna},
date={2016},
pages={xii+155},
isbn={978-3-319-28738-6},
isbn={978-3-319-28739-3},
review={\MR{3469920}},
doi={10.1007/978-3-319-28739-3},
}
\bib{MR3579567}{article}{
author={Caffarelli, Luis},
author={Dipierro, Serena},
author={Valdinoci, Enrico},
title={A logistic equation with nonlocal interactions},
journal={Kinet. Relat. Models},
volume={10},
date={2017},
number={1},
pages={141--170},
issn={1937-5093},
review={\MR{3579567}},
doi={10.3934/krm.2017006},
}
\bib{MR3051400}{article}{
author={Caffarelli, Luis},
author={Valdinoci, Enrico},
title={A priori bounds for solutions of a nonlocal evolution PDE},
conference={
title={Analysis and numerics of partial differential equations},
},
book={
series={Springer INdAM Ser.},
volume={4},
publisher={Springer, Milan},
},
date={2013},
pages={141--163},
review={\MR{3051400}},
doi={10.1007/978-88-470-2592-9\_10},
}
\bib{MR2379269}{article}{
author={Caputo, Michele},
title={Linear models of dissipation whose $Q$ is almost frequency
independent. II},
note={Reprinted from Geophys. J. R. Astr. Soc. {\bf 13} (1967), no. 5,
529--539},
journal={Fract. Calc. Appl. Anal.},
volume={11},
date={2008},
number={1},
pages={4--14},
issn={1311-0454},
review={\MR{2379269}},
}
\bib{CDV18}{article}{
author={Carbotti, Alessandro},
author={Dipierro, Serena},
author={Valdinoci, Enrico},
title={Local density of Caputo-stationary functions of any order},
journal = {To appear in Complex Variables and Elliptic Equations},
eprint = {1809.04005},
date = {2018},
adsurl = {https://arxiv.org/abs/1809.04005},
doi = {10.1080/17476933.2018.1544631}
}
\bib{MR3709717}{article}{
author={Danielli, Donatella},
author={Garofalo, Nicola},
author={Petrosyan, Arshak},
author={To, Tung},
title={Optimal regularity and the free boundary in the parabolic
Signorini problem},
journal={Mem. Amer. Math. Soc.},
volume={249},
date={2017},
number={1181},
pages={v + 103},
issn={0065-9266},
isbn={978-1-4704-2547-0},
isbn={978-1-4704-4129-6},
review={\MR{3709717}},
doi={10.1090/memo/1181},
}
\bib{MR2944369}{article}{
author={Di Nezza, Eleonora},
author={Palatucci, Giampiero},
author={Valdinoci, Enrico},
title={Hitchhiker's guide to the fractional Sobolev spaces},
journal={Bull. Sci. Math.},
volume={136},
date={2012},
number={5},
pages={521--573},
issn={0007-4497},
review={\MR{2944369}},
doi={10.1016/j.bulsci.2011.12.004},
}
\bib{MR3089369}{article}{
author={Di Paola, Mario},
author={Pinnola, Francesco Paolo},
author={Zingales, Massimiliano},
title={Fractional differential equations and related exact mechanical
models},
journal={Comput. Math. Appl.},
volume={66},
date={2013},
number={5},
pages={608--620},
issn={0898-1221},
review={\MR{3089369}},
doi={10.1016/j.camwa.2013.03.012},
}
\bib{MR3673669}{article}{
author={Dipierro, Serena},
author={Grunau, Hans-Christoph},
title={Boggio's formula for fractional polyharmonic Dirichlet problems},
journal={Ann. Mat. Pura Appl. (4)},
volume={196},
date={2017},
number={4},
pages={1327--1344},
issn={0373-3114},
review={\MR{3673669}},
doi={10.1007/s10231-016-0618-z},
}
\bib{MR3626547}{article}{
author={Dipierro, Serena},
author={Savin, Ovidiu},
author={Valdinoci, Enrico},
title={All functions are locally $s$-harmonic up to a small error},
journal={J. Eur. Math. Soc. (JEMS)},
volume={19},
date={2017},
number={4},
pages={957--966},
issn={1435-9855},
review={\MR{3626547}},
doi={10.4171/JEMS/684},
}
\bib{DSV1}{article}{
author={Dipierro, Serena},
author={Savin, Ovidiu},
author={Valdinoci, Enrico},
title={Local approximation of arbitrary functions by solutions of nonlocal equations},
journal={J. Geom. Anal.},
date={2018},
doi={10.1007/s12220-018-0045-z},
}
\bib{DV1}{article}{
author={Dipierro, Serena},
author={Valdinoci, Enrico},
title={A Simple Mathematical Model Inspired by the Purkinje Cells: From
Delayed Travelling Waves to Fractional Diffusion},
journal={Bull. Math. Biol.},
volume={80},
date={2018},
number={7},
pages={1849--1870},
issn={0092-8240},
review={\MR{3814763}},
doi={10.1007/s11538-018-0437-z},
}
\bib{MR2863859}{article}{
author={Dong, Hongjie},
author={Kim, Doyoon},
title={On $L_p$-estimates for a class of non-local elliptic equations},
journal={J. Funct. Anal.},
volume={262},
date={2012},
number={3},
pages={1166--1199},
issn={0022-1236},
review={\MR{2863859}},
doi={10.1016/j.jfa.2011.11.002},
}
\bib{ferrari}{article}{
author={Ferrari, Fausto},
TITLE = {Weyl and Marchaud Derivatives: A Forgotten History},
JOURNAL = {Mathematics},
VOLUME = {6},
YEAR = {2018},
NUMBER = {1},
URL = {http://www.mdpi.com/2227-7390/6/1/6},
ISSN = {2227-7390},
DOI = {10.3390/math6010006},
}
\bib{MR0176661}{article}{
author={Fichera, Gaetano},
title={Sul problema elastostatico di Signorini con ambigue condizioni al
contorno},
language={Italian},
journal={Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Natur. (8)},
volume={34},
date={1963},
pages={138--142},
review={\MR{0176661}},
}
\bib{2017arXiv171203347G}{article}{
author = {Garofalo, Nicola},
title = {Fractional thoughts},
journal = {arXiv e-prints},
date = {2017},
archivePrefix = {arXiv},
eprint = {1712.03347},
adsurl = {https://ui.adsabs.harvard.edu/\#abs/2017arXiv171203347G},
}
\bib{MR2667016}{book}{
author={Gazzola, Filippo},
author={Grunau, Hans-Christoph},
author={Sweers, Guido},
title={Polyharmonic boundary value problems},
series={Lecture Notes in Mathematics},
volume={1991},
note={Positivity preserving and nonlinear higher order elliptic equations
in bounded domains},
publisher={Springer-Verlag, Berlin},
date={2010},
pages={xviii+423},
isbn={978-3-642-12244-6},
review={\MR{2667016}},
doi={10.1007/978-3-642-12245-3},
}
\bib{2016arXiv160909248G}{article}{
author = {Ghosh, Tuhin},
author = {Salo, Mikko},
author = {Uhlmann, Gunther},
title = {The Calder\'on problem for the fractional Schr\"odinger equation},
journal = {ArXiv e-prints},
eprint = {1609.09248},
date = {2016},
adsurl = {http://adsabs.harvard.edu/abs/2016arXiv160909248G},
}
\bib{MR1814364}{book}{
author={Gilbarg, David},
author={Trudinger, Neil S.},
title={Elliptic partial differential equations of second order},
series={Classics in Mathematics},
note={Reprint of the 1998 edition},
publisher={Springer-Verlag, Berlin},
date={2001},
pages={xiv+517},
isbn={3-540-41160-7},
review={\MR{1814364}},
}
\bib{MR3468941}{book}{
author={Gill, Tepper L.},
author={Zachary, Woodford W.},
title={Functional analysis and the Feynman operator calculus},
publisher={Springer, Cham},
date={2016},
pages={xix+354},
isbn={978-3-319-27593-2},
isbn={978-3-319-27595-6},
review={\MR{3468941}},
doi={10.1007/978-3-319-27595-6},
}
\bib{MR3244285}{book}{
author={Gorenflo, Rudolf},
author={Kilbas, Anatoly A.},
author={Mainardi, Francesco},
author={Rogosin, Sergei V.},
title={Mittag-Leffler functions, related topics and applications},
series={Springer Monographs in Mathematics},
publisher={Springer, Heidelberg},
date={2014},
pages={xiv+443},
isbn={978-3-662-43929-6},
isbn={978-3-662-43930-2},
review={\MR{3244285}},
doi={10.1007/978-3-662-43930-2},
}
\bib{MR2244037}{book}{
author={Haase, Markus},
title={The functional calculus for sectorial operators},
series={Operator Theory: Advances and Applications},
volume={169},
publisher={Birkh\"{a}user Verlag, Basel},
date={2006},
pages={xiv+392},
isbn={978-3-7643-7697-0},
isbn={3-7643-7697-X},
review={\MR{2244037}},
doi={10.1007/3-7643-7698-8},
}
\bib{MR1281370}{article}{
author={de Icaza Herrera, Miguel},
title={Galileo, Bernoulli, Leibniz and Newton around the brachistochrone
problem},
language={English, with English and Spanish summaries},
journal={Rev. Mexicana F\'{i}s.},
volume={40},
date={1994},
number={3},
pages={459--475},
issn={0035-001X},
review={\MR{1281370}},
}
\bib{MR860085}{article}{
author={Kalla, S. L.},
author={Ross, B.},
title={The development of functional relations by means of fractional
operators},
conference={
title={Fractional calculus},
address={Glasgow},
date={1984},
},
book={
series={Res. Notes in Math.},
volume={138},
publisher={Pitman, Boston, MA},
},
date={1985},
pages={32--43},
review={\MR{860085}},
}
\bib{MR2817382}{article}{
author={Kassmann, Moritz},
title={A new formulation of Harnack's inequality for nonlocal operators},
language={English, with English and French summaries},
journal={C. R. Math. Acad. Sci. Paris},
volume={349},
date={2011},
number={11-12},
pages={637--640},
issn={1631-073X},
review={\MR{2817382}},
doi={10.1016/j.crma.2011.04.014},
}
\bib{MR2218073}{book}{
author={Kilbas, Anatoly A.},
author={Srivastava, Hari M.},
author={Trujillo, Juan J.},
title={Theory and applications of fractional differential equations},
series={North-Holland Mathematics Studies},
volume={204},
publisher={Elsevier Science B.V., Amsterdam},
date={2006},
pages={xvi+523},
isbn={978-0-444-51832-3},
isbn={0-444-51832-0},
review={\MR{2218073}},
}
\bib{2018arXiv181007648K}{article}{
author = {Krylov, Nicolai V.},
title = {On the paper ``All functions are locally $s$-harmonic up to a small error'' by Dipierro, Savin, and Valdinoci},
journal = {ArXiv e-prints},
archivePrefix = {arXiv},
eprint = {1810.07648},
date = {2018},
adsurl = {http://adsabs.harvard.edu/abs/2018arXiv181007648K},
}
\bib{MR2858052}{article}{
author={de la Llave, Rafael},
author={Valdinoci, Enrico},
title={$L^p$-bounds for quasi-geostrophic equations via functional
analysis},
journal={J. Math. Phys.},
volume={52},
date={2011},
number={8},
pages={083101, 12},
issn={0022-2488},
review={\MR{2858052}},
doi={10.1063/1.3621828},
}
\bib{MR555103}{article}{
author={L\"{u}tzen, Jesper},
title={Heaviside's operational calculus and the attempts to rigorise it},
journal={Arch. Hist. Exact Sci.},
volume={21},
date={1979/80},
number={2},
pages={161--200},
issn={0003-9519},
review={\MR{555103}},
doi={10.1007/BF00330405},
}
\bib{MR2676137}{book}{
author={Mainardi, Francesco},
title={Fractional calculus and waves in linear viscoelasticity},
note={An introduction to mathematical models},
publisher={Imperial College Press, London},
date={2010},
pages={xx+347},
isbn={978-1-84816-329-4},
isbn={1-84816-329-0},
review={\MR{2676137}},
doi={10.1142/9781848163300},
}
\bib{MR3590678}{article}{
author={Massaccesi, Annalisa},
author={Valdinoci, Enrico},
title={Is a nonlocal diffusion strategy convenient for biological
populations in competition?},
journal={J. Math. Biol.},
volume={74},
date={2017},
number={1-2},
pages={113--147},
issn={0303-6812},
review={\MR{3590678}},
doi={10.1007/s00285-016-1019-z},
}
\bib{MR0242239}{article}{
author={Mandelbrot, Benoit B.},
author={Van Ness, John W.},
title={Fractional Brownian motions, fractional noises and applications},
journal={SIAM Rev.},
volume={10},
date={1968},
pages={422--437},
issn={0036-1445},
review={\MR{0242239}},
doi={10.1137/1010093},
}
\bib{MR3235230}{article}{
author={Mandelbrot, Benoit},
title={The variation of certain speculative prices [reprint of J. Bus.
{\bf 36} (1963), no. 4, 394--419]},
conference={
title={Financial risk measurement and management},
},
book={
series={Internat. Lib. Crit. Writ. Econ.},
volume={267},
publisher={Edward Elgar, Cheltenham},
},
date={2012},
pages={230--255},
review={\MR{3235230}},
}
\bib{MR1219954}{book}{
author={Miller, Kenneth S.},
author={Ross, Bertram},
title={An introduction to the fractional calculus and fractional
differential equations},
series={A Wiley-Interscience Publication},
publisher={John Wiley \& Sons, Inc., New York},
date={1993},
pages={xvi+366},
isbn={0-471-58884-9},
review={\MR{1219954}},
}
\bib{MR0105594}{book}{
author={Mikusi\'{n}ski, Jan},
title={Operational calculus},
series={International Series of Monographs on Pure and Applied
Mathematics, Vol. 8},
publisher={Pergamon Press, New York-London-Paris-Los Angeles; Pa\'{n}stwowe
Wydawnictwo Naukowe, Warsaw},
date={1959},
pages={495},
review={\MR{0105594}},
}
\bib{MR2639369}{article}{
author={Nakagawa, Junichi},
author={Sakamoto, Kenichi},
author={Yamamoto, Masahiro},
title={Overview to mathematical analysis for fractional diffusion
equations---new mathematical aspects motivated by industrial
collaboration},
journal={J. Math-for-Ind.},
volume={2A},
date={2010},
pages={99--108},
issn={1884-4774},
review={\MR{2639369}},
}
\bib{MR0361633}{book}{
author={Oldham, Keith B.},
author={Spanier, Jerome},
title={The fractional calculus},
note={Theory and applications of differentiation and integration to
arbitrary order;
With an annotated chronological bibliography by Bertram Ross;
Mathematics in Science and Engineering, Vol. 111},
publisher={Academic Press [A subsidiary of Harcourt Brace Jovanovich,
Publishers], New York-London},
date={1974},
pages={xiii+234},
review={\MR{0361633}},
}
\bib{MR1015374}{article}{
author={Omey, E.},
author={Willekens, E.},
title={Abelian and Tauberian theorems for the Laplace transform of
functions in several variables},
journal={J. Multivariate Anal.},
volume={30},
date={1989},
number={2},
pages={292--306},
issn={0047-259X},
review={\MR{1015374}},
doi={10.1016/0047-259X(89)90041-9},
}
\bib{MR1363489}{book}{
author={Pandey, J. N.},
title={The Hilbert transform of Schwartz distributions and applications},
series={Pure and Applied Mathematics (New York)},
note={A Wiley-Interscience Publication},
publisher={John Wiley \& Sons, Inc., New York},
date={1996},
pages={xvi+262},
isbn={0-471-03373-1},
review={\MR{1363489}},
}
\bib{MR2962060}{book}{
author={Petrosyan, Arshak},
author={Shahgholian, Henrik},
author={Uraltseva, Nina},
title={Regularity of free boundaries in obstacle-type problems},
series={Graduate Studies in Mathematics},
volume={136},
publisher={American Mathematical Society, Providence, RI},
date={2012},
pages={x+221},
isbn={978-0-8218-8794-3},
review={\MR{2962060}},
doi={10.1090/gsm/136},
}
\bib{MR1658022}{book}{
author={Podlubny, Igor},
title={Fractional differential equations},
series={Mathematics in Science and Engineering},
volume={198},
note={An introduction to fractional derivatives, fractional differential
equations, to methods of their solution and some of their applications},
publisher={Academic Press, Inc., San Diego, CA},
date={1999},
pages={xxiv+340},
isbn={0-12-558840-2},
review={\MR{1658022}},
}
\bib{GEN1}{article}{
author = {Regner, Benjamin M.},
author = {Vu\v{c}ini\'{c}, Dejan},
author = {Domnisoru, Cristina},
author = {Bartol, Thomas M.},
author = {Hetzer, Martin W.},
author = {Tartakovsky, Daniel M.},
author = {Sejnowski, Terrence J.},
title = {Anomalous diffusion of single particles in cytoplasm},
journal = {Biophys. J.},
volume = {104},
number = {8},
pages = {1652--1660},
date = {2013},
issn = {0006-3495},
doi = {10.1016/j.bpj.2013.01.049},
url = {http://www.sciencedirect.com/science/article/pii/S0006349513001823},
}
\bib{MR3168912}{article}{
author={Ros-Oton, Xavier},
author={Serra, Joaquim},
title={The Dirichlet problem for the fractional Laplacian: regularity up
to the boundary},
language={English, with English and French summaries},
journal={J. Math. Pures Appl. (9)},
volume={101},
date={2014},
number={3},
pages={275--302},
issn={0021-7824},
review={\MR{3168912}},
doi={10.1016/j.matpur.2013.06.003},
}
\bib{MR3694738}{article}{
author={Ros-Oton, Xavier},
author={Serra, Joaquim},
title={Boundary regularity estimates for nonlocal elliptic equations in
$C^1$ and $C^{1,\alpha}$ domains},
journal={Ann. Mat. Pura Appl. (4)},
volume={196},
date={2017},
number={5},
pages={1637--1668},
issn={0373-3114},
review={\MR{3694738}},
doi={10.1007/s10231-016-0632-1},
}
\bib{MR2624107}{book}{
author={Ross, Bertram},
title={The development, theory and applications of the
Gamma-function and a profile of fractional calculus},
note={Thesis (Ph.D.)--New York University},
publisher={ProQuest LLC, Ann Arbor, MI},
date={1974},
pages={412},
review={\MR{2624107}},
}
\bib{MR0444394}{article}{
author={Ross, Bertram},
title={The development of fractional calculus 1695--1900},
language={English, with German and French summaries},
journal={Historia Math.},
volume={4},
date={1977},
pages={75--89},
issn={0315-0860},
review={\MR{0444394}},
doi={10.1016/0315-0860(77)90039-8},
}
\bib{MR125162492}{article}{
author={Ross, Bertram},
title={Origins of fractional calculus and some applications},
journal={Internat. J. Math. Statist. Sci.},
volume={1},
date={1992},
number={1},
pages={21--34},
issn={1055-7490},
review={\MR{1251624}},
}
\bib{2017arXiv170804285R}{article}{
author={R\"uland, Angkana},
title = {Quantitative invertibility and approximation for the truncated Hilbert
and Riesz Transforms},
journal = {ArXiv e-prints},
eprint = {1708.04285},
date= {2017},
adsurl = {http://adsabs.harvard.edu/abs/2017arXiv170804285R},
}
\bib{2017arXiv170806294R}{article}{
author={R\"uland, Angkana},
author={Salo, Mikko},
title = {The fractional Calder\'on problem: low regularity and stability},
journal = {ArXiv e-prints},
eprint = {1708.06294},
date = {2017},
adsurl = {http://adsabs.harvard.edu/abs/2017arXiv170806294R},
}
\bib{2017arXiv170806300R}{article}{
author={R\"uland, Angkana},
author={Salo, Mikko},
title = {Quantitative approximation properties for the fractional heat equation},
journal = {ArXiv e-prints},
eprint = {1708.06300},
date = {2017},
adsurl = {http://adsabs.harvard.edu/abs/2017arXiv170806300R},
}
\bib{MR3774704}{article}{
author={R\"uland, Angkana},
author={Salo, Mikko},
title={Exponential instability in the fractional Calder\'on problem},
journal={Inverse Problems},
volume={34},
date={2018},
number={4},
pages={045003, 21},
issn={0266-5611},
review={\MR{3774704}},
doi={10.1088/1361-6420/aaac5a},
}
\bib{MR1347689}{book}{
author={Samko, Stefan G.},
author={Kilbas, Anatoly A.},
author={Marichev, Oleg I.},
title={Fractional integrals and derivatives},
note={Theory and applications;
Edited and with a foreword by S. M. Nikol\cprime ski\u\i ;
Translated from the 1987 Russian original;
Revised by the authors},
publisher={Gordon and Breach Science Publishers, Yverdon},
date={1993},
pages={xxxvi+976},
isbn={2-88124-864-0},
review={\MR{1347689}},
}
\bib{SANTA}{article}{
author={Santamaria, F.},
author={Wils, S.},
author={De Schutter, E.},
author={Augustine, G. J.},
title={Anomalous diffusion in Purkinje cell dendrites caused by spines},
journal={Neuron},
volume={52},
date={2006},
number={4},
pages={635--648},
doi={10.1016/j.neuron.2006.10.025}
}
\bib{Schiessel}{article}{
author = {Schiessel, H.},
author = {Blumen, A.},
title = {Hierarchical analogues to fractional relaxation equations},
journal = {J. Phys. A: Math. Gen.},
volume = {26},
date = {1993},
number = {19},
pages = {5057--5069},
doi = {10.1088/0305-4470/26/19/034},
}
\bib{MR0118021}{article}{
author={Signorini, A.},
title={Questioni di elasticit\`a non linearizzata e semilinearizzata},
language={Italian},
journal={Rend. Mat. e Appl. (5)},
volume={18},
date={1959},
pages={95--139},
review={\MR{0118021}},
}
\bib{MR3563609}{article}{
author={Sin, Chung-Sik},
author={Zheng, Liancun},
title={Existence and uniqueness of global solutions of Caputo-type
fractional differential equations},
journal={Fract. Calc. Appl. Anal.},
volume={19},
date={2016},
number={3},
pages={765--774},
issn={1311-0454},
review={\MR{3563609}},
doi={10.1515/fca-2016-0040},
}
\bib{GEN2}{incollection}{
author = {Seffens, William},
title = {Models of RNA interaction from experimental datasets: framework of resilience},
booktitle = {Applications of RNA-Seq and Omics Strategies},
publisher = {IntechOpen},
address = {Rijeka},
date = {2017},
editor = {Marchi, Fabio A.},
editor = {Cirillo, Priscila D.R.},
editor = {Mateo, Elvis C.},
chapter = {4},
doi = {10.5772/intechopen.69452},
url = {https://doi.org/10.5772/intechopen.69452}
}
\bib{2018arXiv180805159S}{article}{
author = {Stinga, P.~R.},
title = {User's guide to the fractional Laplacian and the method of semigroups},
journal = {arXiv e-prints},
date = {2018},
archivePrefix = {arXiv},
eprint = {1808.05159},
primaryClass = {math.AP},
adsurl = {https://ui.adsabs.harvard.edu/\#abs/2018arXiv180805159S},
}
\bib{MR942661}{book}{
author={Titchmarsh, E. C.},
title={Introduction to the theory of Fourier integrals},
edition={3},
publisher={Chelsea Publishing Co., New York},
date={1986},
pages={x+394},
isbn={0-8284-0324-4},
review={\MR{942661}},
}
\bib{MR2584076}{article}{
author={Valdinoci, Enrico},
title={From the long jump random walk to the fractional Laplacian},
journal={Bol. Soc. Esp. Mat. Apl. SeMA},
number={49},
date={2009},
pages={33--44},
issn={1575-9822},
review={\MR{2584076}},
}
\bib{ALBA}{article}{
author={Viswanathan, G. M.},
author={Afanasyev, V.},
author={Buldyrev, S. V.},
author={Murphy, E. J.},
author={Prince, P. A.},
author={Stanley, H. E.},
title={L\'evy flight search patterns of wandering albatrosses},
journal={Nature},
volume={381},
date={1996},
pages={413--415},
doi={10.1038/381413a0},
}
\end{biblist}
\end{bibdiv}
\addcontentsline{toc}{chapter}{Index}
\printindex
\end{document} |
\begin{document}
\title{A stability theorem on cube tessellations}
\markright{Stability of cube tessellations}
\author{Peter Frankl\thanks{R\'enyi Institute, H-1364 Budapest, POB 127, Hungary. Email: {\tt [email protected]}.} \and J\'anos Pach\thanks{R\'enyi Institute, Budapest, Hungary and EPFL, Lausanne, Switzerland. Email: {\tt [email protected]}. Research partially supported by Swiss National Science Foundation Grants 200020-162884 and 200021-165977.}}
\date{}
\maketitle
\begin{abstract}
It is shown that if a $d$-dimensional cube is decomposed into $n$ cubes, the side lengths of which belong to the interval $(1-\frac{1}{n^{1/d}+1},1]$, then $n$ is a perfect $d$-th power and all cubes are of the same size. This result is essentially tight.
\end{abstract}
\section{Introduction}
It was proved by Dehn~\cite{De03} that, for $d\ge 2$, in any decomposition (tessellation, tiling) of the $d$-dimensional unit cube into finitely many smaller cubes, the side length of every participating cube must be rational. Fine and Niven~\cite{FN46} and, independently, Hadwiger raised the problem of characterizing, for a fixed $d\ge 2$, the set $N_d$ of all integers $n$ such that the $d$-dimensional unit cube can be decomposed into $n$ smaller cubes. Obviously, $m^d\in N_d$ for every positive integer $m$. Hadwiger observed that the intervals $(1,2^d)$ and $(2^d,2^d+2^{d-1})$ do not belong to $N_d$. On the other hand, for any $d$ there is a threshold $n_0(d)$ such that every integer $n\ge n_0(d)$ belongs to $N_d$; see \cite{P72}, \cite{M74}, \cite{E74}, \cite{CFG91}. It is conjectured that $n_0(d)\le c^d$ for a suitable constant $c$.
Amram Meir asked many years ago whether for any $d\ge 2, {\varepsilon}>0$, and for every sufficiently large $n\ge n_0(d,{\varepsilon})$, there exists a decomposition of a $d$-dimensional cube into $n$ smaller cubes such that the ratio between the side lengths of any two cubes is at least $1-{\varepsilon}$. This question was answered in the affirmative in~\cite{FMP17}. In particular, it was shown in~\cite{FMP17} that, for large $n$, a square can be decomposed into precisely $n$ smaller squares such that the ratio of their side lengths is at least $1-O\left(\frac{1}{\sqrt{n}}\right)$.
The aim of this note is to show that the above bound is asymptotically tight. More precisely, we have the following stability result, which holds in every dimension $d\ge 2$.
\noindent{\bf Theorem 1.} {\em Let $d, n\ge 2$ be positive integers. Suppose that a $d$-dimensional cube can be decomposed into precisely $n$ smaller cubes whose side lengths belong to the interval $(1-\frac{1}{n^{1/d}+1},1]$.
Then $n$ is a perfect $d$-th power, that is, $n=m^d$ for a positive integer $m$. Moreover, in this case the small cubes must be congruent.}
\section{Proof of Theorem 1}
Consider a decomposition of the cube $[0,z]^d$ into $n$ smaller cubes of side lengths $s_i, 1\le i\le n$, where
$$1=s_1\ge s_2\ge \ldots \ge s_n>1-\frac{1}{n^{1/d}+1}.$$
By Dehn's theorem mentioned in the Introduction, we can assume that all $s_i$ and, hence, also $z$ are rational numbers. The total volume of the small cubes is $z^d$, so that we have
\begin{equation}\label{eq0}
z^d=\sum_{1\le i\le n}s_i^d\le ns_1^d =n.
\end{equation}
If equality holds here, then $s_1=\ldots=s_n=1$ and $n$ is a perfect $d$-th power, so we are done. Therefore, we can assume the following.
\noindent{\bf Claim 2.} {\em $z < n^{1/d}.$}
Fix a line $\ell$ parallel to the $x$-axis (say) that does not share a segment with the boundary of any small cube participating in the decomposition. (This holds, for example, if the other $d-1$ coordinates of the points of $\ell$ are all irrational.) Let $C_1, C_2, \ldots, C_m$ denote the small cubes crossed by $\ell$, listed from left to right, and let
$$0=x_0 < x_1 < x_2 <\ldots < x_m=z$$
be the $x$-coordinates of the points at which $\ell$ stabs the facets of these cubes. Using the assumption on the side lengths of the cubes, we have
\begin{equation}\label{formula1}
j\left(1-\frac{1}{n^{1/d}+1}\right)<x_j\le j,
\end{equation}
for every $j\; (1\le j\le m).$
\noindent{\bf Claim 3.} {\em $m=\lceil z\rceil.$}
\noindent{\bf Proof.} Since the side length of each cube $C_j$ is at most $1$, we clearly have $m\ge z$. It remains to show that $m<z+1$.
Suppose for contradiction that $m\ge z+1$. Applying (\ref{formula1}) with $j=m$, we obtain
$$z+\frac{n^{1/d}-z}{n^{1/d}+1}=(z+1)\left(1-\frac{1}{n^{1/d}+1}\right)
\le m\left(1-\frac{1}{n^{1/d}+1}\right)<x_m=z.$$
Comparing the left-hand side and the right-hand side, we get $n^{1/d}-z<0$, which contradicts Claim 2. $\Box$
Claims 2 and 3 immediately imply that every line $\ell$ which is parallel to one of the coordinate axes and does not share a segment with the boundary of any small cube, intersects the same number, $m= \lceil z\rceil<n^{1/d}+1$, of small cubes. In particular, (\ref{formula1}) can be extended to
$$j-1<j\left(1-\frac{1}{n^{1/d}+1}\right)<x_j\le j,$$
for $1\le j\le m$. Thus, we can pick a small ${\varepsilon}>0$ such that
\begin{equation}\label{formula2}
j-1+{\varepsilon} \in (x_{j-1},x_j)
\end{equation}
holds for every $j\; (1\le j\le m)$.
Given a small irrational number ${\varepsilon}>0$, define a gridlike set $P_{{\varepsilon}}$ of $m^d$ points in ${\mathbb R}^d$, as follows. Let
$$P_{{\varepsilon}}=\{{\varepsilon}, 1+{\varepsilon}, 2+{\varepsilon},\ldots, m-1+{\varepsilon}\}^d.$$
If ${\varepsilon}$ is small enough, then all of these points lie in the interior of the cube $[0,z]^d$.
\noindent{\bf Claim 4.} {\em There exists ${\varepsilon}>0$ such that every cube participating in the decomposition contains precisely one point in $P_{{\varepsilon}}$.}
\noindent{\bf Proof.} If ${\varepsilon}$ is irrational, no element of $P_{{\varepsilon}}$ lies on the boundary of any small cube. (This follows from the theorem of Dehn cited at the beginning of the Introduction.) The side length of every small cube is at most $1$, the minimum distance between two points of $P_{{\varepsilon}}$, so that no cube can cover two elements of $P_{{\varepsilon}}$.
We now finalize the choice of ${\varepsilon}>0$. For every cube $C$ in the decomposition, pick a point $p=p(C)$ in the interior of $C$, all of whose coordinates are irrational. Let $\ell_1, \ell_2, \ldots, \ell_d$ denote the lines through $p$ parallel to the coordinate axes. None of them shares a segment with the boundary of any cube.
The line $\ell_1$ intersects precisely $m$ cubes. Suppose that $C$ is the $j$-th among them, and its projection to the first coordinate axis is the interval $[x_{j-1},x_j]$. If we choose ${\varepsilon}>0$ small enough, then (\ref{formula2}) is satisfied for $\ell_1$. The same is true for the lines $\ell_2,\ldots,\ell_d$. Repeating the argument for every cube $C$, we can find an irrational ${\varepsilon}>0$, which simultaneously satisfies all of the above conditions for all $C$. Then, for every $C$, there exist integers $j_k=j_k(C)$\; $(1\le j_k\le m,\, 1\le k\le d)$ such that the orthogonal projection of $C$ to the $k$-th coordinate axis contains $j_k-1+{\varepsilon}$. Hence, we have
$$(j_1-1+{\varepsilon}, j_2-1+{\varepsilon},\ldots, j_d-1+{\varepsilon})\in C,$$
showing that $C$ contains a point of $P_{{\varepsilon}}$. $\Box$
It follows from Claim 4 that $n$, the number of cubes participating in the decomposition, is equal to $|P_{{\varepsilon}}|=m^d$. Thus, $n=m^d$ is a perfect $d$-th power.
Notice that the set $P_{{\varepsilon}}$ can be covered by $m^{d-1}$ lines parallel to the first coordinate axis, and every small cube is stabbed by precisely one of these lines. The total side length of the cubes stabbed by each of these lines is equal to $z$. Therefore, the sum of the side lengths of all small cubes satisfies $\sum_{i=1}^ns_i=\sum_{i=1}^{m^d}s_i=m^{d-1}z$, or, equivalently,
$$\frac{\sum_{i=1}^{m^d}s_i}{m^d}=\frac{z}{m}.$$
On the other hand, it follows from (\ref{eq0}) for $n=m^d$ that
$$\frac{\sum_{i=1}^{m^d}s_i^d}{m^d}=\left(\frac{z}{m}\right)^d.$$
For any positive numbers $s_i$, we have
$$\left(\frac{\sum_{i=1}^{m^d}s_i}{m^d}\right)^d\le
\frac{\sum_{i=1}^{m^d}s_i^d}{m^d},$$
with equality if and only if all $s_i$ are equal. In our setting equality holds, hence all small cubes must be of the same size.
This completes the proof of Theorem~1. \hfill $\Box$
Finally, we show that Theorem 1 is not far from being best possible. Consider the subdivision of the cube $[0,m]^d$ into $m^d$ unit cubes. Discard all of them that are not tangent to any of the coordinate hyperplanes. Fill the resulting hole, $[1,m]^d$, with $m^d$ cubes of side length $1-\frac{1}{m}$. Altogether we have $$n=m^d-(m-1)^d+m^d<(m+1)^d=m^d+O(dm^{d-1})$$
cubes, where the inequality follows from the strict convexity of the function $x^d$. The side lengths of these cubes belong to the interval
$$[1-\frac{1}{m},1]=[1-\frac{1}{n^{1/d}(1+o(1))},1],$$
as $m$ tends to infinity. This interval is only slightly larger than the interval of ``permissible'' side lengths in Theorem 1, but the number of small cubes participating in the tessellation is not a perfect $d$-th power.
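For concreteness, the counts in this construction can be verified numerically. The following sketch (exact rational arithmetic; the helper name `construction_counts` is ours, not from the text) checks that the cubes exactly tile $[0,m]^d$ and that $m^d < n < (m+1)^d$, so that $n$ is indeed not a perfect $d$-th power:

```python
from fractions import Fraction

def construction_counts(m, d):
    """Counts for the construction: subdivide [0, m]^d into m^d unit cubes,
    keep those tangent to a coordinate hyperplane, and fill the hole
    [1, m]^d with m^d cubes of side length 1 - 1/m."""
    kept = m**d - (m - 1)**d        # unit cubes tangent to a hyperplane
    small = m**d                    # cubes of side length (m - 1)/m
    n = kept + small
    volume = kept + small * Fraction(m - 1, m)**d   # exact total volume
    return n, volume

for d in (2, 3, 4):
    for m in (2, 3, 5, 10):
        n, volume = construction_counts(m, d)
        assert volume == m**d         # the cubes exactly fill [0, m]^d
        assert m**d < n < (m + 1)**d  # strictly between consecutive d-th powers
```

Since $n$ lies strictly between the consecutive $d$-th powers $m^d$ and $(m+1)^d$, it cannot be a perfect $d$-th power.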
\end{document}
\begin{document}
\title{Cantor's First Diagonal Formalized and Extended}
\author{Jo\~{a}o Alves Silva J\'{u}nior}
\date{\today}
\maketitle
\section{Introduction}\label{S:Intro}
Diagrams like that in Figure~\ref{F:CantDiag} are often presented as proof that $\mathbb{N}\times\mathbb{N}$ and $\mathbb{N}$ have the same cardinality (see \cite[p.~8]{BBJ07}, \cite[p.~76]{HJ99} and \cite[p.~10]{YM07}). The arrows indicate the growth direction of a function $\Psi: \mathbb{N}\times\mathbb{N} \to \mathbb{N}$ that intuitively covers, without repetitions, all the elements of $\mathbb{N}$. So, $\Psi$ is a bijection between $\mathbb{N}\times\mathbb{N}$ and $\mathbb{N}$. This is the idea known as Cantor's first diagonal. However, none of these texts formally defines such a function $\Psi$, nor rigorously proves that $\Psi$ is a bijection between $\mathbb{N}\times\mathbb{N}$ and $\mathbb{N}$. The purpose of this paper is to fill these gaps in a more general context, by finding a simple closed-form expression for a bijection between $\mathbb{N}^k$ and $\mathbb{N}$, where $k$ is an arbitrary positive integer.
\begin{figure}
\caption{Cantor's first diagonal.}
\label{F:CantDiag}
\end{figure}
Given $r \in \mathbb{N}$, let $P_r$ be the set $\{ i \in \mathbb{N} \mid 1 \leq i \leq r\}$. We denote arbitrary elements of $\mathbb{N}^k$ by bold letters like $\boldsymbol{m}$, $\boldsymbol{n}$ and $\boldsymbol{p}$. The $i$-th coordinate of an element $\boldsymbol{m} \in \mathbb{N}^k$ is denoted by $m_i$. So, $\boldsymbol{m} = \boldsymbol{n}$ if and only if $m_i = n_i$ for all $i \in P_k$.
For $\boldsymbol{m}, \boldsymbol{n} \in \bigcup_{r = 1}^\infty \mathbb{N}^r$, let $\mathrm{Diff}(\boldsymbol{m}, \boldsymbol{n})$ denote the set of all indices $i \in \mathbb{N}-\{0\}$ such that $m_i$ and $n_i$ are both defined and $m_i \neq n_i$. A natural way of ordering elements of $\mathbb{N}^k$ is given by the \emph{lexicographic order} $<_k$, defined by
\begin{equation}\label{E:DefLexOrd}
\boldsymbol{m} <_k \boldsymbol{n} \quad \Leftrightarrow \quad \exists i \in P_k, \ m_i < n_i \text{ and } i = \min \mathrm{Diff}(\boldsymbol{m}, \boldsymbol{n}).
\end{equation}
We are especially interested in considering this relation on the subset
\begin{equation}\label{E:DefDk}
\mathcal{D}_k = \{ \boldsymbol{n} \in \mathbb{N}^k \mid n_1 \geq \dots \geq n_k \},
\end{equation}
which is clearly equipotent to $\mathbb{N}^k$ via
\begin{align}\label{E:Defhk}
h_k: \mathcal{D}_k &\to \mathbb{N}^k\\
\boldsymbol{n} &\mapsto (n_k, n_{k-1} - n_k, n_{k-2} - n_{k-1}, \dots, n_1 - n_2) \notag.
\end{align}
Note in Figure \ref{F:CantDiag} that there is an arrow from $(m_1, m_2)$ to $(n_1, n_2)$ if and only if $h_2^{-1}(m_1, m_2)$ immediately precedes $h_2^{-1}(n_1, n_2)$ in $(\mathcal{D}_2, <_2)$. Our approach is based on a generalization of this idea for $\mathbb{N}^k$.
\section{Two inverse bijections between $\mathbb{N}^k$ and $\mathbb{N}$}
\begin{proposition}\label{T:LexOrd}
$(\mathcal{D}_k, <_k)$ is a well-ordered set, without maximum, such that every nonempty subset of $\mathcal{D}_k$ with an upper bound has a $<_k$-maximum.
\end{proposition}
\begin{proof}
See \cite[p.~82]{HJ99} for a proof that $<_k$ is irreflexive, transitive and total. The induced order $<_k \cap \,(\mathcal{D}_k \times \mathcal{D}_k)$ inherits these properties. Given a nonempty $A \subseteq \mathcal{D}_k$, define $\boldsymbol{m}$ by $m_i = \min \{ n_i \mid \boldsymbol{n} \in A_{i-1}\}$, where $A_0 = A$ and $A_i = \{ \boldsymbol{n} \in A_{i-1} \mid n_i = m_i\}$, for all $i \in P_k$. It is an easy exercise to verify that $\boldsymbol{m}$ is well-defined and that it is the $<_k$-minimum of $A$. The maximum of a nonempty $A \subseteq \mathcal{D}_k$ with an upper bound is determined analogously (just replace $\min$ with $\max$). Finally, $(\mathcal{D}_k, <_k)$ does not have a maximum because, for all $\boldsymbol{m} \in \mathcal{D}_k$, the element $\boldsymbol{n} = (m_1+1, 0, \dots, 0) \in \mathcal{D}_k$ satisfies $\boldsymbol{m} <_k \boldsymbol{n}$.
\end{proof}
Supported by Proposition \ref{T:LexOrd}, we define a function $f_k: \mathbb{N} \to \mathcal{D}_k$ by
\begin{equation}\label{E:Deffk}
\left\{ \begin{aligned} f_k(0) &= \min \mathcal{D}_k \\ \forall x\in \mathbb{N}, \ f_k(x+1) &= \min \{ \boldsymbol{n} \in \mathcal{D}_k \mid f_k(x) <_k \boldsymbol{n} \}.\end{aligned}\right.
\end{equation}
\begin{proposition}\label{T:fkBijection}
$f_k$ is a bijection from $\mathbb{N}$ onto $\mathcal{D}_k$.
\end{proposition}
\begin{proof}
Use Proposition \ref{T:LexOrd} mimicking the proof of Theorem~3.4 of \cite{HJ99}.
\end{proof}
Let $\Phi_k: \mathcal{D}_k \to \mathbb{N}$ be defined by
\begin{equation}\label{E:DefPhik}
\Phi_k(\boldsymbol{n}) = \sum_{i=1}^{k}\binom{k-i+n_i}{k-i+1},
\end{equation}
where the binomial coefficients $\binom{x}{y}$ are generalized for $x,y \in \mathbb{Z}$ (see \cite[\S 2.3.2]{RMG00}):
\[ \binom{x}{y} = \begin{cases} \frac{x!}{y!\,(x-y)!} &\text{if $0 \le y \le x$,}\\ (-1)^y\binom{y-x-1}{y} &\text{if $y\geq0$ and $x < 0$,}\\ 0 &\text{if $y < 0$ or $0 \le x<y$.} \end{cases} \]
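As a quick numerical sanity check of this extension (a sketch; the helper names `gbinom` and `falling` are ours), the coefficient agrees with the falling-factorial formula $x(x-1)\cdots(x-y+1)/y!$ for every integer $x$ and every $y\geq 0$, and with the usual $\binom{x}{y}$ when $0\le y\le x$:

```python
from math import comb, factorial

def gbinom(x, y):
    """Binomial coefficient extended to all integers x and y."""
    if y < 0:
        return 0
    if x < 0:
        # reflection identity for negative upper index
        return (-1)**y * comb(y - x - 1, y)
    return comb(x, y)       # math.comb already returns 0 for 0 <= x < y

def falling(x, y):
    """x(x-1)...(x-y+1)/y!, the falling-factorial form, for y >= 0."""
    p = 1
    for i in range(y):
        p *= x - i
    return p // factorial(y)  # the product is always divisible by y!

for x in range(-6, 7):
    for y in range(7):
        assert gbinom(x, y) == falling(x, y)
```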
We now prove that $\Phi_k$ is the inverse function of $f_k$.
\begin{lemma}\label{T:fkLemma}
Given $\boldsymbol{m} \in \mathcal{D}_k$ and $x \in \mathbb{N}$, suppose that $f_k(x) = \boldsymbol{m}$. Let $m_0$ denote $m_1 + 1$. If $r \in P_k$ is such that $m_{r-1} > m_{r}$ and $m_i = m_{r}$ for all $i \in \{r, \dots, k\}$, then $f_k(x+1) = (m_1, \dots, m_{r-1}, m_{r}+1, 0, \dots, 0)$. In particular, when the coordinates of $\boldsymbol{m}$ are all equal, $f_k(x+1) = (m_1+1, 0, \dots, 0)$.
\end{lemma}
\begin{proof}
Let $A$ be the set $\{ \boldsymbol{n} \in \mathcal{D}_k \mid \boldsymbol{m} <_k \boldsymbol{n} \}$, so that $f_k(x+1) = \min A$. We want to prove that $\min A = \boldsymbol{p}$, where $\boldsymbol{p} = (m_1, \dots, m_{r-1}, m_{r}+1, 0, \dots, 0)$. It is easy to note that $\boldsymbol{p} \in A$. So, since $<_k$ is a \emph{linear} order, it remains only to show that $\boldsymbol{p}$ is minimal. Suppose towards a contradiction that $\boldsymbol{n} <_k \boldsymbol{p}$, for some $\boldsymbol{n} \in A$. Let $j = \min \mathrm{Diff}(\boldsymbol{n},\boldsymbol{p})$, so that $n_j < p_j$. Since $p_{r+1} = \dots = p_k = 0$, $j \leq r$. But $j \geq r$, otherwise we would have $\min \mathrm{Diff}(\boldsymbol{m},\boldsymbol{n}) = j$ and, hence, $m_j < n_j < p_j = m_j$ (because $m_i = p_i$ for all $i \in \{1, \dots, r-1\}$). So, $r = j$, whence $n_r < p_r = m_r +1$ and $n_i = p_i = m_i$, for all $i \in \{1, \dots, r-1\}$. Since $\boldsymbol{n} \in \mathcal{D}_k$ and $m_r = \dots = m_k$, it follows that $n_i \geq m_i$ for all $i \in P_k$ (see Figure \ref{F:mnDiagram}). But this contradicts the hypothesis that $\boldsymbol{m} <_k \boldsymbol{n}$. Thus, $\boldsymbol{p}$ is minimal.
\end{proof}
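The successor rule of the lemma is easy to test by brute force. The sketch below (hypothetical helper names `D_trunc` and `successor`; note that $<_k$ on tuples of equal length coincides with Python's lexicographic tuple comparison) checks the rule against the definition of $f_k(x+1)$ as a minimum:

```python
from itertools import product

def D_trunc(k, bound):
    """Nonincreasing tuples in {0,...,bound}^k: the elements of D_k with
    first coordinate at most bound, sorted lexicographically."""
    return sorted(n for n in product(range(bound + 1), repeat=k)
                  if all(n[i] >= n[i + 1] for i in range(k - 1)))

def successor(m):
    """Successor of m in (D_k, <_k), per the lemma: with m_0 = m_1 + 1,
    take the minimal r with m_r equal to the last entry, bump that entry
    by one, and zero out the tail."""
    last = m[-1]
    r0 = next(i for i, v in enumerate(m) if v == last)
    return m[:r0] + (last + 1,) + (0,) * (len(m) - 1 - r0)

for k in (2, 3):
    for m in D_trunc(k, 3):
        # f_k(x + 1) = min { n in D_k : m <_k n }; a bound of m_1 + 1 suffices
        brute = min(n for n in D_trunc(k, m[0] + 1) if n > m)
        assert successor(m) == brute
```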
\begin{figure}
\caption{$m_i \geq n_i$, for all $i \in P_k$.}
\label{F:mnDiagram}
\end{figure}
\begin{lemma} \label{T:PhikCircfkEqualsId}
$(\Phi_k \circ f_k)(0) = 0$ and $(\Phi_k \circ f_k)(x+1) = (\Phi_k \circ f_k)(x) + 1$, for all $x \in \mathbb{N}$. Thus, $\Phi_k \circ f_k$ is $\mathrm{id}_{\mathbb{N}}$, the identity function on $\mathbb{N}$.
\end{lemma}
\begin{proof}
$(0, \dots, 0)$ is clearly the minimum of $(\mathcal{D}_k, <_k)$ (in fact, it is the minimum of $(\mathbb{N}^k, <_k)$) and $\Phi_k(0, \dots, 0) = 0$. Hence, $(\Phi_k \circ f_k)(0) = 0$.
Given $x \in \mathbb{N}$, let $\boldsymbol{m}$, $\boldsymbol{p}$ and $m_0$ denote $f_k(x)$, $f_k(x+1)$ and $m_1 + 1$, respectively. There is an unique $r \in P_k$ such that $m_{r-1} > m_{r}$ and $m_i = m_{r}$ for all $i \in \{r, \dots, k\}$. By Lemma \ref{T:fkLemma}, $\boldsymbol{p} = (m_1, \dots, m_{r-1}, m_{r}+1, 0, \dots, 0)$. So, by \eqref{E:DefPhik},
\begin{align}
\Phi_k(\boldsymbol{p}) &= \sum_{i = 1}^{r-1} \binom{k-i+m_i}{k-i+1} + \binom{k-r+m_r+1}{k-r+1}, \label{E:Phikp} \\
\Phi_k(\boldsymbol{m}) &= \sum_{i = 1}^{r-1} \binom{k-i+m_i}{k-i+1} + \sum_{i=r}^k\binom{k-i+m_r}{k-i+1}. \label{E:Phikm}
\end{align}
On the other hand, by the parallel summation identity (see \cite[\S 2.3.4]{RMG00}),
\begin{equation}\label{E:ParallelSummation}
\forall x,y \in \mathbb{Z}, \quad \sum_{i=0}^{y}\binom{i+x-1}{i} = \binom{x+y}{y}.
\end{equation}
Applying \eqref{E:ParallelSummation} for $x = m_r$ and $y = k-r+1$, we obtain an equation which can be modified by a changing of index (namely, replacing $i$ with $k-i+1$) into
\begin{equation}\label{E:AuxForPhikpEqualsPhikmPlus1}
\binom{m_r+k-r+1}{k-r+1} = \sum_{i=r}^{k} \binom{k-i+m_r}{k-i+1} + 1.
\end{equation}
By \eqref{E:Phikp}, \eqref{E:Phikm} and \eqref{E:AuxForPhikpEqualsPhikmPlus1}, it results that $\Phi_k(\boldsymbol{p}) = \Phi_k(\boldsymbol{m}) +1$. That is, $(\Phi_k \circ f_k)(x+1) = (\Phi_k \circ f_k)(x) + 1$.
The last assertion follows from the uniqueness part of the recursion theorem (see \cite[p.~53]{YM07}), since $\mathrm{id}_{\mathbb{N}}(0) = 0$ and $\mathrm{id}_{\mathbb{N}}(x+1) = \mathrm{id}_{\mathbb{N}}(x)+1$ for all $x \in \mathbb{N}$.
\end{proof}
\begin{proposition}
$\Phi_k$ is the inverse function of $f_k$.
\end{proposition}
\begin{proof}
By Proposition \ref{T:fkBijection}, $f_k$ is invertible. So, by Lemma \ref{T:PhikCircfkEqualsId}, $\Phi_k = \Phi_k \circ \mathrm{id}_{\mathbb{N}} = \Phi_k \circ (f_k \circ f_k^{-1}) = (\Phi_k \circ f_k) \circ f_k^{-1} = \mathrm{id}_{\mathbb{N}} \circ f_k^{-1} = f_k^{-1}$.
\end{proof}
\begin{corolary}
$h_k \circ f_k: \mathbb{N} \to \mathbb{N}^k$ has an inverse $\Psi_k: \mathbb{N}^k \to \mathbb{N}$ given by
\begin{equation}\label{E:DefPsik}
\Psi_k(\boldsymbol{n}) = \sum_{i=1}^{k}\binom{i-1+n_1+ \dots + n_i}{i}.
\end{equation}
\end{corolary}
\begin{proof}
For all $\boldsymbol{n} \in \mathbb{N}^k$, the $i$-th coordinate of $h_k^{-1}(\boldsymbol{n})$ is $\sum_{j=1}^{k-i+1}n_j$. Moreover, since $h_k$ and $f_k$ are invertible and $f_k^{-1} = \Phi_k$, $\Psi_k = \Phi_k \circ h_k^{-1}$ is the inverse of $h_k \circ f_k$. So, $h_k \circ f_k$ has an inverse function $\Psi_k$ given by
\[ \Psi_k(\boldsymbol{n}) = \sum_{i=1}^{k}\binom{k-i+\sum_{j=1}^{k-i+1}n_j}{k-i+1} = \sum_{i=1}^{k}\binom{i-1+n_1+ \dots + n_i}{i}. \]
The last equality is obtained by reversing the order of the summands.
\end{proof}
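The closed form \eqref{E:DefPsik} can also be tested numerically. The sketch below (the helper name `Psi` is ours) verifies that, for small $k$, the tuples with coordinate sum at most $S$, i.e. the first $\binom{S+k}{k}$ diagonals of the enumeration, are mapped bijectively onto the initial segment $\{0, \dots, \binom{S+k}{k}-1\}$ of $\mathbb{N}$:

```python
from math import comb
from itertools import product

def Psi(n):
    """Psi_k(n) = sum over i = 1..k of comb(i - 1 + n_1 + ... + n_i, i)."""
    return sum(comb(i - 1 + sum(n[:i]), i) for i in range(1, len(n) + 1))

# Tuples whose coordinates sum to at most S form an initial segment of
# the enumeration: Psi maps them bijectively onto {0, ..., comb(S+k, k) - 1}.
for k in (1, 2, 3):
    for S in range(6):
        tuples = [n for n in product(range(S + 1), repeat=k) if sum(n) <= S]
        assert sorted(Psi(n) for n in tuples) == list(range(comb(S + k, k)))
```

For $k=2$ this specializes to the familiar diagonal enumeration of $\mathbb{N}\times\mathbb{N}$: $\Psi_2(0,0)=0$, $\Psi_2(0,1)=1$, $\Psi_2(1,0)=2$, $\Psi_2(0,2)=3$, and so on.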
\section{Combinatorial remarks}
So far, we have not explained where the formula \eqref{E:DefPhik} came from. It can be deduced combinatorially after noticing that $f_k^{-1}(\boldsymbol{n})$ is the cardinality of $\mathcal{D}_k[\boldsymbol{n}] = \{\boldsymbol{m} \in \mathcal{D}_k \mid \boldsymbol{m} <_k \boldsymbol{n} \}$, which is the disjoint union of the family $\{\mathcal{D}_k[\boldsymbol{n};i]\}_{i\in P_k}$, where $\mathcal{D}_k[\boldsymbol{n};i] = \{ \boldsymbol{m} \in \mathcal{D}_k \mid m_i < n_i \text{ and } i = \min \mathrm{Diff}(\boldsymbol{m},\boldsymbol{n})\}$. Given $i \in P_k$, the mapping $g_k[\boldsymbol{n};i]: \boldsymbol{m} \mapsto (m_i, \dots, m_k)$, defined on $\mathcal{D}_k[\boldsymbol{n};i]$, is a bijection onto $\mathcal{D}_{k-i+1}$, and $h_{k-i+1} \circ g_k[\boldsymbol{n};i]$ sends $\mathcal{D}_k[\boldsymbol{n};i]$ bijectively to
\begin{align}
\mathrm{Im}\,(h_{k-i+1} \circ g_k[\boldsymbol{n};i]) &= \left\{\boldsymbol{p} \in \mathbb{N}^{k-i+1} \mid p_1 + \dots + p_{k-i+1} < n_i \right\} \\
&= \bigsqcup_{j = 0}^{n_i-1}\left\{\boldsymbol{p} \in \mathbb{N}^{k-i+1} \mid p_1 + \dots + p_{k-i+1} = j \right\},\notag
\end{align}
where $\bigsqcup$ denotes disjoint union. But, according to \cite[\S 2.3.3]{RMG00},
\begin{equation}
\forall j \in \mathbb{Z}, \quad |\left\{\boldsymbol{p} \in \mathbb{N}^{k-i+1} \mid p_1 + \dots + p_{k-i+1} = j \right\}| = \binom{k-i+j}{j}.
\end{equation}
So, by \eqref{E:ParallelSummation},
\begin{equation}
|\mathrm{Im}\,(h_{k-i+1} \circ g_k[\boldsymbol{n};i])| = \sum_{j = 0}^{n_i-1} \binom{k-i+j}{j} = \binom{k-i+n_i}{k-i+1}.
\end{equation}
Now, by the foregoing considerations,
\begin{equation}
|\mathcal{D}_k[\boldsymbol{n}]| = \sum_{i=1}^{k}|\mathcal{D}_k[\boldsymbol{n};i]| \quad \text{ and } \quad |\mathrm{Im}\,(h_{k-r+1} \circ g_k[\boldsymbol{n};r])| = |\mathcal{D}_k[\boldsymbol{n};r]|,
\end{equation}
for all $r \in P_k$. Thus,
\begin{equation}
|\mathcal{D}_k[\boldsymbol{n}]| = \sum_{i=1}^{k}\binom{k-i+n_i}{k-i+1}.
\end{equation}
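This counting argument can be checked directly: $\Phi_k(\boldsymbol{n})$ should equal the number of $<_k$-predecessors of $\boldsymbol{n}$ in $\mathcal{D}_k$. A brute-force sketch (the helper names `Phi` and `D_trunc` are ours; elements of $\mathcal{D}_k$ with first coordinate at most a given bound form an initial segment of $(\mathcal{D}_k, <_k)$):

```python
from math import comb
from itertools import product

def Phi(n):
    """Phi_k(n) = sum over i = 1..k of comb(k - i + n_i, k - i + 1)."""
    k = len(n)
    return sum(comb(k - i + n[i - 1], k - i + 1) for i in range(1, k + 1))

def D_trunc(k, bound):
    """Elements of D_k with first coordinate at most bound, in <_k order
    (which coincides with Python's lexicographic tuple order)."""
    return sorted(n for n in product(range(bound + 1), repeat=k)
                  if all(n[i] >= n[i + 1] for i in range(k - 1)))

# The rank of each element in this initial segment equals the number of
# its predecessors in D_k, i.e. Phi(n) = |D_k[n]|.
for k in (2, 3):
    for rank, n in enumerate(D_trunc(k, 4)):
        assert Phi(n) == rank
```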
Finally, we prove a theorem confirming that $\boldsymbol{n} \mapsto |\mathcal{D}_k[\boldsymbol{n}]|$ is a bijection between $\mathcal{D}_k$ and $\mathbb{N}$. It may be useful for constructing other explicitly defined bijections onto $\mathbb{N}$.
\begin{theorem}\label{T:Criterion}
Let $(A,\prec)$ be a linearly ordered set. If $A$ is infinite and, for all $a \in A$, the set $\{ x \in A \,|\, x \prec a \}$ is finite, then the function $\Psi : A \to \mathbb{N}$ given by $\Psi(a) = |\{x \in A \, | \, x \prec a\}|$ is a bijection between $A$ and $\mathbb{N}$.
\end{theorem}
\begin{proof}
Let $A[a]$ denote $\{x \in A \, | \, x \prec a \}$, for all $a \in A$. Given $a,b \in A$ with $a \prec b$, every $x \prec a$ satisfies $x \prec b$ by transitivity, while $a \in A[b]$ and $a \notin A[a]$ because $\prec$ is irreflexive; hence $A[a]$ is strictly contained in $A[b]$. Since $A[a]$ and $A[b]$ are finite, it follows that $\Psi(a) = |A[a]| < |A[b]| = \Psi(b)$. Thus, $\Psi$ is strictly increasing. Since $\prec$ is total on $A$, strict monotonicity of $\Psi$ ensures its injectivity. It remains to prove that $\mathrm{Im}\,\Psi = \mathbb{N}$, where $\mathrm{Im}\,\Psi$ denotes the image of $\Psi$.
Suppose, to get a contradiction, that some $r \in \mathbb{N}$ is not in $\mathrm{Im}\, \Psi$. Since $A$ is infinite and linearly ordered by $\prec$, there are $a_0,\dots,a_r\in A$, pairwise distinct, such that $a_0\prec\dots\prec a_r \, \therefore \, \Psi(a_0)<\dots<\Psi(a_r) \, \therefore \, r<\Psi(a_r)$. So, the set $B = \{y \in \mathrm{Im}\, \Psi \, | \, r < y \}$ is nonempty. Let $w = \min B$ and $\alpha = \Psi^{-1}(w)$. Since $0 \leq r < w$, $|A[\alpha]| = \Psi(\alpha) = w > 0$, whence $A[\alpha]$ is finite and nonempty. Thus, there exists $\beta = \max A[\alpha]$, and $A[\alpha] = A[\beta]\cup\{\beta\}$ with $A[\beta]\cap\{\beta\} = \varnothing$. Then, $w = \Psi(\alpha) = |A[\alpha]| = |A[\beta]| + 1 = \Psi(\beta) + 1 \, \therefore \, w-1 = \Psi(\beta) \in \mathrm{Im}\, \Psi$. Moreover, $r \leq w - 1$, because $w \in B \, \therefore \, r < w$. But $r \notin \mathrm{Im}\, \Psi \ni w-1$, so that $r \neq w-1$. Hence, $r < w-1 \, \therefore \, w-1 \in B$, contradicting the minimality of $w$.
\end{proof}
\end{document}
\begin{document}
\title{Unboundedness problems for languages of vector addition systems}
\begin{abstract}
A vector addition system (VAS) with an initial and a final marking
and transition labels induces a language. In part because the
reachability problem in VAS remains far from being well-understood,
it is difficult to devise decision procedures for such languages.
This is especially true for checking properties that state
the existence of infinitely many words of a particular
shape. Informally, we call these \emph{unboundedness properties}.
We present a simple set of axioms for predicates that can express
unboundedness properties. Our main result is that such a
predicate is decidable for VAS languages as soon as it is decidable
for regular languages. Among other results, this allows us to show
decidability of (i)~separability by bounded regular languages,
(ii)~unboundedness of occurring factors from a language $K$ with
mild conditions on $K$, and (iii)~universality of the set of factors.
\end{abstract}
\section{Introduction}
Vector addition systems (VAS) and the essentially equivalent Petri nets
are among the most widely used models of concurrent systems. Although
they are used extensively in practice, there are still fundamental
questions about them that are far from being well understood.
This is reflected in what we know about decidability questions
regarding the most expressive class of languages associated to VAS:
The languages of (arbitrarily) labeled VAS with a given initial and
final configuration, which we just call \emph{VAS languages}. In the
1970s, this class was characterized in terms of closure
properties and Dyck languages by Greibach~\cite{Greibach1978} and
Jantzen~\cite{Jantzen1979}. Almost all decidability results about
these languages use a combination of these closure properties and the
decidability of the reachability problem for
VAS~\cite{DBLP:conf/stoc/Mayr81} (or for Reinhardt's
extension~\cite{Reinhardt2008}, such as
in~\cite{AtigGanty2011,Zetzsche2018a}). Of course, this method is
confined to procedures that somehow reduce to the existence of one or
finitely many runs of vector addition systems.
There are two notable exceptions (and, to the authors' knowledge,
these are the only exceptions) to this and they both rely on an
inspection of decision procedures for VAS. The first is Hauschildt
and Jantzen's result~\cite{HauschildtJantzen1994} from 1994 that
finiteness of VAS languages is decidable, which employs Hauschildt's
algorithm to decide semilinearity of reachability
sets~\cite{hauschildt1990semilinearity}. The second is the much more
recent result of Habermehl, Meyer, and Wimmel from
2010~\cite{HabermehlMeyerWimmel2010}, showing that downward closures
are computable for VAS languages, which significantly generalizes
decidability of finiteness. Their proof involves a careful inspection
of marked graph-transition sequences (MGTS) in Lambert's algorithm for
the reachability problem. This sparsity of decidability results is due
to the fact that the algorithms for the reachability problem are still
quite unwieldy and have been digested by few members of the research
community.
In particular, it currently seems difficult to decide whether there
exist infinitely many words of some shape in a given language---unless
the problem reduces to computing downward closures. Informally, we
call problems of this type \emph{unboundedness problems}. Such
problems are important for two reasons. The first concerns
\emph{separability problems}, which have attracted attention in recent
years~\cite{DBLP:journals/fuin/Bojanczyk17,DBLP:conf/stacs/ClementeCLP17,
DBLP:conf/icalp/Goubault-Larrecq16,DBLP:conf/csr/PlaceZ17,DBLP:conf/lics/PlaceZ17}.
Here, instead of deciding whether two languages are
disjoint, we are looking for a (typically finite-state) certificate
for disjointness, namely a set that includes one language and is
disjoint from the other. For general topological reasons,
inseparability is usually witnessed by a common pattern, whose
presence in a language is an unboundedness property. The second
reason is that unboundedness problems tend to be \emph{decidable where
exact queries are not}. This phenomenon also occurs in the theory of
regular cost functions~\cite{Colcombet2013}. Moreover, as it turns
out in this work, this is true for VAS languages as well.
\subparagraph*{Contribution}
We present a simple notion of an \emph{unboundedness predicate} on
languages and show that such predicates are decidable for VAS
languages as soon as they are decidable for regular languages. On the
one hand, this provides an easy and general way to obtain new
decidability results for VAS languages without the need to understand
the details of the KLMST decomposition. On the other hand, we apply
this framework to prove:
\begin{results}
\item\label{list:boundedness} Boundedness in the sense of Ginsburg and
Spanier~\cite{ginsburg1964bounded} is decidable for VAS languages. A
language $L\subseteq\Sigma^*$ is \emph{bounded} if there are
$w_1,\ldots,w_n\in\Sigma^*$ with $L\subseteq w_1^*\cdots
w_n^*$. Moreover, it is decidable whether two given VAS languages
are separable by a bounded regular language.
\item\label{list:downward} Computability of downward closures
can be recovered as well.
\item\label{list:factors} Suppose that $K\subseteq\Sigma^*$ is chosen
so that it is decidable whether $K$ intersects a given regular
language. Then, it is decidable for a given VAS language $L$ whether
$L$ contains words with arbitrarily many factors from $K$.
Moreover, in case the number of factor occurrences in $L$ is
bounded, we can even compute an upper bound.
\item\label{list:universality} Under the same assumptions as above on
$K\subseteq\Sigma^*$, one can decide if every word from $K^*$
appears as a factor of a given VAS language $L\subseteq\Sigma^*$. In
particular, it is decidable whether $L$ contains every word from
$\Sigma^*$ as a factor.
\end{results}
It should be stressed that results
\labelcref{list:factors,list:universality} came as a complete surprise
to the authors. First, this is because the assumptions are already
satisfied when $K$ is induced by a system model as powerful as
well-structured transition systems or higher-order recursion schemes.
In these cases, it is in general \emph{undecidable} whether a given
VAS language contains a factor from $K$ \emph{at least once}, because
intersection emptiness easily reduces to this problem (see the remarks
after~\cref{non-overlapping-factors:decidable}). We therefore believe
that these results might lead to new approaches to verifying systems
with concurrency and (higher-order) recursion, where the latter
undecidability (or the unknown status in the case of simple
recursion~\cite{LerouxSutreTotzke2015}) is usually a barrier for
decision procedures.
The second reason for our surprise about
\labelcref{list:factors,list:universality} is that these problems are
undecidable as soon as $L$ is just slightly beyond the realm of VAS:
Already for one-counter languages $L$,
both \labelcref{list:factors,list:universality} become undecidable. Thus, compared to other infinite-state systems, VAS languages turn
out to be extraordinarily amenable to unboundedness problems.
\subparagraph*{Related work} Other authors have investigated general
notions of unboundedness properties for
VAS~\cite{DBLP:journals/ijfcs/AtigH11,BlockeletSchmitz2011,demri-infinity2010,yen1992unified},
usually with the goal of obtaining $\mathsf{EXPSPACE}$ upper bounds. However,
those properties a priori concern the state space itself. While they
can sometimes be used to reason about
languages~\cite{BlockeletSchmitz2011,demri-infinity2010}, this has
been confined to coverability languages, which are significantly less
expressive than the reachability languages studied here. Specifically,
every problem we consider here is hard for the reachability problem
(see \cref{complexity}).
An early attempt was Yen's work~\cite{yen1992unified}, which claimed
an $\mathsf{EXPSPACE}$ upper bound for a powerful logic concerning paths in
VAS. Unfortunately, a serious flaw in the latter was discovered by
Atig and Habermehl~\cite{DBLP:journals/ijfcs/AtigH11}, who presented a
corrected proof for a restricted version of Yen's
logic. Demri~\cite{demri-infinity2010} then introduced a notion of
\emph{generalized unboundedness properties}, which covers more
properties from Yen's logic, and gave an $\mathsf{EXPSPACE}$ procedure to
check them. Examples include reversal-boundedness, place boundedness,
and regularity of firing sequences of unlabeled VAS. Finally, Blockelet
and Schmitz~\cite{BlockeletSchmitz2011} introduced an extension of
computation tree logic (CTL) that can express ``coverability-like
properties'' of VAS, and proved an $\mathsf{EXPSPACE}$ upper bound for
model checking this logic on VAS.
\subparagraph*{Organization} \Cref{sec:prelim} contains the
preliminaries. \Cref{sec:results} defines our notion of unboundedness
predicates and presents our main result. In \cref{sec:applications}, we
apply the main result to obtain the consequences mentioned above.
\Cref{sec:proof} is devoted to the proof of the main result.
\section{Preliminaries}\label{sec:prelim}
Let $\Sigma$ be a finite alphabet. For $w\in\Sigma^*$, we denote its length by $|w|$.
The $i$-th letter of $w$, for $i \in [1,|w|]$, is denoted $w[i]$.
Moreover, we write $\Sigma_\varepsilon = \Sigma \cup \{\varepsilon\}$.
A \emph{($d$-dimensional) vector addition system} (VAS) $V$ consists of a finite set of \emph{transitions} $T \subseteq \mathbb{Z}^d$,
\emph{source} and \emph{target} vectors $s, t \in \mathbb{N}^d$ and a \emph{labeling} $h\colon T \to \Sigma_\varepsilon$, whose extension to a morphism $T^*\to\Sigma^*$ is also denoted $h$.
Vectors $v\in\mathbb{N}^d$ are also called \emph{configurations}. A transition $t \in T$ can be \emph{fired} in a configuration
$v \in \mathbb{N}^d$ if $v+t \in \mathbb{N}^d$. Then, the result of firing $t$ is the configuration $v+t$ and we write $v \trans{h(t)} v'$ for $v' = v+t$.
For $w\in\Sigma^*$, we write $v \trans{w} v'$ if there exist $v_1, \ldots, v_k\in\mathbb{N}^d$
such that
$
v = v_0 \trans{x_1} v_1 \trans{x_2} \ldots \trans{x_k} v_k \trans{x_{k+1}} v_{k+1} = v',
$
where $w = x_1 \cdots x_{k+1}$ for some $x_1,\ldots,x_{k+1}\in\Sigma_\varepsilon$.
The \emph{language} of $V$, denoted $L(V)$, is the set of all labels of runs from source to target, i.e., $L(V) = \{w\in\Sigma^* \mid s \trans{w} t\}$.
The languages of the form $L(V)$ for VAS $V$ are called \emph{VAS languages}.
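For intuition (this is an illustration of ours, not part of the formal development), the definitions above can be made concrete by a bounded search that enumerates all words of $L(V)$ up to a given length; the dimension, transitions, and labels below are arbitrary choices, and for simplicity every transition carries a visible letter (no $\varepsilon$-labels).

```python
def vas_words(T, labels, s, t, max_len):
    """Collect the labels of all runs s ->w t with |w| <= max_len.
    T: list of transition vectors; labels[i]: the (non-empty) letter of T[i]."""
    found, seen = set(), set()
    stack = [(tuple(s), "")]
    while stack:
        v, w = stack.pop()
        if len(w) > max_len or (v, w) in seen:
            continue
        seen.add((v, w))
        if v == tuple(t):          # a run from source to target: keep its label
            found.add(w)
        for delta, letter in zip(T, labels):
            v2 = tuple(a + b for a, b in zip(v, delta))
            if all(x >= 0 for x in v2):   # configurations must stay in N^d
                stack.append((v2, w + letter))
    return found

# 1-dimensional VAS: 'a' increments, 'b' decrements; source = target = 0.
lang = vas_words([(1,), (-1,)], ["a", "b"], (0,), (0,), 4)
```

Here $L(V)$ consists of the words whose every prefix has at least as many $a$'s as $b$'s and which balance out overall; the sketch recovers its members up to length~4.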
A word $u = a_1 \cdots a_n$ with $a_i \in \Sigma$ is a \emph{subword}
of a word $v \in \Sigma^*$ if $v \in \Sigma^* a_1 \Sigma^* \cdots
\Sigma^* a_n \Sigma^*$, which is denoted $u \preceq v$. For a
language $L \subseteq \Sigma^*$ its \emph{downward closure} is the
language $\dcl{L} = \{u\in\Sigma^* \mid \exists v \in L \colon u
\preceq v\}$. It is known that $\dcl{L}$ is regular for every
$L\subseteq\Sigma^*$~\cite{Higman1952,Haines1969}. A \emph{language
class} is a collection of languages, together with some way of
finitely describing these languages (such as by grammars, automata,
etc.). If $\mathcal{C}$ is a language class so that given a description of a
language $L$ from $\mathcal{C}$, we can compute an automaton for $\dcl{L}$, we
say that \emph{downward closures are computable} for $\mathcal{C}$.
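As a concrete illustration of these notions (an informal sketch, not part of the paper's formalism), the subword relation and the downward closure of a \emph{finite} language can be computed directly; for infinite languages one instead represents $\dcl{L}$ by an automaton.

```python
from itertools import combinations

def is_subword(u, v):
    """u is a subword of v: u arises from v by deleting letters."""
    it = iter(v)
    return all(a in it for a in u)   # find each letter of u in order

def downward_closure(words):
    """Downward closure of a finite language, as an explicit finite set."""
    out = set()
    for w in words:
        for k in range(len(w) + 1):
            for idx in combinations(range(len(w)), k):
                out.add("".join(w[i] for i in idx))
    return out
```

The `it = iter(v)` trick makes each `a in it` consume the tape up to the next occurrence of `a`, which is exactly the order-preserving embedding in the definition of $u \preceq v$.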
A \emph{full trio} is a language class that is effectively closed
under rational transductions~\cite{Berstel1979}, which are relations
defined by nondeterministic two-tape automata. Examples of full trios
are abundant among infinite-state models: If a nondeterministic
machine model involves a finite-state control, the resulting language
class is a full trio. Equivalently, a full trio is a class that is
effectively closed under morphisms, inverse morphisms, and regular
intersection~\cite{Berstel1979}. Examples include \emph{VAS
languages}~\cite{Jantzen1979}, \emph{coverability languages of
WSTS}~\cite{GeeraertsRaskinVanBegin2007}, \emph{one-counter languages}
(which are accepted by one-counter automata \emph{with zero
tests})~\cite{HopcroftUllman1979}, and languages of \emph{higher-order
pushdown automata}~\cite{Maslov1976} and \emph{higher-order recursion
schemes}~\cite{DBLP:conf/lics/HagueMOS08}. The context-sensitive languages do not
constitute a full trio, as they are not closed under erasing morphisms.
\section{Main result}\label{sec:results}
Here, we introduce our notion of
unboundedness predicates and present our main result.
For didactic purposes, we begin our exposition of unboundedness
predicates with a simplified (but already useful) version. An
important aspect of the definition is that, technically, an
unboundedness predicate is not a property of the language
$L\subseteq\Sigma^*$ we want to analyze, but of the set of its
factors. In other words, we have a unary predicate $\pP$ on languages
and we want to decide whether $\pP(\factors{L})$, where
$\factors{L}=\{w\in\Sigma^* \mid L\cap \Sigma^*w\Sigma^*\ne\emptyset
\}$ is the set of factors of $L$.
For the definition, it is helpful to keep in mind the simplest example
of an unboundedness predicate, the \emph{infinity predicate} $\pP[inf]$,
where $\pP[inf](K)$ if and only if $K$ is infinite. Then, $\pP[inf](\factors{L})$ if
and only if $L$ is infinite.
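For a finite language, the factor set $\factors{L}$ from the definition above can be computed explicitly; the following sketch (ours, restricted to finite $L$) enumerates it.

```python
def factors(L):
    """All factors (contiguous infixes) of words in a finite language L,
    i.e. {w : L intersects Sigma* w Sigma*}, computed for finite L."""
    return {w[i:j] for w in L
            for i in range(len(w) + 1)
            for j in range(i, len(w) + 1)}
```

Note that the empty word is a factor of every word, so $\varepsilon\in\factors{L}$ whenever $L\ne\emptyset$.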
A unary predicate $\pP$ on subsets of
$\Sigma^*$ is called a \emph{1-dimensional unboundedness predicate} if
for every $K,L\subseteq\Sigma^*$, we have:
\begin{axioms}[label=({\roman*}$^*$), widest=(iii$^*$)]
\item\label{axiom:1dim:upward} if $\pP(K)$ and $K\subseteq L$, then $\pP(L)$.
\item\label{axiom:1dim:union} if $\pP(K\cup L)$, then either $\pP(L)$ or $\pP(K)$.
\item\label{axiom:1dim:concatenation} if $\pP(\factors{KL})$, then either $\pP(\factors{K})$ or $\pP(\factors{L})$.
\end{axioms}
Part of our result will be that for such predicates, if we can decide
whether $\pP(\factors{R})$ for regular languages $R$, then we can decide whether
$\pP(\factors{L})$ for VAS languages $L$. Before we come to that, we want to
generalize a bit.
There are predicates we want to decide that fail to satisfy
\cref{axiom:1dim:concatenation}, such as the one stating
$a^*b^*\subseteq\dcl{L}$ for $L\subseteq\Sigma^*$: it is satisfied for
$a^*b^*$, but neither for $a^*$ nor for $b^*$. (Deciding such
predicates is useful for computing downward
closures~\cite{DBLP:conf/icalp/Zetzsche15} and separability by
piecewise testable
languages~\cite{DBLP:journals/dmtcs/CzerwinskiMRZZ17}.) To capture such
predicates, which intuitively ask for several quantities being
unbounded simultaneously, we present a more general set of axioms. Here, the idea is to
formulate predicates over simultaneously occurring factors. For a
language $L\subseteq\Sigma^*$ and $n\in\mathbb{N}$, let
\[ \factors[n]{L} = \{(w_1,\ldots,w_n)\in(\Sigma^*)^n \mid \Sigma^*w_1\Sigma^*\cdots w_n\Sigma^*\cap L\ne\emptyset \}. \]
We will speak of \emph{$n$-dimensional} predicates, i.e., predicates
$\pP$ on subsets of $(\Sigma^*)^n$, and we want to decide whether
$\pP(\factors[n]{L})$ for a given language $L$. The following are
axioms referring to all subsets $S,T\subseteq(\Sigma^*)^n$, languages
$L_i\subseteq\Sigma^*$, and all $k\in\mathbb{N}$. We call $\pP$ an \emph{($n$-dimensional)
unboundedness predicate} if
\begin{axioms}
\item\label{axiom:upward} if $\pP(S)$ and $S\subseteq T$, then $\pP(T)$.
\item\label{axiom:union} if $\pP(S\cup T)$, then $\pP(S)$ or $\pP(T)$.
\item\label{axiom:concatenation} if $\pP(\factors[n]{L_1\cdots L_k})$, then there are $n_1,\ldots,n_k\in\mathbb{N}$ with $n=n_1+\cdots +n_k$ such that
$\pP(\factors[n_1]{L_1}\times\cdots\times\factors[n_k]{L_k})$.
\end{axioms}
Intuitively, the last axiom says that if a concatenation satisfies the
predicate, then this is already witnessed by factors in at most $n$
participants of the concatenation. Note that for $n=1$, the axioms
coincide with the simplified
\cref{axiom:1dim:upward,axiom:1dim:union,axiom:1dim:concatenation}
above. An $n$-dimensional unboundedness predicate $\pP$ is
\emph{decidable for a language class $\mathcal{C}$} if, given a language $L$
from $\mathcal{C}$, it is decidable whether $\pP(\factors[n]{L})$. The
following is our main result.
\begin{thm}\label{thm:approximation-unboundedness}
Given a VAS language $L\subseteq\Sigma^*$, one can compute a regular
$R\subseteq\Sigma^*$ such that $L\subseteq R$ and for every $n$-dim. unboundedness
predicate $\pP$, we have $\pP(\factors[n]{L})$ iff
$\pP(\factors[n]{R})$.
\end{thm}
Note that this implies that decidability of $\pP$ for regular
languages implies decidability of $\pP$ for VAS languages for any
$n$-dim. unboundedness predicate $\pP$. In addition, when our
unboundedness predicate expresses that a certain quantity is
unbounded, then in the bounded case,
\cref{thm:approximation-unboundedness} sometimes allows us to compute
an upper bound (see, e.g., \cref{non-overlapping-factors:decidable}).
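To make the $n$-dimensional factor sets $\factors[n]{L}$ concrete, here is a small sketch of ours (again only for finite languages) that enumerates all tuples of factors occurring disjointly and in order.

```python
def factors_n(L, n):
    """F_n(L): tuples (w_1,...,w_n) such that Sigma* w_1 Sigma* ... w_n Sigma*
    meets the finite language L, i.e. the w_i occur in order, disjointly."""
    def tuples(w, m):
        if m == 0:
            yield ()
            return
        for i in range(len(w) + 1):
            for j in range(i, len(w) + 1):
                # choose the first factor w[i:j], then recurse on the rest
                for rest in tuples(w[j:], m - 1):
                    yield (w[i:j],) + rest
    return {t for w in L for t in tuples(w, n)}
```

For example, $(a,b)\in\factors[2]{\{acb\}}$ but $(b,a)\notin\factors[2]{\{acb\}}$, since the components must appear in the given order.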
\begin{rem}\label{complexity}
Let us comment on the complexity of deciding whether
$\pP(\factors[n]{L})$ for a VAS language $L$. Call $\pP$
\emph{non-trivial} if there is at least one $S\subseteq(\Sigma^*)^n$
that satisfies $\pP$ and at least one $S'\subseteq(\Sigma^*)^n$ for which
$\pP$ is not satisfied. Then, deciding whether $\pP(\factors[n]{L})$
is at least as hard as the reachability problem. Indeed, in this
case \cref{axiom:upward} implies that
$\factors[n]{\Sigma^*}=(\Sigma^*)^n$ satisfies $\pP$, but
$\factors[n]{\emptyset}=\emptyset$ does not. Given a VAS $V$ and two
vectors $\mu_1$ and $\mu_2$, it is easy to construct a VAS $V'$ so
that $L(V')=\Sigma^*$ if $V$ can reach $\mu_2$ from $\mu_1$ and
$L(V')=\emptyset$ otherwise.
\end{rem}
\section{Applications}\label{sec:applications}
\newcommand{\application}[1]{\subparagraph*{#1}}
\application{Bounded languages}
Our first application concerns bounded languages. A language $L
\subseteq \Sigma^*$ is \emph{bounded} if there exist words $w_1,
\ldots, w_n \in \Sigma^*$ such that $L \subseteq w_1^* \cdots w_n^*$.
This notion was introduced by Ginsburg
and Spanier~\cite{ginsburg1964bounded}. Since a
bounded language as above can be characterized by the set of vectors
$(x_1,\ldots,x_n)\in\mathbb{N}^n$ for which $w_1^{x_1}\cdots w_n^{x_n}\in L$,
bounded languages are quite amenable to analysis. This has led to a
number of applications to concurrent recursive
programs~\cite{DBLP:conf/popl/EsparzaG11,DBLP:conf/lics/EsparzaGM12,DBLP:journals/toplas/EsparzaGP14,DBLP:journals/fmsd/GantyMM12,long2012language},
but also counter systems~\cite{demri2010model} and WSTS~\cite{DBLP:journals/tcs/ChambartFS16}.
Boundedness has been shown decidable for \emph{context-free languages}
by Ginsburg and Spanier~\cite{ginsburg1964bounded}
($\mathsf{PTIME}$-completeness by
Gawrychowski~et~al.~\cite{gawrychowski2010finding}) and hence also for
\emph{regular languages} ($\mathsf{NL}$-completeness also
in~\cite{gawrychowski2010finding}), for \emph{equal matrix languages}
by Siromoney~\cite{siromoney1969characterization}, and for trace
languages of \emph{complete deterministic well-structured transition
systems} by
Chambart~et~al.~\cite{DBLP:journals/tcs/ChambartFS16}. The latter
implies that boundedness is decidable for coverability languages of
deterministic vector addition systems, in which case
$\mathsf{EXPSPACE}$-completeness was shown by
Chambart~et~al.~\cite{DBLP:journals/tcs/ChambartFS16} (the upper bound
had been established by Blockelet and
Schmitz~\cite{BlockeletSchmitz2011}).
We use \cref{thm:approximation-unboundedness} to show the following.
\begin{thm}\label{thm:boundedness}
Given a VAS, it is decidable whether its language is bounded.
\end{thm}
The rest of this section is devoted to the proof of \cref{thm:boundedness}.
Let $\pP[notb]$ be the $1$-dimensional predicate that holds for a
language $K \subseteq \Sigma^*$ if and only if $K$ is not
bounded. We plan to apply~\cref{thm:approximation-unboundedness} to $\pP[notb]$, but it
allows us to decide only whether $\pP[notb](\factors{L})$ for a given VAS language $L$.
Thus we need the following fact, which we prove in a moment.
\begin{fact}\label{fact:boundedness-factors}
A language $L\subseteq\Sigma^*$ is bounded if and only if $\factors{L}$ is bounded.
\end{fact}
Now we need to show that $\pP[notb]$ is indeed an unboundedness
predicate, meaning that it satisfies
\cref{axiom:1dim:upward,axiom:1dim:union,axiom:1dim:concatenation}.
By definition of boundedness, $\pP[notb]$ clearly fulfills
\cref{axiom:1dim:upward}: any subset of a bounded language is
bounded itself. \Cref{axiom:1dim:union,axiom:1dim:concatenation} are
implied by \cref{fact:boundedness-factors} and the following.
\begin{fact}\label{fact:boundedness-closed}
If $K$ and $L$ are bounded then both $K \cup L$ and $KL$ are bounded as well.
\end{fact}
\newcommand{\fofword}[1]{\factors{#1}}
Let us prove \cref{fact:boundedness-closed,fact:boundedness-factors} and
begin with \cref{fact:boundedness-closed}. If $K$ and $L$ are
bounded, then $K \subseteq w_1^* \cdots w_n^*$ and $L \subseteq
w_{n+1}^* \cdots w_m^*$ for some $w_1, \ldots, w_m \in \Sigma^*$.
Then we have $K \cup L, KL \subseteq w_1^* \cdots w_m^*$, which shows
\cref{fact:boundedness-closed}. In order to show
\cref{fact:boundedness-factors}, observe first that for each
individual word $w\in\Sigma^*$, the language $\fofword{w}$ is
bounded because it is finite. Thus, if $L\subseteq w_1^*\cdots
w_n^*$, then $\factors{L}$ is included in
$\fofword{w_1}w_1^*\fofword{w_1}\cdots \fofword{w_n}w_n^* \fofword{w_n}$,
which is bounded as a concatenation of bounded languages by
\cref{fact:boundedness-closed}. Thus, $\factors{L}$ is bounded as
well. Conversely, $L$ inherits boundedness from its superset
$\factors{L}$.
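The closure properties just proved can be checked on small instances; the sketch below (our illustration, using Python's `re` module) tests membership of a finite set in a bounded expression $w_1^*\cdots w_m^*$, and exhibits \cref{fact:boundedness-closed} on a toy example.

```python
import re

def bounded_by(L, words):
    """Check that every word of the finite set L lies in w_1^* ... w_m^*."""
    pattern = "".join("(?:%s)*" % re.escape(w) for w in words)
    return all(re.fullmatch(pattern, x) is not None for x in L)

# K is a subset of a*, Lb a subset of b*; union and concatenation fit a^* b^*.
K = {"a", "aa"}
Lb = {"b", "bb"}
KL = {x + y for x in K for y in Lb}
```

This mirrors the proof above: the separate witnesses $w_1^*\cdots w_n^*$ and $w_{n+1}^*\cdots w_m^*$ are simply concatenated into one bounded expression covering both $K\cup L$ and $KL$.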
To conclude \cref{thm:boundedness}, we need to show that given a regular language
$R\subseteq\Sigma^*$, it is decidable whether
$\pP[notb](\factors{R})$. By
\cref{fact:boundedness-factors}, this amounts to checking whether $R$
is bounded. This is decidable even for context-free
languages~\cite{ginsburg1964bounded} (and in $\mathsf{NL}$ for regular
ones~\cite{gawrychowski2010finding}).
\subparagraph*{Separability}
We can also use our results to decide
whether two VAS languages are separable by a bounded regular
language. Very generally, if $\mathcal{S}$ is a class of sets, we say that a
set $K$ is \emph{separable from} a set $L$ \emph{by} a set from $\mathcal{S}$
if there is a set $S$ in $\mathcal{S}$ so that $K\subseteq S$ and $L\cap
S=\emptyset$.
The separability problem was recently investigated for VAS languages
and several subclasses thereof.
In~\cite{DBLP:journals/dmtcs/CzerwinskiMRZZ17} it is shown that
separability of VAS languages by piecewise testable languages (a
subclass of regular languages) is decidable. Decidability of separability of VAS
languages by regular languages is still open, but it is
known for several subclasses of VAS
languages~\cite{DBLP:conf/icalp/ClementeCLP17,DBLP:conf/stacs/ClementeCLP17,DBLP:conf/lics/CzerwinskiL17}.
In~\cite{DBLP:journals/corr/CzerwinskiL17a} it is shown that any two
disjoint VAS coverability languages are separable by a regular
language. Here, using~\cref{thm:boundedness} we are able to show the following.
\begin{thm}\label{thm:bounded-separability}
Given two VAS languages $K$ and $L$, it is decidable whether $K$ is separable
from $L$ by a bounded regular language.
\end{thm}
Clearly, in order for such a separator to exist, $K$ has to be bounded, which we
can decide by \cref{thm:boundedness}. Moreover, by enumerating expressions $w_1^*\cdots w_n^*$,
we can find one with $K\subseteq w_1^*\cdots w_n^*$. Since the bounded regular languages (BRL) are
closed under intersection (recall that a subset of a bounded language
is again bounded), $K$ and $L$ are separable by a BRL if and only if
$L_0=K$ and $L_1=L\cap w_1^*\cdots w_n^*$ are separable by a BRL. Since now
both input languages are included in $w_1^*\cdots w_n^*$, we can reformulate
the problem into one over vector sets.
\begin{lem}\label{separability-commutative}
Let $L_0,L_1\subseteq w_1^*\cdots w_n^*$ and
$U_i=\{(x_1,\ldots,x_n)\in\mathbb{N}^n \mid w_1^{x_1}\cdots w_n^{x_n}\in
L_i\}$ for $i\in\{0,1\}$. Then, $L_0$ is separable from $L_1$ by a
BRL if and only if $U_0$ is separable from $U_1$ by a recognizable
subset of $\mathbb{N}^n$.
\end{lem}
Recall that a subset $S\subseteq\mathbb{N}^n$ is \emph{recognizable} if there is a morphism
$\varphi\colon\mathbb{N}^n\to F$ into a finite monoid $F$ with $S=\varphi^{-1}(\varphi(S))$.
\Cref{separability-commutative} is a straightforward application of Ginsburg and Spanier's
characterization of BRL~\cite{GinsburgSpanier1966a}.
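For intuition, a recognizable subset of $\mathbb{N}^n$ in the sense above is a union of classes of a morphism into a finite monoid; the sketch below (our illustration, with arbitrarily chosen moduli) realizes one such set via $\varphi(x,y)=(x \bmod 2,\, y \bmod 3)$.

```python
# phi: N^2 -> Z_2 x Z_3 is a monoid morphism (componentwise addition),
# so any union of phi-classes is a recognizable subset of N^2.
def phi(v):
    return (v[0] % 2, v[1] % 3)

def recognized(v, classes):
    """Membership in the recognizable set phi^{-1}(classes)."""
    return phi(v) in classes

S_classes = {(1, 0)}   # a single phi-class, chosen arbitrarily
```

By Ginsburg and Spanier's characterization invoked above, such sets correspond exactly to the bounded regular separators once vectors are translated back to words $w_1^{x_1}\cdots w_n^{x_n}$.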
Since in our case, $L_0$ and $L_1$ are VAS languages, a standard construction shows that $U_0$ and $U_1$
are (effectively computable) sections of VAS reachability sets.
Here, sections are defined as follows. For a subset $I\subseteq [1,n]$,
let $\pi_I\colon \mathbb{N}^n\to\mathbb{N}^{|I|}$ be the projection onto the coordinates in $I$.
Then, every set of the form $\pi_{[1,n]\setminus I}(S\cap \pi_{I}^{-1}(x))$ for some
$I\subseteq[1,n]$ and $x\in\mathbb{N}^{|I|}$ is called a \emph{section} of $S\subseteq\mathbb{N}^n$.
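The notion of a section can be illustrated on a finite set of vectors (a sketch of ours; actual reachability sets are of course infinite in general, and we use 0-based coordinates rather than the 1-based text).

```python
def section(S, I, x, n):
    """pi_{[1,n] minus I}(S intersected with pi_I^{-1}(x)) for finite S in N^n:
    fix the coordinates in I to the values of x, project them away."""
    I = sorted(I)
    rest = [i for i in range(n) if i not in I]
    return {tuple(v[i] for i in rest)
            for v in S
            if tuple(v[i] for i in I) == tuple(x)}
```

Taking $I=\emptyset$ recovers $S$ itself, so every set is a section of itself.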
Thus, the following result by Clemente~et~al.~\cite{DBLP:conf/stacs/ClementeCLP17} allows us to decide separability by BRL.
\begin{thm}[\cite{DBLP:conf/stacs/ClementeCLP17}]
Given two sections $S_0,S_1\subseteq\mathbb{N}^n$ of reachability sets of
VAS, it is decidable whether $S_0$ is separable from $S_1$ by a
recognizable subset of $\mathbb{N}^n$.
\end{thm}
\application{Downward closures and simultaneous unboundedness}
We now illustrate how to compute downward closures using our results.
First of all, computability of downward closures for VAS languages
follows directly from \cref{thm:approximation-unboundedness} because
it implies $\dcl{R}=\dcl{L}$: For each word $w=a_1\cdots a_n$ with
$a_1,\ldots,a_n\in\Sigma$, consider the $n$-dimensional predicate
$\mathfrak{p}_w$ which is satisfied for $S\subseteq(\Sigma^*)^n$ iff
$(a_1,\ldots,a_n)\in S$. Then $\mathfrak{p}_w(\factors[n]{L})$ if and only if
$w\in\dcl{L}$. It is easy to check that this is an unboundedness
predicate. Hence, $\dcl{R}=\dcl{L}$.
However, in order to illustrate how to apply unboundedness predicates,
we present an alternative approach. In
\cite{DBLP:conf/icalp/Zetzsche15}, it was shown that if a language
class $\mathcal{C}$ is closed under rational transductions (which is the case
for VAS languages), then downward closures are computable for $\mathcal{C}$ if
and only if, given a language $L$ from $\mathcal{C}$ and letters
$a_1,\ldots,a_n$, it is decidable whether $a_1^*\cdots
a_n^*\subseteq\dcl{L}$. Let us show how to decide the latter using
unboundedness predicates.
For this, we use an $n$-dimensional predicate. For a subset
$S\subseteq(\Sigma^*)^n$, let $\dcl{S}$ be the set of all tuples
$(u_1,\ldots,u_n)\in(\Sigma^*)^n$ such that there is some
$(v_1,\ldots,v_n)\in S$ with $u_i\preceq v_i$ for $i\in[1,n]$. Our
predicate $\pP[sup]$ is satisfied for $S\subseteq(\Sigma^*)^n$ if and
only if $a_1^*\times\cdots \times a_n^*\subseteq \dcl{S}$. Then clearly
$\pP[sup](\factors[n]{L})$ if and only if $a_1^*\cdots
a_n^*\subseteq\dcl{L}$. It is easy to check that $\pP[sup]$ fulfills
\cref{axiom:upward} and \cref{axiom:union}. For the latter, note
that $a_1^*\times\cdots\times a_n^*\subseteq \dcl{(S_1\cup S_2)}$
implies that for some $j\in\{1,2\}$, there are infinitely many
$\ell\in\mathbb{N}$ with $(a_1^\ell,\ldots,a_n^\ell)\in S_j$ and hence
$a_1^*\times\cdots\times a_n^*\subseteq\dcl{S_j}$. For \cref{axiom:concatenation},
we need a simple combinatorial argument:
\begin{lem}\label{lemma:sup}
If $a_1^*\times\cdots \times a_n^*\subseteq\dcl{\factors[n]{L_1\cdots L_k}}$, then $n=n_1+\cdots+n_k$
with $a_1^*\times\cdots\times a_n^*\subseteq\dcl{(\factors[n_1]{L_1}\times\cdots\times\factors[n_k]{L_k})}$.
\end{lem}
It remains to show that for a regular language $R$, it is decidable
whether $a_1^* \cdots a_n^* \subseteq \dcl{R}$. Since it is easy to
construct an automaton for $\dcl{R}$, this amounts to a simple
inclusion check.
\application{Non-overlapping factors}
Our next example shows that under very mild assumptions on a
language $K$, one can decide whether the words in a VAS language $L$
contain arbitrarily many factors from $K$. For $w\in\Sigma^*$ and
$K\subseteq\Sigma^+$, let $|w|_K$ be the largest number $m$ such that
there are $w_1,\ldots,w_m\in K$ with $(w_1,\ldots,w_m)\in
\factors[m]{w}$. Note that since $\varepsilon \not\in K$, there is always
a maximal such $m$. Consider the function $f_K\colon\Sigma^*\to\mathbb{N}$,
$w\mapsto |w|_K$. A function $f\colon \Sigma^*\to\mathbb{N}\cup\{\omega\}$ is
\emph{unbounded} on $L\subseteq\Sigma^*$ if for every $k\in\mathbb{N}$,
we have $f(w)\ge k$ for some $w\in L$.
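The quantity $|w|_K$ can be computed for finite $K$ by a standard dynamic program (our sketch; the recurrence implements exactly the in-order, non-overlapping reading of $\factors[m]{w}$).

```python
def count_factors(w, K):
    """|w|_K: maximum number m of pairwise disjoint factors from K
    occurring in order in w, for a finite K not containing the empty word."""
    # m[i] = best count achievable inside the prefix w[:i]
    m = [0] * (len(w) + 1)
    for i in range(1, len(w) + 1):
        m[i] = m[i - 1]                      # position i-1 unused by a match
        for j in range(i):
            if w[j:i] in K:                   # a match ending at position i
                m[i] = max(m[i], m[j] + 1)
    return m[len(w)]
```

For instance $|abab|_{\{ab\}} = 2$, while $|aaa|_{\{aa\}} = 1$, since occurrences must not overlap.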
\begin{thm}\label{non-overlapping-factors:decidable}
If $\mathcal{C}$ is a full trio with decidable emptiness problem, then given
a VAS language $L$ and a language $K\subseteq\Sigma^+$ from $\mathcal{C}$,
it is decidable whether $f_K$ is unbounded on $L$. If $f_K$ is
bounded on $L$, we can compute an upper bound.
\end{thm}
\Cref{non-overlapping-factors:decidable} is quite
unexpected because very slight variations lead to undecidability.
If we ask whether $f_K$ is non-zero on a given VAS language (as
opposed to unbounded), then this is in general undecidable. Indeed,
suppose $\mathcal{C}$ is a full trio for which intersection with VAS languages
is undecidable (such as languages of lossy channel systems\footnote{It
seems to be folklore that intersection between languages of lossy
channel systems and languages of one-dimensional VAS is undecidable
(the additional counter can be used to ensure that no letter is
dropped). The only reference we could find is \cite{Reinhardt2015}.}
or higher-order pushdown
languages~\cite{HagueLin2011,DBLP:conf/icalp/Zetzsche15}). Then given
a language $K\subseteq\Sigma^*$ from $\mathcal{C}$, a VAS language $L$ and
some $c\notin\Sigma$, the function $f_{cKc}$ is non-zero on $cLc$ if
and only if $K\cap L\ne\emptyset$.
Furthermore, the same problem becomes undecidable in general if
instead of VAS languages, we want to decide the problem for a language
class as simple as one-counter languages (OCL). Indeed, suppose $\mathcal{C}$
is a full trio for which intersection with OCL is undecidable (such as
the class of OCL). For a given $K\subseteq\Sigma^*$ from $\mathcal{C}$, an OCL
$L\subseteq\Sigma^*$, and some $c\notin\Sigma$, the set $c(Lc)^*$ is
effectively an OCL and $f_{cKc}$ is unbounded on $c(Lc)^*$ if and only
if $K\cap L\ne\emptyset$.
Let us prove \cref{non-overlapping-factors:decidable}. Fix a language
$K\subseteq\Sigma^*$ from $\mathcal{C}$. Our predicate $\pP[nof]$ is
one-dimensional and is satisfied on a set $L\subseteq\Sigma^*$ if and
only if $f_K$ is unbounded on $L$. Then clearly,
$\pP[nof](\factors{L})$ if and only if $f_K$ is unbounded on $L$. It
is immediate that \cref{axiom:1dim:upward,axiom:1dim:union} are
satisfied. Furthermore, \cref{axiom:1dim:concatenation} follows by
contraposition: If neither $\pP[nof](\factors{L_0})$ nor
$\pP[nof](\factors{L_1})$, then there are $B_0,B_1\in\mathbb{N}$ such that
$f_K$ is bounded by $B_i$ on $L_i$ for $i=0,1$. That implies that
$f_K$ is bounded by $B_0+B_1+1$ on $L_0L_1$. This rules out
$\pP[nof](\factors{L_0L_1})$, which establishes
\cref{axiom:1dim:concatenation}. The following uses standard arguments.
\begin{lem}\label{factors-unboundedness-regular}
Let $\mathcal{C}$ be a full trio with decidable emptiness problem. Given a
language $K$ from $\mathcal{C}$ and a regular language $R$, it is decidable
whether $f_K$ is unbounded on $R$. Moreover, if $f_K$ is bounded on
$R$, we can compute an upper bound.
\end{lem}
We can deduce \cref{non-overlapping-factors:decidable} from
\cref{factors-unboundedness-regular} as follows. Using
\cref{thm:approximation-unboundedness}, we compute the language $R$.
Then, $f_K$ is unbounded on $R$ iff it is unbounded on $L$. Moreover,
an upper bound for $f_K$ on $R$ is also an upper bound for $f_K$ on
$L$ because $L\subseteq R$.
\application{Counting automata} To illustrate how these results can be
used, we formulate an extension of
\cref{non-overlapping-factors:decidable} in terms of automata that can
count. Let $\mathcal{C}$ be a full trio. Intuitively, a $\mathcal{C}$-counting automaton can read a word
produced by a VAS and can use machines corresponding to $\mathcal{C}$ as oracles.
Just like the intersection of two languages that describe threads in a
concurrent system signals a safety
violation~\cite{BouajjaniEsparzaTouili2003,ChakiEtAl2006,long2012language},
a successful oracle call would signal a particular undesirable event.
In such a model, it would be undecidable whether any oracle
call can be successful if, for example, $\mathcal{C}$ is the class of
higher-order pushdown languages. However, we show that it
\emph{is} decidable whether such an automaton can make an unbounded
number of successful oracle calls and if not, compute an upper bound.
Hence, we can decide if the number of undesirable events is bounded
and, if so, provide a bound.
\newcommand{\oppush}[1]{\mathsf{push}(#1)}
\newcommand{\opcheck}[2]{\mathsf{check}(#1,#2)}
A \emph{$\mathcal{C}$-counting automaton} is a tuple
$\mathcal{A}=(Q,\Sigma,\Gamma,C,q_0,E,Q_f)$, where $Q$ is a finite set of
\emph{states}, $\Sigma$ is its \emph{input alphabet}, $\Gamma$ is its
\emph{(oracle) tape alphabet}, $C$ is a finite set of \emph{counters},
$q_0\in Q$ is its \emph{initial state}, $Q_f\subseteq Q$ is its
set of \emph{final states}, and $E\subseteq Q\times\Sigma^*\times
(\Omega\cup\{\varepsilon\})\times Q$ is a finite set of \emph{edges},
where $\Omega$ is a set of \emph{operations} of the following
form. First, we have an operation $\oppush{a}$ for each
$a\in\Gamma$, which appends $a$ to the oracle tape. Moreover,
we have $\opcheck{K}{c}$ for each $K\subseteq\Gamma^*$ from $\mathcal{C}$ and
each $c\in C$, which first checks whether the current tape content
belongs to $K$ and if so, increments the counter $c$. After the oracle query,
it empties the oracle tape, regardless of whether the oracle answers
positively or negatively.
\newcommand{\autstep}[1]{\xrightarrow{#1}}
\newcommand{\reset}[2]{#1[#2\mapsto 0]}
A \emph{configuration} of $\mathcal{A}$ is a triple $(q,u,\mu)$, where $q\in
Q$ is the current state, $u\in\Gamma^*$ is the oracle tape content, and $\mu\in\mathbb{N}^C$ describes
the counter values. For a label
$x\in\Sigma\cup\{\varepsilon\}$, and configurations $(q,u,\mu),
(q',u',\mu')$, we write $(q,u,\mu)\autstep{x}(q',u',\mu')$ if $(q',u',\mu')$ results from
$(q,u,\mu)$ as described above.
In the general case $w\in\Sigma^*$, $(q,u,\mu)\autstep{w}(q',u',\mu')$ has the obvious meaning.
$\mathcal{A}$ defines a function $\Sigma^*\to\mathbb{N}\cup\{\omega\}$:
\[ \mathcal{A}(w)=\sup\left\{\left.\inf_{c\in C} \mu(c) ~\right|~ \mu\in\mathbb{N}^C,~(q_0,\varepsilon,0)\xrightarrow{w} (q,u,\mu) ~\text{for some $q\in Q_f,~u\in\Gamma^*$}\right\}. \]
Hence, $\mathcal{A}$ is unbounded on $L$ if for every $k\in\mathbb{N}$, there is a
$w\in L$ and a run of $\mathcal{A}$ on $w$ in which for each $c\in C$, at
least $k$ of the oracle queries for $c$ are successful. The following
can be shown similarly to \cref{non-overlapping-factors:decidable},
but using a multi-dimensional unboundedness predicate.
\begin{thm}\label{counting-automata:decidable}
Let $\mathcal{C}$ be a full trio with decidable emptiness. Given a VAS
language $L$ and a $\mathcal{C}$-counting automaton $\mathcal{A}$, it is decidable
whether $\mathcal{A}$ is unbounded on $L$. Moreover, if $\mathcal{A}$ is bounded on
$L$, then one can compute an upper bound $B\in\mathbb{N}$ for $\mathcal{A}$ on $L$.
\end{thm}
\application{Factor inclusion} As a last example, we show how our
results can be used to decide inclusion problems. Specifically, given
a VAS language $L\subseteq\Sigma^*$, it is decidable whether
$\Sigma^*\subseteq\factors{L}$. In fact, we show a more general result:
\begin{thm}\label{factor-universality:ext:decidable}
If $\mathcal{C}$ is a full trio with decidable emptiness problem, then given
a VAS language $L$ and a language $K$ from $\mathcal{C}$, it is decidable
whether $K^*\subseteq\factors{L}$.
\end{thm}
Here, $\Sigma^*\subseteq\factors{L}$ is the special case where $K=\Sigma$.
Recall that it is undecidable whether
$L=\Sigma^*$ for VAS languages and for
one-counter languages (OCL) (e.g.~\cite[Lemma
6.1]{DBLP:journals/dmtcs/CzerwinskiMRZZ17}).
Similar to \cref{non-overlapping-factors:decidable}, deciding whether
$\Sigma^*\subseteq\factors{L}$ is already undecidable for OCL
$L$: For a given OCL $L\subseteq\Sigma^*$, pick a letter
$c\notin\Sigma$ and note that
$L'=c(Lc)^*\subseteq(\Sigma\cup\{c\})^*$ is effectively an OCL and
$(\Sigma\cup\{c\})^*\subseteq\factors{L'}$ if and only if
$L=\Sigma^*$.
Also, under the assumptions of the
\lcnamecref{factor-universality:ext:decidable}, it is undecidable
whether $K\subseteq\factors{L}$: If $L\subseteq\Sigma^*$ and
$c\notin\Sigma$, then $c\Sigma^*c\subseteq\factors{cLc}$ if and only
if $L=\Sigma^*$ (every full trio contains the regular set
$c\Sigma^*c$).
Let us see how \cref{factor-universality:ext:decidable} follows from
\cref{thm:approximation-unboundedness}. Fix a language $K$ from $\mathcal{C}$. We use the
$1$-dim. predicate $\pP[fu]$, which is satisfied on a set
$L\subseteq\Sigma^*$ if and only if $K^*\subseteq\factors{L}$. Of
course, \cref{axiom:upward} holds by definition.
\Cref{axiom:concatenation} follows by contraposition: Suppose that
$K^*\subseteq\factors{L_1L_2}$ and $K^*\not\subseteq\factors{L_1}$
with some $u\in K^*\setminus \factors{L_1}$. Let $v\in K^*$ be
arbitrary. Then, since $K^*\subseteq\factors{L_1L_2}$, we have
$uv\in\factors{L_1L_2}$. This means that there are $x,y\in\Sigma^*$ with
$xuvy\in L_1L_2$. Hence, we have $xuvy=w_1w_2$ for some $w_i\in L_i$
for $i=1,2$. Then $|w_1|<|xu|$, because otherwise $u$ would belong to
$\factors{L_1}$. Therefore, $v$ is a factor of $w_2$ and thus
$v\in\factors{L_2}$. Hence, $K^*\subseteq\factors{L_2}$. Of course, a
similar argument works if $K^*\subseteq\factors{L_1L_2}$ and
$K^*\not\subseteq\factors{L_2}$. This proves
\cref{axiom:concatenation}. \Cref{axiom:union} can be shown the same
way. Thus, by \cref{thm:approximation-unboundedness}, it
suffices to decide whether $K^*\subseteq\factors{R}$ for regular
$R$, which follows from $\mathcal{C}$ being a full trio and having decidable
emptiness (see \cref{factor-universality-regular}).
\section{Separability by bounded regular languages}\label{sec:separability}
This section contains the omitted proofs concerning separability by bounded regular languages.
\begin{proof}[Proof of \cref{separability-commutative}]
First, if $L_0$ and $L_1$ are separable by a regular $R\subseteq
w_1^*\cdots w_n^*$, then the set
\[ S=\{(x_1,\ldots,x_n) \mid w_1^{x_1}\cdots w_n^{x_n}\in R\} \]
is recognizable. This is a classical result by Ginsburg and
Spanier~\cite{GinsburgSpanier1966a}. Moreover, $S$ clearly separates
$U_0$ from $U_1$.
Conversely, if $S\subseteq \mathbb{N}^n$ is recognizable and separates $U_0$
from $U_1$, then the set
\[ R=\{w_1^{x_1}\cdots w_n^{x_n} \mid (x_1,\ldots,x_n)\in S \} \]
is
regular. Let us show that it separates $L_0$ and $L_1$. If $w\in
L_0$, then we can write $w=w_1^{x_1}\cdots w_n^{x_n}$, which implies
$(x_1,\ldots,x_n)\in U_0$. Therefore, we have $(x_1,\ldots,x_n)\in S$
and thus $w=w_1^{x_1}\cdots w_n^{x_n}\in R$. Thus, $L_0\subseteq
R$. Now suppose $w\in R$. Then we can write $w=w_1^{x_1}\cdots
w_n^{x_n}$ with $(x_1,\ldots,x_n)\in S$. That implies
$(x_1,\ldots,x_n)\notin U_1$ and hence $w_1^{x_1}\cdots
w_n^{x_n}\notin L_1$. Hence, $R\cap L_1=\emptyset$.
\end{proof}
In the proof, we also use the following fact:
\begin{prop}
If $L\subseteq w_1^*\cdots w_n^*$ is a VAS language, then
the set \[ U=\{(x_1,\ldots,x_n)\in\mathbb{N}^n \mid w_1^{x_1}\cdots w_n^{x_n}\in L\} \]
is effectively a section of a VAS reachability set.
\end{prop}
\begin{proof}
First recall the notion of a section.
For a subset $I\subseteq [1,n]$, let $\pi_I\colon \mathbb{N}^n\to\mathbb{N}^{|I|}$ be the projection onto the coordinates in $I$.
Then, every set of the form $\pi_{[1,n]\setminus I}(S\cap \pi_{I}^{-1}(x))$ for some
$I\subseteq[1,n]$ and $x\in\mathbb{N}^{|I|}$ is called a \emph{section} of $S\subseteq\mathbb{N}^n$.
Intuitively, we fix a vector $x \in \mathbb{N}^{|I|}$ on the coordinates from $I$ and take into the section all
vectors $y \in \mathbb{N}^{n-|I|}$ that, together with $x$, form an $n$-dimensional vector from $S$.
Assume that $L$ is the language of a $d$-dimensional VAS $V$.
In order to show that $U$ is a section of a VAS reachability set, we construct another VAS $V'$
in the following way. The VAS $V'$ simulates $V$ on the first $d$ coordinates and has $n$ additional coordinates,
on which it counts the numbers of occurrences of the words $w_1, \ldots, w_n$. It is easy to see that a VAS can indeed count
such occurrences by keeping some additional finite information, such as the suffix of the current run that has not yet been
counted towards any $w_i$, and which $w_i$ has most recently appeared.
The section of the reachability set of $V'$ that keeps only these $n$ counting coordinates is exactly the set $U$.
\end{proof}
\section{Proof of the main result}\label{sec:proof}
We prove our decidability result using the KLMST decomposition. More
specifically, we show a consequence that might be interesting in its
own right.
\begin{thm}\label{thm:approximation}
Given a VAS language $L\subseteq\Sigma^*$, one can compute $m, k\in\mathbb{N}$ and regular languages
$R_{i,j}\subseteq\Sigma^*$, for $i \in [1,m]$, $j\in [1,k]$ so that
\begin{align}
L\subseteq\bigcup_{i=1}^m R_{i,1}\cdots R_{i,k} && \text{and} && R_{i,1}\times \cdots \times R_{i,k}\subseteq \factors[k]{L}~\text{for every $i\in[1,m]$}.\label{approx-rel}
\end{align}
\end{thm}
We first show how to derive \cref{thm:approximation-unboundedness}
from \cref{thm:approximation} and then proceed with the proof of
\cref{thm:approximation}, which is much more technically involved.
\subparagraph*{Proof of \cref{thm:approximation-unboundedness}}
Suppose \cref{thm:approximation} holds. Then, given a VAS language
$L$, we compute $m, k\in\mathbb{N}$ and the regular languages $R_{i,j}$ for
$i\in[1, m], j\in[1, k]$. We choose $R=\bigcup_{i=1}^m R_{i,1}\cdots
R_{i,k}$. Then we have $L\subseteq R$. Let us show that
$\pP(\factors[n]{L})$ if and only if $\pP(\factors[n]{R})$. If
$\pP(\factors[n]{L})$, then clearly $\pP(\factors[n]{R})$, because
$L\subseteq R$ implies $\factors[n]{L}\subseteq \factors[n]{R}$ and by
\cref{axiom:upward}, this implies $\pP(\factors[n]{R})$. Conversely,
suppose $\pP(\factors[n]{R})$. Then by \cref{axiom:union}, there is an
$i\in[1,m]$ such that $\pP(\factors[n]{R_i})$, where
$R_i=R_{i,1}\cdots R_{i,k}$. According to \cref{axiom:concatenation},
we can write $n=n_1+\cdots+n_k$ such that $\pP$ holds for
$S:=\factors[n_1]{R_{i,1}}\times \cdots\times\factors[n_k]{R_{i,k}}$.
Note that by the choice of $R_{i,j}$, we have $R_{i,1}\times\cdots\times R_{i,k}\subseteq
\factors[k]{L}$ and therefore $S\subseteq
\factors[n]{L}$. This implies $\pP(\factors[n]{L})$ by \cref{axiom:upward}.
\subparagraph*{Proof of \cref{thm:approximation}}
The remainder of this section is devoted to the proof of \cref{thm:approximation}.
Like the method for computing downward closures by Habermehl, Meyer, and
Wimmel~\cite{HabermehlMeyerWimmel2010}, the construction of the sets $R_{i,j}$ is
based on Lambert's proof~\cite{DBLP:journals/tcs/Lambert92} of the decidability of
the reachability problem for Petri nets. In order to be compatible with Lambert's
exposition, we phrase our proof in terms of Petri nets instead of vector addition
systems.
A \emph{Petri net} $N = (P, T, \textsc{Pre}, \textsc{Post})$ consists of a finite set $P$ of \emph{places},
a finite set $T$ of \emph{transitions} and two mappings $\textsc{Pre}, \textsc{Post}\colon T \to \mathbb{N}^P$.
Configurations of a Petri net are elements of $\mathbb{N}^P$, called \emph{markings}.
For two markings $M, M'$ we say that $M'$ \emph{dominates} $M$, denoted $M\leq M'$, if for every place $p \in P$,
we have $M[p] \leq M'[p]$. The \emph{effect} of a transition $t \in T$
is $\textsc{Post}(t) - \textsc{Pre}(t) \in \mathbb{Z}^P$, denoted $\Delta(t)$. If a marking $M$ dominates $\textsc{Pre}(t)$ for a transition $t \in T$, then
$t$ is \emph{fireable} in $M$ and the result of firing $t$ in marking $M$ is $M' = M + \Delta(t)$;
we write $M \trans{t} M'$. We extend the notions of fireability and firing to sequences
of transitions in the natural way and also write $M \trans{w} M'$ for $w \in T^*$. The \emph{effect} of $w \in T^*$
is the sum of the effects of its letters, $\Delta(w) = \sum_{i = 1}^{|w|} \Delta(w[i])$.
For a Petri net $N = (P, T, \textsc{Pre}, \textsc{Post})$ and markings $M_0,M_1$, we define the language
$L(N, M_0, M_1) = \{w \in T^* \mid M_0 \trans{w} M_1\}$.
Hence, $L(N,M_0,M_1)$ is the set of transition sequences leading from
$M_0$ to $M_1$. Moreover, let $L(N, M_0) = \bigcup_{M \in \mathbb{N}^P} L(N,
M_0, M)$, i.e.\ the set of all transition sequences fireable in $M_0$. A
\emph{labeled Petri net} is a Petri net $N=(P,T,\textsc{Pre},\textsc{Post})$ together
with an \emph{initial marking} $M_I$, a \emph{final marking} $M_F$,
and a \emph{labeling}, i.e.\ a homomorphism $h\colon T^*\to\Sigma^*$.
The language \emph{recognized by} the labeled Petri net is then defined as
$L_h(N, M_I, M_F) = h(L(N, M_I, M_F))$.
It is folklore (and easy to see) that a language is a VAS language if and only
if it is recognized by a labeled Petri net (and the translation is
effective). Thus, it suffices to show \cref{thm:approximation} for languages of
the form $L=h(L(N,M_I,M_F))$. Moreover, it is already enough to prove
\cref{thm:approximation} for languages of the form $L(N,M_I,M_F)$. Indeed,
observe that if we have constructed $R_{i,j}$ so that \cref{approx-rel} is
satisfied, then with $S_{i,j}=h(R_{i,j})$, we have
$h(L)\subseteq\bigcup_{i=1}^m S_{i,1}\cdots S_{i,k}$
and
$S_{i,1}\times\cdots \times S_{i,k}\subseteq \factors[k]{h(L)}$ for every $i\in[1,m]$.
Thus from now on, we assume $L=L(N,M_I,M_F)$ for a fixed Petri net $N=(P,T,\textsc{Pre},\textsc{Post})$.
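To make the firing semantics and the language $L(N,M_0,M_1)$ concrete, here is a minimal Python sketch on a toy two-place net (the net and all names are ours, not from the paper):

```python
from itertools import product

# A toy Petri net with places P = {"p", "q"} and transitions "a", "b".
PLACES = ["p", "q"]
PRE  = {"a": {"p": 1, "q": 0}, "b": {"p": 0, "q": 1}}
POST = {"a": {"p": 0, "q": 1}, "b": {"p": 1, "q": 0}}

def fireable(marking, t):
    # t is fireable in M iff M dominates Pre(t).
    return all(marking[p] >= PRE[t][p] for p in PLACES)

def fire(marking, t):
    # M' = M + Delta(t), where Delta(t) = Post(t) - Pre(t).
    return {p: marking[p] - PRE[t][p] + POST[t][p] for p in PLACES}

def accepts(w, m0, m1):
    # Membership of w in L(N, M0, M1): fire w letter by letter,
    # keeping every intermediate marking non-negative.
    m = dict(m0)
    for t in w:
        if not fireable(m, t):
            return False
        m = fire(m, t)
    return m == m1

m0 = {"p": 1, "q": 0}
# All words of length at most 4 in L(N, m0, m0):
words = [w for n in range(5) for w in map("".join, product("ab", repeat=n))
         if accepts(w, m0, m0)]
```

Here `words` comes out as `["", "ab", "abab"]`: the net just shuttles its single token between the two places.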
\subparagraph*{The KLMST decomposition}
Lambert's decision procedure~\cite{DBLP:journals/tcs/Lambert92} is a refinement of
the previous ones by Mayr~\cite{DBLP:conf/stoc/Mayr81} and
Kosaraju~\cite{DBLP:conf/stoc/Kosaraju82}. Later, Leroux and
Schmitz~\cite{leroux2015reachability} recast it again as an algorithm using WQO
ideals and dubbed the procedure \emph{KLMST
decomposition} after its
inventors~\cite{DBLP:conf/stoc/Kosaraju82,DBLP:journals/tcs/Lambert92,DBLP:conf/stoc/Mayr81,sacerdote1977decidability}.
The idea is the following. We disregard for a moment that a transition sequence
has to keep all intermediate markings non-negative and only look for a sequence
that may go negative on the way. It is a standard technique to express the
existence of such a sequence as a linear equation system $Ax=b$. As expected,
solvability of this system is not sufficient for the existence of an actual
run. However, if we are in the situation that we can find (a)~runs that pump up
all coordinates arbitrarily high and also (b)~counterpart runs that remove those
excess tokens again, then solvability of the equation system is also sufficient:
We first increase all coordinates high enough, then we execute our
positivity-ignoring sequence, and then we pump down again. Roughly speaking, the
achievement of the KLMST decomposition is to put us in the latter situation, which
we informally call \emph{perfect circumstances}.
To this end, one uses a data structure, in Lambert's version called \emph{marked
graph-transition sequence (MGTS)}, which restricts the possible runs of the
Petri net. If the MGTS satisfies a condition that realizes the above perfect
circumstances, then it is called \emph{perfect}. Unsurprisingly, not every MGTS is
perfect. However, part of the procedure is a decomposition of an imperfect MGTS
into finitely many MGTS that are less imperfect. Moreover, this
decomposition terminates in a finite set of perfect MGTS. Thus, applied to an
MGTS whose only restriction is to start in $M_I$ and end in $M_F$, the
decomposition yields finitely many perfect MGTS $\mathcal{N}_1,\ldots,\mathcal{N}_n$ such that
the runs from $M_I$ to $M_F$ are precisely those conforming to at least one
of these MGTS. Moreover, checking whether $\mathcal{N}_i$ admits a run amounts to
solving a linear equation system.
\subparagraph*{Basic notions} Let us introduce some notions used in Lambert's proof.
We extend the set of configurations $\mathbb{N}^d$ to $\mathbb{N}_\omega^d$, where $\mathbb{N}_\omega = \mathbb{N} \cup \{\omega\}$ and $\omega$ is
the first infinite ordinal number, representing infinity. We extend the notion of transition firing to $\mathbb{N}_\omega^d$ naturally, by
defining $\omega - k = \omega = \omega + k$ for every $k \in \mathbb{N}$. For $u, v \in \mathbb{N}_\omega^d$ we write $u \leq_\omega v$ if
for every coordinate $i$ we have $u[i] = v[i]$ or $v[i] = \omega$. Intuitively, reaching a configuration with $\omega$ at some places
means that it is possible to reach configurations with values $\omega$ substituted by arbitrarily high values.
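As a small illustration, the $\omega$-arithmetic and the relation $\leq_\omega$ can be sketched in a few lines of Python (`OMEGA` is just our stand-in for the symbol $\omega$):

```python
OMEGA = float("inf")  # stand-in for the symbol omega

def add(value, k):
    # omega + k = omega and omega - k = omega for every k in N;
    # on finite values this is ordinary addition.
    return value if value == OMEGA else value + k

def leq_omega(u, v):
    # u <=_omega v iff in every coordinate u[i] = v[i] or v[i] = omega.
    return all(ui == vi or vi == OMEGA for ui, vi in zip(u, v))
```

For instance, `leq_omega((3, 5), (3, OMEGA))` holds, while `leq_omega((3, 5), (2, OMEGA))` does not.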
A key notion in~\cite{DBLP:journals/tcs/Lambert92} is that of MGTS, which formulate restrictions on paths in
Petri nets. A \emph{marked graph-transition sequence (MGTS)} for our Petri net
$N=(P,T,\textsc{Pre},\textsc{Post})$ is a finite sequence
$
C_0, t_1, C_1, \ldots, C_{n-1}, t_n, C_n,
$
where $t_i$ are transitions from $T$ and $C_i$ are precovering graphs, which are defined next.
A \emph{precovering graph} is a quadruple $C = (G, m, m^\textup{init}, m^\textup{fin})$, where $G=(V,E,h)$ is a finite, strongly connected, directed graph with $V\subseteq\mathbb{N}_\omega^P$ and labeling $h\colon E \to T$,
and where $m \in V$ is a \emph{distinguished} vector, $m^\textup{init} \in \mathbb{N}_\omega^P$ an \emph{initial} vector, and $m^\textup{fin} \in \mathbb{N}_\omega^P$ a \emph{final} vector.
A precovering graph has to meet two conditions:
First, for every edge $e = (m_1, m_2) \in E$, there is an $m_3\in\mathbb{N}_\omega^P$ with $m_1 \trans{h(e)} m_3 \leq_\omega m_2$.
Second, we have
$m^\textup{init}, m^\textup{fin} \leq_\omega m$.
Additionally, we impose on MGTS the restriction that the initial vector of $C_0$
equals $M_I$ and the final vector of $C_n$ equals $M_F$.
\subparagraph*{Languages of MGTS}
Each precovering graph can be treated as a finite automaton. For $m_1,m_2\in V$,
we denote by $L(C,m_1,m_2)$ the set of all $w\in T^*$ read on a path from
$m_1$ to $m_2$. Moreover, let $L(C) = L(C, m, m)$. MGTS have associated
languages as well. Let $\mathcal{N} = C_0, t_1, C_1, \ldots, C_{n-1}, t_n, C_n$ be an
MGTS of a Petri net $N$, where $C_i = (G_i, m_i, m_i^\textup{init}, m_i^\textup{fin})$. Its
language $L(\mathcal{N})$ is the set of all words of the form $w = w_0 t_1 w_1 \cdots
w_{n-1} t_n w_n \in T^*$ where:
(i)~$w_i \in L(C_i)$ for each $i\in[0,n]$,
and
(ii)~there exist markings $\mu_0, \mu'_0, \mu_1, \mu'_1, \ldots, \mu_n, \mu'_n \in \mathbb{N}^P$ such
that $\mu_i \leq_\omega m^\textup{init}_i$ and $\mu'_i \leq_\omega m^\textup{fin}_i$ and
$\mu_0 \trans{w_0} \mu'_0 \trans{t_1} \mu_1 \trans{w_1} \ldots \trans{w_{n-1}} \mu'_{n-1} \trans{t_n} \mu_n \trans{w_n} \mu'_n$.
Notice that by (ii) and the restriction that $m^\textup{init}_0 = M_I$ and $m^\textup{fin}_n = M_F$,
we have $L(\mathcal{N})\subseteq L(N,M_I,M_F)$ for any MGTS $\mathcal{N}$.
Hence, roughly speaking, $L(\mathcal{N})$ is the set of runs that contain the transitions
$t_1,\ldots,t_n$ and in which, additionally, the markings before and after firing these
transitions are prescribed on some places: this is exactly what the restrictions
$\mu_i \leq_\omega m^\textup{init}_i$, $\mu'_i \leq_\omega m^\textup{fin}_i$ impose.
Notice that for the moment, we do not expect the values $\omega$ occurring at $m_i,
m_i^\textup{init}, m_i^\textup{fin}$ to impose any restriction on the form of accepted runs. The meaning
of the $\omega$ values is reflected in the notion of \emph{perfect} MGTS described
later.
As an immediate consequence of the definition, we observe that for every
MGTS $\mathcal{N} = C_0, t_1, C_1, \ldots, C_{n-1}, t_n, C_n$
we have
\begin{equation} L(\mathcal{N}) \subseteq L(C_0) \cdot \{t_1\} \cdot L(C_1) \cdots L(C_{n-1}) \cdot \{t_n\} \cdot L(C_n). \label{eq:overapproximation}\end{equation}
\subparagraph{Perfect MGTS} As announced above, Lambert calls MGTS
with a particular property
\emph{perfect}~\cite{DBLP:journals/tcs/Lambert92}. Since the precise
definition is involved and we do not need all the details, it is
enough for us to mention a selection of properties of perfect MGTS.
Intuitively, in a perfect MGTS, the value $\omega$ on place $p$ in
$m_i$ means that inside of the component $C_i$, the token count in
place $p$ can be made arbitrarily high.
In~\cite{DBLP:journals/tcs/Lambert92} it is shown (Theorem~4.2 on
page~94, together with the preceding definition) that
\begin{thm}[\cite{DBLP:journals/tcs/Lambert92}]\label{thm:language-union}
For a Petri net $N$ one can compute finitely many perfect MGTS $\mathcal{N}_1, \ldots, \mathcal{N}_m$
such that $L(N,M_I,M_F) = \bigcup_{i=1}^m L(\mathcal{N}_i)$.
\end{thm}
Moreover, by Corollary~4.1 in~\cite{DBLP:journals/tcs/Lambert92} (page~93), given a perfect
MGTS $\mathcal{N}$, it is decidable whether $L(\mathcal{N})\ne\emptyset$. Therefore, our task
reduces to the following. We have a perfect MGTS $\mathcal{N}$ with $L(\mathcal{N})\ne\emptyset$
and want to compute regular languages $R_1,\ldots,R_k$ such that
$L(\mathcal{N})\subseteq R_1\cdots R_k$ and $R_1\times\cdots\times R_k\subseteq \factors[k]{L(\mathcal{N})}$.
(Note that if the MGTS have different lengths, we can always fill up with $\{\varepsilon\}$.)
We choose $R_1,\ldots,R_k$ to be the sequence $L(C_0), \{t_1\},
L(C_1),\ldots,L(C_{n-1}),\{t_n\},L(C_n)$, so that $k=2n+1$. Then \cref{eq:overapproximation} tells us
that this achieves $L(\mathcal{N})\subseteq R_1\cdots R_k$ and all that remains to be
shown is
\begin{equation}
L(C_0)\times\{t_1\}\times L(C_1) \times \cdots \times L(C_{n-1}) \times\{t_n\} \times L(C_n)\subseteq \factors[2n+1]{L(\mathcal{N})}.\label{eq:inclusion-mgts}
\end{equation}
\subparagraph*{Constructing runs}
In order to show \cref{eq:inclusion-mgts}, we employ a simplified version of
Lambert's iteration lemma, which involves covering sequences.
Let $C$ be a precovering graph for a Petri net $N = (P, T, \textsc{Pre}, \textsc{Post})$
with a distinguished vector $m \in \mathbb{N}_\omega^P$ and initial vector $m^{\textup{init}} \in \mathbb{N}_\omega^P$.
A sequence $u \in L(C) \cap L(N, m^{\textup{init}})$ is called a \emph{covering sequence for $C$}
if for every place $p \in P$ we have either 1) $m^{\textup{init}}[p] = \omega$, or 2) $m[p] = m^{\textup{init}}[p]$ and $\Delta(u)[p] = 0$,
or 3) $m[p] = \omega$ and $\Delta(u)[p] > 0$.
This corresponds intuitively to the three possible cases for the set of runs in $N$ crossing the component $C$ in a place $p$:
1)~runs that can have an arbitrarily high value on $p$ when entering $C$,
2)~runs where, when entering $C$, $p$ has a fixed value, and the tokens in $p$ cannot be pumped inside of $C$,
or 3)~runs where, when entering $C$, $p$ has a fixed value, but it can be pumped up inside of $C$.
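The three conditions can be checked mechanically place by place; a minimal sketch (Python; `is_covering` is our hypothetical helper, `OMEGA` stands in for $\omega$, and membership of $u$ in $L(C)$ as well as fireability from $m^{\textup{init}}$ are assumed to be checked separately):

```python
OMEGA = float("inf")  # stand-in for omega

def is_covering(m, m_init, effect):
    # Check the covering-sequence conditions place by place, where
    # effect is Delta(u) for the candidate sequence u.
    for p in range(len(m)):
        if m_init[p] == OMEGA:                    # case 1
            continue
        if m[p] == m_init[p] and effect[p] == 0:  # case 2
            continue
        if m[p] == OMEGA and effect[p] > 0:       # case 3
            continue
        return False
    return True
```

For example, `is_covering((OMEGA, 2), (1, 2), (3, 0))` holds: place 0 is pumped (case 3) and place 1 is kept fixed (case 2).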
Let $\mathcal{N} = C_0, t_1, C_1, \ldots, C_{n-1}, t_n, C_n$ be an MGTS, where each $C_i$ is a precovering graph
with distinguished vector $m_i$ and initial vector $m_i^\textup{init}$. If $\mathcal{N}$ is a perfect MGTS,
then according to the definition from~\cite{DBLP:journals/tcs/Lambert92} (page~92),
for every $i \in [0,n]$ there exists a covering sequence $u_i \in L(C_i) \cap L(N, m_i^\textup{init})$.
This corresponds to the mentioned intuition that $\omega$ values imply arbitrarily high values.
As a direct consequence of Lemma 4.1 in~\cite{DBLP:journals/tcs/Lambert92} (page~92), Lambert's iteration lemma, we obtain:
\begin{lem}\label{lem:iteration}
Let $\mathcal{N} = C_0, t_1, C_1, \ldots, C_{n-1}, t_n, C_n$ be a perfect MGTS and let
$x_i$ be a covering sequence for $C_i$ for $i\in[0,n]$. Then there exist
words $y_i \in T^*$ for $i \in [0,n]$ such that
$x_0 y_0 \cdot t_1 \cdot x_1 y_1 \cdots x_{n-1} y_{n-1} \cdot t_n \cdot x_n y_n \in L(\mathcal{N})$.
\end{lem}
\cref{lem:iteration} is obtained from Lemma 4.1
in~\cite{DBLP:journals/tcs/Lambert92} as follows. The word $u_i$ there is our
$x_i$ and $v_i$ there is an arbitrary covering sequence of $C_i$ reversed. Then,
our $y_i$ is set to $u_i^{k-1}\beta_i (w_i)^k (v_i)^k$ for some $k\ge k_0$.
The only technical part of the proof of~\cref{thm:approximation} is the following lemma.
\begin{lem}\label{lem:covering-sequences}
Let $C$ be a precovering graph for a Petri net $N = (P, T, \textsc{Pre}, \textsc{Post})$
with a distinguished vector $m \in \mathbb{N}_\omega^P$ and initial vector $m^\textup{init} \in \mathbb{N}_\omega^P$,
and let $s \in L(C) \cap L(N, m^\textup{init})$ be a covering sequence.
Then for every $v \in L(C)$ there is a covering sequence for $C$ of the form $u v$,
for some $u \in T^*$.
\end{lem}
\begin{proof}
Intuitively, we do the following. The existence of a covering
sequence means that one can obtain arbitrarily high values on places $p$ where
$m[p] = \omega$. Thus, in order to construct a covering sequence containing $v$
as a suffix, we first go very high on the $\omega$ places, so high that
adding $v$ as a suffix later will still result in a sequence with positive effect.
Let us make this precise. Executing the sequence $v$ might have a negative
effect in a place $p\in P$ with $m[p]=\omega$. Let $k\in\mathbb{N}$ be strictly larger
than the absolute value of any negative effect a prefix of $v$ can have in any coordinate. Note that
since $s$ is a covering sequence, $s^k$ is a covering sequence as well. We claim
that $s^kv$ is also a covering sequence. It is contained in $L(C)$ and fireable
at $m^{\textup{init}}$. Moreover, by the choice of $k$, the sequence $s^kv$ has a strictly positive
effect on each $p$ with $m[p]=\omega$. If $m[p]<\omega$, then
$\Delta(s)[p]=0=\Delta(v)[p]$ and hence $\Delta(s^kv)[p]=0$.
\end{proof}
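The construction in this proof can be sketched numerically: compute how far any prefix of $v$ can drop, and prepend that many copies of $s$, plus one, so that the total effect on the pumped coordinates stays strictly positive (a toy sketch on effect vectors; all names are ours):

```python
def prefix_drop(effects):
    # Largest amount by which any prefix of the effect sequence goes
    # negative in any coordinate (0 if no prefix goes negative).
    worst, running = 0, [0] * (len(effects[0]) if effects else 0)
    for step in effects:
        running = [r + s for r, s in zip(running, step)]
        worst = max(worst, max((-x for x in running), default=0))
    return worst

def pump_then_append(s_effects, v_effects):
    # Effect sequence of s^k v for k = prefix_drop(v) + 1: prepending
    # k copies of s keeps all prefixes non-negative and leaves a
    # strictly positive total effect on the pumped coordinate.
    k = prefix_drop(v_effects) + 1
    return s_effects * k + v_effects

# v drops to -2 in coordinate 0 before recovering; s pumps coordinate 0.
s = [(1, 0)]
v = [(-1, 0), (-1, 0), (1, 0)]
combined = pump_then_append(s, v)
total = [sum(c) for c in zip(*combined)]
```

Here `prefix_drop(v)` is 2, so three copies of `s` are prepended and the total effect `total` is `[2, 0]`.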
Using \cref{lem:iteration} and \cref{lem:covering-sequences}, it is now easy to
show \cref{eq:inclusion-mgts}. Given words $v_i\in T^*$ with $v_i\in L(C_i)$
for $i\in[0,n]$, we use \cref{lem:covering-sequences} to choose $x_i\in T^*$ such
that $x_iv_i$ is a covering sequence of $C_i$ for $i\in[0,n]$. By \cref{lem:iteration},
we can find $w_0,\ldots,w_n$ so that
\[ x_0v_0w_0\cdot t_1\cdot x_1v_1w_1 \cdots x_{n-1}v_{n-1}w_{n-1} \cdot t_n\cdot x_nv_nw_n \in L(\mathcal{N}), \]
and thus $(v_0,t_1,v_1,\ldots,v_{n-1},t_n,v_n)\in\factors[2n+1]{L(\mathcal{N})}$, which proves \cref{eq:inclusion-mgts}.
\subparagraph*{Acknowledgements} We
are indebted to Mohamed Faouzi Atig for suggesting to study
separability by bounded languages, which was the starting point for
this work. Furthermore, we would like to thank S{\l}awomir Lasota and
Sylvain Schmitz for important discussions. Finally, we are happy to
acknowledge that this collaboration started at the Gregynog
research workshop organized by Ranko Lazi\'{c} and Patrick Totzke.
\appendix
\section{Separability by bounded regular languages}\label{sec:separability}
This section contains the omitted proofs concerning separability by bounded regular languages.
\begin{proof}[Proof of \cref{separability-commutative}]
First, if $L_0$ and $L_1$ are separable by a regular $R\subseteq
w_1^*\cdots w_n^*$, then the set
\[ S=\{(x_1,\ldots,x_n) \mid w_1^{x_1}\cdots w_n^{x_n}\in R\} \]
is recognizable. This is a classical result by Ginsburg and
Spanier~\cite{GinsburgSpanier1966a}. Moreover, $S$ clearly separates
$U_0$ from $U_1$.
Conversely, if $S\subseteq \mathbb{N}^n$ is recognizable and separates $U_0$
from $U_1$, then the set
\[ R=\{w_1^{x_1}\cdots w_n^{x_n} \mid (x_1,\ldots,x_n)\in S \} \]
is
regular. Let us show that it separates $L_0$ and $L_1$. If $w\in
L_0$, then we can write $w=w_1^{x_1}\cdots w_n^{x_n}$, which implies
$(x_1,\ldots,x_n)\in U_0$. Therefore, we have $(x_1,\ldots,x_n)\in S$
and thus $w=w_1^{x_1}\cdots w_n^{x_n}\in R$. Thus, $L_0\subseteq
R$. Now suppose $w\in R$. Then we can write $w=w_1^{x_1}\cdots
w_n^{x_n}$ with $(x_1,\ldots,x_n)\in S$. That implies
$(x_1,\ldots,x_n)\notin U_1$ and hence $w_1^{x_1}\cdots
w_n^{x_n}\notin L_1$. Hence, $R\cap L_1=\emptyset$.
\end{proof}
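The separator $R$ built in this proof can be tested by brute force on small instances (a sketch assuming $S$ is given as an explicit finite set, although in the lemma it is only recognizable):

```python
def in_R(w, ws, S):
    # Membership in R = { w_1^{x_1} ... w_n^{x_n} : (x_1,...,x_n) in S },
    # trying all decompositions of w by brute force.
    def go(rest, i, xs):
        if i == len(ws):
            return rest == "" and xs in S
        x, cur = 0, rest
        while True:
            if go(cur, i + 1, xs + (x,)):
                return True
            if ws[i] and cur.startswith(ws[i]):
                cur, x = cur[len(ws[i]):], x + 1
            else:
                return False
    return go(w, 0, ())
```

For example, `in_R("abb", ("a", "b"), {(1, 2), (0, 0)})` holds, since $abb = a^1b^2$ and $(1,2)$ lies in $S$.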
In the proof, we also use the following fact:
\begin{prop}
If $L\subseteq w_1^*\cdots w_n^*$ is a VAS language, then
the set \[ U=\{(x_1,\ldots,x_n)\in\mathbb{N}^n \mid w_1^{x_1}\cdots w_n^{x_n}\in L\} \]
is effectively a section of a VAS reachability set.
\end{prop}
\begin{proof}
First recall the notion of a section.
For a subset $I\subseteq [1,n]$, let $\pi_I\colon \mathbb{N}^n\to\mathbb{N}^{|I|}$ be the projection onto the coordinates in $I$.
Then, every set of the form $\pi_{[1,n]\setminus I}(S\cap \pi_{I}^{-1}(x))$ for some
$I\subseteq[1,n]$ and $x\in\mathbb{N}^{|I|}$ is called a \emph{section} of $S\subseteq\mathbb{N}^n$.
Intuitively, we fix a vector $x \in \mathbb{N}^{|I|}$ on coordinates from $I$ and take into the section all the
vectors $y \in \mathbb{N}^{n-|I|}$, which together with $x$ form an $n$-dimensional vector from $S$.
Assume that $L$ is the language of a $d$-dimensional VAS $V$.
In order to show that $U$ is a section of a VAS reachability set, we construct another VAS $V'$
as follows. The VAS $V'$ simulates $V$ on $d$ coordinates and has $n$ additional coordinates,
on which it counts the number of occurrences of the words $w_1, \ldots, w_n$. It is easy to see that a VAS can indeed count
such occurrences by keeping some additional finite information, such as the suffix of the current run that has not yet been
counted towards any $w_i$, and which $w_i$ appeared most recently.
The section of the reachability set of $V'$ that keeps only these $n$ counting coordinates is exactly the set $U$.
\end{proof}
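For a finite toy set, the notion of a section can be sketched directly (Python; in the proposition, $S$ would be the reachability set of $V'$):

```python
def section(S, n, I, x):
    # The section pi_{[1,n] minus I}(S intersect pi_I^{-1}(x)):
    # fix the coordinates in I to the values x, project them away.
    rest = [i for i in range(n) if i not in I]
    return {tuple(v[i] for i in rest)
            for v in S
            if tuple(v[i] for i in I) == tuple(x)}

S = {(0, 1, 2), (0, 5, 7), (1, 1, 1)}
# Fix coordinate 0 to the value 0; keep the two remaining coordinates.
sec = section(S, 3, [0], (0,))
```

Here `sec` is `{(1, 2), (5, 7)}`: the two vectors whose first coordinate is 0, with that coordinate projected away.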
\section{Downward closures}
\begin{proof}[Proof of \cref{lemma:sup}]
Since $a_1^*\times\cdots \times
a_n^*\subseteq\dcl{\factors[n]{L_1\cdots L_k}}$, we know that for
every $\ell\in\mathbb{N}$, we can find words $w_i\in L_i$ for $i\in[1,k]$ so
that $a_1^{\ell\cdot k}\cdots a_n^{\ell\cdot k}\preceq w_1\cdots
w_k$. Then, in particular, there is a monotone map
$\pi_\ell\colon[1,n]\to[1,k]$ so that $a_i^\ell\preceq w_{\pi(i)}$.
Since there are only finitely many maps $[1,n]\to[1,k]$, there is
one monotone map $\pi\colon[1,n]\to[1,k]$ that occurs infinitely
often in the sequence $\pi_1,\pi_2,\ldots$. We can decompose
$[1,n]=\pi^{-1}(1)\cup\cdots\cup\pi^{-1}(k)$ and since $\pi$ is
monotone, each $\pi^{-1}(i)$ is convex. This give rise to a
decomposition $n=n_1+\cdots+n_k$ so that $\pi^{-1}(1)\subseteq[1,n_1]$,
$\pi^{-1}(2)\subseteq[n_1+1,n_2]$, etc. Now, by choice of $\pi$,
for each $\ell\in\mathbb{N}$, we can find $w_i\in L_i$ so
that $a_i^\ell\preceq w_{\pi(i)}$, which means
$(a_1^\ell,\ldots,a_n^\ell)\in
\dcl{(\factors[n_1]{L_1}\times\cdots\times\factors[n_k]{L_k})}$.
This implies
$a_1^*\times\cdots\times
a_n^*\subseteq\dcl{(\factors[n_1]{L_1}\times\cdots\times\factors[n_k]{L_k})}$.
\end{proof}
\section{Non-overlapping factors}
\begin{proof}[Proof of \cref{factors-unboundedness-regular}]
Suppose $K\subseteq\Sigma^*$ is a language from $\mathcal{C}$ and let $\mathcal{A}$ be a finite automaton for
$R\subseteq\Sigma^*$. Pick a symbol $c\notin\Sigma$. We obtain a
finite automaton $\mathcal{B}$ from $\mathcal{A}$ as follows. In the first step, for
each pair $p,q$ of states, we check whether there is a word in $K$
that labels a path from $p$ to $q$ in $\mathcal{A}$: This is decidable because we
can effectively intersect languages in $\mathcal{C}$ with regular languages
and emptiness is decidable for $\mathcal{C}$. If such a word exists, we add
an edge labeled $c$ from $p$ to $q$. In the second step, for each
edge with a label $\ne c$, we replace the label by $\varepsilon$. This
completes the construction of $\mathcal{B}$.
Clearly, $f_K$ is unbounded on $R$ if and only if $\{c\}^*\subseteq
L(\mathcal{B})$. Moreover, if $f_K$ is bounded on $R$, then $L(\mathcal{B})$ is
finite and we can compute the maximal length $\ell$ of a word in
$L(\mathcal{B})$. This $\ell$ is then an upper bound for $f_K$ on $R$.
\end{proof}
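The check on $\mathcal{B}$ can be sketched as a graph search: since every $c$-edge of $\mathcal{B}$ can also be bypassed via $\varepsilon$-edges, $f_K$ is unbounded on $R$ precisely when some $c$-edge lies on a cycle that is reachable from the initial state and co-reachable from a final state (a minimal sketch over an explicit edge list; the automaton below is a toy example):

```python
def unbounded(edges, q0, finals):
    # edges: triples (p, label, q) with label "c" or "" (epsilon).
    def reach(sources, graph):
        seen, todo = set(sources), list(sources)
        while todo:
            p = todo.pop()
            for q in graph.get(p, ()):
                if q not in seen:
                    seen.add(q)
                    todo.append(q)
        return seen

    fwd, bwd = {}, {}
    for p, _label, q in edges:
        fwd.setdefault(p, set()).add(q)
        bwd.setdefault(q, set()).add(p)

    useful = reach({q0}, fwd) & reach(set(finals), bwd)
    # A c-edge on a useful cycle can be pumped arbitrarily often.
    return any(label == "c" and p in useful and q in useful
               and p in reach({q}, fwd)
               for p, label, q in edges)
```

In the first example below, the $c$-edge $0\to 1$ sits on the cycle $0\to 1\to 0$ with an exit to the final state $2$, so pumping it yields arbitrarily many $c$'s.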
\section{Counting automata}
We begin with a formal definition of the step relation in counting automata.
For a label
$x\in\Sigma\cup\{\varepsilon\}$ and configurations $(q,u,\mu),
(q',u',\mu')$, we write $(q,u,\mu)\autstep{x}(q',u',\mu')$
if there is
an edge $(q,x,o,q')\in E$ such that one of the following holds:
\begin{itemize}
\item We have $o=\oppush{a}$ for some $a\in\Gamma$ and $u'=ua$ and $\mu'=\mu$.
\item
We have
$o=\opcheck{K}{c}$ for some $K\subseteq\Gamma^*$ from $\mathcal{C}$ and
$c\in C$ and $u'=\varepsilon$ and either (a)~$u\in K$ and
$\mu'=\mu+1_c$
or (b)~$u\notin K$ and $\mu'=\mu$. Here, $1_c\in\mathbb{N}^C$ is the vector with
$1_c(c)=1$ and $1_c(c')=0$ for $c'\in C\setminus\{c\}$.
\end{itemize}
Moreover, for $w\in\Sigma^*$, we write $(q,u,\mu)\autstep{w} (q',u',\mu')$
if
\[ (q,u,\mu)=(q_0,u_0,\mu_0)\autstep{x_1}\cdots \autstep{x_n}(q_n,u_n,\mu_n)=(q',u',\mu'), \]
for some configurations $(q_i,u_i,\mu_i)$ and $w = x_1 \cdots x_n$, where $x_1, \ldots, x_n \in\Sigma_\varepsilon$.
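One step of this relation can be sketched directly (Python; a simplified toy version in which the check languages $K$ are plain finite sets of stack words):

```python
def step(edges, conf, x):
    # One x-labeled step of a counting automaton.  Edges have the form
    # (q, label, op, q2) with op = ("push", a) or ("check", K, c);
    # here the languages K are finite sets of stack words.
    q, u, mu = conf
    succs = []
    for p, label, op, r in edges:
        if p != q or label != x:
            continue
        if op[0] == "push":
            succs.append((r, u + op[1], dict(mu)))
        else:
            _, K, c = op
            mu2 = dict(mu)
            if u in K:                  # case (a): the check succeeds
                mu2[c] = mu2.get(c, 0) + 1
            succs.append((r, "", mu2))  # the stack is emptied either way
    return succs
```

A check edge increments counter $c$ exactly when the current stack content lies in $K$, and empties the stack in both cases.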
In our proof of \cref{counting-automata:decidable}, we will use
\cref{thm:approximation-unboundedness} and hence decidability of a multidimensional
predicate. Suppose $t=(K_1,\ldots,K_n)$ is a tuple of languages
$K_i\subseteq\Sigma^+$. We define a function $f_t\colon\Sigma^*\to\mathbb{N}$
as follows. Intuitively, $f_t(w)$ is the largest number $k$ such that
we can pick a set of non-overlapping factors of $w$ among which there
are at least $k$ members of $K_i$ for each $i\in[1,n]$.
Formally, for a word $w\in\Sigma^*$, let $f_t(w)$ be the largest
number $\ell$ such that for some $m$, there is a tuple $(w_1,\ldots,w_m)\in
(\Sigma^+)^m$ with $(w_1,\ldots,w_m)\in \factors[m]{w}$ such that for
each $i\in[1,n]$, we have $|\{j\in[1,m] \mid w_j\in K_i\}|\ge \ell$.
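For short words, $f_t$ can be computed by brute force, enumerating all choices of non-overlapping factors taken left to right (exponential; purely for illustration, with the $K_i$ given as finite sets):

```python
def f_t(w, Ks):
    # Brute-force f_t(w): maximise, over all choices of non-overlapping
    # factors of w taken left to right, the minimum over i of how many
    # chosen factors lie in Ks[i].  Exponential; only for short words.
    best = [0]

    def go(pos, counts):
        best[0] = max(best[0], min(counts))
        for start in range(pos, len(w)):
            for end in range(start + 1, len(w) + 1):
                factor = w[start:end]
                go(end, [c + (factor in K) for c, K in zip(counts, Ks)])

    go(0, [0] * len(Ks))
    return best[0]
```

For instance, with $t=(\{ab\},\{ba\})$ we get $f_t(abba)=1$, witnessed by the disjoint factors $ab$ and $ba$.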
Using an $n$-dimensional predicate and \cref{thm:approximation-unboundedness}, we
can show the following.
\begin{lem}\label{non-overlapping-factors:multi:decidable}
Let $\mathcal{C}$ be a full trio with decidable emptiness. Given a tuple
$t=(K_1,\ldots,K_n)$ of languages from $\mathcal{C}$ and a VAS language
$L$, it is decidable whether $f_t$ is unbounded on $L$. Moreover, if
$f_t$ is bounded on $L$, one can compute an upper bound $B\in\mathbb{N}$ for
$f_t$ on $L$.
\end{lem}
\begin{proof}
Let $t=(K_1,\ldots,K_n)$ be a tuple of languages with
$K_i\subseteq\Sigma^+$ for $i\in[1,n]$. For a word $w\in\Sigma^*$,
let $\Delta(w)\subseteq\mathbb{N}^n$ be the set of all
$(x_1,\ldots,x_n)\in\mathbb{N}^n$ such that there is a tuple $(w_1,\ldots,
w_m)\in (\Sigma^+)^m$ with $(w_1,\ldots,w_m)\in \factors[m]{w}$ and
$x_i=|\{j\in[1,m]\mid w_j\in K_i\}|$.
Let us now define the predicate $\pP$. For
$S\subseteq(\Sigma^*)^n$, let $\pP(S)$ express that for every
$\ell\in\mathbb{N}$, there is a tuple $(w_1,\ldots,w_n)\in S$ and a vector
$(x_1,\ldots,x_n)\in \sum_{i=1}^n \Delta(w_i)$ such that
$x_i\ge\ell$ for each $i\in[1,n]$. Here, the sum on subsets of
$\mathbb{N}^n$ is to be read as the Minkowski sum: $X+Y=\{x+y \mid x\in
X,~y\in Y\}$. Note that then indeed $\pP(\factors[n]{L})$ if and
only if $f_t$ is unbounded on $L$.
The predicate $\pP$ clearly satisfies
\cref{axiom:upward,axiom:union}, so let us prove
\cref{axiom:concatenation} and suppose $\pP(\factors[n]{L_1\cdots
L_k})$. A \emph{profile} is a map $\pi\colon
[1,n]\to[1,k]$. Intuitively, a profile records for each $i\in[1,n]$
which of the factors $L_1,\ldots,L_k$ can be chosen to find a
particular number of factors from $K_i$.
Let $\ell\in\mathbb{N}$. Since $\pP(\factors[n]{L_1\cdots L_k})$, we know
that there is a $(w_1,\ldots,w_n)\in\factors[n]{L_1\cdots L_k}$ such
that there is an $(x_1,\ldots,x_n)\in\sum_{i=1}^n \Delta(w_i)$ with
$x_i\ge k\cdot\ell+k$. Since $(w_1,\ldots,w_n)\in\factors[n]{L_1\cdots L_k}$, there is a word $u\in
L_1\cdots L_k$ with $(w_1,\ldots,w_n)\in\factors[n]{u}$.
Thus, we have a $(y_1,\ldots,y_n)\in\Delta(u)$ with $y_i\ge k\cdot \ell+k$
for each $i\in[1,n]$. Since $u\in L_1\cdots L_k$, we can write
$u=u_1\cdots u_k$ with $u_i\in L_i$ for $i\in[1,k]$.
Observe that then there is a $(z_1,\ldots,z_n)\in\sum_{i=1}^k
\Delta(u_i)$ with $z_i\ge y_i-k$ for $i\in[1,n]$: From the set of
factors that witnesses $(y_1,\ldots,y_n)\in\Delta(u)$, we can select
those that are confined to a single $u_i$; then we lose at most
those that fall on the border of two $u_i$'s, hence at most
$k$. Since $y_i\ge k\cdot\ell+k$, we have $z_i\ge k\cdot \ell$ for
$i\in[1,n]$. Write $(z_1,\ldots,z_n)=\sum_{i=1}^k
(z_{i,1},\ldots,z_{i,n})$ with
$(z_{i,1},\ldots,z_{i,n})\in\Delta(u_i)$. Since
$z_{1,i}+\cdots+z_{k,i}=z_i\ge k\cdot\ell$, we can find for each
$i\in[1,n]$, an index $j\in[1,k]$ so that $z_{j,i}\ge \ell$. This
defines a profile $\pi_\ell$: Let $\pi_\ell(i)=j$.
To summarize, we have defined for each $\ell\in\mathbb{N}$ a profile
$\pi_\ell$ so that the following holds. For each $\ell\in\mathbb{N}$, there
are words $u_1,\ldots,u_k$ with $u_j\in L_j$ for $j\in[1,k]$ so that
for each $i\in[1,n]$, the set $\Delta(u_{\pi_\ell(i)})$ contains a
vector $(z_1,\ldots,z_n)$ with $z_i\ge\ell$.
Since there are only finitely many profiles, the sequence
$\pi_1,\pi_2,\ldots$ must contain one profile $\pi$ infinitely
often. This profile has thus the following property. For each $\ell\in\mathbb{N}$,
there are words $u_1,\ldots,u_k$ with $u_j\in L_j$ for $j\in[1,k]$ so that
for each $i\in[1,n]$, the set $\Delta(u_{\pi(i)})$ contains a
vector $(z_1,\ldots,z_n)$ with $z_i\ge\ell$.
This allows us to define the decomposition $n=n_1+\cdots+n_k$: For
each $j\in[1,k]$, let $n_j=|\{i\in[1,n]\mid \pi(i)=j\}|$. We claim
that then $\pP(\factors[n_1]{L_1}\times\cdots\times\factors[n_k]{L_k})$
holds. Let $\ell\in\mathbb{N}$. We can choose words $u_1,\ldots,u_k$ with
$u_j\in L_j$ for $j\in[1,k]$ so that for each $i\in[1,n]$, the set
$\Delta(u_{\pi(i)})$ contains a vector $(z_1,\ldots,z_n)$ with
$z_i\ge\ell$.
Let us construct the tuple $(v_1,\ldots,v_n)$ successively from left
to right. For each $j=1,\ldots,k$, we do the following. If $n_j=0$,
then we add no new component. If $n_j>0$, then we include $u_j$ and
then $(n_j-1)$ entries containing just the empty word
$\varepsilon$. This clearly yields a tuple with $n=n_1+\cdots+n_k$
entries. Moreover, we have
$(v_1,\ldots,v_n)\in\factors[n_1]{L_1}\times\cdots\times\factors[n_k]{L_k}$.
Finally, for each $i\in[1,n]$, we have $n_{\pi(i)}>0$ and hence
$u_{\pi(i)}$ occurs in the tuple $(v_1,\ldots,v_n)$. Therefore, some
$\Delta(v_j)$ contains a vector $(z_1,\ldots,z_n)$ with
$z_i\ge\ell$. Therefore, the sum $\sum_{i=1}^n\Delta(v_i)$ contains
a tuple $(z_1,\ldots,z_n)$ with $z_i\ge \ell$ for every $i\in[1,n]$.
This proves our claim and hence that $\pP$ satisfies
\cref{axiom:concatenation}.
This shows that $\pP$ is in fact an unboundedness predicate.
According to \cref{thm:approximation-unboundedness}, we can compute
a regular language $R\supseteq L$ such that $\pP(\factors[n]{L})$ if
and only if $\pP(\factors[n]{R})$. This means $f_t$ is unbounded on
$L$ if and only if it is unbounded on $R$. Moreover, since
$L\subseteq R$, an upper bound of $f_t$ on $R$ is also an upper
bound of $f_t$ on $L$. Thus, it remains to show that we can decide
whether $f_t$ is bounded on $R$ and, if so, we can compute an upper
bound of $f_t$ on $R$.
Take a finite automaton $\mathcal{A}$ for $R$. From $\mathcal{A}$, we obtain a
finite automaton $\mathcal{B}$ over the alphabet $\Gamma=\{a_1,\ldots,a_n\}$
as follows. First, we remove all edges. Then, for each pair $p,q$
of states and each $i\in[1,n]$, we check whether there is a word in
$K_i$ that is read on a path from $p$ to $q$ in $\mathcal{A}$: This can be
checked because $K_i$ belongs to $\mathcal{C}$, $\mathcal{C}$ is effectively closed
under intersection with regular languages, and emptiness is decidable
for $\mathcal{C}$. If that is the case, then we draw a new edge labeled
$a_i$ from $p$ to $q$. Then, clearly, $f_t$ is unbounded on $R$ if
and only if for every $\ell\in\mathbb{N}$, there is a word $w$ accepted by
$\mathcal{B}$ that contains $a_i$ at least $\ell$ times, for each
$i\in[1,n]$. Consider the set
\[ S=\{\ell\in\mathbb{N} \mid \exists w\in L(\mathcal{B})\colon \forall i\in[1,n]\colon |w|_{a_i}\ge \ell \}. \]
It is easy to see that $S$ is effectively semilinear: the Parikh
image of $L(\mathcal{B})$ is semilinear and hence $S$ is definable in Presburger
arithmetic. Furthermore, $f_t$ is unbounded on $R$ if and only if
$S$ is infinite, which is easy to check. Finally, if $f_t$ is
bounded on $R$, then $S$ is finite and we can compute the maximal
element of $S$, which is an upper bound of $f_t$ on $R$.
\end{proof}
In the proof of \cref{counting-automata:decidable}, we will use the
concept of a transducer. A \emph{(finite-state) transducer} is a tuple
$\mathcal{A}=(Q,\Sigma,\Gamma,E,q_0,Q_f)$, where $Q$ is a finite set of
\emph{states}, $\Sigma$ is its \emph{input alphabet}, $\Gamma$ is its
\emph{output alphabet}, $E\subseteq Q\times
\Sigma_\varepsilon\times\Gamma_\varepsilon\times Q$ is its set of \emph{edges},
$q_0\in Q$ is its \emph{initial state}, and $Q_f\subseteq Q$ is its
set of \emph{final states}. A \emph{configuration} of $\mathcal{A}$ is a
triple $(q,u,v)\in Q\times\Sigma^*\times\Gamma^*$ and we write
$(q,u,v)\to(q',u',v')$ if there is an edge $(q,x,y,q')$ with $u'=ux$
and $v'=vy$. Let $\to^*$ denote the reflexive transitive closure of
$\to$.
Subsets of $\Sigma^*\times\Gamma^*$ for alphabets $\Sigma,\Gamma$ are
called \emph{transductions}.
A transducer induces a transduction as follows:
\[ T(\mathcal{A})=\{(u,v)\in\Sigma^*\times\Gamma^* \mid (q_0,\varepsilon,\varepsilon)\to^* (q,u,v)~\text{for some $q\in Q_f$} \}. \]
Then, $T(\mathcal{A})$ is called the transduction \emph{induced by $\mathcal{A}$}.
A transduction of the form $T(\mathcal{A})$ is called a \emph{rational transduction}.
In
general, for a transduction $T\subseteq\Sigma^*\times\Gamma^*$ and a
language $L\subseteq\Sigma^*$, we define
\[ T(L)=\{v\in\Gamma^* \mid \exists u\in L\colon (u,v)\in T \}. \]
It is well known that a language class $\mathcal{C}$ is a full trio if and
only if it is effectively closed under rational transductions, meaning
that given a description of a language $L$ in $\mathcal{C}$ and a rational
transduction $T$, we can effectively compute a description of $T(L)$
in $\mathcal{C}$.
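To make the definitions above concrete, here is a minimal Python sketch (ours, not part of the formal development) that, for a small finite-state transducer, computes all outputs $v$ with $(u,v)\in T(\mathcal{A})$ for a given input $u$ by searching the configuration graph; the example transducer and the name \texttt{transduce} are illustrative only.

```python
from collections import deque

def transduce(edges, q0, finals, word):
    """All v with (word, v) in T(A) for a finite-state transducer A.

    edges: set of tuples (p, x, y, q) with x, y single letters or ''
    (the empty word, playing the role of epsilon).  Assumes A has no
    cycle of epsilon-input edges, so the search below terminates.
    """
    outputs, seen = set(), set()
    queue = deque([(q0, 0, "")])   # configuration: state, input consumed, output
    while queue:
        q, i, v = queue.popleft()
        if (q, i, v) in seen:
            continue
        seen.add((q, i, v))
        if q in finals and i == len(word):
            outputs.add(v)
        for (p, x, y, r) in edges:
            if p != q:
                continue
            if x == "":                              # epsilon-input edge
                queue.append((r, i, v + y))
            elif i < len(word) and word[i] == x:     # consume one input letter
                queue.append((r, i + 1, v + y))
    return outputs

# Toy transducer replacing each input letter a by the output bb.
edges = {("s", "a", "b", "t"), ("t", "", "b", "s")}
print(transduce(edges, "s", {"s"}, "aa"))   # {'bbbb'}
```

The reflexive transitive closure $\to^*$ corresponds to the breadth-first search over configurations $(q,u,v)$.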
We are now ready to prove \cref{counting-automata:decidable}.
\begin{proof}[Proof of \cref{counting-automata:decidable}]
Given $\mathcal{A}$, we can transform $\mathcal{A}$ into a transducer $\mathcal{B}$ as
follows. Let $K_1,\ldots,K_n$ be the languages occurring in edges
$\opcheck{K}{c}$ in $\mathcal{A}$ and pick letters $d,e_{i,c}\notin\Gamma$
for each $i\in[1,n]$ and $c\in C$. The transducer $\mathcal{B}$ operates like $\mathcal{A}$, but
instead of performing operations $\oppush{a}$ or $\opcheck{K_i}{c}$,
it outputs symbols from the alphabet $\Lambda=\Gamma\cup
\{d,e_{i,c}\mid i\in[1,n], c\in C\}$: When $\mathcal{A}$ performs
$\oppush{a}$, $\mathcal{B}$ outputs $a$. When $\mathcal{A}$ performs
$\opcheck{K_i}{c}$, then $\mathcal{B}$ outputs $e_{i,c}d$. Moreover, in the
beginning of a run, $\mathcal{B}$ outputs a single $d$ before it starts
operating like $\mathcal{A}$. Now let $T$ be the transduction induced by
$\mathcal{B}$ and let $L'=T(L)$.
Then $L'$ is again a VAS language and
consists of precisely those words
$du_1e_{i_1,c_1}du_2e_{i_2,c_2}\cdots du_me_{i_m,c_m}du$ such
that $u\in\Gamma^*$ and $\mathcal{A}$ has a run on a member of $L$ that
performs for each $j\in[1,m]$ the operation
$\opcheck{K_{i_j}}{c_j}$ while $u_j$ is on the work tape.
Consider the language class $\bar{\mathcal{C}}$, which consists of all
finite unions of languages in $\mathcal{C}$. Then $\bar{\mathcal{C}}$ is again a
full trio and has a decidable emptiness problem. For each $c\in C$,
let $\bar{K}_c=\bigcup_{i\in[1,n]} dK_ie_{i,c}$. Then clearly
$\bar{K}_c$ belongs to $\bar{\mathcal{C}}$. Let $C=\{c_1,\ldots,c_k\}$ and
consider the language tuple
$t=(\bar{K}_{c_1},\ldots,\bar{K}_{c_k})$. Then $f_t$ is unbounded on
$L'$ if and only if $\mathcal{A}$ is unbounded on $L$. Moreover, an upper
bound $B\in\mathbb{N}$ for $f_t$ on $L'$ is also an upper bound for $\mathcal{A}$ on
$L$. Thus, an application of
\cref{non-overlapping-factors:multi:decidable} completes the proof.
\end{proof}
\section{Factor inclusion}
\subparagraph*{Detailed proof of \cref{axiom:union}}
First, let us verify \cref{axiom:union} in detail. Suppose that
$L_1\cup L_2$ is $K$-factor universal and that $L_1$ is not $K$-factor
universal. The latter means there is some $u\in K^*$ with
$u\notin\factors{L_1}$. Now let $v\in K^*$ be arbitrary. Since $uv\in
K^*$ and by $K$-factor universality of $L_1\cup L_2$, we know that
$uv\in\factors{L_1\cup L_2}=\factors{L_1}\cup\factors{L_2}$. Since
$uv\in\factors{L_1}$ is impossible, this only leaves
$uv\in\factors{L_2}$ and in particular $v\in\factors{L_2}$. This
proves that $L_2$ is $K$-factor universal and hence
\cref{axiom:union}.
It remains to show decidability of whether $K^*\subseteq\factors{R}$.
\begin{lem}\label{factor-universality-regular}
Let $\mathcal{C}$ be a full trio with decidable emptiness. Given a language
$K$ from $\mathcal{C}$ and a regular language $R$, it is decidable whether
$K^*\subseteq\factors{R}$.
\end{lem}
\begin{proof}
Suppose $K\subseteq\Sigma^*$ and let $\mathcal{A}$ be a finite automaton for
the regular language $\Sigma^*\setminus\factors{R}$. We have to
decide whether $K^*\cap L(\mathcal{A})=\emptyset$. Pick a symbol
$c\notin\Sigma$. We obtain a finite automaton $\mathcal{B}$ from $\mathcal{A}$ as
follows. For each pair $p,q$ of states, we check whether there is a
word in $K$ that labels a path from $p$ to $q$ in $\mathcal{A}$: This is
decidable because we can effectively intersect languages in $\mathcal{C}$
with regular languages and emptiness is decidable for $\mathcal{C}$. If such
a word exists, we add an edge labeled $c$ from $p$ to $q$. In the
second step, we remove all edges except for those labeled $c$. This
finishes the construction of $\mathcal{B}$. Then we have
$L(\mathcal{B})\subseteq\{c\}^*$. Furthermore, $K^*$ intersects $L(\mathcal{A})$ if
and only if $L(\mathcal{B})\ne\emptyset$, which is decidable since $\mathcal{B}$ is a
finite automaton.
\end{proof}
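The construction in this proof can be prototyped directly. The sketch below (ours) builds the $c$-automaton $\mathcal{B}$ and checks its emptiness, under the simplifying assumption that $K$ is a finite set of words, standing in for a language from $\mathcal{C}$ for which the path test is decidable.

```python
def k_star_intersects(states, edges, init, finals, K):
    """Does K^* intersect L(A)?  A = (states, edges, init, finals) is an
    NFA with edges a subset of Q x Sigma x Q; K is a finite set of words
    (a stand-in for a language from the full trio C, for which the path
    test below is the decidable intersection-with-regular-languages test).
    """
    def reach(p, w):                       # states reachable from p reading w
        cur = {p}
        for a in w:
            cur = {q for (r, b, q) in edges if r in cur and b == a}
        return cur
    # c-edges of B: p -> q whenever some word of K labels a path p -> q.
    c_edges = {(p, q) for p in states for w in K for q in reach(p, w)}
    # Emptiness check for B.  Note the empty word lies in K^*, so B
    # accepts immediately if an initial state is already final.
    seen, stack = set(init), list(init)
    while stack:
        p = stack.pop()
        for (r, q) in c_edges:
            if r == p and q not in seen:
                seen.add(q)
                stack.append(q)
    return bool(seen & finals)

# L(A) = {"ab"}; with K = {"a", "b"}, K^* contains "ab".
print(k_star_intersects({0, 1, 2}, {(0, "a", 1), (1, "b", 2)},
                        {0}, {2}, {"a", "b"}))    # True
```

With $K=\{\texttt{aa}\}$ instead, $K^*$ misses $L(\mathcal{A})$ and the function returns \texttt{False}.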
\end{document}
\begin{document}
\title{Periodicity of hyperplane arrangements
with integral coefficients modulo positive integers}
\author{Hidehiko Kamiya
\footnote
{
{\it Faculty of Economics, Okayama University}
}\\
Akimichi Takemura
\footnote
{\it Graduate School of Information Science and Technology,
University of Tokyo}
\\
Hiroaki Terao
\footnote
{
{\rm This work was
supported by the
MEXT
and the JSPS.}\,\,
{\tt [email protected]}\,\,\,
{\it Department of Mathematics, Hokkaido University}
}
}
\date{\today}
\maketitle
\begin{abstract}
We study central hyperplane arrangements with integral coefficients
modulo positive integers $q$. We prove that the cardinality of the
complement of the hyperplanes is a quasi-polynomial in two ways, first
via the theory of elementary divisors and then via the theory of the
Ehrhart quasi-polynomials. This result is useful for determining the
characteristic polynomial of the corresponding real arrangement.
With the former approach, we also prove that intersection lattices
modulo $q$ are periodic except for a finite number of $q$'s.
\noindent
{\it Key words}:
characteristic polynomial,
Ehrhart quasi-polynomial,
elementary divisor,
hyperplane arrangement,
intersection lattice.
\end{abstract}
\section{Introduction}
\label{sec:intro}
When a linear form in
$x_{1}, \dots, x_{m}$
with integral coefficients
is given, we may naturally consider its ``$q$-reduction''
for any positive integer $q$.
The $q$-reduction
is the image by the modulo $q$ projection
$
{\mathbb Z}[x_{1}, \dots, x_{m}]
\longrightarrow
{\mathbb Z}_{q} [x_{1}, \dots, x_{m}],
$
where ${\mathbb Z}_{q} = {\mathbb Z} /q {\mathbb Z}$.
In this paper, we call the kernel of the
resulting linear form
a ``hyperplane'' in $V := {\mathbb Z}_{q}^{m}$.
Suppose that a finite set of nonzero linear forms with
integral coefficients is given.
Then it not only defines a central
hyperplane arrangement ${\cal A}$ in ${\mathbb R}^{m}$,
but also gives
a ``hyperplane arrangement'' ${\cal A}_{q}$
in $V$ through the $q$-reduction
for each $q\in {\mathbb Z}_{> 0} $.
A basic fact we prove in this paper is that
the cardinality of the complement
$M({\cal A}_{q})$ of the arrangement ${\cal A}_{q}$ in $V$,
as a
function of $q$, is a quasi-polynomial in $q$.
(In other words, there exist a positive integer
$\rho$ (a period) and polynomials
$P_{j} (t) \,\,(1\le j \le \rho)$ such that
$|M({\cal A}_{q} )| = P_{r} (q)
\,\,
(1\le r\le \rho,\,\,
q \in r + \rho{\mathbb Z}_{\ge 0})$
for all $q\in {\mathbb Z}_{> 0}$.)
We provide two proofs of
this fact. The first proof uses the theory of elementary divisors.
The second proof is based on the
theory of the Ehrhart quasi-polynomials applied to each chamber of the
arrangement.
In our setting,
the approach via elementary divisors is more powerful
than the one via the Ehrhart theory.
The former gives more
information on the coefficients of the quasi-polynomials,
and it also enables us to prove
that the intersection lattices modulo $q$ are themselves periodic
except for a finite number of $q$'s.
Despite the advantage of the
approach via elementary divisors for our setting, we also consider the
connection to the Ehrhart theory an important aspect of our discussion,
because many results in the Ehrhart theory can be applied to further
develop the arguments
in this paper.
Especially when $q$ is a prime, the arrangement ${\cal A}_{q}$
lies in the vector space $V = {\mathbb Z}_{q}^{m}$.
In this case, it is well known (e.g., \cite{crr}, \cite[(4.10)]{tertohoku},
\cite[Thm.3.2]{kott})
that
$|M({\cal A}_q)|$ is equal to
$\chi({\cal A}_{q}, q)$
and that
$\chi({\cal A}_{q}, t)$
coincides with
$\chi({\cal A}, t)$
for a sufficiently large prime $q$,
where $\chi( - , t)$ stands for
the characteristic polynomial
(e.g., \cite[Def.2.52]{ort},
\cite[Chap.3, Ex.56]{sta})
of an arrangement.
These facts provide the ``finite field method'' to study
the real arrangement ${\cal A}$.
The method was
initiated and systematically applied by
Athanasiadis \cite{ath96, ath99, ath04}.
It has been used to solve problems related to hyperplane arrangements
by
Bj\"orner and Ekedahl \cite{bje} and Blass and Sagan \cite{bls}
among others.
It was also used in \cite{kott} to find
the characteristic polynomials of the mid-hyperplane
arrangements up to a certain
dimension.
Athanasiadis \cite{ath06} studies a problem similar to but different from the problem
in the present paper.
He proves that the coefficients of the characteristic polynomial of a certain
deformation of a central arrangement are quasi-polynomials.
A series of works by Athanasiadis on the finite field method is worth special mention
as the driving force of the research on this method.
For the theory of hyperplane arrangement, the reader is referred to
\cite{ort}.
For the Ehrhart theory for
counting lattice points in rational polytopes, see the book by Beck and
Robins \cite{ber}.
Beck and Zaslavsky \cite{bez} study
the extension of the Ehrhart theory to counting lattice points
in ``inside-out polytopes''.
The organization of the paper is as follows. In the rest of this
section, we set up our notation.
In Section \ref{sec:quasi}, we prove that
the cardinality of the complement
$M({\cal A}_{q})$
is a quasi-polynomial in $q$, via the theory of
elementary divisors (Section \ref{subsec:via-elementary-divisors})
and via the theory of the Ehrhart quasi-polynomials (Section
\ref{subsec:ehrhart}).
Based on this result, we consider a way of calculating the characteristic
polynomial
$\chi({\cal A}, t)$ of the corresponding real arrangement
${\cal A}$ (Section \ref{subsection:characteristic-polynomial}).
In Section \ref{sec:intersection-lattice},
we prove that the intersection lattices modulo $q$ are
periodic except for a finite number of $q$'s.
In our forthcoming paper
\cite{ktt},
we apply the results in the present paper to
the arrangements arising from root systems
and the mid-hyperplane arrangements.
\subsection{Setup and notation}
\label{subsec:setup}
Let $m, n \in {\mathbb Z}_{>0}$. Throughout this paper, $m$ denotes the dimension
and $n$ the number of hyperplanes in an arrangement.
Suppose we are given
an $m \times n$ integer matrix
\begin{equation*}
C=(c_1,\ldots,c_n) \in {\rm Mat}_{m \times n}({\mathbb Z})
\end{equation*}
consisting of
column vectors
$c_j=(c_{1j},\ldots,c_{mj})^T \in {\mathbb Z}^m, \ 1\le j\le n$.
Here, $^T$ denotes the transpose and
${\rm Mat}_{m \times n}({\mathbb Z})$ stands for the set of
$m\times n$ matrices with integer elements.
We assume that integral vectors
$c_j$
are nonzero:
\begin{equation}
\label{eq:cj-nonzero}
c_j\ne (0,\ldots,0)^T, \quad 1\le j\le n.
\end{equation}
Consider a real central hyperplane arrangement
\[
{\cal A}={\cal A}_C:=\{H_j: 1\le j\le n\}
\]
with
\[
H_j=H_{c_j}:=\{x=(x_1,\ldots,x_m) \in {\mathbb R}^m: x c_j=0\}.
\]
As an example, let us take $m=2, \ n=3$ and
\begin{equation}
\label{eq:c1c2c3}
C=
\begin{pmatrix}
1 & 1 & -2 \\
-1 & 1 & 1
\end{pmatrix},
\end{equation}
i.e., $c_1=(1,-1)^T, \ c_2=(1,1)^T, \ c_3=(-2,1)^T$.
Then the corresponding
hyperplane arrangement in
${\mathbb R}^2=\{ (x,y): x,y\in {\mathbb R} \}$
is ${\cal A}=\{ H_1, H_2, H_3\}$ with
\[
H_1: x-y=0, \quad
H_2: x+y=0, \quad
H_3: -2x + y=0.
\]
Since the coefficient vectors $c_j=(c_{1j},\ldots,c_{mj})^T\in {\mathbb Z}^m, \ 1\le j\le n$,
defining
$H_j
$ are integral,
we can consider the
reductions
of $c_j$
modulo positive integers
$q\in {\mathbb Z}_{>0}$.
Fix $q\in {\mathbb Z}_{>0}$ and
let
\[
[c_j]_q=([c_{1j}]_q,\ldots,[c_{mj}]_q)^T\in {\mathbb Z}_q^m
\]
be the {\it $q$-reduction} of
$c_j $,
i.e.,
$[c_{ij}]_q=c_{ij}+q{\mathbb Z} \in {\mathbb Z}_q,
\ 1\le i \le m, \ 1\le j \le n$.
In $V={\mathbb Z}_q^m$,
let us consider
\[
H_{j, q}=H_{c_j, q}:=\{ x=(x_1,\ldots,x_m) \in V: x[c_j]_q=[0]_q \},
\]
and define
\[
{\cal A}_q={\cal A}_{C, q}:=\{ H_{j, q}: 1\le j \le n \}.
\]
We emphasize that ${\cal A}_q={\cal A}_{C, q}$ is determined by
$C$ and $q$, but not by ${\cal A}={\cal A}_C$ and $q$.
For a non-prime $q$, it may not be appropriate to call
$H_{j, q}$ a hyperplane; nevertheless, abusing the terminology, we
still call $H_{j, q}$ a hyperplane and
${\cal A}_q$
an arrangement of hyperplanes.
In our previous example \eqref{eq:c1c2c3},
${\cal A}_q=\{ H_{1,q}, H_{2,q}, H_{3,q} \}$
with
\begin{align}
H_{1, q} &= \{ ([0]_q,[0]_q), ([1]_q,[1]_q), \dots, ([q-1]_q,[q-1]_q) \}, \nonumber \\
H_{2, q} &= \{ ([0]_q,[0]_q), ([1]_q,[q-1]_q), \ldots, ([q-1]_q,[1]_q) \}, \label{eq:H2q-example} \\
H_{3, q} &= \{ ([0]_q,[0]_q), ([1]_q,[2]_q), ([2]_q,[4]_q), \ldots, ([q-1]_q,[q-2]_q) \}. \nonumber
\end{align}
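These point sets are easy to tabulate by machine. The following brute-force Python sketch (ours; the function name is illustrative) enumerates $H_{j,q}$ for the running example:

```python
from itertools import product

def hyperplane(c, q, m=2):
    """Points x in (Z_q)^m with x.c = 0 in Z_q, by exhaustive search."""
    return {x for x in product(range(q), repeat=m)
            if sum(xi * ci for xi, ci in zip(x, c)) % q == 0}

q = 5
H1, H2, H3 = (hyperplane(c, q) for c in [(1, -1), (1, 1), (-2, 1)])
print(sorted(H3))          # [(0, 0), (1, 2), (2, 4), (3, 1), (4, 3)]
print(len(H1 | H2 | H3))   # 13
```

For $q=5$ the union of the three hyperplanes has $13$ points, so the complement has $25-13=12$ points.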
In the finite field method and its generalization in the present paper,
we are interested in the cardinality of the
complement of ${\cal A}_q$. We denote the complement by
\begin{equation*}
\label{eq:Mq}
M({{\cal A}}_q) := V \setminus \bigcup_{1\le j \le n}H_{j, q}
\end{equation*}
and its cardinality by $|M({{\cal A}}_q)|$.
We will prove
that
$|M({{\cal A}}_q)|$ is a quasi-polynomial in $q$
of degree $m$ and with the
leading coefficient identically equal to 1.
That is, there exist
{\it a period} ${\rho} \in {\mathbb Z}_{>0}$ and
$\alpha_{h, s}\in {\mathbb Q}, \ 0\le h\le m-1, \ s\in {\mathbb Z}_{\rho}$, such that
\begin{equation}
\label{eq:quasi-polynomial}
|M({{\cal A}}_q)|
= q^m + \alpha_{m-1,[q]_{\rho}}q^{m-1} + \dots + \alpha_{1,[q]_{\rho}}q
+ \alpha_{0,[q]_{\rho}},
\quad
q \in {\mathbb Z}_{>0};
\end{equation}
in fact, $\alpha_{h,s}, \ 0\le h\le m-1, \ s\in {\mathbb Z}_{\rho}$,
are integral: $\alpha_{h,s}\in {\mathbb Z}$.
In this paper, we will call \eqref{eq:quasi-polynomial}
the {\it characteristic quasi-polynomial}
of ${\cal A}_q$,
because, as we will see in Section \ref{subsection:characteristic-polynomial},
the value \eqref{eq:quasi-polynomial} coincides with
$\chi({\cal A}, q)$
if $q$ and $\rho$ are coprime, where $\chi({\cal A}, t)$ denotes
the characteristic polynomial
(e.g., \cite[Def.2.52]{ort},
\cite[Chap.3, Ex.56]{sta})
of the real arrangement
${\cal A}$.
The minimum period is simply called
{\it the period} of $|M({{\cal A}}_q)|$.
Often it is not trivial to find the period of
$|M({{\cal A}}_q)|$, although it is relatively easy to evaluate some
multiple of the period, which we simply call a period.
The reason is as follows.
The sum $\chi_1(q)+\chi_2(q)$ of two quasi-polynomials $\chi_1(q), \chi_2(q)$
is a quasi-polynomial having as a period the least common multiple
of the periods of $\chi_1(q)$ and $\chi_2(q)$.
However, due to possible cancellations of terms,
the period of $\chi_1(q)+\chi_2(q)$ may be smaller than this least common multiple.
See McAllister and Woods \cite{mcw}.
For a subset $J=\{j_1,\dots,j_k\} \subseteq \{ 1,\ldots,n\}$, write
\begin{equation}
\label{eq:HJq}
H_{J, q}:=\bigcap_{j\in J}H_{j, q}
= H_{j_1, q} \cap \cdots \cap H_{j_k, q}.
\end{equation}
When $J$ is nonempty,
$H_{J, q}$ in \eqref{eq:HJq} is determined by the $q$-reduction of
the $m\times k$ submatrix
\[
C_{J}:=(c_{j_1},\ldots,c_{j_k})
\in {\rm Mat}_{m \times k}({\mathbb Z})
\]
of $C$;
when $J$ is empty, we understand that $H_{\emptyset, q}=V$.
The Smith normal form of an integer matrix
$G\in {\rm Mat}_{m\times k}({\mathbb Z}), \ k\in {\mathbb Z}_{>0}$, is
\begin{eqnarray}
\label{eq:snfC}
\qquad
SGT=
\begin{pmatrix}
E & O \\
O & O
\end{pmatrix}
\in {\rm Mat}_{m \times k}({\mathbb Z}),
&&
E=\mathop{\rm diag}(e_1,\ldots,e_\ell), \ \
\ell=\mathop{\rm rank} G,
\\
&&
\quad
e_1,\ldots,e_{\ell}\in {\mathbb Z}_{>0}, \ \ e_1|e_2|\cdots|e_{\ell}, \nonumber
\end{eqnarray}
where $S\in {\rm Mat}_{m\times m}({\mathbb Z})$ and $T\in {\rm Mat}_{k\times k}({\mathbb Z})$
are unimodular matrices.
The positive integers $e_1,\ldots,e_{\ell}$ are
the {\it elementary divisors} of $G$.
For
simplicity, we often use the following notation
\begin{equation*}
\label{eq:notation-diag}
\mathop{\rm diag}(\{e_1, \dots,e_\ell\}; m,k)
=
\begin{pmatrix}
E & O \\
O & O
\end{pmatrix}
\in {\rm Mat}_{m \times k}({\mathbb Z}) .
\end{equation*}
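For small matrices, the elementary divisors can be computed without row and column operations, using the standard fact that $e_i=D_i/D_{i-1}$, where $D_i$ is the greatest common divisor of all $i$-minors and $D_0=1$. A brute-force Python sketch (ours, intended only for small matrices):

```python
from itertools import combinations
from math import gcd

def det(M):
    """Determinant of a square integer matrix by cofactor expansion."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1)**j * M[0][j] *
               det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

def elementary_divisors(A):
    """e_1 | e_2 | ... via e_i = D_i / D_{i-1}, D_i = gcd of all i-minors."""
    m, k = len(A), len(A[0])
    divisors, prev = [], 1
    for i in range(1, min(m, k) + 1):
        D = 0
        for rows in combinations(range(m), i):
            for cols in combinations(range(k), i):
                D = gcd(D, det([[A[r][c] for c in cols] for r in rows]))
        if D == 0:          # all i-minors vanish: rank < i
            break
        divisors.append(D // prev)
        prev = D
    return divisors

print(elementary_divisors([[1, 1], [-1, 1]]))   # [1, 2]
```

This matches the Smith normal form $\mathop{\rm diag}(1,2)$ computed for the example later in the text.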
\section{Characteristic quasi-polynomial}
\label{sec:quasi}
\subsection{Via elementary divisors}
\label{subsec:via-elementary-divisors}
In this subsection,
we prove that
$|M({{\cal A}}_q) |=|V \setminus \bigcup_{1\le j \le n}H_{j, q}|$
is a quasi-polynomial in $q\in {\mathbb Z}_{>0}$ using the theory of
elementary divisors.
Let $I_Y( \, \cdot \, ), \ Y\subseteq V$, stand for the characteristic function
(indicator function) of $Y: I_Y(x)=1, \ x\in Y$ and
$I_Y(x)=0, \ x \in V\setminus Y$.
Then for every $x \in V$,
\[
\prod_{j=1}^n\left(1- I_{H_{j, q}}(x)\right)
= \sum_{J \subseteq \{ 1,\ldots, n\}}(-1)^{|J|}I_{H_{J, q}}(x)
= I_V(x)+\sum_{\emptyset \ne J \subseteq \{ 1,\ldots,n\}}
(-1)^{|J|}I_{H_{J, q}}(x),
\]
which may be viewed as the inclusion-exclusion principle.
Therefore, from the relation
$x \in
M({{\cal A}}_q)
\Leftrightarrow
1=\prod_{j=1}^n(1-I_{H_{j, q}}(x))$,
we have
\begin{eqnarray}
|M({{\cal A}}_q)|
&=& \sum_{x\in V}\prod_{j=1}^n\left( 1-I_{H_{j, q}}(x)\right) \label{eq:|M|=SP}
= q^m+\sum_{\emptyset \ne J \subseteq \{ 1,\ldots,n\}}
(-1)^{|J|}\left| H_{J, q}\right|.
\end{eqnarray}
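This inclusion-exclusion identity can be sanity-checked numerically. The sketch below (our illustration) evaluates both sides for the running example \eqref{eq:c1c2c3} with $q=4$:

```python
from itertools import combinations, product

C = [(1, -1), (1, 1), (-2, 1)]      # columns c_1, c_2, c_3 of the example
q, m = 4, 2
pts = list(product(range(q), repeat=m))

def H(J):                            # H_{J,q}: common zeros of c_j, j in J
    return [x for x in pts
            if all(sum(a*b for a, b in zip(x, C[j])) % q == 0 for j in J)]

# left side: points avoiding every hyperplane
lhs = sum(all(sum(a*b for a, b in zip(x, c)) % q for c in C) for x in pts)
# right side: alternating sum over all subsets J (J = {} gives |V| = q^m)
rhs = sum((-1)**len(J) * len(H(J))
          for r in range(len(C) + 1) for J in combinations(range(len(C)), r))
print(lhs, rhs)                      # 7 7
```

Both sides give $|M({\cal A}_4)| = 16 - 12 + 4 - 1 = 7$.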
Hence it suffices to verify that
for each nonempty subset $J=\{j_1,\ldots,j_k\}$
of $\{1,\ldots,n\}$,
the cardinality $|H_{J, q} |$
is a quasi-polynomial in $q\in {\mathbb Z}_{>0}$.
Actually, we can show that $|H_{J, q} |$ is a quasi-monomial
with an integral coefficient.
Fix
$J=\{j_1,\ldots,j_k\}\ne \emptyset$ and
consider
$C_{J}=(c_{j_1},\ldots,c_{j_k})
\in {\rm Mat}_{m \times k}({\mathbb Z})$.
For each $
q \in {\mathbb Z}_{>0}$,
let us define $f_{J,q}:V={\mathbb Z}_q^m \to {\mathbb Z}_q^k$ by
\begin{equation}
\label{eq:x-mapsto}
x \mapsto x[C_J]_q,
\end{equation}
where
$[C_J]_q=([c_{j_1}]_q,\ldots,[c_{j_k}]_q)
\in {\rm Mat}_{m \times k}({\mathbb Z}_q)$
is the $q$-reduction of $C_J$.
Then
$|H_{J, q}|= |\ker f_{J,q}|$, so
the problem reduces to proving that $|\ker f_{J,q}|$ is
a quasi-monomial in $q$.
This fact can be shown by using the following general lemma.
\begin{lm}
\label{lm:homomorphism}
Let $m$ and $k$ be positive integers.
Let $f: {\mathbb Z}^m\to {\mathbb Z}^k$ be a ${\mathbb Z}$-homomorphism.
Then the cardinality of the kernel of the induced morphism
$f_q: {\mathbb Z}_q^m\to {\mathbb Z}_q^k$ is a quasi-monomial of $q\in {\mathbb Z}_{>0}$.
Furthermore, suppose $f$ is represented by
a matrix $G \in {\rm Mat}_{m\times k}({\mathbb Z})$.
Then this quasi-monomial $|\ker f_q|, \ q\in {\mathbb Z}_{>0}$, can be expressed as
\begin{equation}
\label{eq:q^(m-l)C}
|\ker f_{q}|=(d_1(q)\cdots d_{\ell}(q))q^{m-\ell},
\end{equation}
where $\ell=\mathop{\rm rank} G$ and $d_j(q):={\rm gcd}\{e_j, q \}, \ 1\le j \le \ell$.
Here, $e_1,\ldots,e_{\ell}\in {\mathbb Z}_{>0}, \ e_1|e_2|\cdots |e_{\ell}$, are the elementary
divisors of $G$.
In that case, the quasi-monomial $|\ker f_q|, \ q\in {\mathbb Z}_{>0}$,
has the
minimum
period $e_{\ell}$, where we set $e_{0}:=1$ to cover the case $\ell=0$.
\end{lm}
\noindent{\sl Proof.}\qquad
If $f$ is the zero ${\mathbb Z}$-homomorphism, then $|\ker f_q|=|{\mathbb Z}_q^m|=q^m$ and
the theorem is trivially true.
So we may assume that $f$ is not the zero ${\mathbb Z}$-homomorphism.
Since $|\ker f_q|=q^m/|{\rm im}f_q|$, we will study $|{\rm im}f_q|$.
Suppose $f$ is represented by an $m\times k$ integer
matrix $G
\in {\rm Mat}_{m\times k}({\mathbb Z})$.
Then, for $q\in {\mathbb Z}_{>0}$, the induced morphism $f_q: {\mathbb Z}_q^m\to {\mathbb Z}_q^k$ is
given by $x \mapsto x[G]_q$.
Consider the Smith normal form of $G$ in \eqref{eq:snfC}.
Since unimodularity is preserved under $q$-reductions,
we may assume that $G$ is of the form
\begin{equation*}
G= \mathop{\rm diag}(\{e_1,\dots,e_\ell\}; m,k)
\end{equation*}
from the outset.
Then we have
\[
f_{q}(x)=([e_1]_q x_1,\ldots,[e_{\ell}]_q x_{\ell},
[0]_q,\ldots,[0]_q) \in {\mathbb Z}_q^k
\]
for $x=(x_1,\ldots,x_m)\in {\mathbb Z}_q^m$.
Therefore,
${\rm im} f_{q}=[e_1]_q {\mathbb Z}_q\times \cdots \times [e_{\ell}]_q {\mathbb Z}_q$
and hence
\[
|{\rm im} f_{q}|=\frac{q}{d_1(q)}\times \cdots \times \frac{q}{d_{\ell}(q)}
=\frac{q^{\ell}}{d_1(q)\cdots d_{\ell}(q)},
\]
where
$d_j(q)={\rm gcd}\{e_j, q \}, \ 1\le j \le \ell$.
Consequently, we obtain \eqref{eq:q^(m-l)C}.
Now, for any $j=1,\ldots,\ell$, we have
$
d_j(q + e_{\ell})
={\rm gcd}\{ e_j, q+e_{\ell}\}={\rm gcd}\{ e_j, q\}
=d_j(q)$.
Therefore, \eqref{eq:q^(m-l)C} is a quasi-monomial in $q$
of degree $m-\ell <m$ and with a period $e_{\ell}$.
In fact, we can show that $e_{\ell}$ is the minimum period as follows.
Let $e'$ be the minimum period.
Note $e'|e_{\ell}$.
We have
$d_{j}(e_{\ell})=e_{j}\ge d_{j}(e')=d_{j}(e'+e_{\ell})>0$ for all $j=1,
\ldots,\ell.$
Since $e'$ is a period,
$d_1(e_{\ell})\cdots d_{\ell}(e_{\ell})
=
d_1(e'+e_{\ell})
\cdots d_{\ell}(e'+e_{\ell})$.
Therefore $e_{\ell} = d_{\ell} (e_{\ell} )
=
d_{\ell}(e' + e_{\ell} )
=
e'$.
\hbox{\vrule width 4pt height 6pt depth 1.5pt}\par
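Lemma \ref{lm:homomorphism} is easy to confirm by brute force for a concrete matrix. In the sketch below (ours), $G$ is already in Smith normal form with elementary divisors $e_1=2$, $e_2=6$ and $m=3$, $\ell=2$, so formula \eqref{eq:q^(m-l)C} predicts $|\ker f_q|=\gcd\{2,q\}\gcd\{6,q\}\,q$:

```python
from itertools import product
from math import gcd

G = [[2, 0], [0, 6], [0, 0]]   # Smith normal form: e_1 = 2, e_2 = 6; m = 3, k = 2

def ker_size(q):
    """|ker f_q| for f(x) = xG, by enumerating all of (Z_q)^m."""
    m, k = len(G), len(G[0])
    return sum(1 for x in product(range(q), repeat=m)
               if all(sum(x[i] * G[i][j] for i in range(m)) % q == 0
                      for j in range(k)))

for q in range(1, 13):
    # d_1(q) d_2(q) q^{m - l} with d_j(q) = gcd{e_j, q}
    assert ker_size(q) == gcd(2, q) * gcd(6, q) * q
print("formula confirmed for q = 1, ..., 12")
```

The coefficient $\gcd\{2,q\}\gcd\{6,q\}$ repeats with period $e_2=6$, in line with the minimality statement of the lemma.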
Now, $f_{J,q}:V={\mathbb Z}_q^m \to {\mathbb Z}_q^k$ in \eqref{eq:x-mapsto}
is induced from
the ${\mathbb Z}$-homomorphism $f_J: {\mathbb Z}^m \to {\mathbb Z}^k$
represented by $C_J$.
Thus, Lemma \ref{lm:homomorphism}
implies that
\begin{equation}
\label{eq:|H|}
|H_{J,q}|=|\ker f_{J,q}|=
(d_{J,1}(q)\cdots d_{J,\ell(J)}(q))q^{m-\ell(J)}
\end{equation}
is a quasi-monomial with the period $e_{J,\ell(J)}$,
where
$\ell(J):=\mathop{\rm rank} C_J$ and
$d_{J,j}(q):={\rm gcd}\{e_{J,j}, q \}$, $1\le j \le \ell(J)$.
Here, $e_{J,1},\ldots,e_{J,\ell(J)}\in {\mathbb Z}_{>0}, \
e_{J,1}|e_{J,2}|\cdots |e_{J,\ell(J)}$, denote the elementary
divisors of $C_J$.
Note that $\ell(J)>0$ for all $J, \ |J|\ge 1$, because of the assumption
\eqref{eq:cj-nonzero}.
\begin{rem}
Assume that $q$ is prime.
Then each $d_{J,j}(q)={\rm gcd}\{ e_{J,j},q\}, \ 1\le j\le \ell(J),$ is
$1$ or $q$, and $d_{J,j}(q)=q$ if and only if $[e_{J,j}]_q=0$.
It follows from \eqref{eq:|H|} that $X:=H_{J,q}$ for any nonempty $J$
satisfies $|X|=q^{m-\ell'}=q^{{\rm dim}X}$, where
$\ell'=| \{ j: 1\le j\le \ell(J), \ [e_{J,j}]_q\ne 0 \} |$.
Note that $|X|=q^{{\rm dim}X}$ for $X=H_{J,q}$ is true also when
$J$ is empty: $|{\mathbb Z}_q^m|=q^m$.
\end{rem}
From the discussions so far,
we reach the following conclusions.
First,
$|M({\cal A}_q)|$,
$q\in {\mathbb Z}_{>0}$,
is a monic quasi-polynomial in $q$ of degree $m$.
Second,
a period of this quasi-polynomial can be obtained in the following way.
For each $m\times k$ $(1\le k \le n)$ submatrix $C_J$ of
$C=(c_1,\ldots,c_n)\in {\rm Mat}_{m \times n}({\mathbb Z})$,
find its largest
elementary divisor $e(J):=e_{J,\ell(J)}$. Let
\begin{equation*}
\label{eq:period-0}
{\rho}_0:={\rm lcm}\{ e(J) : J \subseteq \{1,\ldots,n\}, J \neq \emptyset \}.
\end{equation*}
Then
${\rho}_0$ is a period of
$|M({\cal A}_q)|$.
For computing ${\rho}_0$ when $m<n$,
we can restrict the size of $J$ as $|J|\le m$:
\begin{equation}
\label{eq:period--0}
{\rho}_0 ={\rm lcm}\{ e(J) : J \subseteq \{1,\ldots,n\}, \
1\le |J| \le \min\{m, n\} \}.
\end{equation}
We can prove \eqref{eq:period--0} in the following way.
First, we note the next
lemma.
\begin{lm}
\label{lm:restriction}
Let $f_1, f_2: {\mathbb Z}^n\to {\mathbb Z}^m$ be two ${\mathbb Z}$-homomorphisms with
$\mathop{\rm rank} ({\rm im}f_1)=\mathop{\rm rank} ({\rm im}f_2)$ and
${\rm im} f_2\subseteq {\rm im} f_1$.
Then the largest elementary divisor of $f_1$ divides the largest elementary
divisor of $f_2$.
\end{lm}
\noindent{\sl Proof.}\qquad
Define $I_{i} := {\rm Ann}({\rm coker}f_{i})
=\{ p\in {\mathbb Z}: p\,({\rm coker}f_{i} )=0\}
\,\,
(i = 1, 2)$;
the ideal $I_{i}$ is generated
by the largest elementary divisor of $f_i$.
Since there is a natural projection ${\rm coker}f_2\to{\rm coker}f_1$,
we have $I_{2} \subseteq I_{1}$. This shows the lemma.
\hbox{\vrule width 4pt height 6pt depth 1.5pt}\par
Now, suppose $m<n$, and take an arbitrary $J\subseteq \{ 1,\ldots, n\}$
with $m<|J|\le n$.
Let $\ell=\ell(J)=\mathop{\rm rank} C_J\ (\le\! m)$.
Then we can take a subset $\tilde{J}\subset J, \ |\tilde{J}|=\ell$, such that
$\mathop{\rm rank} C_{\tilde{J}}=\ell$.
For this $\tilde{J}$, we have
${\rm im}g_{\tilde{J}}\subseteq {\rm im}g_J$, where
$g_J, g_{\tilde{J}}: {\mathbb Z}^n\to {\mathbb Z}^{m}$ are the ${\mathbb Z}$-homomorphisms
defined by $C_J$ and $C_{\tilde{J}}$, respectively:
$g_J(x)=\sum_{j\in J}x_jc_j, \ g_{\tilde{J}}(x)=\sum_{j\in \tilde{J}}x_jc_j,
\ x=(x_1,\ldots,x_n)^T \in {\mathbb Z}^n$.
Then Lemma \ref{lm:restriction} implies that $e(J)|e(\tilde{J})$.
From this observation, we obtain \eqref{eq:period--0}.
When $n$ is considerably larger than $m$, the restriction $|J|\le m$
is computationally very useful.
Let us find a period ${\rho}_0$ for our example
\eqref{eq:c1c2c3}.
Take $J=\{1,2\}$.
Then we have
\[
C_J=
\begin{pmatrix}
1 & 1 \\
-1 & 1
\end{pmatrix}
\]
with the Smith normal form
$\mathop{\rm diag}(1, 2)$.
Hence $e(J)=e_{J,2}
=2$.
In a similar manner, we can find $e(J)$ for the other
$J$'s with $1\le |J| \le 2$,
and obtain ${\rho}_0={\rm lcm}\{ 1,1,1,2,1,3
\}=6$.
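This computation of ${\rho}_0$ can be mechanized. The sketch below (ours; specialized to $m=2$) computes each $e(J)$ as $D_{\ell(J)}/D_{\ell(J)-1}$, with $D_i$ the gcd of the $i$-minors of $C_J$, and takes the least common multiple over all $J$ with $1\le|J|\le 2$:

```python
from itertools import combinations
from math import gcd, lcm

C = [(1, -1), (1, 1), (-2, 1)]      # columns of the running example; m = 2

def e_largest(cols):
    """Largest elementary divisor e(J) of the 2 x |J| matrix C_J,
    via D_1 = gcd of the entries and D_2 = gcd of the 2-minors."""
    d1 = 0
    for col in cols:
        for entry in col:
            d1 = gcd(d1, entry)
    d2 = 0
    for (a, b), (c, d) in combinations(cols, 2):
        d2 = gcd(d2, a * d - b * c)
    return d2 // d1 if d2 else d1    # rank 1: e(J) = e_{J,1} = D_1

subsets = [J for r in (1, 2) for J in combinations(range(3), r)]
rho0 = lcm(*(e_largest([C[j] for j in J]) for J in subsets))
print(rho0)                          # 6
```

The six subsets contribute $e(J)=1,1,1,2,1,3$, with lcm $6$, as computed above.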
Furthermore,
for $ |J| \ge 1, \ 1\le j\le \ell(J)$,
\begin{equation}
\label{eq:d(q)===}
d_{J,j}(q)={\rm gcd}\{ e_{J,j}, q \}={\rm gcd}\{ e_{J,j}, {\rho}_0, q \}
={\rm gcd}\{ e_{J,j}, {\rm gcd}\{ {\rho}_0, q\} \}.
\end{equation}
This implies that the coefficient $d_{J,1}(q)\cdots d_{J,\ell(J)}(q)$
of each monomial
$|H_{J, q}|, \
|J|\ge 1$, in \eqref{eq:|H|}
depends on $q$ only through
$\gcd\{ {\rho}_0, q\}$.
Therefore, the constituents of
the quasi-polynomial $|M({\cal A}_q)|$ in \eqref{eq:|M|=SP}
coincide for all $q$ with the same $\gcd\{ {\rho}_0, q\}$.
We summarize the results obtained so far as follows:
\begin{theorem}
\label{thm:quasi-polynomial-1}
The function
$|M({{\cal A}}_q)|$ is a monic quasi-polynomial in $q\in {\mathbb Z}_{>0}$ of degree
$m$ with a period ${\rho}_0$ given in \eqref{eq:period--0}.
Furthermore,
in (\ref{eq:quasi-polynomial}) with ${\rho}={\rho}_0$,
the coefficients
$\alpha_{h,[q]_{{\rho}_0}}\in{\mathbb Z}, \ 0\le h \le m-1$, depend on $[q]_{{\rho}_0}$
only through $\gcd\{ {\rho}_0, q\}$.
\end{theorem}
Let us find the characteristic quasi-polynomial for our example \eqref{eq:c1c2c3}.
Since ${\rho}_0=6
$, we know by Theorem \ref{thm:quasi-polynomial-1} that
each of the sets $\{ 1,5 \}, \{ 2,4 \}, \{ 3,9\},\{ 6,12\}$ of values of $q$
determines a constituent of the characteristic
quasi-polynomial $|M({\cal A}_q)|$.
For $q=1$, we have $V=H_{1,1}=H_{2,1}=H_{3,1}=\{ ([0]_1, [0]_1) \}$ and thus
$|M({\cal A}_1)|=0$.
For $q=5$, we
can count $|H_{1,5}\cup H_{2,5}\cup H_{3,5}|=13$
and get
$|M({\cal A}_5)|=5^2-13=12$.
By interpolation, we obtain the constituent
$q^2-3q+2$ for
$\gcd\{ 6, q\}=1$.
In this way,
we can get the following characteristic quasi-polynomial:
\begin{equation}
\label{eq:quasi-example}
| M({\cal A}_q)|=
\begin{cases}
q^2-3q+2 & \text{ when } {\rm gcd}\{ 6, q \}=1, \\
q^2-3q+3 & \text{ when } {\rm gcd}\{ 6, q \}=2, \\
q^2-3q+4 & \text{ when } {\rm gcd}\{ 6, q \}=3, \\
q^2-3q+5 & \text{ when } {\rm gcd}\{ 6, q \}=6.
\end{cases}
\end{equation}
From this characteristic
quasi-polynomial, we can see that the minimum period is $6={\rho}_0$.
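The characteristic quasi-polynomial \eqref{eq:quasi-example} can be verified exhaustively for small $q$; a Python check (ours):

```python
from itertools import product
from math import gcd

C = [(1, -1), (1, 1), (-2, 1)]      # columns of the example

def M_size(q):
    """|M(A_q)| by enumerating (Z_q)^2 and discarding hyperplane points."""
    return sum(1 for x in product(range(q), repeat=2)
               if all(sum(a*b for a, b in zip(x, c)) % q for c in C))

const = {1: 2, 2: 3, 3: 4, 6: 5}    # constant term, indexed by gcd{6, q}
for q in range(1, 25):
    assert M_size(q) == q*q - 3*q + const[gcd(6, q)]
print("verified for q = 1, ..., 24")
```

The four constituents already differ at $q=2,3,6$, confirming that no period smaller than $6$ works.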
\subsection{Via the Ehrhart theory}
\label{subsec:ehrhart}
We want to show via the Ehrhart theory that
$|M({\cal A}_q)|=|V \setminus \bigcup_{1\le j \le n}H_{j, q}|$
is a quasi-polynomial in $q\in {\mathbb Z}_{>0}$. The Ehrhart theory is indeed
useful for establishing that $|M({\cal A}_q)|$ is a
quasi-polynomial, and gives a geometric insight into its period.
However, it does not seem to give information on the constituents of the
quasi-polynomial.
For $j=1,\ldots,n$, let
\[
S_j:={\mathbb Z} \cap \{ x c_j \mid x\in [0,1)^m \}.
\]
For example, for $c_j=(1,\dots,1)^T \in {\mathbb Z}^m$
\[
\min_{x\in [0,1)^m \atop x_1+\cdots +x_m\in {\mathbb Z}} (x_1 + \dots + x_m)=0, \quad
\max_{x\in [0,1)^m \atop x_1+\cdots +x_m\in {\mathbb Z}} (x_1 + \dots + x_m)=m-1,
\]
and
$S_j=\{ 0,1,\ldots,m-1 \}$ for this $c_j$.
Now define the additional ``translated'' hyperplanes
\[
H_{j}^{s_j}(q)=
H_{c_j}^{s_j}(q):=
\{ x=(x_1,\ldots,x_m) \in {\mathbb R}^m: x c_j=s_jq \}\subset {\mathbb R}^m,
\quad s_j \in S_j,
\]
for $j=1,\ldots,n,$ and consider the real hyperplane arrangement
\[
{\cal A}^{{\rm deform}}(q)=
{\cal A}^{{\rm deform}}_C(q) :=
\{ H_{j}^{s_j}(q):
s_j \in S_j, \ 1\le j \le n \}.
\]
For any positive integer $q$,
we can express
$|M({\cal A}_q)|=|V \setminus \bigcup_{1\le j \le n}H_{j, q}|$
as
\begin{eqnarray}
~~|M({\cal A}_q)|
&=&\left| {\mathbb Z}^m \cap \left( [0,q)^m\setminus \bigcup
{\cal A}^{{\rm deform}}(q)\right) \right| \label{eq:sharp-sharp}
=\left| {\mathbb Z}^m \cap
\left( q \cdot
\left([0,1)^m \setminus \bigcup
{\cal A}^{{\rm deform}}
\right)\right)\right|,
\end{eqnarray}
where $\bigcup{\cal A}^{{\rm deform}}(q):=\bigcup_{H\in {\cal A}^{{\rm deform}}(q)}H$
and
${\cal A}^{{\rm deform}}:={\cal A}^{{\rm deform}}(1)$.
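For the running example one finds $S_1=\{0\}$, $S_2=\{0,1\}$, $S_3=\{-1,0\}$, so ${\cal A}^{{\rm deform}}(q)$ consists of the five real hyperplanes $x-y=0$, $x+y=0$, $x+y=q$, $-2x+y=-q$, $-2x+y=0$. The identity \eqref{eq:sharp-sharp} can then be checked numerically (our sketch):

```python
from itertools import product

def count_deform(q):
    """Lattice points of [0, q)^2 off the deformed arrangement A^deform(q)."""
    lines = [(lambda x, y: x - y, (0,)),
             (lambda x, y: x + y, (0, q)),
             (lambda x, y: -2*x + y, (-q, 0))]
    return sum(1 for x, y in product(range(q), repeat=2)
               if all(f(x, y) not in levels for f, levels in lines))

def count_mod(q):
    """|M(A_q)| computed directly in (Z_q)^2."""
    return sum(1 for x, y in product(range(q), repeat=2)
               if (x - y) % q and (x + y) % q and (-2*x + y) % q)

assert all(count_deform(q) == count_mod(q) for q in range(1, 16))
print(count_deform(5))   # 12
```

The two counts agree because, on $\{0,\ldots,q-1\}^2$, each linear form takes values in an interval of length less than $2q$ (respectively $3q$ for the third form), so vanishing modulo $q$ pins down exactly the listed levels.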
Now, let us consider
$[0,1)^m \setminus \bigcup {\cal A}^{{\rm deform}}$ in \eqref{eq:sharp-sharp}.
We see that $[0,1)^m$ is cut by the hyperplanes
$H_j^{s_j}(1), \ s_j \in S_j, \ 1\le j \le n$,
into
\[
P^O(s_1,\ldots,s_n):=
\{ x \in [0,1)^m: s_j < x c_j < s_j+1, \ 1\le j \le n \},
\]
$(s_1,\ldots,s_n) \in S_1^* \times \cdots \times S_n^*=:S^*$,
where
$S_j^*:=S_j\cup \{ \min S_j -1 \}, \ 1\le j\le n$.
Therefore
\begin{equation}
\label{eq:=sqcup}
[0,1)^m \setminus \bigcup {\cal A}^{{\rm deform}}=
\bigsqcup_{(s_1,\ldots,s_n) \in S^*}P^O(s_1,\ldots,s_n)
\end{equation}
is a disjoint union.
From \eqref{eq:sharp-sharp} and \eqref{eq:=sqcup}, we obtain
\begin{eqnarray*}
|M({{\cal A}}_q)|
&=&
\sum_{(s_1,\ldots,s_n) \in S^*}
\left| {\mathbb Z}^m \cap qP^O(s_1,\ldots,s_n) \right|
= \sum_{(s_1,\ldots,s_n) \in S^*}
i(P^O(s_1,\ldots,s_n),q),
\end{eqnarray*}
where $i(P^O(s_1,\ldots,s_n),q):=
|{\mathbb Z}^m \cap qP^O(s_1,\ldots,s_n)|.$
It should be noted that $P^O(s_1,\ldots,s_n), \ (s_1,\ldots,s_n) \in S^*$,
are not necessarily open in ${\mathbb R}^m$.
However, by applying the Ehrhart theory to some faces of each nonempty
$P^O(s_1,\ldots,s_n)$, we can show that
$i(P^O(s_1,\ldots,s_n),q)$
is a quasi-polynomial of $q\in {\mathbb Z}_{>0}$ with degree
$
d
=\dim(P^O(s_1,\ldots,s_n))
$
and
the leading coefficient equal to the normalized volume of
$P^O(s_1,\ldots,s_n)$.
When $d=m$, the normalized volume is the
same as the usual volume in ${\mathbb R}^m$.
Therefore, we can conclude that the sum
$\sum_{(s_1,\ldots,s_n) \in S^*} i(P^O(s_1,\ldots,s_n),q)=|M({{\cal A}}_q)|$
is a quasi-polynomial
of $q\in {\mathbb Z}_{>0}$ with
degree
$m$
and the leading coefficient
$
\sum_{(s_1,\ldots,s_n) \in S^*}
{\rm vol}_m(P^O(s_1,\ldots,s_n))=
{\rm vol}_m([0,1)^m)=1$, where
${\rm vol}_m( \, \cdot \, )$ denotes the usual volume in ${\mathbb R}^m$.
Let us now investigate the
periods of the characteristic quasi-polynomial
$|M({{\cal A}}_q)|, \ q\in {\mathbb Z}_{>0}$.
{}From the above discussion, we see that
a common multiple of periods of
$i(P^O(s_1,\ldots,s_n), q), \ (s_1,\ldots,s_n) \in S^*$,
is a period of
$|M({{\cal A}}_q)|$.
Let $\bar{P}(s_1,\ldots,s_n)$ denote the closure of
$P^O(s_1,\ldots,s_n)$. Define
the {\it denominator} ${\cal D}({\cal A}^{{\rm deform}})$ of ${\cal A}^{{\rm
deform}}$ by
\begin{eqnarray*}
{\cal D}({\cal A}^{{\rm deform}}) &:=& \min \{ q \in {\mathbb Z}_{>0}:
\text{ all }
q\bar{P}(s_1,\ldots,s_n),
\ (s_1,\ldots,s_n) \in S^*, \\
&& \qquad \text{ are integral polytopes}
\}.
\end{eqnarray*}
The Ehrhart theory now implies that
${\cal D}({\cal A}^{{\rm deform}})$ is a period of $|M({\cal A}_q)|$.
Put $\tilde{C}:=(C, I_m)$, where
$I_m$ is the $m\times m$ identity matrix.
For $J\subseteq \{ 1,\ldots,n+m\}$, let
$\tilde{C}_{J}$ denote the $m \times |J|$
submatrix of $\tilde{C}$ consisting of the columns
corresponding to the elements of $J$.
Then,
in view of Cramer's formula, we see that
${\cal D}({\cal A}^{{\rm deform}})$ divides
\begin{eqnarray*}
&& {\rho}_{\rm E}
:= {\rm lcm}\{ \det(\tilde{C}_{J}):
J\subset \{1,\ldots,n+m\} \\
&& \qquad \qquad \qquad
\text{ \ such that \ }
|J|=m \text{ \ and \ }
\det(\tilde{C}_{J}) \ne 0 \}. \nonumber
\end{eqnarray*}
Hence, the minimum period divides
${\rho}_{\rm E}$.
For $J$ with $|J\cap \{ n+1, \dots,n+m\}|=h<m$,
the determinant $\det(\tilde{C}_{J})$ equals
an $(m-h)\times (m-h)$ minor
of $C$ up to sign.
Therefore, we can also write
\begin{equation}
\label{eq:period-E}
{\rho}_{\rm E}
= {\rm lcm}\{ \text{nonzero $j$-minors of $C$}, \
1\le j\le m \}.
\end{equation}
Now, recall the well-known fact that
$\bar{e}(J):=e_{J,1}\times \cdots \times e_{J,\ell(J)}$
is equal to the greatest common
divisor of all the (nonzero) $\ell(J)$-minors of $C_J, \ J\ne \emptyset$,
and note that $e(J)=e_{J, \ell(J)}$ divides $\bar{e}(J)$ for every $J\ne \emptyset$.
Then we can easily see from \eqref{eq:period--0} and \eqref{eq:period-E}
that ${\rho}_0 | {\rho}_{\rm E}$.
Therefore, ${\rho}_0$ gives a tighter
bound for the period of the characteristic quasi-polynomial
$|M({{\cal A}}_q)|$ than ${\rho}_{\rm E}$.
In our working example \eqref{eq:c1c2c3}, we have
${\rho}_{{\rm E}}={\rm lcm}\{ 1,1,-2,-1,1,1,2,-1,3\}=6$ and thus
${\rho}_0=
{\rho}_{{\rm E}}$.
In general, if we obtain the characteristic quasi-polynomial by interpolation using
${\rho}_{{\rm E}}$ as a period and find that ${\rho}_{{\rm E}}$ happens to be
the minimum period, then we know ${\rho}_0=
{\rho}_{{\rm E}}$.
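Both bounds can be computed mechanically. The Python sketch below (standard library only) computes ${\rho}_{\rm E}$ as the lcm of the nonzero $j$-minors of $C$, and ${\rho}_0$ via the elementary divisors of the submatrices $C_J$, using the classical identity $e_{J,1}\cdots e_{J,k}=\gcd\{\text{$k$-minors of }C_J\}$. The matrix is the assumed reconstruction of the working example \eqref{eq:c1c2c3}, for which both values should come out as $6$.

```python
from functools import reduce
from itertools import combinations
from math import gcd

# Assumed reconstruction of the working example (eq. (c1c2c3)):
# column j of C is c_j, with c1 = (1,-1), c2 = (1,1), c3 = (-2,1).
C = [[1, 1, -2],
     [-1, 1, 1]]
m, n = 2, 3

def det(M):
    """Exact integer determinant by Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def minors(M, k):
    """All k x k minors of M."""
    for R in combinations(range(len(M)), k):
        for S in combinations(range(len(M[0])), k):
            yield det([[M[r][c] for c in S] for r in R])

def lcm(a, b):
    return a * b // gcd(a, b)

# rho_E = lcm of the nonzero j-minors of C, 1 <= j <= m
rho_E = reduce(lcm, (abs(d) for j in range(1, m + 1)
                     for d in minors(C, j) if d != 0))

def e_of(J):
    """Largest elementary divisor e(J) of C_J, via d_k = gcd of the
    k-minors and e_k = d_k / d_{k-1}."""
    CJ = [[C[i][j] for j in J] for i in range(m)]
    d_prev, e = 1, 1
    for k in range(1, min(m, len(J)) + 1):
        ms = [abs(d) for d in minors(CJ, k) if d != 0]
        if not ms:            # k exceeds the rank of C_J
            break
        d_k = reduce(gcd, ms)
        e, d_prev = d_k // d_prev, d_k
    return e

# rho_0 = lcm of e(J) over the nonempty J with |J| <= min(m, n)
subsets = [J for r in range(1, min(m, n) + 1)
           for J in combinations(range(n), r)]
rho_0 = reduce(lcm, (e_of(J) for J in subsets))

print(rho_E, rho_0)   # both equal 6 for this example
```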
\subsection{Characteristic polynomial of the real arrangement}
\label{subsection:characteristic-polynomial}
Let $\chi({\cal A}, t)$ be the characteristic polynomial
of the
real hyperplane arrangement
${\cal A}=\{H_j: 1\le j\le n\}$, where
$H_j=\{x\in {\mathbb R}^m: x c_j=0\}, \ 1\le j\le n$.
\begin{theorem}
Let ${\rho}$ be a period of the quasi-polynomial
$|M({{\cal A}}_q)|$ and $q$ be a positive integer relatively
prime to $\rho$.
Then
$|M({{\cal A}}_q)| = \chi({\cal A}, q)$.
\end{theorem}
\noindent{\sl Proof.}\qquad
Choose $c\in {\mathbb Z}_{\ge 0} $ and
$q'\in {\mathbb Z}$ such that
$q = \rho c + q'$ and
$1\le q'\le \rho$.
By Theorem \ref{thm:quasi-polynomial-1},
there exist integers
$\alpha_{0}, \dots, \alpha_{m-1}$ such that
$|M({{\cal A}}_k)|
=
k^m+\alpha_{m-1} k^{m-1}+\cdots+\alpha_0
$ for
all $k \in q' +{\rho}{\mathbb Z}_{\ge 0} $.
Since $q'$ and ${\rho}$ are
relatively prime, Dirichlet's theorem
on arithmetic progressions (e.g., \cite{ser}) implies that
$q'+{\rho} {\mathbb Z}_{\ge 0}$
contains an infinite number of primes.
On the other hand,
it is well known
(e.g., \cite{crr} \cite[(4.10)]{tertohoku} \cite[Theorem 3.2]{kott})
that,
when $k$ is a sufficiently large prime,
$
|M({{\cal A}}_k)|
$
coincides with $\chi({\cal A}, k)$.
Recall that the characteristic polynomial
$\chi({\cal A}, t)$
is a
monic polynomial of degree
$\dim({\mathbb R}^m)=m$.
This implies that
$
\chi({\cal A}, t) =
t^m+\alpha_{m-1} t^{m-1}+\cdots+\alpha_0
$
and thus
$
\chi({\cal A}, q) =
q^m+\alpha_{m-1} q^{m-1}+\cdots+\alpha_0
=
|M({{\cal A}}_q)|.
$
\hbox{\vrule width 4pt height 6pt depth 1.5pt}\par
The argument above implies that we can obtain the characteristic polynomial
$\chi({\cal A}, t)$ by
counting
$|M({\cal A}_{q_i})|=
|{\mathbb Z}_{q_i}^m\setminus \bigcup_{1\le j\le n}H_{j, q_i}|
$
for an arbitrary set of $m$ distinct values
$q_1,\ldots,q_{m}$
with
${\rm gcd}\{{\rho}, \, q_i \}=1
\,\,\,
(1\le i\le m)$.
Note that $q_1,\ldots,q_m$ need not be prime.
When $q'$ and ${\rho}$ are not relatively prime,
$q'+{\rho}{\mathbb Z}_{\ge 0}$ contains at most
one prime (and this prime is not necessarily
``sufficiently large''),
so the above argument does not hold.
For $q'+{\rho}{\mathbb Z}_{\ge 0}$ with such $q'$'s,
we obtain different polynomials
than $\chi({\cal A}, t)$.
In our example \eqref{eq:c1c2c3}, the constituent
of the characteristic
quasi-polynomial \eqref{eq:quasi-example}
for $1+6{\mathbb Z}_{\ge 0}$ and $5+6{\mathbb Z}_{\ge 0}$, i.e.,
$\gcd\{ 6, q\}=1$,
is the characteristic polynomial of
${\cal A}$.
Thus $\chi({\cal A}, t)=t^2-3t+2=(t-1)(t-2)$.
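The theorem can be illustrated numerically: counting the points of $M({\cal A}_q)$ for $q$ relatively prime to a period recovers $\chi({\cal A},q)$. The Python sketch below again uses the assumed reconstruction of the working example \eqref{eq:c1c2c3} (columns $c_1=(1,-1)$, $c_2=(1,1)$, $c_3=(-2,1)$), for which $\chi({\cal A},t)=t^2-3t+2$.

```python
from itertools import product
from math import gcd

# Assumed reconstruction of the working example (eq. (c1c2c3)):
# c1 = (1,-1), c2 = (1,1), c3 = (-2,1), three distinct lines through 0.
C = [(1, -1), (1, 1), (-2, 1)]

def count_M(q):
    """Brute-force |M(A_q)|: points of Z_q^2 on none of the H_{j,q}."""
    return sum(1 for x in product(range(q), repeat=2)
               if all((x[0] * c[0] + x[1] * c[1]) % q for c in C))

def chi(t):
    """chi(A, t) = (t-1)(t-2) for three distinct lines through the origin."""
    return t * t - 3 * t + 2

# rho = 6 is a period, so |M(A_q)| = chi(A, q) whenever gcd(6, q) = 1
for q in range(1, 40):
    if gcd(6, q) == 1:
        assert count_M(q) == chi(q)
```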
\section{Periodicity of the intersection lattice}
\label{sec:intersection-lattice}
In this section, we show that
the intersection lattice $L_q=L({\cal A}_q)$
(e.g., \cite[2.1]{ort}, \cite[Chap.3, Ex.56]{sta} )
is periodic for large enough $q$.
Let us begin with our working example to illustrate the
periodicity of the intersection lattice $L_{q}$.
In our example
\eqref{eq:c1c2c3},
the ``hyperplanes'' $H_{1,q}, H_{2,q}, H_{3,q}$ were given in \eqref{eq:H2q-example}.
For $q=1$, $V=\{([0]_1, [0]_1)\}=H_{1,1}=H_{2, 1}=H_{3, 1}$.
For $q=2$, $H_{1, 2}=H_{2, 2}$ and
for $q=3$, $H_{2, 3}=H_{3, 3}$. These are the exceptions.
{}From $q=4$ on, we have the periodicity of the
intersection lattice---the lattice of
$H_{J,q}=\cap_{j\in J}H_{j,q}, \ J\subseteq \{ 1,2,3\}$,
by reverse inclusion.
First, it is easily seen that
for $q \ge 4$, $H_{j, q}$, $j=1,2,3$, are distinct, proper subsets of $V$.
Furthermore, for $q \ge 4$,
\begin{eqnarray*}
H_{\{1,2\}, q}
&=&\begin{cases}
\{([0]_q,[0]_q)\}, & q:\text{odd}, \\
\{([0]_q,[0]_q), ([\frac{q}{2}]_q, [\frac{q}{2}]_q)\}, & q:\text{even},
\end{cases} \\
H_{\{2,3\}, q}
&=&\begin{cases}
\{([0]_q,[0]_q)\}, & 3 \nmid q , \\
\{([0]_q,[0]_q), ([\frac{q}{3}]_q,[\frac{2q}{3}]_q), ([\frac{2q}{3}]_q,[\frac{q}{3}]_q)\}, & 3 \mid q ,
\end{cases}
\end{eqnarray*}
and
$H_{\{ 1,3\},q}=H_{\{ 1,2,3\}, q}
=\{([0]_q,[0]_q)\}$. We see that the intersection lattice for this example is
periodic with period $6$ for $q\ge 4$. The Hasse diagrams for the four types of the
intersection lattices are illustrated in Figure \ref{fig:1}. In Figure
\ref{fig:1}, the subscript ${}_q$ is omitted for simplicity.
\begin{figure}
\caption{Hasse diagrams of intersection lattices for $q\ge 4$}
\label{fig:1}
\end{figure}
Let
$J=\{ j_1,\ldots, j_k \}, \
1\le j_1<\cdots <j_k \le n,
\ 1\le k \le n,$
be a nonempty subset of $\{ 1,\ldots,n \}$.
We write the Smith normal form of $C_J\in {\rm Mat}_{m \times k}({\mathbb Z})$ as
\begin{eqnarray}
\label{eq:SmithCJ}
&& S_JC_JT_J=
\mathop{\rm diag}(\{ e_{J,1},\ldots,e_{J,\ell(J)} \}; m, k)
=:\tilde{E}_J, \\
&& \qquad
\ell(J)=\mathop{\rm rank} C_J, \quad
e_{J,1},\ldots,e_{J,\ell(J)}\in {\mathbb Z}_{>0}, \quad
e_{J,1}|e_{J,2}|\cdots|e_{J,\ell(J)}. \nonumber
\end{eqnarray}
As in Section \ref{subsec:via-elementary-divisors},
we write $e(J)=e_{J,\ell(J)}$,
the largest
elementary divisor of $C_J$.
Let ${\rho}_0$ be
the least common multiple of all $e(J)$'s with $1\le |J|\leq \min\{ m, n\}$
as in \eqref{eq:period--0}.
Furthermore,
define
\[
q_0:=\max_{\emptyset \ne J\subseteq \{ 1,\ldots, n \}}
\min_{S_J} \, \max \{ | u |: u \text{ is an entry of } S_JC \text{ or } C \},
\]
where the minimization is over all
possible choices of $S_J$ in \eqref{eq:SmithCJ} for each fixed $J$.
We are now in a position to state the main theorem of this section.
\begin{theorem}
\label{th:period-L_q}
Let $J$ be an arbitrary nonempty subset of $\{ 1,\ldots, n\}$.
Suppose $q, q' \in {\mathbb Z}_{>0}$ satisfy $q, q'>q_0$ and
${\rm gcd}\{ {\rho}_0, q \}={\rm gcd}\{ {\rho}_0, q' \}$.
Then, for any $j \in \{ 1,\ldots, n\}$, we have that $H_{j, q}\supseteq H_{J, q}$
if and only if $H_{j, q'}\supseteq H_{J, q'}$.
\end{theorem}
When $j\in J$, the theorem is trivially true.
In proving Theorem \ref{th:period-L_q}, we need the following
proposition.
Regard $V={\mathbb Z}_q^m$ as a ${\mathbb Z}_q$-module.
Let $V^*$ be the ${\mathbb Z}_q$-module consisting of the linear forms on $V$.
For any $A\subseteq V$, we denote by $I(A)$ the set of linear forms
vanishing on $A$:
\[
I(A)=\{ \alpha \in V^*: \alpha(x)=0 \text{ \ for all \ } x \in A \}.
\]
Also, for any $B\subseteq V^*$, let $V(B)$ stand for the set of points
at which each linear form in $B$ vanishes:
\[
V(B)=\{ x \in V: \alpha(x)=0 \text{ \ for all \ } \alpha \in B \}.
\]
Evidently, $I(A)$ and $V(B)$ are submodules of $V^*$ and $V$, respectively.
\begin{prop}
\label{prop:I(V(B))}
For any $B\subseteq V^*$, we have $I(V(B))=\langle B\rangle$, where
$\langle B \rangle$ signifies the submodule of $V^*$ spanned by $B$.
\end{prop}
\noindent{\sl Proof.}\qquad
It suffices to show $I(V(B))=B$ for any submodule $B$ of $V^*$.
It is trivially true that $I(V(B))\supseteq B$, so we will prove $I(V(B))\subseteq B$.
Let $C_q(B)\in {\rm Mat}_{m\times k}({\mathbb Z}_q), \ k=|B|,$ be the
coefficient matrix of $B$.
We can find an integral matrix $C(B)\in {\rm Mat}_{m\times k}({\mathbb Z})$
whose $q$-reduction is $C_q(B)$, i.e.,
$[C(B)]_q=C_q(B)$.
Now, let $e_1|e_2|\cdots | e_{\ell}, \ \ell=\mathop{\rm rank} C(B)$,
be the elementary divisors of $C(B)$.
In ${\mathbb Z}_q$, we then have $C_q(B)$ is equivalent to
\begin{equation}
\label{eq:diag[]0}
\mathop{\rm diag}(\{[e_1],\ldots,[e_{\ell'}]\}; m,k)
\in {\rm Mat}_{m\times k}({\mathbb Z}_q),
\end{equation}
where $[e_1], \ldots, [e_{\ell'}] \in {\mathbb Z}_q\setminus\{ 0 \}, \ \ell'\le \ell$.
Here, we are writing $[ \, \cdot \, ]$ for $[ \, \cdot \, ]_q$ for simplicity.
We can, and henceforth do, choose $C(B)$ in such a way that $\ell'=\ell$.
From \eqref{eq:diag[]0} we see that we can assume $B$ is spanned by
$[e_1]y_1, \ldots, [e_{\ell}]y_{\ell}$ after a suitable coordinate change,
where $\{ y_1,\ldots,y_{\ell}, y_{\ell +1}, \ldots, y_m \}$ is a basis of $V^*$.
It follows that $V(B)$ is spanned by
\begin{align*}
p_1&:=(\,[q/d_1(q)]\,, 0,\ldots,0), \ldots,\;
p_{\ell}:=
( \underbrace{0,\ldots,0}_{\ell-1},\,[q/d_{\ell}(q)]\,,
0,\ldots,0), \\
& p_{\ell+1}:=( \underbrace{0,\ldots,0}_{\ell}, 1, 0,\ldots,0 )
,\ldots, \;
p_m:=\left( 0,\ldots,0, 1 \right)
\end{align*}
with $d_j(q)={\rm gcd}\{ e_j,q\}, \ 1\le j \le \ell$.
Now, take an arbitrary $\alpha=[a_1]y_1+\cdots+[a_{m}]y_{m}
\in I(V(B))=I(p_1,\ldots,p_m)$
with $[a_1],\ldots,[a_{m}]\in {\mathbb Z}_q$.
Then we have $0=\alpha(p_1)=[qa_1/d_1(q)],$ so
$a_1=r_1d_1(q)$ for some $r_1\in {\mathbb Z}$.
This implies $[a_1]=[r_1][d_1(q)]=[r'_1][e_1]$ with
$[r'_1]:=[r_1][e_1/d_1(q)]^{-1}\in {\mathbb Z}_q$,
where $[e_1/d_1(q)]^{-1}$ exists because
${\rm gcd}\{e_1/d_1(q), q \}=1$.
Similarly, for each $j=2,\ldots,\ell$, we have $[a_j]=[r'_j][e_j]$ for some
$[r'_j]\in {\mathbb Z}_q$.
Moreover, for $j=\ell+1,\ldots,m$, we obtain $0=\alpha(p_j)=[a_j]$.
Therefore, we have
$\alpha = [r'_1][e_1]y_1+\cdots+ [r'_{\ell}][e_{\ell}]y_{\ell} \in B$, and
the proof is complete.
\hbox{\vrule width 4pt height 6pt depth 1.5pt}\par
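Proposition \ref{prop:I(V(B))} can also be verified by exhaustive search for small $q$ and $m$. The following sketch compares $I(V(B))$ with the span $\langle B\rangle$ element by element in ${\mathbb Z}_q^2$, with linear forms identified with their coefficient vectors.

```python
from itertools import product

def check_I_V(q, B, m=2):
    """Exhaustively verify I(V(B)) = <B> in V* for V = Z_q^m, where linear
    forms are identified with their coefficient vectors in Z_q^m."""
    pts = list(product(range(q), repeat=m))
    dot = lambda a, x: sum(ai * xi for ai, xi in zip(a, x)) % q
    V_B = [x for x in pts if all(dot(b, x) == 0 for b in B)]        # V(B)
    I_VB = {a for a in pts if all(dot(a, x) == 0 for x in V_B)}     # I(V(B))
    span = set()                                                    # <B>
    for coeffs in product(range(q), repeat=len(B)):
        span.add(tuple(sum(c * b[i] for c, b in zip(coeffs, B)) % q
                       for i in range(m)))
    return I_VB == span

# a few composite moduli, where the statement is not a vector-space fact
assert check_I_V(6, [(2, 4)])
assert check_I_V(8, [(2, 6), (4, 4)])
assert check_I_V(12, [(3, 9)])
```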
\begin{flushleft}
{\it Proof of Theorem \ref{th:period-L_q}.}
\end{flushleft}
Without loss of generality, we may assume $j=1$.
Let $[S_J]_q, [C_J]_q, [T_J]_q
$ and $[\tilde{E}_J]_q$ be
the $q$-reductions of
$S_J, C_J, T_J
$ and $\tilde{E}_J$ in
\eqref{eq:SmithCJ}, respectively.
First, we know by Proposition \ref{prop:I(V(B))}
that $H_{1,q}\supseteq H_{J, q}$ if and only if
$[c_1]_q$ lies in the column space of
$[C_J]_q$ in ${\mathbb Z}_q^m$.
Since
$S_J^{-1}$ and $T_J^{-1}$ exist in ${\rm Mat}_{m\times m}({\mathbb Z})$ and
${\rm Mat}_{k\times k}({\mathbb Z})$, respectively,
the latter condition is equivalent to
$[c_1]_q$ being in the column space of $[C_J]_q[T_J]_q=
[S_J^{-1}]_q[\tilde{E}_J]_q$,
which in turn is equivalent to
$[S_J]_q[c_1]_q$ being in the column space of $[\tilde{E}_J]_q$
in ${\mathbb Z}_q^m$.
Next, let us
paraphrase the above condition in ${\mathbb Z}_q^m$
as a condition in ${\mathbb Z}^m$.
The condition holds if and only if
$S_Jc_1\in {\mathbb Z}^m$ is in the column space of
$(\tilde{E}_J, qI_m)\in {\rm Mat}_{m\times (k+m)}({\mathbb Z})$ in ${\mathbb Z}^m$.
Noting that
$e_{J,j}{\mathbb Z}+q{\mathbb Z}=d_{J,j}(q){\mathbb Z}$ with
$d_{J,j}(q)={\rm gcd}\{ e_{J,j}, q \}, \ 1\le j\le \ell(J)$,
we see that the condition holds if and only if
$S_Jc_1$ is in the column space of
$\mathop{\rm diag}(d_{J,1}(q),\ldots,d_{J,\ell(J)}(q),q,\ldots,q)\allowbreak
\in {\rm Mat}_{m\times m}({\mathbb Z})$.
Since the absolute value of each entry of $S_Jc_1\in {\mathbb Z}^m$ is
less than $q$, the condition is equivalent to
$S_Jc_1$ being in the column space of
\begin{equation}
\label{eq:hline}
\mathop{\rm diag}(\{d_{J,1}(q), \dots, d_{J,\ell(J)}(q)\}; m, \ell(J)) \
\in {\rm Mat}_{m\times \ell(J)}({\mathbb Z}).
\end{equation}
Now, since the absolute value of each entry of $S_Jc_1$ is
less than $q'$ as well,
the preceding argument holds true also for $q'$.
Moreover, we see from \eqref{eq:d(q)===} that
$d_{J,j}(q)=d_{J,j}(q')$ for $j=1,\ldots,\ell(J)$.
Thus \eqref{eq:hline} remains the same when $q$ is
replaced by $q'$.
Therefore, we obtain the desired result.
\hbox{\vrule width 4pt height 6pt depth 1.5pt}\par
Our assumption \eqref{eq:cj-nonzero} implies that
$H_{j, q} \not\supseteq H_{\emptyset, q}=V, \ 1\le j\le n$, for all $q>q_0$.
From this observation and Theorem \ref{th:period-L_q},
it follows immediately that $L_q=L({\cal A}_q)$ for $q>q_0$ is
periodic in $q$ with a period ${\rho}_0$.
\begin{co}
\label{co:periodicity-of-L}
The intersection lattice $L_q=L({\cal A}_q)$ is
periodic in $q>q_0$ with a period ${\rho}_0$:
\[
L_{q+s{\rho}_0}\simeq L_q \text{ for all } q > q_0 \text{ and } s\in {\mathbb Z}_{\ge 0}.
\]
\end{co}
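Corollary \ref{co:periodicity-of-L} can be checked empirically on the working example: by Theorem \ref{th:period-L_q}, the intersection lattice is determined by the containments $H_{j,q}\supseteq H_{J,q}$, so it suffices to compare these containment patterns for $q$ and $q+6$. The coefficient vectors below are an assumed reconstruction of \eqref{eq:c1c2c3}.

```python
from itertools import combinations, product

# Assumed reconstruction of the working example (eq. (c1c2c3)):
# c1 = (1,-1), c2 = (1,1), c3 = (-2,1).
C = [(1, -1), (1, 1), (-2, 1)]

def fingerprint(q):
    """For every J, record which hyperplanes contain H_{J,q}; by the
    periodicity theorem these containments determine the lattice L_q."""
    pts = list(product(range(q), repeat=2))
    def H(J):
        return frozenset(x for x in pts
                         if all((x[0] * C[j][0] + x[1] * C[j][1]) % q == 0
                                for j in J))
    singles = [H((j,)) for j in range(3)]
    return tuple(tuple(H(J) <= Hj for Hj in singles)
                 for r in range(4) for J in combinations(range(3), r))

# periodicity with period rho_0 = 6 beyond the small exceptions q = 1, 2, 3
for q in range(4, 16):
    assert fingerprint(q) == fingerprint(q + 6)
```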
Finally, we make a remark on the coarseness of the intersection lattices
for different $q$'s. In Figure \ref{fig:1}
we see that the intersection lattice for the case $\gcd\{ 6, q\}=6$ is
the most detailed and that the coarseness is nested according to the
divisibility of $\gcd\{ 6, q\}$.
This observation can be generally stated as follows.
\begin{prop}
\label{prop:divisibility}
Let $I, J\subseteq \{1,\dots,n\}$ and suppose that
$H_{I,q}=H_{J,q}$ for some $q > q_0$. Then
$H_{I,q'}=H_{J,q'}$ for every $q'> q_0$ such that
$\gcd\{ {\rho}_0, q' \}|\gcd\{ {\rho}_0, q \}$.
\end{prop}
\noindent{\sl Proof.}\qquad By the symmetry of the roles of $I$ and $J$, it suffices to show that for any $i\in I$, if $[c_i]_q$ lies in
the column space of $[C_J]_q$ in ${\mathbb Z}_q^m$, then
$[c_i]_{q'}$ lies in
the column space of $[C_{J}]_{q'}$ in ${\mathbb Z}_{q'}^m$.
Without loss of generality, take $i=1$ and assume that
$[c_1]_q$ lies in
the column space of $[C_{J}]_q$ in ${\mathbb Z}_q^m$.
Then $S_J c_1$ is in the column space of \eqref{eq:hline}.
Now, because $\gcd\{ {\rho}_0, q'\}|\gcd\{ {\rho}_0, q\}$ by assumption,
we can see from \eqref{eq:d(q)===} that
$d_{J,j}(q')|d_{J,j}(q), \ 1\le j\le \ell(J)$.
This implies that
$S_J c_1$ is in the column space of \eqref{eq:hline} with $q$ replaced by $q'$.
Therefore, $[c_1]_{q'}$ lies in the column space of $[C_{J}]_{q'}$ in ${\mathbb Z}_{q'}^m$.
\hbox{\vrule width 4pt height 6pt depth 1.5pt}\par
\end{document} |
\begin{document}
\title[Numerical methods for elliptic membrane shells]{Numerical approximation of the solution of an obstacle problem modelling the displacement of elliptic membrane shells via the penalty method}
\author[Aaron Meixner]{Aaron Meixner}
\address{Department of Mathematics The Ohio State University, 100 Math Tower, 231 West 18th Avenue, Columbus, Ohio, USA}
\email{[email protected]}
\author[Paolo Piersanti]{Paolo Piersanti}
\address{Department of Mathematics and Institute for Scientific Computing and Applied Mathematics, Indiana University Bloomington, 729 East Third Street, Bloomington, Indiana, USA}
\email[Corresponding author]{[email protected]}
\today
\begin{abstract}
In this paper we establish the convergence of a numerical scheme, based on the Finite Element Method, for a time-independent problem modelling the deformation of a linearly elastic elliptic membrane shell subject to the constraint of remaining confined in a half space. Instead of approximating the original variational inequalities governing this obstacle problem, we approximate the penalized version of the problem under consideration.
A suitable coupling between the penalty parameter and the mesh size will then lead us to establish the convergence of the solution of the discrete penalized problem to the solution of the original variational inequalities.
We also establish the convergence of the Brezis-Sibony scheme for the problem under consideration. Thanks to this iterative method, we can approximate the solution of the discrete penalized problem without having to resort to nonlinear optimization tools.
Finally, we present numerical simulations validating our new theoretical results.
\noindent \textbf{Keywords.} Obstacle problems $\cdot$ Variational Inequalities $\cdot$ Elasticity theory $\cdot$ Finite Difference Quotients $\cdot$ Penalty Method $\cdot$ Finite Element Method
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
\label{sec0}
In this paper we establish the convergence of a numerical scheme, based on the Finite Element Method, for approximating the solution of a set of variational inequalities modelling the displacement of a linearly elastic elliptic membrane shell subject to remaining confined in a prescribed half space.
Differently from~\cite{PS}, where the authors studied the convergence of a numerical scheme based on the Finite Element Method for approximating the solution of a fourth order set of variational inequalities modelling the displacement of a shallow shell, which, we recall, takes the form of a Kirchhoff-Love vector field, the solution of the problem we study in this paper is a general vector field, and the variational inequalities we shall consider involve all three components of this displacement vector field.
Critical to establishing the convergence of the finite element approximation of the solution of the problem under consideration is the augmentation of regularity of the solution of the governing variational inequalities.
This preparatory result improves the standard penalization argument extensively discussed in~\cite{Lions1969} and lets us infer \emph{how fast} the penalized solution converges to the solution of the original variational inequalities.
A similar numerical analysis has been treated by Scholz in the paper~\cite{Scholz1984} where, however, the author resorted to the very peculiar assumption $(\ast)$ on the elliptic operator under consideration.
We will replace this assumption by a more reasonable geometrical assumption, which is exactly the assumption needed to ensure the ``density property'' devised by Ciarlet, Mardare \& Piersanti in~\cite{CiaMarPie2018b,CiaMarPie2018}. In addition to this, the augmentation of regularity argument carried out in~\cite{Scholz1984} is only valid for scalar functions. The fact that the solution of the variational problem under consideration is a vector field renders this analysis substantially more complicated than in the scalar case.
Other references about numerical approximations of the solutions of obstacle problems via the Finite Element Method include, for instance, the seminal paper by Falk~\cite{Falk1974}, where the author exploited the augmentation of regularity result established by Brezis and Stampacchia~\cite{BrezisStampacchia1968}. The scheme proposed there, however, does not seem to be reproducible in the case where the unknown of the variational problem under consideration is a vector field.
The study of the augmentation of regularity of solutions for boundary value problems modelled via elliptic equations began between the end of the Fifties and the early Sixties, when Agmon, Douglis \& Nirenberg published the two pioneering papers~\cite{AgmDouNir1959} and~\cite{AgmDouNir1964} about the regularity properties of solutions of elliptic systems up to the boundary of the integration domain.
The augmentation of regularity for solutions of variational inequalities for scalar functions was first addressed by Frehse in the early Seventies~\cite{Frehse1971,Frehse1973}. In the late Seventies and early Eighties, Caffarelli and his collaborators published the two papers~\cite{Caffarelli1979,CafFriTor1982}, where they proved that the solution of an obstacle problem for the biharmonic operator (cf., e.g., Section~6.7 of~\cite{PGCLNFAA}) could not be \emph{too regular}. It was recently established in~\cite{Pie2020-1} that the solution of an obstacle problem for linearly elastic shallow shells enjoys higher regularity properties in the interior of the domain where it is defined. To the best of our knowledge, the results contained in~\cite{Pie2020-1} constitute the first study of the augmented regularity of a vector field solving a set of variational inequalities.
Augmentation of regularity for linear problems in elasticity theory was treated, for instance, by Geymonat in the seminal paper~\cite{Geymonat1965}, by Alexandrescu-Iosifescu~\cite{Iosifescu1994}, where the augmentation of regularity for Koiter's model is considered, and by Genevey in~\cite{Genevey1996}, where the higher regularity of the solution for a variational problem modelling the displacement of a linearly elastic elliptic membrane shell is established.
To the best of our knowledge, the only record in the literature treating the augmentation of regularity of the solution of second order variational inequalities in the case where one such solution is a vector field and the constraint defining the non-empty, closed, and convex subset of the Sobolev space where the solution is sought is expressed in terms of all of the three components of the displacement vector field is the recent paper~\cite{Pie-2022-interior}.
This paper is divided into ten sections (including this one). In section~\ref{sec1} we present some background and notation.
In section~\ref{sec2} we recall the formulation and the properties of a three-dimensional obstacle problem for a ``general'' linearly elastic shell. It is worth mentioning that this three-dimensional problem is the starting point for deriving the variational formulation of the two-dimensional problem, whose solution regularity is the object of interest of this paper.
In section~\ref{sec3} we scale the original three-dimensional problem over a domain of fixed thickness and we state the corresponding scaled problem, modelled by a set of variational inequalities. We then recall the result of the asymptotic analysis conducted in~\cite{CiaMarPie2018b,CiaMarPie2018}, we state the two-dimensional limit problem obtained as a result of an application of the ``density property'' and, finally, we de-scale the limit problem by re-introducing the thickness parameter.
In section~\ref{sec:penalty}, we establish the existence and uniqueness of the solution for the de-scaled penalized limit problem, after recalling the regularity properties of the penalty operator entering the model under consideration.
In section~\ref{sec:aug-interior}, we establish the augmentation of regularity up to the boundary for the de-scaled penalized problem. As a consequence of this, we are able to prove that the solution of the de-scaled variational inequalities is actually the weak limit of the sequence of solutions of the de-scaled penalized problems as the penalty parameter tends to zero, with respect to a vector space which is characterized by a higher regularity than the one where the search for minimizers of the energy functional was originally performed.
In section~\ref{approx:original} we show that the sequence of solutions of the de-scaled penalized problems converges to the solution of the de-scaled variational inequalities at a polynomial rate. To obtain this result, the augmentation of regularity devised in section~\ref{sec:aug-interior} will be playing a crucial role.
In section~\ref{approx:penalty} we approximate the solution of the de-scaled penalized problem by a Finite Element Method, the convergence of which shall strongly be hinging on a suitable coupling between the penalty parameter and the mesh size.
In section~\ref{approx:BrezisSibony}, we prove that the iterative scheme originally proposed by Brezis and Sibony in the seminal paper~\cite{BrezisSibony1968} makes it possible to approximate the solution of the discrete penalized problem introduced in section~\ref{approx:penalty} without having to resort to nonlinear optimization tools like, for instance, the Primal-Dual Active Set Method and the Gradient Descent Method.
Finally, in section~\ref{numerics} we present numerical experiments meant to validate our theoretical results.
\section{Background and notation}
\label{sec1}
For a complete overview about the classical notions of differential geometry used in this paper, see, e.g.~\cite{Ciarlet2000} or \cite{Ciarlet2005}.
Greek indices, except $\varepsilon$, take their values in the set $\{1,2\}$, while Latin indices, except when they are used for ordering sequences, take their values in the set $\{1,2,3\}$, and, unless differently specified, the summation convention with respect to repeated indices is used jointly with these two rules.
As a model of the three-dimensional ``physical'' space $\mathbb{R}^3$, we take a \emph{real three-dimensional affine Euclidean space}, i.e., a set in which a point $O \in\mathbb{R}^3$ has been chosen as the \emph{origin} and with which a \emph{real three-dimensional Euclidean space}, denoted $\mathbb{E}^3$, is associated. We equip $\mathbb{E}^3$ with an \emph{orthonormal basis} consisting of three vectors $\bm{e}^i$, with components $e^i_j=\delta^i_j$.
The definition of $\mathbb{R}^3$ as an affine Euclidean space means that with any point $x \in \mathbb R^3$ is associated a uniquely determined vector $\boldsymbol{Ox} \in \mathbb{E}^3$. The origin $O \in \mathbb{R}^3$ and the orthonormal vectors $\bm{e}^i \in \mathbb{E}^3$ together constitute a \emph{Cartesian frame} in $\mathbb{R}^3$ and the three components $x_i$ of the vector $\boldsymbol{Ox}$ over the basis formed by $\bm{e}^i$ are called the \emph{Cartesian coordinates} of $x \in \mathbb{R}^3$, or the \emph{Cartesian components} of $\boldsymbol{Ox} \in \mathbb{E}^3$. Once a Cartesian frame has been chosen, any point $x \in \mathbb{R}^3$ may thus be \emph{identified} with the vector $\boldsymbol{Ox}=x_i \bm{e}^i \in \mathbb{E}^3$. As a result, a set in $\mathbb{R}^3$ can be identified with a ``physical'' body in the Euclidean space $\mathbb{E}^3$.
The Euclidean inner product and the vector product of $\bm{u}, \bm{v} \in \mathbb{E}^3$ are respectively denoted by $\bm{u} \cdot \bm{v}$ and $\bm{u} \wedge \bm{v}$; the Euclidean norm of $\bm{u} \in \mathbb{E}^3$ is denoted by $\left|\bm{u} \right|$. The notation $\delta^j_i$ designates the Kronecker symbol.
Given an open subset $\Omega$ of $\mathbb{R}^n$, where $n \ge 1$, we denote the usual Lebesgue and Sobolev spaces by $L^2(\Omega)$, $L^1_{\textup{loc}}(\Omega)$, $H^1(\Omega)$, $H^1_0 (\Omega)$, $H^1_{\textup{loc}}(\Omega)$, and the notation $\mathcal{D} (\Omega)$ designates the space of all functions that are infinitely differentiable over $\Omega$ and have compact supports in $\Omega$. We denote $\left\| \cdot \right\|_X$ the norm in a normed vector space $X$. Spaces of vector-valued functions are written in boldface.
The Euclidean norm of any point $x \in \Omega$ is denoted by $|x|$.
The boundary $\Gamma$ of an open subset $\Omega$ in $\mathbb{R}^n$ is said to be Lipschitz-continuous if the following conditions are satisfied (cf., e.g., Section~1.18 of~\cite{PGCLNFAA}): Given an integer $s\ge 1$, there exist constants $\alpha_1>0$ and $L>0$, a finite number of local coordinate systems, with coordinates
$$
\bm{\phi}'_r=(\phi_1^r, \dots, \phi_{n-1}^r) \in \mathbb{R}^{n-1} \textup{ and } \phi_r=\phi_n^r, 1 \le r \le s,
$$
sets
$$
\tilde{\omega}_r:=\{\bm{\phi}'_r \in\mathbb{R}^{n-1}; |\bm{\phi}'_r|<\alpha_1\},\quad 1 \le r \le s,
$$
and corresponding functions
$$
\tilde{\theta}_r:\tilde{\omega}_r \to\mathbb{R},\quad 1 \le r \le s,
$$
such that
$$
\Gamma=\bigcup_{r=1}^s \{(\bm{\phi}'_r,\phi_r); \bm{\phi}'_r \in \tilde{\omega}_r \textup{ and }\phi_r=\tilde{\theta}_r(\bm{\phi}'_r)\},
$$
and
$$
|\tilde{\theta}_r(\bm{\phi}'_r)-\tilde{\theta}_r(\bm{\upsilon}'_r)|\le L |\bm{\phi}'_r-\bm{\upsilon}'_r|, \textup{ for all }\bm{\phi}'_r, \bm{\upsilon}'_r \in \tilde{\omega}_r, \textup{ and all }1\le r\le s.
$$
We observe that the second-to-last formula takes into account overlapping local charts, while the last set of inequalities expresses the Lipschitz continuity of the mappings $\tilde{\theta}_r$.
An open set $\Omega$ is said to be locally on the same side of its boundary $\Gamma$ if, in addition, there exists a constant $\alpha_2>0$ such that
\begin{align*}
\{(\bm{\phi}'_r,\phi_r);\bm{\phi}'_r \in\tilde{\omega}_r \textup{ and }\tilde{\theta}_r(\bm{\phi}'_r) < \phi_r < \tilde{\theta}_r(\bm{\phi}'_r)+\alpha_2\}&\subset \Omega,\quad\textup{ for all } 1\le r\le s,\\
\{(\bm{\phi}'_r,\phi_r);\bm{\phi}'_r \in\tilde{\omega}_r \textup{ and }\tilde{\theta}_r(\bm{\phi}'_r)-\alpha_2 < \phi_r < \tilde{\theta}_r(\bm{\phi}'_r)\}&\subset \mathbb{R}^n\setminus\overline{\Omega},\quad\textup{ for all } 1\le r\le s.
\end{align*}
A \emph{domain in} $\mathbb{R}^n$ is a bounded and connected open subset $\Omega$ of $\mathbb{R}^n$, whose boundary $\partial \Omega$ is Lipschitz-continuous, the set $\Omega$ being locally on a single side of $\partial \Omega$.
Let $\omega$ be a domain in $\mathbb{R}^2$ with boundary $\gamma:=\partial\omega$, and let $\omega_1 \subset \omega$. The special notation $\omega_1 \subset \subset \omega$ means that $\overline{\omega_1} \subset \omega$ and $\textup{dist}(\gamma,\partial\omega_1):=\min\{|x-y|;x \in \gamma \textup{ and } y \in \partial\omega_1\}>0$.
Let $y = (y_\alpha)$ denote a generic point in $\omega$, and let $\partial_\alpha := \partial / \partial y_\alpha$. A mapping $\bm{\theta} \in \mathcal{C}^1(\overline{\omega}; \mathbb{E}^3)$ is said to be an \emph{immersion} if the two vectors
$$
\bm{a}_\alpha (y) := \partial_\alpha \bm{\theta} (y)
$$
are linearly independent at each point $y \in \overline{\omega}$. Then the set $\bm{\theta} (\overline{\omega})$ is a \emph{surface in} $\mathbb{E}^3$, equipped with $y_1, y_2$ as its \emph{curvilinear coordinates}. Given any point $y\in \overline{\omega}$, the linear combinations of the vectors $\bm{a}_\alpha (y)$ span the \emph{tangent plane} to the surface $\bm{\theta} (\overline{\omega})$ at the point $\bm{\theta} (y)$, the unit vector
$$
\bm{a}_3 (y) := \frac{\bm{a}_1(y) \wedge \bm{a}_2 (y)}{|\bm{a}_1(y) \wedge \bm{a}_2 (y)|}
$$
is orthogonal to $\bm{\theta} (\overline{\omega})$ at the point $\bm{\theta} (y)$, the three vectors $\bm{a}_i(y)$ form the \emph{covariant} basis at the point $\bm{\theta} (y)$, and the three vectors $\bm{a}^j(y)$ defined by the relations
$$
\bm{a}^j(y) \cdot \bm{a}_i (y) = \delta^j_i
$$
form the \emph{contravariant} basis at $\bm{\theta} (y)$; note that the vectors $\bm{a}^\beta (y)$ also span the tangent plane to $\bm{\theta} (\overline{\omega})$ at $\bm{\theta} (y)$ and that $\bm{a}^3(y) = \bm{a}_3 (y)$.
The \emph{first fundamental form} of the surface $\bm{\theta} (\overline{\omega})$ is then defined by means of its \emph{covariant components}
$$
a_{\alpha \beta} := \bm{a}_\alpha \cdot \bm{a}_\beta = a_{\beta \alpha} \in \mathcal{C}^0 (\overline{\omega}),
$$
or by means of its \emph{contravariant components}
$$
a^{\alpha \beta}:= \bm{a}^\alpha \cdot \bm{a}^\beta = a^{\beta \alpha}\in \mathcal{C}^0(\overline{\omega}).
$$
Note that the symmetric matrix field $(a^{\alpha \beta})$ is then the inverse of the positive-definite matrix field $(a_{\alpha \beta})$, that $\bm{a}^\beta = a^{\alpha \beta}\bm{a}_\alpha$ and $\bm{a}_\alpha = a_{\alpha \beta} \bm{a}^\beta$, and that the \emph{area element} along $\bm{\theta} (\overline{\omega})$ is given at each point $\bm{\theta}(y), y \in \overline{\omega}$, by $\sqrt{a(y)} \, \mathrm{d} y$, where
$$
a := \det(a_{\alpha \beta}) \in \mathcal{C}^0(\overline{\omega}),
$$
a function which satisfies $a_0 \le a(y) \le a_1$ for all $y\in\overline{\omega}$, for some constants $a_0, a_1>0$.
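As a purely illustrative numerical check of these formulas (not part of the analysis proper), the sketch below approximates the covariant basis of a hypothetical sphere patch of radius $R$ by central differences, and verifies that the matrix $(a^{\alpha \beta})$ inverts $(a_{\alpha \beta})$ and that $a = \det(a_{\alpha \beta})$ agrees with its closed form; the parametrization and the sample point are arbitrary choices.

```python
import numpy as np

R = 2.0  # radius of a hypothetical sphere patch (illustrative choice)

def theta(y1, y2):
    # a parametrized patch of the sphere of radius R
    return R * np.array([np.sin(y1)*np.cos(y2), np.sin(y1)*np.sin(y2), np.cos(y1)])

def covariant_basis(y1, y2, h=1e-6):
    # a_alpha := partial_alpha theta, approximated by central differences
    a1 = (theta(y1 + h, y2) - theta(y1 - h, y2)) / (2*h)
    a2 = (theta(y1, y2 + h) - theta(y1, y2 - h)) / (2*h)
    return a1, a2

y = (0.7, 0.3)  # an arbitrary point of omega
a1, a2 = covariant_basis(*y)

# covariant components a_{alpha beta} = a_alpha . a_beta
A_cov = np.array([[a1 @ a1, a1 @ a2],
                  [a2 @ a1, a2 @ a2]])
A_con = np.linalg.inv(A_cov)       # contravariant components (a^{alpha beta})
a_det = np.linalg.det(A_cov)       # the function a = det(a_{alpha beta})

# closed-form values for this patch: a_11 = R^2, a_22 = R^2 sin^2 y1, a = R^4 sin^2 y1
assert np.isclose(A_cov[0, 0], R**2, atol=1e-4)
assert np.isclose(A_cov[1, 1], (R*np.sin(y[0]))**2, atol=1e-4)
assert np.isclose(a_det, R**4 * np.sin(y[0])**2, atol=1e-3)
assert np.allclose(A_con @ A_cov, np.eye(2), atol=1e-8)
```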
Given an immersion $\bm{\theta} \in \mathcal{C}^2(\overline{\omega};\mathbb{E}^3)$, the \emph{second fundamental form} of the surface $\bm{\theta}(\overline{\omega})$ is defined by means of its \emph{covariant components}
$$
b_{\alpha \beta}:= \partial_\alpha \bm{a}_\beta \cdot \bm{a}_3 = -\bm{a}_\beta \cdot \partial_\alpha \bm{a}_3 = b_{\beta \alpha} \in \mathcal{C}^0(\overline{\omega}),
$$
or by means of its \emph{mixed components}
$$
b^\beta_\alpha := a^{\beta \sigma} b_{\alpha \sigma} \in \mathcal{C}^0(\overline{\omega}),
$$
and the \emph{Christoffel symbols} associated with the immersion $\bm{\theta}$ are defined by
$$
\Gamma^\sigma_{\alpha \beta}:= \partial_\alpha \bm{a}_\beta \cdot \bm{a}^\sigma = \Gamma^\sigma_{\beta \alpha} \in \mathcal{C}^0 (\overline{\omega}).
$$
The \emph{Gaussian curvature} at each point $\bm{\theta} (y) , \, y \in \overline{\omega}$, of the surface $\bm{\theta} (\overline{\omega})$ is defined by
$$
\kappa (y) := \frac{\det (b_{\alpha \beta} (y))}{\det (a_{\alpha \beta} (y))} = \det \left( b^\beta_\alpha (y)\right).
$$
Observe that the denominator in the above relation does not vanish since $\bm{\theta}$ is assumed to be an immersion. Note that the Gaussian curvature $\kappa (y)$ at the point $\bm{\theta} (y)$ is also equal to the product of the two principal curvatures at this point.
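For a sphere of radius $R$ the two principal curvatures both equal $1/R$, so the Gaussian curvature is $1/R^2$ at every point. This can be checked numerically on the hypothetical sphere patch used above, approximating $b_{\alpha \beta}$ by nested central differences (the radius and sample point are illustrative choices):

```python
import numpy as np

R = 2.0  # hypothetical sphere radius; every point then has kappa = 1/R^2

def theta(y1, y2):
    return R * np.array([np.sin(y1)*np.cos(y2), np.sin(y1)*np.sin(y2), np.cos(y1)])

def d(f, i, y, h=1e-5):
    # central difference of f with respect to y_i
    e = np.zeros(2); e[i] = h
    return (f(*(np.array(y) + e)) - f(*(np.array(y) - e))) / (2*h)

y = (0.8, 0.5)                      # an arbitrary point of omega
a = [d(theta, 0, y), d(theta, 1, y)]                          # covariant basis a_alpha
nvec = np.cross(a[0], a[1]); a3 = nvec / np.linalg.norm(nvec)  # unit normal a_3

# b_{alpha beta} = partial_alpha a_beta . a_3, via nested central differences
da = lambda b: (lambda y1, y2: d(theta, b, (y1, y2)))
B = np.array([[d(da(b0), a0, y) @ a3 for b0 in range(2)] for a0 in range(2)])
A = np.array([[a[i] @ a[j] for j in range(2)] for i in range(2)])

kappa = np.linalg.det(B) / np.linalg.det(A)
assert np.isclose(kappa, 1/R**2, rtol=1e-3)
```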
Given an immersion
$\bm{\theta} \in \mathcal{C}^2 (\overline{\omega}; \mathbb{E}^3)$ and a
vector field $\bm{\eta} = (\eta_i) \in \mathcal{C}^1(\overline{\omega};
\mathbb{R}^3)$, the vector field
$$
\tilde{\bm{\eta}} := \eta_i \bm{a}^i
$$
may be viewed as the \emph{displacement field of the surface} $\bm{\theta} (\overline{\omega})$, thus defined by means of its \emph{covariant components} $\eta_i$ over the vectors $\bm{a}^i$ of the contravariant bases along the surface. If the norms $\left\| \eta_i \right\|_{\mathcal{C}^1(\overline{\omega})}$ are small enough, the mapping $(\bm{\theta} + \eta_i \bm{a}^i) \in \mathcal{C}^1(\overline{\omega}; \mathbb{E}^3)$ is also an immersion, so that the set $(\bm{\theta} + \eta_i \bm{a}^i) (\overline{\omega})$ is again a surface in $\mathbb{E}^3$, equipped with the same curvilinear coordinates as those of the surface $\bm{\theta} (\overline{\omega})$ and is called the \emph{deformed surface} corresponding to the displacement field $\tilde{\bm{\eta}} = \eta_i \bm{a}^i$.
It is thus possible to define the first fundamental form of the deformed surface in terms of its covariant components by
\begin{align*}
a_{\alpha \beta} (\bm{\eta}) :=& (\bm{a}_\alpha + \partial_\alpha \tilde{\bm{\eta}}) \cdot (\bm{a}_\beta + \partial_\beta \tilde{\bm{\eta}}) \\
=& a_{\alpha \beta} + \bm{a}_\alpha \cdot \partial_\beta \tilde{\bm{\eta}} + \partial_\alpha \tilde{\bm{\eta}} \cdot \bm{a}_\beta + \partial_\alpha \tilde{\bm{\eta}} \cdot \partial_\beta \tilde{\bm{\eta}}.
\end{align*}
The \emph{linear part with respect to} $\tilde{\bm{\eta}}$ in the difference $(a_{\alpha \beta}(\bm{\eta}) - a_{\alpha \beta})/2$ is called the \emph{linearized change of metric}, or \emph{strain}, \emph{tensor} associated with the displacement field $\eta_i \bm{a}^i$, the covariant components of which are thus defined by
$$
\gamma_{\alpha \beta}(\bm{\eta}) := \dfrac{1}{2}(\bm{a}_\alpha \cdot \partial_\beta \tilde{\bm{\eta}} + \partial_\alpha \tilde{\bm{\eta}} \cdot \bm{a}_\beta) = \frac12 (\partial_\beta \eta_\alpha + \partial_\alpha \eta_\beta ) - \Gamma^\sigma_{\alpha \beta} \eta_\sigma - b_{\alpha \beta} \eta_3 = \gamma_{\beta \alpha} (\bm{\eta}).
$$
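The defining property of $\gamma_{\alpha \beta}(\bm{\eta})$ can be checked numerically: since the remainder $(a_{\alpha \beta}(t\bm{\eta}) - a_{\alpha \beta})/2 - t\,\gamma_{\alpha \beta}(\bm{\eta})$ equals $\tfrac{t^2}{2}\,\partial_\alpha \tilde{\bm{\eta}} \cdot \partial_\beta \tilde{\bm{\eta}}$, it is exactly quadratic in $t$. In the sketch below the surface, the displacement field $\tilde{\bm{\eta}}$, and the sample point are hypothetical choices, and $\gamma_{\alpha \beta}$ is evaluated through its first expression above.

```python
import numpy as np

R = 1.5  # hypothetical sphere patch
theta = lambda y1, y2: R*np.array([np.sin(y1)*np.cos(y2), np.sin(y1)*np.sin(y2), np.cos(y1)])
eta   = lambda y1, y2: np.array([np.sin(y1 + y2), y1*y2, np.cos(y1)])  # an arbitrary smooth field

def d(f, i, y, h=1e-6):
    e = np.zeros(2); e[i] = h
    return (f(*(np.array(y) + e)) - f(*(np.array(y) - e))) / (2*h)

y = (0.9, 0.4)
a = [d(theta, 0, y), d(theta, 1, y)]    # covariant basis a_alpha
de = [d(eta, 0, y), d(eta, 1, y)]       # partial_alpha eta_tilde

# gamma_{ab}(eta) = (a_a . d_b eta + d_a eta . a_b)/2  (the linear part)
gamma = np.array([[0.5*(a[i] @ de[j] + de[i] @ a[j]) for j in range(2)] for i in range(2)])

def metric(mapping, y):
    g = [d(mapping, 0, y), d(mapping, 1, y)]
    return np.array([[g[i] @ g[j] for j in range(2)] for i in range(2)])

A0 = metric(theta, y)
def remainder(t):
    At = metric(lambda y1, y2: theta(y1, y2) + t*eta(y1, y2), y)
    return (At - A0)/2 - t*gamma

# the remainder is quadratic: remainder(t)/t^2 is (numerically) independent of t
r1, r2 = remainder(1e-2)/1e-4, remainder(2e-2)/4e-4
assert np.allclose(r1, r2, atol=1e-4)
# and it matches (1/2) d_a eta . d_b eta
exact = 0.5*np.array([[de[i] @ de[j] for j in range(2)] for i in range(2)])
assert np.allclose(r1, exact, atol=1e-4)
```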
In this paper, we shall consider a specific class of surfaces, according to the following definition: Let $\omega$ be a domain in $\mathbb{R}^2$. Then a surface $\bm{\theta} (\overline{\omega})$ defined by means of an immersion $\bm{\theta} \in \mathcal{C}^2(\overline{\omega};\mathbb{E}^3)$ is said to be \emph{elliptic} if its Gaussian curvature $\kappa$ is everywhere strictly positive in $\overline{\omega}$, or equivalently, if there exists a constant $\kappa_0$ such that:
$$
0 < \kappa_0 \le \kappa (y), \text{ for all } y \in \overline{\omega}.
$$
It turns out that, when an \emph{elliptic surface} is subjected to a displacement field $\eta_i \bm{a}^i$ whose \emph{tangential covariant components $\eta_\alpha$ vanish on the entire boundary of the domain $\omega$}, the following inequality holds. Note that the components of the displacement fields and linearized change of metric tensors appearing in the next theorem are no longer assumed to be continuously differentiable functions; they are instead to be understood in a generalised sense, since they now belong to \emph{ad hoc} Lebesgue or Sobolev spaces.
\begin{theorem}
\label{korn}
Let $\omega$ be a domain in $\mathbb{R}^2$ and let an immersion $\bm{\theta} \in \mathcal{C}^3 (\overline{\omega}; \mathbb{E}^3)$ be given such that the surface $\bm{\theta}(\overline{\omega})$ is elliptic. Define the space
$$
\bm{V}_M (\omega) := H^1_0 (\omega) \times H^1_0 (\omega) \times L^2(\omega).
$$
Then, there exists a constant $c_0=c_0(\omega,\bm{\theta})>0$ such that
$$
\Big\{ \sum_\alpha \left\| \eta_\alpha \right\|^2_{H^1(\omega)} + \left\| \eta_3 \right\|^2_{L^2(\omega)}\Big\}^{1/2} \leq c_0 \Big\{ \sum_{\alpha, \beta} \left\| \gamma_{\alpha \beta}(\bm{\eta}) \right\|^2_{L^2(\omega)}\Big\}^{1/2}
$$
for all $\bm{\eta}= (\eta_i) \in \bm{V}_M (\omega)$.
\qed
\end{theorem}
The above inequality, which is due to \cite{CiaLods1996a} and \cite{CiaSanPan1996} (see also Theorem~2.7-3 of~\cite{Ciarlet2000}), constitutes an example of a \emph{Korn's inequality on a surface}, in the sense that it provides an estimate of an appropriate norm of a displacement field defined on a surface in terms of an appropriate norm of a specific ``measure of strain'' (here, the linearized change of metric tensor) corresponding to the displacement field under consideration.
\section{The three-dimensional obstacle problem for a ``general'' linearly elastic shell} \label{Sec:2}
\label{sec2}
Let $\omega$ be a domain in $\mathbb{R}^2$, let $\gamma:= \partial \omega$, and let $\gamma_0$ be a non-empty relatively open subset of $\gamma$. For each $\varepsilon > 0$, we define the sets
$$
\Omega^\varepsilon = \omega \times \left] - \varepsilon , \varepsilon \right[ \text{ and } \Gamma^\varepsilon_0 := \gamma_0 \times \left] - \varepsilon , \varepsilon \right[,
$$
we let $x^\varepsilon = (x^\varepsilon_i)$ designate a generic point in the set $\overline{\Omega^\varepsilon}$, and we let $\partial^\varepsilon_i := \partial / \partial x^\varepsilon_i$. Hence we also have $x^\varepsilon_\alpha = y_\alpha$ and $\partial^\varepsilon_\alpha = \partial_\alpha$.
Given an immersion $\bm{\theta} \in \mathcal{C}^3(\overline{\omega}; \mathbb{E}^3)$ and $\varepsilon > 0$, consider a \emph{shell} with \emph{middle surface} $\bm{\theta} (\overline{\omega})$ and with \emph{constant thickness} $2 \varepsilon$. This means that the \emph{reference configuration} of the shell is the set $\bm{\Theta} (\overline{\Omega^\varepsilon})$, where the mapping $\bm{\Theta} : \overline{\Omega^\varepsilon} \to \mathbb{E}^3$ is defined by
$$
\bm{\Theta} (x^\varepsilon) := \bm{\theta} (y) + x^\varepsilon_3 \bm{a}^3(y) \text{ at each point } x^\varepsilon = (y, x^\varepsilon_3) \in \overline{\Omega^\varepsilon}.
$$
One can then show (cf., e.g., Theorem~3.1-1 of~\cite{Ciarlet2000}) that, if $\varepsilon > 0$ is small enough, such a mapping $\bm{\Theta} \in \mathcal{C}^2(\overline{\Omega^\varepsilon}; \mathbb{E}^3)$ is an \emph{immersion}, in the sense that the three vectors
$$
\bm{g}^\varepsilon_i (x^\varepsilon) := \partial^\varepsilon_i \bm{\Theta} (x^\varepsilon)
$$
are linearly independent at each point $x^\varepsilon \in \overline{\Omega^\varepsilon}$; these vectors then constitute the \emph{covariant basis} at the point $\bm{\Theta} (x^\varepsilon)$, while the three vectors $\bm{g}^{j, \varepsilon} (x^\varepsilon)$ defined by the relations
$$
\bm{g}^{j, \varepsilon} (x^\varepsilon) \cdot \bm{g}^\varepsilon_i (x^\varepsilon) = \delta^j_i
$$
constitute the \emph{contravariant basis} at the same point. It will be implicitly assumed in the sequel that $\varepsilon > 0$ \emph{is small enough so that $\bm{\Theta} : \overline{\Omega^\varepsilon} \to \mathbb{E}^3$} is an \emph{immersion}.
One then defines the \emph{metric tensor associated with the immersion} $\bm{\Theta}$ by means of its \emph{covariant components}
$$
g^\varepsilon_{ij}:= \bm{g}^\varepsilon_i \cdot \bm{g}^\varepsilon_j \in \mathcal{C}^1(\overline{\Omega^\varepsilon}),
$$
or by means of its \emph{contravariant components}
$$
g^{ij, \varepsilon} := \bm{g}^{i, \varepsilon} \cdot \bm{g}^{j,\varepsilon} \in \mathcal{C}^1(\overline{\Omega^\varepsilon}).
$$
Note that the symmetric matrix field $(g^{ij, \varepsilon})$ is then the inverse of the positive-definite matrix field $(g^\varepsilon_{ij})$, that $\bm{g}^{j, \varepsilon} = g^{ij, \varepsilon} \bm{g}^\varepsilon_i$ and $\bm{g}^\varepsilon_i = g^\varepsilon_{ij} \bm{g}^{j, \varepsilon}$, and that the \emph{volume element} in $\bm{\Theta}(\overline{\Omega^\varepsilon})$ is given at each point $\bm{\Theta} (x^\varepsilon)$, $x^\varepsilon \in \overline{\Omega^\varepsilon}$, by $\sqrt{g^\varepsilon (x^\varepsilon)} \, \mathrm{d} x^\varepsilon$, where
$$
g^\varepsilon := \det (g^\varepsilon_{ij}) \in \mathcal{C}^1(\overline{\Omega^\varepsilon}).
$$
One also defines the \emph{Christoffel symbols} associated with the immersion $\bm{\Theta}$ by
$$
\Gamma^{p, \varepsilon}_{ij}:= \partial^\varepsilon_i \bm{g}^\varepsilon_j \cdot \bm{g}^{p, \varepsilon} = \Gamma^{p, \varepsilon}_{ji} \in \mathcal{C}^0(\overline{\Omega^\varepsilon}).
$$
Note that $\Gamma^{3,\varepsilon}_{\alpha 3} = \Gamma^{p, \varepsilon}_{33} = 0$.
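These identities can be verified numerically on a hypothetical example where the middle surface is a sphere patch, in which case $\bm{\Theta}(x) = (R + x_3)\,\bm{a}_3(y)$ up to the radial factor; the radius and sample point below are illustrative choices, and all derivatives are approximated by central differences.

```python
import numpy as np

R = 2.0  # hypothetical sphere radius
def Theta(x1, x2, x3):
    # for a sphere, a_3 is the outward unit normal n, so theta + x3 a_3 = (R + x3) n
    n = np.array([np.sin(x1)*np.cos(x2), np.sin(x1)*np.sin(x2), np.cos(x1)])
    return (R + x3) * n

def d(f, i, x, h=1e-5):
    e = np.zeros(3); e[i] = h
    return (f(*(np.array(x) + e)) - f(*(np.array(x) - e))) / (2*h)

x = (0.7, 0.3, 0.1)                                   # an arbitrary point of Omega^eps
g = np.array([d(Theta, i, x) for i in range(3)])      # rows: covariant basis g_i
G = np.linalg.inv(g @ g.T) @ g                        # rows: contravariant basis g^p

def Gamma(p, i, j):
    # Gamma^p_ij = partial_i g_j . g^p, via a nested central difference
    return d(lambda *z: d(Theta, j, z), i, x) @ G[p]

assert all(abs(Gamma(2, al, 2)) < 1e-5 for al in range(2))   # Gamma^3_{alpha 3} = 0
assert all(abs(Gamma(p, 2, 2)) < 1e-5 for p in range(3))     # Gamma^p_{33} = 0
```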
Given a vector field $\bm{v}^\varepsilon = (v^\varepsilon_i) \in \mathcal{C}^1 (\overline{\Omega^\varepsilon}; \mathbb{R}^3)$, the associated vector field
$$
\tilde{\bm{v}}^\varepsilon := v^\varepsilon_i \bm{g}^{i, \varepsilon}
$$
can be viewed as a \emph{displacement field} of the reference configuration $\bm{\Theta} (\overline{\Omega^\varepsilon})$ of the shell, thus defined by means of its \emph{covariant components} $v^ \varepsilon_i$ over the vectors $\bm{g}^{i, \varepsilon}$ of the contravariant bases in the reference configuration.
If the norms $\left\| v^\varepsilon_i \right\|_{\mathcal{C}^1 (\overline{\Omega^\varepsilon})}$ are small enough, the mapping $(\bm{\Theta} + v^\varepsilon_i \bm{g}^{i, \varepsilon})$ is also an immersion, so that one can also define the metric tensor of the \emph{deformed configuration} $(\bm{\Theta} + v^\varepsilon_i \bm{g}^{i, \varepsilon})(\overline{\Omega^\varepsilon})$ by means of its covariant components
\begin{align*}
g^\varepsilon_{ij} (v^\varepsilon) :=& (\bm{g}^\varepsilon_i + \partial^\varepsilon_i \tilde{\bm{v}}^\varepsilon ) \cdot (\bm{g}^\varepsilon_j + \partial^\varepsilon_j \tilde{\bm{v}}^\varepsilon) \\
=& g^\varepsilon_{ij} + \bm{g}^\varepsilon_i \cdot \partial^\varepsilon_j \tilde{\bm{v}}^\varepsilon + \partial^\varepsilon_i \tilde{\bm{v}}^\varepsilon \cdot \bm{g}^\varepsilon_j + \partial^\varepsilon_i \tilde{\bm{v}}^\varepsilon \cdot \partial^\varepsilon_j \tilde{\bm{v}}^\varepsilon.
\end{align*}
The linear part with respect to $\tilde{\bm{v}}^\varepsilon$ in the difference $(g^\varepsilon_{ij} (\bm{v}^\varepsilon) - g^\varepsilon_{ij})/2$ is then called the \emph{linearized strain tensor} associated with the displacement field $v^\varepsilon_i \bm{g}^{i, \varepsilon}$, the covariant components of which are thus defined by
$$
e^\varepsilon_{i\|j} (\bm{v}^\varepsilon) := \frac12 \left( \bm{g}^\varepsilon_i \cdot \partial^\varepsilon_j \tilde{\bm{v}}^\varepsilon + \partial^\varepsilon_i \tilde{\bm{v}}^\varepsilon \cdot \bm{g}^\varepsilon_j \right) = \frac12 (\partial^\varepsilon_j v^\varepsilon_i + \partial^\varepsilon_i v^\varepsilon_j) - \Gamma^{p, \varepsilon}_{ij} v^\varepsilon_p = e_{j\|i}^\varepsilon (\bm{v}^\varepsilon).
$$
The functions $e^\varepsilon_{i\|j} (\bm{v}^\varepsilon)$ are called the \emph{linearized strains in curvilinear coordinates} associated with the displacement field $v^\varepsilon_i \bm{g}^{i, \varepsilon}$.
We assume throughout this paper that, for each $\varepsilon > 0$, the reference configuration $\bm{\Theta} (\overline{\Omega^\varepsilon})$ of the shell is a \emph{natural state} (i.e., stress-free) and that the material constituting the shell is \emph{homogeneous}, \emph{isotropic}, and \emph{linearly elastic}. The behavior of such an elastic material is thus entirely governed by its two \emph{Lam\'{e} constants} $\lambda \geq 0$ and $\mu > 0$ (for details, see, e.g., Section~3.8 of~\cite{Ciarlet1988}).
We will also assume that the shell is subjected to \emph{applied body forces} whose density per unit volume is defined by means of its covariant components $f^{i, \varepsilon} \in L^2(\Omega^\varepsilon)$, and to a \emph{homogeneous boundary condition of place} along the portion $\Gamma^\varepsilon_0$ of its lateral face (i.e., the displacement vanishes on $\Gamma^\varepsilon_0$).
In this paper, we consider a specific \emph{obstacle problem} for such a shell, in the sense that the shell is also subjected to a \emph{confinement condition}, expressing that any \emph{admissible} deformed configuration remains in a \emph{half-space} of the form
$$
\mathbb{H} := \{\bm{Ox} \in \mathbb{E}^3; \, \bm{Ox} \cdot \bm{q} \geq 0\},
$$
where $\bm{q} \in\mathbb{E}^3$ is a \emph{non-zero vector} given once and for all. In other words, any admissible displacement field must satisfy
$$
\left(\bm{\Theta} (x^\varepsilon) + v^\varepsilon_i (x^\varepsilon) \bm{g}^{i,\varepsilon} (x^\varepsilon)\right) \cdot \bm{q} \ge 0
$$
for all $x^\varepsilon \in \overline{\Omega^\varepsilon}$, or possibly only for almost all (a.a. in what follows) $x^\varepsilon \in \Omega^\varepsilon$ when the covariant components $v^\varepsilon_i$ are required to belong to the Sobolev space $H^1(\Omega^\varepsilon)$ as in Theorem \ref{t:2} below.
We will of course assume that the reference configuration satisfies the confinement condition, i.e., that
$$
\bm{\Theta} (\overline{\Omega^\varepsilon}) \subset \mathbb{H}.
$$
It is to be emphasized that the above confinement condition \emph{considerably departs} from the usual \emph{Signorini condition} favoured by most authors, who usually require only that the points of the undeformed and deformed ``lower face'' $\omega \times \{-\varepsilon\}$ of the reference configuration satisfy the confinement condition (see, e.g., \cite{Leger2008}, \cite{LegMia2018}, \cite{MezChaBen2020}, \cite{Rodri2018}). Clearly, the confinement condition considered in the present paper is more physically realistic, since a Signorini condition imposed only on the lower face of the reference configuration does not prevent -- at least ``mathematically'' -- other points of the deformed configuration from ``crossing'' the plane $\{\bm{Ox}\in \mathbb{E}^3; \; \bm{Ox} \cdot \bm{q} = 0\}$ and ending up on the ``other side'' of this plane. The mathematical models characterized by this confinement condition, which is also considered in the seminal paper~\cite{Leger2008} in a different geometrical framework, do not take any traction forces into account: indeed, by Classical Mechanics, no traction forces can be applied to the portion of the boundary of the three-dimensional shell that engages contact with the obstacle. Friction is not considered in this analysis.
Unlike the classical \emph{Signorini condition}, the confinement condition considered here is more suitable in the context of multi-scale multi-body problems such as the study of the motion of the human heart valves, conducted by Quarteroni and his associates in~\cite{Quarteroni2021-3,Quarteroni2021-2,Quarteroni2021-1} and the references therein.
Such a confinement condition renders the study of this problem considerably more difficult, however, as the constraint now bears on a vector field, the displacement vector field of the reference configuration, instead of on only a single component of this field.
The mathematical modelling of such an \emph{obstacle problem for a linearly elastic shell} is then clear: apart from the confinement condition, the remaining ingredients, i.e., the \emph{function space} and the expression of the quadratic \emph{energy} $J^\varepsilon$, are classical (viz.~\cite{Ciarlet2000}). More specifically, let
$$
A^{ijk\ell, \varepsilon} := \lambda g^{ij, \varepsilon} g^{k\ell, \varepsilon} + \mu \left( g^{ik, \varepsilon} g^{j\ell, \varepsilon} + g^{i\ell, \varepsilon} g^{jk, \varepsilon} \right) =
A^{jik\ell, \varepsilon} = A^{k\ell ij, \varepsilon}
$$
denote the contravariant components of the \emph{elasticity tensor} of the elastic material constituting the shell. Then the unknown of the problem, which is the vector field $\bm{u}^\varepsilon = (u^\varepsilon_i)$ where the functions $u^\varepsilon_i : \overline{\Omega^\varepsilon} \to \mathbb{R}$ are the three covariant components of the unknown ``three-dimensional'' displacement vector field $u^\varepsilon_i \bm{g}^{i, \varepsilon}$ of the reference configuration of the shell, should minimize the energy $J^\varepsilon : \bm{H}^1(\Omega^\varepsilon) \to \mathbb{R}$ defined by
$$
J^\varepsilon (\bm{v}^\varepsilon) := \frac12 \int_{\Omega^\varepsilon} A^{ijk\ell, \varepsilon} e^\varepsilon_{k\| \ell} (\bm{v}^\varepsilon)e^\varepsilon_{i\|j} (\bm{v}^\varepsilon) \sqrt{g^\varepsilon} \, \mathrm{d} x^\varepsilon - \int_{\Omega^\varepsilon} f^{i, \varepsilon} v^\varepsilon_i \sqrt{g^\varepsilon} \, \mathrm{d} x^\varepsilon
$$
for each $\bm{v}^\varepsilon = (v^\varepsilon_i) \in \bm{H}^1(\Omega^\varepsilon)$
over the \emph{set of admissible displacements} defined by:
\begin{equation*}
\bm{U}(\Omega^\varepsilon):=\{ \bm{v}^\varepsilon=(v^\varepsilon_i) \in \bm{H}^1(\Omega^\varepsilon); \bm{v}^\varepsilon = \bm{0} \text{ on } \Gamma^\varepsilon_0 \textup{ and } (\bm{\Theta}(x^\varepsilon)+v^\varepsilon_i(x^\varepsilon) \bm{g}^{i,\varepsilon}(x^\varepsilon)) \cdot \bm{q} \ge 0 \textup{ for a.a. } x^\varepsilon \in \Omega^\varepsilon\}.
\end{equation*}
The solution to this \emph{minimization problem} exists and is unique, and it can also be characterized as the solution of a set of appropriate variational inequalities (cf.\ Theorem~2.1 of~\cite{CiaMarPie2018}).
\begin{theorem} \label{t:2}
The quadratic minimization problem: Find a vector field $\bm{u}^\varepsilon \in \bm{U} (\Omega^\varepsilon)$ such that
$$
J^\varepsilon (\bm{u}^\varepsilon) = \inf_{\bm{v}^\varepsilon \in \bm{U} (\Omega^\varepsilon)} J^\varepsilon (\bm{v}^\varepsilon)
$$
has one and only one solution. Besides, the vector field $\bm{u}^\varepsilon$ is also the unique solution of the variational problem $\mathcal{P} (\Omega^\varepsilon)$: Find $\bm{u}^\varepsilon \in \bm{U} (\Omega^\varepsilon)$ that satisfies the following variational inequalities:
$$
\int_{\Omega^\varepsilon}
A^{ijk\ell, \varepsilon} e^\varepsilon_{k\| \ell} (\bm{u}^\varepsilon)
\left( e^\varepsilon_{i\| j} (\bm{v}^\varepsilon) - e^\varepsilon_{i\| j} (\bm{u}^\varepsilon) \right) \sqrt{g^\varepsilon} \, \mathrm{d} x^\varepsilon \geq \int_{\Omega^\varepsilon} f^{i , \varepsilon} (v^\varepsilon_i - u^\varepsilon_i)\sqrt{g^\varepsilon} \, \mathrm{d} x^\varepsilon
$$
for all $\bm{v}^\varepsilon = (v^\varepsilon_i) \in \bm{U}(\Omega^\varepsilon)$.
\qed
\end{theorem}
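A finite-dimensional analogue of the equivalence asserted in Theorem~\ref{t:2} may clarify its structure (the data below are hypothetical and unrelated to any actual shell): one minimizes a coercive quadratic functional over the convex cone $\{v \geq 0\}$, which plays the role of $\bm{U}(\Omega^\varepsilon)$, and then checks the corresponding variational inequality numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n))
A = M @ M.T + n*np.eye(n)     # symmetric positive-definite ("coercive") matrix
f = rng.standard_normal(n)

# minimize J(v) = (1/2) v.Av - f.v over K = {v >= 0} by projected gradient descent
u = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2)
for _ in range(5000):
    u = np.maximum(u - step*(A @ u - f), 0.0)

# the minimizer satisfies the variational inequality Au.(v - u) >= f.(v - u) on K
for _ in range(100):
    v = np.abs(rng.standard_normal(n))
    assert (A @ u) @ (v - u) >= f @ (v - u) - 1e-7
```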
Since $\bm{\theta} (\overline{\omega}) \subset \bm{\Theta} (\overline{\Omega^\varepsilon})$, it evidently follows that $\bm{\theta} (y) \cdot \bm{q} \geq 0$ for all $y \in \overline{\omega}$. But in fact, a stronger property holds (cf., Lemma~2.1 of~\cite{CiaMarPie2018}, and see also~\cite{Pie-2022-jde} for a different approach to the asymptotic analysis):
\begin{lemma}
\label{lem0}
Let $\omega$ be a domain in $\mathbb{R}^2$, let $\bm{\theta} \in \mathcal{C}^1(\overline{\omega}; \mathbb{E}^3)$ be an immersion, let $\bm{q} \in \mathbb{E}^3$ be a non-zero vector, and let $\varepsilon > 0$. Then the inclusion
$$
\bm{\Theta} (\overline{\Omega^\varepsilon} ) \subset \mathbb{H} = \{ \bm{Ox} \in \mathbb{E}^3; \; \bm{Ox} \cdot \bm{q} \geq 0\}
$$
implies that
$$
\min_{y \in \overline{\omega}} (\bm{\theta} (y) \cdot \bm{q}) > 0.
$$
\qed
\end{lemma}
\section{The scaled three-dimensional problem for a family of linearly elastic elliptic membrane shells} \label{sec3}
In section~\ref{Sec:2}, we considered an obstacle problem for ``general'' linearly elastic shells. From now on, we will restrict ourselves to a specific class of shells, according to the following definition that was originally proposed in~\cite{Ciarlet1996} (see also \cite{Ciarlet2000}).
Consider a linearly elastic shell, subjected to the various assumptions set forth in section~\ref{Sec:2}. Such a shell is said to be a \emph{linearly elastic elliptic membrane shell} (from now on simply \emph{membrane shell}) if the following two additional assumptions are satisfied: \emph{first}, $\gamma_0 = \gamma$, i.e., the homogeneous boundary condition of place is imposed over the \emph{entire lateral face} $\gamma \times \left] - \varepsilon , \varepsilon \right[$ of the shell, and \emph{second}, its middle surface $\bm{\theta}(\overline{\omega})$ is \emph{elliptic}, according to the definition given in section \ref{sec1}.
In this paper, we consider the \emph{obstacle problem} (as defined in section~\ref{sec2}) \emph{for a family of membrane shells}, all sharing the \emph{same middle surface} and whose thickness $2 \varepsilon > 0$ is considered as a ``small'' parameter approaching zero. In order to conduct an asymptotic analysis of the three-dimensional model as $\varepsilon \to 0$, we resort to a (by now standard) methodology first proposed in~\cite{CiaDes1979}: To begin with, we ``scale'' each problem $\mathcal{P} (\Omega^\varepsilon), \, \varepsilon > 0$, over a \emph{fixed domain} $\Omega$, using appropriate \emph{scalings on the unknowns} and \emph{assumptions on the data}.
More specifically, let
$$
\Omega := \omega \times \left] - 1, 1 \right[ ,
$$
let $x = (x_i)$ denote a generic point in the set $\overline{\Omega}$, and let $\partial_i := \partial/ \partial x_i$. With each point $x = (x_i) \in \overline{\Omega}$, we associate the point $x^\varepsilon = (x^\varepsilon_i)$ defined by
$$
x^\varepsilon_\alpha := x_\alpha = y_\alpha \text{ and } x^\varepsilon_3 := \varepsilon x_3,
$$
so that $\partial^\varepsilon_\alpha = \partial_\alpha$ and $\partial^\varepsilon_3 = \varepsilon^{-1} \partial_3$. To the unknown $\bm{u}^\varepsilon = (u^\varepsilon_i)$ and to the vector fields $\bm{v}^\varepsilon = (v^\varepsilon_i)$ appearing in the formulation of the problem $\mathcal{P} (\Omega^\varepsilon)$ corresponding to a membrane shell, we then associate the \emph{scaled unknown} $\bm{u} (\varepsilon) = (u_i(\varepsilon))$ and the \emph{scaled vector fields} $\bm{v} = (v_i)$ by letting
$$
u_i (\varepsilon) (x) := u^\varepsilon_i (x^\varepsilon) \quad\text{ and }\quad v_i(x) := v^\varepsilon_i (x^\varepsilon)
$$
at each $x\in \overline{\Omega}$. Finally, we \emph{assume} that there exist functions $f^i \in L^2(\Omega)$ \emph{independent of} $\varepsilon$ such that the following \emph{assumptions on the data} hold
$$
f^{i, \varepsilon} (x^\varepsilon) = f^i(x) \text{ at each } x \in \Omega.
$$
Note that the assumption, made in Section~\ref{Sec:2} in the formulation of problem $\mathcal{P} (\Omega^\varepsilon)$, that the Lam\'{e} constants are independent of $\varepsilon$ implicitly constitutes another \emph{assumption on the data}.
The variational problem $\mathcal{P} (\varepsilon; \Omega)$ defined in the next theorem will constitute the point of departure of the asymptotic analysis performed in~\cite{CiaMarPie2018}.
\begin{theorem} \label{t:3}
For each $\varepsilon > 0$, define the set
\begin{align*}
\bm{U}(\varepsilon;\Omega) &:= \{\bm{v} = (v_i) \in \bm{H}^1(\Omega); \bm{v} = \bm{0} \textup{ on } \gamma \times \left] -1, 1 \right[ , \\
& \big(\bm{\theta} (y) + \varepsilon x_3 \bm{a}_3 (y) + v_i (x) \bm{g}^i(\varepsilon) (x)\big) \cdot \bm{q} \ge 0 \textup{ for a.a. } x = (y,x_3) \in \Omega \},
\end{align*}
where
$$
\bm{g}^i(\varepsilon ) (x) := \bm{g}^{i, \varepsilon} (x^\varepsilon) \textup{ at each } x\in \overline{\Omega}.
$$
Then the scaled unknown of the variational problem $\mathcal{P}(\Omega^\varepsilon)$ is the unique solution of the variational problem $\mathcal{P} (\varepsilon; \Omega)$: Find $\bm{u}(\varepsilon) \in \bm{U}(\varepsilon; \Omega)$ that satisfies the following variational inequalities:
\begin{equation*}
\int_\Omega A^{ijk\ell}(\varepsilon) e_{k\| \ell}(\varepsilon; \bm{u}(\varepsilon)) \left(e_{i\| j}(\varepsilon; \bm{v}) - e_{i\|j}(\varepsilon; \bm{u}(\varepsilon))\right) \sqrt{g(\varepsilon)} \, \mathrm{d} x
\ge \int_\Omega f^i (v_i - u_i(\varepsilon)) \sqrt{g(\varepsilon)} \, \mathrm{d} x,
\end{equation*}
for all $\bm{v} \in \bm{U}(\varepsilon;\Omega)$, where
\begin{align*}
g(\varepsilon)(x) &:= g^\varepsilon(x^\varepsilon)
\textup{ and } A^{ijk\ell}(\varepsilon) (x) :=
A^{ijk\ell, \varepsilon}(x^\varepsilon)
\textup{ at each } x\in \overline{\Omega}, \\
e_{\alpha \| \beta}(\varepsilon; \bm{v}) &:= \frac{1}{2}(\partial_\beta v_\alpha + \partial_\alpha v_\beta) - \Gamma^k_{\alpha \beta}(\varepsilon) v_k = e_{\beta \| \alpha}(\varepsilon;\bm{v}) , \\
e_{3\| \alpha}(\varepsilon; \bm{v}) &:= \frac12 \left(\frac{1}{\varepsilon} \partial_3 v_\alpha + \partial_\alpha v_3\right) - \Gamma^\sigma_{\alpha3}(\varepsilon) v_\sigma=e_{\alpha \| 3}(\varepsilon; \bm{v}),\\
e_{3\| 3}(\varepsilon;\bm{v}) &:= \frac{1}{\varepsilon} \partial_3 v_3,
\end{align*}
where
$$
\Gamma^p_{ij}(\varepsilon)(x) := \Gamma^{p,\varepsilon}_{ij}(x^\varepsilon) \textup{ at each } x\in \overline{\Omega}.
$$
\qed
\end{theorem}
The problem we are interested in is derived as a result of the rigorous asymptotic analysis conducted in Theorem~4.1 of~\cite{CiaMarPie2018}.
\begin{theorem}\label{t:4}
Let $\omega$ be a domain in $\mathbb{R}^2$, let $\bm{\theta} \in \mathcal{C}^3(\overline{\omega}; \mathbb{E}^3)$ be an immersion such that the surface $\bm{\theta} (\overline{\omega})$ is elliptic \textup{(cf.\, section \ref{sec1})}. Define the space and sets
\begin{align*}
\bm{V}_M (\omega) &:= H^1_0 (\omega) \times H^1_0 (\omega) \times L^2(\omega), \\
\bm{U}_M (\omega) &:= \{\bm{\eta} = (\eta_i) \in H^1_0 (\omega) \times H^1_0 (\omega) \times L^2(\omega); \big(\bm{\theta} (y) + \eta_i (y) \bm{a}^i(y)\big) \cdot \bm{q} \geq 0 \textup{ for a.a. } y \in \omega \}, \\
\tilde{\bm{U}}_M (\omega) &:= \{\bm{\eta} = (\eta_i) \in H^1_0 (\omega) \times H^1_0 (\omega) \times H^1_0(\omega); \big(\bm{\theta} (y) + \eta_i (y) \bm{a}^i(y)\big) \cdot \bm{q} \geq 0 \textup{ for a.a. } y \in \omega \},
\end{align*}
and assume that the immersion $\bm{\theta}$ is such that
$$
\tilde{d}:=\min_{y \in \overline{\omega}}(\bm{\theta}(y)\cdot\bm{q})>0
$$
(note that $\tilde{d}$ is independent of $\varepsilon$), and that the following ``density property'' holds:
$$
\tilde{\bm{U}}_M (\omega) \textup{ is dense in } \bm{U}_M(\omega) \textup{ with respect to the norm } \left\| \cdot \right\|_{H^1 (\omega) \times H^1(\omega) \times L^2(\omega)}.
$$
Let there be given a family of membrane shells with the same middle surface $\bm{\theta} (\overline{\omega})$ and thickness $2 \varepsilon > 0$, and let
\begin{align*}
\bm{u}(\varepsilon) &= (u_i(\varepsilon)) \in \bm{U} ( \varepsilon; \Omega) := \{ \bm{v} = (v_i) \in \bm{H}^1(\Omega) ; \; \bm{v} = \bm{0} \textup{ on } \gamma \times \left] -1, 1 \right[ , \\
& \big(\bm{\theta} (y) + \varepsilon x_3 \bm{a}_3 (y) + v_i (x) \bm{g}^i(\varepsilon) (x)\big) \cdot \bm{q} \geq 0 \textup{ for a.a. } x = (y, x_3 ) \in \Omega \}
\end{align*}
denote for each $\varepsilon > 0$ the unique solution of the corresponding problem $\mathcal{P} (\varepsilon; \Omega)$ introduced in Theorem~\ref{t:3}.
Then there exist functions $u_\alpha \in H^1(\Omega)$ independent of the variable $x_3$ and satisfying
$$
u_\alpha = 0 \textup{ on } \gamma \times \left]-1, 1\right[,
$$
and there exists a function $u_3 \in L^2(\Omega)$ independent of the variable $x_3$, such that
$$
u_\alpha (\varepsilon ) \to u_\alpha \textup{ in } H^1(\Omega) \textup{ and } u_3 (\varepsilon) \to u_3 \textup{ in } L^2(\Omega).
$$
Define the average
$$
\overline{\bm{u}} = (\overline{u}_i) := \frac12 \int^1_{-1} \bm{u} \, \mathrm{d} x_3 \in \bm{V}_M(\omega).
$$
Then
$$
\overline{\bm{u}} = \bm{\zeta},
$$
where $\bm{\zeta}$ is the unique solution of the two-dimensional variational problem $\mathcal{P}_M (\omega)$: Find $\bm{\zeta} \in \bm{U}_M(\omega)$ that satisfies the following variational inequalities
$$
\int_\omega a^{\alpha \beta \sigma \tau} \gamma_{\sigma \tau}(\bm{\zeta}) \gamma_{\alpha \beta} (\bm{\eta} - \bm{\zeta}) \sqrt{a} \, \mathrm{d} y \geq \int_\omega p^i (\eta_i - \zeta_i) \sqrt{a} \, \mathrm{d} y \quad\textup{ for all } \bm{\eta} = (\eta_i) \in \bm{U}_M(\omega),
$$
where
$$
a^{\alpha \beta \sigma \tau} := \frac{4\lambda \mu}{\lambda + 2 \mu} a^{\alpha \beta} a^{\sigma \tau} + 2\mu \left(a^{\alpha \sigma} a^{\beta \tau} + a^{\alpha \tau} a^{\beta \sigma}\right) \textup{ and } p^i := \int^1_{-1} f^i \, \mathrm{d} x_3.
$$
\qed
\end{theorem}
Note that it does not make sense to talk about the trace of $\zeta_3$ along $\gamma$, since $\zeta_3$ is \emph{a priori} only of class $L^2(\omega)$.
The loss of the homogeneous boundary condition for the transverse component of the solution of the limit model, a component which is \emph{a priori} only square-integrable, is \emph{compensated} by the appearance of a boundary layer for this component.
By proving that the solution enjoys a higher regularity, we will establish that the boundary condition for the transverse component of the solution can be \emph{restored} as well, in the sense that the trace of the transverse component along the boundary vanishes almost everywhere with respect to the arc-length measure along $\gamma$.
Critical to establishing the convergence of the family $\{\bm{u}(\varepsilon)\}_{\varepsilon > 0}$ is the ``density property'' assumed in Theorem~\ref{t:4}, which asserts that \emph{the set $\tilde{\bm{U}}_M (\omega)$ is dense in the set $\bm{U}_M(\omega)$ with respect to the norm $\left\| \cdot \right\|_{H^1(\omega) \times H^1(\omega) \times L^2(\omega)}$}. The same ``density property'' is used to provide a justification, via a rigorous asymptotic analysis, of Koiter's model for membrane shells subject to an obstacle (cf.\ \cite{CiaPie2018bCR}, \cite{CiaPie2018b}).
We hereby recall a sufficient \emph{geometric} condition ensuring the assumed ``density property'' (cf. Theorem~5.1 of~\cite{CiaMarPie2018}).
\begin{theorem}
\label{density}
Let $\bm{\theta} \in \mathcal{C}^2(\overline{\omega}; \mathbb{E}^3)$ be an immersion with the following property: There exists a non-zero vector $\bm{q} \in \mathbb{E}^3$ such that
\begin{equation*}
\min_{y \in \overline{\omega}} (\bm{\theta} (y) \cdot \bm{q}) > 0
\textup{ and }
\min_{y \in \overline{\omega}} (\bm{a}_3 (y) \cdot \bm{q}) > 0.
\end{equation*}
Define the sets
\begin{align*}
\bm{U}_M (\omega) := \{\bm{\eta} = &(\eta_i) \in H^1_0 (\omega) \times H^1_0 (\omega) \times L^2(\omega); \big(\bm{\theta} (y) + \eta_i (y) \bm{a}^i(y)\big) \cdot \bm{q} \geq 0 \textup{ for a.a. } y \in \omega \}, \\
\bm{U}_M(\omega) \cap \boldsymbol{\mathcal{D}} (\omega) := \{ \bm{\eta} =& (\eta_i ) \in \mathcal{D} (\omega) \times \mathcal{D} (\omega) \times \mathcal{D} (\omega); \big(\bm{\theta} (y) + \eta_i (y) \bm{a}^i(y)\big) \cdot \bm{q} \geq 0 \textup{ for a.a. } y \in \omega \}.
\end{align*}
Then the set $\bm{U}_M (\omega) \cap \boldsymbol{\mathcal{D}} (\omega)$ is dense in the set $\bm{U}_M(\omega)$ with respect to the norm $\left\| \cdot \right\|_{H^1(\omega) \times H^1(\omega) \times L^2(\omega)}$.
\qed
\end{theorem}
Examples of membrane shells satisfying the ``density property'' thus include those whose middle surface is a portion of an ellipsoid strictly contained in one of the open half-spaces containing two of its principal axes, the boundary of the half-space coinciding with the obstacle in this case.
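For such a middle surface, the two geometric conditions of Theorem~\ref{density} can be verified numerically on a hypothetical example: a cap of an ellipsoid with $\bm{q} = \bm{e}_3$, sampled on a grid (the semi-axes and the parameter range below are illustrative choices).

```python
import numpy as np

aa, bb, cc = 3.0, 2.0, 1.0            # hypothetical semi-axes of the ellipsoid
e3 = np.array([0.0, 0.0, 1.0])        # the vector q of the theorem

def theta(y1, y2):
    return np.array([aa*np.sin(y1)*np.cos(y2), bb*np.sin(y1)*np.sin(y2), cc*np.cos(y1)])

def a3(y1, y2, h=1e-6):
    # unit normal a_3 = (a_1 x a_2)/|a_1 x a_2|, with a_alpha by central differences
    a1 = (theta(y1 + h, y2) - theta(y1 - h, y2)) / (2*h)
    a2 = (theta(y1, y2 + h) - theta(y1, y2 - h)) / (2*h)
    n = np.cross(a1, a2)
    return n / np.linalg.norm(n)

# sample the closure of omega = ]0.2, 1.2[ x ]0, 2 pi[ (a cap avoiding the equator)
Y1 = np.linspace(0.2, 1.2, 40)
Y2 = np.linspace(0.0, 2*np.pi, 80)
m_theta = min(theta(y1, y2) @ e3 for y1 in Y1 for y2 in Y2)
m_a3 = min(a3(y1, y2) @ e3 for y1 in Y1 for y2 in Y2)

assert m_theta > 0 and m_a3 > 0       # both geometric conditions hold on this cap
```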
As a final step, we \emph{de-scale} Problem $\mathcal{P}_M(\omega)$ and obtain the following variational formulation (cf. Theorem~4.2 of~\cite{CiaMarPie2018}).
\begin{customprob}{$\mathcal{P}_M^\varepsilon(\omega)$}
\label{problem1}
Find $\bm{\zeta}^\varepsilon=(\zeta_i^\varepsilon) \in \bm{U}_M(\omega)$ satisfying the following variational inequalities:
\begin{equation*}
\varepsilon \int_\omega a^{\alpha \beta \sigma \tau} \gamma_{\sigma \tau}(\bm{\zeta}^\varepsilon) \gamma_{\alpha \beta} (\bm{\eta} - \bm{\zeta}^\varepsilon) \sqrt{a} \, \mathrm{d} y \geq \int_\omega p^{i,\varepsilon} (\eta_i - \zeta_i^\varepsilon) \sqrt{a} \, \mathrm{d} y,
\end{equation*}
for all $\bm{\eta} = (\eta_i) \in \bm{U}_M(\omega)$, where $p^{i,\varepsilon}:=\varepsilon\int_{-1}^{1} f^i \, \mathrm{d} x_3$.
\bqed
\end{customprob}
By virtue of the Korn inequality recalled in Theorem~\ref{korn}, Problem~\ref{problem1} admits a unique solution. Solving Problem~\ref{problem1} amounts to minimizing the energy functional $J^\varepsilon: H^1(\omega) \times H^1(\omega) \times L^2(\omega) \to \mathbb{R}$ defined by
\begin{equation*}
\label{Jeps}
J^\varepsilon(\bm{\eta}):=\dfrac{1}{2} \int_{\omega} a^{\alpha\beta\sigma\tau} \gamma_{\sigma\tau}(\bm{\eta}) \gamma_{\alpha\beta}(\bm{\eta})\sqrt{a} \, \mathrm{d} y-\int_{\omega} p^{i,\varepsilon} \eta_i \sqrt{a} \, \mathrm{d} y,
\end{equation*}
over all test functions $\bm{\eta}=(\eta_i) \in \bm{U}_M(\omega)$.
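The equivalence between variational inequalities over a convex set and constrained minimization of a quadratic energy can be previewed on a finite-dimensional analogue. In the sketch below, the matrix \texttt{A} and the vector \texttt{b} are hypothetical stand-ins for the membrane bilinear form and the load functional (they are not data from this paper), and the toy convex set is $\{x \ge 0\}$; the minimizer is computed by projected gradient descent.

```python
import numpy as np

# Hypothetical SPD matrix standing in for the membrane bilinear form,
# and a load vector b; the toy convex set is {x : x >= 0}.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([-1.0, 2.0])

# Projected gradient descent on J(x) = 0.5 x.A x - b.x:
# gradient step, then projection onto the constraint set.
x = np.zeros(2)
for _ in range(2000):
    x = np.maximum(x - 0.1 * (A @ x - b), 0.0)
```

The computed minimizer satisfies the discrete variational inequality $(Ax^\ast - b)\cdot(\eta - x^\ast) \ge 0$ for every feasible $\eta$, mirroring the role played by $\bm{\zeta}^\varepsilon$ in Problem~$\mathcal{P}_M^\varepsilon(\omega)$.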
\section{Approximation of the solution of Problem~$\mathcal{P}_M^\varepsilon(\omega)$ by penalization}
\label{sec:penalty}
Following~\cite{Scholz1984}, we first approximate the solution of Problem~\ref{problem1} by a penalty method. In this way, the geometrical constraint that the deformation must obey, which appears in the definition of the set $\bm{U}_M(\omega)$, is incorporated into the governing model in the form of a monotone term. As a consequence, the test vector fields are no longer sought in a non-empty, closed and convex subset of $\bm{V}_M(\omega)$, but in the whole space $\bm{V}_M(\omega)$, and the variational inequalities are replaced by a set of nonlinear equations, where the nonlinearity is monotone.
More precisely, define the operator $\bm{\beta}:\bm{L}^2(\omega) \to\bm{L}^2(\omega)$ by
\begin{equation*}
\bm{\beta}(\bm{\xi}):=\left(-\{(\bm{\theta}+\xi_j \bm{a}^j)\cdot\bm{q}\}^{-}\left(\dfrac{\bm{a}^i\cdot\bm{q}}{\sqrt{\sum_{\ell=1}^{3}|\bm{a}^\ell \cdot\bm{q}|^2}}\right)\right)_{i=1}^3,\quad\textup{ for all }\bm{\xi}=(\xi_i) \in \bm{L}^2(\omega),
\end{equation*}
This operator is associated with a penalization proportional to the extent to which the constraint is violated. Note that the denominator never vanishes, and that this fact is independent of the assumption $\min_{y \in \overline{\omega}}(\bm{a}^3\cdot\bm{q})>0$.
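To make the action of $\bm{\beta}$ concrete, the following sketch evaluates it pointwise at a single $y \in \omega$. The scalar \texttt{theta\_q} and the vector \texttt{a\_q} are hypothetical sample values of $\bm{\theta}(y)\cdot\bm{q}$ and of $\bm{a}^i(y)\cdot\bm{q}$, not data coming from an actual chart $\bm{\theta}$.

```python
import numpy as np

def beta_pointwise(theta_q, a_q, xi):
    """Pointwise value of the penalty operator beta at a point y.

    theta_q -- hypothetical value of theta(y) . q
    a_q     -- hypothetical values of a^i(y) . q, shape (3,)
    xi      -- covariant components xi_i(y), shape (3,)
    """
    gap = theta_q + xi @ a_q            # (theta + xi_j a^j) . q
    neg_part = max(-gap, 0.0)           # {.}^- : negative part
    # denominator sqrt(sum |a^l . q|^2) is the Euclidean norm of a_q
    return -neg_part * a_q / np.linalg.norm(a_q)
```

At a feasible point (non-negative gap) the penalty vanishes; otherwise it points opposite to the constraint direction, and the map is pointwise monotone and $1$-Lipschitz with respect to the gap, in line with Lemma~\ref{lem:beta}.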
Following the ideas of~\cite{PWDT3D}, we show that the operator $\bm{\beta}$ is monotone, bounded and non-expansive.
\begin{lemma}
\label{lem:beta}
Let $\bm{q} \in \mathbb{E}^3$ be a given unit-norm vector. Assume that $\min_{y \in \overline{\omega}}(\bm{a}^3(y)\cdot\bm{q})>0$.
Then, the operator $\bm{\beta}:\bm{L}^2(\omega) \to\bm{L}^2(\omega)$ defined by
\begin{equation*}
\bm{\beta}(\bm{\xi}):=\left(-\{(\bm{\theta}+\xi_j \bm{a}^j)\cdot\bm{q}\}^{-}\left(\dfrac{\bm{a}^i\cdot\bm{q}}{\sqrt{\sum_{\ell=1}^{3}|\bm{a}^\ell \cdot\bm{q}|^2}}\right)\right)_{i=1}^3,\quad\textup{ for all }\bm{\xi}=(\xi_i) \in \bm{L}^2(\omega),
\end{equation*}
is bounded, monotone and Lipschitz continuous with Lipschitz constant $L=1$.
\end{lemma}
\begin{proof}
Let $\bm{\xi}$ and $\bm{\eta}$ be arbitrarily given in $\bm{L}^2(\omega)$. Evaluating
\begin{align*}
&\int_{\omega} (\bm{\beta}(\bm{\xi})-\bm{\beta}(\bm{\eta}))\cdot(\bm{\xi}-\bm{\eta})\, \mathrm{d} y
=\int_{\omega} \left(\left[-\{(\bm{\theta}+\xi_j\bm{a}^j)\cdot\bm{q}\}^{-}\right] - \left[-\{(\bm{\theta}+\eta_j\bm{a}^j)\cdot\bm{q}\}^{-}\right]\right) \left(\dfrac{(\xi_i-\eta_i)\bm{a}^i\cdot\bm{q}}{\sqrt{\sum_{\ell=1}^{3}|\bm{a}^\ell \cdot\bm{q}|^2}}\right) \, \mathrm{d} y\\
&=\int_{\omega}\dfrac{\left|-\{(\bm{\theta}+\xi_j\bm{a}^j)\cdot\bm{q}\}^{-}\right|^2}{\sqrt{\sum_{\ell=1}^{3}|\bm{a}^\ell \cdot\bm{q}|^2}} \, \mathrm{d} y +\int_{\omega}\dfrac{\left|-\{(\bm{\theta}+\eta_j\bm{a}^j)\cdot\bm{q}\}^{-}\right|^2}{\sqrt{\sum_{\ell=1}^{3}|\bm{a}^\ell \cdot\bm{q}|^2}} \, \mathrm{d} y\\
&\quad+\int_{\omega}\dfrac{\left(-\{(\bm{\theta}+\xi_j\bm{a}^j)\cdot\bm{q}\}^{-}\right)}{\sqrt{\sum_{\ell=1}^{3}|\bm{a}^\ell \cdot\bm{q}|^2}} \left(-\{(\bm{\theta}+\eta_i\bm{a}^i)\cdot\bm{q}\}^{+}+\{(\bm{\theta}+\eta_i\bm{a}^i)\cdot\bm{q}\}^{-}\right)\, \mathrm{d} y\\
&\quad+\int_{\omega}\dfrac{\left(-\{(\bm{\theta}+\eta_j\bm{a}^j)\cdot\bm{q}\}^{-}\right)}{\sqrt{\sum_{\ell=1}^{3}|\bm{a}^\ell \cdot\bm{q}|^2}} \left(-\{(\bm{\theta}+\xi_i\bm{a}^i)\cdot\bm{q}\}^{+}+\{(\bm{\theta}+\xi_i\bm{a}^i)\cdot\bm{q}\}^{-}\right)\, \mathrm{d} y\\
&\ge \int_{\omega}\dfrac{\left|-\{(\bm{\theta}+\xi_j\bm{a}^j)\cdot\bm{q}\}^{-}\right|^2}{\sqrt{\sum_{\ell=1}^{3}|\bm{a}^\ell \cdot\bm{q}|^2}} \, \mathrm{d} y
+\int_{\omega}\dfrac{\left|-\{(\bm{\theta}+\eta_j\bm{a}^j)\cdot\bm{q}\}^{-}\right|^2}{\sqrt{\sum_{\ell=1}^{3}|\bm{a}^\ell \cdot\bm{q}|^2}} \, \mathrm{d} y
+\int_{\omega}\dfrac{\left(-\{(\bm{\theta}+\xi_j\bm{a}^j)\cdot\bm{q}\}^{-}\right)}{\sqrt{\sum_{\ell=1}^{3}|\bm{a}^\ell \cdot\bm{q}|^2}} \left(\{(\bm{\theta}+\eta_i\bm{a}^i)\cdot\bm{q}\}^{-}\right)\, \mathrm{d} y\\
&\quad+\int_{\omega}\dfrac{\left(-\{(\bm{\theta}+\eta_j\bm{a}^j)\cdot\bm{q}\}^{-}\right)}{\sqrt{\sum_{\ell=1}^{3}|\bm{a}^\ell \cdot\bm{q}|^2}} \left(\{(\bm{\theta}+\xi_i\bm{a}^i)\cdot\bm{q}\}^{-}\right)\, \mathrm{d} y\\
&=\int_{\omega}\dfrac{\left|\left(-\{(\bm{\theta}+\eta_j\bm{a}^j)\cdot\bm{q}\}^{-}\right) - \left(-\{(\bm{\theta}+\xi_j\bm{a}^j)\cdot\bm{q}\}^{-}\right)\right|^2}{\sqrt{\sum_{\ell=1}^{3}|\bm{a}^\ell \cdot\bm{q}|^2}}\, \mathrm{d} y\ge 0,
\end{align*}
proves the monotonicity of the operator $\bm{\beta}$.
To show that the operator $\bm{\beta}$ is bounded, we verify that it maps bounded sets of $\bm{L}^2(\omega)$ into bounded sets of $\bm{L}^2(\omega)$.
Let the set $\mathscr{F} \subset \bm{L}^2(\omega)$ be bounded. For each $\bm{\xi} \in \mathscr{F}$, we have that
\begin{align*}
&\|\bm{\beta}(\bm{\xi})\|_{\bm{L}^2(\omega)}=\left(\int_{\omega}\dfrac{|-\{(\bm{\theta}+\xi_j\bm{a}^j)\cdot\bm{q}\}^{-}|^2}{\sum_{\ell=1}^3|\bm{a}^\ell \cdot\bm{q}|^2}\sum_{i=1}^3|\bm{a}^i\cdot\bm{q}|^2 \, \mathrm{d} y\right)^{1/2}\\
&= \|-\{(\bm{\theta}+\xi_j\bm{a}^j)\cdot\bm{q}\}^{-}\|_{L^2(\omega)} \le \|\bm{\theta}\cdot\bm{q}\|_{L^2(\omega)}+\|\bm{\xi}\|_{\bm{L}^2(\omega)},
\end{align*}
and the sought boundedness is thus established, since $\bm{\theta} \in \mathcal{C}^3(\overline{\omega};\mathbb{E}^3)$ and $\mathscr{F}$ is bounded in $\bm{L}^2(\omega)$.
Finally, to establish the Lipschitz continuity, for all $\bm{\xi}$ and $\bm{\eta} \in \bm{L}^2(\omega)$, we evaluate $\|\bm{\beta}(\bm{\xi})-\bm{\beta}(\bm{\eta})\|_{\bm{L}^2(\omega)}$. We have that
\begin{equation*}
\begin{aligned}
&\|\bm{\beta}(\bm{\xi})-\bm{\beta}(\bm{\eta})\|_{\bm{L}^2(\omega)}=\left(\int_{\omega}\dfrac{1}{\sum_{\ell=1}^{3}|\bm{a}^\ell \cdot\bm{q}|^2}\left\{\left|[-\{(\bm{\theta}+\xi_i\bm{a}^i)\cdot\bm{q}\}^{-}] - [-\{(\bm{\theta}+\eta_j\bm{a}^j)\cdot\bm{q}\}^{-}] \right|^2 \left(\sum_{\ell=1}^{3}|\bm{a}^\ell \cdot\bm{q}|^2\right) \right\}\, \mathrm{d} y\right)^{1/2}\\
&=\left(\int_{\omega}\left|\left[-\{(\bm{\theta}+\xi_j\bm{a}^j)\cdot\bm{q}\}^{-}\right] - \left[-\{(\bm{\theta}+\eta_j\bm{a}^j)\cdot\bm{q}\}^{-}\right]\right|^2 \, \mathrm{d} y\right)^{1/2}\\
&=\left(\int_{\omega}\left|\dfrac{|(\bm{\theta}+\xi_j\bm{a}^j)\cdot\bm{q}|-(\bm{\theta}+\xi_j\bm{a}^j)\cdot\bm{q}}{2} - \dfrac{|(\bm{\theta}+\eta_j\bm{a}^j)\cdot\bm{q}|-(\bm{\theta}+\eta_j\bm{a}^j)\cdot\bm{q}}{2}\right|^2 \, \mathrm{d} y\right)^{1/2}\\
&=\left(\int_{\omega}\left|\dfrac{|(\bm{\theta}+\xi_j\bm{a}^j)\cdot\bm{q}|-|(\bm{\theta}+\eta_j\bm{a}^j)\cdot\bm{q}|}{2} - \dfrac{(\bm{\theta}+\xi_j\bm{a}^j)\cdot\bm{q}-(\bm{\theta}+\eta_j\bm{a}^j)\cdot\bm{q}}{2}\right|^2 \, \mathrm{d} y\right)^{1/2}\\
&\le\left(\int_{\omega}\left|\left|\dfrac{(\bm{\theta}+\xi_j\bm{a}^j)\cdot\bm{q}-(\bm{\theta}+\eta_j\bm{a}^j)\cdot\bm{q}}{2}\right| - \dfrac{(\bm{\theta}+\xi_j\bm{a}^j)\cdot\bm{q}-(\bm{\theta}+\eta_j\bm{a}^j)\cdot\bm{q}}{2}\right|^2 \, \mathrm{d} y\right)^{1/2}\\
&\le\left(\int_{\omega}\left(\left|\dfrac{(\bm{\theta}+\xi_j\bm{a}^j)\cdot\bm{q}-(\bm{\theta}+\eta_j\bm{a}^j)\cdot\bm{q}}{2}\right| + \left|\dfrac{(\bm{\theta}+\xi_j\bm{a}^j)\cdot\bm{q}-(\bm{\theta}+\eta_j\bm{a}^j)\cdot\bm{q}}{2}\right|\right)^2 \, \mathrm{d} y\right)^{1/2}\\
&\le \left\|(\bm{\theta}+\xi_j\bm{a}^j)\cdot\bm{q}-(\bm{\theta}+\eta_j\bm{a}^j)\cdot\bm{q}\right\|_{L^2(\omega)}\le \|\bm{\xi}-\bm{\eta}\|_{\bm{L}^2(\omega)},
\end{aligned}
\end{equation*}
and the sought Lipschitz continuity is thus established. Note in passing that the Lipschitz constant is $L=1$. This completes the proof.
\end{proof}
Let $\kappa>0$ denote a penalty parameter which is meant to approach zero. The penalized version of Problem~\ref{problem1} is formulated as follows.
\begin{customprob}{$\mathcal{P}_{M,\kappa}^\varepsilon(\omega)$}
\label{problem2}
Find $\bm{\zeta}^\varepsilon_\kappa=(\zeta^\varepsilon_{\kappa,i}) \in \bm{V}_M(\omega)$ satisfying the following variational equations:
\begin{equation*}
\varepsilon \int_\omega a^{\alpha \beta \sigma \tau} \gamma_{\sigma \tau}(\bm{\zeta}^\varepsilon_\kappa) \gamma_{\alpha \beta} (\bm{\eta}) \sqrt{a} \, \mathrm{d} y
+\dfrac{\varepsilon}{\kappa}\int_{\omega} \bm{\beta}(\bm{\zeta}^\varepsilon_\kappa) \cdot \bm{\eta} \, \mathrm{d} y
= \int_\omega p^{i,\varepsilon} \eta_i \sqrt{a} \, \mathrm{d} y,
\end{equation*}
for all $\bm{\eta} = (\eta_i) \in \bm{V}_M(\omega)$.
\bqed
\end{customprob}
The existence and uniqueness of solutions of Problem~\ref{problem2} can be established by resorting to the Minty--Browder theorem (cf., e.g., Theorem~9.14-1 of~\cite{PGCLNFAA}). For completeness, we present the proof of this existence and uniqueness result.
\begin{theorem}
\label{ex-un-kappa}
Let $\bm{q} \in\mathbb{E}^3$ be a given unit-norm vector. Assume that $\bm{\theta} \in \mathcal{C}^3(\overline{\omega};\mathbb{E}^3)$ is such that $\min_{y \in \overline{\omega}}(\bm{\theta}(y)\cdot\bm{q})>0$.
Then, for each $\kappa>0$ and $\varepsilon>0$, Problem~\ref{problem2} admits a unique solution. Moreover, the family of solutions $\{\bm{\zeta}^\varepsilon_\kappa\}_{\kappa>0}$ is bounded in $\bm{V}_M(\omega)$ independently of $\kappa$ and $\varepsilon$, and
$$
\bm{\zeta}^\varepsilon_\kappa \to \bm{\zeta}^\varepsilon,\quad\textup{ in }\bm{V}_M(\omega) \textup{ as }\kappa \to 0^+,
$$
where $\bm{\zeta}^\varepsilon$ is the solution of Problem~\ref{problem1}.
\end{theorem}
\begin{proof}
Let us define the operator $\bm{A}^\varepsilon:\bm{V}_M(\omega) \to \bm{V}'_M(\omega)$ by
\begin{equation*}
\langle \bm{A}^\varepsilon \bm{\xi},\bm{\eta}\rangle_{\bm{V}'_M(\omega), \bm{V}_M(\omega)}:=\varepsilon\int_{\omega} a^{\alpha\beta\sigma\tau} \gamma_{\sigma\tau}(\bm{\xi}) \gamma_{\alpha\beta}(\bm{\eta}) \sqrt{a} \, \mathrm{d} y.
\end{equation*}
We observe that the operator $\bm{A}^\varepsilon$ is linear, continuous and, thanks to Korn's inequality (Theorem~\ref{korn}), such that
\begin{equation}
\label{Aeps}
\langle \bm{A}^\varepsilon \bm{\xi} -\bm{A}^\varepsilon\bm{\eta},\bm{\xi}-\bm{\eta}\rangle_{\bm{V}'_M(\omega), \bm{V}_M(\omega)} \ge \varepsilon c \|\bm{\xi}-\bm{\eta}\|_{\bm{V}_M(\omega)}^2, \quad\textup{ for all }\bm{\xi}, \bm{\eta} \in \bm{V}_M(\omega),
\end{equation}
for some $c=c(\omega,\bm{\theta})>0$. Define the operator $\hat{\bm{\beta}}:\bm{V}_M(\omega) \to \bm{V}'_M(\omega)$ as the following composition
\begin{equation*}
\bm{V}_M(\omega) \hookrightarrow \bm{L}^2(\omega) \xrightarrow{\bm{\beta}} \bm{L}^2(\omega) \hookrightarrow \bm{V}'_M(\omega).
\end{equation*}
Thanks to the monotonicity of $\bm{\beta}$ established in Lemma~\ref{lem:beta}, we easily infer that $\hat{\bm{\beta}}$ is monotone.
Therefore, as a direct consequence of~\eqref{Aeps} and Lemma~\ref{lem:beta}, we can infer that the operator $(\bm{A}^\varepsilon+\hat{\bm{\beta}}):\bm{V}_M(\omega) \to \bm{V}'_M(\omega)$ is strictly monotone. To see this, observe that for all $\bm{\eta}$, $\bm{\xi} \in \bm{V}_M(\omega)$ with $\bm{\xi}\neq\bm{\eta}$, we have that
\begin{equation*}
\begin{aligned}
&\langle (\bm{A}^\varepsilon+\hat{\bm{\beta}}) \bm{\xi} -(\bm{A}^\varepsilon+\hat{\bm{\beta}})\bm{\eta},\bm{\xi}-\bm{\eta}\rangle_{\bm{V}'_M(\omega), \bm{V}_M(\omega)}\\
&= \langle \bm{A}^\varepsilon \bm{\xi} -\bm{A}^\varepsilon\bm{\eta},\bm{\xi}-\bm{\eta}\rangle_{\bm{V}'_M(\omega), \bm{V}_M(\omega)}\\
&\quad+\langle\hat{\bm{\beta}}(\bm{\xi})-\hat{\bm{\beta}}(\bm{\eta}),\bm{\xi}-\bm{\eta}\rangle_{\bm{V}'_M(\omega), \bm{V}_M(\omega)}\ge \varepsilon c \|\bm{\xi}-\bm{\eta}\|_{\bm{V}_M(\omega)}^2>0.
\end{aligned}
\end{equation*}
Similarly, we can establish the coerciveness of the operator $(\bm{A}^\varepsilon+\hat{\bm{\beta}})$. Indeed, we have that
\begin{equation*}
\dfrac{\langle (\bm{A}^\varepsilon+\hat{\bm{\beta}}) \bm{\eta},\bm{\eta}\rangle_{\bm{V}'_M(\omega), \bm{V}_M(\omega)}}{\|\bm{\eta}\|_{\bm{V}_M(\omega)}} =\dfrac{\langle\bm{A}^\varepsilon\bm{\eta},\bm{\eta}\rangle_{\bm{V}'_M(\omega), \bm{V}_M(\omega)}}{\|\bm{\eta}\|_{\bm{V}_M(\omega)}} +\dfrac{\langle \hat{\bm{\beta}}(\bm{\eta}),\bm{\eta}\rangle_{\bm{V}'_M(\omega), \bm{V}_M(\omega)}}{\|\bm{\eta}\|_{\bm{V}_M(\omega)}} \ge c\varepsilon \|\bm{\eta}\|_{\bm{V}_M(\omega)},
\end{equation*}
where the last inequality is obtained by combining~\eqref{Aeps}, Lemma~\ref{lem:beta} with the fact that $\bm{0} \in \bm{U}_M(\omega)$ or, equivalently, that $\bm{\beta}(\bm{0})=\bm{0}$ in $\bm{L}^2(\omega)$.
The continuity of the operator $\bm{A}^\varepsilon$ and the Lipschitz continuity of the operator $\bm{\beta}$ established in Lemma~\ref{lem:beta} in turn give that the operator $(\bm{A}^\varepsilon+\hat{\bm{\beta}})$ is hemicontinuous, and we are thus in a position to apply the Minty--Browder theorem (cf., e.g., Theorem~9.14-1 of~\cite{PGCLNFAA}) to establish that Problem~\ref{problem2} admits a unique solution $\bm{\zeta}^\varepsilon_\kappa \in \bm{V}_M(\omega)$.
Observe that the assumption $\min_{y\in\overline{\omega}} (\bm{\theta}(y)\cdot\bm{q})>0$ implies:
\begin{equation}
\label{beta-2}
\begin{aligned}
&\int_{\omega} \bm{\beta}(\bm{\zeta}^\varepsilon_\kappa) \cdot\bm{\zeta}^\varepsilon_\kappa \, \mathrm{d} y
=\int_{\omega}\dfrac{1}{\sqrt{\sum_{\ell=1}^{3}|\bm{a}^\ell \cdot\bm{q}|^2}} \left(-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}\right) (\zeta^\varepsilon_{\kappa,i}\bm{a}^i\cdot\bm{q}) \, \mathrm{d} y\\
&=\int_{\omega}\dfrac{1}{\sqrt{\sum_{\ell=1}^{3}|\bm{a}^\ell \cdot\bm{q}|^2}} \left(-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}\right) ((\bm{\theta}+\zeta^\varepsilon_{\kappa,i}\bm{a}^i)\cdot\bm{q}) \, \mathrm{d} y\\
&\quad-\int_{\omega}\dfrac{1}{\sqrt{\sum_{\ell=1}^{3}|\bm{a}^\ell \cdot\bm{q}|^2}} \left(-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}\right) (\bm{\theta}\cdot\bm{q}) \, \mathrm{d} y\\
&\ge \int_{\omega}\dfrac{1}{\sqrt{\sum_{\ell=1}^{3}|\bm{a}^\ell \cdot\bm{q}|^2}} |-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,i}\bm{a}^i)\cdot \bm{q}\}^{-}|^2 \, \mathrm{d} y.
\end{aligned}
\end{equation}
Furthermore, if we specialize $\bm{\eta}=\bm{\zeta}^\varepsilon_\kappa$ in the variational equations of Problem~\ref{problem2}, an application of Korn's inequality (Theorem~\ref{korn}), the monotonicity of $\bm{\beta}$ (Lemma~\ref{lem:beta}), the strict positivity and boundedness of $a$ (Theorem~3.1-1 of~\cite{Ciarlet2000}), the uniform positive definiteness of the fourth order two-dimensional elasticity tensor $(a^{\alpha\beta\sigma\tau})$ (Theorem~3.3-2 of~\cite{Ciarlet2000}), and the fact that $\bm{0} \in \bm{U}_M(\omega)$ or, equivalently, that $\bm{\beta}(\bm{0})=\bm{0}$ in $\bm{L}^2(\omega)$ give:
\begin{equation*}
\begin{aligned}
\dfrac{\varepsilon\sqrt{a_0}}{c_0 c_e}\|\bm{\zeta}^\varepsilon_\kappa\|_{\bm{V}_M(\omega)}^2
&\le \varepsilon\int_{\omega} a^{\alpha\beta\sigma\tau} \gamma_{\sigma\tau}(\bm{\zeta}^\varepsilon_\kappa) \gamma_{\alpha \beta}(\bm{\zeta}^\varepsilon_\kappa) \sqrt{a} \, \mathrm{d} y +\dfrac{\varepsilon}{\kappa} \int_{\omega} \bm{\beta}(\bm{\zeta}^\varepsilon_\kappa) \cdot\bm{\zeta}^\varepsilon_\kappa \, \mathrm{d} y\\
&\le \|\bm{p}^\varepsilon\|_{\bm{L}^2(\omega)} \|\bm{\zeta}^\varepsilon_\kappa\|_{\bm{V}_M(\omega)} \sqrt{a_1}
=\varepsilon \sqrt{a_1}\|\bm{p}\|_{\bm{L}^2(\omega)} \|\bm{\zeta}^\varepsilon_\kappa\|_{\bm{V}_M(\omega)}.
\end{aligned}
\end{equation*}
Note that the last equality holds thanks to the definition of $\bm{p}=(p^i)$ and $\bm{p}^\varepsilon=(p^{i,\varepsilon})$ introduced, respectively, in Theorem~\ref{t:4} and Problem~\ref{problem1}.
By virtue of the definition of $p^{i,\varepsilon}$ and the assumptions on the data stated at the beginning of section~\ref{sec3}, we get that $\|\bm{\zeta}^\varepsilon_\kappa\|_{\bm{V}_M(\omega)}$ is bounded independently of $\kappa$ and $\varepsilon$. Therefore, by the Banach--Eberlein--Smulian theorem (cf., e.g., Theorem~5.14-4 of~\cite{PGCLNFAA}), we can extract a subsequence, still denoted by $\{\bm{\zeta}^\varepsilon_\kappa\}_{\kappa>0}$, such that
\begin{equation}
\label{beta-1}
\bm{\zeta}^\varepsilon_\kappa \rightharpoonup \bm{\zeta}^\varepsilon, \quad\textup{ in } \bm{V}_M(\omega) \textup{ as } \kappa\to0^+.
\end{equation}
Specializing $\bm{\eta}=\bm{\zeta}^\varepsilon_\kappa$ in the variational equations of Problem~\ref{problem2} and applying~\eqref{beta-1} and~\eqref{beta-2} give that
\begin{equation}
\label{beta-2.5}
\dfrac{\left(3\max\{\|\bm{a}^j \cdot \bm{q}\|_{\mathcal{C}^0(\overline{\omega})}^2;1\le j\le 3\}\right)^{-1/2}}{\kappa}\|-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}\|_{L^2(\omega)}^2
\le\dfrac{1}{\kappa}\int_{\omega} \bm{\beta}(\bm{\zeta}^\varepsilon_\kappa) \cdot\bm{\zeta}^\varepsilon_\kappa \, \mathrm{d} y\le C,
\end{equation}
for some $C>0$ independent of $\varepsilon$ and $\kappa$. Therefore, an application of the Banach--Eberlein--Smulian theorem and~\eqref{beta-2.5} gives that
\begin{equation}
\label{beta-3}
\bm{\beta}(\bm{\zeta}^\varepsilon_\kappa) \to \bm{0},\quad\textup{ in }\bm{L}^2(\omega) \textup{ as }\kappa\to0^+,
\end{equation}
and that
\begin{equation}
\label{beta-4}
\langle \hat{\bm{\beta}}(\bm{\zeta}^\varepsilon_\kappa),\bm{\zeta}^\varepsilon_\kappa\rangle_{\bm{V}'_M(\omega), \bm{V}_M(\omega)} \to 0,\quad\textup{ as }\kappa\to0^+.
\end{equation}
Therefore, the monotonicity of $\hat{\bm{\beta}}$ (which is a direct consequence of Lemma~\ref{lem:beta}) and the properties established in~\eqref{beta-1}, \eqref{beta-3} and~\eqref{beta-4} give that $\hat{\bm{\beta}}(\bm{\zeta}^\varepsilon)=\bm{0}$, so that $\bm{\zeta}^\varepsilon \in \bm{U}_M(\omega)$.
Observe that the monotonicity of $\bm{\beta}$ (viz. Lemma~\ref{lem:beta}), the properties of $\bm{\zeta}^\varepsilon_\kappa$, the continuity of the components $\gamma_{\alpha \beta}$ of the linearized change of metric tensor, the definition of $\bm{p}^\varepsilon$ (Theorem~\ref{t:4}), the boundedness of $\bm{\zeta}^\varepsilon$ independently of $\varepsilon$ (Theorem~\ref{t:4}), and the weak convergence~\eqref{beta-1} give
\begin{align*}
\|\bm{\zeta}^\varepsilon_\kappa - \bm{\zeta}^\varepsilon\|_{\bm{V}_M(\omega)}^2 &\le
\dfrac{c_0 c_e}{\sqrt{a_0}}\int_{\omega} a^{\alpha\beta\sigma\tau}\gamma_{\sigma\tau}(\bm{\zeta}^\varepsilon_\kappa - \bm{\zeta}^\varepsilon)\gamma_{\alpha \beta}(\bm{\zeta}^\varepsilon_\kappa - \bm{\zeta}^\varepsilon)\sqrt{a} \, \mathrm{d} y\\
&=-\dfrac{c_0 c_e}{\kappa\sqrt{a_0}}\int_{\omega} \bm{\beta}(\bm{\zeta}^\varepsilon_\kappa) \cdot (\bm{\zeta}^\varepsilon_\kappa - \bm{\zeta}^\varepsilon) \, \mathrm{d} y\\
&\quad+\dfrac{c_0 c_e}{\varepsilon\sqrt{a_0}} \int_{\omega} p^{i, \varepsilon} (\zeta^\varepsilon_{\kappa,i}-\zeta^\varepsilon_i) \sqrt{a} \, \mathrm{d} y\\
&\quad-\dfrac{c_0 c_e}{\sqrt{a_0}}\int_{\omega} a^{\alpha\beta\sigma\tau}\gamma_{\sigma\tau}(\bm{\zeta}^\varepsilon)\gamma_{\alpha \beta}(\bm{\zeta}^\varepsilon_\kappa - \bm{\zeta}^\varepsilon)\sqrt{a} \, \mathrm{d} y\\
&\le \dfrac{c_0 c_e}{\varepsilon\sqrt{a_0}} \int_{\omega} p^{i, \varepsilon} (\zeta^\varepsilon_{\kappa,i}-\zeta^\varepsilon_i) \sqrt{a} \, \mathrm{d} y\\
&\quad-\dfrac{c_0 c_e}{\sqrt{a_0}}\int_{\omega} a^{\alpha\beta\sigma\tau}\gamma_{\sigma\tau}(\bm{\zeta}^\varepsilon)\gamma_{\alpha \beta}(\bm{\zeta}^\varepsilon_\kappa - \bm{\zeta}^\varepsilon)\sqrt{a} \, \mathrm{d} y\\
&=\dfrac{c_0 c_e}{\sqrt{a_0}} \int_{\omega} p^i (\zeta^\varepsilon_{\kappa,i}-\zeta^\varepsilon_i) \sqrt{a} \, \mathrm{d} y\\
&\quad-\dfrac{c_0 c_e}{\sqrt{a_0}}\int_{\omega} a^{\alpha\beta\sigma\tau}\gamma_{\sigma\tau}(\bm{\zeta}^\varepsilon)\gamma_{\alpha \beta}(\bm{\zeta}^\varepsilon_\kappa - \bm{\zeta}^\varepsilon)\sqrt{a} \, \mathrm{d} y\to 0,
\end{align*}
as $\kappa \to 0^+$. Observe that the latter term is bounded independently of $\varepsilon$ and $\kappa$. In conclusion, we have been able to establish the strong convergence:
\begin{equation}
\label{beta-5}
\bm{\zeta}^\varepsilon_\kappa \to \bm{\zeta}^\varepsilon,\quad\textup{ in } \bm{V}_M(\omega) \textup{ as } \kappa \to0^+.
\end{equation}
Specializing $(\bm{\eta}-\bm{\zeta}^\varepsilon_\kappa)\in\bm{V}_M(\omega)$ in the variational equations of Problem~\ref{problem2}, with $\bm{\eta}\in\bm{U}_M(\omega)$, the monotonicity of $\bm{\beta}$, the convergence~\eqref{beta-3} and the convergence~\eqref{beta-5} immediately give that the limit $\bm{\zeta}^\varepsilon$ satisfies the variational inequalities in Problem~\ref{problem1}. This completes the proof.
\end{proof}
We observe that in the proof of Theorem~\ref{ex-un-kappa} we established that $\|\bm{\zeta}^\varepsilon_\kappa-\bm{\zeta}^\varepsilon\|_{\bm{V}_M(\omega)}$ converges to zero as $\kappa\to0^+$. For the purpose of constructing a convergent numerical scheme for approximating the solution of the variational inequalities in Problem~\ref{problem1}, we need to establish \emph{how fast} this norm converges to zero as $\kappa\to0^+$.
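The expected behaviour can be previewed on a scalar toy analogue (not the shell problem itself). For the constraint $x \ge 0$, the term $\kappa^{-1}\min(x,0)$ plays the role of the penalty operator $\kappa^{-1}\bm{\beta}$, and the penalized solutions converge to the constrained minimizer at the linear rate $O(\kappa)$:

```python
def penalized_solution(a, b, kappa):
    """Solve a*x + (1/kappa)*min(x, 0) = b, the penalized form of the
    scalar obstacle problem: minimize 0.5*a*x**2 - b*x subject to x >= 0.
    Here min(x, 0) plays the role of the penalty operator beta.
    """
    x_free = b / a                       # unconstrained minimizer
    if x_free >= 0.0:                    # constraint inactive: penalty is zero
        return x_free
    return b / (a + 1.0 / kappa)         # constraint active: x < 0 branch

a, b = 2.0, -1.0                         # exact constrained solution is x = 0
errors = [abs(penalized_solution(a, b, k)) for k in (1e-1, 1e-2, 1e-3)]
```

In this toy setting $|x_\kappa| = |b|\,\kappa/(a\kappa+1) \le \kappa|b|$, i.e., the error is $O(\kappa)$; the sections below establish an analogous quantitative rate for the shell problem.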
In order to establish this property, we need to prove a preparatory result concerning the augmentation of regularity of the solution of Problem~\ref{problem2}, by resorting to the finite difference quotient approach originally proposed by Agmon, Douglis \& Nirenberg~\cite{AgmDouNir1959,AgmDouNir1964}, as well as the approach proposed by Frehse~\cite{Frehse1971} for variational inequalities, which was later generalized in~\cite{Pie-2022-interior,Pie2020-1}.
Recalling that $\bm{\zeta}^\varepsilon_\kappa$ denotes the solution of Problem~\ref{problem2}, in the same spirit as Theorem~4.5-1(b) of~\cite{Ciarlet2000} we define
$$
n^{\alpha\beta,\varepsilon}_\kappa:=\varepsilon a^{\alpha\beta\sigma\tau}\gamma_{\sigma\tau}(\bm{\zeta}^\varepsilon_\kappa),
$$
and we also define
$$
n^{\alpha\beta,\varepsilon}_\kappa|_{\sigma}:=\partial_\sigma n^{\alpha\beta,\varepsilon}_\kappa+\Gamma^\alpha_{\sigma\tau}n^{\beta\tau,\varepsilon}_\kappa+\Gamma^\beta_{\sigma\tau}n^{\alpha\tau,\varepsilon}_\kappa.
$$
If the solution $\bm{\zeta}^\varepsilon_\kappa$ of Problem~\ref{problem2} is smooth enough, then it is immediate to see that it satisfies the following boundary value problem:
\begin{equation}
\label{BVP}
\begin{cases}
-n^{\alpha\beta,\varepsilon}_\kappa|_{\beta}+\dfrac{1}{\kappa\sqrt{a}}\beta_\alpha(\bm{\zeta}^\varepsilon_\kappa)&=p^{\alpha,\varepsilon},\textup{ in }\omega,\\
-b_{\alpha\beta}n^{\alpha\beta,\varepsilon}_\kappa+\dfrac{1}{\kappa\sqrt{a}}\beta_3(\bm{\zeta}^\varepsilon_\kappa)&=p^{3,\varepsilon},\textup{ in }\omega,\\
\zeta^\varepsilon_{\kappa,\alpha}=0,\textup{ on }\gamma.
\end{cases}
\end{equation}
\section{Augmentation of the regularity of the solution of Problem~\ref{problem2}}
\label{sec:aug-interior}
Let $\omega_0\subset \omega$ and $\omega_1 \subset \omega$ be such that
\begin{equation}
\label{sets}
\omega_1 \subset\subset \omega_0 \subset \subset \omega.
\end{equation}
Let $\varphi_1 \in \mathcal{D}(\omega)$ be such that
$$
\text{supp }\varphi_1 \subset\subset \omega_1 \textup{ and } 0\le \varphi_1 \le 1.
$$
By the definition of the symbol $\subset\subset$ in~\eqref{sets}, we obtain that the quantity
\begin{equation}
\label{d}
d=d(\varphi_1):=\dfrac{1}{2}\min\{\textup{dist}(\partial\omega_1,\partial\omega_0),\textup{dist}(\partial\omega_0,\gamma),\textup{dist}(\textup{supp }\varphi_1,\partial\omega_1)\}
\end{equation}
is strictly greater than zero.
Denote by $D_{\rho h}$ the first order (forward) finite difference quotient of either a function or a vector field in the canonical direction $\bm{e}_\rho$ of $\mathbb{R}^2$ and with increment size $0<h<d$ sufficiently small. We can regard the first order (forward) finite difference quotient of a function as a linear operator defined as follows:
$$
D_{\rho h}: L^2(\omega) \to L^2(\omega_0).
$$
The first order finite difference quotient of a function $\xi$ in the canonical direction $\bm{e}_\rho$ of $\mathbb{R}^2$ and with increment size $0<h<d$ is defined by:
$$
D_{\rho h}\xi(y):=\dfrac{\xi(y+h\bm{e}_\rho)-\xi(y)}{h},
$$
for all (or, possibly, a.a.) $y\in\omega$ such that $(y+h\bm{e}_\rho)\in\omega$.
The first order finite difference quotient of a vector field $\bm{\xi}=(\xi_i)$ in the canonical direction $\bm{e}_\rho$ of $\mathbb{R}^2$ and with increment size $0<h<d$ is defined by
$$
D_{\rho h}\bm{\xi}(y):=\dfrac{\bm{\xi}(y+h\bm{e}_\rho)-\bm{\xi}(y)}{h},
$$
or, equivalently,
$$
D_{\rho h}\bm{\xi}(y)=(D_{\rho h}\xi_i(y)).
$$
Similarly, we can show that the first order (forward) finite difference quotient of a vector field is a linear operator from $\bm{L}^2(\omega)$ to $\bm{L}^2(\omega_0)$.
We define the second order finite difference quotient of a function $\xi$ in the canonical direction $\bm{e}_\rho$ of $\mathbb{R}^2$ and with increment size $0<h<d$ by
$$
\delta_{\rho h}\xi(y):=\dfrac{\xi(y+h \bm{e}_\rho)-2 \xi(y)+\xi(y-h \bm{e}_\rho)}{h^2},
$$
for all (or, possibly, a.a.) $y \in \omega$ such that $(y\pm h\bm{e}_\rho) \in \omega$.
The second order finite difference quotient of a vector field $\bm{\xi}=(\xi_i)$ in the canonical direction $\bm{e}_\rho$ of $\mathbb{R}^2$ and with increment size $0<h<d$ is defined by
$$
\delta_{\rho h}\bm{\xi}(y):=\left(\dfrac{\xi_i(y+h \bm{e}_\rho)-2 \xi_i(y)+\xi_i(y-h \bm{e}_\rho)}{h^2}\right),
$$
for all (or, possibly, a.a.) $y \in \omega$ such that $(y\pm h\bm{e}_\rho) \in \omega$.
Define, following page~293 of~\cite{Evans2010}, the mapping $D_{-\rho h}:L^2(\omega) \to L^2(\omega_0)$ by
$$
D_{-\rho h}\xi(y):=\dfrac{\xi(y)-\xi(y-h\bm{e}_\rho)}{h},
$$
as well as the mapping $D_{-\rho h}:\bm{L}^2(\omega) \to \bm{L}^2(\omega_0)$ by
$$
D_{-\rho h}\bm{\xi}(y):=\dfrac{\bm{\xi}(y)-\bm{\xi}(y-h\bm{e}_\rho)}{h}.
$$
Note in passing that the second order finite difference quotient of a function $\xi$ can be expressed in terms of the first order finite difference quotient via the following identity:
\begin{equation*}
\label{ide}
\delta_{\rho h} \xi=D_{-\rho h} D_{\rho h} \xi.
\end{equation*}
Similarly, the second order finite difference quotient of a vector field $\bm{\xi}=(\xi_i)$ can be expressed in terms of the first order finite difference quotient via the following identity:
\begin{equation*}
\label{ide2}
\delta_{\rho h} \bm{\xi}=D_{-\rho h} D_{\rho h} \bm{\xi}.
\end{equation*}
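The identity $\delta_{\rho h}\,\xi = D_{-\rho h} D_{\rho h}\,\xi$ can be checked numerically on a one-dimensional grid; the sample field below is an arbitrary smooth function chosen only for illustration.

```python
import numpy as np

h = 0.01
y = np.arange(0.0, 1.0 + h / 2, h)
xi = np.sin(3.0 * y)                     # arbitrary smooth sample field

# First order forward quotient D_{rho h}: defined at nodes 0..n-2.
D_plus_xi = (xi[1:] - xi[:-1]) / h

# Backward quotient D_{-rho h} applied to D_plus_xi, at nodes 1..n-2.
composed = (D_plus_xi[1:] - D_plus_xi[:-1]) / h

# Second order quotient delta_{rho h}, also at nodes 1..n-2.
delta_xi = (xi[2:] - 2.0 * xi[1:-1] + xi[:-2]) / h**2
```

The two interior arrays agree node by node (up to floating-point rounding), and $\delta_{\rho h}\,\xi$ approximates $\xi'' = -9\sin(3y)$ to second order in $h$.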
Let us define the translation operator $E$ in the canonical direction $\bm{e}_\rho$ of $\mathbb{R}^2$ and with increment size $0<h<d$ for a smooth enough function $v:\omega_0 \to \mathbb{R}$ by
\begin{align*}
E_{\rho h} v(y)&:=v(y+h \bm{e}_\rho),\\
E_{-\rho h} v(y)&:=v(y-h \bm{e}_\rho).
\end{align*}
Moreover, the following identities can easily be checked (cf.\,\cite{Frehse1971} and~\cite{Pie2020-1}):
\begin{align}
D_{\rho h}(v w)&=(E_{\rho h} w) (D_{\rho h} v)+v D_{\rho h} w, \label{D+}\\
D_{-\rho h}(v w)&=(E_{-\rho h} w )(D_{-\rho h} v)+v D_{-\rho h} w, \label{D-}\\
\delta_{\rho h}(vw)&=w \delta_{\rho h} v+(D_{\rho h}w )(D_{\rho h} v) +(D_{-\rho h}w)( D_{-\rho h} v)+v\delta_{\rho h}w.\label{delta+}
\end{align}
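Identities \eqref{D+} and \eqref{delta+} are exact (not merely asymptotic in $h$), as the following grid computation illustrates for two arbitrary smooth sample factors:

```python
import numpy as np

h = 0.05
y = np.arange(-1.0, 1.0 + h / 2, h)
v = np.cos(2.0 * y)                      # arbitrary smooth sample factors
w = y**2 + 1.0

mid = lambda f: f[1:-1]                  # value at the current node
Ep  = lambda f: f[2:]                    # translation E_{rho h}
Dp  = lambda f: (f[2:] - f[1:-1]) / h    # D_{rho h} at interior nodes
Dm  = lambda f: (f[1:-1] - f[:-2]) / h   # D_{-rho h} at interior nodes
dd  = lambda f: (f[2:] - 2.0 * f[1:-1] + f[:-2]) / h**2   # delta_{rho h}

lhs_D = Dp(v * w)
rhs_D = Ep(w) * Dp(v) + mid(v) * Dp(w)                      # identity (D+)

lhs_d = dd(v * w)
rhs_d = (mid(w) * dd(v) + Dp(w) * Dp(v)
         + Dm(w) * Dm(v) + mid(v) * dd(w))                  # identity (delta+)
```

Expanding the numerators shows that both sides coincide term by term for any grid functions $v$ and $w$, which is why no $O(h)$ remainder appears.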
We observe that the following properties hold for finite difference quotients.
The proof of the first lemma can be found in Lemma~4 of~\cite{Pie-2022-interior} and is therefore omitted.
\begin{lemma}
\label{lem:fdq-1}
Let $\{v_k\}_{k\ge1}$ be a sequence in $\mathcal{C}^1(\overline{\omega})$ that converges to a certain element $v \in H^1(\omega)$ with respect to the norm $\|\cdot\|_{H^1(\omega)}$.
Then, we have that for all $0<h<d$ and all $\rho\in\{1,2\}$,
\begin{equation*}
\label{conv1}
D_{\rho h} v\in H^1(\omega_0) \textup{ with }\partial_\alpha(D_{\rho h} v)=D_{\rho h} (\partial_\alpha v) \quad\textup{ and }\quad D_{\rho h} v_k\to D_{\rho h} v \textup{ in }H^1(\omega_0) \textup{ as } k\to\infty.
\end{equation*}
\qed
\end{lemma}
As a direct consequence of Lemma~\ref{lem:fdq-1}, if $\{v_k\}_{k\ge1}$ is a sequence in $\mathcal{C}^1(\overline{\omega})$ that converges to a certain element $v \in H^1(\omega)$ with respect to the norm $\|\cdot\|_{H^1(\omega)}$, then, we have that for all $0<h<d$ and all $\rho\in\{1,2\}$,
\begin{equation*}
\label{conv1-delta}
\delta_{\rho h} v\in H^1(\omega_1) \textup{ with }\partial_\alpha(\delta_{\rho h} v)=\delta_{\rho h} (\partial_\alpha v) \quad\textup{ and }\quad \delta_{\rho h} v_k\to \delta_{\rho h} v \textup{ in }H^1(\omega_1) \textup{ as } k\to\infty.
\end{equation*}
We also state the following elementary lemma, which exploits the compactness of the support of the test function $\varphi_1$ defined above.
\begin{lemma}
\label{fdq-neg-part}
Let $f \in\mathcal{D}(\omega)$ with $\textup{supp }f \subset\subset \omega_1$.
Let $0<h<d$, where $d>0$ has been defined in~\eqref{d} and let $\rho\in\{1,2\}$ be given. Then,
\begin{equation*}
\int_{\omega} D_{\rho h}(-f^{-}) D_{\rho h}(f^{+}) \, \mathrm{d} y \ge 0.
\end{equation*}
\end{lemma}
\begin{proof}
By the definition of $D_{\rho h}$ and the definition of the positive and negative part of a function, we have that
\begin{equation*}
\int_{\omega} D_{\rho h}(-f^{-}) D_{\rho h}(f^{+}) \, \mathrm{d} y
=-\int_{\omega} \left(\dfrac{f^{-}(y+h\bm{e}_\rho)-f^{-}(y)}{h}\right) \left(\dfrac{f^{+}(y+h\bm{e}_\rho)-f^{+}(y)}{h}\right) \, \mathrm{d} y.
\end{equation*}
If $y\in \omega$ is such that $f(y+h\bm{e}_\rho)>0$ and $f(y)>0$, then the integrand vanishes, since $f^{-}$ vanishes at both points.
If $y\in \omega$ is such that $f(y+h\bm{e}_\rho)<0$ and $f(y)<0$, then the integrand vanishes as well, since $f^{+}$ vanishes at both points.
If $y\in \omega$ is such that $f(y+h\bm{e}_\rho)>0$ and $f(y)<0$, then the integrand equals
\begin{equation*}
- \left(-\dfrac{\{f(y)\}^{-}}{h}\right) \left(\dfrac{\{f(y+h\bm{e}_\rho)\}^{+}}{h}\right) > 0.
\end{equation*}
If $y\in \omega$ is such that $f(y+h\bm{e}_\rho)<0$ and $f(y)>0$, then the integrand equals
\begin{equation*}
-\left(\dfrac{\{f(y+h\bm{e}_\rho)\}^{-}}{h}\right) \left(-\dfrac{\{f(y)\}^{+}}{h}\right) > 0.
\end{equation*}
In conclusion, the integrand is never negative, so the integral under examination is greater than or equal to zero, as was to be proved.
\end{proof}
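Since the sign argument in the proof of Lemma~\ref{fdq-neg-part} is pointwise, it can be verified directly on sampled functions; the oscillating profiles below are arbitrary stand-ins for $f\in\mathcal{D}(\omega)$, chosen only for illustration.

```python
import numpy as np

h = 0.02
y = np.arange(0.0, 1.0 + h / 2, h)

def integrand_min(f):
    """Minimum over the grid of D_{rho h}(-f^-) * D_{rho h}(f^+)."""
    D = lambda g: (g[1:] - g[:-1]) / h   # forward quotient D_{rho h}
    f_minus = np.maximum(-f, 0.0)        # negative part {f}^-
    f_plus = np.maximum(f, 0.0)          # positive part {f}^+
    return (D(-f_minus) * D(f_plus)).min()
```

On every grid cell one of the two factors vanishes when $f$ keeps its sign, and the product is strictly positive when $f$ changes sign, so the minimum is always non-negative.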
Let us recall that $\bm{\theta}(y) \cdot \bm{q}>0$ for all $y \in \overline{\omega}$ (Lemma~\ref{lem0}), where the unit-norm vector $\bm{q}$ is given. In view of this, we ask whether the immersion $\bm{\theta} \in \mathcal{C}^3(\overline{\omega};\mathbb{E}^3)$ admits a prolongation $\tilde{\bm{\theta}}\in\mathcal{C}^3(\overline{\tilde{\omega}};\mathbb{E}^3)$, for some domain $\tilde{\omega}$ with $\omega \subset \subset \tilde{\omega}$, that is associated with the natural covariant and contravariant bases $\{\tilde{\bm{a}}_1, \tilde{\bm{a}}_2, \tilde{\bm{a}}_3\}$ and $\{\tilde{\bm{a}}^1, \tilde{\bm{a}}^2, \tilde{\bm{a}}^3\}$ and enjoys the following properties:
\begin{itemize}
\item[(a)] The mapping $\tilde{\bm{\theta}} \in \mathcal{C}^3(\overline{\tilde{\omega}};\mathbb{E}^3)$ is an immersion and $\tilde{\bm{\theta}}\big|_{\overline{\omega}}=\bm{\theta}$;
\item[(b)] The surface $\tilde{\bm{\theta}}(\overline{\tilde{\omega}})$ is elliptic;
\item[(c)] If $\min_{y \in \overline{\omega}}(\bm{\theta}(y) \cdot\bm{q}) >0$ then $\min_{y \in \overline{\tilde{\omega}}}(\tilde{\bm{\theta}}(y) \cdot\bm{q}) >0$;
\item[(d)] If $\min_{y \in \overline{\omega}} (\bm{a}^3(y) \cdot\bm{q})>0$ then $\min_{y \in \overline{\tilde{\omega}}} (\tilde{\bm{a}}^3(y) \cdot\bm{q})>0$.
\end{itemize}
We will say that $\bm{\theta}$ satisfies the ``prolongation property'' if there exists an extension $\tilde{\bm{\theta}}$ satisfying the properties (a)--(d) above.
Thanks to Whitney's extension theorem (cf., e.g., Theorem~2.3.6 of~\cite{Hormander1990}), we are able to give a \emph{constructive proof} of the fact that the ``prolongation property'' is satisfied by all elliptic surfaces satisfying the sufficient condition ensuring the ``density property'', thus giving an affirmative answer to the question posed above.
\begin{lemma}
\label{geometry}
Let $\omega \subset \mathbb{R}^2$ be a domain and let $\bm{\vartheta} \in \mathcal{C}^2(\overline{\omega};\mathbb{E}^3)$ be an immersion associated with an elliptic surface and satisfying the sufficient condition ensuring the ``density property''. Then $\bm{\vartheta}$ satisfies the ``prolongation property''.
\end{lemma}
\begin{proof}
Let $\{\bm{e}_i\}_{i=1}^3$ be an orthonormal covariant basis for the Euclidean space $\mathbb{E}^3$. Let $\{\bm{e}^i\}_{i=1}^3$ denote the corresponding contravariant basis of the Euclidean space $\mathbb{E}^3$, and recall that $\bm{e}_i=\bm{e}^i$ for all $1 \le i \le 3$.
For each $y \in \overline{\omega}$, we can write $\bm{\vartheta}(y)=\vartheta_i(y) \bm{e}^i$. Each of the components $\vartheta_i$, $1 \le i \le 3$, of the immersion $\bm{\vartheta}$ is of class $\mathcal{C}^2(\overline{\omega})$, since $\vartheta_i=\bm{\vartheta} \cdot \bm{e}_i$ and the right-hand side is of class $\mathcal{C}^2(\overline{\omega})$.
By the Whitney extension theorem (cf., e.g., Theorem~2.3.6 of~\cite{Hormander1990}), for each $1 \le i \le 3$, there exists a function $\tilde{\vartheta}_i \in \mathcal{C}^2(\mathbb{R}^2)$ that extends $\vartheta_i$.
We can thus define the mapping $\tilde{\bm{\vartheta}}:=\tilde{\vartheta}_i \bm{e}^i \in \mathcal{C}^2(\mathbb{R}^2;\mathbb{E}^3)$, whose restriction to $\overline{\tilde{\omega}}$ extends $\bm{\vartheta}$ for any domain $\tilde{\omega}\supset\supset\omega$.
Observe that the covariant basis $\{\bm{a}_i\}_{i=1}^3$ associated with $\bm{\vartheta}$ satisfies
$$
\det(a_{\alpha\beta}(y))>0,\quad\textup{ for all }y\in \overline{\omega},
$$
since $\bm{\vartheta}$ is assumed to be an immersion. Let $\{\tilde{\bm{a}}_i\}_{i=1}^3$ denote the covariant basis of the extension $\tilde{\bm{\vartheta}}$.
Since $\tilde{\bm{\vartheta}}$ is of class $\mathcal{C}^2$ and extends $\bm{\vartheta}$, the continuity of the determinant gives, up to shrinking $\tilde{\omega}$:
$$
\det(\tilde{a}_{\alpha\beta}(y))>0,\quad\textup{ for all }y\in \overline{\tilde{\omega}},
$$
and property (a) is thus established.
Recall that the Gaussian curvature $K$ of the immersion $\bm{\vartheta}$ is defined at each $y\in\overline{\omega}$ by
\begin{equation*}
K(y)=\det(b_\alpha^\beta(y)),
\end{equation*}
namely, in terms of the invariants of the matrix associated with the mixed components of the second fundamental form of $\bm{\vartheta}$. Let $\tilde{K}$ denote the Gaussian curvature associated with the extension $\tilde{\bm{\vartheta}}$ and observe that $\tilde{K} \in \mathcal{C}^0(\mathbb{R}^2)$ and that $\tilde{K}(y)=K(y)$, for all $y \in \overline{\omega}$.
By the continuity of the mixed components of the second fundamental form (recall that $\bm{\vartheta}$ was assumed to be of class $\mathcal{C}^2(\overline{\omega};\mathbb{E}^3)$) we can thus find a set $\tilde{\omega} \supset\supset\omega$ such that $\tilde{K}>0$ in $\overline{\tilde{\omega}}$. This proves property (b).
Properties (c) and (d) are also a direct consequence of the continuity of $\tilde{\bm{\vartheta}}$ and of its first derivatives, up to shrinking $\tilde{\omega}$.
Up to shrinking $\tilde{\omega}$, the restriction of the mapping $\tilde{\bm{\vartheta}}$ to the set $\overline{\tilde{\omega}}$ is thus the sought prolongation of the given immersion $\bm{\vartheta}$ satisfying properties (a)--(d) of the ``prolongation property''. This completes the proof.
\end{proof}
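As an illustration of Lemma~\ref{geometry} (a worked example, not needed in the sequel), consider the spherical cap obtained by choosing $0<r<R$, $\omega=\{y \in \mathbb{R}^2 : |y|<r\}$, $\bm{q}=\bm{e}_3$ and
\begin{equation*}
\bm{\vartheta}(y)=\left(y_1, y_2, \sqrt{R^2-|y|^2}\right),\quad\textup{ for all }y\in\overline{\omega}.
\end{equation*}
The corresponding surface is elliptic, since its Gaussian curvature satisfies $K \equiv 1/R^2>0$, and $\bm{\vartheta}(y) \cdot \bm{q}=\sqrt{R^2-|y|^2}\ge\sqrt{R^2-r^2}>0$ for all $y \in \overline{\omega}$. The same formula defines a prolongation $\tilde{\bm{\vartheta}}$ on any $\tilde{\omega}=\{y \in \mathbb{R}^2 : |y|<\tilde{r}\}$ with $r<\tilde{r}<R$, and properties (a)--(d) are immediately verified; in particular, $\bm{a}^3(y)=\bm{\vartheta}(y)/R$, so that $\bm{a}^3(y) \cdot \bm{q}>0$ on $\overline{\tilde{\omega}}$.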
We are ready to state the main result of this section, which constitutes the first new result of this paper. Note, in passing, that upon proving the following theorem we will be able to obtain the conclusion of Theorem~6 in~\cite{Pie-2022-interior} under weaker assumptions on the given term $\bm{p}^\varepsilon$. The main novelty of the approach presented in this paper is that the augmented regularity of the solution of Problem~\ref{problem1} will be established \emph{without} resorting to the ``density property'' exploited in the proof of Theorem~\ref{density}.
\begin{theorem}
\label{aug:int}
Let $\omega_0$ and $\omega_1$ be as in~\eqref{sets}. Assume that there exists a unit norm vector $\bm{q} \in \mathbb{E}^3$ such that
\begin{equation*}
\min_{y \in \overline{\omega}} (\bm{\theta} (y) \cdot \bm{q}) > 0
\textup{ and }
\min_{y \in \overline{\omega}} (\bm{a}_3 (y) \cdot \bm{q}) > 0.
\end{equation*}
Assume also that the vector field $\bm{f}^\varepsilon=(f^{i,\varepsilon})$ defining the applied body force density is of class $L^2(\Omega^\varepsilon)\times L^2(\Omega^\varepsilon)\times H^1(\Omega^\varepsilon)$.
Then, the solution $\bm{\zeta}^\varepsilon_\kappa=(\zeta^\varepsilon_{\kappa,i})$ of Problem~\ref{problem2} is of class $\bm{V}_M(\omega)\cap \left(H^2_{\textup{loc}}(\omega) \times H^2_{\textup{loc}}(\omega) \times H^1_{\textup{loc}}(\omega)\right)$.
\end{theorem}
\begin{proof}
Fix $\varphi\in\mathcal{D}(\omega)$ such that $\text{supp }\varphi \subset\subset \omega_1$ and $0\le \varphi \le 1$. Let $\bm{\zeta}^\varepsilon_\kappa \in \bm{V}_M(\omega)$ be the unique solution of Problem~\ref{problem2}.
Observe that the transverse component $\zeta^\varepsilon_{\kappa,3}$ can be extended outside of $\omega$ by zero, preserving the $L^2(\mathbb{R}^2)$ regularity.
As for the tangential components $\zeta^\varepsilon_{\kappa,\alpha}$, Proposition~9.18 of~\cite{Brez11} states that the only admissible prolongation outside of $\omega$ is the prolongation by zero. Therefore, it makes sense to consider the vector field
\begin{equation*}
(-\varphi \delta_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)) \in H^1(\mathbb{R}^2) \times H^1(\mathbb{R}^2) \times L^2(\mathbb{R}^2).
\end{equation*}
Since the support of this vector field is compactly contained in $\omega_1$, we obtain that, actually,
\begin{equation*}
(-\varphi \delta_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)) \in \bm{V}_M(\omega),
\end{equation*}
and we can specialize $\bm{\eta}=-\varphi \delta_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)$ in the variational equations of Problem~\ref{problem2}.
Let us now evaluate
\begin{equation*}
\begin{aligned}
&\int_{\omega} p^{i,\varepsilon} (-\varphi \delta_{\rho h}(\varphi\zeta^\varepsilon_{\kappa,i})) \sqrt{a} \, \mathrm{d} y
=-\int_{\omega_1}(\varphi p^{i,\varepsilon}) (\delta_{\rho h}(\varphi\zeta^\varepsilon_{\kappa,i})) \sqrt{a} \, \mathrm{d} y\\
&=-\int_{\omega_1}(\varphi p^{\alpha,\varepsilon}) (\delta_{\rho h}(\varphi\zeta^\varepsilon_{\kappa,\alpha})) \sqrt{a} \, \mathrm{d} y
+\int_{\omega}D_{\rho h}(\varphi p^{3,\varepsilon}) (D_{\rho h}(\varphi\zeta^\varepsilon_{\kappa,3})) \sqrt{a} \, \mathrm{d} y\\
&\le\varepsilon \|\varphi\|_{\mathcal{C}^1(\overline{\omega})}\|\bm{p}\|_{L^2(\omega)\times L^2(\omega) \times H^1(\omega)}\sqrt{a_1}
\|D_{\rho h}(\varphi \bm{\zeta}^\varepsilon_\kappa)\|_{H^1(\omega_1)\times H^1(\omega_1)\times L^2(\omega_1)},
\end{aligned}
\end{equation*}
where the second equality holds thanks to the integration-by-parts formula for finite difference quotients (cf. page~293 of~\cite{Evans2010}), and the inequality holds thanks to the H\"older inequality. Note in passing that $\|\bm{p}^\varepsilon\|_{L^2(\omega)\times L^2(\omega) \times H^1(\omega)}$ is independent of $\varepsilon$ thanks to the assumptions on the data.
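Since the integration-by-parts formula for finite difference quotients is used repeatedly in what follows, here is a minimal discrete sketch of it (illustrative only; the hat-shaped sequences are assumptions), in the spirit of the one-dimensional identity $\sum_i u_i (D^{+}v)_i\,h = -\sum_i (D^{-}u)_i v_i\,h$ for compactly supported sequences:

```python
# Discrete analogue of the integration-by-parts formula for finite
# difference quotients: with D+ g_i = (g_{i+1} - g_i)/h and
# D- g_i = (g_i - g_{i-1})/h, compactly supported sequences satisfy
#   sum_i u_i (D+ v)_i h  =  - sum_i (D- u)_i v_i h.
# The hat-shaped sequences below are illustrative assumptions.
h = 0.05
n = 200

def u(i):
    return max(0.0, 1.0 - abs(i - n // 2) * h)  # hat function, compact support

def v(i):
    return u(i) ** 2                             # another compactly supported sequence

lhs = sum(u(i) * (v(i + 1) - v(i)) / h * h for i in range(n))
rhs = -sum((u(i) - u(i - 1)) / h * v(i) * h for i in range(n))

# No boundary terms survive, exactly as in the continuous formula.
assert abs(lhs - rhs) < 1e-12
```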
Thanks to these inequalities, we have that
\begin{equation}
\label{int-1}
\begin{aligned}
&\varepsilon\int_{\omega_1} a^{\alpha\beta\sigma\tau} \gamma_{\sigma\tau}(\bm{\zeta}^\varepsilon_\kappa) \gamma_{\alpha \beta}(-\varphi \delta_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)) \sqrt{a} \, \mathrm{d} y
+\dfrac{\varepsilon}{\kappa}\int_{\omega_1}\bm{\beta}(\bm{\zeta}^\varepsilon_\kappa) \cdot (-\varphi \delta_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)) \, \mathrm{d} y\\
&\le \varepsilon \|\varphi\|_{\mathcal{C}^1(\overline{\omega})}\|\bm{p}\|_{L^2(\omega)\times L^2(\omega) \times H^1(\omega)}\sqrt{a_1}
\|D_{\rho h}(\varphi \bm{\zeta}^\varepsilon_\kappa)\|_{H^1(\omega_1)\times H^1(\omega_1)\times L^2(\omega_1)}.
\end{aligned}
\end{equation}
The first step in our analysis consists in showing that:
\begin{equation}
\label{key-relation-2}
\begin{aligned}
&-\varepsilon \int_{\omega_1}a^{\alpha\beta\sigma\tau}\gamma_{\sigma \tau}(\varphi\bm{\zeta}^\varepsilon_\kappa)\gamma_{\alpha\beta}(\delta_{\rho h}(\varphi \bm{\zeta}^\varepsilon_\kappa))\sqrt{a} \, \mathrm{d} y\\
&\le -\varepsilon \int_{\omega_1} a^{\alpha\beta\sigma\tau}\gamma_{\sigma \tau}(\bm{\zeta}^\varepsilon_\kappa)\gamma_{\alpha\beta}(\varphi \delta_{\rho h}(\varphi \bm{\zeta}^\varepsilon_\kappa))\sqrt{a} \, \mathrm{d} y+C(1+\|D_{\rho h}(\varphi \bm{\zeta}^\varepsilon_\kappa)\|_{H^1(\omega_1)\times H^1(\omega_1)\times L^2(\omega_1)}),
\end{aligned}
\end{equation}
for some $C>0$ independent of $\varepsilon$, $\kappa$ and $h$.
Recalling the definition of the change of metric tensor components $\gamma_{\alpha \beta}$ (cf. section~\ref{sec1}) and recalling that $\bm{\theta} \in \mathcal{C}^3(\overline{\omega};\mathbb{E}^3)$, we have that the integral
\begin{equation*}
-\varepsilon\int_{\omega_1} a^{\alpha\beta\sigma\tau}\gamma_{\sigma \tau}(\varphi\bm{\zeta}^\varepsilon_\kappa)\gamma_{\alpha\beta}(\delta_{\rho h}(\varphi \bm{\zeta}^\varepsilon_\kappa)) \sqrt{a} \, \mathrm{d} y
\end{equation*}
can be bounded by estimating the following nine main addends. In the evaluation of these nine terms the indices are assumed to be fixed, i.e., the summation rule with respect to repeated indices is not enforced in~\eqref{term-1}--\eqref{term-9} below.
Overall, the strategy we resort to is the following: we take into account the addends of the linearised change of metric tensor and we apply Green's formula and the integration-by-parts formula for finite difference quotients to suitably arrange the position of the compactly supported function $\varphi$.
First, thanks to an application of Green's formula (cf., e.g., Theorem~6.6-7 of~\cite{PGCLNFAA}), we estimate:
\begin{equation}
\label{term-1}
\begin{aligned}
&\int_{\omega_1} -a^{\alpha\beta\sigma\tau} \partial_\sigma(\varphi \zeta^\varepsilon_{\kappa,\tau}) \partial_\beta(\delta_{\rho h}(\varphi \zeta^\varepsilon_{\kappa,\alpha})) \sqrt{a} \, \mathrm{d} y\\
&=\int_{\omega_1} -a^{\alpha\beta\sigma\tau} [(\partial_\sigma \varphi)\zeta^\varepsilon_{\kappa,\tau} +\varphi \partial_\sigma \zeta^\varepsilon_{\kappa,\tau}] \partial_\beta(\delta_{\rho h}(\varphi \zeta^\varepsilon_{\kappa,\alpha})) \sqrt{a} \, \mathrm{d} y\\
&=\int_{\omega_1} \partial_\beta(a^{\alpha\beta\sigma\tau} (\partial_\sigma \varphi) \zeta^\varepsilon_{\kappa,\tau} \sqrt{a}) \delta_{\rho h}(\varphi \zeta^\varepsilon_{\kappa,\alpha}) \, \mathrm{d} y\\
&\quad+\int_{\omega_1} -a^{\alpha\beta\sigma\tau} \partial_\sigma \zeta^\varepsilon_{\kappa,\tau} [\varphi\partial_\beta(\delta_{\rho h}(\varphi \zeta^\varepsilon_{\kappa,\alpha}))] \sqrt{a} \, \mathrm{d} y\\
&\le C \|\zeta^\varepsilon_{\kappa,\tau}\|_{H^1(\omega_1) \times H^1(\omega_1) \times L^2(\omega_1)} \|D_{\rho h}(\varphi \zeta^\varepsilon_{\kappa,\alpha})\|_{H^1(\omega_1)}\\
&\quad+\int_{\omega_1} -a^{\alpha\beta\sigma\tau} \partial_\sigma \zeta^\varepsilon_{\kappa,\tau} \partial_\beta(\varphi \delta_{\rho h}(\varphi \zeta^\varepsilon_{\kappa,\alpha})) \sqrt{a} \, \mathrm{d} y\\
&\quad + \int_{\omega_1} a^{\alpha\beta\sigma\tau} (\partial_\sigma \zeta^\varepsilon_{\kappa,\tau}) (\partial_\beta \varphi) \delta_{\rho h}(\varphi \zeta^\varepsilon_{\kappa,\alpha}) \sqrt{a} \, \mathrm{d} y\\
&\le \int_{\omega_1} -a^{\alpha\beta\sigma\tau} (\partial_\sigma \zeta^\varepsilon_{\kappa,\tau}) \partial_\beta(\varphi\delta_{\rho h}(\varphi \zeta^\varepsilon_{\kappa,\alpha})) \sqrt{a} \, \mathrm{d} y +C \|D_{\rho h}(\varphi \zeta^\varepsilon_{\kappa,\alpha})\|_{H^1(\omega_1)}.
\end{aligned}
\end{equation}
Second, we estimate:
\begin{equation}
\label{term-2}
\begin{aligned}
&\int_{\omega_1} -a^{\alpha\beta\sigma\tau} (-\Gamma_{\sigma\tau}^\varsigma \varphi \zeta^\varepsilon_{\kappa,\varsigma}) \partial_\beta(\delta_{\rho h}(\varphi \zeta^\varepsilon_{\kappa,\alpha})) \sqrt{a} \, \mathrm{d} y\\
&=\int_{\omega_1} a^{\alpha\beta\sigma\tau} \Gamma_{\sigma\tau}^\varsigma \zeta^\varepsilon_{\kappa,\varsigma} \partial_\beta(\varphi \delta_{\rho h}(\varphi \zeta^\varepsilon_{\kappa,\alpha})) \sqrt{a} \, \mathrm{d} y\\
&\quad-\int_{\omega_1} a^{\alpha\beta\sigma\tau} \Gamma_{\sigma\tau}^\varsigma \zeta^\varepsilon_{\kappa,\varsigma} (\partial_\beta\varphi) (\delta_{\rho h}(\varphi \zeta^\varepsilon_{\kappa,\alpha})) \sqrt{a} \, \mathrm{d} y\\
&\le \int_{\omega_1} a^{\alpha\beta\sigma\tau} \Gamma_{\sigma\tau}^\varsigma \zeta^\varepsilon_{\kappa,\varsigma} \partial_\beta(\varphi \delta_{\rho h}(\varphi \zeta^\varepsilon_{\kappa,\alpha})) \sqrt{a} \, \mathrm{d} y +C \|D_{\rho h}(\varphi \zeta^\varepsilon_{\kappa,\alpha})\|_{H^1(\omega_1)},
\end{aligned}
\end{equation}
where the equality holds as a consequence of Green's formula.
Third, we estimate:
\begin{equation}
\label{term-3}
\begin{aligned}
&\int_{\omega_1} -a^{\alpha\beta\sigma\tau} (-b_{\alpha\beta}\varphi \zeta^\varepsilon_{\kappa,3}) \partial_\beta(\delta_{\rho h}(\varphi \zeta^\varepsilon_{\kappa,\alpha})) \sqrt{a} \, \mathrm{d} y\\
&=\int_{\omega_1} a^{\alpha\beta\sigma\tau} (b_{\alpha\beta} \zeta^\varepsilon_{\kappa,3}) \partial_\beta(\varphi \delta_{\rho h}(\varphi \zeta^\varepsilon_{\kappa,\alpha})) \sqrt{a} \, \mathrm{d} y\\
&\quad-\int_{\omega_1} a^{\alpha\beta\sigma\tau} b_{\alpha\beta} \zeta^\varepsilon_{\kappa,3} (\partial_\beta \varphi) \delta_{\rho h}(\varphi \zeta^\varepsilon_{\kappa,\alpha}) \sqrt{a} \, \mathrm{d} y\\
&\le \int_{\omega_1} a^{\alpha\beta\sigma\tau} (b_{\alpha\beta} \zeta^\varepsilon_{\kappa,3}) \partial_\beta(\varphi \delta_{\rho h}(\varphi \zeta^\varepsilon_{\kappa,\alpha})) \sqrt{a} \, \mathrm{d} y +C \|D_{\rho h}(\varphi \zeta^\varepsilon_{\kappa,\alpha})\|_{H^1(\omega_1)}.
\end{aligned}
\end{equation}
Fourth, we estimate:
\begin{equation}
\label{term-4}
\begin{aligned}
&\int_{\omega_1} -a^{\alpha\beta\sigma\tau} \partial_\sigma(\varphi \zeta^\varepsilon_{\kappa,\tau}) \Gamma_{\alpha\beta}^\upsilon \delta_{\rho h}(\varphi \zeta^\varepsilon_{\kappa,\upsilon}) \sqrt{a} \, \mathrm{d} y\\
&=\int_{\omega_1} -a^{\alpha\beta\sigma\tau} (\partial_\sigma\varphi) \zeta^\varepsilon_{\kappa,\tau} \Gamma_{\alpha\beta}^\upsilon \delta_{\rho h}(\varphi \zeta^\varepsilon_{\kappa,\upsilon}) \sqrt{a} \, \mathrm{d} y\\
&\quad+\int_{\omega_1} -a^{\alpha\beta\sigma\tau} \varphi (\partial_\sigma \zeta^\varepsilon_{\kappa,\tau}) \Gamma_{\alpha\beta}^\upsilon \delta_{\rho h}(\varphi \zeta^\varepsilon_{\kappa,\upsilon}) \sqrt{a} \, \mathrm{d} y\\
&\le C \|D_{\rho h}(\varphi \zeta^\varepsilon_{\kappa,\alpha})\|_{H^1(\omega_1)}
+\int_{\omega_1} -a^{\alpha\beta\sigma\tau} (\partial_\sigma \zeta^\varepsilon_{\kappa,\tau}) \Gamma_{\alpha\beta}^\upsilon [\varphi\delta_{\rho h}(\varphi \zeta^\varepsilon_{\kappa,\upsilon})] \sqrt{a} \, \mathrm{d} y.
\end{aligned}
\end{equation}
Fifth, we straightforwardly observe that:
\begin{equation}
\label{term-5}
\begin{aligned}
&\int_{\omega_1} -a^{\alpha\beta\sigma\tau} (-\Gamma_{\sigma\tau}^\varsigma \zeta^\varepsilon_{\kappa,\varsigma} \varphi) \Gamma_{\alpha\beta}^\upsilon \delta_{\rho h}(\zeta^\varepsilon_{\kappa,\upsilon} \varphi) \sqrt{a} \, \mathrm{d} y\\
&\le C(1+\|D_{\rho h}(\varphi \bm{\zeta}^\varepsilon_\kappa)\|_{H^1(\omega_1) \times H^1(\omega_1) \times L^2(\omega_1)})\\
&\quad+ \int_{\omega_1} -a^{\alpha\beta\sigma\tau} (-\Gamma_{\sigma\tau}^\varsigma \zeta^\varepsilon_{\kappa,\varsigma}) \Gamma_{\alpha\beta}^\upsilon [\varphi\delta_{\rho h}(\zeta^\varepsilon_{\kappa,\upsilon} \varphi)] \sqrt{a} \, \mathrm{d} y.
\end{aligned}
\end{equation}
Sixth, we straightforwardly observe that:
\begin{equation}
\label{term-6}
\begin{aligned}
&\int_{\omega_1} -a^{\alpha\beta\sigma\tau} (b_{\sigma\tau} \zeta^\varepsilon_{\kappa,3} \varphi) \Gamma_{\alpha\beta}^\upsilon \delta_{\rho h}(\varphi \zeta^\varepsilon_{\kappa,\upsilon}) \sqrt{a} \, \mathrm{d} y\\
&\le C(1+\|D_{\rho h}(\varphi \bm{\zeta}^\varepsilon_\kappa)\|_{H^1(\omega_1) \times H^1(\omega_1) \times L^2(\omega_1)})\\
&\quad+\int_{\omega_1} -a^{\alpha\beta\sigma\tau} b_{\sigma\tau} \zeta^\varepsilon_{\kappa,3} \Gamma_{\alpha\beta}^\upsilon [\varphi\delta_{\rho h}(\varphi \zeta^\varepsilon_{\kappa,\upsilon})] \sqrt{a} \, \mathrm{d} y.
\end{aligned}
\end{equation}
Seventh, we straightforwardly observe that:
\begin{equation}
\label{term-7}
\begin{aligned}
&\int_{\omega_1} -a^{\alpha\beta\sigma\tau} (-\Gamma_{\sigma\tau}^\varsigma \varphi \zeta^\varepsilon_{\kappa,\varsigma}) b_{\alpha\beta} \delta_{\rho h}(\zeta^\varepsilon_{\kappa,3} \varphi) \sqrt{a} \, \mathrm{d} y\\
&\le C(1+\|D_{\rho h}(\varphi \bm{\zeta}^\varepsilon_\kappa)\|_{H^1(\omega_1) \times H^1(\omega_1) \times L^2(\omega_1)})\\
&\quad+\int_{\omega_1} -a^{\alpha\beta\sigma\tau} (-\Gamma_{\sigma\tau}^\varsigma \zeta^\varepsilon_{\kappa,\varsigma}) b_{\alpha\beta} [\varphi\delta_{\rho h}(\zeta^\varepsilon_{\kappa,3} \varphi)] \sqrt{a} \, \mathrm{d} y.
\end{aligned}
\end{equation}
Eighth, we estimate:
\begin{equation}
\label{term-8}
\begin{aligned}
&\int_{\omega_1} -a^{\alpha\beta\sigma\tau} \partial_\sigma(\varphi \zeta^\varepsilon_{\kappa,\tau}) b_{\alpha\beta} \delta_{\rho h}(\zeta^\varepsilon_{\kappa,3} \varphi) \sqrt{a} \, \mathrm{d} y\\
&=\int_{\omega_1} -a^{\alpha\beta\sigma\tau} (\partial_\sigma \zeta^\varepsilon_{\kappa,\tau}) b_{\alpha\beta} [\varphi \delta_{\rho h}(\zeta^\varepsilon_{\kappa,3} \varphi)] \sqrt{a} \, \mathrm{d} y\\
&\quad+\int_{\omega_1} -a^{\alpha\beta\sigma\tau} (\partial_\sigma \varphi) \zeta^\varepsilon_{\kappa,\tau} b_{\alpha\beta} \delta_{\rho h}(\zeta^\varepsilon_{\kappa,3} \varphi)\sqrt{a} \, \mathrm{d} y\\
&=\int_{\omega_1} -a^{\alpha\beta\sigma\tau} (\partial_\sigma \zeta^\varepsilon_{\kappa,\tau}) b_{\alpha\beta} [\varphi \delta_{\rho h}(\zeta^\varepsilon_{\kappa,3} \varphi)] \sqrt{a} \, \mathrm{d} y\\
&\quad+\int_{\omega_1} D_{\rho h}(-a^{\alpha\beta\sigma\tau} (\partial_\sigma \varphi) \zeta^\varepsilon_{\kappa,\tau} b_{\alpha\beta} \sqrt{a}) D_{\rho h}(\varphi \zeta^\varepsilon_{\kappa,3}) \, \mathrm{d} y\\
&\le \int_{\omega_1} -a^{\alpha\beta\sigma\tau} (\partial_\sigma \zeta^\varepsilon_{\kappa,\tau}) b_{\alpha\beta} [\varphi \delta_{\rho h}(\zeta^\varepsilon_{\kappa,3} \varphi)] \sqrt{a} \, \mathrm{d} y\\
&\quad + C(1+\|D_{\rho h}(\varphi \bm{\zeta}^\varepsilon_\kappa)\|_{H^1(\omega_1) \times H^1(\omega_1) \times L^2(\omega_1)}),
\end{aligned}
\end{equation}
where in the last equality we used the integration-by-parts formula for finite difference quotients.
Ninth, and last, we straightforwardly observe that
\begin{equation}
\label{term-9}
\begin{aligned}
&\int_{\omega_1} -a^{\alpha\beta\sigma\tau} (b_{\sigma\tau} \zeta^\varepsilon_{\kappa,3} \varphi) b_{\alpha\beta} \delta_{\rho h}(\zeta^\varepsilon_{\kappa,3} \varphi) \sqrt{a} \, \mathrm{d} y\\
&=\int_{\omega_1} -a^{\alpha\beta\sigma\tau} (b_{\sigma\tau} \zeta^\varepsilon_{\kappa,3}) b_{\alpha\beta} [\varphi\delta_{\rho h}(\zeta^\varepsilon_{\kappa,3} \varphi)] \sqrt{a} \, \mathrm{d} y\\
&\le C(1+\|D_{\rho h}(\varphi \bm{\zeta}^\varepsilon_\kappa)\|_{H^1(\omega_1) \times H^1(\omega_1) \times L^2(\omega_1)})\\
&\quad+ \int_{\omega_1} -a^{\alpha\beta\sigma\tau} (b_{\sigma\tau} \zeta^\varepsilon_{\kappa,3}) b_{\alpha\beta} [\varphi\delta_{\rho h}(\zeta^\varepsilon_{\kappa,3} \varphi)] \sqrt{a} \, \mathrm{d} y.
\end{aligned}
\end{equation}
In conclusion, combining~\eqref{term-1}--\eqref{term-9} gives~\eqref{key-relation-2}. Combining~\eqref{int-1} and~\eqref{key-relation-2} in turn gives that there exists a constant $C>0$ independent of $\varepsilon$, $\kappa$ and $h$ such that
\begin{equation*}
\begin{aligned}
&-\varepsilon \int_{\omega_1} a^{\alpha\beta\sigma\tau} \gamma_{\sigma\tau}(\varphi\bm{\zeta}^\varepsilon_\kappa) \gamma_{\alpha \beta}(\delta_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)) \sqrt{a} \, \mathrm{d} y\\
&\quad+\dfrac{\varepsilon}{\kappa}\int_{\omega_1}\bm{\beta}(\bm{\zeta}^\varepsilon_\kappa) \cdot (-\varphi \delta_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)) \, \mathrm{d} y \le C\varepsilon(1+\|D_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)\|_{H^1(\omega_1)\times H^1(\omega_1)\times L^2(\omega_1)}).
\end{aligned}
\end{equation*}
An application of the integration-by-parts formula for finite difference quotients (cf., e.g., page~293 of~\cite{Evans2010}) and~\eqref{D+} turn the latter into:
\begin{equation*}
\begin{aligned}
&\varepsilon \int_{\omega_1} a^{\alpha\beta\sigma\tau} \gamma_{\sigma\tau}(D_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)) \gamma_{\alpha \beta}(D_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)) \sqrt{a} \, \mathrm{d} y
+\varepsilon \int_{\omega_1} D_{\rho h}(a^{\alpha\beta\sigma\tau}\sqrt{a}) E_{\rho h}\left(\gamma_{\sigma\tau}(\varphi\bm{\zeta}^\varepsilon_\kappa)\right) \gamma_{\alpha \beta}(D_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)) \, \mathrm{d} y\\
&\quad+\dfrac{\varepsilon}{\kappa}\int_{\omega_1}\bm{\beta}(\bm{\zeta}^\varepsilon_\kappa) \cdot (-\varphi \delta_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)) \, \mathrm{d} y\\
&=\varepsilon \int_{\omega_1} D_{\rho h}\left(a^{\alpha\beta\sigma\tau} \gamma_{\sigma\tau}(\varphi\bm{\zeta}^\varepsilon_\kappa)\sqrt{a}\right) \gamma_{\alpha \beta}(D_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)) \, \mathrm{d} y
+\dfrac{\varepsilon}{\kappa}\int_{\omega_1}\bm{\beta}(\bm{\zeta}^\varepsilon_\kappa) \cdot (-\varphi \delta_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)) \, \mathrm{d} y\\
&=-\varepsilon \int_{\omega_1} a^{\alpha\beta\sigma\tau} \gamma_{\sigma\tau}(\varphi\bm{\zeta}^\varepsilon_\kappa) D_{-\rho h}\left(\gamma_{\alpha \beta}(D_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa))\right) \sqrt{a} \, \mathrm{d} y
+\dfrac{\varepsilon}{\kappa}\int_{\omega_1}\bm{\beta}(\bm{\zeta}^\varepsilon_\kappa) \cdot (-\varphi \delta_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)) \, \mathrm{d} y\\
&=-\varepsilon \int_{\omega_1} a^{\alpha\beta\sigma\tau} \gamma_{\sigma\tau}(\varphi\bm{\zeta}^\varepsilon_\kappa) \gamma_{\alpha \beta}(\delta_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)) \sqrt{a} \, \mathrm{d} y
+\dfrac{\varepsilon}{\kappa}\int_{\omega_1}\bm{\beta}(\bm{\zeta}^\varepsilon_\kappa) \cdot (-\varphi \delta_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)) \, \mathrm{d} y\\
&\le C\varepsilon(1+\|D_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)\|_{H^1(\omega_1)\times H^1(\omega_1)\times L^2(\omega_1)}),
\end{aligned}
\end{equation*}
for some $C>0$ independent of $\varepsilon$, $\kappa$ and $h$.
The fact that $\varphi$ has compact support in $\omega_1$, Korn's inequality (Theorem~\ref{korn}) and the definition of $d$ (viz.~\eqref{d}) then give
\begin{equation*}
\begin{aligned}
&\dfrac{\varepsilon\sqrt{a_0}}{c_0 c_e}\|D_{\rho h}(\varphi \bm{\zeta}^\varepsilon_\kappa)\|_{H^1(\omega_1)\times H^1(\omega_1)\times L^2(\omega_1)}^2
+\dfrac{\varepsilon}{\kappa}\int_{\omega_1}\bm{\beta}(\bm{\zeta}^\varepsilon_\kappa) \cdot (-\varphi \delta_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)) \, \mathrm{d} y\\
&\le\varepsilon \int_{\omega_1} a^{\alpha\beta\sigma\tau} \gamma_{\sigma\tau}(D_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)) \gamma_{\alpha \beta}(D_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)) \sqrt{a} \, \mathrm{d} y
+\dfrac{\varepsilon}{\kappa}\int_{\omega_1}\bm{\beta}(\bm{\zeta}^\varepsilon_\kappa) \cdot (-\varphi \delta_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)) \, \mathrm{d} y\\
&\le C\varepsilon(1+\|D_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)\|_{H^1(\omega_1)\times H^1(\omega_1)\times L^2(\omega_1)})
-\varepsilon \int_{\omega_1} D_{\rho h}(a^{\alpha\beta\sigma\tau}\sqrt{a}) E_{\rho h}\left(\gamma_{\sigma\tau}(\varphi\bm{\zeta}^\varepsilon_\kappa)\right) \gamma_{\alpha \beta}(D_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)) \, \mathrm{d} y\\
&\le C\varepsilon(1+\|D_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)\|_{H^1(\omega_1)\times H^1(\omega_1)\times L^2(\omega_1)})\\
&\quad+\varepsilon \left(\max_{\alpha,\beta,\sigma,\tau \in \{1,2\}}\{\|a^{\alpha\beta\sigma\tau}\sqrt{a}\|_{\mathcal{C}^1(\overline{\omega})}\}\right) \left(\max_{\alpha,\beta\in\{1,2\}}\|\gamma_{\alpha\beta}(D_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa))\|_{L^2(\omega_1)}\right) \left(\max_{\sigma,\tau\in\{1,2\}}\|E_{\rho h}(\gamma_{\sigma\tau}(\varphi \bm{\zeta}^\varepsilon_\kappa))\|_{L^2(\omega_1)}\right)\\
&= C\varepsilon(1+\|D_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)\|_{H^1(\omega_1)\times H^1(\omega_1)\times L^2(\omega_1)})\\
&\quad+\varepsilon \left(\max_{\alpha,\beta,\sigma,\tau \in \{1,2\}}\{\|a^{\alpha\beta\sigma\tau}\sqrt{a}\|_{\mathcal{C}^1(\overline{\omega})}\}\right) \left(\max_{\alpha,\beta\in\{1,2\}}\|\gamma_{\alpha\beta}(D_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa))\|_{L^2(\omega_1)}\right) \left(\max_{\sigma,\tau\in\{1,2\}}\|\gamma_{\sigma\tau}(\varphi \bm{\zeta}^\varepsilon_\kappa)\|_{L^2(\omega_1)}\right)\\
&= C\varepsilon(1+\|D_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)\|_{H^1(\omega_1)\times H^1(\omega_1)\times L^2(\omega_1)})\\
&\quad+\varepsilon \left(\max_{\alpha,\beta,\sigma,\tau \in \{1,2\}}\{\|a^{\alpha\beta\sigma\tau}\sqrt{a}\|_{\mathcal{C}^1(\overline{\omega})}\}\right)
\|D_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)\|_{H^1(\omega_1)\times H^1(\omega_1)\times L^2(\omega_1)}
\|\bm{\zeta}^\varepsilon_\kappa\|_{H^1(\omega_1)\times H^1(\omega_1)\times L^2(\omega_1)}\\
&\le C\varepsilon(1+\|D_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)\|_{H^1(\omega_1)\times H^1(\omega_1)\times L^2(\omega_1)}),
\end{aligned}
\end{equation*}
where, once again, the constant $C>0$ is independent of $\varepsilon$, $\kappa$ and $h$. The computations above are summarized in the following estimate:
\begin{equation}
\label{checkpoint-1}
\begin{aligned}
&\dfrac{\varepsilon\sqrt{a_0}}{c_0 c_e}\|D_{\rho h}(\varphi \bm{\zeta}^\varepsilon_\kappa)\|_{H^1(\omega_1)\times H^1(\omega_1)\times L^2(\omega_1)}^2
+\dfrac{\varepsilon}{\kappa}\int_{\omega_1}\bm{\beta}(\bm{\zeta}^\varepsilon_\kappa) \cdot (-\varphi \delta_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)) \, \mathrm{d} y\\
&\le C\varepsilon(1+\|D_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)\|_{H^1(\omega_1)\times H^1(\omega_1)\times L^2(\omega_1)}),
\end{aligned}
\end{equation}
for some constant $C>0$ independent of $\varepsilon$, $\kappa$ and $h$.
Let us now estimate the penalty term. Thanks to the equations of Problem~\ref{problem1}, we have that
\begin{equation*}
\label{bdd-1}
\dfrac{\varepsilon}{\kappa}\int_{\omega}\bm{\beta}(\bm{\zeta}^\varepsilon_\kappa) \cdot\bm{\eta} \, \mathrm{d} y=-\varepsilon\int_{\omega}a^{\alpha\beta\sigma\tau}\gamma_{\sigma \tau}(\bm{\zeta}^\varepsilon_\kappa) \gamma_{\alpha\beta}(\bm{\eta}) \sqrt{a} \, \mathrm{d} y+\int_{\omega}p^{i,\varepsilon} \eta_i\sqrt{a} \, \mathrm{d} y,\quad\textup{ for all }\bm{\eta}=(\eta_i) \in \bm{V}_M(\omega).
\end{equation*}
An application of the triangle inequality and the continuity of the components $\gamma_{\alpha\beta}$ of the linearized change of metric tensor gives
\begin{equation*}
\left|\dfrac{\varepsilon}{\kappa}\int_{\omega}\bm{\beta}(\bm{\zeta}^\varepsilon_\kappa) \cdot\bm{\eta} \, \mathrm{d} y\right|\le \varepsilon \left(\max_{\alpha,\beta,\sigma,\tau \in \{1,2\}}\|a^{\alpha\beta\sigma\tau}\|_{\mathcal{C}^0(\overline{\omega})}\right)\|\bm{\zeta}^\varepsilon_\kappa\|_{\bm{V}_M(\omega)}\|\bm{\eta}\|_{\bm{V}_M(\omega)} \sqrt{a_1}+\varepsilon\|\bm{p}\|_{\bm{L}^2(\omega)}\|\bm{\eta}\|_{\bm{L}^2(\omega)}\sqrt{a_1},
\end{equation*}
for all $\bm{\eta}=(\eta_i) \in\bm{V}_M(\omega)$.
Passing to the supremum over all the vector fields $\bm{\eta}=(\eta_i) \in\bm{V}_M(\omega)$ with $\|\bm{\eta}\|_{\bm{V}_M(\omega)}=1$ gives
\begin{equation*}
\sup_{\substack{\bm{\eta}\in\bm{V}_M(\omega)\\\|\bm{\eta}\|_{\bm{V}_M(\omega)}=1}} \left|\dfrac{1}{\kappa}\int_{\omega}\bm{\beta}(\bm{\zeta}^\varepsilon_\kappa) \cdot\bm{\eta} \, \mathrm{d} y\right|
\le \sqrt{a_1}\left(\left(\max_{\alpha,\beta,\sigma,\tau \in \{1,2\}}\|a^{\alpha\beta\sigma\tau}\|_{\mathcal{C}^0(\overline{\omega})}\right)\|\bm{\zeta}^\varepsilon_\kappa\|_{\bm{V}_M(\omega)}+\|\bm{p}\|_{\bm{L}^2(\omega)}\right),
\end{equation*}
where, by Theorem~\ref{ex-un-kappa}, the right hand side is bounded independently of $\varepsilon$ and $\kappa$. In conclusion, we have shown that there exists a constant $M_1>0$ independent of $\varepsilon$ and $\kappa$ (and clearly $h$) such that:
\begin{equation}
\label{bdd-2}
\dfrac{1}{\kappa}\|\bm{\beta}(\bm{\zeta}^\varepsilon_\kappa)\|_{\bm{V}'_M(\omega)} \le M_1.
\end{equation}
The fact that we identified $L^2(\omega)$ with its dual, the assumption $\min_{y \in \overline{\omega}}(\bm{a}^3\cdot\bm{q})>0$, and~\eqref{bdd-2} give
\begin{equation*}
\begin{aligned}
M_1&\ge\dfrac{1}{\kappa}\|\bm{\beta}(\bm{\zeta}^\varepsilon_\kappa)\|_{\bm{V}'_M(\omega)}
=\dfrac{1}{\kappa}\Bigg\{\left\|-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}\left(\dfrac{\bm{a}^1\cdot\bm{q}}{\sqrt{\sum_{\ell=1}^3|\bm{a}^\ell\cdot\bm{q}|^2}}\right)\right\|_{H^{-1}(\omega)}^2\\
&\quad+\left\|-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}\left(\dfrac{\bm{a}^2\cdot\bm{q}}{\sqrt{\sum_{\ell=1}^3|\bm{a}^\ell\cdot\bm{q}|^2}}\right)\right\|_{H^{-1}(\omega)}^2\\
&\quad+\left\|-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}\left(\dfrac{\bm{a}^3\cdot\bm{q}}{\sqrt{\sum_{\ell=1}^3|\bm{a}^\ell\cdot\bm{q}|^2}}\right)\right\|_{L^{2}(\omega)}^2
\Bigg\}^{1/2}\\
&\ge\dfrac{1}{\kappa} \left\|-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}\left(\dfrac{\bm{a}^3\cdot\bm{q}}{\sqrt{\sum_{\ell=1}^3|\bm{a}^\ell\cdot\bm{q}|^2}}\right)\right\|_{L^{2}(\omega)}\\
&\ge\dfrac{\left(\min_{y \in \overline{\omega}}(\bm{a}^3\cdot\bm{q})\right)}{\kappa\sqrt{3\max\left\{\|\bm{a}^\ell \cdot\bm{q}\|_{\mathcal{C}^0(\overline{\omega})}^2;1\le\ell\le 3\right\}}}
\left(\int_{\omega} |-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}|^2 \, \mathrm{d} y\right)^{1/2}\\
&\ge \dfrac{\left(\min_{y \in \overline{\omega}}(\bm{a}^3\cdot\bm{q})\right)}{\kappa\sqrt{3\max\left\{\|\bm{a}^\ell \cdot\bm{q}\|_{\mathcal{C}^0(\overline{\omega})}^2;1\le\ell\le 3\right\}}} \|-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}\|_{L^2(\omega)},
\end{aligned}
\end{equation*}
so that we have the following estimate:
\begin{equation}
\label{bdd-3}
\|-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}\|_{L^2(\omega)}\le \kappa\dfrac{M_1\sqrt{3\max\left\{\|\bm{a}^\ell \cdot\bm{q}\|_{\mathcal{C}^0(\overline{\omega})}^2;1\le\ell\le 3\right\}}}{\left(\min_{y \in \overline{\omega}}(\bm{a}^3\cdot\bm{q})\right)}.
\end{equation}
Let us now evaluate the penalty term in the governing equations of Problem~\ref{problem2}. An application of formulas~\eqref{D+}, \eqref{D-}, \eqref{delta+}, Lemma~\ref{fdq-neg-part}, Lemma~\ref{geometry} and~\eqref{bdd-3} gives:
\begin{align*}
&\dfrac{1}{\kappa}\int_{\omega_1}\bm{\beta}(\bm{\zeta}^\varepsilon_\kappa) \cdot (-\varphi \delta_{\rho h}(\varphi \bm{\zeta}^\varepsilon_\kappa))\, \mathrm{d} y =-\dfrac{1}{\kappa}\int_{\omega_1}\left[-\varphi\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}\left(\dfrac{\bm{a}^i\cdot\bm{q}}{\sqrt{\sum_{\ell=1}^3|\bm{a}^\ell\cdot\bm{q}|^2}}\right)\right]\delta_{\rho h}(\varphi\zeta^\varepsilon_{\kappa,i})\, \mathrm{d} y\\
&=\dfrac{1}{\kappa}\int_{\omega_1}D_{\rho h}\left(-\varphi\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-} \left(\dfrac{\bm{a}^i\cdot\bm{q}}{\sqrt{\sum_{\ell=1}^3|\bm{a}^\ell\cdot\bm{q}|^2}}\right)\right) D_{\rho h}(\varphi\zeta^\varepsilon_{\kappa,i})\, \mathrm{d} y\\
&=\dfrac{1}{\kappa}\int_{\omega_1}\left[D_{\rho h}\left(-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-} \varphi\right) E_{\rho h}\left(\dfrac{\bm{a}^i\cdot\bm{q}}{\sqrt{\sum_{\ell=1}^3|\bm{a}^\ell\cdot\bm{q}|^2}}\right)\right] D_{\rho h}(\varphi\zeta^\varepsilon_{\kappa,i})\, \mathrm{d} y\\
&\quad+\dfrac{1}{\kappa}\int_{\omega_1} \left[\left(-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-} \varphi\right) D_{\rho h}\left(\dfrac{\bm{a}^i\cdot\bm{q}}{\sqrt{\sum_{\ell=1}^3|\bm{a}^\ell\cdot\bm{q}|^2}}\right)\right] D_{\rho h}(\varphi\zeta^\varepsilon_{\kappa,i})\, \mathrm{d} y\\
&=\dfrac{1}{\kappa}\int_{\omega_1}E_{\rho h}\left(\dfrac{1}{\sqrt{\sum_{\ell=1}^3|\bm{a}^\ell\cdot\bm{q}|^2}}\right)\left[D_{\rho h}\left(-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-} \varphi\right)\right] D_{\rho h}\left(\varphi \zeta^\varepsilon_{\kappa,i}\bm{a}^i\cdot\bm{q}\right)\, \mathrm{d} y\\
&\quad-\dfrac{1}{\kappa}\int_{\omega_1}E_{\rho h}\left(\dfrac{1}{\sqrt{\sum_{\ell=1}^3|\bm{a}^\ell\cdot\bm{q}|^2}}\right)\left[D_{\rho h}\left(-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-} \varphi\right)\right] (\varphi \zeta^\varepsilon_{\kappa,i}) D_{\rho h}\left(\bm{a}^i\cdot\bm{q}\right)\, \mathrm{d} y\\
&\quad+\dfrac{1}{\kappa}\int_{\omega_1} \left(-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-} \varphi\right)\left(D_{\rho h}\left(\dfrac{\bm{a}^i\cdot\bm{q}}{\sqrt{\sum_{\ell=1}^3|\bm{a}^\ell\cdot\bm{q}|^2}}\right) D_{\rho h}(\varphi\zeta^\varepsilon_{\kappa,i})\right)\, \mathrm{d} y\\
&=\dfrac{1}{\kappa}\int_{\omega_1}E_{\rho h}\left(\dfrac{1}{\sqrt{\sum_{\ell=1}^3|\bm{a}^\ell\cdot\bm{q}|^2}}\right)\left[D_{\rho h}\left(-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-} \varphi\right)\right] D_{\rho h}\left(\varphi (\bm{\theta}+\zeta^\varepsilon_{\kappa,i}\bm{a}^i)\cdot\bm{q}\right)\, \mathrm{d} y\\
&\quad-\dfrac{1}{\kappa}\int_{\omega_1}E_{\rho h}\left(\dfrac{1}{\sqrt{\sum_{\ell=1}^3|\bm{a}^\ell\cdot\bm{q}|^2}}\right)\left[D_{\rho h}\left(-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-} \varphi\right)\right] D_{\rho h}\left(\varphi \bm{\theta}\cdot\bm{q}\right)\, \mathrm{d} y\\
&\quad+\dfrac{1}{\kappa}\int_{\omega_1}\left(-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-} \varphi\right) D_{-\rho h}\left(E_{\rho h}\left(\dfrac{1}{\sqrt{\sum_{\ell=1}^3|\bm{a}^\ell\cdot\bm{q}|^2}}\right)(\varphi \zeta^\varepsilon_{\kappa,i}) D_{\rho h}\left(\bm{a}^i\cdot\bm{q}\right)\right)\, \mathrm{d} y\\
&\quad+\dfrac{1}{\kappa}\int_{\omega_1} \left(-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-} \varphi\right)\left(D_{\rho h}\left(\dfrac{\bm{a}^i\cdot\bm{q}}{\sqrt{\sum_{\ell=1}^3|\bm{a}^\ell\cdot\bm{q}|^2}}\right) D_{\rho h}(\varphi\zeta^\varepsilon_{\kappa,i})\right)\, \mathrm{d} y\\
&=\dfrac{1}{\kappa}\int_{\omega_1} E_{\rho h}\left(\dfrac{1}{\sqrt{\sum_{\ell=1}^3|\bm{a}^\ell\cdot\bm{q}|^2}}\right)\left[D_{\rho h}\left(-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-} \varphi\right)\right] D_{\rho h}\left(\varphi \{(\bm{\theta}+\zeta^\varepsilon_{\kappa,i}\bm{a}^i)\cdot\bm{q}\}^{+}\right)\, \mathrm{d} y\\
&\quad+\dfrac{1}{\kappa}\int_{\omega_1} E_{\rho h}\left(\dfrac{1}{\sqrt{\sum_{\ell=1}^3|\bm{a}^\ell\cdot\bm{q}|^2}}\right)\left|D_{\rho h}\left(-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-} \varphi\right)\right|^2\, \mathrm{d} y\\
&\quad+\dfrac{1}{\kappa}\int_{\omega_1}(-\varphi \{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}) D_{-\rho h}\left[E_{\rho h}\left(\dfrac{1}{\sqrt{\sum_{\ell=1}^3|\bm{a}^\ell\cdot\bm{q}|^2}}\right) D_{\rho h}(\varphi \bm{\theta}\cdot\bm{q})\right] \, \mathrm{d} y\\
&\quad+\dfrac{1}{\kappa}\int_{\omega_1}(-\varphi \{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}) D_{-\rho h}\left[E_{\rho h}\left(\dfrac{1}{\sqrt{\sum_{\ell=1}^3|\bm{a}^\ell\cdot\bm{q}|^2}}\right) \left(D_{\rho h}(\bm{a}^i\cdot\bm{q})\right) (\varphi\zeta^\varepsilon_{\kappa,i})\right] \, \mathrm{d} y\\
&\quad+\dfrac{1}{\kappa}\int_{\omega_1} \left(-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-} \varphi\right)\left(D_{\rho h}\left(\dfrac{\bm{a}^i\cdot\bm{q}}{\sqrt{\sum_{\ell=1}^3|\bm{a}^\ell\cdot\bm{q}|^2}}\right) D_{\rho h}(\varphi\zeta^\varepsilon_{\kappa,i})\right)\, \mathrm{d} y.
\end{align*}
Applying the latter computations, \eqref{bdd-3}, the fact that $\bm{\theta}\in\mathcal{C}^3(\overline{\omega};\mathbb{E}^3)$, Lemma~\ref{fdq-neg-part}, Lemma~\ref{geometry}, the assumption that $\min_{y \in \overline{\omega}}(\bm{a}^3\cdot\bm{q})>0$ and the fact that $\textup{supp }\varphi \subset\subset \omega_1$ to~\eqref{checkpoint-1} gives:
\begin{align*}
&\dfrac{\varepsilon\sqrt{a_0}}{c_0 c_e}\|D_{\rho h}(\varphi \bm{\zeta}^\varepsilon_\kappa)\|_{H^1(\omega_1)\times H^1(\omega_1)\times L^2(\omega_1)}^2\\
&\quad+\varepsilon\dfrac{\left(3\max\{\|\tilde{\bm{a}}^\ell\cdot\bm{q}\|_{\mathcal{C}^0(\overline{\tilde{\omega}})}^2;1\le \ell \le 3\}\right)^{-1/2}}{\kappa}\int_{\omega_1}\left|D_{\rho h}\left(-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-} \varphi\right)\right|^2\, \mathrm{d} y\\
&\le C\varepsilon(1+\|D_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)\|_{H^1(\omega_1)\times H^1(\omega_1)\times L^2(\omega_1)})\\
&\quad-\dfrac{\varepsilon}{\kappa}\int_{\omega_1}(-\varphi \{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}) D_{-\rho h}\left[E_{\rho h}\left(\dfrac{1}{\sqrt{\sum_{\ell=1}^3|\bm{a}^\ell\cdot\bm{q}|^2}}\right) D_{\rho h}(\varphi \bm{\theta}\cdot\bm{q})\right] \, \mathrm{d} y\\
&\quad-\dfrac{\varepsilon}{\kappa}\int_{\omega_1}(-\varphi \{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}) D_{-\rho h}\left[E_{\rho h}\left(\dfrac{1}{\sqrt{\sum_{\ell=1}^3|\bm{a}^\ell\cdot\bm{q}|^2}}\right) \left(D_{\rho h}(\bm{a}^i\cdot\bm{q})\right) (\varphi\zeta^\varepsilon_{\kappa,i})\right] \, \mathrm{d} y\\
&\quad-\dfrac{\varepsilon}{\kappa}\int_{\omega_1} \left(-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-} \varphi\right)\left(D_{\rho h}\left(\dfrac{\bm{a}^i\cdot\bm{q}}{\sqrt{\sum_{\ell=1}^3|\bm{a}^\ell\cdot\bm{q}|^2}}\right) D_{\rho h}(\varphi\zeta^\varepsilon_{\kappa,i})\right)\, \mathrm{d} y\\
&\le C\varepsilon(1+\|D_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)\|_{H^1(\omega_1)\times H^1(\omega_1)\times L^2(\omega_1)}),
\end{align*}
for some constant $C>0$ independent of $\varepsilon$, $\kappa$ and $h$. In conclusion, the latter computations can be summarized as follows:
\begin{equation}
\label{checkpoint-2}
\begin{aligned}
&\dfrac{\varepsilon\sqrt{a_0}}{c_0 c_e}\|D_{\rho h}(\varphi \bm{\zeta}^\varepsilon_\kappa)\|_{H^1(\omega_1)\times H^1(\omega_1)\times L^2(\omega_1)}^2\\
&\quad+\varepsilon\dfrac{\left(3\max\{\|\tilde{\bm{a}}^\ell\cdot\bm{q}\|_{\mathcal{C}^0(\overline{\tilde{\omega}})}^2;1\le \ell \le 3\}\right)^{-1/2}}{\kappa}\int_{\omega_1}\left|D_{\rho h}\left(-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-} \varphi\right)\right|^2\, \mathrm{d} y\\
&\le C\varepsilon(1+\|D_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)\|_{H^1(\omega_1)\times H^1(\omega_1)\times L^2(\omega_1)}).
\end{aligned}
\end{equation}
A consequence of~\eqref{checkpoint-2} is that
\begin{equation}
\label{conclusion-1}
\dfrac{\sqrt{a_0}}{c_0 c_e}\|D_{\rho h}(\varphi \bm{\zeta}^\varepsilon_\kappa)\|_{H^1(\omega_1)\times H^1(\omega_1)\times L^2(\omega_1)}^2
-C\|D_{\rho h}(\varphi\bm{\zeta}^\varepsilon_\kappa)\|_{H^1(\omega_1)\times H^1(\omega_1)\times L^2(\omega_1)}-C\le 0.
\end{equation}
Regarding $\|D_{\rho h}(\varphi \bm{\zeta}^\varepsilon_\kappa)\|_{H^1(\omega_1)\times H^1(\omega_1)\times L^2(\omega_1)}$ as the variable $x$ of the second-degree polynomial $\frac{\sqrt{a_0}}{c_0 c_e} x^2 -C x -C$, we observe that its discriminant $C^2+4C\frac{\sqrt{a_0}}{c_0 c_e}$ is positive. Therefore, the inequality~\eqref{conclusion-1} is satisfied for
\begin{equation}
\label{conclusion-2}
0\le \|D_{\rho h}(\varphi \bm{\zeta}^\varepsilon_\kappa)\|_{H^1(\omega_1)\times H^1(\omega_1)\times L^2(\omega_1)} \le \dfrac{C+\sqrt{C^2+4\frac{C\sqrt{a_0}}{c_0 c_e}}}{\frac{2 \sqrt{a_0}}{c_0 c_e}},
\end{equation}
where the upper bound is independent of $\varepsilon$, $\kappa$ and $h$. Applying~\eqref{conclusion-2} to~\eqref{checkpoint-2} gives that
\begin{equation}
\label{conclusion-3}
\dfrac{1}{\kappa}\left\|D_{\rho h}\left(-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-} \varphi\right)\right\|_{L^2(\omega_1)}^2 \le C,
\end{equation}
for some $C>0$ independent of $\varepsilon$, $\kappa$ and $h$.
An application of Theorem~3 of Section~5.8.2 of~\cite{Evans2010}, together with the fact that $\varphi$ is chosen so that its support has nonempty interior in $\omega$ and that there exists a set $U\subset \textup{supp }\varphi$ of nonzero measure such that $\varphi\equiv 1$ in $U$, shows that the sequence $\{\bm{\zeta}^\varepsilon_\kappa\}_{\kappa>0}$ is bounded in $H^2_{\textup{loc}}(\omega) \times H^2_{\textup{loc}}(\omega) \times H^1_{\textup{loc}}(\omega)$ independently of $\kappa$, that $\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-} \in H^1_{\textup{loc}}(\omega)$, and that
\begin{equation*}
\|-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}\|_{H^1(U)} \le C\sqrt{\kappa}.
\end{equation*}
Exploiting the fact that $(\bm{a}^i\cdot\bm{q}) \in \mathcal{C}^1(\overline{\omega})$ for all $1\le i \le 3$ and the assumption $\min_{y \in \overline{\omega}}(\bm{a}^3\cdot\bm{q})>0$, an application of the product rule in Sobolev spaces (cf., e.g., Proposition~9.4 of~\cite{Brez11}) together with~\eqref{conclusion-3} implies that each component of the vector field $\bm{\beta}(\varphi\bm{\zeta}^\varepsilon_\kappa)$ is of class $H^1_{\textup{loc}}(\omega)$ and that the following estimate holds:
\begin{equation*}
\label{conclusion-4}
\|\bm{\beta}(\bm{\zeta}^\varepsilon_\kappa)\|_{\bm{H}^1(U)}^2\le C\kappa,
\end{equation*}
for some $C>0$ independent of $\varepsilon$, $\kappa$ and $h$. This completes the proof.
\end{proof}
As a remark, we observe that the higher regularity of the negative part of the constraint has been established without resorting to Stampacchia's theorem~\cite{Stampacchia1965}. Moreover, we showed that the negative part approaches zero as $\kappa \to 0^+$ \emph{more rapidly} than what was inferred from the energy estimates in Theorem~\ref{ex-un-kappa}.
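For later reference, the decay rates obtained so far for the negative part of the constraint can be collected in a single display (the $L^2$ bound is~\eqref{bdd-3}, while the $H^1$ bound on $U$ was established above):
\begin{equation*}
\|\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}\|_{L^2(\omega)} \le C\kappa
\quad\textup{ and }\quad
\|\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}\|_{H^1(U)} \le C\sqrt{\kappa}.
\end{equation*}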
A straightforward consequence of~\eqref{conclusion-3} is that
\begin{equation*}
\bm{\zeta}^\varepsilon_\kappa \rightharpoonup \bm{\zeta}^\varepsilon,\quad\textup{ in }H^2(\omega_1) \times H^2(\omega_1) \times H^1(\omega_1) \textup{ as }\kappa\to0^+,
\end{equation*}
thus providing an alternative proof of the interior regularity for the solution of Problem~\ref{problem1} \emph{without} resorting, as was instead done in~\cite{Pie-2022-interior}, to the ``density property'' recalled in Theorem~\ref{density} (although in the proof of Theorem~\ref{aug:int} we exploited the \emph{sufficient conditions} ensuring the validity of the ``density property''), and \emph{without} assuming additional regularity for the tangential components of $\bm{p}^\varepsilon$.
The result established in Theorem~\ref{aug:int} actually shows that the solution of Problem~\ref{problem1} is the weak limit of the sequence of solutions of Problem~\ref{problem2} in the space $H^2(\omega_1) \times H^2(\omega_1) \times H^1(\omega_1)$.
Let us now show that the solution $\bm{\zeta}^\varepsilon_\kappa$ of Problem~\ref{problem2} enjoys the higher regularity established in Theorem~\ref{aug:int} up to the boundary of the domain $\omega$.
\begin{theorem}
\label{aug:bdry}
Assume that the boundary $\gamma$ of the domain $\omega$ is of class $\mathcal{C}^2$.
Assume that there exists a unit-norm vector $\bm{q} \in \mathbb{E}^3$ such that
\begin{equation*}
\min_{y \in \overline{\omega}} (\bm{\theta} (y) \cdot \bm{q}) > 0 \quad
\textup{ and } \quad
\min_{y \in \overline{\omega}} (\bm{a}_3 (y) \cdot \bm{q}) > 0.
\end{equation*}
Assume also that the vector field $\bm{f}^\varepsilon=(f^{i,\varepsilon})$ defining the applied body force density is such that $\bm{p}^\varepsilon=(p^{i,\varepsilon}) \in L^2(\omega) \times L^2(\omega) \times H^1(\omega)$. Define $\bm{H}(\omega):=H^2(\omega) \times H^2(\omega) \times H^1(\omega)$.
Then, the solution $\bm{\zeta}^\varepsilon_\kappa=(\zeta^\varepsilon_{\kappa,i})$ of Problem~\ref{problem2} is of class $\bm{V}_M(\omega)\cap \bm{H}(\omega)$.
\end{theorem}
\begin{proof}
Since the penalty term does not contain any derivatives, since the bilinear form appearing in the variational equations in Problem~\ref{problem2} is $\bm{V}_M(\omega)$-elliptic, and since its non-divergence form is known (cf., e.g., Theorem~4.5-1(b) of~\cite{Ciarlet2000}; see also~\cite{CiaSanPan1996}), we can apply the same argument for the augmentation of regularity near the boundary (cf., e.g., Theorem~4 on page~334 of~\cite{Evans2010}) using the same test vector field introduced in the proof of Theorem~\ref{aug:int} to recover the desired conclusion.
\end{proof}
In the same spirit as Theorem~4 on page~334 of~\cite{Evans2010}, the boundary value problem recovered in~\eqref{BVP} enters the proof of Theorem~\ref{aug:bdry} to establish the augmented regularity near a flat portion of the boundary.
As a remark, we observe that an application of Theorem~\ref{aug:int} and Theorem~\ref{aug:bdry} gives
\begin{equation}
\label{conclusion-5}
\bm{\zeta}^\varepsilon_\kappa \rightharpoonup \bm{\zeta}^\varepsilon,\quad\textup{ in }H^2(\omega)\times H^2(\omega) \times H^1(\omega) \textup{ as }\kappa \to 0^+,
\end{equation}
where we recall that $\bm{\zeta}^\varepsilon$ is the solution of Problem~\ref{problem1}. Besides, we have that
\begin{equation}
\label{conclusion-5.5}
\dfrac{1}{\kappa}\|\bm{\beta}(\bm{\zeta}^\varepsilon_\kappa)\|_{\bm{H}^1_0(\omega)}^2 \le C,
\end{equation}
for some $C>0$ independent of $\varepsilon$, $\kappa$ and $h$.
Furthermore, the estimate~\eqref{conclusion-2} can be extended up to the boundary, so that exploiting the compactness of $\overline{\omega}$ gives
\begin{equation}
\label{conclusion-6}
0\le \|\bm{\zeta}^\varepsilon_\kappa\|_{H^2(\omega)\times H^2(\omega)\times H^1(\omega)} \le \dfrac{C+\sqrt{C^2+4\frac{C\sqrt{a_0}}{c_0 c_e}}}{\frac{2 \sqrt{a_0}}{c_0 c_e}},
\end{equation}
for some $C>0$ independent of $\varepsilon$, $\kappa$ and $h$. Combining the lower semicontinuity of $\|\cdot\|_{H^2(\omega)\times H^2(\omega)\times H^1(\omega)}$ with~\eqref{conclusion-5} and~\eqref{conclusion-6} gives that
\begin{equation}
\label{conclusion-7}
\|\bm{\zeta}^\varepsilon\|_{H^2(\omega)\times H^2(\omega)\times H^1(\omega)} \le \liminf_{\kappa\to 0^+}\|\bm{\zeta}^\varepsilon_\kappa\|_{H^2(\omega)\times H^2(\omega)\times H^1(\omega)} \le \dfrac{C+\sqrt{C^2+4\frac{C\sqrt{a_0}}{c_0 c_e}}}{\frac{2 \sqrt{a_0}}{c_0 c_e}},
\end{equation}
thus asserting that the solution of Problem~\ref{problem1} is of class $\bm{H}(\omega)=H^2(\omega)\times H^2(\omega)\times H^1(\omega)$ and that it is bounded in $\bm{H}(\omega)$ independently of $\varepsilon$.
The results established in Theorem~\ref{aug:int} and Theorem~\ref{aug:bdry} actually improve Theorem~\ref{ex-un-kappa} as the solution of Problem~\ref{problem1} is proved to be the weak limit of the sequence of solutions of Problem~\ref{problem2} in the space $H^2(\omega) \times H^2(\omega) \times H^1(\omega)$.
Finally, we recall that the augmentation of regularity up to the boundary holds for domains with Lipschitz continuous boundary provided that $\omega$ is convex (viz. \cite{Eggleston1958} and~\cite{Grisvard2011}).
\section{Approximation of the solution of Problem~\ref{problem1} via the Penalty Method}
\label{approx:original}
In this section, we exploit the augmentation of regularity established in Theorem~\ref{aug:int}, Theorem~\ref{aug:bdry} as well as the subsequent remarks to sharpen the convergence~\eqref{beta-5} obtained as a result of Theorem~\ref{ex-un-kappa}.
\begin{theorem}
\label{th:beta-6}
Let $\kappa>0$ be given.
Let $\bm{\zeta}^\varepsilon$ be the solution of Problem~\ref{problem1} and let $\bm{\zeta}^\varepsilon_\kappa$ be the solution of Problem~\ref{problem2}. Then, there exists a constant $C>0$ independent of $\varepsilon$ and $\kappa$ such that
\begin{equation*}
\|\bm{\zeta}^\varepsilon-\bm{\zeta}^\varepsilon_\kappa\|_{\bm{V}_M(\omega)}\le C \sqrt{\kappa}.
\end{equation*}
\end{theorem}
\begin{proof}
For each $\bm{\eta}\in \bm{L}^2(\omega)$, define
\begin{equation*}
\tilde{\bm{\beta}}(\bm{\eta}):=\left(-\{(\bm{\theta}+\eta_j\bm{a}^j)\cdot\bm{q}\}^{-}\left(\dfrac{\bm{a}^i\cdot\bm{q}}{\sum_{\ell=1}^3|\bm{a}^\ell\cdot\bm{q}|^2}\right)\right)_{i=1}^3.
\end{equation*}
Define $P(\bm{\zeta}^\varepsilon_\kappa):=\bm{\zeta}^\varepsilon_\kappa-\tilde{\bm{\beta}}(\bm{\zeta}^\varepsilon_\kappa)$, and observe that $P(\bm{\zeta}^\varepsilon_\kappa) \in\bm{U}_M(\omega)$. Indeed, a direct computation gives
\begin{equation*}
\begin{aligned}
&\left(\bm{\theta}+\left[\zeta^\varepsilon_{\kappa,i}-\dfrac{-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}(\bm{a}^i\cdot\bm{q})}{\sum_{\ell=1}^3|\bm{a}^\ell\cdot\bm{q}|^2}\right]\bm{a}^i\right)\cdot\bm{q}
=((\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q})+\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}\\
&=\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{+}\ge0,
\end{aligned}
\end{equation*}
thus proving the claim.
Let us estimate
\begin{equation*}
\label{est-1}
\|\bm{\zeta}^\varepsilon_\kappa-\bm{\zeta}^\varepsilon\|_{\bm{V}_M(\omega)} \le \|\tilde{\bm{\beta}}(\bm{\zeta}^\varepsilon_\kappa)\|_{\bm{H}^1(\omega)}
+\|\bm{\zeta}^\varepsilon_\kappa-\tilde{\bm{\beta}}(\bm{\zeta}^\varepsilon_\kappa)-\bm{\zeta}^\varepsilon\|_{\bm{V}_M(\omega)} \le C \sqrt{\kappa}+\|\bm{\zeta}^\varepsilon_\kappa-\tilde{\bm{\beta}}(\bm{\zeta}^\varepsilon_\kappa)-\bm{\zeta}^\varepsilon\|_{\bm{V}_M(\omega)},
\end{equation*}
where the latter inequality holds thanks to~\eqref{conclusion-5.5}. Since $P(\bm{\zeta}^\varepsilon_\kappa)\in \bm{U}_M(\omega)$, an application of the uniform positive definiteness of the fourth order two-dimensional elasticity tensor $(a^{\alpha\beta\sigma\tau})$ (Theorem~3.1-1 of~\cite{Ciarlet2000}), Korn's inequality (Theorem~\ref{korn}) and~\eqref{conclusion-5.5} gives
\begin{align*}
&\dfrac{\varepsilon\sqrt{a_0}}{c_e c_0}\|P(\bm{\zeta}^\varepsilon_\kappa)-\bm{\zeta}^\varepsilon\|_{\bm{V}_M(\omega)}^2
\le \varepsilon\int_{\omega}a^{\alpha\beta\sigma\tau} \gamma_{\sigma \tau}(P(\bm{\zeta}^\varepsilon_\kappa)-\bm{\zeta}^\varepsilon) \gamma_{\alpha\beta}(P(\bm{\zeta}^\varepsilon_\kappa)-\bm{\zeta}^\varepsilon) \sqrt{a} \, \mathrm{d} y\\
&\le-\int_{\omega} \bm{p}^{\varepsilon} \cdot (P(\bm{\zeta}^\varepsilon_\kappa)-\bm{\zeta}^\varepsilon) \sqrt{a} \, \mathrm{d} y
+\varepsilon\int_{\omega} a^{\alpha\beta\sigma\tau} \gamma_{\sigma \tau}(P(\bm{\zeta}^\varepsilon_\kappa)) \gamma_{\alpha\beta}(P(\bm{\zeta}^\varepsilon_\kappa)-\bm{\zeta}^\varepsilon) \sqrt{a} \, \mathrm{d} y\\
&=-\int_{\omega} \bm{p}^{\varepsilon} \cdot (P(\bm{\zeta}^\varepsilon_\kappa)-\bm{\zeta}^\varepsilon) \sqrt{a} \, \mathrm{d} y
-\dfrac{\varepsilon}{\kappa} \int_{\omega} \bm{\beta}(\bm{\zeta}^\varepsilon_\kappa) \cdot (P(\bm{\zeta}^\varepsilon_\kappa)-\bm{\zeta}^\varepsilon) \, \mathrm{d} y
+\int_{\omega} \bm{p}^{\varepsilon} \cdot (P(\bm{\zeta}^\varepsilon_\kappa)-\bm{\zeta}^\varepsilon) \sqrt{a} \, \mathrm{d} y\\
&\quad-\varepsilon\int_{\omega}a^{\alpha\beta\sigma\tau} \gamma_{\sigma \tau}(\tilde{\bm{\beta}}(\bm{\zeta}^\varepsilon_\kappa)) \gamma_{\alpha\beta}(P(\bm{\zeta}^\varepsilon_\kappa)-\bm{\zeta}^\varepsilon) \sqrt{a} \, \mathrm{d} y\\
&=\dfrac{\varepsilon}{\kappa}\int_{\omega}\left(-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}\right) \left( \dfrac{\zeta^\varepsilon_i\bm{a}^i\cdot\bm{q}}{\sqrt{\sum_{\ell=1}^3|\bm{a}^\ell\cdot\bm{q}|^2}}\right) \, \mathrm{d} y
-\dfrac{\varepsilon}{\kappa}\int_{\omega}\left(-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}\right) \left( \dfrac{\zeta^\varepsilon_{\kappa,i}\bm{a}^i\cdot\bm{q}}{\sqrt{\sum_{\ell=1}^3|\bm{a}^\ell\cdot\bm{q}|^2}}\right) \, \mathrm{d} y\\
&\quad+\dfrac{\varepsilon}{\kappa}\int_{\omega} \left(-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-} \dfrac{\bm{a}^i\cdot\bm{q}}{\sqrt{\sum_{\ell=1}^3|\bm{a}^\ell\cdot\bm{q}|^2}}\right)_{i=1}^3 \cdot \left(-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-} \dfrac{\bm{a}^i\cdot\bm{q}}{\sum_{\ell=1}^3|\bm{a}^\ell\cdot\bm{q}|^2}\right)_{i=1}^3 \, \mathrm{d} y\\
&\quad-\varepsilon\int_{\omega}a^{\alpha\beta\sigma\tau} \gamma_{\sigma \tau}(\tilde{\bm{\beta}}(\bm{\zeta}^\varepsilon_\kappa)) \gamma_{\alpha\beta}(P(\bm{\zeta}^\varepsilon_\kappa)-\bm{\zeta}^\varepsilon) \sqrt{a} \, \mathrm{d} y\\
&\le-\dfrac{\varepsilon}{\kappa}\int_{\omega}\left(-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}\right) \left( \dfrac{\bm{\theta}\cdot\bm{q}}{\sqrt{\sum_{\ell=1}^3|\bm{a}^\ell\cdot\bm{q}|^2}}\right) \, \mathrm{d} y
+\dfrac{\varepsilon}{\kappa}\int_{\omega}\left(-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}\right) \left( \dfrac{\bm{\theta}\cdot\bm{q}}{\sqrt{\sum_{\ell=1}^3|\bm{a}^\ell\cdot\bm{q}|^2}}\right) \, \mathrm{d} y\\
&\quad-\dfrac{\varepsilon}{\kappa}\int_{\omega}\dfrac{|-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}|^2}{\sqrt{\sum_{\ell=1}^3|\bm{a}^\ell\cdot\bm{q}|^2}} \, \mathrm{d} y+\dfrac{\varepsilon}{\kappa}\int_{\omega}\dfrac{|-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}|^2}{\sqrt{\sum_{\ell=1}^3|\bm{a}^\ell\cdot\bm{q}|^2}} \, \mathrm{d} y\\
&\quad-\varepsilon\int_{\omega}a^{\alpha\beta\sigma\tau} \gamma_{\sigma \tau}(\tilde{\bm{\beta}}(\bm{\zeta}^\varepsilon_\kappa)) \gamma_{\alpha\beta}(P(\bm{\zeta}^\varepsilon_\kappa)-\bm{\zeta}^\varepsilon) \sqrt{a} \, \mathrm{d} y\\
&=-\varepsilon\int_{\omega}a^{\alpha\beta\sigma\tau} \gamma_{\sigma \tau}(\tilde{\bm{\beta}}(\bm{\zeta}^\varepsilon_\kappa)) \gamma_{\alpha\beta}(P(\bm{\zeta}^\varepsilon_\kappa)-\bm{\zeta}^\varepsilon) \sqrt{a} \, \mathrm{d} y\le M \varepsilon \|\tilde{\bm{\beta}}(\bm{\zeta}^\varepsilon_\kappa)\|_{\bm{H}^1_0(\omega)} \|P(\bm{\zeta}^\varepsilon_\kappa)-\bm{\zeta}^\varepsilon\|_{\bm{V}_M(\omega)} \sqrt{a_1}\\
&\le M C \sqrt{a_1} \varepsilon \sqrt{\kappa} \|P(\bm{\zeta}^\varepsilon_\kappa)-\bm{\zeta}^\varepsilon\|_{\bm{V}_M(\omega)}.
\end{align*}
In conclusion, we have that
\begin{equation*}
\|P(\bm{\zeta}^\varepsilon_\kappa)-\bm{\zeta}^\varepsilon\|_{\bm{V}_M(\omega)} \le M C c_0 c_e\dfrac{\sqrt{a_1}}{\sqrt{a_0}} \sqrt{\kappa},
\end{equation*}
so that
\begin{equation*}
\|\bm{\zeta}^\varepsilon_\kappa-\bm{\zeta}^\varepsilon\|_{\bm{V}_M(\omega)} \le C \sqrt{\kappa},
\end{equation*}
for some $C>0$ independent of $\varepsilon$ and $\kappa$.
\end{proof}
We note in passing that the proof of Theorem~\ref{th:beta-6} was established under the sole assumption that $\min_{y \in \overline{\omega}}(\bm{a}^3\cdot\bm{q})>0$.
Our assumption appears to be more realistic than the abstract assumption $(\ast)$ introduced on page~299 of Scholz's seminal paper~\cite{Scholz1984}.
Moreover, we notice that the conclusions in Lemma~3 and Theorem~4 of~\cite{Scholz1984} continue to hold in the vector-valued case.
\section{Numerical approximation of the solution of Problem~\ref{problem2} via the Finite Element Method}
\label{approx:penalty}
In this section we present a suitable Finite Element Method to approximate the solution of Problem~\ref{problem2}.
Following~\cite{PGCFEM} and~\cite{Brenner2008} (see also~\cite{ChaBat2011}, \cite{CheGloLi2003}, \cite{Ganesan2017} and~\cite{LiHuaAHuaQ2015}), we recall some basic terminology and definitions.
In what follows the letter $h$ denotes a quantity approaching zero. For brevity, the same notation $C$ (with or without subscripts) designates a positive constant independent of $\varepsilon$, $\kappa$ and $h$, which can take different values at different places.
We denote by $(\mathcal{T}_h)_{h>0}$ a \emph{family of triangulations of the polygonal domain} $\overline{\omega}$ made of triangles and we let $T$ denote any element of such a family.
Let us first recall, following~\cite{Brenner2008} and~\cite{PGCFEM}, the \emph{rigorous} definition of \emph{finite element} in $\mathbb{R}^n$, where $n \ge 1$ is an integer. A \emph{finite element} in $\mathbb{R}^n$ is a \emph{triple}
$(T,P, \mathcal{N})$ where:
(i) $T$ is a closed subset of $\mathbb{R}^n$ with non-empty interior and Lipschitz-continuous boundary,
(ii) $P$ is a finite dimensional space of real-valued functions defined over $T$,
(iii) $\mathcal{N}$ is a finite set of linearly independent linear forms $N_i$, $1 \le i \le \dim P$, defined over the space $P$.
By definition, it is assumed that the set $\mathcal{N}$ is \emph{$P$-unisolvent} in the following sense: given any real scalars $\alpha_i$, $1\le i \le \dim P$, there exists a unique function $g \in P$ which satisfies
$$
N_i(g)=\alpha_i, \quad 1 \le i \le \dim P.
$$
It is henceforth assumed that the \emph{degrees of freedom} $N_i$ lie in the dual space of a function space larger than $P$, such as a Sobolev space (see~\cite{Brenner2008}).
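As an elementary illustration of $P$-unisolvence (a minimal numerical sketch, not part of the original analysis), consider the triangle of type $(1)$: $P=\operatorname{span}\{1,x,y\}$ with point evaluations at the three vertices as degrees of freedom. Unisolvence is equivalent to the invertibility of the $3\times3$ evaluation matrix, which fails precisely when the vertices are collinear.

```python
import numpy as np

# P-unisolvence for the triangle of type (1): P = span{1, x, y},
# degrees of freedom N_i(g) = g(vertex_i).  The set {N_i} is
# P-unisolvent iff the 3x3 evaluation matrix is invertible,
# i.e. iff the three vertices are not collinear.
def unisolvence_matrix(vertices):
    # rows: degrees of freedom (point evaluations); columns: basis 1, x, y
    return np.array([[1.0, x, y] for (x, y) in vertices])

# reference triangle: non-collinear vertices -> unisolvent
ref = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
det_ref = np.linalg.det(unisolvence_matrix(ref))

# degenerate "triangle": collinear vertices -> not unisolvent
flat = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
det_flat = np.linalg.det(unisolvence_matrix(flat))
```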
For brevity we shall conform our terminology to the one of~\cite{PGCFEM}, calling the sole set $T$ a finite element.
Define the \emph{diameter} of any finite element $T$ as follows:
$$
h_T=\text{diam }T:= \max_{x,y \in T} |x-y|.
$$
Let us also define
$$
\rho_T:=\sup\{\text{diam }B; B \textup{ is a ball contained in }T\}.
$$
A triangulation $\mathcal{T}_h$ is said to be \emph{regular} (cf., e.g., \cite{PGCFEM}) if:
(i) There exists a constant $\sigma>0$, independent of $h$, such that
$$
\textup{for all }T \in \mathcal{T}_h,\quad \dfrac{h_T}{\rho_T} \le \sigma.
$$
(ii) The quantity $h:=\max\{h_T>0; T \in \mathcal{T}_h\} $ approaches zero.
A triangulation $\mathcal{T}_h$ is said to satisfy \emph{an inverse assumption} (cf., e.g., \cite{PGCFEM}) if there exists a constant $\upsilon>0$ such that
$$
\textup{for all }T \in \mathcal{T}_h,\quad \dfrac{h}{h_T} \le \upsilon.
$$
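The shape ratio $h_T/\rho_T$ appearing in the regularity condition is easily computed for a concrete triangle (a minimal sketch, assuming the standard facts that $\rho_T$ equals twice the inradius and that the inradius is the area divided by the semi-perimeter); for the unit right triangle it equals $1+\sqrt{2}$.

```python
import math

# For a triangle T, h_T is the longest edge (its diameter) and
# rho_T = 2 * inradius is the diameter of the largest inscribed ball,
# with inradius r = Area / semi-perimeter.
def shape_ratio(a, b, c):
    """h_T / rho_T for the triangle with vertices a, b, c."""
    d = lambda p, q: math.dist(p, q)
    e1, e2, e3 = d(a, b), d(b, c), d(c, a)
    h_T = max(e1, e2, e3)
    s = 0.5 * (e1 + e2 + e3)                              # semi-perimeter
    area = math.sqrt(s * (s - e1) * (s - e2) * (s - e3))  # Heron's formula
    rho_T = 2.0 * area / s                                # 2 * inradius
    return h_T / rho_T

# unit right triangle: h_T = sqrt(2) and rho_T = 2 - sqrt(2),
# so h_T / rho_T = 1 + sqrt(2)
ratio = shape_ratio((0, 0), (1, 0), (0, 1))
```

A regular family keeps this ratio uniformly bounded by $\sigma$ as the mesh is refined.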
We assume that the finite elements $(K, P_K, \Sigma_K)$, $K \in \bigcup_{h>0}\mathcal{T}_h$, are of class $\mathcal{C}^0$ and are affine (cf. Section~2.3 of~\cite{PGCFEM}), in the sense that they are affine equivalent to a single reference element $(\hat{K}, \hat{P}, \hat{\Sigma})$.
The forthcoming finite element analysis will be carried out using triangles of type $(1)$ (see Figure~2.2.1 of~\cite{PGCFEM}) to approximate the components of the solution of Problem~\ref{problem2}. In this case, the set $\mathcal{V}_h$ consists of all the vertices of the triangulation $\mathcal{T}_h$.
Let $V_{1,h}$, $V_{2,h}$ and $V_{3,h}$ be three finite dimensional spaces such that $V_{\alpha,h}\subset H^1_0(\omega)$ and $V_{3,h} \subset L^2(\omega)$.
Define
$$
\bm{V}_h:=V_{1,h} \times V_{2,h}\times V_{3,h},
$$
and observe that $\bm{V}_h \subset \bm{V}_M(\omega)$.
Let us now define the $\bm{V}_h$ interpolation operator $\bm{\Pi}_h:\bm{\mathcal{C}}^0(\overline{\omega})\to\bm{V}_h$ as follows
$$
\bm{\Pi}_h \bm{\xi}:=\left(\Pi_{1,h} \xi_1, \Pi_{2,h} \xi_2, \Pi_{3,h} \xi_3\right)\quad\textup{ for all }\bm{\xi}=(\xi_i)\in \bm{\mathcal{C}}^0(\overline{\omega}),
$$
where $\Pi_{i,h}$ is the standard $V_{i,h}$ interpolation operator (cf., e.g., \cite{PGCFEM} and~\cite{Brenner2008}).
The interpolation operator $\bm{\Pi}_h$ thus satisfies the following property:
\begin{equation*}
(\Pi_{j,h} \xi_j)(p)=\xi_j(p)\quad\textup{ for all integers }1 \le j \le 3 \textup{ and all vertices }p \in \mathcal{V}_h.
\end{equation*}
Recall that
$$
\bm{H}(\omega)=H^2(\omega)\times H^2(\omega) \times H^1(\omega)
$$
and that it is equipped with the norm:
$$
\|\bm{\xi}\|_{\bm{H}(\omega)}=\|\xi_1\|_{H^2(\omega)}+\|\xi_2\|_{H^2(\omega)}+\|\xi_3\|_{H^1(\omega)} \quad\textup{ for all } \bm{\xi}=(\xi_i)\in \bm{H}(\omega).
$$
An application of Theorem~3.2.1 of~\cite{PGCFEM} (see also Theorem~4.4.20 of~\cite{Brenner2008}) yields
\begin{equation}
\label{Pih}
\|\bm{\xi}-\bm{\Pi}_h \bm{\xi}\|_{\bm{V}_M(\omega)} \le C h |\bm{\xi}|_{\bm{H}(\omega)},
\end{equation}
for all $\bm{\xi}\in \bm{H}(\omega)\cap \bm{V}_M(\omega)$, where $|\cdot|_{\bm{H}(\omega)}$ denotes the semi-norm associated with the norm $\|\cdot\|_{\bm{H}(\omega)}$.
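The first-order estimate~\eqref{Pih} can be observed numerically in the simplest one-dimensional analogue (a sketch under these simplifying assumptions, not the two-dimensional setting of the text): the $H^1$ seminorm error of the piecewise linear nodal interpolant of a smooth function is halved when $h$ is halved.

```python
import math

def h1_seminorm_error(n):
    """|f - I_h f|_{H^1(0,1)} for f(x) = sin(pi x) and its P1 nodal
    interpolant on a uniform grid with n cells, via midpoint quadrature."""
    h = 1.0 / n
    err2 = 0.0
    m = 64  # quadrature points per cell
    for k in range(n):
        x0, x1 = k * h, (k + 1) * h
        # the interpolant has constant slope on each cell
        slope = (math.sin(math.pi * x1) - math.sin(math.pi * x0)) / h
        for j in range(m):
            x = x0 + (j + 0.5) * h / m
            diff = math.pi * math.cos(math.pi * x) - slope
            err2 += diff * diff * (h / m)
    return math.sqrt(err2)

e1, e2 = h1_seminorm_error(16), h1_seminorm_error(32)
rate = e1 / e2  # close to 2, i.e. O(h) convergence as in the estimate above
```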
For each $h>0$, denote the discretization of the elliptic operator $\bm{A}^\varepsilon:\bm{V}_M(\omega) \to \bm{V}'_M(\omega)$ over the triangulation $\mathcal{T}_h$ by $\bm{A}^{\varepsilon,h}$. The linear mapping $\bm{A}^{\varepsilon,h}: \bm{V}_h \to \bm{V}_h$ is defined by
\begin{equation*}
\langle \bm{A}^{\varepsilon,h}\bm{\eta},\bm{\xi}\rangle_{\bm{V}'_M(\omega), \bm{V}_M(\omega)}:=\varepsilon\int_{\omega} a^{\alpha\beta\sigma\tau} \gamma_{\sigma\tau}(\bm{\eta})\gamma_{\alpha\beta}(\bm{\xi})\sqrt{a}\, \mathrm{d} y,\quad\textup{ for all }\bm{\eta}, \bm{\xi} \in \bm{V}_h.
\end{equation*}
For each $h>0$, denote the projection of $\bm{L}^2(\omega)$ onto $\bm{V}_h$ by $\bm{P}^h$. We have that the mapping $\bm{P}^h:\bm{L}^2(\omega) \to \bm{V}_h$ is defined by
\begin{equation*}
\bm{P}^h(\bm{\eta}):=\sum_{\ell=1}^{\dim \bm{V}_h} \left(\int_{\omega} \bm{\eta}\cdot\bm{e}_\ell\, \mathrm{d} y\right)\bm{e}_\ell,
\end{equation*}
where $\{\bm{e}_\ell\}_{\ell\ge1}$ is a Hilbert basis of $\bm{L}^2(\omega)$ whose first $\dim \bm{V}_h$ elements form an orthonormal basis of $\bm{V}_h$. We observe that the projection is defined in terms of the Fourier series of $\bm{\eta}$ (viz. Theorem~4.9-1 of~\cite{PGCLNFAA}).
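The projection formula can be illustrated in a scalar one-dimensional setting (a hedged sketch with the hypothetical choice $e_\ell(x)=\sqrt{2}\sin(\ell\pi x)$, an orthonormal basis of $L^2(0,1)$; the two-component vector structure of the text is dropped). Truncating the Fourier series realizes the best $L^2$ approximation from the finite-dimensional subspace, so the error decreases as the subspace grows.

```python
import math

def l2_project(f, N, m=2000):
    """Coefficients of the L^2(0,1) projection of f onto
    span{sqrt(2) sin(l*pi*x), 1 <= l <= N}, by composite midpoint rule."""
    coeffs = []
    for l in range(1, N + 1):
        e = lambda x, l=l: math.sqrt(2.0) * math.sin(l * math.pi * x)
        c = sum(f((j + 0.5) / m) * e((j + 0.5) / m) for j in range(m)) / m
        coeffs.append(c)
    return coeffs

def l2_error(f, coeffs, m=2000):
    """L^2 distance between f and its projection with the given coefficients."""
    err2 = 0.0
    for j in range(m):
        x = (j + 0.5) / m
        p = sum(c * math.sqrt(2.0) * math.sin((l + 1) * math.pi * x)
                for l, c in enumerate(coeffs))
        err2 += (f(x) - p) ** 2 / m
    return math.sqrt(err2)

f = lambda x: x * (1.0 - x)
# best-approximation property: the error decreases as the subspace grows
errs = [l2_error(f, l2_project(f, N)) for N in (1, 3, 5)]
```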
The discretized version of Problem~\ref{problem2} is formulated as follows.
\begin{customprob}{$\mathcal{P}_{M,\kappa}^{\varepsilon,h}(\omega)$}
\label{problem3}
Find $\bm{\zeta}^{\varepsilon,h}_\kappa=(\zeta^{\varepsilon,h}_{\kappa,i}) \in \bm{V}_h$ satisfying the following variational equations:
\begin{equation*}
\varepsilon \int_\omega a^{\alpha \beta \sigma \tau} \gamma_{\sigma \tau}(\bm{\zeta}^{\varepsilon,h}_\kappa) \gamma_{\alpha \beta} (\bm{\eta}) \sqrt{a} \, \mathrm{d} y
+\dfrac{\varepsilon}{\kappa}\int_{\omega} \bm{\beta}(\bm{\zeta}^{\varepsilon,h}_\kappa) \cdot \bm{\eta} \, \mathrm{d} y
= \int_\omega p^{i,\varepsilon} \eta_i \sqrt{a} \, \mathrm{d} y,
\end{equation*}
for all $\bm{\eta} = (\eta_i) \in \bm{V}_h$.
\bqed
\end{customprob}
It can be shown, thanks to an argument similar to the one exploited for establishing Theorem~\ref{ex-un-kappa}, that Problem~\ref{problem3} admits a unique solution $\bm{\zeta}^{\varepsilon,h}_\kappa \in\bm{V}_h$.
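To give a feel for the structure of Problem~$\mathcal{P}_{M,\kappa}^{\varepsilon,h}(\omega)$, the following sketch solves a drastically simplified one-dimensional analogue: a P1 discretization of $-u''=f$, $u(0)=u(1)=0$, with a lumped penalty term $\frac{1}{\kappa}\min(u,0)$ enforcing the constraint $u\ge0$. The scalar setting, the data $f=-10$, and the fixed-point active-set iteration are illustrative assumptions, not the membrane shell model of the text.

```python
# Simplified 1D analogue (illustrative assumptions, not the shell model):
# P1 FEM for -u'' = f on (0,1), u(0) = u(1) = 0, with a lumped penalty
# (1/kappa) * min(u, 0) pushing the solution back above the obstacle u = 0.
def solve_penalized(n, kappa, f=-10.0, iters=50):
    h = 1.0 / n
    u = [0.0] * (n - 1)  # interior nodal values
    for _ in range(iters):  # active-set fixed-point iteration
        active = [ui < 0.0 for ui in u]
        # tridiagonal system (A + (h/kappa) I_active) u = F
        a = [-1.0 / h] * (n - 1)  # sub-/super-diagonal of the stiffness matrix
        b = [2.0 / h + (h / kappa if act else 0.0) for act in active]
        rhs = [f * h] * (n - 1)
        # Thomas algorithm: forward elimination, then back substitution
        for i in range(1, n - 1):
            w = a[i] / b[i - 1]
            b[i] -= w * a[i - 1]
            rhs[i] -= w * rhs[i - 1]
        u_new = [0.0] * (n - 1)
        u_new[-1] = rhs[-1] / b[-1]
        for i in range(n - 3, -1, -1):
            u_new[i] = (rhs[i] - a[i] * u_new[i + 1]) / b[i]
        if u_new == u:  # active set has stabilized
            break
        u = u_new
    return u

# the constraint violation min(u) shrinks as kappa decreases,
# mirroring the vanishing of the penalty term in the text
viol = [-min(solve_penalized(32, k)) for k in (1e-2, 1e-4)]
```

Without the penalty the discrete solution would be uniformly negative (roughly $-5x(1-x)$); as $\kappa\to0^+$ the violation decays proportionally to $\kappa$.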
\begin{theorem}
\label{th:conv}
Let $\bm{\zeta}^\varepsilon_\kappa \in \bm{V}_M(\omega)$ be the solution of Problem~\ref{problem2}, and let $\bm{\zeta}^{\varepsilon,h}_\kappa \in \bm{V}_h$ be the solution of Problem~\ref{problem3}. Then there exists a constant $\tilde{C}>0$ independent of $\varepsilon$, $\kappa$ and $h$ for which the following estimate holds
\begin{equation*}
\label{est:conv}
\|\bm{\zeta}^\varepsilon_\kappa-\bm{\zeta}^{\varepsilon,h}_\kappa\|_{\bm{V}_M(\omega)}
\le \tilde{C} h\left(1+\dfrac{1}{\sqrt{\kappa}}\right).
\end{equation*}
\end{theorem}
\begin{proof}
Thanks to the boundedness of the sequences $\{\bm{\zeta}^\varepsilon_\kappa\}_{\kappa>0}$ and $\{\bm{\zeta}^\varepsilon\}_{\varepsilon>0}$ in $\bm{H}(\omega)$ (Theorem~\ref{ex-un-kappa}, Theorem~\ref{t:4}, Theorem~\ref{aug:int}, Theorem~\ref{aug:bdry}, \eqref{conclusion-7} and~\eqref{Pih}), we have that
\begin{equation*}
\|\bm{\zeta}^\varepsilon_\kappa-\bm{\Pi}_h \bm{\zeta}^\varepsilon_\kappa\|_{\bm{V}_M(\omega)} \le C h |\bm{\zeta}^\varepsilon_\kappa|_{\bm{H}(\omega)},
\end{equation*}
and the semi-norm on the right-hand side is bounded independently of $\kappa$ and $\varepsilon$ (see the remark after Theorem~\ref{aug:bdry}).
Thanks to the calculations carried out in Lemma~\ref{lem:beta} for establishing the monotonicity of the operator $\bm{\beta}$, we have that
\begin{align*}
&\dfrac{\sqrt{a_0}\varepsilon}{c_0 c_e}\|\bm{\zeta}^\varepsilon_\kappa - \bm{\zeta}^{\varepsilon,h}_\kappa\|_{\bm{V}_M(\omega)}^2 +
\dfrac{\varepsilon\kappa^{-1}}{\sqrt{3 \max\{\|\bm{a}^\ell \cdot\bm{q}\|_{\mathcal{C}^0(\overline{\omega})}^2;1\le \ell \le 3\}}}\int_{\omega}\left|\left(-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}\right) - \left(-\{(\bm{\theta}+\zeta^{\varepsilon,h}_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}\right)\right|^2\, \mathrm{d} y\\
&\le\varepsilon\int_{\omega} a^{\alpha\beta\sigma\tau}\gamma_{\sigma\tau}(\bm{\zeta}^\varepsilon_\kappa - \bm{\zeta}^{\varepsilon,h}_\kappa)\gamma_{\alpha \beta}(\bm{\zeta}^\varepsilon_\kappa - \bm{\zeta}^{\varepsilon,h}_\kappa)\sqrt{a} \, \mathrm{d} y\\
&\quad+\dfrac{\varepsilon}{\kappa}\int_{\omega} \left(\left[-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}\right] - \left[-\{(\bm{\theta}+\zeta^{\varepsilon,h}_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}\right]\right) \left(\dfrac{(\zeta^\varepsilon_{\kappa,i}-\zeta^{\varepsilon,h}_{\kappa,i})\bm{a}^i\cdot\bm{q}}{\sqrt{\sum_{\ell=1}^{3}|\bm{a}^\ell \cdot\bm{q}|^2}}\right) \, \mathrm{d} y\\
&\le \dfrac{\sqrt{a_0}\varepsilon}{2 c_0 c_e}\|\bm{\zeta}^\varepsilon_\kappa-\bm{\zeta}^{\varepsilon,h}_\kappa\|_{\bm{V}_M(\omega)}^2
+\dfrac{\varepsilon c_0 c_e a_1}{2\sqrt{a_0}}\|\bm{\zeta}^\varepsilon_\kappa-\bm{\zeta}^{\varepsilon,h}_\kappa\|_{\bm{V}_M(\omega)}^2\\
&\quad+\dfrac{\varepsilon\kappa^{-1}}{2\sqrt{3 \max\{\|\bm{a}^\ell \cdot\bm{q}\|_{\mathcal{C}^0(\overline{\omega})}^2;1\le \ell \le 3\}}}
\int_{\omega}\left|\left[-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}\right] - \left[-\{(\bm{\theta}+\zeta^{\varepsilon,h}_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}\right]\right|^2\, \mathrm{d} y\\
&\quad+\varepsilon\dfrac{3 \max\{\|\bm{a}^\ell \cdot\bm{q}\|_{\mathcal{C}^0(\overline{\omega})}^2;1\le \ell \le 3\}}{2\kappa \min_{y \in \overline{\omega}}(\bm{a}^3\cdot\bm{q})}\|\bm{\zeta}^\varepsilon_\kappa-\bm{\zeta}^{\varepsilon,h}_\kappa\|_{\bm{L}^2(\omega)}^2.
\end{align*}
Combining the latter inequalities with C\'ea's lemma (cf., e.g., Theorem~2.4.1 of~\cite{PGCFEM}) and~\eqref{Pih} gives
\begin{equation*}
\begin{aligned}
&\dfrac{\sqrt{a_0}\varepsilon}{2c_0 c_e}\|\bm{\zeta}^\varepsilon_\kappa - \bm{\zeta}^{\varepsilon,h}_\kappa\|_{\bm{V}_M(\omega)}^2 +
\dfrac{\varepsilon\kappa^{-1}}{2\sqrt{3 \max\{\|\bm{a}^\ell \cdot\bm{q}\|_{\mathcal{C}^0(\overline{\omega})}^2;1\le \ell \le 3\}}}\int_{\omega}\left|\left(-\{(\bm{\theta}+\zeta^\varepsilon_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}\right) - \left(-\{(\bm{\theta}+\zeta^{\varepsilon,h}_{\kappa,j}\bm{a}^j)\cdot\bm{q}\}^{-}\right)\right|^2\, \mathrm{d} y\\
&\le\dfrac{\varepsilon c_0 c_e a_1}{2\sqrt{a_0}}\|\bm{\zeta}^\varepsilon_\kappa-\bm{\Pi}_h\bm{\zeta}^\varepsilon_\kappa\|_{\bm{V}_M(\omega)}^2
+\varepsilon\dfrac{3 \max\{\|\bm{a}^\ell \cdot\bm{q}\|_{\mathcal{C}^0(\overline{\omega})}^2;1\le \ell \le 3\}}{2\kappa \min_{y \in \overline{\omega}}(\bm{a}^3\cdot\bm{q})}\|\bm{\zeta}^\varepsilon_\kappa-\bm{\Pi}_h\bm{\zeta}^\varepsilon_\kappa\|_{\bm{L}^2(\omega)}^2.
\end{aligned}
\end{equation*}
Letting
$$
\tilde{C}^2:=\dfrac{c_0 c_e}{\sqrt{a_0}}\max\left\{\dfrac{c_0 c_e a_1}{\sqrt{a_0}},\dfrac{3 \max\{\|\bm{a}^\ell \cdot\bm{q}\|_{\mathcal{C}^0(\overline{\omega})}^2;1\le \ell \le 3\}}{\min_{y \in \overline{\omega}}(\bm{a}^3\cdot\bm{q})}\right\}C^2,
$$
where $C>0$ is the constant appearing in~\eqref{Pih} or, equivalently, in Theorem~3.2.1 of~\cite{PGCFEM}, we obtain the sought estimate, namely,
\begin{equation*}
\|\bm{\zeta}^\varepsilon_\kappa - \bm{\zeta}^{\varepsilon,h}_\kappa\|_{\bm{V}_M(\omega)}^2
\le \tilde{C}^2 h^2\left(1+\dfrac{1}{\kappa}\right)|\bm{\zeta}^\varepsilon_\kappa|_{\bm{H}(\omega)}^2.
\end{equation*}
\end{proof}
\section{Numerical approximation of the solution of Problem~\ref{problem2} via the Brezis-Sibony iteration scheme}
\label{approx:BrezisSibony}
In view of Theorem~\ref{th:conv}, we are in a position to define the discrete nonlinear operator $\bm{N}^\varepsilon_h:\bm{V}_h \to\bm{V}_h$ by:
\begin{equation*}
\bm{N}^\varepsilon_h(\bm{\eta})=\bm{A}^\varepsilon_h\bm{\eta}+\dfrac{\varepsilon}{h^q}\bm{P}_h(\bm{\beta}(\bm{\eta}))-\bm{P}_h(\bm{p}^\varepsilon\sqrt{a}),
\end{equation*}
where the specialization $\kappa:=h^q$, with $0< q <2$, ensures the convergence of the sequence of solutions of Problem~\ref{problem3} to the solution of Problem~\ref{problem2} (Theorem~\ref{th:conv}).
If $\bm{\zeta}^{\varepsilon,h}_\kappa$ is the solution of Problem~\ref{problem3}, then $\bm{N}^\varepsilon_h(\bm{\zeta}^{\varepsilon,h}_\kappa)=\bm{0}$.
In this section we extend the validity of the scheme proposed by Brezis \& Sibony in~\cite{BrezisSibony1968} to approximate the solution of Problem~\ref{problem3} by means of an iterative scheme.
Critical to establishing the sought convergence is the inverse assumption stated in section~\ref{approx:penalty} which, we notice, was not exploited in the proof of Theorem~\ref{th:conv}. As a consequence of Theorem~3.2.6 of~\cite{PGCFEM}, the following \emph{inverse inequality} holds.
\begin{lemma}
\label{inv:ineq}
Let $h>0$ be given and let $\mathcal{T}_h$ be a regular triangulation of $\omega$ made of affine elements of class $\mathcal{C}^0$ (viz. section~\ref{approx:penalty}).
Then, the following inverse inequality holds
\begin{equation*}
\left(\sum_{K\in\mathcal{T}_h}|\bm{\eta}_h|_{\bm{V}_M(K)}^2\right)^{1/2} \le C_{\textup{inv}} h^{-1} \left(\sum_{K\in\mathcal{T}_h}|\bm{\eta}_h|_{\bm{L}^2(K)}^2\right)^{1/2},\quad\textup{ for all }\bm{\eta}_h\in \bm{V}_h,
\end{equation*}
for some $C_{\textup{inv}}>0$ independent of $h$.
\end{lemma}
\begin{proof}
An application of Theorem~3.2.6 of~\cite{PGCFEM} gives
\begin{equation*}
\left(\sum_{K\in\mathcal{T}_h}|\eta_h|_{H^1(K)}^2\right)^{1/2} \le C h^{-1} \left(\sum_{K\in\mathcal{T}_h}|\eta_h|_{L^2(K)}^2\right)^{1/2},
\end{equation*}
and the sought estimate follows straightforwardly.
\end{proof}
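The inverse inequality can be checked directly in a simplified setting. For continuous piecewise linear ($P_1$) functions on a uniform one-dimensional mesh of size $h$, an elementary computation gives $|v_h|_{H^1} \le \sqrt{12}\,h^{-1}\|v_h\|_{L^2}$. The sketch below (a one-dimensional analogue only, with exact elementwise quadrature; the mesh and nodal values are chosen arbitrarily) verifies this bound numerically.

```python
import numpy as np

rng = np.random.default_rng(1)

# Uniform 1D mesh on [0, 1]; v_h is continuous piecewise linear (P1),
# determined by its nodal values.
n_el = 64
h = 1.0 / n_el
vals = rng.standard_normal(n_el + 1)

# Exact elementwise integrals for a linear function with endpoint
# values a, b on an interval of length h:
#   int v^2    = h (a^2 + a b + b^2) / 3
#   int (v')^2 = (b - a)^2 / h
a, b = vals[:-1], vals[1:]
l2_sq = np.sum(h * (a**2 + a * b + b**2) / 3.0)
h1_sq = np.sum((b - a)**2 / h)

# Inverse inequality: |v_h|_{H^1} <= sqrt(12) h^{-1} ||v_h||_{L^2}.
C_inv = np.sqrt(12.0)
assert np.sqrt(h1_sq) <= C_inv / h * np.sqrt(l2_sq)
```

The key point, as in Lemma~\ref{inv:ineq}, is that on a fixed finite element space all norms are equivalent with constants that degenerate like $h^{-1}$ as the mesh is refined.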
We are thus in a position to establish the main result of this section, namely, the convergence of the Brezis-Sibony scheme for Problem~\ref{problem3}.
\begin{theorem}
\label{BrezisSybony}
Let us define, for the sake of simplicity, the vector field $\hat{\bm{\psi}}$ as follows
\begin{equation}
\label{psi:hat}
\hat{\bm{\psi}}:=\bm{\zeta}^{\varepsilon,h}_\kappa,
\end{equation}
and we let $\bm{\psi}_0 \in \bm{V}_h$ be arbitrarily chosen. Let $c_0>0$ be the constant of Korn's inequality (Theorem~\ref{korn}), let $c_e>0$ be the constant associated with the uniform positive-definiteness of the fourth order two-dimensional elasticity tensor $(a^{\alpha\beta\sigma\tau})$, let $C_{\textup{inv}}>0$ be the constant associated with the inverse inequality (Lemma~\ref{inv:ineq}), let $M>0$ be the sup norm of the fourth order two-dimensional elasticity tensor $(a^{\alpha\beta\sigma\tau})$, and let $a_0>0$ and $a_1>0$ be, respectively, the minimum and maximum of the function $a=\det(a_{\alpha\beta})$ introduced in section~\ref{sec1}.
Then, there exists a positive number $\Xi>0$ such that the sequence of vector fields $\{\bm{\psi}_k\}_{k=0}^\infty \subset \bm{V}_h$ defined by
\begin{equation}
\label{iterate}
\bm{\psi}_{k+1}:=\bm{\psi}_k-\Xi h^4 \bm{N}^\varepsilon_h(\bm{\psi}_k),
\end{equation}
satisfies
\begin{equation}
\label{conv:iter}
\|\hat{\bm{\psi}}-\bm{\psi}_{k+1}\|_{\bm{L}^2(\omega)}\le \sqrt{1-\rho'}\|\hat{\bm{\psi}}-\bm{\psi}_k\|_{\bm{L}^2(\omega)}, \quad\textup{ for all }k\ge 0,
\end{equation}
for some $\rho'=\rho'(h,\Xi) \in (0,1)$, whenever $h>0$ is such that
\begin{equation}
\label{h:bound}
h <\sqrt{\dfrac{c_0 c_e\left(MC_{\textup{inv}}^2\sqrt{a_1}+1\right)}{\sqrt{a_0}}},
\end{equation}
and $\Xi>0$ is such that
\begin{equation}
\label{xi:bound}
\Xi<\frac{2\sqrt{a_0}}{c_0 c_e \left(MC_{\textup{inv}}^2\sqrt{a_1}+1\right)^2}.
\end{equation}
\end{theorem}
\begin{proof}
To begin with, thanks to~\eqref{iterate} and the fact that $\bm{N}^\varepsilon_h(\hat{\bm{\psi}})=\bm{0}$ by~\eqref{psi:hat}, we compute
\begin{align*}
&\hat{\bm{\psi}}-\bm{\psi}_{k+1}=\hat{\bm{\psi}}-\bm{\psi}_k+h^4 \Xi \bm{N}^\varepsilon_h(\bm{\psi}_k)
=\hat{\bm{\psi}}-\bm{\psi}_k-h^4 \Xi \left(\bm{N}^\varepsilon_h(\hat{\bm{\psi}})-\bm{N}^\varepsilon_h(\bm{\psi}_k)\right)\\
&=\hat{\bm{\psi}}-\bm{\psi}_k-h^4 \Xi\left[\left(\bm{A}^\varepsilon_h \hat{\bm{\psi}} +\varepsilon h^{-q} \bm{P}_h(\bm{\beta}(\hat{\bm{\psi}}))-\bm{P}_h(\bm{p}^\varepsilon\sqrt{a})\right)-\left(\bm{A}^\varepsilon_h \bm{\psi}_k +\varepsilon h^{-q} \bm{P}_h(\bm{\beta}(\bm{\psi}_k))-\bm{P}_h(\bm{p}^\varepsilon \sqrt{a})\right)\right]\\
&=\hat{\bm{\psi}}-\bm{\psi}_k-h^4 \Xi\left[\bm{A}^\varepsilon_h(\hat{\bm{\psi}}-\bm{\psi}_k)+\varepsilon h^{-q}\bm{P}_h\left(\bm{\beta}(\hat{\bm{\psi}})-\bm{\beta}(\bm{\psi}_k)\right)\right].
\end{align*}
Define the operator $\bm{Q}_h:\bm{V}_h \to\bm{V}_h$ by
\begin{equation*}
\bm{Q}_h(\bm{\eta}):=\bm{A}^\varepsilon_h\bm{\eta}+\varepsilon h^{-q}\bm{P}_h\left(\bm{\beta}(\bm{\eta})\right),\quad\textup{ for all }\bm{\eta}\in\bm{V}_h.
\end{equation*}
Thanks to this newly introduced definition we can thus write
\begin{equation}
\label{BS1}
\hat{\bm{\psi}}-\bm{\psi}_{k+1}=\hat{\bm{\psi}}-\bm{\psi}_k-h^4 \Xi\left[\bm{Q}_h(\hat{\bm{\psi}})-\bm{Q}_h(\bm{\psi}_k)\right],\quad\textup{ for all }k\ge0.
\end{equation}
In view of~\eqref{BS1}, the uniform positive-definiteness of the fourth order two-dimensional elasticity tensor $(a^{\alpha\beta\sigma\tau})$ (Theorem~3.3-1 of~\cite{Ciarlet2000}), and Korn's inequality (Theorem~\ref{korn}), we compute
\begin{equation}
\label{BS1.1}
\begin{aligned}
&\|\hat{\bm{\psi}}-\bm{\psi}_{k+1}\|_{\bm{L}^2(\omega)}^2=\|\hat{\bm{\psi}}-\bm{\psi}_k\|_{\bm{L}^2(\omega)}^2
+h^8 \Xi^2\|\bm{Q}_h(\hat{\bm{\psi}})-\bm{Q}_h(\bm{\psi}_k)\|_{\bm{L}^2(\omega)}^2\\
&\quad-2h^4 \Xi \int_{\omega} (\hat{\bm{\psi}}-\bm{\psi}_k) \cdot \left(\bm{Q}_h(\hat{\bm{\psi}})-\bm{Q}_h(\bm{\psi}_k)\right) \, \mathrm{d} y\\
&=\|\hat{\bm{\psi}}-\bm{\psi}_k\|_{\bm{L}^2(\omega)}^2
+h^8 \Xi^2\|\bm{Q}_h(\hat{\bm{\psi}})-\bm{Q}_h(\bm{\psi}_k)\|_{\bm{L}^2(\omega)}^2
-2h^4 \Xi \int_{\omega} (\hat{\bm{\psi}}-\bm{\psi}_k) \cdot \bm{A}^\varepsilon_h(\hat{\bm{\psi}}-\bm{\psi}_k) \, \mathrm{d} y\\
&\quad-2\varepsilon h^{4-q} \Xi \int_{\omega} (\hat{\bm{\psi}}-\bm{\psi}_k) \cdot \left(\bm{P}_h(\bm{\beta}(\hat{\bm{\psi}}))-\bm{P}_h(\bm{\beta}(\bm{\psi}_k))\right) \, \mathrm{d} y\\
&\le \|\hat{\bm{\psi}}-\bm{\psi}_k\|_{\bm{L}^2(\omega)}^2 +h^8 \Xi^2\|\bm{Q}_h(\hat{\bm{\psi}})-\bm{Q}_h(\bm{\psi}_k)\|_{\bm{L}^2(\omega)}^2
-2 h^4 \Xi \dfrac{\varepsilon\sqrt{a_0}}{c_0 c_e}\|\hat{\bm{\psi}}-\bm{\psi}_k\|_{\bm{L}^2(\omega)}^2\\
&\quad-2 \varepsilon h^{4-q} \Xi \int_{\omega} (\hat{\bm{\psi}}-\bm{\psi}_k) \cdot \bm{P}_h(\bm{\beta}(\hat{\bm{\psi}})-\bm{\beta}(\bm{\psi}_k)) \, \mathrm{d} y.
\end{aligned}
\end{equation}
Let $\{\bm{e}_\ell\}_{\ell=1}^{\infty}$ be a Hilbert basis in $\bm{L}^2(\omega)$.
By the theory of Fourier series (cf., e.g., Theorem~4.9-1 of~\cite{PGCLNFAA}), we have that the last integral term can be rewritten as follows:
\begin{equation}
\label{BS1.2}
\begin{aligned}
&\int_{\omega} (\hat{\bm{\psi}}-\bm{\psi}_k) \cdot \left(\bm{P}_h(\bm{\beta}(\hat{\bm{\psi}})-\bm{\beta}(\bm{\psi}_k))\right) \, \mathrm{d} y
=\int_{\omega} (\hat{\bm{\psi}}-\bm{\psi}_k) \cdot \left(\sum_{\ell=1}^{\dim\bm{V}_h} \left(\int_{\omega} (\bm{\beta}(\hat{\bm{\psi}})-\bm{\beta}(\bm{\psi}_k)) \cdot \bm{e}_\ell\, \mathrm{d} y\right)\bm{e}_\ell\right) \, \mathrm{d} y\\
&=\sum_{\ell=1}^{\dim\bm{V}_h}\left\{\left(\int_{\omega} (\hat{\bm{\psi}}-\bm{\psi}_k) \cdot \bm{e}_\ell \, \mathrm{d} y\right)\left(\int_{\omega} (\bm{\beta}(\hat{\bm{\psi}})-\bm{\beta}(\bm{\psi}_k)) \cdot \bm{e}_\ell\, \mathrm{d} y\right)\right\}\\
&=\int_{\omega}(\bm{\beta}(\hat{\bm{\psi}})-\bm{\beta}(\bm{\psi}_k)) \cdot \left(\sum_{\ell=1}^{\dim\bm{V}_h} \left(\int_{\omega}(\hat{\bm{\psi}}-\bm{\psi}_k) \cdot\bm{e}_\ell \, \mathrm{d} y\right)\bm{e}_\ell\right) \, \mathrm{d} y
=\int_{\omega}(\bm{\beta}(\hat{\bm{\psi}})-\bm{\beta}(\bm{\psi}_k)) \cdot (\hat{\bm{\psi}}-\bm{\psi}_k)\, \mathrm{d} y \ge 0,
\end{aligned}
\end{equation}
where the last equality holds thanks to the fact that $\hat{\bm{\psi}}, \bm{\psi}_k \in\bm{V}_h$, and the final inequality follows from the monotonicity of the operator $\bm{\beta}$ (Lemma~\ref{lem:beta}).
For each $k\ge 0$, let us now estimate
\begin{align*}
&\|\bm{Q}_h(\hat{\bm{\psi}})-\bm{Q}_h(\bm{\psi}_k)\|_{\bm{L}^2(\omega)}^2
=\left\|\left(\bm{A}^\varepsilon_h\hat{\bm{\psi}}+\varepsilon h^{-q}\bm{P}_h(\bm{\beta}(\hat{\bm{\psi}}))\right)-\left(\bm{A}^\varepsilon_h\bm{\psi}_k+\varepsilon h^{-q}\bm{P}_h(\bm{\beta}(\bm{\psi}_k))\right)\right\|_{\bm{L}^2(\omega)}^2\\
&=\left\|\bm{A}^\varepsilon_h(\hat{\bm{\psi}}-\bm{\psi}_k)+\varepsilon h^{-q}\bm{P}_h(\bm{\beta}(\hat{\bm{\psi}})-\bm{\beta}(\bm{\psi}_k))\right\|_{\bm{L}^2(\omega)}^2
=\|\bm{A}^\varepsilon_h(\hat{\bm{\psi}}-\bm{\psi}_k)\|_{\bm{L}^2(\omega)}^2\\
&\quad+ \varepsilon h^{-2q}\|\bm{P}_h(\bm{\beta}(\hat{\bm{\psi}})-\bm{\beta}(\bm{\psi}_k))\|_{\bm{L}^2(\omega)}^2
+2\varepsilon h^{-q}\int_{\omega} \left(\bm{A}^\varepsilon_h(\hat{\bm{\psi}}-\bm{\psi}_k)\right) \cdot\left(\bm{P}_h(\bm{\beta}(\hat{\bm{\psi}})-\bm{\beta}(\bm{\psi}_k))\right) \, \mathrm{d} y\\
&=\varepsilon\int_{\omega}a^{\alpha\beta\sigma\tau} \gamma_{\sigma \tau}(\hat{\bm{\psi}}-\bm{\psi}_k) \gamma_{\alpha\beta}(\bm{A}^\varepsilon_h(\hat{\bm{\psi}}-\bm{\psi}_k))\sqrt{a}\, \mathrm{d} y\\
&\quad+\varepsilon h^{-2q}\|\bm{P}_h(\bm{\beta}(\hat{\bm{\psi}})-\bm{\beta}(\bm{\psi}_k))\|_{\bm{L}^2(\omega)}^2
+2\varepsilon h^{-q}\int_{\omega} \left(\bm{A}^\varepsilon_h(\hat{\bm{\psi}}-\bm{\psi}_k)\right) \cdot\left(\bm{P}_h(\bm{\beta}(\hat{\bm{\psi}})-\bm{\beta}(\bm{\psi}_k))\right) \, \mathrm{d} y\\
&\le\varepsilon\int_{\omega}a^{\alpha\beta\sigma\tau} \gamma_{\sigma \tau}(\hat{\bm{\psi}}-\bm{\psi}_k) \gamma_{\alpha\beta}(\bm{A}^\varepsilon_h(\hat{\bm{\psi}}-\bm{\psi}_k))\sqrt{a}\, \mathrm{d} y
+\varepsilon h^{-2q}\|\bm{P}_h(\bm{\beta}(\hat{\bm{\psi}})-\bm{\beta}(\bm{\psi}_k))\|_{\bm{L}^2(\omega)}^2\\
&\quad+2\varepsilon h^{-q}\|\bm{A}^\varepsilon_h(\hat{\bm{\psi}}-\bm{\psi}_k)\|_{\bm{L}^2(\omega)} \|\bm{P}_h(\bm{\beta}(\hat{\bm{\psi}})-\bm{\beta}(\bm{\psi}_k))\|_{\bm{L}^2(\omega)}\\
&\le M\varepsilon\sqrt{a_1}\|\hat{\bm{\psi}}-\bm{\psi}_k\|_{\bm{V}_M(\omega)} \|\bm{A}^\varepsilon_h(\hat{\bm{\psi}}-\bm{\psi}_k)\|_{\bm{V}_M(\omega)}
+\varepsilon h^{-2q}\|\hat{\bm{\psi}}-\bm{\psi}_k\|_{\bm{L}^2(\omega)}^2\\
&\quad+2\varepsilon h^{-q}\|\bm{A}^\varepsilon_h(\hat{\bm{\psi}}-\bm{\psi}_k)\|_{\bm{L}^2(\omega)} \|\hat{\bm{\psi}}-\bm{\psi}_k\|_{\bm{L}^2(\omega)},
\end{align*}
where the penultimate estimate is due to the continuity of the bilinear form, and the last estimate is due to the fact that the projection $\bm{P}_h$ and the operator $\bm{\beta}$ are non-expansive mappings (cf., e.g., Theorem~4.3-1(c) of~\cite{PGCLNFAA} and Lemma~\ref{lem:beta}). To sum up, we have shown that
\begin{equation}
\label{BS2}
\begin{aligned}
&\|\bm{Q}_h(\hat{\bm{\psi}})-\bm{Q}_h(\bm{\psi}_k)\|_{\bm{L}^2(\omega)}^2\le M\varepsilon\sqrt{a_1}\|\hat{\bm{\psi}}-\bm{\psi}_k\|_{\bm{V}_M(\omega)} \|\bm{A}^\varepsilon_h(\hat{\bm{\psi}}-\bm{\psi}_k)\|_{\bm{V}_M(\omega)}
+\varepsilon h^{-2q}\|\hat{\bm{\psi}}-\bm{\psi}_k\|_{\bm{L}^2(\omega)}^2\\
&\quad+2\varepsilon h^{-q}\|\bm{A}^\varepsilon_h(\hat{\bm{\psi}}-\bm{\psi}_k)\|_{\bm{L}^2(\omega)} \|\hat{\bm{\psi}}-\bm{\psi}_k\|_{\bm{L}^2(\omega)}.
\end{aligned}
\end{equation}
Thanks to the inverse property (Lemma~\ref{inv:ineq}), we have that
\begin{equation}
\label{BS3}
\|\bm{A}^\varepsilon_h(\hat{\bm{\psi}}-\bm{\psi}_k)\|_{\bm{V}_M(\omega)}\le \dfrac{C_{\textup{inv}}}{h}\|\bm{A}^\varepsilon_h(\hat{\bm{\psi}}-\bm{\psi}_k)\|_{\bm{L}^2(\omega)}.
\end{equation}
An application of~\eqref{BS3} gives
\begin{align*}
&\|\bm{A}^\varepsilon_h(\hat{\bm{\psi}}-\bm{\psi}_k)\|_{\bm{L}^2(\omega)}^2
=\varepsilon\int_{\omega} a^{\alpha\beta\sigma\tau} \gamma_{\sigma\tau}(\hat{\bm{\psi}}-\bm{\psi}_k) \gamma_{\alpha\beta}(\bm{A}^\varepsilon_h(\hat{\bm{\psi}}-\bm{\psi}_k)) \sqrt{a}\, \mathrm{d} y\\
&\le M\varepsilon\sqrt{a_1}\|\hat{\bm{\psi}}-\bm{\psi}_k\|_{\bm{V}_M(\omega)} \|\bm{A}^\varepsilon_h(\hat{\bm{\psi}}-\bm{\psi}_k)\|_{\bm{V}_M(\omega)}\\
&\le \dfrac{MC_{\textup{inv}}^2\varepsilon\sqrt{a_1}}{h^2}\|\hat{\bm{\psi}}-\bm{\psi}_k\|_{\bm{L}^2(\omega)}\|\bm{A}^\varepsilon_h(\hat{\bm{\psi}}-\bm{\psi}_k)\|_{\bm{L}^2(\omega)}.
\end{align*}
The latter in turn implies that
\begin{equation}
\label{BS4}
\|\bm{A}^\varepsilon_h(\hat{\bm{\psi}}-\bm{\psi}_k)\|_{\bm{L}^2(\omega)}
\le \dfrac{MC_{\textup{inv}}^2\varepsilon\sqrt{a_1}}{h^2}\|\hat{\bm{\psi}}-\bm{\psi}_k\|_{\bm{L}^2(\omega)}.
\end{equation}
Thanks to~\eqref{BS3}, \eqref{BS4}, the inverse inequality (Lemma~\ref{inv:ineq}) and the fact that $0< q<2$, we are able to estimate the right-hand side of~\eqref{BS2} as follows:
\begin{equation*}
\begin{aligned}
&M\varepsilon\sqrt{a_1}\|\hat{\bm{\psi}}-\bm{\psi}_k\|_{\bm{V}_M(\omega)} \|\bm{A}^\varepsilon_h(\hat{\bm{\psi}}-\bm{\psi}_k)\|_{\bm{V}_M(\omega)}
+\varepsilon h^{-2q}\|\hat{\bm{\psi}}-\bm{\psi}_k\|_{\bm{L}^2(\omega)}^2+2\varepsilon h^{-q}\|\bm{A}^\varepsilon_h(\hat{\bm{\psi}}-\bm{\psi}_k)\|_{\bm{L}^2(\omega)} \|\hat{\bm{\psi}}-\bm{\psi}_k\|_{\bm{L}^2(\omega)}\\
&\le \dfrac{MC_{\textup{inv}}^2\varepsilon\sqrt{a_1}}{h^2}\|\hat{\bm{\psi}}-\bm{\psi}_k\|_{\bm{L}^2(\omega)} \|\bm{A}^\varepsilon_h(\hat{\bm{\psi}}-\bm{\psi}_k)\|_{\bm{L}^2(\omega)}
+ \varepsilon h^{-2q}\|\hat{\bm{\psi}}-\bm{\psi}_k\|_{\bm{L}^2(\omega)}^2+\dfrac{2MC_{\textup{inv}}^2\varepsilon^2\sqrt{a_1}}{h^{2+q}} \|\hat{\bm{\psi}}-\bm{\psi}_k\|_{\bm{L}^2(\omega)}^2\\
&\le \left(\dfrac{M^2C_{\textup{inv}}^4\varepsilon^2 a_1}{h^4}+\varepsilon h^{-2q}+\dfrac{2MC_{\textup{inv}}^2\varepsilon^2\sqrt{a_1}}{h^{2+q}}\right)\|\hat{\bm{\psi}}-\bm{\psi}_k\|_{\bm{L}^2(\omega)}^2
\le h^{-4}\left(M^2C_{\textup{inv}}^4\varepsilon^2 a_1+\varepsilon+2MC_{\textup{inv}}^2\varepsilon^2\sqrt{a_1}\right)\|\hat{\bm{\psi}}-\bm{\psi}_k\|_{\bm{L}^2(\omega)}^2\\
&\le h^{-4}\varepsilon\left(MC_{\textup{inv}}^2\sqrt{a_1}+1\right)^2\|\hat{\bm{\psi}}-\bm{\psi}_k\|_{\bm{L}^2(\omega)}^2.
\end{aligned}
\end{equation*}
Combining the latter inequality with~\eqref{BS2} gives at once:
\begin{equation}
\label{BS5}
\|\bm{Q}_h(\hat{\bm{\psi}})-\bm{Q}_h(\bm{\psi}_k)\|_{\bm{L}^2(\omega)}^2
\le h^{-4}\varepsilon\left(MC_{\textup{inv}}^2\sqrt{a_1}+1\right)^2\|\hat{\bm{\psi}}-\bm{\psi}_k\|_{\bm{L}^2(\omega)}^2.
\end{equation}
Therefore, combining~\eqref{BS1.1}, \eqref{BS1.2} and~\eqref{BS5} gives
\begin{equation}
\label{BS6}
\begin{aligned}
&\|\hat{\bm{\psi}}-\bm{\psi}_{k+1}\|_{\bm{L}^2(\omega)}^2
\le \|\hat{\bm{\psi}}-\bm{\psi}_k\|_{\bm{L}^2(\omega)}^2 +h^8 \Xi^2\|\bm{Q}_h(\hat{\bm{\psi}})-\bm{Q}_h(\bm{\psi}_k)\|_{\bm{L}^2(\omega)}^2
-2 h^4 \Xi \dfrac{\varepsilon\sqrt{a_0}}{c_0 c_e}\|\hat{\bm{\psi}}-\bm{\psi}_k\|_{\bm{L}^2(\omega)}^2\\
&\le \left(1-2 h^4 \dfrac{\varepsilon\sqrt{a_0}}{c_0 c_e}\Xi+ \varepsilon h^4 \left(MC_{\textup{inv}}^2\sqrt{a_1}+1\right)^2\Xi^2\right) \|\hat{\bm{\psi}}-\bm{\psi}_k\|_{\bm{L}^2(\omega)}^2.
\end{aligned}
\end{equation}
Let us now consider the polynomial $p(\Xi):=1-2 \varepsilon h^4 \dfrac{\sqrt{a_0}}{c_0 c_e}\Xi+ \varepsilon h^4 \left(MC_{\textup{inv}}^2\sqrt{a_1}+1\right)^2\Xi^2$, and let us observe that its discriminant is such that
\begin{equation*}
\dfrac{\Delta}{4}=h^4\left(\varepsilon^2 \dfrac{a_0}{c_0^2 c_e^2} h^4-\varepsilon\left(MC_{\textup{inv}}^2\sqrt{a_1}+1\right)^2\right)
< \varepsilon h^4 \left(\dfrac{a_0}{c_0^2 c_e^2} h^4-\left(MC_{\textup{inv}}^2\sqrt{a_1}+1\right)^2\right)
\end{equation*}
and it is negative when
\begin{equation*}
h <\sqrt{\dfrac{c_0 c_e\left(MC_{\textup{inv}}^2\sqrt{a_1}+1\right)}{\sqrt{a_0}}}.
\end{equation*}
Therefore, thanks to~\eqref{h:bound}, we have, on the one hand, that $p(\Xi)>0$ for all $\Xi\in\mathbb{R}$.
On the other hand, we have that $p(\Xi)<1$ if and only if
\begin{equation*}
\Xi<\frac{2\sqrt{a_0}}{c_0 c_e \left(MC_{\textup{inv}}^2\sqrt{a_1}+1\right)^2},
\end{equation*}
as per our assumption~\eqref{xi:bound}. This means that, under the assumptions~\eqref{h:bound} and~\eqref{xi:bound}, the coefficient on the right-hand side of~\eqref{BS6} is a number between $0$ and $1$.
We thus define the number
\begin{equation*}
\rho':=1-\left(1-2 h^4 \Xi \dfrac{\varepsilon\sqrt{a_0}}{c_0 c_e}+ h^4 \varepsilon\Xi^2 \left(MC_{\textup{inv}}^2\sqrt{a_1}+1\right)^2\right) \in (0,1),
\end{equation*}
and~\eqref{BS6} becomes
\begin{equation*}
\label{BS7}
\|\hat{\bm{\psi}}-\bm{\psi}_{k+1}\|_{\bm{L}^2(\omega)}
\le \sqrt{1-\rho'} \|\hat{\bm{\psi}}-\bm{\psi}_k\|_{\bm{L}^2(\omega)},
\end{equation*}
and the proof is complete.
\end{proof}
Note in passing that iterating~\eqref{conv:iter} gives
\begin{equation*}
\label{conv:iter:2}
\|\hat{\bm{\psi}}-\bm{\psi}_{k+1}\|_{\bm{L}^2(\omega)}\le (1-\rho')^{\frac{k+1}{2}}\|\hat{\bm{\psi}}-\bm{\psi}_0\|_{\bm{L}^2(\omega)} \to 0,
\end{equation*}
as $k\to\infty$, since $(1-\rho') \in (0,1)$.
As a final remark, we observe that the iterative scheme~\eqref{iterate} is expected to converge very slowly.
This is due to the presence of the $h^4$ multiplicative term, which \emph{dampens} the convergence by making the norm $\|\bm{\psi}_{k+1}-\bm{\psi}_k\|_{\bm{L}^2(\omega)}$ small for all $k\ge 0$.
The dampening is due to the fact that the $h^4$ term does not compensate for the factor $\kappa^{-1}=h^{-q}$, $0<q<2$, appearing in the penalty term.
This means that the iterates depart only slowly from the initialisation $\bm{\psi}_0$, which is customarily chosen to be either $\bm{0}$ (viz. \cite{Scholz1984}) or the solution of the linearised version of the problem under consideration.
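The structure of the proof above, a damped fixed-point iteration on a strongly monotone Lipschitz operator, can be illustrated on a small finite-dimensional analogue. In the sketch below (an illustration only: the matrix, the penalty-like term $\bm{\beta}$, the zero $x^*$ and the step size $\tau$ are chosen ad hoc, with $\tau$ small enough that $1-2\tau\mu+\tau^2L^2<1$ for the monotonicity constant $\mu$ and Lipschitz constant $L$), the iterates contract geometrically toward the zero of the operator, mirroring~\eqref{conv:iter}.

```python
import numpy as np

# Toy strongly monotone Lipschitz operator N(x) = A x + beta(x) - f on R^2,
# where beta(x) = max(x, 0) (componentwise) mimics a monotone penalty term.
A = np.array([[2.0, 0.5],
              [0.5, 2.0]])          # SPD, smallest eigenvalue mu = 1.5
x_star = np.array([-1.0, -1.0])     # chosen zero of N: beta(x_star) = 0
f = A @ x_star

def N(x):
    return A @ x + np.maximum(x, 0.0) - f

# Damped iteration x_{k+1} = x_k - tau N(x_k); tau plays the role of the
# factor Xi h^4 in the scheme above.
tau = 0.2
x = np.array([1.0, 1.0])
errors = [float(np.linalg.norm(x - x_star))]
for _ in range(500):
    x = x - tau * N(x)
    errors.append(float(np.linalg.norm(x - x_star)))

# Geometric convergence toward the zero of N.
assert errors[-1] < 1e-6
assert all(e1 <= e0 + 1e-12 for e0, e1 in zip(errors, errors[1:]))
```

As in the theorem, the small step size guarantees contraction but makes each individual step short, which is precisely the slow-convergence phenomenon noted above.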
\section{Numerical Simulations}
\label{numerics}
In this last section of the paper, we present numerical simulations testing the convergence of the algorithms presented in section~\ref{approx:original} and in section~\ref{approx:penalty}.
Let $R>0$ be given. We consider as domain the open disk of radius $r_A:=\frac{R}{2}$:
\begin{equation*}
\omega:=\left\{y=(y_\alpha)\in \mathbb{R}^2;\sqrt{y_1^2+y_2^2}<r_A\right\}.
\end{equation*}
The middle surface of the membrane shell under consideration is a non-hemispherical spherical cap which is not in contact with the plane $\{x_3=0\}$. The parametrization we choose is $\bm{\theta} \in \mathcal{C}^2(\overline{\omega};\mathbb{E}^3)$ defined by:
\begin{equation}
\label{middlesurf}
\bm{\theta}(y):=\left(y_1, y_2, \sqrt{r_A^2-y_1^2-y_2^2}-0.85\right),\quad\textup{ for all } y=(y_\alpha) \in \overline{\omega}.
\end{equation}
Throughout this section, the values of $\varepsilon$, $\lambda$, $\mu$ and $R$ are fixed as follows
\begin{equation*}
\begin{aligned}
\varepsilon&=0.001,\\
\lambda&=0.4,\\
\mu&=0.012,\\
R&=1.0.
\end{aligned}
\end{equation*}
The applied body force density $\bm{p}^\varepsilon=(p^{i,\varepsilon})$ entering the first two batches of experiments is given by $\bm{p}^\varepsilon=(0,0,g(y))$, where
$$
g(y):=
\begin{cases}
-\frac{2\varepsilon}{25}(-5.0 y_1^2-5.0 y_2^2+0.295), &\textup{ if } |y|< 0.060,\\
0, &\textup{otherwise}.
\end{cases}
$$
We let $\bm{q}=(0,0,1)$.
For the third batch of experiments, the applied body force density $\bm{p}^\varepsilon=(p^{i,\varepsilon})$ entering the model is given by $\bm{p}^\varepsilon=(0,0,g_\ell(y))$, where $\ell$ is a nonnegative integer and
$$
g_\ell(y):=
\begin{cases}
-\frac{2\varepsilon}{25}(-5.0 y_1^2-5.0 y_2^2+(1+0.05 \ell)\times 0.295), &\textup{ if } |y|< 0.060,\\
0, &\textup{otherwise}.
\end{cases}
$$
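The two load densities above differ only in the scaling of the constant term. A direct transcription (an illustrative sketch only; the value $\varepsilon=0.001$ is the one fixed above, and the function names are ours) reads:

```python
import math

EPS = 0.001  # the thickness parameter fixed above

def g(y1, y2):
    """Transverse load density for the first two batches of experiments."""
    if math.hypot(y1, y2) < 0.060:
        return -2.0 * EPS / 25.0 * (-5.0 * y1**2 - 5.0 * y2**2 + 0.295)
    return 0.0

def g_ell(y1, y2, ell):
    """Transverse load density for the third batch; ell >= 0 scales the load."""
    if math.hypot(y1, y2) < 0.060:
        return -2.0 * EPS / 25.0 * (-5.0 * y1**2 - 5.0 * y2**2
                                    + (1.0 + 0.05 * ell) * 0.295)
    return 0.0

# g_0 coincides with g, and the load magnitude grows with ell.
assert g_ell(0.0, 0.0, 0) == g(0.0, 0.0)
assert abs(g_ell(0.0, 0.0, 2)) > abs(g_ell(0.0, 0.0, 0))
```

In particular, increasing $\ell$ pushes the membrane harder toward the obstacle, which is what the third batch of experiments exploits.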
The first batch of numerical experiments is meant to validate the claim of Theorem~\ref{ex-un-kappa}. We fix the mesh size $0<h\ll1$ and we let $\kappa=h^q$ in Problem~\ref{problem3}. Consider a sequence of exponents $\{q_\ell\}_{\ell=1}^\infty$ such that $q_\ell \to \infty$ as $\ell \to\infty$, and let $\bm{\zeta}^{\varepsilon,h}_{h^{q_n}}$ and $\bm{\zeta}^{\varepsilon,h}_{h^{q_m}}$ be the solutions of Problem~\ref{problem3} corresponding to $\kappa=h^{q_n}$ and $\kappa=h^{q_m}$, respectively.
The experiments whose results are shown in Figures~\ref{fig:1}--\ref{fig:4} and Tables~\ref{table:1}--\ref{table:4} below show that $\|\bm{\zeta}^{\varepsilon,h}_{h^{q_n}}-\bm{\zeta}^{\varepsilon,h}_{h^{q_m}}\|_{\bm{V}_M(\omega)} \to 0$ as $m,n \to\infty$. The algorithm stops when $\|\bm{\zeta}^{\varepsilon,h}_{h^{q_n}}-\bm{\zeta}^{\varepsilon,h}_{h^{q_m}}\|_{\bm{V}_M(\omega)}< 2.0 \times 10^{-6}$.
Each component $\zeta^{\varepsilon}_{\kappa,i}$ of the solution of Problem~\ref{problem2} is discretized by Lagrange triangles (cf., e.g., \cite{PGCFEM}), and homogeneous Dirichlet boundary conditions are imposed on all the components. The reason why the transverse component $\zeta^{\varepsilon}_{\kappa,3}$ is subjected to this boundary condition is that Problem~\ref{problem1} is derived by a rigorous asymptotic analysis starting from Koiter's model~\cite{CiaPie2018bCR,CiaPie2018b}. Since the transverse component of the solution of Koiter's model is of class $H^2_0(\omega)$, a boundary layer appears (viz. Section~7.3 of~\cite{Ciarlet2000}); this justifies our choice of boundary condition, without which the boundary would be pushed down to the obstacle when, clearly, this is not the case. The higher regularity of the solution of Problem~\ref{problem2} (viz.~\eqref{conclusion-7}), together with the higher regularity of the solution of Koiter's model for elliptic membranes subject to an obstacle, which can be derived by adapting the arguments of Theorem~\ref{aug:int} and Theorem~\ref{aug:bdry} to the proof in~\cite{Iosifescu1994}, further justifies this choice of boundary condition for the transverse component.
At each iteration, Problem~\ref{problem3} is solved by Newton's method.
The expressions of the geometrical parameters (i.e., the covariant and contravariant bases, the first fundamental form in covariant and contravariant components, the second fundamental form in covariant and mixed components, etc.) associated with the middle surface~\eqref{middlesurf} were computed by means of the MATLAB Symbolic Math Toolbox.
The numerical simulations are performed by means of the software FEniCS~\cite{Fenics2016} and the visualization is performed by means of the software ParaView~\cite{Ahrens2005}.
The plots were created by means of the \verb*|matplotlib| libraries from a Python~3.9.8 installation.
\begin{table}[H]
\begin{varwidth}[b]{0.6\linewidth}
\centering
\begin{tabular}{ c c l }
\toprule
$q_n$ &$q_m$& Error \\
\midrule
0.5 &1.0& 0.0009870505482918299\\
1.0 &1.5& 0.0005716399376703707\\
1.5 &2.0& 0.0003259806690885746\\
2.0 &2.5& 0.0001851239908727575\\
2.5 &3.0& 0.00010447749338622102\\
3.0 &3.5& 5.8930703167247946e-05\\
3.5 &4.0& 3.2967335748701205e-05\\
4.0 &4.5& 1.928599264387323e-05\\
4.5 &5.0& 1.0777591612766316e-05\\
5.0 &5.5& 5.9221185866507025e-06\\
5.5 &6.0& 3.545734351957462e-06\\
6.0 &6.5& 2.595888616762957e-06\\
6.5 &7.0& 2.1107364521837126e-06\\
7.0 &7.5& 1.958867087445544e-06\\
\bottomrule
\end{tabular}
\caption{Verification of Theorem~\ref{ex-un-kappa} for $h=0.03123779990753546$ fixed and $q$ varying}
\label{table:1}
\end{varwidth}
\begin{minipage}[b]{0.4\linewidth}
\centering
\includegraphics[width=\textwidth]{./figures/kappah00312}
\captionof{figure}{Error convergence}
\label{fig:1}
\end{minipage}
\end{table}
\begin{table}[H]
\begin{varwidth}[b]{0.6\linewidth}
\centering
\begin{tabular}{ c c l }
\toprule
$q_n$ &$q_m$& Error \\
\midrule
0.5 &1.0& 0.000997704692607139\\
1.0 &1.5& 0.0004068375815673128\\
1.5 &2.0& 0.0001630057036007356\\
2.0 &2.5& 6.487506105700657e-05\\
2.5 &3.0& 2.588698723839158e-05\\
3.0 &3.5& 1.0186383020625006e-05\\
3.5 &4.0& 4.304206713437714e-06\\
4.0 &4.5& 1.7181576983518609e-06\\
\bottomrule
\end{tabular}
\caption{Verification of Theorem~\ref{ex-un-kappa} for $h=0.015623491510797227$ fixed and $q$ varying}
\label{table:2}
\end{varwidth}
\begin{minipage}[b]{0.4\linewidth}
\centering
\includegraphics[width=\textwidth]{./figures/kappah00156}
\captionof{figure}{Error convergence}
\label{fig:2}
\end{minipage}
\end{table}
\begin{table}[H]
\begin{varwidth}[b]{0.6\linewidth}
\centering
\begin{tabular}{ c c l }
\toprule
$q_n$ &$q_m$& Error \\
\midrule
0.5 &1.0& 0.0008589020335743345\\
1.0 &1.5& 0.00024578598359837673\\
1.5 &2.0& 6.925018104565528e-05\\
2.0 &2.5& 1.943169742921457e-05\\
2.5 &3.0& 5.4843823503599594e-06\\
3.0 &3.5& 1.5246502664061824e-06\\
\bottomrule
\end{tabular}
\caption{Verification of Theorem~\ref{ex-un-kappa} for $h=0.007812398571396802$ fixed and $q$ varying}
\label{table:3}
\end{varwidth}
\begin{minipage}[b]{0.4\linewidth}
\centering
\includegraphics[width=\textwidth]{./figures/kappah00078}
\captionof{figure}{Error convergence}
\label{fig:3}
\end{minipage}
\end{table}
\begin{table}[H]
\begin{varwidth}[b]{0.6\linewidth}
\centering
\begin{tabular}{ c c l }
\toprule
$q_n$ &$q_m$& Error \\
\midrule
0.5 &1.0& 0.0006853937021067343\\
1.0 &1.5& 0.0001389653237533636\\
1.5 &2.0& 2.8767966192506784e-05\\
2.0 &2.5& 6.876855587433019e-06\\
2.5 &3.0& 1.1588084240614098e-06\\
\bottomrule
\end{tabular}
\caption{Verification of Theorem~\ref{ex-un-kappa} for $h=0.0039062328553237536$ fixed and $q$ varying}
\label{table:4}
\end{varwidth}
\begin{minipage}[b]{0.4\linewidth}
\centering
\includegraphics[width=\textwidth]{./figures/kappah00039}
\captionof{figure}{Error convergence}
\label{fig:4}
\end{minipage}
\end{table}
From the data patterns in Figures~\ref{fig:1}--\ref{fig:4} we observe that, as $h$ decreases (and so the penalty coefficient $\varepsilon/\kappa$ increases), fewer iterations are needed to reach the tolerance triggering the stopping criterion. This is consistent with the conclusion of Theorem~\ref{ex-un-kappa}.
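As a quick sanity check on the tabulated data, the successive errors in Table~\ref{table:1} can be inspected programmatically (the values below are copied verbatim from the table; we only verify that the error decreases monotonically down to the stopping tolerance):

```python
# Errors from Table 1 (h = 0.03123779990753546, q_n increasing by 0.5).
errors = [
    0.0009870505482918299, 0.0005716399376703707, 0.0003259806690885746,
    0.0001851239908727575, 0.00010447749338622102, 5.8930703167247946e-05,
    3.2967335748701205e-05, 1.928599264387323e-05, 1.0777591612766316e-05,
    5.9221185866507025e-06, 3.545734351957462e-06, 2.595888616762957e-06,
    2.1107364521837126e-06, 1.958867087445544e-06,
]

# Successive ratios: all below one, i.e. the error decreases with q.
ratios = [e1 / e0 for e0, e1 in zip(errors, errors[1:])]
assert all(r < 1.0 for r in ratios)

# The last error is below the stopping tolerance 2.0e-6.
assert errors[-1] < 2.0e-6
```

The ratios drift toward one for the largest exponents, which is why the stopping criterion, rather than a fixed iteration count, terminates the runs.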
The second batch of numerical experiments is meant to validate the claim of Theorem~\ref{th:conv}.
We show that, for a fixed $0< q <2$, the error $\|\bm{\zeta}^{\varepsilon}_{h^{q}}-\bm{\zeta}^{\varepsilon,h}_{h^{q}}\|_{\bm{V}_M(\omega)}$ tends to zero as $h \to 0^+$.
The results of these experiments are reported in Figure~\ref{fig:5} below.
\begin{figure}
\caption{Given $0<q<2$, the error $\|\bm{\zeta}^{\varepsilon}_{h^{q}}-\bm{\zeta}^{\varepsilon,h}_{h^{q}}\|_{\bm{V}_M(\omega)}$ tends to zero as $h\to0^+$.}
\label{fig:5}
\end{figure}
The third batch of numerical experiments validates the genuineness of the model.
We observe that the data exhibit the following pattern: for a fixed $0<h\ll1$ and a fixed $0< q <2$, the contact area increases as the intensity of the applied body force increases.
The results of these experiments are reported in Figure~\ref{fig:6} below.
\begin{figure}
\caption{Cross sections of a deformed membrane shell constrained not to cross a plane.
Given $0<h\ll1$ and $0<q<2$, we observe that the contact area increases as the magnitude of the applied body force increases.}
\label{fig:6}
\end{figure}
\section*{Conclusions and Commentary}
In this paper we established the convergence of a numerical scheme based on the Finite Element Method for approximating the solution of a set of variational inequalities modelling the deformation of a linearly elastic elliptic membrane shell subject to remaining confined in a prescribed half space.
Instead of directly approximating the solution of the variational inequalities, we approximate the solution of the corresponding penalized variational formulation with respect to the norm of the space where the solution of this penalized problem is sought. Moreover, we also show that the iterative method proposed by Brezis and Sibony can be applied to approximate the solution of the discrete penalized problem under consideration with respect, however, to a weaker norm.
The main novelty introduced in this paper is the overcoming of the condition $(\ast)$ introduced by Scholz~\cite{Scholz1984}.
Indeed, since the second order differential operator we are considering takes into account all the components of the solution, which is a vector field with values in the Euclidean space $\mathbb{E}^3$, it is not straightforward to re-write the condition $(\ast)$ introduced by Scholz~\cite{Scholz1984} in a vectorial context. We instead assume that the middle surface of the linearly elastic shell under consideration satisfies a certain geometrical assumption, which is the same assumption ensuring the validity of the ``density property'' introduced in~\cite{CiaMarPie2018b,CiaMarPie2018}.
The method we presented in this paper is, however, in general not applicable to fourth order obstacle problems like the one studied by L\'eger and Miara~\cite{Leger2008,Leger2010}, for which a suitable numerical scheme was studied in~\cite{PS}. The reason is that the solution of a fourth order obstacle problem is in general not of class $H^4$ over its domain of definition. This limitation was established by Caffarelli and his collaborators in the papers~\cite{Caffarelli1979,CafFriTor1982}.
In order to study the convergence of the finite element analysis addressed in the paper~\cite{PS}, an interior $\mathcal{C}^0$ penalty method based on a nonconforming finite element of Morley type had to be employed. The choice of the nonconforming finite element of Morley type is motivated by the fact that the highest regularity one can achieve for the considered problem is $H^3$ over the domain of definition. This regularity is sufficient to apply a suitable Green's formula for establishing the convergence of the finite element scheme in~\cite{PS}.
We also observe that the penalty method discussed in this paper is, in the context of a finite element analysis, more easily applicable than the primal-dual active set method~\cite{SunYuan2006}. The latter is particularly amenable to problems whose solution is a real-valued function, or a vector field for which the constraint bears only on the transverse component~\cite{PWDT2D, PWDT3D}.
\section*{Statements and Declarations}
All authors certify that they have no affiliations with or involvement in any organization or entity with any financial interest or non-financial interest in the subject matter or materials discussed in this manuscript.
\section*{Funding}
Partial financial support was received from the National Science Foundation under Grant Number DMS-2051032.
\end{document}
\begin{document}
\title{Harmonic models and spanning forests of residually finite groups}
\author{Lewis Bowen}
\author{Hanfeng Li}
\address{\hskip-\parindent
Lewis Bowen, Department of Mathematics, Texas A{\&}M University,
College Station, TX 77843-3368, U.S.A.}
\email{[email protected]}
\address{\hskip-\parindent
Hanfeng Li, Department of Mathematics, SUNY at Buffalo,
Buffalo, NY 14260-2900, U.S.A.}
\email{[email protected]}
\keywords{Harmonic model, algebraic dynamical system, Wired Spanning Forest, WSF, tree entropy}
\begin{abstract}
We prove a number of identities relating the sofic entropy of a certain class of non-expansive algebraic dynamical systems, the sofic entropy of the Wired Spanning Forest and the tree entropy of Cayley graphs of residually finite groups. We also show that homoclinic points and periodic points in harmonic models are dense under general conditions.
\end{abstract}
\date{August 13, 2011}
\maketitle
\tableofcontents
\section{Introduction}
This paper is concerned with two related dynamical systems. The quickest way to explain the connections is to start with finite graphs. So consider a finite connected simple graph $G=(V,E)$. The graph Laplacian $\Delta_G$ is an operator on $\ell^2(V,{\mathbb R})$ given by $\Delta_G x(v) = \sum_{w: \{v,w\}\in E} (x(v)-x(w))$. Let $\det^*(\Delta_G)$ be the product of the non-zero eigenvalues of $\Delta_G$. By the Matrix-Tree Theorem (see, e.g., \cite{GR} Lemma 13.2.4), $|V|^{-1}\det^*(\Delta_G)$ is the number of spanning trees in $G$. Recall that a subgraph $H$ of $G$ is {\em spanning} if it contains every vertex. It is a {\em forest} if it has no cycles. A connected forest is a {\em tree}. The number of spanning trees in $G$ is denoted $\tau(G)$.
There is another interpretation for this determinant. Consider the space $({\mathbb R}/{\mathbb Z})^V$ of all functions $x:V \to {\mathbb R}/{\mathbb Z}$. The operator $\Delta_G$ acts on this space as well by the same formula. An element $x\in ({\mathbb R}/{\mathbb Z})^V$ is {\em harmonic mod $1$} if $\Delta_G x \in {\mathbb Z}^V$. The set of harmonic mod $1$ elements is an additive group $X_G < ({\mathbb R}/{\mathbb Z})^V$ containing the constants. The number of connected components of this group is denoted $|X_G|$. Let ${\mathbb Z}^V_0$ be the set of integer-valued functions $x:V \to {\mathbb Z}$ with zero sum: $\sum_{v\in V} x(v)=0$. Because $\Delta_G$ maps ${\mathbb Z}_0^V$ injectively into itself, $\det^*(\Delta_G) = |V||X_G|$. To our knowledge, this was first observed in \cite{So98}. We provide more details in \S\ref{S-approximation}.
Our main results generalize the equalities $\det^*(\Delta_G)|V|^{-1} = \tau(G) = |X_G|$ to Cayley graphs of finitely generated residually finite groups.
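As a quick sanity check of these equalities (an illustrative Python snippet, assuming only NumPy), one can verify them on the 4-cycle, whose four spanning trees arise by deleting any single edge:

```python
import numpy as np

# Sanity check of det*(Delta_G) / |V| = tau(G) = det(reduced Laplacian)
# on the 4-cycle.
V = 4
Delta = 2.0 * np.eye(V)
for v in range(V):
    Delta[v, (v + 1) % V] -= 1.0
    Delta[v, (v - 1) % V] -= 1.0

eigs = np.linalg.eigvalsh(Delta)             # eigenvalues 0, 2, 2, 4
det_star = np.prod(eigs[eigs > 1e-9])        # product of nonzero eigenvalues
tau = round(np.linalg.det(Delta[1:, 1:]))    # Matrix-Tree: reduced Laplacian
print(det_star / V, tau)                     # both equal 4
```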
\subsection{Harmonic models and other algebraic systems}
We begin with a discussion of the appropriate analogue of the group of harmonic mod $1$ points and, to provide further context, the more general setting of group actions by automorphisms of a compact group. This classical subject has long been studied when the acting group is ${\mathbb Z}$ or ${\mathbb Z}^d$ \cite{Schmidt} though not as much is known in the general case.
Let $\Gamma$ be a countable group and $f$ be an element in the integral group ring ${\mathbb Z}\Gamma$. The action of $\Gamma$ on the discrete abelian group ${\mathbb Z}\Gamma/{\mathbb Z}\Gamma f$ induces an action by automorphisms on the Pontryagin dual $X_f:=\widehat{{\mathbb Z}\Gamma/{\mathbb Z}\Gamma f}$, the compact abelian group of all homomorphisms from ${\mathbb Z}\Gamma/{\mathbb Z}\Gamma f$ to ${\mathbb T}$, the unit circle in ${\mathbb C}$. Call the latter action $\alpha_f$. The topological entropy and measure-theoretic entropy (with respect to the Haar probability measure) coincide when $\Gamma$ is amenable \cite{Berg, De06}. Denote this number by $h(\alpha_f)$.
\begin{example} If $S \subset \Gamma \setminus \{e\}$ is a finite symmetric generating set, where $e$ denotes the identity element of $\Gamma$, and $f$ is defined by $f_s=-1$ if $s\in S$, $f_e=|S|$ and $f_s=0$ for $s \notin S \cup \{e\}$ then $X_f$ is canonically identified with the group $\{x \in ({\mathbb R}/{\mathbb Z})^\Gamma:~ \sum_{s\in S} x_{ts} = |S|x_t, \forall t\in \Gamma\}$ of harmonic mod $1$ points of the associated Cayley graph.
\end{example}
Yuzvinskii proved \cite{Yu65, Yu67} that if $\Gamma={\mathbb Z}$ then the entropy of $\alpha_f$ is calculable as follows. When $f=0$, $h(\alpha_f)=+\infty$. When $f\neq 0$, write $f$ as a Laurent polynomial $f = u^{-k}\sum_{j=0}^n c_j u^j$ by identifying $1 \in {\mathbb Z}=\Gamma$ with the indeterminate $u$ and requiring that $c_nc_0\ne 0, n\ge 0$. If $\lambda_1,\ldots, \lambda_n$ are the roots of $\sum_{j=0}^n c_j u^j$, then
$$h(\alpha_f) = \log |c_n| + \sum_{j=1}^n \log^+ |\lambda_j|$$
where $\log^+ t = \log \max(1,t)$. More generally, Yuzvinskii developed formulas for the entropy for any endomorphism of a compact metrizable group \cite{Yu67}.
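A numerical illustration of Yuzvinskii's formula (a Python sketch; the polynomial $f=u^2-u-1$ is a hypothetical example, chosen so that its roots are the golden ratio $\varphi$ and $-1/\varphi$, giving $h(\alpha_f)=\log\varphi$):

```python
import numpy as np

# Yuzvinskii's formula for Gamma = Z on the example f = u^2 - u - 1:
# the roots are phi and -1/phi, and only phi lies outside the unit
# circle, so h(alpha_f) = log(phi).
c = [1, -1, -1]                                  # coefficients of u^2 - u - 1
roots = np.roots(c)
h = np.log(abs(c[0])) + sum(np.log(max(1.0, abs(r))) for r in roots)
print(h)                                         # ~0.4812 = log((1+sqrt(5))/2)
```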
When $\Gamma={\mathbb Z}^d$, we identify ${\mathbb Z}\Gamma$ with the Laurent polynomial ring ${\mathbb Z}[u_1^{\pm 1},\ldots, u_d^{\pm1}]$. Given a nonzero Laurent polynomial $f \in {\mathbb Z}\Gamma$, the Mahler measure of $f$ is defined by
$$\mathbb{M}(f)=\exp \left(\int_{{\mathbb T}^d} \log |f(s)|~ds\right)$$
where the integral is with respect to the Haar probability measure on the torus ${\mathbb T}^d$. In \cite{LSW90} it is shown that $h(\alpha_f) = \log \mathbb{M}(f)$. This is a key part of a more general procedure for computing the entropy of any action of ${\mathbb Z}^d$ by automorphisms of a compact metrizable group.
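For $d=1$ the Mahler measure $\mathbb{M}(f)=\exp(\int_{\mathbb T}\log|f|)$ can be checked numerically against Jensen's formula: for instance, for $f(u)=u-2$ the single root lies outside the unit circle, so $\mathbb{M}(f)=2$. A Python sketch:

```python
import numpy as np

# Numerical Mahler measure for d = 1 and f(u) = u - 2. By Jensen's
# formula, M(f) = |leading coefficient| * max(1, |root|) = 2.
theta = np.linspace(0.0, 1.0, 200000, endpoint=False)
vals = np.log(np.abs(np.exp(2j * np.pi * theta) - 2.0))
M = np.exp(vals.mean())          # midpoint rule for the integral over T
print(M)                         # ~2.0
```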
In \cite{FK}, Fuglede and Kadison introduced a determinant $\det_A f$ for elements $f$ of a von Neumann algebra $A$ with respect to a normal tracial state ${\rm tr}_A$. It has found widespread application in the study of $L^2$-invariants \cite{Luck}. We will apply it to the special case when $A$ is the group von Neumann algebra ${\mathcal N}\Gamma$ with respect to its natural trace ${\rm tr}_{{\mathcal N}\Gamma}$. Note that ${\mathbb Z}\Gamma$ is a sub-ring of ${\mathcal N}\Gamma$. These concepts are reviewed in \S\ref{S-notation}.
As explained in \cite[Example 3.13]{Luck}, if $\Gamma = {\mathbb Z}^d$ and $f \in {\mathbb Z}\Gamma$ is nonzero, then the Mahler measure of $f$ equals its Fuglede-Kadison determinant. So it was natural to wonder whether the equation $h(\alpha_f) = \log \det_{{\mathcal N}\Gamma} f$ holds whenever $\Gamma$ is amenable and $f$ is invertible in ${\mathcal N}\Gamma$. Some special cases were proven in \cite{De06} and \cite{DS} before the general case was solved in the affirmative in \cite{Li}.
Recall that $\Gamma$ is {\em residually finite} if the intersection of all finite-index subgroups of $\Gamma$ is the trivial subgroup. In this case, there exists a sequence of finite-index normal subgroups $\Gamma_n\lhd \Gamma$ such that $\bigcap_{n=1}^\infty \bigcup_{i\ge n} \Gamma_i = \{e\}$. Our main results concern residually finite groups; we do not, in general, require that $\Gamma$ is amenable.
A group $\Gamma$ is {\em sofic} if it admits a sequence of partial actions on finite sets which, asymptotically, are free actions. This large class of groups, introduced implicitly in \cite{Gr99} and developed further in \cite{We00, ES05, ES06}, contains all residually finite groups and all amenable groups. It is not known whether all countable groups are sofic.
Entropy was introduced in \cite{Ko58,Ko59} for actions of ${\mathbb Z}$. The definition and major results were later extended to all countable amenable groups \cite{Ki75,Ol85, OW87}. Until recently it appeared to many observers that entropy theory could not be extended beyond amenable groups. This changed when \cite{Bo10a} introduced an entropy invariant for actions of free groups. Soon afterwards, \cite{Bo10} introduced entropy for probability-measure-preserving sofic group actions. One disadvantage of the approach taken in \cite{Bo10} (and in \cite{Bo10a}) is that it only applies to actions with a finite-entropy generating partition. This requirement was removed in \cite{KL11a}. That paper also introduced topological sofic entropy for actions of sofic groups on compact metrizable spaces and proved a variational principle relating the two concepts analogous to the classical variational principle.
If $\Gamma$ is non-amenable then the definition of entropy of a $\Gamma$-action depends on a choice of sofic approximation. We will not need the full details here because we are only concerned with the special case in which $\Gamma$ is a residually finite group. A sequence $\Sigma=\{\Gamma_n\}^\infty_{n=1}$ of finite-index normal subgroups of $\Gamma$ satisfying $\bigcap_{n=1}^\infty \bigcup_{i\ge n} \Gamma_i = \{e\}$ determines, in a canonical manner, a sofic approximation to $\Gamma$. Thus, we let $h_{\Sigma,\lambda}(\alpha_f)$ and $h_\Sigma(\alpha_f)$ denote the measure-theoretic sofic entropy and the topological sofic entropy of $\alpha_f$ with respect to $\Sigma$ respectively. Here $\lambda$ denotes the Haar probability measure on $X_f$. The precise definition of sofic entropy is given in \S\ref{sec:sofic} below.
In \cite{Bo11}, it was proven that if $f \in {\mathbb Z}\Gamma$ is invertible in $\ell^1(\Gamma)$ then $h_{\Sigma,\lambda}(\alpha_f) = \log \det_{{\mathcal N}\Gamma} f$ as expected. Also, if $f$ is invertible in the universal group $C^*$-algebra of $\Gamma$ then by \cite{KL11a}, $h_{\Sigma}(\alpha_f) = \log \det_{{\mathcal N}\Gamma} f$.
\begin{definition}\label{defn:well-balanced}
We say that $f \in {\mathbb R}\Gamma$ is {\em well-balanced} if
\begin{enumerate}
\item $\sum_{s\in \Gamma}f_s=0$,
\item $f_s\le 0$ for every $s\in \Gamma\setminus \{e\}$,
\item $f=f^*$ (where $f^*$, the adjoint of $f$, is given by $f^*_s = f_{s^{-1}}$ for all $s\in \Gamma$),
\item the support of $f$ generates $\Gamma$.
\end{enumerate}
\end{definition}
If $f \in {\mathbb Z}\Gamma$ is well-balanced then the dynamical system $\Gamma {\curvearrowright} X_f$ is called a {\em harmonic model} because $X_f$ is naturally identified with the set of all $x \in ({\mathbb R}/{\mathbb Z})^\Gamma$ such that $xf =0$, i.e., $x$ satisfies the harmonicity equation mod $1$:
$$\sum_{s\in \Gamma} x_{ts}f_s = 0 \mod {\mathbb Z}$$
for all $t\in \Gamma$.
If $\Gamma={\mathbb Z}^d$ ($d\ge 2$), then much is known about the harmonic model: the entropy was computed in \cite{LSW90} in terms of Mahler measure, it follows from \cite[Theorem 7.2]{KS} that the periodic points are dense, \cite{SV} provides an explicit description of the homoclinic group in some cases, and \cite{LSV} contains a number of results on the homoclinic group, periodic points, the specification property and more in some cases. See also \cite{SV} for results relating the harmonic model to the abelian sandpile model.
Our first main result is:
\begin{theorem}\label{thm:main1}
Let $\Gamma$ be a countably infinite group and $\Sigma=\{\Gamma_n\}^\infty_{n=1}$ a sequence of finite-index normal subgroups of $\Gamma$ satisfying $\bigcap_{n=1}^\infty \bigcup_{i\ge n} \Gamma_i = \{e\}$. Let $f \in {\mathbb Z}\Gamma$ be well-balanced. Then
$$h_{\Sigma,\lambda}(\alpha_f) = h_\Sigma(\alpha_f) = \log {\rm det}_{{\mathcal N}\Gamma} f = \lim_{n\to\infty} [\Gamma:\Gamma_n]^{-1} \log |{\rm Fix}_{\Gamma_n}(X_f)|$$
where $|{\rm Fix}_{\Gamma_n}(X_f)|$ is the number of connected components of the set of fixed points of $\Gamma_n$ in $X_f$.
\end{theorem}
In the appendix, we show that if $f$ is well-balanced then it is not invertible in $\ell^1(\Gamma)$ or even in the universal group $C^*$-algebra of $\Gamma$. Moreover, it is invertible in ${\mathcal N}\Gamma$ if and only if $\Gamma$ is non-amenable. Thus Theorem \ref{thm:main1} does not have much overlap with previous results.
In order to prove this theorem, we obtain several more results of independent interest.
We show that if $\Gamma$ is not virtually isomorphic to ${\mathbb Z}$ or ${\mathbb Z}^2$ then the homoclinic subgroup of $X_f$ is dense in $X_f$.
This implies $\Gamma {\curvearrowright} X_f$ is mixing of all orders (with respect to the Haar probability measure).
Also, ${\rm Fix}_{\Gamma_n}(X_f)$ converges to $X_f$ in the Hausdorff topology as $n\to\infty$.
As far as we know, these results were unknown except in the case $\Gamma={\mathbb Z}^d$.
Indeed, in the case $\Gamma={\mathbb Z}^d$, it follows from \cite[Theorem 7.2]{KS} that the periodic points are dense and
it is known that the action is mixing of all orders if $d \ge 2$ (see Remark \ref{remark:homoclinic} for details).
\subsection{Uniform Spanning Forests}
In this paper, all graphs are allowed to have multiple edges and loops. A subgraph of a graph is {\em spanning} if it contains every vertex. It is a {\em forest} if every connected component is simply connected (i.e., has no cycles). A connected forest is a {\em tree}. If $G$ is a finite connected graph then the {\em Uniform Spanning Tree} (UST) on $G$ is the random subgraph whose law is uniformly distributed on the collection of spanning trees. Motivated in part by the desire to develop an analogue of the UST for infinite graphs, R. Pemantle implicitly introduced in \cite{Pe91} the {\em Wired Spanning Forest} (WSF) on ${\mathbb Z}^d$. This model has been generalized to arbitrary locally finite graphs and studied intensively (see e.g., \cite{BLPS01}).
In order to define the WSF, let $G=(V,E)$ be a locally finite connected graph and $\{G_n\}_{n=1}^\infty$ an increasing sequence of finite subgraphs $G_n=(V_n,E_n) \subset G$ whose union is all of $G$. For each $n$, define the wired graph $G^w_n=(V^w_n,E^w_n)$ as follows. The vertex set $V^w_n= V_n \cup \{*\}$. The edge set $E_n^w$ of $G_n^w$ contains all edges in $G_n$. Also for every edge $e$ in $G$ with one endpoint $v$ in $G_n$ and the other endpoint $w$ not in $G_n$, let $e^*=\{v,*\}$. Then $G^w_n$ contains all edges of the form $e^*$. These are all of the edges of $G^w_n$. Let $\nu^w_n$ be the uniform probability measure on the set of spanning trees of $G^w_n$. We consider it as a probability measure on the set $2^{E_n^w}$ of all subsets of $E_n^w$. Because $E_n \subset E$, we can think of $2^{E_n}$ (which we identify with the set of all subsets of $E_n$) as a subset of $2^E$ (which we identify with the set of all subsets of $E$). The projection of $\nu^w_n$ to $2^{E_n} \subset 2^E$ converges (as $n\to\infty$) to a Borel probability measure $\nu_{WSF}$ on $2^E$ (in the weak* topology on the space of Borel probability measures of $2^E$). This measure does not depend on the choice of $\{G_n\}_{n=1}^\infty$. This was implicitly proven by Pemantle \cite{Pe91} (answering a question of R. Lyons). By definition, $\nu_{WSF}$ is the law of the Wired Spanning Forest on $G$. There is a related model, called the Free Spanning Forest (FSF) (obtained by using $G_n$ itself in place of $G^w_n$ above) which is frequently discussed in comparison with the WSF. However, we will not make any use of it here. For more background, the reader is referred to \cite{BLPS01}.
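In practice, the uniform spanning trees of the wired graphs $G^w_n$ can be sampled with Wilson's algorithm, which builds the tree from loop-erased random walks rooted at the wired vertex $*$. The following Python sketch is illustrative only (a hypothetical toy instance: a $3\times 3$ block of ${\mathbb Z}^2$ with all boundary edges wired to $*$) and uses the standard last-exit implementation of the loop erasure:

```python
import random

def wilson_ust(vertices, nbrs, root):
    """Sample a uniform spanning tree rooted at `root` via Wilson's
    algorithm: run random walks until they hit the current tree, with
    loop erasure done implicitly by keeping only the last exit edge."""
    in_tree = {root}
    parent = {}
    for start in vertices:
        if start in in_tree:
            continue
        v = start
        while v not in in_tree:          # random walk until hitting the tree
            parent[v] = random.choice(nbrs[v])
            v = parent[v]
        v = start
        while v not in in_tree:          # retrace the loop-erased path
            in_tree.add(v)
            v = parent[v]
    return {(v, parent[v]) for v in parent if v in in_tree}

# Toy wired graph: 3x3 grid with boundary edges redirected to '*'.
random.seed(0)
grid = [(i, j) for i in range(3) for j in range(3)]
nbrs = {v: [] for v in grid}
nbrs['*'] = []
for (i, j) in grid:
    for (a, b) in [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]:
        nbrs[(i, j)].append((a, b) if (a, b) in nbrs else '*')

tree = wilson_ust(grid + ['*'], nbrs, '*')
print(len(tree))   # |V| - 1 = 9 edges: a spanning tree of the wired graph
```

Listing `'*'` several times in a vertex's neighbour list (once per wired edge) makes the walk respect the multigraph edge multiplicities.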
\begin{definition}\label{defn:Cayley}
Let $\Gamma$ be a countably infinite group and let $f \in {\mathbb Z}\Gamma$ be well-balanced (Definition~\ref{defn:well-balanced}). The Cayley graph $C(\Gamma,f)$ has vertex set $\Gamma$. For each $v \in \Gamma$ and $s \ne e$, there are $|f_s|$ edges between $v$ and $vs$. Let $E(\Gamma,f)$ denote the set of edges of $C(\Gamma,f)$. Similarly, for $\Gamma_n \triangleleft \Gamma$, we let $C_n^f=C(\Gamma/\Gamma_n,f)$ be the graph with vertex set $\Gamma/\Gamma_n$ such that for each $g\Gamma_n \in \Gamma/\Gamma_n$ and $s \in \Gamma$ there are $|f_s|$ edges between $g\Gamma_n$ and $gs\Gamma_n$. We let $E_n^f$ be the set of edges of $C^f_n$.
\end{definition}
Let $\nu_{WSF}$ be the law of the WSF on $C(\Gamma,f)$. It is a probability measure on $2^{E(\Gamma,f)}$. Of course, $\Gamma$ acts on $E(\Gamma,f)$ which induces an action on $2^{E(\Gamma,f)}$ which preserves $\nu_{WSF}$.
In \cite{Lyons05}, R. Lyons introduced the {\em tree entropy} of a transitive weighted graph (and more generally, of a probability measure on weighted rooted graphs). In our case, the definition runs as follows. Let $\mu$ be the probability measure on $\Gamma$ defined by $\mu(s)=|f_s|/f_e$ for $s\in \Gamma \setminus \{e\}$. Then the tree entropy of $C(\Gamma,f)$ is
$${\bf h}(C(\Gamma, f)):= \log f_e - \sum_{k\ge 1} k^{-1} \mu^k(e)$$
where $\mu^k$ denotes the $k$-fold convolution power of $\mu$. In probabilistic terms, $\mu^k(e)$ is the probability that a random walk with i.i.d. increments with law $\mu$ started from $e$ will return to $e$ at time $k$. In \cite{Lyons05}, R. Lyons proved that if $\Gamma$ is amenable then the measure-entropy of $\Gamma {\curvearrowright} (2^{E(\Gamma,f)}, \nu_{WSF})$ equals ${\bf h}(C(\Gamma,f))$. We extend this result to all finitely-generated residually finite groups (where entropy is taken with respect to a sequence $\Sigma$ of finite-index normal subgroups converging to the trivial subgroup).
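For example, for $\Gamma={\mathbb Z}$ and $f=2-s-s^{-1}$, the measure $\mu$ is the simple random walk, $\mu^{2m}(e)=\binom{2m}{m}4^{-m}$, and the series $\sum_{m\ge 1}\binom{2m}{m}4^{-m}/(2m)$ sums to $\log 2$, so ${\bf h}(C({\mathbb Z},f))=0$. A Python check of the partial sums:

```python
import math

# Tree entropy of the standard Cayley graph of Z (f = 2 - s - s^{-1}).
# The ratio binom(2m, m)/4^m over its predecessor is (2m-1)/(2m), so the
# return probabilities can be accumulated without large factorials.
total = 0.0
p = 1.0                           # p = binom(2m, m) / 4^m, starting at m = 0
for m in range(1, 200001):
    p *= (2.0 * m - 1.0) / (2.0 * m)
    total += p / (2.0 * m)
h = math.log(2.0) - total
print(h)                          # small and positive; tends to 0 with the cutoff
```

The convergence is slow because the return probabilities decay only like $m^{-3/2}$, which is why the partial sum above is still of order $10^{-3}$ after $2\times 10^5$ terms.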
In \cite[Theorem 3.1]{Lyons10}, it is shown that ${\bf h}(C(\Gamma,f)) = \log \det_{{\mathcal N}\Gamma} f$. We give another proof in \S\ref{S-approximation}. Thus by Theorem \ref{thm:main1}, the entropy of the WSF equals the entropy of the associated harmonic model.
In the case $\Gamma={\mathbb Z}^d$, the entropy of the harmonic model was computed in \cite{LSW90}. Then the topological entropy of the action of ${\mathbb Z}^d$ on the space of essential spanning forests was computed in \cite{BP93} and shown to coincide with the entropy of the harmonic model. This coincidence was mysterious until \cite{So98} explained how to derive this result without computing the entropy.
We also study a topological model related to the WSF (though not the same model as in \cite{BP93}). To describe it, we need to introduce some notation.
\begin{notation}\label{note:S}
Let $S$ be the set of all $s\in \Gamma \setminus\{e\}$ with $f_s \ne 0$. Let $S_* \subset E(\Gamma,f)$ denote the set of edges adjacent to the identity element. Let $p:S_* \to S$ be the map which takes an edge with endpoints $\{e,g\}$ to $g$. Also, let $s_*^{-1}=s^{-1}s_* \in S_*$ where $p(s_*)=s$. Note that $p(s_*^{-1})= p(s_*)^{-1}$ and $(s_*^{-1})^{-1}=s_*$.
\end{notation}
An element $y\in S_*^\Gamma$ defines, for each $g\in \Gamma$, a directed edge in $C(\Gamma,f)$ away from $g$ (namely, the translate by $g$ of the edge $y_g$, directed from $g$ to $gs$ where $s=p(y_g)$).
Let ${\mathcal F}_f \subset S_*^\Gamma$ be the set of all elements $y \in S_*^\Gamma$ such that
\begin{enumerate}
\item edges are oriented consistently: if $y_g=s_*$ and $p(s_*)=s$ then $y_{gs} \ne s_*^{-1}$.
\item there are no cycles: there do not exist $g_1,g_2,\ldots, g_n\in \Gamma$ and $s_1,\ldots, s_n \in S$ such that $g_is_i=g_{i+1}$, $g_1=g_n$ and $p(y_{g_i})=s_i$ for $1\le i \le n-1$.
\end{enumerate}
The space ${\mathcal F}_f \subset S_*^\Gamma$ is closed (where $S_*^\Gamma$ is given the product topology) and is therefore a compact metrizable space. Let $h_\Sigma({\mathcal F}_f,\Gamma)$ denote the topological sofic entropy of the action $\Gamma {\curvearrowright} {\mathcal F}_f$. Our second main result is:
\begin{theorem}\label{thm:WSF}
If $\Gamma$ is a countably infinite group, $\Sigma=\{\Gamma_n\}_{n=1}^\infty$ is a sequence of finite-index normal subgroups of $\Gamma$, $\bigcap_{n=1}^\infty \bigcup_{i \ge n} \Gamma_i = \{e\}$ and $f \in {\mathbb Z}\Gamma$ is well-balanced then
$$h_{\Sigma,\nu_{WSF}}(2^{E(\Gamma,f)},\Gamma) = h_{\Sigma}({\mathcal F}_f,\Gamma) = {\bf h}(C(\Gamma,f))=\log {\rm det}_{{\mathcal N}\Gamma} f = \lim_{n\to\infty} [\Gamma:\Gamma_n]^{-1} \log \tau(C_n^f)$$
where $\tau(C_n^f)$ is the number of spanning trees of the Cayley graph $C(\Gamma/\Gamma_n,f)$.
\end{theorem}
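The last equality can be observed directly in the simplest case $\Gamma={\mathbb Z}$, $\Gamma_n=n{\mathbb Z}$ and $f=2-s-s^{-1}$: the quotient graph $C_n^f$ is the $n$-cycle, which has $\tau(C_n^f)=n$ spanning trees, so $[\Gamma:\Gamma_n]^{-1}\log\tau(C_n^f)=n^{-1}\log n\to 0=\log\det_{{\mathcal N}{\mathbb Z}}f$. An illustrative Python check, computing $\tau(C_n^f)$ via the Matrix-Tree theorem:

```python
import numpy as np

# tau(C_n^f) for the n-cycle, via the reduced Laplacian: deleting one
# vertex of the cyclic Laplacian leaves the tridiagonal (-1, 2, -1)
# matrix of size n-1, whose determinant is n.
for n in [10, 100, 1000]:
    Delta = (2 * np.eye(n)
             - np.roll(np.eye(n), 1, axis=1)
             - np.roll(np.eye(n), -1, axis=1))
    tau = np.linalg.det(Delta[1:, 1:])       # = n for the n-cycle
    print(n, round(tau), np.log(tau) / n)    # entropy rate tends to 0
```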
{\bf Acknowledgements}: L. B. would like to thank Russ Lyons for asking the question, ``what is the entropy of the harmonic model on the free group?'' which started this project. L.B. would also like to thank Andreas Thom for suggesting the use of Property (T) to prove an upper bound on the sofic entropy of the harmonic model. This idea turned out to be a very useful entry way into the problem. Useful conversations occurred while L.B. visited the Banff International Research Station, the Mathematisches Forschungsinstitut Oberwolfach, and the Institut Henri Poincar\'e. L.B. was partially supported by NSF grants DMS-0968762 and DMS-0954606.
H. L. was partially supported by NSF Grants DMS-0701414 and DMS-1001625. This work was carried out while H.L. visited the Institut Henri Poincar\'{e} and the math departments of Fudan University and University of Science and Technology of China in the summer of 2011. He thanks Wen Huang, Shao Song, Yi-Jun Yao, and Xiangdong Ye for warm hospitality. H.L. is also grateful to Klaus Schmidt for sending him the manuscript \cite{LS}.
\section{Notation and preliminaries} \label{S-notation}
Throughout the paper, $\Gamma$ denotes a
countable group with identity element $e$. We denote $f \in \ell^p(\Gamma):=\ell^p(\Gamma, {\mathbb R})$ by $f=(f_s)_{s\in \Gamma}$. Given $g\in \ell^1(\Gamma)$ and $h\in \ell^\infty(\Gamma)$ we define the convolution $g h \in \ell^\infty(\Gamma)$ by
\begin{eqnarray*}
(gh)_w &:=& \sum_{s \in \Gamma} g_s h_{s^{-1}w} =\sum_{s \in \Gamma} g_{ws^{-1}} h_s, \quad \forall w\in \Gamma.
\end{eqnarray*}
More generally, whenever $g$ and $h$ are functions on $\Gamma$ we define $gh$ by the above formula whenever it is well-defined. Let ${\mathbb Z}\Gamma\subset \ell^\infty(\Gamma)$ be the subring of all elements $f$ with $f_s \in {\mathbb Z}~\forall s \in \Gamma$ and $f_s=0$ for all but finitely many $s\in \Gamma$. Similarly ${\mathbb R}\Gamma\subset \ell^\infty(\Gamma)$ is the subring of all elements $f$ with $f_s \in {\mathbb R}~\forall s \in \Gamma$ and $f_s=0$ for all but finitely many $s\in \Gamma$. The element $1 \in {\mathbb Z}\Gamma$ is defined by $1_e=1, 1_s =0 ~\forall s\in\Gamma \setminus\{e\}$.
Given sets $A$ and $B$, $A^B$ denotes the set of all functions from $B$ to $A$. We frequently identify the unit circle ${\mathbb T}$ with ${\mathbb R}/{\mathbb Z}$. If $g\in {\mathbb Z}\Gamma$ and $h\in {\mathbb T}^\Gamma$ then define the convolutions $g h$ and $h g \in {\mathbb T}^\Gamma$ as above.
The {\em adjoint} of an element $g \in {\mathbb C}^\Gamma$ is defined by $g^*(s) := \overline{g(s^{-1})}$.
A probability measure $\mu$ on $\Gamma$ can be thought of as an element of $\ell^1(\Gamma)$. Thus we write $\mu_s=\mu(\{s\})$ for any $s\in \Gamma$, and $\mu^n$ denotes the $n$-th convolution power of $\mu$.
The group $\Gamma$ acts isometrically on the Hilbert space $\ell^2(\Gamma, {\mathbb C})$ from the left by $(sf)_t=f_{s^{-1}t}$ for all $f\in \ell^2(\Gamma, {\mathbb C})$ and $s, t\in \Gamma$. Denote by ${\mathcal B}(\ell^2(\Gamma, {\mathbb C}))$ the algebra of bounded linear operators from $\ell^2(\Gamma, {\mathbb C})$ to itself.
The group von Neumann algebra ${\mathcal N} \Gamma$ is the algebra of elements in ${\mathcal B}(\ell^2(\Gamma, {\mathbb C}))$ commuting with the left action of $\Gamma$ (see \cite[Section 2.5]{BO} for details), and is complete under the operator norm $\|\cdot \|$.
For each $g\in {\mathbb R}\Gamma$, we denote by $R_g$ the operator $\ell^2(\Gamma, {\mathbb C})\rightarrow \ell^2(\Gamma, {\mathbb C})$ defined by
$R_g(x)=xg$ for $x\in \ell^2(\Gamma, {\mathbb C})$. It is easy to see that $R_g\in {\mathcal N}\Gamma$.
Then we have the injective ${\mathbb R}$-algebra homomorphism ${\mathbb R}\Gamma\rightarrow {\mathcal N}\Gamma$ sending $g$ to $R_{g^*}$. In this way we shall think of
${\mathbb R}\Gamma$ as a subalgebra of ${\mathcal N}\Gamma$.
For each $s\in \Gamma$, we also think of $s$ as the element in $\ell^2(\Gamma, {\mathbb C})$ which is $1$ at $s$ and $0$ at $t\in \Gamma\setminus \{s\}$.
The canonical trace ${\rm tr}_{{\mathcal N}\Gamma}$ on ${\mathcal N}\Gamma$ is the linear functional ${\mathcal N}\Gamma\rightarrow {\mathbb C}$ defined by
$$ {\rm tr}_{{\mathcal N}\Gamma}(T)=\left< Te, e\right>.$$
An element $T\in {\mathcal N}\Gamma$ is called positive if $\left<Tx, x\right>\ge 0$ for all $x\in \ell^2(\Gamma, {\mathbb C})$. Let $T\in {\mathcal N}\Gamma$ be positive. The spectral measure of $T$ is the unique Borel probability measure $\mu$ on the interval $[0, \|T\|]$ satisfying
\begin{align} \label{E-spectral measure}
\int_0^{\|T\|}p(t)\, d\mu(t)={\rm tr}_{{\mathcal N}\Gamma}(p(T))
\end{align}
for every polynomial $p$ with complex coefficients. If $\ker T=\{0\}$, then the Fuglede-Kadison determinant $\det_{{\mathcal N}\Gamma}(T)$ of $T$ \cite{FK} is defined as
\begin{align} \label{E-determinant}
{\rm det}_{{\mathcal N}\Gamma}(T)=\exp\left(\int_{0+}^{\|T\|}\log t\, d\mu(t)\right).
\end{align}
If furthermore $T$ is invertible in ${\mathcal N}\Gamma$, then
\begin{align} \label{E-determinant invertible}
{\rm det}_{{\mathcal N}\Gamma}(T)=\exp({\rm tr}_{{\mathcal N}\Gamma}\log T).
\end{align}
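When $\Gamma$ is finite these formulas reduce to finite-dimensional linear algebra, which gives a quick way to experiment. An illustrative Python sketch for $\Gamma={\mathbb Z}/n{\mathbb Z}$ and $f=2-s-s^{-1}$: here ${\mathcal N}\Gamma$ is the algebra of circulant matrices, the trace is the normalized matrix trace, the spectral measure of $f$ puts mass $1/n$ at each eigenvalue $2-2\cos(2\pi j/n)$, and the integral over $(0,\|T\|]$ becomes a geometric mean of the nonzero eigenvalues (whose product is $n\,\tau=n^2$ by the Matrix-Tree theorem):

```python
import numpy as np

# Fuglede-Kadison determinant for the finite cyclic group Z/nZ and
# f = 2 - s - s^{-1}: exp of the mu-average of log t over the nonzero
# part of the spectrum, where mu puts mass 1/n at each eigenvalue.
n = 64
eigs = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(n) / n)
nz = eigs[eigs > 1e-9]                       # drop the kernel eigenvalue 0
det_fk = np.exp(np.log(nz).sum() / n)        # exp(int_{0+} log t dmu(t))
print(det_fk, (n * n) ** (1.0 / n))          # both equal (n^2)^(1/n)
```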
For a locally compact abelian group $X$, we denote by $\widehat{X}$ its Pontryagin dual, i.e. the locally compact abelian group of continuous group homomorphisms $X\rightarrow {\mathbb R}/{\mathbb Z}$. For $f\in {\mathbb Z}\Gamma$, we set $X_f=\widehat{{\mathbb Z}\Gamma/{\mathbb Z}\Gamma f}$. Note that one can identify $\widehat{{\mathbb Z}\Gamma}$ with
$({\mathbb R}/{\mathbb Z})^\Gamma$ naturally, with the pairing between $x\in ({\mathbb R}/{\mathbb Z})^\Gamma$ and $g\in {\mathbb Z}\Gamma$ given by
$$ \left<x, g\right>=(xg^*)_e.$$
It follows that we may identify $X_f$ with $\{x\in ({\mathbb R}/{\mathbb Z})^\Gamma: xf^*=0\}$ naturally, with the pairing between $x\in X_f$ and $g+{\mathbb Z}\Gamma f\in {\mathbb Z}\Gamma/{\mathbb Z}\Gamma f$ for $g\in {\mathbb Z}\Gamma$ given by
\begin{align} \label{E-pairing}
\left<x, g+{\mathbb Z}\Gamma f\right>=(xg^*)_e.
\end{align}
\section{Approximation by finite models}\label{S-approximation}
The purpose of this section is to prove:
\begin{theorem}\label{T-det vs number of fixed point}
Let $\Gamma$ be a finitely generated and residually finite infinite group. Let $\Sigma=\{\Gamma_n\}^\infty_{n=1}$ be a sequence of finite-index normal subgroups of $\Gamma$ such that $\bigcap_{n\in {\mathbb N}}\bigcup_{i\ge n}\Gamma_i=\{e\}$. Let $f\in {\mathbb Z}\Gamma$ be well-balanced (Definition~\ref{defn:well-balanced}). Then
$$ \log {\rm det}_{{\mathcal N}\Gamma} f= {\bf h}(C(\Gamma,f)) = \lim_{n\to \infty} [\Gamma:\Gamma_n]^{-1} \log \tau(C_n^f)= \lim_{n\to \infty} [\Gamma:\Gamma_n]^{-1} \log |{\rm Fix}_{\Gamma_n}(X_f)|$$
where $\tau(C_n^f)$ is the number of spanning trees of the Cayley graph $C_n^f$ (Definition~\ref{defn:Cayley}), ${\rm Fix}_{\Gamma_n}(X_f)$ is the fixed point set of $\Gamma_n$ in $X_f$, and $|{\rm Fix}_{\Gamma_n}(X_f)|$ is the number of connected components of ${\rm Fix}_{\Gamma_n}(X_f)$.
\end{theorem}
The following result is a special case of \cite[Theorem 3.1]{Lyons10}. For the convenience of the reader, we give a proof here.
\begin{proposition} \label{P-tree entropy}
Let $\Gamma$ be a countable group and $f\in {\mathbb R}\Gamma$ be well-balanced. Set $\mu=-(f-f_e)/f_e$.
We have
$$ \log {\rm det}_{{\mathcal N}\Gamma}f
= \log f_e-\sum_{k=1}^\infty \frac{1}{k}(\mu^k)_e.$$
\end{proposition}
\begin{proof} Note that $f$ is positive in ${\mathcal N}\Gamma$. Let $\varepsilon>0$. Then the norm of $\frac{f_e}{f_e+\varepsilon}\mu$ in ${\mathcal N}\Gamma$ is bounded above by
$\|\frac{f_e}{f_e+\varepsilon}\mu\|_1$, which in turn is strictly smaller than $1$.
Thus $\varepsilon+f=(f_e+\varepsilon)(1-\frac{f_e}{f_e+\varepsilon}\mu)$ is positive and invertible in ${\mathcal N}\Gamma$.
Therefore in ${\mathcal N}\Gamma$ we have
\begin{align*}
\log (\varepsilon+f)= \log (f_e+\varepsilon)-\sum_{k=1}^\infty \frac{1}{k}\cdot \left(\frac{f_e}{f_e+\varepsilon}\mu\right)^k,
\end{align*}
and the right hand side converges in norm. Thus
\begin{align*}
\log {\rm det}_{{\mathcal N}\Gamma}(\varepsilon+f)&\overset{\eqref{E-determinant invertible}}={\rm tr}_{{\mathcal N}\Gamma} \log (\varepsilon+f)\\
&=\log (f_e+\varepsilon)-\sum_{k=1}^\infty \frac{(f_e)^k}{(f_e+\varepsilon)^kk}(\mu^k)_e.
\end{align*}
We have $\lim_{\varepsilon\to 0+}{\rm det}_{{\mathcal N}\Gamma}(f+\varepsilon)={\rm det}_{{\mathcal N}\Gamma}f$ \cite[Lemma 5]{FK}. Note that $(\mu^k)_e\ge 0$ for all $k\in {\mathbb N}$. Thus
\begin{align*}
\log {\rm det}_{{\mathcal N}\Gamma}f &=\lim_{\varepsilon\to 0+}\log {\rm det}_{{\mathcal N}\Gamma}(f+\varepsilon) \\
&= \log f_e- \sum_{k=1}^\infty \frac{1}{k}(\mu^k)_e.
\end{align*}
\end{proof}
Recall that all graphs in this paper are allowed to have multiple edges and loops.
\begin{lemma}\label{lem:matrix-tree}
Let $G=(V,E)$ be a finite connected graph and let $\Delta_G:{\mathbb R}^V \to {\mathbb R}^V$ be the graph Laplacian: $\Delta_G x(v) = \sum_{\{v,w\}\in E} (x(v)-x(w))$. Let $\det^*(\Delta_G)$ be the product of the nonzero eigenvalues of $\Delta_G$. Then $|V|^{-1}\det^* \Delta_G = \tau(G)$, the number of spanning trees in $G$. Moreover, if $v_0 \in V$ is any vertex, $V_0:=V \setminus \{v_0\}$, $P_0:{\mathbb R}^V \to {\mathbb R}^{V_0}$ is the projection map and $\Delta^0_G:{\mathbb R}^{V_0} \to {\mathbb R}^{V_0}$ is defined by $\Delta^0_G = P_0\Delta_G$, then
$$\det(\Delta^0_G) = |V|^{-1} \det^*(\Delta_G) = \tau(G).$$
\end{lemma}
\begin{proof}
This is the Matrix-Tree Theorem (see, e.g., \cite[Lemmas 13.2.3, 13.2.4]{GR}).
\end{proof}
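The lemma is easy to check numerically on small graphs. The sketch below is our own illustration (the function names are ours): it computes $\tau(G)$ as $\det\Delta^0_G$, and the values $\tau(C_4)=4$ and $\tau(K_4)=4^{4-2}=16$ are classical.

```python
import numpy as np

def laplacian(n, edges):
    """Graph Laplacian of a multigraph on vertices 0, ..., n-1."""
    L = np.zeros((n, n))
    for v, w in edges:
        L[v, v] += 1; L[w, w] += 1
        L[v, w] -= 1; L[w, v] -= 1
    return L

def num_spanning_trees(n, edges):
    """Matrix-Tree Theorem: tau(G) = det(Delta_G^0), the Laplacian with
    the row and column of one fixed vertex deleted."""
    return round(np.linalg.det(laplacian(n, edges)[1:, 1:]))

c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]                     # 4-cycle
k4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]  # complete graph K4
print(num_spanning_trees(4, c4), num_spanning_trees(4, k4))  # 4 16
```

One can also confirm $|V|^{-1}\det^*\Delta_G=\tau(G)$ directly by multiplying the nonzero eigenvalues of the full Laplacian.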
\begin{lemma}\label{lem:correspondence}
Let $M$ be an $m\times m$ matrix with integral entries and nonzero determinant, so that its inverse $M^{-1}$ has real entries. Let $\phi: {\mathbb R}^m \to {\mathbb R}^m$ be the corresponding linear transformation. Then the absolute value of the determinant of $M$ equals the number of integral points in $\phi([0,1)^m)$.
\end{lemma}
\begin{proof}
This is \cite[Lemma 4]{So98}.
\end{proof}
\begin{lemma}\label{lem:harmonic points}
Let $G=(V,E)$ be a finite connected graph. Let $\Delta_G:{\mathbb R}^V \to {\mathbb R}^V$ be the graph Laplacian. We also consider $\Delta_G$ as a map from $({\mathbb R}/{\mathbb Z})^V$ to itself. Let $X_G < ({\mathbb R}/{\mathbb Z})^V$ be the subgroup consisting of all $x\in ({\mathbb R}/{\mathbb Z})^V$ with $\Delta_G x = 0$ in $({\mathbb R}/{\mathbb Z})^V$. Let $|X_G|$ denote the number of connected components of $X_G$. Then
$$|X_G| = |V|^{-1}\det^*(\Delta_G),$$
where $\det^*(\Delta_G)$ is the product of the non-zero eigenvalues of $\Delta_G$.
\end{lemma}
\begin{proof}
This lemma is an easy generalization of results in \cite{So98}. For completeness, we provide the details. As in Lemma \ref{lem:matrix-tree}, fix a vertex $v_0 \in V$, let $V_0:=V \setminus \{v_0\}$, think of ${\mathbb R}^{V_0}$ as a subspace of ${\mathbb R}^V$ in the obvious way, let $P_0:{\mathbb R}^V \to {\mathbb R}^{V_0}$ be the projection map and $\Delta^0_G:{\mathbb R}^{V_0} \to {\mathbb R}^{V_0}$ be defined by $\Delta^0_G = P_0\Delta_G$.
By Lemma \ref{lem:matrix-tree}, $\det \Delta_G^0 = |V|^{-1} \det^* \Delta_G$ is non-zero.
Under the standard basis of ${\mathbb R}^{V_0}$ the linear map $\Delta_{G}^0$ is represented by a matrix with integral entries. Therefore, the previous lemma implies that $\det \Delta_{G}^0$ equals the number of integral points in $\Delta_{G}^0( [0,1)^{V_0} )$.
If $x \in {\mathbb R}^V$ has $\| x\|_\infty < (2|E|)^{-1}$ and $\Delta_G x \in {\mathbb Z}^V$, then $\Delta_Gx=0$ and hence $x$ is constant. Therefore, each connected component of $X_G$ is a coset of the constants. Thus $|X_G|$ is the cardinality of $X_G/Z$, where $Z<({\mathbb R}/{\mathbb Z})^V$ denotes the group of constant functions.
Given $x \in X_G$, let $\tilde{x} \in [0,1)^V$ be the unique element with $\tilde{x} + {\mathbb Z}^V = x$. There is a unique element $x_0 \in x + Z$ such that $\tilde{x_0}(v_0)=0$. If we let $X^0_G$ be the set of all such $\tilde{x_0}$, then $X^0_G$ is a finite set with $|X^0_G|=|X_G|$.
We claim that $X^0_G$ is precisely the set of points $y \in [0,1)^{V_0}$ with $\Delta^0_G(y) \in {\mathbb Z}^{V_0}$. To see this, let $\tilde{x_0} \in X^0_G$. Then $\Delta_G^0 \tilde{x_0} = P_0\Delta_G \tilde{x_0} \in {\mathbb Z}^{V_0}$. To see the converse, let $S:{\mathbb R}^V \to {\mathbb R}$ denote the sum function: $S(y) = \sum_{v\in V} y(v)$. Note that $S(\Delta_G(y))=0$ for any $y\in {\mathbb R}^V$. Therefore, if $y \in [0,1)^{V_0}$ is any point with $\Delta_G^0(y) \in {\mathbb Z}^{V_0}$ then, because $S(\Delta_G y)=0=\Delta_G y(v_0) + S(\Delta^0_G y)$, it must be that $\Delta_G y(v_0)$ is an integer and thus $\Delta_G y \in {\mathbb Z}^V$. Thus $y + {\mathbb Z}^V \in X_G$. Because $y(v_0)=0$, we must have $y \in X_G^0$ as claimed.
So $|X_G|$ equals $|X^0_G|$, which equals the number of points $y \in [0,1)^{V_0}$ with $\Delta_{G}^0(y) \in {\mathbb Z}^{V_0}$. We showed above that the latter equals $\det\Delta_{G}^0 = |V|^{-1}\det^* \Delta_G$.
\end{proof}
We are ready to prove Theorem~\ref{T-det vs number of fixed point}.
\begin{proof}[Proof of Theorem \ref{T-det vs number of fixed point}]
Define the Cayley graphs $C(\Gamma,f)$ and $C_n^f=C(\Gamma/\Gamma_n,f)$ as in Definition \ref{defn:Cayley}. Then the group of harmonic mod $1$ points on $C_n^f$ is canonically isomorphic with ${\rm Fix}_{\Gamma_n}(X_f)$ \cite[Lemma 7.4]{KL11a}. So Lemmas \ref{lem:matrix-tree} and \ref{lem:harmonic points} imply $|{\rm Fix}_{\Gamma_n}(X_f)| = \tau(C_n^f)$. By \cite[Theorem 3.2]{Lyons05}, $\lim_{n\to \infty} [\Gamma:\Gamma_n]^{-1} \log \tau(C_n^f) = {\bf h}(C(\Gamma,f))$. By Proposition~\ref{P-tree entropy}, ${\bf h}(C(\Gamma,f)) = \log \det_{{\mathcal N}\Gamma}f$. This implies the result.
\end{proof}
\section{Homoclinic Group} \label{S-homoclinic}
For an action of a countable group $\Gamma$ on a compact metrizable group $X$ by continuous automorphisms, a point $x\in X$ is called {\it homoclinic} if
$sx$ converges to the identity element of $X$ as $\Gamma\ni s\to \infty$ \cite{LS99}. The set of all homoclinic points, denoted by $\Delta(X)$, is a $\Gamma$-invariant subgroup of $X$.
The main result of this section is the following
\begin{theorem} \label{T-dense homoclinic}
Let $\Gamma$ be a countably infinite group such that $\Gamma$ is not virtually ${\mathbb Z}$ or ${\mathbb Z}^2$ (i.e., does not have any finite-index normal subgroup isomorphic to ${\mathbb Z}$ or ${\mathbb Z}^2$). Let $f\in {\mathbb Z}\Gamma$ be well-balanced. Then
\begin{enumerate}
\item The homoclinic group $\Delta(X_f)$ is dense in $X_f$.
\item $\alpha_f$ is mixing of all orders (with respect to the Haar probability measure of $X_f$). In particular, $\alpha_f$ is ergodic.
\end{enumerate}
\end{theorem}
\begin{remark}\label{remark:homoclinic}
When $\Gamma={\mathbb Z}^2$ and $f\in {\mathbb Z}\Gamma$ is well-balanced,
$\alpha_f$ is mixing of all orders.
We are grateful to Doug Lind for explaining this to us. Indeed, let $\Gamma={\mathbb Z}^d$ for $d\in {\mathbb N}$ and $g\in {\mathbb Z}\Gamma$. If $\alpha_g$ has completely positive entropy (with respect to the Haar probability measure on $X_g$), then
$\alpha_g$ is mixing of all orders \cite{Kaminski} \cite[Theorem 20.14]{Schmidt}.
By \cite[Theorems 20.8, 18.1, and 19.5]{Schmidt}, $\alpha_g$ has completely positive entropy exactly when $g$ has no generalized cyclotomic polynomial factor in ${\mathbb Z}\Gamma={\mathbb Z}[u_1^{\pm}, \dots, u_d^{\pm}]$, i.e., no factor of the form $u_1^{m_1}\cdots u_d^{m_d}h(u_1^{n_1}\cdots u_d^{n_d})$ for
$m_1, \dots, m_d, n_1, \dots, n_d\in {\mathbb Z}$ with $n_1, \dots, n_d$ not all $0$, and $h$ a cyclotomic polynomial in a single variable.
When $d\ge 2$, if $g$ has a factor of this form, then $g(z_1, \dots, z_d)=0$ for some $z_1, \dots, z_d$ in the unit circle of the complex plane, not all equal to $1$. On the other hand, when $f\in {\mathbb Z}\Gamma$ is well-balanced, if $f(z_1, \dots, z_d)=0$ for some $z_1, \dots, z_d$ in the unit circle of the complex plane,
then $z_1=\dots=z_d=1$.
\end{remark}
\begin{remark} \label{R-ergodic}
Lind and Schmidt \cite{LS} showed that for any countably infinite amenable group $\Gamma$ which is not virtually ${\mathbb Z}$,
if $f\in {\mathbb Z}\Gamma$ is not a right zero-divisor in ${\mathbb Z}\Gamma$, then $\alpha_f$ is ergodic. In particular, if $\Gamma$ is virtually
${\mathbb Z}^2$ and $f\in {\mathbb Z}\Gamma$ is well-balanced, then $\alpha_f$ is ergodic.
\end{remark}
\begin{remark} \label{R-integer not ergodic}
When $\Gamma={\mathbb Z}$ and $f\in {\mathbb Z}\Gamma$ is well-balanced, $\alpha_f$ is not ergodic. This follows from \cite[Theorem 6.5.(1)]{Schmidt}, and can also be seen as follows. We may identify ${\mathbb Z}\Gamma$ with the Laurent polynomial ring
${\mathbb Z}[u^{\pm}]$. Since the sum of the coefficients of $f$ is $0$, one has $f=(1-u)g$ for some $g\in {\mathbb Z}[u^{\pm}]$.
It follows that $g+{\mathbb Z}[u^{\pm}] f\in {\mathbb Z}[u^{\pm}]/{\mathbb Z}[u^{\pm}] f=\widehat{X_f}$ is fixed by the action of ${\mathbb Z}=\Gamma$ (i.e., it is fixed under multiplication by $u$).
As ${\mathbb Z}[u^{\pm}]$ is an integral domain and $1-u$ is not invertible in ${\mathbb Z}[u^{\pm}]$, the element $g$ is not in ${\mathbb Z}[u^{\pm}]f$.
Thinking of $g+{\mathbb Z}[u^{\pm}]f$ as a continuous ${\mathbb C}$-valued function on $X_f$, we find that $g+{\mathbb Z}[u^{\pm}]f$ has $L^2$-norm $1$ and is orthogonal to the constant functions with respect to the Haar probability measure. Thus $\alpha_f$ is not ergodic.
\end{remark}
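The factorization step $f=(1-u)g$ in the remark can be made concrete. In the sketch below (our own illustration; the representation of Laurent polynomials as exponent-to-coefficient dictionaries is ours) the well-balanced element $f=2-u-u^{-1}$ is checked to factor as $(1-u)(1-u^{-1})$ in ${\mathbb Z}[u^{\pm}]$.

```python
def mult(p, q):
    """Multiply Laurent polynomials given as {exponent: coefficient} dicts."""
    r = {}
    for i, a in p.items():
        for j, b in q.items():
            r[i + j] = r.get(i + j, 0) + a * b
    return {k: v for k, v in r.items() if v != 0}

f = {0: 2, 1: -1, -1: -1}     # f = 2 - u - u^{-1}, well-balanced
g = {0: 1, -1: -1}            # g = 1 - u^{-1}
one_minus_u = {0: 1, 1: -1}   # 1 - u
assert mult(one_minus_u, g) == f   # f = (1 - u) g, as in the remark
print(sum(f.values()))             # 0: the coefficients of f sum to 0
```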
We recall first the definition of mixing of all orders.
\begin{lemma} \label{L-mixing of all order}
Let $\Gamma$ be a countable group acting on a standard probability space $(X, {\mathcal B}, \lambda)$ by measure-preserving transformations.
Let $n\in {\mathbb N}$ with $n\ge 2$.
The following conditions are equivalent:
\begin{enumerate}
\item for any $A_1, \dots, A_n\in {\mathcal B}$, one has $\lambda\big(\bigcap_{j=1}^ns_j^{-1}A_j\big )\to \prod_{j=1}^n\lambda(A_j)$ as $s_j^{-1}s_k\to \infty$ for all $1\le j<k\le n$.
\item for any $f_1, \dots, f_n$ in the space $L^{\infty}_{{\mathbb C}}(X, {\mathcal B}, \lambda)$ of essentially bounded ${\mathbb C}$-valued ${\mathcal B}$-measurable functions on $X$, one has $\lambda\big(\prod_{j=1}^ns_j(f_j)\big)\to \prod_{j=1}^n \lambda(f_j)$
as $s_j^{-1}s_k\to \infty$ for all $1\le j<k\le n$.
\end{enumerate}
If furthermore $X$ is a compact metrizable space and ${\mathcal B}$ is the $\sigma$-algebra of Borel subsets of $X$, then the above conditions are also equivalent to
\begin{enumerate}
\item[(3)] for any $f_1, \dots, f_n$ in the space $C_{{\mathbb C}}(X)$ of ${\mathbb C}$-valued continuous functions on $X$, one has $\lambda\big(\prod_{j=1}^ns_j(f_j)\big)\to \prod_{j=1}^n \lambda(f_j)$
as $s_j^{-1}s_k\to \infty$ for all $1\le j<k\le n$.
\end{enumerate}
If furthermore $X$ is a compact metrizable abelian group, ${\mathcal B}$ is the $\sigma$-algebra of Borel subsets of $X$, $\lambda$ is the Haar probability measure of $X$, and $\Gamma$ acts on $X$ by continuous automorphisms, then the above conditions are also equivalent to
\begin{enumerate}
\item[(4)] for any $f_1, \dots, f_n\in \widehat{X}$ not all $0$, there is some finite subset $F$ of $\Gamma$ such that $\sum_{j=1}^ns_jf_j\neq 0$ for all $s_1, \dots, s_n\in \Gamma$ with
$s_j^{-1}s_k\not\in F$ for all $1\le j<k\le n$.
\item[(5)] for any $f_1, \dots, f_n\in \widehat{X}$ with $f_1\neq 0$, there is some finite subset $F$ of $\Gamma$ such that $f_1+\sum_{j=2}^ns_jf_j\neq 0$ for all $s_2, \dots, s_n\in \Gamma$ with
$s_j\not\in F$ for all $2\le j\le n$.
\end{enumerate}
\end{lemma}
\begin{proof} (1)$\Longleftrightarrow$(2) follows from the observation that (1) is exactly (2) when each $f_j$ is the characteristic function of $A_j$, together with the fact that the linear span of characteristic functions of elements of ${\mathcal B}$ is dense in $L^{\infty}_{{\mathbb C}}(X, {\mathcal B}, \lambda)$ under the essential supremum norm $\|\cdot \|_\infty$.
(2)$\Longleftrightarrow$(3) follows from the fact that for any $f\in L^{\infty}_{{\mathbb C}}(X, {\mathcal B}, \lambda)$ and $\varepsilon>0$ there exists $g\in C_{{\mathbb C}}(X)$ with
$\|g\|_\infty\le \|f\|_\infty$ and $\|f-g\|_2<\varepsilon$.
We identify ${\mathbb R}/{\mathbb Z}$ with the unit circle $\{z\in {\mathbb C}: |z|=1\}$ naturally. Then every $g\in \widehat{X}$ can be thought of as an element of $C_{{\mathbb C}}(X)$. Note that the identity element $0$ of $\widehat{X}$ is the element $1$ of $C_{{\mathbb C}}(X)$. Then (3)$\Longleftrightarrow$(4) follows from the observation that for any $g\in \widehat{X}$, $\lambda(g)=1$ or $0$ depending on whether $g=0$ in $\widehat{X}$ or not, together with the fact that
the linear span of the elements of $\widehat{X}$ is dense in $C_{{\mathbb C}}(X)$ under the supremum norm.
(4)$\Longleftrightarrow$(5) is obvious.
\end{proof}
When the condition (1) in Lemma~\ref{L-mixing of all order} is satisfied, we say that the action is {\it (left) mixing of order $n$}.
We say that the action is {\it (left) mixing of all orders} if it is mixing of order $n$ for all $n\in {\mathbb N}$ with $n\ge 2$.
\begin{proposition} \label{P-homoclinic to mixing}
Let a countable group $\Gamma$ act on a compact metrizable abelian group $X$ by continuous automorphisms. Suppose that the homoclinic group $\Delta(X)$ is dense in $X$. Then the action is mixing of all orders with respect to the Haar probability measure of $X$.
\end{proposition}
\begin{proof}
We verify the condition (5) in Lemma~\ref{L-mixing of all order} by contradiction.
Let $n\in {\mathbb N}$ with $n\ge 2$, and let $f_1, \dots, f_n\in \widehat{X}$ with $f_1\neq 0$.
Suppose that there is a sequence $\{(s_{m, 2}, \dots, s_{m, n})\}_{m\in {\mathbb N}}$ of $(n-1)$-tuples in $\Gamma$
such that $f_1+\sum_{j=2}^ns_{m,j} f_j=0$ for all $m\in {\mathbb N}$ and $s_{m, j}\to \infty$ as $m\to \infty$ for every $2\le j\le n$.
Let $x\in \Delta(X)$. Then $f_1(x)+\sum_{j=2}^nf_j(s_{m, j}^{-1}x)=(f_1+\sum_{j=2}^ns_{m,j}f_j)(x)=0$ in ${\mathbb R}/{\mathbb Z}$ for all $m\in {\mathbb N}$.
Since $s_{m, j}^{-1}x$ converges to the identity element of $X$ as $m\to \infty$ for every $2\le j\le n$, we have $f_j(s_{m, j}^{-1}x)\to 0$ in ${\mathbb R}/{\mathbb Z}$ as $m\to \infty$ for every $2\le j\le n$. It follows that $f_1(x)=0$ in ${\mathbb R}/{\mathbb Z}$. Since $\Delta(X)$ is dense in $X$ and $f_1$ is continuous, we get $f_1=0$, a contradiction.
\end{proof}
\begin{example} \label{E-invertible in von Neumann}
For a countable group $\Gamma$, when $g\in {\mathbb Z} \Gamma$
is invertible in the group von Neumann algebra ${\mathcal N}\Gamma$, $\Delta(X_g)$ is dense in $X_g$ \cite[Lemma 5.4]{CL}, and hence by Proposition~\ref{P-homoclinic to mixing} the action $\alpha_g$ is mixing of all orders with respect to the Haar probability measure of $X_g$.
\end{example}
Let $\Gamma$ be a finitely generated infinite group. Let $\mu$ be a finitely supported symmetric probability measure on $\Gamma$ such that the support of $\mu$ generates $\Gamma$. We shall think of $\mu$ as an element of ${\mathbb R} \Gamma$. We endow $\Gamma$ with the word length associated to the support of $\mu$.
By the Cauchy-Schwarz inequality, for any $s\in \Gamma$ and $n\in {\mathbb N}$ one has $(\mu^{2n})_s=(\mu^n\cdot (\mu^n)^*)_s\le (\mu^n\cdot (\mu^n)^*)_e=(\mu^{2n})_e$.
Also note that for any $s\in \Gamma$ and $n\in {\mathbb N}$ one has $(\mu^{2n+1})_s=(\mu^{2n}\cdot \mu)_s\le \|\mu^{2n}\|_\infty$. It follows that
$\sum_{k=0}^{\infty} (\mu^k)_e<+\infty$ if and only if $\sum_{k=0}^{\infty}\|\mu^k\|_\infty<+\infty$.
Now assume that $\Gamma$ is not virtually ${\mathbb Z}$ or ${\mathbb Z}^2$.
By a result of Varopoulos \cite{Varopoulos} \cite[Theorem 2.1]{Fu02}
\cite[Theorem 3.24]{Woess} one has $\sum_{k=0}^{\infty} (\mu^k)_e<+\infty$, and hence $\sum_{k=0}^{\infty}\|\mu^k\|_\infty<+\infty$.
Thus we have the element $\sum_{k=0}^\infty \mu^k$ in $\ell^\infty(\Gamma)$. Let $\varepsilon>0$. Take $m\in {\mathbb N}$ such that
$\sum_{k=m+1}^\infty\|\mu^k\|_\infty<\varepsilon$. For each $s\in \Gamma$ with word length at least $m+1$, one has
$$|(\sum_{k=0}^\infty \mu^k)_s|=|(\sum_{k=m+1}^\infty \mu^k)_s|\le \sum_{k=m+1}^\infty\|\mu^k\|_\infty<\varepsilon.$$
Therefore $\sum_{k=0}^\infty \mu^k$ lies in the space $C_0(\Gamma)$ of ${\mathbb R}$-valued functions on $\Gamma$ vanishing at $\infty$. Since the support of $\mu$ is symmetric and generates $\Gamma$, one has $(\sum_{k=0}^\infty \mu^k)_s>0$ for every $s\in \Gamma$.
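The transience input from Varopoulos' theorem can be seen numerically in the simplest admissible case $\Gamma={\mathbb Z}^3$ with $\mu$ uniform on the six standard generators. The sketch below is our own illustration (the cutoff $20$ is arbitrary): it computes the return probabilities $(\mu^k)_e$ by exact convolution, and their partial sums stay bounded, consistent with $\sum_k(\mu^k)_e<+\infty$; the full sum is known to be about $1.516$.

```python
from collections import defaultdict

def return_probs_z3(n_steps):
    """(mu^k)_e for k = 0..n_steps, where mu is the uniform measure on the
    six standard generators of Z^3, via exact convolution of distributions."""
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    dist = {(0, 0, 0): 1.0}
    probs = [1.0]
    for _ in range(n_steps):
        new = defaultdict(float)
        for (x, y, z), p in dist.items():
            for dx, dy, dz in steps:
                new[(x + dx, y + dy, z + dz)] += p / 6.0
        dist = dict(new)
        probs.append(dist.get((0, 0, 0), 0.0))
    return probs

probs = return_probs_z3(20)
print(probs[2], sum(probs))   # (mu^2)_e = 1/6; the partial sum stays below 1.52
```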
Now let $f\in {\mathbb R}\Gamma$ be well-balanced.
Set $\mu=-(f-f_e)/f_e$. Then $\mu$ is a finitely supported symmetric probability measure on $\Gamma$ and the support of $\mu$ generates $\Gamma$. Set $\omega=f_e^{-1}\sum_{k=0}^\infty\mu^k\in C_0(\Gamma)$. We have $f=f_e(1-\mu)$, and hence
$$ f\omega=(1-\mu)\sum_{k=0}^\infty\mu^k=1$$
and
$$ \omega f=(\sum_{k=0}^\infty\mu^k)(1-\mu)=1.$$
Note that the space $C_0(\Gamma)$ is not closed under convolution. The above identities show that $\omega$ is a formal inverse of $f$ in $C_0(\Gamma)$. Now we show that $f$ has no other formal inverse in $C_0(\Gamma)$.
\begin{lemma} \label{L-associative}
Let $g\in C_0(\Gamma)$ be such that $gf\in \ell^1(\Gamma)$. Then
$$ (gf)\omega=g.$$
\end{lemma}
\begin{proof} In the Banach space $\ell^\infty(\Gamma)$ we have
\begin{align*}
(gf) \omega &= \lim_{m\to \infty}(gf)(f_e^{-1}\sum_{k=0}^m\mu^k) = \lim_{m\to \infty}g(f f_e^{-1}\sum_{k=0}^m\mu^k) = \lim_{m\to \infty}g((1-\mu)\sum_{k=0}^m\mu^k) \\
&= \lim_{m\to \infty}g(1-\mu^{m+1}) = g-\lim_{m\to \infty}g\mu^{m+1}.
\end{align*}
Let $\varepsilon>0$. Take a finite set $F\subset\Gamma$ such that $\|g|_{\Gamma \setminus F}\|_\infty<\varepsilon$. Write $g$ as $u+v$ for $u, v\in \ell^\infty(\Gamma)$ such that $u$ has support contained in $F$ and $v$ has support contained in $\Gamma\setminus F$. For each $m\in {\mathbb N}$ we have
\begin{align*}
\|g\mu^{m+1}\|_\infty &\le \|u\mu^{m+1}\|_\infty+\|v\mu^{m+1}\|_\infty \le \|u\|_1\|\mu^{m+1}\|_\infty+\|v\|_\infty\|\mu^{m+1}\|_1\\
&\le \|g\|_\infty \cdot |F|\cdot \|\mu^{m+1}\|_\infty+ \varepsilon.
\end{align*}
Letting $m\to \infty$, we get $\limsup_{m\to \infty}\|g\mu^{m+1}\|_\infty\le \varepsilon$. Since $\varepsilon$ is an arbitrary positive number, we get
$\limsup_{m\to \infty}\|g\mu^{m+1}\|_\infty=0$ and hence $\lim_{m\to \infty}g\mu^{m+1}=0$. It follows that $(gf)\omega=g$ as desired.
\end{proof}
\end{proof}
\begin{corollary} \label{C-inverse}
Let $g\in C_0(\Gamma)$ be such that $gf=1$. Then $g=\omega$.
\end{corollary}
\begin{proof} By Lemma~\ref{L-associative} we have
$$ g=(gf)\omega=\omega.$$
\end{proof}
Denote by $Q$ the natural quotient map $\ell^\infty(\Gamma)\rightarrow ({\mathbb R}/{\mathbb Z})^\Gamma$. We assume furthermore that $f\in {\mathbb Z}\Gamma$.
\begin{lemma} \label{L-homoclinic group}
We have
$$ \Delta(X_f)=Q({\mathbb Z}\Gamma \omega).$$
\end{lemma}
\begin{proof} Let $h\in {\mathbb Z}\Gamma$. Then
$$ (h\omega)f=h(\omega f)=h\in {\mathbb Z}\Gamma,$$
and hence $Q(h\omega)\in X_f$. Since $h\omega \in C_0(\Gamma)$, one has $Q(h\omega)\in \Delta(X_f)$. Thus $Q({\mathbb Z}\Gamma \omega)\subset\Delta(X_f)$.
Now let $x\in \Delta(X_f)$. Take $\tilde{x}\in C_0(\Gamma)$ such that $Q(\tilde{x})=x$. Then $\tilde{x} f\in C_0(\Gamma)\cap \ell^\infty(\Gamma, {\mathbb Z})={\mathbb Z}\Gamma$.
Set $h=\tilde{x}f\in {\mathbb Z}\Gamma$.
By Lemma~\ref{L-associative} we have $\tilde{x}=h\omega$. Therefore
$x=Q(\tilde{x})=Q(h\omega)$, and hence $\Delta(X_f)\subset Q({\mathbb Z}\Gamma \omega)$ as desired.
\end{proof}
\end{proof}
\begin{corollary} \label{C-module}
As a left ${\mathbb Z}\Gamma$-module, $\Delta(X_f)$ is isomorphic to ${\mathbb Z}\Gamma/{\mathbb Z}\Gamma f$.
\end{corollary}
\begin{proof} By Lemma~\ref{L-homoclinic group} we have the surjective left ${\mathbb Z}\Gamma$-module map $\Phi: {\mathbb Z}\Gamma\rightarrow \Delta(X_f)$ sending $h$ to $Q(h\omega)$. Then it suffices to show $\ker \Phi={\mathbb Z}\Gamma f$.
If $h\in {\mathbb Z}\Gamma f$, say $h=gf$ for some $g\in {\mathbb Z}\Gamma$, then
$$ Q(h\omega)=Q((gf)\omega)=Q(g(f\omega))=Q(g)=0.$$
Thus ${\mathbb Z}\Gamma f\subset\ker \Phi$.
Let $h\in \ker \Phi$. Then $h\omega \in C_0(\Gamma)\cap \ell^\infty(\Gamma, {\mathbb Z})={\mathbb Z}\Gamma$. Set $g=h\omega \in {\mathbb Z}\Gamma$. Then
$$ gf=(h\omega)f=h(\omega f)=h.$$
Thus $\ker\Phi\subset{\mathbb Z}\Gamma f$.
\end{proof}
\end{proof}
We are ready to prove Theorem~\ref{T-dense homoclinic}.
\begin{proof}[Proof of Theorem~\ref{T-dense homoclinic}] The assertion (2) follows from the assertion (1) and Proposition~\ref{P-homoclinic to mixing}.
To prove the assertion (1), by Pontryagin duality it suffices to show that any $\varphi \in \widehat{X_f}$ vanishing on $\Delta(X_f)$ is $0$. Thus let $\varphi \in \widehat{X_f}={\mathbb Z}\Gamma/{\mathbb Z}\Gamma f$ vanish on $\Delta(X_f)$. Say, $\varphi=g+{\mathbb Z}\Gamma f$ for some $g\in {\mathbb Z}\Gamma$. For each $h\in {\mathbb Z}\Gamma$, in ${\mathbb R}/{\mathbb Z}$ one has
$$ 0=\left< Q(h\omega), \varphi\right>\overset{\eqref{E-pairing}}=((h\omega) g^*)_e+{\mathbb Z}=(h(\omega g^*))_e+{\mathbb Z},$$
and hence $(h(\omega g^*))_e\in {\mathbb Z}$. Taking $h=s$ for all $s\in \Gamma$, we conclude that $\omega g^*\in C_0(\Gamma)\cap \ell^\infty(\Gamma, {\mathbb Z})={\mathbb Z}\Gamma$.
Set $v=\omega g^*\in {\mathbb Z}\Gamma$. Then
$$ fv=f(\omega g^*)=(f\omega)g^*=g^*,$$
and hence
$$ g=v^*f^*=v^*f\in {\mathbb Z}\Gamma f.$$
Therefore $\varphi=g+{\mathbb Z}\Gamma f=0$ as desired.
\end{proof}
\end{proof}
\section{Periodic Points} \label{S-periodic}
Throughout this section we let $\Gamma$ be a finitely generated residually finite infinite group, and let $\Sigma=\{\Gamma_n\}^\infty_{n=1}$ be a sequence of finite-index normal subgroups of $\Gamma$ such that $\bigcap_{n\in {\mathbb N}}\bigcup_{i\ge n}\Gamma_i=\{e\}$.
For a compact metric space $(X, \rho)$, recall that the Hausdorff distance between two nonempty closed subsets $Y$ and $Z$ of $X$ is defined as
$$ {\rm dist}_{\rm H}(Y, Z):=\max\Big(\max_{y\in Y}\min_{z\in Z}\rho(y, z),\ \max_{z\in Z}\min_{y\in Y}\rho(z, y)\Big).$$
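For finite subsets the displayed formula can be transcribed directly. The sketch below is our own illustration, with an ad hoc metric on ${\mathbb R}$; it is only meant to make the definition concrete.

```python
def hausdorff_distance(Y, Z, rho):
    """Hausdorff distance between nonempty finite subsets Y, Z of a
    metric space with metric rho, as in the displayed formula."""
    d_yz = max(min(rho(y, z) for z in Z) for y in Y)
    d_zy = max(min(rho(z, y) for y in Y) for z in Z)
    return max(d_yz, d_zy)

rho = lambda a, b: abs(a - b)
# {0, 1/2, 1} contains {0, 1}, so one one-sided distance is 0,
# while the point 1/2 is at distance 1/2 from {0, 1}.
print(hausdorff_distance([0.0, 1.0], [0.0, 0.5, 1.0], rho))  # 0.5
```

In Theorem~\ref{T-dense periodic points} this quantity, for $Y={\rm Fix}_{\Gamma_n}(X_f)$ and $Z=X_f$, tends to $0$ as $n\to\infty$.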
For $f\in {\mathbb Z}\Gamma$ we denote by ${\rm Fix}_{\Gamma_n}(X_f)$ the group of fixed points of $\Gamma_n$ in $X_f$.
The main result of this section is
\begin{theorem} \label{T-dense periodic points}
Let $f\in {\mathbb Z}\Gamma$ be well-balanced.
Let $\rho$ be a
compatible metric on $X_f$. Then ${\rm Fix}_{\Gamma_n}(X_f)\rightarrow X_f$ in the Hausdorff distance as $n\to \infty$.
\end{theorem}
Denote by $\pi_n$ the natural ring homomorphism ${\mathbb R}\Gamma \rightarrow {\mathbb R}(\Gamma/\Gamma_n)$. Let $X_{\pi_n(f)}$ be $\widehat{{\mathbb Z}(\Gamma/\Gamma_n)/{\mathbb Z}(\Gamma/\Gamma_n)\pi_n(f)}$, which is the additive group of all maps $x:\Gamma/\Gamma_n \to {\mathbb R}/{\mathbb Z}$ satisfying $x\pi_n(f) =0$, i.e., $\sum_{s\in \Gamma} x_{ts\Gamma_n} f_{s^{-1}} = 0$ in ${\mathbb R}/{\mathbb Z}$ for every $t\in \Gamma$.
\begin{lemma} \label{L-vanishing}
Let $f\in {\mathbb Z}\Gamma$ and $\varphi=g+{\mathbb Z}\Gamma f\in {\mathbb Z}\Gamma/{\mathbb Z}\Gamma f$ for some $g\in {\mathbb Z}\Gamma$. Let $n\in {\mathbb N}$. Then $\varphi$ vanishes on ${\rm Fix}_{\Gamma_n}(X_f)$ if and only if $\pi_n(g)\in {\mathbb Z}(\Gamma/\Gamma_n)\pi_n(f)$.
\end{lemma}
\begin{proof}
By \cite[Lemma 7.4]{KL11a} we have a compact group isomorphism $\Phi_n:X_{\pi_n(f)} \rightarrow {\rm Fix}_{\Gamma_n}(X_f)$ defined by $(\Phi_n(x))_s=x_{s\Gamma_n}$ for all $x\in X_{\pi_n(f)}$ and $s\in \Gamma$.
For each $x\in X_{\pi_n(f)}$, in ${\mathbb R}/{\mathbb Z}$ we have
\begin{align*}
\left<\Phi_n(x), \varphi\right>&\overset{\eqref{E-pairing}}=(\Phi_n(x)g^*)_e =\sum_{s\in \Gamma} (\Phi_n(x))_sg_s =\sum_{s\in \Gamma} x_{s\Gamma_n}g_s \\
&= \sum_{s\Gamma_n \in \Gamma/\Gamma_n} x_{s\Gamma_n}\pi_n(g)_{s\Gamma_n} =\left<x, \pi_n(g)+{\mathbb Z}(\Gamma/\Gamma_n)\pi_n(f)\right>.
\end{align*}
Thus $\varphi$ vanishes on ${\rm Fix}_{\Gamma_n}(X_f)$ if and only if the element $\pi_n(g)+{\mathbb Z}(\Gamma/\Gamma_n)\pi_n(f)$ of ${\mathbb Z}(\Gamma/\Gamma_n)/{\mathbb Z}(\Gamma/\Gamma_n)\pi_n(f)$ vanishes on $X_{\pi_n(f)}$, if and only if $\pi_n(g)\in {\mathbb Z}(\Gamma/\Gamma_n)\pi_n(f)$.
\end{proof}
For the next lemma, recall that if $S \subset \Gamma \setminus \{e\}$ is a symmetric finite generating set then $C(\Gamma,S)$, the Cayley graph of $\Gamma$ with respect to $S$, has vertex set $\Gamma$ and edge set $\{ \{g,gs\} \}_{g\in \Gamma,s\in S}$. Similarly, $C(\Gamma/\Gamma_n, \pi_n(S))$ has vertex set $\Gamma/\Gamma_n$ and edge set $\{\{g\Gamma_n, gs\Gamma_n\}:~g\in \Gamma, s\in S\}$. A subset $A \subset \Gamma$ is identified with the induced subgraph of $C(\Gamma,S)$ which has vertex set $A$ and contains every edge of $C(\Gamma,S)$ with endpoints in $A$. Similarly, a subset $A \subset \Gamma/\Gamma_n$ induces a subgraph of $C(\Gamma/\Gamma_n,\pi_n(S))$. Thus we say that a subset $A \subset \Gamma$ (or $A \subset \Gamma/\Gamma_n$) is {\em connected} if its induced subgraph is connected.
\begin{lemma} \label{L-connectedness in Cayley graphs}
Let $S\subset \Gamma\setminus \{e\}$ be a finite symmetric generating set of $\Gamma$.
Let $A\subset\Gamma$ be finite. Then there exists a finite set $B\subset\Gamma$ containing $A$ such that when $n\in {\mathbb N}$ is large enough, in the Cayley graph $C(\Gamma/\Gamma_n, \pi_n(S))$ the set $(\Gamma/\Gamma_n)\setminus \pi_n(B)$ is connected.
\end{lemma}
\begin{proof}
We claim that there exists a connected finite set $B \supset A \cup \{e\}$ such that every connected component of $C(\Gamma,S)\setminus B$ is infinite. To see this, let $A'\subset\Gamma$ be a finite connected set such that $A'\supset A\cup \{e\}$. For any connected component ${\mathcal C}$ of $C(\Gamma,S) \setminus A'$, taking a path in $C(\Gamma, S)$ from some point in ${\mathcal C}$ to some point in $A'$, we note that the last point of this path in ${\mathcal C}$ must lie in $A'S$, whence ${\mathcal C} \cap A'S\neq \emptyset$. It follows that $C(\Gamma,S) \setminus A'$ has only finitely many connected components. Denote by $B$ the union of $A'$ and all the finite connected components of $C(\Gamma,S) \setminus A'$. Then $B$ is finite, contains $A\cup \{e\}$, and is connected.
Furthermore, the connected components of $C(\Gamma,S) \setminus B$ are exactly the infinite connected components of $C(\Gamma,S) \setminus A'$, whence they are all infinite.
For each $t\in BS\setminus B$, since the connected component of $C(\Gamma,S) \setminus B$ containing $t$ is infinite, we can take a path $\gamma_t$ in $C(\Gamma,S) \setminus B$ from $t$ to some element of $\Gamma\setminus (B(\{e\}\cup S)^2B^{-1})$.
Let $n \in {\mathbb N}$ be sufficiently large so that $\pi_n$ is injective on $B\cup (B(\{e\}\cup S)^2B^{-1}) \cup \bigcup_{t\in BS\setminus B}\gamma_t$.
An argument similar to that in the first paragraph of the proof shows that every connected component ${\mathcal C}$ of $C(\Gamma/\Gamma_n,\pi_n(S)) \setminus \pi_n(B)$ has nonempty intersection with $\pi_n(BS)$, and hence contains $\pi_n(t)$ for some $t\in BS\setminus B$. Then ${\mathcal C}$ contains $\pi_n(\gamma_t)$, whence it contains $\pi_n(g)$ for some $g\in \gamma_t\setminus (B(\{e\}\cup S)^2B^{-1})$. Since $\pi_n$ is injective on $\gamma_t\cup (B(\{e\}\cup S)^2B^{-1})$, we have
$\pi_n(g)\not \in \pi_n(B(\{e\}\cup S)^2B^{-1})$, equivalently,
$g\pi_n(B\cup BS)\cap \pi_n(B\cup BS)=\emptyset$.
List all the connected components of $C(\Gamma/\Gamma_n,\pi_n(S)) \setminus \pi_n(B)$ as ${\mathcal C}_0,{\mathcal C}_1,\ldots, {\mathcal C}_k$.
We may assume that $|{\mathcal C}_0| = \min_{0\le j\le k} |{\mathcal C}_j|$. Take $g_0\in \Gamma$ with $\pi_n(g_0)\in {\mathcal C}_0$ and $g_0\pi_n(B\cup BS)\cap \pi_n(B\cup BS)=\emptyset$. Since $B$ is connected, $B\cup BS$ is connected. Thus $g_0\pi_n(B\cup BS)$ is connected and disjoint from $\pi_n(B)$.
Therefore $g_0\pi_n(B\cup BS)$ is contained in one of the connected components of $C(\Gamma/\Gamma_n,\pi_n(S)) \setminus \pi_n(B)$.
As $\pi_n(g_0)\in (g_0\pi_n(B\cup BS))\cap {\mathcal C}_0$, we get $g_0\pi_n(B\cup BS)\subset{\mathcal C}_0$.
Since $\Gamma$ acts on $C(\Gamma/\Gamma_n,\pi_n(S))$ by left translation, the connected components of $C(\Gamma/\Gamma_n,\pi_n(S)) \setminus g_0\pi_n(B)$
are $g_0{\mathcal C}_0, g_0{\mathcal C}_1, \dots, g_0{\mathcal C}_k$.
Suppose that $g_0{\mathcal C}_0 \cap \pi_n(B) = \emptyset$. Then $g_0{\mathcal C}_0$ must be contained in one of the connected components of $C(\Gamma/\Gamma_n,\pi_n(S)) \setminus \pi_n(B)$. Since ${\mathcal C}_0\cap \pi_n(BS)\neq \emptyset$, one has $g_0{\mathcal C}_0\cap g_0\pi_n(BS)\neq \emptyset$. Because $g_0\pi_n(BS)\subset{\mathcal C}_0$, we have
$g_0{\mathcal C}_0\cap {\mathcal C}_0\neq \emptyset$. Therefore $g_0{\mathcal C}_0\subset{\mathcal C}_0$.
Because $|g_0{\mathcal C}_0|=|{\mathcal C}_0|$, this implies $g_0{\mathcal C}_0={\mathcal C}_0$. But this contradicts the fact that $g_0\pi_n(B) \subset{\mathcal C}_0$ while $g_0\pi_n(B) \cap g_0{\mathcal C}_0 = \emptyset$.
So $g_0{\mathcal C}_0 \cap \pi_n(B) \ne \emptyset$.
Since $\pi_n(B\cup BS)$ is connected and disjoint from $g_0\pi_n(B)$, it is contained in one of the connected components of $C(\Gamma/\Gamma_n,\pi_n(S)) \setminus g_0\pi_n(B)$. Because $g_0{\mathcal C}_0 \cap \pi_n(B) \ne \emptyset$, we get $\pi_n(B\cup BS)\subset g_0{\mathcal C}_0$.
Suppose that $k>0$. Since ${\mathcal C}_k$ is disjoint from ${\mathcal C}_0$ and $g_0\pi_n(B)\subset{\mathcal C}_0$,
${\mathcal C}_k$ is disjoint from $g_0\pi_n(B)$. Thus ${\mathcal C}_k$ is contained in one of the connected components of $C(\Gamma/\Gamma_n,\pi_n(S)) \setminus g_0\pi_n(B)$. Because ${\mathcal C}_k$ has nonempty intersection with $\pi_n(BS)$, and $\pi_n(B\cup BS)\subset g_0{\mathcal C}_0$, we get
${\mathcal C}_k\cup \pi_n(B) \subset g_0{\mathcal C}_0$. Therefore $|{\mathcal C}_0|=|g_0{\mathcal C}_0|>|{\mathcal C}_k|$,
which contradicts the fact that $|{\mathcal C}_0| = \min_{0\le j\le k} |{\mathcal C}_j|$. Thus $k=0$; i.e., $C(\Gamma/\Gamma_n,\pi_n(S)) \setminus \pi_n(B)$ is connected.
\end{proof}
\begin{lemma} \label{L-bounded}
Let $f\in{\mathbb R}\Gamma$ be well-balanced and $g\in {\mathbb R}\Gamma$. Then there exists $C>0$ such that if $\pi_n(g)=h\pi_n(f)$ for some $n\in {\mathbb N}$ and $h\in {\mathbb R}(\Gamma/\Gamma_n)$, then
$$\max\{h_{s\Gamma_n}:~s\Gamma_n\in \Gamma/\Gamma_n\} -\min\{h_{s\Gamma_n}:~s\Gamma_n\in \Gamma/\Gamma_n\}\le C.$$
\end{lemma}
\begin{proof} Set ${\rm m}u=-(f-f_e)/f_e$. Then ${\rm m}u$ is a symmetric finitely supported probability measure on $\Gamma$.
Denote by $S$ and $K$ the supports of ${\rm m}u$ and $g$ respectively. Set $a={\rm m}in_{s\in S} {\rm m}u_s$.
Suppose that $\pi_n(g)=h\pi_n(f)$ for some $n\in {{\rm m}athbb N}$ and $h\in {{\rm m}athbb R}(\Gamma/\Gamma_n)$. Denote by $W$ the set of $s\Gamma_n\in \Gamma/\Gamma_n$ satisfying
$h_{s\Gamma_n}={\rm m}in_{}\{h_{t\Gamma_n}:~t\Gamma_n\in \Gamma/\Gamma_n\}$. If $s\Gamma_n\in W\setminus K\Gamma_n$, then from
$$ 0=(\pi_n(g))_{s\Gamma_n}=(h\pi_n(f))_{s\Gamma_n}=f_e(h_{s\Gamma_n}-\sum_{t\in S}h_{st^{-1}\Gamma_n}{\rm m}u_t)$$
we conclude that $st\Gamma_n\in W$ for all $t\in S$. Since $S$ generates $\Gamma$, it follows that there exists some $s_{{\rm m}in} \in K$
satisfying $h_{s_{{\rm m}in}\Gamma_n}={\rm m}in_{}\{ h_{t\Gamma_n}:~ t\Gamma_n\in \Gamma/\Gamma_n\}$. Similarly, there exists some $s_{{\rm m}ax}\in K$
satisfying $h_{s_{{\rm m}ax}\Gamma_n}={\rm m}ax_{}\{ h_{t\Gamma_n}:~ t\Gamma_n\in \Gamma/\Gamma_n\}$.
Now we show by induction on the word length $|t|$ of $t\in \Gamma$ with respect to $S$ that one has $h_{s_{{\rm m}in}t\Gamma_n}\le h_{s_{{\rm m}in}\Gamma_n}+|t|\cdot \frac{\|g\|_1}{f_ea^{|t|}}$ for all $t\in \Gamma$.
This is clear when $|t|=0$, i.e. $t=e$. Suppose that this holds for all $t\in \Gamma$ with $|t|\le k$. Let $t\in \Gamma$ with $|t|=k+1$. Then
$t=t_1s_1$ for some $t_1\in \Gamma$ with $|t_1|=k$ and some $s_1\in S$. Note that
\begin{align*}
-\|g\|_1&\le (\pi_n(g))_{s_{{\rm m}in}t_1\Gamma_n}=(h\pi_n(f))_{s_{{\rm m}in}t_1\Gamma_n}\\
&=f_e\left(h_{s_{{\rm m}in}t_1\Gamma_n}-h_{s_{{\rm m}in}t_1s_1\Gamma_n}{\rm m}u_{s_1^{-1}}-\sum_{s\in S\setminus \{s_1\}}h_{s_{{\rm m}in}t_1s\Gamma_n}{\rm m}u_{s^{-1}}\right),
\end{align*}
and hence
\begin{align*}
h_{s_{\min}t\Gamma_n}&=
h_{s_{\min}t_1s_1\Gamma_n}\\
&\le \left(\frac{\|g\|_1}{f_e}+h_{s_{\min}t_1\Gamma_n}-\sum_{s\in S\setminus \{s_1\}}h_{s_{\min}t_1s\Gamma_n}\mu_{s^{-1}}\right)/\mu_{s_1^{-1}}\\
&\le \left(\frac{\|g\|_1}{f_e}+h_{s_{\min}\Gamma_n}+k\cdot \frac{\|g\|_1}{f_ea^k}-\sum_{s\in S\setminus \{s_1\}}h_{s_{\min}\Gamma_n}\mu_{s^{-1}}\right)/\mu_{s_1^{-1}}\\
&= \left(\frac{\|g\|_1}{f_e}+h_{s_{\min}\Gamma_n}\mu_{s_1^{-1}}+k\cdot \frac{\|g\|_1}{f_ea^k}\right)/\mu_{s_1^{-1}}\le h_{s_{\min}\Gamma_n}+|t|\cdot \frac{\|g\|_1}{f_ea^{|t|}}.
\end{align*}
This finishes the induction.
Set $m=\max_{s\in K^{-1}K} |s|$. Taking $t=s_{\min}^{-1}s_{\max}$ above we get
\begin{align*}
h_{s_{\max}\Gamma_n}-h_{s_{\min}\Gamma_n}\le |s_{\min}^{-1}s_{\max}|\cdot \frac{\|g\|_1}{f_ea^{|s_{\min}^{-1}s_{\max}|}}\le \frac{m \|g\|_1}{f_e a^m}.
\end{align*}
Now we may set $C=\frac{m \|g\|_1}{f_e a^m}$.
\end{proof}
\begin{lemma} \label{L-connected component to left module}
Let $f\in {\mathbb Z}\Gamma$ be well-balanced and $g\in {\mathbb Z}\Gamma$. Then $\pi_n(g)\in {\mathbb Z}(\Gamma/\Gamma_n)\pi_n(f)$ for all $n\in {\mathbb N}$ if and only if $g\in {\mathbb Z}\Gamma f$.
\end{lemma}
\begin{proof}
The ``if'' part is obvious. Suppose that $\pi_n(g)\in {\mathbb Z}(\Gamma/\Gamma_n)\pi_n(f)$ for all $n\in {\mathbb N}$. For each $n\in {\mathbb N}$, take $h_n\in {\mathbb Z}(\Gamma/\Gamma_n)$ such that $\pi_n(g)=h_n\pi_n(f)$. By Lemma~\ref{L-bounded} there exists some $C\in {\mathbb N}$ such that
$$\max_{s\Gamma_n\in \Gamma/\Gamma_n}(h_n)_{s\Gamma_n}-\min_{s\Gamma_n\in \Gamma/\Gamma_n}(h_n)_{s\Gamma_n}\le C$$
for all $n\in {\mathbb N}$.
Let $S$ be the set of all $s\in \Gamma\setminus \{e\}$ with $f_s\neq 0$. Denote by $A_1$ the support of $g$. By Lemma~\ref{L-connectedness in Cayley graphs} we can find finite subsets $A_2, \dots, A_{C+1}$ of $\Gamma$ and $N\in {\mathbb N}$ such that for any $n\ge N$ and $1\le j\le C$, one has $A_j(\{e\}\cup S)\subset A_{j+1}$ and in the Cayley graph $C(\Gamma/\Gamma_n, \pi_n(S))$ the set
$(\Gamma/\Gamma_n)\setminus \pi_n(A_{j+1})$ is connected.
Let $n\ge N$. Set $a_j=\max\{(h_n)_{s\Gamma_n}:~s\Gamma_n\in (\Gamma/\Gamma_n)\setminus \pi_n(A_j)\}$ and $b_j=\min\{(h_n)_{s\Gamma_n}:~s\Gamma_n\in (\Gamma/\Gamma_n)\setminus \pi_n(A_j)\}$ for all $1\le j\le C+1$. We shall show by induction that
$$ a_j-b_j\le C+1-j$$
for all $1\le j\le C+1$. This is trivial when $j=1$. Suppose that $a_j-b_j\le C+1-j$ for some $1\le j\le C$. If $a_{j+1}<a_j$,
then
$a_{j+1}-b_{j+1}<a_j-b_j\le C+1-j$ and hence $a_{j+1}-b_{j+1}\le C+1-(j+1)$. Thus we may assume that $a_{j+1}=a_j$.
Denote by $W$ the set of elements in $(\Gamma/\Gamma_n)\setminus \pi_n(A_{j+1})$ at which $h_n$ takes the value $a_{j+1}$.
Let $t\Gamma_n\in W$. Since $\pi_n(g)$ takes value $0$ at $t\Gamma_n$, we have
$$ f_e (h_n)_{t\Gamma_n}=\sum_{s\in S}(-f_s) (h_n)_{ts\Gamma_n}.$$
Note that $ts\Gamma_n\in (\Gamma/\Gamma_n)\setminus \pi_n(A_j)$ for all $s\in S$. Thus $(h_n)_{ts\Gamma_n}\le a_j=a_{j+1}=(h_n)_{t\Gamma_n}$ for all $s\in S$, and hence $(h_n)_{t\Gamma_n}=(h_n)_{ts\Gamma_n}$ for all $s\in S$. Therefore, if for some $s\in S$ one has $ts\Gamma_n\in (\Gamma/\Gamma_n)\setminus \pi_n(A_{j+1})$, then $ts\Gamma_n\in W$. Take $t_1\Gamma_n, t_2 \Gamma_n \in (\Gamma/\Gamma_n)\setminus \pi_n(A_{j+1})$.
By our choice of $A_{j+1}$ we have a path in $(\Gamma/\Gamma_n)\setminus \pi_n(A_{j+1})$ connecting $t_1\Gamma_n$ and $t_2\Gamma_n$. Therefore
$t_1\Gamma_n\in W \Leftrightarrow t_2\Gamma_n \in W$, whence $a_{j+1}-b_{j+1}=0\le C+1-(j+1)$.
Now we have that $h_n$ is a constant function on $(\Gamma/\Gamma_n)\setminus \pi_n(A_{C+1})$. Replacing $h_n$ by the difference of $h_n$ and a suitable constant function,
we may assume that $h_n$ is $0$ on $(\Gamma/\Gamma_n)\setminus \pi_n(A_{C+1})$. Then $\|h_n\|_\infty\le C$.
Passing to a subsequence of $\{\Gamma_n\}_{n\in {\mathbb N}}$ if necessary, we may assume that $(h_n)_{s\Gamma_n}$ converges to some integer $h_s$ as $n\to \infty$ for every $s\in \Gamma$. Then $h_s=0$ for all $s\in \Gamma\setminus A_{C+1}$, and hence $h:=\sum_{s\in \Gamma}h_ss$ lies in ${\mathbb Z}\Gamma$.
Note that
$$ (hf)_s=\lim_{n\to \infty}(h_n\pi_n(f))_{s\Gamma_n}=\lim_{n\to \infty}(\pi_n(g))_{s\Gamma_n}=g_s$$
for each $s\in \Gamma$ and hence $g=hf\in {\mathbb Z}\Gamma f$. This proves the ``only if'' part.
\end{proof}
We are ready to prove Theorem~\ref{T-dense periodic points}.
\begin{proof}[Proof of Theorem~\ref{T-dense periodic points}]
Since $X_f$ is compact, the set of nonempty closed subsets of $X_f$ is a compact space under the Hausdorff distance \cite[Theorem 7.3.8]{BBI}. Thus, passing to a subsequence of $\{\Gamma_n\}_{n\in {\mathbb N}}$ if necessary, we may assume that ${\rm Fix}_{\Gamma_n}(X_f)$ converges to some nonempty closed subset $Y$ of $X_f$ under the Hausdorff distance as $n\to \infty$. A point $x\in X_f$ lies in $Y$ exactly when $x=\lim_{n\to \infty} x_n$ for some $x_n\in {\rm Fix}_{\Gamma_n}(X_f)$ for each $n\in {\mathbb N}$. It follows easily that $Y$ is a closed subgroup of $X_f$. Thus, by Pontryagin duality it suffices to show that the only $\varphi\in \widehat{X_f}={\mathbb Z}\Gamma/{\mathbb Z}\Gamma f$ vanishing on $Y$ is $0$. Let $\varphi\in \widehat{X_f}={\mathbb Z}\Gamma/{\mathbb Z}\Gamma f$ vanish on $Y$. Say $\varphi=g+{\mathbb Z}\Gamma f$ for some $g\in {\mathbb Z}\Gamma$.
Let $U$ be a small neighborhood of $0$ in ${\mathbb R}/{\mathbb Z}$ such that the only subgroup of ${\mathbb R}/{\mathbb Z}$ contained in $U$ is $\{0\}$. Since ${\rm Fix}_{\Gamma_n}(X_f)$ converges to $Y$ under the Hausdorff distance, we have $\left<{\rm Fix}_{\Gamma_n}(X_f), \varphi\right>\subset U$ for all sufficiently large $n$. Note that
$\left<{\rm Fix}_{\Gamma_n}(X_f), \varphi\right>$ is a subgroup of ${\mathbb R}/{\mathbb Z}$. By our choice of $U$, we see that $\varphi$ vanishes on ${\rm Fix}_{\Gamma_n}(X_f)$ for all sufficiently large $n$. Without loss of generality, we may assume that $\varphi$ vanishes on ${\rm Fix}_{\Gamma_n}(X_f)$ for all $n\in {\mathbb N}$.
From Lemmas~\ref{L-vanishing} and \ref{L-connected component to left module} we get $g\in {\mathbb Z}\Gamma f$ and hence $\varphi=0$, as desired.
\end{proof}
\section{Sofic Entropy}\label{sec:sofic}
The purpose of this section is to review sofic entropy theory. To be precise, we use Definitions 2.2 and 3.3 of \cite{KL11b} to define entropy. So let $\Gamma$ act by homeomorphisms on a compact metrizable space $X$. Suppose the action preserves a Borel probability measure $\lambda$. Let $\Sigma:=\{\Gamma_n\}^\infty_{n=1}$ be a sequence of finite-index normal subgroups of $\Gamma$ such that $\bigcap_{n\in {\mathbb N}}\bigcup_{i\ge n}\Gamma_i=\{e\}$.
Let $\rho$ be a continuous pseudo-metric on $X$. For $n \in {\mathbb N}$, define, on the space $Map(\Gamma/\Gamma_n,X)$ of all maps from $\Gamma/\Gamma_n$ to $X$, the pseudo-metrics
\begin{eqnarray*}
\rho_2(\phi,\psi)&:=& \left([\Gamma:\Gamma_n]^{-1}\sum_{s\Gamma_n \in \Gamma/\Gamma_n} \rho(\phi(s \Gamma_n), \psi(s\Gamma_n) )^2 \right)^{1/2},\\
\rho_\infty(\phi,\psi)&:=& \sup_{s\Gamma_n \in \Gamma/\Gamma_n} \rho(\phi(s \Gamma_n), \psi(s\Gamma_n)).
\end{eqnarray*}
\begin{definition}
Let $W\subset \Gamma$ be finite and nonempty and $\delta>0$. Define $Map(W,\delta,\Gamma_n)$ to be the set of all maps $\phi: \Gamma/\Gamma_n \to X$ such that $\rho_2(\phi \circ s, s \circ \phi) \le \delta$ for all $s\in W$.
Given a finite set $L$ in the space $C(X)$ of continuous ${\mathbb R}$-valued functions on $X$, let $Map_\lambda(W,L,\delta,\Gamma_n) \subset Map(W,\delta,\Gamma_n)$ be the subset of maps $\phi: \Gamma/\Gamma_n \to X$ such that $|\phi_*U_n(p)-\lambda(p)| \le \delta$ for all $p\in L$, where $U_n$ denotes the uniform probability measure on $\Gamma/\Gamma_n$.
\end{definition}
\begin{definition}\label{defn:separating}
Let $(Z,\rho_Z)$ be a pseudo-metric space. A set $Y \subset Z$ is {\em $(\rho_Z,\epsilon)$-separating} if $\rho_Z(y_1,y_2) >\epsilon$ for every $y_1\ne y_2 \in Y$. If $\rho_Z$ is understood, then we simply say that $Y$ is {\em $\epsilon$-separating}. Let $N_\epsilon(Z,\rho_Z)$ denote the largest cardinality of a $(\rho_Z,\epsilon)$-separating subset of $Z$.
\end{definition}
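As a toy illustration (our addition, not part of the argument), $N_\epsilon$ can be computed by brute force on a finite subset of ${\mathbb R}/{\mathbb Z}$ with the distance to the nearest integer, the metric that reappears in Definition~\ref{defn:rho}:

```python
# Brute-force computation of N_epsilon(Z, rho_Z): the largest cardinality of a
# subset whose points are pairwise strictly more than epsilon apart.
from itertools import combinations

def N_eps(points, rho, eps):
    # search subset sizes from largest to smallest and return the first size
    # admitting an epsilon-separating subset
    for r in range(len(points), 0, -1):
        for sub in combinations(points, r):
            if all(rho(a, b) > eps for a, b in combinations(sub, 2)):
                return r
    return 0

def rho_circle(x, y):
    # distance on R/Z: length of the shorter arc between x and y
    d = abs(x - y) % 1.0
    return min(d, 1.0 - d)

pts = [0.0, 0.3, 0.6, 0.9]
print(N_eps(pts, rho_circle, 0.25))  # -> 3: {0.0, 0.3, 0.6} is 0.25-separating
```

The pair $\{0.0,0.9\}$ is only $0.1$ apart, so no $0.25$-separating subset can contain all four points.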
Define
\begin{eqnarray*}
h_{\Sigma,2}(\rho)&:=&\sup_{\epsilon>0} \inf_{W \subset \Gamma} \inf_{\delta>0} \limsup_{n\to\infty} [\Gamma:\Gamma_n]^{-1} \log N_\epsilon(Map(W,\delta,\Gamma_n),\rho_2),\\
h_{\Sigma,\lambda,2}(\rho)&:=&\sup_{\epsilon>0} \inf_{W \subset \Gamma} \inf_{L\subset C(X)} \inf_{\delta>0} \limsup_{n\to\infty} [\Gamma:\Gamma_n]^{-1} \log N_\epsilon(Map_\lambda(W,L,\delta,\Gamma_n),\rho_2).
\end{eqnarray*}
Similarly, define $h_{\Sigma,\infty}(\rho)$ and $h_{\Sigma,\lambda,\infty}(\rho)$ by replacing $\rho_2$ with $\rho_\infty$ in the formulae above.
The pseudo-metric $\rho$ is said to be {\em dynamically generating} if for any $x,y \in X$ with $x\ne y$, one has $\rho(sx,sy) >0$ for some $s \in \Gamma$.
\begin{theorem}\label{thm:entropy}
If $\rho$ is any dynamically generating continuous pseudo-metric on $X$, then
$$h_{\Sigma,\lambda}(X,\Gamma)=h_{\Sigma,\lambda,2}(\rho) = h_{\Sigma,\lambda,\infty}(\rho),$$
$$h_{\Sigma}(X,\Gamma) = h_{\Sigma,2}(\rho) = h_{\Sigma,\infty}(\rho).$$
\end{theorem}
\begin{proof}
This follows from Propositions 2.4 and 3.4 of \cite{KL11b}.
\end{proof}
\section{Entropy of the Harmonic Model}
In this section we prove Theorem~\ref{thm:main1}. Throughout this section we let $\Gamma$ be a countably infinite group, $\Sigma=\{\Gamma_n\}^\infty_{n=1}$ be a sequence of finite-index normal subgroups of $\Gamma$ satisfying $\bigcap_{n=1}^\infty \bigcup_{i\ge n} \Gamma_i = \{e\}$, and $f \in {\mathbb Z}\Gamma$ be well-balanced.
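As an orienting example (our addition; we take ``well-balanced'' to mean, as used in the proof of Lemma~\ref{L-bounded}, that $f_e>0$, $f$ is symmetric, its coefficients sum to $0$, and its support generates $\Gamma$), the discrete Laplacian on $\Gamma={\mathbb Z}^2$ with generators $s_1,s_2$ qualifies:

```latex
f \;=\; 4e - s_1 - s_1^{-1} - s_2 - s_2^{-1} \;\in\; {\mathbb Z}[{\mathbb Z}^2],
\qquad
\mu \;=\; -\frac{f-f_e}{f_e} \;=\; \tfrac{1}{4}\bigl(s_1 + s_1^{-1} + s_2 + s_2^{-1}\bigr).
```

Here $\mu$ is the uniform probability measure on the four generators, and $X_f$ is the classical harmonic model.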
\subsection{The lower bound}
Note that ${\rm Fix}_{\Gamma_n}(X_f) \subset X_f$ is a closed $\Gamma$-invariant subgroup. Let $\lambda_n$ be its Haar probability measure.
\begin{lemma} \label{L-measure convergence}
The measure $\lambda_n$ converges in the weak* topology to the Haar probability measure $\lambda$ on $X_f$ as $n\to \infty$.
\end{lemma}
\begin{proof}
For $x\in X_f$, let $A_x:X_f \to X_f$ be the addition map $A_x(y)=x+y$. Each $A_x$ induces a map $(A_x)_*$ on the space $M(X_f)$ of all Borel probability measures on $X_f$. The map $X_f \times M(X_f) \to M(X_f)$ defined by $(x,\mu) \mapsto (A_x)_*\mu$ is continuous (with respect to the weak* topology on $M(X_f)$).
Choose an increasing sequence $\{n_i\}$ of natural numbers so that $\lim_{i\to\infty} \lambda_{n_i} = \lambda_\infty\in M(X_f)$ exists (this is possible by the Banach-Alaoglu Theorem). By the above if $x_i \in {\rm Fix}_{\Gamma_{n_i}}(X_f)$ and $\lim_{i\to\infty} x_i = x$ then
$$\lim_{i\to\infty} (A_{x_i})_*\lambda_{n_i} = (A_x)_*\lambda_\infty.$$
Since $\lambda_{n_i}$ is the Haar probability measure on ${\rm Fix}_{\Gamma_{n_i}}(X_f)$, $(A_{x_i})_*\lambda_{n_i} = \lambda_{n_i}$, so the above implies $(A_x)_*\lambda_\infty = \lambda_\infty$. Because ${\rm Fix}_{\Gamma_{n_i}}(X_f)$ converges in the Hausdorff topology to $X_f$ by Theorem~\ref{T-dense periodic points}, we have that $(A_x)_*\lambda_\infty = \lambda_\infty$ for every $x\in X_f$ which, by uniqueness of the Haar probability measure, implies that $\lambda_\infty=\lambda$. Since this argument applies to every weak* convergent subsequence of $\{\lambda_n\}_{n\in {\mathbb N}}$, and every subsequence has a weak* convergent subsequence, the full sequence converges to $\lambda$, as required.
\end{proof}
\begin{definition}\label{defn:rho}
For $t \in {\mathbb R}/{\mathbb Z}$ let $t' \in [-1/2,1/2)$ be such that $t' + {\mathbb Z} = t$ and define $|t|: = |t'|$. Similarly, for $x\in ({\mathbb R}/{\mathbb Z})^\Gamma$, let $x' \in {\mathbb R}^\Gamma$ be the unique element satisfying $x'_s \in [-1/2,1/2)$ and $x'_s + {\mathbb Z} = x_s$ for all $s\in \Gamma$. Define $\|x\|_\infty = \|x'\|_\infty$. Let $\rho$ be the continuous pseudo-metric on $X_f$ defined by $\rho(x,y)=|(x-y)'_e|$. It is easy to check that $\rho$ is dynamically generating.
\end{definition}
For $x\in {\rm Fix}_{\Gamma_n}(X_f)$, let $\phi_x:\Gamma/\Gamma_n \to X_f$ be the map defined by $\phi_x(s\Gamma_n) = sx$ for all $s\in \Gamma$. Let $W \subset \Gamma, L \subset C(X_f)$ be non-empty finite sets and $\delta>0$. Note that $\phi_x\in Map(W,\delta,\Gamma_n)$ for all $x\in {\rm Fix}_{\Gamma_n}(X_f)$.
Let ${\rm BAD}(W,L,\delta,\Gamma_n)$ be the set of all $x\in {\rm Fix}_{\Gamma_n}(X_f)$ such that $\phi_x \notin Map_\lambda(W,L,\delta,\Gamma_n)$. Because $\phi_x\circ s = s \circ \phi_x$ for every $s\in \Gamma$, $\phi_x \notin Map_\lambda(W,L,\delta,\Gamma_n)$ if and only if there exists $p\in L$ such that
$$\left| (\phi_x)_*U_n(p) - \lambda(p)\right| > \delta.$$
\begin{lemma}\label{lem:BAD1}
Assume that $\Gamma$ is not virtually ${\mathbb Z}$ or ${\mathbb Z}^2$.
Then
$$\lim_{n\to\infty} \lambda_n({\rm BAD}(W,L,\delta,\Gamma_n)) = 0.$$
\end{lemma}
\begin{proof}
The proof is similar to the proof of \cite[Theorem 3.1]{Bo11}. Let $n\in {\mathbb N}$. For each $\sigma \in \{-1,0,1\}^L$, let ${\rm BAD}_\sigma(W,L,\delta,\Gamma_n)$ be the set of all $x \in {\rm BAD}(W,L,\delta,\Gamma_n)$ such that for every $p \in L$, if $\sigma(p)\ne 0$ then
$$\sigma(p)\left[(\phi_x)_*U_n(p)-\lambda(p)\right] =\sigma(p) \left[ -\lambda(p) + [\Gamma:\Gamma_n]^{-1}\sum_{s\Gamma_n\in \Gamma/\Gamma_n} p(sx) \right] > \delta$$
and if $\sigma(p)=0$ then $\left|(\phi_x)_*U_n(p)-\lambda(p)\right| \le \delta$.
Observe that for each $\sigma \in \{-1,0,1\}^L$, ${\rm BAD}_\sigma(W,L,\delta,\Gamma_n)$ is $\Gamma$-invariant. Moreover, $\{{\rm BAD}_\sigma(W,L,\delta,\Gamma_n):~\sigma \in \{-1,0,1\}^L\}$ is a partition of ${\rm BAD}(W,L,\delta,\Gamma_n)$.
Let $t_{n,\sigma} = \lambda_n({\rm BAD}_\sigma(W,L,\delta,\Gamma_n))$ and $t_{n,G} = 1- \lambda_n({\rm BAD}(W,L,\delta,\Gamma_n))$. So $t_{n,G} + \sum_\sigma t_{n,\sigma} = 1$. For each $\sigma \in \{-1,0,1\}^L$, define a Borel probability measure $\lambda_{n,\sigma}$ on $X_f$ by
$$\lambda_{n,\sigma}(E) = \lambda_n(E \cap {\rm BAD}_\sigma(W,L,\delta,\Gamma_n)) t_{n,\sigma}^{-1},\quad \forall \mbox{ Borel } E \subset X_f$$
if $t_{n,\sigma} \ne 0$. Otherwise, define $\lambda_{n,\sigma}$ arbitrarily. Let $\lambda_{n,G}$ be the Borel probability measure on ${\rm Fix}_{\Gamma_n}(X_f)$ defined by
$$\lambda_{n,G}(E) = \lambda_n( E \setminus {\rm BAD}(W,L,\delta,\Gamma_n) ) t_{n,G}^{-1},\quad \forall \mbox{ Borel } E \subset X_f$$
if $t_{n,G}\ne 0$. Otherwise, define $\lambda_{n,G}$ arbitrarily. Observe that
$$\lambda_n = t_{n,G}\lambda_{n,G} + \sum_{\sigma} t_{n,\sigma} \lambda_{n,\sigma}.$$
Because the space of Borel probability measures on $X_f$ is weak* sequentially compact (by the Banach-Alaoglu Theorem), there is a subsequence $\{n_i\}_{i=1}^\infty$ such that
\begin{itemize}
\item $\lambda_{n_i,G}$ converges in the weak* topology as $i\to\infty$ to a Borel probability measure $\lambda_{\infty,G}$ on $X_f$,
\item each $\lambda_{n_i,\sigma}$ converges in the weak* topology as $i\to\infty$ to a Borel probability measure $\lambda_{\infty,\sigma}$ on $X_f$,
\item the limits $\lim_{i\to\infty} t_{n_i,G} = t_{\infty,G}$ and $\lim_{i\to\infty} t_{n_i,\sigma} = t_{\infty,\sigma}$ exist for all $\sigma$.
\end{itemize}
By the previous lemma, $\lambda_n$ converges to $\lambda$ as $n\to\infty$. Therefore,
$$\lambda = t_{\infty,G}\lambda_{\infty,G} + \sum_{\sigma} t_{\infty,\sigma} \lambda_{\infty,\sigma}.$$
Because weak* convergence preserves invariance, $\lambda_{\infty,G}$ and each of the $\lambda_{\infty,\sigma}$ are $\Gamma$-invariant Borel probability measures on $X_f$. Because $\lambda$ is ergodic by Theorem~\ref{T-dense homoclinic}, this implies that for each $\sigma \in \{-1,0,1\}^L$ with $t_{\infty,\sigma} \ne 0$, $\lambda_{\infty,\sigma} = \lambda$. However, for any $p \in L$ with $\sigma(p)\ne 0$,
$$\sigma(p)\left(\lambda_{\infty,\sigma}(p) - \lambda(p)\right)=\lim_{i\to\infty} \sigma(p)\left(\lambda_{n_i,\sigma}(p) -\lambda(p) \right) \ge \delta.$$
This contradiction implies $t_{\infty,\sigma} = 0$ for all $\sigma \in \{-1,0,1\}^L$ (if $\sigma$ is constantly $0$, then ${\rm BAD}_\sigma(W,L,\delta,\Gamma_n)$ is empty so $t_{\infty,\sigma}=0$). Thus $\lim_{n\to\infty} t_{n,\sigma} = 0$ for all $\sigma$ which, since
$$\lambda_n({\rm BAD}(W,L,\delta,\Gamma_n)) = \sum_{\sigma} \lambda_n({\rm BAD}_\sigma(W,L,\delta,\Gamma_n) ) = \sum_{\sigma} t_{n,\sigma},$$
implies the lemma.
\end{proof}
\begin{lemma}\label{lem:well-known}
Let $x \in \ell^\infty(\Gamma)$ and suppose $xf = 0$. Suppose also that for some finite-index subgroup $\Gamma'<\Gamma$, $sx=x$ for all $s\in \Gamma'$. Then $x$ is constant.
\end{lemma}
\begin{proof}
Because $x$ is fixed by a finite-index subgroup, it takes only finitely many values, so there is an element $s_0\in \Gamma$ such that $x_{s_0} = \min_{s\in \Gamma} x_s$. Because $xf=0$ and $f$ is well-balanced, this implies that $x_{s_0t}=x_{s_0}$ for every $t$ in the support of $f$. By induction, $x_{s_0 t}=x_{s_0}$ for every $t$ in the semi-group generated by the support of $f$. By hypothesis, this semi-group is all of $\Gamma$.
\end{proof}
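The abelian case of this lemma can be checked numerically. The following sketch (our addition; it takes $\Gamma={\mathbb Z}$, $f=2e-s-s^{-1}$, and $\Gamma'=n{\mathbb Z}$) verifies that the circulant Laplacian on ${\mathbb Z}/n{\mathbb Z}$, which represents $\pi_n(f)$, has eigenvalue $0$ with multiplicity one, so that its kernel consists exactly of the constants:

```python
# Eigenvalues of pi_n(f) for f = 2e - s - s^{-1} on Z/nZ are 2 - 2cos(2*pi*k/n),
# k = 0, ..., n-1 (Fourier diagonalisation of a circulant matrix).
import math

def laplacian_eigs(n):
    return [2.0 - 2.0 * math.cos(2.0 * math.pi * k / n) for k in range(n)]

for n in (3, 5, 8, 13):
    eigs = laplacian_eigs(n)
    # exactly one eigenvalue vanishes (k = 0), so dim ker pi_n(f) = 1
    assert sum(1 for t in eigs if abs(t) < 1e-9) == 1
print("circulant Laplacian kernel is 1-dimensional for all tested n")
```

Consistently with the lemma, the $n{\mathbb Z}$-periodic bounded solutions of $xf=0$ are then the constants.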
\begin{lemma} \label{L-small suprenorm to constant}
There is a number $C>0$ such that if $x\in {\rm Fix}_{\Gamma_n}(X_f)$ for some $n\in {\mathbb N}$ satisfies $\|x\|_\infty< C$, then $x$ is constant.
\end{lemma}
\begin{proof}
Let $x'$ be as in Definition~\ref{defn:rho}.
Because $f$ has finite support, there is some number $C>0$ such that if $\|x'\|_\infty=\|x\|_\infty < C$ then $\| x'f\|_\infty < 1$. Since $x\in X_f$, $x'f \in \ell^\infty(\Gamma,{\mathbb Z})$. So $\|x'f\|_\infty < 1$ implies $x'f = 0$. Because $x'$ is fixed by a finite-index subgroup the previous lemma implies $x'$ is constant and hence $x$ is constant.
\end{proof}
\begin{lemma}\label{lem:lower}
Assume that $\Gamma$ is not virtually ${\mathbb Z}$ or ${\mathbb Z}^2$.
Then
$$h_{\Sigma,\lambda}(X_f,\Gamma) \ge \limsup_{n\to \infty} [\Gamma:\Gamma_n]^{-1} \log |{\rm Fix}_{\Gamma_n}(X_f)|.$$
\end{lemma}
\begin{proof}
Let $W \subset \Gamma, L \subset C(X_f)$ be non-empty finite sets and $\delta>0$.
Let us identify ${\mathbb R}/{\mathbb Z}$ with the constant functions (on $\Gamma$) in $X_f$.
It follows easily from Lemma~\ref{L-small suprenorm to constant} that the connected component of ${\rm Fix}_{\Gamma_n}(X_f)$ containing the identity element is exactly ${\mathbb R}/{\mathbb Z}$.
Choose a maximal set $Y_n \subset {\rm Fix}_{\Gamma_n}(X_f)$ such that $Y_n \cap {\rm BAD}(W,L, \delta, \Gamma_n) = \emptyset$ and for each $x \in Y_n$ and $t \in {\mathbb R}/{\mathbb Z}$ with $t\ne 0$, $x+t \notin Y_n$. By Lemma~\ref{lem:BAD1}, $\lim_{n\to \infty} |Y_n|^{-1} |{\rm Fix}_{\Gamma_n}(X_f)| = 1$. Let $C>0$ be the constant in Lemma~\ref{L-small suprenorm to constant}. By Lemma~\ref{L-small suprenorm to constant}, if $x\ne y \in Y_n$ then $\|x-y\|_\infty \ge C$, which implies $\rho_\infty(\phi_x,\phi_y)\ge C$.
Therefore, if $0<\epsilon<C$ then $\{\phi_y:~y\in Y_n\}$ is $\epsilon$-separated with respect to $\rho_\infty$, which implies
$$ N_\epsilon( Map_\lambda(W,L,\delta,\Gamma_n), \rho_\infty) \ge |Y_n|.$$
Because $\lim_{n\to \infty} |Y_n|^{-1} |{\rm Fix}_{\Gamma_n}(X_f)| = 1$, this implies
$$\limsup_{n\to \infty} [\Gamma:\Gamma_n]^{-1} \log |{\rm Fix}_{\Gamma_n}(X_f)| \le h_{\Sigma,\lambda}(X_f,\Gamma).$$
\end{proof}
\begin{lemma} \label{L-amenable lower bound}
Assume that $\Gamma$ is amenable. Then
$$h_{\Sigma,\lambda}(X_f,\Gamma) \ge \limsup_{n\to \infty} [\Gamma:\Gamma_n]^{-1} \log |{\rm Fix}_{\Gamma_n}(X_f)|.$$
\end{lemma}
\begin{proof} The argument in the proof of Lemma~\ref{lem:lower} shows that
$$ h_{\Sigma}(X_f,\Gamma) \ge \limsup_{n\to \infty} [\Gamma:\Gamma_n]^{-1} \log |{\rm Fix}_{\Gamma_n}(X_f)|.$$
Note that $h_{\Sigma,\lambda}(X_f,\Gamma)$ coincides with the classical measure-theoretic entropy $h_\lambda(X_f, \Gamma)$ \cite[Theorem 1.2]{Bo12} \cite[Theorem 6.7]{KL11b}, and $h_{\Sigma}(X_f,\Gamma)$ coincides with the classical topological entropy $h(X_f, \Gamma)$ \cite[Theorem 5.3]{KL11b}.
Since $\Gamma$ acts on $X_f$ by continuous group automorphisms and $\lambda$ is the Haar probability measure of $X_f$, one has $h_\lambda(X_f, \Gamma)=h(X_f, \Gamma)$ \cite{Berg, De06}. Therefore
$$ h_{\Sigma,\lambda}(X_f,\Gamma)=h_\lambda(X_f, \Gamma)=h(X_f, \Gamma)=h_{\Sigma}(X_f,\Gamma) \ge \limsup_{n\to \infty} [\Gamma:\Gamma_n]^{-1} \log |{\rm Fix}_{\Gamma_n}(X_f)|.$$
\end{proof}
\begin{lemma} \label{L-lower bound}
We have
$$ h_{\Sigma,\lambda}(X_f,\Gamma) \ge \limsup_{n\to \infty} [\Gamma:\Gamma_n]^{-1} \log |{\rm Fix}_{\Gamma_n}(X_f)|.$$
\end{lemma}
\begin{proof} If $\Gamma$ has a finite-index normal subgroup isomorphic to ${\mathbb Z}$ or ${\mathbb Z}^2$, then $\Gamma$ is amenable \cite[Theorem G.2.1 and Proposition G.2.2]{BHV}. Thus the assertion follows from Lemmas~\ref{lem:lower} and \ref{L-amenable lower bound}.
\end{proof}
\subsection{The upper bound}
Let $U_n$ be the uniform probability measure on $\Gamma/\Gamma_n$. For $x,y\in {\mathbb R}^{\Gamma/\Gamma_n}$, let $\langle x,y\rangle_U$ be the inner product with respect to $U_n$ and $\|x\|_{p,U} := \left([\Gamma: \Gamma_n]^{-1} \sum_{s\Gamma_n \in \Gamma/\Gamma_n} |x_{s\Gamma_n}|^p\right)^{1/p}$ for $p\ge 1$.
For $x\in ({\mathbb R}/{\mathbb Z})^{\Gamma/\Gamma_n}$, let $x'$ be as in Definition~\ref{defn:rho}.
Let $|x|=|x'|$ and $\|x\|_{p,U} = \|x'\|_{p,U}$ for $p\ge 1$.
We will use $\langle \cdot ,\cdot \rangle$ and $\|\cdot\|_p$ to denote the inner product and $\ell^p$-norm with respect to the counting measure.
\begin{lemma} \label{L-diffrent norms}
For any $x\in ({\mathbb R}/{\mathbb Z})^{\Gamma/\Gamma_n}$,
$$\|x\|_{2,U}^2 \le \|x \|_{1,U} \le \|x\|_{2,U}\le 1/2.$$
\end{lemma}
\begin{proof}
The first inequality is immediate from $\|x\|_{\infty} \le 1/2$. Note that
\begin{eqnarray*}
\|x\|_1^2 &=& \sum_{s\Gamma_n \in \Gamma/\Gamma_n} \sum_{t\Gamma_n \in \Gamma/\Gamma_n} |x_{s\Gamma_n}| |x_{t\Gamma_n}| = \sum_{s\Gamma_n \in \Gamma/\Gamma_n} \sum_{t\Gamma_n \in \Gamma/\Gamma_n} |x_{s\Gamma_n}| |x_{ts\Gamma_n}| \\
&=& \sum_{t\Gamma_n \in \Gamma/\Gamma_n} \langle |x|, |x \circ t\Gamma_n|\rangle.
\end{eqnarray*}
By the Cauchy-Schwarz inequality, for any $t\in \Gamma$,
$$\langle |x|, |x \circ t\Gamma_n|\rangle \le \|x\|_2 \|x \circ t\Gamma_n\|_2 = \|x\|_2^2.$$
Hence
$$\|x\|_1^2 \le [\Gamma:\Gamma_n] \|x\|_2^2.$$
Since $\|x\|_{1,U} = [\Gamma:\Gamma_n]^{-1}\|x\|_1$ and $\|x\|_{2,U}^2 =[\Gamma:\Gamma_n]^{-1} \|x\|_2^2$, this implies the second inequality. The last one follows from $\|x\|_{2,U} \le \|x\|_\infty \le 1/2$.
\end{proof}
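As a quick numerical sanity check (our addition), the chain of inequalities can be tested on random points of $({\mathbb R}/{\mathbb Z})^{\Gamma/\Gamma_n}$, represented by their lifts with coordinates in $[-1/2,1/2)$:

```python
# Check ||x||_{2,U}^2 <= ||x||_{1,U} <= ||x||_{2,U} <= 1/2 for random torus points,
# where the norms are taken with respect to the uniform probability measure U_n.
import random

def norm_1U(x):
    return sum(abs(t) for t in x) / len(x)

def norm_2U(x):
    return (sum(t * t for t in x) / len(x)) ** 0.5

random.seed(0)
for _ in range(1000):
    # lifts of a point of the torus, coordinates in [-1/2, 1/2)
    x = [random.uniform(-0.5, 0.5) for _ in range(17)]
    l1, l2 = norm_1U(x), norm_2U(x)
    assert l2 * l2 <= l1 + 1e-12
    assert l1 <= l2 + 1e-12
    assert l2 <= 0.5
print("all inequalities of the lemma hold on the sample")
```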
Recall from Section~\ref{S-notation} that for a countable group $\Gamma'$ and $g\in {\mathbb R}\Gamma'$,
if $g$ is positive in ${\mathcal N}\Gamma'$, then we have the spectral measure of $g$ on
$[0, \|R_g\|]\subset [0, \|g\|_1]$ determined by \eqref{E-spectral measure}.
For each $n\in {\mathbb N}$ we denote by $\pi_n$ the natural algebra homomorphism ${\mathbb R}\Gamma\rightarrow {\mathbb R}(\Gamma/\Gamma_n)$.
\begin{lemma} \label{L-bound preimage}
Let $g\in {\mathbb R}\Gamma$ be such that
the kernel of $g$ on $\ell^2(\Gamma, {\mathbb C})$ is $\{0\}$,
and $\pi_n(g)$ is positive in ${\mathcal N}(\Gamma/\Gamma_n)$ for all $n\in {\mathbb N}$.
For each $n\in {\mathbb N}$ and $\eta>0$ denote by $B_{n, \eta}$ the set of $x\in {\mathbb R}(\Gamma/\Gamma_n)$ satisfying
$\|x\pi_n(g)\|_{2, U}\le \eta$ and $\|P_n(x)\|_{2, U}\le 1$, where $P_n$ denotes the orthogonal projection from $\ell^2(\Gamma/\Gamma_n, {\mathbb C})$ onto $\ker \pi_n(g)$.
For each $n\in {\mathbb N}$ denote by $\mu_n$ the spectral measure of $\pi_n(g)$ on $[0, \|g\|_1]$.
Let $\zeta>1$, $1>\varepsilon>0$, and $1/2>\kappa>0$. Then
there exists $\eta>0$ such that when $n\in {\mathbb N}$ is large enough, one has
$$N_\varepsilon(B_{n, \eta}, \|\cdot\|_{2, U})<\zeta^{[\Gamma: \Gamma_n]}\exp\Big(-[\Gamma: \Gamma_n]\int_{0+}^{\kappa}\log t \, d\mu_n(t)\Big)$$
where $N_\varepsilon(\cdot,\cdot)$ is as in Definition~\ref{defn:separating}.
\end{lemma}
\begin{proof}
Since $\pi_n(g)$ is positive in ${\mathcal N}(\Gamma/\Gamma_n)$, one has $(\pi_n(g))^*=\pi_n(g)$.
Let $Y_n$ be a maximal $(\|\cdot\|_{2, U},\varepsilon/6)$-separated subset of the closed unit ball of $\ker \pi_n(g)$ under $\|\cdot \|_{2, U}$. Then
the open $\varepsilon/12$-balls centered at $y$ under $\|\cdot\|_{2, U}$ for all $y\in Y_n$ are pairwise disjoint, and their union is contained in the open $2$-ball of $\ker \pi_n(g)$ under $\|\cdot \|_{2, U}$. Comparing the volumes we obtain $|Y_n|\le (24/\varepsilon)^{\dim_{{\mathbb R}} \ker \pi_n(g)}$.
For each $n\in {\mathbb N}$
denote by $V_{n, \kappa}$ the linear span of the eigenvectors of $\pi_n(g)$ in $\ell^2(\Gamma/\Gamma_n, {\mathbb C})$ with eigenvalue no bigger than $\kappa$.
Note that $V_{n, 0}=\ker \pi_n(g)$.
Since $\ker (g^*g)=\ker(g)=0$, by a result of L\"{u}ck \cite[Theorem 2.3]{Luck94}
(it was assumed in \cite{Luck94} that $\Gamma_n\supset \Gamma_{n+1}$ for all $n\in {\mathbb N}$, but the argument there holds in general), one has
$\lim_{n\to \infty}[\Gamma: \Gamma_n]^{-1}\dim_{{\mathbb C}} \ker \pi_n(g)=\lim_{n\to \infty}[\Gamma: \Gamma_n]^{-1}\dim_{{\mathbb C}} \ker \pi_n(g^*g)=0$. It follows that $|Y_n|\le \zeta^{[\Gamma:\Gamma_n]}$ when $n$ is large enough.
Denote by $P_{n, \kappa}$ the orthogonal projection of $\ell^2(\Gamma/\Gamma_n, {\mathbb C})$ onto $V_{n, \kappa}$. Set $\eta=\min(\varepsilon/24, \kappa \varepsilon /12)$.
Note that for each $x\in {\mathbb R}(\Gamma/\Gamma_n)$ one has
$$\|x\pi_n(g)\|_{2, U}^2=\|(P_{n, \kappa}(x))\pi_n(g)\|_{2, U}^2+\|(x-P_{n, \kappa}(x))\pi_n(g)\|_{2, U}^2\ge \kappa^2\|x-P_{n, \kappa}(x)\|_{2, U}^2.$$
Thus $\|x-P_{n, \kappa}(x)\|_{2, U}\le \eta/\kappa\le \varepsilon/12$ for every $x\in B_{n, \eta}$.
Then every two points in $({\rm Id}-P_{n, \kappa})(B_{n, \eta})$ have $\|\cdot \|_{2, U}$-distance at most $\varepsilon/6$. Let $X_n$ be a one-point subset of $({\rm Id}-P_{n, \kappa})(B_{n, \eta})$. Then $X_n$ is a maximal $(\varepsilon/6)$-separated subset of $({\rm Id}-P_{n, \kappa})(B_{n, \eta})$ under $\|\cdot \|_{2, U}$.
Denote by $E_{n, \kappa}$ the ordered set of all eigenvalues of $\pi_n(g)$ in $(0, \kappa]$ listed with multiplicity.
Let $Z_n$ be a maximal $(\varepsilon/6)$-separated subset of $(P_{n, \kappa}-P_{n, 0})(B_{n, \eta})$ under $\|\cdot \|_{2, U}$.
For each $z\in Z_n$ denote by $B_z$ the open ball centered at $z$ with radius $\varepsilon/12$ under $\|\cdot \|_{2, U}$. Note that $\|x\pi_n(g)\|_{2, U}\le \kappa\|x\|_{2, U}$ for all $x\in V_{n, \kappa}\ominus V_{n, 0}$. Thus every element in $(\bigcup_{z\in Z_n}B_z)\pi_n(g)$ has $\|\cdot \|_{2, U}$-norm at most $\eta+\kappa \varepsilon/12$. The volume of $(\bigcup_{z\in Z_n}B_z)\pi_n(g)$ is $\det(\pi_n(g)|_{V_{n, \kappa}\ominus V_{n, 0}})=\prod_{t\in E_{n, \kappa}}t$ times the volume of $\bigcup_{z\in Z_n}B_z$. It follows that
$$ |Z_n| \prod_{t\in E_{n, \kappa}} t\le \left(\frac{\eta+\kappa \varepsilon/12}{\varepsilon/12}\right)^{\dim_{{\mathbb R}} (V_{n, \kappa}\ominus V_{n, 0})}=\left(\frac{12\eta+\kappa \varepsilon}{\varepsilon}\right)^{\dim_{{\mathbb R}} (V_{n, \kappa}\ominus V_{n, 0})}\le 1.$$
Note that for every $t\in [0, \|g\|_1]$, the measure $\mu_n(\{t\})$ is exactly $[\Gamma:\Gamma_n]^{-1}$ times the multiplicity of $t$ as an eigenvalue of $\pi_n(g)$.
When $n\in {\mathbb N}$ is sufficiently large, we have
\begin{align*}
N_{\varepsilon}(B_{n, \eta}, \|\cdot \|_{2, U})&\le |X_n|\cdot |Y_n|\cdot |Z_n| \le \zeta^{[\Gamma: \Gamma_n]}\prod_{t\in E_{n, \kappa}}t^{-1} \\
&=\zeta^{[\Gamma: \Gamma_n]}\exp\left(-[\Gamma: \Gamma_n]\int_{0+}^{\kappa}\log t \, d\mu_n(t)\right)
\end{align*}
as desired.
\end{proof}
\begin{lemma} \label{L-measure weak convergence}
Let $g\in {\mathbb R}\Gamma$ be such that
$g$ is positive in ${\mathcal N}\Gamma$, and $\pi_n(g)$ is positive in ${\mathcal N}(\Gamma/\Gamma_n)$ for all $n\in {\mathbb N}$.
Denote by $\mu$ the spectral measure of $g$ on $[0, \|g\|_1]$.
For each $n\in {\mathbb N}$ denote by $\mu_n$ the spectral measure of $\pi_n(g)$ on $[0, \|g\|_1]$.
Let $\min(1, \|g\|_1)>\kappa>0$. Then
$$ \limsup_{n\to \infty}\int_{\kappa+}^{\|g\|_1}\log t \, d\mu_n(t)\le \int_{\kappa+}^{\|g\|_1}\log t \, d\mu(t).$$
\end{lemma}
\begin{proof} It suffices to show
$$ \limsup_{n\to \infty}\int_{\kappa+}^{\|g\|_1}\log t \, d\mu_n(t)\le \eta(1+\|g\|_1)+\int_{\kappa+}^{\|g\|_1}\log t \, d\mu(t)$$
for every $\eta>0$. Let $\eta>0$.
For each sufficiently large $k\in {\mathbb N}$ define a real-valued continuous function $q_k$ on $[0, \|g\|_1]$ to be $0$ on $[0, \kappa]$,
equal to $\log t$ at $t\in [\kappa+1/k, \|g\|_1]$, and linear on $[\kappa, \kappa+1/k]$. By the Lebesgue dominated convergence theorem one has $\int_{\kappa+}^{\|g\|_1} q_k(t)\, d\mu(t)\to \int_{\kappa+}^{\|g\|_1}\log t\, d\mu(t)$ as $k\to \infty$. Fix $k\in {\mathbb N}$ with $\int_{\kappa+}^{\|g\|_1} q_k(t)\, d\mu(t)\le \int_{\kappa+}^{\|g\|_1}\log t\, d\mu(t)+\eta$, and take
a polynomial $p$ with real coefficients such that $q_k+\eta \ge p\ge q_k$ on $[0, \|g\|_1]$. Then $p(t)\ge q_k(t)\ge \log t$ for all $t\in (0, \|g\|_1]$, and
\begin{align*}
{\rm tr}_{{\mathcal N}\Gamma}(p(g))&\overset{\eqref{E-spectral measure}}{=} \int_{0}^{\|g\|_1}p(t) \, d\mu(t) \le \eta \|g\|_1 +\int_{0}^{\|g\|_1}q_k(t) \, d\mu(t) \\
&= \eta \|g\|_1 +\int_{\kappa+}^{\|g\|_1}q_k(t) \, d\mu(t) \le \eta(1+\|g\|_1)+\int_{\kappa+}^{\|g\|_1}\log t \, d\mu(t).
\end{align*}
When $n\in {\mathbb N}$ is large enough, one has ${\rm tr}_{{\mathcal N}(\Gamma/\Gamma_n)}(p(\pi_n(g)))={\rm tr}_{{\mathcal N}\Gamma}(p(g))$ \cite[Lemma 2.6]{Luck94}, whence
$$ {\rm tr}_{{\mathcal N}\Gamma}(p(g))={\rm tr}_{{\mathcal N}(\Gamma/\Gamma_n)}(p(\pi_n(g)))\overset{\eqref{E-spectral measure}}{=}\int_{0}^{\|g\|_1}p(t) \, d\mu_n(t)\ge \int_{\kappa+}^{\|g\|_1}p(t) \, d\mu_n(t)\ge \int_{\kappa+}^{\|g\|_1}\log t \, d\mu_n(t) .$$
Thus
\begin{align*}
\limsup_{n\to \infty}\int_{\kappa+}^{\|g\|_1}\log t \, d\mu_n(t)\le {\rm tr}_{{\mathcal N}\Gamma}(p(g))\le \eta(1+\|g\|_1)+\int_{\kappa+}^{\|g\|_1}\log t \, d\mu(t)
\end{align*}
as desired.
\end{proof}
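To make these spectral measures concrete, consider (our addition; $\Gamma={\mathbb Z}$ is excluded from some hypotheses above, being virtually ${\mathbb Z}$, but the spectral statements still apply) $g=f=2e-s-s^{-1}$ on $\Gamma={\mathbb Z}$ with $\Gamma_n=n{\mathbb Z}$. Then $\mu_n$ places mass $1/n$ at each circulant eigenvalue $2-2\cos(2\pi k/n)$, the nonzero eigenvalues multiply to $n^2$ by the matrix-tree theorem (the $n$-cycle has $n$ spanning trees), and the normalised log-sums $\tfrac1n\sum_{t>0}\log t=\tfrac{2\log n}{n}$ tend to $0$, the Mahler-measure value of $\log{\det}_{{\mathcal N}{\mathbb Z}}f$ for this $f$:

```python
# Spectral measure of pi_n(f) for f = 2e - s - s^{-1} on Z/nZ: atoms of mass 1/n
# at the circulant eigenvalues 2 - 2cos(2*pi*k/n).  The nonzero ones multiply to
# n^2, so (1/n) * sum over nonzero eigenvalues of log t equals 2*log(n)/n -> 0.
import math

def nonzero_eigs(n):
    return [2.0 - 2.0 * math.cos(2.0 * math.pi * k / n) for k in range(1, n)]

for n in (10, 50, 200):
    eigs = nonzero_eigs(n)
    assert abs(math.prod(eigs) - n * n) < 1e-6 * n * n   # matrix-tree count: n trees * n
    normalised = sum(math.log(t) for t in eigs) / n
    assert abs(normalised - 2.0 * math.log(n) / n) < 1e-9
```

This is consistent with the expectation that the normalised periodic-point counts converge to the Fuglede--Kadison determinant; here $\log{\det}_{{\mathcal N}{\mathbb Z}}f$ equals the Mahler measure of $2-x-x^{-1}=-(x-1)^2/x$, which is $0$.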
\begin{lemma} \label{L-upper bound}
We have
$$h_{\Sigma}(X_f,\Gamma) \le \log {\det}_{{\mathcal N}\Gamma} f.$$
\end{lemma}
\begin{proof} Let $\rho$ be the pseudo-metric on $X_f$ defined in Definition~\ref{defn:rho}. Let $\zeta>1$ and $1/2>\kappa>0$. Let $1>\varepsilon>0$.
Denote by $W$ the support of $f$. An argument similar to that in the proof of Lemma~\ref{lem:well-known} shows that the kernel of $f$ on $\ell^2(\Gamma, {\mathbb C})$ is $\{0\}$. Note that $f\ge 0$ in ${\mathcal N}\Gamma$ and
$\pi_n(f)\ge 0$ in ${\mathcal N}(\Gamma/\Gamma_n)$ for all $n\in {\mathbb N}$.
Take $\eta>0$ as in Lemma~\ref{L-bound preimage} for $g=f$. Take $\delta>0$ such that
$2\|f\|_2^{1/2}|W|^{1/4}\delta^{1/2}<\eta$.
For $\phi \in Map(W,\delta,\Gamma_n)$ define $y_\phi \in ({\mathbb R}/{\mathbb Z})^{\Gamma/\Gamma_n}$ by $y_\phi(s\Gamma_n) = \phi(s^{-1}\Gamma_n)_e$. Note that
\begin{eqnarray*}
\|y_\phi \pi_n(f)\|_{2,U}^2&=&[\Gamma:\Gamma_n]^{-1} \sum_{s\Gamma_n \in \Gamma/\Gamma_n} \left| \sum_{t\in W} y_\phi(st\Gamma_n) f(t^{-1})\right|^2 \\
&=& [\Gamma:\Gamma_n]^{-1} \sum_{s\Gamma_n \in \Gamma/\Gamma_n} \left| \sum_{t\in W} \phi(t^{-1}s^{-1}\Gamma_n)_e f(t^{-1})\right|^2 \\
&=& [\Gamma:\Gamma_n]^{-1} \sum_{s\Gamma_n \in \Gamma/\Gamma_n} \left| \sum_{t\in W} \left(\phi(t^{-1}s^{-1}\Gamma_n)_e - [t^{-1}\phi(s^{-1}\Gamma_n)]_e + [t^{-1}\phi(s^{-1}\Gamma_n)]_e\right) f(t^{-1})\right|^2 \\
&=& [\Gamma:\Gamma_n]^{-1} \sum_{s\Gamma_n \in \Gamma/\Gamma_n} \left| \sum_{t\in W} \left(\phi(t^{-1}s^{-1}\Gamma_n)_e - [t^{-1}\phi(s^{-1}\Gamma_n)]_e \right) f(t^{-1})\right|^2 \\
&\le& [\Gamma:\Gamma_n]^{-1} \sum_{s\Gamma_n \in \Gamma/\Gamma_n} \left( \sum_{t\in W} \left|\phi(t^{-1}s^{-1}\Gamma_n)_e - [t^{-1}\phi(s^{-1}\Gamma_n)]_e \right|^2 \right) \|f \|_2^2 \\
&=& \|f\|_2^2 \sum_{t \in W} {\rm h}o_2(\phi \circ t^{-1}, t^{-1} \circ \phi)^2\\
&\le& \|f\|_2^2 |W| \delta^2.
\end{eqnarray*}
For each $\phi\in Map(W,\delta,\Gamma_n)$ take $\tilde{y}_\phi \in [-1/2, 1/2)^{\Gamma/\Gamma_n}$ such that $\tilde{y}_\phi + {\mathbb Z}^{\Gamma/\Gamma_n} =y_\phi$. Then there exists $z_\phi\in {\mathbb Z}^{\Gamma/\Gamma_n}$ such that $\tilde{y}_\phi\pi_n(f) - z_\phi \in [-1/2,1/2)^{\Gamma/\Gamma_n}$ which implies $\|\tilde{y}_\phi \pi_n(f)-z_\phi\|_{2,U} = \|y_\phi \pi_n(f)\|_{2,U}\le \|f\|_2 |W|^{1/2} \delta$.
Let $S_n:{\mathbb R}^{\Gamma/\Gamma_n} \to {\mathbb R}$ be the sum function: $S_n(y) = \sum_{s\Gamma_n \in \Gamma/\Gamma_n} y_{s\Gamma_n}$. Note that
$$|S_n(\tilde{y}_\phi \pi_n(f)-z_\phi)| \le \|y_\phi \pi_n(f)\|_1 \le (1/2) [\Gamma:\Gamma_n].$$
Note that $S_n(y\pi_n(f))=0$ for every $y \in {\mathbb R}^{\Gamma/\Gamma_n}$. In particular, $S_n(\tilde{y}_\phi\pi_n(f)-z_\phi)=-S_n(z_\phi) \in {\mathbb Z}$.
So there exists $z'_\phi \in \{-1,0,1\}^{\Gamma/\Gamma_n}$ such that $S_n(\tilde{y}_\phi\pi_n(f)-z_\phi-z'_\phi)=0$ and $\|z'_\phi\|_1 \le \|y_\phi \pi_n(f)\|_1$. So by Lemma~{\rm re}f{L-diffrent norms}
\begin{eqnarray*}
\| \tilde{y}_\phi\pi_n(f)-z_\phi-z'_\phi\|_{2,U} &\le& \|\tilde{y}_\phi \pi_n(f) - z_\phi\|_{2,U} + \|z'_\phi\|_{2,U} \\
&\le& \|\tilde{y}_\phi \pi_n(f) - z_\phi\|_{2,U} + \|z'_\phi\|_{1,U}^{1/2}\\
&\le& \|y_\phi \pi_n(f)\|_{2,U} +\|y_\phi \pi_n(f)\|_{2,U}^{1/2} \\
&\le& (1+2^{-1/2})\|y_\phi \pi_n(f)\|_{2,U}^{1/2} \\
&\le& 2\|f\|_2^{1/2}|W|^{1/4}\delta^{1/2}<\eta.
\end{eqnarray*}
Note that $\ker \pi_n(f)$ is the constants in $\ell^2(\Gamma/\Gamma_n, {{\rm m}athbb C})$. Denote by $\ell^2_0(\Gamma/\Gamma_n, {{\rm m}athbb C})$ the orthogonal complement of
the constants in $ \ell^2(\Gamma/\Gamma_n, {{\rm m}athbb C})$, and set $\ell^2_0(\Gamma/\Gamma_n, {{\rm m}athbb R})=\ell^2(\Gamma/\Gamma_n, {{\rm m}athbb R})\cap \ell^2_0(\Gamma/\Gamma_n, {{\rm m}athbb C})$.
Note that $y\in {\mathbb R}^{\Gamma/\Gamma_n}$ is in $\ell^2_0(\Gamma/\Gamma_n, {{\rm m}athbb R})$ exactly when $S_n(y)=0$.
The operator $\pi_n(f)$ is invertible as an operator from $\ell_0^2(\Gamma/\Gamma_n, {{\rm m}athbb C})$ to itself.
Since $\pi_n(f)$ preserves $\ell^2(\Gamma/\Gamma_n, {{\rm m}athbb R})$, it is also invertible from $\ell_0^2(\Gamma/\Gamma_n, {{\rm m}athbb R})$ to itself.
Therefore there exists $\tilde{x}_\phi\in \ell_0^2(\Gamma/\Gamma_n, {{\rm m}athbb R})$ such that $\tilde{x}_\phi\pi_n(f)=z_\phi+z'_\phi$.
Let $P_n$ and $B_{n, \eta}$ be as in Lemma~{\rm re}f{L-bound preimage} for $g=f$.
Then
$$ \|(\tilde{y}_\phi-\tilde{x}_\phi)\pi_n(f)\|_{2, U}=\| \tilde{y}_\phi\pi_n(f)-z_\phi-z'_\phi\|_{2,U}<\eta.$$
Note that $\|P_n(\tilde{y}_\phi-\tilde{x}_\phi)\|_{2, U}=\|P_n(\tilde{y}_\phi)\|_{2, U}\le \|\tilde{y}_\phi\|_{2, U}\le 1/2$.
Therefore $\tilde{y}_\phi-\tilde{x}_\phi\in B_{n, \eta}$.
Let $\Phi_n$ be a $({\rm h}o_2, 2\varepsilon)$-separated subset of $Map(W,\delta,\Gamma_n)$ with $|\Phi_n|=N_{2\varepsilon}(Map(W,\delta,\Gamma_n), {\rm h}o_2)$.
Let $\phi \in \Phi_n$. Denote by $B_\phi$ the set of all $\psi\in \Phi_n$ satisfying
$\|(\tilde{y}_\phi-\tilde{x}_\phi)-(\tilde{y}_\psi-\tilde{x}_\psi)\|_{2, U}<\varepsilon$.
Denote ${{\rm m}athbb Z}(\Gamma/\Gamma_n)\cap \ell_0^2(\Gamma/\Gamma_n, {{\rm m}athbb R})$ by ${{\rm m}athbb Z}_0(\Gamma/\Gamma_n)$.
We claim that the map $B_\phi\rightarrow {{\rm m}athbb Z}_0(\Gamma/\Gamma_n)/{{\rm m}athbb Z}_0(\Gamma/\Gamma_n)\pi_n(f)$ sending
$\psi$ to $z_\psi+z'_\psi+{{\rm m}athbb Z}_0(\Gamma/\Gamma_n)\pi_n(f)$ is injective.
Let $\psi, \varphi\in B_\phi$. Then
\begin{align*}
\|(\tilde{y}_\psi-\tilde{x}_\psi)-(\tilde{y}_\varphi-\tilde{x}_\varphi)\|_{2, U}<2\varepsilon.
\end{align*}
Suppose that $z_\psi+z'_\psi+{{\rm m}athbb Z}_0(\Gamma/\Gamma_n)\pi_n(f)=z_\varphi+z'_\varphi+{{\rm m}athbb Z}_0(\Gamma/\Gamma_n)\pi_n(f)$.
Then $\tilde{x}_\psi\pi_n(f)=\tilde{x}_\varphi\pi_n(f)+w\pi_n(f)$ for some $w\in {{\rm m}athbb Z}_0(\Gamma/\Gamma_n)$. Since the right multiplication by $\pi_n(f)$ is injective on $\ell_0^2(\Gamma/\Gamma_n, {{\rm m}athbb R})$, we get
$\tilde{x}_\psi=\tilde{x}_\varphi+w$, which implies that
$${\rm h}o_2(\psi, \varphi)=\|y_\psi-y_\varphi\|_{2, U}\le \|(\tilde{y}_\psi-\tilde{x}_\psi)-(\tilde{y}_\varphi-\tilde{x}_\varphi)\|_{2, U}<2\varepsilon,$$
and thus $\psi=\varphi$. This proves our claim.
Therefore $|B_\phi|\le |{{\rm m}athbb Z}_0(\Gamma/\Gamma_n)/{{\rm m}athbb Z}_0(\Gamma/\Gamma_n)\pi_n(f)|$. Note that $\ell_0^2(\Gamma/\Gamma_n, {{\rm m}athbb R})$ is the linear span of ${{\rm m}athbb Z}_0(\Gamma/\Gamma_n)$. So any basis of ${{\rm m}athbb Z}_0(\Gamma/\Gamma_n)$ as a free abelian group is also a basis for $\ell_0^2(\Gamma/\Gamma_n, {{\rm m}athbb R})$ as an ${\mathbb R}$-vector space.
Thus by Lemma~{\rm re}f{lem:correspondence}
one has
$|{{\rm m}athbb Z}_0(\Gamma/\Gamma_n)/{{\rm m}athbb Z}_0(\Gamma/\Gamma_n)\pi_n(f)|=|\det (\pi_n(f)|_{\ell_0^2(\Gamma/\Gamma_n, {{\rm m}athbb R})})|$.
Therefore
$$|B_\phi|\le |\det (\pi_n(f)|_{\ell_0^2(\Gamma/\Gamma_n, {{\rm m}athbb R})})|.$$
Now we have
\begin{align*}
|\Phi_n|\le N_\varepsilon(B_{n, \eta}, \|\cdot \|_{2, U}) {\rm m}ax_{\phi\in \Phi_n}|B_\phi|\le N_\varepsilon(B_{n, \eta}, \|\cdot \|_{2, U}) |\det (\pi_n(f)|_{\ell_0^2(\Gamma/\Gamma_n, {{\rm m}athbb R})})|.
\end{align*}
Let ${\rm m}u$ and ${\rm m}u_n$ be as in Lemma~{\rm re}f{L-measure weak convergence} for $g=f$.
When $n\in {{\rm m}athbb N}$ is large enough, by Lemma~{\rm re}f{L-bound preimage} we have
$$ N_\varepsilon(B_{n, \eta}, \|\cdot\|_{2, U})<\zeta^{[\Gamma: \Gamma_n]}\exp\left(-[\Gamma: \Gamma_n]\int_{0+}^{\kappa}\log t \, d{\rm m}u_n(t)\right),$$
whence
\begin{align*}
N_{2\varepsilon}(Map(W,\delta,\Gamma_n), {\rm h}o_2)
&=|\Phi_n| \\
&\le \zeta^{[\Gamma: \Gamma_n]}\exp\left(-[\Gamma: \Gamma_n]\int_{0+}^{\kappa}\log t \, d{\rm m}u_n(t)\right)\left|\det (\pi_n(f)|_{\ell_0^2(\Gamma/\Gamma_n, {{\rm m}athbb R})})\right| \\
&=\zeta^{[\Gamma: \Gamma_n]}\exp\left([\Gamma: \Gamma_n]\int_{\kappa+}^{\|f\|_1}\log t \, d{\rm m}u_n(t)\right).
\end{align*}
It follows that
\begin{align*}
\limsup_{n\to \infty}\frac{1}{[\Gamma: \Gamma_n]}\log N_{2\varepsilon}(Map(W,\delta,\Gamma_n), {\rm h}o_2)&\le \log \zeta+\limsup_{n\to \infty}\int_{\kappa+}^{\|f\|_1}\log t \, d{\rm m}u_n(t) \\
&\le \log \zeta+\int_{\kappa+}^{\|f\|_1}\log t \, d{\rm m}u(t),
\end{align*}
where the second inequality comes from Lemma~{\rm re}f{L-measure weak convergence}. Therefore
$$ h_{\Sigma}(X_f,\Gamma) \le \log \zeta+\int_{\kappa+}^{\|f\|_1}\log t \, d{\rm m}u(t).$$
Letting $\zeta\to 1+$ and $\kappa\to 0+$, we get
$$ h_{\Sigma}(X_f,\Gamma) \le \int_{0+}^{\|f\|_1}\log t \, d{\rm m}u(t)\overset{{\rm eq}ref{E-determinant}}=\log {\rm det}_{{{\rm m}athcal N}\Gamma} f.$$
\end{proof}
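The convergence $\int_{0+}^{\|f\|_1}\log t \, d{\rm m}u_n(t)\to \log {\rm det}_{{{\rm m}athcal N}\Gamma} f$ used above can be illustrated numerically in the simplest case $\Gamma={\mathbb Z}$, $\Gamma_n=n{\mathbb Z}$ and $f=2-s-s^{-1}$, where ${\rm det}_{{{\rm m}athcal N}\Gamma} f=1$. The eigenvalues of $\pi_n(f)$ on $\ell^2({\mathbb Z}/n{\mathbb Z})$ are $2-2\cos(2\pi k/n)$, and discarding the zero eigenvalue (the constants) gives the normalised log-determinant. This is a sketch for illustration only, not part of the proof; the function name is ours.

```python
import math

def normalised_log_det(n):
    """(1/n) times the sum of the logs of the nonzero eigenvalues of
    pi_n(f) for f = 2 - s - s^{-1} acting on l^2(Z/nZ); the zero
    eigenvalue, coming from the constants, is discarded."""
    eigenvalues = (2 - 2 * math.cos(2 * math.pi * k / n) for k in range(1, n))
    return sum(math.log(t) for t in eigenvalues) / n

# The product of the nonzero eigenvalues of the n-cycle Laplacian is n^2,
# so normalised_log_det(n) equals (2/n) log n exactly and tends to
# log det_{N Gamma} f = 0 as n -> infinity.
```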
We are ready to prove Theorem~{\rm re}f{thm:main1}.
\begin{proof}[Proof of Theorem~{\rm re}f{thm:main1}]
From Lemmas {\rm re}f{L-lower bound} and {\rm re}f{L-upper bound} and Theorem~{\rm re}f{T-det vs number of fixed point} we obtain
$$h_{\Sigma}(X_f,\Gamma) \le \log {\rm det}_{{{\rm m}athcal N}\Gamma} f=\lim_{n\to \infty} [\Gamma:\Gamma_n]^{-1} \log |{\rm Fix}_{\Gamma_n}(X_f)| \le h_{\Sigma,\lambda}(X_f,\Gamma).$$
It follows immediately from Theorem {\rm re}f{thm:entropy} that $h_{\Sigma}(X_f,\Gamma) \ge h_{\Sigma,\lambda}(X_f,\Gamma)$, so equality holds throughout.
\end{proof}
\section{Entropy of the Wired Spanning Forest}
The purpose of this section is to prove Theorem {\rm re}f{thm:WSF}. To begin, let us set notation. Recall that $\Sigma=\{\Gamma_n\}^\infty_{n=1}$ is a sequence of finite-index normal subgroups of $\Gamma$ satisfying $\bigcap_{n=1}^\infty \bigcup_{i\ge n} \Gamma_i = \{e\}$. All graphs in this paper are allowed to have multiple edges and loops. Let $f \in {\mathbb Z}\Gamma$ be well-balanced. The Cayley graph $C(\Gamma,f)$ has vertex set $\Gamma$. For each $v \in \Gamma$ and $s \ne e$, there are $|f_s|$ edges from $v$ to $vs$. Similarly, we let $C_n^f=C(\Gamma/\Gamma_n,f)$ be the graph with vertex set $\Gamma/\Gamma_n$ such that for each $g\Gamma_n \in \Gamma/\Gamma_n$ and $s \in \Gamma$ there are $|f_s|$ edges from $g\Gamma_n$ to $gs\Gamma_n$. For the sake of convenience we let $E=E(\Gamma,f)$ denote the edge set of $C(\Gamma,f)$ and $E_n=E_n^f$ denote the edge set of $C_n^f$. Recall the definition of $S$ and $S_*$ from Notation {\rm re}f{note:S}.
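The finite graphs $C_n^f$ can be assembled directly from the coefficients of $f$. A minimal sketch for $\Gamma={\mathbb Z}$ and $\Gamma_n=n{\mathbb Z}$, where we encode $f$ as a dictionary $s{\rm m}apsto f_s$ (the helper name and this encoding are our own; elements of $\Gamma_n$ would give loops and are skipped here):

```python
def quotient_cayley(n, f):
    """Edge multiplicities of C(Z/nZ, f): for each vertex g and each
    s != e there are |f_s| edges from g to g + s (mod n)."""
    adj = [[0] * n for _ in range(n)]
    for s, c in f.items():
        if s % n == 0:
            continue  # s = e contributes no edge; loops skipped in this sketch
        for g in range(n):
            adj[g][(g + s) % n] += abs(c)
    return adj

# The well-balanced Laplacian f = 2 - s - s^{-1} on Z yields the n-cycle.
```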
Let $\pi_n:\Gamma \to \Gamma/\Gamma_n$ denote the quotient map. We also denote by $\pi_n$ the induced map from ${\mathbb R}\Gamma$ to ${\mathbb R}\Gamma/\Gamma_n$ as well as the map from $E(\Gamma,f)$ (the edge set of $C(\Gamma,f)$) to $E_n^f$ (the edge set of $C_n^f$).
\subsection{The lower bound}
If ${{\rm m}athcal H} \subset C_n^f$ is a subgraph then its {\em lift} $\tilde{{{\rm m}athcal H}} \subset C(\Gamma,f)$ is the subgraph which contains an edge $gs_*$ (for $g\in \Gamma$, $s_*\in S_*$) if and only if ${{\rm m}athcal H}$ contains $\pi_n(gs_*)$. Let $2^E$ be the set of all spanning subgraphs of $C(\Gamma,f)$ and let $2^{E_n}$ be the set of all spanning subgraphs of $C_n^f$. Let $\nu_n$ be the probability measure on $2^{E_n}$ which is uniformly distributed on the collection of spanning trees of $C_n^f$. Let $\tilde{\nu}_n$ be its lift to $2^E$. To be precise, $\tilde{\nu}_n$ is uniformly distributed on the set of lifts $\tilde{{{\rm m}athcal T}}$ of spanning trees ${{\rm m}athcal T} \in 2^{E_n}$.
\begin{lemma}\label{lem:WSF2}
$\tilde{\nu}_n$ converges in the weak* topology to $\nu_{WSF}$ as $n$ tends to infinity.
\end{lemma}
\begin{remark}
This lemma is a special case of \cite[Proposition 7.1]{AL07}. It is also contained in \cite[Theorem 4.3]{Bo04}. However, there is an error in the proof of \cite[Theorem 4.3]{Bo04} (namely, the fact that subspaces $S_i$ increase to $l^2_-(\Gamma)$ does not logically imply that $P_{S_i}(\star)$ converges to $\star$ in the strong operator topology). For the reader's convenience we provide another proof based on a negative correlations result of Feder and Mihail.
\end{remark}
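The measure $\nu_n$ is the uniform spanning tree measure on $C_n^f$. One standard way to sample it, which we mention only as a concrete realisation (the arguments below instead rely on the Feder--Mihail inequality), is Wilson's loop-erased random walk algorithm. A minimal sketch for a simple graph given by adjacency lists; the function name and input format are our own:

```python
import random

def wilson_uniform_spanning_tree(neighbors, root=0):
    """Sample a uniform spanning tree of a finite connected graph by
    Wilson's algorithm: repeatedly run a random walk to the growing tree,
    then retrace its loop-erasure.  Returns a parent map rooted at root."""
    n = len(neighbors)
    in_tree = [False] * n
    parent = [None] * n
    in_tree[root] = True
    nxt = [None] * n
    for start in range(n):
        v = start
        while not in_tree[v]:
            nxt[v] = random.choice(neighbors[v])  # overwriting nxt erases loops
            v = nxt[v]
        v = start
        while not in_tree[v]:                     # retrace loop-erased path
            in_tree[v] = True
            parent[v] = nxt[v]
            v = nxt[v]
    return parent
```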
Let ${{\rm m}athcal G}=(V^{{\rm m}athcal G},E^{{\rm m}athcal G})$ be a finite connected graph. A collection ${{\rm m}athcal A}$ of spanning subgraphs is {\em increasing} if $x \subset y$ and $x\in {{\rm m}athcal A}$ implies $y\in {{\rm m}athcal A}$. We say that ${{\rm m}athcal A}$ {\em ignores} an edge ${\mathfrak e}$ if $x \setminus \{{\mathfrak e}\} = y \setminus \{{\mathfrak e}\}$ and $x \in {{\rm m}athcal A}$ implies $y \in {{\rm m}athcal A}$.
\begin{lemma}
If ${{\rm m}athcal A}$ is increasing, ${{\rm m}athcal A}$ ignores ${\mathfrak e}$, ${\mathfrak e}$ is not a loop and $T$ denotes the uniform spanning tree on ${{\rm m}athcal G}$ then
$${\bf P}(T \in {{\rm m}athcal A}) \ge {\bf P}(T \in {{\rm m}athcal A}| {\mathfrak e} \in T) = \frac{{\bf P}(T \in {{\rm m}athcal A}, {\mathfrak e} \in T) }{{\bf P}({\mathfrak e} \in T) }$$
where ${\bf P}(\cdot)$ denotes probability. Equivalently, ${\bf P}(T \in {{\rm m}athcal A}) \le {\bf P}(T \in {{\rm m}athcal A}| {\mathfrak e} \notin T)$ whenever this is well-defined (i.e., whenever ${\bf P}({\mathfrak e} \notin T)>0$, or equivalently, whenever ${{\rm m}athcal G}\setminus \{{\mathfrak e}\}$ is connected).
\end{lemma}
\begin{proof}
This result is due to Feder and Mihail \cite{FM92}. The first statement is reproduced in \cite[Theorem 4.4]{BLPS01}. To see that the second inequality is equivalent to the first observe that
$$ {\bf P}(T \in {{\rm m}athcal A}| {\mathfrak e} \notin T) = \frac{ {\bf P}(T \in {{\rm m}athcal A}, {\mathfrak e}\notin T) }{{\bf P}({\mathfrak e} \notin T)} = \frac{ {\bf P}(T \in {{\rm m}athcal A}) - {\bf P}(T \in {{\rm m}athcal A}, {\mathfrak e}\in T) }{1-{\bf P}({\mathfrak e} \in T)}.$$
By multiplying denominators, we see that ${\bf P}(T \in {{\rm m}athcal A}) \le {\bf P}(T \in {{\rm m}athcal A}| {\mathfrak e} \notin T)$ if and only if
$$ {\bf P}(T \in {{\rm m}athcal A}) - {\bf P}(T \in {{\rm m}athcal A}, {\mathfrak e}\in T) \ge {\bf P}(T \in {{\rm m}athcal A}) - {\bf P}(T \in {{\rm m}athcal A}){\bf P}({\mathfrak e} \in T)$$
which simplifies to ${\bf P}(T \in {{\rm m}athcal A}) \ge {\bf P}(T \in {{\rm m}athcal A}| {\mathfrak e} \in T)$.
\end{proof}
\begin{proof}[Proof of Lemma {\rm re}f{lem:WSF2}]
For $n\ge 0$, let $B(n)$ denote the ball of radius $n$ centered at the identity element in $C(\Gamma,f)$. For each $n$, choose a non-negative integer $i_n$ so that the following hold.
\begin{enumerate}
\item $\lim_{n\to\infty} i_n = \infty$.
\item The quotient map $\pi_n$ restricted to $B(i_n)$ is injective but not surjective. Moreover, if $v,w$ are vertices in $B(i_n)$ then the number of edges in $C^f_n$ from $v\Gamma_n$ to $w\Gamma_n$ equals the number of edges in $B(i_n)$ from $v$ to $w$.
\end{enumerate}
Because $\bigcap_{n=1}^\infty \bigcup_{i \ge n} \Gamma_i = \{e\}$, it is possible to find such a sequence.
Let $C^w_n$ denote the graph $C_n^f$ with all the vertices outside of $B(i_n)\Gamma_n$ contracted together. To be precise, $C^w_n$ has vertex set $B(i_n)\Gamma_n \cup \{*\}$. Every edge in $C_n^f$ with endpoints in $B(i_n)\Gamma_n$ is also in $C^w_n$. For every edge in $C_n^f$ with one endpoint $v$ in $B(i_n)\Gamma_n$ and the other endpoint not in $B(i_n)\Gamma_n$, there is an edge in $C^w_n$ from $v$ to $*$.
Similarly, let $D^w_n$ denote the graph $C(\Gamma,f)$ with all the vertices outside of $B(i_n)$ contracted together. By the choice of $i_n$, $D^w_n$ is isomorphic to $C^w_n$. Let $\nu^{C,w}_n$ be the law of the uniform spanning tree on $C^w_n$, $\nu^{D,w}_n$ be the law of the uniform spanning tree on $D^w_n$ and $\nu_n$ be the law of the uniform spanning tree on $C_n^f$.
Let ${{\rm m}athcal A} \subset 2^E$ be an increasing set which depends on only a finite number of edges (i.e., there is a finite subset $F \subset E$ such that if $x, y \in 2^E$ and $x \cap F = y \cap F$ then $x \in {{\rm m}athcal A} \Leftrightarrow y \in {{\rm m}athcal A}$). If $n$ is sufficiently large, then $F \subset B(i_n)$. So we define ${{\rm m}athcal A}_n \subset 2^{E_n}$ by: $x\in {{\rm m}athcal A}_n \Leftrightarrow \exists y \in {{\rm m}athcal A}$ such that $\pi_n(y \cap F) = x \cap \pi_n(F)$. By abuse of notation, we also consider ${{\rm m}athcal A}_n$ to be a subset of the set of edges of $C^w_n$.
Because $C^w_n$ is obtained from $C_n^f$ by adding some edges and contracting some edges, repeated applications of the previous lemma imply $\nu^{C,w}_n({{\rm m}athcal A}_n) \le \nu_n({{\rm m}athcal A}_n)$. By definition, $\nu^{C,w}_n({{\rm m}athcal A}_n) = \nu^{D,w}_n({{\rm m}athcal A})$ and $\nu_n({{\rm m}athcal A}_n) = \tilde{\nu}_n({{\rm m}athcal A})$. Thus,
$$ \nu^{D,w}_n({{\rm m}athcal A}) \le \tilde{\nu}_n({{\rm m}athcal A}).$$
Let $E(i_n)$ denote the set of edges in the ball $B(i_n)$. We consider $2^{E(i_n)}$, the set of all subsets of $E(i_n)$, to be included in $2^E$, the set of all subsets of $E$, in the obvious way. By definition of the Wired Spanning Forest, the projection of $\nu^{D,w}_n$ to $2^{E(i_n)} \subset 2^E$ converges to $\nu_{WSF}$ in the weak* topology. So if $\tilde{\nu}_\infty$ is a weak* limit point of $\{\tilde{\nu}_n\}_{n=1}^\infty$ then we have
$$\nu_{WSF}( {{\rm m}athcal A}) \le \tilde{\nu}_\infty( {{\rm m}athcal A})$$
for every increasing ${{\rm m}athcal A} \subset 2^E$ which depends on only a finite number of edges. This means that, for any finite subset $F \subset E$, the projection of $\nu_{WSF}$ to $2^F$, denoted $\nu_{WSF}|2^F$, is stochastically dominated by $\tilde{\nu}_\infty|2^F$. By Strassen's theorem \cite{St65}, there exists a probability measure $J_F$ on
$$\{ (x,y) \in 2^F \times 2^F :~ x \subset y\}$$
with marginals $\nu_{WSF}|{2^F}$ and $\tilde{\nu}_\infty|{2^F}$ respectively. By taking a weak* limit point of $\{J_F\}_{F \subset E}$ as $F$ increases to $E$, we obtain the existence of a Borel probability measure $J$ on
$$\{ (x,y) \in 2^E \times 2^E :~ x \subset y\}$$
with marginals $\nu_{WSF}$ and $\tilde{\nu}_\infty$ respectively.
Observe that the average degree of a vertex in the WSF is 2. To put it more formally, for every $g\in \Gamma$, let $\deg_g:2^E \to {\mathbb Z}$ be the map such that $\deg_g(x)$ equals the number of edges in $x$ adjacent to $g$. By \cite[Theorem 6.4]{BLPS01}, $\int \deg_g(x)~d\nu_{WSF}(x)=2$. Also $\int \deg_g(x)~d\tilde{\nu}_\infty(x)=2$. This can be seen as follows. Because $\tilde{\nu}_n$ is $\Gamma$-invariant, it follows that $\int \deg_g(x)~d\tilde{\nu}_{n}(x)$ is just the average degree of a vertex in a uniformly random spanning tree of $C_n^f$. However, each such tree has $[\Gamma:\Gamma_n]$ vertices and $[\Gamma:\Gamma_n]-1$ edges and therefore, the average degree is $2([\Gamma:\Gamma_n]-1)[\Gamma:\Gamma_n]^{-1}$ which converges to $2$ as $n\to\infty$.
Because $\int \deg_g(x)~d\nu_{WSF}(x)=\int \deg_g(x)~d\tilde{\nu}_\infty(x)$ for every $g\in \Gamma$, it follows that $J$ is supported on $\{(x,x):~x \in 2^E\}$. Thus $\tilde{\nu}_\infty = \nu_{WSF}$ as claimed.
\end{proof}
For $x \in 2^E$, let $x_1$ denote the restriction of $x$ to the set of all edges containing the identity element. Let ${\rm h}o$ be the continuous pseudo-metric on $2^E$ defined by ${\rm h}o(x,y) = 1$ if $x_1\ne y_1$ and ${\rm h}o(x,y)=0$ otherwise. This pseudo-metric is dynamically generating.
For $x\in 2^{E_n}$, let $\phi_x: \Gamma/\Gamma_n \to 2^E$ be the map $\phi_x(g\Gamma_n) = \widetilde{gx}$. Let $W \subset \Gamma, L \subset C(2^E)$ be non-empty finite sets and $\delta>0$. Let ${\rm BAD}(W,L,\delta,\Gamma_n)$ be the set of all $x\in 2^{E_n}$ such that $\phi_x \notin Map_{\nu_{WSF}}(W,L,\delta,\Gamma_n)$.
\begin{lemma}\label{lem:BAD2}
$$\lim_{n\to\infty} \nu_n({\rm BAD}(W,L,\delta,\Gamma_n)) = 0.$$
\end{lemma}
\begin{proof}
The proof is similar to the proof of Lemma {\rm re}f{lem:BAD1}. It uses Lemma {\rm re}f{lem:WSF2} above and the fact that $\Gamma {\curvearrowright} (2^E,\nu_{WSF})$ is ergodic by \cite[Corollary 8.2]{BLPS01}.
\end{proof}
\begin{lemma}\label{lem:lower2}
$h_{\Sigma,\nu_{WSF}}(2^E,\Gamma) \ge \limsup_{n\to \infty} [\Gamma:\Gamma_n]^{-1} \log \tau(C_n^f).$
\end{lemma}
\begin{proof}
Let $W \subset \Gamma, L \subset C(2^E)$ be non-empty finite sets and $\delta>0$. Denote by $Y_n$ the set of spanning trees in $C_n^f$ not contained in
${\rm BAD}(W,L,\delta,\Gamma_n)$.
By the previous lemma, $\lim_{n\to \infty} |Y_n|^{-1} \tau(C_n^f) = 1$.
By definition of ${\rm h}o_\infty$, if $x\ne y \in Y_n$ then ${\rm h}o_\infty(\phi_x,\phi_y)= 1$. Therefore, if $0<\epsilon<1$ then $\{\phi_y:~y\in Y_n\}$ is $\epsilon$-separated with respect to ${\rm h}o_\infty$ which implies
$$ N_\epsilon( Map_{\nu_{WSF}}(W,L,\delta,\Gamma_n), {\rm h}o_\infty) \ge |Y_n|.$$
Because $\lim_{n\to \infty} |Y_n|^{-1}\tau(C_n^f) = 1$, this implies
$$\limsup_{n\to \infty} [\Gamma:\Gamma_n]^{-1} \log \tau(C_n^f) \le h_{\Sigma,\nu_{WSF}}(2^E,\Gamma).$$
\end{proof}
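The spanning tree count $\tau(C_n^f)$ appearing in the lemma can be computed exactly by the matrix-tree theorem: $\tau$ equals any cofactor of the graph Laplacian. A sketch with exact rational arithmetic, assuming the multigraph is given as a matrix of edge multiplicities (the function name and input format are our own; loops lie in no spanning tree and are ignored):

```python
from fractions import Fraction

def spanning_tree_count(mult):
    """Matrix-tree theorem: tau(G) is the determinant of the Laplacian
    with one row and column deleted.  mult[i][j] = number of edges
    between i and j; diagonal entries (loops) are ignored."""
    n = len(mult)
    deg = [sum(mult[i][j] for j in range(n) if j != i) for i in range(n)]
    # reduced Laplacian: delete the last row and column
    M = [[Fraction(deg[i]) if i == j else Fraction(-mult[i][j])
          for j in range(n - 1)] for i in range(n - 1)]
    det = Fraction(1)
    for c in range(n - 1):  # exact Gaussian elimination over the rationals
        piv = next((r for r in range(c, n - 1) if M[r][c] != 0), None)
        if piv is None:
            return 0
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            det = -det
        det *= M[c][c]
        for r in range(c + 1, n - 1):
            ratio = M[r][c] / M[c][c]
            for j in range(c, n - 1):
                M[r][j] -= ratio * M[c][j]
    return int(det)
```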
\subsection{A topological model}
Recall the definition of ${{\rm m}athcal F}={{\rm m}athcal F}_f$ from the introduction. It is a closed $\Gamma$-invariant subset of $S_*^\Gamma$.
We refer the reader to \cite[Chapter I.8]{BH} for background about the end space of a topological space.
\begin{lemma}\label{lem:model}
If $\Gamma$ is not virtually ${{\rm m}athbb Z}$ then $h_{\Sigma,\nu_{WSF}}(2^E,\Gamma) \le h_\Sigma({{\rm m}athcal F},\Gamma)$.
\end{lemma}
\begin{proof}
Because $\Gamma$ is not virtually cyclic, \cite[Theorem 10.1]{BLPS01} implies that for $\nu_{WSF}$-a.e. $x \in 2^E$, every component of $x$ is a 1-ended tree. Therefore, given such an $x$, for every $g\in \Gamma$ there is a unique edge $s_*\in S_*$ such that $gs_* \in x$ and if $C(x,g)$ is the connected component of $x$ containing $g$ then $C(x,g) \setminus \{gs_*\}$ has two components: a finite one containing $g$ and an infinite one containing $gs \in \Gamma$ (where $s=p(s_*) \in S$). Informally, $gs \in \Gamma$ is closer to the point at infinity of $C(x,g)$ than $g$ is. Let $\Phi(x)\in S_*^\Gamma$ be defined by $\Phi(x)_g=s_*$. Also let $\nu_{{{\rm m}athcal F}} = (\Phi_*)\nu_{WSF}$. Because $\nu_{WSF}$-a.e. $x\in 2^E$ is such that every component of $x$ is a $1$-ended tree, it follows that $\nu_{{\rm m}athcal F}$ is supported on ${{\rm m}athcal F}$. The random oriented subgraph with law $\nu_{{\rm m}athcal F}$ is called the {\em Oriented Wired Spanning Forest} (OWSF) in \cite{BLPS01}.
Note that $\Phi$ induces a measure-conjugacy from the action $\Gamma {\curvearrowright} (2^E, \nu_{WSF})$ to the action $\Gamma {\curvearrowright} ({{\rm m}athcal F},\nu_{{\rm m}athcal F})$. Thus $h_{\Sigma,\nu_{WSF}}(2^E,\Gamma) =h_{\Sigma,\nu_{{{\rm m}athcal F}}}({{\rm m}athcal F},\Gamma)$. Theorem {\rm re}f{thm:entropy} now implies the lemma.
\end{proof}
\begin{lemma}\label{lem:model2}
If $\Gamma$ is virtually ${{\rm m}athbb Z}$ then $h_{\Sigma,\nu_{WSF}}(2^E,\Gamma) \le h_\Sigma({{\rm m}athcal F},\Gamma)$.
\end{lemma}
\begin{proof}
Let ${\rm E}nds(\Gamma)$ denote the space of ends of $C(\Gamma,f)$. Because $\Gamma$ is 2-ended, $|{\rm E}nds(\Gamma)|=2$. Let $x\in 2^E$ be connected and denote by ${\rm E}nds(x)$ the space of ends of $x$. Endow each edge in $C(\Gamma, f)$ with length $1$. Note that for any $g\in \Gamma$ and $r>0$ there exists $r'>0$ such that
if $g'\in \Gamma$ has geodesic distance at least $r'$ from $g$ in $x$, then $g'$ has geodesic distance at least $r$ from $g$ in $C(\Gamma, f)$.
Thus the argument in the proof of \cite[Proposition I.8.29]{BH} shows that the inclusion map of $x$ into $C(\Gamma,f)$ induces a map $\phi_x:{\rm E}nds(x) \to {\rm E}nds(\Gamma)$. We claim that this is a surjection.
Let $K \subset \Gamma$ be a finite set such that $C(\Gamma,f)\setminus K$ has two infinite components ${{\rm m}athcal C}_0,{{\rm m}athcal C}_1$ corresponding to the two ends $\eta_0,\eta_1$ of $C(\Gamma,f)$.
For each $i=0, 1$, define a subgraph $x|{{\rm m}athcal C}_i$ of $C(\Gamma, f)$ as follows: it has the same vertices as ${{\rm m}athcal C}_i$ does, and an edge $e$ in $E$ lies in $x|{{\rm m}athcal C}_i$ exactly when $e$ is in both $x$ and ${{\rm m}athcal C}_i$.
Because $x$ is connected, each component of $x|{{{\rm m}athcal C}_i}$ contains an element of $KS$. Since $KS$ is finite, this implies that at least one of the components of $x|{{\rm m}athcal C}_i$ must be infinite. Then any proper ray in this infinite component of $x|{{\rm m}athcal C}_i$ gives rise to an end $\omega_i$ of $x$,
and also gives rise to an end of $C(\Gamma, f)$, which must be $\eta_i$.
It follows that $\phi_x(\omega_i)=\eta_i$. Because $i$ is arbitrary, $\phi_x$ is surjective as claimed.
By the claim, if $x \in 2^E$ is connected and 2-ended, we may identify ${\rm E}nds(x)$ with ${\rm E}nds(\Gamma)$ via the map $\phi_x$.
Given $(x,\eta) \in 2^E \times {\rm E}nds(\Gamma)$ with the property that $x$ is a 2-ended tree, we define $\Phi(x,\eta) \in {{\rm m}athcal F}$ as follows.
For each $g \in \Gamma$, let $s_*\in S_*$ be the unique edge so that $x \setminus \{gs_*\}$
has two components: one containing $g$ (which is either finite or infinite with an end not equal to $\eta$),
the other containing $gp(s_*)$ and having an end equal to $\eta$.
Informally, $gp(s_*) \in \Gamma$ is ``closer'' to $\eta$ than $g$ is.
Let $\Phi(x,\eta)\in S_*^\Gamma$ be defined by $\Phi(x,\eta)_g=s_*$. Clearly $\Phi(x, \eta)\in {{\rm m}athcal F}$.
Let $\zeta$ be the uniform probability measure on ${\rm E}nds(\Gamma)$. By \cite[Theorems 10.1 and 10.4]{BLPS01}, the WSF on $C(\Gamma,f)$ is a.s. a 2-ended tree. So $\nu_{{{\rm m}athcal F}} = (\Phi_*)(\nu_{WSF} \times \zeta)$ is well-defined.
The action of $\Gamma$ on $C(\Gamma,f)$ naturally extends to ${\rm E}nds(\Gamma)$. Note that $\Phi$ induces a measure-conjugacy from the action $\Gamma {\curvearrowright} (2^E \times {\rm E}nds(\Gamma), \nu_{WSF} \times \zeta)$ to the action $\Gamma {\curvearrowright} ({{\rm m}athcal F},\nu_{{\rm m}athcal F})$. By Theorem {\rm re}f{thm:entropy},
$$h_{\Sigma,\nu_{WSF} \times \zeta}(2^E \times {\rm E}nds(\Gamma),\Gamma) =h_{\Sigma,\nu_{{{\rm m}athcal F}}}({{\rm m}athcal F},\Gamma) \le h_\Sigma({{\rm m}athcal F},\Gamma).$$
Because $\Gamma$ is virtually ${\mathbb Z}$, it is amenable \cite[Theorem G.2.1 and Proposition G.2.2]{BHV}. Thus by \cite[Theorem 1.2]{Bo12} \cite[Theorem 6.7]{KL11b}, $h_{\Sigma,\nu_{WSF} \times \zeta}(2^E \times {\rm E}nds(\Gamma),\Gamma)$ is the classical entropy of the action, denoted by $h_{\nu_{WSF} \times \zeta}(2^E \times {\rm E}nds(\Gamma),\Gamma)$. It is well-known that classical entropy is additive under direct products. Thus,
\begin{eqnarray*}
h_{\nu_{WSF} \times \zeta}(2^E \times {\rm E}nds(\Gamma),\Gamma) &=& h_{\nu_{WSF}}(2^E,\Gamma) + h_{\zeta}({\rm E}nds(\Gamma),\Gamma)= h_{\nu_{WSF}}(2^E,\Gamma).
\end{eqnarray*}
The last equality holds because ${\rm E}nds(\Gamma)$ is a finite set. By \cite[Theorem 1.2]{Bo12} \cite[Theorem 6.7]{KL11b} again, $h_{\nu_{WSF}}(2^E,\Gamma) = h_{\Sigma,\nu_{WSF}}(2^E,\Gamma)$. Thus
$$h_{\Sigma,\nu_{WSF}}(2^E,\Gamma)=h_{\Sigma,\nu_{WSF} \times \zeta}(2^E \times {\rm E}nds(\Gamma),\Gamma) \le h_\Sigma({{\rm m}athcal F},\Gamma).$$
\end{proof}
\subsection{The upper bound}
Given $s_* \in S_*$, let ${\vec{s}}_*$ denote the {\em oriented} edge from $e$ to $p(s_*)$. For $x,y \in {{\rm m}athcal F}$, let ${\rm h}o^{{\rm m}athcal F}(x,y) = 1$ if $x_e \ne y_e$ and ${\rm h}o^{{\rm m}athcal F}(x,y)=0$ otherwise. This is a dynamically generating continuous pseudo-metric on ${{\rm m}athcal F}$.
For $\phi:\Gamma/\Gamma_n \to {{\rm m}athcal F}$, let $E(\phi)$ be the set of all edges of $C_n^f$ of the form $\pi_n(gs_*)$ where $\phi(g^{-1}\Gamma_n)_e=s_*$. Let ${\rm BAD}(\phi)\subset E(\phi)$ be those edges $\pi_n(gs_*)$ where $\phi(g^{-1}\Gamma_n)_e=s_*$ and $\phi( (gs)^{-1}\Gamma_n)_e = s_*^{-1}$ (where $s=p(s_*)$). Let ${\mathcal{G}}(\phi) = E(\phi) \setminus {\rm BAD}(\phi)$. Also let $\vec{{\mathcal{G}}}(\phi)$ be the set of all {\em oriented edges} of the form $\pi_n(g{\vec{s}}_*)$ where $\phi(g^{-1}\Gamma_n)_e=s_*$ and $\phi( (gs)^{-1}\Gamma_n)_e \neq s^{-1}_*$ (where $s=p(s_*)$, so the corresponding unoriented edge is in ${\mathcal{G}}(\phi)$). By abuse of notation, we will sometimes think of ${\mathcal{G}}(\phi)$ and ${\rm BAD}(\phi)$ as subgraphs of $C_n^f$ but not in the usual way. To be precise, we consider ${\mathcal{G}}(\phi)$ to be the smallest subgraph containing all the edges in ${\mathcal{G}}(\phi)$ (and similarly, for ${\rm BAD}(\phi)$). Thus ${\mathcal{G}}(\phi)$ and ${\rm BAD}(\phi)$ have no isolated vertices and, in general, are not spanning.
Because $\bigcap_{n=1}^\infty \bigcup_{i\ge n} \Gamma_i = \{e\}$, we may assume, without loss of generality, that for any $s_1\ne s_2 \in S \cup \{e\}$, $s_1\Gamma_n \ne s_2\Gamma_n$. An {\em oriented cycle} in $C_n^f$ is a sequence $g_0\Gamma_n,g_1\Gamma_n,\ldots, g_m \Gamma_n \in \Gamma/\Gamma_n$ such that $g_0\Gamma_n=g_m\Gamma_n$ and there exist $s_i \in S$ such that $\pi_n(g_is_i)= g_{i+1}\Gamma_n$. We consider two oriented cycles to be the same if they are equal up to a cyclic reordering of the vertices. Thus if $g_0\Gamma_n,g_1\Gamma_n,\ldots, g_m \Gamma_n$ is an oriented cycle then $g_i\Gamma_n, g_{i+1}\Gamma_n,\ldots, g_{m+i}\Gamma_n$ (indices ${\rm m}od m$) denotes the same oriented cycle. The cycle is {\em simple} if there does not exist $i,j$ with $0\le i < j<m$ such that $g_i\Gamma_n=g_j\Gamma_n$. By definition, $\pi_n(g_0s_0s_1\cdots s_{m-1}) = g_0\Gamma_n$. The cycle is {\em homotopically trivial} if $s_0s_1\cdots s_{m-1}$ is the identity element.
\begin{lemma}
Let $W \subset \Gamma$ be a symmetric finite set containing $S$ and $\phi \in Map(W,\delta,\Gamma_n)$ (where $Map(W,\delta,\Gamma_n)$ is defined with respect to the pseudo-metric ${\rm h}o^{{\rm m}athcal F}$ above). Then
\begin{enumerate}
\item $|{\rm BAD}(\phi)| \le \delta^2 |S_*|^2 [\Gamma:\Gamma_n]$ and the number of vertices in ${\rm BAD}(\phi)$ is at most $\delta^2|S| [\Gamma:\Gamma_n]$.
\item each component of ${\mathcal{G}}(\phi)$ contains at most one cycle (i.e., each component is either a tree or is homotopic to a circle);
\item if a component $c$ of ${\mathcal{G}}(\phi)$ is a tree, then there is a single vertex of $c$ incident with an edge in ${\rm BAD}(\phi)$;
\item for every integer $m>0$ there is an integer $N_m$ such that if $W \supset S^m\cup S$ and $n \ge N_m$ then the number of components of ${\mathcal{G}}(\phi)$ is at most $(\delta^2|W| + m^{-1})[\Gamma:\Gamma_n]$;
\item if $n\ge N_m$ and $W \supset S^m\cup S$ then there is a spanning tree $T_\phi$ of $C_n^f$ such that $|T_\phi \Delta {\mathcal{G}}(\phi)| \le (3\delta^2|W||S_*| + 2m^{-1})[\Gamma:\Gamma_n].$
\end{enumerate}
\end{lemma}
\begin{proof}
Let ${\rm BAD}(W,\phi)$ be the set of all vertices $g\Gamma_n \in \Gamma/\Gamma_n$ such that ${\rm h}o^{{\rm m}athcal F}(w \circ \phi(g^{-1}\Gamma_n), \phi(wg^{-1}\Gamma_n)) =1$ for some $w\in W$. Because $\phi \in Map(W,\delta,\Gamma_n)$, $|{\rm BAD}(W,\phi)| \le \delta^2 |W| [\Gamma:\Gamma_n]$.
We claim that the vertex set of ${\rm BAD}(\phi)$ is contained in ${\rm BAD}(W,\phi)$. So let $\pi_n(gs_*) \in {\rm BAD}(\phi)$ for some $g\in \Gamma$, $s_* \in S_*$. Let $s=p(s_*)$. By definition, $\phi(g^{-1}\Gamma_n)_e=s_*$ and $\phi((gs)^{-1}\Gamma_n)_e=s^{-1}_*$. Because $\phi(g^{-1}\Gamma_n) \in {{\rm m}athcal F}$, $\phi(g^{-1}\Gamma_n)_e=s_*$ implies
$$(s^{-1} \phi(g^{-1}\Gamma_n))_e=\phi(g^{-1}\Gamma_n)_s \ne s^{-1}_* =\phi(s^{-1}g^{-1}\Gamma_n)_e.$$
Thus ${\rm h}o^{{\rm m}athcal F}(s^{-1} \phi(g^{-1}\Gamma_n), \phi(s^{-1}g^{-1}\Gamma_n))=1$ which implies $\pi_n(g) \in {\rm BAD}(W,\phi)$. By writing $\pi_n(gs_*)$ as $\pi_n(gs s_*^{-1})$ the same argument yields that $\pi_n(gs) \in {\rm BAD}(W,\phi)$. So both endpoints of $\pi_n(gs_*)$ are in ${\rm BAD}(W,\phi)$. Because $\pi_n(gs_*) \in {\rm BAD}(\phi)$ is arbitrary, this implies the vertex set of ${\rm BAD}(\phi)$ is contained in ${\rm BAD}(W,\phi)$.
By choosing $W=S \subset \Gamma$ (by abuse of notation), we see that the number of vertices in ${\rm BAD}(\phi)$ is at most $\delta^2|S|[\Gamma:\Gamma_n]$. Since each vertex is incident to $|S_*|$ edges, $|{\rm BAD}(\phi)| \le \delta^2 |S_*|^2[\Gamma:\Gamma_n].$
Observe for every vertex $g\Gamma_n$ contained in ${\mathcal{G}}(\phi)$ either
\begin{enumerate}
\item $g\Gamma_n$ is contained in both ${\mathcal{G}}(\phi)$ and ${\rm BAD}(\phi)$ and there are no oriented edges of $\vec{{\mathcal{G}}}(\phi)$ with tail $g\Gamma_n$, or
\item there is exactly one oriented edge $\pi_n(g{\vec{s}}_*) \in \vec{{\mathcal{G}}}(\phi)$ with tail $g\Gamma_n$.
\end{enumerate}
For every vertex $g_0\Gamma_n$ of ${\mathcal{G}}(\phi)$, let $H(g_0\Gamma_n)$ be the set of all vertices ``ahead'' of $g_0\Gamma_n$. To be precise, this consists of all vertices $g_k\Gamma_n$ such that there exist oriented edges ${\mathfrak e}_0,{\mathfrak e}_1,\ldots, {\mathfrak e}_{k-1} \in \vec{{\mathcal{G}}}(\phi)$ with ${\mathfrak e}_i=(g_i\Gamma_n,g_{i+1}\Gamma_n)$. Then two vertices $g\Gamma_n,g'\Gamma_n$ are in the same component of ${\mathcal{G}}(\phi)$ if and only if $H(g\Gamma_n) \cap H(g'\Gamma_n) \ne \emptyset$ (one direction is obvious, the other can be shown by induction on the distance between $g\Gamma_n$ and $g'\Gamma_n$ in the component of ${\mathcal{G}}(\phi)$ containing both). Therefore, if $c$ is the collection of vertices in a connected component of ${\mathcal{G}}(\phi)$ then
$$\bigcap_{g\Gamma_n \in c} H(g\Gamma_n)$$
is either a single vertex (contained in ${\rm BAD}(\phi)$) or a simple cycle. This implies items (2) and (3) in the statement of the lemma (since any cycle must be contained in $\bigcap_{g\Gamma_n \in c} H(g\Gamma_n)$).
Because $\bigcap_{n\in {\mathbb N}}\bigcup_{i\ge n}\Gamma_i=\{e\}$, there is an $N_m$ such that $n \ge N_m$ implies every homotopically nontrivial cycle in $C_n^f$ has length $> m$. Let us assume now that $W \supset S^m\cup S$ and $n\ge N_m$. We need to estimate the number of homotopically trivial oriented simple cycles of length $\le m$ in $\vec{{\mathcal{G}}}(\phi)$.
So suppose that $g_0\Gamma_n,g_1\Gamma_n,\ldots, g_k \Gamma_n = g_0\Gamma_n \in \Gamma/\Gamma_n$ is an oriented simple cycle in $\vec{{\mathcal{G}}}(\phi)$ and $k \le m$. By definition, $(g_i\Gamma_n,g_{i+1}\Gamma_n) \in \vec{{\mathcal{G}}}(\phi)$ for all $i$ (indices mod $k$). Thus if $s_{*,i} = \phi(g_i^{-1}\Gamma_n)_e$, $p(s_{*,i})=s_i$ and this cycle is homotopically trivial then $s_0s_1s_2\cdots s_{k-1}$ is the identity element.
By definition, ${\mathcal F}$ does not contain any simple cycles. Therefore, there is some $i \le k-1$ such that
$$((s_0\cdots s_i)^{-1}\phi(g_0^{-1}\Gamma_n))_e = \phi(g_0^{-1}\Gamma_n)_{s_0s_1\cdots s_i} \ne \phi( (g_0s_0s_1\cdots s_i)^{-1}\Gamma_n)_e.$$
Because $W \supset S^m$, $g_0\Gamma_n \in {\rm BAD}(W,\phi)$. Since $|{\rm BAD}(W,\phi)| \le \delta^2 |W| [\Gamma:\Gamma_n]$, this implies that the number of homotopically trivial simple cycles in ${\mathcal{G}}(\phi)$ of length at most $m$ is at most $\delta^2 |W| [\Gamma:\Gamma_n]$.
Since each component of ${\mathcal{G}}(\phi)$ either contains a vertex of ${\rm BAD}(W,\phi)$ or contains a simple cycle of length $>m$, it follows that the number of components is at most $(\delta^2|W| + m^{-1})[\Gamma:\Gamma_n]$.
Let $L \subset {\mathcal{G}}(\phi)$ be a set of edges such that each edge ${\mathfrak e} \in L$ is contained in a simple cycle of ${\mathcal{G}}(\phi)$ and no two distinct edges ${\mathfrak e}_1, {\mathfrak e}_2 \in L$ are contained in the same simple cycle. So $|L| \le (\delta^2|W| + m^{-1})[\Gamma:\Gamma_n]$ and ${\mathcal{G}}(\phi) \setminus L$ is a forest with at most $(\delta^2|W| + m^{-1})[\Gamma:\Gamma_n]$ connected components.
Note that ${\mathcal{G}}(\phi) \setminus L$ contains every vertex of ${\mathcal{G}}(\phi)$, which has at least $(1-\delta^2|S|)[\Gamma:\Gamma_n]$ vertices (because the number of vertices in ${\rm BAD}(\phi)$ is at most $\delta^2|S|[\Gamma:\Gamma_n]$).
An exercise shows that if ${\mathcal F}_n \subset E_n$ is any forest contained in $C_n^f$ with at most $c({\mathcal F}_n)$ components and at least $v({\mathcal F}_n)$ vertices then there is a spanning tree $T_n \subset E_n$ such that ${\mathcal F}_n \subset T_n$ and $|T_n \setminus {\mathcal F}_n| \le c({\mathcal F}_n)+ [\Gamma:\Gamma_n] - v({\mathcal F}_n)$. In particular, this implies the last statement of the lemma.
\end{proof}
\begin{lemma}\label{lem:upper2}
$$h_\Sigma({\mathcal F},\Gamma) \le \limsup_n [\Gamma:\Gamma_n]^{-1} \log \tau(C_n^f).$$
\end{lemma}
\begin{proof}
Let $1/2>\epsilon>0$. Let $m$ be a positive integer, $W \subset \Gamma$ be a finite set with $W \supset S^m\cup S$ and $\delta>0$. Let $n\ge N_m$ (where $N_m$ is as in the previous lemma). We assume $m$ is large enough and $\delta$ is small enough so that $\epsilon > 6\delta^2|W||S_*|^2 + 4m^{-1}$. By Stirling's Formula, there is a constant $C>1$ such that for every $k$ with $0\le k \le \epsilon [\Gamma:\Gamma_n]$,
$${ [\Gamma:\Gamma_n] \choose k } \le C \exp(H(\epsilon ) [\Gamma:\Gamma_n]), \quad { [\Gamma:\Gamma_n]|S_*|/2 \choose k } \le C \exp(H(\epsilon ) [\Gamma:\Gamma_n]|S_*|/2),$$
where $H(\epsilon) := -\epsilon \log(\epsilon) - (1-\epsilon)\log(1-\epsilon)$.
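As a numerical sanity check on these binomial estimates (a sketch only: it uses the standard bound ${n \choose k} \le \exp(nH(k/n))$, absorbing the constant $C$ above; here $n$ plays the role of $[\Gamma:\Gamma_n]$):

```python
import math

def H(eps):
    # binary entropy with natural logarithm, as defined above
    return -eps * math.log(eps) - (1 - eps) * math.log(1 - eps)

# Standard bound: C(n, k) <= exp(n * H(k/n)), and H is increasing on (0, 1/2],
# so C(n, k) <= exp(n * H(eps)) whenever k <= eps * n with eps < 1/2.
n, eps = 200, 0.3
for k in range(int(eps * n) + 1):
    assert math.comb(n, k) <= math.exp(n * H(eps))
print("binomial bound verified for n =", n)
```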
If $\phi,\psi \in Map(W,\delta,\Gamma_n)$ are such that $\vec{{\mathcal{G}}}(\phi) = \vec{{\mathcal{G}}}(\psi)$ and ${\rm BAD}(\phi)={\rm BAD}(\psi)$, then $\rho^{\mathcal F}(\phi(g\Gamma_n),\psi(g\Gamma_n))=0$ for all $g\in \Gamma$. On the other hand, $|{\rm BAD}(\phi)| \le \delta^2 |S_*|^2 [\Gamma:\Gamma_n] \le \epsilon [\Gamma:\Gamma_n]$. Therefore,
\begin{eqnarray*}
&&N_0(Map(W,\delta,\Gamma_n), \rho^{\mathcal F}_2) \\
&\le& (\epsilon [\Gamma: \Gamma_n]+1) C \exp(H(\epsilon) [\Gamma:\Gamma_n]|S_*|/2) |\{\vec{{\mathcal{G}}}(\phi):~ \phi \in Map(W,\delta,\Gamma_n)\}|.
\end{eqnarray*}
Now suppose that $\phi, \psi \in Map(W,\delta,\Gamma_n)$ are such that ${\mathcal{G}}(\phi)={\mathcal{G}}(\psi)$ and ${\rm Vert}({\rm BAD}(\phi))={\rm Vert}({\rm BAD}(\psi))$, where
${\rm Vert}({\rm BAD}(\phi))$ and ${\rm Vert}({\rm BAD}(\psi))$ denote the vertex sets of ${\rm BAD}(\phi)$ and ${\rm BAD}(\psi)$ respectively. Note that if ${\mathfrak e} \in {\mathcal{G}}(\phi)$ is not contained in a simple cycle then the orientation of ${\mathfrak e}$ in $\vec{{\mathcal{G}}}(\phi)$ is the same as the orientation of ${\mathfrak e}$ in $\vec{{\mathcal{G}}}(\psi)$. This is because either ${\mathfrak e}$ is contained in a component which contains a simple cycle (in which case, ${\mathfrak e}$ must be oriented towards the simple cycle), or ${\mathfrak e}$ is contained in a component which contains a vertex of ${\rm BAD}(\phi)$ (in which case ${\mathfrak e}$ must be oriented towards that vertex).
On the other hand, for every simple cycle in ${\mathcal{G}}(\phi)$, there are two possible orientations it can have in $\vec{{\mathcal{G}}}(\psi)$. By the previous lemma, there are at most $(\delta^2|W| + m^{-1})[\Gamma:\Gamma_n] \le \epsilon[\Gamma:\Gamma_n]$ simple cycles. Therefore,
\begin{eqnarray*}
&&|\{{\vec {\mathcal{G}}}(\phi):~ \phi \in Map(W,\delta,\Gamma_n)\}| \\
&\le& 2^{\epsilon[\Gamma:\Gamma_n]} |\{({\mathcal{G}}(\phi), {\rm Vert}({\rm BAD}(\phi))):~ \phi \in Map(W,\delta,\Gamma_n)\}|.
\end{eqnarray*}
Because $|{\rm Vert}({\rm BAD}(\phi))| \le \delta^2 |S|[\Gamma:\Gamma_n] \le \epsilon [\Gamma:\Gamma_n]$, it follows that,
\begin{eqnarray*}
&&|\{({\mathcal{G}}(\phi), {\rm Vert}({\rm BAD}(\phi))):~ \phi \in Map(W,\delta,\Gamma_n)\}| \\
&\le& (\epsilon[\Gamma:\Gamma_n]+1)C\exp( H(\epsilon)[\Gamma:\Gamma_n]) |\{ {\mathcal{G}}(\phi) :~ \phi \in Map(W,\delta,\Gamma_n)\}|.
\end{eqnarray*}
Now suppose that $\phi, \psi \in Map(W,\delta,\Gamma_n)$ are such that $T_\phi = T_\psi$ (where $T_\phi, T_\psi$ is a choice of spanning tree as in the previous lemma). Then $|{\mathcal{G}}(\phi) \Delta {\mathcal{G}}(\psi)| \le (6\delta^2|W||S_*| + 4m^{-1})[\Gamma:\Gamma_n] \le \epsilon [\Gamma:\Gamma_n]$. Therefore,
\begin{eqnarray*}
&&|\{ {\mathcal{G}}(\phi) :~ \phi \in Map(W,\delta,\Gamma_n)\}|\\
&\le& (\epsilon [\Gamma: \Gamma_n]+1) C\exp( H(\epsilon)[\Gamma:\Gamma_n] |S_*|/2) |\{ T_\phi :~ \phi \in Map(W,\delta,\Gamma_n)\}|.
\end{eqnarray*}
We now have
$$N_0(Map(W,\delta,\Gamma_n),\rho^{\mathcal F}_2) \le (\epsilon [\Gamma: \Gamma_n]+1)^3 C^3 \exp\left( (\epsilon \log 2 + 2H(\epsilon) |S_*|) [\Gamma:\Gamma_n]\right) \tau(C_n^f).$$
Because $1/2>\epsilon>0$ is arbitrary and $H(\epsilon) \searrow 0$ as $\epsilon \searrow 0$, this implies the lemma.
\end{proof}
We are ready to prove Theorem~\ref{thm:WSF}.
\begin{proof}[Proof of Theorem \ref{thm:WSF}]
By Lemmas \ref{lem:lower2}, \ref{lem:model}, \ref{lem:model2} and \ref{lem:upper2},
\begin{eqnarray*}
\limsup_{n\to \infty} [\Gamma:\Gamma_n]^{-1} \log \tau(C_n^f) \le h_{\Sigma,\nu_{WSF}}(2^E,\Gamma) \le h_{\Sigma}({\mathcal F},\Gamma)\le \limsup_{n\to \infty} [\Gamma:\Gamma_n]^{-1} \log \tau(C_n^f).
\end{eqnarray*}
By Theorem 3.2 of \cite{Lyons05}, $ \lim_{n\to \infty} [\Gamma:\Gamma_n]^{-1} \log \tau(C_n^f) = {\bf h}(C(\Gamma,f))$. By Proposition~\ref{P-tree entropy}, $ {\bf h}(C(\Gamma,f)) = \log \det_{{\mathcal N}\Gamma} f$ so this proves the theorem.
\end{proof}
\begin{question}
There is another natural topological model for uniform spanning forests. Namely, let ${\mathcal F}_* \subset 2^E$ be the set of all subgraphs which are essential spanning forests. The word ``essential'' here means that every connected component of any $x\in {\mathcal F}_*$ is infinite. What is the topological sofic entropy of $\Gamma {\curvearrowright} {\mathcal F}_*$? Because $\nu_{WSF}$ can naturally be realized as an invariant measure on ${\mathcal F}_*$, the variational principle implies the topological sofic entropy of ${\mathcal F}_*$ is at least the measure-theoretic sofic entropy of $\Gamma {\curvearrowright} (2^E, \nu_{WSF})$.
\end{question}
\appendix
\section{Non-invertibility}
\begin{theorem}
Let $\Gamma$ be a countable group and $f\in {\mathbb R} \Gamma$ be well-balanced. Then $f$ is not invertible in the universal group $C^*$-algebra $C^*(\Gamma)$ (see \cite[Section 2.5]{BO}), or in $\ell^1(\Gamma)$. Moreover, $f$ is invertible in the group von Neumann algebra ${\mathcal N}\Gamma$ if and only if $\Gamma$ is non-amenable.
\end{theorem}
\begin{proof}
We note first that $f$ is not invertible in
$C^*(\Gamma)$. Indeed, the trivial representation of $\Gamma$ gives rise to a unital $C^*$-algebra homomorphism $\varphi: C^*(\Gamma)\rightarrow {\mathbb C}$ sending every $s\in \Gamma$ to $1$. Since $\varphi(f)=0$ is not invertible in ${\mathbb C}$, $f$ is not invertible in $C^*(\Gamma)$.
Next we note that $f$ is not invertible in $\ell^1(\Gamma)$. This follows from the natural unital algebra embedding $\ell^1(\Gamma)\hookrightarrow C^*(\Gamma)$ being the identity map on ${\mathbb R}\Gamma$ and the non-invertibility of $f$ in $C^*(\Gamma)$.
Finally we note that $f$ is invertible in
${\mathcal N}\Gamma$ if and only if $\Gamma$ is non-amenable. Suppose that $\Gamma$ is amenable. Let $\{F_n\}_{n\in {\mathbb N}}$ be a right F{\o}lner sequence of $\Gamma$. Then $|F_n|^{-1/2}\chi_{F_n}$ is a unit vector in $\ell^2(\Gamma, {\mathbb C})$ for each $n\in {\mathbb N}$, where $\chi_{F_n}$ denotes the characteristic function of $F_n$ in $\Gamma$,
and $\lim_{n\to \infty}\||F_n|^{-1/2}\chi_{F_n}\cdot f\|_2=0$. Thus $f$ is not invertible in ${\mathcal N}\Gamma$. Now suppose that $\Gamma$ is non-amenable.
Set $\mu=-(f-f_e)/f_e\in {\mathbb R}\Gamma$ and denote by $\|\mu\|$ the operator norm of $\mu$ on $\ell^2(\Gamma, {\mathbb C})$, which is also the norm of $\mu$ in ${\mathcal N}\Gamma$.
Then $\mu$ is a symmetric probability measure on $\Gamma$ and the support of $\mu$ generates $\Gamma$.
Since $\Gamma$ is non-amenable,
Kesten's theorem \cite[Theorem 4.20.ii]{Paterson} implies that $\|\mu\|<1$. Therefore $f=f_e(1-\mu)$ is invertible in the Banach algebra ${\mathcal N}\Gamma$.
\end{proof}
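The amenable direction of the last argument can be illustrated concretely. A minimal numpy sketch for $\Gamma = {\mathbb Z}$, taking the well-balanced element $f = 2\delta_0 - \delta_1 - \delta_{-1}$ and the F{\o}lner sets $F_n = \{0,\ldots,n-1\}$ (an illustration, not part of the proof):

```python
import numpy as np

# f = 2*delta_0 - delta_1 - delta_{-1} on Z; coefficients at -1, 0, 1
f = np.array([-1.0, 2.0, -1.0])

def folner_norm(n):
    # || |F_n|^{-1/2} chi_{F_n} * f ||_2 for F_n = {0, ..., n-1}
    v = np.full(n, n ** -0.5)              # unit vector in l^2(Z)
    return np.linalg.norm(np.convolve(v, f))

for n in [10, 100, 1000]:
    print(n, folner_norm(n))               # equals 2/sqrt(n): only the boundary of F_n contributes
```

The norms tend to $0$, so $f$ has no bounded inverse on $\ell^2({\mathbb Z})$, in line with the theorem.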
\end{document} |
\begin{document}
\begin{frontmatter}
\title{HiQR: An efficient algorithm for high dimensional quadratic regression with penalties}
\author[a1]{Cheng Wang\corref{mycorrespondingauthor}}
\ead{[email protected]}
\author[a1]{Haozhe Chen}
\ead{[email protected]}
\author[a2]{Binyan Jiang}
\cortext[mycorrespondingauthor]{Corresponding author}
\ead{[email protected]}
\address[a1]{School of Mathematical Sciences, MOE-LSC,\\ Shanghai Jiao Tong University, Shanghai 200240, China.}
\address[a2]{Department of Applied Mathematics, \\The Hong Kong Polytechnic University,
Hung Hom, Kowloon, Hong Kong.}
\begin{abstract}
This paper investigates the efficient solution of penalized quadratic regressions in high-dimensional settings. We propose a novel and efficient algorithm for ridge-penalized quadratic regression that leverages the matrix structures of the regression with interactions. Building on this formulation, we develop an alternating direction method of multipliers (ADMM) framework for penalized quadratic regression with general penalties, including both single and hybrid penalty functions. Our approach greatly simplifies the calculations to basic matrix-based operations, making it appealing in terms of both memory storage and computational complexity.
\end{abstract}
\begin{keyword} ADMM \sep LASSO
\sep quadratic regression \sep ridge regression
\end{keyword}
\end{frontmatter}
\section{Introduction}
Quadratic regression, which extends linear regression by accounting for interactions between covariates, has found widespread applications across various disciplines. However, as the complexity of the interactions increases quadratically with the number of variables, parameter estimation becomes increasingly challenging for problems with large or even moderate dimensionality. A surge of methodologies have been developed in the past decade to tackle the high dimensionality challenge under different structural assumptions; see for example \cite{bien2013lasso, hao2014interaction, hao2016model,hao2017note, yu2019reluctant, tang2020high, lu2021} and \cite{wang2021penalized}, among others.
Given the observations $(\mathbf x_i,y_i) \in \mathbb{R}^{p}\times \mathbb{R},~i=1,\ldots,n$, we consider a general penalized quadratic regression model expressed as
\begin{align}\label{pqr}
\argmin_{\mathbf B=\mathbf B^\top, \mathbf B\in \mathbb{R}^{p \times p}} \frac{1}{2n} \sum_{i=1}^n (y_i-\mathbf x_i^\top \mathbf B \mathbf x_i)^2+f(\mathbf B),
\end{align}
where $\mathbf B=(B_{jk})_{p \times p}$ denotes a symmetric matrix of parameters, and $f(\cdot)$ is a convex penalty function. Typically, the first element of $\mathbf x_i$ is a constant 1, allowing for the capture of the intercept, linear effect, and interaction effect through $B_{1,1}$, $\{B_{1,i}, i=2,\ldots, p\}$, and $\{B_{i,j}, 2\le i\le j \le p\}$, respectively.
Without the penalty $f(\mathbf B)$, the mean squared error is:
\begin{align*}
\frac{1}{2n} \sum_{i=1}^n \left(y_i-B_{11}-\sum_{j=2}^p (B_{1j}+B_{j1})x_{ij}-\sum_{j,k=2}^p B_{jk} x_{ij}x_{ik} \right)^2.
\end{align*}
The penalty term $f(\mathbf B)$ is introduced to impose different structures on the parameter matrix $\mathbf B$ depending on the application scenario. For instance, in gene-gene interaction detection where the number of genes is typically large and the interactions related to the response are sparse, the $\ell_1$ penalty $f(\mathbf B)=\lambda \|\mathbf B\|_1$ is often used to induce sparsity in $\mathbf B$. The resulting model is called the all-pairs LASSO by \cite{bien2015convex}.
In addition to sparsity, researchers have also considered heredity, where the existence of the interaction effect $B_{j,k}$ depends on the existence of its parental linear effects $B_{1,j}, B_{1, k}$. Specifically, we have:
\begin{align*}
\mbox{strong heredity:}~B_{j,k} \neq & 0 \Rightarrow B_{1j} \neq 0 ~\mbox{and}~ B_{1k} \neq 0,\\
\mbox{weak heredity:}~B_{j,k} \neq & 0 \Rightarrow B_{1j} \neq 0 ~\mbox{or}~ B_{1k} \neq 0.
\end{align*}
Several penalty functions are proposed in the literature to enforce these heredity structures, including those proposed by \cite{yuan2009structured}, \cite{radchenko2010variable}, \cite{choi2010variable}, \cite{bien2013lasso}, \cite{lim2015learning}, \cite{haris2016convex}, and \cite{she2017group}, among others.
In addition to sparsity and heredity, we can also introduce the nuclear norm penalty to impose a low rank structure in $\mathbf B$, and hybrid penalties to impose more than one structure. Further details will be provided in Section 3.
A naive approach to solving the penalized quadratic regression model \eqref{pqr} is to use vectorization. We define
\begin{align*}
\mathbf z_i=\mathbf x_i \otimes \mathbf x_i \in \mathbb{R}^{p^2 \times 1},
\end{align*}
where \(\otimes\) denotes the Kronecker product, and write
\begin{align*}
\mathbf b=\mathrm{vec}(\mathbf B) \in \mathbb{R}^{p^2 \times 1},
\end{align*}
where $\mathrm{vec}(\cdot)$ denotes the vectorization of a matrix. We can then obtain the following equivalent form of \eqref{pqr}:
\begin{align*}
\argmin_{ \mathbf b } \frac{1}{2n} \sum_{i=1}^n (y_i-\mathbf z_i^\top \mathbf b)^2+f(\mathbf b).
\end{align*}
Therefore, the penalized quadratic regression problem \eqref{pqr} can be reformulated as a penalized linear model with $p(p+1)/2$ features. From a theoretical perspective, we can use this formulation together with the classical theory for high-dimensional regularized $M$-estimators \citep[Chapter 9]{wainwright2019high}. Detailed theoretical analyses of the consistency of the penalized quadratic regression model can be found in \cite{zhao2016analysis} and the references therein.
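For illustration, the vectorised reformulation can be sketched in a few lines of numpy (an assumed toy setup, not from the paper; note that `np.kron` matches a row-major flattening of $\mathbf B$):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 30, 8
X = rng.standard_normal((n, p))
B_true = rng.standard_normal((p, p))
B_true = (B_true + B_true.T) / 2                  # symmetric parameter matrix
y = np.array([x @ B_true @ x for x in X])         # noiseless responses

# z_i = x_i kron x_i, so that x_i^T B x_i = z_i^T vec(B)
Z = np.stack([np.kron(x, x) for x in X])          # n x p^2 design matrix
b = B_true.reshape(-1)                            # row-major vec(B), matching np.kron
assert np.allclose(Z @ b, y)
```

Already at $p=1000$ the matrix $Z$ has $10^6$ columns, which is precisely the storage problem discussed next.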
However, from a computational perspective, many algorithms do not scale well with a large $p$, since the number of parameters scales quadratically with the dimension $p$. Moreover, storing the design matrix and computer memory can also be expensive when vectorization is applied to the interaction variables. For example, computing an all-pairs LASSO with $n=1000$ and $p=1000$ on a personal computer can cause the well-known algorithm \textit{glmnet} \citep{glmnet} to break down due to out-of-memory errors. Specifically, the feature matrix of order $10^3 \times 10^6$ has a memory size of about 8GB.
To address the computational challenges associated with high-dimensional penalized quadratic regression, several two-stage methods have been proposed in the literature \citep[e.g.,][]{hao2014interaction, fan2015innovated, kong2017interaction, hao2016model, yu2019reluctant}. These methods reduce the computational complexity via a feature selection procedure in the first stage, and have been proven to be both computationally efficient and consistent under some structural assumptions. However, in this paper, we do not consider any of these structures, and our main goal is to develop efficient algorithms for solving the general penalized quadratic regression model \eqref{pqr} directly.
Intuitively, penalized quadratic regression is different from a common linear regression with $O(p^2)$ features because the data has a specific structure for interactions. In this work, we leverage this structure in the algorithm and design an efficient framework for the general penalized quadratic regression problem. In previous work, \cite{wang2021penalized} and \cite{tang2020high} also developed efficient formulas for the matrix parameter under a factor model. However, their procedures greatly rely on the distributional assumptions and cannot be extended to general cases.
In contrast, our approach does not require any distributional assumptions and can be applied to a wide range of high-dimensional data.
In this work, we study the original optimization problem \eqref{pqr} and design the algorithm from the viewpoint of matrix forms. To the best of our knowledge, this is the first algorithm for penalized quadratic regression that does not use vectorization and avoids any matrix operation of the $n \times p^2$ feature matrix. Our contributions are summarized as follows:
\begin{enumerate}
\item For ridge regression, we obtain an efficient solution for quadratic regression with a computational complexity of $O(np^2+n^3)$.
\item
To solve the general penalized quadratic regression problem for single non-smooth penalty and hybrid penalty functions, we propose an alternating direction method of multipliers (ADMM) algorithm. The algorithm is fully formulated with matrix forms, using only $p \times p$, $n \times p$, or $n \times n$ matrices, and has explicit formulas for the solutions in each iteration.
\item
We have developed an R package for penalized quadratic regression. Compared to other existing solvers/packages, our algorithm is much more robust since we do not impose any structural assumptions such as heredity. Our algorithm is also appealing in both memory storage and computational cost, and can handle datasets with very high dimensions. This makes our package a useful tool for researchers and practitioners who need to analyze high-dimensional data using penalized quadratic regression.
\end{enumerate}
The rest of the paper is organized as follows. In Section 2, we start with ridge-penalized quadratic regression and derive an efficient closed-form formula for the solution. In Section 3, we design an efficient ADMM algorithm for both single non-smooth penalty and hybrid penalty functions. We conduct simulations in Section 4 to illustrate the proposed algorithm. The developed R package ``HiQR" is available on GitHub at \url{https://github.com/cescwang85/HiQR}.
\section{Ridge regression}
To facilitate the discussion, we introduce some notations first. For a real $p \times q$ matrix $\mathbf A = (A_{k,l})_{p \times q}$, we define:
\begin{align*}
\|\mathbf A\|_{\infty} \stackrel{\mbox{\textrm{\tiny def}}}{=} \max_{1\le k\le p,1\le l\le q}|A_{k,l}|,~ \|\mathbf A\|_{1} \stackrel{\mbox{\textrm{\tiny def}}}{=}\sum_{k=1}^p \sum_{l=1}^q|A_{k,l}|,~\|\mathbf A\|^2_{2} \stackrel{\mbox{\textrm{\tiny def}}}{=}\sum_{k=1}^p \sum_{l=1}^q|A_{k,l}|^2.
\end{align*}
Denoting the singular values of $\mathbf A$ as $\sigma_1 \geq \cdots \geq \sigma_p \geq 0$, the nuclear norm of $\mathbf A$ is defined as
\begin{align*}
\|\mathbf A\|_*=\sum_{i=1}^p \sigma_i.
\end{align*}
We first consider the ridge regression for the quadratic regression, i.e.,
\begin{align}\label{RidgeQR}
\mbox{Ridge QR:}~~\argmin_{\mathbf B=\mathbf B^\top, \mathbf B \in \mathbb{R}^{p \times p}} \frac{1}{2n} \sum_{i=1}^n (y_i-\mathbf x_i^\top \mathbf B \mathbf x_i)^2+\frac{\lambda }{2} \|\mathbf B\|_2^2,
\end{align}
where $\lambda>0$ is a tuning parameter.
Since the objective function is convex in $\mathbf B$, the solution can be obtained by solving the following equation:
\begin{align}\label{ridge_eq}
\frac{1}{n} \sum_{i=1}^n (\mathbf x_i^\top \mathbf B \mathbf x_i-y_i) \mathbf x_i \mathbf x_i ^\top +\lambda \mathbf B=\mathbf 0_{p \times p}.
\end{align}
Denote
$ \mathbf D=\frac{1}{n} \sum_{i=1}^n y_i \mathbf x_i \mathbf x_i ^\top$.
Equation \eqref{ridge_eq} can be equivalently written as:
\begin{align*}
\frac{1}{n} \sum_{i=1}^n \mathbf x_i \mathbf x_i^\top \mathbf B \mathbf x_i \mathbf x_i^\top+\lambda \mathbf B=\mathbf D.
\end{align*}
By applying vectorization to the above equation, we have:
\begin{align*}
\frac{1}{n} \sum_{i=1}^n \left\{ (\mathbf x_i \mathbf x_i^\top) \otimes (\mathbf x_i \mathbf x_i^\top) \right\} \mathrm{vec}(\mathbf B)+\lambda \cdot \mathrm{vec}(\mathbf B)=\mathrm{vec}(\mathbf D),
\end{align*}
and then the solution can be seen as:
\begin{align} \label{sol}
\mathrm{vec}(\mathbf B)= \left\{\mathbb {X} \mathbb {X} ^\top+\lambda \mathbf I_{p^2} \right\}^{-1} \mathrm{vec}(\mathbf D)=\left\{\mathbb {X} \mathbb {X} ^\top+\lambda \mathbf I_{p^2} \right\}^{-1} \mathbb {X} \mathbf Y,
\end{align}
where
\begin{align*}
\mathbb {X} =\frac{1}{\sqrt{n}} (\mathbf x_1 \otimes \mathbf x_1,\cdots,\mathbf x_n \otimes \mathbf x_n).
\end{align*}
Note that $\mathbb {X} \mathbb {X} ^\top$ is a $p^2 \times p^2$ matrix, which can lead to a high computational complexity of $O(p^6)$ for direct calculation of its inverse. Moreover, storing such a large matrix when $p$ is large is also impractical. Therefore, the naive algorithm that computes \eqref{sol} directly is usually not applicable for high-dimensional quadratic regression.
Note that the rank of $\mathbb {X}$ is $\min\{n,p^2\}$, which can be much smaller than $p^2$ when $n \ll p$. To exploit the low-rank structure of $\mathbb {X}$, we can use the Woodbury matrix identity, which allows us to compute $(\mathbb {X} \mathbb {X} ^\top + \lambda \mathbf I_{p^2})^{-1}$ more efficiently. Specifically, by applying the Woodbury identity,
we have:
\begin{align} \label{wood}
(\mathbb {X} \mathbb {X} ^\top+\lambda \mathbf I_{p^2})^{-1}=\lambda ^{-1}\mathbf I_{p^2}-\lambda ^{-1} \mathbb {X} (\lambda \mathbf I_n+\mathbb {X}^\top \mathbb {X})^{-1} \mathbb {X} ^\top.
\end{align}
The computational complexity has now been reduced to $O(n^2p^2+n^3)$, where the $n^2p^2$ term is due to matrix multiplication and $n^3$ is the complexity of the matrix inverse. The Woodbury identity has been widely used in many other algorithms, and it is sometimes referred to as the ``shortcut-trick'' for high-dimensional data (\citealp[Section 4.2.4]{boyd2011distributed}; \citealp{friedman2001elements}).
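A quick numerical check of \eqref{wood} on a toy instance (a sketch; the $p^2\times n$ matrix $\mathbb X$ is built explicitly here only for verification):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, lam = 20, 6, 0.5
X = rng.standard_normal((n, p))
# Columns of XX are x_i kron x_i / sqrt(n): the p^2 x n matrix from the text
XX = np.stack([np.kron(x, x) for x in X]).T / np.sqrt(n)

lhs = np.linalg.inv(XX @ XX.T + lam * np.eye(p * p))        # p^2 x p^2 inverse
rhs = (np.eye(p * p)
       - XX @ np.linalg.inv(lam * np.eye(n) + XX.T @ XX) @ XX.T) / lam
assert np.allclose(lhs, rhs)   # Woodbury: only an n x n inverse is needed
```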
Another efficient technique to further reduce the computational cost is to apply the singular value decomposition (SVD) to $\mathbb {X}$ \citep{haris2016convex}. Specifically, let $\mathbb {X} = \mathbf U \Lambda \mathbf V ^\top$ be the thin SVD of $\mathbb {X}$. Together with \eqref{wood}, the solution \eqref{sol} can be expressed as:
\begin{align} \label{svd}
(\mathbb {X} \mathbb {X} ^\top+\lambda \mathbf I_{p^2})^{-1}\mathbb {X} \mathbf Y=\mathbf U (\Lambda^2+\lambda \mathbf I)^{-1} \Lambda \mathbf V ^\top \mathbf Y.
\end{align}
Here, the complexity of SVD is $O(n^2p^2)$, which can significantly reduce the computational complexity compared to the naive algorithm that computes \eqref{sol} directly. However, for some large-scale problems, the reduction in computational complexity may still be insignificant.
In what follows, we will further exploit the special structure of the parameter matrix in quadratic regression and reduce the computational complexity to $O(np^2)$. Note that from \eqref{wood} and the first equation of \eqref{sol}, we have:
\[
\mathrm{vec}(\mathbf B)= \left\{
\lambda ^{-1}\mathbf I_{p^2}-\lambda ^{-1} \mathbb {X} (\lambda \mathbf I_n+\mathbb {X}^\top \mathbb {X})^{-1} \mathbb {X} ^\top
\right\} \mathrm{vec}(\mathbf D).
\]
Firstly, note that
\begin{align}\label{p1}
\mathbb {X} ^\top \mathbb {X}=&\begin{pmatrix}
\frac{1}{\sqrt{n}} (\mathbf x_1 \otimes \mathbf x_1)^\top\\
\vdots\\
\frac{1}{\sqrt{n}} (\mathbf x_n \otimes \mathbf x_n)^\top \nonumber\\
\end{pmatrix}\left(\frac{1}{\sqrt{n}} \mathbf x_1 \otimes \mathbf x_1,\cdots,\frac{1}{\sqrt{n}} \mathbf x_n \otimes \mathbf x_n\right)\\
=&\frac{1}{n} \left( (\mathbf x_i ^\top \mathbf x_j)^2 \right)_{n \times n}=n^{-1} (\mathbf X \mathbf X ^\top )\circ (\mathbf X \mathbf X ^\top ),
\end{align}
where $\circ$ is the Hadamard product and the complexity of the last equation is of order $O(n^2p)$. Secondly, note that
\begin{align}\label{p2}
\mathbb {X} ^\top \mathrm{vec}(\mathbf D)=&
\begin{pmatrix}
\frac{1}{\sqrt{n}} (\mathbf x_1 \otimes \mathbf x_1)^\top\\
\vdots\\
\frac{1}{\sqrt{n}} (\mathbf x_n \otimes \mathbf x_n)^\top\\
\end{pmatrix} \mathrm{vec}(\mathbf D)
=
\begin{pmatrix}
\frac{1}{\sqrt{n}} \mathbf x_1^\top \mathbf D \mathbf x_1\\
\vdots
\\
\frac{1}{\sqrt{n}} \mathbf x_n^\top \mathbf D \mathbf x_n\\
\end{pmatrix} \nonumber\\
=&\frac{1}{\sqrt{n}} \cdot \mbox{diag}\left( \mathbf X \mathbf D \mathbf X ^\top \right),
\end{align}
where in the last equation the complexity is also reduced to $O(np^2)$. Lastly, denoting
\begin{align*}
\mathbf w=\frac{1}{\sqrt{n}}(\lambda \mathbf I_n+\mathbb {X}^\top \mathbb {X})^{-1} \mathbb {X} ^\top \mathrm{vec}(\mathbf D) \in \mathbb{R}^{n},
\end{align*}
we have:
\begin{align}\label{p3}
\mathbb {X} (\lambda \mathbf I_n+\mathbb {X}^\top \mathbb {X})^{-1} \mathbb {X} ^\top \mathrm{vec}(\mathbf D)=&\sum_{k=1}^n w_k \mathbf x_k \otimes \mathbf x_k \nonumber \\
=&\mathrm{vec}\left( \sum_{k=1}^n w_k \mathbf x_k \mathbf x_k ^\top \right)=\mathrm{vec} \left( \mathbf X ^\top \mbox{diag}(\mathbf w) \mathbf X \right),
\end{align}
where the complexity of the last equation is also $O(np^2)$.
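The identities \eqref{p1}--\eqref{p3} are easy to verify numerically; a small sketch (with rows of $\mathbf X$ equal to $\mathbf x_i^\top$, and row-major `reshape` playing the role of $\mathrm{vec}$):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 15, 6
X = rng.standard_normal((n, p))                  # rows are x_i^T
y = rng.standard_normal(n)
D = X.T @ np.diag(y) @ X / n
XX = np.stack([np.kron(x, x) for x in X]).T / np.sqrt(n)

# (p1): the n x n Gram matrix is a Hadamard square of X X^T
assert np.allclose(XX.T @ XX, (X @ X.T) ** 2 / n)
# (p2): multiplying vec(D) costs only O(n p^2) via the diagonal of X D X^T
assert np.allclose(XX.T @ D.reshape(-1), np.diag(X @ D @ X.T) / np.sqrt(n))
# (p3): a weight vector w maps back to the p x p matrix X^T diag(w) X
w = rng.standard_normal(n)
assert np.allclose((XX @ w).reshape(p, p), X.T @ np.diag(w) @ X / np.sqrt(n))
```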
By combining equations \eqref{p1}-\eqref{p3}, we can obtain a computationally efficient form for the explicit solution of the ridge-penalized quadratic regression \eqref{RidgeQR}. We summarize the results in the following proposition.
\begin{prop} \label{prop1}
For a given tuning parameter $\lambda>0$, the solution of the ridge-penalized quadratic regression problem \eqref{RidgeQR} is given as:
\begin{align}\label{eff_sol}
\widehat{\mathbf B}=\lambda^{-1} \mathbf D-\lambda^{-1} \mathbf X ^\top \mbox{diag}\{\mathbf w\} \mathbf X,
\end{align}
where
\begin{align*}
\mathbf X=&(\mathbf x_1,\ldots,\mathbf x_n)^\top,\quad \mathbf D=\frac{1}{n} \sum_{i=1}^n y_i \mathbf x_i \mathbf x_i ^\top =\frac{1}{n}\mathbf X ^\top \mbox{diag}(y_1,\cdots,y_n) \mathbf X,\\
\mathbf w=&\left\{ \lambda \mathbf I_n+n^{-1} (\mathbf X \mathbf X ^\top )\circ (\mathbf X \mathbf X ^\top ) \right\}^{-1} \mbox{diag}\left(\frac{1}{n} \mathbf X \mathbf D \mathbf X ^\top\right).
\end{align*}
\end{prop}
The computational complexity for calculating the closed-form solution \eqref{eff_sol} is $O(np^2+n^3)$, which is much more efficient than the forms given in \eqref{sol} and \eqref{svd} under the high-dimensional setting where $n\ll p$. In addition, the memory cost of the solution is also lower because it only requires components in the form of either an $n \times n$ matrix, an $n \times p$ matrix, or a $p \times p$ matrix. In the next section, we will further extend the results obtained in this section to solve quadratic regression with other non-smooth penalties.
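Proposition~\ref{prop1} translates directly into code. A minimal numpy sketch (illustrative only, not the HiQR package itself), checked against the naive vectorised solution \eqref{sol} on a small instance:

```python
import numpy as np

def ridge_qr(X, y, lam):
    # Closed-form ridge quadratic regression; rows of X are x_i^T
    n = len(y)
    D = X.T @ (y[:, None] * X) / n                     # (1/n) sum_i y_i x_i x_i^T
    K = (X @ X.T) ** 2 / n                             # n x n matrix from (p1)
    w = np.linalg.solve(lam * np.eye(n) + K, np.diag(X @ D @ X.T) / n)
    return (D - X.T @ (w[:, None] * X)) / lam          # cost O(n p^2 + n^3)

rng = np.random.default_rng(3)
n, p, lam = 25, 7, 0.3
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

# Naive solution via the p^2-dimensional normal equations, for comparison
Z = np.stack([np.kron(x, x) for x in X])
b = np.linalg.solve(Z.T @ Z / n + lam * np.eye(p * p), Z.T @ y / n)
assert np.allclose(ridge_qr(X, y, lam), b.reshape(p, p))
```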
\section{Non-smooth penalty and beyond}
In this section, we consider the case where the penalty $f(\cdot)$ in the penalized quadratic regression \eqref{pqr} is possibly non-smooth. For example, we can consider setting $f(\mathbf B)=\lambda \|\mathbf B\|_1$ as in the all-pairs-LASSO, or $f(\mathbf B)=\lambda \|\mathbf B\|_*$ as in reduced rank regression. For high-dimensional quadratic regression, it is also attractive to introduce additional penalties to impose different structures simultaneously. For instance, we can combine the $\ell_1$ norm and the nuclear norm to get a sparse and low-rank solution, i.e., $f(\mathbf B)=\lambda_1 \|\mathbf B\|_1+\lambda_2 \|\mathbf B\|_*$. In the literature, several hybrid penalty functions are proposed for quadratic regression, and we summarize these hybrid penalties as follows.
\begin{itemize}
\item $\ell_1+\ell_2$:
\begin{align*}
f(\mathbf B)=\lambda_1 \|\mathbf B\|_1+\lambda_2 \sum \limits_{k=2}^p \|\mathbf B_{\cdot,k}\|_2+\lambda_2 \sum \limits_{k=2}^p \|\mathbf B_{k,\cdot}\|_2.
\end{align*}
See \cite{radchenko2010variable} and \cite{lim2015learning} for more details.
\item $\ell_1+\ell_\infty$:
\begin{align*}
f(\mathbf B)=\lambda_1 \|\mathbf B\|_1+\lambda_2 \sum \limits_{k=2}^p \|\mathbf B_{\cdot,k}\|_\infty+\lambda_2 \sum \limits_{k=2}^p \|\mathbf B_{k,\cdot}\|_\infty.
\end{align*}
See \cite{haris2016convex}.
\item $\ell_1+\ell_1/\ell_\infty$:
\begin{eqnarray*}
f(\mathbf B)&=&\lambda_1 \|\mathbf B\|_1+\lambda_2 \sum \limits_{k=2}^p \max\{|\mathbf B_{1,k}|, \|\mathbf B_{-1,k}\|_1\} \\
&&+\lambda_2 \sum \limits_{k=2}^p \max\{|\mathbf B_{k,1}|, \|\mathbf B_{k,-1}\|_1\}.
\end{eqnarray*}
See \cite{bien2013lasso} and \cite{haris2016convex}.
\item $\ell_1+\ell_*$:
\begin{align*}
f(\mathbf B)=\lambda_1 \|\mathbf B\|_1+\lambda_2 \|\mathbf B\|_*.
\end{align*}
See \cite{lu2021} and the references therein.
\end{itemize}
We remark that all of these penalties are formulated in a symmetric pattern, i.e., $f(\mathbf B)=f(\mathbf B^\top)$. Thus, the final solution will be a symmetric matrix.
Utilizing the efficient formulation we obtained in Proposition \ref{prop1}, we now introduce an ADMM algorithm for solving the general penalized quadratic regression problem \eqref{pqr}.
\subsection{ADMM algorithm}
Writing the squared loss function
\begin{align*}
f_0(\mathbf B)=\frac{1}{2n} \sum_{i=1}^n (y_i-\mathbf x_i ^\top \mathbf B \mathbf x_i)^2,
\end{align*}
we study the generic problem
\begin{align*}
\min f_0(\mathbf B)+f_1(\mathbf B)+\cdots+f_N(\mathbf B),
\end{align*}
where $f_k(\cdot),k=1,\ldots,N$ are penalty functions. Introducing the local variables $\mathbf B_i \in \mathbb{R}^{p \times p}$,
the problem can be equivalently rewritten as the following \textit{global consensus problem} \citep[Section 7]{boyd2011distributed}
\begin{align} \label{gen1}
\min \sum_{i=0}^N f_i(\mathbf B_i),~\mbox{subject~to~}~\mathbf B_i-\mathbf B=\mathbf{0},~i=0,1,\ldots, N.
\end{align}
The augmented Lagrangian of \eqref{gen1} is
\begin{align*}
L(\mathbf B_0,\ldots,\mathbf B_N, \mathbf B, \mathbf U_0,\ldots,\mathbf U_N)=\sum_{i=0}^N \left\{f_i(\mathbf B_i)+ \frac{\rho}{2} \|\mathbf B_i-\mathbf B+\mathbf U_i\|^2_2 \right\}.
\end{align*}
For a given solution $\mathbf B_i^{k}, i=0,\ldots, N$ in the $k$th iteration,
the $(k+1)$th iteration of the ADMM algorithm is given as follows:
\begin{itemize}
\item Step 1: $\mathbf B^{k+1}_i=\argmin_{\mathbf B_i} \left\{f_i(\mathbf B_i)+ \frac{\rho}{2} { \|\mathbf B_i-\mathbf B^k+\mathbf U^k_i\|^2_2 } \right\}, i=0,\cdots,N$;
\item Step 2:~$\mathbf B^{k+1}=\frac{1}{N+1}\sum_{i=0}^N \left\{ \mathbf B^{k+1}_i+\mathbf U^k_i \right\}$;
\item Step 3:~$\mathbf U_i^{k+1}=\mathbf U_i^k+\mathbf B_i^{k+1}-\mathbf B^{k+1},~i=0,\cdots,N$.
\end{itemize}
If we start with $\sum \mathbf U^1_i=\mathbf{0}$, it can be shown that $\sum \mathbf U^k_i=\mathbf{0}$ for every $k>1$ and so Step 2 will simply be an average operator, i.e.,
\begin{align*}
\mathbf B^{k+1}=\frac{1}{N+1}\sum_{i=0}^N \mathbf B^{k+1}_i.
\end{align*}
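To make the three steps concrete, the following pure-Python sketch (our illustration with $N=1$, not the paper's matrix-based R implementation) runs the consensus iteration for the scalar problem $\min_x \frac{1}{2}(x-a)^2+\lambda |x|$, whose exact solution is the soft-threshold $\mathrm{sign}(a)(|a|-\lambda)_+$:

```python
def prox_quad(v, rho, a):
    # prox of f0(x) = (1/2)(x - a)^2: argmin (1/2)(x-a)^2 + (rho/2)(x-v)^2
    return (a + rho * v) / (1 + rho)

def prox_l1(v, rho, lam):
    # prox of f1(x) = lam*|x|: soft-thresholding with threshold lam/rho
    t = lam / rho
    return max(abs(v) - t, 0.0) * (1 if v >= 0 else -1)

def consensus_admm(a, lam, rho=1.0, iters=500):
    x = [0.0, 0.0]   # local copies B_0, B_1
    u = [0.0, 0.0]   # scaled dual variables U_0, U_1
    z = 0.0          # consensus variable B
    for _ in range(iters):
        x[0] = prox_quad(z - u[0], rho, a)                # Step 1 for f0
        x[1] = prox_l1(z - u[1], rho, lam)                # Step 1 for f1
        z = sum(xi + ui for xi, ui in zip(x, u)) / 2.0    # Step 2
        u = [ui + xi - z for xi, ui in zip(x, u)]         # Step 3
    return z
```

For $a=3$, $\lambda=1$ the iterates converge to $2$, matching the closed-form solution.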
As we can see, the computational complexity of the algorithm is usually dominated by the first step.
In general, for a convex function $f(\cdot)$, the proximal operator \citep{parikh2014proximal} is defined as:
\begin{align} \label{l2proc}
\mbox{prox}_{f, \rho}(\mathbf A) \stackrel{\mbox{\textrm{\tiny def}}}{=} \argmin_{\mathbf B} f(\mathbf B)+\frac{\rho}{2} \|\mathbf B-\mathbf A\|_2^2 .
\end{align}
Thus, given $\mathbf B^k$ and the $\mathbf U_i^k$'s, Step 1 amounts to evaluating the proximal operator of the squared loss $f_0(\cdot)$ and of each penalty function $f_i(\cdot),~i=1,\ldots,N$. In Proposition \ref{prop1} we have derived an efficient form for the proximal operator of the squared loss $f_0(\cdot)$ at $\mathbf A={\bf 0}$. For a general $\mathbf A$ in \eqref{l2proc}, the efficient solution can be obtained by setting $\lambda=\rho/2$ and updating $\mathbf D$ as $n^{-1}\sum_{i=1}^n y_i \mathbf x_i \mathbf x_i ^\top+\mathbf A$ in Proposition \ref{prop1}. In the next subsection, we provide the proximal operator for each penalty function.
\subsection{Proximal operator}
For most penalty functions, the proximal operator has an explicit solution, and we summarize these operators in this section. With a slight abuse of notation, let $\mathbf B$ be a parameter matrix of dimension $p\times q$. For the $\ell_1$ norm, writing $\mathbf A=(A_{ij})_{p \times q}$, we have
\begin{align*}
\argmin_{\mathbf B} \lambda \|\mathbf B\|_1+\frac{1}{2} \|\mathbf B-\mathbf A\|_2^2=\left( \mbox{sign}(A_{ij})( |A_{ij}|-\lambda)_{+} \right)_{p \times q}\stackrel{\mbox{\textrm{\tiny def}}}{=} \mbox{soft}(\mathbf A,\lambda),
\end{align*}
where $x_{+}=\max(0,x)$. For the nuclear norm, denoting the singular value decomposition of $\mathbf A$ as
\begin{align*}
\mathbf A=\sum_{i=1}^{\min(p,q)} \sigma_i \mathbf u_i \mathbf v_i^\top,
\end{align*}
we have
\begin{align*}
\argmin_{\mathbf B} \lambda \|\mathbf B\|_*+\frac{1}{2} \|\mathbf B-\mathbf A\|_2^2=\sum_{i=1}^{\min(p,q)} (\sigma_i-\lambda)_{+} \mathbf u_i \mathbf v_i^\top.
\end{align*}
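As an illustration, the elementwise soft-thresholding operator above can be coded in a few lines (a pure-Python sketch on a list-of-lists matrix; any array library performs the same computation in one vectorized call):

```python
def soft(A, lam):
    """Elementwise soft-thresholding: sign(a) * max(|a| - lam, 0)."""
    return [[(1 if a >= 0 else -1) * max(abs(a) - lam, 0.0) for a in row]
            for row in A]
```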
For other penalties imposed on the columns or rows of $\mathbf B$, we present the solutions in vector form for brevity. Considering the proximal operator,
\begin{align*}
\widehat{\mathbf b}=\argmin_{\mathbf b} f(\mathbf b)+\frac{1}{2} \|\mathbf b-\mathbf a\|_2^2,~\mathbf a,\mathbf b \in \mathbb{R}^q,
\end{align*}
we have the following solutions.
\begin{itemize}
\item $\ell_2$ norm--Group LASSO \citep{yuan2006model}:
\begin{align*}
f(\mathbf b)=\lambda \|\mathbf b\|_2,~\widehat{\mathbf b}=\left(1-\frac{\lambda}{\|\mathbf a\|_2} \right)_{+} \cdot \mathbf a.
\end{align*}
\item $\ell_\infty$ norm penalty \citep{duchi2009efficient}:
\begin{align*}
f(\mathbf b)=\lambda \|\mathbf b\|_\infty.
\end{align*}
When $\lambda \geq \|\mathbf a\|_1$, we have $\widehat{\mathbf b}=\mathbf 0$. Otherwise, the solution is
\begin{align*}
\widehat{\mathbf b}=\mathbf a-\mbox{soft}(\mathbf a,\lambda_1),
\end{align*}
where $\lambda_1 \geq 0$ satisfies the equation
\begin{align*}
\sum_{i=1}^q (|a_i|-\lambda_1) I(|a_i|>\lambda_1)=\lambda.
\end{align*}
The details of the derivation can be found in Section 5.4 of \cite{duchi2009efficient}.
\item Hybrid $\ell_1/\ell_\infty$ norm penalty \citep{haris2016convex}:
\begin{align*}
f(\mathbf b)=\lambda \max \left(|b_1|, \sum_{i=2}^q |b_i|\right).
\end{align*}
The solution is
\begin{align*}
\widehat{\mathbf b}=\left(\mbox{soft}(a_1,\lambda_1), \mbox{soft}(\mathbf a_{-1},\lambda-\lambda_1)\right),
\end{align*}
where
\begin{align*}
\lambda_1=\argmin_{t \in [0, \lambda]} \|\mbox{soft}(a_1,t)\|_2^2+\| \mbox{soft}(\mathbf a_{-1},\lambda-t)\|_2^2.
\end{align*}
In particular, when $\lambda \geq |a_1|+\|\mathbf a_{-1}\|_\infty$, $\widehat{\mathbf b}=\mathbf 0$. Further details can be found in \cite{haris2016convex}.
\end{itemize}
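These vector operators are equally easy to implement; the sketch below codes the group-LASSO shrinkage directly and handles the $\ell_\infty$ case by bisection on the threshold $\lambda_1$ (a pure-Python illustration under the notation above, not the paper's implementation; the hybrid $\ell_1/\ell_\infty$ case reduces to a similar one-dimensional search):

```python
from math import sqrt

def prox_group(a, lam):
    """prox of lam*||b||_2: block soft-thresholding of the vector a."""
    norm = sqrt(sum(x * x for x in a))
    scale = max(1.0 - lam / norm, 0.0) if norm > 0 else 0.0
    return [scale * x for x in a]

def prox_linf(a, lam, tol=1e-12):
    """prox of lam*||b||_inf; by Moreau decomposition b = a - P(a),
    where P(a) is the Euclidean projection of a onto the l1-ball of radius lam."""
    if sum(abs(x) for x in a) <= lam:
        return [0.0] * len(a)
    # find t >= 0 with sum(max(|a_i| - t, 0)) = lam (monotone decreasing in t)
    lo, hi = 0.0, max(abs(x) for x in a)
    while hi - lo > tol:
        t = (lo + hi) / 2.0
        if sum(max(abs(x) - t, 0.0) for x in a) > lam:
            lo = t
        else:
            hi = t
    t = (lo + hi) / 2.0
    shrunk = [(1 if x >= 0 else -1) * max(abs(x) - t, 0.0) for x in a]
    return [x - s for x, s in zip(a, shrunk)]
```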
With these explicit proximal operators, we obtain the unified algorithm as follows.
\begin{algorithm}[H]\small
\caption{HiQR: High dimensional Quadratic Regression.}
\begin{algorithmic}[1]
\item[Initialization:]
\State Input the observations $(\mathbf x_i,y_i),~i=1,\cdots,n$;
\State Set the loss function $f_0(\cdot)$ and the penalty functions $f_1(\cdot),\cdots,f_N(\cdot)$;
\State Start from $k=0$, $\mathbf B^{0}_i=\mathbf U_i^0=\mathbf 0_{p \times p}$.
\item[Iteration:]
\State Update $\mathbf B^{k+1}_i=\mbox{prox}_{f_i,\rho}(\mathbf B^k-\mathbf U_i^k),~i=0,\cdots,N$.
\State Update $\mathbf B^{k+1}=\frac{1}{N+1}\sum_{i=0}^N \mathbf B^{k+1}_i$.
\State Update $\mathbf U_i^{k+1}=\mathbf U_i^k+\mathbf B_i^{k+1}-\mathbf B^{k+1},~i=0,\cdots,N$.
\State Repeat steps 4-6 until convergence.
\item[Output:] Return $\mathbf B$.
\end{algorithmic}
\end{algorithm}
Algorithm 1 is simple and efficient because each step of the iteration has a closed form, and we fully exploit the matrix structure of the problem to obtain a closed-form solution for the proximal operator of the squared loss in quadratic regression. The algorithm is entirely matrix-based: we update $p \times p$ matrices in each step without resorting to vectorization or Kronecker products, which greatly reduces the memory and computational burden when handling high-dimensional data.
\section{Simulations}
To illustrate the efficiency of the proposed algorithm, we consider a toy example:
\begin{align*}
Y=2X_1-2X_5+2X_{10}+3X_1X_5-2.5X_5^2+4X_5X_{10}+\epsilon.
\end{align*}
For all the simulations, we generate $\mathbf x_1,\cdots,\mathbf x_n$ independently from $N(\mathbf 0,\mbox{\boldmath $\Sigma$})$, where $\mbox{\boldmath $\Sigma$}=(0.5^{|k-l|})_{p \times p}$, and the error term $\epsilon$ from $N(0,1)$. We fix the sample size $n=1000$ and vary the data dimension $p$ from small to large. The code is run on an Apple M1 chip with an 8-core CPU and 8GB of RAM, using R version 4.2.3 with vecLib BLAS.
\subsection{Ridge regression}
In this part, we compare four algorithms for computing the ridge-penalized quadratic regression, namely, the naive inverse \eqref{sol}, the Woodbury trick \eqref{wood}, the SVD method \eqref{svd}, and the proposed HiQR. We fix $\lambda=10$, and the computation times are recorded in seconds based on 10 replications.
\begin{table}[!htbp] \centering
\caption{Average computation time (standard deviation) of different algorithms for ridge regression ($\lambda=10$) over 10 replications. Time is recorded in seconds. }
\label{tab2}
\resizebox{1\textwidth}{!}{
\begin{tabular}{@{\extracolsep{5pt}} cccccc}
\\[-1.8ex]\hline
\hline \\[-1.8ex]
& p=100 & p=200 & p=400 & p=800 & p=1200 \\
\hline \\[-1.8ex]
Naive & 9.808 (0.113) & NA & NA & NA & NA \\
Woodbury & 0.129 (0.005) & 0.557 (0.043) & 2.214 (0.095) & 21.551 (2.892) & NA \\
SVD & 0.547 (0.014) & 2.685 (0.042) & 13.755 (0.167) & 72.081 (1.798) & NA \\
HiQR & 0.019 (0.001) & 0.021 (0.001) & 0.023 (0.002) & 0.047 (0.005) & 0.051 (0.005) \\
\hline \\[-1.8ex]
\multicolumn{6}{l}{*NA indicates an out-of-memory failure in R.}
\end{tabular}
}
\end{table}
From Table \ref{tab2}, we can observe that our HiQR algorithm greatly outperforms the other algorithms in terms of computational efficiency. Additionally, the results are roughly consistent with the respective computational complexities, i.e., $O(p^6)$, $O(n^2p^2+n^3)$, $O(n^2p^2)$ and $O(np^2+n^3)$. As we can see, the vectorization-based methods all fail to handle the $p=1200$ case due to memory shortage, while our method remains efficient, as we only need to store $n \times p$ and $p \times p$ matrices.
\subsection{Single penalty function}
In this part, we investigate the performance of the proposed HiQR for a single penalty, i.e., $f(\mathbf B)=\lambda \|\mathbf B\|_1$. As a comparison, we also implement the all-pairs LASSO of vectorized features using two state-of-the-art packages, namely ``glmnet'' \citep{friedman2010regularization} and ``ncvreg'' \citep{breheny2011coordinate}. Table \ref{tab3} reports the computation times of these three algorithms for a solution path with 50 $\lambda$s based on 10 replications. From Table \ref{tab3}, we can see that while the proposed HiQR is comparable to the two well-established packages, both ``glmnet'' and ``ncvreg'' fail to generate solutions when $p=1000$ due to out-of-memory errors.
\begin{table}[!htbp] \centering
\caption{Average computation time (standard deviation) of three packages for obtaining a solution path for the all-pairs LASSO over 10 replications. The same set of 50 $\lambda$s has been used for the three different packages, and time is recorded in seconds.}
\label{tab3}
\resizebox{1\textwidth}{!}{
\begin{tabular}{@{\extracolsep{5pt}} cccccc}
\\[-1.8ex]\hline
\hline \\[-1.8ex]
& p=100 & p=200 & p=400 & p=800 & p=1000 \\
\hline \\[-1.8ex]
glmnet & 0.235(0.205) & 0.745(0.127) & 3.342(0.268) & 21.167(1.207) & NA \\
ncvreg & 0.373(0.063) & 1.421(0.129) & 6.136(0.102) & 35.162(2.004) & NA \\
HiQR & 0.324(0.087) & 0.987(0.171) & 5.475(1.308) & 26.215(7.535) & 31.319(17.579) \\
\hline \\[-1.8ex]
\end{tabular}
}
\end{table}
We remark that both ``glmnet'' and ``ncvreg'' are accelerated by strong rules; see \cite{tibshirani2012strong} and \cite{lee2015strong} for more details. Strong rules screen out a large number of features to substantially improve computational efficiency. However, as \cite{tibshirani2012strong} points out, the price is that ``the strong rules are not foolproof and can mistakenly discard active predictors, that is, ones that have nonzero coefficients in the solution.'' As a comparison, our algorithm can be as efficient as ``glmnet'' and ``ncvreg'' without the need for this type of acceleration.
\subsection{Hybrid penalty functions}
In this part, we report the performance of HiQR for hybrid penalty functions. Specifically, we conduct simulations for the $\ell_1+\ell_2$, $\ell_1+\ell_\infty$, $\ell_1+\ell_1/\ell_\infty$, and $\ell_1+\ell_*$ penalties. The two parameters $\lambda_1$ and $\lambda_2$ are determined by $\lambda$ and $\alpha \in(0,1)$, that is,
\begin{align*}
\lambda_1=\lambda\cdot\alpha\cdot\lambda_{1,max},~\lambda_2=\lambda\cdot(1-\alpha)\cdot\lambda_{2,max},
\end{align*}
where $\lambda_{1,max}$ ($\lambda_{2,max}$) is set to be the smallest tuning value yielding a zero estimate when $\lambda_2$ ($\lambda_1$) is set to 0. We apply HiQR over a $10 \times 50$ grid of $(\alpha,\lambda)$ values, and Table \ref{tab4} presents the average computation times for the whole procedure. We can see that the proposed algorithm scales very well to high-dimensional quadratic regression.
\begin{table}[!htbp] \centering
\caption{Average computation times (standard deviation) of ``HiQR" under hybrid penalties with 500 tuning pairs over 10 replications. Time is recorded in seconds.}
\label{tab4}
\resizebox{1\textwidth}{!}{
\begin{tabular}{@{\extracolsep{5pt}} ccccc} \\[-1.8ex]
\hline \hline \\[-1.8ex]
& $\ell_1+\ell_2$ & $\ell_1+\ell_\infty$ & $\ell_1+\ell_1/\ell_\infty$& $\ell_1+\ell_*$\\
$p=50$ & 8.947(0.735) & 5.113(0.495) & 7.654(0.519) & 14.107(1.038) \\
$p=100$ & 25.963(2.952) & 10.616(1.194) & 21.891(1.964) & 46.136(2.897) \\
$p=200$ & 132.924(6.018) & 28.047(2.152) & 105.835(6.322) & 204.892(11.087) \\
\hline \\[-1.8ex]
\end{tabular}}
\end{table}
\subsection{Model performance}
Lastly, we evaluate different penalties on different models. In particular, we consider
\begin{align}
\mbox{ Model 1:}&~Y=2X_1-2X_5+2X_{10}+3X_1X_5-2.5X_5^2+4X_5X_{10}+\epsilon, \label{m1} \\
\mbox{ Model 2:}&~Y=-2X_5+3X_1X_5-2.5X_5^2+4X_5X_{10}+\epsilon, \label{m2} \\
\mbox{ Model 3:}&~Y=3X_1X_5-2.5X_5^2+4X_5X_{10}+\epsilon, \label{m3}
\end{align}
where the true parameters of $\mathbf B[(1,2,6,11),(1,2,6,11)]$ are
\begin{align*}
\begin{pmatrix}
0&1&-1&1\\
1&0&1.5&0\\
-1&1.5&-2.5&2\\
1&0&2&0
\end{pmatrix},~\begin{pmatrix}
0&0&-1&0\\
0&0&1.5&0\\
-1&1.5&-2.5&2\\
0&0&2&0
\end{pmatrix},\begin{pmatrix}
0&0&0&0\\
0&0&1.5&0\\
0&1.5&-2.5&2\\
0&0&2&0
\end{pmatrix},
\end{align*}
respectively.
In particular, Model 1 has a strong hierarchical structure, Model 2 has a weak hierarchical structure, and Model 3 is a model with only interactions.
Owing to the efficiency of HiQR, we study a high-dimensional case with $p=200$ and $n=500$; note that the model then has about $2\times 10^4$ parameters. We implement the penalized quadratic regression with 50 $\alpha$s and 50 $\lambda$s, resulting in a solution path over 2500 grid points. To assess each estimate $\widehat{\mathbf B}$, we adopt the critical success index (CSI), which evaluates the support recovery rate and the model size simultaneously. For the true $\mathbf B$ and an estimate $\widehat{\mathbf B}$, the CSI is defined as
\begin{align*}
\mbox{CSI}(\mathbf B,\widehat{\mathbf B})=\frac{\#\{(i,j): B_{ij} \neq 0~\mbox{and}~\hat{B}_{ij} \neq 0 \}}{\#\{(i,j): B_{ij} \neq 0~\mbox{or}~\hat{B}_{ij} \neq 0 \}}.
\end{align*}
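A direct implementation of this index (our illustrative sketch; the matrix is stored as a list of lists, and the tolerance parameter for declaring numerically small entries zero is our convention, not the paper's):

```python
def csi(B, B_hat, tol=0.0):
    """Critical success index: |supp(B) & supp(B_hat)| / |supp(B) | supp(B_hat)|."""
    inter = union = 0
    for row, row_hat in zip(B, B_hat):
        for b, bh in zip(row, row_hat):
            nz, nz_hat = abs(b) > tol, abs(bh) > tol
            inter += nz and nz_hat   # booleans count as 0/1
            union += nz or nz_hat
    # convention: both supports empty -> perfect agreement
    return inter / union if union else 1.0
```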
Figure \ref{fig1} presents the results for different models and different penalties. From these solution paths, we can see that these methods can detect the true signals if the tuning parameters are set suitably.
{Although tuning parameter selection is beyond the scope of the current work, our results indicate that the proposed ``HiQR'' algorithm can efficiently train a model with $2 \times 10^4$ parameters over 2500 tuning pairs, and that it generates estimates with satisfactory support recovery (cf.\ the black grid points in Figure \ref{fig1}).
}
\begin{figure}
\caption{The critical success index for different models and different penalties with 2500 tuning parameters.}
\label{fig1}
\end{figure}
\end{document} |
\begin{document}
\title[]{CONFORMAL MODULE OF THE EXTERIOR \\ OF TWO RECTILINEAR
SLITS}\thanks{The work of the first author was supported by the
Russian Foundation for Basic Research and the Government of the
Republic of Tatarstan, grant No~18-41-160003; the second author
was supported by the Russian Foundation for Basic Research, grant
No~17-01-00282. The third author expresses his thanks to the Kazan
Regional Scientific and Educational Mathematical Center for a
support during his stay at Kazan Federal University in
October-November 2018.}
\author[D.~Dautova]{D.~Dautova}
\address{Kazan Federal University,
Kazan, Russia}
\email[[email protected]]{[email protected]}
\author[S.~Nasyrov]{S.~Nasyrov}
\address{Kazan Federal University,
Kazan, Russia}
\email[]{[email protected]}
\author[M.~Vuorinen]{M.~Vuorinen}
\address{Department of Mathematics and Statistics, University of Turku,
Turku, Finland}
\email[[email protected]]{[email protected]}
\maketitle
\begin{abstract}
We study moduli of planar ring domains whose complements are
linear segments and establish formulas for their moduli in terms
of the Weierstrass elliptic functions. Numerical tests are carried
out to illuminate our results.
\noindent {Keywords:} {conformal module, reduced module, capacity,
elliptic functions}.
\noindent {Mathematics Subject Classification:} 30C20; 30C30;
31A15.
\end{abstract}
\section{Introduction}\label{intr}
The Weierstrass and Jacobian elliptic and theta functions and the
Schwarz-Christoffel formula form the foundation for numerous
explicit formulas for conformal mappings (N.I. Akhiezer
\cite{akhiezer}, W.~Koppenfels, F.~Stallmann \cite{kop_sht}).
During the past thirty years many authors have studied numerical
implementation of conformal mappings. We refer the reader to the
bibliography of the monograph by N. Papamichael and N.
Stylianopoulos \cite{ps}. In particular, the Schwarz-Christoffel
toolbox of T. Driscoll and N. Trefethen \cite{dt} has become a
standard tool in the field. In a series of papers, T. DeLillo, J. Pfaltzgraff, D. Crowdy, and their coauthors have extended the
Schwarz-Christoffel method to certain cases of multiply connected
domains with polygonal boundary components
\cite{cr,delil2,delil1}.
In addition to the conformal mapping problem, also the
computation of numerical values of conformal invariants is an
important issue in geometric function theory. Here one can often
use a conformal map onto a canonical domain so as to simplify the
computation. Therefore computation of conformal invariants has a
natural link to numerical conformal mapping. However, the so-called crowding phenomenon can create serious obstacles for the computation of conformal maps, e.g., when long rectangles are mapped onto the upper half-plane
\cite{ps}.
A basic conformal invariant is the module of a ring domain. A ring
domain $G$ can be conformally mapped onto an annulus $\{z \in
\mathbb{C}: q<|z|<1\}$ and its conformal module and capacity are
defined as
$$
{\rm mod}\, G =( \log (q^{-1}))/(2\pi)\,, \quad {\rm cap}\, G = 2
\pi/\log (q^{-1}) \,.
$$
Therefore, ${\rm mod}\,G =1/{\rm cap}\, G$ and the computation of
${\rm mod}\, G$ can be reduced to the solution of the Dirichlet
problem for the Laplace equation and to the computation of the
$L^2$-norm of its gradient. This method was applied in
\cite{bsv,hrv1,hrv3} for the case of bounded ring domains.
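For concreteness, the two quantities for a given annulus can be evaluated directly (a trivial numerical sketch of the definitions above, our illustration rather than part of the paper):

```python
from math import log, pi, exp

def mod_annulus(q):
    """Conformal module of the annulus {q < |z| < 1}."""
    return log(1.0 / q) / (2.0 * pi)

def cap_annulus(q):
    """Conformal capacity of the annulus, the reciprocal of the module."""
    return 2.0 * pi / log(1.0 / q)
```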
Here we shall consider unbounded ring domains whose complementary
components are segments. We describe one-parametric families of
functions $f(z,t)$ each of which maps conformally an annulus
$\{q<|\zeta|<1\}$ onto the exterior $G=G(t)$ of two disjoint
segments $A_1A_2$ and $A_3A_4$. Here $A_j=A_j(t)$, $1\le j \le 4$,
are some smooth functions and $q=q(t)$; $t$ is a real parameter.
Further we will denote such domains by $G(A_1,A_2,A_3,A_4)$. It
is also assumed that the straight lines, containing the segments
$A_1A_2$ and $A_3A_4$, are fixed.
We note that one-parametric families of conformal mappings were
considered earlier. There is the well-known Loewner-Komatu
differential equation which is a generalization of the Loewner
equation to the doubly-connected case. The approach of Komatu was
developed by Goluzin~\cite{gol1} and others (see, e.g.
\cite{alex,gum1,gum2}).
We deduce a differential equation for $f(z,t)$ in the considered
case
(Theorem~\ref{loew-ell}). In contrast to the
Loewner-Komatu equation, we do not assume that the family of the
images is monotonic as a function of the parameter $t$. As a
corollary, we obtain a system of ODEs to determine the behavior of
the accessory parameters, which are the preimages of the points
$A_j$, and the conformal module $m(t):={\rm mod}\, G(t) =(\log
(q(t)^{-1}))/(2\pi)$. On the basis of this system, we suggest an
approximate method for finding the accessory parameters and the
conformal module. We note that in our approach we use essentially
the Weierstrass elliptic functions.
Further we apply the obtained results to investigate the behavior
of the conformal module in the case when one of the segments and
the length of the other one are fixed.
Now we briefly describe the structure of the paper. In
Section~\ref{prel} we give some information on the Weierstrass
elliptic functions, moduli, and reduced moduli. In
Section~\ref{repr} we describe an integral representation of an
annulus onto the exterior of two slits $A_1A_2$ and $A_3A_4$
(Theorem~\ref{map_fix}). In contrast to the known representations
\cite{komatu} (see also \cite{henrici3}, \cite{delil1},
\cite{delil2}), our representation is based on the Weierstrass
$\sigma$-functions. The representation contains some unknown
constants; they are called accessory parameters. In
Section~\ref{fam} we consider one-parametric families of such
functions $f(z,t)$ and deduce a differential equation for them
(Theorem~\ref{loew-ell}). As a corollary, we obtain a system of
ODEs for accessory parameters (Theorem~\ref{system}). We note that
to deduce the equations we use the approach developed earlier for
one-parametric families of rational functions \cite{nas_dokl} and
conformal mappings of complex tori \cite{nas_vuz,nas_petr}. In
Section~\ref{numeric} we give results of some numerical
calculation. In Section~\ref{extr} we study monotonicity of the
conformal module of the exterior of two slits when one segment is
fixed and the other one slides along a straight line and has a
fixed length.
Finally we should note that recently the capacity computation of
doubly connected domains with complicated boundary structure has
been studied for instance in \cite{hrv3}.
\section{Some preliminary results}\label{prel}
\textit{Elliptic functions.} First we recall some information
about elliptic functions (see, e.g., \cite{akhiezer,nist} and also
\cite{dlmf1,dlmf2}).
A function meromorphic in the complex plane is called elliptic if
it has periods $\omega_1$ and
$\omega_2$\footnotemark{*}\footnotetext{${}^*$ In contrast to
\cite{akhiezer}, we denote by $\omega_1$ and $\omega_2$ the periods of
elliptic functions, not the half-periods. The same remark concerns the
values $\eta_k$ defined by (\ref{periodzeta}).}, linearly
independent over $\mathbb{R}$. In the fundamental parallelogram
spanned by the vectors $\omega_1$ and $\omega_2$, every
nonconstant elliptic function takes each value the same number $r$
of times; this number is called the order of the elliptic
function.
If $a_1,\ldots,a_r$ are the zeros of an elliptic function of order
$r$ and $b_1,\ldots,b_r$ are its poles in the fundamental
parallelogram, then
$$
a_1+\ldots+a_r\equiv b_1+\ldots+b_r\quad (\textrm{mod}\,\Omega)
$$
where $\Omega$ is the lattice generated by $\omega_1$ and
$\omega_2$. Further we will denote by $\omega$ an arbitrary
element of the lattice. We note that, for a given lattice, the
generators $\omega_1$ and $\omega_2$ are not uniquely determined;
we will further assume that
$\mathop{\rm Im}\nolimits(\omega_2/\omega_1)>0$.
One of the main elliptic functions is the Weierstrass
$\mathfrak{P}$-function
\begin{equation*}\label{pw}
\mathfrak{P}(z)=\frac{1}{z^2}+\sum\nolimits'\left[\frac{1}{(z-\omega)^2}-\frac{1}{\omega^2}\right];
\end{equation*}
here the summation $\sum'$ is over all nonzero elements of the
lattice. The Weierstrass $\zeta$-function
\begin{equation}\label{zw}
\zeta(z)=\frac{1}{z}+\sum\nolimits'\left[\frac{1}{z-\omega}+\frac{1}{\omega}+\frac{z}{\omega^2}\right]
\end{equation}
has the properties: $\zeta'(z)=-\mathfrak{P}(z)$ and
\begin{equation}\label{periodzeta}
\zeta(z+\omega_k)=\zeta(z)+\eta_k,\quad k=1,2,
\end{equation}
where $\eta_k=2\zeta(\omega_k/2)$. In the fundamental
parallelogram it has a unique pole with residue~$1$. The numbers
$\eta_k$ and $\omega_k$ satisfy the equality
\begin{equation}\label{rel}
\omega_2\eta_1-\omega_1\eta_2=2\pi i.
\end{equation}
At last, we need the Weierstrass $\sigma$-function
\begin{equation}\label{sw}
\sigma(z)=z\prod\nolimits'\left\{\left(1-\frac{z}{\omega}\right)\exp\left(\frac{z}{\omega}+\frac{z^2}{2\omega^2}\right)\right\}.
\end{equation}
It is an odd entire function with the properties:
$$
\frac{\sigma'(z)}{\sigma(z)}=\zeta(z),\quad
\sigma(z+\omega)=\varepsilon \sigma(z)e^{\eta(z+\omega/2)}.
$$
Here $\eta=m\eta_1+n\eta_2$, if $\omega=m\omega_1+n\omega_2$.
Moreover, $\varepsilon=1$, if $\omega/2$ belongs to the lattice
$\Omega$, otherwise, $\varepsilon=-1$.
We recall the Weierstrass invariants $g_2$ and $g_3$:
\begin{equation*}
g_2=60\sum\nolimits'\frac{1}{(m\omega_1+n\omega_2)^4},\quad
g_3=140\sum\nolimits'\frac{1}{(m\omega_1+n\omega_2)^6}.
\end{equation*}
Elliptic functions depend not only on the variable $z$ but also
on the lattice. Further we need explicit expressions for the
partial derivatives of $\zeta(z)= \zeta(z;\omega_1,\omega_2)$ with respect to the
periods $\omega_1$ and $\omega_2$ of the lattice. In
\cite{nas_vuz} the following theorem is proved.
\begin{theorem}\label{defzeta}
The partial derivatives of $\zeta(z)=\zeta(z;\omega_1,\omega_2)$
with respect to the periods $\omega_1$ and $\omega_2$ are equal to
\begin{equation*}\label{zetaom1}
\frac{\partial\zeta(z)}{\partial \omega_1}=\frac{1}{2\pi i}\left[
\frac{1}{2}\,\omega_2\mathfrak{P}'(z)+(\omega_2\zeta(z)-\eta_2z)\mathfrak{P}(z)+
\eta_2 \zeta(z)-(\omega_2 g_2/12)z\right],
\end{equation*}
\begin{equation*} \label{zetaom2}\frac{\partial\zeta(z)}{\partial
\omega_2}=-\frac{1}{2\pi i}\left[
\frac{1}{2}\,\omega_1\mathfrak{P}'(z)+(\omega_1\zeta(z)-\eta_1z)\mathfrak{P}(z)+
\eta_1 \zeta(z)-(\omega_1 g_2/12)z\right].
\end{equation*}
\end{theorem}
We will also need the Jacobi theta-function $\vartheta_1(z)$. For
a given lattice $\Omega$, generated by $\omega_1$ and $\omega_2$,
let $\tau=\omega_2/\omega_1$, $\mathop{\rm Im}\nolimits \tau>0$, and $q=e^{\pi i
\tau}$. Then, by definition (see, e.g. \cite[ch.~1,
sect.~3]{akhiezer}, \cite{dlmf1}),
\begin{equation}\label{theta1}
\vartheta_1(z)=\vartheta_1(z|\tau)=2\sum_{n=0}^\infty
(-1)^nq^{(n+1/2)^2}\sin((2n+1)z).
\end{equation}
There is the following connection between the $\sigma$-function and
$\vartheta_1(z)$ (see, e.g. \cite[ch.~4, sect. 19, formula
(1)]{akhiezer}):
\begin{equation}\label{connect}
\sigma(z)=\omega_1 \frac{e^{\frac{\eta_1
z^2}{2\omega_1}}\vartheta_1(z/\omega_1 )}{\vartheta'_1(0)}\,.
\end{equation}
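Since $|q|<1$, the series for $\vartheta_1$ converges very rapidly; a truncated direct implementation (our numerical sketch, written with the same argument convention, so that $\vartheta_1(z+\pi)=-\vartheta_1(z)$ and $\vartheta_1(-z)=-\vartheta_1(z)$):

```python
import cmath, math

def theta1(z, tau, terms=10):
    """Truncated series for the Jacobi theta function theta_1(z | tau)
    with nome q = exp(i*pi*tau), Im(tau) > 0."""
    q = cmath.exp(1j * math.pi * tau)
    s = 0j
    for n in range(terms):
        s += (-1) ** n * q ** ((n + 0.5) ** 2) * cmath.sin((2 * n + 1) * z)
    return 2 * s
```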
\vskip 0.4 cm
\textit{Conformal moduli, reduced moduli, and capacities of
condensers.} Let $G$ be a ring domain in the plane, i.e. a
doubly-connected domain with non-degenerate boundary components.
There is a conformal mapping $\psi:G\to A$ of $G$ onto an annulus
$A=\{q<|z|<1\}$ (see, e.g., \cite{gol}). The value $q$ does not
depend on the choice of $\psi$. We call $$
\text{mod}\,G=\frac{1}{2\pi}\,\log (q^{-1})
$$
the conformal module of $G$. It is a conformal invariant and plays
an important role in the theory of conformal and quasiconformal
mappings.
Let $G$ be a ring domain in the plane with complementary
components $C_1$ and $C_2\,,$ and let $K$ be the condenser with
plates $C_1$ and $C_2\,$ and with field $G\,.$ We recall that $$
\text{cap}\,K=\inf\limits_u \int\!\!\!\int|\nabla u|^2 dxdy
$$
where the infimum is taken over all smooth functions $u$ such that
$u=0$ on $C_1$ and $u=1$ on $C_2$. We will define
$\text{cap}\,G:=\text{cap}\,K$ and call $\text{cap}\,G$ the
conformal capacity of the ring domain~$G\,.$
Let $D$ be a simply connected domain with non-degenerate boundary
and $z_0\in D$. For sufficiently small $\varepsilon$ consider the
condenser defined by $D\setminus B_{\varepsilon}(z_0)$; here
$B_{\varepsilon}(z_0)$ is the disk of radius $\varepsilon$
centered at the point $z_0$. Denote by $K_\varepsilon$ its
module. Then there exists the limit
$$ r(D,z_0):=\lim_{\varepsilon\to 0+}(K_\varepsilon+(1/(2\pi))\log
\varepsilon)
$$
which is called the reduced module of $D$ at the point $z_0$
\cite[Section 2.4]{dubook}, \cite{garnett}.
\section{Integral representation}\label{repr}
Consider a conformal mapping $g$ of an annulus $\{q<|\zeta|<1\}$
onto the exterior $G=G(A_1,A_2,A_3,A_4)$ of two disjoint
rectilinear slits $A_1A_2$ and $A_3A_4$ in the $w$-plane. With the
help of the exponential map $z\mapsto \zeta=\exp(2\pi i z)$ we can
consider the map $f(z):=g(\exp(2\pi i z))$ from the horizontal strip
$$S:=\{-m<\mathop{\rm Im}\nolimits z<0\},\quad m=\frac{1}{2\pi}\,\log (q^{-1}),$$ onto $G$. It
maps conformally the rectangle $\Pi=\{0<\mathop{\rm Re}\nolimits z<1, \,-m<\mathop{\rm Im}\nolimits z<0\}$
with identified vertical sides onto $G$ (Fig.~\ref{confo}). The
value $m$ is the conformal module of $G$. It is evident that $f$
has a unique pole in $\Pi$.
We will find an integral representation for the conformal mapping
$f$ of Schwarz--Christoffel type using the Weierstrass
$\sigma$-function. We should note that analogs of the
Schwarz--Christoffel integral for doubly-connected domains were
obtained earlier in \cite{komatu}; it is based on
$\theta$-functions (see also \cite{delil1,delil2,henrici3}).
\begin{figure}
\caption{Conformal mapping of the rectangle $\Pi$
with identified vertical sides onto $G(A_1,A_2,A_3,A_4)$.}
\label{confo}
\end{figure}
Using the Riemann--Schwarz reflection principle, we can extend $f$
to $\mathbb{C}$ as a meromorphic function. We see that the
function $h(z)=f''(z)/f'(z)$ is doubly periodic in $\mathbb{C}$
with periods $\omega_1=1$ and $\omega_2=2mi$. Consider $h$ in the
double rectangle $\widetilde{\Pi}=\{0<\mathop{\rm Re}\nolimits z<1, \,-m<\mathop{\rm Im}\nolimits z<m\}$; it
is its fundamental parallelogram. Here the function $h$ has only
polar singularities at points $z_k$, $1\le k\le 4$, corresponding
to the endpoints $A_k$ of the slits, and also at two distinct
points, $z_0$ and $\overline{z}_0$, where $f$ has poles. For
definiteness, we assume that $y_0:=\mathop{\rm Im}\nolimits z_0>0$. The residues of $h$
are known, therefore, we can express it with the help of the
Weierstrass zeta-function:
\begin{equation}\label{h}
h(z)=\gamma+\sum_{k=1}^4\zeta(z-z_k)-2\zeta(z-z_0)-2\zeta(z-\overline{z}_0)
\end{equation}
where $\gamma$ is a constant. (Here and further, unless otherwise
specified, we assume that $\zeta(z)$ and other elliptic functions
have periods $\omega_1=1$ and $\omega_2=2mi$.)
From (\ref{h}) we have
\begin{equation*}
\log f'(z)= \gamma z+\log
C+\sum_{k=1}^4\log\sigma(z-z_k)-2\log\sigma(z-z_0)-2\log\sigma(z-\overline{z}_0),
\end{equation*}
\begin{equation*}\label{f_prime}
f'(z)= C e^{\gamma
z}\frac{\prod_{k=1}^4\sigma(z-z_k)}{\sigma^2(z-z_0)\sigma^2(z-\overline{z}_0)},
\end{equation*}
\begin{equation}\label{f}
f(z)= C\int_{0}^z e^{\gamma
\xi}\frac{\prod_{k=1}^4\sigma(\xi-z_k)}{\sigma^2(\xi-z_0)\sigma^2(\xi-\overline{z}_0)}\,d\xi+C_1
\end{equation}
where $\sigma(z)$ is the Weierstrass sigma-function, $C\neq0$ and
$C_1$ are complex constants.
The residue of $f'(z)$ at $z_0$ must vanish, therefore,
\begin{equation*}\label{res}
\gamma
+\sum_{k=1}^4\zeta(z_0-z_k)-2\zeta(z_0-\overline{z}_0)=0\,.
\end{equation*}
Recall the quasi-periodicity relation of the $\sigma$-function stated in Section~\ref{prel}.
Because $f'(z)$ must be periodic
with period $\omega_1=1$, we have
\begin{multline*}
f'(z+1)= C e^{\gamma(
z+1)}\frac{\prod_{k=1}^4\sigma(z-z_k+1)}{\sigma^2(z-z_0+1)\sigma^2(z-\overline{z}_0+1)}\,\\=\frac{Ce^{\gamma}
e^{\gamma
z}\prod_{k=1}^4e^{\eta_1(z-z_k+1/2)}\sigma(z-z_k)}{e^{2\eta_1(z-z_0+1/2)}\sigma^2(z-z_0)e^{2\eta_1(z-\overline{z}_0+1/2)}\sigma^2(z-\overline{z}_0)}\,
=e^{\gamma+\eta_1(2z_0+2\overline{z}_0-\sum_{k=1}^4z_k)}f'(z).
\end{multline*}
Consequently,
\begin{equation}\label{1}
\gamma+\eta_1\left(2z_0+2\overline{z}_0-\sum_{k=1}^4z_k\right)\equiv
0\quad (\mbox{\rm mod}\, 2\pi i).
\end{equation}
In a similar way, we have
\begin{equation*}
f'(z+\omega_2)=e^{\gamma\omega_2+\eta_2(2z_0+2\overline{z}_0-\sum_{k=1}^4z_k)}f'(z).
\end{equation*}
Since $\arg f'(z+\omega_2)-\arg f'(z)=2\beta$ where $\beta$ is the
angle between the segments $A_1A_2$ and $A_3A_4$, we have
\begin{equation}\label{2}
\gamma\omega_2+\eta_2\left(2z_0+2\overline{z}_0-\sum_{k=1}^4z_k\right)\equiv 2\beta
i \quad (\mbox{\rm mod}\, 2\pi i).
\end{equation}
Now we will specify the position of the points $z_k$. We will
assume that $z_1$ and $z_2$ lie on the real axis, and $z_3$ and
$z_4$ are on the lower side of $\Pi$ (Fig.~\ref{confo}). Because
$z_k$ can be chosen up to the values $k \omega_1+n\omega_2$, $k$,
$n\in \mathbb{Z}$, for convenience, we shift $z_3$ by
$\omega_2=2mi$ and assume that
\begin{equation}\label{xk}
z_1=x_1,\ z_2=x_2,\ z_3=x_3+i m,\ z_4=x_4-i m,
\end{equation}
where $x_k$ are real numbers.
Denote
\begin{equation}\label{a1}
a=\sum_{k=1}^4z_k-2z_0-2\overline{z}_0.
\end{equation}
Taking into account (\ref{xk}), we see that $a$ is real. We write
(\ref{1}) and (\ref{2}) in the form
\begin{equation}\label{sys}
\gamma- \eta_1 a =2\pi k i,\quad \omega_2\gamma-\eta_2 a=2\beta i
+2\pi n i,\quad k,n\in \mathbb{Z}.
\end{equation}
Solving (\ref{sys}) as a system of linear equations with respect to
$\gamma$ and $a$ and taking into account that its determinant
equals $\omega_2\eta_1-\omega_1\eta_2=2\pi i$, we obtain
\begin{equation}\label{sys1}
\gamma=-k\eta_2 +(n+\beta/\pi)\eta_1, \quad
a=-k\omega_2+(n+\beta/\pi).
\end{equation}
Since $\omega_2$ is a purely imaginary number, from the second
equality in (\ref{sys1}) we deduce that $k=0$. Since $x_3$ can be
changed by integer values, we can assume that $n=0$. So
(\ref{sys1}) has the form
\begin{equation}\label{sys2}
\gamma=\beta\eta_1/\pi, \quad a=\beta/\pi.
\end{equation}
Thus, from (\ref{a1}) and (\ref{sys2}) we have
\begin{equation*}
\sum_{k=1}^4x_k=4x_0+\beta/\pi,\quad x_0=\mathop{\rm Re}\nolimits z_0.
\end{equation*}
Since a horizontal shift does not change the strip $S$, we can
assume that $x_0=0$.
Thus we have established the following theorem.
\begin{theorem}\label{map_fix}
The function mapping the annulus $\{q<|\zeta|<1\}$ onto
$G(A_1,A_2,A_3,A_4)$ is $f(z)$, where $z=(2\pi i)^{-1}\log\zeta$
and $f$ is defined by (\ref{f}). In (\ref{f}),
$\gamma=\beta\eta_1/\pi$; the points $z_k=x_k+i y_k$ correspond to
the endpoints $A_k$ of the slits and satisfy (\ref{xk}) with real
$x_k$ and $m=(1/(2\pi))\log (q^{-1})$; the point $z_0=i y_0$
corresponds to infinity; $C\neq 0$ and $C_1$ are some complex
constants. Moreover, $\sum_{k=1}^4x_k=\beta/\pi$.
\end{theorem}
\section{One-parametric families}\label{fam}
The parametric method for doubly connected domains was developed
by Komatu~\cite{komatu1} and Goluzin~\cite{gol1} (for details, see
\cite{alex}, Ch.~5). In the recent papers \cite{gum1,gum2} some new
results were obtained. Here we obtain an equation of Loewner type
using ideas of the papers \cite{nas_vuz,nas_dokl}.
Taking into account the integral representation (\ref{f}),
obtained in Theorem~\ref{map_fix}, we consider a smooth
one-parametric family of conformal mappings
\begin{equation}\label{family}
f(z,t)= c(t)\int_{0}^z e^{\gamma(t)
\xi}\frac{\prod_{k=1}^4\sigma(\xi-z_k(t))}{\sigma^2(\xi-z_0(t))\sigma^2(\xi-\overline{z_0(t)})}\,d\xi+c_1(t).
\end{equation}
Here $\sigma(z)=\sigma(z;1,\omega_2)$ where $\omega_2=2mi$,
$m=m(t)>0$. For a fixed $t$, $f(z,t)$ is periodic with period
$\omega_1\equiv 1$ and maps the half of the fundamental
parallelogram (rectangle) $\{0<\mathop{\rm Re}\nolimits z<1, -m<\mathop{\rm Im}\nolimits z<0\}$ onto the
exterior of two rectilinear slits. Without loss of generality we
may assume that one slit lies on the positive part of the real
axis and the second one is on the ray $\{\arg w=\beta\}$. (The
general case is obtained by multiplying $c(t)$ by
$e^{i\theta}$, which amounts to a rotation by the angle $\theta$;
we will use this remark below.)
We note once more that the angle $0<\beta<2\pi$ does not depend on
$t$. Moreover,
\begin{equation*}
\mathop{\rm Im}\nolimits z_1(t)=\mathop{\rm Im}\nolimits z_2(t)=0,\quad \mathop{\rm Im}\nolimits z_3(t)=-\mathop{\rm Im}\nolimits z_4(t)=m(t), \quad
\mathop{\rm Re}\nolimits z_0(t)=0,
\end{equation*}
therefore,
\begin{equation*}
z_1(t)=x_1(t),\ z_2(t)=x_2(t),\ z_3(t)=x_3(t)+im(t),\
z_4(t)=x_4(t)-im(t),\ z_0(t)=iy_0(t),
\end{equation*}
\begin{equation*}
x_1(t)<x_2(t)<x_1(t)+1,\ x_3(t)<x_4(t)<x_3(t)+1,\ 0\le y_0(t)\le
m(t),
\end{equation*}
\begin{equation*}
\gamma(t)=(\beta/\pi)\eta_1(t), \quad
\sum_{k=1}^4x_k(t)=\beta/\pi,
\end{equation*}
\begin{equation*}
\gamma(t)
+\sum_{k=1}^4\zeta(z_0(t)-z_k(t))-2\zeta(z_0(t)-\overline{z_0(t)})=0.
\end{equation*}
By the Riemann-Schwarz symmetry principle, we can extend $f(z,t)$
meromorphically to the whole complex plane. It is evident that
the extension satisfies
\begin{equation}\label{quasiper}
f(z+1,t)=f(z,t),\quad f(z+\omega_2(t),t)=e^{2i\beta}f(z,t).
\end{equation}
Differentiating \eqref{quasiper} with respect to $t$ and $z$, we obtain
$$
\dot{f}(z+1,t)=\dot{f}(z,t), \quad
\dot{\omega}_2(t)f'(z+\omega_2(t),t)+\dot{f}(z+\omega_2(t),t)=e^{2i\beta}\dot{f}(z,t),
$$
\begin{equation*}
f'(z+1,t)=f'(z,t),\quad f'(z+\omega_2(t),t)=e^{2i\beta}f'(z,t).
\end{equation*}
Here and in what follows, the dot denotes differentiation with respect to
the parameter $t$, and the prime denotes differentiation with respect
to $z$. Thus, we have
$$
\frac{\dot{f}(z+\omega_k(t),t)}{f'(z+\omega_k(t),t)}+\dot{\omega}_k(t)=\frac{\dot{f}(z,t)}{f'(z,t)}.
$$
Consequently, the function $h(z,t):={\dot{f}(z,t)}/{f'(z,t)}$
satisfies
\begin{equation}\label{periodh}
h(z+\omega_k(t),t)-h(z,t)=-\dot{\omega}_k(t),\quad k=1,2,
\end{equation}
where $\dot{\omega}_1(t)\equiv 0$.
Now we write Taylor's expansion of $f(z,t)$ in a neighborhood of
$z_k(t)$:
\begin{equation}\label{taylor}
f(z,t)=A_k(t)+\frac{D_k(t)}{2}\,(z-z_k(t))^2+\ldots,
\end{equation}
where $D_k(t)=f''(z_k(t),t)$. We have
\begin{multline*}
f''(z,t)=c(t)e^{\gamma(t) z
}\,\frac{\prod\limits_{j=1}^4\sigma(z-z_j(t))}{\sigma^{2}(z-z_0(t))\sigma^{2}(z-\overline{z_0(t)})}\,\\
\times
\Bigl[\,\gamma(t)+\sum_{j=1}^4\zeta(z-z_j(t))-2\zeta(z-z_0(t))-2\zeta(z-\overline{z_0(t)})\Bigr],
\end{multline*}
therefore, as $z\to z_k(t)$, we obtain
\begin{equation}\label{dk}
D_k(t)= c(t)e^{\gamma(t) z_k(t)}\,\frac{\prod\limits_{j=1,\,j\neq
k}^4\sigma(z_k(t)-z_j(t))}{\sigma^{2}(z_k(t)-z_0(t))\sigma^{2}(z_k(t)-\overline{z_0(t))}}\,.
\end{equation}
From (\ref{taylor}) it follows that
\begin{equation}\label{fprime}
f'(z,t)=D_k(t)(z-z_k(t))+\ldots,
\end{equation}
\begin{equation*}\label{(3.2)}
\dot{f}(z,t)=\dot{A}_k(t)-\dot{z}_k(t)D_k(t)(z-z_k(t))+\ldots,
\end{equation*}
and, therefore,
\begin{equation*} \label{(3.3)}
h(z,t)=\frac{\dot{f}(z,t)}{f'(z,t)}=\frac{\gamma_k(t)}{z-z_k(t)}+O(1),
\quad z\to z_k(t),
\end{equation*}
where
\begin{equation}\label{gammak}
\gamma_k(t):=\frac{\dot{A}_k(t)}{D_k(t)}\,.\end{equation}
At the point $z_0(t)$, the function $\dot{f}(z,t)$ has a pole of
order at most $2$, and $f'(z,t)$ has a pole of order $2$. Thus,
$h(z,t)$ has a removable singularity at this point. In more
detail, denoting by $d_{-1}(t)$ the residue of $f(z,t)$ at the
point $z_0(t)$, we have
\begin{equation*}\label{z0}
f(z,t)=\frac{d_{-1}(t)}{z-z_0(t)}\,+d_0(t)+O(z-z_0(t)),
\end{equation*}
\begin{equation}\label{z0t}
\dot{f}(z,t)=\dot{z}_0(t)\frac{d_{-1}(t)}{(z-z_0(t))^2}\,+\,\frac{\dot{d}_{-1}(t)}{z-z_0(t)}\,+O(1),
\end{equation}
\begin{equation*}\label{z0z}
f'(z,t)=-\frac{d_{-1}(t)}{(z-z_0(t))^2}\,+O(1).
\end{equation*}
From this we see that in a neighborhood of $z_0(t)$ the function
$h(z,t)$ has the expansion
\begin{equation}\label{hzo}
h(z,t)=-\dot{z}_0(t)+o(1), \quad z\to z_0(t).
\end{equation}
In a similar way, we show that $h(z,t)$ has a removable
singularity at the point~$\overline{z}_0(t)$.
The function $$
F(z,t):=h(z,t)-\sum_{j=1}^4\gamma_j(t)\zeta(z-z_j(t))$$ has only
removable singularities at the points $z_k(t)$, $1\le k\le 4$,
${z}_0(t)$, and $\overline{z}_0(t)$, and at the points equivalent to
them modulo the lattice; at all other points of the plane it is
holomorphic. Consequently, it extends holomorphically to
the whole plane $\mathbb{C}$.
From (\ref{periodh}) we obtain
\begin{equation}\label{periodg}
F(z+\omega_k(t),t)-F(z,t)=-\dot{\omega}_k(t)-\eta_k(t)\sum_{j=1}^4\gamma_j(t),
\quad k=1,2.
\end{equation}
By (\ref{periodg}), the function $F$ grows no faster than a
linear function; therefore, $F(z,t)=\alpha(t)z+\beta(t)$. So we
have
\begin{equation}\label{h0}
h(z,t)=\sum_{j=1}^4\gamma_j(t)\zeta(z-z_j(t))+\alpha(t)z+\beta(t).
\end{equation}
From (\ref{hzo}) we find
\begin{equation}\label{beta0}
\beta(t)=-\sum_{j=1}^4\gamma_j(t)\zeta(z_0(t)-z_j(t))-\alpha(t)z_0(t)-\dot{z}_0(t).
\end{equation}
From (\ref{periodg}) it follows that
\begin{equation}\label{periods}
\alpha(t)\omega_k(t)=-\dot{\omega}_k(t)-\eta_k(t)\sum_{j=1}^4\gamma_j(t),
\quad k=1,2.
\end{equation} If we put $k=1$, then, taking into account that $\omega_1(t)\equiv
1$, we obtain
\begin{equation}\label{alpha0}
\alpha(t)=-\eta_1(t)\sum_{j=1}^4\gamma_j(t).
\end{equation}
Finally, from (\ref{h0}), (\ref{beta0}), and (\ref{alpha0}) we
deduce that
\begin{equation}\label{h1}
h(z,t)=\sum_{j=1}^4\gamma_j(t)[\zeta(z-z_j(t))-\zeta(z_0(t)-z_j(t))-\eta_1(t)(z-z_0(t))]-\dot{z}_0(t).
\end{equation}
If we put $k=2$, from (\ref{periods}) we have
$$
\dot{\omega}_2(t)=-\alpha(t)\omega_2(t)-\eta_2(t)\sum_{j=1}^4\gamma_j(t)=(\omega_2(t)\eta_1(t)-\eta_2(t))\sum_{j=1}^4\gamma_j(t),
$$
and, with the help of the equality (\ref{rel}), we obtain
\begin{equation}\label{period2}
\dot{\omega}_2(t)=2\pi i\sum_{j=1}^4\gamma_j(t).
\end{equation}
Thus, we have proved the following result.
\begin{theorem}\label{loew-ell}
The family $f(z,t)$ satisfies the PDE
$$ \frac{\dot{f}(z,t)}{f'(z,t)}=h(z,t)$$ where $h(z,t)$ is defined by (\ref{h1}); here $\gamma_k(t)$ and $D_k(t)$ are specified by (\ref{gammak}) and (\ref{dk}). The period $\omega_1(t)$
is equal to $1$, and the period $\omega_2(t)$
satisfies~(\ref{period2}).
\end{theorem}
Now we will write a system of differential equations to find $z_l(t)$, $1\le
l\le 4$. For this, we will write $\dot{f}'(z_l(t),t)$ in two
different ways. On the one hand, from (\ref{fprime}) it follows
that
\begin{equation}\label{aldotprime1}
\dot{f}'(z_l(t),t)=-\dot{z}_l(t)D_l(t).
\end{equation}
On the other hand, by Theorem~\ref{loew-ell}, we have
$\dot{f}(z,t)=h(z,t)f'(z,t)$, therefore,
\begin{multline}\label{dotf}
\dot{f}(z,t)=c(t)\left[\sum_{j=1}^4\gamma_j(t)[\zeta(z-z_j(t))-\zeta(z_0(t)-z_j(t))-
\eta_1(t)(z-z_0(t))]-\dot{z}_0(t)\right]\, \\
\times e^{\gamma(t)
z}\frac{\prod_{k=1}^4\sigma(z-z_k(t))}{\sigma^2(z-z_0(t))\sigma^2(z-\overline{z_0(t)})}
\end{multline}
and
\begin{multline}\label{dotfpr}
\dot{f}'(z,t)=c(t)\Biggl\{\Biggl[\sum_{j=1}^4\gamma_j(t)\,[\zeta(z-z_j(t))-\zeta(z_0(t)-z_j(t))-\eta_1(t)(z-z_0(t))]-\dot{z}_0(t)\Biggr]\\
\times
\Bigl(\gamma(t)+\sum_{s=1}^4\zeta(z-z_s(t))-2\zeta(z-z_0(t))-2\zeta(z-\overline{z_0(t)})\Bigr)\\
-
\sum_{j=1}^4\gamma_j(t)[\mathfrak{P}(z-z_j(t))-\eta_1(t)]\Biggr\}\,
\,e^{\gamma(t)
z}\frac{\prod_{k=1}^4\sigma(z-z_k(t))}{\sigma^2(z-z_0(t))\sigma^2(z-\overline{z_0(t)})}.
\end{multline}
From (\ref{dotfpr}) we obtain, as $z\to z_l(t)$,
\begin{multline}\label{aldotprime2}
\dot{f}'(z_l(t),t)=c(t)\Biggl[
-\dot{z}_0(t)+\sum_{j=1, j\neq
l}^4\gamma_j(t)\,\bigl[\zeta(z_l(t)-z_j(t))-\zeta(z_0(t)-z_j(t))
\\
-\eta_1(t)(z_l(t)-z_0(t))\bigr] +
\gamma_l(t)\,\Bigl(\sum_{s=1,s\neq l}^4\zeta(z_l(t)-z_s(t))
+\gamma(t)-\eta_1 (z_l(t)-z_0(t))
\\ -\zeta(z_l(t)-z_0(t))-2\zeta(z_l(t)-\overline{z_0(t)})\Bigr)\Biggr]
\,\frac{e^{\gamma(t) z_l(t)}\prod_{k\neq
l,k=1}^4\sigma(z_l(t)-z_k(t))}{\sigma^2(z_l(t)-z_0(t))\sigma^2(z_l(t)-\overline{z_0(t)})}.
\end{multline}
Comparing (\ref{aldotprime1}) and (\ref{aldotprime2}), taking
into account (\ref{dk}), we see that
\begin{multline}\label{al}
\dot{z}_l=\dot{z}_0- \sum_{j=1, j\neq
l}^4\gamma_j\,\bigl[\zeta(z_l-z_j)-\zeta(z_0-z_j)-\eta_1(z_l-z_0)\bigr]
\\- \gamma_l\,\Bigl(\sum_{s=1,s\neq
l}^4\zeta(z_l-z_s) +\gamma-\eta_1
(z_l-z_0)-\zeta(z_l-z_0)-2\zeta(z_l-\overline{z}_0)\Bigr),\
1\le l \le 4.
\end{multline}
Now we will find a differential equation to determine $c(t)$.
Comparing (\ref{family}), (\ref{dotf}), and (\ref{z0t}), we have
\begin{equation}\label{d}
d_{-1}(t)=-c(t)e^{\gamma(t)
z_0(t)}\frac{\prod_{k=1}^4\sigma(z_0(t)-z_k(t))}{\sigma^2(z_0(t)-\overline{z_0(t)})}\,,
\end{equation}
\begin{multline*}
\dot{d}_{-1}(t)=-c(t)\Biggl\{\sum_{j=1}^4\gamma_j(t)\mathfrak{P}(z_0(t)-z_j(t))+\eta_1(t)+\dot{z}_0(t)\\
\times\left[\gamma(t)+\sum_{k=1}^4\zeta(z_0(t)-z_k(t))-2\zeta(z_0(t)-\overline{z}_0(t))\right]\Biggr\}\\
\times e^{\gamma(t)
z_0(t)}\frac{\prod_{k=1}^4\sigma(z_0(t)-z_k(t))}{\sigma^2(z_0(t)-\overline{z_0(t)})}\,.
\end{multline*}
Since
\begin{equation}\label{ident}
\gamma +\sum_{k=1}^4\zeta(z_0-z_k)-2\zeta(z_0-\overline{z}_0)=0,
\end{equation}
we have
\begin{equation*}\label{ddt}
\dot{d}_{-1}(t)=-c(t)\left[\sum_{j=1}^4\gamma_j(t)\mathfrak{P}(z_0(t)-z_j(t))+\eta_1(t)\right]
e^{\gamma(t)
z_0(t)}\frac{\prod_{k=1}^4\sigma(z_0(t)-z_k(t))}{\sigma^2(z_0(t)-\overline{z_0(t)})}\,.
\end{equation*}
Therefore,
\begin{equation}\label{a}
\dot{a}(t)=\sum_{j=1}^4\gamma_j(t)\mathfrak{P}(z_0(t)-z_j(t))+\eta_1(t)
\end{equation}
where $a=\log d_{-1}$.
Differentiating (\ref{ident}), we obtain
$$
i\,\frac{4\beta}{\pi}\,\frac{\partial \zeta(1/2)}{\partial
\omega_2}\,\dot{m}-\sum_{k=1}^4\mathfrak{P}(z_0-z_k)(\dot{z}_0-\dot{z}_k)+
2i\sum_{k=1}^4\frac{\partial \zeta(z_0-z_k)}{\partial
\omega_2}\,\dot{m}$$
$$+2\mathfrak{P}(z_0-\overline{z}_0)(\dot{z}_0-\dot{\overline{z}}_0)-4i\,\frac{\partial
\zeta(z_0-\overline{z}_0)}{\partial \omega_2}\,\dot{m}=0,
$$
$$
\left(4\mathfrak{P}(z_0-\overline{z}_0)-\sum_{k=1}^4\mathfrak{P}(z_0-z_k)\right)\dot{z}_0=$$
$$-\sum_{k=1}^4\mathfrak{P}(z_0-z_k)\dot{z}_k+i\left[4\,\frac{\partial
\zeta(z_0-\overline{z}_0)}{\partial
\omega_2}\,-\frac{4\beta}{\pi}\,\frac{\partial
\zeta(1/2)}{\partial \omega_2}\,- 2\sum_{k=1}^4\frac{\partial
\zeta(z_0-z_k)}{\partial \omega_2}\right]\dot{m},
$$
and, therefore,
\begin{multline}\label{yy}
\dot{y}_0=-\sum_{k=1}^4\,\mathop{\rm Im}\nolimits\frac{\mathfrak{P}(z_0-z_k)}{4\mathfrak{P}(z_0-\overline{z}_0)-\sum_{j=1}^4\mathfrak{P}(z_0-z_j)}\,\dot{x}_k
+\mathop{\rm Re}\nolimits\Biggl[\frac{4\,\displaystyle\frac{\partial
\zeta(z_0-\overline{z}_0)}{\partial
\omega_2}\,}{{4\mathfrak{P}(z_0-\overline{z}_0)-\sum_{k=1}^4\mathfrak{P}(z_0-z_k)}}\\
+\frac{-\displaystyle\frac{4\beta}{\pi}\,\displaystyle\frac{\partial
\zeta(1/2)}{\partial \omega_2}-
2\sum_{k=1}^4\displaystyle\frac{\partial \zeta(z_0-z_k)}{\partial
\omega_2}-\mathfrak{P}(z_0-z_3)+\mathfrak{P}(z_0-z_4)}{{4\mathfrak{P}(z_0-\overline{z}_0)-\sum_{k=1}^4\mathfrak{P}(z_0-z_k)}}\,\Biggr]\dot{m}.
\end{multline}
\begin{theorem}\label{system}
The accessory parameters satisfy the system of ODEs: (\ref{al}),
(\ref{a}), and (\ref{yy}) where $a=\log d_{-1}$ and $d_{-1}$ is
defined by (\ref{d}).
\end{theorem}
\begin{corollary}\label{mod}
The conformal module of the domain satisfies the equation
\begin{equation*}
\dot{m}(t)=\pi \sum_{j=1}^4\gamma_j(t),
\end{equation*}
where $\gamma_k(t):={\dot{A}_k(t)}/{D_k(t)}$ and $D_k(t)=f''(z_k(t),t)$.
\end{corollary}
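Indeed, the corollary follows at once from (\ref{period2}) on substituting $\omega_2(t)=2m(t)i$:
\begin{equation*}
\dot{\omega}_2(t)=2i\,\dot{m}(t)=2\pi i\sum_{j=1}^4\gamma_j(t),
\quad\text{whence}\quad
\dot{m}(t)=\pi\sum_{j=1}^4\gamma_j(t).
\end{equation*}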
\section{Symmetric case. Numeric results}\label{numeric}
Now we will describe an approximate method of finding the
accessory parameters in (\ref{f}). It is based on
Theorem~\ref{system}. If we consider a smooth one-parametric
family $f(z,t)$, $0\le t\le 1$, of conformal mappings of the form
(\ref{family}), then, knowing the values of the parameters for
$t=0$, we can solve the Cauchy problem with this initial data and
obtain the values of the accessory parameters for all $t$. We note
that it is natural to use a uniform motion of the points
$A_k=A_k(t)$; therefore, in our calculations we take
$\dot{A}_k=\text{const}$. Moreover, with an appropriate choice of
initial data, only two of the $A_k$ change, say $A_1$ and
$A_2$; thus, we can put $\dot{A}_3=\dot{A}_4\equiv0$.
Therefore, to solve the Cauchy problem for the obtained system, we
need to know the initial data, i.e. the values of the accessory
parameters for some $t$. For this, it is convenient to use the
data for the symmetric case when the segments $A_1A_2$ and $A_3A_4$
are symmetric with respect to the real axis and the straight
lines containing these segments pass through the origin. (This
can be achieved by a rotation and a shift.)
Now we describe the conformal mapping for the symmetric case.
By the Riemann-Schwarz symmetry principle, we can consider
the conformal mapping of a strip onto the upper half of the
symmetric domain $G=G(A_1,A_2,A_3,A_4)$ and then extend it to a
conformal mapping of the strip of twice the original width onto
the whole domain~$G$.
a) If $0<\beta<\pi$, then the conformal mapping has the form (see
\cite[Part~B, Section~8.2, Example~1, p.~354]{kop_sht}):
\begin{equation*}\label{fsym_theta}
f(z)=\widetilde{c}\,\frac{\vartheta_1(z-\alpha)}{\vartheta_1(z+\alpha)}\,,
\quad \alpha=\frac{\beta}{4\pi}\,,
\end{equation*}
where $\vartheta_1(z)$ is the Jacobi theta-function defined by
(\ref{theta1}) and $\widetilde{c}>0$ is a constant. From
(\ref{connect}), taking into account that $\omega_1=1$, we easily
deduce that
\begin{equation}\label{fsym}
f(z)=ce^{2\alpha\eta_1z}\,\frac{\sigma(z-\alpha)}{\sigma(z+\alpha)}\,,\quad
c>0.
\end{equation}
We should note that, in contrast to (\ref{f}), here $\sigma(z)$,
defined by (\ref{sw}), corresponds to the periods $1$ and $im$, not to
$1$ and $2mi$.
The function $f$, defined by~(\ref{fsym}), maps the rectangle
$R:=\{-1/2<\mathop{\rm Re}\nolimits z<1/2,\ 0<\mathop{\rm Im}\nolimits z<m/2\}$ with identified vertical
sides onto the upper half of $G(A_1,A_2,A_3,A_4)$ and keeps the
real axis; it can be extended, by symmetry, to the rectangle
$\widetilde{R}:=\{-1/2<\mathop{\rm Re}\nolimits z<1/2,\ -m/2<\mathop{\rm Im}\nolimits z<m/2\}$, and the
extended function maps $\widetilde{R}$ onto the whole domain
$G(A_1,A_2,A_3,A_4)$. The function $f$ has four critical points
$\pm z_k$, $1\le k \le 4$, and $z_1=x_1+im/2$, $z_2=x_2+im/2$,
$z_3=x_1-im/2$, $z_4=x_2-im/2$. In addition, $f$ has a pole at the
point $z=-\alpha$ and a zero at $z=\alpha$ (Fig.~\ref{symcase}).
\begin{figure}
\caption{Conformal mapping of the rectangle
$\widetilde{R}$.}
\label{symcase}
\end{figure}
The critical points $z_k$, $1\le k\le 4$, can be found from the
equality $f'(z)=0$, i.e.
$$
\zeta(\alpha-z)+\zeta(\alpha+z)=2\eta_1\alpha.
$$
By the identity (see \cite{akhiezer}, Ch.~III, \S~15)
\begin{equation*}
\zeta(u+v)+\zeta(u-v)-2\zeta(u)=\frac{\mathfrak{P}'(u)}{\mathfrak{P}(u)-\mathfrak{P}(v)}\,,
\end{equation*}
we have \begin{equation}\label{crit}
\mathfrak{P}(z)=\mathfrak{P}(\alpha)-\frac{\mathfrak{P}'(\alpha)}{2(\alpha\eta_1-\zeta(\alpha))}\,,
\end{equation}
therefore, $z_k$ can be found via the inverse function
$\mathfrak{P}^{-1}$. Because of the evenness of the
$\mathfrak{P}$-function, we see that $z_3=-z_2$ and $z_4=-z_1$.
Without loss of generality we can assume that the nearest endpoints
of the slits are located at distance $1$ from the origin. Then
the farthest endpoints are at distance
$l:=if(z_3)/f(z_2)=if(-z_2)/f(z_2)$. Therefore, making use of
(\ref{fsym}) and the oddness of the $\sigma$-function, we have
\begin{equation}\label{leng}
l=ie^{-4\alpha\eta_1z_2}\,\frac{\sigma^2(z_2+\alpha)}{\sigma^2(z_2-\alpha)}\,.
\end{equation}
Let $z_2$ be a root of (\ref{crit}); we note that it depends on
$m$. Then we solve (\ref{leng}) with respect to $m$, to obtain the
initial value of the module. After that, we easily find the
initial values of $z_k$, $1\le k\le 4$. To use them in the
non-symmetric case, we need to shift the obtained values of $z_k$
by the vector $\alpha-i m/2$.
To find $c$ we use the equalities
$$
f(z_3)=f(-z_2)=ce^{-2\alpha\eta_1z_2}\,\frac{\sigma(z_2+\alpha)}{\sigma(z_2-\alpha)}\,,\quad
f(z_2)=ce^{2\alpha\eta_1z_2}\,\frac{\sigma(z_2-\alpha)}{\sigma(z_2+\alpha)}\,.
$$
Multiplying them, we have $c^2=f(z_3)f(z_2)=|f(z_3)f(z_2)|$,
therefore,
$$
c=\sqrt{|f(z_3)f(z_2)|}=\sqrt{|f(z_3)f(z_4)|}.
$$
The residue of $f(z)$, defined by (\ref{fsym}), at the point $z=-\alpha$ is equal to
$$
d_{-1}^{\,\,0}=-ce^{-2\alpha^2\eta_1}{\sigma(2\alpha)},
$$
therefore, the initial value of $a$ is
$$
a^0=\log d_{-1}^{\,\,0}=(1/2)\log|f(z_3)f(z_4)| - 2\alpha^2\eta_1+
\log\sigma (2\alpha)+\pi i.
$$
We note that $|f(z_3)|$ and $|f(z_4)|$ are the distances $l_3$ and
$l_4$ from $A_3$ and $A_4$ to $A_5$; here $A_5$ is the point of
intersection of the straight lines containing the slits. Finally,
we have
$$
a^0=\log d_{-1}^{\,\,0}=(1/2)\log(l_3l_4)-2\alpha^2\eta_1+
\log\sigma (2\alpha)+\pi i.
$$
\vskip 0.3 cm
b) Consider the case $\beta=0$ when the slits lie on
distinct parallel lines. Without loss of generality we can
assume that the slits lie on straight lines parallel to the real
axis. Then the conformal mapping has the form (see \cite[Part~B,
Section~8.1, Example~1, p.~339]{kop_sht}):
\begin{equation*}\label{fsym1}
f(z)=-\frac{b}{\pi}\,\left(\zeta(z)-\eta_1 z\right).
\end{equation*}
As in case a), $\zeta(z)$, defined by (\ref{zw}), has the
periods $1$ and $im$, not $1$ and $2mi$. The parameter $b$ equals
half the vertical distance between the slits. The critical
points $z_k$ of the map can be found from the equation $f'(z)=0$;
it is equivalent to the equality $\mathfrak{P}(z)=-\eta_1$. Using
the evenness of the $\mathfrak{P}$-function, we see that
$x_1=-x_2$, $z_3=-z_2$ and $z_4=-z_1$. Finding $z_2$ and using the
oddness of $f(z)$, we obtain
$$-\frac{b}{\pi}\,\left(\zeta(z_2)-\eta_1 z_2\right)=l/2$$
where $l$ is the length of each slit. From the last equality we
find the initial values of $m$ and $z_k$. As in case a), to use
the obtained values, we need to shift them; taking into account
that here $\alpha=0$, we see that the shift vector is $-im/2$. The
residue of $f(z)$ at $z=0$ equals $-b/\pi$, thus, the initial
value of $a$ is $\log(b/\pi)+\pi i$ or
$$
a^0=\log(\mathop{\rm Im}\nolimits(A_1-A_3)/(2\pi))+\pi i.
$$
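As a numerical illustration (a sketch, not part of the algorithm itself), the initial value $a^0$ in case b) can be obtained from the principal branch of the complex logarithm; the sample data $A_1=i$, $A_3=-2-i$ are those used in Step~1$'$ below:

```python
import cmath
import math

# Sample endpoints of the slits, as in Step 1': A1 = i, A3 = -2 - i
A1, A3 = 1j, -2 - 1j

# The residue of f at z = 0 equals -b/pi, where b = Im(A1 - A3)/2
residue = -(A1 - A3).imag / (2 * math.pi)

# a^0 = log d_{-1}^0; the principal branch gives
# log(Im(A1 - A3)/(2*pi)) + pi*i
a0 = cmath.log(residue)
print(a0)
```

Here the principal branch of `cmath.log` automatically supplies the summand $\pi i$ coming from the negative sign of the residue.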
Now we give the Mathematica code, with comments, to calculate
the values of the parameters, the module, and the capacity of ring
domains that are exteriors of two rectilinear slits. For convenience, we divide it into 5 steps.
If $0<\beta<\pi$, we first find the point $A_5$ which is the
intersection of the straight lines containing the slits. We will
assume that $A_3A_4$ does not contain $A_5$; in the opposite case
we renumber the points and use the reflection with respect to the
real axis which, in fact, does not change the desired
parameters. We also assume that $A_4$ is farther from $A_5$ than
$A_3$. If $A_1A_2$ also does not contain $A_5$, then we number the
points so that $A_2$ is farther from $A_5$ than~$A_1$. If $A_1A_2$
contains $A_5$, then we consider that $\arg((A_2-A_5)/(A_4-A_5))=\beta$.
In this case, either $A_1=A_5$ or $\arg((A_1-A_5)/(A_4-A_5))=\beta\pm
\pi$. The dependence on $t$ describes the uniform motion of the points
$A_1(t)$ and $A_2(t)$ along the corresponding segments, while $A_3(t)$
and $A_4(t)$ remain constant. Therefore, the $\dot{A}_k(t)$ are
constant; moreover, $\dot{A}_3(t)=\dot{A}_4(t)=0$.
\noindent \textbf{Step~1.} Input of location of the points $A_k$,
$1\le k\le 4$. (Here we take $A_1=-2i$, $A_2=3i$, $A_3=1$,
$A_4=3$.) Finding $A_5$, $\beta$, $\dot{A}_1$, and $\dot{A}_2$.
\small
\begin{verbatim}
A1=-2.*I; A2=3.*I; A3=1.; A4=3.;
A5=A2+(A1-A2)*Im[(A4-A2)Conjugate[(A3-A4)]]/
Im[(A1-A2)Conjugate[(A3-A4)]];
l1=Sign[Re[(A1-A5)Exp[-I*beta/2]]]*Abs[A1-A5]; l2=Abs[A2-A5];
l3=Abs[A3-A5]; l4=Abs[A4-A5]; beta=Arg[(A2-A1)/(A4-A3)];
alpha=beta/(4*Pi); Adot1=(l1-l3)Exp[I*beta/2];
Adot2=(l2-l4)Exp[I*beta/2];
\end{verbatim}
\normalsize \noindent \textbf{Step~2.} Defining the Weierstrass
elliptic functions ($\mathfrak{P}(z)$, $\mathfrak{P}'(z)$,
$\zeta(z)$, $\sigma(z)$, $\partial \zeta(z)/\partial\omega_2$)
with periods $\omega_1=1$ and $\omega_2=2mi$ as functions of the
complex variable $z$ and of $m$. Defining the functions $\gamma_k(t)$,
$k=1$, $2$. \small
\begin{verbatim}
wp1[z_,w1_,w2_]:=WeierstrassP[z,WeierstrassInvariants[{w1/2,w2/2}]]; wpp1[z_, w1_,
w2_]:=WeierstrassPPrime[z,WeierstrassInvariants[{w1/2,w2/2}]];
wz1[z_,w1_,w2_]:=WeierstrassZeta[z,WeierstrassInvariants[{w1/2,w2/2}]];
ws1[z_,w1_,w2_]:=WeierstrassSigma[z,WeierstrassInvariants[{w1/2,
w2/2}]]; wi1[x_,y_]:=WeierstrassInvariants[{x,y}]; g2[w1_,w2_]:=
-4(wp1[w1/2,w1,w2]*wp1[w2/2,w1,w2]+wp1[w1/2,w1,w2]*wp1[(w1+w2)/2,w1,w2]+
wp1[w2/2,w1,w2]*wp1[(w1+w2)/2,w1,w2]);
wz1primeperiod[z_,w1_,w2_]:=-1/(2*Pi*I)((1/2)wpp1[z,w1,w2]+(wz1[z,w1,w2]-
z*2*wz1[1/2,w1,w2])wp1[z,w1,w2]+2*wz1[w1/2,w1,w2]*wz1[z,w1,w2]-g2[w1,w2]*z/12);
wp[z_,t_]:=wp1[z,1,2*I*t]; ws[z_,t_]:=ws1[z,1,2*I*t]; wz[z_,t_]:=wz1[z,1,2*I*t];
wzw2[z_,t_]:=wz1primeperiod[z,1,2*I*t]; gamma[t_]:=2*beta/Pi*wz[0.5,m[t]];
gamma1[t_]:=-ws[z0[t]-z1[t],m[t]]*ws[z0[t]-z2[t],m[t]]*ws[z0[t]-z3[t],m[t]]*
ws[z0[t]-z4[t],m[t]]/(ws[2*z0[t],m[t]])^2*Exp[-(a[t]+gamma[t]*(z1[t]-z0[t]))]*
(ws[z1[t]-z0[t],m[t]])^2*(ws[z1[t]+z0[t],m[t]])^2/(ws[z1[t]-z2[t],m[t]]*
ws[z1[t]-z3[t],m[t]]*ws[z1[t]-z4[t],m[t]]);
gamma2[t_]:=-ws[z0[t]-z1[t],m[t]]*ws[z0[t]-z2[t],m[t]]*ws[z0[t]-z3[t],m[t]]*
ws[z0[t]-z4[t],m[t]]/(ws[2*z0[t],m[t]])^2*Exp[-(a[t]+gamma[t]*(z2[t]-z0[t]))]*
(ws[z2[t] - z0[t],m[t]])^2*(ws[z2[t]+z0[t],m[t]])^2/(ws[z2[t]-z1[t],m[t]]*
ws[z2[t]-z3[t],m[t]]*ws[z2[t]-z4[t],m[t]]);
\end{verbatim}
\normalsize \noindent \textbf{Step~3.} Finding the initial values of the
module, critical points, pole, and constant $a$. \small
\begin{verbatim}
f1[t_]=wp1[alpha,1,I*t]-wpp1[alpha,1,I*t]/(2(alpha*2*wz1[0.5,1,I*t]-
wz1[alpha,1,I*t]));
Z1[t_]=InverseWeierstrassP[f1[t],WeierstrassInvariants[{0.5,0.5*I*t}]];
L[t_]=Abs[Exp[-4*alpha*2*wz1[0.5,1,I*t]*Z1[t]]*(ws1[Z1[t]+alpha,1,I*t]/
ws1[Z1[t]-alpha,1,I*t])^2]-l4/l3; ar=0.1; bl=3.0; Do[m0=(ar+ bl)/2.;
fc=L[m0]; If[L[bl]*fc>0,bl=m0,ar=m0],{i,70}]; X0=Re[Z1[m0]]; x10=beta/(4*Pi)+X0;
x20=beta/(4*Pi)-X0; x30=beta/(4*Pi)+X0; x40=beta/(4*Pi)-X0;
a20=Pi; y00=m0/2; a10=(1/2)*Log[l3*l4]-(beta/(2*Pi))^2*Re[wz1[0.5,1,I*m0]]+
Log[Abs[ws1[beta/(2*Pi),1,I*m0]]];
\end{verbatim}
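Step~3 pins down the initial module $m^0$ by 70 bisection steps on the bracket $[0.1,\,3.0]$. The loop can be sketched in Python as follows (with a toy function standing in for the function \texttt{L[m]}, whose root the Mathematica code seeks):

```python
import math

def bisect(f, lo, hi, iters=70):
    # Mirror of the Do-loop in Step 3: at each step keep the half of
    # [lo, hi] on which f changes sign.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) * f(hi) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Toy stand-in for L[m]: cos changes sign exactly once on [0.1, 3.0],
# at pi/2, so the loop converges there.
root = bisect(math.cos, 0.1, 3.0)
print(root)
```

After 70 halvings the bracket has length $2.9/2^{70}$, so the root is determined to full double precision.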
\normalsize \noindent \textbf{Step~4.} Solving system of ODEs.
\small
\begin{verbatim}
sol = NDSolve[ {-z1'[t]==Re[Adot1*gamma1[t]*(wz[z1[t]-
z2[t],m[t]]+wz[z1[t]-z3[t],m[t]]+
wz[z1[t]-z4[t],m[t]]+gamma[t]-2*wz[0.5,m[t]]*(z1[t]-z0[t])-wz[z1[t]-z0[t],
m[t]]-2*wz[z1[t]+z0[t],m[t]])+Adot2*gamma2[t](wz[z1[t]-z2[t],m[t]]-
wz[z0[t]-z2[t],m[t]]-2*wz[0.5,m[t]]*(z1[t]-z0[t]))],
-z2'[t]==Re[Adot2*gamma2[t]*(wz[z2[t]-z1[t],m[t]]+wz[z2[t]-z3[t],m[t]]+
wz[z2[t]-z4[t],m[t]]+gamma[t]-2*wz[0.5,m[t]]*(z2[t]-z0[t])-wz[z2[t]-z0[t],
m[t]]-2*wz[z2[t]+z0[t],m[t]])+Adot1*gamma1[t] (wz[z2[t]-z1[t],m[t]]-
wz[z0[t]-z1[t],m[t]]-2*wz[0.5,m[t]]*(z2[t]-z0[t]))],
-z3'[t]==-I*m'[t]+Re[Adot1*gamma1[t](wz[z3[t]-z1[t],m[t]]-wz[z0[t]-z1[t],m[t]]-
2*wz[0.5,m[t]]*(z3[t]-z0[t]))+Adot2*gamma2[t](wz[z3[t]-z2[t],m[t]]-
wz[z0[t]-z2[t],m[t]]-2*wz[0.5,m[t]]*(z3[t]-z0[t]))],
-z4'[t]==I*m'[t]+Re[Adot1*gamma1[t](wz[z4[t]-z1[t],m[t]]-wz[z0[t]-z1[t],m[t]]-
2*wz[0.5,m[t]]*(z4[t]-z0[t]))+Adot2*gamma2[t](wz[z4[t]-z2[t],m[t]]-
wz[z0[t]-z2[t],m[t]]-2*wz[0.5,m[t]]*(z4[t]-z0[t]))],
m'[t]==Re[Pi*(Adot1*gamma1[t]+Adot2*gamma2[t])],
a'[t]==Adot1*gamma1[t]*wp[z0[t]-z1[t],m[t]]+Adot2*gamma2[t]*wp[z0[t]-z2[t],m[t]]
+2*wz[0.5,m[t]]*(Adot1*gamma1[t]+Adot2*gamma2[t]),
z0'[t]==I*Im[(4*wp[2*z0[t],m[t]]-wp[z0[t]-z1[t],m[t]]-wp[z0[t]-z2[t],m[t]]-
wp[z0[t]-z3[t],m[t]]-wp[z0[t] - z4[t],m[t]])^(-1)(-wp[z0[t]-z1[t],m[t]]z1'[t]-
wp[z0[t]-z2[t],m[t]]z2'[t]-wp[z0[t]-z3[t],m[t]]z3'[t]-wp[z0[t]-z4[t],m[t]]
z4'[t])]+I*Re[(4*wp[2*z0[t],m[t]]-wp[z0[t]-z1[t],m[t]]-wp[z0[t]-z2[t],m[t]]-
wp[z0[t]-z3[t],m[t]]-wp[z0[t]-z4[t],m[t]])^(-1)(4*wzw2[2*z0[t],m[t]]
-(4*beta/Pi)wzw2[0.5,m[t]]-2(wzw2[z0[t]-z1[t],m[t]]+wzw2[z0[t]-z2[t],m[t]]+
wzw2[z0[t]-z3[t],m[t]]+wzw2[z0[t]-z4[t],m[t]]))]*Pi*Re[(Adot1*gamma1[t]+
Adot2*gamma2[t])],
z1[0]==x10,z2[0]==x20,z3[0]==x30+I*m0,z4[0]==x40-I*m0,a[0]==a10+I*a20,
m[0]==m0,z0[0]==-I*y00},{z1,z2,z3,z4,m,a,z0},{t,0,1.}];
\end{verbatim}
\normalsize \noindent \textbf{Step~5.} Output of desired values of
capacity, module, critical points, pole, and constant~$a$. \small
\begin{verbatim}
s=1.; {1/m[s], m[s], z1[s], z2[s], z3[s], z4[s], a[s], z0[s]} /.sol
\end{verbatim}
\normalsize
In the case of slits parallel to the real axis, we have the same
system of ODEs. We recall that we can assume that $\mathop{\rm Re}\nolimits A_1<\mathop{\rm Re}\nolimits
A_2$, $\mathop{\rm Re}\nolimits A_3<\mathop{\rm Re}\nolimits A_4$, and $\mathop{\rm Im}\nolimits A_1=\mathop{\rm Im}\nolimits A_2>\mathop{\rm Im}\nolimits A_3=\mathop{\rm Im}\nolimits A_4$.
Then we find the values of $\dot{A}_1$ and $\dot{A}_2$ by the
formulas $\dot{A}_1=\mathop{\rm Re}\nolimits(A_1-A_3)$, $\dot{A}_2=\mathop{\rm Re}\nolimits(A_2-A_4)$. The
initial values are also found by different formulas. Thus, Steps 1
and 3 must be changed to the following ones.
\noindent \textbf{Step~1'.} Input of location of the points $A_k$,
$1\le k\le 4$. (Here we take $A_1=i$, $A_2=2+i$, $A_3=-2-i$,
$A_4=-1-i$.) Finding $\beta$, $\dot{A}_1$, and
$\dot{A}_2$. \small
\begin{verbatim}
A1=1.*I; A2=2.+1.*I; A3=-2.-1.*I; A4=-1.-1.*I; Adot1=Re[A1-A3];
Adot2=Re[A2-A4]; beta=0.;
\end{verbatim}
\normalsize \noindent \textbf{Step~3'.} Finding the initial values of the
module, critical points, pole, and constant $a$. \small
\begin{verbatim}
g[m_]:=-2*WeierstrassZeta[0.5,WeierstrassInvariants[{0.5,0.5*m*I}]];
h[m_]:=Re[InverseWeierstrassP[g[m],WeierstrassInvariants[{0.5,0.5*m*I}]]];
f[m_]:=Re[(2/Pi)(WeierstrassZeta[h[m]+0.5*m*I,WeierstrassInvariants[{0.5,0.5*m*I}
]]-2(h[m]+0.5*m*I)WeierstrassZeta[0.5,WeierstrassInvariants[{0.5,0.5*m*I}]])]-
2*Abs[A3-A4]/Abs[Im[(A3-A1)]]; ar=0.1; bl=3.; Do[m0=(ar+bl)/2.;
fc=f[m0]; If[f[bl]*fc>0,bl=m0,ar=m0],{i,70}]; X0=Re[h[m0]]; x10=X0;
x20=-X0; x30=X0; x40=-X0; y00=m0/2; a10=Log[Im[A1-A3]/(2*Pi)]; a20=Pi;
\end{verbatim}
\normalsize
\textbf{Example~1.} Consider the case when the endpoints of one
of the segments are the points $a-0.5$, $a+0.5$ on the real axis and the
endpoints of the other one are the points $-i$, $-2i$ of the
imaginary axis. Then $\beta=\pi/2$ and $\alpha=0.125$.
As an initial situation, we take the symmetric case when $a=1.5$. With the
help of (\ref{leng}) and (\ref{crit}) we find the initial module
and the real parts of the critical points:
$$m^0=0.67578477\ldots, \quad \widetilde{x}_2^{\,\,0} =-\widetilde{x}_1^{\,\,0} = 0.22367571\ldots$$
Since in the non-symmetric case we have $\mathop{\rm Re}\nolimits z_0=0$ in (\ref{f}),
we use the shift $z\mapsto z+\alpha$ in the $z$-plane and take, as
initial data, the values
$x^0_k=\widetilde{x}{\,}^{\,0}_k+\alpha$. Therefore,
$$x_1^0 =x_3^0 = 0.34867571\ldots, \quad x_2^0=x_4^0=-0.09867571\ldots$$
Because of symmetry, we have $y^0_0=m^0/2$. Consequently,
$$
z^0_{1}=0.34867571\ldots,\phantom{+im^0} \quad
z^0_{2}=-0.09867571\ldots,\phantom{+im^0}
$$
$$
z^0_{3}=0.34867571\ldots+im^0,\quad
z^0_{4}=-0.09867571\ldots-im^0.
$$
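These initial data can be checked against the constraint $\sum_{k=1}^4x_k=\beta/\pi$ of Theorem~\ref{map_fix}; a minimal sketch with the numerical values above:

```python
import math

beta = math.pi / 2            # Example 1: the slits are at a right angle
alpha = beta / (4 * math.pi)  # = 0.125
X0 = 0.22367571               # real part of the critical point found above

# Shifted abscissae x_k^0 = alpha +/- X0, as in Step 3
x = [alpha + X0, alpha - X0, alpha + X0, alpha - X0]

# The constraint of Theorem "map_fix": sum of x_k equals beta/pi
print(sum(x), beta / math.pi)  # both approximately 0.5
```

The $\pm X_0$ contributions cancel, so the sum equals $4\alpha=\beta/\pi$ regardless of the value of $X_0$.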
Finally, $a^0=\log d_{-1}^{\,0}$ where $d_{-1}^{\,0}$ is the
residue of (\ref{f}) at the point $z_0$.
\begin{figure}
\caption{The graph of the dependence of the module $m$
on the parameter $a$ (Example~1).}
\label{graph0}
\end{figure}
Finding the residue in the symmetric case, we obtain
$$
a^0=[\ln
\sqrt{2}-4\alpha^2\zeta(0.5;1,im^0)+\ln{\sigma(2\alpha;1,im^0)}]+\pi
i
$$
$$
=-1.11526111\ldots + i\,3.14159265\ldots$$
Here the functions $\zeta(z)=\zeta(z;1,im^0)$ and
$\sigma(z)=\sigma(z;1,im^0)$ correspond to the periods $1$ and
$im^0$. Solving the system of differential equations, we find the
dependence of the parameters in (\ref{f}) on the parameter $a$
(see Fig.~\ref{graph0}).
The values of the moduli for some $a$ are given in
Table~\ref{tab1}.
\begin{table}[ht]
\caption{The values of moduli and capacities for some $a$
(Example~1).} \centering
\begin{tabular}
{|c|c|c|c|c|c|c|c|c|
}
\hline
$a$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7
\\
\hline
$m$ & 0.56247 & 0.62207 & 0.72955 & 0.82469 & 0.90239 & 0.96656 & 1.02073 & 1.06743
\\
\hline
cap & 1.77787& 1.60753& 1.37070 & 1.21258 & 1.10817 & 1.03459 & 0.97968 &0.93682
\\
\hline
\end{tabular}\label{tab1}
\end{table}
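The capacities in Table~\ref{tab1} are the reciprocals of the corresponding moduli (this is what Step~5 outputs as \texttt{1/m[s]}); the tabulated pairs can be checked directly:

```python
# Moduli and capacities from Table 1, rounded to 5 decimal places
m   = [0.56247, 0.62207, 0.72955, 0.82469, 0.90239, 0.96656, 1.02073, 1.06743]
cap = [1.77787, 1.60753, 1.37070, 1.21258, 1.10817, 1.03459, 0.97968, 0.93682]

# cap = 1/m, up to the rounding of the table
for mi, ci in zip(m, cap):
    assert abs(1.0 / mi - ci) < 5e-5
print("consistent")
```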
\textbf{Example~2.} We also computed the moduli $\text{mod}\,G$
and the corresponding capacities $\text{cap}\,G$ for some domains
$G(A_1,A_2,A_3,A_4)$ with $A_k$ from the integer lattice in
the complex plane. Comparison of our results with those obtained by
other methods shows very good agreement, up to $10^{-6}$. In
Table~\ref{tab2} we give some values of capacities obtained by our
method and by a MATLAB algorithm written by Prof.\ M.~Nasser
\cite{nv}; the values are given with $8$ digits after the decimal
point.
\begin{table}[ht]
\caption{The values of capacities for some domains
$G(A_1,A_2,A_3,A_4)$ (Example~2).}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
&&&&&\multicolumn{2}{|c|}{$\text{cap}\,G$}\\
\cline{6-7} &\raisebox{1.5ex}[0cm][0cm]{$A_1$}&\raisebox{1.5ex}[0cm][0cm]{$A_2$}&\raisebox{1.5ex}[0cm][0cm]{$A_3$}&\raisebox{1.5ex}[0cm][0cm]{$A_4$}& our results & Nasser's\\
\hline
$1$ & $i$ & $2+i$ & $-2-i$ & $-1-i$ &1.44058466& 1.44058486\\
\hline
$2$ & $i$ & $2+i$ & $-2-2i$ & $-1-2i$ &1.30971558& 1.30971579\\
\hline
$3$ & $i$ & $2+i$ & $3-2i$ & $4-3i$ &1.35832035 & 1.35832051\\
\hline
$4$ & $i$ & $2+2i$ & $-2-i$ & $-1-i$ &1.42710109 & 1.42710150\\
\hline
$5$ & $i$ & $2+2i$ & $-2-2i$ & $-1-2i$ &1.29776864 & 1.29776889\\
\hline
$6$ & $i$ & $2+2i$ & $3-2i$ & $4-3i$ &1.32814214 & 1.32814249\\
\hline
$7$ & $i$ & $3+2i$ & $-2-i$ & $-1-i$ &1.49363842 & 1.49363897\\
\hline
$8$ & $i$ & $3+2i$ & $-2-2i$ & $-1-2i$&1.36333122& 1.36333156\\
\hline
$9$ & $i$ & $3+2i$ & $3-2i$ & $4-3i$ &1.45844055 & 1.45844094\\
\hline
$10$ & $i$ & $3i$ & $3$ & $4$ &1.29126199 & 1.29126229\\
\hline
$11$ & $i$ & $3i$ & $0$ & $2$ &2.18251913& 2.18251948\\
\hline
$12$ & $i$ & $3i$ & $-3$ & $2$ &2.82846257 & 2.82846345\\
\hline
$13$ & $i$ & $3+i$ & $-i$ & $3-i$ &2.69941565 & 2.69941690\\
\hline
$14$ & $i$ & $3+2i$ & $-i$ & $3-2i$ &2.23470313& 2.23470399\\
\hline
$15$ & $i$ & $3+3i$ & $-i$ & $3-3i$ &2.11547784& 2.11547801\\
\hline
\end{tabular}\label{tab2}
\end{table}
\section{Monotonicity of the conformal module}\label{extr}
Now we investigate the behavior of the conformal module of
$G=G(A_1,A_2,A_3,A_4)$ in the case when the segment $A_3A_4$ is
fixed and the segment $A_1A_2$, of fixed length, slides along a
straight line. This case is equivalent to the situation when the
segment $A_1A_2$ is fixed and $A_3A_4$, of fixed length, shifts by
vectors with a fixed direction.
We note that some similar problems for quadrangles were
investigated by Dubinin and Vuorinen~\cite{dub_vuor}.
Without loss of generality we may assume that $A_1A_2$ lies on the
real axis and $A_1$ is the left endpoint of the segment. Moreover,
we can consider the family with $A_1=t$, $A_2=t+l$, where $l$ is
the length of $A_1A_2$. Then $\dot{A}_1(t)=\dot{A}_2(t)=1$. It is
clear that $\dot{A}_3(t)=\dot{A}_4(t)=0$. From Corollary~\ref{mod}
we obtain that $\dot{m}(t)=\pi(\gamma_1(t)+\gamma_2(t))$ where
$\gamma_k(t)=1/f''(x_k,t)$, $k=1$,~$2$. It is easy to see that
$f''(x_1,t)>0$ and $f''(x_2,t)<0$. Therefore,
$$\dot{m}(t)=\pi\left(|\gamma_1(t)|-|\gamma_2(t)|\right)=\pi\left(\frac{1}{|f''(x_1,t)|}-\frac{1}{|f''(x_2,t)|}\right).$$
If $|f''(x_1,t)|>|f''(x_2,t)|$, then, when moving a segment
$A_1A_2$ to the right, the conformal module of
$G=G(A_1,A_2,A_3,A_4)$ decreases, otherwise, it increases. At
critical points of the module we have $|f''(x_1,t)|=|f''(x_2,t)|$.
Now we compare $|f''(x,t)|$ at the points $x_1$ and $x_2$ using
methods of the symmetrization theory. We will temporarily assume
that $A_1A_2$ is symmetric with respect to the imaginary axis,
i.e. $A_2$ lies on the positive part of the real axis
symmetrically to $A_1$.
In the following lemma we investigate a more general case when the
considered doubly connected domain $G$ is the exterior of the
segment $A_1A_2$ and some continuum $Q$; if $Q=A_3A_4$, we obtain
our case.
\begin{lemma}\label{sym}
Let the continuum $Q$ lie in the right half-plane $\mathop{\rm Re}\nolimits w>0$ and
let $\psi: \{q<|\zeta|<1\}\to G$ be a conformal mapping. If $\zeta_1$
and $\zeta_2$ are the points of the unit circle corresponding to
the endpoints $A_1$ and $A_2$ of the segment, then
$|\psi''(\zeta_1)|>|\psi''(\zeta_2)|$.
\end{lemma}
Proof. Without loss of generality we can assume that $A_1A_2$
coincides with the segment $[-1,1]$ (Fig.~\ref{graph3}, $a)$).
Consider the function $\varphi$ inverse to the Joukowskii
function; it maps $G$ onto the exterior of the unit disk with
excluded set $Q_1:=\varphi(Q)$. Using the Riemann-Schwarz symmetry
principle, we conclude that $\varphi(Q)$ lies in the right
half-plane. Now applying the symmetry principle once more, we can
extend $\varphi\circ \psi$ to the annulus $A:=\{q<|\zeta|<1/q\}$.
The function $\varphi\circ \psi$ maps it onto the doubly connected
domain $G_1:=\overline{\mathbb{C}}\setminus (Q_1\cup Q_2)$; here
$Q_2$ is symmetric to $Q_1$ with respect to the unit circle,
therefore, it also lies in the right half-plane.
\begin{figure}
\caption{The domain $G$: $a)$ in Lemma~\ref{sym}; $b)$ in Corollary~\ref{corseg}.}
\label{graph3}
\end{figure}
Now consider the reduced moduli of $A$ at the points $\zeta_1$ and
$\zeta_2$; it is obvious that they are equal, i.e.
$r(A,\zeta_1)=r(A,\zeta_2)$. On the other hand,
$$r(G_1,-1)=r(A,\zeta_1)+\frac{1}{4\pi}\,\log |\psi''(\zeta_1)|,\quad
r(G_1,1)=r(A,\zeta_2)+\frac{1}{4\pi}\,\log |\psi''(\zeta_2)|.$$
Therefore, we only need to show that $r(G_1,-1)>r(G_1,1)$. But this
follows from \cite{dub}, Theorem~1.2, because the configuration
$(G_1,-1)$ is obtained from $(G_1,1)$ by polarization. \vskip 0.5cm
\begin{corollary}\label{corseg}
Let $A_3A_4$ be a fixed segment in the right half-plane,
intersecting the real axis at the point $\widetilde{x}$, and let one
of its endpoints lie on the imaginary axis. Let $A_1A_2$ be the
segment $[a-l/2,a+l/2]$ on the real axis with a fixed length $l$
(Fig.~\ref{graph3}, $b)$). If $\widetilde{x}\le l/2$, then, as $a$
increases from $-\infty$ to $\widetilde{x}-l/2$, the conformal
module of $G(A_1,A_2,A_3,A_4)$ decreases from $+\infty$ to $0$. If
$\widetilde{x}> l/2$, then the conformal module decreases from
$+\infty$ to some positive value as $a$ increases from $-\infty$
to $0$.
\end{corollary}
If $A_3A_4$ does not intersect the real axis then the conformal
module decreases for $a$ close to $-\infty$ and increases for $a$
close to $+\infty$. This raises a natural question: does the module
always have a unique minimum, or are there situations when it has
more than one (local) minimum?
It is also interesting to investigate the problem for the case
when the slits are parallel to each other. Then, using the result
that the conformal module decreases after symmetrization with
respect to a straight line, we conclude that the minimum of the
conformal module is attained for the case of slits symmetric with
respect to the orthogonal line.
The same is valid when the slits are perpendicular to each other,
one of the slits is fixed and does not intersect the straight line
containing the second one. Then the minimal module is attained
for the case when the second slit is symmetric with respect to the
line containing the first slit.
\end{document} |
\begin{document}
\title{Constructing quantum circuits for maximally entangled multi-qubit states using the genetic algorithm}
\author{Zheyong Fan$^{1}$\footnote{Email: [email protected]},
Hugo de Garis$^{2}$,
Ben Goertzel$^{2,3}$,
Zhongzhou Ren$^{1}$, and
Huabi Zeng$^{1}$}
\address{$^1$ Department of Physics, Nanjing University, Nanjing, 210093, China}
\address{$^2$ Artificial Intelligence Institute, Computer Science
Department, Xiamen University, Xiamen, Fujian Province, China}
\address{$^3$ Novamente LLC}
\begin{abstract}
Numerical optimization methods such as hillclimbing and simulated annealing have been applied to search for highly entangled multi-qubit states. Here the genetic algorithm is applied to this optimization problem -- to search not only for highly entangled states, but also for the corresponding quantum circuits creating these states. Simple quantum circuits for maximally (highly) entangled states are discovered for 3, 4, 5, and 6-qubit systems; and extension of the method to systems with more qubits is discussed. Among other results we have found explicit quantum circuits for maximally entangled 5 and 6-qubit states, with only 8 and 13 quantum gates respectively. One significant advantage of our method over previous ones is that it allows very simple construction of quantum circuits based on the quantum states found.
\end{abstract}
\maketitle
\section{Introduction}
\textit{Quantum entanglement} \cite{bengtsson06, horodecki09}, enabling states with correlations that have no classical analogue, is one of the central concepts differentiating quantum information from classical information. Entanglement is essential to many quantum information protocols such as quantum key distribution, dense coding, and quantum teleportation, and is also thought to play important roles in quantum computational speedup \cite{jozsa03}.
Abstractly, a bipartite \emph{pure} state is called \emph{entangled} if it is not decomposable to a tensor product of two states of the two subsystems. Quantitatively, there are measures \cite{plenio07} which tell us how entangled a state is. Maximally or highly entangled states are of particular interest because their entanglement provides a valuable resource which can be used to perform tasks that are otherwise difficult or impossible.
Maximally entangled quantum states of small systems are well known. For 2-qubit states, the Bell state
$|\textmd{Bell}\rangle = (|00\rangle + |11\rangle)/\sqrt{2}$
is maximally entangled on all counts: maximal entanglement entropy, maximal violation of the Bell inequality, and complete mixture of its one-party reduced states. The 3-qubit generalization of the Bell state is the GHZ state
$|\textmd{GHZ}3\rangle
= (|000\rangle + |111\rangle)/\sqrt{2}$,
which is also maximally entangled.
One can easily generalize the GHZ state to general $n$-qubit states, $|\textmd{GHZ}n\rangle = (|00\cdots 0\rangle \pm |11\cdots 1\rangle)/\sqrt{2}$.
However, for 4 or more qubits, these states are not maximally entangled \cite{higuchi00}, and in fact display below-average entanglement, as shown by the numerical calculations of Borras {\it et al} \cite{borras07}.
Since the mathematical structures of multi-qubit states are complex, numerical optimization methods have been found very helpful in the search for maximally or highly entangled states. Using \emph{negativity} \cite{zyczkowski98,vidal02,kendon02} as the entanglement measure,
Brown {\it et al} \cite{brown05} performed a numerical search for highly entangled states of 2, 3, 4, and 5 qubits using the hillclimbing optimization method. Although it searches through a space including \emph{mixed} states, their method ultimately converges to \emph{pure} states. They have successfully found a \emph{simple} (in a technical sense of \emph{simple} to be given below) maximally entangled 5-qubit pure state, which has application to quantum teleportation, superdense coding, and quantum state sharing \cite{muralidharan08}.
There are many entanglement measures besides the negativity \cite{plenio07}. Borras {\it et al} \cite{borras07} presented a detailed comparison of the results obtained by different search procedures based upon different entanglement measures. They searched only pure states and discovered a simple maximally entangled 6-qubit state, the utility of which for quantum teleportation and quantum state sharing has been explored \cite{choudhury09}.
Motivated by the simplicity of these maximally entangled 5-qubit and 6-qubit states, Tapiador {\it et al} \cite{tapiador09} conducted a simulated annealing algorithm optimization procedure, searching for states not only highly entangled but also algebraically simple. They successfully found maximally entangled 5 and 6-qubit states with very simple algebraic structure. For 7 and 8-qubit systems, they also discovered highly entangled states with rather simple structure.
The simplicity of the structures of the states mentioned above means \cite{tapiador09} that the coefficients of the states with respect to the standard basis of the multi-qubit system are nice to write: only a sparse subset of the standard basis vectors have nonzero coefficients, and the coefficients are taken from a finite set of simple numbers ($\pm 1$, $\pm i$, ...). This is a natural requirement, since it is harder to derive quantum information protocols based on states lacking simple structures.
However, it is even more natural to demand that the maximally (highly) entangled states can be conveniently created by simple quantum circuits. If a maximally entangled state can be achieved by a sequence of simple quantum gates, it may be considered to have a certain conceptual simplicity, as well as (perhaps more importantly) a superior practical realizability.
Here we report research in which the genetic algorithm is used to search for quantum circuits producing maximally entangled multi-qubit \emph{pure} states numerically. Using the negativity entanglement measure, we can construct simple quantum circuits for maximally or highly entangled states with up to 6 qubits, a method which, among other results, has led us to the simplest known maximally entangled 5 and 6-qubit quantum circuits.
Whether the GA is more effective than other optimization techniques at finding maximally entangled states for larger $n$ remains unknown, as the initial computational experiments reported here relied upon a GA implementation not suitable for highly scalable computation; so exploration of this question remains for future work. However, our results do exemplify one significant advantage of our method over previous ones: it allows very simple construction of quantum circuits based on the quantum states found.
Section 2 reviews the entanglement measure used in the paper.
Section 3 reviews some relevant prior results on maximally entangled quantum systems.
Section 4 describes our use of genetic algorithms, including the details of the encoding scheme and the fitness function. Section 5 presents results; and Section 6 gives conclusions and discussion.
\section{Measuring Entanglement}
We now review the \emph{negativity} \cite{zyczkowski98,vidal02,kendon02} entanglement measure which we use as the fitness function for our GA. This measure has been shown computationally tractable in several previous works \cite{brown05,borras07,tapiador09}.
The negativity measure is closely related to the PPT \cite{peres96,horodecki97} (positive partial transpose) test, which states that positivity of the partial transpose of the density matrix of a bipartite state is a \emph{necessary} condition for the state to be separable.
Thus, an inseparable (entangled) state is characterized by non-vanishing \emph{negativity}, where the latter is defined as the sum of all the negative eigenvalues of the partial transpose of the density matrix. It is convenient to define the entanglement $E_{\rm N}$ to be the negative of the negativity, i.e. $E_{\rm N} = - {\rm negativity}$. The larger $E_{\rm N}$, the more entangled the state. The maximally entangled 2-qubit states are the Bell states having
$E_{\rm N} = 0.5$.
It is straightforward to generalize the definition of negativity to multipartite states. In this paper, we consider systems with a fixed number of qubits, where each qubit constitutes one part of the whole system. Therefore, an $n$-partite state just means an $n$-qubit state. For an $n$-qubit system, there are $C_n^0 + C_n^1 + ... + C_n^n = 2^n$ possible cuts (partitions). Each cut corresponds to a possible partial transpose. However, since each cut has an equivalent cut (e.g. the two cuts $\{0, 1\}$
\footnote{We label the qubits for an $n$-qubit system by the integers $0, 1, ..., n-1.$}
and $\{2, 3\}$ for a 4-qubit system are equivalent), and we do not need to consider the trivial partial transpose which does nothing, there are only $2^n/2 - 1 = 2^{n - 1} - 1$ nonequivalent cuts. The negativity for an $n$-qubit state is defined to be the sum of the negativity for the $2^{n - 1} - 1$ partial transposes and the entanglement $E_{\rm N}$ is still defined to be the negative of the negativity. There are upper bounds of $E_{\rm N}$ for $n$-qubit states, which can be derived \cite{borras07} by considering a hypothetical $n$-qubit state whose marginal density matrices are all completely mixed. In this case, each $k$-index cut contributes an amount of $(2^{k} - 1)/2$ to $E_{\rm N}$. For example, the upper bounds of $E_{\rm N}$ for 3, 4, 5, and 6-qubit systems can be calculated to be 1.5, 6.5, 17.5, and 60.5 respectively. See Table~\ref{1}.
\begin{table}[h]
\begin{center}
\begin{tabular}{|c||c|c|c|c|}
\hline
$n$ & 3 & 4 & 5 & 6 \\
\hline
1-cuts & 3 & 4 & 5 & 6 \\
\hline
2-cuts & 0 & 3 & 10 & 15 \\
\hline
3-cuts & 0 & 0 & 0 & 10 \\
\hline
total cuts & 3 & 7 & 15 & 31 \\
\hline
$E_{\rm N}$ & $0.5 \times 3 = 1.5$
& $0.5 \times 4 + 1.5 \times 3 = 6.5$
& $0.5 \times 5 + 1.5 \times 10 = 17.5$
& $0.5 \times 6 + 1.5 \times 15 + 3.5 \times 10 = 60.5$\\
\hline
\end{tabular}
\end{center}
\caption{Number of cuts for $n$-qubit systems and the calculation of the (hypothetical) maximal entanglement.}
\label{1}
\end{table}
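The cut counting and the resulting upper bounds can be checked with a short script (our own helper name): a $k$-index cut contributes $(2^k-1)/2$, and a cut of exactly half the qubits coincides with its complement, so only half of those are counted.

```python
from math import comb

def upper_bound_EN(n):
    """Hypothetical maximal entanglement for n qubits: sum over all
    nonequivalent k-index cuts of the contribution (2**k - 1)/2."""
    total = 0.0
    for k in range(1, n // 2 + 1):
        cuts = comb(n, k)
        if 2 * k == n:      # half-size cuts pair up with their complements
            cuts //= 2
        total += cuts * (2**k - 1) / 2
    return total

print([upper_bound_EN(n) for n in (3, 4, 5, 6)])  # [1.5, 6.5, 17.5, 60.5]
```

This reproduces the values 1.5, 6.5, 17.5, and 60.5 in the table above.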
\section{Prior Results on Maximally Entangled Multi-qubit Systems}
\subsection{Conventions for States, Gates and Circuits}
We use the following conventions for describing quantum states, gates, and circuits.
The $2^n$ computational basis states of the $n$-qubit system are taken to be
$\{ |q_{n-1}\rangle |q_{n-2}\rangle \cdots |q_{0}\rangle
\equiv |q_{n-1} q_{n-2} \cdots q_{0} \rangle \}$,
where $q_i = 0$ or 1.
A general $n$-qubit state is expressed as a superposition of these basis states as
$|\Psi_n\rangle = \sum_{q_{n-1}q_{n-2}...q_{0}} c_{q_{n-1}q_{n-2} \cdots q_{0}}
|q_{n-1}q_{n-2} \cdots q_{0}\rangle$,
where the coefficients $c_{q_{n-1}q_{n-2}\cdots q_{0}}$ are complex numbers such that the normalization condition
$\sum_{q_{n-1}q_{n-2}\cdots q_{0}} |c_{q_{n-1}q_{n-2}\cdots q_{0}}|^2 = 1$
for the state $|\Psi_n\rangle$ is satisfied. We also denote the four Bell basis states as
$|\psi^{\pm}\rangle = (|00\rangle \pm |11\rangle)/\sqrt{2}$ and
$|\phi^{\pm}\rangle = (|01\rangle \pm |10\rangle)/\sqrt{2}$.
The quantum gates acting on a general $n$-qubit state may be represented by $2^n$ dimensional matrices in the above basis. Elementary quantum gates can be conveniently expressed as tensor products of small matrices and appropriate identity matrices. For example, we use the notation $H(i)$ to represent a Hadamard gate acting on the $i$th qubit and use $\textmd{CNOT}(i, j)$ to represent a CNOT gate acting on the $i$th and the $j$th qubits, which are taken to be the control and the target qubits respectively.
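These conventions can be realized concretely with Kronecker products. The sketch below (NumPy, with our own helper names) builds $H(i)$ and $\textmd{CNOT}(i,j)$ as $2^n$-dimensional matrices, with qubit $0$ as the rightmost tensor factor; matrices multiply right-to-left, so the rightmost gate acts first.

```python
import numpy as np

H2 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard
X2 = np.array([[0, 1], [1, 0]])                    # Pauli X (NOT)
P0 = np.diag([1.0, 0.0])                           # |0><0|
P1 = np.diag([0.0, 1.0])                           # |1><1|

def embed(factors, n):
    """Kronecker product of one 2x2 factor per qubit (identity by default);
    qubit n-1 is the leftmost factor, qubit 0 the rightmost."""
    out = np.eye(1)
    for q in range(n - 1, -1, -1):
        out = np.kron(out, factors.get(q, np.eye(2)))
    return out

def H(i, n):
    return embed({i: H2}, n)

def CNOT(i, j, n):   # control qubit i, target qubit j
    return embed({i: P0}, n) + embed({i: P1, j: X2}, n)

# the circuit CNOT(1,0) H(1) acting on |00> produces the Bell state
psi0 = np.zeros(4); psi0[0] = 1.0
bell = CNOT(1, 0, 2) @ H(1, 2) @ psi0   # (|00> + |11>)/sqrt(2)
```
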
A quantum circuit of \emph{size} $N$ will be represented by a string of $N$ elementary gates in the form
$\textmd{Circuit} =
\textmd{Gate}(N-1)\textmd{Gate}(N-2)\cdots \textmd{Gate}(1)\textmd{Gate}(0)$,
where Gate(0) acts first and Gate($N-1$) acts last. This convention is chosen to respect the convention of matrix multiplication -- which is opposite to the convention of the pictorial representations of quantum circuits, in which the gate at the leftmost acts first.
\subsection{4-qubit States}
Higuchi and Sudbery \cite{higuchi00} have proved that there is no 4-qubit pure state with all its marginal density matrices completely mixed, which means that the hypothetical maximal entanglement
($E_{\rm N}$ = 6.5)
is unreachable. Using variational methods, they found a highly entangled state
$|\textmd{HS}4\rangle$
with
$E_{\rm N} = 6.0981$,
\begin{equation}
\label{HS4}
|\textmd{HS}4\rangle = \frac{1}{\sqrt{6}}\big(
|1100\rangle + |0011\rangle + \omega(|1001\rangle + |0110\rangle)
+ \omega^2(|1010\rangle + |0101\rangle)
\big)
\end{equation}
where
$\omega = -1/2 + i\sqrt{3}/2$
is the third root of unity. This state is known to be a local maximum \cite{brierley07} and is also conjectured to be a global maximum \cite{higuchi00} according to the von Neumann entropy measure.
\subsection{5-qubit States}
Maximally entangled 5-qubit states have been discovered via previous experiments with numerical search algorithms \cite{brown05,borras07,tapiador09}. These states all have the same entanglement feature as the following state,
\begin{equation}\label{BSSB5}
|\textmd{BSSB}5\rangle = \frac{1}{2}(
|001\rangle|\phi^{-}\rangle + |010\rangle|\psi^{-}\rangle +
|100\rangle|\phi^{+}\rangle + |111\rangle|\psi^{+}\rangle).
\end{equation}
The entanglement distribution of this state among different cuts can be expressed as $E_{\rm N} = 5 \times 0.5 + 10 \times 1.5 = 17.5$.
Muralidharan and Panigrahi \cite{muralidharan08} found various quantum information
applications of this state and outlined the procedure of a possible physical realization of this state. They proposed to create this state by using 2 $H$s and 3 CNOTs followed by a 32 dimensional matrix with prescribed matrix elements, but the decomposition of this matrix into elementary quantum gates was not provided. The advantage of our algorithm, to be reviewed below, is that we can find not only maximally entangled 5-qubit states, but also the corresponding quantum circuits creating the states.
\subsection{6-qubit states}
By running a hill climbing algorithm, Borras {\it et al} \cite{borras07} discovered a maximally entangled 6-qubit state with 32 non-vanishing coefficients, which was used by Choudhury {\it et al} \cite{choudhury09} for various quantum information applications.
Maximally entangled 6-qubit states with 16 nonzero coefficients were also discovered by Tapiador {\it et al} \cite{tapiador09} using the simulated annealing algorithm.
Using our algorithm, we can find both of these kinds of states and the corresponding circuits creating them.
\section{Applying the Genetic Algorithm}
We now describe our use of a genetic algorithm to evolve quantum circuits with maximal possible entanglement according to the negativity measure.
\subsection{Brief Review of Genetic Algorithms}
The genetic algorithm (GA) \cite{goldberg89} is an optimization algorithm encapsulating the basic ideas of biological evolutionary theory (mutation, combinatory sexual reproduction, and differential reproduction in a population based on fitness). GAs and other related ``evolutionary algorithms" have been applied to quantum information and computation before, e.g. to the automated generation of quantum algorithms; see \cite{spector04} and \cite{gepp09} for reviews.
The basic idea of the GA is rather simple. Initially, a starting population of individuals (constituting possible solutions to the problem at hand) is generated randomly. In our problem, an individual is a quantum circuit which can be used to create a new quantum state from an initial state. In the GA broadly conceived, an individual has two aspects. On one hand, it has a genotype, which is a sequence of genes (also called a chromosome); on the other hand, it has a phenotype, which determines its fitness (the quality of the solution). The fitness of an individual is calculated using the fitness function, which in our problem is just the entanglement $E_{\rm N}$. The translations from the phenotype to its genotype and vice versa are called encoding and decoding respectively. The implementation of these ideas in the context of evolving maximally entangled quantum circuits will be detailed in the next subsection.
After an initial population is created, the GA starts the evolutionary loop, which consists of the following steps:
\begin{itemize}
\item evaluate the fitness of each individual in the current population (generation)
\item select individuals based on the fitness levels to be parents of the next generation
\item generate the next generation through genetic operations, namely
\begin{itemize}
\item crossover, a binary operation that takes in two chromosomes and outputs a new one combining genes from each of them
\item mutation, which alters a certain percentage of genes in a chromosome, usually randomly (sometimes drawn from a particular probability distribution)
\end{itemize}
\end{itemize}
At each generation, the best solution (the elite) in the population is always retained for the next generation, until better solutions are found. This loop terminates when sufficiently optimized individuals have been found or other predetermined termination criteria (such as a maximum number of generations) are met.
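The loop just described can be sketched as follows: a minimal, generic GA over integer chromosomes with truncation selection, one-point crossover, uniform mutation, and elitism. All parameter values are illustrative, and the toy fitness merely stands in for $E_{\rm N}$.

```python
import random

def genetic_algorithm(fitness, n_genes, gene_values, pop_size=50,
                      generations=300, p_mut=0.05, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randrange(gene_values) for _ in range(n_genes)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        if fitness(ranked[0]) > fitness(best):
            best = ranked[0]                      # remember the elite
        parents = ranked[:pop_size // 2]          # truncation selection
        pop = [list(best)]                        # elitism
        while len(pop) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_genes)       # one-point crossover
            child = a[:cut] + b[cut:]
            pop.append([rng.randrange(gene_values)
                        if rng.random() < p_mut else g for g in child])
    return best

# toy fitness standing in for E_N: number of genes below 18
best = genetic_algorithm(lambda c: sum(g < 18 for g in c),
                         n_genes=10, gene_values=36)
```

For the actual problem, the lambda would be replaced by a fitness function that decodes the chromosome into a circuit, applies it to $|00\cdots0\rangle$, and returns $E_{\rm N}$.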
\subsection{Encoding of the quantum circuits}
Our aim, in the present research, is to find simple quantum circuits which can create maximally entangled states using the GA. To do this, we first encode all possible quantum circuits (solutions) into chromosomes on which the genetic operators can act. There are many feasible encoding schemes from which we can choose. Since our problem is discrete in nature, we can use binary or integer representations for the chromosomes, and we have chosen the latter.
A quantum circuit is a sequence of elementary quantum gates acting on a number of qubits. Any quantum circuit (any unitary matrix) can be approximated by a set of universal quantum gates. One set of such universal gates consists of the Hadamard gate $H$, the $\pi/8$ gate $T$, and the controlled-NOT gate CNOT. Other useful elementary quantum gates include the three Pauli gates $X$, $Y$, and $Z$, the phase
gate $S$, and the controlled-$Z$ gate \cite{nielsen00}. Not all the gates are needed to construct a quantum circuit for a given $n$-qubit state. For example, the Bell states can be prepared using only a Hadamard gate followed by a CNOT gate. Therefore, in our computational experiments we have tried various sets of elementary quantum gates to generate states of a given number of qubits.
The elementary quantum gates mentioned above are all 1 and 2-qubit gates.
When acting on a multi-qubit state, each of these gates can act (nontrivially) on different qubit(s). Take the 6-qubit states as an example. If we use $H$ and CNOT as our elementary quantum gates, there will be 6 different $H$s acting on each one of the 6 qubits and 30 CNOTs acting on different (ordered) pairs of qubits. We can arrange all 36 elementary quantum gates into an array, in which each gate is specified by an integer between 0 and 35. In this way, a quantum circuit for a 6-qubit state using, say, 10 elementary gates is represented by an array of 10 integers, each of which ranges from 0 to 35. This array of integers is a chromosome, and each integer corresponds to a gene. Since there are 10 genes and each gene can take 36 different values, the size of the search space (the number of all possible solutions) is $36^{10} \simeq 3.7\times 10^{15}$. The size of this search space renders brute force search infeasible on current computers; but using the GA, we can find an optimized solution using only hundreds to thousands of evaluations of the fitness function.
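A decoding routine for this integer encoding might look as follows. The exact integer-to-gate ordering below is our own choice; the paper fixes only that the $6 + 30 = 36$ gates for 6 qubits are indexed by integers from 0 to 35.

```python
def decode(chromosome, n=6):
    """Decode an integer chromosome into a list of gate descriptions.
    Genes 0..n-1 are Hadamards H(i); genes n..n*n-1 enumerate the
    n*(n-1) ordered (control, target) pairs for CNOT.  (This ordering
    is a hypothetical convention, not the one used in the paper.)"""
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    gates = []
    for g in chromosome:
        if g < n:
            gates.append(("H", g))
        else:
            c, t = pairs[g - n]
            gates.append(("CNOT", c, t))
    return gates

print(decode([0, 5, 6, 35], 6))
```
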
\subsection{Evaluation of the fitness function}
While the chromosome is the object which the genetic operators act on, the fitness is the driving force for GA to evolve the population of individuals. The fitness function accepts a chromosome as its input and returns a scalar (a real number) as its output. The evaluation of the fitness function for our problem takes on the following steps:
\begin{itemize}
\item translate the chromosome (array of integers) into the corresponding quantum circuit (sequence of elementary quantum gates)
\item obtain the target state by acting on the initial state (taken as $|00\cdots0\rangle$) with the quantum circuit
\item calculate the fitness $E_{\rm N}$ of the state obtained in the last step using the method given above
\end{itemize}
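The third step, computing $E_{\rm N}$, can be sketched directly from the definition: form $|\psi\rangle\langle\psi|$, partially transpose it over every nonequivalent cut, and accumulate minus the sum of the negative eigenvalues (a NumPy sketch with our own function name; qubit 0 is the rightmost tensor factor).

```python
import numpy as np
from itertools import combinations

def entanglement_EN(psi, n):
    """E_N = minus the sum of the negative eigenvalues of every
    nonequivalent partial transpose of |psi><psi|."""
    rho = np.outer(psi, psi.conj()).reshape([2] * (2 * n))
    seen, total = set(), 0.0
    for k in range(1, n // 2 + 1):
        for cut in combinations(range(n), k):
            if tuple(sorted(set(range(n)) - set(cut))) in seen:
                continue            # complement of an already-counted half-cut
            seen.add(cut)
            axes = list(range(2 * n))
            for q in cut:           # swap row/column index of each cut qubit
                r = n - 1 - q
                axes[r], axes[n + r] = axes[n + r], axes[r]
            pt = np.transpose(rho, axes).reshape(2**n, 2**n)
            w = np.linalg.eigvalsh(pt)
            total -= w[w < 0].sum()
    return total

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(entanglement_EN(bell, 2))   # 0.5 for the Bell state
```
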
\section{Results}
We now report the results obtained by implementing the above ideas with the genetic algorithm toolbox of MATLAB \cite{matlab}, for the case of 3, 4, 5, and 6-qubit states.
\begin{figure}
\caption{The entanglement generated by the quantum gates at each step of the quantum circuits for the 3, 4, 5, and 6-qubit GHZ states.}
\label{fig:GHZstates}
\end{figure}
\subsection{3 Qubits}
For the 3 qubit case, the GA easily finds the GHZ state, as expected.
For example, the GHZ state
$|\textmd{GHZ}3\rangle = (|000\rangle + |111\rangle)/\sqrt{2}$
is prepared by the following circuit with only 3 elementary gates acting on the initial state $|000\rangle$,
\begin{equation}
\label{circutGHZ3}
\textmd{Circuit}_{\rm GHZ3} =
\textmd{CNOT}(2,0)\textmd{CNOT}(2,1) H(2).
\end{equation}
The entanglement is generated in the following way. Firstly, $H(2)$ changes the initial state
$|000\rangle$
to a superposition of
$|000\rangle$ and $|100\rangle$
without generating any entanglement. Then, CNOT(2,1) prepares a Bell state
$(|00\rangle + |11\rangle)|0\rangle/\sqrt{2}$
for the two qubits $q_1$ and $q_2$, generating an amount of entanglement
$E_{\rm N} = 1$.
Finally, CNOT(2,0) promotes the entanglement to the maximal value
$E_{\rm N} = 1.5$,
making the 3 qubits completely entangled. See Figure~\ref{fig:GHZstates}.
Generalizing this construction, the $n$-qubit GHZ state can be prepared by the following circuit with $n$ elementary quantum gates,
\begin{equation}
\label{circutGHZn}
\textmd{Circuit}_{{\rm GHZ}n} =
\textmd{CNOT}(n-1,0)\cdots \textmd{CNOT}(n-1,n-2) H(n-1).
\end{equation}
Again, the Hadamard gate prepares the superposition and the CNOT gates generate entanglement. The amount of entanglement generated by
$\textmd{CNOT}(n-1, m)$ $(0\leq m \leq n-2)$
is $E_{\rm N} = 2^{m-1}$ and the total entanglement for the $n$-qubit GHZ state is $E_{\rm N} = (2^{n-1} - 1)/2$. See Figure~\ref{fig:GHZstates}.
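The general construction is easy to verify by direct state-vector simulation (a sketch with our own helper names; gates are applied right-to-left, matching the circuit formula):

```python
import numpy as np

def apply_H(psi, i, n):
    """Apply a Hadamard to qubit i of an n-qubit state vector."""
    Hm = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    ax = n - 1 - i                         # qubit 0 is the rightmost index
    psi = np.tensordot(Hm, psi.reshape([2] * n), axes=([1], [ax]))
    return np.moveaxis(psi, 0, ax).reshape(-1)

def apply_CNOT(psi, c, t, n):
    """Apply CNOT with control qubit c and target qubit t."""
    psi = psi.reshape([2] * n).copy()
    ac, at = n - 1 - c, n - 1 - t
    sl = [slice(None)] * n
    sl[ac] = 1                             # the control = 1 slice
    psi[tuple(sl)] = np.flip(psi[tuple(sl)], axis=at if at < ac else at - 1)
    return psi.reshape(-1)

def ghz(n):
    """Run Circuit_GHZn = CNOT(n-1,0)...CNOT(n-1,n-2) H(n-1) on |0...0>."""
    psi = np.zeros(2**n); psi[0] = 1.0
    psi = apply_H(psi, n - 1, n)           # the rightmost gate acts first
    for m in range(n - 2, -1, -1):
        psi = apply_CNOT(psi, n - 1, m, n)
    return psi

print(np.round(ghz(3), 4))  # amplitude 1/sqrt(2) on |000> and |111> only
```
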
\begin{figure}
\caption{The entanglement generated by the quantum circuit for $|\Psi_{4a(b)}\rangle$.}
\label{fig:4qubits}
\end{figure}
\subsection{4 Qubits}
For the 4 qubit case the result is more interesting. Using $H$ and CNOT as the elementary quantum gates, the best solution found by the GA has
$E_{\rm N} = 5.5$
and the simplest quantum circuit we found is of size 5,
\begin{equation}\label{circut4a}
\textmd{Circuit}_{4a} =
\textmd{CNOT}(3,1)\textmd{CNOT}(3,0) H(3)\textmd{CNOT}(2,1) H(2).
\end{equation}
The state created by this quantum circuit has the following form,
\begin{equation}
\label{Psi4a}
\begin{array}{rl}
|\Psi_{4a}\rangle = &\frac{1}{2}\big(
|0000\rangle + |0110\rangle + |1011\rangle + |1101\rangle \big) \\
= & \frac{1}{\sqrt{2}}\big(
|0\rangle |\psi^{+}\rangle |0\rangle +
|1\rangle |\phi^{+}\rangle |1\rangle
\big).
\end{array}
\end{equation}
There are 7 cuts for a 4-qubit system: 4 single-index cuts and 3 two-index cuts.
The marginal density matrices of $|\Psi_{4a}\rangle$ for the 4 single-cuts are all completely mixed. For 2 of the 3 two-index cuts, $\{0,1\}$ and $\{0,2\}$, the marginal density matrices are also completely mixed, but the marginal density matrix for the two-index cut $\{0,3\}$ is not. The entanglement distribution of this state can be expressed as $E_{\rm N} = 4 \times 0.5 + 2 \times 1.5 + 0.5 = 5.5$. We note that this state has the same entanglement feature as the one used by Wu and Zhang to show that not all 4-partite pure states are GHZ reducible \cite{wu01}.
By a permutation of the qubits (natural, since all the qubits are equivalent and none is privileged \cite{gisin98}), this state can be brought to a nicer form,
\begin{equation}
\label{Psi4b}
|\Psi_{4b}\rangle = \frac{1}{\sqrt{2}}\big(
|00\rangle |\psi^{+}\rangle +
|11\rangle |\phi^{+}\rangle
\big).
\end{equation}
The quantum circuit for the state $|\Psi_{4b}\rangle$ can be deduced from $\textmd{Circuit}_{4a}$ to be
\begin{equation}\label{circut4b}
\textmd{Circuit}_{4b} =
\textmd{CNOT}(3,0)\textmd{CNOT}(3,2) H(3)\textmd{CNOT}(1,0) H(1).
\end{equation}
Although less entangled than the HS state, $|\Psi_{4a(b)}\rangle$ has a simpler form and is much more entangled than an average 4-qubit state, as can be seen from the distribution of entanglement for 4-qubit states \cite{borras07}. The entanglement generated by $\textmd{Circuit}_{4a(b)}$ is shown in Figure~\ref{fig:4qubits}.
Incidentally, we note that by using a GA with continuous chromosome representation and searching over the states only, the algorithm always converges to the same entanglement value as the HS state, $E_{\rm N} = 6.0981$. This supports once again the conjecture that the HS state is the globally maximally entangled 4-qubit state.
We can interpret the quantum circuit $\textmd{Circuit}_{4a(b)}$ in the following way.
Firstly, two Bell pairs are prepared by two sets of Hadamard and CNOT gates acting on two pairs of qubits. The entanglement generated at this stage is $E_{\rm N} = 5$. Secondly, a CNOT gate, with its control qubit chosen from one Bell pair and its target qubit chosen from the other, increases the entanglement to
$E_{\rm N} = 5.5$.
\begin{figure}
\caption{The entanglement generated by the quantum circuit for $|\Psi_{5a(b)}\rangle$.}
\label{fig:5qubits}
\end{figure}
\subsection{5 Qubits}
We have tried various sets of elementary quantum gates in our GA experiments aimed at evolution of maximally entangled 5-qubit states.
It turns out that it suffices to use the set of elementary quantum gates consisting of the 5 Hadamard gates $H(i)$ $(0\leq i \leq 4)$ and the 20 CNOT gates $\textmd{CNOT}(i, j)$ $(0\leq i, j \leq 4;\ i \neq j)$.
Using these as the elementary quantum gates, a maximally entangled state similar to $|\textmd{BSSB}5\rangle$ can be easily found by the GA. As we increase the size of the quantum circuit, the maximal entanglement obtainable increases monotonically. When the size of the circuit equals 8, the maximal entanglement $E_{\rm N} = 17.5$ is achieved.
One of the maximally entangled states discovered takes the following form,
\begin{equation}\label{Psi5a}
\begin{array}{rl}
|\Psi_{5a}\rangle = &
\frac{1}{\sqrt{8}}(|00000\rangle + |00111\rangle + |01011\rangle + |01100\rangle \\
& + |10010\rangle + |10101\rangle - |11001\rangle - |11110\rangle ) \\
= & \frac{1}{2} (
|0\rangle |\psi^{+}\rangle |00\rangle +
|0\rangle |\phi^{+}\rangle |11\rangle +
|1\rangle |\phi^{-}\rangle |01\rangle +
|1\rangle |\psi^{-}\rangle |10\rangle ),
\end{array}
\end{equation}
which is created from the following circuit acting on the initial state $|00000\rangle$,
\begin{equation}\label{circut5a}
\begin{array}{rl}
\textmd{Circuit}_{5a} = &
\textmd{CNOT}(3,2) \textmd{CNOT}(4,1) \textmd{CNOT}(1,0)\\
& H(4)\textmd{CNOT}(4,3) H(4)\textmd{CNOT}(2,1) H(2).
\end{array}
\end{equation}
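The circuit in Eq.~(\ref{circut5a}) can be checked by direct simulation. The following sketch is our own illustration; it assumes the gate string acts right-to-left on $|00000\rangle$, that $\textmd{CNOT}(i,j)$ has control $i$ and target $j$, and that the leftmost bit of a written ket is qubit 4. Under these conventions it reproduces the amplitudes of $|\Psi_{5a}\rangle$ exactly.

```python
import numpy as np

n = 5
dim = 2 ** n

def h(state, k):
    """Hadamard on qubit k (bit k of the basis-state index)."""
    out = np.zeros_like(state)
    for i in range(dim):
        i0 = i & ~(1 << k)
        i1 = i0 | (1 << k)
        sign = -1.0 if (i >> k) & 1 else 1.0
        out[i] = (state[i0] + sign * state[i1]) / np.sqrt(2)
    return out

def cnot(state, c, t):
    """CNOT with control qubit c and target qubit t."""
    out = state.copy()
    for i in range(dim):
        if (i >> c) & 1:
            out[i] = state[i ^ (1 << t)]
    return out

psi = np.zeros(dim); psi[0] = 1.0
# Circuit_5a, rightmost gate applied first:
psi = h(psi, 2); psi = cnot(psi, 2, 1)
psi = h(psi, 4); psi = cnot(psi, 4, 3)
psi = h(psi, 4); psi = cnot(psi, 1, 0)
psi = cnot(psi, 4, 1); psi = cnot(psi, 3, 2)

expected = {"00000": +1, "00111": +1, "01011": +1, "01100": +1,
            "10010": +1, "10101": +1, "11001": -1, "11110": -1}
assert np.count_nonzero(np.abs(psi) > 1e-12) == 8
for bits, sign in expected.items():
    assert abs(psi[int(bits, 2)] - sign / np.sqrt(8)) < 1e-12
print("Circuit_5a reproduces |Psi_5a>")
```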
By a permutation of the qubits, we can bring the state $|\Psi_{5a}\rangle$ to a nicer form,
\begin{equation}\label{Psi5b}
|\Psi_{5b}\rangle = \frac{1}{2} (
|000\rangle |\psi^{+}\rangle +
|011\rangle |\phi^{+}\rangle +
|101\rangle |\phi^{-}\rangle +
|110\rangle |\psi^{-}\rangle ),
\end{equation}
which can be created from the following circuit acting on the initial state $|00000\rangle$,
\begin{equation}\label{circut5b}
\begin{array}{rl}
\textmd{Circuit}_{5b} = &
\textmd{CNOT}(1,0) \textmd{CNOT}(4,3) \textmd{CNOT}(3,2)\\
& H(4)\textmd{CNOT}(4,1) H(4) \textmd{CNOT}(0,3) H(0).
\end{array}
\end{equation}
The entanglement generated by $\textmd{Circuit}_{5a(b)}$ is shown in Figure~\ref{fig:5qubits}. In this case, two Bell pairs are first created, followed by a Hadamard gate and 3 CNOT gates. We note that at least 3 Hadamard gates are needed to generate maximal entanglement, which means that maximally entangled 5-qubit states have at least 8 non-vanishing coefficients.
\begin{figure}
\caption{The entanglement generated by the quantum circuit for $|\Psi_{6a}\rangle$.}
\label{fig:6qubits}
\end{figure}
\subsection{6 Qubits}
Similarly, using the GA we can easily find a quantum circuit producing maximal entanglement for the 6-qubit system.
As in the 5-qubit case, quantum circuits for maximally entangled 6-qubit states can be found by the GA using only the Hadamard and CNOT gates. Therefore, the set of elementary quantum gates which can generate maximal entanglement for 6-qubit states consists of 6 Hadamard gates $H(i)$ $(0\leq i \leq 5)$ and 30 CNOT gates $\textmd{CNOT}(i, j)$ $(0\leq i, j \leq 5;\ i \neq j)$.
The shortest circuit creating maximal entanglement found by our algorithm has 13 gates, 5 $H$s and 8 CNOTs. A typical maximally entangled 6-qubit state with 32 non-vanishing coefficients in this case is,
\begin{equation}
\label{Psi6a}
\begin{array}{rl}
|\Psi_{6a}\rangle = &
\frac{1}{\sqrt{32}}
( |000000\rangle + |000001\rangle + |000010\rangle - |000011\rangle \\
& - |001100\rangle + |001101\rangle + |001110\rangle + |001111\rangle \\
& + |010100\rangle + |010101\rangle - |010110\rangle + |010111\rangle \\
& + |011000\rangle - |011001\rangle + |011010\rangle + |011011\rangle \\
& + |100100\rangle - |100101\rangle + |100110\rangle + |100111\rangle \\
& + |101000\rangle + |101001\rangle - |101010\rangle + |101011\rangle \\
& + |110000\rangle - |110001\rangle - |110010\rangle - |110011\rangle \\
& - |111100\rangle - |111101\rangle - |111110\rangle + |111111\rangle )\\
= & \frac{1}{4}
( (|0000\rangle - |1111\rangle) ( |\psi^{-}\rangle + |\phi^{+}\rangle)\\
&+ (|0011\rangle - |1100\rangle) (-|\psi^{-}\rangle + |\phi^{+}\rangle)\\
&+ (|0101\rangle + |1010\rangle) ( |\psi^{+}\rangle + |\phi^{-}\rangle)\\
&+ (|0110\rangle + |1001\rangle) ( |\psi^{+}\rangle - |\phi^{-}\rangle)).
\end{array}
\end{equation}
This state is created by the following circuit of size 13 from the initial state $|000000\rangle$,
\begin{equation}
\label{circut6a}
\begin{array}{rl}
\textmd{Circuit}_{6a} =
&\textmd{CNOT}(2,1)\textmd{CNOT}(4,1)H(1)\textmd{CNOT}(4,3)\\ &H(4)\textmd{CNOT}(5,2)\textmd{CNOT}(3,0)\textmd{CNOT}(5,4)\\
&H(5)\textmd{CNOT}(3,2)H(3)\textmd{CNOT}(1,0)H(1).
\end{array}
\end{equation}
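As with $\textmd{Circuit}_{5a}$, Eq.~(\ref{circut6a}) can be verified by direct simulation. The sketch below is our own check, under the same conventions as before (gates applied right-to-left, $\textmd{CNOT}(i,j)$ controlled on $i$, leftmost ket bit is qubit 5); it confirms the 32 equal-magnitude coefficients of $|\Psi_{6a}\rangle$.

```python
import numpy as np

n = 6
dim = 2 ** n

def h(state, k):
    """Hadamard on qubit k (bit k of the basis-state index)."""
    out = np.zeros_like(state)
    for i in range(dim):
        i0 = i & ~(1 << k)
        i1 = i0 | (1 << k)
        sign = -1.0 if (i >> k) & 1 else 1.0
        out[i] = (state[i0] + sign * state[i1]) / np.sqrt(2)
    return out

def cnot(state, c, t):
    """CNOT with control qubit c and target qubit t."""
    out = state.copy()
    for i in range(dim):
        if (i >> c) & 1:
            out[i] = state[i ^ (1 << t)]
    return out

psi = np.zeros(dim); psi[0] = 1.0
# Circuit_6a, rightmost gate applied first:
psi = h(psi, 1); psi = cnot(psi, 1, 0)
psi = h(psi, 3); psi = cnot(psi, 3, 2)
psi = h(psi, 5); psi = cnot(psi, 5, 4)
psi = cnot(psi, 3, 0); psi = cnot(psi, 5, 2)
psi = h(psi, 4); psi = cnot(psi, 4, 3)
psi = h(psi, 1); psi = cnot(psi, 4, 1); psi = cnot(psi, 2, 1)

# 32 non-vanishing coefficients, all of magnitude 1/sqrt(32):
assert np.count_nonzero(np.abs(psi) > 1e-12) == 32
assert np.allclose(np.abs(psi[np.abs(psi) > 1e-12]), 1 / np.sqrt(32))
# Spot-check signed amplitudes against |Psi_6a>:
assert psi[int("000011", 2)] < 0 and psi[int("111111", 2)] > 0
print("Circuit_6a reproduces the support and signs of |Psi_6a>")
```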
The state $|\Psi_{6a}\rangle$ is maximally entangled, with $E_{\rm N} = 6 \times 0.5 + 15 \times 1.5 + 10 \times 3.5 = 60.5$. See Figure~\ref{fig:6qubits} for the entanglement generated by $\textmd{Circuit}_{6a}$. In this case, 3 Bell pairs are created before the action of the other 2 Hadamard and 5 CNOT gates.
We note that $\Psi_{6a}$ has a more compact form than the maximally entangled 6-qubit state discovered by Borras {\it et al} \cite{borras07,choudhury09}. Moreover, by applying a Hadamard gate, say $H(0)$, to $|\Psi_{6a}\rangle$, we obtain a maximally entangled 6-qubit state with only 16 nonvanishing coefficients,
\begin{equation}\label{Psi6b}
\begin{array}{rl}
|\Psi_{6b}\rangle = &
\frac{1}{\sqrt{8}}
( (|0000\rangle - |1111\rangle) |\psi^+\rangle
+ (-|0011\rangle + |1100\rangle) |\phi^-\rangle\\
& + (|0101\rangle + |1010\rangle) |\psi^-\rangle
+ (|0110\rangle + |1001\rangle) |\phi^+\rangle),
\end{array}
\end{equation}
which is essentially equivalent to the state found by Tapiador {\it et al} \cite{tapiador09}.
Our experiments also suggest that 16 is the minimal number of the nonvanishing coefficients for the maximally entangled 6-qubit states created from the Hadamard and CNOT gates.
\section{Conclusions and Future Work}
In this paper, we have applied the GA to the automated design of quantum circuits for maximally entangled states. Highly entangled 4-qubit states and maximally entangled 5 and 6-qubit states have been found, including the corresponding quantum circuits creating them.
The quantum circuits for the maximally (highly) entangled states are very simple, using only 3, 5, 8, and 13 elementary gates for the 3, 4, 5, and 6-qubit systems respectively. The elementary gates used include only the Hadamard and CNOT gates. The Hadamard gates play the role of generating \emph{quantum parallelism} (generating non-vanishing coefficients of the state), while the CNOT gates play the genuine role of \emph{entanglers}. Our numerical results suggest that maximally entangled multi-qubit states are superpositions of more than two computational basis states. The minimal numbers of nonzero coefficients for maximally entangled 5 and 6-qubit states are found to be 8 and 16 respectively. This partly explains why the GHZ states are not the maximally entangled states for systems of 4 or more qubits.
We note that the construction of quantum circuits for maximally entangled states is similar to the construction of \emph{perfect entanglers} \cite{kraus01} from imperfect ones. A perfect 2-qubit entangler is a quantum gate which can generate maximal entanglement from an unentangled state. The CNOT gate is a perfect entangler for 2-qubit states, but for an $n$-qubit ($n\geq3$) system, the CNOT gates are not perfect entanglers since each one of them generates only a (small) part of the whole entanglement of the $n$-qubit system. In this sense, the quantum circuits for maximally entangled states presented in this paper can be seen as perfect entanglers for multi-qubit systems.
The performance of our algorithm appears similar to that of previous approaches using hill climbing or simulated annealing. The running time for a single fitness evaluation ($E_{\rm N}$) increases exponentially with the number of qubits. The number of evaluations of the fitness function needed before finding an optimal solution also increases, though the scaling here is less clear. A typical run of the algorithm for the 6-qubit case on a common contemporary laptop takes a few minutes, evaluating the fitness function several thousand times.
Envisioned future work includes experimenting with a more scalable C++ GA implementation, which may allow search through spaces associated with 7 and more qubit systems. We also intend to experiment with alternative entanglement measures including the EMM (the entanglement measure based on minors), which is hypothesized to be more computationally tractable than eigenvalue based entanglement measures \cite{long09}. Use of more advanced evolutionary algorithms such as the Bayesian Optimization Algorithm \cite{pelikan05} may also be helpful; or the use of estimation-of-distribution algorithms such as MOSES \cite{looks06} that act directly on circuit space rather than utilizing an integer genome.
\ack
This work is supported by the National Natural Science Foundation of China under Grant Nos.\ 10535010, 10675090, 10775068, and 10735010.
\section*{References}
\end{document} |
\begin{document}
\title{Quantum Teleportation and Bell's Inequality Using
Single-Particle Entanglement}
\author{Hai-Woong Lee\footnote{E-mail address :
[email protected]} and Jaewan Kim\footnote{On leave
from Samsung Advanced Institute
of Technology}}
\address{Department of Physics,
Korea Advanced Institute of Science and Technology,
Taejon 305-701, Korea}
\maketitle
\begin{abstract}
A single-particle entangled state can be generated by illuminating
a beam splitter with a single photon. Quantum teleportation
utilizing such a single-particle entangled state can be
successfully achieved with a simple setup consisting only of
linear optical devices such as beam splitters and phase shifters.
Application of the locality assumption to a single-particle
entangled state leads to Bell's inequality, a violation of which
signifies the nonlocal nature of a single particle.
\end{abstract}
PACS number(s): 03.67.-a, 03.65.Bz, 42.50.-p
\section{Introduction}
It has long been realized that the striking nonclassical nature of
entanglement lies at the heart of the study of fundamental issues
in quantum mechanics, as witnessed by the Einstein-Podolsky-Rosen
paper\cite{PR47_777}, Bell's theorem\cite{PLIC1_195} and its
subsequent experimental verifications\cite{RPP41_1881,PRL49_91}.
The recent surge of interest and progress in quantum information
theory allows one to take a more positive view of entanglement and
regard it as an essential resource for many ingenious applications
such as quantum teleportation\cite{PRL70_1895,N390_575} and
quantum cryptography\cite{PRL67_661}. These applications rely on
the ability to engineer and manipulate entangled states in a
controlled way. So far, the generation and manipulation of
entangled states have been demonstrated with photon pairs produced
in optical processes such as parametric
downconversion\cite{N390_575,PRL75_4337}, with ions in an ion
trap\cite{PRL74_4091}, and with atoms in cavity QED
experiments\cite{PRL79_1}. All these experiments use as a source
of entanglement two or more spatially separated particles
(photons, ions or atoms) possessing correlated properties.
In this work we consider entanglement produced with a single
particle (``single-particle entanglement'') and explore its
usefulness. As a prototype of a single-particle entangled state,
we take an output state emerging from a lossless 50/50 beam
splitter irradiated by a single photon. Here the one photon state
and the vacuum state can be regarded to represent the logical
states 1 and 0 of the qubit. Single photons have already been
considered as a unit to carry logical states of the qubit in a
proposal to construct a quantum optical model of the Fredkin
gate\cite{PRL89_2124}. Recently, it has been proposed that the
single-photon entangled state be used to create macroscopic
entangled field states\cite{PRA62_012102}.
The main purpose of this work is twofold. First, we wish to
present a scheme for quantum teleportation based on the
single-photon entangled state. A characteristic feature of this
scheme is that it requires only linear optical devices such as
beam splitters and phase shifters and thus provides a way of
achieving all linear optical teleportation along the line
suggested by Cerf et al\cite{PRA57_R1477}. Second, we wish to
derive a single-particle version of Bell's inequality that is
applied to an interference pattern produced by single particles. A
violation of this inequality establishes the nonlocal nature of a
system described by a single-particle entangled state.
\section{Single-particle entanglement}
Let us consider a single photon incident on a lossless symmetric
50/50 beam splitter equipped with a pair of $-\frac{\pi}{2}$ phase
shifters, as depicted in Fig.\ref{beam}. Denoting the two input
ports of the beam splitter by I and J and the output ports by A
and B, and assuming that the photon enters the beam splitter
through the input port I, the input state can be written as
$|1\rangle_I|0\rangle_J$, where $|1\rangle$ and $|0\rangle$ are
the one photon state and the vacuum state, respectively, and the
subscripts I and J refer to the modes of photon entering the beam
splitter through the input ports I and J, respectively. The output
state emerging from the beam splitter is then given by
\begin{equation}
|\Psi\rangle=\frac{1}{\sqrt{2}}(|1\rangle_A|0\rangle_B
+|0\rangle_A|1\rangle_B) \label{input}
\end{equation}
where subscripts A and B refer to the modes of photon exiting the
beam splitter through the output ports A and B, respectively. The
state given by Eq.\ (\ref{input}) represents a single-photon
entangled state. We note that the output state is obtained in the
symmetric combination as given by Eq.\ (\ref{input}), because the
phase shifter at the output port A acts to offset the phase
difference of $\frac{\pi}{2}$ between the reflected and
transmitted waves\cite{OC64_485} (we assume throughout this work
that the reflected wave leads the transmitted wave by
$\frac{\pi}{2}$ in phase). The phase shifter at the input port J
does not play any role in this case because only vacuum is present
at this port.
\section{Quantum teleportation}
We are now ready to describe a teleportation scheme that makes use
of single-particle entanglement. As in the standard teleportation
scheme\cite{PRL70_1895,N390_575}, this scheme consists of three
distinct parts as shown in Fig.\ \ref{tele}; the source station
that generates a single-photon entangled state, Alice's station
where a Bell measurement is performed and its result is sent away
through classical communication channels, and Bob's station where
the signal from Alice is read through classical communication
channels and a suitable unitary transformation is performed.
Details of the teleportation procedure described below follow
closely the original proposal\cite{PRL70_1895}.
The source station consisting of the same setup as in Fig.\
\ref{beam} generates a single-photon entangled state in the form
of Eq.\ (\ref{input}). The reflected wave A of the entangled state
is sent to Alice and the transmitted wave B to Bob. At Alice's
station this reflected wave A of the entangled state is combined
via a lossless symmetric 50/50 beam splitter with a pair of
$-\frac{\pi}{2}$ phase shifters to a wave C which is in an unknown
superposition of a one photon state and a vacuum state,
$a|1\rangle_C+b|0\rangle_C$, where $|a|^2+|b|^2=1$. This state of
unknown superposition is the state that Alice wishes to teleport
to Bob. The field state incident on Alice's beam splitter is
$|\Psi\rangle_{in}=\frac{1}{\sqrt{2}}(|1\rangle_A|0\rangle_B
+|0\rangle_A|1\rangle_B)(a|1\rangle_C+b|0\rangle_C)$, which upon
rearrangement can be written in the Bell basis as
\begin{eqnarray}
|\Psi\rangle_{in}&=&\frac{1}{2}[|\Psi^{(+)}\rangle(a|1\rangle_B
+b|0\rangle_B)+|\Psi^{(-)}\rangle(a|1\rangle_B-b|0\rangle_B)\\
\mbox{}&+&|\Phi^{(+)}\rangle(a|0\rangle_B+b|1\rangle_B)
+|\Phi^{(-)}\rangle(a|0\rangle_B-b|1\rangle_B)]\nonumber
\label{rearrange}
\end{eqnarray}
where $|\Psi^{(\pm)}\rangle$ and $|\Phi^{(\pm)}\rangle$ are the
Bell states defined by
\begin{eqnarray}
|\Psi^{(\pm)}\rangle=\frac{1}{\sqrt{2}}(|0\rangle_A|1\rangle_C\pm
|1\rangle_A|0\rangle_C)\\
|\Phi^{(\pm)}\rangle=\frac{1}{\sqrt{2}}(|1\rangle_A|1\rangle_C\pm
|0\rangle_A|0\rangle_C)\nonumber \label{bell_eq}
\end{eqnarray}
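The Bell-basis rearrangement of $|\Psi\rangle_{in}$ above can be verified numerically by treating each mode as a two-level system spanned by $|0\rangle$ and $|1\rangle$. The sketch below is our own check; the tensor order is $A\otimes B\otimes C$ and $a,b$ are random normalized amplitudes.

```python
import numpy as np
from functools import reduce

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
tens = lambda *vs: reduce(np.kron, vs)

rng = np.random.default_rng(0)
a, b = rng.normal(size=2) + 1j * rng.normal(size=2)
norm = np.sqrt(abs(a) ** 2 + abs(b) ** 2)
a, b = a / norm, b / norm

# LHS: (1/sqrt2)(|1>_A|0>_B + |0>_A|1>_B)(a|1>_C + b|0>_C)
lhs = tens((tens(ket1, ket0) + tens(ket0, ket1)) / np.sqrt(2),
           a * ket1 + b * ket0)

# Bell states on modes A and C (4-vectors, order A (x) C):
psiP = (np.kron(ket0, ket1) + np.kron(ket1, ket0)) / np.sqrt(2)
psiM = (np.kron(ket0, ket1) - np.kron(ket1, ket0)) / np.sqrt(2)
phiP = (np.kron(ket1, ket1) + np.kron(ket0, ket0)) / np.sqrt(2)
phiM = (np.kron(ket1, ket1) - np.kron(ket0, ket0)) / np.sqrt(2)

def with_B(ac, bv):
    """Interleave a state on A (x) C with a state on B, ordered A (x) B (x) C."""
    return np.einsum("ac,b->abc", ac.reshape(2, 2), bv).reshape(8)

# RHS: the Bell-basis expansion quoted in the text.
rhs = 0.5 * (with_B(psiP, a * ket1 + b * ket0)
           + with_B(psiM, a * ket1 - b * ket0)
           + with_B(phiP, a * ket0 + b * ket1)
           + with_B(phiM, a * ket0 - b * ket1))

assert np.allclose(lhs, rhs)
print("Bell-basis rearrangement verified")
```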
Straightforward algebra based on the quantum theory of the beam
splitter\cite{OC64_485,PRA33_4033} shows that the output states
corresponding to $|\Psi^{(+)}\rangle$, $|\Psi^{(-)}\rangle$,
$|\Phi^{(+)}\rangle$ and $|\Phi^{(-)}\rangle$ are given
respectively by $|0\rangle_E|1\rangle_F$,
$|1\rangle_E|0\rangle_F$,
$\frac{1}{2}(|0\rangle_E|2\rangle_F-|2\rangle_E|0\rangle_F)
+\frac{1}{\sqrt{2}}|0\rangle_E|0\rangle_F$ and
$\frac{1}{2}(|0\rangle_E|2\rangle_F-|2\rangle_E|0\rangle_F)
-\frac{1}{\sqrt{2}}|0\rangle_E|0\rangle_F$, where subscripts $E$
and $F$ refer to the modes of photon exiting the beam splitter via
the output ports $E$ and $F$, respectively. Thus, a detection of a
single photon by the detector $D_F$ combined with a detection of
no photon by the detector $D_E$ would indicate that the input
state is $|\Psi^{(+)}\rangle$ and that, according to Eq.\ (2), the
state at Bob's station is $a|1\rangle_B+b|0\rangle_B$, exactly the
state that Alice wants to teleport to Bob. In this case, Bob needs
do nothing and teleportation is successfully achieved. A detection
of a single photon by the detector $D_E$ and a detection of no
photon by the detector $D_F$ would mean that the input state is
$|\Psi^{(-)}\rangle$. The corresponding state at Bob's station is
$a|1\rangle_B-b|0\rangle_B$. If Bob is informed of such a Bell
measurement result from Alice through classical communication
channels, he needs to apply a $\pi$ phase shifter which changes
the sign of the state $|1\rangle_B$, and teleportation is then
successfully achieved. The teleportation, however, fails, either
if one of the detectors registers two photons and the other none,
which would mean that the input state is $|1\rangle_A|1\rangle_C$,
or if neither detector registers any photon, which would mean that
the input state is $|0\rangle_A|0\rangle_C$. The probability of
success for our teleportation scheme is thus 50\%, which is the
same as the probability of success for the standard teleportation
method. It has been noted \cite{PRA59_116} that a reliable (100\%
probability of success) teleportation cannot be achieved by linear
operations due to the absence of photon-photon interactions. It
should be noted that the 50\% probability of success for our
scheme is obtained only if the Bell states $|\Psi^{(+)}\rangle$
and $|\Psi^{(-)}\rangle$ are clearly distinguished not only from
each other but also from the states $|1\rangle_A|1\rangle_C$ and
$|0\rangle_A|0\rangle_C$ (or from the Bell states
$|\Phi^{(+)}\rangle$ and $|\Phi^{(-)}\rangle$). This means that
our detectors should be capable of distinguishing a single photon
from two. This is of course not an easy requirement to be met. It
seems, however, that single photon counting in the optical regime
and, in particular, in the high-energy(x-ray, $\gamma$-ray) regime
lies within the reach of the present technology. Our analysis also
assumes that the detectors are of unit quantum efficiency.
The state, $a|1\rangle_C+b|0\rangle_C$, to be teleported in our
teleportation scheme can be generated using the methods proposed
in the past\cite{Pegg,Dakna}. One may also generate the state to
be teleported using a beam splitter, as indicated in the leftmost
part of Fig.\ \ref{tele}. The field state emerging from the beam
splitter of complex reflection and transmission coefficients $r$
and $t$ can be written as
$t|1\rangle_C|0\rangle_D+r|0\rangle_C|1\rangle_D$, where the
subscripts C and D refer to the modes of the transmitted and
reflected waves, respectively. The transmitted wave C is then
directed toward Alice's station for teleportation. Alice therefore
has two entangled waves in the state
$|\Psi\rangle_{in}=\frac{1}{\sqrt{2}}(|1\rangle_A|0\rangle_B
+|0\rangle_A|1\rangle_B)(t|1\rangle_C|0\rangle_D
+r|0\rangle_C|1\rangle_D)$ to be combined in the beam splitter.
She of course has a control over only the waves A and C. The state
$|\Psi\rangle_{in}$ can be rewritten in the Bell basis as
\begin{eqnarray}
|\Psi\rangle_{in}&=&\frac{1}{2}[|\Psi^{(+)}\rangle
(t|1\rangle_B|0\rangle_D+r|0\rangle_B|1\rangle_D)
+|\Psi^{(-)}\rangle(t|1\rangle_B|0\rangle_D
-r|0\rangle_B|1\rangle_D)\\
\mbox{}&+&|\Phi^{(+)}\rangle(t|0\rangle_B|0\rangle_D
+r|1\rangle_B|1\rangle_D)+|\Phi^{(-)}\rangle(t|0\rangle_B|0\rangle_D
-r|1\rangle_B|1\rangle_D)]\nonumber
\end{eqnarray}
If Alice's Bell measurement yields the state $|\Psi^{(+)}\rangle$,
Bob has a wave B in the entangled state
$t|1\rangle_B|0\rangle_D+r|0\rangle_B|1\rangle_D$. The
teleportation is thus successfully achieved. If Alice's Bell
measurement yields the state $|\Psi^{(-)}\rangle$, Bob needs to
apply a $\pi$ phase shifter, which changes the relative phase of
the state $|1\rangle_B|0\rangle_D$ with respect to the state
$|0\rangle_B|1\rangle_D$ by $\pi$. We therefore see that our
scheme offers a simple way of teleporting an entangled state. That
teleportation works also for entangled states was already pointed
out by Bennett et al.
\cite{PRL70_1895} in their original proposal
for quantum teleportation.
It is easy to confirm that teleportation has indeed been
successfully achieved. As shown in the rightmost part of Fig.\
\ref{tele}, we combine the wave D with the teleported wave B using
a beam splitter that has the same transmission and reflection
coefficients as the beam splitter that created the teleported
entangled state $t|1\rangle_C|0\rangle_D+r|0\rangle_C|1\rangle_D$.
If the teleportation is successful, then the input state to the
beam splitter must be
$t|1\rangle_B|0\rangle_D+r|0\rangle_B|1\rangle_D$. The situation
then is exactly the reverse of the situation that created the
teleported entangled state. Thus, a successful teleportation can
be verified by confirming that the detector $D_G$ detects a single
photon and the detector $D_H$ detects none.
Finally we mention that the teleportation scheme described here
uses essentially the same setup as the scheme proposed by Pegg et
al.\cite{Pegg} to perform optical state truncation. The similarity
of the teleportation process and the truncation process has
already been noted by Pegg et al. Whereas the input state to be
truncated is a superposition of many number states including one
photon state and vacuum, and a successful truncation at one photon
state requires waiting until the two detectors register a total of
one photon, the input state to be teleported is a superposition of
one photon state and vacuum, and teleportation is successful half
of the times when the two detectors ($D_E $ and $D_F $ of Fig.\
\ref{tele} )register a total of one photon.
\section{Bell's inequality}
It was shown in the previous section that single-particle
entanglement can be as useful as two-particle entanglement, as far
as application to quantum teleportation is concerned. Considering
that two-particle entanglement provides an opportunity to test
fundamental principles of quantum mechanics related to EPR paradox
and Bell's theorem, one may wonder whether single-particle
entanglement can offer a similar opportunity. Although up to now
Bell's inequality tests have been performed with entangled photon
pairs\cite{RPP41_1881,PRL49_91}, a proposal for an experiment that
demonstrates nonlocality and a violation of Bell's inequality with
a single photon was made 10 years ago\cite{Tan}. The proposal
stimulated much interest and, at the same time, intensive
debate\cite{Santos}. There is no question that the proposed
experiment demonstrates nonlocality of the system and a violation
of Bell's inequality. It, however, does not seem entirely clear at
least to some of the researchers that the outcome of the
experiment can be attributed solely to an effect associated with a
single photon, because the experiment requires performing a
particle-particle correlation measurement.
Here, for our discussion of nonlocality with a single-particle
entangled state, we concentrate on the type of correlation
measurement that can certainly be attributed to a single photon
effect, i.e., a correlation measurement of the first-order type in
Glauber's sense\cite{PR130_2529}. In fact, the nonlocal behavior
demonstrated in the first-order interference measurement of
Grangier et al.\cite{Grangier} with a Mach-Zehnder interferometer
is undoubtedly a single-photon effect. We elaborate further on
this experiment and show that Bell's inequality, which is violated
by the experimental observation of Grangier et al., can be derived
based on the locality assumption. Our argument below can be
considered as a derivation of a single-particle version of Bell's
inequality\cite{PLIC1_195,Ballentine}. We recall that it was
proven\cite{Gisin} that any pure entangled state of two or more
particles violates Bell's inequality. Our derivation allows one to
extend the proof to an entangled state of a single particle. It
should be noted, however, that the interference pattern observed
by Grangier et al. can be explained by a nonlocal classical wave
theory as well as by the quantum theory. A violation of the
single-particle version of Bell's inequality therefore does not
establish the quantum theory as the only correct theory. Its
significance lies in the fact that it gives a quantitative
confirmation that a system described by a single-particle
entangled state behaves nonlocally.
Consider a Mach-Zehnder interferometer consisting of a pair of
lossless symmetric 50/50 beam splitters, each with a pair of
$-\frac{\pi}{2}$ phase shifters, and a pair of perfect mirrors, as
shown in Fig.\ \ref{bell}. A single photon and vacuum are incident
on the first beam splitter from the input ports I and J,
respectively. The output state is again given by
$\frac{1}{\sqrt{2}}(|1\rangle_A|0\rangle_B+|0\rangle_A|1\rangle_B)$.
The reflected wave A and the transmitted wave B are recombined at
the second beam splitter. Alice and Bob, located somewhere along
the pathway of the reflected wave A and the transmitted wave B,
respectively, are each equipped with a phase shifter. If neither
Alice nor Bob applies a phase shifter, the field state emerging
from the second beam splitter is $|1\rangle_C|0\rangle_D$ and it
is certain that the photon strikes the detector $D_C$. Thus, when
N photons are sent from the input port I in succession, all N
photons arrive at the detector $D_C$ and none at the detector
$D_D$. Suppose now Alice inserts her phase shifter into the beam A
and changes its phase by $\phi_A$. A straightforward calculation
based on the quantum theory of the beam
splitter\cite{OC64_485,PRA33_4033} yields that the output state
emerging from the second beam splitter is (apart from an overall
phase factor) $\cos\frac{\phi_A}{2}|1\rangle_C|0\rangle_D
+i\sin\frac{\phi_A}{2}|0\rangle_C|1\rangle_D$. Thus
$N_A\equiv(\sin^2\frac{\phi_A}{2})N$ photons out of the total $N$
incident photons change their paths and strike the detector $D_D$
as a consequence of Alice's action to change the phase of the beam
A by $\phi_A$. If Bob, not Alice, inserts his phase shifter into
the beam B and changes its phase by $-\phi_B$, the output state
becomes $\cos\frac{\phi_B}{2}|1\rangle_C|0\rangle_D
+i\sin\frac{\phi_B}{2}|0\rangle_C|1\rangle_D$. Thus
$N_B\equiv(\sin^2\frac{\phi_B}{2})N$ photons out of the total $N$
incident photons change their paths and strike the detector $D_D$
as a consequence of Bob's action. What would happen if both Alice
and Bob use their phase shifters and change the phases of the
beams A and B by $\phi_A$ and $-\phi_B$, respectively? A
straightforward quantum calculation yields that the output state
in this case is $\cos\frac{\phi_A+\phi_B}{2}|1\rangle_C|0\rangle_D
+i\sin\frac{\phi_A+\phi_B}{2}|0\rangle_C|1\rangle_D$, i.e.
$N_{AB}\equiv(\sin^2\frac{\phi_A+\phi_B}{2})N$ photons out of the
$N$ incident photons change their paths and strike the detector
$D_D$.
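These three predictions follow from a $2\times2$ transfer-matrix computation in the one-photon subspace. The sketch below is our own illustration; it assumes that, with the $-\frac{\pi}{2}$ phase shifters in place, each beam splitter acts on the path amplitudes as the real symmetric matrix $\frac{1}{\sqrt{2}}\bigl(\begin{smallmatrix}1&1\\1&-1\end{smallmatrix}\bigr)$, consistent with the symmetric output state in Eq.~(\ref{input}).

```python
import numpy as np

# Symmetric 50/50 beam splitter acting on the two path amplitudes.
BS = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def detector_D_prob(phi_A, phi_B):
    """Probability that the photon exits toward detector D_D,
    with Alice's phase phi_A on arm A and Bob's phase -phi_B on arm B."""
    phases = np.diag([np.exp(1j * phi_A), np.exp(-1j * phi_B)])
    amps = BS @ phases @ BS @ np.array([1, 0])  # photon enters port I
    return abs(amps[1]) ** 2

# N_AB / N = sin^2((phi_A + phi_B)/2), including the single-shifter cases:
for pa, pb in [(0.7, 0.0), (0.0, 1.1), (0.7, 1.1)]:
    assert np.isclose(detector_D_prob(pa, pb), np.sin((pa + pb) / 2) ** 2)
print("transfer-matrix model reproduces sin^2((phi_A + phi_B)/2)")
```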
On the other hand an argument based on the locality assumption
leads to a result contradictory to the above quantum result. In
order to show this, we assume that those photons that do not
change their paths and still arrive at $D_C$, {\it both\/} when
Alice, not Bob, uses her phase shifter, {\it and\/} when Bob, not
Alice, uses his phase shifter, will still not change their paths
and still arrive at $D_C$ when both Alice and Bob use their phase
shifters. This assumption means that we do not allow for any
cooperative effect between Alice's phase shifter and Bob's, and it
therefore assures their independence from each other\cite{COMM}. It may
therefore be considered as a single-particle version of the
locality assumption. Let the groups $G_N , G_A , G_B , G_{AB} $
contain, respectively, the total $N$ photons, $N_A$ photons that
strike the detector $D_D$ when Alice, not Bob, uses her phase
shifter, $N_B$ photons that strike the detector $D_D$ when Bob,
not Alice, uses his phase shifter, and $N_{AB}$ photons that
strike the detector $D_D$ when both Alice and Bob use their phase
shifters. The locality assumption dictates that the group $(G_N
-G_A ) \cap (G_N -G_B)$ is a subset of the group $(G_N -G_{AB} )$.
Since the number of photons that belong to the group $(G_N -G_A )
\cap (G_N -G_B )$ is greater than or equal to $N- N_A -N_B $, it
immediately follows that $N-N_{AB} \geq N- N_A -N_B $. We
therefore arrive at the inequality $N_{AB} \leq N_A +N_B $. This
inequality is in disagreement with the quantum theory, because the
inequality, $\sin^2\frac{\phi_A+\phi_B}{2} \leq
\sin^2\frac{\phi_A}{2}+\sin^2\frac{\phi_B}{2}$, is clearly
violated for some values of $\phi_A$ and $\phi_B$. The inequality,
$\sin^2\frac{\phi_A+\phi_B}{2} \leq \sin^2\frac{\phi_A}{2}
+\sin^2\frac{\phi_B}{2}$, is completely equivalent to the formula,
$1+P(\vec{b},\vec{c}) \geq
|P(\vec{a},\vec{b})-P(\vec{a},\vec{c})|$, derived originally by
Bell\cite{PLIC1_195} for a correlated spin pair, if we take the
spin correlation function $P(\vec{a},\vec{c})=-\cos\phi_A $,
$P(\vec{b},\vec{c})=-\cos \phi_B $, and
$P(\vec{a},\vec{b})=-\cos(\phi_A +\phi_B )$.
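The violation is easy to exhibit numerically: the quantum fraction $N_{AB}/N=\sin^2\frac{\phi_A+\phi_B}{2}$ exceeds the local bound $\sin^2\frac{\phi_A}{2}+\sin^2\frac{\phi_B}{2}$ by up to $\frac14$, attained near $\phi_A=\phi_B=\frac{\pi}{3}$. The grid search below is our own illustration.

```python
import numpy as np

phi = np.linspace(0, np.pi, 1001)
A, B = np.meshgrid(phi, phi, indexing="ij")

quantum = np.sin((A + B) / 2) ** 2               # N_AB / N
local = np.sin(A / 2) ** 2 + np.sin(B / 2) ** 2  # (N_A + N_B) / N

violation = quantum - local
i, j = np.unravel_index(np.argmax(violation), violation.shape)
print(violation[i, j], A[i, j], B[i, j])
# maximum violation is approximately 1/4, near phi_A = phi_B = pi/3
```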
\section{Conclusion}
In conclusion we have investigated a possibility of utilizing
single-particle entanglement and shown that single-particle
entanglement can be used as a useful resource for fundamental
studies in quantum mechanics and for applications in quantum
teleportation. An experimental scheme that utilizes
single-particle entanglement generally requires production,
maintenance and detection of photons at a single photon level.
With the development of photon counting techniques and of reliable
single photon sources\cite{PRL83_2722}, however, the experimental
realization of the schemes seems within the reach of the present
technology.
\section*{Acknowledgment}
This research was supported by the Brain Korea 21 Project of the
Korean Ministry of Education and by the Korea Atomic Energy
Research Institute(KAERI). The authors wish to thank Professors K.
An, P. Ko, E.K. Lee, S.C. Lee, Y.H. Lee, E. Stewart and Mr. J.C.
Hong for helpful discussions.
\begin{flushleft}
{\Large \bf Figure Captions}
{\bf Figure \ref{beam}.} Generation of a single photon entangled
state. A single photon and vacuum are incident on a beam splitter
from the input ports I and J, respectively. A $-\frac{\pi}{2}$
phase shifter is placed at the output port A and another at the
input port J.
{\bf Figure \ref{tele}.} Quantum teleportation experiment using
single-particle entanglement. At the source station a single
photon entangled state is generated by a beam splitter. The
transmitted wave B is sent to Bob, while the reflected wave A is
sent to Alice who combines it with the wave C to be teleported.
Alice makes a Bell measurement upon the combined waves A and C and
informs the result to Bob via a classical communication channel
(represented by a wavy line). When Bob is informed of Alice's
measurement result, he performs a suitable unitary transformation
with a $\pi$ phase shifter. The station to the right of Bob
equipped with a beam splitter and detectors $D_G$ and $D_H$
performs a verification of a successful operation of teleportation
if necessary.
{\bf Fig. \ref{bell}.} Single-particle version of Bell's
inequality test with a Mach-Zehnder interferometer. A single
photon and vacuum are incident on the beam splitter (with a pair
of $-\frac{\pi}{2}$ phase shifters) from the input ports I and J,
respectively. The reflected wave A and the transmitted wave B are
recombined at the second beam splitter (with a pair of
$-\frac{\pi}{2}$ phase shifters). Alice and Bob, located somewhere
along the pathway of the reflected wave A and the transmitted wave
B, respectively, each have a phase shifter which they may or may
not use.
\end{flushleft}
\begin{figure}
\caption{\label{beam}}
\end{figure}
\begin{figure}
\caption{\label{tele}}
\end{figure}
\begin{figure}
\caption{\label{bell}}
\end{figure}
\end{document}
\begin{document}
\title{Approximately multiplicative maps between algebras of bounded operators on Banach spaces}
\begin{abstract}
We show that for any separable reflexive Banach space $X$ and a large class of Banach spaces $E$, including those with a subsymmetric shrinking basis, as well as all spaces $L_p[0,1]$ for $1\le p \le \infty$, every bounded linear map ${\mathcal B}(E)\to {\mathcal B}(X)$ which is approximately multiplicative is necessarily close in the operator norm to some bounded homomorphism ${\mathcal B}(E)\to {\mathcal B}(X)$.
That is, the pair $({\mathcal B}(E),{\mathcal B}(X))$ has the AMNM property in the sense of Johnson (\textit{J.~London Math.\ Soc.} 1988). Previously this was only known for $E=X=\ell_p$ with $1<p<\infty$; even for those cases, we improve on the previous methods and obtain better constants in various estimates. A crucial role in our approach is played by a new result, motivated by cohomological techniques, which establishes AMNM properties relative to an amenable subalgebra; this generalizes a theorem of Johnson (\textit{op.\ cit.}).
\end{abstract}
\hrule
\eject
\tableofcontents
\begin{section}{Introduction}
\begin{subsection}{Background context, and the statement of our main theorem}
The AMNM property referred to in the abstract was formulated by B. E. Johnson in \cite{BEJ_AMNM2}, and fits into the broader theme of ``Ulam stability'' for normed representations of groups or algebras: see \cite{BOT_ulam,choi_jaust13,Ko,McVi19} for more recent work in a similar direction. The main purpose of the present paper is to extend our knowledge of the AMNM property to a class of Banach algebras where relatively little has been done, namely the algebras consisting of all bounded operators on $E$, for various Banach spaces~$E$. (The more restricted setting of stability for \emph{surjective} homomorphisms has recently been considered by the second author with Tarcsay; see~\cite{HorTar}.)
To state Johnson's original definition, and our own results, we need to set up some notation. For a Banach space $X$ and $r\ge 0$, $\operatorname{\rm ball}\nolimits_r(X)$ denotes
$\{ x\in X \colon \norm{x}\le r\}$.
Given Banach spaces $E$ and $F$, and $n\in\mathbb N$, we write ${\mathcal L}^n(E,F)$ for the space of bounded $n$-multilinear maps $E\times \dots \times E \to F$.
If $n=1$, then we shall usually modify this notation slightly and write ${\mathcal L}(E,F)$.
One exception to this notational convention is that when $n=1$ and $E=F$, we will denote the Banach algebra of all bounded linear operators $E\to E$ by ${\mathcal B}(E)$, to emphasise that this space is being equipped with extra algebraic structure. (We use the notation ${\mathcal L}^n(E,F)$ for the space of bounded, $n$-linear maps in place of ${\mathcal B}^n(E,F)$ to avoid confusion later in the paper; ${\mathcal B}^n$ usually stands for the space of continuous $n$-coboundaries in the context of Hochschild cohomology.)
For Banach algebras ${\mathsf A}$ and ${\mathsf B}$ we write $\operatorname{Mult}({\mathsf A},{\mathsf B})$ for the set of bounded algebra homomorphisms ${\mathsf A}\to {\mathsf B}$ (the zero map is allowed).
Then, given $\psi\in {\mathcal L}({\mathsf A},{\mathsf B})$, we have a ``global'' measure of how far $\psi$ is from being a homomorphism; namely, we can consider the distance of $\psi$ from the set $\operatorname{Mult}({\mathsf A},{\mathsf B})$ with respect to the operator norm. Explicitly,
\begin{align*}
\operatorname{dist}(\psi) := \inf \lbrace \Vert \psi - \phi \Vert \, : \, \phi \in \operatorname{Mult}({\mathsf A},{\mathsf B}) \rbrace.
\end{align*}
(Note that since $\operatorname{Mult}({\mathsf A},{\mathsf B})$ is closed, $\operatorname{dist}(\psi) = 0$ if and only if $\psi \in \operatorname{Mult}({\mathsf A},{\mathsf B})$.)
On the other hand, since a linear map $\psi:{\mathsf A}\to{\mathsf B}$ is a homomorphism if and only if it satisfies the identity $\psi(a_1a_2)=\psi(a_1)\psi(a_2)$ for each $a_1$ and $a_2$ in the closed unit ball of ${\mathsf A}$, we may consider the following ``local'' measure of how far $\psi$ is from being a homomorphism.
\begin{dfn}
Given a linear map $\psi:{\mathsf A}\to{\mathsf B}$, the \dt{multiplicative defect} of $\psi$ is
\begin{align*}
\operatorname{def}(\psi) := \sup \{ \norm{\psi(a_1a_2)-\psi(a_1)\psi(a_2)} \colon a_1,a_2 \in \operatorname{\rm ball}\nolimits_1({\mathsf A})\} \in [0, \infty].
\end{align*}
\end{dfn}
If $\psi\in{\mathcal L}({\mathsf A},{\mathsf B})$ and
we have some a priori upper bound on $\norm{\psi}$ (say $\norm{\psi}\le 1000$), it is easily checked that if $\operatorname{dist}(\psi)$ is small then $\operatorname{def}(\psi)$ is small. That is: perturbing a bounded homomorphism by a linear map of small norm yields a bounded linear map with small multiplicative defect. Ulam stability is then the phenomenon that, under certain conditions on the algebras ${\mathsf A}$ and ${\mathsf B}$, we can go the other way. The following definition is due to B.~E.~Johnson; see \cite[Definition~1.2]{BEJ_AMNM2}.
\begin{dfn}[AMNM pair]\label{amnmdef}
Let ${\mathsf A}$ and ${\mathsf B}$ be Banach algebras. The pair $({\mathsf A},{\mathsf B})$ is said to have the \dt{AMNM property}, or be an \dt{AMNM pair}, if the following holds:
\begin{quote}
For any $\varepsilon > 0$ and $L > 0$ there exists $\delta > 0$ such that for all $\phi \in \operatorname{\rm ball}\nolimits_L {\mathcal L}({\mathsf A},{\mathsf B})$ with $\operatorname{def}(\phi) < \delta$, we have $\operatorname{dist}(\phi) < \varepsilon$.
\end{quote}
\end{dfn}
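To fix ideas, here is a toy illustration of these notions (our own elementary example, not taken from \cite{BEJ_AMNM2}): take ${\mathsf A}={\mathsf B}=\mathbb C$.

```latex
% Toy example: A = B = C. Every linear map psi: C -> C has the form
% psi(z) = tz for some t in C, and the homomorphisms C -> C are exactly
% z |-> 0 and z |-> z. Hence
\[
\operatorname{def}(\psi)
  = \sup_{|z_1|,|z_2|\le 1} \lvert t z_1 z_2 - t^2 z_1 z_2\rvert
  = |t|\,|1-t|,
\qquad
\operatorname{dist}(\psi) = \min\bigl(|t|,\,|1-t|\bigr).
\]
% Since |t| + |1-t| >= 1, we have max(|t|,|1-t|) >= 1/2, and therefore
% dist(psi) = min(|t|,|1-t|) <= 2 |t| |1-t| = 2 def(psi):
% the pair (C, C) has the AMNM property.
```

In particular, a small defect forces $t$ to lie close to either $0$ or $1$; compare the dichotomy in Subsection \ref{ss:sharper dichotomy}.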
Johnson investigated a diverse range of AMNM pairs $({\mathsf A},{\mathsf B})$, in addition to providing some explicit examples of ${\mathsf A}$ and ${\mathsf B}$ which do \emph{not} form an AMNM pair. However, when it came to Banach algebras of the form ${\mathcal B}(E)$, only one infinite-dimensional example was considered in \cite{BEJ_AMNM2}. Namely, Johnson showed (see \cite[Proposition~6.3]{BEJ_AMNM2}) that the pair $({\mathcal B}(\ell_2),{\mathcal B}(\ell_2))$ has the AMNM property, which is striking since one is not making any assumptions about \ensuremath{{\rm w}^*}-\ensuremath{{\rm w}^*} continuity.
Johnson's result was extended from $\ell_2$ to $\ell_p$, for $1<p<\infty$, in the PhD thesis of Howey \cite[Theorem 5.2.1]{Howey}; his proof is essentially identical to Johnson's. In both cases, the argument has a somewhat ``monolithic'' feel, and freely uses special features of $\ell_p$, so that it is not obvious how one might adapt the proof to more general Banach spaces.
Our main theorem extends the Johnson--Howey results to a much wider range of Banach spaces, including the classical spaces $L_p[0,1]$ for $p\in [1,\infty]$, but also many of their complemented subspaces such as $\ell_p(\ell_2)$ or Rosenthal's $X_p$-spaces, and also any reflexive space with a subsymmetric basis. At the same time, we obtain results for pairs $({\mathcal B}(E),{\mathcal B}(X))$ where $E\not\cong X$ and $E$ need not be reflexive. To state our theorem, it will be convenient to make the following definition.
\begin{dfn}\label{d:clone system}
Let $E$ be a Banach space. A \dt{clone system} for $E$ is a bounded family $(P_i)_{i\in {\mathbb I}}$ of idempotents in ${\mathcal B}(E)$, such that the operator $P_iP_j$ has finite rank for all $i\neq j$, and $\sup_{i\in{\mathbb I}} d(E,\operatorname{\rm Ran}(P_i)) <\infty$ where $d$ denotes the Banach--Mazur distance.
\end{dfn}
\begin{thm}\label{t:headline result}
Let $X$ be any separable, reflexive Banach space. Let $E$ be a Banach space such that both of the following conditions hold:
\begin{romnum}
\item ${\mathcal K}(E)$, the algebra of compact operators on $E$, is amenable as a Banach algebra;
\item $E$ has an uncountable clone system.
\end{romnum}
Then the pair $({\mathcal B}(E),{\mathcal B}(X))$ has the AMNM property.
\end{thm}
Although the hypotheses of Theorem \ref{t:headline result} are rather technical, we will show in the next section that they hold for several classical examples of interest.
\end{subsection}
\begin{subsection}{Examples covered by our main theorem}\label{s: examples}
\begin{cor}\label{c:subsymm-shrinking}
Let $E$ be a Banach space with a subsymmetric shrinking basis. Then $({\mathcal B}(E),{\mathcal B}(X))$ is an AMNM pair for every reflexive and separable~$X$.
\end{cor}
Note that in this corollary, the hypothesis on $E$
is satisfied by $\ell_p$ for all $p\in (1,\infty)$ and $c_0$ (see \cite[Section~9.2]{ak}), and also for several natural families of Orlicz sequence spaces (see \cite[Propositions~4.a.4 and 3.a.3]{LTbook_combined}) and for Lorentz sequence spaces (see \cite[Propositions~4.e.3 and 1.c.12]{LTbook_combined}).
\begin{proof}[Proof of Corollary~\ref{c:subsymm-shrinking}.]
By \cite[Theorems 4.2 and 4.5]{GJW_isr}, ${\mathcal K}(E)$ is amenable.
The construction of an uncountable clone system for $E$ is a straightforward consequence of the definition of ``subsymmetric'' and the existence of uncountable almost disjoint families of subsets of $\mathbb N$; given such a family $\mathcal{D}\subset\mathcal{P}(\mathbb N)$ and
a subsymmetric basis $(u_n)_{n\ge 1}$ for~$E$,
for each $S\in \mathcal{D}$
define $P_S$ to be the projection $\sum_{n\ge 1} \lambda_n u_n \mapsto \sum_{n\in S} \lambda_n u_n$. For details, see e.g.~the proof of \cite[Proposition 3.5(1)]{HorTar} (although this technique was already well known to specialists in Banach space theory).
\end{proof}
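For completeness, we recall the folklore construction of an uncountable almost disjoint family used above (a routine sketch, requiring no additional hypotheses):

```latex
% Folklore: an uncountable almost disjoint family D in P(N).
% Fix a bijection between N and Q; it then suffices to produce such a
% family inside P(Q). For each irrational number r, choose a set S_r of
% distinct rationals forming a sequence converging to r, and set
\[
{\mathcal D} := \{\, S_r \colon r\in\mathbb R\setminus\mathbb Q \,\}.
\]
% If r and r' are distinct irrationals, then S_r \cap S_{r'} is finite,
% since all but finitely many elements of S_r lie close to r and hence
% away from r'. Thus D is an uncountable family of infinite subsets of N
% with pairwise finite intersections.
```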
The construction of an uncountable clone system in Corollary \ref{c:subsymm-shrinking} only used the fact that $E$ possessed a subsymmetric basis; the shrinking condition was needed to invoke results from \cite{GJW_isr} on amenability of ${\mathcal K}(E)$. On the other hand, it is well known that ${\mathcal K}(\ell_1)$ is amenable: this is a special case of \cite[Theorem 4.7]{GJW_isr}. We may therefore run the same argument as before to obtain an extra example.
\begin{cor}\label{c: ell_1 amnm}
$({\mathcal B}(\ell_1),{\mathcal B}(X))$ is an AMNM pair for every reflexive and separable $X$.
\end{cor}
The spaces $L_p[0,1]$ do not have a subsymmetric basis unless $p=2$; see \textit{e.g.}\ \cite[Theorem~21.2, Chapter~II, p.~568]{Sing}. In fact, $L_1[0,1]$ does not even have an unconditional basis; see \textit{e.g.}\ \cite[Theorem~6.3.3]{ak}.
Thus, the next corollary shows that Corollaries \ref{c:subsymm-shrinking} and \ref{c: ell_1 amnm} are far from describing the full extent of the spaces covered by Theorem \ref{t:headline result}.
\begin{cor}
Let $p\in [1,\infty]$. Then $({\mathcal B}(L_p[0,1]),{\mathcal B}(X))$ is an AMNM pair for every reflexive and separable $X$.
\end{cor}
\begin{proof}
By \cite[Theorem 4.7]{GJW_isr} ${\mathcal K}(L_p[0,1])$ is amenable.
For $1\le p <\infty$, an uncountable clone system for $L_p[0,1]$ is given by the construction in \cite[Proposition 3.5]{HorTar}. While that construction does not work for $p=\infty$, we recall that by a celebrated application of Pe\l czy\'nski's decomposition method $L_\infty[0,1] \cong \ell_\infty$ as Banach spaces. Then it is simple to construct an uncountable clone system for $\ell_\infty$ using an uncountable family of almost disjoint subsets of $\mathbb N$, as in previous proofs.
\end{proof}
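To spell out the last step of the proof in the case $p=\infty$: given an uncountable almost disjoint family ${\mathcal D}$ of infinite subsets of $\mathbb N$, the required projections can be taken to be indicator multipliers (a routine sketch):

```latex
% Sketch: a clone system for ell_infty. For each S in D, define the
% coordinatewise multiplier
\[
P_S\bigl((x_n)_{n\ge 1}\bigr) := \bigl(\mathbf 1_S(n)\,x_n\bigr)_{n\ge 1}
\qquad \bigl((x_n)\in\ell_\infty\bigr).
\]
% Each P_S is an idempotent of norm 1, and Ran(P_S) = ell_infty(S) is
% isometrically isomorphic to ell_infty, since S is infinite. Finally,
% P_S P_{S'} is the multiplier by the indicator of S \cap S', which has
% finite rank for distinct S, S' in D because S \cap S' is finite.
```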
For our final corollary, we rely on recent work of Johnson--Phillips--Schechtman \cite{JPS_SHAI}, which we learned of after the initial work was done on this paper. For details we refer to \cite{ros} and \cite{JPS_SHAI}.
\begin{cor}\label{c:JPS(hash)}
Let $p\in (1,2)\cup(2,\infty)$. Then $({\mathcal B}(E),{\mathcal B}(X))$ is an AMNM pair for every reflexive and separable $X$, whenever $E$ is any of the following Banach spaces:
\begin{romnum}
\item $\ell_p\oplus\ell_2$;
\item $\ell_p(\ell_2) \equiv \ell_p(\mathbb N; \ell_2)$;
\item $\overbrace{X_p \otimes_p \dots \otimes_p X_p}^{n}$ for some $n\in\mathbb N$, where $X_p$ denotes Rosenthal's $X_p$-space and $\otimes_p$ denotes the tensor product for closed subspaces of $L_p[0,1]$.
\end{romnum}
\end{cor}
\begin{proof}
All of the listed choices for $E$ are complemented subspaces of $L_p[0,1]$, and hence are $\mathscr{L}_p$-spaces in the sense of Lindenstrauss--Pe\l czy\'nski by \cite[Theorem~III]{lr}. Thus ${\mathcal K}(E)$ is amenable by \cite[Theorem 6.4]{GJW_isr}, so it only remains to show that $E$ has an uncountable clone system.
In \cite[Definitions~1.2 and 2.1]{JPS_SHAI} the notion of an unconditional finite dimensional Schauder decomposition (UFDD) with a so-called \dt{property $(\sharp)$} is introduced. We do not give the precise definition here, as the properties we need will be clear from the arguments below. It follows from Propositions~2.4 and 2.5 and the paragraph after Definition~2.1 in \cite{JPS_SHAI} that all of the listed choices for $E$ have a UFDD with $(\sharp)$ with some constant $K>0$, in the sense of \cite[Definition~2.1]{JPS_SHAI}.
We now show that whenever $E$ is a Banach space with a UFDD that has property $(\sharp)$ with some constant $K>0$, then $E$ has an uncountable clone system.
Take a UFDD $(E_n)$ with property $(\sharp)$ with some constant $K >0$. By taking an uncountable almost disjoint family ${\mathcal D}$ on $\mathbb N$, we obtain that $E_S:=\overline{\spanning}(E_n \colon n \in S)$ is $K$-isomorphic to $E$ for each $S \in {\mathcal D}$. Hence $\sup_{S \in {\mathcal D}} d(E,E_S) \le K$.
As outlined on page~2 in \cite{JPS_SHAI}, for every $B \subseteq \mathbb N$ there is an idempotent $P_B \in {\mathcal B}(E)$ such that $\operatorname{\rm Ran}(P_B) = \overline{\spanning}(E_n \colon n \in B)$.
Moreover, there is a $C >0$ (called the \dt{suppression constant} in \cite{JPS_SHAI}) such that $\sup_{B \subseteq \mathbb N} \norm{ P_{B} } \le C$. So $P_S \in {\mathcal B}(E)$ is an idempotent with $\operatorname{\rm Ran}(P_S)=E_S$ and $\norm{ P_S } \le C$ for each $S \in {\mathcal D}$. Also, $\operatorname{\rm Ran}(P_S P_{S'}) = \overline{\spanning}(E_n \colon n \in S \cap S')$ is finite-dimensional, whenever $S, S' \in {\mathcal D}$ are distinct.
Thus $E$ has an uncountable clone system, as required.
\end{proof}
We hope that this selection of examples, while not exhaustive, shows that one can go far beyond the cases $E=X=\ell_p$ ($1<p<\infty$) studied by Johnson and Howey.
Even for those special cases, our proof of Theorem \ref{t:headline result} makes several technical improvements over their approach: we provide an argument with clearer structure, and we obtain better constants, which in principle could be made explicit.
\begin{rem}\label{r: tsirelson}
One can show that the Tsirelson space $T$ (as constructed by Figiel and Johnson~\cite{FJ}) has an uncountable clone system. This may be folklore, but we include a proof in an appendix for sake of completeness (see Proposition \ref{p:tsirelson}). On the other hand, Blanco and Gr{\o}nb{\ae}k proved that ${\mathcal K}(T)$ is \emph{not} amenable, see \cite[Corollary~5.8]{bg}, and so Theorem \ref{t:headline result} cannot be applied to ${\mathcal B}(T)$. It is an open problem whether the pair $({\mathcal B}(T),{\mathcal B}(T))$ has the AMNM property, and we believe this would be an interesting case to study further.
\end{rem}
\paragraph{Note added in proof.} Let $K$ be an uncountable, compact metric space. The second-named author has recently shown that there exists an uncountable clone system in $C(K)$; details of this result will appear elsewhere. Since ${\mathcal K}(C(K))$ is amenable by \cite[Theorem~4.7]{GJW_isr}, Theorem \ref{t:headline result} shows that the pair $({\mathcal B}(C(K)), {\mathcal B}(X))$ has the AMNM property for every reflexive and separable~$X$.
\end{subsection}
\begin{subsection}{Comments on the proof of our main theorem, and other results of interest}
Theorem \ref{t:headline result} will follow by combining several other technical results. In this section we wish to highlight two of them, which correspond to the two conditions in the theorem. Proofs will be given in later sections.
The following definition will be used repeatedly throughout our arguments.
\begin{dfn}[Self-modular maps with respect to a subalgebra]
\label{d:self-modular}
Let ${\mathsf A}$ and ${\mathsf B}$ be Banach algebras and let ${\mathsf D}$ be a closed subalgebra of ${\mathsf A}$. We denote by $\selfhom{{\mathsf D}}({\mathsf A},{\mathsf B})$ the set of all bounded linear maps $\theta:{\mathsf A}\to{\mathsf B}$ which satisfy
\[
\theta(ar)=\theta(a)\theta(r)\text{ and }\theta(ra)=\theta(r)\theta(a)\quad\text{for all $a\in {\mathsf A}$ and all $r\in{\mathsf D}$.}
\]
\end{dfn}
Our main technical innovation is the following theorem, which provides a significant generalization of the main result in \cite{BEJ_AMNM2}.
\begin{thm}[AMNM with respect to an amenable subalgebra]\label{t:main innovation}
Let ${\mathsf A}$ be a Banach algebra with a closed amenable subalgebra ${\mathsf D}_0$, and let ${\mathsf B}$ be a unital dual Banach algebra with an isometric predual. Fix some $L\ge 1$. Then there exists a constant $C'\ge 1$ (possibly depending on $L$ and ${\mathsf D}_0$) such that the following holds:
whenever $\psi\in{\mathcal L}({\mathsf A},{\mathsf B})$ satisfies $\norm{\psi}\le L$ and $C'\operatorname{def}(\psi)\le 1$, there exists $\theta\in \selfhom{{\mathsf D}_0}({\mathsf A},{\mathsf B})$ with $\norm{\theta-\psi} \le C'\operatorname{def}(\psi)$.
\end{thm}
The case where ${\mathsf A}$ itself is amenable is \cite[Theorem 3.1]{BEJ_AMNM2}, but in order to obtain our generalization, it does not suffice to bootstrap from the earlier result. Instead we rework the arguments in Johnson's proof, introducing a version of the multiplicative defect relative to a closed subalgebra, and putting certain calculations from that proof in the framework of ``approximate cobounding'' for a modified version of the Hochschild cochain complex. This will be treated in Sections \ref{s:using improving} and \ref{s:building improving}.
We note that in the setting of Ulam stability for bounded representations of discrete groups on Hilbert space, a result analogous to Theorem \ref{t:main innovation} was given in \cite[Theorem 3.2]{BOT_ulam}; the proof makes use of features particular to groups and to operators on Hilbert space.
Our other main ingredient in the proof of Theorem \ref{t:headline result} is the following proposition, whose proof will be given in Section \ref{ss:prove MvN}. It can be viewed as a ``perturbed'' version of \cite[Proposition~3.8]{HorTar} (see also \cite[Corollary~6.16]{bp}), and it generalizes an argument of Johnson (from the proof of \cite[Proposition 6.3]{BEJ_AMNM2}) in the case $X=E=\ell_2$. Moreover, we obtain better constants than those obtained by just repeating the steps in \cite{BEJ_AMNM2}; see Remark \ref{r:finesse} for further details.
\begin{prop}\label{p:MvN trick}
Let $E$ be a Banach space with an uncountable clone system.
There exists a constant $c_E\in (0,1]$ such that the following holds: whenever $X$ is a separable Banach space, and $\psi: {\mathcal B}(E)/{\mathcal K}(E) \to {\mathcal B}(X)$ is bounded linear with $\operatorname{def}(\psi)\le c_E$, we have $\norm{\psi} \le \frac{3}{2}\operatorname{def}(\psi)$.
\end{prop}
The key point here is that the constant $c_E$ does not depend on the chosen $\psi$, and so $\operatorname{def}(\psi)$ could be much smaller than $c_E$.
Note that in the conclusion of Proposition \ref{p:MvN trick}, we obtain the constant $3/2$ rather than some constant depending on the Banach algebras ${\mathcal B}(E)$ and ${\mathcal B}(X)$.
Obtaining a universal constant (such as $3/2$) is not essential to the proof of Theorem \ref{t:headline result} but it makes some of the epsilon-delta chasing significantly simpler.
\end{subsection}
\end{section}
\begin{section}{Definitions and preliminary results}
\begin{subsection}{Basic properties of the multiplicative defect}
First we have a general lemma. (A similar estimate is given without proof in \cite[Proposition~1.1]{BEJ_AMNM2}.)
\begin{lem}\label{l:defect of perturbed}
Let ${\mathsf A}$ and ${\mathsf B}$ be Banach algebras and let $\psi\in{\mathcal L}({\mathsf A},{\mathsf B})$.
Suppose that $\theta\in {\mathcal L}({\mathsf A},{\mathsf B})$ satisfies $\norm{\theta-\psi}\le 1$. Then
\[
\operatorname{def}(\theta)\le \operatorname{def}(\psi) + 2\norm{\theta-\psi} (1+\norm{\psi}).
\]
\end{lem}
\begin{proof}
Writing $\theta=\psi+\gamma$, for each $a$ and $b$ in ${\mathsf A}$ we have
\[
\theta(ab)-\theta(a)\theta(b)
= \psi(ab) + \gamma(ab) - \psi(a)\psi(b) - \psi(a)\gamma(b)-\gamma(a)\psi(b)-\gamma(a)\gamma(b).
\]
Hence $\operatorname{def}(\theta) \le \operatorname{def}(\psi) + \norm{\gamma} + 2 \norm{\gamma} \norm{\psi} + \norm{\gamma}^2 $. Since we are assuming $\norm{\gamma}\le 1$, the desired inequality follows.
\end{proof}
In the rest of this section we collect some general results concerning approximately multiplicative maps between Banach algebras, which do not seem to be spelled out in \cite{BEJ_AMNM2}. These may be useful for future work on the AMNM property for other kinds of Banach algebras.
It will be convenient to use the following terminology: given $\eta\in [0,\infty)$, we say that a linear map $\psi:{\mathsf A}\to{\mathsf B}$ is \dt{$\eta$-multiplicative} if $\operatorname{def}(\psi)\le\eta$; equivalently, if
\[ \norm{\psi(ab)-\psi(a)\psi(b)} \le \eta\norm{a}\norm{b} \qquad\text{for all $a,b\in {\mathsf A}$.} \]
The point is that often we are not concerned with the precise value of the multiplicative defect, but merely with whether it is controlled by some (small) constant or parameter.
\begin{lem}\label{l:absorption trick}
Let ${\mathsf A}$ and ${\mathsf B}$ be Banach algebras and let $\eta\ge 0$. Let $\psi:{\mathsf A}\to {\mathsf B}$ be linear and $\eta$-multiplicative.
\begin{romnum}
\item\label{li:left unit}
Suppose $ab=b$ with $\norm{\psi(a)}\le 1/3$. Then $\norm{\psi(b)}\le \frac{3}{2} \eta\norm{a}\norm{b}$.
\item\label{li:right unit}
Suppose $bc=b$ with $\norm{\psi(c)}\le 1/3$. Then $\norm{\psi(b)}\le \frac{3}{2} \eta\norm{b}\norm{c}$.
\end{romnum}
\end{lem}
\begin{proof}
We prove \ref{li:left unit}; the proof for \ref{li:right unit} is identical with left and right swapped.
Since $ab=b$, $\norm{\psi(b)-\psi(a)\psi(b)} \le \eta\norm{a}\norm{b}$. Hence
\[
\norm{\psi(b)} \le \eta\norm{a}\norm{b} + \norm{\psi(a)\psi(b)} \le \eta\norm{a}\norm{b} + \frac{1}{3}\norm{\psi(b)}.
\]
Rearranging, we obtain $\tfrac{2}{3}\norm{\psi(b)}\le \eta\norm{a}\norm{b}$, which gives the desired upper bound on $\norm{\psi(b)}$.
\end{proof}
The following corollary is immediate.
\begin{cor}\label{c:small on identity}
Let ${\mathsf A}$ and ${\mathsf B}$ be Banach algebras with ${\mathsf A}$ unital. Let $\psi:{\mathsf A}\to {\mathsf B}$ be linear and $\eta$-multiplicative. If $\norm{\psi(1_{\mathsf A})}\le 1/3$ then
$\psi$ is bounded with
$\norm{\psi}\le 3\eta/2$.
\end{cor}
\begin{rem}
As observed in Section 1 of \cite{BEJ_AMNM2}, for a general linear $T:{\mathsf A}\to {\mathsf B}$ one can have $\operatorname{def}(T)$ small while $T$ has large norm, even when ${\mathsf A}=\mathbb C$. But examination of Example 1.5 in that paper shows that $T(1_{\mathsf A})$ is large in that example. Corollary \ref{c:small on identity} shows that this is the only obstruction.
\end{rem}
The next result will be applied to show that if $p$ is an idempotent in a unital Banach algebra ${\mathsf A}$ and $p$ is Murray--von Neumann equivalent to $1_{\mathsf A}$, then $\psi(p)$ being small implies $\psi(1_{\mathsf A})$ is small, provided that $\operatorname{def}(\psi)$ is small.
Normally, in perturbing exact algebraic arguments, one has to impose an {\it a~priori} upper bound on norms: informally, large times zero equals zero, but large times small might not be small. It is therefore somewhat surprising that in our result, we do not need to impose such a bound on $\norm{\psi}$.
\begin{prop}\label{p:equivalent proj}
Let ${\mathsf A}$ and ${\mathsf B}$ be Banach algebras. Let $u,v\in {\mathsf A}$ be such that $uv$ and $vu$ are idempotents. Let $\psi:{\mathsf A}\to {\mathsf B}$ be linear and $\eta$-multiplicative, for some $\eta$ satisfying $0\le \eta\norm{u}^3\norm{v}^3 \le 2/9$. If $\norm{\psi(uv)}\le 1/3$ then $\norm{\psi(vu)}\le 1/3$.
\end{prop}
\begin{proof}
If $vu=0$ then $\psi(vu)=0$, so there is nothing to prove. Hence we assume $vu\neq 0$; since $vu$ is then a nonzero idempotent, $1\le \norm{vu}\le\norm{v}\norm{u}$.
Since $uv$ is an idempotent, $uvu=uv\cdot uvu$ and $vuv=vuv\cdot uv$. Applying Lemma \ref{l:absorption trick} gives
\[
\norm{\psi(uvu)} \le \frac{3}{2} \eta \norm{uv} \norm{uvu}
\quad\text{and}\quad
\norm{\psi(vuv)} \le \frac{3}{2} \eta \norm{vuv} \norm{uv}
\]
and so
\[
\norm{\psi(vuv)\psi(uvu)}
\le \left(\frac{3}{2} \eta\right)^2 \norm{u}^5\norm{v}^5
\le \left(\frac{3}{2} \eta\right)^2 \norm{u}^6\norm{v}^6
\le \left(\frac{3}{2} \right)^2 \left(\frac{2}{9}\right)^2 = \frac{1}{9} \,.
\]
But since $vu$ is an idempotent, $vuv\cdot uvu= vu$. Hence
\[ \norm{\psi(vu)-\psi(vuv)\psi(uvu)} \le \eta\norm{vuv}\norm{uvu} \le \eta\norm{u}^3\norm{v}^3 \le \frac{2}{9} \]
and so $\norm{\psi(vu)} \le \frac{2}{9} + \norm{\psi(vuv)\psi(uvu)} \le \frac{1}{3}$.
\end{proof}
\begin{rem}
The choice of $\tfrac{1}{3}$ is somewhat arbitrary, and the reader may wonder why we did not attempt to prove sharper inequalities. In fact, it follows automatically from Corollary \ref{c:norm-dichotomy} below that if $\psi(uv)$ is ``moderately small'' then $\psi(vu)$ will be ``very small''. However, this refinement is not needed for the proofs of our main results.
\end{rem}
\end{subsection}
\begin{subsection}{Dual Banach algebras}
There are various equivalent formulations in the literature of the notion of a dual Banach algebra. We follow the definition in \cite[Section 1]{Daws_DBArep}, although our terminology is slightly different and is influenced by \cite[Section 2]{DawsPhamWhite}.
\begin{dfn}\label{d:DBA}
Let ${\mathsf B}$ be a Banach algebra and let ${\mathsf V}$ be a Banach space. We say that
${\mathsf B}$ is a \dt{dual Banach algebra} with \dt{isometric predual}~${\mathsf V}$, if there is an isometric isomorphism of Banach spaces $j:{\mathsf B}\to{\mathsf V}^*$ such that multiplication ${\mathsf B}\times{\mathsf B}\to{\mathsf B}$ is separately $\sigma({\mathsf B},{\mathsf V})$-continuous.
\end{dfn}
Strictly speaking, in this definition, the choice of isometric isomorphism $j:{\mathsf B}\to{\mathsf V}^*$ should be part of the data. However, in most examples that occur in practice, it is clear from context which map $j$ is being used. Moreover, as discussed in \cite[Section 2]{DawsPhamWhite}:
\begin{itemize}
\item the ``dual Banach algebra structure'' induced on ${\mathsf B}$ only depends on the image of the isometry $j^*\kappa:{\mathsf V} \to {\mathsf B}^*$, where $\kappa$ is the canonical embedding of ${\mathsf V}$ in its bidual;
\item the condition that multiplication in ${\mathsf B}$ be separately $\sigma({\mathsf B},{\mathsf V})$-continuous is equivalent to requiring $j^*\kappa({\mathsf V})$ to be a sub-${\mathsf B}$-bimodule of ${\mathsf B}^*$.
\end{itemize}
This latter condition is often easier to check in practice.
If the choice of isometric predual for ${\mathsf B}$ is not important, or is clear from context, then we will usually just refer to the \ensuremath{{\rm w}^*}-topology on ${\mathsf B}$ without mentioning the particular predual.
\begin{eg}
The following Banach algebras are dual Banach algebras with an isometric predual.
\begin{itemize}
\item[--] $M(G)$ where $G$ is a locally compact group, with the isometric predual being $C_0(G)$;
\item[--] any von Neumann algebra ${\mathsf N}$, with the isometric predual being the space of normal linear functionals on ${\mathsf N}$;
\item[--] ${\mathcal B}(X)$ for any reflexive Banach space $X$, with the isometric predual being the projective tensor product $X^*\mathbin{\widehat{\otimes}} X$.
\end{itemize}
\end{eg}
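For the third example, the identification of ${\mathcal B}(X)$ with $(X^*\mathbin{\widehat{\otimes}} X)^*$ goes via the standard trace-type pairing, which we recall for convenience (this summary is standard and is not specific to the cited sources):

```latex
% For reflexive X, bounded bilinear forms on X* x X correspond to
% bounded operators on X, giving (X* \widehat{\otimes} X)* ~= B(X) via
\[
\langle T,\ f\otimes x\rangle := f(Tx)
\qquad \bigl(T\in{\mathcal B}(X),\ f\in X^*,\ x\in X\bigr),
\]
% extended by linearity and continuity. For fixed S in B(X), the
% identities
%   <ST, f \otimes x> = <T, (S^* f) \otimes x>   and
%   <TS, f \otimes x> = <T, f \otimes (Sx)>
% show that the predual is invariant under both module actions, which
% gives separate w*-continuity of multiplication.
```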
\begin{rem}
It was shown by Daws \cite[Theorem 3.5 and Corollary 3.8]{Daws_DBArep} that the last of these examples is in some sense a universal one: given any dual Banach algebra ${\mathsf B}$ with an isometric predual, there exists a reflexive Banach space $X$ and an isometric, \ensuremath{{\rm w}^*}-\ensuremath{{\rm w}^*}-continuous algebra homomorphism ${\mathsf B}\to{\mathcal B}(X)$.
\end{rem}
Throughout the paper, we will work with \emph{isometric} preduals, as they suffice to cover all our applications. Our methods would also work for isomorphic (possibly non-isometric) preduals, although one would need to be much more careful in the estimates when keeping track of the constants.
For instance, if the isomorphism $j:{\mathsf B}\to {\mathsf V}^*$ is not assumed to be isometric, and we wish to take $\sigma({\mathsf B},{\mathsf V})$-cluster points of a net $(b_i)$ in $\operatorname{\rm ball}\nolimits_1({\mathsf B})$, then it is not clear why $j^{-1}(\ensuremath{{\rm w}^*}\lim_i j(b_i))$ should have norm $\le 1$.
\end{subsection}
\begin{subsection}{A sharper dichotomy result}
\label{ss:sharper dichotomy}
This section is not required for the proof of our main result, but it is included since the proofs are elementary and since it may be useful in future work.
The following lemma is inspired by similar observations/calculations in \cite[Section~3.1]{choi_jaust13}, but we are able to give a simpler proof.
\begin{lem}\label{l:kicsi-nagy}
Let $x \in [0,\infty)$ and suppose that $x\le x^2 + c$ for some $c\in [0,2/9]$. Then
\[ \min(x, 1-x) \le \frac{3c}{2} \le \frac{1}{3} \;. \]
\end{lem}
\begin{proof}
By comparing the graphs of the functions $f(u)=u$ and $g(u)=u^2+c$ for $u\ge 0$, which cross in exactly two points, we see that $x\in [0, u_1] \cup [u_2,\infty)$, where $0\le u_1< u_2\le 1$ are the solutions of $u=u^2+c$. Explicitly
\[ u_1 = \frac{1}{2} (1- \sqrt{1-4c}) \quad,\quad u_2 = \frac{1}{2} (1+ \sqrt{1-4c}) =1-u_1\;. \]
It therefore suffices to prove that $u_1 \le 3c/2$. This is equivalent to proving that $1-3c \le \sqrt{1-4c}$, which (since both sides are non-negative) is equivalent to proving that $(1-3c)^2\le 1-4c$. Since $0\le c\le 2/9$, we have $9c^2\le 2c$, and therefore $1-6c+9c^2 \le 1-4c$ as required.
\end{proof}
\begin{cor}[A norm dichotomy]\label{c:norm-dichotomy}
Let ${\mathsf A}$, ${\mathsf B}$ be Banach algebras and let $p$ be an idempotent in ${\mathsf A}$. Let $\delta$ satisfy $0\le\delta\norm{p}^2 \le \tfrac{2}{9}$, and suppose $\psi\in{\mathcal L}({\mathsf A},{\mathsf B})$ is $\delta$-multiplicative. Then either $\norm{\psi(p)}\le \frac{3}{2}\norm{p}^2\delta \le \tfrac{1}{3}$, or $\norm{\psi(p)}\ge 1-\tfrac{3}{2}\norm{p}^2\delta \ge \tfrac{2}{3}$.
\end{cor}
The point of this result is that we do not need {\it a priori} control on $\norm{\psi}$ to choose how small $\delta$ must be; nor do we need any holomorphic functional calculus for the codomain~${\mathsf B}$.
\begin{proof}
Since $p^2=p$, we have
$\norm{\psi(p)} \le \norm{\psi(p)-\psi(p)^2}+ \norm{\psi(p)}^2 \le \delta\norm{p}^2 + \norm{\psi(p)}^2$. Now applying Lemma \ref{l:kicsi-nagy} completes the proof.
\end{proof}
\end{subsection}
\end{section}
\begin{section}{Towards a proof of the main theorem}
\begin{subsection}{Self-modular maps relative to an ideal}
Throughout this section, ${\mathsf B}$ is a dual Banach algebra with an isometric predual
(Definition~\ref{d:DBA}).
We denote \ensuremath{{\rm w}^*}-limits in ${\mathsf B}$ by $\lim\nolimits^\sigma$.
\begin{prop}[Decomposition relative to an ideal]\label{p:decompose}
Let ${\mathsf B}$ be a dual Banach algebra with an isometric predual. Let ${\mathsf A}$ be a Banach algebra and ${\mathsf J}$ be a closed ideal in ${\mathsf A}$ with a bounded approximate identity (b.a.i.). Then each $\theta\in\selfhom{{\mathsf J}}({\mathsf A},{\mathsf B})$ can be written as $\theta=\phi+\theta_s$, where $\phi:{\mathsf A}\to{\mathsf B}$ is a bounded homomorphism, ${\theta_s\vert}_{\mathsf J}=0$, and $\operatorname{def}(\theta_s)=\operatorname{def}(\theta)$.
\end{prop}
\begin{proof}
Let ${\mathsf B}_0$ denote the \ensuremath{{\rm w}^*}-closure of $\theta({\mathsf J})$ inside~${\mathsf B}$.
Since ${\mathsf J}$ is an ideal and multiplication in ${\mathsf B}$ is separately \wstar-\wstar-continuous, the self-modular property of $\theta$ implies that
\begin{equation}\label{eq:1}
\theta(a){\mathsf B}_0\subseteq {\mathsf B}_0 \quad\text{and}\quad {\mathsf B}_0\theta(a)\subseteq {\mathsf B}_0\quad\text{for all $a\in {\mathsf A}$.}
\end{equation}
If $a_1,a_2\in {\mathsf A}$ and $x\in {\mathsf J}$, then repeated use of the self-modularity property yields
\begin{equation}\label{eq:2}
\theta(x)\theta(a_1a_2) = \theta(xa_1a_2) = \theta(xa_1)\theta(a_2) = \theta(x)\theta(a_1)\theta(a_2);
\end{equation}
hence, by taking \ensuremath{{\rm w}^*}-limits in \eqref{eq:2}, we have
\begin{equation}\label{eq:3}
b \theta(a_1a_2)= b\theta(a_1)\theta(a_2) \qquad\text{for all $a_1,a_2\in {\mathsf A}$ and all $b\in {\mathsf B}_0$.}
\end{equation}
Now let $(e_i)$ be a b.a.i.\ in ${\mathsf J}$. Passing to a subnet, we may assume that $\theta(e_i)$ \ensuremath{{\rm w}^*}-converges in~${\mathsf B}$ to some $p\in {\mathsf B}_0$. Then for any $x\in {\mathsf J}$,
\begin{equation}\label{eq:4}
\begin{aligned}
\theta(x)=\lim_i \theta(e_ix)
& = \lim_i\theta(e_i)\theta(x) \\
& = \lim\nolimits^\sigma_i \theta(e_i)\theta(x) = \left(\lim\nolimits^\sigma_i\theta(e_i)\right)\theta(x) = p\theta(x),
\end{aligned}
\end{equation}
and similarly $\theta(x)=\theta(x)p$.
Hence, by another application of \wstar-\wstar-continuity,
\begin{equation}\label{eq:5}
pb=b=bp \qquad\text{for all $b\in {\mathsf B}_0$.}
\end{equation}
(In particular, $p$ is idempotent, although we do not use this explicitly in what follows.)
For each $a\in {\mathsf A}$, \eqref{eq:1} implies that $\theta(a)p\in {\mathsf B}_0$ and $p\theta(a)\in {\mathsf B}_0$. Hence by \eqref{eq:5}
\begin{equation}\label{eq:6}
p\theta(a)p =\theta(a)p\quad\text{and}\quad p\theta(a)=p\theta(a)p \quad\text{for all $a\in {\mathsf A}$.}
\end{equation}
Now define $\phi$ by putting $\phi(a):= p\theta(a)$. Combining \eqref{eq:3} and \eqref{eq:6}, for all $a_1,a_2\in {\mathsf A}$ we have
\begin{equation}\label{eq:7}
\phi(a_1a_2)=p\theta(a_1)\theta(a_2)=p\theta(a_1)p\theta(a_2) = \phi(a_1)\phi(a_2),
\end{equation}
and thus $\phi$ is multiplicative.
Put $\theta_s(a) := \theta(a)-p\theta(a)$. Clearly $\phi+\theta_s=\theta$, and \eqref{eq:4} implies that $\theta_s(x)=0$ for all $x\in {\mathsf J}$.
Finally, note that $\theta_s(a_1)p=0$ by \eqref{eq:6}. Hence, for all $a_1,a_2\in {\mathsf A}$,
\begin{equation}\label{eq:8}
\begin{aligned}
\theta_s(a_1)\theta_s(a_2) = \theta_s(a_1)\theta(a_2)
& = \theta(a_1)\theta(a_2)-p\theta(a_1)\theta(a_2) \\
& = \theta(a_1)\theta(a_2)-p\theta(a_1a_2),
\end{aligned}
\end{equation}
where the last equality follows from \eqref{eq:3}. Therefore
\begin{equation}
\theta_s(a_1a_2) -\theta_s(a_1)\theta_s(a_2)= \theta(a_1a_2)-\theta(a_1)\theta(a_2),
\end{equation}
and we conclude that $\operatorname{def}(\theta_s)=\operatorname{def}(\theta)$.
\end{proof}
\begin{rem}
If the b.a.i.\ in ${\mathsf J}$ has norm $\le M$, then the functions $\phi$ and $\theta_s$ in this result can be taken to satisfy $\norm{\phi}\le M\norm{\theta}$ and $\norm{\theta_s}\le (1+M)\norm{\theta}$. However, we will not need these bounds in the applications of Proposition \ref{p:decompose}.
\end{rem}
\end{subsection}
\begin{subsection}{The proof of Proposition \ref{p:MvN trick}}
\label{ss:prove MvN}
In this section we prove Proposition \ref{p:MvN trick}. For convenience, we repeat the statement:
\begin{quote}
{\itshape
Let $E$ be a Banach space with an uncountable clone system.
There exists a constant $c_E\in (0,1]$ such that the following holds: whenever $X$ is a separable Banach space, and $\psi: {\mathcal B}(E)/{\mathcal K}(E) \to {\mathcal B}(X)$ is bounded linear with $\operatorname{def}(\psi)\le c_E$, we have $\norm{\psi} \le \frac{3}{2}\operatorname{def}(\psi)$.
}
\end{quote}
We start by shifting perspective slightly in the definition of a clone system. It is well known (see \textit{e.g.}\ \cite[Lemma~1.4]{l03} for a proof) that an idempotent $P\in {\mathcal B}(E)$ satisfies $\operatorname{\rm Ran}(P)\cong E$ if and only if $P$ is Murray--von Neumann equivalent to $I_E$.
We state a quantitative version in the following lemma, whose proof is left to the reader.
\begin{lem}\label{l:shift POV}
Let $E$ be a Banach space and let $P\in{\mathcal B}(E)$ be an idempotent.
\begin{romnum}
\item If $\operatorname{\rm Ran}(P)\cong E$, then for every $\varepsilon >0$ there exist $U,V\in {\mathcal B}(E)$ such that $P=UV$, $I_E=VU$ and $\norm{U}\norm{V} \le (d(E,\operatorname{\rm Ran}(P))+ \varepsilon) \norm{P}$.
\item If $U,V\in{\mathcal B}(E)$ are such that $I_E=VU$ and $UV=P$, then $\operatorname{\rm Ran}(U)=\operatorname{\rm Ran}(P)$ and ${V\vert}_{\operatorname{\rm Ran}(P)}$ is an isomorphism from $\operatorname{\rm Ran}(P)$ onto $E$. Hence, $d(E,\operatorname{\rm Ran}(P))\le\norm{U}\norm{V}$ (and clearly $\norm{P}\le \norm{U}\norm{V}$).
\end{romnum}
\end{lem}
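To indicate the construction behind part (i): given $\varepsilon>0$, choose an isomorphism $S$ from $E$ onto $\operatorname{\rm Ran}(P)$ with $\norm{S}\norm{S^{-1}}\le d(E,\operatorname{\rm Ran}(P))+\varepsilon$, regard $S$ as an element of ${\mathcal B}(E)$, and put $U:=S$ and $V:=S^{-1}P\in{\mathcal B}(E)$. Since $P$ acts as the identity on $\operatorname{\rm Ran}(P)$,
\[
VU = S^{-1}PS = S^{-1}S = I_E \quad\text{and}\quad UV = SS^{-1}P = P,
\]
while $\norm{U}\norm{V}\le \norm{S}\norm{S^{-1}}\norm{P} \le \big(d(E,\operatorname{\rm Ran}(P))+\varepsilon\big)\norm{P}$.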
We recall that idempotents $p,q$ in a ring are said to be \dt{orthogonal} if $pq=0=qp$.
\begin{lem}\label{l:separation lemma}
Let ${\mathsf Q}$ be a Banach algebra containing an uncountable family $\Omega$ of pairwise orthogonal idempotents, and suppose $\sup_{p\in\Omega} \norm{p} \le L$ for some $L\ge 1$.
Let $X$ be a separable Banach space, and suppose $\psi\in{\mathcal L}({\mathsf Q},{\mathcal B}(X))$ is $\eta$-multiplicative for some $\eta>0$. Then $\norm{\psi(p)}\le 2\eta L^2$ for uncountably many $p\in\Omega$.
\end{lem}
\begin{proof}
For $\varepsilon>0$ let $\Omega_\varepsilon = \{ p\in \Omega \colon \norm{\psi(p)} > \varepsilon\}$. It suffices to show that $\Omega_{2\eta L^2}$ is countable; therefore, since $\Omega_{2\eta L^2} = \bigcup_{n=1}^\infty \Omega_{2\eta L^2 + 1/n}$, it suffices to show that $\Omega_c$ is countable for every $c> 2\eta L^2$.
Fix $c> 2\eta L^2$.
We may assume that $\Omega_c$ is infinite (otherwise there is nothing to prove); in particular, this implies $\norm{\psi} >0$.
For each $p\in\Omega_c$ pick a unit vector $x_p\in X$ such that $\norm{\psi(p)x_p}\ge c$, and let $y_p=\psi(p)x_p$.
If $r\in\Omega_c$ and $r\neq p$, then
\[ \norm{\psi(p)y_r} =\norm{\psi(p)\psi(r)x_r} \le \norm{\psi(p)\psi(r)} = \norm{\psi(p)\psi(r)-\psi(pr)} \le \eta L^2 \;; \]
on the other hand, since $\norm{\psi(p)-\psi(p)\psi(p)} \le \eta L^2$,
\[ \norm{\psi(p)y_p} = \norm{\psi(p)\psi(p)x_p} \ge \norm{\psi(p)x_p} - \eta L^2 \ge c -\eta L^2 \;. \]
Combining these inequalities yields $\norm{\psi(p) y_p -\psi(p) y_r} \ge c -2\eta L^2$. Hence
\[
\norm{y_p-y_r} \ge \frac{c-2\eta L^2}{\norm{\psi(p)}} \ge \frac{c-2\eta L^2}{\norm{\psi} L} > 0
\quad\text{for all $p,r\in\Omega_c$ with $p\neq r$.}
\]
Since $X$ is separable this is only possible if $\Omega_c$ is countable.
\end{proof}
\begin{proof}[Proof of Proposition \ref{p:MvN trick}]
Let $\Omega$ be an uncountable clone system for $E$. By Lemma \ref{l:shift POV}(i), there is a constant $C\ge 1$ such that each $P\in\Omega$ can be factorized as $P=UV$, for some $V$ and $U$ in ${\mathcal B}(E)$ satisfying $\norm{U}\norm{V}\le C$ and $VU=I_E$. We will show that the conclusion of Proposition \ref{p:MvN trick} holds with $c_E:= 6^{-1}C^{-3}$.
Let $\psi:{\mathcal B}(E)/{\mathcal K}(E)\to{\mathcal B}(X)$ be bounded linear. For convenience, let $\eta:= \operatorname{def}(\psi)$, and suppose that $\eta\le 6^{-1}C^{-3}$. Writing $q$ for the quotient homomorphism ${\mathcal B}(E)\to {\mathcal B}(E)/{\mathcal K}(E)$, note that
$q(\Omega)$ is an uncountable family of orthogonal idempotents in ${\mathcal B}(E)/{\mathcal K}(E)$ with $\norm{q(P)}\le \norm{P}\le C$ for every $P\in \Omega$. By Lemma \ref{l:separation lemma} with ${\mathsf Q}={\mathcal B}(E)/{\mathcal K}(E)$, there exists some $P\in\Omega$ such that
\[ \norm{\psi q(P)} \le 2 \eta C^2 \le 2\eta C^3 \le \frac{1}{3} \,.\]
(In fact there exist uncountably many, but we only need one!)
Consider $\psi q: {\mathcal B}(E)\to{\mathcal B}(X)$, which satisfies $\operatorname{def}(\psi q)=\operatorname{def}(\psi) = \eta$. We have
\[
\eta\norm{U}^3 \norm{V}^3 \le \eta C^3 \le \frac{1}{6} < \frac{2}{9}\;.
\]
Hence, applying Proposition \ref{p:equivalent proj} to the map $\psi q :{\mathcal B}(E)\to{\mathcal B}(X)$, we deduce that $\norm{\psi q(I_E)} \le 1/3$. Since $q(I_E)$ is the identity element of ${\mathcal B}(E)/{\mathcal K}(E)$, it follows from Corollary \ref{c:small on identity} that $\norm{\psi}\le 3\eta /2$ as required.
\end{proof}
\begin{rem}\label{r:finesse}
Comparing our proof of Proposition \ref{p:MvN trick} with Johnson's arguments in \cite{BEJ_AMNM2}: he uses the fact that in any Banach algebra an element $x$ for which $\norm{x^2-x}$ is ``small'' is ``close in norm'' to a genuine idempotent. The proof of this result relies on holomorphic functional calculus, and hence has implicit constants depending on the given algebra. Our approach bypasses this issue.
\end{rem}
The proof works just as well if ${\mathcal B}(E)$ is replaced by an arbitrary unital Banach algebra ${\mathsf A}$ and ${\mathcal K}(E)$ by an arbitrary closed ideal ${\mathsf J} \unlhd {\mathsf A}$. However, we do not know of natural examples satisfying the \emph{hypotheses} of Proposition \ref{p:MvN trick} that are not of the form ${\mathsf A}={\mathcal B}(E)$ with ${\mathsf J}$ a closed operator ideal, so it seems more appropriate to restrict ourselves to this setting.
\end{subsection}
\begin{subsection}{Deducing the main theorem from other results}
We now show how Theorem \ref{t:headline result} will follow from combining Theorem \ref{t:main innovation}, Proposition \ref{p:decompose} and Proposition \ref{p:MvN trick}. For convenience let us restate the theorem:
\begin{quote}
{\itshape
Let $X$ be any separable, reflexive Banach space. Let $E$ be a Banach space such that both of the following conditions hold:
\begin{romnum}
\item ${\mathcal K}(E)$, the algebra of compact operators on $E$, is amenable as a Banach algebra;
\item $E$ has an uncountable clone system.
\end{romnum}
Then the pair $({\mathcal B}(E),{\mathcal B}(X))$ has the AMNM property.
}
\end{quote}
\begin{proof}[Proof of Theorem \ref{t:headline result}, assuming Theorem \ref{t:main innovation}]
Since $X$ is reflexive, ${\mathcal B}(X)$ is a dual Banach algebra with an isometric predual.
Hence we may apply Theorem~\ref{t:main innovation} with ${\mathsf A}={\mathcal B}(E)$, ${\mathsf D}_0={\mathcal K}(E)$ and ${\mathsf B}={\mathcal B}(X)$.
Fix some $L\ge 1$, and let $C'\ge1$ satisfy the conclusion of Theorem~\ref{t:main innovation} (recall that $C'$ may depend on the constant $L$ and also on the Banach space~$E$).
Given $\varepsilon>0$, we fix some $\delta > 0$ to be determined later.
Let $\psi:{\mathcal B}(E)\to{\mathcal B}(X)$ satisfy $\norm{\psi}\le L$ and $\operatorname{def}(\psi)\le \delta$. It suffices to prove that there exists some bounded homomorphism $\phi:{\mathcal B}(E)\to{\mathcal B}(X)$ with $\norm{\phi-\psi}\le \varepsilon$.
By Theorem \ref{t:main innovation}, \emph{provided that $C'\delta \le 1$},
there exists $\theta \in \selfhom{{\mathcal K}(E)}({\mathcal B}(E),{\mathcal B}(X))$ such that $\norm{\theta-\psi}\le C'\delta$.
Note that by Lemma \ref{l:defect of perturbed},
\[ \begin{aligned}
\operatorname{def}(\theta)
\le \operatorname{def}(\psi)+ 2(1+\norm{\psi})\norm{\theta-\psi}
& \le \delta + 2(1+L) C'\delta \le 5LC'\delta .
\end{aligned} \]
By Proposition \ref{p:decompose}, applied with ${\mathsf A}={\mathcal B}(E)$, ${\mathsf J}={\mathcal K}(E)$ and ${\mathsf B}={\mathcal B}(X)$, there exist
\begin{itemize}
\item a bounded homomorphism $\phi:{\mathcal B}(E) \to{\mathcal B}(X)$,
\item a bounded linear map $\theta_s:{\mathcal B}(E)\to{\mathcal B}(X)$ which vanishes on ${\mathcal K}(E)$ and satisfies
\[
\operatorname{def}(\theta_s)=\operatorname{def}(\theta) \le 5LC'\delta\]
\end{itemize}
such that $\theta=\phi+\theta_s$. Writing $q$ for the quotient homomorphism ${\mathcal B}(E)\to{\mathcal B}(E)/{\mathcal K}(E)$, we may factorize $\theta_s$ as $\widetilde{\theta_s}q$ where $\norm{\widetilde{\theta_s}}=\norm{\theta_s}$.
Let $c_E$ be the constant provided by Proposition \ref{p:MvN trick} (recall that this depends only on the chosen clone system for $E$).
Since $q$ maps the open unit ball of ${\mathcal B}(E)$ onto the open unit ball of the quotient, $\operatorname{def}(\widetilde{\theta_s})=\operatorname{def}(\theta_s)\le 5LC'\delta$. Applying that proposition to $\widetilde{\theta_s}$, \emph{provided that $5 LC'\delta \le c_E$},
we obtain $\norm{\widetilde{\theta_s}}\le 15 LC'\delta/2$. Hence
\[
\norm{\phi-\psi} \le \norm{\theta-\psi} + \norm{\theta_s} \le C'\delta + \frac{15}{2}LC'\delta < 9LC'\delta.
\]
Therefore, if we originally chose our $\delta$ to satisfy $0< 5 LC'\delta \le c_E$ and $9LC'\delta \le\varepsilon$, we have $\norm{\phi-\psi}\le\varepsilon$ as required.
\end{proof}
At this point, the only piece missing from our proof of Theorem \ref{t:headline result} is the proof of our main technical novelty, Theorem \ref{t:main innovation}. This will take up the rest of the paper.
\end{subsection}
\end{section}
\begin{section}{Towards a proof of Theorem \ref{t:main innovation}}
\label{s:using improving}
The process of proving Theorem \ref{t:main innovation} is quite long, and it may be helpful for the reader to know that the key implications are given by the following chain:
\begin{quote}
Theorem \ref{t:main innovation} $\Longleftarrow$ Theorem \ref{t:one-sided improved BEJ} $\Longleftarrow$ Proposition \ref{p:improving} $\Longleftarrow$ Section \ref{s:define-imp-op}.
\end{quote}
\begin{subsection}{The projective tensor product and approximate diagonals}
It turns out that we need to make \emph{quantitative} (rather than merely \emph{qualitative}) use of amenability. Thus, we shall briefly review the basic properties of the projective tensor norm for Banach spaces and the associated completed tensor product; a good source for background material is the monograph \cite{Ryan}. In what follows $\operatorname{Bil}(E,F; X)$ denotes the space of bounded, bilinear maps $E \times F \to X$ for Banach spaces $E,F$ and~$X$.
Rather than defining the projective tensor norm directly, we use the following property (see also \cite[Theorem~2.9]{Ryan}).
\begin{quote}
Given Banach spaces $E$ and $F$, there exists a Banach space $E\mathbin{\widehat{\otimes}} F$ and a map $\iota_{E,F}\in\operatorname{Bil}(E,F; E\mathbin{\widehat{\otimes}} F)$ of norm~$1$ such that for each Banach space $X$ the map $T \mapsto T \circ \iota_{E,F}, \; {\mathcal L}(E\mathbin{\widehat{\otimes}} F,X) \to \operatorname{Bil}(E,F;X)$ is an isometric isomorphism.
\end{quote}
As is standard, for $x\in E$ and $y\in F$ we write $x\mathbin{\otimes} y$ for $\iota_{E,F}(x,y)$. It follows from the previous remarks that for each $T\in{\mathcal L}(E\mathbin{\widehat{\otimes}} F, X)$,
\begin{equation}\label{eq:ball of ptp}
\norm{T}_{{\mathcal L}(E\mathbin{\widehat{\otimes}} F, X)} = \norm{T\circ \iota_{E,F}}_{\operatorname{Bil}(E,F;X)} = \sup\{ \norm{T(x\mathbin{\otimes} y)} \colon x\in \operatorname{\rm ball}\nolimits_1(E), y\in \operatorname{\rm ball}\nolimits_1(F)\} \;.
\end{equation}
That is: to determine the norm of $T\in{\mathcal L}(E\mathbin{\widehat{\otimes}} F,X)$, it suffices to check how $T$ acts on elementary tensors arising from the unit balls of $E$ and $F$.
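We also recall (see \cite{Ryan}) the explicit description of the norm on $E\mathbin{\widehat{\otimes}} F$: for $u\in E\mathbin{\widehat{\otimes}} F$,
\[
\norm{u} \;=\; \inf\Big\{ \sum\nolimits_{i=1}^\infty \norm{x_i}\norm{y_i} \,\colon\, u=\sum\nolimits_{i=1}^\infty x_i\mathbin{\otimes} y_i \Big\},
\]
the infimum being taken over all representations of $u$ as an absolutely convergent sum of elementary tensors; in particular $\norm{x\mathbin{\otimes} y}=\norm{x}\norm{y}$ for all $x\in E$ and $y\in F$.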
The theory of amenability for Banach algebras is now a vast topic (see \textit{e.g.\ }\cite{Runde} for a comprehensive modern study). We shall only need the following fragment.
Let ${\mathsf A}$ be a Banach algebra. A bounded net $(\Delta_{\alpha})_{\alpha \in {\mathbb I}}$ in ${\mathsf A} \mathbin{\widehat{\otimes}} {\mathsf A}$ is called a \dt{bounded approximate diagonal for ${\mathsf A}$} if
\begin{equation}\label{amenabledef}
\lim\nolimits_{\alpha} (a \cdot \Delta_{\alpha} - \Delta_{\alpha} \cdot a) = 0 \quad \text{and} \quad
\lim\nolimits_{\alpha} a \pi_{\mathsf A}(\Delta_{\alpha}) = a \qquad \text{for all $a \in {\mathsf A}$,}
\end{equation}
where the limits are taken in the norm topology, and $\pi_{\mathsf A} :{\mathsf A}\mathbin{\widehat{\otimes}}{\mathsf A}\to {\mathsf A}$ is the unique bounded linear map satisfying $\pi_{{\mathsf A}}(a\mathbin{\otimes} b)=ab$ for all $a,b \in {\mathsf A}$. We refer to $\sup_\alpha \norm{\Delta_\alpha}$ as the norm of the bounded approximate diagonal.
A Banach algebra ${\mathsf A}$ is \dt{amenable} if there is a bounded approximate diagonal for ${\mathsf A}$, and the \dt{amenability constant of ${\mathsf A}$} is the infimum of norms of all possible bounded approximate diagonals. It follows from compactness arguments in the bidual, together with Goldstine's lemma and a convexity argument, that we can always find a bounded approximate diagonal for ${\mathsf A}$ whose norm achieves this infimum.
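For instance, ${\mathsf A}=\mathbb C$ has amenability constant $1$: taking the constant net $\Delta_\alpha = 1\mathbin{\otimes} 1$, for every $a\in\mathbb C$ we have
\[
a\cdot(1\mathbin{\otimes} 1)-(1\mathbin{\otimes} 1)\cdot a = a\mathbin{\otimes} 1 - 1\mathbin{\otimes} a = 0
\quad\text{and}\quad
a\,\pi_{\mathbb C}(1\mathbin{\otimes} 1) = a,
\]
while $\norm{1\mathbin{\otimes} 1}=1$. In general the amenability constant of an amenable Banach algebra may well exceed~$1$.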
\end{subsection}
\begin{subsection}{Reduction to a unital version}
Let us revisit the definition of the multiplicative defect. Given Banach algebras ${\mathsf A}$ and ${\mathsf B}$ and a linear map $\phi \colon {\mathsf A} \to {\mathsf B}$,
we define $\phi^\vee: {\mathsf A}\times {\mathsf A} \to {\mathsf B}$ by
\begin{equation}\label{eq:define phi-check}
\phi^\vee(a,b) := \phi(ab)-\phi(a)\phi(b) \qquad \text{for all $a,b\in {\mathsf A}$.}
\end{equation}
Our earlier definition merely says that
\begin{equation}
\operatorname{def}(\phi) = \sup \{ \norm{\phi(a_1a_2)-\phi(a_1)\phi(a_2)} \colon a_1,a_2 \in \operatorname{\rm ball}\nolimits_1({\mathsf A})\} =\norm{\phi^\vee}_{\operatorname{Bil}({\mathsf A},{\mathsf A};{\mathsf B})}\;.
\end{equation}
Now let ${\mathsf D}\subseteq {\mathsf A}$ be a closed subalgebra. We will need to define quantities analogous to $\operatorname{def}(\phi)$ where the ``multiplicative property'' is only tested on pairs in ${\sD\times\sA}$ or ${\sA\times\sD}$.
To be precise:
\begin{equation}
\begin{aligned}
\operatorname{def}_{{\sD\times\sA}}(\phi)
& = \norm{\phi^\vee}_{\operatorname{Bil}({\mathsf D},{\mathsf A};{\mathsf B})} \\
& = \sup \{ \norm{\phi(a_1a_2)-\phi(a_1)\phi(a_2)} \colon a_1\in \operatorname{\rm ball}\nolimits_1({\mathsf D}),a_2 \in \operatorname{\rm ball}\nolimits_1({\mathsf A})\}
\end{aligned}
\end{equation}
with $\operatorname{def}_{{\sA\times\sD}}(\phi)$ defined similarly. The function $\operatorname{def}_{{\sD\times\sA}} \colon {\mathcal L} ({\mathsf A},{\mathsf B}) \to [0, \infty)$ is continuous.
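Since $\operatorname{\rm ball}\nolimits_1({\mathsf D})\subseteq \operatorname{\rm ball}\nolimits_1({\mathsf A})$, these relative defects are dominated by the full defect:
\[
\operatorname{def}_{{\sD\times\sD}}(\phi) \le \operatorname{def}_{{\sD\times\sA}}(\phi) \le \operatorname{def}(\phi)
\quad\text{and}\quad
\operatorname{def}_{{\sA\times\sD}}(\phi)\le\operatorname{def}(\phi)
\]
(with $\operatorname{def}_{{\sD\times\sD}}$ defined analogously, testing both arguments in $\operatorname{\rm ball}\nolimits_1({\mathsf D})$).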
The next lemma is a sharper version of Lemma \ref{l:defect of perturbed}.
\begin{lem}\label{l:relative defect of perturbed}
Let ${\mathsf A}$, ${\mathsf B}$ be Banach algebras and let $\phi,\gamma\in{\mathcal L}({\mathsf A},{\mathsf B})$. Then for all $a_1,a_2 \in {\mathsf A}$,
\begin{equation}\label{eq:linearize}
(\phi+\gamma)^\vee(a_1,a_2) = \phi^\vee(a_1,a_2) -\phi(a_1)\gamma(a_2)+ \gamma(a_1a_2)-\gamma(a_1)\phi(a_2) - \gamma(a_1)\gamma(a_2)\;.
\end{equation}
In particular, for any closed subalgebra ${\mathsf D}\subseteq{\mathsf A}$,
\begin{align}
\operatorname{def}_{{\sD\times\sA}}(\phi+\gamma) &\le \operatorname{def}_{{\sD\times\sA}}(\phi) + (2\norm{\phi}+1) \norm{\gamma} +\norm{\gamma}^2 \,, \\
\operatorname{def}_{{\sA\times\sD}}(\phi+\gamma) &\le \operatorname{def}_{{\sA\times\sD}}(\phi) + (2\norm{\phi}+1) \norm{\gamma} +\norm{\gamma}^2 \,.
\end{align}
\end{lem}
\begin{proof}
The first identity is a direct calculation, and we omit the details. The subsequent inequalities follow easily from the first identity and the definitions of $\operatorname{def}_{{\sD\times\sA}}$ and $\operatorname{def}_{{\sA\times\sD}}$.
\end{proof}
The following theorem, which extends \cite[Theorem 3.1]{BEJ_AMNM2}, is the heart of Theorem~\ref{t:main innovation}. Note that unlike the earlier theorem, we impose the condition that the subalgebra is unital and restrict attention to unit-preserving maps, even though in our intended application to Theorem \ref{t:headline result} it is important to allow non-unital subalgebras.
\begin{thm}[AMNM with respect to a unital amenable subalgebra]\label{t:one-sided improved BEJ}
Let ${\mathsf A}$ be a Banach algebra, let ${\mathsf D}$ be a closed subalgebra of ${\mathsf A}$ which is unital and amenable with amenability constant $\le K$, and let ${\mathsf B}$ be a unital dual Banach algebra with an isometric predual.
Fix $L\ge1$ and $\delta>0$ satisfying $K^2L^2\delta \le 1/8$.
Let $\phi\in{\mathcal L}({\mathsf A},{\mathsf B})$ satisfy $\norm{\phi}\le L$, $\phi(1_{{\mathsf D}})=1_{{\mathsf B}}$, $\operatorname{def}_{{\sA\times\sD}}(\phi)\le\delta$, and $\operatorname{def}_{{\sD\times\sA}}(\phi)\le\delta$. Then there exists
$\psi\in\selfhom{{\mathsf D}}({\mathsf A},{\mathsf B})$ with $\psi(1_{{\mathsf D}})=1_{{\mathsf B}}$ and $\norm{\phi-\psi}\le 12K^2L^3\delta$.
\end{thm}
Recall the statement of Theorem \ref{t:main innovation}:
\begin{quote}
{\itshape
Let ${\mathsf A}$ be a Banach algebra with a closed amenable subalgebra ${\mathsf D}_0$, and let ${\mathsf B}$ be a unital dual Banach algebra with an isometric predual. Fix some $L\ge 1$. Then there exists a constant $C'\ge 1$ (possibly depending on $L$ and ${\mathsf D}_0$) such that the following holds:
whenever $\psi\in{\mathcal L}({\mathsf A},{\mathsf B})$ satisfies $\norm{\psi}\le L$ and $C'\operatorname{def}(\psi)\le 1$, there exists $\theta\in \selfhom{{\mathsf D}_0}({\mathsf A},{\mathsf B})$ with $\norm{\theta-\psi} \le C'\operatorname{def}(\psi)$.
}
\end{quote}
\begin{proof}[Deducing Theorem \ref{t:main innovation} from Theorem \ref{t:one-sided improved BEJ}]
We start by considering an arbitrary $\psi\in{\mathcal L}({\mathsf A},{\mathsf B})$. Let $\fu{{\mathsf A}}= \mathbb C 1 \oplus_1
{\mathsf A}$ denote the forced unitization of ${\mathsf A}$ (here $\oplus_1$ denotes the $\ell_1$-sum of two Banach spaces). Then there is a natural extension of $\psi$ to $\fu{\psi}:\fu{{\mathsf A}}\to {\mathsf B}$, given by
\[ \fu{\psi}(\lambda, a) = \lambda 1_{\mathsf B} + \psi(a) \qquad \text{for all $\lambda\in\mathbb C$, $a\in {\mathsf A}$.} \]
It is easily checked that $\norm{\fu{\psi}}=\norm{\psi}$ (one direction is trivial since ${\mathsf A}\subset\fu{{\mathsf A}}$, and the other follows by our choice of norm on $\fu{{\mathsf A}}$). Moreover, a direct calculation shows that
\begin{equation}\label{eq:unitize}
\fu{\psi}((\lambda_1,a_1)(\lambda_2,a_2)) - \fu{\psi}(\lambda_1,a_1)\fu{\psi}(\lambda_2,a_2)
= \psi(a_1a_2)-\psi(a_1)\psi(a_2);
\end{equation}
and hence $\operatorname{def}(\fu{\psi})=\operatorname{def}(\psi)$. (Once again, one direction is trivial since ${\mathsf A}\subset\fu{{\mathsf A}}$; the non-trivial direction follows from the identity
\eqref{eq:unitize}.)
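(For the record: the product in $\fu{{\mathsf A}}$ is $(\lambda_1,a_1)(\lambda_2,a_2)=(\lambda_1\lambda_2,\,\lambda_1a_2+\lambda_2a_1+a_1a_2)$, so
\[
\begin{aligned}
\fu{\psi}\big((\lambda_1,a_1)(\lambda_2,a_2)\big) &= \lambda_1\lambda_2 1_{\mathsf B} + \lambda_1\psi(a_2)+\lambda_2\psi(a_1)+\psi(a_1a_2)\,,\\
\fu{\psi}(\lambda_1,a_1)\,\fu{\psi}(\lambda_2,a_2) &= \lambda_1\lambda_2 1_{\mathsf B} + \lambda_1\psi(a_2)+\lambda_2\psi(a_1)+\psi(a_1)\psi(a_2)\,,
\end{aligned}
\]
and subtracting the second line from the first gives \eqref{eq:unitize}.)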
Let ${\mathsf D}=\fu{{\mathsf D}_0}$, which coincides with the closed subalgebra $\mathbb C 1 \oplus_1 {\mathsf D}_0$ of $\fu{{\mathsf A}}$, where $1$ is the adjoined unit. It is well known that the unitization of any amenable Banach algebra is itself amenable;
let $K$ be the amenability constant of ${\mathsf D}$, which automatically satisfies $K\ge 1$.
Given $L\ge 1$, put $C':= 12K^2L^3$. Suppose $\psi\in{\mathcal L}({\mathsf A},{\mathsf B})$ satisfies $\norm{\psi}\le L$ and $\operatorname{def}(\psi)= \delta$, for some $\delta \in [0, 1/C' ]$.
By our previous remarks, the extended map $\fu{\psi}:\fu{{\mathsf A}}\to {\mathsf B}$ also has multiplicative defect~$\delta$ and norm $\le L$, and by construction it satisfies $\fu{\psi}(1)=1_{\mathsf B}$. Applying Theorem \ref{t:one-sided improved BEJ} to the triple $({\mathsf D},\fu{{\mathsf A}},{\mathsf B})$ (note that $8K^2L^2\delta \le 12K^2L^3\delta \le 1$), we deduce that there exists $\phi\in \selfhom{{\mathsf D}}(\fu{{\mathsf A}},{\mathsf B})$ with $\phi(1)=1_{\mathsf B}$ and $\norm{\phi-\fu{\psi}}\le C'\delta$.
Taking $\theta = {\phi\vert}_{\mathsf A} \in \selfhom{{\mathsf D}_0}({\mathsf A},{\mathsf B})$, we see that the conclusions of Theorem \ref{t:main innovation} are satisfied.
\end{proof}
\end{subsection}
\begin{subsection}{Obtaining the unital version, using an improving operator}
Guided by the case ${\mathsf D}={\mathsf A}$ that is treated in \cite{BEJ_AMNM2}, we shall prove Theorem~\ref{t:one-sided improved BEJ} by an iterative argument.
Notably, the proof works by repeated application of a \emph{nonlinear} operator $F:{\mathcal L}({\mathsf A},{\mathsf B})\to {\mathcal L}({\mathsf A},{\mathsf B})$ with certain ``improving'' properties.
The operator $F$ is designed in such a way that for each $\phi$ satisfying the assumptions of Theorem~\ref{t:one-sided improved BEJ}, the sequence of iterates $(F^n(\phi))_{n\in\mathbb N}$ is a fast Cauchy sequence in ${\mathcal L}({\mathsf A},{\mathsf B})$ and satisfies $\operatorname{def}_{{\sD\times\sA}}(F^n(\phi))\to 0$. The map $\phi_\infty:=\lim_{n\to\infty} F^n(\phi)$ then satisfies $\operatorname{def}_{{\sD\times\sA}}(\phi_\infty)=0$; and $\norm{\phi-\phi_\infty}$ can be bounded above in terms of $\operatorname{def}_{{\sD\times\sA}}(\phi)$, using the geometric decay from the fast Cauchy property. To get the final map $\psi$, one performs a ``left--right switch'' and exploits some {\it ad hoc} features of the operator $F$.
Before constructing the operator $F$, we isolate those of its properties which are needed for the argument in the previous paragraph.
\begin{prop}[A nonlinear improving operator]\label{p:improving}
Let ${\mathsf A}$ be a Banach algebra, let ${\mathsf B}$ be a unital dual Banach algebra with an isometric predual, and let ${\mathsf D}$ be a closed subalgebra of ${\mathsf A}$ which is unital and amenable with amenability constant $\le K$. Then there is a function $F:{\mathcal L}({\mathsf A},{\mathsf B})\to{\mathcal L}({\mathsf A},{\mathsf B})$ with the following properties:
for each $\phi\in{\mathcal L}({\mathsf A},{\mathsf B})$ satisfying $\phi(1_{{\mathsf D}})=1_{{\mathsf B}}$, we have
\begin{romnum}
\item\label{li:unital}
$F(\phi)(1_{{\mathsf D}})=1_{{\mathsf B}}$;
\item\label{li:small step}
$\norm{F(\phi)-\phi} \le K\norm{\phi} \operatorname{def}_{{\sD\times\sA}}(\phi)$;
\item\label{li:improve defect}
$\operatorname{def}_{{\sD\times\sA}}(F(\phi))
\le 3K^2 \norm{\phi}^2 \operatorname{def}_{{\sD\times\sD}}(\phi)\operatorname{def}_{{\sD\times\sA}}(\phi)$.
\end{romnum}
Moreover,
\begin{romnum}
\setcounter{enumi}{3}
\item\label{li:preserve right}
if $\operatorname{def}_{{\sA\times\sD}}(\phi)=0$, then $\operatorname{def}_{{\sA\times\sD}}(F(\phi))=0$.
\end{romnum}
\end{prop}
\begin{proof}[{Proof of Theorem \ref{t:one-sided improved BEJ}, given Proposition~\ref{p:improving}}]
We fix $K$, $L$ and $\delta$ as in the statement of the theorem. Let $\phi\in{\mathcal L}({\mathsf A},{\mathsf B})$ with $\phi(1_{{\mathsf D}})=1_{{\mathsf B}}$, $\norm{\phi}\le L$, $\operatorname{def}_{{\sA\times\sD}}(\phi)\le\delta$, and $\operatorname{def}_{{\sD\times\sA}}(\phi)\le\delta$.
The first step is to prove that $(F^n(\phi))_{n\ge 0}$ is a Cauchy sequence in ${\mathcal L}({\mathsf A},{\mathsf B})$. In fact, we prove a more precise technical statement, as follows.
\subproofhead{Claim} $\norm{F^n(\phi)-F^{n-1}(\phi)} \le KL\delta 2^{-(n-1)}$ and $\operatorname{def}_{{\sD\times\sA}}(F^n(\phi))\le 3\delta 2^{-2n-1}$, for each $n \ge 1$.
The claim is proved by strong induction on $n$.
For the base case ($n=1$): applying Proposition~\ref{p:improving} to $\phi$, we obtain
$\norm{F(\phi)-\phi} \le K\norm{\phi}\operatorname{def}_{{\sD\times\sA}}(\phi) \le K L\delta$
and
\begin{align*}
\operatorname{def}_{{\sD\times\sA}}(F(\phi))
& \le 3K^2\norm{\phi}^2 \operatorname{def}_{{\sD\times\sD}}(\phi)\operatorname{def}_{{\sD\times\sA}}(\phi) \\
&\le 3K^2\norm{\phi}^2 \operatorname{def}_{{\sD\times\sA}}(\phi)^2 \notag \\
& \le 3K^2L^2\delta^2 \\
& \le 3\delta /8
\end{align*}
as required.
Now suppose the claim holds for all $1\le j\le n$ for some $n\in\mathbb N$. Then
\begin{align}\label{est0}
\norm{F^n(\phi)}
& \le \norm{\phi}+ \sum_{j=1}^n \norm{F^j(\phi)-F^{j-1}(\phi)} \notag \\
& \le L + KL\delta \sum_{j=1}^n2^{-(j-1)} \le L + 2KL\delta \le 5 L /4,
\end{align}
using the fact that $K\delta \le KL\delta \le 1/8$.
Combining \eqref{est0} with the second part of the inductive hypothesis yields
\begin{align}\label{est1}
\norm{F^n(\phi)}\operatorname{def}_{{\sD\times\sA}}(F^n(\phi)) & \le (5L/4) \cdot 3\delta 2^{-2n-1} \notag \\
& \le L \delta 2^{-2n+1} \le L\delta 2^{-n} \qquad\text{(since $n\ge 1$)}.
\end{align}
Applying Proposition~\ref{p:improving}~(ii) to $F^n(\phi)$ and using \eqref{est1} yields
\begin{align*}
\norm{F^{n+1}(\phi)-F^n(\phi)} \le K\norm{F^n(\phi)}\operatorname{def}_{{\sD\times\sA}}(F^n(\phi)) \le KL\delta 2^{-n}\,,
\end{align*}
and applying Proposition~\ref{p:improving}~(iii) to $F^n(\phi)$ yields
\[
\begin{aligned}
\operatorname{def}_{{\sD\times\sA}}(F^{n+1}(\phi))
& \le 3K^2 \norm{F^n(\phi)}^2 \operatorname{def}_{{\sD\times\sD}}(F^n(\phi)) \operatorname{def}_{{\sD\times\sA}}(F^n(\phi)) \\
& \le 3 \left( K \norm{F^n(\phi)} \operatorname{def}_{{\sD\times\sA}}(F^n(\phi))\right)^2 \\
& \le 3 (KL\delta 2^{-n})^2 \quad\qquad\text{(using \eqref{est1})} \\
& = 3K^2L^2\delta \cdot \delta 2^{-2n} \\
& \le 3 \delta 2^{-2n-3} \quad\qquad \text{(since $K^2L^2\delta \le 1/8$).}
\end{aligned}
\]
This completes the inductive step, and hence proves the claim.
It follows from the claim
that the sequence $(F^n(\phi))_{n\ge 0}$ is Cauchy in ${\mathcal L}({\mathsf A},{\mathsf B})$. Let $\phi_\infty= \lim_{n\to\infty} F^n(\phi) \in {\mathcal L} ({\mathsf A},{\mathsf B})$.
Since $F^n(\phi)(1_{{\mathsf D}})=1_{{\mathsf B}}$ for all $n \in \mathbb N$ and $\lim_n\operatorname{def}_{{\sD\times\sA}}(F^n(\phi)) = 0$, we have $\phi_\infty(1_{{\mathsf D}})=1_{{\mathsf B}}$ and $\operatorname{def}_{{\sD\times\sA}}(\phi_\infty)=0$ by continuity.
Also, by the claim, $\norm{\phi-\phi_\infty}\le \sum_{n\ge 1}\norm{F^n(\phi)-F^{n-1}(\phi)} \le KL\delta \sum_{n\ge 1} 2^{-(n-1)} = 2KL\delta$. This implies
\[ \norm{\phi_\infty} \le \norm{\phi}+\norm{\phi-\phi_\infty}
\le L + 2KL\delta
\le L (1+ 2K^2L^2\delta)
\le 5 L /4 \;, \]
and, by the estimate given at the end of Lemma~\ref{l:relative defect of perturbed},
\[ \begin{aligned}
\operatorname{def}_{{\sA\times\sD}}(\phi_\infty)
& \le \operatorname{def}_{{\sA\times\sD}}(\phi)+ (2\norm{\phi}+1) \norm{\phi_\infty-\phi} + \norm{\phi_\infty-\phi}^2 \\
& \le \delta + (2L+1) 2KL\delta + (2KL\delta)^2 \\
& \le \delta ( 1+ 6KL^2 + 4K^2L^2\delta)
\quad \le \quad \delta ( 3/2 + 6KL^2)
\quad \le \quad 8KL^2\delta.
\end{aligned} \]
To obtain the final map $\psi$, let $\op{\sA}$ and $\op{\sB}$ be the Banach algebras whose underlying Banach spaces are the same as ${\mathsf A}$ and ${\mathsf B}$ respectively, but which have the opposite algebra structures, so that $a_1\cdot_{(\op{\sA})} a_2 := a_2a_1$, etc. Note that $\op{\sD}$ is a closed subalgebra of $\op{\sA}$. Moreover, $\op{\sD}$ is unital and amenable with constant $\le K$: for if $\sigma: {\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D} \to {\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D}$ is the flip map defined by $c_1\mathbin{\otimes} c_2 \mapsto c_2\mathbin{\otimes} c_1$, then $\sigma$ maps bounded approximate diagonals for ${\mathsf D}$ to bounded approximate diagonals for $\op{\sD}$.
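To verify the last claim on elementary tensors: for $c_1,c_2,d\in{\mathsf D}$ we have
\[
d\cdot_{(\op{\sD})}\sigma(c_1\mathbin{\otimes} c_2) = (c_2d)\mathbin{\otimes} c_1 = \sigma\big((c_1\mathbin{\otimes} c_2)\cdot d\big),
\qquad
\pi_{\op{\sD}}\big(\sigma(c_1\mathbin{\otimes} c_2)\big) = c_1c_2 = \pi_{{\mathsf D}}(c_1\mathbin{\otimes} c_2),
\]
and similarly $\sigma(c_1\mathbin{\otimes} c_2)\cdot_{(\op{\sD})}d = \sigma\big(d\cdot(c_1\mathbin{\otimes} c_2)\big)$. Hence, if $(\Delta_\alpha)$ is a bounded approximate diagonal for ${\mathsf D}$, then $d\cdot_{(\op{\sD})}\sigma(\Delta_\alpha)-\sigma(\Delta_\alpha)\cdot_{(\op{\sD})}d = \sigma(\Delta_\alpha\cdot d - d\cdot\Delta_\alpha)\to 0$ and $d\cdot_{(\op{\sD})}\pi_{\op{\sD}}(\sigma(\Delta_\alpha)) = \pi_{{\mathsf D}}(\Delta_\alpha)\,d\to d$ (the latter since $d\,\pi_{{\mathsf D}}(\Delta_\alpha)-\pi_{{\mathsf D}}(\Delta_\alpha)\,d\to 0$ and $d\,\pi_{{\mathsf D}}(\Delta_\alpha)\to d$); moreover $\sigma$ is isometric, so the norm bound is preserved.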
Let $\phi'\in {\mathcal L}(\op{\sA}, \op{\sB})$ be the same function as $\phi_\infty\in{\mathcal L}({\mathsf A},{\mathsf B})$ (we introduce new notation to emphasise that we are now working with different algebras as domain and codomain, which affects the definition of $\operatorname{def}$). Then the following properties hold:
\[ \phi'(1_{\op{\sD}})=\phi_\infty(1_{{\mathsf D}}) =1_{{\mathsf B}} = 1_{\op{\sB}}; \qquad
\operatorname{def}_{{\op{\sA}\times\op{\sD}}}(\phi') = \operatorname{def}_{{\sD\times\sA}}(\phi_\infty)=0 \;. \]
Applying Proposition \ref{p:improving} to the triple $(\op{\sA},\op{\sB},\op{\sD})$, there is a function $F': {\mathcal L}(\op{\sA},\op{\sB})\to{\mathcal L}(\op{\sA},\op{\sB})$ such that
\begin{enumerate}
\item $F'(\phi')(1_{\op{\sD}})=1_{\op{\sB}}$;
\item $\norm{F'(\phi')-\phi'} \le K \norm{\phi'} \operatorname{def}_{{\op{\sD}\times\op{\sA}}}(\phi') = K\norm{\phi_\infty} \operatorname{def}_{{\sA\times\sD}}(\phi_\infty)$;
\item\label{li:exploit}
$\operatorname{def}_{{\op{\sD}\times\op{\sA}}}(F'(\phi'))
\le 3K^2 \norm{\phi'}^2 \operatorname{def}_{{\op{\sD}\times\op{\sD}}}(\phi') \operatorname{def}_{{\op{\sD}\times\op{\sA}}}(\phi')$;
\item
$\operatorname{def}_{{\op{\sA}\times\op{\sD}}}(F'(\phi'))=0$.
\end{enumerate}
Now observe that $\operatorname{def}_{\op{\sD}\times\op{\sD}}(\phi')\le \operatorname{def}_{\op{\sA}\times\op{\sD}}(\phi')=0$. Hence we may improve property \ref{li:exploit} above to: $\operatorname{def}_{\op{\sD}\times\op{\sA}}(F'(\phi'))=0$.
We define $\psi\in{\mathcal L}({\mathsf A},{\mathsf B})$ to have the same underlying function as $F'(\phi')$. Then
$\psi(1_{{\mathsf D}})=F'(\phi')(1_{\op{\sD}})=1_{\op{\sB}}=1_{{\mathsf B}}$,
and
\[ \begin{aligned}
\norm{\psi-\phi}
& \le \norm{F'(\phi')-\phi'}+\norm{\phi_\infty-\phi} \\
& \le K (5L/4) 8K L^2\delta +2KL\delta \\
& \le 12K^2L^3\delta\;.
\end{aligned} \]
Finally, $\psi\in\selfhom{{\mathsf D}}({\mathsf A},{\mathsf B})$ since
$\operatorname{def}_{{\sD\times\sA}}(\psi)=\operatorname{def}_{\op{\sA}\times\op{\sD}}(F'(\phi'))=0$ and
$\operatorname{def}_{{\sA\times\sD}}(\psi)=\operatorname{def}_{\op{\sD}\times\op{\sA}}(F'(\phi'))=0$.
\end{proof}
\end{subsection}
\begin{subsection}{Explanation for the improving operator}
We have not given any definition of the operator $F$, let alone explained why amenability of ${\mathsf D}$ would allow us to find or construct~$F$. In fact the definition of $F$ is quite simple and explicit --- see Equation \eqref{eq:define next step} below --- but attempting to prove directly that $F$ has the required ``improving properties'' is far less straightforward. Subtle cancellations are required, and one has to pay attention to technical issues arising when carrying out repeated \ensuremath{{\rm w}^*}-averaging.
These issues are already present in the proof of \cite[Theorem 3.1]{BEJ_AMNM2}, where an operator analogous to ours is constructed in the special case ${\mathsf D}={\mathsf A}$. Although Johnson chooses in his proof to verify the necessary properties directly, he follows this with a brief sketch of how the construction of the operator and the proof that it has the required properties are motivated by a ``vanishing $H^2$ argument'' that is standard in the Hochschild cohomology theory of (amenable) Banach algebras.
In our setting, the algebra ${\mathsf A}$ is no longer amenable, but the unital subalgebra ${\mathsf D}$ is, and the corresponding notion in cohomology theory is that of \emph{normalizing a $2$-cocycle with respect to an amenable subalgebra}. It is this approach which guides our construction of the desired ``improving operator'' $F$.
Rather than adapting the calculations in the proof of \cite[Theorem 3.1]{BEJ_AMNM2} in an \textit{ad hoc} way to the setting of an amenable subalgebra ${\mathsf D}\subseteq {\mathsf A}$, it seems both more comprehensible and more robust to set up a general framework. This is our goal in the final section of the paper; the desired ``improving operator'' $F$ will then emerge naturally as a special case of the general machinery.
\end{subsection}
\end{section}
\begin{section}{Constructing the nonlinear improving operator}
\label{s:building improving}
\begin{subsection}{An approximate cochain complex}
Throughout this subsection, we fix Banach algebras ${\mathsf A},{\mathsf B}$ and $\phi\in{\mathcal L}({\mathsf A},{\mathsf B})$; we shall think of $\phi$ as defining an ``approximate action'' of ${\mathsf A}$ on ${\mathsf B}$.
As mentioned earlier, we are guided by a standard construction in the Hochschild cohomology theory of Banach algebras, which arises when normalizing cochains with respect to an amenable unital subalgebra.
However, we require the actual techniques in the proofs and not just the results, and therefore we shall build the required machinery from scratch.
\begin{rem}\label{r:kazhdan disclaimer}
After the original version of this section was written, it was brought to our attention that \cite{kazhdan} adopts a similar setup with an approximate cochain complex; however, this is done only in the setting of (bounded) group cohomology for discrete groups. Moreover, \cite{kazhdan} does not explore the ``relative'' setting where one only has amenability for a subgroup rather than for the whole group.
\end{rem}
\begin{dfn}\label{d:approx cochain complex}
For each $n\in\mathbb N$, define the bounded linear map $\mathop{\partial}\nolimits_\phi^n: {\mathcal L}^n({\mathsf A},{\mathsf B})\to {\mathcal L}^{n+1}({\mathsf A},{\mathsf B})$ by
\[
\mathop{\partial}\nolimits_\phi^n \psi(a_1,\dots, a_{n+1}) =
\left\{
\begin{aligned}
\phi(a_1)\psi(a_2,\dots, a_{n+1}) \\
+ \sum_{j=1}^{n}(-1)^j \psi(a_1,\dots, a_ja_{j+1}, \dots, a_{n+1}) \\
+ (-1)^{n+1} \psi(a_1,\dots, a_n)\phi(a_{n+1}).
\end{aligned}
\right.
\]
\end{dfn}
In fact, to prove Proposition~\ref{p:improving}, we only need this definition for $n\in\{1,2\}$. We include the definition for general~$n$ to put the following arguments in their proper context.
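For the reader's convenience, these two cases read explicitly as
\[ \mathop{\partial}\nolimits_\phi^1 \psi(a_1,a_2) = \phi(a_1)\psi(a_2) - \psi(a_1a_2) + \psi(a_1)\phi(a_2) \]
and
\[ \mathop{\partial}\nolimits_\phi^2 \psi(a_1,a_2,a_3) = \phi(a_1)\psi(a_2,a_3) - \psi(a_1a_2,a_3) + \psi(a_1,a_2a_3) - \psi(a_1,a_2)\phi(a_3) \;. \]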
\begin{rem}\label{r:coho waffle}
We make some remarks to provide context; they are not necessary for the proof of Proposition \ref{p:improving}.
\begin{romnum}
\item
If $\phi$ is multiplicative, then $(a,b)\mapsto \phi(a)b$ and $(b,a)\mapsto b\phi(a)$ give ${\mathsf B}$ the structure of an ${\mathsf A}$-bimodule ${}_\phi {\mathsf B}_\phi$, and the operator $\mathop{\partial}\nolimits^n_\phi$ is just the usual Hochschild coboundary operator for ${}_\phi {\mathsf B}_\phi$-valued cochains.
If $\phi$ is not multiplicative, then we might have $\mathop{\partial}\nolimits^{n+1}_\phi\circ\mathop{\partial}\nolimits^n_\phi\neq 0$, but a direct calculation shows that
$\norm{\mathop{\partial}\nolimits^{n+1}_\phi\circ\mathop{\partial}\nolimits^n_\phi} \le 4 \operatorname{def}(\phi)$.
\item
Recall that we have a nonlinear function
$(\underline{\quad})^\vee: {\mathcal L}({\mathsf A},{\mathsf B})\to{\mathcal L}^2({\mathsf A},{\mathsf B}); \; \psi \mapsto \psi^{\vee}$ (where $\psi^{\vee}$ is defined as in Equation \eqref{eq:define phi-check}), which satisfies $\operatorname{def}(\psi)=\norm{\psi^\vee}$. If $\gamma\in{\mathcal L}({\mathsf A},{\mathsf B})$, Equation \eqref{eq:linearize} may be rewritten as
\[ (\phi+\gamma)^\vee(a_1,a_2) = \phi^\vee(a_1,a_2) - \mathop{\partial}\nolimits_\phi^1 (\gamma)(a_1,a_2) - \gamma(a_1)\gamma(a_2) \quad\text{for all $a_1,a_2\in {\mathsf A}$,} \]
and it follows that the derivative of the function $(\underline{\quad})^\vee$ at $\phi$ is just $-\mathop{\partial}\nolimits_\phi^1$. (This observation is taken from remarks in \cite[Section~3]{BEJ_AMNM2}.)
\item
For now, we do not assume either ${\mathsf A}$ or ${\mathsf B}$ is unital; but when it comes to our analogue of ``normalization of cocycles'', some kind of unitality assumption is needed to obtain maps with the right properties.
\end{romnum}
\end{rem}
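To illustrate the bound in part (i) of the remark in the lowest case $n=1$: taking the convention $\phi^\vee(a_1,a_2)=\phi(a_1a_2)-\phi(a_1)\phi(a_2)$, a direct expansion shows that for $\gamma\in{\mathcal L}({\mathsf A},{\mathsf B})$
\[ \mathop{\partial}\nolimits^2_\phi\mathop{\partial}\nolimits^1_\phi(\gamma)(a_1,a_2,a_3) = -\phi^\vee(a_1,a_2)\,\gamma(a_3) + \gamma(a_1)\,\phi^\vee(a_2,a_3)\;, \]
so that in this case one in fact has $\norm{\mathop{\partial}\nolimits^2_\phi\circ\mathop{\partial}\nolimits^1_\phi}\le 2\operatorname{def}(\phi)$.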
Since $\mathop{\partial}\nolimits_\phi^2$ can be applied to arbitrary elements of ${\mathcal L}^2({\mathsf A},{\mathsf B})$, we may apply it to the particular bilinear map $\phi^\vee$.
\begin{lem}[A $2$-cocycle for $\mathop{\partial}\nolimits_\phi$]
\label{l:2-cocycle}
$\mathop{\partial}\nolimits_\phi^2(\phi^\vee)=0$.
\end{lem}
The proof is a straightforward calculation, which we omit.
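Concretely, with the convention $\phi^\vee(a_1,a_2)=\phi(a_1a_2)-\phi(a_1)\phi(a_2)$, the eight terms cancel in pairs:
\[ \begin{aligned}
\mathop{\partial}\nolimits_\phi^2(\phi^\vee)(a_1,a_2,a_3)
&= \phi(a_1)\phi^\vee(a_2,a_3) - \phi^\vee(a_1a_2,a_3) + \phi^\vee(a_1,a_2a_3) - \phi^\vee(a_1,a_2)\phi(a_3) \\
&= \bigl(\phi(a_1)\phi(a_2a_3) - \phi(a_1)\phi(a_2)\phi(a_3)\bigr)
 - \bigl(\phi(a_1a_2a_3) - \phi(a_1a_2)\phi(a_3)\bigr) \\
&\quad + \bigl(\phi(a_1a_2a_3) - \phi(a_1)\phi(a_2a_3)\bigr)
 - \bigl(\phi(a_1a_2)\phi(a_3) - \phi(a_1)\phi(a_2)\phi(a_3)\bigr) = 0 \;.
\end{aligned} \]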
\begin{dfn}[Notation for restricting in first variable]\label{d:currying}
Let $E$ and $V$ be Banach spaces, and let $F$ be a closed subspace of $E$. Let $n \ge 2$. Given $\psi\in{\mathcal L}^n(E,V)$ we may regard it as an element of ${\mathcal L}(E, {\mathcal L}^{n-1}(E,V))$, which is defined by
\[ x_1 \mapsto \left( (x_2,\dots, x_n) \mapsto \psi(x_1,\dots, x_n) \right). \]
Restricting this function to $F$ yields a bounded linear map $F\to{\mathcal L}^{n-1}(E,V)$, which we denote by
\begin{align*}
\LRES{F}(\psi)\in {\mathcal L}(F,{\mathcal L}^{n-1}(E,V)).
\end{align*}
The function $\LRES{F}: {\mathcal L}^n(E,V) \to {\mathcal L}(F,{\mathcal L}^{n-1}(E,V))$ is linear and contractive.
\end{dfn}
For the rest of this subsection, we fix a closed subalgebra ${\mathsf D}\subseteq{\mathsf A}$. Note that $\operatorname{def}_{{\sD\times\sA}}(\phi)=\norm{\LRES{{\mathsf D}}(\phi^\vee)}$.
Our goal is to define (linear) operators $\mathop{\sigma}\nolimits_\phi^n : {\mathcal L}^{n+1}({\mathsf A},{\mathsf B}) \to {\mathcal L}^n({\mathsf A},{\mathsf B})$ such that for each $\psi\in {\mathcal L}^n({\mathsf A},{\mathsf B})$, the map
\[
\LRES{{\mathsf D}}\left( \mathop{\partial}\nolimits_\phi^{n-1}\mathop{\sigma}\nolimits_\phi^{n-1}(\psi) + \mathop{\sigma}\nolimits_\phi^n\mathop{\partial}\nolimits_\phi^n(\psi) - \psi\right)
\]
has norm controlled by $\operatorname{def}_{{\sD\times\sA}}(\psi)$ (we make this precise in Proposition \ref{p:approx-splitting-v2} below).
As a first step towards this, we set up a general construction by which elements of ${\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D}$ define (linear) operators ${\mathcal L}^{n+1}({\mathsf A},{\mathsf B})\to{\mathcal L}^n({\mathsf A},{\mathsf B})$.
\begin{dfn}\label{d: ave}
Let $n\in\mathbb N$. Given any $c,d\in {\mathsf D}$ and $\psi\in {\mathcal L}^{n+1}({\mathsf A},{\mathsf B})$, we obtain an element of ${\mathcal L}^n({\mathsf A},{\mathsf B})$, defined by
\begin{align}
(a_1,\dots, a_n) \mapsto \phi(c)\ \psi(d,a_1,\dots, a_n)
\qquad \text{for all $a_1,\dots, a_n\in {\mathsf A}$.}
\end{align}
This process yields a bounded bilinear map ${\sD\times\sD}\to {\mathcal L}( {\mathcal L}^{n+1}({\mathsf A},{\mathsf B}) , {\mathcal L}^n({\mathsf A},{\mathsf B}))$, with norm $\le \norm{\phi}\norm{\LRES{{\mathsf D}}}$, and hence
uniquely defines a bounded linear map
\[
{\mathsf D}\mathbin{\widehat{\otimes}} {\mathsf D}\to {\mathcal L}({\mathcal L}^{n+1}({\mathsf A},{\mathsf B}),{\mathcal L}^n({\mathsf A},{\mathsf B}))\]
that we denote by $w\mapsto \ave[\phi]{w}^n$.
Explicitly: for $c,d\in {\mathsf D}$ and $\psi\in{\mathcal L}^{n+1}({\mathsf A},{\mathsf B})$,
\begin{align}
\ave[\phi]{c \mathbin{\otimes} d}^n (\psi)(a_1,\dots, a_n)
= \phi(c)\ \psi(d,a_1,\dots, a_n)
\quad \text{for all $a_1,\dots, a_n\in {\mathsf A}$.}
\end{align}
\end{dfn}
Note that from our definitions, for each $\psi \in {\mathcal L}^{n+1}({\mathsf A},{\mathsf B})$ we have
\begin{equation}\label{eq:bound of averaging operator}
\norm{\ave[\phi]{w}^n(\psi)}_{{\mathcal L}^{n}({\mathsf A},{\mathsf B})} \le \norm{w}_{{\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D}} \norm{\phi} \norm{ \LRES{{\mathsf D}}(\psi) }_{{\mathcal L}({\mathsf D},{\mathcal L}^{n}({\mathsf A},{\mathsf B}))} \qquad\text{for all $w\in {\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D}$.}
\end{equation}
Also, since we used the universal property of $\mathbin{\widehat{\otimes}}$ to define $\ave[\phi]{w}^n$, it is clear that this operator is independent of any chosen representation of $w$ as an absolutely convergent sum of elementary tensors.
\begin{lem}[Approximate splitting, 1st version]\label{l:approx-splitting-v1}
Let $n\ge 2$.
Then
\begin{equation}\label{eq:towards homotopy}
\left.
\begin{gathered}
\mathop{\partial}\nolimits_\phi^{n-1}\ave[\phi]{w}^{n-1} (\psi)(a_1,\dots,a_n) \\
+ \ave[\phi]{w}^n \mathop{\partial}\nolimits_\phi^n(\psi)(a_1,\dots, a_n)
\end{gathered}\right\}
= \left\{
\begin{gathered}
\pi_{{\mathsf B}}(\phi\mathbin{\widehat{\otimes}}\phi)(w)\cdot\psi(a_1,\dots,a_n) \\
+ \phi(a_1)\cdot \ave[\phi]{w}^{n-1}(\psi)(a_2,\dots,a_n) \\
- \ave[\phi]{w \cdot a_1}^{n-1}(\psi)(a_2,\dots,a_n)
\end{gathered}
\right.
\end{equation}
for all $w \in {\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D}$, $a_1,\dots,a_n\in {\mathsf A}$ and $\psi\in{\mathcal L}^n({\mathsf A},{\mathsf B})$.
\end{lem}
\begin{proof}
Fix $a_1,\dots, a_n\in {\mathsf A}$ and $\psi\in{\mathcal L}^n({\mathsf A},{\mathsf B})$. We denote the left-hand side of \eqref{eq:towards homotopy} by $T_L(w)$ and denote the right-hand side by $T_R(w)$. Then $T_L$ and $T_R$ are bounded linear maps from ${\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D}$ to ${\mathsf B}$, so it suffices to prove that $T_L(c\mathbin{\otimes} d)=T_R(c\mathbin{\otimes} d)$ for all $c,d\in {\mathsf D}$.
Consider
\[
T_L(c\mathbin{\otimes} d) = \mathop{\partial}\nolimits_\phi^{n-1}\ave[\phi]{c\mathbin{\otimes} d}^{n-1}\psi(a_1,\dots,a_n)
+ \ave[\phi]{c\mathbin{\otimes} d}^n\mathop{\partial}\nolimits_\phi^n\psi(a_1,\dots,a_n).
\]
Expanding these expressions, most of the terms cancel, leaving
\[
\begin{gathered}
\phi(a_1)\cdot \phi(c) \cdot\psi(d,a_2,\dots, a_n)
+ \phi(c)\cdot \phi(d)\cdot\psi(a_1,\dots, a_n)
- \phi(c)\cdot \psi(da_1,a_2,\dots, a_n)
\\
=
\phi(a_1)\cdot \ave[\phi]{c\mathbin{\otimes} d}^{n-1}\psi (a_2,\dots,a_n)
+ \pi_{{\mathsf B}}(\phi\mathbin{\widehat{\otimes}}\phi)(c\mathbin{\otimes} d) \cdot \psi(a_1,\dots,a_n) \\
- \ave[\phi]{c\mathbin{\otimes} da_1}^{n-1}\psi(a_2,\dots,a_n)
\end{gathered}
\]
which equals $T_R(c\mathbin{\otimes} d)$, as required.
\end{proof}
\begin{lem}
\label{l:approx-left-modular}
Let $n \ge 2$. Let $w\in {\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D}$ and let $a_1\in \operatorname{\rm ball}\nolimits_1({\mathsf D})$, $a_2,\dots,a_n\in \operatorname{\rm ball}\nolimits_1({\mathsf A})$.
Then for each $\psi\in{\mathcal L}^n({\mathsf A},{\mathsf B})$,
\begin{equation}\label{eq:left-modular}
\begin{gathered}
\left\Vert
\phi(a_1)\cdot \ave[\phi]{w}^{n-1}(\psi)(a_2,\dots,a_n)
- \ave[\phi]{a_1\cdot w}^{n-1}(\psi)(a_2,\dots,a_n)
\right\Vert \\
\le
\operatorname{def}_{{\sD\times\sD}}(\phi) \,\norm{w}_{{\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D}}\ \norm{ \LRES{{\mathsf D}}(\psi) }_{{\mathcal L}({\mathsf D},{\mathcal L}^{n-1}({\mathsf A},{\mathsf B}))}.
\end{gathered}
\end{equation}
\end{lem}
\begin{proof}
Fixing $a_1\in \operatorname{\rm ball}\nolimits_1({\mathsf D})$ and $a_2,\dots, a_n\in \operatorname{\rm ball}\nolimits_1({\mathsf A})$,
let
\[ T(w):=\phi(a_1)\cdot \ave[\phi]{w}^{n-1}(\psi)(a_2,\dots,a_n)
- \ave[\phi]{a_1\cdot w}^{n-1}(\psi)(a_2,\dots,a_n) \quad \text{for all $w \in {\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D}$.}
\]
Then $T \colon {\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D}\to {\mathsf B}$ is a bounded linear map and it suffices to prove that $\norm{T}\le \operatorname{def}_{{\sD\times\sD}}(\phi) \, \norm{ \LRES{{\mathsf D}}(\psi) }_{{\mathcal L}({\mathsf D},{\mathcal L}^{n-1}({\mathsf A},{\mathsf B}))}$.
By \eqref{eq:ball of ptp} it suffices to prove that
\[
\norm{T(c\mathbin{\otimes} d)}\le \operatorname{def}_{{\sD\times\sD}}(\phi) \, \norm{ \LRES{{\mathsf D}}(\psi) }_{{\mathcal L}({\mathsf D},{\mathcal L}^{n-1}({\mathsf A},{\mathsf B}))}
\quad\text{for all $c,d\in \operatorname{\rm ball}\nolimits_1({\mathsf D})$.}
\]
This is now a straightforward calculation:
\begin{align*}
\norm{T(c\mathbin{\otimes} d)} &= \left\Vert
\phi(a_1)\cdot \ave[\phi]{c\mathbin{\otimes} d}^{n-1}(\psi)(a_2,\dots,a_n)
- \ave[\phi]{a_1c\mathbin{\otimes} d}^{n-1}(\psi)(a_2,\dots,a_n)
\right\Vert \\
& =
\left\Vert
\phi(a_1) \phi(c) \psi(d,a_2,\dots,a_n)
- \phi(a_1c) \psi(d,a_2,\dots,a_n)
\right\Vert \\
& \le \norm{\phi(a_1)\phi(c)-\phi(a_1c)} \, \norm{\psi(d,a_2,\dots,a_n)} \\
& = \norm{\phi(a_1)\phi(c)-\phi(a_1c)} \, \norm{[\LRES{{\mathsf D}}(\psi)(d)](a_2,\dots,a_n)} \\
& \le \operatorname{def}_{{\sD\times\sD}}(\phi) \, \norm{\LRES{{\mathsf D}}(\psi)}_{{\mathcal L}({\mathsf D},{\mathcal L}^{n-1}({\mathsf A},{\mathsf B}))},
\end{align*}
as required.
\end{proof}
\end{subsection}
\begin{subsection}{Defining the approximate homotopy}\label{ss:approx-homotopy}
To construct our approximate homotopy, we have to place further restrictions on ${\mathsf B}$ and~${\mathsf D}$. Thus throughout this subsection:
\begin{itemize}
\item
${\mathsf A}$ is a Banach algebra, ${\mathsf B}$ is a unital dual Banach algebra with an isometric predual, and $\phi\in {\mathcal L}({\mathsf A},{\mathsf B})$;
\item
${\mathsf D}$ is a closed subalgebra of ${\mathsf A}$, which is unital and amenable with constant $\le K$.
\end{itemize}
We also fix a net $(\Delta_\alpha)_{\alpha\in I}$ which is a bounded approximate diagonal for ${\mathsf D}$ and has the following properties: $\sup_\alpha\norm{\Delta_\alpha}_{{\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D}}\le K$; and there exists $\boldsymbol{\Delta} \in ({\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D})^{**}$ such that $\Delta_\alpha \xrto{\ensuremath{{\rm w}^*}} \boldsymbol{\Delta}$.
The desired operators $\mathop{\sigma}\nolimits_\phi^n:{\mathcal L}^{n+1}({\mathsf A},{\mathsf B})\to {\mathcal L}^n({\mathsf A},{\mathsf B})$ will be constructed as limits of the operators $\ave[\phi]{\Delta_\alpha}^n$, with respect to an appropriate topology which we now describe.
Let $E$ and $F$ be Banach spaces and let $n \in \mathbb N$ be fixed. For the sake of readability, elements of $E^n$ will be written as $\underline{x}:= (x_1,x_2, \ldots, x_n)$.
For every $\underline{x} \in E^n$ and $y \in F$, we introduce the seminorm
\begin{align*}
p_{\underline{x},y} &\colon {\mathcal L}^n(E, F^*) \to [0, \infty); \quad \psi \mapsto |\langle y, \psi(\underline{x}) \rangle |.
\end{align*}
The topology on ${\mathcal L}^n(E, F^*)$ generated by the family of seminorms $(p_{\underline{x},y})_{(\underline{x},y) \in E^n \times F}$ is called the \dt{topology of point-to-$\ensuremath{{\rm w}^*}$ convergence} and will be denoted by~$\tau$.
The topology $\tau$ is linear, locally convex and Hausdorff (see \cite[Chapter~II,~\S4]{STVS}). We record a lemma here for future reference.
\begin{lem}\label{l:point-to-weak* conv}
A net $(\psi_{\gamma})_{\gamma \in \Gamma}$ in ${\mathcal L}^n(E, F^*)$ converges to zero with respect to $\tau$ (in notation, $\lim\nolimits^\tau_{\gamma} \psi_{\gamma} =0$) if and only if
$\lim\nolimits^\sigma_{\gamma} \psi_{\gamma}(\underline{x}) = 0$ for all $\underline{x} \in E^n$.
\end{lem}
\begin{lem}\label{l:coboundary weak-star-cts}
Suppose ${\mathsf B}$ is a dual Banach algebra with an isometric predual.
Then for every $n\in\mathbb N$, the operator $\mathop{\partial}\nolimits_\phi^n:{\mathcal L}^n({\mathsf A},{\mathsf B})\to{\mathcal L}^{n+1}({\mathsf A},{\mathsf B})$ is $\tau$-to-$\tau$ continuous.
\end{lem}
\begin{proof}
Let $(\psi_i)$ be a $\tau$-convergent net in ${\mathcal L}^n({\mathsf A},{\mathsf B})$, with limit~$\psi$. Let $a_1,\dots, a_{n+1}\in {\mathsf A}$. By Lemma~\ref{l:point-to-weak* conv}, for each $j=1,\dots, n$ we have
\[
\psi_i(a_1,\dots, a_ja_{j+1}, \dots, a_{n+1}) \xrto{\ensuremath{{\rm w}^*}} \psi(a_1,\dots, a_ja_{j+1},\dots, a_{n+1}).
\]
Also, since ${\mathsf B}$ is a dual Banach algebra, multiplication in ${\mathsf B}$ is separately \ensuremath{{\rm w}^*}-continuous. Hence
\[
\begin{aligned}
\lim\nolimits^\sigma_i \big(\phi(a_1)\psi_i(a_2,\dots, a_{n+1}) \big)
& = \phi(a_1) \ \lim\nolimits^\sigma_i\psi_i(a_2,\dots, a_{n+1}) \\
& = \phi(a_1)\psi(a_2,\dots,a_{n+1})\;, \\
\end{aligned} \]
and similarly
\[
\begin{aligned}
\lim\nolimits^\sigma_i \big(\psi_i(a_1,\dots, a_n)\phi(a_{n+1}) \big) = \psi(a_1,\dots,a_n)\phi(a_{n+1}) .
\end{aligned} \]
Thus $(\mathop{\partial}\nolimits_\phi^n\psi_i)(a_1,\dots, a_{n+1}) \xrto{\ensuremath{{\rm w}^*}} (\mathop{\partial}\nolimits_\phi^n\psi)(a_1,\dots, a_{n+1})$, as required.
\end{proof}
\begin{lem}\label{l:build splitting}
Given $n\in\mathbb N$ and $\psi\in{\mathcal L}^{n+1}({\mathsf A},{\mathsf B})$, the net $(\ave[\phi]{\Delta_\alpha}^n\psi)$ $\tau$-converges in ${\mathcal L}^n({\mathsf A},{\mathsf B})$.
\end{lem}
\begin{proof}
Fix $\psi\in{\mathcal L}^{n+1}({\mathsf A},{\mathsf B})$. Given $a_1,\dots, a_n\in {\mathsf A}$, define $T\in {\mathcal L}({\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D},{\mathsf B})$ by $T(w) := \ave[\phi]{w}^n(\psi)(a_1,\dots, a_n)$.
Then $T: {\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D}\to {\mathsf B}$ is a bounded linear map with values in a dual Banach space, and hence has a unique \wstar-\wstar-continuous extension $\widetilde{T}:({\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D})^{**} \to {\mathsf B}$, which satisfies $\norm{\widetilde{T}}=\norm{T}$.
In particular,
\begin{align}\label{eq: aux splitting}
\ave[\phi]{\Delta_\alpha}^n(\psi)(a_1,\dots,a_n) = T(\Delta_\alpha) \xrto{\ensuremath{{\rm w}^*}} \widetilde{T} (\boldsymbol{\Delta}).
\end{align}
Denote the right-hand side of \eqref{eq: aux splitting} by $\Psi(a_1,\dots, a_n)$.
Routine calculations show that the map $\Psi \colon {\mathsf A}^n \to {\mathsf B}$ is $n$-multilinear.
Using \eqref{eq: aux splitting} and the bound in \eqref{eq:bound of averaging operator}, we obtain
\begin{align*}
\norm{\Psi(a_1,\dots,a_n)}
& = \norm{\widetilde{T}(\boldsymbol{\Delta})}
\le \liminf_{\alpha} \norm{T(\Delta_{\alpha})} \le K \norm{T} \notag \\
& \le K \norm{\phi}\norm{\LRES{{\mathsf D}}(\psi)} \norm{a_1}\dots \norm{a_n} \;.
\end{align*}
Thus $\Psi\in{\mathcal L}^n({\mathsf A},{\mathsf B})$, and $\ave[\phi]{\Delta_\alpha}^n(\psi) \xrto{\tau} \Psi$ by \eqref{eq: aux splitting} and Lemma~\ref{l:point-to-weak* conv}.
\end{proof}
\begin{dfn}[Approximate homotopy]
\label{d:define approx homotopy}
Define $\mathop{\sigma}\nolimits_\phi^n : {\mathcal L}^{n+1}({\mathsf A},{\mathsf B}) \to {\mathcal L}^n({\mathsf A},{\mathsf B})$ by
\begin{equation}\label{eq:define split}
\mathop{\sigma}\nolimits_\phi^n(\psi) = \lim\nolimits^\tau\nolimits_{\alpha} \ave[\phi]{\Delta_\alpha}^n (\psi)
\qquad \text{for all $\psi\in{\mathcal L}^{n+1}({\mathsf A},{\mathsf B})$.}
\end{equation}
This is well-defined by Lemma~\ref{l:build splitting}.
\end{dfn}
The following lemma is basic, and is included just for the sake of convenient reference.
\begin{lem}\label{l:bound of w*-limit}
Let $F$ be a Banach space, and let $(f_i)$ be a net in $F^*$ which converges \ensuremath{{\rm w}^*}\ to some $f\in F^*$. Suppose also that there is a convergent net $(c_i)$ in $[0,\infty)$ such that $\norm{f_i}\le c_i$. Then $\norm{f}\le \lim_i c_i$.
\end{lem}
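The verification is short: for each $y$ in the closed unit ball of $F$, \ensuremath{{\rm w}^*}-convergence gives $|\langle y, f\rangle|=\lim_i |\langle y,f_i\rangle|$, while $|\langle y,f_i\rangle|\le\norm{f_i}\le c_i$ for every $i$; since both nets converge, $|\langle y,f\rangle|\le\lim_i c_i$, and taking the supremum over such $y$ yields $\norm{f}\le \lim_i c_i$.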
\begin{prop}[Approximate splitting, 2nd version]\label{p:approx-splitting-v2}
Suppose $\phi(1_{\mathsf D})=1_{{\mathsf B}}$.
Then for all $n\ge 2$ and all $\psi\in {\mathcal L}^n({\mathsf A},{\mathsf B})$,
\[
\begin{gathered}
\Norm{ \LRES{{\mathsf D}} \left( \mathop{\partial}\nolimits_\phi^{n-1}\mathop{\sigma}\nolimits_\phi^{n-1} (\psi)
+ \mathop{\sigma}\nolimits_\phi^n \mathop{\partial}\nolimits_\phi^n(\psi)
- \psi \right) }_{{\mathcal L}({\mathsf D},{\mathcal L}^{n-1}({\mathsf A},{\mathsf B}))}
\\
\le
2 K \operatorname{def}_{{\sD\times\sD}}(\phi)\norm{\LRES{{\mathsf D}}(\psi)}_{{\mathcal L}({\mathsf D},{\mathcal L}^{n-1}({\mathsf A},{\mathsf B}))} \;.
\end{gathered}
\]
\end{prop}
\begin{proof}
To ease notational congestion, throughout this proof we let
\[ M := \norm{\LRES{{\mathsf D}}(\psi)}_{{\mathcal L}({\mathsf D},{\mathcal L}^{n-1}({\mathsf A},{\mathsf B}))}\;.\]
Let $a_1\in \operatorname{\rm ball}\nolimits_1({\mathsf D})$ and let $a_2,\dots,a_n\in \operatorname{\rm ball}\nolimits_1({\mathsf A})$; it suffices to prove that
\begin{equation}\label{eq:desired}
\left\Vert \begin{gathered}
\mathop{\partial}\nolimits_\phi^{n-1}\mathop{\sigma}\nolimits_\phi^{n-1} (\psi)(a_1,\dots, a_n)
+ \mathop{\sigma}\nolimits_\phi^n \mathop{\partial}\nolimits_\phi^n(\psi)(a_1,\dots, a_n) \\
- \psi(a_1,\dots, a_n)
\end{gathered} \right\Vert
\le 2K \operatorname{def}_{{\sD\times\sD}}(\phi) M\;.
\end{equation}
Since $\mathop{\partial}\nolimits_\phi^{n-1}:{\mathcal L}^{n-1}({\mathsf A},{\mathsf B})\to {\mathcal L}^n({\mathsf A},{\mathsf B})$ is
$\tau$-to-$\tau$ continuous by Lemma~\ref{l:coboundary weak-star-cts},
\begin{equation*}
\begin{aligned}
& \mathop{\partial}\nolimits_\phi^{n-1}\mathop{\sigma}\nolimits_\phi^{n-1}(\psi)
+
\mathop{\sigma}\nolimits_\phi^n\mathop{\partial}\nolimits_\phi^n(\psi) -\psi \\
& =
\lim\nolimits^\tau_\alpha \left( \mathop{\partial}\nolimits_\phi^{n-1}\ave[\phi]{\Delta_\alpha}^{n-1}(\psi)
+
\ave[\phi]{\Delta_\alpha}^n\mathop{\partial}\nolimits_\phi^n(\psi)-\psi \right).
\end{aligned}
\end{equation*}
Thus the left-hand side of the desired inequality \eqref{eq:desired} is equal to
\begin{equation}\label{eq:en route}
\left\Vert
\lim\nolimits^\sigma_\alpha \left(
\begin{gathered}
\mathop{\partial}\nolimits_\phi^{n-1}\ave[\phi]{\Delta_\alpha}^{n-1} (\psi)(a_1,\dots, a_n)
+ \ave[\phi]{\Delta_\alpha}^n \mathop{\partial}\nolimits_\phi^n(\psi)(a_1,\dots, a_n) \\
- \psi(a_1,\dots, a_n)
\end{gathered}\right) \right\Vert.
\end{equation}
Combining Lemma \ref{l:approx-splitting-v1}, Lemma \ref{l:approx-left-modular}, and the bound in \eqref{eq:bound of averaging operator} yields
\[
\begin{aligned}
&
\left\Vert
\begin{gathered}
\mathop{\partial}\nolimits_\phi^{n-1}\ave[\phi]{\Delta_\alpha}^{n-1} (\psi)(a_1,\dots,a_n)
+ \ave[\phi]{\Delta_\alpha}^n\mathop{\partial}\nolimits_\phi^n(\psi)(a_1,\dots,a_n) \\
- \pi_{{\mathsf B}}(\phi\mathbin{\widehat{\otimes}}\phi)(\Delta_\alpha)\cdot\psi(a_1,\dots,a_n)
\end{gathered}
\right\Vert \\
= &
\left\Vert
\phi(a_1)\cdot \ave[\phi]{\Delta_\alpha}^{n-1}(\psi)(a_2,\dots,a_n)
- \ave[\phi]{\Delta_\alpha\cdot a_1}^{n-1}(\psi)(a_2,\dots,a_n)
\right\Vert \\
\le &
\left\{
\begin{gathered}
\left\Vert \phi(a_1)\cdot \ave[\phi]{\Delta_\alpha}^{n-1}(\psi)(a_2,\dots,a_n)
- \ave[\phi]{a_1\cdot\Delta_\alpha}^{n-1}(\psi)(a_2,\dots,a_n) \right\Vert \\
+
\left\Vert
\ave[\phi]{a_1\cdot\Delta_\alpha-\Delta_\alpha\cdot a_1}^{n-1}(\psi)(a_2,\dots,a_n)
\right\Vert
\end{gathered} \right. \\
\le &
\operatorname{def}_{{\sD\times\sD}}(\phi) \norm{\Delta_\alpha} M
+ \norm{\phi} \norm{a_1\cdot\Delta_\alpha-\Delta_\alpha\cdot a_1} M.
\end{aligned}
\]
Also, since $\phi(1_{{\mathsf D}})=1_{{\mathsf B}}$, using $\operatorname{def}_{{\sD\times\sD}}(\phi)= \norm{ \phi \pi_{{\mathsf D}} - \pi_{{\mathsf B}} (\phi \mathbin{\widehat{\otimes}} \phi) }$, we obtain
\begin{align*}
& \phantom{\quad} \left\Vert
\pi_{{\mathsf B}}(\phi\mathbin{\widehat{\otimes}}\phi)(\Delta_\alpha)\cdot\psi(a_1,\dots,a_n)
- \psi(a_1,\dots,a_n)
\right\Vert \\
& \le
\norm{\pi_{{\mathsf B}}(\phi\mathbin{\widehat{\otimes}}\phi)(\Delta_\alpha) - \phi(1_{{\mathsf D}})} \norm{\psi(a_1,\dots,a_n)} \\
& \le
\norm{\pi_{{\mathsf B}}(\phi\mathbin{\widehat{\otimes}}\phi)(\Delta_\alpha) - \phi(\pi_{{\mathsf D}}(\Delta_\alpha))} M
+ \norm{\phi( \pi_{{\mathsf D}}(\Delta_\alpha)-1_{{\mathsf D}})} M
\\
& \le \operatorname{def}_{{\sD\times\sD}}(\phi) \norm{\Delta_\alpha} M + \norm{\phi} \norm{\pi_{{\mathsf D}}(\Delta_\alpha)-1_{{\mathsf D}}} M \;.
\end{align*}
Putting things together, and recalling that $K \ge\sup_\alpha\norm{\Delta_\alpha}$, we have:
\begin{equation}\label{eq:before limit}
\left\Vert \begin{gathered}
\mathop{\partial}\nolimits_\phi^{n-1}\ave[\phi]{\Delta_\alpha}^{n-1} (\psi)(a_1,\dots,a_n)
+ \ave[\phi]{\Delta_\alpha}^n\mathop{\partial}\nolimits_\phi^n(\psi)(a_1,\dots,a_n) \\
- \psi(a_1,\dots,a_n)
\end{gathered} \right\Vert
\le
2 \operatorname{def}_{{\sD\times\sD}}(\phi) KM + \varepsilon_\alpha\;,
\end{equation}
where
$\varepsilon_\alpha :=
\norm{\phi} \norm{a_1\cdot\Delta_\alpha-\Delta_\alpha\cdot a_1} M
+ \norm{\phi} \norm{\pi_{{\mathsf D}}(\Delta_\alpha)-1_{{\mathsf D}}} M$,
which tends to $0$ by \eqref{amenabledef}.
Comparing \eqref{eq:en route} and \eqref{eq:before limit},
and appealing to Lemma~\ref{l:bound of w*-limit},
the desired inequality \eqref{eq:desired} follows.
\end{proof}
\end{subsection}
\begin{subsection}{Defining the ``improving operator''}
\label{s:define-imp-op}
As in the previous subsection:
\begin{itemize}
\item
${\mathsf A}$ is a Banach algebra, ${\mathsf B}$ is a unital dual Banach algebra with an isometric predual;
\item
${\mathsf D}$ is a closed subalgebra of ${\mathsf A}$, which is unital and amenable with constant $\le K$.
\end{itemize}
Then, for any given $\phi\in{\mathcal L}({\mathsf A},{\mathsf B})$, we may still form the splitting maps $\mathop{\sigma}\nolimits_\phi^n$, as in Definition \ref{d:define approx homotopy}. However, rather than fixing a single $\phi\in{\mathcal L}({\mathsf A},{\mathsf B})$ and working with it throughout, we will now allow $\phi$ to vary.
\begin{dfn}\label{d:define improving}
The \dt{improving operator} $F: {\mathcal L}({\mathsf A},{\mathsf B})\to{\mathcal L}({\mathsf A},{\mathsf B})$ is defined by
the formula
\begin{equation}\label{eq:define next step}
F(\phi) := \phi+ \mathop{\sigma}\nolimits_\phi^1(\phi^\vee) = \phi+ \lim\nolimits^\tau_\alpha \ave[\phi]{\Delta_\alpha}^1(\phi^\vee).
\end{equation}
\end{dfn}
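The formula \eqref{eq:define next step} is quite concrete: unwinding Definition~\ref{d: ave}, and taking the convention $\phi^\vee(a_1,a_2)=\phi(a_1a_2)-\phi(a_1)\phi(a_2)$, if $\Delta_\alpha=\sum_j c_j\mathbin{\otimes} d_j$ is any representation of $\Delta_\alpha$ as an absolutely convergent sum of elementary tensors, then
\[ F(\phi)(a) = \phi(a) + \lim\nolimits^\sigma_\alpha \sum_j \phi(c_j)\bigl( \phi(d_j a) - \phi(d_j)\phi(a) \bigr) \qquad\text{for all $a\in {\mathsf A}$.} \]
Thus $F(\phi)$ corrects $\phi$ by a \ensuremath{{\rm w}^*}-averaged version of its own multiplicativity defect in the first variable over ${\mathsf D}$.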
The desired properties of $F$ follow from applying the machinery of Section~\ref{ss:approx-homotopy} to the bilinear map $\phi^\vee\in{\mathcal L}^2({\mathsf A},{\mathsf B})$, viewed as a ``$2$-cocycle'' with respect to the operator $\mathop{\partial}\nolimits_\phi^2$ (see Lemma~\ref{l:2-cocycle}).
We first deal with some technical details that do not depend on amenability of~${\mathsf D}$.
\begin{lem}\label{l:preserved by improvement}
Let $\phi,\psi\in{\mathcal L}({\mathsf A},{\mathsf B})$ and let $w\in {\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D}$.
\begin{romnum}
\item
If $\psi(1_{{\mathsf D}})=1_{{\mathsf B}}$, then $\ave[\phi]{w}^1(\psi^\vee)(1_{{\mathsf D}})=0$.
\item
If $\operatorname{def}_{{\sA\times\sD}}(\psi)=0$, then $\ave[\phi]{w}^1(\psi^\vee)(x)=0$ for all $x\in {\mathsf D}$,
and
\[
\ave[\phi]{w}^1(\psi^\vee)(ax)=
\ave[\phi]{w}^1(\psi^\vee)(a) \cdot \psi(x)\qquad \text{for all $a\in {\mathsf A}$ and $x\in {\mathsf D}$.}
\]
\end{romnum}
\end{lem}
\begin{proof}
For fixed $\phi$ and $\psi\in{\mathcal L}({\mathsf A},{\mathsf B})$, the map $w\mapsto \ave[\phi]{w}^1(\psi^\vee)$ is a bounded linear map from ${\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D}$ to ${\mathcal L}({\mathsf A},{\mathsf B})$.
Hence, for both (i) and (ii), it suffices to prove the desired identity in the special case $w=c\mathbin{\otimes} d$, where $c,d\in {\mathsf D}$.
\begin{romnum}
\item $\ave[\phi]{c\mathbin{\otimes} d}^1(\psi^\vee)(1_{{\mathsf D}}) = \phi(c) \psi^\vee(d,1_{{\mathsf D}}) = \phi(c)\psi(d1_{{\mathsf D}})-\phi(c)\psi(d)\psi(1_{{\mathsf D}})=0$.
\item Let $a\in {\mathsf A}$ and $x\in {\mathsf D}$. Then
\[
\ave[\phi]{c\mathbin{\otimes} d}^1(\psi^\vee)(x)
= \phi(c)\psi^\vee(d,x)
= \phi(c)\psi(dx) - \phi(c)\psi(d)\psi(x) = 0
\]
and
\[
\begin{aligned}
\ave[\phi]{c\mathbin{\otimes} d}^1(\psi^\vee)(ax)
& = \phi(c)\psi^\vee(d,ax) \\
& = \phi(c)\psi(dax) - \phi(c)\psi(d)\psi(ax) \\
& = \phi(c)\psi(da)\psi(x) - \phi(c)\psi(d)\psi(a)\psi(x) \\
& = \ave[\phi]{c\mathbin{\otimes} d}^1(\psi^\vee)(a) \cdot \psi(x).
\end{aligned}
\]
\end{romnum}
\vskip-1.5em
\end{proof}
\begin{proof}[Proof of Proposition \ref{p:improving}]
Let $\phi\in{\mathcal L}({\mathsf A},{\mathsf B})$ with $\phi(1_{{\mathsf D}})=1_{{\mathsf B}}$.
Let $F$ be as in Definition~\ref{d:define improving}.
\subproofhead{Part \ref{li:unital}: show that $F(\phi)(1_{{\mathsf D}})=1_{{\mathsf B}}$}
By the definition of $F$, this is equivalent to showing that $\lim\nolimits^\sigma_\alpha \ave[\phi]{\Delta_\alpha}^1(\phi^\vee)(1_{{\mathsf D}})=0$, which in turn follows from Lemma~{\ref{l:preserved by improvement}(i)}.
\subproofhead{Part \ref{li:small step}: show that $\norm{F(\phi)-\phi} \le K\norm{\phi} \operatorname{def}_{{\sD\times\sA}}(\phi)$}
Applying the bound in \eqref{eq:bound of averaging operator} with $\psi=\phi^\vee$ and $w=\Delta_\alpha$ yields
\[ \norm{ \ave[\phi]{\Delta_\alpha}^1(\phi^\vee) }
\le K \norm{\phi} \norm{\LRES{{\mathsf D}}(\phi^\vee)}
= K \norm{\phi}\operatorname{def}_{{\sD\times\sA}}(\phi) \;.\]
Taking the limit on the left-hand side gives the desired bound on $\norm{F(\phi)-\phi}$.
\subproofhead{Part \ref{li:improve defect}: show that $\operatorname{def}_{{\sD\times\sA}}(F(\phi))\le 3K^2\norm{\phi}^2 \operatorname{def}_{{\sD\times\sD}}(\phi)\operatorname{def}_{{\sD\times\sA}}(\phi)$}
We put $\gamma:= F(\phi)-\phi = \mathop{\sigma}\nolimits_\phi^1(\phi^\vee)$ in order to simplify some formulas. Rewriting the identity \eqref{eq:linearize} in terms of the operator $\mathop{\partial}\nolimits_\phi^1$, we have
\[
(\phi+\gamma)^\vee = \phi^\vee - \mathop{\partial}\nolimits_\phi^1(\gamma) - \pi_{{\mathsf B}}\circ(\gamma\mathbin{\widehat{\otimes}}\gamma) \circ \iota_{{\mathsf A},{\mathsf A}} \;,
\]
where $\iota_{{\mathsf A},{\mathsf A}} \in \operatorname{Bil}({\mathsf A},{\mathsf A}; {\mathsf A}\mathbin{\widehat{\otimes}} {\mathsf A})$ is the canonical map.
Hence
\begin{align}\label{eq:two terms to bound}
\operatorname{def}_{{\sD\times\sA}}(F(\phi))
&= \norm{\LRES{{\mathsf D}}\left((\phi+\gamma)^\vee\right)} \notag \\
& \le \norm { \LRES{{\mathsf D}}\left(\phi^\vee - \mathop{\partial}\nolimits_\phi^1(\gamma)\right)} + \norm{ \LRES{{\mathsf D}}( \pi_{{\mathsf B}}\circ (\gamma\mathbin{\widehat{\otimes}}\gamma) \circ \iota_{{\mathsf A},{\mathsf A}}) } \notag \\
& \le \norm { \LRES{{\mathsf D}}\left(\phi^\vee - \mathop{\partial}\nolimits_\phi^1(\gamma)\right)} + \norm{{\gamma\vert}_{{\mathsf D}}}_{{\mathcal L}({\mathsf D},{\mathsf B})} \ \norm{\gamma} \;.
\end{align}
To bound the first term on the right-hand side of \eqref{eq:two terms to bound}, we take $n=2$ and $\psi=\phi^\vee$ in Proposition \ref{p:approx-splitting-v2}. This yields
\begin{align}\label{eq:use approx splitting}
\norm{ \LRES{{\mathsf D}}\left( \mathop{\partial}\nolimits_\phi^1\mathop{\sigma}\nolimits_\phi^1(\phi^\vee) + \mathop{\sigma}\nolimits_\phi^2 \mathop{\partial}\nolimits_\phi^2(\phi^\vee)- \phi^\vee \right) }
&
\le
2 K \operatorname{def}_{{\sD\times\sD}}(\phi)\norm{\LRES{{\mathsf D}}(\phi^\vee)} \notag \\
& = 2K \operatorname{def}_{{\sD\times\sD}}(\phi)\operatorname{def}_{{\sD\times\sA}}(\phi).
\end{align}
Recall that $\mathop{\partial}\nolimits_\phi^2(\phi^\vee)=0$ (by Lemma~\ref{l:2-cocycle}) and
$\gamma=\mathop{\sigma}\nolimits_\phi^1(\phi^\vee)$. Hence \eqref{eq:use approx splitting} may be rewritten as
\begin{align}\label{eq: pre bound on 2nd term}
\norm{ \LRES{{\mathsf D}}\left( \mathop{\partial}\nolimits_\phi^1(\gamma) - \phi^\vee \right) }\le 2K \operatorname{def}_{{\sD\times\sD}}(\phi)\operatorname{def}_{{\sD\times\sA}}(\phi).
\end{align}
The second term is easier to deal with. We already know from part~\ref{li:small step} of this proposition that $\norm{\gamma} \le K\norm{\phi}\operatorname{def}_{{\sD\times\sA}}(\phi)$.
By the same argument, using \eqref{eq:bound of averaging operator}, we obtain
\[ \norm{{\gamma\vert}_{{\mathsf D}}}_{{\mathcal L}({\mathsf D},{\mathsf B})} \le K \norm{\phi}\norm{{\phi^\vee \vert}_{{\sD\times\sD}} }_{{\mathcal L}^2({\mathsf D},{\mathsf B})} = K\norm{\phi} \operatorname{def}_{{\sD\times\sD}}(\phi). \]
Hence
\begin{equation}\label{eq:bound on 2nd term}
\norm{{\gamma\vert}_{{\mathsf D}}}_{{\mathcal L}({\mathsf D},{\mathsf B})} \norm{\gamma} \le K^2\norm{\phi}^2 \operatorname{def}_{{\sD\times\sD}}(\phi)\operatorname{def}_{{\sD\times\sA}}(\phi).
\end{equation}
Combining \eqref{eq:two terms to bound} with \eqref{eq: pre bound on 2nd term} and \eqref{eq:bound on 2nd term} yields
\[
\operatorname{def}_{{\sD\times\sA}}(F(\phi))\le (2K+K^2\norm{\phi}^2) \operatorname{def}_{{\sD\times\sD}}(\phi)\operatorname{def}_{{\sD\times\sA}}(\phi).
\]
To finish off the proof of part~\ref{li:improve defect}
it suffices to observe that $K\ge 1$ (because $\pi_{{\mathsf D}} \colon {\mathsf D}\mathbin{\widehat{\otimes}}{\mathsf D}\to {\mathsf D}$ is contractive and $(\pi_{{\mathsf D}}(\Delta_\alpha))_{\alpha \in I}$ is a b.a.i.\ for ${\mathsf D}$) and $\norm{\phi}\ge 1$ (since $\phi(1_{{\mathsf D}})=1_{{\mathsf B}}$ and both ${\mathsf A}$ and ${\mathsf B}$ are unital).
\subproofhead{Part \ref{li:preserve right}: show that if $\operatorname{def}_{{\sA\times\sD}}(\phi)=0$, then $\operatorname{def}_{{\sA\times\sD}}(F(\phi))=0$}
Applying Lemma \ref{l:preserved by improvement}~(ii) with $w=\Delta_\alpha$ and $\psi=\phi$, and then taking the limit, we have
\[ \gamma(x) = \lim\nolimits^\sigma_\alpha \ave[\phi]{\Delta_\alpha}^1(\phi^{\vee})(x) = 0 \qquad \text{for all $x\in {\mathsf D}$} \]
and
\[ \gamma(ax) -\gamma(a)\phi(x) = \lim\nolimits^\sigma_\alpha \big(\ave[\phi]{\Delta_\alpha}^1(\phi^{\vee})(ax) - \ave[\phi]{\Delta_\alpha}^1(\phi^{\vee})(a) \cdot \phi(x) \big) = 0 \quad \text{for all $a\in {\mathsf A}, x\in {\mathsf D}$.} \]
Hence, whenever $a\in {\mathsf A}$ and $x\in {\mathsf D}$, we have
\[
\begin{aligned}
F(\phi)^\vee(a,x)
& = \phi(ax)+\gamma(ax) - (\phi(a)+\gamma(a))(\phi(x)+\gamma(x)) \\
& = \phi(ax)+\gamma(a)\phi(x) - (\phi(a)+\gamma(a))\phi(x) \\
& = \phi(ax) - \phi(a)\phi(x) = \phi^\vee(a,x) = 0
\end{aligned}
\]
as required.
\end{proof}
This completes the proof of Proposition~\ref{p:improving}, and hence --- via Theorem \ref{t:one-sided improved BEJ} --- the proof of Theorem \ref{t:main innovation}.
\end{subsection}
\end{section}
\end{ack}
\appendix
\begin{section}{Constructing an uncountable clone system for the Tsi\-rel\-son space}
Let $T$ denote the Tsirelson space. In this appendix we prove the following result.
\begin{prop}\label{p:tsirelson}
There is an uncountable clone system for $T$.
\end{prop}
\begin{proof}
We use the notation and terminology of \cite{CS} and \cite[Section~3]{BKL}. Let $(t_n)$ denote the unit vector basis for~$T$. For a subset $M$ of~$\mathbb N$, $P_M$ is the norm one basis projection onto the closed linear span of $\{ t_m : m\in M\}$, denoted by $T_M$. We first recall a few definitions. We say that $J\subseteq \mathbb N$ is a nonempty \dt{Schreier set} if $J$ is a finite set with $\lvert J\rvert\le\min J$. Let $M \subseteq \mathbb N$. We say that $J$ is an \dt{interval in $\mathbb N\setminus M$} if $J$ is of the form $J= [a,b] \cap \mathbb N$ for some real numbers $b >a \ge 1$, such that $J \cap M = \emptyset$. Lastly, if $M \subseteq \mathbb N$ and $J$ is an interval in $\mathbb N\setminus M$, we define
\[ \sigma(\mathbb N,J) = \sup\biggl\{\sum_{j\in J} s_j : s_j\in [0,1]\ (j\in J),\,
{\biggl\|
\sum_{j\in J} s_jt_j
\biggr\|
}_T\le 1\biggr\}. \]
We rely on the following two results:
\begin{itemize}
\item Let $J\subseteq \mathbb N$ be a nonempty Schreier set. Then
\[ \norm{ x } \ge \frac12\sum_{j\in J}\lvert x_j \rvert\qquad \text{for all $x = (x_j)\in T$.} \]
This is an immediate consequence of how the Tsirelson norm is defined.
\item For an infinite $M\subseteq \mathbb N$, we have $T_M\cong T$ if and only if there is a constant $C\ge 1$ such that $\sigma(\mathbb N,J)\le C$ for every interval $J$ in $\mathbb N\setminus M$. This is a special case of a result of Casazza--Johnson--Tzafriri~\cite{CJT}, stated in \cite[Corollary~3.2]{BKL}, and applied here only in the particular case where $N = \mathbb N$.
\end{itemize}
Combining these two results, we obtain the following conclusion: Suppose that $M = \{m_1<m_2<\cdots\}\subseteq \mathbb N$ is an infinite set with
\begin{equation}\label{eq1}
m_1 = 1\qquad\text{and}\qquad m_{j+1}\le 2m_j+2\qquad \text{for all $j\in\mathbb N$.}
\end{equation}
For every nonempty interval $J$ in $\mathbb N\setminus M$, there is a unique $j\in\mathbb N$ such that $J\subseteq [m_j+1,m_{j+1}-1]$. This implies that
\[ \lvert J\rvert\le (m_{j+1}-1)-(m_j+1)+1\le 2m_j+1 - m_j = m_j+1\le \min J, \]
so $J$ is a Schreier set, and therefore
\[ {\biggl\|\sum_{j\in J} s_jt_j\biggr\|}_T\ge \frac12\sum_{j\in J} s_j\qquad \text{for all $s_j\in[0,1]$, $j\in J$} \]
by the first bullet point, so $\sigma(\mathbb N,J)\le 2$. Hence $T_M\cong T$ by the second bullet point. In fact, it follows from the second part of the proof of Theorem~$10$ and the paragraph before Proposition~$3$ in \cite{CJT} that $T_M$ and $T$ are $4$-isomorphic.
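As a quick numerical sanity check of the implication just established (our illustration, not part of the proof), the snippet below takes the concrete set $M = \{1, 2, 4, 8, \dots\}$, which satisfies condition~\eqref{eq1} since $m_{j+1} = 2m_j \le 2m_j + 2$, and verifies that every maximal interval of integers avoiding $M$ is a Schreier set. The helper `maximal_gaps` is ours.

```python
# Sanity check (illustration only): for M = {1, 2, 4, 8, ...}, which
# satisfies condition (eq1), every maximal interval of integers avoiding M
# is a Schreier set, i.e. |J| <= min J.

def maximal_gaps(ms):
    """The intervals [m_j + 1, m_{j+1} - 1] between consecutive elements of ms."""
    return [list(range(a + 1, b)) for a, b in zip(ms, ms[1:]) if b > a + 1]

M = [2 ** j for j in range(12)]  # 1, 2, 4, ..., 2048
for J in maximal_gaps(M):
    assert len(J) <= min(J)  # the Schreier condition

print("all gap intervals are Schreier sets")
```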
We can therefore establish the result by constructing an uncountable, almost disjoint family~${\mathcal D}$ of sets whose elements satisfy~\eqref{eq1}. For then, the uncountable family of norm one idempotents $(P_M)_{M \in {\mathcal D}}$ will be the desired clone system. We construct ${\mathcal D}$ as follows.
Given a function $f\in\{0,1\}^{\mathbb N}$, define
\[ m_n(f) = 2^{n-1} + \sum_{j=1}^{n-1} f(j)2^{n-1-j}\qquad \text{for all $n\in\mathbb N$.} \]
Alternatively, we can state this definition recursively as follows:
\begin{equation}\label{eq2}
m_1(f)=1\qquad\text{and}\qquad m_{n+1}(f) = 2m_n(f)+f(n)\qquad \text{for all $n\in\mathbb N$.}
\end{equation}
Set
\[ M(f) = \{ m_n(f) : n\in\mathbb N \} \qquad\text{and}\qquad {\mathcal D} = \{ M(f) : f\in\{0,1\}^{\mathbb N} \}. \]
Clearly $M(f)$ is an infinite subset of $\mathbb N$ for each $f\in\{0,1\}^{\mathbb N}$. Since $f(n)\in\{0,1\}$, the recursive definition~\eqref{eq2} shows that the elements of $M(f)$ satisfy~\eqref{eq1}.
It remains to verify that the family ${\mathcal D}$ is almost disjoint. More precisely, for distinct functions
$f,g\in\{0,1\}^{\mathbb N}$, we claim that $\lvert M(f)\cap M(g)\rvert=k$, where $k\in\mathbb N$ is the smallest number such that $f(k)\ne g(k)$. This however follows from an easy induction argument and~\eqref{eq2}.
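The induction is routine; for concreteness, the following script (ours, not part of the proof) verifies both claims exhaustively for all $f$ of length $7$: the closed form for $m_n(f)$ agrees with the recursion~\eqref{eq2}, and distinct $f,g$ yield $\lvert M(f)\cap M(g)\rvert = k$, with $k$ the first index of disagreement. The helper names are ours.

```python
# Exhaustive check (illustration only) of the closed form vs. the
# recursion (eq2), and of the almost-disjointness claim, for all
# 0/1-sequences of length 7. f is 1-indexed via f[j-1] = f(j).
import itertools

def m_closed(f, n):
    return 2 ** (n - 1) + sum(f[j - 1] * 2 ** (n - 1 - j) for j in range(1, n))

N = 7
fs = list(itertools.product([0, 1], repeat=N))

# the recursion m_1 = 1, m_{n+1} = 2 m_n + f(n) reproduces the closed form
for f in fs:
    m = 1
    for n in range(1, N + 1):
        assert m == m_closed(f, n)
        if n < N:
            m = 2 * m + f[n - 1]

# |M(f) ∩ M(g)| equals the first index at which f and g differ
Ms = {f: {m_closed(f, n) for n in range(1, N + 1)} for f in fs}
for f, g in itertools.combinations(fs, 2):
    k = next(j + 1 for j in range(N) if f[j] != g[j])
    assert len(Ms[f] & Ms[g]) == k

print("both claims verified for all f of length", N)
```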
\end{proof}
\end{section}
\Addresses
\end{document} |
\begin{document}
\title{Lower Bounds for Matrix Factorization}
\begin{abstract}
We study the problem of constructing explicit families of matrices which cannot be expressed as a product of a few sparse matrices. In addition to being a natural mathematical question on its own, this problem appears in various incarnations in computer science; the most significant being in the context of lower bounds for algebraic circuits which compute linear transformations, matrix rigidity and data structure lower bounds.
We first show, for every constant $d$, a deterministic construction in subexponential time of a family $\{M_n\}$ of $n \times n$ matrices which cannot be expressed as a product $M_n = A_1 \cdots A_d$ where the total sparsity of $A_1,\ldots,A_d$ is less than $n^{1+1/(2d)}$. In other words, any depth-$d$ linear circuit computing the linear transformation $M_n\cdot \vecx$ has size at least $n^{1+\Omega(1/d)}$. This improves upon the prior best lower bounds for this problem, which are barely super-linear, and were obtained by a long line of research based on the study of super-concentrators (albeit at the cost of a blow up in the time required to construct these matrices).
We then outline an approach for proving improved lower bounds through a certain derandomization problem, and use this approach to prove asymptotically optimal quadratic lower bounds for natural special cases, which generalize many of the common matrix decompositions.
\end{abstract}
\section{Introduction}
\label{sec:intro}
This work concerns the following (informally stated) very natural problem:
\begin{openproblem}
\label{openproblem:lb}
Exhibit an explicit matrix $A \in \F^{n \times n}$, such that $A$ cannot be written as $A=BC$, where $B \in \F^{n \times m}$ and $C \in \F^{m \times n}$ are sparse matrices.
\end{openproblem}
Before bothering ourselves with the precise meaning of the words ``explicit'' and ``sparse'' in the above problem, we discuss the various contexts in which this problem presents itself.
\subsection{Linear circuits and matrix factorization}
\label{sec:intro:lin-circuits}
Algebraic complexity theory studies the complexity of computing polynomials using arithmetic operations: addition, subtraction, multiplication and division. An algebraic circuit over a field $\F$ is an acyclic directed graph whose vertices of in-degree 0, also called inputs, are labeled by indeterminates $\set{x_1, \ldots, x_n}$ or field elements from $\F$, and every internal node is labeled with an arithmetic operation. The circuit computes rational functions in the natural way, and the polynomials (or rational functions) computed by the circuit are those computed by its vertices of out-degree 0, called the outputs. This framework is general enough to encompass virtually all the known algorithms for algebraic computational problems. The size of the circuit is defined to be the number of edges in it. For a more detailed background on algebraic circuits, see \cite{SY10}.
Perhaps the simplest non-trivial class of polynomials is the class of linear (or affine) functions. Accordingly, such polynomials can be computed by a very simple class of circuits called \emph{linear circuits}: these are algebraic circuits which are only allowed to use addition and multiplication by a scalar. It is often convenient to consider graphs with labels on the edges as well: every internal node is an addition gate, and for $c \in \F$, an edge labeled $c$ from a vertex $v$ to a vertex $u$ denotes that the output of $v$ is multiplied by $c$ when feeding into $u$. Thus, every node computes a linear combination of its inputs.
It is not hard to show that any arithmetic circuit for computing a set of linear functions can be converted into a linear circuit with only a constant blow-up in size (see \cite{BCS97}, Theorem 13.1; eliminating division gates requires that the field $\F$ in question is large enough. In this paper we will always make this assumption when needed).
Clearly, every set of $n$ linear functions on $n$ variables (represented by a matrix $A \in \F^{n \times n}$) can be computed by a linear circuit of size $O(n^2)$. Using counting arguments (over finite fields) or dimension arguments (over infinite fields), it can be shown that for a random or generic matrix this upper bound is fairly tight. Thus, a central open problem in algebraic complexity theory is to prove any super-linear lower bound for an \emph{explicit} family of matrices $\set{A_n}$ where $A_n \in \F^{n \times n}$. The standard notion of explicitness in complexity theory is that there is a deterministic algorithm that outputs the matrix $A_n$ in $\poly(n)$ time, although more or less stringent definitions can be considered as well.
Despite decades of research and partial results, such lower bounds are not known.\footnote{We remark that super-linear lower bounds for general arithmetic circuits are known, but for polynomials of high degree \cite{Strassen73, BS83}.} In order to gain insight into the general model of computation, research has focused on limited models of linear circuits, such as monotone circuits, circuits with bounded coefficients, or bounded depth circuits. We defer a more thorough discussion on previous work to \autoref{sec:intro:prev}, and proceed to describe bounded depth circuits, which are the focus of this work.
The \emph{depth} of a circuit is the length (in edges) of a longest path from an input to an output. Constant depth circuits appear to be a particularly weak model of computation. However, even this model is surprisingly powerful (see also \autoref{sec:intro:rigidity}).
The ``easiest'' non-trivial model is the model of depth-2 linear circuits. A depth 2 linear circuit computing a linear transformation $A \in \F^{n \times n}$ consists of a bottom layer of $n$ input gates, a middle layer of $m$ gates, and a top layer of $n$ output gates. We assume, without loss of generality, that the circuit is \emph{layered}, in the sense that every edge goes either from the bottom to the middle layer, or from the middle to the top layer. Indeed, every edge going directly from the bottom to the top layer can be replaced by a path of length 2; this transformation increases the size of the circuit by at most a factor of 2.
By letting $C \in \F^{m \times n}$ be the adjacency matrix of the (labeled) subgraph between the bottom and the middle layer, and $B \in \F^{n \times m}$ be the adjacency matrix of the subgraph between the middle and the top layer, it is clear that $A=BC$. Thus, a decomposition of $A$ into the product of two sparse matrices is equivalent to saying that $A$ has a small depth-2 linear circuit. This argument can be generalized, in exactly the same way, to depth-$d$ circuits and decompositions of the form $A=A_1 \cdots A_d$, for constant $d$.
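This correspondence is easy to make concrete. The toy example below (ours, not from the paper) builds the two adjacency matrices of a small depth-2 circuit and confirms that the circuit size (number of edges) is exactly the total sparsity $\mathrm{nnz}(B) + \mathrm{nnz}(C)$.

```python
# Toy illustration: edges bottom->middle are the nonzeros of C, edges
# middle->top are the nonzeros of B, so circuit size = nnz(B) + nnz(C).
import numpy as np

n, m = 4, 3
C = np.array([[1, 0, 2, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 1]])   # bottom -> middle, shape m x n
B = np.array([[1, 0, 0],
              [2, 1, 0],
              [0, 0, 1],
              [0, 1, 1]])      # middle -> top, shape n x m
A = B @ C                      # the linear map computed by the circuit

circuit_size = np.count_nonzero(B) + np.count_nonzero(C)
assert A.shape == (n, n)
assert circuit_size == 11
print("A =", A.tolist(), "; circuit size =", circuit_size)
```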
Weak super-linear lower bounds are known for constant depth linear circuits. They are based on the following observation, due to Valiant \cite{Valiant75}: for subsets $S,T\subseteq [n]$ of size $k$, let $A_{S,T}$ denote the submatrix of $A$ indexed by rows in $S$ and columns in $T$. If $A_{S,T}$ has rank $k$, the minimal vertex cut in the subcircuit restricted to inputs from $S$ and outputs from $T$ is of size at least $k$: indeed, a smaller cut corresponds to a factorization $A_{S,T} = PQ$ for $P \in \F^{k \times r}$ and $Q \in \F^{r \times k}$ for $r < k$, contradicting the rank assumption. Using Menger's theorem, it is now possible to deduce that if $A$ is a matrix such that for every $S,T$ as above the matrix $A_{S,T}$ is non-singular, then the circuit computing $A$ contains, for every subcircuit which corresponds to such $S,T$, at least $k$ vertex disjoint paths from $S$ to $T$. Such graphs were named \emph{superconcentrators} by Valiant, and their minimal size was extensively studied \cite{Valiant75, Pippenger77, Pippenger82, DDPW83, Pudlak94, AP94, RTS00}.
Superconcentrators of logarithmic depth and linear size do exist, so while this approach cannot show lower bounds for circuits of logarithmic depth, it is possible to show that for constant $d$, any depth-$d$ superconcentrator has size at least $n \cdot \lambda_d(n)$, where $\lambda_d(n)$ is a function that unfortunately grows very slowly with $n$. For example, $\lambda_2(n) = \Theta(\log^2 n / \log \log n)$, $\lambda_3(n) = \Theta(\log \log n)$, $\lambda_4(n) = \lambda_5(n) = \log^*(n)$, and so on. Such lower bounds apply for any matrix whose minors of all orders are non-zero, e.g., a Cauchy matrix given by $A_{i,j} = 1/(x_i - y_j)$ for any distinct $x_1,\ldots,x_n,y_1,\ldots,y_n$. Over finite fields it is possible to modify the proof and obtain similar lower bounds for matrices defining good error correcting codes \cite{GHKPV13}.
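The non-singularity of all minors of a Cauchy matrix can be checked directly in exact arithmetic for small instances; the snippet below (our illustration, with our own choice of the points $x_i, y_j$) does this for a $4 \times 4$ example.

```python
# Exact-arithmetic check that a small Cauchy matrix A[i][j] = 1/(x_i - y_j)
# has all of its square minors non-singular.
from fractions import Fraction
from itertools import combinations

def det(M):
    """Exact determinant by cofactor expansion (fine for tiny matrices)."""
    if len(M) == 1:
        return M[0][0]
    total = Fraction(0)
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

n = 4
x = [Fraction(i) for i in range(1, n + 1)]          # x_i = 1, ..., 4
y = [Fraction(i) for i in range(n + 1, 2 * n + 1)]  # y_j = 5, ..., 8
A = [[Fraction(1) / (x[i] - y[j]) for j in range(n)] for i in range(n)]

for k in range(1, n + 1):
    for S in combinations(range(n), k):
        for T in combinations(range(n), k):
            sub = [[A[i][j] for j in T] for i in S]
            assert det(sub) != 0

print("every square minor of the 4x4 Cauchy matrix is non-singular")
```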
These lower bounds on the size of superconcentrators are tight: for every $d \in \N$, there exists a super-concentrator of depth $d$ and size $O(n \cdot \lambda_d (n))$. It is thus impossible to improve the lower bounds only using this technique.
\subsection{Matrix rigidity}
\label{sec:intro:rigidity}
A demonstration of the surprising power of depth-2 circuits can be seen using the notion of \emph{matrix rigidity}, a pseudorandom property of matrices which we now recall. A matrix $A \in \F^{n \times n}$ is $(r,s)$-rigid if $A$ \emph{cannot} be written as a sum $A=R+S$ where $R$ is a matrix of rank $r$, and $S$ is a matrix with at most $s$ non-zero entries. Valiant \cite{Valiant77} famously proved that if $A$ is computed by a linear circuit with bounded fan-in of depth $O(\log n)$ and size $O(n)$, then $A$ is not $(\varepsilon n, n^{1+\delta})$-rigid for every $\varepsilon,\delta>0$.\footnote{In fact, one can obtain slightly better parameters. See, for example, \cite{Valiant77} or \cite{DGW18}.} It follows that an explicit construction of an $(\varepsilon n, n^{1+\delta})$-rigid matrix, for some $\varepsilon, \delta>0$, would imply a super-linear lower bound for linear circuits of depth $O(\log n)$. Pudl{\'{a}}k \cite{Pudlak94} observed that similar rigidity parameters would imply even stronger lower bounds for constant depth circuits.
A random matrix (over infinite fields) is $(r, (n-r)^2)$-rigid, but the best explicit constructions have rigidity $(r,n^2/r \cdot \log (n/r))$ \cite{F93, SSS97}, which is insufficient for proving lower bounds.
Observe that a decomposition $A=R+S$ where $\rank(R) = \varepsilon n$ and $S$ is $n^{1+\delta}$-sparse corresponds to a depth-$2$ circuit with a very special structure and with at most $2\varepsilon n^2 + n^{1+\delta}$ edges (this circuit is not layered, but as we explained above, this does not make a significant difference). In particular, one way of interpreting Valiant's result is as a non-trivial depth reduction from depth $O(\log n)$ to depth 2, so that proving \emph{any} depth-2 $\Omega(n^2)$ lower bound for an explicit matrix, will imply a lower bound for depth $O(\log n)$.\footnote{We note that this statement makes sense only over large fields, as over fixed finite fields, it is always possible to prove an \emph{upper bound} of $O(n^2 / \log n)$ on the depth-2 complexity of any matrix \cite{JS13}. This does not contradict the fact that rigid matrices exist over finite fields --- a decomposition to $R+S$ is a very special type of depth-$2$ circuit.} This can be seen as the linear circuit analog of similar strong depth reduction theorems for general algebraic circuits \cite{AV08, K12b, T15, GKKS16}.
However, we would like to argue that proving lower bounds for depth-2 circuits is in fact \emph{necessary} for proving rigidity lower bounds, by observing that \emph{upper bounds} on the depth-2 complexity of $A$ give upper bounds on its rigidity parameters. Indeed, suppose $A=BC$ can be computed by a depth-2 circuit of size $n^{1+\varepsilon}$. Let $m$ be as before the number of columns of $B$ (which equals the number of rows of $C$), and note that we may assume $m \le n^{1+\varepsilon}$, as zero columns of $B$ or zero rows of $C$ can be omitted. For $i \in [m]$, let $B_i$ denote the $i$-th column of $B$, and $C_i$ the $i$-th row of $C$, so that $A = \sum_{i=1}^{m} B_i C_i$. Fix a constant $\delta > 0$, and say $i \in [m]$ is \emph{dense} if either $B_i$ or $C_i$ has more than $n^{\varepsilon}/\delta$ non-zero entries; otherwise, $i$ is \emph{sparse}. Since $B$ can have at most $\delta n$ columns with sparsity of more than $n^{\varepsilon}/\delta$, and similarly for the rows of $C$, the number of dense $i$-s is at most $2 \delta n$. It follows that
\[
A = \sum_{i\text{ dense}} B_i C_i + \sum_{i\text{ sparse}} B_i C_i.
\]
The first sum is a matrix of rank at most $2 \delta n$, and the second is a matrix whose sparsity is at most $m \cdot n^{2\varepsilon}/\delta^2 = n^{1+3\varepsilon}/\delta^2$. Thus, proving rigidity lower bounds of the type required to carry out Valiant's approach necessarily means proving lower bounds of the form ``$n^{1+\varepsilon}$'' on the depth-2 complexity of $A$ (we remark that the argument above is very similar to the aforementioned result of Pudl{\'{a}}k \cite{Pudlak94}; Pudl{\'{a}}k's argument is stated in a slightly different language and in greater generality). Since proving rigidity lower bounds is a long-standing open problem, we view the problem of proving an $\Omega(n^{1+\varepsilon})$ lower bound for depth-2 circuits as an important milestone towards this.
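The dense/sparse splitting above can be replayed numerically. In the sketch below (our illustration), the threshold `t` stands in for $n^{\varepsilon}/\delta$, and the random matrices and helper are our own choices.

```python
# Sketch of the argument: given A = BC, split the rank-one terms B_i C_i
# into "dense" and "sparse" ones, yielding A = R + S with R of low rank
# and S sparse.
import numpy as np

rng = np.random.default_rng(0)
n, m, t = 30, 40, 5  # t plays the role of the threshold n^eps / delta

def random_sparse(rows, cols, nnz_per_col):
    """A rows x cols matrix with nnz_per_col nonzero entries per column."""
    M = np.zeros((rows, cols))
    for j in range(cols):
        idx = rng.choice(rows, size=nnz_per_col, replace=False)
        M[idx, j] = rng.integers(1, 5, size=nnz_per_col)
    return M

B = random_sparse(n, m, 3)
B[:, :4] = rng.integers(1, 5, size=(n, 4))  # force a few dense columns
C = random_sparse(n, m, 3).T                # m x n, 3 nonzeros per row

dense = [i for i in range(m)
         if np.count_nonzero(B[:, i]) > t or np.count_nonzero(C[i, :]) > t]
sparse = [i for i in range(m) if i not in dense]

R = sum(np.outer(B[:, i], C[i, :]) for i in dense)   # low-rank part
S = sum(np.outer(B[:, i], C[i, :]) for i in sparse)  # sparse part

assert np.allclose(B @ C, R + S)
assert np.linalg.matrix_rank(R) <= len(dense)
assert np.count_nonzero(S) <= len(sparse) * t * t
print(len(dense), "dense terms; S has", np.count_nonzero(S), "nonzeros")
```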
\subsection{Data structure lower bounds}
\label{sec:intro:ds}
The problem of matrix factorization into sparse matrices also appears in the context of proving lower bounds for data structures. A dynamic data structure with $n$ inputs and $q$ queries is a pair of algorithms whose purpose is to update and retrieve certain data under a sequence of operations, while minimizing the memory access. In the group model, both algorithms are linear. The update algorithm is represented by a matrix $U \in \F^{s \times n}$. Given $x \in \F^{n}$, thought of as an assignment of weights to the $n$ inputs, $Ux$ computes linear combinations of those weights and stores them in memory. The query algorithm is given by a matrix $Q \in \F^{q \times s}$. Given a query, it computes a linear function of the $s$ memory cells, and returns the answer. Hence, an ``update'' operation followed by a ``retrieve'' operation computes the linear transformation given by $A=QU$.
The worst case update time of the database is the maximal number of non-zero elements in a column of $U$, and the worst case query time is the maximal number of non-zero elements in a row of $Q$. The value $s$ denotes the space required by the data structure. It now directly follows that a matrix $A \in \F^{q \times n}$ which cannot be factored as $A =QU$ for a row-sparse $Q$ and column-sparse $U$ gives a data structure problem with a lower bound on its worst case query or update time. It is also possible to define an analogous average case notion. Lower bounds for this model were proved by \cite{Fredman82, FredmanSaks89, PD06, Patrascu07, Larsen12, Larsen14, LWY18}, but none of these results beats the lower bounds for depth-2 circuits obtained using superconcentrators.
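The bookkeeping in the group model is simple enough to state as code; the following example (ours, with made-up matrices) computes the worst-case update and query times from a factorization $A = QU$.

```python
# Worst-case update time = max nonzeros in a column of U;
# worst-case query time = max nonzeros in a row of Q.
import numpy as np

U = np.array([[1, 0, 0, 2],
              [0, 1, 1, 0],
              [3, 0, 1, 0]])   # s x n: memory cells as combinations of inputs
Q = np.array([[1, 1, 0],
              [0, 0, 2]])      # q x s: answers as combinations of cells
A = Q @ U                      # the q x n transformation being answered

update_time = max(np.count_nonzero(U[:, j]) for j in range(U.shape[1]))
query_time = max(np.count_nonzero(Q[i, :]) for i in range(Q.shape[0]))
assert update_time == 2 and query_time == 2
print("A =", A.tolist())
```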
A related model is that of static data structures, which are again given by a factorization $A=QP$, where now we are interested in trade-offs between the space $s$ of the data structure and its worst case query time, while not being charged for the total sparsity of $P$. A recent work of Dvir, Golovnev and Weinstein \cite{DGW18} showed that proving lower bounds for this model is related to the problem of matrix rigidity from \autoref{sec:intro:rigidity}.
Despite the overall similarity, there are several key technical differences between the linear circuit complexity and the data structure problems. The first and obvious issue is that worst-case lower bounds on the update or query time do not necessarily imply that $Q$ or $U$ are dense matrices: the total sparsity of $Q$ and $U$ is related to the average-case update and query time. The second, more severe issue, is that in many applications the number of queries $q$ is polynomially larger than $n$, while the lower bounds on running time are still measured as functions of the number of inputs $n$. This makes sense in the data structure settings, but from a circuit complexity point of view, a set of say $n^3$ linear functions trivially requires a circuit of size $n^3$, and thus a lower bound of say $n \polylog(n)$ is meaningless in that setting.
This issue also comes up when studying the so-called \emph{succinct space} setting, where we require $s=n(1+o(1))$. The lower bounds we are aware of for this setting are worst case lower bounds, and require the number of outputs $q$ to be at least $Cn$ for some $C>1$ \cite{GM07,DGW18}, so that in the corresponding circuit the number of vertices in the middle layer is required to be much smaller than the number of outputs, which may be considered quite unnatural. In particular, we are unaware of any improved lower bounds on the sparsity of matrix factorization for $A \in \F^{n \times n}$ when $s=n(1+o(1))$ or even $s=n$ which come from the data structure lower bounds literature.
\subsection{Machine learning}
\label{sec:intro:ml}
We briefly remark that the problem of factorizing a matrix into a product of two or more sparse matrices is also ubiquitous in machine learning and related areas. Naturally, research in those areas did not focus on lower bounds but rather on algorithms for finding such a representation, assuming it exists, sometimes heuristically, and it is usually enough to approximate the target matrix $A$. In particular, algorithms have been proposed for the very related problems of non-negative matrix factorization \cite{LS00}\footnote{It is interesting to observe that for the problem of factorizing matrices into non-negative matrices it is quite easy to prove almost-optimal lower bounds even for unbounded depth linear circuits, as mentioned in \autoref{sec:intro:prev}.} or sparse dictionary learning \cite{MBPS09}, and there are also connections to the analysis of deep neural networks \cite{NP13}.
\subsection{Previous work}
\label{sec:intro:prev}
As mentioned in \autoref{sec:intro:lin-circuits}, there are no non-trivial known lower bounds for general linear circuits, and for bounded depth circuits, the best lower bounds follow from the lower bounds on bounded depth super-concentrators, which are barely super-linear.
Shoup and Smolensky \cite{SS96} give a lower bound of $\Omega(dn^{1+1/d})$ for depth-$d$ circuits computing a certain linear transformation given by a matrix $A \in \R^{n \times n}$. Unfortunately, the matrices for which their lower bound holds are not explicit from the complexity theoretic point of view, despite having a very succinct mathematical description (for example, one can take $A_{i,j} = \sqrt{p_{i,j}}$ for $n^2$ distinct prime numbers $p_{i,j}$). For the same matrix, they in fact prove super-linear lower bounds for circuits of depth up to $\polylog(n)$.
Quite informally, the intuition behind their lower bounds is that all small bounded depth linear circuits can be described as lying in the image of a low-degree polynomial map in a small number of variables, and thus, if the elements of $A$ are sufficiently ``algebraically rich'', for a certain specific measure, $A$ cannot be computed by such a circuit. This same philosophy lies behind Raz's elusive function approach for proving lower bounds for algebraic circuits \cite{Raz10a}. In particular, among other results, Raz uses an argument which can be seen as a modification of the technique of Shoup and Smolensky (as worked out in \cite{SY10}) to prove lower bounds for bounded depth algebraic circuits computing bounded degree polynomials.
One class of linear circuits which has attracted significant attention is the class of circuits with bounded coefficients. Here, the circuit is only allowed to multiply by scalars with absolute value of at most some constant. For definiteness, we may assume this constant is 1 (this does not affect the complexity by more than a constant factor). The earliest result for this model is Morgenstern's ingenious proof \cite{Morgenstern73} of an $\Omega(n \log n)$ lower bound on bounded coefficient circuits computing the discrete Fourier transform matrix (this lower bound is matched by the upper bound given by the Cooley-Tukey FFT algorithm, which is a bounded coefficient linear circuit). For depth-$d$ circuits, Pudl{\'{a}}k \cite{Pudlak00} has proved lower bounds of the form $\Omega(d n^{1+1/d})$ for the same matrix.
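The matching upper bound can be made concrete: the snippet below (our illustration, not taken from the cited works) runs the textbook radix-2 Cooley--Tukey recursion, in which every scalar coefficient has absolute value 1, and counts its operations, confirming the $O(n \log n)$ size.

```python
# Radix-2 Cooley-Tukey FFT as a bounded-coefficient linear circuit:
# 3 * (n/2) * log2(n) operations, all scalar coefficients of modulus 1.
import cmath
import random

ops = 0  # scalar multiplications and additions performed

def fft(a):
    global ops
    n = len(a)
    if n == 1:
        return a
    even, odd = fft(a[0::2]), fft(a[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # |coefficient| = 1
        out[k] = even[k] + t
        out[n // 2 + k] = even[k] - t
        ops += 3  # one bounded-coefficient multiplication, two additions
    return out

n = 256
random.seed(0)
a = [complex(random.random()) for _ in range(n)]
y = fft(a)

# spot-check against the definition of the DFT
for k in (0, 1, 17):
    direct = sum(a[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
    assert abs(y[k] - direct) < 1e-6

assert ops == 3 * (n // 2) * 8  # log2(256) = 8
print("ops =", ops)
```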
Another natural subclass which was considered in earlier works is the class of monotone linear circuits. These are circuits which are defined over $\R$, and can only use non-negative scalars. Chazelle \cite{Chazelle2001} observed that it is possible to prove lower bounds in this model, even against unbounded-depth circuits, for any boolean matrix with no large monochromatic rectangle. Instantiated with the recent explicit constructions of bipartite Ramsey graphs \cite{CZ16, BDT17, Cohen17, Li18}, this gives an almost optimal $n^{2-o(1)}$ lower bound against such circuits. The main observation in the proof is that if $A$ does not have monochromatic $t \times t$ rectangle, then since the model is monotone and no cancellations are allowed, every internal node which computes a linear function supported on at least $t$ variables cannot be connected to more than $t$ output gates.
For a more detailed survey on these results and some other related results, see the survey by Lokam \cite{Lokam09}.
\subsection{Our results}
\label{sec:intro:results}
In this paper, we prove several results regarding bounded depth linear circuits which we now discuss.
\paragraph{Lower bounds for depth-$d$ linear circuits. }We start by considering general depth-$d$ circuits. We construct, in subexponential time, matrices which require depth-$d$ circuits of size $n^{1+\Omega(1/d)}$.
\begin{theorem}
\label{thm:intro-depth-d}
Let $\F$ be a field. There exists a family of matrices $\set{A_n}_{n \in \N}$, which can be constructed in time $\exp(n^{1-\Omega(1/d)})$, such that every depth-$d$ linear circuit computing $A_n$, even over the algebraic closure of $\F$, has size at least $n^{1+\Omega(1/d)}$.
If $\F=\Q$, the entries of $A$ are integers of bit complexity $\exp(n^{1-\Omega(1/d)})$. If $\F=\F_q$ is a finite field, the entries of $A$ are elements of an extension $\mathbb{E}$ of $\F$ of degree $\exp(n^{1-\Omega(1/d)})$.
\end{theorem}
This theorem is proved in \autoref{sec:shoup-smol based lb}. We remark again that the best lower bounds against general depth-$d$ linear circuits for matrices that can be constructed in polynomial time are barely super-linear and much weaker than $n^{1+\varepsilon}$. In the recent work of Dvir, Golovnev and Weinstein \cite{DGW18} it was pointed out that currently there are not even known constructions of rigid matrices (with parameters that would imply lower bounds) in classes such as $\mathbf{E}^{\mathbf{NP}}$. By arguing directly about circuit size, and not about rigidity, \autoref{thm:intro-depth-d} gives constructions of matrices in a much smaller complexity class, which have the same bounded-depth complexity lower bounds as would follow from optimal constructions of rigid matrices using the results of Pudl{\'{a}}k \cite{Pudlak94}.
While the statement in \autoref{thm:intro-depth-d} holds for any $d \ge 2$, for $d=2$ there is a much simpler construction of a hard family of matrices in quasi-polynomial time.
\begin{theorem}\label{thm:intro-depth-2-quasipoly}
Let $\F$ be any field and $c$ be any positive constant. Then, there is a family $\{A_n\}_{n \in \N}$ of $n \times n$ matrices which can be constructed in time $\exp(O(\log^{2c + 1} n))$ such that any depth-$2$ linear circuit computing $A_n$, even over the algebraic closure of $\F$, has size at least $\Omega(n\log^c n)$.
\end{theorem}
For every constant $c \geq 2$, this theorem already improves upon the current best lower bound of $\Omega(n\log^2 n/\log\log n)$ known for this problem (see~\cite{RTS00}). This construction is based on an exponential time construction of a small hard matrix, and then amplifying its hardness using a direct sum construction (note, however, that over infinite fields even the fact that a hard matrix can be constructed in exponential time, while not very hard to prove, is not \emph{completely} obvious).
For completeness, we describe this simple construction in~\autoref{subsec:depth-2-direct-sum}.
\paragraph*{Lower bounds for restricted depth-$2$ linear circuits. }
Given the importance of the model of depth-2 linear circuits, as explained above, and its resistance to strong lower bounds, we then move on to consider several natural subclasses of depth-2 circuits. These classes in particular correspond to almost all common matrix decompositions. We are able to prove asymptotically optimal $\Omega(n^2)$ lower bounds for these restricted models. As mentioned above, such lower bounds for general depth-2 circuits will imply super-linear lower bounds for logarithmic depth linear circuits, thus resolving a major open problem.
\paragraph{Symmetric circuits. }
A symmetric depth-2 circuit (over $\R$) is a circuit of the form $B^T B$ for some $B \in \R^{m \times n}$ (considered as a graph, the subgraph between the middle and the top layer is the ``mirror image'' of the subgraph between the bottom and middle layer). Over $\mathbb{C}$, one should take the conjugate transpose $B^*$ instead of $B^T$.
Symmetric circuits are a natural computational model for computing positive semi-definite (PSD) matrices. Clearly, every symmetric circuit computes a PSD matrix, and every PSD matrix has a (non-unique) symmetric circuit. In particular, a Cholesky decomposition of a PSD matrix corresponds to a computation by a symmetric circuit (of a very special form).
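As a concrete illustration (our own sketch, not part of the formal argument; the convention of counting edges on both layers of the circuit is an assumption for the example), the following pure-Python code computes a Cholesky factor $L$ of a small positive definite matrix, so that taking $B = L^T$ gives a symmetric circuit $B^TB = LL^T = A$.

```python
import math

def cholesky(a):
    """Return lower-triangular L with A = L L^T (A must be positive definite)."""
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(l[i][k] * l[j][k] for k in range(j))
            if i == j:
                l[i][j] = math.sqrt(a[i][i] - s)
            else:
                l[i][j] = (a[i][j] - s) / l[j][j]
    return l

a = [[4.0, 2.0], [2.0, 3.0]]          # a small positive definite matrix
l = cholesky(a)
# with B = L^T, the symmetric circuit B^T B computes L L^T = A;
# here we count its size as the edges on both layers, i.e. twice nnz(L)
size = 2 * sum(1 for row in l for x in row if x != 0.0)
recon = [[sum(l[i][k] * l[j][k] for k in range(len(a))) for j in range(len(a))]
         for i in range(len(a))]
```

Since the Cholesky factor is triangular, the resulting symmetric circuit is of a special form, matching the remark above.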
We prove asymptotically optimal lower bounds for this model.
\begin{theorem}
\label{thm:intro:PSD}
There exists an explicit family of real $n \times n$ PSD matrices $\set{A_n}_{n \in \N}$ such that every symmetric circuit computing $A_n$ (over $\R$ or $\mathbb{C}$) has size $\Omega(n^2)$.
\end{theorem}
We do not know whether every depth-2 linear circuit for a PSD matrix can be converted to a symmetric circuit with a small blow-up in size. One way to phrase this question is given below.
\begin{question}
\label{ques:intro:PSD}
Is there a constant $c<2$, such that every PSD matrix $A \in \R^{n \times n}$ which can be computed by a linear circuit of size $s$, can be computed by a symmetric circuit of size $O(s^c)$?
\end{question}
A positive answer for \autoref{ques:intro:PSD} will imply, using \autoref{thm:intro:PSD}, an $\Omega(n^{1+\varepsilon})$ lower bound for depth-2 linear circuits.
\paragraph{Invertible circuits. }
Invertible circuits are circuits of the form $BC$, where at least one of $B$ and $C$ is invertible (but not necessarily both). We stress that invertible circuits can (and do) compute non-invertible matrices. Note that since $B \in \F^{n \times m}$ and $C \in \F^{m \times n}$, invertibility forces $m=n$ here.
Invertible circuits generalize many of the common matrix decompositions, such as QR decomposition, eigendecomposition, singular value decomposition\footnote{A diagonal matrix can be multiplied with the matrix to its left or to its right, without increasing the sparsity, to obtain an invertible depth-$2$ circuit.} and LUP decomposition (in the case where the matrix $L$ is required to be unit lower triangular).\footnote{The sparsity of $UP$ equals the sparsity of $U$, as $P$ simply permutes the columns of $U$, so every $LUP$ decomposition corresponds to the invertible depth-$2$ circuit given by $L(UP)$.}
We prove optimal lower bounds for invertible circuits.
\begin{theorem}
\label{thm:intro:invertible}
Let $\F$ be a large enough field. There exists an explicit family of $n \times n$ matrices $\set{A_n}_{n \in \N}$ over $\F$ such that every invertible circuit computing $A_n$ has size $\Omega(n^2)$.
\end{theorem}
If $A$ is an invertible matrix, then clearly every depth-$2$ circuit with $m=n$ must be an invertible circuit. However, our technique for proving \autoref{thm:intro:invertible} crucially requires the hard matrix $A$ to be non-invertible.
\subsection{Proof Overview}
\label{sec:techniques}
Our proofs rely on a few different ideas coming from algebraic complexity theory, coding theory, arithmetic combinatorics and the theory of derandomization. We now discuss some of the key aspects.
\paragraph*{Shoup-Smolensky dimension.}
For the proof of \autoref{thm:intro-depth-d}, we rely on the notion of \emph{Shoup-Smolensky} dimension as a measure of complexity of matrices. Shoup-Smolensky dimensions are a family of measures, parametrized by $t \in \N$, of ``algebraic richness'' of the entries of a matrix (see \autoref{def:SS-dim} for details), which is supposed to capture the intuition that matrices with small circuits should depend on a few ``parameters'' and thus should not possess much richness.
Shoup and Smolensky~\cite{SS96} showed that for an appropriate choice of parameters, this measure is non-trivially small for linear transformations with small linear circuits of depth at most $\poly(\log n)$. Informally, as the order $t$ gets larger, this measure becomes useful against stronger models of computation; however, it also becomes harder to construct matrices which have a large complexity with respect to this measure (and hence cannot be computed by a small linear circuit). Shoup and Smolensky do this by constructing hard matrices which do not have small bit complexity (and hence this construction is not complexity theoretically explicit) but do have short and succinct mathematical description.
For our proof, we first observe that for bounded depth circuits it suffices to use much smaller order $t$ than what Shoup and Smolensky used. This observation was also made by Raz \cite{Raz10a} in a similar context, but in a different language.
We then use this observation to ``derandomize'', in a certain sense, an exponential time construction of a hard matrix, by giving deterministic constructions of matrices with large Shoup-Smolensky dimension.
A key ingredient of our proof is a connection between the notion of Sidon sets in arithmetic combinatorics and Shoup-Smolensky dimension (see~\autoref{sec:sidon-ss dim} for details). Our construction is in two steps. In the first step we construct matrices with entries in $\F[y]$ which have a large Shoup-Smolensky dimension over $\F$, while the degree of every entry is not too large. In the next step, we go from these univariate matrices to a matrix with entries in an appropriate low degree extension of $\F$ while still maintaining the Shoup-Smolensky dimension over $\F$. Our construction of hard matrices over the field of complex numbers is based on similar ideas but differs in some minor details.
\paragraph*{Lower bounds via Polynomial Identity Testing. } Our proofs for \autoref{thm:intro:PSD} and \autoref{thm:intro:invertible} are based on a derandomization argument. Connections between derandomization and lower bounds are prevalent in algebraic and Boolean complexity, but in our current setting they have not been widely studied before.
We say that a set $\mathcal{H}$ of $n \times n$ matrices is a \emph{hitting set}
for a class $\mathcal{C}$ of matrices if for every non-zero $A \in \mathcal{C}$ there is $H \in \mathcal{H}$ such that $\ip{A,H} := \sum_{i,j} A_{i,j}H_{i,j} \neq 0$.
Every class $\mathcal{C}$ has a hitting set of size $n^2$, namely the indicator matrices of each of the entries. A hitting set is non-trivial if its size is at most $n^2 - 1$. Observe that a non-trivial hitting set for $\mathcal{C}$ gives an efficient algorithm for finding a matrix $M \not\in \mathcal{C}$, by finding a non-zero $A$ such that $\ip{A,H} = 0$ for every $H \in \mathcal{H}$. Such an $A$ exists and can be found in polynomial time because the set $\mathcal{H}$ imposes at most $n^2 - 1$ homogeneous linear constraints on the $n^2$ entries of $A$. This argument is a special case of a more general theorem showing how efficient algorithms for black box polynomial identity testing give lower bounds for algebraic circuits \cite{A05a, HS80}.
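The linear-algebra step in this argument is elementary. The following sketch (our own, with hypothetical helper names) finds a non-zero matrix $A$, flattened to a length-$n^2$ vector, orthogonal to fewer than $n^2$ given matrices, by Gaussian elimination over the rationals.

```python
from fractions import Fraction

def nullspace_vector(rows, dim):
    """Gaussian elimination over Q: return a non-zero vector orthogonal to all
    the given rows (one exists whenever len(rows) < dim)."""
    rows = [[Fraction(x) for x in r] for r in rows]
    pivots = {}                      # pivot column -> row index in reduced form
    r = 0
    for c in range(dim):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        rows[r] = [x / rows[r][c] for x in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[r])]
        pivots[c] = r
        r += 1
    free = next(c for c in range(dim) if c not in pivots)
    v = [Fraction(0)] * dim
    v[free] = Fraction(1)
    for c, i in pivots.items():
        v[c] = -rows[i][free]
    return v

# n = 2: three "hitting" matrices, flattened row-major to vectors of length n^2 = 4;
# since 3 < 4 = n^2, a non-zero A with <A, H> = 0 for every H must exist
H = [[1, 0, 0, 1], [0, 1, 1, 0], [1, 1, 0, 0]]
a = nullspace_vector(H, 4)
```

Any such $A$ certifies that it lies outside the class hit by $\mathcal{H}$.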
In practice, it is often convenient (although by no means necessary) to consider hitting sets that contain only rank 1 matrices $\vecx \vecy^T$, since $\ip{A,\vecx \vecy^T} = \vecx^T A \vecy$, and thus we find ourselves in the more familiar territory of polynomial identity testing, trying to construct a hitting set for the class of polynomials of the form $\vecx^T A \vecy$ for $A \in \mathcal{C}$. This approach was also taken by Forbes and Shpilka \cite{FS12}, who considered this exact problem where $\mathcal{C}$ is the class of low-rank matrices, and remarked that hitting sets for the class of low-rank matrices plus sparse matrices will give an explicit construction of a rigid matrix.
We carry out this idea for two different classes in the proofs of \autoref{thm:intro:PSD} and \autoref{thm:intro:invertible}. However, the following problem remains open.
\begin{openproblem}
\label{openproblem:hit-sparse}
For some $0<\epsilon \le 1$, construct an explicit hitting set of size at most $n^2 - 1$ for the class of $n \times n$ matrices $A$ which can be written as $A=BC$ where $B,C$ have at most $n^{1+\epsilon}$ non-zero entries.
\end{openproblem}
A solution to \autoref{openproblem:hit-sparse} will imply lower bounds of the form $n^{1+\varepsilon}$ for an explicit matrix. If $\epsilon=1$, this will imply lower bounds for logarithmic depth linear circuits.
A useful ingredient in our constructions is the use of maximum distance separable (MDS) codes (for example, Reed-Solomon codes), as their dual subspace is a small dimensional subspace which does not contain sparse non-zero vectors. Over the reals, it is also easy to give such construction based on the well known Descartes' rule of signs which says that a sparse univariate real polynomial cannot have too many real roots. We refer the reader to~\autoref{sec:hitting set const} for details.
\section{Lower bounds for constant depth linear circuits}\label{sec:shoup-smol based lb}
In this section, we prove~\autoref{thm:intro-depth-d}.
We start by describing the notion of Shoup-Smolensky dimension, but first we set up some notation.
\subsection{Notation}
We work with matrices whose entries lie in an appropriate extension of a base finite field $\F_p$. We follow the natural convention that the elements of this extension will be represented as univariate polynomials of appropriate degree over the base field, and the arithmetic is done modulo an explicitly given irreducible polynomial.
We use boldface letters ($\vecx,\vecy$) to denote vectors. The length of the vectors is understood from the context.
For a matrix $M$, $\spars{M}$ denotes the number of non-zero entries in $M$.
\subsection{Shoup-Smolensky Dimension}
\label{sec:ssdim}
A useful concept will be the notion of Shoup-Smolensky dimension of subsets of elements of an extension $\mathbb{E}$ of a field $\F$.
\begin{definition}[Shoup-Smolensky dimension]
\label{def:SS-dim}
Let $\F$ be a field, and $\mathbb{E}$ be an extension field of $\F$.
Let $M \in \mathbb{E}^{n\times n}$ be a matrix. For $t \in \N$, denote by $\Pi_t(M)$ the set of $t$-wise products of distinct entries of $M$, that is,
\[
\Pi_t(M) = \set{\prod_{(a,b) \in T} M_{a,b} : T \in \binom{[n] \times [n]}{t}}.
\]
The \emph{Shoup-Smolensky dimension} of $M$ of order $t$, denoted by $\Gamma_{t, \F}(M)$ is defined to be the dimension, over $\F$, of the vector space spanned by $\Pi_t(M)$.
We also denote by $\Sigma_{t}(M)$ the number of distinct elements of $\mathbb{E}$ that can be obtained by \emph{summing} distinct elements of $\Pi_t(M)$.
\end{definition}
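For intuition, here is a small pure-Python sketch (our own; we read ``summing distinct elements'' as sums over non-empty subsets of $\Pi_t(M)$) of $\Pi_t$ and $\Sigma_t$ for an integer matrix whose entries are powers of two with pairwise-distinct exponent sums.

```python
from itertools import combinations

def pi_t(m, t):
    """Pi_t(M): products over t-element sets of positions of M."""
    entries = [x for row in m for x in row]
    prods = set()
    for idx in combinations(range(len(entries)), t):
        p = 1
        for i in idx:
            p *= entries[i]
        prods.add(p)
    return prods

def sigma_t(m, t):
    """Sigma_t(M): number of distinct non-empty subset sums of Pi_t(M)."""
    sums = {0}
    for p in pi_t(m, t):
        sums |= {s + p for s in sums}
    sums.discard(0)
    return len(sums)

# entries 2^1, 2^2, 2^4, 2^8: exponent sums of distinct pairs are all distinct,
# so Pi_2 consists of 6 distinct powers of two and all 2^6 - 1 subset sums differ
m = [[2, 4], [16, 256]]
```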
\subsection{Upper bounding the Shoup-Smolensky dimension for Sparse Products}
\label{sec:SS-dim}
The following lemma shows that any matrix computable by a depth-$d$ linear circuit of size at most $s$ has a somewhat small Shoup-Smolensky dimension.
\begin{lemma}\label{lem:ss-easy-ub-large-depth}
Let $\F$ be a field, $\mathbb{E}$ an extension of $\F$ and $A \in \mathbb{E}^{n \times n}$ be a matrix such that $A = \prod_{i = 1}^d P_i$ for $P_i \in \mathbb{E}^{n_i \times m_i}$, where $\sum_{i = 1}^d\spars{P_i} \leq s$. Then, for every $t\le n^2/4$ such that $s \ge dt$ it holds that
\[
\Gamma_{t, \F}(A) \le \inparen{e^d (2s/dt)^d}^t.
\]
\end{lemma}
\begin{proof}
Since
$$A_{i,j} = \left(\prod_{\ell = 1}^d P_{\ell}\right)_{i,j} = \sum_{k_1, \ldots, k_{d-1}} (P_1)_{i, k_1}\cdot \left(\prod_{\ell = 2}^{d-1} (P_{\ell})_{k_{\ell-1}, k_{\ell}} \right) \cdot (P_{d})_{k_{d-1}, j}\, , $$
every element in $\Pi_{t} (A)$ is a sum of monomials of degree $dt$ in the entries of $P_1, P_2, \ldots, P_d$, that is,
\[
\Gamma_{t, \F}\left(\prod_{i = 1}^d P_i\right) \leq \binom{s + dt}{dt},
\]
with the right hand side being the number of monomials of degree at most $dt$ in $s$ variables.
Using the inequality $\binom{n}{k} \le (en/k)^k$,
\[
\Gamma_{t, \F}(A) \leq (e(1 + s/dt))^{dt} \le \inparen{e^d (2s/dt)^d}^t. \qedhere
\]
\end{proof}
Over $\Q$, we do not wish to use field extensions (which would give rise to elements with infinite bit complexity). Thus, we use a similar argument that replaces the measure $\Gamma_{t,\F}$ with $\Sigma_{t}$ (recall \autoref{def:SS-dim}) for a small tolerable penalty.
\begin{lemma}
\label{lem:ss-up-sigma}
Let $d$ be a positive integer. Let $A \in \Q^{n \times n}$ be a matrix such that $A = \prod_{i = 1}^d P_i$ for $P_i \in \Q^{n_i \times m_i}$, where $\sum_{i = 1}^d\spars{P_i} \leq s $. Assume that for each $i$, $n_i \leq n^2$ and $m_i \leq n^2$. Then, for every $t\le n^2/4$ such that $s \ge dt$ it holds that
\[
\Sigma_{t}(A) \le 2^{2n^3\cdot \inparen{e^d (2s/dt)^d}^t}.
\]
\end{lemma}
\begin{proof}
We follow the same steps as in the proof of~\autoref{lem:ss-easy-ub-large-depth}, replacing the measure $\Gamma_{t,\F}(A)$ by $\Sigma_t(A)$. As before,
\[
A_{i,j} = \left(\prod_{\ell = 1}^d P_{\ell}\right)_{i,j} = \sum_{k_1, \ldots, k_{d-1}} (P_1)_{i, k_1}\cdot \left(\prod_{\ell = 2}^{d-1} (P_{\ell})_{k_{\ell-1}, k_{\ell}} \right) \cdot (P_{d})_{k_{d-1}, j}\, .
\]
Every element in $\Pi_{t} (A)$ can be written as
\begin{equation}
\label{eq:monomials-in-product}
\sum_{\alpha \in \mathcal{M}} c_\alpha \cdot \alpha
\end{equation}
where $\mathcal{M}$ is the set of monomials of degree $dt$ in the entries of $P_1, P_2, \ldots, P_d$, and each $c_\alpha$ is a non-negative integer of absolute value at most $s^{dt} \le 2^{n^3}$ (since $s \leq n^2d$ and $d$ is $O(1)$). It now follows that each sum of distinct elements of $\Pi_t(A)$ also has the form \eqref{eq:monomials-in-product}, with $c_\alpha \le |\Pi_t(A)| \cdot 2^{n^3} \le 2^{2n^3}$. We conclude that
\[
\Sigma_t(A) \le (2^{2n^3})^{\binom{s+dt}{dt}},
\]
which implies the statement of the lemma using the same bounds on binomial coefficients as in \autoref{lem:ss-easy-ub-large-depth}.
\end{proof}
We now move on to describe constructions of matrices which have large Shoup-Smolensky dimension, and then deduce lower bounds for them.
\subsection{Sidon sets and hard univariate matrices}\label{sec:sidon-ss dim}
In this section, we describe a construction of a matrix $G \in \F[y]^{n \times n}$ which has a large value of $\Gamma_{t, \F}$. Let us denote $G_{i,j} = y^{e_{i,j}}$ for some non-negative integer $e_{i,j}$. For $G$ to have a large Shoup-Smolensky dimension of order $t$, the set $S = \set{e_{1,1}, e_{1,2}, \ldots, e_{n,n}} \subseteq \N$ should have the property that $tS := \set{a_1 + a_2 + \ldots + a_t : a_i \in S \text{ distinct}}$ has size comparable to $\binom{|S|}{t}$. A set $S$ such that every subset of size $t$ of $S$ has a distinct sum is called a \emph{$t$-wise Sidon set}. These are very well studied objects in arithmetic combinatorics, and explicit constructions are known for them in $\poly(n)$ time (e.g., Lemma 60 in~\cite{Bshouty}). However, another important parameter in the construction is the degree of $y$, and such a set will inevitably contain integers of size roughly $n^{\Omega(t)}$. Thus, the construction of $G$ would take time which is not polynomially bounded in $n$. Below we give an elementary construction of such a set in time $n^{O(t)}$ (cf.\ \cite{AGKS15}).
\begin{lemma}
\label{lem:easy-sidon}
Let $t$ be a positive integer. There is a set $S = \set{e_{i,j} : i,j \in [n]} \subseteq \N$ of size $n^2$ such that:
\begin{enumerate}
\item $tS := \set{a_1 + a_2 + \ldots + a_t : a_i \in S \text{ distinct}}$ has size $\binom{n^2}{t}$.
\item $\max_{i,j \in [n]} \{ e_{i,j} \} \le n^{O(t)}$.
\item $S$ can be constructed in time $n^{O(t)}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $S' = \set{1,2,2^2,\ldots,2^{n^2-1}}$. Clearly, every subset of $S'$ has a distinct sum. For a prime $p$ we denote $S_p = S' \bmod p = \set{a \bmod p : a \in S'}$, and we claim that there exists a prime $p \le n^{O(t)}$ such that $|tS_p| = \binom{n^2}{t}$. Since this condition can be checked in time $n^{O(t)}$, this would immediately imply the statement of the lemma, by checking this condition for every $p \le n^{O(t)}$ and letting $S=S_p$ for a $p$ which satisfies this condition.
For every subset $T \subseteq S'$ of size $t$, let $\sigma_T$ denote the sum of its elements, and observe that $\sigma_T \le 2^{n^2}$. Clearly, $\sigma_T \bmod p = \sigma_{T'} \bmod p$ if and only if $p \mid \sigma_T - \sigma_{T'}$, so it is enough to show that there exists $p \le n^{O(t)}$ which does not divide
\[
N := \prod_{\substack{T \neq T' \subseteq S' \\ |T|=|T'|=t}} (\sigma_T - \sigma_{T'}),
\]
and therefore does not divide any of the factors on the right hand side.
It further holds that $0 \neq N \le {(2^{n^2})}^{n^{O(t)}} = 2^{n^{O(t)}}$, so the existence of $p$ now follows from the fact that $N$ can have at most $\log N = n^{O(t)}$ distinct prime divisors, and from the prime number theorem.
\end{proof}
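The search in the proof above is easily carried out in code. The following sketch (our own; helper names are illustrative, and the instance is tiny so that the brute force terminates quickly) finds, for a small $n$ and $t$, a prime $p$ such that $S' \bmod p$ is a $t$-wise Sidon set.

```python
from itertools import combinations
from math import comb

def is_prime(p):
    return p >= 2 and all(p % q for q in range(2, int(p ** 0.5) + 1))

def sidon_mod_p(n, t):
    """Find a prime p such that S' = {1, 2, 4, ..., 2^(n^2 - 1)} reduced mod p
    has all sums over t-element subsets distinct (a t-wise Sidon set)."""
    s_prime = [1 << i for i in range(n * n)]
    target = comb(n * n, t)
    p = 2
    while True:
        if is_prime(p):
            s_p = [a % p for a in s_prime]
            if len({sum(c) for c in combinations(s_p, t)}) == target:
                return p, s_p
        p += 1

p, s = sidon_mod_p(2, 2)   # tiny instance: n = 2, t = 2
```

By the argument in the proof, some $p \le n^{O(t)}$ always passes the check.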
Given the above construction of $t$-wise Sidon sets, we now describe the construction of matrices with univariate polynomial entries which has large Shoup-Smolensky dimension.
\begin{construction}
\label{con:univariate hard matrix}
Let $S = \set{e_{i, j} : i, j \in [n]}$ be a $t$-wise Sidon set of positive integers, as in \autoref{lem:easy-sidon}. Then, the matrix $G_{t,n} \in \F[y]^{n \times n}$ is defined by $(G_{t,n})_{i, j} = y^{e_{i, j}}$.
\end{construction}
The useful properties of \autoref{con:univariate hard matrix} are given by the following lemma.
\begin{lemma}
\label{lem:univariate hard properties}
Let $t \le n$ be a parameter, $S \subseteq \N$ be a $t$-wise Sidon set of size $n^2$ and let $G_{t,n}$ be the matrix defined in~\autoref{con:univariate hard matrix}. Then, the following are true.
\begin{enumerate}
\item Every entry of $G_{t,n}$ is a monomial of degree at most $n^{O(t)}$.
\item $\Gamma_{t,\F}(G_{t,n}) \geq \binom{n^2}{t} \ge \left(\frac{n^2}{t}\right)^t$.
\end{enumerate}
\end{lemma}
\begin{proof}
The first item follows from the definition of $G_{t,n}$ and the properties of the set $S$ in~\autoref{lem:easy-sidon}. The second item also follows from the properties of $S$ and the definition of Shoup-Smolensky dimension, since every $t$-wise product of elements of $G_{t,n}$ gives a distinct monomial in $y$, and thus they are all linearly independent over the base field $\F$.
\end{proof}
\subsection{Hard matrices over finite fields}
From the univariate matrix in \autoref{con:univariate hard matrix}, we now construct, for every $p$ and parameter $t$, a matrix $M$ over an extension of $\F_p$ which has large Shoup-Smolensky dimension over $\overline{\F}_p$ with the same parameters as $G_{t,n}$.
\begin{lemma}
\label{lem:hard-over-finite}
Let $p$ be a prime, and $t $ be any positive integer. There is a matrix $M_{t,n} \in \mathbb{E}^{n\times n}$ over an extension $\mathbb{E}$ of $\F_p$ of degree $\exp\inparen{{O(t\log n)}}$, which can be deterministically constructed in time $n^{O(t)}$, and satisfies
\[
\Gamma_{t, \F_p}(M_{t,n}) \geq \left(\frac{n^2}{t}\right)^t.
\]
\end{lemma}
\begin{proof}
Let $G_{t,n}$ be as in \autoref{con:univariate hard matrix}, and let $\Delta$ be the maximum degree of any entry of $G_{t,n}$. Set $D = 10\cdot t\cdot \Delta = \exp\left(O(t\log n)\right)$. We use Shoup's algorithm (see Theorem 3.2 in~\cite{Shoup90}) to construct an irreducible polynomial $g(z)$ of degree $D+1$ over $\F_p$ in deterministic $\poly(D, p)$ time. Let $\alpha$ be a root of $g(z)$ in the extension $\mathbb{E} = \F_p[z]/\langle g(z) \rangle$ of $\F_p$.\footnote{We identify the elements of $\mathbb{E}$ with coefficient vectors of polynomials of degree at most $D$ in $\F_p[z]$, and in this representation $\alpha$ is identified with the polynomial $z$.} Then, it follows that $1, \alpha, \alpha^2, \ldots, \alpha^{D}$ are linearly independent over $\F_p$.
The matrix $M_{t,n}$ is obtained from $G_{t,n}$ by just replacing every occurrence of the variable $y$ by $\alpha$. We now need to argue that $M_{t,n}$ continues to satisfy $\Gamma_{t, \F_p}(M_{t,n}) \geq \left(\frac{n^2}{t}\right)^t$. By the choice of $\alpha$, it immediately follows that $\Gamma_{t, \F_p}(M_{t,n}) = \Gamma_{t, \F_p}(G_{t,n})$, since every monomial in the set $\Pi_t(M_{t,n})$ is mapped to a distinct power of $\alpha$ with exponent in $\{0, 1, \ldots, D\}$, and these powers are all linearly independent over $\F_p$.
The upper bound on the running time needed to construct $M_{t,n}$ now follows from the upper bound on the degree of the extension $\mathbb{E}$, and from \autoref{lem:easy-sidon}.
\end{proof}
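As a toy version of the representation used above (elements of $\mathbb{E}$ as coefficient vectors reduced modulo an explicitly given irreducible polynomial; the helper is our own sketch, not the construction in the proof), the following code multiplies elements of $\F_2[z]/\langle z^3+z+1 \rangle$ and checks the identity $\alpha^3 = \alpha + 1$.

```python
def polymul_mod(a, b, g, p):
    """Multiply a and b in F_p[z]/(g); elements are coefficient lists, low degree
    first, and g is a monic irreducible polynomial given the same way."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    d = len(g) - 1
    for k in range(len(prod) - 1, d - 1, -1):      # reduce modulo g
        c = prod[k]
        if c:
            for j in range(len(g)):
                prod[k - d + j] = (prod[k - d + j] - c * g[j]) % p
    return prod[:d]

# F_2[z]/(z^3 + z + 1): g = 1 + z + z^3 is irreducible, alpha is the class of z
g = [1, 1, 0, 1]
alpha = [0, 1, 0]
alpha_sq = polymul_mod(alpha, alpha, g, 2)
alpha_cube = polymul_mod(alpha_sq, alpha, g, 2)    # alpha^3 = alpha + 1
```

In this representation $1, \alpha, \alpha^2$ are distinct standard basis vectors, hence visibly linearly independent over $\F_2$.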
The following theorem now directly follows.
\begin{theorem}\label{thm:depth-d-finite}
Let $p$ be any prime and $d\geq 2$ be a positive integer. Then, there exists a family of matrices $\{A_n\}_{n \in \N}$ which can be constructed in time $n^{O(n^{1-1/2d})}$ such that every depth-$d$ linear circuit over $\overline{\F}_p$ computing $A_n$ has size at least $\Omega(n^{1 + 1/2d})$. Moreover, the entries of $A_n$ lie in an extension of $\F_p$ of degree at most $\exp(O(n^{1-1/2d}\log n))$.
\end{theorem}
\begin{proof}
We invoke~\autoref{lem:hard-over-finite} with parameter $t$ set to $n^{1-1/2d}$ to get matrices $\{A_n\}$ in time $n^{O(t)}$ with the following lower bound on their Shoup-Smolensky dimension:
\[
\Gamma_{t, \F_p}(A_{n}) \geq \left(\frac{n^2}{t}\right)^t\, .
\]
If there is a depth $d$ linear circuit of size $s$ computing the linear transformation $A_n\cdot \vecx$, the following inequality must hold (from~\autoref{lem:ss-easy-ub-large-depth}),
\begin{equation}\label{eqn:depth-d-lb}
\inparen{e^d (2s/dt)^d}^t \geq \left(\frac{n^2}{t}\right)^t \, .
\end{equation}
If $s \leq n^{1+1/2d}/2$, we have,
\[
\inparen{e^d (2s/dt)^d}^t \leq (O(e/d))^{dt} \cdot n^t\, .
\]
We also have,
\[
\left(\frac{n^2}{t}\right)^t \geq \left(n^{1 + 1/2d} \right)^t \, .
\]
For any constant $d$ and large enough $n$, these estimates contradict~\autoref{eqn:depth-d-lb}, since $(O(e/d))^{dt} \cdot n^t = o\inparen{n^{(1+1/2d)t}}$, thereby implying a lower bound of $\Omega(n^{1+1/2d})$ on $s$.
\end{proof}
\subsection{Hard matrices over $\mathbb{C}$}
We now prove an analog of \autoref{lem:hard-over-finite} over the rationals. We construct a matrix whose entries are positive integers that can be represented by at most $\exp(O(t\log n))$ bits, and give a lower bound for its $\Sigma_t$-measure (rather than $\Gamma_{t,\F}$ as before).
\begin{lemma}
\label{lem:hard-over-C}
Let $t$ be any positive integer. There is a matrix $M_{t,n} \in \Q^{n\times n}$, which can be deterministically constructed in time $n^{O(t)}$, such that every entry of $M_{t,n}$ is an integer of bit complexity at most $\exp(O(t \log n))$, and it holds that
\[
\Sigma_{t}(M_{t,n}) \geq 2^{\left(\frac{n^2}{t}\right)^t}.
\]
\end{lemma}
\begin{proof}
Let $G_{t,n} \in \F[y]^{n \times n}$ be as in \autoref{con:univariate hard matrix}. Define $M_{t,n} \in \Q^{n \times n}$ as
\[
(M_{t,n})_{a,b} = (G_{t,n})_{a,b} (2),
\]
that is, $(M_{t,n})_{a,b}$ is simply the polynomial $(G_{t,n})_{a,b}(y)$ evaluated at $y=2$.
As in the proof of \autoref{lem:univariate hard properties}, each element in $\Pi_t(M_{t,n})$ is a distinct power of $2$, so all $2^{\binom{n^2}{t}}-1$ sums over non-empty subsets of $\Pi_t(M_{t,n})$ are distinct, which implies that $\Sigma_t(M_{t,n}) \geq 2^{\binom{n^2}{t}} - 1 \geq 2^{\left(\frac{n^2}{t}\right)^t}$.
The statement on the running time follows directly from \autoref{lem:univariate hard properties}.
\end{proof}
The analog of \autoref{thm:depth-d-finite} for $\mathbb{C}$ is given below.
\begin{theorem}\label{thm:depth-d-complex}
Let $d \geq 2$ be a positive integer. There exists a family of matrices $\{A_n\}_{n \in \N}$ over $\Q$ which can be constructed in time $n^{O(n^{1-1/2d})}$ such that every depth-$d$ linear circuit over $\mathbb{C}$ computing $A_n$ has size at least $\Omega(n^{1 + 1/2d})$. Moreover, the entries of $A_n$ are positive integers of bit complexity at most $\exp(O(n^{1-1/2d}\log n))$.
\end{theorem}
\begin{proof}
Let $s = n^{1+1/2d}/2$ and $t = n^{1 -1/2d}$, and let $A_n = M_{t,n}$, where $M_{t,n}$ is as in \autoref{lem:hard-over-C}. A depth-$d$ circuit for $A_n$ of size at most $s$ implies a factorization $A_n = \prod_{i =1}^d P_i$, with $P_i \in \mathbb{C}^{n_i \times m_i}$, such that $\sum_{i = 1}^d\spars{P_i} \le s$. Observe that since zero rows and zero columns of the matrices $P_i$ can be omitted without affecting the product, we may assume $n_i, m_i \le n^2$, as otherwise the lower bound trivially holds.
By \autoref{lem:ss-up-sigma} and \autoref{lem:hard-over-C}, this implies that
\[
(n^2/t)^t \leq \log \Sigma_{t} (A_n) \le 2n^3 \cdot \inparen{e^d (2s/dt)^d}^t.
\]
If $s \leq n^{1+1/2d}/2$, we have,
\[
\inparen{e^d (2s/dt)^d}^t \leq (O(e/d))^{dt} \cdot n^t\, .
\]
We also have
\[
\left(\frac{n^2}{t}\right)^t \geq \left(n^{1 + 1/2d} \right)^t \, .
\]
For any constant $d$, these estimates contradict the inequality above, thus implying a lower bound of $\Omega(n^{1+1/2d})$ on $s$.
The statement on the running time for constructing $A_n$ follows again from \autoref{lem:hard-over-C}.
\end{proof}
\subsection{Lower bounds for depth-$2$ linear circuits}
\label{subsec:depth-2-direct-sum}
The lower bounds of \autoref{thm:depth-d-complex} and \autoref{thm:depth-d-finite} apply to any constant depth. However, here we briefly remark that in the special case of $d = 2$ there is in fact a much simpler construction. As discussed in the introduction, for depth-$2$ linear circuits, the best lower bound currently known is $\Omega\left(n\frac{\log^2n}{\log\log n}\right)$, based on the study of super-concentrator graphs in the work of Radhakrishnan and Ta-Shma~\cite{RTS00}. We now discuss two constructions of matrices in quasi-polynomial time which improve upon this bound. More formally, we prove the following theorem.
\begin{theorem}\label{thm:depth-2-quasipoly}
Let $c$ be any positive constant. Then, there is a family $\{A_n\}_{n \in \N}$ of $n \times n$ matrices with entries in $\N$ of bit complexity at most $\exp(O(\log^{2c + 1} n))$ such that $A_n$ can be constructed in time $\exp(O(\log^{2c + 1} n))$ and any depth-$2$ linear circuit over $\mathbb{C}$ computing $A_n$ has size at least $\Omega(n\log^c n)$.
\end{theorem}
The first construction directly follows from~\autoref{lem:hard-over-C} when invoked with $t = 10\cdot \log^{2c} n$. Once we have the matrices guaranteed by~\autoref{lem:hard-over-C}, we just follow the proof of~\autoref{thm:depth-d-complex} as is by taking $d = 2$ and $t = 10\log^{2c} n$. We skip the technical details and now discuss the second construction, which is based on the following observation.
\begin{observation}\label{obs:trivial hard matrix}
Let $\{A_n\}_{n \in \N}$ be a family of matrices where $(A_n)_{i, j} = 2^{2^{(n+1)(i-1) + j}}$. Then, any depth-$2$ linear circuit computing $A_n$ has size $\Omega(n^2)$.
\end{observation}
\begin{proof}
The key to the proof is to observe that for $t = n^2/4$, $\Sigma_t(A_n) \geq 2^{\binom{n^2}{n^2/4}} \geq 2^{2^{n^2/2}}$. This follows from the fact that each $t$-wise product of the entries of $A_n$ is a power of $2$ whose exponent is a sum of distinct powers of $2$, and for any two distinct degree-$t$ multilinear monomials in the entries of $A_n$, the sets of powers of $2$ appearing in the exponents are distinct. On the other hand, from~\autoref{lem:ss-up-sigma}, we know that if $A_n$ can be computed by a depth-$2$ linear circuit of size at most $s$, then
\[
\Sigma_t(A_n) \leq 2^{2n^3\left(e^2(4s/n^2) \right)^{n^2/4}} \, .
\]
Now, for $s \leq n^2/100$, this upper bound is much smaller than the lower bound of $2^{2^{n^2/2}}$. Thus, any depth-$2$ linear circuit for $A_n$ over $\mathbb{C}$ has size at least $n^2/100$.
\end{proof}
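The distinct-products property underlying this proof is easy to check in code on a tiny instance (our own sketch; the observation itself of course concerns all $n$).

```python
from itertools import combinations

def doubly_exp_matrix(n):
    """(A_n)_{i,j} = 2^(2^((n+1)(i-1)+j)) with 1-indexed i, j, as in the observation."""
    return [[2 ** (2 ** ((n + 1) * (i - 1) + j)) for j in range(1, n + 1)]
            for i in range(1, n + 1)]

a = doubly_exp_matrix(2)
entries = [x for row in a for x in row]
# each pairwise product is 2^(2^e + 2^f) for a distinct pair {e, f} of exponents,
# so all products over distinct pairs of positions are distinct
pair_products = [x * y for x, y in combinations(entries, 2)]
```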
If we directly use this observation to construct hard matrices, the bit complexity of the entries of $A_n$ (and hence the time complexity of constructing $A_n$) is as large as $2^{\Theta(n^2)}$. However, it also gives a much stronger (quadratic) lower bound on the depth-$2$ linear circuit size for $A_n$ than what is promised in~\autoref{thm:depth-2-quasipoly}. For our second construction for hard matrices for~\autoref{thm:depth-2-quasipoly}, we invoke~\autoref{obs:trivial hard matrix} to construct \emph{small} hard matrices (thus saving on the running time) and then construct a larger block diagonal matrix by taking a Kronecker product of this small hard matrix with a large identity matrix. The following lemma then guarantees a non-trivial lower bound on the size of any depth-$2$ linear circuit computing this larger block diagonal matrix.
\begin{lemma}\label{lem:block diagonal hard matrix}
Let $A$ be an $k \times k$ matrix, such that any depth-$2$ linear circuit computing $A$ has size at least $s$. Let $B$ be an $mk \times mk$ matrix defined as $B = \mathbf{I}_m \otimes A $, where $\otimes$ denotes the Kronecker product, and $\mathbf{I}_m$ the $m \times m$ identity matrix. Then, any depth-$2$ linear circuit computing $B$ has size at least $m\cdot s$.
\end{lemma}
\begin{proof}
A depth-$2$ linear circuit for $B$ gives a factorization of $B$ as $P\cdot Q$ for an $mk \times r$ matrix $P$ and an $r \times mk$ matrix $Q$ for some parameter $r$. We partition the rows of $P$ into $m$ contiguous blocks of size $k$ each, and let $P_i$ be the $k \times r$ submatrix which consists of the $i^{th}$ block (i.e. rows $(i-1)k + 1, (i-1)k + 2, \ldots, ik$ of $P$). Similarly, we partition the columns of $Q$ into $m$ contiguous blocks of size $k$ each and let $Q_i$ be the $r \times k$ submatrix of $Q$ corresponding to the $i^{th}$ block. From the structure of $B$, it follows that for every $i \in \{1, 2, \ldots, m\}$, $P_i\cdot Q_i = A$. From the lower bound on the size of any depth-$2$ linear circuit for $A$, we get that $\spars{P_i} + \spars{Q_i} \geq s$. Combining this lower bound for $i = 1, 2, \ldots, m$, we get $\spars{P} + \spars{Q} = \sum_{i = 1}^m\left(\spars{P_i} + \spars{Q_i}\right) \geq m\cdot s$.
\end{proof}
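The block structure used in this proof can be sketched as follows (our own illustration; the sparsity count mirrors the $m\cdot s$ scaling in the lemma).

```python
def kron_identity(m, a):
    """Return I_m (Kronecker product) A as a block-diagonal list-of-lists matrix:
    m copies of A along the diagonal, zeros elsewhere."""
    k = len(a)
    b = [[0] * (m * k) for _ in range(m * k)]
    for blk in range(m):
        for r in range(k):
            for c in range(k):
                b[blk * k + r][blk * k + c] = a[r][c]
    return b

a = [[1, 2], [3, 4]]
b = kron_identity(3, a)
# the sparsity of I_m (x) A is exactly m times the sparsity of A
nnz = sum(1 for row in b for x in row if x != 0)
```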
We now note that~\autoref{obs:trivial hard matrix} and~\autoref{lem:block diagonal hard matrix} imply another family of matrices for which~\autoref{thm:depth-2-quasipoly} holds.
\begin{proof}[Second proof of~\autoref{thm:depth-2-quasipoly}]
Pick $k = \Theta(\log^c n)$ such that $k$ divides $n$, and let $M_k$ be the matrix defined as $(M_k)_{i, j} = 2^{2^{(k+1)(i-1) + j}}$. Let $A_n = \mathbf{I}_{n/k} \otimes M_k$. Clearly, $A_n$ can be constructed in time $2^{O(k^2)}$. Moreover, from~\autoref{obs:trivial hard matrix} and~\autoref{lem:block diagonal hard matrix} it follows that any depth-$2$ linear circuit computing $A_n$ has size at least $\Omega(n/k \cdot k^2) = \Omega(n\log^c n)$.
\end{proof}
We note that even though the discussion in this section was confined to depth-$2$ linear circuit lower bounds over $\mathbb{C}$, similar ideas can be extended to other fields as well.
\subsubsection*{Extension of the direct sum based construction to arbitrary constant depth?}
In light of the above construction, it is natural to ask whether this idea also extends to the construction of hard matrices for depth-$d$ circuits for arbitrary constant $d$. While this is a reasonable conjecture, the easy proof of \autoref{lem:block diagonal hard matrix} breaks down even at depth $3$.
There are some variations of this idea, such as looking at $\mathbf{J}_{n/k} \otimes M_k$, where $\mathbf{J}$ is the all-ones matrix, which would work equally well to prove a lower bound for depth-$2$, but for which it is possible to prove an $O(n)$ upper bound in depth-$3$.
Furthermore, it can be seen that upper bounds on matrix multiplication in bounded depth will give small linear circuits for computing $\mathbf{I}_{n/k} \otimes M_k$. Thus, improved lower bounds using this construction, even for depth-$3$, will require proving new lower bounds for matrix multiplication in bounded depth (the current best lower bounds are again barely super-linear \cite{RS03}).
\section{Lower bounds via Hitting Sets}
\label{sec:lb-hitting-sets}
In this section, we prove lower bounds for several classes of depth-$2$ linear circuits using hitting sets for matrices. We first recall the definition.
\begin{definition}[Hitting set for matrices, \cite{FS12}]
Let $\mathcal{C} \subseteq \F^{n \times n}$ be a set of matrices. A set $\mathcal{H} \subseteq \F^n \times \F^n$ is said to be a \emph{hitting set} for $\mathcal{C}$, if for every non-zero $C \in \mathcal{C}$, there is a pair $(\veca, \vecb) \in \mathcal{H}$ such that
\[
\ip{\veca, C\cdot \vecb} = \sum_{i, j \in [n]} C_{i, j} a_i b_j \neq 0 . \qedhere
\]
\end{definition}
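The hitting condition is just a non-vanishing bilinear form $\veca^T C \vecb$. The following toy example (our own, purely illustrative) checks it directly; the trivial hitting set of all standard basis pairs hits every non-zero matrix, since $\vece_i^T C \vece_j = C_{i,j}$.

```python
# Minimal illustration of the hitting-set condition for matrices:
# H hits a non-zero matrix C if a^T C b != 0 for some pair (a, b) in H.

def hits(H, C):
    """True iff some (a, b) in H satisfies sum_{i,j} C[i][j]*a[i]*b[j] != 0."""
    n = len(C)
    return any(sum(C[i][j] * a[i] * b[j] for i in range(n) for j in range(n)) != 0
               for a, b in H)

n = 3
e = lambda i: [int(t == i) for t in range(n)]
H = [(e(i), e(j)) for i in range(n) for j in range(n)]  # all basis pairs
C = [[0, 0, 0], [0, 5, 0], [0, 0, 0]]                   # non-zero at (1, 1)
```

The point of the constructions that follow is to do much better than these $n^2$ basis pairs for structured classes $\mathcal{C}$.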
\subsection{Matrices with no sparse vectors in their kernel}\label{sec:hitting set const}
In this section, we recall some simple, deterministic and efficient constructions of matrices which do not have any sparse non-zero vector in their kernel. Such a construction forms the basic building block for building hard instances of matrices for various cases of the matrix factorization problem that we discuss in the rest of this paper. We start by describing such a construction over the field of real numbers.
\subsubsection{Construction over $\R$}
The following is a weak form of a classical lemma of Descartes.
\begin{lemma}[Descartes' rule of signs]\label{lem:rule of signs}
Let $d_1< d_2 < \cdots < d_k$ be non-negative integers, and let $a_1, a_2, \ldots, a_k$ be arbitrary real numbers. Then, the number of distinct positive roots of the polynomial $\sum_{i = 1}^k a_i x^{d_i}$ is at most $k-1$.
\end{lemma}
\autoref{lem:rule of signs} immediately gives the following construction of a small set of vectors, such that not all of them can lie in the kernel of any matrix with at least one sparse row.
\begin{lemma}\label{lem:hitting set for real sparse}
For $i \in [n]$, let $\vecv_i := \inparen{1, i, i^2, \ldots, i^{n-1}} \in \R^n$. Then, for every $1 \le s \le n$ and for every $m \times n$ matrix $B$ over real numbers that has a non-zero row with at most $s$ non-zero entries, there is an $i \in [s]$ such that $B\cdot \vecv_i \neq \mathbf{0}$.
\end{lemma}
\begin{proof}
Let $(a_0, a_1, \ldots, a_{n-1}) \in \R^n$ be any non-zero vector with at most $s$ non-zero entries. So, the polynomial $P(x) = \sum_{i = 0}^{n-1} a_i x^i$ has sparsity at most $s$. From~\autoref{lem:rule of signs}, it follows that $P$ has at most $s-1$ positive real roots. Therefore, there exists an $i \in [s]$ such that $i$ is \emph{not} a root of $P(x)$, i.e., $P(i) \neq 0$. The lemma now follows immediately by taking $(a_0, a_1, \ldots, a_{n-1})$ to be any non-zero $s$-sparse row of $B$.
\end{proof}
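The core counting step of this proof can be checked numerically. The sketch below (an illustrative check, not the paper's code) exhaustively verifies, for all $2$-sparse sign patterns in degree less than $6$, that some evaluation point in $\{1, 2\}$ is missed by the at most $s-1 = 1$ positive roots guaranteed by Descartes' rule.

```python
# Numeric check of the hitting-set lemma: an s-sparse non-zero real
# polynomial P has at most s-1 positive roots, so one of the points
# 1, 2, ..., s satisfies P(i) != 0.
from itertools import combinations

def eval_poly(coeffs, x):
    """Evaluate sum_j coeffs[j] * x**j."""
    return sum(a * x ** j for j, a in enumerate(coeffs))

def hit(coeffs, s):
    """Return some i in [s] with P(i) != 0, or None if all vanish."""
    return next((i for i in range(1, s + 1) if eval_poly(coeffs, i) != 0), None)

# exhaustively try the 2-sparse polynomials x^a - c*x^b (a < b, c in {1,2,3})
n, s = 6, 2
ok = all(hit([(1 if j == a else (-c if j == b else 0)) for j in range(n)], s)
         for a, b in combinations(range(n), 2) for c in [1, 2, 3])
```

Each such binomial has at most one positive root, so at least one of the two evaluation points always survives, matching the lemma with $s = 2$.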
We remark that~\autoref{lem:hitting set for real sparse} also holds for matrices over $\mathbb{C}$ which have a sparse non-zero row, for the same choice of vectors $\vecv_i$. This follows by applying~\autoref{lem:rule of signs} separately to the real and imaginary parts of a sparse complex polynomial, both of which are individually sparse with real coefficients, and at least one of which is not identically zero. This observation extends our results over $\R$ in~\autoref{sec:sym lb} to the field of complex numbers.
\subsubsection{Construction over finite fields}
We now recall some basic properties of Reed-Solomon codes, and observe that they can be used in lieu of the construction in \autoref{lem:hitting set for real sparse}.
The proofs for these properties can be found in any standard reference on coding theory, e.g., Chapter 5 in~\cite{GRS-book}.
\begin{definition}[Reed Solomon codes]\label{def:RS codes}
Let $\F_q = \{\alpha_0, \alpha_1, \ldots, \alpha_{q-1}\}$ be the finite field with $q$ elements and let $k \in \{0, 1, \ldots, q-1\}$. The Reed-Solomon code of block length $q$ and dimension $k$ is defined as follows.
\[
RS_{q}[q, k] = \{\left(P(\alpha_0), P(\alpha_1), \ldots, P(\alpha_{q-1})\right) : P(z) \in \F_q[z], \deg(P) \leq k-1 \}. \qedhere
\]
\end{definition}
\begin{lemma}\label{lem:RS props}
Let $\F_q$ be the finite field with $q$ elements and let $k \in \{0, 1, \ldots, q-1\}$. The linear space $RS_{q}[q, k]$ as in~\autoref{def:RS codes} satisfies the following properties.
\begin{itemize}
\item Every non-zero vector in $RS_q[q, k]$ has at least $q-k +1$ non-zero coordinates.
\item The dual of $RS_{q}[q, k]$ is the Reed-Solomon code of block length $q$ and dimension $q-k$.
\end{itemize}
\end{lemma}
\begin{lemma}\label{lem:RS dual const}
Let $\F_q = \{\alpha_0, \alpha_2, \ldots, \alpha_{q-1}\}$ be the finite field with $q$ elements. For any $k\leq q-1$, let $G_k$ be the $q \times k$ matrix over $\F_q$ whose $i$-th row is $(1, \alpha_{i-1}, \alpha_{i-1}^2, \ldots, \alpha_{i -1}^{k-1})$. Then, every non-zero vector in $\F_q^q$ in the kernel of $(G_k)^{T}$ has at least $k + 1$ non-zero coordinates.
\end{lemma}
\begin{proof}
Observe that $G_k$ is precisely the generator matrix of the Reed-Solomon code of block length $q$ and dimension $k$ over $\F_q$. In particular, the linear space $RS_q[q, k]$ as in~\autoref{lem:RS props} is spanned by the columns of $G_k$. Thus, any vector $\vecw$ in the kernel of $(G_k)^{T}$ is in fact a codeword of the dual of this code, which, by the second item of~\autoref{lem:RS props}, is itself a Reed-Solomon code of block length $q$ and dimension $q-k$. From the first item of~\autoref{lem:RS props}, it now follows that $\vecw$ has at least $k+1$ non-zero coordinates.
\end{proof}
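For tiny fields this kernel property can be verified by brute force. The sketch below (assumed parameters $q = 5$, $k = 2$; our own illustrative code) enumerates all vectors of $\F_5^5$ and confirms that every non-zero vector in the kernel of $G_k^T$ has weight at least $k + 1 = 3$.

```python
# Brute-force check over F_5 of the Reed-Solomon kernel property:
# every non-zero vector in ker(G_k^T) has at least k+1 non-zero coordinates.
from itertools import product

q, k = 5, 2
alphas = list(range(q))                                   # F_q = {0, ..., q-1}
G = [[pow(a, e, q) for e in range(k)] for a in alphas]    # q x k Vandermonde

def in_kernel_of_Gt(w):
    """True iff G^T w = 0 over F_q, i.e. w is orthogonal to each column of G."""
    return all(sum(G[i][e] * w[i] for i in range(q)) % q == 0 for e in range(k))

min_weight = min(sum(1 for x in w if x) for w in product(range(q), repeat=q)
                 if any(w) and in_kernel_of_Gt(w))
```

Here the kernel of $G_k^T$ is the dual of $RS_5[5, 2]$, namely $RS_5[5, 3]$ with minimum distance $3$, so `min_weight` comes out to exactly $k + 1$.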
The following lemma is an analog of~\autoref{lem:hitting set for real sparse}.
\begin{lemma}\label{lem:hitting set for finite fields sparse}
Let $\F_q = \{\alpha_0, \alpha_1, \ldots, \alpha_{q-1}\}$ be the finite field with $q$ elements, let $s \in [q]$ be a parameter and let $\vecv_i$ be the $i$-th column of the matrix $G_{k}$ as in~\autoref{lem:RS dual const} for $k = s$.
Then, for every $m \times n$ matrix $B$ over $\F_q$ that has a non-zero row with at most $s$ non-zero entries, there is an $i \in [s]$ such that $B\cdot \vecv_i \neq 0$.
\end{lemma}
\begin{proof}
The proof follows from the observation that any non-zero vector orthogonal to all the vectors $\vecv_1, \vecv_2, \ldots, \vecv_s$ must be in the kernel of the matrix $G_{s}^T$, and hence by~\autoref{lem:RS dual const} must have at least $s+1$ non-zero entries.
\end{proof}
\subsection{Lower bounds for symmetric circuits}\label{sec:sym lb}
We now prove our lower bounds for symmetric circuits. Recall that a symmetric circuit is a depth-$2$ linear circuit of the form $B^TB$.
\begin{theorem}\label{thm:sym factor lb}
There is an explicit family of positive semidefinite matrices $\{M_n\}$ such that every symmetric circuit computing $M_n$ has size at least $n^2/4$.
\end{theorem}
For the proof of this theorem, we give an efficient deterministic construction of a hitting set $\mathcal{H}$ for the set of matrices which factor as $B^T\cdot B$ for $B$ of sparsity less than $n^2/4$, and
as outlined in \autoref{sec:techniques}, we construct a hard matrix $M = \tilde{M}^T\cdot \tilde{M}$ which is not hit by such a hitting set and has a high rank.
We start by describing the construction of $M$.
\begin{lemma}
\label{lem:hard-PSD-matrix}
Let $\set{\vecv_i : i \in [n]}$ be the set of vectors defined in \autoref{lem:hitting set for real sparse}.
There exists an explicit PSD matrix $M$ of rank $n/2$ such that $\vecv_i^T M\vecv_i = 0$ for $i \in [n/2]$.
\end{lemma}
\begin{proof}
We wish to find a matrix $\tilde{M}$ of high rank such that $\tilde{M} \vecv_i = 0$ for $i=1,\ldots,n/2$. This can be done by completing $\{\vecv_i : i \in \{1, 2, \ldots, n/2\}\}$ to a basis (in an arbitrary way) and requiring that the other $n/2$ basis elements are mapped to linearly independent vectors under $\tilde{M}$. Conveniently, the set $\set{\vecv_i : i \in [n]}$ is itself a basis for $\R^n$: the matrix $V$ whose rows are the $\vecv_i$'s is a Vandermonde matrix.
We now describe this in some more detail. For $i \in [n]$, let $\vece_i$ be the $i$-th elementary basis vector.
For a set of $n^2$ variables $Y = (y_{i, j})_{n \times n}$, consider the system of (non-homogeneous) linear equations on the variables $Y$ given by the following $n$ constraints.
\begin{align*}
Y\cdot \vecv_i &= 0 \quad\; \text{for } i \in \{1, 2, \ldots, n/2\} \\
Y\cdot \vecv_i &= \vece_i \quad \text{for } i \in \{ n/2 + 1, \ldots, n\} \, .
\end{align*}
Since the vectors $\set{\vecv_i : i \in [n]}$ are linearly independent, this system has a solution, which can be found in polynomial time using basic linear algebra. More explicitly, the $j$-th row of $Y$, $\vecy_j$, is given by the solution to the linear system $V\cdot (\vecy_j)^T = 0$ for $1 \le j \le n/2$ and $V \cdot (\vecy_j)^T = \vece_j$ for $n/2+1 \le j \le n$, where $V$ is the Vandermonde matrix whose rows are the $\vecv_i$'s. Let $\tilde{M}$ be the matrix whose rows are the solutions to the system above. Also, note that the rank of $\tilde{M}$ is at least $n/2$, as the linearly independent vectors $\vece_{n/2+1}, \vece_{n/2+2}, \ldots, \vece_{n}$ are in the image of the linear transformation given by $\tilde{M}$.
Now let $M = (\tilde{M}^T) \cdot \tilde{M}$, so that indeed $M$ is a positive semi-definite matrix, and $\rank M = n/2$ as well. It immediately follows that
\[
\vecv_i^T M \vecv_i = (\vecv_i^T \tilde{M}^T)(\tilde{M}\vecv_i) = 0. \qedhere
\]
\end{proof}
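The construction in this proof is entirely effective, and for a tiny instance it can be carried out exactly. The sketch below (our own illustrative code, $n = 4$, exact rational arithmetic) builds $\tilde{M}$ by solving the Vandermonde systems $V \cdot \vecy_j^T = 0$ and $V \cdot \vecy_j^T = \vece_j$, forms $M = \tilde{M}^T \tilde{M}$, and evaluates the quadratic forms $\vecv_i^T M \vecv_i$.

```python
# From-scratch sketch of the hard PSD matrix for n = 4: rows of Mt solve
# V . y^T = 0 (first n/2 rows) and V . y^T = e_j (last n/2 rows), where V
# is the Vandermonde matrix whose rows are the vectors v_i = (1, i, i^2, i^3).
from fractions import Fraction as F

n = 4
V = [[F(i) ** e for e in range(n)] for i in range(1, n + 1)]   # rows are v_i

def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination over the rationals."""
    size = len(A)
    m = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(size):
        p = next(r for r in range(c, size) if m[r][c] != 0)    # pivot row
        m[c], m[p] = m[p], m[c]
        piv = m[c][c]
        m[c] = [x / piv for x in m[c]]
        for r in range(size):
            if r != c and m[r][c] != 0:
                f = m[r][c]
                m[r] = [x - f * y for x, y in zip(m[r], m[c])]
    return [row[size] for row in m]

zero = [F(0)] * n
e = lambda j: [F(int(i == j)) for i in range(n)]
Mt = [solve(V, zero) if j < n // 2 else solve(V, e(j)) for j in range(n)]
M = [[sum(Mt[r][i] * Mt[r][j] for r in range(n)) for j in range(n)]
     for i in range(n)]                      # M = Mt^T . Mt, a PSD matrix
quad = [sum(V[i][a] * M[a][b] * V[i][b] for a in range(n) for b in range(n))
        for i in range(n)]                   # v_i^T M v_i for each i
```

As the lemma predicts, the quadratic form vanishes on $\vecv_1, \vecv_2$ (the first $n/2$ vectors) and is non-zero on the rest, and $M$ is symmetric by construction.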
We are now ready to prove \autoref{thm:sym factor lb}.
\begin{proof}[Proof of~\autoref{thm:sym factor lb}]
Let $M$ be the matrix from \autoref{lem:hard-PSD-matrix}. Let $B \in \R^{m \times n}$ be a real matrix such that $\spars{B} < n^2/4$, and suppose towards contradiction that $M = B^TB$.
It follows that the rank of $B$ must be at least $n/2$. Thus, $B$ must have at least $n/2$ non-zero rows. Now, since the total sparsity of $B$ is at most $n^2/4-1$, there must be a non-zero row of $B$ with sparsity at most $(n^2/4-1)/(n/2) \leq n/2$. From~\autoref{lem:hitting set for real sparse}, it follows that there is an $i \in [n/2]$ such that $B\cdot \vecv_i$ is non-zero. Thus, for this index $i$, we have that
\[
\vecv_i^T (B^TB)\vecv_i = \left\Vert B\vecv_i \right\Vert_2^2 \neq 0,
\]
contradicting \autoref{lem:hard-PSD-matrix}.
\end{proof}
We remark that the proof of \autoref{thm:sym factor lb} goes through almost verbatim for symmetric circuits over $\mathbb{C}$ (recall that over $\mathbb{C}$ these are circuits of form $B^*B$, where $B^*$ is the conjugate transpose of $B$).
\subsection{Lower bounds for invertible circuits}
Recall that an invertible circuit is a circuit of the form $BC$ where either $B$ or $C$ is invertible. In this section, we prove~\autoref{thm:intro:invertible}, which shows a quadratic lower bound for such circuits. For convenience, we restate the theorem.
\begin{theorem}
\label{thm:invertible-lb}
There exists an explicit family of $n \times n$ matrices $\set{A_n}$ over any field $\F$ with $|\F| \ge \poly(n)$, such that every invertible circuit computing $A_n$ has size at least $n^2/4$.
\end{theorem}
\begin{proof}
We give a proof over the field of real numbers and highlight the ideas necessary to extend the argument to work over large enough finite fields.
Fix $n$, and let $M = \tilde{M}^T \tilde{M}$ be the matrix constructed in \autoref{lem:hard-PSD-matrix}. Let $B$ and $C$ be $n \times n$ matrices over $\R$ such that $M=BC$. Suppose first that $B$ is invertible and $C$ has sparsity less than $n^2/4$.
Since $\rank(M) \ge n/2$, the same applies for $\rank(C)$, and hence the number of non-zero rows in $C$ must be at least $n/2$. Thus, $C$ must have a non-zero row with at most $(n^2/4-1)/(n/2) \leq n/2$ non-zero entries. Along with \autoref{lem:hitting set for real sparse}, this implies that there is an $i \in [n/2]$ such that $C\cdot \vecv_i \neq \mathbf{0}$, where $\vecv_i$ is as in \autoref{lem:hitting set for real sparse}. Since $B$ is invertible, we get that $(B\cdot C \cdot \vecv_i)$ is a non-zero vector, so for some $j \in [n]$,
\[
\vece_j^T (BC)\vecv_i \neq 0.
\]
However, as in the proof of \autoref{lem:hard-PSD-matrix},
\[
\vece_j^T M \vecv_i = \vece_j^T \tilde{M}^T \tilde{M} \vecv_i = 0,
\]
since $\tilde{M} \vecv_i = 0$ for all $i \in [n/2]$.
The case that $B$ is sparse and $C$ is invertible is virtually the same, by considering $\vecv_i^T (BC) \vece_j$, and replacing the argument on the rows of $C$ by a similar one on the columns of $B$.
For the proof over finite fields, we replace every application of \autoref{lem:hitting set for real sparse} by \autoref{lem:hitting set for finite fields sparse}. Note that this requires the $n$-th matrix in the family to be defined over a field of size more than $n$. The rest of the argument essentially remains the same.
\end{proof}
Over fixed finite fields (for example, $\F_2$), it is possible to prove an analog of \autoref{thm:invertible-lb}, with worse constants, by replacing the use of Reed-Solomon codes with any good explicit error-correcting code $C$ of dimension $\alpha n$ and distance $\delta n$ for some fixed constants $\alpha, \delta >0$. The proof proceeds as above by finding a matrix $\tilde{M}$ of rank $\alpha n$ such that $\tilde{M}\vecv = 0$ for every $\vecv \in C^{\perp}$.
\section{Open Problems}
An important problem that continues to remain open is to prove a lower bound of the form $\Omega(n^{1+\varepsilon})$ for some constant $\varepsilon>0$ for the depth-2 complexity of an explicit matrix. Such a lower bound would follow from an explicit hitting set of size at most $n^2 - 1$ for the class of polynomials of the form $\vecx^T BC \vecy$ such that $\spars{B}+\spars{C} \le n^{1+\varepsilon}$.
Another natural question here is to understand whether this PIT based approach can be used for explicit constructions of rigid matrices which improve the state of the art. One concrete question in this direction would be to construct explicit hitting sets for the set of matrices which are \emph{not} $(r, s)$ rigid for $rs = \omega(n^2\log (n/r))$. Using the techniques in this paper, it is possible to construct hitting sets of size $O(rs)$ for matrices which are not $(r, s)$ rigid. But this is non-trivial only when $rs \le cn^2$ for some constant $c<1$, which is a regime of parameters for which explicit constructions of rigid matrices are already known. A sequence of recent results~\cite{AW17, DE17, DL19} showed that many natural candidates for rigid matrices that possess certain symmetries are in fact not as rigid as suspected. This approach might circumvent these obstacles by giving an explicit construction which is not ruled out by these results.
A lower bound of $s$ on the size of depth $d$ linear circuits computing the linear transformation $A\vecx$ implies a lower bound of $\Omega(s)$ for depth $\Omega(d)$ algebraic circuits computing the degree-2 polynomial $\vecy^T A \vecx$ \cite{BS83, KalSin91} (so, we can convert lower bounds for circuits with $n$ outputs to lower bounds for circuits with 1 output). A notable open problem in algebraic complexity, which is very related to this work, is to prove any super-linear lower bound for algebraic circuits of depth $O(\log n)$ computing a polynomial with constant total degree. We refer to \cite{Raz10a} for a discussion on the importance of this problem.
\end{document} |
\begin{document}
\begin{center}
{\LARGE Quantum search processes in the cyclic group state spaces}
{\normalsize Xijia Miao}$^{*}$
June, 2005; Somerville, Massachusetts
Abstract
\end{center}
The hardness of solving an unstructured quantum search problem by a standard
quantum search algorithm mainly originates from the low efficiency to
amplify the amplitude of the unknown marked state in the Hilbert space of an
$n-$qubit pure-state quantum system by the oracle unitary operation
associated with other known unitary operations. A standard quantum search
algorithm generally can achieve only a square speedup over the best known
classical counterparts. In order to bypass this square speedup limitation it
is necessary to develop other types of quantum search algorithms. In the
paper an oracle-based quantum dynamical method has been developed to solve
the quantum search problem in the cyclic group state space of the Hilbert
space. The binary dynamical representation for a quantum state in the
Hilbert space of the $n-$qubit quantum system is generalized to the
multi-base dynamical representation for a quantum state in the cyclic group
state space. Thus, any quantum state such as the marked state and its
corresponding oracle unitary operation in the cyclic group state space may
be described completely in terms of a set of dynamical parameters that are
closely related to the symmetric property and structure of the cyclic group.
The quantum search problem then may be solved through determining the set of
dynamical parameters that describe completely the marked state instead by
directly measuring the marked state which is a necessary step in the
standard quantum search algorithm. The quantum dynamical method makes it
possible to manipulate at will the unknown marked state and its oracle
unitary operation. By a method similar to one used extensively in the hidden
subgroup problems, a cyclic group state space may be formed by mapping all
the group elements of a cyclic group one-to-one onto the specific states of
the Hilbert space of the $n-$qubit quantum system. It carries the symmetric
property and structure of a cyclic group. An unstructured quantum search
process in the Hilbert space may be affected greatly by the symmetric
property and structure of the cyclic group when the quantum search problem
is solved in the cyclic group state space. When the cyclic group is highly
symmetric, that is, the cyclic group of order $p$ is a product group of
many cyclic subgroups and each cyclic subgroup has an order $\thicksim
O(\log p)$, the quantum search problem in the cyclic group state space could
be solved better through reducing it from the cyclic group state space with
dimension $p$ to the cyclic group state subspaces with dimension $\thicksim
O(\log p)$ of these cyclic subgroups, for any quantum search problem can be
efficiently solved in these subspaces due to their much smaller dimension. The
main aim of the paper is to make use of the symmetric properties and
structures of groups to help solve the unstructured quantum search problem
in the Hilbert space. It is shown how the quantum search process could be
reduced efficiently from the cyclic group state space to its cyclic group
state subspaces with the help of the symmetric property and structure of the
cyclic group on an ideal universal quantum computer. \newline
\newline
{\large 1. Introduction}
The quantum search is tremendously valuable as it has an extensive
application in computation science. In classical computation most important
problems are either polynomial-time or NP-complete [1]. The conventional
computers based on the classical physical principles are well suited for
solving the polynomial-time problems efficiently, but inherently they are
not powerful enough to treat all the NP-hard problems efficiently [1]. On
the other hand, it has been shown that all the NP-complete problems in the
classical computation could be solved efficiently on a quantum computer if
there existed a polynomial-time unstructured quantum search algorithm. Thus,
a great progress could be achieved in quantum computation if an efficient
quantum search algorithm could be found. In the past decade a great effort
has been devoted to attacking this extremely important problem in quantum
computation. A number of quantum search algorithms [2-13] have been proposed
and developed since the standard Grover quantum search algorithm was
suggested [2]. The most famous include the standard Grover search algorithm [2],
the quantum adiabatic search algorithm [4, 5], and the
amplitude-amplification search algorithm [6]. Most of these oracle-based or
block-box-based quantum search algorithms are based on the quantum-state
tomographic method. These quantum search algorithms usually start with a
superposition of the Hilbert space of a pure-state quantum system, then
perform an iterative sequence of unitary operations which include the oracle
unitary operation and other known unitary operations to amplify the
amplitude of the marked state of the quantum search problem, and after the
unitary operation sequence measure the generated state, in which the marked
state has a high probability ($\thicksim 1)$, to output directly the
computing result, i.e., the complete information of the marked state. Since
the efficiency is low to amplify amplitude of the marked state with these
unitary operation sequences these search algorithms usually can only achieve
a square speedup over the best known classical counterparts. It has been
proven that this square speedup for all these unstructured quantum search
algorithms is optimal [3, 6, 9, 13]. More generally, it has been shown that
many oracle-based quantum algorithms (not limited to the quantum search
algorithms) based on the quantum-state tomography are subjected to
polynomial bounds in speedup [14], that is, these quantum algorithms can
only achieve a polynomial speedup over their best classical counterparts. In
order to bypass this speedup obstacle inherently for the oracle-based
quantum algorithms based on the quantum-state tomography it is necessary to
develop other types of quantum algorithms to solve the quantum search
problem and other problems [15]. Due to the fact that there is a low
efficiency to amplify the amplitude of the marked state in these quantum
search algorithms [2, 3, 6, 15], in developing new types of quantum search
algorithms a direct quantum measurement on the marked state with a high
probability ($\thicksim 1$) should be avoided as a necessary step, so
that amplification of the amplitude of the marked state may not be the key
component in algorithm, while the quantum measurement to output the
computing results could be carried out on those states which are closely
related to the marked state and carry the complete information of the marked
state [15]. It is particularly important to be able to manipulate at will
any quantum state even the unknown marked state in the Hilbert space in
developing efficient quantum algorithms. This is an important step towards
the goal to realize that any quantum state in the Hilbert space of an $n-$
qubit quantum system is able to be described and characterized completely in
a parameterization form [15]. Such a parameterization description for a
quantum state is different from the conventional quantum-state tomographic
method. Since any quantum state in the Hilbert space can be described and
characterized completely by a set of dynamical parameters called the
quantum-state unit-number vector [15] and there is a one-to-one
correspondence between the oracle unitary operation and the marked state in
the quantum search problem, it becomes possible to manipulate at will the
oracle unitary operation and the unknown marked state. Due to the fact that
the unknown marked state can be described completely by the set of dynamical
parameters it is possible to solve the quantum search problem by determining
the set of dynamical parameters instead of directly measuring the
marked state [15]. This gives a possibility to avoid a direct amplification
of amplitude of the marked state which is a key component in the
conventional quantum search algorithms [2-13]. This strategy to solve the
quantum search problem opens a large space to develop new quantum search
algorithms.
Generally, the unknown marked state of the quantum search problem can be any
possible state of the Hilbert space and the quantum search space for the
problem must contain the marked state in a quantum search algorithm.
Therefore, for these conventional quantum search algorithms the quantum
search spaces usually are taken as the whole Hilbert space and hence the
initial state is a superposition over the whole Hilbert space. The usual
quantum search algorithms also have showed that the low efficiency to
amplify the amplitude of the marked state is strongly dependent on the
dimensional size of the search space, that is, the efficiency generally is
inversely proportional to the square root of the dimensional size of the
search space [2, 3, 6] and it has been shown that the efficiency is optimal
[3, 6, 9, 13]. One possible scheme to increase this efficiency therefore
could be that the quantum search space is limited to a small state subspace
of the Hilbert space for a quantum search problem [16]. Generally, this
scheme will run into difficulty and is not feasible if the marked state is not
in the subspace. To make the scheme feasible one must convert the marked
state from the whole Hilbert space to the small subspace. Because there is
the rotation symmetry in spin space in the $n-$qubit quantum spin system the
whole Hilbert space of the spin system can be divided into $(n+1)$ state
subspaces according to the angular momentum theory in quantum mechanics and
it can be shown that any unknown quantum state such as the marked state can
be converted efficiently from a small subspace into a larger subspace in the
Hilbert space [16]. This fact directly implies that in a single $n-$qubit
quantum system the quantum search problem can be reduced efficiently from
the whole Hilbert space into its largest subspace. This search space
reduction speeds up the conventional quantum search process, although this
speedup is limited and does not change essentially the computational
complexity for the quantum search problem. However, it is very important for
the fact that the symmetric properties and structures of quantum systems may
be exploited to speed up the quantum search process, for one can go a
further step to use the special group symmetric properties and structures to
help solve the quantum search problem. This idea will play an important
role to guide the construction of quantum search algorithms in the cyclic
group state space in the paper. Generally, the whole Hilbert space of the $
n- $qubit quantum system may not have some specific group symmetric
properties and structures, but a specific state subset of the Hilbert space
could carry the symmetric property and structure of a specific group such as
a cyclic group. Then quantum computation may be affected greatly by the
symmetric property and structure of the group if it is carried out on the
state subset. In order to make use of the symmetric property and structure
of a finite group in developing new quantum algorithms one may first form
this specific state subspace in the Hilbert space of the $n-$qubit quantum
system. By mapping all the group elements of the group one-to-one onto these
specific states of the Hilbert space, which means that each group element
corresponds one-to-one to a state of the specific state subspace and hence
the mapping is isomorphic, then all these specific states form a state
subspace of the Hilbert space and evidently this subspace is an invariant or
closed state subspace under the action of the group operations. This state
subspace is called the group state space of the Hilbert space. This subspace
can be thought of as an artificially-formed state space of the Hilbert space
which carries the information of the symmetric property and structure of the
group. The dimension of the group state space is just the order of the
finite group. The similar scheme to the group state space has been
extensively used previously in the hidden subgroup problems [17]. If the
group is highly symmetric, which means that the group is a product of many of its
factor subgroups, then the corresponding group state space also may contain
many state subspaces which one-to-one correspond to these factor subgroups
of the group. Since the dimensional size of the group is a product of the
dimensional sizes of all these factor subgroups, the dimension of the group
state space is also a product of those of the group state subspaces of the
factor subgroups. The dimensional size (denoted as $p$ here) of the group
state space can be very large, $p\thicksim 2^{n},$ and may increase
exponentially as the qubit number, but since it is a product of the
dimensional sizes of many state subspaces of these factor subgroups the
dimensional sizes of the state subspaces of the factor subgroups can be much
smaller, $\thicksim O(\log p),$ and may increase only polynomially as the
qubit number. Then the quantum search problem in the whole group state space
would be solved efficiently if the problem could be efficiently reduced from
the whole group state space to the state subspaces of these subgroups. As
the main purpose, the paper intends to achieve such a search space reduction
for the quantum search problem in the cyclic group state space with the help
of the symmetric property and structure of a cyclic group. Here the
symmetric property and structure of a cyclic group are employed to help
solve the quantum search problem, as a cyclic group is one of the simplest
groups and its symmetric property and structure are very simple and have
been studied in detail and thoroughly [18]. \newline
\newline
{\large 2. Quantum search model in the cyclic group state space}\newline
\newline
{\large 2.1. The binary dynamical representation and the multi-base
dynamical representation of quantum states}
In the Hilbert state space of an $n-$qubit pure-state quantum system a
quantum state can be characterized and described completely by a set of $n$
dynamical parameters $\{a_{k}^{s}=\pm 1,$ $k=1,2,...,n\},$ which has been
called the quantum-state unit-number vector in the papers [15]. This is a
parameterization description for quantum states in the $2^{n}-$dimensional
Hilbert space and different from the conventional quantum-state tomographic
method. By measuring the set of the dynamical parameters one can determine
uniquely the corresponding quantum state. This dynamical description picture
not only is able to describe completely a quantum state $|s\rangle $ in the $
2^{n}-$dimensional Hilbert space of the pure-state quantum system but also
is used to describe the corresponding quantum state $\rho _{s}=|s\rangle
\langle s|,$ which is represented by a diagonal density operator, in the
Liouville operator space of the quantum ensemble of the quantum system. For
instance, in the Hilbert state space a conventional computational basis
state $|s\rangle $ can be described completely with the parameter vector $
\{a_{k}^{s}\}$ by $|s\rangle =\bigotimes_{k=1}^{n}(\frac{1}{2}
T_{k}+a_{k}^{s}S_{k})$ with $T_{k}=|0_{k}\rangle +|1_{k}\rangle $ and $S_{k}=
\frac{1}{2}(|0_{k}\rangle -|1_{k}\rangle ),$ while in the corresponding
Liouville operator space the quantum state is described completely by the
diagonal density operator $\rho _{s}=|s\rangle \langle
s|=\bigotimes_{k=1}^{n}(\frac{1}{2}E_{k}+a_{k}^{s}I_{kz})$ which is also
determined uniquely by the vector $\{a_{k}^{s}\}$. By the parameter vector $
\{a_{k}^{s}\}$ one can set up a one-to-one correspondence between a quantum
state $|s\rangle $ or $\rho _{s}=|s\rangle \langle s|$ and the selective
rotation unitary operation $C_{s}(\theta )=\exp (-i\theta D_{s})$ with the
Hermitian diagonal operator $D_{s}=|s\rangle \langle s|=\bigotimes_{k=1}^{n}(
\frac{1}{2}E_{k}+a_{k}^{s}I_{kz}),$ which selectively acts on only the
quantum state $|s\rangle $ or $\rho _{s}$ and is described completely also
by the vector $\{a_{k}^{s}\}$. The diagonal operator $D_{s}$ is called the
quantum-state diagonal operator since it is a diagonal operator and also
equals the state $\rho _{s}$ formally$.$ The unitary evolution process of a
quantum system or its corresponding quantum ensemble under the action of the
selective rotation unitary operation $C_{s}(\theta )$ is described
completely by the vector parameters $\{a_{k}^{s}\}$ and in this sense the
vector parameters $\{a_{k}^{s}\}$ are also called the dynamical parameters.
The quantum-state diagonal operator $D_{s}$ makes it possible to manipulate
at will the evolution of an unknown quantum state in the Hilbert space of a
quantum system. This representation of a quantum state by the dynamical
parameter vector $\{a_{k}^{s}\}$ is called the binary dynamical
representation, since each parameter $a_{k}^{s}$ can take only the two
values $+1$ and $-1.$ In quantum dynamics any quantum-state search problem
can thus be reduced to determining the dynamical parameter vector $
\{a_{k}^{s}\}$ of the marked state. The direct measurement on the marked
state that outputs the information of the marked state in the conventional
quantum search algorithms [2-13] therefore need not be necessary in the
quantum dynamical method, for there are a number of methods in quantum
dynamics, working either in a pure-state quantum system or in a quantum
ensemble, that determine the dynamical parameter vectors $\{a_{k}^{s}\}$ of
quantum states including the marked state [15]. The quantum search
algorithms based on quantum dynamics therefore differ from the conventional
ones [2-13] in an important respect: it is not necessary to measure the
marked state directly in order to output its complete information. The
quantum dynamical method thus opens up considerable room for developing new
types of quantum search algorithms.
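As an illustrative aside (not one of the constructions of this paper), the following Python/NumPy sketch builds the diagonal density operator $\rho _{s}=\bigotimes_{k}(\frac{1}{2}E_{k}+a_{k}^{s}I_{kz})$ from a binary dynamical parameter vector $\{a_{k}^{s}\}$ and checks that it equals $|s\rangle \langle s|$; the three-qubit example vector is chosen only for concreteness.

```python
import numpy as np

E = np.eye(2)                      # one-qubit identity E_k
Iz = np.diag([0.5, -0.5])          # I_z = diag(1/2, -1/2)

def rho_from_params(a):
    """Build rho_s = tensor_k (E/2 + a_k * I_z) for a_k in {+1, -1}."""
    rho = np.array([[1.0]])
    for ak in a:
        rho = np.kron(rho, 0.5 * E + ak * Iz)
    return rho

# a_k = +1 encodes |0_k>, a_k = -1 encodes |1_k>
a = [+1, -1, +1]                   # encodes the basis state |010> = |2>
rho = rho_from_params(a)

s = int("".join('0' if ak == +1 else '1' for ak in a), 2)
ket = np.zeros(2 ** len(a)); ket[s] = 1.0
assert np.allclose(rho, np.outer(ket, ket))
```

Since each factor $\frac{1}{2}E_{k}+a_{k}^{s}I_{kz}$ is diagonal with entries $0$ and $1$, the tensor product is the rank-one projector onto the encoded basis state, as the assertion confirms.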
Besides the binary dynamical representation for a quantum state in the
Hilbert space of an $n$-qubit quantum system, it is possible to use other,
multi-base dynamical representations to describe a quantum state completely.
A multi-base dynamical representation could be a better choice for the
quantum search problem in a cyclic group state space. Before the multi-base
dynamical representations are described, the group state space of a cyclic
group is first defined in the Hilbert space. A cyclic group is an Abelian
group generated by a single element [18]. A cyclic group $G$ generated by a
fixed generator $g$ may be written as $G=\langle g\rangle
=\{E,g,g^{2},...,g^{n_{r}-1}\},$ where $E$ is the unit element and $n_{r}$
is the order of the cyclic group. In a manner analogous to the method used
extensively in the hidden subgroup problems [17], each group element of the
cyclic group $G$ is now mapped one-to-one onto a specific state of the
Hilbert state space of an $n$-qubit quantum system. These specific states,
which correspond to all the group elements of the cyclic group, form a state
subset of the Hilbert space, and this subset is the cyclic group state
space. The cyclic group state space is an invariant state subspace under the
action of any group operation (element) of the cyclic group. Suppose further
that the unit element $E$ of the group $G$ is mapped onto the specific state
$|\varphi _{0}\rangle $ of the Hilbert space; then the cyclic group state
space $S(G)$ may be written formally as
\[
S(G)=\{|\varphi _{0}\rangle ,g|\varphi _{0}\rangle ,g^{2}|\varphi
_{0}\rangle ,...,g^{n_{r}-1}|\varphi _{0}\rangle \}.
\]
Here the cyclic group state space $S(G)$ is within the Hilbert space and its
dimension is just the order $n_{r}$ of the cyclic group. In quantum
computation a convenient state basis in the Hilbert space of an $n-$qubit
quantum system is the conventional computational basis. This basis set
consists of the integer states $\{|Z_{2^{n}}\rangle \}=\{|0\rangle
,|1\rangle ,|2\rangle ,...,|2^{n}-1\rangle \}$. Then the state basis set of
the cyclic group state space of the Hilbert space is the specific state
subset of the integer state set $\{|Z_{2^{n}}\rangle \}$. Now consider the
integer set $Z_{m}=\{0,1,2,...,m-1\}$. The set $Z_{m}$ is a ring ($
Z_{m}=Z/mZ$) under modular arithmetic ($\func{mod}m$) in
number theory [19] and also an additive cyclic group under the modular $(m)$
additive operation [18, 20]. As the multiplicative operation is often used
in quantum computation, the integer set $Z_{p-1}=\{0,1,...,p-2\}$ can be
mapped by the modular exponentiation: $z\rightarrow g^{z}\func{mod}p$ to the
positive integer set $Z_{p}^{+}=\{g^{z}\func{mod}p\}=\{1,2,...,p-1\},$ where
the integer $z\in Z_{p-1}$ and $p$ is a known prime and $g$ a known
primitive root ($\func{mod}p)$. The integer set $Z_{p}^{+}$ forms a
multiplicative cyclic group under modular multiplication operation [18, 20].
Both the additive cyclic group $Z_{p-1}$ and the multiplicative cyclic group
$Z_{p}^{+}$ have order $p-1,$ and the two groups are in one-to-one
correspondence; in fact, all cyclic groups of the same order are isomorphic
to one another [18]. Hereafter $p$ denotes a known prime, $g$ a primitive
root $(\func{mod}p)$ or a generator of a cyclic group, and $C_{m}$ a
multiplicative cyclic group such as $Z_{m}^{+}$ of order $m.$ If either of
the two cyclic groups is mapped into the Hilbert space, one obtains its
corresponding cyclic group state space. For the additive cyclic group $
Z_{p-1}$ the mapping between the group elements and the corresponding states
in the Hilbert space may be given conveniently by $s\rightarrow |s\rangle $
for the group element $s\in Z_{p-1}=\{0,1,...,p-2\}$ ($|\varphi _{0}\rangle
=|0\rangle $), and for the multiplicative cyclic group $C_{p-1}$ the mapping
may be conveniently written as $f(s)=g^{s}\func{mod}p\rightarrow |g^{s}\func{
mod}p\rangle $ ($|\varphi _{0}\rangle =|1\rangle $) for the group element $
g^{s}\equiv g^{s}\func{mod}p\in Z_{p}^{+}=\{1,2,...,p-1\}.$ Therefore, any
quantum state of the cyclic group state space $S(C_{p-1})=\{|g^{s}\func{mod}
p\rangle \}=\{|1\rangle ,|2\rangle ,...,|p-1\rangle \}$ of the
multiplicative cyclic group $(C_{p-1})$ can be expressed generally as
\[
|\varphi _{s}\rangle =|g^{s}\func{mod}p\rangle ,\text{ }s\in Z_{p-1},
\]
where the state $|\varphi _{s}\rangle $ is also a conventional computational
basis state. Since $\varphi _{s}>0$ for any $s\in Z_{p-1},$ the state $
|0\rangle $ is not included in the cyclic group state space $S(C_{p-1})$.
There is a one-to-one correspondence between the modular exponential
function $f(s)=g^{s}\func{mod}p$ of the integer set $Z_{p}^{+}$ and the
index $s$ of the integer set $Z_{p-1}.$ The index $s$ is just the discrete
logarithm of $f(s)$, that is, $s=\log _{g}f(s)$ with logarithmic base $g$;
in other words, the index $s$ is recovered by inverting the modular
exponential function $f(s)=g^{s}\func{mod}p$. In classical computation the
modular exponential function is easy to compute, but the discrete logarithm
is generally hard, and this hardness forms the basis of classical public-key
cryptography based on discrete logarithms [21]. However, Shor's discrete
logarithm quantum algorithm shows that the discrete logarithm is no longer
hard to compute in quantum computation [22].
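The bijection between indices and modular exponentiation states, and the asymmetry between computing it forward and inverting it, can be sketched in a few lines of Python; the toy values $p=11,$ $g=2$ are illustrative choices only (brute-force inversion is of course feasible only at toy sizes).

```python
# The modular exponentiation f(s) = g^s mod p maps the index set Z_{p-1}
# one-to-one onto Z_p^+ = {1, ..., p-1} when g is a primitive root mod p.
p, g = 11, 2                       # p prime, g a primitive root mod p

f = {s: pow(g, s, p) for s in range(p - 1)}          # Z_{p-1} -> Z_p^+
assert sorted(f.values()) == list(range(1, p))       # bijection onto {1,...,p-1}

# Brute-force discrete log: trivial at toy sizes, but believed classically
# hard at cryptographic sizes (the basis of discrete-log cryptography).
def dlog(y):
    return next(s for s in range(p - 1) if pow(g, s, p) == y)

assert all(dlog(f[s]) == s for s in range(p - 1))
```

The forward direction uses fast modular exponentiation (`pow` with three arguments), while the sketched inverse simply scans all indices, which is exactly the exponential gap the discrete-log cryptosystems rely on.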
Actually, the additive cyclic group state space $S(Z_{p-1})$ is just the
state subset $\{|Z_{p-1}\rangle \}$ consisting of the first $p-1$
conventional computational basis states of the Hilbert space $
\{|Z_{2^{n}}\rangle \}$. At first sight there is no difference between the
additive cyclic group state space $S(Z_{p-1})$ and the state subset $
\{|Z_{p-1}\rangle \}$ of the Hilbert space, if one does not care about the
symmetric properties and structures of the two state subsets in the quantum
search problem. However, the difference could be great once these symmetric
properties and structures are taken into consideration. For the
multiplicative cyclic group state space $S(C_{p-1}),$ whose basis states are
the modular exponentiation states $\{|g^{s}\func{mod}p\rangle \},$ one may
easily see that the two state subsets $S(C_{p-1})$ and $\{|Z_{p-1}\rangle
\}$ differ. However, only from the symmetric property and structure of the
cyclic group state space can one understand that this difference may lead to
a completely different result in quantum computation. According to the
fundamental theorem of arithmetic (see the Theorem 2 in Ref. [19]) the order
$p-1$ of the multiplicative cyclic group $C_{p-1}$ which is also the
dimension of the cyclic group state space $S(C_{p-1})$ can be expressed as a
product of distinct primes, $p-1=p_{1}^{a_{1}}p_{2}^{a_{2}}...p_{r}^{a_{r}},$
where $p_{1},p_{2},...,p_{r}$ are distinct primes and the exponents $
a_{1},a_{2},...,a_{r}>0$. Then correspondingly the cyclic group $C_{p-1}$
can be decomposed as a product of its factor cyclic subgroups (see Chapters
One and Two in Ref. [18]):
\begin{equation}
C_{p-1}=C_{p_{1}^{a_{1}}}\times C_{p_{2}^{a_{2}}}\times ...\times
C_{p_{r}^{a_{r}}}, \label{1}
\end{equation}
where the factor cyclic subgroup $C_{p_{k}^{a_{k}}}$ has an order $
p_{k}^{a_{k}}$ for $k=1,2,...,r.$ Thus, the order $p-1$ of the cyclic group $
C_{p-1}$ is a product of the orders $\{p_{k}^{a_{k}}\}$ of the factor cyclic
subgroups $\{C_{p_{k}^{a_{k}}}\}$. This shows that even though the order $
p-1$ of the cyclic group $C_{p-1}$ can be a large number (even $p\thicksim
2^{n})$, the orders $\{p_{k}^{a_{k}}\}$ of the factor cyclic subgroups $
\{C_{p_{k}^{a_{k}}}\}$ may be much smaller, $\thicksim O(\log p)$.
Just like the cyclic group state space $S(C_{p-1})$ the cyclic group state
subspace $S(C_{p_{k}^{a_{k}}})$ ($k=1,2,...,r$) also can be formed by
mapping all the elements of the cyclic subgroup $C_{p_{k}^{a_{k}}}$ onto the
specific states of the Hilbert space, and it is a state subspace of the
cyclic group state space $S(C_{p-1})$ and also of the Hilbert space. Since
the dimensional size of a cyclic group state space is just the order of the
cyclic group, the cyclic group state subspaces $\{S(C_{p_{k}^{a_{k}}})\}$
may also have much smaller dimensions, $\thicksim O(\log p),$ even if the
whole cyclic group state space $S(C_{p-1})$ has a large dimension $(p
\thicksim 2^{n}).$ It is well known that a problem can be difficult to solve
in a large dimension but may often be solved fast in a small dimension, even
in classical computation. Since the quantum search speed for a search
problem is generally inversely proportional to the square root of the
dimensional size of the problem [2, 6], the quantum search problem could be
solved efficiently even in the whole cyclic group state space $S(C_{p-1})$
if it could be efficiently reduced from the whole cyclic group state space
to the state subspaces $\{S(C_{p_{k}^{a_{k}}})\}$ of the factor cyclic
subgroups $\{C_{p_{k}^{a_{k}}}\}$. Thus, the main purpose of this paper is
to show how to achieve this reduction of the quantum search problem from the
cyclic group state space $S(C_{p-1})$ to the cyclic group state subspaces $
\{S(C_{p_{k}^{a_{k}}})\}.$
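The prime-power decomposition that drives this reduction is easy to make concrete. The following Python sketch (with an illustrative prime $p=211,$ so that $p-1=210=2\cdot 3\cdot 5\cdot 7$) factors the group order by trial division and lists the factor subgroup orders $\{p_{k}^{a_{k}}\}$:

```python
# Decompose the order p-1 of C_{p-1} into the prime-power orders
# {p_k^{a_k}} of its factor cyclic subgroups (cf. equation (1)).
def prime_power_factors(m):
    """Return {p_k: a_k} with m = prod_k p_k^{a_k} (trial division)."""
    factors, d = {}, 2
    while d * d <= m:
        while m % d == 0:
            factors[d] = factors.get(d, 0) + 1
            m //= d
        d += 1
    if m > 1:
        factors[m] = factors.get(m, 0) + 1
    return factors

p = 211                        # a prime; p - 1 = 210 = 2 * 3 * 5 * 7
factors = prime_power_factors(p - 1)
orders = [pk ** ak for pk, ak in factors.items()]    # subgroup orders m_k
assert sorted(orders) == [2, 3, 5, 7]
prod = 1
for mk in orders:
    prod *= mk
assert prod == p - 1           # the m_k multiply back to the group order
```

Here each factor order is tiny compared with $p-1$ itself, which is exactly the situation the reduction aims to exploit; for cryptographic-size $p$ the factorisation of $p-1$ would of course be obtained by other means.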
A quantum state $|s\rangle $ of the additive cyclic group state space $
S(Z_{p-1})$ may be described completely by the dynamical parameter vector $
\{a_{k}^{s}\}$. Since $g^{s}\func{mod}p$ is an integer of the positive
integer set $Z_{p}^{+},$ any quantum state $|g^{s}\func{mod}p\rangle $ of
the cyclic group state space $S(C_{p-1})$ is a usual computational basis
state and can also be described completely by a dynamical parameter vector $
\{a_{k}^{t}\},$ where the vector $\{a_{k}^{t}\}$ need not equal the vector $
\{a_{k}^{s}\}$ and the two vectors $\{a_{k}^{s}\}$ and $\{a_{k}^{t}\}$ are
related by the one-to-one correspondence $s\leftrightarrow g^{s}\func{mod}p$.
However, a better method to describe
completely a quantum state $|g^{s}\func{mod}p\rangle $ of the multiplicative
cyclic group state space $S(C_{p-1})$ could be to use the multi-base
dynamical representation in quantum computation. Notice that the cyclic
group $C_{p-1}$ is a product of the cyclic subgroups $\{C_{p_{k}^{a_{k}}}\}$
, each of which has an order $p_{k}^{a_{k}}$. Suppose that the cyclic
subgroup $C_{p_{k}^{a_{k}}}$ is generated by a generator $g_{k},$ that is, $
C_{p_{k}^{a_{k}}}=\langle g_{k}\rangle .$ Then any element of the cyclic
subgroup $C_{p_{k}^{a_{k}}}$ can be generally written as $g_{k}^{l_{k}}$ for
the index $l_{k}=0,1,...,p_{k}^{a_{k}}-1.$ Corresponding to the product
decomposition (1) for the cyclic group $C_{p-1}$ each group element $g^{s}$
of the cyclic group $C_{p-1}$ is also a product of the group elements $
\{g_{k}^{s_{k}}\}$ of the factor cyclic subgroups $\{C_{p_{k}^{a_{k}}}\},$
\begin{equation}
g^{s}=g_{1}^{n_{1}s_{1}}\times g_{2}^{n_{2}s_{2}}\times ...\times
g_{r}^{n_{r}s_{r}}, \label{2}
\end{equation}
where the generator $g_{k}$ of the subgroup $C_{p_{k}^{a_{k}}}$ can be
written as $g_{k}=g^{M_{k}}\func{mod}p$ [18] for $k=1,2,...,r,$ and the
positive integers $M_{k}$ and $n_{k}$ will be determined later. The product
decomposition (2) for a group element of the cyclic group $C_{p-1}$ follows
from the Chinese remainder theorem (see Theorem 121 in Ref. [19]).
Actually, there exists a one-to-one correspondence
between the index $s$ of the group element $g^{s}$ of the cyclic group $
C_{p-1}$ and the index vector $\{s_{k}\}$ of the group elements $
\{g_{k}^{n_{k}s_{k}}\}$ of the factor cyclic subgroups $\{C_{p_{k}^{a_{k}}}
\}.$ Note that the order $p-1$ of the cyclic group $C_{p-1}$ is decomposed
as a product of the distinct prime factors $\{p_{k}^{a_{k}}\}:$ $
p-1=p_{1}^{a_{1}}p_{2}^{a_{2}}...p_{r}^{a_{r}}$. For convenience, here
denote that $m_{k}=p_{k}^{a_{k}}$ for $k=1,2,...,r$, and $
p-1=m=m_{1}m_{2}...m_{r}.$ Evidently, the integers $\{m_{k}\}$ are pairwise
coprime, that is, the greatest common divisor of any pair of the integers $
m_{i}$ and $m_{j}$ equals one: $(m_{i},m_{j})=1$ for $1\leq i<j\leq r.$
Since $s=s\func{mod}(p-1),$ if the index $s_{k}$ is defined as $s_{k}=s\func{
mod}m_{k}$ for $k=1,2,...,r,$ then the index $s$ can be uniquely expressed
as a linear combination ($\func{mod}(p-1)$) of the indices $\{s_{k}\}$
according to the Chinese remainder theorem [19],
\begin{equation}
s=(n_{1}M_{1}s_{1}+n_{2}M_{2}s_{2}+...+n_{r}M_{r}s_{r})\func{mod}(p-1),
\label{3}
\end{equation}
where $p-1=m_{k}M_{k}$ for $k=1,2,...,r.$ Note that $(m_{k},M_{k})=1.$ The
integer $n_{k}$ is just the multiplicative inverse to the integer $M_{k}$ ($
\func{mod}m_{k}$) that satisfies $n_{k}M_{k}=1\func{mod}m_{k}.$ Using the
known integers $m_{k}$ and $M_{k}$ one can efficiently calculate the integer
$n_{k}$ by the Euclidean algorithm [20]. Because the integer $M_{k}$
satisfies $M_{k}=(p-1)/m_{k},$ that is, $M_{k}$ is a divisor of the order $
p-1$, it follows from Theorem 1.4.3 in Ref. [18] that the generator $g_{k}$
of the cyclic subgroup $C_{p_{k}^{a_{k}}}$ is just $g_{k}=g^{M_{k}}\func{mod}
p,$ that the order of the subgroup $C_{p_{k}^{a_{k}}}$ is $p_{k}^{a_{k}},$
and hence that the cyclic subgroup can be written as $
C_{p_{k}^{a_{k}}}=\langle g^{M_{k}}\func{mod}p\rangle .$ Then the state
subspace of the cyclic subgroup $C_{p_{k}^{a_{k}}}=\langle g^{M_{k}}\func{mod
}p\rangle $ is given by
\[
S(C_{p_{k}^{a_{k}}})=\{|(g^{M_{k}})^{l_{k}}\func{mod}p\rangle
,l_{k}=0,1,...,p_{k}^{a_{k}}-1\}.
\]
The dimension of the state subspace $S(C_{p_{k}^{a_{k}}})$ is just the order
$p_{k}^{a_{k}}$ of the cyclic subgroup $C_{p_{k}^{a_{k}}}.$ The cyclic group
state subspace $S(C_{p_{k}^{a_{k}}})$ is an invariant subspace under the
action of any group operation $g_{k}^{l_{k}}$ of the cyclic subgroup $
C_{p_{k}^{a_{k}}}.$
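The Chinese-remainder reconstruction of equation (3) can be sketched directly in Python; the prime $p=211$ and the test index are illustrative choices, and Python's built-in `pow(M, -1, m)` computes the same modular inverse $n_{k}$ that the Euclidean algorithm gives.

```python
# Sketch of equation (3): recover the index s from its residues
# s_k = s mod m_k, with M_k = (p-1)/m_k and n_k = M_k^{-1} mod m_k.
p = 211
m = [2, 3, 5, 7]                         # pairwise coprime m_k, product = p-1 = 210
M = [(p - 1) // mk for mk in m]          # M_k = (p-1)/m_k
n = [pow(Mk, -1, mk) for Mk, mk in zip(M, m)]   # n_k M_k = 1 mod m_k

def crt(sk):
    """s = (n_1 M_1 s_1 + ... + n_r M_r s_r) mod (p-1)."""
    return sum(nk * Mk * skk for nk, Mk, skk in zip(n, M, sk)) % (p - 1)

s = 123
sk = [s % mk for mk in m]                # residues s_k = s mod m_k
assert crt(sk) == s                      # equation (3) recovers s uniquely
```

Because the $m_{k}$ are pairwise coprime, each term $n_{k}M_{k}s_{k}$ is congruent to $s_{k}$ modulo $m_{k}$ and to $0$ modulo every other $m_{j}$, so the sum reproduces $s$ modulo $p-1$.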
Given a set of indices $\{s_{k}\}$ for the group elements $
\{(g^{M_{k}})^{n_{k}s_{k}}\func{mod}p\}$ of the factor cyclic subgroups $
\{C_{p_{k}^{a_{k}}}\}$ with the generators $\{g^{M_{k}}\func{mod}p\},$ where
the integers $\{n_{k}\}$ are known, one can compose, according to equation
(2), a unique group element $g^{s}\func{mod}p$ of the cyclic group $
is given any element $g^{s}\func{mod}p$ of the cyclic group $C_{p-1}$ with
the index $s,$ then according to the equation (2) the element can be
decomposed uniquely as a product of the elements $\{(g^{M_{k}})^{n_{k}s_{k}}
\func{mod}p\}$ of the cyclic subgroups $\{C_{p_{k}^{a_{k}}}\}$ and the
indices $\{s_{k}\}$ are given by $s_{k}=s\func{mod}m_{k}.$ Therefore, there
is a one-to-one correspondence between the index $s$ of the group element $
g^{s}\func{mod}p$ of the cyclic group $C_{p-1}$ and the index set $\{s_{k}\}$
of the group elements $\{(g^{M_{k}})^{n_{k}s_{k}}\func{mod}p\}$ of the
factor cyclic subgroups $\{C_{p_{k}^{a_{k}}}\}$. This one-to-one
correspondence shows that one can also use the set of indices $\{s_{k}\}$ to
describe completely the index state $|s\rangle $ and the cyclic group state $
|g^{s}\func{mod}p\rangle $ as well in addition to the dynamical parameter
vector $\{a_{k}^{s}\}$. Furthermore, because $s_{k}=s\func{mod}p_{k}^{a_{k}}$
the index $s_{k}$ can be expanded in the field $GF(p_{k}^{a_{k}})$ [19, 20,
21],
\begin{equation}
s_{k}=s\func{mod}p_{k}^{a_{k}}=\stackrel{a_{k}-1}{\stackunder{l=0}{\sum }}
h_{kl}^{s}p_{k}^{l}, \label{4}
\end{equation}
where the coefficients $\{h_{kl}^{s}\}$ satisfy $0\leq h_{kl}^{s}<p_{k}$ for
$l=0,1,...,a_{k}-1$ and $k=1,2,...,r.$ This expansion can be thought of as
the $p_{k}$-base expansion of the index $s_{k},$ similar to the conventional
binary expansion of a number. Clearly, given the prime $p_{k},$ the index $
s_{k}=s\func{mod}p_{k}^{a_{k}}$ is determined uniquely by the coefficients $
h_{kl}^{s}$ for $l=0,1,...,a_{k}-1.$ Therefore, $r$ indices $\{s_{k}\}$ or $
\sum_{k=1}^{r}a_{k}$ coefficients $\{h_{kl}^{s}\}$ are needed to describe
completely the index state $|s\rangle $ or the cyclic group state $|g^{s}
\func{mod}p\rangle ,$ while in the binary representation only $n$ parameters
$\{a_{k}^{s}\}$ are needed for the complete description of the index state $
|s\rangle $ in the Hilbert space of an $n$-qubit quantum system. The
multi-base representation $\{h_{kl}^{s}=0,1,...,p_{k}-1\}$ or the index
vector $\{s_{k}\}$ for the index state $|s\rangle $ or the cyclic group
state $|g^{s}\func{mod}p\rangle $ may thus seem more complicated than the
binary representation $\{a_{k}^{s}=+1,-1\}$ in the Hilbert space. The
important point, however, is that the multi-base representation $
\{h_{kl}^{s}\}$ or the index vector $\{s_{k}\}$ reflects the symmetric
property and structure of the cyclic group $C_{p-1}$, and this symmetric
property and structure can have a great impact on quantum computation
carried out in the cyclic group state space. In the quantum search problem
in the cyclic group state space it could therefore be better to replace the
binary dynamical representation $\{a_{k}^{s}\}$ with the index vector $
\{s_{k}\}$ or the multi-base dynamical representation $\{h_{kl}^{s}\}$ to
represent the quantum states completely and to describe the quantum dynamics
of the quantum search process. In the cyclic group state space the quantum
search process to find the marked state then amounts to determining
completely the index vector $\{s_{k}\}$ or the parameter vector $
\{h_{kl}^{s}\}$, just as in the previous case searching for the marked state
amounts to determining the dynamical parameter vector $\{a_{k}^{s}\}$ [15].
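The $p_{k}$-base expansion of equation (4) is ordinary positional notation in base $p_{k}$; the following short Python sketch (with illustrative values $p_{k}=3,$ $a_{k}=4$) extracts the digits $h_{kl}^{s}$ and verifies the expansion.

```python
# Sketch of equation (4): expand s_k = s mod p_k^{a_k} in base p_k,
# s_k = sum_{l=0}^{a_k-1} h_kl p_k^l with digits 0 <= h_kl < p_k.
def base_digits(sk, pk, ak):
    """Return the a_k digits h_kl of s_k in base p_k (least significant first)."""
    digits = []
    for _ in range(ak):
        digits.append(sk % pk)
        sk //= pk
    return digits

pk, ak = 3, 4                        # illustrative factor p_k^{a_k} = 3^4 = 81
s = 1000
sk = s % pk ** ak                    # s_k = 1000 mod 81 = 28
h = base_digits(sk, pk, ak)          # 28 = 1 + 0*3 + 0*9 + 1*27 -> [1, 0, 0, 1]
assert sum(h[l] * pk ** l for l in range(ak)) == sk
assert all(0 <= d < pk for d in h)
```

Determining the marked state thus amounts to fixing these few digits per factor subgroup rather than all $n$ binary parameters at once.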
\newline
\newline
{\large 2.2. The oracle unitary operation acting on the cyclic group state
space}
The quantum search process in the cyclic group state space may be carried
out either in the additive cyclic group state space $S(Z_{p-1})$ or in the
multiplicative cyclic group state space $S(C_{p-1})$. Correspondingly the
marked state of the search problem can be represented either by the index
state $|s\rangle $ of the additive cyclic group state space $S(Z_{p-1})$ or
the modular exponentiation state $|g^{s}\func{mod}p\rangle $ of the
multiplicative cyclic group state space $S(C_{p-1})$. The index state $
|s\rangle $ and the modular exponentiation state $|g^{s}\func{mod}p\rangle $
can be efficiently converted into each other by a unitary transformation
which is given in the next section. It may be more convenient to carry out
the quantum search process in the multiplicative cyclic group state space $
S(C_{p-1}),$ as multiplicative unitary operations usually are easy to
construct and use. Suppose that the quantum search process is used to
solve a specific problem, such as an NP problem, which has only one
solution, and that the possible solution to the problem lies within the
integer set $Z_{p}^{+}=\{g^{k}\func{mod}p\}=\{1,2,...,p-1\};$ here it is
assumed that the possible solution can be represented by the integer index
variable $x\in Z_{p}^{+}$. In the quantum search problem there is a black
box or an oracle that computes a function $f(x)$ for the variable $x=g^{k}
\func{mod}p\in Z_{p}^{+}.$ If the variable $x=g^{s}\func{mod}p$ is the
solution to the problem, then the function $f(x)=f(g^{s}\func{mod}p)=1$;
otherwise $f(x)=0$. The quantum computational process that computes the
function $f(x)$ in the black box can be represented by a unitary operation,
called the oracle unitary operation. It is the only unitary
operation that can access directly the unknown marked state $|g^{s}\func{mod}
p\rangle $ in the quantum search problem, where the marked state corresponds
to the unique solution $x=g^{s}\func{mod}p$ to the problem. Generally, if
the marked state is defined as $|g^{s}\func{mod}p\rangle ,$ then the
corresponding oracle unitary operation $U_{o}=U_{os}(\theta )$ in the cyclic
group state space $S(C_{p-1})$ can be defined by
\begin{eqnarray*}
U_{os}(\theta )|g^{x}\func{mod}p\rangle |a\rangle &=&\exp [-i\theta f(g^{x}
\func{mod}p)]|g^{x}\func{mod}p\rangle |a\rangle \\
&=&\left\{
\begin{array}{c}
\exp (-i\theta )|g^{s}\func{mod}p\rangle |a\rangle ,\text{ if }x=s \\
|g^{x}\func{mod}p\rangle |a\rangle ,\text{ if }x\neq s
\end{array}
\right.
\end{eqnarray*}
where the auxiliary state $|a\rangle =\frac{1}{\sqrt{2}}(|0\rangle
-|1\rangle ).$ According to this definition the oracle unitary operation $
U_{os}(\theta )$ is really equivalent to the selective rotation operation in
the cyclic group state space $S(C_{p-1}):$
\[
U_{os}(\theta )=\exp [-i\theta D_{s}(g)].
\]
Here, the quantum-state diagonal operator $D_{s}(g)$ [15] which is applied
to the cyclic group state space $S(C_{p-1})$ can be generally expressed in
terms of the cyclic group state,
\[
D_{s}(g)=|g^{s}\func{mod}p\rangle \langle g^{s}\func{mod}p|.
\]
Note that the diagonal operator $D_{s}(g)$ is different from the
conventional one $D_{s}=|s\rangle \langle s|$ in the Hilbert space of an $n-$
qubit quantum system. Actually, the diagonal operator $D_{s}(g)$ can also be
expressed in terms of the dynamical parameter vector $\{b_{k}^{s}\}$,
\[
D_{s}(g)=\stackrel{n}{\stackunder{k=1}{\bigotimes }}(\frac{1}{2}
E_{k}+b_{k}^{s}I_{kz}),
\]
but here the vector $\{b_{k}^{s}=\pm 1\}$ corresponds to the state $|g^{s}
\func{mod}p\rangle ,$ while the conventional vector $\{a_{k}^{s}=\pm 1\}$ is
assigned to the index state $|s\rangle $ and the state $D_{s}=|s\rangle
\langle s|.$ Through the quantum-state diagonal operator $D_{s}(g)$ one can
set up a one-to-one correspondence between the oracle unitary operation $
U_{os}(\theta )$ and the cyclic group state $|g^{s}\func{mod}p\rangle .$
This correspondence makes it possible to calculate explicitly the time
evolution of a quantum system under the action of the oracle unitary
operation $U_{os}(\theta )$ [15]$,$ and it also makes it convenient to
manipulate at will the time evolution of a quantum system by the oracle
unitary operation. If the auxiliary state is taken as $|a\rangle
=|0\rangle ,$ then the oracle unitary operation $U_{o}$ is simply defined as
\begin{eqnarray*}
U_{os}|g^{x}\func{mod}p\rangle |0\rangle &=&|g^{x}\func{mod}p\rangle |f(g^{x}
\func{mod}p)\rangle \\
&=&\left\{
\begin{array}{c}
|g^{s}\func{mod}p\rangle |1\rangle ,\text{ if }x=s \\
|g^{x}\func{mod}p\rangle |0\rangle ,\text{ if }x\neq s
\end{array}
\right. .
\end{eqnarray*}
The quantum search problem in the cyclic group state space is how to find
the marked state $|g^{s}\func{mod}p\rangle ,$ given the oracle unitary
operation $U_{os}.$ This differs from the quantum discrete logarithm
problem, which asks, given a positive integer $\varphi _{s}$ such that $
\varphi _{s}=g^{s}\func{mod}p,$ to compute the index $s;$ the quantum search
problem is instead equivalent to determining the index $s$ given only the
oracle unitary operation $U_{os}.$
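The phase-oracle form $U_{os}(\theta )=\exp [-i\theta D_{s}(g)]$ can be illustrated with a hypothetical NumPy sketch (not one of the paper's constructions): it builds $D_{s}(g)$ as a diagonal projector over the cyclic group basis $\{|g^{x}\func{mod}p\rangle \}$ and checks that $U_{os}(\theta )$ multiplies only the marked state by $e^{-i\theta }$; the values $p=11,$ $g=2,$ $s=3$ are toy choices.

```python
import numpy as np

p, g, s, theta = 11, 2, 3, np.pi
basis = [pow(g, x, p) for x in range(p - 1)]     # states |g^x mod p>, x in Z_{p-1}

# D_s(g) = |g^s mod p><g^s mod p| as a diagonal matrix in this basis
D = np.diag([1.0 if bx == pow(g, s, p) else 0.0 for bx in basis])
U = np.diag(np.exp(-1j * theta * np.diag(D)))    # U_os(theta) = exp(-i theta D_s)

psi = np.full(p - 1, 1 / np.sqrt(p - 1), dtype=complex)  # uniform superposition
phi = U @ psi
marked = basis.index(pow(g, s, p))
assert np.isclose(phi[marked], np.exp(-1j * theta) * psi[marked])
assert np.allclose(np.delete(phi, marked), np.delete(psi, marked))
```

Only the amplitude of the marked state acquires the phase $e^{-i\theta }$; all other amplitudes are untouched, which is exactly the selective-rotation behaviour described above.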
The conventional quantum search process usually is carried out in the $
2^{n}$-dimensional Hilbert space of a single $n$-qubit quantum system. If,
besides the given work register used for the search task, there are also
other auxiliary registers, then the oracle unitary operation $U_{o}$ can be
thought of as a non-selective oracle unitary operation with respect to any
states of those auxiliary registers, since apart from the auxiliary state $
|a\rangle ,$ which loads the functional values $f(g^{x}\func{mod}p)$, the
oracle unitary operation $U_{o}$ applies only to the work register and has
no effect on the states of the auxiliary registers. If the quantum search
process is carried out in a multi-register quantum system that contains a
work register and several auxiliary registers in addition to the auxiliary
state $|a\rangle ,$ where each register may consist of a single $n$-qubit
quantum subsystem, then in order that the search space still have the same
dimensional size as before, all the states of the auxiliary registers should
be set to a given state, e.g., the state $|\mathbf{R0}\rangle
=|00...0\rangle .$ This search space is really a small subspace of the whole
Hilbert space of the multi-register quantum system.
Corresponding to this search subspace the subspace-selective oracle unitary
operation $\overline{U}_{o}$ should be defined by
\[
\overline{U}_{os}(\theta )|\Psi \rangle |g^{x}\func{mod}p\rangle |a\rangle
=\left\{
\begin{array}{c}
\exp (-i\theta )|\mathbf{R0}\rangle |g^{s}\func{mod}p\rangle |a\rangle ,
\text{ } \\
\qquad \qquad \qquad \qquad \text{if }x=s\text{ and }|\Psi \rangle =|\mathbf{
R0}\rangle ; \\
|\mathbf{R0}\rangle |g^{x}\func{mod}p\rangle |a\rangle ,\text{ } \\
\qquad \qquad \qquad \quad \quad \text{if }x\neq s\text{ and }|\Psi \rangle
=|\mathbf{R0}\rangle ; \\
|\Psi \rangle |g^{x}\func{mod}p\rangle |a\rangle ,\text{ if }|\Psi \rangle
\neq |\mathbf{R0}\rangle ;
\end{array}
\right.
\]
where the states $|\Psi \rangle $ and $|\mathbf{R0}\rangle $ belong to the
auxiliary registers and the oracle unitary operation works in the cyclic
group state space $S(C_{p-1})$ of the work register. Here $|\mathbf{R0}
\rangle $ also denotes the auxiliary register library with the specific
state $|00...0\rangle $ (see the next sections). The subspace-selective
oracle unitary operation $\overline{U}_{os}(\theta )$ acts selectively on
the state $|\mathbf{R0}\rangle $ but has no effect on any other state $
|\Psi \rangle $ of the auxiliary registers. It is really equivalent to the
selective rotation operation in the Hilbert space of the multi-register
quantum system in which the work register is in the cyclic group state
space $S(C_{p-1})$,
\[
\overline{U}_{os}(\theta )=\exp [-i\theta \overline{D}_{s}(g)]
\]
with the diagonal operator $\overline{D}_{s}(g)=|\mathbf{R0}\rangle |g^{s}
\func{mod}p\rangle \langle g^{s}\func{mod}p|\langle \mathbf{R0}|.$ Why are
many auxiliary registers used here? This is mainly because the conventional
mathematical-logic gates usually need a large space to perform their
reversible operations, and these mathematical-logic gates have been used
extensively in constructing the quantum search algorithms in the Hilbert
space [16] and also in the cyclic group state space (see the next sections).
However, one must be careful, as there is a potential risk that the
auxiliary registers could greatly enlarge the search space of the quantum
search problem and, as a result, the quantum search process could be
degraded.
The effect of the oracle unitary operation $U_{o},$ which acts on only the
marked state, on the evolution of an $n$-qubit quantum system is so small
that it is hard to detect quantum mechanically when the qubit number $n$ is
large [2, 3, 6, 15]. This is why the quantum search problem is generally
hard to solve in a large Hilbert space. Any superposition state of the
Hilbert space with dimension $N=2^{n}$ can be converted partly into the
marked state under the action of the oracle unitary operation combined with
other known quantum operations, but for each application the conversion
efficiency into the marked state is proportional to $1/\sqrt{N}$ [2, 3, 6].
In order to achieve an observable amplitude for the marked state a standard
quantum search algorithm needs to call the oracle unitary operation $
\thicksim \sqrt{N}$ times, and thus the quantum search time to find the
marked state with a high probability ($\thicksim 1$) is proportional to the
square root $(\sqrt{N})$ of the dimensional size $(N)$ of the search space,
which here is the whole Hilbert space. This search time therefore increases
exponentially with the qubit number $n.$ Because of this low
amplitude-amplification efficiency, a standard quantum search algorithm
usually achieves only a quadratic speedup over its best known classical
counterparts, and it has also been shown that this quadratic speedup is
optimal and cannot be improved essentially [3, 6, 9, 13]. A number of
quantum search algorithms [2-13] have
been proposed to achieve this optimal efficiency (with respect to the
dimensional variable $N$) which include the standard Grover search algorithm
[2], the amplitude-amplification search algorithm [6], and the quantum
adiabatic search algorithm [4, 5]. All these search algorithms are based on
quantum-state tomography: a direct quantum measurement on the marked state
is necessary to output the information of the marked state, and hence these
algorithms require that the amplitude of the marked state first be amplified
by a suitable unitary sequence containing $\thicksim \sqrt{N}$ oracle
unitary operations, so that the probability of the marked state is high
enough $(\thicksim 1)$ for observation. In recent years a great effort has
been made to develop other types of quantum search algorithms [15, 16] in
order to break through this quadratic speedup limitation. These quantum
search algorithms are based on quantum dynamical principles. In these
quantum-dynamical search algorithms a direct measurement on the marked state
may not be necessary, so that a direct amplification of the amplitude of the
marked state can be avoided; instead, the quantum measurement that outputs
the computing results can be carried out on some other states that are
closely related to the marked state and carry its complete information, and
the computing results are then further used to obtain the complete
information of the marked state [15]. The basis behind the quantum-dynamical
search algorithms
is that $(i)$ any quantum state such as the unknown marked state in the
Hilbert space can be described completely in a parameterization form by a
set of dynamical parameters and the quantum searching for the marked state
therefore is reduced to determining the set of the dynamical parameters; $
(ii)$ through the set of dynamical parameters a one-to-one correspondence is
set up between the oracle unitary operation and the unknown marked state,
which makes it possible to manipulate at will the evolution of a quantum
system under the oracle unitary operation in the quantum search process.
This quantum search method may avoid a direct quantum measurement on the
marked state and hence it may not be necessary to achieve a sufficiently
high probability for the marked state to be observable. Since the oracle
unitary operation amplifies the amplitude of the marked state with a low
efficiency, and this efficiency is closely related to the dimensional size
of the search space of the search problem, that is, the larger the search
space, the lower the efficiency, one simple scheme to increase the
efficiency is to limit the search space of the search problem to a small
subspace of the Hilbert space [16]. This scheme is feasible only if the
marked state is in the
subspace. Hence to make the scheme feasible one may convert the marked state
from the whole Hilbert space to the subspace. It is well known that the
structure of the Hilbert space of a quantum system generally is closely
related to the symmetric property and structure of the quantum system.
Because there is a rotation symmetry in spin space in the $n-$qubit quantum
spin system according to the angular momentum theory in quantum mechanics,
the Hilbert space of the $n-$qubit spin system can be divided into $(n+1)$
state subspaces. Then it can be shown that the quantum search problem in the
Hilbert space of the $n-$qubit spin system can be efficiently reduced from
the whole Hilbert space to the largest of the $(n+1)$ state subspaces [16].
The conventional quantum search process therefore may be sped up, although
this speedup is limited and does not essentially change the computational
complexity of the search problem. The importance of the fact that the
symmetric properties and structures of quantum systems may be employed to
speed up quantum computational processes is that one may further use the
symmetric property and structure of a group to help solve the quantum search
problem. This is precisely the main purpose of the present paper: the
symmetric property and structure of a cyclic group are employed to help
solve the quantum search problem. \newline
\newline
{\large 2.3. The structural quantum search in the cyclic group state space}
The conventional notions of unstructured and structural search refer to the
search problems themselves [7, 8]. The structure of a quantum search problem
in a cyclic group state space has a different sense from the conventional
one: it refers to the symmetric structure of the cyclic group used to help
solve the unstructured quantum search problem in the Hilbert space of the
$n-$qubit quantum system. A cyclic group is one of the simplest groups. It
is an Abelian group, so any two of its elements commute with one another.
Its properties and structure have been studied in detail and
extensively [18, 20]. As shown in equation (1), a cyclic group can be
decomposed as a direct product of its factor cyclic subgroups because every
Abelian group can be decomposed as a direct product of cyclic groups [18].
The cyclic groups of prime order are the only Abelian simple groups; they
have no nontrivial proper subgroups. A cyclic group of non-prime order must
have at least one nontrivial proper subgroup. Here, saying that a cyclic
group is highly symmetric means that the cyclic group has many factor cyclic
subgroups. Thus, a highly symmetric cyclic group can be expressed as a
direct product of its factor cyclic subgroups, as shown in equation (1).
Whether the quantum search problem in the cyclic group state space is
unstructured or structural depends only on the symmetric structure of the
cyclic group, regardless of whether the quantum search problem itself is
unstructured or structural in the Hilbert space of the $n-$qubit quantum
system. If the quantum search is performed in the state space of a cyclic
group of prime order, then it is said to be an unstructured quantum search.
Generally, however, a quantum search is carried out in the cyclic group
state space of a highly symmetric cyclic group, so that the symmetric
property and structure of the cyclic group can be employed to help solve the
quantum search problem. Therefore, the quantum search proposed in this paper
is generally structural in the cyclic group state space.
Generally, the Hilbert space of the $n-$qubit quantum system does not have
specific group symmetric properties and structures, but a specific,
artificially constructed state subset of the Hilbert space, formed by
mapping all the group elements of a specific group such as a cyclic group
onto the Hilbert space, may have the symmetric property and structure of the
group. Quantum computation carried out on this state subset may then be
affected greatly by the group symmetric property and structure.
Consequently, although the quantum search problem in the Hilbert space of
the $n-$qubit quantum system is unstructured, it is inevitably affected by
the symmetric property and structure of the group if it can be reduced to,
and therefore solved in, the group state space. The effect of group
symmetric properties and structures could lead to a significant speedup for
some quantum computation processes. How the cyclic group symmetric property
and structure influence the speedup of the unstructured quantum search
process in the whole Hilbert space is an important research topic,
investigated in detail in this paper and in future work. \newline
\newline
{\large 3. The efficient state transformation between the additive and
multiplicative cyclic group state spaces}
In the quantum factoring problem and the discrete logarithmic problem a
number of reversible mathematical-logic operations such as the modular
addition, modular multiplication, and modular exponentiation unitary
operations have been used extensively [22, 23, 24, 25]. In the reversible
computation one basic principle to construct a reversible mathematical-logic
operation is that all the input states are also kept together with the
output states after the logic operation [26, 27]. A mathematical-logic
operation usually needs to use many auxiliary registers so that the
operation process can be made reversible. Both the classical irreversible
computation and the reversible computation usually are equivalent in
computational complexity in time and space [27, 28]. Therefore, the
classical irreversible computation generally can be efficiently simulated by
the reversible one. Reversible logic operations can be performed in a
quantum system as well, but they usually consume many more qubits than the
conventional quantum-mechanical unitary operators do. They may also
influence the unitary evolution process of a quantum system in a different
manner from the conventional unitary operators. This is because a reversible
logic operation usually acts only on certain specific states of the quantum
system, while the conventional unitary operators usually act on any state of
the quantum system. Care must therefore be taken when a reversible logic
operation is used to manipulate the quantum dynamical process of a quantum
system. A reversible mathematical-logic operation can be thought of as a
selective unitary operation of a quantum system, because there are usually a
number of quantum states of the system unaffected by the action of the logic
operation. Quantum physically there are many unitary evolution pathways in a
multi-qubit quantum system, but under reversible mathematical-logic
operations only a few of these pathways are allowed. This just shows that the mathematical
principles can make constraints on the unitary evolution process of a
quantum system. On the other hand, quantum computation is a physical process
or exactly a unitary evolution process quantum physically, as pointed out by
Deutsch [29, 38]. Therefore, the quantum computational process for a given
problem not only obeys the quantum physical laws but must also be compatible
with the mathematical principles used, and its computational complexity
depends not only on the quantum dynamical process but also on those
mathematical principles. One large advantage of using the reversible
mathematical-logic operations in solving some mathematical problems is that
one can easily trace the unitary evolution pathways of certain quantum
states in the Hilbert space of the quantum system under the action of the
logic operations.
The discrete logarithmic problem is an important problem in classical
public-key cryptography [21]. It can be stated as follows: given an integer $
b=a^{s}>0$, calculate the discrete logarithmic function $s=(\log
_{a}b)\func{mod}p$, which is also called the index of the discrete
logarithm, where the positive integer $a$ is a given logarithmic base and $p$
a known prime. In classical computation it is hard to calculate the
discrete logarithm of a large integer $b$. This is the basis for the
classical public-key cryptographic systems based on the discrete logarithm
[21]. It has been shown [22, 25, 30] that the discrete logarithmic problem
can be solved in polynomial time in quantum computation. Shor first gave an
efficient quantum algorithm to calculate the index of the discrete logarithm
[22]. Later this quantum algorithm was improved into a deterministic form
[30a] with the help of the amplitude amplification method [6]. Here, with
the help of these quantum algorithms [6, 22, 25, 30a] an efficient unitary
sequence is constructed to generate the index state $|s\rangle $ of the
discrete logarithm from the modular exponentiation state $|g^{s}\func{mod}
p\rangle $. By this efficient unitary sequence any quantum state $|g^{s}
\func{mod}p\rangle $ of the multiplicative cyclic group state space $
S(C_{p-1})$ can be efficiently converted into the corresponding index state $
|s\rangle $ of the additive cyclic group state space $S(Z_{p-1})$. In
constructing this efficient unitary sequence many efficient
mathematical-logic operations are employed extensively, such as the modular
exponentiation operation, the modular multiplication operation, and the
quantum Fourier transform, and some knowledge of number theory is also
necessarily used. Because the index $s$ and the
modular exponential function $f(s)=g^{s}\func{mod}p$ have a one-to-one
correspondence, there exists a unitary operator $U_{\log }(g)$ such that $
U_{\log }(g)|g^{s}\func{mod}p\rangle =|s\rangle $ and $U_{\log
}^{+}(g)|s\rangle =|g^{s}\func{mod}p\rangle ,$ here $g$ is the logarithmic
base and also a primitive root ($\func{mod}p$) or a generator of the
multiplicative cyclic group $C_{p-1}$. Note that no extra auxiliary register
is used by the unitary operator $U_{\log }(g)$ here. Generally, such a
discrete logarithmic unitary operator $U_{\log }(g)$ is hard to construct.
However, with the help of the reversible computational techniques [22, 23,
24, 26, 27] an alternative construction of the discrete logarithmic unitary
operator $U_{\log }(g)$ can be achieved conveniently by using many extra
auxiliary registers. The construction can
be divided into two steps [23, 24, 27]. One step is to construct, using two
registers, the modular exponentiation unitary operation $V_{f}|s\rangle
|0\rangle =|s\rangle |g^{s}\func{mod}p\rangle .$ It is well known that the
modular exponentiation unitary operation can be efficiently built up in
reversible computation [21, 22, 23, 24]. The other is to construct the
unitary operation of the inversion function of the modular exponential
function, $V_{f^{-1}}|g^{s}\func{mod}p\rangle |0\rangle =|g^{s}\func{mod}
p\rangle |s\rangle ,$ again using two registers$.$ Then the discrete
logarithmic unitary operation $U_{\log }(g)$ may be expressed equivalently
by $U_{\log }(g)=V_{f}^{+}SV_{f^{-1}},$ where the SWAP unitary operation $S$
is defined by $S|s\rangle |g^{s}\func{mod}p\rangle =|g^{s}\func{mod}p\rangle
|s\rangle .$ This is due to the fact that there holds $U_{\log }(g)|g^{s}
\func{mod}p\rangle |0\rangle =V_{f}^{+}SV_{f^{-1}}|g^{s}\func{mod}p\rangle
|0\rangle =|s\rangle |0\rangle ,$ which further indicates that by omitting
the auxiliary register with the state $|0\rangle $ the unitary operation
sequence $V_{f}^{+}SV_{f^{-1}}$ is really the discrete logarithmic unitary
operation $U_{\log }(g).$ In effect the unitary operation sequence $
V_{f}^{+}SV_{f^{-1}}$ is equivalent to the unitary operator $U_{\log }(g)$
of the discrete logarithm, but care must be taken when the unitary operation
sequence is performed in a quantum system, since the sequence requires the
auxiliary registers of the quantum system to be in the specific state $
|0\rangle $ before and after the operation, while the quantum system may be
in any state. Though the modular exponentiation unitary operation $
V_{f}$ can be built up efficiently [22, 23, 24], it is generally hard to
build up the unitary operation $V_{f^{-1}}$ of the inversion function of the
modular exponential function. It is this unitary operation $V_{f^{-1}}$ that
makes it hard to construct the discrete logarithmic unitary operation $
U_{\log }(g)$. A functional unitary operation may exist while its
inversion-functional unitary operation may or may not, which usually depends
on the mathematical properties of the function. Obviously, some mathematical
functions have unique inversion functions, while others do not in some given
function or variable value ranges. If a function does not have a unique
inversion function in a given value range, then a unitary operation for the
inversion function usually cannot exist uniquely in that range, although the
function itself may have its own unitary operation. When both a function and
its inversion function exist in a given value range, they usually have their
own unitary operations, respectively, and sometimes these unitary operations
are the same up to conjugation. In general, however, a functional unitary
operation and its inversion-functional unitary operation can be completely
different. As the
modular exponential function $f(s)=g^{s}\func{mod}p$ and its index variable $
s$ have a one-to-one correspondence, this makes the modular exponential
function $f(s)$ and its inversion function, i.e., the discrete logarithmic
function or the index variable $s$, have their own unitary operations.
Because the unitary operation $V_{f}$ can be built up efficiently the
discrete logarithmic unitary sequence $U_{\log }(g)=V_{f}^{+}SV_{f^{-1}}$ is
mainly dependent on the unitary operation $V_{f^{-1}}$ in computational
complexity. The following is devoted to the construction of the efficient
unitary operation $V_{f^{-1}}$ of the inversion function of the modular
exponential function. Before the unitary operation $V_{f^{-1}}$ is built up,
several conventional reversible mathematical-logic operations are introduced.
$(i)$ The modular addition operation $ADD_{L}(\alpha ,\beta )$. The modular
addition operation is defined as
\[
ADD_{L}(1,2)|x\rangle |y\rangle =|x\rangle |x+y\func{mod}L\rangle ,\text{ }
x,y\in Z_{L}.
\]
Here the integer set $Z_{L}=\{0,1,...,L-1\}.$ The indices $\alpha $ and $
\beta $ denote the registers that are acted on by the modular addition
operation $ADD_{L}(\alpha ,\beta )$. The modular addition operation $
ADD_{L}(1,2)$ is performed by adding the integer $x$ of the first register
to the second register and taking modulus $L$. It can be implemented in
polynomial time $\thicksim O(\log L)$ [22]. The modular addition operation $
ADD_{L}(1,2)$ is a reversible operation since the integer $y$ can be derived
uniquely from $x$ and $(x+y)\func{mod}L$ if $0\leq x,y\leq L-1.$ As a
specific modular addition unitary operation the COPY unitary operation $
COPY(\alpha ,\beta )$ is defined as
\[
COPY(1,2)|x\rangle |0\rangle =|x\rangle |x\rangle ,\text{ }x\in Z_{L}.
\]
The inverse COPY unitary operation $[COPY(\alpha ,\beta )]^{+}$ is really
the subtraction unitary operation: $[COPY(1,2)]^{+}|x\rangle |x\rangle
=|x\rangle |0\rangle ,$ $x\in Z_{L}.$
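As an illustration, the classical action of $ADD_{L}$ and $COPY$ on register labels can be sketched in a few lines; each map is a bijection on $Z_{L}\times Z_{L}$ and so lifts to a permutation, hence a unitary, on basis states. This is a minimal sketch only; the helper names (`add_mod`, `copy_op`) and the demo modulus are assumptions, not notation from [22].

```python
# Classical sketch of the reversible modular addition ADD_L(1,2) and COPY:
# registers hold labels x, y in Z_L; each map below is a bijection and so
# corresponds to a permutation (unitary) on the computational basis states.

def add_mod(L, x, y):
    """ADD_L(1,2)|x>|y> -> |x>|(x + y) mod L>, for x, y in Z_L."""
    return (x, (x + y) % L)

def add_mod_inv(L, x, z):
    """Inverse operation: recover y uniquely from x and z = (x + y) mod L."""
    return (x, (z - x) % L)

def copy_op(L, x, y):
    """COPY(1,2)|x>|0> -> |x>|x>; defined only on a |0> target register."""
    assert y == 0
    return add_mod(L, x, 0)  # COPY is ADD_L applied to a |0> target

# Reversibility: add_mod followed by add_mod_inv restores (x, y) for all inputs.
L = 7
for x in range(L):
    for y in range(L):
        assert add_mod_inv(L, *add_mod(L, x, y)) == (x, y)
```

The inverse COPY operation is then the subtraction map $(x,x)\mapsto (x,0)$, matching the definition above.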
$(ii)$ The modular multiplication unitary operation $M_{L}(\alpha ,\beta
,\gamma )$ is defined as
\[
M_{L}(1,2,3)|x\rangle |y\rangle |0\rangle =|x\rangle |y\rangle |xy\func{mod}
L\rangle ,\text{ }x,y\in Z_{N}
\]
where $x,y$ are integer variables, the integer $L$ is modulus and the
integer $N$ may be different from $L$. The indices $\alpha $ and $\beta $
denote the two registers whose integer variables $x$ and $y$ are multiplied
to one another and the index $\gamma $ marks the third register that loads
the multiplication operation result. As an example, the modular
multiplication unitary operation is applied on the cyclic group state:
\begin{eqnarray*}
&&M_{p}(1,2,3)|g^{x}\func{mod}p\rangle |g^{y}\func{mod}p\rangle |0\rangle \\
&=&|g^{x}\func{mod}p\rangle |g^{y}\func{mod}p\rangle |g^{x+y}\func{mod}
p\rangle ,\text{ }x,y\in Z_{p-1}.
\end{eqnarray*}
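A small numeric check of this example, with assumed demo values $p=11$ and primitive root $g=2$ (chosen only for illustration): multiplying two cyclic group states modulo $p$ adds their exponents.

```python
# Classical sketch of M_L(1,2,3) acting on cyclic group states:
# the third register receives the product x*y mod L.

def mul_mod(L, x, y):
    """M_L(1,2,3)|x>|y>|0> -> |x>|y>|x*y mod L>."""
    return (x, y, (x * y) % L)

p, g = 11, 2          # assumption: 2 generates the multiplicative group mod 11
x, y = 3, 5
a, b = pow(g, x, p), pow(g, y, p)
# On |g^x mod p>|g^y mod p>|0>, the result register holds g^{x+y} mod p.
result = mul_mod(p, a, b)
assert result[2] == pow(g, x + y, p)
```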
$(iii)$ The modular exponentiation unitary operation. First consider the
modular multiplication operation $U_{a,N}(\alpha )$ which is defined as
\[
U_{a,N}(1)|x\rangle =|xa\func{mod}N\rangle ,\text{ }(a,N)=1,\text{ }x\in
Z_{N}.
\]
This operation is unitary only when the integer $a$ is coprime to the
integer $N$ [17, 31]$,$ i.e., $(a,N)=1$. In principle this unitary operation
needs no additional auxiliary register, but when it is constructed from the
mathematical-logic operations it still needs many extra auxiliary registers.
The index $\alpha $ denotes the register acted on
by the operation $U_{a,N}(\alpha )$. Generally, the modular exponentiation
operation may be taken as $[U_{a,N}(\alpha )]^{l}$ for any positive integer $
l.$ The conditional modular exponentiation operation $U_{a,L}^{c}(\alpha
,\beta )$ may be defined with the help of the modular multiplication
operation $U_{a,L}(\beta ):$
\begin{eqnarray*}
U_{a,L}^{c}(1,2)|x\rangle |y\rangle &=&|x\rangle [U_{a,L}(2)]^{x}|y\rangle \\
&=&|x\rangle |ya^{x}\func{mod}L\rangle ,\text{ }x\in Z_{N},\text{ }y\in
Z_{L}.
\end{eqnarray*}
This conditional modular exponentiation operation is unitary only if the
integer $a$ is coprime to the integer $L$. However, using one more auxiliary
register a general conditional modular exponentiation operation, which is
unitary even if the integer $a$ is not coprime to the integer $L,$ may be
constructed by
\[
U_{a,L}^{c}(1,2,3)|x\rangle |y\rangle |0\rangle =|x\rangle |y\rangle |ya^{x}
\func{mod}L\rangle ,x,y\in Z_{N}.
\]
In particular, the two-variable conditional modular exponentiation operation
$U_{f}=U_{a,b,L}^{c}(1,2,3)$ has been used extensively in the discrete
logarithmic problem [22, 25, 30]: $U_{a,b,L}^{c}(1,2,3)|x\rangle |y\rangle
|0\rangle =|x\rangle |y\rangle |b^{x}a^{y}\func{mod}L\rangle ,$ $x,y\in
Z_{N},$ where $a$ and $b$ are constant integers and usually $N\geq L$. These
modular multiplication and modular exponentiation unitary operations may be
built up efficiently by the basic reversible logic operations [21-27] and
generally can be efficiently implemented in polynomial time $\thicksim
O(\log ^{2}N)$ and $\thicksim O(\log ^{3}N),$ respectively [22]. The qubit
number used to implement these modular exponentiation operations generally
is $\thicksim O(\log N)$ [21, 22, 23, 24].
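The conditional modular exponentiation operations can likewise be sketched classically (the function names and demo values below are illustrative assumptions). The reversibility condition $(a,L)=1$ appears here as the map $y\mapsto ya^{x}\func{mod}L$ being a permutation of $Z_{L}$ for every control value $x$.

```python
from math import gcd

def cond_modexp(a, L, x, y):
    """U^c_{a,L}(1,2)|x>|y> -> |x>|y a^x mod L>; unitary only when gcd(a, L) == 1."""
    return (x, (y * pow(a, x, L)) % L)

def cond_modexp_2var(a, b, L, x, y):
    """U^c_{a,b,L}(1,2,3)|x>|y>|0> -> |x>|y>|b^x a^y mod L> (result register)."""
    return (x, y, (pow(b, x, L) * pow(a, y, L)) % L)

# When gcd(a, L) == 1 the map y -> y * a^x mod L permutes Z_L, so the
# conditional operation is reversible for every control value x.
a, L, x = 3, 7, 2
assert gcd(a, L) == 1
images = {cond_modexp(a, L, x, y)[1] for y in range(L)}
assert images == set(range(L))
```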
Besides the conventional mathematical-logic unitary operations introduced
above, many important unitary operators, unitary operations, elementary
propagators, or quantum gates defined mathematically or quantum physically
can also be employed in the construction of a unitary sequence. A large
advantage of this type of unitary operation is that such operations usually
are non-selective unitary operators and hence need no auxiliary qubits.
However, artificial conditional unitary operations, which can also be
thought of as selective unitary operations, may need a few auxiliary qubits
to help achieve the specific conditional operations.
$(iv)$ The $SWAP$ unitary operation and other elementary quantum gates [32].
The $SWAP(\alpha ,\beta )$ unitary operation is defined as
\[
SWAP(1,2)|x\rangle |y\rangle =|y\rangle |x\rangle ,x,y\in Z_{N}.
\]
$(v)$ The quantum Fourier transforms in the Hilbert space. The conventional
quantum Fourier transform [22, 33, 34] usually is defined in the regular
Hilbert space $\{|Z_{N}\rangle \},$
\begin{equation}
|l\rangle \stackrel{Q_{NFT}}{\rightarrow }\frac{1}{\sqrt{N}}\stackrel{N-1}{
\stackunder{k=0}{\sum }}\exp [i2\pi kl/N]|k\rangle ,\text{ }k,l\in Z_{N}.
\label{5}
\end{equation}
For the integer $N=2^{n}$ the quantum circuit $Q_{NFT}$ for the quantum
Fourier transform is very simple and consists of $\thicksim O(n^{2})$ basic
quantum gates. Note that there is not any auxiliary qubit in construction of
the quantum circuit $Q_{2^{n}FT}$. For the case that the integer $N$ is not
a power of two the quantum circuit $Q_{NFT}$ also can be constructed with $
\thicksim O(\log ^{2}N)$ basic quantum gates or even less [30, 31, 34, 35],
but many auxiliary qubits are needed in the construction of the quantum
circuit.
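Equation (5) defines an $N\times N$ unitary matrix for any base $N$, power of two or not; the following is a direct numerical sketch (the helper names are illustrative only).

```python
import cmath

def qft_matrix(N):
    """The N-dimensional Q_NFT of equation (5):
    |l> -> (1/sqrt N) sum_k exp(i 2 pi k l / N) |k>."""
    s = N ** -0.5
    return [[s * cmath.exp(2j * cmath.pi * k * l / N)
             for l in range(N)] for k in range(N)]

def is_unitary(M, tol=1e-9):
    """Check M^dagger M = I by forming all column inner products."""
    N = len(M)
    return all(abs(sum(M[k][i].conjugate() * M[k][j] for k in range(N))
                   - (1.0 if i == j else 0.0)) < tol
               for i in range(N) for j in range(N))

# Unitarity holds for any base N.
assert is_unitary(qft_matrix(8))
assert is_unitary(qft_matrix(10))
```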
$(vi)$ The functional quantum Fourier transform. The functional quantum
Fourier transform is really the quantum Fourier transform applied to a
non-regular state subspace of the Hilbert space. Because the functional
quantum Fourier transform is closely related to the unitary operation of the
inversion function of a function, it generally cannot be constructed
efficiently for an arbitrary function. Suppose that the function $f(x)$ is a
periodic function: $f(x)=f(x+r),$ where $r$ is the period of the function. Then the
functional quantum Fourier transform $Q_{rft}$ for the periodic function $
f(x)$ may be defined as [36]
\begin{equation}
Q_{rft}|f(l)\rangle =\frac{1}{\sqrt{r}}\stackrel{r-1}{\stackunder{k=0}{\sum }
}\exp [i2\pi kl/r]|f(k)\rangle ,\text{ }k,l\in Z_{r}. \label{6}
\end{equation}
It can be shown that the functional quantum Fourier transform $Q_{rft}$ can
be constructed efficiently if both the unitary operations for the periodic
function $f(x)$ and its inversion function $f(x)^{-1}$ in the variable value
range $Z_{r}$ can be built up efficiently. Suppose that the functional and
its inversion-functional unitary operations are defined by $V_{f}|x\rangle
|0\rangle =|x\rangle |f(x)\rangle $ and $V_{f^{-1}}|f(x)\rangle |0\rangle
=|f(x)\rangle |x\rangle $ for $x\in Z_{r},$ respectively. Then the unitary
sequence for the invertible periodic function $f(x)$ is $
U_{f}=V_{f^{-1}}^{+}SV_{f},$ which satisfies $U_{f}|x\rangle =|f(x)\rangle $
for $x\in Z_{r},$ where any auxiliary qubits are dropped and $S$ is the SWAP
operation. Using the invertible-function unitary sequence $U_{f}$ the
functional quantum Fourier transform $Q_{rft}$ is related to the
conventional $r-$base quantum Fourier transform $Q_{rFT}$ by
\[
Q_{rft}=U_{f}Q_{rFT}U_{f}^{+}.
\]
Thus, the quantum circuit for the functional quantum Fourier transform $
Q_{rft}$ can be efficiently constructed if there is an efficient quantum
circuit for the unitary operation $U_{f}$ of the invertible function $f(x).$
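The conjugation relation $Q_{rft}=U_{f}Q_{rFT}U_{f}^{+}$ can be verified numerically on a small example. The function $f$ below is a hypothetical invertible map on $Z_{r}$ chosen purely for illustration, with $U_{f}$ represented as a permutation matrix on $r$ basis labels.

```python
import cmath

def qft(r):
    """r-base quantum Fourier transform Q_rFT as an r x r matrix."""
    s = r ** -0.5
    return [[s * cmath.exp(2j * cmath.pi * k * l / r) for l in range(r)] for k in range(r)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def perm_matrix(f, r):
    """U_f with U_f|x> = |f(x)>: column x has its single 1 in row f(x)."""
    return [[1.0 if f(x) == row else 0.0 for x in range(r)] for row in range(r)]

r = 6
f = lambda x: (5 * x + 2) % r      # hypothetical invertible function on Z_r
U = perm_matrix(f, r)
Ud = [[U[j][i] for j in range(r)] for i in range(r)]   # dagger = transpose (real entries)
Q_rft = matmul(matmul(U, qft(r)), Ud)

# Matrix-element check of equation (6):
# <f(k)| Q_rft |f(l)> = exp(i 2 pi k l / r) / sqrt(r)
for k in range(r):
    for l in range(r):
        assert abs(Q_rft[f(k)][f(l)] - qft(r)[k][l]) < 1e-9
```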
$(vii)$ The group operations of a cyclic group. A cyclic group $G$ can be
generated by a generator $g,$ $G=\langle g\rangle
=\{E,g,g^{2},...,g^{n_{r}-1}\}.$ If the generator $g$ is a unitary operator
which is denoted as $U_{g}$ here, then all the group elements of the cyclic
group $G$ are also unitary operators. When the unitary cyclic group
operation $U_{g}$ is applied to a cyclic group state the unitary
transformation is given by
\[
U_{g}|g^{x}\func{mod}p\rangle =|g^{x+1}\func{mod}p\rangle .
\]
The unitary operation of the cyclic group may be built up efficiently with
the help of the diagonal and anti-diagonal unitary operators [16]. Actually,
just like the modular multiplication unitary operation $U_{a,L}(\alpha )$
the cyclic group operation $U_{g}$ could also be constructed efficiently by
using the basic reversible logic operations [26, 27], but this construction
needs many extra auxiliary qubits. The cyclic group operation $U_{g}$ can
also be performed in a conditional form
\[
U_{g}^{c}|a\rangle |g^{x}\func{mod}p\rangle =|a\rangle |g^{x+a}\func{mod}
p\rangle .
\]
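Both forms of the cyclic group operation act on basis labels by modular multiplication; a classical sketch with assumed demo values $p=11$, $g=2$ (a primitive root mod 11) follows, confirming that $U_{g}$ steps through the whole group $C_{p-1}$.

```python
p, g = 11, 2   # assumed demo values: p prime, g a primitive root mod p

def U_g(state):
    """U_g|g^x mod p> -> |g^{x+1} mod p>: multiply the basis label by g mod p."""
    return (state * g) % p

def U_g_cond(a, state):
    """Conditional form U_g^c|a>|g^x mod p> -> |a>|g^{x+a} mod p>."""
    return (a, (state * pow(g, a, p)) % p)

# U_g visits all p - 1 group states and returns to the start after p - 1 steps.
state, orbit = 1, set()
for _ in range(p - 1):
    orbit.add(state)
    state = U_g(state)
assert state == 1 and len(orbit) == p - 1
```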
With the help of these efficient unitary operations mentioned above an
efficient unitary sequence will be built up below, by which the index state $
|s\rangle $ of the discrete logarithm can be generated from the modular
exponentiation state $|g^{s}\func{mod}p\rangle $.
The oracle unitary operation in the discrete logarithmic problem is the
usual conditional modular exponentiation operation $U_{f}=U_{b,g,p}^{c}(
\alpha ,\beta ,\gamma ):$
\[
U_{f}|x\rangle |y\rangle |b\rangle |g\rangle |0\rangle =|x\rangle |y\rangle
|b\rangle |g\rangle |f(x,y)\rangle ,\text{ }x,y\in Z_{N}.
\]
The double-variable modular exponential function $f(x,y)$ is defined by
\[
f(x,y)=b^{x}g^{y}\func{mod}p
\]
where the integer $b=g^{s}\func{mod}p>0$ with the index $s\in Z_{p-1}$.
Fermat's little theorem (Theorem 71 in Ref. [19]) shows that $a^{p-1}\equiv
1\func{mod}p$ holds for a prime $p$ and any integer $a$ not divisible by the
prime $p.$ In particular, for the integer $a=g,$ $b,$ or even
$g^{z}\func{mod}p$ with $z=sx+y$ for any integers $x$ and $y$ there also
holds $a^{p-1}\equiv 1\func{mod}p$ since $g$ is a primitive root ($\func{mod}
p$). Thus, the modular exponential function $f(x,y)$ is a periodic function
with period $p-1$ by Fermat's little theorem. Since the periodic
function $f(x,y)$ satisfies $f(x,y)\equiv f_{1}(sx+y)=g^{sx+y}\func{mod}p,$ $
f_{1}(z)=f_{1}(z+(p-1))$ and also $f(x,y)=f(x+l,y-ls)$ for any integer $l$
[25] the Fourier transform of the functional state $|f(x,y)\rangle $
therefore takes the form
\begin{eqnarray}
|\widetilde{f}(l_{1},l_{2})\rangle &=&|\widetilde{f}(l_{2}s\func{mod}
(p-1),l_{2})\rangle \delta ((l_{2}s-l_{1})\func{mod}(p-1)) \nonumber \\
&=&\delta ((l_{2}s-l_{1})\func{mod}(p-1)) \nonumber \\
&&\times \frac{1}{p-1}\stackrel{p-2}{\stackunder{x=0}{\sum }}\stackrel{p-2}{
\stackunder{y=0}{\sum }}\exp [i2\pi l_{2}(sx+y)/(p-1)]|f(x,y)\rangle .
\label{7}
\end{eqnarray}
The indices $l_{1}$ and $l_{2}$ in the Fourier transform state $|\widetilde{f
}(l_{1},l_{2})\rangle $ must satisfy the relation $(l_{2}s-l_{1})=0\func{mod}
(p-1)$ for $l_{1},l_{2}=0,1,...,p-2$ due to the fact that $
f(x,y)=f(x+l,y-ls) $. In terms of the Fourier transform states (7) the
functional state $|f(x,y)\rangle $ is expressed as
\begin{equation}
|f(x,y)\rangle =\frac{1}{p-1}\stackrel{p-2}{\stackunder{l=0}{\sum }}\exp
[-i2\pi l(sx+y)/(p-1)]|\widetilde{f}(ls,l)\rangle . \label{8}
\end{equation}
If one views the function $f(x,y)$ as the single-variable periodic function $
f_{1}(z)=g^{z}\func{mod}p$ with the variable $z=sx+y=0,1,...,p-2$, $
f_{1}(z)=f_{1}(z+p-1),$ then one can express the functional state $
|f(x,y)\rangle =|f_{1}(z)\rangle $ in terms of its Fourier transform states $
\{|\widetilde{f}_{1}(l)\rangle \},$
\[
|f_{1}(z)\rangle =\frac{1}{\sqrt{p-1}}\stackrel{p-2}{\stackunder{l=0}{\sum }}
\exp [-i2\pi lz/(p-1)]|\widetilde{f}_{1}(l)\rangle .
\]
By comparing this with equation (8) one sees that the state identity $|
\widetilde{f}(ls,l)\rangle /\sqrt{p-1}=|\widetilde{f}_{1}(l)\rangle $ holds
for $l=0,1,...,p-2$ and that equation (7) is indeed the Fourier transform of
the functional state $|f_{1}(z)\rangle $ (its explanation can be seen later).
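The periodicity and the invariance $f(x,y)=f(x+l,y-ls)$ used above can be checked directly; the values $p=11$, $g=2$ and the index $s=7$ below are assumed for the demo only.

```python
p, g = 11, 2       # assumed demo values: p prime, g a primitive root mod p
s = 7              # hypothetical index
b = pow(g, s, p)
r = p - 1

def f(x, y):
    """f(x, y) = b^x g^y mod p; exponents are reduced mod p-1 (Fermat's
    little theorem), which also handles the negative shifts y - l*s."""
    return (pow(b, x % r, p) * pow(g, y % r, p)) % p

for x in range(r):
    for y in range(r):
        # f collapses to the single-variable f1(z) = g^z mod p with z = s*x + y
        assert f(x, y) == pow(g, (s * x + y) % r, p)
        # and is invariant under (x, y) -> (x + l, y - l*s)
        for l in (1, 3):
            assert f(x, y) == f(x + l, y - l * s)
```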
The functional Fourier transform states (7) and the functional states (8)
will be used below in building up the unitary operation $V_{f^{-1}}$ of the
inversion function of the modular exponential function. There are many
auxiliary registers to be used in the construction of the unitary operation $
V_{f^{-1}}$. The starting state in the construction may be taken as $|\Psi
_{0}\rangle =|\mathbf{R0}\rangle \bigotimes |g^{s}\func{mod}p\rangle .$ Here
$|\mathbf{R0}\rangle =|0\rangle |0\rangle ...|0\rangle $ stands for the
library of auxiliary registers, each in the initial state $|0\rangle $; the
register library is assumed to store sufficiently many registers to supply
the coming quantum computation. The starting state is first
converted into the superposition by applying the conventional $(p-1)-$base
quantum Fourier transforms $Q_{(p-1)FT}$ to the first two registers,
respectively, which are supplied by the register library $|\mathbf{R0}
\rangle $. Then the oracle unitary operation $U_{f}$ of the discrete
logarithm is applied to the first three registers, here the oracle unitary
operation $U_{f}$ uses the data $g$ and $b=g^{s}\func{mod}p.$ After the
oracle unitary operation $U_{f}$ the state of the quantum system is in the
state $|\Psi _{1}\rangle ,$
\begin{eqnarray*}
|\Psi _{0}\rangle &=&|\mathbf{R0}\rangle \bigotimes |g^{s}\func{mod}p\rangle
\equiv |\mathbf{R0}\rangle \bigotimes |0\rangle |0\rangle |g^{s}\func{mod}
p\rangle \\
&&\stackrel{Q_{(p-1)FT}}{\rightarrow }|\mathbf{R0}\rangle \bigotimes \frac{1
}{p-1}\stackrel{p-2}{\stackunder{x=0}{\sum }}\stackrel{p-2}{\stackunder{y=0}{
\sum }}|x\rangle |y\rangle |g^{s}\func{mod}p\rangle \\
&&\stackrel{U_{f}}{\rightarrow }|\Psi _{1}\rangle =|\mathbf{R0}\rangle
\bigotimes \frac{1}{p-1}\stackrel{p-2}{\stackunder{x=0}{\sum }}\stackrel{p-2
}{\stackunder{y=0}{\sum }}|x\rangle |y\rangle |f(x,y)\rangle |g^{s}\func{mod}
p\rangle .
\end{eqnarray*}
The oracle unitary operation $U_{f}$ of the discrete logarithm is performed
in the conventional manner: the integers $g$ and $b=g^{s}\func{mod}p$ are
first stored in auxiliary registers; the quantum computer reads the integers
$g$ and $b$ and the values of the variables $x$ and $y$ in the first two
registers, then performs the functional operation $f(x,y)=b^{x}g^{y}\func{mod}
p$ and puts the computing result in the third register, which is provided by
the register library $|\mathbf{R0}\rangle $. Note that the datum $b$ is
already in the third register before the oracle unitary operation $U_{f}$
and in the fourth register after it, while the known datum $g$ can be stored
in a temporary register beforehand and removed from that register after the
operation $U_{f}$. Using the
functional Fourier transform states (7) to express the functional state $
|f(x,y)\rangle $ one obtains, by inserting equation (8) into the state $
|\Psi _{1}\rangle ,$
\begin{eqnarray*}
|\Psi _{1}\rangle &=&|\mathbf{R0}\rangle \bigotimes \frac{1}{p-1}\stackrel{
p-2}{\stackunder{l=0}{\sum }}\{[\frac{1}{\sqrt{p-1}}\stackrel{p-2}{
\stackunder{x=0}{\sum }}\exp [-i2\pi lsx/(p-1)]|x\rangle ] \\
&&\bigotimes [\frac{1}{\sqrt{p-1}}\stackrel{p-2}{\stackunder{y=0}{\sum }}
\exp [-i2\pi ly/(p-1)]|y\rangle ]|\widetilde{f}(ls,l)\rangle |g^{s}\func{mod}
p\rangle \}.
\end{eqnarray*}
Now the conventional $(p-1)-$base quantum Fourier transforms $Q_{(p-1)FT}$
are applied again to the first two registers in the state $|\Psi _{1}\rangle
,$ respectively, then the quantum system is in the created state $|\Psi
_{2}\rangle $ after the SWAP unitary operation,
\begin{eqnarray*}
&&|\Psi _{1}\rangle \stackrel{Q_{(p-1)FT}}{\rightarrow }|\mathbf{R0}\rangle
\bigotimes \frac{1}{p-1}\stackrel{p-2}{\stackunder{l=0}{\sum }}|ls\func{mod}
(p-1)\rangle |l\rangle |\widetilde{f}(ls,l)\rangle |g^{s}\func{mod}p\rangle
\\
&&\stackrel{SWAP}{\rightarrow }|\Psi _{2}\rangle =|\mathbf{R0}\rangle
\bigotimes \frac{1}{p-1}\stackrel{p-2}{\stackunder{l=0}{\sum }}|l\rangle |ls
\func{mod}(p-1)\rangle |\widetilde{f}(ls,l)\rangle |g^{s}\func{mod}p\rangle .
\end{eqnarray*}
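The second register of $|\Psi _{2}\rangle $ holds $ls\func{mod}(p-1)$; classically, $s$ can be recovered from this value exactly when $l$ is invertible modulo $p-1$, and the fraction of such $l$ is $\phi (p-1)/(p-1)$. A numerical sketch with assumed demo values (hypothetical index $s$; `pow(l, -1, r)` computes the modular inverse, Python 3.8+):

```python
from math import gcd

p = 11
r = p - 1
s = 7              # hypothetical hidden index

# For every l coprime to r the value l*s mod r determines s uniquely via
# multiplication by the modular inverse l^{-1} mod r.
good = [l for l in range(r) if gcd(l, r) == 1]
for l in good:
    v = (l * s) % r
    assert (pow(l, -1, r) * v) % r == s

# The fraction of branches with invertible l is phi(r)/r.
phi = sum(1 for k in range(1, r + 1) if gcd(k, r) == 1)
assert len(good) == phi
```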
The state $|\Psi _{2}\rangle $ contains the information of the index $s$ in
its last three registers. One expects to extract the index $s$ from the
second register, as the quantum states in the other two registers are more
complicated. Therefore, the problem to be solved is how to extract the index
$s$ from the state of the second register in the state $|\Psi _{2}\rangle $,
and this is related to the construction of the unitary transformation $
U_{s}: $
\[
|\Psi _{2}\rangle \stackrel{U_{s}}{\rightarrow }|\mathbf{R0}\rangle
\bigotimes \frac{C}{p-1}\stackrel{p-2}{\stackunder{l\geq 0}{\sum }}|l\rangle
|ls\func{mod}(p-1)\rangle |s\rangle |\widetilde{f}(ls,l)\rangle |g^{s}\func{
mod}p\rangle ,
\]
where the index $l$ runs over only some specific values in the range $0\leq
l<p-1$ and $C$ is a normalization constant (see below). In the unitary
transformation $U_{s}$ the desired state transfer $|l\rangle |ls\func{mod}
(p-1)\rangle |0\rangle \rightarrow |l\rangle |ls\func{mod}(p-1)\rangle
|s\rangle $ usually cannot be achieved by the conventional inverse
multiplication operation $M_{p-1}^{+}(\alpha ,\beta ,\gamma )$. This is
because the function $f(s)=ls\func{mod}(p-1)$ is not in one-to-one
correspondence with its variable $s$ for some integer values $l$ in the
range $0\leq l<p-1$. Indeed, it is possible that the inversion function $
f(s)^{-1}\neq s$ if the integer $l$ is not coprime to $p-1.$ However, the
inversion function $f(s)^{-1}=s$ if the integer $l$ is coprime to $p-1,$
i.e., $(l,p-1)=1,$ and this is one of the two bases for achieving this
unitary state transfer and obtaining the real index state $|s\rangle .$ It
can be seen that the state $|\Psi _{2}\rangle $ consists of $p-1$ orthogonal
states with index $l=0,1,...,p-2$. Among all these $(p-1)$ orthogonal
states, how many have an index integer $l$ coprime to $(p-1)$? The question
can be answered by number theory (see Theorem 72 in reference [19]). As is
known in number theory [19], the number of positive integers coprime to and
not greater than $p-1$ is $\phi (p-1),$ where $\phi (p-1)$ is the Euler
totient function, and it is also known that the Euler totient function
satisfies $\phi (p-1)>\delta (p-1)/\log \log (p-1)$ for some constant $
\delta .$ More exactly, if the integer $(p-1)$ has the prime factorization $
p-1=p_{1}^{a_{1}}p_{2}^{a_{2}}...p_{r}^{a_{r}},$ where $
p_{1},p_{2},...,p_{r}$ are distinct primes, then $\phi
(p-1)=(p-1)\prod_{l=1}^{r}(1-p_{l}^{-1}).$ This shows that among the $p-1$
orthogonal states of the state $|\Psi _{2}\rangle $ there are $\phi (p-1)$
orthogonal states whose index integer $l$ is coprime to $p-1$. Thus, the
total probability of all such orthogonal states in the state $|\Psi
_{2}\rangle $ is $\phi (p-1)/(p-1)>\delta /\log \log (p-1).$ This
probability is inversely proportional to $\log \log (p-1)$ and hence remains
high even when the prime $p$ is very large. This is the other basis for
obtaining the real index state $|s\rangle .$ If the index integer $l$ is
coprime to the integer $(p-1),$
there is a modular multiplication unitary operator $
U_{l^{-1}}=U_{l,(p-1)}^{+}$ such that $U_{l^{-1}}|ls\func{mod}(p-1)\rangle
=|s\func{mod}(p-1)\rangle .$ Indeed, the unitary operation $U_{l^{-1}}$ can
generate the real index state $|s\rangle $ from the state $|ls\func{mod}
(p-1)\rangle .$ But the unitary operation $U_{l^{-1}}$ depends on the
integer $l,$ so it is clear that for the case $l\neq l^{\prime }$ the
unitary operation $U_{l^{-1}}$ does not generate the index state $|s\rangle $
from the state $|l^{\prime }s\func{mod}(p-1)\rangle ,$ that is, $
U_{l^{-1}}|l^{\prime }s\func{mod}(p-1)\rangle \neq |s\func{mod}(p-1)\rangle $
if $l^{\prime }\neq l.$ Since all the index integers $l$ in the $(p-1)$
orthogonal states of the state $|\Psi _{2}\rangle $ are different it is
impossible to use a single unitary operation $U_{l^{-1}}$ to generate the
real index state $|s\rangle $ from these orthogonal states even if the index
integer $l$ for each of these states is coprime to $(p-1)$. In order to
generate the real index state $|s\rangle $ from the state $|\Psi _{2}\rangle
$ the unitary transformation $U_{s}$ should be independent of any index
integer $l$. The conventional Euclidean algorithm [19] could be used to
construct the unitary transformation $U_{s}$. Suppose that the greatest
common divisor for the two integers $l$ and $(p-1)$ is $d_{l}$, i.e., $
(l,p-1)=d_{l}$. The Euclidean algorithm can find efficiently two integers $
a_{l}$ and $b_{l}$ such that the greatest common divisor $
d_{l}=(l,p-1)=a_{l}l+b_{l}(p-1)$. Then $a_{l}l=d_{l}\func{mod}(p-1).$ If $
d_{l}=1$ then $a_{l}l=1\func{mod}(p-1)$ and hence $a_{l}$ is the inverse
element of the integer $l$ ($\func{mod}(p-1)$). Using the Euclidean
algorithm the following unitary transformations can be obtained,
\begin{eqnarray*}
&&|l\rangle |0\rangle |ls\func{mod}(p-1)\rangle |0\rangle \stackrel{GCD}{
\rightarrow }|l\rangle |a_{l}\rangle |ls\func{mod}(p-1)\rangle |0\rangle \\
&&\stackrel{M_{p-1}(2,3,4)}{\rightarrow }|l\rangle |a_{l}\rangle |ls\func{mod
}(p-1)\rangle |a_{l}ls\func{mod}(p-1)\rangle
\end{eqnarray*}
\begin{eqnarray*}
&=&|l\rangle |a_{l}\rangle |ls\func{mod}(p-1)\rangle |d_{l}s\func{mod}
(p-1)\rangle \\
&&\stackrel{(GCD)^{+}}{\rightarrow }|l\rangle |0\rangle |ls\func{mod}
(p-1)\rangle |d_{l}s\func{mod}(p-1)\rangle
\end{eqnarray*}
\[
=\left\{
\begin{array}{c}
|l\rangle |0\rangle |ls\func{mod}(p-1)\rangle |s\rangle ,\text{ if }d_{l}=1.
\\
|l\rangle |0\rangle |ls\func{mod}(p-1)\rangle |d_{l}s\func{mod}(p-1)\rangle ,
\text{ if }d_{l}>1.
\end{array}
\right.
\]
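The classical content of the $GCD$ step above is the extended Euclidean algorithm, which finds $a_{l}$ and $b_{l}$ with $a_{l}l+b_{l}(p-1)=d_{l}$. This can be sketched classically as follows (plain Python; the function name `extended_gcd` and the values $p=23$, $l=5$ are ours, chosen only for illustration):

```python
def extended_gcd(a, b):
    """Return (d, x, y) with d = gcd(a, b) and a*x + b*y == d."""
    if b == 0:
        return a, 1, 0
    d, x, y = extended_gcd(b, a % b)
    return d, y, x - (a // b) * y

p = 23                      # small illustrative prime, not from the text
l = 5
d_l, a_l, b_l = extended_gcd(l, p - 1)
assert a_l * l + b_l * (p - 1) == d_l
# When d_l = 1, a_l is the inverse element of l modulo p-1:
if d_l == 1:
    assert (a_l * l) % (p - 1) == 1
```

In the quantum setting this computation must of course be carried out reversibly and in parallel over the superposed values of $l$, as the text notes.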
Here the Euclidean algorithm $GCD$ must be performed in a quantum parallel
form. This unitary transformation could be used to build up efficiently the
unitary transformation $U_{s}$ as the classical Euclidean algorithm can be
implemented in polynomial time $\thicksim O(\log ^{3}p).$ A quantum-version
extended Euclidean algorithm was given in Ref. [30b]. Another algorithm that
may be used to build up the unitary transformation $U_{s}$ is based on the
Euler theorem in number theory [19]. The Euler theorem (the Theorem 72 in
Reference [19]) states that if $(a,m)=1$, then $a^{\phi (m)}=1\func{mod}m.$
Thus, there holds $l^{\phi (p-1)}=1\func{mod}(p-1)$ for any integer $l$
coprime to $(p-1)$, i.e., $(l,p-1)=1$. But if $(l,p-1)\neq 1,$ the identity $
l^{\phi (p-1)}=1\func{mod}(p-1)$ cannot hold, because $l$ is then not
invertible modulo $(p-1)$. Since the
computation for the modular exponentiation $l^{\phi (p-1)}\func{mod}(p-1)$
is simpler and efficient, it could be more convenient to use the modular
exponentiation operation to build up the unitary transformation $U_{s}$.
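Euler's theorem, and its failure for non-coprime $l$, can be checked directly (a plain-Python sketch; the modulus $n=p-1=22$ corresponds to an illustrative prime $p=23$ that is not from the text):

```python
from math import gcd

n = 22                                   # n = p - 1 for the illustrative prime p = 23
phi_n = sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)   # phi(22) = 10
for l in range(1, n):
    if gcd(l, n) == 1:
        assert pow(l, phi_n, n) == 1     # Euler: l^phi(n) = 1 mod n
    else:
        assert pow(l, phi_n, n) != 1     # fails when l is not invertible mod n
```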
When the state $|\Psi _{2}\rangle $ is acted on by the conditional modular
exponentiation unitary operation $U_{\phi (p-1)-1,p-1}^{c}$ it will be
converted into the state $|\Psi _{3}\rangle ,$
\begin{eqnarray*}
&&|\Psi _{2}\rangle \stackrel{U_{\phi (p-1)-1,p-1}^{c}}{\rightarrow }|\Psi
_{3}\rangle =|\mathbf{R0}\rangle \bigotimes \frac{1}{p-1}\stackrel{p-2}{
\stackunder{l=0}{\sum }}|l\rangle |ls\func{mod}(p-1)\rangle \\
&&\bigotimes |l^{\phi (p-1)}s\func{mod}(p-1)\rangle |\widetilde{f}
(ls,l)\rangle |g^{s}\func{mod}p\rangle
\end{eqnarray*}
where the modular exponential function $l^{\phi (p-1)-1}\func{mod}(p-1)$ is
first computed by the conditional modular exponentiation operation $U_{\phi
(p-1)-1,p-1}^{c}$ in a quantum parallel form, using the integer $l$ in the
first register, and is put in a temporary register. The function $l^{\phi
(p-1)-1}\func{mod}(p-1)$ and the function $ls\func{mod}(p-1)$ in the second
register are then multiplied together and the result is put in the third
register; after these operations the states in the temporary registers are
removed unitarily. The state $|\Psi _{3}\rangle $ is written
as $|\Psi _{3}\rangle =|\Psi _{3s}\rangle +|\Psi _{3s^{\prime }}\rangle $
and the two orthogonal states $|\Psi _{3s}\rangle $ and $|\Psi _{3s^{\prime
}}\rangle $ are given respectively by
\begin{eqnarray*}
|\Psi _{3s}\rangle &=&|\mathbf{R0}\rangle \bigotimes \frac{1}{p-1}\stackrel{
p-2}{\stackunder{(l,p-1)=1}{\sum }}|l\rangle |ls\func{mod}(p-1)\rangle \\
&&\bigotimes |s\rangle |\widetilde{f}(ls,l)\rangle |g^{s}\func{mod}p\rangle ,
\\
|\Psi _{3s^{\prime }}\rangle &=&|\mathbf{R0}\rangle \bigotimes \frac{1}{p-1}
\stackrel{p-2}{\stackunder{(l,p-1)>1}{\sum }}|l\rangle |ls\func{mod}
(p-1)\rangle \\
&&\bigotimes |s^{\prime }\rangle |\widetilde{f}(ls,l)\rangle |g^{s}\func{mod}
p\rangle ,
\end{eqnarray*}
where the sum with symbol $(l,p-1)=1$ means that the index $l$ takes those
integers less than and coprime to the integer $(p-1)$ and the sum with $
(l,p-1)>1$ for the index $l$ runs over those integers less than and not
coprime to the integer $(p-1)$, the index $s^{\prime }=l^{\phi (p-1)}s\func{
mod}(p-1)$ for $(l,p-1)>1$ (this also includes $l=0)$ and the index $
s=l^{\phi (p-1)}s\func{mod}(p-1)$ by the Euler theorem that $l^{\phi (p-1)}=1
\func{mod}(p-1)$ if $l$ is coprime to $p-1.$ Generally, the index $s^{\prime
}\neq s$. It is known that the computational complexity for the modular
exponentiation operation is $\thicksim O(\log ^{3}p)$ and hence the
conditional modular exponentiation unitary operation $U_{\phi
(p-1)-1,p-1}^{c}$ may be implemented in polynomial time $\thicksim O(\log
^{3}p)$. The state $|\Psi _{3}\rangle $ now contains the desired state $
|\Psi _{3s}\rangle ,$ which carries the real index state $|s\rangle ,$ and
the undesired state $|\Psi _{3s^{\prime }}\rangle ,$ which does not.
Obviously, the probability for the desired
state $|\Psi _{3s}\rangle $ in the state $|\Psi _{3}\rangle $ is $\phi
(p-1)/(p-1)$ and hence the probability for the real index state $|s\rangle $
in the state $|\Psi _{3}\rangle $ is $\phi (p-1)/(p-1)>\delta /\log \log
(p-1)$. It is necessary to remove unitarily the undesired state $|\Psi
_{3s^{\prime }}\rangle $ from the state $|\Psi _{3}\rangle $ or to convert
it into the desired state $|\Psi _{3s}\rangle $ by a unitary transformation
so that the real index state $|s\rangle $ can be obtained from the desired
state $|\Psi _{3s}\rangle $ in a high probability $(\thicksim 1)$.
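The success probability $\phi (p-1)/(p-1)$ quoted above is easy to check numerically for a small illustrative prime (plain Python; $p=23$ is a hypothetical choice and the helper `phi` is ours, not from the text):

```python
from math import gcd

def phi(n):
    """Euler totient: the number of 1 <= k <= n with gcd(k, n) == 1."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

p = 23                                       # small illustrative prime
good = [l for l in range(p - 1) if gcd(l, p - 1) == 1]
assert len(good) == phi(p - 1)               # phi(22) = 10 "good" indices l
probability = len(good) / (p - 1)            # the ratio phi(p-1)/(p-1)
```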
Here we give a simple method to convert unitarily the whole state $|\Psi
_{3}\rangle $ into the desired state $|\Psi _{3s}\rangle .$ This method is
similar to the amplitude amplification method [6, 30a]. It uses just two
unitary operations, one is the inversion operation for the desired state $
|\Psi _{3s}\rangle ,$
\[
U(|\Psi _{3s}\rangle )=\exp \{-i\pi (|\Psi _{3s}\rangle \langle \Psi
_{3s}|)\}
\]
and another is simply taken as
\begin{eqnarray*}
U(|\Psi _{3}\rangle ) &=&\exp \{-i\pi (|\Psi _{3}\rangle \langle \Psi
_{3}|)\} \\
&=&\exp \{-i\pi (|\Psi _{3s}\rangle +|\Psi _{3s^{\prime }}\rangle )(\langle
\Psi _{3s}|+\langle \Psi _{3s^{\prime }}|)\}.
\end{eqnarray*}
Firstly, the inversion for the state $|\Psi _{3s}\rangle $ can be achieved
efficiently. Because $g$ is a primitive root ($\func{mod}p$), it has the
inverse element $g^{-1}=g^{p-2}\func{mod}p$ such that $g^{-1}g=1\func{mod}p.$
Then by making the conditional cyclic group operation $U_{g^{-1}}^{c}$ one
obtains the following state transformation:
\[
U_{g^{-1}}^{c}|s^{\prime }\rangle |g^{s}\func{mod}p\rangle |0\rangle
=|s^{\prime }\rangle |g^{s}\func{mod}p\rangle |g^{-s^{\prime }+s}\func{mod}
p\rangle ,s,s^{\prime }\in Z_{p-1}.
\]
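The classical arithmetic behind this marker state can be sketched as follows (plain Python; $p=23$, $g=5$, and $s=7$ are illustrative values only, $5$ being a primitive root modulo $23$):

```python
p, g = 23, 5                    # illustrative: 5 is a primitive root mod 23
g_inv = pow(g, p - 2, p)        # g^{-1} = g^{p-2} mod p, so g_inv * g = 1 mod p
assert (g_inv * g) % p == 1

s = 7                           # illustrative hidden index
for s_prime in range(p - 1):
    marker = (pow(g_inv, s_prime, p) * pow(g, s, p)) % p   # g^{s - s'} mod p
    assert (marker == 1) == (s_prime == s)                 # marker is 1 iff s' = s
```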
Here the operation result is put in the last register. Therefore, there
holds the unitary transformation:
\[
U_{g^{-1}}^{c}|s^{\prime }\rangle |g^{s}\func{mod}p\rangle |0\rangle
=\left\{
\begin{array}{c}
|s\rangle |g^{s}\func{mod}p\rangle |1\rangle ,\text{ if }s^{\prime }=s \\
|s^{\prime }\rangle |g^{s}\func{mod}p\rangle |g^{-s^{\prime }+s}\func{mod}
p\rangle ,\text{ if }s^{\prime }\neq s
\end{array}
\right.
\]
Because the state $|1\rangle $ is orthogonal to these states $|g^{-s^{\prime
}+s}\func{mod}p\rangle $ for any indices $s^{\prime }\neq s,$ one can make
the selective inversion operation $C_{1}(\pi )=\exp (-i\pi D_{1})$ to invert
the state $|1\rangle ,$ while leaving these states $|g^{-s^{\prime }+s}\func{
mod}p\rangle $ with $s^{\prime }\neq s$ unchanged. If now the conditional
cyclic group operation $U_{g^{-1}}^{c}$ acts on the state $|\Psi _{3}\rangle
,$ then only the desired state $|\Psi _{3s}\rangle $ generates the state $
|1\rangle $ because it contains the index state $|s\rangle $, while the
state $|\Psi _{3s^{\prime }}\rangle $ produces the states $|g^{-s^{\prime
}+s}\func{mod}p\rangle $ with $s^{\prime }\neq s.$ After the unitary
operation $U_{g^{-1}}^{c}$ the selective inversion operation $C_{1}(\pi )$
is applied to the register whose state is either $|1\rangle $ or $
|g^{-s^{\prime }+s}\func{mod}p\rangle ,$ then only the state $
U_{g^{-1}}^{c}|\Psi _{3s}\rangle $ is inverted, while the state $
U_{g^{-1}}^{c}|\Psi _{3s^{\prime }}\rangle $ remains unchanged. After the
selective inversion operation $C_{1}(\pi )$ the states $U_{g^{-1}}^{c}|\Psi
_{3s}\rangle $ and $U_{g^{-1}}^{c}|\Psi _{3s^{\prime }}\rangle $ are
returned to the states $|\Psi _{3s}\rangle $ and $|\Psi _{3s^{\prime
}}\rangle ,$ respectively, by applying the inverse unitary operation $
(U_{g^{-1}}^{c})^{+}.$ The inversion of the state $|\Psi _{3s}\rangle $ is
therefore achieved, while the state $|\Psi _{3s^{\prime }}\rangle $ remains
unchanged. Another unitary operation $U(|\Psi _{3}\rangle )$ is generated
from the oracle unitary operation: $U_{os}(\theta )=\exp \{-i\theta (|
\mathbf{R0}\rangle |g^{s}\func{mod}p\rangle \langle g^{s}\func{mod}p|\langle
\mathbf{R0}|)\}$ with $\theta =\pi .$ It is shown above that the state $|
\mathbf{R0}\rangle |g^{s}\func{mod}p\rangle $ can be efficiently converted
into the state $|\Psi _{3}\rangle =|\Psi _{3s}\rangle +|\Psi _{3s^{\prime
}}\rangle $ by a sequence of unitary operations which may be simply denoted
as $U_{T}(|\Psi _{3}\rangle ).$ Then $|\mathbf{R0}\rangle |g^{s}\func{mod}
p\rangle \stackrel{U_{T}(|\Psi _{3}\rangle )}{\rightarrow }|\Psi _{3}\rangle
$ and the unitary operation $U(|\Psi _{3}\rangle )$ can be expressed as $
U(|\Psi _{3}\rangle )=U_{T}(|\Psi _{3}\rangle )U_{os}(\pi )U_{T}^{+}(|\Psi
_{3}\rangle ).$ The unitary operation sequence that converts the state $
|\Psi _{3}\rangle $ into the desired state $|\Psi _{3s}\rangle $ then is
given simply by
\[
R(m)=[U(|\Psi _{3}\rangle )C(|\Psi _{3s}\rangle )]^{m},
\]
where the iterative number $m$ takes $\thicksim O(\sqrt{\log \log (p-1)})$
so that the state $|\Psi _{3}\rangle $ is converted in a high probability ($
\thicksim 1$) into the desired state $|\Psi _{3s}\rangle ;$ this is because
the probability for the desired state $|\Psi _{3s}\rangle $ in the state $
|\Psi _{3}\rangle $ is $\phi (p-1)/(p-1)>\delta /\log \log (p-1)$. Thus,
under the unitary operation sequence $R(m)$ the state $|\Psi _{3}\rangle $
is converted completely into the desired state $|\Psi _{3s}\rangle ,$
\begin{eqnarray*}
|\Psi _{3}\rangle \stackrel{R(m)}{\rightarrow }|\Psi _{3s}\rangle &=&|
\mathbf{R0}\rangle \bigotimes \frac{C}{p-1}\stackrel{p-2}{\stackunder{
(l,p-1)=1}{\sum }}|l\rangle |ls\func{mod}(p-1)\rangle \\
&&\bigotimes |s\rangle |\widetilde{f}(ls,l)\rangle |g^{s}\func{mod}p\rangle ,
\end{eqnarray*}
where $C$ is a normalization constant, $C=\sqrt{(p-1)/\phi (p-1)}$. Now all
the orthogonal states in the state $|\Psi _{3s}\rangle $ have the index
state $|s\rangle .$ The state $|ls\func{mod}(p-1)\rangle $ in the second
register in the state $|\Psi _{3s}\rangle $ can be removed unitarily by
making an inverse multiplication operation $M_{p-1}^{+}(1,3,2)$ on the state
$|\Psi _{3s}\rangle .$ After the index state $|s\rangle $ in the third
register in the state $|\Psi _{3s}\rangle $ is moved to the last register,
in which the index state $|s\rangle $ will be kept to the end, the state $
|\Psi _{3s}\rangle $ is changed to the state $|\Psi _{4s}\rangle :$
\[
|\Psi _{3s}\rangle \stackrel{M_{p-1}^{+}(1,3,2)}{\rightarrow }\stackrel{SWAP
}{\rightarrow }|\Psi _{4s}\rangle
\]
\[
=|\mathbf{R0}\rangle \bigotimes \frac{C}{p-1}\stackrel{p-2}{\stackunder{
(l,p-1)=1}{\sum }}|l\rangle |\widetilde{f}(ls,l)\rangle |g^{s}\func{mod}
p\rangle |s\rangle .
\]
By inserting the inverse Fourier transform state $|\widetilde{f}
(ls,l)\rangle $ (7) the state $|\Psi _{4s}\rangle $ can be rewritten as
\begin{eqnarray*}
|\Psi _{4s}\rangle &=&|\mathbf{R0}\rangle \bigotimes \frac{C}{p-1}\frac{1}{
p-1}\stackrel{p-2}{\stackunder{x_{1}=0}{\sum }}\stackrel{p-2}{\stackunder{
x_{2}=0}{\sum }}\stackrel{p-2}{\stackunder{(l,p-1)=1}{\sum }}\exp [i2\pi
l(sx_{1}+x_{2})/(p-1)] \\
&&\times |l\rangle |f(x_{1},x_{2})\rangle |g^{s}\func{mod}p\rangle |s\rangle
.
\end{eqnarray*}
Since the functional state $|f(x_{1},x_{2})\rangle =|g^{sx_{1}+x_{2}}\func{
mod}p\rangle =|f_{1}(sx_{1}+x_{2})\rangle ,$ only $p-1$ of the functional
states $|f(x_{1},x_{2})\rangle $ are independent. However, there are $
(p-1)\times (p-1)$ functional states $|f(x_{1},x_{2})\rangle $ in the state $
|\Psi _{4s}\rangle ,$ so not all of these $(p-1)\times (p-1)$ functional
states are independent. Actually, the state $|\Psi _{4s}\rangle $ can be
reduced to the simple form $|\Psi _{5s}\rangle :$
\begin{eqnarray*}
|\Psi _{5s}\rangle &=&|\mathbf{R0}\rangle \bigotimes \frac{C}{p-1}\stackrel{
p-2}{\stackunder{z=0}{\sum }}\stackrel{p-2}{\stackunder{(l,p-1)=1}{\sum }}
\exp [i2\pi lz/(p-1)] \\
&&\times |l\rangle |f_{1}(z)\rangle |g^{s}\func{mod}p\rangle |s\rangle .
\end{eqnarray*}
Why can the state $|\Psi _{4s}\rangle $ be written as the simple form $|\Psi
_{5s}\rangle ?$ There are totally $(p-1)\times (p-1)$ different index pairs $
(x_{1},x_{2})$ in the state $|\Psi _{4s}\rangle $ since the indices $x_{1},$
$x_{2}=0,1,...,p-2.$ Now for each given $z=(sx_{1}+x_{2})\func{mod}(p-1)$
for $z=0,1,...,p-2$ there are $(p-1)$ different index pairs $(x_{1},x_{2})$
to satisfy the same equation $z=(sx_{1}+x_{2})\func{mod}(p-1),$ while for
all these $(p-1)$ pairs of indices $(x_{1},x_{2})$ the functional states $
|f(x_{1},x_{2})\rangle $ take the same one: $|f_{1}(z)\rangle $ and the
phase factor $\exp [i2\pi l(sx_{1}+x_{2})/(p-1)]$ also are the same as $\exp
[i2\pi lz/(p-1)]$. These $(p-1)$ different index pairs $(x_{1},x_{2})$ that
fulfill the same equation: $z=(sx_{1}+x_{2})\func{mod}(p-1)$ may be taken as
$(x_{1},$ $(z-sx_{1})\func{mod}(p-1))$ for $x_{1}=0,1,...,p-2.$ Thus, taking
$x_{1}=0,1,...,p-2$ and $z=0,1,...,p-2$ generates just all possible $
(p-1)\times (p-1)$ different index pairs $(x_{1},x_{2}).$ Then in the state $
|\Psi _{4s}\rangle $ the sums over the indices $x_{1}$ and $x_{2}$ may be
carried out as follows. The sum over the index $x_{1}$ first runs over the $
(p-1)$ different index pairs $(x_{1},$ $(z-sx_{1})\func{mod}(p-1))$ for $
x_{1}=0,1,...,p-2$ and for any given $z=(sx_{1}+x_{2})\func{mod}(p-1);$ this
sum generates a factor of $(p-1)$ since the same functional state $
|f(x_{1},x_{2})\rangle $ and the same phase factor $\exp [i2\pi
l(sx_{1}+x_{2})/(p-1)]$ in the state $|\Psi _{4s}\rangle $ are taken for
these $(p-1)$ index pairs. Then the sum over the index $z$ is carried out
for $z=0,1,...,p-2$, and hence the state $|\Psi _{4s}\rangle $ can be
written as the simple state $|\Psi _{5s}\rangle $. Now one can also
understand why the Fourier transform state satisfies $|\widetilde{f}
_{1}(l)\rangle =|\widetilde{f}(ls,l)\rangle /\sqrt{p-1}$ for $
l=0,1,...,p-2$ (see before).
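The counting argument above is easy to verify numerically: every residue $z$ is hit by exactly $p-1$ index pairs $(x_{1},x_{2})$ (plain Python; $p=23$ and $s=7$ are illustrative values only):

```python
from collections import Counter

p, s = 23, 7                    # illustrative values, not from the text
n = p - 1
counts = Counter((s * x1 + x2) % n for x1 in range(n) for x2 in range(n))
# Each z = 0, 1, ..., p-2 is produced by exactly p-1 pairs (x1, x2):
assert all(counts[z] == n for z in range(n))
assert sum(counts.values()) == n * n
```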
Now observe the state $|\Psi _{5s}^{^{\prime }}\rangle $ and a series of
unitary transformations:
\begin{eqnarray*}
|\Psi _{5s}^{^{\prime }}\rangle &=&|\mathbf{R0}\rangle \bigotimes \frac{1}{
p-1}\stackrel{p-2}{\stackunder{z=0}{\sum }}\stackrel{p-2}{\stackunder{l=0}{
\sum }}\exp [i2\pi lz/(p-1)] \\
&&\times |l\rangle |f_{1}(z)\rangle |g^{s}\func{mod}p\rangle |s\rangle \\
&&\stackrel{Q_{(p-1)FT}^{+}}{\rightarrow }|\mathbf{R0}\rangle \bigotimes
\frac{1}{\sqrt{p-1}}\stackrel{p-2}{\stackunder{z=0}{\sum }}|z\rangle |g^{z}
\func{mod}p\rangle |g^{s}\func{mod}p\rangle |s\rangle \\
&&\stackrel{(U_{g,p}^{c})^{+}}{\rightarrow }|\mathbf{R0}\rangle \bigotimes
\frac{1}{\sqrt{p-1}}\stackrel{p-2}{\stackunder{z=0}{\sum }}|z\rangle |g^{s}
\func{mod}p\rangle |s\rangle \\
&&\stackrel{Q_{(p-1)FT}^{+}}{\rightarrow }|\mathbf{R0}\rangle \bigotimes
|g^{s}\func{mod}p\rangle |s\rangle .
\end{eqnarray*}
It can be seen that by making the inverse Fourier transform, the inverse
modular exponentiation operation $(U_{g,p}^{c})^{+},$ and again the inverse
Fourier transform the state $|\Psi _{5s}^{^{\prime }}\rangle $ is changed to
the state $|\mathbf{R0}\rangle \bigotimes |g^{s}\func{mod}p\rangle |s\rangle
.$
However, the state $|\Psi _{5s}\rangle $ is different from the state $|\Psi
_{5s}^{^{\prime }}\rangle $ in that the sum for the index $l$\ in the state $
|\Psi _{5s}\rangle $ runs over only those integers that are less than and
coprime to the integer $(p-1)$. By making the inverse Fourier transform on
the state $|l\rangle $ in the first register the state $|\Psi _{5s}\rangle $
is changed to the state $|\Psi _{6s}\rangle ,$
\[
|\Psi _{6s}\rangle =|\mathbf{R0}\rangle \bigotimes \frac{1}{p-1}\stackrel{p-2
}{\stackunder{z=0}{\sum }}\stackrel{p-2}{\stackunder{z^{\prime }=0}{\sum }}
h(z,z^{\prime })|z^{\prime }\rangle |f_{1}(z)\rangle |g^{s}\func{mod}
p\rangle |s\rangle .
\]
The trigonometrical sum $h(z,z^{\prime })$ is given by
\[
h(z,z^{\prime })=\frac{1}{\sqrt{\phi (p-1)}}\stackrel{p-2}{\stackunder{
(l,p-1)=1}{\sum }}\exp [i2\pi l(z-z^{\prime })/(p-1)]
\]
where the sum for the index $l$ runs over only those integers less than and
coprime to the integer $(p-1)$. Obviously, the trigonometrical sum $h(z,z)=
\sqrt{\phi (p-1)}$ if the index $z^{\prime }=z,$ for the number of the
integers less than and coprime to the integer $(p-1)$ is $\phi (p-1).$ Then
the state $|\Psi _{6s}\rangle $ can be rewritten as the sum of the two
terms:
\begin{eqnarray*}
|\Psi _{6s}\rangle &=&|\mathbf{R0}\rangle \bigotimes \frac{\sqrt{\phi (p-1)}
}{p-1}\stackrel{p-2}{\stackunder{z=0}{\sum }}|z\rangle |g^{z}\func{mod}
p\rangle |g^{s}\func{mod}p\rangle |s\rangle \\
&&+|\mathbf{R0}\rangle \bigotimes \frac{1}{p-1}\stackrel{p-2}{\stackunder{
z\neq z^{\prime },z,z^{\prime }=0}{\sum }}h(z,z^{\prime })|z^{\prime
}\rangle |g^{z}\func{mod}p\rangle |g^{s}\func{mod}p\rangle |s\rangle .
\end{eqnarray*}
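The diagonal value $h(z,z)=\sqrt{\phi (p-1)}$ quoted above can be checked numerically (plain Python; the helper `h` is ours and $n=p-1=22$ is an illustrative choice):

```python
import cmath
from math import gcd, isclose, sqrt

def h(z, zp, n):
    """The trigonometrical sum over l coprime to n, normalized by sqrt(phi(n))."""
    phi_n = sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)
    total = sum(cmath.exp(2j * cmath.pi * l * (z - zp) / n)
                for l in range(n) if gcd(l, n) == 1)
    return total / sqrt(phi_n)

n = 22                                   # n = p - 1 for the illustrative p = 23
val = h(3, 3, n)                         # diagonal: equals sqrt(phi(22)) = sqrt(10)
assert isclose(val.real, sqrt(10)) and isclose(val.imag, 0.0, abs_tol=1e-9)
```

The off-diagonal values $h(z,z^{\prime })$ with $z\neq z^{\prime }$ are strictly smaller in modulus, which is why the first term dominates.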
By making the inverse modular exponentiation operation $(U_{g,p}^{c})^{+}$
on the first two registers the state $|\Psi _{6s}\rangle $ is transferred to
the state $|\Psi _{7s}\rangle :$
\begin{eqnarray*}
|\Psi _{7s}\rangle &=&|\mathbf{R0}\rangle \bigotimes \frac{\sqrt{\phi (p-1)}
}{p-1}\stackrel{p-2}{\stackunder{z=0}{\sum }}|z\rangle |1\rangle |g^{s}\func{
mod}p\rangle |s\rangle \\
&&+|\mathbf{R0}\rangle \bigotimes \frac{1}{p-1}\stackrel{p-2}{\stackunder{
z\neq z^{\prime },z,z^{\prime }=0}{\sum }}h(z,z^{\prime })|z^{\prime
}\rangle |g^{z-z^{\prime }}\func{mod}p\rangle |g^{s}\func{mod}p\rangle
|s\rangle .
\end{eqnarray*}
Since the index $z^{\prime }\neq z,$ the state $|g^{z-z^{\prime }}\func{mod}
p\rangle \neq |1\rangle $ and hence the two terms in the state $|\Psi
_{7s}\rangle $ are orthogonal to one another. Evidently, the first term in
the state $|\Psi _{7s}\rangle $ has a total probability $\phi (p-1)/(p-1)$
which is greater than $\delta /\log \log (p-1)$ for some constant $\delta .$
Again using the amplitude amplification method the second term in the state $
|\Psi _{7s}\rangle $ can be converted into the first term in a high
probability $(\thicksim 1)$ and the iterative number in the amplitude
amplification process to achieve this complete state conversion needs $
\thicksim O(\sqrt{\log \log (p-1)}).$ This time the selective inversion
operation is applied to the state $|1\rangle $ in the second register in the
state $|\Psi _{7s}\rangle $ and another unitary operation for the amplitude
amplification process is just the unitary operator $\exp \{-i\pi (|\Psi
_{7s}\rangle \langle \Psi _{7s}|)\}$ which can be also built up efficiently
because the state $|\Psi _{7s}\rangle $ itself can be generated efficiently
from the initial state $|\Psi _{0}\rangle $, as shown in the state-transfer
process above. After the state $|\Psi _{7s}\rangle $ is changed to its first
term completely, an inverse Fourier transform on the state $|z\rangle $ in
the first register and the state transfer $F_{1}^{+}:$ $|1\rangle
\rightarrow |0\rangle $ in the second register change the first term to the
desired state ultimately,
\begin{eqnarray*}
|\Psi _{7s}\rangle &\rightarrow &|\mathbf{R0}\rangle \bigotimes \frac{1}{
\sqrt{p-1}}\stackrel{p-2}{\stackunder{z=0}{\sum }}|z\rangle |1\rangle |g^{s}
\func{mod}p\rangle |s\rangle \\
&&\stackrel{Q_{(p-1)FT}^{+}}{\rightarrow }\stackrel{F_{1}^{+}}{\rightarrow }|
\mathbf{R0}\rangle \bigotimes |g^{s}\func{mod}p\rangle |s\rangle .
\end{eqnarray*}
Obviously, the whole unitary transformation process above really performs a
unitary transformation that firstly converts the starting state $|\Psi
_{0}\rangle =|\mathbf{R0}\rangle \bigotimes |g^{s}\func{mod}p\rangle
|0\rangle $ to the state $|\Psi _{3}\rangle ,$ then to the state $|\Psi
_{7s}\rangle ,$ and finally to the desired state $|\mathbf{R0}\rangle
\bigotimes |g^{s}\func{mod}p\rangle |s\rangle .$ Evidently, this is an
efficient unitary transformation process. This unitary operation sequence is
just the unitary operation $V_{f^{-1}}$ of the inversion function of the
modular exponential function $f(s)=g^{s}\func{mod}p$ if the register library
$|\mathbf{R0}\rangle $ is dropped. Once the inversion-functional unitary
operation $V_{f^{-1}}$ is obtained the unitary operation $U_{\log }(g)$ of
the discrete logarithmic function $s=\log _{g}f(s)$ can be set up by $
U_{\log }(g)=V_{f}^{+}SV_{f^{-1}}$.
If the starting state is a superposition, $|\Psi _{0}\rangle =\sum_{s}\alpha
_{s}|\mathbf{R0}\rangle \bigotimes |g^{s}\func{mod}p\rangle ,$ then it is
required that the unitary operator $U(|\Psi _{0}\rangle )=\exp \{-i\theta
(|\Psi _{0}\rangle \langle \Psi _{0}|)\}$ be efficiently constructed so that
the unitary operation $U(|\Psi _{3}\rangle ),$ etc., can be efficiently
built up with the unitary operator $U(|\Psi _{0}\rangle ).$ In this case the
superposition $|\Psi _{0}\rangle $ can be efficiently converted into the
superposition $|\Psi _{f}\rangle =\sum_{s}\alpha _{s}|\mathbf{R0}\rangle
\bigotimes |g^{s}\func{mod}p\rangle |s\rangle .$ For the quantum discrete
logarithmic problem the integer $b=g^{s}\func{mod}p$ is given beforehand and
hence the oracle unitary operation $\overline{U}_{os}(\theta )=\exp
[-i\theta \overline{D}_{s}(g)]$ can be constructed efficiently in advance.
Note that here the data $b$ is used to prepare the unitary operation instead
of a quantum state. Then using the above unitary operation sequence $
V_{f^{-1}}$ the initial known state $|\Psi _{0}\rangle =|\mathbf{R0}\rangle
\bigotimes |g^{s}\func{mod}p\rangle $ can be efficiently converted into the
state $|\mathbf{R0}\rangle \bigotimes |g^{s}\func{mod}p\rangle |s\rangle .$
Furthermore, by using directly the discrete logarithmic unitary operation $
U_{\log }(g)$ the initial known state $|\Psi _{0}\rangle $ can be
efficiently transferred to the index state $|\mathbf{R0}\rangle \bigotimes
|s\rangle $ and a quantum measurement on the index state will output
directly the complete information of the index $s$ of the integer $b=g^{s}
\func{mod}p.$\newline
\newline
{\large 4. The efficient state transformations among the cyclic group state
subspaces}
Once the unitary operation $U_{\log }(g)$ of the discrete logarithmic
function $x=\log _{g}f(x)$ with $f(x)=g^{x}\func{mod}p$ is obtained, one may
further use it to prepare some useful auxiliary oracle unitary operations $
U_{os^{\prime }}(\theta )$ where the index $s^{\prime }\neq s$ generally and
the index $s$ is of the oracle unitary operation $U_{os}(\theta )=\exp
[-i\theta D_{s}(g)]$. The process to generate the auxiliary oracle unitary
operation $U_{os^{\prime }}(\theta )$ with index $s^{\prime }=js$ from the
basic oracle unitary operation $U_{os}(\theta )$ is related to the state
unitary transformation $V_{js}$:
\[
|\mathbf{R0}\rangle \bigotimes |g^{s}\func{mod}p\rangle \stackrel{V_{js}}{
\rightarrow }|\mathbf{R0}\rangle \bigotimes |g^{js}\func{mod}p\rangle .
\]
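Classically this step is straightforward: given $b=g^{s}\func{mod}p$, one computes $g^{js}\func{mod}p$ as $b^{j}\func{mod}p$ without knowing $s$ (a plain-Python sketch; $p=23$, $g=5$, $s=7$, $j=4$ are illustrative values only):

```python
p, g, s = 23, 5, 7              # illustrative values; 5 is a primitive root mod 23
b = pow(g, s, p)                # b = g^s mod p is the given data
j = 4
# g^{js} mod p follows from b alone, with no knowledge of the index s:
assert pow(b, j, p) == pow(g, (j * s) % (p - 1), p)
```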
In the classical irreversible computation the modular exponential function $
g^{js}\func{mod}p$ can be efficiently computed for any given integers $j$
and $b=g^{s}\func{mod}p$ [21], but it may not be so easy in the quantum
search problem to generate unitarily the state $|g^{js}\func{mod}p\rangle $
from the state $|g^{s}\func{mod}p\rangle $ for any given integer $j.$ If the
integer $j$ is coprime to the integer $(p-1)$, then there is an efficient
unitary transformation such that $U_{j,p-1}(\alpha )|s\rangle =|js\func{mod}
(p-1)\rangle $ and the unitary transformation $V_{js}$ therefore can be
achieved efficiently with the help of the unitary operation $U_{\log }(g)$
of the discrete logarithm. Hence the auxiliary oracle unitary operation $
U_{ojs}(\theta )$ can be efficiently generated from the oracle unitary
operation $U_{os}(\theta ).$ However, in order to simplify the quantum
search problem in the cyclic group state space it is better to convert the
marked state into a small subspace of the cyclic group state space. Then the
auxiliary oracle unitary operation $U_{ojs}(\theta )$ is usually a specific
one, and the integer $j$ takes only some specific positive integer values
that are usually not coprime to the integer $(p-1)$. How can such an
auxiliary oracle unitary operation $U_{ojs}(\theta )$ be constructed from
the oracle unitary operation $U_{os}(\theta )$?
Evidently, the following unitary transformations can be achieved efficiently
for any integer $j$:
\begin{eqnarray*}
&&|\mathbf{R0}\rangle \bigotimes |g^{s}\func{mod}p\rangle \stackrel{U_{\log
}(g)}{\rightarrow }|\mathbf{R0}\rangle \bigotimes |s\rangle \stackrel{F_{j}}{
\rightarrow }|\mathbf{R0}\rangle \bigotimes |j\rangle |s\rangle \\
&&\stackrel{M_{p-1}(\alpha ,\beta ,\gamma )}{\rightarrow }|\mathbf{R0}
\rangle \bigotimes |j\rangle |s\rangle |js\func{mod}(p-1)\rangle
\end{eqnarray*}
where the unitary transformation $F_{j}|0\rangle =|j\rangle $ for any known
integer $j$ can be built up efficiently. If the integer $j$ is coprime to $
p-1,$ then a further unitary transformation sequence can be made:
\begin{eqnarray*}
&&|\mathbf{R0}\rangle \bigotimes |j\rangle |s\rangle |js\func{mod}
(p-1)\rangle |0\rangle \\
&&\stackrel{U_{\phi (p-1)-1,p-1}^{c}(1,3,4)}{\rightarrow }|\mathbf{R0}
\rangle \bigotimes |j\rangle |s\rangle |js\func{mod}(p-1)\rangle |s\rangle \\
&&\stackrel{COPY(4,2)}{\rightarrow }|\mathbf{R0}\rangle \bigotimes |j\rangle
|js\func{mod}(p-1)\rangle |s\rangle \\
&&\stackrel{U_{\phi (p-1)-1,p-1}^{c}(1,2,3)^{+}}{\rightarrow }|\mathbf{R0}
\rangle \bigotimes |j\rangle |js\func{mod}(p-1)\rangle \\
&&\stackrel{F_{j}^{+}}{\rightarrow }|\mathbf{R0}\rangle \bigotimes |js\func{
mod}(p-1)\rangle \stackrel{U_{\log }^{+}(g)}{\rightarrow }|\mathbf{R0}
\rangle \bigotimes |g^{js}\func{mod}p\rangle .
\end{eqnarray*}
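The arithmetic fact used by the operation $U_{\phi (p-1)-1,p-1}^{c}$ above is that $j^{\phi (p-1)-1}$ is the inverse of $j$ modulo $p-1$ when $(j,p-1)=1$, so multiplying $js\func{mod}(p-1)$ by it recovers $s$. A plain-Python sketch with illustrative values ($n=22$, $j=3$, $s=7$ are ours):

```python
from math import gcd

n = 22                                       # n = p - 1 for the illustrative p = 23
phi_n = sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)   # phi(22) = 10
j, s = 3, 7                                  # illustrative, with gcd(3, 22) = 1
js = (j * s) % n
# j^(phi(n)-1) = j^{-1} mod n by Euler's theorem, so it undoes the multiplication:
assert (pow(j, phi_n - 1, n) * js) % n == s
```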
Therefore, the state unitary transformation $V_{js}$ can be achieved too by
a more complicated way. However, from these detailed state unitary
transformations one may see more clearly why the state unitary
transformation $V_{js}$ is not easy to be constructed if the integer $j$ is
not coprime to the integer $(p-1)$.
If the integer $j$ is not coprime to the integer $(p-1),$ that is, $
(j,p-1)>1$, then the situation becomes much more complicated. Firstly, the
state transformation $|j\rangle |js\func{mod}(p-1)\rangle |0\rangle
\rightarrow |j\rangle |js\func{mod}(p-1)\rangle |s\rangle $ for any index $
s\in Z_{p-1}$ usually cannot be unitary. This is related to whether or not
the function $f(x)=jx\func{mod}(p-1)$ has a unique inversion function for
any index variable $x\in Z_{p-1}$. Since the function $f(x)$ may not be a
one-to-one function of its variable $x\in Z_{p-1}$ if the integer $j$ is
not coprime to $(p-1),$ the inversion-functional operation $f(x)^{-1}$ may
not be unitary in the variable value range $x\in Z_{p-1}$. By the same
argument the state
transformation $|g^{js}\func{mod}p\rangle |0\rangle \rightarrow |g^{js}\func{
mod}p\rangle |s\rangle $ for any $s\in Z_{p-1}$ usually could not be unitary
if $(j,p-1)>1$, although the state transformation $|g^{js}\func{mod}p\rangle
|0\rangle \rightarrow |g^{js}\func{mod}p\rangle |js\func{mod}(p-1)\rangle $
is unitary. More generally, there could not be a single unitary
transformation for any integer $j\in Z_{p-1}$ such that $|j\rangle |js\func{
mod}(p-1)\rangle |0\rangle \rightarrow |j\rangle |js\func{mod}(p-1)\rangle
|s\rangle $ for any given index $s,$ as shown in section 3. These may be
best understood with the knowledge of number theory [19]. Suppose that one
is given a set of the integers $j=a_{k}$ and $js\func{mod}(p-1)=b_{k}$ for $
k=1,2,...,r$ to reproduce uniquely the index $s.$ This problem is really
equivalent to solving the congruence system:
\begin{equation}
a_{k}x=b_{k}\func{mod}(p-1),k=1,2,...,r, \label{9}
\end{equation}
where the integers $\{a_{k}\}$ need not be coprime to $p-1$ and evidently $
x=s$ is a solution to the congruence system. First consider a single
congruence, say the $k$-th one: $a_{k}x\func{mod}(p-1)=b_{k}$. Let $
d_{k}=(a_{k},p-1)$ denote the greatest common divisor of $a_{k}$ and $p-1.$
Then the single $k$-th congruence has exactly $d_{k}$ solutions [19, 20],
since $d_{k}$ is a divisor of the integer $b_{k}=a_{k}s\func{mod}(p-1)$
(i.e., $d_{k}|b_{k})$ for $k=1,2,...,r.$ If the integer $a_{k}$ is not
coprime to the integer $(p-1)$, that is, $d_{k}>1$, then there are $d_{k}$
different index values $s$ satisfying the same $k$-th congruence, which
indicates that there is no single unitary state transformation $
|a_{k}\rangle |a_{k}s\func{mod}(p-1)\rangle |0\rangle \rightarrow
|a_{k}\rangle |a_{k}s\func{mod}(p-1)\rangle |s\rangle $ for an arbitrary
index $s\in Z_{p-1}.$
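This non-uniqueness can be checked classically. The following sketch, with
small illustrative values of our own choosing (they do not come from the
text), counts the solutions of a single congruence $a_{k}x=b_{k}\func{mod}
(p-1)$ when $d_{k}=(a_{k},p-1)>1$:

```python
# Classical check: a*x = b (mod n) with d = gcd(a, n) has exactly d
# solutions in Z_n whenever d divides b, so the index s is not unique.
from math import gcd

n = 12            # plays the role of p - 1 (illustrative value)
a, s = 8, 5       # a_k with gcd(a_k, p - 1) > 1
b = (a * s) % n   # b_k = a_k * s mod (p - 1)
d = gcd(a, n)

solutions = [x for x in range(n) if (a * x) % n == b]
assert b % d == 0            # d_k | b_k holds by construction
assert len(solutions) == d   # exactly d_k candidate indices
print(d, solutions)          # 4 [2, 5, 8, 11]
```

Since four distinct indices satisfy the same congruence, a map that fixes $
|a_{k}\rangle |b_{k}\rangle $ cannot single out $|s\rangle $ reversibly.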
Now consider the whole congruence system (9). Obviously, the congruence
system is solvable. Note that $d_{k}$ divides the integers $(p-1)$, $a_{k}$,
and $b_{k}$. Denote the integers $m_{k}=(p-1)/d_{k}$, $\widetilde{a}
_{k}=a_{k}/d_{k},$ and $\widetilde{b}_{k}=b_{k}/d_{k}\equiv \widetilde{a}
_{k}s\func{mod}m_{k}.$ Then Theorem 57 of reference [19] shows that the
congruence system is equivalent to the simpler one: $\widetilde{a}_{k}x=
\widetilde{b}_{k}\func{mod}m_{k},$ $k=1,2,...,r.$ Since $(\widetilde{a}
_{k},m_{k})=1,$ there exists an inverse element $h_{k}$ of $\widetilde{a}_{k}$
such that $h_{k}\widetilde{a}_{k}=1\func{mod}m_{k},$ and the congruence
system $\widetilde{a}_{k}x=\widetilde{b}_{k}\func{mod}m_{k},$ $k=1,2,...,r,$
can then be further reduced to the standard one:
\begin{equation}
x=c_{k}\func{mod}m_{k},\text{ }k=1,2,....,r, \label{10}
\end{equation}
where the coefficients $c_{k}=h_{k}\widetilde{b}_{k}.$ Now the Chinese
remainder theorem [19, 20] shows that if $m_{1},m_{2},...,m_{r}$ are
pairwise coprime, i.e., $(m_{i},m_{j})=1$ for $1\leq i<j\leq r,$ then the
standard congruence system (10) has a unique solution $(\func{mod}m),$
\begin{equation}
x=(n_{1}M_{1}c_{1}+n_{2}M_{2}c_{2}+...+n_{r}M_{r}c_{r})\func{mod}m,
\label{11}
\end{equation}
where $m=m_{1}m_{2}...m_{r}=m_{1}M_{1}=m_{2}M_{2}=...=m_{r}M_{r}$ and the
inverse element $n_{k}$ of $M_{k}$ ($\func{mod}m_{k}$) satisfies $
n_{k}M_{k}=1\func{mod}m_{k}$ for $k=1,2,...,r$ because $(m_{k},M_{k})=1$.
Hence, using the efficient Euclidean algorithm [19], the integer $n_{k}$ can
be determined from the known integers $M_{k}$ and $m_{k}$ for $k=1,2,...,r$.
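The reconstruction in equation (11) can be sketched classically as follows;
the moduli and residues are illustrative, and the inverses $n_{k}$ are
obtained with the modular-inverse form of Python's \texttt{pow}, which uses
the Euclidean algorithm:

```python
# Classical sketch of the CRT solution (11): x = sum_k n_k*M_k*c_k mod m.
from math import prod

m_list = [3, 4, 5]     # pairwise coprime moduli m_1..m_r (illustrative)
c_list = [2, 1, 4]     # residues c_k = x mod m_k
m = prod(m_list)       # m = m_1*m_2*...*m_r = 60

x = 0
for m_k, c_k in zip(m_list, c_list):
    M_k = m // m_k             # m = m_k * M_k
    n_k = pow(M_k, -1, m_k)    # n_k*M_k = 1 (mod m_k), Euclidean algorithm
    x = (x + n_k * M_k * c_k) % m
print(x)                       # 29, and 29 mod 3 = 2, mod 4 = 1, mod 5 = 4
```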
The solution $x$ of equation (11) equals the index $s$ provided that $0\leq
s<m$, because the solution $x$ is unique $(\func{mod}m).$ However, the index
$s$ actually belongs to the integer set $Z_{p-1}=\{0,1,...,p-2\}.$ The
solution $x$ may therefore fail to be the real index $s$ if $m<p-1;$ for
example, it could occur that $s=x+m$ for $0\leq s<p-1$. In order that the
solution $x$ of equation (11) is exactly the real index $s$, the integer $m$
should be equal to or greater than $(p-1)$. In fact, it is best to take the
integer $m$ exactly as the integer $p-1,$ that is, $m=p-1,$ since the
situation is closely related to the prime factorization of the integer $p-1$
and the structure of the cyclic group $S(C_{p-1}),$ as shown in section 2.
Now consider the specific case in which the integer $m=(p-1)$ and $(p-1)$
has the prime factorization $(p-1)=p_{1}^{a_{1}}p_{2}^{a_{2}}...p_{r}^{a_{r}}$
($p_{k}$ are distinct primes). Take $a_{k}=(p-1)/p_{k}^{a_{k}}=M_{k}$ and $
b_{k}=a_{k}s\func{mod}(p-1)=M_{k}s\func{mod}(p-1)$. Thus, $
d_{k}=(a_{k},p-1)=M_{k}$ and $m_{k}=(p-1)/d_{k}=p_{k}^{a_{k}}.$ Then $
\widetilde{a}_{k}=1$ and $\widetilde{b}_{k}=s\func{mod}m_{k}.$ Moreover, $
(p-1)=m_{1}m_{2}...m_{r}=m_{i}M_{i},$ and $(m_{i},m_{j})=1$, $
(m_{i},M_{i})=1$, for $1\leq i<j\leq r.$ Clearly, $h_{k}=1$ and $c_{k}=s
\func{mod}m_{k}.$ Thus, the solution (11) is further reduced to the form
\begin{equation}
x=(n_{1}M_{1}c_{1}+n_{2}M_{2}c_{2}+...+n_{r}M_{r}c_{r})\func{mod}(p-1).
\end{equation}
Now the solution $x$ of equation (12) is just the real index $s$, and the
vector $\{c_{k}\}$ is just the index vector $\{s_{k}\}$ of equation (3) in
section 2. Indeed, comparing with equation (3) in section 2, one sees that
equation (12) is just equation (3), showing once again that this solution $
x$ is the real index $s$. Therefore, the Chinese remainder theorem [19, 20]
ensures that any index state $|s\rangle $ with $
0\leq s<p-1$ can be exactly expressed as
\begin{eqnarray}
|s\rangle &\equiv &|(n_{1}M_{1}s_{1}+n_{2}M_{2}s_{2}+...+n_{r}M_{r}s_{r})
\func{mod}(p-1)\rangle \nonumber \\
&\equiv &|(n_{1}M_{1}+n_{2}M_{2}+...+n_{r}M_{r})s\func{mod}(p-1)\rangle ,
\label{13}
\end{eqnarray}
where the identity $M_{k}s_{k}\equiv M_{k}s\func{mod}(p-1)$ has been used
for $k=1,2,...,r$.
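As a classical sanity check of the index identity (13), the following sketch
(with an illustrative prime $p=61$, so that $p-1=60=2^{2}\cdot 3\cdot 5$)
verifies that every index $s\in Z_{p-1}$ is reconstructed from its residues
$s_{k}=s\func{mod}m_{k}$:

```python
# Verify identity (13): s = (sum_k n_k*M_k*s_k) mod (p-1) for all s.
p = 61                # illustrative prime, p - 1 = 60 = 2^2 * 3 * 5
m_list = [4, 3, 5]    # m_k = p_k^{a_k}

for s in range(p - 1):
    total = 0
    for m_k in m_list:
        M_k = (p - 1) // m_k
        n_k = pow(M_k, -1, m_k)    # n_k*M_k = 1 (mod m_k)
        total += n_k * M_k * (s % m_k)
    assert total % (p - 1) == s    # identity (13) holds for this s
print("identity (13) verified for all s in Z_{p-1}")
```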
The index state identity (13) is helpful for preparing some specific
auxiliary oracle unitary operations in the additive cyclic group state space
$S(Z_{p-1})$. It is shown below that the index state $|s\rangle $
can be converted unitarily into a tensor product of the $r$ states $\{|s
\func{mod}m_{k}\rangle \}$ or $\{|M_{k}s\func{mod}(p-1)\rangle \}$ for $
k=1,2,...,r$ in $r$ different registers. Firstly, by applying the
reversible modular arithmetic operation $MOD(m_{k})$ to the index state $
|s\rangle $ one obtains
\[
|\mathbf{R0\rangle }\bigotimes |s\rangle \stackrel{MOD(m_{k})}{\rightarrow }
|\Phi _{0}\rangle =|\mathbf{R0\rangle }\bigotimes |s\rangle |s\func{mod}
m_{k}\rangle .
\]
The reversible modular arithmetic operation can be thought of as a specific
reversible modular addition operation. Evidently, the state $|s\func{mod}
m_{k}\rangle $ $\in S(Z_{m_{k}})$ and here $0\leq s\func{mod}m_{k}<m_{k}$
for $k=1,2,...,r$. Repeating the reversible modular arithmetic operation $r$
times for $k=1,2,...,r$ one arrives at the state $|\Phi _{1}\rangle :$
\begin{eqnarray*}
|\mathbf{R0\rangle }\bigotimes |s\rangle &\rightarrow &|\Phi _{1}\rangle =|
\mathbf{R0\rangle }\bigotimes |s\rangle \bigotimes |s\func{mod}m_{1}\rangle
\\
&&\bigotimes |s\func{mod}m_{2}\rangle \bigotimes ...\bigotimes |s\func{mod}
m_{r}\rangle .
\end{eqnarray*}
Here each state $|s\func{mod}m_{k}\rangle =|s_{k}\rangle $ occupies one
register for $k=1,2,...,r$. Now, substituting the state identity (13) for
the index state $|s\rangle ,$ the state $|\Phi _{1}\rangle $ is expressed as
\begin{eqnarray*}
|\Phi _{1}\rangle &=&|\mathbf{R0\rangle }\bigotimes
|(n_{1}M_{1}s_{1}+n_{2}M_{2}s_{2}+...+n_{r}M_{r}s_{r})\func{mod}(p-1)\rangle
\\
&&\bigotimes |s_{1}\rangle \bigotimes |s_{2}\rangle \bigotimes ...\bigotimes
|s_{r}\rangle .
\end{eqnarray*}
In order to remove unitarily the composite state $|\sum_{k}n_{k}M_{k}s_{k}
\func{mod}(p-1)\rangle $ in the state $|\Phi _{1}\rangle ,$ one needs to
perform a series of modular multiplication operations $M_{p-1}(\alpha
,\beta ,\gamma )$ and inverse modular addition operations $
ADD_{p-1}^{+}(\alpha ,\beta )$ on the state $|\Phi _{1}\rangle ,$ for
example,
\begin{eqnarray*}
&&|\Phi _{1}\rangle \stackrel{F_{n_{1}M_{1}}}{\rightarrow }\ \ \stackrel{
M_{p-1}(\alpha _{1},\beta _{1},\gamma _{1})}{\rightarrow } \\
&&|\mathbf{R0\rangle }\bigotimes |n_{1}M_{1}\func{mod}(p-1)\rangle
|n_{1}M_{1}s_{1}\func{mod}(p-1)\rangle \\
&&\bigotimes |(n_{1}M_{1}s_{1}+n_{2}M_{2}s_{2}+...+n_{r}M_{r}s_{r})\func{mod}
(p-1)\rangle \\
&&\bigotimes |s_{1}\rangle \bigotimes |s_{2}\rangle \bigotimes ...\bigotimes
|s_{r}\rangle \\
&&\stackrel{ADD_{p-1}(\alpha _{1},\beta _{1})^{+}}{\rightarrow }|\mathbf{
R0\rangle }\bigotimes |n_{1}M_{1}\func{mod}(p-1)\rangle |n_{1}M_{1}s_{1}
\func{mod}(p-1)\rangle \\
&&\bigotimes |(n_{2}M_{2}s_{2}+n_{3}M_{3}s_{3}+...+n_{r}M_{r}s_{r})\func{mod}
(p-1)\rangle \\
&&\bigotimes |s_{1}\rangle \bigotimes |s_{2}\rangle \bigotimes ...\bigotimes
|s_{r}\rangle \\
&&\stackrel{M_{p-1}^{+}(\alpha _{1},\beta _{1},\gamma _{1})}{\rightarrow }\
\ \stackrel{F_{n_{1}M_{1}}^{+}}{\rightarrow } \\
&&|\mathbf{R0\rangle }\bigotimes
|(n_{2}M_{2}s_{2}+n_{3}M_{3}s_{3}+...+n_{r}M_{r}s_{r})\func{mod}(p-1)\rangle
\\
&&\bigotimes |s_{1}\rangle \bigotimes |s_{2}\rangle \bigotimes ...\bigotimes
|s_{r}\rangle .
\end{eqnarray*}
The unitary transformation process in the example is stated below. The state
$|n_{1}M_{1}\func{mod}(p-1)\rangle $ is first created by the unitary
transformation: $F_{n_{1}M_{1}}|0\rangle =|n_{1}M_{1}\func{mod}(p-1)\rangle
, $ then the modular multiplication operation $M_{p-1}(\alpha _{1},\beta
_{1},\gamma _{1})$ acts on both the states $|n_{1}M_{1}\func{mod}
(p-1)\rangle $ and $|s_{1}\rangle $ to generate the state $|n_{1}M_{1}s_{1}
\func{mod}(p-1)\rangle ,$ and then the modular subtraction operation or the
inverse modular addition operation $ADD_{p-1}^{+}(\alpha _{1},\beta _{1})$
acts on both the state $|n_{1}M_{1}s_{1}\func{mod}(p-1)\rangle $ and the
composite state $|\sum_{k}n_{k}M_{k}s_{k}\func{mod}(p-1)\rangle $ so that
the composite state is changed to the state $
|(n_{2}M_{2}s_{2}+n_{3}M_{3}s_{3}+...+n_{r}M_{r}s_{r})\func{mod}(p-1)\rangle
.$ After these unitary transformations the unitary operations $
M_{p-1}^{+}(\alpha ,\beta ,\gamma )$ and $F_{n_{1}M_{1}}^{+}$ are used to
convert the states $|n_{1}M_{1}\func{mod}(p-1)\rangle $ and $|n_{1}M_{1}s_{1}
\func{mod}(p-1)\rangle $ back to the state $|0\rangle .$ Clearly, the whole
unitary transformation process cancels the term $n_{1}M_{1}s_{1}\func{
mod}(p-1)$ in the composite state $|\sum_{k}n_{k}M_{k}s_{k}\func{mod}
(p-1)\rangle $ of the state $|\Phi _{1}\rangle .$ If this unitary
transformation process is repeated $r$ times with the different unitary
operations $F_{n_{k}M_{k}},$ $M_{p-1}(\alpha _{k},\beta _{k},\gamma _{k}),$
and $ADD_{p-1}^{+}(\alpha _{k},\beta _{k})$ for $k=1,2,...,r,$ then the
composite state $|\sum_{k}n_{k}M_{k}s_{k}\func{mod}(p-1)\rangle $ in the
state $|\Phi _{1}\rangle $ is ultimately converted into the state $
|0\rangle .$ Therefore, with the help of the state identity (13), the index
state $|\mathbf{R0\rangle }\bigotimes |s\rangle $
can be efficiently converted into the state $|\Phi _{2}\rangle :$
\begin{eqnarray*}
|\mathbf{R0\rangle }\bigotimes |s\rangle &\rightarrow &|\Phi _{2}\rangle =|
\mathbf{R0\rangle }\bigotimes |s\func{mod}m_{1}\rangle \\
&&\bigotimes |s\func{mod}m_{2}\rangle \bigotimes ...\bigotimes |s\func{mod}
m_{r}\rangle .
\end{eqnarray*}
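The cancellation rounds described above can be traced classically. In the
following sketch (illustrative values $p=61$, $s=23$), the composite
register holds $\sum_{k}n_{k}M_{k}s_{k}\func{mod}(p-1)$ and loses its $k=1$
term after one round of modular multiplication followed by the inverse
modular addition:

```python
# Classical trace of one cancellation round acting on the composite register.
p = 61
m_list = [4, 3, 5]
s = 23

terms = []
for m_k in m_list:
    M_k = (p - 1) // m_k
    n_k = pow(M_k, -1, m_k)
    terms.append((n_k * M_k * (s % m_k)) % (p - 1))   # n_k*M_k*s_k mod (p-1)

composite = sum(terms) % (p - 1)   # contents of the composite register
assert composite == s              # identity (13) again

# One round: form n_1*M_1*s_1 mod (p-1), then subtract it modularly.
composite = (composite - terms[0]) % (p - 1)
assert composite == sum(terms[1:]) % (p - 1)
print(composite)                   # 8: the k = 1 term has been cancelled
```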
The state $|\Phi _{2}\rangle $ is a tensor product of the $r$ states $\{|s
\func{mod}m_{k}\rangle \}$ in the $r$ different registers. In an analogous
way, the index state $|\mathbf{R0\rangle }\bigotimes |s\rangle $ can also be
efficiently converted into a tensor product of the $r$ states $\{|M_{k}s
\func{mod}(p-1)\rangle \}$ in the $r$ different registers,
\begin{eqnarray*}
|\mathbf{R0\rangle }\bigotimes |s\rangle &\rightarrow &|\Phi _{3}\rangle =|
\mathbf{R0\rangle }\bigotimes |M_{1}s\func{mod}(p-1)\rangle \\
&&\bigotimes |M_{2}s\func{mod}(p-1)\rangle \bigotimes ...\bigotimes |M_{r}s
\func{mod}(p-1)\rangle .
\end{eqnarray*}
Since there is the identity $M_{k}s_{k}\equiv M_{k}s\func{mod}(p-1)$ for $
k=1,2,...,r$, the state $|M_{k}s\func{mod}(p-1)\rangle =|M_{k}s_{k}\func{mod}
(p-1)\rangle .$ By using the inverse discrete logarithmic unitary operation $
U_{\log }^{+}(g)$ the state $|M_{k}s_{k}\func{mod}(p-1)\rangle $ can be
converted into the state $|(g^{M_{k}})^{s_{k}}\func{mod}p\rangle $ which
belongs to the state subspace $S(C_{p_{k}^{a_{k}}})$ of the cyclic subgroup $
C_{p_{k}^{a_{k}}}.$ On the other hand, using the inverse discrete
logarithmic unitary operation $U_{\log }^{+}(g^{M_{k}})$ with the
logarithmic base $g^{M_{k}}$ the state $|s\func{mod}m_{k}\rangle ,$ i.e., $
|s_{k}\rangle ,$ can also be converted to the same state $
|(g^{M_{k}})^{s_{k}}\func{mod}p\rangle .$ These results show that the
auxiliary oracle unitary operation $\exp \{-i\theta |(g^{M_{k}})^{s_{k}}
\func{mod}p\rangle \langle (g^{M_{k}})^{s_{k}}\func{mod}p|\}$ of the
multiplicative cyclic group state subspace $S(C_{p_{k}^{a_{k}}})$ can be
efficiently built out of the auxiliary oracle unitary operation $\exp
\{-i\theta (|s\func{mod}m_{k}\rangle \langle s\func{mod}m_{k}|)\}$ of the
additive cyclic group state subspace $S(Z_{m_{k}})$ or the auxiliary oracle
unitary operation $\exp \{-i\theta (|M_{k}s\func{mod}(p-1)\rangle \langle
M_{k}s\func{mod}(p-1)|)\}$ of the additive cyclic group state space $
S(Z_{p-1})$.
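That the two routes meet in the same subgroup element can be confirmed
classically: with illustrative values $p=61$ and primitive root $g=2$, the
state labels $|M_{k}s\func{mod}(p-1)\rangle $ and $|s\func{mod}m_{k}\rangle $
(with base $g^{M_{k}}$) determine the same element of $C_{p_{k}^{a_{k}}}$:

```python
# Check g^(M_k*s mod (p-1)) = (g^(M_k))^(s mod m_k) mod p for all s, k.
p, g = 61, 2          # illustrative prime and primitive root
m_list = [4, 3, 5]    # subgroup orders m_k = p_k^{a_k}

for s in range(p - 1):
    for m_k in m_list:
        M_k = (p - 1) // m_k
        lhs = pow(g, (M_k * s) % (p - 1), p)     # from |M_k s mod (p-1)>
        rhs = pow(pow(g, M_k, p), s % m_k, p)    # from |s mod m_k>, base g^{M_k}
        assert lhs == rhs
print("both routes give the same subgroup element")
```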
Now consider the multiplicative cyclic group state space $S(C_{p-1})$.
Suppose that the prime factors of the integer $
(p-1)=p_{1}^{a_{1}}p_{2}^{a_{2}}...p_{r}^{a_{r}}$ are ordered in magnitude: $
p_{1}^{a_{1}}<p_{2}^{a_{2}}<...<p_{r}^{a_{r}}$ and $p_{r}^{a_{r}}\thicksim
O(\log p).$ Then $m_{1}<m_{2}<...<m_{r}$ and $M_{1}>M_{2}>...>M_{r}.$ As
shown in section 2, the cyclic group $C_{p-1}$ with order $p-1$ is the
direct product of $r$ factor cyclic subgroups: $C_{p-1}=C_{p_{1}^{a_{1}}}
\times C_{p_{2}^{a_{2}}}\times ...\times C_{p_{r}^{a_{r}}}.$ Each such
cyclic subgroup $C_{p_{k}^{a_{k}}}$ corresponds to a state subspace $
S(C_{p_{k}^{a_{k}}})$ with dimension $p_{k}^{a_{k}}$ of the cyclic group
state space $S(C_{p-1})$. For convenience, denote $S(m_{k})\equiv
S(C_{p_{k}^{a_{k}}})$ with $m_{k}=p_{k}^{a_{k}}.$ It can be proven that the
state $|g^{sM_{k}}\func{mod}p\rangle $ is in the state subspace $S(m_{k})$
for any index integer $s.$ This is because the generator and the order of
the cyclic subgroup $C_{p_{k}^{a_{k}}}$ are $g^{M_{k}}$ and $m_{k},$
respectively, so the state identity $|g^{sM_{k}}\func{mod}
p\rangle =|(g^{M_{k}})^{s\func{mod}m_{k}}\func{mod}p\rangle $ holds for any
index $s,$ while the latter state $|(g^{M_{k}})^{s_{k}}\func{mod}p\rangle $
with the index $s_{k}=s\func{mod}m_{k}$ is just in the state subspace $S(m_{k}).$
Since the dimensional size of the cyclic group state subspace $S(m_{k})$ is
just the order $m_{k}$ of the subgroup $C_{p_{k}^{a_{k}}}$ and $
m_{1}<m_{2}<...<m_{r},$ the state $|g^{sM_{1}}\func{mod}p\rangle $ is
in the smallest state subspace $S(m_{1})$, the state $|g^{sM_{2}}\func{mod}
p\rangle $ in the second smallest subspace $S(m_{2})$, ..., and the state $
|g^{sM_{r}}\func{mod}p\rangle $ in the largest subspace $S(m_{r})$ of the $r$
state subspaces $\{S(m_{k})\}$. It follows from equation (2) in section
2 that every state of the cyclic group state space $S(C_{p-1})$ can be
expressed as
\begin{equation}
|g^{s}\func{mod}p\rangle \equiv |(g^{M_{1}})^{n_{1}s_{1}}\times
(g^{M_{2}})^{n_{2}s_{2}}\times ...\times (g^{M_{r}})^{n_{r}s_{r}}\func{mod}
p\rangle . \label{14}
\end{equation}
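Identity (14) can likewise be checked classically with the same illustrative
values $p=61$, $g=2$:

```python
# Verify identity (14): g^s = prod_k (g^{M_k})^(n_k*s_k) mod p for all s.
p, g = 61, 2          # illustrative prime and primitive root
m_list = [4, 3, 5]    # m_k = p_k^{a_k}, with p - 1 = 60

for s in range(p - 1):
    prod_val = 1
    for m_k in m_list:
        M_k = (p - 1) // m_k
        n_k = pow(M_k, -1, m_k)         # n_k*M_k = 1 (mod m_k)
        s_k = s % m_k
        prod_val = prod_val * pow(pow(g, M_k, p), n_k * s_k, p) % p
    assert prod_val == pow(g, s, p)     # identity (14) holds for this s
print("identity (14) verified")
```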
The state identity (14) plays a role similar to that of the state identity
(13) in decomposing any state of the cyclic group state space $S(C_{p-1})$
as a tensor product of the states of the state subspaces $\{S(m_{k})\}$ of
the factor cyclic subgroups $\{C_{p_{k}^{a_{k}}}\}.$ By the modular
exponentiation operation the state $|(g^{M_{k}})^{s}\func{mod}p\rangle $ of
the state subspace $S(m_{k})$ can be generated from the cyclic group state $
|g^{s}\func{mod}p\rangle ,$
\begin{eqnarray*}
|\mathbf{R0\rangle }\bigotimes |g^{s}\func{mod}p\rangle &\rightarrow &|\Phi
_{4}\rangle =|\mathbf{R0\rangle }\bigotimes |g^{s}\func{mod}p\rangle
|(g^{M_{k}})^{s}\func{mod}p\rangle \\
&=&|\mathbf{R0\rangle }\bigotimes |g^{s}\func{mod}p\rangle
|(g^{M_{k}})^{s_{k}}\func{mod}p\rangle .
\end{eqnarray*}
Repeating this modular exponentiation operation $r$ times for $k=1,2,...,r$
the state $|\mathbf{R0\rangle }\bigotimes |g^{s}\func{mod}p\rangle $ is
converted into the state $|\Phi _{5}\rangle ,$
\begin{eqnarray*}
|\mathbf{R0\rangle }\bigotimes |g^{s}\func{mod}p\rangle &\rightarrow &|\Phi
_{5}\rangle =|\mathbf{R0\rangle }\bigotimes |g^{s}\func{mod}p\rangle
\bigotimes |(g^{M_{1}})^{s_{1}}\func{mod}p\rangle \\
&&\bigotimes |(g^{M_{2}})^{s_{2}}\func{mod}p\rangle \bigotimes ...\bigotimes
|(g^{M_{r}})^{s_{r}}\func{mod}p\rangle .
\end{eqnarray*}
By using the state identity (14) together with the modular exponentiation,
modular multiplication, and COPY operations, the state $|g^{s}\func{mod}
p\rangle $ in the state $|\Phi _{5}\rangle $ can be removed unitarily, and
hence the state $|\mathbf{R0\rangle }\bigotimes |g^{s}\func{mod}p\rangle $
can be efficiently converted into a tensor product of the $r$ states $
\{|(g^{M_{k}})^{s_{k}}\func{mod}p\rangle \}$ of the $r$ different subspaces $
\{S(m_{k})\}$ in the $r$ different registers,
\begin{eqnarray*}
|\mathbf{R0\rangle }\bigotimes |g^{s}\func{mod}p\rangle &\rightarrow &|\Phi
_{6}\rangle =|\mathbf{R0\rangle }\bigotimes |(g^{M_{1}})^{s_{1}}\func{mod}
p\rangle \\
&&\bigotimes |(g^{M_{2}})^{s_{2}}\func{mod}p\rangle \bigotimes ...\bigotimes
|(g^{M_{r}})^{s_{r}}\func{mod}p\rangle .
\end{eqnarray*}
This unitary transformation is stated below. The states $
\{|(g^{M_{k}})^{n_{k}s_{k}}\func{mod}p\rangle \}$ are first generated
efficiently from the states $\{|(g^{M_{k}})^{s_{k}}\func{mod}p\rangle \}$ by
the modular exponentiation operations in temporary registers in the state $
|\Phi _{5}\rangle $ because the integers $\{n_{k}\}$ are known, as shown
before. Then by the modular multiplication operations the state $
|\prod_{k}(g^{M_{k}})^{n_{k}s_{k}}\func{mod}p\rangle $ is created
efficiently from these states $\{|(g^{M_{k}})^{n_{k}s_{k}}\func{mod}p\rangle
\}.$ The state identity (14) shows that the state $
|\prod_{k}(g^{M_{k}})^{n_{k}s_{k}}\func{mod}p\rangle $ is just the state $
|g^{s}\func{mod}p\rangle .$ Then using the COPY operation the state $|g^{s}
\func{mod}p\rangle $ can be removed from the state $|\Phi _{5}\rangle .$
After these unitary operations the states $
|\prod_{k}(g^{M_{k}})^{n_{k}s_{k}}\func{mod}p\rangle $ and $
\{|(g^{M_{k}})^{n_{k}s_{k}}\func{mod}p\rangle \}$ in the temporary registers
are returned to the state $|0\rangle $ and therefore the state $|\Phi
_{6}\rangle $ is obtained. Note that the states $|(g^{M_{j}})^{s}\func{mod}
p\rangle $ for different indices $j$ in the state $|\Phi _{6}\rangle $ belong
to different subspaces $\{S(m_{j})\}$ and to different registers. It has
been shown that any unknown state can be efficiently transferred to a larger
state subspace from a small subspace in the Hilbert space [16]. Then the
state $|(g^{M_{j}})^{s}\func{mod}p\rangle $ which is in the subspace $
S(m_{j})$ with the dimensional size $m_{j}$ may be efficiently transferred
to a larger subspace $S(m_{k})$ with dimensional size $m_{k}>m_{j}.$ Since
the dimensional size $m_{k}$ for any subspace $S(m_{k})$ is $\thicksim
O(\log p)$ and there hold $0\leq s_{k}<m_{k}$ and $m_{1}<m_{2}<...<m_{r}$,
the unitary operation for the state transfer $|(g^{M_{j}})^{s}\func{mod}
p\rangle \rightarrow |(g^{M_{k}})^{s}\func{mod}p\rangle $ for $1\leq j<k\leq
r$ can always be constructed efficiently [16]. If the state transfer is now
carried out from each smaller subspace $S(m_{k})$ ($k\neq r)$ to the largest
subspace $S(m_{r}),$ that is, $|(g^{M_{k}})^{s_{k}}\func{mod}p\rangle
\rightarrow |(g^{M_{r}})^{s_{k}}\func{mod}p\rangle $ for $k=1,2,...,r-1,$
then the state $|\Phi _{6}\rangle $ is directly changed to a tensor
product of the $r$ states $\{|(g^{M_{r}})^{s_{k}}\func{mod}p\rangle ,$ $
k=1,2,...,r\}$ of the largest subspace $S(m_{r})$ in the $r$ different
registers, respectively:
\begin{eqnarray*}
|\Phi _{6}\rangle &\rightarrow &|\Phi _{7}\rangle =|\mathbf{R0\rangle }
\bigotimes |(g^{M_{r}})^{s_{1}}\func{mod}p\rangle \\
&&\bigotimes |(g^{M_{r}})^{s_{2}}\func{mod}p\rangle \bigotimes ...\bigotimes
|(g^{M_{r}})^{s_{r}}\func{mod}p\rangle ,
\end{eqnarray*}
where the state transfers can be performed in parallel in the first
$r-1$ registers of the state $|\Phi _{6}\rangle .$ The state $|\Phi
_{7}\rangle $ shows that any state $|g^{s}\func{mod}p\rangle $ of the cyclic
group state space $S(C_{p-1})$ can be efficiently converted into a tensor
product of $r$ cyclic group states of the largest subspace $S(m_{r})$.
If the index state $|s\rangle $ is unknown, then in the state $|\Phi
_{7}\rangle $ all the states $\{|(g^{M_{r}})^{s_{k}}\func{mod}p\rangle
,k=1,2,...,r\}$ are also unknown and they carry the complete information of
the index state $|s\rangle $. Evidently, if the initial index state $
|s\rangle $ or the initial cyclic group state $|g^{s}\func{mod}p\rangle $ is
replaced with a superposition, then the above state transformations work as
well.
For the discrete logarithmic problem it is much simpler to generate unitarily
the auxiliary oracle unitary operation $\overline{U}_{ojs}(\theta )=\exp
[-i\theta \overline{D}_{js}(g)]$ with the diagonal operator $\overline{D}
_{js}(g)=|\mathbf{R0\rangle }\langle \mathbf{R0|}\bigotimes |g^{js}\func{mod}
p\rangle \langle g^{js}\func{mod}p|$ and $j=M_{k}$ or even $j=(p-1)/p_{k}$
from the basic oracle unitary operation $\overline{U}_{os}(\theta )=\exp
[-i\theta \overline{D}_{s}(g)]$ in polynomial time.
achieved directly by the state transformation: $|\mathbf{R0\rangle }
\bigotimes |g^{s}\func{mod}p\rangle \rightarrow |\Phi _{4}\rangle $ without
using any state identity (13) or (14). This is because $(i)$ the integer $
b=g^{s}\func{mod}p$ is given beforehand and hence the oracle unitary
operation $\overline{U}_{os}(\theta )$ can be efficiently constructed in
advance, and $(ii)$ the known state $|g^{s}\func{mod}p\rangle $ in the state
$|\Phi _{4}\rangle $ can be efficiently converted to the state $|0\rangle .$
Therefore, using the auxiliary oracle unitary operation $\overline{U}
_{ojs}(\theta )$ and the standard quantum search algorithm, one can solve
the discrete logarithmic problem efficiently in polynomial time if the
dimensional size $m_{k}$ of every cyclic group state subspace $S(m_{k})$ is
$\thicksim O(\log p).$ This quantum discrete logarithmic algorithm is
similar to its classical counterpart [21]. Combined with the quantum
discrete logarithmic algorithm of section 3, this algorithm achieves an even
greater speedup.
However, the quantum search problem is much harder than the discrete
logarithmic problem. The auxiliary oracle unitary operations corresponding
to the states $|\Phi _{2}\rangle $ and $|\Phi _{7}\rangle $ may still be
unsuitable for the quantum search task, because the factor states $\{|s\func{
mod}m_{k}\rangle \}$ in the state $|\Phi _{2}\rangle $ or $
\{|(g^{M_{r}})^{s_{k}}\func{mod}p\rangle \}$ in the state $|\Phi _{7}\rangle
$ that carry the complete information of the index state $|s\rangle $ are
distributed over the $r$ different registers, and this makes the search
space too large for the quantum search problem. There are two possible
schemes to solve this problem. One scheme is to compress unitarily all these
$r$ states in the $r$ different registers into a single register in the
state $|\Phi _{2}\rangle $ or $|\Phi _{7}\rangle ;$ this scheme limits the
quantum search space to the largest cyclic group state subspace $
S(Z_{m_{r}})$ or $S(m_{r}).$ Since the dimension of the state subspace $
S(Z_{m_{r}})$ or $S(m_{r})$ is $m_{r}\thicksim O(\log p),$ the quantum
search process may be implemented efficiently in these state subspaces. The
other scheme is to keep only one desired state and remove unitarily the
other $r-1$ states in the state $|\Phi _{2}\rangle $ or $|\Phi _{7}\rangle .$
For example, one may return all the states $|(g^{M_{r}})^{s_{j}}\func{mod}
p\rangle $ for $j\neq k$ unitarily to the known state $|0\rangle $ while
only the desired state $|(g^{M_{r}})^{s_{k}}\func{mod}p\rangle $ is retained
in the state $|\Phi _{7}\rangle $. It may be best to use the two schemes
together. In the next section a possible algorithm on a universal quantum
computer is proposed to further reduce the quantum search space for the
state $|\Phi _{7}\rangle $ in the multiplicative cyclic group state space $
S(C_{p-1})$, while the reduction of the quantum search space on the basis of
the state $|\Phi _{2}\rangle $ in the additive cyclic group state space $
S(Z_{p-1})$ is left to future work. \newline
\newline
{\large 5. An efficient reduction for the quantum search space on an ideal
universal quantum computer}
A universal quantum computer [29, 38, 40] should be capable of computing any
recursive function in mathematics, and any computational process on it obeys
the unitary quantum dynamics of physics. Now a quantum computational program
based on reversible computation [26, 27] is designed to transform some of
the states $\{|(g^{M_{r}})^{s_{k}}\func{mod}p\rangle \}$ back to the known
state $|0\rangle $ while keeping the desired state in the state $|\Phi
_{7}\rangle $. This quantum program $Q_{p}$ may run on a universal quantum
computer [29, 38, 40]. It is given by
\[
|n_{h}\rangle =|0\rangle
\]
\[
|b_{h}\rangle =|0\rangle
\]
\[
\text{For }i=1\text{ to }m_{r}
\]
\[
\text{If }|g_{r}(y)\rangle =|1\rangle \text{ then }|b_{h}\rangle \rightarrow
|b_{h}+1\rangle \text{ end if}
\]
\[
\text{When }|g_{r}(y)\rangle =|1\rangle ,\text{ Do }|g_{r}(y)\rangle
|n_{h}\rangle =|1\rangle |0\rangle \rightarrow |0\rangle |0\rangle ,\text{ }
|n_{h}\rangle =|0\rangle \rightarrow |1\rangle ,\text{ halt}
\]
\[
\text{If }|b_{h}\rangle =|0\rangle \text{ then}
\]
\[
U_{g^{M_{r}}}|f_{r}(x)\rangle |g_{r}(y)\rangle
\]
\[
U_{r}|f_{r}(x)\rangle |g_{r}(y)\rangle
\]
\[
\text{else }U_{g^{M_{r}}}|f_{r}(x)\rangle |g_{r}(y)\rangle \text{ end if}
\]
\[
\text{end for.}
\]
The quantum program $Q_{p}$ can be written as $Q_{p}=\{Q_{u}
\}^{m_{r}},$ in which the basic operational unit $Q_{u}$ is executed
repeatedly $m_{r}$ times. The basic operational unit $Q_{u}$ may be formally
expressed as $Q_{u}=\{U_{r}^{c}U_{g^{M_{r}}}P^{c}\}$, where the operation $
P^{c}$ executes the two statements:\ $^{\prime \prime }$If $|g_{r}(y)\rangle
=|1\rangle $ then $|b_{h}\rangle \rightarrow |b_{h}+1\rangle $ end if$
^{\prime \prime }$ and $^{\prime \prime }$When $|g_{r}(y)\rangle =|1\rangle
, $ Do $|g_{r}(y)\rangle |n_{h}\rangle =|1\rangle |0\rangle \rightarrow
|0\rangle |0\rangle ,$ $|n_{h}\rangle =|0\rangle \rightarrow |1\rangle ,$
halt$^{\prime \prime },$ the operation $U_{r}^{c}$ conditionally performs
the unitary operation $U_{r}$ if the branch-control state $|b_{h}\rangle
=|0\rangle $, and the operation $U_{g^{M_{r}}}$ performs the unitary cyclic
group operation of the cyclic subgroup $C_{p_{r}^{a_{r}}}.$ The state $
|n_{h}\rangle $ is the halting state of the quantum program and belongs to
an independent two-dimensional state space $\{|0\rangle ,|1\rangle \}$. The
branch-control state $|b_{h}\rangle $ belongs to a larger and independent
state space $\{|0\rangle ,$ $|1\rangle ,$ $|2\rangle ,$ $...\}$ instead of a
simple two-dimensional state space. The index $i$ ($1\leq i\leq m_{r}$)
denotes the number of basic operational units $Q_{u}$ that have already been
executed. In the quantum program the functions $f_{r}(x)$ and $g_{r}(x)$ are
$f_{r}(x)=g_{r}(x)=(g^{M_{r}})^{x}\func{mod}p$ for $0\leq x<m_{r}.$ Both
functions are periodic, $f_{r}(x)=f_{r}(x+m_{r})$ and $
g_{r}(y)=g_{r}(y+m_{r}),$ and they also satisfy $f_{r}(x)=g_{r}(x)=1$ for $
x=0\func{mod}m_{r}.$ In the quantum program the cyclic group operation $
U_{g^{M_{r}}}$ acts on only the state $|f_{r}(x)\rangle ,$
\[
U_{g^{M_{r}}}|f_{r}(x)\rangle |g_{r}(y)\rangle =|f_{r}(x+1)\rangle
|g_{r}(y)\rangle ,
\]
while the state transformation of the unitary operation $U_{r}$ is defined
by
\begin{equation}
U_{r}|f_{r}(x)\rangle |g_{r}(y)\rangle =\left\{
\begin{array}{c}
|f_{r}(x)\rangle |g_{r}(y)\rangle ,\text{ if }x+y\neq 0\func{mod}m_{r}. \\
|f_{r}(x)\rangle |1\rangle ,\text{ if }x+y=0\func{mod}m_{r}.
\end{array}
\right. \label{15}
\end{equation}
Note that for any given indices $x$ and $y$ ($0\leq x,y<m_{r}$) there is a
unique index $i$ $(1\leq i\leq m_{r})$ such that $x+y+i=0\func{mod}m_{r}.$
Therefore, for given indices $x$ and $y$ there is a unique index $i$ $(1\leq
i\leq m_{r})$ at which the unitary operation $U_{r}$ in the quantum program
changes the state $|f_{r}(x+i)\rangle |g_{r}(y)\rangle $ to the state $
|f_{r}(x+i)\rangle |1\rangle .$
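The net effect just described can be traced with a small classical
simulation. The values below are illustrative, and the halting statement is
modelled simply by freezing the branch flag once it has been set (an
assumption matching the behaviour claimed in the text):

```python
# Classical trace of one run of Q_p: input (b_h=0, f_r(x), g_r(y)) should
# end as (b_h=1, f_r(x), 1) after m_r rounds of the basic unit Q_u.
m_r = 9                   # illustrative subgroup order m_r = p_r^{a_r}
x0, y = 4, 3              # initial exponents; g_r(y) != 1 since y != 0 mod m_r
x, g_is_one, b_h = x0, False, 0

for i in range(1, m_r + 1):
    if g_is_one and b_h == 0:
        b_h = 1           # branch flag set once, then frozen ("halt" statement)
    if b_h == 0:
        x = (x + 1) % m_r          # U_{g^{M_r}} advances the f_r register
        if (x + y) % m_r == 0:
            g_is_one = True        # U_r overwrites the second register with |1>
    else:
        x = (x + 1) % m_r          # only U_{g^{M_r}} after the flag is set

assert x == x0            # m_r applications of U_{g^{M_r}} wrap around
print(x, b_h, g_is_one)   # 4 1 True, i.e. |b_h=1>|f_r(x)>|1>
```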
In order to explain clearly how the quantum program $Q_{p}$ works, the
statement $^{\prime \prime }$When $|g_{r}(y)\rangle =|1\rangle ,$ Do $
|g_{r}(y)\rangle |n_{h}\rangle =|1\rangle |0\rangle \rightarrow |0\rangle
|0\rangle ,$ $|n_{h}\rangle =|0\rangle \rightarrow |1\rangle ,$ halt$
^{\prime \prime },$ which involves the halting protocol of the quantum
Turing machine [29], is temporarily set aside. The
quantum program starts at the initial state $|b_{h}=0\rangle
|f_{r}(x)\rangle |g_{r}(y)\rangle $ of the quantum system of a universal
quantum computer. The program first checks whether the state $
|g_{r}(y)\rangle $ is $|1\rangle $ or not. If so, then the branch-control
state $|b_{h}\rangle =|0\rangle $ is changed to the state $|1\rangle ;$
otherwise it remains unchanged. If the branch-control state $|b_{h}\rangle $
is not $|0\rangle ,$ then the program performs only the cyclic group
operation $U_{g^{M_{r}}};$ otherwise ($|b_{h}\rangle =|0\rangle $) it
executes another unitary operation sequence, that is, it first executes the
cyclic group operation $U_{g^{M_{r}}}$ and then the unitary operation $
U_{r}.$ At the end of the step ($i=1$) the quantum system is either $(a)$
in the state $|b_{h}=1\rangle |f_{r}(x+1)\rangle |1\rangle $ if the initial
state $|g_{r}(y)\rangle =|1\rangle $ or $(b)$ in the state $|b_{h}=0\rangle
|f_{r}(x+1)\rangle |1\rangle $ if the initial state $|g_{r}(y)\rangle \neq
|1\rangle $ but $x+y+1=0\func{mod}m_{r}$ or $(c)$ in the state $
|b_{h}=0\rangle |f_{r}(x+1)\rangle |g_{r}(y)\rangle $ if the initial state $
|g_{r}(y)\rangle \neq |1\rangle $ and $x+y+1\neq 0\func{mod}m_{r}.$
Therefore, at the next step ($i=2$) the three situations need to be
considered separately. For the case $(a),$ since the state $|g_{r}(y)\rangle
=|1\rangle $ and the branch-control state $|b_{h}\rangle =|1\rangle ,$ the
program performs only the cyclic group operation $U_{g^{M_{r}}},$ which
converts the state $|b_{h}=1\rangle |f_{r}(x+1)\rangle |1\rangle $ into the
state $|b_{h}=1\rangle |f_{r}(x+2)\rangle |1\rangle .$ Evidently, once the
state $|g_{r}(y)\rangle $ is transformed to the state $|1\rangle $ and then
the state $|b_{h}\rangle $ to the state $|1\rangle ,$ the two states $
|g_{r}(y)\rangle $ and $|b_{h}\rangle $ are kept at the state $|1\rangle $
in the following steps up to the end of the program, and hence the program
performs only the cyclic group operation $U_{g^{M_{r}}}$ to the end ($
i=m_{r}$). At the end the quantum system is then in the state $
|b_{h}=1\rangle |f_{r}(x+m_{r})\rangle |1\rangle =|b_{h}=1\rangle
|f_{r}(x)\rangle |1\rangle .$ For the case $(b),$ since the state $
|g_{r}(y)\rangle =|1\rangle $, then the branch-control state $|b_{h}\rangle
=|0\rangle $ is changed to $|1\rangle ,$ that is, $|b_{h}=0\rangle
|f_{r}(x+1)\rangle |1\rangle $ is transformed to $|b_{h}=1\rangle
|f_{r}(x+1)\rangle |1\rangle $ which will be further changed to the state $
|b_{h}=1\rangle |f_{r}(x)\rangle |1\rangle $ at the end of the program, as
explained in the case $(a)$. For the case $(c)$, just like at the end of the
step ($i=1$), at the end of the step ($i=2$) there are also three situations
to be considered again and these situations can be analyzed in a similar way
given in the step ($i=1$). The analysis shows that when the program is at
the $k-$th step $(i=k)$ such that $x+y+k=0\func{mod}m_{r},$ the quantum
system is changed from the state $|b_{h}=0\rangle |f_{r}(x+k-1)\rangle
|g_{r}(y)\rangle $ with $|g_{r}(y)\rangle \neq |1\rangle $ at the beginning
to the state $|b_{h}=0\rangle |f_{r}(x+k)\rangle |1\rangle $ at the end of
the $k-$th step by the unitary operation $U_{r}$. At the following step ($
i=k+1$) the branch-control state $|b_{h}\rangle =|0\rangle $ is transformed
to the state $|1\rangle .$ Then starting from the step $(i=k+1)$ the quantum
system is acted on only by the cyclic group operation $U_{g^{M_{r}}}$ and
this action continues to the end of the program. The final state ($i=m_{k}$)
of the quantum system therefore is $|b_{h}=1\rangle |f_{r}(x+m_{r})\rangle
|1\rangle =|b_{h}=1\rangle |f_{r}(x)\rangle |1\rangle .$ Thus, after
execution of the whole quantum program one time the input state $
|b_{h}=0\rangle |f_{r}(x)\rangle |g_{r}(y)\rangle $ is changed to the output
state $|b_{h}=1\rangle |f_{r}(x)\rangle |1\rangle .$
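The branching logic above can be traced classically on a single basis state. The following sketch is a toy model with illustrative names, not the authors' construction: the states $|f_{r}(x)\rangle $ and $|g_{r}(y)\rangle $ are tracked by their exponents in $Z_{m_{r}}$ (exponent $0$ encoding the group identity state $|1\rangle $), and the erased number state $|0\rangle $ is encoded as \texttt{None}.

```python
def run_Qp(m_r, x, y):
    """Toy classical walk-through of the quantum program Q_p on one
    basis state.  f and g hold exponents in Z_{m_r} (exponent 0 encodes
    the group identity state |1>); g becomes None once the halting
    statement has erased it to the number state |0>."""
    f, g = x % m_r, y % m_r
    b_h, n_h = 0, 0                    # branch-control and halting bits
    for i in range(1, m_r + 1):
        # conditional operation U_b: flip b_h when |g_r(y)> = |1>
        if g == 0:
            b_h = (b_h + 1) % 2
        # halting statement: U_h erases |g_r(y)> = |1>, P_c sets n_h,
        # and the halt instruction keeps n_h = 1 from then on
        if g == 0 and n_h == 0:
            g, n_h = None, 1
        f = (f + 1) % m_r              # cyclic group operation U_{g^{M_r}}
        if b_h == 0 and g is not None and (f + g) % m_r == 0:
            g = 0                      # unitary operation U_r: g_r -> |1>
    return n_h, b_h, f, g
```

For instance, with $m_{r}=5$ both the input $(x,y)=(2,1)$ (the case $|g_{r}(y)\rangle \neq |1\rangle $) and $(x,y)=(2,0)$ (the case $(b)$) end with $n_{h}=b_{h}=1$ and $f_{r}(x)$ restored, matching the claimed output state $|1\rangle |1\rangle |f_{r}(x)\rangle |0\rangle .$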
However, there is a precondition for the quantum program to work as stated
above: once the state $|g_{r}(y)\rangle $ is changed to the state $
|1\rangle $ by the unitary operation $U_{r},$ the branch-control state $
|b_{h}\rangle =|0\rangle $ is changed to the state $|1\rangle $ and \textit{
from then on the branch-control state }$|b_{h}\rangle =|1\rangle $\textit{\ is
kept unchanged to the end of the program}. This precondition may be achieved
by the statement: $^{\prime \prime }$When $|g_{r}(y)\rangle =|1\rangle ,$ Do
$|g_{r}(y)\rangle |n_{h}\rangle =|1\rangle |0\rangle \rightarrow |0\rangle
|0\rangle ,$ $|n_{h}\rangle =|0\rangle \rightarrow |1\rangle ,$ halt$
^{\prime \prime }$ in the program. This statement is executed after the
branch-control state $|b_{h}\rangle =|0\rangle $ is changed to the state $
|1\rangle .$ The statement shows that once the state $|g_{r}(y)\rangle $
goes to the state $|1\rangle ,$ the state $|g_{r}(y)\rangle |n_{h}\rangle
=|1\rangle |0\rangle $ is changed to the state $|0\rangle |0\rangle $ which
means that the state $|g_{r}(y)\rangle =|1\rangle $ is changed to the state $
|0\rangle $ conditionally when the halting state $|n_{h}\rangle =|0\rangle ,$
then the halting state $|n_{h}\rangle =|0\rangle $ is changed to the state $
|1\rangle ,$ and \textit{from then on the halting state }$|n_{h}\rangle
=|1\rangle $\textit{\ is kept unchanged to the end of the program}, which is
enforced by the instruction $^{\prime \prime }$halt$^{\prime \prime }$ of
the statement. There are three operations in the statement: the first is the
unitary operation $U_{h}:|g_{r}(y)\rangle |n_{h}\rangle =|1\rangle |0\rangle
\leftrightarrow |0\rangle |0\rangle ,$ the second is the trigger pulse $
P_{c} $ on the halting qubit$:|n_{h}\rangle =|0\rangle \leftrightarrow
|1\rangle ,$ and the last operation $T(n):^{\prime \prime }$halt$^{\prime
\prime },$ which may involve the unitary nondemolition measurement
operation on the halting qubit [29, 40], keeps the halting qubit at the
state $|n_{h}\rangle =|1\rangle $ unchanged until the end of the program. It
can be shown that if the halting state $|n_{h}\rangle =|1\rangle $ can be
kept unchanged, then the branch-control state $|b_{h}\rangle =|1\rangle $
can also be kept unchanged. Suppose that at the $i$-th step of the program
the state $|g_{r}(y)\rangle $ goes to the state $|1\rangle ;$ then at the $
(i+1)$-th step the state $|b_{h}\rangle $ goes to the state $|1\rangle $
which will stop the unitary operation $U_{r}$ later, and then the state $
|g_{r}(y)\rangle =|1\rangle $ is changed to the state $|0\rangle $ and the
halting state $|n_{h}\rangle $ enters the state $|1\rangle .$ Note that the
cyclic group operation $U_{g^{M_{r}}}$ does not affect the state $
|g_{r}(y)\rangle $ and the unitary operation $U_{r}$ now is halted. Now at
the $(i+2)$-th step the conditional unitary operation $U_{b}:|b_{h}\rangle
\rightarrow |b_{h}+1\rangle $ does not change the state $|b_{h}\rangle
=|1\rangle $ because the state $|g_{r}(y)\rangle =|0\rangle ,$ and the
unitary operation $U_{h}:|g_{r}(y)\rangle |n_{h}\rangle =|1\rangle |0\rangle
\leftrightarrow |0\rangle |0\rangle $ also has no net effect on the quantum
system because the state $|g_{r}(y)\rangle |n_{h}\rangle =|0\rangle
|1\rangle $ now. Though the unitary operation $P_{c}:|n_{h}\rangle
=|0\rangle \leftrightarrow |1\rangle $ could change the halting state $
|n_{h}\rangle =|1\rangle $ back to the state $|0\rangle ,$ the halting
state $|n_{h}\rangle =|1\rangle $ is protected by the halting operation $
T(i+2)$ from the action of the unitary operation $P_{c},$ so that it still
stays in the same state $|1\rangle $ at this step, and this is the key point
of the whole quantum program. Thus, from the $(i+2)$-th step to the end of
the program the halting state is kept at the state $|1\rangle $ and hence
the branch-control state is also kept at the state $|1\rangle $. Obviously,
when the whole quantum program includes the statement: $^{\prime \prime }$
When $|g_{r}(y)\rangle =|1\rangle ,$ Do $|g_{r}(y)\rangle |n_{h}\rangle
=|1\rangle |0\rangle \rightarrow |0\rangle |0\rangle ,$ $|n_{h}\rangle
=|0\rangle \rightarrow |1\rangle ,$ halt$^{\prime \prime },$ the output
state $|n_{h}\rangle |b_{h}\rangle |f_{r}(x)\rangle |g_{r}(y)\rangle $ is $
|1\rangle |1\rangle |f_{r}(x)\rangle |0\rangle $ if the input state is $
|0\rangle |0\rangle |f_{r}(x)\rangle |g_{r}(y)\rangle .$
One might ask whether the unitarity of the quantum program is
destroyed, because the different input states $|0\rangle |0\rangle
|f_{r}(x)\rangle |g_{r}(y)\rangle $ with different states $|g_{r}(y)\rangle $
all lead to the same output state $|1\rangle |1\rangle
|f_{r}(x)\rangle |0\rangle .$ Actually, there is a different index $i$ ($
1\leq i\leq m_{r}$) such that $(x+y+i)=0\func{mod}m_{r}$ for a different
input state $|0\rangle |0\rangle |f_{r}(x)\rangle |g_{r}(y)\rangle $ where
the state $|f_{r}(x)\rangle $ may be fixed. Then there is a different time
(e.g., the $i$-th step) for the state $|g_{r}(y)\rangle $ to go to the state
$|1\rangle $ and for the halting operation $T(i)$ to act on the quantum
system. In effect, the halting operation $T(i)$ acting on the quantum system
at a different time $i$ is equivalent to the quantum program acting on the
input state with a different unitary operation. According to the
universal quantum computer model [29], the halting state $|n_{h}\rangle $
should be periodically observed from the outside in a unitary and
nondemolition form so that once the halting state is found at the state $
|1\rangle $ the halting operation $T(i)$ starts to act on the quantum system
of the quantum computer. Before the halting operation $T(i)$ takes an action
the quantum system has already been made a unitary transformation $U(i)$
which is clearly dependent on the time $i$. Obviously, this unitary
transformation is generally different if the halting operation $T(i)$ acts
at a different time, and this is clearly the case for the current quantum
program as well. Therefore, the same output state $|1\rangle
|1\rangle |f_{r}(x)\rangle |0\rangle $ is obtained from different input
state $|0\rangle |0\rangle |f_{r}(x)\rangle |g_{r}(y)\rangle $ by a
different unitary transformation in the current quantum program. Although
different input states cannot be converted to the same output state by one
and the same unitary transformation, they may well be mapped to the same
output state by different unitary transformations. Therefore, the quantum
program keeps its unitarity.
The key point to make the quantum program $Q_{p}$ work as stated above is
that the halting protocol of quantum Turing machine is available and must be
unitary. The unitarity for the halting protocol of quantum Turing machine is
crucial for the quantum program when it is used to solve the quantum search
problem based on the quantum unitary dynamics. The conventional
measurement operation in quantum computation usually is not unitary and
some information may be lost during the measurement, but this usually does
not much affect the final computing results. In contrast, the current
halting operation, which contains the unitary nondemolition measurement
operation, must be unitary since it could carry some information of the
input state, as shown before. This information could be necessary because
in theory the inverse halting operation, which contains the inverse
unitary process of the nondemolition measurement operation, could be
needed for solving the quantum search problem based on the unitary quantum
dynamics.
The quantum program $Q_{p}$ is really assumed to run on an ideal universal
quantum computer which has the unitary halting protocol of quantum Turing
machine. Obviously, this program is trivial and could be irreversible if it
runs on a conventional classical computer, but it could be simulated
efficiently by the reversible computation [26, 27, 28]. The quantum program
could also be efficiently performed on a quantum Turing machine (QTM) [29,
37, 40], as analyzed above. In fact, in a quantum Turing machine one may set
directly the halting state $|n_{h}\rangle $ in the program to be the QTM
halting-control state to control the quantum program. Once the state $
|g_{r}(y)\rangle $ is $|1\rangle $ in the program the halting state $
|n_{h}\rangle =|0\rangle $ is changed to the state $|1\rangle ,$ then the
program stops performing the operational branch consisting of the two
unitary operations $U_{g^{M_{r}}}$ and $U_{r}$ but turns to perform another
operational branch of a single cyclic group operation $U_{g^{M_{r}}}$ to the
end ($i=m_{r}$), which ensures that the whole process of the program is
unitary, as pointed out in [41c]. However, there is a hidden basic assumption
that \textit{any input state of the quantum program is a single basis state}
. This basic assumption could ensure that the halting protocol of quantum
Turing machine could be made available and unitary for the quantum program
on an ideal universal quantum computer [29, 40, 41a-41d].
However, if the input state of the quantum program is a superposition $
\sum_{s}\alpha _{s}|n_{h}=0\rangle |b_{h}=0\rangle |f_{r}(x(s))\rangle
|g_{r}(y(s))\rangle ,$ there seems to be a question whether the halting
protocol can be available and unitary or not on a quantum Turing machine
[41a-41d] when the quantum program is run on the QTM. This is
because in this situation there are many operational branches to be executed
simultaneously, and one does not know in advance when the state $
|g_{r}(y(s))\rangle $ is changed to the state $|1\rangle $ and actually for
different index value $s$ there may be a different time (the index $i$, $
1\leq i\leq m_{r}$) for the state $|g_{r}(y(s))\rangle $ to go to the state $
|1\rangle $ in the program, although for any index value $s$ the quantum
program always stops at the same time when the index $i=m_{r}$. At present
there is not a satisfactory halting protocol on a quantum Turing machine
when the input state is a superposition. The detailed discussion relevant to
the halting problem of quantum Turing machine for this situation can be seen
in Refs. [41a-41d]. However, it has been shown [29, 40, 41a-41d] that there
is an acceptable halting protocol of quantum Turing machine which may be
made unitary if the input state is limited to be any single basis state on a
quantum Turing machine. Then there should not be any problem to run the
quantum program in a unitary form on a quantum Turing machine if its input
state is limited to be a single basis state. One therefore concludes that if
there existed a universal quantum Turing machine (UQTM) on which any
computational process obeys the unitary quantum dynamics in physics and
which is capable of computing any computable functions in mathematics, such
as any recursive functions, which of course include the current one computed
by the quantum program, then such a universal quantum Turing machine could
run the current quantum program in a unitary form when the input state is
limited to be any single basis state for the program.
Though the quantum program could work on a universal quantum Turing machine
and it has been shown that a quantum circuit model is equivalent to a
universal quantum Turing machine in computation [39], it is still a
challenge to construct an efficient quantum circuit for the quantum program.
From the point of view of a quantum circuit model [38] the situation may be
different. A quantum circuit model usually does not use any halting protocol
and its input state can be either a single basis state or a superposition.
However, in order to achieve the same result as the quantum program run on a
universal quantum Turing machine, the quantum circuit model should be really
able to simulate faithfully and efficiently the quantum program and
especially the unitary halting protocol of quantum Turing machine used in
the program. According to the definition (15) of the unitary operation $
U_{r} $ the unitary operation $U_{k}$ for $k=1,2,...,r$ can be generally
defined by
\[
|(g^{M_{k}})^{x}\func{mod}p\rangle |(g^{M_{k}})^{-x}\func{mod}p\rangle
\leftrightarrow |(g^{M_{k}})^{x}\func{mod}p\rangle |1\rangle ,\text{ }0\leq
x\leq m_{k}-1,
\]
\begin{eqnarray*}
|(g^{M_{k}})^{x}\func{mod}p\rangle |(g^{M_{k}})^{y}\func{mod}p\rangle
&\leftrightarrow &|(g^{M_{k}})^{x}\func{mod}p\rangle |(g^{M_{k}})^{y}\func{
mod}p\rangle , \\
\text{ }x+y &\neq &0\func{mod}m_{k};\text{ }0\leq x,y\leq m_{k}-1,
\end{eqnarray*}
while the conditional unitary operation $U_{k}^{c}$ is defined as
\begin{eqnarray*}
&&U_{k}^{c}|b_{h}\rangle |(g^{M_{k}})^{x}\func{mod}p\rangle |(g^{M_{k}})^{y}
\func{mod}p\rangle \\
&=&\left\{
\begin{array}{c}
|b_{h}\rangle U_{k}(|(g^{M_{k}})^{x}\func{mod}p\rangle |(g^{M_{k}})^{y}\func{
mod}p\rangle ),\text{ if }b_{h}=0. \\
|b_{h}\rangle |(g^{M_{k}})^{x}\func{mod}p\rangle |(g^{M_{k}})^{y}\func{mod}
p\rangle ,\text{ if }b_{h}\neq 0.
\end{array}
\right.
\end{eqnarray*}
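On the exponent basis the operation $U_{k}$ is just an involutive relabelling, which makes its unitarity easy to check. The following sketch uses illustrative labels, not the authors' physical implementation: a basis state of the pair of registers is encoded by its pair of exponents $(x,y)$ in $Z_{m_{k}}\times Z_{m_{k}}$, with exponent $0$ encoding the state $|1\rangle $.

```python
def U_k(m, x, y):
    """Action of U_k on the exponent labels (x, y), x, y in Z_m: it
    swaps the partner label y = (-x) mod m with y = 0 (the state |1>)
    and fixes every other basis label."""
    if y == (-x) % m:
        return (x, 0)
    if y == 0:
        return (x, (-x) % m)
    return (x, y)

def U_k_cond(m, b_h, x, y):
    """Conditional operation U_k^c: apply U_k only when b_h = 0."""
    return (b_h,) + (U_k(m, x, y) if b_h == 0 else (x, y))
```

Since $U_{k}$ is an involution on basis labels ($U_{k}^{2}=\mathrm{id}$), the corresponding permutation matrix is its own inverse and hence unitary.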
The unitary operations $U_{k}$ and $U_{k}^{c}$ can always be built up
efficiently since the dimension of the cyclic group state subspace $
S(m_{k})=\{|(g^{M_{k}})^{x}\func{mod}p\rangle \}$ is $m_{k}$ and $
m_{k}\thicksim O(\log p)$. The conditional unitary operation $U_{k}^{c}$ is
dependent on the branch-control state $|b_{h}\rangle $. When the
branch-control state $|b_{h}\rangle \neq |0\rangle $ the conditional unitary
operation $U_{k}^{c}$ does not act on the state $|(g^{M_{k}})^{x}\func{mod}
p\rangle |(g^{M_{k}})^{y}\func{mod}p\rangle $ for any indices $x$ and $y$.
If the unitary operator $U_{k}$ can be written as $U_{k}=\exp [-iH_{k}]$
with the Hamiltonian $H_{k}$, then it is clear that $U_{k}^{c}=\exp
[-i(|b_{h}=0\rangle \langle b_{h}=0|)\bigotimes H_{k}].$ As pointed out
before, the key point for the quantum circuit is to simulate faithfully the
unitary halting protocol of quantum Turing machine. Since the quantum
circuit does not use the halting qubit, one may use an isolated two-level
state control subspace to replace it. Denote the isolated two-level state
subspace as $\{|c\rangle ,|0\rangle \}$. The two states in the subspace $
\{|c\rangle ,|0\rangle \}$ are not in the cyclic group state subspace $
S(m_{r})$ but still belong to the Hilbert space $\{|Z_{p}\rangle \}$. The
control unit of the quantum circuit that simulates the halting protocol
consists of a conditional trigger pulse and a conditional state-locking
pulse. The conditional trigger pulse $P_{t}$ is designed to change the state
$|g_{r}(y)\rangle $ to the state $|c\rangle $ of the control subspace when
the state $|g_{r}(y)\rangle $ is the state $|1\rangle .$ It may be defined
by
\[
P_{t}|b_{h}\rangle |f_{r}(x)\rangle |g_{r}(y)\rangle =\left\{
\begin{array}{c}
|b_{h}\rangle |f_{r}(x)\rangle |c\rangle ,\text{ if }g_{r}(y)=1 \\
|b_{h}\rangle |f_{r}(x)\rangle |g_{r}(y)\rangle ,\text{ if }g_{r}(y)\neq 1
\end{array}
\right.
\]
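In the same toy labelling used above ($g=0$ encoding the cyclic-subspace state $|1\rangle $, with the symbols \texttt{'c'} and \texttt{'0'} standing for the control-subspace states), the conditional trigger pulse, together with the idealised state-locking pulse defined next, acts on basis labels as in the following sketch (illustrative code, not the physical pulses):

```python
def P_t(b_h, f, g):
    """Conditional trigger pulse: send |g_r(y)> = |1> (label g == 0)
    to the control-subspace state |c>; leave everything else alone."""
    return (b_h, f, 'c') if g == 0 else (b_h, f, g)

def P_SL(b_h, f, g):
    """Idealised conditional state-locking pulse: convert |c> to the
    locked state |0> of the control subspace; no net effect otherwise."""
    return (b_h, f, '0') if g == 'c' else (b_h, f, g)
```

Composing the two sends the label $g=0$ through \texttt{'c'} to the locked label \texttt{'0'}, while any state outside the control subspace passes through unchanged.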
Note that the conditional trigger pulse $P_{t}$ is different from the
trigger pulse $P_{c}$ in the quantum program $Q_{p}$. The conditional
trigger pulse connects the state $|c\rangle $ of the control subspace to the
state $|1\rangle $ of the cyclic group state subspace $S(m_{r})$. The
time-dependent state-locking pulse $P_{SL}^{c}(\{\varphi _{i}(t)\}),$ where $
\{\varphi _{i}(t)\}$ are time-dependent control parameters, can be only
applied to the control subspace and does not affect any other states in the
quantum system. Then the state-locking pulse does not have any net effect on
the quantum system if the quantum system is not in the control subspace.
Therefore, the conditional state-locking pulse can take an action on the
quantum system only when the quantum system goes to the states of the
control subspace. The ideal conditional state-locking pulse $
P_{SL}^{c}(\{\varphi _{i}(t)\})$ could be defined by
\[
P_{SL}^{c}(\{\varphi _{i}(t)\})|b_{h}\rangle |f_{r}(x)\rangle
|g_{r}(y)\rangle =|b_{h}\rangle |f_{r}(x)\rangle |g_{r}(y)\rangle ,\text{ }
t<t_{0}^{-},
\]
\[
P_{SL}^{c}(\{\varphi _{i}(t)\})|b_{h}\rangle |f_{r}(x)\rangle |c\rangle
=|b_{h}\rangle |f_{r}(x)\rangle |0\rangle ,\text{ }t_{0}\leq t\leq
t_{0}+\Delta t_{0},
\]
\[
P_{SL}^{c}(\{\varphi _{i}(t)\})|b_{h}\rangle |f_{r}(x)\rangle |0\rangle
=|b_{h}\rangle |f_{r}(x)\rangle |0\rangle ,\text{ }t>t_{0}+\Delta t_{0},
\]
where $t_{0}$ is the time at which the state $|c\rangle $ is generated
completely by the trigger pulse $P_{t}$ and evidently there are $m_{r}$
different times $t_{0}$ at most for the quantum circuit $Q_{c}$ (see below),
$\Delta t_{0}$ is the interval that the state $|c\rangle $ is converted
completely into the state $|0\rangle $ and it is shorter than the interval
to execute the statement: $^{\prime \prime }$While $|g_{r}(y)\rangle
=|1\rangle ,$ Do $P_{t}:|g_{r}(y)\rangle =|1\rangle \rightarrow |c\rangle ,$
$P_{SL}^{c}:|c\rangle \rightarrow |0\rangle ^{\prime \prime }$ (see the
quantum program $Q_{c}$ below). Here also assume that during the period ($
t_{0}^{-}=t_{0}-\delta t_{0}\leq t<t_{0})$ of the trigger pulse $P_{t}$ the
state-locking pulse has a negligible effect on the quantum system. The
unitary transformation shows that after the state $|c\rangle $ is changed to
the state $|0\rangle $, the state $|0\rangle $ is kept unchanged by the
state-locking pulse and hence it will not change with time. The
conditional trigger pulse $P_{t}$ instructs what time the conditional
state-locking pulse $P_{SL}^{c}(\{\varphi _{i}(t)\})$ starts to take an
action on the quantum system because before the trigger pulse $P_{t}$
changes the state $|1\rangle $ of the cyclic group state subspace $S(m_{r})$
to the state $|c\rangle $ of the control subspace the quantum system is not
in the control subspace and hence the state-locking pulse has no net
effect on the quantum system. On the other hand, the conditional trigger
pulse $P_{t}$ can change the state $|1\rangle $ of the cyclic group state
subspace $S(m_{r})$ to the state $|c\rangle $ of the control subspace only
when the state $|g_{r}(y)\rangle $ goes to the state $|1\rangle .$
Therefore, the conditional state-locking pulse $P_{SL}^{c}(\{\varphi
_{i}(t)\})$ can take an action on the quantum system only after the state $
|g_{r}(y)\rangle $ goes to the state $|1\rangle .$ In effect the conditional
state-locking pulse will replace the halting operation $T(i)$ of the quantum
program $Q_{p}$ to control the quantum circuit $Q_{c},$ as can be seen
below. This is because when the state $|c\rangle $ is changed to the state $
|0\rangle $ of the control subspace and then the state $|0\rangle $ is
locked by the state-locking pulse, the quantum system really leaves the
state $|c\rangle $ and hence the trigger pulse $P_{t}$ no longer takes
an action on the quantum system. Using the conditional unitary operation $
U_{b}:|g_{r}(y)=1\rangle |b_{h}\rangle \rightarrow |g_{r}(y)=1\rangle
|b_{h}+1\rangle $, the conditional unitary operation $U_{r}^{c},$ and the
cyclic group operation $U_{g^{M_{r}}}$ as well as the conditional trigger
pulse $P_{t}$ and the conditional state-locking pulse $P_{SL}^{c}$ a
possible unitary quantum circuit that simulates faithfully and efficiently
the quantum program $Q_{p}$ could be constructed by
\[
Q_{c}=\{P_{SL}^{c}:OFF\}\{U_{r}^{c}U_{g^{M_{r}}}P_{t}U_{b}\}^{m_{r}}
\{P_{SL}^{c}:ON\}.
\]
In fact, given any input basis state this quantum circuit in theory is
exactly equivalent to the following quantum program $Q_{c}$:
\[
\text{State-Locking Pulse}:ON
\]
\[
|b_{h}\rangle =|0\rangle
\]
\[
\text{For }i=1\text{ to }m_{r}
\]
\[
\text{If }|g_{r}(y)\rangle =|1\rangle \text{ then }|b_{h}\rangle
=|b_{h}+1\rangle \text{ end if}
\]
\[
\text{While }|g_{r}(y)\rangle =|1\rangle ,\text{ Do }P_{t}:|g_{r}(y)\rangle
=|1\rangle \rightarrow |c\rangle ,\text{ }P_{SL}^{c}:|c\rangle \rightarrow
|0\rangle
\]
\[
\text{If }|b_{h}\rangle =|0\rangle \text{ then}
\]
\[
U_{g^{M_{r}}}|f_{r}(x)\rangle |g_{r}(y)\rangle
\]
\[
U_{r}|f_{r}(x)\rangle |g_{r}(y)\rangle
\]
\[
\text{else }U_{g^{M_{r}}}|f_{r}(x)\rangle |g_{r}(y)\rangle \text{ end if}
\]
\[
\text{end for}
\]
\[
\text{State-Locking Pulse}:OFF.
\]
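The control flow of $Q_{c}$ can be traced classically on a single basis state in the same way as $Q_{p}$. In this sketch (toy labels again: exponents in $Z_{m_{r}}$ with $0$ encoding $|1\rangle $, and the string \texttt{'0'} for the locked control-subspace state) the trigger and state-locking pulses replace the halting qubit:

```python
def run_Qc(m_r, x, y):
    """Toy classical walk-through of the circuit/program Q_c on one
    basis state: f, g hold exponents in Z_{m_r} (0 encodes |1>);
    g becomes the locked control-subspace label '0' after the trigger
    pulse P_t and the state-locking pulse P_SL^c have fired."""
    f, g = x % m_r, y % m_r
    b_h = 0
    for i in range(1, m_r + 1):
        if g == 0:                     # If |g_r(y)> = |1>: flip b_h
            b_h = (b_h + 1) % 2
        if g == 0:                     # P_t: |1> -> |c>, then P_SL^c:
            g = '0'                    # |c> -> |0>, locked from now on
        f = (f + 1) % m_r              # cyclic group operation U_{g^{M_r}}
        if b_h == 0 and g != '0' and (f + g) % m_r == 0:
            g = 0                      # U_r sends g_r to |1>
    return b_h, f, g
```

This reproduces the claimed output $|b_{h}=1\rangle |f_{r}(x)\rangle |0\rangle $ for a single-basis-state input.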
The sole difference from the previous one $Q_{p}$ is that the halting qubit $
\{|n_{h}\rangle \}$ of the quantum program $Q_{p}$ is replaced with the
two-level state subspace $\{|c\rangle ,|0\rangle \}$ in the quantum program $
Q_{c}.$ Here the input state of the quantum circuit $Q_{c}$ is still limited
to be a single basis state, although a quantum circuit does not restrict the
input state. In theory the output state of the quantum circuit $Q_{c}$ is $
|b_{h}=1\rangle |f_{r}(x)\rangle |0\rangle $ if the input state is the
single basis state $|b_{h}=0\rangle |f_{r}(x)\rangle |g_{r}(y)\rangle .$ The
quantum program $Q_{c}$ shows that the state-locking pulse $P_{SL}^{c}$ is
first applied to the quantum system at the beginning of the quantum circuit.
Because the quantum system may not be in the control subspace at the
beginning, the state-locking pulse does not act on the quantum
system, but it keeps being applied and does not start to act on the quantum
system until the quantum system goes to the state $|c\rangle ,$ and only at
the end of the quantum circuit the state-locking pulse is switched off.
The performance of the quantum circuit $Q_{c}$ mainly depends on the
state-locking pulse $P_{SL}^{c}(\{\varphi _{i}(t)\})$. The
real unitary transformation during the period of the state-locking pulse
applying to the quantum system should generally be written as
\[
P_{SL}^{c}(\{\varphi _{i}(t)\})|b_{h}\rangle |f_{r}(x)\rangle
|g_{r}(y)\rangle =|b_{h}\rangle |f_{r}(x)\rangle |g_{r}(y)\rangle ,\text{ }
t<t_{0}^{-},
\]
\begin{eqnarray*}
&&P_{SL}^{c}(\{\varphi _{i}(t)\})|b_{h}\rangle |f_{r}(x)\rangle |c\rangle \\
&=&|b_{h}\rangle |f_{r}(x)\rangle (\varepsilon (t,t_{0})|c\rangle
+e^{-i\gamma (t,t_{0})}\sqrt{1-|\varepsilon (t,t_{0})|^{2}}|0\rangle ),\text{
}t\geq t_{0},
\end{eqnarray*}
where $\gamma (t,t_{0})$ is a phase factor and the absolute amplitude value $
|\varepsilon (t,t_{0})|$ is zero in theory when the time $t>t_{0}+\Delta
t_{0}$ for every time $t_{0}$. Hereafter the absolute amplitude value $
|\varepsilon (t,t_{0})|$ refers to the one with the time $
t>t_{0}+\Delta t_{0}.$ The real amplitude value $|\varepsilon (t,t_{0})|$
may be dependent on the real physical process of the quantum circuit. The
amplitude value $|\varepsilon (t,t_{0})|$ measures how close the quantum
circuit $Q_{c}$ is to the quantum program $Q_{p}$: the closer the amplitude
value $|\varepsilon (t,t_{0})|$ to zero, the closer the quantum circuit $
Q_{c}$ to the quantum program $Q_{p}$. The quantum circuit $Q_{c}$ is really
equivalent to the quantum program $Q_{p}$ when the amplitude value $
|\varepsilon (t,t_{0})|=0$ exactly for every time $t_{0}$, but this could be
possibly achieved only in an ideal case. However, the amplitude value $
|\varepsilon (t,t_{0})|$ could not always be equal to zero for every time $
t_{0}$ if the input state of the quantum circuit is a superposition. This is
one reason why the input state of the quantum circuit $Q_{c}$ is still
limited to be a single basis state, although the input state is allowed to
be any state such as a superposition in the quantum circuit. Therefore, the
quantum circuit $Q_{c}$ is really an approximation to the ideal quantum
program $Q_{p}$ in a real physical process. In practice the conditional
state-locking pulses need to be designed so that the amplitude value $
|\varepsilon (t,t_{0})|$ is as close to zero as possible for every time $t_{0}.$
Hence this involves quantum-control techniques. The conditional
state-locking pulse generally could be an amplitude- and phase-modulated
time-dependent pulse. A better choice for the state-locking pulse could be
an adiabatic pulse.
If there existed a universal quantum computer that in computation obeys the
unitary quantum dynamics in physics and is capable of computing any
computable functions in mathematics such as any recursive functions, then
such an ideal universal quantum computer would be powerful enough to solve
efficiently the quantum search problem in the cyclic group state space.
Actually, by taking the basis state $|n_{h}=0\rangle |b_{h}=0\rangle |\Phi
_{7}\rangle $ as the input state of the quantum program $Q_{p}$ and setting
the function $f_{r}(x)=(g^{M_{r}})^{x}\func{mod}p$ with $x=s_{1}$ and $
g_{r}(y)=(g^{M_{r}})^{y}\func{mod}p$ with $y=s_{2},$ after executing the
quantum program one time the output state is given by
\begin{eqnarray*}
|n_{h} &=&0\rangle |b_{h}=0\rangle \bigotimes |(g^{M_{r}})^{s_{1}}\func{mod}
p\rangle |(g^{M_{r}})^{s_{2}}\func{mod}p\rangle \bigotimes |\Phi
_{7}^{^{\prime }}\rangle \\
\stackrel{Q_{p}}{\rightarrow }|\Phi _{8}\rangle &=&|1\rangle |1\rangle
\bigotimes |(g^{M_{r}})^{s_{1}}\func{mod}p\rangle |0\rangle \bigotimes |\Phi
_{7}^{^{\prime }}\rangle ,
\end{eqnarray*}
where the state $|\Phi _{7}\rangle =|(g^{M_{r}})^{s_{1}}\func{mod}p\rangle
|(g^{M_{r}})^{s_{2}}\func{mod}p\rangle \bigotimes |\Phi _{7}^{^{\prime
}}\rangle $ and the state $|\Phi _{7}^{^{\prime }}\rangle $ is given by
\[
|\Phi _{7}^{^{\prime }}\rangle =|\mathbf{R0\rangle }\bigotimes
|(g^{M_{r}})^{s_{3}}\func{mod}p\rangle \bigotimes ...\bigotimes
|(g^{M_{r}})^{s_{r}}\func{mod}p\rangle .
\]
This is because only the state $|(g^{M_{r}})^{s_{1}}\func{mod}p\rangle
\bigotimes |(g^{M_{r}})^{s_{2}}\func{mod}p\rangle $ in the first two
registers of the state $|\Phi _{7}\rangle $ undergoes the unitary
transformation of the quantum program $Q_{p},$ while the state $|\Phi
_{7}^{^{\prime }}\rangle $ in other registers of the state $|\Phi
_{7}\rangle $ remains unchanged, and the output state of the quantum program
is $|1\rangle |1\rangle |f_{r}(x)\rangle |0\rangle $ if the input state is $
|0\rangle |0\rangle |f_{r}(x)\rangle |g_{r}(y)\rangle ,$ as shown before.
This unitary transformation removes the state $|(g^{M_{r}})^{s_{2}}\func{mod}
p\rangle $ in the second register of the state $|\Phi _{7}\rangle .$ The next
step is to remove unitarily the state $|(g^{M_{r}})^{s_{3}}\func{mod}
p\rangle $ in the third register of the state $|\Phi _{7}\rangle .$ First,
both the branch-control state $|b_{h}\rangle =|1\rangle $ and the halting
state $|n_{h}\rangle =|1\rangle $ are changed back to the state $|0\rangle $
in the state $|\Phi _{8}\rangle $ and the state $|0\rangle $ in the fourth
register of the state $|\Phi _{8}\rangle $ is absorbed by the register library $|
\mathbf{R0\rangle .}$ After these operations the state $|\Phi _{8}\rangle $
is changed to the state $|\Phi _{9}\rangle :$
\[
|\Phi _{9}\rangle =|n_{h}=0\rangle |b_{h}=0\rangle \bigotimes
|(g^{M_{r}})^{s_{1}}\func{mod}p\rangle |(g^{M_{r}})^{s_{3}}\func{mod}
p\rangle \bigotimes |\Phi _{8}^{^{\prime }}\rangle
\]
where the state $|\Phi _{8}^{^{\prime }}\rangle =|\mathbf{R0\rangle }
\bigotimes |(g^{M_{r}})^{s_{4}}\func{mod}p\rangle \bigotimes ...\bigotimes
|(g^{M_{r}})^{s_{r}}\func{mod}p\rangle .$ Now taking the state $|\Phi
_{9}\rangle $ as the input state of the quantum program $Q_{p}$ and setting
the function $f_{r}(x)=(g^{M_{r}})^{x}\func{mod}p$ with $x=s_{1}$ and $
g_{r}(y)=(g^{M_{r}})^{y}\func{mod}p$ with $y=s_{3},$ the unitary
transformation of the quantum program removes the state $|(g^{M_{r}})^{s_{3}}
\func{mod}p\rangle $ of the state $|\Phi _{9}\rangle .$ In an analogous way,
by setting the fixed function $f_{r}(x)=(g^{M_{r}})^{x}\func{mod}p$ with $
x=s_{1}$ and the function $g_{r}(y)=(g^{M_{r}})^{y}\func{mod}p$ with $
y=s_{k} $ for $k=2,3,...,r,$ respectively, and then repeating $r-1$ times
the application of the quantum program $Q_{p},$ the states $
|(g^{M_{r}})^{s_{k}}\func{mod}p\rangle $ with $k=2,3,..,r$ are one by one
removed unitarily from the state $|\Phi _{7}\rangle $ and ultimately the
state $|\Phi _{7}\rangle $ is transformed to the desired state $|\mathbf{
R0\rangle }\bigotimes |(g^{M_{r}})^{s_{1}}\func{mod}p\rangle ,$ where the
branch-control state $|0\rangle $ and the halting state $|0\rangle $ are
also absorbed by the register library. This transformation may also be
carried out in a parallel manner. In an analogous way, one may obtain the
desired state $|\mathbf{R0\rangle }\bigotimes |(g^{M_{r}})^{s_{k}}\func{mod}
p\rangle $ from the state $|\Phi _{7}\rangle $ for $k=1,2,...,r,$
respectively. Once the unitary state transformation $|\mathbf{R0\rangle }
\bigotimes |g^{s}\func{mod}p\rangle \rightarrow |\mathbf{R0\rangle }
\bigotimes |(g^{M_{r}})^{s_{k}}\func{mod}p\rangle $ is efficiently achieved
for $k=1,2,...,r$, the auxiliary oracle unitary operation $\overline{U}
_{os_{k}}(\theta )=\exp [-i\theta \overline{D}_{s_{k}}(g^{M_{r}})]$ with the
quantum-state diagonal operator $\overline{D}_{s_{k}}(g^{M_{r}})=$ $|\mathbf{
R0\rangle }\langle \mathbf{R0|}\bigotimes |(g^{M_{r}})^{s_{k}}\func{mod}
p\rangle \langle (g^{M_{r}})^{s_{k}}\func{mod}p|$ can be efficiently built
out of the oracle unitary operation $\overline{U}_{os}(\theta ).$ This
auxiliary oracle unitary operation is applied only to the cyclic group state
subspace $S(m_{r})$. The state $|\mathbf{R0\rangle }\bigotimes
|(g^{M_{r}})^{s_{k}}\func{mod}p\rangle $ may be transferred to the register
of the search space by a $SWAP$ operation so as to obtain the auxiliary
oracle unitary operation $\overline{U}_{os_{k}}(\theta )$ which is applied
only to the search space with dimension $m_{r}\thicksim O(\log p)$. Note
that the register of the search space in which the index vector $\{s_{k}\}$
is determined may be different from all those registers in the state $|\Phi
_{7}\rangle .$ \newline
\newline
{\Large 6. An efficient quantum search process in the cyclic group state
subspaces}
When the auxiliary oracle unitary operation $\overline{U}_{os_{k}}(\theta )$
with $k=1,2,..,r$ is obtained the quantum search process to find the index $
s_{k}$ can be efficiently constructed. As shown in Section 5,
the initial state for the quantum search process should be limited to be a
single basis state because both the input states of the quantum program $
Q_{p}$ and the quantum circuit $Q_{c}$ are limited to be a single basis
state. Therefore, the standard quantum search algorithm which usually starts
from a superposition will not be used here to determine the index vector $
\{s_{k}\}$. Because the quantum search space now is limited to the cyclic
group state subspace $S(m_{k})$ with dimensional size $m_{k}\thicksim O(\log
p),$ one may use every basis state of the cyclic group state subspace $
S(m_{k})$ as the initial state of the quantum search process without
changing essentially the computational complexity of the quantum search
process. For convenience, now the oracle unitary operation $\overline{U}
_{os_{k}}(\theta )$ acting on a basis state of the search space $S(m_{r})$
can be rewritten as
\begin{equation}
\overline{U}_{os_{k}}(\theta )|(g^{M_{r}})^{x}\func{mod}p\rangle =\left\{
\begin{array}{c}
\exp (-i\theta )|(g^{M_{r}})^{x}\func{mod}p\rangle ,\text{ if }x=s_{k}, \\
|(g^{M_{r}})^{x}\func{mod}p\rangle ,\text{ if }x\neq s_{k},
\end{array}
\right. \label{16}
\end{equation}
where the register library $|\mathbf{R0\rangle }$ is dropped without
confusion. On the other hand, the basis state $|(g^{M_{r}})^{x}\func{mod}
p\rangle $ with $0\leq x<m_{r}$ can also be expressed in terms of the binary
dynamical parameter $\{b_{k}^{x}\}$ (see sections 2.1 and 2.2),
\[
|(g^{M_{r}})^{x}\func{mod}p\rangle =\stackrel{n}{\stackunder{k=1}{\bigotimes
}}(\frac{1}{2}T_{k}+b_{k}^{x}S_{k}).
\]
The dynamical parameters $\{b_{k}^{x}\}$ can be determined conveniently
below for a given integer $(g^{M_{r}})^{x}\func{mod}p$ and will be used
later in the construction of the quantum search process. The integer $
(g^{M_{r}})^{x}\func{mod}p$ is first expressed in terms of the usual binary
representation:
\begin{equation}
(g^{M_{r}})^{x}\func{mod}
p=a_{n}2^{n-1}+a_{n-1}2^{n-2}+...+a_{2}2^{1}+a_{1}2^{0}, \label{17}
\end{equation}
where the qubit number $n=[\log _{2}p]+1$ and $a_{k}=0$ or $1.$ The
dynamical parameter $b_{k}^{x}$ is then given by $b_{k}^{x}=1-2a_{k}$ for $
k=1,2,\ldots,n.$ Since the oracle unitary operation $\overline{U}
_{os_{k}}(\theta )$ generates the phase factor $\exp (-i\theta )$ only for
the marked state $|(g^{M_{r}})^{s_{k}}\func{mod}p\rangle $ and leaves every
other state of the search space $S(m_{r})$ unchanged, as shown in (16), this
phase factor is the only means available to distinguish the marked state $
|(g^{M_{r}})^{s_{k}}\func{mod}p\rangle $ from the other states of the search
space. The search process can nevertheless be made efficient because the
dimension of the search space $S(m_{r})$ is only $m_{r}\thicksim O(\log p).$
Such an efficient quantum search process is constructed below. It is based
on the multiple-quantum unitary operators [42] in the $n$-qubit quantum spin
system ($n=[\log _{2}p]+1$) whose Hilbert space contains the search space
$S(m_{r})$.
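The determination of the parameters $\{b_{k}^{x}\}$ from the binary digits of (17) can be sketched in a few lines. The following is a minimal illustration only; the function name and the use of Python's \texttt{bit\_length} for $n=[\log _{2}p]+1$ are choices made here, not part of the text.

```python
def dynamical_parameters(value, p):
    """Return [b_1, ..., b_n] with b_k = 1 - 2*a_k, where a_1...a_n are the
    binary digits of `value` as in eq. (17) (a_1 is the least significant)."""
    n = p.bit_length()  # equals [log2 p] + 1 when p is not a power of two
    bits = [(value >> (k - 1)) & 1 for k in range(1, n + 1)]  # a_1, ..., a_n
    return [1 - 2 * a for a in bits]
```

For example, with $p=7$ one has $n=3$, and the integer $5=101_{2}$ gives the parameter vector $(b_{1},b_{2},b_{3})=(-1,1,-1)$.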
A particularly important multiple-quantum transition to be used in the
quantum search process is the highest-order quantum transition in the $n$-qubit
quantum spin system, defined as the transition between the ground state
$|00...0\rangle $ and the highest excited state $|11...1\rangle $ of the
system. In an $n$-qubit spin system the highest order of quantum transition
is $\pm n$ [43], and the Hermitian highest-order quantum operators $Q_{nx}$
and $Q_{ny}$ may be defined by
\begin{equation}
Q_{nx}=\frac{1}{2}(I_{1}^{+}I_{2}^{+}\cdots I_{n}^{+}+I_{1}^{-}I_{2}^{-}\cdots I_{n}^{-}),
\label{18}
\end{equation}
and
\begin{equation}
Q_{ny}=\frac{1}{2i}(I_{1}^{+}I_{2}^{+}\cdots I_{n}^{+}-I_{1}^{-}I_{2}^{-}\cdots I_{n}^{-}),
\label{19}
\end{equation}
where $I_{k}^{\pm }=I_{kx}\pm iI_{ky}$ for $k=1,2,...,n$. The
highest-order quantum unitary operators are defined by $U_{n\mu }(\theta
)=\exp (-i2\theta Q_{n\mu })$ with $\mu =x,y.$ They induce an $n$-order
quantum transition only between the ground state $|00...0\rangle $ and the
highest excited state $|11...1\rangle $ of the Hilbert space of the $n$-qubit
spin system, and they do not induce any quantum transition between any other
pair of quantum states of the spin system. This is because the transition
matrix elements $\langle k|Q_{n\mu }|r\rangle =\langle r|Q_{n\mu
}|k\rangle ^{*}=0$ $(\mu =x,y)$ for any computational basis states $|k\rangle $
and $|r\rangle $ of the spin system other than the ground state $|00...0\rangle $
and the highest excited state $|11...1\rangle $. Since $I_{k}^{+}|0_{l}
\rangle =0,$ $I_{k}^{+}|1_{l}\rangle =\delta _{kl}|0_{k}\rangle ,$ $
I_{k}^{-}|0_{l}\rangle =\delta _{kl}|1_{k}\rangle ,$ and $
I_{k}^{-}|1_{l}\rangle =0$ [43] for $k,l=1,2,...,n,$ the $n$-order quantum
operator $Q_{ny}$ acting on the ground state (the highest excited state)
creates the highest excited state (the ground state),
\[
2Q_{ny}|00...0\rangle =i|11...1\rangle
\]
and
\[
2Q_{ny}|11...1\rangle =-i|00...0\rangle .
\]
It then follows that the $n$-order quantum unitary operator $U_{ny}(\theta
)=\exp (-i2\theta Q_{ny})$ acts on the ground state and the highest excited
state, respectively, as
\begin{equation}
\exp (-i2\theta Q_{ny})|00...0\rangle =\cos \theta |00...0\rangle +\sin
\theta |11...1\rangle ,\text{ }(-\pi \leq \theta \leq \pi ) \label{20}
\end{equation}
and
\begin{equation}
\exp (-i2\theta Q_{ny})|11...1\rangle =\cos \theta |11...1\rangle -\sin
\theta |00...0\rangle ,\text{ }(-\pi \leq \theta \leq \pi ). \label{21}
\end{equation}
In particular, when $\theta =\pi /4$ the equally weighted superposition of
the ground state and the highest excited state is obtained from (20),
\begin{equation}
|\Psi _{0n}\rangle =\exp (-i\frac{\pi }{2}Q_{ny})|00...0\rangle =\frac{1}{
\sqrt{2}}(|00...0\rangle +|11...1\rangle ). \label{22}
\end{equation}
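The transformations (20)--(22) can be checked numerically for a small qubit number. The sketch below is an illustration only, not part of the original construction; it builds $Q_{ny}$ from the raising and lowering operators with the text's convention $I^{+}|1\rangle =|0\rangle ,$ $I^{-}|0\rangle =|1\rangle $, so that $|00...0\rangle $ is basis index $0$ and $|11...1\rangle $ is basis index $2^{n}-1$.

```python
import numpy as np
from functools import reduce

n = 3
N = 2 ** n
Ip = np.array([[0, 1], [0, 0]], dtype=complex)  # I^+ in the {|0>, |1>} basis
Im = Ip.conj().T                                # I^-
plus = reduce(np.kron, [Ip] * n)                # I_1^+ I_2^+ ... I_n^+
minus = reduce(np.kron, [Im] * n)               # I_1^- I_2^- ... I_n^-
Qny = (plus - minus) / 2j                       # eq. (19)

def U_ny(theta):
    """exp(-i*2*theta*Q_ny) via the eigendecomposition of the Hermitian 2*Q_ny."""
    w, V = np.linalg.eigh(2 * Qny)
    return V @ np.diag(np.exp(-1j * theta * w)) @ V.conj().T

g = np.zeros(N, dtype=complex); g[0] = 1        # ground state |00...0>
e = np.zeros(N, dtype=complex); e[-1] = 1       # highest excited state |11...1>
t = 0.3                                         # an arbitrary angle
assert np.allclose(U_ny(t) @ g, np.cos(t) * g + np.sin(t) * e)  # eq. (20)
assert np.allclose(U_ny(t) @ e, np.cos(t) * e - np.sin(t) * g)  # eq. (21)
assert np.allclose(U_ny(np.pi / 4) @ g, (g + e) / np.sqrt(2))   # eq. (22)
```

The assertions confirm that $U_{ny}(\theta )$ acts as a plane rotation in the two-dimensional subspace spanned by the ground state and the highest excited state, and as the identity elsewhere.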
An efficient quantum circuit for the highest-order quantum unitary operator
$U_{ny}(\theta )$ is constructed below. By using the quantum-state diagonal
operator $D_{0}$ [15], the $n$-order quantum operator $Q_{ny}$ may be
expressed as
\begin{equation}
2iQ_{ny}=[D_{0},2^{n}I_{1x}I_{2x}...I_{nx}]. \label{23}
\end{equation}
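The commutator identity (23) is also easy to verify numerically for small $n$; the sketch below (an illustration, with $D_{0}=|00...0\rangle \langle 00...0|$ taken as the quantum-state diagonal operator of [15], and $I_{kx}=\sigma _{x}/2$) checks it against $2iQ_{ny}=|00...0\rangle \langle 11...1|-|11...1\rangle \langle 00...0|$.

```python
import numpy as np
from functools import reduce

n = 3
N = 2 ** n
sx = np.array([[0, 1], [1, 0]], dtype=complex)
X = reduce(np.kron, [sx] * n)        # 2^n I_1x I_2x ... I_nx, since I_kx = sx/2
D0 = np.zeros((N, N), dtype=complex)
D0[0, 0] = 1                         # projector onto the ground state |00...0>
comm = D0 @ X - X @ D0               # [D_0, 2^n I_1x ... I_nx]
# 2i*Q_ny = |00...0><11...1| - |11...1><00...0|
two_i_Qny = np.zeros((N, N), dtype=complex)
two_i_Qny[0, -1], two_i_Qny[-1, 0] = 1, -1
assert np.allclose(comm, two_i_Qny)  # eq. (23)
```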
On the other hand, the operator $Q_{ny}$ can also be written as
\begin{equation}
2iQ_{ny}=(-i)\exp (i\varphi I_{z})[D_{0},2^{n}I_{1x}I_{2x}...I_{nx}]_{+}\exp
(-i\varphi I_{z}) \label{24}
\end{equation}
with $n\varphi =\pi /2.$ The relation (24) can be proved as follows. Since
the unitary transformation $\exp (-i\varphi I_{kz})I_{k}^{\pm }\exp
(i\varphi I_{kz})=\exp (\mp i\varphi )I_{k}^{\pm }$ holds [43], it follows
from (18) and (19) that, when the unitary operator $\exp (-i\varphi
I_{z})=\exp [-i\varphi \sum_{k=1}^{n}I_{kz}]$ with $n\varphi =\pi /2$ acts
on the $n$-order quantum operator $Q_{ny}$,
\begin{eqnarray*}
&&\exp (-i\varphi I_{z})Q_{ny}\exp (i\varphi I_{z}) \\
&=&\frac{1}{2i}[\exp (-in\varphi )I_{1}^{+}I_{2}^{+}......I_{n}^{+}-\exp
(in\varphi )I_{1}^{-}I_{2}^{-}......I_{n}^{-}] \\
&=&-\frac{1}{2}[D_{0},2^{n}I_{1x}I_{2x}...I_{nx}]_{+}=-Q_{nx}.
\end{eqnarray*}
The relation (24) follows directly from this unitary transformation. There
is a general unitary transformation identity for the selective rotation
operation $C_{t}(\theta )$ [15],
\begin{eqnarray}
C_{t}(\theta )\rho C_{t}(\theta )^{-1} &=&\rho -(1-\cos \theta )[\rho
,D_{t}]_{+}+i\sin \theta [\rho ,D_{t}] \nonumber \\
&&+2(1-\cos \theta )D_{t}\rho D_{t}. \label{25}
\end{eqnarray}
Taking $\rho =2^{n}I_{1x}I_{2x}...I_{nx},$ $D_{t}=D_{0},$ and $\theta =\pi ,$
and noting the operator identity $D_{t}2^{n}I_{1x}I_{2x}...I_{nx}D_{t}=0$
for any index $t,$ one obtains from the identity (25) the relation
\begin{eqnarray}
\lbrack D_{0},2^{n}I_{1x}I_{2x}...I_{nx}]_{+} &=&\frac{1}{2}
\{2^{n}I_{1x}I_{2x}...I_{nx} \nonumber \\
&&-C_{0}(\pi )2^{n}I_{1x}I_{2x}...I_{nx}C_{0}(\pi )^{-1}\}.
\end{eqnarray}
With the help of the relations (24) and (26) and the Trotter-Suzuki formula
[44] the quantum circuit for the highest-order quantum unitary operator $
U_{ny}(\theta )$ can be constructed efficiently by
\[
U_{ny}(\theta )=\exp (-i2\theta Q_{ny})
\]
\begin{equation}
=\exp (i\varphi I_{z})\{C_{0}(\pi )GC_{0}(\pi )^{-1}G^{-1}\}^{m}\exp
(-i\varphi I_{z})+O(m^{-1}) \label{27}
\end{equation}
where the unitary operation $G=\exp (-i\theta 2^{n-1}I_{1x}I_{2x}...I_{nx}/m)
$ can be decomposed efficiently into a sequence of one- and two-qubit
quantum gates [15]. Note that the norms $||D_{0}||=1$ and $
||2^{n}I_{1x}I_{2x}...I_{nx}||=1.$ For a modest integer $m$ the
decomposition (27) converges quickly.
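For small $n$ the decomposition (27) can be tested numerically. The sketch below is an illustration under two assumptions not spelled out here: the selective rotation is taken as $C_{t}(\theta )=\exp (-i\theta |t\rangle \langle t|)$ (so $C_{0}(\pi )=1-2D_{0}$ is an involution), and $I_{kz}$ has eigenvalue $+1/2$ on $|0\rangle $. The Trotter product is compared with the exact plane rotation of (20)--(21).

```python
import numpy as np
from functools import reduce

n, theta, m = 3, np.pi / 4, 50
N = 2 ** n
sx = np.array([[0, 1], [1, 0]], dtype=complex)
X = reduce(np.kron, [sx] * n)                      # 2^n I_1x I_2x ... I_nx
sz = np.diag([0.5, -0.5]).astype(complex)          # I_kz, |0> spin-up
Iz = sum(reduce(np.kron, [sz if j == k else np.eye(2, dtype=complex)
                          for j in range(n)]) for k in range(n))
D0 = np.zeros((N, N), dtype=complex)
D0[0, 0] = 1
C0 = np.eye(N) - 2 * D0                            # C_0(pi); C_0(pi)^{-1} = C_0(pi)
phi = np.pi / (2 * n)                              # n*phi = pi/2
Ez = np.diag(np.exp(1j * phi * np.diag(Iz)))       # exp(i*phi*Iz), Iz is diagonal
alpha = theta / (2 * m)                            # G = exp(-i*alpha*X); X @ X = identity
G = np.cos(alpha) * np.eye(N) - 1j * np.sin(alpha) * X
step = C0 @ G @ C0 @ G.conj().T                    # C_0(pi) G C_0(pi)^{-1} G^{-1}
U_approx = Ez @ np.linalg.matrix_power(step, m) @ Ez.conj().T

U_exact = np.eye(N, dtype=complex)                 # U_ny(theta), eqs. (20)-(21)
U_exact[0, 0] = U_exact[-1, -1] = np.cos(theta)
U_exact[-1, 0], U_exact[0, -1] = np.sin(theta), -np.sin(theta)
err = np.linalg.norm(U_approx - U_exact)
# err is negligible for this particular C_0(pi); the text bounds it by O(1/m)
# in general.
```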
With the help of the unitary transformations (20) and (21) of the
highest-order quantum unitary operator $U_{ny}(\theta )$, one can set up two
quantum circuits that decide in polynomial time whether a given quantum
state is the solution of the quantum search problem. One quantum circuit,
$U_{0n}(\pi )$, is constructed from the selective inversion operation $
C_{t}(\pi )$ and the highest-order quantum unitary operator $U_{ny}(\theta ),$
\begin{eqnarray*}
U_{0n}(\pi )|00...0\rangle &=&\exp (i\frac{1}{2}\pi Q_{ny})C_{t}(\pi )\exp
(-i\frac{1}{2}\pi Q_{ny})|00...0\rangle \\
&=&\left\{
\begin{array}{l}
\ \ |11...1\rangle ,\text{ if }t=0 \\
-|11...1\rangle ,\text{ if }t=N-1 \\
\ \ |00...0\rangle ,\text{ if }t\neq 0,N-1
\end{array}
\right.
\end{eqnarray*}
where $N=2^{n}$. The quantum circuit $U_{0n}(\pi )$ acting on the ground
state $|00...0\rangle $ induces the highest-order quantum transition only
when the selective inversion operation $C_{t}(\pi )$ with $t=0$ or $t=N-1$
is applied, that is, only when it acts on either the ground state $
|00...0\rangle $ or the highest excited state $|11...1\rangle .$ For any
other selective inversion operation $C_{t}(\pi )$ with $t\neq 0,N-1,$ which
acts on neither the ground state nor the highest excited state, the quantum
circuit $U_{0n}(\pi )$ induces no transition from the ground state to the
highest excited state. More generally, the quantum circuit $U_{0n}(\theta )$
with a general selective rotation operation $C_{t}(\theta )$ $(-\pi \leq
\theta \leq \pi )$ acting on the ground state induces the $n$-order quantum
transition with a transition probability dependent on the rotation angle
$\theta ,$
\begin{eqnarray*}
&&\exp (i\frac{1}{2}\pi Q_{ny})C_{t}(\theta )\exp (-i\frac{1}{2}\pi
Q_{ny})|00...0\rangle \\
&=&\left\{
\begin{array}{l}
P_{+}|00...0\rangle +P_{-}|11...1\rangle ,\text{ if }t=0 \\
P_{+}|00...0\rangle -P_{-}|11...1\rangle ,\text{ if }t=N-1 \\
|00...0\rangle ,\text{ if }t\neq 0,N-1
\end{array}
\right.
\end{eqnarray*}
with $P_{\pm }=\frac{1}{2}(1\pm \exp (-i\theta )).$ The quantum circuit does
not induce any quantum transition when the selective rotation operation
$C_{t}(\theta )$ is neither $C_{0}(\theta )$ nor $C_{N-1}(\theta )$. When $
C_{t}(\theta )=C_{0}(\theta )$ or $C_{N-1}(\theta ),$ the unitary operation $
U_{0n}(\theta )$ does induce the highest-order quantum transition, with the
transition probability
\[
P_{0n}(\theta )=|P_{-}|^{2}=\frac{1}{2}(1-\cos \theta ).
\]
The transition probability $P_{0n}(\theta )\geq 0.5$ when $\pi /2\leq
|\theta |\leq \pi .$
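The three cases of the judging circuit $U_{0n}(\pi )|00...0\rangle $ can be reproduced in a small simulation. The sketch below is illustrative and assumes that $C_{t}(\pi )$ simply inverts the sign of the basis state $|t\rangle $; the rotation $\exp (-i\frac{1}{2}\pi Q_{ny})$ is represented directly by its plane-rotation form from (20)--(21).

```python
import numpy as np

n = 3
N = 2 ** n
c = s = np.cos(np.pi / 4)
R = np.eye(N, dtype=complex)            # exp(-i*(pi/2)*Q_ny): pi/4 rotation in
R[0, 0] = R[-1, -1] = c                 # span{|00...0>, |11...1>}
R[-1, 0], R[0, -1] = s, -s

def U_0n_pi(t):
    Ct = np.eye(N, dtype=complex)
    Ct[t, t] = -1                       # selective inversion C_t(pi)
    return R.conj().T @ Ct @ R          # exp(+i(pi/2)Q_ny) C_t(pi) exp(-i(pi/2)Q_ny)

ground = np.zeros(N, dtype=complex); ground[0] = 1
top = np.zeros(N, dtype=complex); top[-1] = 1
assert np.allclose(U_0n_pi(0) @ ground, top)       # t = 0      ->  |11...1>
assert np.allclose(U_0n_pi(N - 1) @ ground, -top)  # t = N - 1  -> -|11...1>
assert np.allclose(U_0n_pi(3) @ ground, ground)    # otherwise  ->  |00...0>
```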
Using the total quantum circuit $U_{0n}(\theta )|00...0\rangle $ $(\pi
/2\leq |\theta |\leq \pi ),$ which includes the initial state, i.e., the
ground state, one can determine whether the quantum state $|t\rangle $ is
one of the two states, the ground state or the highest excited state, or
some other quantum state of the Hilbert space. If the quantum state $
|t\rangle $ is either the ground state or the highest excited state, then a
further quantum circuit $U_{0n}^{\prime }$ is needed to determine which of
the two it is,
\begin{eqnarray*}
U_{0n}^{\prime }|00...0\rangle &\equiv &\exp (i\frac{1}{2}\pi
Q_{ny})C_{0}(\pi /2)C_{t}(-\pi /2)\exp (-i\frac{1}{2}\pi
Q_{ny})|00...0\rangle \\
&=&\left\{
\begin{array}{l}
\ |00...0\rangle ,\text{ if }t=0 \\
i|11...1\rangle ,\text{ if }t=N-1
\end{array}
\right. .
\end{eqnarray*}
If the quantum state $|t\rangle $ is the highest excited state, so that
$C_{t}(-\pi /2)=C_{N-1}(-\pi /2),$ then there is an $n$-order quantum
transition from the ground state to the highest excited state under the
action of the unitary operation $U_{0n}^{\prime }$ on the ground state;
otherwise there is no such $n$-order quantum transition and the ground
state remains unchanged. It is now easy to judge whether an unknown state $
|t\rangle $ is the state $|00...0\rangle $, the state $|11...1\rangle $,
or some other state of the Hilbert space by using first the quantum circuit $
U_{0n}(\pi )|00...0\rangle $ and then $U_{0n}^{\prime }|00...0\rangle .$
It is well known in computational complexity theory that a problem in NP may
be hard to solve on a classical computer, while a given candidate solution
to the problem can be checked efficiently. The same is true on a quantum
computer. How can one confirm that a given state is the solution to the
quantum search problem on a quantum computer? Suppose that the marked state
$|s\rangle $ is the true solution to the quantum search problem and that the
oracle unitary operation of the marked state is $C_{s}(\theta )$. For a
given quantum state $|r\rangle $ one knows its dynamical parameter vector
$\{a_{k}^{r}\};$ an example can be seen in equation (17). One first sets up
an auxiliary oracle unitary operation $C_{t}(\theta )=U_{or}C_{s}(\theta
)U_{or}^{+}:$
\[
U_{or}C_{s}(\theta )U_{or}^{+}=\left\{
\begin{array}{l}
C_{0}(\theta ),\text{ if }|r\rangle =|s\rangle \\
C_{t}(\theta )\text{ }(t\neq 0),\text{ if }|r\rangle \neq |s\rangle
\end{array}
\right.
\]
where the known unitary operator $U_{or}$ that depends upon the dynamical
parameter vector $\{a_{k}^{r}\}$ is given by [15a],
\[
U_{or}=\stackrel{n}{\stackunder{k=1}{\prod }}\{\exp (i\pi I_{kx}/2)\exp
(-i\pi a_{k}^{r}I_{kx}/2)\}.
\]
Then, using the quantum circuit $U_{0n}(\pi )|00...0\rangle ,$ one learns
whether the auxiliary oracle unitary operation $C_{t}(\theta )$ is $
C_{0}(\theta )$, $C_{N-1}(\theta )$, or some other operation. If $C_{t}(\theta
)$ is neither $C_{0}(\theta )$ nor $C_{N-1}(\theta ),$ then the quantum
state $|r\rangle $ is not the true solution $|s\rangle $ to the quantum
search problem. If $C_{t}(\theta )=C_{0}(\theta )$ or $C_{N-1}(\theta ),$
then the quantum circuit $U_{0n}^{\prime }|00...0\rangle $ is used further
to decide whether $C_{t}(\theta )=C_{0}(\theta )$ or $C_{t}(\theta
)=C_{N-1}(\theta ).$ If $C_{t}(\theta )=C_{N-1}(\theta ),$ then the state $
|r\rangle $ is not the solution $|s\rangle $; but if $C_{t}(\theta
)=C_{0}(\theta ),$ one knows with certainty that the quantum state $
|r\rangle $ is the solution $|s\rangle .$ Therefore, in polynomial time one
can confirm whether a given quantum state is the solution to the quantum
search problem.
Neither the ground state $|00...0\rangle $ nor the highest excited state $
|11...1\rangle $ of the Hilbert space of the $n$-qubit spin system with $
n=[\log _{2}p]+1$ belongs to the search space $S(m_{k}).$ It is clear that
the ground state $|00...0\rangle $ is not contained in the multiplicative
cyclic group state space $S(C_{p-1})$, as shown in section 2.1. On the other
hand, the prime $p$ is less than $2^{n}$ with $n=[\log _{2}p]+1,$ that is, $
p\leq 2^{n}-1$ and hence $p-1\leq 2^{n}-2.$ This means that every cyclic
group state $|g^{x}\func{mod}p\rangle $ of the state space $S(C_{p-1})$
corresponds one-to-one to its own integer $g^{x}\func{mod}p\in Z_{p}^{+},$
which is never greater than $2^{n}-2,$ while the highest excited state $
|11...1\rangle $ stands for the number $2^{n}-1.$ Therefore, the cyclic
group state space $S(C_{p-1})$ does not contain the state $|11...1\rangle .$
By inspecting the quantum program $Q_{p}$ and the quantum circuit $Q_{c}$ of
section 5, one can see that the state $|00...0\rangle $ is used by the
program $Q_{p}$ in the unitary transformation $|g_{r}(y)\rangle
|n_{h}\rangle =|1\rangle |0\rangle \leftrightarrow |0\rangle |0\rangle $ and
by the quantum circuit $Q_{c}$ as the control state $|0\rangle $ of the
control subspace $\{|c\rangle ,|0\rangle \}$ with $|c\rangle \neq
|11...1\rangle $, but the state $|00...0\rangle $ is not in the current
search space $S(m_{r})$, while the state $|11...1\rangle $ of the Hilbert
space that contains the search space $S(m_{r})$ is used by neither the
program nor the quantum circuit. Therefore, the unitary transformations $
Q_{p}|00...0\rangle =|00...0\rangle $ and $Q_{p}|11...1\rangle
=|11...1\rangle $ hold in the search space $S(m_{r})$. This is also in
agreement with the fact that the oracle unitary operation $\overline{U}
_{os_{k}}(\theta )$ has no effect on either of the two states.
The quantum circuit $U_{0n}(\pi )|00...0\rangle $ can now be modified so
that it can be used to determine the index $s_{k}$ of the oracle unitary
operation $\overline{U}_{os_{k}}(\theta ).$ Obviously, the superposition $
|\Psi _{0n}\rangle $ of (22) is not in the search space $S(m_{r})$ and is
not affected by the quantum program $Q_{p}.$ The ground state $
|00...0\rangle $ in the superposition $|\Psi _{0n}\rangle $ is first changed
to the state $|1\rangle $ by the unitary operation $F_{1}$ and then to the
cyclic group state $|(g^{M_{r}})^{x}\func{mod}p\rangle $ by the cyclic group
operation $(U_{g^{M_{r}}})^{x},$
\begin{eqnarray*}
|\Psi _{0n}\rangle &=&\frac{1}{\sqrt{2}}(|00...0\rangle +|11...1\rangle ) \\
\stackrel{F_{1}}{\rightarrow }\stackrel{(U_{g^{M_{r}}})^{x}}{\rightarrow }
|\Psi _{1n}\rangle &=&\frac{1}{\sqrt{2}}(|(g^{M_{r}})^{x}\func{mod}p\rangle
+|11...1\rangle ).
\end{eqnarray*}
The unitary operation $F_{1}$ and the cyclic group operation $
(U_{g^{M_{r}}})^{x}$ do not affect the highest excited state $|11...1\rangle $.
If the superposition $|\Psi _{1n}\rangle $ is now taken as the input state
of the oracle unitary operation $\overline{U}_{os_{k}}(\pi ),$ then in
effect the input state is still a single basis state for the oracle unitary
operation $\overline{U}_{os_{k}}(\pi )$ and also for the quantum program $
Q_{p}.$ Since the highest excited state $|11...1\rangle $ is not in the
search space $S(m_{r})$ and is not affected by the quantum program, the
single basis state $|(g^{M_{r}})^{x}\func{mod}p\rangle $ is the only
component of the state $|\Psi _{1n}\rangle $ on which the quantum program
can act, although the state $|\Psi _{1n}\rangle $ is a superposition of two
states. Now the oracle unitary operation $\overline{U}_{os_{k}}(\pi )$ is
applied to the state $|\Psi _{1n}\rangle .$ Note that only the basis state $
|(g^{M_{r}})^{x}\func{mod}p\rangle $ in the state $|\Psi _{1n}\rangle $ is
affected by the oracle unitary operation. If the index $s_{k}=x,$ then the
state $|(g^{M_{r}})^{x}\func{mod}p\rangle $ is inverted by the oracle
unitary operation $\overline{U}_{os_{k}}(\pi )$; otherwise the state $|\Psi
_{1n}\rangle $ remains unchanged. After these unitary operations the state $
|(g^{M_{r}})^{x}\func{mod}p\rangle $ is changed back to the ground state by
the inverse operations $[(U_{g^{M_{r}}})^{x}]^{+}$ and $F_{1}^{+}.$ At the
final step the inverse $n$-order quantum unitary operation $\exp (i\frac{1}{2}
\pi Q_{ny})$ is applied, so that the quantum measurement can decide whether
the index $x=s_{k}$ or not. The final result is given by
\begin{eqnarray*}
Q(x,s_{k})|00...0\rangle &=&\exp (i\frac{1}{2}\pi
Q_{ny})F_{1}^{+}[(U_{g^{M_{r}}})^{x}]^{+}\overline{U}_{os_{k}}(\pi ) \\
&&\times (U_{g^{M_{r}}})^{x}F_{1}\exp (-i\frac{1}{2}\pi Q_{ny})|00...0\rangle
\\
&=&\left\{
\begin{array}{l}
\ \ |11...1\rangle ,\text{ if }x=s_{k} \\
\ \ |00...0\rangle ,\text{ if }x\neq s_{k}
\end{array}
\right.
\end{eqnarray*}
The quantum measurement is carried out on the highest excited state $
|11...1\rangle .$ Given the oracle unitary operation $\overline{U}
_{os_{k}}(\pi ),$ one need try at most $m_{r}$ different index values $
x=0,1,...,m_{r}-1$ with the quantum circuit $Q(x,s_{k})|00...0\rangle $ to
find the index $s_{k},$ since $0\leq s_{k}<m_{r}.$ If the highest excited
state $|11...1\rangle $ is measured with a probability close to one, then
the corresponding index value $x$ is the index $s_{k}.$ It is again
emphasized that the input state of the quantum program $Q_{p}$ is
essentially limited to a single basis state during the quantum search
process. When the index values $\{s_{k}\}$ are obtained, one may use the
index identity (3) or (12) to compose the index $s,$ and hence the marked
state $|s\rangle $ is ultimately found for the quantum search problem in the
cyclic group state space. \newline
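The final composition step can be illustrated classically. The sketch below assumes, as one common reading of the index identities, that the index $s$ satisfies $s\equiv s_{k}$ $(\func{mod}$ $m_{k})$ for pairwise coprime moduli $\{m_{k}\}$, which is exactly the setting of the Chinese remainder theorem; the function name is an illustrative choice.

```python
def crt_compose(residues, moduli):
    """Return the unique s with 0 <= s < prod(moduli) and s = r_k (mod m_k),
    assuming the moduli are pairwise coprime."""
    M = 1
    for m in moduli:
        M *= m
    s = 0
    for r, m in zip(residues, moduli):
        Mk = M // m
        s += r * Mk * pow(Mk, -1, m)  # pow(Mk, -1, m): modular inverse (Python >= 3.8)
    return s % M
```

For instance, the residues $(2,3)$ with moduli $(3,5)$ compose to $s=8$, since $8\equiv 2$ $(\func{mod}$ $3)$ and $8\equiv 3$ $(\func{mod}$ $5)$.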
\newline
{\large 7. Discussion}
In this paper an oracle-based quantum dynamical method has been set up to
solve the quantum search problem in the cyclic group state space of the
Hilbert space of an $n$-qubit pure-state quantum system. The main aim is to
exploit the symmetry properties and structures of groups to help solve a
general unstructured quantum search problem in the Hilbert space. The
hardness of solving an unstructured quantum search problem by a standard
quantum search algorithm originates mainly from the low efficiency of
amplifying the amplitude of the marked state in the Hilbert space by the
oracle unitary operation combined with other known quantum operations. This
low amplitude-amplification efficiency means that a standard quantum search
algorithm generally achieves only a quadratic speedup over the best known
classical counterparts. In order to break through this quadratic speedup
limitation it is necessary to develop other types of quantum search
algorithms. The quantum dynamical method [15] may be a better choice, for it
allows a parametrized description of an unknown quantum state such as the
marked state, and of its oracle unitary operation, in the Hilbert space of
the $n$-qubit quantum system. Since the oracle unitary operation corresponds
one-to-one to the unknown marked state, the parametrized description makes
it possible for the quantum dynamical method to manipulate at will the
evolution process of the marked state in the quantum system, and hence also
the oracle unitary operation. The quantum dynamical method differs from the
standard quantum search algorithm in that any quantum state of the Hilbert
space can be described completely by a set of dynamical parameters, and
hence the quantum search for the marked state can be achieved indirectly by
determining the set of dynamical parameters that completely describe the
marked state, instead of by directly measuring the marked state. Therefore,
amplitude amplification of the marked state and direct measurement of the
marked state to obtain its complete information, both of which are key
components of a standard quantum search algorithm, may not be necessary in
the quantum dynamical method. In the quantum dynamical method the quantum
measurement that outputs the computing results may be carried out on states
that carry the information of the marked state, and the complete information
of the marked state can then be extracted from these computing results. In
this paper the binary dynamical representation for a quantum state in the
Hilbert space of an $n$-qubit quantum system is generalized to a multi-base
dynamical representation for a quantum state in a cyclic group state space,
and the quantum dynamical method is thereby extended to solve the quantum
search problem in the cyclic group state space of the Hilbert space.
A cyclic group state space of the Hilbert space of an $n$-qubit quantum
system carries the symmetry property and structure of the cyclic group. A
quantum search process may be affected greatly by this symmetry property and
structure if the quantum search is performed in the cyclic group state
space. The amplitude-amplification efficiency for the marked state achieved
by the oracle unitary operation combined with other known unitary operations
is generally inversely proportional to the square root of the dimension of
the search space of the quantum search problem, and this low efficiency
results in the quadratic speedup limitation of a standard quantum search
algorithm. A natural scheme to bypass this speedup limitation is to restrict
the search space of the problem to a small subspace of the Hilbert space, so
that the limitation becomes less important or even unimportant in the
quantum search problem. It is therefore a challenge to reduce efficiently
the search space from the whole Hilbert space to a small subspace in the
unstructured quantum search problem. It has been shown that the symmetry
property and structure in the spin space of an $n$-qubit spin system may be
helpful for this reduction of the search space. This paper further
emphasizes and generalizes the idea that the symmetry property and structure
of a quantum system, or even of a group, may be employed to speed up the
quantum search process through the scheme of search-space reduction. A
cyclic group is one of the simplest groups, and its symmetry property and
structure have been studied in detail and extensively. Therefore, it could
be simplest and most convenient to exploit the symmetry property and
structure of a cyclic group to help solve the quantum search problem in the
cyclic group state space of the Hilbert space.
Reversible mathematical-logic operations have been used extensively in
quantum computation. They may generally be thought of as selective unitary
operations in a quantum system, and they have been employed in the
construction of the quantum search processes in the cyclic group state
space. A great advantage of this type of unitary operation is that the time
evolution process of a quantum state in a complex multi-qubit quantum system
may be traced more easily under the action of the mathematical-logic
operations. However, in order to be reversible and unitary, a mathematical
logic operation usually needs many more extra auxiliary qubits than the
corresponding physical unitary operators. Since the dimension of the Hilbert
space of a quantum system increases exponentially with the qubit number, one
must be careful in using mathematical-logic operations to solve a quantum
search problem; otherwise these extra auxiliary qubits could lead to a large
search space for the quantum search problem and degrade the quantum search
process. On the other hand, the conventional unitary operators, propagators,
operations, or quantum gates of a quantum system in physics usually do not
need any extra auxiliary qubits, except for artificial conditional unitary
operations, which usually need only a few extra qubits to achieve some
specific conditional operations rather than to maintain their unitarity. The
time evolution process of a quantum state in a multi-qubit quantum system is
generally complex and not easy to trace under the action of this type of
unitary operation. However, there is a general rule that any unknown quantum
state can be efficiently transferred from a small subspace to a larger
subspace of the Hilbert space of the multi-qubit quantum system. Through
this general rule one can set up the connection between the Hilbert space of
the $n$-qubit quantum system and its cyclic group state space for an
unstructured quantum search problem.
It has been shown that if there existed a universal quantum computer that
obeys the unitary quantum dynamics of physics in computation and is capable
of computing any computable function in mathematics, such as any recursive
function, then such a universal quantum computer would be powerful enough to
solve efficiently the quantum search problem in the cyclic group state
space. There seems to be a question whether such an ideal universal quantum
computer exists or not. This question is due to the argument that a
universal quantum computer could not have a satisfactory halting protocol
when its input state is a superposition. However, as far as the present
quantum search process in the multiplicative cyclic group state space is
concerned, this question does not arise, because the input state in the
quantum search process can be strictly limited to a single basis state. An
ideal quantum program, which is a key component of the present quantum
search process, is designed for the efficient reduction of the quantum
search space of the quantum search problem. It has been shown in theory that
this quantum program could be run unitarily on an ideal universal quantum
computer when its input state is strictly limited to a single basis state,
and hence it could be used to solve efficiently the quantum search problem
in the cyclic group state space. Moreover, a quantum circuit is also
designed to simulate the ideal quantum program efficiently. The key point of
the quantum circuit is to use the state-locking pulse and the two-level
control subspace to simulate efficiently the unitary halting protocol of the
quantum program. Although a state-locking pulse that is applied continuously
to a quantum system during the whole period of the quantum circuit is not at
present commonly used in quantum computation, a large number of similar
techniques have been used extensively in conventional NMR experiments [43].
Obviously, it is necessary to investigate the quantum circuit further in
some important respects, such as how to design a state-locking pulse with a
better performance and how the state-locking pulse affects the practical
computational complexity of the quantum circuit and of the whole quantum
search process. Evidently, it is also possible to design a simpler quantum
program and quantum circuit than the present ones to solve the quantum
search problem in the cyclic group state space.
With the help of the symmetry property and structure of a cyclic group and
the Chinese remainder theorem of number theory, any quantum state in the
cyclic group state space can be efficiently converted into a tensor product
of states of the cyclic group state subspaces of the cyclic group state
space. These states of the cyclic group state subspaces are related to one
another through the Chinese remainder theorem. These relations are important
and may be further employed to develop efficient quantum search methods in
the cyclic group state space in future work. \newline
\newline
{\large References}\newline
* E-mail address of the author: [email protected].\newline
1. (a) S.A.Cook, The P versus NP problem, http://www.cs.toronto.edu/ \symbol{
126}sacook, 2000; (b) M.R.Garey and D.S.Johnson, Computers and
Intractability: A guide to the theory of NP-completeness, Freeman and
Company, New York, 1979; (c) C.Papadimtriou, Computational Complexity,
Addison-Wesley, 1994. \newline
2. L.K.Grover, Quantum mechanics helps in searching for a needle in a
haystack, Phys.Rev.Lett. 79, 325 (1997)\newline
3. C.H.Bennett, E.Bernstein, G.Brassard, and U.Vazirani, Strengths and
weaknesses of quantum computing, http://arxiv.org/abs/quant-ph/9701001 (1997)
\newline
4. E.Farhi and S.Gutmann, Analog analogue of digital quantum computation,
Phys.Rev. A 57, 2403 (1998); \newline
5. E.Farhi, J.Goldstone, S.Gutmann, and M.Sipser, Quantum computation by
adiabatic evolution, http://arxiv.org/abs/quant-ph/0001106 (2000)\newline
6. G.Brassard, P.Hoyer, M.Mosca, and A.Tapp, Quantum amplitude amplification
and estimation, http://arxiv.org/abs/quant-ph/0005055 (2000) \newline
7. N.J.Cerf, L.K.Grover, and C.P.Williams, Nested quantum search and
NP-complete problems, Phys. Rev. A 61, 2303 (2000) (see also:
quant-ph/9806078)\newline
8. T. Hogg, A framework for structured quantum search, http://arxiv.org
/abs/quant-ph/9701013 (1997).\newline
9. C.Zalka, Grover's quantum searching algorithm is optimal, Phys.Rev. A 60,
2746-2751 (1999) (see also: quant-ph/9711070) \newline
10. E.Biham, O.Biham, D.Biron, M.Grass, D.A.Lidar, and D.Shapira, Analysis
of Generalized Grover's Quantum search algorithms using recursion equations,
http://arxiv.org/abs/quant-ph/0010077 (2000) \newline
11. N.Shenvi, J.Kempe, K.Whaley, Quantum random-walk search algorithm,
Phys.Rev. A, 67, 052307 (2003)\newline
12. A.Childs and J.Goldstone, Spatial search by quantum walk,
http://arxiv.org/abs/quant-ph/0306054 (2003) \newline
13. W. van Dam, M.Mosca, and U.Vazirani, How powerful is adiabatic quantum
computation?, http://arxiv.org/abs/quant-ph/0206003 (2002)\newline
14. R.Beals, H.Buhrman, R.Cleve, M.Mosca, and R.De Wolf, Quantum lower
bounds by polynomials, Proceedings of 39th Annual Symposium on Foundations
of Computer Science, pp. 352 (1998)\newline
15. (a) X.Miao, Universal construction for the unsorted quantum search
algorithms, http://arxiv.org/abs/quant-ph/0101126 (2001)
(b) X.Miao, Solving the quantum search problem in polynomial time on an NMR
quantum computer, http://arxiv.org/abs/quant-ph/0206102 (2002)\newline
16. X.Miao, Efficient multiple-quantum transition processes in an $n-$qubit
spin system, http://arxiv.org/abs/quant-ph/0411046 (2004)\newline
17. R. Jozsa, Quantum algorithms and the Fourier transform,
http://arxiv.org/abs/quant-ph/9707033\newline
18. H.Kurzweil and B.Stellmacher, \textit{An introduction to the theory of
finite groups}, Springer-Verlag, New York, 2004\newline
19. G.H.Hardy and E.M.Wright, \textit{An introduction to the theory of
numbers}, 5th. ed., Oxford Science Press, 1979.\newline
20. W.J.Leveque, \textit{Fundamentals of number theory}, Dover Publications,
Inc., New York, 1996\newline
21. S.C.Pohlig and M.E.Hellman, An improved algorithm for computing
logarithms over $GF(p)$ and its cryptographic significance, IEEE
transactions on information theory, IT-24, 106 (1978)\newline
22. P.W.Shor, Polynomial-time algorithms for prime factorization and
discrete logarithms on a quantum computer, SIAM J.Comput. 26, 1484 (1997),
also see: Proc. 35th Annual Symposium on Foundations of Computer Science,
IEEE Computer Society, Los Alamitos, CA, pp.124 (1994)\newline
23. D.Beckman, A.N.Chari, S.Devabhaktuni, and J.Preskill, Efficient networks
for quantum factoring, Phys. Rev. A 54, 1034 (1996)\newline
24. V.Vedral, A.Barenco, and A.Ekert, Quantum networks for elementary
arithmetic operations, Phys. Rev. A 54, 147 (1996)\newline
25. M.A.Nielsen and I.L.Chuang, \textit{Quantum computation and quantum
information}, Chapter 5, Cambridge University Press, 2000\newline
26. C.H.Bennett, Logical reversibility of computation, IBM J. Res. Develop.
17, 525 (1973)\newline
27. C.H.Bennett, Time/space trade-offs for reversible computation, SIAM J.
Comput. 18, 766 (1989)\newline
28. Y.Levine and A.T.Sherman, A note on Bennett$^{\prime }$s time-space
tradeoff for reversible computation, SIAM J.Comput. 19, 673 (1990)\newline
29. D.Deutsch, Quantum theory, the Church-Turing principle and the universal
quantum computer, Proc.Roy.Soc. London A, 400, 96 (1985)\newline
30. (a) M.Mosca and C.Zalka, Exact quantum Fourier transforms and discrete
logarithm algorithms, http://arxiv.org/abs/quant-ph/0301093 (2003)
(b) J.Proos and C.Zalka, Shor$^{\prime }$s discrete logarithm quantum
algorithm for elliptic curves, http://arxiv.org/abs/quant-ph/0301141 (2003)
\newline
31. A.Y.Kitaev, Quantum measurements and the Abelian stabilizer problem,
http://arxiv.org/abs/quant-ph/9511026 (1995) \newline
32. A.Barenco, C.H.Bennett, R.Cleve, D.DiVincenzo, N.Margolus, P.Shor,
T.Sleator, J.Smolin, and H.Weinfurter, Elementary gates for quantum
computation, Phys.Rev. A 52, 3457 (1995)\newline
33. D.Coppersmith, An approximate Fourier transform useful in quantum
factoring, IBM research report RC 19642 (1994);
see also: http://arxiv.org/abs/quant-ph/0201067 (2002) \newline
34. (a) R.Cleve, A note on computing quantum Fourier transforms by quantum
programs, http://www.cpsc.ucalgary.ca (1994);
(b) R.Cleve and J.Watrous, Fast parallel circuits for the quantum Fourier
transform, http://arxiv.org/abs/quant-ph/0006004 (2000) \newline
35. L.Hales and S.Hallgren, An improved quantum Fourier transform algorithm
and applications, Proc. 41st Annual Symposium on Foundations of Computer
Science, 515 (2000) \newline
36. R.Cleve, A.Ekert, C.Macchiavello, and M.Mosca, Quantum algorithms
revisited, Proc.R.Soc.Lond. A 454, 339 (1998)\newline
37. P.Benioff, The computer as a physical system: A microscopic quantum
mechanical Hamiltonian model of computers as represented by Turing machines,
J.Statist.Phys., 22, 563 (1980) \newline
38. D.Deutsch, Quantum computational networks, Proc.Roy.Soc. London A, 425,
73 (1989)\newline
39. A.Yao, Quantum circuit complexity, Proc. 34th Annual Symposium on
Foundations of Computer Science, IEEE Computer Society Press, Los Alamitos,
CA, pp. 352\newline
40. E.Bernstein and U.Vazirani, Quantum computation complexity, SIAM
J.Comput. 26, 1411 (1997)\newline
41. (a) J.M.Myers, Can a universal quantum computer be fully quantum?,
Phys.Rev.Lett. 78, 1823 (1997);
(b) M.Ozawa, Quantum Turing machines: local transition, preparation,
measurement, and halting, http://arxiv.org/abs/quant-ph/9809038 (1998);
(c) N.Linden and S.Popescu, The halting problem for quantum computers,
http://arxiv.org/abs/quant-ph/9806054 (1998);
(d) Y.Shi, Remarks on universal quantum computer, http://arxiv.org/abs
/quant-ph/9908074 (1999) \newline
42. X. Miao, Multiple-quantum operator algebra spaces and description for
the unitary time evolution of multilevel spin systems, Molec.Phys. 98, 625
(2000)\newline
43. R.R.Ernst, G.Bodenhausen, and A.Wokaun, \textit{Principles of nuclear
magnetic resonance in one and two dimensions}, Oxford university press,
Oxford, 1987\newline
44. (a) H.F.Trotter, On the product of semigroups of operators,
Proc.Am.Math.Soc. 10, 545 (1959)
(b) M.Suzuki, Decomposition formulas of exponential operators and Lie
exponentials with some applications to quantum mechanics and statistical
physics, J.Math.Phys. 26, 601 (1985) \newline
\newline
\end{document} |
\begin{document}
\title[]{Polarization and greedy energy on the sphere}
\keywords{Spherical cap discrepancy, greedy sequences, Stolarsky principle.}
\subjclass[2010]{52A40, 52C99}
\author[]{Dmitriy Bilyk \and Michelle Mastrianni \and \\ Ryan W. Matzke \and Stefan Steinerberger
}
\address{School of Mathematics, University of Minnesota, Minneapolis, MN 55455, USA}
\email{[email protected]}
\address{School of Mathematics, University of Minnesota, Minneapolis, MN 55455, USA}
\email{[email protected]}
\address{Department of Mathematics, Vanderbilt University, Nashville, TN 37235, USA}
\email{[email protected]}
\address{Department of Mathematics, University of Washington, Seattle, WA 98195, USA}
\email{[email protected]}
\begin{abstract}
We investigate the behavior of a greedy sequence on the sphere $\mathbb{S}^d$ defined so that at each step the point that minimizes the Riesz $s$-energy is added to the existing set of points. We show that for $0<s<d$, the greedy sequence achieves optimal second-order behavior for the Riesz $s$-energy (up to constants). In order to obtain this result, we prove that the second-order term of the maximal polarization with Riesz $s$-kernels is of order $N^{s/d}$ in the same range $0<s<d$. Furthermore, using the Stolarsky principle relating the $L^2$-discrepancy of a point set with the pairwise sum of distances (Riesz energy with $s=-1$),
we also obtain a simple upper bound on the $L^2$-spherical cap discrepancy of the greedy sequence and give numerical examples that indicate that the true discrepancy is much lower.
\end{abstract}
\maketitle
\section{Introduction and background}
We consider the classical problem of distributing points on the sphere $\mathbb{S}^d$ as regularly as possible. Any such discussion requires a notion of regularity. We recall several such notions below.
\subsection{Energy and polarization}
Let $\omega_{N,d} = \left\{x_1, \dots, x_N\right\} \subset \mathbb{S}^d$ denote a set of $N$ points on the sphere.
For a symmetric, lower semi-continuous kernel $K: \mathbb{S}^d \times \mathbb{S}^d \rightarrow (- \infty, \infty]$ and a point set $\omega_{N,d}$, we define the discrete energy as
\begin{equation}\label{eq:en}
E_K(\omega_{N,d}) = 2 \sum_{1 \leq i < j \leq N } K(x_i, x_j).
\end{equation}
Of particular importance are the Riesz $s$-kernels:
\begin{equation*}
K_s(x,y) : = \begin{cases}
\sgn(s) \|x - y\|^{-s} & s \neq 0 \\
- \log(\| x-y\|) & s = 0.
\end{cases}
\end{equation*}
Another quantity closely related to energy is polarization, defined as
\begin{equation}\label{eq:pol}
P_K(\omega_{N,d}) = \min_{x \in \mathbb{S}^d} \sum_{j=1}^{N} K(x, x_j).
\end{equation}
For a fixed $N$, we define the minimal energy and maximal (constrained) polarization, respectively, by
\begin{equation*}
\mathcal{E}_K(N) = \min_{\omega_{N,d} \subset \mathbb{S}^d} E_K(\omega_{N,d}) \quad \textup{and} \quad \mathcal{P}_K(N) = \max_{\omega_{N,d} \subset \mathbb{S}^d} P_K(\omega_{N,d}).
\end{equation*}
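As a concrete numerical illustration of these definitions (a sketch in Python/NumPy, not part of the formal development), the energy \eqref{eq:en} can be evaluated directly, while the minimum over $\mathbb{S}^d$ in \eqref{eq:pol} can be approximated by minimizing over a large random sample of test points; the sample-based value always overestimates the true minimum.

```python
import numpy as np

def riesz_kernel(x, y, s):
    """Riesz s-kernel K_s evaluated on arrays of unit vectors."""
    r = np.linalg.norm(x - y, axis=-1)
    if s == 0:
        return -np.log(r)
    return np.sign(s) * r ** (-s)

def riesz_energy(points, s):
    """Discrete energy E_K = 2 * sum_{i < j} K_s(x_i, x_j), as in the text."""
    i, j = np.triu_indices(len(points), k=1)
    return float(2.0 * riesz_kernel(points[i], points[j], s).sum())

def riesz_polarization(points, s, num_test=40000, seed=0):
    """Approximate P_K = min_x sum_j K_s(x, x_j) by a minimum over random test points."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(num_test, points.shape[1]))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    potentials = riesz_kernel(x[:, None, :], points[None, :, :], s).sum(axis=1)
    return float(potentials.min())
```

For the antipodal pair $\{(0,0,1), (0,0,-1)\}$ on $\mathbb{S}^2$ with $s=-1$ this gives $E_{K_{-1}} = -4$ exactly, and an approximate polarization close to $-2\sqrt{2}$, the minimum being attained on the equator.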
Both energy (see e.g. \cite{BelMO, BetS, BraG, GraS, HarS}) and polarization (see e.g. \cite{BDHSS22, FarN, FarR, Oht, RezSVo, RezSVl, Sim}), especially in the case of Riesz kernels, have been actively studied over the past few decades: we refer the reader to the excellent recent book \cite{BHS} for a comprehensive exposition.
The second-order asymptotic behavior for the optimal Riesz $s$-energy is well understood (see the discussion in Section \ref{sec:lower}) for $-2<s<d$. However, results for second-order asymptotics of maximal polarization with Riesz $s$-kernels have not been obtained before (except when $d=1$). We prove these optimal bounds in Theorem \ref{thm:Singular Riesz Polarization} for the singular potential-theoretic range $0<s<d$ and then use this result to demonstrate that, in the same range, the greedy sequence obtains optimal second-order asymptotic behavior (up to constants) for Riesz $s$-energy (Theorem \ref{thm:greedyenergy}) and, for infinitely many $N$, polarization (Corollary \ref{cor:pol}).
In this paper we only consider the range $-2 < s < d$. This is natural for the Riesz $s$-energy on the $d$-dimensional manifold $\mathbb S^d$, since for $s<-2$ minimizing energy no longer leads to uniform distribution (indeed, optimal configurations are achieved by placing half of the points at each of two opposite poles \cite{Bj}), while for $s>d$ the kernels $K_s$ are hypersingular.
\subsection{Spherical cap discrepancy}
For any $w \in \mathbb{S}^d$ and any $t \in (-1,1)$, we define
$$ C(w,t) = \left\{x \in \mathbb{S}^d: \left\langle w, x \right\rangle \geq t \right\} \subset \mathbb{S}^d$$
to be a spherical cap. The spherical cap discrepancy of a set $\omega_{N,d} = \left\{x_1, \dots, x_N\right\} \subset \mathbb{S}^d$ is then defined as
$$ D(\omega_{N,d}) = \sup_{w \in \mathbb{S}^d} \sup_{-1 < t < 1} \left| \frac{1}{N} \sum_{i=1}^{N} \chi_{C(w,t)}(x_i) - \sigma_d(C(w,t)) \right|,$$
where $\sigma_d$ is the surface measure on $\mathbb{S}^d$ normalized so that $\sigma_d(\mathbb{S}^d) = 1$.
A fundamental result of Beck \cite{beck1, beck2} states that
\begin{equation}\label{eq:beck}
N^{-\frac{1}{2}-\frac{1}{2d}} \lesssim \inf_{\omega_{N,d}} D(\omega_{N,d}) \lesssim N^{-\frac{1}{2}-\frac{1}{2d}} \sqrt{\log{N}},
\end{equation}
where the implicit constants depend on the dimension $d$. The upper bound in \eqref{eq:beck} follows from a probabilistic construction (jittered sampling). There are many probabilistic \cite{ABD, AZ, ABS, bellhouse, BE, BBL, BRSSWW, FHM} and deterministic \cite{ABD,etayo, fer,FHM,HEAL, kor,alex, marzo, narco,s} constructions that have been investigated. Most of these explicit point sets have only been proven to have discrepancy of the order no better than $N^{-1/2}$, although there is numerical evidence that some of them might be almost optimal.
Instead of the spherical cap discrepancy, in this paper we will work with the $L^2$-spherical cap discrepancy
\begin{equation}\label{eq:L2cap}
D_{L^2, \tiny \mbox{cap}}^2(\omega_{N,d}) = \int_{-1}^1 \int_{\mathbb{S}^d} \left| \frac{1}{N} \sum_{i=1}^{N} \chi_{C(w,t)}(x_i) - \sigma_d(C(w,t)) \right|^2 d\sigma_d(w) dt.
\end{equation}
The $L^2$-spherical cap discrepancy is smaller than the spherical cap discrepancy, though the difference is, for many examples, at most logarithmic. The $L^2$-discrepancy is connected to energy, particularly to the Riesz energy with $s=-1$ (sum of distances), via the Stolarsky invariance principle \cite{stol}, which we state in \S \ref{sec:thm1}.
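For $d=2$ this connection takes a particularly concrete form: with the normalization \eqref{eq:L2cap}, a short computation of the constant (our own check for $d=2$, to be compared with the general statement in \S \ref{sec:thm1}) yields $D_{L^2, \tiny \mbox{cap}}^2(\omega_{N,2}) = \tfrac14 \big( \tfrac43 - \tfrac{1}{N^2} \sum_{i,j=1}^{N} \|x_i - x_j\| \big)$. The following sketch (Python/NumPy, for illustration only) evaluates the $L^2$-cap discrepancy of a point set through this identity.

```python
import numpy as np

def l2_cap_discrepancy_s2(points):
    """L^2 spherical cap discrepancy on S^2 via the Stolarsky invariance
    principle: D^2 = (1/4) * (4/3 - average pairwise distance), where the
    average runs over all ordered pairs, including the zero diagonal."""
    diffs = points[:, None, :] - points[None, :, :]
    avg_dist = float(np.linalg.norm(diffs, axis=-1).mean())
    return float(np.sqrt(max(0.0, 0.25 * (4.0 / 3.0 - avg_dist))))
```

A single point has $D = 1/\sqrt{3}$ regardless of its position, and an antipodal pair has $D = 1/\sqrt{12}$; both values can also be obtained by evaluating \eqref{eq:L2cap} directly.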
\section{Greedy sequences and main results}\label{s:main}
One of the main purposes of this paper is to discuss a simple greedy construction of sequences that turns out to simultaneously
\begin{enumerate}[(i)]
\item achieve optimal Riesz $s$-energy, up to the second-order behavior, when $0 < s < d$,
\item satisfy good distribution properties in the sense of $ D_{L^2, \tiny \mbox{cap}}^2$.
\end{enumerate}
Greedy sequences have been studied in the context of energy \cite{brown, HarRSV, Lop, LM21, LopM, LopS, LopW, Pri1, Pri2, SafT, Sic, wolf} and discrepancy \cite{brown, kritzinger, stein1, stein2, stein3, stein4}.
It is well understood that greedy sequences on $\mathbb{S}^1$ tend to be fairly structured \cite{P20}, but this cannot be expected in higher dimensions. However, we show that greedy sequences on $\mathbb{S}^d$ exhibit an unusual amount of regularity: they produce nearly optimal Riesz energy for $0<s<d$ and their $L^2$-spherical cap discrepancy satisfies favorable bounds (numerical experiments in \S \ref{sec:numerics} suggest they might even be optimal).
Generally, constructing a {\emph{sequence}} with good discrepancy or energy properties is significantly harder than constructing point sets for arbitrary $N$ since one is forced to keep previously chosen points. In some situations (e.g. discrepancy with respect to axis-parallel boxes on the torus \cite{KN, Pil}) even the optimal discrepancy bounds are slightly different for sequences and point sets. Most known constructions of good point distributions on the sphere are not sequences. This makes the greedy {\emph{sequence}} even more valuable.
\subsection{Construction of the greedy sequence} \label{subsec:construction} We will work with an infinite sequence $(x_n)_{n=1}^{\infty}$ instead of a single set. This is more difficult since points, once placed, remain fixed. We start with a completely arbitrary initial set of points $\left\{x_1, \dots, x_m \right\} \subset \mathbb{S}^d$. The construction is then greedy: at each step we add the point that minimizes the Riesz $s$-energy with respect to the existing set of points. More formally, given a set $\omega_{N,d} = \left\{x_1, \dots, x_N\right\} \subset \mathbb{S}^d$, we pick
\begin{equation}\label{eq:greedy}
x_{N+1} = \argmin_{x \in \mathbb{S}^d} \sum_{i=1}^{N} K_s(x, x_i) = \argmin_{x\in \mathbb{S}^d} E_{K_s} \big( \omega_{N,d} \cup \{x\}\big).
\end{equation}
In other words, $x_{N+1}$ is just the point where the polarization is achieved for the set $\{x_1,\dots, x_N\}$, i.e. $$ \sum_{j=1}^{N} K_s(x_{N+1}, x_j) = P_{K_s} \big( \{x_1,\dots, x_N\} \big). $$
If the minimum is not attained at a unique point, then any such point may be chosen.
Note that if $s = -1$, this construction is equivalent to maximizing the sum of (Euclidean) distances to the points in the existing set. When $s=0$, i.e. $K_0(x,y) = - \log(\|x-y\|)$, {and $d = 1$ (or more generally, on any compact $\Omega \subset \mathbb{C}$)}, such sequences are called Leja sequences in recognition of the work of Leja \cite{Lej} although they were introduced by Edrei \cite{Erd} earlier. Simultaneously, G\'{o}rski studied the case of $s = -1$ (the Newtonian kernel) for compact sets $\Omega \subset \mathbb{R}^3$ \cite{Gor}. For $s = d-2$ (i.e. the Green kernel) and compact $\Omega \subset \mathbb{R}^d$, the corresponding greedy sequences are known as Leja--G\'{o}rski sequences, which have been studied in \cite{Got01, Pri2}. Leja points have applications in Stochastic Analysis \cite{NarJ} and Approximation/Interpolation Theory \cite{BiaCC, CalVM11, CalVM12, Chk, DeM, JanWZ, Rei, TayT}, and various numerical methods have been developed to approximate such points (see, e.g. \cite{BagCR, BosMSV, CorD}).
As a general remark on the behavior of greedy sequences in any dimension, we note that the greedy sequences defined above (when initialized with a single point) have the property that every even-indexed point is antipodal to the point placed immediately before it. This generalizes \cite[Theorem 2.1]{LopM}; the proof is identical.
\begin{proposition}\label{prop:Greedy Symmetry}
Let $s \in \mathbb{R}$. Let $(x_n)_{n=1}^{\infty}$ be a greedy sequence on $\mathbb{S}^d$ for some initial point $x_1 \in \mathbb{S}^d$. Then for all $k \in \mathbb{N}$, $x_{2k} = - x_{2k-1}$.
\end{proposition}
More generally, this result holds if we replace $K_s$ with any kernel $K(x,y) = f(\|x-y\|)$ with $f$ strictly decreasing.
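The construction and Proposition \ref{prop:Greedy Symmetry} can be illustrated numerically. In the sketch below (Python/NumPy, not part of the formal development), the minimization over all of $\mathbb{S}^d$ in \eqref{eq:greedy} is replaced by a minimization over a fixed antipodally symmetric candidate set; the same strict-monotonicity argument as in the proposition applies in this discrete setting, so the computed sequence pairs up antipodally.

```python
import numpy as np

def greedy_riesz(s, num_points, candidates):
    """Greedy sequence for K_s over a finite candidate set: each new point
    minimizes the potential of the current configuration.  Already-chosen
    points are excluded by assigning them infinite potential."""
    chosen = [0]  # single initial point, as in the proposition
    for _ in range(num_points - 1):
        current = candidates[chosen]
        r = np.linalg.norm(candidates[:, None, :] - current[None, :, :], axis=-1)
        with np.errstate(divide="ignore"):
            pot = np.where(r > 0, np.sign(s) * r ** (-s), np.inf).sum(axis=1)
        chosen.append(int(np.argmin(pot)))
    return candidates[chosen]

rng = np.random.default_rng(1)
half = rng.normal(size=(400, 3))
half /= np.linalg.norm(half, axis=1, keepdims=True)
candidates = np.concatenate([half, -half])  # antipodally symmetric candidate set

greedy_points = greedy_riesz(s=1.0, num_points=10, candidates=candidates)
```

Here $x_{2k} = -x_{2k-1}$ for every $k$, in agreement with Proposition \ref{prop:Greedy Symmetry}.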
\subsection{Optimal second-order polarization estimates}
For $s<d$, we note that the kernel $K_s(x,y)$ is integrable on $\mathbb S^d$ in each variable and depends only on the distance between $x$ and $y$. This suggests defining the constant
\begin{equation}\label{eq:Continuous Min Energy}
\mathcal{I}_{s,d} = \int_{\mathbb{S}^d} \int_{\mathbb{S}^d} K_s (x,y) \, d \sigma_d(x) \, d\sigma_d(y) = \int_{\mathbb{S}^d} K_s(x,y) \, d\sigma_d(y) \quad \textup{for all } x \in \mathbb{S}^{d},
\end{equation}
with $\sigma_d$ denoting, as before, the normalized uniform measure on $\mathbb{S}^d$. Observe that the constant $\mathcal{I}_{s,d} $ is negative for $s<0$.
One would expect polarization to be maximized when the points are distributed as uniformly as possible. In that case, one could replace summation by integration in \eqref{eq:pol} and expect the maximal Riesz polarization $\mathcal{P}_{K_s}$ to behave roughly as $\mathcal{I}_{s,d} N$; this is indeed correct, see e.g. \cite{LopS}.
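On $\mathbb{S}^2$ this constant can be made explicit: if $x,y$ are independent and uniform, then $t = \langle x, y\rangle$ is uniform on $[-1,1]$ (Archimedes' theorem), and substituting $u = \|x-y\| = \sqrt{2-2t}$ gives $\mathcal{I}_{s,2} = \operatorname{sgn}(s)\, 2^{1-s}/(2-s)$ for $s < 2$, $s \neq 0$; in particular $\mathcal{I}_{-1,2} = -\tfrac43$ and $\mathcal{I}_{1,2} = 1$. The following sketch (Python/NumPy, for illustration only) compares this closed form with a Monte Carlo average of $K_s$ over random pairs.

```python
import numpy as np

def riesz_constant_s2(s):
    """Closed form for I_{s,2} on S^2, valid for s < 2 and s != 0:
    I_{s,2} = sgn(s) * 2^(1-s) / (2-s)."""
    return np.sign(s) * 2.0 ** (1.0 - s) / (2.0 - s)

def riesz_constant_s2_mc(s, num_pairs=200_000, seed=0):
    """Monte Carlo estimate of I_{s,2}: average of K_s over random pairs on S^2."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(num_pairs, 3))
    y = rng.normal(size=(num_pairs, 3))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    y /= np.linalg.norm(y, axis=1, keepdims=True)
    r = np.linalg.norm(x - y, axis=1)
    return float(np.mean(np.sign(s) * r ** (-s)))
```

For $s = -1$ the two values agree up to the Monte Carlo error; for $s$ close to $2$ the integrand is nearly non-integrable and the Monte Carlo estimate degrades.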
We prove more delicate polarization estimates with optimal (up to constants) second-order terms for $0<s<d$.
\begin{theorem}\label{thm:Singular Riesz Polarization}
For $-2 < s < d$, there exist positive constants $b_{s,d}, c_{s,d}$ such that the following hold for all $N \in \mathbb{N}$.
\noindent For $0 < s <d$,
\begin{equation}\label{eq:opt_polarization}
-b_{s,d} N^{\frac{s}{d}} \leq \mathcal{P}_{K_s}(N) - \mathcal{I}_{s,d} N \leq -c_{s,d} N^{\frac{s}{d}}.
\end{equation}
For $-2 < s < 0$,
\begin{equation}\label{eq:opt_polarization1}
\mathcal{I}_{s,d} + b_{s,d} N^{\frac{s}{d}} \leq \mathcal{P}_{K_s}(N) - \mathcal{I}_{s,d} N \leq -c_{s,d} N^{\frac{s}{d}} .
\end{equation}
For $s=0$,
\begin{equation}\label{eq:opt_polarization2}
- \frac{\log N}{d} - b_{0,d} \leq \mathcal{P}_{K_0}(N) - \mathcal I_{0,d} N \leq - c_{0,d}.
\end{equation}
\end{theorem}
The lower bounds in the Theorem above (proved in \S \ref{sec:lower}) follow from known optimal energy estimates (see Theorem \ref{thm:Riesz Energy Asymptotics}), but the upper bounds are novel and rely on the estimates of the generalized $L^2$ discrepancy \cite{BD, BDM, BMV}, see \S\ref{sec:upper}.
Note that even in the previously studied case of the circle ($d=1$), our result is new when $-2<s<-1$. This in fact leads us to the following characterization of asymptotics for polarization on the circle:
\begin{corollary}\label{cor:Polarization circle}
For $-1 \leq s < 1$, $s \neq 0$,
\begin{equation}\label{eq:Pol Circ s geq -1}
\mathcal{P}_{K_s}(N) = \mathcal{I}_{s,1} N - \frac{2(2^s-1)}{(2 \pi)^s}\zeta(s) N^s + \mathcal{O}( N^{s-2}).
\end{equation}
For $s = 0$,
\begin{equation}\label{eq:Pol Circ log}
\mathcal{P}_{K_0}(N) = - \log(2).
\end{equation}
When $-2 < s <-1$, there are $b_{s,1}, c_{s,1} \in \mathbb{R}_{>0}$ so that for $N$ sufficiently large,
\begin{equation}\label{eq:Pol Circ s leq -1}
\mathcal{I}_{s,1} N + b_{s,1} N^{s} \leq \mathcal{P}_{K_s}(N) \leq \mathcal{I}_{s,1} N + c_{s,1} N^{s}.
\end{equation}
\end{corollary}
\noindent An extensive discussion of the one-dimensional case is presented in \S\ref{sec:circle}.
As suggested by these sharp bounds in the one-dimensional case, as well as the asymptotics for the hypersingular case $s \geq d$, which were studied in \cite{AmbBE, BorB, BorHRS, ErdS, HKS13, HarPS}, we conjecture that the upper bounds for polarization given in Theorem \ref{thm:Singular Riesz Polarization} are sharp for $-2<s\le 0$ in all dimensions $d\ge 1$, i.e. the second term is of the order $\mathcal O (N^{s/d})$.
\subsection{Riesz energy of greedy sequences}
While Theorem \ref{thm:Singular Riesz Polarization} is interesting in its own right, we also use it as a tool to obtain second-order energy estimates for the greedy sequences (see \S\ref{subsec:greedy}), which are of optimal order in the range $0<s<d$.
\begin{theorem}
\label{thm:greedyenergy}
Let $d \geq 2$, $-2 < s < d$, and let $(x_n)_{n \in\mathbb{N}}$ be a greedy sequence for $K_s$ as defined in \eqref{eq:greedy} with fixed initial point $x_1 \in \mathbb{S}^d$. For $N>1$, denote the first $N$ elements of the sequence by $\omega_{N,d} = \{ x_1, \dots, x_N \}$. Then there exist positive constants $C_{s,d}, C'_{s,d}$ such that the following hold for $N \geq 2$.
\noindent For $0 < s < d$,
\begin{equation}\label{eq:greedyenergy}
- C_{s,d} N^{1 +\frac{s}{d}} \leq E_{K_s}(\omega_{N,d}) - \mathcal{I}_{s,d} N^2 \leq - C'_{s,d} N^{1 +\frac{s}{d}}.
\end{equation}
For $-2 < s < 0$,
\begin{equation}\label{eq:greedyenergy1}
C_{s,d} N^{1 +\frac{s}{d}} \leq E_{K_s}(\omega_{N,d}) - \mathcal{I}_{s,d} N^2 \leq - \mathcal{I}_{s,d} N - C_{s,d}' N^{1 +\frac{s}{d}} .
\end{equation}
For $s = 0$,
\begin{equation}\label{eq:greedyenergy2}
- \frac{1}{d} N \log(N) - C'_{0,d} N \leq E_{K_0}(\omega_{N,d}) - \mathcal{I}_{0,d} N^2 \leq \mathcal{O}( N).
\end{equation}
\end{theorem}
The results of Theorem \ref{thm:greedyenergy} are new for all $d\ge 2$. {The case of $s \leq -2$ was entirely handled in \cite[Theorems 4.1 and 5.2]{LopM} and the case of $s \geq d$ has been studied in \cite{LopS, LopW}.}
In the case of the circle $\mathbb S^1$, i.e. $d=1$, more precise information about the Riesz energy of greedy sequences is known, see Theorem \ref{thm:Energy of Greedy points Circle}.
The lower bounds in Theorem \ref{thm:greedyenergy} are well-known optimal estimates for minimal Riesz $s$-energies discussed in \S \ref{sec:lower}, while the upper bounds for greedy sequences are the case $m =1$ of the following more general result about greedy sequences with an arbitrary number of initial points, which we prove in \S \ref{subsec:greedy}.
\begin{theorem}\label{thm:Greedy with m starting points}
Let $d \in \mathbb{N}$, $-2 < s < d$, and let $(x_n)_{n \in\mathbb{N}}$ be the greedy sequence of points on $\mathbb{S}^d$ with respect to $K_s$ as defined in \eqref{eq:greedy} with fixed arbitrary $x_1, ..., x_m \in \mathbb{S}^d$. Let $\omega_{N,d}$ denote the set of the first $N$ points in this sequence. Then there exists a positive constant $C_{s,d}$ such that for $N > m$, the following hold.
\noindent For $ 0 \leq s < d$,
\begin{equation}
E_{K_s}(\omega_{N,d}) \leq \mathcal{I}_{s,d}(N^2 - m^2) - C_{s,d}' (N^{1 + \frac{s}{d}} - m^{1 + \frac{s}{d}} ) - \mathcal{I}_{s,d}(N-m) + E_{K_s} (\omega_{ m,d}) + \mathcal{O}(N^{\frac{s}{d}}).
\end{equation}
For $ - 2< s \leq 0$ if $d \geq 2$ and $-1 < s \leq 0$ if $d=1$,
\begin{equation}\label{eq:greedyenergy1m}
E_{K_s}(\omega_{N,d}) \leq \mathcal{I}_{s,d}(N^2 - m^2) - \mathcal{I}_{s,d}(N-m) - C_{s,d}' (N^{1 + \frac{s}{d}} - m^{1 + \frac{s}{d}} ) + E_{K_s} (\omega_{ m,d}) + \mathcal{O}(1).
\end{equation}
For $d=1$ and $s=-1$,
\begin{equation}
E_{K_{-1}}(\omega_{N,1}) \leq \mathcal{I}_{-1,1}(N^2 - m^2) - \mathcal{I}_{-1,1}(N-m) - C_{-1,1}' (\log(N) - \log(m)) + E_{K_{-1}} (\omega_{ m,1}) + \mathcal{O}(1).
\end{equation}
For $d=1$ and $-2 < s < -1$,
\begin{equation}
E_{K_s}(\omega_{N,1}) \leq \mathcal{I}_{s,1}(N^2 - m^2) - \mathcal{I}_{s,1}(N-m) + E_{K_s} (\omega_{ m,1}) + \mathcal{O}(N^{1+ s}) + \mathcal{O}(m^{1+ s}).
\end{equation}
\end{theorem}
These are the first second-order upper bounds for the energy in the literature for the case when the greedy sequence is initialized with $m$ arbitrary points, rather than a single point. The Riesz energy of greedy sequences initialized by one point has been studied before in a general setting, see e.g. \cite{Lop, LopS, Sic}, where the first-order term was obtained, along with the second-order estimates in the case $d=1$, see \cite{LopM, LopW}. In fact, for $m=1$ these results give much stronger bounds on the Riesz energy of greedy sequences on the circle $\mathbb S^1$, but the proofs rely strongly on the structural properties of the greedy sequences (see Theorem \ref{thm:Energy of Greedy points Circle} and the discussion in \S \ref{sec:greedyenergyS1}), which do not generalize to $m>1$. Hence, the general results of Theorem \ref{thm:Greedy with m starting points} are interesting (although probably not sharp for $s \in (-2,0]$) even in the one-dimensional case.
\subsection{Polarization of greedy sequences}
Theorem \ref{thm:Greedy with m starting points} demonstrates, in particular, that for any greedy sequence with $m$ initial data points
\begin{equation}
\lim_{N \rightarrow \infty} \frac{E_{K_s}(\omega_{N,d})}{N^2} = \mathcal{I}_{s,d}.
\end{equation}
Combining this with Theorem 4.2.2 and Corollaries 14.6.5 and 14.6.7 from \cite{BHS}, we have that any such greedy sequence is uniformly distributed and therefore yields optimal first-order asymptotics for polarization (which was shown for $m=1$ in \cite[Theorem 2.1]{LopS} and \cite[Lemma 3.1]{Sic}).
\begin{corollary}\label{cor:unif}
Let $d \in \mathbb{N}$, $-2 < s < d$, and $(x_n)_{n \in\mathbb{N}}$ be the greedy sequence of points on $\mathbb{S}^d$ with respect to $K_s$ as defined in \eqref{eq:greedy} with fixed arbitrary $x_1, ..., x_m \in \mathbb{S}^d$. Let $\omega_{N,d}$ denote the set of the first $N$ points in this sequence. Then the sequence of point sets $\{ \omega_{N,d} \}_{N=1}^{\infty}$ is uniformly distributed on $\mathbb{S}^d$, and
\begin{equation}
\lim_{N \rightarrow \infty} \frac{P_{K_s}(\omega_{N,d})}{N} = \mathcal{I}_{s,d}.
\end{equation}
\end{corollary}
Moreover, Theorem \ref{thm:greedyenergy} states that greedy sequences on $\mathbb{S}^d$ achieve Riesz energy that is asymptotically optimal up to the second-order term (at least up to constants) in the potential-theoretic case $0<s<d$. Since this may be interpreted as a measure of the regularity of a point set, we would expect the greedy sequence to behave fairly regularly with respect to other notions of uniformity as well. We prove that greedy points also achieve almost maximal polarization (up to the second-order term) most of the time in the sense of asymptotic density (in particular, for infinitely many $N$).
\begin{corollary}\label{cor:pol}
Let $d \geq 2$, $0 < s < d$, and $(x_n)_{n \in\mathbb{N}}$ be a greedy sequence for $K_s$. Let $\omega_{N,d}$ denote the set of the first $N$ points in this sequence. For every $\varepsilon > 0$, there exists $X_{s,d, \varepsilon} > 0$ such that
$$ \# \left\{1 \leq j \leq N: P_{K_s} (\omega_{j,d}) \geq j \cdot \mathcal{I}_{s,d} - X_{s,d,\varepsilon} \cdot j^{\frac{s}{d}} \right\} \geq \left(1 - \varepsilon \right)N \qquad \mbox{for all sufficiently large}~N.$$
\end{corollary}
This is optimal up to the value of $X_{s,d,\varepsilon}$ since Theorem \ref{thm:Singular Riesz Polarization}
implies the unconditional bound
$ P_{K_s} (\omega_{j,d}) \leq \mathcal{P}_{K_s}(j) \leq \mathcal{I}_{s,d}\cdot j - c_{s,d} \cdot j^{\frac{s}{d}}.$
The proof gives a quantitative description of how $X_{s,d,\varepsilon}$ depends on $s,d$ and $\varepsilon$ in terms of the constant $C_{s,d}$ arising in (\ref{eq:greedyenergy}); we refer to the proof in \S\ref{sec:cor} for details.
\subsection{A uniform $L^2$-discrepancy bound} We now concentrate on Riesz energy with $s=-1$ or, equivalently, on the problem of selecting $\left\{x_1, \dots, x_N \right\} \subset \mathbb{S}^d$ in such a way that the sum of distances
$\sum_{i,j = 1}^{N} \|x_i - x_j\| $ is maximized.
The problem has particular geometric significance in light of the Stolarsky invariance principle (formally stated in \S \ref{sec:thm1}) which states that maximizing the sum of distances is the
same as minimizing the spherical $L^2-$cap discrepancy. On $\mathbb{S}^2$, there are a large number of deterministic constructions \cite{ABD, etayo, fer, FHM, HEAL, kor, alex, marzo, narco, s}
and some of them are known to achieve a spherical cap discrepancy as small as $N^{-1/2}$. The seminal results of Beck imply that optimal constructions should have discrepancy of order $N^{-3/4}$ (and there is numerical evidence that some of these sequences of point sets achieve it). Our contribution to this question is two-fold:
\begin{enumerate}
\item we show that a large class of recursively defined sequences, containing all greedy sequences, achieve an $L^2-$spherical cap discrepancy of order $N^{-1/2}$ with an explicit small constant, and
\item we provide some numerical evidence that greedy sequences achieve a rate that is either $N^{-3/4}$ or very close to it, see \S \ref{sec:numerics}.
\end{enumerate}
The result will apply to a broader class of sequences: we note the trivial inequality
$$ \max_{x \in \mathbb{S}^d} \sum_{i=1}^{n} \| x - x_i\| \geq \int_{\mathbb{S}^d} \sum_{i=1}^{n} \| x- x_i\| d\sigma_d(x) = n \cdot (-\mathcal{I}_{-1, d}),$$
where we note that $-\mathcal{I}_{-1, d} > 0$.
The subsequent theorem applies to all sequences in which the next element $x_{n+1}$ is chosen so that $\sum_{i=1}^{n} \|x_{n+1} - x_i\| \geq n \cdot (-\mathcal{I}_{-1,d})$; rather than maximizing the sum, we choose any point where the sum is at least its mean value (and, in particular, the greedy constructions always satisfy this inequality).
\begin{theorem}
\label{thm:L2discrepancy}
Let $\omega_{m,d} = \left\{x_1, \dots, x_m \right\} \subset \mathbb{S}^d$ be an arbitrary initial set and suppose that for all $n \geq m$, the set is extended to a sequence satisfying
$$ \sum_{i=1}^{n} \| x_{n+1}- x_i\| \geq n \cdot (-\mathcal{I}_{-1,d}).$$
Let $\omega_{N,d}$ denote the set of the first $N$ points in this sequence. Then, for any $N \geq m$,
$$ D_{L^2, {\tiny \emph{cap}}}(\omega_{N,d}) \leq \sqrt{ \frac{1}{d} \frac{\Gamma((d+1)/2)}{\sqrt{\pi} \Gamma(d/2)}} (-\mathcal{I}_{-1,d})^{1/2} \left( \frac{1}{N} + \frac{m^2 - m}{N^2} \right)^{1/2}.$$
On $\mathbb{S}^2$, this expression simplifies to
$$ D_{L^2, \emph{cap}}(\omega_{N,2}) \leq \frac{\sqrt{2}}{\sqrt{3}} \left( \frac{1}{N} + \frac{m^2 - m}{N^2} \right)^{1/2}.$$
\end{theorem}
Observe that, due to the Stolarsky principle (Theorem \ref{thm:stolarsky}), when $m=1$, the right-hand side of this inequality exactly matches the first-order term in the upper bound in \eqref{eq:greedyenergy1} in Theorem \ref{thm:greedyenergy}. In fact, inequality \eqref{eq:greedyenergy1} is even stronger, since it includes a negative second-order term (similar conclusions for all $m\ge 1$ follow from \eqref{eq:greedyenergy1m}). However, this theorem still provides new information since the bounds here apply to a much wider class of sequences than just purely greedy sequences.
The theorem gives a uniform $N^{-1/2}$-bound for a fairly large family of sequences, and this bound is generically tight within the family. One might nevertheless expect the greedy sequence itself to be better behaved, as suggested by the case $0<s<d$ of Theorem \ref{thm:greedyenergy}.
Theorem \ref{thm:Singular Riesz Polarization}, in the case $s=-1$, can be rewritten as saying that there is some $c_d > 0$ such that for any $N$-point set $\{x_1, \cdots, x_N\},$
$$ \max_{x \in \mathbb{S}^d} \sum_{i=1}^{N} \| x_{}- x_i\| \geq N \cdot (-\mathcal{I}_{-1, d}) + \frac{c_{d}}{ N^{1/d}}.$$
While this is sharp for $d=1$ and we believe it to be sharp for higher dimensions, it cannot be sharp for a greedy sequence ``on average'', which becomes evident from the following result proved in \S \ref{sec:prop:disc} (recall that $-\mathcal{I}_{-1, 2} = \frac43$).
\begin{proposition}\label{prop:disc} Let $(x_n)$ be a greedy sequence on $\mathbb{S}^2$ with $m\ge 1$ initial points. For $N$ sufficiently large, depending on $m$,
\begin{equation}\label{eq:averagepol}
\frac{1}{25} \leq \frac{1}{N}\sum_{n=1}^{N-1} \left( \max_{x \in \mathbb{S}^2} \sum_{i=1}^{n} \| x - x_i\| - \frac{4 n}{3} \right) \leq \frac{2}{3}.
\end{equation}
\end{proposition}
If greedy sequences indeed have good $L^2-$discrepancy properties (as suggested by numerics) and if $D_{L^2, \emph{cap}}(\omega_{N,2}) \ll N^{-1/2}$, then
the upper bound in Proposition \ref{prop:disc} has to be asymptotically tight. The gap between $1/25$ and $2/3$ is a quantitative way of measuring our lack of understanding of the underlying dynamics.
The proof of Proposition \ref{prop:disc} can easily be adapted, with different constants, to other Riesz $s$-kernels and to all dimensions $d\ge 2$ (as well as to a wide range of more general kernels). We choose to state it only for $s=-1$ and $d=2$ for the sake of simplicity of exposition.
\begin{remark}
The $L^2$ spherical cap discrepancy can also be interpreted as the worst-case error for numerical integration (with equal weights) for the Sobolev space $H^{\frac{d+1}{2}}(\mathbb{S}^d)$, see \cite{BraD}, and our results can be translated to this setting. Greedy sequences in the context of worst-case error for numerical integration for reproducing kernel Hilbert spaces have been studied, for example, in \cite{SanKH}, where a similar bound (although with an additional optimization over weights) was obtained.
\end{remark}
\subsection{Outline}
The outline of the paper is as follows. In \S \ref{sec:circle}, we collect a number of results about both polarization and greedy sequences in the special case of the circle $\mathbb{S}^1$, presenting the proof of Corollary \ref{cor:Polarization circle} as well as the analogues of Theorem \ref{thm:greedyenergy} and Corollary \ref{cor:pol} for $d = 1$. In \S \ref{sec:higherdim} we present the proofs of some of the main results for $d \ge 2$: Theorem \ref{thm:Singular Riesz Polarization} in \S \ref{sec:upper}--\ref{sec:lower}, Theorems \ref{thm:greedyenergy} and \ref{thm:Greedy with m starting points} in \S\ref{subsec:greedy}, and Corollary \ref{cor:pol} in \S\ref{sec:cor}. In \S \ref{sec:thm1} we prove the $L^2$-discrepancy bound (Theorem \ref{thm:L2discrepancy}) and Proposition \ref{prop:disc} and discuss some similar known results. We conclude with a discussion of the numerical properties of greedy sequences in \S \ref{sec:numerics}, and provide some auxiliary results in \S \ref{sec:Appendix}.
\section{Polarization and Greedy Sequences on $\mathbb{S}^1$}
\label{sec:circle}
The case of $d = 1$ in Theorems \ref{thm:Singular Riesz Polarization} and \ref{thm:greedyenergy} has mostly been settled in the literature previously, as the unique properties of the circle yield convenient examples of point sets that maximize polarization and greedy energy. In this section we collect known results on the circle for polarization and greedy energy and note where our Theorems \ref{thm:Singular Riesz Polarization} and \ref{thm:greedyenergy} fill in some gaps or differ.
\subsection{Polarization on the circle and proof of Corollary \ref{cor:Polarization circle}}
In \cite[Theorem 1]{HKS13} it is shown that equally spaced points on $\mathbb{S}^1$, denoted $\omega_N^*$, are optimal for polarization for any kernel $K(x,y) = f( \arccos(\langle x, y \rangle))$ such that $f$ is decreasing and convex on $[0, \pi]$. This includes the Riesz kernels for $-1 \le s < \infty$, but not $s < -1$. However, for $-2 < s < -1$, $\omega_N^*$ still has optimal asymptotic behavior, and likely maximizes polarization as well.
For the range $-2 < s$, the points that minimize the discrete potential with respect to $\omega_N^*$ are the midpoints of an arc between two $N$th roots of unity. Because the midpoints of an arc between two $N$th roots of unity are themselves $2N$th roots of unity, there is a natural expression for the polarization on $\mathbb{S}^1$ in terms of the corresponding energies for $N$ and $2N$ equally spaced points:
\begin{equation}
\label{eqn:diffofenergies}
P_{K_s}(\omega_N^*) = \frac{E_{K_s}(\omega_{2N}^*)}{2N} - \frac{E_{K_s}(\omega_N^*)}{N}.
\end{equation}
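Identity \eqref{eqn:diffofenergies} is exact because the $2N$th roots of unity decompose into the $N$th roots of unity and their midpoints, so the cross terms in the energy of $\omega_{2N}^*$ equal $2N$ times the potential at a single midpoint. A quick numerical sanity check for $s=-1$ (a Python sketch; the helper functions are ours, and the candidate grid is chosen as a multiple of $2N$ so that it contains the midpoints):

```python
import math

def circle_points(n):
    # n equally spaced points (the n-th roots of unity) as complex numbers
    return [complex(math.cos(2.0*math.pi*k/n), math.sin(2.0*math.pi*k/n))
            for k in range(n)]

def energy(pts):
    # E_{K_{-1}} with K_{-1}(x,y) = -|x-y|, summed over ordered pairs i != j
    return -sum(abs(p - q) for i, p in enumerate(pts)
                for j, q in enumerate(pts) if i != j)

def polarization(pts, grid):
    # min of the discrete potential over a grid that contains the midpoints
    cands = circle_points(grid)
    return min(-sum(abs(c - p) for p in pts) for c in cands)

N = 6
lhs = polarization(circle_points(N), grid=8*N)  # minimizer is a midpoint
rhs = energy(circle_points(2*N))/(2*N) - energy(circle_points(N))/N
```

Since the midpoints are the true minimizers of the discrete potential for $s=-1$, the grid minimum coincides with the continuum polarization, and the two sides agree to machine precision.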
There is an explicit formula for the energy of $N$ equally spaced points on $\mathbb{S}^1$.
\begin{theorem}[Theorem 1.1 in \cite{BHS09}]
\label{thm:zetafunction}
Let $s \in \mathbb{C}$, $s \neq 0, 1, 3, 5, \dots$, and let $q$ be any nonnegative integer such that $q \ge (\operatorname{Re}(s) -1)/2$. If $\omega_N^*$ is a configuration of $N$ equally spaced points on $\mathbb{S}^1$, then
$$E_{K_s}(\omega_N^*) = \mathcal{I}_{s,1} N^2 + \sgn(s) \frac{2}{(2\pi)^s} \sum_{n=0}^q a_n(s) \zeta(s-2n)N^{1+s-2n}+\mathcal{O}(N^{s-1-2q}),$$
where $\zeta(s)$ is the classical Riemann zeta function and $a_n(s)$ are the coefficients in the expansion
$$\Big(\frac{\sin \pi z}{\pi z}\Big)^{-s} = \sum_{n=0}^{\infty} a_n(s) z^{2n}, \hspace{1cm} |z| < 1.$$
\end{theorem}
Combining Theorem \ref{thm:zetafunction} with \eqref{eqn:diffofenergies}, we have that on $\mathbb{S}^1$ for $-2 < s < 1$, $s \neq 0$,
\begin{equation}
\label{eqn:expansion}
P_{K_s}(\omega_N^*) = \mathcal{I}_{s,1} N + \sgn(s) \frac{2}{(2\pi)^s} \zeta(s) (2^s-1) N^s + \mathcal{O}(N^{s-2}).
\end{equation}
In \cite[page 623]{BHS09}, the authors also show that
\begin{equation}\label{eq:Log Energy equally spaced points}
E_{K_0}(\omega_N^*) = - N \log(N)
\end{equation}
which, when combined with \eqref{eqn:diffofenergies}, gives us that
\begin{equation}\label{eqn:expansion log Pol circle}
P_{K_0}(\omega_N^*) = - \log(2).
\end{equation}
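Both closed forms for the roots of unity can be verified exactly, using the factorizations $\prod_{j=1}^{N-1}|1-\omega^j| = N$ and $\prod_{j=0}^{N-1}|z-\omega^j| = |z^N-1|$, which at a midpoint $z = e^{i\pi/N}$ gives $|z^N - 1| = 2$. A Python sketch (variable names ours):

```python
import math

N = 16
roots = [complex(math.cos(2.0*math.pi*k/N), math.sin(2.0*math.pi*k/N))
         for k in range(N)]

# E_{K_0} with K_0(x,y) = -log|x-y| over ordered pairs; expect -N log N,
# since the product of |1 - w^j| over j = 1, ..., N-1 equals N
E = -sum(math.log(abs(p - q)) for i, p in enumerate(roots)
         for j, q in enumerate(roots) if i != j)

# potential at a midpoint m = e^{i pi/N}; expect -log 2, since |m^N - 1| = 2
mid = complex(math.cos(math.pi/N), math.sin(math.pi/N))
P = -sum(math.log(abs(mid - p)) for p in roots)
```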
Both \eqref{eqn:expansion} and \eqref{eqn:expansion log Pol circle} yield a lower bound on optimal polarization $\mathcal{P}_{K_s}(N) \ge P_{K_s}(\omega_N^*)$, which matches the upper bounds given in Theorem \ref{thm:Singular Riesz Polarization} for $-2<s<1$.
In the case $-1 \leq s < 1$, the optimality of the roots of unity \cite[Theorem 1]{HKS13} shows that $\mathcal{P}_{K_s}(N) = P_{K_s}(\omega_N^*)$, completing the proof of \eqref{eq:Pol Circ s geq -1} and \eqref{eq:Pol Circ log}.
The proof of the upper bounds in Theorem \ref{thm:Singular Riesz Polarization} (which is presented in \S \ref{sec:upper} and holds for all $d\ge 1$) covers the range $-2<s< -1$ in which the roots of unity are not known to be optimal, proving \eqref{eq:Pol Circ s leq -1}. Thus this case of Theorem \ref{thm:Singular Riesz Polarization} is new even in the one-dimensional case. This completes the proof of polarization estimates for $d=1$, i.e. Corollary \ref{cor:Polarization circle}.
\subsection{Energy for greedy sequences}\label{sec:greedyenergyS1}
The behavior of greedy sequences on $\mathbb{S}^1$ is also well-studied. For $-2 < s$, it is known that any greedy sequence is in fact a classical van der Corput sequence \cite{vdc} (see \cite[Theorem 5]{BiaCC}, \cite[Lemma 3.7]{LopM}, \cite[Sec 1.2]{LopW}, \cite[Lemmas 4.1 and 4.2]{LopS}, and also \cite[Example 2]{wolf}, which is perhaps the earliest observation of this kind, but just for $s=-1$). A similar result was shown for a large class of kernels (whenever $K(x,y) = f( \arccos( \langle x , y \rangle))$, and $f$ is a bounded, continuous, decreasing, convex function of the geodesic distance) in \cite[Thm 2.1]{P20}. This has made possible explicit computations of bounds on the asymptotic behavior of the greedy Riesz energies (see Theorems 1.1, 1.2, and 1.5 in \cite{LopW} and Theorems 3.16, 3.17, 3.18 in \cite{LopM}). Here we collect these results, in less detail.
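The van der Corput structure is easy to observe numerically: running the greedy construction for $s=-1$ on a grid of candidate angles, the first four points land at the quarter points of the circle (in binary-reversal order, up to ties between symmetric maximizers). A sketch of the experiment (the discrete candidate grid is our simplification, not the exact greedy construction):

```python
import math

def chord(a, b):
    # chord length between angles a and b on the unit circle
    return abs(2.0*math.sin((a - b)/2.0))

def greedy_circle(n_points, grid=64):
    # greedy sequence for K_{-1} on S^1: at each step, append the candidate
    # angle maximizing the sum of distances to the points chosen so far
    cands = [2.0*math.pi*k/grid for k in range(grid)]
    seq = [0.0]  # a single initial point at angle 0
    while len(seq) < n_points:
        seq.append(max(cands, key=lambda t: sum(chord(t, u) for u in seq)))
    return seq

# angles of the first four greedy points, as fractions of a full turn
fracs = sorted(round(t/(2.0*math.pi), 6) for t in greedy_circle(4))
```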
\begin{theorem}\label{thm:Energy of Greedy points Circle}
On the circle $\mathbb{S}^1$, for $-2 < s < 1$ and $N \geq 2$, if $\omega_N$ is the first $N$ elements of a greedy sequence, then
\begin{equation}
E_{K_s}(\omega_N) = \mathcal{I}_{s,1} N^2 + \begin{cases}
\mathcal{O}(N^{s+1}) & s \in (-1,0) \cup (0,1) \\
- N \log(N) + \mathcal{O}(N) & s = 0 \\
\mathcal{O}(\log(N)) & s = -1 \\
\mathcal{O}(1) & s \in (-2,-1)
\end{cases}.
\end{equation}
The order (of the second-order term) in each case cannot be improved.
\end{theorem}
We note that for $-2 < s \leq 0$, this is an improvement on the upper bounds achieved in Theorem \ref{thm:greedyenergy}, suggesting that there is likely room for improvement on the upper bounds in higher dimensions in this range.
Moreover, Theorem \ref{thm:Riesz Energy Asymptotics} states that the optimal second-order term for the Riesz $s$-energy on $\mathbb S^d$ is of order $ \mathcal O (N^{1+\frac{s}{d}})$. We therefore see that for $s > -1$, point sets produced via the greedy algorithm have optimal asymptotic behavior on $\mathbb S^1$, whereas for $ -2 < s \leq -1$, the greedy algorithm \emph{does not} produce point sets with optimal Riesz energy. This leads to the open question of whether these statements also hold in higher dimensions.
At the same time, at least when $s=-1$, these bounds are actually sharp for {\emph{sequences}}. Indeed, the case of $s=-1$ in Theorem \ref{thm:Energy of Greedy points Circle} and the Stolarsky invariance principle (Theorem \ref{thm:stolarsky}) imply that the $L^2$ spherical cap discrepancy satisfies $D_{L^2,cap} (\omega_N) = \mathcal O (N^{-1} \sqrt{\log N})$. But spherical caps on the circle are just intervals, hence, this is just the classical periodic $L^2$-discrepancy of one-dimensional sequences, for which $\mathcal O (N^{-1} \sqrt{\log N})$ is known to be the optimal order, as shown in \cite{Proi} (see also \cite{Kirk, Pil}), with the ideas going back to the seminal results of Roth \cite{Roth}. In fact, as mentioned earlier, in this case the greedy sequence with one initial point is just the van der Corput sequence, whose discrepancy is well studied \cite{CF, kritzinger, PA}.
On the other hand, Theorem \ref{thm:Riesz Energy Asymptotics} shows that the optimal second-order term for the energy in the case $d=1$ and $s=-1$ is $ \mathcal O (N^{1+\frac{s}{d}}) = \mathcal O (1)$, and therefore, the optimal $L^2$-discrepancy is $\mathcal O (N^{-1})$ (by the Stolarsky principle), which is easily seen to be achieved by $N$ equally spaced points on the circle. This highlights an important difference between the behavior of $N$-point sets and {\emph{infinite sequences}}.
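The $\mathcal O(N^{-1})$ behavior of equally spaced points can be checked by evaluating the right-hand side of the Stolarsky identity (Theorem \ref{thm:stolarsky}), where the formula for the constant gives $c_1 = 1/\pi$ at $d=1$ and the average chord length on $\mathbb S^1$ is $4/\pi$. A sketch (ours); the energy expansion predicts $N \sqrt{D^2} \to 1/\sqrt 3$:

```python
import math

def stolarsky_rhs(N):
    # c_1 * ( average chord - pairwise sum / N^2 ) for N equally spaced points
    # on S^1, with c_1 = 1/pi and average chord length 4/pi
    pts = [(math.cos(2.0*math.pi*k/N), math.sin(2.0*math.pi*k/N))
           for k in range(N)]
    s = sum(math.hypot(p[0]-q[0], p[1]-q[1]) for p in pts for q in pts)
    return (1.0/math.pi)*(4.0/math.pi - s/N**2)

N = 64
D2 = stolarsky_rhs(N)  # expected to be ~ 1/(3 N^2), i.e. discrepancy ~ 1/(sqrt(3) N)
```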
\subsection{Polarization for greedy sequences}
As discussed earlier, the fact that any greedy sequence is uniformly distributed suggests that they may behave well for different measures of uniformity. For polarization, the asymptotic behavior of a greedy sequence for Riesz kernels on the circle was shown in \cite[Theorem 3.11]{LopM} and \cite[Theorems 1.1 and 1.4]{LM21} (the case of hypersingular Riesz kernels was also handled in \cite{LM21}, and the case of $s \leq -2$ in \cite[Theorems 4.1 and 5.2]{LopM}).
\begin{theorem}\label{thm:Polar of Greedy points Circle}
On the circle $\mathbb{S}^1$, for $-2 < s < 1$ and $N \geq 1$, if $\omega_N$ is the first $N$ elements of a greedy sequence, then
\begin{equation}
P_{K_s}(\omega_N) = \mathcal{I}_{s,1} N + \begin{cases}
\mathcal{O}(N^s) & s \in (0,1) \\
\mathcal{O}(\log(N)) & s = 0 \\
\mathcal{O}(1) & s \in (-2,0)
\end{cases}.
\end{equation}
The order (of the second-order term) in each case cannot be improved.
\end{theorem}
Comparing this to Corollary \ref{cor:Polarization circle}, we see that the greedy sequences on the circle have good second-order asymptotic behavior for $0 < s$, but not for $-2 < s \leq 0$.
\section{The Proof of Theorems \ref{thm:Singular Riesz Polarization} and \ref{thm:greedyenergy}}
\label{sec:higherdim}
We proceed with the proof of Theorem \ref{thm:Singular Riesz Polarization} for $d \ge 2$. We associate the Riesz $s$-kernel $K_s(x,y)$ with a function of the inner product $t$ between the points $x$ and $y$ by observing that $\| x-y \| = ( 2 - 2 t)^{1/2}$ and setting $K_s (x,y) = f_s (\langle x,y \rangle)$, where:
\begin{align*}
f_{s}(t) &= \begin{cases}
\sgn(s) (2 - 2t)^{-s/2} & s \neq 0 \\
-\frac{1}{2} \log( 2 - 2t) & s=0.
\end{cases}
\end{align*}
Further, to facilitate our proof of the upper bound for polarization, we also introduce non-singular approximations to the functions $f_s$. For $\varepsilon > 0$, take
\begin{equation}
f_{s,\varepsilon}(t) = \begin{cases}
\sgn(s) (2 + \varepsilon - 2t)^{-s/2} & s \neq 0 \\
-\frac{1}{2} \log( 2 + \varepsilon - 2t) & s=0.
\end{cases}
\end{equation}
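The approximations $f_{s,\varepsilon}$ increase pointwise to $f_s$ as $\varepsilon \to 0$, and for $s > 0$ satisfy $0 \le f_{s,\varepsilon} \le f_s$, which is the monotonicity used later in \S\ref{sec:upper}. A minimal sketch of these two definitions (function names ours; the case $s=0$ is handled separately, so the sign factor is never evaluated at $0$):

```python
import math

def f_s(s, t):
    # Riesz kernel as a function of the inner product t = <x,y>
    if s == 0:
        return -0.5*math.log(2.0 - 2.0*t)
    return math.copysign(1.0, s)*(2.0 - 2.0*t)**(-s/2.0)

def f_s_eps(s, eps, t):
    # non-singular approximation: 2 - 2t is replaced by 2 + eps - 2t
    if s == 0:
        return -0.5*math.log(2.0 + eps - 2.0*t)
    return math.copysign(1.0, s)*(2.0 + eps - 2.0*t)**(-s/2.0)
```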
\subsection{Proof of Theorem \ref{thm:Singular Riesz Polarization}: upper bound}\label{sec:upper}
\label{subsec:upperbound}
For $d \in \mathbb{N}$, let $w_{d}(t) = (1-t^2)^{\frac{d-2}{2}}$. For any $f \in L_{w_d}^1([-1,1])$, we have the Gegenbauer expansion
\begin{equation}\label{eq:Gegen Expand General}
f(t) \sim \sum_{n=0}^{\infty} \widehat{f}(n,d) \frac{2n+d-1}{d-1} C_n^{\frac{d-1}{2}}(t),
\end{equation}
where $C_n^\lambda$ are the standard Gegenbauer polynomials (see e.g. \cite{DX} for details) and
\begin{equation}\label{eq:GegenCoeff}
\widehat{f}(n,d) = \frac{\Gamma(\frac{d+1}{2}) n! \Gamma(d-1)}{\sqrt{\pi} \Gamma( \frac{d}{2}) \Gamma(n+d-1)} \int_{-1}^{1} f(t) C_n^{\frac{d-1}{2}}(t) w_d(t) dt.
\end{equation}
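For $d=2$ the weight is $w_2 \equiv 1$, the Gegenbauer polynomials $C_n^{1/2}$ are the Legendre polynomials $P_n$, and the constant in \eqref{eq:GegenCoeff} collapses to $\tfrac12$, since $\Gamma(\tfrac32)/\sqrt{\pi} = \tfrac12$ and $n!$ cancels $\Gamma(n+1)$. A sketch verifying the resulting expansion for $f(t) = t^2$ (the quadrature scheme is ours):

```python
def legendre(n, t):
    # Legendre polynomial P_n(t) via the three-term recurrence
    p0, p1 = 1.0, t
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2*k + 1)*t*p1 - k*p0)/(k + 1)
    return p1

def coeff_d2(f, n, m=100000):
    # the coefficient formula at d = 2: (1/2) * int_{-1}^{1} f(t) P_n(t) dt,
    # approximated by the composite midpoint rule with m nodes
    h = 2.0/m
    return 0.5*h*sum(f(-1.0 + (j + 0.5)*h)*legendre(n, -1.0 + (j + 0.5)*h)
                     for j in range(m))

c0 = coeff_d2(lambda t: t*t, 0)  # exact value 1/3
c2 = coeff_d2(lambda t: t*t, 2)  # exact value 2/15
# reconstruction: t^2 = c0 * 1 * P_0(t) + c2 * 5 * P_2(t)
```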
Note that for all $s \in \mathbb{R}$ and $\varepsilon >0$, the function $f_{s,\varepsilon}$ is continuous, and therefore in $L_{w_d}^2([-1,1])$. On the other hand, $f_s \in L_{w_d}^2([-1,1])$ only when $s< \frac{d}{2}$, which would lead to certain technical complications in the proofs when $s\ge \frac{d}{2}$. At the same time $f_s \in L_{w_d}^1([-1,1])$ for all $s<d$, in other words, $K_s$ is integrable on the sphere, i.e. $\mathcal{I}_{s,d} < \infty$, for $s<d$ (this is the potential-theoretic case).
We shall need information about the behavior of the Gegenbauer coefficients of the Riesz kernels $f_s$, as well as of their approximations $f_{s,\varepsilon}$. We postpone a detailed analysis to the Appendix and only mention the most relevant facts here: for $0<s<d$, the coefficients $\widehat{f}_{s,\varepsilon} (n,d) $ and $\widehat{f}_{s} (n,d) $ are non-negative and decreasing, and $\widehat{f}_{s} (n,d) $ is of the order $n^{s-d}$ (Corollary \ref{cor:Riesz Gegen Coeff}).
We start with the case $-2 < s < \frac{d}{2}$ and will then explain the adjustments needed in the range $\frac{d}{2} \le s < d$. Starting with an arbitrary $\omega_{N,d} = \{ x_1,\dots, x_N\}$ and trivially estimating the minimum by the average, we observe that
$$ P_{K_s} (\omega_{N,d}) = \min_{x \in \mathbb{S}^d} \sum_{j=1}^{N} K_s(x, x_j) \le \sum_{j=1}^{N} \int_{\mathbb S^d} K_s (x, x_j) \, d\sigma_d (x) = \mathcal{I}_{s,d} N , $$
where we have used that due to rotational invariance the integral $\int_{\mathbb S^d} K_s (x, z) \, d\sigma_d (x)
= \mathcal{I}_{s,d} $ is independent of $z\in \mathbb S^d$.
Therefore, $$ \mathcal{I}_{s,d} - \frac{1}{N} P_{K_s} (\omega_{N,d})
= \max_{x\in \mathbb S^d} \bigg( \mathcal{I}_{s,d} - \frac{1}{N} \sum_{j=1}^{N} K_s(x, x_j) \bigg) \ge 0,$$ and we can thus estimate this expression from below by the $L^2$-average:
\begin{align*}
\mathcal{I}_{s,d} - \frac{1}{N} P_{K_s} (\omega_{N,d}) &= \max_{x\in \mathbb S^d} \bigg( \mathcal{I}_{s,d} - \frac{1}{N} \sum_{j=1}^{N} K_s(x, x_j) \bigg)\\
& \ge \bigg( \int_{\mathbb S^d} \bigg| \mathcal{I}_{s,d} - \frac{1}{N} \sum_{j=1}^{N} K_s(x, x_j) \bigg|^2 d\sigma_d (x) \bigg)^{1/2}\\
& = \bigg( \int_{\mathbb S^d} \bigg| \int_{\mathbb S^d} f_s (\langle x,y \rangle) \, d\sigma_d (y) - \frac{1}{N} \sum_{j=1}^{N} f_s(\langle x, x_j \rangle) \bigg|^2 d\sigma_d (x) \bigg)^{1/2}\\
& = D_{L^2, f_s} (\omega_{N,d}),
\end{align*}
where $D_{L^2, f} (\omega_{N,d})$ is the generalized $L^2$-discrepancy of $\omega_{N,d}$ with respect to the function $f$ which is studied in \cite{BD,BDM,BMV} and well-defined for $ f \in L^2_{w_d} ([-1,1])$. Observe the similarity of this notion to the classical $L^2$-spherical cap discrepancy \eqref{eq:L2cap}. Indeed, taking $f = {\bf{1}}_{[t,1]}$, one obtains the inner integral in the definition \eqref{eq:L2cap} of $ D_{L^2, \tiny \mbox{cap}}(\omega_{N,d})$.
It has been shown \cite[Theorem 4.2]{BD} that $$ D_{L^2, f} (\omega_{N,d}) \ge C_d \min_{1\le n \le c_d N^{1/d}} | \widehat{f} (n,d) | $$ for some constants $C_d$, $c_d >0$.
Since $f_s \in L^2_{w_d} ([-1,1])$ for $-2 < s<d/2$, and since the Gegenbauer coefficients $\widehat{f_s} (n,d)$, $n > 0$, are positive and decreasing (Corollary \ref{cor:Riesz Gegen Coeff}), we find that
$$ \mathcal{I}_{s,d}N - P_{K_s} (\omega_{N,d}) \ge N D_{L^2, f_s} (\omega_{N,d}) \ge C_d N \widehat{f}_{s}\Big( \big\lfloor c_d N^{1/d} \big\rfloor ,d \Big) \ge c_{s,d} N^{\frac{s}{d} },$$
where we have used the asymptotics of the Gegenbauer coefficients $\widehat{f_s} (n,d)$. Taking the supremum over $\omega_{N,d}$ proves the upper bound in Theorem \ref{thm:Singular Riesz Polarization} for $-2< s < d/2$. \\
Turning to the case $s\ge d/2$, we can repeat the argument above verbatim for the kernel $K_{s,\varepsilon } (x,y ) = f_{s,\varepsilon} (\langle x,y \rangle)$, using the fact that the Gegenbauer coefficients of $f_{s,\varepsilon} $ are positive and decreasing (Corollary \ref{cor:Riesz epsilon Gegen Coeff}) and the fact that $0\le f_{s,\varepsilon} (t) \le f_{s} (t)$, to find that
$$ \mathcal{I}_{s,d} N - P_{K_{s,\varepsilon}} (\omega_{N,d}) \ge C_d N \widehat{f}_{s,\varepsilon}\Big( \big\lfloor c_d N^{1/d} \big\rfloor ,d \Big) . $$
We then take the limit as $\varepsilon \rightarrow 0$, which is justified by Lemma \ref{lem:Limit for Polarization} and by the Lebesgue dominated convergence theorem, since $0\le f_{s,\varepsilon} (t) \le f_{s} (t) $.
Using the asymptotics of $\widehat{f_s} (n,d)$ from Corollary \ref{cor:Riesz Gegen Coeff}, one obtains
$$ \mathcal{I}_{s,d} N - P_{K_s} (\omega_{N,d}) \ge C_d N \widehat{f}_{s}\Big( \big\lfloor c_d N^{1/d} \big\rfloor ,d \Big) \ge c_{s,d} N^{\frac{s}{d} }, $$ which proves the required bound for the range $ d/2 \le s < d$.
\subsection{Proof of Theorem \ref{thm:Singular Riesz Polarization}: lower bound}\label{sec:lower}
We collect some known results from which the lower bound of Theorem \ref{thm:Singular Riesz Polarization} follows immediately for $d \ge 2$. The first relates the maximal polarization to the minimal energy.
\begin{proposition}[Proposition 14.1.1, \cite{BHS}]\label{prop:Relate Polarization and Energy LB}
For all $N \geq 2$ and kernels $K$, we have
\begin{equation}
\mathcal{P}_K(N) \geq \frac{\mathcal{E}_K(N+1)}{N+1} \geq \frac{\mathcal{E}_K(N)}{N-1}.
\end{equation}
\end{proposition}
Optimal second-order asymptotics for the discrete Riesz $s$-energy in the range $-2<s<d$ have been computed by various authors \cite{Brauchart, BHS12, KS, RSZ, wagner1, wagner2}, see also Theorems 6.4.5, 6.4.6, and 6.4.7 in \cite{BHS}. These bounds are as follows.
\begin{theorem}\label{thm:Riesz Energy Asymptotics}
For $d \geq 1$, $-2 < s < d$, $s \neq 0$, there exist positive constants $C_{s,d}$, $C'_{s,d}$ such that for $N \geq 2$,
\begin{align*}
\mathcal{I}_{s,d} N^2 - C'_{s,d} N^{1+\frac{s}{d}} & \le \mathcal{E}_{K_s}(N) \le \mathcal{I}_{s,d} N^2 - C_{s,d} N^{1+\frac{s}{d}}, \quad & s>0,\\
\mathcal{I}_{s,d} N^2 + C'_{s,d} N^{1+\frac{s}{d}} & \le \mathcal{E}_{K_s}(N) \le \mathcal{I}_{s,d} N^2 + C_{s,d} N^{1+\frac{s}{d}}, \quad & s<0.
\end{align*}
If $s = 0$, then
\begin{equation}
\mathcal{E}_{K_s}(N) = \mathcal{I}_{s,d} N^2 - \frac{1}{d}N \log(N) + \mathcal{O}(N).
\end{equation}
\end{theorem}
Combining Theorem \ref{thm:Riesz Energy Asymptotics} and Proposition \ref{prop:Relate Polarization and Energy LB}, we deduce the following bounds.
\begin{corollary}\label{cor:Polarization asymptotics lower bound}
For $d \in \mathbb{N}$, $-2 < s < d$, $s \neq 0$, and $N \in \mathbb{N}$,
\begin{equation}
\mathcal{P}_{K_s}(N) \geq \begin{cases}
\mathcal{I}_{s,d}N - C_{s,d}' 2^{\frac{s}{d}} N^{\frac{s}{d}} + \mathcal{I}_{s,d} & 0<s<d, \\
\mathcal{I}_{s,d} N + \mathcal{I}_{s,d} + C_{s,d}' 2^{\frac{s}{d}} N^{\frac{s}{d}} , & -2 <s < 0
\end{cases},
\end{equation}
where $C_{s,d}'$ is as in Theorem \ref{thm:Riesz Energy Asymptotics}.
If $s = 0$, then
\begin{equation}
\mathcal{P}_{K_0}(N) \geq \mathcal{I}_{0,d} N - \frac{1}{d} \log(N) + \mathcal{O}(1).
\end{equation}
\end{corollary}
Since the constant term is essential when $s<0$, this gives the lower bound of Theorem \ref{thm:Singular Riesz Polarization} for the case $d \ge 2$.
\subsection{Greedy energy: proof of Theorems \ref{thm:greedyenergy} and \ref{thm:Greedy with m starting points}}
\label{subsec:greedy}
The lower bounds in Theorem \ref{thm:greedyenergy} are just the general lower bounds for minimal Riesz energies presented in Theorem \ref{thm:Riesz Energy Asymptotics} above, so we concentrate on the upper bound of Theorem \ref{thm:Greedy with m starting points}.
\begin{proof}[Proof of Theorem \ref{thm:Greedy with m starting points}]
Let $\omega_{N,d}$ denote the set of the first $N$ points of the greedy sequence with respect to $K_s$ with $m$ initial points. Observe that, by construction, for $k \geq m$
$$ \sum_{j=1}^{k} K_s (x_{k+1}, x_j) = P_{K_s} (\omega_{k,d}).$$
Therefore the discrete energy of $\omega_{N+1,d}$ satisfies
\begin{align*}
E_{K_s} (\omega_{ N+1,d}) & = E_{K_s} (\omega_{ m,d}) + 2 \sum_{k=m}^{N} \sum_{j=1}^{k} K_s (x_{k+1}, x_j) \\
& = E_{K_s} (\omega_{ m,d}) + 2 \sum_{k=m}^{N} P_{K_s} (\omega_{k,d}) \\
& \leq E_{K_s} (\omega_{ m,d}) + 2 \sum_{k=m}^{N} \mathcal{P}_{K_s} (k).
\end{align*}
Using the upper bounds from Theorem \ref{thm:Singular Riesz Polarization} one obtains
\begin{align*}
E_{K_s} (\omega_{ N,d}) & \le E_{K_s} (\omega_{ m,d}) + 2 \sum_{k=m}^{N-1} \mathcal{P}_{K_s} (k) \\
& \leq E_{K_s} (\omega_{ m,d}) + 2 \sum_{k=m}^{N-1} \big( \mathcal I_{s,d} k - c_{s,d} k^{\frac{s}{d}} \big) \\
& = E_{K_s} (\omega_{ m,d}) + 2 \sum_{k=1}^{N-1} \Big(\mathcal I_{s,d} k - c_{s,d} k^{\frac{s}{d}} \Big) - 2 \sum_{k=1}^{m-1} \Big(\mathcal I_{s,d} k - c_{s,d} k^{\frac{s}{d}} \Big) \\
& = E_{K_s} (\omega_{ m,d}) + \mathcal{I}_{s,d} N (N-1) - \mathcal{I}_{s,d} m (m-1) - 2 c_{s,d} \Big( \sum_{k=1}^{N-1} k^{\frac{s}{d}} -\sum_{h=1}^{m-1} h^{\frac{s}{d}} \Big).
\end{align*}
For $M \geq 2$, we have
\begin{equation}
\sum_{k=1}^{M-1} k^{\frac{s}{d}} = \begin{cases}
\frac{d}{d+s} M^{1+ \frac{s}{d}} + \mathcal{O}(M^{\frac{s}{d}}), & s \ge 0,\\
\frac{d}{d+s} M^{1+ \frac{s}{d}} + \mathcal{O}(1), & -d<s<0,\\
\log(M) + \mathcal{O}(1), & s= - d,\\
\zeta(-\frac{s}{d}) + \mathcal{O}(M^{1+\frac{s}{d}}), & -2d <s <-d,
\end{cases}
\end{equation}
and since $N \geq m$, our claim now follows. Note that the case $s\le -d$ is only relevant when $d = 1$ given that $s \in (-2, d)$.
\end{proof}
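The partial-sum asymptotics used at the end of the preceding proof are elementary Euler--Maclaurin estimates. A quick numerical sketch of the case $-d < s < 0$ with $\alpha = s/d = -1/2$, where $\frac{d}{d+s} = \frac{1}{1+\alpha} = 2$ and the bounded remainder converges to $\zeta(1/2) \approx -1.4604$ (the experiment is ours):

```python
def partial_sum(alpha, M):
    # sum_{k=1}^{M-1} k^alpha
    return sum(k**alpha for k in range(1, M))

# for alpha = -1/2: partial_sum = 2*sqrt(M) + C + o(1), with C = zeta(1/2)
tail_small = partial_sum(-0.5, 10**4) - 2.0*(10**4)**0.5
tail_large = partial_sum(-0.5, 10**5) - 2.0*(10**5)**0.5
```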
\subsection{Proof of Corollary \ref{cor:pol}}\label{sec:cor}
Let $(x_n)_{n \in\mathbb{N}}$ be the greedy sequence of points on $\mathbb{S}^d$ with respect to $K_s$ as defined in \eqref{eq:greedy}.
Arguing as in the proof of Theorem \ref{thm:greedyenergy},
$$E_{K_s} (\omega_{ N+1,d}) = 2 \sum_{k=1}^{N} \sum_{j=1}^{k} K_s(x_{k+1}, x_j) = 2 \sum_{k=1}^{N} P_{K_s} (\omega_{k,d}).$$
We know from Theorem \ref{thm:greedyenergy} that
$$ E_{K_s}(\omega_{N,d}) \geq \mathcal{I}_{s,d} N^2 - C_{s,d} N^{1 +\frac{s}{d}}.$$
We also recall that
$$P_{K_s}(\omega_{N,d}) = \min_{x \in \mathbb{S}^d} \sum_{j=1}^{N} K_s(x, x_j) \leq \int_{\mathbb{S}^d} \sum_{j=1}^{N} K_s(x, x_j) \, d\sigma_d = N \cdot \mathcal{I}_{s,d}.$$
We can now introduce, for $X>0$ and $N \in \mathbb{N}$, the set
$$ A_N = \left\{ 1 \leq j \leq N: P_{K_s} (\omega_{j,d}) \leq j \cdot \mathcal{I}_{s,d} - X j^{\frac{s}{d}} \right\}.$$
Collecting all these estimates, we see, for some $\alpha_{s,d} > 0$,
\begin{align*}
\mathcal{I}_{s,d} N^2 - C_{s,d} N^{1 +\frac{s}{d}} &\leq E_{K_s}(\omega_{N,d}) = 2 \sum_{k=1}^{N} P_{K_s} (\omega_{k,d}) \\
& \le 2 \sum_{k =1 \atop k \notin A_N}^{N} k \cdot \mathcal I_{s,d} + 2 \sum_{k =1 \atop k \in A_N}^{N}( k \cdot \mathcal{I}_{s,d} - X k^{\frac{s}{d}} ) \\
& = 2 \sum_{k =1}^{N} k \cdot \mathcal{I}_{s,d} - 2 \sum_{k \in A_N} X k ^{\frac{s}{d}} \\
& \le 2 \sum_{k =1}^{N} k \cdot \mathcal{I}_{s,d} - 2 \sum_{k =1}^{\# A_N} X k ^{\frac{s}{d}} \\
& \le N^2 \mathcal{I}_{s,d} - \alpha_{s,d} X \left(\# A_N\right)^{1 + \frac{s}{d}}.
\end{align*}
From this we deduce that
$$ \# A_N \leq \left( \frac{C_{s,d}}{\alpha_{s,d} X} \right)^{\frac{1}{1+\frac{s}{d}}} \cdot N,$$
which is less than $\varepsilon N$ for $X$ large enough. This proves Corollary \ref{cor:pol}.
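The telescoping identity used at the start of this argument, namely that the energy of the first $N+1$ points of \emph{any} sequence equals twice the sum of the cumulative potentials, holds without any greediness assumption and is easy to verify directly (a sketch; helper names ours):

```python
import math, random

def sphere_point(rng):
    # uniform random point on S^2 via a normalized Gaussian vector
    while True:
        v = [rng.gauss(0.0, 1.0) for _ in range(3)]
        n = math.sqrt(sum(c*c for c in v))
        if n > 1e-12:
            return [c/n for c in v]

def k_s(x, y, s=-1.0):
    # Riesz kernel K_s(x,y) = sgn(s) |x-y|^{-s}
    d = math.sqrt(sum((a - b)**2 for a, b in zip(x, y)))
    return math.copysign(1.0, s)*d**(-s)

rng = random.Random(0)
pts = [sphere_point(rng) for _ in range(30)]
energy = sum(k_s(pts[i], pts[j]) for i in range(30)
             for j in range(30) if i != j)
telescoped = 2.0*sum(sum(k_s(pts[k+1], pts[j]) for j in range(k+1))
                     for k in range(29))
```

For a greedy sequence each inner sum is exactly the polarization of the preceding points, which is how the identity enters the proofs above.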
\section{$L^2$-discrepancy: Proof of Theorem \ref{thm:L2discrepancy}}
We turn to showing our main result on the $L^2$-spherical cap discrepancy of the greedy sequence.
The relevance of the greedy construction to discrepancy is based on the following classical result \cite{stol} that relates the sum of pairwise Euclidean distances (in other words, the Riesz energy with $s = -1$) to the $L^2$-discrepancy.
\begin{theorem}[Stolarsky Invariance Principle]
\label{thm:stolarsky}
For $\omega_{N,d} = \left\{x_1, \dots, x_N \right\} \subset \mathbb{S}^d$,
\begin{align*}
(D_{L^2, {\tiny \mbox{cap}}}(\omega_{N,d}))^2 & = c_d \left( \int_{\mathbb{S}^d} \int_{\mathbb{S}^d} \| x-y\| d\sigma_d(x) d\sigma_d(y) - \frac{1}{N^2} \sum_{i,j=1}^{N} \|x_i - x_j\| \right)\\
& = c_d \, \bigg( \frac{1}{N^2} E_{K_{-1}} (\omega_{N,d}) - \mathcal I_{-1,d} \bigg),
\end{align*}
where the constant $c_d$ is given by
$$ c_d = \frac{1}{d} \frac{\Gamma((d+1)/2)}{\sqrt{\pi} \Gamma(d/2)} \sim \frac{1}{\sqrt{2\pi d}} \quad \mbox{as}~d \rightarrow \infty.$$
\end{theorem}
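For a single point the identity can be checked in closed form: the energy term vanishes, and a direct computation of the cap discrepancy gives $\int_{-1}^1 \frac{1-t^2}{4}\,dt = \frac13 = \frac14 \cdot \frac43$, in agreement with the value $c_2 = \frac14$ produced by the formula for $c_d$ above. A sketch of this computation, assuming the $t$-integral in \eqref{eq:L2cap} is taken unnormalized over $[-1,1]$ (quadrature ours):

```python
def one_point_cap_discrepancy_sq(m=20000):
    # single point x_1 on S^2; sigma(C(z,t)) = (1-t)/2, and by symmetry
    # u = <x_1, z> is uniform on [-1,1] (weight du/2); the z-average of
    # (sigma(cap) - indicator)^2 is piecewise constant in u, hence exact:
    # inner(t) = (1/2)[(1-t)((1+t)/2)^2 + (1+t)((1-t)/2)^2] = (1-t^2)/4
    h = 2.0/m
    total = 0.0
    for j in range(m):
        t = -1.0 + (j + 0.5)*h
        inner = 0.5*((1.0 - t)*((1.0 + t)/2.0)**2
                     + (1.0 + t)*((1.0 - t)/2.0)**2)
        total += inner*h
    return total

d2 = one_point_cap_discrepancy_sq()  # analytic value 1/3
rhs = 0.25*(4.0/3.0)                 # c_2 * (integral term - 0), c_2 = 1/4
```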
Note that the construction of the greedy sequence in \eqref{eq:greedy} when $s=-1$ aims to maximize the pairwise sum of Euclidean distances at every step and thus minimize the $L^2$-discrepancy at every step by the Stolarsky invariance principle. As will become clear in our proof of the $N^{-1/2}$ upper bound below, it is not of tremendous importance that one picks the maximum to show such an upper bound -- one really only cares about having a large value in the sum. Obviously, the larger the value the better (and thus the maximum is optimal at that step) but taking values close to the maximum should suffice (and does in practice).
\label{sec:thm1}
\begin{proof}[Proof of Theorem \ref{thm:L2discrepancy}]
Let us assume $\left\{x_1, \dots, x_m \right\} \subset \mathbb{S}^d$ is given and, for all $n \geq m$,
$$ \sum_{i=1}^{n} \| x_{n+1}- x_i\| \geq n \int_{\mathbb{S}^d} \int_{\mathbb{S}^d} \|x-y\| d\sigma_d(x) d\sigma_d(y).$$
We have the trivial bound
$$ \sum_{i,j=1}^{m} \|x_i - x_j\| \geq 0$$
and, for $n \geq m$, that
\begin{align*}
\sum_{i,j=1}^{n+1} \|x_i - x_j\| &= \sum_{i,j=1}^{n} \|x_i - x_j\| + 2\sum_{i=1}^{n} \|x_{n+1} - x_i\| \\
&\geq \sum_{i,j=1}^{n} \|x_i - x_j\| + 2n \int_{\mathbb{S}^d} \int_{\mathbb{S}^d} \|x-y\| d\sigma_d(x) d\sigma_d(y).
\end{align*}
Iterating this inequality, we infer, for all $n > m$, that
$$ \sum_{i,j=1}^{n} \|x_i - x_j\| \geq 2 \left(\sum_{k=m}^{n-1} k \right)\int_{\mathbb{S}^d} \int_{\mathbb{S}^d} \|x-y\| d\sigma_d(x) d\sigma_d(y).$$
This now implies that for the first $N$ elements
\begin{align*}
(D_{L^2, {\tiny \mbox{cap}}}(\omega_{N,d}))^2 &= c_d \left( \int_{\mathbb{S}^d} \int_{\mathbb{S}^d} \| x-y\| d\sigma_d(x) d\sigma_d(y) - \frac{1}{N^2} \sum_{i,j=1}^{N} \|x_i - x_j\| \right)\\
&\leq c_d \left(\int_{\mathbb{S}^d} \int_{\mathbb{S}^d} \|x-y\| d\sigma_d(x) d\sigma_d(y) \right) \left( 1 - \frac{2}{N^2} \sum_{k=m}^{N-1} k\right)
\end{align*}
from which we deduce that
$$ D_{L^2, {\tiny \mbox{cap}}}(\omega_{N,d}) \leq \sqrt{c_d} \left(\int_{\mathbb{S}^d} \int_{\mathbb{S}^d} \|x-y\| d\sigma_d(x) d\sigma_d(y) \right)^{\frac{1}{2}}\left( \frac{1}{N} + \frac{m^2-m}{N^2} \right)^{1/2}.$$
In the case of $d=2$, we have
$$ c_d = \frac{1}{4} \qquad \mbox{as well as} \qquad -\mathcal{I}_{-1,2} = \int_{\mathbb{S}^2} \int_{\mathbb{S}^2} \|x-y\| d\sigma_d(x) d\sigma_d(y) = \frac{4}{3}$$
from which we deduce
$$ D_{L^2, {\tiny \mbox{cap}}}(\omega_{N,d}) \leq \frac{1}{\sqrt{3}} \left( \frac{1}{N} + \frac{m^2 - m }{N^2} \right)^{1/2}.$$
\end{proof}
Note that the result is sharp (up to lower order terms) for sequences satisfying
$$ \sum_{i=1}^{n} \| x_{n+1}- x_i\| = n \int_{\mathbb{S}^2} \int_{\mathbb{S}^2} \|x-y\| d\sigma_d(x) d\sigma_d(y).$$
It is clear that many such sequences exist: all involved functions are continuous so there are always points where they attain their average value.
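The mechanism of the proof can be simulated: as long as each added point has potential at least the spherical average $\frac{4n}{3}$ (which the exact greedy point always does), the pairwise-sum bound, and hence the discrepancy bound via the Stolarsky identity with $c_2 = \frac14$ from the formula for $c_d$ above, holds deterministically. A sketch in which the averaging hypothesis is enforced by rejection sampling (the candidate-sampling scheme is ours and is not the exact greedy construction):

```python
import math, random

def sphere_point(rng):
    # uniform random point on S^2 via a normalized Gaussian vector
    while True:
        v = [rng.gauss(0.0, 1.0) for _ in range(3)]
        n = math.sqrt(sum(c*c for c in v))
        if n > 1e-12:
            return [c/n for c in v]

def dist(p, q):
    return math.sqrt(sum((a - b)**2 for a, b in zip(p, q)))

rng = random.Random(1)
pts = [sphere_point(rng)]
N = 60
while len(pts) < N:
    n = len(pts)
    while True:  # resample until the hypothesis sum >= 4n/3 is satisfied
        cand = max((sphere_point(rng) for _ in range(200)),
                   key=lambda x: sum(dist(x, p) for p in pts))
        if sum(dist(cand, p) for p in pts) >= 4.0*n/3.0:
            break
    pts.append(cand)

pairsum = sum(dist(p, q) for p in pts for q in pts)
disc_sq = 0.25*(4.0/3.0 - pairsum/N**2)  # Stolarsky right-hand side, c_2 = 1/4
```

By construction the pairwise sum is at least $\frac{4}{3}N(N-1)$, which is exactly the inequality that yields the $\mathcal O(N^{-1/2})$ discrepancy bound.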
\begin{remark}
The above result, in a different form, has more or less appeared in the literature previously: for example, the result follows directly from Theorem 3.1 (and Remark 3.2) in \cite{LopM}. The purpose of reproving it here is to state the result in the form of an $L^2$-discrepancy bound and to give more explicit constants.
\end{remark}
\subsection{Proof of Proposition \ref{prop:disc}}\label{sec:prop:disc} We conclude with a proof of Proposition \ref{prop:disc}.
To establish a nontrivial lower bound, we show that the average growth over any pair of consecutive steps is bounded from below.
Suppose $\left\{x_1, \dots, x_n\right\} \subset \mathbb{S}^2$.
Recall that $-\mathcal{I}_{-1,2} = \frac{4}{3}$.
Fix $X \ge 0$ such that
$$ \max_{x \in \mathbb{S}^2} \sum_{i=1}^{n} \| x - x_i\| = \frac{4n}{3} + X.$$
Assume that $X < \frac{4n}{3}$. In this case it is easy to see that
$$ \sigma_d \left( \left\{x \in \mathbb{S}^2: \sum_{i=1}^{n} \| x - x_i\| < \frac{4n}{3} - X \right\}\right) < \frac{1}{2}.$$
Indeed, the average value of the function $ \sum_{i=1}^{n} \| x - x_i\| $ is $4 n/3$
and its maximum is $ 4n/3 + X$, therefore, denoting the set above by $\Omega$, we have
$$ \frac{4n}{3} < \sigma_d(\Omega) \left( \frac{4n}{3} - X \right) + \big( 1 -\sigma_d(\Omega) \big) \left(\frac{4n}{3}+X\right),$$
which implies that $\sigma_d (\Omega) < 1/2$.
Thus every hemisphere contains points $x$ satisfying the inequality $\sum_{i=1}^{n} \| x - x_i\| \ge 4n/3 - X$.
By restricting $x$ to the hemisphere centered at $-x_{n+1}$ we conclude that
\begin{align*}
\max_{x \in \mathbb{S}^2} \sum_{i=1}^{n+1} \| x - x_i\| & = \max_{x \in \mathbb{S}^2} \big( \|x - x_{n+1} \| + \sum_{i=1}^{n} \| x - x_i\| \big) \\
& \geq \sqrt{2} + \frac{4n}{3} - X = \frac{4(n+1)}{3} + \sqrt{2} - \frac{4}{3} - X.
\end{align*}
Combining this with the definition of $X$, we see that
\begin{equation}\label{eq:sumof2}
\frac{1}{2}\left( \max_{x \in \mathbb{S}^2} \sum_{i=1}^{n} \| x - x_i\| - \frac{4n}{3} + \max_{x \in \mathbb{S}^2} \sum_{i=1}^{n+1} \| x - x_i\| - \frac{4(n+1)}{3} \right) \geq \frac{1}{\sqrt{2}} - \frac{2}{3}.
\end{equation}
If $X \ge 4n/3$, one sees immediately that \eqref{eq:sumof2} holds with the right-hand side replaced by $4n/3 - 1/6 >1$. Therefore, the sum $\sum_{n=1}^{N-1} \left( \max_{x \in \mathbb{S}^2} \sum_{i=1}^{n} \| x - x_i\| - \frac{4 n}{3} \right)$ in \eqref{eq:averagepol} grows at least linearly in $N$, which leads to the lower bound in Proposition \ref{prop:disc}.
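The geometric step above rests on the fact that every point $x$ of the hemisphere centered at $-x_{n+1}$ satisfies $\langle x, x_{n+1}\rangle \le 0$, hence $\|x - x_{n+1}\| = \sqrt{2 - 2\langle x, x_{n+1}\rangle} \ge \sqrt{2}$. A minimal check (the sample of hemisphere points is ours):

```python
import math

def chordsq(x, y):
    # squared Euclidean distance; equals 2 - 2<x,y> for unit vectors
    return sum((a - b)**2 for a, b in zip(x, y))

# hemisphere centered at -y: all points with nonpositive inner product with y
y = [0.0, 0.0, 1.0]
hemisphere = [[math.sin(t), 0.0, -abs(math.cos(t))]
              for t in [0.2*k for k in range(16)]]
```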
The upper bound follows from the Stolarsky principle which implies
$$ \int_{\mathbb{S}^d} \int_{\mathbb{S}^d} \| x-y\| d\sigma_d(x) d\sigma_d(y) - \frac{1}{N^2} \sum_{i,j=1}^{N} \|x_i - x_j\| \geq 0$$
and thus, for any set of points $\left\{x_1, \dots, x_N \right\} \subset \mathbb{S}^2$,
$$ \sum_{i,j=1}^{N} \|x_i - x_j\| \leq \frac{4N^2}{3}.$$
For a greedy sequence, we can write
\begin{align*}
\sum_{i,j=1}^{N} \|x_i - x_j\| &= 2 \sum_{n=1}^{N-1} \left[ \frac{4n}{3} + \left(\max_{x \in \mathbb{S}^2} \sum_{i=1}^{n} \| x - x_i\| - \frac{4n}{3} \right) \right] \\
&= \frac{4}{3} (N-1)N+ 2 \sum_{n=1}^{N-1} \left(\max_{x \in \mathbb{S}^2} \sum_{i=1}^{n} \| x - x_i\| - \frac{4n}{3} \right)
\end{align*}
and thus
$$ \frac{1}{N}\sum_{n=1}^{N-1} \left(\max_{x \in \mathbb{S}^2} \sum_{i=1}^{n} \| x - x_i\| - \frac{4n}{3} \right) \leq \frac{2}{3}.$$
Note that this upper bound (with a worse constant) could be obtained directly from the lower bound in \eqref{eq:opt_polarization1} of Theorem \ref{thm:Singular Riesz Polarization}. We also remark that the same arguments work in higher dimensions (with different constants).
\section{Numerical Examples and Comments}
\label{sec:numerics}
This section contains some basic numerical examples and general comments. First we note that, in the greedy construction, finding the exact point
$$x_{n+1} = \arg\max_{x \in \mathbb{S}^d} \sum_{i=1}^{n} \| x- x_i\|$$
is computationally nontrivial.
\begin{center}
\begin{figure}
\caption{An example of a greedy point set with $N=500$ points on $\mathbb{S}^2$.}
\label{fig:points}
\end{figure}
\end{center}
If the success of the greedy construction were to depend very strongly on finding exact maxima in each step, it would not be a very useful construction to begin with (indeed, as also indicated by the proof of Theorem \ref{thm:L2discrepancy}, luckily this does not seem to be the case). Numerical experiments suggest the exact opposite: these types of sequences tend to be incredibly robust and seem to lead to good results even if adversarial points are sometimes added manually. Throughout this section, we consider approximate sequences obtained as follows: given $\left\{x_1, \dots, x_N\right\} \subset \mathbb{S}^d$, we sample 100 random points and add the one maximizing the sum of distances among those.
Figure \ref{fig:points} shows an example of a set of $N=500$ points on $\mathbb{S}^2$ obtained from a single individual point. We observe that the sequence looks somewhat random but avoids clusters of points more so than an actually random sequence would.
\begin{center}
\begin{figure}
\caption{$L^2$-discrepancy of a greedy sequence on $\mathbb{S}^2$.}
\label{fig:S2}
\end{figure}
\end{center}
When plotting the $L^2$ spherical cap discrepancy of this sequence, one observes, numerically, that the discrepancy seems to be relatively close to $\sim 0.5 \cdot N^{-3/4}$. Note that our construction of the sequence was based on the approximation by random points, and it stands to reason that Figure \ref{fig:S2} serves as an upper bound on the true behavior of a greedy sequence.
The greedy method is agnostic to what happened in the past and works well with any arbitrary initial set. We illustrate this with a simple example where we first take 250 points uniformly at random and then compute another 250 points greedily.
\begin{center}
\begin{figure}
\caption{$L^2$-discrepancy of a greedy sequence comprised of 250 random points and then followed by 250 points using the greedy construction. Left: $N^{1/2} \cdot D_{L^2, \tiny \mbox{cap}}(\omega_{N})$; right: $N^{3/4} \cdot D_{L^2, \tiny \mbox{cap}}(\omega_{N})$.}
\end{figure}
\end{center}
The result is striking: for the first 250 elements, we see that $N^{1/2} \cdot D_{L^2, \tiny \mbox{cap}}(\omega_{N})$ is approximately constant (as expected for random points). After that there is a pronounced decay. Perhaps even more striking is that $N^{3/4} \cdot D_{L^2, \tiny \mbox{cap}}(\omega_{N})$ is first increasing (roughly at rate $\sim N^{1/4}$ as we expect) and then quickly decreases and returns to a constant slightly above $1/2$ (see also Fig. 2).
\begin{center}
\begin{figure}
\caption{$L^2$-discrepancy of a greedy sequence on $\mathbb{S}^3$.}
\label{fig:s3}
\end{figure}
\end{center}
One could wonder about higher dimensions: one would expect optimal sequences to behave as $D_{L^2, \tiny \mbox{cap}}(\omega_{N,3}) \sim N^{-2/3}$ on $\mathbb{S}^3$ as well as
$D_{L^2, \tiny \mbox{cap}}(\omega_{N,4}) \sim N^{ -5/8}$ on $\mathbb{S}^4$.
The same basic numerical experiment leads to results compatible with this interpretation. We emphasize that, due to increasing computational cost, these experiments were carried out only for rather small values of $N \leq 500$. At this scale, small powers of $N$ or logarithms are not always easy to detect. Moreover, the effectiveness of our numerical procedure of picking 100 points at random and then adding the one with the largest distance depends on the profile of the function: if large deviations occur on sets of small measure (an effect that could conceivably become more pronounced in higher dimensions), then this method will lose effectiveness. Regardless, we believe these preliminary results to be rather interesting and hope they will inspire subsequent work on the regularity of greedy sequences on $\mathbb{S}^d$.
\begin{center}
\begin{figure}
\caption{$L^2$-discrepancy of a greedy sequence on $\mathbb{S}^4$.}
\label{fig:S4}
\end{figure}
\end{center}
We conclude with a conjecture which collects our observations in this section and speculates that the greedy sequence has nearly optimal spherical cap discrepancy.
\begin{conjecture}
Let $\omega_{N,d} \subseteq \mathbb{S}^d$ be the first $N$ elements of the greedy sequence with respect to $K_{-1}$ as defined in \eqref{eq:greedy}. Then, for some $c = c(d) \ge 0$,
$$D_{L^2, \tiny \mbox{cap}}(\omega_{N,d}) = \mathcal O \big( N^{-\frac{1}{2}-\frac{1}{2d}} \log^c(N) \big).$$
\end{conjecture}
\section{Appendix}\label{sec:Appendix}
\subsection{Gegenbauer coefficients}
In this section we compute the Gegenbauer coefficients of the functions $f_s$ (the Riesz kernel) and $f_{s,\varepsilon}$ (its approximation) which were defined in \S\ref{sec:upper}. Various computations for such coefficients for some ranges of $s$ can be found scattered in the literature (e.g. \cite{Brauchart,DG}). However, since we need finer properties of these coefficients (asymptotics, monotonicity, positivity) in the full range, we present detailed computations here.
We proceed first with finding the Gegenbauer coefficients for the functions $f_{s, \varepsilon}$.
\begin{lemma}\label{lem:Riesz epsilon Gegen Coeff}
For $s > 0$ and $n \in \mathbb{N}$,
\begin{align*}
\widehat{f_{s, \varepsilon}}(n,d) &= \frac{\Gamma( \frac{d+1}{2}) \Gamma(\frac{s}{2} +n)}{\Gamma( n + \frac{d+1}{2}) \Gamma( \frac{s}{2})} (2+ \varepsilon)^{-n - \frac{s}{2}} \Hypergeom21
{ \frac{2n+s}{4}, \frac{2n+2+s}{4}}
{n + \frac{d+1}{2}}{\bigg( \frac{2}{2+\varepsilon} \bigg)^2} \\
& = \frac{\Gamma( \frac{d+1}{2})}{\Gamma(\frac{s}{2})} (2+\varepsilon)^{-n - \frac{s}{2}} \sum_{k=0}^{\infty} \frac{\Gamma( n + 2k + \frac{s}{2})}{k! \Gamma( n + k + \frac{d+1}{2})} (2 + \varepsilon)^{-2k},
\end{align*}
where $ \sideset{_2}{_1}\HyperF$ is the ordinary hypergeometric function.
\end{lemma}
\begin{proof}
On multiple occasions, we will use the Legendre duplication formula
\begin{equation}
\sqrt{\pi} \Gamma(2a) = 2^{2a-1} \Gamma(a) \Gamma \bigg( a + \frac{1}{2} \bigg), \quad a > 0.
\end{equation}
We shall also use the Rodrigues formula
\begin{equation}\label{eq:Rodrigues}
C_n^{\frac{d-1}{2}}(t) = \frac{(-2)^n \Gamma(n + \frac{d-1}{2}) \Gamma(n+d-1)}{n! \Gamma(\frac{d-1}{2}) \Gamma(2n+ d-1)} (1-t^2)^{\frac{2-d}{2}} \bigg( \frac{d}{dt} \bigg)^n ( 1-t^2)^{n+ \frac{d-2}{2}}.
\end{equation}
Using \eqref{eq:Rodrigues} with \eqref{eq:GegenCoeff} and integrating by parts $n$ times, we have, for $0 < s < d$ and $\varepsilon > 0$,
\begin{align*}
\widehat{f_{s, \varepsilon}}(n,d) =& \hspace{0.1cm} 2^{-\frac{s}{2}} \frac{\Gamma(\frac{d+1}{2}) n! \Gamma(d-1)}{\sqrt{\pi} \Gamma( \frac{d}{2}) \Gamma(n+d-1)} \frac{(-2)^n \Gamma(n + \frac{d-1}{2}) \Gamma(n+d-1)}{n! \Gamma(\frac{d-1}{2}) \Gamma(2n+ d-1)}\\ &\times \int_{-1}^{1} (1 + \frac{\varepsilon}{2}-t)^{- \frac{s}{2}} \big( \frac{d}{dt} \big)^n ( 1-t^2)^{n+ \frac{d-2}{2}} dt \\
= & \hspace{0.1cm} 2^{n-\frac{s}{2}} \frac{\Gamma(\frac{d+1}{2}) \Gamma(d-1)}{\sqrt{\pi} \Gamma( \frac{d}{2}) } \frac{ \Gamma(n + \frac{d-1}{2}) \Gamma(\frac{s}{2}+n)}{ \Gamma(\frac{d-1}{2}) \Gamma(2n+ d-1) \Gamma(\frac{s}{2})} \\ & \times \int_{-1}^{1} (1 + \frac{\varepsilon}{2}-t)^{- \frac{s}{2}-n } ( 1-t^2)^{n+ \frac{d-2}{2}} dt \\
= & \hspace{0.1cm} 2^{n-\frac{s}{2}} \frac{\Gamma(\frac{d+1}{2}) \Gamma(d-1)}{\sqrt{\pi} \Gamma( \frac{d}{2}) } \frac{ \Gamma(n + \frac{d-1}{2}) \Gamma(\frac{s}{2}+n)}{ \Gamma(\frac{d-1}{2}) \Gamma(2n+ d-1) \Gamma(\frac{s}{2})} \Big( \frac{2}{2+\varepsilon} \Big)^{n + \frac{s}{2}} \sqrt{\pi} \\
& \quad \times \frac{\Gamma( n + \frac{d}{2})}{\Gamma( n + \frac{d+1}{2})} \Hypergeom21
{ \frac{2n+s}{4}, \frac{2n+2+s}{4}}
{n + \frac{d+1}{2}}{\bigg( \frac{2}{2+\varepsilon} \bigg)^2} \\
= & \hspace{0.1cm} \frac{\Gamma( \frac{d+1}{2}) \Gamma(\frac{s}{2} +n)}{\Gamma( n + \frac{d+1}{2}) \Gamma( \frac{s}{2})} (2+ \varepsilon)^{-n - \frac{s}{2}} \Hypergeom21
{ \frac{2n+s}{4}, \frac{2n+2+s}{4}}
{n + \frac{d+1}{2}}{\bigg( \frac{2}{2+\varepsilon} \bigg)^2},
\end{align*}
giving us the first expression in the claim. For the second one,
\begin{align*}
\widehat{f_{s, \varepsilon}}(n,d) = & \hspace{0.1cm} \frac{\Gamma( \frac{d+1}{2}) \Gamma(\frac{s}{2} +n)}{\Gamma( n + \frac{d+1}{2}) \Gamma( \frac{s}{2})} (2+ \varepsilon)^{-n - \frac{s}{2}} \Hypergeom21
{ \frac{2n+s}{4}, \frac{2n+2+s}{4}}
{n + \frac{d+1}{2}}{\bigg( \frac{2}{2+\varepsilon} \bigg)^2}\\
= & \hspace{0.1cm} \frac{\Gamma( \frac{d+1}{2}) \Gamma(\frac{s}{2} +n)}{\Gamma( n + \frac{d+1}{2}) \Gamma( \frac{s}{2})} (2+ \varepsilon)^{-n - \frac{s}{2}} \\ & \times \sum_{k=0}^{\infty} \frac{\Gamma( \frac{n}{2} + \frac{s}{4} + k) \Gamma( \frac{n+1}{2} + \frac{s}{4} + k) \Gamma(n + \frac{d+1}{2})}{k! \Gamma( \frac{n}{2} + \frac{s}{4}) \Gamma( \frac{n+1}{2} + \frac{s}{4}) \Gamma(n + \frac{d+1}{2} + k)} \bigg( \frac{2}{2+\varepsilon} \bigg)^{2k}\\
= & \hspace{0.1cm} 2^{n + \frac{s}{2}-1} \frac{\Gamma( \frac{d+1}{2})}{\Gamma( \frac{s}{2}) \sqrt{\pi}} (2+ \varepsilon)^{-n - \frac{s}{2}} \sum_{k=0}^{\infty} \frac{\Gamma( \frac{n}{2} + \frac{s}{4} + k) \Gamma( \frac{n+1}{2} + \frac{s}{4} + k)}{k! \Gamma(n + \frac{d+1}{2} + k)} \bigg( \frac{2}{2+\varepsilon} \bigg)^{2k}\\
= & \hspace{0.1cm} \frac{\Gamma( \frac{d+1}{2})}{\Gamma(\frac{s}{2})} (2+\varepsilon)^{-n - \frac{s}{2}} \sum_{k=0}^{\infty} \frac{\Gamma( n + 2k + \frac{s}{2})}{k! \Gamma( n + k + \frac{d+1}{2})} (2 + \varepsilon)^{-2k}.
\end{align*}
\end{proof}
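The equality of the two expressions in the lemma, which rests on the duplication formula, can also be checked numerically. The following sketch (our own; the function names are hypothetical) sums both series directly, using term ratios to avoid gamma-function overflow:

```python
from math import gamma

def coeff_hyp(n, d, s, eps, K=300):
    # First form of the lemma: prefactor times the 2F1 series, summed directly.
    a, b, c = (2*n + s)/4, (2*n + 2 + s)/4, n + (d + 1)/2
    z = (2/(2 + eps))**2
    term, total = 1.0, 1.0
    for k in range(K):
        term *= (a + k)*(b + k)/((c + k)*(k + 1)) * z
        total += term
    pref = (gamma((d+1)/2)*gamma(s/2 + n)) / (gamma(n + (d+1)/2)*gamma(s/2))
    return pref * (2 + eps)**(-n - s/2) * total

def coeff_series(n, d, s, eps, K=300):
    # Second form: the explicit series obtained via the duplication formula.
    pref = gamma((d+1)/2)/gamma(s/2) * (2 + eps)**(-n - s/2)
    t = gamma(n + s/2)/gamma(n + (d+1)/2)      # k = 0 term
    total = t
    for k in range(K - 1):
        # ratio of consecutive terms, computed without large gamma values
        t *= (n + 2*k + s/2)*(n + 2*k + 1 + s/2) \
             / ((k + 1)*(n + k + (d + 1)/2)) * (2 + eps)**(-2)
        total += t
    return pref * total
```

Both series converge geometrically (the term ratio tends to $4/(2+\varepsilon)^2 < 1$), so a few hundred terms suffice for moderate $\varepsilon$.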
\begin{corollary}\label{cor:Riesz epsilon Gegen Coeff}
For $0 < s < d$ and $n \in \mathbb{N}_0$, the Gegenbauer coefficients $\widehat{f_{s, \varepsilon}}(n,d)$ are positive and decreasing in $n$.
\end{corollary}
\begin{proof}
For $0 < s < d$ and $n \in \mathbb{N}_0$, we see that all the summands of
\begin{equation}\label{eq:Gegen Epsilon summands}
\sum_{k=0}^{\infty} \frac{\Gamma( n + 2k + \frac{s}{2})}{k! \Gamma( n + k + \frac{d+1}{2})} (2 + \varepsilon)^{-2k-n-\frac{s}{2}}
\end{equation}
are positive, so $\widehat{f_{s, \varepsilon}}(n,d) > 0$. We also see that for $k \in \mathbb{N}_0$,
\begin{equation*}
\frac{\frac{\Gamma( n + 2k + \frac{s}{2})}{ k! \Gamma( n + k + \frac{d+1}{2})} (2+\varepsilon)^{-n - \frac{s}{2}-2k}}{\frac{\Gamma( (n+1) + 2k + \frac{s}{2})}{ k! \Gamma( (n+1) + k + \frac{d+1}{2})} (2+\varepsilon)^{-(n+1) - \frac{s}{2}-2k}} = \frac{n+k + \frac{d+1}{2}}{n+2k+\frac{s}{2}} (2+\varepsilon) > 1
\end{equation*}
so the summands in \eqref{eq:Gegen Epsilon summands}
are decreasing as a function of $n$. Thus, $ \widehat{f_{s, \varepsilon}}(n,d)$ is indeed a decreasing function in $n$, for $0 < s < d$ and any $\varepsilon > 0$.
\end{proof}
We now consider the Gegenbauer coefficients of the Riesz kernels themselves.
\begin{lemma}\label{lem:Riesz Gegen Coeff}
For $-2< s < d$, $n \in \mathbb{N}$
\begin{equation}
\widehat{f_s}(n,d) = \begin{cases}
\sgn(s) 2^{d-s-1} \frac{\Gamma(\frac{d+1}{2}) \Gamma( \frac{d-s}{2}) }{\sqrt{\pi} \Gamma(\frac{s}{2})} \frac{\Gamma(n+ \frac{s}{2})}{ \Gamma( n+d-\frac{s}{2})} & s \neq 0 \\
2^{d-2} \frac{\Gamma(\frac{d+1}{2}) \Gamma( \frac{d}{2}) }{\sqrt{\pi} } \frac{\Gamma(n)}{ \Gamma( n+d)} & s = 0
\end{cases}.
\end{equation}
\end{lemma}
\begin{proof}
We again use the Rodrigues formula along with integration by parts to find, for $s \in (-2,0) \cup (0,d)$, and $n \in \mathbb{N}_0$,
\begin{align*}
\widehat{f_s}(n,d) & = \sgn(s) 2^{-\frac{s}{2}} \frac{\Gamma(\frac{d+1}{2}) n! \Gamma(d-1)}{\sqrt{\pi} \Gamma( \frac{d}{2}) \Gamma(n+d-1)} \frac{(-2)^n \Gamma(n + \frac{d-1}{2}) \Gamma(n+d-1)}{n! \Gamma(\frac{d-1}{2}) \Gamma(2n+ d-1)} \\
& \quad \times \int_{-1}^{1} (1-t)^{- \frac{s}{2}} \big( \frac{d}{dt} \big)^n ( 1-t^2)^{n+ \frac{d-2}{2}} dt \\
& = \sgn(s) 2^{n-\frac{s}{2}} \frac{\Gamma(\frac{d+1}{2}) \Gamma(d-1)}{\sqrt{\pi} \Gamma( \frac{d}{2}) } \frac{ \Gamma(n + \frac{d-1}{2}) \Gamma(\frac{s}{2}+n)}{ \Gamma(\frac{d-1}{2}) \Gamma(2n+ d-1) \Gamma(\frac{s}{2})}\\
& \quad \times \int_{-1}^{1} (1-t)^{- \frac{s}{2}-n } ( 1-t^2)^{n+ \frac{d-2}{2}} dt \\
& = \sgn(s) 2^{n-\frac{s}{2}} \frac{\Gamma(\frac{d+1}{2}) \Gamma(d-1)}{\sqrt{\pi} \Gamma( \frac{d}{2}) } \frac{ \Gamma(n + \frac{d-1}{2}) \Gamma(\frac{s}{2}+n)}{ \Gamma(\frac{d-1}{2}) \Gamma(2n+ d-1) \Gamma(\frac{s}{2})}\\
& \quad \times \int_{-1}^{1} (1-t)^{\frac{d-s-2}{2} } ( 1+t)^{n+ \frac{d-2}{2}} dt \\
& = \sgn(s) 2^{n-\frac{s}{2}} \frac{\Gamma(\frac{d+1}{2}) \Gamma(d-1)}{\sqrt{\pi} \Gamma( \frac{d}{2}) } \frac{ \Gamma(n + \frac{d-1}{2}) \Gamma(\frac{s}{2}+n)}{ \Gamma(\frac{d-1}{2}) \Gamma(2n+ d-1) \Gamma(\frac{s}{2})} \\
& \quad \times\frac{2^{n+d-\frac{s}{2}-1} \Gamma( \frac{d-s}{2}) \Gamma( n + \frac{d}{2})}{(n+d-\frac{s}{2}-1) \Gamma( n+d-\frac{s}{2}-1)}\\
& = \sgn(s) 2^{d-s-1} \frac{\Gamma(\frac{d+1}{2}) }{\sqrt{\pi} } \frac{ \Gamma(\frac{s}{2}+n)}{ \Gamma(\frac{s}{2})} \frac{\Gamma( \frac{d-s}{2}) }{ \Gamma( n+d-\frac{s}{2})}.
\end{align*}
For $s=0$ and $n \geq 1$, we have a similar result:
\begin{align*}
\widehat{f_s}(n,d) & = - \frac{1}{2} \frac{\Gamma(\frac{d+1}{2}) n! \Gamma(d-1)}{\sqrt{\pi} \Gamma( \frac{d}{2}) \Gamma(n+d-1)} \frac{(-2)^n \Gamma(n + \frac{d-1}{2}) \Gamma(n+d-1)}{n! \Gamma(\frac{d-1}{2}) \Gamma(2n+ d-1)} \\
& \quad \times\int_{-1}^{1} \log(1-t) \big( \frac{d}{dt} \big)^n ( 1-t^2)^{n+ \frac{d-2}{2}} dt \\
& = 2^{n-1} \frac{\Gamma(\frac{d+1}{2}) \Gamma(d-1)}{\sqrt{\pi} \Gamma( \frac{d}{2}) } \frac{ \Gamma(n + \frac{d-1}{2}) (n-1)! }{ \Gamma(\frac{d-1}{2}) \Gamma(2n+ d-1)} \\
& \quad \times\int_{-1}^{1} (1-t)^{1-n} ( 1-t^2)^{n+ \frac{d-2}{2}} dt \\
& = 2^{n-1} \frac{\Gamma(\frac{d+1}{2}) \Gamma(d-1)}{\sqrt{\pi} \Gamma( \frac{d}{2}) } \frac{ \Gamma(n + \frac{d-1}{2}) (n-1)! }{ \Gamma(\frac{d-1}{2}) \Gamma(2n+ d-1)} \\
& \quad \times\int_{-1}^{1} (1-t)^{\frac{d}{2}} ( 1+t)^{n+ \frac{d-2}{2}} dt \\
& = 2^{n-1} \frac{\Gamma(\frac{d+1}{2}) \Gamma(d-1)}{\sqrt{\pi} \Gamma( \frac{d}{2}) } \frac{ \Gamma(n + \frac{d-1}{2}) (n-1)!}{ \Gamma(\frac{d-1}{2}) \Gamma(2n+ d-1) } \frac{2^{n+d-1} \Gamma( \frac{d}{2}) \Gamma( n + \frac{d}{2})}{\Gamma( n+d)}\\
& = 2^{d-2} \frac{\Gamma(\frac{d+1}{2}) \Gamma( \frac{d}{2}) }{\sqrt{\pi} } \frac{\Gamma(n)}{ \Gamma( n+d)}.
\end{align*}
\end{proof}
The coefficients for $n \in \mathbb{N}$ are thus clearly positive, and a comparison of consecutive coefficients shows that in all cases $\widehat{f_s}(n,d)$ is decreasing as a function of $n$. A quick asymptotic analysis gives us the following:
\begin{corollary}\label{cor:Riesz Gegen Coeff}
For $-2< s < d$, $n \in \mathbb{N}$, $\widehat{f_s}(n,d)$ is positive and decreasing as a function of $n$. Moreover
\begin{equation}\label{eq:Riesz Gegen Coeff Asymptotics}
\widehat{f_s}(n,d) = \begin{cases}
\sgn(s) 2^{d-s-1} \frac{\Gamma(\frac{d+1}{2}) \Gamma( \frac{d-s}{2})}{\sqrt{\pi} \Gamma(\frac{s}{2})} n^{s-d} + \mathrm{O}(n^{s-d-1}) & s \neq 0 \\
2^{d-2} \frac{\Gamma(\frac{d+1}{2}) \Gamma( \frac{d}{2}) }{\sqrt{\pi} } n^{-d} + \mathrm{O}(n^{-d-1}) & s = 0
\end{cases}.
\end{equation}
\end{corollary}
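The asymptotics in \eqref{eq:Riesz Gegen Coeff Asymptotics} follow from the standard estimate $\Gamma(n+a)/\Gamma(n+b) = n^{a-b}\big(1 + \mathrm{O}(n^{-1})\big)$, which is easy to confirm numerically. The sketch below (our own; the function name is hypothetical) uses log-gammas to avoid overflow for large $n$:

```python
from math import lgamma, exp

def gamma_ratio(n, s, d):
    # Gamma(n + s/2) / Gamma(n + d - s/2), computed stably via log-gammas.
    # By the estimate above, this behaves like n**(s - d) for large n.
    return exp(lgamma(n + s/2) - lgamma(n + d - s/2))
```

Multiplying by $n^{d-s}$ then gives $1 + \mathrm{O}(n^{-1})$, in line with the corollary.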
We note that in the case $d=1$, we actually have the Chebyshev polynomials of the first kind instead of the Gegenbauer polynomials. These are given by $T_0(t) = 1$ and, for $n \in \mathbb{N}$,
\begin{equation}
\lim_{d \rightarrow 1^+} \frac{2n+d-1}{2(d-1)} C_n^{\frac{d-1}{2}}(t) = T_n(t) = \cos( n \arccos(t)).
\end{equation}
Through this limit, one can quickly find that Lemmas \ref{lem:Riesz epsilon Gegen Coeff} and \ref{lem:Riesz Gegen Coeff} as well as Corollaries \ref{cor:Riesz Gegen Coeff} and \ref{cor:Riesz epsilon Gegen Coeff} still hold in this case.
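This scaled limit is straightforward to verify numerically via the three-term recurrence for the Gegenbauer polynomials (a small sketch of our own, evaluating $C_n^{\lambda}$ for $\lambda = (d-1)/2$ close to $0$):

```python
from math import cos, acos

def gegenbauer(n, lam, t):
    # Three-term recurrence: k C_k = 2(k+lam-1) t C_{k-1} - (k+2 lam-2) C_{k-2}.
    c0, c1 = 1.0, 2.0 * lam * t
    if n == 0:
        return c0
    for k in range(2, n + 1):
        c0, c1 = c1, (2.0*(k + lam - 1)*t*c1 - (k + 2*lam - 2)*c0) / k
    return c1

def scaled_limit(n, d, t):
    # (2n + d - 1) / (2(d-1)) * C_n^{(d-1)/2}(t), which tends to T_n(t) as d -> 1+.
    lam = (d - 1) / 2
    return (2*n + d - 1) / (2*(d - 1)) * gegenbauer(n, lam, t)
```

For $d = 1 + 10^{-6}$ the scaled values agree with $\cos(n \arccos t)$ to several digits.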
\subsection{Limits for polarization}
We define, for any symmetric, lower semi-continuous kernel $K: \mathbb{S}^d \times \mathbb{S}^d \rightarrow(- \infty, \infty]$ and finite Borel measure $\mu$, the potential
\begin{equation}\label{eq:Potential Def}
U_{K}^{\mu}(x) = \int_{\mathbb{S}^d} K(x,y) d \mu(y).
\end{equation}
We define the polarization of $\mu$ to be
\begin{equation}
P_K(\mu) = \inf_{x \in \mathbb{S}^d} U_K^{\mu}(x).
\end{equation}
\begin{lemma}\label{lem:Limit for Polarization}
Let $K: \mathbb{S}^d \times \mathbb{S}^d \rightarrow(- \infty, \infty]$ be a symmetric, lower semi-continuous kernel. Let $\mu$ be a finite Borel measure on $\mathbb{S}^d$ and $(K_m)_{m=1}^{\infty}$ be a sequence of symmetric, continuous kernels, increasing pointwise in $m$ to $K$. Then
\begin{equation}
P_K(\mu) = \lim_{m \rightarrow \infty} P_{K_m}(\mu).
\end{equation}
\end{lemma}
\begin{proof}
By the Monotone Convergence Theorem, we have that for all $x \in \mathbb{S}^d$,
\begin{equation*}
\lim_{m \rightarrow \infty} U_{K_m}^{\mu}(x) = U_K^{\mu}(x).
\end{equation*}
For $m \in \mathbb{N}$, let $x_m \in \argmin_{x \in \mathbb{S}^d} U_{K_m}^{\mu}(x)$ (which exists since $U_{K_m}^{\mu}$ is continuous on a compact set) and let $x_{\infty} \in \argmin_{x \in \mathbb{S}^d} U_K^{\mu}(x)$ (which exists by lower semi-continuity). Thus
\begin{equation}\label{eq:lower bound}
\lim_{m \rightarrow \infty} U_{K_m}^{\mu}(x_m) \leq \lim_{m \rightarrow \infty} U_{K_m}^{\mu}(x_{\infty}) = U_K^{\mu}(x_{\infty}).
\end{equation}
For all $m \in \mathbb{N}$, we see that
\begin{equation*}
U_{K_m}^{\mu}(x_m) \leq U_{K_m}^{\mu}(x_{m+1}) \leq U_{K_{m+1}}^{\mu}(x_{m+1}) \leq U_{K_{m+1}}^{\mu}(x_{\infty}) \leq U_{K}^{\mu}(x_{\infty}),
\end{equation*}
so $U_{K_m}^{\mu}(x_m)$ is an increasing sequence, bounded from above by $U_{K}^{\mu}(x_{\infty})$.
Since $\mathbb{S}^d$ is compact, there is a convergent subsequence $x_{m_k}$ of $x_m$, with limit point $x^* \in \mathbb{S}^d$. Now, for each $j, k, l \in \mathbb{N}$ such that $j \leq k \leq l$ we know
\begin{equation*}
U_{K_{m_j}}^{\mu}(x_{m_{k}}) \leq U_{K_{m_k}}^{\mu}(x_{m_{k}}) \leq U_{K_{m_k}}^{\mu}(x_{m_{l}}) \leq U_{K_{m_{l}}}^{\mu}(x_{m_{l}})
\end{equation*}
so for all $j\leq k$
\begin{equation*}
U_{K_{m_j}}^{\mu}(x_{m_{j}}) \leq U_{K_{m_j}}^{\mu}(x_{m_{k}}) \leq \lim_{l \rightarrow \infty} U_{K_{m_l}}^{\mu}(x_{m_l}) = \lim_{m \rightarrow \infty} U_{K_{m}}^{\mu}(x_{m}).
\end{equation*}
Thus, by continuity, for $j \in \mathbb{N}$,
\begin{equation*}
U_{K_{m_j}}^{\mu}( x^*) = \lim_{k \rightarrow \infty} U_{K_{m_j}}^{\mu}(x_{m_{k}}) \leq \lim_{m \rightarrow \infty} U_{K_{m}}^{\mu}(x_{m}).
\end{equation*}
Thus, by the Monotone Convergence Theorem, we have
\begin{equation}\label{eq:upper bound}
U_K^{\mu}(x^*) =\lim_{j \rightarrow \infty} U_{K_{m_j}}(x^*) \leq \lim_{m \rightarrow \infty} U_{K_{m}}^{\mu}(x_{m}).
\end{equation}
Our claim now follows from \eqref{eq:lower bound}, \eqref{eq:upper bound}, and the fact that $ U_K^{\mu}(x_{\infty}) \leq U_K^{\mu}(x^*)$.
\end{proof}
Taking $\mu = \sum_{j=1}^{N} \delta_{x_j}$ gives a discrete version of the result. We are unaware of a proof of this kind for other polarization problems, and we point out that the argument should carry over to arbitrary compact metric measure spaces as well.
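The discrete setting is easy to experiment with. The sketch below (our own; a grid minimization stands in for the exact infimum) truncates the Riesz kernel as $K_m = \min(K, m)$ and exhibits the monotone convergence of $P_{K_m}(\mu)$ for five equally spaced point masses on the circle:

```python
from math import cos, sin, pi, sqrt, inf

def riesz(x, y):
    # Riesz kernel 1/|x-y| in R^2, with value +inf on the diagonal.
    d2 = (x[0] - y[0])**2 + (x[1] - y[1])**2
    return inf if d2 == 0 else 1.0 / sqrt(d2)

def polarization(points, kernel, grid=2000):
    # P_K(mu) for mu = sum of point masses: minimize the potential over a grid on S^1.
    best = inf
    for i in range(grid):
        th = 2*pi*i/grid
        x = (cos(th), sin(th))
        best = min(best, sum(kernel(x, p) for p in points))
    return best

pts = [(cos(2*pi*j/5), sin(2*pi*j/5)) for j in range(5)]   # 5 equally spaced charges
# Truncated kernels K_m = min(K, m) increase pointwise to K, so P_{K_m} is nondecreasing.
vals = [polarization(pts, lambda x, y, m=m: min(riesz(x, y), m)) for m in (1, 2, 4, 8, 16)]
```

Here `vals` is nondecreasing in $m$ and, once $m$ exceeds the kernel values near the minimizer, agrees with the (grid-approximated) untruncated polarization, as the lemma predicts.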
\end{document} |
\begin{document}
\title{The LifeV library: engineering mathematics beyond the proof of concept}
\begin{abstract}
LifeV is a library for the finite element (FE) solution of partial differential equations
in one, two, and three dimensions.
It is written in C++ and designed to run on diverse parallel architectures, including
cloud and high performance computing facilities. In spite of its academic research nature, that of a library for the development and testing of new methods, a distinguishing feature of LifeV is its use on real world problems: it is intended to provide a tool for many engineering applications. It has actually been used in computational hemodynamics, including cardiac mechanics and fluid-structure interaction problems, as well as in porous media flow and in ice sheet dynamics, for both forward and inverse problems.
In this paper we give a short overview of the features of LifeV and its coding paradigms
on simple problems.
The main focus is on the parallel environment which is mainly driven by domain decomposition methods and based on
external libraries such as MPI, the Trilinos project, HDF5 and ParMetis.
\noindent \emph{Dedicated to the memory of Fausto Saleri.}
\end{abstract}
\section{Introduction}
\label{sec:introduction}
LifeV\footnote{Pronounced ``life five'', the name stands for Library
for Finite Elements, 5th edition as V is the Roman notation for the
number 5} is a parallel library written in C++ for the approximation
of Partial Differential Equations (PDEs) by the finite element method
in one, two and three dimensions.
The project started in 1999/2000 as a collaboration between
the modeling and scientific computing group (CMCS) at EPFL Lausanne
and the MOX laboratory at Politecnico di Milano. Later the
REO and ESTIME groups at INRIA joined the project.
Starting in 2006, the library was progressively parallelized using
MPI, with the Trilinos library suite as back-end interface.
In 2008 the Scientific Computing group at Emory University
joined the LifeV consortium.
Since then, the number of active developers has fluctuated between 20
and 40 people, mainly PhD students and researchers from the
laboratories within the LifeV consortium.
LifeV is open source and currently distributed under the LGPL license on github\footnote{\url{https://github.com/lifev}, install instructions at \url{https://bitbucket.org/lifev-dev/lifev-release/wiki/lifev-ubuntu}},
and migration to BSD License is currently under consideration.
The developers page is hosted by a Redmine system at
\url{http://www.lifev.org}.
LifeV has two specific aims: (i) it provides tools for developing and testing
novel numerical methods for single and multi-physics problems, and
(ii) it provides a platform for simulations of engineering and, more generally, real
world problems.
In addition to ``basic'' Finite Elements tools, LifeV also provides data structures and algorithms tailored for specific applications in a variety of fields, including fluid and structure dynamics,
heat transfer, and transport in porous media, to mention a few.
It has already been used in medical and industrial contexts, particularly for cardiovascular simulations, including fluid mechanics, geometrical multiscale modeling of the vascular system, cardiac electro-mechanics and its coupling with the blood flow.
When in 2006 we decided to introduce parallelism, we turned to available open-source tools:
MPI (mpich or openmpi implementations), ParMETIS, and the Epetra, AztecOO, IFPACK, ML, Belos, and Zoltan packages distributed within Trilinos \cite{trilinos}.
LifeV has benefited from the contribution of many PhD students, almost all of them working on projects financed by public funds. We provide a list of supporting agencies hereafter.
In this review article we explain the parallel design
of the library and provide two examples of how to solve PDEs using
LifeV.
Section~\ref{sec:parallel} is devoted to a description of
how parallelism is handled in the library while in
Section~\ref{sec:FeaturesAndParad} we discuss the distinguishing
features and coding paradigms of the library. In Section~\ref{sec:life-poisson} we illustrate how to use LifeV to approximate PDEs by the finite element method, using a simple Poisson problem as an example.
In Section~\ref{sec:Oseen} we show how to approximate unsteady Navier--Stokes equations
and provide convergence, scalability, and timings. We conclude by pointing to some applications of LifeV.
Before we detail technical issues, let us briefly address the question that naturally arises when approaching this
software: {\it why yet another finite element library}?
There is no question that the research and commercial arenas offer a huge variety of finite element libraries
(or, more generally, numerical solvers for partial differential equations)
to meet diverse expectations in different fields of engineering sciences. LifeV is, strictly speaking, no exception.
Since the beginning, LifeV was intended to address two needs: (i) {\it a permanent playground for new methodologies in computational mechanics};
(ii) {\it a translational tool to shorten the time-to-market of new successful methodologies to real engineering problems}.
Also, over the years, we organized portions of the library to be used for teaching purposes.
It was used as a sort of {\it gray box} tool, for instance in Continuing Education initiatives at the Politecnico di Milano
and in undergraduate courses at Emory University (MATH352: Partial Differential Equations in Action) - see \cite{FormaggiaSaleriVenezianiBook}.
As such, LifeV incorporated since the beginning the most advanced methodological developments on topics of interest for the different groups involved. In fact, state-of-the-art methodologies have been rapidly implemented, particularly in incompressible computational fluid dynamics,
to be tested on problems of real interest, so to quickly assess the real performances of new ideas and their practical impact.
On the other hand, advanced implementation paradigms and efficient parallelization were prioritized, as we will describe in the paper.
When a code is developed with a strong research orientation, the working force is mainly provided by young and junior scholars on specific projects.
This has required a huge coordination effort, within each group and overall. The stratification of different evolving ideas has sometimes
made the consolidation of portions of the code quite troublesome, also because of the diverse backgrounds of the developers.
Notwithstanding this, the stimulus of the applications has promoted the development of truly advanced methods
for the solution of specific problems. In particular, the vast majority of the stimuli were provided by computational hemodynamics,
as all the groups involved worked in this field with a strict connection to medical and healthcare institutions.
The result is a library that is extremely advanced in terms of performance, with a uniquely deep treatment of a specific class of problems.
Beyond the overall approach to the numerical solution of the incompressible Navier-Stokes equations (with either monolithic or algebraic partitioning schemes),
fluid-structure interaction problems, patient-specific image based modeling, defective boundary conditions and geometrical multiscale modeling
have been implemented and tested in LifeV in an extremely competitive and unique way. The validation and benchmarking
on real applications, as witnessed by several publications outside the traditional field of computational mechanics, make
it a reliable and efficient tool for modern engineering.
So, it is yet another finite element library, but with peculiarities that make it a significant - somehow unique - example of modern scientific computing tools.
\subsection{Financial support}
LifeV has been supported by the
6th European Framework Programme (Haemodel Project, 2000-2005,
PI: A.\ Quarteroni),
the 7th European Framework Programme (VPH2 Project, 2008-2011, ERC
Advanced Grant MATHCARD 2009-2014, PI: A.\ Quarteroni),
the Italian MIUR (PRIN2007, PRIN2009 and PRIN12 projects, PI:
A.\ Quarteroni and L.\
Formaggia),
the Swiss National Science Foundation
(several projects, from 1999 to 2017, 59230, 61862, 65110, 100637,
117587, 125444, 122136, 122166, 141034, PI: A.\ Quarteroni),
the Swiss supercomputing initiatives HP2C and PASC (PI: A.\ Quarteroni),
the Fondazione Politecnico di Milano with Siemens Italy (Aneurisk
Project, 2005-2008, PI: A.\ Veneziani),
the Brain Aneurysm Foundation
(Excellence in Brain Aneurysm Research, 2010, PI: A.\ Veneziani),
Emory University Research Committee Projects (2007, 2010, 2015, PI:
A.\ Veneziani),
ABBOTT Resorb Project (2014-2017, PI: H.\ Samady,
D.\ Giddens, A.\ Veneziani),
the National Science Foundation (NSF DMS 1419060, 2014-2017, PI:
A.\ Veneziani, NSF DMS 1412963, 2014-2017, PI: A.\ Veneziani, NSF DMS 1620406, 2016-2018, PI: A. \ Veneziani),
iCardioCloud Project (2013-2016, PI: F.\ Auricchio, A.\ Reali,
A.\ Veneziani),
the Lead beneficiary programme (Swiss SNF and German DFG, 140184,
2012-2015, PI: A.\ Klawonn, A.\ Quarteroni, J.\ Schr\"oder),
and the French National Research Agency. Some private companies also collaborated to both the
support and the development; in particular, MOXOFF SpA (2012-2014) exploited the geometric multiscale paradigm for the simulation of an industrial packaging system, and Eni SpA (2011-2014) contributed to the development of the Darcy solver and extended FE capabilities, as well as to the use of the library for the simulation of the evolution of sedimentary basins~\cite{cervone12:_simul}.
\subsection{Main Contributors}
The initial core of developers was the group of A.\ Quarteroni at MOX, Politecnico
di Milano, Italy and at the Department of Mathematics, EPFL, Lausanne, Switzerland from
an initiative of L.\ Formaggia, J.F.\ Gerbeau, F.\ Saleri and
A.\ Veneziani.
The group of J.F.\ Gerbeau at INRIA, Rocquencourt, France gave significant contributions from
2000 through 2009 (in particular with M.\ Fernandez and, later,
M.\ Kern).
The important contribution of C.\ Prud'homme and G.\ Fourestey during their stays at
EPFL and of D.\ Di Pietro at University of Bergamo are acknowledged too.
S.\ Deparis has been the coordinator of the LifeV consortium since 2007.
Here, we limit ourselves to summarizing the list of main contributors
who actively developed the library in the last five years.
We group the names by affiliation.
As some of the authors moved over the years to different institutions, they may be listed with multiple affiliations hereafter.
\noindent
[D.\ Baroli, A.\ Cervone, M.\ Del Pra, N.\ Fadel,
E.\ Faggiano, L.\ Formaggia, A.\ Fumagalli, G.\ Iori,
R.M.\ Lancellotti, A.\ Melani, S.\ Palamara, S.\ Pezzuto,
S.\ Zonca]\footnote{\label{mox}MOX, Politecnico di Milano IT}.
[L.\ Barbarotta, C.\ Colciago, P.\ Crosetto,
S.\ Deparis, D.\ Forti, G.\ Fourestey, G.\ Grandperrin,
T.\ Lassila, C.\ Malossi,
R.\ Popescu, S.\ Quinodoz, S.\ Rossi,
R.\ Ruiz Baier, P.\ Tricerri]\footnote{\label{epfl}CMCS, EPFL Lausanne, CH}, M.\ Kern\footnote{INRIA, Rocquencourt, FR},
[S.\ Guzzetti, L.\ Mirabella,
T.\ Passerini, A.\ Veneziani]$^{\text{\footnotesize{\ref{mox},}}}$\footnote{\label{emory}Dept. Math \& CS, Emory Univ, Atlanta GA USA},
U.\ Villa$^{\text{\footnotesize{\ref{emory}}}}$,
L.\ Bertagna$^{\text{\footnotesize{\ref{emory}, }}}$\footnote{\label{fsu} Department of Scientific Computing, Florida State University, Tallahassee, FL USA},\footnote{\label{snl}Sandia National Lab, Albuquerque, NM USA},
M.\ Perego$^{\text{\footnotesize{\ref{mox},\ref{emory},\ref{fsu},\ref{snl}}}}$,
A. \ Quaini\footnote{Department of Mathematics, University of Houston, TX USA}$^{\text{\footnotesize{,\ref{emory}}}}$,
H.\ Yang$^{\text{\footnotesize{\ref{emory},\ref{fsu}}}}$,
A. \ Lefieux\footnote{Department of Civil Engineering and Structures, University of Pavia, IT}$^{\text{\footnotesize{,\ref{emory}}}}$.
\paragraph*{Abbreviations}: Algebraic Additive Schwarz (AAS), FE (Finite Elements), FEM (Finite Element Method), DoF (Degree(s) of Freedom), OO (Object Oriented), PCG (Preconditioned Conjugate Gradient), PGMRes (Preconditioned Generalized Minimal Residual), ML (Multi Level), DD (Domain Decomposition), MPI (Message Passing Interface), ET (Expression Template), HDF (Hierarchical Data Format), HPC (High Performance Computing), CSR (Compressed Sparse Row)
\section{The Parallel Framework}
\label{sec:parallel}
The library can be used for the approximation of PDEs in
one, two, and three dimensions. Although it can be used in serial mode (i.e., with one
processor), parallelism is crucial when solving three-dimensional problems. To better underline the ability of LifeV
to tackle large problems, in this review we focus on PDEs discretized on unstructured linear tetrahedral meshes,
although we point out that LifeV also supports hexahedral and quadratic meshes.
Parallelism in LifeV is achieved by domain decomposition (DD) strategies, although the use of
DD preconditioners for the solution of the sparse linear systems is not mandatory.
In a typical simulation, the main steps involved in the parallel solution of the finite element problem using LifeV are the following:
\begin{enumerate}
\item All the MPI processes load the same (not partitioned) mesh.
\item The mesh is partitioned in parallel using ParMETIS or Zoltan. At the end each process keeps only its own local partition.
\item The DoFs are distributed according to the mesh partitions. By looping on the local partition, a list of local DoF in global numbering is built.
\item The FE matrices and vectors are distributed according to the DoFs list. In particular,
the matrices are stored in row format, for which whole rows are assigned to the process owning the associated DoF.
\item Each process assembles its local contribution to the matrices and vectors. Subsequently, a global communication step consolidates the contributions on shared nodes (at the interface between two subdomains).
\item The linear system is solved using an iterative solver, typically either a Preconditioned Conjugate Gradient (PCG)
when possible or a Preconditioned GMRes (PGMRes). The preconditioner runs in parallel. Ideally, the number of preconditioned iterations should be independent of the number of processes used.
\item The solution is written to mass storage
in parallel using HDF5 for post-processing purposes (see Sect.~\ref{sec:paraview}).
\end{enumerate}
The aforementioned steps are explained in detail in the next subsections.
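The flavor of steps 2--6 can be conveyed in a toy setting. The following sketch (plain serial Python standing in for MPI ranks, in no way LifeV code) partitions a 1D mesh between two ``processes'', assembles the local P1 stiffness contributions for $-u''=1$ on $(0,1)$ with homogeneous Dirichlet conditions, sums the contributions at the shared interface node, and solves the assembled system:

```python
n_el = 8
h = 1.0 / n_el
elements = [(i, i + 1) for i in range(n_el)]          # step 2: the "mesh"
parts = [elements[:n_el // 2], elements[n_el // 2:]]  # two subdomains

rank_mats, rank_rhs = [], []
for local in parts:                                   # step 5: local assembly
    A, b = {}, {}
    for (i, j) in local:
        for (r, c, v) in [(i, i, 1/h), (i, j, -1/h), (j, i, -1/h), (j, j, 1/h)]:
            A[(r, c)] = A.get((r, c), 0.0) + v
        b[i] = b.get(i, 0.0) + h / 2                  # load vector for f = 1
        b[j] = b.get(j, 0.0) + h / 2
    rank_mats.append(A); rank_rhs.append(b)

# "Communication": sum the contributions for the node shared by the two subdomains.
A, b = {}, {}
for Ak, bk in zip(rank_mats, rank_rhs):
    for k, v in Ak.items(): A[k] = A.get(k, 0.0) + v
    for k, v in bk.items(): b[k] = b.get(k, 0.0) + v

# Dirichlet BCs and a dense direct solve (stand-in for step 6's Krylov solver).
N = n_el + 1
M = [[A.get((r, c), 0.0) for c in range(N)] for r in range(N)]
rhs = [b.get(r, 0.0) for r in range(N)]
for bc in (0, N - 1):
    M[bc] = [1.0 if c == bc else 0.0 for c in range(N)]
    rhs[bc] = 0.0
for p in range(N):                                    # Gaussian elimination
    for r in range(p + 1, N):
        f = M[r][p] / M[p][p]
        for c in range(p, N): M[r][c] -= f * M[p][c]
        rhs[r] -= f * rhs[p]
u = [0.0] * N
for r in reversed(range(N)):
    u[r] = (rhs[r] - sum(M[r][c] * u[c] for c in range(r + 1, N))) / M[r][r]
```

For P1 elements in 1D with this load, the computed nodal values coincide with the exact solution $u(x) = x(1-x)/2$.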
\subsection{Mesh partitioning: ParMETIS and Zoltan}
\label{sec:parmetis}
As mentioned above, LifeV achieves parallelism by partitioning the mesh among the available
processes. Typically, this is done ``online'': the entire mesh is loaded by all the processes but
it is deleted after the partitioning phase, so that each process keeps only the part required for the solution of the
local problem and to define inter-process communications. As the mesh size increases, the ``online'' procedure
may become problematic. Therefore for large meshes it is possible, and sometimes
necessary, to partition the mesh
offline on a workstation with sufficient memory~\cite{radu:PhD}. It is also
possible to include an halo of ghost elements such that the partitions overlap by one or more layers of elements (see e.g. \cite{guzz1}). This may be relevant for schemes that require a large stencil. To perform the partition, LifeV can interface with two third party libraries: ParMetis and Zoltan \cite{metis,zoltan}.
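The construction of a one-layer halo of ghost elements can be sketched abstractly (a toy sketch of ours, not the ParMETIS/Zoltan interface; an element is ghost for a rank if it is unowned but shares at least one node with an owned element):

```python
def ghost_layer(elements, owner, rank):
    # elements: list of node tuples; owner[i]: rank owning elements[i].
    # Returns the one-layer halo for `rank`.
    owned_nodes = {n for e, r in zip(elements, owner) if r == rank for n in e}
    return [e for e, r in zip(elements, owner)
            if r != rank and any(n in owned_nodes for n in e)]
```

Iterating the construction (adding the halo's nodes to `owned_nodes` and recomputing) would yield overlaps of more than one layer.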
\subsection{Distributed arrays: Epetra}
\label{sec:epetra}
The sparse matrix class used in LifeV is a wrapper to the Epetra matrix container
Epetra\_FECrsMatrix and, similarly, the vector
type is a wrapper to Epetra\_FEVector, both provided by the Epetra~\cite{heroux09:_epetra} package of Trilinos.
The distribution of the unknowns is determined automatically by the
partitioned mesh: with a loop over each element of the local mesh we create the list
of DoFs managed by the current processor. This procedure in fact creates a {\it repeated} map,
i.e., an instance of an Epetra\_Map in which the entries referring to DoFs associated with geometric entities lying on the interface between two (or more) subdomains appear on several processes. Then a {\it unique}
map is created, in such a way that, among all the owners of a \textit{repeated} DoF, exactly one also owns it in the \textit{unique} map.
The unique map is used for the vectors and matrices to be used in the linear algebra routines as well as for the solution vector. The repeated map is used
to access information stored on other processors, which is usually necessary only in the assembly and post-processing phases.
The assembly of the FE matrices is typically performed by looping on the local elements~\cite{ernguermond,FormaggiaSaleriVenezianiBook}.
To reduce latency, the loops on each subdomain are performed in parallel, without the need for any communication
during the loop. A single communication phase takes place once all processes have assembled their local contributions, to complete the assembly for interface DoF.
Efficiency and stability may be improved by two further available operations:
(i) precomputing the matrix graph; (ii) using overlapping meshes.
The former requires the creation, at the beginning of the simulation, of an Epetra\_Graph associated with the matrix. Since it depends on the problem at hand and on the chosen finite elements,
its computation needs a loop over all the elements. {This is coded by {\it Expression Templates}} (ET), see Sect. \ref{sec:et}, using the same call sequence as for the matrix assembly.
The latter further reduces communications by allowing all processes to compute the local finite element matrix also on all elements sharing a DoF on the interface. As a result, each process can independently compute all the entries of matrices and vectors pertaining to the DoF it owns at the price of some extra computation. Yet, the little overhead is justified by the complete elimination of the post-assembly communication costs.
\subsection{Parallel preconditioners}
\label{sec:prec}
The solution of linear systems in LifeV relies on the Trilinos~\cite{trilinos} packages AztecOO
and Belos~\cite{bavier12:_amesos_belos}, which provide an extensive choice of iterative or direct
solvers. LifeV provides a common interface to both of them.
The proper use of ParMETIS and Zoltan for the partitioning, and of Epetra matrices and vectors for the linear algebra, ensures that matrix-vector multiplication and vector operations are properly parallelized, i.e., they scale well with the number of processes used, and communications are optimized. In this situation the parallel scalability of iterative solvers like PCG or PGMRES depends essentially on the properties of the preconditioner.
The choice of preconditioner is thus critical. In our experience it may follow two directions: (i)
parallel preconditioners for generic linear systems, like single or multilevel overlapping Schwarz preconditioners, or multigrid preconditioners, which are generally well suited for highly coercive elliptic problems, or incomplete factorizations (ILU), which are generally well suited for advection-dominated elliptic problems; (ii)
problem-specific preconditioners, typically required for multifield or multiphysics problems. These preconditioners exploit specific features of the problem at hand to recast the solution into standard problems that can eventually be solved with the generic strategies in (i). Preconditioners of this class are, e.g., the SIMPLE, the Least Square Commutator, the Cahouet-Chabard and the Yosida preconditioners for the incompressible Navier-Stokes equations~\cite{PatankarSpalding1972,elman2014finite,cahouet1988some,Veneziani2003}
or the Monodomain preconditioner for the Bidomain problem in electrocardiology~\cite{giorda09}.
In LifeV, preconditioners for elliptic problems are an interface to the Trilinos package IFPACK~\cite{ifpack}, which is a suite of algebraic preconditioners based on incomplete factorization, and to ML~\cite{ml-guide} or MueLu~\cite{MueLu},
which are two Trilinos packages for multi-level preconditioning based on algebraic multigrid.
Typically we use IFPACK to define algebraic overlapping Schwarz preconditioners with exact or inexact LU factorization
of the restricted matrix. The preconditioner $P$ in this case can be formally written as
\begin{equation}
P^{-1} = \sum_{i=1}^n R_i^T A_i^{-1} R_i, \text{ where }
A_i = R_i A R_i^T, \label{eq:schwarz}
\end{equation}
where $A$ is the finite element matrix related to the PDE approximation, $n$ is the number of partitions (or subdomains)
$\Omega_i$, $R_i$ is the restriction operator to $\Omega_i$ and $R_i^T$ the
extension operator from $\Omega_i$ to the whole domain $\Omega$.
$A_i$ is inverted many times during the iterations of PCG or PGMRES, which is why it is factorised
by LU or ILU. In LifeV, the choice of the factorization is left to the user.
Similarly, it is possible to use multilevel preconditioners via the ML~\cite{ml-guide} or MueLu~\cite{MueLu,MueLuURL} packages.
They also work at the algebraic level,
and the coarsening and prolongation are done either automatically or by user-defined strategies, based on the parallel distribution of the matrix and its graph.
The current distribution of LifeV does not offer the latter option, but the interested developer could add this extra functionality with relatively little effort.
\subsection{Parallel I/O with HDF5}
\label{sec:paraview}
When dealing with large meshes and large number of processes, input/output access to files on disk deserves particular care.
A prerequisite is that the filesystem of the supercomputing architecture being used provides the necessary access speed in parallel. However, this is not enough. MPI itself offers parallel I/O capabilities, for which HDF5 is one
of the existing front-ends~\cite{hdf5}. Although other formats are also supported (see Section \ref{subsec:io_formats}), LifeV strongly encourages the use of HDF5 for I/O, essentially for three reasons.
\begin{itemize}
\item The number of files produced is independent of the number of running processes: each process accesses the same file in parallel
and writes out its own chunks of data.
As a result, LifeV generally produces one single large output binary file, along with an XMF text file
describing its contents.
\item HDF5 is compatible with open source post-processing visualization tools like Paraview~\cite{paraview} and VisIt~\cite{visit}.
\item Having one single binary file makes restarting a simulation very easy.
\end{itemize}
The interface to HDF5 in LifeV exploits the facilities of the EpetraExt package in Trilinos, and since the LifeV vectors are compatible with Epetra format the
calls are simple~\cite{heroux09:_epetra}.
\section{Features and Paradigms}\label{sec:FeaturesAndParad}
\subsection{I/O data formats} \label{subsec:io_formats}
In a typical simulation, the user provides a text file containing input data (including physical and discretization parameters, and options for the linear/nonlinear solvers), and the code will generate results, which need to be stored for post-processing.
Although it is up to the user to write the program main file where the data file is parsed, LifeV makes use of two particular classes in order to forward the problem data to all the objects involved in the simulation: the GetPot class \cite{getpot} (for which LifeV also provides an \emph{ad hoc} re-implementation inside the core module), and the ParameterList class from the Teuchos package in Trilinos. The former has been the preferred way since the early development of LifeV, and is therefore supported by virtually all classes that require a setup. The latter is used mostly for the linear and nonlinear solvers, since it is the standard way to pass configuration parameters to the Trilinos solvers. Both classes map strings representing the names of properties to their actual values (be it a number, a string, or other), and they both allow the user to organize the data in a tree structure.
Details on the syntax of the two formats
are available online.
When it comes to mesh handling, LifeV has the built-in capability of generating structured meshes on domains of the form $\Omega=[a_1,b_1]\times[a_2,b_2]\times[a_3,b_3]$. If a more general mesh is required, the user needs to create it beforehand. Currently, LifeV supports the FreeFem \cite{freefem} and Gmsh \cite{gmsh} formats (both usually with extension \verb|.msh|) in 2D, while in 3D it supports the formats of Gmsh, Netgen \cite{netgen} (usually with extension \verb|.vol|) and Medit \cite{medit} (usually with extension \verb|.mesh|). Additionally, as mentioned in Section \ref{sec:parmetis}, LifeV offers the capability of offline partitioning. The partitions of a mesh are stored and subsequently loaded using HDF5 for fast and parallel input.
Finally, LifeV offers three different formats for storing the simulation results for post-processing: Ensight \cite{ensight}, HDF5 \cite{hdf5},
which is the preferred format when running in parallel, and VTK \cite{vtk}. All these formats are supported by the most common scientific visualization software packages, like Paraview \cite{paraview} and VisIt \cite{visit}. The details on these formats can be found on their respective webpages.
\subsection{Expression Templates for Finite Elements}
\label{sec:et}
One of the aims of LifeV is to be used in multiple contexts, ranging from industrial and social applications to teaching
purposes. For this reason, it is important to find the best trade-off between computational efficiency
and code readability.
The high level of abstraction characteristic of C++ is in principle perfectly matched by the abstraction of mathematics.
However, versatility and efficiency may conflict. Quoting~\cite{furnish},
a ``natural union has historically been more of a stormy relationship''. This aspect is crucial in High Performance Computing, where efficiency is a priority.
Operator overloading greatly improves readability, maintainability and versatility;
however, it may adversely affect the run time.
{\it Expression Templates} (ET) were originally devised by T. Veldhuizen~\cite{veldhuizen} to minimize this drawback,
and later further developed in the context of linear algebra~\cite{HArdtlein2009,iglberger2012expression} and of the solution of partial differential equations~\cite{pflaum2001expression,di2009expression}.
In the context of linear algebra, the technique was developed to allow a high level vector syntax without the speed penalty due to function overloading. The goal of ET is to write high level \emph{Expressions}, and use \emph{Template} meta-programming to parse the expression at compile time, generating highly efficient code. Put simply, in the context of PDEs, ET aims to allow a syntax of the form
\begin{verbatim}
auto weak_formulation = alpha*dot(grad(u), grad(v)) + sigma*u*v;
\end{verbatim}
which is an \emph{expression} very close to the abstract mathematical formulation of the problem. However, during the assembly phase, the resolution of the overloaded operators and functions would yield a performance hit compared to a corresponding \emph{ad hoc} for loop. To overcome this issue, the above \emph{expression} is implemented making massive use of \emph{template} meta-programming, which allows the compiler to expand the \emph{expression} at compile time, resulting in highly efficient code. Upon expansion, the \emph{expression} takes the form of a combination of polynomial basis functions and their derivatives at a quadrature point. At run time, during the assembly phase, this combination is evaluated \emph{at once} at each quadrature point, as opposed to the more \emph{classical} implementation, where all contributions are evaluated separately and then summed up.
For instance, for the classical linear advection-diffusion-reaction operator in the unknown $u$, $-\mu \Delta u + \mathbf{\beta} \cdot \nabla u + \sigma u$,
we need to combine three different differential operators weighted by the coefficients $\mu,\mathbf{\beta}$ and $\sigma$.
These may be numbers, prescribed functions or pointwise values (inherited for instance from another FE computation).
In nonlinear problems --- after proper linearization --- expressions may involve finite element functions too.
Breaking down the assembly into each differential operator individually, with its own coefficient, is possible but leads to duplicating the loops over
the quadrature nodes, as opposed to assembling their sum.
In LifeV the possible differential operators for the construction of linear and nonlinear advection-diffusion-reaction problems are encapsulated into specific \emph{Expression}s,
following the idea originally proposed in \cite{eta}.
The ET technique provides readable code
with no efficiency loss for operator parsing. As a matter of fact, gathering all the assembly operations in a single loop,
as opposed to standard approaches with a separate assembly for each elemental operator, introduces computational advantages. Indeed, numerical tests have pointed out a significantly improved performance for problems with non-constant coefficients when using the ET technique.
A detailed description of the definition and implementation of ET in LifeV can be found in~\cite{Quinodoz:PhD} and in the code snapshots presented later on.
\section{Basics: Life = Library of Finite Elements}\label{sec:life-poisson}
The library supports different types of finite elements. The use of ET makes the set-up of simple problems easy, as we illustrate hereafter.
\subsection{The Poisson problem}
\label{sec:PDE}
As a first example, we present the setup of a finite element solver for a Poisson
problem.
We assume a polygonal domain $\Omega$ with boundary $\partial\Omega$ split into two subsets $\Gamma^D$ and $\Gamma^N$ of positive measure such that
$\Gamma^D\cup\Gamma^N = \partial\Omega$ and $\Gamma^D\cap\Gamma^N=\emptyset$.
Let $V^D_h\subset H^1_{\Gamma^D}$ be a discrete finite element space relative to a mesh
of $\Omega$, for example continuous piecewise linear functions vanishing on $\Gamma^D$.
The Galerkin formulation of the problem reads: find $u_h\in V^D_h$ such that
\begin{equation}
\label{eq:poisson}
\int_{\Omega} \kappa \Grad u_h \cdot \Grad \varphi_h
=
- \int_{\Omega} \kappa \Grad u^D \cdot \Grad \varphi_h
+\int_\Omega f \varphi_h
+\int_{\Gamma^N} g^N \varphi_h
\qquad \forall \varphi_h \in V^D_h,
\end{equation}
where $\kappa$ is the diffusion coefficient,
possibly dependent on the space coordinates, $g^N$ is the Neumann boundary datum,
$\frac{\partial u}{\partial n} = g^N$ on $\Gamma^N$, and $f$ is the volumetric force term.
The lifting $u^D$ can be any finite element function such that $u^D|_{\Gamma^D}$
is a suitable approximation of $g^D$. As is usually done, $u^D$ is chosen such that $u^D|_{\Gamma^D}$
is the Lagrange interpolation of $g^D$, extended by zero inside $\Omega$.
In LifeV, DoF associated with Dirichlet boundary conditions are not physically eliminated from the FE unknown vectors
and matrices.
Even though this elimination would certainly benefit the performance of the linear algebra solver,
it introduces a practical burden in terms of implementation and memory, particularly in 3D unstructured problems, that makes it less appealing.
The enforcement of these conditions can alternatively be done in different ways, as illustrated e.g. in~\cite{FormaggiaSaleriVenezianiBook},
after the matrix assembly. We illustrate the strategy adopted in the following sections.
\subsection{Matrix form and expression templates}
\label{sec:ETA}
Firstly, we introduce the matrix assembled for the homogeneous natural conditions associated with the
differential operator at hand (aka ``do-nothing'' boundary conditions, as they do not require any extra work beyond the pure discretization of the differential operator),
i.e.
$$
A = (a_{ij})_{i,j=1,\dots,n} \text{ and }
a_{ij} =
\int_{\Omega} \kappa \Grad \varphi_j \cdot \Grad \varphi_i, \quad i,j=1,\dots,n,
$$
where $\varphi_j$, $j=1,\dots,n$ are the basis functions of the finite element space $V_h$.
Thanks to the ET framework~\cite{eta,Quinodoz:PhD}, once the mesh, the solution FE space and the quadrature rule have been created, the assembly of the stiffness matrix $A$ is as simple as the following instruction
\lstinputlisting[language=C++]{assembly.cpp}
We emphasize how the ET syntax clearly highlights the differential operator being assembled, making the code easy to read and maintain. In a similar way, we define the right hand side of the linear problem as the
vector
$$
\mathbf b = (b_i)_{i=1,\dots,n}\text{ where }
b_i = \int_\Omega f \varphi_i + \int_{\Gamma^N}g^N\varphi_i, \quad i=1,\dots,n.
$$
In this case, possible non-homogeneous Neumann or natural conditions are included.
Finally, the Dirichlet boundary conditions are enforced
by setting the rows of $A$ associated with Dirichlet DoF to zero, except for the diagonal entries.
In this way, the equation associated with the $i$-th Dirichlet DoF is replaced by $c u_i = c g_i$, where $c$ is a scaling factor, depending in general on the mesh size, used to control the condition number of the matrix.
A general strategy is to pick a value of the same order of magnitude
as the entries of the row of the do-nothing matrix being modified.
Without further modification, the system matrix is no longer symmetric.
Since many of the problems faced in applications are not symmetric, we describe here
only how to deal with non-symmetric matrices.
It is worth noting that the loss of symmetry does not prevent the use of specific methods for symmetric systems,
like CG, when appropriate, as pointed out in \cite{ernguermond}.
A symmetrization of the matrix can be also achieved by enforcing the condition $c u_i = c g_i$
column-wise, i.e. by setting to 0 also the off-diagonal entries of the columns of Dirichlet DoF.
Some sparse-matrix formats oriented to row-wise access, like the popular CSR,
need in this case to be equipped with additional storage information to make column-wise access convenient~\cite{FormaggiaSaleriVenezianiBook}.
\subsection{Linear algebra}
\label{sec:DD}
The linear system $A\mathbf x = \mathbf b$ can be solved by a preconditioned iterative method like
PCG, PGMRES, BiCGStab, etc., available in the AztecOO or Belos packages of Trilinos. The following snippet highlights the simplicity of LifeV's linear solver interface.
\lstinputlisting[language=C++]{solveSystem.cpp}
The choice of the method and its settings are specified via an input XML file. For the above example (which uses AztecOO as solver and Ifpack as preconditioner), a minimal input file {\tt params.xml} would have the following form
\lstinputlisting[language=xml]{SolverParamList.xml}
The main difficulty is to set up a scalable preconditioner. As pointed out above, in LifeV there are
several options, based on Algebraic Additive Schwarz (AAS) or multigrid preconditioners. In the first case,
the local problem related to $A_i$ in~\eqref{eq:schwarz} has to be solved.
It is possible to use an LU factorization, using the interface with Amesos~\cite{amesos,amesos:para06} or
incomplete factorizations (ILU).
LU factorizations are more robust than incomplete ones, in the sense that they do not need any
parameter tuning (which is delicate, in particular within AAS), while incomplete ones are much faster and require less storage.
In a parallel context though, for a given problem,
the size of the local problem is inversely proportional to the number
of subdomains. The LU factorization, whose cost depends only on the number of unknowns,
is then perfectly scalable, but more memory demanding. An example of the scalability in this setting is given in Figure~\ref{fig:laplacian}.
LifeV leaves the choice to the user depending on the type of problem and
computer architecture at hand.
\begin{figure}
\caption{Solving a Poisson problem in a cube with P2 finite elements with
1'367'631 degrees of freedom. The scalability in terms of CPU time (left) is perfect, however
the number of iterations (right) increases linearly. The choice of the preconditioner is not optimal: the use
of a coarse level or of multigrid in the preconditioner is essential and allows the use of more processes with no
loss of resources, cf.\ also Figure~\ref{fig:VMSscalability}.}
\label{fig:laplacian}
\end{figure}
The previous example is just an immediate demonstration of LifeV coding. For other examples we refer the reader to~\cite{FormaggiaSaleriVenezianiBook}.
\section{The CFD Portfolio}
\label{sec:Oseen}
A core application developed since the beginning, consistently with the tradition of the group where the library was
originally conceived, is incompressible fluid dynamics, which is particularly relevant
for hemodynamics.
It is well known that the problem has a {\it saddle point} nature that stems from the incompressibility constraint.
From the mathematical standpoint, this introduces specific challenges, for instance the
choice of finite element spaces for velocity and pressure, which should satisfy the so-called {\it inf-sup condition}, unless special stabilization
techniques are used~\cite{elman2014finite}. LifeV offers both possibilities.
In fact, one can choose among inf-sup stable P2-P1 finite element pairs or equal-order P1-P1 or P2-P2 stabilized formulations, either
by interior penalty \cite{burmanfernandez} or SUPG-VMS~\cite{Bazilevs2007173,FortiDedeVMS15}.
As is well known, for high Reynolds number flows it is important to describe turbulence by modeling it, since resolving it is impossible in practice. Since hemodynamics, where turbulence is normally less relevant, is the main LifeV application, LifeV has not implemented a full set of turbulence models. However, it includes the possibility of using the Large Eddy Simulation (LES)
approach, which relies on the introduction of a suitable filter of the
convective field in the Navier-Stokes equations, with the role of bringing the
unresolved scales of turbulence
to the mesh scale.
The Van Cittert deconvolution operator considered in~\cite{BowersRebholz}
has recently been introduced in LifeV in~\cite{BertagnaQuainiVeneziani}.
The implementation has been validated up to a Reynolds number of 6500 against the FDA Critical Path Initiative test case.
Another LES procedure based on the variational splitting of resolved and unresolved
parts of the solution has been considered in~\cite{FortiDedeVMS15}, while other LES filtering techniques, in particular the $\sigma$-model~\cite{nicoud2011using}, have been implemented in~\cite{Lancellotti15}.
\subsection{Preconditioners for Stokes problem}
\label{preconditioners}
In Section~\ref{sec:prec} we have introduced generic parallel preconditioners based on AAS or multigrid.
These algorithms have been originally devised for elliptic problems.
In our experience, their use for
saddle-point problems like the Darcy, Stokes and Navier--Stokes equations, or for fluid-structure interaction problems, is not effective.
However, their combination with specific preconditioners, such as SIMPLE, Least Square Commutator or Yosida for the unsteady Navier--Stokes equations,
or Dirichlet-Neumann for FSI, leads to efficient and scalable solvers.
In~\cite{DeparisGrandperrinQuarteroni2013} this approach has been applied with
success to unsteady Navier-Stokes problems with inf-sup stable finite elements,
and then extended to a VMS-LES stabilized formulation with equal order elements for
velocity and pressure~\cite{FortiDedeVMS15}, see also Figure~\ref{fig:VMSscalability}.
\begin{figure}
\caption{Flow around a cylinder, Reynolds number equal to 22'000.
Scalability of hybrid preconditioners for stabilized Navier--Stokes equations~\cite{FortiDedeVMS15}.}
\label{fig:VMSscalability}
\end{figure}
Applying the same techniques to build a parallel preconditioner for FSI
based on a Dirichlet--Neumann splitting~\cite{CrosettoDeparisFouresteyQuarteroni2011}
and a SIMPLE~\cite{PatankarSpalding1972,Elman2008} preconditioner
does not lead to a scalable algorithm. To obtain scalability, it is necessary
to add further algebraic manipulations, which leads to the FaCSI
preconditioner~\cite{DeparisFortiGrandperrinQuarteroniFaCSI2015}, see also Figure~\ref{fig:FSIscalability}.
\begin{figure}
\caption{Blood flow in a patient-specific arterial bypass. Scalability of the FaCSI preconditioner for FSI simulations~\cite{DeparisFortiGrandperrinQuarteroniFaCSI2015}.}
\label{fig:FSIscalability}
\end{figure}
\begin{table}
\begin{tabular}{lccccc}
\hline
Mesh & Fluid DoF & Structure DoF & Coupling DoF & Geometry DoF & Total \\
\hline
Coarse & 9'029'128 & 2'376'720 & 338'295 & 8'674'950 & 20'419'093 \\
Fine & 71'480'725 & 9'481'350 & 1'352'020 & 68'711'934 & 151'026'029\\
\hline
\end{tabular}
\caption{Femoropopliteal bypass test case: number of Degrees of Freedom (DoF).}\label{Tab:cylDofs}
\end{table}
\subsection{Algebraic Factorizations}
Another issue is to reduce the computational cost by separating pressure and velocity computations.
The origin of splitting schemes for Navier-Stokes problems may be dated back to the
independent work of A. Chorin and R. Temam, which gave rise to the well-known scheme that bears their names.
The basic scheme operates at the differential level by exploiting the Helmholtz decomposition theorem (also known as the Ladyzhenskaya theorem)
to separate the differential problem into the sequence of
a vector advection-diffusion-reaction problem and a Poisson equation, with a final correction step for the velocity.
As opposed to this ``split-then-discretize'' paradigm, B. Perot in~\cite{perot} advocates
a ``discretize-then-split'' strategy, by pointing out the formal analogy
between the Chorin-Temam scheme and an inexact LU factorization of the matrix obtained after
discretization of the Navier-Stokes equations.
This latter approach, often called ``algebraic factorization'',
is easier to implement, particularly when one has to treat general boundary conditions \cite{QuarteroniSaleriVenezianiCMAME}.
Let us introduce briefly a general framework.
Let $A$ be the matrix obtained by the finite element discretization of the (linearized) incompressible Navier-Stokes equations.
The discretized problem at each step reads
$$
A
\begin{bmatrix}
{\uu{u}} \\
\uu p
\end{bmatrix}=
\begin{bmatrix}
\uu f \\
0
\end{bmatrix}
\qquad {\rm with \ } \quad
A =
\begin{bmatrix}
C & D^T \\
D & 0
\end{bmatrix}
$$
where $C$ collects the contribution of the
linearized differential operator acting on the velocity field in the momentum equation, and
$D$ and $D^T$ are the discretizations of the divergence and the gradient operators, respectively.
Notice that
$$
A = LU=
\begin{bmatrix}
C & 0 \\
D & - D C^{-1} D^T
\end{bmatrix}
\begin{bmatrix}
I & C^{-1}D^T \\
0 & I
\end{bmatrix}.
$$
This ``exact'' LU factorization of the problem formally realizes a velocity-pressure splitting.
However, there is no computational advantage, because of the presence of the matrix $C^{-1}$, which is
not explicitly available, so any matrix-vector product with it requires the solution of a linear system.
The basic idea of algebraic splitting is to approximate this factorization. A first possibility is to replace $C^{-1}$ with the inverse of the velocity mass matrix scaled by $\Delta t$. This is the result of the first-term truncation
of the Neumann expansion that may be used to represent $C^{-1}$. The advantage of this approximation is that
the mass matrix can be further (and harmlessly) approximated by a diagonal matrix by the popular ``mass-lumping'' step.
In this way, $D C^{-1} D^T$ is approximated by an s.p.d.\ matrix --- sometimes called the ``discrete Laplacian'' for its spectral analogy with the Laplace operator ---
that can be tackled with many different convenient numerical strategies.
In addition, it is possible to see that the splitting error gathers in the
first block row, i.e.\ in the momentum equation. We finally note that replacing the original $C^{-1}$ with the scaled inverse of the velocity
mass matrix in the $U$ factor of the splitting implies that the boundary conditions cannot be enforced exactly.
The Yosida strategy, on the other hand, follows a similar pathway, except that it does not approximate
$C^{-1}$ in $U$. Properties similar to those of the Perot scheme can be proved; however, in this case
the splitting error affects only the mass conservation (with a moderate mass loss depending on the time step),
and the final step does enforce the exact boundary conditions for the velocity.
Subsequently, different splittings have been proposed in \cite{GauthierSaleriVeneziani,SaleriVeneziani,GervasioSaleriVeneziani,Gervasio2006,Gervasio2008,Veneziani2009}
to reduce the impact of the splitting error by successive corrections of the pressure field.
In particular, in \cite{Veneziani2003,GauthierSaleriVeneziani} the role of inexact factorizations as preconditioners
for the original problem was investigated.
LifeV incorporates these latest developments. In particular, the Yosida
scheme has been preferred, since the error on the mass conservation has
less impact on the interface with the structure in fluid-structure
interaction problems. It is worth noting that a special block operator structure reflecting
the algebraic factorization concept has been implemented in
\cite{VillaPhD}.
A robust validation of these methods has been
successfully performed, not only against classical analytical test
cases but also within the framework of one {\it Critical Path
Initiative} promoted by the US Food and Drug Administration (FDA)
\cite{QuainiEtAl} (https://fdacfd.nci.nih.gov). Also, extensions of the inexact algebraic factorization
approach to the steady problem have recently been proposed in \cite{Viguerie}.
\paragraph{Time adaptivity}
An interesting follow-up of the pressure-corrected Yosida algebraic factorizations is presented in
\cite{VenezianiVilla2012}. This work stems from the fact that the sequence of pressure corrections
not only provides a reduction of the overall splitting error, but also provides an error estimator in time
for the pressure field, with no additional computational cost. Based on this idea,
a sophisticated time-adaptive solver has been introduced~\cite{VillaPhD,VenezianiVilla2012},
with the aim of cutting the computational costs by a smart and automatic selection of the time step.
The latter must be a trade-off among the desired accuracy, the computational efficiency and the
numerical stability constraints introduced by the splitting itself.
The final result is a solver that automatically detects the optimal time step, possibly performing
an appropriate number of pressure corrections to attain stability.
This approach is particularly advantageous for computational hemodynamics problems featuring
a periodic alternation of fast and slow transients (the so-called systolic and diastolic phases of the
circulation). As a matter of fact, for the same level of accuracy, the total number of time steps
required within a heart beat is reduced to one third of the number required by the
non-adaptive scheme.
In fact, in \cite{VenezianiVilla2012} a smart combination of algebraic factorizations
as solvers and preconditioners of the Navier-Stokes equations,
based on the {\it a posteriori} error estimation provided by the pressure corrections,
is proposed as a potentially optimal trade-off between numerical stability and efficiency.
\section{Beyond the proof of concept}
\label{applications}
As explained in Sect. \ref{sec:introduction}, LifeV is intended to be a tool for working aggressively on real problems,
with the general goal of bringing the most advanced methods
for computational prediction into engineering practice (in a broad sense).
In particular, one of the most important applications --- yet not an exclusive one --- is the simulation of cardiovascular problems.
Examples of the use of LifeV for real clinical problems are: simulations of Left Ventricular Assist Devices (LVAD)~\cite{bonnemain13:_numer,bonnemain12:VAD},
the study of the physiological~\cite{CrosettoReymondDeparisKontaxakisStergiopulosQuarteroni2011} and abnormal fluid-dynamics in ascending aorta
in presence of a bicuspid aortic valve~\cite{bonomiv1,vergarav1,faggianoa1,Viscardi2010},
of Thoracic EndoVascular Repair (TEVAR) \cite{pavia1,pavia2}, of the Total Cavopulmonary Connection
\cite{y1,restrepo2015surgical,tang2015respiratory}, of blood flow in stented coronary arteries \cite{jacc} and in cerebral aneurysms \cite{passerinietal}.
The current trend in this field is the setting up of {\it in silico} or ``Computer Aided'' Clinical Trials~\cite{AV4Gogas},
i.e., systematic investigations on large pools of patients to retrieve data of clinical relevance by integrating traditional measures and numerical simulations\footnote{The ABSORB Project granted by Abbott Inc. at Emory University and the iCardioCloud Project granted by Fondazione Cariplo to the University of Pavia have been developed in this perspective using LifeV.}.
In this scenario, numerical simulations are part of a complex, integrated pipeline involving: (i) Image/Data retrieval; (ii) Image Processing and Reconstruction (extracting the patient-specific morphology); (iii) Mesh generation and preprocessing (encoding of the boundary conditions);
(iv) Numerical simulation (with LifeV); (v) Postprocessing and synthesis. As in the following sections we mainly describe applications relevant to step (iv), it is important to stress the integrated framework in which LifeV developers work, toward a systematic automation of the process, required by the large volume of patients to be processed.
In this respect,
we may consider LifeV as a vehicle of {\it methodological transfer} or {\it translational mathematics}, as leading edge methods
are made available to the engineering community with a short time-to-market. Here
we present a series of distinctive applications where we feel that using LifeV actually made it possible
to bring new methods rapidly to real problems, beyond the proof-of-concept stage.
\subsection{FSI}
In the Fluid-Structure Interaction (FSI) context, LifeV has offered an
important test bench for novel algorithms.
For example, it has been possible to test Robin-based interface conditions
for applications in hemodynamics~\cite{nobilev1,nobilep1,nobilep2},
or compare segregated algorithms, the monolithic formulation, and the Steklov-Poincar\'e formulation~\cite{DeparisDiscacciatiFouresteyQuarteroni2006}.
An efficient solution method for FSI problem considers the
physical unknowns and the fluid geometry problem for the Arbitrary
Lagrangian-Eulerian (ALE) mapping as a single variable.
This monolithic description calls for ad-hoc parallel preconditioners.
Most often they rely on a Dirichlet-Neumann inexact factorization
between fluid and structure and then specific preconditioners
for the subproblems~\cite{CrosettoDeparisFouresteyQuarteroni2011}.
Recently a new preconditioner FaCSI~\cite{DeparisFortiGrandperrinQuarteroniFaCSI2015}
has been developed and tested
with LifeV, showing effective scalability up to four thousand processors.
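The Dirichlet-Neumann inexact factorization underlying such preconditioners can be illustrated on a toy two-by-two block system. Here the blocks are scalars and the solves are exact, whereas in practice $F$ (fluid) and $S$ (structure) are large sparse blocks solved only approximately, which is what makes the factorization inexact; the struct `BlockPrec` is purely illustrative.

```cpp
#include <cassert>

// One application z = P^{-1} r of a block lower-triangular
// (Dirichlet-Neumann style) preconditioner for the monolithic system
//   [ F  C ] [x_f]   [r_f]
//   [ D  S ] [x_s] = [r_s]:
// solve the fluid block first, then the structure block with the
// fluid-to-structure coupling moved to the right-hand side.
struct BlockPrec {
    double F, C, D, S; // block entries (scalars in this sketch)
    void apply(double rf, double rs, double& zf, double& zs) const {
        zf = rf / F;            // (approximate) fluid solve
        zs = (rs - D * zf) / S; // structure solve with updated coupling
    }
};
```

Dropping the upper coupling block $C$ from the preconditioner is exactly the inexactness exploited by the monolithic schemes cited above.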
A next step in FSI has been the use of non-conforming
meshes between fluid and structure using rescaled-localized radial basis
functions
\cite{DeparisFortiQuarteroniRLRBF2014}.
Different material constitutive models for cerebral arterial tissue
--- in particular hyperelastic isotropic and hyperelastic anisotropic laws ---
have been studied in~\cite{tricerri15:_fluid}.
A benchmark for the simulation of the flow inside carotids and
the computation of shear stresses \cite{FSIbenchmark2015} has been tested with
LifeV coupled with the FEAP library~\cite{feap},
which includes sophisticated anisotropic material models.
\subsection{Geometric Multiscale}
The cardiovascular system features coupled local and global dynamics.
Modeling it in its entirety by three-dimensional geometries including FSI
is either unfeasible or computationally very expensive and,
most of the time, unnecessary. A more efficient approach entails
the coupling of multiple dimensions, like lumped zero-dimensional
models, hyperbolic one-dimensional ones, and three-dimensional FSI,
leading to the so-called {\it geometric multiscale} modeling advocated in
\cite{VenezianiPhD,formaggia2001coupling}.
LifeV implements a full set of tools to integrate 0D, 1D and 3D-FSI models of the cardiovascular
system~\cite{malossi11:algorithms,malossi13:_implic,blanco:TotStress2013,passerini20093d},
with also a multirate time stepping scheme to improve the computational efficiency~\cite{malossi:two_time_step}.
It has been used to simulate integrated models of the cardiovascular system~\cite{bonnemain13:_numer}.
A critical aspect of this approach is the management of the dimensional mismatch
between the different models, as the accurate 3D problems
require more information at the interface than that provided by the
surrogate models. This required an accurate analysis of ``defective boundary
problems''~\cite{formaggia02,veneziani2005flow,venezianiv2,formaggia2008new,formaggia2010flow}.
A recent review on these topics can be found in \cite{quarteroni2016geometric}.
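As a flavour of the 0D models involved in such couplings, the simplest terminal is a two-element (RC) Windkessel, $C\,dP/dt = Q - P/R$, where $Q$ is the flow leaving the 3D model through the coupling interface. A backward-Euler step may be sketched as follows (a generic illustration, not LifeV's multiscale API; `windkessel_step` is an assumed name).

```cpp
#include <cassert>
#include <cmath>

// Backward-Euler step for the two-element (RC) Windkessel C dP/dt = Q - P/R:
// the update solves (C/dt + 1/R) P_new = (C/dt) P + Q for the new pressure.
double windkessel_step(double P, double Q, double R, double C, double dt) {
    return (C / dt * P + Q) / (C / dt + 1.0 / R);
}
```

Note that $P = RQ$ is a fixed point of the update, i.e., the discrete model preserves the steady state of the continuous one.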
\subsection{Heart dynamics}
Electrocardiology is one of the problems (beyond CFD but still related to cardiovascular mathematics)
where LifeV has accumulated extensive experience.
An effective preconditioner for the bidomain equations has been proposed and demonstrated in \cite{giorda09}.
The basic idea is to use the simplified extended monodomain model to precondition the solution
of the more realistic bidomain equations. Successively, the idea has been adapted to reduce the computational costs
by mixing Monodomain and Bidomain equations in an adaptive procedure. A suitable
{\it a posteriori} estimator is used to decide when the Monodomain equations are sufficient or
the Bidomain solution is needed \cite{MirabellaNobileVeneziani,GGPeregoVeneziani1}. Ionic models
solved in LifeV range from the classical Rogers-McCulloch, Fenton-Karma, and Luo-Rudy I and II models~\cite{clayton2011models} to
more involved ones \cite{dfgqr_mmas15}. Specific high-order methods (extending the classical Rush-Larsen scheme)
have been proposed and implemented in the library \cite{PeregoVeneziani}.
have been proposed and implemented in the library \cite{PeregoVeneziani}.
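The classical Rush-Larsen idea, which the high-order methods above extend, integrates each gating ODE of an ionic model exactly over a step with the transmembrane potential frozen. A minimal sketch (illustrative only, not the library's ionic-model interface):

```cpp
#include <cassert>
#include <cmath>

// One Rush-Larsen update for a gating variable w of an ionic model:
// with the potential V frozen over the step, dw/dt = (w_inf(V) - w)/tau(V)
// is linear and is integrated exactly, which is unconditionally stable.
double rush_larsen(double w, double w_inf, double tau, double dt) {
    return w_inf + (w - w_inf) * std::exp(-dt / tau);
}
```

For $dt \to \infty$ the update relaxes $w$ to its equilibrium value $w_\infty$, regardless of the step size.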
In addition, one research line has been oriented to the coupling of electrocardiology with cardiac mechanics.
Hyperelasticity problems based on non-trivial mixed and primal formulations with applications in
cardiac biomechanics have been studied in \cite{rrpq_pamm11,rrpq_ijnmbe12}. LifeV-based simulations of fully coupled electromechanics (using modules for the abstract coupling of solvers)
can be found in \cite{nqr_ijnmbe12,raprq_springer13,rlrsq_ejmsol14,abqr_m3as15} for whole organ models, and in \cite{rgrclsq_mmb14,grrlcf_springer15,ruiz_jcp15} for single-cell problems. The coupling with ventricular fluid dynamics and arterial tree FSI description are possible through a multiscale framework \cite{quarte_rev15}.
The coupling with the Purkinje network, a network of myocardium fibers with high electrical conductivity, has been implemented in~\cite{Vergara2016218}.
Another research line in this field successfully carried out with LifeV is the variational estimation
of cardiac conductivities (the tensor coefficients needed by the Bidomain equations)
from potential measurements \cite{YangVeneziani}.
\subsection{Inverse problems and data assimilation}
One of the most recent challenging topics in computational hemodynamics is the
quantification of uncertainty and the improvement of the reliability in patient-specific settings.
As a matter of fact, while the inclusion of patient specific geometries
is now a well established procedure (as we recalled above), many other
aspects of the patient-specific modeling still deserve attention. Parameters like viscosity,
vessel wall rigidity, or cardiac conductivity are not routinely measured (or measurable)
in the specific patient, yet they generally have a major impact on the numerical results.
These concepts have been summarized in \cite{veneziani2013inverse}.
Variational procedures have been implemented in LifeV, where the assimilation
with available data or the parameter estimation are obtained by minimizing a mismatch functional.
In \cite{deliaperegoveneziani} this approach was introduced to incorporate into the numerical simulation
of the incompressible Navier-Stokes equations
sparse data available in the region of interest;
in \cite{peregov1} the procedure was introduced for estimating the vascular rigidity
by solving an inverse FSI problem, while a similar procedure in \cite{YangVeneziani,YangPOD}
aims at the estimate of the cardiac conductivity.
\subsection{Model reduction}
One of the major challenges of modern scientific computing is the controlled reduction of the computational costs.
In fact, practical use of HPC demands extreme efficiency --- even real-time solutions. Improvement of computing architectures and
cloud solutions that make access to HPC facilities relatively easy is only a partial answer to this need~\cite{guzzetti2016platform}.
From the modeling and methodological side,
we also need customized models that can strike the trade-off between efficiency and accuracy.
These may be found by a smart combination of available High Fidelity solutions, according to the offline/online paradigm;
or by the inclusion of specific features of a problem that may bring a significant advantage in comparison with general versatile
but expensive methods. In LifeV both strategies have been considered.
For instance, in \cite{colciago14:RFSI}, a model for blood flow dynamics in a fixed domain,
obtained by a transpiration condition and a membrane model for the structure,
has been compared to a full three-dimensional FSI simulation. The former model
is posed only in the lumen with the Navier--Stokes equations, while the
structure is taken into account
by a Laplace--Beltrami operator on the surface representing the
fluid-structure interface. The reduced model requires roughly one third
of the computational time and, in situations where the displacement of the
artery is small, the dynamics, including e.g. wall shear stresses,
are very close to, if not indistinguishable from, those of a full three-dimensional simulation.
In \cite{BertagnaVeneziani} a solution reduction procedure based on the Proper Orthogonal Decomposition
(POD) was used to accelerate the variational estimate of the Young modulus of vascular tissues
by solving an Inverse Fluid-Structure Interaction problem. POD wisely combines
available offline High Fidelity solutions to obtain a rapid (online) parameter estimation.
More challenging is the use of a similar approach for cardiac conductivities~\cite{yang2015parameter,YangPOD}, requiring nonstandard procedures.
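The offline POD compression itself can be sketched with the method of snapshots (a generic illustration, not LifeV's reduction modules): the leading mode is the lift $S v$ of the dominant eigenvector $v$ of the small snapshot correlation matrix $C = S^{T} S$, computed here by power iteration.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>; // snapshots stored as columns: S[j] = j-th snapshot

// Dominant POD mode by the method of snapshots: power iteration on
// C = S^T S (applied matrix-free as S^T (S v)), then lift and normalize.
Vec dominant_pod_mode(const Mat& S, int iters = 100) {
    const int m = (int)S.size();     // number of snapshots
    const int n = (int)S[0].size();  // spatial dimension
    Vec v(m, 1.0);
    for (int it = 0; it < iters; ++it) {
        Vec Sv(n, 0.0);                  // Sv = S v
        for (int j = 0; j < m; ++j)
            for (int i = 0; i < n; ++i) Sv[i] += S[j][i] * v[j];
        for (int j = 0; j < m; ++j) {    // v <- S^T Sv
            double wj = 0.0;
            for (int i = 0; i < n; ++i) wj += S[j][i] * Sv[i];
            v[j] = wj;
        }
        double nv = 0.0;                 // normalize the iterate
        for (double x : v) nv += x * x;
        nv = std::sqrt(nv);
        for (double& x : v) x /= nv;
    }
    Vec mode(n, 0.0);                    // lift back: mode = S v, normalized
    for (int j = 0; j < m; ++j)
        for (int i = 0; i < n; ++i) mode[i] += S[j][i] * v[j];
    double nm = 0.0;
    for (double x : mode) nm += x * x;
    nm = std::sqrt(nm);
    for (double& x : mode) x /= nm;
    return mode;
}
```

Working with the $m \times m$ correlation matrix instead of the full snapshot matrix is what makes the offline stage affordable when $m \ll n$.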
A directional model reduction procedure called HiMod ({\it Hierarchical Model Reduction} - see
\cite{ernperottoveneziani,alettiperottoveneziani,perotto2014survey,perotto2014coupled,blanco2015hybrid,mansilla2016transversally}) to accelerate the computation of
advection-diffusion-reaction problems as well as incompressible fluids in pipe-like domains (or, more generally, domains with a clear dominant direction,
as in arteries)
has been implemented in LifeV \cite{alettiperottoveneziani,guzzetti}.
These modules will be released in the library soon.
\subsection{Darcy equations and porous media}
Single and multi-phase flow simulators in fractured porous media are of
paramount importance in many fields like oil exploration and exploitation, CO$_2$
sequestration, nuclear waste disposal and geothermal reservoirs. A single-phase
flow solver is implemented with the standard finite element spaces
Raviart-Thomas for the Darcy velocity and piecewise constant for the pressure. A
global pressure-total velocity formulation for the two-phase flow is developed
and presented in \cite{Fumagalli2011a,Fumagalli2012} where the equations are
solved using an IMPES-like technique. To handle fractures and faults in an
efficient and accurate way the extended finite element method is adopted to
locally enrich the cut elements, see \cite{Iori2011,DelPra2015a}, and a
one-codimensional problem for the flow is considered for these objects, see
\cite{Ferroni2014}.
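As an illustration of the IMPES idea (implicit pressure, explicit saturation), a one-dimensional explicit saturation update with an upwinded fractional-flow flux may be sketched as follows; this is generic code, not the LifeV solver, and quadratic relative permeabilities are an assumption made purely for the example.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Fractional flow of the wetting phase, assuming quadratic relative
// permeabilities and equal viscosities: f(s) = s^2 / (s^2 + (1-s)^2).
double frac_flow(double s) { return s * s / (s * s + (1 - s) * (1 - s)); }

// One explicit (IMPES-style) saturation update in 1D: the total velocity u
// is frozen from the pressure solve; fluxes are upwinded assuming u > 0.
void impes_saturation_step(std::vector<double>& s, double u,
                           double dt, double dx, double s_inflow) {
    std::vector<double> flux(s.size() + 1);
    flux[0] = u * frac_flow(s_inflow);      // inflow boundary flux
    for (size_t i = 0; i < s.size(); ++i)
        flux[i + 1] = u * frac_flow(s[i]);  // upwind value from cell i
    for (size_t i = 0; i < s.size(); ++i)
        s[i] -= dt / dx * (flux[i + 1] - flux[i]);
}
```

The scheme preserves constant states exactly, a basic consistency check for any finite-volume transport update.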
\subsection{Ice sheets}
Another application that has used the LifeV library (at Sandia National Laboratories, Albuquerque, NM) is the simulation of ice sheet flow. Ice behaves like a highly viscous shear-thinning incompressible fluid and can be modeled by nonlinear Stokes equations. In order to reduce computational costs, several simplifications have been made to the Stokes model, exploiting the fact that ice sheets are very shallow. LifeV has been used to implement some of these models, including the Blatter-Pattyn (also known as First Order) approximation and the L1L2 approximation. The former model is a three-dimensional nonlinear elliptic PDE and the latter a depth-integrated integro-differential equation (see \cite{Perego2012}). In all models, the nonlinearity has been solved with Newton's method, coupling LifeV with the Trilinos NOX package.
The LifeV based ice sheet implementation has been mentioned in \cite{Evans2012} as an example of modern solver design for the solution of earth system models. Further,
in \cite{Tezaur2015} the authors verified the results of another ice sheet code with those obtained using LifeV.
The LifeV ice sheet module has been coupled with the climate library MPAS and used in inter-comparison studies to assess sensitivities to different boundary conditions and forcing terms, see \cite{Edwards2014} and \cite{Shannon2013}. The latter (\cite{Shannon2013}) has been considered in the IPCC (Intergovernmental Panel on Climate Change) report of 2014.
In \cite{Perego2014} the authors perform a large-scale PDE-constrained optimization to estimate the basal friction field (70K parameters) of the Greenland ice sheet. For this purpose LifeV has been coupled with the Trilinos package ROL to perform a reduced-gradient optimization using BFGS. The assembly of state and adjoint equations and the computation of the objective functional and its gradient have been performed in LifeV.
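The outer nonlinear iteration used throughout these models is a standard Newton method (delegated to Trilinos/NOX in the actual code); a scalar sketch of the iteration, with user-supplied residual and Jacobian, reads:

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// Plain Newton iteration x <- x - J(x)^{-1} R(x) on a scalar problem,
// stopping when the residual falls below tol or max_it is reached.
// In the PDE setting R is the discrete residual and J its Jacobian,
// and the linear solve replaces the scalar division.
double newton(std::function<double(double)> res,
              std::function<double(double)> jac,
              double x, double tol = 1e-12, int max_it = 50) {
    for (int it = 0; it < max_it && std::fabs(res(x)) > tol; ++it)
        x -= res(x) / jac(x);
    return x;
}
```

For instance, solving $x^2 - 2 = 0$ from $x_0 = 1$ converges quadratically to $\sqrt{2}$.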
\section{Perspectives}
LifeV has proved to be a versatile library for the study of numerical
techniques for large scale and multiphysics computations with finite
elements. The code is in continuous development. The recent
introduction of expression templates has increased the ease of
use at the high level. Unfortunately, due to the way the code
has been developed, mainly by PhD students, not all the applications
have been ported yet to this framework. Work is ongoing in that
direction. The library has been coded using the C++98 standard,
yet porting is ongoing to exploit new features of the C++11 and C++14 standards, which can make the code more readable, user friendly
and efficient.
As for the parallelization issues, LifeV relies strongly on the tools provided by the Trilinos libraries. We are following their development closely and
we will be ready to integrate all the new features the library will offer, in particular with respect to hybrid type parallelism.
A target use of the library the team is working on is running on cloud facilities \cite{slaw,guzzetti2016platform}
and GPU architectures.
\subsection*{Acknowledgments}
Besides the funding agencies cited in the introduction of this paper,
we have to acknowledge all the developers of LifeV, who have contributed
with new numerical methods, provided help to beginners, and struggled for
the definition of a common path. Porting and fixing bugs is also a remarkable task to
which they have constantly contributed.
It is difficult to name people, and any list would be incomplete; a good representation
of the contributors is given in the text of the paper and, of course, on the developers' website.
However, the three senior authors of this paper wish to thank their collaborators
who over the years gave passion and dedication to the development.
SD and LF from Lausanne and Milan groups wish to thank
A. Quarteroni, who has contributed to LifeV in many ways, by supporting its foundation,
by dedicating human resources to the common project, outreaching, and mentoring.
Also worthy of specific acknowledgment is G. Fourestey, who invested a lot of effort in parallelizing the code by introducing Trilinos
and many of the concepts described in the paper.
SD wishes to thank all his collaborators at CMCS in Lausanne, who have been able to work together in a very constructive environment. Particularly
P. Crosetto for introducing and testing the parallel framework for monolithic FSI and associated preconditioners; C. Malossi for the design and implementation of the multiscale framework; S. Quinodoz for the design and implementation of the expression template module which is now at the core of the finite element formulation in LifeV; A. Gerbi, S. Rossi, R. Ruiz Baier, and P. Tricerri for their work on the mechanics of tissues, including active electromechanics; G. Grandperrin and R. Popescu for their studies on parallel algorithms
for the approximation of PDEs, in particular the former and D. Forti for their
essential contribution to the Navier--Stokes equations and FSI;
the promising and successful coupling of LifeV with
a reduced basis solver for 3D problems has been carried out by C. Colciago and N. Dal Santo.
LF wishes to thank especially Antonio Cervone, for his work on the phase field formulation for stratified fluids and the Stokes solver, Alessio Fumagalli for the Darcy solver, Guido Iori and Marco del Pra for the XFEM implementation for Darcy's flow in fractured media and the porting to C++11, Davide Baroli for his many contributions and the support of several Master students struggling with the code, Stefano Zonca for the DG-XFEM implementation of fluid-structure interaction problems, and Michele Lancellotti for his work on Large Eddy Simulation. And, finally, thanks to Christian Vergara, who has exploited LifeV for several real-life applications of simulations of the cardiovascular system, as well as giving an important contribution to the development of numerical techniques for defective boundary conditions and FSI.
AV wishes to thank all the Emory collaborators for their passion, dedication and courage
joining him in the new challenging and enthusiastic adventure in Atlanta; specifically
(i) T. Passerini for his constant and generous support, not only in the development but also in the dissemination of LifeV among Emory students;
(ii) M. Perego for his rigorous contribution to expanding the use of
LifeV at Sandia in the Ice-Sheet activity; (iii) U. Villa
for the seminal and fundamental work on the Navier-Stokes modules that provided the basis of the current CFD in LifeV at Emory;
(iv) L. Mirabella for bridging LifeV with the community of computational electrocardiologists and biomedical engineers at GA Tech.
The early contribution of Daniele Di Pietro (University of Montpellier) for Expression Templates is gratefully acknowledged too.
Finally, all the authors together join to remember Fausto Saleri.
Fausto Saleri introduced the name ``LiFE'' many years ago, in a 2D, Fortran 77 serial code
for advection diffusion problems. Over the years, we made changes and additions,
yet we do hope to have followed his enthusiasm, passion and dedication to scientific computing.
\end{document} |
\begin{document}
\renewcommand{\arabic{section}.\arabic{equation}}{\arabic{section}.\arabic{equation}}
\begin{center}
EXISTENCE FOR WEAKLY COERCIVE NONLINEAR DIFFUSION EQUATIONS VIA A
VARIATIONAL PRINCIPLE
GABRIELA MARINOSCHI\footnote{
Institute of Mathematical Statistics and Applied Mathematics of the Romanian
Academy, Calea 13 Septembrie 13, \ Bucharest, Romania. \noindent
[email protected], [email protected]
\par
{}}
\end{center}
\noindent {\small Abstract. We are concerned with the study of the
well-posedness of a nonlinear diffusion equation with a monotonically
increasing multivalued time-dependent nonlinearity derived from a convex
continuous potential having a superlinear growth to infinity. The results in
this paper state that the solution of the nonlinear equation can be
retrieved as the null minimizer of an appropriate minimization problem for a
convex functional involving the potential and its conjugate. This approach,
inspired by the Brezis-Ekeland variational principle, provides new existence
results under minimal growth and coercivity conditions.}
\noindent Mathematics Subject Classification 2010. 35K55, 47J30, 58EXX, 76SXX
\noindent \textit{Keywords and phrases.} Variational methods, Brezis-Ekeland
principle, convex optimization problems, nonlinear diffusion equations,
porous media equations
\section{ Introduction}
\setcounter{equation}{0}
We are concerned with the study of the well-posedness of a nonlinear
diffusion equation with a monotonically increasing discontinuous
nonlinearity derived from a convex continuous potential, by using a dual
formulation of this equation as a minimization of an appropriate convex
functional. The idea of identifying the solutions of evolution equations as
the minima of certain functionals is due to Brezis and Ekeland and
originates in their papers published in 1976 (see \cite{brezis-ekeland-76}
and \cite{brezis-ekeland-76-2}). During the past decades this approach has
enjoyed much attention, as witnessed by the literature and by more recently
published monographs and papers (see e.g., \cite{auchmuty-88}, \cite
{auchmuty-93}, \cite{ghoussoub}, \cite{ghoussoub-tzou-04}, \cite
{ghoussoub-08}, \cite{visintin-08}, \cite{stefanelli-08}, \cite
{stefanelli-09}, \cite{barbu-2011-jmma}, \cite{barbu-2012-jota}, \cite
{gm-jota-var-1}). In \cite{gm-jota-var-1} two cases were considered, the
first for a continuous potential with a polynomial growth and the second for
a singular potential. The latter has provided the existence of the solution
to a variational inequality which models a free boundary flow.
The challenging part in this duality principle is the proof of the
well-posedness of the evolution equation as a consequence of the existence
of a null minimizer in the associated minimization problem (that is a
solution which minimizes the functional to zero). A general recipe for
proving this implication does not exist; it rather depends on a good
choice of the functional and on the particularities of the potential of the
nonlinearity arising in the diffusion term. This way of approaching the
well-posedness of nonlinear diffusion equations by a dual formulation as a
minimization problem is extremely useful especially when a direct approach
by using the semigroup theory (see e.g., \cite{vb-springer-2010}, \cite
{Crandall-Pazy-71}) or other classical variational results (see e.g., \cite
{lions}) cannot be followed due either to the low regularity of the data or
to the weak coercivity of the potential.
In this work, the nonlinearity in the diffusion term is more general and it
has a time and space dependent potential assumed to have a weak coercivity
and no particular regularity with respect to time and space. The paper is
organized in two parts. At the beginning we investigate the case with the
potential and its conjugate depending on time and space. We prove that the
minimization problem has at least one solution, unique if the functional is
strictly convex. This seems to be a good candidate for the solution to the
nonlinear equation, reason for which it can be viewed as a generalized or
variational solution. If the admissible set is restricted by imposing an $
L^{\infty }$-constraint on the state, then the generalized solution which
minimizes the functional to zero turns out to be precisely the weak solution to
the nonlinear equation.
The second part concerns the case in which the potential does not depend on
space. The main result establishes that the null minimizer in the
minimization problem is the unique solution to the nonlinear equation,
provided that the potential exhibits a symmetry at large values of the
argument.
We would like to mention the benefit of such a duality approach, which
allows an elegant proof of the existence for a time dependent diffusion
equation, under general assumptions, by making possible its replacement by
the problem of minimizing a convex functional with a linear state equation.
We also stress that the existence results obtained in this way are not
covered by, and do not follow from, the general existence theory of porous media
equations, nor from that of time-dependent nonlinear infinite-dimensional
Cauchy problems.
\section{Problem presentation}
We deal with the problem
\begin{eqnarray}
\frac{\partial y}{\partial t}-\Delta \beta (t,x,y) &\ni &f\mbox{ \ \ \ \ \ \
\ \ \ \ \ \ \ \ in }Q:=(0,T)\times \Omega , \notag \\
-\frac{\partial \beta (t,x,y)}{\partial \nu } &=&\alpha \beta (t,x,y)\mbox{
\ \ on }\Sigma :=(0,T)\times \Gamma , \label{si-1} \\
y(0,x) &=&y_{0}\mbox{ \ \ \ \ \ \ \ \ \ \ \ \ \ in }\Omega , \notag
\end{eqnarray}
where $\Omega $ is an open bounded subset of $\mathbb{R}^{N},$ $N\leq 3,$
with the boundary $\Gamma $ sufficiently smooth, $T$ is finite and $\beta $
has a potential $j$. The notation $\frac{\partial }{\partial \nu }$
represents the normal derivative and $\alpha $ is positive.
In this paper we assume that $j:Q\times \mathbb{R}\rightarrow (-\infty
,\infty ]$ and has the following properties:
\noindent $(h_{1})$ $\ (t,x)\rightarrow j(t,x,r)$ is measurable on $Q,$ for
all $r\in \mathbb{R},$
\noindent $(h_{2})$ $\ j(t,x,\cdot )$ is a proper, convex, continuous
function, a.e. $(t,x)\in Q,$
\begin{equation}
\partial j(t,x,r)=\beta (t,x,r)\mbox{ for all }r\in \mathbb{R},\mbox{ a.e. }
(t,x)\in Q, \label{si-beta-j}
\end{equation}
\begin{eqnarray}
\frac{j(t,x,r)}{\left\vert r\right\vert } &\rightarrow &\infty ,\mbox{ \ as\
}\left\vert r\right\vert \rightarrow \infty ,\mbox{ uniformly for }(t,x)\in
Q, \label{si-9-2} \\
\frac{j^{\ast }(t,x,\omega )}{\left\vert \omega \right\vert } &\rightarrow
&\infty ,\mbox{ \ as\ }\left\vert \omega \right\vert \rightarrow \infty ,
\mbox{ uniformly for }(t,x)\in Q, \label{si-9-2-00}
\end{eqnarray}
\begin{equation}
j(\cdot ,\cdot ,0)\in L^{\infty }(Q),\mbox{ }j^{\ast }(\cdot ,\cdot ,0)\in
L^{\infty }(Q). \label{si-9-2-0}
\end{equation}
We define the conjugate $j^{\ast }:Q\times \mathbb{R}\rightarrow (-\infty
,\infty ]$ by
\begin{equation}
j^{\ast }(t,x,\omega )=\sup_{r\in \mathbb{R}}(\omega r-j(t,x,r)),\mbox{ a.e.
}(t,x)\in Q. \label{si-4-0}
\end{equation}
Then, the following two relations (Legendre-Fenchel) take place (see \cite
{vb-springer-2010}, p. 6, see also \cite{fenchel-53}):
\begin{equation}
j(t,x,r)+j^{\ast }(t,x,\omega )\geq r\omega \mbox{ for all }r,\omega \in
\mathbb{R},\mbox{ a.e. }(t,x)\in Q, \label{si-4-1}
\end{equation}
\begin{equation}
j(t,x,r)+j^{\ast }(t,x,\omega )=r\omega ,\mbox{ iff }\omega \in \partial
j(t,x,r),\mbox{ for all }r,\omega \in \mathbb{R},\mbox{ a.e. }(t,x)\in Q.
\label{si-4-2}
\end{equation}
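For illustration, a standard example (independent of $(t,x)$, with $p>1$) satisfying (\ref{si-4-1})-(\ref{si-4-2}), as well as the superlinearity conditions (\ref{si-9-2})-(\ref{si-9-2-00}), is the power potential
\begin{equation*}
j(r)=\frac{\left\vert r\right\vert ^{p}}{p},\mbox{ \ \ }j^{\ast }(\omega
)=\frac{\left\vert \omega \right\vert ^{q}}{q},\mbox{ \ \ }\frac{1}{p}+\frac{
1}{q}=1,
\end{equation*}
for which equality in (\ref{si-4-1}) holds exactly when $\omega =\left\vert
r\right\vert ^{p-2}r\in \partial j(r)$.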
By (\ref{si-beta-j}) it follows that $\beta $ is a maximal monotone graph
(possibly multivalued) on $\mathbb{R}$, a.e. $(t,x)\in Q.$ Relations (\ref{si-9-2})-(\ref{si-9-2-00})
are equivalent to the properties that $(\beta )^{-1}(t,x,\cdot )$ and
$\beta (t,x,\cdot ),$ respectively, are bounded on bounded subsets,
uniformly a.e. $(t,x)\in Q$. This means that for any $M>0$ there exist
$Y_{M}$ and $W_{M},$ independent of $t$ and $x,$ such that
\begin{equation}
\sup \left\{ \left\vert r\right\vert ;\mbox{ }r\in \beta ^{-1}(t,x,\omega ),
\mbox{ }\left\vert \omega \right\vert \leq M\right\} \leq W_{M},
\label{si-9-6}
\end{equation}
\begin{equation}
\sup \left\{ \left\vert \omega \right\vert ;\mbox{ }\omega \in \beta (t,x,r),
\mbox{ }\left\vert r\right\vert \leq M\right\} \leq Y_{M}. \label{si-9-7}
\end{equation}
In fact, when $j$ does not depend on $t$ and $x$, relations (\ref{si-9-2})-(\ref{si-9-2-00}) express that
\begin{equation*}
D(\partial j(r))=R(\partial j(r))=\mathbb{R}\mathbf{,}\mbox{ }D(\partial
j^{\ast }(r))=R(\partial j^{\ast }(r))=\mathbb{R}
\end{equation*}
(see \cite{vb-springer-2010}, p. 9). We also recall that $\partial j^{\ast
}(t,x,\cdot )=(\partial j(t,x,\cdot ))^{-1}$ a.e. $(t,x)\in Q.$
We call \textit{weakly coercive }a nonlinear diffusion term with $j$ having
the properties (\ref{si-9-2})-(\ref{si-9-2-00}), and implicitly the
corresponding equation (\ref{si-1}).
We also recall that a proper, convex l.s.c. function is bounded below by an
affine function, hence
\begin{equation}
j(t,x,r)\geq k_{1}(t,x)r+k_{2}(t,x),\mbox{ }j^{\ast }(t,x,\omega )\geq
k_{3}(t,x)\omega +k_{4}(t,x) \label{si-9-7-01}
\end{equation}
for any $r,\omega \in \mathbb{R}$ and we assume that
\begin{equation}
k_{i}\in L^{\infty }(Q),\mbox{ }i=1,\dots ,4. \label{si-9-7-02}
\end{equation}
In fact (\ref{si-9-7-01}) follows if besides (\ref{si-9-2-0}) we assume that
there exist $\xi ,\eta \in L^{\infty }(Q)$ such that $\xi \in \partial
j(t,x,0),$ $\eta \in \partial j^{\ast }(t,x,0)$ a.e. $(t,x)\in Q.$
In this work we show that problem (\ref{si-1}) reduces to a certain
minimization problem $(P)$ for a convex lower semicontinuous functional
involving the functions $j$ and $j^{\ast }$. In Section 3, the existence of
at least one solution to $(P)$ is proved in Theorem 3.2, this being actually
the generalized solution associated to (\ref{si-1}). The uniqueness is
deduced directly from $(P)$ under the assumption of strict convexity
of $j.$ Moreover, when a state constraint $y\in \lbrack y_{m},y_{M}]$ is
included in the admissible set we show that the null minimization solution
is the unique weak solution to (\ref{si-1}) in Theorem 3.3.
In the case when $j$ depends on $t$ but not on $x$ and has the same
behavior at $\pm r$ for $\left\vert r\right\vert $ large, i.e., it satisfies the
relation
\begin{equation}
j(t,-r)\leq \gamma _{1}j(t,r)+\gamma _{2},\mbox{ for any }r\in \mathbb{R},
\mbox{ a.e. }t\in (0,T), \label{si-9-5}
\end{equation}
with $\gamma _{1}$ and $\gamma _{2}$ constants, we prove in Theorem 4.3 in
Section 4 that the solution to the minimization problem is the unique weak
solution to (\ref{si-1}) without assuming the previous additional state
constraint. This is based on Lemma 4.1 which plays an essential role in the
proof of this result. We mention that stochastic porous media equations of
the form (\ref{si-1}) were studied under similar assumptions in \cite{barbu-daprato}, by a different method.
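We note that (\ref{si-9-5}) does not force $j$ to be symmetric; a simple
time-independent example with $p>1$ is
\begin{equation*}
j(t,r)=\left\vert r\right\vert ^{p}+r,
\end{equation*}
which satisfies (\ref{si-9-5}) with $\gamma _{1}=2$ and a suitable $\gamma
_{2}$, since $j(t,-r)-2j(t,r)=-\left\vert r\right\vert ^{p}-3r$ is bounded
from above on $\mathbb{R}$.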
Theorem 4.3 is the main novelty of this work since it provides existence in (
\ref{si-1}) for a time-dependent weakly coercive $j$. Compared with the
treatment of the case which assumed a polynomial boundedness of $j$ (see
\cite{gm-jota-var-1}), the present one requires a sharp analysis in the $
L^{1}$-space.
\subsection{Functional setting}
First, we introduce several linear operators related to problem (\ref{si-1}
). Actually they represent the operator $-\Delta $ defined on various
spaces. The main operators which we use, $A_{0,\infty }$ and $A$ are defined
as follows:
\begin{eqnarray}
A_{0,\infty }\psi &=&-\Delta \psi ,\mbox{ }A_{0,\infty }:D(A_{0,\infty
})=X\subset L^{\infty }(\Omega )\rightarrow L^{\infty }(\Omega ),
\label{200} \\
X &=&\left\{ \psi \in W^{2,\infty }(\Omega ),\mbox{ }\frac{\partial \psi }{
\partial \nu }+\alpha \psi =0\mbox{ on }\Gamma \right\} \notag
\end{eqnarray}
and
\begin{eqnarray}
A &:&D(A)=L^{1}(\Omega )\subset X^{\prime }\rightarrow X^{\prime }, \notag
\\
\left\langle A\theta ,\psi \right\rangle _{X^{\prime },X} &=&\int_{\Omega
}\theta A_{0,\infty }\psi dx,\mbox{ }\forall \theta \in L^{1}(\Omega ),\mbox{
}\forall \psi \in X, \label{A}
\end{eqnarray}
where by $X^{\prime }$ we denote the dual of $X,$ with the pivot space $
L^{2}(\Omega )$ ($X\subset L^{2}(\Omega )\subset X^{\prime }).$
We introduce the operator
\begin{eqnarray}
A_{1}\psi &=&-\Delta \psi ,\mbox{ }A_{1}:D(A_{1})\subset L^{1}(\Omega
)\rightarrow L^{1}(\Omega ), \label{A0-L1} \\
D(A_{1}) &=&\left\{ \psi \in W^{1,1}(\Omega );\mbox{ }\Delta \psi \in
L^{1}(\Omega ),\mbox{ }\frac{\partial \psi }{\partial \nu }+\alpha \psi =0
\mbox{ on }\Gamma \right\} , \notag
\end{eqnarray}
which is $m$-accretive on $L^{1}(\Omega )$ (see \cite{brezis-strauss}). For
a later use we recall that
\begin{eqnarray}
A_{2}\psi &=&-\Delta \psi ,\mbox{ }A_{2}:X_{2}=D(A_{2})\subset L^{2}(\Omega
)\rightarrow L^{2}(\Omega ), \label{A0} \\
X_{2} &=&\left\{ \psi \in W^{2,2}(\Omega );\mbox{ }\frac{\partial \psi }{
\partial \nu }+\alpha \psi =0\mbox{ on }\Gamma \right\} , \notag
\end{eqnarray}
is $m$-accretive on $L^{2}(\Omega )$ and $\widetilde{A_{2}},$ its extension
to $X_{2}^{\prime },$ defined by
\begin{eqnarray}
\widetilde{A_{2}} &:&L^{2}(\Omega )\subset X_{2}^{\prime }\rightarrow
X_{2}^{\prime }, \notag \\
\left\langle \widetilde{A_{2}}\theta ,\psi \right\rangle _{X_{2}^{\prime
},X_{2}} &=&\int_{\Omega }\theta A_{2}\psi dx,\mbox{ }\forall \theta \in
L^{2}(\Omega ),\mbox{ }\forall \psi \in X_{2}, \label{A2}
\end{eqnarray}
is $m$-accretive on $X_{2}^{\prime }.$ Here, $X_{2}^{\prime }$ is the dual
of $X_{2}$ with $L^{2}(\Omega )$ as pivot space (see these last definitions
in \cite{gm-jota-var-1}).
Finally, let us consider the Hilbert space $V=H^{1}(\Omega )$ endowed with
the norm
\begin{equation*}
\left\Vert \phi \right\Vert _{V}=\left( \left\Vert \phi \right\Vert
^{2}+\alpha \left\Vert \phi \right\Vert _{L^{2}(\Gamma )}^{2}\right) ^{1/2},
\end{equation*}
which is equivalent (for $\alpha >0$) to the standard Hilbertian norm on $
H^{1}(\Omega )$ (see \cite{necas-67}, p. 20). The dual of $V$ is denoted by $
V^{\prime }$ and the scalar product on $V^{\prime }$ is defined as
\begin{equation}
(\theta ,\overline{\theta })_{V^{\prime }}=\left\langle \theta ,A_{V}^{-1}
\overline{\theta }\right\rangle _{V^{\prime },V} \label{si-0-0}
\end{equation}
where $A_{V}:V\rightarrow V^{\prime }$ is given by
\begin{equation}
\left\langle A_{V}\psi ,\phi \right\rangle _{V^{\prime },V}=\int_{\Omega
}\nabla \psi \cdot \nabla \phi dx+\int_{\Gamma }\alpha \psi \phi d\sigma ,
\mbox{ for all }\psi ,\phi \in V. \label{si-0-1}
\end{equation}
(In fact, $A_{V}$ is the extension to $V^{\prime }$ of the operator $A_{2}$
defined by (\ref{A0}).)
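For later use (see Lemma 4.1), we also record a standard consequence of (\ref
{si-0-0}): if $y$ is sufficiently regular, say $y\in C^{1}([0,T];V^{\prime
}),$ then
\begin{equation*}
\frac{1}{2}\frac{d}{dt}\left\Vert y(t)\right\Vert _{V^{\prime
}}^{2}=\left( \frac{dy}{dt}(t),y(t)\right) _{V^{\prime }}=\left\langle
\frac{dy}{dt}(t),A_{V}^{-1}y(t)\right\rangle _{V^{\prime },V},\mbox{
}t\in (0,T).
\end{equation*}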
For the sake of simplicity, we shall sometimes omit the function
arguments in the integrands, writing $\int_{Q}gdxdt$ instead of $
\int_{Q}g(t,x)dxdt,$ where $g:Q\rightarrow \mathbb{R}.$ In appropriate
places we write $g(t)$ to specify that $g:(0,T)\rightarrow Y,$
with $Y$ a Banach space.
\subsection{Statement of the problem}
In terms of the previously introduced operators we can write the abstract
Cauchy problem
\begin{eqnarray}
\frac{dy}{dt}(t)+A\beta (t,x,y) &\ni &f(t),\mbox{ a.e. }t\in (0,T),
\label{si-7} \\
y(0) &=&y_{0}. \notag
\end{eqnarray}
\noindent \textbf{Definition 1.1.} Let $f\in L^{\infty }(Q)$ and $y_{0}\in
V^{\prime }.$ We call a\textit{\ weak solution} to (\ref{si-1}) a pair $
(y,w),$
\begin{equation*}
y\in L^{1}(Q)\cap W^{1,1}([0,T];X^{\prime }),\mbox{ }w\in L^{1}(Q),\mbox{ }
w(t,x)\in \beta (t,x,y(t,x))\ \mbox{\ a.e. }(t,x)\in Q,
\end{equation*}
which satisfies the equation
\begin{equation}
\int_{0}^{T}\left\langle \frac{dy}{dt}(t),\psi (t)\right\rangle _{X^{\prime
},X}dt+\int_{Q}w(t,x)(A_{0,\infty }\psi (t))(x)dxdt=\int_{0}^{T}\left\langle
f(t),\psi (t)\right\rangle _{X^{\prime },X}dt \label{si-8-1-0}
\end{equation}
for any $\psi \in L^{\infty }(0,T;X),$ and the initial condition $
y(0)=y_{0}. $
In the literature, such a solution is sometimes called a \textit{very weak} or
\textit{distributional} solution.
We consider the minimization problem
\begin{equation}
\underset{(y,w)\in U}{\mbox{Minimize}}\mbox{ }J(y,w), \tag{$P$}
\end{equation}
where
\begin{equation}
J(y,w)=\left\{
\begin{array}{l}
\int_{Q}\left( j(t,x,y(t,x))+j^{\ast }(t,x,w(t,x))\right) dxdt+\frac{1}{2}
\left\Vert y(T)\right\Vert _{V^{\prime }}^{2} \\
\\
-\frac{1}{2}\left\Vert y_{0}\right\Vert _{V^{\prime
}}^{2}-\int_{Q}y(t,x)(A_{0,\infty }^{-1}f(t))(x)dxdt\mbox{\ \ if }(y,w)\in U,
\mbox{ } \\
\\
+\infty ,\mbox{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ otherwise,}
\end{array}
\right. \label{J}
\end{equation}
and
\begin{multline*}
U=\{(y,w);\mbox{ }y\in L^{1}(Q)\cap W^{1,1}([0,T];X^{\prime }),\mbox{ }
y(T)\in V^{\prime },\mbox{ }w\in L^{1}(Q), \\
j(\cdot ,\cdot ,y(\cdot ,\cdot ))\in L^{1}(Q),\mbox{ \ }j^{\ast }(\cdot
,\cdot ,w(\cdot ,\cdot ))\in L^{1}(Q), \\
(y,w)\mbox{ verifies (\ref{si-8-P}) below}\}
\end{multline*}
\begin{eqnarray}
\frac{dy}{dt}(t)+Aw(t) &=&f(t)\mbox{ a.e. }t\in (0,T), \label{si-8-P} \\
y(0) &=&y_{0}. \notag
\end{eqnarray}
Here, $\frac{dy}{dt}$ is taken in the sense of $X^{\prime }$-valued
distributions on $(0,T).$
We see that, by the existence theory of elliptic boundary value problems
(see \cite{ADN}), if $f(t)\in L^{\infty }(\Omega )$ then $A_{0,\infty
}^{-1}f(t)\in \bigcap\limits_{p\geq 2}W^{2,p}(\Omega )\subset L^{\infty
}(\Omega ),$ a.e. $t\in (0,T),$ so the last term in the expression of $J$
makes sense.
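Let us also recall the standard fact of convex analysis which motivates the
structure of $J$, namely the Fenchel--Young inequality: since $j(t,x,\cdot )$
is convex with conjugate $j^{\ast }(t,x,\cdot ),$ we have, a.e. $(t,x)\in Q,$
\begin{equation*}
j(t,x,r)+j^{\ast }(t,x,\omega )\geq r\omega ,\mbox{ for all }r,\omega \in
\mathbb{R},
\end{equation*}
with equality if and only if $\omega \in \partial j(t,x,r)=\beta (t,x,r)$
(see, e.g., \cite{vb-springer-2010}).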
\section{Time and space dependent potential}
\setcounter{equation}{0}
In this section we consider that $j$ and $j^{\ast }$ depend on both $t$ and $
x,$ and assume $(h_{1})-(h_{2}),$ (\ref{si-beta-j})-(\ref{si-9-2-0}) and (
\ref{si-9-7-01})-(\ref{si-9-7-02}). We begin with an intermediate result.
\noindent \textbf{Lemma 3.1. }\textit{The function }$J$\textit{\ is proper,
convex and lower semicontinuous on }$L^{1}(Q)\times L^{1}(Q).$
\noindent \textbf{Proof.} It is obvious that $J$ is proper
(because $U\neq \varnothing )$ and convex. Let $\lambda >0.$ For the lower
semicontinuity we prove that the level set
\begin{equation*}
E_{\lambda }=\{(y,w)\in L^{1}(Q)\times L^{1}(Q);\mbox{ }J(y,w)\leq \lambda \}
\end{equation*}
is closed in $L^{1}(Q)\times L^{1}(Q).$ Let $(y_{n},w_{n})\in E_{\lambda }$
such that
\begin{equation}
y_{n}\rightarrow y\mbox{ strongly in }L^{1}(Q),\mbox{ \ }w_{n}\rightarrow w
\mbox{ strongly in }L^{1}(Q),\mbox{ as }n\rightarrow \infty .
\label{si-198-1}
\end{equation}
It follows that $(y_{n},w_{n})\in U,$ hence it satisfies
\begin{eqnarray}
\frac{dy_{n}}{dt}(t)+Aw_{n}(t) &=&f(t),\mbox{ a.e. }t\in (0,T),
\label{si-closed-0} \\
y_{n}(0) &=&y_{0} \notag
\end{eqnarray}
and
\begin{eqnarray}
&&J(y_{n},w_{n})=\int_{Q}(j(t,x,y_{n}(t,x))+j^{\ast }(t,x,w_{n}(t,x)))dxdt
\label{si-198} \\
&&+\frac{1}{2}\left\{ \left\Vert y_{n}(T)\right\Vert _{V^{\prime
}}^{2}-\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}\right\}
-\int_{Q}y_{n}A_{0,\infty }^{-1}fdxdt\leq \lambda .\mbox{ } \notag
\end{eqnarray}
The convergences (\ref{si-198-1}) imply that
\begin{equation*}
\int_{0}^{T}\left\langle Aw_{n}(t),\psi (t)\right\rangle _{X^{\prime
},X}dt=\int_{Q}w_{n}A_{0,\infty }\psi dxdt\rightarrow \int_{Q}wA_{0,\infty
}\psi dxdt,\mbox{ as }n\rightarrow \infty ,
\end{equation*}
for any $\psi \in L^{\infty }(0,T;X)$ and
\begin{equation*}
\int_{Q}y_{n}A_{0,\infty }^{-1}fdxdt\rightarrow \int_{Q}yA_{0,\infty
}^{-1}fdxdt,\mbox{ as }n\rightarrow \infty .
\end{equation*}
Therefore, by (\ref{si-closed-0}), we can write
\begin{equation}
\int_{0}^{T}\left\langle \frac{dy_{n}}{dt}(t),\psi (t)\right\rangle
_{X^{\prime },X}dt=-\int_{Q}w_{n}A_{0,\infty }\psi dxdt+\int_{Q}f\psi dxdt,
\label{si-198-2}
\end{equation}
for any $\psi \in L^{\infty }(0,T;X),$ and we deduce that
\begin{equation*}
\frac{dy_{n}}{dt}\rightarrow \frac{dy}{dt}\mbox{ weakly in }
L^{1}(0,T;X^{\prime })\mbox{ as }n\rightarrow \infty ,
\end{equation*}
meaning that $y$ is absolutely continuous on $[0,T]$ with values in $
X^{\prime }.$
Again by (\ref{si-closed-0}) we have
\begin{equation}
y_{n}(t)=y_{0}+\int_{0}^{t}f(s)ds-\int_{0}^{t}Aw_{n}(s)ds\mbox{, for }t\in
\lbrack 0,T]. \label{si-200}
\end{equation}
From here we get
\begin{equation}
\int_{\Omega }y_{n}(t)\phi dx=\left\langle y_{0},\phi \right\rangle
_{V^{\prime },V}+\int_{0}^{t}\int_{\Omega }f(s)\phi
dxds-\int_{0}^{t}\left\langle Aw_{n}(s),\phi \right\rangle _{X^{\prime },X}ds
\label{si-200-0}
\end{equation}
for any $\phi \in X$ and $t\in \lbrack 0,T].$ Passing to the limit we obtain
\begin{equation*}
l(t)=\lim_{n\rightarrow \infty }\int_{\Omega }y_{n}(t)\phi dx=\left\langle
y_{0},\phi \right\rangle _{V^{\prime },V}+\int_{0}^{t}\int_{\Omega }f(s)\phi
dxds-\int_{0}^{t}\left\langle Aw(s),\phi \right\rangle _{X^{\prime },X}ds.
\end{equation*}
We multiply this relation by $\varphi _{0}\in L^{\infty }(0,T)$ and
integrate over $(0,T),$ to obtain that
\begin{eqnarray}
&&\int_{0}^{T}\varphi _{0}(t)l(t)dt \label{si-201} \\
&=&\int_{0}^{T}\left( \left\langle y_{0},\phi \right\rangle _{V^{\prime
},V}+\int_{0}^{t}\int_{\Omega }f(s)\phi dxds-\int_{0}^{t}\left\langle
Aw(s),\phi \right\rangle _{X^{\prime },X}ds\right) \varphi _{0}(t)dt. \notag
\end{eqnarray}
We multiply (\ref{si-200}) by $\varphi _{0}(t)\phi (x)$ and integrate over $
(0,T)\times \Omega .$ We have
\begin{eqnarray}
\int_{Q}\varphi _{0}\phi y_{n}dxdt &=&\int_{0}^{T}\left( \left\langle
y_{0},\phi \right\rangle _{V^{\prime },V}+\int_{\Omega }\int_{0}^{t}f(s)\phi
dsdx\right) \varphi _{0}(t)dt \label{si-202} \\
&&-\int_{0}^{T}\int_{0}^{t}\left\langle Aw_{n}(s),\phi \right\rangle
_{X^{\prime },X}\varphi _{0}(t)dsdt, \notag
\end{eqnarray}
in which we use the strong convergence $y_{n}\rightarrow y$ in $L^{1}(Q)$ to
pass to the limit and get
\begin{eqnarray}
\int_{Q}\varphi _{0}\phi ydxdt &=&\int_{0}^{T}\left( \left\langle y_{0},\phi
\right\rangle _{V^{\prime },V}+\int_{\Omega }\int_{0}^{t}f(s)\phi
dsdx\right) \varphi _{0}(t)dt \label{si-203} \\
&&-\int_{0}^{T}\int_{0}^{t}\left\langle Aw(s),\phi \right\rangle _{X^{\prime
},X}\varphi _{0}(t)dsdt. \notag
\end{eqnarray}
Comparing (\ref{si-201}) and (\ref{si-203}) we deduce that
\begin{equation*}
\int_{0}^{T}\varphi _{0}(t)l(t)dt=\int_{Q}\varphi _{0}\phi ydxdt\mbox{ for
any }\varphi _{0}\in L^{\infty }(0,T),
\end{equation*}
hence
\begin{equation*}
l(t)=\lim_{n\rightarrow \infty }\int_{\Omega }y_{n}(t)\phi dx=\int_{\Omega
}y(t)\phi dx\mbox{ for any }\phi \in X,\mbox{ }t\in \lbrack 0,T].
\end{equation*}
Thus
\begin{equation}
y_{n}(t)\rightarrow y(t)\mbox{ weakly in }X^{\prime }\mbox{ as }n\rightarrow
\infty ,\mbox{ for any }t\in \lbrack 0,T] \label{si-204}
\end{equation}
and therefore
\begin{equation}
y_{n}(T)\rightarrow y(T)\mbox{, }y_{n}(0)\rightarrow y(0)=y_{0}\mbox{ weakly
in }X^{\prime }\mbox{, as }n\rightarrow \infty . \label{si-206}
\end{equation}
Letting $n\rightarrow \infty $ in (\ref{si-198-2}) we obtain
\begin{equation*}
\int_{0}^{T}\left\langle \frac{dy}{dt}(t),\psi (t)\right\rangle _{X^{\prime
},X}dt+\int_{0}^{T}\int_{\Omega }wA_{0,\infty }\psi
dxdt=\int_{0}^{T}\left\langle f(t),\psi (t)\right\rangle _{X^{\prime },X}dt,
\end{equation*}
which proves that $(y,w)$ is the solution to (\ref{si-8-P}).
By (\ref{si-198}) and (\ref{si-9-7-01}) we can write that
\begin{eqnarray*}
&&\int_{Q}(k_{1}y_{n}+k_{2}+k_{3}w_{n}+k_{4})dxdt+\frac{1}{2}\left\Vert
y_{n}(T)\right\Vert _{V^{\prime }}^{2} \\
&\leq &\int_{Q}(j(t,x,y_{n}(t,x))+j^{\ast }(t,x,w_{n}(t,x)))dxdt+\frac{1}{2}
\left\Vert y_{n}(T)\right\Vert _{V^{\prime }}^{2} \\
&\leq &\frac{1}{2}\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}+\left\Vert
y_{n}\right\Vert _{L^{1}(Q)}\left\Vert A_{0,\infty }^{-1}f\right\Vert
_{L^{\infty }(Q)}+\lambda \leq C\mbox{,}
\end{eqnarray*}
whence, using (\ref{si-198-1}), we get
\begin{equation*}
\frac{1}{2}\left\Vert y_{n}(T)\right\Vert _{V^{\prime }}^{2}\leq
C+\max_{i=1,\ldots ,4}\left\vert k_{i}\right\vert _{\infty }\left(
\sup_{n}\left\Vert y_{n}\right\Vert _{L^{1}(Q)}+\sup_{n}\left\Vert
w_{n}\right\Vert _{L^{1}(Q)}+2\,\mbox{meas}(Q)\right) =C_{1},
\end{equation*}
with $C$ and $C_{1}$ constants, where $\left\vert k_{i}\right\vert _{\infty
}=\left\Vert k_{i}\right\Vert _{L^{\infty }(Q)}.$
It follows that, on a subsequence, $y_{n}(T)\rightarrow \xi $ weakly in $
V^{\prime }$ as $n\rightarrow \infty .$ As seen earlier, $y_{n}(T)\rightarrow
y(T)$ weakly in $X^{\prime },$ and by the uniqueness of the limit we get $
\xi =y(T)\in V^{\prime }.$
The function
\begin{equation*}
\varphi :L^{1}(Q)\rightarrow \mathbb{R},\mbox{ }\varphi
(z)=\int_{Q}j(t,x,z(t,x))dxdt
\end{equation*}
is proper, convex and l.s.c. (see \cite{vb-springer-2010}, p. 56), so that by
Fatou's lemma (if $j$ were nonnegative) we would get
\begin{equation}
\varphi (y)\leq \liminf_{n\rightarrow \infty }\varphi
(y_{n})=\liminf_{n\rightarrow \infty }\int_{Q}j(t,x,y_{n}(t,x))dxdt<\infty .
\label{si-206-0}
\end{equation}
Since $j$ is not generally nonnegative we use (\ref{si-9-7-01}) and apply
Fatou's lemma for
\begin{equation*}
\widetilde{j}(t,x,r)=j(t,x,r)-k_{1}(t,x)r-k_{2}(t,x)\geq 0.
\end{equation*}
We get, by the strong convergence $y_{n}\rightarrow y$ in $L^{1}(Q)$ (hence
a.e. on $Q,$ along a subsequence) and the continuity of $j,$
\begin{eqnarray*}
&&\int_{Q}(j(t,x,y(t,x))-k_{1}y-k_{2})dxdt=\int_{Q}\liminf_{n\rightarrow
\infty }\widetilde{j}(t,x,y_{n}(t,x))dxdt \\
&\leq &\liminf_{n\rightarrow \infty }\int_{Q}\widetilde{j}
(t,x,y_{n}(t,x))dxdt=\liminf_{n\rightarrow \infty
}\int_{Q}j(t,x,y_{n}(t,x))dxdt-\int_{Q}(k_{1}y+k_{2})dxdt,
\end{eqnarray*}
and so (\ref{si-206-0}) holds.
Similarly we have that $\int_{Q}j^{\ast }(t,x,w(t,x))dxdt<\infty ,$ and so,
in particular, we have shown that $(y,w)\in U.$
Moreover, passing to the limit in (\ref{si-198}) as $n\rightarrow \infty $
we obtain by lower semicontinuity that
\begin{eqnarray*}
&&\int_{Q}(j(t,x,y(t,x))+j^{\ast }(t,x,w(t,x)))dxdt+\frac{1}{2}\left\Vert
y(T)\right\Vert _{V^{\prime }}^{2} \\
&&-\frac{1}{2}\left\Vert y_{0}\right\Vert _{V^{\prime
}}^{2}-\int_{Q}yA_{0,\infty }^{-1}fdxdt\leq \liminf_{n\rightarrow \infty
}J(y_{n},w_{n})\leq \lambda
\end{eqnarray*}
which means that $(y,w)\in E_{\lambda }.$ This ends the proof. $\square $
\noindent \textbf{Theorem 3.2. }\textit{Problem }$(P)$\textit{\ has at least
one solution }$(y^{\ast },w^{\ast }).$ \textit{If} $j$ \textit{is strictly
convex, the solution to }$(P)$\textit{\ is unique}.
\noindent \textbf{Proof. }By (\ref{si-9-7-01}) we note that if $(y,w)\in U,$
then
\begin{eqnarray*}
J(y,w) &\geq &-\left\vert k_{1}\right\vert _{\infty }\left\Vert y\right\Vert
_{L^{1}(Q)}-\left\vert k_{2}\right\vert _{\infty }-\left\vert
k_{3}\right\vert _{\infty }\left\Vert w\right\Vert _{L^{1}(Q)}-\left\vert
k_{4}\right\vert _{\infty } \\
&&-\frac{1}{2}\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}-\left\Vert
y\right\Vert _{L^{1}(Q)}\left\Vert A_{0,\infty }^{-1}f\right\Vert
_{L^{\infty }(Q)}.
\end{eqnarray*}
Let us set $d=\inf\limits_{(y,w)\in U}J(y,w).$ We assume first that $
d>-\infty $ and we shall show later that this is indeed the only case.
Let us consider a minimizing sequence $(y_{n},w_{n})\in U$, such that
\begin{equation}
d\leq J(y_{n},w_{n})\leq d+\frac{1}{n}, \label{si-99}
\end{equation}
where the pair $(y_{n},w_{n})$ satisfies (\ref{si-closed-0}).
By (\ref{si-9-2})-(\ref{si-9-2-00}), for any $M>0$ there exist $C_{M}$ and $
D_{M}$ such that $j(t,x,r)>M\left\vert r\right\vert $ whenever $\left\vert
r\right\vert >C_{M}$ and $j^{\ast }(t,x,\omega )>M\left\vert \omega
\right\vert $ whenever $\left\vert \omega \right\vert >D_{M}.$ Then, by (\ref
{si-99}) we write
\begin{eqnarray*}
&&\int_{\{(t,x);\left\vert y_{n}(t,x)\right\vert \leq
C_{M}\}}j(t,x,y_{n}(t,x))dxdt+M\int_{\{(t,x);\left\vert
y_{n}(t,x)\right\vert >C_{M}\}}\left\vert y_{n}\right\vert dxdt \\
&&+\int_{\{(t,x);\left\vert w_{n}(t,x)\right\vert \leq D_{M}\}}j^{\ast
}(t,x,w_{n}(t,x))dxdt+M\int_{\{(t,x);\left\vert w_{n}(t,x)\right\vert
>D_{M}\}}\left\vert w_{n}\right\vert dxdt \\
&&+\frac{1}{2}\left\Vert y_{n}(T)\right\Vert _{V^{\prime }}^{2}-\frac{1}{2}
\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}\leq d+\frac{1}{n} \\
&\leq &\left\Vert A_{0,\infty }^{-1}f\right\Vert _{L^{\infty }(Q)}\left(
\int_{\{(t,x);\left\vert y_{n}(t,x)\right\vert \leq C_{M}\}}\left\vert
y_{n}\right\vert dxdt+\int_{\{(t,x);\left\vert y_{n}(t,x)\right\vert
>C_{M}\}}\left\vert y_{n}\right\vert dxdt\right) .
\end{eqnarray*}
Denoting $\left\Vert A_{0,\infty }^{-1}f\right\Vert _{L^{\infty
}(Q)}=f_{\infty }$ and taking $M$ large enough so that $M>f_{\infty },$ it
follows that
\begin{eqnarray*}
&&(M-f_{\infty })\int_{\{(t,x);\left\vert y_{n}(t,x)\right\vert
>C_{M}\}}\left\vert y_{n}\right\vert dxdt+M\int_{\{(t,x);\left\vert
w_{n}(t,x)\right\vert >D_{M}\}}\left\vert w_{n}\right\vert dxdt \\
&&\mbox{ \ }+\frac{1}{2}\left\Vert y_{n}(T)\right\Vert _{V^{\prime }}^{2} \\
&\leq &\frac{1}{2}\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}+f_{\infty
}C_{M}\mbox{meas}(Q)+\int_{\{(t,x);\left\vert y_{n}(t,x)\right\vert \leq
C_{M}\}}\left\vert j(t,x,y_{n}(t,x))\right\vert dxdt \\
&&+\int_{\{(t,x);\left\vert w_{n}(t,x)\right\vert \leq D_{M}\}}\left\vert
j^{\ast }(t,x,w_{n}(t,x))\right\vert dxdt+d+1 \\
&\leq &\frac{1}{2}\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}+f_{\infty
}C_{M}\mbox{meas}(Q)+\int_{\{(t,x);\left\vert y_{n}(t,x)\right\vert \leq
C_{M}\}}\left\vert \widetilde{j}(t,x,y_{n}(t,x))\right\vert dxdt \\
&&+\int_{\{(t,x);\left\vert w_{n}(t,x)\right\vert \leq D_{M}\}}\left\vert
\widetilde{j}^{\ast }(t,x,w_{n}(t,x))\right\vert dxdt+d+1 \\
&&+\int_{\{(t,x);\left\vert y_{n}(t,x)\right\vert \leq C_{M}\}}\left\vert
k_{1}y_{n}+k_{2}\right\vert dxdt+\int_{\{(t,x);\left\vert
w_{n}(t,x)\right\vert \leq D_{M}\}}\left\vert k_{3}w_{n}+k_{4}\right\vert
dxdt,
\end{eqnarray*}
where $\widetilde{j}(t,x,r)=j(t,x,r)-k_{1}r-k_{2},$ $\widetilde{j}^{\ast
}(t,x,\omega )=j^{\ast }(t,x,\omega )-k_{3}\omega -k_{4}.$
Recalling (\ref{si-9-7}) and (\ref{si-9-6}),
\begin{equation*}
j(t,x,y_{n}(t,x))\leq \left\vert j(t,x,0)\right\vert +\left\vert \eta
_{n}(t,x)\right\vert \left\vert y_{n}(t,x)\right\vert \leq Y_{M}^{1}\mbox{
on }\{(t,x);\left\vert y_{n}(t,x)\right\vert \leq C_{M}\},
\end{equation*}
\begin{equation*}
j^{\ast }(t,x,w_{n}(t,x))\leq \left\vert j^{\ast }(t,x,0)\right\vert
+\left\vert \varpi _{n}(t,x)\right\vert \left\vert w_{n}(t,x)\right\vert \leq
W_{M}^{1}\mbox{ on }\{(t,x);\left\vert w_{n}(t,x)\right\vert \leq D_{M}\},
\end{equation*}
where $\eta _{n}(t,x)\in \beta (t,x,y_{n}(t,x))$ and $\varpi _{n}(t,x)\in
\beta ^{-1}(t,x,w_{n}(t,x))$ a.e. on $Q.$
Then
\begin{equation*}
0\leq \widetilde{j}(t,x,y_{n}(t,x))\leq Y_{M}^{1}+\left\vert
k_{1}\right\vert _{\infty }C_{M}+\left\vert k_{2}\right\vert _{\infty }\mbox{
on }\{(t,x);\left\vert y_{n}(t,x)\right\vert \leq C_{M}\},
\end{equation*}
\begin{equation*}
0\leq \widetilde{j}^{\ast }(t,x,w_{n}(t,x))\leq W_{M}^{1}+\left\vert
k_{3}\right\vert _{\infty }D_{M}+\left\vert k_{4}\right\vert _{\infty }\mbox{
on }\{(t,x);\left\vert w_{n}(t,x)\right\vert \leq D_{M}\}
\end{equation*}
and we deduce that
\begin{eqnarray}
&&(M-f_{\infty })\int_{\{(t,x);\left\vert y_{n}(t,x)\right\vert
>C_{M}\}}\left\vert y_{n}\right\vert dxdt+M\int_{\{(t,x);\left\vert
w_{n}(t,x)\right\vert >D_{M}\}}\left\vert w_{n}\right\vert dxdt
\label{bound-inf} \\
&&+\frac{1}{2}\left\Vert y_{n}(T)\right\Vert _{V^{\prime }}^{2}\leq C+d.
\notag
\end{eqnarray}
This yields
\begin{equation}
\left\Vert y_{n}\right\Vert _{L^{1}(Q)}\leq C,\mbox{ \ }\left\Vert
w_{n}\right\Vert _{L^{1}(Q)}\leq C,\mbox{ \ }\left\Vert y_{n}(T)\right\Vert
_{V^{\prime }}\leq C. \label{bound}
\end{equation}
(By $C$ and $C_{i},$ $i=1,\ldots ,4,$ we denote several constants independent
of $n$.)
From (\ref{si-99}) we get
\begin{equation}
I_{n}:=\int_{Q}j(t,x,y_{n}(t,x))dxdt+\int_{Q}j^{\ast
}(t,x,w_{n}(t,x))dxdt\leq C. \label{si-198-0}
\end{equation}
We continue by proving that each term is separately bounded, i.e.,
\begin{equation}
\int_{Q}j(t,x,y_{n}(t,x))dxdt\leq C_{1},\mbox{ }\int_{Q}j^{\ast
}(t,x,w_{n}(t,x))dxdt\leq C_{2}. \label{si-100}
\end{equation}
We write
\begin{eqnarray*}
&&I_{n}=\int_{\{(t,x);\left\vert y_{n}(t,x)\right\vert \leq
C_{M}\}}j(t,x,y_{n}(t,x))dxdt+\int_{\{(t,x);\left\vert y_{n}(t,x)\right\vert
>C_{M}\}}j(t,x,y_{n}(t,x))dxdt \\
&&+\int_{\{(t,x);\left\vert w_{n}(t,x)\right\vert \leq D_{M}\}}j^{\ast
}(t,x,w_{n}(t,x))dxdt+\int_{\{(t,x);\left\vert w_{n}(t,x)\right\vert
>D_{M}\}}j^{\ast }(t,x,w_{n}(t,x))dxdt \\
&\leq &C.
\end{eqnarray*}
Therefore
\begin{eqnarray*}
&&\int_{\{(t,x);\left\vert y_{n}(t,x)\right\vert
>C_{M}\}}j(t,x,y_{n}(t,x))dxdt+\int_{\{(t,x);\left\vert w_{n}(t,x)\right\vert
>D_{M}\}}j^{\ast }(t,x,w_{n}(t,x))dxdt \\
&\leq &C+Y_{M}^{1}\mbox{meas}(Q)+W_{M}^{1}\mbox{meas}(Q)=C_{3}.
\end{eqnarray*}
Since $j(t,x,y_{n}(t,x))\geq k_{1}(t,x)y_{n}(t,x)+k_{2}(t,x)$ we deduce that
\begin{equation*}
\int_{\{(t,x);\left\vert w_{n}(t,x)\right\vert >M\}}j^{\ast
}(t,x,w_{n}(t,x))dxdt\leq C_{4},
\end{equation*}
whence
\begin{equation}
\int_{Q}j^{\ast }(t,x,w_{n}(t,x))dxdt\leq C_{2}. \label{101-1}
\end{equation}
Finally, (\ref{si-198-0}) yields
\begin{equation}
\int_{Q}j(t,x,y_{n}(t,x))dxdt\leq C_{1}, \label{101-2}
\end{equation}
with $C_{1}$ and $C_{2}$ independent of $n.$
Next, we shall show that the sequences $(y_{n})_{n}$ and $(w_{n})_{n}$ are
weakly compact in $L^{1}(Q).$
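Our tool is the following standard criterion (the Dunford--Pettis theorem),
which we recall for the reader's convenience: a bounded set $\mathcal{F}
\subset L^{1}(Q)$ is relatively weakly compact in $L^{1}(Q)$ if and only if
it is equi-absolutely integrable, that is,
\begin{equation*}
\lim_{\mbox{meas}(S)\rightarrow 0}\ \sup_{g\in \mathcal{F}}\int_{S}\left\vert
g\right\vert dxdt=0.
\end{equation*}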
To this end we have to show that the integrals $\int_{S}\left\vert
w_{n}\right\vert dxdt,$ with $S\subset Q,$ are equi-absolutely continuous,
meaning that for every $\varepsilon >0$ there exists $\delta >0$ such that $
\int_{S}\left\vert w_{n}\right\vert dxdt<\varepsilon $ whenever meas$
(S)<\delta .$ Let $M_{\varepsilon }>\frac{2C_{2}}{\varepsilon },$ where $
C_{2}$ is the constant in (\ref{si-100}), and let $R_{M}$ be such that $
\frac{j^{\ast }(t,x,r)}{\left\vert r\right\vert }\geq M_{\varepsilon
}$ for $\left\vert r\right\vert >R_{M},$ by (\ref{si-9-2}). If $\delta <
\frac{\varepsilon }{2R_{M}}$ then
\begin{eqnarray*}
&&\int_{S}\left\vert w_{n}\right\vert dxdt\leq \int_{\{(t,x);\left\vert
w_{n}(t,x)\right\vert >R_{M}\}}\left\vert w_{n}\right\vert
dxdt+\int_{\{(t,x);\left\vert w_{n}(t,x)\right\vert \leq R_{M}\}}\left\vert
w_{n}\right\vert dxdt \\
&\leq &M_{\varepsilon }^{-1}\int_{Q}j^{\ast
}(t,x,w_{n}(t,x))dxdt+R_{M}\delta <\varepsilon .
\end{eqnarray*}
Hence, by the Dunford--Pettis theorem it follows that $(w_{n})_{n}$ is weakly
compact in $L^{1}(Q).$ In a similar way we prove the weak
compactness of the sequence $(y_{n})_{n}.$ Thus, on a subsequence,
\begin{equation*}
y_{n}\rightarrow y^{\ast }\mbox{ weakly in }L^{1}(Q),\mbox{ }
w_{n}\rightarrow w^{\ast }\mbox{ weakly in }L^{1}(Q)\mbox{ as }n\rightarrow
\infty ,
\end{equation*}
\begin{equation*}
Aw_{n}\rightarrow Aw^{\ast }\mbox{ weakly in }L^{1}(0,T;X^{\prime }),\mbox{
as }n\rightarrow \infty ,
\end{equation*}
by (\ref{A}); this, together with (\ref{si-closed-0}), implies that
\begin{equation*}
\frac{dy_{n}}{dt}\rightarrow \frac{dy^{\ast }}{dt}\mbox{ weakly in }
L^{1}(0,T;X^{\prime })\mbox{ as }n\rightarrow \infty .
\end{equation*}
Passing to the limit in
\begin{equation*}
\int_{0}^{T}\left\langle \frac{dy_{n}}{dt}(t),\psi (t)\right\rangle
_{X^{\prime },X}dt+\int_{Q}w_{n}A_{0,\infty }\psi
dxdt=\int_{0}^{T}\left\langle f(t),\psi (t)\right\rangle _{X^{\prime },X}dt
\end{equation*}
for any $\psi \in L^{\infty }(0,T;X)$ we get that $(y^{\ast },w^{\ast })$
verifies (\ref{si-8-1-0}), or equivalently (\ref{si-8-P}), i.e.,
\begin{equation*}
\int_{0}^{T}\left\langle \frac{dy^{\ast }}{dt}(t),\psi (t)\right\rangle
_{X^{\prime },X}dt+\int_{\Omega }w^{\ast }A_{0,\infty }\psi
dxdt=\int_{0}^{T}\left\langle f(t),\psi (t)\right\rangle _{X^{\prime },X}dt.
\end{equation*}
Next we show that
\begin{equation*}
y_{n}(T)\rightarrow y^{\ast }(T)\mbox{ and }y_{n}(0)\rightarrow y^{\ast
}(0)=y_{0}
\mbox{ weakly in }V^{\prime }\mbox{, as }n\rightarrow \infty ,
\end{equation*}
in a similar way as in Lemma 3.1. In order to obtain (\ref{si-203}) we use
the weak compactness of $(y_{n})_{n}$ in $L^{1}(Q).$
Finally, by passing to the limit in (\ref{si-99}), on the basis of the
weak lower semicontinuity of the functional $J$ on $L^{1}(Q)\times
L^{1}(Q),$ we obtain that
\begin{equation*}
J(y^{\ast },w^{\ast })=d.
\end{equation*}
Hence, we have obtained that $y^{\ast }\in L^{1}(Q),$ $w^{\ast }\in L^{1}(Q),$ $
y^{\ast }(T)\in V^{\prime }$ and $(y^{\ast },w^{\ast })$ satisfies (\ref
{si-8-P}). By (\ref{si-100}) we get
\begin{equation*}
\int_{Q}j(t,x,y^{\ast }(t,x))dxdt<\infty ,\mbox{ }\int_{Q}j^{\ast
}(t,x,w^{\ast }(t,x))dxdt<\infty .
\end{equation*}
With these relations we have proved that $(y^{\ast },w^{\ast })$
belongs to $U$ and that it is a solution to $(P).$
Let us show now that $d>-\infty .$ Indeed, otherwise, for every positive real
$K$ there exists $n_{K}$ such that for every $n\geq n_{K}$ we have $
J(y_{n},w_{n})<-K.$ Proceeding with the same computations as before, we
arrive at the inequality (\ref{bound-inf}), which now reads
\begin{eqnarray*}
&&(M-f_{\infty })\int_{\{(t,x);\left\vert y_{n}(t,x)\right\vert
>C_{M}\}}\left\vert y_{n}\right\vert dxdt+M\int_{\{(t,x);\left\vert
w_{n}(t,x)\right\vert >D_{M}\}}\left\vert w_{n}\right\vert dxdt \\
&&+\frac{1}{2}\left\Vert y_{n}(T)\right\Vert _{V^{\prime }}^{2}\leq C-K.
\end{eqnarray*}
Since the left-hand side is nonnegative and $C$ is a fixed constant, taking $
K>C$ leads to a contradiction, as claimed.
The uniqueness argument is standard; it relies on the
assumption of the strict convexity of $j$ and on the inequality
\begin{eqnarray}
&&J\left( \frac{y_{1}+y_{2}}{2},\frac{w_{1}+w_{2}}{2}\right) \label{si-104}
\\
&=&\int_{Q}\left( j\left( t,x,\frac{y_{1}+y_{2}}{2}(t,x)\right) +j^{\ast
}\left( t,x,\frac{w_{1}+w_{2}}{2}(t,x)\right) \right) dxdt \notag \\
&&+\frac{1}{2}\left\Vert \frac{y_{1}+y_{2}}{2}(T)\right\Vert _{V^{\prime
}}^{2}-\frac{1}{2}\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}-\int_{Q}
\frac{y_{1}+y_{2}}{2}A_{0,\infty }^{-1}fdxdt \notag \\
&\leq &\frac{1}{2}(J(y_{1},w_{1})+J(y_{2},w_{2}))-\frac{1}{2}\left\Vert
\frac{y_{1}-y_{2}}{2}(T)\right\Vert _{V^{\prime }}^{2}, \notag
\end{eqnarray}
where $(y_{1},w_{1})$ and $(y_{2},w_{2})$ are two solutions to $(P).$
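In (\ref{si-104}) we used the convexity of $j$ and $j^{\ast }$ together with
the parallelogram identity in the Hilbert space $V^{\prime }$:
\begin{equation*}
\left\Vert \frac{y_{1}+y_{2}}{2}(T)\right\Vert _{V^{\prime }}^{2}=\frac{1}{2}
\left\Vert y_{1}(T)\right\Vert _{V^{\prime }}^{2}+\frac{1}{2}\left\Vert
y_{2}(T)\right\Vert _{V^{\prime }}^{2}-\left\Vert \frac{y_{1}-y_{2}}{2}
(T)\right\Vert _{V^{\prime }}^{2}.
\end{equation*}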
$\square $
We call the solution to the minimization problem $(P)$ a \textit{variational}
or \textit{generalized} solution to (\ref{si-1}).
One might suspect that if the minimum in $(P)$ is zero, then the null
minimizer is a weak solution to (\ref{si-1}). We shall prove this for a
slightly modified version of $(P),$ obtained by including a boundedness
constraint on the state $y$ in the admissible set $U.$ More exactly, we
consider the problem
\begin{equation}
\mbox{Minimize }\widetilde{J}(y,w)\mbox{ for all }(y,w)\in \widetilde{U}
\tag{$\widetilde{P}$}
\end{equation}
where
\begin{equation*}
\widetilde{J}(y,w)=\left\{
\begin{array}{l}
J(y,w),\mbox{ \ }(y,w)\in \widetilde{U}, \\
+\infty ,\mbox{ \ \ \ \ \ \ otherwise,}
\end{array}
\right.
\end{equation*}
\begin{equation*}
\widetilde{U}=\{(y,w)\in U;\mbox{ }y(t,x)\in \lbrack y_{m},y_{M}]\mbox{ a.e.
}(t,x)\in Q\},
\end{equation*}
with $y_{m},$ $y_{M}$ two constants. We assume that
\begin{equation*}
y_{0}\in L^{\infty }(\Omega ),\mbox{ }y_{0}\in \lbrack y_{m},y_{M}],\mbox{ \
}f\in L^{\infty }(Q)
\end{equation*}
and remark that $\widetilde{U}$ is not empty (it contains, e.g., the pair $
(y_{0},w_{0})$ with $w_{0}(t)=A_{0,\infty }^{-1}f(t),$ which satisfies (\ref
{si-8-P})).
If we set $y_{m}=0,$ then this boundedness property agrees with the physical
significance of $y,$ that of a fluid concentration in a diffusion process,
which is nonnegative.
Problem $(\widetilde{P})$ has at least one solution; the proof is the same
as that of Theorem 3.2.
\noindent \textbf{Theorem 3.3. }\textit{Let }$(y,w)\in \widetilde{U}$
\textit{be a null minimizer in }$(\widetilde{P}),$\textit{\ i.e., }
\begin{equation*}
\min (\widetilde{P})=\widetilde{J}(y,w)=0.
\end{equation*}
\textit{\ Let us assume in addition that }
\begin{equation}
\frac{1}{2}\left\Vert y(T)\right\Vert _{V^{\prime }}^{2}-\frac{1}{2}
\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}-\int_{Q}y(t,x)(A_{0,\infty
}^{-1}f(t))(x)dxdt=-\int_{Q}w(t,x)y(t,x)dxdt. \label{si-298}
\end{equation}
\textit{\ Then }
\begin{equation*}
w(t,x)\in \beta (t,x,y(t,x)),\mbox{ }a.e.\mbox{ }(t,x)\in Q,
\end{equation*}
\textit{and the pair }$(y,w)$\textit{\ is the unique weak solution to} (\ref
{si-1}).
\noindent \textbf{Proof. }Let $(y,w)$ be the null minimizer in $(\widetilde{P
}).$ Then
\begin{eqnarray*}
\widetilde{J}(y,w) &=&\int_{Q}(j(t,x,y(t,x))+j^{\ast }(t,x,w(t,x)))dxdt \\
&&+\frac{1}{2}\left\Vert y(T)\right\Vert _{V^{\prime }}^{2}-\frac{1}{2}
\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}-\int_{Q}y(t,x)(A_{0,\infty
}^{-1}f(t))(x)dxdt=0.
\end{eqnarray*}
By (\ref{si-298}) we have
\begin{equation}
\int_{Q}(j(t,x,y(t,x))+j^{\ast }(t,x,w(t,x))-y(t,x)w(t,x))dxdt=0.
\label{si-298-0}
\end{equation}
Since the integrand in (\ref{si-298-0}) is nonnegative by the Fenchel--Young
inequality, this implies that $j(t,x,y(t,x))+j^{\ast
}(t,x,w(t,x))-y(t,x)w(t,x)=0$ a.e. $(t,x)\in Q$ and so, by the equality case
in the same inequality,
\begin{equation*}
w(t,x)\in \beta (t,x,y(t,x))\mbox{ a.e. }(t,x)\in Q,
\end{equation*}
as claimed. $\square $
\section{Time dependent potential}
\setcounter{equation}{0}
In this section we consider the case when $j$ and $j^{\ast }$ depend only on
$t$ and assume $(h_{1})-(h_{2}),$ (\ref{si-beta-j})-(\ref{si-9-2-0}), (\ref
{si-9-7-01}) and (\ref{si-9-5}), where $k_{1},$ $k_{2}\in L^{\infty }(0,T).$
The main result of this section is that a solution to $(P)$ belongs to $
L^{\infty }(0,T;V^{\prime })$ and minimizes $J$ to zero, being exactly the
unique weak solution to (\ref{si-1}).
To this end we need some intermediate results. The first is proved in the
next lemma; the second, given in Theorem 4.2, recalls one of the main
results in \cite{gm-jota-var-1}.
\noindent \textbf{Lemma 4.1. }\textit{Let} $(y,w)\in U$\textit{\ and }$y\in
L^{\infty }(0,T;V^{\prime }).$\textit{\ Then }$yw\in L^{1}(Q)$ \textit{and
we have the formula}
\begin{equation}
-\int_{Q}ywdxdt=\frac{1}{2}\left\Vert y(T)\right\Vert _{V^{\prime }}^{2}-
\frac{1}{2}\left\Vert y_{0}\right\Vert _{V^{\prime
}}^{2}-\int_{Q}yA_{0,\infty }^{-1}fdxdt. \label{60}
\end{equation}
\noindent \textbf{Proof. }Let $(y,w)\in U.$ Then $y,w\in
L^{1}(Q),$ $j(\cdot ,\cdot ,y(\cdot ,\cdot ))\in L^{1}(Q)$ and $j^{\ast
}(\cdot ,\cdot ,w(\cdot ,\cdot ))\in L^{1}(Q).$ By (\ref{si-9-5}) we have
\begin{equation*}
j(t,x,-y(t,x))\leq \gamma _{1}j(t,x,y(t,x))+\gamma _{2}\mbox{ a.e. on }Q,
\end{equation*}
which implies that $\int_{Q}j(t,x,-y(t,x))dxdt<\infty .$ Next, by the
relations
\begin{eqnarray*}
j(t,x,y(t,x))+j^{\ast }(t,x,w(t,x)) &\geq &y(t,x)w(t,x), \\
j(t,x,-y(t,x))+j^{\ast }(t,x,w(t,x)) &\geq &-y(t,x)w(t,x)
\end{eqnarray*}
it follows that
\begin{equation}
yw\in L^{1}(Q). \label{61}
\end{equation}
Because $(y,w)\in U$ it also satisfies (\ref{si-8-P}). Then $y\in
L^{1}(Q)\cap L^{\infty }(0,T;V^{\prime }),$ $w\in L^{1}(Q).$ We perform a
regularization by applying $(I+\varepsilon A_{\Delta })^{-1}$ to (\ref
{si-8-P}), where $A_{\Delta }$ denotes here the realization of the operator $
-\Delta $ on the spaces indicated in Section 2.1. We obtain
\begin{eqnarray}
\frac{dy_{\varepsilon }}{dt}(t)+Aw_{\varepsilon }(t) &=&f_{\varepsilon }(t)
\mbox{ a.e. }t\in (0,T), \label{63} \\
y_{\varepsilon }(0) &=&(I+\varepsilon A_{V})^{-1}y_{0}, \notag
\end{eqnarray}
where
\begin{eqnarray}
y_{\varepsilon }(t) &=&(I+\varepsilon A_{V})^{-1}y(t),\mbox{ a.e. }t\in (0,T),
\notag \\[1pt]
w_{\varepsilon }(t) &=&(I+\varepsilon A_{1})^{-1}w(t),\mbox{ a.e. }t\in (0,T),
\label{61-0} \\
f_{\varepsilon }(t) &=&(I+\varepsilon A_{0,\infty })^{-1}f(t),\mbox{ a.e. }
t\in (0,T). \notag
\end{eqnarray}
By Brezis and Strauss again (see \cite{brezis-strauss}), if $
w(t)\in L^{1}(\Omega )$ then
\begin{equation}
w_{\varepsilon }(t)\in W^{1,q}(\Omega ),\mbox{ a.e. }t\in (0,T),\mbox{ with }
1\leq q<\frac{N}{N-1}. \label{64}
\end{equation}
Since $\frac{N}{N-1}<N\leq 3$, we get by the Sobolev inequalities that
\begin{equation*}
W^{1,q}(\Omega )\subset L^{q^{\ast }}(\Omega ),\mbox{ }\frac{1}{q^{\ast }}=
\frac{1}{q}-\frac{1}{N},
\end{equation*}
with $\frac{N}{N-1}\leq q^{\ast }<\frac{N}{N-2}.$ It follows that
\begin{equation*}
w_{\varepsilon }\in L^{1}(0,T;L^{2}(\Omega )).
\end{equation*}
Next,
\begin{equation*}
y_{\varepsilon }\in L^{1}(0,T;L^{2}(\Omega ))\cap L^{\infty }(0,T;V),
\end{equation*}
by a similar argument as for $w_{\varepsilon },$ since $y\in L^{1}(Q)\cap
L^{\infty }(0,T;V^{\prime })$ and
\begin{equation*}
y_{\varepsilon }(t)=(I+\varepsilon A_{1})^{-1}y(t),\mbox{ a.e. }t\in (0,T),
\end{equation*}
as well. Finally,
\begin{equation*}
f_{\varepsilon }\in L^{\infty }(0,T;\bigcap\limits_{p\geq 2}W^{2,p}(\Omega
)),
\end{equation*}
by the elliptic regularity.
Moreover, $A_{1}$ is $m$-accretive on $L^{1}(\Omega ),$ and it follows that
\begin{equation*}
w_{\varepsilon }(t)\rightarrow w(t)\mbox{ strongly in }L^{1}(\Omega )\mbox{
for any }t\in \lbrack 0,T]
\end{equation*}
and
\begin{equation*}
\left\Vert w_{\varepsilon }(t)\right\Vert _{L^{1}(\Omega )}\leq \left\Vert
w(t)\right\Vert _{L^{1}(\Omega )}\mbox{ for any }t\in \lbrack 0,T],
\end{equation*}
(see \cite{brezis-strauss}).
For later use, we deduce by the Lebesgue dominated convergence theorem
that
\begin{equation}
w_{\varepsilon }\rightarrow w\mbox{ strongly in }L^{1}(Q),\mbox{ as }
\varepsilon \rightarrow 0. \label{63-0}
\end{equation}
Similarly, we have that
\begin{equation}
y_{\varepsilon }\rightarrow y\mbox{ strongly in }L^{1}(Q),\mbox{ as }
\varepsilon \rightarrow 0. \label{63-1}
\end{equation}
Finally,
\begin{equation}
f_{\varepsilon }\rightarrow f\mbox{ weak* in }L^{\infty }(Q),\mbox{ and
strongly in }L^{p}(Q),\mbox{ }p\geq 2,\mbox{ as }\varepsilon \rightarrow 0.
\label{63-2}
\end{equation}
By the first relation in (\ref{61-0}) we still have that
\begin{equation*}
(I+\varepsilon A_{V})^{-1}y(t)\rightarrow y(t)\mbox{ strongly in }V^{\prime }
\mbox{ for any }t\in \lbrack 0,T].
\end{equation*}
We also observe that
\begin{equation*}
\int_{0}^{T}\left\langle Aw_{\varepsilon }(t),\psi (t)\right\rangle
_{X^{\prime },X}dt=\int_{Q}w_{\varepsilon }A_{0,\infty }\psi dxdt\rightarrow
\int_{Q}wA_{0,\infty }\psi dxdt\mbox{ as }\varepsilon \rightarrow 0,
\end{equation*}
for any $\psi \in L^{\infty }(0,T;X)$ and by (\ref{63})
\begin{equation*}
\frac{dy_{\varepsilon }}{dt}\rightarrow \frac{dy}{dt}\mbox{ weakly in }
L^{1}(0,T;X^{\prime })\mbox{ as }\varepsilon \rightarrow 0.
\end{equation*}
Passing to the limit in (\ref{63}) tested with any $\psi \in L^{\infty
}(0,T;X),$
\begin{equation*}
\int_{0}^{T}\left\langle \frac{dy_{\varepsilon }}{dt}(t),\psi
(t)\right\rangle _{X^{\prime },X}dt+\int_{Q}w_{\varepsilon }A_{0,\infty
}\psi dxdt=\int_{0}^{T}\left\langle f_{\varepsilon }(t),\psi
(t)\right\rangle _{X^{\prime },X}dt
\end{equation*}
we check that $(y,w)$ indeed satisfies (\ref{si-8-P}).
Next, we assert that
\begin{equation}
\int_{Q}j(t,y_{\varepsilon }(t,x))dxdt\leq \int_{Q}j(t,y(t,x))dxdt.
\label{68}
\end{equation}
Indeed, let us introduce the Yosida approximation of $\beta ,$
\begin{equation}
\beta _{\lambda }(t,r)=\frac{1}{\lambda }\left( r-(1+\lambda \beta (t,\cdot
))^{-1}r\right) ,\mbox{ a.e. }t,\mbox{ for all }r\in \mathbb{R}\mbox{ and }
\lambda >0. \label{69}
\end{equation}
We have $\beta _{\lambda }(t,r)=\frac{\partial j_{\lambda }}{\partial r}
(t,r),$ where $j_{\lambda }$ is the Moreau approximation of $j,$
\begin{equation}
j_{\lambda }(t,r)=\inf_{s\in \mathbb{R}}\left\{ \frac{\left\vert
r-s\right\vert ^{2}}{2\lambda }+j(t,s)\right\} ,\mbox{ a.e. }t,\mbox{ for
all }r\in \mathbb{R}, \label{70}
\end{equation}
that can be still written as
\begin{equation}
j_{\lambda }(t,r)=\frac{1}{2\lambda }\left\vert (1+\lambda \beta (t,\cdot
))^{-1}r-r\right\vert ^{2}+j(t,(1+\lambda \beta (t,\cdot ))^{-1}r).
\label{70-0}
\end{equation}
The function $j_{\lambda }$ is convex, continuous and satisfies
\begin{eqnarray}
j_{\lambda }(t,r) &\leq &j(t,r)\mbox{ for all }r\in \mathbb{R},\mbox{ }
\lambda >0,\mbox{ } \label{71} \\
\lim_{\lambda \rightarrow 0}j_{\lambda }(t,r) &=&j(t,r),\mbox{ for all }r\in
\mathbb{R}. \notag
\end{eqnarray}
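The properties (\ref{71}) can be checked numerically in a simple model case. The sketch below is an illustrative assumption, not the general time-dependent $j$ of the paper: it takes $j(r)=r^{2}/2$, for which the infimum in (\ref{70}) has the closed form $j_{\lambda }(r)=r^{2}/(2(1+\lambda ))$.

```python
import numpy as np

# Model case (an illustrative assumption, not the paper's general j):
# j(r) = r^2/2, so the Moreau approximation (70) can be computed by a
# direct discretization of the infimum over s.
def j(r):
    return 0.5 * r ** 2

def moreau(r, lam, s_grid):
    # j_lambda(r) = inf_s { |r - s|^2 / (2 lam) + j(s) }
    return np.min((r - s_grid) ** 2 / (2 * lam) + j(s_grid))

s_grid = np.linspace(-10.0, 10.0, 200001)
for lam in (1.0, 0.1, 0.01):
    for r in (-2.0, 0.5, 3.0):
        jl = moreau(r, lam, s_grid)
        assert abs(jl - r ** 2 / (2 * (1 + lam))) < 1e-4  # closed form
        assert jl <= j(r) + 1e-12                         # first line of (71)
# second line of (71): j_lambda(r) -> j(r) as lam -> 0
assert abs(moreau(2.0, 1e-4, s_grid) - j(2.0)) < 1e-3
```

The first assertion reproduces the closed form, the second is the inequality in (\ref{71}), and the last illustrates the pointwise convergence as $\lambda \rightarrow 0$.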
We have
\begin{eqnarray*}
&&\int_{Q}j_{\lambda }(t,y_{\varepsilon }(t,x))dxdt \\
&\leq &\int_{Q}j_{\lambda }(t,y(t,x))dxdt-\varepsilon
\int_{0}^{T}\left\langle A_{V}y_{\varepsilon }(t),\beta _{\lambda
}(t,y_{\varepsilon }(t))\right\rangle _{V^{\prime },V}dt.
\end{eqnarray*}
Since for any $z\in V$ one has
\begin{equation*}
-\left\langle A_{V}z,\beta _{\lambda }(t,z)\right\rangle _{V^{\prime
},V}=-\int_{\Gamma }\alpha \beta _{\lambda }(t,z)zd\sigma -\int_{\Omega }
\frac{\partial \beta _{\lambda }}{\partial z}(t,z)\left\vert \nabla
z\right\vert ^{2}dx\leq 0,
\end{equation*}
we obtain
\begin{equation}
\int_{Q}j_{\lambda }(t,y_{\varepsilon }(t,x))dxdt\leq \int_{Q}j_{\lambda
}(t,y(t,x))dxdt. \label{70-1}
\end{equation}
Now, by (\ref{71})
\begin{equation*}
j_{\lambda }(t,y(t,x))\leq j(t,y(t,x))
\end{equation*}
and
\begin{equation}
\lim_{\lambda \rightarrow 0}j_{\lambda }(t,y(t,x))=j(t,y(t,x))\mbox{ a.e. on
}Q. \label{70-2}
\end{equation}
Assume for the moment that $j$ is nonnegative. Then, by the
Lebesgue dominated convergence theorem, we get
\begin{equation}
\lim_{\lambda \rightarrow 0}\int_{Q}j_{\lambda
}(t,y(t,x))dxdt=\int_{Q}j(t,y(t,x))dxdt,\mbox{ for any }y\mbox{ fixed.}
\label{70-3}
\end{equation}
Passing to the limit in (\ref{70-1}) as $\lambda \rightarrow 0,$ we obtain
that
\begin{equation}
\int_{Q}j(t,y_{\varepsilon }(t,x))dxdt\leq \int_{Q}j(t,y(t,x))dxdt,\mbox{
for all }\varepsilon >0. \label{70-4}
\end{equation}
Since in general $j$ is not necessarily nonnegative, we consider the function
\begin{equation}
\widetilde{j}(t,r)=j(t,r)-k_{1}(t)r-k_{2}(t) \label{70-6}
\end{equation}
which is nonnegative by (\ref{si-9-7-01}). Hence, applying (\ref{70-4}) to $\widetilde{j}$, we have
\begin{equation*}
\int_{Q}\widetilde{j}(t,y_{\varepsilon }(t,x))dxdt\leq \int_{Q}\widetilde{j}
(t,y(t,x))dxdt+\int_{Q}k_{1}(t)(y(t,x)-y_{\varepsilon }(t,x))dxdt,
\end{equation*}
and therefore
\begin{equation}
\int_{Q}\widetilde{j}(t,y_{\varepsilon }(t,x))dxdt\leq \int_{Q}\widetilde{j}
(t,y(t,x))dxdt+\delta (\varepsilon ), \label{70-7}
\end{equation}
where
\begin{equation*}
\delta (\varepsilon )=\int_{Q}k_{1}(t)(y(t,x)-y_{\varepsilon }(t,x))dxdt\leq
\left\Vert k_{1}\right\Vert _{L^{\infty }(0,T)}\left\Vert y-y_{\varepsilon
}\right\Vert _{L^{1}(Q)}\rightarrow 0,\mbox{ as }\varepsilon \rightarrow 0,
\end{equation*}
by (\ref{63-1}). Then, (\ref{70-7}) implies (\ref{68}) as claimed.
A relation similar to (\ref{68}) holds for $j^{\ast }$,
\begin{equation}
\int_{Q}j^{\ast }(t,w_{\varepsilon }(t,x))dxdt\leq \int_{Q}j^{\ast
}(t,w(t,x))dxdt. \label{70-5}
\end{equation}
This implies that $j(\cdot ,y_{\varepsilon }(\cdot ,\cdot ))\in L^{1}(Q)$, $
j^{\ast }(\cdot ,w_{\varepsilon }(\cdot ,\cdot ))\in L^{1}(Q),$ for all $
\varepsilon >0,$ and so, by the same argument as for $yw$ we deduce that
\begin{equation*}
y_{\varepsilon }w_{\varepsilon }\in L^{1}(Q).
\end{equation*}
We test (\ref{63}) by $A_{2}^{-1}y_{\varepsilon }(t)$ and integrate over $
(0,T).$ Since $y_{\varepsilon }\in L^{1}(Q)\cap L^{\infty }(0,T;V),$ $
w_{\varepsilon }\in L^{1}(0,T;L^{2}(\Omega ))$ we get $A_{2}^{-1}y_{
\varepsilon }(t)\in X_{2},$ a.e. $t\in (0,T)$ and by (\ref{A2})
\begin{equation*}
\int_{0}^{T}\left\langle \widetilde{A_{2}}w_{\varepsilon
}(t),A_{2}^{-1}y_{\varepsilon }(t)\right\rangle _{X_{2}^{\prime
},X_{2}}dt=\int_{Q}y_{\varepsilon }w_{\varepsilon }dxdt.
\end{equation*}
Then, after a few computations, we deduce from (\ref{63}) that
\begin{equation}
-\int_{Q}y_{\varepsilon }w_{\varepsilon }dxdt=\frac{1}{2}\left\Vert
y_{\varepsilon }(T)\right\Vert _{V^{\prime }}^{2}-\frac{1}{2}\left\Vert
(I+\varepsilon A_{V})^{-1}y_{0}\right\Vert _{V^{\prime
}}^{2}-\int_{Q}y_{\varepsilon }A_{0,\infty }^{-1}f_{\varepsilon }dxdt.
\label{65}
\end{equation}
Recalling that by (\ref{61-0}) we have that
\begin{equation*}
(I+\varepsilon A_{V})^{-1}y(t)\rightarrow y(t)\mbox{ strongly in }V^{\prime }
\mbox{ for any }t\in \lbrack 0,T]
\end{equation*}
and passing to the limit in (\ref{65}) as $\varepsilon \rightarrow 0$ we
obtain
\begin{equation}
\lim_{\varepsilon \rightarrow 0}\left( -\int_{Q}y_{\varepsilon
}w_{\varepsilon }dxdt\right) =\frac{1}{2}\left\Vert y(T)\right\Vert
_{V^{\prime }}^{2}-\frac{1}{2}\left\Vert y_{0}\right\Vert _{V^{\prime
}}^{2}-\int_{Q}yA_{0,\infty }^{-1}fdxdt. \label{66}
\end{equation}
Moreover, by the strong convergence of $(y_{\varepsilon })_{\varepsilon }$
and $(w_{\varepsilon })_{\varepsilon },$ (\ref{63-1}) and (\ref{63-0}),
we get
\begin{equation*}
y_{\varepsilon }\rightarrow y\mbox{ a.e. in }Q,\mbox{ }w_{\varepsilon
}\rightarrow w\mbox{ a.e. in }Q,\mbox{ as }\varepsilon \rightarrow 0,
\end{equation*}
which implies that
\begin{equation*}
y_{\varepsilon }w_{\varepsilon }\rightarrow yw\mbox{ a.e. in }Q,\mbox{ as }
\varepsilon \rightarrow 0.
\end{equation*}
The functions $j$ and $j^{\ast }$ are continuous and so
\begin{equation*}
j(t,y_{\varepsilon }(t,x))\rightarrow j(t,y(t,x)),\mbox{ }j^{\ast
}(t,w_{\varepsilon }(t,x))\rightarrow j^{\ast }(t,w(t,x)),\mbox{ a.e. on }Q,
\mbox{ as }\varepsilon \rightarrow 0.
\end{equation*}
Now, by (\ref{68}) and (\ref{70-5}) we have
\begin{eqnarray*}
&&\int_{Q}\left( j(t,y_{\varepsilon }(t,x))+j^{\ast }(t,w_{\varepsilon
}(t,x))-y_{\varepsilon }w_{\varepsilon }\right) dxdt \\
&\leq &\int_{Q}\left( j(t,y(t,x))+j^{\ast }(t,w(t,x))-y_{\varepsilon
}w_{\varepsilon }\right) dxdt
\end{eqnarray*}
and we apply Fatou's lemma, since $j(t,y_{\varepsilon })+j^{\ast
}(t,w_{\varepsilon })-y_{\varepsilon }w_{\varepsilon }\geq 0.$ Using
(\ref{68}) and (\ref{70-5}), we get that
\begin{eqnarray*}
&&\int_{Q}\left( j(t,y(t,x))+j^{\ast }(t,w(t,x))-yw\right) dxdt \\
&\leq &\liminf\limits_{\varepsilon \rightarrow 0}\int_{Q}\left(
j(t,y_{\varepsilon }(t,x))+j^{\ast }(t,w_{\varepsilon }(t,x))-y_{\varepsilon
}w_{\varepsilon }\right) dxdt \\
&\leq &\limsup\limits_{\varepsilon \rightarrow 0}\int_{Q}\left(
j(t,y(t,x))+j^{\ast }(t,w(t,x))-y_{\varepsilon }w_{\varepsilon }\right)
dxdt \\
&\leq &\int_{Q}\left( j(t,y(t,x))+j^{\ast }(t,w(t,x))\right)
dxdt-\lim\limits_{\varepsilon \rightarrow 0}\int_{Q}y_{\varepsilon
}w_{\varepsilon }dxdt,
\end{eqnarray*}
whence, by using (\ref{66}), we see that
\begin{equation}
-\int_{Q}ywdxdt\leq \frac{1}{2}\left\Vert y(T)\right\Vert _{V^{\prime }}^{2}-
\frac{1}{2}\left\Vert y_{0}\right\Vert _{V^{\prime
}}^{2}-\int_{Q}yA_{0,\infty }^{-1}fdxdt. \label{72}
\end{equation}
We continue the proof by relying on the same arguments, starting this time
with Fatou's lemma applied to the nonnegative function $j(t,-y_{\varepsilon
})+j^{\ast }(t,w_{\varepsilon })+y_{\varepsilon }w_{\varepsilon }$. By
similar computations we get
\begin{equation*}
-\int_{Q}ywdxdt\geq \frac{1}{2}\left\Vert y(T)\right\Vert _{V^{\prime }}^{2}-
\frac{1}{2}\left\Vert y_{0}\right\Vert _{V^{\prime
}}^{2}-\int_{Q}yA_{0,\infty }^{-1}fdxdt,
\end{equation*}
which together with (\ref{72}) implies (\ref{60}).
$\square $
Next we recall one of the main results given in \cite{gm-jota-var-1},
stated there in a more general setting but particularized here to the space $L^{2}(Q).$
Let us consider the problem
\begin{eqnarray}
\frac{\partial y}{\partial t}-\Delta \widetilde{\beta }(t,x,y) &\ni &f\mbox{
\ \ \ \ \ \ \ \ \ \ \ \ \ \ in }Q, \notag \\
-\frac{\partial \widetilde{\beta }(t,x,y)}{\partial \nu } &=&\alpha
\widetilde{\beta }(t,x,y)\mbox{ \ \ on }\Sigma , \label{si-1-0} \\
y(0,x) &=&y_{0}\mbox{ \ \ \ \ \ \ \ \ \ \ \ \ \ in }\Omega , \notag
\end{eqnarray}
where $\widetilde{\beta }(t,x,r)=\partial \varphi (t,x,r)$ a.e. on $Q,$ for
all $r\in \mathbb{R},$ and $\varphi (t,x,\cdot ):\mathbb{R}\rightarrow
\mathbb{R}$ is a proper, convex, l.s.c. function satisfying $(h_{1}),(h_{2}),$
and the growth condition
\begin{equation}
C_{1}\left\vert r\right\vert ^{2}+C_{1}^{0}\leq \varphi (t,x,r)\leq
C_{2}\left\vert r\right\vert ^{2}+C_{2}^{0},\mbox{ for all }r\in \mathbb{R},
\mbox{ a.e. }t\in (0,T) \label{si-1-1}
\end{equation}
in addition (here $C_{i}$, $C_{i}^{0}$, $i=1,2,$ are constants and $C_{1}>0$).
We consider the minimization problem
\begin{equation}
\mbox{Minimize }J_{0}(y,w)=\int_{Q}\left( \varphi (t,x,y(t,x))+\varphi
^{\ast }(t,x,w(t,x))-w(t,x)y(t,x)\right) dxdt \tag{$P_{0}$}
\end{equation}
for all $(y,w)\in U_{0},$ where
\begin{multline*}
U_{0}=\{(y,w);\mbox{ }y\in L^{2}(Q)\cap W^{1,2}([0,T];X_{2}^{\prime }),\mbox{
}y(T)\in V^{\prime },\mbox{ }w\in L^{2}(Q), \\
\varphi (\cdot ,\cdot ,y)\in L^{1}(Q),\mbox{ }\varphi ^{\ast }(\cdot ,\cdot
,w)\in L^{1}(Q), \\
(y,w)\mbox{ verifies (\ref{76}) below}\},
\end{multline*}
\begin{eqnarray}
\frac{dy}{dt}(t)+\widetilde{A_{2}}w(t) &=&f(t)\mbox{ a.e. }t\in (0,T),
\label{76} \\
y(0) &=&y_{0}. \notag
\end{eqnarray}
We recall the notations $X_{2},$ $X_{2}^{\prime },$ $\widetilde{A_{2}}$
given in Section 2.1.
In \cite{gm-jota-var-1} it was proved that $(P_{0})$ has at least one
solution, and the equivalence between (\ref{si-1-0}) and $(P_{0})$ was
established; we summarize this below (see Theorem 3.2 in \cite{gm-jota-var-1}).
\noindent \textbf{Theorem 4.2.} \textit{Let }$y_{0}\in V^{\prime },$ $f\in
L^{\infty }(Q),$ \textit{and} \textit{let the pair }$(y,w)\in U_{0}$\textit{
\ be a solution to }$(P_{0}).$\textit{\ Then,}
\begin{equation*}
w(t,x)\in \widetilde{\beta }(t,x,y(t,x))\mbox{ \textit{a.e.} }(t,x)\in Q
\end{equation*}
\textit{and }$(y,w)$\textit{\ is the unique weak solution to }(\ref{si-1-0}
). \textit{Moreover,}
\begin{eqnarray}
&&-\int_{0}^{t}\int_{\Omega }ywdxd\tau \label{300} \\
&=&\frac{1}{2}\left\{ \left\Vert y(t)\right\Vert _{V^{\prime
}}^{2}-\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}\right\}
-\int_{0}^{t}\int_{\Omega }yA_{0,\infty }^{-1}fdxd\tau ,\mbox{ \textit{for
all} }t\in \lbrack 0,T]. \notag
\end{eqnarray}
Of course, the result remains true when $\varphi $ does not depend on $x$.
Now, we can pass to the main result of this section which shows that a null
minimizer in $(P)$ provides a unique weak solution to (\ref{si-1}).
\noindent \textbf{Theorem 4.3. }\textit{Under the assumptions }$
(h_{1})-(h_{2}),$ (\ref{si-beta-j})-(\ref{si-9-2-0}), (\ref{si-9-7-01})-(\ref
{si-9-7-02}), (\ref{si-9-5}), \textit{problem }$(P)$ \textit{has a solution }$
(y^{\ast },w^{\ast })$\textit{\ such that }$y^{\ast }\in L^{\infty
}(0,T;V^{\prime }).$ \textit{Moreover, this solution is a null minimizer in }$
(P),$
\begin{equation}
J(y^{\ast },w^{\ast })=\inf_{(y,w)\in U}J(y,w)=0 \label{Jinf}
\end{equation}
\textit{and it is the unique weak solution to }(\ref{si-1})\textit{.}
\noindent \textbf{Proof. }Let us introduce the approximating problem
\begin{eqnarray}
\frac{\partial y}{\partial t}-\Delta \beta _{\lambda }(t,y) &=&f\mbox{ \ \ \
\ \ \ \ \ \ \ \ \ \ in }Q, \notag \\
-\frac{\partial \beta _{\lambda }(t,y)}{\partial \nu } &=&\alpha \beta
_{\lambda }(t,y)\mbox{ \ \ on }\Sigma , \label{73} \\
y(0,x) &=&y_{0}\mbox{ \ \ \ \ \ \ \ \ \ \ \ \ in }\Omega , \notag
\end{eqnarray}
where $\beta _{\lambda }$ is the Yosida approximation of $\beta .$
\noindent Let $\sigma $ be positive and consider the approximating problem
indexed by $\sigma ,$
\begin{eqnarray}
\frac{\partial y}{\partial t}-\Delta (\beta _{\lambda }(t,y)+\sigma y) &=&f
\mbox{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ in }Q, \notag \\
-\frac{\partial (\beta _{\lambda }(t,y)+\sigma y)}{\partial \nu } &=&\alpha
(\beta _{\lambda }(t,y)+\sigma y)\mbox{ \ on }\Sigma , \label{74} \\
y(0,x) &=&y_{0}\mbox{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ in }\Omega .
\notag
\end{eqnarray}
The potential of $\beta _{\lambda }(t,r)+\sigma r$ is
\begin{equation}
j_{\lambda ,\sigma }(t,r)=j_{\lambda }(t,r)+\frac{\sigma }{2}r^{2},
\label{74-0}
\end{equation}
where $j_{\lambda }$ is the Moreau regularization of $j.$ By a simple
computation using (\ref{70}), (\ref{si-9-7-01}), (\ref{si-9-2-0}) we get that
\begin{equation}
\frac{\sigma }{2}\left\vert r\right\vert ^{2}+k_{1}r+k_{2}-2\lambda
k_{1}^{2}\leq j_{\lambda ,\sigma }(t,r)\leq j(t,0)+\left\vert r\right\vert
^{2}\left( \frac{1}{2\lambda }+\frac{\sigma }{2}\right) . \label{74-0-1}
\end{equation}
Hence $j_{\lambda ,\sigma }$ satisfies (\ref{si-1-1}) and we rely on Theorem
4.2 with $\varphi (t,r)=j_{\lambda ,\sigma }(t,r)$ and $\widetilde{\beta }
(t,r)=\beta _{\lambda }(t,r)+\sigma r$ to get that (\ref{74}) has a unique
weak solution $(y_{\lambda ,\sigma },w_{\lambda ,\sigma })\in U_{0},$
\begin{eqnarray*}
y_{\lambda ,\sigma } &\in &L^{2}(Q)\cap W^{1,2}([0,T];X_{2}^{\prime }),\mbox{
}y_{\lambda ,\sigma }(T)\in V^{\prime }, \\
w_{\lambda ,\sigma } &=&\beta _{\lambda }(t,y_{\lambda ,\sigma })+\sigma
y_{\lambda ,\sigma }\in L^{2}(Q).
\end{eqnarray*}
This solution is the null minimizer in $(P_{0}),$ i.e.,
\begin{equation}
J_{0}(y_{\lambda ,\sigma },w_{\lambda ,\sigma })=\int_{Q}\left( j_{\lambda
,\sigma }(t,y_{\lambda ,\sigma }(t,x))+j_{\lambda ,\sigma }^{\ast
}(t,w_{\lambda ,\sigma }(t,x))-y_{\lambda ,\sigma }w_{\lambda ,\sigma
}\right) dxdt=0, \label{77}
\end{equation}
and satisfies (\ref{76}), namely
\begin{eqnarray}
\frac{dy_{\lambda ,\sigma }}{dt}(t)+\widetilde{A_{2}}w_{\lambda ,\sigma }(t)
&=&f(t)\mbox{ a.e. }t\in (0,T), \label{76-0} \\
y(0) &=&y_{0}. \notag
\end{eqnarray}
Moreover, we have by (\ref{300}) that
\begin{eqnarray}
&&-\int_{0}^{t}\int_{\Omega }y_{\lambda ,\sigma }w_{\lambda ,\sigma }dxd\tau
\label{si-9-1} \\
&=&\frac{1}{2}\left\{ \left\Vert y_{\lambda ,\sigma }(t)\right\Vert
_{V^{\prime }}^{2}-\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}\right\}
-\int_{0}^{t}\int_{\Omega }y_{\lambda ,\sigma }A_{0,\infty }^{-1}fdxd\tau ,
\mbox{ } \notag
\end{eqnarray}
for all $t\in \lbrack 0,T].$ Taking into account (\ref{si-9-1}) and (\ref{77}
), we can further write
\begin{eqnarray}
&&\int_{Q}(j_{\lambda ,\sigma }(t,y_{\lambda ,\sigma }(t,x))+j_{\lambda
,\sigma }^{\ast }(t,w_{\lambda ,\sigma }(t,x)))dxdt \label{78} \\
&&+\frac{1}{2}\left\{ \left\Vert y_{\lambda ,\sigma }(T)\right\Vert
_{V^{\prime }}^{2}-\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}\right\}
=\int_{0}^{T}\int_{\Omega }y_{\lambda ,\sigma }A_{0,\infty }^{-1}fdxdt.
\notag
\end{eqnarray}
We note that
\begin{equation}
\frac{j_{\lambda ,\sigma }^{\ast }(t,\omega )}{\left\vert \omega \right\vert
}\rightarrow \infty \mbox{ as }\left\vert \omega \right\vert \rightarrow
\infty , \label{78-0}
\end{equation}
uniformly in $\lambda $ and $\sigma .$ This follows from (\ref{si-9-7})
because, setting
\begin{equation*}
\eta _{\lambda ,\sigma }=\partial j_{\lambda ,\sigma }(t,r),\mbox{ }\eta
_{\lambda ,\sigma }=\beta _{\lambda }(t,r)+\sigma r=\beta (t,(1+\lambda
\beta (t,\cdot ))^{-1}r)+\sigma r,
\end{equation*}
then $\eta _{\lambda ,\sigma }$ is bounded on bounded sets $\left\vert
r\right\vert \leq M,$ uniformly in $\lambda $ and $\sigma ,$ for $\lambda $
and $\sigma $ small (smaller than $1$, say).
We also note that
\begin{eqnarray*}
&&\int_{Q}j_{\lambda ,\sigma }(t,y_{\lambda ,\sigma }(t,x))dxdt\geq
\int_{Q}j_{\lambda }(t,y_{\lambda ,\sigma }(t,x))dxdt \\
&=&\int_{Q}j(t,(1+\lambda \beta (t,\cdot ))^{-1}y_{\lambda ,\sigma
})dxdt+\int_{Q}\frac{1}{2\lambda }\left\vert y_{\lambda ,\sigma }-(1+\lambda
\beta (t,\cdot ))^{-1}y_{\lambda ,\sigma }\right\vert ^{2}dxdt
\end{eqnarray*}
and
\begin{eqnarray*}
&&\int_{0}^{T}\int_{\Omega }y_{\lambda ,\sigma }A_{0,\infty
}^{-1}fdxdt=\int_{Q}\left( y_{\lambda ,\sigma }-(1+\lambda \beta (t,\cdot
))^{-1}y_{\lambda ,\sigma }\right) A_{0,\infty }^{-1}fdxdt \\
&&+\int_{Q}(1+\lambda \beta (t,\cdot ))^{-1}y_{\lambda ,\sigma }A_{0,\infty
}^{-1}fdxdt \\
&\leq &\int_{Q}\frac{1}{2\lambda }\left\vert y_{\lambda ,\sigma }-(1+\lambda
\beta (t,\cdot ))^{-1}y_{\lambda ,\sigma }\right\vert ^{2}dxdt+2\lambda
\int_{Q}(A_{0,\infty }^{-1}f)^{2}dxdt \\
&&+\int_{Q}(1+\lambda \beta (t,\cdot ))^{-1}y_{\lambda ,\sigma }A_{0,\infty
}^{-1}fdxdt.
\end{eqnarray*}
Plugging these into (\ref{78}), we get after some algebra that
\begin{eqnarray}
&&\int_{Q}j(t,(1+\lambda \beta (t,\cdot ))^{-1}y_{\lambda ,\sigma
})dxdt+\int_{Q}j_{\lambda ,\sigma }^{\ast }(t,w_{\lambda ,\sigma }(t,x))dxdt
\label{78-1} \\
&&+\frac{1}{2}\left\{ \left\Vert y_{\lambda ,\sigma }(T)\right\Vert
_{V^{\prime }}^{2}-\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}\right\}
\notag \\
&\leq &\int_{Q}(1+\lambda \beta (t,\cdot ))^{-1}y_{\lambda ,\sigma
}A_{0,\infty }^{-1}fdxdt+2\lambda \int_{Q}(A_{0,\infty }^{-1}f)^{2}dxdt.
\notag
\end{eqnarray}
Further we set
\begin{equation*}
(1+\lambda \beta (t,\cdot ))^{-1}y_{\lambda ,\sigma }=z_{\lambda ,\sigma }
\end{equation*}
and argue as in Theorem 3.2 to deduce by the Dunford-Pettis theorem that $
(z_{\lambda ,\sigma })_{\sigma }$ and $(w_{\lambda ,\sigma })_{\sigma }$ are
weakly compact in $L^{1}(Q).$ Recalling (\ref{bound}) we also get
\begin{equation}
\left\Vert y_{\lambda ,\sigma }(T)\right\Vert _{V^{\prime }}\leq C
\label{79}
\end{equation}
independently of $\sigma $ and $\lambda $.
Taking into account that $w_{\lambda ,\sigma }=\beta _{\lambda }(t,y_{\lambda
,\sigma })+\sigma y_{\lambda ,\sigma },$ equation (\ref{si-9-1}) yields
\begin{eqnarray*}
&&\frac{1}{2}\left\Vert y_{\lambda ,\sigma }(t)\right\Vert _{V^{\prime
}}^{2}+\int_{0}^{t}\int_{\Omega }(\beta _{\lambda }(\tau ,y_{\lambda ,\sigma
})y_{\lambda ,\sigma }+\sigma y_{\lambda ,\sigma }^{2})dxd\tau \\
&=&\frac{1}{2}\left\Vert y_{0}\right\Vert _{V^{\prime
}}^{2}+\int_{0}^{t}\left\langle y_{\lambda ,\sigma }(\tau ),A_{0,\infty
}^{-1}f(\tau )\right\rangle _{V^{\prime },V}d\tau \\
&\leq &\frac{1}{2}\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}+\frac{1}{2}
\int_{0}^{t}\left\Vert A_{0,\infty }^{-1}f(\tau )\right\Vert _{V}^{2}d\tau +
\frac{1}{2}\int_{0}^{t}\left\Vert y_{\lambda ,\sigma }(\tau )\right\Vert
_{V^{\prime }}^{2}d\tau ,\mbox{ }
\end{eqnarray*}
for all $t\in \lbrack 0,T].$
Taking into account that $\beta _{\lambda }(t,r)r\geq 0,$ for all $r\in
\mathbb{R},$ and by virtue of Gronwall's lemma, we deduce that
\begin{equation}
\left\Vert y_{\lambda ,\sigma }\right\Vert _{L^{\infty }(0,T;V^{\prime
})}\leq C, \label{79-0}
\end{equation}
\begin{equation*}
\sqrt{\sigma }\left\Vert y_{\lambda ,\sigma }\right\Vert _{L^{2}(Q)}\leq C,
\end{equation*}
and
\begin{equation}
\int_{Q}j(t,z_{\lambda ,\sigma }(t,x))dxdt\leq C,\mbox{ }\int_{Q}j_{\lambda
,\sigma }^{\ast }(t,w_{\lambda ,\sigma }(t,x))dxdt\leq C \label{79-1}
\end{equation}
independently of $\sigma $ and $\lambda .$ (To obtain (\ref{79-1}), we
recall the arguments leading to (\ref{101-1}), (\ref{101-2}).)
Then, (\ref{78}) and relation (\ref{si-9-7-01}) for $j_{\lambda ,\sigma
}^{\ast }$ imply that
\begin{equation}
\int_{Q}j_{\lambda ,\sigma }(t,y_{\lambda ,\sigma }(t,x))dxdt\leq C
\label{79-2}
\end{equation}
independently of $\sigma $ and $\lambda .$ Following the proof of Theorem
3.2 we deduce that
\begin{eqnarray*}
z_{\lambda ,\sigma } &\rightarrow &z_{\lambda }\mbox{ weakly in }L^{1}(Q),
\mbox{ as }\sigma \rightarrow 0, \\
w_{\lambda ,\sigma } &\rightarrow &w_{\lambda }\mbox{ weakly in }L^{1}(Q),
\mbox{ as }\sigma \rightarrow 0, \\
\sqrt{\sigma }y_{\lambda ,\sigma } &\rightarrow &\zeta _{\lambda }\mbox{
weakly in }L^{2}(Q),\mbox{ as }\sigma \rightarrow 0, \\
y_{\lambda ,\sigma } &\rightarrow &y_{\lambda }\mbox{ weak-star in }
L^{\infty }(0,T;V^{\prime }),\mbox{ as }\sigma \rightarrow 0, \\
y_{\lambda ,\sigma }(T) &\rightarrow &\xi \mbox{ weakly in }V^{\prime },
\mbox{ as }\sigma \rightarrow 0, \\
Aw_{\lambda ,\sigma } &\rightarrow &Aw_{\lambda }\mbox{ weakly in }
L^{1}(0,T;X^{\prime }),\mbox{ as }\sigma \rightarrow 0, \\
\frac{dy_{\lambda ,\sigma }}{dt} &\rightarrow &\frac{dy_{\lambda }}{dt}\mbox{
weakly in }L^{1}(0,T;X^{\prime }),\mbox{ as }\sigma \rightarrow 0.
\end{eqnarray*}
By (\ref{79-2}) and (\ref{70-0}) we have
\begin{equation*}
\int_{Q}\frac{1}{2\lambda }\left\vert y_{\lambda ,\sigma }-(1+\lambda \beta
(t,\cdot ))^{-1}y_{\lambda ,\sigma }\right\vert ^{2}dxdt\leq
\int_{Q}j_{\lambda ,\sigma }(t,y_{\lambda ,\sigma }(t,x))dxdt\leq C
\end{equation*}
whence, denoting $\chi _{\lambda ,\sigma }=\left( y_{\lambda ,\sigma
}-z_{\lambda ,\sigma }\right) /\sqrt{2\lambda }$ we see that $(\chi
_{\lambda ,\sigma })_{\sigma }$ is bounded in $L^{2}(Q)$ and $\chi _{\lambda
,\sigma }\rightarrow \chi _{\lambda }$ weakly in $L^{2}(Q),$ as $\sigma
\rightarrow 0,$ on a subsequence. Then
\begin{equation*}
y_{\lambda ,\sigma }-z_{\lambda ,\sigma }\rightarrow \sqrt{2\lambda }\chi
_{\lambda }\mbox{ weakly in }L^{1}(Q),\mbox{ as }\sigma \rightarrow 0,
\end{equation*}
where $\left\Vert \chi _{\lambda }\right\Vert _{L^{1}(Q)}\leq C.$ Since $
z_{\lambda ,\sigma }\rightarrow z_{\lambda }$ weakly in $L^{1}(Q),$ it
follows that $(y_{\lambda ,\sigma })_{\sigma }$ is bounded in $L^{1}(Q),$ so
it converges weakly and by the limit uniqueness we have
\begin{equation*}
y_{\lambda ,\sigma }\rightarrow y_{\lambda }\mbox{ weakly in }L^{1}(Q),\mbox{
as }\sigma \rightarrow 0.
\end{equation*}
We also have
\begin{equation}
y_{\lambda }=z_{\lambda }+\sqrt{2\lambda }\chi _{\lambda }\mbox{ a.e. on }Q.
\label{79-3}
\end{equation}
By the Arzel\`{a}-Ascoli theorem (since $V^{\prime }$ is compactly embedded
in $X^{\prime }$, because $X$ is compactly embedded in $V$) it follows that
\begin{equation*}
y_{\lambda ,\sigma }(t)\rightarrow y_{\lambda }(t)\mbox{ in }X^{\prime },
\mbox{ uniformly in }t\in \lbrack 0,T],\mbox{ as }\sigma \rightarrow 0,
\end{equation*}
so $\xi =y_{\lambda }(T)$ and $y_{\lambda }(0)=y_{0}.$
Passing to the limit in (\ref{76-0}) we get that $(y_{\lambda },w_{\lambda
}) $ satisfies
\begin{eqnarray}
\frac{dy_{\lambda }}{dt}(t)+Aw_{\lambda }(t) &=&f(t)\mbox{ a.e. }t\in (0,T),
\label{76-00} \\
y(0) &=&y_{0}. \notag
\end{eqnarray}
Passing to the limit in (\ref{78-1}) as $\sigma \rightarrow 0$, using the
weak lower semicontinuity property we get
\begin{eqnarray}
\int_{Q}(j(t,z_{\lambda }(t,x))+j_{\lambda }^{\ast }(t,w_{\lambda
}(t,x)))dxdt && \label{82} \\
+\frac{1}{2}\left\{ \left\Vert y_{\lambda }(T)\right\Vert _{V^{\prime
}}^{2}-\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}\right\}
-\int_{Q}y_{\lambda }A_{0,\infty }^{-1}fdxdt-2\lambda \int_{Q}(A_{0,\infty
}^{-1}f)^{2}dxdt &\leq &0. \notag
\end{eqnarray}
We repeat the arguments developed in Theorem 3.2 and deduce by the
Dunford-Pettis theorem that $(z_{\lambda })_{\lambda }$ and $(w_{\lambda
})_{\lambda }$ are weakly compact in $L^{1}(Q).$ It also follows that
\begin{equation}
\left\Vert z_{\lambda }(T)\right\Vert _{V^{\prime }}\leq C,\mbox{ }
\int_{Q}j(t,z_{\lambda }(t,x))dxdt\leq C,\mbox{ }\int_{Q}j_{\lambda }^{\ast
}(t,w_{\lambda }(t,x))dxdt\leq C \label{82-1}
\end{equation}
independently of $\lambda $ (recall (\ref{bound}), (\ref{101-1}), (\ref
{101-2})). Passing to the limit in (\ref{79-0}) as $\sigma \rightarrow 0$
we get
\begin{equation*}
\left\Vert y_{\lambda }\right\Vert _{L^{\infty }(0,T;V^{\prime })}\leq C
\end{equation*}
where $C$ denotes various constants independent of $\lambda .$
Then, proceeding as in the proof of Theorem 3.2, we obtain from (\ref{82}
), by selecting subsequences, that
\begin{eqnarray*}
z_{\lambda } &\rightarrow &z^{\ast }\mbox{ weakly in }L^{1}(Q),\mbox{ as }
\lambda \rightarrow 0, \\
w_{\lambda } &\rightarrow &w^{\ast }\mbox{ weakly in }L^{1}(Q),\mbox{ as }
\lambda \rightarrow 0, \\
y_{\lambda } &\rightarrow &y^{\ast }\mbox{ weak-star in }L^{\infty
}(0,T;V^{\prime }),\mbox{ as }\lambda \rightarrow 0, \\
y_{\lambda }(T) &\rightarrow &y^{\ast }(T)\mbox{ weakly in }V^{\prime },
\mbox{ as }\lambda \rightarrow 0, \\
Aw_{\lambda } &\rightarrow &Aw^{\ast }\mbox{ weakly in }L^{1}(0,T;X^{\prime
}),\mbox{ as }\lambda \rightarrow 0, \\
\frac{dy_{\lambda }}{dt} &\rightarrow &\frac{dy^{\ast }}{dt}\mbox{ weakly in
}L^{1}(0,T;X^{\prime }),\mbox{ as }\lambda \rightarrow 0.
\end{eqnarray*}
By (\ref{79-3}) we get that $z^{\ast }=y^{\ast }$ a.e. on $Q$ and by (\ref
{82-1}) we obtain
\begin{equation}
\left\Vert y^{\ast }(T)\right\Vert _{V^{\prime }}\leq C,\mbox{ }
\int_{Q}j(t,y^{\ast }(t,x))dxdt\leq C,\mbox{ }\int_{Q}j^{\ast }(t,w^{\ast
}(t,x))dxdt\leq C. \label{82-3}
\end{equation}
The first inequality is obvious. For the second (if $j(t,r)\geq 0)$ Fatou's
lemma yields
\begin{equation}
\int_{Q}j(t,y^{\ast }(t,x))dxdt=\int_{Q}\liminf\limits_{\lambda \rightarrow
0}j(t,z_{\lambda }(t,x))dxdt\leq \liminf\limits_{\lambda \rightarrow
0}\int_{Q}j(t,z_{\lambda }(t,x))dxdt\leq C. \label{82-2}
\end{equation}
If $j$ is not nonnegative, we use again (\ref{si-9-7-01}) and, denoting $
\widetilde{j}(t,r)=j(t,r)-k_{1}r-k_{2}\geq 0,$ we write
\begin{equation*}
\int_{Q}\widetilde{j}(t,y^{\ast }(t,x))dxdt\leq \liminf\limits_{\lambda
\rightarrow 0}\int_{Q}\widetilde{j}(t,z_{\lambda }(t,x))dxdt,
\end{equation*}
whence we get
\begin{eqnarray*}
&&\int_{Q}j(t,y^{\ast }(t,x))dxdt-\int_{Q}(k_{1}y^{\ast }+k_{2})dxdt \\
&\leq &\liminf\limits_{\lambda \rightarrow 0}\int_{Q}j(t,z_{\lambda
}(t,x))dxdt-\int_{Q}(k_{1}y^{\ast }+k_{2})dxdt,
\end{eqnarray*}
i.e., (\ref{82-2}). As concerns the third inequality in (\ref{82-3}),
by (\ref{82-1}) we can write
\begin{eqnarray}
&&\int_{Q}\left( \frac{1}{2\lambda }\left\vert w_{\lambda }-(1+\lambda \beta
^{-1}(t,\cdot ))^{-1}w_{\lambda }\right\vert ^{2}+j^{\ast }(t,(1+\lambda
\beta ^{-1}(t,\cdot ))^{-1}w_{\lambda })\right) dxdt \label{82-4} \\
&\leq &\int_{Q}j_{\lambda }^{\ast }(t,w_{\lambda }(t,x))dxdt\leq C. \notag
\end{eqnarray}
Recalling that $j^{\ast }(t,x,\omega )\geq k_{3}(t,x)\omega +k_{4}(t,x)$ we
get that
\begin{equation*}
k_{3}\int_{Q}(1+\lambda \beta ^{-1}(t,\cdot ))^{-1}w_{\lambda }dxdt\leq
C+k_{4}\mbox{meas}(Q),
\end{equation*}
hence $\left( (1+\lambda \beta ^{-1}(t,\cdot ))^{-1}w_{\lambda }\right)
_{\lambda }$ is bounded in $L^{1}(Q).$ Then
\begin{equation*}
\int_{Q}\frac{1}{2\lambda }\left\vert w_{\lambda }-(1+\lambda \beta
^{-1}(t,\cdot ))^{-1}w_{\lambda }\right\vert ^{2}dxdt\leq
C+\int_{Q}(k_{3}(1+\lambda \beta ^{-1}(t,\cdot ))^{-1}w_{\lambda
}+k_{4})dxdt\leq C_{1}.
\end{equation*}
It follows that
\begin{equation*}
(1+\lambda \beta ^{-1}(t,\cdot ))^{-1}w_{\lambda }\rightarrow w^{\ast }\mbox{
weakly in }L^{1}(Q),\mbox{ as }\lambda \rightarrow 0.
\end{equation*}
Then we pass to the limit in (\ref{82-4}) as $\lambda \rightarrow 0$ (if
$j^{\ast }$ is nonnegative); otherwise, we use again (\ref{si-9-7-01}) for $
j^{\ast }.$
Passing to the limit in (\ref{76-00}) and (\ref{82}) as $\lambda \rightarrow
0$ we get
\begin{eqnarray}
\frac{dy^{\ast }}{dt}(t)+Aw^{\ast }(t) &=&f(t)\mbox{ a.e. }t\in (0,T),
\label{76-000} \\
y(0) &=&y_{0}, \notag
\end{eqnarray}
and, again by the weak lower semicontinuity,
\begin{eqnarray}
&&\int_{Q}(j(t,y^{\ast }(t,x))+j^{\ast }(t,w^{\ast }(t,x)))dxdt \label{83}
\\
&&+\frac{1}{2}\left\{ \left\Vert y^{\ast }(T)\right\Vert _{V^{\prime
}}^{2}-\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}\right\}
-\int_{Q}y^{\ast }A_{0,\infty }^{-1}fdxd\tau \leq 0. \notag
\end{eqnarray}
We have obtained that $(y^{\ast },w^{\ast })\in U,$ $y^{\ast }\in L^{\infty
}(0,T;V^{\prime })$ and so by Lemma 4.1 it follows that $y^{\ast }w^{\ast
}\in L^{1}(Q).$ Replacing the sum of the last two terms on the right-hand
side in (\ref{83}) by (\ref{60}) we get
\begin{equation*}
\int_{Q}(j(t,y^{\ast }(t,x))+j^{\ast }(t,w^{\ast }(t,x))-y^{\ast
}(t,x)w^{\ast }(t,x))dxdt\leq 0.
\end{equation*}
Recalling (\ref{si-4-1}) we obtain
\begin{equation}
\int_{Q}(j(t,y^{\ast }(t,x))+j^{\ast }(t,w^{\ast }(t,x))-y^{\ast
}(t,x)w^{\ast }(t,x))dxdt=0 \label{84}
\end{equation}
which eventually implies that
\begin{equation*}
j(t,y^{\ast }(t,x))+j^{\ast }(t,w^{\ast }(t,x))-y^{\ast }(t,x)w^{\ast
}(t,x)=0\mbox{ a.e. on }Q.
\end{equation*}
Therefore, we conclude that $w^{\ast }(t,x)\in \beta (t,y^{\ast }(t,x))$
a.e. on $Q,$ by the Legendre-Fenchel relations.
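The Legendre-Fenchel relations invoked here can be illustrated in a minimal model case (an assumption for illustration only, not the paper's general $j$): for $j(r)=r^{2}/2$ one has $j^{\ast }(w)=w^{2}/2$ and $\beta (y)=\{y\}$, and the Fenchel gap $j(y)+j^{\ast }(w)-yw=(y-w)^{2}/2$ vanishes exactly when $w\in \beta (y)$.

```python
# Model illustration of the Legendre-Fenchel relations: for the assumed
# toy potential j(r) = r^2/2, the conjugate is j*(w) = w^2/2 and the
# subdifferential is beta(y) = {y}.
def j(r):
    return 0.5 * r ** 2

def j_star(w):
    return 0.5 * w ** 2

def fenchel_gap(y, w):
    # Young's inequality: always >= 0; equals 0 iff w in beta(y).
    return j(y) + j_star(w) - y * w

assert fenchel_gap(3.0, 3.0) == 0.0                      # w in beta(y)
assert fenchel_gap(3.0, 2.0) == 0.5 * (3.0 - 2.0) ** 2   # gap = (y-w)^2/2 > 0
```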
On the other hand, due again to (\ref{60}) in Lemma 4.1, relation (\ref{84})
means in fact that $(y^{\ast },w^{\ast })$ attains the minimum in $(P),$ as
claimed in (\ref{Jinf}).
The uniqueness follows directly from (\ref{si-1}), using the monotonicity of $\beta
.$
Indeed, let $(y$,$\eta )$ and $(\widetilde{y},\widetilde{\eta })$ be two
solutions to (\ref{si-1}) corresponding to the same data, where $\eta
(t,x)\in \beta (t,x,y(t,x)),$ $\widetilde{\eta }(t,x)\in \beta (t,x,
\widetilde{y}(t,x))$ a.e. on $Q,$ $(y,\eta )$ and $(\widetilde{y},\widetilde{
\eta })$ belong to $U$ and $y,\widetilde{y}\in L^{\infty }(0,T;V^{\prime }).$
We write the equation satisfied by their difference,
\begin{eqnarray*}
\frac{d(y-\widetilde{y})}{dt}(t)+A(\eta -\widetilde{\eta })(t) &\ni &0\mbox{
a.e. }t\in (0,T), \\
(y-\widetilde{y})(0) &=&0,
\end{eqnarray*}
multiply the equation by $A_{0,\infty }^{-1}(y-\widetilde{y})(t)$ and
integrate over $(0,t),$ obtaining
\begin{equation*}
\frac{1}{2}\left\Vert (y-\widetilde{y})(t)\right\Vert _{V^{\prime
}}^{2}+\int_{0}^{t}\int_{\Omega }(\eta -\widetilde{\eta })(y-
\widetilde{y})dxd\tau =0.
\end{equation*}
But $\beta (t,\cdot )$ is maximal monotone, hence we get $\left\Vert y(t)-
\widetilde{y}(t)\right\Vert _{V^{\prime }}^{2}\leq 0,$ whence $y(t)=
\widetilde{y}(t)$ for all $t\in \lbrack 0,T].$
$\square $
We remark now that if $(y^{\ast },\eta ^{\ast })$ is the solution to (\ref
{si-1}), with $\eta ^{\ast }(t,x)\in \beta (t,y^{\ast }(t,x))$ a.e. on $Q$
and $y^{\ast }\in L^{\infty }(0,T;V^{\prime }),$ then it is the unique
solution to $(P),$ because by Lemma 4.1, $J(y,w)\geq 0$ for any $(y,w)\in U$
and the minimum is attained at $(y^{\ast },\eta ^{\ast })$ since $J(y^{\ast
},\eta ^{\ast })=0.$ We therefore conclude that (\ref{si-1}) is equivalent to
the minimization problem $(P)$.
\section{Conclusions}
This paper deals with the application of the Brezis-Ekeland principle to a
nonlinear diffusion equation with a monotonically increasing, time-dependent
nonlinearity whose potential has only a weak coercivity property.
The result states that the solution of the nonlinear equation can be
retrieved as the null minimizer of an appropriate minimization problem for a
convex functional involving the potential of the nonlinearity.
This approach is useful because it allows the existence proof in cases in
which, due to the generality of the nonlinearity, standard methods do not
apply. Also it can lead to a simpler numerical computation of the solution
to the equation by replacing its direct determination by the numeric
calculus of the minimum of a convex functional with a linear state equation.
With respect to the literature concerning existence results for (\ref{si-1}
), Theorem 4.3 provides existence under very general conditions on the
nonlinear function $\beta .$ As regards the assumption (\ref{si-9-5}) it can
be equivalently expressed as
\begin{equation*}
\limsup_{\left\vert r\right\vert \rightarrow \infty }\frac{j(t,-r)}{j(t,r)}
<\infty ,
\end{equation*}
(see \cite{barbu-daprato}). Since in specific real problems the solution to (
\ref{si-1}) is nonnegative, so that $\beta $ is defined on $[0,\infty ),$
this condition can be met by extending $\beta $ to $(-\infty ,0)$ in a
convenient way. For instance, conditions (\ref{si-9-2})-(\ref
{si-9-2-00})\ are satisfied for $\beta $ of the form
\begin{equation*}
\beta (t,x,r)=\mbox{sgn}(r)\log (\left\vert r\right\vert +a(t,x)),\mbox{ }
a\geq a_{0}>0
\end{equation*}
or
\begin{equation*}
\beta (t,x,r)=\mbox{sgn}(r)\exp (a(t,x)r^{2}),\mbox{ }a\geq a_{0}>0.
\end{equation*}
Concerning possible applications, we remark that problem (\ref{si-1}) can be
obtained by a change of variable in an equation of the form
\begin{equation*}
\frac{\partial (m(t,x)y)}{\partial t}-\Delta \beta _{0}(y)\ni f
\end{equation*}
which is associated with various physical models, for example fluid
diffusion in saturated-unsaturated deformable porous media with time- and
space-dependent porosity $m$ (see appropriate problems in \cite{af-gm-12},
\cite{gm-cc-08}), or absorption-desorption processes in saturated porous
media in which $m$ is the absorption-desorption rate of the fluid by the
solid. The Robin boundary condition arising in (\ref{si-1}) was chosen
because of its relevance in these physical models. Also, evolution equations
with nonautonomous operators can be associated with models in which the
boundary conditions are of time-dependent nonhomogeneous Dirichlet type, or
nonlocal as in population dynamics (see \cite{cuiama-2}).
In all these problems the coefficient $m$ or the coefficients in the
boundary conditions may have very low regularity, which makes it impossible
to treat (\ref{si-1}) by the nonlinear semigroup method for the
time-dependent case given in \cite{Crandall-Pazy-71}.
\noindent \textbf{Acknowledgment. }This work has been carried out within a
grant of the Romanian National Authority for Scientific Research,
CNCS-UEFISCDI, project number PN-II-ID-PCE-2011-3-0027. The author
acknowledges the fellowship awarded by CIRM-Fondazione Bruno Kessler, Italy,
in November 2012.
\end{document} |
\begin{document}
\title{Greedy methods, randomization approaches and multi-arm bandit algorithms
for efficient sparsity-constrained optimization}
\author{A. Rakotomamonjy~\IEEEmembership{Member,~IEEE}, S. Ko\c{c}o, L. Ralaivola
\IEEEcompsocitemizethanks{
\IEEEcompsocthanksitem AR is with University of Rouen, LITIS Lab. Most of
this work has been carried out while he was visiting LIF at Aix-Marseille University. \protect Email: [email protected]
\IEEEcompsocthanksitem SK is with University of Rouen, LITIS Lab. \protect Email: [email protected]
\IEEEcompsocthanksitem LR is with Aix-Marseille University, LIF. \protect Email: [email protected]
}
\thanks{This work is supported by Agence Nationale de la Recherche, project
GRETA 12-BS02-004-01.}
\thanks{Manuscript received August 2015; revised XXX.}
}
\begin{abstract}
Several sparsity-constrained algorithms such as
Orthogonal Matching Pursuit or the Frank-Wolfe algorithm
with sparsity constraints work by iteratively selecting
a novel atom to add to the current non-zero set of variables. This selection step
is usually performed by computing the gradient
and then by looking for the gradient component
with the maximal absolute value. This step can be computationally
expensive especially for large-scale and high-dimensional
data. In this work, we aim at accelerating these sparsity-constrained
optimization algorithms by exploiting the key observation
that, for these algorithms to work, one only needs the coordinate
of the gradient's top entry. Hence, we introduce
algorithms based on greedy methods and randomization
approaches that aim at cheaply estimating the gradient and
its top entry. Another of our contributions is to cast the
problem of finding the best gradient entry as best arm
identification in a multi-armed bandit problem. Owing
to this novel insight, we are able to provide a
bandit-based algorithm that directly estimates the top
entry in a very efficient way. We also provide theoretical
results stating that the resulting inexact Frank-Wolfe or Orthogonal
Matching Pursuit algorithms act, with high probability, similarly to their exact
versions.
We have carried out several
experiments showing that the greedy deterministic
and the bandit approaches we propose can achieve an
acceleration of an order of magnitude
while being as efficient as the exact gradient
when used in algorithms such as OMP, Frank-Wolfe
or CoSaMP.
\end{abstract}
\begin{IEEEkeywords}
Sparsity, Orthogonal Matching Pursuit, Frank-Wolfe algorithm, Greedy
methods, Best arm identification.
\end{IEEEkeywords}
\section{Introduction}
Over the last decade, there has been considerable interest in inference
problems featuring data of very high dimension and a small number of observations. Such problems occur in a wide variety of application domains, ranging
from computational biology and text mining to information retrieval and finance.
In order to learn from these datasets, statistical models are frequently designed so as to feature some sparsity properties. Hence, fueled by
the large interest induced by these application domains, an important amount of research
work in the machine learning, statistics and signal
processing communities has been devoted to sparse learning,
and many algorithms have been developed
for yielding models that use only a few dimensions of the data.
To obtain these models, one typically needs to solve a problem
of the form
\begin{equation}\label{eq:trueprob}
\min_{\mathbf{w}} L(\mathbf{w}) \quad \text{subject to } \|\mathbf{w}\|_0 \leq K
\end{equation}
where $L(\mathbf{w})$ is a smooth convex objective function that measures the goodness
of fit of the model, $\|\mathbf{w}\|_0$ is the $\ell_0$ pseudo-norm that
counts the number of non-zero components of the vector $\mathbf{w}$, and
$K$ is a parameter that controls the sparsity level.
One usual approach that has been widely considered is the use of
a convex and continuous surrogate of the $\ell_0$ pseudo-norm, namely
the $\ell_1$-norm. The resulting problem is the well-known Lasso
problem \cite{Tibshrani_Lasso_1996} and a large variety of algorithms for its resolution exist,
ranging from homotopy methods \cite{efron_lars} to the Frank-Wolfe (FW) algorithm \cite{jaggi13:_revis_frank_wolfe}. In the same flavour, non-convex continuous penalties are also common solutions for relaxing the $\ell_0$ pseudo-norm
\cite{rakotomamonjy2015dc,laporte13:_noncon_regul_featur_selec_rankin}.
Another possible approach is to consider greedy methods that provide
locally optimal solutions to problem (\ref{eq:trueprob}). In this context, a flurry of algorithms have been proposed, the most popular
ones being the \emph{Matching Pursuit} (MP) and the \emph{Orthogonal Matching Pursuit} (OMP) algorithms \cite{mallat_mp,pati93:_orthog_match_pursuit,merhej11:_embed_prior_knowl_within_compr}.
One common point of the aforementioned algorithms for solving problem
(\ref{eq:trueprob}) is that they require,
at each iteration, the computation of the objective function's gradient. In large-scale and high-dimensional settings, computing the gradient at
each iteration may be very time-consuming.
Stochastic gradient descent (SGD) algorithms are now classical methods
for avoiding the computation of the full gradient in large-scale
learning problems \cite{zhang04:_solvin,shalev-shwartz07:_pegas}.
Most of these works have been devoted to smooth composite optimization
although some efforts addressing $\ell_1$-regularized problems exist
\cite{shalev2011stochastic}. Recently, these SGD algorithms have been
further accelerated through the introduction of variance
reduction methods for gradient estimation \cite{johnson2013accelerating}.
In the context of problem~(\ref{eq:trueprob}), where a non-smooth non-convex
constraint appears, very few works have envisaged
the use of stochastic optimization. Nguyen et al. \cite{nguyen14:_linear}
have proposed stochastic versions of a gradient pursuit
algorithm.
Following a similar direction, by exploiting inexact gradient information,
we address the problem of accelerating
sparsity-inducing algorithms such as Matching Pursuit, Orthogonal
Matching Pursuit and the Frank-Wolfe algorithm with an $\ell_1$-ball
constraint.
However, unlike stochastic gradient approaches, the acceleration
we propose leverages the fact that, at each iteration,
these algorithms seek the gradient component with the
largest (absolute) value.
Hence, our main contribution in this paper is to propose novel
algorithms that efficiently find this top entry of
the gradient. By doing so, our objective is to design
efficient versions of the MP, OMP and FW algorithms while keeping intact
all the properties of these algorithms for sparse
approximation. Indeed, this becomes possible
if, at each iteration of MP, OMP or FW, our inexact gradient-based
estimation of the component with largest value
is the same as the one obtained with the exact gradient.
We propose two approaches, the first one based on a greedy deterministic algorithm and the second one based on a randomized method, whose aim is to build an inexact estimation of the gradient whose top entry is the same as the exact one.
Next, by casting the problem
as a best arm identification multi-armed bandit problem \cite{bubeck2009pure}, we are able to derive an algorithm that directly estimates
the best component of the gradient. Interestingly,
these algorithms are supported by theoretical
evidence that, with high probability, they are able
to retrieve the correct component of the gradient.
As a consequence, we show that MP, OMP and FW algorithms
employing these approaches for spotting
the correct component of the gradient behave
as their exact counterparts with high probability.
\revIII{This paper is an extended version of the conference paper \cite{rakotomamonjy2015more}. It provides full details on the context of the problem and
proposes a novel key contribution based on multi-arm bandits. In addition, it gives
enlightening insights compared to related works. An extended experimental analysis
also strengthens the results compared to the conference version.
}
The remainder of the paper is organized as follows. Section \ref{sec:sparse} presents sparsity-constrained algorithms, formalizes our problem
and introduces the key observation on which we build our acceleration strategy.
Section \ref{sec:algo} provides the different algorithms
we propose for efficiently estimating the extreme gradient component
of interest.
A discussion of related works is given in Section \ref{sec:discussion}.
Experimental results are depicted in Section \ref{sec:expe}, while
Section \ref{sec:conclusion} concludes the work and presents
different outlooks.
\section{Sparse Learning Algorithm with Extreme Gradient Component}
\label{sec:sparse}
In this section, we introduce the sparse learning problem
we are interested in, as well as some algorithms that are frequently
used for sparse learning. We then point out
the prominent common trait of those algorithms, namely
the {\em extreme gradient component} property, and discuss how its estimation can be employed to accelerate
some classes of sparse learning algorithms.
\subsection{Framework}
Consider the problem where we want to estimate a
relation between a set of $n$ samples gathered in a vector
$\mathbf{y} \in \mathbb{R}^n$ and the matrix $\mathbf{X} \in \mathbb{R}^{n \times d}$. In a sparse
signal approximation problem, $\mathbf{X}$ would be a matrix whose columns are
the elements of a dictionary and $\mathbf{y}$ the target signal,
while in a machine learning problem, the $i$-th row of the matrix $\mathbf{X}$,
denoted $\mathbf{x}_i^\top$, $\mathbf{x}_i \in \mathbb{R}^d$, depicts
the features of the $i$-th example and $y_i$ is the label or target
associated with that example. In the sequel, we denote by $x_{i,j}$ the
entry of $\mathbf{X}$ at the $i$-th row and $j$-th column.
Our objective is to learn the relation
between $\mathbf{y}$ and $\mathbf{X}$ through a linear model of the data, denoted $\mathbf{X}\mathbf{w}$,
by looking for the vector $\mathbf{w}$ that solves problem (\ref{eq:trueprob})
when the objective function is of the form
$$L(\mathbf{w})= \sum_{i=1}^n \ell(y_i,g(\mathbf{w}^\top \mathbf{x}_i)),$$
where $\ell$ is an individual loss function that measures
the discrepancy between a true value $y$ and its estimation, and
$g(\cdot)$ is a given differentiable function that
depicts the (potential) non-linear dependence of the loss on $\mathbf{w}$.
Typically,
$\ell$ might be the least-square error function, which leads to
$L(\mathbf{w})=\frac{1}{2}\|\mathbf{y} - \mathbf{X}\mathbf{w}\|_2^2$,
or the logistic loss, for which $L(\mathbf{w})=\sum_{i=1}^n \log(1+ \exp\{-y_i \mathbf{x}_i^\top \mathbf{w}\}).$
In the sequel, we present two algorithms that solve problem~\eqref{eq:trueprob} by a greedy method and by a continuous convex relaxation
of the $\ell_0$ pseudo-norm, respectively.
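To fix ideas, both example losses can be sketched in a common form; the short snippet below (our illustrative names, assuming NumPy, not the authors' code) shows how each loss yields a per-sample derivative vector $\mathbf{r}$ whose product with $\mathbf{X}^\top$ gives the full gradient.

```python
import numpy as np

def residual_vector(X, y, w, loss="squared"):
    """Per-sample loss derivatives for the two losses above.
    Illustrative sketch: for both choices the full gradient
    factorizes as X^T r, with r built from scalar derivatives."""
    z = X @ w
    if loss == "squared":
        # L(w) = 0.5 * ||y - X w||^2  ->  dL/dz_i = z_i - y_i
        return z - y
    # logistic: L(w) = sum_i log(1 + exp(-y_i z_i))
    #           ->  dL/dz_i = -y_i / (1 + exp(y_i z_i))
    return -y / (1.0 + np.exp(y * z))

def full_gradient(X, y, w, loss="squared"):
    return X.T @ residual_vector(X, y, w, loss)
```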
\subsection{Algorithms}
\subsubsection{Gradient Pursuit}
This algorithm is a generalization of the greedy algorithm known
as \emph{Orthogonal Matching Pursuit} (OMP) to generic loss
functions \cite{blumensath2008gradient_ieee}. It can be directly applied to
our problem given in Equation (\ref{eq:trueprob}).
Similarly to OMP, the gradient pursuit algorithm is a greedy iterative procedure which, at each iteration, selects
the largest absolute coordinate of the loss gradient. This coordinate is added to the set of already selected
elements, and the new iterate is obtained as the solution of the original optimization problem restricted to these selected elements while keeping
the other ones at $0$. The detailed procedure is given
in Algorithm \ref{algo:gp}.
\begin{algorithm}[t]
\caption{Gradient Pursuit Algorithm \label{algo:gp}}
\begin{algorithmic}[1]
\STATE set $k = 0$, initialize $\mathbf{w}_0 = \mathbf{0}$, $\Gamma_0 = \emptyset$
\FOR{$k=0,1, \cdots$}
\STATE $i^\star= \arg \max_i |\nabla L(\mathbf{w}_k)|_i$
\STATE $\Gamma_{k+1}= \Gamma_k \cup \{i^\star\}$
\STATE $\mathbf{w}_{k+1} = \arg \min L(\mathbf{w})$ over $\Gamma_{k+1}$ with $\mathbf{w}_{\Gamma_{k+1}^C}= \mathbf{0}$
\ENDFOR
\end{algorithmic}
\end{algorithm}
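For concreteness, a minimal NumPy sketch of the gradient pursuit loop for the least-squares loss might read as follows (function and variable names are ours, not from any released code):

```python
import numpy as np

def gradient_pursuit(X, y, K):
    """Sketch of gradient pursuit (OMP-style) for the least-squares
    loss L(w) = 0.5 * ||y - X w||^2, whose gradient is X^T (X w - y).
    Illustrative code, not the authors' implementation."""
    n, d = X.shape
    w = np.zeros(d)
    support = []                                 # the set Gamma_k
    for _ in range(K):
        grad = X.T @ (X @ w - y)                 # exact gradient
        i_star = int(np.argmax(np.abs(grad)))    # largest absolute entry
        if i_star not in support:
            support.append(i_star)
        w = np.zeros(d)                          # refit on the support only
        w[support] = np.linalg.lstsq(X[:, support], y, rcond=None)[0]
    return w, support
```

On a dictionary with orthonormal columns and noiseless data, this sketch provably recovers the true support in $K$ steps.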
While conceptually simple, this algorithm comes with theoretical guarantees
on its ability to recover the exact underlying sparsity pattern of the model, for different types of loss functions \cite{pati93:_orthog_match_pursuit,aravkin2014orthogonal,lozano2011group}.
Several variations of this algorithm have been proposed, ranging from methods exploring new pursuit directions instead of the gradient \cite{blumensath2008gradient_ieee} to methods making only slight changes to the original one that have strongly impacted the ability of the algorithm to recover signals.
For instance, the \emph{CoSaMP} \cite{needell09:_cosam} and {GraSP} \cite{bahmani2013greedy} algorithms select the top $2K$ entries of the absolute gradient,
$K$ being the desired sparsity level, optimize over these entries and the already selected $K$ ones, and prune the resulting estimate to be $K$-sparse. In addition to being efficient, these algorithms output sparse vectors whose distances from the true sparse optimum are bounded.
\subsubsection{Frank-Wolfe algorithm}
\label{sec:fw}
The Frank-Wolfe (FW) algorithm aims at solving the following problem
\begin{equation}
\label{eq:fw}
\min_{\mathbf{w} \in C} L(\mathbf{w})
\end{equation}
with $\mathbf{w} \in \mathbb{R}^d$, $L$ a convex and differentiable function
with Lipschitz gradient and $C$ a compact subset of $\mathbb{R}^d$.
The FW algorithm, given in Algorithm \ref{algo:fw}, is a straightforward procedure that solves problem (\ref{eq:fw})
by iteratively looking for a search direction and then updating the current
iterate. The search direction $\mathbf{s}_k$ is obtained from
the following convex optimization problem (line~\ref{alg:sk} of Alg.~\ref{algo:fw})
\begin{equation}
\mathbf{s}_k= \arg \min_{\mathbf{s} \in C} \mathbf{s}^\top \nabla L(\mathbf{w}_k),
\label{eq:s_k}
\end{equation}
which may be efficiently solved,
depending on the constraint set $C$. For achieving sparsity,
we typically choose $C$ as an $\ell_1$-norm ball,
\emph{e.g.}, $
C=\{\mathbf{w} : \|\mathbf{w}\|_1 \leq 1\}$,
which turns~\eqref{eq:s_k} into
a linear program.
\begin{algorithm}[t]
\caption{Frank-Wolfe Algorithm \label{algo:fw}}
\begin{algorithmic}[1]
\STATE set $k = 0$, initialize $\mathbf{w}_0=\mathbf{0}$
\FOR{$k=0,1, \cdots$}
\STATE \label{alg:sk} $\mathbf{s}_k= \arg \min_{\mathbf{s} \in C} \mathbf{s}^\top \nabla L(\mathbf{w}_k)$
\STATE $\mathbf{d}_k= \mathbf{s}_k - \mathbf{w}_k$
\STATE set, linesearch or optimize $\gamma_k\in[0,1]$
\STATE $\mathbf{w}_{k+1}= \mathbf{w}_k + \gamma_k \mathbf{d}_k$
\ENDFOR
\end{algorithmic}
\end{algorithm}
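As an illustration, here is a minimal sketch of the FW loop on the $\ell_1$ ball, where the linear subproblem has the closed-form solution $-\operatorname{sign}(g_j)\,\mathbf{e}_j$ with $j$ the index of the largest absolute gradient entry (our code, using the standard $2/(k+2)$ step size as one possible choice for $\gamma_k$):

```python
import numpy as np

def frank_wolfe_l1(grad, w0, n_iter=200, radius=1.0):
    """Sketch of the FW algorithm on C = {w : ||w||_1 <= radius}.
    The subproblem argmin_{s in C} s^T g is solved in closed form by a
    signed vertex of the l1 ball: only the top gradient entry is needed."""
    w = w0.copy()
    for k in range(n_iter):
        g = grad(w)
        j = int(np.argmax(np.abs(g)))      # extreme gradient component
        s = np.zeros_like(w)
        s[j] = -radius * np.sign(g[j])     # vertex of the l1 ball
        gamma = 2.0 / (k + 2)              # standard step-size schedule
        w = w + gamma * (s - w)
    return w
```

For instance, minimizing $\frac{1}{2}\|\mathbf{w} - \mathbf{c}\|^2$ with $\mathbf{c}=(2,0)$ over the unit $\ell_1$ ball yields $\mathbf{w}=(1,0)$.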
Despite its simplistic nature, the FW algorithm has been shown to be linearly
convergent \cite{guelat86:_some_wolfes,jaggi13:_revis_frank_wolfe}.
Interestingly, it can also be shown that convergence is preserved
as long as the following condition holds
\begin{equation}
\mathbf{s}_k^\top \nabla L(\mathbf{w}_k) \leq \min_{\mathbf{s} \in C} \mathbf{s}^\top\nabla L(\mathbf{w}_k) + \epsilon,
\label{eq:approximate}
\end{equation}
where $\epsilon$ depends on the smoothness of $L$ and the step-size $\gamma_k$.
In general cases where the minimization problem~\eqref{eq:s_k} is expensive to solve,
this condition suggests that an approximate solution $\mathbf{s}_k$ may be sufficient,
provided that the true gradient is available. Similarly, if only
inexact gradient information $\hat \nabla L(\mathbf{w}_k)$ is available, convergence is still
guaranteed under the condition that
$
\mathbf{s}_k^\top \hat \nabla L(\mathbf{w}_k) \leq \min_{\mathbf{s} \in C} \mathbf{s}^\top\nabla L(\mathbf{w}_k) + \epsilon,
$
and some conditions relating $\hat \nabla L(\mathbf{w}_k)$ and $\nabla L(\mathbf{w}_k)$.
For instance, if $C$ is a unit-norm ball associated with some norm $\|\cdot \|$
and $\mathbf{s}_k$ is a minimizer of $\min_{\mathbf{s} \in C} \mathbf{s}^\top \hat \nabla L(\mathbf{w}_k)$, i.e.
$\mathbf{s}_k=\arg\min_{\mathbf{s} \in C} \mathbf{s}^\top \hat \nabla L(\mathbf{w}_k),$ then,
in order to ensure convergence, it is sufficient to have \cite{jaggi13:_revis_frank_wolfe}
\begin{equation}
\label{eq:condition}
\|\hat \nabla L(\mathbf{w}_k) -\nabla L(\mathbf{w}_k)\|_\star \leq \epsilon,
\end{equation}
where $\|\cdot\|_{\star}$ is the conjugate norm associated with $\|\cdot\|$.
Interestingly, the above equation provides a guarantee on the
convergence of an inexact-gradient Frank-Wolfe algorithm,
but unfortunately, the precision needed on the inexact gradient is stated with respect to the exact one, which is not available.
However, while condition \eqref{eq:condition} is impractical,
it conveys the important idea that inexact gradient computations
may be sufficient for solving sparsity-constrained optimization
problems.
\subsection{Leveraging the extreme gradient component estimation}
As stated above, the gradient pursuit algorithm needs to solve at each iteration the following problem
$
j^\star = \arg \max_j |\nabla L(\mathbf{w}_k)|_j,
$
i.e., the goal is to find at each iteration the gradient component with the largest absolute value.
In the Frank-Wolfe algorithm, similar situations occur, where finding $\mathbf{s}_k$
corresponds to looking for an extreme component of the (absolute)
gradient. For instance, when the constraint set is the
$\ell_1$-norm ball $
C_1=\{\mathbf{w} \in \mathbb{R}^d: \|\mathbf{w}\|_1 \leq 1\}$
or the positive simplex constraint
$
C_2=\{\mathbf{w} \in \mathbb{R}^d: \|\mathbf{w}\|_1 = 1, w_j \geq 0\}$,
\rev{and we denote
$$
\mathbf{s}^\star= \arg\min_{\mathbf{s} \in C_1} \mathbf{s}^\top\nabla L(\mathbf{w}_k)\text{ and } j^\star=\arg\max_j \left|\nabla L(\mathbf{w}_k)\right|_j ,
$$
or
$$
\mathbf{s}^\star = \arg\min_{\mathbf{s} \in C_2} \mathbf{s}^\top\nabla L(\mathbf{w}_k) \text{ and } j^\star = \arg\min_j [\nabla L(\mathbf{w}_k)]_j ,
$$
then $\mathbf{s}^\star= \mathbf{e}_{j^\star}$ (up to sign in the case of $C_1$), with $\mathbf{e}_{j^\star}$
the $j^\star$-th canonical vector of $\mathbb{R}^d$.}
Hence, as in Matching Pursuit, OMP or the gradient pursuit
algorithm, for specific choices of $C$ we are interested
neither in the full gradient nor in its extreme values, but only in the coordinate of the gradient component with the smallest or largest
(absolute) value.
Our objective in this paper is to leverage these observations to
accelerate sparsity-inducing algorithms which look for one extreme
component of the gradient. In particular, we are interested in
situations where the gradient is expensive to compute, and we aim at
providing computation-efficient algorithms that either produce
approximate gradients whose extreme component of interest is the same
as the exact gradient's one, or identify the extreme component without
computing the whole (exact) gradient.
\section{Looking for the extreme gradient component}
\label{sec:algo}
This section formalizes the problem of identifying the
extreme gradient component and provides different
algorithms for its resolution. We first introduce
a greedy approach, then a randomized one, and finally show
how this problem can be cast as best arm identification
in a multi-armed bandit framework.
\subsection{The problem}
As mentioned in Section \ref{sec:sparse}, we are interested in learning problems
where the objective function is of the form
$$
\sum_i \ell( y_i, g(\mathbf{w}^\top \mathbf{x}_i)).
$$
The gradient of our objective function is thus given by
\begin{align}
\nabla L(\mathbf{w})& = \sum_i \ell^\prime(y_i, g(\mathbf{w}^\top\mathbf{x}_i))\, g^\prime(\mathbf{w}^\top\mathbf{x}_i)\, \mathbf{x}_i = \mathbf{X}^\top\mathbf{r}
\end{align}
where $\mathbf{r} \in \mathbb{R}^n$ is the vector whose $i$-th component is
$\ell^\prime(y_i, g(\mathbf{w}^\top\mathbf{x}_i))\, g^\prime(\mathbf{w}^\top\mathbf{x}_i)$.
\rev{This particular form entails that the gradient may be computed iteratively. Indeed,
the sum $\nabla L(\mathbf{w})= \sum_{i=1}^n \mathbf{x}_i r_i$ is invariant to
the order in which the index $i$ describes the set $[1,...,n]$.
This makes possible the computation, at each iteration $t$, of an approximate gradient $\hat \nabla L_t$.
Denoting $\mathcal{I}_t$ the first
$t$ indices used for computing the sum and $i_{t+1}$ the
$(t+1)$-th index, we have, at iteration $t$, $\hat \nabla L_{t}= \sum_{i \in \mathcal{I}_{t}} \mathbf{x}_i r_i $
and
\begin{equation}\label{eq:accumulation}
\hat \nabla L_{t+1} = \hat \nabla L_{t} + \mathbf{x}_{i_{t+1}} r_{i_{t+1}}. \end{equation}
}
According to this framework, our objective is then to find an efficient way
to compute an approximate gradient $\hat \nabla L_t$ whose desired
extreme entry is equal to that of the exact gradient. \rev{Note that
the computational cost of the exact gradient $\mathbf{X}^\top\mathbf{r}$ is
about $O(nd)$ and a naive computation of the residual $\mathbf{r}$ has
the same cost, which leads to a global complexity of
$O(2nd)$. However, because $\mathbf{w}$ is typically a sparse vector in Gradient
Pursuit or in the Frank-Wolfe algorithm, we compute
the residual vector $\mathbf{r}$ as $\mathbf{r}=\mathbf{y} - \mathbf{X}_\Omega \mathbf{w}_\Omega$,
where $\Omega$ is the set of indices of the non-zero elements of the $k$-sparse vector $\mathbf{w}$.
Hence, computing $\mathbf{r}$ only has a cost of $O(nk)$
and does not require a full pass over all the elements of
$\mathbf{X}$. The main objective of our contributions is to estimate
the extreme entry of $\mathbf{X}^\top \mathbf{r}$ with algorithms having a complexity
lower than $O(nd)$.
}
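The $O(nk)$ residual computation described above amounts to touching only the columns of $\mathbf{X}$ indexed by $\Omega$; a one-line sketch (our notation, assuming a dense NumPy array):

```python
import numpy as np

def residual_sparse(X, y, w):
    """Compute r = y - X_Omega w_Omega in O(nk) for a k-sparse w,
    instead of the O(nd) dense product (illustrative helper)."""
    omega = np.flatnonzero(w)          # support of the k-sparse iterate
    return y - X[:, omega] @ w[omega]
```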
We propose
and discuss, in the sequel, three different approaches.
\subsection{Greedy deterministic approach}
\begin{algorithm}[t]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\caption{Greedy deterministic algorithm to compute
$\hat \nabla L$ }
\begin{algorithmic}[1]
\label{algo:greedy}
\REQUIRE $\mathbf{r}$, $\{\|\mathbf{x}_i\|\}$, $\mathbf{X}$
\STATE [values, indices]=sort $\{|r_i| \|\mathbf{x}_i\| \}_i$ in decreasing order
\STATE $\hat \nabla L_t = \mathbf{0}$, $t= 0$
\REPEAT
\STATE $i=$indices$(t+1)$
\STATE $\hat \nabla L_{t+1}= \hat \nabla L_{t} + \mathbf{x}_i r_i $
\STATE $t = t+1$
\UNTIL{stopping criterion over $\hat \nabla L_{t+1}$ is met }
\ENSURE $\hat \nabla L_{t+1}$
\end{algorithmic}
\end{algorithm}
The first approach we propose is a greedy one which, at iteration
$t$, looks for the best index $i$ such that $r_i \mathbf{x}_i$ optimizes
a criterion depending on $\hat \nabla L_t$ and $\nabla L$.
Let $\mathcal{I}_t$ be the set of indices of the examples chosen in the first $t$ iterations for computing $\hat \nabla L_t$.
At iteration $t+1$, our goal is to find the example $i^\star$ that is the solution of the following problem:
\begin{align}
i^\star &= \operatorname{argmin}\limits_{i\in \{1,\ldots,n\} \setminus \mathcal{I}_t}
\|\nabla L -\hat \nabla L_{t} - \mathbf{x}_i r_i\|,
\end{align}
with $\hat \nabla L_{t+1}= \hat \nabla L_{t}+ \mathbf{x}_i r_i$.
The solution of this problem is thus the vector
$r_i\mathbf{x}_i$ whose difference in norm from the current gradient estimation
residual $\nabla L - \hat \nabla L_{t}$ is minimal. While simple, this solution
cannot be computed because $\nabla L$ is not accessible.
Hence, we have to resort to an approximation based on
the following equivalent problem:
\begin{align}
\label{eq:approxgraddiff}
i^\star &= \operatorname{argmax}\limits_{i\in \{1,\ldots,n\} \setminus \mathcal{I}_t} - \|\nabla L - \hat \nabla L_{t+1}\| \\
&= \operatorname{argmax}\limits_{i\in \{1,\ldots,n\} \setminus \mathcal{I}_t} \|\nabla L - \hat \nabla L_t\| - \|\nabla L - \hat \nabla L_{t+1}\| \nonumber
\end{align}
where we use the fact that $\|\nabla L - \hat \nabla L_t\|$ does
not depend on $i$.
By upper bounding the above objective value, one can derive the best
choice of $r_i \mathbf{x}_i$ as the one achieving the largest variation of the
norm of the gradient estimation residual. Indeed, by the triangle inequality,
\begin{equation}
\|\nabla L - \hat \nabla L_t\| - \|\nabla L - \hat \nabla L_{t+1}\| \leq \| \hat \nabla L_{t+1} - \hat \nabla L_t\| \notag
\end{equation}
The right-hand side of this inequality can be further simplified:
\begin{align}\label{eq:approxgraddiff2}
\| \hat \nabla L_{t+1} - \hat \nabla L_t\|
& =\| \hat \nabla L_t + \mathbf{x}_{i} r_{i} - \hat \nabla L_t\| \notag \\
& =\| \mathbf{x}_{i} r_{i} \| = \| \mathbf{x}_i\| | r_{i}|.
\end{align}
Equation \eqref{eq:approxgraddiff2} suggests that the index $i$ to be chosen at iteration $t+1$ is the one with the largest absolute residual weighted by its example norm.
Simply put, at the first iteration the algorithm chooses the index
with the largest value $\| \mathbf{x}_i\| | r_{i}|$, at the second one it selects the second best, and so forth.
That is, the examples are considered in decreasing order of the weighted value of their residuals. Pseudo-code of the approach
is given in Algorithm \ref{algo:greedy}.
Note that for this method, at iteration $t=n$ we recover the exact gradient,
$$
\hat \nabla L_n = \nabla L.
$$
Hence, when $t=n$, we are assured to retrieve
the correct extreme entry of the gradient, at the expense of the extra
computations needed for running this greedy approach compared to
a plain computation of the full gradient. On the other hand,
if we stop this gradient estimation procedure before
$t=n$, we save computational effort at the risk of missing the correct extreme
entry.
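A NumPy sketch of this greedy accumulation (our names; with $t_{\max}=n$ it returns exactly $\mathbf{X}^\top\mathbf{r}$):

```python
import numpy as np

def greedy_grad_estimate(X, r, t_max):
    """Sketch of the greedy deterministic estimator: accumulate the
    terms r_i x_i in decreasing order of |r_i| * ||x_i|| and stop
    after t_max of the n terms.  Illustrative code."""
    norms = np.linalg.norm(X, axis=1)          # ||x_i|| for each row
    order = np.argsort(-np.abs(r) * norms)     # largest contributions first
    grad_hat = np.zeros(X.shape[1])
    for i in order[:t_max]:
        grad_hat += r[i] * X[i]                # accumulation step
    return grad_hat
```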
\subsection{Matrix-Vector Product as Expectations}
\label{sec:mvprod}
\begin{algorithm}[t]
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\caption{Computing $\hat \nabla L$ as an empirical average }
\begin{algorithmic}[1]\label{algo:empirical}
\REQUIRE $\mathbf{r}$, $\|\mathbf{x}_i\|$, $\mathbf{X}$
\STATE build $\mathbf{p}\in\Delta_n^*$, \emph{e.g.}, $p_i \propto \frac{1}{n}$
or $ p_i\propto |r_i|\|\mathbf{x}_{i}\|$
\REPEAT
\STATE randomly draw $i \in 1,\cdots,n$ according to $\mathbf{p}$
\STATE $\hat \nabla L_{t+1}= \hat \nabla L_{t} + \frac{1}{p_i}\mathbf{x}_i r_i $
\STATE $t = t+1$
\UNTIL{stopping criterion over $\hat \nabla L_{t+1}$ is met }
\ENSURE $\hat \nabla L_{t+1}$
\end{algorithmic}
\end{algorithm}
The problem of finding the extreme component of the gradient can also be addressed from the point of view of randomization, as described
in Algorithm \ref{algo:empirical}.
The approach consists in considering the computation
of $\mathbf{X}^\top \mathbf{r}$ as the expectation of a given random variable.
Recall that $\mathbf{X}$ is composed of the rows $\{\mathbf{x}_i^\top\}_{i=1}^n$
with $\mathbf{x}_i \in \mathbb{R}^d$.
Hence, the matrix-vector product $\mathbf{X}^\top\mathbf{r}$ can be rewritten:
\begin{equation}
\label{eq:decomposition}
\mathbf{X}^\top\mathbf{r}=\sum_{i=1}^{n}r_i\mathbf{x}_{i}.
\end{equation}
From now on, given some integer $n$, $\Delta_n^*$ denotes the interior of the probability simplex of size $n$:
\begin{equation}
\label{eq:psimplex}
\Delta_n^*\doteq\left\{\mathbf{p}=[p_1\cdots p_n]:\sum_{i=1}^np_i=1,\;p_i> 0,i=1,\ldots,n\right\}.
\end{equation}
For any element $\mathbf{p}=(p_i)_{i\in[n]}\in\Delta_n^*$ we introduce a random vector
$C$ that takes values in the set
$$\mathcal{C}\doteq\left\{\mathbf{c}_i\doteq r_i\mathbf{x}_{i}/p_i:i=1,\ldots,n\right\}$$
so that $\mathbb{P}(C=\mathbf{c}_i)=p_i$.
This way,
\begin{equation}\label{eq:expectationC}
\mathbb{E} C = \sum_{i=1}^np_i\mathbf{c}_i=\sum_{i=1}^np_ir_i\mathbf{x}_{i}/p_i=\sum_{i=1}^n r_i\mathbf{x}_{i}=\mathbf{X}^\top\mathbf{r}.
\end{equation}
Hence, if $C_1,\cdots,C_s$ are independent copies of $C$ and $\hat{C}^s$ is
defined as
$
\hat{C}^s\doteq \frac{1}{s}\sum_{i=1}^s C_i
$,
then
$
\mathbb{E}\hat{C}^s=\mathbf{X}^\top\mathbf{r}.
$
$\hat{C}^s$ is thus an unbiased estimator of the matrix-vector product that we are interested in, \emph{i.e.}, the gradient of our objective function.
According to the above, a relevant approach for estimating the extreme component of the gradient is to randomly sample $s$ copies of $C$, to average them,
and then to look for the extreme component of this estimated gradient.
Interestingly, this approach based on randomized matrix multiplication
can be related to our deterministic approach. Indeed, a result of
\cite{drineas2006fast} (Lemma 4) states that the element $\mathbf{p}\in\Delta_n^*$ that minimizes $\mathbb{E}[\|\mathbf{X}^\top\mathbf{r} - \hat{C}^s\|^2_F]$ is such that
\begin{equation}\label{eq:best}
p_i\propto |r_i|\|\mathbf{x}_{i}\|.
\end{equation}
This suggests that vectors $\mathbf{c}_i$ with large values of $|r_i|\|\mathbf{x}_{i}\|$ have a higher probability of being sampled. This resonates with the greedy
deterministic approach, in which the vectors $\mathbf{x}_ir_i$ are accumulated
in decreasing order of $|r_i|\|\mathbf{x}_{i}\|$.
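The randomized estimator $\hat{C}^s$ can be sketched as follows (illustrative code, not from the paper; the `importance` flag, a name of ours, switches between uniform sampling and $p_i \propto |r_i|\|\mathbf{x}_i\|$):

```python
import numpy as np

def randomized_gradient_estimate(X, r, s, rng=None, importance=True):
    """Unbiased estimate hat{C}^s of X^T r, averaging s independent
    draws of C = r_i x_i / p_i (index i drawn with probability p_i).
    Illustrative sketch of the randomized approach."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    if importance:
        p = np.abs(r) * np.linalg.norm(X, axis=1)
        p = p / p.sum()
    else:
        p = np.full(n, 1.0 / n)
    idx = rng.choice(n, size=s, p=p)
    # each sampled term is r_i x_i / p_i; their mean estimates X^T r
    return (X[idx] * (r[idx] / p[idx])[:, None]).mean(axis=0)
```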
\subsection{Best arm identification and multi-armed bandits}
The two preceding approaches seek to approximate the
gradient, at a given level of accuracy that has yet to be defined,
and then to evaluate the coordinate of its extreme component.
Yet, the problem of finding the extreme component coordinate of a gradient vector
obtained from a matrix-vector multiplication can also be posed
as the problem of finding the best arm in a multi-armed bandit
problem. In a nutshell, given a slot machine with multiple arms, the goal in the bandit problem is to find the arm that maximizes the expected reward or minimizes the loss.
For this, an iterative procedure is used, where at each step a forecaster selects an arm, based on its previous actions, and receives a reward or observes a loss.
Depending on how the reward is obtained, the problem can be stochastic
(the reward/loss is drawn from a probability distribution) or non-stochastic.
Bubeck et al. \cite{bubeck2009pure} propose an extensive review of these methods for various settings.
We cast our problem of finding the extreme gradient component as a
best arm identification problem as follows. In the remainder of this section, we suppose that we look for a minimum gradient component, and thus we look for the arm with minimal loss instead of maximal reward.
We consider that the arms are the components of the gradient
(we have $d$ arms) and at each pull of a given arm, we observe
a loss that is built from one term of the gradient matrix-vector
multiplication $\mathbf{X}^\top \mathbf{r}$, as made clear in the sequel.
In the stochastic setting, we consider a framework similar
to the one we described in Section \ref{sec:mvprod}.
We model the loss obtained from the $k$-th pull of
arm $j \in \{1,\cdots,d\}$ as a random variable $V$, independent of $k$, that takes values in the set
$$\left\{v_{j,i} \doteq r_i x_{i,j}/p_i:i=1,\ldots,n\right\}$$
so that $\mathbb{P}(V=v_{j,i})=p_i$. From this definition, the expected
loss of arm $j$ is
$$ \mathbb{E} V = \sum_{i=1}^np_i v_{j,i} = \sum_{i=1}^n r_i x_{i,j} =
(\mathbf{X}^\top\mathbf{r})_j,
$$
which is the $j$-th component of our gradient vector.
In this setting,
given a certain number of pulls, each pull providing a
realization of the random variable $V$ of the chosen arm,
the objective of a best arm identification algorithm is to provide
a recommendation about the arm with minimal
expected loss, which in our case is the
coordinate of smallest value in
the $d$-dimensional vector $\mathbf{X}^\top\mathbf{r}$.
Several algorithms for identifying this best
arm have been proposed in the literature. Most of them
are built around an empirical average loss statistic,
which can be computed, after $s$ pulls of
an arm $j$, as
$
\hat V_{j,s} = \frac{1}{s} \sum_{k=1}^s v_{j,i(k,j)}
$,
where $i(k,j)$ is the index $i$ drawn at
the $k$-th pull of arm $j$.
Some of the most
interesting algorithms are the \emph{successive reject} \cite{audibert2010best}
and \emph{successive halving} \cite{karnin2013almost} algorithms which, given a fixed budget
of pulls, iteratively discard after some predefined
number of pulls (say $s$) the worst arm or the worst half of the arms, respectively,
according to the values $\{\hat V_{j,s}\}_{j=1}^d$.
These approaches are relatively simple, and the \emph{successive halving} approach is depicted in detail in Algorithm \ref{algo:sh}.
They directly provide some guarantees on the
probability of correctly estimating the extreme component of the gradient.
For instance, for a fixed budget of pulls $T$, under some minor
and easily satisfied conditions on $\{v_{j,i}\}$, the \textit{successive
halving} algorithm correctly identifies the minimum gradient component with
probability at least $1-3 \log_2 d \cdot \exp(-\frac{T}{8 H_2 \log_2 d})$,
where $H_2$ is a problem-dependent constant (the larger
$H_2$ is, the harder the problem is).
However, these two algorithms have the drawback of working on individual
entries $x_{i,j}$ of the matrix $\mathbf{X}$. Hence, the overhead due to single memory accesses,
compared to those needed for accessing chunks of memory, may hinder
the computational gain obtained by identifying the component
with minimal gradient value without computing the full gradient.
For the \textit{successive halving} algorithm, one way of overcoming this issue is to consider \emph{non-i.i.d} sampling
of the arms' losses. As such, we consider that at each iteration, the losses
generated by pulling the remaining arms come from the same component $i$
of the residual. \rev{This approach has
the advantage of working on the full vector $\mathbf{x}_i r_i$,
allowing thus efficient memory caching, instead of
on individual elements of $\mathbf{X}$.
Let $\mathcal{A}_{s^\prime}$ and $\mathcal{A}$ denote the sets
of components $i$ that have been drawn after
$s^\prime$ pulls and during the subsequent $s^\dagger$ pulls, with
$s = s^\prime + s^\dagger$; then
the loss for arm $j$ after $s$ pulls can be defined as
\begin{equation}\label{eq:noniid}
\hat V_{j,s}
= \hat V_{j,s^\prime} + \sum_{i \in \mathcal{A}} \frac{r_i x_{i,j}}{p_i},
\end{equation}
where $\hat V_{j,0}=0$ and the set
$\mathcal{A}$ is the same for all arms.
In practice, as implemented in the numerical simulations we provide, each component $i$
of $\mathcal{A}$ is randomly chosen according to a uniform distribution over $\{1,\cdots,n\}$.
The \emph{non-iidness} of the stochastic process
comes from the fact that the arm losses
are dependent through the set $\mathcal{A}$. However,
this does not change the fact that the empirical loss $\hat V_{j,s}$ is still
a relevant estimate of the expected loss of arm $j$.}
\begin{algorithm}[t]
\caption{Successive Halving to find the minimum gradient component\label{algo:sh}}
\begin{algorithmic}[1]
\STATE \textbf{inputs:} $\mathbf{X}$, $\mathbf{r}$, the budget of pulls $T$
\STATE Initialize $S_0= [d\,]$
\STATE $\hat V_{j,0} = 0, \forall j \in S_0$
\FOR{$\ell = 0,1, \cdots, \lceil \log_2(d) \rceil -1$}
\STATE Pull each arm in $S_\ell$ for $r_\ell =
\lfloor \frac{T}{|S_\ell| \lceil \log_2(d) \rceil}\rfloor
$ additional times and
compute the resulting loss, (\textit{non-iid}) $\hat V_{\cdot,R_\ell}$ in Equation \eqref{eq:noniid} or (\textit{non-stochastic}) $v_{\cdot,R_\ell}$ in Equation \eqref{eq:banditnonstoch}, with $R_\ell = \sum_{u=0}^\ell r_u$
\STATE Sort the arms in $S_\ell$ by increasing value of their losses
\STATE Define $S_{\ell+1}$ as the $\lfloor|S_\ell|/2 \rfloor$ indices of the arms with smallest values
\ENDFOR
\STATE \textbf{output:} index of the best arm
\end{algorithmic}
\end{algorithm}
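A minimal sketch of Algorithm \ref{algo:sh} with the \emph{non-iid} losses of Equation \eqref{eq:noniid} and uniform $p_i = 1/n$ might look as follows (illustrative only, not the authors' implementation; names are ours):

```python
import numpy as np

def successive_halving_min(X, r, T, rng=None):
    """Successive halving for argmin_j (X^T r)_j with non-iid losses
    and uniform p_i = 1/n (so each drawn term is n * r_i * x_{i,j}).
    The same random draws are shared by all surviving arms."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    arms = np.arange(d)
    V = np.zeros(d)                            # accumulated losses
    rounds = int(np.ceil(np.log2(d)))
    for _ in range(rounds):
        pulls = max(T // (len(arms) * rounds), 1)
        idx = rng.integers(0, n, size=pulls)   # same draws for all arms
        V[arms] += (X[idx][:, arms] * (n * r[idx])[:, None]).sum(axis=0)
        order = np.argsort(V[arms])            # increasing loss
        arms = arms[order[:max(len(arms) // 2, 1)]]
    return int(arms[0])
```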
Non-stochastic best arm identification has barely been studied, and
only a very recent work has addressed this problem \cite{jamieson2015non}. In this latter
work, the main hypothesis about the non-stochasticity of the losses
is that they are assumed to be generated before the game starts. This is
exactly our situation since \rev{before each estimation
of the minimum gradient component, all
losses are given by $x_{i,j} r_i$ and thus
can be computed beforehand}. In this
non-stochastic setting \cite{jamieson2015non}, the framework is that the
$k$-th pull of an arm $j$ provides a loss $v_{j,k}$, and
the objective of the bandit algorithm is to identify
$
\arg \min_{j} \lim_{k \rightarrow \infty} v_{j,k},
$
assuming that such limits exist for all $j$.
Again, we can fit our problem of finding the extreme gradient
component (here the minimum) into this framework by
defining the loss for a given arm at pull $k$ as
\begin{equation}\label{eq:banditnonstoch}
v_{j,k} = \left \{
\begin{array}{ll}
0 & \text{ if } k=0 \\
v_{j,k-1} + r_{\tau_k} x_{\tau_k,j} & \text{ if } 1 \leq k \leq n \\
v_{j,k-1} & \text{ if } k > n \\
\end{array}
\right.
\end{equation}
where $\tau$ is a predefined or random permutation of
the rows of the vector $\mathbf{r}$ and the matrix $\mathbf{X}$, and \rev{$\tau_k$
its $k$-th entry.}
In practice, we choose $\tau$ to be the same for all the arms,
for the computational reasons explained above, but in theory this
is not necessary \cite{jamieson2015non}.
According to this loss definition, we have
$$
\lim_{k \rightarrow \infty} v_{j,k} = \sum_{k=1}^n r_{\tau_k} x_{\tau_k,j}
= \sum_{k=1}^n r_{k} x_{k,j} = (\mathbf{X}^\top\mathbf{r})_j.
$$
Hence, an algorithm that recommends the best arm after a given number
of pulls will return the index of the minimum component of our
gradient. Interestingly, the algorithm proposed by
Jamieson et al. \cite{jamieson2015non} for solving the non-stochastic
best arm identification problem is also the one used in
the stochastic setting, namely the \emph{successive halving}
algorithm (Alg. \ref{algo:sh}). This algorithm can be
shown to work as is despite the dependence between arm losses.
Indeed, each round-robin pull of the surviving arms can have dependent values,
as long as the algorithm does not adapt to the observed losses
in the middle of a round-robin pull \cite{jamieson2015non}.
As already mentioned, for a fixed budget $T$ of pulls, this
\emph{successive halving}
bandit algorithm comes with theoretical guarantees on its ability to identify the best arm.
In the stochastic case, the probability of success depends on the number of
arms, the number of pulls $T$ and on a parameter denoting the hardness
of the problem (see Theorem 4.1 in \cite{karnin2013almost}).
In the non-stochastic case, the budget of pulls needed for
guaranteeing the correct recovery of the best arm essentially
depends on a function $\gamma_j(k)$ such that
$
|v_{j,k} - \lim_{k^\prime \rightarrow \infty} v_{j,k^\prime}| \leq \gamma_j(k)
$
and on a parameter denoting the gap between
any $(\mathbf{X}^\top \mathbf{r})_j$ and the smallest component of
$\mathbf{X}^\top \mathbf{r}$, which is not accessible unless we compute the
exact gradient.
One may note the strong resemblance between the matrix-vector
product approximation as given in
\eqref{eq:expectationC} and the \emph{non-iid} bandit strategy,
\rev{as in this latter setting,
we consider the full vector $\mathbf{x}_i r_i$ to
compute $\hat V_{\cdot,s}$ for all remaining arms.
This \emph{non-iid} strategy can also be related
to the non-stochastic setting if we choose
$\tau$ as a random permutation of $1,\ldots,n$.
}
In addition, in the non-stochastic
bandit setting, we can recover the greedy deterministic approach:
if the permutation $\tau$ defines a re-ordering
of the $\|\mathbf{x}_i\||r_i|$ in decreasing order, then the accumulation
given in Equation \eqref{eq:banditnonstoch} is exactly the one of the greedy deterministic
approach. \rev{This is the choice of $\tau$ we have considered in our experiments.} The multi-armed bandit framework
and the gradient approximation approaches thus use
similar computations for the criteria used to
estimate the best arm. The main difference resides
in the fact that with multi-armed bandits,
one is directly provided with the
estimate of the best arm.
\subsection{Stopping criteria}
In the greedy deterministic and randomized methods introduced in this
section, we have no clue as to how many elements $r_i \mathbf{x}_i$ have to be
accumulated in order to achieve a sufficient approximation of the
gradient, or, in the multi-armed bandit approach, how many pulls we need
to draw. Here, we discuss two possible stopping criteria for the
non-bandit algorithms: one that holds for any approach and a second
one that holds only for the Frank-Wolfe algorithm in the deterministic
sampling case. A discussion on the budget that needs to be allocated to
the bandit problem is also provided.
\subsubsection{Stability condition}
For the sake of simplicity, we limit the exposition to the search
for the smallest component of the gradient, although
the approach can be generalized to other cases.
Denote by $j^\star$ the coordinate such that $j^\star=\arg\min_j \nabla L(\mathbf{w}_k)|_j$,
and let $T_s$ be the maximal number of iterations or samplings allowed
for computing the inexact gradient (for instance, in the greedy deterministic approach, $T_s=n$). Our objective
is to estimate $j^\star$ with the fewest number $t$ of iterations.
For this to be possible, we make the hypothesis that there exists
an iteration $t_1$, $t_1 \leq T_s$, such that
$$
j^\star=\arg\min_j \hat \nabla L_{t}(\mathbf{w}_k)|_j \quad \forall t : t_1 \leq t \leq T_s;
$$
in other words, we suppose that starting from a given number of iterations
$t_1$, the gradient approximation is sufficiently accurate that further updates of the gradient leave
the minimum coordinate unchanged. Formally, this condition means
that $\forall t \in [t_1,\cdots,T_s]$, we have
$$
\Big[\hat \nabla L_t + \sum_{u=0}^{T_s-t} U_{t+u} \Big]_{j^\star}
\leq \Big[\hat \nabla L_t + \sum_{u=0}^{T_s-t} U_{t+u} \Big]_{j}
\quad \forall j \in [1,\cdots,d],
$$
where each $U_{t+u}= r_{i(t+u)} \mathbf{x}_{i(t+u)}$, $i(t+u)$ being
a sample index whose choice depends on the greedy or randomized
strategy considered.
However, checking the above condition is as expensive as
computing the full gradient; thus we propose an
estimate of $j^\star$ based on an approximation of this inequality,
obtained by truncating the sum
to a few iterations on each side. Basically, this consists in
evaluating $j^\star$ at each iteration and checking
whether this index has changed over the last $N_s$ iterations.
We refer to this criterion
as the \textit{stability criterion}, parametrized
by $N_s$.
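The stability criterion can be sketched on top of the greedy accumulation as follows (illustrative code, not the authors' implementation; names are ours):

```python
import numpy as np

def greedy_estimate_with_stability(X, r, N_s):
    """Greedy gradient accumulation stopped by the stability criterion:
    stop once argmin_j of the running estimate has remained unchanged
    for N_s consecutive updates. Illustrative sketch."""
    n, d = X.shape
    order = np.argsort(-np.linalg.norm(X, axis=1) * np.abs(r))
    grad = np.zeros(d)
    j_prev, stable = -1, 0
    for i in order:
        grad += X[i] * r[i]
        j = int(np.argmin(grad))
        stable = stable + 1 if j == j_prev else 1
        j_prev = j
        if stable >= N_s:      # index stable for N_s iterations: stop
            break
    return j_prev, grad
```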
\subsubsection{Error bound criterion}
In Section \ref{sec:fw}, we discussed that the convergence of the inexact Frank-Wolfe algorithm can be guaranteed as long as the norm of the difference between the approximate gradient and the exact one can be upper-bounded by some quantity $\epsilon$. Formally, this means that the iterations of the gradient approximation can be stopped as soon as
$\|\hat \nabla L_t - \nabla L \|_\infty \leq \epsilon$,
where $\epsilon$ depends on the curvature of the function $L(\mathbf{w})$ \cite{jaggi13:_revis_frank_wolfe}. In practice, the criterion
$\|\hat \nabla L_t - \nabla L \|_\infty$ cannot be computed, as it depends on the exact gradient, but it can be upper-bounded by an accessible term. For the greedy deterministic approach, by the triangle inequality, we have
\revII{
\begin{equation}\label{eq:errorcrit}
\|\hat \nabla L_t - \nabla L \|_\infty =
\Big\|\sum_{i \not \in \mathcal{I}_t} \mathbf{x}_i r_i \Big \|_\infty \leq
\sum_{i \not \in \mathcal{I}_t} \|\mathbf{x}_i\|_\infty |r_i| \leq \epsilon.
\end{equation}
Hence, if the norms $\{\| \mathbf{x}_i\|_\infty\}$ have been precomputed beforehand, this criterion can easily be evaluated at each gradient update iteration.
}
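A sketch of the error-bound stopping rule for the greedy accumulation (illustrative code; names are ours) can maintain the remaining-sum bound incrementally:

```python
import numpy as np

def greedy_estimate_error_bound(X, r, eps):
    """Greedy gradient accumulation stopped by the error-bound
    criterion: stop once the bound sum_{i not in I_t} ||x_i||_inf |r_i|
    on ||hat-grad - grad||_inf drops below eps. Illustrative sketch."""
    n, d = X.shape
    w = np.abs(X).max(axis=1) * np.abs(r)  # precomputed ||x_i||_inf |r_i|
    order = np.argsort(-np.linalg.norm(X, axis=1) * np.abs(r))
    remaining = w.sum()                    # bound over not-yet-used terms
    grad = np.zeros(d)
    for t, i in enumerate(order, start=1):
        grad += X[i] * r[i]
        remaining -= w[i]
        if remaining <= eps:
            return grad, t
    return grad, n
```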
\subsubsection{Pull budget for the bandit}
In multi-armed bandit algorithms, one typically specifies the number of pulls $T$ available for estimating the best arm. As such, $T$
can be considered as a hyperparameter of the algorithm.
A possible strategy for removing the dependency of the
bandit algorithm on this pull budget is to use the
\textit{doubling trick} \cite{jamieson2015non}, which consists in running the algorithm with a small value of $T$ and then repeatedly doubling it until some stopping criterion is met. The algorithm can then
be adapted so as to exploit the loss computations that have already been
carried out from iteration to iteration.
However, this strategy needs a stopping criterion for the \emph{doubling
trick}. According to Theorem $1$ in \cite{jamieson2015non}, there
exists a lower bound on the number of pulls for which the algorithm is guaranteed
to return the best arm. Hence, the following heuristic can
be considered: if $T^\prime$ and $2T^\prime$ pulls return the same best arm, then we conjecture that the proposed arm is indeed the best one.
One can note that this idea is similar to the above-described
stability criterion.
While this strategy is appealing, in the experiments we have simply set the
budget of pulls $T$ to a fixed predefined value.
\section{Discussion}
\label{sec:discussion}
This section provides comments on and discussion of the
approaches we propose compared to existing works.
\subsection{Relation and gains compared to OMP and variants}
Several recent works on sparse approximation have
proposed theoretically founded algorithms. These works
include OMP \cite{pati93:_orthog_match_pursuit,tropp07:signalrecov},
greedy pursuit \cite{blumensath2008gradient_ieee,tropp06:SSA}, CoSaMP \cite{needell09:_cosam} and several others like \cite{shalev-shwartz10:_tradin_optim}.
Most of these algorithms make use of the
top absolute entry of the gradient vector at each iteration. The work presented
in this paper is strongly related to these, as we share the same
quest for the top entry.
Indeed, the proposed methodology provides tools that can be applied to
many sparse approximation algorithms, including the aforementioned ones.
What makes our work novel and compelling is that at each iteration, the
gradient is computed with as little information as possible.
If the stopping criterion for estimating this gradient
is based on a maximal number of samples --- \textit{e.g.}, we are interested in constructing the best approximation of the gradient from only 20\% of the samples --- our approach can be interpreted as a method for computing the gradient on a limited budget.
Hence, the proposed method makes it possible to reduce the computational time needed for the estimation of the gradient.
On the downside, if other stopping criteria are used (alone or jointly with the budget criterion), this gain may be partly impaired by the further computations needed for their evaluation.
As an example, the \emph{stability} criterion induces an $O(d)$ overhead at each iteration due to the max computation.
\subsection{Relation with other stochastic MP/FW approaches}
\rev{
Some prior works from the literature are related to the approaches we
propose in the present paper.
Chen et al. \cite{chen2011stochastic} have recently introduced
a stochastic version of a Matching Pursuit algorithm in a
Bayesian context. Their principal contribution was to define
a prior distribution over each component of the vector
$\mathbf{w}$ and then to sample from this distribution so as to estimate
$\mathbf{w}$. In their approach, the sparsity pattern related to Matching Pursuit
is controlled by the prior distribution, which is assumed
to be a mixture of two distributions, one of which induces sparsity.
While this approach is indeed stochastic, it strongly differs
from ours in the way stochasticity is at play. As we
discuss in the next subsection, our framework
is more related to stochastic gradient methods than to the
stochastic sampling of Chen et al.
Stronger similarities with our work
appear in the work of Peel et al. \cite{peel2012matching}.
Indeed, they propose to accelerate the atom selection
procedure by randomly selecting a subset of atoms
as well as a subset of examples for computing $\mathbf{X}^\top\mathbf{r}$.
This idea is also the basis of our work. However,
an essential difference appears, as we do
not select a subset of atoms. By doing so, we are
ensured not to discard the top entry of $\mathbf{X}^\top\mathbf{r}$,
and thus we can guarantee, for instance, that
our bandit approaches are able to retrieve this top
entry with high probability given a large enough budget
of pulls.
Stochastic variants of the Frank-Wolfe algorithm
have been recently proposed by Lacoste-Julien
et al. \cite{lacoste-julien13:_block_coord_frank_wolfe_struc_svms}
and Ouyang et al. \cite{ouyang2010fast}. These works
are mostly tailored for solving large-scale SVM optimization problems
and do not focus on sparsity.
}
\subsection{About stochasticity}
The randomization approach for approximating the gradient, introduced in Section \ref{sec:mvprod},
involves random sampling of the columns. In the extreme situation
where only a single column $i$ is sampled, we thus have
$\hat \nabla L = \mathbf{x}_ir_i$,
and the method we propose boils down to a stochastic gradient
method. In the context of sparse
greedy approximation, the first work devoted to stochastic gradient approximation has been recently released \cite{nguyen14:_linear}. Nguyen et al. \cite{nguyen14:_linear} show that
their stochastic versions of the iterative hard thresholding algorithm
and the gradient matching pursuit algorithm, which aim at greedily solving
a sparse approximation problem with arbitrary loss functions, behave properly in expectation.
The randomized approach we propose in this work goes beyond the
stochastic gradient method for
greedy approximation, since it also provides a novel approach for
computing stochastic gradients. Indeed, we differ from
their setting in several important aspects:
\begin{itemize}
\item First, in our stochastic gradient approximation,
we always consider a number of samples larger than $1$. As such,
we are essentially using a stochastic mini-batch gradient.
\item
Second, the size of the mini-batch is variable (depending
on the stopping criterion considered) and
it depends on a heuristic that estimates on the fly the ability
of the approximate gradient to retrieve the top entry of the true
gradient.
\item Finally, one important component of our approach is
the importance sampling used in the stochastic mini-batch sampling.
This component theoretically helps in reducing the error of
the gradient estimation \cite{p14:_stoch_optim_impor_sampl}.
In the context of the matrix multiplication approximation we used
for developing the randomized approach in Section \ref{sec:mvprod}, theoretical results of Drineas et al. \cite{drineas2006fast} have
also shown that there exists an importance sampling that minimizes
the expectation of the Frobenius norm of the matrix multiplication approximation error.
Our experiments corroborate these results, showing that,
compared to uniform sampling, importance sampling clearly enhances the efficiency in retrieving the top entry of the true gradient.
\end{itemize}
All these differences make our randomized algorithm not only clearly distinguishable
from stochastic gradient approaches, but also harder to analyze.
We thus defer to further work the theoretical analysis of such
a stochastic adaptive-size mini-batch gradient coupled with an importance sampling approach.
\subsection{Theoretical considerations}
Although a complete theoretical analysis of the algorithm is out
of the scope of this paper, an interesting property deserves to be
mentioned here. Note that, unlike stochastic gradient approaches,
our algorithm is built upon inexact gradients that hopefully
have the same minimum component as the true gradient.
If this latter fact occurs along the iterations of the
FW or OMP algorithms, then all the properties (\emph{e.g.}, linear convergence,
exact recovery property\ldots) of these
algorithms apply. Based on the probability of recovering
the exact minimum component of the gradient at each iteration,
we show below a bound on the probability that
our algorithm recovers, up to a given iteration
$K$ of OMP or FW, the same sequence of minimum components as
the one obtained with the exact gradient.
Suppose that at each iteration $t$ of OMP or FW, our algorithm for
estimating the minimum component correctly identifies
this component with probability at least
$1 - P_t$, and that there exists $\bar P$ such that
$P_t \leq \bar P,\,\forall t \leq K$; then the probability
of identifying the correct sequence of minimum components
is at least
$
1- K \bar P.
$
We get this through the following reasoning.
Denote by $B(t)=\{I^1= i_{e}^1,I^2= i_{e}^2,\ldots,I^t= i_{e}^t\}$
the event that our algorithm outputs the exact sequence of
minimum components up to iteration $t$, $I^t$ and $i_e^t$ being the
coordinates retrieved with the inexact and exact gradient respectively. Similarly,
we denote by $A(t)=\{I^t= i_{e}^t\}$ the event of retrieving, at iteration $t$, the correct component of the gradient. We assume that
$\mathbb{P}(B(0))=1$.
\rev{
Formally, we are interested in lower-bounding the probability
of $B(K)$. By definition, we have
\begin{align*}\nonumber
\mathbb{P}(B(K))&=\mathbb{P}(A(K)|B(K-1)) \mathbb{P}(B(K-1)) \\\nonumber
&=\mathbb{P}(B(0)) \prod_{t=1}^K\mathbb{P}(A(t)|B(t-1)).
\end{align*}
Note that this equation captures all the time dependencies that occur during the FW or OMP
algorithm. Since $\mathbb{P}(A(t)|B(t-1)) \geq 1 - P_t$, we have
\begin{align*}\nonumber
\mathbb{P}(B(K))&\geq \prod_{t=1}^K(1-P_t)
\geq (1- \bar P)^K \geq 1- K \bar P,
\end{align*}
where in the last inequality we used the fact
that $(1-u)^K \geq 1-Ku$ for $0 \leq u \leq 1$.
}
For instance, in the \emph{successive halving} algorithm, we
have $P_t = 3 \log_2 d \cdot \exp\Big(- \frac{T}{8H_2(t)\log_2 d}\Big)$,
where $H_2(t)$ is an iteration-dependent constant \cite{karnin2013almost}
and $T$ the number of pulls. Thus, if we
define $\bar H$ so that $\forall t, \bar H \geq H_2(t)$, we
have $\bar P = 3 \log_2 d \cdot \exp\Big(- \frac{T}{8 \bar H \log_2 d}\Big)$ and
$$\mathbb{P}(B(K)) \geq 1 - 3 K \log_2 d \cdot \exp\Big(- \frac{T}{8 \bar H \log_2 d}\Big).
$$
We can see that the probability of our OMP or FW having the same behaviour as its exact counterpart decreases with the number $K$ of iterations
and the number $d$ of dimensions of the problem, and increases with the number of pulls. By rephrasing this last equation, we also get the following
property. For $\delta \in [0,1]$, if the number $T$ of pulls is set so that at each iteration $t$,
$$
T \geq \left (\log \log_2 \frac{d}{\delta} + \log \frac{K}{\delta} + \log \frac{3}{\delta}\right) 8 \bar H \log_2 d,
$$
then, when using the \emph{successive halving} algorithm
for retrieving the extremum gradient component, an
inexact OMP or FW algorithm behaves like the exact OMP or
FW with probability $1-\delta$.
This last property is another emphasis on the strength of
our inexact gradient method compared to stochastic gradient descent approaches, as it shows that with high probability, all the
theoretical properties of OMP or FW (\emph{e.g.}, convergence, exact recovery of sparse signals) apply.
\mathbf{s}ection{Numerical experiments}
\label{sec:expe}
In this section, we describe the experimental studies we
have carried out for illustrating the computational
benefits of using inexact gradient for sparsity-constraint
optimization problems.
\mathbf{s}ubsection{Experimental setting}
In order to illustrate the benefit of using inexact gradients
for sparse learning or sparse approximation, we have set up a simple sparse
approximation problem which focuses on the computational gain, and for
which a sparse signal has to be recovered by
the Frank-Wolfe, OMP or CoSaMP algorithm.
\revII{Note that sparse approximations are mostly used for
approximation problems on overcomplete dictionaries. This is the case
in our experiments, where the dimension $d$ of the learning problem is
in most cases larger than the number $n$ of samples. We believe that
if the signal or the image to be approximated can be fairly
approximated by representations for which fast transforms
are available, then it is better
(and faster) to use this representation and the fast
transform. Sparse approximation problems as considered in
the sequel mostly occur in overcomplete dictionary learning
problems. In such a situation, as the dictionary is data-driven, we believe that the approach we propose is
relevant.
}
The target
sparse signals are built as
follows. For a given dictionary size
$d$ and a number $k$ of active elements in the dictionary, the $k$ non-zero positions of the true
coefficient vector $\mathbf{w}^\star$ are chosen randomly, and their values
are drawn from a zero-mean unit-variance Gaussian distribution,
to which we add $\pm 0.1$ according to the sign of the values.
The columns of the regression design matrix $\mathbf{X} \in \mathbb{R}^{n \times d}$ are drawn uniformly from the surface of a unit
hypersphere of dimension $n$. Finally, the target vector is obtained
as
$\mathbf{y} = \mathbf{X} \mathbf{w}^\star + \mathbf{e}$,
where $\mathbf{e}$ is a random noise vector drawn from a zero-mean Gaussian distribution whose variance $\sigma_e^2$ is determined from a given
signal-to-noise ratio (SNR) as
$
\sigma_e^2 = \frac{1}{n}\|\mathbf{X} \mathbf{w}^\star\|^2 \cdot 10^{-\text{SNR}/10}.
$
Unless otherwise specified, the SNR has been set to $3$.
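The generation procedure above can be sketched in a few lines of NumPy. This is an illustrative sketch (the function name and signature are ours, not the paper's Matlab code):

```python
import numpy as np

def make_sparse_problem(n, d, k, snr_db=3.0, seed=None):
    """Generate the synthetic sparse-approximation problem of the text:
    a k-sparse w_star with Gaussian values shifted by +-0.1, columns of X
    drawn uniformly on the unit hypersphere, and Gaussian noise at a
    given SNR (in dB)."""
    rng = np.random.default_rng(seed)
    w_star = np.zeros(d)
    support = rng.choice(d, size=k, replace=False)
    vals = rng.standard_normal(k)
    w_star[support] = vals + 0.1 * np.sign(vals)
    # normalising Gaussian columns gives a uniform draw on the sphere
    X = rng.standard_normal((n, d))
    X /= np.linalg.norm(X, axis=0, keepdims=True)
    # noise variance derived from the target SNR
    sigma2 = np.linalg.norm(X @ w_star) ** 2 / n * 10 ** (-snr_db / 10)
    e = np.sqrt(sigma2) * rng.standard_normal(n)
    y = X @ w_star + e
    return X, y, w_star
```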
For each setting, the results are averaged over 20 trials, and $\mathbf{X}$, $\mathbf{w}^\star$ and $\mathbf{e}$ are resampled at each trial.
The criteria used for evaluating and comparing the proposed approaches are the running time of the algorithms and
their ability to recover the true non-zero elements of $\mathbf{w}^\star$.
The latter is computed through the F-measure
between the support of the true coefficient vector $\mathbf{w}^\star$
and that of the estimated one $\hat{\mathbf{w}}$: $$
\text{F-meas} = 2\, \frac{|\text{supp}_\gamma(\mathbf{w}^\star) \cap \text{supp}_\gamma(\hat{\mathbf{w}})|}{
|\text{supp}_\gamma(\mathbf{w}^\star)| + |\text{supp}_\gamma(\hat{\mathbf{w}})|}
$$
where $\text{supp}_\gamma(\mathbf{w}) = \{j : |w_j| > \gamma\}$ is the support of the vector $\mathbf{w}$ and $\gamma$ is a threshold used to neglect
non-zero coefficients that may be obliterated by the noise.
In all our experiments, we have set $\gamma = 0.001$, which is small compared
to the minimal absolute value of a non-zero coefficient ($0.1$).
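Under these definitions, the F-measure computation amounts to the following small sketch (note that the numerator uses the intersection of the two thresholded supports):

```python
import numpy as np

def support_f_measure(w_star, w_hat, gamma=1e-3):
    """F-measure between thresholded supports:
    2 |S* ∩ S^| / (|S*| + |S^|), with S = {j : |w_j| > gamma}."""
    s_star = set(np.flatnonzero(np.abs(w_star) > gamma))
    s_hat = set(np.flatnonzero(np.abs(w_hat) > gamma))
    denom = len(s_star) + len(s_hat)
    # both supports empty counts as a perfect (vacuous) recovery
    return 2 * len(s_star & s_hat) / denom if denom else 1.0
```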
All the algorithms (Frank-Wolfe, OMP and CoSaMP) and the exact and inexact gradient codes have
been written in Matlab, except for the \textit{successive reject} bandit, which has been written in C and compiled into a
mex file. \rev{All computations have been run on a single core of an Intel Xeon E5-2630 processor clocked at 2.4~GHz, on a Linux machine with 144~GB of memory.}
\subsection{Sparse learning using a Frank-Wolfe algorithm}
\begin{figure*}[t]
\centering
~\hfill \includegraphics[width=7.3cm]{FmeasFW-dim2000Dic4000T50ch0.pdf}
\hfill
\includegraphics[width=7.3cm]{TimeFW-dim2000Dic4000T50ch0.pdf}\hfill~
\caption{Comparing vanilla Frank-Wolfe and inexact FW algorithms with different ways of computing the inexact gradient, with $n=2000$, $d=4000$ and $k=50$. Performances are compared with increasing precision on the inexact computations.
We report the exact recovery performance and the running time of the algorithms.
\label{fig:fw}}
\label{fig:fw2000}
\end{figure*}
\begin{figure*}[t]
\centering
~\hfill \includegraphics[width=6.5cm]{FmeasOMP-dim2000Dic4000T.pdf}
\hfill
\includegraphics[width=6.5cm]{TimeOMP-dim2000Dic4000T.pdf} \hfill~\\
~\hfill \includegraphics[width=6.5cm]{FmeasOMP-dim5000Dic10000T.pdf}\hfill
\includegraphics[width=6.5cm]{TimeOMP-dim5000Dic10000T.pdf} \hfill~\\
\caption{Comparing the OMP algorithm with different ways of computing the inexact gradient. The comparison holds for dictionary sizes with (top) $n=2000$, $d=4000$ and (bottom) $n=5000$, $d=10000$, for an exact recovery criterion (left) and
running time (right).}
\label{fig:OMP}
\end{figure*}
For this experiment, the constraint set $C$ is the $\ell_1$ unit ball and the loss function is $L(\mathbf{w})=
\frac{1}{2}\|\mathbf{y} - \mathbf{X}\mathbf{w}\|_2^2$.
Our objectives are twofold:
\begin{enumerate}
\item analyze the capability of inexact gradient approaches to recover the true support and compare them to the FW algorithm;
\item compare the two stopping criteria for computing the inexact gradient: the \emph{stability condition} and the \emph{error bound condition}.
\end{enumerate}
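For reference, a vanilla Frank-Wolfe iteration on this problem consults the gradient only through its entry of largest magnitude, which is precisely the quantity the inexact schemes estimate. Below is a minimal exact-gradient sketch; the step size $2/(t+2)$ is the standard FW choice and an assumption of ours, not a detail taken from the paper:

```python
import numpy as np

def frank_wolfe_l1(X, y, n_iter=200):
    """Vanilla Frank-Wolfe for min 0.5 ||y - X w||^2 over the l1 unit ball.
    Each step only needs the index of the gradient entry with the largest
    magnitude (the 'extreme gradient component' of the text)."""
    n, d = X.shape
    w = np.zeros(d)
    for t in range(n_iter):
        grad = -X.T @ (y - X @ w)       # full (exact) gradient
        j = np.argmax(np.abs(grad))     # extreme gradient component
        s = np.zeros(d)
        s[j] = -np.sign(grad[j])        # minimising vertex of the l1 ball
        w += 2.0 / (t + 2) * (s - w)    # standard FW step size 2/(t+2)
    return w
```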
Regarding the latter, while the \emph{error bound condition} provides
an adaptive condition for stopping --- recall that the parameter
$\epsilon$ in Equation (\ref{eq:errorcrit}) is determined automatically
from the data and the related curvature of the loss function --- the
\emph{stability condition} needs
a user-defined parameter $N_s$ for stopping the accumulation of the partial gradient.
In the same spirit, we use a fixed pre-defined budget
of pulls in the best-arm identification problem. This budget
is given as a ratio of $n \times d$.
The exact gradient is computed using
the accumulation strategy given in Equation \eqref{eq:accumulation},
so as to make all running times comparable.
The maximum number of iterations for FW is set to $5000$.
Figure \ref{fig:fw} presents the results obtained for $n=2000$ samples, $d=4000$ dictionary elements and $k=50$ active
atoms. We depict the running time and recovery abilities
of the Frank-Wolfe algorithm with an exact gradient (\textbf{exact}), a greedy deterministic gradient sampling computation with a stability stopping criterion
(\textbf{deterministic}) and with an error bound stopping criterion (\textbf{grad upb}), the randomized approach with uniform sampling (\textbf{uniform}) and with best probability sampling as
given in Equation \eqref{eq:best} (\textbf{best}), the successive reject bandit (\textbf{succ}), the \emph{non-iid} successive halving approach
with losses computed as given in Equation \eqref{eq:noniid} (\textbf{SuccHalvSame})
\rev{with random uniform sampling}, and the non-stochastic successive halving approach
with losses computed as given in Equation \eqref{eq:banditnonstoch} (\textbf{SuccHalvNonStoch}) \rev{using a permutation $\tau$ that defines a decreasing ordering of the
$\|\mathbf{x}_i\| |r_i|$}.
The figures depict the performances with respect
to the stopping condition parameter $N_s$ of the stability
criterion (the first value in the bracket) and the sampling budget of the bandit approaches ($\frac{n\times d}{10} z$, where $z$ is the second value in the bracket).
First, we can note that the deterministic approach, used with either
stopping criterion, and the non-stochastic successive halving approaches
are able to perfectly recover the exact support of the true vector $\mathbf{w}^\star$, regardless of the considered stopping criterion's value.
Randomized approaches with uniform and best probability
sampling nearly
achieve perfect recovery, with an average F-measure of $0.975$
for the uniform approach with the stability criterion $N_s$ equal to $5$. When $N_s$ increases,
the performances of these two approaches also improve, but they still fail to achieve perfect recovery.
From a running time point of view, the proposed approaches based on
greedy deterministic and randomized sampling strategies with the stability
criterion and the successive halving strategies are faster than the
exact FW approach, the plain \emph{successive reject} method acting on
single entries of $\{x_{j,i} r_j\}$, and the deterministic method with
the \textit{error bound condition}. For instance, the greedy deterministic
approach
(green curve) achieves a gain
in running time of a factor $2$ with respect to the exact Frank-Wolfe
algorithm. Interestingly, for the greedy deterministic approach and the
successive halving approaches, this gain is achieved without compromising
the recovery performance. For the randomized strategies,
increasing the stability parameter $N_s$ leads to a very slight
increase of running time, hence for these methods a trade-off can
eventually be found. When comparing bandit approaches, one can note
the substantial gain in performance obtained by the
halving strategies, \emph{non-iid} and \emph{non-stochastic},
compared to \emph{successive reject}. We conjecture that the higher computational running time
of the \emph{successive reject} algorithm is essentially due to the
computational overhead needed for accessing each single matrix entry
$\mathbf{X}_{i,j}$ in memory, while all other methods use slices of this matrix
(through the samples $\mathbf{x}_i$) and can thus leverage chunked
memory access. The best performances, jointly in recovery and
running time, are achieved by the greedy deterministic and
the non-stochastic successive halving approaches.
When comparing the \textit{stability} and the \textit{error bound}
stopping criteria, the latter is rather inefficient. While
grounded on theoretical analysis, this bound is loose enough to be
non-informative.
\revII{Indeed, a careful inspection shows that
the \textit{error bound} criterion accumulates about $5$ times more
elements $r_i\mathbf{x}_i$ than the stability one before triggering.
In addition, the other computational overheads necessary for the bound estimation make the approach merely as efficient as the exact Frank-Wolfe algorithm.
}
In summary, from this experiment, we can conclude that the
non-stochastic bandit approach is the most efficient one. It
can achieve a computational gain of about an order of magnitude
(the leftmost point in the right panel of Figure \ref{fig:fw2000}) without compromising accuracy.
The greedy deterministic approach with the stability criterion also performs
very well, but it is slightly less efficient. We can remark
that these two best methods both use the same strategy of gradient accumulation,
based on a decreasing ordering of $\|\mathbf{x}_i\| |r_i|$.
\subsection{Sparse Approximation with OMP}
Here, we evaluate the usefulness of using inexact gradients
in a greedy framework like OMP.
The toy problem is similar to the one used above, except
that we analyze the performance of the algorithm for
an increasing number $k$ of active atoms, and two sizes of the dictionary matrix $\mathbf{X}$ have been considered.
The same ways of computing the inexact gradient
are evaluated and compared in terms of efficiency
and correctness with respect to the true gradient in an OMP algorithm.
For all sampling approaches, the stopping criterion for
gradient accumulation is based on the stability criterion with the
parameter $N_s$ adaptively set at $2\%$ of the number $n$
of samples.
For the successive reject bandit approach, the sampling budget has
been limited to $20\%$ of the number of entries (which
is $n \cdot d$) of the matrix $\mathbf{X}$.
In all cases, the stopping criterion for the OMP algorithm
is based on a fixed number of iterations, this
number being the desired sparsity $k$.
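The OMP loop used here can be sketched as follows. This is a standard textbook OMP with the exact gradient, not the paper's Matlab code; the point is that each iteration's only gradient information is the argmax of $|\mathbf{X}^\top r|$, which the inexact schemes replace with an estimate:

```python
import numpy as np

def omp(X, y, k):
    """Orthogonal Matching Pursuit run for exactly k iterations.
    Each iteration needs only the top correlation argmax |X^T r|,
    i.e. the extreme component of the gradient."""
    support = []
    w = np.zeros(X.shape[1])
    r = y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(X.T @ r)))  # extreme gradient entry
        if j not in support:
            support.append(j)
        # least-squares refit on the current support
        w_s, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        w = np.zeros(X.shape[1])
        w[support] = w_s
        r = y - X @ w
    return w, sorted(support)
```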
Results are reported in Figure \ref{fig:OMP}. They globally
follow the same trend as those obtained for the Frank-Wolfe algorithm.
First, note that in terms of support
recovery, when the number of active atoms is small,
the greedy deterministic approach performs better than the
randomized sampling strategies. Bandit approaches perform similarly to the greedy deterministic
method. As the number
of active atoms increases, the bandit approaches succeed better
in recovering the extreme component of the gradient, while the deterministic
approach is slightly less accurate. Note that for any value of $k$, the randomized
strategies struggle more than the other strategies to recover
the support of the true vector $\mathbf{w}^\star$. From a running time point of view,
again, we note that the deterministic and \emph{non-iid} successive halving bandit approaches seem to be the most efficient methods.
The gain in running time compared to the exact gradient OMP is slight
but significant, while it is larger when
comparing with the \textit{successive reject} algorithm.
\subsection{Sparse Approximation with CoSaMP}
\begin{figure*}[t]
\centering
~\hfill\includegraphics[width=7.3cm]{FmeasCosAMP-dim2000Dic4000T.pdf}\hfill
\includegraphics[width=7.3cm]{TimeCosAMP-dim2000Dic4000T.pdf}
\hfill~ \\
~\hfill \includegraphics[width=7.3cm]{FmeasCosAMP-dim5000Dic10000T.pdf} \hfill
\includegraphics[width=7.3cm]{TimeCosAMP-dim5000Dic10000T.pdf}
\hfill~
\caption{Comparing CoSaMP algorithms with different ways of computing the inexact gradient. The comparison holds for different dictionary sizes: $n=2000$, $d=4000$ and $n=5000$, $d=10000$. We report exact recovery performance and running time.}
\label{fig:cosamp}
\end{figure*}
\begin{figure*}[t]
\centering
~\hfill \includegraphics[width=7.3cm]{FmeasCosAMPBudget-dim5000Dic10000T.pdf} \hfill
\includegraphics[width=7.3cm]{TimeCosAMPBudget-dim5000Dic10000T.pdf}
\hfill~
\caption{Analyzing the effect of the pull budget on the successive halving algorithms: (left) recovery F-measure and (right) computational running time.
The pull budget is defined as the budget ratio times $n \cdot d$. Here
$n=5000$, $d=10000$ and $k=50$.}
\label{fig:cosampbudget}
\end{figure*}
To the best of our knowledge, there are very few greedy algorithms
that are able to leverage stochastic gradients. One
of these algorithms has been introduced in \cite{nguyen14:_linear}.
In this experiment, we want to evaluate the
efficiency gain achieved by our inexact gradient approach
compared to this stochastic greedy algorithm.
Our objective is to show that the approach we propose
is empirically significantly faster than a pure stochastic gradient approach.
For the different versions of the CoSaMP algorithm, we have
set the stopping criteria as follows. For the CoSaMP with
exact gradient, which serves only as a baseline
for computing the exact solution, the number of iterations is set to
the sparsity level $k$ of the target signal. A tolerance on the
residual norm, which should fall below $10^{-3}$, is also used as a
stopping criterion. Next, the stochastic and the inexact gradient
CoSaMP versions were stopped when the norm of the
residual ($\mathbf{y} - \mathbf{X}\mathbf{w}$) became smaller than $1.001$ times the one
obtained by the exact CoSaMP, or when a maximal number of
iterations was reached. Regarding gradient accumulation, the stopping criterion we
choose is based on the stability condition with the parameter $N_s$
set dynamically at $2\%$ of the number of samples. For the bandit
approaches, we have fixed the budget of pulls at $0.2\, n \times d$.
Note that for the CoSaMP algorithm, we do not look for the top entry
of the gradient vector but for the top $2k$ entries; we have thus
straightforwardly adapted the \textit{successive halving} algorithm to
handle this situation.
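A standard CoSaMP iteration can be sketched as follows, highlighting the top-$2k$ gradient selection step that the adapted successive halving estimates. This is a textbook-style sketch with exact gradients, not the paper's implementation:

```python
import numpy as np

def cosamp(X, y, k, n_iter=None, tol=1e-3):
    """CoSaMP where the only gradient information used per iteration is
    the set of the top-2k entries of |X^T r| (the step that the bandit
    schemes approximate)."""
    n, d = X.shape
    n_iter = n_iter or k
    w = np.zeros(d)
    for _ in range(n_iter):
        r = y - X @ w
        if np.linalg.norm(r) < tol:
            break
        # top-2k entries of the gradient magnitude, merged with support
        omega = np.argsort(np.abs(X.T @ r))[-2 * k:]
        T = np.union1d(omega, np.flatnonzero(w))
        b_T, *_ = np.linalg.lstsq(X[:, T], y, rcond=None)
        b = np.zeros(d)
        b[T] = b_T
        # prune back to the k largest coefficients
        w = np.zeros(d)
        top_k = np.argsort(np.abs(b))[-k:]
        w[top_k] = b[top_k]
    return w
```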
Figure \ref{fig:cosamp} presents the observed results. Regarding
support recovery, we remark that all approaches achieve performances
similar to the exact CoSaMP. When
few active atoms are in play, we can note that the
stochastic approach of \cite{nguyen14:_linear} sometimes fails to
recover the support of $\mathbf{w}^\star$. This occurs seldom, but it
happens regardless of the dictionary size we have experimented
with.
From a running time perspective, the results show that the proposed
approaches are substantially more efficient than the exact gradient approach
and, interestingly, they are faster than a pure stochastic gradient
approach. One or two orders of magnitude can be gained, depending on
the sparsity level of the signal to be recovered. This observation
clearly illustrates the trade-off that occurs in sparsity-constrained
optimization problems, in which the gradient computation and an
approximation problem on a limited number of atoms are the major
computational burdens (Lines 3 and 5 of Algo~\ref{algo:gp}). Indeed, in
a stochastic gradient approach, inexact gradient computations are very
cheap, but more approximation problems may need to be solved for
achieving a desired accuracy. In the approaches we propose, the
inexact gradient computation is slightly more expensive, but we somehow
``ensure'' that it provides the correct information needed by the
CoSaMP algorithm. Hence, our approaches need fewer approximation
problems to be solved, making them more efficient than a stochastic
gradient approach.
When comparing the efficiency of the proposed algorithms,
the approach based on non-stochastic successive halving
and the greedy deterministic approach are the most efficient,
especially as the number of active atoms grows. \\
In the second experiment, for the \emph{successive halving} algorithms, we analyze the effect of the pull budget on the running time and on the recovery
performance. We consider the following setting
of the problem: $n=5000$, $d=10000$ and $k=50$.
Results are given in Figure \ref{fig:cosampbudget}.
We note that a budget ratio varying from $0.05$ to $0.7$
allows a good compromise between the ability
to recover the true vector and the gain in computational time,
particularly for the non-stochastic successive halving method.
As the budget of pulls decreases, both algorithms fail more
frequently in recovery and, in addition, the computational gain
substantially reduces. This experiment suggests that one should not
be too greedy and should allow a sufficient number of pulls.
A budget ratio of $0.2$ or $0.3$ for the bandit algorithms
seems to be a good rule of thumb according to our experience.
\begin{figure*}[t]
\centering
~\hfill\includegraphics[width=7.4cm]{FmeasCosAMP-complexity-atom-dim2000DicT.pdf}\hfill
\includegraphics[width=7.4cm]{TimeCosAMP-complexity-atom-dim2000DicT.pdf}
\hfill~ \\
~\hfill\includegraphics[width=7.4cm]{FmeasCosAMP-complexity-atom-dim10000DicT.pdf}\hfill
\includegraphics[width=7.4cm]{TimeCosAMP-complexity-atom-dim10000DicT.pdf}
\hfill~\\
\caption{(top left and top right) Evaluating how the recovery capability behaves and how the computation time scales with the number
of dictionary elements (with $n=2000$ and $k=100$).
(bottom left and bottom right) Evaluation of the
same criteria with respect to the number of samples (with $d=10000$ and $k=400$).}
\label{fig:cosampcomplexity}
\end{figure*}
\rev{Our last experiment with CoSaMP demonstrates how the
running time and the support recovery performance behave
with an increasing number $n$ of samples and then with an increasing
number $d$ of dictionary elements. We have restricted
our comparison to the exact and stochastic CoSaMP,
and the CoSaMP variants based on the successive
halving bandit algorithms and the greedy deterministic one (which
are the most efficient among those we propose).
The experimental setup, the stopping criterion for the CoSaMP algorithm, as well as the stopping criterion
for the gradient accumulation and the pull budget, are the same as above.
Results are depicted in Figure \ref{fig:cosampcomplexity}. As a
sanity check, we note that recovery performances are almost
similar for all algorithms, with slightly worse performances
for the stochastic CoSaMP and the \emph{non-iid} bandit algorithm
based CoSaMP.
The computational time results show that all algorithms globally
follow the same trend as the number of dictionary atoms
or the number of samples increases.
Recall that the computational complexity of the gradient computation
is $O(nd)$. For the bandit approaches, we use a fixed
budget of pulls dependent on $nd$ to compute the
inexact gradient. Similarly, for the greedy deterministic approach,
the number of accumulations (through the stability criterion) is proportional to the number $n$ of samples, and thus the gradient computation
costs a constant fraction of $nd$. Hence, our findings, illustrated in
Figure \ref{fig:cosampcomplexity}, are somewhat natural,
since the main differences in
running time essentially come from a constant factor.
This factor is highly problem-dependent, but according
to our numerical experiments, a ten-fold computational gain can be
expected in many cases.
}
\subsection{Application to audio data}
We have compared the efficiency of the approaches we propose on a real signal processing application.
The audio dataset we use is the one considered by Yaghoobi et al. \cite{yaghoobi09:_diction_learn_for_spars_approx}. This dataset is composed of an audio sample recorded from a BBC radio session playing classical music. From that audio sample, $8192$ pieces of signal have
been extracted, each composed of $1024$ time samples. Details about
the dataset can be found in \cite{yaghoobi09:_diction_learn_for_spars_approx}.
From this dataset, we have learned $2048$ dictionary atoms
using the approach described in \cite{rakotomamonjy12:_direc}.
\rev{Our objective is to perform a sparse approximation of each of the $8192$ audio pieces over the $2048$
dictionary atoms using CoSaMP, and we want to evaluate the running time and the approximation quality of a
CoSaMP algorithm using an exact gradient computation (\textbf{Exact}), a stochastic
gradient CoSaMP algorithm (\textbf{Stoch $k$}) and the CoSaMP variants with
inexact gradient computations as we propose. The approximation error
is measured as $\frac{\|\mathbf{y} - \hat{\mathbf{y}}\|_2}{\|\mathbf{y}\|_2}$, where $\mathbf{y}$ and
$\hat{\mathbf{y}}$ are respectively the true audio piece and its CoSaMP-based approximation.
We thus want to validate that our approaches achieve approximation
performance similar to that of CoSaMP while being faster. For all
algorithms, the number of CoSaMP iterations is fixed
to the sparsity level, here set to $k=10$. Note that
for the stochastic gradient approach, we have also
considered a version with more iterations (\textbf{Stoch $3k$}).
}
Results are gathered in Table \ref{tab:error}; they are
obtained as the averaged performance when approximating
all the $8192$ pieces of audio signal in the dataset.
We can see that the inexact approaches we introduce lead
to the best compromise between approximation error and running
time. \rev{For instance, our \emph{successive halving} algorithms
achieve approximation errors similar to those of the exact CoSaMP while being $3$ times
faster.} On the contrary, the stochastic gradient CoSaMP approaches are efficient
but fail to properly approximate the target audio pieces.
\begin{table}[t]
\caption{Approximation performance results and running time for CoSaMP and its
variants. $\mathbf{y}$ and $\hat{\mathbf{y}}$ respectively denote the signal and
its resulting approximation. Results are averaged over the approximation of $4500$ signals.
\emph{Stoch $3k$} denotes the stochastic gradient algorithm that
uses $3k$ iterations.}
\label{tab:error}
\centering
\begin{tabular}[h]{l|cc}
\hline
Approaches & $\frac{\|\mathbf{y} - \hat{\mathbf{y}}\|}{\|\mathbf{y}\|}$ & Time (s) \\\hline\hline
exact & 0.376 $\pm$ 0.22 & 0.164 $\pm$ 0.01 \\
stoch $k$ & 0.670 $\pm$ 0.13 & 0.026 $\pm$ 0.00 \\
stoch $3k$ & 0.570 $\pm$ 0.17 & 0.076 $\pm$ 0.01 \\
uniform & 0.351 $\pm$ 0.21 & 0.133 $\pm$ 0.02 \\
deterministic & 0.361 $\pm$ 0.22 & 0.187 $\pm$ 0.02 \\
SuccHalvSame & 0.371 $\pm$ 0.22 & 0.059 $\pm$ 0.01 \\
SuccHalvNonStoch & 0.374 $\pm$ 0.22 & 0.064 $\pm$ 0.01 \\\hline
\end{tabular}
\end{table}
\subsection{Benchmark classification problems}
\begin{table*}[t]
\begin{center}
\caption{Comparing performances of CoSaMP and its variants with approximate
gradients on real-world high-dimensional classification problems.
(top) Statistics of the datasets. (middle) Accuracy of the decision function. (bottom) Running time (in seconds)
of the learning algorithms.
\label{tab:real}}
\begin{tabular}{rccccc}
&\multicolumn{5}{c}{Datasets} \\\hline
\multicolumn{1}{c}{Information}& \multicolumn{1}{c}{ohscal} &\multicolumn{1}{c}{classic} & \multicolumn{1}{c}{la2} & \multicolumn{1}{c}{hitech} & \multicolumn{1}{c}{sports} \\\hline
\multicolumn{1}{c}{$n$} & 11162 & 7094 & 3075 & 2301 & 8580 \\
\multicolumn{1}{c}{$d$} & 11465 & 41681 & 31472 & 10080 & 14866\\
\hline
\vspace{0.1cm}
\\
&\multicolumn{5}{c}{Datasets} \\\hline
\multicolumn{1}{c}{Algorithms}& \multicolumn{1}{c}{ohscal} &\multicolumn{1}{c}{classic} & \multicolumn{1}{c}{la2} & \multicolumn{1}{c}{hitech} & \multicolumn{1}{c}{sports} \\\hline\hline
CoSAMP & 82.93 $\pm$ 1.9& 83.46 $\pm$ 1.8& 81.20 $\pm$ 2.5& 76.29 $\pm$ 3.7& 93.07 $\pm$ 1.3\\\hline
Stoch & 56.16 $\pm$ 19.3& 63.90 $\pm$ 22.6& 70.24 $\pm$ 5.1& 11.63 $\pm$ 11.1& 40.74 $\pm$ 22.8\\\hline
Stoch $3k$ & 36.92 $\pm$ 26.2& 52.58 $\pm$ 26.8& 69.76 $\pm$ 5.5& 8.42 $\pm$ 3.1& 32.54 $\pm$ 20.4\\\hline
Determ. & 83.20 $\pm$ 1.6& 82.35 $\pm$ 1.3& 77.29 $\pm$ 2.3& 75.98 $\pm$ 2.9& 92.30 $\pm$ 1.4\\\hline
Uniform & 77.50 $\pm$ 1.7& 77.17 $\pm$ 1.2& 78.98 $\pm$ 2.9& 68.88 $\pm$ 3.6& 92.33 $\pm$ 1.4\\\hline
HalvingSame & 81.39 $\pm$ 1.5& 82.57 $\pm$ 1.5& 81.54 $\pm$ 2.6& 75.16 $\pm$ 2.1& 93.23 $\pm$ 1.2\\\hline
HalvingNonStoch & 81.90 $\pm$ 1.5& 82.97 $\pm$ 1.7& 79.91 $\pm$ 2.6& 76.49 $\pm$ 3.8& 93.21 $\pm$ 1.0\\\hline
\vspace{0.1cm}
\\
&\multicolumn{5}{c}{Datasets} \\\hline
\multicolumn{1}{c}{Algorithms}& \multicolumn{1}{c}{ohscal} &\multicolumn{1}{c}{classic} & \multicolumn{1}{c}{la2} & \multicolumn{1}{c}{hitech} & \multicolumn{1}{c}{sports} \\\hline\hline
CoSAMP & 1257.89 $\pm$ 306.3& 563.51 $\pm$ 208.5& 401.36 $\pm$ 222.1& 56.42 $\pm$ 19.4& 998.63 $\pm$ 385.3\\\hline
Stoch & 3.12 $\pm$ 0.8& 1.05 $\pm$ 0.7& 2.98 $\pm$ 2.0& 0.69 $\pm$ 0.3& 4.13 $\pm$ 1.8\\\hline
Stoch $3k$ & 7.93 $\pm$ 1.8& 2.73 $\pm$ 1.6& 8.50 $\pm$ 5.4& 1.90 $\pm$ 0.8& 12.08 $\pm$ 4.8\\\hline
Determ. & 323.65 $\pm$ 76.6& 142.06 $\pm$ 62.3& 209.70 $\pm$ 130.5& 41.24 $\pm$ 16.3& 353.04 $\pm$ 129.9\\\hline
Uniform & 416.74 $\pm$ 102.2& 189.14 $\pm$ 74.4& 150.28 $\pm$ 89.2& 21.00 $\pm$ 7.7& 348.30 $\pm$ 117.0\\\hline
HalvingSame & 17.50 $\pm$ 5.2& 13.20 $\pm$ 12.5& 14.68 $\pm$ 15.2& 1.48 $\pm$ 0.6& 11.94 $\pm$ 6.3\\\hline
HalvingNonStoch & 17.61 $\pm$ 5.4& 11.74 $\pm$ 11.9& 13.54 $\pm$ 15.9& 1.37 $\pm$ 0.5& 10.57 $\pm$ 6.5\\\hline
\end{tabular}
\end{center}
\end{table*}
\rev{
We have also benchmarked our algorithms on real-world high-dimensional
classification problems. These datasets are frequently used for
evaluating sparse learning problems \cite{gong2013jieping,rakotomamonjy2015dc},
and more details about them can be found in
these papers.
Here, we consider CoSaMP as the learning algorithm, and our objective is to validate that the approaches we propose for computing approximate gradients are able to speed up computation while achieving the same level of accuracy as the exact gradient.
For the approximate gradient computations, we have considered the stability
criterion with $N_s=\frac{n_{train}}{20}$ ($n_{train}$ being the number of training examples) for the deterministic and randomized approaches, and we have set the budget at $0.2\, n_{train} \times d$ for the bandit approaches.
The protocol we have set up is the following. Training and test sets are obtained by
randomly splitting the dataset in an $80\%$--$20\%$ fold. For model selection,
the training set is further split into two sets of equal size. The parameter we
cross-validate is the number of non-zero elements $k$ in $\mathbf{w}$. It is
selected among $\{10, 50, 100, 250\}$ so as to maximize the accuracy on the
validation set. This value of $k$ is also used as the maximal number
of iterations for all the algorithms except the stochastic ones.
For these, we have reported accuracies and running times for
a maximal number of iterations of $k$ and $3k$.
Results averaged over $20$ replicas of the training
and test sets are reported in Table \ref{tab:real}. Stochastic
approaches fail to learn a relevant decision function.
We can note that our deterministic and randomized
approaches are more efficient than the exact CoSaMP but are less accurate. On the other hand, our bandit approaches achieve
accuracy nearly similar to CoSaMP while being at least $30$ times
faster.
}
\section{Conclusions}
\label{sec:conclusion}
The methodologies proposed in this paper aim at
accelerating sparsity-constrained optimization algorithms.
This is made possible thanks to the key observation that, at each
iteration, only the gradient component
with the smallest or largest entry is needed, instead of the full gradient.
By exploiting this insight, we proposed greedy algorithms,
randomized approaches and bandit-based best-arm identification methods
for efficiently estimating this top entry.
Our experimental results show that the bandit and the greedy approaches
seem to be the most efficient methods for this estimation.
\rev{Interestingly, the bandit approaches come with guarantees
that, given a sufficient number of draws, this top entry
can be retrieved with high probability.
Future work will be geared towards gaining further theoretical
understanding of the good behaviour of the greedy approach,
linking the number of iterations needed for the Frank-Wolfe algorithm
to converge with the quality of the gradient approximation
in the greedy and randomized approaches, and analyzing
the role of importance sampling in the randomized methods.
In addition, we plan to explore how this work can be extended
to an online and/or distributed computation setting.
}
\bibliographystyle{IEEEbib}
\begin{thebibliography}{10}
\bibitem{Tibshrani_Lasso_1996}
R.~Tibshirani,
\newblock ``Regression shrinkage and selection via the lasso,''
\newblock {\em Journal of the Royal Statistical Society}, vol. 58, no. 1, pp.
267--288, 1996.
\bibitem{efron_lars}
B.~Efron, T.~Hastie, I.~Johnstone, and R.~Tibshirani,
\newblock ``Least angle regression (with discussion),''
\newblock {\em Annals of Statistics}, vol. 32, no. 2, pp. 407--499, 2004.
\bibitem{jaggi13:_revis_frank_wolfe}
M.~Jaggi,
\newblock ``Revisiting Frank-Wolfe: Projection-free sparse convex
optimization,''
\newblock in {\em Proceedings of the International Conference on Machine
Learning}, 2013.
\bibitem{rakotomamonjy2015dc}
A.~Rakotomamonjy, R.~Flamary, and G.~Gasso,
\newblock ``DC proximal Newton for nonconvex optimization problems,''
\newblock {\em IEEE Trans. on Neural Networks and Learning Systems}, vol. 1,
no. 1, pp. 1--13, 2015.
\bibitem{laporte13:_noncon_regul_featur_selec_rankin}
L.~Laporte, R.~Flamary, S.~Canu, S.~Dejean, and J.~Mothe,
\newblock ``Nonconvex regularizations for feature selection in ranking with
sparse SVM,''
\newblock {\em Neural Networks and Learning Systems, IEEE Transactions on},
vol. 25, no. 6, pp. 1118--1130, 2014.
\bibitem{mallat_mp}
S.~Mallat and Z.~Zhang,
\newblock ``Matching pursuit with time-frequency dictionaries,''
\newblock {\em IEEE Trans. Signal Processing}, vol. 41, no. 12, pp. 3397--3415,
1993.
\bibitem{pati93:_orthog_match_pursuit}
Y.C. Pati, R.~Rezaiifar, and P.~Krishnaprasad,
\newblock ``Orthogonal matching pursuit: Recursive function approximation with
applications to wavelet decomposition,''
\newblock in {\em Proc. of the 27th Annual Asilomar Conference on Signals,
Systems and Computers}, 1993.
\bibitem{merhej11:_embed_prior_knowl_within_compr}
D.~Merhej, C.~Diab, M.~Khalil, and R.~Prost,
\newblock ``Embedding prior knowledge within compressed sensing by neural
networks,''
\newblock {\em Neural Networks, IEEE Transactions on}, vol. 22, no. 10, pp.
1638--1649, Oct 2011.
\bibitem{zhang04:_solvin}
T.~Zhang,
\newblock ``Solving large scale linear prediction problems using stochastic
gradient descent algorithms,''
\newblock in {\em Proceedings of the Twenty-First International Conference on
Machine Learning}. ACM, 2004, pp. 116--123.
\bibitem{shalev-shwartz07:_pegas}
S.~Shalev-Shwartz, Y.~Singer, and N.~Srebro,
\newblock ``Pegasos: Primal estimated sub-gradient solver for SVM,''
\newblock in {\em Proceedings of the International Conference on Machine
Learning}, 2007, pp. 807--814.
\bibitem{shalev2011stochastic}
S.~Shalev-Shwartz and A.~Tewari,
\newblock ``Stochastic methods for $\ell_1$-regularized loss minimization,''
\newblock {\em The Journal of Machine Learning Research}, vol. 12, pp.
1865--1892, 2011.
\bibitem{johnson2013accelerating}
Rie Johnson and Tong Zhang,
\newblock ``Accelerating stochastic gradient descent using predictive variance
reduction,''
\newblock in {\mathbf{e}m Advances in Neural Information Processing Systems}, 2013, pp.
315--323.
\mathbf{b}ibitem{nguyen14:_linear}
N~Nguyen, Deanna Needell, and T.~Woolf,
\newblock ``Linear convergence of stochastic iterative greedy algorithms with
sparse constraints,''
\newblock {\mathbf{e}m http://arxiv.org/abs/1407.0088}, 2014.
\mathbf{b}ibitem{bubeck2009pure}
S{\'e}bastien Bubeck, R{\'e}mi Munos, and Gilles Stoltz,
\newblock ``Pure exploration in multi-armed bandits problems,''
\newblock in {\mathbf{e}m Algorithmic Learning Theory}. Springer, 2009, pp. 23--37.
\mathbf{b}ibitem{rakotomamonjy2015more}
Alain Rakotomamonjy, Sokol Ko{\c{c}}o, and Liva Ralaivola,
\newblock ``More efficient sparsity-inducing algorithms using inexact
gradient,''
\newblock in {\mathbf{e}m Signal Processing Conference (EUSIPCO), 2015 23rd European}.
IEEE, 2015, pp. 709--713.
\mathbf{b}ibitem{blumensath2008gradient_ieee}
Thomas Blumensath and Michael~E Davies,
\newblock ``Gradient pursuits,''
\newblock {\mathbf{e}m Signal Processing, IEEE Transactions on}, vol. 56, no. 6, pp.
2370--2382, 2008.
\mathbf{b}ibitem{aravkin2014orthogonal}
Aleksandr Aravkin, Aurelie Lozano, Ronny Luss, and Prabhajan Kambadur,
\newblock ``Orthogonal matching pursuit for sparse quantile regression,''
\newblock in {\mathbf{e}m Data Mining (ICDM), 2014 IEEE International Conference on}.
IEEE, 2014, pp. 11--19.
\mathbf{b}ibitem{lozano2011group}
Aur{\'e}lie~C Lozano, Grzegorz Swirszcz, and Naoki Abe,
\newblock ``Group orthogonal matching pursuit for logistic regression,''
\newblock in {\mathbf{e}m International Conference on Artificial Intelligence and
Statistics}, 2011, pp. 452--460.
\mathbf{b}ibitem{needell09:_cosam}
D.~Needell and J.~Tropp,
\newblock ``Cosamp: Iterative signal recovery from incomplete and inaccurate
samples,''
\newblock {\mathbf{e}m Applied and Computational Harmonic Analysis}, vol. 26, no. 3,
pp. 301--321, 2009.
\mathbf{b}ibitem{bahmani2013greedy}
Sohail Bahmani, Bhiksha Raj, and Petros~T Boufounos,
\newblock ``Greedy sparsity-constrained optimization,''
\newblock {\mathbf{e}m The Journal of Machine Learning Research}, vol. 14, no. 1, pp.
807--841, 2013.
\mathbf{b}ibitem{guelat86:_some_wolfes}
J.~Gu\'elat and P.~Marcotte,
\newblock ``Some comments on wolfe's away step,''
\newblock {\mathbf{e}m Mathematical Programming}, vol. 35, no. 1, 1986.
\mathbf{b}ibitem{drineas2006fast}
P.~Drineas, R.~Kannan, and M.~Mahoney,
\newblock ``Fast monte carlo algorithms for matrices i: Approximating matrix
multiplication,''
\newblock {\mathbf{e}m SIAM Journal on Computing}, vol. 36, no. 1, pp. 132--157, 2006.
\mathbf{b}ibitem{audibert2010best}
Jean-Yves Audibert and S{\'e}bastien Bubeck,
\newblock ``Best arm identification in multi-armed bandits,''
\newblock in {\mathbf{e}m COLT-23th Conference on Learning Theory-2010}, 2010, pp.
13--20.
\mathbf{b}ibitem{karnin2013almost}
Zohar Karnin, Tomer Koren, and Oren Somekh,
\newblock ``Almost optimal exploration in multi-armed bandits,''
\newblock in {\mathbf{e}m Proceedings of the 30th International Conference on Machine
Learning (ICML-13)}, 2013, pp. 1238--1246.
\mathbf{b}ibitem{jamieson2015non}
K.~Jamieson and A.~Talwalkar,
\newblock ``Non-stochastic best arm identification and hyperparameter
optimization,''
\newblock in {\mathbf{e}m Proceedings of the 19th International Workshop on Artificial
Intelligence and Statistic}, 2016.
\mathbf{b}ibitem{tropp07:signalrecov}
J.~Tropp and A.~Gilbert,
\newblock ``Signal recovery from random measurements via orthogonal matching
pursuit,''
\newblock {\mathbf{e}m IEEE Trans. Information Theory}, vol. 53, no. 12, pp.
4655--4666, 2007.
\mathbf{b}ibitem{tropp06:SSA}
J.~Tropp, A.~Gilbert, and M.~Strauss,
\newblock ``Algorithms for simultaneous sparse approximation. part {I}: Greedy
pursuit,''
\newblock {\mathbf{e}m Signal Processing}, vol. 86, pp. 572--588, 2006.
\mathbf{b}ibitem{shalev-shwartz10:_tradin_optim}
S.~Shalev-Shwartz, N.~Srebro, and T.~Zhang,
\newblock ``Trading accuracy for sparsity in optimization problems with
sparsity constraints,''
\newblock {\mathbf{e}m Siam Journal on Optimization}, vol. 20, 2010.
\mathbf{b}ibitem{chen2011stochastic}
R.-B. Chen, C.-H. Chu, T.-Y. Lai, and Y.~Wu,
\newblock ``Stochastic matching pursuit for bayesian variable selection,''
\newblock {\mathbf{e}m Statistics and Computing}, vol. 21, no. 2, pp. 247--259, 2011.
\mathbf{b}ibitem{peel2012matching}
Thomas Peel, Valentin Emiya, Liva Ralaivola, and Sandrine Anthoine,
\newblock ``Matching pursuit with stochastic selection,''
\newblock in {\mathbf{e}m Signal Processing Conference (EUSIPCO), 2012 Proceedings of
the 20th European}. IEEE, 2012, pp. 879--883.
\mathbf{b}ibitem{lacoste-julien13:_block_coord_frank_wolfe_struc_svms}
S.~Lacoste-Julien, M.~Jaggi, M.~Schmidt, and P.~Pletscher,
\newblock ``Block-coordinate frank-wolfe optimization for structural svms,''
\newblock in {\mathbf{e}m Proceedings of the International Conference on Machine
Learning}, 2013.
\mathbf{b}ibitem{ouyang2010fast}
Hua Ouyang and Alexander~G Gray,
\newblock ``Fast stochastic frank-wolfe algorithms for nonlinear svms.,''
\newblock in {\mathbf{e}m SDM}. SIAM, 2010, pp. 245--256.
\mathbf{b}ibitem{p14:_stoch_optim_impor_sampl}
Zhao P and Zhang T,
\newblock ``Stochastic optimization with importance sampling,''
\newblock {\mathbf{e}m http://arxiv.org/abs/1401.2753}, 2014.
\mathbf{b}ibitem{yaghoobi09:_diction_learn_for_spars_approx}
M.~Yaghoobi, T.~Blumensath, and M.~Davies,
\newblock ``Dictionary learning for sparse approximations with the majorization
method,''
\newblock {\mathbf{e}m IEEE Transaction on Signal Processing}, vol. 57, no. 6, pp.
2178--2191, 2009.
\mathbf{b}ibitem{rakotomamonjy12:_direc}
A.~Rakotomamonjy,
\newblock ``Direct optimization of the dictionary learning problem,''
\newblock {\mathbf{e}m IEEE Trans. on Signal Processing}, vol. 61, no. 12, pp.
5495--5506, 2013.
\mathbf{b}ibitem{gong2013jieping}
P.~Gong, C.~Zhang, Z.~Lu, J.~Huang, and J.~Ye,
\newblock ``A general iterative shrinkage and thresholding algorithm for
non-convex regularized optimization problems,''
\newblock in {\mathbf{e}m Proceedings of the 30th International Conference on Machine
Learning}, Atlanta, Georgia, Jun. 2013, pp. 37--45.
\mathbf{e}nd{thebibliography}
\end{document}
\begin{document}
\title{ Nested Artin approximation }
\author{ Dorin Popescu }
\thanks{The support from the project ID-PCE-2011-1023, granted by the Romanian National Authority for Scientific Research, CNCS - UEFISCDI, is gratefully acknowledged. }
\address{Dorin Popescu, Simion Stoilow Institute of Mathematics of the Romanian Academy, Research unit 5,
University of Bucharest, P.O.Box 1-764, Bucharest 014700, Romania}
\email{[email protected]}
\maketitle
\begin{abstract} A short proof of the linear nested Artin approximation property of the algebraic power series rings is given here.
\noindent
{\it Key words } : Henselian rings, Algebraic power series rings, Nested Artin approximation property\\
{\it 2010 Mathematics Subject Classification: Primary 13B40, Secondary 14B25,13J15, 14B12.}
\end{abstract}
\vskip 0.5 cm
\section*{Introduction}
The solution of an old problem (see for instance \cite{K}), the
so-called nested Artin approximation property, is given in the following theorem.
\begin{Theorem}[\cite{P}, {\cite[Theorem 3.6]{P1}}] \label{nes}
Let $k$ be a field, $ A=k\langle x\rangle$, $x=(x_1,\ldots,x_n)$ the ring of algebraic power series over $k$, $f=(f_1,\ldots,f_s)\in A[Y]^s$, $Y=(Y_1,\ldots,Y_m)$, and let $0<r_1\leq \ldots \leq r_m\leq n$ and $c$ be non-negative integers. Suppose that $f$ has a solution ${\hat y}=({\hat y}_1,\ldots,{\hat y}_m)$ in $ k[[x]] $ such that ${\hat y}_i\in k[[x_1,\ldots,x_{r_i}]]$ for all $1\leq i\leq m$ (a so-called nested formal solution). Then there exists a solution $y=(y_1,\ldots,y_m)$ of $f$ in $A$ such that $y_i\in k\langle x_1,\ldots,x_{r_i}\rangle$ for all $1\leq i\leq m$ and $y\equiv{\hat y} \ \ \mbox{mod}\ \ (x)^ck[[x]]$.
\end{Theorem}
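To fix ideas, here is a minimal (deliberately trivial) illustration of the nesting condition; the example is ours, not taken from \cite{P} or \cite{Rond1}:

```latex
Take $n=2$, $m=2$, $r_1=1$, $r_2=2$, $s=1$ and the single linear polynomial
\[ f \;=\; Y_2 - x_2\,Y_1 . \]
Any $\hat y_1\in k[[x_1]]$ together with $\hat y_2=x_2\hat y_1\in k[[x_1,x_2]]$
is a nested formal solution, and the theorem produces algebraic power series
$y_1\in k\langle x_1\rangle$, $y_2\in k\langle x_1,x_2\rangle$ with
$y_2=x_2y_1$ and $y_i\equiv\hat y_i \ \mbox{mod}\ (x)^ck[[x]]$.
```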
The proof relies on an idea of Kurke from 1972 and the Artin approximation property of rings of type $k[[u]]\langle x\rangle$ (see \cite{P}, \cite{S}, \cite{P1}). When $f$ is linear, interesting relations with other problems and a description of many results on this topic are nicely explained in \cite{Rond1}.
Also a proof of the above theorem in the linear case
can be found in \cite{Rond1} using just the classical Artin approximation property of $A$ (see \cite{A}). Unfortunately, we had some difficulties in reading \cite{Rond1}, but finally we noticed a shorter proof using mainly the same ideas. This proof is the content of the present note.
We owe thanks to G. Rond who noticed a gap in a previous version of this note.
\section{Linear nested Artin approximation property}
We start recalling \cite[Lemma 9.2]{Rond} with a simplified proof.
\begin{Lemma}\label{am} Let $(A,{\frk m})$ be a complete normal local domain, $u=(u_1,\dots,u_n)$, $v=(v_1,\ldots,v_m)$, $B=A[[u]]\langle v\rangle $ be the algebraic closure of $A[[u]][v]$ in $A[[u,v]]$ and $f\in B$. Then there exists $g$ in the algebraic closure $A\langle v, Z\rangle $ of $A[v,Z]$, $Z=(Z_1,\ldots,Z_s)$ in $A[[v,Z]]$
for some $s\in \bf N$ and ${\hat z}\in A[[u]]^s$ such that $f=g(\hat z)$.
\end{Lemma}
\begin{proof} Replacing $f$ by $f-f(u=0)$ we may assume that $f\in (u)$. Note that $B$ is the Henselization of $C=A[[u]][v]_{({\frk m},u,v)}$ by \cite{Ra} and so there exists some \'etale neighborhood of $C$ containing $f$. Using for example \cite[Theorem 2.5]{S} there exists a monic polynomial $F$ in $X$ over $A[[u]][v]$ such that $F(f)=0$ and $F'(f)\not \in ({\frk m},u,v)$, let us say $F=\sum_{i,j}F_{ij}v^iX^j$ for some $F_{ij}\in A[[u]]$.
Set ${\hat z}_{ij}=F_{ij}-F_{ij}(u=0)\in (u)A[[u]]$, ${\hat z}=({\hat z}_{ij})$ and $G=\sum_{ij}(F_{ij}(u=0)+Z_{ij})v^iX^j$ for some new variables $Z=(Z_{ij})$. We have $G({\hat z})=F$. Set $G'=\partial G/\partial X$. As
$$G(Z=0)=G({\hat z}(u=0))\equiv F(f)=0\ \mbox{ modulo}\ (u),$$
$$G'(Z=0)=G'({\hat z}(u=0))\equiv F'(f)\not \equiv 0 \ \mbox{modulo}\ ({\frk m},u,v)$$
we get $G(X=0)\equiv 0$, $G'(X=0)\not \equiv 0$ modulo $({\frk m},v,Z)$. By the Implicit Function Theorem there exists $g\in ({\frk m},v,Z) A\langle v,Z\rangle$
such that $G(g)=0$. It follows that $G(g({\hat z}))=0$. But the equation $F=G({\hat z})=0$ has exactly one solution $X=f$ in $({\frk m},u,v)B$ by the Implicit Function Theorem, and so $f=g({\hat z})$.
\ \end{proof}
\begin{Lemma} \label{la} Let $(A,{\frk m})$ be a Noetherian local ring, $f\in A[U]$, $U=(U_1,\ldots,U_s)$ a linear system of polynomial equations, $c\in \bf N$ and ${\hat u}$ a solution of $f$ in the completion $\hat A$ of $A$. Then there exists a solution $u\in A^s$ of $f$ such that $u\equiv {\hat u}\ \mbox{modulo}\ {\frk m}^c\hat A$.
\end{Lemma}
\begin{proof} Let $B=A[U]/(f)$ and $h:B\to {\hat A}$ be the map given by $U\to {\hat u}$. By \cite[Lemma 4.2]{P} (or \cite[Proposition 36]{P2}) $h$ factors through a polynomial algebra $A[Z]$, $Z=(Z_1,\ldots,Z_s)$, let us say $h$ is the composite map $B\xrightarrow{t} A[Z]\xrightarrow{g} {\hat A}$. Choose $ z\in A^s$ such that $z\equiv g(Z)\ \mbox{modulo} \ {\frk m}^c\hat A$. Then $u=t(cls\ U)(z)$ is a solution of $f$ in $A$ such that $u\equiv {\hat u}\ \mbox{modulo}\ {\frk m}^c\hat A$.
\ \end{proof}
\begin{Proposition} \label{p} Let $k\langle x,y\rangle$, $x=(x_1,\ldots,x_n)$, $y=(y_1,\ldots,y_m)$ be the ring of algebraic power series in $x,y$ over a field $k$ and $M\subset k\langle x,y\rangle^p$ a finitely generated $k\langle x,y\rangle$-submodule. Then
$$ k[[x]] (M\cap k\langle x\rangle^p)=(k[[x,y]]M)\cap k[[x]]^p,$$
equivalently $M\cap k\langle x\rangle^p$ is dense in $(k[[x,y]]M)\cap k[[x]]^p$, that is, for all ${\hat v}\in (k[[x,y]]M)\cap k[[x]]^p$ and $c\in \bf N$ there exists
$v_c\in M\cap k\langle x\rangle^p$ such that $v_c\equiv {\hat v}\ \mbox{modulo}\ (x)^ck[[x]]^p$.
Moreover, if $c\in \bf N$ and ${\hat v}=\sum_{i=1}^t {\hat u}_i a_i$ for some $a_i\in M$, ${\hat u}_i\in k[[x,y]]$ then there exist $ u_{ic}\in k\langle x,y\rangle$ such that
$ u_{ic}\equiv {\hat u}_i\ \mbox{modulo} \ (x)^ck[[x,y]] $, $v_c=\sum_{i=1}^t u_{ic}a_i\in (M\cap k\langle x\rangle^p) $ and $\hat v$ is the limit of $(v_c)_c$ in the $(x)$-adic topology.
\end{Proposition}
\begin{proof} Let ${\hat v}\in (k[[x,y]]M)\cap k[[x]]^p$, let us say
${\hat v}=\sum_{i=1}^t {\hat u}_ia_i$ for some $a_i\in M$ and ${\hat u}_i\in k[[x,y]]$.
By flatness of $k[[x]]\langle y\rangle\subset k[[x,y]]$ we see that there exist ${\tilde u}_i\in k[[x]]\langle y\rangle$ such that ${\hat v}=\sum_{i=1}^t {\tilde u}_ia_i$. Moreover by Lemma \ref{la} we may choose ${\tilde u}_i$ such that ${\tilde u}_i\equiv {\hat u}_i\ \mbox{modulo} \ (x)^ck[[x,y]]$.
Then using Lemma \ref{am} there exist $g_i\in k\langle y,Z\rangle$, $i\in [t]$ for some variables $Z=(Z_1,\ldots,Z_s)$ and ${\hat z}\in k[[x]]^s$ such that ${\tilde u}_i=g_i({\hat z})$. Note that $a_i=\sum_{r\in {\bf N}^m} a_{ir}y^r$, $g_i=\sum_{r\in {\bf N}^m} g_{ir}y^r$ with $a_{ir}\in k\langle x\rangle^p$, $g_{ir}\in k\langle Z\rangle$.
Clearly, ${\hat z}, {\hat v}$ is a solution in $k[[x]]$ of the system of polynomial equations $ V=\sum_{i=1}^t a_ig_i(Z) $, $V=(V_1,\ldots, V_p)$ if and only if it is a solution of the infinite system of polynomial equations
($*$) $V=\sum_{i=1}^t a_{i0}g_{i0}(Z)$, \ \ \ $\sum_{i=1}^t \sum_{r+r'=e}a_{ir}g_{ir'}(Z)=0$, $e\in {\bf N}^m$, $e\not =0.$
Since $k\langle x,Z,V\rangle$ is Noetherian we see that it is enough to consider in ($*$) only a finite set of equations, let us say with $e\leq \omega$ for some $\omega$ high enough. Applying the Artin approximation property of $k\langle x\rangle$ (see \cite{A}) we can find for any $c\in \bf N$ a solution $v_c\in k\langle x\rangle^p$, $z_c\in k\langle x\rangle^s$ of ($*$) such that $v_c\equiv {\hat v}\ \mbox{modulo}\ (x)^ck[[x]]^p$, $z_c\equiv {\hat z}\ \mbox{modulo}\ (x)^ck[[x]]^s$. Then $v_c=\sum_{i=1}^t a_ig_i(z_c)\in M\cap k\langle x\rangle^p $, and $u_{ic}=g_i(z_c)\in k\langle x,y\rangle$ satisfies $u_{ic}\equiv {\tilde u}_i\equiv {\hat u}_i\ \mbox{modulo}\ (x)^ck[[x,y]] $. Clearly $\hat v$ is the limit of $(v_c)_c$ in the $(x)$-adic topology and belongs to $k[[x]](M\cap k\langle x\rangle^p)$.
\ \end{proof}
\begin{Remark}{\em When $p=1$ then the above $M$ is an ideal and we get the so-called (see \cite{Rond1}) strong elimination property of the algebraic power series.}
\end{Remark}
The following proposition is partially contained in \cite[Lemma 4.2]{Rond1}.
\begin{Proposition} \label{p1} Let $M\subset k\langle x\rangle^p$ be a finitely generated $k\langle x\rangle$-submodule and $1\leq r_1< \ldots < r_e\leq n$, $p_1,\ldots,p_e$ be some positive integers such that $p=p_1+\ldots +p_e$. Then
$$T=M\cap (k\langle x_1,\ldots,x_{r_1}\rangle^{p_1}\times \ldots \times k\langle x_1,\ldots,x_{r_e}\rangle^{p_e})$$
is dense in
$${\hat T}=(k[[x]]M)\cap (k[[x_1,\ldots,x_{r_1}]]^{p_1}\times \ldots \times k[[ x_1,\ldots,x_{r_e}]]^{p_e}).$$
Moreover, if $c\in \bf N$ and ${\hat v}=\sum_{i=1}^t {\hat u}_i a_i\in {\hat T}$ for some $a_i\in M$, ${\hat u}_i\in k[[x]]$ then there exist $u_{ic}\in k\langle x\rangle$ such that $ u_{ic}\equiv {\hat u}_i\ \mbox{modulo} \ (x)^ck[[x]]$, $v_c=\sum_{i=1}^t u_{ic}a_i\in T$ and
$\hat v$ is the limit of $(v_c)_c$ in the $(x)$-adic topology.
\end{Proposition}
\begin{proof} Apply induction on $e$, the case $e=1$ being
done in Proposition \ref{p}. Assume that $e>1$. We may reduce to the case when $r_e=n$ replacing $M$ by $M\cap k\langle x_1,\ldots,x_{r_e}\rangle^p$ if $r_e<n$. Let
$$q:\Pi_{i=1}^{e}k[[x_1,\ldots,x_{r_i}]]^{p_i}\to \Pi_{i=1}^{e-1}k[[x_1,\ldots,x_{r_i}]]^{p_i},$$
$$q':\Pi_{i=1}^{e}k[[x_1,\ldots,x_{r_i}]]^{p_i}\to k[[x_1,\ldots,x_{r_e}]]^{p_e}$$
be the canonical projections, ${\hat v}=({\hat v_1},\ldots, {\hat v}_p)\in {\hat T},$ and $M_1=q(M)$.
Assume that ${\hat v}=\sum_{i=1}^t {\hat u}_ia_i$ for some ${\hat u}_i\in k[[x]]$, $a_i\in M$.
By induction hypothesis applied to $M_1$, $q(\hat v)$ given $c\in \bf N$ there exists $ u_{ic}\in k\langle x\rangle$ with $u_{ic}\equiv {\hat u}_i\ \mbox{modulo}\ (x)^ck[[x]]$
such that $v'_c=\sum_{i=1}^t u_{ic}q(a_i)\in q(T)$ and $q({\hat v})$ is the limit of $(v'_c)_c$ in the $(x)$-adic topology.
Now, let $v''_c=\sum_{i=1}^t u_{ic}q'(a_i)\in k\langle x_1,\ldots,x_n\rangle^{p_e} $. We have $v''_c\equiv q'({\hat v})\ \mbox{modulo}\ (x)^ck[[x]]^{p_e}$. Then $v_c=(v'_c,v''_c)=\sum_{i=1}^t u_{ic}a_i\in T$, $v_c\equiv {\hat v}\ \mbox{modulo} \
(x)^ck[[x]]^{p}$ and $\hat v$ is the limit of $(v_c)_c$ in the $(x)$-adic topology.
\ \end{proof}
\begin{Corollary} \label{lnes} Theorem \ref{nes} holds when $f$ is linear.
\end{Corollary}
\begin{proof} If $f$ is homogeneous then it is enough
to apply Proposition \ref{p1} to the module $M$ of solutions of $f$ in $A=k\langle x\rangle$. Suppose that $f$ is not homogeneous, let us say $f$ has the form $g+a_0$ for some system of linear homogeneous polynomials $g\in A[Y]^s$ and $a_0\in A^s$. The proof in this case follows \cite[page 7]{Rond1} and we give it here only for the sake of completeness. Replace $f$ by the homogeneous system of linear polynomials ${\bar f}=g+a_0Y_0$ from $A[Y_0,Y]^s$. A nested formal solution $\hat y$ of $f$ in $k[[x]]$ with ${\hat y}_i\in k[[x_1,\ldots,x_{r_i}]]$, $1\leq i\leq m$ induces a nested formal solution $({\hat y}_0,{\hat y}) $, ${\hat y}_0=1$ of $\bar f$ with $r_0=r_1$. As above, for all $c\in \bf N$ we get a nested algebraic solution $(y_0,y)$ of $\bar f$ with $y_i\in k\langle x_1,\ldots,x_{r_i}\rangle$ and $y_i\equiv {\hat y}_i\ \mbox{modulo}\ (x)^ck[[x]]$ for all $0\leq i\leq m$. It follows that $y_0$ is invertible and clearly $y_0^{-1}y$ is the desired nested algebraic solution of $f$.
\ \end{proof}
\vskip 0.5 cm
\end{document}
\begin{document}
\twocolumn[
\begin{center}
{\LARGE \bf ENTANGLEMENT
AND THE LINEARITY OF}\\
\vspace*{0.62cm}
{\LARGE \bf
QUANTUM MECHANICS}\\
\vspace*{0.8cm}
{\large \bf Gernot Alber}\\
\vspace*{0.1cm}
Abteilung f\"ur Quantenphysik,
Universit\"at Ulm, D-89069 Ulm, Germany
\\(Proceedings of the X International Symposium on
Theoretical Electrical Engineering ISTET 99)
\end{center}
\vspace*{1.0cm}
]
\noindent
{\Large \bf Abstract}\\
\\
Optimal universal entanglement processes are discussed
which entangle two quantum systems in an optimal way
for all possible initial states. It is demonstrated that
the linear character of quantum theory which enforces the
peaceful coexistence of quantum mechanics and relativity
imposes severe restrictions on the structure of the resulting
optimally entangled states. Depending on the dimension of
the one-particle Hilbert space such a universal process
generates either a pure Bell state or
mixed entangled states. In the limit of
very large dimensions of the one-particle Hilbert space
the von-Neumann entropy of the
optimally entangled state
differs from the one of the
maximally mixed two-particle state
by one bit only.
\\
\\
{\Large \bf Introduction}\\
\\
Ever since its discovery by Schr\"odinger \cite{Schr}
the existence of entanglement between
different quantum systems has been a major puzzling
aspect of quantum theory.
If a
quantum mechanical many particle system is in an
entangled state its characteristic physical properties are
distributed over all its
subsystems without being present in any one of them
separately. In the newly emerging science of quantum information
processing \cite{world} these puzzling aspects of quantum theory are
recognized as a potentially useful new resource which might help
to perform various tasks of practical interest
more efficiently than is possible by any classical means.
Prominent examples in this respect are applications
of entangled states in secret quantum key distribution (quantum
cryptography)
and in fast quantum algorithms (quantum computing).
In view of these developments the natural question arises
whether it is possible to design universal quantum processes
which entangle two or more quantum systems in an optimal way
for all possible initial states of the separate subsystems.
Definitely, provided the initial states of the subsystems are
known it should always be possible to design a particularly tailored
quantum process which
produces any desired quantum state.
However, this becomes less obvious if one wants to design a
universal
quantum process which is independent of possibly unknown
input states and
which performs the required task for all possible input
states in the same optimal way. Which constraints are imposed
by the fundamental laws of quantum mechanics on such universal,
optimal processes?
Recently, similar questions have been studied
extensively in the context of quantum cloning
\cite{clone1,clone2,clone3,clone4,clone5}
where one aims at
copying arbitrary quantum states by a universal quantum process.
It has been known for a long time that this task
cannot be performed perfectly
due to constraints imposed by the
linear character of quantum theory \cite{Wigner,Wooters}.
According to this linear character any quantum
process has to map the density operator of
the initial state linearly
onto the density operator of the final state. If the relation
between the density operators of initial and final states were not
linear, one could distinguish different unravellings of
one and the same density operator physically.
This would contradict
the basic postulate of quantum theory that the physical state
of a quantum system is described by a density operator and not
by any of its possibly inequivalent unravellings \cite{Peres}.
This linear character of quantum theory implies, for example,
that despite their nonlocal character
it is not possible
to use entangled states for superluminal communication
\cite{Gisin1}.
This so called no-signaling constraint of quantum theory
enforces the peaceful coexistence of quantum mechanics
and relativity \cite{Shimony} and
imposes severe restrictions
on universal quantum processes \cite{clone5}.
In the context of
optimal quantum cloning \cite{clone1,clone2,clone3,clone4,clone5}
and the universal NOT gate \cite{Buzek} these constraints
have already been
investigated. However, their influence on other universal quantum
processes is still widely unknown.
Motivated by the importance which entangled states play
in the context of
quantum information processing
in the following the question is addressed
whether it is
possible to design a universal quantum process which entangles
quantum systems in an optimal way.
How can one define such a universal, optimal entanglement process
and which restrictions are imposed by the linear character
of quantum theory?
What is the nature of the class of resulting optimally
entangled states?
Answering these questions sheds new light onto
the basic concept of entanglement itself and onto the question
which types of entangled states can be prepared by quantum processes
in a natural way.
\\
\noindent
{\Large \bf Optimal Entanglement by\\ Universal Quantum Processes}
\\
\\
How can one define a universal quantum process which entangles
quantum systems in an optimal way for all possible input states?
In order to put this problem into perspective let us consider
the simplest possible situation, namely a quantum process which
entangles
two particles whose associated
Hilbert spaces ${\cal H}_N$ have the same dimension $N$.
We assume that an arbitrary, pure
input state $\rho_{in}({\bf m})$
is entangled with a known reference state
$\rho_{ref}$
by a general quantum process
\begin{equation}
{\cal P}: \rho_{in}({\bf m})\otimes \rho_{ref} \to
\rho_{out}({\bf m})
\label{ansatz}
\end{equation}
thereby yielding the two-particle
output state $\rho_{out}({\bf m})$.
In particular, we are looking for a universal
quantum process which
is independent of the input state and
entangles both particles in an optimal way for all possible, pure
input states $\rho_{in}({\bf m})$.
In an N-dimensional Hilbert space
an arbitrary input state can always be represented in the form
\begin{equation}
\rho_{in}({\bf m}) = \frac{1}{N} ({\bf 1} + m_{ij} {\bf A}_{i j})
\end{equation}
where the operators ${\bf A}_{ij}$ ($i,j=1,...,N$)
form a basis for the Lie-Algebra
of $SU_N$ \cite{Mahler}. (We adopt the usual
convention that one has to sum over all
indices which appear twice.)
Explicitly these operators can be
represented by the $N\times N$ matrices
\begin{eqnarray}
({\bf A}_{ij})^{(kl)}=
\delta_{k i} \delta_{j l} - \delta_{ij}\delta_{kl}/N
\hspace*{0.5cm}(k,l=1,...,N)
\end{eqnarray}
with $\delta_{ij}$ denoting the Kronecker delta-function.
These operators
might be viewed as
generalizations of the Pauli spin operators
${\bf \sigma}_x,{\bf \sigma}_y$ and ${\bf \sigma}_z$
to cases with $N > 2$ and they
fulfill the relations ${\rm Tr}\{{\bf A}_{ij}\}=0$,
${\bf A}^{\dagger}_{ij}={\bf A}_{ji}$.
The characteristic quantity ${\bf m}$ whose
components are denoted $m_{ij}$ ($i,j=1,..,N$) might
be viewed as a generalized Bloch vector.
For $N=2$ the operators ${\bf A}_{ij}$ are
related to the Pauli spin operators by
${\bf \sigma}_z\equiv2{\bf A}_{11}\equiv-2{\bf A}_{22}$,
${\bf \sigma}_x+i{\bf \sigma}_y
\equiv2{\bf A}_{12}$ and
${\bf \sigma}_x-i{\bf \sigma}_y\equiv2{\bf A}_{21}$.
The self-adjointness of
the density operator $\rho_{in}({\bf m})$
implies that
$m_{ij}=m_{ji}^*$. Furthermore, $\rho_{in}({\bf m})$
represents a
pure state only if ${\rm Tr}[\rho^2_{in}({\bf m})]
= {\rm Tr}[\rho_{in}({\bf m})] = 1$ which implies
the relation
$[m_{i j}m_{j i} - (m_{ii})^2/N]=N^2(1-1/N)$.
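As a quick numerical sanity check of these generators (an illustrative NumPy sketch, not part of the original text), one can build the matrices $({\bf A}_{ij})^{(kl)}=\delta_{ki}\delta_{jl}-\delta_{ij}\delta_{kl}/N$ directly and verify that they are traceless, satisfy ${\bf A}^{\dagger}_{ij}={\bf A}_{ji}$, and reduce to the stated Pauli combinations for $N=2$:

```python
import numpy as np

def su_n_generator(i, j, N):
    """Matrix with entries (A_ij)^{(kl)} = delta_{ki} delta_{jl}
    - delta_{ij} delta_{kl}/N (indices 0-based here, 1-based in the text)."""
    A = np.zeros((N, N), dtype=complex)
    A[i, j] = 1.0                 # delta_{ki} delta_{jl} part
    if i == j:
        A -= np.eye(N) / N        # traceless subtraction for diagonal generators
    return A
```

For $N=2$ one recovers $2{\bf A}_{11}={\bf \sigma}_z$ and $2{\bf A}_{12}={\bf \sigma}_x+i{\bf \sigma}_y$ on the nose.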
In a similar way also
the two-particle
output state of Eq.(\ref{ansatz})
can be expressed in terms of
these generators of $SU_N$ according to
\begin{eqnarray}
\rho_{out}({\bf m}) &=&\frac{{\bf 1}\otimes {\bf 1}}{N^2} +
\alpha^{(1)}_{ij}({\bf m}){\bf A}_{i j}\otimes {\bf 1} +
\label{rout}
\\
&&
\alpha^{(2)}_{ij}({\bf m}){\bf 1}\otimes {\bf A}_{i j} +
K_{ij rs}({\bf m}){\bf A}_{i j}\otimes {\bf A}_{r s}.\nonumber
\end{eqnarray}
What are the basic requirements which
an optimal, universal entanglement process ${\cal P}$ of the general
form of Eq.(\ref{ansatz}) should fulfill?
Admittedly, the notion of optimal entanglement is not uniquely defined,
in particular for mixed states,
due to the lack of a unique measure of entanglement
\cite{measure,measure1,measure2}.
Despite these difficulties it appears natural to
regard the following two conditions as a
minimal set of
requirements for an optimal, universal entanglement process for
two particles, namely
\begin{eqnarray}
{\rm Tr}_{2}\{\rho_{out}({\bf m})\} &=&
{\rm Tr}_{1}\{\rho_{out}({\bf m})\} = \frac{{\bf 1}}{N},
\label{1}
\end{eqnarray}
\begin{eqnarray}
S[\rho_{out}({\bf m})] &=& -
{\rm Tr}\{\rho_{out}({\bf m}) {\rm ln}[\rho_{out}({\bf m})]\} \to
{\rm minimum}\nonumber\\
~
\label{2}
\end{eqnarray}
for all possible input states $\rho_{in}({\bf m})$.
The first condition expresses the well-known property
of pure, two-particle entangled states that
they behave as maximally mixed states as far as all
one-particle properties are concerned.
(${\rm Tr}_{1(2)}$ denotes the trace over the state space of particle
$1 (2)$.)
The second condition states that the entangled
two-particle output state
$\rho_{out}({\bf m})$
should be as pure as possible so that the associated von-Neumann
entropy $S[\rho_{out}({\bf m})]$ is minimal. (In Eq.(\ref{2})
this entropy is measured in units of Boltzmann's constant.)
Together
these two requirements imply that
in the resulting quantum state $\rho_{out}({\bf m})$
the quantum information
is distributed over both particles without
being present in each one of the separate particles alone.
If one does not consider both particles together one loses
a maximum amount of information. In this sense the requirements
(\ref{1}) and (\ref{2}) characterize optimal entanglement between
both particles.
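For $N=2$ a pure Bell state meets both requirements exactly: its one-particle marginals are maximally mixed while its two-particle von-Neumann entropy vanishes. A small NumPy check (our illustration, not from the text; entropy in natural units as in Eq.(\ref{2})):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho ln rho), measured in units of Boltzmann's constant."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]              # drop numerically zero eigenvalues
    return float(-(w * np.log(w)).sum())

# Bell state |Phi+> = (|00> + |11>)/sqrt(2) for one-particle dimension N = 2
psi = np.zeros(4)
psi[0] = psi[3] = 1.0 / np.sqrt(2.0)
rho_out = np.outer(psi, psi)

# reduced state of particle 1: partial trace over the particle-2 indices
rho_1 = rho_out.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
```

Here `rho_1` equals ${\bf 1}/2$, so condition (\ref{1}) holds, while $S[\rho_{out}]=0$ is the minimum demanded by condition (\ref{2}) for a pure output state.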
In the subsequent treatment it is demonstrated that these
two conditions
which concentrate on the information theoretic aspects of
entanglement
characterize uniquely a universal quantum
process which
yields entangled two-particle output states for arbitrary dimensions
of the one-particle Hilbert space ${\cal H}_N$.
Conditions (\ref{1}) and
(\ref{2}) imply that an optimal $\rho_{out}({\bf m})$
can always be found by
the covariant ansatz
\begin{eqnarray}
\rho_{out}({\bf R} {\bf m}) &=& U({\bf R})\otimes U({\bf R})
\rho_{out}({\bf m})
U^{\dagger}({\bf R})\otimes U^{\dagger}({\bf R})
\nonumber\\
~
\label{cov}
\end{eqnarray}
for all possible ${\bf R}\in SU_N$.
Eq.(\ref{cov}) states that the set
of possible two-particle output states $\rho_{out}({\bf m})$
forms a representation of the group
$SU_N \times SU_N$ thus ensuring that
the von-Neumann entropy is the same for all possible input states
$\rho_{in}({\bf m})$.
Thereby ${\bf R}\in SU_N$ represents the particular
unitary transformation with matrix elements $R_{ijkl}$
which transforms a given input state
$\rho_{in}({\bf m})$
into an arbitrary other input state $\rho_{in}({\bf m}')$
according to the
transformation law $m'_{ij} = R_{ijkl}m_{kl}$.
In fact, as conditions (\ref{1}) and (\ref{2})
are independent of the input state,
the optimal output state can even be found by the more restrictive
invariant ansatz
$\rho_{out}({\bf R}{\bf m}) = \rho_{out}({\bf m})$
for all possible ${\bf R}\in SU_N$. This ansatz
is a special
case of the covariant relation of Eq.(\ref{cov}).
However, as we want to investigate universal quantum processes also in
a more general context we do not want to impose this more restrictive
invariance condition already from the very beginning.
Furthermore, as conditions (\ref{1}) and (\ref{2})
are also invariant under
permutations of the particles an optimal $\rho_{out}({\bf m})$
also has to be permutation invariant.
Apart from the covariance condition of Eq.(\ref{cov})
any quantum process also has to be compatible
with the linearity of quantum mechanics. This linearity
implies that
$\rho_{out}({\bf m})$ has to be a linear function of the generalized
Bloch vector
${\bf m}$ which characterizes the input state. Thus
the characteristic quantities
$\alpha^{(1)}_{ij}({\bf m}),\alpha^{(2)}_{ij}({\bf m})$ and
$K_{ij rs}({\bf m})$ of Eq.(\ref{rout}) all have to be linear
functions of ${\bf m}$. This linear dependence guarantees
that different unravellings of the same
input state yield the same output state
after application of the
universal quantum process so that this process cannot
distinguish between different unravellings
of $\rho_{in}({\bf m})$.
Both the covariance condition of Eq.(\ref{cov}) and the linearity
constraint impose severe restrictions on
general universal quantum processes of the form of Eq.(\ref{rout}).
\\
\\
{\Large \bf Covariant and linear universal\\ quantum processes}
\\
\\
What is the
structure of
general covariant and linear quantum processes which result in
a two-particle output state which is invariant under permutations
of both particles?
Answering this question will yield a unified theoretical
description for a general class of universal
quantum processes which include both universal optimal
quantum cloning and
universal optimal entanglement as special cases.
The covariance condition of
Eq.(\ref{cov}) can be implemented easily
by observing that
only a tensor product of the form
${\bf S}={\bf A}_{ij}\otimes {\bf A}_{ji}$
transforms as a scalar under $SU_N\times SU_N$, i.e.
${\bf U}({\bf R})\otimes {\bf U}({\bf R}) {\bf S}
{\bf U}^{\dagger}({\bf R})\otimes {\bf U}^{\dagger}({\bf R}) = {\bf S}$.
Similarly, it is straightforward to demonstrate
that only tensor products of the form
${\bf V}_{il}={\bf A}_{ij}\otimes {\bf A}_{jl}$ or
${\bf V}^{\dagger}_{il}$ transform like generalized vectors
under $SU_N\times SU_N$, i.e.
${\bf U}({\bf R})\otimes {\bf U}({\bf R}) {\bf V}_{il}
{\bf U}^{\dagger}({\bf R})\otimes {\bf U}^{\dagger}({\bf R})
= {\bf V}_{km}R_{kmil}$.
Thus the most general two-particle
quantum process which is covariant, linear
in ${\bf m}$ and invariant under permutations of both
particles is of the form
\begin{eqnarray}
\rho_{out}({\bf m}) &=&\frac{{\bf 1}\otimes {\bf 1}}{N^2}
+
\alpha m_{i j} {\bf A}_{ij}\otimes {\bf 1} +\nonumber \\
&&\alpha m_{i j} {\bf 1}\otimes {\bf A}_{ij} +
C{\bf A}_{i j}\otimes {\bf A}_{j i} +\nonumber\\
&&
\beta m_{il} {\bf A}_{ij}\otimes {\bf A}_{jl} +
\beta m_{li}{\bf A}_{ji}\otimes {\bf A}_{lj}
\label{routcov}
\end{eqnarray}
and is characterized uniquely by the
real-valued parameters $\alpha,\beta$ and $C$.
These characteristic parameters have to be restricted to
their physical domain, which is defined by the
requirement that
$\rho_{out}({\bf m})$ is a density operator, i.e. it has
non-negative eigenvalues and satisfies
${\rm Tr}[\rho_{out}({\bf m})] = 1$.
In order to obtain insight into the physical contents of the class
of universal, covariant and linear quantum processes
which is described by Eq.(\ref{routcov})
let us investigate the structure of $\rho_{out}({\bf m})$
more explicitly. Due to the covariance condition
we can restrict ourselves to a pure input state
with
$m_{ij} = N \delta_{i1}\delta_{j1}$
without loss of generality.
Introducing an orthonormal
basis $\{|1\rangle,...,|N\rangle \}$ in the $N$-dimensional
one-particle Hilbert space ${\cal H}_N$
in which state $|1\rangle$ denotes the input state,
i.e. $\rho_{in}({\bf m}=m_{11}{\bf e}_{11}) = |1\rangle \langle 1|$,
one obtains from Eq.(\ref{routcov}) the expression
\begin{eqnarray}
\rho_{out}({\bf m}&=&m_{11}{\bf e}_{11}) =
M_{11}|11\rangle \langle 11| +\nonumber\\
&& \sum_{j=2}^{N} |jj\rangle \langle jj|
(M_{23}+C) +
\nonumber\\
&&
\sum_{j=2}^{N} \{
|1j\rangle \langle 1j| M_{12} +
|1j\rangle \langle j1| (C+\beta m_{11}) +\nonumber\\
&&
|j1\rangle \langle 1j| (C+\beta m_{11}) +
|j1\rangle \langle j1| M_{12}
\}+\nonumber\\
&&\sum_{i<j=2}^{N} \{
|ij\rangle \langle ij| M_{23} +
|ij\rangle \langle ji| C +\nonumber\\
&&
|ji\rangle \langle ij| C +
|ji\rangle \langle ji| M_{23}
\}
\label{explicit}
\end{eqnarray}
with
\begin{eqnarray}
M_{23}&=&1/N^2-2\alpha m_{11}/N-C/N+2\beta m_{11}/N^2,\nonumber\\
M_{12}&=&M_{23}+\alpha m_{11}-2\beta m_{11}/N,\nonumber\\
M_{11}&=&1/N^2+2\alpha m_{11}
(1-1/N)+C(1-1/N)+\nonumber\\
&&2\beta m_{11}(1-1/N)^2.\nonumber
\end{eqnarray}
The non-negativity of $\rho_{out}({\bf m})$ implies
the constraints
$M_{23}\geq|C|$, $M_{12}\geq|C+\beta m_{11}|$, $M_{23}+C\geq 0$ and
$M_{11}\geq 0$.
The two-particle output states of
Eqs.(\ref{routcov}) or (\ref{explicit})
characterize all possible permutation invariant,
covariant, linear mappings.
They describe in a unified way the restrictions which are imposed
by the linearity of quantum mechanics on universal
quantum processes which treat both particles in a symmetric way.
The universality of these processes guarantees
that they fulfill any additional conditions
for all possible input states.
The general covariance condition of Eq.(\ref{cov})
implies that
these additional conditions need not be invariant under
unitary transformations. They may very well depend on properties
of the initial input state.
As a special case of such a universal
quantum process let us consider
optimal cloning of pure states
\cite{clone1,clone2,clone3,clone4,clone5}.
In this case one is looking for a
mapping ${\cal P}$ of the form of Eq.(\ref{ansatz})
which fulfills the additional constraint
\begin{equation}
{\rm Tr}\{
\rho_{in}({\bf m})\otimes \rho_{in}({\bf m})
\rho_{out}({\bf m})
\} \to {\rm maximum}
\label{clone}
\end{equation}
for all possible input states $\rho_{in}({\bf m})$.
This constraint involves the input state explicitly and it is
equivalent to maximizing $M_{11}$ in Eq.(\ref{explicit}).
Physically speaking this condition imposes the constraint that
the output state $\rho_{out}({\bf m})$ should be as close as possible
to the ideally cloned state
$\rho_{in}({\bf m})\otimes \rho_{in}({\bf m})$.
It is straightforward to work out the optimal parameters which
satisfy Eq.(\ref{clone}), namely
$2\alpha m_{11} = (N+2)/[N (N+1)], \beta m_{11}=1/[2N+2] , C = 0$.
Inserting these parameters into Eq.(\ref{explicit}) one realizes that
optimal cloning can be achieved only imperfectly, with a probability
of $P_{11} \equiv M_{11} = 2/(N + 1) < 1$. With a probability of
$1-P_{11} = (N-1)/(N+1)$,
this process unavoidably also generates a maximally mixed
state which involves
all possible Bell states of the form
$|\psi_{1j}\rangle^{(+)}
= (|1 j \rangle + |j 1\rangle)/\sqrt{2}$ with
equal probabilities.
Here, state $|j\rangle$ can be any of the $(N-1)$ basis
states which are orthogonal to the pure input state
$\rho_{in}({\bf m}=m_{11}{\bf e}_{11})$.
Thus the two-particle output state
of the universal, optimal quantum cloning process
is given by
\begin{eqnarray}
\rho_{out}({\bf m}&=&m_{11}{\bf e}_{11}) = P_{11}|11\rangle \langle 11|
+\nonumber\\
&& \frac{(1 - P_{11})}{N-1}
\sum_{j=2}^N|\psi_{1j}\rangle^{(+)} ~^{(+)}\langle \psi_{1j}|.
\end{eqnarray}
\\
\\
{\Large \bf
Nature of the universal, optimally entangled
two-particle states}
\\
\\
What is the nature of the entangled states which are produced
by the optimal entanglement process characterized by
the covariant and linear map of Eq.(\ref{explicit})
and by conditions
(\ref{1}) and (\ref{2})?
Let us first of all
determine the values of the characteristic parameters
$\alpha, \beta$ and $C$
of this universal, optimal entanglement process.
Condition (\ref{1})
implies that $\alpha =0$. Minimizing the von Neumann entropy
$S[\rho_{out}({\bf m})]$
implies that we have to determine the remaining parameters
$\beta$ and $C$ in such a way that the number of vanishing eigenvalues
is as large as possible.
The physical region of the two remaining parameters
\begin{figure}
\caption{\small Schematic representation of the physical
region of the parameters
$y=C$ and $x=\beta m_{11}$.}
\label{fig1}
\end{figure}
$C$ and $\beta m_{11}$ is indicated in Fig. 1 by the black area.
From Fig. 1 it is straightforward to
show that the condition of minimal entropy
is fulfilled for $\beta=0$ and
$C = -1/[N(N - 1)]$. This implies that
$\rho_{out}({\bf m})$ indeed transforms as a scalar
under $SU_N\times SU_N$, as we have already anticipated earlier.
Thus the two-particle output state
which is produced by the optimal, universal entanglement process
is independent of the input state $\rho_{in}({\bf m})$ and
is given by
\begin{eqnarray}
\rho_{out}({\bf m}) &=&\frac{2!}{N ( N - 1)}\sum_{i<j=1}^N
|\psi_{ij}\rangle^{(-)}~^{(-)}\langle \psi_{ij}|.
\nonumber\\
~\label{ent}
\end{eqnarray}
In general, $\rho_{out}({\bf m})$ is a maximally disordered mixture
of all possible anti-symmetric Bell states
\begin{eqnarray}
|\psi_{ij}\rangle^{(-)} &=& \frac{1}{\sqrt{2}}
(|ij\rangle - | ji \rangle)
\end{eqnarray}
which can
be formed by two possible basis states
$|i\rangle$ and $|j\rangle$ of the $N$-dimensional one-particle
Hilbert space ${\cal H}_N$.
The number of these anti-symmetric Bell states is given by
$[N(N-1)/2!]={N \choose 2}$.
It is interesting that only the anti-symmetric
Bell states appear in this optimal, universal
entanglement process. This is understandable from the fact that
these Bell states are the only ones which are invariant under
arbitrary unitary transformations. This invariance property
guarantees that one obtains entangled output states for all
possible input states so that the resulting entanglement
process is universal.
The other three two-particle
Bell states, namely
\begin{eqnarray}
|\psi_{ij}\rangle^{(+)} &=&\frac{1}{\sqrt{2}}
(|ij\rangle + | ji\rangle),
\nonumber\\
|\Phi_{ij}\rangle^{(\pm)} &=&\frac{1}{\sqrt{2}}
(|ii\rangle \pm | jj\rangle),
\end{eqnarray}
do not have this invariance property.
If they appeared in the two-particle output state, it would always
be possible to find a particular input state which produces a
separable, non-entangled output state. Thus such a quantum process
would not fulfill the universality requirement.
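The invariance argument can be checked directly for $N=2$. In this Python sketch (our own illustration; a real rotation serves as a sample $SU(2)$ element), $U\otimes U$ leaves the singlet $|\psi_{12}\rangle^{(-)}$ unchanged but not the symmetric Bell state:

```python
import cmath

# A sample SU(2) element: a real rotation (det = 1)
t = 0.7
U = [[cmath.cos(t), -cmath.sin(t)], [cmath.sin(t), cmath.cos(t)]]

def kron_apply(U, psi):
    """Apply U (x) U to a two-qubit state psi indexed as psi[i][j]."""
    out = [[0j, 0j], [0j, 0j]]
    for i in range(2):
        for j in range(2):
            for k in range(2):
                for l in range(2):
                    out[i][j] += U[i][k] * U[j][l] * psi[k][l]
    return out

s = 1 / 2 ** 0.5
singlet = [[0, s], [-s, 0]]   # (|12> - |21>)/sqrt(2)
triplet = [[0, s], [s, 0]]    # (|12> + |21>)/sqrt(2)

rot_singlet = kron_apply(U, singlet)
rot_triplet = kron_apply(U, triplet)

# the anti-symmetric Bell state is invariant under U (x) U for U in SU(2) ...
assert all(abs(rot_singlet[i][j] - singlet[i][j]) < 1e-12
           for i in range(2) for j in range(2))
# ... while the symmetric Bell state is not
assert any(abs(rot_triplet[i][j] - triplet[i][j]) > 1e-6
           for i in range(2) for j in range(2))
```

The underlying identity is $U\otimes U\,|\psi^{(-)}\rangle = \det(U)\,|\psi^{(-)}\rangle$, so the singlet is invariant for every determinant-one $U$.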
For the case of universal optimal entanglement of a qubit, i.e. for
$N=2$, there is only one possible anti-symmetric Bell state, namely
$|\psi_{12}\rangle^{(-)}$.
Thus in this particular case
the universal entanglement process of Eq.(\ref{ent})
produces the pure two-particle output state
$\rho_{out}({\bf m}) =
|\psi_{12}\rangle^{(-)}~^{(-)}\langle \psi_{12}|$
which is known to violate Bell inequalities maximally \cite{Peres}.
For all higher values of
the dimension of the Hilbert space ${\cal H}_N$ the two-particle
output state is mixed.
Nevertheless, according to condition (\ref{2}),
the von Neumann entropy of all possible
output states is always as small as possible
within the linearity constraints
imposed by quantum theory.
Furthermore, it is straightforward
to show that none of the output states is separable,
as their partial transposes have at least one
negative eigenvalue \cite{Peres1},
namely $\lambda = -1/N < 0$.
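This non-separability can be verified exactly. The following Python sketch (our own, with hypothetical helper names) uses the identity $\rho_{out} = ({\bf 1}-{\rm SWAP})/[N(N-1)]$ implied by Eq.~(\ref{ent}), applies its partial transpose to the unnormalised vector $\sum_i|ii\rangle$, and confirms a negative eigenvalue:

```python
from fractions import Fraction

def rho_pt_times_maxent(N):
    """Apply the partial transpose (second particle) of
    rho_out = (1 - SWAP)/[N(N-1)] to the unnormalised vector sum_i |ii>."""
    denom = N * (N - 1)
    v = {(k, l): Fraction(int(k == l)) for k in range(N) for l in range(N)}
    out = {}
    for i in range(N):
        for j in range(N):
            acc = Fraction(0)
            for k in range(N):
                for l in range(N):
                    # rho^{T_B}[(i,j),(k,l)] = rho[(i,l),(k,j)]
                    #                        = (d_ik d_lj - d_ij d_lk)/denom
                    elem = Fraction(int(i == k and l == j)
                                    - int(i == j and l == k), denom)
                    acc += elem * v[(k, l)]
            out[(i, j)] = acc
    return v, out

for N in (2, 3, 4, 5):
    v, out = rho_pt_times_maxent(N)
    lam = out[(0, 0)] / v[(0, 0)]
    # sum_i |ii> is an eigenvector of the partial transpose with a
    # negative eigenvalue, so rho_out is entangled (Peres criterion)
    assert lam < 0
    assert all(out[key] == lam * v[key] for key in v)
```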
How do these optimal, universal
two-particle output states behave for high values
of the dimension of the one-particle Hilbert space
${\cal H}_N$?
In general, the von Neumann entropy of the two-particle output
state is given by
\begin{equation}
S[\rho_{out}({\bf m})] = \ln {N \choose 2} =
\ln[N(N-1)] - \ln[2!].
\end{equation}
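Since $\rho_{out}({\bf m})$ has $\binom{N}{2}$ equal non-zero eigenvalues, this entropy formula can be checked numerically (a minimal sketch; the helper name is ours):

```python
import math

def output_entropy(N):
    """Von Neumann entropy of rho_out: binom(N,2) equal eigenvalues 2/[N(N-1)]."""
    return math.log(math.comb(N, 2))   # = ln[N(N-1)] - ln 2

for N in (2, 10, 1000):
    assert abs(output_entropy(N) - (math.log(N * (N - 1)) - math.log(2))) < 1e-9

# for large N the gap to the maximally mixed entropy ln(N^2) approaches
# ln 2, i.e. one bit
N = 10**6
gap = math.log(N**2) - output_entropy(N)
assert abs(gap - math.log(2)) < 1e-5
```

For $N=2$ the entropy is zero, consistent with the pure singlet output state.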
For $N\gg 1$ this entropy approaches the value
$S[\rho_{out}({\bf m})] \to \ln[N^2] - \ln[2!]$.
Here, $\ln[N^2]$ is the entropy of the maximally disordered
two-particle state $\rho_{max} = {\bf 1}\otimes {\bf 1}/N^2$.
Thus, in the limit
of large dimensions of the Hilbert space ${\cal H}_N$,
the entropies of $\rho_{max}$ and of
$\rho_{out}({\bf m})$ differ by one bit only.
This shows that in the limit of large $N$
these universal, optimally entangled states
are located very close to the
maximally mixed state $\rho_{max}$
from the information theoretic point of view.
They are very fragile with respect to any disturbances:
losing only one bit of information changes them into the maximally
mixed state $\rho_{max}$.
Nevertheless, it is worth pointing out that this does not
necessarily
imply that these states are also close to $\rho_{max}$ in state
space. In order to characterize the distance of a mixed
quantum state $\rho$
from the maximally mixed one in state space one
usually decomposes $\rho$ according to
\begin{eqnarray}
\rho &=& (1 - \epsilon){\bf 1}/d + \epsilon \rho_1
\end{eqnarray}
with a suitably chosen density operator $\rho_1$ (with
${\rm Tr}[\rho_1] = 1$) and with $d$ denoting the dimension of the
relevant Hilbert space.
The quantity $\epsilon$ might be considered as characterizing
the separation of $\rho$ from the maximally mixed state.
Mixed states which are close to the maximally mixed one in the sense
that $0\leq \epsilon \ll 1$ are of particular interest for
quantum information processing in high-temperature nuclear magnetic
resonance \cite{nuc1,nuc2,nuc3}.
In this context, Braunstein et al. \cite{Schack}
have recently shown that in systems consisting
of $n$ qubits with $d=2^n$ one can always find a sufficiently small
neighborhood around the maximally mixed state with
$\epsilon = O(4^{-n})$ within which all states are separable.
In view of this result it is of interest to work out also the
distance of the
universal, optimally entangled states of Eq.(\ref{ent})
from the maximally mixed state $\rho_{max}$ in state space.
As many of the $N^2$ possible eigenvalues of $\rho_{out}({\bf m})$
are zero, these states
are characterized by $\epsilon = 1$. Thus, despite their
closeness to $\rho_{max}$ as far as the von-Neumann entropy
is concerned, these latter states are well separated
from $\rho_{max}$ in state space for all possible values of the
dimension of the Hilbert space ${\cal H}_N$.
\\
\\
{\Large \bf Acknowledgments}\\
\\
This work is supported by the DFG within the SPP
`Quanteninformationsverarbeitung' and by the ESF programme
`Quantum Information Theory and Quantum Computation'.
Stimulating discussions with N. Gisin are acknowledged.
\end{document}
\begin{document}
\title{Clique dynamics of locally cyclic graphs with $\delta\geq 6$}
\begin{abstract}
We prove that the clique graph operator $k$ is divergent on a locally cyclic
graph $G$ (i.\,e. every open neighbourhood $N_G(v)$ induces a circle) with minimum degree
$\delta(G)=6$ if and only if $G$ is $6$-regular.
The clique graph $kG$ of a graph $G$ has the maximal complete subgraphs of
$G$ as vertices, and the edges are given by non-empty intersections.
If all iterated clique graphs of $G$ are pairwise non-isomorphic, the graph $G$
is $k$-divergent; otherwise, it is $k$-convergent.
To prove our claim, we explicitly construct the iterated clique graphs of those infinite locally cyclic graphs with $\delta\geq6$ which induce
simply connected simplicial surfaces. These graphs are $k$-convergent\enspace if the
size of triangular-shaped subgraphs of a specific type is bounded from above.
We apply this criterion by using the universal cover of
the triangular complex of an arbitrary finite
locally cyclic graph with $\delta= 6$, which shows our divergence
characterisation.
\textbf{Keywords:} Iterated clique graphs, clique convergence, clique dynamics, locally cyclic, hexagonal grid, covering graph
\end{abstract}
\section{Introduction}\label{Sect_Introduction}
Applied to a graph $G$, the clique graph operator constructs its clique graph $kG$.
The vertices of $kG$ are the maximal complete subgraphs of $G$, called
\emph{cliques}. These cliques are adjacent in $kG$ if they intersect
in $G$. In 1972, Hedetniemi and Slater first studied line graphs and triangle-free graphs
using the clique graph operator \cite{hedetniemi1972line}.
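To make the operator concrete, the following brute-force Python sketch (our own illustration; the labelling of the octahedron as $K_{2,2,2}$ is an assumption of the example) computes $kG$ for the octahedron: its eight triangular faces are the cliques, and two cliques intersect unless they are antipodal, so $kG$ is $K_8$ minus a perfect matching.

```python
from itertools import combinations

# Octahedron = K_{2,2,2}: vertices 0..5, all edges except three antipodal pairs
V = range(6)
non_edges = {frozenset({0, 1}), frozenset({2, 3}), frozenset({4, 5})}

def adjacent(u, v):
    return u != v and frozenset({u, v}) not in non_edges

# enumerate all complete subgraphs, then keep the maximal ones (the cliques)
complete = [set(c) for r in range(1, 7) for c in combinations(V, r)
            if all(adjacent(u, v) for u, v in combinations(c, 2))]
cliques = [c for c in complete if not any(c < d for d in complete)]

assert len(cliques) == 8            # the 8 triangular faces
# in kG, cliques are adjacent iff they intersect in G; each triangle
# misses exactly its antipodal partner
for c in cliques:
    non_neighbours = [d for d in cliques if d is not c and not (c & d)]
    assert len(non_neighbours) == 1
```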
We are interested in locally cyclic graphs,
which means that the
set of vertices adjacent to a given vertex $v$ always induces a circle. Popular locally cyclic graphs are the octahedron,
the icosahedron, and the hexagonal grid, which are displayed in Figure \ref{fig_loccyclic}.
\begin{figure}[htbp]
\begin{center}
\begin{tikzpicture}[scale=0.5]
\coordinate (Z) at (0,0);
\foreach \i in {0,...,4}{
\coordinate (R\i) at (\i*72:2);
}
\foreach \i in {1,...,4}{
\draw (Z) -- (R\i);
\pgfmathparse{mod(int(1+\i),5)}
\pgfmathsetmacro{\nxt}{\pgfmathresult}
\draw (R\i) -- (R\nxt);
}
\draw (R0) -- (Z);
\node at (-2.3,-2) {a)};
\node[fill=white] at (Z) {$v$};
\node[fill=white] at (R1) {$w_k$};
\node[fill=white] at (R2) {$w_1$};
\node[fill=white] at (R3) {$w_2$};
\node[fill=white] at (R4) {$w_3$};
\node[fill=white] at (R0) {$w_4$};
\draw[dotted,thick] (15:2) arc (15:72-15:2);
\end{tikzpicture}
\hspace{0.5cm}
\begin{tikzpicture}[scale=0.43]
\node at (3,0.5) {b)};
\draw [color=black] (5.96,4.74)-- (3.86,2.38);
\draw [color=black] (3.86,2.38)-- (5.34,1.66);
\draw [color=black] (5.34,1.66)-- (5.96,4.74);
\draw (5.34,1.66)-- (6.,0.);
\draw (5.34,1.66)-- (8.08,2.3);
\draw (3.86,2.38)-- (6.,0.);
\draw (6.,0.)-- (8.08,2.3);
\draw (5.96,4.74)-- (8.08,2.3);
\draw [dash pattern=on 5pt off 5pt] (5.96,4.74)-- (6.38,2.84);
\draw [dash pattern=on 5pt off 5pt] (6.38,2.84)-- (8.08,2.3);
\draw [dash pattern=on 5pt off 5pt] (3.86,2.38)-- (6.38,2.84);
\draw [dash pattern=on 5pt off 5pt] (6.38,2.84)-- (6.,0.);
\end{tikzpicture}
\hspace{0.1cm}
\def \varphi {1.617}
\begin{tikzpicture}[
x={(-0.86in, -0.5in)}, y = {(0.86in, -0.5in)}, z = {(0, 1in)},
rotate = 22,
scale = 0.1,
foreground/.style = { },
background/.style = { dash pattern=on 5pt off 5pt}
]
\coordinate (9) at (0, -\varphi*\varphi, \varphi);
\coordinate (8) at (0, \varphi*\varphi, \varphi);
\coordinate (12) at (0, \varphi*\varphi, -\varphi);
\coordinate (5) at (0, -\varphi*\varphi, -\varphi);
\coordinate (7) at ( \varphi, 0, \varphi*\varphi);
\coordinate (3) at (-\varphi, 0, \varphi*\varphi);
\coordinate (6) at (-\varphi, 0, -\varphi*\varphi);
\coordinate (4) at ( \varphi, 0, -\varphi*\varphi);
\coordinate (2) at ( \varphi*\varphi, \varphi, 0);
\coordinate (10) at (-\varphi*\varphi, \varphi, 0);
\coordinate (1) at (-\varphi*\varphi, -\varphi, 0);
\coordinate (11) at ( \varphi*\varphi, -\varphi, 0);
\draw[foreground] (10) -- (3) -- (8) -- (10) -- (12) -- (8);
\draw[foreground] (4) -- (12) -- (2) -- (4) -- (11) -- (2);
\draw[foreground] (9) -- (3) -- (7) -- (9) -- (11) -- (7);
\draw[foreground] (7) -- (8) -- (2) -- cycle;
\draw[background] (12) -- (6) -- (10) -- (1) -- (6) -- (5) -- (1)
-- (9) -- (5) -- (11);
\draw[background] (5) -- (4) -- (6);
\draw[background] (3) -- (1);
\foreach \n in {1,...,12}
\node at (\n) {};
\end{tikzpicture}\hspace{0.1cm}
\begin{tikzpicture}[scale=0.23]
\HexagonalCoordinates{7}{7}
\draw (A33) -- (A37);
\draw (A42) -- (A47);
\draw (A52) -- (A57);
\draw (A62) -- (A66);
\draw (A33) -- (A73);
\draw (A24) -- (A74);
\draw (A25) -- (A75);
\draw (A26) -- (A66);
\draw (A42) -- (A24);
\draw (A52) -- (A25);
\draw (A62) -- (A26);
\draw (A63) -- (A36);
\draw (A73) -- (A37);
\draw (A74) -- (A47);
\draw (A75) -- (A57);
\node at (U75) {$\dots$};
\node at (U74) {$\dots$};
\node at (U73) {$\dots$};
\node at (U57) {\reflectbox{$\ddots$}};
\node at (U47) {\reflectbox{$\ddots$}};
\node at (U37) {\reflectbox{$\ddots$}};
\node at (D15) {$\dots$};
\node at (D14) {$\dots$};
\node at (D13) {$\dots$};
\node at (D51) {\reflectbox{$\ddots$}};
\node at (D41) {\reflectbox{$\ddots$}};
\node at (D31) {\reflectbox{$\ddots$}};
\end{tikzpicture}
\end{center}
\caption{a) The cyclic neighbourhood of $v$, b) Octahedron, icosahedron, hexagonal grid}\label{fig_loccyclic}
\end{figure}
For minimum degree $\delta$ of at least 4, they can be described as
Whitney triangulations of surfaces, which were investigated for example
in \cite{larrion2003clique},\cite{larrion2002whitney}, and
\cite{larrion2006graph}.
In 1999, Larrión and Neumann-Lara showed that some \mbox{$6$-regular}
triangulations of the torus are $k$-divergent\enspace \cite{larrion1999clique}
and, in 2000, they generalised this result to every \mbox{$6$-regular}
locally cyclic graph \cite{larrion2000locally}.
Furthermore, Larrión, Neumann-Lara, and Piza\~na \cite{larrion2002whitney}
showed that graphs in which every open neighbourhood of a vertex has a girth of at least $7$ are $k$-convergent.
Thus, locally cyclic graphs of minimum degree $\delta$ of at least $7$ are $k$-convergent. The question remains whether every non-regular locally cyclic
graph with $\delta=6$ is $k$-convergent. In this paper,
we provide the affirmative answer by generalising the approach
from \cite{larrion2000locally} and using the theoretical
background from \cite{rotman1973covering}.
In the remainder of this paper, a graph is not necessarily considered finite.
Furthermore, we extend the incidence terminology of a graph to three-circles. Thus, the three vertices and the three edges of a
three-circle\enspace are each \emph{incident} to the three-circle\ itself.
Throughout the paper, we use different kinds of neighbourhoods, which are defined in the appendix.
\section{Overview and basic concepts}
The focus of this paper is the proof of our main result:
\begin{theos}[Main result]
Let $G$ be a finite, locally cyclic graph with minimum degree $\delta=6$.
The clique graph operator diverges on $G$ if and only if $G$ is $6$-regular.
\end{theos}
\noindent This paper is based on two core insights:
\begin{enumerate}
\item The explicit description of iterated clique graphs
is massively simplified if we restrict ourselves to
triangularly simply connected graphs (there, triangular
substructures do not ``self-overlap'').
Subsection \ref{Subsect_SimplyConnected} gives a short
refresher on the definitions. In Section \ref{Sect_UniversalCover},
we extend the result to general locally cyclic graphs with
$\delta =6$ using universal covers.
\item Clique graphs grow only in regions with ``many'' vertices
of degree 6. More precisely, these vertices have to form
a triangular-shaped structure. We can also parametrise these regions by
barycentric coordinates. The necessary
formalism is given in Subsection \ref{Subsect_Hexagonal}.
This way, we combinatorially encode the adjacencies
in the iterated clique graphs in Section \ref{Sect_Graph}.
\end{enumerate}
\noindent To simplify terminology, we condense the specification of the centrally discussed object into a new definition.
\begin{defi}
A \emph{pika} is a triangularly simply connected locally
cyclic graph $G$ with minimum degree $\delta=6$.
\end{defi}
\noindent For pikas, we head for the following theorem.
\begin{theos}[Main Theorem for Pikas]
Let $G$ be a pika.
If there is an $m\geq 0$ such that
the triangular-shaped graph of side length $m$ (see Definition
\ref{Def_HexagonalGrid}) cannot be embedded into $G$, the clique operator is
convergent on $G$.
\end{theos}
\noindent We prove the main theorem for pikas\enspace by induction.
For a pika\enspace $G$ and for every $n\in\menge{Z}_{\geq 0}$, we define
a graph $G_n$, beginning with $G_0= G$.
We construct all the cliques of $G_n$ and their intersections, which yields
$G_{n+1}\cong k(G_n)$ and, by induction, $k^nG\cong G_n$. Thus, $G$ is clique
convergent if and only if the sequence $G_n$ converges.
Before diving into this line of arguments, in Section \ref{Sect_Topology},
we discuss some intricate topological arguments that are based
on the discrete curvature of a locally cyclic graph.
The results will be used to show that all cliques in $G_n$ are of
one of the types we describe and that the adjacencies in $G_{n+1}$
correspond to the intersections of cliques of $G_n$.
In Section \ref{Sect_Graph}, we define the graph $G_n$ and construct
two types of cliques in $G_n$. In Section \ref{Sect_ChartExtension},
our goal is to ensure the existence of a combined parametrisation of
intersecting triangular-shaped subgraphs in the general case and to
handle the exceptional cases. The results of this discussion are used
in Section \ref{Sect_CliqueCompleteness} to prove that $G_n$ has no
more cliques than the ones from Section \ref{Sect_Graph}. In Section
\ref{Sect_CliqueIntersections}, we finish the inductive proof for
$G_n\cong k^nG$ by describing the clique intersections in $G_n$
through the vertex adjacencies in $G_{n+1}$ and prove the main theorem
for pikas. In Section \ref{Sect_UniversalCover}, we deduce a
convergence criterion that does not rely on the simple connectivity
of $G$. In the special case of finite $G$ we conclude the main result.
Section \ref{Sect_FurtherResearch} includes our conjecture about
the infinite case and our suggestions for further research questions.
\subsection{Review: Simple Connectivity}\label{Subsect_SimplyConnected}
In this subsection, we review topological aspects of locally cyclic
graphs.
The definition of a path we use here originates from topological settings. In graph-theoretic literature, those paths would be called walks. A \emph{path} $p = x_0x_1\ldots x_k$ in a graph $G$ is a finite sequence of vertices such that $x_ix_{i+1} \in E$ for all $0 \leq i < k$.
The \emph{length} of a path is the number of contained edges.
Let $G$ be a locally cyclic graph. Following
\cite{larrion2000locally}, its \emph{triangular complex} is the
simplicial complex $\mathemph{\K{G}}$, whose simplices are the vertices, edges,
and three-circles\enspace of $G$. In this way, the three-circles\enspace of $G$ become the facets
of $\K{G}$ and, from now on, we will call them the \emph{facets} of $G$, too.
Like in \cite{rotman1973covering}, we call two paths $\alpha$ and $\beta$ in $G$ \emph{equivalent}
if we can reach one path from the other by applying a finite number of elementary moves
consisting in replacing two consecutive path edges $uv$ and $vw$ by the
edge $uw$ or the other way around whenever $\{u,v,w\}$ is a facet.
The complex $\K{G}$ is called
\emph{simply connected} if $G$ is connected --- i.\,e. for every pair of
vertices, there is a path in $G$ connecting them ---,
and if every closed path
$\alpha$ is equivalent to a path consisting of a single vertex (which is
the origin and end of $\alpha$). We call a locally cyclic graph
$G$ \emph{triangularly simply connected} if $\K{G}$ is simply connected.
\subsection{Hexagonal Grid}\label{Subsect_Hexagonal}
If a locally cyclic graph contains a region in which every vertex has degree $6$, the graph locally looks like the hexagonal grid, a term we will define now.
\begin{defi}\label{Def_HexagonalGrid}
We define the coordinate set
\begin{equation*}
\mathemph{\oldvec{D}_0} \ensuremath{\mathrel{\mathop:}=} \{ (1,-1,0), (1,0,-1), (-1,1,0), (0,1,-1), (-1,0,1), (0,-1,1) \}.
\end{equation*}
For $m \in \menge{Z}$, the \emph{hexagonal grid of height} $\mathemph{m}$ is the graph $\mathemph{\Hex_m} = (V_m,E_m)$ with
\begin{align*}
\mathemph{V_m} &\ensuremath{\mathrel{\mathop:}=} \{ (x_1,x_2,x_3) \in \menge{Z}^3 \mid x_1+x_2+x_3 = m \} \text{ and}\\
\mathemph{E_m} &\ensuremath{\mathrel{\mathop:}=} \{ \{x,y\} \subseteq V_m \mid x-y \in \vec{D}_0 \}.
\end{align*}
For $m\geq 0$, we denote the \emph{triangular-shaped graph of side length} $\mathemph{m}$, which is defined as $\Hex_m[V_m\cap\menge{Z}_{\geq 0}^3]$, by $\mathemph{\Delta_{m}}$.
Figure \ref{Fig_Delta_m} shows the smallest five of those subgraphs.
\begin{figure}[htbp]
\begin{center}
\quad\quad
\begin{tikzpicture}[scale=0.5]
\HexagonalCoordinates{1}{1}
\node[draw,fill=black,inner sep=1pt, shape=circle](x){};
\end{tikzpicture}\quad
\begin{tikzpicture}[scale=0.5]
\HexagonalCoordinates{1}{1}
\draw (A00) -- (A10) -- (A01) --cycle;
\end{tikzpicture}\quad
\begin{tikzpicture}[scale=0.5]
\HexagonalCoordinates{2}{2}
\draw (A00) -- (A20) -- (A02) --cycle;
\draw (A10) -- (A01) -- (A11) --cycle;
\end{tikzpicture}\quad
\begin{tikzpicture}[scale=0.5]
\HexagonalCoordinates{3}{3}
\draw (A00) -- (A30) -- (A03) --cycle;
\draw (A01) -- (A21) -- (A20) --(A02) -- (A12) -- (A10) --cycle;
\end{tikzpicture}\quad
\begin{tikzpicture}[scale=0.5]
\HexagonalCoordinates{4}{4}
\draw (A00) -- (A40) -- (A04) --cycle;
\draw (A01) -- (A31) -- (A30) --(A03) -- (A13) -- (A10) --cycle;
\draw (A02) -- (A20) -- (A22) --cycle;
\end{tikzpicture}
\nopagebreak
$\Delta_0\quad\Delta_1\quad\quad\quad\quad \Delta_2\quad\quad\quad\quad\quad\quad \Delta_3\quad\quad\quad\quad\quad\quad\quad\quad\quad \Delta_4\quad\quad\quad$
\end{center}
\caption{$\Delta_m$ for $m\in \{0,\ldots,4\}$}
\label{Fig_Delta_m}
\end{figure}
\end{defi}
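The definition above is easy to check computationally. The following Python sketch (helper names are ours) builds $\Delta_m$ from $\vec{D}_0$ and verifies the expected vertex count $\binom{m+2}{2}$ and edge count $3\binom{m+1}{2}$:

```python
from itertools import combinations

D0 = {(1, -1, 0), (1, 0, -1), (-1, 1, 0), (0, 1, -1), (-1, 0, 1), (0, -1, 1)}

def triangular_graph(m):
    """Vertices and edges of Delta_m = Hex_m[V_m intersected with Z_{>=0}^3]."""
    V = [(a, b, m - a - b) for a in range(m + 1) for b in range(m + 1 - a)]
    E = [(x, y) for x, y in combinations(V, 2)
         if tuple(p - q for p, q in zip(x, y)) in D0]
    return V, E

for m in range(6):
    V, E = triangular_graph(m)
    assert len(V) == (m + 1) * (m + 2) // 2   # triangular numbers
    assert len(E) == 3 * m * (m + 1) // 2
```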
For a locally cyclic graph $G$, a \emph{hexagonal chart} is a graph
isomorphism $\mu: H \to F$ (also written $H \iso{\mu} F$) with
vertex-induced subgraphs $H \subseteq \Hex_m$ and $F \subseteq G$.
If $F \cong \Delta_m$, we call it a \emph{standard chart}.
Since the symmetric group on three points acts on the hexagonal grid
by coordinate permutations, every subgraph $F \cong \Delta_m$ with
$m \geq 1$ has six standard charts.
For $(t_1,t_2,t_3)\in\menge{Z}^3$, we define the \emph{triangle inclusion map}:
\begin{equation*}
\mathemph{\Delta_m^{t_1,t_2,t_3}} : \Delta_m \to \Hex_{m+t_1+t_2+t_3}, \qquad
(a_1,a_2,a_3) \mapsto (a_1+t_1,a_2+t_2,a_3+t_3).
\end{equation*}
\section{Topology}\label{Sect_Topology}
We translate $\Delta_m$-shaped graphs into the setting of locally cyclic graphs.
A \emph{locally cyclic graph with boundary} is a simple
graph $G=(V,E)$ such
that for every vertex $v\in V$ the (open)
neighbourhood $N_G(v)$ is either a circle graph or a path graph.
If $N_G(v)$ is a circle,
$v$ is called an \emph{inner vertex} of $G$; otherwise, $v$ is called
a \emph{boundary vertex}.
An edge $xy\in E$ is called an \emph{inner edge} if its incident
vertices $x$ and $y$ have
two common neighbours, and a \emph{boundary edge} if not. The \emph{boundary graph} $\mathemph{\partial G}$
is the subgraph of $G$ consisting of the boundary vertices and the boundary edges.
$G$ is called \emph{locally cyclic} if $\partial G=\emptyset$.
The boundary graph $\partial G$ is well-defined, since for every inner
vertex $x$ and every edge $xy$, the vertex $y$ lies in the cyclic
neighbourhood $N_G(x)$ and has, therefore, two neighbours in
$N_G(x)$. Thus, $xy$ is an inner edge. However, there
exist inner edges that are incident to boundary vertices only.
\subsection{Straight Paths}\label{Subsect_StraightPaths}
A monomorphism of locally cyclic graphs with boundary preserves vertex degrees
of inner vertices. Furthermore, the number of incident facets on either side of a
path is preserved, too. We formalise this by the concept of \emph{path degree}:
Let $p = x_0x_1\ldots x_k$ be a path in a locally cyclic graph $G$ and
consider a vertex $x_i$
for $0 < i < k$.
\begin{itemize}
\item If $x_i$ is an inner vertex, $N_G(x_i)$ is a circle, say
of length $L$, and
marking $x_{i-1}$ and $x_{i+1}$ splits the circle into
two paths of lengths $l_1$ and $l_2$, satisfying $l_1+l_2=L$.
The \emph{path degree} $\mathemph{\deg_G^p(x_i)}$ is defined as
$\{l_1,l_2\}$.
\item If $x_i$ is a boundary vertex, $N_G(x_i)$ is a path graph
containing a unique shortest path $q$ from $x_{i-1}$ to $x_{i+1}$
with length $l$.
The \emph{path degree} $\mathemph{\deg_G^p(x_i)}$ is defined as $\{l\}$.
\end{itemize}
The concept of path degrees is illustrated in Figure \ref{fig_pathdeg}.
The path $p$ is called \emph{straight}
if $3$ is contained in $\deg_G^p(x_i)$ for every $0 < i < k$.
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=0.5]
\HexagonalCoordinates{10}{4}
\draw
(A11) -- (A20) -- (A30) -- (A31) node[draw=none,fill=none,font=\scriptsize,midway,below right] {$\deg_G^p(v)=\{3,4\}$} -- (D21) -- (A22) -- (A12) -- (A11)
(A20) -- (A22)
(A30) -- (A12)
(A21) -- (D21);
\draw
[very thick, blue]
(A11) -- (A21) node[draw=none,fill=none,font=\scriptsize,midway,below] {$p$} -- (A31);
\draw
(A61) -- (A70);
\draw (A80) -- (A81) node[draw=none,fill=none,font=\scriptsize,midway,below right] {$\deg_G^p(w)=\{4\}$} -- (D71) -- (A72) -- (A62) -- (A61)
(A70) -- (A72)
(A80) -- (A62)
(A71) -- (D71);
\draw
[very thick, blue]
(A61) -- (A71) node[draw=none,fill=none,font=\scriptsize,midway,below] {$p$} -- (A81);
\draw
[very thick, blue, dotted]
(A01) -- (A91);
\node[fill=white] at (A21) {$v$};
\node[fill=white] at (A71) {$w$};
\end{tikzpicture}
\caption{The path degrees of the inner vertex $v$ and the boundary vertex $w$. Since the path degree $\deg_G^p(w)$ does not contain $3$, $p$ is not straight.}\label{fig_pathdeg}
\end{figure}
\noindent As an important application, we construct the straight paths within $\Delta_m$.
\begin{rem}\label{Rem_StraightPathsInTriangle}
Up to symmetry (see Subsection \ref{Subsect_Hexagonal}),
the maximal straight paths with length at least $m-2$ in
$\Delta_m$ (with $m \geq 3$) are the following,
depicted in Figure \ref{Fig_alphabetagamma}:
\begin{enumerate}
\item For length $m$, we have $\alpha: \{0,\dots,m\}\to\menge{Z}^3$ with
$t \mapsto (m-t,t,0)$.
\item For length $m-1$, we have $\beta: \{0,\dots,m-1\}\to\menge{Z}^3$
with $t \mapsto (m-1-t,t,1)$.
\item For length $m-2$, we have $\gamma: \{0,\dots,m-2\}\to\menge{Z}^3$
with $t \mapsto (m-2-t,t,2)$.
\end{enumerate}
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=0.3]
\HexagonalCoordinates{5}{5}
\draw (A00) -- (A05) -- (A50);
\draw[blue] (A00) --node[below] {$\alpha$} (A50);
\draw[red] (A01) --node[below] {$\beta$} (A41);
\draw[green!80!black] (A02)--node[below] {$\gamma$} (A32);
\end{tikzpicture}
\caption{The maximal straight paths in $\Delta_m$}\label{Fig_alphabetagamma}
\end{figure}
\end{rem}
\begin{proof}
The boundary $\partial \Delta_m$ consists of six straight
paths of length $m$, given by $\alpha$ and its images under coordinate permutations.
Since any other straight path contains an inner vertex and
inner vertices have degree 6, the value of one of
the three coordinates is constant along the path (compare Subsection
\ref{Subsect_StraightPaths}).
Without loss of generality (see Subsection \ref{Subsect_Hexagonal}),
let this be the third coordinate. This yields $\beta$ and $\gamma$.
\end{proof}
\subsection{Topological consequences of $\delta=6$}
To understand the structure of a pika $G$ properly, we need to make sure that the boundary vertices of a $\Delta_m$-shaped subgraph are not connected by paths in an unexpected way.
\begin{lem}\label{lem_topology}
For every induced subgraph $S\cong\Delta_m$ of
$G$ the following statements hold:
\begin{enumerate}
\item\label{top_a} Any edge incident to two boundary vertices of $S$ lies in $S$.
\item\label{top_b} Let $v_0v_1v_2$ be a path with $v_0, v_2 \in \partial S$ and $v_1 \notin S$.
Then, either $v_0 = v_2$ or
$\{v_0,v_1,v_2\}$ is a facet (i.e. $v_0v_2$ is
a boundary edge).
\item\label{top_c} Let $v_0v_1v_2v_3$ be a non-repeating
path with $v_0, v_3 \in \partial S$ and $v_1,v_2\notin S$
such that neither $\{v_0,v_1,v_3\}$ nor $\{v_0,v_2,v_3\}$ are facets.
Then, there exists a boundary vertex $v \in \partial S$
such that $\{v,v_1,v_2\}$ is a facet.
\end{enumerate}
\end{lem}
\begin{proof}
See Appendix \ref{Sec_Proofsfromtopology}.
\end{proof}
\noindent This helps proving two more auxiliary lemmas.
\begin{lem}\label{Lem_NeighbourhoodIntersection}
Let $m \geq 1$ and $\Delta_m \cong S = S_1 \cup S_2 \cup S_3$ with $S_i \cong \Delta_{m-1}$.
Then, $\neig{G}{S_1} \cap \neig{G}{S_2} \cap \neig{G}{S_3} \subseteq S$.
\end{lem}
\begin{proof}
See Appendix \ref{Sec_Proofsfromtopology}.
\end{proof}
\begin{lem}\label{Lem_NeighbourhoodCycle}
Let $S \cong \Delta_m$. Then, $\neig{G}{S} \setminus S$ is a cycle and
vertices in $\neig{G}{S}\setminus S$ are incident to at most three facets
in $\neig{G}{S}$.
\end{lem}
\begin{proof}
See Appendix \ref{Sec_Proofsfromtopology}.
\end{proof}
\section{The graph $G_n$}\label{Sect_Graph}
\noindent We construct a graph sequence $G_n$ for every pika $G$ in a geometric way (see Def. \ref{Def_theCliqueGraph}).
In Subsection \ref{Subsect_Triangles}, we prove some general
statements concerning the relation between different triangular-shaped subgraphs.
These will be helpful in all further analyses.
In Subsection \ref{Subsect_CliqueConstruction},
we construct some cliques of $G_n$ explicitly. This is the first step on our way to prove $G_n\cong k^nG$ inductively.
If not otherwise stated, from now on $G$ will always refer to a
pika and $G_n$ will be the geometric clique graph
defined in Definition \ref{Def_theCliqueGraph}.
\begin{defi}\label{Def_theCliqueGraph}
Let $G$ be a pika. For a non-negative integer $n$, the
\emph{geometric clique graph} $\mathemph{G_n}$ has the following form:
\begin{itemize}
\item Its vertices are the subgraphs of $G$ isomorphic to
triangle graphs $\Delta_m$ with $m \leq n$, where
$m$ and $n$ have the same parity.
\item Its edges are defined as follows:
\begin{enumerate}
\item Two subgraphs (of $G$) $S_1 \cong \Delta_m$ and $S_2 \cong \Delta_m$
are adjacent (in $G_n$) if $S_1 \subseteq \neig{G}{S_2}$ or
$S_2 \subseteq \neig{G}{S_1}$.\footnote{These two conditions are in fact
equivalent. This is a direct consequence of Lemma
\ref{Lem_HexagonalChartExtension}, shown later.}
\item Two subgraphs $S_1 \cong \Delta_m$ and $S_2 \cong \Delta_{m-2}$
(with $m \geq 2$) are adjacent if $S_2 \subseteq S_1$.
\item Two subgraphs $S_1 \cong \Delta_m$ and $S_2 \cong \Delta_{m-4}$
(with $m \geq 4$) are adjacent if $S_2 \subseteq S_1$ and
$S_2$ does not contain any vertex of $\partial S_1$, i.\,e. $S_2\cap \partial S_1=\emptyset$.
\item Two subgraphs $S_1 \cong \Delta_m$ and $S_2 \cong \Delta_{m-6}$
(with $m \geq 6$) are adjacent if $S_2 \subseteq S_1$ and
$S_2$ does not contain any vertex with distance at most 1
from the boundary $\partial S_1$, i.\,e. $S_2\cap N_G[\partial S_1]=\emptyset.$
\end{enumerate}
\end{itemize}
A subgraph $S\cong \Delta_m$
of $G$ is said to be of \emph{level} $\mathemph{m}$ in $G_n$.
\end{defi}
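To see the four adjacency rules in action, they can be checked mechanically for triangles realised in the coordinate model $\Delta_m=\{(a,b,c)\in\menge{Z}_{\geq 0}^3 \mid a+b+c=m\}$ (possibly translated). The Python sketch below is an illustration under this assumption, with lattice adjacency given by differences that are permutations of $(1,-1,0)$; the helper names are ad hoc.

```python
from itertools import permutations

DIRS = set(permutations((1, -1, 0)))  # the six lattice neighbour directions

def triangle(m, offset=(0, 0, 0)):
    """Vertex set of a Delta_m, translated by offset."""
    return {tuple(o + x for o, x in zip(offset, v))
            for v in {(a, b, m - a - b) for a in range(m + 1) for b in range(m + 1 - a)}}

def closed_nbhd(vs):
    """Closed neighbourhood N_G[vs] in the ambient lattice."""
    return vs | {tuple(x + d for x, d in zip(v, dd)) for v in vs for dd in DIRS}

def boundary(tri):
    """Vertices of tri with fewer than six neighbours inside tri."""
    return {v for v in tri
            if sum(tuple(x + d for x, d in zip(v, dd)) in tri for dd in DIRS) < 6}

def adjacent(s1, m1, s2, m2):
    """The four adjacency rules of the geometric clique graph (assumes m1 >= m2)."""
    if m1 == m2:
        return s1 <= closed_nbhd(s2) or s2 <= closed_nbhd(s1)
    if m1 - m2 == 2:
        return s2 <= s1
    if m1 - m2 == 4:
        return s2 <= s1 and not (s2 & boundary(s1))
    if m1 - m2 == 6:
        return s2 <= s1 and not (s2 & closed_nbhd(boundary(s1)))
    return False
```

With these helpers, the central vertex $(2,1,1)$ of $\Delta_4$ is adjacent to $\Delta_4$ (level difference $4$, no boundary contact), while the corner $(4,0,0)$ is not.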
\noindent Clearly, $G_0 = G$. Thus we can try to prove $G_n=k^nG$ by induction.
\begin{ex}\label{Ex_CliqueGraphAdjacency_four}
The subgraph $\Delta_4$ of the pika $\Hex_4$ is a vertex of every geometric clique graph $(\Hex_4)_n$ with an even $n\geq 4$. The adjacent vertices of level $0$ are the three $\Delta_0$ that are depicted in blue in Figure \ref{Fig_DeltaFour} and the adjacent vertices of level $2$ are the ``face-down'' $\Delta_2$, which is depicted in red, and the six ``face-up'' $\Delta_2$, two of which are depicted in yellow.
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=0.5]
\HexagonalCoordinates{4}{4}
\foreach \a/\b/\c in {00/20/02, 11/31/13}{
\fill[yellow] (A\a) -- (A\b) -- (A\c) -- cycle;
}
\draw
(A00) -- (A40) -- (A04) -- (A00)
(A01) -- (A31) -- (A30) -- (A03) -- (A13) -- (A10) -- (A01)
(A02) -- (A20) -- (A22) -- (A02);
\draw[very thick, red] (A20) -- (A22) -- (A02) -- cycle;
\foreach \p in {11, 21, 12}{
\fill[blue] (A\p) circle (4pt);
}
\end{tikzpicture}
\caption{The (types of) subgraphs of level $0$ and $2$ of $\Hex_4$ that are adjacent to $\Delta_4$ in any $(\Hex_4)_n$ with an even $n\geq4$}
\label{Fig_DeltaFour}
\end{figure}
\end{ex}
\subsection{Properties of Triangles}\label{Subsect_Triangles}
In this subsection, we collect some technical results about triangles
and their relations.
\begin{rem}\label{Rem_TriangleInclusionBoundaryDistance}
Let $m \geq 3$.
The vertices of $\Delta_m$ with distance at least 1 to the boundary
induce the graph
$\Delta_m\setminus \partial \Delta_m\cong\Delta_{m-3}$. Thus, for
$m \geq 6$, the vertices with
distance at least 2 induce
$\Delta_m\setminus N_G[\partial \Delta_m] \cong\Delta_{m-6}$.
\end{rem}
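This remark is easy to verify in the coordinate model $\Delta_m=\{(a,b,c)\in\menge{Z}_{\geq 0}^3 \mid a+b+c=m\}$: the vertices at distance at least $1$ from the boundary are exactly those with all coordinates positive, a translate of $\Delta_{m-3}$. An illustrative Python check follows; the lattice adjacency by permutations of $(1,-1,0)$ is an assumption of the sketch.

```python
from itertools import permutations

DIRS = set(permutations((1, -1, 0)))

def triangle(m):
    """Vertices (a, b, c) of Delta_m: nonnegative integers with a + b + c = m."""
    return {(a, b, m - a - b) for a in range(m + 1) for b in range(m + 1 - a)}

def inner_part(tri):
    """Vertices whose whole lattice neighbourhood lies in tri,
    i.e. the vertices at distance >= 1 from the boundary."""
    return {v for v in tri
            if all(tuple(a + d for a, d in zip(v, dd)) in tri for dd in DIRS)}
```

Stripping the boundary of $\Delta_6$ leaves the translate $(1,1,1)+\Delta_3$, and stripping twice from $\Delta_8$ leaves $(2,2,2)+\Delta_2$, matching the remark.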
\noindent We define some graphs and vertex sets for future reference.
The set
\begin{align*}
\mathemph{\oldvec{E}}&\mathrel{\mathop:}= V_1\cap\menge{Z}_{\geq 0}^3=\{(1,0,0),(0,1,0),(0,0,1)\}\intertext{is the canonical basis, the graph}
\mathemph{\nabla_{1}}&\mathrel{\mathop:}=\Hex_2[(1,1,0),(0,1,1),(1,0,1)]\intertext{is the downward triangle of side length 1 in the centre of $\Delta_2$, the graphs}
\mathemph{\nabla_{1}^{\oldvec{e}}}&\mathrel{\mathop:}=\Hex_3[(1,1,0)+\vec{e},(0,1,1)+\vec{e},(1,0,1)+\vec{e}]\intertext{with $\vec{e}\in\vec{E}$ are the downward triangles of side length 1 inside $\Delta_3$, and the graph}
\mathemph{\nabla_{2}}&\mathrel{\mathop:}=\Hex_4[(2,2,0),(0,2,2),(2,0,2)]
\end{align*}
is the downward triangle of side length 2 in the centre of $\Delta_4$.
The following two auxiliary lemmas discuss small special cases.
\begin{lem}\label{Lem_TriangleInclusionOne}
Let $m \geq 1$ and consider $\Delta_m \subseteq \Hex_m$. If
$\Delta_{m-1} \cong S \subseteq \Delta_m$, either
\begin{enumerate}
\item $S$ is the image of $\Delta_{m-1}^{\vec{e}}$ with $\vec{e} \in \vec{E}$, or
\item $m = 2$ and $S=\nabla_1$.
\end{enumerate}
In particular, $\Delta_m \subseteq \neig{G}{S}$.
\end{lem}
\begin{proof}
See Appendix \ref{Sect_propoftriang}.
\end{proof}
\begin{lem}\label{Lem_TriangleInclusionTwo}
Consider $\Delta_m \subseteq\Hex_m$ with $m\geq 2$.
If $\Delta_{m-2} \cong S \subseteq \Delta_m$, either
\begin{enumerate}
\item $S$ is the image of $\Delta_{m-2}^{\vec{f}}$ with $\vec{f} \in \vec{E}+\vec{E}=V_2\cap\menge{Z}_{\geq 0}^3$,
\item $m=3$ and $S=\nabla_{1}^{\vec{e}}$ for
some $\vec{e} \in \vec{E}$, or
\item $m=4$ and $S=\nabla_{2}$.
\end{enumerate}
\end{lem}
\begin{proof}
See Appendix \ref{Sect_propoftriang}.
\end{proof}
\subsection{Clique construction of $G_n$}\label{Subsect_CliqueConstruction}
In this subsection, we construct different cliques of $G_n$. The constructed
cliques fall into two classes; those that are formed from three $\Delta_m$ within
one $\Delta_{m+1}$ (Lemma \ref{Lem_CliqueConstructionTriangle}),
and those that are formed by all $\Delta_1$ incident to a vertex
(Lemma \ref{Lem_CliqueConstructionVertex}).
In the next lemma, we employ a shorthand: For a hexagonal chart
$\mu: \Delta_{m+1} \to S$ and $(t_1,t_2,t_3) \in \menge{Z}^3$, we denote
the image of $\mu \circ \Delta_{m+1-t_1-t_2-t_3}^{t_1,t_2,t_3}$ by
$\mathemph{\mu^{t_1,t_2,t_3}}$.
\begin{lem}\label{Lem_CliqueConstructionTriangle}
Let $G$ be a pika and
$\Delta_{m+1} \iso{\mu} S \subseteq G$ a hexagonal chart with
$m \leq n$ and $m\equiv_2 n$.
The common
neighbourhood $\cneig{G_n}{\mu^{1,0,0},\mu^{0,1,0},\mu^{0,0,1}}$
forms a clique in $G_n$.
\end{lem}
\begin{proof}
By Definition \ref{Def_theCliqueGraph}, the $\mu^{\vec{e}}$ with $\vec{e}\in\vec{E}$
are vertices of $G_n$.
They are all contained in $S \subseteq \neig{G}{\mu^{\vec{e}}}$
(by Lemma \ref{Lem_TriangleInclusionOne}).
Thus, by Definition \ref{Def_theCliqueGraph},
they are all adjacent to each other.
The first step of the proof is finding
all elements in the common neighbourhood.
Let $T$ be in $\cneig{G_n}{\mu^{1,0,0},\mu^{0,1,0},\mu^{0,0,1}}\setminus\{\mu^{1,0,0},\mu^{0,1,0},\mu^{0,0,1}\}$.
\begin{enumerate}
\item If $T \cong \Delta_{m-k}$ for $k \in \{2,4,6\}$ and $k\leq m$,
by Definition \ref{Def_theCliqueGraph},
$T \subseteq\mu^{1,0,0}\cap \mu^{0,1,0}\cap \mu^{0,0,1}$.
For $m\in \{0,1\}$ we have
$\mu^{1,0,0}\cap \mu^{0,1,0}\cap \mu^{0,0,1}=\emptyset$,
which is a contradiction.
For $m \geq 2$, we have
$\mu^{1,0,0}\cap \mu^{0,1,0}\cap \mu^{0,0,1}=\mu^{1,1,1} \cong \Delta_{m-2}$.
We distinguish between the possible values of $k$.
\begin{enumerate}
\item $k=2$: We conclude $T = \mu^{1,1,1}$.
\item $k=4$: We have
$T\subseteq \mu^{\vec{e}}\setminus\partial \mu^{\vec{e}}$
for the three $\vec{e}\in\vec{E}$. Thus, by Remark
\ref{Rem_TriangleInclusionBoundaryDistance},\\
$\Delta_{m-4}\cong T\subseteq \mu^{1,1,1}\setminus\partial\mu^{1,1,1}\cong \Delta_{m-5}$,
which is impossible.
\item $k=6$: We have
$T\subseteq \mu^{\vec{e}}\setminus N_G[\partial \mu^{\vec{e}}]$
for the three $\vec{e}\in\vec{E}$. Thus, by Remark
\ref{Rem_TriangleInclusionBoundaryDistance},
$\Delta_{m-6}\cong T\subseteq \mu^{1,1,1}\setminus N_G[\partial\mu^{1,1,1}]\cong \Delta_{m-8}$,
which is impossible.
\end{enumerate}
We conclude $m\geq k=2$ and $T = \mu^{1,0,0}\cap \mu^{0,1,0}\cap \mu^{0,0,1}=\mu^{1,1,1}$.
\item If $T \cong \Delta_m$, by Definition \ref{Def_theCliqueGraph},
$T \subseteq \neig{G}{\mu^{1,0,0}}\cap \neig{G}{\mu^{0,1,0}}\cap \neig{G}{\mu^{0,0,1}}$.
Since by Lemma \ref{Lem_NeighbourhoodIntersection},
$S=\neig{G}{\mu^{1,0,0}}\cap \neig{G}{\mu^{0,1,0}}\cap \neig{G}{\mu^{0,0,1}}$,
Lemma \ref{Lem_TriangleInclusionOne} shows that $T$ can
only appear if $m = 1$ (in which case it comes from a $\nabla_1$).
In particular, it is never part of the common neighbourhood
if $\mu^{1,1,1}$ is.
\item If $T \cong \Delta_{m+k}$ for $k \in \{2,4,6\}$, we have
$\Delta_{m+1}\cong S = \mu^{1,0,0}\cup\mu^{0,1,0}\cup\mu^{0,0,1}\subseteq T$.
Again, we distinguish between the possible values of $k$.
\begin{enumerate}
\item $k=2$: We employ Lemma \ref{Lem_TriangleInclusionOne}
to describe
how $S$ can lie in $T$. By the same lemma and
Definition \ref{Def_theCliqueGraph},
all of these $T$ are pairwise adjacent.
Consider adjacency to smaller triangles:
If $m=1$, the additional $\Delta_m$ from Lemma \ref{Lem_TriangleInclusionOne}
lies in $S$ and is thus adjacent to all $T$.
If $m \geq 2$, the intersection
$\mu^{1,1,1} \cong \Delta_{m-2}$ has distance 1
to $\partial S$. Thus, it also
has distance 1 to $\partial T$ and it is therefore
adjacent to all of them.
\item $k=4$: The subgraphs $\mu^{\vec{e}}$ need to have distance 1 to
the boundary of $T$. Thus,
$S$ also has this distance. By
Remark \ref{Rem_TriangleInclusionBoundaryDistance},
this uniquely defines $T$ with $\neig{G}{S} \subseteq T$.
Since the additional $\Delta_{m+2}$ lie in $\neig{G}{S}$
(Lemma \ref{Lem_TriangleInclusionOne}),
they are adjacent to $T$. For $m=1$, the additional
$\Delta_m$ lies in
$S$ and has distance 1 from the boundary of $T$.
For $m \geq 2$, the intersection
$\mu^{1,1,1}$ has distance 1 from $\partial S$.
Since $S$ has distance 1 from
$\partial T$, the total distance between
$\mu^{1,1,1}$ and $\partial T$ is 2, showing
their adjacency.
\item $k=6$: By Remark
\ref{Rem_TriangleInclusionBoundaryDistance}, there is only
one embedding $\Delta_m \to \Delta_{m+6}$ with distance 2
to the boundary. Thus,
there is no such element adjacent to
all $\mu^{\vec{e}}$ simultaneously.
\end{enumerate}
\end{enumerate}
Finally, we conclude that $\cneig{G_n}{\mu^{1,0,0},\mu^{0,1,0},\mu^{0,0,1}}$
is a clique of $G_n$.\qedhere
\end{proof}
\noindent After having covered the triangle case, we now cover the vertex case.
The neighbours of a vertex $v$ form a circle $w_1w_2\dots w_k$ for
some $k\in \menge{N}$.
The \emph{umbrella} of $v$ is the set containing the $k$
facets $\{v,w_i,w_{i+1}\}$ for all $1 \leq i \leq k$, where
we read indices modulo $k$.
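In the coordinate model of the hexagonal grid every vertex has degree six, and its umbrella is easy to enumerate. The sketch below is illustrative only and again assumes adjacency by differences that are permutations of $(1,-1,0)$:

```python
# Neighbour directions in cyclic order: consecutive directions differ by a
# lattice step, so each pair spans a facet together with the centre vertex.
DIRS = [(1, -1, 0), (1, 0, -1), (0, 1, -1), (-1, 1, 0), (-1, 0, 1), (0, -1, 1)]

def umbrella(v):
    """The facets {v, w_i, w_{i+1}} around a degree-6 vertex v, indices mod 6."""
    ws = [tuple(a + d for a, d in zip(v, dd)) for dd in DIRS]
    return [frozenset({v, ws[i], ws[(i + 1) % len(ws)]}) for i in range(len(ws))]
```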
\begin{lem}\label{Lem_CliqueConstructionVertex}
Let $G$ be a pika and $v$ a vertex in $G$.
For odd $n$, the common neighbourhood $\cneig{G_n}{T\cong \Delta_1\mid v\in T}$ of all
$\Delta_1$ containing $v$ forms a clique in $G_n$.
\end{lem}
\begin{proof}
Clearly, all these $\Delta_1$ are pairwise adjacent in $G_n$, since they share $v$. Thus, they lie in a clique, which itself lies in the common neighbourhood $\cneig{G_n}{T\cong \Delta_1\mid v\in T}$. \\
We consider all $\Delta_{1+k} \cong T$ which lie in this common neighbourhood (for
$k \in \{0,2,4,6\}$).
\begin{enumerate}
\item Case $k=0$: If there were a $\Delta_1$ adjacent to all facets in the
umbrella, all of its vertices would lie in $N_G(v)$ (each
of its vertices can only lie in two facets and the number of facets
is at least 6). Thus, $N_G(v)$ would contain a three-circle, contradicting
the fact that it is a cycle of length at least 6.
\item Case $k=2$: Any $\Delta_3$ that is adjacent to all facets in
the umbrella must contain $v$ as its middle vertex, which forces $v$ to have degree $6$.
In this case, there are two
$\Delta_3$ with $v$ as their central vertex and these two
are clearly adjacent.
\item Case $k=4$: By Remark \ref{Rem_TriangleInclusionBoundaryDistance},
every $\Delta_1$
adjacent to a given $\Delta_5$ has to lie within the central $\Delta_2$.
By Lemma
\ref{Lem_TriangleInclusionOne}, there are four of
these $\Delta_1$.
Since $\deg(v) \geq 6$, no $\Delta_5$ can be adjacent to
all the facets in the umbrella.
\item Case $k=6$: By Remark \ref{Rem_TriangleInclusionBoundaryDistance}
and Definition \ref{Def_theCliqueGraph}, a $\Delta_7$ is only
adjacent to one $\Delta_1$.
Since $\deg(v) \geq 6$, no $\Delta_7$ can be adjacent to all
the facets in the umbrella.
\end{enumerate}
\noindent Thus, all the elements in
$\cneig{G_n}{T\cong \Delta_1\mid v\in T}$ are
pairwise adjacent, and we obtain a clique.
\end{proof}
\noindent These two lemmas suggest a correspondence between the cliques of $G_n$ we constructed and the vertices of $G_{n+1}$.
\begin{rem}\label{lem_map}
For every pika $G$ and every $n\in \menge{Z}_{\geq 0}$, there is a map
\begin{align*}
C\colon V(G_{n+1})&\to \{\text{cliques of }G_n\},\\
S&\mapsto\text{the clique from}
\begin{cases}
\text{Lemma \ref{Lem_CliqueConstructionVertex}}, &\text{if $S$ is of level $0$},\\
\text{Lemma \ref{Lem_CliqueConstructionTriangle}}, &\text{otherwise.}
\end{cases}
\end{align*}
\end{rem}
\noindent In Section \ref{Sect_ChartExtension}, we discuss some theory
that helps to show the bijectivity of this map in Section \ref{Sect_CliqueCompleteness}.
In Section \ref{Sect_CliqueIntersections}, we prove that it is a
graph isomorphism between $G_{n+1}$ and $kG_{n}$.
\section{Chart Extensions}\label{Sect_ChartExtension}
In Section \ref{Sect_Graph}, we introduced the graph $G_n$
(of a pika $G$) and constructed
several of its cliques in Subsection \ref{Subsect_CliqueConstruction}. We
still need to show that $G_n$ has no more cliques than those.
To do so, we transfer local regions of $G_n$ to local regions of the
hexagonal grid, where the calculations become simpler. This transfer is
easy if we only consider ``smaller'' triangles within a ``larger'' hexagonal
chart. However, Definition \ref{Def_theCliqueGraph} also includes edges
between triangles of the same size. In this case, we
extend a hexagonal chart to a larger domain, containing
all adjacent triangular-shaped subgraphs as well, if possible.
The existence of such an extension is non-trivial and
needs several intricate arguments about topology and straight paths.
In this technical section, we show that such an extension of charts is always
possible for $m \geq 3$ (Lemma \ref{Lem_HexagonalChartExtension}).
For $m\geq 4$ and a $\Delta_m$-shaped subgraph $S$ of
$\Hex_m$, any neighbouring $\Delta_m$-shaped subgraph
(with respect to the geometric clique graph $(\Hex_m)_n$) can be constructed
by adding vectors from $\vec{D}_0$ (see Def. \ref{Def_HexagonalGrid}) to $S$.
For $m=3$, one additional neighbour occurs, which is the rotation of $S$ by
$\frac{\pi}{3}$. We want to prove that the same structure holds for
subgraphs of $G$. We start by noting that
each $\vec{d} \in \vec{D}_0$ can lead to an adjacent $\Delta_m$.
(The complementary claim is proven in Lemma \ref{Lem_NeighbourChart}).
\begin{rem}\label{Rem_TranslationNeighboursExist}
Let $G$ be a pika and let $\nu\colon H\to F$ be a
hexagonal chart with $\Delta_m\subseteq H\subseteq \Hex_m$ for some $m\geq 3$.
If, for some $\vec{d}\in \vec{D}_0$, the image of
$\Delta_m^{\vec{d}}$ is
contained in $H$, then the image of $\nu\circ\Delta_m^{\vec{d}}$
lies inside $\neig{G}{\nu(\Delta_m)}$.
\end{rem}
\noindent We employ \emph{facet-paths} to extend charts. For a
locally cyclic graph with boundary $G = (V,E)$, these are finite
sequences of facets
$f_1f_2\dots f_k$ such that $f_i \cap f_{i+1} \in E$
for all $1 \leq i < k$.
Given a monomorphism $\mu: H \to G$ and a facet-path
$f_1 f_2 \dots f_k$ in $H$, we have the following transfer:
For each pair $f_if_{i+1}$ with
$1 \leq i < k$, the image $\mu(f_i \cap f_{i+1})$ is an
inner edge and $\mu(f_i) \neq \mu(f_{i+1})$.
In particular, the images of the vertices in
$f_i$ uniquely determine the image of the
vertex $f_{i+1}\setminus(f_i \cap f_{i+1})$.
Inductively, the images of the vertices in $f_1$
determine those in $f_k$.
This allows a unique extension along a facet-path.
Unfortunately, the extensions from different facet-paths
are not compatible in general.
\noindent Given a $\Delta_m$-shaped subgraph $S\subseteq G$ with neighbouring $\Delta_m$-shaped subgraphs $T_k$, we want
to show the existence of a chart containing all of them. We start by
extending the chart from $S$ to one neighbouring
triangular-shaped subgraph.
\begin{lem}\label{Lem_NeighbourChart}
Let $G$ be a pika and $\Delta_m \iso{\mu} S \subseteq G$ be a standard
chart with $m \geq 3$. Let $\Delta_m \cong T \subseteq \neig{G}{S}$. Then, there is an
$H \subseteq \Hex_m$ and a hexagonal chart $\hat{\mu}: H \to T$
such that $\mu^{-1}(x) = \hat{\mu}^{-1}(x)$ holds for every vertex
$x$ in $S \cap T$.
Furthermore, $H$ falls in one of these two cases:
\begin{itemize}
\item $H = \vec{d} + \Delta_m$ for some $\vec{d} \in \vec{D}_0$.
\item $H = \mathemph{\nabla_3} \mathrel{\mathop:}= \{(a,b,c)\in\Hex_3 \mid a \leq 2, b \leq 2, c\leq 2\}$,
with corner vertices $(2,2,-1)$, $(2,-1,2)$, and $(-1,2,2)$.
\end{itemize}
\noindent Consequently, for $m \geq 4$ there are at most six of these
$\Delta_m\cong T \subseteq \neig{G}{S}$,
and for $m=3$ there are at most seven.
\end{lem}
\begin{proof}
By Remark \ref{Rem_StraightPathsInTriangle}, the boundary of
$\Delta_m$ consists of three straight paths of length $m$.
Thus, to find $\Delta_m \cong T \subseteq \neig{G}{S}$,
we start by describing all straight paths with $m$ edges within $\neig{G}{S}$.
Each of those paths either completely lies in
$\neig{G}{S}\setminus S$ or it intersects $S$ in at least one vertex.
If all boundary paths of $T$ lay in $\neig{G}{S}\setminus S$,
Lemma \ref{Lem_NeighbourhoodCycle}
would imply $\partial T = \neig{G}{S} \setminus S$ since both are cyclic graphs.
However, this would
imply $S \subsetneq T$, contradicting $S \cong T$.
Thus, at least one boundary path of $T$
intersects $S$. We can construct each of those
paths by extending a straight path from $S$ into
$\neig{G}{S}\text{e}nsuremath{\backslash} S$. The first extended edge cannot be a boundary
edge of $\neig{G}{S}$.
By Lemma \ref{Lem_NeighbourhoodCycle},
vertices in $\neig{G}{S}\setminus S$ are incident to at most three facets.
Thus, any straight path in
$S$ can only be extended by one edge into $\neig{G}{S}$
on each side (the path degree
of any extension is at most 2), which can be seen in Figure \ref{Extensions_of_a_straight_path}.
\begin{figure}[htbp]
\centering
\begin{tikzpicture} [scale=0.3]
\HexagonalCoordinates{8}{8}
\draw
(A11) -- (A71) -- (A26) -- (A17) node[draw=none,fill=none,font=\scriptsize,midway,above right] {$S$} -- (A11)
(A12) -- (A22) -- (A13) -- (A23) -- (A14);
\draw[green!80!black]
(A12) -- (A03) -- (A13) -- (A04) -- (A14) -- (A05) -- (A04) -- (A03);
\draw[very thick, blue] (A31) -- (A04);
\draw (A53) -- (A43) -- (A44) -- (A34) -- (A35);
\draw[green!80!black]
(A54) -- (A63) -- (A53) -- (A54) -- (A44) -- (A45) -- (A35) node[draw=none,fill=none,font=\scriptsize,midway,above right] {$\neig{G}{S}$}
(A45) -- (D44);
\draw[green!80!black,thick, dotted]
(A54) -- (D44);
\draw[very thick, blue] (A24) -- (A44);
\draw[very thick,blue,dashed] (A54) -- (A44) -- (D44);
\end{tikzpicture}
\caption{Possible extension of a straight path into the neighbourhood of $\Delta_{m}$.}
\label{Extensions_of_a_straight_path}
\end{figure}
Consequently, we start with the straight paths in $S$ with at least $m-2$
edges, extend them to straight paths of length $m$ and
add the possible two other sides of the $\Delta_m$-shaped subgraph.
Up to symmetry, the preimages of the paths with at least $m-2$ edges with respect to $\mu$
are described in Remark \ref{Rem_StraightPathsInTriangle}, whose
notation ($\alpha$,
$\beta$, and $\gamma$) we employ.
The images of $\alpha,\beta,\gamma \subseteq \Delta_m$ under
$\mu$ are called $\alpha^\mu, \beta^\mu,\gamma^\mu \subseteq S$.
A straight path with $m$ edges can be a boundary path
of at most two $T \cong \Delta_m$. However, if we start at a vertex
$\mu(m-t-k,t,k) \in S$ and follow a straight path ``down''
(i.\,e. in the direction
of smaller third coordinates), it can only go $k$ steps while staying
in $S$. Conversely, if we follow a straight path in the direction
of larger
third coordinates (``up''), we can go at most $m-k$ steps within $S$.
We conclude that from any vertex of
\begin{align*}
\left\{\begin{array}{c}
\alpha^\mu\\
\beta^\mu\\
\gamma^\mu
\end{array}\right\} \text{ we can go} \left\{\begin{array}{c}
m+1\\
m\\
m-1
\end{array}\right\}\text{ steps `up'}
\text{ and }
\left\{\begin{array}{c}
1\\
2\\
3
\end{array}\right\}
\text{ steps `down'}
\end{align*}
staying in $\neig{G}{S}$.
Since $m\geq 3$, a straight path of length $m$ starting at a vertex
of $\alpha^\mu$ or $\beta^\mu$ can only run in the `up' direction.
For $m=3$, a straight path of length $m$ starting at a vertex
of $\gamma^\mu$ only fits in the `down' direction, and for $m\geq 4$,
no straight path of length $m$ beginning at a vertex
of $\gamma^\mu$
exists at all, neither `upwards' nor `downwards'.
Now, we discuss which parts of $\alpha^\mu,\ \beta^\mu$, and $\gamma^\mu$
can be the straight boundary paths of a
$\Delta_m \cong T \subseteq \neig{G}{S}$. Since $\alpha^\mu$ lies in
$\partial S$ with $S \cong \Delta_m$, it cannot
be part of the boundary of $T$. Thus, two possible
extensions of $\alpha^\mu$ and two possible extensions of $\beta^\mu$
remain, as can
be seen in Figure \ref{Fig_paths}.
\begin{figure}[htbp]
\begin{center}
\begin{tikzpicture}[scale=0.5]
\HexagonalCoordinates{9}{8}
\draw (A10) -- (A90) -- (A18) -- cycle;
\draw (A00) -- (A01) -- (A11) -- (A20);
\draw (A01) -- (A10);
\draw (A13) -- (A04) -- (A14);
\draw (A08) -- (A07) -- (A16);
\draw (A07) -- (A17);
\draw[thick, red] (A10) -- (A80);
\draw[thick, red, dashed] (A00) -- (A10);
\node[red,below] at (A40) {$\alpha^\mu_{|\{0,\dots,m-1\}}$};
\draw[thick, blue] (A80) -- (A17);
\draw[thick, blue, dashed] (A17) -- (A08);
\node[blue, right] at (A44) {$\mu\circ\beta'$};
\foreach \p/\r/\x/\y/\z in {00/below left/m+1/-1/0, 10/below/m/0/0, 80/below/1/m-1/0,
01/above left/m/-1/1, 11/right/m-1/0/1, 13/below right/m-k/0/k, 04/left/m-k/-1/k+1,
14/right/m-k-1/0/k+1, 16/below right/2/0/m-2, 17/right/1/0/m-1, 07/below left/2/-1/m-1,
08/left/1/-1/m}{
\fill[black] (A\p) circle (2.5pt);
\node[\r,scale=0.7] at (A\p) {$\begin{pmatrix}\x\\\y\\\z\end{pmatrix}$};
}
\end{tikzpicture}
\end{center}
\caption{Straight paths in a triangle}\label{Fig_paths}
\end{figure}
\noindent We extend $\alpha^\mu_{|\{0,\dots,m-1\}}$ at 0. This path ends at
$(1,m-1,0)$, where a rotation of $\beta^\mu$ starts:
\begin{equation*}
\beta': \{0,\dots,m-1\} \to \menge{Z}^3,\qquad t \mapsto (0,m-t,t).
\end{equation*}
If these paths lie in the boundary of $T$, the group action
from Subsection \ref{Subsect_Hexagonal}
allows us to find a
hexagonal chart $\nu: \Delta_m \to T$ with
\begin{align*}
\nu(0,m,0) = \mu(1,m-1,0) \qquad\text{and}\qquad \nu(1,m-1,0) = \mu(2,m-2,0).
\end{align*}
A translation allows us to rewrite the hexagonal chart as
\begin{align*}
\mu_{1,-1,0}: \Delta_m + (1,-1,0) \to T,\qquad x \mapsto \nu(x+(-1,1,0)).
\end{align*}
In particular, all $\mu(k,0,m-k)$ with $2 \leq k \leq m-1$ have degree
6 if this chart exists.
Since the group action of Subsection \ref{Subsect_Hexagonal}
acts transitively on the extensions of $\alpha$ and $\beta$,
respectively, there are exactly six triangular-shaped graphs $T$ that could be constructed
in such a manner. These correspond to the elements of $\vec{D}_0$.
To complete the construction of $\Delta_m \cong T \subseteq \neig{G}{S}$, we
consider the path $\gamma$ for $m=3$. It has to be extended in both directions.
Similarly to our construction of $\mu_{1,-1,0}$, we extend the chart $\mu$
to incorporate the vertices $(2,2,-1)$, $(2,-1,2)$, and $(-1,2,2)$.
\end{proof}
\noindent Next, we combine these different hexagonal charts. We show that they
define compatible maps.
\begin{lem}\label{Lem_ChartExtensionCompatible}
Let $G$ be a pika and
$\mu: \Delta_m \to S \subseteq G$ be a standard chart with $m \geq 3$.
Let $\mu_1: H_1 \to T_1$ and $\mu_2: H_2 \to T_2$ be two hexagonal charts
from Lemma \ref{Lem_NeighbourChart}. For any $x \in H_1 \cap H_2$, we have
$\mu_1(x) = \mu_2(x)$.
\end{lem}
\begin{proof}
Let $x \in H_1 \cap H_2$. If $x \in \Delta_m$, the claim follows directly
from Lemma \ref{Lem_NeighbourChart}. Otherwise, we have to consider the
extension construction along facet-paths.
If there is a facet-path $f_1f_2$ in $H_1\cap H_2$ with
$f_1 \subseteq \Delta_m$ and
$x \in f_2$, both $\mu_1$ and $\mu_2$ have to map $x$ to the same value.
Thus, only the
corner vertices of $\vec{d} + \Delta_m$ might be problematic.
Without loss of generality (Subsection \ref{Subsect_Hexagonal}),
let $H_1 = (1,-1,0) + \Delta_m$. The corner vertex $(m+1,-1,0)$ does not lie
in any $H_2$, so it can be ignored. The corner $x = (1,-1,m)$ also lies
in $H_2 = (0,-1,1) + \Delta_m$. We can define $x$ by a facet-path in $H_1$
with three facets. Since $m\geq 3$, this facet-path also lies in $H_2$.
Thus, the charts
$\mu_1$ and $\mu_2$ cannot conflict, as can be seen in Figure
\ref{Fig_ExtendingtheStandardChart}.
\begin{figure}[htbp]
\begin{center}
\begin{tikzpicture}[scale=0.3]
\HexagonalCoordinates{9}{8}
\draw[green!80!black] (A00) -- (A60) -- (A06) -- cycle;
\draw (A10) -- (A70) -- (A16) -- cycle;
\draw[purple] (A01) -- (A61) -- (A07) -- cycle;
\draw[blue] (A12) -- (A03) -- (A13) -- (A22) -- cycle;
\draw[blue] (A12) -- (A13);
\draw[blue] (A06) -- (A15) -- (A05) -- cycle;
\draw[blue] (A14) -- (A15) -- (A05) -- cycle;
\draw[blue] (A14) -- (A15) -- (A24) -- cycle;
\foreach \p in {3,6,0}{
\fill[blue] (A0\p) circle (5pt);
}
\end{tikzpicture}
\caption{The facet-paths that can be used for extending the standard chart to a neighbouring $\Delta_m$}\label{Fig_ExtendingtheStandardChart}
\end{center}
\end{figure}
\end{proof}
\noindent Finally, we put all of the pieces together.
\begin{lem}\label{Lem_HexagonalChartExtension}
Let $G$ be a pika and
$\mu: \Delta_m \to S \subseteq G$ be a standard chart
with $m \geq 3$.
There is a hexagonal chart $\hat{\mu}: E \to \hat{S}$, with $\Delta_m \subseteq E$
and $S \subseteq \hat{S} \subseteq G$, such that $\hat{\mu}_{|\Delta_m} = \mu$
and
such that any $T \cong \Delta_m$ with $T \subseteq \neig{G}{S}$
is either
\begin{itemize}
\item the image of $\hat{\mu} \circ \Delta_m^{\vec{t}}$ for some $\vec{t} \in \vec{D}_0$
or
\item the image $\hat{\mu}(\nabla_3)$ from Lemma \ref{Lem_NeighbourChart} if $m=3$.
\end{itemize}
\end{lem}
\begin{proof}
For any $\Delta_m \cong T \subseteq \neig{G}{S}$, we construct the hexagonal chart
$\mu_T : H_T \to T$ from Lemma \ref{Lem_NeighbourChart}. We define $E$ as
the union of $\Delta_m$ with all those $H_T$.
For $m > 3$, this graph is
\begin{equation*}
E \mathrel{\mathop:}= \Delta_m \cup \bigcup_{\vec{d}\in \vec{D}} (\vec{d}+\Delta_m),
\end{equation*}
in which $\vec{D}$ is the set of all applicable translation vectors.
For $m = 3$, the graph also contains $\nabla_3$.
By Lemma \ref{Lem_NeighbourChart} and Lemma \ref{Lem_ChartExtensionCompatible}, we can define $\hat{\mu}$ as follows:
\begin{equation*}
\hat{\mu}: E \to G \qquad x \mapsto
\begin{cases}
\mu(x), & x \in \Delta_m, \\
\mu_T(x), & x \in H_T \text{ for } \Delta_m \cong T \subseteq \neig{G}{S}.
\end{cases}
\end{equation*}
It remains to show that $\hat{\mu}$ is injective.
Since $\mu$ is
injective on $S$, we only need to consider pairs of vertices of which at least one lies in $\neig{G}{S}\ensuremath{\backslash} S$.
\begin{enumerate}
\item Let $x \in E\ensuremath{\backslash}\Delta_m$ and $y \in \Delta_m$
such that $\hat{\mu}(x) = \hat{\mu}(y) \in S$.
Then, there is a $\vec{d} \in \vec{D}_0$ such that
$x \in \vec{d} + \Delta_m$. By construction of
$\mu_T$, the point $x$ is mapped to a point of
$\neig{G}{S}$ that does not lie in $S$ by
Lemma \ref{lem_topology}(\ref{top_a}), in contradiction
to our assumption.
\item Let $x,y \in E\ensuremath{\backslash}\Delta_m$ such that
$\hat{\mu}(x) = \hat{\mu}(y)$.
Except for the translates of the triangle tips, every
vertex in $E\ensuremath{\backslash}\Delta_m$ is adjacent
to two different boundary vertices of $\Delta_m$. Thus,
we first consider the case where $x$ and $y$ are adjacent
to different boundary vertices $a$ and $b$, respectively. Then we have
a path of length 2 from $\hat{\mu}(a) \in \partial S$ over
$\hat{\mu}(x) \in \neig{G}{S}\ensuremath{\backslash} S$ to
$\hat{\mu}(b) \in \partial S$. Then, Lemma \ref{lem_topology}\eqref{top_b}
implies that $\{\hat{\mu}(a),\hat{\mu}(b),\hat{\mu}(x)\}$ is
a facet. By our proof of well-definedness, this is only possible
if $x=y$.
It remains to show the claim if $x$ and $y$ are both adjacent
to the same triangle tip, say $x = (0,-1,m+1)$ and
$y = (-1,0,m+1)$. In this case, $\hat{\mu}(x) = \hat{\mu}(y)$
would imply $\deg(\hat{\mu}(0,0,m)) = 5$, in contradiction
to $\delta=6$.
\qedhere
\end{enumerate}
\end{proof}
The condition $m\geq 3$ in Lemma \ref{Lem_HexagonalChartExtension} is
necessary. Extending a chart $\mu\colon \Delta_2\to S$ simultaneously
to all subgraphs $T\cong\Delta_{2}$ of $G$ is only possible if the
vertices $(1,1,0),(1,0,1)$, and $(0,1,1)$ are mapped to vertices of degree $6$.
\section{Full Clique Description}\label{Sect_CliqueCompleteness}
In this section, we show that the cliques from Subsection
\ref{Subsect_CliqueConstruction} are all cliques of the geometric
clique graph $G_n$.
This section culminates in a full description of all cliques of $G_n$ (Corollary
\ref{Cor_CliqueSummary})
and the correspondence to the vertices of $G_{n+1}$ (Theorem \ref{Theo_bijective}).
For $m \geq 3$, we employ the chart extensions from Section
\ref{Sect_ChartExtension}. The smaller cases have to be argued
differently.
\subsection{Exceptional (Small) Cases}
In this subsection, we discuss the cliques which only contain elements of
levels smaller than 3.
\begin{lem}\label{Lem_EvenCliquesOfSmallLevel}
Let $C$ be a clique of $G_n$,
in which every vertex is of level $0$ or $2$.
Then, $C$ is one of the
cliques described in Lemma \ref{Lem_CliqueConstructionTriangle}.
\end{lem}
\begin{proof}
We start with the case where all vertices of $C$ are of level $0$, i.e. they are isomorphic
to $\Delta_0$. In this case, they form a clique
of $G$, i.\,e. a triangle $S\cong \Delta_1$. So, $C$ is
constructed from $S$ by Lemma
\ref{Lem_CliqueConstructionTriangle}.
For the remainder, we assume that $C$ contains a vertex of level $2$, i.e.\ a subgraph
$S\cong \Delta_{2}$ of $G$. Thus, $C$ lies in the closed neighbourhood
$N_{G_n}[S]$.
We visualise
the neighbourhood in Figure \ref{Fig_mtwo}.
Remark \ref{Rem_TranslationNeighboursExist} shows that all the depicted $\Delta_2$-shaped subgraphs exist.
We label the subgraphs which are isomorphic to $\Delta_0$ with their preimage under a standard chart of $S$.
Since it is not necessarily possible to extend this chart to all the
$\Delta_2$-shaped subgraphs in the neighbourhood, we label those with
a new labelling scheme. We place every label inside the central facet of the subgraph.
Two different $\Delta_2$-shaped subgraphs
are adjacent if and only if their central facets have facet-distance at
most 2.
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=1.5]
\HexagonalCoordinates{5}{5}
\foreach \a\b in {20/30, 30/20, 02/03, 03/02, 32/23, 23/32}{
\coordinate (T\a\b) at (barycentric cs:A\a=2/3,A\b=1/3);
}
\fill[yellow] (A11) -- (A31) -- (A13) -- cycle;
\draw (A20) -- (A23) -- (A03) -- (A30) -- (A32) -- (A02) -- cycle;
\foreach \a/\b/\c/\d/\e in {21/03/23/red/0.3, 12/30/32/red/0.3, 22/20/02/red/0.3}{
\foreach \x in {0,0.05,0.1,0.15,0.2,0.25,0.3,0.35,0.4,0.45,0.5,0.55,0.6,0.65,0.7,0.75,0.8,0.85,0.9,0.95,1}{
\draw[\d,line width=\e mm](barycentric cs:A\a=\x,A\b=1-\x)--(barycentric cs:A\a=\x,A\c=1-\x);
}
}
\foreach \a/\b/\c/\d/\e in {12/13/22/blue/0.3, 22/12/13/blue/0.3, 21/11/12/blue/0.3,12/11/21/blue/0.3,21/31/22/blue/0.3,22/31/21/blue/0.3}{
\foreach \x in {0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1}{
\draw[\d,line width=\e mm](barycentric cs:A\a=\x,A\b=1-\x)--(barycentric cs:A\a=\x,A\c=1-\x);
}
}
\foreach \a/\b/\c/\d/\e/\f in {11/10/21/2030/blue/0.3,31/40/21/3020/blue/0.3, 11/01/12/0203/blue/0.3, 13/04/12/0302/blue/0.3, 13/14/22/2332/blue/0.3, 31/41/22/3223/blue/0.3}{
\foreach \x in {0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1}{
\draw[\e,line width=\f mm](barycentric cs:A\a=\x,A\b=1-\x)--(barycentric cs:A\c=\x,T\d=1-\x);
}
}
\foreach \a/\b/\c/\e in { 11/12/02/{1,-1,0}, 11/20/21/{1,0,-1}, 21/31/30/{0,1,-1}, 22/31/32/{-1,1,0}, 22/23/13/{-1,0,1}, 12/13/03/{0,-1,1}}{
\node[shape=circle, draw=blue,fill=white] at (barycentric cs:A\a=1,A\b=1,A\c=1) {\textcolor{blue}{$\wedge_2^{\e}$}};
}
\node[shape=circle, fill=yellow] at (barycentric cs:A12=1,A21=1,A22=1) {\textcolor{black}{$\phantom.\wedge_2^{0,0,0\phantom.}$}};
\foreach \a/\b/\c/\e in {11/21/12/{1,0,0}, 21/31/22/{0,1,0}, 12/22/13/{0,0,1}}{
\node[shape=circle, draw=red,fill=white] at (barycentric cs:A\a=1,A\b=1,A\c=1) {\textcolor{red}{$\phantom.\vee_{2}^{\e\phantom.}$}};
}
\foreach \p/\r/\e in {11/below left/{2,0,0},21/below/{1,1,0},31/below right/{0,2,0}, 22/above right/{0,1,1}, 13/above/{0,0,2}, 12/above left/{1,0,1}}{
\node[fill=white, \r] at (A\p) {\small\textcolor{green!50!black}{$(\e)$}};
\fill[green!50!black] (A\p) circle (2pt);
}
\end{tikzpicture}
\caption{Neighbourhood of an $S\cong\Delta_2$ in $G_n$; subgraphs isomorphic to $\Delta_2$ are labelled in their middle face, with the symbol $\wedge$ for upward-facing and $\vee$ for downward-facing subgraphs}\label{Fig_mtwo}
\end{figure}
\noindent We describe all the cliques of $N_{G_n}[S]$ which contain $S$ using the labels in Figure \ref{Fig_mtwo}.
\begin{enumerate}
\item If a corner-vertex of $S$, like $(2,0,0)$,
is contained in the clique,
the common neighbourhood of this vertex
and $S$ is a clique, which by Lemma
\ref{Lem_CliqueConstructionTriangle} is constructed from the
$\Delta_1$ in $S$ containing the corner-vertex.
\item Assume no corner-vertex of $S$ is contained in the clique.
For all three corner-vertices, there must be an element in
the clique which is not adjacent to it; otherwise, the clique would not be maximal. From the remaining
elements in $N_{G_n}(S)$, the three middle vertices are each
adjacent to exactly two corner-vertices; the other elements
are each adjacent to exactly one corner-vertex. Thus, to
exclude the corner-vertices, either all three middle vertices
are in $C$ or at least one other element is in $C$.
\begin{enumerate}
\item\label{i} In the first case, the clique is constructed
from the three middle-vertices using Lemma
\ref{Lem_CliqueConstructionTriangle}.
\item\label{ii} In the second case, there is an element not
adjacent to two of the corner vertices. Without loss of
generality, these corner-vertices are $(0,2,0)$ and
$(0,0,2)$ and the element not adjacent to them is
$\wedge_2^{1,0,-1}$ or $\vee_{2}^{1,0,0},$ which will
be called $S_1$. Additionally, in $C$ there needs to be
an element not adjacent to $(2,0,0)$ called $S_2$ which
is adjacent to $S_1$ since they both lie in $C$.
Thus, $S_2$ can be neither $\wedge_{2}^{-1,1,0}$ nor
$\wedge_{2}^{-1,0,1}$ nor $\wedge_{2}^{0,-1,1}$ since
those are not adjacent to each of the possible $S_1$. Picking
$S_2$ to be $\vee_{2}^{0,1,0}$ is only possible if
$S_1$ is $\vee_{2}^{1,0,0}$. In this case, $\cneig{G_n}{S,S_1,S_2}$ is a clique
containing the middle-vertices and we are in case
\ref{i}. The same happens if we choose $S_2$ to be
$\vee_{2}^{0,0,1}$.
If the degree of $(1,1,0)$ is at least $7$, there is no
other possibility for $S_2$, but if the degree of
$(1,1,0)$ is $6$, the vertices $\wedge_2^{1,0,-1}$ and
$\wedge_2^{0,1,-1}$ are adjacent, and if
$S_1=\wedge_2^{1,0,-1}$ we can choose $S_2$ to be
$\wedge_2^{0,1,-1}$.
In this case, $S$, $S_1$,
and $S_2$ are contained in a common $T\cong \Delta_3$,
from which $C$ is constructed by Lemma
\ref{Lem_CliqueConstructionTriangle}.\qedhere
\end{enumerate}
\end{enumerate}
\end{proof}
\begin{lem}\label{Lem_OddCliquesofSmallLevel}
Let $C$ be a clique of $G_n$,
in which every vertex is of level $1$.
Then, $C$ is one of the
cliques described in Lemma \ref{Lem_CliqueConstructionTriangle}
or in Lemma \ref{Lem_CliqueConstructionVertex}.
\end{lem}
\begin{proof}
If $C$ is not given as the common neighbourhood of
the set of facets incident to a given vertex,
as in Lemma \ref{Lem_CliqueConstructionVertex},
then the intersection of the elements of $C$ is empty and
$C$ has at least three elements. Furthermore, there are two
elements of $C$ that do not intersect in an edge: otherwise,
for any three-element subset of $C$
there would be a vertex $v$ in the intersection of the three
elements, and the neighbourhood of $v$ would contain a 3-cycle.
Thus, we choose two elements $S$ and $T$ from $C$ which intersect
in a vertex $v$ but not in an edge. Since the common intersection
of all elements of $C$ is empty, there must be an element $U\in C$
not containing $v$, but intersecting $S$ and $T$ in at least one
vertex each, which we will call $s$ and $t$. Those two vertices are
distinct since $S$, $T$, and $U$ do not have a common vertex,
and they are connected by an edge of $U$. As $s$ and $t$ also
lie in the neighbourhood of $v$, the edge $st$ also lies in this
neighbourhood. Since, by assumption, the third vertex of $U$ is
not $v$, it is the other common neighbour of $s$ and $t$. This
proves that $C$ is constructed, via
Lemma \ref{Lem_CliqueConstructionTriangle}, from the union of $S$, $T$, and $U$, which is $\Delta_2$-shaped, as depicted in Figure \ref{Fig_triangclique}.
\end{proof}
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=1]
\HexagonalCoordinates{5}{5}
\draw (A21) -- (A12) -- (A22) -- cycle;
\draw[very thick, blue] (A11) -- (A31) -- (A13) -- cycle;
\node[blue] at (barycentric cs:A22=1,A13=1,A23=1) {$R$};
\foreach \a/\b/\c/\e in {11/21/12/S, 21/31/22/T, 12/22/13/U}{
\node at (barycentric cs:A\a=1,A\b=1,A\c=1) {$\e$};
}
\foreach \p/\r/\e in {21/below/{v}, 22/above right/{t}, 12/above left/{s}}{
\fill[red] (A\p) circle (2pt);
\node[red, \r] at (A\p) {$\e$};
}
\end{tikzpicture}
\caption{The clique of $G_n$ containing $S,T,$ and $U$ is constructed
from their union $R\cong\Delta_2$ using Lemma
\ref{Lem_CliqueConstructionTriangle}. }\label{Fig_triangclique}
\end{figure}
\subsection{The Generic (Large) Case}
Up to now, we have only investigated cliques lying in the lower levels of $G_n$. The cliques left to discuss are those containing a $\Delta_m$ with $m\geq 3$.
In this generic case, we describe the neighbourhood $\neig{G_n}{S}$ of an $S\cong \Delta_m$ explicitly
by using triangle inclusion maps, and then classify the cliques
it contains.
We can describe the adjacency conditions of Definition
\ref{Def_theCliqueGraph} combinatorially with triangle inclusion maps.
In addition to the aforementioned set
\begin{align*}
&\vec{D}_{0\phantom{-}} \phantom{:}= \{ (1,-1,0), (1,0,-1), (-1,1,0), (0,1,-1), (-1,0,1), (0,-1,1) \}, \\
\intertext{we define the following sets of translation vectors:}
&\mathemph{\oldvec{D}_{-2}} \ensuremath{\mathrel{\mathop:}=} \{(2,0,0),(1,1,0),(0,2,0),(0,1,1),(0,0,2),(1,0,1)\}, \\
&\mathemph{\oldvec{D}_{-4}} \ensuremath{\mathrel{\mathop:}=} \{(2,1,1),(1,2,1),(1,1,2)\}, \text{ and}\\
&\mathemph{\oldvec{D}_{-6}} \ensuremath{\mathrel{\mathop:}=} \{(2,2,2) \}.
\end{align*}
\begin{lem}\label{Lem_TriangleInclusionAdjacency}
Let $\mu: H \to F\subseteq G$ be a hexagonal chart of the
pikaenspace $G$. Let $\vec{s},\vec{t}\in \menge{Z}^3$,
$k\in \{0,2,4,6\}$, and $m\geq k$
be such that the images of $\Delta_m^{\vec{s}}$ and
$\Delta_{m-k}^{\vec{t}}$ are subsets of $H$. Further, let $S \subseteq F$ be
the image of $\mu \circ \Delta_m^{\vec{s}}$ and $T \subseteq F$ the
image of $\mu \circ \Delta_{m-k}^{\vec{t}}$.
Then, $S$ and $T$ are adjacent
in the clique graph $G_n$ for all $n \geq m$ with
$n \equiv_2 m$ if and only if $\vec{t}-\vec{s} \in \vec{D}_{-k}$.
\end{lem}
\begin{proof}
Since $\mu$ is an isomorphism, $S$ and $T$ are adjacent in $G_n$ if
and only if the images of
$\Delta_m^{\vec{s}}$ and $\Delta_{m-k}^{\vec{t}}$ are connected
by an edge of the $n$-th iterated geometric clique graph
$(\Hex_{m+|\vec{s}|})_n$ of the hexagonal grid. Therefore, it is sufficient
to prove the claim
for $G=\Hex_m$. Since
\begin{equation*}
\Hex_{m+|\vec{s}|} \to \Hex_m, \qquad a \mapsto a-\vec{s}
\end{equation*}
is an isomorphism between hexagonal grids,
we can assume without loss of generality, that
$S = \Delta_m$ and $T$ is the image of $\Delta_{m-k}^{\vec{t}-\vec{s}}$ with corners
$(m-k,0,0)+\vec{t}-\vec{s}$, $(0,m-k,0)+\vec{t}-\vec{s}$, and $(0,0,m-k)+\vec{t}-\vec{s}$.
Now, we distinguish with respect to $k$:
\begin{enumerate}
\item $k=0$: $T$ is adjacent to $S$ if and only if the corners of $T$ lie
in the neighbourhood $\neig{G}{S}$. A vertex $(v_1,v_2,v_3)\in \Hex_m$
lies
in $\neig{G}{\Delta_m}$ if and only if $-1 \leq v_i \leq m+1$.
Since the components of $\vec{t}-\vec{s}$ sum to 0, this is
equivalent to $\vec{t}-\vec{s}\in \vec{D}_0$.
\item $k=2$: $T \subseteq S$ if and only if the corners of $T$ lie
in $S$. Equivalently,
all components of $\vec{t}-\vec{s}$ have to be non-negative. Since the
components sum to 2, this is equivalent to $\vec{t}-\vec{s} \in \vec{D}_{-2}$.
\item $k=4$: The corners of $T$ do not lie on the boundary if and
only if all components of $\vec{t}-\vec{s}$ are at least 1. Since
the components sum to 4, this is equivalent to $\vec{t}-\vec{s} \in \vec{D}_{-4}$.
\item $k=6$: The corners of $T$ have distance 2 from the boundary
of $S$ if and only if all components of $\vec{t}-\vec{s}$ are at least 2.
Since the components sum to 6,
this is equivalent to $\vec{t}-\vec{s} \in \vec{D}_{-6}$. \qedhere
\end{enumerate}
\end{proof}
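The four cases above characterise each set $\vec{D}_{-k}$ by a component sum and componentwise bounds. The following short Python sketch is a verification aid of ours (not part of the formal development) and checks that these characterisations reproduce exactly the sets listed before the lemma:

```python
from itertools import product

# The translation-vector sets as listed before the lemma.
D = {
    0: {(1, -1, 0), (1, 0, -1), (-1, 1, 0), (0, 1, -1), (-1, 0, 1), (0, -1, 1)},
    2: {(2, 0, 0), (1, 1, 0), (0, 2, 0), (0, 1, 1), (0, 0, 2), (1, 0, 1)},
    4: {(2, 1, 1), (1, 2, 1), (1, 1, 2)},
    6: {(2, 2, 2)},
}

def from_case_analysis(k):
    """Vectors in Z^3 derived from the case distinction in the proof."""
    if k == 0:    # corners stay in the neighbourhood: components in [-1, 1], not all zero
        ok = lambda v: all(-1 <= c <= 1 for c in v) and v != (0, 0, 0)
    elif k == 2:  # corners stay in S: all components non-negative
        ok = lambda v: all(c >= 0 for c in v)
    elif k == 4:  # corners avoid the boundary: all components at least 1
        ok = lambda v: all(c >= 1 for c in v)
    else:         # k == 6: distance 2 from the boundary: all components at least 2
        ok = lambda v: all(c >= 2 for c in v)
    # the components of t - s always sum to k, so a small search window suffices
    return {v for v in product(range(-1, 3), repeat=3) if sum(v) == k and ok(v)}

for k in (0, 2, 4, 6):
    assert from_case_analysis(k) == D[k]
print("all four translation-vector sets confirmed")
```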
\noindent From every clique we can choose an element $S$ of maximal level $m$.
Then, we describe the clique as a clique of the lower-level neighbourhood
$N_{G_n}[S]\cap V(G_m)$. To describe $N_{G_n}[S]$ combinatorially, we
introduce the \emph{local hexagonal graph}: Its vertices are
\begin{equation*}
\mathemph{V_{LHG}}\ensuremath{\mathrel{\mathop:}=}\left\{v_0^{0,0,0}\right\} \cup \left\{v_r^{\vec{d}} \mid r \in \{0,-2,-4,-6\}, \vec{d} \in \vec{D}_r \right\}
\end{equation*}
and its edges are given by
\begin{equation*}
\mathemph{E_{LHG}}\ensuremath{\mathrel{\mathop:}=}\left\{(v_{r}^{\vec{x}},v_{r-k}^{\vec{y}})\mid \vec{y}-\vec{x} \in \vec{D}_{-k}\text{ for a }k\in\{0,2,4,6\} \right\}.
\end{equation*}
For a set $Q$ of vertices of $G_n$ of a given level $m$, the
\emph{lower-level neighbourhood} of $Q$ is defined as the
set $\cneig{G_m}{Q} \subseteq G_n$, which consists of all the
common neighbours of the elements in $Q$ that have a level of at most $m$.
\begin{lem}\label{Lem_LowerLevelNeighbourhoodToLHG}
Let $S \cong \Delta_m$ be a vertex in $G_n$ with $m \geq 3$.
The lower-level neighbourhood of $S$ in $G_n$ is
isomorphic to an induced subgraph of
the local hexagonal
graph.
\text{e}nd{lem}
\begin{proof}
We give a graph monomorphism $\varphi\colon N_{G_n}[S]\cap G_m\to LHG$
that maps non-edges to non-edges.
We start with the generic case $m \geq 6$ and a standard chart
$\Delta_m \to S$. By Lemma \ref{Lem_HexagonalChartExtension},
we can extend it to a hexagonal chart
$\mu: E \to G$ such that all adjacent $T \cong \Delta_m$ are
contained.
We have the following adjacencies of smaller level:
\begin{enumerate}
\item The inclusions of $\Delta_{m-2}$ into $\Delta_m= S$ are all described
by triangle inclusion maps since $m > 4$
(Lemma \ref{Lem_TriangleInclusionTwo}).
\item The inclusions of $\Delta_{m-4}$ into $\Delta_{m-3}$ (compare
Remark \ref{Rem_TriangleInclusionBoundaryDistance}) are all
described by triangle inclusion maps since $m-3 > 2$
(Lemma \ref{Lem_TriangleInclusionOne}).
\item The inclusion of $\Delta_{m-6}$ into $\Delta_{m-6}$ (compare
Remark \ref{Rem_TriangleInclusionBoundaryDistance}) is unique
and also given by a triangle inclusion map.
\end{enumerate}
Thus, all adjacent
triangles of smaller level are given by triangle inclusion maps.
Therefore, by Lemma \ref{Lem_TriangleInclusionAdjacency},
$\varphi(\mu(\Delta_{m-k}^{\vec{e}}))=v_{-k}^{\vec{e}}$ defines a monomorphism
with the required property; it is not necessarily an isomorphism since not all
of the $\vec{D}_0$-translated neighbours of $S$ need to be present.
We continue with the case $m=5$, illustrated in Figure \ref{Fig_AdjacencyFive}.
Since $5 > 4$, all neighbours of level
$m-2$ are given by triangle inclusion maps (Lemma \ref{Lem_TriangleInclusionTwo}).
For level $m-4=1$, we need to consider inclusions of $\Delta_1$ into
$\Delta_{2}$ (Remark \ref{Rem_TriangleInclusionBoundaryDistance}). By
Lemma \ref{Lem_TriangleInclusionOne}, one exceptional case occurs: a
graph $T \cong \Delta_1$ with vertices $(2,1,2)$, $(2,2,1)$, and $(1,2,2)$.
However, there is no neighbour of level $m-6$ since $m-6=-1$. Thus,
we define $\varphi$ as in the generic case, but we map $T$ to $v_{-6}^{2,2,2}$.
Since Lemma \ref{Lem_TriangleInclusionAdjacency} shows the correct edge
correspondence for all neighbours given by triangle inclusion maps, it remains
to show that the edges of the local hexagonal graph correctly describe the
adjacencies of $v_{-6}^{2,2,2}$.
\begin{figure}[htbp]
\centering
\begin{tikzpicture} [scale=0.8, inner sep=0.1]
\HexagonalCoordinates{5}{5}
\foreach \a/\b/\c/\col/\l in {11/21/12/yellow/{$v_{-4}^{2,1,1}$}, 21/31/22/yellow/{$v_{-4}^{1,2,1}$}, 12/22/13/yellow/{$v_{-4}^{1,1,2}$}, 21/22/12/lime/{$v_{-6}^{2,2,2}$}}{
\fill[\col] (A\a) -- (A\b) -- (A\c) -- cycle;
\node at (barycentric cs:A\a=1,A\b=1,A\c=1) {\small\l};
}
\foreach \i in {0,...,4}{
\pgfmathparse{int(5-\i)}
\pgfmathsetmacro{\opp}{\pgfmathresult}
\draw (A0\i) -- (A\opp\i);
\draw (A\i0) -- (A\i\opp);
\draw (A0\opp) -- (A\opp0);
}
\foreach \a/\b/\c in {14/11/41, 01/31/04, 10/13/40}{
\draw[red, line width=1 mm] (A\a) -- (A\b) -- (A\c) -- cycle;
}
\foreach \a/\b/\c/\d/\e in { 13/10/40/red/0.55, 31/01/04/red/0.55, 11/41/14/red/0.55}{
\foreach \x in {0,0.05,0.1,0.15,0.2,0.25,0.3,0.35,0.4,0.45,0.5,0.55,0.6,0.65,0.7,0.75,0.8,0.85,0.9,0.95,1}{
\draw[\d,line width=\e mm](barycentric cs:A\a=\x,A\b=1-\x)--(barycentric cs:A\a=\x,A\c=1-\x);
}
}
\foreach \a/\b/\c in {02/32/05, 00/30/03, 20/23/50}{
\draw[blue, line width=1mm] (A\a) -- (A\b) -- (A\c) -- cycle;
}
\foreach \a/\b/\c/\d/\e in { 05/02/32/blue/0.55, 50/20/23/blue/0.55, 00/30/03/blue/0.55}{
\foreach \x in {0,0.05,0.1,0.15,0.2,0.25,0.3,0.35,0.4,0.45,0.5,0.55,0.6,0.65,0.7,0.75,0.8,0.85,0.9,0.95,1}{
\draw[\d,line width=\e mm](barycentric cs:A\a=\x,A\b=1-\x)--(barycentric cs:A\a=\x,A\c=1-\x);
}
}
\foreach \a/\b in {10/40, 01/04, 41/14}{
\draw[red, line width=0.3 mm] (A\a) -- (A\b);
}
\node[circle,draw=red, fill=white] at (A22) {\tiny \textcolor{red}{$v_{-2}^{0,1,1}$}};
\node[circle,draw=red, fill=white] at (A21) {\tiny \textcolor{red}{$v_{-2}^{1,1,0}$}};
\node[circle,draw=red, fill=white] at (A12) {\tiny \textcolor{red}{$v_{-2}^{1,0,1}$}};
\node[circle,draw=blue, fill=white] at (A13) {\tiny \textcolor{blue}{$v_{-2}^{0,0,2}$}};
\node[circle,draw=blue, fill=white] at (A31) {\tiny \textcolor{blue}{$v_{-2}^{0,2,0}$}};
\node[circle,draw=blue, fill=white] at (A11) {\tiny \textcolor{blue}{$v_{-2}^{2,0,0}$}};
\foreach \a/\b/\c/\col/\l in {11/21/12/yellow/{$v_{-4}^{2,1,1}$}, 21/31/22/yellow/{$v_{-4}^{1,2,1}$}, 12/22/13/yellow/{$v_{-4}^{1,1,2}$}, 21/22/12/lime/{$v_{-6}^{2,2,2}$}}{
\node[circle,fill=\col] at (barycentric cs:A\a=1,A\b=1,A\c=1) {\tiny\l};
}
\end{tikzpicture}
\caption{Adjacencies for $m=5$}
\label{Fig_AdjacencyFive}
\end{figure}
\begin{itemize}
\item By definition, the middle $\Delta_1$ is adjacent to $\Delta_5$ in $G_n$, just as $v_0^{0,0,0}$ and $v_{-6}^{2,2,2}$ are adjacent in $LHG$.
\item Furthermore, $T$ is adjacent to
the three triangles $\Delta_3^{0,1,1}, \Delta_3^{1,0,1}$, and
$\Delta_3^{1,1,0}$ (as an exceptional case in
Lemma
\ref{Lem_TriangleInclusionTwo}), exactly as in the
local hexagonal graph.
\item Since $T$ is adjacent to all three relevant
$\Delta_1$, the description of the
local hexagonal graph is correct again.
\end{itemize}
Next, we move on to $m=4$. Again, the only difference to the
generic case is the designated preimage of $v_{-6}^{2,2,2}$,
which is a $\nabla_2$ with corners
$(2,2,0)$, $(0,2,2)$, and $(2,0,2)$.
Using Figure \ref{Fig_AdjacencyFour}, we check the adjacencies to levels $0$ and $2$; both are
again as in the local hexagonal graph.
\begin{figure}[htbp]
\centering
\begin{tikzpicture} [scale=1.02,inner sep=1.0]
\HexagonalCoordinates{5}{5}
\foreach \a/\b/\c in {33/31/13}{
\fill[yellow] (A\a) -- (A\b) -- (A\c) -- cycle;
}
\foreach \a/\b/\c in {11/31/13}{
\foreach \x in {0.05,0.1,0.15,0.2,0.25,0.3,0.35,0.4,0.45,0.5,0.55,0.6,0.65,0.7,0.75,0.8,0.85,0.9,0.95,1}
\draw[red,line width=0.4mm](barycentric cs:A\a=\x,A\b=1-\x)--(barycentric cs:A\a=\x,A\c=1-\x);
}
\foreach \a/\b/\c in {33/31/51}{
\foreach \x in {0.05,0.1,0.15,0.2,0.25,0.3,0.35,0.4,0.45,0.5,0.55,0.6,0.65,0.7,0.75,0.8,0.85,0.9,0.95,1}
\draw[red,line width=0.4mm](barycentric cs:A\c=\x,A\a=1-\x)--(barycentric cs:A\c=\x,A\b=1-\x);
}
\foreach \a/\b/\c in {33/13/15}{
\foreach \x in {0.05,0.1,0.15,0.2,0.25,0.3,0.35,0.4,0.45,0.5,0.55,0.6,0.65,0.7,0.75,0.8,0.85,0.9,0.95,1}
\draw[red,line width=0.4mm](barycentric cs:A\c=\x,A\a=1-\x)--(barycentric cs:A\c=\x,A\b=1-\x);
}
\foreach \a/\b/\c in {22/42/24}{
\foreach \x in {0.05,0.1,0.15,0.2,0.25,0.3,0.35,0.4,0.45,0.5,0.55,0.6,0.65,0.7,0.75,0.8,0.85,0.9,0.95,1}
\draw[blue,line width=0.4mm](barycentric cs:A\a=\x,A\b=1-\x)--(barycentric cs:A\a=\x,A\c=1-\x);
}
\foreach \a/\b/\c in {12/32/14}{
\foreach \x in {0.05,0.1,0.15,0.2,0.25,0.3,0.35,0.4,0.45,0.5,0.55,0.6,0.65,0.7,0.75,0.8,0.85,0.9,0.95,1}
\draw[blue,line width=0.4mm](barycentric cs:A\b=\x,A\a=1-\x)--(barycentric cs:A\b=\x,A\c=1-\x);
}
\foreach \a/\b/\c in {21/23/41}{
\foreach \x in {0.05,0.1,0.15,0.2,0.25,0.3,0.35,0.4,0.45,0.5,0.55,0.6,0.65,0.7,0.75,0.8,0.85,0.9,0.95,1}
\draw[blue,line width=0.4mm](barycentric cs:A\b=\x,A\a=1-\x)--(barycentric cs:A\b=\x,A\c=1-\x);
}
\draw (A11) -- (A51) -- (A15) -- (A11)
(A12) -- (A42) -- (A41) -- (A14) -- (A24) -- (A21) -- (A12)
(A13) -- (A31) -- (A33) -- (A13);
\foreach \p in {22, 32, 23}{
\fill[green!80!black] (A\p) circle (3pt);
\foreach \a/\b/\c/\e in {22/12/21/{2,0,0},31/51/33/{0,2,0}, 13/33/15/{0,0,2}}{
\node[circle, draw=red,fill=white] at (barycentric cs:A\a=1,A\b=1,A\c=1) (l\a\b\c) {\small \textcolor{red}{$v_{-2}^{\e}$}};
}
}
\foreach \a/\b/\c/\e in {21/41/23/{1,1,0}, 12/32/14/{1,0,1}, 22/42/24/{0,1,1}}{
\node[circle, draw=blue,fill=white] at (barycentric cs:A\a=1,A\b=1,A\c=1) {\small\textcolor{blue}{$v_{-2}^{\e}$}};
}
\foreach \a/\b/\c/\e in {40/31/30/{2,1,1}, 43/34/33/{1,2,1}, 03/04/13/{1,1,2}}{
\node[circle, draw=black,fill=green!80!black] at (barycentric cs:A\a=1,A\b=1,A\c=1) (l\a) {\tiny$v_{-4}^{\e}$};
}
\draw (l40) edge[black,out=160,in=240,->] (A22);
\draw (l43) edge[black,out=270,in=0,->] (A32);
\draw (l03) edge[black,out=40,in=120,->] (A23);
\node[circle,draw=black,fill=yellow] at (U22) {\tiny\textcolor{black}{$v_{-6}^{2,2,2}$}};
\end{tikzpicture}
\caption{Adjacencies for $m=4$}
\label{Fig_AdjacencyFour}
\end{figure}
Finally, we deal with $m=3$, with several differences to the generic case:
\begin{enumerate}
\item There is another $\Delta_3$ adjacent to $S$ which is ``facing down''.
We denote it by $T$ and set
$\varphi(T)=v_{-6}^{2,2,2}$.
\item There are the three subgraphs isomorphic to $\Delta_1$ from Lemma
\ref{Lem_TriangleInclusionTwo} adjacent to $S$, called
$T_{1,0,0}, T_{0,1,0}$, and $T_{0,0,1}$, and we map
them by $\varphi(T_{1,0,0})=v_{-4}^{2,1,1}$,
$\varphi(T_{0,1,0})=v_{-4}^{1,2,1}$, and
$\varphi(T_{0,0,1})=v_{-4}^{1,1,2}$.
\end{enumerate}
Figure \ref{Fig_AdjacencyThree} shows that
the local hexagonal graph
describes the adjacencies correctly.\qedhere
\begin{figure}[htbp]
\centering
\begin{tikzpicture}
\HexagonalCoordinates{4}{4}
\foreach \a/\b/\c/\e in {11/21/12/{2,0,0}, 21/31/22/{1,1,0}, 31/41/32/{0,2,0}, 12/22/13/{1,0,1}, 22/32/23/{0,1,1}, 13/23/14/{0,0,2}}{
\fill[yellow] (A\a) -- (A\b) -- (A\c) -- cycle;
\node at (barycentric cs:A\a=1,A\b=1,A\c=1) {$v_1^{\e}$};
}
\foreach \a/\b/\c/\e in {21/12/22/{2,1,1}, 22/31/32/{1,2,1}, 13/23/22/{1,1,2}}{
\fill[lime] (A\a) -- (A\b) -- (A\c) -- cycle;
\node at (barycentric cs:A\a=1,A\b=1,A\c=1) {$v_{-1}^{\e}$};
}
\draw (A11) -- (A41) -- (A14) -- (A11)
(A30) -- (A33) -- (A03) -- (A30)
(A20) -- (A40) -- (A42) -- (A24) -- (A04) -- (A02) -- (A20)
(A20) -- (A24)
(A40) -- (A04)
(A02) -- (A42);
\draw[very thick, blue] (A11) -- (A41) -- (A14) -- cycle;
\node[blue] at (barycentric cs:A24=1,A33=1,A34=1) (l1) {$v_3^{0,0,0}$};
\node at (barycentric cs:A14=1,A23=1) (m1) {};
\draw (l1) edge[blue,out=120,in=60,->] (m1);
\draw[very thick, red] (A03) -- (A30) -- (A33) -- cycle;
\node[red] at (barycentric cs:A42=1,A33=1,A43=1) (l2) {$v_{-3}^{2,2,2}$};
\node at (barycentric cs:A33=1,A32=1) (m2) {};
\draw (l2) edge[red,out=240,in=300,->] (m2);
\foreach \p/\r/\l in {11/left/{$(3,0,0)$}, 41/right/{$(0,3,0)$}, 14/above/{$(0,0,3)$}}{
\node[\r] at (A\p) {\l};
}
\end{tikzpicture}
\caption{Adjacencies for $m=3$}
\label{Fig_AdjacencyThree}
\end{figure}
\end{proof}
\noindent We describe the cliques of the local hexagonal graph.
\begin{lem}\label{Lem_CliquesOfLocalHexagonalGraph}
Let $C$ be a clique in the local hexagonal graph with $v_0^{0,0,0} \in C$.
Then, one of the following three cases holds:
\begin{align*}
1.\quad C = \mathemph{C_{-6}} &\ensuremath{\mathrel{\mathop:}=} \{v_0^{0,0,0},v_{-2}^{1,1,0}, v_{-2}^{0,1,1},
v_{-2}^{1,0,1}, v_{-4}^{2,1,1}, v_{-4}^{1,2,1}, v_{-4}^{1,1,2},
v_{-6}^{2,2,2}\}\\
&\ =\cneig{LHG}{v_{-4}^{2,1,1}, v_{-4}^{1,2,1}, v_{-4}^{1,1,2}}=\neig{LHG}{v_{-6}^{2,2,2}},\\
2.\quad C = \mathemph{C_{-4}^{\oldvec{e}}} &\ensuremath{\mathrel{\mathop:}=}
\{ v_0^{-(1,0,0)+\vec{e}}, v_0^{-(0,1,0)+\vec{e}}, v_0^{-(0,0,1)+\vec{e}},\\
&\qquad v_{-2}^{(1,0,0)+\vec{e}}, v_{-2}^{(0,1,0)+\vec{e}}, v_{-2}^{(0,0,1)+\vec{e}}, v_{-4}^{(1,1,1)+\vec{e}}\} \\
&\ =\cneig{LHG}{v_{-2}^{(1,0,0)+\vec{e}}, v_{-2}^{(0,1,0)+\vec{e}}, v_{-2}^{(0,0,1)+\vec{e}}}\\
&\ =\neig{LHG}{v_{-2}^{2\vec{e}}} \text{ for some $\vec{e} \in \vec{E}$,}\\
3. \quad C = \mathemph{C_{-2}^{\oldvec{e}}} &\ensuremath{\mathrel{\mathop:}=}
\{ v_0^{(1,0,0)-\vec{e}}, v_0^{(0,1,0)-\vec{e}},
v_0^{(0,0,1)-\vec{e}}, v_{-2}^{(1,1,1)-\vec{e}} \}\\
&\ =\cneig{LHG}{v_0^{(1,0,0)-\vec{e}}, v_0^{(0,1,0)-\vec{e}}, v_0^{(0,0,1)-\vec{e}}} \text{ for some $\vec{e} \in \vec{E}$.}
\end{align*}
\end{lem}
\begin{proof}
By the definition of the local hexagonal graph, the given sets form
complete subgraphs. Furthermore, they are represented as common
neighbourhoods of triangles or as closed neighbourhoods of vertices in the claimed way. Thus, they are also maximal.
It remains to show that there cannot be any other cliques.
If $v_{-6}^{2,2,2} \in C$, we get $C=\neig{LHG}{v_{-6}^{2,2,2}}$ since this neighbourhood already forms a clique. Thus, the first case of the lemma holds.
Otherwise, $C$ contains an element not adjacent to $v_{-6}^{2,2,2}$. Those elements are either given by $v_{-2}^{2\vec{e}}$ for some $\vec{e}\in\vec{E}$ or by $v_{0}^{\vec{e}_2-\vec{e}_1}$ for $\vec{e}_1,\vec{e}_2\in\vec{E}$ with $\vec{e}_1\neq\vec{e}_2$.
If $v_{-2}^{2\vec{e}} \in C$, we get $C=\neig{LHG}{v_{-2}^{2\vec{e}}}$ since this neighbourhood already forms a clique. Thus, the second case of the lemma holds.
Finally, we assume $v_0^{\vec{e}_2-\vec{e}_1}\in C$, but $v_{-2}^{2\vec{e}_2}\notin C$
(the other two vertices $v_{-2}^{2\vec{e}}$ with $\vec{e}\in\vec{E}$ are not adjacent to $v_0^{\vec{e}_2-\vec{e}_1}\in C$, anyway).
For reasons of symmetry, we can choose $\vec{e}_1=(1,0,0)$ and $\vec{e}_2=(0,1,0)$. Thus, we have $v_0^{-1,1,0} \in C$, but $v_{-2}^{0,2,0} \not\in C$.
The set of neighbours of $v_0^{-1,1,0}$ is
\begin{equation*}
\left\{ v_0^{0,0,0}, v_0^{0,1,-1}, v_0^{-1,0,1}, v_{-2}^{1,1,0}, v_{-2}^{0,2,0}, v_{-2}^{0,1,1}, v_{-4}^{1,2,1} \right\}.
\end{equation*}
Only $v_0^{-1,0,1}$ is not adjacent to $v_{-2}^{0,2,0}$, so it
has to lie in $C$ since $C$ is maximal.
Then, $C=\cneig{LHG}{v_0^{-1,1,0},v_0^{-1,0,1},v_0^{0,0,0}}=C_{-2}^{\vec{e}_1}$, as
described in the third case of the lemma.
\end{proof}
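Because the local hexagonal graph has only $17$ vertices, the lemma can also be confirmed by exhaustive search. The following Python sketch is a verification aid of ours (not part of the formal proof): it rebuilds $LHG$ from the vertex and edge definitions above and enumerates all cliques through $v_0^{0,0,0}$.

```python
from itertools import combinations

# Translation-vector sets D_0, D_{-2}, D_{-4}, D_{-6} from the definition above.
D = {0: {(1, -1, 0), (1, 0, -1), (-1, 1, 0), (0, 1, -1), (-1, 0, 1), (0, -1, 1)},
     -2: {(2, 0, 0), (1, 1, 0), (0, 2, 0), (0, 1, 1), (0, 0, 2), (1, 0, 1)},
     -4: {(2, 1, 1), (1, 2, 1), (1, 1, 2)},
     -6: {(2, 2, 2)}}

# Vertices v_r^d of the local hexagonal graph, encoded as (r, d); 17 in total.
V = [(0, (0, 0, 0))] + [(r, d) for r in (0, -2, -4, -6) for d in D[r]]

def adjacent(u, w):
    """Edge rule: (v_r^x, v_{r-k}^y) is an edge iff y - x lies in D_{-k}."""
    (r, x), (s, y) = max(u, w), min(u, w)   # order by level, so r >= s
    k = r - s
    if k not in (0, 2, 4, 6):
        return False
    return tuple(b - a for a, b in zip(x, y)) in D[-k]

def is_complete(vs):
    return all(adjacent(a, b) for a, b in combinations(vs, 2))

# Brute force over all 2^17 vertex subsets is cheap enough.
complete = [frozenset(c) for n in range(1, len(V) + 1)
            for c in combinations(V, n) if is_complete(c)]
maximal = [c for c in complete if not any(c < d for d in complete)]
center = (0, (0, 0, 0))
through_center = [c for c in maximal if center in c]
print(sorted(len(c) for c in through_center))
```

The search should report one clique of size $8$ (namely $C_{-6}$), three of size $7$ (the $C_{-4}^{\vec{e}}$), and three of size $4$ (the $C_{-2}^{\vec{e}}$), matching the three cases of the lemma.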
\noindent The following lemma describes how we can find the cliques of an induced
subgraph using the cliques of the surrounding graph.
\begin{lem}\label{Lem_cliquesofinducedsubgraphs}
For a graph $G$ and an induced subgraph
$H$, every clique of $H$ is given as the intersection of a
(not necessarily unique) clique of $G$ with $H$.
\end{lem}
\begin{proof}
Let $C$ be a clique of $H$. Then, $C$ is a complete subgraph of $G$.
Therefore, there is at least one clique $C_G$ of $G$ containing $C$.
Obviously, $C\subseteq C_G\cap H$. If there were an
$x\in (C_G\cap H)\setminus C$, the union $C\cup \{x\}$ would be a complete subgraph
of $H$ since $H$ is an induced subgraph, in contradiction to the maximality of $C$.\qedhere
\end{proof}
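For intuition, the lemma can be checked mechanically on a small example. The sketch below is illustrative only (the five-vertex test graph is an arbitrary choice of ours, not an object from this paper); it brute-forces all induced subgraphs and confirms that every maximal clique of an induced subgraph arises as such an intersection.

```python
from itertools import combinations

# An arbitrary small test graph: a 5-cycle with one chord (0,2).
V = list(range(5))
E = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]}

def maximal_cliques(vertices, edges):
    """All maximal cliques, found by brute force over vertex subsets."""
    def is_complete(c):
        return all(frozenset((a, b)) in edges for a, b in combinations(c, 2))
    cs = [frozenset(c) for n in range(1, len(vertices) + 1)
          for c in combinations(vertices, n) if is_complete(c)]
    return [c for c in cs if not any(c < d for d in cs)]

G_cliques = maximal_cliques(V, E)

# For every induced subgraph H and every maximal clique of H, check the lemma.
for n in range(1, len(V) + 1):
    for H in map(set, combinations(V, n)):
        E_H = {e for e in E if e <= H}          # induced edge set
        for c in maximal_cliques(sorted(H), E_H):
            assert any(cg & H == c for cg in G_cliques)
print("lemma confirmed on the test graph")
```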
We apply this to the image of a lower-level neighbourhood under the
embedding given in Lemma \ref{Lem_LowerLevelNeighbourhoodToLHG}. This way, we
can classify all the cliques of $G_n$.
\begin{theo}\label{Thm_AllTheCliquesLarge}
If $C$ is a clique of $G_n$ containing a vertex $\Delta_m$ with
$m\geq 3$, then $C$ is given by the construction in
Lemma \ref{Lem_CliqueConstructionTriangle} or Lemma \ref{Lem_CliqueConstructionVertex}.
\end{theo}
\begin{proof}
Let $C$ be a clique of $G_n$ and let $S\cong\Delta_m$ be a vertex of $C$
of maximal level $m\geq 3$. Thus, $C$ is contained in the lower-level
neighbourhood of $S$.
This lower-level neighbourhood is isomorphic to an induced
subgraph $H$ of the local hexagonal graph containing $v_0^{0,0,0}$, and the
isomorphism $\mu$ maps $S$ to $v_0^{0,0,0}$.
Thus, by Lemma \ref{Lem_cliquesofinducedsubgraphs},
$C$ is isomorphic to the
intersection of $H$ with a clique $C_{LHG}$ of the local hexagonal graph.
Since $S \in C$, the clique $C_{LHG}$ contains $v_0^{0,0,0}$, so it is one
of the cliques given in Lemma \ref{Lem_CliquesOfLocalHexagonalGraph}.
For reasons of symmetry, we can restrict our investigation to the cliques
$C_{-6}, C_{-4}^{1,0,0},$ and $C_{-2}^{1,0,0}$.
\begin{enumerate}
\item If $C_{LHG}=C_{-6}$ and $m\geq 4$, the preimages of
$v_{-4}^{2,1,1}, v_{-4}^{1,2,1},$ and $v_{-4}^{1,1,2}$ are
subgraphs of $S$ isomorphic to $\Delta_{m-4}$. Thus, they do
exist and $C$ is given by the construction of Lemma
\ref{Lem_CliqueConstructionTriangle}.
If $m=3$, the preimages of $v_{-4}^{2,1,1}, v_{-4}^{1,2,1},$ and
$v_{-4}^{1,1,2}$ do exist, but they are not contained in a common
$\Delta_2$ and we cannot apply Lemma \ref{Lem_CliqueConstructionTriangle}.
Therefore, we look at the preimages of
$v_{-2}^{1,1,0}, v_{-2}^{0,1,1}, v_{-2}^{1,0,1}, v_{-4}^{2,1,1}, v_{-4}^{1,2,1},$
and $v_{-4}^{1,1,2}$ which do exist, since they are induced
subgraphs of $S$ isomorphic to $\Delta_1$. Furthermore, those preimages
are the subgraphs isomorphic to $\Delta_1$ containing the middle vertex of $S$.
Thus $C$ is constructed from this vertex by Lemma
\ref{Lem_CliqueConstructionVertex}.
\item If $C_{LHG}=C_{-4}^{1,0,0}$, the preimages of
$v_{-2}^{(1,0,0)+\vec{e}}, v_{-2}^{(0,1,0)+\vec{e}}$ and
$v_{-2}^{(0,0,1)+\vec{e}}$ are subgraphs of $S$ isomorphic
to $\Delta_{m-2}$. Thus, they exist and $C$ is given
by the construction of Lemma \ref{Lem_CliqueConstructionTriangle}.
\item If $C_{LHG}=C_{-2}^{1,0,0}$, either the preimages of
$v_0^{(1,0,0)-\vec{e}}, v_0^{(0,1,0)-\vec{e}}, v_0^{(0,0,1)-\vec{e}}$ exist
and $C$ is their common neighbourhood, or one of them does
not exist. In the second case, without loss of generality,
$\vec{e}=(1,0,0)$. Thus $v_0^{(1,0,0)-\vec{e}}=v_0^{0,0,0}$ and there is no
preimage of $v_0^{(0,0,1)-\vec{e}}=v_0^{1,0,-1}$. The remaining elements
of $C_{-2}^{\vec{e}}$ are at most $v_0^{0,0,0}, v_0^{-1,1,0},$ and
$v_{-1}^{0,1,1}$, which also lie in $C_{-4}^{0,1,0}$. Hence, we can
also see $C$ as the intersection of $LHG$ with $C_{-4}^{0,1,0}$ and, by
applying the second case, it is given by the construction of
Lemma \ref{Lem_CliqueConstructionTriangle}.\qedhere
\end{enumerate}
\end{proof}
\noindent We have now proven the surjectivity of the map from Remark \ref{lem_map} between the vertices of $G_{n+1}$ and the cliques of $G_n$. We continue by proving injectivity.
\begin{theo}\label{Theo_bijective}
The map
\begin{align*}
C\colon V(G_{n+1})&\to \{\text{cliques of }G_n\},\\
S&\mapsto\text{the clique from}
\begin{cases}
\text{Lemma \ref{Lem_CliqueConstructionVertex}}, &\text{if $S$ is of level $0$},\\
\text{Lemma \ref{Lem_CliqueConstructionTriangle}}, &\text{ otherwise,}
\end{cases}
\end{align*}
is bijective.
\end{theo}
\begin{proof}
The map $C$ is surjective by Lemma \ref{Lem_EvenCliquesOfSmallLevel}, Lemma \ref{Lem_OddCliquesofSmallLevel}, and Theorem \ref{Thm_AllTheCliquesLarge}.
For the injectivity we discuss three cases.
The cliques from Lemma \ref{Lem_CliqueConstructionVertex} contain at
least six $\Delta_1$-shaped subgraphs of $G$ and the cliques from
Lemma \ref{Lem_CliqueConstructionTriangle} contain at most four of them.
Thus, a clique constructed through Lemma
\ref{Lem_CliqueConstructionTriangle} cannot be constructed through
Lemma \ref{Lem_CliqueConstructionVertex}, and vice versa.
Furthermore, for an $S\cong\Delta_0$, the umbrella in $C(S)$ (recall
Subsection \ref{Subsect_CliqueConstruction}) has a
unique central vertex, which is the vertex of $S$, and the preimage
of $C(S)$ is unique. Finally, let $S\cong \Delta_m$ for an
$m\geq 1$. We look for a $T\cong\Delta_{m'}$ such that $C(S)=C(T)$.
To do this, we check the options for $T_1,T_2,T_3\cong\Delta_{m'-1}$
like in Lemma \ref{Lem_CliqueConstructionTriangle}. By Corollary
\ref{Cor_CliqueSummary}, if $m\geq 3$, the only other three triangles
in $C(S)$ of a common level are the elements of level $m+1$, if all
three of them exist. But their union is not isomorphic to a
$\Delta_{m+2}$. For $m=2$, there is an additional element of level
$1$, but it does not form a $\Delta_2$ with two of the other three.
For $m=1$, there is an additional element of level $2$, but it
does not form a $\Delta_3$ with two of the other three.
\end{proof}
\noindent The following corollary gives an explicit description of the cliques.
\begin{cor}\label{Cor_CliqueSummary}
\begin{itemize}
\item[1.] For $m\geq 1$ and $\Delta_{m}\iso{\mu}S\in V(G_{n+1})$,
an explicit description of
$C(S)$ is given through
\resizebox{.9\hsize}{!}{$
C(S)=
\underbrace{M_{m-1}}_{\lvert \cdot \rvert=3}\cup
\underset{\lvert\cdot\rvert=0,\text{ if }n=m,}{\underbrace{M_{m+1}}_{\lvert\cdot \rvert\leq 3\phantom{,\text{ if }n=m,}}}
\cup\underset{\lvert\cdot\rvert=0, \text{ if } n\leq m+2,}{\underbrace{M_{m+3}}_{\lvert\cdot\rvert\leq 1\phantom{, \text{ if } n\leq m+2,}}} \cup
\begin{cases}
\emptyset, &\text{ if }m=1\text{ and }n\leq 1,\\
\{\hat\mu(\Delta_4^{-(1,1,1)}(\nabla_{2}))\}, &\text{ if }m=1\text{ and }n\geq 2,\\
\{\mu(\nabla_{1})\}, &\text{ if }m=2,\\
\{ S\setminus \partial S\}, &\text{ if }m\geq 3.
\end{cases} $}
\begin{itemize}
\item $M_{m-1}$ consists of the elements
$\Delta_{m-1}\cong\mu^{\vec{e}}$ for $\vec{e}\in \vec{E}$.
\item $M_{m+1}$ consists of the elements $\Delta_{m+1}\iso{\nu}T$
fulfilling $\mu=\nu^{\vec{e}}$ for an $\vec{e}\in \vec{E}$.
\item $M_{m+3}$ consists of the element $\Delta_{m+3}\cong T$
enclosing $S$ with distance $1$, i.\,e. $S=T\setminus \partial T$.
\end{itemize}
\item[2.] For $\Delta_0\cong S\in V(G_{n+1})$, if we denote the vertex of $S$ by $v$, an explicit description of
$C(S)$ is given through
\resizebox{.9\hsize}{!}{$
C(S)=
\underbrace{\{T\in V(G_n)\mid T\cong \Delta_1,\ S\subseteq T\}}_{\lvert\cdot\rvert=\deg_G(v)}
\cup
\underset{\lvert \cdot \rvert=2,\text{ if } \deg_G(v)=6 \text{ and } n\geq 3,}{\underbrace{\{T\in V(G_n)\mid T\cong \Delta_3,\ S\subseteq T\setminus \partial T\}.}_{\lvert \cdot\rvert=0,\text{ if } \deg_G(v)\geq 7 \text{ or } n\leq 2,}\phantom{a}}$}
\end{itemize}
\end{cor}
\section{Clique intersections of $\mathemph{G_n}$}\label{Sect_CliqueIntersections}
After having constructed all cliques of the geometric clique graph
$G_n$ (and proven that these
cliques correspond to vertices of $G_{n+1}$), we need to show that
two cliques $C(S_1)$ and $C(S_2)$ intersect if and only if the
corresponding vertices $S_1$ and $S_2$
in $G_{n+1}$ are connected by an edge. From now on, we assume
that $S_1\cong \Delta_m$ and $S_2\cong \Delta_{m+k}$ for $k\in\{0,2,4,6\}$ and $m\geq 0$.
\subsection{Case: $\mathemph{S_2 \cong \Delta_m}$}
\begin{lem}\label{Lem_NoLargeOrSmallTriangles}
For $S_1,S_2\in V(G_{n+1})$ with $S_1 \cong S_2 \cong \Delta_m$ for some $m\geq 0$,
the cliques $C(S_1)$ and $C(S_2)$ do not intersect in
a vertex $T\cong\Delta_{m+3}$. Furthermore, if $m\geq 4$,
they do not intersect in an element $T\cong \Delta_{m-3}$.
\end{lem}
\begin{proof}
From Corollary \ref{Cor_CliqueSummary}, we see that if a clique
$C(S)$ contains an element $T\cong\Delta_{m+3}$, the clique is
uniquely defined by this element since $S=T\setminus \partial T$.
Furthermore, for $m\geq 4$ the clique is also uniquely defined by
an element $T\cong \Delta_{m-3}$ since then $T=S\setminus\partial S$, which has only one solution $S$.
Either way, the vertex $T$ cannot lie in two distinct cliques of $G_n$.
\end{proof}
\begin{lem}
For $S_1,S_2\in V(G_{n+1})$ with $S_1\cong S_2\cong \Delta_{0}$,
the cliques $C(S_1)$ and $C(S_2)$ intersect non-trivially if and
only if $S_1$ and $S_2$ are adjacent in $G_{n+1}$, i.\,e.\ they
are adjacent in $G$.
\end{lem}
\begin{proof}
At first, we suppose that $S_1$ and $S_2$ are adjacent in $G$. Since $G$ is locally cyclic, they have two common neighbours; in particular, there is a
$\Delta_1 \cong T \subseteq G$ with $S_1 \subseteq T$
and $S_2 \subseteq T$. Thus, $T$ lies in both $C(S_1)$ and $C(S_2)$.
Conversely, suppose there is a $T\in C(S_1)\cap C(S_2)$. By
Lemma \ref{Lem_NoLargeOrSmallTriangles}, $T\cong \Delta_{1}$.
Furthermore,
by Corollary \ref{Cor_CliqueSummary}, $S_1$ and $S_2$ are both
vertices of $T$ and thus adjacent.
\end{proof}
\begin{lem}\label{Lem Fromm+1tom-1}
For $S_1,S_2\in V(G_{n+1})$ with $S_1 \cong S_2 \cong \Delta_m$ for some $m \geq 1$,
if the cliques $C(S_1)$ and $C(S_2)$ intersect
in a $T_1\cong\Delta_{m+1}$, they also intersect in a
$T_2\cong\Delta_{m-1}$.
\end{lem}
\begin{proof}
If $T\in C(S_1)\cap C(S_2)$ for a $\Delta_{m+1}\iso{\nu} T$,
both $S_1\subseteq T$ and $S_2\subseteq T$. If $m+1\geq 3$, it follows that $S_1=\nu^{\vec{e}}$ and $S_2=\nu^{\vec{f}}$ for
$\vec{e},\vec{f}\in \vec{E}$ with $\vec{e}\neq\vec{f}$, implying
$S_1 \cap S_2 \cong \Delta_{m-1}$.
If $m+1=2$, either the situation is the same as in the foregoing
case or, without loss of generality, $S_1=\mu^{\vec{e}}$ for
an $\vec{e}\in \vec{E}$ and $S_2=\mu(\nabla_{1})$.
In this case too, $S_1$ and $S_2$ intersect in a vertex.
\end{proof}
\noindent Consequently, if $m \geq 1$, we only have to investigate whether two cliques intersect in a $\Delta_{m-1}$-shaped vertex of $G_n$.
\begin{lem}
For $S_1,S_2\in V(G_{n+1})$ with $S_1\cong S_2\cong \Delta_{m}$
for an $m\geq 1$, the cliques $C(S_1)$ and $C(S_2)$ intersect
non-trivially if and only if $S_1$ and $S_2$ are adjacent in
$G_{n+1}$, i.\,e. $S_1\subseteq \neig{G}{S_2}$.
\end{lem}
\begin{proof}
If there is a $T\in C(S_1)\cap C(S_2)$, by Lemma
\ref{Lem_NoLargeOrSmallTriangles} and Lemma \ref{Lem Fromm+1tom-1},
we can choose $T \cong \Delta_{m-1}$. Thus, we have
$S_1 \subseteq\neig{G}{T} \subseteq \neig{G}{S_2}$,
where $S_1 \subseteq \neig{G}{T}$ follows from Lemma \ref{Lem_TriangleInclusionOne}.
\noindent Conversely, suppose $S_1 \subseteq \neig{G}{S_2}$. We distinguish between the values of $m$.
\begin{itemize}
\item If $m=1$, $S_1$ is one of the additional faces
in $\neig{G}{S_2}$. Thus $S_1$ and $S_2$ intersect in at least one
vertex, which lies in both $C(S_1)$ and $C(S_2)$.
\item If $m=2$, there is a $\Delta_1\cong T\subseteq S_1 \cap S_2$.
Thus, $T\in C(S_1)\cap C(S_2)$.
\item If $m\geq 4$, let $\mu\colon \Delta_{m}\to S_1$ be a standard
chart. By Lemma \ref{Lem_HexagonalChartExtension}, there is an
extension $\hat{\mu}\colon E\to \hat{S}$ such that $S_2$ is the image
of $\hat{\mu}\circ \Delta_m^{\vec{t}}$ for a $\vec{t}\in \vec{D}_0$.
Therefore,
$S_1\cap S_2\cong \hat{\mu}^{-1}(S_1\cap S_2)=\Delta_m\cap \Delta_m^{\vec{t}}(\Delta_m)\cong \Delta_{m-1}$.
Thus, by definition of the cliques,
$S_1\cap S_2\in C(S_1)\cap C(S_2)$, so they intersect non-trivially.
\item If $m=3$, by Lemma \ref{Lem_HexagonalChartExtension}, we are either in the same situation as for $m\geq 4$, which proves the claim,
or $S_2$ lies twisted in the middle of $\neig{G}{S_1}$. In this case,
$C(S_1)$ and $C(S_2)$ share the vertex equivalent to the midpoint
of both $S_1$ and $S_2$. \qedhere
\end{itemize}
\end{proof}
\noindent Thus, for $S_1\cong S_2\in V(G_{n+1})$, intersection in $G_n$ and adjacency in $G_{n+1}$ are equivalent.
\subsection{Case: $\mathemph{S_2\cong\Delta_{m+k}}$ for $\mathemph{k\in \{2,4,6\}}$}
\begin{lem}
For $S_1,S_2\in V(G_{n+1})$ with $S_1\cong \Delta_{m}$ and
$S_2\cong \Delta_{m+2}$ for an $m\geq 0$, the cliques $C(S_1)$
and $C(S_2)$ intersect non-trivially if and only if $S_1$
and $S_2$ are adjacent in $G_{n+1}$, i.\,e. $S_1\subseteq S_2$.
\end{lem}
\begin{proof}
At first, we suppose that $S_1\subseteq S_2$.
If $m\neq 2$, by Lemma \ref{Lem_TriangleInclusionTwo} and by choosing a chart $\Delta_{m+2}\iso{\mu} S_2$, we get $S_1=\mu^{\vec{e}+\vec{f}}$ for some $\vec{e},\vec{f}\in \vec{E}$, and
$T\ensuremath{\mathrel{\mathop:}=}\mu^{\vec{e}}$ fulfils $S_1\subseteq T\subseteq S_2$. By
Corollary \ref{Cor_CliqueSummary}, $T$ lies in both cliques $C(S_1)$ and $C(S_2)$.
If $m=2$, the triangle $S_1$ either lies inside a
$T\subseteq S_2$ which is isomorphic to $\Delta_3$, or $S_1$
contains the unique $S\cong \Delta_1$ which has distance
$1$ to $\partial S_2$. In both cases, $S$ or $T$, respectively,
lies in both $C(S_1)$ and $C(S_2)$.
Conversely, we now suppose that $C(S_1)$ and $C(S_2)$ intersect
non-trivially.
We distinguish between the possibilities for the element in the
intersection. Any element $T\in C(S_1)\cap C(S_2)$ is isomorphic
to $\Delta_{m-1},\ \Delta_{m+1},$ or $\Delta_{m+3}$.
\begin{itemize}
\item If $T\cong \Delta_{m-1}$ (i.\,e. $m\geq 1$), $T$ has
distance $1$ to the boundary of $S_2$; thus
$T=S_2\setminus\partial S_2$. All graphs isomorphic
to $\Delta_{m}$ that contain $T$ are subgraphs of
$S_2$; thus $S_1\subseteq S_2$.
\item If $T\cong \Delta_{m+1}$, by Corollary
\ref{Cor_CliqueSummary}, this means
$S_1\subseteq T\subseteq S_2$, which proves the claim.
\item If $T\cong \Delta_{m+3}$, $S_1$ is the unique subgraph
of $T$ with distance $1$ from the boundary, i.\,e.
$S_1=T\setminus\partial T$. This subgraph is contained in every subgraph
of $T$ isomorphic to $\Delta_{m+2}$; thus $S_1\subseteq S_2$. \qedhere
\end{itemize}
\end{proof}
\begin{lem}
For $S_1,S_2\in V(G_{n+1})$ with $S_1\cong \Delta_{m}$ and
$S_2\cong \Delta_{m+4}$ for an $m\geq 0$, the cliques $C(S_1)$ and
$C(S_2)$ intersect non-trivially if and only if $S_1$ and $S_2$
are adjacent in $G_{n+1}$, i.\,e. $S_1\subseteq S_2\setminus\partial S_2$.
\end{lem}
\begin{proof}
If $S_1\subseteq S_2\setminus\partial S_2$, the element $S_2\setminus\partial S_2\cong \Delta_{m+1}$ lies in both cliques.
\noindent Conversely, if the cliques intersect, they intersect in a
$T$ fulfilling $S_1\subseteq T \subseteq S_2$, where either
$T \cong\Delta_{m+1}$ and the distance between $T$ and $\partial S_2$ is $1$,
or $T\cong\Delta_{m+3}$ and the distance between $S_1$ and $\partial T$ is $1$.
In both cases, $S_1$ avoids $\partial S_2$, i.\,e.\ $S_1\subseteq S_2\setminus\partial S_2$.
\end{proof}
\begin{lem}
For $S_1,S_2\in V(G_{n+1})$ with $S_1\cong \Delta_{m}$ and
$S_2\cong \Delta_{m+6}$ for an $m\geq 0$, the cliques $C(S_1)$ and
$C(S_2)$ intersect non-trivially if and only if $S_1$ and
$S_2$ are adjacent in $G_{n+1}$, i.\,e. $S_1\subseteq S_2\setminus\neig{G}{\partial S_2}$.
\end{lem}
\begin{proof}
If
$S_1\subseteq S_2\setminus\neig{G}{\partial S_2}$, the only
$T \cong \Delta_{m+3}$ such that
$S_1 \subseteq T\setminus\partial T$ and $T \subseteq S_2\setminus\partial S_2$ is $T=S_2\setminus \partial S_2$, by Corollary \ref{Cor_CliqueSummary}. This subgraph $T$ lies in both $C(S_1)$ and $C(S_2)$.
Conversely, if the cliques intersect in a $T\cong\Delta_{m+3}$, this subgraph $T$ has
distance $1$ to both the boundary of $S_1$ and the boundary of $S_2$; therefore, the boundaries
of $S_1$ and $S_2$ have distance $2$, i.\,e.\ $S_1\subseteq S_2\setminus\neig{G}{\partial S_2}$.
\end{proof}
\noindent The preceding lemmata can be summarised in the following way:
\begin{cor}\label{Cor_Edges}
For $S_1,S_2\in V(G_{n+1})$, the cliques $C(S_1)$ and $C(S_2)$
intersect non-trivially if and only if $S_1$ and $S_2$ are
adjacent in $G_{n+1}$. Furthermore, for every
$n\in \menge{Z}_{\geq0}$, $G_n\cong k^nG$.
\end{cor}
\noindent Now we can finally prove our main theorem for pikas.
\begin{theo}\label{Thm_ConvergentorDivergent}
Let $G$ be a triangularly simply connected locally cyclic graph with
minimum degree at least $6$.
If there is an $m\geq 0$ such that
$\Delta_m$ cannot be embedded into $G$, the clique operator is
convergent on $G$.
\end{theo}
\begin{proof}
If $\Delta_m$ cannot be embedded into $G$, then $m\geq 2$ and
$G_{m-2}=G_m$, since the graphs $G_m$ and $G_{m-2}$ can only differ in vertices isomorphic to $\Delta_m$, which would be subgraphs of $G$.
By Corollary \ref{Cor_Edges}, this means $k^{m}G\cong k^{m-2}G$,
which is the definition of the clique operator being convergent on $G$.
\end{proof}
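For readers who wish to experiment, the clique graph operator $k$ can be sketched in a few lines of Python. This is our own illustration and every name in it is ours; the graph $C_5$ is merely a tiny example on which $kG\cong G$, not one of the locally cyclic graphs studied here.

```python
from itertools import combinations

def maximal_cliques(vertices, edges):
    """Brute-force enumeration of maximal cliques (fine for tiny graphs)."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    cliques = []
    for r in range(1, len(vertices) + 1):
        for sub in combinations(vertices, r):
            if all(v in adj[u] for u, v in combinations(sub, 2)):
                cliques.append(frozenset(sub))
    # keep only the inclusion-maximal ones
    return [c for c in cliques if not any(c < d for d in cliques)]

def clique_graph(vertices, edges):
    """kG: vertices are the maximal cliques, edges join intersecting cliques."""
    cl = maximal_cliques(vertices, edges)
    k_edges = [(a, b) for a, b in combinations(cl, 2) if a & b]
    return cl, k_edges

# The 5-cycle C_5: its maximal cliques are its five edges, and kC_5 is
# again a 5-cycle.
kv, ke = clique_graph(range(5), [(i, (i + 1) % 5) for i in range(5)])
print(len(kv), len(ke))  # 5 5
```

Iterating `clique_graph` computes $k^nG$, so convergence on small examples can be observed directly.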
\section{Coverings}\label{Sect_UniversalCover}
Up to this point, we have only considered triangularly simply connected locally cyclic graphs.
Fortunately, every other locally cyclic graph is covered by a
triangularly simply connected one, to which we will apply Theorem \ref{Thm_ConvergentorDivergent}.
For the generalisation of the theory, we need results from
\cite{larrion2000locally} and \cite{rotman1973covering}, whose ways of notation
look incompatible at first glance. Instead of repeating and re-deriving
large parts of both, we show how to fit the definitions of
\cite{larrion2000locally} into the setting of \cite{rotman1973covering}.
While one of the sources talks about \textit{simple graphs} with \textit{edges} and
\textit{triangles} (\cite[Section 1, p. 160]{larrion2000locally}), the
other one describes \textit{complexes} with \textit{1-simplices} and
\textit{2-simplices} (\cite[Section 1, p. 642]{rotman1973covering}). We
can transition from graphs to complexes by constructing the
\textit{triangular complex} $\K{G}$, which is defined in \cite{larrion2000locally}, as we did before.
\begin{lem}
Let $G, G'$ be simple graphs. A vertex map $V(G) \to V(G')$
defines a \textit{homomorphism} $G \to G'$ in the sense of
\cite[Section 2, p. 161]{larrion2000locally} if and only if
it defines a \textit{map} $\K{G} \to \K{G'}$ in the sense
of \cite[Section 1, p. 642]{rotman1973covering}.
\end{lem}
\begin{proof}
Let $f: G \to G'$ be a homomorphism in the sense of
\cite{larrion2000locally} and $\{u,v,w\}$ a triangle in $G$,
i.\,e.\ a 2-simplex in $\K{G}$. By assumption,
$\{f(u),f(v)\}$, $\{f(u),f(w)\}$, and $\{f(v),f(w)\}$ are
edges in $G'$. Thus, $\{f(u),f(v),f(w)\}$ is a triangle
in $G'$, i.\,e.\ a 2-simplex in $\K{G'}$. The other implication is trivial.
\end{proof}
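The lemma can be checked mechanically on small examples. The following Python sketch is our own illustration (the octahedron and the colouring map are assumptions chosen only for the demonstration): it verifies that an adjacency-preserving vertex map sends every triangle, i.e. every 2-simplex of $\K{G}$, to a triangle.

```python
from itertools import combinations

def is_homomorphism(f, E_G, E_H):
    """Adjacency-preserving vertex map between simple graphs."""
    return all((min(f[u], f[v]), max(f[u], f[v])) in E_H for u, v in E_G)

def triangles(V, E):
    """All 2-simplices of the triangular complex K(G)."""
    return [t for t in combinations(sorted(V), 3)
            if all((a, b) in E for a, b in combinations(t, 2))]

# Octahedron (vertices i and i+3 are antipodal, all other pairs adjacent)
V_G = range(6)
E_G = {(i, j) for i in range(6) for j in range(i + 1, 6) if j - i != 3}
E_H = {(0, 1), (0, 2), (1, 2)}      # the triangle K_3
f = {v: v % 3 for v in V_G}         # a proper 3-colouring, hence a homomorphism

print(is_homomorphism(f, E_G, E_H))    # True
print(len(triangles(list(V_G), E_G)))  # 8
```

Each of the eight triangles of the octahedron meets all three colour classes, so its image under $f$ is the 2-simplex of $K_3$, as the lemma predicts.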
\noindent We will continue calling these maps graph homomorphisms.
We also take the next definition from \cite[Section 2, p. 162]{larrion2000locally}.
\begin{defi}\label{Def_CoveringMap}
Let $G, \tilde{G}$ be connected, simple graphs. A graph homomorphism
$p: \tilde{G} \to G$ is called a \emph{triangular covering map}
if it fulfils the triangle lifting property: for each triangle
$\{u,v,w\}$ in $G$ and each preimage $\tilde{u}$ of $u$, there exists a
(unique) triangle $\{\tilde{u},\tilde{v},\tilde{w}\}$ in
$\tilde{G}$ which is mapped to $\{u,v,w\}$ by $p$.
\end{defi}
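The triangle lifting property can be tested directly. As a toy illustration of our own (the graphs and maps below are not objects from the text): the vertex map $i\mapsto i\bmod 3$ from the 6-cycle onto $K_3$ is, to our understanding, a graph covering with unique edge lifting, yet no triangle of $K_3$ lifts, so it is not a triangular covering map; the identity on $K_3$ trivially is one.

```python
def neighbours(v, E):
    """Open neighbourhood of v in an edge set of sorted pairs."""
    return {b if a == v else a for a, b in E if v in (a, b)}

def has_triangle_lifting(p, V_cover, E_cover, base_triangles):
    """Does every base triangle lift through every preimage vertex?"""
    for tri in base_triangles:
        for u in tri:
            rest = set(tri) - {u}
            for u_lift in (x for x in V_cover if p[x] == u):
                nb = neighbours(u_lift, E_cover)
                if not any((v, w) in E_cover and {p[v], p[w]} == rest
                           for v in nb for w in nb if v < w):
                    return False
    return True

K3_V, K3_E = [0, 1, 2], {(0, 1), (0, 2), (1, 2)}
C6_V = list(range(6))
C6_E = {tuple(sorted((i, (i + 1) % 6))) for i in range(6)}

print(has_triangle_lifting({v: v for v in K3_V}, K3_V, K3_E, [(0, 1, 2)]))      # True
print(has_triangle_lifting({v: v % 3 for v in C6_V}, C6_V, C6_E, [(0, 1, 2)]))  # False
```

Note that the sketch only checks existence of a lift, not the uniqueness demanded in the definition above.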
\begin{lem}
Let $G, \tilde{G}$ be connected, simple graphs and
$p: \tilde{G} \to G$ be a homomorphism. Then, $p$ is a triangular covering
map if
and only if $(\K{\tilde{G}},p)$ is a
\textit{covering complex} (\cite[Section 2, p. 650]{rotman1973covering}).
\end{lem}
\begin{proof}
We only need to show that the lifting properties are equivalent.
\noindent For the first part, assume that $(\K{\tilde{G}},p)$ is
a covering complex. By Theorem 2.1 (\cite[p. 651]{rotman1973covering}),
$p$ has the unique path lifting property. It remains to show that
$p$ has the triangle lifting property. Let $\{u,v,w\}$ be a triangle
in $G$, i.\,e.\ a 2-simplex in $\K{G}$. Since
$p^{-1}(\{u,v,w\})$ is the union of 2-simplices, every $\tilde{u} \in p^{-1}(u)$
lies in some triangle of $\tilde{G}$.
For the second part, assume that $p$ is a triangular covering map. Consider
a 1-simplex $\{u,v\}$ in $\K{G}$ and let $\tilde{u} \in p^{-1}(u)$.
By the unique edge lifting property of $p$
(\cite[Section 2, p. 161]{larrion2000locally}), there is a unique
$\tilde{v} \in p^{-1}(v)$ adjacent to $\tilde{u}$. By the same argument,
$\tilde{u}$ is unique with respect to $\tilde{v}$. Thus, $p^{-1}(\{u,v\})$
splits into pairwise disjoint 1-simplices.
Next, consider a 2-simplex $\{u,v,w\}$ in $\K{G}$. By the triangle
lifting property (\cite[Section 2, p. 162]{larrion2000locally}),
$p^{-1}(\{u,v,w\})$ is the union of triangles in $\tilde{G}$. If two
different triangles intersected, the unique edge lifting property
would be violated.
\end{proof}
\noindent We take the following definition from \cite[Section 3, p.\ 663]{rotman1973covering}.
\begin{defi}
A \emph{universal covering complex} of $K$ is a covering complex $p\colon\tilde{K}\to K$ such that, for every covering complex $q\colon \tilde{J}\to K$, there exists a unique map $h\colon \tilde{K}\to\tilde{J}$ making the following diagram commute:
\begin{figure}[htbp]
\centering
\begin{tikzpicture} [scale=0.7]
\HexagonalCoordinates{5}{5}
\node (K') at (A01) {$\tilde{K}$};
\node (J') at (A22) {$\tilde{J}$};
\node (K) at (A30) {$K$};
\draw[dashed,->] (K')--(J') node[draw=none,fill=none,font=\scriptsize,midway,above] {$h$};
\draw[->] (K')--(K) node[draw=none,fill=none,font=\scriptsize,midway,below] {$p$};
\draw[->] (J')--(K) node[draw=none,fill=none,font=\scriptsize,midway,right] {$q$};
\end{tikzpicture}
\end{figure}
\end{defi}
\noindent Universal covering complexes are unique up to isomorphism,
and every connected complex has a universal covering
complex (\cite[Section 3, p.\ 663]{rotman1973covering}).
We apply this to our graph setting.
\begin{defi}
Let $G, \tilde{G}$ be connected, simple graphs and
$p: \tilde{G} \to G$ be a triangular covering map. Then, $\tilde{G}$ is the \emph{universal cover} of $G$ if $(\K{\tilde{G}},p)$ is the universal covering complex of $\K{G}$.
\end{defi}
\noindent We would like to apply Proposition 3.2 from \cite{larrion2000locally} to the universal cover:
\begin{prop}\label{Prop_Galois_in_Cliques}
Let $p: \tilde{G} \to G$ be Galois with group $\Gamma$. Then,
$p_k: k\tilde{G} \to kG$ is also Galois with group $\Gamma_k \cong \Gamma$.
\end{prop}
\noindent To apply this proposition to the universal cover of an arbitrary locally cyclic graph of minimum degree at least $6$, we need to show that the universal cover always
defines a \textit{Galois triangular map} (\cite[Section 3, p. 165]{larrion2000locally}).
\begin{lem}\label{Lem_universal_covering_is_Galois}
Let $G,\tilde{G}$ be connected (simple) graphs such that $\K{\tilde{G}}$
is the universal cover of $\K{G}$. Then, the associated covering map
$p\colon \K{\tilde{G}} \to \K{G}$ is Galois.
\end{lem}
\begin{proof}
Define $\Gamma \ensuremath{\mathrel{\mathop:}=} \{ \gamma \in \Aut(\tilde{G}) \mid p \circ \gamma = p \}$
(the deck transformations from \cite[Section 3, p. 665]{rotman1973covering}).
We need to show that $\Gamma$ acts transitively on each fibre of $p$. This is
proven in Corollary 3.11 (\cite[p. 667]{rotman1973covering}) if $p$ is
\textit{regular}. Since $p$ is a covering map from the universal cover,
the regularity follows from the remark at the top of p. 666 in \cite{rotman1973covering}.
\end{proof}
\noindent Now, we can conclude that clique convergence of the universal cover transfers
to the original graph.
\begin{lem}\label{Lem_CoverConvergent}
Let $G$ be a locally cyclic graph with $\delta(G)=6$, whose universal cover is $k$-convergent.
Then, $G$ is $k$-convergent as well.
\end{lem}
\begin{proof}
Let $p\colon \tilde{G} \to G$ be the triangular covering map. By Lemma \ref{Lem_universal_covering_is_Galois}, $p$ is a Galois covering map with group $\Gamma$, the group of deck transformations. We define $p_{k^n}\colon k^n\tilde G\to k^nG$ by $p_{k^0}=p$ and $p_{k^n}(Q)=\{p_{k^{n-1}}(v)\mid v\in Q\}$ for every $n\geq 1$. By Proposition \ref{Prop_Galois_in_Cliques}, the maps $p_{k^n}$ are Galois with groups $\Gamma_{k^n}\cong \Gamma$.
Since the universal cover is $k$-convergent, there are $n,l\in\menge{N}$ such that $k^n\tilde G\cong k^{n+l}\tilde G.$ Thus, $p_{k^n}$ and $p_{k^{n+l}}$ are Galois covering maps with a group isomorphic to $\Gamma$. Therefore,
\begin{equation*}
k^nG\cong k^n(\tilde{G})/\Gamma\cong k^{n+l}(\tilde{G})/\Gamma\cong k^{n+l}G,
\end{equation*}
and $G$ is $k$-convergent as well.
\end{proof}
\noindent Combining Lemma \ref{Lem_CoverConvergent} with Theorem \ref{Thm_ConvergentorDivergent}, we obtain a general criterion for convergence.
\begin{theo}
If $G$ is a locally cyclic graph with $\delta(G)=6$ and there is an $m\geq 0$ such that
$\Delta_m$ cannot be embedded into the universal cover of $G$, then $G$ is $k$-convergent.
\end{theo}
\noindent If the graph $G$ is finite, we also have a criterion for divergence.
\begin{lem}
Let $G$ be a finite locally cyclic graph with $\delta(G)=6$ whose universal cover is
$k$-divergent. Then, $G$ is $6$-regular.
\end{lem}
\begin{proof}
Let $\tilde{G}$ be the universal cover and $p: \tilde{G} \to G$
the universal covering map.
Since the universal cover is $k$-divergent, for every
$m \geq 1$ there is a $\Delta_m \subseteq \tilde{G}$,
which is mapped into $G$ via $p$.
Since $G$ is finite, the maximal length of a facet-path
between any two vertices can be bounded by a finite
number $d$.
Now, consider $\Delta_{3d+3} \subseteq \tilde{G}$. For any vertex
$x \in G$, there is a facet-path between $p(d+1,d+1,d+1)$
and $x$ with length at most $d$. Since $p$ is a covering map
(compare Definition \ref{Def_CoveringMap}), this facet-path
lifts to a facet-path in $\tilde{G}$. All vertices
in $\Delta_{3d+3}$ with distance at most $d$ from $(d+1,d+1,d+1)$
are inner vertices of $\Delta_{3d+3}$; thus, the vertex $x$ has the same
degree as such an inner vertex, namely, 6.
\end{proof}
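The degree argument in the preceding proof rests on the fact that inner vertices of $\Delta_m$ have degree $6$ while corner vertices have degree $2$. A small Python sketch makes this concrete; the barycentric encoding of $\Delta_m$ by lattice points $(a,b,c)$ with $a+b+c=m$ is our own choice for the illustration.

```python
from itertools import combinations

def delta_graph(m):
    """The triangle graph Delta_m on lattice points (a, b, c) with a+b+c = m."""
    verts = [(a, b, m - a - b) for a in range(m + 1) for b in range(m + 1 - a)]
    def adjacent(u, v):
        # neighbours differ by a permutation of (1, -1, 0)
        return sorted(x - y for x, y in zip(u, v)) == [-1, 0, 1]
    edges = [(u, v) for u, v in combinations(verts, 2) if adjacent(u, v)]
    return verts, edges

def degree(v, edges):
    return sum(v in e for e in edges)

verts, edges = delta_graph(4)
inner = [v for v in verts if min(v) >= 1]    # vertices away from the boundary
corners = [v for v in verts if max(v) == 4]
print(len(verts), [degree(v, edges) for v in inner])  # 15 [6, 6, 6]
```

For larger $m$ the same check confirms that every vertex at distance at least $1$ from the boundary has degree $6$, which is what the lifting argument above uses.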
\noindent Since, by \cite[Theorem 1.1]{larrion2000locally},
a locally cyclic graph which is $6$-regular is $k$-divergent, we can now state our main theorem.
\begin{cor}[Main result]
Let $G$ be a locally cyclic graph with minimum degree at least $6$.
\begin{enumerate}
\item For a finite graph $G$, the clique graph operator diverges on $G$ if and only if $G$ has only vertices of degree $6$.
\item For an infinite graph $G$, if there exists an $m\geq 0$ such that
$\Delta_m$ cannot be embedded into the universal cover of $G$, the clique operator is convergent on $G$.
\text{e}nd{enumerate}
\end{cor}
Whether an infinite locally cyclic graph $G$ can be $k$-convergent even though
its universal cover is $k$-divergent remains an open question.
\section{Further Research}\label{Sect_FurtherResearch}
\noindent In our research, we were able to decide which finite locally
cyclic graphs with minimum degree $\delta=6$ are $k$-convergent and
which are $k$-divergent. But we are not able to decide this for infinite graphs,
not even if they are triangularly simply connected. To prove in an
analogous way that every pika\footnote{i.\,e.\ a triangularly simply
connected locally cyclic graph with minimum degree $\delta=6$}
which
contains a subgraph isomorphic to $\Delta_m$, for every $m$, is $k$-divergent,
it would be necessary to show that $k^{n}G\subsetneq k^{n+l}G$ implies
$k^{n}G\not\cong k^{n+l}G$. Even if this were proven, our classification of
$k$-convergence would not be finished, since an
infinite graph with a $k$-divergent universal cover could itself be $k$-convergent.
Our work shows that explicit consideration of the clique
dynamics can be fruitful. It would be interesting to know whether
this approach gives feasible results for smaller minimum degrees.
\appendix
\section{Definitions}
\begin{defi}
For a graph $G=(V_G,E_G)$,
the \emph{closed neighbourhood} of $M\subseteq V_G$ in $G$ is given by the induced subgraph
\begin{equation*}
\mathemph{\neig{G}{M}}\ensuremath{\mathrel{\mathop:}=}
G[y\in V_G\mid y\in M\text{ or }\exists x\in M\colon xy\in E_G],
\end{equation*}
and the \emph{common neighbourhood} of $M$ is
\begin{equation*}
\mathemph{\cneig{G}{M}}\ensuremath{\mathrel{\mathop:}=} G[y\in V_G\mid y\in M\text{ or } \forall x\in M\colon xy\in E_G ].
\end{equation*}
For a subgraph $H$ of $G$, $\mathemph{\neig{G}{H}}\ensuremath{\mathrel{\mathop:}=} \neig{G}{V_H}$ and
$\mathemph{\cneig{G}{H}}\ensuremath{\mathrel{\mathop:}=} \cneig{G}{V_H}$. Furthermore, for a vertex $v\in V_G$,
the \emph{closed neighbourhood} is given by $\mathemph{\neig{G}{v}}
\ensuremath{\mathrel{\mathop:}=} \neig{G}{\{v\}}$ and the \emph{open neighbourhood}
is given by $N_G(v)=G[y\in V_G\mid vy\in E_G].$
For the two graphs $G = (V_G, E_G)$ and $H = (V_H, E_H)$,
the graph $\mathemph{G\setminus H}$ is defined as
$(V_G\ensuremath{\backslash} V_H, \{ xy \in E_G \ensuremath{\backslash} E_H \mid x \not\in V_H \wedge y \not\in V_H \})$.
\end{defi}
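On the level of vertex sets, the two neighbourhood notions above can be sketched as follows. This is a minimal Python illustration with a hypothetical wheel-shaped example graph of our own; it computes only the vertex sets of $\neig{G}{M}$ and $\cneig{G}{M}$, not the induced subgraphs.

```python
def closed_neighbourhood(V, E, M):
    """Vertices of M together with every vertex adjacent to some x in M."""
    adj = {v: set() for v in V}
    for u, w in E:
        adj[u].add(w)
        adj[w].add(u)
    return set(M) | {y for y in V if any(x in adj[y] for x in M)}

def common_neighbourhood(V, E, M):
    """Vertices of M together with every vertex adjacent to all x in M."""
    adj = {v: set() for v in V}
    for u, w in E:
        adj[u].add(w)
        adj[w].add(u)
    return set(M) | {y for y in V if all(x in adj[y] for x in M)}

# A 4-wheel: the 4-cycle 0-1-2-3 plus a hub 4 joined to every cycle vertex.
V = [0, 1, 2, 3, 4]
E = [(0, 1), (1, 2), (2, 3), (3, 0), (4, 0), (4, 1), (4, 2), (4, 3)]
print(sorted(closed_neighbourhood(V, E, [0])))     # [0, 1, 3, 4]
print(sorted(common_neighbourhood(V, E, [0, 2])))  # [0, 1, 2, 3, 4]
```

The second call shows the difference between the two notions: the non-adjacent vertices $0$ and $2$ have the common neighbours $1$, $3$, and $4$, whereas their closed neighbourhoods would simply be united.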
\begin{defi}
\noindent A \emph{graph homomorphism} $\varphi\colon G\to H$ is any
adjacency-preserving vertex map, i.\,e.
$uv\in E(G)\ \Rightarrow\ \varphi(u)\varphi(v)\in E(H)$, see
\cite[Section 2, p. 161]{larrion2000locally}. Injective homomorphisms
are called \emph{monomorphisms}.
An \emph{isomorphism} is a bijective homomorphism whose inverse
is also a homomorphism.
\end{defi}
\section{Proofs from Topology}\label{Sec_Proofsfromtopology}
\begin{proof}[Proof of Lemma \ref{lem_topology}]
In all cases, we start with a path $v_0v_1\dots v_k$ with $v_0,v_k\in \partial S$ and
$v_1,\dots,v_{k-1} \not\in S$ such that none of the edges $v_iv_{i+1}$, for $0 \leq i < k$,
lies in $S$. In the first case, we aim for a contradiction; in the second and third, we show the claims directly.
Since $G$ is simply connected, both the path and $S$ lie in a common
planar subgraph $U$ of $G$, such that every bounded face is a triangle.
Henceforth, we consider all paths as paths in the plane.
There are two paths along $\partial S$ that connect $v_k$ to $v_0$.
By \cite[Corollary 1.2]{Thomassen_KuratowskiTheorem}, one of those together with $v_0v_1\dots v_k$ bound a disc containing the other one. These two paths cannot be
the paths along $\partial S$, since $v_0\dots v_k$ does not lie in $S$.
The path which lies ``inside'' will be denoted by $v_k=s_0s_1\dots s_m=v_0$,
the other one by $v_kv_{k+1}\dots v_r=v_0$.
We define $\alpha_i$ as the inner
\emph{facet-degree}\footnote{which counts the
number of facets and not the number of edges} at $v_i$ of the path
$v_0\dots v_ks_1\dots s_m$ for every $0 \leq i \leq k$ (thus, $\alpha_0$
and $\alpha_k$ are the facet-degrees between the path and $\partial S$).
To prove the lemma, we focus on the path $v_0\dots v_kv_{k+1}\dots v_r$
in more detail. We denote the inner facet-degree at $v_j$ by $\beta_j$.
This situation is displayed in Figure \ref{fig_top}.
\begin{figure}[htbp]
\begin{center}
\begin{tikzpicture}
\HexagonalCoordinates{6}{2}
\draw (A21) -- (A11) -- (A12) -- (A22);
\draw (A41) -- (A51) -- (A42);
\draw[dotted] (A21) -- ($(A31)!0.5!(A21)$);
\draw[dotted] (A01) -- (A11);
\draw[dotted] ($(A31)!0.5!(A41)$) -- (A41);
\draw[dotted] (A22) -- ($(A22)!0.5!(A32)$);
\draw[dotted] ($(A32)!0.5!(A42)$) -- (A42);
\draw[dotted] (barycentric cs:A50=1,A51=1) -- (A51);
\foreach \v/\r/\n in {11/above left/$v_r = v_0 = s_m$,
12/above/$v_1$, 22/above/$v_2$, 42/above/$v_{k-1}$, 21/above/$s_{m-1}$,
41/above/$s_1$, 51/above right/$v_k=s_0$}{
\fill[black] (A\v) circle (2pt);
\node[\r] at (A\v) {\n};
}
\foreach \p/\r/\s/\e/\n in {
A11/0.8/-180/60/$\beta_0$, A11/0.7/0/60/$\alpha_0$,
A12/1.1/-120/0/$\alpha_1\!\!=\!\!\beta_1$,
A51/0.8/120/180/$\alpha_k$, A51/0.9/120/240/}{
\draw[blue] ($(\p)+(\s:\r)$) arc (\s:\e:\r);
\node[blue] at ($(\p)+(0.5*\s+0.5*\e:0.6*\r)$) {\n};
}
\node[blue] at ($(A51)+(210:0.6)$) {$\beta_k$};
\end{tikzpicture}
\text{e}nd{center}
\caption{Illustration of a path starting at a boundary vertex of $\Delta_m\cong S\subseteq G$ and ending at a corner vertex}
\label{fig_top}
\text{e}nd{figure}
Since this path bounds a disc in the plane, Lemma 4.1.5 from
\cite{Baumeister_PhD} is applicable and gives
\begin{equation*}
6 = \sum_{v \text{ inner vertex}}(6 - \deg(v)) + \sum_{j=0}^{r-1}(3-\beta_j).
\end{equation*}
Since $\deg(v)=6$ for all vertices in $G$, we obtain
\begin{equation*}
6 \leq \sum_{j=0}^{r-1}(3-\beta_j) = (3-\beta_0) + \sum_{i=1}^{k-1}(3-\alpha_i) + (3-\beta_k) + \sum_{j=k+1}^{r-1}(3-\beta_j).
\end{equation*}
If a vertex $v_j$ with $k < j < r$ lies in a corner of $S$,
we have $\beta_j = 1$; otherwise, $\beta_j=3$. Thus, it remains to
analyse $\beta_0$ and $\beta_k$.
If $v_0$ is a corner vertex, we have $\beta_0 = 1 + \alpha_0$;
otherwise, we have $\beta_0 = 3 + \alpha_0$.
Whichever case applies, we can rewrite the inequality into
\begin{equation*}
6 \leq \sum_{i=0}^k (3 - \alpha_i) - 6 + 2c,
\end{equation*}
where $c \in \{0,1,2,3\}$ is the number of corner vertices in
$\{v_k,v_{k+1},\dots,v_r\}$ (note the inclusion of $v_k$ and
$v_r$ in this set).
With this inequality,
we proceed through the three cases of the lemma. Recall that
$\alpha_0 \geq 1$ and $\alpha_k \geq 1$ have to hold (otherwise
the edge $v_0v_1$ or the edge $v_{k-1}v_k$ lies in $S$).
\begin{enumerate}
\item For the path $v_0v_1$ we obtain $6 \leq 2c - \alpha_0 - \alpha_1$,
which has no solutions. Thus, there can be no edge between these vertices
that does not already lie in $S$.
\item For the path $v_0v_1v_2$, we obtain $3 \leq 2c - \alpha_0 - \alpha_1 - \alpha_2$.
If $\alpha_1 = 0$, we obtain $v_0 = v_2$. Otherwise, the only possible solution
is $c=3$ and $\alpha_0=\alpha_1=\alpha_2=1$. This already implies that
$\{v_0,v_1,v_2\}$ is a facet of $G$.
\item For the path $v_0v_1v_2v_3$, we obtain $2c \geq \alpha_0+\alpha_1+\alpha_2+\alpha_3$.
Since the path is non-repeating, we have $\alpha_1 \geq 1$
and $\alpha_2 \geq 1$; thus $c \in \{2,3\}$.
Now, $\alpha_1 = 1$ implies the facet $\{v_0,v_1,v_2\}$. In particular,
$v_0v_2v_3$ is a path. With part \eqref{top_b} of this lemma,
we conclude that $\{v_0,v_2,v_3\}$ is a facet, in contradiction
to our assumption. The same argument applies if $\alpha_2=1$.
Thus, both of them have to be at least 2.
Then, the only solution is $c= 3$ with $\alpha_0 = \alpha_3 = 1$
and $\alpha_1 = \alpha_2 = 2$. Since $\alpha_0 = 1$, the triple $\{v_0,s_{m-1},v_1\}$ forms a facet.
Since $\alpha_1 = 2$, the triple
$\{v_1,s_{m-1},v_2\}$ also has to be a facet. For $v=s_{m-1}$, this was the claim
that needed to be shown. \qedhere
\end{enumerate}
\end{proof}
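The inequality manipulations in the three cases above are finite enough to check mechanically. The following Python sketch is our own verification aid, not part of the original argument: the constraints $\alpha_0,\alpha_k\geq 1$ (plus $\alpha_1\geq 1$ in case 2 and $\alpha_1,\alpha_2\geq 2$ in case 3) mirror the assumptions made in the proof, and the cap $\alpha_i\leq 6$ is safe since all vertex degrees equal six.

```python
# Brute-force check of the inequality 6 <= sum_{i=0}^{k} (3 - alpha_i) - 6 + 2c
# for c in {0,1,2,3}, under the side constraints used in the proof above.
from itertools import product

def solutions(k, extra=lambda a: True):
    sols = []
    for c in range(4):                       # at most 3 corner vertices
        for a in product(range(7), repeat=k + 1):
            if a[0] < 1 or a[k] < 1:         # alpha_0, alpha_k >= 1
                continue
            if not extra(a):
                continue
            if 6 <= sum(3 - ai for ai in a) - 6 + 2 * c:
                sols.append((c, a))
    return sols

case1 = solutions(1)                                    # path v0 v1
case2 = solutions(2, lambda a: a[1] >= 1)               # path v0 v1 v2, alpha_1 >= 1
case3 = solutions(3, lambda a: a[1] >= 2 and a[2] >= 2) # path v0 v1 v2 v3
```

The empty solution set in case 1 and the unique solutions $(c,\alpha)=(3,(1,1,1))$ and $(3,(1,2,2,1))$ in cases 2 and 3 match the conclusions drawn in the proof.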
\begin{proof}[Proof of Lemma \ref{Lem_NeighbourhoodIntersection}]
Assume to the contrary that there is an
$x \in (\neig{G}{S_1} \cap \neig{G}{S_2} \cap \neig{G}{S_3} ) \ensuremath{\backslash} S$.
Since $S_i \subseteq S$, we conclude $x \in \neig{G}{S} \ensuremath{\backslash} S$. Without
loss of generality, $x$ is adjacent to $(t,m-t,0)$ for some $0 \leq t < m$.
We permute the coordinates so that $t$ is maximal among all such adjacencies.
\begin{enumerate}
\item Case: $t > 0$:\\
Since $(t,m-t,0) \not\in \Delta_{m-1}^{001}$, the vertex $x$ has
to be adjacent to a boundary vertex of $\Delta_{m-1}^{001}$ as well,
say $(s,0,m-s)$ for some $0 \leq s < m$ (see Figure \ref{t_larger_zero}).
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=0.3,decoration={brace}]
\HexagonalCoordinates{4}{4}
\draw
(A11) -- (A41) -- (A14) -- (A11)
(D00) -- (A13)
(D00) -- (A21);
\draw [decorate,line width=1pt] (A13) -- (A14);
\draw [decorate,line width=1pt] (A21) -- (A41);
\node at (D03) {$s$};
\node[above] at (A31) {$t$};
\node[shape=circle, fill=black,scale=0.5] at (A13) {};
\node[shape=circle, fill=black,scale=0.5] at (A21) {};
\node[shape=circle, fill=black,scale=0.5] at (D00) {};
\node[below] at (D00) {$x$};
\end{tikzpicture}
\caption{Illustration of common neighbour $x$.}\label{t_larger_zero}
\end{figure}
By Lemma \ref{lem_topology}\eqref{top_b}, the vertices
$(t,m-t,0)$ and $(s,0,m-s)$ have to be adjacent. This is
only possible for $t = s = m-1$. But both facets incident
to the edge $\{(m-1,1,0), (m-1,0,1)\}$ already lie in $S$ (for $m > 1$),
contradicting $x \not\in S$.
\item Case: $t=0$:\\
Since $x$ is only adjacent to corner vertices (otherwise
we would be in the case $t>0$), it has to be adjacent to
$(0,m,0)$ and $(0,0,m)$ as well (to lie in each $\neig{G}{S_i}$).
By Lemma \ref{lem_topology}\eqref{top_b}, this implies the
adjacency of the corner vertices, i.e.\ $m=1$. But then, the
neighbourhood of $x$ contains a cycle of length $3$, in
contradiction to neighbourhoods being cycles of length at least $6$.\qedhere
\end{enumerate}
\end{proof}
\begin{proof}[Proof of Lemma \ref{Lem_NeighbourhoodCycle}]
If $m=0$, this is the definition of locally cyclic. For $m\geq 1$,
we enumerate the boundary vertices of $S$ in cyclic order by
$b_1b_2\dots b_{3m}$. For each adjacent pair $(b_i,b_{i+1})$,
there is exactly one face containing $\{b_i,b_{i+1}\}$ and not
lying in $S$. Call the remaining corner of this face $n_{i,i+1}$.
If $n_{i,i+1} \in S$, Lemma \ref{lem_topology}~(\ref{top_a})
would imply that the full face lay in $S$.
With this notation, $N_G(b_i)\ensuremath{\backslash} S$ is the path
$n_{i-1,i}x_1x_2\dots x_kn_{i,i+1}$ (there are no further edges
between these vertices since the neighbourhood of $b_i$ is a cycle).
None of the $x_i$ lies in $S$, since Lemma \ref{lem_topology}
(\ref{top_a}) would imply further boundary edges of $S$, in
contradiction to our assumption. This situation is illustrated
in Figure \ref{Fig_localNeighbourhood}.
\begin{figure}[htbp]
\centering
\begin{tikzpicture}[scale=1]
\HexagonalCoordinates{3}{2}
\def1.8{1.8}
\def40{40}
\coordinate (N1) at ($(A11)+(120-40:1.8)$);
\coordinate (Nk) at ($(A11)+(40:1.8)$);
\pgfsetfillpattern{north west lines}{blue}
\fill (A00) -- (A20) -- (A11) -- (A01) -- (A00);
\foreach \p/\r/\l in {A01/above left/$b_{i-1}$, A11/below left/$b_i$, A20/right/$b_{i+1}$,
A02/above/{$n_{i-1,i}$}, N1/above/$x_1$, Nk/right/$x_k$, A21/right/{$n_{i,i+1}$}}{
\node[\r,fill=white] at (\p) {\l};
\fill[black] (\p) circle (2pt);
}
\node[blue,fill=white] at (U00) {$S$};
\draw
(A00) -- (A01) -- (A11) -- (A20)
(A01) -- (A02) -- (A11) -- (A21) -- (A20)
(A02) -- (N1) -- (A11) -- (Nk) -- (A21);
\draw[dotted] (Nk) arc (40:120-40:1.8);
\end{tikzpicture}
\caption{Part of a local neighbourhood}\label{Fig_localNeighbourhood}
\end{figure}
By Lemma \ref{lem_topology}\eqref{top_c}, any edge between
vertices in $\neig{G}{S}\ensuremath{\backslash} S$ already lies in one $N_G(b_i)\ensuremath{\backslash} S$.
Combining these paths gives the desired cycle.
\end{proof}
\section{Proofs from Properties of Triangles}\label{Sect_propoftriang}
\begin{proof}[Proof of Lemma \ref{Lem_TriangleInclusionOne}]
Let $\Delta_{m-1}^{\vec{t}} \to \Hex_m$ be a triangle inclusion map
whose image lies in $\Delta_m$. Thus, each component
of $\vec{t}$ has to be non-negative, leaving
$\vec{t} \in V_1\cap\menge{Z}_{\geq 0}^3=\vec{E}$.
It remains to consider those $\Delta_{m-1} \cong S \subseteq \Delta_m$
that are not the image of a triangle inclusion map. The boundary
of such an $S$ consists of three straight paths of length $m-1$.
By Remark \ref{Rem_StraightPathsInTriangle}, there
are six such paths along the boundary of $\Delta_m$ and three such paths in
the interior (each given by all the vertices for which one fixed coordinate has value 1).
The boundary paths can only lie in one $\Delta_{m-1} \cong S \subseteq \Delta_m$.
Since the triangle inclusion maps ``use'' two boundary paths and one
interior path each, the only remaining possibility is combining
the three interior paths into a $\Delta_{m-1}$. But this is only possible
if the paths meet at the boundary (where one component is 0). Thus,
the component sum $m$ has to be 2.
\end{proof}
\begin{proof}[Proof of Lemma \ref{Lem_TriangleInclusionTwo}]
Let $\Delta_{m-2}^{\vec{f}}\colon \Delta_{m-2} \to \Hex_m$ be a
triangle inclusion map
whose image lies in $\Delta_m$. Then each component of $\vec{f}$ has
to be non-negative, so $\vec{f}\in V_2\cap\menge{Z}_{\geq 0}^3=\vec{E}+\vec{E}$.
For $m=2$, all $\Delta_0\cong S\subseteq \Delta_m$ are possible images.
It remains to consider those $\Delta_{m-2} \cong S \subseteq \Delta_m$
that are not the image of a triangle inclusion map. In this case
$m\geq 3$. The boundary of
such an $S$ consists of three straight paths of length $m-2$.
Remark \ref{Rem_StraightPathsInTriangle} describes all paths of this kind inside $\Delta_{m}$.
Up to the group action (Subsection \ref{Subsect_Hexagonal}),
we only need to look at four of these paths:
\begin{enumerate}
\item The path $\alpha_{|\{0,\dots,m-2\}}$
can only be the boundary of one $\Delta_{m-2}$ and it already
is the boundary of $\Delta_{m-2}^{(2,0,0)}$.
\item The path $\alpha_{|\{1,\dots,m-1\}}$
can only be the boundary of one $\Delta_{m-2}$ and it already
is the boundary of $\Delta_{m-2}^{(1,1,0)}$.
\item\label{case} The path $\beta_{|\{0,\dots,m-2\}}$ is
the boundary of $\Delta_{m-2}^{(1,0,1)}$ (whose third component
is higher). There can only be a $\Delta_{m-2}$ with lower third
component if $m-2 \leq 1$, implying $m=3$. The triangle has corner
vertices $(2,0,1)$, $(1,1,1)$, and $(2,1,0)$.
\item The path $\gamma$ is the
boundary of $\Delta_{m-2}^{(0,0,2)}$ (whose third component is
higher). There can only be a $\Delta_{m-2}$ with lower third
component if $m-2 \leq 2$. The case $m=3$ gives a triangle which
is mapped to the triangle of case (\ref{case}) by the group action.
The case $m=4$ gives vertices $(2,0,2)$, $(0,2,2)$, and $(2,2,0)$.
\end{enumerate}
Applying the group action to these $\Delta_{m-2}$ gives the desired results.
\end{proof}
\begin{thebibliography}{1}
\bibitem{Baumeister_PhD}
M.~Baumeister.
\newblock {\em {R}egularity aspects for combinatorial simplicial surfaces}.
\newblock Dissertation, RWTH Aachen University, Aachen, 2020.
\newblock Published on the publication server of RWTH Aachen University.
\bibitem{hedetniemi1972line}
S.~Hedetniemi and P.~Slater.
\newblock Line graphs of triangleless graphs and iterated clique graphs.
\newblock In {\em Graph Theory and Applications}, pages 139--147. Springer,
1972.
\bibitem{larrion1999clique}
F.~Larri{\'o}n and V.~Neumann-Lara.
\newblock Clique divergent graphs with unbounded sequence of diameters.
\newblock {\em Discrete Mathematics}, 197:491--501, 1999.
\bibitem{larrion2000locally}
F.~Larri{\'o}n and V.~Neumann-Lara.
\newblock Locally $C_6$ graphs are clique divergent.
\newblock {\em Discrete Mathematics}, 215(1-3):159--170, 2000.
\bibitem{larrion2003clique}
F.~Larri{\'o}n, V.~Neumann-Lara, and M.~Piza{\~n}a.
\newblock Clique convergent surface triangulations.
\newblock {\em Mat. Contemp.}, 25:135--143, 2003.
\bibitem{larrion2002whitney}
F.~Larri{\'o}n, V.~Neumann-Lara, and M.~A. Piza{\~n}a.
\newblock Whitney triangulations, local girth and iterated clique graphs.
\newblock {\em Discrete Mathematics}, 258(1-3):123--135, 2002.
\bibitem{larrion2006graph}
F.~Larri{\'o}n, V.~Neumann-Lara, and M.~A. Piza{\~n}a.
\newblock Graph relations, clique divergence and surface triangulations.
\newblock {\em Journal of Graph Theory}, 51(2):110--122, 2006.
\bibitem{rotman1973covering}
J.~Rotman.
\newblock Covering complexes with applications to algebra.
\newblock {\em The Rocky Mountain Journal of Mathematics}, 3(4):641--674, 1973.
\bibitem{Thomassen_KuratowskiTheorem}
C.~Thomassen.
\newblock Kuratowski's theorem.
\newblock {\em Journal of Graph Theory}, 5(3):225--241, 1981.
\end{thebibliography}
\end{document}
\begin{document}
\title[Time-dependent pointer states of the generalized spin-boson model]{Time-dependent pointer states of the generalized spin-boson model and consequences regarding the decoherence of the central system}
\author{Hoofar Daneshvar and G W F Drake}
\address{Department of Physics, University of Windsor, Windsor ON, N9B 3P4, Canada}
\ead{[email protected] and [email protected]}
\begin{abstract}
We consider a spin-boson Hamiltonian which is generalized such that the Hamiltonians for the system ($\hat{H}_{\cal S}$) and the interaction with the environment ($\hat{H}_{\rm int}$) do not commute with each other. Considering a single-mode quantized field in exact resonance with the tunneling matrix element of the system, we obtain the time-evolution operator for our model. Using our time-evolution operator we calculate the time-dependent pointer states of the system and the environment (which are characterized by their ability not to entangle with each other) for the case that the environment initially is prepared in the coherent state. We show that our solution for the pointer states of the system and the environment is valid over a length of time which is proportional to $\bar{n}$, the average number of bosons in the environment. We also obtain a closed form for the off-diagonal element of the reduced density matrix of the system and study the decoherence of the central system in our model. We show that for the case that the system initially is prepared in one of its pointer states, the off-diagonal element of the reduced density matrix of the system will be a \emph{sinusoidal function} with a slowly decaying envelope which is characterized by a decay time proportional to $\bar{n}$; while it will experience a much faster decoherence when the system initially is \emph{not} prepared in one of its initial pointer states.
\end{abstract}
\noindent{\it Keywords\/}: Spin-boson model, Jaynes-Cummings model, Decoherence, Pointer states of measurement, Foundation of quantum mechanics.
\pacs{03.65.Ta, 03.65.Yz}
\section{Introduction}
This is the second paper in the series of papers where we discuss pointer states of measurement and their evaluation. We refer to this paper as paper $\sf II$ in this series of papers.
\textcolor[rgb]{0.00,0.00,0.00}{\subsection{Foreword}}
\textcolor[rgb]{0.00,0.00,0.00}{The pointer states of a system are defined as those states of the system characterized by their property of not entangling with states of another system \cite{Schlosshauer1,Zurek1,Zurek2}. This condition is commonly referred to as the \emph{stability
criterion} for the selection of pointer states \cite{Schlosshauer1,Zurek1}.}
\textcolor[rgb]{0.00,0.00,0.00}{In paper $\sf I$ \cite{paper1} we discussed the pointer states of measurement and presented a general method for obtaining the pointer states of the system and the environment for a given total Hamiltonian defining the system-environment model. As we elaborately described in paper $\sf I$, generally we should distinguish between the pointer states of a system and the preferred basis of measurement. We explicitly proved that the pointer states of a system generally are time-dependent and a preferred basis of measurement does not exist, unless under some specific conditions (discussed there in paper $\sf I$) that the pointer states of measurement become time-independent. Moreover, time-dependent pointer states necessarily are not orthogonal amongst themselves at all times. Therefore, they cannot be considered as eigenstates of a Hermitian operator at all times.}
\textcolor[rgb]{0.00,0.00,0.00}{In paper $\sf I$ we also used our method in order to rederive the time-dependent pointer states of the system and the environment for the Jaynes-Cummings model (JCM) of quantum optics \cite{paper1}; verifying the previous results for the JCM \cite{Gea-Banacloche,Gea-Banacloche2,Knight}.}
In this paper we study a spin-boson model \footnote{For a thorough review and analysis of spin-boson models in different regimes the interested reader can refer to the seminal article by Leggett \emph{et al.} \cite{Leggett} or the book by Weiss \cite{Weiss}. Also a brief but very useful review of the model can be found in chapter 5 of Schlosshauer's book \cite{Schlosshauer1}.} which is defined through the following total Hamiltonian
\begin{equation}\label{1}
\hat{H}=-\frac{1}{2}\Delta_{0}\hat{\sigma}_{x}+\omega\hat{a}^{\dag}\hat{a}+\hat{\sigma}_{z}\otimes(\mathrm{g} \hat{a}^{\dag}+\mathrm{g}^{\ast}\hat{a}).
\end{equation}
This model basically is composed of a central spin-half particle (or other two-level system) surrounded by an environment of $N$ bosonic particles such as photons. In the Hamiltonian of the spin-boson model, represented by equation (\ref{1}), we have considered an intrinsic tunneling contribution in the self-Hamiltonian of the system (proportional to the $\hat{\sigma}_{x}$ Pauli matrix), which can induce intrinsic transitions between the upper and lower states of the central system. Here $\Delta_{0}$ is the so-called tunneling matrix element. Also, it is assumed that the asymmetry energy in the self-Hamiltonian of the central system is negligible. Therefore, here we do not consider a contribution proportional to the $\hat{\sigma}_{z}$ Pauli matrix in the self-Hamiltonian of the system \footnote{However, one can easily verify that modifications due to such contribution can be done quite easily, as such a term commutes with the interaction between the system and the environment, represented by the third term in equation (\ref{1}).}. The second term in equation (\ref{1}) represents the self-Hamiltonian of the electromagnetic field; where we have considered a single-mode quantized field with the frequency of $\omega$ for the environment. Also the third term, with $\mathrm{g}$ as the spin-field coupling constant,
represents the interaction between the central spin-half particle and a single-mode quantized field; which in fact is the quantized form of the famous
$-\mbox{\boldmath{$\mu$}}\cdot\textbf{B}$
Hamiltonian due to the interaction between a particle of magnetic dipole moment $\mbox{\boldmath{$\mu$}}$ and a magnetic field $\textbf{B}$.
\textcolor[rgb]{0.00,0.00,0.00}{In the Hamiltonian of equation (\ref{1}), if we switch the $\hat{\sigma}_{x}$ and $\hat{\sigma}_{z}$ operators and apply the rotating-wave approximation, we would obtain a Hamiltonian which would precisely look like the Hamiltonian of the Jaynes-Cummings model (JCM). However, the Hamiltonian that would be obtained in this way, will \emph{not} describe the Jaynes-Cummings model of quantum optics. The above point is because of the fact that when considering our model, represented by the Hamiltonian of equation (\ref{1}), a very special meaning is attributed to the eigenstates of the $\hat{\sigma}_{z}$ operator; they are the upper and lower states of the two-level system. In fact, this attribution of the upper and lower states of the two-level system to the eigenstates of the $\hat{\sigma}_{z}$ operator will be considered everywhere in our calculations; as is also considered in calculations for the JCM \cite{Scully,Drake}. Therefore, as we will see in the following section, when considering the rotating-wave-approximation with the exact resonance condition the Hamiltonian of equation (\ref{1}) will lead to a time-evolution operator that is very different from the one that we know for the JCM. As a result, as we will see, there will be clear differences between the physics which arises from our spin-boson model and the one that we know from the JCM.}
Leggett \emph{et al.} \cite{Leggett} considered a general form of the Hamiltonian given by equation (\ref{1}), where the environment can be represented by a spectral density \textbf{\emph{J}}$(\omega)$ (rather than considering a \emph{single-mode} quantized field). They used the ``influence-functional'' method of Feynman and Vernon \cite{Feynman} to obtain general expressions for $P(t)\equiv\langle\hat{\sigma}_{z}(t)\rangle$ in the form of a power series in $\Delta$. However, the general expressions they obtained for $P(t)$ in terms of the spectral density function \textbf{\emph{J}}$(\omega)$ were exceedingly cumbersome to calculate in most regimes. So, they assumed \textcolor[rgb]{0.00,0.00,0.00}{a nonresonance regime, which was quite different from the resonance regime that we will consider here in this article} \cite{Leggett}.
The model represented by the Hamiltonian of equation (\ref{1}) can also be studied in the framework of the Born-Markov approximation in order to obtain an approximate master equation for the evolution of the reduced density matrix of the system \cite{Weiss}. The master equations obtained in this way are valid only in certain regimes; and moreover, one often may need to resort to numerical computation in order to be able to solve them. However, the main purpose of this paper is (1) to obtain the time-dependent pointer states of the system and the environment, as well as expressions for the evolution of the reduced density matrix of the system in the exact resonance regime and for an environment initially prepared in the coherent state; and (2) to obtain approximate expressions \emph{in closed form} for the evolution of the off-diagonal elements of the reduced density matrix (for the case that the environment initially is prepared in a coherent state with a large average number of photons), which can be used \textcolor[rgb]{0.00,0.00,0.00}{in order to} study the decoherence of the central system \emph{in an analytical way}.
\textcolor[rgb]{0.00,0.00,0.00}{\subsection{Review of our method for calculation of pointer states}}
\textcolor[rgb]{0.00,0.00,0.00}{Our goal is to find the pointer states of the system, as well as their corresponding pointer states from the environment for an arbitrary total Hamiltonian defining the system-environment model. In order to find pointer states, we look for system states
$|s_{i}(t)\rangle$ such that the composite system-environment state, when starting from a product state
$|s_{i}(t_{0})\rangle|E_{0}\rangle$ at $t=t_{0}$, remains in the product form
$|s_{i}(t)\rangle|E_{i}(t)\rangle$ at all subsequent times $t>t_{0}$; i.e.\ pointer states must satisfy the aforementioned stability criterion.}
Now consider a two-state system $\cal S$ with two arbitrary basis states $|a\rangle$ and $|b\rangle$, initially prepared in the state
\begin{equation}\label{240}
|\psi^{\cal S}(t_{0})\rangle=\alpha|a\rangle+\beta|b\rangle \quad \mathrm{with} \quad |\alpha|^{2}+|\beta|^{2}=1;
\end{equation}
and an environment initially prepared in the state
\begin{equation}\label{250}
|\Phi^{\cal E}(t_{0})\rangle=\sum_{n=0}^{\infty}c_{n}|\varphi_{n}\rangle,
\end{equation}
where $\{|\varphi_{n}\rangle\}$'s are a complete set of basis states for the environment. \textcolor[rgb]{0.00,0.00,0.00}{For the two-state system with the two basis states $|a\rangle$ and $|b\rangle$ we can consider a time evolution operator for the global state of the system and the environment given by
\begin{equation}\label{270}
\hat{U}_{\rm tot}(t)=\hat{\cal E}_{1}|a\rangle\langle a|+\hat{\cal E}_{2}|a\rangle\langle b|
+\hat{\cal E}_{3}|b\rangle\langle a|+\hat{\cal E}_{4}|b\rangle\langle b|;
\end{equation}}
where the $\hat{\cal E}_{i}$'s are some generally \emph{time-dependent} operators in the Hilbert space of the environment, and depend on the total Hamiltonian defining the system-environment model.
Using equations (\ref{240}), (\ref{250}) and (\ref{270}) we can write the global state of the system and the environment as
\begin{eqnarray}\label{3.29}
\nonumber |\psi^{\rm tot}(t)\rangle=\hat{U}_{\rm tot}(t).\ (\alpha|a\rangle+\beta|b\rangle)\otimes(\sum_{n=0}^{\infty}c_{n}|\varphi_{n}\rangle)\\
=\textbf{A}(t)\ |a\rangle+\textbf{B}(t)\ |b\rangle; \\ \nonumber \mathrm{with} \quad
\textbf{A}(t)=\sum_{n=0}^{\infty}c_{n}\{\alpha\hat{\cal E}_{1}(t)+\beta\hat{\cal E}_{2}(t)\}\ |\varphi_{n}\rangle \\ \mathrm{and} \quad \nonumber
\textbf{B}(t)=\sum_{n=0}^{\infty}c_{n}\{\alpha\hat{\cal E}_{3}(t)+\beta\hat{\cal E}_{4}(t)\}\ |\varphi_{n}\rangle.
\end{eqnarray}
Now, for the global state of the system and the environment, given by equation (\ref{3.29}), we observe that \emph{if} for some initial states of the system and the environment the two vectors $\textbf{A}(t)$ and $\textbf{B}(t)$ of the Hilbert space of the environment turn out to be parallel with each other, i.e.\ if
\begin{equation}\label{3.30.1}
\textbf{A}(t)=G(t)\textbf{B}(t),
\end{equation}
with $G(t)$ as a scalar which generally may depend on time, then those initial states of the system and the environment will not entangle with each other, and hence they can represent the initial pointer states of the system and the environment. In fact, if for some initial states of the system and the environment the condition represented by equation (\ref{3.30.1}) is satisfied, the global state of the system and the environment (equation (\ref{3.29})) can be written in a product form as
\begin{equation}\label{3.31}
|\psi^{\rm tot}(t)\rangle=\{G(t)|a\rangle+|b\rangle\}\otimes(\sum_{n=0}^{\infty}c_{n}\{\alpha\hat{\cal E}_{3}+\beta\hat{\cal E}_{4}\}\ |\varphi_{n}\rangle),
\end{equation}
in which case pointer states can be realized for the system and the environment given by
\begin{eqnarray}\label{3.32}
|\pm(t)\rangle=\mathcal{N}\ \{G(t)|a\rangle+|b\rangle\} \qquad \mathrm{and}\\ \nonumber
|\Phi_{\pm}(t)\rangle=\mathcal{N}^{-1}\textbf{B}(t)=\mathcal{N}^{-1}\sum_{n=0}^{\infty}c_{n}\{\alpha\hat{\cal E}_{3}(t)+\beta\hat{\cal E}_{4}(t)\}|\varphi_{n}\rangle.
\end{eqnarray}
In the above equation we have represented the pointer states of the system by $|\pm(t)\rangle$ and those of the environment by $|\Phi_{\pm}(t)\rangle$. Also, $\cal N$ is the normalization factor for the pointer states of the system.
In general, an arbitrary choice of $c_{n}$'s will not necessarily yield an $\textbf{A}(t), \textbf{B}(t)$ pair that remain parallel for any choice of $\alpha$ and $\beta$; therefore an arbitrary choice of $c_{n}$'s does not necessarily correspond to a pointer state for the environment. Nonetheless, for a given set of $c_{n}$'s there may exist two sets of complex numbers for $\alpha$ and $\beta$ with two corresponding values for the scalar function $G(t)$ such that equation (\ref{3.30.1}) is satisfied. In the following example, we show that for an assumed set of $c_{n}$'s, values of $\alpha$ and $\beta$ exist such that equation (\ref{3.30.1}) is approximately satisfied in the limit of large number of degrees of freedom for the environment \footnote{In fact, by looking at equation (\ref{3.31}) we notice that in practice we can expect some states of the system and the environment to keep their individuality and not to entangle with each other \emph{even} if they can satisfy our condition (given by equation (\ref{3.30.1})) only in a fraction of the Hilbert space of the environment where the $c_{n}$ coefficients are not negligible. This of course will involve assuming some approximations in obtaining pointer states. However, as we will show in this paper, in the end we can define a measure for the degree of entanglement between the states of the system and the environment; after calculating this measure for the pointer states which we obtain for our model, we know exactly in which regimes our pointer states are valid and will not entangle with the states of another system. 
For example, this way we will show in this paper that the pointer states which we will obtain for our spin-boson model for an environment initially prepared in the coherent state are valid (i.e.\ will not entangle with the states of any other subsystem throughout their evolution with time) up to times of the order $\bar{n}/\mathrm{g}$; where $\bar{n}$ is the average number of photons in the coherent state of the environment.}.
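The parallelism condition of equation (\ref{3.30.1}) can be illustrated on a toy model. The Python sketch below is our own illustration, not the spin-boson model of equation (\ref{1}): it takes a pure-dephasing evolution with $\hat{\cal E}_1 = e^{-i\mathrm{g}\hat{N}t}$, $\hat{\cal E}_4 = e^{+i\mathrm{g}\hat{N}t}$ and $\hat{\cal E}_2 = \hat{\cal E}_3 = 0$, for which $\textbf{A}(t)$ and $\textbf{B}(t)$ can be computed directly. The Cauchy--Schwarz ratio $|\langle\textbf{A},\textbf{B}\rangle|/(\|\textbf{A}\|\,\|\textbf{B}\|)$ equals $1$ exactly when the two vectors are parallel, i.e.\ when the global state stays a product:

```python
# Toy pure-dephasing check of the stability criterion (our illustration,
# not the spin-boson model itself): E1 = exp(-i g n t), E4 = exp(+i g n t),
# E2 = E3 = 0 on a d-dimensional environment with uniform coefficients c_n.
import cmath, math

d, g = 8, 1.0
c = [1.0 / math.sqrt(d)] * d          # environment state, sum |c_n|^2 = 1

def parallelism(alpha, beta, t):
    """|<A,B>| / (||A|| ||B||); equals 1 iff A(t) and B(t) are parallel."""
    A = [alpha * cmath.exp(-1j * g * n * t) * c[n] for n in range(d)]
    B = [beta * cmath.exp(+1j * g * n * t) * c[n] for n in range(d)]
    inner = sum(a.conjugate() * b for a, b in zip(A, B))
    nA = math.sqrt(sum(abs(a) ** 2 for a in A))
    nB = math.sqrt(sum(abs(b) ** 2 for b in B))
    return abs(inner) / (nA * nB)

alpha = beta = 1 / math.sqrt(2)
p0 = parallelism(alpha, beta, 0.0)    # product state at t = 0
p1 = parallelism(alpha, beta, 1.0)    # entangled at later times
```

For $\alpha=0$ or $\beta=0$ the state is trivially a product, so in this commuting toy model the pointer states are the basis states $|a\rangle$, $|b\rangle$ themselves, while an equal superposition entangles with the environment ($p_1<1$).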
\section{Calculation of the time-evolution operator}
In order to calculate the time-evolution operator in the interaction picture for the Hamiltonian given by equation (\ref{1}), first we need to have the corresponding Hamiltonian in the interaction picture. It can \textcolor[rgb]{0.00,0.00,0.00}{easily be calculated as}
\begin{equation}\label{6}
\hat{H}_{\mathrm{int}}(t)=\{\hat{\sigma}_{z}\cos(\Delta_{0}t)-\hat{\sigma}_{y}\sin(\Delta_{0}t)\}\{\mathrm{g}\hat{a}^{\dag}e^{i\omega t}+\mathrm{g}^{\ast}\hat{a}e^{-i\omega t}\}.
\end{equation}
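The spin part of this transformation can be sanity-checked numerically. The sketch below is our own check, using the closed form $e^{-i(\theta/2)\hat{\sigma}_x}=\cos(\theta/2)\hat{I}-i\sin(\theta/2)\hat{\sigma}_x$ to verify $e^{i\hat{H}_{\cal S}t}\,\hat{\sigma}_z\,e^{-i\hat{H}_{\cal S}t}=\hat{\sigma}_z\cos(\Delta_0 t)-\hat{\sigma}_y\sin(\Delta_0 t)$ for $\hat{H}_{\cal S}=-\frac{1}{2}\Delta_0\hat{\sigma}_x$:

```python
# Check: exp(+i H_S t) sigma_z exp(-i H_S t) = sigma_z cos(D0 t) - sigma_y sin(D0 t)
# for H_S = -(D0/2) sigma_x, with exp(-i(theta/2) sx) = cos(theta/2) I - i sin(theta/2) sx.
import math

sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]
I2 = [[1, 0], [0, 1]]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def lin(a, X, b, Y):          # a*X + b*Y for 2x2 matrices
    return [[a * X[i][j] + b * Y[i][j] for j in range(2)] for i in range(2)]

D0, t = 1.3, 0.7
theta = D0 * t
U = lin(math.cos(theta / 2), I2, -1j * math.sin(theta / 2), sx)   # exp(+i H_S t)
Ud = lin(math.cos(theta / 2), I2, +1j * math.sin(theta / 2), sx)  # its adjoint
lhs = mul(mul(U, sz), Ud)
rhs = lin(math.cos(theta), sz, -math.sin(theta), sy)
err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
```

The bosonic factor $\mathrm{g}\hat{a}^{\dag}e^{i\omega t}+\mathrm{g}^{\ast}\hat{a}e^{-i\omega t}$ follows in the same way from $e^{i\omega t\hat{N}}\hat{a}^{\dag}e^{-i\omega t\hat{N}}=\hat{a}^{\dag}e^{i\omega t}$.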
Here, the commutator of $\hat{H}_{\mathrm{int}}(t)$ and $\hat{H}_{\mathrm{int}}(t'\neq t)$, i.e.\ $[\hat{H}_{\mathrm{int}}(t),\hat{H}_{\mathrm{int}}(t'\neq t)]$ with $\hat{H}_{\mathrm{int}}(t)$ given by equation (\ref{6}), is not a c-number (i.e.\ not proportional to the identity operator). This in fact can make the evaluation of the time-evolution operator quite difficult \cite{Duan}.
In parallel with paper $\sf I$ we consider the general form given by equation (\ref{270}) for the evolution operator of the global spin-field system.
For such a time-evolution operator in the interaction picture, which satisfies the Schr\"{o}dinger equation
$i\hbar\frac{\partial}{\partial t}\hat{U}(t)=\hat{H}_{\rm int}\hat{U}(t),$
we have
\begin{eqnarray}\label{9}
i\hbar \left(
\begin{array}{cc}
\dot{\hat{\cal E}_{1}} & \dot{\hat{\cal E}_{2}} \\
\dot{\hat{\cal E}_{3}} & \dot{\hat{\cal E}_{4}} \\
\end{array}
\right)=\hat{H}_{\mathrm{int}}(t)\left(
\begin{array}{cc}
\hat{\cal E}_{1} & \hat{\cal E}_{2} \\
\hat{\cal E}_{3} & \hat{\cal E}_{4} \\
\end{array}
\right)=\{\mathrm{g}\hat{a}^{\dag}e^{i\omega t}+\mathrm{g}^{\ast}\hat{a}e^{-i\omega t}\}\\
\nonumber \times\ \left(
\begin{array}{cc}
\hat{\cal E}_{1}\cos(\Delta_{0}t)+i\hat{\cal E}_{3}\sin(\Delta_{0}t) & \hat{\cal E}_{2}\cos(\Delta_{0}t)+i\hat{\cal E}_{4}\sin(\Delta_{0}t) \\
-\hat{\cal E}_{3}\cos(\Delta_{0}t)-i\hat{\cal E}_{1}\sin(\Delta_{0}t) & -\hat{\cal E}_{4}\cos(\Delta_{0}t)-i\hat{\cal E}_{2}\sin(\Delta_{0}t) \\
\end{array}
\right).
\end{eqnarray}
Now, we assume the tunneling matrix element $\Delta_{0}$ to be in resonance with the cavity eigenmode $\omega$ and we use the rotating-wave approximation (RWA) \cite{Scully,Eberly} (just as is assumed in the conventional Jaynes-Cummings model of quantum optics \cite{Scully}). So, by imposing the resonance condition $\Delta_{0}=\omega$ and using the rotating wave approximation (i.e.\ disregarding the higher-frequency terms which contain $e^{\pm i(\omega+\Delta_{0})t}$) the above equation will simplify to the following set of four equations
\begin{eqnarray}\label{10}
\nonumber i\dot{\hat{\cal E}_{1}}=\frac{\mathrm{g}\hat{a}^{\dag}}{2}(\hat{\cal E}_{1}-\hat{\cal E}_{3})+\frac{\mathrm{g}^{\ast}\hat{a}}{2}(\hat{\cal E}_{1}+\hat{\cal E}_{3}),\\
\nonumber i\dot{\hat{\cal E}_{2}}=\frac{\mathrm{g}\hat{a}^{\dag}}{2}(\hat{\cal E}_{2}-\hat{\cal E}_{4})+\frac{\mathrm{g}^{\ast}\hat{a}}{2}(\hat{\cal E}_{2}+\hat{\cal E}_{4}),\\
i\dot{\hat{\cal E}_{3}}=\frac{\mathrm{g}\hat{a}^{\dag}}{2}(\hat{\cal E}_{1}-\hat{\cal E}_{3})-\frac{\mathrm{g}^{\ast}\hat{a}}{2}(\hat{\cal E}_{1}+\hat{\cal E}_{3}),\\
\nonumber i\dot{\hat{\cal E}_{4}}=\frac{\mathrm{g}\hat{a}^{\dag}}{2}(\hat{\cal E}_{2}-\hat{\cal E}_{4})-\frac{\mathrm{g}^{\ast}\hat{a}}{2}(\hat{\cal E}_{2}+\hat{\cal E}_{4}).
\end{eqnarray}
In order to solve the above set of coupled differential equations, we proceed as follows. First, we take a derivative with respect to time of the first equation. By replacing $\dot{\hat{\cal E}_{1}}$ and $\dot{\hat{\cal E}_{3}}$ from the first and the third equations in the resulting equation we find
\begin{equation}\label{11}
i\ddot{\hat{\cal E}_{1}}=\frac{-i|\mathrm{g}|^{2}}{2}\{(1+2\hat{N})\hat{\cal E}_{1}-\hat{\cal E}_{3}\},
\end{equation}
where $\hat{N}=\hat{a}^{\dag}\hat{a}$ is the number operator. Similarly, by doing the same procedure on the third equation for $\dot{\hat{\cal E}_{3}}$ we find
\begin{equation}\label{12}
i\ddot{\hat{\cal E}_{3}}=\frac{-i|\mathrm{g}|^{2}}{2}\{(1+2\hat{N})\hat{\cal E}_{3}-\hat{\cal E}_{1}\}.
\end{equation}
Next, we define $\hat{\cal E}_{++}$ and $\hat{\cal E}_{+-}$ as follows
\begin{equation}\label{13}
\hat{\cal E}_{++}=\hat{\cal E}_{1}+\hat{\cal E}_{3} \qquad \mathrm{and} \qquad \hat{\cal E}_{+-}=\hat{\cal E}_{1}-\hat{\cal E}_{3}.
\end{equation}
By adding and subtracting equations (\ref{11}) and (\ref{12}) we find
\begin{eqnarray}\label{14}
\nonumber \ddot{\hat{\cal E}}_{++}=-|\mathrm{g}|^{2}\hat{N}\hat{\cal E}_{++} \qquad \mathrm{and} \\
\ddot{\hat{\cal E}}_{+-}=-|\mathrm{g}|^{2}(\hat{N}+1)\ \hat{\cal E}_{+-}\ .
\end{eqnarray}
These equations for $\hat{\cal E}_{++}$ and $\hat{\cal E}_{+-}$ can simply be solved to find
\begin{eqnarray}\label{15}
\nonumber \hat{\cal E}_{++}=\sin(|\mathrm{g}|\sqrt{\hat{N}}\ t)\hat{A}+\cos(|\mathrm{g}|\sqrt{\hat{N}}\ t)\hat{B}\qquad \mathrm{and}\\
\hat{\cal E}_{+-}=\sin(|\mathrm{g}|\sqrt{\hat{N}+1}\ t)\hat{A'}+\cos(|\mathrm{g}|\sqrt{\hat{N}+1}\ t)\hat{B'},
\end{eqnarray}
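Since $\hat{N}$ is diagonal in the number basis, equation (\ref{14}) reduces in the $n$-th number sector to the scalar ODE $\ddot f=-|\mathrm{g}|^{2}n f$ for $\hat{\cal E}_{++}$ (and $\ddot f=-|\mathrm{g}|^{2}(n+1)f$ for $\hat{\cal E}_{+-}$). A quick numerical check of the proposed solutions, via a central finite difference for the second derivative (our own sanity check, with scalars $a$, $b$ standing in for the operator coefficients):

```python
# Sector-by-sector check that f(t) = sin(g sqrt(n) t) a + cos(g sqrt(n) t) b
# satisfies f'' = -g^2 n f, i.e. equation (14) restricted to the n-th
# number sector; shift = 1 gives the E_+- equation with n -> n + 1.
import math

g, h = 1.0, 1e-4

def max_residual(shift):
    worst = 0.0
    for n in range(6):
        w = g * math.sqrt(n + shift)
        for (a, b) in [(1.0, 0.0), (0.0, 1.0), (0.7, -0.3)]:
            f = lambda t: math.sin(w * t) * a + math.cos(w * t) * b
            t0 = 0.9
            second = (f(t0 + h) - 2 * f(t0) + f(t0 - h)) / h ** 2
            worst = max(worst, abs(second + g ** 2 * (n + shift) * f(t0)))
    return worst

res_pp = max_residual(0)   # E_++ sector equation
res_pm = max_residual(1)   # E_+- sector equation
```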
where $\hat{A}$, $\hat{A'}$, $\hat{B}$ and $\hat{B'}$ are some time-independent \emph{operators} (rather than constant \emph{numbers}), which will be found from our initial conditions in the following paragraphs. Here we note that since these coefficients generally are some time-independent operators rather than constant numbers, and they do not necessarily commute with the number operator $\hat{N}$, we \emph{must} have them on the right-hand side of the $\sin$ and $\cos$ functions (rather than having them on the left-hand side). Only in this way will the solutions in equation (\ref{15}) satisfy equation (\ref{14}).
Now using equations (\ref{13}) and (\ref{15}), we can obtain the operators $\hat{\cal E}_{1}$ and $\hat{\cal E}_{3}$ as follows:
\begin{eqnarray}\label{16}
\nonumber \hat{\cal E}_{1}=\frac{1}{2}\{\sin(|\mathrm{g}|\sqrt{\hat{N}}\ t)\hat{A}+\cos(|\mathrm{g}|\sqrt{\hat{N}}\ t)\hat{B}\\+\sin(|\mathrm{g}|\sqrt{\hat{N}+1}\ t)\hat{A'}+\cos(|\mathrm{g}|\sqrt{\hat{N}+1}\ t)\hat{B'}\} \qquad \mathrm{and} \\
\nonumber \hat{\cal E}_{3}=\frac{1}{2}\{\sin(|\mathrm{g}|\sqrt{\hat{N}}\ t)\hat{A}+\cos(|\mathrm{g}|\sqrt{\hat{N}}\ t)\hat{B}\\-\sin(|\mathrm{g}|\sqrt{\hat{N}+1}\ t)\hat{A'}-\cos(|\mathrm{g}|\sqrt{\hat{N}+1}\ t)\hat{B'}\}.
\end{eqnarray}
In quite the same manner we can calculate $\hat{\cal E}_{2}$ and $\hat{\cal E}_{4}$ as follows
\begin{eqnarray}\label{18}
\nonumber \hat{\cal E}_{2}=\frac{1}{2}\{\sin(|\mathrm{g}|\sqrt{\hat{N}}\ t)\hat{C}+\cos(|\mathrm{g}|\sqrt{\hat{N}}\ t)\hat{D}\\+\sin(|\mathrm{g}|\sqrt{\hat{N}+1}\ t)\hat{C'}+\cos(|\mathrm{g}|\sqrt{\hat{N}+1}\ t)\hat{D'}\} \qquad \mathrm{and} \\
\nonumber \hat{\cal E}_{4}=\frac{1}{2}\{\sin(|\mathrm{g}|\sqrt{\hat{N}}\ t)\hat{C}+\cos(|\mathrm{g}|\sqrt{\hat{N}}\ t)\hat{D}\\-\sin(|\mathrm{g}|\sqrt{\hat{N}+1}\ t)\hat{C'}-\cos(|\mathrm{g}|\sqrt{\hat{N}+1}\ t)\hat{D'}\};
\end{eqnarray}
where $\hat{C}$, $\hat{C'}$, $\hat{D}$ and $\hat{D'}$ are likewise time-independent \emph{operators} in general, which will be determined from our original set of equations (\ref{10}) and the initial conditions on the $\{\hat{\cal E}_{i}\}$'s.
In order to obtain the eight operator coefficients which appear in our expressions for $\{\hat{\cal E}_{i}\}$'s, first we note that the time-evolution operator, given by equation (\ref{270}), must satisfy the initial condition $\hat{U}(t=0)=\hat{I}_{\cal S}\otimes\hat{I}_{\cal E}$; where $\hat{I}_{\cal S}=|a\rangle\langle a|+|b\rangle\langle b|$ represents the identity operator in the Hilbert space of the system and $\hat{I}_{\cal E}$ is the identity operator in the Hilbert space of the environment. This means that we must have
\begin{equation}\label{20}
\hat{\cal E}_{1}(0)=\hat{\cal E}_{4}(0)=\hat{I}_{\cal E} \qquad \mathrm{and} \qquad \hat{\cal E}_{2}(0)=\hat{\cal E}_{3}(0)=0.
\end{equation}
From the above initial conditions and equations (\ref{16}) and (\ref{18}) we easily find four of the coefficients as follows
\begin{equation}\label{21}
\hat{B}=\hat{B'}=\hat{D}=\hat{I}_{\cal E} \qquad \mathrm{and} \qquad \hat{D'}=-\hat{I}_{\cal E}.
\end{equation}
In order to find $\hat{A}$ and $\hat{A'}$ we proceed as follows. First, we use equation (\ref{10}-a) to obtain $\hat{\cal E}_{3}$ as follows
\begin{equation}\label{22}
(\mathrm{g}^{\ast}\hat{a}-\mathrm{g}\hat{a}^{\dag})\hat{\cal E}_{3}=2i\dot{\hat{\cal E}_{1}}-(\mathrm{g}^{\ast}\hat{a}+\mathrm{g}\hat{a}^{\dag})\hat{\cal E}_{1}\ .
\end{equation}
Substituting $\hat{\cal E}_{1}$ and $\dot{\hat{\cal E}_{1}}$ from equation (\ref{16}) into the above equation gives
\begin{eqnarray}\label{23}
\nonumber (\mathrm{g}^{\ast}\hat{a}-\mathrm{g}\hat{a}^{\dag})\hat{\cal E}_{3}=i|\mathrm{g}|\sqrt{\hat{N}}\ \{\cos(|\mathrm{g}|\sqrt{\hat{N}}\ t)\hat{A}-\sin(|\mathrm{g}|\sqrt{\hat{N}}\ t)\}\\+i|\mathrm{g}|\sqrt{\hat{N}+1}\ \{\cos(|\mathrm{g}|\sqrt{\hat{N}+1}\ t)\hat{A'}-\sin(|\mathrm{g}|\sqrt{\hat{N}+1}\ t)\}\\ \nonumber
-(\frac{\mathrm{g}^{\ast}\hat{a}+\mathrm{g}\hat{a}^{\dag}}{2})\{\sin(|\mathrm{g}|\sqrt{\hat{N}}\ t)\hat{A}+\cos(|\mathrm{g}|\sqrt{\hat{N}}\ t)\\ \nonumber+\sin(|\mathrm{g}|\sqrt{\hat{N}+1}\ t)\hat{A'}+\cos(|\mathrm{g}|\sqrt{\hat{N}+1}\ t)\}.
\end{eqnarray}
At $t=0$ the above equation reduces to
\begin{equation}\label{24}
i|\mathrm{g}|\sqrt{\hat{N}}\ \hat{A}+i|\mathrm{g}|\sqrt{\hat{N}+1}\ \hat{A'}-(\mathrm{g}^{\ast}\hat{a}+\mathrm{g}\hat{a}^{\dag})=(\mathrm{g}^{\ast}\hat{a}-\mathrm{g}\hat{a}^{\dag})\ \hat{\cal E}_{3}(t=0)=0.
\end{equation}
Applying this last equation to $|n\rangle$ we have
\begin{equation}\label{25}
i|\mathrm{g}|\sqrt{\hat{N}}\ \hat{A}\ |n\rangle+i|\mathrm{g}|\sqrt{\hat{N}+1}\ \hat{A'}\ |n\rangle=\mathrm{g}^{\ast}\sqrt{n}\ |n-1\rangle+\mathrm{g}\sqrt{n+1}\ |n+1\rangle.
\end{equation}
Next, we use equation (\ref{10}-c) to obtain $\hat{\cal E}_{1}$ as follows
\begin{equation}\label{26}
(\mathrm{g}\hat{a}^{\dag}-\mathrm{g}^{\ast}\hat{a})\hat{\cal E}_{1}=2i\dot{\hat{\cal E}_{3}}+(\mathrm{g}^{\ast}\hat{a}+\mathrm{g}\hat{a}^{\dag})\hat{\cal E}_{3}\ .
\end{equation}
Substituting $\hat{\cal E}_{3}$ and $\dot{\hat{\cal E}_{3}}$ from equation (\ref{16}) into the above equation gives
\begin{eqnarray}\label{27}
\nonumber (\mathrm{g}\hat{a}^{\dag}-\mathrm{g}^{\ast}\hat{a})\hat{\cal E}_{1}=i|\mathrm{g}|\sqrt{\hat{N}}\ \{\cos(|\mathrm{g}|\sqrt{\hat{N}}\ t)\hat{A}-\sin(|\mathrm{g}|\sqrt{\hat{N}}\ t)\}\\+i|\mathrm{g}|\sqrt{\hat{N}+1}\ \{-\cos(|\mathrm{g}|\sqrt{\hat{N}+1}\ t)\hat{A'}+\sin(|\mathrm{g}|\sqrt{\hat{N}+1}\ t)\}\\ \nonumber
+(\frac{\mathrm{g}^{\ast}\hat{a}+\mathrm{g}\hat{a}^{\dag}}{2})\{\sin(|\mathrm{g}|\sqrt{\hat{N}}\ t)\hat{A}+\cos(|\mathrm{g}|\sqrt{\hat{N}}\ t)\\ \nonumber-\sin(|\mathrm{g}|\sqrt{\hat{N}+1}\ t)\hat{A'}-\cos(|\mathrm{g}|\sqrt{\hat{N}+1}\ t)\}.
\end{eqnarray}
At $t=0$ the above equation reduces to
\begin{equation}\label{28}
i|\mathrm{g}|\sqrt{\hat{N}}\ \hat{A}-i|\mathrm{g}|\sqrt{\hat{N}+1}\ \hat{A'}=(\mathrm{g}\hat{a}^{\dag}-\mathrm{g}^{\ast}\hat{a})\ \hat{\cal E}_{1}(t=0)=(\mathrm{g}\hat{a}^{\dag}-\mathrm{g}^{\ast}\hat{a}).
\end{equation}
Applying this last equation to $|n\rangle$ we have
\begin{equation}\label{29}
i|\mathrm{g}|\sqrt{\hat{N}}\ \hat{A}\ |n\rangle-i|\mathrm{g}|\sqrt{\hat{N}+1}\ \hat{A'}\ |n\rangle=\mathrm{g}\sqrt{n+1}\ |n+1\rangle-\mathrm{g}^{\ast}\sqrt{n}\ |n-1\rangle.
\end{equation}
Finally, we use equations (\ref{25}) and (\ref{29}) to obtain the coefficients $\hat{A}$ and $\hat{A'}$. Assuming $\mathrm{g}$ to be real and then adding equations (\ref{25}) and (\ref{29}) we find
\begin{equation}\label{30}
\hat{A}\ |n\rangle=-i\frac{1}{\sqrt{\hat{N}}}\sqrt{n+1}\ |n+1\rangle=-i|n+1\rangle.
\end{equation}
Comparing the above equation to $\frac{-i}{\sqrt{\hat{N}}}\ \hat{a}^{\dag}\ |n\rangle=-i|n+1\rangle$ we find $\hat{A}$ as
\begin{equation}\label{31}
\hat{A}=\frac{-i}{\sqrt{\hat{N}}}\ \hat{a}^{\dag}.
\end{equation}
Similarly, subtracting equation (\ref{29}) from equation (\ref{25}) we find
\begin{equation}\label{32}
\hat{A'}\ |n\rangle=-i\frac{1}{\sqrt{\hat{N}+1}}\sqrt{n}\ |n-1\rangle=-i|n-1\rangle
\end{equation}
and comparing the above equation to $\frac{-i}{\sqrt{\hat{N}+1}}\ \hat{a}\ |n\rangle=-i|n-1\rangle$ we find $\hat{A'}$ as
\begin{equation}\label{33}
\hat{A'}=\frac{-i}{\sqrt{\hat{N}+1}}\ \hat{a}\ .
\end{equation}
Note the order in which the operators appear in equations (\ref{31}) and (\ref{33}), as these operators do not commute.
Applying exactly the same procedure to equations (\ref{10}-b) and (\ref{10}-d), we find the operator coefficients $\hat{C}$ and $\hat{C'}$ as follows
\begin{equation}\label{34}
\hat{C}=\frac{i}{\sqrt{\hat{N}}}\ \hat{a}^{\dag}=-\hat{A} \qquad \mathrm{and} \qquad \hat{C'}=\frac{-i}{\sqrt{\hat{N}+1}}\ \hat{a}=\hat{A'}\ .
\end{equation}
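The actions of these coefficients on the number states can be verified directly. The sketch below (our own sanity check on a truncated Fock space; the dimension $d$ and the pseudo-inverse treatment of $\hat{N}^{-1/2}$ on $|0\rangle$ are our assumptions) confirms $\hat{A}|n\rangle=-i|n+1\rangle$ and $\hat{A'}|n\rangle=-i|n-1\rangle$, as used in equations (\ref{30})--(\ref{33}):

```python
import numpy as np

# Check A = -i N^{-1/2} a^dag : |n> -> -i|n+1>  and  A' = -i (N+1)^{-1/2} a : |n> -> -i|n-1>
d = 10
n = np.arange(d)
a = np.diag(np.sqrt(n[1:].astype(float)), 1)     # <n-1|a|n> = sqrt(n)
adag = a.conj().T
inv_sqrtN = np.diag(np.concatenate(([0.0], 1 / np.sqrt(n[1:]))))  # pseudo-inverse on |0>
inv_sqrtN1 = np.diag(1 / np.sqrt(n + 1.0))

A = -1j * inv_sqrtN @ adag
Ap = -1j * inv_sqrtN1 @ a

for k in range(d - 1):                           # A|n> = -i|n+1> (below the truncation edge)
    e = np.zeros(d); e[k] = 1.0
    expect = np.zeros(d, complex); expect[k + 1] = -1j
    assert np.allclose(A @ e, expect)
for k in range(1, d):                            # A'|n> = -i|n-1>
    e = np.zeros(d); e[k] = 1.0
    expect = np.zeros(d, complex); expect[k - 1] = -1j
    assert np.allclose(Ap @ e, expect)
print("A|n> = -i|n+1>,  A'|n> = -i|n-1>")
```
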
Now that we have found all the operator coefficients, we can substitute them back into equations (\ref{16}) and (\ref{18}) and write the $\{\hat{\cal E}_{i}\}$ in their final form
\begin{eqnarray}\label{35}
\nonumber \hat{\cal E}_{1}=\frac{1}{2}\{-i\sin(|\mathrm{g}|\sqrt{\hat{N}}\ t)\frac{1}{\sqrt{\hat{N}}}\ \hat{a}^{\dag}+\cos(|\mathrm{g}|\sqrt{\hat{N}}\ t)\\-i\sin(|\mathrm{g}|\sqrt{\hat{N}+1}\ t)\frac{1}{\sqrt{\hat{N}+1}}\ \hat{a}+\cos(|\mathrm{g}|\sqrt{\hat{N}+1}\ t)\}, \\
\nonumber \hat{\cal E}_{3}=\frac{1}{2}\{-i\sin(|\mathrm{g}|\sqrt{\hat{N}}\ t)\frac{1}{\sqrt{\hat{N}}}\ \hat{a}^{\dag}+\cos(|\mathrm{g}|\sqrt{\hat{N}}\ t)\\+i\sin(|\mathrm{g}|\sqrt{\hat{N}+1}\ t)\frac{1}{\sqrt{\hat{N}+1}}\ \hat{a}-\cos(|\mathrm{g}|\sqrt{\hat{N}+1}\ t)\},
\end{eqnarray}
\begin{eqnarray}\label{36}
\nonumber \hat{\cal E}_{2}=\frac{1}{2}\{i\sin(|\mathrm{g}|\sqrt{\hat{N}}\ t)\frac{1}{\sqrt{\hat{N}}}\ \hat{a}^{\dag}+\cos(|\mathrm{g}|\sqrt{\hat{N}}\ t)\\-i\sin(|\mathrm{g}|\sqrt{\hat{N}+1}\ t)\frac{1}{\sqrt{\hat{N}+1}}\ \hat{a}-\cos(|\mathrm{g}|\sqrt{\hat{N}+1}\ t)\} \qquad \mathrm{and} \\
\nonumber \hat{\cal E}_{4}=\frac{1}{2}\{i\sin(|\mathrm{g}|\sqrt{\hat{N}}\ t)\frac{1}{\sqrt{\hat{N}}}\ \hat{a}^{\dag}+\cos(|\mathrm{g}|\sqrt{\hat{N}}\ t)\\+i\sin(|\mathrm{g}|\sqrt{\hat{N}+1}\ t)\frac{1}{\sqrt{\hat{N}+1}}\ \hat{a}+\cos(|\mathrm{g}|\sqrt{\hat{N}+1}\ t)\}.
\end{eqnarray}
One can verify that the above set of operators satisfies unitarity of the time-evolution operator, $\hat{U}^{\dag}\hat{U}=\hat{U}\hat{U}^{\dag}=\hat{I}$.
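This unitarity can be confirmed numerically. In the sketch below (our own construction; the truncation dimension, coupling and time are arbitrary assumptions) we build the $\hat{\cal E}_{i}$ of equations (\ref{35})--(\ref{36}) on a truncated Fock space, using the shift matrices $P=\hat{N}^{-1/2}\hat{a}^{\dag}$ and $Q=(\hat{N}+1)^{-1/2}\hat{a}$, assemble $\hat{U}$ in the $(|a\rangle,|b\rangle)$ basis, and check $\hat{U}^{\dag}\hat{U}=\hat{I}$ away from the truncation edge:

```python
import numpy as np

# Build E_1..E_4 from Eqs. (35)-(36) and check unitarity of U on the
# truncation-safe subspace (the top Fock level of each spin block is dropped).
d, g, t = 12, 1.0, 1.7
n = np.arange(d)
sin0, cos0 = np.diag(np.sin(g * t * np.sqrt(n))), np.diag(np.cos(g * t * np.sqrt(n)))
sin1, cos1 = np.diag(np.sin(g * t * np.sqrt(n + 1))), np.diag(np.cos(g * t * np.sqrt(n + 1)))
P = np.diag(np.ones(d - 1), -1)   # N^{-1/2} a^dag : |n> -> |n+1>
Q = np.diag(np.ones(d - 1), 1)    # (N+1)^{-1/2} a : |n> -> |n-1>

E1 = 0.5 * (-1j * sin0 @ P + cos0 - 1j * sin1 @ Q + cos1)
E3 = 0.5 * (-1j * sin0 @ P + cos0 + 1j * sin1 @ Q - cos1)
E2 = 0.5 * (+1j * sin0 @ P + cos0 - 1j * sin1 @ Q - cos1)
E4 = 0.5 * (+1j * sin0 @ P + cos0 + 1j * sin1 @ Q + cos1)

U = np.block([[E1, E2], [E3, E4]])
M = U.conj().T @ U
keep = np.r_[0:d - 1, d:2 * d - 1]
assert np.allclose(M[np.ix_(keep, keep)], np.eye(2 * (d - 1)))
print("U^dag U = 1 (within the truncation-safe subspace)")
```
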
\section{Calculation of the time-dependent pointer states of the system and the environment}
In order to obtain the time-dependent pointer states of the system and those of its environment for our model, represented by the Hamiltonian of equation (\ref{1}), we assume the field to be initially prepared in the coherent state
$|\nu\rangle$
\begin{equation}\label{48}
|\Phi_{\rm
field}(t_{0})\rangle=|\nu\rangle=\sum_{n=0}^{\infty}\mathrm{c}_{n}|n\rangle;
\quad\mathrm{with} \quad
\mathrm{c}_{n}=\frac{\rme^{-\frac{1}{2}|\nu|^{2}}\nu^{n}}{\sqrt{n!}},
\end{equation}
where $|\nu|^{2}=\bar{n}$ is the average number of photons in the
coherent state, and $\nu=|\nu|e^{-i\varphi}$. In this section we will show that in the regime that we are considering (i.e.\ the exact-resonance regime with the RWA) and for the environment initially prepared in the coherent state, in the limit of a large average number of photons $\bar{n}\rightarrow\infty$ the pointer states of the system (the central spin) are given by
\begin{eqnarray}\label{49}
\nonumber |+(t)\rangle=-i\cos(\frac{\varphi}{2}+\frac{\mathrm{g} t}{4\sqrt{\bar{n}}})|a\rangle+\sin(\frac{\varphi}{2}+\frac{\mathrm{g} t}{4\sqrt{\bar{n}}})|b\rangle \qquad \mathrm{and} \\ |-(t)\rangle=i\sin(\frac{\varphi}{2}-\frac{\mathrm{g} t}{4\sqrt{\bar{n}}})|a\rangle+\cos(\frac{\varphi}{2}-\frac{\mathrm{g} t}{4\sqrt{\bar{n}}})|b\rangle,
\end{eqnarray}
where $|a\rangle$ and $|b\rangle$ are eigenstates of the $\hat{\sigma}_{z}$ Pauli matrix.
We make the usual assumption that there are no correlations between the system and the environment at $t=0$. So, we consider the following initial state for the total composite system
\begin{equation}\label{50}
|\psi^{\mathrm{tot}}(t_{0})\rangle=(\alpha|a\rangle+\beta|b\rangle)\otimes\sum_{n=0}^{\infty}\mathrm{c}_{n}|n\rangle \qquad \mathrm{with} \qquad |\alpha|^{2}+|\beta|^{2}=1.
\end{equation}
For pointer states the two vectors $\textbf{A}(t)$ and $\textbf{B}(t)$ of the Hilbert space of the environment must be parallel to each other. Therefore, by our condition, given by equation (\ref{3.30.1}), for pointer states we must have:
\begin{equation}\label{51}
\nonumber \sum_{n}c_{n}\{\alpha\hat{\cal E}_{1}+\beta\hat{\cal E}_{2}\}\ |\varphi_{n}\rangle=G(t)\times\sum_{n}c_{n}\{\alpha\hat{\cal E}_{3}+\beta\hat{\cal E}_{4}\}\ |\varphi_{n}\rangle.
\end{equation}
For the $\{\hat{\cal E}_{i}\}$, given by equations (\ref{35}) and (\ref{36}), we can write
\begin{equation}\label{52}
\hat{\cal E}_{i}|n\rangle=f_{i1}(n)|n+1\rangle+f_{i2}(n)|n\rangle+f_{i3}(n)|n-1\rangle,
\end{equation}
where $f_{ij}$'s (with $i=1,2,3,4$ and $j=1,2,3$) are given by
\begin{eqnarray}\label{53}
\nonumber f_{11}(n)=f_{31}(n)=-f_{21}(n)=-f_{41}(n)=\frac{-i}{2}\sin(\mathrm{g} t\sqrt{n+1})\equiv f_{1}(n),\\
\nonumber f_{12}(n)=f_{42}(n)=\frac{1}{2}(\cos(\mathrm{g} t\sqrt{n})+\cos(\mathrm{g} t\sqrt{n+1}))\equiv f_{2}(n),\\
f_{22}(n)=f_{32}(n)=\frac{1}{2}(\cos(\mathrm{g} t\sqrt{n})-\cos(\mathrm{g} t\sqrt{n+1}))\equiv f_{3}(n) \qquad \mathrm{and}\\
\nonumber f_{13}(n)=f_{23}(n)=-f_{33}(n)=-f_{43}(n)=\frac{-i}{2}\sin(\mathrm{g} t\sqrt{n})\equiv f'_{1}(n).
\end{eqnarray}
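The tridiagonal structure of equation (\ref{52}) and the coefficient table of equation (\ref{53}) can be verified against the matrices of equations (\ref{35})--(\ref{36}). The following is our own numerical sketch (truncation dimension, coupling and time are arbitrary assumptions):

```python
import numpy as np

# Each E_i is tridiagonal in the number basis: sub/main/super diagonals are
# f_{i1}(n), f_{i2}(n), f_{i3}(n) of Eq. (53).
d, g, t = 15, 1.0, 0.9
n = np.arange(d)
sin0, cos0 = np.diag(np.sin(g * t * np.sqrt(n))), np.diag(np.cos(g * t * np.sqrt(n)))
sin1, cos1 = np.diag(np.sin(g * t * np.sqrt(n + 1))), np.diag(np.cos(g * t * np.sqrt(n + 1)))
P = np.diag(np.ones(d - 1), -1)   # N^{-1/2} a^dag
Q = np.diag(np.ones(d - 1), 1)    # (N+1)^{-1/2} a

E1 = 0.5 * (-1j * sin0 @ P + cos0 - 1j * sin1 @ Q + cos1)
E2 = 0.5 * (+1j * sin0 @ P + cos0 - 1j * sin1 @ Q - cos1)
E3 = 0.5 * (-1j * sin0 @ P + cos0 + 1j * sin1 @ Q - cos1)
E4 = 0.5 * (+1j * sin0 @ P + cos0 + 1j * sin1 @ Q + cos1)

f1 = -0.5j * np.sin(g * t * np.sqrt(n + 1))                               # f_{11}=f_{31}=-f_{21}=-f_{41}
f2 = 0.5 * (np.cos(g * t * np.sqrt(n)) + np.cos(g * t * np.sqrt(n + 1)))  # f_{12}=f_{42}
f3 = 0.5 * (np.cos(g * t * np.sqrt(n)) - np.cos(g * t * np.sqrt(n + 1)))  # f_{22}=f_{32}
f1p = -0.5j * np.sin(g * t * np.sqrt(n))                                  # f_{13}=f_{23}=-f_{33}=-f_{43}

for E, sub, dia, sup in [(E1, f1, f2, f1p), (E2, -f1, f3, f1p),
                         (E3, f1, f3, -f1p), (E4, -f1, f2, -f1p)]:
    assert np.allclose(np.diag(E, -1), sub[:d - 1])   # <n+1|E_i|n> = f_{i1}(n)
    assert np.allclose(np.diag(E), dia)               # <n|E_i|n>   = f_{i2}(n)
    assert np.allclose(np.diag(E, 1), sup[1:])        # <n-1|E_i|n> = f_{i3}(n)
print("matrix elements agree with Eq. (53)")
```
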
Using equation (\ref{52}) our condition, given by equation (\ref{51}), becomes
\begin{eqnarray}\label{54}
\nonumber \sum_{n=0}^{\infty}c_{n}\{\alpha(f_{11}(n)|n+1\rangle+f_{12}(n)|n\rangle+f_{13}(n)|n-1\rangle)\\ \nonumber +\beta(f_{21}(n)|n+1\rangle+
f_{22}(n)|n\rangle+f_{23}(n)|n-1\rangle)\}\\=G(t)\times \sum_{n=0}^{\infty}c_{n}\{\alpha(f_{31}(n)|n+1\rangle+f_{32}(n)|n\rangle+f_{33}(n)|n-1\rangle)\\ \nonumber +\beta(f_{41}(n)|n+1\rangle+f_{42}(n)|n\rangle+f_{43}(n)|n-1\rangle)\}.
\end{eqnarray}
Now, since the number states $\{|n\rangle\}$ form a complete set of basis states for the environment, for the initial pointer states we can expand the summations in equation (\ref{54}) and equate terms on the two sides of this equation which correspond to the same number state $|n\rangle$. After using equation (\ref{53}), equation (\ref{54}) simplifies to
\begin{eqnarray}\label{56}
\fl \nonumber G(t)=\frac{(\alpha-\beta)c_{n}f_{1}(n)+\alpha c_{n+1}f_{2}(n+1)+\beta c_{n+1}f_{3}(n+1)+(\alpha+\beta)c_{n+2}f'_{1}(n+2)}{(\alpha-\beta)c_{n}f_{1}(n)+\alpha c_{n+1}f_{3}(n+1)+\beta c_{n+1}f_{2}(n+1)-(\alpha+\beta)c_{n+2}f'_{1}(n+2)};
\\ \mathrm{for\ all}\ n.
\end{eqnarray}
The above result for $G(t)$, which generally depends on $n$, would contradict our initial assumption of the two vectors $\textbf{A}(t)$ and $\textbf{B}(t)$ being parallel to each other, \emph{unless} we can find certain initial states of the system for which $G(t)$ turns out to be independent of $n$; as we discussed, for pointer states all components of the vector $\textbf{A}$ (the $A_{n}$'s) must be related to their corresponding components of $\textbf{B}$ (the $B_{n}$'s) through the \emph{same} scalar factor $G$ (see equation (\ref{3.30.1})). We should therefore seek those particular initial states of the system which make $G(t)$ independent of the index $n$ of the states of the environment.
For an initial coherent field (equation (\ref{48})) we have: \\ $c_{n+1}=c_{n}e^{-i\varphi}\sqrt{\frac{\bar{n}}{n+1}}$ and $c_{n+2}=c_{n}e^{-2i\varphi}\frac{\bar{n}}{\sqrt{(n+1)(n+2)}}$.
Moreover, in the limit of a large average number of photons $\bar{n}\rightarrow\infty$ we can replace the factors $\sqrt{\frac{\bar{n}}{n+1}}$ and $\sqrt{\frac{\bar{n}}{n+2}}$ by unity\footnote{This approximation has been used by Gea-Banacloche and others \cite{Gea-Banacloche,Gea-Banacloche2} in the study of the Jaynes-Cummings model of quantum optics. In fact, for the Jaynes-Cummings model it has been shown that an average number of photons only as large as twenty is enough to make this assumption a good approximation \cite{Gea-Banacloche}.};
since for $\bar{n}\rightarrow\infty$ the Poisson distribution of the coherent field is extremely sharp (centred at $\bar{n}$) and hence, for $\bar{n}\rightarrow\infty$ and $n\approx\bar{n}$, we have $\sqrt{\frac{\bar{n}}{n+1}}\approx1$ and $\sqrt{\frac{\bar{n}}{n+2}}\approx1$, while for $n$ far from $\bar{n}$ the coefficient $c_{n}$ is negligible. So the corresponding terms (with $n$ far from $\bar{n}$) do not contribute to the summations of equation (\ref{54}). As a result, equation (\ref{56}) for $G(t)$ can be further simplified to
\begin{eqnarray}\label{57}
\fl G(t)=\frac{(\alpha-\beta)f_{1}(n)+\alpha e^{-i\varphi}f_{2}(n+1)+\beta e^{-i\varphi}f_{3}(n+1)+(\alpha+\beta)e^{-2i\varphi}f'_{1}(n+2)}{(\alpha-\beta)f_{1}(n)+\alpha e^{-i\varphi}f_{3}(n+1)+\beta e^{-i\varphi}f_{2}(n+1)-(\alpha+\beta)e^{-2i\varphi}f'_{1}(n+2)}.
\end{eqnarray}
Replacing the $f_{i}$ functions from equation (\ref{53}), the above equation reads
\begin{eqnarray}\label{58}
\nonumber G(t)=\{\textcolor[rgb]{0.98,0.00,0.00}{(\alpha+\beta)e^{-i\varphi}}\cos(\mathrm{g} t\sqrt{n+1})-i\textcolor[rgb]{0.98,0.00,0.00}{(\alpha-\beta)}\sin(\mathrm{g} t\sqrt{n+1}) \\ \nonumber+\textcolor[rgb]{0.00,0.00,1.00}{(\alpha-\beta)e^{-i\varphi}}\cos(\mathrm{g} t\sqrt{n+2})-i\textcolor[rgb]{0.00,0.00,1.00}{(\alpha+\beta)e^{-2i\varphi}}\sin(\mathrm{g} t\sqrt{n+2})\} \\ \quad \div\ \quad \{\textcolor[rgb]{0.98,0.00,0.00}{(\alpha+\beta)e^{-i\varphi}}\cos(\mathrm{g} t\sqrt{n+1})-i\textcolor[rgb]{0.98,0.00,0.00}{(\alpha-\beta)}\sin(\mathrm{g} t\sqrt{n+1})\nonumber \\ +\textcolor[rgb]{0.00,0.00,1.00}{(\beta-\alpha)e^{-i\varphi}}\cos(\mathrm{g} t\sqrt{n+2})+i\textcolor[rgb]{0.00,0.00,1.00}{(\alpha+\beta)e^{-2i\varphi}}\sin(\mathrm{g} t\sqrt{n+2})\}.
\end{eqnarray}
In order to obtain the pointer states of the system, we should look for those initial states of the system (represented by the coefficients $\alpha$ and $\beta$ in equation (\ref{50})) which make the expression for $G(t)$ in the above equation \emph{independent} of the index $n$ of the states of the environment. By looking at equation (\ref{58}) we realize that if $\alpha-\beta=\pm(\alpha+\beta)e^{-i\varphi}$ the expression for $G(t)$ will be considerably simplified. In what follows we show that for $\alpha-\beta=\pm(\alpha+\beta)e^{-i\varphi}$, which is equivalent to the initial conditions for the state of the system given by
\begin{eqnarray}\label{59}
\nonumber
\alpha_{+}=-i\cos(\varphi/2)\quad \mathrm{and} \quad \beta_{+}=\sin(\varphi/2)\ \ \rm(for\ the\ plus\ sign)\quad\ \mathrm{or} \\
\alpha_{-}=i\sin(\varphi/2)\qquad \mathrm{and} \qquad \beta_{-}=\cos(\varphi/2)\ \ \rm(for\ the\ minus\ sign),
\end{eqnarray}
$G(t)$ of equation (\ref{58}) will be independent of the states of the environment, provided we have a large average number of photons in the field, $\bar{n}\rightarrow\infty$. Therefore, the initial conditions of equation (\ref{59}) correspond to the initial states of the system which do not entangle with the states of the environment. We then obtain the time evolution of these initial pointer states, followed by the corresponding pointer states of the environment.
For $\alpha-\beta=\pm(\alpha+\beta)e^{-i\varphi}$ the expression in equation (\ref{58}) for $G(t)$ simplifies to
\begin{equation}\label{60}
G(t)=\frac{e^{\mp i\mathrm{g} t\sqrt{n+1}}\pm e^{-i\varphi}\ e^{\mp i\mathrm{g} t\sqrt{n+2}}}{e^{\mp i\mathrm{g} t\sqrt{n+1}}\mp e^{-i\varphi}\ e^{\mp i\mathrm{g} t\sqrt{n+2}}}.
\end{equation}
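The collapse of the four-term ratio of equation (\ref{58}) into this compact form can be cross-checked numerically. The sketch below (our own illustration; the values of $\mathrm{g}$, $\varphi$, $n$ and $t$ are arbitrary assumptions) uses the `$+$' branch of equation (\ref{59}):

```python
import numpy as np

# With alpha - beta = +(alpha + beta) e^{-i phi} (the '+' state of Eq. (59)),
# the ratio of Eq. (58) equals the compact form of Eq. (60), upper signs.
g, phi = 1.0, 0.73
alpha, beta = -1j * np.cos(phi / 2), np.sin(phi / 2)   # Eq. (59), plus sign
assert np.isclose(alpha - beta, (alpha + beta) * np.exp(-1j * phi))

for n in (3, 17, 60):
    for t in (0.4, 2.9):
        c1, c2 = g * t * np.sqrt(n + 1), g * t * np.sqrt(n + 2)
        num58 = ((alpha + beta) * np.exp(-1j * phi) * np.cos(c1)
                 - 1j * (alpha - beta) * np.sin(c1)
                 + (alpha - beta) * np.exp(-1j * phi) * np.cos(c2)
                 - 1j * (alpha + beta) * np.exp(-2j * phi) * np.sin(c2))
        den58 = ((alpha + beta) * np.exp(-1j * phi) * np.cos(c1)
                 - 1j * (alpha - beta) * np.sin(c1)
                 + (beta - alpha) * np.exp(-1j * phi) * np.cos(c2)
                 + 1j * (alpha + beta) * np.exp(-2j * phi) * np.sin(c2))
        G58 = num58 / den58
        G60 = ((np.exp(-1j * c1) + np.exp(-1j * phi) * np.exp(-1j * c2))
               / (np.exp(-1j * c1) - np.exp(-1j * phi) * np.exp(-1j * c2)))
        assert np.isclose(G58, G60)
print("Eq. (58) reduces to Eq. (60) for the '+' pointer condition")
```
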
The above expression can be written as
\begin{eqnarray}\label{61}
\fl \nonumber G(t)=\frac{ \{e^{-i\varphi/2}\ e^{\mp\frac{i\mathrm{g} t}{2}(\sqrt{n+1}+\sqrt{n+2})}\} \{e^{i\varphi/2}\ e^{\pm\frac{i\mathrm{g} t}{2}(\sqrt{n+2}-\sqrt{n+1})}\pm e^{-i\varphi/2}\ e^{\pm\frac{i\mathrm{g} t}{2}(\sqrt{n+1}-\sqrt{n+2})}\} } {\{e^{-i\varphi/2}\ e^{\mp\frac{i\mathrm{g} t}{2}(\sqrt{n+1}+\sqrt{n+2})}\} \{e^{i\varphi/2}\ e^{\pm\frac{i\mathrm{g} t}{2}(\sqrt{n+2}-\sqrt{n+1})}\mp e^{-i\varphi/2}\ e^{\pm\frac{i\mathrm{g} t}{2}(\sqrt{n+1}-\sqrt{n+2})}\} }\\ =\frac{ \{e^{i\varphi/2}\ e^{\pm\frac{i\mathrm{g} t}{2}(\sqrt{n+2}-\sqrt{n+1})}\pm e^{-i\varphi/2}\ e^{\pm\frac{i\mathrm{g} t}{2}(\sqrt{n+1}-\sqrt{n+2})}\} } { \{e^{i\varphi/2}\ e^{\pm\frac{i\mathrm{g} t}{2}(\sqrt{n+2}-\sqrt{n+1})}\mp e^{-i\varphi/2}\ e^{\pm\frac{i\mathrm{g} t}{2}(\sqrt{n+1}-\sqrt{n+2})}\} }.
\end{eqnarray}
From the Taylor series expansion of $\sqrt{n+2}-\sqrt{n+1}$ about $\bar{n}$ (the average number of photons in the environment), given by
\begin{equation}\label{61.1}
\sqrt{n+2}-\sqrt{n+1}=\frac{1}{2\sqrt{\bar{n}}}-\frac{(n+1-\bar{n})}{4\bar{n}^{3/2}}+ ...\ ,
\end{equation}
we notice that in the limit of a very large average number of photons we can replace $\sqrt{n+2}-\sqrt{n+1}$ by $\frac{1}{2\sqrt{\bar{n}}}$ \cite{Gea-Banacloche}; for very large $\bar{n}$ all terms after the first, which contain the index $n$, are negligible, and \emph{the series is convergent}.
So, in the classical limit of $\bar{n}\rightarrow\infty$ we can rewrite equation (\ref{61}) for $G(t)$ as
\begin{eqnarray}\label{62}
\nonumber G(t)=\frac{ \{e^{i\varphi/2}\ e^{\pm\frac{i\mathrm{g} t}{4\sqrt{\bar{n}}}}\pm e^{-i\varphi/2}\ e^{\mp\frac{i\mathrm{g} t}{4\sqrt{\bar{n}}}}\} } { \{e^{i\varphi/2}\ e^{\pm\frac{i\mathrm{g} t}{4\sqrt{\bar{n}}}}\mp e^{-i\varphi/2}\ e^{\mp\frac{i\mathrm{g} t}{4\sqrt{\bar{n}}}}\} } \\ \nonumber
=-i\cot(\frac{\varphi}{2}+\frac{\mathrm{g} t}{4\sqrt{\bar{n}}}) \qquad \mathrm{for\ the\ first\ sign}\ \ \\ \mathrm{or} \qquad=i\tan(\frac{\varphi}{2}-\frac{\mathrm{g} t}{4\sqrt{\bar{n}}}) \qquad \mathrm{for\ the\ second\ sign};
\end{eqnarray}
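The claimed $n$-independence can be illustrated numerically: for large $\bar{n}$, the exact ratio of equation (\ref{60}) (upper signs) is essentially the same for all $n$ within the width $\sim\sqrt{\bar{n}}$ of the coherent-state distribution, and agrees with the closed form of equation (\ref{62}). This is our own sketch; the parameter values are arbitrary assumptions:

```python
import numpy as np

# Exact G of Eq. (60) vs. the large-nbar closed form -i cot(phi/2 + g t/(4 sqrt(nbar)))
g, phi, t, nbar = 1.0, 0.9, 2.0, 1e6

def G_exact(n):
    c1, c2 = g * t * np.sqrt(n + 1), g * t * np.sqrt(n + 2)
    return ((np.exp(-1j * c1) + np.exp(-1j * phi) * np.exp(-1j * c2))
            / (np.exp(-1j * c1) - np.exp(-1j * phi) * np.exp(-1j * c2)))

G_limit = -1j / np.tan(phi / 2 + g * t / (4 * np.sqrt(nbar)))
for n in (nbar - np.sqrt(nbar), nbar, nbar + np.sqrt(nbar)):   # within the Poisson width
    assert np.isclose(G_exact(n), G_limit, atol=1e-2)
print("G(t) is independent of n in the nbar -> infinity limit")
```
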
which clearly is independent of the index $n$ of the states of the environment.
In appendix A by calculating the degree of entanglement between the states of the system and the environment for the pointer states which will be obtained from the above result, we will show that this result is valid over a length of time which is proportional to $\bar{n}$, the average number of photons in the field.
The result of equation (\ref{62}) simply means that for the initial states of the system given by
\begin{equation}\label{63}
\fl |+(t_{0})\rangle=-i\cos(\varphi/2)|a\rangle+\sin(\varphi/2)|b\rangle \quad \mathrm{and} \quad |-(t_{0})\rangle=i\sin(\varphi/2)|a\rangle+\cos(\varphi/2)|b\rangle
\end{equation}
the states of the system and the environment will not entangle with each other. Moreover, using equation (\ref{3.32}) which gives us the general time evolution of the pointer states of the system; and $G(t)$ of equation (\ref{62}) (which is independent of the index $n$ of the states of the environment) we can find the time evolution of the pointer states of the system as follows
\begin{eqnarray}\label{64}
\nonumber |+(t)\rangle=\mathcal{N}_{+}\ \{-i\cot(\frac{\varphi}{2}+\frac{\mathrm{g} t}{4\sqrt{\bar{n}}})|a\rangle+|b\rangle\} \quad \mathrm{and} \\
|-(t)\rangle=\mathcal{N}_{-}\ \{i\tan(\frac{\varphi}{2}-\frac{\mathrm{g} t}{4\sqrt{\bar{n}}})|a\rangle+|b\rangle\};
\end{eqnarray}
where $\mathcal{N}_{+}$ and $\mathcal{N}_{-}$ are the normalization factors for the $|+(t)\rangle$ and $|-(t)\rangle$ states respectively. It is easy to verify that
\begin{equation}\label{65}
\fl \qquad \quad \mathcal{N}_{+}=\sin(\theta_{+}(t)) \quad \mathrm{and} \quad \mathcal{N}_{-}=\cos(\theta_{-}(t)), \quad \mathrm{where} \quad \theta_{\pm}(t)=\frac{\varphi}{2}\pm\frac{\mathrm{g} t}{4\sqrt{\bar{n}}}.
\end{equation}
So, we can rewrite equation (\ref{64}) as
\begin{eqnarray}\label{66}
\nonumber |+(t)\rangle=-i\cos(\frac{\varphi}{2}+\frac{\mathrm{g} t}{4\sqrt{\bar{n}}})|a\rangle+\sin(\frac{\varphi}{2}+\frac{\mathrm{g} t}{4\sqrt{\bar{n}}})|b\rangle \qquad \mathrm{and} \\ |-(t)\rangle=i\sin(\frac{\varphi}{2}-\frac{\mathrm{g} t}{4\sqrt{\bar{n}}})|a\rangle+\cos(\frac{\varphi}{2}-\frac{\mathrm{g} t}{4\sqrt{\bar{n}}})|b\rangle,
\end{eqnarray}
which is the same as equation (\ref{49}); Q.E.D.
Next, we obtain the corresponding pointer states of the environment. Using equations (\ref{3.32}) and (\ref{52}) we have
\begin{eqnarray}\label{67}
\nonumber |\Phi_{\pm}(t)\rangle=\mathcal{N}_{\pm}^{-1}\sum_{n=0}^{\infty}c_{n}\{\alpha_{\pm}\hat{\cal E}_{3}+\beta_{\pm}\hat{\cal E}_{4}\}\ |\varphi_{n}\rangle \\ =
\mathcal{N}_{\pm}^{-1}\sum_{n=0}^{\infty}c_{n}\{\alpha_{\pm}\ [f_{31}(n)|n+1\rangle+f_{32}(n)|n\rangle+f_{33}(n)|n-1\rangle]\\ \nonumber +\beta_{\pm}\ [f_{41}(n)|n+1\rangle+f_{42}(n)|n\rangle+f_{43}(n)|n-1\rangle]\},
\end{eqnarray}
where in the above equation $\alpha_{\pm}$ and $\beta_{\pm}$ are those of the initial pointer states of the system given by equation (\ref{59}). Let us first obtain $|\Phi_{+}(t)\rangle$; i.e.\ the pointer state of the environment corresponding to the $|+(t)\rangle$ state. Replacing the $f_{ij}$ functions from equation (\ref{53}), $\alpha_{+}$ and $\beta_{+}$ from equation (\ref{59}) and $\mathcal{N}_{+}$ from equation (\ref{65}) we have
\begin{eqnarray}\label{68}
\nonumber |\Phi_{+}(t)\rangle=\sum_{n=0}^{\infty}\frac{c_{n}}{2\sin(\theta_{+}(t))}\{-\sin(\mathrm{g} t\sqrt{n+1})\ e^{-i\varphi/2}\ |n+1\rangle \\ \nonumber +[-i\cos(\mathrm{g} t\sqrt{n})\ e^{i\varphi/2}+i\cos(\mathrm{g} t\sqrt{n+1})\ e^{-i\varphi/2}\ ]\ |n\rangle \\+\sin(\mathrm{g} t\sqrt{n})\ e^{i\varphi/2}\ |n-1\rangle\}.
\end{eqnarray}
Using $c_{n\pm1}\approx c_{n}e^{\mp i\varphi}$ for the coherent field and in the limit of $\bar{n}\rightarrow\infty$, the above relation can be written as
\begin{eqnarray}\label{69}
\nonumber |\Phi_{+}(t)\rangle=\sum_{n=0}^{\infty}\frac{c_{n}}{2i\sin(\theta_{+}(t))}\{-i\sin(\mathrm{g} t\sqrt{n})\ e^{i\varphi/2}\ \\ \nonumber +[\cos(\mathrm{g} t\sqrt{n})\ e^{i\varphi/2}-\cos(\mathrm{g} t\sqrt{n+1})\ e^{-i\varphi/2}\ ]\ \\ \nonumber +i\sin(\mathrm{g} t\sqrt{n+1})\ e^{-i\varphi/2}\ \}\ |n\rangle \\ =\sum_{n=0}^{\infty}\frac{c_{n}}{2i\sin(\theta_{+}(t))}\ \{e^{i(\varphi/2-\mathrm{g} t\sqrt{n})}-e^{-i(\varphi/2+\mathrm{g} t\sqrt{n+1})}\}\ |n\rangle,
\end{eqnarray}
which can easily be simplified into the following final result for $|\Phi_{+}(t)\rangle$
\begin{equation}\label{70}
|\Phi_{+}(t)\rangle=\sum_{n=0}^{\infty}c_{n}\ e^{\frac{-i\mathrm{g} t}{2}(\sqrt{n+1}+\sqrt{n})}\ |n\rangle.
\end{equation}
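The simplification from equation (\ref{69}) to equation (\ref{70}) rests on the exact factorisation $e^{i(\varphi/2-\mathrm{g} t\sqrt{n})}-e^{-i(\varphi/2+\mathrm{g} t\sqrt{n+1})}=2i\sin\!\big(\frac{\varphi}{2}+\frac{\mathrm{g} t}{2}(\sqrt{n+1}-\sqrt{n})\big)\,e^{-\frac{i\mathrm{g} t}{2}(\sqrt{n}+\sqrt{n+1})}$, with the sine prefactor becoming $\sin(\theta_{+}(t))$ once $\sqrt{n+1}-\sqrt{n}$ is replaced by $\frac{1}{2\sqrt{\bar{n}}}$. A quick numerical check of the identity (our own sketch, arbitrary parameter values):

```python
import numpy as np

# Exact factorisation behind the step from Eq. (69) to Eq. (70).
g, phi = 1.0, 0.6
for n in (2, 11, 40):
    for t in (0.3, 1.7):
        lhs = (np.exp(1j * (phi / 2 - g * t * np.sqrt(n)))
               - np.exp(-1j * (phi / 2 + g * t * np.sqrt(n + 1))))
        theta = phi / 2 + (g * t / 2) * (np.sqrt(n + 1) - np.sqrt(n))
        rhs = 2j * np.sin(theta) * np.exp(-1j * (g * t / 2) * (np.sqrt(n) + np.sqrt(n + 1)))
        assert np.isclose(lhs, rhs)
print("Eq. (69) bracket factorises into the phase of Eq. (70)")
```
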
Following exactly the same procedure one can also find $|\Phi_{-}(t)\rangle$ as follows
\begin{equation}\label{71}
|\Phi_{-}(t)\rangle=\sum_{n=0}^{\infty}c_{n}\ e^{\frac{i\mathrm{g} t}{2}(\sqrt{n+1}+\sqrt{n})}\ |n\rangle.
\end{equation}
We should also mention that the pointer states of the system at $t=t_{0}$ (equation (\ref{63})) are orthonormal and hence form a complete basis set for the state of the system. Therefore, the evolution of \emph{any} initial state of the system $|\psi_{\cal S}(t_{0})\rangle=\alpha'\ |+(t_{0})\rangle+\beta'\ |-(t_{0})\rangle$ in contact with an initial coherent field $|\nu\rangle$ can be expressed as a linear combination of the evolutions of $|+(t_{0})\rangle|\nu\rangle$ and $|-(t_{0})\rangle|\nu\rangle$, in the following diagonal form:
\begin{eqnarray}\label{73}
\fl (\alpha'\ |+(t_{0})\rangle+\beta'\ |-(t_{0})\rangle)\ |\nu\rangle\rightarrow
\alpha'\ |+(t)\rangle\ |\Phi_{+}(t)\rangle+\beta'\ |-(t)\rangle\ |\Phi_{-}(t)\rangle.
\end{eqnarray}
\section{State preparation at specific times}
One interesting feature of the pointer states of the system, given by equation (\ref{66}), is that at specific times they coincide with each other. In fact, by looking at equation (\ref{64}) we notice that at those times for which $\tan(\frac{\varphi}{2}-\frac{\mathrm{g} t}{4\sqrt{\bar{n}}})=-\cot(\frac{\varphi}{2}+\frac{\mathrm{g} t}{4\sqrt{\bar{n}}})$ the $|\pm(t)\rangle$ states are equal to each other. One can easily verify that at $t_{1}=(4n+1)\pi\sqrt{\bar{n}}/\mathrm{g} \ (\mathrm{with}\ n=0,1,2,...)$ both of the $|\pm(t)\rangle$ states will be in the common state given by
\begin{equation}\label{74}
|\pm(t_{1})\rangle=i\sin(\frac{\varphi}{2}-\frac{\pi}{4})|a\rangle+\cos(\frac{\varphi}{2}-\frac{\pi}{4})|b\rangle;
\end{equation}
while at $t_{2}=(4n-1)\pi\sqrt{\bar{n}}/\mathrm{g} \ (\mathrm{with}\ n=1,2,...)$ the $|\pm(t)\rangle$ states will be in the common state given by
\begin{equation}\label{75}
|\pm(t_{2})\rangle=i\sin(\frac{\varphi}{2}+\frac{\pi}{4})|a\rangle+\cos(\frac{\varphi}{2}+\frac{\pi}{4})|b\rangle.
\end{equation}
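The coincidence of the two pointer states at $t_{1}$ can be checked directly from equation (\ref{66}). The following sketch (our own check; $\mathrm{g}$, $\varphi$ and $\bar{n}$ are arbitrary assumptions) takes the $n=0$ member of the $t_{1}$ family and confirms both states reduce to the common state of equation (\ref{74}):

```python
import numpy as np

# At t1 = pi sqrt(nbar)/g the phase g t/(4 sqrt(nbar)) equals pi/4, and both
# pointer states of Eq. (66) coincide with the common state of Eq. (74).
g, phi, nbar = 1.0, 1.1, 400.0
t1 = np.pi * np.sqrt(nbar) / g           # n = 0 case of t1 = (4n+1) pi sqrt(nbar)/g
x = g * t1 / (4 * np.sqrt(nbar))         # = pi/4
plus = np.array([-1j * np.cos(phi / 2 + x), np.sin(phi / 2 + x)])   # |+(t1)> in (|a>,|b>)
minus = np.array([1j * np.sin(phi / 2 - x), np.cos(phi / 2 - x)])   # |-(t1)>
common = np.array([1j * np.sin(phi / 2 - np.pi / 4),
                   np.cos(phi / 2 - np.pi / 4)])                    # Eq. (74)
assert np.allclose(plus, common)
assert np.allclose(minus, common)
print("|+(t1)> = |-(t1)> = Eq. (74) state")
```
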
State preparation at these specific times basically means that whatever the initial state of the system, at these times the states of the system and the environment are not entangled with each other and the system can be represented by a well-defined state of its own (see equation (\ref{73})). Moreover, as we see, these specific states clearly depend on the phase $\varphi$ of the initial state of the coherent field.
The same kind of phenomenon was also discovered in the simpler Jaynes-Cummings model of quantum optics by Gea-Banacloche \cite{Gea-Banacloche} in 1991.
\section{Consequences regarding the decoherence of the central spin}
In this section we will use the pointer states of the system and the environment, which we obtained in section 3, in order to obtain a \emph{closed} form for the coherences of the reduced density matrix of the system. However, prior to that let us use the time-evolution operator, which we obtained in section 2, to calculate the off-diagonal element of the reduced density matrix of the system in a more precise form.
\subsection{General expressions for the evolution of the state of the total composite system and the reduced density matrix of the system}
Using equations (\ref{3.29}) and (\ref{52}) to obtain $|\psi_{\mathrm{tot}}(t)\rangle$, we can write
\begin{eqnarray}\label{76}
\fl \nonumber |\psi_{\mathrm{tot}}(t)\rangle=\sum_{n=0}^{\infty}c_{n}\{(\alpha f_{11}(n)+\beta f_{21}(n))|a,n+1\rangle+(\alpha f_{12}(n)+\beta f_{22}(n))|a,n\rangle \\+(\alpha f_{13}(n)+\beta f_{23}(n))|a,n-1\rangle+(\alpha f_{31}(n)+\beta f_{41}(n))|b,n+1\rangle \\ \nonumber+(\alpha f_{32}(n)+\beta f_{42}(n))|b,n\rangle+(\alpha f_{33}(n)+\beta f_{43}(n))|b,n-1\rangle\},
\end{eqnarray}
where the $f_{ij}$'s are given by equation (\ref{53}). Substituting these functions from equation (\ref{53}), the above relation simplifies into the following general form
\begin{eqnarray}\label{78}
\fl \nonumber |\psi_{\mathrm{tot}}(t)\rangle=\sum_{n=0}^{\infty}(c_{a,n}(t)|a,n\rangle+c_{b,n}(t)|b,n\rangle), \qquad \mathrm{with} \\
\nonumber c_{a,n}(t)=\frac{-i}{2}(\alpha-\beta)c_{n-1}\sin(\mathrm{g} t\sqrt{n})+c_{n}[(\frac{\alpha+\beta}{2})\cos(\mathrm{g} t\sqrt{n})\\ +(\frac{\alpha-\beta}{2})\cos(\mathrm{g} t\sqrt{n+1})]-\frac{i}{2}(\alpha+\beta)c_{n+1}\sin(\mathrm{g} t\sqrt{n+1})\\
\nonumber c_{b,n}(t)=\frac{-i}{2}(\alpha-\beta)c_{n-1}\sin(\mathrm{g} t\sqrt{n})+c_{n}[(\frac{\alpha+\beta}{2})\cos(\mathrm{g} t\sqrt{n})\\ \nonumber -(\frac{\alpha-\beta}{2})\cos(\mathrm{g} t\sqrt{n+1})]+\frac{i}{2}(\alpha+\beta)c_{n+1}\sin(\mathrm{g} t\sqrt{n+1}).
\end{eqnarray}
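As a consistency check of the coefficients in equation (\ref{78}), unitarity requires $\sum_{n}(|c_{a,n}(t)|^{2}+|c_{b,n}(t)|^{2})=1$ at all times. The sketch below verifies this numerically (our own check with a truncated coherent field of real amplitude; the values of $\alpha$, $\beta$, $\bar{n}$, the cutoff and $t$ are arbitrary assumptions):

```python
import numpy as np

# Norm conservation for the coefficients of Eq. (78), with a truncated
# coherent field c_n = exp(-nbar/2) nbar^{n/2} / sqrt(n!) (phase nu > 0).
g, t, nbar, d = 1.0, 3.2, 4.0, 80
alpha, beta = 0.6, 0.8j                          # |alpha|^2 + |beta|^2 = 1
n = np.arange(d)
logfact = np.cumsum(np.concatenate(([0.0], np.log(n[1:]))))   # log(n!)
c = np.exp(-nbar / 2 + n * np.log(nbar) / 2 - logfact / 2)

cm = np.concatenate(([0.0], c[:-1]))             # c_{n-1}, with c_{-1} = 0
cp = np.concatenate((c[1:], [0.0]))              # c_{n+1}, truncated
s0, s1 = np.sin(g * t * np.sqrt(n)), np.sin(g * t * np.sqrt(n + 1))
k0, k1 = np.cos(g * t * np.sqrt(n)), np.cos(g * t * np.sqrt(n + 1))

ca = (-0.5j * (alpha - beta) * cm * s0
      + c * (0.5 * (alpha + beta) * k0 + 0.5 * (alpha - beta) * k1)
      - 0.5j * (alpha + beta) * cp * s1)
cb = (-0.5j * (alpha - beta) * cm * s0
      + c * (0.5 * (alpha + beta) * k0 - 0.5 * (alpha - beta) * k1)
      + 0.5j * (alpha + beta) * cp * s1)
assert np.isclose(np.sum(np.abs(ca)**2 + np.abs(cb)**2), 1.0)
print("norm is conserved by the coefficients of Eq. (78)")
```
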
Tracing over the basis states of the environment, we obtain the reduced density matrix of the system $\cal S$ as follows
\begin{eqnarray}\label{79}
\nonumber \hat{\rho}^{\cal S}(t)=\sum_{n=0}^{\infty} \langle n|\hat{\rho}^{\mathrm{tot}}(t)|n\rangle=\sum_{n=0}^{\infty} \langle n|\psi_{\mathrm{tot}}(t)\rangle\langle\psi_{\mathrm{tot}}(t)|n\rangle\\=\sum_{n=0}^{\infty}\ (\ |c_{a,n}(t)|^{2}\ |a\rangle\langle a|+|c_{b,n}(t)|^{2}\ |b\rangle\langle b|+c_{a,n}(t)c_{b,n}^{\ast}(t)\ |a\rangle\langle b|+c.c.\ ).
\end{eqnarray}
So, in the basis of the eigenstates of $\sigma_{z}$ (i.e.\ in the basis of the $|a\rangle$ and $|b\rangle$ states) the elements of the reduced density matrix of the system must be given by
\begin{eqnarray}\label{80}
\nonumber \rho^{\cal S}_{12}(t)=\sum_{n=0}^{\infty}c_{a,n}(t)\ c_{b,n}^{\ast}(t)=c_{a,0}\ c_{b,0}^{\ast}+c_{a,1}\ c_{b,1}^{\ast}+c_{a,2}\ c_{b,2}^{\ast}+... \qquad \mathrm{and} \\ \rho^{\cal S}_{11}(t)=1-\rho^{\cal S}_{22}(t)=\sum_{n=0}^{\infty}|c_{a,n}(t)|^{2}.
\end{eqnarray}
Replacing $c_{a,n}(t)$ and $c_{b,n}(t)$ from equation (\ref{78}) in the above equation, after some algebra one finds
\begin{eqnarray}\label{81}
\nonumber \rho^{\cal S}_{12}(t)=\gamma f_{0}(t)+\delta f_{1}(t)+\lambda f_{2}(t)+\lambda^{\ast} f_{3}(t)+(\lambda-\lambda^{\ast})f_{4}(t) \qquad \mathrm{and} \\
\rho^{\cal S}_{11}(t)=\gamma g_{0}(t)+\delta g_{1}(t)+\lambda g_{2}(t)+\lambda^{\ast} g_{2}^{\ast}(t);
\end{eqnarray}
where in the above equations the coefficients $\gamma,\delta$ and $\lambda$ are given by
\begin{eqnarray}\label{82}
\nonumber \gamma=\frac{1}{4}|\alpha-\beta|^{2} \qquad \mathrm{and} \qquad \delta=\frac{1}{4}|\alpha+\beta|^{2} \qquad \mathrm{and} \\
\lambda=\frac{1}{4}(|\alpha|^{2}-|\beta|^{2}+\alpha\beta^{\ast}-\beta\alpha^{\ast}).
\end{eqnarray}
Also the $f_{i}(t)$ and $g_{i}(t)$ functions are given by
\begin{eqnarray}\label{83}
\fl \nonumber f_{0}(t)=\sum_{n=0}^{\infty}(\ |c_{n-1}|^{2}\sin^{2}(\mathrm{g} t\sqrt{n})+i[c_{n-1}c_{n}^{\ast}+c_{n-1}^{\ast}c_{n}]\\ \nonumber \times\sin(\mathrm{g} t\sqrt{n})\cos(\mathrm{g} t\sqrt{n+1})-|c_{n}|^{2}\cos^{2}(\mathrm{g} t\sqrt{n+1})\ ), \\
\fl \nonumber f_{1}(t)=\sum_{n=0}^{\infty}(\ |c_{n}|^{2}\cos^{2}(\mathrm{g} t\sqrt{n})-i[c_{n}c_{n+1}^{\ast}+c_{n}^{\ast}c_{n+1}]\\ \nonumber \times\cos(\mathrm{g} t\sqrt{n})\sin(\mathrm{g} t\sqrt{n+1})-|c_{n+1}|^{2}\sin^{2}(\mathrm{g} t\sqrt{n+1})\ ), \\
\fl \nonumber f_{2}(t)=\sum_{n=0}^{\infty}(\ -i c_{n-1}c_{n}^{\ast}\sin(\mathrm{g} t\sqrt{n})\cos(\mathrm{g} t\sqrt{n})-c_{n-1}c_{n+1}^{\ast}\\ \nonumber \times\sin(\mathrm{g} t\sqrt{n})\sin(\mathrm{g} t\sqrt{n+1})-i c_{n}c_{n+1}^{\ast}\sin(\mathrm{g} t\sqrt{n+1})\cos(\mathrm{g} t\sqrt{n+1})), \\
\fl \nonumber f_{3}(t)=\sum_{n=0}^{\infty}(\ i c_{n}c_{n-1}^{\ast}\sin(\mathrm{g} t\sqrt{n})\cos(\mathrm{g} t\sqrt{n})+c_{n+1}c_{n-1}^{\ast}\\ \nonumber \times\sin(\mathrm{g} t\sqrt{n})\sin(\mathrm{g} t\sqrt{n+1})+i c_{n+1}c_{n}^{\ast}\sin(\mathrm{g} t\sqrt{n+1})\cos(\mathrm{g} t\sqrt{n+1})), \\
\fl f_{4}(t)=\sum_{n=0}^{\infty}(\ |c_{n}|^{2}\cos(\mathrm{g} t\sqrt{n})\cos(\mathrm{g} t\sqrt{n+1})), \\
\fl \nonumber g_{0}(t)=\sum_{n=0}^{\infty}(\ |c_{n-1}|^{2}\sin^{2}(\mathrm{g} t\sqrt{n})+i[c_{n}c_{n-1}^{\ast}\\ \nonumber-c_{n}^{\ast}c_{n-1}]\sin(\mathrm{g} t\sqrt{n})\cos(\mathrm{g} t\sqrt{n+1})+|c_{n}|^{2}\cos^{2}(\mathrm{g} t\sqrt{n+1})\ ) , \\
\fl \nonumber g_{1}(t)=\sum_{n=0}^{\infty}(\ |c_{n}|^{2}\cos^{2}(\mathrm{g} t\sqrt{n})+i[c_{n}c_{n+1}^{\ast}-c_{n}^{\ast}c_{n+1}] \times\cos(\mathrm{g} t\sqrt{n})\sin(\mathrm{g} t\sqrt{n+1})\\ \nonumber +\ |c_{n+1}|^{2}\sin^{2}(\mathrm{g} t\sqrt{n+1}))\quad \mathrm{and}\\
\fl \nonumber g_{2}(t)=\sum_{n=0}^{\infty}(\ -i c_{n-1}c_{n}^{\ast}\sin(\mathrm{g} t\sqrt{n})\cos(\mathrm{g} t\sqrt{n})+c_{n-1}c_{n+1}^{\ast}\\ \nonumber \times\sin(\mathrm{g} t\sqrt{n})\sin(\mathrm{g} t\sqrt{n+1})+|c_{n}|^{2}\cos(\mathrm{g} t\sqrt{n})\cos(\mathrm{g} t\sqrt{n+1})\\ \nonumber+i c_{n}c_{n+1}^{\ast}\sin(\mathrm{g} t\sqrt{n+1})\cos(\mathrm{g} t\sqrt{n+1}))
\end{eqnarray}
\textcolor[rgb]{0.00,0.00,0.00}{In Figure 1 we have used equation (\ref{81}) to plot the evolution of the population inversion $W(t)=\rho_{11}(t)-\rho_{22}(t)$ for the case that the two-level system initially is prepared in the upper level. Here the evolution of the population inversion is characterized by an oscillating envelope, which dominates the fine oscillations; in the simpler example of the JCM these fine oscillations show themselves as collapses and revivals of the atomic inversion.}
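To make this concrete, the sketch below evaluates $W(t)=2\rho^{\cal S}_{11}(t)-1$ from the expansion $\rho^{\cal S}_{11}=\gamma g_{0}+\delta g_{1}+\lambda g_{2}+\lambda^{\ast}g_{2}^{\ast}$ with the $g_{i}(t)$ functions above. It is an independent numerical sketch, not the code behind Figure 1; the identification of the upper-level initial state with $\alpha=1$, $\beta=0$ (so that $\gamma=\delta=\lambda=\frac{1}{4}$) and the values $\mathrm{g}=1$, $\bar{n}=25$, $\varphi=\pi/6$ are assumptions made here for illustration.

```python
import numpy as np
from math import lgamma

g, nbar, phi = 1.0, 25.0, np.pi / 6       # assumed illustrative parameters
n = np.arange(200)
# coherent-state amplitudes c_n = e^{-nbar/2} nbar^{n/2} e^{-i n phi}/sqrt(n!)
logmag = -nbar / 2 + 0.5 * n * np.log(nbar) - 0.5 * np.array([lgamma(k + 1) for k in n])
c = np.exp(logmag) * np.exp(-1j * n * phi)
c_m = np.concatenate(([0], c[:-1]))       # c_{n-1}, with c_{-1} = 0
c_p = np.concatenate((c[1:], [0]))        # c_{n+1}

def rho11(t):
    sn,  cn  = np.sin(g * t * np.sqrt(n)),     np.cos(g * t * np.sqrt(n))
    snp, cnp = np.sin(g * t * np.sqrt(n + 1)), np.cos(g * t * np.sqrt(n + 1))
    g0 = np.sum(np.abs(c_m)**2 * sn**2
                + 1j * (c * np.conj(c_m) - np.conj(c) * c_m) * sn * cnp
                + np.abs(c)**2 * cnp**2)
    g1 = np.sum(np.abs(c)**2 * cn**2
                + 1j * (c * np.conj(c_p) - np.conj(c) * c_p) * cn * snp
                + np.abs(c_p)**2 * snp**2)
    g2 = np.sum(-1j * c_m * np.conj(c) * sn * cn
                + c_m * np.conj(c_p) * sn * snp
                + np.abs(c)**2 * cn * cnp
                + 1j * c * np.conj(c_p) * snp * cnp)
    # gamma = delta = lambda = 1/4 for the assumed upper-level initial state
    return 0.25 * (g0 + g1 + g2 + np.conj(g2))

W = lambda t: 2.0 * rho11(t).real - 1.0   # population inversion
```

At $t=0$ all sine factors vanish and each $g_{i}$ sums to $\sum_{n}|c_{n}|^{2}=1$, so $\rho^{\cal S}_{11}(0)=1$ and $W(0)=1$, as it must for a system starting in the upper level.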
Now, let us obtain the coherences of the reduced density matrix of the system in another way, by using the pointer states of the system and the environment which we obtained in section 3. As we will see, in this way not only can we obtain a closed form for the coherences of the reduced density matrix of the system, but we can also acquire a better understanding of the characteristics of decoherence of the central system in our model.
For $|\psi_{\mathrm{tot}}(t)\rangle$ given by equation (\ref{73}) the reduced density matrix of the system $\hat{\rho}_{\cal S}(t)$ can be calculated by tracing over the environmental degrees of freedom to obtain
\begin{eqnarray}\label{84}
\fl \nonumber \hat{\rho}_{\cal S}(t)=|\alpha'|^{2}\times|+(t)\rangle\langle +(t)|+|\beta'|^{2}\times|-(t)\rangle\langle -(t)|+\alpha'\beta'^{\ast}\\\fl \ \qquad \times|+(t)\rangle\langle -(t)|\times\langle\Phi_{-}(t)|\Phi_{+}(t)\rangle+\beta'\alpha'^{\ast}\times|-(t)\rangle\langle +(t)|\times\langle\Phi_{+}(t)|\Phi_{-}(t)\rangle.
\end{eqnarray}
So, in an arbitrary basis $|a\rangle$ and $|b\rangle$ of the state of the two-level system generally we have
\begin{eqnarray}\label{85}
\fl \nonumber \rho^{\cal S}_{11}(t)=1-\rho^{\cal S}_{22}(t)=|\alpha'|^{2}\times\langle a|+(t)\rangle\langle +(t)|a\rangle+|\beta'|^{2}\times\langle a|-(t)\rangle\langle -(t)|a\rangle+\alpha'\beta'^{\ast}\\ \fl \quad \times\langle a|+(t)\rangle\langle -(t)|a\rangle\times\langle\Phi_{-}(t)|\Phi_{+}(t)\rangle+\beta'\alpha'^{\ast}\times\langle a|-(t)\rangle\langle +(t)|a\rangle\times\langle\Phi_{+}(t)|\Phi_{-}(t)\rangle \\ \fl \nonumber \mathrm{and} \qquad
\rho^{\cal S}_{12}(t)=|\alpha'|^{2}\times\langle a|+(t)\rangle\langle +(t)|b\rangle+|\beta'|^{2}\times\langle a|-(t)\rangle\langle -(t)|b\rangle+\alpha'\beta'^{\ast}\\ \fl \quad \nonumber \times\langle a|+(t)\rangle\langle -(t)|b\rangle\times\langle\Phi_{-}(t)|\Phi_{+}(t)\rangle+\beta'\alpha'^{\ast}\times\langle a|-(t)\rangle\langle +(t)|b\rangle\times\langle\Phi_{+}(t)|\Phi_{-}(t)\rangle.
\end{eqnarray}
For the system initially prepared in one of the pointer states $|\pm(t_{0})\rangle$ (i.e.\ for $\alpha'=0$ or $\beta'=0$) the above expressions can be simplified. For example, for the system initially prepared in the $|+(t_{0})\rangle$ state generally we have
\begin{equation}\label{86}
\rho^{\cal S}_{11}(t)=|\langle a|+(t)\rangle|^{2} \qquad \mathrm{and} \qquad \rho^{\cal S}_{12}(t)=\langle a|+(t)\rangle\,\langle b|+(t)\rangle^{\ast};
\end{equation}
while for the system initially prepared in the $|-(t_{0})\rangle$ state we have
\begin{equation}\label{87}
\rho^{\cal S}_{11}(t)=|\langle a|-(t)\rangle|^{2} \qquad \mathrm{and} \qquad \rho^{\cal S}_{12}(t)=\langle a|-(t)\rangle\,\langle b|-(t)\rangle^{\ast}.
\end{equation}
\begin{figure}
\caption{Time evolution of the population inversion $W(t)$ for the case that the system initially is prepared in the upper level and for an initial coherent state with $\varphi=\pi/6$.}
\label{F1}
\end{figure}
Using equation (\ref{66}) for the pointer states of our spin-boson model, the above equations read
\begin{eqnarray}\label{88}
\fl \nonumber \rho^{\cal S}_{11}(t)=\cos^{2}(\frac{\varphi}{2}+\frac{\mathrm{g} t}{4\sqrt{\bar{n}}}) \ \ \mathrm{and} \quad \rho^{\cal S}_{12}(t)=-\frac{i}{2}\sin(\varphi+\frac{\mathrm{g} t}{2\sqrt{\bar{n}}}) \ \mathrm{for} \ \ |\psi_{\cal S}(t_{0})\rangle=|+(t_{0})\rangle \\
\fl \rho^{\cal S}_{11}(t)=\sin^{2}(\frac{\varphi}{2}-\frac{\mathrm{g} t}{4\sqrt{\bar{n}}}) \ \ \mathrm{and} \quad \rho^{\cal S}_{12}(t)=\frac{i}{2}\sin(\varphi-\frac{\mathrm{g} t}{2\sqrt{\bar{n}}}) \ \mathrm{for} \ \ |\psi_{\cal S}(t_{0})\rangle=|-(t_{0})\rangle.
\end{eqnarray}
The above expressions for $\rho^{\cal S}_{12}(t)$ basically mean that, for the system initially prepared in one of its pointer states, the offdiagonal element of the reduced density matrix of the system should be a sinusoidal function with frequency $\frac{\mathrm{g}}{2\sqrt{\bar{n}}}$. Also, it must have successive zeros separated by $\Delta t=2\pi\sqrt{\bar{n}}/\mathrm{g}$, i.e.\ half the oscillation period $4\pi\sqrt{\bar{n}}/\mathrm{g}$.
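A quick numerical check of this zero spacing is easy to carry out. The sketch below (an independent check, not part of the paper's numerics; the values $\mathrm{g}=1$, $\bar{n}=50$ and $\varphi=\pi/6$ are arbitrary illustrative choices) locates the zeros of the sinusoidal part of $\rho^{\cal S}_{12}(t)$ from equation (\ref{88}) on a fine grid and confirms that successive zeros sit half a period apart:

```python
import numpy as np

g, nbar, phi = 1.0, 50.0, np.pi / 6          # assumed illustrative parameters

def envelope(t):
    # sinusoidal part of rho12(t) in Eq. (88); the decay factor is omitted
    return np.sin(phi + g * t / (2.0 * np.sqrt(nbar)))

t = np.linspace(0.0, 200.0, 400001)
y = envelope(t)
idx = np.where(y[:-1] * y[1:] < 0)[0]        # grid intervals containing a sign change
# refine each zero by linear interpolation inside its bracketing interval
zeros = t[idx] - y[idx] * (t[idx + 1] - t[idx]) / (y[idx + 1] - y[idx])
spacing = np.diff(zeros)
# successive zeros of the sinusoid are half a period apart: 2*pi*sqrt(nbar)/g
expected = 2.0 * np.pi * np.sqrt(nbar) / g
```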
An examination of $\rho^{\cal S}_{12}(t)$ by plotting its more exact expression, given by equation (\ref{81}), shows good agreement with the above result \emph{only} as long as we have a very large average number of photons in the field. For a smaller average number of photons,
although we still observe the oscillating behavior with the same frequency $\frac{\mathrm{g}}{2\sqrt{\bar{n}}}$, we can clearly observe a decaying envelope which destroys the offdiagonal element of the reduced density matrix at large times (causing decoherence of the state of the system \emph{even} when the system is initially prepared in one of its pointer states). Moreover, this decay becomes more significant as the average number of photons decreases. Hence, we can guess that the difference between the prediction of equation (\ref{88}) and what we expect from the more exact expression of equation (\ref{81}), for a smaller average number of photons, must be due to the fact that in calculating the pointer states of the system we assumed a large average number of photons in the environment (so that we have a sharp distribution for the coherent state of the field and can assume $\sqrt{n+1}-\sqrt{n}\approx\frac{1}{2\sqrt{\bar{n}}}$). In other words, we guess that \emph{the decoherence of the state of the central system when we start from one of the pointer states of the system must be due to having a limited number of photons in the field.}
In what follows our first goal is to make the appropriate corrections in equation (\ref{88}) so that we can theoretically justify the decoherence of the state of the system when starting from one of the pointer states. Following that, we make corrections to the other elements of equation (\ref{85}); and finally, we will use equation (\ref{85}), together with the corrections which we make for having a limited average number of photons, in order to obtain a closed form for $\rho^{\cal S}_{12}(t)$. As we will see, after these corrections our closed form for the offdiagonal element of the reduced density matrix of the system will be in good agreement with the more exact but cumbersome expression of equation (\ref{81}) which we obtained in this section for $\rho^{\cal S}_{12}(t)$.
\textcolor[rgb]{0.00,0.00,0.00}{\subsection{First order corrections due to having a finite average number of photons in the environment}}
In section 3, while obtaining our pointer states of the system and the environment, we assumed having a large average number of photons in the environment; so that we could substitute $\sqrt{n+1}-\sqrt{n}$ by $\frac{1}{2\sqrt{\bar{n}}}$ in our expressions. Now we consider the next order in the Taylor expansion of $\sqrt{n+1}-\sqrt{n}$ about $\bar{n}$, i.e.\ in
\begin{equation}\label{89}
\sum_{n=0}^{\infty}|c_{n}|^{2}\ (\sqrt{n+1}-\sqrt{n})\approx\sum_{n=0}^{\infty}|c_{n}|^{2}\ \left(\frac{1}{2\sqrt{\bar{n}}}-\frac{(n-\bar{n})}{4\bar{n}^{3/2}}+\cdots\right),
\end{equation}
and make the appropriate corrections (due to having a finite average number of photons) in equation (\ref{88}). In fact, by looking at equations (\ref{61}) and (\ref{89}) we notice that \emph{only} at the limit of a large average number of photons, where $\sum_{n=0}^{\infty}|c_{n}|^{2}\ (\sqrt{n+1}-\sqrt{n})\approx\frac{1}{2\sqrt{\bar{n}}}$ is a good approximation and there is no need to consider the next terms in our expansion for $\sqrt{n+1}-\sqrt{n}$\ , the function $G(t)$ will be independent of the states of the environment and pointer states can be realized for the system and the environment, which do not entangle with each other.
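The quality of the leading term in equation (\ref{89}) is easy to probe numerically. The sketch below (an independent check; $\bar{n}=50$ is an arbitrary choice) evaluates $\sum_{n}|c_{n}|^{2}(\sqrt{n+1}-\sqrt{n})$ with the Poisson weights $|c_{n}|^{2}=e^{-\bar{n}}\bar{n}^{n}/n!$ of the coherent state and compares it with $\frac{1}{2\sqrt{\bar{n}}}$; note that the first-order term $(n-\bar{n})/4\bar{n}^{3/2}$ averages to zero since $\langle n\rangle=\bar{n}$:

```python
import numpy as np
from math import lgamma, sqrt

nbar = 50.0                                   # assumed illustrative value
n = np.arange(400)
# Poisson weights |c_n|^2 = e^{-nbar} nbar^n / n! of the coherent state,
# computed in log space to avoid overflow in nbar**n and n!
log_p = -nbar + n * np.log(nbar) - np.array([lgamma(k + 1) for k in n])
p = np.exp(log_p)

exact = float(np.sum(p * (np.sqrt(n + 1) - np.sqrt(n))))
leading = 1.0 / (2.0 * sqrt(nbar))            # leading term of Eq. (89)
```

For $\bar{n}=50$ the exact average already agrees with $\frac{1}{2\sqrt{\bar{n}}}$ to well under a percent, which is the regime in which the pointer-state construction of section 3 is reliable.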
We make corrections on $\rho^{\cal S}_{12}(t)$ of equation (\ref{88}) by using the following substitution in our expressions
\begin{equation}\label{90}
e^{\mp it'/2\sqrt{\bar{n}}}\rightarrow\sum_{n=0}^{\infty}|c_{n}|^{2}\ e^{\mp it'(\sqrt{n+1}-\sqrt{n})} \qquad \mathrm{where} \qquad t'=\mathrm{g} t.
\end{equation}
Using equations (\ref{48}) and (\ref{89})
we have
\begin{eqnarray}\label{91}
\nonumber \sum_{n=0}^{\infty}|c_{n}|^{2}\ e^{-it'(\sqrt{n+1}-\sqrt{n})}\approx\sum_{n=0}^{\infty} \frac{e^{-\bar{n}}\ \bar{n}^{n}}{n!}\ e^{-it'(\frac{1}{2\sqrt{\bar{n}}}-\frac{(n-\bar{n})}{4\bar{n}^{3/2}})}\\ =e^{-\bar{n}}\ e^{-\frac{3it'}{4\sqrt{\bar{n}}}}\times\sum_{n=0}^{\infty}
\frac{(\bar{n}\ e^{\frac{it'}{4\bar{n}^{3/2}}})^{n}}{n!}=e^{-\bar{n}}e^{-\frac{3it'}{4\sqrt{\bar{n}}}}\times
\exp(\bar{n}\ e^{\frac{it'}{4\bar{n}^{3/2}}})\\ \nonumber =\exp(\bar{n}\ [e^{\frac{it'}{4\bar{n}^{3/2}}}-1])\ e^{-\frac{3it'}{4\sqrt{\bar{n}}}}=
\exp(\bar{n}\ e^{\frac{it'}{8\bar{n}^{3/2}}}[e^{\frac{it'}{8\bar{n}^{3/2}}}-e^{\frac{-it'}{8\bar{n}^{3/2}}}])\ e^{-\frac{3it'}{4\sqrt{\bar{n}}}};
\end{eqnarray}
which simplifies as
\begin{equation}\label{92}
\sum_{n=0}^{\infty}|c_{n}|^{2}\ e^{-it'(\sqrt{n+1}-\sqrt{n})}\approx\exp(2i\ \bar{n}\ e^{\frac{it'}{8\bar{n}^{3/2}}}\sin(\frac{t'}{8\bar{n}^{3/2}}))\times e^{-\frac{3it'}{4\sqrt{\bar{n}}}}.
\end{equation}
For an average number of photons large enough and times short enough that $\frac{t'}{\bar{n}^{3/2}}\ll 1$, we can approximate $\sin(\frac{t'}{8\bar{n}^{3/2}})$ by $\frac{t'}{8\bar{n}^{3/2}}$ and $e^{\frac{it'}{8\bar{n}^{3/2}}}$ by $1+\frac{it'}{8\bar{n}^{3/2}}$ in the above equation. In other words, provided $t$ remains small enough that $\frac{t'}{\bar{n}^{3/2}}\ll 1$, we can write
\begin{eqnarray}\label{93}
\nonumber \sum_{n=0}^{\infty}|c_{n}|^{2}\ e^{-it'(\sqrt{n+1}-\sqrt{n})}\approx\exp(2i\ \bar{n}\ (1+\frac{it'}{8\bar{n}^{3/2}})\times(\frac{t'}{8\bar{n}^{3/2}}))\times e^{-\frac{3it'}{4\sqrt{\bar{n}}}} \qquad \mathrm{or} \\
\sum_{n=0}^{\infty}|c_{n}|^{2}\ e^{-it'(\sqrt{n+1}-\sqrt{n})}\approx e^{-\frac{it'}{2\sqrt{\bar{n}}}}\ e^{-t'^{2}/32\bar{n}^{2}}.
\end{eqnarray}
So, to make the appropriate corrections in our expressions we should use the following substitution
\begin{equation}\label{94}
e^{-it'/2\sqrt{\bar{n}}}\rightarrow e^{-it'/2\sqrt{\bar{n}}}\ e^{-t'^{2}/32\bar{n}^{2}}.
\end{equation}
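The substitution (\ref{94}) rests on the chain of approximations in equations (\ref{91})--(\ref{93}), and it can be tested directly. The sketch below (an independent numerical check; $\bar{n}=100$ and $t'=10$ are arbitrary choices satisfying $t'\ll\bar{n}^{3/2}$) compares the exact characteristic sum with the closed form:

```python
import numpy as np
from math import lgamma, sqrt

nbar, tp = 100.0, 10.0                        # assumed values, with tp = t' << nbar^{3/2}
n = np.arange(600)
# Poisson weights of the coherent state, computed in log space
log_p = -nbar + n * np.log(nbar) - np.array([lgamma(k + 1) for k in n])
p = np.exp(log_p)

# left-hand side of Eq. (93): the exact sum over number states
exact = np.sum(p * np.exp(-1j * tp * (np.sqrt(n + 1) - np.sqrt(n))))
# right-hand side of Eq. (93): the closed form behind substitution (94)
approx = np.exp(-1j * tp / (2.0 * sqrt(nbar))) * np.exp(-tp**2 / (32.0 * nbar**2))
```

The two complex numbers agree to a few parts in a thousand in this regime, confirming both the phase $e^{-it'/2\sqrt{\bar{n}}}$ and the Gaussian decay factor $e^{-t'^{2}/32\bar{n}^{2}}$.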
Making the above substitution in the expressions of equation (\ref{88}) for $\rho^{\cal S}_{12}(t)$ we find
\begin{eqnarray}\label{95}
\fl \nonumber \rho^{\cal S}_{12}(t)=-\frac{i}{2}\sin(\varphi+\frac{t'}{2\sqrt{\bar{n}}})\rightarrow -\frac{i}{2}\sin(\varphi+\frac{t'}{2\sqrt{\bar{n}}})\ e^{-t'^{2}/32\bar{n}^{2}} \ \ \mathrm{for} \ \ |\psi_{\cal S}(t_{0})\rangle=|+(t_{0})\rangle \quad \mathrm{and}\\
\fl \rho^{\cal S}_{12}(t)=\frac{i}{2}\sin(\varphi-\frac{t'}{2\sqrt{\bar{n}}})\rightarrow \frac{i}{2}\sin(\varphi-\frac{t'}{2\sqrt{\bar{n}}})\ e^{-t'^{2}/32\bar{n}^{2}} \ \ \mathrm{for} \ \ |\psi_{\cal S}(t_{0})\rangle=|-(t_{0})\rangle.
\end{eqnarray}
One interesting aspect of the evolution of coherences given by the above equation is that for $\varphi=0$ and $\varphi=\pi$ no matter whether the system initially is prepared in the $|+(t_{0})\rangle$ state or the $|-(t_{0})\rangle$ state, the evolution of $\rho^{\cal S}_{12}(t)$ is given by $\rho^{\cal S}_{12}(t)=\mp\frac{i}{2}\sin(\frac{t'}{2\sqrt{\bar{n}}})\ e^{-t'^{2}/32\bar{n}^{2}}$ (with the minus sign for $\varphi=0$ and the plus sign for $\varphi=\pi$). Also, if $\varphi=\frac{\pi}{2}$ or $\varphi=\frac{3\pi}{2}$, for both initial pointer states the evolution of $|\rho^{\cal S}_{12}(t)|$ is given by $|\rho^{\cal S}_{12}(t)|=\frac{1}{2}\cos(\frac{t'}{2\sqrt{\bar{n}}})\ e^{-t'^{2}/32\bar{n}^{2}}$. In general, for $\varphi=n\pi/2$ the evolution of $|\rho^{\cal S}_{12}(t)|$ will be the same for both initial pointer states. However, as we will see in the following paragraphs, this does not mean that for $\varphi=n\pi/2$ the evolution of $|\rho^{\cal S}_{12}(t)|$ becomes independent of the initial state of the system.
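These symmetry statements follow directly from equation (\ref{95}) and can be verified numerically. The sketch below (a standalone check with an arbitrary $\bar{n}=50$; `s` encodes which pointer state the system starts in) confirms that at $\varphi=0$ both initial pointer states give the \emph{same} $\rho^{\cal S}_{12}(t)$, while at $\varphi=\pi/2$ only $|\rho^{\cal S}_{12}(t)|$ coincides:

```python
import numpy as np

nbar = 50.0                                    # assumed illustrative value
tp = np.linspace(0.0, 30.0, 301)               # tp = t' = g*t
decay = np.exp(-tp**2 / (32.0 * nbar**2))

def rho12(phi, s):
    # Eq. (95): s = +1 for the |+(t0)> initial state, s = -1 for |-(t0)>
    return -s * 0.5j * np.sin(phi + s * tp / (2.0 * np.sqrt(nbar))) * decay
```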
In Figure 2 we have used equation (\ref{95}) to plot the evolution of $|\rho^{\cal S}_{12}(t)|$ for the case that the system initially is prepared in the $|+(t_{0})\rangle$ state. We have also used the more exact expression, given by equation (\ref{81}), to plot the same function. As we see, the correction (due to having a finite average number of photons in the environment) that we made on our initial expression for $\rho^{\cal S}_{12}(t)$ nicely describes the decaying envelope in the evolution of coherences of the reduced system $\cal S$, which is given by the factor $e^{-t'^{2}/32\bar{n}^{2}}$.
\textcolor[rgb]{0.00,0.00,0.00}{$\rho^{\cal S}_{12}(t)$ has also been calculated for the simpler JC-model of quantum optics by Gea-Banacloche \cite{Gea-Banacloche2}. Gea-Banacloche showed that for the case that the two-level system is initially prepared in one of its pointer states the evolution of $|\rho^{\cal S}_{12}(t)|$ is simply given by $|\rho^{\cal S}_{12}(t)|\approx \frac{1}{2}\ e^{-t'^{2}/32\bar{n}^{2}}$ (which is valid for $t'\ll \bar{n}^{3/2}$); as opposed to what we have for our model, given by equation (\ref{95})}.
\textcolor[rgb]{0.00,0.00,0.00}{As we observe, when the system is initially prepared in one of its pointer states the evolution of coherences of our model is characterized by an oscillating envelope, given by $\frac{1}{2}\ |\sin(\varphi+\frac{t'}{2\sqrt{\bar{n}}})|$; while for the simpler JC-model of quantum optics we never observe such ``decayo-sinusoidal" behavior in the evolution of coherences.}
\begin{figure}
\caption{Evolution of $|\rho^{\cal S}_{12}(t)|$ for the case that the system initially is prepared in the $|+(t_{0})\rangle$ state, plotted using both equation (\ref{95}) and the more exact expression of equation (\ref{81}).}
\label{F2}
\end{figure}
Coming back to equation (\ref{85}) for the general evolution of coherences of the reduced system, we also need to calculate the expressions for $\langle a|+(t)\rangle\langle -(t)|b\rangle$ and $\langle a|-(t)\rangle\langle +(t)|b\rangle$, as well as the overlap between the pointer states of the environment $\langle\Phi_{-}(t)|\Phi_{+}(t)\rangle$, for our generalized spin-boson model.
Using equation (\ref{66}), we can evaluate $\langle a|+(t)\rangle\langle -(t)|b\rangle$ and $\langle a|-(t)\rangle\langle +(t)|b\rangle$ as follows
\begin{eqnarray}\label{96}
\nonumber \langle a|+(t)\rangle\langle -(t)|b\rangle=-i\cos(\frac{\varphi}{2}+\frac{t'}{4\sqrt{\bar{n}}})\cos(\frac{\varphi}{2}-\frac{t'}{4\sqrt{\bar{n}}}) \qquad \mathrm{and} \\
\langle a|-(t)\rangle\langle +(t)|b\rangle= i\sin(\frac{\varphi}{2}+\frac{t'}{4\sqrt{\bar{n}}})\sin(\frac{\varphi}{2}-\frac{t'}{4\sqrt{\bar{n}}}).
\end{eqnarray}
However, here also we should make the appropriate correction due to having a finite average number of photons in the environment. Such a correction can again be made by using equation (\ref{94}), to obtain
\begin{eqnarray}\label{97}
\nonumber \langle a|+(t)\rangle\langle -(t)|b\rangle=-i\cos(\frac{\varphi}{2}+\frac{t'}{4\sqrt{\bar{n}}})\cos(\frac{\varphi}{2}-\frac{t'}{4\sqrt{\bar{n}}})\ e^{-t'^{2}/64\bar{n}^{2}} \qquad \mathrm{and} \\
\langle a|-(t)\rangle\langle +(t)|b\rangle= i\sin(\frac{\varphi}{2}+\frac{t'}{4\sqrt{\bar{n}}})\sin(\frac{\varphi}{2}-\frac{t'}{4\sqrt{\bar{n}}})\ e^{-t'^{2}/64\bar{n}^{2}}.
\end{eqnarray}
Finally, we use the expressions for $|\Phi_{\pm}(t)\rangle$, given by equations (\ref{70}) and (\ref{71}), in order to calculate the overlap between the pointer states of the environment $\langle\Phi_{-}(t)|\Phi_{+}(t)\rangle$. We have
\begin{equation}\label{98}
\fl \qquad |\Phi_{\pm}(t)\rangle=\sum_{n=0}^{\infty}c_{n}\ e^{\mp\frac{i\mathrm{g} t}{2}(\sqrt{n+1}+\sqrt{n})}\ |n\rangle=\sum_{n=0}^{\infty}\frac{e^{-\bar{n}/2}\ \bar{n}^{n/2}\ e^{-in\varphi}}{\sqrt{n!}}\ e^{\mp\frac{i\mathrm{g} t}{2}(\sqrt{n+1}+\sqrt{n})}\ |n\rangle.
\end{equation}
As we discussed, for the coherent field with a large average number of photons we can use
\begin{equation}\label{99}
\sqrt{n}\approx\sqrt{\bar{n}}+\frac{(n-\bar{n})}{2\sqrt{\bar{n}}}-\frac{(n-\bar{n})^{2}}{8\bar{n}^{3/2}}.
\end{equation}
So, using the above relation and $t'=\mathrm{g} t$, equation (\ref{98}) becomes
\begin{eqnarray}\label{100}
\fl \nonumber |\Phi_{\pm}(t)\rangle\approx\sum_{n=0}^{\infty}\frac{e^{-\bar{n}/2}\ \bar{n}^{n/2}\ e^{-in\varphi}}{\sqrt{n!}}\ e^{\mp\frac{it'}{2}(\sqrt{\bar{n}}+\frac{(n+1-\bar{n})}{2\sqrt{\bar{n}}}-\frac{(n+1-\bar{n})^{2}}{8\bar{n}^{3/2}}+\sqrt{\bar{n}}
+\frac{(n-\bar{n})}{2\sqrt{\bar{n}}}-\frac{(n-\bar{n})^{2}}{8\bar{n}^{3/2}})}\ |n\rangle \\
=e^{-\bar{n}/2}\exp(\mp\frac{it'}{2}[\frac{3}{4}\sqrt{\bar{n}}+\frac{3}{4\sqrt{\bar{n}}}-\frac{1}{8\bar{n}^{3/2}}])\times\sum_{n=0}^{\infty}\frac{\bar{n}^{n/2}\ e^{-in\varphi}}{\sqrt{n!}}\\ \nonumber \times \exp(\mp\frac{it'}{2}\{(\frac{n}{\sqrt{\bar{n}}})\times[\frac{3}{2}-\frac{n}{4\bar{n}}]-\frac{n}{4\bar{n}^{3/2}}\})\ |n\rangle.
\end{eqnarray}
For the coherent field and within the approximation that we are using, as we discussed, $\sum_{n} |c_{n}|^{2}\ (n/\bar{n})\approx\sum_{n} |c_{n}|^{2}=1$. Therefore, in the limit of $\bar{n}\rightarrow\infty$ we can replace the expression $[\frac{3}{2}-\frac{n}{4\bar{n}}]$ of the above equation by $\frac{5}{4}$. So, using equation (\ref{48}) we can simplify equation (\ref{100}) to
\begin{equation}\label{101}
|\Phi_{\pm}(t)\rangle\approx\exp(\mp\frac{it'\sqrt{\bar{n}}}{2}[\frac{3}{4}+\frac{3}{4\bar{n}}-\frac{1}{8\bar{n}^{2}}])\times|\ \nu\ \exp(\mp\frac{it'}{2\sqrt{\bar{n}}}[\frac{5}{4}-\frac{1}{4\bar{n}}])\ \rangle.
\end{equation}
So now
\begin{equation}\label{102}
\fl \langle\Phi_{-}(t)|\Phi_{+}(t)\rangle\approx \exp(-it'\sqrt{\bar{n}}\ [\frac{3}{4}+\frac{3}{4\bar{n}}-\frac{1}{8\bar{n}^{2}}])
\times \langle\ \nu\ e^{\frac{it'}{2\sqrt{\bar{n}}}[\frac{5}{4}-\frac{1}{4\bar{n}}]}|\ \nu\ e^{-\frac{it'}{2\sqrt{\bar{n}}}[\frac{5}{4}-\frac{1}{4\bar{n}}]}\ \rangle.
\end{equation}
Using the following formula from quantum optics \cite{Scully} for the scalar product of the coherent states
\begin{equation}\label{103}
\langle \nu^{\ \prime}|\nu\rangle=\exp[-(|\nu^{\ \prime}|^{2}+
|\nu|^{2})/2+\nu^{\ \prime\ast}\ \nu],
\end{equation}
equation (\ref{102}) becomes
\begin{equation}\label{104}
\fl \langle\Phi_{-}(t)|\Phi_{+}(t)\rangle\approx\exp(-it'\sqrt{\bar{n}}\ [\frac{3}{4}+\frac{3}{4\bar{n}}-\frac{1}{8\bar{n}^{2}}])\times\exp(\ \bar{n}\
\{e^{\frac{-it'}{\sqrt{\bar{n}}}(\frac{5}{4}-\frac{1}{4\bar{n}})}-1\}).
\end{equation}
Therefore,
\begin{equation}\label{105}
\fl |\langle\Phi_{-}(t)|\Phi_{+}(t)\rangle|^{2}\approx\exp(-4\bar{n}\sin^{2}([\frac{t'}{2\sqrt{\bar{n}}}][\frac{5}{4}-\frac{1}{4\bar{n}}])).
\end{equation}
For an average number of photons large enough and times short enough for which $\frac{t'}{\sqrt{\bar{n}}}\ll 1$ the above expression for the overlap between the pointer states of the environment reduces to
\begin{equation}\label{106}
|\langle\Phi_{-}(t)|\Phi_{+}(t)\rangle|^{2}\approx \exp(-t'^{2}\ [\frac{5}{4}-\frac{1}{4\bar{n}}]^{2})\approx e^{-\frac{25}{16}t'^{2}}.
\end{equation}
Now, using equations (\ref{85}), (\ref{95}), (\ref{97}) and (\ref{104}) we can obtain the following closed form for $\rho^{\cal S}_{12}(t)$, which is valid for $t'\ll \bar{n}^{3/2}$
\begin{eqnarray}\label{107}
\fl \nonumber \rho^{\cal S}_{12}(t)=|\alpha'|^{2}\{\frac{-i}{2}\sin(\varphi+\frac{t'}{2\sqrt{\bar{n}}})\ e^{-t'^{2}/32\bar{n}^{2}}\}+|\beta'|^{2}\{\frac{i}{2}\sin(\varphi-\frac{t'}{2\sqrt{\bar{n}}})\ e^{-t'^{2}/32\bar{n}^{2}}\}\\ \nonumber +\alpha'\beta'^{\ast}\{-i\cos(\frac{\varphi}{2}+\frac{t'}{4\sqrt{\bar{n}}})\cos(\frac{\varphi}{2}-\frac{t'}{4\sqrt{\bar{n}}})\ e^{-t'^{2}/64\bar{n}^{2}}\}\times \\ \exp(-it'\sqrt{\bar{n}}\ [\frac{3}{4}+\frac{3}{4\bar{n}}-\frac{1}{8\bar{n}^{2}}]) \times\exp(\ \bar{n}\
\{e^{\frac{-it'}{\sqrt{\bar{n}}}(\frac{5}{4}-\frac{1}{4\bar{n}})}-1\})
\\ \nonumber +\beta'\alpha'^{\ast}\{i\sin(\frac{\varphi}{2}+\frac{t'}{4\sqrt{\bar{n}}})\sin(\frac{\varphi}{2}-\frac{t'}{4\sqrt{\bar{n}}})\ e^{-t'^{2}/64\bar{n}^{2}}\}\times\\ \nonumber \exp(it'\sqrt{\bar{n}}\ [\frac{3}{4}+\frac{3}{4\bar{n}}-\frac{1}{8\bar{n}^{2}}]) \times\exp(\ \bar{n}\
\{e^{\frac{it'}{\sqrt{\bar{n}}}(\frac{5}{4}-\frac{1}{4\bar{n}})}-1\}).
\end{eqnarray}
In Figure 3 we have used equation (\ref{107}) to plot the short time evolution of $|\rho^{\cal S}_{12}(t)|$ for a case that the system initially is \emph{not} prepared in one of its pointer states. We have also used the more exact expression, given by equation (\ref{81}), to plot the same function. As we see from this figure, equation (\ref{107}) serves as a good approximation in closed form for the more exact relation, as long as we are not considering long times \footnote{\textcolor[rgb]{0.00,0.00,0.00}{For longer times, the approximate equation (\ref{107}) shows internal oscillations in the evolution of $|\rho^{\cal S}_{12}(t)|$ which are misplaced compared to those of the plot which we obtain from the more exact expression of equation (\ref{81}). However, the envelopes still coincide with each other with great precision.}}.
\begin{figure}
\caption{Short time evolution of $|\rho^{\cal S}_{12}(t)|$ for a case that the system initially is \emph{not} prepared in one of its pointer states, plotted using both equation (\ref{107}) and the more exact expression of equation (\ref{81}).}
\label{F3}
\end{figure}
As we can see from equations (\ref{106}) and (\ref{107}), at sufficiently short times $t'\ll \sqrt{\bar{n}}$ the decay of the first two terms is characterized by the decaying factor $e^{-t'^{2}/32\bar{n}^{2}}$, while the decay of the other two terms is characterized by the much faster-decaying overlap between the pointer states of the environment $\langle\Phi_{-}(t)|\Phi_{+}(t)\rangle$, which is proportional to the factor $e^{-\frac{25}{32}t'^{2}}$. This clearly shows why we should generally expect a much slower decoherence of the state of the system when the system is initially prepared in one of its pointer states, compared to the case that the system initially is \emph{not} in any of its pointer states. Also, equation (\ref{107}) shows that for $\varphi=n\pi/2$, if the system initially is \emph{not} prepared in one of its pointer states, the evolution of coherences of the central system is not independent of the initial state of the system (unlike the case that the system is initially prepared in one of its pointer states). However, since the last two terms of equation (\ref{107}) vanish much faster than the first two terms, at larger times and for $\varphi=n\pi/2$ we expect the evolution of coherences of the central system to be independent of the initial state of the system.
In this section we calculated the offdiagonal element of the reduced density matrix of the system in the basis of eigenstates of the $\hat{\sigma}_{z}$ operator. However, we could equally study the decoherence of the state of the central system in the basis of the $|\pm(t_{0})\rangle$ states.
From equation (\ref{66}) it is clear that for $t'\ll \sqrt{\bar{n}}$ the pointer states of the system are almost time-independent, and they can be approximated by $|\pm(t_{0})\rangle$. So, in this limit the evolution of an arbitrary initial state of the central system $(\alpha'\ |+(t_{0})\rangle+\beta'\ |-(t_{0})\rangle)$ in contact with an initial coherent field $|\nu\rangle$ is approximately given by $|\psi^{\mathrm{tot}}(t)\rangle=\alpha'\ |+(t_{0})\rangle|\Phi_{+}(t)\rangle+\beta'\ |-(t_{0})\rangle|\Phi_{-}(t)\rangle$. Therefore, we would have
\begin{eqnarray}\label{108}
\fl \nonumber \hat{\rho}_{\cal S}(t)=|\alpha'|^{2}\ |+(t_{0})\rangle\langle +(t_{0})|+|\beta'|^{2}\ |-(t_{0})\rangle\langle -(t_{0})|\\ \fl \quad +\alpha'\beta'^{\ast}\ |+(t_{0})\rangle\langle -(t_{0})|\times\langle\Phi_{-}(t)|\Phi_{+}(t)\rangle+\beta'\alpha'^{\ast}\ |-(t_{0})\rangle\langle +(t_{0})|\times\langle\Phi_{+}(t)|\Phi_{-}(t)\rangle.
\end{eqnarray}
For this short range of times the evolution of the pointer states of the environment can be approximated by equation (\ref{101}). So, in the $|\pm(t_{0})\rangle$ basis and for $t'\ll \sqrt{\bar{n}}$ we must have
\begin{eqnarray}\label{109}
\fl \nonumber \rho^{\cal S}_{12}(t)=\alpha'\beta'^{\ast}\langle\Phi_{-}(t)|\Phi_{+}(t)\rangle \\ \approx\alpha'\beta'^{\ast}\exp(-it'\sqrt{\bar{n}}\ [\frac{3}{4}+\frac{3}{4\bar{n}}-\frac{1}{8\bar{n}^{2}}]) \times\exp(\ \bar{n}\
\{e^{\frac{-it'}{\sqrt{\bar{n}}}(\frac{5}{4}-\frac{1}{4\bar{n}})}-1\}).
\end{eqnarray}
Finally, using equation (\ref{106}) we find that for $t'\ll \sqrt{\bar{n}}$
\begin{equation}\label{110}
|\rho^{\cal S}_{12}(t)|^{2}\approx|\alpha'\beta'|^{2}\exp(-t'^{2}\ [\frac{5}{4}-\frac{1}{4\bar{n}}]^{2})\approx|\alpha'\beta'|^{2}\ e^{-\frac{25}{16}t'^{2}}.
\end{equation}
Hence, in the basis of the $|\pm(t_{0})\rangle$ states the short-time decoherence of the state of the central system is characterized by the fast-decaying factor
$e^{-\frac{25}{32}t'^{2}}$ when the system initially is \emph{not} prepared in one of its pointer states; while in this basis the pointer states of the system almost do not decohere within short times.\\
\section{Summary and conclusions}
Considering a single-mode quantized field in exact resonance with the tunneling matrix element of the system, we obtained the time-evolution operator for our model. Using this time-evolution operator, we then calculated the pointer states of the system and the environment for the case that the environment is initially prepared in a coherent state with a large average number of photons. Most importantly, we observed that for our spin-boson model represented by the Hamiltonian of equation (\ref{1}) the pointer states of the system turn out to be time-dependent; as opposed to the pointer states of a simplified spin-boson model (with $\hat{H}_{\cal S}$ proportional to $\hat{\sigma}_{z}$ rather than $\hat{\sigma}_{x}$) for which $[\hat{H}_{\cal S},\hat{H}_{\rm int}]=0$. The simplified model has often been used in the context of quantum information and quantum computation to gain some insight regarding the decoherence of a single qubit \cite{Ekert,Unruh,Reina}. However, in most practical situations different noncommuting perturbations may exist in the total Hamiltonian of a realistic system-environment model, which would result in time-dependent pointer states for the system \cite{paper1}. Indeed, the authors believe that the fact that the pointer states of a system generally are time-dependent and may evolve with time has not been seriously acknowledged in the context of quantum computation and quantum information. In particular, in the context of quantum error correction \cite{Laflamme,Nielsen} it is often assumed that the premeasurement by the environment does not change the initial pointer states of the system. In other words, quantum ``nondemolition" premeasurement by the environment is often assumed \cite{Laflamme,Nielsen}; as is also assumed in von Neumann's scheme of measurement \cite{Neuman,Schlosshauer1}.
Also, in the context of Decoherence-Free-Subspaces (DFS) theory the models which are often studied either contain a self-Hamiltonian for the system which commutes with the interaction between the system and the environment, or it is assumed that we are in the \emph{quantum measurement limit} \footnote{In the \emph{quantum measurement limit} the interaction between the system and the environment is so strong as to dominate the evolution of the system, $\hat{H}\approx \hat{H}_{\rm int}$. In the \emph{quantum limit of decoherence}, on the other hand, the self-Hamiltonian of the system dominates the interaction between the system and the environment as well as the self-Hamiltonian of the environment, $\hat{H}\approx \hat{H}_{\cal S}$.} or in the \emph{quantum limit of decoherence} \cite{Ekert,90,91,93}. However, all of these assumptions are in fact substantial simplifications of the problem since, as we discussed in paper $\sf I$, they completely exclude the possibility of having pointer states for the system which may depend on time \cite{paper1}.
Another interesting point in obtaining the pointer states of the system and the environment for our model was the realization that \emph{only} in the limit of a large average number of photons can we have a set of (time-dependent) pointer states for the system. In other words, unless we have a sufficiently large average number of photons, which makes for a sharp distribution function
for the state of the electromagnetic field, there is always some degree of entanglement between the states of the system and the environment (see equations (\ref{61}) and (\ref{89})) and the pointer states of measurement cannot be realized at all.
We also showed that at $t=(2n+1)\pi\sqrt{\bar{n}}/\mathrm{g}$ (with $n=0,1,2,\ldots$) the $|\pm(t)\rangle$ pointer states of the system coincide with each other; hence, whatever the initial state of the system, at these specific times the states of the system and the environment are not entangled with each other and the system can be represented by a well-defined state of its own. Using our pointer states, we also obtained a closed form for the offdiagonal element of the reduced density matrix of the system and studied the decoherence of the central system in our model. We showed that for the case that the system initially is prepared in one of its pointer states, the offdiagonal element of the reduced density matrix of the system will be \textcolor[rgb]{0.00,0.00,0.00}{a \emph{sinusoidal function} with a slowly decaying envelope, characterized by a decay time proportional to $\bar{n}$ (through a decoherence factor calculated as $e^{-\mathrm{g}^{2} t^{2}/32\bar{n}^{2}}$)}; while for the case that the system initially is not prepared in one of its pointer states, it will experience fast decoherence within a time of order $1/\mathrm{g}$. The ``decayo-sinusoidal" evolution of coherences (figure 2) which we observe in our model for the case that the system initially is prepared in one of its pointer states is a new form of decoherence which cannot be observed in the somewhat similar Jaynes-Cummings model of quantum optics \cite{Gea-Banacloche2}.
It will be interesting to generalize this study to the case that the environment is not merely represented by a single-mode bosonic field; and consider some classes of spectral densities for the environment. Also, for the spin-boson model represented by the Hamiltonian of equation (\ref{1}) at least in principle one should be able to obtain the pointer states of the system and the environment in some nonresonance regimes and for the single-mode quantized field.
To further demonstrate the generality and usefulness of our method of obtaining pointer states, in another article \cite{paper4} we will obtain the time-dependent pointer states of the system and the environment for the quantized atom-field model and in some nonresonance regimes.
\appendix
\section*{Appendix A}
\renewcommand{A-\arabic{equation}}{A-\arabic{equation}}
\setcounter{equation}{0}
In this appendix we introduce a measure for the degree of entanglement between the states of the system and those of the environment, and by calculating this measure for the initial pointer states of our model (which are given by equation (\ref{63})) we show that our result for the pointer states of the system and the environment is valid over a length of time which is proportional to $\bar{n}$, the average number of photons in the field; as we will see, only up to this range of times can our pointer states of the system and the environment stay separated without considerable entanglement with the states of the other subsystem.
For the global state of the system and the environment, given by equation (\ref{310}), we have
\begin{eqnarray}\label{A-1}
\fl \nonumber |\psi^{\rm tot}(t)\rangle=|\textbf{A}(t)\rangle\ |a\rangle+|\textbf{B}(t)\rangle\ |b\rangle \\ =|\textbf{A}^{\prime}(t)\rangle\ [G(t)|a\rangle]+|\textbf{B}(t)\rangle\ |b\rangle;\ \ \mathrm{where}\ \ |\textbf{A}^{\prime}(t)\rangle=\frac{|\textbf{A}(t)\rangle}{G(t)}.
\end{eqnarray}
For our pointer states (given by $|\pm(t)\rangle=\mathcal{N}_{\pm}\ \{G_{\pm}(t)|a\rangle+|b\rangle\}$) we can define a degree of entanglement through the following relation
\begin{equation}\label{A-2}
q(t)=\frac{\langle\textbf{A}(t)|G_{\pm}(t)\textbf{B}(t)\rangle}{|\textbf{A}(t)|^{2}};
\end{equation}
which also is equal to
\begin{equation}\label{A-3}
q(t)=\frac{\langle\textbf{A}^{\prime}(t)|\textbf{B}(t)\rangle}{|\textbf{A}^{\prime}(t)|^{2}}.
\end{equation}
The above function is essentially the overlap between the vectors $|\textbf{A}^{\prime}(t)\rangle$ and $|\textbf{B}(t)\rangle$ of equation (\ref{A-1}), \emph{normalized} to unity, since for our pointer states of the system we have $|\textbf{A}(t)\rangle=G_{\pm}(t)|\textbf{B}(t)\rangle$. From equation (\ref{A-1}) it is clear that for perfect pointer states, where there is no entanglement between the states of the system and the environment, $q(t)$ must remain equal to unity throughout the evolution of the system and the environment: $|\textbf{A}^{\prime}(t)\rangle$ and $|\textbf{B}(t)\rangle$ must coincide perfectly, in which case the states of the system and the environment in equation (\ref{A-1}) do not entangle with each other, and we have pointer states for the system as given by equation (11). Only in this case do the states of the system and the environment in equation (\ref{A-1}) always stay separated, so that each of the two subsystems can be assigned a well-defined state of its own.
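As a quick sanity check of this definition, the following sketch (with illustrative random vectors, not data from our model) verifies that $q(t)=1$ whenever $|\textbf{A}(t)\rangle=G(t)|\textbf{B}(t)\rangle$, i.e.\ whenever the global state factorises into a product:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy check of the entanglement measure q of equation (A-3): for a pointer
# state the environment vectors satisfy |A> = G |B>, so |A'> = |A>/G
# coincides with |B> and q must equal unity.
B = rng.normal(size=4) + 1j * rng.normal(size=4)     # environment vector |B(t)>
G = 0.7 - 0.3j                                       # pointer-state amplitude G(t)
A = G * B                                            # |A(t)> = G(t) |B(t)>

A_prime = A / G
q = np.vdot(A_prime, B) / np.vdot(A_prime, A_prime)  # <A'|B> / |A'|^2
print(abs(q))  # 1.0 up to rounding: no system-environment entanglement
```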
Our goal is to calculate this measure of entanglement $q(t)$ for the pointer states of the system and the environment obtained for our model in this paper, and to study its evolution in time. For the global state of the system and the environment, given by equation (\ref{A-1}), one can calculate the reduced density matrix of the system as
\begin{equation}\label{A-5}
\fl \hat{\rho}^{\cal S}(t)=|a\rangle\langle a|\langle\textbf{A}(t)|\textbf{A}(t)\rangle+|b\rangle\langle b|\langle\textbf{B}(t)|\textbf{B}(t)\rangle+|a\rangle\langle b|\langle\textbf{B}(t)|\textbf{A}(t)\rangle+|b\rangle\langle a|\langle\textbf{A}(t)|\textbf{B}(t)\rangle.
\end{equation}
Therefore,
\begin{equation}\label{A-6}
\rho^{\cal S}_{12}(t)=\langle\textbf{B}(t)|\textbf{A}(t)\rangle;
\end{equation}
where in the above equation $\rho^{\cal S}_{12}(t)$ is the off-diagonal element of the reduced density matrix of the two-level system in the $|a\rangle$, $|b\rangle$ basis.
From equation (\ref{A-6}) we can easily see that if the system is initially prepared in one of its initial pointer states, i.e.\ if $|\psi^{\cal S}(t_{0})\rangle=|\pm(t_{0})\rangle$, then we have:
\begin{equation}\label{A-7}
\fl \langle\textbf{A}(t)|G_{\pm}(t)\textbf{B}(t)\rangle=G_{\pm}^{\ast}(t)\langle\textbf{B}(t)|\textbf{A}(t)\rangle=G_{\pm}^{\ast}(t)\rho^{\cal S}_{12}(t)
\end{equation}
Therefore,
\begin{equation}\label{A-8}
q(t)=\frac{\langle\textbf{A}(t)|G_{\pm}(t)\textbf{B}(t)\rangle}{|\textbf{A}(t)|^{2}}=\frac{G_{\pm}^{\ast}(t)\rho^{\cal S}_{12}(t)}{|\textbf{A}(t)|^{2}};\quad \mathrm{or}
\end{equation}
\begin{equation}\label{A-8.5}
|q(t)|=\frac{|\rho^{\cal S}_{12}(t)|}{|\textbf{A}(t)|\times|\textbf{B}(t)|}=\frac{|\rho^{\cal S}_{12}(t)|}{\sqrt{\rho^{\cal S}_{11}(t)}\times\sqrt{1-\rho^{\cal S}_{11}(t)}}.
\end{equation}
In evaluating the above equations we must keep in mind that $\rho^{\cal S}_{11}(t)$ and $\rho^{\cal S}_{12}(t)$ must be calculated in the $|a\rangle$, $|b\rangle$ basis, and for the case that the system is initially prepared in one of its initial pointer states, given by equation (\ref{63}).
For our spin-boson model, and for example for the case that the system is initially prepared in the $|+(t_{0})\rangle$ state, from equations (\ref{88}), (\ref{94}) and (\ref{95}) we had
\begin{equation}\label{A-10}
\fl \rho^{\cal S}_{11}(t)=\cos^{2}(\frac{\varphi}{2}+\frac{t'}{4\sqrt{\bar{n}}})\ e^{-t'^{2}/32\bar{n}^{2}} \ \ \mathrm{and} \ \ |\rho^{\cal S}_{12}(t)|=\frac{1}{2}|\sin(\varphi+\frac{t'}{2\sqrt{\bar{n}}})|\ e^{-t'^{2}/32\bar{n}^{2}}.
\end{equation}
Therefore,
\begin{equation}\label{A-11}
|q(t)|=\frac{|\rho^{\cal S}_{12}(t)|}{\sqrt{\rho^{\cal S}_{11}(t)}\times\sqrt{1-\rho^{\cal S}_{11}(t)}}\simeq1\quad \mathrm{if\ and\ only\ if}\quad t'\ll \bar{n}.
\end{equation}
One can easily verify that when the system is initially prepared in the $|-(t_{0})\rangle$ state we obtain the same result for the degree of entanglement between the states of the system and the environment. These results indicate that our result for the pointer states of the system and the environment is valid over a length of time proportional to $\bar{n}$, the average number of photons in the field: within times of order $\bar{n}/\mathrm{g}$ the degree of entanglement calculated for our pointer states stays close to unity, and our calculated pointer states remain immune to entanglement.
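Equations (\ref{A-10}) and (\ref{A-11}) can also be checked numerically; the following sketch (with illustrative values for $\bar{n}$ and $\varphi$, not taken from the paper) confirms that $|q(t)|$ stays close to unity for scaled times $t'\ll\bar{n}$ and drops once $t'$ grows well beyond $\bar{n}$:

```python
import numpy as np

# Numerical check of equations (A-10) and (A-11): with the system prepared
# in |+(t_0)>, |q(t)| stays close to unity only while t' << n_bar.
# n_bar and phi below are illustrative values, not taken from the paper.
n_bar = 100.0
phi = 1.0

def q_abs(tp):
    env = np.exp(-tp**2 / (32.0 * n_bar**2))                   # decoherence factor
    rho11 = np.cos(phi / 2 + tp / (4 * np.sqrt(n_bar)))**2 * env
    rho12 = 0.5 * abs(np.sin(phi + tp / (2 * np.sqrt(n_bar)))) * env
    return rho12 / (np.sqrt(rho11) * np.sqrt(1.0 - rho11))

print(q_abs(5.0))     # t' << n_bar : very close to 1
print(q_abs(1000.0))  # t' >> n_bar : the pointer states have entangled
```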
\section*{References}
\end{document} |
\begin{document}
\begin{abstract}
Working in the framework of $(\mon{T},\V)$-categories, for a symmetric monoidal closed category $\V$ and a (not necessarily cartesian) monad $\mon{T}$, we present a common account of the study of ordered compact Hausdorff spaces and stably compact spaces on one side and monoidal categories and representable multicategories on the other. In this setting we introduce the notion of dual for $(\mon{T},\V)$-categories.
\hspace*{-\parindent}{\em Mathematics Subject Classification}: 18C20, 18D15, 18A05, 18B30, 18B35.
\noindent {\em Key words}: monad, Kock-Z\"oberlein monad, multicategory, topological space, $(\mon{T},\V)$-category.
\end{abstract}
\title{Representable $(\mT, \V)$-categories}
\section{Introduction}
The principal objective of this paper is to present a common account of the study of ordered compact Hausdorff spaces and stably compact spaces on one side and monoidal categories and representable multicategories on the other. Both theories have similar features but were developed independently.
On the topological side, the starting point is the work of Stone on the representation of Boolean algebras \cite{Sto36} and distributive lattices \cite{Sto38}. In the latter paper, Stone proves that (in modern language) the category of distributive lattices and homomorphisms is dually equivalent to the category of spectral topological spaces and spectral maps. Here a topological space is spectral whenever it is sober and its compact open subsets are closed under finite intersections and form a basis for the topology; a continuous map is called spectral whenever the inverse image of every compact open subset is compact. Later Hochster \cite{Hoc69} showed that spectral spaces are, up to homeomorphism, the prime spectra of commutative rings with unit, and in the same paper he also introduced a notion of dual spectral space. A different perspective on duality theory for distributive lattices was given by Priestley in \cite{Pri70}: the category of distributive lattices and homomorphisms is also dually equivalent to the category of certain ordered compact Hausdorff spaces (introduced by Nachbin in \cite{Nac50}) and continuous monotone maps. In particular, this full subcategory of the category of ordered compact Hausdorff spaces is equivalent to the category of spectral spaces. In fact, this equivalence generalises to all ordered compact Hausdorff spaces: the category $\catfont{OrdCompHaus}$ of ordered compact Hausdorff spaces and continuous monotone maps is equivalent to the category $\catfont{StablyComp}$ of stably compact spaces and spectral maps (see \cite{GHK+80}). Furthermore, as shown in \cite{Sim82} (see also \cite{EF99}), stably compact spaces can be recognised among all topological spaces by a universal property; namely, as the algebras for a Kock-Z\"oberlein monad (or lax idempotent monad, or simply KZ; see \cite{Ko}) on $\catfont{Top}$.
Finally, Flagg \cite{Fla97a} proved that $\catfont{OrdCompHaus}$ is also monadic over ordered sets.
Independently, a very similar scenario was developed by Hermida in \cite{Her00,Her01} in the context of higher-dimensional category theory, now with monoidal categories and multicategories in lieu of ordered compact Hausdorff spaces and topological spaces. More specifically, in \cite{Her00} he introduced the notion of representable multicategory and constructed a 2-equivalence between the 2-category of representable multicategories and the 2-category of monoidal categories; that is, representable multicategories can be seen as a higher-dimensional counterpart of stably compact topological spaces. In more detail, we have the following analogies:
\begin{longtable}{rl}
topological space & multicategory,\\
ordered compact Hausdorff space & monoidal category,\\
stably compact space & representable multicategory;
\end{longtable}
\noindent and there are KZ-monadic 2-adjunctions
\begin{align*}
\catfont{OrdCompHaus}\adjunct{}{}\Top && \catfont{MonCat}\adjunct{}{}\catfont{MultiCat},
\intertext{which restrict to 2-equivalences}
\catfont{OrdCompHaus}\simeq\catfont{StablyComp} && \catfont{MonCat}\simeq\catfont{RepMultiCat}.
\end{align*}
To bring both theories under one roof, we consider here the setting used in \cite{CT03} to introduce $(\mon{T},\V)$-categories; that is, a symmetric monoidal closed category $\V$ together with a (not necessarily cartesian) monad $\mon{T}$ on $\Set$ laxly extended to the bicategory $\Rels{\V}$ of $\V$-relations. After recalling the notions of $(\mon{T},\V)$-category and $(\mon{T},\V)$-functor, we proceed by showing that the above-mentioned results hold in this setting: the $\Set$-monad $\mon{T}$ extends naturally to $\Cats{\V}$, and its Eilenberg--Moore category admits an adjunction
\[
(\Cats{\V})^\mon{T}\adjunct{}{}\Cats{(\mon{T},\V)},
\]
so that the induced monad is of Kock-Z\"oberlein type. Following the terminology of \cite{Her00}, we call the pseudo-algebras for the induced monad on $\Cats{(\mon{T},\V)}$ representable $(\mon{T},\V)$-categories. We explain in more detail how this notion captures both theories mentioned above. Finally, we introduce a notion of dual $(\mon{T},\V)$-category. We recall that this concept turned out to be crucial in the development of a completeness theory for $(\mon{T},\V)$-categories when $\V$ is a quantale, i.e.\ a small symmetric monoidal closed complete category (see \cite{CH09}).
From a more formal point of view, $(\mon{T}, \V)$-categories are monads within a certain bicategory-like structure. Some of the theory presented in this paper is ``formal monad theoretical'' in character. This perspective will be developed in an upcoming paper \cite{Ch14}.
\section{Basic assumptions}
Throughout the paper \emph{$\V$ is a complete, cocomplete, symmetric
monoidal-closed category, with tensor product $\otimes$ and unit
$I$}. Normally we avoid explicit reference to the natural unit,
associativity and symmetry isomorphisms.
The bicat\-egory $\V\mbox{-}\Rel$ of $\V$-relations (also called $\Mat(\V)$: see \cite{BCSW, RW}) has as \vspace*{1mm}\\
-- objects sets, denoted by $X$, $Y$, $\dots$, also
considered as (small) discrete categories,\vspace*{1mm}\\
-- arrows (=1-cells) $r:X\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} Y$ are families of $\V$-objects
$r(x,y)$ ($x\in X,\,y\in Y$),\vspace*{1mm}\\
-- 2-cells $\varphi:r\to r'$ are families of morphisms
$\varphi_{x,y}:r(x,y)\to r'(x,y)$ ($x\in X,\,y\in Y$) in $\V$,
i.e., natural transformations $\varphi :r\to r'$; hence, their
(vertical) composition is computed componentwise in $\V$:
\[(\varphi '\cdot \varphi )_{x,y}=\varphi '_{x,y}\varphi _{x,y}.\]
The (horizontal) composition of arrows $r:X\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} Y$ and $s:Y\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} Z$
is given by {\em relational multiplication}:
\[(sr)(x,z)=\sum_{y\in Y}\;r(x,y)\otimes s(y,z),\]
which is extended naturally to 2-cells; that is, for $\varphi :r\to
r'$, $\psi:s\to s'$,
\[(\psi\varphi )_{x,z}=\sum_{y\in
Y}\;\varphi _{x,y}\otimes\psi_{y,z}:(sr)(x,z)\to(s'r')(x,z).\]
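As a concrete illustration of the composition formula, for $\V=\mbox{\sf 2}$ the tensor is conjunction and the sum over $y$ is existential quantification, so relational multiplication is ordinary composition of relations. The following sketch, with relations encoded as boolean matrices (an encoding of ours, not notation from the paper), computes $sr$:

```python
# Composition in V-Rel for V = 2: tensor is "and", the sum over y is "exists".
# Relations r : X -|-> Y and s : Y -|-> Z are boolean matrices r[x][y], s[y][z].
def compose(r, s):
    """(sr)(x,z) = OR_y ( r(x,y) AND s(y,z) )."""
    X, Y = range(len(r)), range(len(r[0]))
    Z = range(len(s[0]))
    return [[any(r[x][y] and s[y][z] for y in Y) for z in Z] for x in X]

r = [[True, False], [True, True]]    # r : X -|-> Y
s = [[False, True], [True, False]]   # s : Y -|-> Z
print(compose(r, s))                 # the composite s r : X -|-> Z
```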
There is a pseudofunctor $\Set\longrightarrow \V\mbox{-}\Rel$ which maps objects
identically and treats a $\Set$-map $f:X\to Y$ as a $\V$-relation
$f:X\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} Y$ in $\V\mbox{-}\Rel$, with $f(x,y)=I$ if $f(x)=y$ and
$f(x,y)=\bot$ otherwise, where $\bot$ is a fixed initial object of
$\V$. If an arrow $r:X\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} Y$ is given by a $\Set$-map, we shall
indicate this by writing $r:X\to Y$, and by normally using
$f,\,g,\,\dots$, rather than $r,\,s,\,\dots$.
Like for $\V$, in order to simplify formulae and diagrams, we disregard the unity and associativity isomorphisms
in the bicategory $\V\mbox{-}\Rel$ when convenient.
$\V\mbox{-}\Rel$ has a pseudo-involution, given by {\em
transposition}: the transpose $r^\circ :Y\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} X$ of $r:X\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} Y$ is
defined by $r^\circ (y,x)=r(x,y)$; likewise for 2-cells. In
particular, there are natural and coherent isomorphisms
\[(sr)^\circ \cong r^\circ s^\circ \]
involving the symmetry isomorphisms of $\V$. The
transpose $f^\circ $ of a $\Set$-map $f:X\to Y$ is a
right adjoint to $f$ in the bicat\-egory $\V\mbox{-}\Rel$, so that $f$ is really a
``map'' in Lawvere's sense; hence, there are 2-cells
\[\xymatrix{1_X\ar[r]^{\lambda_f}& f^\circ f&\mbox{ and }&ff^\circ \ar[r]^{\rho_f}& 1_Y}\]satisfying the triangular
identities.
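For $\V=\mbox{\sf 2}$ the adjunction $f\dashv f^\circ$ amounts to the inclusions $1_X\leq f^\circ f$ and $ff^\circ\leq 1_Y$ of relations; a small finite check (an illustration of ours, not notation from the paper):

```python
# For V = 2, a Set-map f : X -> Y, viewed as a relation, has its transpose
# f° as right adjoint: 1_X <= f°f and f f° <= 1_Y as inclusions of relations.
X, Y = range(3), range(2)
f = {0: 0, 1: 0, 2: 1}                # a Set-map X -> Y

graph = {(x, f[x]) for x in X}        # f as a relation X -|-> Y
transpose = {(y, x) for (x, y) in graph}

# composites f°f : X -|-> X and f f° : Y -|-> Y (first factor applied first)
f_circ_f = {(x, x2) for (x, y) in graph for (y2, x2) in transpose if y == y2}
f_f_circ = {(y, y2) for (y, x) in transpose for (x2, y2) in graph if x == x2}

assert {(x, x) for x in X} <= f_circ_f   # unit   lambda_f : 1_X -> f°f
assert f_f_circ <= {(y, y) for y in Y}   # counit rho_f    : f f° -> 1_Y
print("f is left adjoint to its transpose")
```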
We fix a \emph{monad
$\mon{T}=(T,e,m)$ on $\Set$ with a
lax extension to $\V\mbox{-}\Rel$}, again denoted by $\mon{T}$, so that:
\begin{enumerate}[--]
\item There is a lax functor $T:\V\mbox{-}\Rel\to\V\mbox{-}\Rel$ which extends
the given $\Set$-functor; hence, for an arrow $r:X\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} Y$ we are
given $Tr:TX\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} TY$, with $Tr$ a $\Set$-map if $r$ is one,
and $T$ extends to 2-cells functorially:
\[T(\varphi '\cdot\varphi )=T\varphi '\cdot T\varphi ,\;\;T1_r=1_{Tr};\]
furthermore, for all $r:X\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} Y$ and $s:Y\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} Z$ there are natural and
coherent 2-cells
\[\kappa=\kappa_{s,r}:TsTr\longrightarrow T(sr),\]
so that the following diagrams commute:
\begin{equation}\tag{lax}\label{eq:lax}
\xymatrix{TsTr\ar[r]^{\kappa_{s,r}}\ar[d]_{(T\psi)(T\varphi )}&T(sr)\ar[d]^{T(\psi\varphi )}&
TtT(sr)\ar[r]^{\kappa_{t,sr}}&T(tsr)\\
Ts'Tr'\ar[r]^-{\kappa_{s',r'}}&T(s'r')
&TtTsTr\ar[r]^-{\kappa_{t,s}-}\ar[u]^{-\kappa_{s,r}}&T(ts)Tr
\ar[u]_{\kappa_{ts,r}}}
\end{equation}
(also: $\kappa_{r,1_X}=1_{Tr}=\kappa_{1_Y,r}$; all unity and
associativity isomorphisms are suppressed).\vspace*{1mm}\\
Furthermore, \emph{we assume that $T(f^\circ)=(Tf)^\circ$ for every map $f$.}
\noindent It follows that whenever $f$ is a $\Set$-map, $\kappa_{s, f}$ is invertible. Its inverse is the composite
\[T(sf) \xrightarrow{-\lambda_{Tf}} T(sf)Tf^\circ Tf \xrightarrow{\kappa_{sf, f^\circ}-} T(sff^\circ) Tf \xrightarrow{T(s\rho_f)-} Ts Tf.\]
Also, $\kappa_{f^\circ, t}$ is invertible, for $t:Z\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} Y$. Its inverse is the composite
\[T(f^\circ t) \xrightarrow{\lambda_{Tf}-} Tf^\circ Tf T(f^\circ t) \xrightarrow{-\kappa_{f, f^\circ t}} Tf^\circ T(f f^\circ t) \xrightarrow{-T(\rho_ft)} Tf^\circ Tt.\]
\item The natural transformations $e:1\to T$, $m:T^2\to T$ of $\Set$
are $\op$-lax in $\V\mbox{-}\Rel$, so that for every $r:X\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} Y$ one
has natural and coherent $2$-cells
\[\alpha=\alpha_r:e_Yr\to
Tre_X,\;\;\beta=\beta_r:m_YT^2r\to Trm_X,\mbox{ as in}\]
\begin{equation}\tag{oplax}\label{eq:oplax}
\xymatrix{ X \ar[r]|-{\object@{|}}^r
\ar[d]_{e_X} & Y \ar[d]^{e_Y}
\ar@{}[dl]|{\mbox{\large $\stackrel{\alpha}{\Leftarrow}$}} &
T^2X\ar[r]|-{\object@{|}}^{T^2r}\ar[d]_{m_X}&T^2Y
\ar@{}[dl]|{\mbox{\large $\stackrel{\beta}{\Leftarrow}$}}\ar[d]^{m_Y} \\
TX \ar[r]|-{\object@{|}}_{Tr} & TY&
TX\ar[r]|-{\object@{|}}_{Tr}&TY}
\end{equation}
such that $\alpha_{f}=1_{e_Yf}$, $\beta_{f}=1_{m_YT^2f}$
whenever $r=f$ is a $\Set$-map.
\item The following diagrams
commute (where again we disregard associativity isomorphisms):
\begin{equation}\tag{mon}\label{eq:mon}
\xymatrix@!R=3mm{&&m_YTe_YTr\ar[r]^{-\kappa_{e_Y,r}}\ar[dd]_1&
m_YT(e_Yr)\ar[r]^{-T\alpha_r}&m_YT(Tre_X)\ar[d]^{-\kappa^{-1}_{Tr,e_X}}\\
m_Ye_{TY}Tr\ar[r]^{-\alpha_{Tr}}\ar[d]_1&m_YT^2re_{TX}\ar[d]^{\beta_r-}&
&&m_YT^2rTe_X\ar[d]^{\beta_r-}\\
Tr\ar[r]^-{1}&Trm_Xe_{TX}&Tr\ar[rr]^-{1}&&Trm_XTe_X\\
m_YTm_YT^3r\ar[d]_1\ar[r]^{-\kappa_{m_Y,T^2r}}&
m_YT(m_YT^2r)\ar[r]^{-T\beta_r}&
m_YT(Trm_X)\ar[d]^{-\kappa^{-1}_{Tr,m_X}}\\
m_Ym_{TY}T^3r\ar[d]_{-\beta_{Tr}}&&m_YT^2rTm_X\ar[d]^{\beta_r-}\\
m_YT^2rm_{TX}\ar[r]_{\beta_r-}&Trm_Xm_{TX}\ar[r]_1&
Trm_XTm_X.}\end{equation}
\item One also needs the coherence conditions
\begin{equation}\tag{coh}\label{eq:coh}
\xymatrix@!R=3mm{e_Zsr\ar[r]^{\alpha_s-}\ar[d]_{1}&Tse_Yr\ar[r]^{-\alpha_r}&
TsTre_X\ar[d]^{\kappa_{s,r}-}\\
e_Zsr\ar[rr]^{\alpha_{sr}}&&T(sr)e_X\\
m_ZT^2sT^2r\ar[r]^{\beta_s-}\ar[d]_{-\kappa_{Ts,Tr}}
&Tsm_YT^2r\ar[r]^{-\beta_r}&TsTrm_X\ar[dd]^{\kappa_{s,r}-}\\
m_ZT(TsTr)\ar[d]_{-T\kappa_{s,r}}&&\\
m_ZT^2(sr)\ar[rr]^{\beta_{sr}}&&T(sr)m_X.}
\end{equation}
\item And the following naturality conditions, for all $\varphi :r\to r'$,
\begin{equation}\tag{nat}\label{eq:nat}
T\varphi e_X\cdot\alpha_r=\alpha_{r'}\cdot e_Y\varphi \;\mbox{ and }
T\varphi m_X\cdot\beta_r=\beta_{r'}\cdot m_YT^2\varphi .
\end{equation}
\end{enumerate}
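For instance, for $\V=\mbox{\sf 2}$ and $\mon{T}$ the free-monoid monad, the standard lax extension relates two words by $Tr$ precisely when they have the same length and are $r$-related letterwise; the comparison $\kappa_{s,r}:TsTr\to T(sr)$ is then an inclusion of relations. The sketch below (the encoding and test data are ours, chosen for illustration) verifies this inclusion by enumeration:

```python
from itertools import product

# Sketch of the letterwise lax extension of the free-monoid monad T to Rel
# (V = 2): Tr relates words of equal length componentwise.  The 2-cell
# kappa : Ts Tr -> T(sr) is then an inclusion of relations.
def T(r):
    def Tr(xs, ys):
        return len(xs) == len(ys) and all(r(x, y) for x, y in zip(xs, ys))
    return Tr

r = lambda x, y: x <= y          # a relation on {0,1,2}
s = lambda y, z: y != z          # another relation on {0,1,2}
sr = lambda x, z: any(r(x, y) and s(y, z) for y in range(3))

words = [list(w) for n in range(3) for w in product(range(3), repeat=n)]
for xs, zs in product(words, repeat=2):
    TsTr = any(T(r)(xs, ys) and T(s)(ys, zs) for ys in words)
    if TsTr:
        assert T(sr)(xs, zs)     # Ts Tr <= T(sr): the lax comparison kappa
print("kappa is an inclusion on all test words")
```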
The op-lax natural transformations $e$ and $m$ induce two lax natural transformations
\[(e^\circ,\hat{\alpha}):T\to\Id_{\V\mbox{-}\Rel}\mbox{ and }(m^\circ,\hat{\beta}):T\to T^2\] on $\V\mbox{-}\Rel$: for each $r:X\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} Y$ we have
\[\xymatrix{TX \ar[r]|-{\object@{|}}^{Tr}
\ar[d]|-{\object@{|}}_{e_X^\circ}\ar@{}[dr]|{\mbox{\large $\stackrel{\hat{\alpha}}{\Rightarrow}$}} & TY \ar[d]|-{\object@{|}}^{e_Y^\circ}
&
TX\ar[r]|-{\object@{|}}^{Tr}\ar[d]|-{\object@{|}}_{m_X^\circ}\ar@{}[dr]|{\mbox{\large $\stackrel{\hat{\beta}}{\Rightarrow}$}}&TY
\ar[d]|-{\object@{|}}^{m_Y^\circ} \\
X \ar[r]|-{\object@{|}}_{r} & Y&
T^2X\ar[r]|-{\object@{|}}_{T^2r}&T^2Y}\]
where $\hat{\alpha}_r:re_X^\circ\to e_Y^\circ Tr$ and $\hat{\beta}_r:T^2rm_X^\circ\to m_Y^\circ Tr$ are mates of $\alpha_r$ and $\beta_r$, respectively, i.e.\ they are defined by the composites:
\[\xymatrix{re_X^\circ\ar[r]^-{\lambda_{e_Y}-}&e_Y^\circ e_Y re_X^\circ\ar[r]^-{-\alpha_r-}&e_Y^\circ Tr e_Xe_X^\circ\ar[r]^-{-\rho_{e_X}}&e_Y^\circ Tr\\
T^2rm_X^\circ\ar[r]^-{\lambda_{m_Y}-}&m_Y^\circ m_Y T^2r m_X^\circ\ar[r]^-{-\beta_r-}&m_Y^\circ Tr m_X m_X^\circ\ar[r]^-{-\rho_{m_X}}&m_Y^\circ Tr.}\]
\section{$(\mon{T},\V)$-categories}\label{sect:three}
Now we define the 2-cat\-egory $(\mT,\V)\mbox{-}\Cat$ of $(\mon{T},\V)$-categories, $(\mon{T},\V)$-functors and transformations between these:
\begin{enumerate}[--]
\item \emph{$(\mon{T},\V)$-categories} are defined as $(X,a,\eta_a,\mu_a)$, with $X$ a set, $a:TX\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} X$ a $\V$-relation, and $\eta_a$ and $\mu_a$ 2-cells as in the following diagrams:
\[
\xymatrix{X\ar[dr]_{1_X}\ar[r]^{e_X}&TX\ar[d]|-{\object@{|}}^a&TX\ar[d]|-{\object@{|}}_a&T^2X\ar[l]|-{\object@{|}}_{Ta}\ar[d]^{m_X}\\
\ar@{}[ur]|(.75){\mbox{\large $\stackrel{\eta_a}{\Rightarrow}$}}&X&X\ar@{}[ur]|{\mbox{\large
$\stackrel{\mu_a}{\Rightarrow}$}}&
TX;\ar[l]|-{\object@{|}}^a}
\]
furthermore, $\eta_a,\,\mu_a$ provide a
generalized monad structure on $a$, i.e., the following diagrams
must commute (modulo associativity isomorphisms):
\begin{equation}\tag{cat}\label{eq:cat}
\xymatrix{ae_Xa\ar[r]^-{-\alpha_a} & aTae_{TX}\ar[d]^-{\mu_a-}
&aT(ae_X)\ar[r]^-{-\kappa^{-1}_{a,e_X}}&aTaTe_X\ar[d]^{\mu_a-}\\
a\ar[u]^{\eta_a -}\ar[r]^-{1} & am_Xe_{TX} & a\ar[u]^{-T\eta_a}\ar[r]^-{1}&a m_XTe_X\\
aTaT^2a\ar[r]^{-\kappa_{a,Ta}}\ar[d]_{\mu_a-} &
aT(aTa)\ar[r]^{-T\mu_a} &aT(am_X)\ar[d]^{-\kappa^{-1}_{a,m_X}}\\
am_XT^2a\ar[d]_{-\beta_a}& &aTaTm_X\ar[d]^{\mu_a-}\\
aTam_{TX}\ar[r]^{\mu_a -}&am_Xm_{TX}\ar[r]^{1}&am_XTm_X.}
\end{equation}
We will sometimes denote a $(\mon{T},\V)$-category $(X,a,\eta_a,\mu_a)$ simply by $(X,a)$.
\item A \emph{$(\mon{T},\V)$-functor}
$(f,\varphi _f):(X,a,\eta_a,\mu_a)\to(Y,b,\eta_b,\mu_b)$ between $(\mon{T},\V)$-categories is given by a $\Set$-map $f:X\to Y$ equipped with a 2-cell $\varphi_f:fa\to bTf$
\[\label{eq:old6}
\xymatrix{TX\ar[r]^{Tf}\ar[d]|-{\object@{|}}_a&
TY\ar[d]|-{\object@{|}}^b\\
X\ar[r]_f\ar@{}[ur]|{\mbox{\large
$\stackrel{\varphi_f}{\Rightarrow}$}}&Y}
\]
making the following diagrams commute:
\begin{equation}\tag{fun}\label{eq:fun}
\xymatrix@!R=.2mm{f\ar[r]^-{-\eta_a}\ar[dd]_{\eta_b -} & fae_X\ar[dd]^{\varphi _f -}\\
\\
be_Yf\ar[r]^-{1} & bTfe_X\\
faTa\ar[rr]^{-\mu_a}\ar[dd]_{\varphi_f-}&&fam_X\ar[ddd]^{\varphi _f -}\\
&&\\ bTfTa\ar[dd]_{-\kappa_{f,a}}&&\\
&&bTfm_X\ar[ddd]^{1}\\ bT(fa)\ar[dd]_{-T\varphi _f}&&\\
&&\\
bT(bTf)\ar[r]^-{-\kappa^{-1}_{b,Tf}}&bTbT^2f\ar[r]^-{\mu_b-}&bm_YT^2f.}
\end{equation}
\item A \emph{$(\mon{T},\V)$-natural transformation} (or simply a \emph{natural transformation}) between $(\mon{T},\V)$-functors $(f, \varphi _f) \to (g, \varphi _g)$ is defined as a 2-cell $\zeta: ga \to bTf$
\[
\xymatrix{TX\ar[r]^{Tf}\ar[d]|-{\object@{|}}_a&
TY\ar[d]|-{\object@{|}}^b\\
X\ar[r]_g\ar@{}[ur]|{\mbox{\large
$\stackrel{\zeta}{\Rightarrow}$}}&Y}
\]
such that the two sides of the following diagram commute
\[\xymatrix@C=.6em@R=1.2em{& \ar[ld]_{\zeta -}gae_Xa&& \ar[ll]_-{-\eta_a-} ga\ar[dddd]_{\zeta} \ar[rr]^-{1}&&ga \ar[rd]^{\varphi _g}&\\
\ar[d]^{1}bTfe_Xa&&&&&&bTg \ar[d]_{-T(g\eta_a)}\\
\ar[d]^{-\varphi _f}be_Yfa&&&&&&bT(gae_X)\ar[d]_{-T(\zeta e_X)}\\
\ar[rd]_-{-\alpha_b-}be_YbTf&&&&&&bT(bTfe_X)\ar[ld]^-{-\kappa^{-1}_{b,e_Yf}}\\
&\ar[rr]_-{\mu_b-}bTbe_{TY}Tf&&bTf&&bTbTe_YTf\ar[ll]^-{\mu_b-}&}
\]
Such a 2-cell $\zeta$ is determined by the 2-cell
\begin{equation}\tag{$\zeta_0$}\label{eq:zeta0}
\xymatrix{(g\ar[r]^-{\zeta_0}& be_Yf)=(g\ar[r]^-{-\eta_a}&gae_X\ar[r]^-{\zeta-}&bTfe_X= be_Yf),}
\end{equation}
from which it can be reconstructed by either side of the above diagram.
\end{enumerate}
The composite of $(\mon{T}, \V)$-functors $(f, \varphi _f)$ and $(g, \varphi _g)$ is defined by the picture
\[\xymatrix{TX\ar[r]^{Tf}\ar[d]|-{\object@{|}}_a&
TY\ar[d]|-{\object@{|}}^b\ar[r]^{Tg}&TZ\ar[d]|-{\object@{|}}^c\\
X\ar[r]_f\ar@{}[ur]|{\mbox{\large
$\stackrel{\varphi_f}{\Rightarrow}$}}&Y\ar[r]_g\ar@{}[ur]|{\mbox{\large
$\stackrel{\varphi_g}{\Rightarrow}$}}&Z,}\]
that is as $(gf,\varphi_{gf})$, with $\varphi_{gf}=(\varphi_g Tf)(g\varphi_f)$.
The identity $(\mon{T},\V)$-functor on $(X,a)$ is $(1_X,1_a)$.
The horizontal composition of $(\mon{T},\V)$-natural transformations $\zeta : (f, \varphi _f) \to (g, \varphi _g)$ and $\zeta' : (f', \varphi _{f'}) \to (g', \varphi _{g'})$ is defined by a picture obtained from the above one by replacing $\varphi _f$ and $\varphi _g$ with $\zeta$ and $\zeta'$.
The vertical composition of $(\mon{T},\V)$-natural transformations $\zeta:(f,\varphi _f)\to (g,\varphi _g)$ and $\zeta':(g,\varphi _g)\to(h,\varphi _h)$ is defined by the diagram
\[\xymatrix{TX\ar@/_3pc/[dd]_{1_{TX}}^{\mbox{\large ${\stackrel{T\eta_a}{\Rightarrow}}$}}
\ar[d]_{Te_X}\ar[rr]^{Tf}&&TY\ar[d]^{Te_Y}\ar@/^3pc/[ddd]|-{\object@{|}}^{bm_YTe_Y=b}_{\mbox{\large ${\stackrel{\mu_b-}{\Rightarrow}}$}}
\\
T^2X\ar@{}[rrd]|{\mbox{\large ${\stackrel{T\zeta}{\Rightarrow}}$}}\ar[d]|-{\object@{|}}_{Ta}\ar[rr]^{T^2f}&&T^2Y\ar[d]|-{\object@{|}}^{Tb}\\
TX\ar@{}[rrd]|{\mbox{\large ${\stackrel{\zeta'}{\Rightarrow}}$}}\ar[rr]^{Tg}\ar[d]|-{\object@{|}}_a&&TY\ar[d]|-{\object@{|}}^b\\
X\ar[rr]^h&&Y.}\]
The identity natural transformation on a $(\mon{T}, \V)$-functor $(f, \varphi _f)$ is the 2-cell $\varphi _f$ itself.
The definitions of horizontal and vertical compositions can be naturally stated in terms of the alternative definition (\ref{eq:zeta0}) of $(\mon{T},\V)$-natural transformation too.
When $\mon{T}$ is the identity monad, identically extended to $\V\mbox{-}\Rel$, the category $(\mT,\V)\mbox{-}\Cat$ is exactly the 2-category $\V\mbox{-}\Cat$ of $\V$-categories, $\V$-functors and $\V$-natural transformations.
Next we briefly summarize our two main examples. In the first example, $\V=\mbox{\sf 2}$ and $\mon{T}$ is the ultrafilter monad together with a suitable extension to $\mbox{\sf 2}\mbox{-}\Rel = \Rel$; in this case $(\mT,\V)\mbox{-}\Cat$ is the category of topological spaces and continuous maps. In the second example, $\V=\Set$ and $\mon{T}$ is the free-monoid monad with a suitable extension to $\Set\mbox{-}\Rel = \mathbf{Span}$; in this case $(\mT,\V)\mbox{-}\Cat$ is the category of multicategories and multifunctors. For details on these examples, as well as for other examples, see \cite{CT03, HST14}.
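To make the axioms (\ref{eq:cat}) concrete in a degenerate fragment of the second example, one can check them by enumeration: take $\V=\mbox{\sf 2}$, $\mon{T}$ the free-monoid monad with the letterwise lax extension, $X=\{0,\dots,5\}$ and $a(\underline{x},y)$ iff $\sum\underline{x}\leq y$ (an example of ours, not from the paper). The unit law becomes $a([x],x)$, and the multiplication law becomes closure of $a$ under concatenation:

```python
from itertools import product

# A finite sketch of a (T,2)-category for T the free-monoid monad, V = 2:
# X = {0,...,5} with a(xs, y) iff sum(xs) <= y.  Chosen only so that the
# two (cat) laws can be checked by brute-force enumeration.
X = range(6)
a = lambda xs, y: sum(xs) <= y

words = [list(w) for n in range(3) for w in product(X, repeat=n)]

# unit law: 1_X <= a e_X, i.e. a([x], x) for every x
assert all(a([x], x) for x in X)

# multiplication law: a Ta <= a m_X on (a sample of) words of words
wordwords = [list(w) for n in range(3) for w in product(words[:20], repeat=n)]
for XS, x in product(wordwords, X):
    aTa = any(all(a(xs, y) for xs, y in zip(XS, ys)) and a(ys, x)
              for ys in product(X, repeat=len(XS)))
    if aTa:
        concat = [u for xs in XS for u in xs]      # m_X flattens
        assert a(concat, x)                        # a m_X holds whenever a Ta does
print("(cat) laws hold on the sample")
```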
For any $\mon{T}$ there is an adjunction of 2-functors:
\begin{equation}\tag{adj}\label{eq:adj}
(\mT,\V)\mbox{-}\Cat\adjunct{A^\circ}{A_e}\V\mbox{-}\Cat.
\end{equation}
$A_e$ is the algebraic functor associated with $e$, that is, for any $(\mon{T},\V)$-category $(X,a,\eta_a,\mu_a)$, $(\mon{T},\V)$-functor $(f,\varphi_f)$ and $(\mon{T},\V)$-natural transformation $\zeta:(f,\varphi_f)\to(g,\varphi_g)$, $A_e(X,a,\eta_a,\mu_a)=(X,ae_X,\eta_a,\overline{\mu}_a)$, where
\[\xymatrix{(ae_Xae_X\ar[r]^-{\overline{\mu}_a} &ae_X)=(ae_Xae_X\ar[r]^-{-\alpha_a-}&aTae_{TX}e_X\ar[r]^-{\mu_a-}&am_Xe_{TX}e_X=ae_X),}\]
$A_e(f,\varphi_f)=(f,\varphi_fe_X)$ and $A_e(\zeta)=\zeta e_X$ (see \cite{CT03} for details).
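For instance (an illustration of ours, not from the paper), for $\V=\mbox{\sf 2}$ and $\mon{T}$ the free-monoid monad, applying $A_e$ to the $(\mon{T},\mbox{\sf 2})$-category on $X=\{0,\dots,5\}$ with $a(\underline{x},y)$ iff $\sum\underline{x}\leq y$ composes $a$ with $e_X$ and so extracts an ordinary preorder:

```python
# A_e keeps the objects and composes the structure with e_X (x |-> [x]);
# for the finite "sum(xs) <= y" structure this extracts the usual order.
X = range(6)
a = lambda xs, y: sum(xs) <= y

order = {(x, y) for x in X for y in X if a([x], y)}   # a e_X : X -|-> X

assert all((x, x) in order for x in X)                              # reflexive
assert all((x, z) in order for (x, y) in order
           for (y2, z) in order if y == y2)                         # transitive
print(sorted(order) == [(x, y) for x in X for y in X if x <= y])    # True
```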
$A^\circ$ is defined as follows. For a $\V$-category $(Z,c,\eta_c,\mu_c)$, $A^\circ(Z,c,\eta_c,\mu_c)$ is the $(\mon{T},\V)$-category $(Z,c^\sharp,\eta_{c^\sharp},\mu_{c^\sharp})$ where $c^\sharp=e_Z^\circ Tc$, while $\eta_{c^\sharp}:1\to e_Z^\circ Tce_Z$ and $\mu_{c^\sharp}:e_Z^\circ Tc T(e_Z^\circ Tc)\to e_Z^\circ Tc m_Z$ are defined by the composites
\[\xymatrix{1\ar[r]^-{\lambda_{e_Z}}&e_Z^\circ e_Z\ar[r]^-{-T\eta_c-}&e_Z^\circ Tc e_Z}\]
\[\xymatrix@R=1.5em{
T^2Z \ar[dd]_{m_Z} \ar[r]|-{\object@{|}}^{T^2c} \ar@{}[rdd]|{\mbox{\large $\stackrel{\beta_c}{\Leftarrow}$}} \ar@/^2.5pc/[rrr]|-{\object@{|}}^{T(e_Z^\circ Tc)}_{\mbox{\large $\stackrel{\kappa^{-1}_{e_Z^\circ, Tc}}{\Leftarrow}$}} &T^2Z \ar[rr]|-{\object@{|}}^{Te^\circ_Z} \ar[d]_{1_{T^2Z}} \ar@{}[rd]|{\mbox{\large $\stackrel{\rho_{Te_Z}}{\Leftarrow}$}} &&TZ \ar[d]|-{\object@{|}}_{Tc} \ar@/^1pc/[lld]^{Te_Z}\\
&T^2Z \ar[d]_{m_Z} && TZ \ar[d]|-{\object@{|}}_{e^\circ_Z}\\
TZ \ar[r]|-{\object@{|}}_{Tc} \ar@/_2pc/[rr]|-{\object@{|}}_{Tc}^{\mbox{\large $\stackrel{T\mu_c \kappa_{c,c}}{\Leftarrow}$}}& TZ \ar[r]|-{\object@{|}}_{Tc} &TZ \ar[r]|-{\object@{|}}_{e^\circ_Z}&Z.
}
\]
For a $\V$-functor $(f, \varphi _f) : (Z, c) \to (Z', c')$, $A^\circ(f, \varphi _f)$ is defined by the diagram
\[\xymatrix{TZ\ar[r]|-{\object@{|}}^{Tc}\ar[d]_{Tf}&
TZ\ar[r]|-{\object@{|}}^{e^\circ_Z}\ar[d]^{Tf}&Z\ar[d]^f\\
TZ'\ar[r]|-{\object@{|}}_{Tc'}\ar@{}[ur]|{\mbox{\large
$\stackrel{T\varphi_f}{\Leftarrow}$}}&TZ'\ar[r]|-{\object@{|}}_{e^\circ_{Z'}}\ar@{}[ur]|{\mbox{\large
$\stackrel{}{\Leftarrow}$}}&Z',}\]
wherein the right 2-cell is the mate of the identity 2-cell $1_{Tfe_Z=e_{Z'}f}$. On $\V$-natural transformations $A^\circ$ is defined by a similar diagram. By direct verification, $A^\circ$ is indeed a 2-functor, and as already stated we have:
\begin{prop}
$A^\circ$ is a left 2-adjoint to $A_e$.
\end{prop}
\begin{proof}
The unit of the adjunction has the component at a $\V$-category $(Z,c)$ given by a $\V$-functor consisting of $1_Z$ and the 2-cell
\[\xymatrix{c \ar[r]^-{\lambda_{e_Z}-}& e_Z^\circ e_Zc \ar[r]^-{- \alpha_c} &e_Z^\circ Tc e_Z.}\]
The counit of the adjunction has the component at a $(\mon{T}, \V)$-category $(X, a)$ given by a $(\mon{T}, \V)$-functor consisting of $1_X$ and the 2-cell
\[\xymatrix{e_X^\circ T(ae_X) \ar[r]^-{-\kappa^{-1}_{a, e_X}}& e_X^\circ Ta Te_X \ar[r]^-{\eta_a -}& a e_X e_X^\circ TaTe_X \ar[r]^-{-\rho_{e_X}-} &aTaTe_X\ar[r]^-{\mu_a-} &am_XTe_X = a.}\]
The triangle identities are then directly verified.
\end{proof}
The next proposition is a $(\mon{T}, \V)$-categorical analogue of the ordinary- and enriched-categorical fact that an adjunction between functors induces isomorphisms between hom-sets/-objects.
\begin{prop}\label{th:adj}
Given an adjunction $(f, \varphi _f) \dashv (g, \varphi _g) : (X, a) \rightarrow (Y, b)$ in the 2-category $(\mT,\V)\mbox{-}\Cat$, there is an isomorphism:
\[g^\circ a \cong bTf.\]
\end{prop}
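In the ordered case ($\V=\mbox{\sf 2}$, $\mon{T}$ the identity monad) the proposition reduces to the familiar characterisation of a Galois connection: $g^\circ a$ is the relation $x\leq g(y)$, $bTf$ is the relation $f(x)\leq y$, and the isomorphism says these coincide. A finite check with the illustrative adjoint pair $f(x)=2x\dashv g(y)=\lfloor y/2\rfloor$:

```python
# In the ordered case the isomorphism g°a ~ bTf of the proposition is the
# classical Galois-connection condition  f(x) <= y  iff  x <= g(y).
# Illustrative finite check with f(x) = 2x left adjoint to g(y) = y // 2.
X = range(6)         # ordered set (X, <=)
Y = range(12)        # ordered set (Y, <=)
f = lambda x: 2 * x
g = lambda y: y // 2

g_circ_a = {(x, y) for x in X for y in Y if x <= g(y)}   # relation g° a
b_Tf     = {(x, y) for x in X for y in Y if f(x) <= y}   # relation b Tf

assert g_circ_a == b_Tf
print("g°a and bTf coincide, as the proposition predicts")
```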
\begin{proof}
The unit and the counit of the given adjunction are $(\mon{T}, \V)$-natural transformations $(1_X, 1_a) \to (g, \varphi _g)(f, \varphi _f)$ and $(f, \varphi _f)(g, \varphi _g) \to (1_Y, 1_b)$. These are given by 2-cells $\upsilon_0 : gf \to ae_X$ and $\epsilon_0: 1_Y \to be_Yfg$ respectively. Define a 2-cell $bTf \to g^\circ a$ by
\[
\xymatrix@=2em{
TX \ar[rr]^{Tf} \ar[d]_{Te_X} \ar `l^d[dd]`^r[dd]_{1_{TX}} [dd] \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\upsilon_0-}{\Leftarrow}$}}&& TY \ar[rr]|-{\object@{|}}^{b} \ar[d]_{Tg} \ar@{}[rrd]|{\mbox{\large$\stackrel{\varphi _g}{\Leftarrow}$ }}&& Y \ar[d]_{g} \ar[rr]^{1_Y} \ar@{}[rd]+UUU|{\mbox{\large $\stackrel{\lambda_g}{\Leftarrow}$}}&& Y,\\
T^2X \ar[d]_{m_X} \ar[rr]|-{\object@{|}}^{Ta} \ar@{}[rrrd]|{\mbox{\large $\stackrel{\mu_a}{\Leftarrow}$}}&& TX \ar[rr]|-{\object@{|}}^{a}&& X \ar@<-.3em>[rru]_{g^\circ}&& \\
TX \ar@/_1.2pc/[urrrr]|-{\object@{|}}_{a} && && && \\
}
\]
wherein the blank symbols stand for the obvious instances of $\kappa$ or $\kappa^{-1}$.
In the opposite direction define a 2-cell $g^\circ a \to bTf$ by
\[
\xymatrix@=2em{
&& && && Y \ar[d]^{g} \ar@{}[ld]|{\mbox{\large $\stackrel{\rho_g}{\Leftarrow}$}} \ar@/^2pc/[dddd]^{1_Y}_{\mbox{\large $\stackrel{\epsilon_0}{\Leftarrow}$}}\\
TX \ar[rrrr]|-{\object@{|}}_{a} \ar[d]_{Tf} \ar@{}[rrrrd]|{\mbox{\large $\stackrel{\varphi _f}{\Leftarrow}$}} && && X \ar[d]^{f} \ar[rr]^{1_X} \ar@<.3em>[rru]^{g^\circ} && X \ar[d]^{f} \\
TY\ar[rrrr]|-{\object@{|}}^{b} \ar[d]_{e_{TY}} \ar `l^d[dd]`^r[dd]_{1_{TY}} [dd] \ar@{}[rrrrd]|{\mbox{\large $\stackrel{\alpha_b}{\Leftarrow}$}}&& && Y \ar[d]_{e_Y} && Y \ar[d]^{e_Y} \\
T^2Y \ar[rrrr]|-{\object@{|}}_{Tb}\ar[d]_{m_Y} \ar@{}[rrrrd]|{\mbox{\large $\stackrel{\mu_b}{\Leftarrow}$}}&& && TY \ar[d]|-{\object@{|}}_{b} && TY \ar[d]|-{\object@{|}}^{b}\\
TY \ar[rrrr]|-{\object@{|}}_{b} && && Y \ar[rr]_{1_Y} && Y.
}
\]
These two 2-cells are inverses to each other. The following calculation shows that the equality $(bTf \to g^\circ a \to bTf) = 1_{bTf}$ holds. The remaining equation is proved using analogous arguments. Pasting the first diagram on top of the second, and using the equation $(\rho_gg) (g\lambda_g)= 1_g$ we obtain
\[
\xymatrix@=2em{
TX \ar[rr]^{Tf} \ar[d]_{Te_X} \ar `l^d[dd]`^r[dd]_{1_{TX}} [dd] \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\upsilon_0-}{\Leftarrow}$}}&& TY \ar[rr]|-{\object@{|}}^{b} \ar[d]_{Tg} \ar@{}[rrd]|{\mbox{\large$\stackrel{\varphi _g}{\Leftarrow}$ }}&& Y \ar[d]_{g} \ar@/^2pc/[ddddd]^{1_Y}_{\mbox{\large $\stackrel{\epsilon_0}{\Leftarrow}$}} \\
T^2X \ar[d]_{m_X} \ar[rr]|-{\object@{|}}^{Ta} \ar@{}[rrrd]|{\mbox{\large $\stackrel{\mu_a}{\Leftarrow}$}}&& TX \ar[rr]|-{\object@{|}}^{a}&& X \ar[dd]_{f} \\
TX \ar@/_1.2pc/[urrrr]|-{\object@{|}}_{a} \ar[d]_{Tf} \ar@{}[rrrrd]|{\mbox{\large $\stackrel{\varphi _f}{\Leftarrow}$}} && && \\
TY\ar[rrrr]|-{\object@{|}}^{b} \ar[d]_{e_{TY}} \ar `l^d[dd]`^r[dd]_{1_{TY}} [dd] \ar@{}[rrrrd]|{\mbox{\large $\stackrel{\alpha_b}{\Leftarrow}$}}&& && Y \ar[d]_{e_Y} \\
T^2Y \ar[rrrr]|-{\object@{|}}_{Tb}\ar[d]_{m_Y} \ar@{}[rrrrd]|{\mbox{\large $\stackrel{\mu_b}{\Leftarrow}$}}&& && TY \ar[d]|-{\object@{|}}_{b} \\
TY \ar[rrrr]|-{\object@{|}}_{b} && && Y;
}
\]
using (\ref{eq:fun}) for $(f, \varphi _f)$ we get
\[
\xymatrix@=2em{
&TX \ar[rr]^{Tf} \ar[d]_{Te_X} \ar@/_1.5pc/[ldd]_{1_{TX}} \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\upsilon_0-}{\Leftarrow}$}}&& TY \ar[rr]|-{\object@{|}}^{b} \ar[d]_{Tg} \ar@{}[rrd]|{\mbox{\large$\stackrel{\varphi _g}{\Leftarrow}$ }}&& Y \ar[d]_{g} \ar@/^2pc/[ddddd]^{1_Y}_{\mbox{\large $\stackrel{\epsilon_0}{\Leftarrow}$}} \\
&T^2X \ar[d]_{T^2f} \ar[rr]|-{\object@{|}}^{Ta} \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\varphi _f-}{\Leftarrow}$}} \ar[ld]_{m_X}&& TX \ar[rr]|-{\object@{|}}^{a} \ar[d]_{Tf} \ar@{}[rrd]|{\mbox{\large $\stackrel{\varphi _f}{\Leftarrow}$}}&& X \ar[d]_{f} \\
TX \ar[rd]_{Tf} &T^2Y \ar[rr]|-{\object@{|}}^{Tb} \ar[d]_{m_Y} \ar@{}[rrrd]|{\mbox{\large $\stackrel{\mu_b}{\Leftarrow}$}} &&TY \ar[rr]|-{\object@{|}}^{b} && Y \ar[dd]_{e_Y}\\
&TY \ar[d]_{e_{TY}} \ar@/_1.2pc/[urrrr]|-{\object@{|}}_{b} \ar `l^d[dd]`^r[dd]_{1_{TY}} [dd] \ar@{}[rrrrd]|{\mbox{\large $\stackrel{\alpha_b}{\Leftarrow}$}}&& && \\
&T^2Y \ar[rrrr]|-{\object@{|}}_{Tb}\ar[d]_{m_Y} \ar@{}[rrrrd]|{\mbox{\large $\stackrel{\mu_b}{\Leftarrow}$}}&& && TY \ar[d]|-{\object@{|}}_{b} \\
&TY \ar[rrrr]|-{\object@{|}}_{b} && && Y.
}
\]
Then, using naturality of $\alpha$ we obtain
\[
\xymatrix@=2em{
&TX \ar[rr]^{Tf} \ar[d]_{Te_X} \ar@/_1.5pc/[ldd]_{1_{TX}} \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\upsilon_0-}{\Leftarrow}$}}&& TY \ar[rr]|-{\object@{|}}^{b} \ar[d]_{Tg} \ar@{}[rrd]|{\mbox{\large$\stackrel{\varphi _g}{\Leftarrow}$ }}&& Y \ar[d]_{g} \ar@/^2pc/[ddddd]^{1_Y}_{\mbox{\large $\stackrel{\epsilon_0}{\Leftarrow}$}} \\
&T^2X \ar[d]_{T^2f} \ar[rr]|-{\object@{|}}^{Ta} \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\varphi _f-}{\Leftarrow}$}} \ar[ld]_{m_X}&& TX \ar[rr]|-{\object@{|}}^{a} \ar[d]_{Tf} \ar@{}[rrd]|{\mbox{\large $\stackrel{\varphi _f}{\Leftarrow}$}}&& X \ar[d]_{f} \\
TX \ar[d]_{Tf} &T^2Y \ar[ld]_{m_Y} \ar[rr]|-{\object@{|}}^{Tb} \ar[d]_{e_{T^2Y}} \ar@{}[rrd]|{\mbox{\large $\stackrel{\alpha_{Tb}}{\Leftarrow}$}} &&TY \ar[d]_{e_{TY}} \ar[rr]|-{\object@{|}}^{b} \ar@{}[rrd]|{\mbox{\large $\stackrel{\alpha_{b}}{\Leftarrow}$}}&& Y \ar[d]_{e_Y}\\
TY \ar@/_1.5pc/[rdd]_{1_{TY}} \ar[rd]_{e_{TY}}&T^3Y \ar[rr]|-{\object@{|}}^{T^2b} \ar[d]_{Tm_Y} \ar@{}[rrrd]|{\mbox{\large $\stackrel{-T\mu_b-}{\Leftarrow}$}}&& T^2Y \ar[rr]|-{\object@{|}}^{Tb}&& TY \ar[dd]|-{\object@{|}}_{b}\\
&T^2Y \ar[d]_{m_Y} \ar@/_1.2pc/[urrrr]|-{\object@{|}}_{Tb}\ar@{}[rrrrd]|{\mbox{\large $\stackrel{\mu_b}{\Leftarrow}$}}&& && \\
&TY \ar[rrrr]|-{\object@{|}}_{b} && && Y,
}
\]
and using the associativity axiom in (\ref{eq:cat}) for $\mu_b$ we get
\[
\xymatrix@=2em{
&TX \ar[rr]^{Tf} \ar[d]_{Te_X} \ar `l /25pt[ld]`[ddddd]_{Tf} [ddddd] \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\upsilon_0-}{\Leftarrow}$}}&& TY \ar[rr]|-{\object@{|}}^{b} \ar[d]_{Tg} \ar@{}[rrd]|{\mbox{\large$\stackrel{\varphi _g}{\Leftarrow}$ }}&& Y \ar[d]_{g} \ar@/^2pc/[dddd]^{1_Y}_{\mbox{\large $\stackrel{\epsilon_0}{\Leftarrow}$}} \\
&T^2X \ar[d]_{T^2f} \ar[rr]|-{\object@{|}}^{Ta} \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\varphi _f-}{\Leftarrow}$}} && TX \ar[rr]|-{\object@{|}}^{a} \ar[d]_{Tf} \ar@{}[rrd]|{\mbox{\large $\stackrel{\varphi _f}{\Leftarrow}$}}&& X \ar[d]_{f} \\
&T^2Y \ar[rr]|-{\object@{|}}^{Tb} \ar[d]_{e_{T^2Y}} \ar@{}[rrd]|{\mbox{\large $\stackrel{\alpha_{Tb}}{\Leftarrow}$}} &&TY \ar[d]_{e_{TY}} \ar[rr]|-{\object@{|}}^{b} \ar@{}[rrd]|{\mbox{\large $\stackrel{\alpha_{b}}{\Leftarrow}$}}&& Y \ar[d]_{e_Y}\\
&T^3Y \ar[rr]|-{\object@{|}}^{T^2b} \ar[d]+<-1em,.8em>_{Tm_Y} \ar@<.5em>[d]^{m_{TY}} \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\mu_b-}{\Leftarrow}$}}&& T^2Y \ar[d]_{m_Y} \ar[rr]|-{\object@{|}}^{Tb} \ar@{}[rrd]|{\mbox{\large $\stackrel{\mu_b}{\Leftarrow}$}} && TY \ar[d]|-{\object@{|}}_{b}\\
&T^2Y \enspace T^2Y \ar@<.5em>[d]^{m_Y} \ar[rr]|-{\object@{|}}^{Tb} \ar@{}[rrrd]|{\mbox{\large $\stackrel{\mu_b}{\Leftarrow}$}}&& TY \ar[rr]|-{\object@{|}}^{b} && Y. \\
& TY \ar@/_1.2pc/[urrrr]|-{\object@{|}}_{b} \ar[u]+<-1em,-.8em>;[]_{m_Y}&& &&
}
\]
From (\ref{eq:mon}) we obtain
\[
\xymatrix@=2em{
&TX \ar[rr]^{Tf} \ar[d]_{Te_X} \ar `l /25pt[ld]`[ddddd]_{Tf} [ddddd] \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\upsilon_0-}{\Leftarrow}$}}&& TY \ar[rr]|-{\object@{|}}^{b} \ar[d]_{Tg} \ar@{}[rrd]|{\mbox{\large$\stackrel{\varphi _g}{\Leftarrow}$ }}&& Y \ar[d]_{g} \ar@/^2pc/[dddd]^{1_Y}_{\mbox{\large $\stackrel{\epsilon_0}{\Leftarrow}$}} \\
&T^2X \ar[d]_{T^2f} \ar[rr]|-{\object@{|}}^{Ta} \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\varphi _f-}{\Leftarrow}$}} && TX \ar[rr]|-{\object@{|}}^{a} \ar[d]_{Tf} \ar@{}[rrd]|{\mbox{\large $\stackrel{\varphi _f}{\Leftarrow}$}}&& X \ar[d]_{f} \\
&T^2Y \ar@/^1.5pc/[dd]^(0.4){1_{T^2Y}} \ar[rr]|-{\object@{|}}^{Tb} \ar[d]_{e_{T^2Y}} &&TY \ar@/_1.5pc/[dd]_(0.6){1_{TY}} \ar[d]^{e_{TY}} \ar[rr]|-{\object@{|}}^{b} \ar@{}[rrd]|{\mbox{\large $\stackrel{\alpha_{b}}{\Leftarrow}$}}&& Y \ar[d]_{e_Y}\\
&T^3Y \ar[d]_{m_{TY}} && T^2Y \ar[d]^{m_Y} \ar[rr]|-{\object@{|}}^{Tb} \ar@{}[rrd]|{\mbox{\large $\stackrel{\mu_b}{\Leftarrow}$}} && TY \ar[d]|-{\object@{|}}_{b}\\
&T^2Y \ar[d]_{m_Y} \ar[rr]|-{\object@{|}}^{Tb} \ar@{}[rrrd]|{\mbox{\large $\stackrel{\mu_b}{\Leftarrow}$}}&& TY \ar[rr]|-{\object@{|}}^{b} && Y, \\
& TY \ar@/_1.2pc/[urrrr]|-{\object@{|}}_{b}&& &&
}
\]
and the axiom of a $(\mon{T}, \V)$-natural transformation for $\epsilon_0$ gives
\[
\xymatrix@=2em{
&TX \ar[rr]^{Tf} \ar[d]_{Te_X} \ar `l /25pt[ld]`[ddddd]_{Tf} [ddddd] \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\upsilon_0-}{\Leftarrow}$}}&& TY \ar[d]^{Tg} \ar@/^1.5pc/[rrddd]^{1_{TY}}&& \\
&T^2X \ar[d]_{T^2f} \ar[rr]|-{\object@{|}}^{Ta} \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\varphi _f-}{\Leftarrow}$}} && TX \ar[d]^{Tf} && \\
&T^2Y \ar[dd]_{1_{T^2Y}} \ar[rr]|-{\object@{|}}^{Tb} &&TY \ar@/_1.5pc/[dd]_(0.6){1_{TY}} \ar[d]^{Te_Y} \ar@{}[rr]|{\mbox{\large $\stackrel{-T\epsilon_0-}{\Leftarrow}$}}&& \\
& && T^2Y \ar[d]^{m_Y} \ar[rr]|-{\object@{|}}^{Tb} \ar@{}[rrd]|{\mbox{\large $\stackrel{\mu_b}{\Leftarrow}$}} && TY \ar[d]|-{\object@{|}}_{b}\\
&T^2Y \ar[d]^{m_Y} \ar[rr]|-{\object@{|}}^{Tb} \ar@{}[rrrd]|{\mbox{\large $\stackrel{\mu_b}{\Leftarrow}$}}&& TY \ar[rr]|-{\object@{|}}^{b} && Y. \\
& TY \ar@/_1.2pc/[urrrr]|-{\object@{|}}_{b}&& &&
}
\]
Using (\ref{eq:mon}) again we obtain
\[
\xymatrix@=2em{
&TX \ar[rr]^{Tf} \ar[d]_{Te_X} \ar `l /25pt[ld]`[ddddd]_{Tf} [ddddd] \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\upsilon_0-}{\Leftarrow}$}}&& TY \ar[d]^{Tg} \ar@/^1.5pc/[rrddd]^{1_{TY}}&& \\
&T^2X \ar[d]_{T^2f} \ar[rr]|-{\object@{|}}^{Ta} \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\varphi _f-}{\Leftarrow}$}} && TX \ar[d]^{Tf} && \\
&T^2Y \ar@/_1.3pc/[dd]_{1_{T^2Y}} \ar[d]^{Te_{TY}} \ar[rr]|-{\object@{|}}^{Tb} \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\alpha_b-}{\Leftarrow}$}} &&TY \ar[d]^{Te_Y} \ar@{}[rr]|{\mbox{\large $\stackrel{-T\epsilon_0-}{\Leftarrow}$}}&& \\
&T^3Y \ar[d]^{m_{TY}} \ar[rr]|-{\object@{|}}^{T^2b} \ar@{}[rrd]|{\mbox{\large $\stackrel{\beta_b}{\Leftarrow}$}}&& T^2Y \ar[d]^{m_Y} \ar[rr]|-{\object@{|}}^{Tb} \ar@{}[rrd]|{\mbox{\large $\stackrel{\mu_b}{\Leftarrow}$}} && TY \ar[d]|-{\object@{|}}_{b}\\
&T^2Y \ar[d]^{m_Y} \ar[rr]|-{\object@{|}}^{Tb} \ar@{}[rrrd]|{\mbox{\large $\stackrel{\mu_b}{\Leftarrow}$}}&& TY \ar[rr]|-{\object@{|}}^{b} && Y, \\
& TY \ar@/_1.2pc/[urrrr]|-{\object@{|}}_{b}&& &&
}
\]
and using associativity of $\mu_b$ again we get
\[
\xymatrix@=2em{
&TX \ar[rr]^{Tf} \ar[d]_{Te_X} \ar `l /25pt[ld]`[ddddd]_{Tf} [ddddd] \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\upsilon_0-}{\Leftarrow}$}}&& TY \ar[d]^{Tg} \ar@/^1.5pc/[rrddd]^{1_{TY}}&& \\
&T^2X \ar[d]_{T^2f} \ar[rr]|-{\object@{|}}^{Ta} \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\varphi _f-}{\Leftarrow}$}} && TX \ar[d]^{Tf} && \\
&T^2Y \ar[d]_{Te_{TY}} \ar[rr]|-{\object@{|}}^{Tb} \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\alpha_b-}{\Leftarrow}$}} &&TY \ar[d]^{Te_Y} \ar@{}[rr]|{\mbox{\large $\stackrel{-T\epsilon_0-}{\Leftarrow}$}}&& \\
&T^3Y \ar[d]+<-1em,.8em>_{m_{TY}} \ar@<.5em>[d]^{Tm_Y} \ar[rr]|-{\object@{|}}^{T^2b} \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\mu_b-}{\Leftarrow}$}}&& T^2Y \ar[rr]|-{\object@{|}}^{Tb} && TY \ar[d]|-{\object@{|}}_{b}\\
&T^2Y \enspace T^2Y \ar@<.5em>[d]^{m_Y} \ar@/^-1.2pc/[rrrru]|-{\object@{|}}^{Tb} \ar@{}[rrrd]|{\mbox{\large $\stackrel{\mu_b}{\Leftarrow}$}}&& && Y. \\
& TY \ar[u]+<-1em,-.8em>;[]_{m_Y} \ar@/_1.2pc/[urrrr]|-{\object@{|}}_{b}&& &&
}
\]
Now, one of the triangle equations satisfied by the unit $\upsilon_0$ and the counit $\epsilon_0$ of our adjunction gives us
\[
\xymatrix@=2em{
&TX \ar[rr]^{Tf} \ar[d]_{Te_X} \ar `l /25pt[ld]`[ddddd]_{Tf} [ddddd] \ar@{}[rrrrddd]|{\mbox{\large $\stackrel{-T\eta_b-}{\Leftarrow}$}} && TY \ar[lldd]^{Te_Y} \ar@/^1.5pc/[rrddd]^{1_{TY}}&& \\
&T^2X \ar[d]_{T^2f} && && \\
&T^2Y \ar[d]_{Te_{TY}} \ar[rrrrd]|-{\object@{|}}^{Tb} \ar@/^1.5pc/[dd]^{1_{T^2Y}} && && \\
&T^3Y \ar[d]_{Tm_Y} && && TY \ar[d]|-{\object@{|}}_{b}\\
&T^2Y \ar[d]_{m_Y} \ar@/^-1.2pc/[rrrru]|-{\object@{|}}^{Tb} \ar@{}[rrrd]|{\mbox{\large $\stackrel{\mu_b}{\Leftarrow}$}}&& && Y, \\
& TY \ar@/_1.2pc/[urrrr]|-{\object@{|}}_{b}&& &&
}
\]
and finally, by the unity axiom in (\ref{eq:cat}), this equals
\[
\xymatrix@=1.5em{
&TX \ar[rr]^{Tf} \ar[dd]_{Tf} && TY \ar[ld]_{Te_Y} \ar[dd]|-{\object@{|}}^{b} \\
&&T^2Y \ar[ld]_{m_Y} & \\
& TY \ar[rr]|-{\object@{|}}^{b}&& Y,
}
\]
which is the identity map $1_{bTf}$.
We leave it to the reader to verify the equality $(g^\circ a \to bTf \to g^\circ a) = 1_{g^\circ a}$.
\end{proof}
\section{$\mon{T}$ as a $\V\mbox{-}\Cat$ monad}\label{sect:VCatMonad}
In this section we show that the properties of the lax extension of the $\Set$-monad $\mon{T}$ to $\V\mbox{-}\Rel$ allow us to extend $\mon{T}$ to $\V\mbox{-}\Cat$. Straightforward calculations show that:
\begin{lemma}
\begin{enumerate}[\em (1)]
\item If $(X,a,\eta_a,\mu_a)$ is a $\V$-category, then $(TX,Ta,T\eta_a,T\mu_a\kappa_{a,a})$ is a $\V$-category.
\item If $(f,\varphi _f):(X,a,\eta_a,\mu_a)\to(Y,b,\eta_b,\mu_b)$ is a $\V$-functor, then $(Tf,\varphi _{Tf}):(TX,Ta)\to(TY,Tb)$, where $\varphi _{Tf}:=\kappa^{-1}_{b,f} \, T\varphi _f \, \kappa_{f,a}$, is a $\V$-functor as well.
\item If $\zeta : (f, \varphi _f) \to (g, \varphi _g)$ is a $\V$-natural transformation, then so is $\kappa^{-1}_{b,f} T\zeta \, \kappa_{g,a}:(Tf, \varphi _{Tf}) \to (Tg, \varphi _{Tg})$.
\end{enumerate}
\end{lemma}
\noindent These assignments define a 2-endofunctor on $\V\mbox{-}\Cat$, which we denote again by $T:\V\mbox{-}\Cat\to\V\mbox{-}\Cat$. The 2-cells $\alpha, \beta$ of the oplax natural transformations $e, m$ on $\V\mbox{-}\Rel$ make $e$ and $m$ into natural transformations on $\V\mbox{-}\Cat$, as we show next.
\begin{lemma}
For each $\V$-category $(X,a)$:
\begin{enumerate}[\em (1)]
\item $(e_X,\alpha_a):(X,a)\to(TX,Ta)$ is a $\V$-functor;
\item $(m_X,\beta_a):(T^2X,T^2a)\to(TX,Ta)$ is a $\V$-functor.
\end{enumerate}
\end{lemma}
\begin{proof}
To check that the diagrams
\[\xymatrix@!C=15ex{e_X\ar[r]^{-\eta_a}\ar[d]_{T\eta_a-}&e_Xa\ar[ld]^{\alpha_a}&
m_X\ar[r]^-{-\eta_{T^2a}}\ar[d]_{\eta_{Ta}-}&m_XT^2a\ar[ld]^{\beta_a}\\
Tae_X&&Tam_X}\]
commute, one uses the naturality conditions (\ref{eq:nat}) with $\varphi=\eta$ and $\varphi=\beta$, respectively.
For the diagrams
\[\xymatrix@!C=100pt{e_Xaa\ar[rr]^{-\mu_a}\ar[d]_{\alpha_a-}\ar[rdd]^{\alpha_{a,a}}&&e_Xa\ar[dd]^{\alpha_a}\\
Tae_Xa\ar[d]_{-\alpha_a}\\
TaTae_X\ar@{}[rruu]^(0.2){\framebox{1}}\ar@{}[rruu]|{\framebox{2}}\ar[r]^{\kappa_{a,a}-}&T(aa)e_X\ar[r]^{T\mu_a-}&Tae_X\\
m_XT^2aT^2a\ar[r]^{-\kappa_{Ta,Ta}}\ar[d]_{\beta_a-}&m_XT(TaTa)\ar[r]^{-T\kappa_{a,a}}&m_XT^2(aa)
\ar[r]^{-T^2\mu_a}\ar[dd]^{\beta_{aa}}&m_XT^2a\ar[dd]^{\beta_a}\\
Tam_XT^2a\ar[d]_{-\beta_a}\\
TaTam_X\ar@{}[rruu]|{\framebox{3}}\ar[rr]^{\kappa_{a,a}-}&&T(aa)m_X\ar@{}[ruu]|{\framebox{4}}\ar[r]^{T\mu_a-}&Tam_X,}\]
commutativity of $\framebox{1}$ and $\framebox{3}$ follows from the coherence conditions (\ref{eq:coh}), while commutativity of $\framebox{2}$ and $\framebox{4}$ follows from the naturality conditions (\ref{eq:nat}).
\end{proof}
\begin{lemma}
For each $\V$-category $(X,a)$, let $e_{(X,a)}=(e_X,\alpha_a)$ and $m_{(X,a)}=(m_X,\beta_a)$.
\begin{enumerate}[\em (1)]
\item $e=(e_{(X,a)})_{(X,a)\in\V\mbox{-}\Cat}:\Id_{\V\mbox{-}\Cat}\to T$ is a 2-natural transformation.
\item $m=(m_{(X,a)})_{(X,a)\in\V\mbox{-}\Cat}:T^2\to T$ is a 2-natural transformation.
\end{enumerate}
\end{lemma}
\begin{proof}
To check that, in the diagrams
\[\xymatrix{&X\ar[rr]^{e_X}\ar[ld]|-{\object@{|}}_a\ar[dd]^>>>>>>{f}&&TX\ar[ld]|-{\object@{|}}^{Ta}\ar[dd]^{Tf}&&T^2X\ar[rr]^{m_X}\ar[ld]|-{\object@{|}}_{T^2a}
\ar[dd]^>>>>>>{T^2f}&&TX\ar[ld]|-{\object@{|}}^{Ta}\ar[dd]^{Tf}\\
X\ar@{}[dr]|{\mbox{\large $\Downarrow$}\varphi_{f}}\ar@{}[urrr]|{\mbox{\large $\stackrel{\alpha_a}{\Rightarrow}$}}\ar[rr]^(0.65){e_X}\ar[dd]_f&&TX\ar@{}[dr]|{\mbox{\large $\Downarrow$}\varphi_{Tf}}\ar[dd]^(0.65){Tf}&&
T^2X\ar@{}[dr]|{\mbox{\large $\Downarrow$}\varphi_{T^2f}}\ar@{}[urrr]|{\mbox{\large $\stackrel{\beta_a}{\Rightarrow}$}}\ar[dd]_{T^2f}\ar[rr]^(0.65){m_X}&&TX\ar@{}[dr]|{\mbox{\large $\Downarrow$}\varphi_{Tf}}\ar[dd]^(0.65){Tf}\\
&Y\ar[rr]^(0.35){e_Y}\ar[ld]|-{\object@{|}}_b&&TY\ar[ld]|-{\object@{|}}^{Tb}&
&T^2Y\ar[rr]^(0.35){m_Y}\ar[ld]|-{\object@{|}}_{T^2b}&&TY\ar[ld]|-{\object@{|}}^{Tb}\\
Y\ar@{}[urrr]|{\mbox{\large $\stackrel{\alpha_b}{\Rightarrow}$}}\ar[rr]^{e_Y}&&TY&&T^2Y\ar@{}[urrr]|{\mbox{\large $\stackrel{\beta_b}{\Rightarrow}$}}\ar[rr]^{m_Y}&&TY}\]
the composites of the 2-cells coincide, one uses again diagrams (\ref{eq:nat}) and (\ref{eq:coh}). To prove 2-naturality, just take in these diagrams a 2-cell $\zeta$ giving a transformation of $(\mon{T}, \V)$-functors instead of $\varphi _f$.
\end{proof}
\begin{theorem}
$(T,e,m)$ is a 2-monad on $\V\mbox{-}\Cat$.
\end{theorem}
\begin{proof}
It remains to check, for each $\V$-category $(X,a)$, the commutativity of the diagrams
\[\xymatrix@!C=7ex{(TX,Ta)\ar[rr]^-{(e_{TX},\alpha_{Ta})}\ar[ddrr]_{(1,1)}&&(T^2X,T^2a)\ar[dd]^{(m_X,\beta_a)}&&(TX,Ta)\ar[ll]_-{(Te_X,\kappa^{-1} T\alpha_a\kappa)}\ar[ddll]^{(1,1)}&(T^3X,T^3a)\ar[rr]^-{(m_{TX},\beta_{Ta})}\ar[dd]_{(Tm_X,\kappa^{-1} T\beta_a\kappa)}&&(T^2X,T^2a)\ar[dd]^{(m_X,\beta_a)}\\
\\
&&(TX,Ta)&&&(T^2X,T^2a)\ar[rr]^-{(m_X,\beta_a)}&&(TX,Ta),}\]
which follows again from diagrams (\ref{eq:nat}) and (\ref{eq:coh}).
\end{proof}
Denoting the 2-category of algebras of this 2-monad by $(\V\mbox{-}\Cat)^\mon{T}$, we get a commutative diagram
\begin{equation}\tag{$\mon{T}$-alg}\label{eq:Talg}
\xymatrix@!R=5ex{\Set^\mon{T}\ar@{}[d]|{\dashv}\ar@<1ex>[d]^{U^\mon{T}}\ar@<-1ex>[rr]\ar@{}[rr]|{\top}&&(\V\mbox{-}\Cat)^\mon{T}\ar@{}[d]|{\dashv}\ar@<1ex>[d]^{U^\mon{T}}\ar@<-1ex>[ll]\\
\Set\ar@<1ex>[u]^{F^\mon{T}}\ar@<-1ex>[rr]\ar@{}[rr]|{\top}&&\V\mbox{-}\Cat.\ar@<-1ex>[ll]\ar@<1ex>[u]^{F^\mon{T}}}
\end{equation}
\section{The fundamental adjunction}\label{sect:FundAdj}
\emph{From now on we assume that $\hat{\beta}_r:Trm_X^\circ\to m_Y^\circ Tr$ is an isomorphism for each $\V$-relation $r:X\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} Y$, so that $m^\circ:T\to T^2$ becomes a pseudo-natural transformation on $\V\mbox{-}\Rel$.}
In this section we will build an adjunction
\begin{equation}\tag{ADJ}\label{eq:ADJ}
(\V\mbox{-}\Cat)^\mon{T}\adjunct{M}{K}(\mT,\V)\mbox{-}\Cat.
\end{equation}
Let $((Z,c,\eta_c,\mu_c),(h,\varphi _h))$ be an object of $(\V\mbox{-}\Cat)^\mon{T}$. The $\V$-category unit $\eta_c$ is a 2-cell
$1_Z \to c = che_Z$. Let $\widetilde{\mu}_c$ be the 2-cell defined by:
\begin{equation}\tag{$\widetilde{\mu}_c$}\label{eq:muc}
\xymatrix{chT(ch)\ar[r]^-{-\kappa^{-1}_{c,h}}& chTcTh \ar[r]^-{-\varphi _h -}& cch Th = cch m_Z\ar[r]^-{\mu_c -}& ch m_Z.}
\end{equation}
\begin{lemma}
The data $(Z, ch,\eta_c,\widetilde{\mu}_c)$ gives a $(\mon{T}, \V)$-category.
\end{lemma}
\begin{proof}
Each of the three $(\mon{T}, \V)$-category axioms follows from the corresponding $\V$-category axiom for $(Z,c,\eta_c,\mu_c)$, using (\ref{eq:mon}) and the fact that $(h,\varphi _h)$ is an algebra structure.
\end{proof}
\noindent We set
\[K((Z,c,\eta_c,\mu_c),(h,\varphi _h))=(Z,ch,\eta_c,\widetilde{\mu}_c).\]
$K$ extends to a 2-functor in the following way. For a morphism of $\mon{T}$-algebras $(f,\varphi _f):((Z,c), h)\to((W,d), k)$, we set $K(f,\varphi _f) = (f,\varphi _fh)$, where $\varphi _f h$ is regarded as a morphism $f ch \longrightarrow dfh=dk Tf$.
For a natural transformation of $\mon{T}$-algebras $\zeta : (f, \varphi _f) \to (g, \varphi _g)$ we define $K(\zeta)=\zeta h$. By straightforward calculations these indeed define a 2-functor.
Let now $(X,a,\eta_a,\mu_a)$ be a $(\mon{T},\V)$-category. Let $\hat{a}=Ta m_X^\circ$. Define a 2-cell $\eta_{\hat{a}} : 1_{TX} \to \hat{a}$ by the composite
\begin{equation}\tag{$\eta_{\hat{a}}$}\label{eq:etaa}
\xymatrix{1_{TX} = T1_X \ar[r]^-{T\eta_a}& T(ae_X)\ar[r]^{\kappa^{-1}_{a,e_X}}&Ta Te_X \ar[r]^-{-\lambda_{m_X}-}& Ta m_X^\circ m_XTe_X=Tam_X^\circ,}
\end{equation}
and define $\mu_{\hat{a}} : \hat{a}\hat{a} \to \hat{a}$ by
\[
\xymatrix{
TX \ar[rr]|-{\object@{|}}^{m^\circ_X} \ar[d]|-{\object@{|}}_{m^\circ_X} && T^2X \ar[rr]|-{\object@{|}}^{Ta} \ar[d]|-{\object@{|}}_{m^\circ_{TX}} \ar@{}[rrd]|{\mbox{\large$\stackrel{\hat{\beta}^{-1}}{\Leftarrow}$ }}&& TX \ar[d]|-{\object@{|}}_{m^\circ_X} \\
T^2X \ar[rr]|-{\object@{|}}^{Tm^\circ_X} \ar@{}[rrd]|{\mbox{\large $\stackrel{\rho_{Tm^\circ_X}}{\Leftarrow}$}} \ar@/_1.5pc/[rrd]_{1_{T^2X}} && T^3X \ar[rr]|-{\object@{|}}^{T^2a} \ar[d]_{Tm_X} \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\mu_a-}{\Leftarrow}$}}&& T^2X \ar[d]|-{\object@{|}}_{Ta} \\
&&T^2X \ar[rr]|-{\object@{|}}^{Ta} && TX.\\
}
\]
\begin{lemma}
The data $(TX,\hat{a}, \eta_{\hat{a}},\mu_{\hat{a}})$ determines a $\V$-category.
\end{lemma}
\begin{proof}
The three $\V$-category axioms follow from the corresponding $(\mon{T}, \V)$-category axioms for $(X,a,\eta_a,\mu_a)$.
\end{proof}
Let $\varphi _{\hat{a}} : m_XT\hat{a}\to \hat{a} m_X$ be the composite 2-cell
\[\xymatrix@R=1.5em{
T^2X \ar[d]_{m_X} \ar[rr]|-{\object@{|}}^{Tm^\circ_X} \ar@{}[rrd]|{\mbox{\large $\stackrel{}{\Leftarrow}$}} \ar@/^3.pc/[rrrr]|-{\object@{|}}^{T(Tam^\circ_X)}_{\mbox{\large $\stackrel{\kappa^{-1}_{Ta,m^\circ_X}}{\Leftarrow}$}} &&T^3X \ar[rr]|-{\object@{|}}^{T^2a} \ar[d]_{m_{TX}} \ar@{}[rrd]|{\mbox{\large $\stackrel{\beta_a}{\Leftarrow}$}} &&T^2X \ar[d]^{m_X}\\
TX \ar[rr]|-{\object@{|}}_{m^\circ_X}&&T^2X \ar[rr]|-{\object@{|}}_{Ta} && TX.
}
\]
Here the left-hand 2-cell is the mate of the identity 2-cell on $m_X m_{TX}=m_XTm_X$. Direct calculations yield:
\begin{lemma}
The pair $(m_X, \varphi _{\hat{a}})$ is a $\V$-functor $T(TX, \hat{a}) \to (TX, \hat{a})$; moreover, it defines a $\mon{T}$-algebra structure on the $\V$-category $(TX, \hat{a})$.
\end{lemma}
We set
\[M(X,a)=((TX,\hat{a}), (m_X, \varphi _{\hat{a}})).\]
We extend this construction to a 2-functor as follows. For a $(\mon{T},\V)$-functor $(f,\varphi _f):(X,a)\to(Y,b)$, $M(f,\varphi _f)=(Tf,\widetilde{\varphi }_{Tf})$, where $\widetilde{\varphi }_{Tf}$ is given by
\[\xymatrix@R=1.5em{
TX \ar[d]_{Tf} \ar[rr]|-{\object@{|}}^{m^\circ_X} \ar@{}[rrd]|{\mbox{\large $\stackrel{\hat{\beta}_f}{\Leftarrow}$}} &&T^2X \ar[rr]|-{\object@{|}}^{Ta} \ar[d]_{T^2f} \ar@{}[rrd]|{\mbox{\large $\stackrel{-T\varphi _f-}{\Leftarrow}$}} &&TX \ar[d]^{Tf}\\
TY \ar[rr]|-{\object@{|}}_{m^\circ_Y}&&T^2Y \ar[rr]|-{\object@{|}}_{Tb} && TY.
}\]
For a natural transformation of $(\mon{T},\V)$-functors $\zeta : (f, \varphi _f) \to (g, \varphi _g)$, $M(\zeta)$ is defined by a similar diagram. By direct verification $M$ is a 2-functor.
\begin{theorem}
$M$ is a left 2-adjoint to $K$.
\end{theorem}
\begin{proof}
Given a $(\mon{T},\V)$-category $(X,a,\eta_a,\mu_a)$,
\[
\xymatrix{(e_X,\widetilde{\alpha}_a):(X,a,\eta_a,\mu_a)\ar[r]& KM(X,a,\eta_a,\mu_a)=(TX,Tam_X^\circ m_X,\eta_{\hat{a}},\widetilde{\mu}_{\hat{a}}),}\]
is a $(\mon{T}, \V)$-functor, where $\widetilde{\alpha}_a$ is the composite
\begin{equation}\tag{unit}\label{eq:unit}
\xymatrix{(e_Xa\ar[r]^-{\alpha_a}&Tae_{TX}\ar[rr]^-{-\lambda_{m_X}-}&&Tam_X^\circ m_Xe_{TX}=Tam_X^\circ m_X Te_X).}\end{equation}
These functors define a natural transformation $1 \rightarrow KM$. Given a $\mon{T}$-algebra $((Z,c,\eta_c,\mu_c),(h,\varphi _h))$,
\[\xymatrix{(h, \widetilde{\varphi }_h): MK((Z,c,\eta_c,\mu_c),(h,\varphi _h))=(TZ,T(ch)m_Z^\circ,\eta_{\widehat{ch}},\mu_{\widehat{ch}}) \ar[r]& ((Z,c,\eta_c,\mu_c),(h,\varphi _h))},\]
is a morphism of $\mon{T}$-algebras, where $\widetilde{\varphi }_h$ is defined as
\[hT(ch)m_Z^\circ\xrightarrow{-\kappa_{c,h}^{-1}}h TcTh m_Z^\circ \xrightarrow{\varphi _h -} ch Th m_Z^\circ = ch m_Zm_Z^\circ \xrightarrow{- \rho_{m_Z}} ch.\]
These morphisms define a natural transformation $MK \rightarrow 1$. The two natural transformations just constructed serve as the unit and the counit of our adjunction, and the triangle identities are verified by straightforward computation.
\end{proof}
\section{$\mon{T}$ as a $(\mT,\V)\mbox{-}\Cat$ monad}
Let us identify the 2-monad on $(\mT,\V)\mbox{-}\Cat$ induced by the adjunction $M \dashv K$, which we denote again by $\mon{T}= (KM = T,e,m)$.
Thus, $T = KM$ is a 2-endofunctor on $(\mT,\V)\mbox{-}\Cat$. To a $(\mon{T}, \V)$-category $(X, a,\eta_a,\mu_a)$ it assigns the $(\mon{T},\V)$-category
$(TX, \hat{a} m_X=Tam_X^\circ m_X,\eta_{\hat{a}},\widetilde{\mu}_{\hat{a}})$ with components defined in the diagrams (\ref{eq:etaa}) and (\ref{eq:muc}) of the last section; to a
$(\mon{T}, \V)$-functor $(f, \varphi _f)$ it assigns the $(\mon{T},\V)$-functor $(Tf,\widetilde{\varphi }_f)$, which can be diagrammatically specified by
\[\xymatrix{T^2X\ar[d]_{m_X}\ar[rr]^{T^2f}&&T^2Y\ar[d]^{m_Y}\\
TX\ar[d]|-{\object@{|}}_{m_X^\circ}\ar[rr]^{Tf}&&TY\ar[d]|-{\object@{|}}^{m_Y^\circ}\\
T^2X\ar@{}[urr]|{\mbox{\large $\stackrel{\hat{\beta}_f}{\Rightarrow}$}}\ar[d]|-{\object@{|}}_{Ta}\ar[rr]^{T^2f}&&T^2Y\ar[d]|-{\object@{|}}^{Tb}\\
TX\ar@{}[urr]|{\mbox{\large $\stackrel{-T\varphi _f-}{\Rightarrow}$}}\ar[rr]^{Tf}&&TY,}\]
and the $\mon{T}$-image of a $(\mon{T}, \V)$-natural transformation $\zeta : (f, \varphi _f) \to (g, \varphi _g)$ is computed by a similar diagram.
The unit of the 2-monad is the unit $(e,\widetilde{\alpha})$ of the adjunction $M \dashv K$ defined in (\ref{eq:unit}). The multiplication of the 2-monad is given by $(m,\widetilde{\beta})$, whose component at a $(\mon{T}, \V)$-category $(X, a)$, a $(\mon{T}, \V)$-functor $MKMK(X, a) \to MK(X, a)$, is pictorially described by:
\[\xymatrix{T^3X\ar `l^d[ddddd]`^r[ddddd]|-{\object@{|}}_{MKMK(a)} [ddddd]
\ar[d]_{m_{TX}}\ar[rr]^{Tm_X}&&T^2X\ar `r_d[ddddd]`_l[ddddd]|-{\object@{|}}^{MK(a)} [ddddd]\ar[d]^{m_X}\\
T^2X\ar[rr]^{m_X}\ar[d]|-{\object@{|}}_{m_{TX}^\circ}&&TX\ar[d]^1\\
T^3X\ar[rr]_{m_Xm_{TX}}\ar@{}[urr]|{\mbox{\large $\stackrel{-\rho_{m_{TX}}}{\Rightarrow}$}}\ar[d]_{Tm_X}&&TX\ar[d]^1\\
T^2X\ar[d]|-{\object@{|}}_{Tm_X^\circ}\ar[rr]^{m_X}&&TX\ar[d]|-{\object@{|}}^{m_X^\circ}\\
T^3X\ar[rr]^{m_{TX}}\ar@{}[rru]|{\mbox{\large $\stackrel{(-\rho_{Tm_X})(\lambda_{m_X}-)}{\Rightarrow}$}}\ar[d]|-{\object@{|}}_{T^2a}&&T^2X\ar[d]|-{\object@{|}}^{Ta}\\
T^2X\ar[rr]^-{m_X}\ar@{}[rru]|{\mbox{\large $\stackrel{\beta_{a}}{\Rightarrow}$}}&&TX.}\]
\begin{theorem}
The 2-monad $(T, e,m)$ on $(\mT,\V)\mbox{-}\Cat$ is a KZ monad.
\end{theorem}
\begin{proof}
One of the equivalent conditions expressing the KZ property is the existence of a modification $\delta : Te \to eT : T \to TT$ such that
\begin{equation}\tag{mod}\label{eq:mod}
\delta e = 1_{ee}\mbox{ and }m\delta = 1_{1_T}.
\end{equation}
For a $(\mon{T}, \V)$-category $(X, a, \eta_a, \mu_a)$, let $\delta_{(X, a)}$ be the composite 2-cell
\[\xymatrix{e_{TX} \ar[r]^-{T^2\eta_a-}& T^2(ae_X)e_{TX} \ar[r]^{T\kappa_{a,e_X}-} &T(TaTe_X)e_{TX} \ar[r]^{\kappa_{Ta,Te_X}-} & T^2aT^2e_Xe_{TX}}\]
\[\xymatrix{= T(Ta)e_{T^2X}Te_X\ar[rrr]^-{T(Ta\lambda_{m_X})\lambda_{m_{TX}}-}&&& T(Tam_X^\circ m_X)m_{TX}^\circ m_{TX}e_{T^2X}Te_X.}\]
This defines a $(\mon{T}, \V)$-natural transformation
\[\delta_{(X,a)}:(Te_X, T\widetilde{\alpha}_a) \to (e_{TX}, \widetilde{\alpha}_{\hat{a} m_X}).\]
The family of these natural transformations gives the required modification $Te \to eT$. The first of the two required equalities (\ref{eq:mod}) is straightforward; the second one follows from (\ref{eq:mon}).
\end{proof}
\section{Representable $(\mon{T},\V)$-categories: from Nachbin spaces\\to Hermida's representable multicategories}
Since $\mon{T}$ is a KZ monad on $(\mT,\V)\mbox{-}\Cat$, a $\mon{T}$-algebra structure on a $(\mon{T},\V)$-category $(X,a)$ is, up to isomorphism, a reflective left adjoint to the unit $e_{(X,a)}$; hence having a $\mon{T}$-algebra structure is a property of a $(\mon{T},\V)$-category, rather than additional structure. Following Hermida \cite{Her00}, we say that:
\begin{definition}
A $(\mon{T},\V)$-category is \emph{representable} if it has a pseudo-algebra structure for $\mon{T}$.
\end{definition}
In the diagram below $((\mT,\V)\mbox{-}\Cat)^\mon{T}$ is the 2-category of $\mon{T}$-algebras, $F^\mon{T}\dashv G^\mon{T}$ is the corresponding adjunction, and $\widetilde{K}$ is the comparison 2-functor:
\[\xymatrix{(\V\mbox{-}\Cat)^\mon{T}\ar@{}[rd]|{\top}\ar[rr]^{\widetilde{K}}\ar@<1ex>[rd]^K&&((\mT,\V)\mbox{-}\Cat)^\mon{T}.\ar@<1ex>[ld]^{G^\mon{T}}\ar@{}[ld]|{\bot}\\
&(\mT,\V)\mbox{-}\Cat\ar@<1ex>[lu]^M\ar@<1ex>[ru]^{F^\mon{T}}}\]
The composition of the adjunctions $F^\mon{T}\dashv G^\mon{T}$ and $A^\circ\dashv A_e$ (see (\ref{eq:adj}) in Section \ref{sect:three}) gives an adjunction $F_e^\mon{T}\dashv G_e^\mon{T}$ that induces again the monad $\mon{T}$ on $\V\mbox{-}\Cat$. Let $\widetilde{A}_e$ be the corresponding comparison 2-functor as depicted in the following diagram:
\[\xymatrix{(\V\mbox{-}\Cat)^\mon{T}\ar@<-1ex>@{}[rddd]|{\top}\ar@{.>}[rddd]\ar@{}[rd]|{\top}\ar@<-1ex>[rr]_{\widetilde{K}}\ar@<1ex>[rd]^K&&((\mT,\V)\mbox{-}\Cat)^\mon{T}.
\ar@<1ex>[ld]^{G^\mon{T}}\ar@{}[ld]|{\bot}\ar@{.>}@<-1ex>[ll]_{\widetilde{A}_e}\ar@{.>}@<2ex>[lddd]^{G_e^\mon{T}}\ar@{}@<1ex>[lddd]|{\bot}\\
&(\mT,\V)\mbox{-}\Cat\ar@<1ex>[lu]^M\ar@<1ex>[ru]^{F^\mon{T}}\ar@{.>}@<1ex>[dd]^{A_e}\ar@{}[dd]|{\dashv}\\
\\
&\V\mbox{-}\Cat\ar@{.>}@<2ex>[luuu]\ar@{.>}[ruuu]^{F_e^\mon{T}}\ar@<1ex>@{.>}[uu]^{A^\circ}}\]
\begin{theorem}\label{th:rep}
$\widetilde{K}$ and $ \widetilde{A}_e$ define an adjoint 2-equivalence.
\end{theorem}
\begin{proof}
The isomorphism $\widetilde{A}_e\widetilde{K} \cong 1$ can be directly verified. We will establish that $\widetilde{K}\widetilde{A}_e \cong 1$.
Suppose that a $(\mon{T}, \V)$-functor $(f, \varphi _f) : T(X, a) \rightarrow (X, a)$ is a $\mon{T}$-algebra structure on a $(\mon{T}, \V)$-category $(X, a)$. Observe that the underlying $\V$-relation of the representable $(\mon{T}, \V)$-category $\widetilde{K}\widetilde{A}_e((X, a), (f, \varphi _f))$ is $ae_Xf : TX \longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} X$.
Since $\mon{T}$ is a KZ monad, following \cite{KellyLack}, $(f, \varphi _f)$ is a left adjoint to the unit $(e_X, \widetilde{\alpha}_a)$ of $\mon{T}$. By Proposition \ref{th:adj} we get an isomorphism
\[\omega : e^\circ_XTam^\circ_Xm_X \rightarrow aTf.\]
Let $\iota$ denote the composite isomorphism
\[ae_Xf = aTfe_{TX} \xrightarrow{\omega^{-1}-} e^\circ_XTam^\circ_Xm_Xe_{TX} =e^\circ_XTam^\circ_Xm_XTe_{X} \xrightarrow{\omega-} aTfTe_{X} = a.\]
It can be verified that the pair $(1_X, \iota)$ is an isomorphism $\widetilde{K}\widetilde{A}_e((X, a), (f, \varphi _f)) \to ((X, a), (f, \varphi _f))$ in $((\mT,\V)\mbox{-}\Cat)^{\mon{T}}$. The family of these morphisms determines the required 2-natural isomorphism $\widetilde{K}\widetilde{A}_e \cong 1$.
\end{proof}
We explain now how representable $(\mon{T},\V)$-categories capture two important cases which were developed independently.
\subsection*{Nachbin's ordered compact Hausdorff spaces.}
For $\V = \mbox{\sf 2}$, let $\mon{T}=\mon{U}=(U,e,m)$ be the ultrafilter monad, extended to $\mbox{\sf 2}$-$\Rel=\Rel$ as in \cite{Ba}, so that, for any relation $r:X\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} Y$, $Ur=Uq(Up)^\circ$, where $p:R\to X$ and $q:R\to Y$ are the projections of $R=\{(x,y)\mid x\,r\,y\}$. Then $\Cats{2}\simeq\Ord$, and the functor $U:\Ord\to\Ord$ sends an ordered set $(X,\le)$ to $(UX,U\!\le)$, where
\[
\mathfrak{x}\,(U\!\le)\,\mathfrak{y}\hspace{1em}\text{whenever}\hspace{1em}\forall A\in\mathfrak{x},B\in\mathfrak{y}\exists x\in A,y\in B\,.\,x\le y,
\]
for all $\mathfrak{x},\mathfrak{y}\in UX$. The algebras for the monad $\mon{U}$ on $\Ord$ are precisely the ordered compact Hausdorff spaces as introduced in \cite{Nac50}:
\begin{definition}
An \emph{ordered compact Hausdorff space} is an ordered set $X$ equipped with a compact Hausdorff topology so that the graph of the order relation is a closed subset of the product space $X\times X$.
\end{definition}
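The relation $U\!\le$ defined above can be probed concretely on a finite ordered set, where every ultrafilter is principal. The following sketch (plain Python, purely illustrative; all names are ours) enumerates principal ultrafilters as sets of subsets and checks that $U\!\le$ between principal ultrafilters recovers the original order.

```python
from itertools import combinations

def powerset(xs):
    # All subsets of xs, as frozensets.
    return [frozenset(s) for r in range(len(xs) + 1)
            for s in combinations(xs, r)]

def principal(X, x):
    # The principal ultrafilter at x: all subsets of X containing x.
    # On a finite set, every ultrafilter is of this form.
    return frozenset(A for A in powerset(X) if x in A)

def U_le(u, v, le):
    # u (U<=) v  iff  for all A in u, B in v there exist a in A, b in B with a <= b.
    return all(any(le(a, b) for a in A for b in B) for A in u for B in v)

X = [0, 1, 2]
le = lambda a, b: a <= b  # the usual order on {0, 1, 2}

# On principal ultrafilters, U<= agrees with the original order:
for x in X:
    for y in X:
        assert U_le(principal(X, x), principal(X, y), le) == le(x, y)
```

In particular, on finite sets $U$ acts, up to the bijection $x\mapsto\dot{x}$, as the identity, so the interesting behaviour of $U\!\le$ only appears on infinite sets.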
We denote the category of ordered compact Hausdorff spaces and monotone and continuous maps by $\catfont{OrdCompHaus}$. It is shown in \cite{Tho09} that, for a compact Hausdorff space $X$ with ultrafilter convergence $\alpha:UX\to X$ and an order relation $\le$ on $X$, the set $\{(x,y)\mid x\le y\}$ is closed in $X\times X$ if and only if $\alpha:UX\to X$ is monotone; and this shows
\[
\catfont{OrdCompHaus}\simeq\Ord^\mon{U},
\]
and the diagram (\ref{eq:Talg}) at the end of Section \ref{sect:VCatMonad} becomes
\[
\xymatrix@!R=5ex{\CompHaus\ar@{}[d]|{\dashv}\ar@<1ex>[d]^{U^\mon{T}}\ar@<-1ex>[rr]\ar@{}[rr]|{\top}&&\catfont{OrdCompHaus}\ar@{}[d]|{\dashv}\ar@<1ex>[d]^{U^\mon{T}}\ar@<-1ex>[ll]\\
\Set\ar@<1ex>[u]^{F^\mon{T}}\ar@<-1ex>[rr]\ar@{}[rr]|{\top}&&\Ord.\ar@<-1ex>[ll]\ar@<1ex>[u]^{F^\mon{T}}}
\]
The functor $K:\catfont{OrdCompHaus}\to\catfont{Top}=(\mon{U},\mbox{\sf 2})$-$\Cat$ of Section \ref{sect:FundAdj} can now be described as sending $((X,\leq),\alpha:UX\to X)$ to the space $KX=(X,a)$ with ultrafilter convergence $a:UX\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} X$ given by the composite
\[\xymatrix{UX\ar[r]^-{\alpha}&X\ar[r]|-{\object@{|}}^-{\leq}&X}\]
of the order relation $\le:X\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} X$ of $X$ with the ultrafilter convergence $\alpha:UX\to X$ of the compact Hausdorff topology of $X$. In terms of open subsets, the topology of $KX$ is given precisely by those open subsets of the compact Hausdorff topology of $X$ which are down-closed with respect to the order relation of $X$. On the other hand, for a topological space $(X,a)$, the ordered compact Hausdorff space $MX$ is the set $UX$ of all ultrafilters of $X$ with the order relation
\[\xymatrix{UX\ar[r]|-{\object@{|}}^-{m_X^\circ}&UUX\ar[r]|-{\object@{|}}^-{Ua}&UX,}\]
and with the compact Hausdorff topology given by the convergence $m_X:UUX\to UX$; put differently, the order relation on $UX$ is defined by
\[
\mathfrak{x}\le\mathfrak{y}\iff\forall A\in\mathfrak{x}\,.\,\overline{A}\in\mathfrak{y},
\]
and the compact Hausdorff topology on $UX$ is generated by the sets
\[
\{\mathfrak{x}\in UX\mid A\in\mathfrak{x}\}\hspace{2em}(A\subseteq X).
\]
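For a finite space these two structures can be computed directly: every ultrafilter is principal, and for principal ultrafilters $\dot{x},\dot{y}$ the condition $\forall A\in\mathfrak{x}\,.\,\overline{A}\in\mathfrak{y}$ reduces to $y\in\overline{\{x\}}$. A small Python sketch (ours, purely illustrative) on the Sierpi\'{n}ski space:

```python
def closure(A, opens, X):
    # Closure of A: the points all of whose open neighbourhoods meet A.
    return {x for x in X
            if all(U & set(A) for U in opens if x in U)}

# Sierpinski space: X = {0, 1}, with {1} open and {0} not.
X = {0, 1}
opens = [set(), {1}, {0, 1}]

def le(x, y):
    # Order on principal ultrafilters: x' <= y'  iff  y lies in cl({x}).
    return y in closure({x}, opens, X)

assert closure({0}, opens, X) == {0}
assert closure({1}, opens, X) == {0, 1}
assert le(1, 0) and not le(0, 1)  # 0 in cl({1}), but 1 not in cl({0})
```

On a finite space the compact Hausdorff topology on $UX$ generated by the sets $\{\mathfrak{x}\mid A\in\mathfrak{x}\}$ is, under $x\mapsto\dot{x}$, just the discrete topology.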
The monad $\mon{U}=(U,e,m)$ on $\Top$ induced by the adjunction $M\dashv K$ assigns to each topological space $X$ the space $UX$ with basic open sets
\[
\{\mathfrak{x}\in UX\mid A\in\mathfrak{x}\}\hspace{2em}(A\subseteq X\text{ open}).
\]
By definition, a topological space $X$ is called \emph{representable} if $X$ is a pseudo-algebra for $\mon{U}$, that is, whenever $e_X:X\to UX$ has a (reflective) left adjoint. Note that a left adjoint of $e_X:X\to UX$ picks, for every ultrafilter $\mathfrak{x}$ on $X$, a smallest convergence point of $\mathfrak{x}$. The following result provides a characterisation of representable topological spaces.
\begin{theorem}
Let $X$ be a topological space. The following assertions are equivalent.
\begin{enumerate}[\em (i)]
\item $X$ is representable.
\item $X$ is locally compact and every ultrafilter has a smallest convergence point.
\item $X$ is locally compact, weakly sober and the way-below relation on the lattice of open subsets is stable under finite intersection.
\item $X$ is locally compact, weakly sober and finite intersections of compact down-sets are compact.
\end{enumerate}
\end{theorem}
Representable T$_0$-spaces are known under the designation \emph{stably compact spaces}, and are extensively studied in \cite{GHK+03,Jun04,Law11} and \cite{Sim82} (called \emph{well-compact spaces} there). One can also find there the following characterisation of morphisms between representable spaces.
\begin{theorem}
Let $f:X\to Y$ be a continuous map between representable spaces. Then the following are equivalent.
\begin{enumerate}[\em (i)]
\item $f$ is a pseudo-homomorphism.
\item For every compact down-set $K\subseteq Y$, $f^{-1}(K)$ is compact.
\item The frame homomorphism $f^{-1}:\mathcal{O} Y\to\mathcal{O} X$ preserves the way-below relation.
\end{enumerate}
\end{theorem}
\subsection*{Hermida's representable multicategories}
We sketch now some of the main achievements of \cite{Her00,Her01} which fit in our setting and can be seen as counterparts to the classical topological results mentioned above. In \cite{Her00,Her01} Hermida is working in a finitely complete category $\catfont{B}$ admitting free monoids so that the free-monoid monad $\mon{M}=(M,e,m)$ is Cartesian; however, for the sake of simplicity we consider only the case $\catfont{B}=\Set$ here. We write $\catfont{Span}$ to denote the bicategory of spans in $\Set$, and recall that a \emph{category} can be viewed as a span
\[
\xymatrix{& C_1\ar[dl]_d\ar[dr]^c\\ C_0 && C_0}
\]
which carries the structure of a monoid in the category $\catfont{Span}(C_0,C_0)$. The 2-category of monoids in $\Cat$ (aka strict monoidal categories) and strict monoidal functors is denoted by $\catfont{MonCat}$, and the diagram (\ref{eq:Talg})
becomes
\[\xymatrix@!R=5ex{\Mon\ar@{}[d]|{\dashv}\ar@<1ex>[d]^{U^\mon{T}}\ar@<-1ex>[rr]\ar@{}[rr]|{\top}&&cat\-egory font{MonCat}\ar@{}[d]|{\dashv}\ar@<1ex>[d]^{U^\mon{T}}\ar@<-1ex>[ll]\\
\Set\ar@<1ex>[u]^{F^\mon{T}}\ar@<-1ex>[rr]\ar@{}[rr]|{\top}&&\Cat.\ar@<-1ex>[ll]\ar@<1ex>[u]^{F^\mon{T}}}\]
A \emph{multicategory} can be viewed as a span
\[
\xymatrix{& C_1\ar[dl]_d\ar[dr]^c\\ MC_0 && C_0}
\]
in $\Set$ together with a monoid structure in an appropriate category. This amounts to the following data:
\begin{enumerate}[--]
\item a set $C_0$ of objects;
\item a set $C_1$ of arrows where the domain of an arrow $f$ is a sequence $(X_1,X_2,\ldots,X_n)$ of objects and the codomain is an object $X$, depicted as
\[
f:(X_1,X_2,\ldots,X_n)\to X;
\]
\item an identity $1_X:(X)\to X$;
\item a composition operation.
\end{enumerate}
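A minimal executable sketch of this data, using the endomorphism multicategory of a set (objects a single set $X$, arrows $(X,\ldots,X)\to X$ the $n$-ary functions, and composition by substitution); the helper names `compose` and `arity` conventions are our own:

```python
# Endomorphism multicategory of a set X: an arrow (X,...,X) -> X is an
# n-ary function, the identity is the unary identity function, and
# composition substitutes the outputs of g_1,...,g_n into f.

def identity(x):
    return x

def compose(f, f_arity, gs, g_arities):
    """Compose f (taking f_arity inputs) with arrows gs, where gs[i]
    takes g_arities[i] inputs; the composite takes sum(g_arities)."""
    def h(*args):
        inputs, i = [], 0
        for g, n in zip(gs, g_arities):
            inputs.append(g(*args[i:i + n]))
            i += n
        return f(*inputs)
    return h

add = lambda x, y: x + y          # an arrow (X, X) -> X
double = lambda x: 2 * x          # an arrow (X) -> X

h = compose(add, 2, [double, identity], [1, 1])
print(h(3, 4))  # 2*3 + 4 = 10
```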
The 2-category of multicategories, morphisms of multicategories and appropriate 2-cells is denoted by $\MultiCat$. Keeping in mind that $\catfont{Span}$ is equivalent to $\Rels{\Set}$, for $\V=\Set$ and $\mon{T}=\mon{M}$, the fundamental adjunction (\ref{eq:ADJ}) of Section \ref{sect:FundAdj} specialises to:
\begin{theorem}
There is a 2-monadic 2-adjunction $\MultiCat\adjunct{M}{K}\catfont{MonCat}$.
\end{theorem}
Here, for a strict monoidal category
\[
\xymatrix{& C_1\ar[dl]_d\ar[dr]^c\\ C_0 && C_0}
\]
with monoid structure $\alpha:MC_0\to C_0$ on $C_0$, the corresponding multicategory is given by the composite of
\[
\xymatrix{& MC_0\ar[dl]_1\ar[dr]^\alpha && C_1\ar[dl]_d\ar[dr]^c\\ MC_0 && C_0 && C_0}
\]
in $\catfont{Span}$; and to a multicategory
\[
\xymatrix{& C_1\ar[dl]_d\ar[dr]^c\\ MC_0 && C_0}
\]
one assigns the strict monoidal category
\[
\xymatrix{&& MC_1\ar[dl]_d\ar[dr]^c\\ & MMC_0\ar[dl]_{m_{C_0}} && MC_0\\ MC_0}
\]
where the objects in the span are free monoids.
The induced 2-monad on $\MultiCat$ is of Kock-Z\"oberlein type, and a \emph{representable multicategory} is a pseudo-algebra for this monad. In elementary terms, a multicategory
\[
\xymatrix{& C_1\ar[dl]_d\ar[dr]^c\\ MC_0 && C_0}
\]
is representable precisely if for every $(x_1,\ldots,x_n)\in MC_0$ there exists a morphism (called universal arrow)
\[
(x_1,\ldots,x_n)\to \otimes(x_1,\ldots,x_n)
\]
which induces a bijection
\[
\hom((x_1,\ldots,x_n),y)\simeq\hom(\otimes(x_1,\ldots,x_n),y),
\]
natural in $y$, and universal arrows are closed under composition.
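A hedged illustration of representability: in the multicategory of finite sets, where an arrow $(X_1,\ldots,X_n)\to Y$ is a function $X_1\times\cdots\times X_n\to Y$, the tensor of a sequence is its cartesian product and the universal arrow is the identity on it, so the bijection of hom-sets simply counts the same functions twice:

```python
from itertools import product

# Multicategory of finite sets: an arrow (X1, X2) -> Y is a function
# X1 x X2 -> Y; the tensor X1 (x) X2 is the cartesian product, and
# hom((X1, X2), Y) is in bijection with hom(X1 (x) X2, Y).

X1, X2, Y = {0, 1}, {0, 1, 2}, {0, 1}

# multimaps (X1, X2) -> Y, represented by their value tables
multi_hom = list(product(Y, repeat=len(X1) * len(X2)))

# ordinary maps on the tensor object X1 x X2
tensor = list(product(X1, X2))
plain_hom = list(product(Y, repeat=len(tensor)))

print(len(multi_hom), len(plain_hom))  # both 2**6 = 64
```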
\section{Duals for $(\mon{T},\V)$-categories}
For a $\V$-category $(Z, c) = (Z, c, \eta_c, \mu_c)$, the \emph{dual} $D(Z, c)$ of $(Z, c)$ is defined to be the $\V$-category $Z^\op=(Z, c^\op, \eta_{c^\op}, \mu_{c^\op})$, with $c^\op=c^\circ$, $\eta_{c^\op}=\eta_c^\circ$ and $\mu_{c^\op}=\mu_c^\circ$. This construction extends to a 2-functor
\[D : \V\mbox{-}\Cat \to \V\mbox{-}\Cat^\co\]
as follows. For a $\V$-functor $(f, \varphi _f) : (Z, c) \to (W, d)$ set $D(f, \varphi _f) = f^\op=(f, \varphi _f^\op):(Z,c^\circ)\to(W,d^\circ)$, where $\varphi _f^\op$ is defined by
\[\xymatrix{f c^\circ \ar[r]^-{-\lambda_f}& f c^\circ f^\circ f = f (fc)^\circ f \ar[r]^-{-(\varphi _f)^\circ -}& f (df)^\circ f = ff^\circ d^\circ f \ar[r]^-{\rho_f -}& d^\circ f.}\]
On 2-cells $\zeta: (f, \varphi _f) \to (g, \varphi _g)$ of $\V\mbox{-}\Cat$, set $D(\zeta) = \zeta^\op$, which is defined analogously by
\[\xymatrix{f c^\circ \ar[r]^-{-\lambda_g}& f c^\circ g^\circ g = f (gc)^\circ g \ar[r]^-{-\zeta^\circ -}& f (df)^\circ g = ff^\circ d^\circ g \ar[r]^-{\rho_f -}& d^\circ g.}\]
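In the thin case, dualisation is just the relational converse. For instance, for $\V=[0,\infty]$ (Lawvere metric spaces) the structure of a finite $\V$-category is a matrix $d$ with zero diagonal satisfying the triangle inequality, and the dual simply transposes it; a small numerical sketch (function names are ours):

```python
import numpy as np

# V = [0, oo]: a finite V-category is a matrix d with d[x][x] = 0 and
# d[x][z] <= d[x][y] + d[y][z]; its dual is the transposed matrix,
# which satisfies the same axioms.

def is_lawvere_metric(d):
    n = len(d)
    ok_refl = all(d[i][i] == 0 for i in range(n))
    ok_tri = all(d[i][k] <= d[i][j] + d[j][k] + 1e-12
                 for i in range(n) for j in range(n) for k in range(n))
    return ok_refl and ok_tri

def dual(d):
    return np.asarray(d).T

d = np.array([[0.0, 1.0, 3.0],
              [2.0, 0.0, 2.0],
              [5.0, 4.0, 0.0]])
print(is_lawvere_metric(d), is_lawvere_metric(dual(d)))  # True True
```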
The monad $\mon{T}$ on $\V\mbox{-}\Cat$ of Section \ref{sect:VCatMonad} gives rise to a monad $\mon{T}$ on $\V\mbox{-}\Cat^\co$.
From now on \emph{we assume that $T(c^\circ)=(Tc)^\circ$ for every $\V$-relation $c$}.
Let $((Z, c), (h, \varphi _h))$ be a $\mon{T}$-algebra. Then
\[\xymatrix{(TZ, Tc^\circ) \ar[rr]^{D(h, \varphi _h)} && (Z, c^\circ)}\]
gives a $\mon{T}$-algebra structure on $(Z, c^\circ)$, which we write as $((Z, c^\circ), h^\op)$.
\begin{definition}
The \emph{dual} of a $\mon{T}$-algebra $((Z, c), h)$ is the $\mon{T}$-algebra $(Z^\op,h^\op)=((Z, c^\circ), h^\op)$.
\end{definition}
This construction extends to a 2-functor
\begin{equation}\tag{Dual}\label{eq:Dual}
D : (\V\mbox{-}\Cat)^\mon{T} \longrightarrow ((\V\mbox{-}\Cat)^\mon{T})^\co
\end{equation}
as follows. If $(f, \varphi _f): ((Z, c), h) \to ((W,d), k)$ is a morphism of $\mon{T}$-algebras, then $D(f, \varphi _f)=f^\op:((Z, c^\circ), h^\op) \to ((W, d^\circ), k^\op)$ is a morphism of $\mon{T}$-algebras, and if $\zeta:(f,\varphi _f)\to(g,\varphi _g)$ is a 2-cell in $(\V\mbox{-}\Cat)^\mon{T}$, then $D(\zeta)=\zeta^\op:D(g, \varphi _g) \to D(f, \varphi _f)$ is a 2-cell in $\V\mbox{-}\Cat^\mon{T}$.
Using the adjunction $M\dashv K$ we can define the dual of a $(\mon{T},\V)$-category using the construction of duals in $(\V\mbox{-}\Cat)^\mon{T}$ via the composition:
\[\xymatrix@=8ex{(\V\mbox{-}\Cat)^\mon{T}\ar@`{(-20,-20),(-20,20)}^D\ar@{}[r]|{\top}\ar@<1mm>@/^2mm/[r]^{{K}} & (\mT,\V)\mbox{-}\Cat.\ar@<1mm>@/^2mm/[l]^{{M}}}\]
\begin{definition}
The \emph{dual} of a $(\mon{T}, \V)$-category $(X,a)$ is the $(\mon{T},\V)$-category $KDM(X,a)$; that is,
\[X^\op=(TX,m_XTa^\circ m_X).\]
\end{definition}
For representable $(\mon{T},\V)$-categories $(X,a)$ we can use directly extensions of $\widetilde{K}$ and $\widetilde{A}_e$ to pseudo-algebras, so that we can obtain a dual structure $X^{\widetilde{\op}}$ on the same underlying set $X$ via the composition $\widetilde{K}D\widetilde{A}_e$:
\[\xymatrix@=8ex{(\V\mbox{-}\Cat)^\mon{T}\ar@`{(-20,-20),(-20,20)}^D\ar@{}[r]|-{\top}\ar@<1mm>@/^2mm/[r]^-{\widetilde{K}} & ((\mT,\V)\mbox{-}\Cat)^\mon{T}.\ar@<1mm>@/^2mm/[l]^-{\widetilde{A}_e}}\]
Then it is easily checked that, for any $(\mon{T},\V)$-category $X$,
\[X^\op=(TX)^{\widetilde{\op}},\]
since $TX$, as a free $\mon{T}$-algebra in $(\mT,\V)\mbox{-}\Cat$, is representable.
For $\V$ a quantale, duals of $(\mon{T},\V)$-categories proved to be useful in the study of (co)completeness (see \cite{CH09,CH09a,Hof11}). Next we outline briefly the setting used and the role duals play there.
Let $\V$ be a quantale. When the lax extension of $T:\Set\to\Set$ to $\V\mbox{-}\Rel$ is determined by a map $\xi:TV\to V$ which is a $\mon{T}$-algebra structure on $\V$ (for the $\Set$-monad $\mon{T}$) as outlined in \cite[Section 4.1]{CH09}, then, under suitable conditions, $\V$ itself has a natural $(\mon{T},\V)$-category structure $\hom_\xi$ given by the composite
\begin{equation}\tag{$(\mon{T},\V)$-$\hom$}\label{eq:TVhom}
\xymatrix{TV\ar[r]^\xi&V\ar[r]|-{\object@{|}}^\hom&V,}
\end{equation}
where $\hom$ is the internal hom on $V$.\footnote{This is the case when a \emph{topological theory} in the sense of \cite{Hof07} is given; see \cite{Hof07} for details.} Then the well-known equivalence:\\
\begin{quotation}
Given $\V$-categories $(X,a)$, $(Y,b)$, for a $\V$-relation $r:X\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} Y$,\vspace*{2mm}
\begin{itemize}
\item[] $r:(X,a)\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} (Y,b)$ is a $\V$-module (or profunctor, or distributor)
\item[] $\iff$ the map $r:X^\op\otimes (Y,b)\to(\V,\hom)$ is a $\V$-functor.
\end{itemize}
\end{quotation}
can be generalized to the $(\mon{T},\V)$-setting. Here a \emph{$(\mon{T},\V)$-relation} $r:X\krelto Y$ is a $\V$-relation $TX\longrightarrow\hspace*{-2.8ex}{\mapstochar}\hspace*{2.6ex} Y$, and $(\mon{T},\V)$-relations $X\stackrel{r}{\krelto} Y\stackrel{s}{\krelto} Z$ compose as $\V$-relations as follows:
\[\xymatrix{TX\ar[r]|-{\object@{|}}^{m_X^\circ}&T^2X\ar[r]|-{\object@{|}}^{Tr}&TY\ar[r]|-{\object@{|}}^s&Z;}\]
we denote this composition by $s\circ r$. A \emph{$(\mon{T},\V)$-module} $\varphi :(X,a)\krelto(Y,b)$ between $(\mon{T},\V)$-categories $(X,a)$, $(Y,b)$ is a $(\mon{T},\V)$-relation such that
\[\varphi \circ a=\varphi =b\circ\varphi .\]
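In the thin case $\V=\mbox{\sf 2}$ with $\mon{T}$ the identity monad, $(\mon{T},\V)$-categories are preorders, relations are boolean matrices, and the module law can be checked by boolean matrix multiplication. A minimal sketch (names are ours):

```python
import numpy as np

def bcomp(r, s):
    # composite "first r, then s" of relations given as boolean
    # matrices: (s . r)[i][k] = OR_j ( r[i][j] AND s[j][k] )
    return (np.asarray(r, int) @ np.asarray(s, int)) > 0

a = np.array([[1, 1], [0, 1]], bool)   # the chain 0 <= 1 as a preorder on X
b = a.copy()                            # the same preorder on Y
phi = a.copy()                          # the order itself as a relation X -|-> Y

# module law: phi . a = phi = b . phi
print(np.array_equal(bcomp(a, phi), phi),
      np.array_equal(bcomp(phi, b), phi))  # True True
```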
The next result can be found in \cite{CH09} (see also \cite[Remark 5.1 and Lemma 5.2]{Hof13}).
\begin{theorem}\label{prop:Mod_vs_Fun}
Let $(X,a)$ and $(Y,b)$ be $(\mon{T},\V)$-categories and $\varphi:X\krelto Y$ be a $(\mon{T},\V)$-relation. The following assertions are equivalent.
\begin{enumerate}[\em (i)]
\item $\varphi:(X,a)\krelto (Y,b)$ is a $(\mon{T},\V)$-module.
\item The map $\varphi:TX\times Y\to\V$ is a $(\mon{T},\V)$-functor $\varphi:X^\op\otimes (Y,b)\to(\V,\hom_\xi)$.
\end{enumerate}
\end{theorem}
In particular, the $(\mon{T},\V)$-relation $a:X\krelto X$ is a $(\mon{T},\V)$-module from $(X,a)$ to $(X,a)$. Although $(\mT,\V)\mbox{-}\Cat$ is in general not monoidal closed for $\otimes$, the functor $X^\op\otimes-:(\mT,\V)\mbox{-}\Cat\to(\mT,\V)\mbox{-}\Cat$ has a right adjoint $(-)^{X^\op}:(\mT,\V)\mbox{-}\Cat\to(\mT,\V)\mbox{-}\Cat$ for every $(\mon{T},\V)$-category $X$, and from the $(\mon{T},\V)$-module $a$ we obtain the \emph{Yoneda $(\mon{T},\V)$-functor}
\[
y_X:X\to\V^{X^\op}.
\]
By Theorem \ref{prop:Mod_vs_Fun}, we can think of the elements of $\V^{X^\op}$ as $(\mon{T},\V)$-modules from $(X,a)$ to $(1,e_1^\circ)$. The following result was proven in \cite{CH09} and provides a Yoneda-type Lemma for $(\mon{T},\V)$-categories.
\begin{theorem}\label{lem:Yoneda}
Let $(X,a)$ be a $(\mon{T},\V)$-category. Then, for all $\psi$ in $\V^{X^\op}$ and all $\mathfrak{x}\in TX$,
\[
\llbracket Ty_X(\mathfrak{x}),\psi\rrbracket=\psi(\mathfrak{x}),
\]
with $\llbracket-,-\rrbracket$ the $(\mon{T},\V)$-categorical structure on $\V^{X^\op}$.
\end{theorem}
To generalise these results to the general setting studied in this paper, that is, when $\V$ is not necessarily a thin category, one faces a first obstacle: when can we equip the category $\V$ with a canonical (although non-legitimate) $(\mon{T},\V)$-category structure as in (\ref{eq:TVhom})? The obstacle seems removable when $\mon{T}=\mon{M}$ is the free-monoid monad.
In fact, as above, the monoidal structure $(X_1,\ldots,X_n)\mapsto X_1\otimes\dots\otimes X_n$ defines a lax extension of $\mon{M}$ to $\Rels{\V}$, a monoidal structure on $\Cats{(\mon{M},\V)}\simeq\V$-$\MultiCat$, and it turns $\V$ into a generalised multicategory. We therefore conjecture that Theorems \ref{prop:Mod_vs_Fun} and \ref{lem:Yoneda} hold also in this more general situation; however, so far we were not able to prove this.
\section*{Acknowledgments}
The first author acknowledges partial financial
assistance by the Centro de Matem\'{a}tica da Universidade de Coimbra (CMUC),
funded by the European Regional Development Fund through the program COMPETE
and by the Portuguese Government through FCT, under the project PEst-C/MAT/UI0324/2013 and grant number SFRH/BPD/79360/2011, and by the Presidential Grant for Young Scientists, PG/45/5-113/12, of the National Science Foundation of Georgia.
The second author acknowledges partial financial
assistance by CMUC,
funded by the European Regional Development Fund through the program COMPETE
and by the Portuguese Government through FCT, under the project PEst-C/MAT/UI0324/2013.
The third author acknowledges partial financial assistance by Portuguese funds through CIDMA (Center for Research and Development in Mathematics and Applications), and the Portuguese Foundation for Science and Technology (``FCT -- Funda\c{c}\~ao para a Ci\^encia e a Tecnologia''), within the project PEst-OE/MAT/UI4106/2014, and by the project NASONI under the contract PTDC/EEI-CTP/2341/2012.
\end{document}
\begin{document}
\title{
Eigenvalues, Peres' separability condition and entanglement
\thanks{Supported by the National Natural Science Foundation of China under Grant No. 69773052.}}
\author{An Min WANG$^{1,2,3}$}
\address{CCAST(World Laboratory) P.O.Box 8730, Beijing 100080$^1$\\
and Laboratory of Quantum Communication and Quantum Computing\\
University of Science and Technology of China$^2$\\
Department of Modern Physics, University of Science and Technology of China\\
P.O. Box 4, Hefei 230027, People's Republic of China$^3$}
\maketitle
\centerline{({\it Revised Version})}
\begin{abstract}
The general expression, with its physical significance, and the positivity condition for the eigenvalues of a $4\times 4$ Hermitian and trace-one matrix are obtained. This implies that the eigenvalue problem of the $4\times 4$ density matrix is generally solved. An explicit expression of Peres' separability condition for an arbitrary state of two qubits is then given, and it is very easy to use. Furthermore, we discuss some applications to the calculation of the entanglement, the upper bound of the entanglement, and a model of the transfer of entanglement in a qubit chain through a noisy channel.
\noindent{PACS: 03.67-a,03.65.Bz, 89.70.+c}
\noindent{Key Words: Density Matrix, Eigenvalues, Separability, Entanglement}
\end{abstract}
The density matrix (DM) was introduced by J. von Neumann to describe statistical concepts in quantum mechanics \cite{Neumann}. The main virtue of DM is its analytical power in the construction of general formulas and in the proof of general theorems. The evaluation of averages and probabilities of the physical quantities characterizing a given system is extremely cumbersome without the use of density matrix techniques. Recently, the applications of DM have been gaining more and more importance in many fields of physics. For example, in quantum information and quantum computing \cite{QC}, DM techniques have become an important tool for describing and characterizing the measure of entanglement, the purification of entanglement and encoding \cite{Bennett1,Deutsch,Horodecki,Wootters}. However, even for a DM as simple as a $4\times4$ one, writing a general expression for its eigenvalues in a compact form with physical significance is not a trivial problem. Although the theory of the quartic equation is well known, the task remains difficult since the DM has 15 independent parameters. Actually, we need a physically closed form, not merely a mathematically closed form with only formal meaning. This Letter is devoted to this fundamental problem in quantum mechanics. It finds the general expression of the eigenvalues of a $4\times 4$ density matrix with a clear physical significance and in a compact form. From it, an explicit expression of Peres' separability condition is derived, which provides a very easy and direct way to use the condition. Moreover, some important applications to entanglement and separability in quantum information, such as the calculation of the entanglement, the upper bound of the entanglement and the transfer of entanglement, are discussed constructively.
The elementary unit of quantum information is the so-called ``qubit'' \cite{Schumacher}. A single qubit can be envisaged as a two-state quantum system such as a spin-half particle or a two-level atom. A pair of qubits forms the simplest quantum register, which can be described by a $4\times 4$ density matrix. As is well known, the eigenvalues of the DM of two qubits are closely related to its entanglement and separability. For example, Wootters gave a measure of the entanglement in terms of the eigenvalues \cite{Wootters}, and Peres' separability condition depends on the positivity of the partial transpose of the DM. Therefore, it is very interesting and essentially important to know the general expression of the eigenvalues of the DM of two qubits in an arbitrary state.
DM of two qubits can be written as
\begin{equation}
\rho=\frac{1}{4}\sum_{\mu,\nu=0}^3 a_{\mu\nu}\sigma_\mu\otimes\sigma_\nu,\label{Rhoe}
\end{equation}
where $\sigma_0$ is the two dimensional identity matrix and $\sigma_i$ are the usual Pauli matrices. $\rho=\rho^\dagger$ (Hermitian) implies that the $a_{\mu\nu}$ are real numbers, ${\rm Tr}\rho=1$ (trace-one) requires $a_{00}=1$, and from the eigenvalues of the Pauli matrices it follows that $-1\leq a_{\mu\nu}\leq 1$. Moreover, it is easy to get
\begin{equation}
a_{\mu\nu}={\rm Tr}(\rho\sigma_\mu\otimes\sigma_\nu).\label{CA}
\end{equation}
Note that Eq.(\ref{Rhoe}) does not involve the positivity condition for $\rho$. In order to find the general expression of the eigenvalues of DM, we first give the following two lemmas.
\noindent {\it Lemma One}\ The form of the characteristic polynomial of a $4\times 4$ Hermitian and trace-one matrix $\Omega$ is
\begin{equation}
b_0+b_1\lambda + b_2\lambda^2-\lambda^3+\lambda^4,
\end{equation}
where the coefficients $b_0,b_1$ and $b_2$ are defined by
\begin{eqnarray}
b_0&=&\frac{1}{64}[1-\bm{\xi}_A^2 \bm{\xi}_B^2-(A\bm{\xi}_A)^2-(A\bm{\xi}_B)^2 \nonumber \\
& &+2\bm{\xi}_A^T A \bm{\xi}_B +(({\rm Tr}A)^2-{\rm Tr}A^2)\bm{\xi}_A \cdot\bm{\xi}_B \nonumber \\
& &+2\bm{\xi}_B^T A^2\bm{\xi}_A-2{\rm Tr}A\;\bm{\xi}_B^T A\bm{\xi}_A-(\bm{a}_1\times\bm{a}_2)^2 \nonumber\\
& &-(\bm{a}_2\times\bm{a}_3)^2-(\bm{a}_3\times\bm{a}_1)^2-2(\bm{a}_1\times\bm{a}_2)\cdot\bm{a}_3] \nonumber\\
& &-\frac{1}{16}[{\rm Tr}\Omega^2-({\rm Tr}\Omega^2)^2],\\
b_1&=&\frac{1}{8}[2{\rm Tr}\Omega^2-1-\bm{\xi}_A^T A \bm{\xi}_B+(\bm{a}_1\times\bm{a}_2)\cdot\bm{a}_3],\\
b_2&=&\frac{1}{2}(1-{\rm Tr}\Omega^2).
\end{eqnarray}
In the above equations, we have introduced the polarized vectors of the reduced density matrices $\bm{\xi}_A=(a_{10},a_{20},a_{30})$, $
\bm{\xi}_B=(a_{01},a_{02},a_{03})$; the space Bloch's vector $\bm{a}_1=(a_{11},a_{12},a_{13})$, $\bm{a}_2=(a_{21},a_{22},a_{23})$, $ \bm{a}_3=(a_{31},a_{32},a_{33})$;
and the polarized rotation matrix $A=\{a_{ij}\}\; (i,j=1,2,3)$.
Note that $\bm{\xi}_{\{A,B\}}$ is viewed as a column vector and its transpose $\bm{\xi}^{\rm T}_{\{A,B\}}$ is then a row vector. The physical meaning of $3\times 3$ matrix $A$ can be seen in my paper \cite{My0}. Again, the positive definite condition has not been used here and $a_{\mu\nu}$ is defined just as Eq.(\ref{CA}).
\noindent{\it Lemma Two}\ If a $4\times 4$ Hermitian and trace-one matrix $\Omega$ has $m$ non-zero eigenvalues, then
\begin{equation}
{\rm Tr}\Omega^2\geq \frac{1}{m}.
\end{equation}
If $\Omega$ is positive definite, then
\begin{equation}
{\rm Tr}\Omega^2\leq 1.
\end{equation}
To prove Lemma One, we use physical considerations to arrange the coefficients obtained from the characteristic determinant into a compact form. This leads to the general expression of the eigenvalues of DM with the physical significance needed for applications. Lemma Two can be obtained by the standard method of finding the extremum. Based on the theory of the quartic equation, we have
\noindent{\it Theorem One}\ The eigenvalues of the $4\times 4$ Hermitian and trace-one matrix $\Omega$ are
\begin{eqnarray}
\lambda^{\pm}(-)&=&\frac{1}{4}-\frac{1}{4\sqrt{3}}(4{\rm Tr}\Omega^2-1+8c_1\cos\phi)^{1/2} \nonumber\\
& &\pm\frac{1}{2\sqrt{6}} \left[4{\rm Tr}\Omega^2-1-4c_1\cos\phi+\frac{3\sqrt{3}(1+8b_1-2{\rm Tr}\Omega^2)}{\sqrt{4{\rm Tr}\Omega^2-1+8c_1\cos\phi}}\right]^{1/2},\\
\lambda^\pm(+)&=&\frac{1}{4}+\frac{1}{4\sqrt{3}}(4{\rm Tr}\Omega^2-1+8c_1\cos\phi)^{1/2}\nonumber \\
& &\pm\frac{1}{2\sqrt{6}}\left[4{\rm Tr}\Omega^2-1-4c_1\cos\phi-\frac{3\sqrt{3}(1+8b_1-2{\rm Tr}\Omega^2)}{\sqrt{4{\rm Tr}\Omega^2-1+8c_1\cos\phi}}\right]^{1/2},
\end{eqnarray}
where
\begin{eqnarray}
\cos3\phi&=&\frac{c_2}{2c_1^3}=4\cos^3\phi-3\cos\phi\label{3Phi}, \\
c_1&=&\sqrt{12b_0+3b_1+b_2^2},\\
c_2&=&27b_1^2+b_0(27-72b_2)+9b_1b_2+2b_2^3.
\end{eqnarray}
In the above equations, we have assumed $c_1\neq 0, c_2\neq 0$. Note that $c_2^2-4c_1^6=-27(\lambda_1-\lambda_2)^2(\lambda_1-\lambda_3)^2(\lambda_1-\lambda_4)^2(\lambda_2-\lambda_3)^2(\lambda_2-\lambda_4)^2(\lambda_3-\lambda_4)^2$ is non-positive. Thus, $4c_1^6\geq c_2^2\geq 0$, and so $c_1$ is real. If there is any repeated root ($c_2^2-4c_1^6=0$), $\phi=0$ or $\pi/3$ since $c_2=\pm 2c_1^3$. In fact, Eq.(\ref{3Phi}) implies that
\begin{eqnarray}
\cos\phi&=&\frac{c_1}{2^{2/3}(c_2+\sqrt{c_2^2-4c_1^6})^{1/3}}+\frac{(c_2+\sqrt{c_2^2-4c_1^6})^{1/3}}{2\times 2^{1/3}c_1},\\
\phi&=&{\rm Arg}\left[\left(c_2+\sqrt{c_2^2-4c_1^6}\right)^{1/3}\right].
\end{eqnarray}
Obviously, if $c_2<0$, then $\pi/6<\phi\leq \pi/3$, and if $c_2>0$, then $0\leq\phi<\pi/6$.
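Theorem One can be sanity-checked numerically: read $b_0,b_1,b_2$ off the characteristic polynomial of a trace-one Hermitian matrix with known spectrum and compare the closed-form eigenvalues with direct diagonalisation. A sketch (all variable names are ours):

```python
import numpy as np

# Trace-one Hermitian matrix with known spectrum, conjugated by a
# random unitary so the closed form is tested non-trivially.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Q, _ = np.linalg.qr(A)                       # a 4x4 unitary
Omega = Q @ np.diag([0.5, 0.3, 0.15, 0.05]) @ Q.conj().T

p = np.poly(Omega).real                      # [1, -1, b2, b1, b0]
b2, b1, b0 = p[2], p[3], p[4]
T = np.trace(Omega @ Omega).real

c1 = np.sqrt(12 * b0 + 3 * b1 + b2 ** 2)
c2 = 27 * b1 ** 2 + b0 * (27 - 72 * b2) + 9 * b1 * b2 + 2 * b2 ** 3
phi = np.arccos(np.clip(c2 / (2 * c1 ** 3), -1.0, 1.0)) / 3

A1 = 4 * T - 1 + 8 * c1 * np.cos(phi)        # shared radicand
B = 4 * T - 1 - 4 * c1 * np.cos(phi)
C = 3 * np.sqrt(3) * (1 + 8 * b1 - 2 * T) / np.sqrt(A1)
r1, r2 = np.sqrt(A1) / (4 * np.sqrt(3)), 2 * np.sqrt(6)
lams = sorted([0.25 - r1 - np.sqrt(B + C) / r2,
               0.25 - r1 + np.sqrt(B + C) / r2,
               0.25 + r1 - np.sqrt(B - C) / r2,
               0.25 + r1 + np.sqrt(B - C) / r2])

print(np.allclose(lams, np.linalg.eigvalsh(Omega)))  # True
```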
Now consider the case that $c_1$ and/or $c_2$ are equal to zero. We discuss it in two steps. First, suppose $b_2=3/8$ or ${\rm Tr}\Omega^2=1/4$. If only $c_1=0$ or only $c_2=0$, then some of the eigenvalues would be complex numbers. This contradicts the Hermiticity of DM, so $c_1$ and $c_2$ have to vanish together. Thus, we obtain that all the eigenvalues are $1/4$. Second, suppose $b_2\neq 3/8$ or ${\rm Tr}\Omega^2\neq 1/4$. We have to analyse the following possibilities.
For only $c_1=0$, i.e. $12b_0+3b_1+b_2^2=0$, again from $c_2^2=c_2^2-4c_1^6=-27(\lambda_1-\lambda_2)^2(\lambda_1-\lambda_3)^2(\lambda_1-\lambda_4)^2(\lambda_2-\lambda_3)^2(\lambda_2-\lambda_4)^2(\lambda_3-\lambda_4)^2\leq 0$, we have that $c_2$ has to be zero since it is real. So $c_1=0$ alone is impossible.
For only $c_2=0$ or $27b_1^2+b_0(27-72b_2)+9b_1b_2+2b_2^3=0$, the eigenvalues of the $4\times 4$ Hermitian and trace-one matrix are then
\begin{eqnarray}
\lambda^\pm(-)&=&\frac{1}{4}-\frac{1}{4\sqrt{3}}\sqrt{4{\rm Tr}\Omega^2-1}\pm\frac{1}{2\sqrt{6}}\left(4{\rm Tr}\Omega^2-1+\frac{3\sqrt{3}(1+8b_1-2{\rm Tr}\Omega^2)}{\sqrt{4{\rm Tr}\Omega^2-1}}\right)^{1/2},\\
\lambda^\pm(+)&=&\frac{1}{4}+\frac{1}{4\sqrt{3}}\sqrt{4{\rm Tr}\Omega^2-1}\pm\frac{1}{2\sqrt{6}}\left(4{\rm Tr}\Omega^2-1-\frac{3\sqrt{3}(1+8b_1-2{\rm Tr}\Omega^2)}{\sqrt{4{\rm Tr}\Omega^2-1}}\right)^{1/2}.
\end{eqnarray}
For both $c_1=0$ and $c_2=0$, the eigenvalues of the $4\times 4$ Hermitian and trace-one matrix are either
\begin{eqnarray}
\lambda_{1,2,3}&=&\frac{1}{4}-\frac{1}{4\sqrt{3}}\sqrt{4{\rm Tr}\Omega^2-1},\\
\lambda_4&=&\frac{1}{4}+\frac{\sqrt{3}}{4}\sqrt{4{\rm Tr}\Omega^2-1}.
\end{eqnarray}
if $b_0=[3-6{\rm Tr}\Omega^2-6({\rm Tr}\Omega^2)^2+\sqrt{3}(4{\rm Tr}\Omega^2-1)^{3/2}]/288,\; b_1=[18{\rm Tr}\Omega^2-9-\sqrt{3}(4{\rm Tr}\Omega^2-1)^{3/2}]/72$, or
\begin{eqnarray}
\lambda_{1,2,3}&=&\frac{1}{4}+\frac{1}{4\sqrt{3}}\sqrt{4{\rm Tr}\Omega^2-1},\\
\lambda_4&=&\frac{1}{4}-\frac{\sqrt{3}}{4}\sqrt{4{\rm Tr}\Omega^2-1}.
\end{eqnarray}
if $b_0=[3-6{\rm Tr}\Omega^2-6({\rm Tr}\Omega^2)^2-\sqrt{3}(4{\rm Tr}\Omega^2-1)^{3/2}]/288,\; b_1=[18{\rm Tr}\Omega^2-9+\sqrt{3}(4{\rm Tr}\Omega^2-1)^{3/2}]/72$.
As is well known, Peres' separability condition states that all the eigenvalues of the partial transpose of DM ought to be non-negative \cite{Peres}. Thus, taking the minimum eigenvalue in Theorem One and requiring it to be non-negative, we have
\noindent{\it Theorem Two}\ The separability condition of DM $\rho$ of two qubits in an arbitrary state is
\begin{eqnarray}
1&\geq&\frac{1}{\sqrt{3}}(4{\rm Tr}\rho^2-1+8c_1^{\rm P}\cos\phi^{\rm P})^{1/2}+\frac{2}{\sqrt{6}}\left[4{\rm Tr}\rho^2-1\right.\nonumber\\
& &\left.-4c_1^{\rm P}\cos\phi^{\rm P}+\frac{3\sqrt{3}(1+8b_1^{\rm P}-2{\rm Tr}\rho^2)}{\sqrt{4{\rm Tr}\rho^2-1+8c_1^{\rm P}\cos\phi^{\rm P}}}\right]^{1/2},
\end{eqnarray}
where
\begin{eqnarray}
b_0^{\rm P}&=&b_0-\frac{1}{32}[(({\rm Tr}A)^2-{\rm Tr}A^2)\bm{\xi}_A \cdot\bm{\xi}_B+2\bm{\xi}_B^T A^2\bm{\xi}_A\nonumber\\
& &-2{\rm Tr}A\;\bm{\xi}_B^T A\bm{\xi}_A]+\frac{1}{16}(\bm{a}_1\times\bm{a}_2)\cdot\bm{a}_3,\label{PTB0}\\
b_1^{\rm P}&=&b_1-\frac{1}{4}(\bm{a}_1\times\bm{a}_2)\cdot\bm{a}_3,\label{PTB1}\\
b_2^{\rm P}&=&b_2,\\
c_1^{\rm P}&=&\sqrt{12b_0^{\rm P}+3b_1^{\rm P}+b_2^{\rm P}{}^2},\label{PTB2}\\
c_2^{\rm P}&=&27b_1^{\rm P}{}^2+b_0^{\rm P}(27-72b_2^{\rm P})+9b_1^{\rm P}b_2^{\rm P}+2b_2^{\rm P}{}^3,\\
\cos\phi^{\rm P}&=&\frac{c_1^{\rm P}}{2^{2/3}(c_2^{\rm P}+\sqrt{c_2^{\rm P}{}^2-4c_1^{\rm P}{}^6})^{1/3}}+\frac{(c_2^{\rm P}+\sqrt{c_2^{\rm P}{}^2-4c_1^{\rm P}{}^6})^{1/3}}{2\times 2^{1/3}c_1^{\rm P}}.
\end{eqnarray}
Here we have assumed $c_1^{\rm P}\neq 0$ and $c_2^{\rm P}\neq 0$.
If only $c_2^{\rm P}=0$, the separability condition becomes
\begin{equation}
1\geq\frac{1}{\sqrt{3}}\sqrt{4{\rm Tr}\rho^2-1}+\frac{2}{\sqrt{6}}\left(4{\rm Tr}\rho^2-1+\frac{3\sqrt{3}(1+8b_1^{\rm P}-2{\rm Tr}\rho^2)}{\sqrt{4{\rm Tr}\rho^2-1}}\right)^{1/2}.
\end{equation}
If both $c_1^{\rm P}=0$ and $c_2^{\rm P}=0$, then in case one the DM is always separable and in case two the separability condition is
\begin{equation}
{\rm Tr}\rho^2\leq \frac{1}{3}.
\end{equation}
In the above, we have used the fact that the trace of the square of the partial transpose matrix of DM is equal to the trace of the square of DM.
Obviously, the pure state is the simplest case. In fact, we can prove the following theorem:
\noindent {\it Theorem Three}\ The eigenvalues of the partial transpose of DM of two qubits in a pure state $\ket{\phi}=a\ket{00}+b\ket{01}+c\ket{10}+d\ket{11}$ are
\begin{equation}
\mp |ad-bc|,\;\frac{1}{2}(1\mp\sqrt{1-4|ad-bc|^2}),
\end{equation}
and then the separability condition is just
\begin{equation}
ad-bc=0.
\end{equation}
It is consistent with my paper \cite{My0}.
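Theorem Three is easy to verify numerically for a concrete non-maximally entangled state; the following sketch (our own setup) builds the partial transpose and compares its spectrum with the closed form:

```python
import numpy as np

a, b, c, d = 0.8, 0.0, 0.0, 0.6              # a normalised example state
psi = np.array([a, b, c, d], complex)
rho = np.outer(psi, psi.conj())

# partial transpose on the second qubit: (ij),(kl) -> (il),(kj)
rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

t = abs(a * d - b * c)
predicted = sorted([-t, t, (1 - np.sqrt(1 - 4 * t * t)) / 2,
                    (1 + np.sqrt(1 - 4 * t * t)) / 2])
print(np.allclose(predicted, np.linalg.eigvalsh(rho_pt)))  # True
```

Here $|ad-bc|=0.48$, so the spectrum of the partial transpose is $\{-0.48, 0.36, 0.48, 0.64\}$, confirming the theorem for this state.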
Because Peres' separability condition is necessary and sufficient for two qubits, Theorems Two and Three, as obvious and general expressions of Peres' condition, are likewise necessary and sufficient.
If there are some vanishing eigenvalues for a $4\times 4$ Hermitian and trace-one matrix, the conclusions can be simplified. The following theorems show this. Their proofs can be given by solving the corresponding characteristic equations.
\noindent{\it Theorem Four}\ If there is at least one vanishing eigenvalue for a $4\times 4$ Hermitian and trace-one matrix, its remaining eigenvalues are
\begin{eqnarray}
\lambda_1&=&\frac{1}{3}(1+\sqrt{6{\rm Tr}\Omega^2-2}\; \cos\phi),\\
\lambda_2&=&\frac{1}{3}[1-\sqrt{6{\rm Tr}\Omega^2-2}\; \cos(\phi-\pi/3)],\\
\lambda_3&=&\frac{1}{3}[1-\sqrt{6{\rm Tr}\Omega^2-2}\; \cos(\phi+\pi/3)],
\end{eqnarray}
where
\begin{eqnarray}
\cos\phi&=&\frac{\sqrt{1-3b_2}}{2^{2/3}(d+\sqrt{d^2-4(1-3b_2)^3})^{1/3}}\nonumber\\
& &+\frac{(d+\sqrt{d^2-4(1-3b_2)^3})^{1/3}}{2\times 2^{1/3}\sqrt{1-3b_2}},\label{3COS}\\
d&=&2-27b_1-9b_2.
\end{eqnarray}
Here we have assumed $3{\rm Tr}\Omega^2-1\neq 0$ and $d=(3\lambda_1-1)(3\lambda_2-1)(3\lambda_3-1)\neq 0$. If $d< 0$, then $\pi/6<\phi\leq \pi/3$, and if $d>0$, then $0\leq\phi<\pi/6$. Because $d^2-4(1-3b_2)^3=-27(\lambda_1-\lambda_2)^2(\lambda_2-\lambda_3)^2(\lambda_3-\lambda_1)^2$, we have $4(1-3b_2)^3\geq d^2 \geq 0$, and $1-3b_2=(3{\rm Tr}\Omega^2-1)/2\geq 0$. If ${\rm Tr}\Omega^2=1/3$, then $d$ has to be zero; thus $b_1=-1/27$ and $b_2=1/3$, which implies that all the eigenvalues are equal to $1/3$. In particular, if only $d=0$, the eigenvalues become
\begin{eqnarray}
\lambda_1&=&\frac{1}{3},\\
\lambda_{2,3}&=&\frac{1}{3}\left(1\pm\sqrt{\frac{3}{2}}\sqrt{3{\rm Tr}\Omega^2-1}\right).
\end{eqnarray}
\noindent{\it Theorem Five}\ If there is at least one vanishing eigenvalue for a $4\times 4$ Hermitian and trace-one matrix, then the positivity condition on the eigenvalues is
\begin{equation}
\sqrt{6{\rm Tr}\Omega^2-2}\; \cos(\phi-\pi/3)\leq 1.
\end{equation}
If only $d=0$, the positive definite condition becomes
\begin{equation}
{\rm Tr}\Omega^2\leq\frac{5}{9}.
\end{equation}
\noindent{\it Theorem Six}\ If there are at least two vanishing eigenvalues for a $4\times 4$ Hermitian and trace-one matrix, then the other eigenvalues are
\begin{equation}
\lambda_\pm=\frac{1}{2}(1\pm\sqrt{2{\rm Tr}\Omega^2-1}).
\end{equation}
The positivity condition on the eigenvalues is
\begin{equation}
{\rm Tr}\Omega^2\leq 1.
\end{equation}
Thus, from Peres' separability condition, it is easy to prove the following theorem:
\noindent{\it Theorem Seven}\ If the partial transpose of DM of two qubits has at least two vanishing eigenvalues, this density matrix is separable. If the partial transpose of DM of two qubits has only one vanishing eigenvalue, the separability condition is obtained by applying Theorem Five to it and requiring the minimum eigenvalue to be non-negative.
Now let us discuss some applications of our theorems. As is well known, many measures of entanglement are related to quantum entropies defined through the density matrix, for example the entanglement of formation \cite{Bennett} and the relative entropy of entanglement \cite{Vedral}. To compute a quantum entropy, we often need to find the eigenvalues of the density matrix. Indeed, according to Wootters \cite{Wootters}, the measure of entanglement of two qubits is directly determined by the eigenvalues of a $4\times 4$ Hermitian matrix. Therefore, in terms of our theorems about the eigenvalues, we can easily calculate the entanglement of formation of an arbitrary state of two qubits. As to the relative entropy of entanglement or its improvement \cite{My1}, we have to calculate the von Neumann entropy, which is defined directly by the eigenvalues.
Furthermore, we can find a relation between the upper bound of entanglement and the eigenvalues.
\noindent{\it Theorem Eight}\ If all the eigenvalues of the DM of two qubits are non-zero, the possible maximum value of the entanglement of formation is not larger than
\begin{eqnarray}
& &\frac{1}{\sqrt{3}}(4{\rm Tr}\rho^2-1+8c_1^{\rm P}\cos\phi^{\rm P})^{1/2}+\frac{2}{\sqrt{6}}\left[4{\rm Tr}\rho^2-1\right.\nonumber\\
& &\left.-4c_1^{\rm P}\cos\phi^{\rm P}+\frac{3\sqrt{3}(1+8b_1^{\rm P}-2{\rm Tr}\rho^2)}{\sqrt{4{\rm Tr}\rho^2-1+8c_1^{\rm P}\cos\phi^{\rm P}}}\right]^{1/2}.
\end{eqnarray}
This is because DM of two qubits can be written as
\begin{equation}
\rho=\lambda_{\min} I+(1-4\lambda_{\min})\rho^\prime.
\end{equation}
Note that we cannot put a number larger than $\lambda_{\min}$ in front of the identity matrix $I$, because we have to keep $\rho^\prime$ positive definite.
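The decomposition above can be checked numerically: for a random density matrix, $\rho^\prime=(\rho-\lambda_{\min}I)/(1-4\lambda_{\min})$ is again trace-one and positive semi-definite. A sketch (names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = A @ A.conj().T                 # random positive matrix ...
rho /= np.trace(rho).real            # ... normalised to trace one

lam_min = np.linalg.eigvalsh(rho)[0]
rho_p = (rho - lam_min * np.eye(4)) / (1 - 4 * lam_min)

# rho = lam_min * I + (1 - 4*lam_min) * rho_p, with rho_p a density matrix
print(np.isclose(np.trace(rho_p).real, 1.0),
      np.linalg.eigvalsh(rho_p)[0] >= -1e-10)  # True True
```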
Furthermore, let us consider a model of the transfer of entanglement \cite{My2}. It can be expressed as the following story. Alice and Bob are friends. One day, Alice and Bob sat together at the lounge in a party. On the left side of Alice is Charlie and on the right side of Bob is David. Alice and Charlie, and Bob and David, respectively exchanged their seats. This causes Alice and Bob's entanglement to decrease. In the language of quantum information, Alice and Bob initially share an entangled state. Without loss of generality, suppose it is the Bell state $(\ket{00}+\ket{11})/\sqrt{2}$, Charlie is in $\ket{c}$ and David is in $\ket{d}$. That is, the four of them are in the total state $\ket{c}\otimes(\ket{00}+\ket{11})\otimes\ket{d}/\sqrt{2}$. Now, introduce the swapping interaction \cite{Loss} between Alice and Charlie, and between Bob and David, respectively:
\begin{equation}
S=\left(\begin{array}{cccc}
1&0&0&0\\
0&0&1&0\\
0&1&0&0\\
0&0&0&1
\end{array}\right),\qquad S\ket{ab}=\ket{ba}\quad (a,b=0,1).
\end{equation}
It is easy to see that
\begin{equation}
S\otimes S\left[\frac{1}{\sqrt{2}}\ket{c}\otimes(\ket{00}+\ket{11})\otimes\ket{d}\right]=\frac{1}{\sqrt{2}}(\ket{0}\otimes\ket{cd}\otimes\ket{0}+\ket{1}\otimes\ket{cd}\otimes\ket{1}).
\end{equation}
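This identity is easy to verify numerically. The Python sketch below (with sample basis states $\ket{c}=\ket{0}$, $\ket{d}=\ket{1}$, our own choice) applies $S\otimes S$ to the four-qubit state and compares the result with the right-hand side.

```python
import numpy as np

# Verify that S (x) S transfers the entanglement from qubits 2,3 to
# qubits 1,4.  Basis ordering |abcd>, with a the leftmost qubit.
S = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]], dtype=float)   # swap gate, S|ab> = |ba>

def ket(bits):
    v = np.zeros(16)
    v[int(bits, 2)] = 1.0
    return v

c, d = '0', '1'                      # sample states |c>, |d> (our choice)
psi_in = (ket(c + '00' + d) + ket(c + '11' + d)) / np.sqrt(2)
# The first S acts on qubits (1,2), the second S on qubits (3,4).
U = np.kron(S, S)
psi_out = U @ psi_in
expected = (ket('0' + c + d + '0') + ket('1' + c + d + '1')) / np.sqrt(2)
print(np.allclose(psi_out, expected))   # True
```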
Thus, the entanglement between the second and third qubits is transferred to entanglement between the first and fourth qubits. In general, the swapping process is affected by noise. We suppose that after the transfer of entanglement the DM becomes
\begin{equation}
\rho^\prime=(1-\epsilon)\rho+\epsilon\frac{1}{4}I,
\end{equation}
where $\epsilon$ represents the strength of the noise and $\rho$ has, formally, the same form as the DM before the transfer of entanglement. Obviously, this model can be extended to a qubit chain:
\begin{equation}
\cdots\overbrace{\underbrace{\bullet\leftrightarrow\cdots\leftrightarrow\bullet\leftrightarrow}_n\overbrace{\bullet\leftrightarrow\underbrace{\bullet\qquad\bullet}_{\rho_0}\leftrightarrow\bullet}^{\rho_1}\underbrace{\leftrightarrow\bullet\leftrightarrow\cdots\leftrightarrow\bullet\leftrightarrow\bullet}_n}^{\rho_n}\cdots
\end{equation}
Through the swapping interaction, the entanglement can be transferred along the chain node by node, in the two opposite directions. At the beginning, denote the DM of a given pair of adjacent nodes of the qubit chain by $\rho_0$. After the first swapping, the DM of the pair of qubits on the nodes immediately to the left and to the right of the given pair is written as $\rho_1$. Owing to the effect of noise, after the $n$-th swapping along the two directions, the DM of the pair of qubits on the $n$-th nodes to the left and to the right of the original two adjacent qubits becomes
\begin{equation}
\rho_n=(1-\epsilon)\rho_{n-1}+\epsilon\frac{1}{4}I.
\end{equation}
This equation does not take into account the fact that $\rho_n$ and $\rho_{n-1}$ refer to different pairs of qubits; we take it to be valid only formally. We also assume that at the beginning the pair of qubits on the adjacent nodes is in a pure state. We would then like to know for which $n$ the state $\rho_n$ becomes separable, that is, to calculate the transfer distance $n$ of entanglement for such a chain of qubits through a noisy channel.
According to our theorem, the minimum eigenvalue of the partial transpose of $\rho_n$ is
\begin{equation}
\lambda_{\min}=\frac{1}{4}\left[1-(1-\epsilon)^n(1+4|ad-bc|)\right].
\end{equation}
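This eigenvalue formula can be checked directly. The Python sketch below (with sample coefficients of our own for the pure initial state $a\ket{00}+b\ket{01}+c\ket{10}+d\ket{11}$) iterates the noisy map, takes the partial transpose, and compares its smallest eigenvalue with the closed form.

```python
import numpy as np

# Numerical check of lambda_min for rho_n^{T_B}, with a sample pure
# initial state a|00> + b|01> + c|10> + d|11> (coefficients our choice).
a, b, c, d = 0.6, 0.2, 0.3, np.sqrt(1 - 0.6**2 - 0.2**2 - 0.3**2)
psi = np.array([a, b, c, d])
rho0 = np.outer(psi, psi)
eps, n = 0.1, 3
# Iterating rho_k = (1-eps) rho_{k-1} + eps I/4 gives the closed form:
rho_n = (1 - eps)**n * rho0 + (1 - (1 - eps)**n) * np.eye(4) / 4

def partial_transpose(rho):                  # transpose on the 2nd qubit
    r = rho.reshape(2, 2, 2, 2)              # indices (a, b, a', b')
    return r.transpose(0, 3, 2, 1).reshape(4, 4)

lam_min = np.linalg.eigvalsh(partial_transpose(rho_n)).min()
formula = 0.25 * (1 - (1 - eps)**n * (1 + 4 * abs(a * d - b * c)))
print(np.isclose(lam_min, formula))   # True
```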
Using Peres' criterion for separability, we find that
\begin{equation}
n\leq-\frac{\log(1+4|ad-bc|)}{\log (1-\epsilon)}.
\end{equation}
In particular, when $\rho_0$ is the density matrix of a maximally entangled state, we obtain
\begin{equation}
n\leq -\frac{\log 3}{\log (1-\epsilon)}.
\end{equation}
Obviously, if one hopes for $n=10$, the noise strength $\epsilon$ must not be larger than $0.104042$. When the noise strength $\epsilon$ is larger than $0.42265$, any transfer leads to disentanglement. If the noise strength $\epsilon$ is only $0.01$ or $0.1$, the transfer distance $n$ can reach $109$ or $10$, respectively. Likewise, we can describe the transfer of entanglement along a single direction. The significance and applications of this model of entanglement transfer should be evident.
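For a maximally entangled initial pair ($|ad-bc|=1/2$), the quoted distances follow immediately from the bound above; a short Python check (the helper name is our own):

```python
from math import floor, log

# Transfer distance bound n <= -log(3)/log(1 - eps) for a maximally
# entangled initial pair (|ad - bc| = 1/2).
def max_distance(eps):
    return floor(-log(3) / log(1 - eps))

print(max_distance(0.01))  # 109
print(max_distance(0.1))   # 10
```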
In addition, we hope to apply our theorems to seeking the minimal pure-state decomposition of a DM; this work is in progress. In a word, the theorems proposed here are useful tools for studying entanglement and related problems.
I would like to thank Artur Ekert for his great help and for hosting my visit to the Centre for Quantum Computation at Oxford University.
\begin{references}
\bibitem{Neumann} J. von Neumann, {\it G\"ottinger Nachrichten}, (1927)245
\bibitem{QC} D.P.DiVincenzo, {\it Science} {\bf 270} (1995)255; A.Steane, {\it Rep. Prog. Phys.} {\bf 61} 117(1998)
\bibitem{Bennett1}C.H.Bennett, G.Brassard, S.Popescu, B.Schumacher, J.A.Smolin and W.K.Wootters, {\it Phys. Rev. Lett.} {\bf 76} (1996)722
\bibitem{Deutsch}D.Deutsch, A.Ekert, R.Jozsa, C.Macchiavello, S.Popescu and A.Sanpera, {\it Phys. Rev. Lett.} {\bf 77} (1996)2818
\bibitem{Horodecki}M.Horodecki, P.Horodecki and R.Horodecki, {\it Phys. Rev. Lett.} {\bf 78} (1997)574
\bibitem{Wootters} W.K.Wootters, {\it Phys. Rev. Lett.} {\bf 80}, 2245(1998); S.Hill and W.K.Wootters, {\it Phys. Rev. Lett.} {\bf 78}, 5022(1997)
\bibitem{Schumacher} B.Schumacher, {\it Phys. Rev. A} {\bf 51} 2738(1995)
\bibitem{My0}An Min Wang, {\it Chinese Phys. Lett.} {\bf 17} 243(2000)
\bibitem{Peres} A.Peres, {\it Phys. Rev. Lett.} {\bf 77}, 1413(1996)
\bibitem{Bennett} C.H.Bennett, H.J.Bernstein, S.Popescu, and B.Schumacher, {\it Phys. Rev. A} {\bf 53}, 2046(1996); S.Popescu, D.Rohrlich, {\it Phys. Rev. A} {\bf 56}, R3319(1997)
\bibitem{Vedral}V.Vedral, M.B.Plenio, K.Jacobs, and P.L.Knight, {\it Phys. Rev. A} {\bf 56}, 4452(1997); V.Vedral, M.B.Plenio, M.A.Rippin, and P.L.Knight, {\it Phys. Rev. Lett.} {\bf 78}, 2275(1997); V.Vedral and M.B.Plenio, {\it Phys. Rev. A} {\bf 57}, 1619(1998)
\bibitem{My1}An Min Wang, quant-ph/0001023
\bibitem{My2}private communication with Artur Ekert (Oct.,1999)
\bibitem{Loss} D. Loss and D. P. DiVincenzo, {\it Phys. Rev. A} {\bf 57} 120(1998)
\end{references}
\end{document} |
\begin{document}
\title{Singular elliptic equation involving the GJMS operator on the
standard unit sphere.}
\author{Mohammed Benalili and Ali Zouaoui}
\maketitle
\begin{abstract}
Given a compact Riemannian manifold $\left( M,g\right) $ of dimension $n\geq
5$, we have proven in \cite{1}, under some conditions, that the equation:
\begin{equation}
P_{g}(u)=Bu^{2^{\sharp }-1}+\frac{A}{u^{2^{\sharp }+1}}+\frac{C}{u^{p}}
\label{E1}
\end{equation}
where $P_{g}$ is the GJMS operator, $n=\dim (M)>2k$ $(k\in \mathbb{N}^{\star })$,
$A,B$ and $C$ are smooth positive functions on $M$, $p>1$, and $2^{\sharp }=
\frac{2n}{n-2k}$ denotes the critical Sobolev exponent of the embedding $H_{k}^{2}(M)\subset L^{2^{\sharp }}(M)$, admits two distinct positive solutions. The
proof of this result relies essentially on the existence of a smooth function $
\varphi >0$ with norm $\Vert \varphi \Vert _{P_{g}}=1$ fulfilling some
conditions (see Theorem 3 in \cite{1}). In this note we construct an
example of such a function on the standard unit sphere $\left( \mathbb{S}
^{n},h\right) $. Consequently, the conditions of the Theorem are improved in
the case of $\left( \mathbb{S}^{n},h\right) $.
\end{abstract}
\section{Construction of the function $\protect\varphi $}
Inspired by the work of F. Robert (see \cite{4}), we construct an example
of a smooth function $\varphi >0$ on the standard unit sphere $\left( \mathbb{S}
^{n},h\right) $ with norm $\Vert \varphi \Vert _{P_{h}}=1$.\newline
Indeed, let $\lambda >0$ and $x_{0}\in \mathbb{S}^{n}$. Up to a rotation, we may
assume that $x_{0}$ is the north pole, i.e. $x_{0}=\left( 0,...,0,1\right) $.
We consider the transformation
\begin{equation*}
\phi _{\lambda }:\mathbb{S}^{n}\rightarrow \mathbb{S}^{n}
\end{equation*}
defined by $\phi _{\lambda }(x)=\psi _{x_{0}}^{-1}\left( \lambda ^{-1}.\psi
_{x_{0}}(x)\right) $ if $x\neq x_{0}$ and $\phi _{\lambda }(x_{0})=x_{0}$
where $\psi _{x_{0}}$ is the stereographic projection from $x_{0}$, given by
\begin{equation*}
\psi _{x_{0}}:\left( \mathbb{S}^{n}\setminus \left\{ x_{0}\right\} ,h\right)
\rightarrow \left( \mathbb{R}^{n},\xi \right) ,
\end{equation*}
which to any $a=\left( \eta _{1},...,\eta _{n},\zeta \right) $ associates $\psi
_{x_{0}}(a)=\left( \frac{\eta _{1}}{1-\zeta },...,\frac{\eta _{n}}{1-\zeta }
\right) $, and
\begin{equation*}
\begin{array}{c}
\delta _{\lambda }:\left( \mathbb{R}^{n},\xi \right) \rightarrow \left(
\mathbb{R}^{n},\xi \right) \\
x\mapsto \delta _{\lambda }(x)=\frac{1}{\lambda }x
\end{array}
\end{equation*}
is the homothetic mapping. $h$ is the canonical metric on $\mathbb{S}^{n}$
and $\xi $ is the Euclidean one on $\mathbb{R}^{n}$.\newline
\newline
Note that $\psi _{x_{0}}$ is a conformal mapping; more precisely, we have
\begin{equation*}
\left( \psi _{x_{0}}^{-1}\right) ^{\star }h=U^{\frac{4}{n-2k}}.\xi
\end{equation*}
where $U(x)=\left( \frac{1+\Vert x\Vert ^{2}}{2}\right) ^{k-\frac{n}{2}}$ .
Hence $\phi _{\lambda }$ is conformal i.e.
\begin{equation*}
\phi _{\lambda }^{\star }h=u_{x_{0},\beta }^{\frac{4}{n-2k}}.h\quad \text{
where}\;\beta =\frac{1+\lambda ^{2}}{\lambda ^{2}-1}
\end{equation*}
and
\begin{equation*}
u_{x_{0},\beta }(x)=\left( \dfrac{\sqrt{\beta ^{2}-1}}{\beta -\cos
d_{h}(x,x_{0})}\right) ^{\frac{n-2k}{2}}\quad \forall x\in \mathbb{S}^{n}\;
\text{with}\;\beta >1.
\end{equation*}
In particular we have
\begin{equation}
\int\limits_{\mathbb{S}^{n}}u_{x_{0},\beta }^{2^{\sharp }}dv_{h}=\omega _{n}
\label{01}
\end{equation}
where $\omega _{n}>0$ is the volume of the unit standard sphere $\left(
\mathbb{S}^{n},h\right) .$\newline
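Since $\phi _{\lambda }$ is a diffeomorphism and $\phi _{\lambda }^{\star }dv_{h}=u_{x_{0},\beta }^{2^{\sharp }}dv_{h}$, identity \eqref{01} expresses the invariance of the total volume; it can also be checked numerically. The Python sketch below (illustrative values $n=3$, $\beta =2$ of our own choosing; note that $u^{2^{\sharp }}$ depends only on $n$, since $2^{\sharp }\cdot \frac{n-2k}{2}=n$) reduces the integral to one over the geodesic distance from $x_{0}$.

```python
from math import gamma, pi

import numpy as np

# Check int_{S^n} u_{x0,beta}^{2^sharp} dv_h = omega_n for n = 3, beta = 2.
# The integrand is (sqrt(beta^2-1)/(beta - cos t))^n, with t the geodesic
# distance from x0, weighted by the area of the distance sphere.
n, beta = 3, 2.0
t = np.linspace(0.0, pi, 200001)
f = (np.sqrt(beta**2 - 1) / (beta - np.cos(t)))**n       # u^{2^sharp}
area = 2 * pi**(n / 2) / gamma(n / 2)                    # vol(S^{n-1})
g = f * area * np.sin(t)**(n - 1)
integral = np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(t))   # trapezoid rule
omega_n = 2 * pi**((n + 1) / 2) / gamma((n + 1) / 2)     # vol(S^n) = 2 pi^2
print(abs(integral - omega_n) < 1e-6)   # True
```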
By the conformal invariance of the operator $P_{h}$ on $\left( \mathbb{S}
^{n},h\right) $, we obtain that
\begin{equation}
P_{h}(u_{x_{0},\beta })=\frac{n-2k}{2}Q_{h}u_{x_{0},\beta }^{2^{\sharp }-1}
\label{02}
\end{equation}
where $Q_{h}$ denotes the $Q$-curvature of $\left( \mathbb{S}^{n},h\right) $
which expresses by the Gover's formula as:
\begin{equation*}
Q_{h}=\dfrac{2}{n-2k}P_{h}(1)=\dfrac{2}{n-2k}(-1)^{k}\prod_{l=1}^{k}(c_{l}
\;Sc)
\end{equation*}
where $c_{l}=\frac{(n+2l-2)(n-2l)}{4n(n-1)}$ and $Sc=n(n-1)$ is the scalar
curvature of $\left( \mathbb{S}^{n},h\right) $. So $Q_{h}$ is
a positive constant.\newline
Multiplying both sides of \eqref{02} by $u_{x_{0},\beta }$ and
integrating on $\mathbb{S}^{n}$ we get:
\begin{equation*}
\int\limits_{\mathbb{S}^{n}}u_{x_{0},\beta }P_{h}(u_{x_{0},\beta })dv_{h}=
\frac{n-2k}{2}Q_{h}\int\limits_{\mathbb{S}^{n}}u_{x_{0},\beta }^{2^{\sharp
}}dv_{h}.
\end{equation*}
And since
\begin{equation*}
\int\limits_{\mathbb{S}^{n}}u_{x_{0},\beta }P_{h}(u_{x_{0},\beta
})dv_{h}=\Vert u_{x_{0},\beta }\Vert _{P_{h}}^{2}
\end{equation*}
together with \eqref{01}, we obtain
\begin{equation*}
\Vert u_{x_{0},\beta }\Vert _{P_{h}}^{2}=\frac{n-2k}{2}Q_{h}\omega _{n}.
\end{equation*}
Hence by putting
\begin{equation*}
\varphi =\left( \frac{n-2k}{2}\omega _{n}Q_{h}\right) ^{\frac{-1}{2}
}u_{x_{0},\beta }
\end{equation*}
we obtain a function satisfying the conditions of Theorem 3 in \cite{1}, i.e.
$\varphi >0$ smooth on $(\mathbb{S}^{n},h)$ such that $\Vert \varphi \Vert
_{P_{h}}=1$.
\section{ Existence results on the sphere}
On the standard unit sphere $(\mathbb{S}^{n},h)$, if we take the function $
\varphi $ of Theorem 3 in \cite{1} equal to $\left( \frac{n-2k}{2}\omega
_{n}Q_{h}\right) ^{\frac{-1}{2}}u_{x_{0},\beta }$, we obtain
\begin{theorem}
\label{th1}{\ }Let $\left( \mathbb{S}^{n},h\right) $ be the standard
unit sphere of dimension $n>2k$, $k\in \mathbb{N}^{\star }$. There is a
constant $C(n,p,k)>0$ depending only on $n,p,k$ such that if
\begin{equation}
\frac{1}{2^{\sharp }}\left( \frac{n-2k}{2}\omega _{n}Q_{h}\right) ^{\frac{
2^{\sharp }}{2}}\int_{\mathbb{S}^{n}}\frac{A(x)}{u_{x_{0},\beta
}^{2^{\natural }}}dv_{h}\leq C\left( n,p,k\right) \left( S\underset{x\in
\mathbb{S}^{n}}{\max }B(x)\right) ^{\frac{2+2^{\sharp }}{2-2^{\sharp }}}
\label{2.3}
\end{equation}
and
\begin{equation}
\frac{1}{p-1}\left( \frac{n-2k}{2}.\omega _{n}.Q_{h}\right) ^{\frac{p-1}{2}
}\int_{\mathbb{S}^{n}}\frac{C(x)}{u_{x_{0},\beta }^{p-1}}dv_{h}\leq C\left(
n,p,k\right) \left( S\underset{x\in \mathbb{S}^{n}}{\max }B(x)\right) ^{
\frac{p+1}{2-2^{\sharp }}} \label{2.4}
\end{equation}
where
\begin{equation*}
u_{x_{0},\beta }(x)=\left( \dfrac{\sqrt{\beta ^{2}-1}}{\beta -\cos
d_{h}(x,x_{0})}\right) ^{\frac{n-2k}{2}}\quad \forall x\in \mathbb{S}^{n}\;
\text{and}\;\beta >1.
\end{equation*}
then the equation \eqref{E1} admits a solution of class $C^{\infty }(\mathbb{
S}^{n})$. If, moreover, for any $\varepsilon \in \left] 0,\lambda ^{\star }
\right[ $, where $\lambda ^{\star }$ is a positive constant, the two following
conditions are satisfied
\begin{equation*}
\frac{2}{a}\left( \int\limits_{\mathbb{S}^{n}}\sqrt{A(x)}dv_{h}\right)
^{2}\left( \frac{1}{t_{0}a_{1}}\right) ^{2^{\sharp }}>2^{\sharp }k\frac{
t_{0}^{2}}{4n}(2-a)
\end{equation*}
and
\begin{equation*}
\left( \frac{2}{a}\right) ^{\frac{p-1}{2^{\sharp }}}\left( \int\limits_{
\mathbb{S}^{n}}\sqrt{C(x)}dv_{h}\right) ^{2}\left( \frac{1}{t_{0}a_{2}}
\right) ^{p-1}>(p-1)k\frac{t_{0}^{2}}{4n}(2-a)
\end{equation*}
where $a_{1},a_{2}$ are positive constants, $2^{\sharp }=\frac{2n}{n-2k}
\;,3<p<2^{\sharp }+1$. Then the equation \eqref{E1} admits a second solution.
\end{theorem}
Note that since
\begin{equation}
\left( \dfrac{\beta -1}{\beta +1}\right) ^{\frac{n-2k}{4}}\leq
u_{x_{0},\beta }(x)\leq \left( \dfrac{\beta +1}{\beta -1}\right) ^{\frac{n-2k
}{4}} \label{03}
\end{equation}
we can improve the conditions (\ref{2.3}) and (\ref{2.4}) of Theorem \ref
{th1}. Indeed, from (\ref{03}) we deduce that
\begin{equation*}
\varphi (x)\geq \left( \dfrac{\beta -1}{\beta +1}\right) ^{\frac{n-2k}{4}
}\left( \frac{n-2k}{2}\omega _{n}Q_{h}\right) ^{-\frac{1}{2}}.
\end{equation*}
Consequently
\begin{equation*}
\frac{\Vert \varphi \Vert ^{2^{\sharp }}}{2^{\sharp }}\int_{\mathbb{S}^{n}}
\frac{A(x)}{\varphi ^{2^{\natural }}}dv_{h}=\frac{1}{2^{\sharp }}\int_{
\mathbb{S}^{n}}\frac{A(x)}{\varphi ^{2^{\natural }}}dv_{h}\leq \frac{1}{
2^{\sharp }}\left( \dfrac{\beta +1}{\beta -1}\right) ^{\frac{n}{2}}\left(
\frac{n-2k}{2}.\omega _{n}.Q_{h}\right) ^{\frac{2^{\sharp }}{2}}\int_{
\mathbb{S}^{n}}A(x)dv_{h}
\end{equation*}
So, if
\begin{equation*}
\frac{1}{2^{\sharp }}\left( \dfrac{\beta +1}{\beta -1}\right) ^{\frac{n}{2}
}\left( \frac{n-2k}{2}.\omega _{n}.Q_{h}\right) ^{\frac{2^{\sharp }}{2}
}\int_{\mathbb{S}^{n}}A(x)dv_{h}\leq C\left( n,p,k\right) \left( S\underset{
x\in \mathbb{S}^{n}}{\max }B(x)\right) ^{\frac{2+2^{\sharp }}{2-2^{\sharp }}}
\end{equation*}
then the condition (\ref{2.3}) is fulfilled. Likewise if
\begin{equation*}
\frac{1}{p-1}\left( \frac{n-2k}{2}\omega _{n}Q_{h}\right) ^{\frac{p-1}{2}
}\left( \dfrac{\beta +1}{\beta -1}\right) ^{\frac{n(p-1)}{2.2^{\sharp }}
}\int_{\mathbb{S}^{n}}C(x)dv_{h}\leq C\left( n,p,k\right) \left( S\underset{
x\in M}{\max }B(x)\right) ^{\frac{p+1}{2-2^{\sharp }}}
\end{equation*}
then the condition (\ref{2.4}) also holds and we deduce the following result:
\begin{corollary}
{\ }Let $\left( \mathbb{S}^{n},h\right) $ be the standard unit sphere
of dimension $n>2k$, $k\in \mathbb{N}^{\star }$. There is a constant $
C(n,p,k)>0$ depending only on $n,p,k$ such that if
\begin{equation}
\frac{1}{2^{\sharp }}\left( \dfrac{\beta +1}{\beta -1}\right) ^{\frac{n}{2}
}\left( \frac{n-2k}{2}.\omega _{n}.Q_{h}\right) ^{\frac{2^{\sharp }}{2}
}\int_{\mathbb{S}^{n}}A(x)dv_{h}\leq C\left( n,p,k\right) \left( S\underset{
x\in \mathbb{S}^{n}}{\max }B(x)\right) ^{\frac{2+2^{\sharp }}{2-2^{\sharp }}}
\end{equation}
and
\begin{equation}
\frac{1}{p-1}\left( \frac{n-2k}{2}.\omega _{n}.Q_{h}\right) ^{\frac{p-1}{2}
}\left( \dfrac{\beta +1}{\beta -1}\right) ^{\frac{n(p-1)}{2.2^{\sharp }}
}\int_{\mathbb{S}^{n}}C(x)dv_{h}\leq C\left( n,p,k\right) \left( S\underset{
x\in M}{\max }B(x)\right) ^{\frac{p+1}{2-2^{\sharp }}}
\end{equation}
where $\beta >1$, then the equation \eqref{E1} admits a solution of class $
C^{\infty }(\mathbb{S}^{n})$. If, moreover, for any $\varepsilon \in \left]
0,\lambda ^{\star }\right[ $, where $\lambda ^{\star }$ is a positive
constant, the two following assumptions are satisfied
\begin{equation}
\frac{2}{a}\left( \int\limits_{\mathbb{S}^{n}}\sqrt{A(x)}dv_{h}\right)
^{2}\left( \frac{1}{t_{0}a_{1}}\right) ^{2^{\sharp }}>2^{\sharp }k\frac{
t_{0}^{2}}{4n}(2-a)
\end{equation}
and
\begin{equation}
\left( \frac{2}{a}\right) ^{\frac{p-1}{2^{\sharp }}}\left( \int\limits_{
\mathbb{S}^{n}}\sqrt{C(x)}dv_{h}\right) ^{2}\left( \frac{1}{t_{0}a_{2}}
\right) ^{p-1}>(p-1)k\frac{t_{0}^{2}}{4n}(2-a)
\end{equation}
where $a_{1},a_{2}$ are positive constants, $2^{\sharp }=\frac{2n}{n-2k}
\;,3<p<2^{\sharp }+1$. Then the equation \eqref{E1} admits a second solution.
\end{corollary}
\end{document} |
\begin{document}
\title{Hidden Subgroup States are Almost Orthogonal}
\begin{abstract}
It~is well known that quantum computers can efficiently find a hidden
subgroup~$H$ of a finite Abelian group~$G$. This implies that after only
a polynomial (in~$\log |G|$) number of calls to the oracle function,
the states corresponding to different candidate subgroups have
exponentially small inner product.
We~show that this is true for noncommutative groups also.
We~present a quantum algorithm which identifies a hidden subgroup of
an arbitrary finite group~$G$ in only a linear (in~$\log |G|$)
number of calls to the oracle function.
This is exponentially better than the best classical algorithm.
However our quantum algorithm requires an exponential amount of time,
as in the classical case.
\end{abstract}
\section{Introduction}
A~function~$f$ on a finite group (with an arbitrary range)
is called {\em $H$-periodic\/} if
$f$ is constant on the left cosets of~$H$.
If~$f$ also takes distinct values on
distinct cosets we say $f$ is {\em strictly} $H$-periodic. Furthermore we
call $H$ the {\em hidden subgroup\/} of $f$. Throughout we assume $f$ is
efficiently computable. When $G$ is Abelian, a quantum computer can,
utilizing the quantum Fourier transform, identify $H$ in time
polynomial in~$\log |G|$.
The question has been repeatedly raised as to whether this may be
accomplished for finite non-Abelian groups~\cite{EH99,Kitaev95,RB98}.
This time bound implies that only a polynomial (in~$\log |G|$)
number of calls to the oracle function are necessary to identify~$H$.
The main result of this paper
is that this more limited result is also true for non-Abelian groups.
In other words, there exists a quantum algorithm which {\em
information theoretically\/} determines the hidden subgroup
efficiently. One may also view this as a quantum state
distinguishability problem and from this perspective one may say that
this quantum algorithm efficiently distinguishes among the given possible
states. This result may be seen as a generalization of the
results presented in~\cite{EH99} although in that work the unitary transform
was also efficiently implementable. The quantum algorithm presented
here requires exponential time. An important open question is whether
this may be improved. Even if a time efficient quantum algorithm does
exist one must also inquire as to the complexity of postprocessing the
resulting information. For example in the case of the dihedral group
presented in~\cite{EH99}, although the hidden subgroup is information
theoretically determined in a polynomial number of calls to the oracle,
it is not known how to
efficiently postprocess the resulting information to identify~$H$.
\begin{theorem}[Main]\label{thm:main}
Let $G$ be a finite group, $f$ an oracle function on~$G$ which is
strictly $H$-periodic for some subgroup $H \leqslant G$.
Then there exists a quantum algorithm that calls the oracle function
$4 \log|G| +2$ times and outputs a subset $X \subseteq G$
such that $X=H$ with probability at least \mbox{$1 - 1/|G|$}.
\end{theorem}
\section{The Quantum Algorithm}
Let $G$ be a finite group, $H \leqslant G$ a subgroup,
and $f$ a function on $G$ which is strictly $H$-periodic.
For any subset $X = \{x_1,\dots,x_m\} \subseteq G$, let
$\ket{X}$ denote the normalized superposition
$\frac{1}{\sqrt{m}}\big(\ket{x_1} + \cdots + \ket{x_m}\big)$.
The Hilbert space~$\mathcal{H}$ in which
we work has dimension $|G|^m$ and has an orthonormal
basis indexed by the elements of the \mbox{$m$-fold} direct product
$\{\ket{(g_1,\dots,g_m)} \mid g_i \in G\}$.
The first step in our quantum algorithm is to prepare the state
\begin{equation}\label{eq:initial}
\frac{1}{\sqrt{|G|^m}} \sum_{g_1,\dots,g_m \in G}
{\ket{g_1,\dots,g_m} \,\ket{f(g_1),\dots,f(g_m)}}.
\end{equation}
We show below that picking $m = 4 \log |G| +2$
allows us to identify~$H$ with exponentially small error probability.
By observing the second register we obtain a state~\ket{\Psi} which is a
tensor product of random left cosets corresponding to the
hidden subgroup~$H$. Let
$\ket{\Psi} = \ket{a_{1}H} \otimes \ket{a_{2}H} \otimes \cdots \otimes
\ket{a_{m}H}$
where $\{a_1,\dots,a_m\} \subseteq G$.
Further, for any subgroup $K \leqslant G$ and
any subset $\{b_1,\dots,b_m\} \subseteq G$, define
\begin{equation}
\ket{\Psi(K,\{b_i\})} =
\ket{b_{1}K} \otimes \ket{b_{2}K} \otimes \cdots \otimes \ket{b_{m}K}.
\end{equation}
The key lemma, stated formally below,
is that if $K \not\leqslant H$ then $\braket{\Psi}{\Psi(K,\{g_i\})}$ is
exponentially small.
Let $\mathcal{H}_K$ be the subspace of $\mathcal{H}$ spanned by all
the vectors of the form $\ket{\Psi(K,\{b_i\})}$ for all subsets
$\{b_i\}.$ Let $P_K$ be the projection operator onto $\mathcal{H}_K$
and let $P_{K}^\perp$ be the projection operator onto the orthogonal
complement of $\mathcal{H}_K$ in $\mathcal{H}.$ Define the
observable $A_K = P_K - P_K^\perp$. Choose an
ordering of the elements of $G$, say, $g_1,g_2,\dots,g_{|G|}.$
The algorithm mentioned in Theorem~\ref{thm:main} works as follows.
We~first apply $A_\cs{g_1}$ to $\ket{\Psi}$,
where $\cs{g} \leqslant G$ denotes
the cyclic subgroup generated by \mbox{$g \in G$}.
If~the outcome is~$-1$ then we know that $g_1 \not\in H$,
and if the outcome is~$+1$ then we know that
$g_1 \in H$ with high probability.
We~then apply $A_\cs{g_2}$ to the
state resulting from the first measurement.
Continuing in this manner we test
each element of $G$ for membership in $H$ by sequentially applying
$A_\cs{g_2}$, $A_\cs{g_3}$ and so on to the resulting states
of the previous measurements.
(Of~course if we discover $g \in H$ then we know that, say,
$g^2 \in H$ and we can omit the test $A_\cs{g^2}$.)
We~prove below that each measurement alters the state
insignificantly with high probability,
implying that by the application of the final operator $A_\cs{g_{|G|}}$
we have, with high probability, identified exactly which
elements of~$G$ are in~$H$ and which are not.
\begin{lemma}\label{lm:key}
Let $K \leqslant G$. If $K \not\leqslant H$
then $\langle \Psi|P_K |\Psi \rangle \leq \frac{1}{2^m}$.
If~$K \leqslant H$ then $\langle \Psi|P_K |\Psi \rangle = 1$.
\end{lemma}
\begin{proof}
Let $|H \cap K| = d$. Notice that for all
$g_1,g_2 \in G$ we have either $|g_1H \cap g_2K| = d$ or $|g_1H \cap
g_2K| = 0$. This implies that if $|g_1H \cap g_2K| = d$ then
$\braket{g_1H}{g_2K} = d/\sqrt{|H||K|}$.
Therefore
\begin{equation*}
\braket{\Psi}{\Psi(K,\{b_i\})} =
\begin{cases}\Big(\frac{d}{\sqrt{|H||K|}}\Big)^m
& \mbox{ if $a_iH \cap b_iK \neq \emptyset$ for all $i$}\\ 0 & \mbox{ otherwise.}
\end{cases}
\end{equation*}
There exist exactly $(|H|/d)^m$ vectors of the form
$\ket{\Psi(K,\{b_i\})}$ such that $\braket{\Psi}{\Psi(K,\{b_i\})}$
is nonzero. Hence, $\langle \Psi |P_K | \Psi \rangle =
\big(\frac{|H|}{d}\big)^m \big(\frac{d^2}{|H||K|}\big)^m
= \big(\frac{d}{|K|}\big)^m$.
If $K \not\leqslant H$ then $d/|K| \leq 1/2$, and if
$K \leqslant H$ then \mbox{$d = |K|$}.
\end{proof}
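As an illustration of the lemma, the Python sketch below (the toy choice $G=\mathbb{Z}_8$, $H=\{0,4\}$, $K=\{0,2,4,6\}$ is our own; here $d=|H\cap K|=2$ and $K \not\leqslant H$) computes $\langle \Psi|P_K|\Psi\rangle$ by direct linear algebra and compares it with $(d/|K|)^m$.

```python
import numpy as np

# Toy check of the lemma: <Psi|P_K|Psi> = (d/|K|)^m for G = Z_8,
# H = {0,4}, K = {0,2,4,6}, so d/|K| = 1/2 and the product is 2^{-m}.
n = 8
H = {0, 4}
K = {0, 2, 4, 6}

def coset_vec(a, S):
    v = np.zeros(n)
    for s in S:
        v[(a + s) % n] = 1.0
    return v / np.sqrt(len(S))

# Distinct cosets of K give an orthonormal basis of the subspace H_K.
coset_reps, seen = [], set()
for b in range(n):
    c = frozenset((b + s) % n for s in K)
    if c not in seen:
        seen.add(c)
        coset_reps.append(b)

def overlap_with_HK(a):
    # <aH| P_K |aH> for a single tensor factor
    v = coset_vec(a, H)
    return sum(np.dot(v, coset_vec(b, K))**2 for b in coset_reps)

m = 3
reps = [1, 2, 5]                       # sample coset representatives for H
val = np.prod([overlap_with_HK(a) for a in reps])
d = len(H & K)
print(val, (d / len(K))**m)            # both 0.125
```

The tensor-product structure lets $\langle \Psi|P_K|\Psi\rangle$ factor into a product of single-factor overlaps, which is what the script exploits.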
Let $\ket{\Psi_0} = \ket{\Psi}$.
For $1 \leq i \leq |G|$, define the unnormalized states
\begin{equation*}
\ket{\Psi_i} =
\begin{cases}
\,P_\cs{g_i} \;\ket{\Psi_{i-1}} & \mbox{ if $g_i \in H$}\\
\,P_\cs{g_i}^\perp \;\ket{\Psi_{i-1}} & \mbox{ if $g_i \not\in H$.}
\end{cases}
\end{equation*}
Then $\braket{\Psi_i}{\Psi_i}$ equals the probability that
the algorithm given above answers correctly whether $g_j \in H$ for
all $1 \leq j \leq i$.
Now, for all $0 \leq i \leq |G|$,
let $\ket{E_i} = \ket{\Psi} - \ket{\Psi_i}$.
\begin{lemma}\label{lm:error}
For all $0 \leq i \leq |G|$, we have
$\braket{E_i}{E_i} \leq \frac{i^2}{2^m}$.
\end{lemma}
\begin{proof}
We~prove this by induction on~$i$.
Since $\ket{\Psi_0} = \ket{\Psi}$ by definition, $\ket{E_0} = 0$.
Now, suppose that $\braket{E_i}{E_i} \leq \frac{i^2}{2^m}$.
On~the one hand, if $g_{i+1} \in H$, then
$\ket{\Psi_{i+1}} = P_{\cs{g_{i+1}}} \big(\ket{\Psi} - \ket{E_{i}}\big)
= \ket{\Psi} - P_{\cs{g_{i+1}}} \ket{E_{i}}$.
Hence $\braket{E_{i+1}}{E_{i+1}} \leq \braket{E_i}{E_i} \leq
\frac{i^2}{2^m}$.
On~the other hand, if $g_{i+1} \not\in H$, then
$\ket{\Psi_{i+1}} = P_{\cs{g_{i+1}}}^\perp \big(\ket{\Psi} - \ket{E_{i}}\big)
= \ket{\Psi} - P_{\cs{g_{i+1}}} \ket{\Psi} - P_{\cs{g_{i+1}}}^\perp
\ket{E_{i}}$.
By~Lemma~\ref{lm:key}, we then have
$\braket{E_{i+1}}{E_{i+1}}^{1/2} \leq
\frac{1}{2^{m/2}} + \braket{E_i}{E_i}^{1/2} \leq \frac{i+1}{2^{m/2}}$.
\end{proof}
Since $\ket{\Psi_{|G|}} = \ket{\Psi} - \ket{E_{|G|}}$
and $\braket{E_{|G|}}{E_{|G|}} \leq \frac{|G|^2}{2^m}$
by the above lemma, we obtain the following lower bound
for correctly determining all the elements of~$H$.
\begin{lemma}
$\braket{\Psi_{|G|}}{\Psi_{|G|}} \,\geq 1 - \frac{2 |G|}{2^{m/2}}$.
\end{lemma}
By choosing $m = 4 \log|G| +2$, the main theorem follows directly.
\section{Conclusion}
We have shown that there exists a quantum algorithm that discovers a
hidden subgroup of an arbitrary finite group in $O(\log|G|)$ calls to
the oracle function. This is possible due to the geometric fact that
the possible pure states corresponding to different possible subgroups
are almost orthogonal, i.e. they have exponentially small inner
product. Equivalently stated, there exists a measurement, a POVM, that
distinguishes among the possible states in a Hilbert space of
dimension $|G|^m$ where $m = O(\log|G|)$. The open question is whether
there exists a POVM which not only distinguishes among the states but
is also efficiently implementable, with the resulting information
efficiently postprocessable.
\end{document} |
\begin{document}
\title{A Unified View of Quantum Correlations and Quantum Coherence}
\author{Tan Kok Chuan Bobby, Hyukjoon Kwon, Chae-Yeun Park, and Hyunseok Jeong}
\affiliation{Center for Macroscopic Quantum Control, Department of Physics and Astronomy, Seoul National University, Seoul, 151-742, Korea}
\date{\today}
\begin{abstract}
In this paper, we argue that quantum coherence in a bipartite system can be contained either locally or in the correlations between the subsystems. The portion of quantum coherence contained within the correlations can be viewed as a kind of quantum correlation which we call correlated coherence.
We demonstrate that the framework provided by correlated coherence allows us to retrieve the same concepts of quantum correlations as defined by the asymmetric and symmetrized versions of quantum discord as well as quantum entanglement, thus providing a unified view of these correlations. We also prove that correlated coherence can be formulated as an entanglement monotone, thus demonstrating that entanglement may be viewed as a specialized form of coherence.
\end{abstract}
\pacs{}
\maketitle
\section{Introduction}
Quantum mechanics admits superpositions between different physical states. A superposed quantum state is described by a pure state and is completely different in nature from a classical stochastic mixture of states, otherwise called a mixed state. In the parlance of quantum mechanics, the former is usually referred to as a coherent superposition, the latter as an incoherent classical mixture.
A particularly illuminating example of quantum coherence in action is the classic double-slit experiment. In the quantum version, single electrons pass through a double slit one at a time and, despite not interacting with one another, form an interference pattern upon emerging. An explanation of this phenomenon requires a coherent superposition of the two travelling waves emerging from the slits. Such an effect is impossible to explain using only incoherent classical mixtures. Following the birth of quantum theory, physical demonstrations of quantum coherence arising from superpositions of many different quantum systems such as electrons, photons, atoms, mechanical modes, and hybrid systems have been achieved \cite{Hornberger12, Wineland13, Aspelmeyer14}.
Recent developments in our understanding of quantum coherence have come from the burgeoning field of quantum information science.
One important area of study that quantum information researchers concern themselves with is the understanding of quantum correlations.
It turns out that in a multipartite setting, quantum mechanical effects allow remote laboratories to collaborate and perform tasks that would otherwise be impossible using classical physics \cite{NielsenChuang}. Historically, the most well-studied quantum correlation is quantum entanglement \cite{EPR1935,Horodecki2001, Werner1989}. Subsequent developments of the idea led to the formulation of quantum discord \cite{Ollivier2001, Henderson2001} and its symmetrized version \cite{Oppenheim2002, Modi2010, Luo2008} as more generalized forms of quantum correlations that include quantum entanglement. The development of such ideas of the quantumness of correlations has led to a plethora of quantum protocols such as quantum cryptography \cite{Ekert91}, quantum teleportation \cite{Bennett1993}, quantum superdense coding \cite{Bennett2001}, quantum random access codes \cite{Chuan2013}, remote state preparation \cite{Dakic2012}, random number generation \cite{Pironio2010}, and quantum computing \cite{Raussendorf2001, Datta2008}, amongst others. Quantum correlations have also proven useful in the study of macroscopic quantum objects \cite{Jeong2015}.
Meanwhile, quantitative theories for entanglement \cite{Plenio07, Vedral98} have been formulated by characterizing and quantifying entanglement as a resource to achieve certain tasks that are otherwise impossible classically.
Building upon this, Baumgratz et al. \cite{Baumgratz14} recently proposed a resource theory of quantum coherence.
Recent developments have since unveiled interesting connections between quantum coherence and correlations, such as their interconversion with each other \cite{Ma15, Streltsov15} and trade-off relations \cite{Xi15}.
In this paper, we demonstrate that quantum correlation can be understood in terms of the coherence contained solely between subsystems.
In contrast to previous studies which established indirect relationships between quantum correlation and coherence \cite{Ma15, Streltsov15, Xi15},
our study establishes a more direct connection between the two and provides a unified view of quantum correlations which includes quantum discord and entanglement using the framework of quantum coherence.
\section{Preliminaries}
\subsection{Bipartite system and local basis}
In this paper, we will frequently refer to a bipartite state which we denote $\rho_{AB}$, where $A$ and $B$ refer to local subsystems held by different laboratories. Following convention, we say the subsystems $A$ and $B$ are held by Alice and Bob respectively.
The local state of Alice is obtained by performing a partial trace on $\rho_{AB}$, and is denoted by $\rho_A = \mathrm{Tr}_B(\rho_{AB})$, and $\{ \ket{i}_A \}$ is a complete local basis of Alice's system.
Bob's local state and local basis are also similarly defined.
In general, the systems Alice and Bob holds may be composite, such that $A=A_1 A_2 \cdots A_N$ and $B = B_1 B_2 \cdots B_M$ so the total state may identically be denoted by $\rho_{A_1A_2\cdots A_N B_1 B_2 \cdots B_M}$.
\subsection{Quantum coherence}
We will adopt the axiomatic approach for coherence measures as shown in Ref.~\cite{Baumgratz14}.
For a fixed basis set $\{ \ket{i} \}$, the set of incoherent states $\cal I$ is the set of quantum states with diagonal density matrices with respect to this basis. A reasonable measure of quantum coherence $C$ should then satisfy the following properties:
(C1) $C(\rho) \geq 0$ for any quantum states $\rho$ and equality holds if and only if $\rho \in \cal I$.
(C2a) Non-increasing under incoherent completely positive and trace preserving (ICPTP) maps $\Phi$, i.e., $C(\rho) \geq C(\Phi(\rho))$.
(C2b) Monotonicity for average coherence under selective outcomes of ICPTP:
$C(\rho) \geq \sum_n p_n C(\rho_n)$, where $\rho_n = \hat{K}_n \rho \hat{K}_n^\dagger/p_n$ and $p_n = \mbox{Tr} [\hat{K}_n \rho \hat{K}^\dagger_n ]$ for all $\hat{K}_n$ with $\sum_n \hat{K}_n \hat{K}^\dagger_n = \mathbb 1$ and $\hat{K}_n {\cal I} \hat{K}_n^\dagger \subseteq \cal I$.
(C3) Convexity, i.e., $\lambda C(\rho) + (1-\lambda) C(\sigma) \geq C(\lambda \rho + (1-\lambda) \sigma)$, for any density matrix $\rho$ and $\sigma$ with $0\leq \lambda \leq 1$.
In this paper, we will employ the $l_1$-norm of coherence, which is defined by $\coh{\rho} \coloneqq \sum _{i\neq j} \abs{ \bra{i} \rho \ket{j}}$, for any given basis set $\{ \ket{i} \}$ (otherwise called the reference basis). It can be shown that this definition satisfies all the properties mentioned \cite{Baumgratz14}.
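As a minimal numerical illustration (the helper name `l1_coherence` is ours, not from the text), the $l_1$-norm of coherence is simply the absolute sum of the off-diagonal entries of the density matrix in the reference basis:

```python
import numpy as np

def l1_coherence(rho):
    # C_l1(rho) = sum_{i != j} |<i|rho|j>| in the fixed reference basis
    return np.sum(np.abs(rho)) - np.sum(np.abs(np.diag(rho)))

# The maximally coherent qubit state |+><+| has C_l1 = 1,
# while any diagonal (incoherent) state has C_l1 = 0.
plus = np.array([[0.5, 0.5], [0.5, 0.5]])
print(l1_coherence(plus))                 # → 1.0
print(l1_coherence(np.diag([0.3, 0.7])))  # → 0.0
```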
\subsection{Local operations and classical communication (LOCC)}
In addition, we will also reference local operations and classical communication (LOCC) protocols in the context of the resource theory of entanglement. LOCC protocols allow two types of operation. First, Alice and Bob may perform quantum operations, but only locally on their respective subsystems. Second, they are allowed classical, but otherwise unrestricted, communication between them. LOCC operations are especially important in the characterization of quantum entanglement, which does not increase under such operations. Measures of entanglement satisfying this property are referred to as LOCC monotones \cite{Vidal00}.
\section{Maximal Coherence Loss}
Before establishing the connection between quantum correlation and coherence, we first consider the measurement that leads to the maximal coherence loss in the system of interest. For a monopartite system, the solution is trivial. For any quantum state $\rho = \sum_{i,j} \rho_{i,j} \ketbra{i}{j}$ with a reference basis $\{\ket{i}\}$, the measurement that maximally removes coherence from the system is the projective measurement $\Pi(\rho) = \sum_{i} \ketbra{i}{i} \rho \ketbra{i}{i}$. This measurement leaves behind only the diagonal terms of $\rho$, so $\coh{\Pi(\rho )} = 0$, which is the minimum coherence any state can have.
A less obvious result for a bipartite state is the following:
\begin{proposition} \label{maxCoh}
For any bipartite state $\rho_{AB} = \sum_{i,j,k,l}\rho_{i,j,k,l} \ket{i,j}_{AB}\bra{k,l}$ where the coherence is measured with respect to the local reference bases $\{\ket{i}_A\}$ and $\{\ket{j}_B\}$, the projective measurement on subsystem $B$ that induces maximal coherence loss is $\Pi_B(\rho_{AB}) = \sum_{j} \left( \openone_A \otimes \ket{j}_B\bra{j} \right) \rho_{AB} \left( \openone_A \otimes \ket{j}_B\bra{j} \right)$.
\end{proposition}
\begin{proof}
We begin with the spectral decomposition of a general bipartite quantum state, $\rho_{AB} = \sum_n p_n \ket{\psi^n}_{AB} \bra{\psi^n}$. Assume that the subsystems have local reference bases $\{\ket{i}_A\}$ and $\{\ket{j}_B\}$ such that $\rho_{AB}= \sum_n \sum_{i,j,k,l} p_n \psi^n_{i,j} (\psi^n_{k,l})^*\ket{i,j}_{AB}\bra{k,l}$. The coherence of the system is measured with respect to these bases. To reduce clutter, we drop the subscripts pertaining to the subsystems $AB$ for the remainder of the proof; unless otherwise stated, it should be clear from the context which subsystem each operator belongs to.
Consider some complete basis on $B$, $\{ \ket{\lambda_m} \}$, and the corresponding projective measurement $\Pi_B(\rho) = \sum_m (\openone\otimes \ketbra{\lambda_m}{\lambda_m}) \,\rho\, (\openone\otimes \ketbra{\lambda_m}{\lambda_m})$. Computing the matrix elements, we get:
\begin{align*}
\braket{i,j}{\Pi_B(\rho)}{k,l} &\coloneqq \left[ \Pi_B(\rho) \right]_{i,j,k,l} \\
&= \sum_n\sum_{p,q} p_n \psi^n_{i,p} (\psi^n_{k,q})^*\sum_m \braket{j}{\lambda_m \rangle\langle \lambda_m}{l} \braket{q}{\lambda_m \rangle\langle \lambda_m}{p}.
\end{align*}
Note that minimizing the absolute sum of all the matrix elements will also minimize the coherence, since the diagonal elements of any density matrix always sum to 1 and are non-negative. Consider the absolute sum of all the matrix elements of $\Pi_B(\rho)$:
\begin{align*}
\sum_{i,j,k,l}\abs{\left[ \Pi_B(\rho) \right]_{i,j,k,l}} &= \sum_{i,j,l,k} \abs{\sum_n\sum_{p,q} p_n \psi^n_{i,p} (\psi^n_{k,q})^*\sum_m \braket{j}{\lambda_m \rangle\langle \lambda_m}{l} \braket{q}{\lambda_m \rangle\langle \lambda_m}{p}} \\
&= \left( \sum_{ \substack{i,k \\ j=l}} + \sum_{\substack{i,k \\ j\neq l}} \right) \abs{\sum_n\sum_{p,q} p_n \psi^n_{i,p} (\psi^n_{k,q})^*\sum_m \braket{j}{\lambda_m \rangle\langle \lambda_m}{l} \braket{q}{\lambda_m \rangle\langle \lambda_m}{p}} \\
&\geq \sum_{\substack{i,k \\ j=l}} \abs{\sum_n\sum_{p,q} p_n \psi^n_{i,p} (\psi^n_{k,q})^*\sum_m \braket{j}{\lambda_m \rangle\langle \lambda_m}{l} \braket{q}{\lambda_m \rangle\langle \lambda_m}{p}} \\
&= \sum_{i,k,j} \abs{\sum_n\sum_{p,q} p_n \psi^n_{i,p} (\psi^n_{k,q})^*\sum_m \braket{j}{\lambda_m \rangle\langle \lambda_m}{j} \braket{q}{\lambda_m \rangle\langle \lambda_m}{p}} \\
&\geq \sum_{i,k} \abs{\sum_n\sum_{p,q} p_n \psi^n_{i,p} (\psi^n_{k,q})^*\sum_m \sum_j \braket{j}{\lambda_m \rangle\langle \lambda_m}{j} \braket{q}{\lambda_m \rangle\langle \lambda_m}{p}} \\
&= \sum_{i,k} \abs{\sum_n\sum_{p,q} p_n \psi^n_{i,p} (\psi^n_{k,q})^*\sum_m \braket{q}{\lambda_m \rangle\langle \lambda_m}{p}} \\
&= \sum_{i,k} \abs{\sum_n\sum_{p,q} p_n \psi^n_{i,p} (\psi^n_{k,q})^*\delta_{q,p}} \displaybreak \\
&= \sum_{i,k} \abs{\sum_n \sum_{p} p_n \psi^n_{i,p} (\psi^n_{k,p})^*}.
\end{align*}
The first inequality comes from omitting non-negative terms in the sum, while the second inequality comes from moving a summation inside the absolute value function. Note that the final expression is exactly the absolute sum of the elements when $\ket{\lambda_m} = \ket{m}$ for all $m$, since:
$$
\sum_{j} \left( \openone_A \otimes \ket{j}_B\bra{j} \right) \rho_{AB} \left( \openone_A \otimes \ket{j}_B\bra{j} \right) = \sum_{i,j,k} \sum_n p_n \psi^n_{i,j} (\psi^n_{k,j})^* \ketbra{i,j}{k,j}.
$$
This proves the proposition.
\end{proof}
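Proposition~\ref{maxCoh} can be probed numerically. The following sketch (NumPy assumed; the helper names are ours) projects a random two-qubit state onto the reference basis of $B$ and onto a sample of random orthonormal bases, and checks that the reference-basis measurement leaves the least coherence:

```python
import numpy as np

rng = np.random.default_rng(0)

def l1_coherence(rho):
    return np.sum(np.abs(rho)) - np.sum(np.abs(np.diag(rho)))

def random_state(d):
    # random full-rank density matrix of dimension d
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

def project_B(rho, basis):
    # projective measurement on qubit B, onto the columns of `basis`
    out = np.zeros_like(rho)
    for k in range(2):
        v = basis[:, k:k + 1]
        P = np.kron(np.eye(2), v @ v.conj().T)
        out += P @ rho @ P
    return out

rho = random_state(4)
ref = l1_coherence(project_B(rho, np.eye(2)))  # reference-basis measurement
assert ref <= l1_coherence(rho) + 1e-12        # projection never adds coherence
for _ in range(100):
    # random orthonormal basis of B via QR of a random complex matrix
    Q, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
    assert l1_coherence(project_B(rho, Q)) >= ref - 1e-9
```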
Since any $N$-partite state $\rho_{A_1A_2\ldots A_N}$ admits a bipartition $\rho_{A_1A_2\ldots A_N} = \rho_{A'A_N}$, where $A' = A_1\ldots A_{N-1}$, we also get the following corollary:
\begin{pcorollary}
For any $N$-partite state $\rho_{A_1A_2\ldots A_N}$ where the coherence is measured with respect to the local reference bases $\{\ket{i}_{A_k}\}$, $k = 1,2, \ldots, N$, the projective measurement on subsystem $A_k$ that induces maximal coherence loss is the projective measurement onto the local basis $\{\ket{i}_{A_k}\}$.
\end{pcorollary}
\section{Local and Correlated Coherence}
Now consider a bipartite state $\rho_{AB}$, with total coherence $\coh{\rho_{AB}}$ with respect to local reference bases $\{\ket{i}_A\}$ and $\{\ket{j}_B\}$. Then $\coh{\rho_A}$ can be interpreted as the coherence that is local to $A$. Similarly, $\coh{\rho_B}$ is the portion of the coherence that is local to $B$. In general, the sum of the local coherences is not the same as the total coherence in the system. It is therefore reasonable to suppose that a portion of the quantum coherence is stored not locally, but within the correlations of the system itself.
\begin{definition} [Correlated Coherence] With respect to local reference bases $\{\ket{i}_A\}$ and $\{\ket{j}_B\}$, Correlated Coherence for a bipartite quantum system is given by subtracting local coherences from the total coherence:
$$
\cc{\rho_{AB}} \coloneqq \coh{\rho_{AB}}- \coh{\rho_{A}} - \coh{\rho_{B}}
$$
where $\rho_{A}$ and $\rho_{B}$ are the reduced density matrices of $A$ and $B$ respectively.
\end{definition}
Further reinforcing the idea that the local coherences form only a portion of the total coherence present in a quantum system, we prove the following property:
\begin{theorem} For any bipartite quantum state $\rho_{AB}$, $\cc{\rho_{AB}} \geq 0 $ (i.e. Correlated Coherence is always non-negative).
\end{theorem}
\begin{proof}
Let $\rho_{AB}= \sum_n \sum_{i,j,k,l} p_n \psi^n_{i,j} (\psi^n_{k,l})^*\ket{i,j}_{AB}\bra{k,l}$, then
\begin{align*}
\cc{\rho_{AB}} &= \coh{\rho_{AB}}- \coh{\rho_{A}} - \coh{\rho_{B}} \\
&= \sum_{\substack{(i,j) \\ \neq (k,l)}} \abs{\sum_n p_n \psi^n_{i,j} (\psi^n_{k,l})^* } - \sum_{i \neq k } \abs{\sum_n p_n\sum_{j} \psi^n_{i,j} (\psi^n_{k,j})^* } -\sum_{j \neq l} \abs{\sum_n p_n \sum_i \psi^n_{i,j} (\psi^n_{i,l})^* } \\
&\geq \sum_{\substack{(i,j) \\ \neq (k,l)}} \abs{\sum_n p_n \psi^n_{i,j} (\psi^n_{k,l})^* } - \sum_{\substack{j \\ i \neq k }} \abs{\sum_n p_n \psi^n_{i,j} (\psi^n_{k,j})^* } -\sum_{\substack{ i \\ j \neq l}} \abs{\sum_n p_n \psi^n_{i,j} (\psi^n_{i,l})^* } \\
&= \left( \sum_{\substack{(i,j) \\ \neq (k,l)}} - \sum_{\substack{j = l \\ i \neq k }}-\sum_{\substack{ i = k \\ j \neq l}} \right ) \abs{\sum_n p_n \psi^n_{i,j} (\psi^n_{k,l})^* }.
\end{align*}
The inequality comes from moving a summation outside of the absolute value function. Since $\sum_{\substack{(i,j) \\ \neq (k,l)}} = \sum_{\substack{j \neq l \\ i \neq k }}+\sum_{\substack{j = l \\ i \neq k }}+\sum_{\substack{ i = k \\ j \neq l}}$, the final equality above is always a sum of non-negative values, which completes the proof.
\end{proof}
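Correlated Coherence and its non-negativity are easy to check numerically. In the following sketch (NumPy assumed; `partial_trace` and the other helper names are ours) we verify $\cc{\rho_{AB}} \geq 0$ on random two-qubit states in a fixed product basis:

```python
import numpy as np

rng = np.random.default_rng(1)

def l1_coherence(rho):
    return np.sum(np.abs(rho)) - np.sum(np.abs(np.diag(rho)))

def partial_trace(rho, keep, dims):
    # keep = 0 traces out B, keep = 1 traces out A
    dA, dB = dims
    r = rho.reshape(dA, dB, dA, dB)
    return np.trace(r, axis1=1, axis2=3) if keep == 0 else np.trace(r, axis1=0, axis2=2)

def correlated_coherence(rho, dims):
    # C_cc = C(rho_AB) - C(rho_A) - C(rho_B) in the fixed product basis
    return (l1_coherence(rho)
            - l1_coherence(partial_trace(rho, 0, dims))
            - l1_coherence(partial_trace(rho, 1, dims)))

for _ in range(100):
    G = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    rho = G @ G.conj().T
    rho /= np.trace(rho).real
    assert correlated_coherence(rho, (2, 2)) >= -1e-10
```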
\section{Correlated Coherence and Quantum Discord}
Of particular interest to the study of quantum correlations is the idea that certain correlations are quantum and certain correlations are classical. In this section, we will demonstrate that Correlated Coherence is able to unify many of these concepts of quantumness under the same framework.
First, note that in our definition of Correlated Coherence, the choice of reference bases is not unique, while most definitions of quantum correlations are independent of specific basis choices. However, we can retrieve basis independence via a very natural choice of local bases. For every bipartite state $\rho_{AB}$, the reduced density matrices $\rho_A$ and $\rho_B$ have eigenbases $\{ \ket{\alpha_i} \}$ and $\{\ket{\beta_i}\}$ respectively. By choosing these local bases, $\rho_A$ and $\rho_B$ are both diagonal and the local coherences are zero. The implication of this is that for such a choice, \textit{the coherence in the system is stored entirely within the correlations}. Since this can be done for any $\rho_{AB}$, Correlated Coherence with respect to these bases becomes a property of the state alone, as required. For the rest of the paper, unless otherwise stated, we will assume that the choice of local bases for the calculation of Correlated Coherence is always the local eigenbases of Alice and Bob.
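The eigenbasis convention is easy to realize numerically: rotating a state into the local eigenbases of its marginals makes both reduced states diagonal, so the local coherences vanish and all remaining coherence sits in the correlations. A minimal sketch (NumPy assumed; helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)

def l1_coherence(rho):
    return np.sum(np.abs(rho)) - np.sum(np.abs(np.diag(rho)))

def partial_trace(rho, keep, dims):
    dA, dB = dims
    r = rho.reshape(dA, dB, dA, dB)
    return np.trace(r, axis1=1, axis2=3) if keep == 0 else np.trace(r, axis1=0, axis2=2)

G = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = G @ G.conj().T
rho /= np.trace(rho).real

_, UA = np.linalg.eigh(partial_trace(rho, 0, (2, 2)))
_, UB = np.linalg.eigh(partial_trace(rho, 1, (2, 2)))
U = np.kron(UA, UB)
sigma = U.conj().T @ rho @ U  # the same state, written in the local eigenbases

# both marginals are now diagonal: the local coherences are zero
assert l1_coherence(partial_trace(sigma, 0, (2, 2))) < 1e-10
assert l1_coherence(partial_trace(sigma, 1, (2, 2))) < 1e-10
```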
We first consider the definition of a quantum correlation in the symmetrized version of quantum discord. Under the framework of symmetric discord, a state contains quantum correlations when it cannot be expressed in the form $\rho_{AB} = \sum_{i,j} p_{i,j}\ket{i}_A\bra{i}\otimes \ket{j}_B \bra{j}$, where $\{\ket{i}_A\}$ and $\{\ket{j}_B\}$ are sets of orthonormal vectors. Any such state has zero symmetric discord by definition.
We prove the following theorem:
\begin{theorem} [Correlated Coherence and Symmetric Quantum Discord] \label{symDisc}
For a given state $\rho_{AB}$, $\cqc{\rho_{AB}} = 0$ iff $\rho_{AB} = \sum_{i,j} p_{i,j}\ket{i}_A\bra{i}\otimes \ket{j}_B \bra{j}$.
\end{theorem}
\begin{proof}
If $\{ \ket{i}_A \}$ and $\{\ket{j}_B\}$ are the eigenbases of $\rho_A$ and $\rho_B$, then $\cqc{\rho_{AB}} = 0$ implies $\coh{\rho_{AB}} = 0$, so $\rho_{AB}$ has only diagonal terms and $\rho_{AB} = \sum_{i,j} p_{i,j}\ket{i}_A\bra{i}\otimes \ket{j}_B \bra{j}$. Therefore, $\cqc{\rho_{AB}} =0 \implies \rho_{AB} = \sum_{i,j} p_{i,j}\ket{i}_A\bra{i}\otimes \ket{j}_B \bra{j}$.
Conversely, if $\rho_{AB} = \sum_{i,j} p_{i,j}\ket{i}_A\bra{i}\otimes \ket{j}_B \bra{j}$, then the state clearly has zero coherence, which implies $\cqc{\rho_{AB}} =0$, so the converse is also true. This proves the theorem.
\end{proof}
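Theorem~\ref{symDisc} can be illustrated on two extreme cases (a NumPy sketch; helper names are ours): a classical-classical state, which is incoherent in its local eigenbases, and a Bell state, whose marginals are maximally mixed so that the computational basis is a valid choice of local eigenbases:

```python
import numpy as np

def l1_coherence(rho):
    return np.sum(np.abs(rho)) - np.sum(np.abs(np.diag(rho)))

# Classical-classical state sum_ij p_ij |i><i| (x) |j><j| with nondegenerate
# marginals: diagonal in its local eigenbases, hence C_cc = 0.
p = np.array([[0.5, 0.1], [0.1, 0.3]])
rho_cc = np.diag(p.reshape(-1))
assert l1_coherence(rho_cc) == 0.0

# Bell state (|00> + |11>)/sqrt(2): marginals are maximally mixed, so the
# computational basis is a valid local eigenbasis; local coherences vanish
# and C_cc equals the total coherence, which is 1.
phi = np.zeros((4, 1))
phi[0, 0] = phi[3, 0] = 1 / np.sqrt(2)
rho_bell = phi @ phi.T
assert abs(l1_coherence(rho_bell) - 1.0) < 1e-12
```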
This establishes a relationship between Correlated Coherence and symmetric discord. We now consider the asymmetric version of quantum discord. Under this framework, a state contains quantum correlations when it cannot be expressed in the form $\rho_{AB} = \sum_{i} p_{i}\ket{i}_A\bra{i}\otimes \rho_B^i$, where $\rho_B^i$ is some normalized density matrix.
We prove the following:
\begin{theorem} [Correlated Coherence and Asymmetric Quantum Discord] \label{AsymDisc}
For a given state $\rho_{AB}$, let $\{ \ket{i}_A \}$ and $\{\ket{j}_B\}$ be the eigenbases of $\rho_A$ and $\rho_B$ respectively. Define the measurement on $A$ onto the local basis as $\Pi_A(\rho_{AB}) \coloneqq \sum_i ( \ket{i}_A\bra{i} \otimes \openone_B) \rho_{AB} ( \ket{i}_A\bra{i} \otimes \openone_B)$. Then, with respect to these local bases, $\cqc{\rho_{AB}} - \cqc{\Pi_A(\rho_{AB})}= 0$ iff $\rho_{AB} = \sum_{i} p_{i}\ket{i}_A\bra{i}\otimes \rho_B^i$, where $\rho_B^i$ is some normalized density matrix and $\{\ket{i}_A\}$ is some set of orthonormal vectors.
\end{theorem}
\begin{proof}
First, we write the state in the form $\rho_{AB}= \sum_{i,j,k,l}\rho_{ijkl}\ket{i,j}_{AB}\bra{k,l}$. We can always write the state in block matrix form such that $\rho_{AB}= \sum_{i,k}\ket{i}_A\bra{k} \otimes \rho_B^{i,k}$ where $\rho_B^{i,k} \coloneqq \sum_{j,l}\rho_{ijkl}\ket{j}_B \bra{l}$. If $\{ \ket{i}_A \}$ and $\{\ket{j}_B\}$ are the eigenbases of $\rho_A$ and $\rho_B$, then $\cqc{\rho_{AB}} - \cqc{\Pi_A(\rho_{AB})}= 0$ implies that $\rho_B^{i,k} = 0$ whenever $i \neq k$. This implies $\rho_{AB}= \sum_{i}\ket{i}_A\bra{i} \otimes \rho_B^{i,i}$. By defining $\rho_B^i = \rho_B^{i,i}/p_i$ where $p_i \coloneqq \mbox{Tr}{\rho_B^{i,i}}$, we get $\rho_{AB} = \sum_{i} p_{i}\ket{i}_A\bra{i}\otimes \rho_B^i$. Therefore, $\cqc{\rho_{AB}} - \cqc{\Pi_A(\rho_{AB})}= 0 \implies \rho_{AB} = \sum_{i} p_{i}\ket{i}_A\bra{i}\otimes \rho_B^i$.
For the converse, if $\rho_{AB} = \sum_{i} p_{i}\ket{i}_A\bra{i}\otimes \rho_B^i$, then clearly, $\Pi_A(\rho_{AB})= \rho_{AB}$, so $\cqc{\rho_{AB}} - \cqc{\Pi_A(\rho_{AB})}= 0$. This completes the proof.
\end{proof}
Note that the above relationship with asymmetric quantum discord is expressed as a \textit{difference} between the Correlated Coherence of $\rho_{AB}$ and that of the post-measurement state $\Pi_A(\rho_{AB})$. While this characterization of quantum correlations may at first appear to diverge from the one given in Theorem~\ref{symDisc}, they are actually similar, since $\cqc{\Pi_A \Pi_B (\rho_{AB})} = 0$ and hence $\cqc{\rho_{AB}}= \cqc{\rho_{AB}} - \cqc{\Pi_A \Pi_B (\rho_{AB})}$. It is therefore possible to interpret quantum discord as the Correlated Coherence lost when either party performs a maximally coherence-destroying measurement on their own subsystem (see Proposition~\ref{maxCoh}). When the projective measurement is performed on one side only, one retrieves the asymmetric version of quantum discord; the symmetrized version is obtained when the coherence-destroying measurement is performed by both parties.
\section{Correlated Coherence and Entanglement}
Under the framework of entangled correlations, a state contains quantum correlations when it cannot be expressed as a convex combination of product states $\sum_{i}p_i \ket{\alpha_i}_A \bra{\alpha_i} \otimes \ket{\beta_i}_B \bra{\beta_i}$, where $\ket{\alpha_i}$ and $\ket{\beta_i}$ are normalized but not necessarily orthogonal vectors that can repeat. It is also possible to extend our methodology to entangled quantum states. In order to do this, we consider extensions of the quantum state $\rho_{AB}$. We say that a state $\rho_{ABC}$ is an extension of $\rho_{AB}$ if $\mathrm{Tr}_C(\rho_{ABC}) = \rho_{AB}$. For our purpose, we will consider extensions of the form $\rho_{AA'BB'}$.
\begin{theorem} \label{separability}
Let $\rho_{AA'BB'}$ be some extension of a bipartite state $\rho_{AB}$ and choose the local bases to be the eigenbases of $\rho_{AA'}$ and $\rho_{BB'}$ respectively. Then with respect to these local bases, $\min \cqc{\rho_{AA'BB'}} = 0$ iff $\rho_{AB} = \sum_{i}p_i \ket{\alpha_i}_A \bra{\alpha_i} \otimes \ket{\beta_i}_B \bra{\beta_i}$ for some set of normalized vectors $\ket{\alpha_i}$ and $\ket{\beta_i}$ that are not necessarily orthogonal and may repeat. The minimization is over all possible extensions of $\rho_{AB}$ of the form $\rho_{AA'BB'}$.
\end{theorem}
\begin{proof}
If $\min \cqc{\rho_{AA'BB'}} = 0$, then $\rho_{AA'BB'}$ must have the form $\sum_{i,j}p_{i,j}\ket{\mu_i}_{AA'}\bra{\mu_i}\otimes \ket{\nu_j}_{BB'}\bra{\nu_j}$ (see Thm.~\ref{symDisc}). Since $\rho_{AA'BB'}$ is an extension, $\mathrm{Tr}_{A'}\mathrm{Tr}_{B'}(\rho_{AA'BB'}) = \sum_{i,j}p_{i,j}\mathrm{Tr}_{A'}(\ket{\mu_i}_{AA'}\bra{\mu_i})\otimes \mathrm{Tr}_{B'} (\ket{\nu_j}_{BB'}\bra{\nu_j}) = \rho_{AB}$. Let $\rho^i_A \coloneqq \mathrm{Tr}_{A'}(\ket{\mu_i}_{AA'}\bra{\mu_i})$ and $\rho^j_B \coloneqq \mathrm{Tr}_{B'}(\ket{\nu_j}_{BB'}\bra{\nu_j})$. Then $\rho_{AB} = \sum_{i,j}p_{i,j} \rho^i_A\otimes \rho^j_B$. This is equivalent to saying $\rho_{AB}=\sum_{i}p_i \ket{\alpha_i}_A \bra{\alpha_i} \otimes \ket{\beta_i}_B \bra{\beta_i}$ for some set of (not necessarily orthogonal) vectors $\{\ket{\alpha_i}\}$ and $\{\ket{\beta_i}\}$. This proves $\min \cqc{\rho_{AA'BB'}} = 0 \implies \rho_{AB} = \sum_{i}p_i \ket{\alpha_i}_A \bra{\alpha_i} \otimes \ket{\beta_i}_B \bra{\beta_i}$.
For the converse, suppose $\rho_{AB} = \sum_{i}p_i \ket{\alpha_i}_A \bra{\alpha_i} \otimes \ket{\beta_i}_B \bra{\beta_i}$, and consider the purification of $\rho_{AB}$ of the form $\ket{\psi}_{ABA'B'C'} = \sum_i \sqrt{p_i}\ket{\alpha_i}_A \ket{\beta_i}_B \ket{i}_{A'}\ket{i}_{B'}\ket{i}_{C'}$. Since this is a purification, $\rho_{AA'BB'} = \mathrm{Tr}_{C'}(\ket{\psi}_{ABA'B'C'}\bra{\psi})$ is clearly an extension of $\rho_{AB}$. Furthermore, the eigenbases of $\rho_{AA'}$ and $\rho_{BB'}$ are $\{ \ket{\alpha_i}_A \ket{i}_{A'}\}$ and $\{ \ket{\beta_i}_B \ket{i}_{B'}\}$ respectively. Since $\rho_{AA'BB'} = \sum_i p_i \ket{\alpha_i}_A \ket{i}_{A'} \bra{\alpha_i}_A \bra{i}_{A'} \otimes \ket{\beta_i}_B \ket{i}_{B'}\bra{\beta_i}_B \bra{i}_{B'}$, we have $\cqc{\rho_{AA'BB'}} = 0$ with respect to the eigenbases of $\rho_{AA'}$ and $\rho_{BB'}$. Therefore $\min \cqc{\rho_{AA'BB'}} = 0$, which completes the proof.
\end{proof}
\section{Coherence as an Entanglement Monotone}
We now construct an entanglement monotone using the Correlated Coherence of a quantum state. In order to do this, we first define symmetric extensions of a given quantum state:
\begin{definition} [Unitarily Symmetric Extensions] \label{unitSym}
Let $\rho_{AA'BB'}$ be an extension of a bipartite state $\rho_{AB}$. The extension $\rho_{AA'BB'}$ is said to be unitarily symmetric if it remains invariant up to local unitaries on $AA'$ and $BB'$ under a system swap between Alice and Bob.
More formally, let $\{\ket{i}_{AA'}\}$ and $\{\ket{i}_{BB'}\}$ be complete local bases on $AA'$ and $BB'$ respectively, and define the swap operator through $U_{\mathrm{swap}} \ket{i,j}_{AA'BB'} \coloneqq \ket{j,i}_{AA'BB'}$. Then $\rho_{AA'BB'}$ is unitarily symmetric if there exist local unitary operations $U_{AA'}$ and $U_{BB'}$ such that $ U_{AA'}\otimes U_{BB'} \left(U_{\mathrm{swap}} \rho_{AA'BB'} U_{\mathrm{swap}}^\dag \right) U_{AA'}^\dag \otimes U_{BB'}^\dag = \rho_{AA'BB'}$.
\end{definition}
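As a minimal illustration of Definition~\ref{unitSym} (a NumPy sketch in which $A'$ and $B'$ are trivial; variable names are ours), a state whose two halves are identical is unitarily symmetric with trivial local unitaries:

```python
import numpy as np

d = 2
# U_swap |i,j> = |j,i> on C^d (x) C^d
SWAP = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        SWAP[j * d + i, i * d + j] = 1.0

rng = np.random.default_rng(3)
G = rng.normal(size=(d, d))
r = G @ G.T
r /= np.trace(r)

state = np.kron(r, r)  # Alice's and Bob's halves are identical
# swap-invariant, hence unitarily symmetric with U_AA' = U_BB' = identity
assert np.allclose(SWAP @ state @ SWAP.T, state)
```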
Following from the observation that the minimization of coherence over all extensions is closely related to the separability of a quantum state, we define the following:
\begin{definition}
Let $\rho_{AA'BB'}$ be some extension of a bipartite state $\rho_{AB}$ and choose the local bases to be the eigenbases of $\rho_{AA'}$ and $\rho_{BB'}$ respectively. Then the entanglement of coherence (EOC) is defined to be:
$$
E_{cqc}(\rho_{AB}) \coloneqq \min \cqc{\rho_{AA'BB'}}
$$
The minimization is over all possible unitarily symmetric extensions of $\rho_{AB}$ of the form $\rho_{AA'BB'}$.
\end{definition}
It remains to be proven that $E_{cqc}(\rho_{AB})$ is a valid measure of entanglement (i.e. it is an entanglement monotone). But first, we prove the following elementary properties.
\begin{proposition} [EOC of Separable States] \label{separableEcqc}
If a bipartite quantum state $\rho_{AB}$ is separable, $\ecqc{\rho_{AB}} = 0$.
\end{proposition}
\begin{proof}
The proof is identical to Thm.~\ref{separability}, with the additional observation that $\rho_{AA'BB'} = \sum_{i,j}p_{i,j}\ket{\mu_i}_{AA'}\bra{\mu_i}\otimes \ket{\nu_j}_{BB'}\bra{\nu_j}$ is unitarily symmetric. To see this, define $U_{AA'} \coloneqq \sum_{i} \ket{\nu_i}_{AA'}\bra{\mu_i}$ and $U_{BB'} \coloneqq \sum_{i} \ket{\mu_i}_{BB'}\bra{\nu_i}$. It is easy to verify that it satisfies
$$U_{AA'}\otimes U_{BB'} \left(U_{\mathrm{swap}} \rho_{AA'BB'} U_{\mathrm{swap}}^\dag \right) U_{AA'}^\dag \otimes U_{BB'}^\dag = \rho_{AA'BB'}$$
where $U_{\mathrm{swap}}$ is the same as in Def.~\ref{unitSym} so it is unitarily symmetric.
\end{proof}
\begin{proposition} [Invariance under local unitaries] \label{localU}
For a bipartite quantum state $\rho_{AB}$, $E_{cqc}(\rho_{AB})$ is invariant under local unitary operations on $A$ and $B$.
\end{proposition}
\begin{proof}
Without loss of generality, we only need to prove invariance under local unitary operations on $A$.
For some bipartite state $\rho_{AB}$, let $\rho^*_{AA'BB'} $ be the optimal unitarily symmetric extension such that $ \ecqc{\rho_{AB}}= \cqc {\rho^*_{AA'BB'}}$. Let $\{\ket{i}_{AA'}\}$ and $\{\ket{j}_{BB'}\}$ be the eigenbases of $\rho^*_{AA'}$ and $\rho^*_{BB'}$ respectively. With respect to these bases, $\rho^*_{AA'BB'} = \sum_{i,j,k,l}\rho_{ijkl} \ket{i,j}_{AA'BB'}\bra{k,l}$.
Suppose we perform a unitary $U = U_A\otimes \openone_{A'BB'}$ on $A$ such that $U \ket{i,j} = \ket{\alpha_i, j}$, where $\{\ket{\alpha_i}\}$ is an orthonormal set. Since $U \rho^*_{AA'BB'} U^\dag = \sum_{i,j,k,l}\rho_{ijkl} \ket{\alpha_i ,j}_{AA'BB'}\bra{\alpha_k,l}$, the off-diagonal matrix elements are invariant under the new bases $\ket{\alpha_i, j}_{AA'BB'}$, so $\cqc{\rho^*_{AA'BB'}} = \cqc{U \rho^*_{AA'BB'} U^\dag}$ and hence $\ecqc{\rho_{AB}} = \ecqc{U_A \rho_{AB} U_A^\dag}$, which proves the proposition.
\end{proof}
\begin{proposition} [Convexity]
$\ecqc{\rho_{AB}}$ is convex, i.e., it does not increase under mixing:
$$
\lambda \ecqc{\rho_{AB}} + (1-\lambda)\ecqc{\sigma_{AB}} \geq \ecqc{\lambda \rho_{AB} + (1-\lambda) \sigma_{AB}}
$$
for any two bipartite quantum states $\rho_{AB}$ and $\sigma_{AB}$, and any $\lambda \in [0,1]$.
\end{proposition}
\begin{proof}
Let $\rho^*_{AA'BB'}$ and $\sigma^*_{AA'BB'}$ be the optimal unitarily symmetric extensions for $\rho_{AB}$ and $\sigma_{AB}$ respectively such that $\ecqc{\rho_{AB}}= \cqc {\rho^*_{AA'BB'}}$ and $\ecqc{\sigma_{AB}}= \cqc {\sigma^*_{AA'BB'}}$.
Consider the state $\tau_{AA'A''BB'B''} \coloneqq \lambda \rho^*_{AA'BB'}\otimes \ket{0,0}_{A''B''}\bra{0,0}+ (1-\lambda) \sigma^*_{AA'BB'}\otimes \ket{1,1}_{A''B''}\bra{1,1}$ for $\lambda \in [0,1]$. Direct computation verifies that, with respect to the eigenbases of $\tau_{AA'A''}$ and $\tau_{BB'B''}$, $\cqc{\tau_{AA'A''BB'B''}} = \lambda \cqc{\rho^*_{AA'BB'}} + (1-\lambda) \cqc{\sigma^*_{AA'BB'}} = \lambda \ecqc{\rho_{AB}} + (1-\lambda)\ecqc{\sigma_{AB}}$. Moreover, since $\mathrm{Tr}_{A'A''B'B''}(\tau_{AA'A''BB'B''}) = \lambda \rho_{AB} + (1-\lambda) \sigma_{AB}$, it is an extension of $\lambda \rho_{AB} + (1-\lambda) \sigma_{AB}$.
It remains to be proven that the extension above is also unitarily symmetric. Let $\Xi^{\mathrm{swap}}_{ X \leftrightarrow Y}$ denote the swap operation between $X$ and $Y$. Let the operators $U_{AA'}$, $U_{BB'}$, $V_{AA'}$, $V_{BB'}$ satisfy $\rho^*_{AA'BB'} = U_{AA'}\otimes U_{BB'} \Xi^{\mathrm{swap}}_{ AA' \leftrightarrow BB'}(\rho_{AA'BB'}^*)U_{AA'}^\dag\otimes U_{BB'}^\dag $ and $\sigma^*_{AA'BB'} = V_{AA'}\otimes V_{BB'} \Xi^{\mathrm{swap}}_{ AA' \leftrightarrow BB'}(\sigma_{AA'BB'}^*)V_{AA'}^\dag\otimes V_{BB'}^\dag $ respectively. It can be verified that the local unitary operators $W_{AA'A''} \coloneqq U_{AA'}\otimes \ket{0}_{A''}\bra{0} + V_{AA'}\otimes \ket{1}_{A''}\bra{1}$ and $W_{BB'B''} \coloneqq U_{BB'}\otimes \ket{0}_{B''}\bra{0} + V_{BB'}\otimes \ket{1}_{B''}\bra{1}$ satisfy $\tau_{AA'A''BB'B''} = W_{AA'A''}\otimes W_{BB'B''} \Xi^{\mathrm{swap}}_{ AA'A'' \leftrightarrow BB'B''}(\tau_{AA'A''BB'B''})W_{AA'A''}^\dag\otimes W_{BB'B''}^\dag $, so it is also unitarily symmetric.
Since $E_{cqc}$ is a minimization over all unitarily symmetric extensions, we have $ \lambda \ecqc{\rho_{AB}} + (1-\lambda)\ecqc{\sigma_{AB}} = \cqc{\tau_{AA'A''BB'B''}} \geq \ecqc{\lambda \rho_{AB} + (1-\lambda) \sigma_{AB}}$ which completes the proof.
\end{proof}
\begin{proposition} [Contraction under partial trace] \label{partTrace}
Consider the bipartite state $\rho_{AB}$ where $A=A_1A_2$ is a composite system. Then the entanglement of coherence is non-increasing under a partial trace:
$$
\ecqc{\rho_{A_1A_2B}} \geq \ecqc{\mathrm{Tr}_{A_1}(\rho_{A_1A_2B})}
$$
\end{proposition}
\begin{proof}
Let $\rho^*_{A_1A_2A'BB'}$ be the optimal unitarily symmetric extension of $\rho_{A_1A_2B}$ such that $\ecqc{\rho_{A_1A_2B}} = \cqc{\rho^*_{A_1A_2A'BB'}}$. It is clear that $\mathrm{Tr}_{A_1A'B'}(\rho^*_{A_1A_2A'BB'}) = \mathrm{Tr}_{A_1}(\rho_{A_1A_2B}) = \rho_{A_2B}$, so $\rho^*_{A_1A_2A'BB'}$ is a unitarily symmetric extension of $ \mathrm{Tr}_{A_1}(\rho_{A_1A_2B})$. Since $E_{cqc}$ is a minimization over all such extensions, $\ecqc{\rho_{A_1A_2B}} \geq \ecqc{\mathrm{Tr}_{A_1}(\rho_{A_1A_2B})}$.
\end{proof}
\begin{proposition} [Contraction under local projections]
Let $\pi^i_A$ be a complete set of rank 1 projectors on subsystem $A$ such that $\sum_i \pi^i_A = \openone_A$, and define the local projection $\Pi_A(\rho_A) \coloneqq \sum_i \pi^i_A \rho_A \pi^i_A$. The entanglement of coherence is contractive under local projections:
$$
\ecqc{\rho_{AB}} \geq \ecqc{\Pi_A(\rho_{AB})}
$$
or, if $A=A_1A_2$ is a composite system,
$$
\ecqc{\rho_{AB}} \geq \ecqc{\Pi_{A_1}(\rho_{A_1A_2B})}
$$
\end{proposition}
\begin{proof}
First, we observe that any projective measurement can be performed via a CNOT type operation with an ancilla, followed by tracing out the ancilla:
$$\mathrm{Tr}_X\left(U^{\mathrm{CNOT}}_{XY} \left( \ket{0}_X\bra{0} \otimes \sum_{i,j} \rho_{ij}\ket{i}_Y\bra{j} \right) (U^{\mathrm{CNOT}}_{XY})^\dag \right) = \sum_{i} \rho_{ii}\ket{i}_Y\bra{i}.$$
The unitary performs the operation $U^{\mathrm{CNOT}}_{XY} \ket{0,i}_{XY} = \ket{i,i}_{XY}$. Since adding an uncorrelated ancilla does not increase $E_{cqc}$ , we have $\ecqc{\ket{0}_{A_3}\bra{0} \otimes \rho_{A_1A_2B}} = \ecqc{\rho_{A_1A_2B}}$. As $E_{cqc}$ is invariant under local unitaries (Prop.~\ref{localU}) and contractive under partial trace (Prop.~\ref{partTrace}), this proves the proposition.
\end{proof}
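The CNOT-plus-partial-trace identity used in the proof can be verified directly (a NumPy sketch; here $X$ is the qubit ancilla, $Y$ is a qubit system, and the variable names are ours):

```python
import numpy as np

# CNOT-type copy with control Y and target X: U|x,y> = |x XOR y, y>,
# so in particular U|0,i> = |i,i>.
U = np.zeros((4, 4))
for x in range(2):
    for y in range(2):
        U[((x ^ y) << 1) | y, (x << 1) | y] = 1.0

rng = np.random.default_rng(4)
G = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = G @ G.conj().T
rho /= np.trace(rho).real

joint = U @ np.kron(np.diag([1.0, 0.0]), rho) @ U.T  # |0><0|_X (x) rho_Y
out = np.trace(joint.reshape(2, 2, 2, 2), axis1=0, axis2=2)  # Tr_X

# the surviving Y state is the dephased (diagonal) part of rho
assert np.allclose(out, np.diag(np.diag(rho)))
```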
\begin{proposition} [Invariance under classical communication] \label{classCom}
For a bipartite state $\rho_{AB}$, suppose that on Alice's side $A = A_1A_2$ is a composite system and $A_1$ is a classical register storing classical information. Then $E_{cqc}$ remains invariant if a copy of $A_1$ is created on Bob's side.
More formally, let $\rho_{A_1A_2B_2}= \sum_i p_i \ket{i}_{A_1} \bra{i} \otimes \ket{\psi_i}_{A_2B_2}\bra{\psi_i}$ be the initial state, and let $\sigma_{A_1A_2B_1B_2}= \sum_i p_i \ket{i}_{A_1} \bra{i} \otimes \ket{\psi_i}_{A_2B_2}\bra{\psi_i} \otimes \ket{i}_{B_1}\bra{i}$ be the state after Alice communicates a copy of $A_1$ to Bob, then
$$
\ecqc{\rho_{A_1A_2B_2}} = \ecqc{\sigma_{A_1A_2B_1B_2}}
$$
\end{proposition}
\begin{proof}
Let $\Xi^{\mathrm{swap}}_{ X \leftrightarrow Y}$ denote the swap operation between $X$ and $Y$. Let $\rho^*_{A_1A_2A'B_1B_2B'}$ be the optimal unitarily symmetric extension of $\rho_{A_1A_2B_2}$ such that $\ecqc{\rho_{A_1A_2B_2}} = \cqc{\rho^*_{A_1A_2A'B_1B_2B'}}$. Note that $\cqc{\rho^*_{A_1A_2A'B_2B_1B'}} = \cqc{ \ket{0}_{A''}\bra{0} \otimes \rho^*_{A_1A_2A'B_1B_2B'} \otimes \ket{0}_{B''}\bra{0}}$. Define a $\mathrm{CNOT}$-type operation between $A_1$ and $B''$ such that $U^{\mathrm{CNOT}}_{A_1B''} \ket{0,i}_{A_1B''} = \ket{i,i}_{A_1B''}$. Ordinarily, such an operation cannot be done by Bob locally unless he has access to subsystem $A_1$ on Alice's side. However, since $\rho^*_{A_1A_2A'BB'}$ is unitarily symmetric, there exist local unitaries $U_{A_1A_2A'}$ and $U_{BB'}$ such that $\Xi^{\mathrm{swap}}_{A_1A_2A' \leftrightarrow BB'} (\rho^*_{A_1A_2A'BB'}) = U_{A_1A_2A'} \otimes U_{BB'}\rho^*_{A_1A_2A'BB'} U^\dag_{A_1A_2A'} \otimes U^\dag_{BB'}$. This implies that Bob can perform $U^{\mathrm{CNOT}}_{A_1B''}$ locally by first performing the swap operation through local unitaries, gaining access to the information in $A_1$, copying it to $B''$ by performing $U^{\mathrm{CNOT}}_{B_1B''}$ locally, and then undoing the swap operation via another set of local unitary operations.
This means there must exist $V_{A_1A_2A'A''}$ and $V_{B_1B_2B'B''}$ such that:
$$V_{A_1A_2A'A''} \otimes V_{B_1B_2B'B''} \left( \ket{0}_{A''}\bra{0} \otimes \rho^*_{A_1A_2A'B_1B_2B'} \otimes \ket{0}_{B''}\bra{0} \right) V_{A_1A_2A'A''}^\dag \otimes V_{B_1B_2B'B''}^\dag$$
is a unitarily symmetric extension of $U^{\mathrm{CNOT}}_{A_1B''} \left( \rho_{A_1A_2B_2} \otimes \ket{0}_{B''}\bra{0} \right) (U^{\mathrm{CNOT}}_{A_1B''})^\dag$. However, because this state is equivalent to $\sigma_{A_1A_2B_1B_2}$ as defined previously, it is also a unitarily symmetric extension of $\sigma_{A_1A_2B_1B_2}$. Since Correlated Coherence is invariant under local unitary operations, we have
\begin{align*}
\ecqc{\rho_{A_1A_2B_2}} &= \cqc{\rho^*_{A_1A_2A'BB'}} \\
&= \cqc{\ket{0}_{A''}\bra{0} \otimes \rho^*_{A_1A_2A'BB'} \otimes \ket{0}_{B''}\bra{0}} \\
&= \cqc{V_{A_1A_2A'A''} \otimes V_{BB'B''} \left( \ket{0}_{A''}\bra{0} \otimes \rho^*_{A_1A_2A'BB'} \otimes \ket{0}_{B''}\bra{0} \right) V_{A_1A_2A'A''}^\dag \otimes V_{BB'B''}^\dag} \\
&\geq \ecqc{\sigma_{A_1A_2B_1B_2}},
\end{align*}
where the last inequality comes from the fact that the entanglement of coherence is a minimization over all unitarily symmetric extensions. On the other hand, a unitarily symmetric extension of $\sigma_{A_1A_2B_1B_2}$ is also a unitarily symmetric extension of $\rho_{A_1A_2B_2}$, so $\ecqc{\rho_{A_1A_2B_2}} \leq \ecqc{\sigma_{A_1A_2B_1B_2}}$. This implies $\ecqc{\rho_{A_1A_2B_2}} = \ecqc{\sigma_{A_1A_2B_1B_2}}$, which completes the proof.
\end{proof}
\begin{proposition} [Contraction under LOCC] \label{LOCC}
Let $\rho_{AB}$ be any bipartite state and let $\Lambda_{\mathrm{LOCC}}$ be any LOCC protocol performed between $A$ and $B$. Then $E_{cqc}$ is non-increasing under such operations:
$$\ecqc{\rho_{AB}} \geq \ecqc{\Lambda_{\mathrm{LOCC}}(\rho_{AB})}$$
\end{proposition}
\begin{proof}
We consider the scenario where Alice performs a POVM on her subsystems and communicates classical information about her measurement outcomes to Bob, who then performs a separate operation on his subsystem based on this measurement information.
Suppose Alice and Bob begin with the state $\rho_{A_1B_1}$. By Naimark's theorem, any POVM can be performed through a unitary interaction between the state of interest and an uncorrelated pure-state ancilla, followed by a projective measurement on the ancilla and, finally, tracing out the ancillary systems. To allow Alice and Bob to perform such quantum operations, we add uncorrelated ancillas to the state, which does not change the entanglement of coherence, so $\ecqc{\rho_{A_1B_1}}= \ecqc{\ket{0,0}_{M_AA_2}\bra{0,0} \otimes \rho_{A_1B_1}\otimes \ket{0,0}_{M_BB_2}\bra{0,0}}$. For Alice's procedure, we will assume the projection is performed on $M_A$, so $M_A$ is a classical register storing classical measurement outcomes.
In the beginning, Alice performs a unitary operation on subsystems $M_AA_1A_2$, followed by a projection on $M_A$ which makes it classical. We represent the composite of these two operations with $\Omega_A$, which represents Alice's local operation. Since $E_{cqc}$ is invariant under local unitaries (Prop.~\ref{localU}) but contractive under a projection (Prop.~\ref{partTrace}), $\Omega_A$ is a contractive operation.
The next part of the procedure is the communication of classical bits to Bob. This procedure is equivalent to copying the state of the classical register $M_A$ to the register $M_B$. $E_{cqc}$ is invariant under such communication (Prop.~\ref{classCom}). We represent this operation as $\Gamma_{A\rightarrow B}$.
The next step requires Bob to perform an operation on his quantum system based on the communicated bits. He can achieve this by performing a unitary operation on subsystems $M_BB_1B_2$. We represent this operation with $\Omega_B$, which does not change $E_{cqc}$. The final step of the procedure requires tracing out the ancillas, $\mathrm{Tr}_{M_AA_2M_BB_2}$, which is again contractive (Prop.~\ref{partTrace}).
Since every step is either contractive or invariant, we have the following inequality:
\begin{align*}
\ecqc{\rho_{A_1B_1}} &= \ecqc{\ket{0,0}_{M_AA_2}\bra{0,0} \otimes \rho_{A_1B_1}\otimes \ket{0,0}_{M_BB_2}\bra{0,0}} \\
&\geq \ecqc{\mathrm{Tr}_{M_AA_2M_BB_2} \circ \Omega_B \circ\Gamma_{A\rightarrow B} \circ \Omega_A \left[\ket{0,0}_{M_AA_2}\bra{0,0} \otimes \rho_{A_1B_1}\otimes \ket{0,0}_{M_BB_2}\bra{0,0}\right]}.
\end{align*}
Any LOCC protocol is a series of such procedures from Alice to Bob or from Bob to Alice, so we must have $\ecqc{\rho_{AB}} \geq \ecqc{\Lambda_{\mathrm{LOCC}}(\rho_{AB})}$, which completes the proof.
\end{proof}
The following theorem shows that the entanglement of coherence is a valid measure of the entanglement of the system.
\begin{theorem} [Entanglement monotone] The entanglement of coherence $E_{cqc}$ is an entanglement monotone in the sense that it satisfies:
\begin{enumerate}
\item $\ecqc{\rho_{AB}} = 0$ if and only if $\rho_{AB}$ is separable.
\item $\ecqc{\rho_{AB}}$ is invariant under local unitaries on $A$ and $B$.
\item $\ecqc{\rho_{AB}}\geq \ecqc{\Lambda_{\mathrm{LOCC}}(\rho_{AB})}$ for any LOCC procedure $\Lambda_{\mathrm{LOCC}}$.
\end{enumerate}
\end{theorem}
\begin{proof}
This follows directly from Props.~\ref{separableEcqc}, \ref{localU} and~\ref{LOCC}.
\end{proof}
\section{Conclusion}
To conclude, we defined the Correlated Coherence of quantum states as the total coherence without local coherences, which can be interpreted as the portion of the coherence that is shared between two quantum subsystems.
The framework of the Correlated Coherence allows us to identify the same concepts of non-classicality of correlations as those of (both symmetric and asymmetric) quantum discord and quantum entanglement.
Finally, we proved that the minimization of the Correlated Coherence over all symmetric extensions of a quantum state is an entanglement monotone, showing that quantum entanglement may be interpreted as a specialized form of coherence. Our results suggest that quantum correlations in general may be understood from the viewpoint of coherence, thus possibly opening new ways of understanding both.
\section*{Acknowledgment}
This work was supported by the National Research Foundation of Korea (NRF) through a grant funded by the Korea government (MSIP) (Grant No. 2010-0018295).
\end{document} |
\begin{document}
\title{Compact endomorphisms of $H^{\infty}$}
\maketitle
Let $D$ be the open unit disc and, as usual, let
$H^{\infty}(D)$ be the algebra of bounded analytic functions on $D$
with $\|f\|=\sup_{z \in D} |f(z)|.$
With pointwise addition and multiplication, $H^{\infty}(D)$ is a well
known uniform algebra. In this note we characterize the compact endomorphisms
of $H^{\infty}(D)$ and determine their spectra.
We show that although not every endomorphism $T$ of $H^{\infty}(D)$
has the form $T(f)(z)=f(\phi(z))$ for some analytic $\phi$ mapping
$D$ into itself, if $T$ is compact, there is an analytic function $\psi:D \rightarrow D$
associated with $T$. In the case where $T$ is compact, the derivative of $\psi$ at its fixed point determines the spectrum of $T.$
The structure of the maximal ideal space
$M_{H^{\infty}}$ is well known. Evaluation at a point $z \in D$ gives rise to an element in $M_{H^{\infty}}$ in the natural way. The remainder of $M_{H^{\infty}}$ consists of singleton Gleason parts and Gleason parts which are
analytic discs. An analytic disc, $P(m)$, containing a point $m\in M_{H^{\infty}}$,
is a subset of $M_{H^{\infty}}$ for which there exists a continuous bijection
$L_m:D \rightarrow P(m)$ such that $L_m(0)=m$ and $\hat {f}(L_m(z))$
is analytic on $D$ for each $f \in H^{\infty}(D)$.
Moreover, the map $L_m$ has the form $\displaystyle L_m(z)=w^*\lim \frac{z+z_{\alpha}}
{1 + \overline{z_{\alpha}}z}$ for some net ${z_{\alpha}} \rightarrow m$ in the
w* topology, whence
$\displaystyle \hat{f}(L_m(z))=\lim f(\frac{z+z_{\alpha}}{1+\overline{z_{\alpha}}z})$
for all $f \in H^{\infty}(D)$.
A fiber $M_{\lambda}$ over some $\lambda \in \overline{D}\setminus D$ is the zero
set in $M_{H^{\infty}}$ of the function $z-\lambda.$ Each part, distinct
from $D$, is contained in exactly one fiber $M_{\lambda}$. With no
loss of generality we let $\lambda = 1.$ We recall, too, that
two elements $n_1$ and $n_2$ are in the same part if, and only if,
$\|n_1 - n_2\| < 2$, where $\| . \|$ is the norm in the
dual space $H^{\infty}(D)^*.$
Now let $T$ be an endomorphism of $H^{\infty}(D)$, i.e. $T$ is
a (necessarily) bounded linear map of $H^{\infty}(D)$ to itself with
$T(fg)=T(f)T(g)$ for all $f, g \in H^{\infty}(D)$. For a given $T$,
either $T$ has the form $Tf(z)=f(\omega(z))$ for some analytic map
$\omega:D \rightarrow D$, or $Tf=\hat{f}(n)1$ for some
$n \in M_{H^{\infty}},$ or
there exists an $m \in M_{H^{\infty}}$, a net $z_{\alpha} \rightarrow m$
in the w* topology and an analytic function $\tau:D \rightarrow D$,
with $\tau(0)=0$ for which $Tf(z)=\hat{f}(L_m(\tau(z)))$ \cite{gar}.
Further, on general principles, if $T$ is an endomorphism of $H^{\infty}(D)$
there exists a w* continuous map $\phi:M_{H^{\infty}} \rightarrow
M_{H^{\infty}}$ with $\widehat{Tf}(n)=\hat{f}(\phi(n))$ for all $n \in M_{H^{\infty}}$. In the last case,
$\phi(z)=L_m(\tau(z))$ for $z \in D$.
For a given endomorphism $T$, if the induced map $\phi$ maps $D$ to
itself, then $T$ is commonly called a {\em composition operator}. Compact
composition operators on $H^{\infty}$ were completely characterized
in \cite{sw}. However, in general, $L_m(\tau(z))$ need not be in $D$,
and so not every endomorphism of $H^{\infty}(D)$ is a composition
operator. It is these endomorphisms that we discuss here.
Trivially, for any $n \in M_{H^{\infty}} \setminus D$, the map $T$ defined by $Tf(z)=\hat{f}(n)1$
is a compact endomorphism of $H^{\infty}(D)$ which is not a composition
operator.
Now let $P(m)$ be an analytic part and let
$T$ be an endomorphism defined by $Tf(z)=\hat{f}(L_m(\tau(z)))$
as discussed above. Also suppose that $\displaystyle \phi:M_{H^{\infty}} \rightarrow
M_{H^{\infty}}$ is such that $\widehat{Tf}=\hat{f} \circ \phi.$
Assume that $T$ is compact. We claim that
$\overline{\tau(D)}$ is a compact subset of $D$ in the Euclidean topology.
Indeed, if we regard the endomorphism $T$ as an operator
from $H^{\infty}(D)$ into $C(M_{H^{\infty}})$, then $T$ is compact
if, and only if, $\phi$ is w* to norm continuous on $M_{H^{\infty}}$
\cite{ds}.
Since $M_{H^{\infty}}$ is itself compact and connected (in the w* topology),
$\phi(M_{H^{\infty}})$ must be compact and connected in the norm
topology on $M_{H^{\infty}}$ and so $\phi$ maps $M_{H^{\infty}}$ into
a norm compact connected subset of $P(m).$
Therefore the range $\phi(D)=L_m(\tau(D))$
is contained in a norm compact subset of $P(m)$, and further since
$L_m^{-1}$ is an isometry in the Gleason norms on $P(m)$ and $D$ \cite{jff},
$\tau(D)=L_m^{-1}(\phi(D))$ is contained in a compact subset of $D$ in the norm topology on $D$. Since
the norm, Euclidean and w* topologies on $D$ coincide, $\overline{\tau(D)}$ is a compact
subset of $D$ in these three topologies. As a consequence, $\hat{\tau}(M_{H^{\infty}})\subset D.$
Next consider two maps of $H^{\infty}(D)$ into itself.
The first, $C_{L_m}$ is defined by $C_{L_m}(f)(z)=\hat{f}(L_m(z))$,
and the second $C_{\tau}$ by $C_{\tau}(f)(z)=f(\tau(z)).$
Then $(C_{L_m} \circ C_{\tau})(f)(z)=C_{L_m}(f \circ \tau)(z)=\widehat{f \circ \tau} (L_m(z))$ and $(C_{\tau} \circ C_{L_m})(f)(z)=\hat{f}(L_m(\tau(z)))=Tf(z).$ But if $B$ is a Banach space and $S_1$ and $S_2$ are any
two bounded linear maps from $B \rightarrow B$,
the spectrum $\sigma(S_1S_2)\setminus\{0\}=\sigma(S_2S_1)\setminus\{0\}$. Thus
we see that
$\sigma(T)\setminus\{0\}= \sigma(C_{L_m} \circ C_{\tau})\setminus\{0\}.$
Since $f$ is analytic on a neighborhood of range $\hat{\tau}$ which is a
subset of $D$, a standard functional calculus argument gives
$\widehat{f \circ \tau }(L_m(z))=f(\hat{\tau}(L_m(z)))$. If we let
$\psi(z)=\hat{\tau}(L_m(z))$ we see that $C_{L_m} \circ C_{\tau}$
is a compact composition operator in the usual sense, and so if $z_0 \in D$ is the
unique fixed point of $\psi$, and ${\bf N}$ is the set of positive integers,
then $\sigma(T)=\{(\psi'(z_0))^n:n \in {\bf N}\} \cup \{0,1\}.$
To summarize, we have shown the following.
Theorem:\ If $T$ is a compact endomorphism
of $H^{\infty}(D)$, then either $T$ has one dimensional range,
i.e. $Tf=\hat{f}(n)1$ for some $n \in M_{H^{\infty}},$
or $T$ is a composition operator in the usual sense, or $T$ has the
form $Tf(z)=\hat{f}(L_m(\tau(z)))$ where $\tau$ is described above.
In the last case, there is a compact composition operator $C_{\psi}$,
such that
$\sigma(T)=\sigma(C_{\psi})=\{(\psi'(z_0))^n:n \in {\bf N}\} \cup \{0,1\}$
where $z_0 \in D$ is the unique fixed point of
$\psi.$
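As a hypothetical illustration of the theorem (not an example from the note itself), suppose the associated map is $\psi(z) = z/2$. Then the unique fixed point is $z_0 = 0$ with $\psi'(0) = 1/2$, and the theorem gives

```latex
\[
\sigma(T) = \sigma(C_{\psi}) = \{2^{-n} : n \in {\bf N}\} \cup \{0,1\}.
\]
```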
We conclude with two examples showing differences between
composition operators and general endomorphisms.
(a) With the same terminology and symbols, suppose $\hat{\tau}$ is constant
on $P(m)$, i.e. $\hat{\tau}(P(m))=\{\hat{\tau}(m)\}.$ Since $T$
is compact, $\hat{\tau}(m)\in D.$
Then using $C_{\tau}$ and $C_{L_m}$ as before, we show
that $T^2f=\hat{f}(n)1$ for some $n \in P(m).$ Indeed,
$(C_{L_m} \circ C_{\tau})f=f(t_0)1$ where $t_0=\hat{\tau}(m) \in D$.
Then we see that \[T^2 f=[(C_{\tau} \circ C_{L_m}) \circ
(C_{\tau} \circ C_{L_m})]f=\]
\[[C_{\tau} \circ (C_{L_m} \circ C_{\tau}) \circ C_{L_m}]f=
[C_{\tau} \circ (C_{L_m} \circ C_{\tau})] (\hat{f} \circ L_m)=
C_{\tau}(\hat{f} (L_m(t_0)) 1)=\hat{f}(L_m(t_0)) 1.\] Letting
$n=L_m(t_0)$ gives the result.
One way to have $\hat{\tau}$ constant on $P(m)$ is for $\tau$ to
be continuous at $1$ in the usual sense.
A more interesting
example, perhaps, is to define $\tau$ by $\displaystyle \tau(z)=
\frac{1}{2} z e^{\frac{z+1}{z-1}}$, and $m \in M_{H^{\infty}}$
as a w* limit of a real net $x_{\alpha}$
approaching $1$. Then $\displaystyle \hat{\tau}(L_m(z))=\lim_{\alpha}
\tau(\frac{z+x_{\alpha}}{1+\overline{x_{\alpha}}z})=0$, and so
$T^2 f=\hat{f}(m)1$ for all $f\in H^{\infty}(D)$.
In both cases, $\sigma(T)=\{0,1\}$.
(b) Finally, let $\{z_n\}$ be an interpolating Blaschke sequence
approaching $1$, $z_1=0$, with $m$ in the w* closure of $\{z_n\}$
and $B$ the corresponding Blaschke
product. If $\displaystyle \tau(z)=\frac{1}{2}B(z)$, then it is
well known \cite{gar} that
$\displaystyle (\hat{\tau} \circ L_m)'(0)=\frac{1}{2}(\hat{B} \circ L_m)'(0) \neq 0.$
This, then, is an example of a compact endomorphism of $H^{\infty}(D)$
which is not a composition operator but whose spectrum properly contains $\{0,1\}.$
{\sf School of Mathematical Sciences
University of Nottingham
Nottingham NG7 2RD, England
email: [email protected]
and
Department of Mathematics
University of Massachusetts at Boston
100 Morrissey Boulevard
Boston, MA 02125-3393
email: [email protected]}
{\sf This research was supported by EPSRC grant GR/M31132}
\end{document} |
\begin{document}
\title{Multi-order asymptotic expansion of blow-up solutions for autonomous ODEs. II -
Dynamical Correspondence}
\author[1]{Hisatoshi Kodani}
\author[1,2]{Kaname Matsue\thanks{(Corresponding author, {\tt [email protected]})}}
\author[1]{\\ Hiroyuki Ochiai}
\author[3]{Akitoshi Takayasu}
\affil[1]{
\normalsize{
Institute of Mathematics for Industry, Kyushu University, Fukuoka 819-0395, Japan
}
}
\affil[2]{International Institute for Carbon-Neutral Energy Research (WPI-I$^2$CNER), Kyushu University, Fukuoka 819-0395, Japan}
\affil[3]{Faculty of Engineering, Information and Systems, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8573, Japan}
\maketitle
\begin{abstract}
In this paper, we provide a natural correspondence of eigenstructures of \KMb{Jacobian} matrices associated with equilibria for two appropriately transformed systems describing finite-time blow-ups for ODEs with quasi-homogeneity in an asymptotic sense.
As a corollary, we see that asymptotic expansions of blow-ups proposed in Part I \cite{asym1} themselves provide a criterion for the existence of blow-ups, with an intrinsic gap structure of stability information between the two systems.
Examples provided in Part I \cite{asym1} are revisited to show the above correspondence.
\end{abstract}
{\bf Keywords:} blow-up solutions, asymptotic expansion, dynamics at infinity
\par
{\bf AMS subject classifications : } 34A26, 34C08, 34D05, 34E10, 34C41, 37C25, 58K55
\tableofcontents
\section{Introduction}
\label{section-intro}
As the sequel to Part I \cite{asym1}, this paper aims at describing an intrinsic nature of blow-up solutions of the Cauchy problem of an autonomous system of ODEs
\begin{equation}
\label{ODE-original}
{\bf y}' = \frac{d{\bf y}(t)}{dt}=f ({\bf y}(t)),\quad {\bf y}(t_0) = {\bf y}_0,
\end{equation}
where $t\in[t_0,t_{\max})$ with $t_0<t_{\max} \leq \infty$, $f:\mathbb{R}^n\to\mathbb{R}^n$ is a $C^r$ function with $r\geq 2$ and ${\bf y}_0\in\mathbb{R}^n$.
A solution ${\bf y}(t)$ is said to \KMi{\em blow up} at $t = t_{\max} < \infty$ if \KMh{its modulus} diverges as $t\to t_{\max}-0$.
The value $t_{\max}$, the maximal existence time, is then referred to as the {\em blow-up time} of a blow-up solution.
Throughout the rest of this paper, let $\theta(t)$ be given by
\begin{equation*}
\theta(t) = t_{\max}-t,
\end{equation*}
where $t_{\max}$ is assumed to \KMg{be a finite value} and to be known a priori.
\par
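As a standard illustration of this terminology (not an example taken from Part I), consider the scalar ODE $y' = y^2$ with $y(0) = y_0 > 0$. Its explicit solution is

```latex
\begin{equation*}
y(t) = \frac{y_0}{1 - y_0 t} = \theta(t)^{-1},\qquad t_{\max} = \frac{1}{y_0},
\end{equation*}
```

so $y(t)$ blows up at the blow-up time $t = t_{\max}$ with rate $\theta(t)^{-1}$.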
In Part I \cite{asym1}, a systematic methodology for calculating multi-order asymptotic expansions of blow-up solutions is provided for (\ref{ODE-original}) with an asymptotic property of $f$.
We have observed there that, assuming the existence of blow-up solutions with uniquely determined leading asymptotic behavior (referred to as {\em type-I} blow-up in the present paper), roots of a nonlinear system of (algebraic) equations called {\em the balance law} and the associated eigenvalues of Jacobian matrices, called {\em the blow-up power eigenvalues} of {\em the blow-up power-determining matrices}, essentially determine all possible terms appearing in asymptotic expansions of blow-up solutions.
In particular, in the case of type-I blow-ups, these algebraic objects can determine all the essential aspects of type-I blow-ups.
A quick review of this methodology is given in Section \ref{section-review-asym}.
\par
On the other hand, the second and the fourth authors and \KMb{their} collaborators have recently developed a framework to characterize blow-up solutions from the viewpoint of dynamics at infinity (e.g. \cite{Mat2018, Mat2019}), and have derived machineries of computer-assisted proofs for the existence of blow-up solutions as well as their qualitative and quantitative features (e.g. \cite{LMT2021, MT2020_1, MT2020_2, TMSTMO2017}).
\KMh{As in} the present paper, \KMh{finite-dimensional} vector fields with scale invariance in an asymptotic sense, {\em asymptotic quasi-homogeneity} defined precisely in Definition \ref{dfn-AQH}, are mainly concerned.
The main idea is to apply {\em compactifications} of phase spaces associated with asymptotic quasi-homogeneity of vector fields and {\em time-scale desingularizations} at infinity to obtaining {\em desingularized vector fields} \KMa{so that {\em dynamics at infinity} makes sense.}
In this case, \KMa{divergent} solutions of the original ODE (\ref{ODE-original}) correspond to trajectories on (local) stable manifolds of invariant sets on the geometric object expressing the infinity in the compactified phase space, which is referred to as the {\em horizon}.
It is shown in preceding works that a generic dynamical property of invariant sets such as equilibria and periodic orbits on the horizon, {\em hyperbolicity}, yields blow-up solutions whose leading asymptotic behavior, {\em the blow-up rate}, is uniquely determined by the quasi-homogeneous component of $f$, namely type-I, and these invariant sets.
The precise statements are reviewed in Section \ref{section-preliminary} in the case of blow-ups characterized by equilibria on the horizon.
This approach reduces the problem involving divergent solutions, including blow-up solutions, to the standard theory of dynamical systems, and several successful examples are shown in preceding works \cite{LMT2021, Mat2018, MT2020_1, MT2020_2, TMSTMO2017}.
In particular, this approach provides \KMb{a criterion for} the (non-)existence of blow-up solutions of systems we are interested in and their characterizations by means of the standard theory of dynamical systems, without any a priori knowledge of the blow-up structure in the original system (\ref{ODE-original}).
In successive studies \cite{Mat2019}, it is demonstrated that blow-up behavior other than type-I can be characterized by {\em nonhyperbolic} invariant sets on the horizon for associated desingularized vector fields.
\par
Now we have two characterizations of blow-up solutions: (i) {\em multi-order asymptotic expansions} and (ii) {\em trajectories on local stable manifolds of invariant sets on the horizon for desingularized vector fields}, where the asymptotic behavior of leading terms is assumed to be identical\footnote{
Indeed, assumptions for characterizing multi-order asymptotic expansions discussed in Part I \cite{asym1} are based on characterizations of blow-ups by means of dynamics at infinity.
}.
It is then natural to ask {\em whether there is a correspondence between \KMb{these} two characterizations of blow-up solutions}.
The main aim of the present paper is to answer this question.
More precisely, we provide the following one-to-one correspondences.
\begin{itemize}
\item Roots of the balance law (in asymptotic expansions) and equilibria on the horizon (for desingularized vector fields) describing blow-ups in forward time direction \KMc{(Theorem \ref{thm-balance-1to1})}.
\item Eigenstructure between the associated blow-up power-determining matrices (in asymptotic expansions) and the Jacobian matrices at the above equilibria on the horizon (for desingularized vector fields) \KMc{(Theorem \ref{thm-blow-up-estr})}.
\end{itemize}
These correspondences provide us with significant benefits about blow-up characterizations.
First, {\em asymptotic expansions of blow-up solutions themselves provide a criterion of their existence} \KMc{(Theorem \ref{thm-existence-blow-up})}.
In general, asymptotic expansions are considered {\em assuming the existence of the corresponding blow-up solutions in other arguments}.
On the other hand, the above correspondences imply that roots of the balance law and blow-up power eigenvalues, which essentially characterize asymptotic expansions of blow-ups, can provide the existence of blow-ups.
More precisely, these algebraic objects provide linear information of dynamics around equilibria on the horizon for desingularized vector fields, which is sufficient to verify the existence of blow-ups, as provided in preceding works.
Second, the correspondence of eigenstructure provides the gap of stability information between the two systems of our interest \KMc{(Theorem \ref{thm-stability})}.
In particular, {\em stabilities of the corresponding equilibria for two systems describing an identical blow-up solution are always different}.
This gap warns us to take care of the dynamical interpretation of blow-up solutions and their perturbations, depending on the choice of systems we consider.
\par
The rest of this paper is organized as follows.
In Section \ref{section-preliminary}, a methodology for characterizing blow-up solutions from the viewpoint of dynamical systems based on preceding works (e.g. \cite{Mat2018, MT2020_1}) is quickly reviewed.
The precise definition of the class of vector fields we mainly treat is presented there.
The methodology successfully extracts blow-up solutions without such knowledge in advance, as already reported in preceding works (e.g. \cite{Mat2018, Mat2019, MT2020_1}).
\par
In Section \ref{section-correspondence}, the correspondence of structures characterizing blow-ups is discussed.
First, all notions necessary to characterize multi-order asymptotic expansions of type-I blow-up solutions proposed in Part I \cite{asym1} are reviewed.
Second, we extract one-to-one correspondence between roots of the balance law in asymptotic expansions and equilibria on the horizon for desingularized vector fields.
Note that equilibria on the horizon for desingularized vector fields can \KMb{also} characterize blow-ups in {\em backward} time direction, but the present correspondence excludes such equilibria due to the form of the system for deriving asymptotic expansions.
Third, we prove the existence of a common eigenstructure which {\em all} roots of the balance law, in particular all blow-up solutions of our interests, must possess.
As a consequence of the correspondence of eigenstructure, we also prove the existence of the corresponding eigenstructure in the desingularized vector fields such that all solutions asymptotic to equilibria on the horizon for desingularized vector fields in forward time direction must possess.
The common structures and their correspondence provide the gap of stability information for blow-up solutions between two systems we mentioned.
Finally, we provide the full correspondence of eigenstructures with possible multiplicity of eigenvalues.
As a corollary, we obtain a new criterion of the existence of blow-up solutions by means of roots of the balance law and associated blow-up power eigenvalues, in particular asymptotic expansions of blow-up solutions, and the stability gap depending on the choice of systems.
\par
In Section \ref{section-examples}, we revisit examples shown in Part I \cite{asym1} and confirm that our results indeed extract characteristic features of type-I blow-ups and the correspondence stated in the main results.
\begin{rem}[\KMe{A correspondence to Painlev\'{e}-type analysis}]
Several results shown in Section \ref{section-correspondence} are closely related to Painlev\'{e}-type analysis for complex ODEs.
We briefly refer to several preceding works for accessibility.
In \cite{AM1989}, quasi-homogeneous (in a similar sense to Definition \ref{dfn-AQH}, referred to as {\em weight-homogeneity} in \cite{AM1989}) complex ODEs are concerned\KMi{,} and the {\em algebraic complete integrability} for complex Hamiltonian systems is considered.
One of the main results \KMe{there} is the characterization of {\em formal} Laurent solutions of quasi-homogeneous complex ODEs to be {\em convergent} by means of the {\em indicial locus} \KMi{$\mathscr{C}$,} and algebraic properties of the {\em Kovalevskaya matrix (Kowalewski matrix in \cite{AM1989}) $\mathscr{L}$} evaluated at points on $\mathscr{C}$.
Eigenstructures of $\mathscr{L}$ are also related to invariants and geometric objects by means of invariant manifolds for the flow and divisors in \KMe{Abelian varieties}.
Furthermore, the family of convergent Laurent solutions leads to affine varieties of parameters called \KMe{{\em Painlev\'{e} varieties}.}
The similar characterization of integrability appears in \cite{C2015}, where asymptotically quasi-homogeneous polynomial vector fields are treated.
It is proved in \cite{C2015} that eigenvalues ${\rm Spec}(\mathscr{L})$ of the Kovalevskaya matrix $\mathscr{L}$, referred to as {\em Kovalevskaya exponents}, are invariants under locally analytic transformations around points on $\mathscr{C}$, and formal Laurent series as convergent solutions of polynomial vector fields on weighted projective spaces are characterized by the structure of ${\rm Spec}(\mathscr{L})$.
These results are applied to integrability of polynomial vector fields (the {\em extended Painlev\'{e} test} in \cite{C2015}) and the first Painlev\'{e} hierarchy, as well as classical Painlev\'{e} equations in the subsequent papers \cite{C2016_124, C2016_356}.
\par
We remark that
\KMi{the indicial locus $\mathscr{C}$ corresponds to a collection of roots of the balance law (Definition \ref{dfn-balance}), and that }
the matrix $\mathscr{L}$ is essentially the same as blow-up power-determining matrix, and ${\rm Spec}(\mathscr{L})$ corresponds to blow-up power eigenvalues.
It is therefore observed that there are several similarities of characterizations between Painlev\'{e}-type properties and blow-up behavior.
It should be noted here\KMe{, however, } that exponents being {\em integers with an identical sign} and {\em semi-simple} have played key roles in characterizing integrability in studies of the Painlev\'{e}-type properties, as stated in the above references.
\KMe{In contrast}, only the identical sign is essential to determine blow-up asymptotics\KMh{.}
We notice that essential ideas appearing in algebraic geometry and Painlev\'{e}-type analysis also contribute to extracting blow-up characteristics for ODEs in a general setting.
\end{rem}
\section{Preliminaries: blow-up description through dynamics at infinity}
\label{section-preliminary}
In this section, we briefly review a characterization of blow-up solutions for autonomous, finite-dimensional ODEs from the viewpoint of dynamical systems.
Details of the present methodology are already provided in \cite{Mat2018, MT2020_1}.
\subsection{Asymptotically quasi-homogeneous vector fields}
\label{sec:QH}
First of all, we review a class of vector fields in our present discussions.
\begin{dfn}[Asymptotically quasi-homogeneous vector fields, cf. \cite{D1993, Mat2018}]\rm
\label{dfn-AQH}
Let $f_0: \mathbb{R}^n \to \mathbb{R}$ be a function.
Let $\alpha_1,\ldots, \alpha_n$ \KMl{be nonnegative integers with $(\alpha_1,\ldots, \alpha_n) \not = (0,\ldots, 0)$ and $k > 0$.}
We say that $\KMf{f_0}$ is a {\em quasi-homogeneous function\footnote{
\KMl{In preceding studies, all $\alpha_i$'s and $k$ are typically assumed to be natural numbers.
In the present study, on the other hand, the above generalization is valid.
}
} of type $\KMh{\alpha = }(\alpha_1,\ldots, \alpha_n)$ and order $k$} if
\begin{equation*}
f_0( s^{\Lambda_\alpha}{\bf x} ) = s^k f_0( {\bf x} )\quad \text{ for all } {\bf x} = (x_1,\ldots, x_n)^T \in \mathbb{R}^n \text{ and } s>0,
\end{equation*}
where\KMb{\footnote{
Throughout the rest of this paper, the power of real positive numbers or functions to matrices is described in the similar manner.
}}
\begin{equation*}
\Lambda_\alpha = {\rm diag}\left(\alpha_1,\ldots, \alpha_n\right),\quad s^{\Lambda_\alpha}{\bf x} = (s^{\alpha_1}x_1,\ldots, s^{\alpha_n}x_n)^T.
\end{equation*}
Next, let $X = \sum_{i=1}^n f_i({\bf x})\frac{\partial }{\partial x_i}$ be a continuous vector field on $\mathbb{R}^n$.
We say that $X$, or simply $f = (f_1,\ldots, f_n)^T$ is a {\em quasi-homogeneous vector field of type $\alpha = (\alpha_1,\ldots, \alpha_n)$ and order $k+1$} if each component $f_i$ is a quasi-homogeneous function of type $\alpha$ and order $k + \alpha_i$.
\par
Finally, we say that $X = \sum_{i=1}^n f_i({\bf x})\frac{\partial }{\partial x_i}$, or simply $f$ is an {\em asymptotically quasi-homogeneous vector field of type $\alpha = (\alpha_1,\ldots, \alpha_n)$ and order $k+1$ at infinity} if there is a quasi-homogeneous vector field $f_{\alpha,k} = (f_{i; \alpha,k})_{i=1}^n$ of type $\alpha$ and order $k+1$ such that
\begin{equation*}
\KMk{f_i( s^{\Lambda_\alpha}{\bf x} ) - s^{k+\alpha_i} f_{i;\alpha,k}( {\bf x} ) = o(s^{k+\alpha_i})}
\end{equation*}
\KMk{as $s\to +\infty$} uniformly for \KMf{${\bf x} = (x_1,\ldots, x_n)\in S^{n-1} \equiv \{{\bf x}\in \mathbb{R}^n \mid \sum_{i=1}^n x_i^2 = 1\}$}.
\end{dfn}
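As a simple illustration of Definition \ref{dfn-AQH} (an example supplied here for concreteness, not taken from the text), the scalar field $f(y) = y^2 + 1$ is asymptotically quasi-homogeneous of type $\alpha = (1)$ and order $k+1 = 2$ at infinity: taking $f_{\alpha,k}(y) = y^2$,

```latex
\begin{equation*}
f(sy) - s^{2} f_{\alpha,k}(y) = (s^2 y^2 + 1) - s^2 y^2 = 1 = o(s^{2})
\quad \text{as } s \to +\infty,
\end{equation*}
```

uniformly for $y \in S^0 = \{\pm 1\}$.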
A fundamental property of quasi-homogeneous functions and vector fields is reviewed here.
\begin{lem}\label{temporary-label2}
A quasi-homogeneous function $f_0$ of type $(\alpha_1,\ldots,\alpha_n)$ and order $k$ satisfies the following differential equation:
\begin{equation}\label{temporary-label1}
\sum_{l=1}^n \alpha_l y_l \frac{\partial f_0}{\partial y_l}({\bf y}) = k f_0({\bf y}).
\end{equation}
This equation is rephrased as
\begin{equation*}
(\nabla_{\bf y} f_0({\bf y}))^T \Lambda_{\alpha} {\bf y} = k f_0({\bf y}).
\end{equation*}
\end{lem}
\begin{proof}
Differentiating the identity
\[
f_0(s^{\Lambda_\alpha}{\bf y}) = s^k f_0({\bf y})
\]
with respect to $s$ and setting $s=1$, we obtain
the desired equation \eqref{temporary-label1}.
\end{proof}
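As a concrete check of the lemma (an illustrative example added here, not from the original text), the function $f_0(y_1,y_2) = y_1 y_2^2$ is quasi-homogeneous of type $(2,1)$ and order $k = 4$, since $f_0(s^2 y_1, s y_2) = s^4 y_1 y_2^2$, and the Euler-type identity \eqref{temporary-label1} indeed holds:

```latex
\begin{equation*}
2 y_1 \frac{\partial f_0}{\partial y_1}({\bf y})
+ 1 \cdot y_2 \frac{\partial f_0}{\partial y_2}({\bf y})
= 2 y_1 y_2^2 + 2 y_1 y_2^2 = 4 f_0({\bf y}).
\end{equation*}
```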
The same argument yields that, for any quasi-homogeneous function $f_0$ of type $\alpha = (\alpha_1,\ldots, \alpha_n)$ and order $k$,
\begin{equation*}
\sum_{\KMb{l}=1}^n \alpha_\KMb{l} s^{\alpha_\KMb{l}}y_\KMb{l} \frac{\partial f_0}{\partial y_l}(s^{\Lambda_\alpha}{\bf y}) = ks^k f_0({\bf y})
\end{equation*}
and
\begin{equation*}
\sum_{\KMb{l}=1}^n \alpha_\KMb{l} (\alpha_\KMb{l} - 1) s^{\alpha_\KMb{l}} y_\KMb{l} \frac{\partial f_0}{\partial y_l}(s^{\Lambda_\alpha}{\bf y}) + \sum_{j,l = 1}^n \alpha_j \alpha_l (s^{\alpha_j}y_j) (s^{\alpha_l}y_l) \frac{\partial^2 f_0}{\partial y_j \partial y_l}(s^{\Lambda_\alpha}{\bf y}) = k(k-1)s^k f_0({\bf y})
\end{equation*}
for any ${\bf y}\in \mathbb{R}^n$.
In particular, each partial derivative satisfies
\begin{equation}
\label{order-QH-derivatives}
\frac{\partial f_0}{\partial y_l}(s^{\Lambda_\alpha}{\bf y}) = O\left(s^{k-\alpha_l} \right),\quad
\frac{\partial^2 f_0}{\partial y_j \partial y_l}(s^{\Lambda_\alpha}{\bf y}) = O\left(s^{k-\alpha_j - \alpha_l} \right)
\end{equation}
as $s\to 0, \infty$ for any ${\bf y}\in \mathbb{R}^n$, as long as $f_0$ is $C^2$ in the latter case.
In particular, \KMl{for any fixed ${\bf y}\in \mathbb{R}^n$,} both derivatives are $O(1)$ as $s\to 1$.
\KMk{In other words, we have quasi-homogeneous relations for partial derivatives in the above sense.}
\begin{lem}
\label{lem-identity-QHvf}
A quasi-homogeneous vector field $f=(f_1,\ldots,f_n)$ of
type $\alpha = (\alpha_1,\ldots,\alpha_n)$ and order $k+1$ satisfies the following differential equation:
\begin{equation}
\label{temporary-label3}
\sum_{l=1}^n \alpha_l y_l \frac{\partial f_i}{\partial y_l}({\bf y}) = (k+\alpha_i) f_i({\bf y}) \qquad (i=1,\ldots,n).
\end{equation}
This equation can be rephrased as
\begin{equation}\label{temporay-label4}
(D f)({\bf y}) \Lambda_\alpha \mathbf{y} = \left( k I+ \Lambda_\alpha \right) f({\bf y}).
\end{equation}
\end{lem}
\begin{proof}
By Lemma~\ref{temporary-label2} applied to each component $f_i$, which is a quasi-homogeneous function of type $\alpha$ and order $k+\alpha_i$,
we obtain \eqref{temporary-label3}.
For \eqref{temporay-label4},
we recall that $Df=(\frac{\partial f_i}{\partial y_l})$ is the Jacobian matrix,
while $kI + \Lambda_\alpha$ is the diagonal matrix with diagonal entries $k+\alpha_i$.
Finally $\mathbf{y} = (y_1,\ldots,y_n)^T$ is the column vector,
so that the left-hand side of \eqref{temporay-label4} is the product of two matrices and one column vector.
\end{proof}
Throughout successive sections, consider an (autonomous) $C^r$ vector field\footnote{
$C^1$-smoothness is sufficient to consider the correspondence discussed in Section \ref{section-correspondence} in the present paper.
$C^2$-smoothness is actually applied to justifying multi-order asymptotic expansions of blow-up solutions, which is discussed in Part I \cite{asym1}.
} (\ref{ODE-original}) with $r\geq 2$,
where $f: \mathbb{R}^n \to \mathbb{R}^n$ is asymptotically quasi-homogeneous of type $\alpha = (\alpha_1,\ldots, \alpha_n)$ and order $k+1$ at infinity.
\subsection{Quasi-parabolic compactifications}
\label{sec:global}
Here we review an example of compactifications which embed the original (locally compact) phase space into a compact manifold so that \lq\lq infinity" is characterized as a bounded object.
While there are several choices of compactifications, the following one is applied here to characterize {\em dynamics at infinity}.
\begin{dfn}[Quasi-parabolic compactification, \cite{MT2020_1}]\rm
\label{dfn-quasi-para}
Fix the type $\alpha = (\alpha_1,\ldots, \alpha_n)\in \mathbb{N}^n$.
Let $\{\beta_i\}_{i=1}^n$ be the collection of natural numbers such that
\begin{equation}
\label{LCM}
\alpha_1 \beta_1 = \alpha_2 \beta_2 = \cdots = \alpha_n \beta_n \equiv c \in \mathbb{N},
\end{equation}
where $c$ is the least common multiple of $\alpha_1, \ldots, \alpha_n$.
In particular, $\{\beta_i\}_{i=1}^n$ is chosen to be the smallest among possible collections.
Let $p({\bf y})$ be a functional given by
\begin{equation}
\label{func-p}
p({\bf y}) \equiv \left( y_1^{2\beta_1} + y_2^{2\beta_2} + \cdots + y_n^{2\beta_n} \right)^{1/2c}.
\end{equation}
Define the mapping $T: \mathbb{R}^n \to \mathbb{R}^n$ as the inverse of
\begin{equation*}
S({\bf x}) = {\bf y},\quad y_j = \kappa^{\alpha_j} x_j,\quad j=1,\ldots, n,
\end{equation*}
where
\begin{equation*}
\kappa = \kappa({\bf x}) = (1- p({\bf x})^{2c})^{-1} \equiv \left( 1 - \sum_{j=1}^n x_j^{2\beta_j}\right)^{-1}.
\end{equation*}
We call the mapping $T$ the {\em quasi-parabolic compactification (with type $\alpha$)}.
\end{dfn}
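For example, when $\alpha = (1,2)$, the smallest collection satisfying (\ref{LCM}) is $\beta_1 = 2$, $\beta_2 = 1$ with $c = 2$, so that
\begin{equation*}
p({\bf y}) = \left( y_1^4 + y_2^2 \right)^{1/4},\quad \kappa({\bf x}) = \left(1 - x_1^4 - x_2^2\right)^{-1},\quad y_1 = \kappa x_1,\quad y_2 = \kappa^2 x_2.
\end{equation*}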
\begin{rem}
\label{rem-kappa}
The functional $\kappa$, regarded as a functional $\tilde \kappa({\bf y})$ of ${\bf y}$, is implicitly determined through $p({\bf y})$.
Details of such a characterization of $\kappa$ in terms of ${\bf y}$, and the bijectivity and smoothness of $T$ are shown in \cite{MT2020_1} with a general class of compactifications including quasi-parabolic compactifications.
\end{rem}
\begin{rem}
\label{rem-zero-comp-compactification}
\KMb{When there is a component $i_0$ such that $\alpha_{i_0} = 0$, we apply the compactification {\em only for components with nonzero $\alpha_i$}.}
\end{rem}
As proved in \cite{MT2020_1}, $T$ maps $\mathbb{R}^n$ one-to-one onto
\begin{equation*}
\mathcal{D} \equiv \{{\bf x}\in \mathbb{R}^n \mid p({\bf x}) < 1\}.
\end{equation*}
Infinity in the original coordinates then corresponds to points on the boundary
\begin{equation*}
\mathcal{E} = \{{\bf x} \in \mathbb{R}^n \mid p({\bf x}) = 1\}.
\end{equation*}
\begin{dfn}\rm
We call the boundary $\mathcal{E}$ of $\mathcal{D}$ the {\em horizon}.
\end{dfn}
It is easy to understand the geometric nature of the present compactification when $\alpha = (1,\ldots, 1)$, in which case $T$ is defined as
\begin{equation*}
x_j = \frac{2y_j}{1+\sqrt{1+4\|{\bf y}\|^2}}\quad \Leftrightarrow \quad y_j = \frac{x_j}{1-\|{\bf x}\|^2},\quad j=1,\ldots, n.
\end{equation*}
See \cite{EG2006, G2004} for the homogeneous case, which is called the {\em parabolic compactification}.
In this case, the functional $\kappa = \tilde \kappa({\bf y})$ mentioned in Remark \ref{rem-kappa} is explicitly determined by
\begin{equation*}
\kappa = \tilde \kappa({\bf y}) = \frac{1+\sqrt{1+4\|{\bf y}\|^2}}{2} = \kappa({\bf x}) = \frac{1}{1-\|{\bf x}\|^2}.
\end{equation*}
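Indeed, since $\|{\bf y}\|^2 = \kappa^2 \|{\bf x}\|^2$ and $\|{\bf x}\|^2 = (\kappa - 1)/\kappa$, the functional $\kappa$ satisfies the quadratic equation
\begin{equation*}
\kappa^2 - \kappa - \|{\bf y}\|^2 = 0,
\end{equation*}
whose unique root with $\kappa \geq 1$ is the above expression.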
A homogeneous compactification of this kind is shown in Figure \ref{fig:compactification}-(a), while an example of a quasi-parabolic one is shown in Figure \ref{fig:compactification}-(b).
\begin{figure}
\caption{Parabolic-type compactifications of $\mathbb{R}^n$.}
\label{fig:compactification}
\end{figure}
\begin{rem}
\label{rem-compactification}
Global-type compactifications like parabolic ones are typically introduced as {\em homogeneous} ones, namely $\alpha_1 = \cdots = \alpha_n = 1$.
Simple examples of global compactifications are {\em Bendixson}, or {\em one-point} compactification (e.g. embedding of $\mathbb{R}^n$ into $S^n$) and {\em Poincar\'{e}} compactification (i.e., embedding of $\mathbb{R}^n$ into the hemisphere).
Among such compactifications, Poincar\'{e}-type ones are considered to be a prototype of {\em admissible} compactifications discussed in \cite{MT2020_1} (see also \cite{EG2006}), which distinguish directions of infinity and characterize dynamics at infinity appropriately, as mentioned in Section \ref{label-dynamics-infinity}.
Quasi-homogeneous-type, global compactifications are introduced in \cite{Mat2018, MT2020_1} as quasi-homogeneous counterparts of homogeneous compactifications.
However, Poincar\'{e}-type compactifications include radicals in their definitions (e.g. \cite{Mat2018}), which cause the loss of smoothness of vector fields on the horizon (mentioned below), and hence their applications to dynamics at infinity are restrictive in general.
On the other hand, {\em homogeneous}, parabolic-type compactifications were originally introduced in \cite{G2004} so that unbounded rational functions are transformed into rational functions.
In particular, the smoothness of rational functions is preserved through the transformation.
The present compactification is the quasi-homogeneous counterpart of the homogeneous parabolic compactification.
\par
There are alternative compactifications which are defined {\em locally}, known as e.g. {\em Poincar\'{e}-Lyapunov disks} (e.g. \cite{DH1999, DLA2006}), and referred to as {\em directional compactifications} in e.g. \cite{Mat2018, Mat2019, MT2020_2}.
These compactifications are quite simple and are widely used to study dynamics at infinity.
However, a single chart of these compactifications can lose symmetric features of dynamics at infinity, as well as the perspective of correspondence between dynamics at infinity and asymptotic behavior of blow-up solutions.
This is the reason why we have chosen parabolic-type compactifications as one of our central tools.
\end{rem}
\subsection{Dynamics at infinity and blow-up characterization}
\label{label-dynamics-infinity}
Once we fix a compactification associated with the type $\alpha = (\alpha_1, \ldots, \alpha_n)$ of the vector field $f$ with order $k+1$, we can derive a vector field which makes sense up to and including the horizon.
The {\em dynamics at infinity} then makes sense through the appropriately transformed vector field called the {\em desingularized vector field}, \KMb{denoted by $g$}.
The common approach is twofold.
Firstly, we rewrite the vector field (\ref{ODE-original}) with respect to the new variables used in the compactification.
Secondly, we introduce the time-scale transformation of the form $d\tau = q({\bf x})\kappa({\bf x}(t))^k dt$ for some function $q({\bf x})$ which is bounded including the horizon.
We then obtain the vector field with respect to the new time variable $\tau$, which is continuous up to and including the horizon.
\begin{rem}
\label{rem-choice-cpt}
Continuity of the desingularized vector field $g$ including the horizon is guaranteed by the smoothness of $f$ and asymptotic quasi-homogeneity (\cite{Mat2018}).
In the case of parabolic-type compactifications, $g$ inherits the smoothness of $f$ including the horizon, which is not always the case for other compactifications in general.
Details are discussed in \cite{Mat2018}.
\end{rem}
\begin{dfn}[Time-scale desingularization]\rm
Define the new time variable $\tau$ by
\begin{equation}
\label{time-desing-para}
d\tau = (1-p({\bf x})^{2c})^{-k}\left\{1-\frac{2c-1}{2c}(1-p({\bf x})^{2c}) \right\}^{-1}dt,
\end{equation}
equivalently,
\begin{equation*}
t - t_0 = \int_{\tau_0}^\tau \left\{1-\frac{2c-1}{2c}(1-p({\bf x}(\tau))^{2c}) \right\}(1-p({\bf x}(\tau))^{2c})^k d\tau,
\end{equation*}
where $\tau_0$ and $t_0$ denote the corresponding initial times, ${\bf x}(\tau) = T({\bf y}(\tau))$, and ${\bf y}(\tau)$ is the solution ${\bf y}(t)$ expressed in terms of the new time variable $\tau$.
We shall call (\ref{time-desing-para}) {\em the time-scale desingularization of order $k+1$}.
\end{dfn}
The change of coordinate and the above desingularization yield the following vector field $g = (g_1, \ldots, g_n)^T$, which is continuous on $\overline{\mathcal{D}} = \{p({\bf x}) \leq 1\}$:
\begin{align}
\label{desing-para}
\dot x_i \equiv \frac{dx_i}{d\tau} = g_i({\bf x}) = \left(1-\frac{2c-1}{2c}(1-p({\bf x})^{2c}) \right) \left\{ \tilde f_i({\bf x}) - \alpha_i x_i \sum_{j=1}^n (\nabla \kappa)_j \kappa^{\alpha_j - 1}\tilde f_j({\bf x})\right\},
\end{align}
where
\begin{equation}
\label{f-tilde}
\tilde f_j(x_1,\ldots, x_n) := \kappa^{-(k+\alpha_j)} f_j(\kappa^{\alpha_1}x_1, \ldots, \kappa^{\alpha_n}x_n),\quad j=1,\ldots, n,
\end{equation}
and
$\nabla \kappa = \nabla_{\bf x} \kappa = ((\nabla_{\bf x} \kappa)_1, \ldots, (\nabla_{\bf x} \kappa)_n)^T$ is
\begin{equation*}
(\nabla_{\bf x} \kappa)_j
= \frac{\kappa^{1-\alpha_j} x_j^{2\beta_j-1}}{\alpha_j \left(1- \frac{2c-1}{2c} (1 - p({\bf x})^{2c} )\right)},\quad j=1,\ldots, n,
\end{equation*}
as derived in \cite{MT2020_1}.
In particular, the vector field $g$ is written as follows:
\begin{align}
\label{desing-vector}
g({\bf x}) = \left(1-\frac{2c-1}{2c}(1-p({\bf x})^{2c}) \right)\tilde f({\bf x}) - G({\bf x})\Lambda_\alpha {\bf x},
\end{align}
where $\tilde f = (\tilde f_1,\ldots, \tilde f_n)^T$ and
\begin{align}
\label{Gx}
G({\bf x}) &\equiv \sum_{j=1}^n \frac{x_j^{2\beta_j-1}}{\alpha_j}\tilde f_j({\bf x}).
\end{align}
Smoothness of $f$ and the asymptotic quasi-homogeneity guarantee the smoothness of the right-hand side $g$ of (\ref{desing-para}) including the horizon $\mathcal{E}\equiv \{p({\bf x}) = 1\}$.
In particular, {\em dynamics at infinity}, such as divergence of solution trajectories to specific directions, is characterized through dynamics generated by (\ref{desing-para}) around the horizon.
See \cite{Mat2018, MT2020_1} for details.
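For illustration, consider the system $\dot y_1 = y_2$, $\dot y_2 = y_1^3$, which is quasi-homogeneous of type $\alpha = (1,2)$ and order $k+1 = 2$, with $\beta = (2,1)$ and $c = 2$. Then $\tilde f_1({\bf x}) = x_2$, $\tilde f_2({\bf x}) = x_1^3$ and $G({\bf x}) = \frac{3}{2}x_1^3 x_2$, so that (\ref{desing-para}) reads
\begin{equation*}
\dot x_1 = \left(1 - \tfrac{3}{4}(1-p({\bf x})^4)\right)x_2 - \tfrac{3}{2}x_1^4 x_2,\qquad
\dot x_2 = \left(1 - \tfrac{3}{4}(1-p({\bf x})^4)\right)x_1^3 - 3x_1^3 x_2^2.
\end{equation*}
On the horizon $\{x_1^4 + x_2^2 = 1\}$, equilibria are determined by $x_2(1-\tfrac{3}{2}x_1^4) = 0$ and $x_1^3(1-3x_2^2) = 0$, namely $x_{1}^4 = 2/3$ and $x_{2}^2 = 1/3$.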
\begin{rem}[Invariant structure]
\label{rem-invariance}
The horizon $\mathcal{E}$ is a codimension one invariant submanifold of $\overline{\mathcal{D}}$.
Indeed, direct calculations yield that
\begin{equation*}
\left. \frac{d}{d \tau}p({\bf x}(\tau))^{2c}\right|_{\tau=0} = 0\quad \text{ whenever }\quad {\bf x}(0)\in \mathcal{E}.
\end{equation*}
See e.g. \cite{Mat2018}, where detailed calculations are shown in a similar type of global compactifications.
We shall apply this invariant structure to extract the detailed blow-up structure later.
\end{rem}
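For instance, for the desingularized vector field associated with $\dot y_1 = y_2$, $\dot y_2 = y_1^3$ (type $\alpha = (1,2)$, $\beta = (2,1)$, $c = 2$), the restriction of the right-hand side to the horizon is $g_1 = x_2 - \frac{3}{2}x_1^4 x_2$, $g_2 = x_1^3 - 3x_1^3 x_2^2$, and hence
\begin{equation*}
\frac{d}{d\tau}\left(x_1^4 + x_2^2\right) = 4x_1^3 g_1 + 2x_2 g_2 = 6x_1^3 x_2 \left(1 - x_1^4 - x_2^2\right) = 0\quad \text{ on }\quad \{x_1^4 + x_2^2 = 1\}.
\end{equation*}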
\subsection{Type-I stationary blow-up}
Through the compactification we have introduced,
dynamics around the horizon characterize dynamics at infinity, including blow-up behavior.
\par
For an equilibrium $\bar {\bf x}$ for a desingularized vector field $g$, let
\begin{equation*}
W_{\rm loc}^s(\bar {\bf x}) = W_{\rm loc}^s(\bar {\bf x};g) := \{{\bf x}\in U \mid |\varphi_g(t,{\bf x}) - \bar {\bf x}| \to 0\, \text{ as }\,t\to +\infty\}
\end{equation*}
be the {\em (local) stable set} of $\bar {\bf x}$ for the dynamical system generated by $g$, where $U$ is a neighborhood of $\bar {\bf x}$ in $\mathbb{R}^n$ or an appropriate phase space, and $\varphi_g$ is the flow generated by $g$.
In a special case where $\bar {\bf x}$ is a {\em hyperbolic equilibrium} for $g$, that is, an equilibrium satisfying ${\rm Spec}(Dg(\bar {\bf x})) \cap i\mathbb{R} = \emptyset$, the stable set $W_{\rm loc}^s(\bar {\bf x})$ admits a smooth manifold structure in a small neighborhood of $\bar {\bf x}$ (see {\em the Stable Manifold Theorem} in e.g. \cite{Rob}).
In such a case, the set is referred to as the {\em (local) stable manifold} of $\bar {\bf x}$.
Here we review a result for characterizing blow-up solutions by means of stable manifolds of equilibria, which is shown in \cite{MT2020_1} (cf. \cite{Mat2018}).
\begin{thm}[Stationary blow-up, \cite{Mat2018, MT2020_1}]
\label{thm:blowup}
Assume that the desingularized vector field $g$ given by (\ref{desing-para}) associated with (\ref{ODE-original}) admits an equilibrium ${\bf x}_\ast \in \mathcal{E}$ on the horizon.
Suppose that ${\bf x}_\ast$ is hyperbolic; in particular, the Jacobian matrix $Dg({\bf x}_\ast)$ of $g$ at ${\bf x}_\ast$ possesses $n_s > 0$ (resp. $n_u = n-n_s$) eigenvalues with negative (resp. positive) real part.
If there is a solution ${\bf y}(t)$ of (\ref{ODE-original}) with a bounded initial point ${\bf y}(0)$ whose image ${\bf x} = T({\bf y})$ is on the stable manifold\footnote{
In the present case, a neighborhood $U$ of ${\bf x}_\ast$ determining $W^s_{\rm loc}$ is chosen as a subset of $\overline{\mathcal{D}}$.
} $W_{\rm loc}^s({\bf x}_\ast; g)$, then $t_{\max} < \infty$ holds; namely, ${\bf y}(t)$ is a blow-up solution.
Moreover,
\begin{equation*}
\kappa\equiv \kappa({\bf x}(t)) \sim \tilde c\theta(t)^{-1/k}\quad \text{ as }\quad t\to t_{\max}\KMc{-0},
\end{equation*}
where $\tilde c > 0$ is a constant.
Finally, if the $j$-th component $x_{\ast, j}$ of ${\bf x}_\ast$ is not zero, then we also have
\begin{equation*}
y_j(t) \sim \KMb{\tilde c_j}\theta(t)^{-\alpha_j /k}\quad \text{ as }\quad t\to t_{\max},
\end{equation*}
where $\KMb{\tilde c_j}$ is a constant with the same sign as $y_j(t)$ as $t\to t_{\max}$.
\end{thm}
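As a simple illustration, consider the scalar equation $\dot y = y^2$, for which $\alpha = (1)$, $k = 1$, $\beta_1 = 1$ and $c = 1$, so that $\tilde f(x) = x^2$, $G(x) = x^3$ and (\ref{desing-para}) reduces to
\begin{equation*}
\frac{dx}{d\tau} = g(x) = \frac{1+x^2}{2}\,x^2 - x^4 = \frac{1}{2}x^2(1-x^2).
\end{equation*}
The point $x_\ast = 1$ is an equilibrium on the horizon with $g'(x_\ast) = -1 < 0$; it is hyperbolic with $n_s = 1$, and every solution with $x(0)\in (0,1)$ lies on its stable manifold. Theorem \ref{thm:blowup} then yields $t_{\max} < \infty$ with $y(t) \sim \tilde c_1 \theta(t)^{-1}$, consistent with the explicit solution $y(t) = (t_{\max}-t)^{-1}$ for $y(0) > 0$.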
The key point of the theorem is that {\em blow-up solutions for (\ref{ODE-original}) are characterized as trajectories on \KMb{local} stable manifolds of equilibria or general invariant sets\footnote{
In \cite{Mat2018}, a characterization of blow-up solutions with infinite-time oscillations in $t < t_{\max}$ and unbounded amplitude is also provided by means of time-periodic orbits on the horizon for desingularized vector fields, which is referred to as {\em periodic blow-up}.
Hyperbolicity ensures not only blow-up behavior of solutions but also their asymptotic behavior with the specific form.
Several case studies of blow-up solutions beyond hyperbolicity are shown in \cite{Mat2019}.
} on the horizon $\mathcal{E}$ for the desingularized vector field}.
Investigations of blow-up structure are therefore reduced to those of stable manifolds of invariant sets, such as (hyperbolic) equilibria, for the associated vector field.
Moreover, the theorem also claims that hyperbolic equilibria on the horizon induce {\em type-I blow-up.
That is, the leading term of the blow-up behavior is determined by the type $\alpha$ and the order $k+1$ of $f$.}
\KMj{In particular, Theorem \ref{thm:blowup} can be used to verify the existence of blow-up solutions assumed in our construction of their asymptotic expansions.}
\par
The blow-up time $t_{\max}$ is explicitly given by (\ref{time-desing-para}):
\begin{equation}
\label{blow-up-time}
t_{\max} = t_0 + \frac{1}{2c}\int_{\tau_0}^\infty \left\{1+ (2c-1)p({\bf x}(\tau))^{2c} \right\}(1-p({\bf x}(\tau))^{2c})^k d\tau.
\end{equation}
The above formula is consistent with the well-known fact that $t_{\max}$ depends on initial points ${\bf x}_0 = {\bf x}(\tau_0)$.
\begin{rem}
Theorem \ref{thm:blowup} itself does not tell us the asymptotic behavior of the component $y_j(t)$ as $t\to t_{\max}-0$ when $x_{\ast, j} = 0$.
Nevertheless, asymptotic expansion of the solution can reveal the detailed behavior near $t=t_{\max}$, as demonstrated in Part I \cite{asym1}.
\end{rem}
We end this section by providing a lemma for a function appearing in (\ref{desing-para}), which is essential to characterize equilibria on the horizon from the viewpoint of asymptotic expansions.
The gradient of the horizon $\mathcal{E} = \{p({\bf x}) = 1\}$ at ${\bf x}\in \mathcal{E}$ is given by
\begin{equation*}
\nabla p({\bf x}) = \frac{p({\bf x})^{\KMc{1-2c}}}{c} \left( \beta_1x_1^{2\beta_1-1}, \ldots, \beta_nx_n^{2\beta_n-1} \right)^T = \frac{1}{c} \left( \beta_1x_1^{2\beta_1-1}, \ldots, \beta_nx_n^{2\beta_n-1} \right)^T.
\end{equation*}
In particular,
\begin{equation}
\label{grad-xast}
\nabla p({\bf x}_\ast) = \frac{1}{c}\left( \beta_1x_{\ast,1}^{2\beta_1-1}, \ldots, \beta_n x_{\ast,n}^{2\beta_n-1} \right)^T
\end{equation}
holds at an equilibrium ${\bf x}_\ast \in \mathcal{E}$.
Similarly, we observe that
\begin{equation*}
\nabla (p({\bf x})^{2c}) = 2\left( \beta_1x_1^{2\beta_1-1}, \ldots, \beta_nx_n^{2\beta_n-1} \right)^T
\end{equation*}
for any ${\bf x}\in \overline{\mathcal{D}}.$
Using the gradient, the function $G({\bf x})$ in (\ref{Gx}) can also be written as
\begin{equation*}
G({\bf x}) = \sum_{j=1}^n \frac{\beta_j}{c}x_j^{2\beta_j-1}\tilde f_j({\bf x}) = \frac{1}{2c}\nabla (p({\bf x})^{2c})^T \tilde f({\bf x}).
\end{equation*}
\begin{lem}
\label{lem-G}
Along solutions of (\ref{desing-para}), we have
\begin{equation*}
\kappa^{-1} \frac{d\kappa}{d\tau} = G({\bf x}),
\end{equation*}
where $G({\bf x})$ is given in (\ref{Gx}).
\end{lem}
\begin{proof}
Direct calculations with (\ref{LCM}) yield that
\begin{align*}
\kappa^{-2} \frac{d\kappa}{d\tau} &\equiv -\frac{d(\kappa)^{-1}}{d\tau} \\
&= \nabla (p({\bf x})^{2c})^T \frac{d{\bf x}}{d\tau}\\
&= \nabla (p({\bf x})^{2c})^T\left( \frac{1}{2c}\left(1 + (2c-1)p({\bf x})^{2c} \right)\tilde f({\bf x}) - G({\bf x})\Lambda_\alpha {\bf x}\right) \\
&= \left(1 + (2c-1)p({\bf x})^{2c} \right)G({\bf x}) - 2cp({\bf x})^{2c}G({\bf x}) \\
&= (1-p({\bf x})^{2c})G({\bf x})\\
&= \kappa^{-1}G({\bf x}).
\end{align*}
\end{proof}
\section{Correspondence between asymptotic expansions of blow-ups and dynamics at infinity}
\label{section-correspondence}
\KMa{In Part I \cite{asym1}, a systematic methodology of asymptotic expansions of blow-up solutions has been proposed}.
On the other hand, blow-up solutions can also be characterized by trajectories on the \KMi{local} stable manifold $\KMi{W^s_{\rm loc}}({\bf x}_\ast; g)$ of an equilibrium ${\bf x}_\ast$ on the horizon for the desingularized vector field $g$\KMi{, as reviewed in Section \ref{section-preliminary}}.
By definition, the manifold $\KMi{W^s_{\rm loc}}({\bf x}_\ast; g)$ consists of initial points converging to ${\bf x}_\ast$ as $\tau \to \infty$, and hence $\KMi{W^s_{\rm loc}}({\bf x}_\ast; g)$ characterizes the dependence of blow-up solutions on initial points, including the variation of $t_{\max}$.
One then expects that there is a common feature between the algebraic information (\KMb{asymptotic expansions}) and the geometric one (the stable manifold $\KMi{W^s_{\rm loc}}({\bf x}_\ast; g)$) characterizing identical blow-up solutions.
\par
This section addresses several structural correspondences between asymptotic expansions of blow-up solutions and dynamics of equilibria on the horizon for desingularized vector fields.
As a corollary, we see that asymptotic expansions of blow-up solutions in the above methodology themselves provide a criterion for their existence\KMa{, as well as for the existence of an intrinsic gap of stability information between the two systems}.
Unless otherwise mentioned, let $g$ be the desingularized vector field \KMg{(\ref{desing-vector})} associated with $f$.
\subsection{Quick review of tools for multi-order asymptotic expansion of type-I blow-up solutions}
\label{section-review-asym}
Before \KMc{discussing the correspondence of dynamical structures} between the two systems, we quickly review the methodology of multi-order asymptotic expansions of type-I blow-up solutions proposed in \cite{asym1}.
The method begins with the following ansatz, which can be easily verified through the desingularized vector field and our blow-up characterization, Theorem \ref{thm:blowup}.
\begin{ass}
\label{ass-fundamental}
The asymptotically quasi-homogeneous system (\ref{ODE-original}) of type $\alpha$ and order $k+1$ admits a solution
\begin{equation*}
{\bf y}(t) = (y_1(t), \ldots, y_n(t))^T
\end{equation*}
which blows up at $t = t_{\max} < \infty$ with the \KMb{type-I blow-up} behavior\KMb{, namely}\footnote{
\KMb{For two scalar functions $h_1$ and $h_2$, $h_1 \sim h_2$ as $t\to t_{\max}-0$ iff $(h_1(t)/h_2(t))\to 1$ as $t\to t_{\max}-0$. }
}
\begin{equation}
\label{blow-up-behavior}
y_i(t) \KMb{ \sim c_i\theta(t)^{-\alpha_i / k}},
\quad t\to t_{\max}-0,\quad i=1,\ldots, n
\end{equation}
\KMb{for some constants $c_i\in \mathbb{R}$.}
\end{ass}
Our aim here is, under the above assumption, to write ${\bf y}(t)$ as
\begin{equation}
\label{blow-up-sol}
y_i(t) = \theta(t)^{-\alpha_i / k} Y_i(t),\quad {\bf Y}(t) = (Y_1(t), \ldots, Y_n(t))^T
\end{equation}
with the asymptotic expansion by means of {\em general asymptotic series}\footnote{
\KMb{For two scalar functions $h_1$ and $h_2$, $h_1 \ll h_2$ as $t\to t_{\max}-0$ iff $(h_1(t)/h_2(t))\to 0$ as $t\to t_{\max}-0$.
For two vector-valued functions ${\bf h}_1$ and ${\bf h}_2$ with \KMc{${\bf h}_i = (h_{1;i}, \ldots, h_{n;i})$}, ${\bf h}_1 \ll {\bf h}_2$ iff $h_{l;1} \ll h_{l;2}$ for each $l = 1,\ldots, n$.}
}
\begin{align}
\notag
{\bf Y}(t) &= {\bf Y}_0 + \tilde {\bf Y}(t),\\
\label{Y-asym}
\tilde {\bf Y}(t) &= \sum_{j=1}^\infty {\bf Y}_j(t),\quad {\bf Y}_{j}(t) \ll {\bf Y}_{j-1}(t)\quad (t\to t_{\max}-0),\quad j=1,2,\ldots,\\
\notag
{\bf Y}_j(t) &= (Y_{j,1}(t), \ldots, Y_{j,n}(t))^T,\quad j=1,2,\ldots
\end{align}
and determine the concrete form of the factor ${\bf Y}(t)$.
As the first step, decompose the vector field $f$ into two terms as follows:
\begin{equation*}
f({\bf y}) = f_{\alpha, k}({\bf y}) + f_{\rm res}({\bf y}),
\end{equation*}
where $f_{\alpha, k}$ is the quasi-homogeneous component of $f$ and $f_{\rm res}$ is the residual (\KMe{i.e.,} lower-order) terms.
The componentwise expressions are
\begin{equation*}
f_{\alpha, k}({\bf y}) = (f_{1;\alpha, k}({\bf y}), \ldots, f_{n;\alpha, k}({\bf y}))^T,\quad f_{\rm res}({\bf y}) = (f_{1;{\rm res}}({\bf y}), \ldots, f_{n;{\rm res}}({\bf y}))^T\KMe{,}
\end{equation*}
\KMe{respectively.
}
Substituting (\ref{blow-up-sol}) into (\ref{ODE-original}), we derive the system for ${\bf Y}(t)$, which is the following nonautonomous system:
\begin{equation}
\label{blow-up-basic}
\frac{d}{dt}{\bf Y} = \theta(t)^{-1}\left\{ -\KMf{ \frac{1}{k}\Lambda_\alpha }{\bf Y} + f_{\alpha, k}({\bf Y}) \right\} + \theta(t)^{\KMf{ \frac{1}{k}\Lambda_\alpha }} f_{{\rm res}}( \theta(t)^{-\KMf{ \frac{1}{k}\Lambda_\alpha }} {\bf Y}).
\end{equation}
From the asymptotic quasi-homogeneity of $f$, the most singular part of the above system yields the following identity which the leading term ${\bf Y}_0$ of ${\bf Y}(t)$ must satisfy.
\begin{dfn}\rm
\label{dfn-balance}
We call the identity
\begin{equation}
\label{0-balance}
-\frac{1}{k}\Lambda_\alpha {\bf Y}_0 + f_{\alpha, k}({\bf Y}_0) = 0
\end{equation}
{\em a balance law} for the blow-up solution ${\bf y}(t)$.
\end{dfn}
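For example, for $\dot y_1 = y_2$, $\dot y_2 = y_1^3$ (type $\alpha = (1,2)$, order $k+1 = 2$), the balance law (\ref{0-balance}) reads
\begin{equation*}
-Y_{0,1} + Y_{0,2} = 0,\qquad -2Y_{0,2} + Y_{0,1}^3 = 0,
\end{equation*}
whose nontrivial roots are ${\bf Y}_0 = \pm(\sqrt{2}, \sqrt{2})^T$. These are exactly the coefficients of the type-I blow-up $y_1(t)\sim \pm\sqrt{2}\,\theta(t)^{-1}$, $y_2(t)\sim \pm\sqrt{2}\,\theta(t)^{-2}$.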
The next step is to derive the collection of systems \KMc{for $\{{\bf Y}_j(t)\}_{j\geq 1}$} by means of {\em inhomogeneous linear systems}.
The key concepts towards our aim are the following algebraic objects.
\begin{dfn}[Blow-up power eigenvalues]\rm
\label{dfn-blow-up-power-ev}
Suppose that a nonzero root ${\bf Y}_0$ of the balance law (\ref{0-balance}) is given.
We call the constant matrix
\begin{equation}
\label{blow-up-power-determining-matrix}
A = -\KMf{ \frac{1}{k}\Lambda_\alpha } + D f_{\alpha, k}({\bf Y}_0)
\end{equation}
the {\em \KMf{blow-up} power-determining matrix} for the blow-up solution ${\bf y}(t)$, and call the eigenvalues $\{\lambda_i\}_{i=1}^n \equiv {\rm Spec}(A)$ the {\em blow-up power eigenvalues}, where eigenvalues with nontrivial multiplicity are distinguished in this expression unless otherwise noted.
\end{dfn}
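For instance, for $\dot y_1 = y_2$, $\dot y_2 = y_1^3$ (type $\alpha = (1,2)$, $k = 1$) with the root ${\bf Y}_0 = (\sqrt{2},\sqrt{2})^T$ of the balance law, we have
\begin{equation*}
A = -\Lambda_\alpha + Df_{\alpha,k}({\bf Y}_0) = \begin{pmatrix} -1 & 1 \\ 6 & -2 \end{pmatrix},\qquad {\rm Spec}(A) = \{1, -4\}.
\end{equation*}
Note that $1 \in {\rm Spec}(A)$, with eigenvector $(1,2)^T$ parallel to $\Lambda_\alpha {\bf Y}_0$.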
Using the matrix $A$ and the Taylor expansion of the nonlinearity at ${\bf Y}_0$, we obtain the following system:
\begin{align}
\label{asym-eq}
&\frac{d}{dt} \tilde {\bf Y} = \theta(t)^{-1} \left[ A\tilde {\bf Y} + R_{\alpha, k}({\bf Y}) \right]
+ \theta(t)^{\KMf{ \frac{1}{k}\Lambda_\alpha }} f_{{\rm res}}\left(\theta(t)^{-\KMf{ \frac{1}{k}\Lambda_\alpha }} {\bf Y} \right),\\
\notag
&R_{\alpha, k}({\bf Y}) = f_{\alpha,k}({\bf Y}) - \left\{f_{\alpha,k}({\bf Y}_0) + Df_{\alpha,k}({\bf Y}_0)\tilde {\bf Y}\right\}.
\end{align}
The linear systems solving ${\bf Y}_j(t)$ for $j\geq 1$ are derived inductively from (\ref{asym-eq}), assuming the asymptotic relation (\ref{Y-asym}).
In particular, the algebraic eigenstructure of $A$ essentially determines concrete forms of ${\bf Y}_j(t)$.
\par
As a summary, the following objects play essential \KMb{roles} in determining multi-order asymptotic \KMb{expansions} of \KMb{blow-up solutions} ${\bf y}(t)$ with Assumption \ref{ass-fundamental}:
\begin{itemize}
\item Roots of the balance law (\ref{0-balance}): ${\bf Y}_0$ (not identically zero).
\item The blow-up power-determining matrix $A$ in (\ref{blow-up-power-determining-matrix}) and blow-up power eigenvalues.
\end{itemize}
\KMa{The precise expression of our asymptotic expansions of blow-up solutions is summarized in \cite{asym1}, and we omit the details because we do not need the concrete expression of these expansions here.}
Throughout the rest of this section, we derive the relationship between the above objects and the corresponding ones in desingularized vector fields.
\subsection{Balance law and equilibria on the horizon}
\KMc{The first issue for the correspondence of dynamical structures is \lq\lq equilibria" in the two systems.}
Recall that equilibria for the desingularized vector field (\ref{desing-para}) associated with quasi-parabolic compactifications satisfy
\begin{equation}
\label{balance-para}
\KMg{ \left(1-\frac{2c-1}{2c}(1-p({\bf x})^{2c}) \right)\tilde f({\bf x}) = G({\bf x}) \Lambda_\alpha {\bf x}},
\end{equation}
where $G({\bf x})$ is given in (\ref{Gx}).
Equilibria ${\bf x}_\ast = (x_{\ast, 1}, \ldots, x_{\ast, n})^T$ on the horizon $\mathcal{E}$ satisfy $p({\bf x}_\ast) \equiv 1$ and hence the following identity holds:
\begin{equation}
\label{const-horizon-0}
\tilde f_i({\bf x}_\ast) = \alpha_i x_{\ast, i} G({\bf x}_\ast),
\end{equation}
equivalently
\begin{equation}
\label{const-horizon}
\frac{\tilde f_i({\bf x}_\ast)}{\alpha_i x_{\ast, i} } = G({\bf x}_\ast) \equiv C_\ast = C_\ast({\bf x}_\ast)
\end{equation}
provided $x_{\ast, i} \not = 0$.
Because at least one component $x_{\ast, i}$ is nonzero on the horizon, the constant $C_\ast$ is determined independently of $i$.
On the other hand, only the quasi-homogeneous part $f_{\alpha, k}$ of $f$ \KMi{is involved in equilibria on the horizon}.
In general, we have
\begin{align}
\notag
\tilde f_{i;\alpha, k}(\KMh{{\bf x}}) &= \kappa^{-(k+\alpha_i)} f_{i;\alpha, k}(\KMh{\kappa^{\Lambda_\alpha}{\bf x}} )\\
\notag
&= \kappa^{-(k+\alpha_i)}\kappa^{k+\alpha_i} f_{i;\alpha, k}(\KMh{{\bf x}})\\
\label{identity-f-horizon}
&= f_{i;\alpha, k}(\KMh{{\bf x}})
\end{align}
for ${\bf x}=(x_1,\ldots, x_n)\in \mathcal{E}$.
The identity (\ref{balance-para}) is then rewritten as follows:
\begin{equation}
\label{const-horizon-0-Cast}
\KMh{ f_{\alpha,k}({\bf x}_\ast) = G({\bf x}_\ast) \Lambda_\alpha {\bf x}_\ast = C_\ast\Lambda_\alpha {\bf x}_\ast }.
\end{equation}
Introducing a scaling parameter $r_{{\bf x}_\ast} (> 0)$, we have
\begin{equation*}
\KMh{ f_{\alpha,k}({\bf x}_\ast) = r_{{\bf x}_\ast}^{-(kI + \Lambda_\alpha)} f_{\alpha,k}(\KMh{r_{{\bf x}_\ast}^{\Lambda_\alpha} {\bf x}_\ast} ).}
\end{equation*}
Substituting this identity into (\ref{const-horizon-0-Cast}), we have
\begin{equation*}
r_{{\bf x}_\ast}^{-(k+\alpha_i)} f_{i;\alpha,k}(r_{{\bf x}_\ast}^{\alpha_1}x_{\ast,1}, \ldots, r_{{\bf x}_\ast}^{\alpha_n}x_{\ast,n}) \KMd{= \alpha_i x_{\ast,i} C_\ast}.
\end{equation*}
Here we assume that $r_{{\bf x}_\ast}$ satisfies the following equation:
\begin{equation}
\label{balance-C01}
r_{{\bf x}_\ast}^k C_\ast = \frac{1}{k},
\end{equation}
which implies that $r_{{\bf x}_\ast}$ is uniquely determined once $C_\ast$ is given, {\em provided} $C_\ast > 0$.
The positivity of $C_\ast$ is nontrivial in general; nevertheless, we have the following result.
\begin{lem}
\label{lem-Cast-nonneg}
Let ${\bf x}_\ast \in \mathcal{E}$ be a hyperbolic equilibrium for $g$ such that the local stable manifold $W^s_{\rm loc}({\bf x}_\ast; g)$ satisfies $W^s_{\rm loc}({\bf x}_\ast; g)\cap \mathcal{D}\not = \emptyset$.
Then $C_\ast \equiv G({\bf x}_\ast) \geq 0$.
\end{lem}
\begin{proof}
Assume that the statement is not true, namely $C_\ast < 0$.
We can choose a solution ${\bf x}(\tau)$ asymptotic to ${\bf x}_{\ast}$ whose initial point ${\bf x}(0)$ satisfies $\kappa({\bf x}(0)) < \infty$ by assumption.
Along such a solution, we integrate $G({\bf x}(\tau))$.
Lemma \ref{lem-G} indicates that
\begin{equation*}
\int_{\tau_0}^\tau G({\bf x}(\eta))d\eta = \ln \kappa({\bf x}(\tau)) - \ln \kappa({\bf x}(\tau_0)).
\end{equation*}
By the continuity of $G$, $G({\bf x}(\tau))$ is negative along ${\bf x}(\tau)$ in a small neighborhood of ${\bf x}_\ast$ in $W^s_{\rm loc}({\bf x}_\ast; g)$, so that the integral on the left-hand side is bounded above as $\tau \to +\infty$.
On the other hand, ${\bf x}(\tau)\to {\bf x}_\ast\in \mathcal{E}$ holds as $\tau\to +\infty$, implying $\kappa = \kappa({\bf x}(\tau))\to +\infty$.
Since $\ln r$ is monotonically increasing in $r$, $\ln \kappa({\bf x}(\tau))$ diverges to $+\infty$ as $\tau\to \infty$, which contradicts the boundedness of the integral.
\end{proof}
At this moment, we cannot exclude the possibility that $C_\ast = 0$.
Now we \KMe{{\em assume}} $C_\ast \not = 0$.
Then $C_\ast > 0$ holds and $r_{{\bf x}_\ast}$ in \KMd{(\ref{balance-C01})} is well-defined.
Finally, the equation (\ref{balance-para}) can be written as
\begin{align*}
\frac{ \alpha_i}{k} r_{{\bf x}_\ast}^{\alpha_i} \KMc{x_{\ast, i}} = f_{i;\alpha,k}(r_{{\bf x}_\ast}^{\alpha_1}\KMc{x_{\ast, 1}}, \ldots, r_{{\bf x}_\ast}^{\alpha_n} \KMc{x_{\ast, n}}),
\end{align*}
which is nothing but the balance law (\ref{0-balance}).
As a summary, we have a one-to-one correspondence between roots of the \KMf{balance law} and equilibria on the horizon \KMe{for the desingularized vector field (\ref{desing-para})}.
\begin{thm}[One-to-one correspondence of the balance]
\label{thm-balance-1to1}
Let ${\bf x}_\ast = (x_{\ast,1}, \ldots, x_{\ast,n})^T$ be an equilibrium on the horizon for the desingularized vector field (\ref{desing-para}).
Assume that $C_\ast$ in (\ref{const-horizon}) is positive so that $r_{{\bf x}_\ast} = (kC_\ast)^{-1/k}\KMe{>0}$ is well-defined.
Then \KMi{the vector} ${\bf Y}_0 = (Y_{0,1},\ldots, Y_{0,n})^T$ \KMi{given by}
\begin{equation}
\label{x-to-C}
(Y_{0,1},\ldots, Y_{0,n}) = (r_{{\bf x}_\ast}^{\alpha_1}x_{\ast,1},\ldots, r_{{\bf x}_\ast}^{\alpha_n}x_{\ast,n}) \equiv r_{{\bf x}_\ast}^{\Lambda_\alpha}{\bf x}_\ast
\end{equation}
\KMi{is a root of the balance law (\ref{0-balance}).}
Conversely, let ${\bf Y}_0\not = 0$ be a root of the balance law (\ref{0-balance}).
Then \KMi{the vector} ${\bf x}_\ast$ \KMi{given by}
\begin{equation}
\label{C-to-x}
(x_{\ast,1},\ldots, x_{\ast,n}) = (r_{{\bf Y}_0}^{-\alpha_1}Y_{0,1},\ldots, r_{{\bf Y}_0}^{-\alpha_n}Y_{0,n})\equiv r_{{\bf Y}_0}^{-\Lambda_\alpha}{\bf Y}_0
\end{equation}
\KMi{is an equilibrium on the horizon for (\ref{desing-para}),}
where $r_{{\bf Y}_0} = p({\bf Y}_0) > 0$.
\end{thm}
\begin{proof}
\KMf{We have already seen the proof of the first statement, and hence we shall prove the second statement here}.
First let
\begin{equation*}
r_{{\bf Y}_0} \equiv p({\bf Y}_0) > 0,\quad \bar Y_{0,i} := \frac{Y_{0,i}}{r_{{\bf Y}_0}^{\alpha_i}}.
\end{equation*}
By definition $p(\bar {\bf Y}_0) = 1$, where $\bar {\bf Y}_0 = (\bar Y_{0,1},\ldots, \bar Y_{0,n})^T$.
Substituting $\bar {\bf Y}_0$ into the right-hand side of (\ref{balance-para}), we have
\begin{align*}
\alpha_i \bar Y_{0,i} \sum_{j=1}^n \frac{\bar Y_{0,j}^{2\beta_j-1}}{\alpha_j}\tilde f_j(\bar {\bf Y}_0) = \alpha_i \bar Y_{0,i} \sum_{j=1}^n \frac{\bar Y_{0,j}^{2\beta_j-1}}{\alpha_j} \tilde f_{j; \alpha, k}(\bar {\bf Y}_0),
\end{align*}
where we have used the identity $p(\bar {\bf Y}_0) = 1$ and (\ref{identity-f-horizon}).
From quasi-homogeneity of $f_{\alpha, k}$ and the balance law (\ref{0-balance}), we further have
\begin{align*}
\alpha_i \bar Y_{0,i} \sum_{j=1}^n \frac{\bar Y_{0,j}^{2\beta_j-1}}{\alpha_j} \tilde f_{j; \alpha, k}(\bar {\bf Y}_0)
&= \alpha_i \bar Y_{0,i} \sum_{j=1}^n \frac{\bar Y_{0,j}^{2\beta_j-1}}{\alpha_j} r_{{\bf Y}_0}^{-(k+\alpha_j)}f_{j; \alpha, k}({\bf Y}_0) \\
&= \alpha_i \bar Y_{0,i} \sum_{j=1}^n \frac{\bar Y_{0,j}^{2\beta_j-1}}{\alpha_j} r_{{\bf Y}_0}^{-(k+\alpha_j)} \frac{\alpha_j}{k}Y_{0,j}
= \alpha_i \bar Y_{0,i} \frac{r_{{\bf Y}_0}^{-k}}{k}\sum_{j=1}^n \bar Y_{0,j}^{2\beta_j}\\
&= \alpha_i \bar Y_{0,i} \frac{r_{{\bf Y}_0}^{-k}}{k} = r_{{\bf Y}_0}^{-(k+\alpha_i)} \frac{\alpha_i}{k} Y_{0,i}\\
&= r_{{\bf Y}_0}^{-(k+\alpha_i)} f_{i; \alpha, k}({\bf Y}_0) = f_{i; \alpha, k}(\bar {\bf Y}_0) = \tilde f_{i; \alpha, k}(\bar {\bf Y}_0),
\end{align*}
implying that $\bar {\bf Y}_0$ is a root of (\ref{balance-para}).
\end{proof}
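The correspondence above can be checked numerically on a concrete system. The following sketch uses a hypothetical example, not taken from the text: $f({\bf y}) = (y_2, y_1^2)$, quasi-homogeneous of type $\alpha = (2,3)$ and order $k+1$ with $k=1$, together with exponents $\beta = (3,2)$ and $c = 6$ satisfying $\alpha_l\beta_l \equiv c$, so that $p({\bf x}) = (x_1^6 + x_2^4)^{1/12}$ defines the horizon $\{p=1\}$.

```python
import numpy as np

# Hypothetical example (not from the text): f(y) = (y_2, y_1^2) is
# quasi-homogeneous of type alpha = (2, 3) and order k + 1 with k = 1.
# The exponents beta = (3, 2) and c = 6 satisfy alpha_l * beta_l = c, so
# p(x) = (x_1^6 + x_2^4)^{1/(2c)} defines the horizon {p = 1}.
alpha = np.array([2.0, 3.0])
beta = np.array([3.0, 2.0])
c, k = 6.0, 1.0

def p(x):
    return np.sum(np.abs(x) ** (2 * beta)) ** (1.0 / (2 * c))

# Root of the balance law f_{j;alpha,k}(Y_0) = (alpha_j / k) Y_{0,j}:
# Y_{0,2} = 2 Y_{0,1} and Y_{0,1}^2 = 3 Y_{0,2} give Y_0 = (6, 12).
Y0 = np.array([6.0, 12.0])
r = p(Y0)                     # r_{Y_0} = p(Y_0) > 0

# Formula (C-to-x): x_* = r^{-Lambda_alpha} Y_0, componentwise r^{-alpha_i} Y_{0,i}.
x_star = r ** (-alpha) * Y0

print(p(x_star))              # x_* lies on the horizon {p = 1}
print(r ** alpha * x_star)    # the inverse map (x-to-C) recovers Y_0
```

The check confirms both directions of the correspondence on this example: the rescaled root lands on the horizon, and the inverse rescaling recovers the root.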
\subsection{A special eigenstructure characterizing blow-ups}
The balance law determines the coefficients of type-I blow-ups, which \KMi{turn out to} correspond to equilibria on the horizon for the desingularized vector field.
This correspondence provides a relationship between two different vector fields involving blow-ups.
We further investigate a common feature characterizing blow-up behavior, extracted from blow-up power-determining matrices as well as from desingularized vector fields.
\begin{thm}[Eigenvalue $1$. cf. \cite{AM1989, C2015}]
\label{thm-ev1}
\KMe{Consider \KMe{an} asymptotically quasi-homogeneous vector field $f$ of type $\alpha = (\alpha_1,\ldots, \alpha_n)$ and order $k+1$}.
Suppose that a nontrivial root ${\bf Y}_0$ of the balance law (\ref{0-balance}) is given.
Then the corresponding blow-up power-determining matrix $A$ has an eigenvalue $1$ with the associated eigenvector
\begin{equation}
\label{vector-ev1}
{\bf v}_{0,\alpha} = \KMf{\Lambda_\alpha {\bf Y}_0}.
\end{equation}
\end{thm}
Note that the matrix $A$ only involves the quasi-homogeneous part $f_{\alpha, k}$ of $f$.
\begin{proof}
Consider \eqref{temporay-label4} at $\mathbf{y} = \mathbf{Y}_0$
with the help of (\ref{0-balance}):
\begin{equation}
(D f_{\alpha,k})(\mathbf{Y}_0) \KMg{ \Lambda_\alpha } \mathbf{Y}_0
= \left( \KMg{ k I+ \Lambda_\alpha } \right) f_{\alpha,k}(\mathbf{Y}_0)
= \left( \KMg{ I + \frac{1}{k}\Lambda_\alpha } \right) \KMg{ \Lambda_\alpha } \mathbf{Y}_0.
\end{equation}
Then, using the definition of $A$, we have
\begin{eqnarray*}
A \KMg{ \Lambda_\alpha } \mathbf{Y}_0
= \left( -\KMf{ \frac{1}{k}\Lambda_\alpha } + Df_{\alpha,k}(\mathbf{Y}_0) \right) \KMg{ \Lambda_\alpha } \mathbf{Y}_0
= -\frac{1}{k}\Lambda_\alpha^2 \mathbf{Y}_0 + \left(I+\KMf{ \frac{1}{k}\Lambda_\alpha } \right)\KMg{ \Lambda_\alpha } \mathbf{Y}_0
=\KMg{ \Lambda_\alpha } \mathbf{Y}_0,
\end{eqnarray*}
which shows the desired statement\footnote{
\KMb{It follows from Theorem \ref{thm-blow-up-estr} below that $\Lambda_\alpha \mathbf{Y}_0$ is not a zero-vector}.
}.
\end{proof}
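The eigenvalue $1$ can be verified numerically. The sketch below reuses the hypothetical system $f({\bf y}) = (y_2, y_1^2)$ of type $\alpha=(2,3)$ with $k=1$ introduced earlier (an illustration, not an example from the text), whose balance law has the nontrivial root ${\bf Y}_0 = (6,12)$; it checks that $A\,\Lambda_\alpha {\bf Y}_0 = \Lambda_\alpha {\bf Y}_0$.

```python
import numpy as np

# Hypothetical example: f(y) = (y_2, y_1^2), quasi-homogeneous of type
# alpha = (2, 3) and order k + 1 with k = 1.  The balance law
# f_{j;alpha,k}(Y_0) = (alpha_j / k) Y_{0,j} has the nontrivial root Y_0 = (6, 12).
alpha = np.array([2.0, 3.0])
k = 1.0
Y0 = np.array([6.0, 12.0])
Lam = np.diag(alpha)                 # Lambda_alpha

Df = np.array([[0.0, 1.0],           # Jacobian of f at Y_0
               [2.0 * Y0[0], 0.0]])

A = -Lam / k + Df                    # blow-up power-determining matrix

v = Lam @ Y0                         # candidate eigenvector v_{0,alpha} = Lambda_alpha Y_0
print(A @ v - v)                     # zero vector: A v = 1 * v
print(np.linalg.eigvals(A))          # spectrum of A contains the eigenvalue 1
```

Here $A = \begin{pmatrix} -2 & 1 \\ 12 & -3\end{pmatrix}$ has spectrum $\{1, -6\}$, and $\Lambda_\alpha {\bf Y}_0 = (12, 36)^T$ is indeed an eigenvector for the eigenvalue $1$.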
As a corollary, we have the following observation from the balance law, which extracts a common feature of blow-up solutions.
\begin{cor}
Under the same assumptions as in Theorem \ref{thm-ev1},
the corresponding blow-up power-determining matrix $A$ has an eigenvalue $1$ with the associated eigenvector $f_{\alpha, k}({\bf Y}_0)$.
\end{cor}
Combined with Theorem \ref{thm-balance-1to1}, the eigenvector is also characterized as follows.
\begin{cor}
Suppose that all assumptions in Theorem \ref{thm-ev1} are satisfied.
Then the blow-up power-determining matrix $A$ associated with the blow-up solution given by the balance law (\ref{0-balance}) has an eigenvalue $1$ with the associated eigenvector $\KMc{r_{{\bf x}_\ast}^{\Lambda_\alpha}} f_{\alpha, k}({\bf x}_\ast)$, where ${\bf x}_\ast$ is the equilibrium on the horizon for the desingularized vector field, under the quasi-parabolic compactification of type $\alpha$, given by the formula (\ref{C-to-x}).
\end{cor}
The above arguments indicate one specific eigenstructure of the desingularized vector field at equilibria on the horizon {\em under a technical assumption}.
\KMe{To see this, we make the following} assumption on $f$, which is essential to the arguments below owing to the technical restriction imposed by the form of parabolic compactifications, although it can be relaxed for general systems.
\begin{ass}
\label{ass-f}
For each $i$,
\begin{equation*}
\tilde f_{i;{\rm res}}({\bf x}) = O\left(\kappa({\bf x})^{-(1+\epsilon)} \right),\quad \frac{\partial \tilde f_{i;{\rm res}}}{\partial x_l}({\bf x}) = o\left(\kappa({\bf x})^{-(1+\epsilon)}\right),\quad l=1,\ldots, n
\end{equation*}
hold for some $\epsilon > 0$ as ${\bf x}$ approaches $\mathcal{E}$.
\KMf{Moreover, for any equilibrium ${\bf x}_\ast$ for $g$ \KMi{under consideration}, $C_\ast > 0$ holds, where $C_\ast = C_\ast({\bf x}_\ast)$ is given in (\ref{const-horizon}).}
\end{ass}
The direct consequence of the assumption is the following, which is used for the correspondence of eigenstructures among different matrices.
\begin{lem}
\label{lem-ass-f}
Let ${\bf x}_\ast\in \mathcal{E}$ be an equilibrium for $g$.
Under Assumption \ref{ass-f}, we have $D\tilde f({\bf x}_\ast) = D\tilde f_{\alpha, k}({\bf x}_\ast)$, where the derivative $D$ is with respect to ${\bf x}$.
\end{lem}
\begin{proof}
Now $\tilde f_{i;{\rm res}}$ is expressed as
\begin{align*}
\tilde f_{i;{\rm res}}(\KMh{{\bf x}}) &\equiv \kappa^{-(k+\alpha_i)} f_{i,{\rm res}}(\KMh{\kappa^{\Lambda_\alpha}{\bf x}} )\quad \text{(by (\ref{f-tilde}))} \\
&\equiv \kappa^{-(1+\epsilon)} \tilde f_{i;{\rm res}}^{(1)} (\KMh{{\bf x}})
\end{align*}
with
\begin{equation*}
\tilde f_{i;{\rm res}}^{(1)}({\bf x}) = O(1),\quad \frac{\partial \tilde f_{i;{\rm res}}^{(1)}}{\partial x_l}({\bf x}) = o(\kappa({\bf x})^{1+\epsilon}), \quad l=1,\ldots, n
\end{equation*}
as ${\bf x}$ approaches $\mathcal{E}$\KMc{.}
The partial derivative of the component $\tilde f_i$ with respect to $x_l$ at ${\bf x}_\ast$ is
\begin{equation*}
\frac{\partial \tilde f_i}{\partial x_l}({\bf x}_\ast) = \frac{\partial \tilde f_{i;\alpha, k}}{\partial x_l}({\bf x}_\ast) +
\KMg{
(1+\epsilon)\kappa^{-\epsilon}\frac{\partial \kappa^{-1}}{\partial x_l}\tilde f_{i;{\rm res}}^{(1)}(\KMh{{\bf x}_\ast}) + \kappa^{-(1+\epsilon)} \frac{\partial \tilde f_{i;{\rm res}}^{(1)}}{\partial x_l}({\bf x}_\ast ).
}
\end{equation*}
Using the fact that $\kappa^{-1} = 0$ on the horizon,
our present assumption implies that the \lq\lq gap" terms
\begin{equation*}
(1+\epsilon)\kappa^{-\epsilon}\frac{\partial \kappa^{-1}}{\partial x_l}\tilde f_{i;{\rm res}}^{(1)}(\KMh{{\bf x}_\ast}) + \kappa^{-(1+\epsilon)} \frac{\partial \tilde f_{i;{\rm res}}^{(1)}}{\partial x_l}({\bf x}_\ast )
\end{equation*}
are identically $0$ on the horizon and hence the Jacobian matrix $D\tilde f({\bf x}_\ast)$ with respect to ${\bf x}$ coincides with $D\tilde f_{\alpha, k}({\bf x}_\ast)$.
\end{proof}
\begin{thm}
\label{thm-ev-special}
\KMe{Suppose that Assumption \ref{ass-f} holds}.
Also suppose that ${\bf x}_\ast$ is an equilibrium on the horizon for the associated desingularized vector field (\ref{desing-para}).
Then the Jacobian matrix $Dg({\bf x}_\ast)$ always \KMc{possesses} the eigenpair $\{-C_\ast, {\bf v}_{\ast,\alpha}\}$, where
\begin{equation*}
{\bf v}_{\ast,\alpha} = \KMf{ \Lambda_\alpha {\bf x}_\ast }.
\end{equation*}
\end{thm}
\KMi{
Before the proof, it should be noted that
the inner product of the gradient $\nabla p({\bf x}_\ast)$ at an equilibrium ${\bf x}_\ast \in \mathcal{E}$ given in (\ref{grad-xast}) and the vector ${\bf v}_{\ast,\alpha}$ is unity:
\begin{equation}
\label{inner-gradp-v}
\nabla p({\bf x}_\ast)^T {\bf v}_{\ast, \alpha} = \frac{1}{c}\sum_{l=1}^n \beta_lx_{\ast, l}^{2\beta_l-1} \alpha_l x_{\ast, l} = \frac{c}{c} \sum_{l=1}^n x_{\ast, l}^{2\beta_l} = 1.
\end{equation}
}
\begin{proof}[Proof of Theorem \ref{thm-ev-special}]
First, it follows from (\ref{desing-vector}) that
\begin{align*}
Dg({\bf x}) &= (2c-1)p({\bf x})^{2c-1}\tilde f({\bf x})\nabla p({\bf x})^T + \left( 1-\frac{2c-1}{2c}(1-p({\bf x})^{2c})\right) D\tilde f({\bf x})\\
&\quad - (\alpha_1 x_1, \ldots, \alpha_n x_n)^T \nabla G({\bf x})^T - G({\bf x})\Lambda_\alpha, \\
Dg({\bf x}_\ast) &= (2c-1)\tilde f({\bf x}_\ast)\nabla p({\bf x}_\ast)^T + D\tilde f({\bf x}_\ast) - {\bf v}_{\ast, \alpha}\nabla G({\bf x}_\ast)^T - G({\bf x}_\ast)\Lambda_\alpha \quad \text{(from the definition of ${\bf v}_{\ast, \alpha}$)}\\
&= (2c-1)\tilde f({\bf x}_\ast)\nabla p({\bf x}_\ast)^T + D\tilde f({\bf x}_\ast) - {\bf v}_{\ast, \alpha}\nabla G({\bf x}_\ast)^T - C_\ast \Lambda_\alpha \quad \KMi{\text{(from (\ref{const-horizon}))}}\\
&= \left\{ - C_\ast \Lambda_\alpha + D\tilde f({\bf x}_\ast) \right\} + {\bf v}_{\ast, \alpha} \left( (2c-1) C_\ast \nabla p({\bf x}_\ast) - \nabla G({\bf x}_\ast) \right)^T \quad \text{(from (\ref{const-horizon-0}))}.
\end{align*}
Next, using (\ref{Gx}) and (\ref{const-horizon}), we have
\begin{align*}
\nabla G({\bf x}_\ast) &= {\rm diag}\left(\frac{2\beta_1-1}{\alpha_1}x_{\ast, 1}^{2\beta_1-2},\ldots, \frac{2\beta_n-1}{\alpha_n}x_{\ast, n}^{2\beta_n-2}\right)C_\ast {\bf v}_{\ast, \alpha}
+ (A_g + C_\ast \Lambda_\alpha)^T \left(\frac{x_{\ast, 1}^{2\beta_1-1}}{\alpha_1},\ldots, \frac{x_{\ast, n}^{2\beta_n-1}}{\alpha_n}\right)^T \\
&= C_\ast \left(2\beta_1 x_{\ast, 1}^{2\beta_1-1},\ldots, 2\beta_n x_{\ast, n}^{2\beta_n-1}\right)^T
+ A_g^T \nabla p({\bf x}_\ast) \\
&= 2cC_\ast \nabla p({\bf x}_\ast) + A_g^T \nabla p({\bf x}_\ast),
\end{align*}
where
\begin{align*}
A_g &:= -C_\ast \Lambda_\alpha + D\tilde f({\bf x}_\ast).
\end{align*}
The Jacobian matrix $Dg({\bf x}_\ast)$ \KMh{then} has a decomposition $Dg({\bf x}_\ast) = A_g + B_g$, where
\begin{align}
\label{Bg}
B_g &:= - {\bf v}_{\ast, \alpha} \nabla p({\bf x}_\ast)^T (A_g + C_\ast I).
\end{align}
Now Lemma \ref{lem-ass-f} implies
\begin{equation}
\label{Ag-QH}
A_g = -C_\ast \Lambda_\alpha + D\tilde f_{\alpha, k}({\bf x}_\ast).
\end{equation}
Therefore the same idea as in the proof of Theorem \ref{thm-ev1} can be applied to obtain
\begin{align*}
A_g\KMg{ \Lambda_\alpha } {\bf x}_\ast
&= (- \KMg{ C_\ast\Lambda_\alpha } + D\tilde f_{\alpha,k}({\bf x}_\ast))\KMg{ \Lambda_\alpha } {\bf x}_\ast \quad \KMh{\text{(from (\ref{Ag-QH}))}}\\
&= - \KMg{ C_\ast\Lambda_\alpha^2 } {\bf x}_\ast + (k I+ \Lambda_\alpha ) \tilde f_{\alpha, k}({\bf x}_\ast) \quad \KMi{\text{(from (\ref{temporay-label4}))}}\\
&= - \KMg{ C_\ast\Lambda_\alpha^2 } {\bf x}_\ast + C_\ast(k I+ \Lambda_\alpha )\KMg{ \Lambda_\alpha } {\bf x}_\ast \quad \KMi{\text{(from (\ref{const-horizon-0-Cast}))}}\\
&= k\KMg{ C_\ast \Lambda_\alpha } {\bf x}_\ast,
\end{align*}
which shows that the matrix $A_g$ admits an eigenvector ${\bf v}_{\ast,\alpha}$ with associated eigenvalue $kC_\ast$.
In other words,
\begin{equation}
\label{vector-evkC}
A_g {\bf v}_{\ast,\alpha} = kC_\ast {\bf v}_{\ast,\alpha}.
\end{equation}
From (\ref{Bg}) and (\ref{inner-gradp-v}), we have
\begin{align*}
B_g {\bf v}_{\ast,\alpha} &= - {\bf v}_{\ast, \alpha} \nabla p({\bf x}_\ast)^T (A_g + C_\ast I){\bf v}_{\ast,\alpha}\\
&= - (k+1)C_\ast {\bf v}_{\ast, \alpha} \nabla p({\bf x}_\ast)^T {\bf v}_{\ast,\alpha}\quad \text{(from (\ref{vector-evkC}))}\\
&= - (k+1)C_\ast {\bf v}_{\ast, \alpha}\KMh{.} \quad \KMf{ \text{(from (\ref{inner-gradp-v}))} }
\end{align*}
Therefore we have
\begin{align*}
Dg({\bf x}_\ast) {\bf v}_{\ast,\alpha} = (A_g + B_g) {\bf v}_{\ast,\alpha} = \{ kC_\ast - (k+1)C_\ast\} {\bf v}_{\ast,\alpha} &= -C_\ast {\bf v}_{\ast,\alpha}
\end{align*}
and, as a consequence, the vector ${\bf v}_{\ast,\alpha}$ is an eigenvector of $Dg({\bf x}_\ast)$ associated with $-C_\ast$ and the proof is completed.
\end{proof}
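The eigenpair $\{-C_\ast, {\bf v}_{\ast,\alpha}\}$, and the intermediate identity $A_g{\bf v}_{\ast,\alpha} = kC_\ast{\bf v}_{\ast,\alpha}$, can be checked numerically via the decomposition $Dg({\bf x}_\ast) = A_g + B_g$. The sketch again uses the hypothetical system $f({\bf y}) = (y_2, y_1^2)$ with $\alpha=(2,3)$, $\beta=(3,2)$, $c=6$, $k=1$ (in the quasi-homogeneous case $\tilde f$ has the same functional form as $f$), and assumes the relation $r_{{\bf x}_\ast} = (kC_\ast)^{-1/k}$ between the root of the balance law and $C_\ast$.

```python
import numpy as np

# Hypothetical example: f(y) = (y_2, y_1^2), alpha = (2, 3), k = 1,
# beta = (3, 2), c = 6, balance-law root Y_0 = (6, 12).  In the
# quasi-homogeneous case, f~ has the same functional form as f.
alpha = np.array([2.0, 3.0])
beta = np.array([3.0, 2.0])
c, k = 6.0, 1.0
Lam = np.diag(alpha)

Y0 = np.array([6.0, 12.0])
r = np.sum(Y0 ** (2 * beta)) ** (1.0 / (2 * c))   # r_{Y_0} = p(Y_0)
x = r ** (-alpha) * Y0                            # equilibrium x_* on the horizon
C = r ** (-k) / k                                 # C_* from r = (k C_*)^{-1/k}

Dft = np.array([[0.0, 1.0], [2.0 * x[0], 0.0]])   # D f~(x_*)
Ag = -C * Lam + Dft                               # A_g
grad_p = beta * x ** (2 * beta - 1) / c           # gradient of p at x_*
v = Lam @ x                                       # v_{*,alpha} = Lambda_alpha x_*
Bg = -np.outer(v, grad_p) @ (Ag + C * np.eye(2))  # B_g
Dg = Ag + Bg                                      # Dg(x_*)

print(grad_p @ v)            # = 1: the normalization of the inner product
print(Ag @ v - k * C * v)    # = 0: A_g v = k C_* v
print(Dg @ v + C * v)        # = 0: Dg(x_*) v = -C_* v
```

The three printed quantities reproduce, on this example, the normalization identity, the eigenpair $\{kC_\ast, {\bf v}_{\ast,\alpha}\}$ of $A_g$, and the eigenpair $\{-C_\ast, {\bf v}_{\ast,\alpha}\}$ of $Dg({\bf x}_\ast)$.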
This theorem and (\ref{inner-gradp-v}) imply that the eigenvector ${\bf v}_{\ast,\alpha}$ is transversal to the tangent space $T_{{\bf x}_\ast}\mathcal{E}$.
Combined with the $Dg$-invariance of the tangent bundle $T\mathcal{E}$ (cf. Remark \ref{rem-invariance}), we conclude that the eigenvector ${\bf v}_{\ast,\alpha}$ provides the blow-up direction in the linear sense.
Comparing Theorem \ref{thm-ev1} with Theorem \ref{thm-ev-special}, the eigenpair $\{1, {\bf v}_{0,\alpha}\}$ of the blow-up power-determining matrix $A$ provides characteristic information on blow-up solutions.
\KMf{Similarly}, from the eigenpair $\{\KMf{-C_\ast}, {\bf v}_{\ast,\alpha}\}$, a direction of trajectories ${\bf x}(\tau)$ for (\ref{desing-para}) converging to ${\bf x}_\ast$ is uniquely determined.
Using this fact, we obtain the following corollary, which justifies the correspondence stated in Theorem \ref{thm-balance-1to1}.
\begin{cor}
\label{cor-Cast-pos}
\KMe{Suppose that Assumption \ref{ass-f}, except the condition on $C_\ast$, holds}.
Let ${\bf x}_\ast\in \mathcal{E}$ be a hyperbolic equilibrium for $g$ satisfying the assumptions in Lemma \ref{lem-Cast-nonneg}.
Then the constant $C_\ast = C_\ast({\bf x}_\ast)$ given in (\ref{const-horizon}) is positive.
\end{cor}
\begin{proof}
\KMe{Because} ${\bf x}_\ast$ is hyperbolic, ${\rm Spec}(Dg({\bf x}_\ast))\cap i\mathbb{R} = \emptyset$.
In particular, $C_\ast \not = 0$ since $-C_\ast \in {\rm Spec}(Dg({\bf x}_\ast))$.
Combining this fact with Lemma \ref{lem-Cast-nonneg}, we have $C_\ast > 0$.
\end{proof}
\begin{rem}
The above corollary provides a sufficient condition for $C_\ast > 0$, while the converse is not always true.
In other words, the intersection ${\rm Spec}(Dg({\bf x}_\ast))\cap i\mathbb{R}$ can be nontrivial even if $C_\ast > 0$.
\end{rem}
\begin{rem}[Similarity to Painlev\'{e}-type analysis]
\label{rem-Painleve}
In studies involving the Painlev\'{e}-type property from the viewpoint of algebraic geometry (e.g. \cite{AM1989, C2015}), the necessary and sufficient conditions under which the {\em complex} ODE of the form
\begin{equation}
\label{ODE-complex}
\frac{du}{dz} = f(z,u),\quad z\in \mathbb{C}
\end{equation}
possesses meromorphic solutions are considered.
The key point is that the matrix induced by (\ref{ODE-complex}), called the {\em Kovalevskaya matrix} and essentially the same as the blow-up power-determining matrix, is diagonalizable and all its eigenvalues, referred to as {\em Kovalevskaya exponents} in \cite{C2015, C2016_124, C2016_356}, are integers, in which case the meromorphic solutions generate a parameter family.
The number of free parameters is determined by the number of integer eigenvalues with the required sign.
\par
It is proved in the preceding studies that the Kovalevskaya matrix always admits the eigenvalue $-1$ and the associated eigenstructure is uniquely determined.
Theorem \ref{thm-ev1} is therefore regarded as a counterpart of this result for the description of blow-ups.
The difference in sign, \lq\lq $-1$" there versus \lq\lq $+1$" in our result, comes from the different forms of the expansions of solutions.
Indeed, solutions as functions of $z-z_0$ with a movable singularity $z_0\in \mathbb{C}$ are considered in the Painlev\'{e}-type analysis, while solutions as functions of $\theta(t) = t_{\max} - t$ are considered in the present study.
\end{rem}
\subsection{Remaining eigenstructure of $A$ and tangent spaces on the horizon}
As mentioned \KMi{before} the proof of Theorem \ref{thm-ev-special}, the vector ${\bf v}_{\ast, \alpha}$ is transversal \KMe{to} the horizon $\mathcal{E}$.
\KMe{Also, as} mentioned in Remark \ref{rem-invariance}, the horizon $\mathcal{E}$ is \KMe{a} codimension one invariant manifold for $g$ and hence the remaining $n-1$ independent (generalized) eigenvectors of $Dg({\bf x}_\ast)$ \KMe{span} the tangent space $T_{{\bf x}_\ast}\mathcal{E}$.
Our aim here is to investigate these eigenvectors and the correspondence among those for $Dg({\bf x}_\ast)$, $A_g$ and $A$.
We shall see below that the matrix $B_g$ \KMe{given} in (\ref{Bg}) plays a key role, whose essence is the determination of the following \KMb{object}.
\begin{prop}
\label{prop-proj-B}
Let $P_\ast := {\bf v}_{\ast,\alpha}\nabla p({\bf x}_\ast)^T$.
Then $P_\ast$ as the linear mapping on $\mathbb{R}^n$ is the (nonorthogonal) projection\footnote{
\KMe{In the homogeneous case $\alpha = (1,\ldots, 1)$, this is orthogonal.}
} onto ${\rm span}\{{\bf v}_{\ast,\alpha}\}$.
Similarly, the map $I-P_\ast$ is the (nonorthogonal) projection onto the tangent space $T_{{\bf x}_\ast}\mathcal{E}$.
\end{prop}
\begin{proof}
Note that the tangent space $T_{{\bf x}_\ast}\mathcal{E}$ is the orthogonal complement of the gradient $\nabla p({\bf x}_\ast)$.
We have
\begin{align*}
\nabla p({\bf x}_\ast)^T \KMg{(I-P_\ast)}
&= \nabla p({\bf x}_\ast)^T - \nabla p({\bf x}_\ast)^T {\bf v}_{\ast,\alpha}\nabla p({\bf x}_\ast)^T \\
&= \nabla p({\bf x}_\ast)^T - (\nabla p({\bf x}_\ast)^T {\bf v}_{\ast,\alpha} )\nabla p({\bf x}_\ast)^T\\
&= \nabla p({\bf x}_\ast)^T - \nabla p({\bf x}_\ast)^T\\
&= 0,
\end{align*}
which yields \KMg{that $(\KMg{I} - P_\ast)$ maps $\mathbb{R}^n$ to $({\rm span}\{\nabla p({\bf x}_\ast)\})^\bot = T_{{\bf x}_\ast}\mathcal{E}$.}
Moreover, the above identity implies
\begin{equation*}
P_\ast ( \KMg{I} -P_\ast) = P_\ast - P_\ast^2 = 0,
\end{equation*}
that is, $P_\ast$ is idempotent and hence is a projection.
\end{proof}
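The projection property can be verified numerically with the same hypothetical data as before ($\alpha=(2,3)$, $\beta=(3,2)$, $c=6$, balance-law root $(6,12)$ mapped to the horizon equilibrium):

```python
import numpy as np

# Same hypothetical data as in the earlier sketches: alpha = (2, 3),
# beta = (3, 2), c = 6, balance-law root Y_0 = (6, 12) mapped to x_*.
alpha = np.array([2.0, 3.0])
beta = np.array([3.0, 2.0])
c = 6.0

Y0 = np.array([6.0, 12.0])
r = np.sum(Y0 ** (2 * beta)) ** (1.0 / (2 * c))
x = r ** (-alpha) * Y0                       # equilibrium on the horizon

grad_p = beta * x ** (2 * beta - 1) / c      # gradient of p at x_*
v = alpha * x                                # v_{*,alpha} = Lambda_alpha x_*
P = np.outer(v, grad_p)                      # P_* = v_{*,alpha} grad p(x_*)^T

I = np.eye(2)
print(P @ P - P)           # = 0: P_* is idempotent, hence a projection
print(grad_p @ (I - P))    # = 0: range(I - P_*) lies in the tangent space of E
print(P @ v - v)           # = 0: P_* fixes span{v_{*,alpha}}
```

Since the inner product $\nabla p({\bf x}_\ast)^T{\bf v}_{\ast,\alpha}$ equals $1$, idempotency and the two range conditions follow exactly as in the proof.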
Using the projection $P_\ast$, the matrix $B_g$, and hence $Dg({\bf x}_\ast)$, can be rewritten as follows:
\begin{align}
\label{Bg-easy}
\KMg{B_g} &= \KMg{ -P_\ast (A_g + C_\ast \KMg{I})},\\
\label{Dg-proj}
Dg({\bf x}_\ast) &= A_g + B_g
= A_g - P_\ast (A_g + C_\ast \KMg{I})
= (\KMg{I} - P_\ast)A_g - C_\ast P_\ast.
\end{align}
Using this expression, we obtain the following proposition, which plays a key role in characterizing the correspondence of eigenstructures among different matrices.
\begin{prop}
\label{prop-corr-Dg-Ag}
For any $\lambda \in \mathbb{C}$ and $N\in \KMi{\mathbb{N}}$, we have
\begin{equation}
\label{Dg-I-P}
(Dg({\bf x}_\ast) - \lambda I)^N (I -P_\ast) = (I -P_\ast) (A_g - \lambda I)^N
\end{equation}
\KMi
{
and
\begin{equation}
\label{Dg-I-P-2}
(A_g - k C_\ast I) ( Dg({\bf x}_\ast) - \lambda I)^N (I-P_\ast) = (A_g - k C_\ast I) (A_g- \lambda I)^N = (A_g- \lambda I)^N (A_g - k C_\ast I).
\end{equation}
}
\end{prop}
\begin{proof}
\KMf{Because} $P_\ast$ is a projection, we have
\begin{align}
\notag
Dg({\bf x}_\ast)( \KMg{I} -P_\ast) &= (\KMg{I} - P_\ast)A_g ( \KMg{I} - P_\ast) - C_\ast P_\ast ( \KMg{I} - P_\ast)\\
\label{Dg-I-P-1}
&= (\KMg{I} - P_\ast)A_g - (\KMg{I} - P_\ast)A_g P_\ast.
\end{align}
Moreover, multiplying the identity (\ref{vector-evkC}) by $\nabla p({\bf x}_\ast)^T$ from the right, we have
\begin{equation}
\label{A-kC-ortho-P}
\KMi{(A_g - kC_\ast I) P_\ast = 0}.
\end{equation}
Then
\begin{equation*}
(\KMg{I} - P_\ast)A_g P_\ast = (I-P_\ast) k C_\ast P_\ast = 0.
\end{equation*}
Therefore, it follows from (\ref{Dg-I-P-1}) that
\begin{equation*}
Dg({\bf x}_\ast)( \KMg{I} -P_\ast) = (\KMg{I} - P_\ast)A_g.
\end{equation*}
Then, for any {\em complex} number $\lambda$ and for any $N\in \KMi{\mathbb{N}}$, we have the first statement.
\par
As for the second statement, direct calculations yield
\KMi{
\begin{align*}
(A_g - k C_\ast I) ( Dg({\bf x}_\ast) - \lambda I)^N (I-P_\ast)
&= (A_g - k C_\ast I) (I-P_\ast) (A_g- \lambda I)^N \quad \text{(from (\ref{Dg-I-P}))}\\
&= (A_g - k C_\ast I) (A_g- \lambda I)^N\quad \text{(from (\ref{A-kC-ortho-P}))} \\
&= (A_g- \lambda I)^N (A_g - k C_\ast I).
\end{align*}
}
\end{proof}
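The intertwining formula of the proposition can also be tested numerically on the same hypothetical data, for an arbitrary $\lambda$ and $N=2$:

```python
import numpy as np
from numpy.linalg import matrix_power

# Same hypothetical data: f(y) = (y_2, y_1^2), alpha = (2, 3),
# beta = (3, 2), c = 6, k = 1, balance-law root Y_0 = (6, 12).
alpha = np.array([2.0, 3.0])
beta = np.array([3.0, 2.0])
c, k = 6.0, 1.0
Lam = np.diag(alpha)

Y0 = np.array([6.0, 12.0])
r = np.sum(Y0 ** (2 * beta)) ** (1.0 / (2 * c))
x = r ** (-alpha) * Y0
C = r ** (-k) / k

Ag = -C * Lam + np.array([[0.0, 1.0], [2.0 * x[0], 0.0]])   # A_g
grad_p = beta * x ** (2 * beta - 1) / c
v = Lam @ x
P = np.outer(v, grad_p)                                     # P_*
Dg = Ag - P @ (Ag + C * np.eye(2))                          # Dg(x_*) = A_g + B_g

I = np.eye(2)
lam, N = 0.7, 2                 # arbitrary test values
lhs = matrix_power(Dg - lam * I, N) @ (I - P)
rhs = (I - P) @ matrix_power(Ag - lam * I, N)
print(lhs - rhs)                # = 0: (Dg - lam I)^N (I - P) = (I - P)(A_g - lam I)^N
```

The check passes for any $\lambda$ and $N$, reflecting that the identity is algebraic rather than specific to the spectrum of $A_g$.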
The formula (\ref{Dg-I-P}) yields the correspondence of eigenstructures between $Dg({\bf x}_\ast)$ and $A_g$ in a simple way.
\begin{thm}
\label{thm-evec-Dg}
Let ${\bf x}_\ast\in \mathcal{E}$ be an equilibrium on the horizon for $g$ and suppose that Assumption \ref{ass-f} holds.
\begin{enumerate}
\item Assume that $\lambda \in {\rm Spec}(A_g)$ and let ${\bf w}\in \mathbb{C}^n$ be such that
${\bf w}\in \ker((A_g - \lambda I)^{m_\lambda})\setminus \ker((A_g - \lambda I)^{m_\lambda -1})$ with $(I-P_\ast){\bf w}\not = 0$ for some $m_\lambda \in \KMi{\mathbb{N}}$.
\begin{itemize}
\item \KMi{If $\lambda \not = kC_\ast$,} then $(I -P_\ast){\bf w} \in \ker((Dg({\bf x}_\ast) - \lambda I)^{m_\lambda})\setminus \ker((Dg({\bf x}_\ast) - \lambda I)^{m_\lambda -1})$.
\item \KMi{If $\lambda = kC_\ast$, then either $(I-P_\ast){\bf w}\in \ker((Dg({\bf x}_\ast) - kC_\ast I)^{m_\lambda})\setminus \ker((Dg({\bf x}_\ast) - kC_\ast I)^{m_\lambda -1})$
or $(I - P_\ast){\bf w}\in \ker((Dg({\bf x}_\ast) - kC_\ast I)^{m_\lambda-1})\setminus \ker((Dg({\bf x}_\ast) - kC_\ast I)^{m_\lambda -2})$ holds.}
\end{itemize}
\item
Conversely, assume that $\lambda_g \in {\rm Spec}(Dg({\bf x}_\ast))$ and let ${\bf w}_g\in \mathbb{C}^n$ be such that
$(I - P_\ast){\bf w}_g\in \ker((Dg({\bf x}_\ast) - \lambda_g I)^{m_{\lambda_g}})\setminus \ker((Dg({\bf x}_\ast) - \lambda_g I)^{m_{\lambda_g} -1})$ with $(I-P_\ast){\bf w}_g\not = 0$ for some $m_{\lambda_g} \in \KMi{\mathbb{N}}$.
\begin{itemize}
\item If $\lambda_g \not = kC_\ast$, then $(A_g - kC_\ast I){\bf w}_g \in \ker((A_g - \lambda_g I)^{m_{\lambda_g}})\setminus \ker((A_g - \lambda_g I)^{m_{\lambda_g} -1})$.
\item If $\lambda_g = kC_\ast$, \KMi{then either $(A_g - kC_\ast I){\bf w}_g\in \ker((A_g - kC_\ast I)^{m_{\lambda_g}})\setminus \ker((A_g - kC_\ast I)^{m_{\lambda_g} -1})$
or $(A_g - kC_\ast I){\bf w}_g\in \ker((A_g - kC_\ast I)^{m_{\lambda_g}+1})\setminus \ker((A_g- kC_\ast I)^{m_{\lambda_g}})$ holds.}
\end{itemize}
\end{enumerate}
\end{thm}
\begin{rem}
\label{rem-complex-evec}
If $\lambda \in {\rm Spec}(Dg({\bf x}_\ast)) \cap (\mathbb{C} \setminus \mathbb{R})$, the associated eigenvector ${\bf w}_g$ is also complex-valued.
Moreover, $(\bar \lambda, \overline{{\bf w}_g})$ is also an eigenpair of $Dg({\bf x}_\ast)$, which implies
\begin{equation*}
Dg({\bf x}_\ast) \begin{pmatrix}
{\bf w}_g & \overline{{\bf w}_g}
\end{pmatrix} = \begin{pmatrix}
{\bf w}_g & \overline{{\bf w}_g}
\end{pmatrix}\begin{pmatrix}
\lambda & 0 \\
0 & \bar \lambda
\end{pmatrix}.
\end{equation*}
Let $\lambda = \lambda_{\rm re} + i\lambda_{\rm im}$ with $\lambda_{\rm im} \not = 0$ and
\begin{equation*}
Q = \begin{pmatrix}
1 & 1 \\
i & -i
\end{pmatrix}\quad \Leftrightarrow \quad Q^{-1} = \frac{1}{2}\begin{pmatrix}
1 & -i \\
1 & i
\end{pmatrix}.
\end{equation*}
Then we have
\begin{equation*}
Q\begin{pmatrix}
\lambda & 0 \\
0 & \bar \lambda
\end{pmatrix} = \begin{pmatrix}
\lambda_{\rm re} & \lambda_{\rm im} \\
-\lambda_{\rm im} & \lambda_{\rm re}
\end{pmatrix}Q
\end{equation*}
and hence
\begin{equation*}
Dg({\bf x}_\ast) \begin{pmatrix}
{\bf w}_g & \overline{{\bf w}_g}
\end{pmatrix}Q^{-1} = \begin{pmatrix}
{\bf w}_g & \overline{{\bf w}_g}
\end{pmatrix} Q^{-1} \begin{pmatrix}
\lambda_{\rm re} & \lambda_{\rm im} \\
-\lambda_{\rm im} & \lambda_{\rm re}
\end{pmatrix},
\end{equation*}
equivalently
\begin{equation*}
Dg({\bf x}_\ast) \begin{pmatrix}
{\rm Re}\,{\bf w}_g & {\rm Im}\,{\bf w}_g
\end{pmatrix} = \begin{pmatrix}
{\rm Re}\,{\bf w}_g & {\rm Im}\,{\bf w}_g
\end{pmatrix} \begin{pmatrix}
\lambda_{\rm re} & \lambda_{\rm im} \\
-\lambda_{\rm im} & \lambda_{\rm re}
\end{pmatrix}.
\end{equation*}
Therefore ${\rm Re}\,{\bf w}_g$ and ${\rm Im}\,{\bf w}_g$ generate basis vectors of \KMc{the} invariant subspace \KMc{$T_{{\bf x}_\ast} \mathcal{E}$.}
\KMc{Indeed,} $-C_\ast$ is real and hence ${\bf v}_{\ast, \alpha}$ and ${\rm Re}\,{\bf w}_g$, ${\rm Im}\,{\bf w}_g$ are linearly independent.
As a consequence, a complex eigenvalue $\lambda \in {\rm Spec}(Dg({\bf x}_\ast))$ is associated with two independent vectors ${\bf w}_{gr}, {\bf w}_{gi} \in T_{{\bf x}_\ast} \mathcal{E}$ such that ${\bf w}_{gr} + i{\bf w}_{gi}$ is the eigenvector associated with $\lambda$, in which case all arguments in the proof apply to ${\bf w}_{gr} + i{\bf w}_{gi}$.
\KMi{A similar observation holds for generalized eigenvectors with appropriate matrices realizing the above real form.}
\end{rem}
\begin{proof}
First it follows from ({\rm Re}f{Dg-I-P}) that, for any $\lambda \in \mathbb{C}$, any ${\bf w}\in \mathbb{C}^n$ and $N\in \KMi{\mathbb{N}}$,
\begin{equation}
\label{Dg-I-PN}
(Dg({\bf x}_\ast) - \lambda I)^N (I -P_\ast){\bf w} = (I -P_\ast) (A_g - \lambda I)^N{\bf w}.
\end{equation}
1.
If ${\bf w}\in \mathbb{C}^n$ \KMc{is} such that $(I - P_\ast){\bf w} \not = 0$ and that
\begin{equation}
\label{w-eigen-Ag}
{\bf w} \in \ker((A_g - \lambda I)^{m_\lambda}) \setminus \ker((A_g - \lambda I)^{m_\lambda-1})
\end{equation}
with $\lambda \not = kC_\ast$ for some $m_\lambda \in \KMi{\mathbb{N}}$, we know from (\ref{Dg-I-PN}) that
\begin{equation*}
(I-P_\ast){\bf w} \in \ker((Dg({\bf x}_\ast) - \lambda I)^{m_\lambda}) \setminus \ker((Dg({\bf x}_\ast) - \lambda I)^{m_\lambda-1}).
\end{equation*}
Here we have used the fact that $(A_g - \lambda I)^{m_\lambda - 1}{\bf w} \not \in {\rm span}\{{\bf v}_{\ast, \alpha}\}$, otherwise \KMc{$\bar {\bf w} \equiv (A_g - \lambda I)^{m_\lambda-1} {\bf w} = c{\bf v}_{\ast, \alpha}$ satisfies $(A_g - \lambda I)\bar {\bf w} = c(A_g - \lambda I){\bf v}_{\ast, \alpha} = 0$}.
But the latter never occurs because $(A_g - kC_\ast I){\bf v}_{\ast,\alpha} = 0$ and $\lambda \not = kC_\ast$ is assumed at present.
\par
Now we move to the case (\ref{w-eigen-Ag}) with $\lambda = kC_\ast$.
Then there are two cases to be considered:
\begin{itemize}
\item $(A_g - kC_\ast I)^{m_\lambda - 1}{\bf w} \not \in {\rm span}\{{\bf v}_{\ast, \alpha}\}$.
\item $(A_g - kC_\ast I)^{m_\lambda - 2}{\bf w} \not \in {\rm span}\{{\bf v}_{\ast, \alpha}\}$ and $(A_g - kC_\ast I)^{m_\lambda - 1}{\bf w} \in {\rm span}\{{\bf v}_{\ast, \alpha}\}$.
\end{itemize}
In the first case, both sides of (\ref{Dg-I-PN}) must vanish for $N=m_\lambda$, while they do not vanish for $N=m_\lambda - 1$.
Under the assumption $(I-P_\ast){\bf w} \not = 0$, this property indicates that $(I-P_\ast){\bf w} \in \ker((Dg({\bf x}_\ast) - kC_\ast I)^{m_\lambda}) \setminus \ker((Dg({\bf x}_\ast) - kC_\ast I)^{m_\lambda-1})$.
In the second case, on the other hand, both sides \KMb{in} (\ref{Dg-I-PN}) must vanish for $N=m_\lambda - 1$ by (\ref{vector-evkC}), while they do not vanish for $N=m_\lambda - 2$.
Under the assumption $(I-P_\ast){\bf w} \not = 0$, this property indicates that $(I-P_\ast){\bf w} \in \ker((Dg({\bf x}_\ast) - kC_\ast I)^{m_\lambda-1}) \setminus \ker((Dg({\bf x}_\ast) - kC_\ast I)^{m_\lambda-2})$.
\par
\KMi{2.}
\KMi{
Let ${\bf w}_g\in \mathbb{C}^n$ be such that
$(I - P_\ast){\bf w}_g \not = 0$ and that
\begin{equation}
\label{wg-eigen-Dg}
(I - P_\ast){\bf w}_g \in \ker((Dg({\bf x}_\ast) - \lambda_g I)^{m_\lambda}) \setminus \ker((Dg({\bf x}_\ast) - \lambda_g I)^{m_\lambda-1})
\end{equation}
with $\lambda_g \not = kC_\ast$ for some $m_\lambda \in \KMi{\mathbb{N}}$.
Then (\ref{Dg-I-P}) indicates that either of the following properties holds:
\begin{itemize}
\item ${\bf w}_g \in \ker((A_g - \lambda_g I)^{m_\lambda})\setminus \ker((A_g - \lambda_g I)^{m_\lambda -1})$,
\item $(A_g - \lambda_g I)^{m_\lambda-1} {\bf w}_g \not \in {\rm span}\{{\bf v}_{\ast,\alpha}\}$, $(A_g - \lambda_g I)^{m_\lambda} {\bf w}_g \in {\rm span}\{{\bf v}_{\ast,\alpha}\}$.
\end{itemize}
In the latter case, the relation (\ref{vector-evkC}) yields $(A_g - kC_\ast I)(A_g - \lambda_g I)^{m_\lambda} {\bf w}_g = 0$.
From (\ref{Dg-I-P-2}), we conclude that
\begin{equation*}
(A_g - kC_\ast I){\bf w}_g \in \ker((A_g - \lambda_g I)^{m_\lambda})\setminus \ker((A_g - \lambda_g I)^{m_\lambda -1})
\end{equation*}
holds in both cases.
Notice that $(A_g - kC_\ast I){\bf w}_g \not = 0$ because $(I-P_\ast){\bf w}_g \in T_{{\bf x}_\ast}\mathcal{E}\setminus \{0\}$ and is assumed to satisfy (\ref{wg-eigen-Dg}) with $\lambda_g \not = kC_\ast$.
\par
Now we move to the case (\ref{wg-eigen-Dg}) with $\lambda_g = kC_\ast$.
Because $(I-P_\ast){\bf w}_g \in T_{{\bf x}_\ast}\mathcal{E} \setminus \{0\}$ is assumed, (\ref{Dg-I-P}) implies that there are two cases to be considered, similar to the first statement:
\begin{itemize}
\item $(A_g - kC_\ast I)^{m_{\lambda_g}}{\bf w}_g = 0$.
\item $(A_g - kC_\ast I)^{m_{\lambda_g}}{\bf w}_g \in {\rm span}\{{\bf v}_{\ast, \alpha}\}$.
\end{itemize}
Similar to the proof of the first statement, the identity (\ref{Dg-I-P-2}) yields that
\begin{equation*}
(A_g - kC_\ast I){\bf w}_g \in \ker((A_g - \lambda_g I)^{N})\setminus \ker((A_g - \lambda_g I)^{N-1})
\end{equation*}
holds for either $N = m_{\lambda_g}$ or $m_{\lambda_g}+1$.
If $m_{\lambda_g} > 1$ and $(A_g - kC_\ast I)^{N}{\bf w}_g = 0$ for $1\leq N < m_{\lambda_g}$, then (\ref{Dg-I-P}) with $\lambda = kC_\ast$ implies $(Dg({\bf x}_\ast) - kC_\ast I)^N(I-P_\ast){\bf w}_g = 0$, which contradicts the assumption.
}
\end{proof}
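The realification described in Remark \ref{rem-complex-evec} is generic and can be illustrated with any real matrix possessing a complex eigenpair; the matrix below is an arbitrary illustration, unrelated to $Dg({\bf x}_\ast)$.

```python
import numpy as np

# Any real matrix with a complex-conjugate eigenvalue pair serves as an
# illustration; M below has spectrum {1 + 2i, 1 - 2i, 3}.
M = np.array([[1.0, -2.0, 0.0],
              [2.0,  1.0, 0.0],
              [0.0,  0.0, 3.0]])

vals, vecs = np.linalg.eig(M)
i = np.argmax(vals.imag)                   # pick the eigenvalue with Im > 0
lam, w = vals[i], vecs[:, i]

W = np.column_stack([w.real, w.imag])      # [Re w, Im w]
R = np.array([[lam.real, lam.imag],
              [-lam.imag, lam.real]])      # real rotation-scaling block

print(M @ W - W @ R)                       # = 0: the real form of the eigenpair
```

Splitting $M{\bf w} = \lambda{\bf w}$ into real and imaginary parts gives exactly the $2\times 2$ real block displayed in the remark.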
\KMf{We have unraveled the correspondence of eigenpairs between the matrices $Dg({\bf x}_\ast)$ and $A_g$.}
Next consider the relationship of eigenpairs between the matrices $A_g$ and $A$, \KMh{given in (\ref{Ag-QH}) and (\ref{blow-up-power-determining-matrix}), respectively.}
\par
\begin{prop}
\label{prop-correspondence-ev-Ag-A}
Let ${\bf x}_\ast\in \mathcal{E}$ be \KMf{an equilibrium on the horizon for $g$ and suppose that Assumption \ref{ass-f} holds}.
Also, let \KMf{$\lambda \in {\rm Spec}(A_g)$ and ${\bf u}\in \ker((A_g - \lambda I)^N)\setminus \ker((A_g - \lambda I)^{N-1})$ for some $N\in \mathbb{Z}_{\geq 1}$}, where ${\bf u}$ is linearly independent from ${\bf v}_{\ast, \alpha}$.
If
\begin{equation*}
\tilde \lambda:= r_{{\bf x}_\ast}^k \lambda,\quad {\bf U} := r_{{\bf x}_\ast}^{\Lambda_\alpha}{\bf u},
\end{equation*}
namely
\begin{equation*}
{\bf U} = (U_1,\ldots, U_n)^T,\quad U_i := r_{{\bf x}_\ast}^{\alpha_i}u_i,
\end{equation*}
then \KMf{$\tilde \lambda \in {\rm Spec}(A)$ and ${\bf U} \in \ker((A - \tilde \lambda I)^N)\setminus \ker((A - \tilde \lambda I)^{N-1})$}.
Conversely, \KMf{if $\tilde \lambda \in {\rm Spec}(A)$ and ${\bf U}\in \ker((A - \tilde \lambda I)^N)\setminus \ker((A - \tilde \lambda I)^{N-1})$ for some $N\in \mathbb{Z}_{\geq 1}$,
then the pair $\{\lambda, {\bf u}\}$ defined by
\begin{equation*}
\lambda:= r_{{\bf Y}_0}^{-k} \tilde \lambda,\quad {\bf u} := r_{{\bf Y}_0}^{-\Lambda_\alpha}{\bf U}
\end{equation*}
satisfy $\lambda \in {\rm Spec}(A_g)$ and ${\bf u} \in \ker((A_g - \KMg{\lambda} I)^N)\setminus \ker((A_g - \KMg{\lambda} I)^{N-1})$.
}
\end{prop}
\begin{proof}
\KMe{Similar to} the arguments in the proof of Theorem \ref{thm-ev-special}, it is sufficient to consider the case that $f({\bf y})$, equivalently $\tilde f({\bf x})$, is quasi-homogeneous.
\KMi{That is, $f({\bf y}) = f_{\alpha, k}({\bf y})$ and $\tilde f({\bf x}) = \tilde f_{\alpha, k}({\bf x})$,} which are assumed in the following arguments.
Recall that \KMf{an equilibrium on the horizon ${\bf x}_\ast$ with $C_\ast > 0$} and the corresponding root ${\bf Y}_0$ of the balance law satisfy
\begin{align}
\label{identity-balance-equilibrium}
&\KMh{{\bf x}_\ast = r_{{\bf Y}_0}^{-\Lambda_\alpha} {\bf Y}_0,\quad {\bf Y}_0 = r_{{\bf x}_\ast}^{\Lambda_\alpha}{\bf x}_\ast},\\
\notag
&r_{{\bf Y}_0} = p({\bf Y}_0) = r_{{\bf x}_\ast} \equiv (kC_\ast)^{-1/k} > 0\KMf{.}
\end{align}
Similar to the arguments in Lemma \ref{lem-identity-QHvf}, we have
\begin{equation}
\label{identity-QH-diff}
s^{\alpha_l}\frac{\partial f_i}{\partial x_l}( \KMh{ s^{\Lambda_\alpha}{\bf x}} ) = s^{k + \alpha_i}\frac{\partial f_i}{\partial x_l}( \KMh{{\bf x}} ),
\end{equation}
while the left-hand side coincides with
\begin{equation*}
s^{\alpha_l}\frac{\partial f_i}{\partial x_l}( \KMh{ s^{\Lambda_\alpha}{\bf x}} ) = \frac{\partial f_i}{\partial (s^{\alpha_l} x_l)}( \KMh{ s^{\Lambda_\alpha}{\bf x}} )\frac{\partial (s^{\alpha_l} x_l)}{\partial \KMh{x_l}} \equiv \frac{\partial f_i}{\partial X_l}( \KMh{{\bf X}} )\frac{\partial (s^{\alpha_l} x_l)}{\partial x_l},
\end{equation*}
introducing an auxiliary variable ${\bf X} = (X_1, \ldots, X_n)^T$, $X_i := s^{\alpha_i} x_i$ for some $s > 0$.
\KMi{Let $D_{\bf X}$ be the derivative with respect to the vector variable ${\bf X}$.}
Note that $D_{\bf X}\KMi{\tilde f}({\bf X})|_{{\bf X} = \bar {\bf x}} = D_{\bf x} \KMi{\tilde f}(\bar {\bf x})$ when \KMg{the variable ${\bf X}$ is set as ${\bf x}$} and that $D_{\bf X}\KMi{f}({\bf X})|_{{\bf X} = \bar {\bf Y}} = D_{\bf Y} \KMi{f}(\bar {\bf Y})$ when \KMg{the variable ${\bf X}$ is set as ${\bf Y}$}.
Using the fact that $\KMi{f}({\bf Y})$ and $\KMi{\tilde f}({\bf x})$ have the identical form, we have
\begin{align*}
\KMh{
D_{\bf Y}f({\bf Y}_0) r_{{\bf x}_\ast}^{\Lambda_\alpha} = r_{{\bf x}_\ast}^{kI + \Lambda_\alpha} D_{\bf x}\KMi{\tilde f}({\bf x}_\ast)
}
\end{align*}
\KMh{with} $s = r_{{\bf x}_\ast}$ and ${\bf x} = {\bf x}_\ast$ in (\ref{identity-QH-diff}) and the identity (\ref{identity-balance-equilibrium}).
That is,
\begin{equation}
\label{identity-QH-diff-2-matrix}
D_{\bf Y}\KMi{f}({\bf Y}_0) = \KMh{ r_{{\bf x}_\ast}^{kI + \Lambda_{\alpha}} } D_{\bf x}\KMi{\tilde f}({\bf x}_\ast) r_{{\bf x}_\ast}^{-\Lambda_{\alpha}}.
\end{equation}
Then we have
\begin{align*}
A &= -\frac{1}{k} \Lambda_\alpha + D_{{\bf Y}} f({\bf Y}_0)\quad \text{(from (\ref{blow-up-power-determining-matrix}))}\\
&= -r_{{\bf x}_\ast}^k C_\ast \Lambda_\alpha + D_{{\bf Y}} f({\bf Y}_0)\quad \text{(from (\ref{identity-balance-equilibrium}))}\\
&= -r_{{\bf x}_\ast}^k C_\ast \Lambda_\alpha + r_{{\bf x}_\ast}^{kI + \Lambda_{\alpha}} D_{\bf x}\tilde f({\bf x}_\ast) r_{{\bf x}_\ast}^{-\Lambda_{\alpha}}\quad \text{(from (\ref{identity-QH-diff-2-matrix}))}\\
&= r_{{\bf x}_\ast}^{kI + \Lambda_{\alpha}} \left( -C_\ast \Lambda_\alpha + D_{\bf x}\tilde f({\bf x}_\ast) \right)r_{{\bf x}_\ast}^{-\Lambda_{\alpha}}\\
&= r_{{\bf x}_\ast}^{kI + \Lambda_{\alpha}} A_g r_{{\bf x}_\ast}^{-\Lambda_{\alpha}} \quad \text{(from (\ref{Ag-QH}))}
\end{align*}
and hence
\begin{equation}
\label{conj-A-Ag}
A = \KMh{ r_{{\bf x}_\ast}^{kI+\Lambda_{\alpha}}} A_g r_{{\bf x}_\ast}^{-\Lambda_{\alpha}} \quad \Leftrightarrow \quad
A_g = \KMh{ r_{{\bf Y}_0}^{-(kI + \Lambda_{\alpha})}} A r_{{\bf Y}_0}^{\Lambda_{\alpha}},
\end{equation}
where we have used $r_{{\bf Y}_0} = r_{{\bf x}_\ast}$.
In particular, for any $\lambda\in \mathbb{C}$ and $N\in \KMi{\mathbb{N}}$ with the identity $\tilde \lambda = r_{{\bf x}_\ast}^k \lambda$, we have
\begin{equation}
\label{conj-A-Ag-2}
(A - \tilde \lambda I)^N = \KMh{ r_{{\bf x}_\ast}^{kN I + \Lambda_{\alpha}}} (A_g - \lambda I)^N r_{{\bf x}_\ast}^{-\Lambda_{\alpha}}\quad \Leftrightarrow \quad
(A_g - \lambda I)^N = \KMh{ r_{{\bf Y}_0}^{-(kN I +\Lambda_{\alpha})}} (A - \tilde \lambda I)^N r_{{\bf Y}_0}^{\Lambda_{\alpha}}.
\end{equation}
This identity directly yields our statements.
For example, let ${\bf u} = (u_1,\ldots, u_n)$ be an eigenvector of $A_g$ associated with an eigenvalue $\lambda$:
$A_g {\bf u} = \lambda {\bf u}$.
Then (\ref{conj-A-Ag}) yields
\begin{align*}
\lambda {\bf u} &= A_g {\bf u}
= \KMh{ r_{{\bf Y}_0}^{-(k I + \Lambda_{\alpha})}} A r_{{\bf Y}_0}^{\Lambda_{\alpha}} {\bf u}
= \KMh{ r_{{\bf Y}_0}^{-(k I + \Lambda_{\alpha})}} A {\bf U},
\end{align*}
and hence, with ${\bf U} := r_{{\bf Y}_0}^{\Lambda_\alpha} {\bf u}$,
\begin{equation*}
A{\bf U} = r_{{\bf Y}_0}^{k}\lambda {\bf U} = \tilde \lambda {\bf U}.
\end{equation*}
Repeating the same argument in the converse direction, assuming the eigenstructure $A{\bf U} = \tilde \lambda {\bf U}$, we see that an eigenpair $(\lambda, {\bf u})$ of $A_g$ is constructed from a given eigenpair $(\tilde \lambda, {\bf U})$ of $A$ through (\ref{identity-balance-equilibrium}) and (\ref{conj-A-Ag}).
The correspondence of generalized eigenvectors follows from similar arguments through (\ref{conj-A-Ag-2}).
\end{proof}
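The conjugacy (\ref{conj-A-Ag}) can be sanity-checked numerically. The following Python sketch uses hypothetical sample data (type $\alpha = (1,2)$, order $k+1 = 3$, scale $r$, and a sample matrix $A_g$; none of these values come from the text) and confirms that ${\rm Spec}(A) = r^{k}\,{\rm Spec}(A_g)$ for $A = r^{kI+\Lambda_\alpha} A_g r^{-\Lambda_\alpha}$:

```python
import math

# Hypothetical sample data (not from the text): type alpha = (1, 2),
# order k + 1 = 3, scale r > 0, and a sample 2x2 matrix A_g.
alpha, k, r = [1.0, 2.0], 2, 0.7
Ag = [[-1.0, 0.5],
      [ 0.3, 2.0]]

def matmul(X, Y):
    return [[sum(X[i][l]*Y[l][j] for l in range(2)) for j in range(2)]
            for i in range(2)]

def eig2(M):
    # eigenvalues of a real 2x2 matrix via its characteristic polynomial
    tr = M[0][0] + M[1][1]
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    d = math.sqrt(tr*tr - 4*det)   # real for this sample matrix
    return sorted([(tr - d)/2, (tr + d)/2])

# diagonal scalings r^{kI + Lambda_alpha} and r^{-Lambda_alpha}
L  = [[r**(k + alpha[0]), 0.0], [0.0, r**(k + alpha[1])]]
Ri = [[r**(-alpha[0]), 0.0], [0.0, r**(-alpha[1])]]
A = matmul(matmul(L, Ag), Ri)

lam, tlam = eig2(Ag), eig2(A)
ok = all(abs(t - r**k*l) < 1e-12 for t, l in zip(tlam, lam))
print(ok)
```

Since $r^{kI+\Lambda_\alpha} = r^{k}\,r^{\Lambda_\alpha}$ and diagonal scalings commute, $A$ is similar to $r^{k}A_g$, so each eigenvalue is scaled by $r^{k}$, matching $\tilde \lambda = r_{{\bf x}_\ast}^{k}\lambda$.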
\subsection{Complete correspondence of eigenstructures and another blow-up criterion}
\label{section-parameter-dep-Y}
Our results here determine the complete correspondence of eigenpairs between $A$ and $Dg({\bf x}_\ast)$.
In particular, the blow-up power eigenvalues determining the powers of $\theta(t)$ in asymptotic expansions of blow-up solutions are completely determined by ${\rm Spec}(Dg({\bf x}_\ast))$, and vice versa.
The complete correspondence of eigenstructures is obtained under a mild assumption on the corresponding matrices.
\begin{thm}
\label{thm-blow-up-estr}
Let ${\bf x}_\ast\in \mathcal{E}$ be an equilibrium on the horizon for $g$ which is mapped to a nonzero root ${\bf Y}_0$ of the balance law (\ref{0-balance}) through (\ref{x-to-C}) and (\ref{C-to-x}), and suppose that Assumption \ref{ass-f} holds.
When all the eigenpairs of the blow-up power-determining matrix $A$ associated with ${\bf Y}_0$
are determined, then all the eigenpairs of $Dg({\bf x}_\ast)$ are constructed through the correspondence listed in Table \ref{table-eigen1}.
Similarly, if all the eigenpairs of $Dg({\bf x}_\ast)$ are determined, then all the eigenpairs of $A$ are constructed through the correspondence listed in Table \ref{table-eigen2}.
\par
Moreover, the Jordan structures associated with corresponding eigenvalues, namely the number of Jordan blocks and their sizes, are identical, except for $kC_\ast \in {\rm Spec}(Dg({\bf x}_\ast))$, if it exists, and $1\in {\rm Spec}(A)$.
\end{thm}
\begin{proof}
The correspondences between {\em Common eigenvalue} and {\em Common eigenvector} in the tables follow from Theorems \ref{thm-ev1}, \ref{thm-ev-special} and \ref{thm-balance-1to1}, while the correspondences between {\em Remaining eigenvalue} and {\em Remaining (generalized) eigenvector} follow from Proposition \ref{prop-correspondence-ev-Ag-A} and Theorem \ref{thm-evec-Dg}.
If $kC_\ast \not \in {\rm Spec}(Dg({\bf x}_\ast))$ (in particular, $1\in {\rm Spec}(A)$ is simple from Theorem \ref{thm-ev1} and Proposition \ref{prop-correspondence-ev-Ag-A}), the number and size of Jordan blocks are identical by Proposition \ref{prop-correspondence-ev-Ag-A} and Theorem \ref{thm-evec-Dg}.
\end{proof}
\begin{table}[ht]\em
\centering
{
\begin{tabular}{cccc}
\hline
& $A$ & $Dg({\bf x}_\ast)$\\
\hline\\[-2mm]
Common eigenvalue & $1$ & $-C_\ast$\\ [1mm]
Common eigenvector & $\KMf{\Lambda_\alpha {\bf Y}_0}$ & \KMg{$\Lambda_\alpha r_{{\bf Y}_0}^{-\Lambda_\alpha} {\bf Y}_0$} \\ [1mm]
Remaining eigenvalue & $\tilde \lambda$ & $\lambda = r_{{\bf Y}_0}^{-k}\tilde \lambda$ \\ [1mm]
Remaining (generalized) eigenvector & \KMg{${\bf U}$} & \KMg{$(I-P_\ast) r_{{\bf Y}_0}^{-\Lambda_\alpha} {\bf U}$} \\ [1mm]
\hline
\end{tabular}
}
\caption{Correspondence of eigenstructures from $A$ to $Dg({\bf x}_\ast)$}
\flushleft
The constant $r_{{\bf Y}_0}$ is $p({\bf Y}_0)$.
Once a nonzero root ${\bf Y}_0$ of the balance law and the eigenpairs of $A$ are given, the corresponding equilibrium on the horizon ${\bf x}_\ast$ and all eigenpairs of $Dg({\bf x}_\ast)$ are constructed by the rules in the table.
\label{table-eigen1}
\end{table}
\begin{table}[ht]\em
\centering
{
\begin{tabular}{cccc}
\hline
& $Dg({\bf x}_\ast)$ & $A$\\
\hline\\[-2mm]
Common eigenvalue & $-C_\ast$ & $1$ \\ [1mm]
Common eigenvector & $\KMf{\Lambda_\alpha {\bf x}_\ast}$ & \KMg{$\Lambda_\alpha r_{{\bf x}_\ast}^{\Lambda_\alpha} {\bf x}_\ast$} \\ [1mm]
Remaining eigenvalue & $\lambda$ & $\tilde \lambda = r_{{\bf x}_\ast}^k\lambda$ \\ [1mm]
Remaining (generalized) eigenvector & $(I-P_\ast){\bf u}$ & $r_{{\bf x}_\ast}^{\Lambda_\alpha}\KMi{(A_g - kC_\ast I)}{\bf u}$ \\ [1mm]
\hline
\end{tabular}
}
\caption{Correspondence of eigenstructures from $Dg({\bf x}_\ast)$ to $A$}
\flushleft
The constant $r_{{\bf x}_\ast}$ is $(kC_\ast)^{-1/k}$, which is positive whenever ${\bf x}_\ast$ is hyperbolic by Corollary \ref{cor-Cast-pos}.
Once an equilibrium on the horizon ${\bf x}_\ast$ and the eigenpairs of $Dg({\bf x}_\ast)$ are given, the corresponding (nonzero) root ${\bf Y}_0$ of the balance law and all eigenpairs of $A$ are constructed by the rules in the table.
\label{table-eigen2}
\end{table}
The correspondence of {\em Common eigenvector}, together with the structure of $\mathcal{E}$ (and Remark \ref{rem-zero-comp-compactification} if necessary), yields that $\Lambda_\alpha {\bf Y}_0$ is not the zero vector.
As a byproduct of the above correspondence, another criterion for blow-ups is provided.
We have already reviewed in Section \ref{section-preliminary} that hyperbolic equilibria on the horizon for $g$ provide blow-up solutions.
In contrast, the following result provides a criterion for the existence of blow-ups which can be applied {\em without knowledge of desingularized vector fields}, while the correspondence to dynamics at infinity through $g$ is used indirectly.
\begin{thm}[Criterion of existence of blow-up from asymptotic expansions]
\label{thm-existence-blow-up}
Let ${\bf Y}_0$ be a nonzero root of the balance law (\ref{0-balance}).
Assume that the corresponding blow-up power-determining matrix $A$ associated with ${\bf Y}_0$ is hyperbolic, ${\rm Spec}(A) \cap i\mathbb{R}=\emptyset$, and that Assumption \ref{ass-f} holds.
Then (\ref{ODE-original}) possesses a blow-up solution ${\bf y}(t)$ with the asymptotic behavior $y_i(t) \sim Y_{0,i}\theta(t)^{-\alpha_i/k}$ as $t\to t_{\max} < \infty$, provided $Y_{0,i} \not = 0$.
\end{thm}
\begin{proof}
The eigenvalues ${\rm Spec}(A)$ of $A$ consist of $1$ and $n-1$ remaining eigenvalues, all of which have nonzero real parts by our assumption.
Because ${\bf Y}_0$ is nonzero, an equilibrium on the horizon ${\bf x}_\ast$ for $g$ is uniquely determined through the identity (\ref{C-to-x}).
Moreover, the constant $C_\ast$ is defined through the identity $r_{{\bf x}_\ast} = (kC_\ast)^{-1/k} = r_{{\bf Y}_0}$, namely $C_\ast = 1/(k r_{{\bf Y}_0}^{k}) > 0$ from Corollary \ref{cor-Cast-pos}.
The Jacobian matrix $Dg({\bf x}_\ast)$ has the eigenvalue $-C_\ast$ and $n-1$ remaining eigenvalues determined one-to-one by ${\rm Spec}(A)\setminus \{1\}$, all of which have nonzero real parts, thanks to the correspondence obtained in Theorem \ref{thm-blow-up-estr}.
In particular, ${\bf x}_\ast$ is a hyperbolic equilibrium on the horizon satisfying $W^s_{\rm loc}({\bf x}_\ast; g)\cap \mathcal{D} \not = \emptyset$ because $-C_\ast < 0$, and the associated eigenvector ${\bf v}_{\ast, \alpha}$ determines the distribution of $W^s_{\rm loc}({\bf x}_\ast; g)$ transversal to $\mathcal{E}$.
Then Theorem \ref{thm:blowup} shows that $t_{\max} < \infty$ for the corresponding solution with the asymptotic behavior $y_i(t) = O(\theta(t)^{-\alpha_i/k})$, as long as $x_{\ast, i}\not = 0$.
Therefore, the correspondence
\begin{equation*}
\frac{x_i(t)}{(1-p({\bf x}(t))^{2c})^{\alpha_i}} = \theta(t)^{-\alpha_i/k}Y_{0,i},\quad i=1,\ldots, n
\end{equation*}
provides the concrete form of the blow-up solution ${\bf y}(t)$ \KMf{whenever $Y_{0,i} \not = 0$}.
\end{proof}
We therefore conclude that {\em asymptotic expansions of blow-up solutions themselves provide a criterion of the existence of blow-up solutions}.
On the other hand, blow-up power eigenvalues do {\em not} extract exact dynamical properties around the corresponding blow-up solutions, as shown below.
\begin{thm}[Stability \KMf{gap}]
\label{thm-stability}
Let $f$ be an asymptotically quasi-homogeneous vector field of type $\alpha$ and order $k+1$ satisfying Assumption {\rm Re}f{ass-f}.
Let ${\bf x}_\ast$ be a hyperbolic equilibrium on the horizon for the desingularized vector field $g$ associated with $f$ \KMf{such that $W^s_{\rm loc}({\bf x}_\ast; g)\cap \mathcal{D}\not = \emptyset$}, and ${\bf Y}_0$ be the corresponding root of the balance law which is not identically zero.
If
\begin{align*}
m &:= \dim W^s_{\rm loc}({\bf x}_\ast; g),\quad m_A := \sharp \{\lambda\in {\rm Spec}(A) \mid {\rm Re}\,\lambda < 0\},
\end{align*}
then we have $m = m_A+1$.
\end{thm}
\begin{proof}
Theorem \ref{thm-evec-Dg} and Proposition \ref{prop-correspondence-ev-Ag-A} indicate that the $n-1$ remaining eigenvalues of $Dg({\bf x}_\ast)$ and of $A$ have real parts of identical signs.
The only difference comes from the eigenvalue $1$ of $A$, associated with the eigenvector ${\bf v}_{0,\alpha}$, and the eigenvalue $-C_\ast$ of $Dg({\bf x}_\ast)$, associated with the eigenvector ${\bf v}_{\ast, \alpha}$.
From the hyperbolicity of ${\bf x}_\ast$, the constant $C_\ast$ is positive by Corollary \ref{cor-Cast-pos}, and hence $1$ and $-C_\ast$ have mutually opposite signs, which shows $m=m_A+1$.
\end{proof}
\begin{rem}[Gap of dynamical information of blow-ups]
Theorem \ref{thm-stability} tells us how to interpret stability information of blow-up solutions.
We have two vector fields characterizing blow-up solutions: the desingularized vector field $g$ and (\ref{blow-up-basic}).
In both systems, the linear parts around steady states (equilibria on the horizon and roots of the balance law, respectively) characterize the local asymptotic behavior of blow-ups under mild assumptions.
More precisely, {\em stable} eigenvalues, in the sense that their real parts are negative, and the associated eigenspaces parameterize the asymptotic behavior.
However, the number of such eigenvalues is {\em different} between the two systems.
In the desingularized vector field $g$, all dynamical information for the original vector field $f$ is kept through compactifications and time-scale desingularizations, and hence all possible parameter dependence of blow-ups is extracted for a given equilibrium on the horizon.
The blow-up time $t_{\max}$ is regarded as a computable quantity given by solution trajectories for $g$, namely (\ref{blow-up-time}).
On the other hand, $t_{\max}$ is assumed to be fixed in (\ref{blow-up-basic}), and hence the dependence of $t_{\max}$ on initial points is neglected, which causes the gap in parameter dependence of blow-up solutions between the two systems.
Convergence of asymptotic series (Theorem 3.8 in \cite{asym1}) indicates that $t_{\max}$ expresses the gap in parameter dependence by means of stability information between the two systems.
In \cite{LMT2021}, the maximal existence time $t_{\max}$ as a function of initial points is calculated through the parameterization of invariant manifolds (cf. \cite{CFdlL2005}) with computer assistance, which constructs the foliation of $W^s_{\rm loc}({\bf x}_\ast)$ by means of level sets of $t_{\max}$.
This foliation can express the remaining parameter dependence of blow-ups.
\end{rem}
\section{Examples of asymptotic expansions revisited}
\label{section-examples}
Examples of asymptotic expansions of blow-up solutions shown in \cite{asym1} are revisited here, revealing the correspondence between the algebraic information describing asymptotic expansions and dynamics at infinity.
\subsection{One-dimensional ODEs}
\label{section-ex-1dim}
\subsubsection{A simple example}
The first example is
\begin{equation}
\label{ex1-1dim-1}
y' = -y + y^3\KMb{,\quad {}' = \frac{d}{dt}}.
\end{equation}
If the initial point $y(0) > 0$ is sufficiently large, the corresponding solution blows up in finite time.
In \cite{asym1}, the third-order asymptotic expansion of the type-I blow-up solution is derived as follows:
\begin{equation*}
y(t) \sim \frac{1}{\sqrt{2}}\theta(t)^{-1/2} + \frac{1}{2\sqrt{2}}\theta(t)^{1/2} +\frac{\sqrt{2}}{48}\theta(t)^{3/2}\quad \text{ as }\quad t\to t_{\max},
\end{equation*}
{\em assuming} its existence.
Here we pay attention to its existence through compactifications and the dynamical correspondence obtained in Section \ref{section-correspondence}.
\par
We now apply the (homogeneous) parabolic compactification and the time-scale desingularization to (\ref{ex1-1dim-1}).
First note that the ODE (\ref{ex1-1dim-1}) is asymptotically homogeneous (namely $\alpha = (1)$) of order $k+1 = 3$, in particular $k=2$.
The following parabolic compactification
\begin{equation*}
y = \frac{x}{1-x^2}
\end{equation*}
is therefore applied.
The equation (\ref{ex1-1dim-1}) is transformed into
\begin{equation*}
x' = \frac{- x(1-x^2)^2 + x^3}{(1-x^2)(1+x^2)}.
\end{equation*}
Now the following time-scale desingularization is introduced:
\begin{equation*}
\frac{d\tau}{dt} = \frac{2}{(1-x^2)^2(1+x^2)}.
\end{equation*}
The corresponding desingularized vector field is
\begin{equation}
\label{ex1-desing}
\frac{dx}{d\tau} = \frac{1}{2}(1-x^2)\left\{ - x(1-x^2)^2 + x^3\right\} = \frac{1}{2}x(1-x^2)\left\{ - 1+3x^2 - x^4 \right\}.
\end{equation}
Note that the horizon is $\{x=\pm 1\}$ and \KMb{that} the desingularized system admits equilibria on the horizon: $x=\pm 1$.
Our interest here is the blow-up solution associated with $x_\ast =1$.
The differential of the desingularized vector field at $x_\ast =1$ is
\begin{equation*}
\left[ \frac{1}{2}(1-x^2)\left\{ - 1+3x^2 - x^4 \right\} - x^2 \left\{ - 1+3x^2 - x^4 \right\} + \frac{1}{2}x(1-x^2)\left\{ 6x - 4x^3 \right\}\right]_{x=x_\ast} = -1,
\end{equation*}
indicating that $x_\ast =1$ is a hyperbolic sink for (\ref{ex1-desing}).
Theorem \ref{thm:blowup} yields that this sink induces a solution $y(t)$ for (\ref{ex1-1dim-1}) with large initial points blowing up at $t = t_{\max} < \infty$ with the blow-up rate
\begin{equation*}
y(t) = O( \theta(t)^{-1/2})\quad \text{ as } \quad t\to t_{\max},
\end{equation*}
and hence the existence of blow-ups mentioned in Assumption \ref{ass-fundamental} is verified rather than assumed.
The constant $C_\ast$ in (\ref{const-horizon}) is
\begin{equation*}
C_\ast = \left. x \left\{ - x(1-x^2)^2 + x^3\right\}\right|_{x = x_\ast } = 1,
\end{equation*}
which is consistent with Theorem \ref{thm-ev-special}.
Indeed, $-C_\ast = -1$ is the only eigenvalue of the Jacobian matrix $Dg(x_\ast)$.
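These one-dimensional computations are easy to confirm numerically. The following Python sketch, illustrative only, checks that $x_\ast = 1$ is an equilibrium of the desingularized vector field, that $Dg(x_\ast) \approx -1$ via a central difference, and that $C_\ast = 1$:

```python
def g(x):
    # desingularized vector field (ex1-desing)
    return 0.5*x*(1 - x**2)*(-1 + 3*x**2 - x**4)

x_ast, h = 1.0, 1e-6
dg = (g(x_ast + h) - g(x_ast - h))/(2*h)               # central difference
C_ast = x_ast*(-x_ast*(1 - x_ast**2)**2 + x_ast**3)    # constant C_* at x_ast

print(abs(g(x_ast)) < 1e-12, abs(dg + 1.0) < 1e-6, C_ast == 1.0)
```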
\par
Next, we \KMa{partially} review the asymptotic behavior of this blow-up solution to verify the dynamical correspondence.
Assume that
\begin{equation*}
y(t) = \theta(t)^{-1/2}Y(t),
\end{equation*}
which yields the following equation for $Y(t)$:
\begin{equation}
\label{system-asymptotic-1dim}
Y' = -Y + \theta(t)^{-1}\left\{ - \frac{1}{2}Y + Y^3\right\}.
\end{equation}
Under the asymptotic expansion of the positive blow-up solution:
\begin{equation*}
Y(t) = \sum_{n=0}^\infty Y_n(t)\quad \text{ with }\quad Y_n(t) \ll Y_{n-1}(t)\quad (t\to t_{\max}-0),\quad \lim_{t\to t_{\max}}Y(t)= Y_0 > 0,
\end{equation*}
the balance law requires $Y_0 = 1/\sqrt{2}$, which is the coefficient of the principal term of $y(t)$.
By definition of the functional $p({\bf y})$ in (\ref{func-p}), we have
\begin{equation*}
\frac{Y_0}{p(Y_0)} = \frac{1/\sqrt{2}}{((1/\sqrt{2})^2)^{1/2}} = 1 \equiv x_\ast,
\end{equation*}
while
\begin{equation*}
(kC_\ast)^{-1/k} x_\ast = 2^{-1/2} = \frac{1}{\sqrt{2}}\equiv Y_0.
\end{equation*}
Therefore the correspondence of roots in Theorem \ref{thm-balance-1to1} is verified.
The blow-up power-determining matrix at $Y_0$, coinciding with the blow-up power eigenvalue, is
\begin{equation*}
\left\{ - \frac{1}{2} + 3Y^2\right\}_{Y = Y_0} = 1,
\end{equation*}
which is consistent with Theorem \ref{thm-ev1}.
This eigenvalue does not contribute to $Y_n(t)$ with $n\geq 1$.
In particular, {\em blow-up power eigenvalues never contribute to determining the orders of $\theta(t)$ in asymptotic expansions of blow-up solutions for any one-dimensional ODEs}.
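The correspondences verified above amount to a handful of scalar identities; a minimal Python check:

```python
import math

k, C_ast = 2, 1.0                  # order k + 1 = 3; C_* computed earlier
Y0 = 1/math.sqrt(2)                # root of the balance law
p = lambda Y: (Y**2)**0.5          # functional p from (func-p), here (Y^2)^{1/2}

checks = [
    abs(-0.5*Y0 + Y0**3) < 1e-12,             # balance law
    abs((-0.5 + 3*Y0**2) - 1.0) < 1e-12,      # blow-up power eigenvalue = 1
    abs(Y0/p(Y0) - 1.0) < 1e-12,              # x_* = Y0 / p(Y0) = 1
    abs((k*C_ast)**(-1/k) - Y0) < 1e-12,      # Y0 = (k C_*)^{-1/k} x_* with x_* = 1
]
print(all(checks))
```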
\subsubsection{Ishiwata-Yazaki's example}
The next example concerns blow-up solutions of the following system:
\begin{equation}
\label{IY}
u' = a u^{\frac{a+1}{a}}v,\quad v' = a v^{\frac{a+1}{a}}u,
\end{equation}
where \KMg{$a\in (0,1)$} is a parameter.
\begin{rem}[cf. \cite{IY2003, Mat2019}]
\label{rem-IY}
\KMg{Consider} initial points $u(0), v(0) > 0$.
If $u(0) \not = v(0)$, then the solution $(u(t), v(t))$ blows up at $t=t_{\max} < \infty$ with the blow-up rate $O(\theta(t)^{-a})$.
On the other hand, if $u(0) = v(0)$, the solution $(u(t), v(t))$ blows up at $t=t_{\max} < \infty$ with the blow-up rate $O(\theta(t)^{-a/(a+1)})$.
\end{rem}
Introducing the first integral
\begin{equation*}
I = I(u, v) := v^{1-\frac{1}{a}} - u^{1-\frac{1}{a}},
\end{equation*}
the system (\ref{IY}) is reduced to the one-dimensional ODE
\begin{equation}
\label{IY-1dim}
u' = a u^{\frac{a+1}{a}}\left( u^{1 - \frac{1}{a}} + I \right)^{\frac{a}{a-1}}.
\end{equation}
Blow-up solutions of the rate $O(\theta(t)^{-a})$ correspond to $I \not = 0$, while those of the rate $O(\theta(t)^{-a/(a+1)})$ correspond to $I=0$.
We pay attention to the case $u(0) > v(0)$ when $I\not = 0$, in which case $I>0$ holds.
\par
First consider the case $I>0$, where the vector field (\ref{IY-1dim}) is asymptotically homogeneous of order $1+a^{-1}$.
Using the asymptotic expansion
\begin{equation*}
u(t) = \theta(t)^{-a}U(t) = \theta(t)^{-a}\left(\sum_{n=0}^\infty U_n(t)\right),\quad \lim_{t\to t_{\max}} U(t) = U_0,
\end{equation*}
the system becomes
\begin{align}
\notag
U' &= a\theta(t)^{-1} \left\{ -U + \KMf{ I^{\frac{a}{a-1}} } U^{\frac{a+1}{a}} \left( I^{-1} \theta(t)^{-(a-1)}U^{1 - \frac{1}{a}} + 1 \right)^{\frac{a}{a-1}}\right\}\\
\label{IY-system-U}
&= a\theta(t)^{-1} \left\{ -U + \KMf{ I^{\frac{a}{a-1}} } U^{\frac{a+1}{a}} \left( \sum_{k=0}^\infty \begin{pmatrix}
\frac{a}{a-1} \\ k
\end{pmatrix}\left( I^{-1}\theta(t)^{1-a}U^{\frac{a-1}{a}} \right)^k \right) \right\},
\end{align}
where
\begin{equation*}
\begin{pmatrix}
\frac{a}{a-1} \\ k
\end{pmatrix} = \frac{\left(\frac{a}{a-1}\right)_k}{k!},\quad
\left(\frac{a}{a-1}\right)_k = \frac{a}{a-1} \left(\frac{a}{a-1}-1\right)\left(\frac{a}{a-1}-2\right) \cdots \left(\frac{a}{a-1}-k+1\right).
\end{equation*}
The balance law then yields
\begin{equation*}
\KMa{-U_0 + I^{\frac{a}{a-1}} U_0^{\frac{a+1}{a}} = 0} \quad \Rightarrow \quad U_0 = I^{-a^2/(a-1)},
\end{equation*}
where the above choice of $U_0$ is consistent with the setting mentioned in Remark \ref{rem-IY}.
The corresponding blow-up power-determining matrix is
\begin{align*}
\frac{d}{dU}\left(a \left\{ -U + U^{\frac{a+1}{a}} I^{\frac{a}{a-1}} \right\} \right)_{U=U_0} &=a \left\{ -1 + \frac{a+1}{a}U_0^{\frac{1}{a}} I^{\frac{a}{a-1}} \right\} = 1,
\end{align*}
which is consistent with Theorem \ref{thm-ev1}.
\par
Similarly, the blow-up power eigenvalue in the case $I=0$ is confirmed to be consistent with Theorem \ref{thm-ev1}.
Details are shown in \cite{asym1}.
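A numerical spot check of the balance law root $U_0$ and the blow-up power eigenvalue, using hypothetical sample values of $a$ and $I$ (not tied to any specific setting in the text):

```python
# Hypothetical sample values (not from the text): a in (0, 1), I > 0
a, I = 0.4, 1.3
U0 = I**(-a**2/(a - 1))            # root of the balance law

# balance law: -U0 + I^{a/(a-1)} U0^{(a+1)/a} = 0
res = -U0 + I**(a/(a - 1))*U0**((a + 1)/a)
# blow-up power eigenvalue: a(-1 + ((a+1)/a) U0^{1/a} I^{a/(a-1)}) = 1
ev = a*(-1 + ((a + 1)/a)*U0**(1/a)*I**(a/(a - 1)))
print(abs(res) < 1e-12, abs(ev - 1.0) < 1e-12)
```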
\par
As a summary, we obtain the following result for asymptotic expansions of blow-up solutions.
\begin{align}
\label{IY-sol-Ipos}
u(t) &\sim I^{-a^2/(a-1)} \theta(t)^{-a} + I^{\frac{-2a^2+1}{a-1}}
\frac{a^2}{(1-a)(2-a)} \theta(t)^{1-2a},\quad
v(t) \sim I^{\frac{a}{a-1}} - I^{\frac{1}{a-1}-a} \frac{a}{1-a} \theta(t)^{1-a}
\end{align}
with $I> 0$, while
\begin{equation}
\label{IY-sol-I0}
u(t) = v(t) = \left( \frac{1}{a+1}\right)^{\frac{a}{a+1}}\theta(t)^{-a/(a+1)}
\end{equation}
with $I=0$ as $t\to t_{\max}-0$.
Note that the solution obtained through the above argument coincides with that obtained by the method of separation of variables in the reduced equation
\begin{equation*}
u' = au^{2+\frac{1}{a}}.
\end{equation*}
Details are summarized in \cite{asym1}.
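This coincidence reduces to two scalar identities, matching of exponents and of coefficients of $\theta(t)$, which the following Python sketch checks for a hypothetical sample value of $a$:

```python
# Hypothetical sample value of the parameter a in (0, 1)
a = 0.4
p = a/(a + 1)                 # u(t) ~ c * theta(t)^{-p} in (IY-sol-I0)
c = (1/(a + 1))**(a/(a + 1))  # coefficient of the principal term

# substituting u = c*theta^{-p} into u' = a u^{2+1/a} (note theta' = -1):
# exponents must satisfy p + 1 = p*(2 + 1/a), coefficients p*c = a*c^{2+1/a}
exp_ok = abs((p + 1) - p*(2 + 1/a)) < 1e-12
coef_ok = abs(p*c - a*c**(2 + 1/a)) < 1e-12
print(exp_ok, coef_ok)
```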
\par
\begin{rem}[Correspondence of parameter dependence]
The expansion (\ref{IY-sol-Ipos}) contains two free parameters: $t_{\max}$ and $I$.
This fact reflects the dynamical property that the solution (\ref{IY-sol-Ipos}) is induced by the hyperbolic sink on the horizon admitting the {\em two}-dimensional stable manifold (after further time-scale desingularizations), as observed in \cite{Mat2019}.
On the other hand, when $I=0$, $u(t) \equiv v(t)$ holds for $t\geq 0$ by the invariance of the first integral $I$.
The expansion (\ref{IY-sol-I0}) is parameterized by only {\em one} parameter $t_{\max}$, namely the initial points $u(0) = v(0)$.
This fact reflects the dynamical property that the solution (\ref{IY-sol-I0}) is induced by the hyperbolic saddle on the horizon admitting the {\em one}-dimensional stable manifold (\cite{Mat2019}).
The gaps in the number of free parameters are consistent with Theorem \ref{thm-stability}.
\end{rem}
\subsection{Two-phase flow model}
\label{section-ex-2phase}
The following system is reviewed next (see e.g. \cite{KSS2003, Mat2018} for the details of the system):
\begin{equation}
\label{two-fluid-1}
\begin{cases}
\beta' = vB_1(\beta) - c\beta - c_1, & \\
v' = v^2 B_2(\beta) - cv - c_2, &
\end{cases}\quad {}'=\frac{d}{dt},
\end{equation}
where
\begin{equation*}
B_1(\beta) = \frac{(\beta-\rho_1)(\beta-\rho_2)}{\beta},\quad B_2(\beta) = \frac{\beta^2- \rho_1\rho_2}{2\beta^2}
\end{equation*}
with $\rho_2 > \rho_1 > 0$,
\begin{equation*}
c = \frac{v_R B_1(\beta_R) - v_L B_1(\beta_L)}{\beta_R - \beta_L}
\end{equation*}
and $(c_1,c_2) = (c_{1L}, c_{2L})$ or $(c_{1R}, c_{2R})$, where
\begin{equation*}
\label{constants-two-phase}
\begin{cases}
c_{1L} = v_L B_1(\beta_L) - c\beta_L, & \\
c_{2L} = v_L^2 B_2(\beta_L) -cv_L, & \\
\end{cases}
\quad
\begin{cases}
c_{1R} = v_R B_1(\beta_R) - c\beta_R, & \\
c_{2R} = v_R^2 B_2(\beta_R) -cv_R. & \\
\end{cases}
\end{equation*}
Points $(\beta_L, v_L)$ and $(\beta_R, v_R)$ are given in advance.
The system (\ref{two-fluid-1}) is asymptotically quasi-homogeneous of type $(0,1)$ and order $2$.
Following arguments in \cite{Mat2018}, we observe that there is a blow-up solution with the asymptotic behavior
\begin{equation}
\label{two-fluid-2}
\beta(t) \sim \rho_2,\quad v(t)\sim V_0\theta(t)^{-1}\quad\text{ as }\quad t\to t_{\max}-0,
\end{equation}
which is consistent with arguments in \cite{KSS2003}.
In particular, type-I blow-up solutions are observed.
\begin{rem}
In \cite{Mat2018}, two hyperbolic saddles on the horizon for the desingularized vector field are observed.
One of these saddles admits the {\em $1$-dimensional stable manifold}, which associates a family of blow-up solutions of the above form.
Another saddle admits the {\em $1$-dimensional unstable manifold}, which associates a family of blow-up solutions of the similar form {\em with time reversing}.
\end{rem}
\par
Our main concern here is to derive the multi-order asymptotic expansion of the blow-up solution (\ref{two-fluid-2}) for (\ref{two-fluid-1}).
To this end, write the blow-up solution $(\beta(t), v(t))$ as follows:
\begin{align}
\notag
\beta(t) &= b(t),\quad v(t) = \theta(t)^{-1}V(t),\\
\label{form-b-V}
b(t) &= \sum_{n=0}^\infty b_n(t) \equiv b_0 + \tilde b(t),\quad b_0 = \rho_2,\quad b_n(t) \ll b_{n-1}(t)\quad (t\to t_{\max}-0),\quad n\geq 1,\\
\notag
V(t) &= \sum_{n=0}^\infty V_n(t)\equiv V_0 + \tilde V(t),\quad V_n(t) \ll V_{n-1}(t)\quad (t\to t_{\max}-0),\quad n\geq 1.
\end{align}
The balance law which $(b_0, V_0)$ satisfies can be easily derived.
Substituting the form (\ref{form-b-V}) into (\ref{two-fluid-1}), we have
\begin{align*}
\beta' &= b' \\
&= \theta(t)^{-1} V B_1(b) - cb - c_1,\\
v' &= \theta(t)^{-2} V + \theta(t)^{-1}V'\\
&= \theta(t)^{-2} V^2 B_2(b) - c\theta(t)^{-1}V - c_2.
\end{align*}
Dividing the first equation by $\theta(t)^{0} \equiv 1$ and the second equation by $\theta(t)^{-1}$, we have
\begin{align}
\label{2phase-asym-main}
\frac{d}{dt}\begin{pmatrix}
b \\
V
\end{pmatrix} = \theta(t)^{-1} \begin{pmatrix}
V B_1(b) \\
-V + V^2 B_2(b)
\end{pmatrix} - \begin{pmatrix}
cb + c_1 \\
cV + \theta(t) c_2
\end{pmatrix}.
\end{align}
The balance law is then
\begin{equation*}
\begin{pmatrix}
V_0 B_1(b_0) \\
-V_0 + V_0^2 B_2(b_0)
\end{pmatrix} = \begin{pmatrix}
0 \\
0
\end{pmatrix},
\end{equation*}
that is,
\begin{equation*}
V_0 \frac{(b_0-\rho_1)(b_0-\rho_2)}{b_0} = 0,\quad -V_0 + V_0^2 \frac{b_0^2- \rho_1\rho_2}{2b_0^2} = 0.
\end{equation*}
In the present case, $b_0 = \rho_2$ is already determined as the principal term of $b(t)$, which satisfies the first equation.
Substituting $b_0 = \rho_2$ into the second equation, we have
$V_0 = 2\rho_2 / (\rho_2- \rho_1)$, provided $V_0 \not =0$.
As a summary, the root of the balance law (under (\ref{two-fluid-2})) is uniquely determined by
\begin{equation}
\label{balance-two-phase}
(b_0, V_0) = \left(\rho_2, \frac{2\rho_2}{\rho_2- \rho_1}\right).
\end{equation}
Letting
\begin{equation*}
\KMg{\bar f}(b,V) := \begin{pmatrix}
V B_1(b) \\
-V + V^2 B_2(b)
\end{pmatrix} \KMg{ \equiv -\frac{1}{k}\Lambda_\alpha \begin{pmatrix}
b \\ V
\end{pmatrix}+ f_{\alpha, k}(b,V)},
\end{equation*}
we have
\begin{align*}
D\bar f(b_0, V_0) &= \begin{pmatrix}
V_0\frac{d}{d\beta}B_1(\beta)|_{\beta=b_0} & B_1(b_0)\\
V_0^2 \frac{d}{d\beta}B_2(\beta)|_{\beta=b_0} & -1 + 2V_0 B_2(b_0)\\
\end{pmatrix}
= \begin{pmatrix}
V_0(1 - \rho_1\rho_2b_0^{-2}) & \frac{(b_0-\rho_1)(b_0-\rho_2)}{b_0}\\
V_0^2 \rho_1\rho_2 b_0^{-3} & -1 + 2V_0 \frac{b_0^2- \rho_1\rho_2}{2b_0^2}\\
\end{pmatrix},
\end{align*}
which is the blow-up power-determining matrix associated with the blow-up solution (\ref{two-fluid-2}).
Using (\ref{balance-two-phase}), we have
\begin{align*}
A\equiv D\KMg{\bar f}(b_0, V_0) &= \begin{pmatrix}
\frac{2\rho_2}{\rho_2- \rho_1}(1 - \rho_1\rho_2^{-1}) & 0\\
( \frac{2\rho_2}{\rho_2- \rho_1})^2 \rho_1\rho_2 \rho_2^{-3} & -1 + 2 \frac{2\rho_2}{\rho_2- \rho_1} \frac{\rho_2- \rho_1}{2\rho_2}\\
\end{pmatrix}\\
&=\begin{pmatrix}
2 & 0\\
\frac{4\rho_1}{(\rho_2- \rho_1)^2} & 1\\
\end{pmatrix}.
\end{align*}
Indeed, $1$ is one of the eigenvalues.
The corresponding eigenvector is
\begin{align*}
\begin{pmatrix}
2 & 0\\
\frac{4\rho_1}{(\rho_2- \rho_1)^2} & 1\\
\end{pmatrix}\begin{pmatrix}
x_1 \\ x_2
\end{pmatrix} = \begin{pmatrix}
x_1 \\ x_2
\end{pmatrix} \quad & \Rightarrow \quad \begin{pmatrix}
x_1 \\ x_2
\end{pmatrix} = \begin{pmatrix}
0 \\ 1
\end{pmatrix}
\end{align*}
and hence the present argument is consistent with Theorem \ref{thm-ev1}.
Recall that the type $\alpha$ is now $(0,1)$.
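The root (\ref{balance-two-phase}) and the resulting matrix $A$ can be spot-checked numerically. The sketch below uses hypothetical sample densities $\rho_1, \rho_2$ with $\rho_2 > \rho_1 > 0$ (not taken from the text):

```python
# Hypothetical sample densities with rho2 > rho1 > 0 (not from the text)
rho1, rho2 = 1.0, 3.0
b0, V0 = rho2, 2*rho2/(rho2 - rho1)      # root (balance-two-phase)

B1 = lambda b: (b - rho1)*(b - rho2)/b
B2 = lambda b: (b*b - rho1*rho2)/(2*b*b)

# balance law at (b0, V0)
bal = abs(V0*B1(b0)) < 1e-12 and abs(-V0 + V0**2*B2(b0)) < 1e-12

# blow-up power-determining matrix A = D \bar f(b0, V0)
A = [[V0*(1 - rho1*rho2/b0**2), B1(b0)],
     [V0**2*rho1*rho2/b0**3,   -1 + 2*V0*B2(b0)]]
expected = [[2.0, 0.0],
            [4*rho1/(rho2 - rho1)**2, 1.0]]
match = all(abs(A[i][j] - expected[i][j]) < 1e-12
            for i in range(2) for j in range(2))

# eigenvalue 1 with eigenvector (0, 1), consistent with type alpha = (0, 1)
ev = abs(A[0][1]) < 1e-12 and abs(A[1][1] - 1.0) < 1e-12
print(bal, match, ev)
```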
\subsection{Andrews' system I}
\label{section-ex-Andrews1}
Consider the following system (cf. \cite{A2002, Mat2019})\footnote{
There is a typo in the system in \cite{Mat2019} (Equation (4.7)).
The correct form is (\ref{Andrews1}) below.
The rest of the arguments in \cite{Mat2019} is developed for the correct system (\ref{Andrews1}).
}:
\begin{equation}
\label{Andrews1}
\begin{cases}
\displaystyle{
\frac{du}{dt} = \frac{1}{\sin \theta} vu^2 - \frac{2a\cos\theta}{\sin \theta}u^3
}
, & \\
\displaystyle{
\frac{dv}{dt} = \frac{a}{\sin \theta} uv^2 + \frac{1-a}{\sin \theta} \frac{uv^3}{v + 2\cos\theta u}
}, &
\end{cases}
\end{equation}
where $a\in (0,1/2)$ and $\theta\in (0,\pi/2)$ are constant parameters, according to our interests (\cite{asym1}).
We easily see that the system (\ref{Andrews1}) is homogeneous of order $3$, in particular $k=2$, and arguments in \cite{Mat2019} show that the first term of stationary blow-up solutions has the form
\begin{equation*}
u(t) = O(\theta(t)^{-1/2}),\quad v(t) = O(\theta(t)^{-1/2}).
\end{equation*}
Introducing
\begin{equation*}
w = u\cos\theta,\quad s = \frac{t}{\sin\theta \cos\theta},
\end{equation*}
the vector field (\ref{Andrews1}) is then transformed into
\begin{equation}
\frac{dw}{ds} = w^2 (v - 2a w),\quad \frac{dv}{ds} = wv^2 \frac{v + 2aw}{v+2w},
\end{equation}
which is independent of $\theta$.
Because $\theta \in (0,\pi/2)$, we have $\sin \theta > 0$ and $\cos \theta > 0$, and hence the dynamics of $(u,v)$ in the $t$-time scale and of $(w, v)$ in the $s$-time scale are mutually smoothly equivalent.
\par
The following asymptotic expansions of blow-up solutions are considered:
\begin{equation*}
w(s) = \tilde \theta(s)^{-1/2}\sum_{n=0}^\infty W_n(s),\quad v(s) = \tilde \theta(s)^{-1/2}\sum_{n=0}^\infty V_n(s), \quad \tilde \theta(s) = s_{\max}-s = \frac{2}{\sin(2\theta)}\theta(t).
\end{equation*}
\KMa{In particular, we pay attention to solutions with {\em positive} initial points according to arguments in \cite{asym1}.}
The balance law for the solution under the assumption $W_0, V_0 \not = 0$ is
\begin{equation*}
\frac{1}{2} = W_0 V_0 - 2aW_0^2,\quad
\frac{1}{2} = W_0 V_0\frac{V_0 + 2aW_0}{V_0 + 2W_0}.
\end{equation*}
We therefore have
\begin{equation}
\label{balance-Andrews-result}
W_0 = \pm \sqrt{\frac{1-2a}{8a^2}},\quad V_0 = \frac{2a}{1-2a}W_0 = \pm \sqrt{\frac{1}{2(1-2a)}},
\end{equation}
where the positive roots are chosen according to our setting.
Moreover, the root $V_0$ is real thanks to the assumption $a\in (0,1/2)$.
\par
As seen in \cite{asym1}, the blow-up power-determining matrix is $A = -\frac{1}{2} I_2 + C(W_0, V_0; a)$, where
\begin{align*}
&C(W_0, V_0; a)
=\begin{pmatrix}
2V_0W_0 - 6aW_0^2 & W_0^2 \\
V_0^2 \left\{ \frac{V_0+2aW_0}{V_0+2W_0} - W_0 \frac{2(1-a)V_0}{(V_0+2W_0)^2} \right\} & V_0W_0 \left\{ 2 \frac{V_0+2aW_0}{V_0+2W_0} + V_0 \frac{2(1-a)W_0}{(V_0+2W_0)^2} \right\} \\
\end{pmatrix}.
\end{align*}
In \cite{asym1}, we have simplified the matrix into
\begin{align*}
C(W_0, V_0; a) &=
\begin{pmatrix}
\frac{3}{2} - \frac{1}{4a} & W_0^2 \\
\frac{a}{1-a} V_0^2 & \frac{3}{2} - \frac{1}{4(1-a)}
\end{pmatrix}
\end{align*}
after lengthy calculations, and the eigenpairs of $A$ are calculated as follows:
\begin{align*}
\left\{1, \begin{pmatrix}
\sqrt{\frac{1-2a}{8a^2}}\\
\sqrt{\frac{1}{2(1-2a)}}
\end{pmatrix}\right\},\quad
\left\{ 1 - \frac{1}{4a(1-a)}, \begin{pmatrix}
\sqrt{\frac{1-2a}{8a^2}}\\
-\frac{a}{1-a}\sqrt{\frac{1}{2(1-2a)}}
\end{pmatrix}
\right\}.
\end{align*}
The first eigenpair is indeed consistent with Theorem \ref{thm-ev1}.
Compare with (\ref{balance-Andrews-result}).
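The eigenvalues and the first eigenpair can be checked mechanically; the sketch below (an editorial addition assuming \texttt{numpy}, with the arbitrary sample value $a = 1/4$) verifies them for $A = -\frac{1}{2}I_2 + C(W_0, V_0; a)$:

```python
import numpy as np

# Numerical check of the blow-up power eigenpairs for a sample
# parameter a = 1/4 (arbitrary choice with 0 < a < 1/2).
a = 0.25
W0 = np.sqrt((1 - 2*a)/(8*a**2))
V0 = np.sqrt(1/(2*(1 - 2*a)))
C = np.array([[1.5 - 1/(4*a),   W0**2],
              [a/(1 - a)*V0**2, 1.5 - 1/(4*(1 - a))]])
A = -0.5*np.eye(2) + C

eigvals = np.sort(np.linalg.eigvals(A).real)
assert np.allclose(eigvals, np.sort([1.0, 1 - 1/(4*a*(1 - a))]))

v = np.array([W0, V0])          # first eigenvector (W0, V0)^T
assert np.allclose(A @ v, v)    # eigenvalue 1, consistent with Theorem thm-ev1
```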
\subsection{Andrews' system II}
\label{section-ex-Andrews2}
The next example is the following system:
\begin{align}
\label{Andrews2}
\begin{cases}
u' = u^2 (2av - bu),
\\
v' = bu v^2
\end{cases}
\end{align}
with parameters $a, b$ satisfying $a > 0$ and $2a > b > 0$.
Our interest here is the asymptotic expansion of blow-up solutions with $u(0),v(0)>0$.
As in previous examples, we introduce
\begin{equation}
\label{trans-Andrews2}
\tilde u = \frac{u}{\sqrt{b}},\quad \tilde v = \frac{v}{\sqrt{b}},\quad \KMg{s = b^2 t},\quad 2a = b(1+\sigma)
\end{equation}
with an auxiliary parameter $\sigma$, which transform (\ref{Andrews2}) into
\begin{align}
\label{Andrews2-transformed}
\frac{d\tilde u}{ds} = \tilde u^2 \left\{ (1+\sigma) \tilde v - \tilde u \right\},\quad \frac{d\tilde v}{ds} = \tilde u \tilde v^2.
\end{align}
In particular, the system becomes a {\em one}-parameter family.
Our interest here is then the blow-up solution $(\tilde u(s), \tilde v(s))$ with the following blow-up rate
\begin{equation*}
\tilde u(s) = O(\tilde \theta(s)^{-1/2}),\quad \tilde v(s) = O(\tilde \theta(s)^{-1/2}),\quad \tilde \theta(s) = s_{\max}-s = \KMg{b^2}\theta(t).
\end{equation*}
Expand the solution $(\tilde u(s), \tilde v(s))$ as the asymptotic series
\begin{align}
\notag
\tilde u(s) &= \tilde \theta(s)^{-1/2}U(s) \equiv \tilde \theta(s)^{-1/2}\sum_{n=0}^\infty U_n(s),\quad U_n(s) \ll U_{n-1}(s),\quad \lim_{s\to s_{\max}}U_n(s) = U_0, \\
\label{series-Andrews2}
\tilde v(s) &= \tilde \theta(s)^{-1/2}V(s) \equiv \tilde \theta(s)^{-1/2}\sum_{n=0}^\infty V_n(s),\quad V_n(s) \ll V_{n-1}(s), \quad \lim_{s\to s_{\max}}V_n(s) = V_0.
\end{align}
Substituting
\eqref{series-Andrews2} into \eqref{Andrews2-transformed}, we have
\begin{align}
\label{asymptotic-system-Andrews2}
\frac{dU}{ds} = \tilde \theta(s)^{-1} \left\{ -\frac{1}{2}U + U^2\{ (1+\sigma) V - U\} \right\},\quad
\frac{dV}{ds} = \tilde \theta(s)^{-1} \left\{ -\frac{1}{2}V + UV^2\right\}.
\end{align}
The balance law under $(U_0, V_0)\not = (0, 0)$ requires
\begin{align*}
-\frac{1}{2} + U_0 \{(1+\sigma)V_0 - U_0 \} = 0,\quad -\frac{1}{2} + U_0V_0 = 0\KMg{,}
\end{align*}
and hence
\begin{equation}
\label{identity-Andrews2}
1 = 2U_0\{(1+\sigma) V_0 - U_0 \},\quad 1 = 2U_0V_0\KMg{,}
\end{equation}
which are used below.
These identities yield the following consequence:
\begin{equation}
\label{balance-Andrews2}
U_0 = \sqrt{\frac{\sigma}{2}},\quad V_0 = \frac{1}{\sqrt{2\sigma}}\KMg{,}
\end{equation}
and we \KMg{have} the first order asymptotic expansion of blow-up solutions:
\begin{equation*}
\tilde u(s)\sim \sqrt{\frac{\sigma}{2}} \tilde \theta(s)^{-1/2},\quad
\tilde v(s)\sim \frac{1}{\sqrt{2\sigma}} \tilde \theta(s)^{-1/2}
\end{equation*}
as $s\to s_{\max}$.
\par
The blow-up power-determining matrix is $A = -\frac{1}{2}I_2 +D(U_0, V_0; \sigma)$, where
\begin{align*}
&D(U_0, V_0; \sigma) =
\begin{pmatrix}
2(1+\sigma)UV - 3U^2 & (1+\sigma)U^2 \\
V^2 & 2UV
\end{pmatrix}_{(U,V) = (U_0, V_0)} =
\begin{pmatrix}
1 - \frac{\sigma}{2} & \frac{\sigma(1+\sigma)}{2} \\
\frac{1}{2\sigma} & 1
\end{pmatrix}
\end{align*}
under the identity (\ref{identity-Andrews2}).
The blow-up power eigenvalues are
\begin{equation*}
\lambda = 1,\quad -\frac{\sigma}{2}.
\end{equation*}
\KMa{
The eigenvector associated with $\lambda = 1$ is
\begin{equation*}
\begin{pmatrix}
\frac{1}{2} - \frac{\sigma}{2} & \frac{\sigma(1+\sigma)}{2} \\
\frac{1}{2\sigma} & \frac{1}{2}
\end{pmatrix}
\begin{pmatrix}
x_1 \\
x_2
\end{pmatrix}
= \begin{pmatrix}
x_1 \\
x_2
\end{pmatrix} \quad \Rightarrow \quad
\begin{pmatrix}
x_1 \\
x_2
\end{pmatrix} = \begin{pmatrix}
\sigma \\ 1
\end{pmatrix}.
\end{equation*}
Therefore the existence of the eigenpair associated with the eigenvalue $\lambda = 1$ is consistent with Theorem \ref{thm-ev1} (see also (\ref{balance-Andrews2})).
}
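The identities (\ref{identity-Andrews2}), the roots (\ref{balance-Andrews2}) and the eigenpair for $\lambda = 1$ admit a direct numerical check; the following sketch is an editorial addition assuming \texttt{numpy}, with the arbitrary sample value $\sigma = 1$:

```python
import numpy as np

# Check of the balance law and the eigenpair for lambda = 1
# for the arbitrary sample value sigma = 1.
sigma = 1.0
U0 = np.sqrt(sigma/2)
V0 = 1/np.sqrt(2*sigma)
assert abs(2*U0*((1 + sigma)*V0 - U0) - 1) < 1e-12
assert abs(2*U0*V0 - 1) < 1e-12

A = -0.5*np.eye(2) + np.array([[1 - sigma/2, sigma*(1 + sigma)/2],
                               [1/(2*sigma), 1.0]])
eigvals = np.sort(np.linalg.eigvals(A).real)
assert np.allclose(eigvals, np.sort([1.0, -sigma/2]))
assert np.allclose(A @ np.array([sigma, 1.0]), np.array([sigma, 1.0]))
```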
\subsection{Keyfitz-Kranser-type system}
\label{section-ex-KK}
The next example is a $2$-dimensional system
\begin{equation}
\label{KK}
u' = u^2 - v, \quad v' = \frac{1}{3} u^3 - u\KMg{,}
\end{equation}
originating from the Keyfitz-Kranser system \cite{KK1990}, a system of conservation laws admitting a singular shock.
The system is asymptotically quasi-homogeneous of type $\alpha=(\alpha_1, \alpha_2)=(1,2)$ and order $k+1 = 2$, consisting of the quasi-homogeneous part $f_{\alpha, k}$ and the lower-order part $f_{\rm res}$ given as follows:
\begin{equation}
f_{\alpha, k}(u,v) = \begin{pmatrix}
u^2 - v\\
\frac{1}{3} u^3
\end{pmatrix}, \quad f_{\mathrm{res}}(u,v) = \begin{pmatrix}
0\\
-u
\end{pmatrix}.
\end{equation}
It is proved in \cite{Mat2018} that the system (\ref{KK}) admits the following solutions blowing up as $t \to t_{\max} - 0$ associated with {\em two} different equilibria on the horizon:
\begin{equation}
u(t) = O(\theta(t)^{-1}), \quad v(t) = O (\theta(t)^{-2}), \quad \text{as} \quad t \to t_{\max} - 0.
\end{equation}
In \cite{asym1}, the quasi-homogeneous part $f_{\alpha, k}$ and the full system (\ref{KK}) are investigated individually.
However, the balance law and the associated blow-up power-determining matrices and their eigenvalues are identical, because the quasi-homogeneous parts of the vector fields coincide for the two systems.
\par
Expand the solution $(u(t),v(t))$ as the asymptotic series
\begin{align}
\notag
u(t) &= \theta(t)^{-1}U(t) \equiv \theta(t)^{-1}\sum_{n=0}^\infty U_n(t),\quad U_n(t) \ll U_{n-1}(t),\quad \lim_{t\to t_{\max}}U_n(t) = U_0, \\
\label{series-KK}
v(t) &= \theta(t)^{-2}V(t) \equiv \theta(t)^{-2}\sum_{n=0}^\infty V_n(t),\quad V_n(t) \ll V_{n-1}(t), \quad \lim_{t\to t_{\max}}V_n(t) = V_0.
\end{align}
The balance law for (\ref{KK}) is
\begin{equation*}
U_0 = U_0^2 - V_0,\quad 2V_0 = \frac{1}{3} U_0^3,
\end{equation*}
which yields
\begin{equation}
\label{balance-KK}
U_0 = 3 \pm \sqrt{3}, \quad V_0 = \frac{1}{6} U_0^3.
\end{equation}
In particular, we have two different solutions of the balance law, which correspond to different equilibria on the horizon inducing blow-up solutions in the forward time direction, as mentioned above.
Depending on the choice of $U_0$, the eigenstructure of the blow-up power-determining matrix \KMg{changes}.
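The two roots can be checked directly; the following short sketch (an editorial addition) verifies that $U_0 = 3 \pm \sqrt{3}$ with $V_0 = U_0^3/6$ satisfies the balance law:

```python
# Check that U0 = 3 +/- sqrt(3), V0 = U0^3/6 solve the balance law
# U0 = U0^2 - V0, 2 V0 = U0^3 / 3.
for U0 in (3 - 3**0.5, 3 + 3**0.5):
    V0 = U0**3/6
    assert abs(U0 - (U0**2 - V0)) < 1e-10
    assert abs(2*V0 - U0**3/3) < 1e-10
```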
\begin{description}
\item[Case 1. $U_0 = 3-\sqrt{3}$.]
\end{description}
In this case, the blow-up power-determining matrix $A$ is
\begin{equation*}
A = \begin{pmatrix}
5 - 2\sqrt{3} & -1 \\
12 - 6\sqrt{3} & -2
\end{pmatrix}.
\end{equation*}
The associated eigenpairs are
\begin{equation*}
\left\{1, \begin{pmatrix}
1 \\ 4-2\sqrt{3}
\end{pmatrix} \right\},\quad \left\{2-2\sqrt{3}, \begin{pmatrix}
1 \\ 3
\end{pmatrix} \right\}.
\end{equation*}
Note that the eigenvector $(1, 4-2\sqrt{3})^T$ associated with $\lambda = 1$ is consistent with Theorem \ref{thm-ev1}.
\begin{description}
\item[Case 2. $U_0 = 3+\sqrt{3}$.]
\end{description}
In this case, the blow-up power-determining matrix $A$ is
\begin{equation*}
A = \begin{pmatrix}
5 + 2\sqrt{3} & -1 \\
12 + 6\sqrt{3} & -2
\end{pmatrix}.
\end{equation*}
The associated eigenpairs are
\begin{equation*}
\left\{1, \begin{pmatrix}
1 \\ 4+2\sqrt{3}
\end{pmatrix} \right\},\quad \left\{2+2\sqrt{3}, \begin{pmatrix}
1 \\ 3
\end{pmatrix} \right\}.
\end{equation*}
Note that the eigenvector $(1, 4+2\sqrt{3})^T$ associated with $\lambda = 1$ is consistent with Theorem \ref{thm-ev1}.
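Both eigenstructures above admit a mechanical verification; the following sketch is an editorial addition assuming \texttt{numpy}:

```python
import numpy as np

# Check of the eigenpairs of A in Cases 1 (sgn = -1) and 2 (sgn = +1).
r3 = np.sqrt(3)
for sgn in (-1, +1):
    A = np.array([[5 + 2*sgn*r3, -1.0],
                  [12 + 6*sgn*r3, -2.0]])
    v1 = np.array([1.0, 4 + 2*sgn*r3])
    v2 = np.array([1.0, 3.0])
    assert np.allclose(A @ v1, v1)                 # eigenvalue 1
    assert np.allclose(A @ v2, (2 + 2*sgn*r3)*v2)  # eigenvalue 2 -/+ 2 sqrt(3)
```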
\subsubsection{Correspondence to desingularized vector fields: numerical study}
We shall investigate how the information obtained above corresponds to dynamical information in the desingularized vector field, confirming the results obtained in Section \ref{section-correspondence}.
The desingularized vector field associated with (\ref{KK}) is (cf. \cite{LMT2021, MT2020_1})
\begin{align}
\notag
\frac{dx_1}{d\tau} &= \frac{1}{4}\left(1 + 3p({\bf x})^4\right)(x_1^2 - x_2) - x_1G({\bf x}),\\
\label{KK-desing}
\frac{dx_2}{d\tau} &= \frac{1}{4}\left(1 + 3p({\bf x})^4\right)\left(\frac{1}{3}x_1^3 - (1-p({\bf x})^4)^2 x_1 \right) - 2x_2G({\bf x}),\\
\notag
p({\bf x}) &= (x_1^4 + x_2^2)^{1/4},\\
\notag
G({\bf x}) &= x_1^3(x_1^2 - x_2) + \frac{x_2}{2}\left(\frac{1}{3}x_1^3 - (1-p({\bf x})^4)^2 x_1 \right).
\end{align}
Note that the vector field $\frac{d{\bf x}}{d\tau} = g({\bf x})$ is a polynomial of degree $13$, while the original vector field $f$ has degree at most $3$.
Our interest here lies in the following two equilibria on the horizon\footnote{
It is reported in \cite{Mat2018} that (\ref{KK-desing}) admits four equilibria on the horizon.
The remaining two equilibria induce blow-up solutions {\em in the reverse time direction}.
In \cite{LMT2021}, {\em computer-assisted proofs}, equivalently {\em rigorous numerics}, are applied to prove the existence of the true \KMb{sink} $p_{\infty}^+$ and the true \KMb{saddle} $p_{\infty, s}^{+}$.
}:
\begin{equation*}
p_\infty^+ \approx (0.989136995894978, 0.206758557005181),\quad
p_{\infty,s}^+ \approx (0.88610812897803, 0.61925794892101).
\end{equation*}
The values $C_\ast$ corresponding to these equilibria are
\begin{equation*}
C_\ast^+ \equiv C_\ast(p_\infty^+) \approx 0.780107753370182,\quad C_{\ast,s}^+ \equiv C_\ast(p_{\infty,s}^+) \approx 0.187256681090721.
\end{equation*}
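As an editorial sanity check, one can confirm numerically that the two reported points lie on the horizon and that the reported $C_\ast$ values coincide with $G({\bf x})$ evaluated there (consistent with (\ref{const-horizon})); the sketch assumes \texttt{numpy}:

```python
import numpy as np

def G(x):
    # G as in (KK-desing)
    x1, x2 = x
    p4 = x1**4 + x2**2
    return x1**3*(x1**2 - x2) + 0.5*x2*(x1**3/3 - (1 - p4)**2*x1)

p_plus = np.array([0.989136995894978, 0.206758557005181])
p_saddle = np.array([0.88610812897803, 0.61925794892101])

for x, C in [(p_plus, 0.780107753370182), (p_saddle, 0.187256681090721)]:
    assert abs(x[0]**4 + x[1]**2 - 1) < 1e-9   # the point lies on the horizon
    assert abs(G(x) - C) < 1e-9                # C_* coincides with G there
```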
The parameter dependence of the blow-up power eigenvalues differs between the two choices of $U_0$.
This difference reflects the dynamical properties of the corresponding equilibria on the horizon, as seen below and in preceding works.
One root of the balance law is ${\bf Y}_0\equiv (U_0, V_0) = (3-\sqrt{3}, 9-5\sqrt{3})$.
The corresponding scale parameter $r_{{\bf Y}_0}\equiv r_{{\bf Y}_0}^-$ is
\begin{align*}
r_{{\bf Y}_0}^- \equiv p({\bf Y}_0) &= \left(U_0^4 + V_0^2\right)^{1/4}\\
&= \left\{ (3-\sqrt{3})^4 + (9-5\sqrt{3})^2 \right\}^{1/4}\\
&= (408 - 234\sqrt{3})^{1/4}\\
&\approx \left\{2.70011102888\cdots \right\}^{1/4}\\
&\approx 1.2818741971,
\end{align*}
\KMa{where the functional $p({\bf y})$ is given in (\ref{func-p}).}
It follows from direct calculation that
\begin{equation*}
r_{{\bf Y}_0}^- \approx \frac{1}{0.780107753370182},
\end{equation*}
which agrees with $1/C_\ast^+ = r_{p_{\infty}^+}$\KMg{($= r_{{\bf x}_\ast}$ in Theorem \ref{thm-balance-1to1})}, and implies the identity $r_{{\bf Y}_0}^- = r_{p_{\infty}^+}$ with $k=1$ mentioned in Theorem \ref{thm-balance-1to1}.
From (\ref{C-to-x}), \KMg{we have}
\begin{align}
\notag
(x_{\ast,1}, x_{\ast,2}) &\equiv \left(\frac{U_0}{\KMg{r_{{\bf Y}_0}^-}}, \frac{V_0}{\KMg{(r_{{\bf Y}_0}^-)^2}}\right) \\
\notag
&= \left(\frac{3-\sqrt{3}}{(408 - 234\sqrt{3})^{1/4}}, \frac{9 - 5\sqrt{3}}{(408 - 234\sqrt{3})^{1/2}}\right) \\
\label{sink-KK-validated}
&\approx \left(0.98913699589, 0.206758557\right),
\end{align}
which is indeed an equilibrium on the horizon $p_{\infty}^{+}$ for (\ref{KK-desing}).
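The arithmetic above can be reproduced in a few lines; the following editorial sketch (assuming \texttt{numpy}) checks $r_{{\bf Y}_0}^-$ and the coordinates obtained from (\ref{C-to-x}):

```python
import numpy as np

# Check of r_{Y0}^- and of (C-to-x) for Y0 = (3 - sqrt(3), 9 - 5 sqrt(3)).
r3 = np.sqrt(3)
U0, V0 = 3 - r3, 9 - 5*r3
r = (U0**4 + V0**2)**0.25

assert abs(U0**4 + V0**2 - (408 - 234*r3)) < 1e-9
assert abs(r*0.780107753370182 - 1) < 1e-10       # r = 1/C_*^+

x_star = np.array([U0/r, V0/r**2])
p_plus = np.array([0.989136995894978, 0.206758557005181])
assert np.allclose(x_star, p_plus, atol=1e-9)     # x_star equals p_inf^+
```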
\par
Similarly, consider another root of the balance law ${\bf Y}_0\equiv (U_0, V_0) = (3+\sqrt{3}, 9+5\sqrt{3})$.
The corresponding scale parameter $r_{{\bf Y}_0}\equiv r_{{\bf Y}_0}^+$ is
\begin{align*}
r_{{\bf Y}_0}^+ &= \left(U_0^4 + V_0^2\right)^{1/4}\\
&= \left\{ (3+\sqrt{3})^4 + (9+5\sqrt{3})^2 \right\}^{1/4}\\
&= \left\{ (12 + 6\sqrt{3})^2 + (9+ 5\sqrt{3})^2 \right\}^{1/4}\\
&= \left\{ (144 + 108 + 144\sqrt{3}) + (81 + 75 + 90\sqrt{3}) \right\}^{1/4}\\
&= (408 + 234\sqrt{3})^{1/4}\\
&\approx 5.34026339768.
\end{align*}
It follows from direct \KMg{calculations} that
\begin{equation*}
r_{{\bf Y}_0}^+ \approx \frac{1}{0.187256681090721},
\end{equation*}
which agrees with $1/C_{\ast,s}^+ = r_{p_{\infty,s}^+}$, and implies the identity $r_{{\bf Y}_0}^+ = r_{p_{\infty,s}^+}$ with $k=1$ similar to the case \KMg{of} $p_{\infty}^+$.
From (\ref{C-to-x}),
\begin{align}
\notag
(x_{\ast,1}, x_{\ast,2}) &\equiv \left(\frac{U_0}{\KMg{r_{{\bf Y}_0}^+}}, \frac{V_0}{\KMg{(r_{{\bf Y}_0}^+)^2}}\right) \\
\notag
&= \left(\frac{3+\sqrt{3}}{(408 + 234\sqrt{3})^{1/4}}, \frac{9+5\sqrt{3}}{(408 + 234\sqrt{3})^{1/2}}\right) \\
\notag
&\approx \left(\frac{4.73205080757}{5.34026339768}, \frac{17.6602540378}{28.5184131566}\right)\\
\label{saddle-KK-validated}
&\approx \left(0.88610812897, 0.61925794892\right),
\end{align}
which is indeed an equilibrium on the horizon $p_{\infty,s}^{+}$ for (\ref{KK-desing}).
\par
Next, the eigenvalues of the Jacobian matrices $Dg(p_{\infty}^+)$ and $Dg(p_{\infty,s}^+)$ are computed:
\begin{align*}
{\rm Spec}(Dg(p_{\infty}^+)) &= \{ -0.780107753370184, -1.142157021690769\},\\
{\rm Spec}(Dg(p_{\infty, s}^+)) &= \{ -0.187256681090720, 1.023189533593166\}.
\end{align*}
It immediately follows from the above calculations that $-C_\ast^+$ and $-C_{\ast,s}^+$ are eigenvalues of $Dg(p_{\infty}^+)$ and $Dg(p_{\infty,s}^+)$, respectively.
It also follows that the other eigenvalues satisfy the following identities, which are consistent with Theorem \ref{thm-ev1}:
\begin{align*}
\frac{2-2\sqrt{3}}{r_{{\bf Y}_0}^-} &= \frac{2-2\sqrt{3}}{(408 - 234\sqrt{3})^{1/4} } \approx -1.142157021690769,\\
\frac{2+2\sqrt{3}}{r_{{\bf Y}_0}^+} &= \frac{2+2\sqrt{3}}{(408 + 234\sqrt{3})^{1/4} } \approx 1.023189533593166.
\end{align*}
In particular, \KMa{$p_{\infty}^+$} is a sink admitting a two-dimensional stable manifold for the desingularized vector field, while $p_{\infty, s}^{+}$ is a saddle admitting a one-dimensional stable manifold.
Both stable manifolds admit nonempty intersections with the interior of compactified phase space \KMb{$\overline{\mathcal{D}} = \{p({\bf x}) \leq 1\}$}, which follows from the inequality $-C_{\ast} < 0$.
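The spectra reported above can be reproduced without symbolic work; the following editorial sketch (assuming \texttt{numpy}; the step size $h$ is an arbitrary numerical choice) computes a central finite-difference Jacobian of (\ref{KK-desing}) at both equilibria:

```python
import numpy as np

# Reproduce Spec(Dg(p_inf^+)) and Spec(Dg(p_{inf,s}^+)) via a
# finite-difference Jacobian of the desingularized field (KK-desing).
def g(x):
    x1, x2 = x
    p4 = x1**4 + x2**2
    G = x1**3*(x1**2 - x2) + 0.5*x2*(x1**3/3 - (1 - p4)**2*x1)
    return np.array([0.25*(1 + 3*p4)*(x1**2 - x2) - x1*G,
                     0.25*(1 + 3*p4)*(x1**3/3 - (1 - p4)**2*x1) - 2*x2*G])

def jac(x, h=1e-6):
    J = np.zeros((2, 2))
    for j in range(2):
        e = np.zeros(2); e[j] = h
        J[:, j] = (g(x + e) - g(x - e)) / (2*h)   # central differences
    return J

p_plus = np.array([0.989136995894978, 0.206758557005181])
p_saddle = np.array([0.88610812897803, 0.61925794892101])

ev_plus = np.sort(np.linalg.eigvals(jac(p_plus)).real)
ev_saddle = np.sort(np.linalg.eigvals(jac(p_saddle)).real)
assert np.allclose(ev_plus, [-1.142157021690769, -0.780107753370184], atol=1e-5)
assert np.allclose(ev_saddle, [-0.187256681090720, 1.023189533593166], atol=1e-5)
```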
\par
Finally we investigate the correspondence of eigenvectors associated with the above eigenvalues.
Eigenpairs of $Dg(p_{\infty}^+)$ are
\begin{equation}
\label{epair-KK-sink}
\left\{ -C_\ast^+, \begin{pmatrix}
0.922620374008554 \\
0.385709275833906
\end{pmatrix}\right\},\quad \left\{ \frac{2-2\sqrt{3}}{r_{{\bf Y}_0}^-},
\begin{pmatrix}
-0.1062185325758257\\
0.9943428097680588
\end{pmatrix}\right\},
\end{equation}
where we have used the identity of eigenvalues derived above.
It is easily checked that
\begin{equation*}
\frac{0.922620374008554}{0.385709275833906} \approx \frac{0.989136995894978}{2\times 0.206758557005181} \approx \frac{(p_\infty^+)_1}{2(p_\infty^+)_2},
\end{equation*}
which is consistent with Theorem \ref{thm-ev1} about eigenvectors associated with the eigenvalue $-C_\ast$.
To check the correspondence of the remaining eigenvectors, we consider the projection $P_\ast$ defined at ${\bf x}_\ast\in \mathcal{E}$ onto ${\rm span}\{{\bf v}_{\ast, \alpha}\}$, which is calculated as
\begin{align*}
P_\ast =\frac{1}{2} \left. \begin{pmatrix}
x_1\\ 2x_2
\end{pmatrix}\begin{pmatrix}
2x_1^3 \\ x_2
\end{pmatrix}^T \right|_{{\bf x} = p_{\infty}^+} = \begin{pmatrix}
x_1^4 & \frac{1}{2}x_1x_2\\
2x_1^3 x_2 & x_2^2
\end{pmatrix}_{{\bf x} = {\bf x}_\ast}
\end{align*}
and hence the projection $I-P_{\ast}$ is
\begin{equation*}
I-\KMg{P_\ast} = \begin{pmatrix}
1 - x_1^4 & -\frac{1}{2}x_1x_2\\
-2x_1^3 x_2 & 1 - x_2^2
\end{pmatrix}_{{\bf x} = {\bf x}_\ast}.
\end{equation*}
Letting $P_{\infty}^+$ be the projection $P_{\ast}$ with ${\bf x}_\ast = p_\infty^+$, direct calculations yield
\begin{align*}
(I-P_\infty^+) \begin{pmatrix}
1 / r_{{\bf Y}_0}^- \\
3 / (r_{{\bf Y}_0}^-)^2 \\
\end{pmatrix} &\approx \begin{pmatrix}
0.04274910089486506 & -0.1022562689758424 \\
-0.4001868606922550 & +0.9572508991051355
\end{pmatrix} \begin{pmatrix}
1/ (408 - 234\sqrt{3})^{1/4} \\
3/ (408 - 234\sqrt{3})^{1/2}
\end{pmatrix}\\
&\approx \begin{pmatrix}
-0.1533408070194538 \\
1.435468229567002
\end{pmatrix},
\end{align*}
where the vector $(1,3)^T$ is the eigenvector of $A$ associated with the eigenvalue $2-2\sqrt{3}$.
Then we obtain
\begin{equation*}
\frac{1.435468229567002}{-0.1533408070194538} \approx \frac{0.9943428097680588}{-0.1062185325758257},
\end{equation*}
where the latter ratio is calculated from the eigenvector associated with the eigenvalue $\frac{2-2\sqrt{3}}{r_{{\bf Y}_0}^-}$ shown in (\ref{epair-KK-sink}).
The correspondence of eigenvectors stated in Theorem \ref{thm-ev1} is therefore confirmed for $p_\infty^+$.
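The projection computation above can also be checked mechanically; the following editorial sketch (assuming \texttt{numpy}) rebuilds $I - P_\ast$ at $p_\infty^+$ from its closed form and compares direction ratios:

```python
import numpy as np

# Rebuild I - P_* at p_inf^+ and check that the image of (1/r, 3/r^2)^T
# is parallel to the second eigenvector in (epair-KK-sink).
x1, x2 = 0.989136995894978, 0.206758557005181
IP = np.array([[1 - x1**4,   -0.5*x1*x2],
               [-2*x1**3*x2, 1 - x2**2]])
r = (408 - 234*np.sqrt(3))**0.25
w = IP @ np.array([1/r, 3/r**2])

assert abs(w[0] - (-0.1533408070194538)) < 1e-8
assert abs(w[1] - 1.435468229567002) < 1e-8
# the ratio matches that of the eigenvector in (epair-KK-sink):
assert abs(w[1]/w[0] - 0.9943428097680588/(-0.1062185325758257)) < 1e-6
```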
\par
Similarly, we know that eigenpairs of $Dg(p_{\infty,s}^+)$ are
\begin{equation}
\label{epair-KK-saddle}
\left\{ -C_{\ast,s}^+, \begin{pmatrix}
0.581870201791748 \\
0.813281666009280
\end{pmatrix}\right\},\quad \left\{ \frac{2+2\sqrt{3}}{r_{{\bf Y}_0}^+}, \begin{pmatrix}
0.4065790070098367\\
-0.9136156254458955
\end{pmatrix}\right\},
\end{equation}
where we have used the identity of eigenvalues derived above.
Letting $P_{\infty,s}^+$ be the projection $P_{\ast}$ with ${\bf x}_\ast = p_{\infty,s}^+$, direct calculations yield
\begin{align*}
(I-P_{\infty,s}^+) \begin{pmatrix}
1 / r_{{\bf Y}_0}^+ \\
3 / (r_{{\bf Y}_0}^+)^2 \\
\end{pmatrix} &\approx \begin{pmatrix}
0.3834804073018562 & -0.2743647512365853 \\
-0.8617112200159814 & 0.6165195926981430
\end{pmatrix} \begin{pmatrix}
1/ (408 + 234\sqrt{3})^{1/4} \\
3/ (408 + 234\sqrt{3})^{1/2}
\end{pmatrix}\\
&\approx \begin{pmatrix}
-0.2017528727505380 \\
0.4533548802214182
\end{pmatrix},
\end{align*}
where the vector $(1,3)^T$ is the eigenvector of $A$ associated with the eigenvalue $2+2\sqrt{3}$.
Then we obtain
\begin{equation*}
\frac{-0.2017528727505380}{0.4533548802214182} \approx \frac{0.4065790070098367}{-0.9136156254458955},
\end{equation*}
where the latter ratio is calculated from the eigenvector associated with the eigenvalue $\frac{2+2\sqrt{3}}{r_{{\bf Y}_0}^+}$ shown in (\ref{epair-KK-saddle}).
The correspondence of eigenvectors stated in Theorem \ref{thm-ev1} is therefore confirmed for $p_{\infty,s}^+$.
\subsection{An artificial system in the presence of Jordan blocks}
\label{section-ex-log}
The next example concerns an artificial system for which the blow-up power-determining matrix has a non-trivial Jordan block.
\KMa{In \cite{asym1}, asymptotic expansions of blow-up solutions are calculated assuming their existence.
Here we investigate whether blow-up solutions of the systems of interest indeed exist, as well as the correspondences of the associated eigenstructures stated in Section \ref{section-correspondence}.}
\subsubsection{The presence of terms of order $k+\alpha_i - 1$}
First we consider
\begin{equation}
\label{log}
u' = u^2 + v, \quad v' = au^3 + 3uv - u^2\KMg{,}
\end{equation}
where $a\in\mathbb{R}$ is a parameter.
This system is asymptotically quasi-homogeneous of type $\alpha = (1,2)$ and order $k+1=2$,
consisting of the quasi-homogeneous part $f_{\alpha, k}$ and the lower-order part $f_{\rm res}$ given as follows:
\begin{equation}
\label{log-vf}
f_{\alpha, k}(u,v) = \begin{pmatrix}
u^2 + v\\
au^3 + 3uv
\end{pmatrix}, \quad f_{\mathrm{res}}(u,v) = \begin{pmatrix}
0\\
-u^2
\end{pmatrix}.
\end{equation}
To verify the existence of blow-up solutions, we investigate the dynamics at infinity.
To this end, we apply the parabolic compactification
\begin{equation*}
u = \kappa x_1,\quad v = \kappa^2 x_2,\quad \kappa = (1-p({\bf x})^4)^{-1},\quad p({\bf x})^4 = x_1^4 + x_2^2
\end{equation*}
and the time-scale desingularization
\begin{equation*}
d\tau = \frac{1}{4}(1-p({\bf x})^4)^{-1}\left( 1 + 3 p({\bf x})^4 \right)^{-1}dt
\end{equation*}
to (\ref{log}), obtaining the desingularized vector field
\begin{align}
\label{desing-log}
\begin{aligned}
\dot x_1 &= \frac{1}{4}\left(1+ 3p({\bf x})^4 \right)\left( x_1^2 + x_2 \right) - x_1 G_a({\bf x}),\\
\dot x_2 &= \frac{1}{4}\left(1+ 3p({\bf x})^4 \right)\left( ax_1^3 + 3x_1x_2 - \kappa^{-1}x_1^2\right) - 2 x_2 G_a({\bf x}),
\end{aligned}
\end{align}
where
\begin{equation*}
G_a({\bf x}) = x_1^3 \left( x_1^2 + x_2 \right) + \frac{1}{2}x_2 \left( ax_1^3 + 3x_1x_2 - \kappa^{-1}x_1^2 \right).
\end{equation*}
We pay attention to the case $a=0$.
Then equilibria on the horizon satisfy
\begin{equation*}
x_1^2 + x_2 - x_1 C = 0,\quad 3x_1x_2 - 2 x_2 C = 0,\quad
C = x_1^3 \left( x_1^2 + x_2 \right) + \frac{3}{2} x_1x_2^2.
\end{equation*}
One easily finds an equilibrium on the horizon $(x_1, x_2) = (1,0) \equiv {\bf x}_\ast$.
The value $C_\ast$ given in (\ref{const-horizon}) is $C_\ast = 1$.
The Jacobian matrix of the vector field (\ref{desing-log}) with $a=0$ at $(x_1, x_2)$ is
\begin{align*}
J({\bf x}) &= \begin{pmatrix}
J_{11} & J_{12} \\
J_{21} & J_{22}
\end{pmatrix},\\
J_{11} &= 3x_1^3(x_1^2 + x_2) + \frac{1}{2}x_1 R({\bf x}) - G_0({\bf x}) - x_1 \frac{\partial G_0}{\partial x_1}({\bf x}),\\
J_{12} &= \frac{3}{2}x_2(x_1^2 + x_2) + \frac{1}{4}R({\bf x}) - x_1 \frac{\partial G_0}{\partial x_2}({\bf x}),\\
J_{21} &= 3x_1^3(3x_1x_2 - \kappa^{-1}x_1^2) + \frac{1}{4} R({\bf x})(3x_2 +4x_1^5 - 2\kappa^{-1}x_1 ) - \KMd{2x_2} \frac{\partial G_0}{\partial x_1}({\bf x}),\\
J_{22} &= \frac{3}{2}x_2(3x_1x_2 - \kappa^{-1}x_1^2) + \frac{1}{4}R({\bf x})(3x_1 + 2x_1^2 x_2 ) - 2G_0({\bf x}) - 2x_2 \frac{\partial G_0}{\partial x_2}({\bf x}),
\end{align*}
where $R({\bf x}) = 1+3p({\bf x})^4$ and
\begin{align*}
\frac{\partial G_0}{\partial x_1}({\bf x}) &= 5x_1^4 + 3x_1^2x_2 + \frac{3}{2}x_2^2 - \kappa^{-1}x_1x_2 + 2x_1^5 x_2,\\
\frac{\partial G_0}{\partial x_2}({\bf x}) &= x_1^3 + 3x_1x_2 - \frac{1}{2}\kappa^{-1}x_2^2 + x_1^2 x_2^2.
\end{align*}
Substituting $(x_1, x_2) = {\bf x}_\ast$ into $J({\bf x})$, we have
\begin{equation*}
J({\bf x}_\ast) = \begin{pmatrix}
3 + 2 - 1 - 5 & 1 - 1 \\
0 + (0+4-0) - \KMd{0} & 0 + 3 - 2
\end{pmatrix} = \begin{pmatrix}
- 1 & 0 \\
\KMd{4} & 1
\end{pmatrix},
\end{equation*}
which implies that the equilibrium ${\bf x}_\ast$ is a hyperbolic saddle.
Moreover, the eigenvector associated with the eigenvalue $-1$ is $\KMd{(1,-2)^T}$, while the eigenvector associated with the eigenvalue $+1$ is $(0,1)^T$.
The latter is tangent to the horizon at ${\bf x}_\ast$.
As a consequence, the stable manifold of ${\bf x}_\ast$ is extended inside $\mathcal{D}$ and hence the local stable manifold $W^s_{\rm loc}({\bf x}_\ast)$ induces (finite-time) blow-up solutions of (\ref{log}) with $a=0$ in the forward time direction.
Note that the eigenstructure of $J({\bf x}_\ast)$ is {\em not contradictory} to Theorem \ref{thm-ev1} because there {\em is} a nonzero term of order $k+\alpha_2 - 1 = 2$, namely $-u^2$, in the second component of (\ref{log}).
In contrast, we cannot yet determine whether $v$ blows up at $t = t_{\max}$, because $\KMb{x_{2;\ast}} = 0$.
This consequence is not contradictory to Theorem \ref{thm:blowup}, either.
In other words, subsequent terms must be investigated to determine the asymptotic behavior of $v(t)$ as $t \to t_{\max}$.
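The value of $J({\bf x}_\ast)$ computed above can be confirmed without symbolic differentiation; the following editorial sketch (assuming \texttt{numpy}; the step size $h$ is an arbitrary choice) checks the equilibrium and the Jacobian of (\ref{desing-log}) with $a=0$ by finite differences:

```python
import numpy as np

# Finite-difference check of J(x_*) = [[-1, 0], [4, 1]] for the
# desingularized field (desing-log) with a = 0 at x_* = (1, 0).
def g(x):
    x1, x2 = x
    p4 = x1**4 + x2**2
    kinv = 1 - p4                       # kappa^{-1}
    G0 = x1**3*(x1**2 + x2) + 0.5*x2*(3*x1*x2 - kinv*x1**2)
    return np.array([0.25*(1 + 3*p4)*(x1**2 + x2) - x1*G0,
                     0.25*(1 + 3*p4)*(3*x1*x2 - kinv*x1**2) - 2*x2*G0])

x_star = np.array([1.0, 0.0])
assert np.allclose(g(x_star), 0)        # equilibrium on the horizon

h = 1e-6
J = np.zeros((2, 2))
for j in range(2):
    e = np.zeros(2); e[j] = h
    J[:, j] = (g(x_star + e) - g(x_star - e)) / (2*h)
assert np.allclose(J, [[-1, 0], [4, 1]], atol=1e-5)
```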
\par
\KMa{
Asymptotic expansions calculated in \cite{asym1} indeed resolve this ambiguity.
Now introduce the asymptotic expansion}
\begin{align}
\notag
u(t) &= \theta(t)^{-1}U(t) \equiv \theta(t)^{-1}\sum_{n=0}^\infty U_n(t),\quad U_n(t) \ll U_{n-1}(t),\quad \lim_{t\to t_{\max}}U_n(t) = U_0, \\
\label{series-log}
v(t) &= \theta(t)^{-2}V(t) \equiv \theta(t)^{-2}\sum_{n=0}^\infty V_n(t),\quad V_n(t) \ll V_{n-1}(t), \quad \lim_{t\to t_{\max}}V_n(t) = V_0.
\end{align}
As seen in \cite{asym1}, the balance law is
\begin{equation*}
\begin{pmatrix}
U_0 \\ 2V_0
\end{pmatrix} = \begin{pmatrix}
U_0^2 + V_0\\
aU_0^3 + 3U_0V_0
\end{pmatrix}.
\end{equation*}
Our particular interest here is the case $a=0$, in which the root is $(U_0, V_0) = (1,0)$; this is consistent with \KMg{${\bf x}_\ast$}.
We fix $a=0$ in what follows.
The blow-up power-determining matrix at $(U_0, V_0)$ is
\begin{equation}
\label{blow-up-A-Jordan}
A = \begin{pmatrix}
-1 & 0 \\ 0 & -2
\end{pmatrix} + \begin{pmatrix}
2U_0 & 1\\
3V_0 & 3U_0
\end{pmatrix} = \begin{pmatrix}
1& 1\\
0 & 1
\end{pmatrix},
\end{equation}
that is, the matrix $A$ has a nontrivial Jordan block.
The eigenvector associated with the double eigenvalue $\lambda = 1$ is $(1,0)^T$, while the vector $(0,1)^T$ is the generalized eigenvector.
In the present case, the latter corresponds to the eigenvector of $J({\bf x}_\ast)$ associated with the eigenvalue $+1$ (not $-1$!).
We see the common eigenstructure stated in Theorem \ref{thm-ev1}.
Note again that the present situation is {\em not} contradictory to Proposition \ref{prop-correspondence-ev-Ag-A} because Assumption \ref{ass-f} is not satisfied in the present example.
We finally obtain the following second order asymptotic expansion of the blow-up solution $(u(t), v(t))$ as $t\to t_{\max}-0$:
\begin{equation*}
u(t) \sim \theta(t)^{-1} - \frac{1}{4} - \KMd{\frac{1}{144}}\theta(t),\quad v(t) \sim \frac{1}{2}\theta(t)^{-1} - \frac{1}{24}.
\end{equation*}
\KMa{We see that $v(t)$ blows up, while the rate is smaller than $\theta(t)^{-2}$ expected in Theorem \ref{thm:blowup}. }
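The Jordan structure of (\ref{blow-up-A-Jordan}) claimed above is elementary to verify; the following editorial sketch (assuming \texttt{numpy}) confirms it:

```python
import numpy as np

# Check of the Jordan structure of (blow-up-A-Jordan): double eigenvalue 1,
# eigenvector (1, 0)^T, generalized eigenvector (0, 1)^T.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
N = A - np.eye(2)
assert np.allclose(np.linalg.eigvals(A).real, [1.0, 1.0])
assert np.linalg.matrix_rank(N) == 1                  # only one eigenvector
assert np.allclose(N @ np.array([1.0, 0.0]), 0)       # (1,0)^T: eigenvector
assert np.allclose(N @ np.array([0.0, 1.0]), [1, 0])  # (0,1)^T: generalized
```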
\subsubsection{The absence of terms of order $k+\alpha_i - 1$}
Next we consider
\begin{equation}
\label{log2}
u' = u^2 + v, \quad v' = au^3 + 3uv - u
\end{equation}
with a real parameter $a\in\mathbb{R}$, instead of (\ref{log}).
The difference from (\ref{log}) is the replacement of $-u^2$ by $-u$ in the second component of the vector field.
Because the quasi-homogeneous part is unchanged, the balance law and the blow-up power-determining matrix are the same as those of (\ref{log}), whereas the Jacobian matrix $J({\bf x}_\ast)$ for the desingularized vector field changes due to the absence of terms of order $k+\alpha_i-1$.
The desingularized vector field is
\begin{align}
\label{desing-log-2}
\begin{aligned}
\dot x_1 &= \frac{1}{4}\left(1+ 3p({\bf x})^4 \right)\left\{ x_1^2 + x_2 \right\} - x_1 \tilde G_a({\bf x}),\\
\dot x_2 &= \frac{1}{4}\left(1+ 3p({\bf x})^4 \right)\left\{ ax_1^3 + 3x_1x_2 - \kappa^{-2}x_1\right\} - 2 x_2 \tilde G_a({\bf x}),
\end{aligned}
\end{align}
where
\begin{equation*}
\tilde G_a({\bf x}) = x_1^3 \left\{ x_1^2 + x_2 \right\} + \frac{1}{2}x_2 \left\{ ax_1^3 + 3x_1x_2 - \kappa^{-2}x_1 \right\}.
\end{equation*}
\KMg{Under the constraint $a=0$,} one easily \KMg{finds} an equilibrium on the horizon $(x_1, x_2)^T = (1,0)^T \equiv {\bf x}_\ast$.
The Jacobian matrix of the vector field (\ref{desing-log-2}) with $a=0$ at ${\bf x}_\ast$ is now
\begin{align*}
\KMa{J({\bf x}_\ast)} &= \begin{pmatrix}
J_{11} & J_{12} \\
J_{21} & J_{22}
\end{pmatrix},\\
J_{11} &= \left(3x_1^3(x_1^2 + x_2) + \frac{1}{2}x_1 R({\bf x}) - \tilde G_0({\bf x}) - x_1 \frac{\partial \tilde G_0}{\partial x_1}({\bf x}) \right)_{{\bf x} = {\bf x}_\ast} = -1,\\
J_{12} &= \left(\frac{3}{2}x_2(x_1^2 + x_2) + \frac{1}{4}R({\bf x}) - x_1 \frac{\partial \tilde G_0}{\partial x_2}({\bf x})\right)_{{\bf x} = {\bf x}_\ast} = 0,\\
J_{21} &= \left( 3x_1^3(3x_1x_2 - \kappa^{-2}x_1) + \frac{1}{4} R({\bf x})(3x_2 + 8x_1^4\kappa^{-1} - \kappa^{-2} ) - 2x_2 \frac{\partial \tilde G_0}{\partial x_1}({\bf x}) \right)_{{\bf x} = {\bf x}_\ast} = 0,\\
J_{22} &= \left( \frac{3}{2}x_2(3x_1x_2 - \kappa^{-2}x_1) + \frac{1}{4}R({\bf x})(3x_1 + 4x_1 x_2\kappa^{-1} ) - 2\tilde G_0({\bf x}) - 2x_2 \frac{\partial \tilde G_0}{\partial x_2}({\bf x}) \right)_{{\bf x} = {\bf x}_\ast} = 1,
\end{align*}
where $R({\bf x}) = 1+3p({\bf x})^4$ and
\begin{align*}
\frac{\partial \tilde G_0}{\partial x_1}({\bf x}) &= 5(x_1^2 + 3x_2)x_1^2 + \frac{3}{2}x_2^2 + 4x_1^4 x_2 \kappa^{-1} - \frac{1}{2}x_2\kappa^{-1},\\
\frac{\partial \tilde G_0}{\partial x_2}({\bf x}) &= x_1^3 + \frac{3}{2}x_1x_2 - \frac{1}{2}\kappa^{-2}x_1 + \frac{3}{2}x_1x_2 + 2x_1x_2^2 \kappa^{-1}.
\end{align*}
\KMa{
We therefore know that the equilibrium ${\bf x}_\ast$ is a hyperbolic saddle and that the eigenvector associated with the eigenvalue $-1$ is ${\bf v}_{0,\alpha} = (1,0)^T$, while the eigenvector associated with the eigenvalue $+1$ is $(0,1)^T$.
Now the matrix $A_g$ stated in Theorem \ref{thm-evec-Dg} is
\begin{equation*}
A_g \equiv \begin{pmatrix}
1 & 1 \\ 0 & 1
\end{pmatrix} = A.
\end{equation*}
In particular, the eigenstructure of $A_g$ is exactly the same as that of $A$.
Therefore the correspondence of eigenstructures stated in Theorem \ref{thm-evec-Dg} is considered between those of $A$ and $J({\bf x}_\ast)$.
Now the blow-up power-determining matrix $A$ for (\ref{log2}) is the same as (\ref{blow-up-A-Jordan}).
As seen in the previous example, the vector $(0,1)^T$ is the generalized eigenvector of $A$ associated with the {\em double} eigenvalue $\lambda = 1$.
It turns out here that the vector $(0,1)^T$ is the eigenvector of $J({\bf x}_\ast)$ associated with the {\em simple} eigenvalue $+1$.
The gap in multiplicity is exactly what we have stated in Theorem \ref{thm-evec-Dg} with $m_\lambda = 2$ and $m_{\lambda_g} = 1$.
Indeed, letting ${\bf w} = (0,1)^T$ and ${\bf w}_g = (0,1)^T$, we have
\begin{align*}
{\bf v}_{\ast, \alpha} &= \begin{pmatrix}
1 \\ 0
\end{pmatrix},\quad \nabla p({\bf x}_\ast) = \begin{pmatrix}
1 \\ 0
\end{pmatrix} \quad \Rightarrow \quad I-P_\ast = \begin{pmatrix}
0 & 0 \\ 0 & 1
\end{pmatrix},\\
B_g &= -P_\ast (A_g + C_\ast I) = -\begin{pmatrix}
1 & 0 \\ 0 & 0
\end{pmatrix}\left\{ \begin{pmatrix}
1 & 1 \\ 0 & 1
\end{pmatrix} + \begin{pmatrix}
1 & 0 \\ 0 & 1
\end{pmatrix}\right\},\\
A_g - kC_\ast I &= \begin{pmatrix}
1 & 1 \\ 0 & 1
\end{pmatrix} - \begin{pmatrix}
1 & 0 \\ 0 & 1
\end{pmatrix} = \begin{pmatrix}
0 & 1 \\ 0 & 0
\end{pmatrix},\quad (A_g - kC_\ast I)^2 = \begin{pmatrix}
0 & 0 \\ 0 & 0
\end{pmatrix},\\
(I-P_\ast){\bf w} &= \begin{pmatrix}
0 \\ 1
\end{pmatrix}\equiv {\bf w}_g, \quad (A_g - kC_\ast I){\bf w}_g = \begin{pmatrix}
1 \\ 0
\end{pmatrix} = {\bf v}_{\ast,\alpha},\\
Dg({\bf x}_\ast) - kC_\ast I &= J({\bf x}_\ast) - kC_\ast I = \begin{pmatrix}
-1 & 0 \\ 0 & 1
\end{pmatrix} - \begin{pmatrix}
1 & 0 \\ 0 & 1
\end{pmatrix} = \begin{pmatrix}
-2 & 0 \\ 0 & 0
\end{pmatrix}.
\end{align*}
Obviously, we see that $(I-P_\ast){\bf w} \in \ker(Dg({\bf x}_\ast) - kC_\ast I )$ and $(A_g - kC_\ast I){\bf w}_g = {\bf v}_{\ast,\alpha} \in \ker(A_g - kC_\ast I)$.
}
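The chain of matrix identities above can be replayed in a few lines; the following editorial sketch (assuming \texttt{numpy}) checks them with ${\bf x}_\ast = (1,0)$, $C_\ast = 1$, $k=1$:

```python
import numpy as np

# Check of the matrix identities above: x_* = (1, 0), C_* = 1, k = 1.
Ag = np.array([[1.0, 1.0], [0.0, 1.0]])
Dg = np.diag([-1.0, 1.0])                  # J(x_*)
P = np.array([[1.0, 0.0], [0.0, 0.0]])     # P_*, so I - P_* = diag(0, 1)
N = Ag - np.eye(2)                         # A_g - k C_* I

w = np.array([0.0, 1.0])
wg = (np.eye(2) - P) @ w                   # = (0, 1)^T = w_g
assert np.allclose(N @ N, 0)               # (A_g - k C_* I)^2 = O
assert np.allclose(N @ wg, [1.0, 0.0])     # = v_{*, alpha}
assert np.allclose((Dg - np.eye(2)) @ wg, 0)  # w_g in ker(Dg(x_*) - k C_* I)
```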
\par
In \cite{asym1}, the following asymptotic expansion for the blow-up solution for (\ref{log2}) is calculated:
\begin{equation*}
u(t) \sim \theta(t)^{-1} - \frac{1}{9} \theta(t),\quad v(t) \sim \frac{1}{3}.
\end{equation*}
\KMa{Unlike the system (\ref{log}), we see that $v(t)$ remains bounded as $t\to t_{\max}$, which is not expected from Theorem \ref{thm:blowup}.
This result, as well as that in the previous example, highlights the importance of multi-order asymptotic expansions for blow-up solutions whose concrete asymptotic behavior cannot be clearly described by Theorem \ref{thm:blowup}.}
\begin{rem}\rm
Persistence of hyperbolicity under perturbations of vector fields yields the existence of hyperbolic equilibria on the horizon for $a\not = 0$ sufficiently close to $0$, in which case ${\bf x} = (1,0)^T$ is not an equilibrium and the new equilibrium with $a\not = 0$ possesses nonzero second component in general.
In particular, the blow-up rate $O(\theta(t)^{-2})$ in $v(t)$ becomes active under such perturbations, according to Theorem \ref{thm:blowup}.
This implies that the blow-up rate can depend on vector fields in a {\em discontinuous} manner, \KMg{even if equilibria or general invariant sets on the horizon depend {\em continuously} on parameters}.
\end{rem}
\section*{Concluding Remarks}
In this paper, we have provided the correspondence of \KMb{coefficients characterizing the leading terms of blow-ups and} eigenstructure between the system associated with asymptotic expansions of blow-ups proposed in Part I \cite{asym1}, and desingularized vector fields through compactifications and time-scale desingularizations.
We have shown that equilibria of these transformed systems, introduced to describe type-I finite-time blow-ups in forward time direction for the original system, correspond one-to-one to each other, and that there is a natural correspondence of eigenstructures between Jacobian matrices at the above equilibria.
As a consequence, both equilibria are hyperbolic once either of them is shown to be hyperbolic; as a corollary, the hyperbolic structure of the leading terms of asymptotic expansions of blow-ups, constructed assuming their existence, indeed guarantees their existence.
In particular, in the case of ODEs with asymptotically quasi-homogeneous vector fields,
\KMb{asymptotic expansions} of type-I blow-ups themselves provide their dynamical properties, including their existence, and vice versa.
Such correspondence can be seen in various examples presented in Part I \cite{asym1}.
We believe that the correspondence obtained in the present paper \KMc{provides} a significant insight into blow-up studies for a wide class of differential equations.
\par
We end this paper by providing comments related to the present study.
\subsection*{Asymptotic expansion of complex blow-ups such as oscillatory blow-ups}
According to \cite{Mat2018} and \cite{Mat2019}, blow-up behavior (for asymptotically quasi-homogeneous systems) can be understood through the dynamical structure of {\em invariant sets} on the horizon for desingularized vector fields.
In the present study, we have focused {\em only on stationary blow-ups}, namely blow-ups induced by equilibria on the horizon for desingularized vector fields, to discuss multi-order asymptotic expansions.
It is then natural to question how multi-order asymptotic expansions of blow-ups induced by {\em general invariant sets} on the horizon behave.
As a concrete example, {\em oscillatory blow-up behavior} originating from suspension bridge problems in mechanical engineering has been studied (e.g. \cite{GP2011, GP2013, HBT1989, LM1990}).
Later, this complex behavior was captured by means of {\em local stable manifolds of periodic orbits} with computer-assisted proofs, or {\em rigorous numerics} (\cite{DALP2015}).
Independently, it was proposed in \cite{Mat2018} that such oscillatory blow-up behavior is characterized by periodic orbits on the horizon for desingularized vector fields, which is referred to as {\em periodic blow-up}.
From the viewpoint of a complete understanding of blow-up solutions themselves (for ODEs), asymptotic expansions of periodic blow-up solutions are a natural topic for a sequel to the present study.
\par
It should be noted that, in \cite{DALP2015}, oscillatory blow-up behavior is extracted through validation of {\em unstable} periodic orbits, where an ansatz for oscillatory blow-ups like those mentioned in Assumption \ref{ass-fundamental} is assumed in order to study their dynamical properties.
On the other hand, such an approach can extract the wrong stability properties of the original blow-up behavior, as indicated in Theorem \ref{thm-stability} in the case of stationary blow-ups.
The preceding works and the present study motivate the study of the true correspondence of dynamical properties between desingularized vector fields $g$ and the counterpart of the system (\ref{blow-up-basic}) for periodic blow-ups\KMf{,} and general complex blow-up behavior.
\subsection*{\KMh{Infinite-dimensional problems}}
In the field of partial differential equations (PDEs for short), the {\em rescaling} of functions for (parabolic-type) equations is widely used for studying asymptotic behavior near finite-time singularity (e.g., behavior of time-space dependent solutions $u(t,x)$ with $t < t_{\max} < \infty$ near $t_{\max}$).
In many studies of blow-ups for parabolic-type PDEs, the rescaling is applied assuming that the type-I blow-up occurs (e.g. \cite{GK1985, HV1993, MZ1998}).
Our present approach of asymptotic expansion of blow-up solutions introduced in \KMb{Part I \cite{asym1}} begins with the ansatz of blow-up profiles with {\em type-I blow-up rates}, which turns out to be similar to typical approaches applied to PDEs mentioned above.
Therefore the system of our interest (\ref{blow-up-basic}) has a form similar to that for rescaled profiles of blow-ups, referred to as {\em backward self-similar profiles}, for \KMh{parabolic-type} PDEs (see references mentioned above).
Except in the special case where the system itself is scale invariant (corresponding to the quasi-homogeneity in our setting), blow-up solutions are typically {\em assumed} to exist when studying their asymptotic behavior.
In other words, their existence is discussed independently through {\em nonlinear} and/or expensive analysis.
In contrast, one of our results, Theorem \ref{thm-existence-blow-up}, shows that, under mild assumptions on vector fields, {\em linear} information associated with asymptotic profiles provides the existence of blow-up solutions as well as their dynamical properties, even if the existence is not known a priori.
The present results discussed in this paper will provide a new insight into the characterization of blow-up behavior for evolutionary equations, including infinite-dimensional ones such as parabolic-type PDEs.
\KMh{
Our characterizations of blow-up solutions rely on the approach reviewed in Section \ref{section-preliminary}.
Once its infinite-dimensional analogue is constructed, the dynamical correspondence of blow-up solutions presented in this paper can be extended to infinite-dimensional problems, although there are significant difficulties to be overcome, as mentioned in \cite{Mat2019}.
}
\subsection*{Presence of logarithmic terms}
Observations in Section \ref{section-ex-log} indicate that logarithmic terms in asymptotic expansions can be present {\em only if} a {\em negative} blow-up power eigenvalue $\lambda$ generates a nontrivial Jordan block of the blow-up power-determining matrix $A$.
For this to happen, the vector field (\ref{ODE-original}) must have dimension greater than two, \KMg{because} the matrix $A$ always possesses the eigenvalue $1$.
In other words, in {\em two}-dimensional systems, the eigenvalue $1$ is the only one admitting a nontrivial Jordan block for the matrix $A$\KMg{,} and this is the only case in which logarithmic functions appear in the fundamental matrix for rational vector fields with rational blow-up rates (the first terms of blow-ups).
We therefore pose the following conjecture.
\begin{itemize}
\item In any two-dimensional rational vector field, all possible blow-up solutions satisfying Assumption \ref{ass-fundamental} include {\em no logarithmic functions of $\theta(t)$} in their finite-order asymptotic expansions.
\end{itemize}
\subsection*{Efficient rigorous numerics of local stable manifolds of equilibria at infinity}
We shall connect the preceding works, {\em computer-assisted proofs} or {\em rigorous numerics} of blow-up solutions with concrete and rigorous bounds of blow-up times, to the present study.
In \cite{MT2020_1, TMSTMO2017}, computer-assisted proofs for the existence of blow-up solutions\KMg{, their concrete profiles, and their} blow-up times are provided, which are based on desingularized vector fields associated with (admissible global) compactifications and time-scale desingularizations.
As seen in several examples in the present paper, these compactifications cause a significant increase in the degree of polynomials, depending on the type $\alpha$ of the original vector field $f$, even if $f$ consists of polynomials of low degree.
In general, admissible global compactifications require lengthy calculations of vector fields themselves, as well as their equilibria, linearized matrices or {\em parameterization of invariant manifolds} (cf. \cite{LMT2021} \KMg{for constructing stable manifolds in an efficient way}).
On the other hand, the methodology for asymptotic expansions proposed in Part I \cite{asym1} indicates that several pieces of essential dynamical information about equilibria on the horizon can be extracted from the simpler system (\ref{blow-up-basic}), which can contribute to an {\em efficient} validation methodology for blow-ups through simplification of the objects to be computed.
We have seen in Part I and the present paper that equilibria on the horizon and eigenstructures for desingularized vector fields can be calculated efficiently through asymptotic expansions.
The next issue for the problems mentioned here is the efficient computation of the {\em parameterization of invariant manifolds}, which provides mappings describing invariant manifolds as graphs, as well as a conjugacy between the original nonlinear dynamics on them and a simpler, in particular {\em linearized}, one.
This concept is originally developed in \cite{CFdlL2003-1, CFdlL2003-2, CFdlL2005} and there are many successful applications to describe nonlinear invariant dynamics with computer assistance.
An application to blow-up validation is shown in \cite{LMT2021}.
Using the knowledge of asymptotic expansions, one expects that invariant manifolds describing a family of blow-up solutions can also be constructed more efficiently than through desingularized vector fields, as achieved in the calculations of equilibria and eigenstructures.
It should be noted, however, that the system (\ref{blow-up-basic}) itself can extract wrong dynamical properties of blow-up solutions due to the intrinsic difference in eigenvalue distributions between the two systems of our interest \KMg{(Theorem \ref{thm-stability})}.
The present study implies that careful treatments are required to validate geometric and dynamical properties of blow-up solutions in an efficient way.
\end{document} |
\begin{document}
\begin{centering}
{\huge \textbf{The structure of base phi expansions}}
{\bf \large F.~Michel Dekking}
{CWI, Amsterdam, and DIAM, Delft University of Technology, Faculty EEMCS,\\ P.O.~Box 5031, 2600 GA Delft, The Netherlands.}
{\footnotesize \it Email: [email protected]}
\end{centering}
\begin{abstract}
\noindent In the base phi expansion any natural number is written uniquely as a sum of powers of the golden mean with coefficients 0 and 1, where it is required that the product of two consecutive digits is always 0. We tackle the problem of describing what these expansions look like. We classify the positive parts of the base phi expansions according to their suffixes, and the negative parts according to their prefixes, specifying the sequences of occurrences of these digit blocks. Here the situation is much more complex than for the Zeckendorf expansions, where any natural number is written uniquely as a sum of Fibonacci numbers with coefficients 0 and 1, where, again, it is required that the product of two consecutive digits is always 0. In a previous work we have classified the Zeckendorf expansions according to their suffixes. It turned out that if we consider the suffixes as labels on the Fibonacci tree, then the numbers with a given suffix in their Zeckendorf expansion appear as generalized Beatty sequences in a natural way on this tree.
We prove that the positive parts of the base phi expansions are a subsequence of the sequence of Zeckendorf expansions, giving an explicit formula in terms of a generalized Beatty sequence. The negative parts of the base phi expansions no longer appear lexicographically. We prove that all allowed digit blocks appear, and determine the order in which they do appear.
\end{abstract}
\quad {\footnotesize Keywords: Base phi; Zeckendorf expansion; Generalized Beatty sequence; Wythoff sequence }
\section{Introduction}
Let the golden mean be given by $\varphi:=(1+\sqrt{5})/2$.\\
Ignoring leading and trailing zeros, any natural number $N$ can be written uniquely as
$$N= \sum_{i=-\infty}^{\infty} d_i \varphi^i,$$
with digits $d_i=0$ or 1, and where $d_id_{i+1} = 11$ is not allowed.
As usual, we denote the base phi expansion of $N$ as $\beta(N)$, and we write these expansions with a radix point as
$$\beta(N) = d_{L}d_{L-1}\dots d_1d_0\cdot d_{-1}d_{-2} \dots d_{R+1}d_R.$$
We define $$\beta^+(N)=d_{L}d_{L-1}\dots d_1d_0\; {\;\rm and\;}\; \beta^-(N)=d_{-1}d_{-2} \dots d_{R+1}d_R.$$
So $\beta(N)=\beta^+(N)\cdot\beta^-(N)$. For example, $\beta(2)=10\cdot 01$, and $\beta(3)=100\cdot 01$.
This paper deals with the following question: what are the words of 0's and 1's that can occur as digit blocks in the base phi expansion of $N$, and for which numbers $N$ do they occur?
In Section \ref{sec:phi}, we perform this task for the suffixes of the $\beta^+$-part of the base phi expansions, and in Section \ref{sec:neg} for the complete $\beta^-\!$-part of the base phi expansions, and the prefixes of the $\beta^-\!$-part of length at most 3.
In Section \ref{sec:embed}, we establish in Theorem \ref{th:Zeckphi} a relationship between the base phi expansions and Zeckendorf expansions, also known as Fibonacci representations.
This will permit us to exploit the results of the paper \cite{Dekk-Zeck-structure} in Section \ref{sec:phi}. See the paper \cite{Frougny-Saka} for a less direct approach, in terms of two-tape automata.
In Section \ref{sec:RST} we recall the recursive structure of base phi expansions, and derive some tools from this which will be useful in the final two sections.
In Section \ref{sec:closer} we take a closer look at the Lucas intervals.
In Section \ref{sec:GBS} we introduce generalized Beatty sequences, which for the base phi expansion take over the role played by arithmetic sequences in the classical expansions in base $b$, where $b$ is an integer larger than 1.
We end this introduction by pointing out that there is a neat way to obtain $N$ from the $\beta^+(N)$-part of $\beta(N)$, without knowing the $\beta^-(N)$-part.
If $\beta(N)=\beta^+(N)\cdot\beta^-(N)$ is the base phi expansion of a natural number $N$, then $N=\lceil \beta^+(N)\rceil$. Here $\lceil \cdot\rceil$ is the ceiling function.
For a proof, bound the value of $\beta^-(N)$ by the sum of all negative powers $\varphi^{-1}+\varphi^{-3}+\varphi^{-5}+\cdots$, taking into account that no 11 appears.
This is the geometric series starting at $\varphi^{-1}$ with common ratio $\varphi^{-2}$, which sums to $\varphi^{-1}/(1-\varphi^{-2})=1.$
\section{Embedding base phi into Zeckendorf}\label{sec:embed}
We define the Lekkerkerker-Zeckendorf expansion. Let $(F_n)$ be the Fibonacci numbers. Let $\ddot{F}_0=1, \ddot{F}_1=2, \ddot{F}_2=3,\dots$ be the twice shifted Fibonacci numbers, defined by $\ddot{F}_i=F_{i+2}$.
Ignoring leading and trailing zeros, any natural number $N$ can be written uniquely as
$$N= \sum_{i=0}^{\infty} e_i \ddot{F}_i,$$
with digits $e_i=0$ or 1, and where $e_ie_{i+1} = 11$ is not allowed.
We denote the Zeckendorf expansion of $N$ as $Z(N)$.
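The Zeckendorf expansion is computed by the greedy algorithm: repeatedly subtract the largest $\ddot F_i\le N$. A short Python sketch (greediness automatically prevents the forbidden block 11):

```python
def zeckendorf(N):
    """Zeckendorf expansion of N >= 1 as a 0/1 string e_L ... e_0 over the
    shifted Fibonacci numbers 1, 2, 3, 5, 8, ... (greedy algorithm)."""
    fibs = [1, 2]                       # F"_0, F"_1
    while fibs[-1] + fibs[-2] <= N:
        fibs.append(fibs[-1] + fibs[-2])
    digits = []
    for f in reversed(fibs):            # from the largest F" downwards
        if f <= N:
            digits.append('1')
            N -= f
        else:
            digits.append('0')
    return ''.join(digits).lstrip('0')
```

For example, `zeckendorf(9)` gives `'10001'`, i.e. $9 = 8 + 1$.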
Let $V$ be the generalized Beatty sequence (cf. \cite{GBS}) defined by
$$ V(n) = 3\lfloor n\varphi \rfloor + n +1.$$
\noindent Here $\lfloor \cdot \rfloor$ denotes the floor function, and $(\lfloor n\varphi\rfloor)$ is the well known lower Wythoff sequence.
We define the function $S$ by
$$S(n)=\max\{k\in \mathbb{N}: V(k)\le n-1\}.$$
\begin{theorem}\label{th:Zeckphi}
\noindent For all $N\ge 0$
$$\beta^+(N)=Z(N+S(N)).$$
\end{theorem}
\noindent This theorem will be proved in the Section \ref{sec:Pf-Zeckphi}.
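The theorem can be spot-checked numerically. In the self-contained Python sketch below (an editorial illustration), $S(N)$ is taken to be the number of indices $k\ge 1$ with $V(k)\le N-1$, i.e., the number of skipped Zeckendorf expansions below $N$; the base phi digits are computed with exact $\mathbb{Z}[\varphi]$ arithmetic.

```python
import math

def fib(n):
    """Fibonacci numbers, extended by F(-n) = (-1)^(n+1) F(n)."""
    if n < 0:
        return (-1) ** (-n + 1) * fib(-n)
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def nonneg(a, b):
    """Exact test for a + b*phi >= 0."""
    s, t = 2 * a + b, b
    if s >= 0 and t >= 0:
        return True
    if s <= 0 and t <= 0:
        return s == 0 and t == 0
    return s * s >= 5 * t * t if s > 0 else 5 * t * t >= s * s

def beta_plus(N):
    """The digit string beta^+(N) of the base phi expansion of N >= 1 (greedy)."""
    a, b = N, 0
    k = 0
    while nonneg(a - fib(k), b - fib(k + 1)):
        k += 1
    digits = {}
    while (a, b) != (0, 0):
        pa, pb = fib(k - 1), fib(k)      # phi^k = F(k-1) + F(k)*phi
        if nonneg(a - pa, b - pb):
            digits[k] = 1
            a, b = a - pa, b - pb
        else:
            digits[k] = 0
        k -= 1
    return ''.join(str(digits.get(i, 0)) for i in range(max(digits), -1, -1))

PHI = (1 + 5 ** 0.5) / 2

def V(n):
    return 3 * math.floor(n * PHI) + n + 1

def S(n):
    k = 0
    while V(k + 1) <= n - 1:             # count k >= 1 with V(k) <= n - 1
        k += 1
    return k

def zeckendorf(N):
    fibs = [1, 2]
    while fibs[-1] + fibs[-2] <= N:
        fibs.append(fibs[-1] + fibs[-2])
    digits = []
    for f in reversed(fibs):
        if f <= N:
            digits.append('1')
            N -= f
        else:
            digits.append('0')
    return ''.join(digits).lstrip('0')
```

With these helpers, `beta_plus(N) == zeckendorf(N + S(N))` holds for every $N$ one cares to check; e.g., $\beta^+(6)=Z(7)=1010$ with $S(6)=1$.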
The basis for the embedding of the $\beta^+(N)$ into the collection of Zeckendorf words is the following analysis.
\subsection{The art of adding 1}\label{sec:add}
It is essential to give ourselves the freedom to write also non-admissible expansions in the form
$$\beta(N) = d_{L}d_{L-1}\dots d_1d_0\cdot d_{-1}d_{-2} \dots d_{R+1}d_R.$$
For example, since $\beta(4) =101.01$ and
$\beta(2)=10\cdot 01$, we can write
\begin{equation}\label{eq:plus1} \beta(5)\doteq \beta(4)+1\doteq 101\cdot01+1\cdot 0\doteq 102\cdot01\doteq 110\cdot02\doteq1000\cdot1001.
\end{equation}
Here the $\doteq$-sign indicates that we consider a non-admissible expansion.
It is convenient to generate all Zeckendorf expansions and base phi expansions by repeatedly adding the number 1.
When we compute $\beta(N)+1$ for some number $N$, then, in general, there is a carry both to the left and (two places) to the right.
This is illustrated by the example in Equation (\ref{eq:plus1}).
Note that there is not only a {\it double carry}, but that we also have to get rid of the 11's, by replacing them with 100's.
This is allowed because of the equation $\varphi^{n+2}= \varphi^{n+1}+\varphi^{n}.$ We call this operation a {\it golden mean shift}.
When we compute $Z(N)+1$ for some number $N$, then we have to distinguish between $e_0=0$ and $e_0=1$:
$$Z(N)=e_L\dots e_2e_1\,0 {\quad\rm gives\quad} Z(N)+1=e_L\dots e_2e_1\,1$$
and
$$Z(N)=e_L\dots e_2e_1\,1 {\quad\rm gives\quad} Z(N)+1\doteq e_L\dots e_2\,10.$$
Here we used the $\doteq$-sign because (several) golden mean shifts might follow, where for the Zeckendorf expansion these are justified by the equation $F_{n+2}=F_{n+1}+F_n$. Note that replacing $e_1 1+1=01+1$ by $10$ follows from $1+1=2$ (here $e_1=0$, since 11 is not allowed).
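The bookkeeping of carries and golden mean shifts can be checked exactly by evaluating digit strings in $\mathbb{Z}[\varphi]$, writing each value as $a+b\varphi$ with $\varphi^i=F_{i-1}+F_i\varphi$. In the Python sketch below (an editorial illustration), digits larger than 1 are allowed, so intermediate non-admissible expansions such as $102\cdot01$ can be evaluated too.

```python
def fib(n):
    """Fibonacci numbers with F(0)=0, F(1)=1, and F(-n) = (-1)^(n+1) F(n)."""
    if n < 0:
        return (-1) ** (-n + 1) * fib(-n)
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def val(s):
    """Exact value of a digit string with radix point, as a pair (a, b)
    standing for a + b*phi; digits may exceed 1 (non-admissible strings)."""
    left, right = s.split('.')
    a = b = 0
    for i, d in enumerate(reversed(left)):       # digit d_i at phi^i
        a += int(d) * fib(i - 1)
        b += int(d) * fib(i)
    for j, d in enumerate(right, start=1):       # digit d_{-j} at phi^(-j)
        a += int(d) * fib(-j - 1)
        b += int(d) * fib(-j)
    return (a, b)
```

Every step of the chain computing $\beta(5)$ keeps the value constant: `val('102.01')`, `val('110.02')` and `val('1000.1001')` are all $(5, 0)$, while the golden mean shift itself is the identity `val('100.') == val('11.')`.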
\noindent For the convenience of the reader we provide a list of the Zeckendorf and base phi expansions of the first 18 natural numbers:
\begin{tabular}{|c|c|c|}
\hline
\; $N^{\phantom{|}}$ & $Z(N)$ & $\beta(N)$ \\[.0cm]
\hline
1\; & \;\;\;\;\;\;\;\,1 & \;${1}\cdot$ \\
2\; & \;\;\;\;\;\;10 & \;\:\,\,$1{0}\cdot01$ \\
3\; &\;\;\; 100 & \;$10{0}\cdot01$ \\
4\; &\;\;\; 101 & \;$10{1}\cdot01$ \\
5\; & \;\;1000 & \;\:\,$100{0}\cdot1001$ \\
6\; & \;\;1001 & \;\:\,$101{0}\cdot0001$ \\
7\; & \;\;1010 & \;$1000{0}\cdot0001$ \\
8\; & 10000 & \;$1000{1}\cdot 0001$ \\
9\; & 10001 & \;$1001{0}\cdot0101$ \\
\hline
\end{tabular}\qquad
\begin{tabular}{|c|c|c|}
\hline
\; $N^{\phantom{|}}$ & $Z(N)$ & $\beta(N)$ \\[.0cm]
\hline
10\; & \;\:10010 & \;$1010{0}\cdot0101$ \\
11\; & \;\:10100 & \;$1010{1}\cdot0101$ \\
12\; & \;\:10101 & \;\,\,$10000{0}\cdot101001$ \\
13\; & 100000 & \;\,\,$10001{0}\cdot001001$ \\
14\; & 100001 & \;\,\,$10010{0}\cdot001001$ \\
15\; & 100010 & \;\,\,$10010{1}\cdot001001$ \\
16\; & 100100 & \;\, $10100{0}\cdot100001$\\
17\; & 100101 & \;\, $10101{0}\cdot000001$\\
18\; & 101000 & \;\, $100000{0}\cdot000001$\\
\hline
\end{tabular}\quad
\subsection{Proof of Theorem \ref{th:Zeckphi}}\label{sec:Pf-Zeckphi}
The essential ingredient of the proof is the following result from \cite{Dekk-phi-FQ}, Theorem 5.1 and Remark 5.4. An alternative, short proof of the first part could be given with the Propagation Principle from Section \ref{sec:RST}.
\begin{proposition}\label{prop:D-numbers} Let $\beta(N)=(d_i(N))$ be the base phi expansion of a natural number $N$. Then:\\[-.3cm]
\hspace*{1.5cm} $d_1d_0\cdot d_{-1}(N)=10\cdot1$ never occurs,\\[.1cm]
\hspace*{1.5cm} $d_1d_0\cdot d_{-1}(N)=00\cdot1$ if and only if $N=3\lfloor n\varphi\rfloor + n + 1$ for some natural number $n$.
\end{proposition}
\noindent {\it Proof of Theorem \ref{th:Zeckphi}: } One observes that there are many $\beta(N)$'s such that $\beta^+(N)=Z(N')$ for some $N'$. Moreover, if this is the case, then also $\beta^+(N+1)=Z(N'+1)$, {\it except} if $d_{-1}(N)=1$ in $\beta(N)$. Indeed, as long as $d_{-1}(N)=0$ adding 1 gives the same result for both the Zeckendorf and the positive part of the base phi expansion, as seen in the previous section. However, suppose
$$Z(N')=\beta^+(N), {\; \rm and\;} d_{-1}(N)=1.$$
Then, by Proposition \ref{prop:D-numbers}, $d_1d_0\cdot d_{-1}(N)=00\cdot1$,
and adding 1 to $N$ gives the expansion $\beta(N+1)$ with digit block $d_1d_0\cdot d_{-1}(N+1)=10\cdot0$. So $\beta^+(N+1)$ ends in exactly the same two digits as $Z(N'+2)$, and in fact $\beta^+(N+1)=Z(N'+2)$. This means that one Zeckendorf expansion has been skipped: that of $N'+1$. Every time a $d_{-1}(N)=1$ occurs, this skipping takes place. Since $Z(0)=\beta^+(0),\dots, Z(5)=\beta^+(5)$, this gives the formula $\beta^+(N)=Z(N+S(N))$, with
$S(n)=\max\{k\in \mathbb{N}: 3\lfloor k\varphi\rfloor + k + 2 \le n\}$, by the second statement of Proposition \ref{prop:D-numbers}.
$\Box$
\section{The recursive structure of base phi expansions}\label{sec:RST}
The Lucas numbers $(L_n)=(2, 1, 3, 4, 7, 11, 18, 29, 47, 76,123, 199, 322,\dots)$ are defined by
$$ L_0 = 2,\quad L_1 = 1,\quad L_n = L_{n-1} + L_{n-2}\quad {\rm for \:}n\ge 2.$$
The Lucas numbers have a particularly simple base phi expansion.
\noindent From the well-known formula
$L_{2n}=\varphi^{2n}+\varphi^{-2n}$, and the recursion $L_{2n+1}=L_{2n}+L_{2n-1}$ we have for all $n\ge 1$
\begin{equation}\label{eq:Lm}
\beta(L_{2n}) = 10^{2n}\cdot0^{2n-1}1,\quad \beta(L_{2n+1}) = 1(01)^n\cdot(01)^n.
\end{equation}
By iterated application of the double carry and the golden mean shift to $\beta(L_{2n+1})+\beta(1)$,\; and a similar operation for $\beta(L_{2n+2}-1)$ (see also the last page of \cite{Dekk-How-to-add}) one finds that for all $n\ge 1$
\begin{equation}\label{eq:Lmplus1}
\beta(L_{2n+1}+1) = 10^{2n+1}\cdot(10)^n01,\quad \beta(L_{2n+2}-1)=(10)^{n+1}\cdot 0^{2n+1}1.
\end{equation}
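Equations (\ref{eq:Lm}) and (\ref{eq:Lmplus1}) can be confirmed exactly with the same kind of $\mathbb{Z}[\varphi]$ arithmetic. The Python sketch below (an editorial illustration) assumes only $\varphi^i=F_{i-1}+F_i\varphi$ and $L_n=F_{n-1}+F_{n+1}$.

```python
def fib(n):
    """Fibonacci numbers, extended by F(-n) = (-1)^(n+1) F(n)."""
    if n < 0:
        return (-1) ** (-n + 1) * fib(-n)
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def lucas(n):
    return fib(n - 1) + fib(n + 1)      # L_0 = 2, L_1 = 1, L_2 = 3, ...

def val(s):
    """Exact value of a digit string with radix point as (a, b) = a + b*phi."""
    left, right = s.split('.')
    a = b = 0
    for i, d in enumerate(reversed(left)):
        a, b = a + int(d) * fib(i - 1), b + int(d) * fib(i)
    for j, d in enumerate(right, start=1):
        a, b = a + int(d) * fib(-j - 1), b + int(d) * fib(-j)
    return (a, b)

def check(n):
    """Verify the four Lucas expansion formulas (eq:Lm), (eq:Lmplus1) for this n."""
    # beta(L_{2n}) = 1 0^{2n} . 0^{2n-1} 1   and   beta(L_{2n+1}) = 1 (01)^n . (01)^n
    assert val('1' + '0' * (2 * n) + '.' + '0' * (2 * n - 1) + '1') == (lucas(2 * n), 0)
    assert val('1' + '01' * n + '.' + '01' * n) == (lucas(2 * n + 1), 0)
    # beta(L_{2n+1}+1) = 1 0^{2n+1} . (10)^n 01   and   beta(L_{2n+2}-1) = (10)^{n+1} . 0^{2n+1} 1
    assert val('1' + '0' * (2 * n + 1) + '.' + '10' * n + '01') == (lucas(2 * n + 1) + 1, 0)
    assert val('10' * (n + 1) + '.' + '0' * (2 * n + 1) + '1') == (lucas(2 * n + 2) - 1, 0)
```

Running `check(n)` for the first several $n$ confirms all four identities; e.g., for $n=1$ the strings are $100\cdot01$, $101\cdot01$, $1000\cdot1001$ and $1010\cdot0001$, with values $3, 4, 5, 6$.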
\noindent As in \cite{Dekk-phi-FQ} we partition the natural numbers into Lucas intervals\:
$$\Lambda_{2n}:=[L_{2n},\,L_{2n+1}] \quad{\rm and\quad} \Lambda_{2n+1}:=[L_{2n+1}+1,\, L_{2n+2}-1].$$
The basic idea behind this partition is that if
$$\beta(N) = d_{L}d_{L-1}\dots d_1d_0\cdot d_{-1}d_{-2} \dots d_{R+1}d_R,$$
then the left most index $L=L(N)$ and the right most index $R=R(N)$ satisfy
$$L(N)=|R(N)|=2n \;{\rm iff}\; N\in \Lambda_{2n}, \quad L(N)=2n\!+1,\; |R(N)|=2n\!+2 \;{\rm iff}\; N\in \Lambda_{2n+1}.$$
This is not hard to see from the simple expressions we have for the $\beta$-expansions of the Lucas numbers; see also Theorem 1 in \cite{Grabner94}.
To obtain recursive relations, the interval $\Lambda_{2n+1}=[L_{2n+1}+1, L_{2n+2}-1]$ has to be divided into three subintervals. These three intervals are\\[-.8cm]
\begin{align*}
I_n:=&[L_{2n+1}+1,\, L_{2n+1}+L_{2n-2}-1],\\
J_n:=&[L_{2n+1}+L_{2n-2},\, L_{2n+1}+L_{2n-1}],\\
K_n:=&[L_{2n+1}+L_{2n-1}+1,\, L_{2n+2}-1].
\end{align*}
\noindent It will be very convenient to use the free group versions of words of 0's and 1's. So, for example, $(01)^{-1}0001=1^{-1}001$.
\begin{theorem}{\bf [Recursive structure theorem]}\label{th:rec}
\noindent{\,\bf I\;} For all $n\ge 1$ and $k=0,\dots,L_{2n-1}$
one has $ \beta(L_{2n}+k) = \beta(L_{2n})+ \beta(k) = 10\dots0 \,\beta(k)\, 0\dots 01.$
\noindent{\bf II} For all $n\ge 2$ and $k=1,\dots,L_{2n-2}-1$
\begin{align*}
I_n:&\quad \beta(L_{2n+1}+k) = 1000(10)^{-1}\beta(L_{2n-1}+k)(01)^{-1}1001,\\ K_n:&\quad\beta(L_{2n+1}+L_{2n-1}+k)=1010(10)^{-1}\beta(L_{2n-1}+k)(01)^{-1}0001.
\end{align*}
Moreover, for all $n\ge 2$ and $k=0,\dots,L_{2n-3}$
$$\hspace*{0.7cm}J_n:\quad\beta(L_{2n+1}+L_{2n-2}+k) = 10010(10)^{-1}\beta(L_{2n-2}+k)(01)^{-1}001001.$$
\end{theorem}
See \cite{Dekk-How-to-add} for a proof of this theorem.
As an illustration of the use of Theorem \ref{th:rec} we shall now prove a lemma that we need in Section \ref{sec:phi}.
\begin{lemma}\label{lem:no} Let $m\ge 1$ be an integer. There are {\bf (a)} no expansions $\beta(N)$ with the digit block $d_{2m}\dots d_0\cdot d_{-1}(N)=10^{2m}\cdot1$, and there are {\bf (b)} no expansions $\beta(N)$ with the digit block $d_{2m+1}\dots d_0\cdot d_{-1}(N)=10^{2m+1}\cdot0$.
\end{lemma}
\noindent{\it Proof:} \:{\bf (a)}. The first time $d_{2m}\dots d_0=10^{2m}$ occurs is for $N=L_{2m}$, and then $d_{-1}(N)=0$ (see Equation (\ref{eq:Lm})). This is also the only occurrence of the digit block $10^{2m}$ at the end of the expansions of the numbers $N$ in $\Lambda_{2m}$. It is also obvious that the digit block $10^{2m}$ will not appear at the end of the expansions of the numbers $N$ in $\Lambda_{2m+1}$.
From part {\bf I} of the Recursive Structure Theorem we see that the digit block $10^{2m}$ at the end of the expansions of the numbers $N$ in $\Lambda_{2m+2}$ only occurs in combination with $d_{-1}(N)=0$.
From part {\bf II} of the Recursive Structure Theorem we will see that the digit block $10^{2m}$ at the end of the expansions of the numbers $N$ in $\Lambda_{2m+3}$ only occurs in combination with $d_{-1}(N)=0$. This is definitely more complicated than this observation for $\Lambda_{2m+2}$. We have to split $\Lambda_{2m+3}$ into the three pieces $I_{m+1}, J_{m+1}$ and $K_{m+1}$. The middle piece $J_{m+1}$ corresponds to numbers in $\Lambda_{2m}$, from which we already know that $d_{2m}\dots d_0(N)=10^{2m}$ implies that $d_{-1}(N)=0$. The numbers $N$ in the first piece, $I_{m+1}$, correspond to numbers in $\Lambda_{2m+1}$ from which the digits $d_{2m+1}d_{2m}=10$ have been replaced by
the digits $d_{2m+3}d_{2m+2}d_{2m+1}d_{2m}=1000$. In particular $d_{2m}=0$ excludes any occurrence of $d_{2m}\dots d_0=10^{2m}$. In the same way occurrences of $d_{2m}\dots d_0=10^{2m}$ in $K_{m+1}$ are excluded.
The final conclusion is that both intervals $\Lambda_{2m+2}$ and $\Lambda_{2m+3}$ only contain numbers $N$ for which the occurrence of $10^{2m}$ as end block implies $d_{-1}(N)=0$. In the same way, these properties of $\Lambda_{2m+2}$ and $\Lambda_{2m+3}$ carry over to the two Lucas intervals $\Lambda_{2m+4}$ and $\Lambda_{2m+5}$, and we can finish the proof by induction.
{\bf (b)}. The first time $d_{2m+1}\dots d_0=10^{2m+1}$ occurs is for $N=L_{2m+1}+1$ in $\Lambda_{2m+1}$, and then $d_{-1}(N)=1$ (see Equation (\ref{eq:Lmplus1})). This is also the only occurrence of $d_{2m+1}\dots d_0=10^{2m+1}$ in $\Lambda_{2m+1}$. Moreover, in $\Lambda_{2m+2}$ the word $10^{2m+1}$ does not occur at all as end block. We finish the proof as in Part {\bf (a)}, with the sole difference that now $1010^{2m+1}$ occurring as end block in $\Lambda_{2m+3}$,
yields an instance of $10^{2m+1}\cdot 1$ in $\Lambda_{2m+3}$.
$\Box$
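Lemma \ref{lem:no} is also easy to spot-check by brute force. The self-contained Python sketch below (an editorial illustration) recomputes base phi expansions with exact greedy $\mathbb{Z}[\varphi]$ arithmetic and scans all trailing zero runs of both parities: a trailing block $10^{2m}$ must be followed by $d_{-1}=0$, and a trailing block $10^{2m+1}$ by $d_{-1}=1$.

```python
def fib(n):
    """Fibonacci numbers, extended by F(-n) = (-1)^(n+1) F(n)."""
    if n < 0:
        return (-1) ** (-n + 1) * fib(-n)
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def nonneg(a, b):
    """Exact test for a + b*phi >= 0."""
    s, t = 2 * a + b, b
    if s >= 0 and t >= 0:
        return True
    if s <= 0 and t <= 0:
        return s == 0 and t == 0
    return s * s >= 5 * t * t if s > 0 else 5 * t * t >= s * s

def base_phi(N):
    """Return (beta^+(N), beta^-(N)) as digit strings, greedily and exactly."""
    a, b = N, 0
    k = 0
    while nonneg(a - fib(k), b - fib(k + 1)):
        k += 1
    digits = {}
    while (a, b) != (0, 0):
        pa, pb = fib(k - 1), fib(k)
        if nonneg(a - pa, b - pb):
            digits[k] = 1
            a, b = a - pa, b - pb
        else:
            digits[k] = 0
        k -= 1
    left = ''.join(str(digits.get(i, 0)) for i in range(max(digits), -1, -1))
    lo = min(digits)
    right = ''.join(str(digits.get(i, 0)) for i in range(-1, lo - 1, -1))
    return left, right

def end_block_ok(N):
    """Check N against the lemma: an end block 1 0^{2m} forces d_{-1} = 0,
    and an end block 1 0^{2m+1} forces d_{-1} = 1 (for m >= 1)."""
    left, right = base_phi(N)
    for z in range(2, len(left)):        # z = exact number of trailing zeros
        if left.endswith('1' + '0' * z):
            if z % 2 == 0:               # case (a): 1 0^{2m} . 1 never occurs
                return right[:1] != '1'
            return right[:1] == '1'      # case (b): 1 0^{2m+1} . 0 never occurs
    return True
```

Scanning, say, all $N$ up to a few hundred produces no counterexample, in agreement with the induction above.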
It is convenient to have a second version of the Recursive Structure Theorem, which makes the even case (Part {\bf I}) and the odd case (Part {\bf II}) more closely parallel. It will also be convenient to have the $\Lambda$-intervals play a more visible role in the recursion. In fact, it is easy to check that the three intervals $I_n, J_n$ and $K_n$ in the Recursive Structure Theorem satisfy
$$I_n=\Lambda^{(a)}_{2n-1}:=\Lambda_{2n-1}+L_{2n},\; J_n=\Lambda^{(b)}_{2n-2}:=\Lambda_{2n-2}+L_{2n+1},\; K_n=\Lambda^{(c)}_{2n-1}:=\Lambda_{2n-1}+L_{2n+1}. $$
In this equation we employ the usual notation $A+x:=\{a+x:a\in A\}$ for a set of real numbers $A$ and a real number $x$.
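These identities are simple Lucas number manipulations, easy to confirm numerically. The Python sketch below (an editorial illustration) encodes intervals as endpoint pairs and checks the three identities, together with the fact that $I_n, J_n, K_n$ partition $\Lambda_{2n+1}$ consecutively, for $n\ge 2$ (so that all indices are nonnegative).

```python
def lucas(n):
    """Lucas numbers L_0 = 2, L_1 = 1, L_n = L_{n-1} + L_{n-2}."""
    a, b = 2, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def Lam(m):
    """The Lucas interval Lambda_m as an endpoint pair (first, last)."""
    if m % 2 == 0:
        return (lucas(m), lucas(m + 1))              # [L_{2n}, L_{2n+1}]
    return (lucas(m) + 1, lucas(m + 1) - 1)          # [L_{2n+1}+1, L_{2n+2}-1]

def shift(I, x):
    return (I[0] + x, I[1] + x)

def check(n):
    I = (lucas(2 * n + 1) + 1, lucas(2 * n + 1) + lucas(2 * n - 2) - 1)
    J = (lucas(2 * n + 1) + lucas(2 * n - 2), lucas(2 * n + 1) + lucas(2 * n - 1))
    K = (lucas(2 * n + 1) + lucas(2 * n - 1) + 1, lucas(2 * n + 2) - 1)
    assert I == shift(Lam(2 * n - 1), lucas(2 * n))      # I_n = Lambda_{2n-1} + L_{2n}
    assert J == shift(Lam(2 * n - 2), lucas(2 * n + 1))  # J_n = Lambda_{2n-2} + L_{2n+1}
    assert K == shift(Lam(2 * n - 1), lucas(2 * n + 1))  # K_n = Lambda_{2n-1} + L_{2n+1}
    # I_n, J_n, K_n partition Lambda_{2n+1} consecutively
    assert I[0] == Lam(2 * n + 1)[0] and K[1] == Lam(2 * n + 1)[1]
    assert J[0] == I[1] + 1 and K[0] == J[1] + 1
```

For instance, `check(2)` verifies the decomposition of $\Lambda_5=[12,17]$ into $I_2, J_2, K_2$.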
\begin{theorem}{\bf [Recursive structure theorem: 2nd version]}\label{th:rec2}\\
\noindent{\,\bf (i): Odd\;} For all $n\ge 1$ one has\\[-.3cm]
$$\Lambda_{2n+1}=\Lambda^{(a)}_{2n-1}\cup\Lambda^{(b)}_{2n-2}\cup\Lambda^{(c)}_{2n-1}, $$
where $\Lambda^{(a)}_{2n-1}=\Lambda_{2n-1}+L_{2n}$,\; $\Lambda^{(b)}_{2n-2}=\Lambda_{2n-2}+L_{2n+1}$, and $\Lambda^{(c)}_{2n-1}=\Lambda_{2n-1}+L_{2n+1}$.\\
We have\\[-.8cm]
\begin{subequations} \label{eq:shift-odd}
\begin{align}
\beta(N)= & \;1000(10)^{-1}\,\beta(N-L_{2n})\,(01)^{-1}1001& for\; N\in \Lambda^{(a)}_{2n-1}, \\
\beta(N)= & \; 100\,\beta(N-L_{2n+1})\,(01)^{-1}001001& for\; N\in \Lambda^{(b)}_{2n-2},\\
\beta(N)= & \;10\,\beta(N-L_{2n+1})\,(01)^{-1}0001 & for\; N\in \Lambda^{(c)}_{2n-1}.
\end{align}\\[-.6cm]
\end{subequations}
\noindent{\,\bf (ii): Even\;} For all $n\ge 1$ one has\\[-.3cm]
$$\Lambda_{2n+2}=\Lambda^{(a)}_{2n}\cup\Lambda^{(b)}_{2n-1}\cup\Lambda^{(c)}_{2n}, $$
where $\Lambda^{(a)}_{2n}=\Lambda_{2n}+L_{2n+1}$,\; $\Lambda^{(b)}_{2n-1}=\Lambda_{2n-1}+L_{2n+2}$, and $\Lambda^{(c)}_{2n}=\Lambda_{2n}+L_{2n+2}$.\\
We have\\[-.8cm]
\begin{subequations} \label{eq:shift-even}
\begin{align}
\beta(N)= & \;1000(10)^{-1}\,\beta(N-L_{2n+1})\,(01)^{-1}0001 & for\; N\in \Lambda^{(a)}_{2n},\phantom{x} \label{eq:5a} \\
\beta(N)= & \; 100\,\beta(N-L_{2n+2})\,01 & for\; N\in \Lambda^{(b)}_{2n-1}, \label{eq:5b}\\
\beta(N)= & \;10\,\beta(N-L_{2n+1})\,01 & for\; N\in \Lambda^{(c)}_{2n}.\phantom{x.} \label{eq:5c}
\end{align}
\end{subequations}
\end{theorem}
\noindent{\it Proof:} \:{\bf (i): Odd\;} This is a rephrasing of Part {\bf II} in Theorem \ref{th:rec}.\\
{\bf (ii): Even\;} We start by showing that the three intervals $\Lambda^{(a)}_{2n},\Lambda^{(b)}_{2n-1},\Lambda^{(c)}_{2n}$ partition $\Lambda_{2n+2}$.
The first number in $\Lambda^{(a)}_{2n}$ is $L_{2n}+L_{2n+1}=L_{2n+2}$, which is the first number of $\Lambda_{2n+2}$. The last number in $\Lambda^{(a)}_{2n}$ is $L_{2n+1}+L_{2n+1}=2L_{2n+1}$.
The first number in $\Lambda^{(b)}_{2n-1}$ is $L_{2n-1}+1+L_{2n+2}=L_{2n-1}+1+L_{2n}+L_{2n+1}=2L_{2n+1}+1$, which indeed, is the successor of the last number in $\Lambda^{(a)}_{2n}$.
The last number in $\Lambda^{(b)}_{2n-1}$ is $L_{2n}-1+L_{2n+2}$, which indeed has successor $L_{2n}+L_{2n+2}$, the first number in $\Lambda^{(c)}_{2n}$. Finally, the last number in $\Lambda^{(c)}_{2n}$ is $L_{2n+1}+L_{2n+2}=L_{2n+3}$, which is the last number in $\Lambda_{2n+2}$.
To prove Equation (\ref{eq:5a}), we first show, using Equation (\ref{eq:Lm}) twice, that this equation is correct for $N=L_{2n+2}$, which is the first number of $\Lambda^{(a)}_{2n}$:
\begin{align*}
\beta(L_{2n+2}) & = 10^{2n+2}\cdot0^{2n+1}1\\
& = 1000\,0^{2n-1}\cdot0^{2n-2}\,0001 \\
& = 1000(10)^{-1}10^{2n}\cdot0^{2n-1}1\,(01)^{-1}0001 \\
& = 1000(10)^{-1}\beta(L_{2n})\,(01)^{-1}0001 \\
& = 1000(10)^{-1}\beta(L_{2n+2}-L_{2n+1})\,(01)^{-1}0001.
\end{align*}
Equation (\ref{eq:5a}) will also be correct for all other $N\in\Lambda^{(a)}_{2n}$, because as above, the digit block $d_Ld_{L-1}d_{L-2}d_{L-3}(N)$ will always be 1000,
and the digit block $d_{L-2}d_{L-3}(N-L_{2n+1})$ will always be 10. For the negative digits we have a similar property.
Equation (\ref{eq:5b}) follows directly from the fact that if $N\in \Lambda^{(b)}_{2n-1}$, then
\begin{align*}
\beta(N-L_{2n+2})+\beta(L_{2n+2}) & = d_{2n-1}\dots d_0\cdot d_{-1}\dots d_{-2n} + 10^{2n+2}\cdot0^{2n+1}1\\
& = d_{2n-1}\dots d_0\cdot d_{-1}\dots d_{-2n} + 100\,0^{2n}\cdot0^{2n}\,01\\
& = 100d_{2n-1}\dots d_0\cdot d_{-1}\dots d_{-2n}01,
\end{align*}
since the numbers in $\Lambda_{2n-1}$ have a $\beta$-expansion $d_{2n-1}\dots d_0\cdot d_{-1}\dots d_{-2n}$ with $2n$ digits on the left and $2n$ digits on the right.
Note that we do not have to use the $\dot{=}$-sign as there are no double carries or golden mean shifts.
Equation (\ref{eq:5c}) follows in the same way.
$\Box$
Lemma \ref{lem:no} is an example of a general phenomenon, which we call the Propagation Principle. It has an extension to combinations of digit blocks which we will give in Lemma \ref{lem:prop}.
The Propagation Principle is closely connected to the following notion.
We say an interval $\Gamma$ and a union of intervals $\Delta$ of natural numbers are \emph{$\beta$-congruent modulo $q$} for some natural number $q$ if
$\Delta$ is a disjoint union of translations of $\Lambda$-intervals, such that for all $j=1,\dots,|\Gamma|$,
if $N$ is the $j^{\rm th}$ element of $\Gamma$, and $N'$ is the $j^{\rm th}$ element of $\Delta$, then
$$d_{q-1}\dots d_1d_0\cdot d_{-1}\dots d_{-q}(N)=d_{q-1}\dots d_1d_0\cdot d_{-1}\dots d_{-q}(N').$$
We write this as\, $\Gamma\cong \Delta_1\Delta_2\dots\Delta_r \mod q$ when the number of translations of $\Lambda$-intervals in $\Delta$ equals $r$.
Note that the definition implies that the $r$ disjoint translations of $\Lambda$-intervals appear in the natural order, and that we refrain from indicating the translations.
Simple examples are $\Lambda_5 \cong \Lambda_3\Lambda_2\Lambda_3 \mod 1$ and $\Lambda_6 \cong \Lambda_4\Lambda_3\Lambda_4 \mod 3$.
Theorem \ref{th:rec2} is a source of many more examples.
An important observation is that if $\Gamma\cong \Delta_1\Delta_2\dots\Delta_r \mod q$ and $\Gamma'\cong \Delta'_1\Delta'_2\dots\Delta'_{r'} \mod q'$, and $\Gamma\cup \Gamma'$ is an interval, then
\begin{equation}\label{eq:GG}
\Gamma\Gamma':=\Gamma\cup \Gamma'\cong \Delta_1\Delta_2\dots\Delta_r\Delta'_1\Delta'_2\dots\Delta'_{r'} \mod \min\{q,q'\}.
\end{equation}
To keep the formulation and the proof of the following lemma simple, we only formulate it for central digit blocks of length 8 (i.e., $q=4$).
In the following, occurrences of digit blocks in $\beta$-expansions have to be interpreted with additional $0$'s added to the left of the expansion.
\begin{lemma}{\bf [Propagation Principle]}\label{lem:prop}
\noindent {\bf (a)}\, Suppose the digit block $d_{3}\dots d_0\cdot d_{-1}\dots d_{-4}$ does not occur in the $\beta$-expansions of the numbers $N=1,2,\dots,17$. Then it does not occur in any $\beta$-expansion.
\noindent {\bf (b)}\, Let $D$ be an integer between 1 and 4. Suppose the digit block $d_{3}\dots d_0\cdot d_{-1}\dots d_{-4}$ occurs in the $\beta$-expansion of $N$ if and only if the digit block $e_{3}\dots e_0\cdot e_{-1}\dots e_{-4}$ occurs in the $\beta$-expansion of the number $N-D$, for $N=D,D+1,\dots,D+17$. Then this coupled occurrence holds for all $N$.
\end{lemma}
\noindent{\it Proof:} \:{\bf (a)}\, Let us say that a Lucas interval $\Lambda_m$ satisfies property $\cal D$ if the digit block $d_{3}\dots d_0\cdot d_{-1}\dots d_{-4}$ does not occur in the $\beta$-expansions of the numbers $N$ from $\Lambda_m$. Note that $N=17$ is the last number in $\Lambda_5$, so it is given that the intervals $\Lambda_1,\dots,\Lambda_5$ all satisfy property $\cal D$. Also $\Lambda_6$ satisfies property $\cal D$, by an application of Theorem \ref{th:rec}, Part {\bf I}.
The interval $\Lambda_7= \Lambda^{(a)}_{5}\cup\Lambda^{(b)}_{4}\cup\Lambda^{(c)}_{5}$ satisfies property $\cal D$. For $\Lambda^{(a)}_{5}$, this follows since $\Lambda_5$ satisfies property $\cal D$, and (\ref{eq:5a}) does not change the central block of length 8. The same argument applies to $\Lambda^{(c)}_{5}$.
For the interval $\Lambda^{(b)}_{4}$, Equation (\ref{eq:5b}) gives that the positive digit blocks $d_{3}\dots d_0$ are the same as for the corresponding numbers in $\Lambda_4$, and that the negative digit blocks are $d_{-1}\dots d_{-4}(7)(01)^{-1}00=0000$ and $d_{-1}\dots d_{-4}(9)(01)^{-1}00=0100$, which already occurred in the expansions $\beta(0)$ and $\beta(3)$.
The interval $\Lambda_8= \Lambda^{(a)}_{6}\cup\Lambda^{(b)}_{5}\cup\Lambda^{(c)}_{6}$ also satisfies property $\cal D$, since the word transformations in Equation (\ref{eq:shift-even}) do not change the central blocks of length 8 in $\Lambda_{6}$, nor in $\Lambda_{5}$.
Another way to put this is that $\Lambda_8\cong \Lambda_{6}\Lambda_{5}\Lambda_{6} \mod 4$. Since the $\beta$-expansions only get longer, we have in fact that
$\Lambda_m\cong \Lambda_{m-2}\Lambda_{m-3}\Lambda_{m-2} \mod 4$ for all $m\ge 8$. Thus it follows by induction that $\Lambda_m$ satisfies property $\cal D$ for all $m\ge 8$.
\noindent\:{\bf (b)}\, Let us say that a Lucas interval $\Lambda_m$, $m\ge 1$, satisfies property $\cal E$ if the numbers $N$ from $\Lambda_m$ have the property that the digit block $d_{3}\dots d_0\cdot d_{-1}\dots d_{-4}$ occurs in the $\beta$-expansion of $N$ if and only if the digit block $e_{3}\dots e_0\cdot e_{-1}\dots e_{-4}$ occurs in the $\beta$-expansion of $N-D$. Then it is given that $\Lambda_1,\dots,\Lambda_5$ all satisfy property $\cal E$. The proof continues as in part {\bf (a)}, but we have to take into account that the numbers $N-D$ and $N$ can be elements of different Lucas intervals. This `boundary' problem is easily solved by induction: it is given for $\Lambda_4\Lambda_{5}$ and $\Lambda_5\Lambda_{6}$, and the equation used for induction is\\[-.4cm]
$$\Lambda_{m+1}\Lambda_{m+2}\cong \Lambda_{m-1}\Lambda_{m-2}\Lambda_{m-1}\Lambda_{m}\Lambda_{m-1}\Lambda_{m} \mod 4.$$
This equation is an instance of Equation (\ref{eq:GG}).
$\Box$
\section{A closer look at the Lucas intervals}\label{sec:closer}
Here we say more on the idea of splitting Lucas intervals into unions of translated Lucas intervals.
To keep the presentation simple, we start by showing how all the natural numbers can be split into translations of the three Lucas intervals $\Lambda_3, \Lambda_4$ and $\Lambda_5$.
This can of course be done in many ways, but we will consider a way derived from the Recursive Structure Theorem \ref{th:rec2}.
One has
\begin{align*}
\Lambda_6= & \:\Lambda^{(a)}_{4}\cup\Lambda^{(b)}_{3}\cup\Lambda^{(c)}_{4}=[\Lambda_{4}\!+\!L_5]\cup[\Lambda_{3}\!+\!L_6]\cup[\Lambda_{4}\!+\!L_6], \\
\Lambda_7= &\: \Lambda^{(a)}_{5}\cup\Lambda^{(b)}_{4}\cup\Lambda^{(c)}_{5}=[\Lambda_{5}\!+\!L_5]\cup[\Lambda_{4}\!+\!L_7]\cup[\Lambda_{5}\!+\!L_7], \\
\Lambda_8= & \:\Lambda^{(a)}_{6}\cup\Lambda^{(b)}_{5}\cup\Lambda^{(c)}_{6}=[\Lambda_{6}\!+\!L_7]\cup[\Lambda_{5}\!+\!L_8]\cup[\Lambda_{6}\!+\!L_8]\\
= & \: [\Lambda_{4}\!+\!L_5\!+\!L_7]\cup[\Lambda_{3}\!+\!L_6\!+\!L_7]\cup[\Lambda_{4}\!+\!L_6\!+\!L_7]\cup[\Lambda_{5}\!+\!L_8]\\
& \hspace*{6cm}\cup[\Lambda_{4}\!+\!L_5\!+\!L_8]\cup[\Lambda_{3}\!+\!L_6\!+\!L_8]\cup[\Lambda_{4}\!+\!L_6\!+\!L_8].
\end{align*}
Note how the splitting of $\Lambda_6$ was used in the splitting of $\Lambda_8$. Continuing in this fashion we obtain inductively a splitting of all Lucas intervals $\Lambda_n$, which we call the \emph{canonical} splitting.
What is the sequence of translated intervals $\Lambda_3, \Lambda_4$ and $\Lambda_5$ created in this way?
Let the word $C(\Lambda_{n})$ code these successive intervals in $\Lambda_n$ by their indices 3, 4 or 5. Let $\kappa$ be the morphism on the monoid $\{3,4,5\}^*$ defined by
$$\kappa(3)=5, \quad \kappa(4)=434, \quad \kappa(5)=545.$$
\begin{theorem}\label{th:L-345} For any $n\ge 3$ the interval $\Lambda_n$ is a union of adjacent translations of the three intervals $\Lambda_3, \Lambda_4$ and $\Lambda_5$. If $C(\cdot)$ is the coding function for the canonical splittings then for $n\ge 0$
$$C(\Lambda_{2n+4})=\kappa^n(4), \quad C(\Lambda_{2n+5})=\kappa^n(5).$$
\end{theorem}
\noindent{\it Proof:} By induction. For $n=0$ this is trivially true.
Suppose it is true for $k=1,\dots,n$. Then by Theorem \ref{th:rec2},
\begin{align*}
C(\Lambda_{2n+6})&=C(\Lambda_{2n+4})C(\Lambda_{2n+3})C(\Lambda_{2n+4})=\kappa^n(4)\kappa^{n-1}(5)\kappa^n(4)=\kappa^{n-1}(\kappa(4)5\kappa(4))\\
&=\kappa^{n-1}(4345434)=\kappa^{n-1}(\kappa^2(4))=\kappa^{n+1}(4), \\
C(\Lambda_{2n+7}) &=C(\Lambda_{2n+5})C(\Lambda_{2n+4})C(\Lambda_{2n+5})= \kappa^n(5)\kappa^n(4)\kappa^n(5)=\kappa^{n}(545)=\kappa^{n+1}(5).\quad \Box
\end{align*}
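As a consistency check on this theorem, the following Python sketch (an illustration, not part of the formal development) verifies that the total length of the intervals coded by $\kappa^n(4)$ and $\kappa^n(5)$ equals $|\Lambda_{2n+4}|$, respectively $|\Lambda_{2n+5}|$. The interval sizes $|\Lambda_3|=2$, $|\Lambda_4|=5$, $|\Lambda_5|=6$ and the formula in \texttt{lam\_size} follow the interval conventions reconstructed from the examples in the text.

```python
# Sketch: the coded length of kappa^n(4) resp. kappa^n(5) must match
# |Lambda_{2n+4}| resp. |Lambda_{2n+5}| (assumed interval conventions).

KAPPA = {'3': '5', '4': '434', '5': '545'}
SIZE = {'3': 2, '4': 5, '5': 6}        # |Lambda_3|, |Lambda_4|, |Lambda_5|

def kappa(word):
    return ''.join(KAPPA[c] for c in word)

def kappa_iter(word, n):
    for _ in range(n):
        word = kappa(word)
    return word

def lucas(m):
    a, b = 2, 1                        # L_0 = 2, L_1 = 1
    for _ in range(m):
        a, b = b, a + b
    return a

def lam_size(m):
    # |Lambda_m| = L_{m+1} - L_m + 1 for even m, L_{m+1} - L_m - 1 for odd m
    return lucas(m + 1) - lucas(m) + (1 if m % 2 == 0 else -1)

for n in range(8):
    assert sum(SIZE[c] for c in kappa_iter('4', n)) == lam_size(2 * n + 4)
    assert sum(SIZE[c] for c in kappa_iter('5', n)) == lam_size(2 * n + 5)
print("coded lengths consistent for n = 0..7")
```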
We continue this analysis, now focussing on the partition of the natural numbers by the intervals
$$\Xi_n:=\Lambda_{2n-1}\cup\Lambda_{2n}=[L_{2n-1}+1,L_{2n+1}].$$
The relevance of the $\Xi_n$ is that these are exactly the intervals where $\beta^-(N)$ has length $2n$, for $n \ge 1$. The results in the sequel of this section will therefore be useful in Section \ref{sec:neg}.
There are three (Sturmian) morphisms $f,g$ and $h$ that play an important role in these results, where it is convenient to look at $a$ and $b$ both as integers and as abstract letters. The morphisms are given by
\begin{equation}\label{eq:Fib3}
f:\mor{aba}{ab}\,,\qquad g:\mor{baa}{ba}\,,\qquad h:\mor{aab}{ab}.
\end{equation}
\begin{theorem}\label{th:d-345} For any $n\ge 2$ the interval $\Xi_n$ is a union of adjacent translations of the three intervals $\Lambda_3, \Lambda_4$ and $\Lambda_5$. If $C(\cdot)$ is the coding function for the canonical splittings, then for $n\ge 0$
$$C(\Xi_{n+2})=\delta(h^n(b)),$$
where $\delta$ is the decoration morphism given by $\delta(a)=54,\:\delta(b)=34$.
\end{theorem}
\noindent{\it Proof:} We first establish the commutation relation\;$\kappa\,\delta=\delta \,h.$\\
It suffices to prove this for the generators, and indeed:
$$ \kappa(\delta(a))=\kappa(54)=545434=\delta(aab)=\delta(h(a)),\quad \kappa(\delta(b))=\kappa(34)=5434=\delta(ab)=\delta(h(b)). $$
Using Theorem \ref{th:L-345}, and the commutation relation we obtain for $n\ge 1$
\begin{align*}
C(\Xi_{n+2})&=C(\Lambda_{2n+3})C(\Lambda_{2n+4})=\\
&=\kappa^{n-1}(5)\kappa^{n}(4)=\kappa^{n-1}(5434)=\kappa^{n-1}(\delta(ab))=\delta(h^{n-1}(ab))=\delta(h^n(b)).
\end{align*}
For $n= 0$ we have $\Xi_2=\Lambda_3\cup\Lambda_4$, so $C(\Xi_2)=34=\delta(b)$. \quad $\Box$
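The commutation relation and the length bookkeeping behind this proof can be checked by machine. The sketch below (an illustration, not part of the formal development) verifies $\kappa\,\delta=\delta\,h$ on a few words and checks that the $\Lambda$-length of $\delta(h^n(b))$ equals $|\Xi_{n+2}|=L_{2n+4}$; the sizes $|\Lambda_3|=2$, $|\Lambda_4|=5$, $|\Lambda_5|=6$ follow the interval conventions reconstructed from the text.

```python
# Sketch: check kappa o delta = delta o h and the length of delta(h^n(b)).

KAPPA = {'3': '5', '4': '434', '5': '545'}
H = {'a': 'aab', 'b': 'ab'}
DELTA = {'a': '54', 'b': '34'}
SIZE = {'3': 2, '4': 5, '5': 6}        # |Lambda_3|, |Lambda_4|, |Lambda_5| (assumed)

def apply(mor, w):
    return ''.join(mor[c] for c in w)

def iterate(mor, w, n):
    for _ in range(n):
        w = apply(mor, w)
    return w

def lucas(m):
    a, b = 2, 1                        # L_0 = 2, L_1 = 1
    for _ in range(m):
        a, b = b, a + b
    return a

# commutation holds on the generators, hence on every word
for w in ('a', 'b', 'ab', 'aab', 'abaab'):
    assert apply(KAPPA, apply(DELTA, w)) == apply(DELTA, apply(H, w))

# Lambda-length of C(Xi_{n+2}) = delta(h^n(b)) equals |Xi_{n+2}| = L_{2n+4}
for n in range(8):
    assert sum(SIZE[c] for c in apply(DELTA, iterate(H, 'b', n))) == lucas(2 * n + 4)
print("commutation and lengths check out for n = 0..7")
```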
\section{Generalized Beatty sequences}\label{sec:GBS}
Let $\alpha$ be an irrational number larger than 1. We call any sequence $V$ with terms of the form $V_n = p\lfloor n \alpha \rfloor + q n +r $, $n\ge 1$ a \emph{generalized Beatty sequence}. Here $p,q$ and $r$ are integers, called the \emph{parameters} of $V$, and we write $V=V(p,q,r)$.
In this paper we will only consider the case $\alpha=\varphi$, the golden mean, so any mention of a generalized Beatty sequence assumes that $\alpha=\varphi$.
A prominent role is played by the lower Wythoff sequence $A:=V(1,0,0)$ and the upper Wythoff sequence $B:=V(1,1,0)$. These are complementary sequences, associated to the Beatty pair ($\varphi,\varphi^2)$.
Here is the key lemma that tells us how generalized Beatty sequences behave under composition. In the statement below (Lemma \ref{lem:VA}) a typo in the source has been corrected.
\begin{lemma}{\bf (\cite{GBS}, Corollary 2) }\label{lem:VA} Let $V$ be a generalized Beatty sequence with parameters $(p,q,r)$. Then $VA$ and $VB$ are generalized Beatty sequences with parameters $(p_{V\!A},q_{V\!A},r_{V\!A})=(p+q,p,r-p)$ and $(p_{V\!B},q_{V\!B},r_{V\!B})=(2p+q,p+q,r)$.
\end{lemma}
It will be useful later on to have a sort of converse of this lemma.
If $C$ and $D$ are two ${\mathbb{N}}$-valued sequences, then we denote by $C\sqcup D$ the sequence whose terms give the set $C({\mathbb{N}})\cup D({\mathbb{N}})$, in increasing order.
\begin{lemma}\label{lem:VAconv} Let $V=V(p,q,r)$ be a generalized Beatty sequence. Let $U$ and $W$ be two disjoint sequences with union $V=U\sqcup W$:
$$U({\mathbb{N}})\cap W({\mathbb{N}})=\emptyset,\quad U({\mathbb{N}})\cup W({\mathbb{N}})=V({\mathbb{N}}).$$
Suppose $U$ is a generalized Beatty sequence with parameters $(p+q, p, r-p)$. Then $W$ is the generalized Beatty sequence with parameters $(2p+q,p+q,r)$.
\end{lemma}
\noindent{\it Proof:} According to Lemma \ref{lem:VA}, we have $U=VA$. Since $A$ and $B$ are disjoint with union ${\mathbb{N}}$, we must have $W=VB$, and Lemma \ref{lem:VA} gives that $W$ is a generalized Beatty sequence with parameters $(2p+q,p+q,r)$.
$\Box$
Here is the key lemma to `recognize' a generalized Beatty sequence, taken from \cite{GBS}.
If $S$ is a sequence, we denote its sequence of first order differences as $\Delta S$, i.e., $\Delta S$ is defined by
$$\Delta S(n) = S(n+1)-S(n), \quad {\rm for\;} n=1,2,\dots.$$
\begin{lemma}\label{lem:diff}{\rm \bf (\cite{GBS})} Let $V = (V_n)_{n \geq 1}$ be the generalized Beatty
sequence defined by $V_n = p\lfloor n \varphi \rfloor + q n +r$, and let $\Delta V$ be the
sequence of its first differences. Then $\Delta V$ is the Fibonacci word on the alphabet
$\{2p+q, p+q\}$. Conversely, if $x_{a,b}$ is the Fibonacci word on the alphabet
$\{a,b\}$, then any $V$ with $\Delta V= x_{a,b}$ is a generalized Beatty sequence
$V=V(a-b,2b-a,r)$ for some integer $r$.
\end{lemma}
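Both lemmas are easy to illustrate numerically. The following Python sketch (not part of the formal development; the example parameters $(p,q,r)=(1,2,-1)$ and the bound $200$ are arbitrary choices) checks the Fibonacci-word structure of the first differences and the composition rules for $VA$ and $VB$.

```python
# Sketch: for alpha = phi, the first differences of V(p,q,r) form the Fibonacci
# word on {2p+q, p+q}, and VA, VB are again generalized Beatty sequences.
import math

PHI = (1 + math.sqrt(5)) / 2

def V(p, q, r, n):
    return p * math.floor(n * PHI) + q * n + r

def fib_word(k):
    """Prefix of length k of the Fibonacci word, fixed point of a->ab, b->a."""
    w = 'a'
    while len(w) < k:
        w = ''.join('ab' if c == 'a' else 'a' for c in w)
    return w[:k]

p, q, r, M = 1, 2, -1, 200             # example GBS; V = R_{10} in the text
diffs = [V(p, q, r, n + 1) - V(p, q, r, n) for n in range(1, M)]
expect = [2 * p + q if c == 'a' else p + q for c in fib_word(M - 1)]
assert diffs == expect

A = lambda n: math.floor(n * PHI)      # lower Wythoff sequence
B = lambda n: math.floor(n * PHI) + n  # upper Wythoff sequence
for n in range(1, M):
    assert V(p, q, r, A(n)) == V(p + q, p, r - p, n)       # parameters of VA
    assert V(p, q, r, B(n)) == V(2 * p + q, p + q, r, n)   # parameters of VB
print("difference and composition identities verified up to n =", M - 1)
```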
\section{The positive powers of the golden mean}\label{sec:phi}
For any digit block $w$ we will determine the sequence $R_w$ of those numbers $N$ with digit block $w=d_{m-1}\dots d_0$ as suffix of $\beta^+(N)$. We sometimes call $w$ an \emph{end block} of $\beta^+(N)$. More generally, we are also interested in occurrence sequences of numbers $N$ with $d_{m-1}\dots d_0(N)=w$ and $d_{-1}\dots d_{-m'}(N)=v$. We denote these as $R_{w\cdot v}$.
For a couple of small values of $m,m'$, we have the following result from the paper \cite{Dekk-phi-FQ}, Theorem 5.1.
\begin{theorem}\label{th:d0d1} {\bf (\cite{Dekk-phi-FQ})} Let $\beta(N)=(d_i(N))$ be the base phi expansion of a natural number $N$. Then:\\[.1cm]
$R_{1}=V_0(1,2,1)$,\quad $R_{10}=V(1,2,-1)$,\quad $R_{00\cdot 0}=V_0(1,2,0)$,\quad $R_{00\cdot 1}=V(3,1,1)$.
\end{theorem}
Here it made sense to add $N=1$ to $V(1,2, 1)$, and $N=0$ to $R_{00\cdot 0}$. We accomplished this by adding the $n=0$ term to the generalized Beatty sequence $V$: we define $V_0$ by $$V_0(p,q,r) := (p\lfloor n \varphi \rfloor + q n +r)_{n\ge 0}.$$
The digit blocks $w=d_{m-1}\dots d_1\,0$ behave rather differently from digit blocks $w=d_{m-1}\dots d_1\,1$. We therefore analyse these cases separately, in Sections \ref{sec:w0} and \ref{sec:w1}.
\subsection{Digit blocks $w=d_{m-1}\dots d_10$}\label{sec:w0}
We order the digit blocks $w$ with $d_0=0$ in a Fibonacci tree. The first four levels of this tree are depicted below.
\hspace*{-1.1cm}
\begin{tikzpicture}
[level distance=15mm,
every node/.style={fill=orange!20,rectangle,inner sep=1pt},
level 1/.style={sibling distance=72mm,nodes={fill=orange!20}},
level 2/.style={sibling distance=48mm,nodes={fill=orange!20}},
level 3/.style={sibling distance=36mm,nodes={fill=orange!20}}]
\node {\footnotesize $\duoZ{=0}{=V_0(-1,3,0)\quad}$}
child {node {\footnotesize $\duoZ{=00}{=V_0(1,2,0) \sqcup V(3,1,1)}$}
child {node {\footnotesize $\duoZ{=000}{=V_0(4,3,0) \sqcup V(3,1,1)}$}
child {node {\footnotesize $\duoZ{=0000}{=V_0(4,3,0) \sqcup V(7,4,1)}$}}
child {node {\footnotesize $\duoZ{=1000}{=V(4,3,-2)}$}}
}
child {node {\footnotesize $\duoZ{=100}{=V(3,1,-1)}$}
child {node {\footnotesize $\duoZ{=0100}{=V(3,1,-1)}$}}
}
}
child {node {\footnotesize $\duoZ{=10}{=V(1,2,-1)}$}
child {node {\footnotesize $\duoZ{=010}{=V(1,2,-1)}$}
child {node {\footnotesize $\duoZ{=0010}{=V(3,1,-2)}$}}
child {node {\footnotesize $\duoZ{=1010}{=V(4,3,-1)}$}}
}
};
\end{tikzpicture}
We start with the short words $w$.
\begin{proposition}\label{prop:small} The sequence of occurrences $R_w$ of numbers $N$ such that the digits $d_{m-1} \dots d_0$ of the base phi expansion of $N$ are equal to $w$, i.e., $d_{m-1}\dots d_0(N)=w$, is given for the words $w$ of length at most 3 and ending in 0 by\\
\mbox{\rm a)} $R_0 = V_0(-1,3,0)$,\\
\mbox{\rm b)} $R_{00} = V_0(1,2,0) \sqcup V(3,1,1)$,\\
\mbox{\rm c)} $R_{10}=R_{010} = V(1,2,-1)$,\\
\mbox{\rm d)} $R_{000} = V_0(4,3,0) \sqcup V(3,1,1)$,\\
\mbox{\rm e)} $R_{100} = V(3,1,-1)$.
\end{proposition}
\noindent{\it Proof:} \noindent {\rm a)} $w=0$: Since the numbers $\varphi+2$ and $3-\varphi$ form a Beatty pair, i.e.,
$$\frac{1}{\varphi+2}+\frac{1}{3-\varphi}=1,$$
the sequences $V(1,2,0)$ and $V(-1,3,0)$ are complementary in the positive integers. It follows that $R_0=V_0(-1,3,0)$ is the complement of $R_1=V_0(1,2,1)$, by Theorem \ref{th:d0d1}.
\noindent {\rm b)} $w=00$: Theorem \ref{th:d0d1} gives that $R_{00}$ is the union of the two GBS $V_0(1,2,0)$ and $V(3,1,1)$. These two sequences correspond to the numbers $N$ with expansions containing $00\cdot 0$, coded ${\rm B}$ in \cite{Dekk-phi-FQ}, respectively those containing $00\cdot 1$, coded ${\rm D}$ in \cite{Dekk-phi-FQ}.
\noindent {\rm c)} $w=10$ and $w=010$: From Theorem \ref{th:d0d1} we obtain that $R_{10}$ is equal to $V(1,2,-1)$.
\noindent {\rm d)} $w=000$: By Lemma \ref{lem:no} there are no base phi expansions with $d_2d_1d_0d_{-1}(N)=100\cdot 1$. This means that the numbers $N$ from $V(3,1,1)$ in the last part of Theorem \ref{th:d0d1} do exactly correspond with the numbers $N$ with $d_2d_1d_0d_{-1}(N)=000\cdot 1$. This gives one part of the numbers $N$ where $\beta^+(N)$ has suffix 000.
The other part comes from the occurrences of $N$ with $d_2d_1d_0d_{-1}(N)=000\cdot 0$. The trick is to observe that the digit blocks
$1010$ and $000\cdot 0$ always occur in pairs in the expansions of $N-1$ and $N$, for $N=7,\dots,18$. The Propagation Principle (Lemma \ref{lem:prop}, Part {\bf b)}) gives that this coupling holds for all positive integers $N$.
From Theorem \ref{th:phi-w0} we know that the digit block $1010$ has occurrence sequence $R_{1010}=V(4,3,-1)$. So the coupling implies that the digit block $000\cdot 0$ has occurrence sequence $V_0(4,3,0)$. Here we should mention that Theorem \ref{th:phi-w0} uses the proposition we are in the process of proving (via the formula $R_{1010}=R_{010}\,B$); however, this only uses part {\rm c)}, which we already proved above.
\noindent {\rm e)} $w=100$: We already know that expansions with $100\cdot1$ do not occur, and one checks that an expansion $\beta(N-2)=\dots 100\cdot0\dots$ always occurs coupled to an expansion $\beta(N)=\dots 00\cdot 1\dots$, for $N=2,\dots,19$. The Propagation Principle (Lemma \ref{lem:prop}, Part {\bf b)}) then implies that this coupling occurs for all $N$. This gives that $R_{100}=R_{00\cdot 1}-2=V(3,1,-1)$, using the result of part b).
$\Box$
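The proposition lends itself to a brute-force verification. The following Python sketch (an illustration, not part of the formal development) computes base phi expansions greedily and exactly in $\mathbf{Z}[\varphi]$ and compares the resulting occurrence sequences with the stated generalized Beatty sequences; the bound $M=300$ is an arbitrary choice.

```python
# Sketch: brute-force check of parts a)-e) of the proposition up to M.
import math

PHI = (1 + math.sqrt(5)) / 2

def nonneg(a, b):                      # is a + b*phi >= 0, exactly?
    s = 2 * a + b
    if b >= 0:
        return s >= 0 or 5 * b * b >= s * s
    return s >= 0 and s * s >= 5 * b * b

def phi_pow(i):                        # phi**i = a + b*phi in Z[phi]
    a, b = 1, 0
    for _ in range(abs(i)):
        a, b = (b, a + b) if i > 0 else (b - a, a)
    return a, b

def digits(N, hi=14, lo=-14):          # greedy (Bergman) expansion of N
    a, b, d = N, 0, {}
    for i in range(hi, lo - 1, -1):
        pa, pb = phi_pow(i)
        if nonneg(a - pa, b - pb):
            d[i], a, b = 1, a - pa, b - pb
        else:
            d[i] = 0
    assert (a, b) == (0, 0)
    return d

M = 300

def R(w):                              # occurrences: d_{m-1}...d_0(N) = w, N <= M
    return [N for N in range(M + 1)
            if ''.join(str(digits(N)[i]) for i in range(len(w) - 1, -1, -1)) == w]

def V(p, q, r, n0=1, count=400):       # terms of the GBS V(p,q,r), n = n0, n0+1, ...
    return [p * math.floor(n * PHI) + q * n + r for n in range(n0, n0 + count)]

def trim(*seqs):                       # merge GBS value lists and cut off at M
    return [x for s in seqs for x in s if x <= M] and sorted(
        x for s in seqs for x in s if x <= M)

assert R('0') == trim(V(-1, 3, 0, n0=0))                     # a)
assert R('00') == trim(V(1, 2, 0, n0=0), V(3, 1, 1))         # b)
assert R('10') == trim(V(1, 2, -1)) == R('010')              # c)
assert R('000') == trim(V(4, 3, 0, n0=0), V(3, 1, 1))        # d)
assert R('100') == trim(V(3, 1, -1))                         # e)
print("parts a)-e) confirmed for all N <=", M)
```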
The sequences $R_{010}$ and $R_{100}$ are examples of what we call Lucas-Wythoff sequences: their parameters are given respectively by $(L_1,L_0,-1)$ and $(L_2,L_1,-1)$.\\
In general, a \emph{Lucas-Wythoff sequence} $G$ is a generalized Beatty sequence defined for a natural number $m$ by
$$G = V(L_{m+1},L_m,r),$$
where $r$ is an integer.
\begin{theorem}\label{th:phi-w0}
\noindent For any natural number $m\ge 2$ fix a word $w=w_{m-1}\dots w_0$ of $0$'s and $1$'s, containing no $11$. Let $w_0=0$. Then---except if $w=0^m$---the sequence $R_w$ of occurrences of numbers $N$ such that the digits $d_{m-1} \dots d_0$ of the base phi expansion of $N$ are equal to $w$, i.e., $d_{m-1}\dots d_0(N)=w$, is a Lucas-Wythoff sequence of the form
$$R_w= V(L_{m-2},L_{m-3},\gamma_w) \;\;\text{\rm if}\; w_{m-1}=0,\quad R_w=V(L_{m-1},L_{m-2},\gamma_w) \;\;\text{\rm if}\; w_{m-1}=1,$$
where $\gamma_w$ is a negative integer or 0.\\
In case $w$ consists entirely of $0$'s this sequence of occurrences is given by a disjoint union of two Lucas-Wythoff sequences. We have
\begin{align*}
R_{0^{2m}} & = V(L_{2m},L_{2m-1},1) \:\sqcup\: V_0(L_{2m-1},L_{2m-2},0),\\
R_{0^{2m+1}} & = V_0(L_{2m+1},L_{2m},0) \, \sqcup \, V(L_{2m},L_{2m-1},1).
\end{align*}
\end{theorem}
\noindent{\it Proof:} \:Suppose first that $w$ is a word \emph{not} equal to $0^m$ for some $m\ge 2$.
\noindent The proof is by induction on the length $m$ of $w$. For $m=2$ the statement of the theorem holds by Proposition \ref{prop:small}, part c).
Next, let $w$ be a word of length $m$ with $w_0=0$.
In the case that $w_{m-1}=1$, $w$ has a unique extension to $0w$, and $R_{0w}=R_w$ is equal to the correct Lucas-Wythoff sequence.
In the case that $w_{m-1}=0$, the induction hypothesis is that $R_w$ is a Lucas-Wythoff sequence $R_w=V(L_{m-2},L_{m-3},\gamma_w)$ .
\noindent By Theorem \ref{th:Zeckphi} the numbers $N$ with a $\beta^+(N)$ ending with the digit block $w$ are in one-to-one correspondence with numbers $N'$ with a $Z(N')$ ending with the digit block $w$, and the same property holds for the digit blocks $0w$, respectively $1w$. Note that the correspondence is one-to-one, since the numbers `skipped' in the Zeckendorf expansions all\footnote{We have to follow a different strategy for the words $w=d_{m-1}\dots d_11$ in the next section.} have $d_0=1$. It therefore follows from Proposition 2.6 in \cite{Dekk-Zeck-structure} that $$R_{0w}=R_wA\quad\text{and\:} R_{1w}=R_wB.$$
By Lemma \ref{lem:VA} these have parameters
$$(L_{m-2}+L_{m-3}, L_{m-2}, \gamma_w-L_{m-2})=(L_{m-1},L_{m-2}, \gamma_w-L_{m-2}),$$
respectively
$$(2L_{m-2}+L_{m-3}, L_{m-2}+L_{m-3}, \gamma_w)=(L_m,L_{m-1}, \gamma_w).$$
These are indeed the right expressions for the two words $0w$, respectively $1w$ of length $m+1$.
\noindent Next: The words $w=0^m$ for some $m\ge 2$.
We claim that for all $m\ge 1$
\begin{align}
R_{0^{2m}\cdot 0} & = V_0(L_{2m-1},L_{2m-2},0), & \;\;\; R_{0^{2m}\cdot 1} = V(L_{2m},L_{2m-1}, 1)\label{eq:R1} \\
R_{0^{2m+1}\cdot 0} & = V_0(L_{2m+1},L_{2m},0), & R_{0^{2m+1}\cdot 1} = V(L_{2m},L_{2m-1}, 1)\label{eq:R2}.
\end{align}
The proof is by induction.
We find in the proof of Proposition \ref{prop:small}, part b) that $R_{00\cdot 0}=V_0(1,2,0)$ and $R_{00\cdot 1}=V(3,1,1)$.
Since $L_0=2, L_1=1$ and $L_2=3$, this is Equation (\ref{eq:R1}) for $m=1$.
We find in the proof of Proposition \ref{prop:small}, part d) that $R_{000\cdot 0}=V_0(4,3,0)$ and $R_{000\cdot 1}=V(3,1,1)$. This is Equation (\ref{eq:R2}) for $m=1$.
Next we perform the induction step. Suppose that both Equation (\ref{eq:R1}) and Equation (\ref{eq:R2}) hold.
\noindent {\footnotesize \fbox{(\ref{eq:R1})}} Since $10^{2m+1}\cdot 0$ never occurs by Lemma \ref{lem:no}, we must have
\begin{equation}\label{eq:R0000.0}
R_{0^{2m+2}\cdot 0} = R_{0^{2m+1}\cdot 0} = V_0(L_{2m+1},L_{2m},0).
\end{equation}
This is the left part of Equation (\ref{eq:R1}) for $m+1$ instead of $m$.
That $10^{2m+1}\cdot 0$ never occurs also implies that
\begin{equation}\label{eq:R000001.1}
R_{10^{2m+1}\cdot 1}=R_{10^{2m+1}}=V(L_{2m+1},L_{2m},\gamma_{10^{2m+1}})=V(L_{2m+1},L_{2m}, -L_{2m}+1).
\end{equation}
Here we used the first part of the proof, determining $\gamma_{10^{2m+1}}$ from the observation that the first occurrence of $d_{2m+1}\dots d_0(N)=10^{2m+1}$ is at $N=L_{2m+1}+1$, the first element of the Lucas interval $\Lambda_{2m+1}$.
Next we take $V=R_{0^{2m+1}\cdot 1}$, $U=R_{10^{2m+1}\cdot 1}$ and $W=R_{0^{2m+2}\cdot 1}$ in Lemma \ref{lem:VAconv}. According to Equation (\ref{eq:R2}), we take $(p,q,r)=(L_{2m},L_{2m-1},1)$.
The parameters of the sequence $U$ should be $(p+q,p,r-p)=(L_{2m+1},L_{2m},1-L_{2m})$, which conforms with Equation (\ref{eq:R000001.1}).
The conclusion of Lemma \ref{lem:VAconv} is that $W=R_{0^{2m+2}\cdot 1}$ has parameters
$$(2p+q,p+q,r)=(2L_{2m}+L_{2m-1},L_{2m}+L_{2m-1},1)=(L_{2m+2},L_{2m+1},1).$$
This is the right part of Equation (\ref{eq:R1}) for $m+1$.
\noindent {\footnotesize \fbox{(\ref{eq:R2})}} Since $10^{2m+2}\cdot 1$ never occurs by Lemma \ref{lem:no}, we must have, using the final result of {\footnotesize\fbox{(\ref{eq:R1})}},
$$ R_{0^{2m+3}\cdot 1} = R_{0^{2m+2}\cdot 1} = V( L_{2m+2},L_{2m+1},1).$$
This is the left part of Equation (\ref{eq:R2}) for $m+1$ instead of $m$.
That $10^{2m+2}\cdot 1$ never occurs also implies that
\begin{equation}\label{eq:R10000.0}
R_{10^{2m+2}\cdot 0}=R_{10^{2m+2}}=V(L_{2m+2},L_{2m+1}, -L_{2m+1}).
\end{equation}
Here we used the first part of the proof, determining $\gamma_{10^{2m+2}}$ from the observation that the first occurrence of $d_{2m+3}\dots d_0(N)=10^{2m+2}$ is at $N=L_{2m+2}$, the first element of the Lucas interval $\Lambda_{2m+2}$.
Next we take $V=R_{0^{2m+2}\cdot 0}$, $U=R_{10^{2m+2}\cdot 0}$ and $W=R_{0^{2m+3}\cdot 0}$ in Lemma \ref{lem:VAconv}.
According to Equation (\ref{eq:R0000.0}), we take $(p,q,r)=( L_{2m+1},L_{2m},0)$.
The parameters of the sequence $U$ should be $(p+q,p,r-p)=(L_{2m+2},L_{2m+1},-L_{2m+1})$, which conforms with Equation (\ref{eq:R10000.0}).
The conclusion of Lemma \ref{lem:VAconv} is that $W=R_{0^{2m+3}\cdot 0}$ has parameters
$$(2p+q,p+q,r)=(2L_{2m+1}+L_{2m},L_{2m+1}+L_{2m},0)=(L_{2m+3},L_{2m+2},0).$$
This is the left part of Equation (\ref{eq:R2}) for $m+1$.
$\Box$
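The theorem can be spot-checked by machine for small words. The Python sketch below (an illustration, not part of the formal development) scans all expansions up to an arbitrary bound $M=500$, again computing base phi digits greedily and exactly in $\mathbf{Z}[\varphi]$, and compares with the parameters the theorem predicts for words of length 4 and 5.

```python
# Sketch: spot checks of the theorem for words of length 4 and 5.
import math

PHI = (1 + math.sqrt(5)) / 2

def nonneg(a, b):                      # is a + b*phi >= 0, exactly?
    s = 2 * a + b
    if b >= 0:
        return s >= 0 or 5 * b * b >= s * s
    return s >= 0 and s * s >= 5 * b * b

def phi_pow(i):                        # phi**i = a + b*phi in Z[phi]
    a, b = 1, 0
    for _ in range(abs(i)):
        a, b = (b, a + b) if i > 0 else (b - a, a)
    return a, b

def digits(N, hi=16, lo=-16):          # greedy (Bergman) expansion of N
    a, b, d = N, 0, {}
    for i in range(hi, lo - 1, -1):
        pa, pb = phi_pow(i)
        if nonneg(a - pa, b - pb):
            d[i], a, b = 1, a - pa, b - pb
        else:
            d[i] = 0
    assert (a, b) == (0, 0)
    return d

M = 500

def R(w):
    return [N for N in range(M + 1)
            if ''.join(str(digits(N)[i]) for i in range(len(w) - 1, -1, -1)) == w]

def V(p, q, r, n0=1, count=400):
    return [p * math.floor(n * PHI) + q * n + r for n in range(n0, n0 + count)]

def trim(*seqs):
    return sorted(x for s in seqs for x in s if x <= M)

L = [2, 1, 3, 4, 7, 11, 18, 29]        # Lucas numbers L_0 .. L_7

# w_{m-1} = 1: parameters (L_{m-1}, L_{m-2}, gamma_w), here m = 4
assert R('1000') == trim(V(L[3], L[2], -2))
assert R('1010') == trim(V(L[3], L[2], -1))
# w_{m-1} = 0: parameters (L_{m-2}, L_{m-3}, gamma_w), here m = 4
assert R('0100') == trim(V(L[2], L[1], -1))
assert R('0010') == trim(V(L[2], L[1], -2))
# w = 0^m: disjoint unions of two Lucas-Wythoff sequences
assert R('0000') == trim(V(L[3], L[2], 0, n0=0), V(L[4], L[3], 1))
assert R('00000') == trim(V(L[5], L[4], 0, n0=0), V(L[4], L[3], 1))
print("all six checks passed for N <=", M)
```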
\subsection{Digit blocks $w=d_{m-1}\dots d_11$}\label{sec:w1}
Here there are digit blocks that do not occur at all, like $w=1001$. We denote this as $R_{1001}=\emptyset$.
We order the digit blocks $w$ with $d_0=1$ in a tree. The first four levels of this tree (taking into account that the node corresponding to $R_{1001}$ has no offspring) are depicted below.
\begin{tikzpicture}
[level distance=15mm,
every node/.style={fill=gray!20,rectangle,inner sep=1pt},
level 1/.style={sibling distance=75mm,nodes={fill=gray!20}},
level 2/.style={sibling distance=60mm,nodes={fill=gray!20}},
level 3/.style={sibling distance=36mm,nodes={fill=gray!20}}]
\node {\footnotesize $\duoO{=1}{=V_0(1,2,1)\quad}$}
child {node {\footnotesize $\duoO{=01}{=V_0(1,2,1)}$}
child {node {\footnotesize $\duoO{=001}{=V_0(4,3,1)}$}
child {node {\footnotesize $\duoO{=0001}{=V_0(4,3,1)}$}
child {node {\footnotesize $\duoO{=00001}{=V_0(11,7,1)}$}}
child {node {\footnotesize $\duoO{=10001}{=V(7,4,-3)}$}}}
child {node {\footnotesize $\duoO{=1001}{=\emptyset}$}
}
}
child {node {\footnotesize $\duoO{=101}{=V(3,1,0)}$}
child {node {\footnotesize $\duoO{=0101}{=V(3,1,0)}$}
child {node {\footnotesize $\duoO{=00101}{=V(4,3,-3)}$}}
child {node {\footnotesize $\duoO{=10101}{=V(7,4,0)}$}}}
} };
\end{tikzpicture}
Here $R_{01}=R_1=V_0(1,2,1)$ has been given in Theorem \ref{th:d0d1}. The correctness of the other occurrence sequences follows from Theorem \ref{th:phi-w1}.
We next determine an infinite family of excluded blocks.
\begin{lemma}\label{lem:1001} Let $m\ge 2$ be an integer. There are no expansions $\beta^+(N)$ with end block $10^{2m}1$.
\end{lemma}
\noindent{\it Proof:} Consider any $N$ such that $\beta^+(N)$ has end block $10^{2m}1$. Such an $N$, of course, has $d_{-1}(N)=0$, so we see that
$\beta(N-1)=\dots 10^{2m+1}\cdot 0\dots$. According to Lemma \ref{lem:no} this is not possible.
$\Box$
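This family of exclusions is easy to confirm empirically. The Python sketch below (an illustration, not part of the formal development; the bounds $m\le 3$ and $N\le 1000$ are arbitrary) scans all expansions, computed greedily and exactly in $\mathbf{Z}[\varphi]$, for the forbidden end blocks $10^{2m}1$; the case $m=1$ is the excluded block $1001$ mentioned at the start of this section.

```python
# Sketch: no beta^+(N) ends in 1 0^{2m} 1, checked for m = 1, 2, 3 and N <= 1000.

def nonneg(a, b):                      # is a + b*phi >= 0, exactly?
    s = 2 * a + b
    if b >= 0:
        return s >= 0 or 5 * b * b >= s * s
    return s >= 0 and s * s >= 5 * b * b

def phi_pow(i):                        # phi**i = a + b*phi in Z[phi]
    a, b = 1, 0
    for _ in range(abs(i)):
        a, b = (b, a + b) if i > 0 else (b - a, a)
    return a, b

def digits(N, hi=16, lo=-16):          # greedy (Bergman) expansion of N
    a, b, d = N, 0, {}
    for i in range(hi, lo - 1, -1):
        pa, pb = phi_pow(i)
        if nonneg(a - pa, b - pb):
            d[i], a, b = 1, a - pa, b - pb
        else:
            d[i] = 0
    assert (a, b) == (0, 0)
    return d

def suffix(N, m):                      # end block d_{m-1} ... d_0 of beta^+(N)
    d = digits(N)
    return ''.join(str(d[i]) for i in range(m - 1, -1, -1))

bad = [(w, N) for w in ('1001', '100001', '10000001')
       for N in range(1001) if suffix(N, len(w)) == w]
assert bad == []
print("no expansion beta^+(N), N <= 1000, ends in 1001, 100001 or 10000001")
```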
Next, we establish a connection with the previous section.
\begin{lemma} \label{lem:1to0} Let $m\ge 2$ be an integer. The block $w=d_{m-1}\dots d_11\cdot 0$ is end block of $\beta^+(N)$ if and only if
the block $\breve{w}:=d_{m-1}\dots d_10\cdot 0$ occurs in $\beta(N-1)$.
\end{lemma}
\noindent{\it Proof:} This follows quickly from the Propagation Principle Lemma \ref{lem:prop} applied to the couple of blocks $00\cdot 0$ and $01\cdot 0$.
$\Box$
\begin{theorem}\label{th:phi-w1}
\noindent For any natural number $m\ge 2$ fix a word $w=w_{m-1}\dots w_0$ of $0$'s and $1$'s, containing no $11$. Let $w_0=1$.
With exception of the words $w$ with suffix $0^m1$ and $10^m1$, for $m=2,3,\dots$, the sequence $R_w$ of occurrences of numbers $N$ such that the digits $d_{m-1} \dots d_0$ of the base phi expansion of $N$ are equal to $w$, i.e., $d_{m-1}\dots d_0(N)=w$, is a Lucas-Wythoff sequence of the form
$$R_w= V(L_{m-2},L_{m-3}, \gamma_w) \;\;\text{\rm if}\; w_{m-1}=0,\quad R_w=V(L_{m-1},L_{m-2}, \gamma_w) \;\;\text{\rm if}\; w_{m-1}=1,$$
where $\gamma_w$ is a negative integer or 0.
In case $w=0^{2m}1$ we have $R_w=V_0(L_{2m+1},L_{2m},1)$, and this is also the sequence of occurrences of $w=0^{2m+1}1$.
In case $w=10^{2m}1$ the word $w$ does not occur at all as digit end block.
In case $w=10^{2m+1}1$ we have $R_w=V(L_{2m+2},L_{2m+1},-L_{2m+1}+1)$.
\end{theorem}
\noindent{\it Proof:}
It follows from Lemma \ref{lem:1to0} that $R_w=R_{\breve{w}}+1$, if $R_w\ne \emptyset$. So the first part of Theorem \ref{th:phi-w0} yields the statement of the theorem for all $w$ not equal to $0^m1$ or $10^m1$.
In case $w=0^{2m}1\cdot 0$, we have $\breve{w}=0^{2m+1}\cdot 0$, and the result follows from the left part of Equation (\ref{eq:R2}).
In case $w=10^{2m}1$ the word $w$ does not occur as digit end block, according to Lemma \ref{lem:no}.
In case $w=10^{2m+1}1\cdot 0$ we have $\breve{w}=10^{2m+2}\cdot 0$, and now Equation (\ref{eq:R10000.0}) gives that
$R_w=R_{\breve{w}}+1=V(L_{2m+2},L_{2m+1},-L_{2m+1}+1)$.
$\Box$
\section{The negative powers of the golden mean}\label{sec:neg}
Here we discuss what we can say about the words $\beta^-(N)$.
These have an even more intricate structure than the $\beta^+(N)$.
\subsection{The words $\beta^-(N)$}\label{sec:trident}
Here we look at complete $\beta^-(N)$'s. Although at first sight these seem to appear in a random order, their order is dictated not by a coin toss, but by another dynamical system: the rotation by the angle $\varphi$. Moreover, they appear as singletons, or as triples. This can be proved with the \{${\rm A}{\rm B}{\rm C}$, ${\rm D}$\}--structure found in the paper \cite{Dekk-phi-FQ}.
For a more extensive analysis, recall from Section \ref{sec:closer} the partition of the natural numbers larger than 1 into the intervals
$$\Xi_n:=\Lambda_{2n-1}\cup\Lambda_{2n}=[L_{2n-1}+1,L_{2n+1}].$$
The relevance of the $\Xi_n$, $n=1,2,\dots$, is that these are exactly the intervals where $\beta^-(N)$ has length $2n$.
The $\Xi_n$ intervals have length $$L_{2n+1}-L_{2n-1}=L_{2n+1}-L_{2n}+L_{2n}-L_{2n-1}=L_{2n-1}+L_{2n-2}=L_{2n}.$$
Call three consecutive numbers $N,N+1,N+2$ a \emph{trident}, if $\beta^-(N)=\beta^-(N+1)=\beta^-(N+2)$. For example: 2,3,4 and 6,7,8 are tridents.
We shall always take the middle number $N\!+\!1$ as the representing number of a trident interval $[N,N\!+\!1,N\!+2]$. We call this number $\Pi$-\emph{essential}.
By definition the other $\Pi$-essential numbers are the singletons.
\begin{lemma} {\bf [Trident splitting]} \label{lem:trident}
In $\Lambda_{2n-1}\cup\Lambda_{2n}$ the last number of $\Lambda_{2n-1}$ and the first two numbers in $\Lambda_{2n}$ are in
the same trident.
\end{lemma}
\noindent {\it Proof:} This is true for $n=1$ and $n=2$: $\Lambda_1\cup\Lambda_2=\{2\}\cup[3,4]$ is a trident,
and $\Lambda_3\cup\Lambda_4=[5,6]\cup[7,8,\dots,11]$ contains the trident $[6,7,8]$.
The property then follows by induction, using Theorem \ref{th:rec2}.
$\Box$
The following lemma helps to count singletons and tridents.
\begin{lemma} The following relation between Lucas numbers and Fibonacci numbers holds: $F_n+3F_{n+1}=L_{n+2}$ for $n=0,1,2,\dots$.
\end{lemma}
For a proof, note that $F_0+3F_1=3=L_2$ and $F_1+3F_2=4=L_3$; since both sides of the identity satisfy the Lucas recurrence, adding consecutive instances yields the general case by induction.
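As a sanity check, the identity is verified directly in the following Python sketch (an illustration; the range $n\le 29$ is an arbitrary test bound).

```python
# Sketch: check F_n + 3 F_{n+1} = L_{n+2} for small n.

def fib(n):
    a, b = 0, 1            # F_0 = 0, F_1 = 1
    for _ in range(n):
        a, b = b, a + b
    return a

def lucas(n):
    a, b = 2, 1            # L_0 = 2, L_1 = 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert all(fib(n) + 3 * fib(n + 1) == lucas(n + 2) for n in range(30))
print("F_n + 3*F_{n+1} = L_{n+2} holds for n = 0..29")
```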
The lemma reflects the fact that the interval $\Xi_n$ contains $F_{2n-2}$ singletons and $F_{2n-1}$ tridents, making a total number of $F_{2n-2}+3F_{2n-1}=L_{2n}$ elements.
The collection of different $\beta^-(N)$-blocks of length $2n$ thus has cardinality $F_{2n-2}+F_{2n-1}=F_{2n}$. This leads to the following theorem.
\begin{theorem}\label{th:all-beta-min} All Zeckendorf words of even length ending in 1 appear as $\beta^-(N)$-blocks.
\end{theorem}
Here by a Zeckendorf word (or golden mean word) we mean any word over $\{0,1\}$ in which $11$ does not occur. We denote by $\mathcal{Z}_{m}$ the set of Zeckendorf words of length $m$, for $m=1,2,\dots$. It is easily proved that the cardinality of $\mathcal{Z}_{m}$ equals $F_{m+2}$.
So the cardinality of the set of words from $\mathcal{Z}_{2n}$ ending in 1 is equal to $F_{2n}$, implying the result of Theorem \ref{th:all-beta-min}.
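The singleton and trident counts can also be checked by direct enumeration. The Python sketch below (an illustration, not part of the formal development; the range $n\le 5$ is an arbitrary test bound) computes base phi digits greedily and exactly in $\mathbf{Z}[\varphi]$, groups the $\beta^-(N)$-blocks in each $\Xi_n$ into maximal runs, and compares the counts with $F_{2n-2}$, $F_{2n-1}$ and $F_{2n}$.

```python
# Sketch: count singletons and tridents of beta^-(N) in Xi_n = [L_{2n-1}+1, L_{2n+1}].

def nonneg(a, b):                      # is a + b*phi >= 0, exactly?
    s = 2 * a + b
    if b >= 0:
        return s >= 0 or 5 * b * b >= s * s
    return s >= 0 and s * s >= 5 * b * b

def phi_pow(i):                        # phi**i = a + b*phi in Z[phi]
    a, b = 1, 0
    for _ in range(abs(i)):
        a, b = (b, a + b) if i > 0 else (b - a, a)
    return a, b

def digits(N, hi=16, lo=-16):          # greedy (Bergman) expansion of N
    a, b, d = N, 0, {}
    for i in range(hi, lo - 1, -1):
        pa, pb = phi_pow(i)
        if nonneg(a - pa, b - pb):
            d[i], a, b = 1, a - pa, b - pb
        else:
            d[i] = 0
    assert (a, b) == (0, 0)
    return d

def fib(m):
    a, b = 0, 1
    for _ in range(m):
        a, b = b, a + b
    return a

def lucas(m):
    a, b = 2, 1
    for _ in range(m):
        a, b = b, a + b
    return a

for n in range(1, 6):
    xi = range(lucas(2 * n - 1) + 1, lucas(2 * n + 1) + 1)
    blocks = [tuple(digits(N)[i] for i in range(-1, -2 * n - 1, -1)) for N in xi]
    runs = []                          # maximal runs of equal beta^- blocks
    for blk in blocks:
        if runs and runs[-1][0] == blk:
            runs[-1][1] += 1
        else:
            runs.append([blk, 1])
    sizes = [c for _, c in runs]
    assert all(c in (1, 3) for c in sizes)            # singletons and tridents only
    assert sizes.count(1) == fib(2 * n - 2)           # F_{2n-2} singletons
    assert sizes.count(3) == fib(2 * n - 1)           # F_{2n-1} tridents
    assert len({b for b, _ in runs}) == fib(2 * n)    # F_{2n} distinct blocks
print("singleton/trident counts match for n = 1..5")
```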
Since all $\beta^-(N)$ have suffix 01, the essential information of these words is contained in
$$ \gamma^-(N):= \beta^-(N)1^{-1}0^{-1}.$$
The words $\gamma^-(N)$ are Zeckendorf words, corresponding one-to-one to the natural numbers $Z^{-1}(\gamma^-(N))$.
Obviously, the $\gamma^-(N)$ have the same ordering as the $\beta^-(N)$.
According to Theorem \ref{th:all-beta-min} we then (after identifying tridents with their middle number) obtain a permutation of length $F_{2n}$ of the $\Pi$-essential elements of $\Xi_n$ by coding these numbers by ${\rm C}(N):=Z^{-1}(\gamma^-(N))$. We denote this permutation by $\Pi^\beta_{2n}$.
The following Zeckendorf words and codes will be important in the sequel.
\begin{lemma} \label{lem:code} For all natural numbers $n$ we have
\begin{equation}\label{eq:bordergammas}
\gamma^-(L_{2n})=0^{2n-2}, \; \gamma^-(L_{2n+1})=[01]^{n-1}, \; \gamma^-(L_{2n+1}+1)=[10]^n,\; \gamma^-(L_{2n+2}-1)=0^{2n}\!.
\end{equation}
\begin{equation}\label{eq:bordercodes}
{\rm C}(L_{2n})=0, \quad {\rm C}(L_{2n+1})=F_{2n-1}-1, \quad {\rm C}(L_{2n+1}+1)=F_{2n+2}-1, \quad {\rm C}(L_{2n+2}-1)=0.
\end{equation}
\end{lemma}
\noindent {\it Proof:}
The correctness of Equation (\ref{eq:bordergammas}) follows from Equations (\ref{eq:Lm}) and (\ref{eq:Lmplus1}).
So $\gamma^-(L_{2n})$ is the first word in $\mathcal{Z}_{2n-2}$, $\gamma^-(L_{2n+1})$ is 0 followed by the last word in $\mathcal{Z}_{2n-3}$,
$\gamma^-(L_{2n+1}+1)$ is the last word in $\mathcal{Z}_{2n}$, and
$\gamma^-(L_{2n+2}-1)$ is the first word in $\mathcal{Z}_{2n-2}$.
Since $\mathcal{Z}_{m}$ has cardinality $F_{m+2}$, Equation (\ref{eq:bordercodes}) follows.
$\Box$
We have to determine the codings of all natural numbers $N$. For this it is useful to translate Theorem \ref{th:rec2} to the $\gamma^-\!$-blocks.
\begin{theorem}{\bf [Recursive structure theorem: $\gamma^-\!$-version]}\label{th:recg}\\
\noindent{\,\bf (i): Odd\;} For all $n\ge 1$ one has $\Lambda_{2n+1}=\Lambda^{(a)}_{2n-1}\cup\Lambda^{(b)}_{2n-2}\cup\Lambda^{(c)}_{2n-1}, $
where $\Lambda^{(a)}_{2n-1}=\Lambda_{2n-1}+L_{2n}$,\; $\Lambda^{(b)}_{2n-2}=\Lambda_{2n-2}+L_{2n+1}$, and $\Lambda^{(c)}_{2n-1}=\Lambda_{2n-1}+L_{2n+1}$.\\
We have\\[-.8cm]
\begin{subequations} \label{eq:shift-odd}
\begin{align}
\gamma^-(N)= & \;\gamma^-(N-L_{2n})\,10 & \text{for}\; N\in \Lambda^{(a)}_{2n-1}, \label{eq:15a}\\
\gamma^-(N)= & \;\gamma^-(N-L_{2n+1})\,0010 & \text{for}\; N\in \Lambda^{(b)}_{2n-2},\label{eq:15b}\\
\gamma^-(N)= & \;\gamma^-(N-L_{2n+1})\,00 & \text{for}\; N\in \Lambda^{(c)}_{2n-1}\label{eq:15c}.
\end{align}\\[-.6cm]
\end{subequations}
\noindent{\,\bf (ii): Even\;} For all $n\ge 1$ one has $\Lambda_{2n+2}=\Lambda^{(a)}_{2n}\cup\Lambda^{(b)}_{2n-1}\cup\Lambda^{(c)}_{2n}, $
where $\Lambda^{(a)}_{2n}=\Lambda_{2n}+L_{2n+1}$,\; $\Lambda^{(b)}_{2n-1}=\Lambda_{2n-1}+L_{2n+2}$, and $\Lambda^{(c)}_{2n}=\Lambda_{2n}+L_{2n+2}$.\\
We have\\[-.8cm]
\begin{subequations} \label{eq:shift-even}
\begin{align}
\gamma^-(N)= & \;\gamma^-(N-L_{2n+1})\,00 & \text{for}\; N\in \Lambda^{(a)}_{2n},\phantom{x} \label{eq:16a} \\
\gamma^-(N)= & \; \gamma^-(N-L_{2n+2})\,01 & \text{for}\; N\in \Lambda^{(b)}_{2n-1}, \label{eq:16b}\\
\gamma^-(N)= & \;\gamma^-(N-L_{2n+1})\,01 & \text{for}\; N\in \Lambda^{(c)}_{2n}.\phantom{x.} \label{eq:16c}
\end{align}
\end{subequations}
\end{theorem}
We give the situation for $n=2$, where $\Xi_2=\Lambda_3 \cup \Lambda_4=[5,6,\dots,11]$.
\begin{tabular}{|c|c|c|c|c|}
\hline
\; $N^{\phantom{|}}$ & $\Lambda$-int. & $\cdot\beta^-(N)$ & $\cdot\gamma^-(N)$ & ${\rm C}(N)$ \\[.0cm]
\hline
5\; & $\Lambda_3$ & $\cdot1001$ & \; $\cdot10$ & \; 2 \\
6\; & $\Lambda_3$ & $\cdot0001$ & \; $\cdot00$ & \; \grijs{0} \\
\hline
7\; & $\Lambda_4$ & $\cdot0001$ & \; $\cdot00$ & \; 0 \\
8\; & $\Lambda_4$ & $\cdot0001$ & \; $\cdot00$ & \; \grijs{0} \\
9\; & $\Lambda_4$ & $\cdot0101$ & \; $\cdot01$ & \; \grijs{1} \\
10\, & $\Lambda_4$ & $\cdot0101$ & \; $\cdot01$ & \; 1 \\
11\, & $\Lambda_4$ & $\cdot0101$ & \; $\cdot01$ & \; \grijs{1} \\
\hline
\end{tabular}
\noindent We see that $\Pi^\beta_{4}=\big( 2\, 0\, 1\big)$.
Here is the situation for $n=3$, where $\Xi_3=\Lambda_5 \cup \Lambda_6=[12,13,\dots,29]$.
\begin{tabular}{|c|c|c|c|c|}
\hline
\; $N^{\phantom{|}}$ &{\small $\Lambda$-int.}& $\cdot\beta^-(N)$ & $\cdot\gamma^-(N)$ & ${\rm C}(N)$ \\[.0cm]
\hline
12\; & $\Lambda_5$ & $\cdot101001$ & $\cdot1010$ & 7 \\
13\; & $\Lambda_5$ & $\cdot001001$ & $\cdot0010$ & \grijs{2} \\
14\; & $\Lambda_5$ & $\cdot001001$ & $\cdot0010$ & 2 \\
15\; & $\Lambda_5$ & $\cdot001001$ & $\cdot0010$ & \grijs{2} \\
16\; & $\Lambda_5$ & $\cdot100001$ & $\cdot1000$ & 5 \\
17\; & $\Lambda_5$ & $\cdot000001$ & $\cdot0000$ & \grijs{0} \\
\hline
18\; & $\Lambda_6$ & $\cdot000001$ & $\cdot0000$ & 0 \\
19\; & $\Lambda_6$ & $\cdot000001$ & $\cdot0000$ & \grijs{0} \\
20\; & $\Lambda_6$ & $\cdot010001$ & $\cdot0100$ & \grijs{3} \\
\hline
\end{tabular}\quad
\begin{tabular}{|c|c|c|c|c|}
\hline
\; $N^{\phantom{|}}$ &{\small $\Lambda$-int.}& $\cdot\beta^-(N)$ & $\cdot\gamma^-(N)$ & ${\rm C}(N)$ \\[.0cm]
\hline
21\; & $\Lambda_6$ & $\cdot010001$ & $\cdot0100$ & 3 \\
22\; & $\Lambda_6$ & $\cdot010001$ & $\cdot0100$ & \grijs{3} \\
23\; & $\Lambda_6$ & $\cdot100101$ & $\cdot1001$ & 6 \\
24\; & $\Lambda_6$ & $\cdot000101$ & $\cdot0001$ & \grijs{1} \\
25\; & $\Lambda_6$ & $\cdot000101$ & $\cdot0001$ & 1 \\
26\; & $\Lambda_6$ & $\cdot000101$ & $\cdot0001$ & \grijs{1} \\
27\; & $\Lambda_6$ & $\cdot010101$ & $\cdot0101$ & \grijs{4} \\
28\; & $\Lambda_6$ & $\cdot010101$ & $\cdot0101$ & 4 \\
29\; & $\Lambda_6$ & $\cdot010101$ & $\cdot0101$ & \grijs{4} \\
\hline
\end{tabular}\quad
\noindent We see that $\Pi^\beta_{6}=\big(7\,2\,5\,0\,3\,6\,1\,4\big)$.
What are these permutations?
\begin{theorem}\label{th:permut} For all natural numbers $n$ consider the $F_{2n}$ Zeckendorf words of length $2n$ occurring as $\beta^-(N)$ in the $\beta$-expansions of the numbers in $\Xi_n$. Then these occur in an order given by a permutation $\Pi^\beta_{2n}$, which is the orbit of the element $F_{2n}-1$ under repeated addition of the element $F_{2n-2}$ in the cyclic group $\mathbb{Z}/F_{2n}\mathbb{Z}$.
\end{theorem}
\noindent {\it Proof:} We have to show for all $n$ that
\begin{equation}\label{eq:IH}
\Pi^\beta_{2n}(1)=F_{2n}-1,\quad \Pi^\beta_{2n}(j+1)=\Pi^\beta_{2n}(j)+F_{2n-2} \!\mod F_{2n}, \; {\rm for}\;j=1,\dots,F_{2n}-1.
\end{equation}
It is easily checked that the cases $n=2$ and $n=3$ given above conform with this. For $n=3$ one has: $F_6=8$, $F_4=3$, and
$\Pi^\beta_{6}(1)=7, \,\Pi^\beta_{6}(j+1) =\Pi^\beta_{6}(j)+3 \mod 8$ for $j=1,\dots,7$.
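Equation (\ref{eq:IH}) can also be tested mechanically for small $n$. The self-contained sketch below (our own helper code, not from the paper; it recomputes the codes via a greedy floating-point base phi expansion, accurate in this range) extracts the run-distinct codes over $\Xi_n$ and compares them with the orbit of $F_{2n}-1$ under addition of $F_{2n-2}$ modulo $F_{2n}$:

```python
def fib(k):
    # Fibonacci numbers F_0 = 0, F_1 = 1, F_2 = 1, ...
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a

def lucas(k):
    # Lucas numbers L_0 = 2, L_1 = 1, L_2 = 3, ...
    a, b = 2, 1
    for _ in range(k):
        a, b = b, a + b
    return a

PHI = (1 + 5 ** 0.5) / 2

def beta_minus(N):
    """Digits d_{-1}, d_{-2}, ... of beta^-(N), by the greedy algorithm."""
    rem, neg = float(N), {}
    for k in range(12, -17, -1):       # a wide enough digit window for N <= 100
        if rem >= PHI ** k - 1e-9:
            rem -= PHI ** k
            if k < 0:
                neg[-k] = 1
    return [neg.get(i, 0) for i in range(1, max(neg) + 1)]

def C(N):
    # gamma^-(N) = beta^-(N) without the final '01', read as a
    # Zeckendorf numeral with weights 1, 2, 3, 5, 8, ...
    g = beta_minus(N)[:-2]
    m = len(g)
    return sum(d * fib(m - j + 2) for j, d in enumerate(g, 1))

def perm(n):
    """Run-distinct codes over Xi_n = [L_{2n-1}+1, L_{2n+1}]."""
    out = []
    for N in range(lucas(2 * n - 1) + 1, lucas(2 * n + 1) + 1):
        c = C(N)
        if not out or out[-1] != c:
            out.append(c)
    return out

# Pi^beta_{2n} is the orbit of F_{2n}-1 under adding F_{2n-2} mod F_{2n}
for n in (2, 3, 4):
    F = fib(2 * n)
    orbit = [(F - 1 + j * fib(2 * n - 2)) % F for j in range(F)]
    assert perm(n) == orbit
```

For $n=2$ and $n=3$ this reproduces the permutations $(2\,0\,1)$ and $(7\,2\,5\,0\,3\,6\,1\,4)$ of the tables above.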
The first claim in Equation (\ref{eq:IH}) follows from Lemma \ref{lem:code} for all $n$: since the interval $\Xi_n=[L_{2n-1}+1,L_{2n+1}]$, we have $\Pi^\beta_{2n}(1)=F_{2n}-1$ according to Equation (\ref{eq:bordercodes}).
The proof proceeds by induction, based on Theorem \ref{th:recg}, the $\gamma^-\!$-version of the Recursive Structure Theorem.
For the second part of Equation (\ref{eq:IH}) with $n$ replaced by $n+1$, we have to split the permutation $\Pi_{2n+2}^\beta$ into six pieces,
and then we have to glue the expressions together to obtain
the full permutation on the set $\Xi_{n+1} = \Lambda_{2n+1} \cup \Lambda_{2n+2} = [L_{2n+1}+1, L_{2n+2}-1] \cup [L_{2n+2},L_{2n+3}]$.
According to the Recursive Structure Theorem
\begin{equation}\label{eq:six}
\Xi_{n+1} = \Lambda^{(a)}_{2n-1}\cup\Lambda^{(b)}_{2n-2}\cup\Lambda^{(c)}_{2n-1}\cup\Lambda^{(a)}_{2n}\cup\Lambda^{(b)}_{2n-1}\cup\Lambda^{(c)}_{2n}.
\end{equation}
We start with the first interval, $\Lambda^{(a)}_{2n-1}$. From Theorem \ref{th:recg} we have that for $N\in \Lambda^{(a)}_{2n-1}$,
\begin{equation}\label{eq:grec1}
\gamma^-(N)= \gamma^-(N-L_{2n})\,10.
\end{equation}
What does this imply for the codes?
Let $Z({\rm C}(N-L_{2n}))=\gamma^-(N-L_{2n})=d_{2n-3}\dots d_0$, so ${\rm C}(N-L_{2n})= \sum_{i=0}^{2n-3} d_i \ddot{F}_i$. Then Equation (\ref{eq:grec1}) leads to
$$ {\rm C}(N)=\sum_{i=0}^{2n-3} d_i \ddot{F}_{i+2} + 1\cdot \ddot{F}_1 + 0\cdot \ddot{F}_0=\sum_{i=0}^{2n-3} d_i \ddot{F}_{i+2}+2.$$
This implies, in particular, that the differences between the codes of two consecutive $\Pi$-essential numbers within the interval $\Lambda_{2n-1}$ have increased from $F_{2n-2}\!\mod F_{2n}$ to $F_{2n} \!\mod F_{2n+2}$ for the corresponding numbers in the interval $\Lambda^{(a)}_{2n-1}$.
We pass to the second interval, $\Lambda^{(b)}_{2n-2}$. From Theorem \ref{th:recg} we have that for $N$ from $\Lambda^{(b)}_{2n-2}$,
\begin{equation}\label{eq:grec2}
\gamma^-(N)= \gamma^-(N-L_{2n+1})\,0010.
\end{equation}
What does this imply for the codes?
Let $Z({\rm C}(N-L_{2n+1}))=\gamma^-(N-L_{2n+1})=d_{2n-4}\dots d_0$, so ${\rm C}(N-L_{2n+1})= \sum_{i=0}^{2n-4} d_i \ddot{F}_i$. Then Equation (\ref{eq:grec2}) leads to
$${\rm C}(N)=\sum_{i=0}^{2n-4} d_i \ddot{F}_{i+4} + 0\cdot\ddot{F}_3 + 0\cdot \ddot{F}_2 +1\cdot \ddot{F}_1 + 0\cdot \ddot{F}_0=\sum_{i=0}^{2n-4} d_i \ddot{F}_{i+4}+2.$$
This implies that the differences between the codes of two consecutive numbers within the interval $\Lambda_{2n-2}$
have increased from $F_{2n-4}\!\mod F_{2n-2}$ to $F_{2n} \!\mod F_{2n+2}$ for the corresponding numbers in the interval $\Lambda^{(b)}_{2n-2}$.
Similar computations give that for the next 4 intervals $\Lambda^{(c)}_{2n-1}, \Lambda^{(a)}_{2n},\Lambda^{(b)}_{2n-1}$, and $\Lambda^{(c)}_{2n}$ there always is an addition of $F_{2n} \!\mod F_{2n+2}$.
The remaining task is to check that the same holds on the five boundaries between the translated $\Lambda$-intervals.
We number these boundaries with the roman numerals I, II, III, IV, V.
\noindent \fbox{III \& V:}\;For the third and the fifth boundary between respectively the intervals $\Lambda^{(c)}_{2n-1}$ and $\Lambda^{(a)}_{2n}$ and the intervals $\Lambda^{(b)}_{2n-1}$ and $\Lambda^{(c)}_{2n}$ this follows from the Trident Splitting Lemma, Lemma \ref{lem:trident}.
The reason is that if $[N,N+1,N+2]$ is the trident which is split, then the difference between ${\rm C}(N-1)$ and ${\rm C}(N)$ is equal to $F_{2n} \!\mod F_{2n+2}$, as these two numbers are both from the first translated $\Lambda$-interval, and not from the same trident. But then the difference between the codes of the last $\Pi$-essential number $N-1$ in the first translated $\Lambda$-interval, and the first $\Pi$-essential number $N+1$ in the second translated $\Lambda$-interval is also equal to $F_{2n} \!\mod F_{2n+2}$.
\noindent \fbox{ I:}\; The last number in the first interval $\Lambda^{(a)}_{2n-1}$ is $2L_{2n}-1$ with associated $\gamma^-\!$-block
$$\gamma^-(2L_{2n}-1)= \gamma^-(2L_{2n}-1-L_{2n})\,10=\gamma^-(L_{2n}-1)\,10=0^{2n-2}\,10.$$
Here we used Equation (\ref{eq:shift-odd}a) in the first, and Equation (\ref{eq:Lmplus1}) in the last step. It follows directly that ${\rm C}(2L_{2n}-1)=2$.
The first number in the second interval $\Lambda^{(b)}_{2n-2}$ is $2L_{2n}$. From Equation (\ref{eq:Lm}) we have $\beta(2L_{2n})\doteq 20^{2n}\cdot 0^{2n-1}2\doteq 20^{2n}\cdot 0^{2n-2}1001$, so $\gamma^-(2L_{2n})=0^{2n-2}10$, giving ${\rm C}(2L_{2n})=2$. It is clear that also the second number $2L_{2n}+1$ in $\Lambda^{(b)}_{2n-2}$ has code ${\rm C}(2L_{2n}+1)=2$. As in the previous case, this implies that the difference between the codes of the last $\Pi$-essential number in the first translated $\Lambda$-interval, and the first $\Pi$-essential number in the second translated $\Lambda$-interval is equal to $F_{2n} \!\mod F_{2n+2}$.
\noindent \fbox{ II:}\; The last number in the second interval $\Lambda^{(b)}_{2n-2}$ is the number $L_{2n-1}+L_{2n+1}$. According to Equation (\ref{eq:shift-odd}b) the associated $\gamma^-\!$-block is
$$\gamma^-(L_{2n-1}+L_{2n+1})= \gamma^-(L_{2n-1}+L_{2n+1}-L_{2n+1})\,0010=\gamma^-(L_{2n-1})\,0010=[01]^{n-2}\,0010.$$
But we know from Lemma \ref{lem:code} that \; $\gamma^-(L_{2n-1})\,0101=[01]^n=\gamma^-(L_{2n+3}).$
By Lemma \ref{lem:code} we have that ${\rm C}(L_{2n+3})=F_{2n+1}-1$. To obtain the code of $N=L_{2n-1}+L_{2n+1}$, we have to subtract the number $4$ with Zeckendorf expansion $0101$, and add the number $2$ with Zeckendorf expansion $0010$. This gives the code
$${\rm C}(L_{2n-1}+L_{2n+1})=F_{2n+1}-1-4+2=F_{2n+1}-3.$$
The first number in the third interval $\Lambda^{(c)}_{2n-1}$ is the number $L_{2n-1}+L_{2n+1}+1$. According to Equation (\ref{eq:shift-odd}c) the associated $\gamma^-\!$-block is
$$\gamma^-(L_{2n-1}+L_{2n+1}+1)= \gamma^-(L_{2n-1}+L_{2n+1}+1-L_{2n+1})\,00=\gamma^-(L_{2n-1}+1)\,00.$$
But we know from Lemma \ref{lem:code} that \; $\gamma^-(L_{2n-1}+1)10=[10]^n=\gamma^-(L_{2n+1}+1).$
By Lemma \ref{lem:code} we have that ${\rm C}(L_{2n+1}+1)=F_{2n+2}-1$.
To obtain the code of $N=L_{2n-1}+L_{2n+1}+1$, we have to subtract the number $F_2=2$ with Zeckendorf expansion $10$, from this code. This gives the code
$${\rm C}(L_{2n-1}+L_{2n+1}+1)=F_{2n+2}-1-2=F_{2n+2}-3.$$
The conclusion is that $L_{2n-1}+L_{2n+1}$ and $N=L_{2n-1}+L_{2n+1}+1$ are $\Pi$-essential, with difference in codes $F_{2n+2}-3-(F_{2n+1}-3)=F_{2n}.$
\noindent \fbox{ IV:}\; The last number in the fourth interval $\Lambda^{(a)}_{2n}$ is the number $L_{2n+1}+L_{2n+1}=2L_{2n+1}$. According to Equation (\ref{eq:shift-even}a) the associated $\gamma^-\!$-block is
$$\gamma^-(2L_{2n+1})= \gamma^-(2L_{2n+1}-L_{2n+1})\,00=\gamma^-(L_{2n+1})\,00=[01]^{n-1}\,00.$$
But we know from Lemma \ref{lem:code} that \; $\gamma^-(L_{2n+1})\,01=[01]^n=\gamma^-(L_{2n+3}).$
By Lemma \ref{lem:code} we have that ${\rm C}(L_{2n+3})=F_{2n+1}-1$. To obtain the code of $N=2L_{2n+1}$, we have to subtract the number $F_1=1$ with Zeckendorf expansion $01$. This gives the code
$${\rm C}(2L_{2n+1})=F_{2n+1}-1-1=F_{2n+1}-2.$$
The first number in the fifth interval $\Lambda^{(b)}_{2n-1}$ is the number $L_{2n-1}+1+L_{2n+2}$. According to Equation (\ref{eq:shift-even}b) the associated $\gamma^-\!$-block is
$$\gamma^-(L_{2n-1}+1+L_{2n+2})= \gamma^-(L_{2n-1}+1+L_{2n+2}-L_{2n+2})\,01=\gamma^-(L_{2n-1}+1)\,01.$$
But we know from Lemma \ref{lem:code} that \; $\gamma^-(L_{2n-1}+1)10=[10]^n=\gamma^-(L_{2n+1}+1).$
By Lemma \ref{lem:code} we have that ${\rm C}(L_{2n+1}+1)=F_{2n+2}-1$.
To obtain the code of $N=L_{2n-1}+1+L_{2n+2}$, we have to subtract the number $F_2=2$ with Zeckendorf expansion $10$, and add the number $F_1=1$ with Zeckendorf expansion 01 to this code. This gives the code
$${\rm C}(L_{2n-1}+1+L_{2n+2})=F_{2n+2}-1-2+1=F_{2n+2}-2.$$
The conclusion is that $2L_{2n+1}$ and $L_{2n-1}+1+L_{2n+2}$ are $\Pi$-essential, with difference in codes $F_{2n+2}-2-(F_{2n+1}-2)=F_{2n}.$
$\Box$
We now explain the connection with a rotation on a circle mentioned at the beginning of this section. Note that with this point of view all the cyclic groups of Theorem \ref{th:permut} are represented by a single object: the rotation on the circle.
\begin{theorem}\label{th:rot} For all natural numbers $n$ the permutations $\Pi^\beta_{2n}$ are given by the order in which the first $F_{2n}$ iterates of the rotation $z\rightarrow z\,e^{-2\pi i \varphi}$ occur on the circle.
\end{theorem}
We sketch a proof of this result based on the paper \cite{Ravenstein-1988}. In the literature one will not find the rotation $z\rightarrow z\,e^{-2\pi i \varphi}$, but several papers treat the rotation $z\rightarrow z\,e^{2\pi i \tau}$, where $\tau$ is the algebraic conjugate of $\varphi$. Note that this rotation has exactly the same orbits as $z\rightarrow z\,e^{2\pi i \varphi}$, and replacing $\varphi$ by $-\varphi$ amounts to reversing the permutation. In the literature the origin is usually added to the orbit. For instance in \cite{Ravenstein-1988}, the $N$ ordered iterates are given by the permutation $\big(u_1\,u_2\,\dots \, u_N\big)$, which for \emph{all} $N$ gives a permutation starting trivially with $u_1=0$.
Lemma 2.1 in \cite{Ravenstein-1988} states that for $j=1,\dots,N$ one has $u_j=(j-1)u_2 \mod N$.\\
Next, Theorem 3.3 in \cite{Ravenstein-1988} states that $u_2=u_2(N)= F_{2n-1}$ in the case that $N=F_{2n}$, $n\ge 1$.
We illustrate this for the case $n=3$.\\
We have $N=F_6=8$, and $0< \{5\tau\}<\{2 \tau\}<\{ 7\tau\}<\{4\tau\}<\{\tau\}<\{6 \tau\}<\{ 3\tau\},$
so $ \big(u_1\,u_2\,\dots \, u_N\big) = \big(0\,5\,2\,7\,4\,1\,6\,3\big)$.
As $\{8\tau\}$ is the largest number in the rotation orbit of the first 9 iterations,
$\big(u_{N+1}\,u_N\,\dots \, u_2\big)=\big(8\,3\,6\,1\,4\,7\,2\,5\big).$
After subtraction of 1 in all entries, one obtains the permutation $\Pi^\beta_{6}$.
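These last steps can be reproduced in a few lines (our own sketch, not from the paper; we take the rotation angle to be $\varphi-1\approx 0.618$, whose iterates occur on the circle in the order displayed above):

```python
# Order of the first 8 iterates of the rotation by phi - 1 on the circle R/Z.
tau = (5 ** 0.5 - 1) / 2
u = sorted(range(8), key=lambda j: (j * tau) % 1.0)
assert u == [0, 5, 2, 7, 4, 1, 6, 3]            # (u_1 ... u_8) from the text

# Reverse, prepend u_9 = 8, and subtract 1 in all entries: Pi^beta_6 appears.
rev = [8] + u[:0:-1]
assert rev == [8, 3, 6, 1, 4, 7, 2, 5]
assert [x - 1 for x in rev] == [7, 2, 5, 0, 3, 6, 1, 4]
```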
\subsection{Digit blocks $w=d_{-1}\dots d_{-m}(N)$ as prefix of $\beta^-(N)$}
For any digit block $w$ we will try to determine the sequence $R_w$ of those numbers $N$ with $w$ as prefix of $\beta^-(N)$.
The tridents introduced in the previous section give occurrence sequences $R_w$ which are unions of three consecutive generalized Beatty sequences.
We will write for short
$$V(p,q,[r,r+1,r+2]):=V(p,q,r)\sqcup V(p,q,r+1)\sqcup V(p,q,r+2).$$
As before, we order the $w$ in a Fibonacci tree. Here we write $R_{\cdot w}$ for the occurrence sequences of words $w$ occurring as a prefix of the words $\beta^-(N)$, to emphasize the positions of these words in the expansion $\beta(N)$. The first four levels of this tree are depicted below.
\hspace*{-0.8cm}\begin{tikzpicture}
[level distance=19mm,
every node/.style={fill=green!10,rectangle,inner sep=1pt},
level 1/.style={sibling distance=73mm,nodes={fill=green!10}},
level 2/.style={sibling distance=50mm,nodes={fill=green!10}},
level 3/.style={sibling distance=34mm,nodes={fill=green!10}}]
\node {\footnotesize $\duoZ{=\Lambda}{=\emptyset\quad}$}
child {node {\footnotesize $\duoZ{=0}{=V(1,2,[-1,0,1])}$}
child {node {\footnotesize $\duoZ{=00}{=V(3,1,[2,3,4])}$}
child {node {\footnotesize $\duoZ{=000}{=V(4,3,[-1,0,1])}$}}
child {node {\footnotesize $\duoZ{\!=\! 001}{=V(7,4,[2,3,4])}$}}
}
child {node {\footnotesize $\duoZ{=01}{=V_0(4,3,[2,3,4])}$}
child {node {\footnotesize $\duoZ{=010}{=V_0(4,3,[2,3,4])}$}}
}
}
child {node {\footnotesize $\duoZ{=1}{=V(3,1,1)}$}
child {node {\footnotesize $\duoZ{=10}{=V(3,1,1)}$}
child {node {\footnotesize $\duoZ{=100}{=V(4,3,-2)}$}}
child {node {\footnotesize $\duoZ{=101}{=V(7,4,1)}$}}
}
};
\end{tikzpicture}
We start with the words $w$ on this tree.
\begin{proposition}\label{prop:smallminus} Let $\beta(N)=\beta^+(N)\cdot\beta^-(N)$ be the base phi expansion of the number $N$.\\
Let $w$ be a word of length $m$. Then the sequence of occurrences $R_w$ of numbers $N$ such that the first $m$ digits of $\beta^-(N)$ are equal to $w$, i.e., $d_{-1}\dots d_{-m}(N)=w$, is given, for the words $w$ of length at most 3, by\\
\mbox{\rm a)} $R_{\cdot 0} = V(2,1,-1)\, \sqcup \,V(2,1,0) \, \sqcup \,V(2,1,1)$,\\
\mbox{\rm b)} $R_{\cdot 1}= R_{\cdot 10} = V(3,1,1)$,\\
\mbox{\rm c)} $R_{\cdot 00} = V(3,1,2)\, \sqcup \,V(3,1,3) \, \sqcup \,V(3,1,4)$,\\
\mbox{\rm d)} $R_{\cdot 01} =R_{\cdot 010}= V_0(4,3,2)\, \sqcup\, V_0(4,3,3)\, \sqcup \,V_0(4,3,4)$,\\
\mbox{\rm e)} $R_{\cdot 000} = V(4,3,-1)\, \sqcup\, V(4,3,0)\, \sqcup \,V(4,3,1)$,\\
\mbox{\rm f)} $R_{\cdot 001} = V(7,4,2)\, \sqcup \, V(7,4,3)\, \sqcup \, V(7,4,4)$,\\
\mbox{\rm g)} $R_{\cdot 100} = V(4,3,-2)$,\\
\mbox{\rm h)} $R_{\cdot 101} = V(7,4,1)$.
\end{proposition}
\noindent {\it Proof:}
\noindent {\rm a)} $w=\cdot 0$: In Section 5 of the paper \cite{Dekk-phi-FQ} the tridents are coded by triples $({\rm A}, {\rm B}, {\rm C})$. It follows from Theorem 5.1 of \cite{Dekk-phi-FQ} that the first elements (coded ${\rm A}$) of the tridents are all members of $V(2,1,-1)$. This implies the statement in {\rm a)}.
\noindent {\rm b)} $w=\cdot 1$: We already know from Proposition \ref{prop:D-numbers} that $R_{\cdot 1}= V(3,1,1)$.
\noindent {\rm c)} $w=\cdot 00$: Using the Propagation Principle, we see that a digit block $\cdot 10$ is always followed directly by the first element of a trident of $\cdot 00$'s and vice versa. This implies the statement in {\rm c)}, because of {\rm b)}.
\noindent {\rm d)} $w=\cdot 01$: This result is given in Remark 6.2 in the paper \cite{Dekk-phi-FQ}.
\noindent {\rm e)} $w=\cdot 000$: Using the Propagation Principle, we see that a $\cdot 100$ is always followed directly by the first element of a trident of $\cdot 000$'s and vice versa. So {\rm e)} is implied by {\rm g)}.
\noindent {\rm f)} $w=\cdot 001$: Take the first sequence $V(3,1,2)$ of $R_{\cdot 00}$, and put $p=3, q=1, r=2$. Then the first sequence of $R_{\cdot 000}$ is equal to $V(4,3,-1)=V(p+q,p,r-p)$. It then follows from Lemma \ref{lem:VAconv} that the first sequence of $R_{\cdot 001}$ is equal to $V(2p+q,p+q,r)=V(7,4,2)$.
\noindent {\rm g)} $w=\cdot 100$: For the first 17 numbers we check that $\cdot 100$ occurs as prefix of $\beta^-(N)$ if and only if $1000$ occurs as suffix of $\beta^+(N)$.
The result then follows from Theorem \ref{th:phi-w0}: $R_w= V(L_{m-1},L_{m-2},\gamma_w) \;\;\text{\rm if}\; w_{m-1}=0$, where here $m=4$, so
$R_{1000} = V(L_{3},L_{2},\gamma_{1000})=V(4,3,-2)$. Here $\gamma_{1000}$ is determined by noting that $N=5$ is the first number in $R_{1000}$.
\noindent {\rm h)} $w= \cdot 101$: Take the sequence $R_{\cdot 10}=V(3,1,1)$, and put $p=3, q=1, r=1$. Then $R_{\cdot 100}$ is equal to $V(4,3,-2)=V(p+q,p,r-p)$. It then follows from Lemma \ref{lem:VAconv} that the sequence $R_{\cdot 101}$ is equal to $V(2p+q,p+q,r)=V(7,4,1)$.
$\Box$
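Case {\rm b)} of the proposition is easy to test numerically. The sketch below (our own helper code; we assume the generalized Beatty sequence convention $V(p,q,r)=(p\lfloor n\varphi\rfloor+qn+r)_{n\ge1}$ used throughout) checks that the numbers $N$ with $d_{-1}(N)=1$ are exactly the elements of $V(3,1,1)$ in an initial range:

```python
PHI = (1 + 5 ** 0.5) / 2

def d_minus1(N):
    """First negative digit d_{-1} of the base phi expansion (greedy, floats)."""
    rem = float(N)
    for k in range(12, -1, -1):           # strip the digits d_k with k >= 0
        if rem >= PHI ** k - 1e-9:
            rem -= PHI ** k
    return 1 if rem >= PHI ** -1 - 1e-9 else 0

def V(p, q, r, nmax):
    # generalized Beatty sequence p*floor(n*phi) + q*n + r (assumed convention)
    return [p * int(n * PHI) + q * n + r for n in range(1, nmax + 1)]

occ = [N for N in range(1, 60) if d_minus1(N) == 1]
assert occ == V(3, 1, 1, len(occ))        # R_{.1} = V(3,1,1), case b)
```

The first members found are $5, 12, 16, 23, 30, 34, \dots$, in accordance with the tables of the previous section.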
The reader might think that we can now proceed, as we did earlier, from these cases to words $w$ with larger lengths $m$, using the same tools. However, this does not work. The reason is that the $\beta^-(N)$ words do not occur in lexicographical order, in contrast with the $\beta^+(N)$ words.
Some occurrence sequences are Lucas-Wythoff, some are not---but still close to Lucas-Wythoff sequences.
Recall the three (Sturmian) morphisms $f,g$ and $h$ from Equation (\ref{eq:Fib3}).
Note that $f$ equals the square of the Fibonacci morphism $a\mapsto ab, \, b\mapsto a$, so $f$ has fixed point $x_{\rm \scriptstyle F}$, the Fibonacci word.
The fixed points $x_{\rm \scriptstyle G}, x_{\rm \scriptstyle H}$ of $g$ and $h$ are given by $x_{\rm \scriptstyle G}=b\,x_{\rm \scriptstyle F},\,x_{\rm \scriptstyle H}=a\,x_{\rm \scriptstyle F}$ ---see \cite{Berstel-Seebold} Theorem 3.1.
Let $V_{\rm \scriptstyle F}, V_{\rm \scriptstyle G} ,V_{\rm \scriptstyle H}$ denote the families of sequences having $x_{\rm \scriptstyle F}, x_{\rm \scriptstyle G}, x_{\rm \scriptstyle H}$ as first differences, with an arbitrary integer as first element. Then, by definition, $V=V_{\rm \scriptstyle F}$ if we take $V_{\rm \scriptstyle F}(1)=p+q+r$. We have also already encountered a $V_{\rm \scriptstyle G}$, since $V_0=V_{\rm \scriptstyle G}$ if we take $V_{\rm \scriptstyle G}(1)=r$. This follows from $V_0(p,q,r)= r, p+q+r, \dots = r, b+r, \dots$, which gives $\Delta V_0 = b\, x_{\rm \scriptstyle F} =x_{\rm \scriptstyle G}$. We mention that one can show that there do not exist $\alpha, p, q$, and $r$ such that a $V_{\rm \scriptstyle H}$ is a generalized Beatty sequence $V = (p\lfloor n \alpha \rfloor + q n +r) $.
We conjecture that the following holds.
\begin{conjecture*} Let $\beta(N)=\beta^+(N)\cdot\beta^-(N)$ be the base phi expansion of the number $N$.\\
Let $w$ be a word of length $m$. Let $R_{\cdot w}$ be the sequence of occurrences of numbers $N$ such that the first $m$ digits of $\beta^-(N)$ are equal to $w$, i.e., $d_{-1}\dots d_{-m}(N)=w$. Then there exist two Lucas numbers $a$ and $b$ such that either $R_{\cdot w} = V_{\rm \scriptstyle F},$ or $ R_{\cdot w} = V_{\rm \scriptstyle G},$ or $R_{\cdot w} = V_{\rm \scriptstyle H}$. A second possibility is that $R_{\cdot w}$ is a union of three of such sequences.
\end{conjecture*}
In all cases in Proposition \ref{prop:smallminus} the sequence $ R_{\cdot w}$ is a $V_{\rm \scriptstyle F}$, except $ R_{\cdot 010}$, which is a union of three $V_{\rm \scriptstyle G}$'s, the middle one being $V_{\rm \scriptstyle G}(4,3,-4)$. The first case where a $V_{\rm \scriptstyle G}$ occurs as $ R_{\cdot w}$ is $w=\cdot 1001$, where $a=29, b=18$. The first case where a $V_{\rm \scriptstyle H}$ occurs is as the first element of the trident for the digit block $w=\cdot 0100$, where $a=18, b=11$.
\noindent AMS Classification Numbers: 11D85, 11A63, 11B39
\end{document}
\begin{document}
\title{Friedrichs inequality in irregular domains}
\maketitle
\centerline{\scshape Simone Creo, Maria Rosaria Lancia}
{\footnotesize
\centerline{Dipartimento di Scienze di Base e Applicate per l'Ingegneria, Universit\`{a} degli studi di Roma Sapienza,
}
\centerline{Via A. Scarpa 16,}
\centerline{00161 Roma, Italy.}
}
\bigskip
\begin{abstract}
\noindent We prove a generalized version of the Friedrichs and Gaffney inequalities for a bounded $(\varepsilon,\delta)$ domain $\Omega\subset\mathbb{R}^n$, $n=2,3$, by adapting the methods of Jones to our framework.
\end{abstract}
\noindent\textbf{Keywords:} Friedrichs inequality, Gaffney inequality, $(\varepsilon,\delta)$ domains, Whitney decomposition, coercivity estimates.\\
\noindent{\textbf{2010 Mathematics Subject Classification:} Primary: 35A23. Secondary: 35Q61, 78A25.}
\bigskip
\section{Introduction}
\setcounter{equation}{0}
\noindent The aim of this paper is to prove the Friedrichs inequality for $(\varepsilon,\delta)$ domains. This inequality has been introduced in different frameworks by Friedrichs \cite{friedrichs} and Gaffney \cite{gaffney}, and in the literature it is known under different names according to the setting where it is used. In the study of Maxwell problems or Navier-Stokes equations, this inequality is a key tool to prove the coercivity of the associated energy forms. From the point of view of applications, it is interesting to study vector BVPs in irregular domains (see e.g. \cite{CHLTV,LVstokes}) and their numerical approximation, hence it is crucial to extend these inequalities to the case of suitable irregular sets. From this perspective, we confine ourselves to two or three-dimensional domains.\\
The Gaffney inequality can be deduced from the Friedrichs inequality. To our knowledge, such inequalities hold for convex and Lipschitz domains; among others, we refer to \cite{bauerpauly,Schw16,NPW15,amrouche}, see also \cite{dacorogna} and the references therein. In this paper, we first prove the Friedrichs inequality for $(\varepsilon,\delta)$ domains, and then prove the Gaffney inequality by adapting the methods of \cite{duran} (developed for the Korn inequality) to this framework.\\
The class of $(\varepsilon,\delta)$ domains has been introduced by Jones \cite{Jones}, and it is quite general, since the boundary of an $(\varepsilon,\delta)$ domain can be highly non-rectifiable, e.g. fractal or a $d$-set (see Definitions \ref{defepsdelta} and \ref{dset}).\\
In the literature, for $\Omega\subset\mathbb{R}^n$ $(n=2,3)$ sufficiently smooth, the Friedrichs inequality reads as follows: if $v\in W^{1,p}(\Omega)^n$, there exists a positive constant $C$, depending on $\Omega$, $n$ and $p$, such that
\begin{equation}\label{Fr1}
\|v\|_{W^{1,p}(\Omega)^n}\leq C(\|v\|_{L^p(\Omega)^n}+\|\dive v\|_{L^p(\Omega)}+\|\curl v\|_{L^p(\Omega)^n}).
\end{equation}
The Gaffney inequality is a direct consequence of the Friedrichs inequality \eqref{Fr1} when considering boundary conditions. We introduce the following spaces:
\[W^p(\dive,\Omega):=\left\lbrace u\in L^p(\Omega)^n\,:\,\dive u\in L^p(\Omega) \right\rbrace,\]
\[W^p_0(\dive,\Omega):=\{u\in W^p(\dive,\Omega)\,:\,\nu\cdot u=0\,\,\text{on}\;\partial\Omega \},\]
\[W^p(\curl,\Omega):=\left\lbrace u\in L^p(\Omega)^n\,:\,\curl u\in L^p(\Omega)^n \right\rbrace,\]
\[W^p_0(\curl,\Omega):=\{u\in W^p(\curl,\Omega)\,:\,\nu\times u=0\,\,\text{on}\;\partial\Omega \},\]
where $\cdot$ and $\times$ denote respectively the usual scalar and cross products between vectors in $\mathbb{R}^n$. The boundary conditions have to be interpreted in a suitable weak sense (see e.g. \cite{temam}).\\
When $v\in W^p(\dive,\Omega)\cap W^p_0(\curl,\Omega)$ or $v\in W^p(\curl,\Omega)\cap W^p_0(\dive,\Omega)$, Gaffney inequality takes the following form:
\begin{equation}\label{Fr2}
\|\nabla v\|_{L^p(\Omega)^{n\times n}}\leq C(\|\dive v\|_{L^p(\Omega)}+\|\curl v\|_{L^p(\Omega)^n}).
\end{equation}
Our aim is to extend the Gaffney inequality to those $(\varepsilon,\delta)$ domains for which it is possible to give an interpretation of the boundary conditions. In particular, we consider $(\varepsilon,\delta)$ domains $\Omega$ in $\mathbb{R}^n$ whose boundaries are $d$-sets or arbitrary closed sets in the sense of Jonsson \cite{jonsson91}. In these cases, it can be proved that the spaces $W^p_0(\dive,\Omega)$ and $W^p_0(\curl,\Omega)$ are well defined because generalized Green and Stokes formulas hold. This implies that the normal and tangential traces are well defined as elements of the duals of suitable trace Besov spaces on the boundary (see \cite{LaVe2} and \cite{CHLTV}).
We extend \eqref{Fr1} and \eqref{Fr2} to $(\varepsilon,\delta)$ domains $\Omega\subset\mathbb{R}^n$ for either $v\in W^p(\curl,\Omega)\cap W^p_0(\dive,\Omega)$ (see Section \ref{divergenza}) or $v\in W^p(\dive,\Omega)\cap W^p_0(\curl,\Omega)$ (see Section \ref{rotore}), according to the boundary conditions under consideration. The main results of this paper are Theorems \ref{fridis}, \ref{gaffin}, \ref{fridisrot} and \ref{gaffinrot}.\\
The proof of our results deeply relies on the assumptions on $\Omega$. Since $\Omega$ is an $(\varepsilon,\delta)$ domain, for each $v\in W^p(\curl,\Omega)\cap W^p_0(\dive,\Omega)$ (for each $v\in W^p(\dive,\Omega)\cap W^p_0(\curl,\Omega)$ respectively) we construct a suitable extension $Ev$ by adapting Jones' approach \cite{Jones}. More precisely, we consider a Whitney decomposition of $\Omega$ and we construct an extension operator in terms of suitable linear polynomials which satisfies the crucial estimates \eqref{corol1} and \eqref{corol2} (\eqref{corol1rot} and \eqref{corol2rot} respectively). The result is then obtained by density arguments.\\
Throughout the paper, $C$ will denote possibly different positive constants. Sometimes, we indicate the dependence of these constants on some particular parameters in parentheses.
\section{$(\varepsilon,\delta)$ domains and trace results}
\setcounter{equation}{0}
We recall the definition of $(\varepsilon,\delta)$ (or Jones) domain.
\begin{definition}\label{defepsdelta} Let $\mathcal{F}\subset\mathbb{R}^n$ be open and connected and $\mathcal{F}^c:=\mathbb{R}^n\setminus\mathcal{F}$. For $x\in\mathcal{F}$, let $\displaystyle d(x):=\inf_{y\in\mathcal{F}^c}|x-y|$. We say that $\mathcal{F}$ is an $(\varepsilon,\delta)$ domain if, whenever $x,y\in\mathcal{F}$ with $|x-y|<\delta$, there exists a rectifiable arc $\gamma\subset\mathcal{F}$ joining $x$ to $y$ such that
\begin{center}
$\displaystyle\ell(\gamma)\leq\frac{1}{\varepsilon}|x-y|\quad$ and\quad $\displaystyle d(z)\geq\frac{\varepsilon|x-z||y-z|}{|x-y|}$ for every $z\in\gamma$.
\end{center}
\end{definition}
As pointed out in the Introduction, we consider two particular classes of $(\varepsilon,\delta)$ domains $\Omega\subset\mathbb{R}^n$:
\begin{itemize}
\item[$i)$] $(\varepsilon,\delta)$ domains having as boundary a $d$-set;
\item[$ii)$] arbitrary closed $(\varepsilon,\delta)$ domains in the sense of \cite{jonsson91}.
\end{itemize}
For the sake of completeness, we recall the definition of $d$-set given in \cite{JoWa}.
\begin{definition}\label{dset}
A closed nonempty set $\mathcal{M}\subset\mathbb{R}^n$ is a $d$-set (for $0<d\leq n$) if there exist a Borel measure $\mu$ with $\supp\mu=\mathcal{M}$ and two positive constants $c_1$ and $c_2$ such that
\begin{equation}\label{defindset}
c_1r^{d}\leq \mu(B(P,r)\cap\mathcal{M})\leq c_2 r^{d}\quad\forall\,P \in\mathcal{M},\;\forall\, 0<r\leq 1.
\end{equation}
The measure $\mu$ is called a $d$-measure.
\end{definition}
In both the cases $i)$ and $ii)$, we can prove trace theorems, i.e. Green and Stokes formulas. For the sake of simplicity, we restrict ourselves to the case in which $\partial\Omega$ is a $d$-set. We recall the definition of Besov space specialized to our case. For generalities on Besov spaces, we refer to \cite{JoWa}.
\begin{definition}
Let $\mathcal{G}$ be a $d$-set with respect to a $d$-measure $\mu$ and $\alpha=1-\frac{n-d}{p}$. ${B^{p,p}_\alpha(\mathcal{G})}$ is the space of functions for which the following norm is finite:
$$
\|u\|_{B^{p,p}_\alpha(\mathcal{G})}=\|u\|_{L^p(\mathcal{G})}+\left(\iint_{|P-P'|<1}\frac{|u(P)-u(P')|^p}{|P-P'|^{d+p\alpha}}\,\mathrm{d}\mu(P)\,\mathrm{d}\mu(P')\right)^\frac{1}{p}.
$$
\end{definition}
Throughout the paper, $p'$ will denote the H\"older conjugate exponent of $p$. In the following, we denote the dual of the Besov space on a $d$-set $\mathcal{G}$ by $(B^{p,p}_\alpha(\mathcal{G}))'$; this space coincides with the space $B^{p',p'}_{-\alpha}(\mathcal{G})$ (see \cite{JoWa2}).
\begin{theorem}[Stokes formula]\label{stokes}
Let $u\in W^p(\curl,\Omega)$. There exists a linear and continuous operator $l_\tau(u)=u\times\nu$ from $W^p(\curl,\Omega)$ to $((B^{p',p'}_\alpha(\partial\Omega))')^3$.
The following generalized Stokes formula holds for every $v\in W^{1,p'}(\Omega)^n$:
\begin{equation}\label{stokesformula}
\left\langle u\times\nu,v\right\rangle_{((B^{p',p'}_\alpha(\partial\Omega))')^3, (B^{p',p'}_\alpha(\partial\Omega))^3}
=\int_\Omega u\cdot \curl v\,\mathrm{d} x +\int_\Omega v\cdot \curl u\,\mathrm{d} x.
\end{equation}
Moreover, the operator $u \mapsto l_\tau(u)=u\times\nu$ is linear and continuous on $(B^{p',p'}_\alpha(\partial\Omega))^3$.
\end{theorem}
\begin{theorem}[Green formula]\label{green}
Let $u\in W^p(\dive,\Omega)$. There exists a linear and continuous operator $l_\nu(u)=u\cdot\nu$ from $W^p(\dive,\Omega)$ to $(B^{p',p'}_\alpha(\partial\Omega))'$.
The following generalized Green formula holds for every $v\in W^{1,p'}(\Omega)$:
\begin{equation}\label{greenformula}
\left\langle u\cdot\nu,v\right\rangle_{(B^{p',p'}_\alpha(\partial\Omega))', B^{p',p'}_\alpha(\partial\Omega)}
=\int_\Omega u\cdot\nabla v\,\mathrm{d} x +\int_\Omega v\dive u\,\mathrm{d} x.
\end{equation}
Moreover, the operator $u \mapsto l_\nu(u)=u\cdot\nu$ is linear and continuous on $B^{p',p'}_\alpha(\partial\Omega)$.
\end{theorem}
For the proofs we refer the reader to \cite{LaVe2} and \cite{CHLTV} with small suitable changes. Examples of domains for which Theorems \ref{stokes} and \ref{green} hold are 2D or 3D Koch-type domains. Formulas \eqref{stokesformula} and \eqref{greenformula} give a rigorous meaning to the boundary conditions in $W^p_0(\curl,\Omega)$ and $W^p_0(\dive,\Omega)$ respectively, in terms of the duals of suitable Besov spaces.
\section{Friedrichs and Gaffney inequalities}
\setcounter{equation}{0}
From now on, let $\Omega\subset\mathbb{R}^n$ be a bounded $(\varepsilon,\delta)$ domain, for $n=2,3$, having as boundary $\partial\Omega$ a $d$-set.
\subsection{The case $v\in W^p(\curl,\Omega)\cap W^p_0(\dive,\Omega)$}\label{divergenza}
We first consider the case $v\in W^p(\curl,\Omega)\cap W^p_0(\dive,\Omega)$. We point out that, since $\nu\cdot v=0$ on $\partial\Omega$, we have that
\begin{equation}\label{mediadive}
\int_\Omega \dive v\,\mathrm {d} x=0.
\end{equation}
Let $S\subset\mathbb{R}^n$ be a bounded measurable subset of $\mathbb{R}^n$ with positive measure; we denote by $\bar x$ its barycenter.\\
We construct the affine vector field $P_S(u)$ associated to $S$ and $u\in W^p(\curl,S)\cap W^p_0(\dive,S)$ in the following way:
\begin{equation}\label{Pdef}
P_S(u)(x)=a+B(x-\bar x),
\end{equation}
where $a\in\mathbb{R}^n$ and $B$ is a $n\times n$ matrix with entries $b_{ij}$ defined as
\begin{equation}\label{definizioni}
a=\frac{1}{|S|}\int_S u\,\mathrm {d} x\quad\text{and}\quad b_{ij}=\frac{1}{2|S|}\int_S \left(\frac{\partial u_i}{\partial x_j}+\frac{\partial u_j}{\partial x_i}\right)\,\mathrm {d} x.
\end{equation}
We point out that, by definition, $B$ is a symmetric matrix. Moreover, since $\partial_j (P_S(u))_i=b_{ij}$, the symmetry of $B$ yields $\curl(P_S(u))=0$; a direct calculation also gives
\begin{equation}\label{diveP}
\dive (P_S(u))=\frac{1}{|S|}\int_S \dive u\,\mathrm {d} x
\end{equation}
and
\begin{equation}\label{prop2}
\int_S (u-P_S(u))\,\mathrm {d} x=0.
\end{equation}
By direct computation, it also holds that
\begin{equation}\label{stimagradiente}
\|\nabla(u-P_S(u))\|_{L^p(S)^{n\times n}}\leq C\|\nabla u\|_{L^p(S)^{n\times n}},
\end{equation}
where $C$ depends only on $|S|$.
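As an illustration, the defining properties of $P_S(u)$ can be checked numerically. The following self-contained Python sketch (our own addition, not part of the paper; the sample field $u$ and the grid are arbitrary choices) verifies, on the unit square $S=[0,1]^2$ with $n=2$, that $B$ is symmetric (hence $\curl(P_S(u))=0$) and that $u-P_S(u)$ has vanishing mean, as in \eqref{prop2}.

```python
import math

# Numerical sanity check (illustration only) of the affine field
# P_S(u)(x) = a + B(x - xbar) on the unit square S = [0,1]^2 (n = 2).
# The smooth sample field u and its exact gradient are our own choices.

N = 100
h = 1.0 / N
pts = [((i + 0.5) * h, (j + 0.5) * h) for i in range(N) for j in range(N)]

def u(x, y):                       # sample field u = (u1, u2)
    return (math.sin(x) * y * y, math.cos(y) + x * y)

def grad_u(x, y):                  # rows: grad u1, grad u2 (exact derivatives)
    return ((math.cos(x) * y * y, 2.0 * math.sin(x) * y),
            (y, -math.sin(y) + x))

M = len(pts)
a = [sum(u(x, y)[k] for x, y in pts) / M for k in range(2)]          # mean of u
g = [[sum(grad_u(x, y)[i][j] for x, y in pts) / M for j in range(2)]
     for i in range(2)]                                              # mean gradient
B = [[0.5 * (g[i][j] + g[j][i]) for j in range(2)] for i in range(2)]
xbar = (0.5, 0.5)                                                    # barycenter of S

# B is symmetric by construction, so curl(P_S(u)) = b_21 - b_12 = 0
assert abs(B[0][1] - B[1][0]) < 1e-14

def P(x, y):                       # the affine field P_S(u)
    return tuple(a[k] + B[k][0] * (x - xbar[0]) + B[k][1] * (y - xbar[1])
                 for k in range(2))

# the mean of u - P_S(u) over S vanishes, as stated in the text
res = [sum(u(x, y)[k] - P(x, y)[k] for x, y in pts) / M for k in range(2)]
print(res)
```

Both components of `res` are zero up to floating-point rounding, since $a$ is exactly the discrete mean of $u$ and the grid is symmetric about the barycenter.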
Let us now suppose that \eqref{Fr2} holds in $S$ for $u\in W^p(\curl,S)\cap W^p_0(\dive,S)$. Then \eqref{stimagradiente} implies that
\begin{equation}\label{int1}
\|\nabla(u-P_S(u))\|_{L^p(S)^{n\times n}}\leq C\left(\|\curl u\|_{L^p(S)^n}+\|\dive u\|_{L^p(S)}\right).
\end{equation}
Since $u-P_S(u)$ has vanishing mean value on $S$ by \eqref{prop2}, from the Poincar\'e-Wirtinger inequality and \eqref{int1} we have
\begin{equation}\label{poincare}
\|u-P_S(u)\|_{L^p(S)^n}\leq C \diam(S)\left(\|\curl u\|_{L^p(S)^n}+\|\dive u\|_{L^p(S)}\right),
\end{equation}
where $\diam(S)$ is the diameter of $S$. Now, one can easily see that
\begin{equation}\label{stimaPinf}
\|\nabla P_S(u)\|_{L^\infty(S)^{n\times n}}\leq\|\nabla u\|_{L^\infty(S)^{n\times n}};
\end{equation}
hence, by using again the Poincar\'e-Wirtinger inequality (with $p=\infty$), the triangle inequality and \eqref{stimaPinf}, we get
\begin{equation}\label{stimainf}
\|u-P_S(u)\|_{L^\infty(S)^n}\leq C \diam(S)\|\nabla(u-P_S(u))\|_{L^\infty(S)^{n\times n}}\leq 2C\diam(S)\|\nabla u\|_{L^\infty(S)^{n\times n}}.
\end{equation}
From now on, we choose $v\in W^{1,\infty}(\Omega)^n$; the general case will then follow by a density argument. We construct the extension $Ev$ following the approach of Jones \cite{Jones} by using the linear polynomials $P_S(v)$.\\
Let us recall that any open set $\Omega\subset\mathbb{R}^n$ admits a so-called \emph{Whitney decomposition} (see \cite{whitney}, \cite{stein}) into dyadic cubes $S_k$, i.e.
\begin{center}
$\displaystyle\Omega=\bigcup_k S_k$.
\end{center}
This decomposition is such that
\begin{equation}\label{w1}
1\leq\frac{{\rm dist}(S_k,\partial\Omega)}{\ell(S_k)}\leq 4\sqrt{n}\quad\forall\,k,
\end{equation}
\begin{equation}\label{w2}
S_j^0\cap S_k^0=\emptyset\quad\text{if}\,\,j\neq k,
\end{equation}
\begin{equation}\label{w3}
\frac{1}{4}\leq\frac{\ell(S_j)}{\ell(S_k)}\leq 4\quad\text{if}\,\,S_j\cap S_k\neq\emptyset,
\end{equation}
where $S^0$ denotes the interior of $S$ and $\ell(S)$ is the edge length of a cube $S$.\\
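The construction behind \eqref{w1}--\eqref{w3} is elementary to implement. The following Python sketch (our own illustration, using the selection rule from Stein's book: keep a dyadic cube $Q$ as soon as $\diam(Q)\le{\rm dist}(Q,\partial\Omega)$) builds a truncated Whitney decomposition of the open unit disk in $\mathbb{R}^2$ and checks that the ratio in \eqref{w1} stays between $1$ and $4\sqrt{n}$ with $n=2$.

```python
import math

# Illustration only (not from the paper): a truncated Whitney decomposition
# of the open unit disk in R^2, with Stein's selection rule: a dyadic cube
# Q is kept as soon as diam(Q) <= dist(Q, boundary), otherwise subdivided.
# Cubes still unresolved at the maximal depth are simply dropped.

def far_norm(x0, y0, l):
    # norm of the corner of [x0, x0+l] x [y0, y0+l] farthest from the origin
    return math.hypot(max(abs(x0), abs(x0 + l)), max(abs(y0), abs(y0 + l)))

def whitney(x0, y0, l, depth, cubes):
    if far_norm(x0, y0, l) < 1.0:            # cube contained in the open disk
        dist = 1.0 - far_norm(x0, y0, l)     # distance to the unit circle
        if l * math.sqrt(2) <= dist:         # diam(Q) <= dist(Q, boundary): keep
            cubes.append((x0, y0, l))
            return
    if depth == 0:
        return
    h = l / 2.0                              # otherwise subdivide into 4 children
    for dx in (0.0, h):
        for dy in (0.0, h):
            whitney(x0 + dx, y0 + dy, h, depth - 1, cubes)

cubes = []
whitney(-1.0, -1.0, 2.0, 8, cubes)           # start from the cube [-1,1]^2

# the ratio dist(S_k, boundary)/ell(S_k) stays within the Whitney bounds (w1)
ratios = [(1.0 - far_norm(x0, y0, l)) / l for x0, y0, l in cubes]
print(len(cubes), min(ratios), max(ratios))
```

With this rule the ratio is at least $\sqrt{2}$ (the selection criterion) and at most $4\sqrt{2}$ (since the parent of a kept cube failed the criterion), consistent with \eqref{w1} for $n=2$.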
Let now $W_1=\{S_k\}$ be a Whitney decomposition of $\Omega$ and $W_2=\{Q_j\}$ be a Whitney decomposition of $(\Omega^c)^0$. We set
\begin{equation*}
W_3=\left\{Q_j\in W_2\,:\,\ell(Q_j)\leq\frac{\varepsilon\delta}{16n}\right\}.
\end{equation*}
In his paper, Jones has shown that, for every $Q_j\in W_3$, one can choose a ``reflected'' cube $Q_j^*=S_k\in W_1$ such that
\begin{equation}\label{numero}
1\leq\frac{\ell(S_k)}{\ell(Q_j)}\leq 4\quad\text{and}\quad{\rm dist}(Q_j,S_k)\leq C\ell(Q_j),
\end{equation}
see Lemma 2.4 and Lemma 2.8 in \cite{Jones}. Moreover, if $Q_j,Q_k\in W_3$ have non-empty intersection, there exists a chain $F_{j,k}=\{Q_j^*=S_1,S_2,\dots,S_m=Q_k^*\}$ of cubes in $W_1$ which connects $Q_j^*$ and $Q_k^*$ such that $S_i\cap S_{i+1}\neq\emptyset$ and $m\leq C(\varepsilon,\delta)$.\\
From \cite{stein}, \cite{whitney} it follows that there exists a partition of unity $\{\phi_j\}$, associated with the Whitney decomposition, such that
\begin{center}
$\phi_j\in C^\infty(\mathbb{R}^n),\quad\supp\phi_j\subset\frac{17}{16}Q_j,\quad 0\leq\phi_j\leq 1$,\\
$\sum_{Q_j\in W_3}\phi_j=1\,\,\text{on}\,\,\bigcup_{Q_j\in W_3} Q_j\quad$ and $\quad|\nabla\phi_j|\leq C\ell(Q_j)^{-1}\quad\forall\,j$.
\end{center}
For $v\in W^{1,\infty}(\Omega)^n$, let $P_j:=P_{Q_j^*}(v)$ be defined as in \eqref{Pdef} and \eqref{definizioni}. We now define the extension $Ev$ of $v$ to $\mathbb{R}^n$ in the following way:
\begin{equation*}
Ev=
\begin{cases}
\displaystyle\sum_{Q_j\in W_3} P_j\phi_j\quad &\text{in}\,\,(\Omega^c)^0,\\[2mm]
v &\text{in}\,\,\Omega.
\end{cases}
\end{equation*}
We point out that, since the boundary of an $(\varepsilon,\delta)$ domain has zero measure (see Lemma 2.3 in \cite{Jones}), it follows that $Ev$ is defined a.e. in $\mathbb{R}^n$.\\
From now on, if not otherwise specified, in this subsection we assume that $v\in W^p(\curl,\Omega)\cap W^p_0(\dive,\Omega)\cap W^p_0(\dive,S)$ for every $S\in W_1$. We now prove some preliminary lemmas. For the sake of completeness, we recall Lemma 2.1 in \cite{Jones}.
\begin{lemma}\label{lemmajones} Let $Q$ be a cube and let $F,G\subset Q$ be two measurable subsets such that $|F|,|G|\geq\gamma|Q|$ for some $\gamma>0$. If $P$ is a polynomial of degree 1, then
\begin{equation*}
\|P\|_{L^p(F)}\leq C(\gamma)\|P\|_{L^p(G)}.
\end{equation*}
\end{lemma}
\begin{lemma}\label{lemma3.2} Let $F=\{S_1,\dots,S_m\}$ be a chain of cubes in $W_1$. Then
\begin{equation}\label{stimaPLp}
\|P_{S_1}(v)-P_{S_m}(v)\|_{L^p(S_1)^n}\leq C(m)\ell(S_1)\left(\|\curl v\|_{L^p(\cup_j S_j)^n}+\|\dive v\|_{L^p(\cup_j S_j)}\right)
\end{equation}
and
\begin{equation}\label{stimaPLinf}
\|P_{S_1}(v)-P_{S_m}(v)\|_{L^\infty(S_1)^n}\leq C(m)\ell(S_1)\|\nabla v\|_{L^\infty(\cup_j S_j)^{n\times n}}.
\end{equation}
\end{lemma}
\begin{proof} We will use \eqref{poincare}, where $S$ is a cube or a union of two neighboring cubes. From \eqref{w3}, it follows that the number of possible geometries of $S$ is finite; hence, we can find a uniform constant in \eqref{poincare}.\\
By using Lemma \ref{lemmajones}, we get
\begin{align*}
&\|P_{S_1}(v)-P_{S_m}(v)\|_{L^p(S_1)^n}\leq\sum_{r=1}^{m-1}\|P_{S_r}(v)-P_{S_{r+1}}(v)\|_{L^p(S_1)^n}\\[2mm]
&\leq c(m)\sum_{r=1}^{m-1}\|P_{S_r}(v)-P_{S_{r+1}}(v)\|_{L^p(S_r)^n}\\[2mm]
&\leq c(m)\sum_{r=1}^{m-1}\left\{\|P_{S_r}(v)-P_{S_r\cup S_{r+1}}(v)\|_{L^p(S_r)^n}+\|P_{S_r\cup S_{r+1}}(v)-P_{S_{r+1}}(v)\|_{L^p(S_{r+1})^n}\right\}\\[2mm]
&\leq c(m)\sum_{r=1}^{m-1}\left\{\|P_{S_r}(v)-v\|_{L^p(S_r)^n}+\|P_{S_{r+1}}(v)-v\|_{L^p(S_{r+1})^n}+\|P_{S_r\cup S_{r+1}}(v)-v\|_{L^p(S_r\cup S_{r+1})^n}\right\}\\[2mm]
&\leq Cc(m)\ell(S_1)\left(\|\curl v\|_{L^p(\cup_j S_j)^n}+\|\dive v\|_{L^p(\cup_j S_j)}\right),
\end{align*}
where we used the fact that $F$ is a chain, the triangle inequality, and finally \eqref{poincare}.\\
The proof of \eqref{stimaPLinf} follows analogously by using \eqref{stimainf}.
\end{proof}
For every $Q_j,Q_k\in W_3$ with non-empty intersection, we now choose a chain $F_{j,k}$ which connects $Q_j^*$ and $Q_k^*$ and such that $m\leq C(\varepsilon,\delta)$. We define
\begin{equation*}
F(Q_j)=\bigcup_{Q_k\in W_3,\, Q_j\cap Q_k\neq\emptyset} F_{j,k};
\end{equation*}
hence
\begin{equation}\label{finito}
\left\|\sum_{Q_k\,:\,Q_j\cap Q_k\neq\emptyset} \chi_{\cup F_{j,k}}\right\|_{L^\infty(\mathbb{R}^n)}\leq C\quad\forall\,Q_j\in W_3.
\end{equation}
We now prove two lemmas which allow us to control the norms of $Ev$, $\dive (Ev)$, $\curl (Ev)$ and $\nabla (Ev)$ in $(\Omega^c)^0$.
\begin{lemma}\label{lemma1} Let $Q_0\in W_3$. We have that:
\begin{equation}\label{stima1}
\|Ev\|_{L^p(Q_0)^n}\leq C\left(\|v\|_{L^p(Q_0^*)^n}+\ell(Q_0)(\|\curl v\|_{L^p(F(Q_0))^n}+\|\dive v\|_{L^p(F(Q_0))})\right),
\end{equation}
\begin{equation}\label{stima2}
\|\curl(Ev)\|_{L^p(Q_0)^n}+\|\dive (Ev)\|_{L^p(Q_0)}\leq C\left(\|\curl v\|_{L^p(F(Q_0))^n}+\|\dive v\|_{L^p(F(Q_0))}\right),
\end{equation}
\begin{equation}\label{stima3}
\|Ev\|_{L^\infty(Q_0)^n}\leq C\left(\|v\|_{L^\infty(Q_0^*)^n}+\ell(Q_0)\|\nabla v\|_{L^\infty(F(Q_0))^{n\times n}}\right),
\end{equation}
\begin{equation}\label{stima4}
\|\nabla(Ev)\|_{L^\infty(Q_0)^{n\times n}}\leq C\|\nabla v\|_{L^\infty(F(Q_0))^{n\times n}}.
\end{equation}
\end{lemma}
\begin{proof} We recall that, from the definition of $Ev$, on $Q_0$ we have that $\displaystyle Ev=\sum_{Q_j\in W_3} P_j\phi_j$. Moreover, since $\displaystyle\sum_{Q_j\in W_3}\phi_j\equiv 1$ on $\displaystyle\bigcup_{Q_j\in W_3} Q_j$, we get
\begin{equation*}
\left\|\sum_{Q_j\in W_3}P_j\phi_j\right\|_{L^p(Q_0)^n}\leq\|P_0\|_{L^p(Q_0)^n}+\left\|\sum_{Q_j\in W_3}(P_j-P_0)\phi_j\right\|_{L^p(Q_0)^n}:=A+B.
\end{equation*}
We now estimate $A$ and $B$ separately. As to $A$, from Lemma \ref{lemmajones} and \eqref{poincare}, we get
\begin{align}\label{stimaAprima}
A&=\|P_0\|_{L^p(Q_0)^n}\leq C\|P_0\|_{L^p(Q_0^*)^n}\leq C(\|P_0-v\|_{L^p(Q_0^*)^n}+\|v\|_{L^p(Q_0^*)^n})\notag\\[2mm]
&\leq C(\ell(Q_0)(\|\curl v\|_{L^p(Q_0^*)^n}+\|\dive v\|_{L^p(Q_0^*)})+\|v\|_{L^p(Q_0^*)^n}),
\end{align}
where we estimated $\ell(Q_0^*)$ with $\ell(Q_0)$ using \eqref{numero}, since $Q_0\in W_3$. We point out that, thanks to \eqref{finito}, the norms on the right-hand side of \eqref{stimaAprima} can be estimated in terms of the $L^p(\cup_j F_{0,j})$-norms. Hence, we get the following:
\begin{equation}\label{stimaA}
A\leq C(\ell(Q_0)(\|\curl v\|_{L^p(\cup_j F_{0,j})^n}+\|\dive v\|_{L^p(\cup_j F_{0,j})})+\|v\|_{L^p(\cup_j F_{0,j})^n}).
\end{equation}
As to $B$, from the properties of $\phi_j$ it is sufficient to bound $\|P_j-P_0\|_{L^p(Q_0)^n}$. By using again Lemma \ref{lemmajones}, \eqref{stimaPLp} and proceeding as above, we get
\begin{equation}\label{stimaB}
B\leq\|P_j-P_0\|_{L^p(Q_0)^n}\leq C\|P_j-P_0\|_{L^p(Q_0^*)^n}\leq C\ell(Q_0)(\|\curl v\|_{L^p(\cup_j F_{0,j})^n}+\|\dive v\|_{L^p(\cup_j F_{0,j})}).
\end{equation}
Hence from \eqref{stimaA} and \eqref{stimaB} we get \eqref{stima1}. Estimate \eqref{stima3} follows similarly by using \eqref{stimainf} and \eqref{stimaPLinf}.\\
We now remark that, on $Q_0$, we have that
\begin{equation*}
Ev=\sum_{Q_j\in W_3} P_j\phi_j=P_0\sum_{Q_j\in W_3}\phi_j+\sum_{Q_j\in W_3} (P_j-P_0)\phi_j=P_0+\sum_{Q_j\in W_3} (P_j-P_0)\phi_j.
\end{equation*}
Therefore, since $\curl(P_0)=0$, we have that
\begin{equation*}
\curl(Ev)=\sum_{Q_j\in W_3}\curl((P_j-P_0)\phi_j).
\end{equation*}
Moreover, since $v\in W^p_0(\dive,Q_0^*)$, the analogue of \eqref{mediadive} in $Q_0^*$ together with \eqref{diveP} gives $\dive(P_0)=0$; it follows that
\begin{equation*}
\dive(Ev)=\sum_{Q_j\in W_3}\dive((P_j-P_0)\phi_j).
\end{equation*}
Only finitely many cubes $Q_j$ satisfy $\phi_j\neq 0$ on $Q_0$, and each of them has non-empty intersection with $Q_0$; hence, from \eqref{w3}, we have that $\ell(Q_j)\geq\frac{1}{4}\ell(Q_0)$. From the properties of $\phi_j$, this implies that $|\nabla\phi_j|\leq 4C\ell(Q_0)^{-1}$.\\
By using vector identities, Lemma \ref{lemmajones} and \eqref{stimaPLp}, we have that
\begin{align*}
&\|\curl((P_j-P_0)\phi_j)\|_{L^p(Q_0)^n}=\|(P_j-P_0)\times\nabla\phi_j\|_{L^p(Q_0)^n}\leq C\|P_j-P_0\|_{L^p(Q_0)^n}\|\nabla\phi_j\|_{L^\infty(Q_0)^n}\\[2mm]
&\leq C\ell(Q_0)^{-1}\|P_j-P_0\|_{L^p(Q_0)^n}\leq C\ell(Q_0)^{-1}\|P_j-P_0\|_{L^p(Q_0^*)^n}\\[2mm]
&\leq C(\|\curl v\|_{L^p(\cup_j F_{0,j})^n}+\|\dive v\|_{L^p(\cup_j F_{0,j})}).
\end{align*}
As to the divergence term, we similarly get
\begin{align*}
&\|\dive((P_j-P_0)\phi_j)\|_{L^p(Q_0)}=\|(P_j-P_0)\cdot\nabla\phi_j\|_{L^p(Q_0)}\leq\|P_j-P_0\|_{L^p(Q_0)^n}\|\nabla\phi_j\|_{L^\infty(Q_0)^n}\\[2mm]
&\leq C(\|\curl v\|_{L^p(\cup_j F_{0,j})^n}+\|\dive v\|_{L^p(\cup_j F_{0,j})}).
\end{align*}
Summing up in $j$ we get
\begin{equation*}
\|\curl(Ev)\|_{L^p(Q_0)^n}+\|\dive(Ev)\|_{L^p(Q_0)}\leq C(\|\curl v\|_{L^p(F(Q_0))^n}+\|\dive v\|_{L^p(F(Q_0))}),
\end{equation*}
i.e. \eqref{stima2}.\\
We are left to prove \eqref{stima4}. Similarly as above, we have that
\begin{equation*}
\nabla(Ev)=\nabla P_0+\sum_{Q_j\in W_3} \nabla((P_j-P_0)\phi_j).
\end{equation*}
From Lemma \ref{lemmajones} and \eqref{stimaPinf}, we get
\begin{equation*}
\|\nabla P_0\|_{L^\infty(Q_0)^{n\times n}}\leq C\|\nabla P_0\|_{L^\infty(Q_0^*)^{n\times n}}\leq C\|\nabla v\|_{L^\infty(Q_0^*)^{n\times n}}\leq C\|\nabla v\|_{L^\infty(\cup_j F_{0,j})^{n\times n}}.
\end{equation*}
As above, it follows that
\begin{align*}
&\|\nabla(P_j-P_0)\|_{L^\infty(Q_0)^{n\times n}}\leq C\|\nabla(P_j-P_0)\|_{L^\infty(Q_0^*)^{n\times n}}\leq C\|\nabla(P_j-P_0)\|_{L^\infty(Q_0^*\cup Q_j^*)^{n\times n}}\\[2mm]
&\leq C\|\nabla v\|_{L^\infty(Q_0^*\cup Q_j^*)^{n\times n}}\leq C\|\nabla v\|_{L^\infty(\cup_j F_{0,j})^{n\times n}}.
\end{align*}
From these inequalities, \eqref{stima4} follows and the proof is complete.
\end{proof}
We now prove a result similar to Lemma \ref{lemma1}, concerning the Whitney cubes of $(\Omega^c)^0$ which do not belong to $W_3$.
\begin{lemma}\label{lemma2} Let $Q_0\in W_2\setminus W_3$. We have that:
\begin{equation}\label{stima1comp}
\|Ev\|_{L^p(Q_0)^n}\leq C\sum_{Q_j\in W_3\,:\,Q_j\cap Q_0\neq\emptyset}\left(\|v\|_{L^p(Q_j^*)^n}+\|\curl v\|_{L^p(Q_j^*)^n}+\|\dive v\|_{L^p(Q_j^*)}\right),
\end{equation}
\begin{align}
\|\curl(Ev)\|_{L^p(Q_0)^n}&+\|\dive(Ev)\|_{L^p(Q_0)}\notag\\[2mm]
&\leq C\sum_{Q_j\in W_3\,:\,Q_j\cap Q_0\neq\emptyset}\left(\|v\|_{L^p(Q_j^*)^n}+\|\curl v\|_{L^p(Q_j^*)^n}+\|\dive v\|_{L^p(Q_j^*)}\right),\label{stima2comp}
\end{align}
\begin{equation}\label{stima3comp}
\|Ev\|_{L^\infty(Q_0)^n}\leq C\sum_{Q_j\in W_3\,:\,Q_j\cap Q_0\neq\emptyset}\left(\|v\|_{L^\infty(Q_j^*)^n}+\|\nabla v\|_{L^\infty(Q_j^*)^{n\times n}}\right),
\end{equation}
\begin{equation}\label{stima4comp}
\|\nabla(Ev)\|_{L^\infty(Q_0)^{n\times n}}\leq C\sum_{Q_j\in W_3\,:\,Q_j\cap Q_0\neq\emptyset}\left(\|v\|_{L^\infty(Q_j^*)^n}+\|\nabla v\|_{L^\infty(Q_j^*)^{n\times n}}\right).
\end{equation}
\end{lemma}
\begin{proof} We start by pointing out that, if $\phi_j\neq 0$ on $Q_0$, then $Q_j\cap Q_0\neq\emptyset$ (since $\supp\phi_j\subset\frac{17}{16}Q_j$). Therefore, since $Q_0\in W_2\setminus W_3$, we have
\begin{equation}\label{stimalunghezza}
\ell (Q_j)\geq\frac{1}{4}\ell (Q_0)\geq\frac{\varepsilon\delta}{64n}.
\end{equation}
On $Q_0$ we have that
\begin{equation*}
|Ev|=\left|\sum_{Q_j\in W_3\,:\,Q_j\cap Q_0\neq\emptyset} P_j\phi_j\right|\leq \sum_{Q_j\in W_3\,:\,Q_j\cap Q_0\neq\emptyset} |P_j|.
\end{equation*}
From Lemma \ref{lemmajones} and the triangle inequality, we get
\begin{equation}\label{interm}
\|P_j\|_{L^p(Q_0)^n}\leq C\|P_j\|_{L^p(Q_j)^n}\leq C\|P_j\|_{L^p(Q_j^*)^n}\leq C(\|P_j-v\|_{L^p(Q_j^*)^n}+\|v\|_{L^p(Q_j^*)^n}).
\end{equation}
From \eqref{interm} and \eqref{poincare}, it follows that
\begin{align*}
&\|Ev\|_{L^p(Q_0)^n}\leq\sum_{Q_j\in W_3\,:\,Q_j\cap Q_0\neq\emptyset}\|P_j\|_{L^p(Q_0)^n}\\[3mm]
&\leq C\sum_{Q_j\in W_3\,:\,Q_j\cap Q_0\neq\emptyset}\left(\|v\|_{L^p(Q_j^*)^n}+\diam(Q_j^*)(\|\curl v\|_{L^p(Q_j^*)^n}+\|\dive v\|_{L^p(Q_j^*)})\right).
\end{align*}
Since $\Omega$ is bounded, we can estimate $\diam(Q_j^*)$ with a constant depending on $\diam(\Omega)$, thus proving \eqref{stima1comp}.\\
We come to \eqref{stima2comp}. By proceeding as in the proof of Lemma \ref{lemma1} and by using \eqref{stimalunghezza}, the following estimate holds:
\bigskip
\begin{align*}
&\|\curl(Ev)\|_{L^p(Q_0)^n}+\|\dive(Ev)\|_{L^p(Q_0)}=\left\|\sum_{Q_j\in W_3\,:\,Q_j\cap Q_0\neq\emptyset}\curl(P_j\phi_j)\right\|_{L^p(Q_0)^n}\\[2mm]
&+\left\|\sum_{Q_j\in W_3\,:\,Q_j\cap Q_0\neq\emptyset}\dive(P_j\phi_j)\right\|_{L^p(Q_0)}\leq C\sum_{Q_j\in W_3\,:\,Q_j\cap Q_0\neq\emptyset}\|P_j\|_{L^p(Q_0)^n}\|\nabla\phi_j\|_{L^\infty(Q_0)^n}\\[2mm]
&\leq C\ell(Q_0)^{-1}\sum_{Q_j\in W_3\,:\,Q_j\cap Q_0\neq\emptyset}\|P_j\|_{L^p(Q_0)^n}\leq C\left(\frac{\varepsilon\delta}{64n}\right)^{-1}\sum_{Q_j\in W_3\,:\,Q_j\cap Q_0\neq\emptyset}\|P_j\|_{L^p(Q_j^*)^n}.
\end{align*}
By proceeding as above, we get \eqref{stima2comp}. Estimates \eqref{stima3comp} and \eqref{stima4comp} follow in a similar way by using \eqref{stimainf} and \eqref{stimaPinf}.
\end{proof}
From the above lemmas we obtain the following result.
\begin{prop}\label{proposiz} For every $v\in W^{1,\infty}(\Omega)^n$ such that $v\in W^p(\curl,\Omega)\cap W^p_0(\dive,\Omega)$ we have
\begin{align}
\|Ev\|_{L^p((\Omega^c)^0)^n}&+\|\dive(Ev)\|_{L^p((\Omega^c)^0)}+\|\curl(Ev)\|_{L^p((\Omega^c)^0)^n}\notag\\[2mm]
&\leq C\left(\|v\|_{L^p(\Omega)^n}+\|\dive v\|_{L^p(\Omega)}+\|\curl v\|_{L^p(\Omega)^n}\right)\label{corol1}
\end{align}
and
\begin{equation}\label{corol2}
\|Ev\|_{W^{1,\infty}((\Omega^c)^0)^n}\leq C\|v\|_{W^{1,\infty}(\Omega)^n}.
\end{equation}
\end{prop}
\begin{proof} By summing over every $Q_0\in W_2$, the result follows as a direct consequence of Lemma \ref{lemma1} and Lemma \ref{lemma2}. In particular, \eqref{corol1} follows from \eqref{stima1}, \eqref{stima2}, \eqref{stima1comp} and \eqref{stima2comp}, while \eqref{corol2} follows from \eqref{stima3}, \eqref{stima4}, \eqref{stima3comp} and \eqref{stima4comp}.
\end{proof}
We now prove the first main result of this paper, which follows from the above lemmas.
\begin{theorem}[Friedrichs inequality]\label{fridis} Let $\Omega\subset\mathbb{R}^n$ be a bounded $(\varepsilon,\delta)$ domain with $\partial\Omega$ a $d$-set. There exists a constant $C=C(\varepsilon,\delta,n,p,\Omega)>0$ such that, for every $v\in W^{1,p}(\Omega)^n$ such that $v\in W^p(\curl,\Omega)\cap W^p_0(\dive,\Omega)$,
\begin{equation}\label{friedrichs}
\|v\|_{W^{1,p}(\Omega)^n}\leq C\left(\|v\|_{L^p(\Omega)^n}+\|\curl v\|_{L^p(\Omega)^n}+\|\dive v\|_{L^p(\Omega)}\right).
\end{equation}
\end{theorem}
\begin{proof} It is sufficient to prove \eqref{friedrichs} for $v\in W^{1,\infty}(\Omega)^n$; the general case then follows by density. We recall that the extension $Ev$ is defined a.e. on $\mathbb{R}^n$ since $|\partial\Omega|=0$. Moreover, from the definition of $Ev$ we can suppose that $\supp Ev$ is contained in a ball $B$.\\
Since $Ev\in W^{1,p}(B)^n$, from \eqref{corol1} we have that
\begin{equation*}
\|Ev\|_{L^p(B)^n}+\|\curl(Ev)\|_{L^p(B)^n}+\|\dive(Ev)\|_{L^p(B)}\leq C\left(\|v\|_{L^p(\Omega)^n}+\|\curl v\|_{L^p(\Omega)^n}+\|\dive v\|_{L^p(\Omega)}\right).
\end{equation*}
Hence, from the Friedrichs inequality for smooth domains and the above inequality, we get
\begin{align*}
\|v\|_{W^{1,p}(\Omega)^n}&=\|Ev\|_{W^{1,p}(\Omega)^n}\leq\|Ev\|_{W^{1,p}(B)^n}\leq C(\|Ev\|_{L^p(B)^n}+\|\curl(Ev)\|_{L^p(B)^n}+\|\dive(Ev)\|_{L^p(B)})\\[2mm]
&\leq C\left(\|v\|_{L^p(\Omega)^n}+\|\curl v\|_{L^p(\Omega)^n}+\|\dive v\|_{L^p(\Omega)}\right),
\end{align*}
which is the desired estimate.
\end{proof}
We conclude this section by proving the Gaffney inequality as a direct consequence of Theorem \ref{fridis}.
\begin{theorem}[Gaffney inequality]\label{gaffin} Let $\Omega\subset\mathbb{R}^n$ be a bounded simply connected $(\varepsilon,\delta)$ domain with $\partial\Omega$ a $d$-set. Let $v\in W^{1,p}(\Omega)^n$ be such that $v\in W^p(\curl,\Omega)\cap W^p_0(\dive,\Omega)$. Then there exists $C=C(\varepsilon,\delta,n,p,\Omega)>0$ such that
\begin{equation}\label{gaffney}
\|v\|_{W^{1,p}(\Omega)^n}\leq C\left(\|\curl v\|_{L^p(\Omega)^n}+\|\dive v\|_{L^p(\Omega)}\right).
\end{equation}
\end{theorem}
\begin{proof} We argue by contradiction. Suppose that \eqref{gaffney} does not hold; then there exists a sequence of vector fields $\{v_k\}\subset W^{1,p}(\Omega)^n\cap W^p(\curl,\Omega)\cap W^p_0(\dive,\Omega)$ such that
\begin{equation*}
\|v_k\|_{W^{1,p}(\Omega)^n}=1\quad\text{and}\quad\|\curl v_k\|_{L^p(\Omega)^n}+\|\dive v_k\|_{L^p(\Omega)}\xrightarrow[k\to+\infty]{} 0.
\end{equation*}
Since $\|v_k\|_{W^{1,p}(\Omega)^n}=1$, there exists a subsequence of $\{v_k\}$ (which we still denote by $v_{k}$) such that
\begin{equation*}
v_{k}\rightharpoonup v\,\,\text{in}\,\,W^{1,p}(\Omega)^n\quad\text{and}\quad v_{k}\rightarrow v\,\,\text{in}\,\,L^p(\Omega)^n.
\end{equation*}
Since the distributional limits coincide with the weak limits, it immediately follows that $\dive v=0$ and $\curl v=0$.\\
We now prove that $\{v_k\}$ is a Cauchy sequence in $W^{1,p}(\Omega)^n$. From the Friedrichs inequality \eqref{friedrichs}, for every $k,j\in\mathbb{N}$ one has
\begin{equation}\label{stimacauchy}
\|v_k-v_j\|_{W^{1,p}(\Omega)^n}\leq C\left(\|v_k-v_j\|_{L^p(\Omega)^n}+\|\dive (v_k-v_j)\|_{L^p(\Omega)}+\|\curl (v_k-v_j)\|_{L^p(\Omega)^n}\right).
\end{equation}
From the strong convergence of $v_k$ in $L^p(\Omega)^n$, the first term on the right-hand side of \eqref{stimacauchy} vanishes as $k,j\to+\infty$. The other two terms also vanish, since $\curl v_k$ and $\dive v_k$ both tend to $0$ in $L^p$ as $k\to+\infty$. Hence $\{v_k\}$ is a Cauchy sequence in $W^{1,p}(\Omega)^n$, and $v_k\to v$ strongly in $W^{1,p}(\Omega)^n$.\\
We recall that if $\curl v=0$ in $\Omega$ and $\Omega$ is simply connected, there exists a function $\Phi\in W^{1,p}(\Omega)$ such that $v=\nabla\Phi$. This in turn implies that $\Delta\Phi=\dive\nabla\Phi=\dive v=0$ in $\Omega$. Moreover, since $v\in W^p_0(\dive,\Omega)$, we also have that
\begin{center}
$\displaystyle\frac{\partial\Phi}{\partial\nu}=\nu\cdot\nabla\Phi=\nu\cdot v=0$ on $\partial\Omega$.
\end{center}
Hence $\Phi\in W^{1,p}(\Omega)$ is the unique (up to additive constants) weak solution of the following problem:
\begin{equation}\label{probphi}
\begin{cases}
\Delta\Phi=0\quad &\text{in}\,\,\Omega,\\[2mm]
\displaystyle\frac{\partial\Phi}{\partial\nu}=0 &\text{on}\,\,\partial\Omega.
\end{cases}
\end{equation}
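A sketch of why \eqref{probphi} forces $\Phi$ to be constant, in the case $p=2$ only (for general $p$ one needs a duality argument, which we do not detail here): testing the weak formulation with $\Phi$ itself gives

```latex
0=\int_\Omega \nabla\Phi\cdot\nabla\Phi\,\mathrm {d} x=\|\nabla\Phi\|_{L^2(\Omega)^n}^2,
```

so $\nabla\Phi=0$ a.e. in $\Omega$, and $\Phi$ is constant since $\Omega$ is connected.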
This implies that $\Phi$ is constant, and so $v=\nabla\Phi=0$ on $\Omega$. We have reached a contradiction, since
\begin{equation*}
1=\|v_k\|_{W^{1,p}(\Omega)^n}\xrightarrow[k\to+\infty]{}\|v\|_{W^{1,p}(\Omega)^n}=0.
\end{equation*}
\end{proof}
\subsection{The case $v\in W^p(\dive,\Omega)\cap W^p_0(\curl,\Omega)$}\label{rotore}
We now consider the case $v\in W^p(\dive,\Omega)\cap W^p_0(\curl,\Omega)$. We recall that this implies $\nu\times v=0$ on $\partial\Omega$ in the dual of $B^{p',p'}_\alpha(\partial\Omega)$.\\
We approximate $v\in W^p(\dive,\Omega)\cap W^p_0(\curl,\Omega)\cap W^{1,\infty}(\Omega)^n$ by means of the polynomials $P_j$ as in the previous subsection. We remark that in this case
\begin{equation*}
\dive P_j=\frac{1}{|Q_j^*|}\int_{Q_j^*} \dive v\,\mathrm {d} x,
\end{equation*}
which does not vanish in general. As in the previous subsection, estimates \eqref{int1} and \eqref{poincare} hold, as well as Lemmas \ref{lemma3.2}, \ref{lemma1} and \ref{lemma2}, under the hypothesis that $v\in W^p(\dive,\Omega)\cap W^p_0(\curl,\Omega)\cap W^p_0(\curl,S)$ for every $S\in W_1$.\\
For the sake of clarity, we state the analogues of Proposition \ref{proposiz} and Theorem \ref{fridis} in this case.
\begin{prop} For every $v\in W^{1,\infty}(\Omega)^n$ such that $v\in W^p(\dive,\Omega)\cap W^p_0(\curl,\Omega)$ we have
\begin{align}
\|Ev\|_{L^p((\Omega^c)^0)^n}&+\|\dive(Ev)\|_{L^p((\Omega^c)^0)}+\|\curl(Ev)\|_{L^p((\Omega^c)^0)^n}\notag\\[2mm]
&\leq C\left(\|v\|_{L^p(\Omega)^n}+\|\dive v\|_{L^p(\Omega)}+\|\curl v\|_{L^p(\Omega)^n}\right)\label{corol1rot}
\end{align}
and
\begin{equation}\label{corol2rot}
\|Ev\|_{W^{1,\infty}((\Omega^c)^0)^n}\leq C\|v\|_{W^{1,\infty}(\Omega)^n}.
\end{equation}
\end{prop}
\bigskip
\begin{theorem}[Friedrichs inequality]\label{fridisrot} Let $\Omega\subset\mathbb{R}^n$ be a bounded $(\varepsilon,\delta)$ domain with $\partial\Omega$ a $d$-set. There exists a constant $C=C(\varepsilon,\delta,n,p,\Omega)>0$ such that, for every $v\in W^{1,p}(\Omega)^n$ such that $v\in W^p(\dive,\Omega)\cap W^p_0(\curl,\Omega)$,
\begin{equation}\label{friedrichsrot}
\|v\|_{W^{1,p}(\Omega)^n}\leq C\left(\|v\|_{L^p(\Omega)^n}+\|\curl v\|_{L^p(\Omega)^n}+\|\dive v\|_{L^p(\Omega)}\right).
\end{equation}
\end{theorem}
We conclude by proving the Gaffney inequality in this setting.
\begin{theorem}[Gaffney inequality]\label{gaffinrot} Let $\Omega\subset\mathbb{R}^n$ be a bounded simply connected $(\varepsilon,\delta)$ domain with $\partial\Omega$ a $d$-set. Let $v\in W^{1,p}(\Omega)^n$ be such that $v\in W^p(\dive,\Omega)\cap W^p_0(\curl,\Omega)$. Then there exists $C=C(\varepsilon,\delta,n,p,\Omega)>0$ such that
\begin{equation}\label{gaffneyrot}
\|v\|_{W^{1,p}(\Omega)^n}\leq C\left(\|\curl v\|_{L^p(\Omega)^n}+\|\dive v\|_{L^p(\Omega)}\right).
\end{equation}
\end{theorem}
\begin{proof} We proceed as in the proof of \cite[Corollary 3.51]{monk}; we argue by contradiction and suppose that \eqref{gaffneyrot} does not hold. As in Theorem \ref{gaffin}, this means that there exists a sequence of vector fields $\{v_k\}\subset W^{1,p}(\Omega)^n\cap W^p(\dive,\Omega)\cap W^p_0(\curl,\Omega)$ such that
\begin{equation*}
\|v_k\|_{W^{1,p}(\Omega)^n}=1\quad\text{and}\quad\|\curl v_k\|_{L^p(\Omega)^n}+\|\dive v_k\|_{L^p(\Omega)}\xrightarrow[k\to+\infty]{} 0.
\end{equation*}
This implies that, up to a subsequence,
\begin{equation*}
v_{k}\rightharpoonup v\,\,\text{in}\,\,W^{1,p}(\Omega)^n\quad\text{and}\quad v_{k}\rightarrow v\,\,\text{in}\,\,L^p(\Omega)^n,
\end{equation*}
with $\dive v=0$ and $\curl v=0$.\\
From \eqref{friedrichsrot}, for every $k,j\in\mathbb{N}$ we have that
\begin{equation}\label{stimacauchyrot}
\|v_k-v_j\|_{W^{1,p}(\Omega)^n}\leq C\left(\|v_k-v_j\|_{L^p(\Omega)^n}+\|\dive (v_k-v_j)\|_{L^p(\Omega)}+\|\curl (v_k-v_j)\|_{L^p(\Omega)^n}\right).
\end{equation}
As in the proof of Theorem \ref{gaffin}, all the terms on the right-hand side of \eqref{stimacauchyrot} vanish as $k,j\to+\infty$; hence $\{v_k\}$ is a Cauchy sequence in $W^{1,p}(\Omega)^n$.\\
As in the case $v\in W^p(\curl,\Omega)\cap W^p_0(\dive,\Omega)$, there exists a function $\Phi\in W^{1,p}(\Omega)$ such that $v=\nabla\Phi$ and $\Delta\Phi=0$ in $\Omega$. Since in this case $v\in W^p_0(\curl,\Omega)$, we also have that
\begin{center}
$\nu\times\nabla\Phi=\nu\times v=0$ on $\partial\Omega$.
\end{center}
Up to shifting $\Phi$ by a constant, this implies that $\Phi=0$ on $\partial\Omega$ in the trace sense.
Hence $\Phi\in W^{1,p}(\Omega)$ is the unique weak solution of the following problem:
\begin{equation}\label{probphirot}
\begin{cases}
\Delta\Phi=0\quad &\text{in}\,\,\Omega,\\[2mm]
\Phi=0 &\text{on}\,\,\partial\Omega.
\end{cases}
\end{equation}
This implies that $\Phi=0$; therefore $v=0$ on $\Omega$, and we reach a contradiction as in the proof of Theorem \ref{gaffin}.
\end{proof}
\noindent {\bf Acknowledgements.} The authors have been supported by the Gruppo Nazionale per l'Analisi Matematica, la Probabilit\`a e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM).
\begin{thebibliography}{38}
\bibitem{amrouche} C. Amrouche, C. Bernardi, M. Dauge, V. Girault, \emph{Vector potentials in three-dimensional non-smooth domains}, Math. Methods Appl. Sci., 21 (1998), 823--864.
\bibitem{bauerpauly} S. Bauer, D. Pauly, \emph{On Korn's first inequality for tangential or normal boundary conditions with explicit constants}, Math. Methods Appl. Sci., 39 (2016), 5695--5704.
\bibitem{CHLTV} S. Creo, M. Hinz, M. R. Lancia, A. Teplyaev, P. Vernole, \emph{Magnetostatic problems in fractal domains}, to appear in Fractals and Dynamics in Mathematics, Science and the Arts, World Scientific. Available on arXiv: https://arxiv.org/abs/1805.08262
\bibitem{dacorogna} G. Csato, B. Dacorogna, S. Sil, \emph{On the best constant in Gaffney inequality}, J. Funct. Anal., 274 (2018), 461--503.
\bibitem{duran} R. Dur\'an, M. A. Muschietti, \emph{The Korn inequality for Jones domains}, Electron. J. Differential Equations, 127 (2004), 10 pp.
\bibitem{friedrichs} K. O. Friedrichs, \emph{Differential forms on Riemannian manifolds}, Comm. Pure Appl. Math., 8 (1955), 551--590.
\bibitem{gaffney} M. P. Gaffney, \emph{Hilbert space methods in the theory of harmonic integrals}, Trans. Amer. Math. Soc., 78 (1955), 426--444.
\bibitem{Jones} P. W. Jones, \emph{Quasiconformal mapping and extendability of functions in Sobolev spaces}, Acta Math., 147 (1981), 71--88.
\bibitem{jonsson91} A. Jonsson, \emph{Besov spaces on closed subsets of $\mathbb{R}^n$}, Trans. Amer. Math. Soc., 341 (1994), 355--370.
\bibitem{JoWa} A. Jonsson, H. Wallin, \emph{Function Spaces on Subsets of $\mathbb{R}^n$}, Part 1, Math. Reports, vol. 2, Harwood Acad. Publ., London, 1984.
\bibitem{JoWa2} A. Jonsson, H. Wallin, \emph{The dual of Besov spaces on fractals}, Studia Math., 112 (1995), 285--300.
\bibitem{LaVe2} M. R. Lancia, P. Vernole, \emph{Semilinear fractal problems: approximation and regularity results}, Nonlinear Anal., 80 (2013), 216--232.
\bibitem{LVstokes} M. R. Lancia, P. Vernole, \emph{The Stokes problems in fractal domains: asymptotic behavior of the solutions}, to appear in Discrete Contin. Dyn. Syst. Ser. S.
\bibitem{monk} P. Monk, \emph{Finite Element Methods for Maxwell's Equations}, Oxford University Press, New York, 2003.
\bibitem{NPW15} P. Neff, D. Pauly, K. J. Witsch, \emph{Poincar\'e meets Korn via Maxwell: Extending Korn's first inequality to incompatible tensor fields}, J. Diff. Eq., 258 (2015), 1267--1302.
\bibitem{Schw16} B. Schweizer, \emph{On Friedrichs inequality, Helmholtz decomposition, vector potentials, and the div-curl lemma}, preprint (2016), TU Dortmund.
\bibitem{stein} E. M. Stein, \emph{Singular Integrals and Differentiability Properties of Functions}, Princeton University Press, Princeton, New Jersey, 1970.
\bibitem{temam} R. Temam, \emph{Navier-Stokes Equations. Theory and Numerical Analysis}, Studies in Mathematics and its Applications, 2, North-Holland Publishing Co., Amsterdam-New York, 1979.
\bibitem{whitney} H. Whitney, \emph{Analytic extensions of differentiable functions defined in closed sets}, Trans. Amer. Math. Soc., 36 (1934), 63--89.
\end{thebibliography}
\end{document}
\begin{document}
\title[Linear Instability of Sasaki Einstein and nearly parallel ${\rm G}_2$ manifolds]{Linear Instability of Sasaki Einstein and \\ nearly parallel ${\rm G}_2$ manifolds}
\author{Uwe Semmelmann}
\address{Institut f\"ur Geometrie und Topologie \\
Fachbereich Mathematik\\
Universit{\"a}t Stuttgart\\
Pfaffenwaldring 57 \\
70569 Stuttgart, Germany}
\email{[email protected]}
\author{Changliang Wang}
\address{School of Mathematical Sciences and Institute for Advanced Study, Tongji University, Shanghai 200092, China}
\email{[email protected]}
\author{M. Y.-K. Wang}
\address{Department of Mathematics and Statistics, McMaster
University, Hamilton, Ontario, L8S 4K1, CANADA}
\email{[email protected]}
\date{revised \today}
\begin{abstract}
{In this article we study the stability problem for the Einstein metrics on Sasaki Einstein and on complete nearly parallel ${\rm G}_2$ manifolds. In the Sasaki case we show linear instability if the second Betti number is positive. Similarly we prove that nearly parallel $\rm G_2$ manifolds with positive third Betti number are
linearly unstable. Moreover, we prove linear instability for the Berger space ${\rm SO}(5)/{\rm SO}(3)_{irr} $ which is a $7$-dimensional homology sphere with a proper nearly parallel ${\rm G}_2$ structure.}
\end{abstract}
\maketitle
\noindent{{\it Mathematics Subject Classification} (2000): 53C25, 53C27, 53C44}
\medskip
\noindent{{\it Keywords:} linear stability, real Killing spinors, nearly parallel ${\rm G}_2$ manifolds, Sasaki Einstein manifolds}
\medskip
\setcounter{section}{0}
\section{\bf Introduction}
In this article we continue the investigation of the linear instability of Einstein manifolds admitting a non-trivial
real Killing spinor. Recall that the linear stability of complete Einstein manifolds admitting a non-trivial parallel
or imaginary Killing spinor has been established in \cite{DWW05}, \cite{Kr17}, and \cite{Wan17}. It is therefore
of some interest to consider the stability problem for complete Einstein manifolds which admit a real Killing spinor,
especially because these manifolds admit more geometric structure than generic Einstein manifolds with positive scalar curvature, and in view of their role in supersymmetric grand unification theories in physics over the years.
We refer the reader to \cite{WW18} for a summary of the various notions of stability under consideration and for a
description of different cases of the general problem. Here we only note that in this paper linear instability refers
to the second variation at Einstein metrics for the Einstein-Hilbert action. Explicitly, this means that there
exists a non-trivial symmetric $2$-tensor $h$ that is {\em transverse traceless} (``TT'' for short),
i.e., ${\rm tr}_g h = 0, \delta_g h = 0$, such that
\begin{equation} \label{instability}
\langle \nabla^* \nabla h - 2 \mathring{R} h, h \rangle_{L^2(M, g)} = \langle (\Delta_L - 2E)h, h \rangle_{L^2(M, g)} < 0,
\end{equation}
where $E$ is the Einstein constant, $\Delta_L$ is the positive Lichnerowicz Laplacian, $\mathring{R}h$
is the action of the curvature tensor on symmetric $2$-tensors, and $\nabla^* \nabla$ is the (positive) rough Laplacian.
Condition (\ref{instability}) implies linear instability with respect to Perelman's $\nu$-entropy as well as
dynamical instability with respect to the Ricci flow (by a theorem of Kr\"oncke \cite{Kr15}).
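For orientation, condition (\ref{instability}) is, up to a positive normalisation, the statement that the second variation of the total scalar curvature at $g$ is positive in the direction $h$: for TT tensors one has the standard second variation formula

```latex
S''_g(h, h) \;=\; -\tfrac12 \int_M \langle \nabla^*\nabla h \,-\, 2 \mathring{R} h,\, h \rangle \, dV_g
\;=\; -\tfrac12\, \langle (\Delta_L - 2E) h,\, h \rangle_{L^2(M, g)} ,
```

so a destabilizing TT direction is precisely one in which the Einstein--Hilbert functional increases to second order.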
Recall that complete spin manifolds with constant positive sectional curvature are stable Einstein manifolds. They
are exceptional in the sense that they also admit a maximal family of real Killing spinors. For the sake of a smoother
exposition we will henceforth exclude these manifolds from discussion. It then follows that the only even dimension
for which there are complete metrics admitting a non-trivial real Killing spinor is six. In this
situation the Einstein manifolds are strict nearly K\"ahler or else isometric to round $S^6$. (Recall that a strict nearly
K\"ahler manifold is an almost Hermitian manifold for which the almost complex structure $J$ is non-parallel and
satisfies $(\nabla_X J)X = 0$ for every tangent vector $X$, where $\nabla$ is the Levi-Civita connection.) The round sphere
is clearly stable, but it is distinguished by having a maximal family of real Killing spinors. For the first case
we showed in \cite{SWW20} that if either the second or third Betti number of the manifold is nonzero, then the nearly
K\"ahler metric is linearly unstable. A topological consequence of this fact is that a complete, strict, nearly
K\"ahler $6$-manifold that is linearly stable must be a rational homology sphere.
In this paper, we first consider the Sasaki Einstein case, which arises in all odd dimensions. To simplify matters,
we will take a Sasaki Einstein manifold to be an odd-dimensional Einstein Riemannian manifold $(M^{2n+1}, g)$
together with
\begin{enumerate}
\item[(a)] a contact $1$-form $\eta$ whose dual vector field $\xi$ is a unit-length Killing field, i.e., $\eta \wedge (d\eta)^n \neq 0 $
everywhere on $M$, $\eta(\xi)=1$, and $L_{\xi} g = 0$ (the $K$-contact condition),
\item[(b)] an endomorphism $\Phi: TM \rightarrow TM$ satisfying, for all tangent vectors $X$ and $Y$, the equations
$$ \Phi^2 = - {\rm Id} + \eta \otimes \xi, \,\,\,\,\, g(\Phi(X), \Phi(Y)) = g(X, Y) - \eta(X) \eta(Y), \,\, \mbox{\rm and}$$
\item[(c)] $d\eta(X, Y) = 2 g(X, \Phi(Y)).$
\end{enumerate}
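Condition (a) can be checked by hand in the flat model: for the standard contact form $\eta = dz - y_1\,dx_1 - y_2\,dx_2$ on $\mathbb R^5$ (an illustration of the contact condition only, chosen by us; it does not carry a Sasaki Einstein metric), one has $\eta\wedge(d\eta)^2 = 2\, dx_1\wedge dy_1\wedge dx_2\wedge dy_2\wedge dz \neq 0$. A small sketch verifying this pointwise:

```python
# Sketch (flat model, our own illustration): verify the contact condition
# eta ^ (d eta)^2 != 0 for eta = dz - y1 dx1 - y2 dx2 on R^5.
# A form evaluated at a point is a dict {increasing index tuple: coefficient},
# with coordinates (x1, y1, x2, y2, z) indexed 0..4.

def perm_sign(idx):
    """Sign of the permutation that sorts idx (entries assumed distinct)."""
    sign = 1
    for i in range(len(idx)):
        for j in range(i + 1, len(idx)):
            if idx[i] > idx[j]:
                sign = -sign
    return sign

def wedge(a, b):
    """Wedge product of two constant-coefficient forms."""
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            idx = ia + ib
            if len(set(idx)) < len(idx):
                continue  # repeated differential: the term vanishes
            key = tuple(sorted(idx))
            out[key] = out.get(key, 0.0) + perm_sign(idx) * ca * cb
    return out

# Evaluate at an arbitrary point (x1, y1, x2, y2, z).
x1, y1, x2, y2, z = 0.3, -1.2, 0.7, 2.5, 0.1
eta  = {(0,): -y1, (2,): -y2, (4,): 1.0}        # dz - y1 dx1 - y2 dx2
deta = {(0, 1): 1.0, (2, 3): 1.0}               # dx1^dy1 + dx2^dy2
top  = wedge(eta, wedge(deta, deta))
# The top-degree coefficient is 2 independently of the point,
# so eta is a contact form.
assert abs(top[(0, 1, 2, 3, 4)] - 2.0) < 1e-12
```

The coefficient is point-independent because the $dx_i$-components of $\eta$ die against the $4$-form $(d\eta)^2$.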
Systematic expositions of Sasaki Einstein manifolds can be found in \cite{BFGK91} and especially in \cite{BG08}.
An equivalent characterization of a Sasaki Einstein manifold is an Einstein manifold whose metric cone has holonomy
lying in ${\rm SU}(n+1)$ \cite{Ba93}. Our first result is:
\begin{thm} \label{SE}
A complete Sasaki Einstein manifold of dimension $>3$ with non-zero second Betti number $b_2$ must
be linearly unstable with respect to the Einstein-Hilbert action and hence dynamically unstable for the
Ricci flow. More precisely, the coindex of such an Einstein metric, i.e., the dimension of the maximal
subspace of the space of transverse traceless symmetric $2$-tensors on which $($\ref{instability}$)$ holds, is $\geq b_2$.
\end{thm}
This result may be viewed as an odd-dimensional analogue of the observation by Cao-Hamilton-Ilmanen \cite{CHI04} that
a Fano K\"ahler Einstein manifold with second Betti number $> 1$ is linearly unstable. Note that our result applies
to irregular Sasakian Einstein manifolds as well. For dimension $5$, it follows immediately that if a
complete Sasaki Einstein $5$-manifold is linearly stable then it must be a rational homology sphere.
One can also apply the above result to any of the $2$-sphere's worth of compatible Sasaki Einstein structures
on a $3$-Sasakian manifold with nonzero second Betti number to deduce their linear instability. Recall that
$3$-Sasakian manifolds are automatically Einstein.
In dimension seven, a complete simply connected Riemannian manifold admitting a non-trivial real Killing spinor
is called a {\it nearly parallel $\mathrm G_2$ manifold}. As we have excluded round spheres, such a manifold falls
into one of three classes depending on whether the dimension of the space
of Killing spinors is $3, 2$ or $1$. These are respectively the $3$-Sasakian, Sasaki Einstein but not $3$-Sasaki,
and the proper nearly parallel $\rm{G}_2$ cases. For this family we obtain the following
\begin{thm} \label{b3}
Let $(M^7, g)$ be a complete nearly parallel ${\rm G}_2$-manifold. Then the coindex of the Einstein metric $g$ is at least
$b_3$. If the manifold is in addition Sasaki Einstein then the coindex is at least $b_2 + b_3$. Hence in the latter
case if such a manifold is linearly stable, then it must be a rational homology sphere.
\end{thm}
While there are numerous examples of complete Sasakian Einstein manifolds with nonzero second Betti number \cite{BG08}
there are relatively fewer examples with $b_3 \neq 0$. To our knowledge they include fourteen examples due
to C. Boyer \cite{Bo08} and ten recent examples by R. G. Gomez \cite{Go19}, all occurring in dimension $7$.
In Gomez's examples, $10 \leq b_3 \leq 20$. Note, however, that for complete $3$-Sasakian manifolds
Galicki and Salamon \cite{GaS96} showed that all their odd Betti numbers must vanish.
We also do not know any example of a {\it proper} nearly parallel ${\mathrm G}_2$ manifold with nonzero third Betti
number. Settling the existence question for such an example would be of interest.
We turn next to the special case of a simply connected closed Einstein $7$-manifold that is homogeneous with respect
to the isometric action of some semisimple Lie group. In \cite{WW18} it was shown that if the manifold is not
locally symmetric then it must be linearly unstable with the possible exception of the isotropy irreducible space
${\rm Sp}(2)/{\rm Sp}(1)\approx {\rm SO}(5)/{\rm SO}(3)$. Here the embedding of ${\rm Sp}(1)$ is given by the irreducible complex $4$-dimensional
representation (which is symplectic), while the embedding of ${\rm SO}(3)$ is given by the irreducible complex $5$-dimensional
representation (which is orthogonal). The Einstein metric in this unresolved case is known to be of proper nearly parallel $\rm{G}_2$ type.
\begin{thm} \label{Berger}
The isotropy irreducible Berger space ${\rm Sp}(2)/{\rm Sp}(1)_{irr} \approx {\rm SO}(5)/{\rm SO}(3)_{irr} $ is linearly unstable.
\end{thm}
Recall that the homogeneous Einstein metrics (two up to isometry) on the Aloff-Wallach manifolds
$N_{k, l} = {\rm SU}(3)/T_{kl}$, where $T_{kl}$ is a closed circle subgroup,
are also of proper nearly parallel $\rm{G}_2$ type, with the exception of
one of the Einstein metrics on $N_{1,1}$, which is $3$-Sasakian. It was shown in \cite{WW18} that all
these Einstein metrics are linearly unstable. One therefore deduces the conclusion that all compact
simply connected homogeneous Einstein $7$-manifolds (many of them admitting a non-trivial real Killing spinor)
are linearly unstable.
Interestingly, the Berger space is a rational homology sphere, so the methods based on constructing destabilizing
directions via harmonic forms do not apply. Likewise, the Stiefel manifold ${\rm SO}(5)/{\rm SO}(3)$ (the embedding of
${\rm SO}(3)$ is the usual $3$-dimensional vector representation) is also a homology $7$-sphere. The Einstein metric
here is instead of regular Sasaki Einstein type (the Stiefel manifold is a circle bundle over the Hermitian
symmetric Grassmannian ${\rm SO}(5)/S({\rm O}(3){\rm O}(2))$). It was shown to be linearly unstable in \cite{WW18}
by examining the scalar curvature function for homogeneous metrics. But there are also non-homogeneous
$\nu$-unstable directions arising from eigenfunctions.
The proof of Theorem \ref{Berger} is based on a result of independent interest: in Section \ref{Berger section} we will
show that the Berger space admits a $5$-dimensional space of trace and divergence free Killing $2$-tensors. These are symmetric
$2$-tensors with vanishing complete symmetrisation of their covariant derivative. A special motivation for studying
Killing tensors stems from the fact that they define first integrals of the geodesic flow, i.e. functions constant on
geodesics. On the Berger space the constructed Killing tensors turn out to be eigentensors for the Lichnerowicz Laplacian
for an eigenvalue less than the critical $2E$.
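As a toy illustration of the first-integral property (on the round $S^2$, not the Berger space): a symmetric product of Killing vector fields is a Killing $2$-tensor, and evaluating it on the velocity of a great-circle geodesic indeed yields a constant. The fields $X_a(q) = a \times q$ and the tensor $h = X_a \odot X_b$ below are our own example data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Great-circle geodesic on the round S^2 through p with unit velocity v.
p = rng.normal(size=3); p /= np.linalg.norm(p)
w = rng.normal(size=3)
v = w - np.dot(w, p) * p; v /= np.linalg.norm(v)

# Two rotational Killing fields X_a(q) = a x q, X_b(q) = b x q (example data),
# and the Killing 2-tensor h = X_a . X_b (symmetric product).
a, b = rng.normal(size=3), rng.normal(size=3)

def gamma(t):  return np.cos(t) * p + np.sin(t) * v
def dgamma(t): return -np.sin(t) * p + np.cos(t) * v

def h_along(t):
    q, dq = gamma(t), dgamma(t)
    return np.dot(np.cross(a, q), dq) * np.dot(np.cross(b, q), dq)

# h(gamma', gamma') is a first integral: constant along the geodesic.
vals = [h_along(t) for t in np.linspace(0.0, 6.0, 50)]
assert max(vals) - min(vals) < 1e-10
```

Each factor $\langle a \times \gamma, \dot\gamma\rangle = a \cdot (\gamma \times \dot\gamma)$ is already conserved (Clairaut), so their product is as well.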
\medskip
\noindent{\bf Acknowledgements:}
The first author would like to thank P.-A. Nagy and G. Weingart for helpful discussions on the topic of this
article. He is also grateful for support by the Special Priority Program SPP 2026 ``Geometry at Infinity'' funded by the DFG.
The third author acknowledges partial support by a Discovery Grant of NSERC.
\section{\bf Sasaki Einstein manifolds with $b_2 >0$}
Let $(M^{2n+1}, g, \xi, \eta, \Phi)$ be a Sasaki Einstein manifold as defined in the Introduction.
(This definition may be logically redundant but it allows us to keep technicalities to a minimum.)
It then follows that $\Phi$ is skew-symmetric, and for all tangent vectors $X$ in $M$ we have
$$\nabla_X \xi = - \Phi(X).$$
Also, the Einstein constant $E$ of $g$ is fixed to be $2n$, and this in turn fixes the Killing constant
of the Killing spinors to be $\pm \frac{1}{2}$ depending on the orientation.
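These normalisations can be traced through Friedrich's identity, which we recall for the reader's convenience: a non-trivial Killing spinor with Killing constant $\mu$ on a complete $M^m$ forces

```latex
\nabla_X \sigma = \mu\, X \cdot \sigma
\quad\Longrightarrow\quad
{\rm scal}_g = 4\, m(m-1)\, \mu^2 , \qquad m = \dim M ,
```

i.e. ${\rm Ric} = 4(m-1)\mu^2\, g$. With $m = 2n+1$ and Einstein constant $E = 2n$ this gives $4 \cdot 2n \cdot \mu^2 = 2n$, hence $\mu = \pm\tfrac12$.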
The Killing field $\xi$ gives rise to a Riemannian foliation structure on $M$, as well as an
orthogonal decomposition
$$ TM = \mathscr{L} \oplus \mathscr{N} $$
where $\mathscr{L}$ is the line bundle determined by $\xi$ and $\mathscr{N}$ is the normal bundle
to the foliation. There is a transverse K\"ahler structure on $\mathscr{N}$ with a transverse
Hodge theory associated with the basic de Rham complex. Good references for this material are
\cite{EH86} and sections 7.2, 7.3 in \cite{BG08}. In particular, the transverse almost complex
structure is given by the endomorphism $\Phi|_{\mathscr{N}}$.
Recall that a $k$-form $\omega$ on $M$ is called {\em basic} if $L_\xi \omega = 0 = \xi \lrcorner \, \omega$.
These properties are satisfied by $\Phi$ as well, by the $K$-contact property. In the transverse Hodge
theory, the role of the K\"ahler form is played by $d\eta$, which is $\Phi$-invariant. As usual we can
define the adjoint $\Lambda$ of the wedge product with $d\eta$ and the basic elements of the kernel of $\Lambda$
are called the primitive basic forms. By Proposition 7.4.13 in \cite{BG08}, a lemma of Tachibana shows that
any harmonic $k$-form on $M$ is horizontal, and this in turn implies that it is basic, primitive, and
harmonic for the basic cohomology.
We shall also need the fact that a harmonic $2$-form on $M$ is $\Phi$-invariant, i.e., as a basic $2$-form it is
of type $(1, 1)$. This follows from the general fact that there are no nonzero basic harmonic $(0, k)$-forms,
$1 \leq k \leq n, $ which is the analogue of Bochner's theorem that on a compact complex manifold with positive first
Chern class and complex dimension $n$, there are no non-trivial holomorphic $k$-forms, $1 \leq k \leq n$.
A proof of this analogue is indicated on pp. 66-67 of \cite{VC17} (see also \cite{GNT16}).
Note that the $\Phi$-invariance of the transverse covariant derivative and transverse curvature tensor,
which is the analogue of the K\"ahler condition, is needed in the argument.
Now let $\alpha$ be a harmonic $2$-form on our Sasaki Einstein manifold. The candidate for a destabilizing
direction is the symmetric $2$-tensor $h_{\alpha}$ defined by
\begin{equation} \label{SE-TT}
h_{\alpha}(X, Y) := \alpha(X, \Phi(Y))
\end{equation}
for arbitrary tangent vectors $X, Y$ on $M$.
It turns out that an efficient method to compute the action of the Lichnerowicz Laplacian on $h_{\alpha}$
is to take advantage of the canonical metric connection with skew torsion preserving the Sasakian structure
rather than using the Levi-Civita connection. We will also use the fact that the Lichnerowicz Laplacian can be written
as $\Delta_L = \nabla^*\nabla + q(R)$, where $q(R)$ is an endomorphism on symmetric tensors fibrewise defined by
$
q(R) = \sum (e_i \wedge e_j)_\ast \circ R (e_i \wedge e_j)_\ast
$
with an orthonormal basis $\{e_i \}$, where for any $A \in \Lambda^2 \cong \mathfrak{so}(n)$ we denote by $A_\ast$ the natural action of $A$ on
symmetric tensors. See \cite{SWe19} for the general context of
these endomorphisms and their relation with Weitzenb\"ock formulae.
Recall that Sasakian manifolds are equipped with a canonical metric connection $\bar\nabla$ defined by the equation
$$
g(\bar\nabla_X Y, Z) \;=\; g(\nabla_X Y, Z) \;+\; \frac12 (\eta \wedge d\eta) (X, Y, Z) \ ,
$$
where the $3$-form $\eta \wedge d\eta$ is precisely the torsion of the connection.
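Indeed, since the correction term is totally skew-symmetric, $\bar\nabla$ is again metric, and its torsion $T^{\bar\nabla}(X,Y) = \bar\nabla_X Y - \bar\nabla_Y X - [X,Y]$ satisfies

```latex
g\bigl(T^{\bar\nabla}(X, Y), Z\bigr)
\;=\; \tfrac12\, (\eta \wedge d\eta)(X, Y, Z) \;-\; \tfrac12\, (\eta \wedge d\eta)(Y, X, Z)
\;=\; (\eta \wedge d\eta)(X, Y, Z) .
```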
Note that the canonical connection $\bar \nabla$ can also be written, for any tangent vector $X$, as $\bar\nabla_X = \nabla_X + A_X$ with
$A_X:= - \eta(X) \, \Phi + \xi \wedge \Phi(X)$. The connection $\bar \nabla$ preserves the basic forms. The restricted holonomy group
of $\bar \nabla$ lies in ${\rm U}(n) \subset {\rm SO}(2n+1)$ and we have $\bar\nabla \Phi = 0$, $\bar \nabla \eta = 0$,
and $\bar \nabla d\eta = 0$.
Furthermore, the curvature $\bar R$ of $\bar\nabla$ and its action on tensors is given by
$$
\bar R_{X, Y} \;=\; R_{X, Y} \,+\, \mbox{${\mathfrak t}$}rac12 \, d\eta(X,Y) \, d\eta \,-\, \Phi(X) \wedge \Phi(Y) \,+\, \xi \wedge (X \wedge Y)\xi,
$$
where $(X \wedge Y) \xi := g(X, \xi) Y - g(Y, \xi) X$, and we have identified vectors with covectors as usual via $g$.
Hence, we have $\bar R_{X, Y} = R_{X, Y} \,-\, \Phi(X) \wedge \Phi(Y) \,+\, \xi \wedge (X \wedge Y)\xi $ \,for the action of $\bar R_{X, Y} $ on $\Phi$-invariant tensors.
As a consequence we obtain for these tensors the formula
\begin{equation}\label{diff}
q(\bar R) - q(R) \;=\; -\frac12 \sum (e_i \wedge e_j)_\ast \circ (\Phi(e_i) \wedge \Phi(e_j))_* \;+\; \sum (\xi \wedge e_j)_\ast \circ (\xi \wedge e_j)_\ast \ ,
\end{equation}
where $q(\bar R)$ denotes the curvature endomorphism with respect to the connection $\bar\nabla$ and its curvature $\bar R$.
On specific spaces this difference can be further computed. The result for the present situation
is given in the following
\begin{lemma}
Let $(M^{2n+1}, g, \xi, \eta, \Phi)$ be a Sasaki Einstein manifold. Then we have
$$
q(\bar R) \,-\, q(R) \;=\;
\left\{
\begin{array}{ll}
\,\;\; 2\, \rm id & \qquad \mbox{\rm on} \quad \Lambda^2 {\rm T} M\\
-2 \, \rm id & \qquad \mbox{\rm on} \quad {\rm Sym}^2_0 {\rm T} M \\
\end{array}
\right.
$$
for $\Phi$-invariant and basic tensors.
\end{lemma}
\begin{proof}
We start with a remark to understand the action of the curvature term $q(R)$ on $2$-tensors (symmetric or skew-symmetric). Let $A, B \in \Lambda^2 {\rm T} \cong \mathrm{End}^- {\rm T}$ and let
$h$ be a $2$-tensor. Then the composed action of $A$ and $B$ on $h$ is given by
$$
(A_* B_* h) (X, Y) \;=\; h(B A X, Y) \,+\, h(A X, B Y) \,+\, h(B X, A Y) \,+\, h(X, B A Y ) \ .
$$
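This composed-action formula is easy to sanity-check numerically: writing $h$ as a symmetric matrix $H$, the natural action is $A_* H = [A, H]$ for $A \in \mathfrak{so}(n)$ (our matrix conventions, introduced for the check), and both sides agree. A minimal numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

def skew(n):
    m = rng.normal(size=(n, n))
    return m - m.T

A, B = skew(n), skew(n)
H0 = rng.normal(size=(n, n))
H = H0 + H0.T                      # a symmetric 2-tensor as a matrix

# Natural action of A in so(n) on h(X, Y) = X^T H Y:
#   (A_* h)(X, Y) = -h(AX, Y) - h(X, AY),  i.e.  A_* H = [A, H] for skew A.
BH = B @ H - H @ B                 # B_* H
lhs = A @ BH - BH @ A              # A_* B_* H

# Right-hand side of the composed-action formula:
#   h(BAX, Y) + h(AX, BY) + h(BX, AY) + h(X, BAY)
rhs = (B @ A).T @ H + A.T @ H @ B + B.T @ H @ A + H @ (B @ A)

assert np.allclose(lhs, rhs)
```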
Hence, for computing the action of the first summand in \eqref{diff} on $2$-tensors we need the following formula on tangent vectors $X$
$$
-\frac12 \sum (\Phi(e_i) \wedge \Phi(e_j))_\ast \,(e_i \wedge e_j)_*X \;=\; - \sum (\Phi(X) \wedge \Phi(e_j))_* e_j \;=\; -\Phi^2(X) \;=\; X \mod \xi
$$
where we can neglect any multiples of $\xi$ since in the end we want to apply our difference formula to basic tensors. Similarly we compute the
action of the second summand in \eqref{diff} on vector fields. Here we obtain
$$
\sum (\xi \wedge e_j)_\ast \, (\xi \wedge e_j)_\ast X \;=\; g(\xi, X) \, \sum (\xi \wedge e_j)_* e_j \,-\, (\xi \wedge X)_* \xi \;=\; - X \mod \xi \ .
$$
Next we have to compute
$$
- \frac12 \sum [h((e_i \wedge e_j)_*X, (\Phi(e_i) \wedge \Phi(e_j))_* Y) \;+\; h( (\Phi(e_i) \wedge \Phi(e_j))_*X, (e_i \wedge e_j)_*Y)]
\phantom{xxxxxxxx}
$$
\begin{eqnarray*}
&=&
-\sum [ h(e_j, (\Phi(X) \wedge \Phi(e_j))_*Y) \;+\; h((\Phi(Y) \wedge \Phi(e_j))_*X, e_j )]\\[1ex]
&=&
\quad \sum [g(\Phi(e_j), Y) h(e_j, \Phi(X)) \;+\; g(\Phi(e_j), X) h(\Phi(Y), e_j)]\\[.5ex]
&=&
-2 h(\Phi(Y), \Phi(X)) \;=\; -2 h(Y, X) \;=\; \left\{
\begin{array}{ll}
\;\; 2\, h(X, Y)& \qquad \mbox{for} \quad \; h \in \Lambda^2 {\rm T} M\\
-2 \, h(X, Y) & \qquad \mbox{for } \quad h \in {\rm Sym}^2 {\rm T} M \\
\end{array}
\right.
\end{eqnarray*}
Finally, we note $\sum h((\xi \wedge e_j)_*X, (\xi \wedge e_j)_*Y) = g(\xi, X) g(\xi, Y) \sum h(e_j, e_j) = 0$ on basic and trace-free $2$-tensors $h$.
Then combining the formulas above finishes the proof of the lemma.
\end{proof}
\mbox{${\mathfrak m}$}edskip
\begin{lemma}
On tracefree, divergence-free, $\Phi$-invariant and basic $2$-tensors we have the formula: $\bar\nabla^* \bar\nabla \,-\, \nabla^*\nabla \,=\, - 2 \, \rm id$.
\end{lemma}
\begin{proof}
Let $\{e_i\}$ be a local orthonormal basis with $\nabla_{e_i} e_i = 0 =\bar \nabla_{e_i} e_i $ at an arbitrary but
fixed point $p \in M$. Recall that the torsion of $\bar\nabla$ is skew-symmetric. Computing at the point $p$ we get
\begin{eqnarray*}
\bar\nabla^* \bar\nabla \,-\, \nabla^*\nabla &=& - \sum \bar \nabla_{e_i} \bar \nabla_{e_i} \;+\; \sum \nabla_{e_i} \nabla_{e_i}
\;=\; \sum (\bar \nabla_{e_i} - A_{e_i} ) (\bar \nabla_{e_i} - A_{e_i} ) \;-\; \sum \bar \nabla_{e_i} \bar \nabla_{e_i} \\[1ex]
&=& \quad \sum A_{e_i }A_{e_i } \;-\; 2 \sum A_{e_i } \bar \nabla_{e_i} \ .
\end{eqnarray*}
On $\Phi$-invariant and basic tensors the first summand reduces to $ \sum A_{e_i }A_{e_i } = \sum A_{e_i } (\xi \wedge \Phi(e_i))_*$.
Computing this first on tangent vectors $X$ (with the factors in reversed order as above) we obtain
$$
\sum (\xi \wedge \Phi(e_i))_* A_{e_i } X \;=\; \sum (\xi \wedge \Phi(e_i))_* (-\eta(e_i) \Phi(X) \,+\, (\xi \wedge \Phi(e_i))_*X)
\phantom{xxxxxxxx}
$$
\begin{eqnarray*}
&=&
\sum (\xi \wedge \Phi(e_i))_* (\eta(X) \Phi(e_i) \,-\, g(\Phi(e_i), X) \, \xi )
\;=\; - \sum g(\Phi(e_i), X) \Phi(e_i) \mod \xi\\[1ex]
&=& \Phi^2(X) \mod \xi \;=\; - X \mod \xi \ .
\end{eqnarray*}
Here we used $\Phi(\xi)= 0$ and $\xi \perp \mathrm{Im} (\Phi)$. Moreover, for tracefree, $\Phi$-invariant and basic $2$-tensors $h$ we obtain
$$
\sum h(A_{e_i} X, (\xi \wedge \Phi(e_i))_*Y ) \;+\; h( (\xi \wedge \Phi(e_i))_* X, A_{e_i} Y) \phantom{xxxxxxxx}\phantom{xxxxxxxxxxxx}
$$
$$
\;=\; \sum h( - \eta(e_i) \Phi(X) \,+\, g(\xi, X) \Phi(e_i), \,g(\xi, Y)\, \Phi(e_i)) \;+\; (X \leftrightarrow Y) \phantom{xxxxxxxx}
$$
$$
= \; g(\xi, X) \, g(\xi, Y) \, \sum h(\Phi(e_i), \Phi(e_i)) \;+\; (X \leftrightarrow Y) \;=\; 0 \ .\phantom{xxxxxxxxxxxxxx}
$$
The last equation holds since $h$ is tracefree and $\Phi$-invariant. The calculation so far shows that $\sum A_{e_i }A_{e_i } h = -2h$.
Finally we have to compute the second summand in our formula for $\bar\nabla^* \bar\nabla \,-\, \nabla^*\nabla $. Here we obtain
\begin{eqnarray*}
\sum A_{e_i } \bar \nabla_{e_i} h &=& \sum \bigl(-\eta(e_i)\Phi \,+\, (\eta \wedge \Phi(e_i))_* \bigr) \bar \nabla_{e_i} h \;=\;
\sum (\eta \wedge \Phi(e_i))_* \bar \nabla_{e_i} h
\end{eqnarray*}
since $\bar \nabla_{e_i} h$ is again trace-free, $\Phi$-invariant, and basic. We conclude $\sum A_{e_i } \bar \nabla_{e_i} h = 0$ since
we have $\sum (\eta \wedge \Phi(e_i))_* X = \sum (g(\xi, X) \, \Phi(e_i) \,-\, g(\Phi(e_i), X) \,\xi)$.
Indeed, clearly
$$\sum (\eta \wedge \Phi(e_i))_* \bar \nabla_{e_i} h(X, Y) = 0,$$
if $g(X, \xi)=g(Y, \xi)=0$ or $X=Y=\xi$, since $\bar \nabla_{e_{i}} h$ is basic. Moreover, for $g(\xi, Y)=0$, a straightforward computation gives
$$\sum (\eta \wedge \Phi(e_i))_* \bar \nabla_{e_i} h(\xi, Y)=(\delta h)(\Phi(Y))=0,$$
since $h$ is divergence-free.
\end{proof}
\medskip
We consider the two Laplace type operators $\Delta = \nabla^*\nabla + q(R)$ and $\bar \Delta = \bar\nabla^*\bar \nabla + q(\bar R)$,
where the operator $\Delta$ is just the Lichnerowicz operator on tensors. The operator $\bar \Delta$ has the important property that
it commutes with parallel bundle maps (see \cite{SWe19}, p. 283). Combining the last two lemmas we obtain
\begin{cor}
On $\Phi$-invariant, divergence-free and basic tensors we have
$$
\bar \Delta \,-\, \Delta \;=\;
\left\{
\begin{array}{ll}
\quad 0 & \qquad \mbox{on} \quad \Lambda^2 {\rm T} M\\
-4 \, \rm id & \qquad \mbox{on} \quad {\rm Sym}^2_0 {\rm T} M \\
\end{array}
\right.
$$
\end{cor}
\medskip
Let $\alpha \in \Omega^2(M)$ be a harmonic $2$-form, which must be $\Phi$-invariant and basic. Hence we can
apply the difference formula above and obtain $\bar \Delta \alpha = 0$. Now the bundle of $\Phi$-invariant $2$-forms can be identified
with the bundle of symmetric $2$-tensors by the $\bar\nabla$-parallel bundle map $\alpha \mapsto h_\alpha$, where the symmetric
$2$-tensor $h_\alpha$ is given by (\ref{SE-TT}). Then, because $\bar\Delta$ commutes with parallel bundle maps,
we also have $\bar\Delta h_\alpha = 0$. Using again the difference formula above, now for the case ${\rm Sym}^2 {\rm T} M$, we obtain
$\Delta h_\alpha = 4 h_\alpha$. As the Einstein constant is $2n$, $(\Delta - 2E) h_{\alpha} = (4-4n) h_{\alpha}$, and hence
the instability condition (\ref{instability}) is satisfied when $n > 1$.
It remains to check that $h_\alpha$ is a TT-tensor. First, we see that ${\rm tr}_g (h_\alpha) = g(\alpha, d\eta) = 0$
since harmonic forms on Sasaki Einstein manifolds are primitive. Moreover, $\delta_g h_\alpha = 0$ follows by an easy
calculation from the assumption $d^* \alpha = 0$ and the fact that $\alpha$ is basic. Thus we have proved
\begin{thm} $($Theorem \ref{SE}$)$
Let $(M^{2n+1}, g, \xi, \eta, \Phi)$ be a compact Sasaki Einstein manifold with $n>1$ and $b_2 >0$.
Then the Einstein metric $g$ is linearly unstable. \mbox{$\Box$}
\end{thm}
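The eigenvalue bookkeeping behind this instability argument is elementary; the following sketch (helper name ours) records it, using the eigenvalue $4$ and the Einstein constant $2n$ from the proof above.

```python
# Instability arithmetic for Sasaki Einstein manifolds M^{2n+1}:
# the eigentensor h_alpha satisfies Delta h = 4 h, while the Einstein
# constant is E = 2n, so (Delta - 2E) h = (4 - 4n) h.
def instability_gap(n):
    """Eigenvalue of (Delta - 2E) on h_alpha in dimension 2n + 1."""
    delta_eigenvalue = 4        # from the difference formula on Sym^2_0
    einstein_constant = 2 * n
    return delta_eigenvalue - 2 * einstein_constant

# The gap is negative (linear instability) precisely when n > 1.
assert instability_gap(1) == 0
assert all(instability_gap(n) < 0 for n in range(2, 50))
```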
\section{\bf Properties of nearly parallel $\mathrm G_2$-manifolds} \label{facts}
In the remainder of the paper we will focus on the dimension $7$ case. In this section we will summarise
some properties of nearly parallel ${\rm G}_2$ manifolds that we shall need.
Let $(M^7, g)$ be a nearly parallel $\mathrm G_2$-manifold, i.e., a complete spin manifold with a
non-trivial real Killing spinor $\sigma$. For the moment
we do not exclude the possibility that the dimension of the space of real Killing spinors is greater than one.
We may assume that $\sigma$ has length $1$ and Killing constant $\frac{1}{2}$,
i.e., $\nabla_X \sigma = \frac{1}{2} X \cdot \sigma$. In particular the scalar curvature is normalized as ${\rm scal}_g = 42$. Then the Killing spinor $\sigma$ determines a vector cross product by the condition
$$ P_{\sigma}(X, Y) \cdot \sigma = X \cdot Y \cdot \sigma + g(X, Y) \sigma = (X \wedge Y) \cdot \sigma, $$
and hence a $3$-form
$$ \varphi_{\sigma}(X, Y, Z) = g(P_{\sigma}(X, Y), Z).$$
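The normalization ${\rm scal}_g = 42$ used above follows from the standard identity ${\rm scal} = 4n(n-1)\mu^2$ for an $n$-dimensional manifold carrying a real Killing spinor with Killing constant $\mu$; a quick check (helper name ours):

```python
from fractions import Fraction

def scal_from_killing_constant(n, mu):
    # Standard identity: a manifold of dimension n with a real Killing spinor
    # nabla_X sigma = mu X . sigma is Einstein with scal = 4 n (n - 1) mu^2.
    return 4 * n * (n - 1) * mu**2

# Dimension 7 with Killing constant 1/2 gives the normalization scal = 42.
assert scal_from_killing_constant(7, Fraction(1, 2)) == 42
```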
For details of this construction, see e.g. \cite{FKMS97}. The stabilizers of this $3$-form belong to
the conjugacy class ${\rm G}_2 \subset {\rm SO}(7)$ and we obtain a ${\rm G}_2$-structure on $M$, which we
regard as a principal ${\rm G}_2$ bundle $Q_{\sigma}$ over $M$.
There is a unique metric connection $\bar \nabla$
on this bundle with totally skew torsion. We let $\nabla$ denote the Levi-Civita connection of $g$.
Equivalently, nearly parallel $\mathrm G_2$-manifolds can be defined as Riemannian $7$-manifolds carrying a $3$-form $\varphi$
whose stabilizer at each point is isomorphic to the group $\mathrm G_2$ and such that $d\varphi = \lambda \ast \varphi$ for some non-zero real number $\lambda$.
Since the tensor bundles over $M$ are associated fibre bundles of $Q_{\sigma}$, we obtain orthogonal decompositions
of these bundles from the decompositions of the corresponding ${\rm SO}(7)$ representations upon restriction
to ${\rm G}_2$. The decompositions which we need are
\begin{itemize}
\item[(i)] $\Lambda^2 {\rm T} = \Lambda^2_7 \oplus \Lambda^2_{14} $
\item[(ii)] ${\rm S}^2 {\rm T} = {\mathbb I} \oplus {\rm S}^2_{27} $
\item[(iii)] $\Lambda^3 {\rm T} = {\mathbb I} \oplus \Lambda^3_{7} \oplus \Lambda^3_{27}$
\item[(iv)] the spin bundle ${\mathbb S} = {\mathbb I} \oplus {\rm T}$
\end{itemize}
where ${\rm T}$ denotes the tangent bundle of $M$, the rank of a sub-bundle is indicated by a subscript, ${\mathbb I}$ denotes
the ($1$-dimensional) trivial bundle, and we have identified orthogonal representations with their duals.
We further have equivalences ${\rm S}^2_{27} \cong \Lambda^3_{27},$ and $\Lambda^2_7 \cong \Lambda^3_7 \cong {\rm T}$.
Note that the trivial bundle in $\Lambda^3 {\rm T}$ is spanned by $\varphi_{\sigma}$ and that in ${\mathbb S}$ is spanned by
$\sigma$. We also let $\psi = \ast \varphi$, the Hodge dual of $\varphi$, which spans the trivial bundle in $\Lambda^4 {\rm T}$.
More explicitly, it is well-known (see \cite{Br87}) that
\begin{itemize}
\item[(a)] $ \Lambda^2_7 = \{ X \lrcorner \,\varphi: X \in {\rm T} \} = \{ \omega \in \Lambda^2 {\rm T}: \ast(\varphi \wedge \omega) = -2 \omega\}, $
\item[(b)] $ \Lambda^2_{14} = \{ \omega \in \Lambda^2 {\rm T}: \forall X \in {\rm T}, g(\omega, X \lrcorner \, \varphi)= 0\} =
\{ \omega \in \Lambda^2 {\rm T}: \ast(\varphi \wedge \omega) = \omega \}, $
\item[(c)] $ \Lambda^3_{7} = \{ X \lrcorner \, \psi : X \in {\rm T} \}, $
\item[(d)] $ \Lambda^3_{27} = \{ \omega \in \Lambda^3 {\rm T}: \omega \wedge \varphi = 0 = \omega \wedge \psi \}.$
\end{itemize}
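The rank bookkeeping in the decompositions (i)--(iv) can be verified directly:

```python
from math import comb

# Ranks of the SO(7) bundles in the decompositions above
assert comb(7, 2) == 21 == 7 + 14        # Lambda^2 T = Lambda^2_7 + Lambda^2_14
assert comb(7, 3) == 35 == 1 + 7 + 27    # Lambda^3 T = I + Lambda^3_7 + Lambda^3_27
assert 7 * 8 // 2 == 28 == 1 + 27        # S^2 T = I + S^2_27
assert 2**3 == 8 == 1 + 7                # real spin bundle S = I + T in dimension 7
```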
\begin{prop} \label{harmonic-decomp}
Let $(M^7,g,\varphi)$ be a closed nearly parallel ${\rm G}_2$ manifold. Then any harmonic $2$-form is
a section of $\Lambda^2_{14} $ and any harmonic $3$-form is a section of $\Lambda^3_{27}$.
\end{prop}
\begin{proof}
The Clifford product of a harmonic form with a Killing spinor vanishes, as was shown by O. Hijazi in \cite{Hi86}.
Moreover, for a fixed spinor $\sigma$, the map $\omega \mapsto \omega \cdot \sigma: {\rm Cl}(\mathbb{R}^7) \rightarrow {\mathbb S}$
is a ${\rm Spin}(7)$-equivariant hence ${\rm G}_2$-equivariant homomorphism. Therefore by Schur's lemma, the components
of a form which correspond to ${\rm G}_2$-representations which do not occur in the spin representation, e.g.,
$\Lambda^2_{14}$ and $\Lambda^3_{27}$, also act trivially on $\sigma$. Hence in order to finish the proof of
the proposition it suffices to show that forms in $\Lambda^2_7, \Lambda^3_1$ and
$\Lambda^3_{7}$ act non-trivially on the Killing spinor $\sigma$ of the ${\rm G}_2$ structure.
We will need the following formula for Clifford multiplication of forms:
$$
(X\wedge \omega ) \cdot \; =\; X \cdot \omega \cdot \, + \; (X\lrcorner \, \omega) \cdot
$$
Here $X$ is a tangent vector, $\omega$ an arbitrary $k$-form, and $\cdot$ denotes Clifford multiplication.
Using the cross product $P$ (where we have suppressed the dependence on $\sigma$) we calculate that
$\sum P(e_i, P(e_i, X)) = -6 X$, where $\{e_i\}$ is an orthonormal basis of ${\rm T}$.
Moreover, we need the following simple formulas for $\varphi$ and its Hodge dual $\psi = \ast \varphi$
\medskip
\begin{enumerate}
\item\quad
$X \lrcorner \, \varphi \;=\; - \frac12 \sum e_i \wedge P(e_i, X)$
\medskip
\item \quad
$\psi = \ast \varphi \;=\; - \frac{1}{6} \, \sum (e_i \lrcorner \, \varphi ) \wedge (e_i \lrcorner \, \varphi ) $
\quad and \quad
$X \, \lrcorner \ast \varphi \;=\; -\frac13 \sum P(e_i, X) \wedge (e_i \lrcorner \, \varphi ).$
\end{enumerate}
\medskip
\noindent
First we show that forms in $\Lambda^2_7 $ act non-trivially on the Killing spinor $\sigma$. We have
$$
(X \lrcorner \, \varphi ) \cdot \sigma \;=\; - \frac12 \sum e_i \cdot P(e_i, X) \cdot \sigma \;=\; - \frac12 \sum e_i \cdot (e_i \cdot X + g(e_i, X))\cdot \sigma \;=\; 3 X \cdot \sigma \ .
$$
Recall that the map $X \mapsto X \cdot \sigma$ is injective. Next we show the non-trivial action of $\Lambda^3_1$:
$$
\varphi \cdot \sigma \;=\; \frac13 \sum e_i \wedge (e_i \lrcorner \, \varphi) \cdot \sigma \;=\; \frac13 \sum e_i \cdot (e_i \lrcorner \, \varphi) \cdot \sigma \;=\; - 7 \, \sigma \ .
$$
Finally we have to show that $\Lambda^3_7 $ acts non-trivially on $\sigma$. Here we compute
\begin{eqnarray*}
(X \, \lrcorner \,\psi) \cdot \sigma &=& - \frac13 \sum (P(e_i, X ) \cdot (e_i \lrcorner \, \varphi) + P(e_i, P(e_i, X))) \cdot \sigma\\
&=&
- \frac13 \sum( -3e_i \cdot P(e_i, X) - 6 X ) \cdot \sigma \;=\; - 4 X \cdot \sigma \ .
\end{eqnarray*}
\end{proof}
\begin{rmk}
The same result was proved in \cite{DS20}, Thms. 3.8 and 3.9, and in the $2$-form case also in \cite{BO19}, Rem. 4.
Note that the corresponding statement for harmonic forms on $6$-dimensional nearly K\"ahler manifolds
(see \cite{Fos17}, \cite{V11}) can also be reproved using Killing spinors and arguments similar to those above.
\end{rmk}
\begin{rmk}
The above construction and structures clearly depend smoothly on the unit Killing spinor $\sigma$ chosen and
can be made in the Sasakian-Einstein (respectively $3$-Sasakian) case using just one of the circle's
(resp. two-sphere's) worth of unit Killing spinors. By contrast, the harmonic forms associated to the metric $g$
do not depend on the structures determined by $\sigma$.
\end{rmk}
As already mentioned in the introduction, there are three classes of nearly parallel $\mathrm G_2$-manifolds: $3$-Sasakian, Sasaki Einstein and the proper nearly parallel $\mathrm G_2$-manifolds. The homogeneous examples were classified in \cite{FKMS97}; among these, the only proper ones are the squashed $7$-sphere $S^7_{sq}$, the Aloff-Wallach spaces $N_{k,l}$ and the Berger space ${\rm SO}(5)/{\rm SO}(3)_{irr}$. The only other
known class of proper nearly parallel $\mathrm G_2$-structures is given by the second Einstein metric in the canonical variation of the $3$-Sasaki metrics in dimension $7$. In particular $S^7_{sq}$ and $N_{1,1}$ belong to this class. For the purpose of our article we note that it is well known that both Einstein metrics in the canonical variation are unstable.
(See \cite{Be87}, 14.85 together with Fig. 9.72.)
\section{\bf Nearly parallel $G_2$ manifolds with $b_3 >0$}
In this section we prove
\begin{thm} \label{dim7-b3} $($Theorem \ref{b3} $)$
Let $(M^7, g, \varphi)$ be a nearly parallel ${\rm G}_2$ manifold admitting a non-trivial harmonic $3$-form,
i.e., with $b_3 > 0$. Then the Einstein metric $g$ is linearly unstable.
\end{thm}
\begin{proof}
Let $\beta$ be a harmonic $3$-form on $M$. By Proposition \ref{harmonic-decomp} it is a section of $\Lambda^3_{27}$.
Consider the tracefree symmetric $2$-tensor $h := j(\beta)$ defined by the identification map
$j: \Lambda^3_{27} \rightarrow {\rm S}^2_0 {\rm T} ,$ which was first studied by Bryant \cite{Br05}.
Since $\beta $ is harmonic and in particular closed, Proposition 6.1 in \cite{AS12} immediately implies $\Delta_L h = \frac{\tau_0^2}{4} h = \frac{2\, {\rm scal}_g}{21} h = 4h$,
since we have normalized the scalar curvature to ${\rm scal}_g = 42$, which corresponds to choosing $\tau_0 = 4$. Hence, $h$ is a $\Delta_L$-eigentensor for the eigenvalue $4 < 2E = 12$.
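The numerical normalizations used here are mutually consistent; a quick rational-arithmetic check (variable names ours):

```python
from fractions import Fraction

scal = 42                      # normalization used throughout
tau0 = 4                       # corresponds to scal = 21 tau_0^2 / 8
assert Fraction(21 * tau0**2, 8) == scal
# The two expressions for the Delta_L eigenvalue agree and equal 4:
assert Fraction(tau0**2, 4) == Fraction(2 * scal, 21) == 4
E = Fraction(scal, 7)          # Einstein constant in dimension 7
assert 2 * E == 12 and 4 < 2 * E
```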
It remains to show that $h$ is also divergence-free and thus a TT-tensor, since it is trace-free by definition.
Here an easy calculation, which is essentially also contained in \cite{AS12},
shows that $\delta j(\beta) = -2 P(d^* \beta)$ holds for any section $\beta$ of $\Lambda^3_{27}$, where $P$ is the
vector cross product introduced in the previous section. Since a harmonic form $\beta$ is also co-closed we conclude
that $h= j(\beta)$ is divergence free.
\end{proof}
\medskip
Note that K. Galicki and S. Salamon showed that the odd Betti numbers of a compact $3$-Sasakian manifold
must vanish (see \cite{GaS96}, Theorem A). So the above result does not apply when the nearly parallel ${\rm G}_2$-manifold is $3$-Sasakian. As well, there is no known example of a proper nearly parallel $G_2$-manifold with $b_3>0$.
However, there are some examples of Sasaki Einstein $7$-manifolds admitting non-trivial harmonic $3$-forms (see Introduction).
\section{\bf The Berger space} \label{Berger section}
The aim of this section is to show that the nearly parallel ${\rm G}_2$-metric on the Berger space
$M^7 = {\rm SO}(5)/{\rm SO}(3)_{irr}$ is linearly unstable. Since the Berger space has no non-trivial harmonic forms
we cannot use the arguments of the previous sections. Instead we will prove the instability by showing that
there are symmetric Killing tensors which are eigentensors of the Lichnerowicz Laplacian with eigenvalue less than $2E$.
We start with recalling a few facts on Killing tensors, referring to \cite{HMS17} for further details.
Recall that a symmetric tensor $h\in \Gamma({\rm Sym}^p {\rm T} M)$ is called a {\it Killing tensor} if the complete
symmetrization of $\nabla h $ vanishes, i.e., if $(\nabla_X h)(X, \ldots, X) = 0$ holds for all tangent vectors $X$.
Let $d : \Gamma({\rm Sym}^p {\rm T} M) \rightarrow \Gamma({\rm Sym}^{p+1} {\rm T} M)$
be the differential operator defined by $d h = \sum_i e_i \cdot \nabla_{e_i} h$, where $\{e_i\}$ is a local orthonormal
basis and $\cdot$ now denotes the symmetric product. Then the Killing condition for the symmetric tensor $h$ is equivalent to $d h = 0$.
For a trace-free symmetric Killing tensor $h\in \Gamma({\rm Sym}^p_0 {\rm T} M)$ it is easy to check that $h$ is also divergence-free, i.e.,
trace-free Killing tensors are TT-tensors (see Corollary 3.10 in \cite{HMS17}).
On compact Riemannian manifolds Killing tensors can be characterized using the Lichnerowicz Laplacian $\Delta_L$ acting on
symmetric tensors. Recall that $\Delta_L$ can be written as $\Delta_L = \nabla^*\nabla + q(R)$. Then an easy calculation shows
$\Delta_L h \ge 2 q(R) h $ on divergence-free symmetric tensors, with equality exactly for divergence-free Killing tensors $h$, i.e., $h\in \Gamma({\rm Sym}^2_0 {\rm T} M)$
is a Killing tensor if and only if $\Delta_L h = 2 q(R) h$ (see Proposition 6.2 in \cite{HMS17}).
The isotropy representation of the Berger space $ M^7 = {\rm SO}(5)/{\rm SO}(3)_{irr}$ is the unique $7$-dimensional
irreducible representation of ${\rm SO}(3)$. It also defines an embedding of ${\rm SO}(3)$ into ${\rm G}_2$ and thus a ${\rm G}_2$-structure on
$M^7$ which turns out to be a proper nearly parallel ${\rm G}_2$-structure (see \cite{Br87}, p. 567).
Replacing the groups ${\rm SO}(5)$ and ${\rm SO}(3)$ by their double covers, we can realize the Berger space also as
$M^7= {\rm Sp}(2)/{\rm Sp}(1)_{irr}$ where ${\rm Sp}(1)$ is embedded by the unique $4$-dimensional irreducible representation.
We write $\mathfrak{sp}(2) = \mathfrak{sp}(1) \oplus \mathfrak{m}$, where $\mathfrak{m}$ is the orthogonal complement of $\mathfrak{sp}(1)$ with respect to the Killing
form $B$ of $\mathfrak{sp}(2)$. As usual, $\mathfrak{m}$ is identified with the tangent space at the identity coset and, as mentioned above,
is the irreducible $7$-dimensional representation of ${\rm Sp}(1)$. Recall that the irreducible complex representations of
${\rm Sp}(1)$ can be written as the symmetric powers ${\rm Sym}^k E$, where $E = \mathbb{C}^2$ is the
standard representation of ${\rm Sp}(1)$. In particular, we have $\mathfrak{m}^{\mathbb C} := \mathfrak{m} \otimes \mathbb{C} = {\rm Sym}^6 E$. We will need the
following decomposition into irreducible summands:
\begin{equation}\label{deco1}
{\rm Sym}^2_0 \, \mathfrak{m}^{\mathbb C} \cong \Lambda^3_{27} \mathfrak{m}^{\mathbb C} \cong {\rm Sym}^4 E \oplus {\rm Sym}^8 E \oplus {\rm Sym}^{12} E \ .
\end{equation}
The Peter-Weyl theorem and the Frobenius reciprocity now imply the following
decomposition into irreducible summands of the left-regular representation of ${\rm Sp}(2)$ on sections of the vector bundle ${\rm Sym}^2_0 {\rm T} M^{\mathbb C}$
\begin{equation}\label{deco2}
\Gamma({\rm Sym}^2_0 \, {\rm T} M^{\mathbb C}) \cong \overline{\bigoplus_{k,l}} \, V(k,l) \otimes \mathrm{Hom}_{{\rm Sp}(1)} (V(k,l), \, {\rm Sym}^2_0\, \mathfrak{m}^{\mathbb C})
\end{equation}
where the sum runs over all pairs of integers $(k, l)$ with $k \ge l \ge 0$. Here $V(k,l)$ is the irreducible ${\rm Sp}(2)$-representation
with highest weight $\gamma = (k,l)$, where $k$ corresponds to the short simple root, and it is easy to compute that the $\mathfrak{sp}(2)$-Casimir operator (with respect to the Killing form) acts on the representation space $V(k,l)$ as $-\frac{1}{12}(4k + k^2 + 2l + l^2)\,{\rm id}$. In particular, $V(2,0)$ is the adjoint representation of
$\mathfrak{sp}(2)$ and the Casimir eigenvalue is $-1$, as it should be.
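The Casimir formula is easy to check on the two representations that matter here (helper name ours):

```python
from fractions import Fraction

def casimir_sp2(k, l):
    # sp(2)-Casimir eigenvalue on V(k, l) with respect to the Killing form
    return -Fraction(4 * k + k**2 + 2 * l + l**2, 12)

assert casimir_sp2(2, 0) == -1               # adjoint representation V(2,0)
assert casimir_sp2(1, 1) == -Fraction(2, 3)  # the representation V(1,1)
```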
Interesting for us will be the representation $V(1,1)$ with Casimir eigenvalue $-\frac23$. It is easy to check that $V(1,1)$ is $5$-dimensional
and that $V(1,1) = {\rm Sym}^4 E$ considered as an ${\rm Sp}(1)$-representation. Moreover, from $\mathfrak{m}^{\mathbb C} = {\rm Sym}^6 E$ and the decomposition \eqref{deco1}
we conclude
\begin{equation}\label{hom}
\dim \mathrm{Hom}_{{\rm Sp}(1)}(V(1,1), \, {\rm Sym}^2_0 \, \mathfrak{m}^{\mathbb C}) = 1
\quad \mbox{and} \quad
\mathrm{Hom}_{{\rm Sp}(1)}(V(1,1), \,\mathfrak{m}^{\mathbb C} ) = \{0\} \ .
\end{equation}
Using the program LiE for the decomposition of ${\rm Sym}^3 {\rm Sym}^6 E$ as an ${\rm Sp}(1)$-representation it is also easy to check that $\mathrm{Hom}_{{\rm Sp}(1)}(V(1,1), \,{\rm Sym}^3 \mathfrak{m}^{\mathbb C}) = \{0\}$.
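Both the decomposition \eqref{deco1} and the LiE computation just quoted can be reproduced with elementary ${\rm SU}(2)$ character arithmetic, using the plethysm formulas $h_2 = (p_1^2 + p_2)/2$ and $h_3 = (p_1^3 + 3p_1p_2 + 2p_3)/6$; a sketch (function names ours):

```python
from collections import Counter

def char(n, scale=1):
    """Weight multiset of the SU(2)-irrep Sym^n E; scale=k corresponds to
    evaluating the character at q^k (needed for the power sums p_k)."""
    return Counter({scale * w: 1 for w in range(-n, n + 1, 2)})

def cmul(a, b):
    out = Counter()
    for wa, ma in a.items():
        for wb, mb in b.items():
            out[wa + wb] += ma * mb
    return out

def decompose(c):
    """Peel off highest weights to obtain irreducible multiplicities."""
    c, mults = Counter(c), {}
    while any(c.values()):
        n = max(w for w, m in c.items() if m)
        mults[n] = c[n]
        for w in range(-n, n + 1, 2):
            c[w] -= mults[n]
    return mults

chi = char(6)  # character of m^C = Sym^6 E

# Sym^2 via h_2 = (p_1^2 + p_2)/2
sym2_num = cmul(chi, chi) + char(6, 2)
sym2 = Counter({w: m // 2 for w, m in sym2_num.items()})
# Sym^3 via h_3 = (p_1^3 + 3 p_1 p_2 + 2 p_3)/6
sym3_num = cmul(cmul(chi, chi), chi)
for w, m in cmul(chi, char(6, 2)).items():
    sym3_num[w] += 3 * m
for w, m in char(6, 3).items():
    sym3_num[w] += 2 * m
sym3 = Counter({w: m // 6 for w, m in sym3_num.items()})

# Sym^2(Sym^6 E) = Sym^12 + Sym^8 + Sym^4 + trivial, i.e. (deco1) once the
# trace part is removed:
assert decompose(sym2) == {12: 1, 8: 1, 4: 1, 0: 1}
# Sym^4 E (= V(1,1) as an Sp(1)-representation) does not occur in Sym^3 m^C:
assert decompose(sym3).get(4, 0) == 0
assert sum((n + 1) * m for n, m in decompose(sym3).items()) == 84  # dim check
```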
This has the following important consequence:
\begin{lemma}
The space $V(1,1) \subset \Gamma({\rm Sym}^2_0 \, {\rm T} M^{\mathbb C} )$ consists of tracefree Killing tensors.
\end{lemma}
\begin{proof}
Killing $2$-tensors are by definition symmetric tensors in the kernel of the differential operator $d :\Gamma({\rm Sym}^2 \, {\rm T} M) \rightarrow \Gamma({\rm Sym}^3 \, {\rm T} M)$
introduced above. In our situation the operator $d$ is an ${\rm Sp}(2)$-invariant differential operator.
Hence, with respect to the decomposition \eqref{deco2} it restricts to an invariant map
$$
V(1,1)\otimes \mathrm{Hom}_{{\rm Sp}(1)}(V(1,1), \, {\rm Sym}^2_0 \, \mathfrak{m}^{\mathbb C}) \longrightarrow V(1,1) \otimes \mathrm{Hom}_{{\rm Sp}(1)}(V(1,1), \,{\rm Sym}^3 \mathfrak{m}^{\mathbb C}) \ ,
$$
which has to be zero since the space on the right side vanishes. A similar argument shows that the divergence of elements in $V(1,1)$ also has to be zero.
However, since elements of $V(1,1)$ are trace-free by definition, this was already clear from the remark above.
\end{proof}
Since we know that $\Delta_L$ acts by the curvature term $2 q(R)$ on divergence-free Killing tensors it
remains to compute the action of $q(R)$ on the space $V(1,1)$. To do this we will use the comparison formula
(5.33) in \cite{AS12}. Let $(M^7, g, \varphi)$ be a nearly parallel $\rm G_2$ manifold with
$d \varphi = \tau_0 \ast \varphi$, and let $\bar R$ denote the curvature of the canonical ${\rm G}_2$ connection
$\bar \nabla$; then this formula states
\begin{equation}\label{qr}
q(R) \;=\; q(\bar R) \; +\; 3 \left(\frac{\tau_0}{12}\right)^2 \, {\rm Cas}^{\mathfrak{so}(7)} \,-\; 4 \left(\frac{\tau_0}{12}\right)^2 \, S
\end{equation}
where $ {\rm Cas}^{\mathfrak{so}(7)} $ is the Casimir operator of $\mathfrak{so}(7)$, which acts as $-k(7-k)\,{\rm id}$ on forms in $\Lambda^k {\rm T}$ and as $-14\,{\rm id}$ on ${\rm Sym}^2_0 {\rm T}$.
The ${\rm G}_2$ invariant endomorphism $S$ is defined as $S = \sum P_{e_i} \circ P_{e_i}$, where $P_X$ is the skew-symmetric endomorphism
$X \lrcorner \, \varphi$. Then a straightforward computation on explicit elements shows that $S = - 6\, {\rm id} $ on ${\rm T}$ and $S = -14 \,{\rm id}$ on ${\rm Sym}^2_0 {\rm T}$.
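Both values of $S$ can be confirmed numerically from an explicit model of the cross product. The sketch below assumes the common sign convention $\varphi = e_{123}+e_{145}+e_{167}+e_{246}-e_{257}-e_{347}-e_{356}$ (one of several in the literature) and lets $2$-forms act on symmetric $2$-tensors by the induced derivation $H \mapsto [P_i, H]$:

```python
import numpy as np

# Structure constants of a standard G2 3-form on R^7:
# phi = e123 + e145 + e167 + e246 - e257 - e347 - e356
TRIPLES = [(1, 2, 3, 1), (1, 4, 5, 1), (1, 6, 7, 1), (2, 4, 6, 1),
           (2, 5, 7, -1), (3, 4, 7, -1), (3, 5, 6, -1)]
phi = np.zeros((7, 7, 7))
for i, j, k, s in TRIPLES:
    i, j, k = i - 1, j - 1, k - 1
    for (a, b, c), sgn in [((i, j, k), s), ((j, k, i), s), ((k, i, j), s),
                           ((j, i, k), -s), ((i, k, j), -s), ((k, j, i), -s)]:
        phi[a, b, c] = sgn

# P_i = e_i -| phi as a skew-symmetric matrix: (P_i v)_k = phi[i, j, k] v_j
P = [phi[i].T for i in range(7)]

# S = sum_i P_{e_i} o P_{e_i} acts as -6 id on T ...
S_T = sum(p @ p for p in P)
assert np.allclose(S_T, -6 * np.eye(7))

# ... and as -14 id on Sym^2_0 T, via the derivation H -> [P_i, [P_i, H]]
rng = np.random.default_rng(0)
A = rng.standard_normal((7, 7))
H = (A + A.T) / 2
H -= np.trace(H) / 7 * np.eye(7)      # random trace-free symmetric tensor
S_H = sum(p @ (p @ H - H @ p) - (p @ H - H @ p) @ p for p in P)
assert np.allclose(S_H, -14 * H)
```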
\medskip
As an example we consider the difference of $q(R)$ and $q(\bar R)$ on ${\rm T} M$. Of course we already know that
$q(R) = {\rm Ric} = \frac{{\rm scal}}{7} = \frac{3\tau^2_0}{8}$. Hence formula \eqref{qr} and the explicit values for the Casimir operator and the endomorphism
$S$ yield $ q(\bar R) = q(R) - \frac{\tau^2_0}{24} = \frac{\tau^2_0}{3}$.
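In rational arithmetic, solving \eqref{qr} for $q(\bar R)$ with the eigenvalues ${\rm Cas}^{\mathfrak{so}(7)} = S = -6$ on ${\rm T}$ (variable names ours):

```python
from fractions import Fraction as F

tau0_sq = F(1)   # the identity is homogeneous in tau_0^2, so set it to 1
qR = F(3, 8) * tau0_sq                # q(R) = Ric = scal/7 = 3 tau_0^2/8 on T
cas, S = -6, -6                       # Cas^{so(7)} and S on 1-forms / T
qRbar = qR - (3 * cas - 4 * S) * tau0_sq / 144   # (qr) solved for q(bar R)
assert qRbar == qR - tau0_sq / 24 == F(1, 3) * tau0_sq
```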
\medskip
Returning to the Berger space, recall that the induced Einstein metric $g$ defined by the ${\rm G}_2$ structure is
given by a multiple of the Killing form $B$ of $\mathfrak{sp}(2)$ restricted to $\mathfrak{m}$. Moreover, the canonical homogeneous connection
coincides with the canonical ${\rm G}_2$ connection $\bar \nabla$. For our calculation we will use the normalization $g= -B$, for
which we have $\tau^2_0 = \frac{6}{5}$ and ${\rm scal} = \frac{63}{20}$ (see \cite{AS12}, Lemma 7.1). In this situation
we know that the endomorphism $q(\bar R)$ acts as $ -{\rm Cas}^{\mathfrak{sp}(1)}$ (see \cite{AS12}, Lemma 7.2).
The Casimir operator of $\mathfrak{sp}(1)$ with respect to the Killing form acts on ${\rm Sym}^k E$ as $-k(k+2)\,{\rm id}$.
Thus $q(\bar R)$ acts on the bundle defined by the ${\rm Sp}(1)$ representation ${\rm Sym}^k E$ as $c\, k(k+2)\,{\rm id}$ for some
positive constant $c$, which can be determined from the case $k=6$. Indeed, here we have
${\rm Sym}^6 E = \mathfrak{m}^{\mathbb C}$ and we already calculated that $q(\bar R) = \frac{\tau^2_0}{3} = \frac25$ on $\mathfrak{m}^{\mathbb C}$. It follows that $c= \frac{1}{120}$. In particular we conclude that $q(\bar R) = \frac15 \,{\rm id}$ on ${\rm Sym}^4 E$ and \eqref{qr} in our normalization
implies that $q(R) = \frac{19}{60}\,{\rm id}$ on the space $V(1,1)$. Hence, for any Killing tensor
$h \in V(1,1) \subset \Gamma({\rm Sym}^2_0 \, \mathfrak{m}^{\mathbb C})$ we have $\Delta_L h = 2q(R)\, h = \frac{19}{30}\, h$.
But $\frac{19}{30} < 2E = \frac{3\tau^2_0}{4} = \frac{9}{10} $, so the instability condition (\ref{instability}) holds.
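The chain of rational constants in this computation can be verified mechanically (variable names ours; the factor $3\,{\rm Cas} - 4S = 14$ uses ${\rm Cas}^{\mathfrak{so}(7)} = S = -14$ on ${\rm Sym}^2_0 {\rm T}$):

```python
from fractions import Fraction as F

tau0_sq = F(6, 5)                  # normalization g = -B on the Berger space
scal = F(21, 8) * tau0_sq
assert scal == F(63, 20)

c = F(1, 120)                      # constant in q(bar R) = c k(k+2) id on Sym^k E
assert c * 6 * 8 == tau0_sq / 3    # determined from k = 6, q(bar R) = tau_0^2/3

qRbar = c * 4 * 6                  # q(bar R) = 1/5 on Sym^4 E (k = 4)
qR = qRbar + (3 * (-14) - 4 * (-14)) * tau0_sq / 144
assert qR == F(19, 60)             # q(R) on V(1,1)

assert 2 * qR == F(19, 30) < 2 * scal / 7 == F(9, 10)   # instability holds
```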
\medskip
We have therefore proved the following
\begin{thm} $($Theorem \ref{Berger}$)$
Let $M^7 = {\rm SO}(5)/{\rm SO}(3)_{irr}$ be the Berger space equipped with its homogeneous proper nearly parallel ${\rm G}_2$ structure. Then the induced Einstein metric is linearly unstable. Moreover, $M^7$ admits a $5$-dimensional space of divergence-free and
trace-free Killing tensors. \mbox{$\Box$}
\end{thm}
\begin{rmk} \label{eigenfunction}
The reader might wonder if $\nu$-linear instability for the Berger space can be shown by looking at the spectrum
of the Laplacian on functions. It turns out that this is not possible because computations similar to the above
show that the smallest nonzero eigenvalue is larger than $2E$, and the corresponding eigenspace is the $35$-dimensional
irreducible module $V(4, 0)$.
\end{rmk}
\end{document} |
\begin{document}
\def\spacingset#1{\renewcommand{\baselinestretch}
{#1}\small\normalsize} \spacingset{1}
\setcounter{Maxaffil}{0}
\renewcommand\Affilfont{\itshape\small}
\spacingset{1.42}
\title{\bf Bayesian Approximations to Hidden Semi-Markov Models for Telemetric Monitoring of Physical Activity}
\begin{abstract}
We propose a Bayesian hidden Markov model for analyzing time series and sequential data where a special structure of the transition probability matrix is embedded to model explicit-duration semi-Markovian dynamics. Our formulation allows for the development of highly flexible and interpretable models that can integrate available prior information on state durations while keeping a moderate computational cost to perform efficient posterior inference. We show the benefits of choosing a Bayesian approach for \acro{\smaller HSMM} estimation over its frequentist counterpart, in terms of
model selection and out-of-sample forecasting, also highlighting the computational feasibility of our inference procedure whilst incurring negligible statistical error. The use of our methodology is illustrated in an application relevant to e-Health, where we investigate rest-activity rhythms using telemetric activity data collected via a wearable sensing device. This analysis considers for the first time Bayesian model selection for the form of the explicit state dwell distribution. We further investigate the inclusion of a circadian covariate into the emission density and estimate this in a data-driven manner.
\end{abstract}
\noindent
{\it Keywords:} Markov Switching Process; Hamiltonian Monte Carlo; Bayes Factor; Telemetric Activity Data; Circadian Rhythm.
\spacingset{1.45}
\section{Introduction}
Recent developments in portable computing technology and the increased popularity of wearable and non-intrusive devices, e.g. smartwatches, bracelets, and smartphones, have provided exciting opportunities to measure and quantify physiological time series that are of interest in many applications, including mobile health monitoring, chronotherapeutic healthcare and cognitive-behavioral treatment of insomnia \citep{williams2013cognitive,kaur2013timing,silva2015mobile,aung2017sensing, huang2018hidden}. The behavioral pattern of alternating sleep and wakefulness in humans can be investigated by measuring gross motor activity. Over the last twenty years, activity-based sleep-wake monitoring has become an important assessment tool for quantifying the quality of sleep \citep{ancoli2003role,sadeh2011role}. Though polysomnography \citep{douglas1992clinical}, usually carried out within a hospital or at a sleep center, continues to remain the gold standard for diagnosing sleeping disorders, accelerometers have
become a practical and inexpensive way to collect non-obtrusive and continuous measurements of rest-activity rhythms over a multitude of days in the individual’s home sleep environment \citep{ancoli2015sbsm}.
Our study investigates the \textit{physical activity} (\acro{\smaller PA}) time-series first considered by
\citet{huang2018hidden} and \citet{hadj2019bayesian}, where a wearable sensing device is fixed to the chest of a user to measure its movement via a triaxial accelerometer (ADXL345, Analog Devices). The tool produces \acro{\smaller PA} counts, defined as the number of times an accelerometer undulation exceeds zero over a specified time interval. Figure \ref{fig:physical_activity} displays an example of 4 days of 5-min averaged \acro{\smaller PA} recordings for a healthy subject, providing a total of 1150 data points. Transcribing information from such complex, high-frequency data into interpretable and meaningful statistics is a non-trivial challenge, and there is a need for a data-driven procedure to automate the analysis of these types of measurements.
While \citet{huang2018hidden} addressed this task by proposing a hidden Markov model (\acro{\smaller HMM}) within a frequentist framework, we formulate a more flexible approximate hidden semi-Markov model (\acro{\smaller HSMM}) approach that enables us to explicitly model the dwell time spent in each state. Our proposed modelling approach uses a Bayesian inference paradigm, allowing us to incorporate available prior information for different activity patterns and facilitate consistent and efficient model selection between dwell distributions.
\begin{figure}
\caption{\acro{\smaller PA}}
\label{fig:physical_activity}
\end{figure}
We conduct Bayesian inference using a \acro{\smaller HMM} likelihood model that is a reformulation of any given \acro{\smaller HSMM}. We
utilise the method of \citet{langrock2011hidden} to embed the generic state duration distribution within a special transition matrix structure that can approximate the underlying \acro{\smaller HSMM} with arbitrary accuracy.
This framework is able to incorporate the extra flexibility of explicitly modelling the state dwell distribution provided by a \acro{\smaller HSMM}, without renouncing the computational tractability, theoretical understanding, and the multitudes of methodological advancements that are available when using an \acro{\smaller HMM}.
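The core of this embedding, in the explicit-duration formulation of \citet{langrock2011hidden}, expands each \acro{\smaller HSMM} state into a chain of aggregate sub-states whose exit probabilities are the dwell-time hazard rates. The sketch below is illustrative (function names and the simple truncation scheme are ours, not the exact implementation used later):

```python
import numpy as np

def hazard(p):
    """Dwell hazard rates c(r) = p(r) / (1 - F(r-1)) for a dwell pmf
    given by its first m probabilities p = (p(1), ..., p(m))."""
    survivor = 1.0 - np.concatenate([[0.0], np.cumsum(p)[:-1]])
    return np.asarray(p) / survivor

def expanded_transition_matrix(dwell_pmfs, Omega):
    """Transition matrix of the HMM on state aggregates approximating the
    HSMM with the given dwell pmfs and conditional transition matrix Omega
    (zero diagonal, rows summing to one)."""
    sizes = [len(p) for p in dwell_pmfs]
    starts = np.concatenate([[0], np.cumsum(sizes)[:-1]]).astype(int)
    T = np.zeros((sum(sizes), sum(sizes)))
    for i, p in enumerate(dwell_pmfs):
        c, s, m = hazard(p), starts[i], sizes[i]
        for r in range(m):
            stay = 1.0 - c[r]
            if r < m - 1:
                T[s + r, s + r + 1] = stay      # keep dwelling in state i
            else:
                T[s + r, s + r] = stay          # geometric tail beyond m
            for j in range(len(dwell_pmfs)):
                if j != i:                      # leave to state j
                    T[s + r, starts[j]] = c[r] * Omega[i, j]
    return T

# Two states with fully specified dwell laws; the expanded chain reproduces them.
p1, p2 = [0.2, 0.5, 0.3], [0.4, 0.6]
T = expanded_transition_matrix([p1, p2], np.array([[0.0, 1.0], [1.0, 0.0]]))
assert np.allclose(T.sum(axis=1), 1.0)
c = hazard(p1)
implied = [c[r] * np.prod(1.0 - c[:r]) for r in range(3)]
assert np.allclose(implied, p1)
```

This makes concrete why the approximation error is controlled by the truncation level: within the first $m$ sub-states the implied dwell law matches the target pmf exactly, and only the tail beyond $m$ is replaced by a geometric continuation.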
To the best of our knowledge, such a modeling approach has only previously been treated from a non-Bayesian perspective in the literature, where parameters are estimated either by direct numerical likelihood maximization (\acro{\smaller MLE}) or applying the expectation-maximization (\acro{\smaller EM}) algorithm.
The main practical advantages of a fully Bayesian framework for \acro{\smaller HSMM} inference
are that the regularisation and uncertainty quantification provided by the prior and posterior distributions can be readily incorporated into improved mechanisms for prediction and model selection, in particular for selecting the \acro{\smaller HSMM} dwell distribution in a data-driven manner and for performing predictive inference on future state dwell times.
However, the posterior distribution is rarely available in closed form and the computational burden of approximating the posterior, often by sampling \citep[see e.g.][]{gelfand1990sampling}, is considered a major drawback of the Bayesian approach.
In particular, evaluating the likelihood in \acro{\smaller HSMM}s is already computationally burdensome \citep{guedon2003estimating}, yielding implementations that are often prohibitively slow. This further motivates the use of the likelihood approximation of \citet{langrock2011hidden} within a Bayesian framework. Here, we combine their approach with the \textit{stan}{} probabilistic programming language \citep{carpenter2016stan}, further accelerating the likelihood evaluations by proposing a sparse matrix implementation and leveraging \textit{stan}'s compatibility with bridge sampling \citep{meng1996simulating,meng2002warp,gronau2017bridgesampling} to facilitate Bayesian model selection. We provide examples to illustrate the statistical advantages of our Bayesian implementation in terms of prior regularization, forecasting, and model selection and further illustrate that the combination of our approaches can
make such inferences computationally feasible (for example, by reducing the time for inference from more than three days to less than two hours), whilst incurring negligible statistical error.
The rest of this article is organized as follows. In Section \ref{sec:hmm_hsmm}, we provide a brief introduction to \acro{\smaller HMM}s and \acro{\smaller HSMM}s. Section \ref{sec:model} reviews the \acro{\smaller HSMM} likelihood approximation of \cite{langrock2011hidden}. Section \ref{sec:bayesian_inference} presents our Bayesian framework and inference approach. Using several simulation studies, Section \ref{sec:simulation_studies} investigates the performance of our proposed procedure when compared with the implementation of \cite{langrock2011hidden}. Section \ref{sec:approx_accuracy} evaluates the trade-off between computational efficiency and statistical accuracy of our method and proposes an approach to investigate the quality of the likelihood approximation for given data. Section \ref{sec:application} illustrates the use of our method to analyze telemetric activity data, and we further investigate the inclusion of spectral information within the emission density in Section \ref{sec:harmonic_emissions}. The \textit{stan}{} files (and \texttt{R} utilities) that were used to implement our experiments are available at
\url{https://github.com/Beniamino92/BayesianApproxHSMM}. The probabilistic programming framework associated with \textit{stan}{} makes it easy for practitioners to consider dwell and emission distributions beyond the ones considered in this paper. Users need only change the corresponding function in our \textit{stan}{} files.
\section{Modeling Approach} \label{sec:modeling_approach}
\subsection{Overview of Hidden Markov and Semi-Markov Models}
\label{sec:hmm_hsmm}
We now provide a brief introduction to the standard \acro{\smaller HMM} and \acro{\smaller HSMM} approaches before considering the special structure of the transition matrix presented by \cite{zucchini2017hidden}, which allows the state dwell distribution to be generalized with arbitrary accuracy.
\acro{\smaller HMM}s, or Markov switching processes, have been shown to be appealing models in addressing learning challenges in time series data and have been successfully applied in fields such as speech recognition \citep{rabiner1989tutorial, jelinek1997statistical}, digit recognition \citep{raviv1967decision, rabiner1989high} as well as biological and physiological data \citep{ langrock2013combining, huang2018hidden, hadj2020spectral}. An \acro{\smaller HMM} is a stochastic process model based on an unobserved (hidden) state sequence $\bm{s} = (s_1, \dots, s_T)$ that takes discrete values in the set $\{1, \dots, K\}$ and whose transition probabilities follow a Markovian structure. Conditioned on this state sequence, the observations $\bm{y} = (y_1, \dots, y_T)$ are assumed to be conditionally independent and generated from a parametric family of probability distributions $f(\bm{\theta}_j)$, which are often called \textit{emission} distributions. This generative process can be outlined as
\begin{equation}
\begin{split}
s_{\,t} \, | \, s_{\,t-1} &\sim \bm{\gamma}_{s_{\, t-1}} \\
y_t \, | \, s_{\,t} \, &\sim \, f \, ( \, \bm{\theta}_{s_{\,t}}) \qquad t = 1, \dots, T,
\end{split}
\label{eq:HMM}
\end{equation}
where $\bm{\gamma}_{\, j} = (\gamma_{j1}, \dots, \gamma_{jK})$ denotes the state-specific vector of transition probabilities, $\gamma_{jk} = p \, (\, s_t = k \, | \, s_{t-1} = j)$ with $\sum_{k} \gamma_{jk} = 1$, and $p\,(\cdot)$ is a generic notation for probability density or mass function, whichever appropriate. The initial state $s_0$ has distribution $\bm{\gamma}_0 = (\gamma_{01}, \dots, \gamma_{0K})$ and $\bm{\theta}_j$ represents the vector of emission parameters modelling state $j$. \acro{\smaller HMM}s provide a simple and flexible mathematical framework that can be naturally used for many inference tasks, such as signal extraction, smoothing, filtering and forecasting (see e.g. \citealt{zucchini2017hidden}). These appealing features are a result of an extensive theoretical and methodological literature that includes several dynamic programming algorithms for computing the likelihood in a straightforward and inexpensive manner (e.g. forward messages scheme, \citealt{rabiner1989tutorial}). \acro{\smaller HMM}s are also naturally suited for local and global decoding (e.g. Viterbi algorithm, \citealt{forney1973viterbi}), and the incorporation of trend, seasonality and covariate information in both the observed process and the latent sequence. Although computationally convenient, the Markovian structure of \acro{\smaller HMM}s limits their flexibility. In particular, the \textit{dwell} duration in any state, namely the number of consecutive time points that the Markov chain spends in that state, is implicitly forced to follow a geometric distribution with probability mass function $p_j (d) = (1 - \gamma_{jj}) \, \gamma_{jj}^{\,d - 1}$.
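This implicit geometric dwell constraint can be checked numerically. The sketch below (our own illustration; all variable names are hypothetical) compares the stated pmf $p_j(d) = (1 - \gamma_{jj})\,\gamma_{jj}^{\,d-1}$ with simulated dwell times:

```python
import numpy as np

# Dwell distribution implied by an HMM: staying in state j with
# probability gamma_jj each step gives a geometric dwell length.
gamma_jj = 0.9
rng = np.random.default_rng(0)

def dwell_pmf(d):
    # p_j(d) = (1 - gamma_jj) * gamma_jj**(d - 1), d = 1, 2, ...
    return (1.0 - gamma_jj) * gamma_jj ** (d - 1)

# simulated dwell times: steps until the chain first leaves state j
dwells = rng.geometric(1.0 - gamma_jj, size=100_000)

mean_theory = 1.0 / (1.0 - gamma_jj)   # implied mean dwell length
mean_emp = dwells.mean()
```

The single parameter $\gamma_{jj}$ thus fixes both the mean and the (monotonically decaying) shape of the dwell distribution, which motivates the semi-Markov extension below.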
\begin{figure}
\caption{Graphical models: (left) \acro{\smaller HMM}; (right) \acro{\smaller HSMM}.}
\label{fig:HMM_vs_HSMM}
\end{figure}
A more flexible framework can be formulated using \acro{\smaller HSMM}s, where the generative process of an \acro{\smaller HMM} is augmented by introducing an explicit, state specific, form for the dwell time \citep{guedon2003estimating, johnson2013bayesian}. The state stays unchanged until the duration terminates, at which point there is a Markov transition to a new regime. As depicted in Figure \ref{fig:HMM_vs_HSMM}, the \textit{super-states} $\bm{z} = (z_1, \dots, z_S)$ are generated from a Markov chain prohibiting self-transitions wherein each super-state $z_s$ is associated with a dwell time $d_s$ and a random segment of observations $\bm{y}_s = (y_{t_s^1}, \dots, y_{t_s^2})$, where $t_s^1 = 1 + \sum_{r<s} d_r$ and $t_s^2 = t_s^1 + d_s - 1$ represent the first and last index of segment $s$, and $S$ is the (random) number of segments. Here, $d_s$ represents the length of the dwell duration of $z_s$. The generative mechanism of an \acro{\smaller HSMM} can be summarized as \begin{equation}
\begin{split}
z_{\,s} \, | \, z_{\,s-1} &\sim \bm{\pi}_{\,z_{\, s-1}} \\
d_s \, | \, z_s \, &\sim g \, ( \, \bm{\lambda}_{\, z_s} ) \\ \bm{y}_{s}
\, | \, z_{\,s} \, &\sim \, f \, ( \, \bm{\theta}_{z_{\,s}}) \qquad s = 1, \dots, S,
\end{split}
\label{eq:HSMM}
\end{equation}
where $\bm{\pi}_{\, j} = (\pi_{j1}, \dots, \pi_{jK})$ are state-specific transition probabilities in which $\pi_{jk} = p \, (\, z_t = k \, | \, z_{t-1} = j, \, z_t \neq j)$ for $j, k = 1, \dots, K$. Note that $\pi_{jj} = 0$, since self transitions are prohibited. We assume that the initial state has distribution $\bm{\pi}_0 = (\pi_{01}, \dots, \pi_{0K})$, namely $\bm{z}_0 \sim \bm{\pi}_0$. Here, $g$ denotes a family of dwell distributions parameterized by some state-specific duration parameters $\bm{\lambda}_j$, which could be either a scalar (e.g. rate of a Poisson distribution), or a vector (e.g. rate and dispersion parameters for negative binomial durations). Unfortunately, this increased flexibility in modeling the state duration has the cost of substantially increasing the computational burden of computing the likelihood: the message-passing procedure for \acro{\smaller HSMM}s requires $\mathcal{O} \, (T^2 K + T K^2)$ basic computations for a time series of length $T$ and number of states $K$, whereas the corresponding forward-backward algorithm for \acro{\smaller HMM}s requires only $\mathcal{O} \, (T K^2)$.
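As a concrete illustration of this generative mechanism, the following sketch (our own illustration, assuming shifted-Poisson dwells and Gaussian emissions; all names are hypothetical) simulates from Eq. \eqref{eq:HSMM}:

```python
import numpy as np

def simulate_hsmm(T, pi0, Pi, lam, mu, sigma, seed=0):
    """Simulate T observations from an HSMM with shifted-Poisson(lam)
    dwell times and Normal(mu, sigma) emissions.
    Pi must have zero diagonal (self-transitions prohibited)."""
    rng = np.random.default_rng(seed)
    z = rng.choice(len(pi0), p=pi0)          # initial super-state z_0
    states, ys = [], []
    while len(ys) < T:
        d = rng.poisson(lam[z]) + 1          # dwell duration d_s >= 1
        states.extend([z] * d)
        ys.extend(rng.normal(mu[z], sigma[z], size=d))
        z = rng.choice(len(pi0), p=Pi[z])    # Markov transition, no self-loop
    return np.array(states[:T]), np.array(ys[:T])

Pi = np.array([[0.0, 0.5, 0.5],
               [0.3, 0.0, 0.7],
               [0.6, 0.4, 0.0]])
states, y = simulate_hsmm(500, [1/3, 1/3, 1/3], Pi,
                          lam=[5, 10, 3], mu=[0.0, 5.0, 10.0],
                          sigma=[1.0, 1.0, 1.0])
```

The super-state sequence is truncated at $T$ observations, so the final dwell segment may be censored, as in the likelihood treatment of the main text.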
\subsection{Approximations to Hidden Semi-Markov Models}
\label{sec:model}
In this section we introduce the \acro{\smaller HSMM} likelihood approximation of \citet{langrock2011hidden}. Let us consider an \acro{\smaller HMM} in which $\bm{y}^{\star} = (y^{\star}_1, \dots, y^{\star}_T$) represents the observed process and $\bm{z}^{\star} = (z^{\star}_1 , . . . , z^{\star}_T )$ denotes the latent discrete-valued sequence of a Markov chain with states $\{1, 2, \dots, \bar{A} \}$, where $\bar{A} = \sum_{i = 1}^{K} a_i$, and $a_1, \dots, a_K$ are arbitrarily fixed positive integers. Let us define \textit{state aggregates} $A_j$ as \begin{equation}
A_j = \Bigg\{ \, a : \sum_{i=0}^{j-1} a_i < a \leq \sum_{i=0}^{j} a_i \, \Bigg\}, \quad j = 1, \dots, K,
\end{equation} where $a_0 = 0$, and each state corresponding to $A_j$ is associated with the same emission distribution $f ( \, \bm{\theta}_j )$ in the \acro{\smaller HSMM} formulation of Eq. \eqref{eq:HSMM}, namely $ y^{\star}_t \, \big| \, z^{\star}_{\,t} \in A_j \sim f \, ( \bm{\theta}_j).$ The probabilistic rules governing the transitions between states $\bm{z}^\star$ are described via the matrix $\bm{\Phi} = \big\{ \phi_{il} \big\}$, where $\phi_{il} = p \, (\, z^\star_{\, t} = l \, | \, z^\star_{\, t-1} = i \, )$, for $i, l = 1, \dots, \bar{A}$. This matrix has the following structure
\begin{equation}
\label{eq:phi_mat}
\bm{\Phi} = \begin{bmatrix}
\bm{\Phi}_{11} & \dots & \bm{\Phi}_{1K} \\
\vdots & \ddots & \vdots \\
\bm{\Phi}_{K1} & \dots & \bm{\Phi}_{KK}
\end{bmatrix},
\end{equation} where the sub-matrices $\bm{\Phi}_{jj}$ along the main diagonal, of dimension $a_j \times a_j $, are defined for $a_j \geq 2$, as \begin{equation}
\bm{\Phi}_{jj} =
\begin{bmatrix}
0 & 1 - h_j\,(1) & 0 & \dots & 0 \\
\vdots & 0 & \ddots & & \vdots \\
& \vdots & & & 0 \\
0 & 0 & \dots & 0 & 1 - h_j\,(a_j - 1) \\
0 & 0 & \dots & 0 & 1 - h_j\,(a_j)
\end{bmatrix},
\end{equation}
and $\bm{\Phi}_{jj} = 1 - h_j(1)$, for $a_j = 1$. The $a_j \times a_k$ off-diagonal matrices $\bm{\Phi}_{jk}$ are given by
\begin{equation}
\bm{\Phi}_{jk} =
\begin{bmatrix}
\pi_{jk}\, h_j\,(1) & 0 & \dots & 0 \\
\pi_{jk}\, h_j\,(2) & 0 & \dots & 0 \\
\vdots & & & \\
\pi_{jk} \, h_j\,(a_j) & 0 & \dots & 0
\end{bmatrix}
\end{equation}
where in the case that $a_k = 1$ only the first column is included. Here, $\pi_{jk}$ are the transition probabilities of an \acro{\smaller HSMM} as in Eq. \eqref{eq:HSMM}, and the \textit{hazard rates} $h_j \, (r)$ are specified for $ r \in \mathbb{N}_{> 0} $ as
\begin{equation}
h_j \, (r) = \dfrac{p \, (\, d_j = r \, | \, \bm{\lambda}_j)}{p \, (\, d_j \geq r \, | \, \bm{\lambda}_j)}, \quad \text{if} \, \, p \, (\, d_j \geq r \, | \, \bm{\lambda}_j) > 0,
\label{eq:hazard_rates}
\end{equation}
and 1 otherwise, where $p \, (\, d_j = r \, | \, \bm{\lambda}_j)$ denotes the probability mass function of the dwell distribution $g \, ( \bm{\lambda}_j)$ for state $j$. This structure for the matrix $\bm{\Phi}$ implies that transitions within state aggregate $A_j$ are determined by the diagonal matrices $\bm{\Phi}_{jj}$, while transitions between state aggregates $A_j$ and $A_k$ are controlled by the off-diagonal matrices $\bm{\Phi}_{jk}$. Additionally, a transition from $A_j$ to $A_k$ must enter $A_k$ at $\min(A_k)$. \citet{langrock2011hidden} showed that this choice of $\bm{\Phi}$ allows for the representation of any duration distribution, and yields an \acro{\smaller HMM} that is, at least approximately, a reformulation of the underlying \acro{\smaller HSMM}. In summary, the distribution of $\bm{y}$ (generated from an underlying \acro{\smaller HSMM}) can be approximated by that of $\bm{y}^\star$ (modelled using $\bm{\Phi}$), and this approximation can be made arbitrarily accurate by choosing $a_j$ adequately large. In fact, the representation of the dwell distribution through $\bm{\Phi}$ differs from the true distribution, namely the one in the \acro{\smaller HSMM} formulation of Eq. \eqref{eq:HSMM}, only for values larger than $a_j$, i.e., in the right tail.
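As a sketch of this construction (our own implementation outline, not the authors' code; shifted-Poisson dwells are assumed for concreteness), $\bm{\Phi}$ can be assembled from the hazard rates of Eq. \eqref{eq:hazard_rates} as follows:

```python
import numpy as np
from math import exp, factorial

def shifted_poisson_pmf(lam, r):
    # P(d = r) with d = Poisson(lam) + 1, so r = 1, 2, ...
    return exp(-lam) * lam ** (r - 1) / factorial(r - 1)

def build_phi(Pi, lam, a):
    """Assemble the approximating HMM transition matrix Phi.
    Pi: K x K HSMM transition matrix with zero diagonal.
    a:  state-aggregate sizes (a_1, ..., a_K)."""
    K, Abar = len(a), sum(a)
    offs = np.concatenate(([0], np.cumsum(a)))   # first slot of each aggregate
    Phi = np.zeros((Abar, Abar))
    for j in range(K):
        pmf = np.array([shifted_poisson_pmf(lam[j], r)
                        for r in range(1, a[j] + 1)])
        surv = 1.0 - np.concatenate(([0.0], np.cumsum(pmf[:-1])))  # P(d_j >= r)
        h = np.where(surv > 0, pmf / np.maximum(surv, 1e-300), 1.0)  # hazards
        for r in range(a[j]):
            row = offs[j] + r
            # within-aggregate: advance one slot; the last slot loops on itself
            Phi[row, offs[j] + min(r + 1, a[j] - 1)] += 1.0 - h[r]
            # between aggregates: enter A_k at its first slot, min(A_k)
            for k in range(K):
                if k != j:
                    Phi[row, offs[k]] += Pi[j, k] * h[r]
    return Phi

Pi = np.array([[0.0, 0.4, 0.6],
               [0.5, 0.0, 0.5],
               [0.2, 0.8, 0.0]])
Phi = build_phi(Pi, lam=[4.0, 7.0, 2.0], a=[15, 20, 10])
```

Each row of the resulting matrix sums to one and contains at most $K$ non-zero entries, which is the sparsity exploited in Section \ref{sec:bayes_computation}.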
\section{Bayesian Inference}
\label{sec:bayesian_inference}
Bayesian inference for \acro{\smaller HSMM}s has long been plagued by the computational demands of evaluating its likelihood. In this section we use the \acro{\smaller HSMM} likelihood approximation of \cite{langrock2011hidden} to facilitate efficient Bayesian inference for \acro{\smaller HSMM}s. Extending the model introduced in Section \ref{sec:model} to the Bayesian paradigm requires placing priors on the model parameters $\bm{\eta} = \big\{ \, ( \bm{\pi}_j, \, \bm{\lambda}_j, \, \bm{\theta}_j ) \, \big\}_{j=1}^{\,K}$.
The generative process of our Bayesian model can be summarized by \begin{equation}
\begin{split}
\bm{\pi}_j &\sim \text{Dir}\, (\bm{\alpha}_0), \qquad (\bm{\theta}_j, \bm{\lambda}_j ) \sim H \times G, \qquad j = 1, \dots, K, \\
z^\star_{\,t} \, | \, z^\star_{\,t-1} &\sim \bm{\phi}_{\,z^\star_{\, t-1}} \\
\bm{y}^\star_{t}
\, | \, z^\star_{\,t} \in A_j \, &\sim \, f \, ( \, \bm{\theta}_{j}) \, \hspace{4.6cm} t = 1, \dots, T,
\end{split}
\label{eq:bayes_model}
\end{equation}
where Dir$(\cdot)$ denotes the Dirichlet distribution over a $(K-2)$ dimensional simplex (since the probability of self transition is forced to be zero) and $\bm{\alpha}_0$ is a vector of positive reals. Here, $H$ and $G$ represent the priors over emission and duration parameters, respectively, and $\bm{\phi}_i$ denotes the $i^{th}$ row of the matrix $\bm{\Phi}$. A graphical model representing the probabilistic structure of our approach is shown in Figure \ref{fig:bayes_graphical_model}, where we remark that the entries of the transition matrix $\bm{\Phi}$ are entirely determined by the transition probabilities of the Markov chain $\bm{\pi}_{j}$ and the values of the durations $p \, (\, d_j = r \, | \, \bm{\lambda}_j)$.
\begin{figure}
\caption{A graphical model for Eq. \eqref{eq:bayes_model}.}
\label{fig:bayes_graphical_model}
\end{figure}
The posterior distribution for $\bm{\eta}$ has the following factorisation.
\begin{equation}
p \, ( \, \bm{\eta} \, | \, \bm{y} ) \propto \mathscr{L}\, ( \bm{y} \, | \, \bm{\eta} ) \, \times \, \bigg[ \, \prod_{j=1}^{K} \, p \, ( \bm{\pi}_j ) \, \times \, p \, ( \bm{\lambda}_j ) \, \times \, p \, ( \bm{\theta}_j) \, \bigg] \, ,
\label{eq:posterior}
\end{equation} where $\mathscr{L}\, ( \, \cdot \, )$ denotes the likelihood of the model, $p \, ( \bm{\pi}_j)$ is the density of the Dirichlet prior for the transition probabilities (Eq. \ref{eq:bayes_model}), and $p \, ( \bm{\lambda}_j)$ and $p \, ( \bm{\theta}_j)$ represent the prior densities for dwell and emission parameters, respectively. Since we have
formulated an \acro{\smaller HMM}, we can employ well-known techniques that are available to compute the likelihood, and in particular we can express it using the following matrix multiplication (see e.g. \citealt{zucchini2017hidden})
\begin{equation}
\mathscr{L}\, ( \bm{y} \, | \bm{\eta} ) = \bm{\pi}_0^{\,\star\,'} \, \bm{P}\,(y_1) \, \bm{\Phi} \, \bm{P}\,(y_2) \, \bm{\Phi} \, \cdots \, \bm{\Phi} \, \bm{P}\,(y_{T-1}) \, \bm{\Phi} \, \bm{P}\,(y_{T}) \, \mathbf{1},
\label{eq:likelik}
\end{equation} where the diagonal matrix $\bm{P}\,(\,y\,)$ of dimension $\bar{A} \times \bar{A}$ is defined as \begin{equation}
\bm{P}\,(\,y\,) = \text{diag} \, \big\{ \, \underbrace{p \,( y \, | \, \bm{\theta}_1), \, \dots,\, p \,( y \, | \, \bm{\theta}_1)}_{a_1 \, \, \text{times}}, \, \dots, \, \underbrace{p \, ( y \, | \, \bm{\theta}_K) \dots p \, ( y \, | \, \bm{\theta}_K)}_{a_K \, \, \text{times}} \big\},
\label{eq:diag_mat}
\end{equation}
and $ p \, ( y \, | \, \bm{\theta}_j)$ is the probability density of the emission distribution $f \, ( \, \bm{\theta}_{j})$. Here, $\mathbf{1}$ denotes an $\bar{A}$-dimensional column vector with all entries equal to one and $\bm{\pi}_0^{\,\star}$ represents the initial distribution for the state aggregates. Note that if we assume that the underlying Markov chain is stationary, $\bm{\pi}_0^{\,\star}$ is solely determined by the transition probabilities $\bm{\Phi}$, i.e. it solves $\bm{\pi}_0^{\,\star\,'} \, = \mathbf{1}' \, ( \bm{I} - \bm{\Phi} + \bm{U})^{\,-1}$, where $\bm{I}$ is the identity matrix and $\bm{U}$ is a square matrix of ones. Alternatively, it is possible to start from a specified state, namely assuming that $\bm{\pi}_0^{\,\star}$ is an appropriate unit vector, e.g. $(1, 0, \dots, 0)$, as suggested by \citet{leroux1992maximum}. We finally note that computation of the likelihood in Eq. \eqref{eq:likelik} is often subject to numerical underflow and hence its practical implementation usually requires appropriate scaling \citep{zucchini2017hidden}.
While a fully Bayesian framework is desirable for its ability to provide coherent uncertainty quantification for parameter values, a perceived drawback of this approach compared with a frequentist analogue is the increased computation required for estimation. Bayesian posterior distributions are only available in closed form under the very restrictive setting when the likelihood and prior are conjugate. Unfortunately, the model outlined in Section \ref{sec:model} does not admit such a conjugate prior form and as a result the corresponding posterior (Eq. \ref{eq:posterior}) is not analytically tractable. However, numerical methods such as Markov Chain Monte Carlo (\acro{\smaller MC}MC) can be employed to sample from this intractable posterior. The last twenty years have seen an explosion of research into \acro{\smaller MC}MC methods and more recently approaches scaling them to high dimensional parameter spaces. The next section outlines one such black box implementation that is used to sample from the posterior in Eq. \eqref{eq:posterior}.
\subsection{Hamiltonian Monte Carlo, No-U-Turn Sampler and Stan Modelling Language} \label{sec:bayes_computation}
One particularly successful posterior sampling algorithm is Hamiltonian Monte Carlo (\acro{\smaller HMC}, \citealt{duane1987hybrid}), where we refer the reader to \citet{neal2011mcmc} for an excellent introduction. \acro{\smaller HMC} augments the parameter space with a `momentum variable' and uses Hamiltonian dynamics to propose new samples. The gradient information contained within the Hamiltonian dynamics allows \acro{\smaller HMC} to produce proposals that can traverse high dimensional spaces more efficiently than standard random walk \acro{\smaller MC}MC algorithms. However, the performance of \acro{\smaller HMC} samplers is dependent on the tuning of the leapfrog discretisation of the Hamiltonian dynamics. The No-U-Turn Sampler (\acro{\smaller NUTS}) \citep{hoffman2014no} circumvents this burden. \acro{\smaller NUTS} uses the Hamiltonian dynamics to construct trajectories that move away from the current value of the sampler until they make a `U-Turn' and start coming back,
thus maximising the trajectory distance. An iterative algorithm allows the trajectories to be constructed both forwards and backwards in time, preserving time reversibility. Combined with a stochastic optimisation of the step size, \acro{\smaller NUTS} is able to conduct efficient sampling without any hand-tuning.
The \textit{stan}{} modelling language \citep{carpenter2016stan} provides a probabilistic programming environment facilitating the easy implementation of \acro{\smaller NUTS}. The user needs only define the three components of their model: (i) the inputs to their sampler, e.g. data and prior hyperparameters; (ii) the outputs, e.g. parameters of interest; (iii) the computation required to calculate the unnormalized posterior. Following this, \textit{stan}{} uses automatic differentiation \citep{griewank2008evaluating} to produce fast and accurate samples from the target posterior. \textit{stan}'s easy-to-use interface and lack of required tuning have seen it implemented in many areas of statistical science.
As well as using \acro{\smaller NUTS} to automatically tune the sampler, \textit{stan}{} is equipped with a variety of warnings and tools to help users diagnose the performance of their sampler. For example, convergence of all quantities of interest is monitored in an automated fashion by comparing variation between and within simulated samples initialized at over-dispersed starting values \citep{gelman2017prior}. Additionally, the structure of the transition matrix $\bm{\Phi}$ allows us to take advantage of \textit{stan}'s sparse matrix implementation to achieve vast computational improvements. Although $\bm{\Phi}$ has dimension $\bar{A} \times \bar{A}$, each row has at most $K$ non-zero terms (representing within state transitions to the next state aggregate or between state transitions),
and as a result only a proportion $(K/\bar{A})$ of the elements of $\bm{\Phi}$ is non-zero.
Hence, for large values of the dwell approximation thresholds $\bm{a}$, the matrix $\bm{\Phi}$ exhibits considerable sparsity. The \textit{stan}{} modelling language implements compressed row storage sparse matrix representation and multiplication, which provides considerable speed up when the sparsity is greater than 90\% \citep[][Ch. 6]{stan2018stan}. In our applied scenario we consider dwell-approximation thresholds as big as $\bm{a} = (250, 50, 50)$ with sparsity of greater than 99\% allowing us to take considerable advantage of this formulation. Finally, we note that our proposed Bayesian approach may suffer from \textit{label switching} \citep{stephens2000dealing} since the likelihood is invariant under permutations of the labels of the hidden states. However, this issue is easily addressed using order constraints provided by \textit{stan}. This strategy worked well in the simulations and applications presented in the paper, without introducing any noticeable bias in the results.
\subsection{Bridge Sampling Estimation of the Marginal Likelihood}
The Bayesian paradigm provides a natural framework for selecting between competing models by means of the marginal likelihood, i.e.
\begin{equation}
p \, (\bm{y}) = \int \mathscr{L}\, ( \bm{y} \, | \, \bm{\eta} ) \, p \, (\bm{\eta}) \, d \bm{\eta}.
\label{eq:marg_likelik}
\end{equation}
The ratio of marginal likelihoods from two different models, often called the \textit{Bayes factor} \citep{kass1995bayes}, can be thought of as the weight of evidence in favor of a model against a competing one. The marginal likelihood in Eq. \ref{eq:marg_likelik} corresponds to the normalizer of the posterior $p \,( \bm{\eta} \, | \, \bm{y} )$ (Eq. \ref{eq:posterior}) and is generally the component that makes the posterior analytically intractable. \acro{\smaller MC}MC algorithms, such as \textit{stan}'s implementation of \acro{\smaller NUTS} introduced above, allow for sampling from the unnormalized posterior, but further work is required to estimate the normalizing constant. Bridge sampling \citep{meng1996simulating,meng2002warp} provides a general procedure for estimating these marginal likelihoods reliably. While standard Monte Carlo (\acro{\smaller MC}) estimates draw samples from a single distribution, bridge sampling formulates an estimate of the marginal likelihood using the ratio of two \acro{\smaller MC} estimates drawn from different distributions: one being the posterior (which has already been sampled from) and the other being an appropriately chosen proposal distribution $q\,(\bm{\eta})$. The bridge sampling estimate of the marginal likelihood is then given by
\begin{equation*}
p \, (\bm{y}) = \frac{\mathbb{E}_{\,q(\bm{\eta})}\left[h(\bm{\eta}) \, \mathscr{L}\, ( \bm{y} \, | \, \bm{\eta} ) \, p \, (\bm{\eta}) \right]}{\mathbb{E}_{\, p(\bm{\eta} | \bm{y})}\left[h(\bm{\eta})\,q(\bm{\eta})\right]} \approx \frac{\frac{1}{n_2}\sum_{j=1}^{n_2}h(\bm{\tilde{\eta}}^{\,(j)}) \, \mathscr{L}\, ( \bm{y} \, | \, \bm{\tilde{\eta}}^{\,(j)}) \, p \, (\bm{\tilde{\eta}}^{\,(j)})}{\frac{1}{n_1}\sum_{i=1}^{n_1}h\,(\bm{\overline{\eta}}^{\,(i)}) \, q(\bm{\overline{\eta}}^{\,(i)})},
\end{equation*}
where $h(\bm{\eta})$ is an appropriately selected \textit{bridge function} and $p (\bm{\eta})$ denotes the joint prior distribution. Here, $\{\bm{\overline{\eta}}^{\,(1)}, \ldots, \bm{\overline{\eta}}^{\,(n_1)}\}$ and $\{\bm{\tilde{\eta}}^{\,(1)}, \ldots, \bm{\tilde{\eta}}^{\,(n_2)}\}$ represent $n_1$ and $n_2$ samples drawn from the posterior $p \,( \bm{\eta} \, | \, \bm{y} )$ and the proposal distribution $q(\bm{\eta})$, respectively. This estimator can be implemented in \texttt{R} using the package \texttt{bridgesampling} \citep{gronau2017bridgesampling}, whose compatibility with \textit{stan}{} makes it particularly straightforward to estimate the marginal likelihood directly from a \textit{stan}{} output. This package implements the method of \cite{meng1996simulating} to choose the optimal bridge function minimising the estimator mean-squared error and constructs a multivariate normal proposal distribution whose mean and variance match those of the sample from the posterior.
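As a toy illustration of the identity above (our own example, using the simple choice $h \equiv 1$, which is a valid though sub-optimal bridge function, rather than the optimal bridge of \cite{meng1996simulating}), consider a conjugate normal model whose marginal likelihood is known in closed form:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
y = rng.normal(0.5, 1.0, size=n)          # data: y_i ~ N(eta, 1)

def norm_logpdf(x, m, s):
    return -0.5 * np.log(2 * np.pi) - np.log(s) - 0.5 * ((x - m) / s) ** 2

def log_lik(eta):
    return norm_logpdf(y[:, None], eta, 1.0).sum(axis=0)

def log_prior(eta):                        # prior: eta ~ N(0, 1)
    return norm_logpdf(eta, 0.0, 1.0)

# closed-form log marginal likelihood for this conjugate model
s, ss = y.sum(), (y ** 2).sum()
true_log_ml = (-0.5 * n * np.log(2 * np.pi) - 0.5 * np.log(n + 1)
               - 0.5 * (ss - s ** 2 / (n + 1)))

# exact posterior draws stand in for MCMC output
post = rng.normal(s / (n + 1), (1.0 / (n + 1)) ** 0.5, size=50_000)
q_m, q_s = post.mean(), post.std()         # moment-matched normal proposal
q = rng.normal(q_m, q_s, size=50_000)

def log_mean_exp(v):
    m = v.max()
    return m + np.log(np.mean(np.exp(v - m)))

# bridge estimate with h = 1:  p(y) ~ E_q[L p] / E_post[q]
est_log_ml = (log_mean_exp(log_lik(q) + log_prior(q))
              - log_mean_exp(norm_logpdf(post, q_m, q_s)))
```

The moment-matched normal proposal mimics the construction of the \texttt{bridgesampling} package; any bridge function $h$ yields a consistent estimator, and the optimal choice merely minimises its variance.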
\subsection{Comparable Dwell Priors} \label{sec:comparable_dwell}
Model selection based on marginal likelihoods can be very sensitive to prior specifications. In fact, Bayes factors are only defined when the marginal likelihood under each competing model is proper \citep{robert2007bayesian, gelman2013bayesian}. As a result, it is important to include any available prior information into the Bayesian modelling in order to use these quantities in a credible manner. Reliably characterising the prior for the dwell distributions is particularly important for the experiments considered in Section \ref{sec:application}, since we use Bayesian marginal likelihoods to select between the dwell distributions associated with \acro{\smaller HSMM}s and \acro{\smaller HMM}s. For instance, if we believe that the length of sleep for an average person is between 7 and 8 hours we would choose a prior that reflects those beliefs in all competing models. However, we need to ensure that we encode this information in \textit{comparable priors} in order to perform `fair' Bayes factor selection amongst a set of dwell-distributions. Our aim is to infer which dwell distribution, and not which prior specification, is most appropriate for the data at hand.
For example, suppose we consider selecting between geometric (i.e. an \acro{\smaller HMM}), negative binomial or Poisson distributions (i.e. an \acro{\smaller HSMM}), to model the dwell durations of our data. While a Poisson random variable, shifted away from zero to consider strictly positive dwells, has its mean $\lambda_j + 1$ and variance $\lambda_j$ described by the same parameter $\lambda_j$, the negative binomial allows for further modelling of the precision through an additional factor $\rho_j$. In both negative binomial and Poisson \acro{\smaller HSMM}s, the parameters $\lambda_j$ are usually assigned a prior $\lambda_j \sim \text{Gamma}\,(a_{0j}, b_{0j})$ with mean $\mathbb{E}\left[\lambda_j\right] = a_{0j}/b_{0j}$ and variance $\textrm{Var} \left[\lambda_j\right] = a_{0j}/b_{0j}^2$. In order to develop an interpretable comparison of all competing models, we parameterize the geometric dwell distribution associated with state $j$ in the standard \acro{\smaller HMM} (Eq. \ref{eq:HMM}) as also being characterized by the mean dwell length $\tau_j = 1/(1-\gamma_{jj})$, where the geometric is also shifted to only consider strictly positive support and $\gamma_{jj}$ represents the probability of self-transition. Under a Dirichlet prior for the state-specific vector of transition probabilities $\bm{\gamma}_j = (\gamma_{j1}, \ldots, \gamma_{jK}) \sim \text{Dirichlet}(\bm{v}_j)$, with $\bm{v}_j = (v_{j1}, \ldots, v_{jK})$ and $\beta_j = \sum_{i\neq j} v_{ji}$, the mean and variance of the prior mean dwell under an \acro{\smaller HMM} are given by
\begin{equation}
\mathbb{E}\left[\tau_{j}\right] = \frac{v_{jj} + \beta_j -1}{\beta_j - 1} \textrm{ and } \textrm{Var}\left[\tau_{j}\right] = \frac{(v_{jj} + \beta_j -1)(v_{jj} + \beta_j - 2)}{(\beta_j - 1)(\beta_j - 2)} - \left(\frac{v_{jj} + \beta_j -1}{\beta_j - 1}\right)^2\nonumber
\end{equation}
for $\beta_j>2$ (the derivation of this result is provided in the Supplementary Material).
We therefore argue that a comparable prior specification requires the hyper-parameters $\{a_{0j}, b_{0j}\}_{j=1}^{K}$ and $\{\bm{v}_j \}_{j=1}^{K}$ to be chosen in a way that satisfies $\mathbb{E}\left[\tau_{j}\right] = \mathbb{E}[\lambda_j + 1]$ and $\textrm{Var}\left[\tau_{j}\right] = \textrm{Var}\left[\lambda_j + 1\right]$, ensuring that the dwell distribution in each state has the same prior mean and variance across models. The prior mean can be interpreted as a best a priori guess for the average dwell time in each state, and the variance reflects the confidence in this prior belief. In addition, since the negative binomial distribution is further parameterized by a dispersion parameter $\rho_j$, we center our prior belief at $\rho_j = 1$, which is the value that recovers geometric dwell durations (namely an \acro{\smaller HMM}) when $\lambda_j = \gamma_{jj}/(1-\gamma_{jj})$. Between-state transition probabilities, i.e. the non-diagonal entries of the transition matrix, as well as the emission parameters, are shared between the \acro{\smaller HMM} and \acro{\smaller HSMM}, and thus we may place a prior specification on these parameters that is common across all models.
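The closed-form moments of $\tau_j$ can be verified numerically; the sketch below (our own illustration) compares them with Monte Carlo draws of $\gamma_{jj}$ from its Beta marginal under the Dirichlet prior:

```python
import numpy as np

def dwell_prior_moments(v_jj, beta_j):
    """Prior mean and variance of tau_j = 1 / (1 - gamma_jj) when
    gamma_jj ~ Beta(v_jj, beta_j), the Dirichlet marginal; needs beta_j > 2."""
    m = (v_jj + beta_j - 1) / (beta_j - 1)
    e2 = ((v_jj + beta_j - 1) * (v_jj + beta_j - 2)
          / ((beta_j - 1) * (beta_j - 2)))
    return m, e2 - m ** 2

v_jj, beta_j = 20.0, 6.0
m, var = dwell_prior_moments(v_jj, beta_j)     # m = 5.0, var = 5.0

# Monte Carlo check against the Beta marginal of the Dirichlet prior
rng = np.random.default_rng(2)
gamma_jj = rng.beta(v_jj, beta_j, size=2_000_000)
tau = 1.0 / (1.0 - gamma_jj)
```

Matching $(m, \textrm{var})$ to the Gamma prior moments $(a_{0j}/b_{0j} + 1,\; a_{0j}/b_{0j}^2)$ then gives two equations in the hyper-parameters, which can be solved for each state $j$.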
\section{A Comparison with Langrock and Zucchini (2011)}
\label{sec:simulation_studies}
This section presents several simulation studies. First, we show that our Bayesian implementation provides point estimates similar to those of the methodology of \cite{langrock2011hidden}, serving as a ``sanity check''. We then illustrate the benefits that adopting a Bayesian paradigm can bring to \acro{\smaller HSMM} modelling.
\subsection{Parameter Estimation}
\label{sec:ill_example}
For our first example, we simulated $T = 200$ data points from a three-state \acro{\smaller HSMM} (Eq. \ref{eq:HSMM}). Conditional on each state $j$, the observations are generated from a $\text{Normal}\left(\mu_j, \sigma_j^2\right)$, and the dwell durations are Poisson$(\lambda_j$) distributed. We consider relatively large values for $\lambda_j$ in order to evaluate the quality of the \acro{\smaller HSMM} approximation provided by Eq. \eqref{eq:bayes_model}.
The full specification is provided in Table \ref{table:parms_and_res} and a realization of this model is shown in Figure \ref{fig:trans_viterbi} \textcolor{blue}{(a, top)}. The dwell approximation thresholds $\bm{a}$ are set equal to $(30, 30, 30)$ and we placed a Gamma$(0.01, 0.01)$ prior on the Poisson rates $\lambda_j$. The transition probabilities $\bm{\pi}_j$ are distributed as $\text{Dirichlet}(1, 1)$ and the priors for the Gaussian emissions are given as $\text{Normal}(0, 10^2)$ and $\text{Inverse-Gamma}(2, 0.5)$ for locations $\mu_j$ and scale $\sigma^{\,2}_j\,$, respectively. Overall, this prior specification is considered weakly informative \citep{gelman2013bayesian, gelman2017prior}.
Table \ref{table:parms_and_res} shows estimation results for our proposed Bayesian methodology as well as the analogous frequentist approach (\acro{\smaller EM}) of \citet{langrock2011hidden}, which will be referred to as LZ-2011.
Figure \ref{fig:trans_viterbi}$\,$(a) displays: (top) a graphical posterior predictive check consisting of the observations alongside 100 draws from the estimated posterior predictive \citep{gelman2013bayesian};
(bottom) the most likely hidden state sequence, i.e. $ \argmax_{\bm{z}} p \, ( \, \bm{z} \, | \, \bm{y}, \, \bm{\eta} \, )$, estimated via the Viterbi algorithm (see e.g. \citealt{zucchini2017hidden}) using plug-in Bayes estimates of the model parameters. In order to assess the goodness of fit of the model, we also verified normality of the pseudo-residuals (see Supplementary Material).
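The Viterbi decoding used for the state sequence can be sketched in a few lines. The following is a generic log-space implementation for a discrete-state model; the inputs (initial distribution, transition matrix, per-observation emission log-densities) are illustrative placeholders rather than the fitted \acro{\smaller HSMM} quantities.

```python
import numpy as np

def viterbi(log_init, log_trans, log_emis):
    """Most likely hidden state path, argmax_z p(z | y, eta).

    log_init  : (K,)   log initial-state probabilities
    log_trans : (K, K) log transition probabilities, row = from-state
    log_emis  : (T, K) log emission densities log p(y_t | z_t = j)
    """
    T, K = log_emis.shape
    delta = log_init + log_emis[0]           # best log-score of paths ending in each state
    back = np.zeros((T, K), dtype=int)       # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_trans  # entry (i, j): best path into i, then i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emis[t]
    path = np.empty(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 2, -1, -1):           # backtrack through the pointers
        path[t] = back[t + 1, path[t + 1]]
    return path
```

For the approximate \acro{\smaller HSMM}, the same recursion applies on the expanded state space of the approximating \acro{\smaller HMM}.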
\begin{table}
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{lcccllcccllccc}
\hline \\[-0.9em]
& True & LZ-2011 & Proposed & & & True & LZ-2011 & Proposed & & & True & LZ-2011 & Proposed \\ [.1em] \cmidrule{2-4} \cmidrule{7-9} \cmidrule{12-14}
$\mu_1$ & 5 & 4.96 & \begin{tabular}[c]{@{}c@{}}4.95\\ \footnotesize(4.66–5.24)\end{tabular} & & $\sigma_3$ & 1 & 1.01 & \begin{tabular}[c]{@{}c@{}}1.08\\ \footnotesize(0.90–1.20)\end{tabular} & & $\pi_{13}$ & 0.70 & 0.50 & \begin{tabular}[c]{@{}c@{}}0.5\\ \footnotesize(0.13–0.87)\end{tabular} \\
$\mu_2$ & 14 & 14.02 & \begin{tabular}[c]{@{}c@{}}14.02\\ \footnotesize(13.67–14.37)\end{tabular} & & $\lambda_1$ & 20 & 23.47 & \begin{tabular}[c]{@{}c@{}}23.36\\ \footnotesize(17.03–30.57)\end{tabular} & & $\pi_{21}$ & 0.20 & 0.00 & \begin{tabular}[c]{@{}c@{}}0.20\\ \footnotesize(0.01–0.53)\end{tabular} \\
$\mu_3$ & 30 & 30.19 & \begin{tabular}[c]{@{}c@{}}30.18\\ \footnotesize(29.98–30.38)\end{tabular} & & $\lambda_2$ & 30 & 27.22 & \begin{tabular}[c]{@{}c@{}}27.05 \\ \footnotesize(22.43–32.19)\end{tabular} & & $\pi_{23}$ & 0.80 & 1.00 & \begin{tabular}[c]{@{}c@{}}0.80\\ \footnotesize(0.47–0.99)\end{tabular} \\
$\sigma_1$ & 1 & 1.09 & \begin{tabular}[c]{@{}c@{}}1.15\\ \footnotesize(0.95–1.40)\end{tabular} & & $\lambda_3$ & 20 & 19.98 & \begin{tabular}[c]{@{}c@{}}20.00\\ \footnotesize(15.93–24.46)\end{tabular} & & $\pi_{31}$ & 0.10 & 0.33 & \begin{tabular}[c]{@{}c@{}}0.40\\ \footnotesize(0.10–0.76)\end{tabular} \\
$\sigma_2$ & 2 & 1.90 & \begin{tabular}[c]{@{}c@{}}1.95\\ \footnotesize(1.73–2.22)\end{tabular} & & $\pi_{12}$ & 0.30 & 0.50 & \begin{tabular}[c]{@{}c@{}}0.50\\ \footnotesize(0.13–0.87)\end{tabular} & & $\pi_{32}$ & 0.90 & 0.67 & \begin{tabular}[c]{@{}c@{}}0.60\\ \footnotesize(0.24–0.90)\end{tabular} \\[.2em] \hline
\end{tabular}
}
\caption{Illustrative Example. True model parameterization and corresponding estimates obtained via the \acro{\smaller EM} algorithm and our proposed Bayesian approach. For the latter, we also report $95\%$ credible intervals estimated from the posterior sample.}
\label{table:parms_and_res}
\end{table}
In general, both methods satisfactorily retrieve the correct pre-fixed duration and emission parameters, and the posterior predictive checks indicate that our posterior sampler is performing adequately. The implementation of \cite{langrock2011hidden} suffers from a lack of regularisation, for example in the estimation of $\pi_{21}$ as 0, and does not currently provide an automatic method to quantify parameter uncertainty. While the approach of \cite{langrock2011hidden} could be augmented by adding regularisation penalties to parameters and producing confidence measures such as standard errors or bootstrap estimates, such features arise automatically in our Bayesian adaptation. Further, the Bayesian approach allows this uncertainty to be incorporated into methods for prediction and model selection, making the Bayesian paradigm appealing for \acro{\smaller HSMM} modelling.
\begin{figure}
\caption{(a, top) A realization (dots) of a three-state \acro{\smaller HSMM} alongside 100 draws from the estimated posterior predictive; (a, bottom) the most likely hidden state sequence estimated via the Viterbi algorithm.}
\label{fig:trans_viterbi}
\end{figure}
\subsection{Forecasting}
A key feature of \acro{\smaller HSMM}s is their ability to capture and forecast when, and for how long, the model will be in a given state. We compare the forecasting properties of the method presented by \citet{langrock2011hidden} and our proposed Bayesian approach. We simulated 20 `un-seen' time series, $\tilde{\bm{y}} = (\tilde{y}_{\,1}, \, \ldots, \tilde{y}_H)$, where $\tilde{y}_{\,h} = y_{\, T +h}, \hspace{0.1cm} h = 1, \dots, H$, and $H = 100, 300, 500$ denotes the forecast horizon, from the model in Table \ref{table:parms_and_res}. We used the logarithmic score (log-score) to measure predictive performance.
Let $\hat{\bm{\eta}}$ be the frequentist (\acro{\smaller MLE}/\acro{\smaller EM}) parameter estimate and define the log-score \begin{equation*}
L_{\,\text{freq}}(\tilde{\bm{y}}) = \sum_{h=1}^{H} -\log p \, (\tilde{y}_h \, | \, \hat{\bm{\eta}} ),
\end{equation*} where $p \,(\tilde{y}_h \, | \, \hat{\bm{\eta}})$ denotes the forecast density function (see Supplementary Material for an explicit expression). Our Bayesian framework does not assume a point estimate $\hat{\bm{\eta}}$ but considers instead a posterior distribution $p \, (\bm{\eta} \, | \, \bm{y})$,
which is integrated over to produce a predictive density.
Given $M$ \acro{\smaller MCMC} samples drawn from the posterior, $\left\lbrace\bm{\eta}^{(i)}\right\rbrace_{i=1}^M \sim p \, (\bm{\eta} \, | \, \bm{y})$, the log-score of the predictive density can be approximated as
\begin{align*}
L_{\,\text{Bayes}}(\tilde{\bm{y}}) = \sum_{h=1}^{H} -\log p \, ( \tilde{y}_h \, | \, \bm{y}) &= \sum_{h=1}^{H} -\log\int p \, (\tilde{y}_{h} \, | \, \bm{\eta} ) \, p \, (\bm{\eta} \, | \, \bm{y}) \, d\bm{\eta} \\
&\approx \sum_{h=1}^{H} -\log\left( \frac{1}{M}\sum_{i = 1}^M p \, (\tilde{y}_h \, | \, \bm{\eta}^{(i)} )\right).
\end{align*}
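This Monte Carlo approximation can be sketched as follows; \texttt{forecast\_logpdf} is a placeholder for the model's forecast density $p(\tilde{y}_h \,|\, \bm{\eta})$, whose explicit form is given in the Supplementary Material.

```python
import numpy as np
from scipy.special import logsumexp

def bayes_log_score(forecast_logpdf, y_tilde, eta_draws):
    """Estimate sum_h -log p(y_h | y) from posterior draws eta_draws.

    forecast_logpdf(y, eta) returns log p(y | eta); it stands in for
    the model's forecast density.
    """
    M = len(eta_draws)
    total = 0.0
    for y_h in y_tilde:
        # log( (1/M) * sum_i p(y_h | eta^(i)) ), computed stably in log space
        log_terms = np.array([forecast_logpdf(y_h, eta) for eta in eta_draws])
        total += -(logsumexp(log_terms) - np.log(M))
    return total
```

The frequentist score $L_{\,\text{freq}}$ is recovered by passing a single draw, the point estimate $\hat{\bm{\eta}}$.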
Figure \ref{fig:boxplots_forecast} presents box-plots of log-scores for LZ-2011 and our proposed Bayesian approach. It is clear that our Bayesian methodology typically produces a much lower predictive log-score than the frequentist procedure. The approach by \citet{langrock2011hidden}, which uses plug-in estimates for parameters, is known to `under-estimate' the true predictive variance, thus yielding large values of the log-score \citep{jewson2018principles}. On the other hand, our Bayesian paradigm integrates over the parameters and hence is able to capture the true forecast distribution more accurately. As a result, it produces significantly smaller log-score estimates.
\begin{figure}
\caption{Boxplots of log-scores for LZ-2011 (via \acro{\smaller EM}) and our proposed Bayesian approach, for forecast horizons $H = 100, 300, 500$.}
\label{fig:boxplots_forecast}
\end{figure}
\subsection{Dwell Distribution Selection}
An important consideration is whether to formulate an $\acro{\smaller HMM}$ or to extend the dwell distribution beyond a geometric one (i.e., an \acro{\smaller HSMM}). Ideally, the data should be used to drive such a decision. In this section, we compare the frequentist methods for doing so, namely Akaike's information criterion (\acro{\smaller AIC}, \citealt{akaike1973information}) and Bayesian information criterion (\acro{\smaller BIC}, \citealt{schwarz1978estimating}), with their Bayesian counterpart, namely the marginal likelihood. We choose not to consider other Bayesian inspired information criteria \citep[e.g.][]{spiegelhalter2002bayesian, watanabe2010asymptotic, gelman2014understanding} as our goal here is to compare standard frequentist methods used previously in the literature to conduct model selection for \acro{\smaller HMM}s and \acro{\smaller HSMM}s \citep[e.g.][]{langrock2011hidden, huang2018hidden} with the canonical Bayesian analog. Although the performance of Bayesian model selection can be sensitive to the specification of the prior, we gave specific consideration to specifying this with model selection in mind in Section \ref{sec:comparable_dwell}.
\subsubsection{Consistency for Nested Models}
A special feature of the negative binomial dwell distribution is that the geometric dwell distribution associated with \acro{\smaller HMM}s is nested within it. Taking $\rho = 1$ for the negative binomial exactly corresponds to the geometric distribution. An important consideration when selecting between nested models is complexity penalization. For the same data set, the more complicated of two nested models will always achieve a higher in-sample likelihood score than the simpler model. Therefore, in order to achieve consistent model selection among nested models, the extra parameters of the more complex models must be penalized.
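The nesting of the geometric within the negative binomial is easy to verify numerically. The sketch below uses scipy's parameterization (number of failures before the $n$-th success), in which dispersion $n = 1$ gives a geometric distribution; the mapping to the $(\lambda_j, \rho_j)$ parameterization used in this paper is not reproduced here.

```python
import numpy as np
from scipy.stats import nbinom, geom

# With size parameter n = 1, the negative binomial pmf
# C(k+n-1, k) p^n (1-p)^k reduces to p (1-p)^k: a geometric
# distribution on {0, 1, 2, ...}. scipy's geom has support
# {1, 2, ...}, hence the shift by one below.
p = 0.3
k = np.arange(0, 20)
assert np.allclose(nbinom.pmf(k, 1, p), geom.pmf(k + 1, p))
```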
In this scenario, the $\acro{\smaller AIC} := -2\,\mathscr{L}\,( \bm{y} \, | \, \bm{\eta} ) + 2 p$, where $p$ denotes the number of parameters included in the model, is known not to provide consistent model selection when the data is generated from the simpler model \citep[see e.g.][]{fujikoshi1985selection}.
On the other hand, performing model selection using the marginal likelihood can be shown to be consistent (see e.g. \citealt{o2004kendall}), provided some weak conditions on the prior are satisfied. Therefore, when following a Bayesian paradigm, the correct data generating model is selected with probability tending to one as $T$ tends to infinity. Here we show that under the approximate \acro{\smaller HSMM} likelihood model, Bayesian model selection appears to maintain these desirable properties.
We simulated 20 time series from a two-state \acro{\smaller HMM} with Gaussian emission parameters $\bm{\mu} = (1, 4)$ and $\bm{\sigma}^2 = (1, 1.5)$, and diagonal entries of the transition matrix set to $(\gamma_{11}, \gamma_{22}) = (0.7, 0.6)$. To model these data we considered the \acro{\smaller HMM} and a \acro{\smaller HSMM} with negative binomial durations. For the \acro{\smaller HSMM} approximation, we considered $\bm{a} = (3,3)$, $(5, 5)$ and $(10, 10)$ in order to investigate how the dwell approximation affects the model selection performance. We use prior distributions that are comparable as explained in Section \ref{sec:comparable_dwell}; the exact prior specifications are presented in the Supplementary Material. Figure \ref{Fig:BFs_AIC_BIC} (top) displays box-plots of the difference between the model selection criteria (namely marginal likelihood and \acro{\smaller AIC}) achieved by the \acro{\smaller HMM} and the \acro{\smaller HSMM}, for increasing sample size $T = 500, 5000, 10000$ and values for $\bm{a}$. We negate the \acro{\smaller AIC} such that maximising both criteria is desirable. Thus, positive values for the difference correspond to correctly selecting the simpler data generating model, i.e. the \acro{\smaller HMM}. As the sample size $T$ increases, the marginal likelihood appears to converge to a positive value, and the variance across repeats decreases, indicating consistent selection of the correct model. On the other hand, even for large $T$ there are still occasions when the \acro{\smaller AIC} strongly favours the incorrect, more complicated model. Further, such performance appears consistent across values of $\bm{a}$.
\begin{figure}
\caption{Differences in model selection criteria between the \acro{\smaller HMM} and \acro{\smaller HSMM} fits: (top) marginal likelihood and (negated) \acro{\smaller AIC} when the data are generated from an \acro{\smaller HMM}; (bottom) marginal likelihood and (negated) \acro{\smaller BIC} when the data are generated from an \acro{\smaller HSMM}.}
\label{Fig:BFs_AIC_BIC}
\end{figure}
\subsubsection{Complexity Penalization}
Unlike the \acro{\smaller AIC}, the $\acro{\smaller BIC} \,:= -2\,\mathscr{L}\,( \bm{y} \, | \, \bm{\eta} ) + p \log T$ penalizes complexity in a manner that depends on the sample size $T$. This is termed `Bayesian' because it corresponds to the Laplace approximation of the marginal likelihood of the data \citep{konishi2008information}, often interpreted as considering a uniform prior for the model parameters \citep{bhat2010derivation, sodhi2010conservation}. Though the uniform distribution may be viewed as naturally uninformative, it is well known that using the marginal likelihood assuming an uninformative prior specification can lead to the selection of the simplest model independently of the data \citep[see e.g.][]{lindley1957statistical, jeffreys1998theory, jennison1997bayesian}. As a result, while \acro{\smaller BIC} can provide consistent selection of nested models, it can punish extra complexity in an excessive manner.
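As a small worked illustration of the two penalties (the log-likelihood value below is a placeholder, not a fitted quantity):

```python
import numpy as np

def aic(loglik, p):
    # Akaike's criterion: penalty of 2 per parameter, independent of T
    return -2.0 * loglik + 2.0 * p

def bic(loglik, p, T):
    # Bayesian information criterion: penalty grows with log(T)
    return -2.0 * loglik + p * np.log(T)

# The BIC penalty per extra parameter exceeds AIC's once log(T) > 2,
# i.e. for any T >= 8; e.g. the two extra dispersion parameters of the
# negative binomial dwell model are punished increasingly hard as T grows.
for T in (200, 1000, 5000):
    assert bic(0.0, 2, T) > aic(0.0, 2)
```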
To investigate how the approximate \acro{\smaller HSMM} likelihood model affects this model selection behaviour, we consider data generated from an \acro{\smaller HSMM} with the same formulation as above except that in this scenario the dwell distribution is a negative binomial parameterized by state-specific parameters $\bm{\lambda} = (3.33, 2.50)$ and $\bm{\rho} = (2, 0.5)$. Note that the data generating \acro{\smaller HSMM} has two more parameters than the \acro{\smaller HMM}. For the \acro{\smaller HSMM} approximation, we consider $\bm{a} = (3,3)$, $(5, 5)$ and $(10, 10)$, where the largest of these provides negligible truncation of the right tail of the dwell distribution given the data generating parameters. Figure \ref{Fig:BFs_AIC_BIC} (bottom) shows box-plots of the difference between the model scores (marginal-likelihood and \acro{\smaller BIC}) across 20 simulated time series when fitting the \acro{\smaller HMM} and \acro{\smaller HSMM}, for increasing sample size $T = 200, 1000, 5000$ and values for $\bm{a}$. We negate the \acro{\smaller BIC} so that the preferred model maximises both criteria. Unlike the experiments described above, the data is now from the less parsimonious \acro{\smaller HSMM} approach and therefore negative values for the difference in score correspond to correctly selecting the more complicated model. For small sample sizes, e.g. $T = 200, 1000$, the complexity penalty of the \acro{\smaller BIC} appears to be too large, so that in almost all of the 20 repeat experiments the simple model is incorrectly favored over the correct data generating model, i.e. the \acro{\smaller HSMM}. On the other hand, the marginal likelihood is able to correctly select the more complicated model across almost all simulations and sample sizes. Although for smaller $\bm{a}$ the \acro{\smaller HSMM} approximation is `closer' to a \acro{\smaller HMM}, we still see that the model selection performance is consistent across the different values of $\bm{a}$.
\section{Approximation Accuracy and Computational Time}
\label{sec:approx_accuracy}
The previous section motivated why the Bayesian paradigm can improve statistical inferences for \acro{\smaller HSMM}s. Next, we investigate the computational feasibility of such an approach and the trade-off between computational efficiency and statistical accuracy achieved by our Bayesian approximate \acro{\smaller HSMM} implementation. In particular, we compare our Bayesian approximate \acro{\smaller HSMM} method for different values of the threshold $\bm{a}$ with a Bayesian implementation of the exact \acro{\smaller HSMM}, while also illustrating the computational savings made by our sparse matrix implementation. For the exact \acro{\smaller HSMM}, the full forward recursion is used to evaluate the likelihood (see e.g. \citealt{guedon2003estimating} or \citealt{economou2014mcmc}). In order to provide a fair comparison, we also coded the forward recursion outlined in \citet{economou2014mcmc} in \textit{Stan}. We then compare the computational resources required to sample from the approximate and exact \acro{\smaller HSMM} posteriors against the accuracy of the posterior mean parameter estimates with respect to their data generating values.
We generate $T = 5000$ observations from two different \acro{\smaller HSMM}s with Poisson durations (both with $K = 5$ states and the same Gaussian emission distributions). For the two datasets, we consider the following dwell parameters: (i) \textit{short dwells}, i.e. $\bm{\lambda} = (2, 5, 8, 1, 4)$, where the average time spent in each state is fairly small, and (ii) \textit{one long dwell}, i.e. $\bm{\lambda} = (2, 5, 25, 1, 4)$, where four states have short average dwell times and one has a much longer average dwell time.
We also consider two approximation thresholds: $\bm{a}_1 = (10, 10, 10, 10, 10)$, namely a fixed approximation threshold for all five states, and $\bm{a}_2 = (10, 10, 30, 10, 10)$, a `hybrid' model where four of the states have short dwell thresholds and one has a longer threshold.
The emission parameters were set to $\bm{\mu} = (1, 2, 3.5, 6, 10)$ and $\bm{\sigma}^2 = (1^2, 0.5^2, 0.75^2, 1.5^2, 2.5^2)$, and we specify priors $\mu_j \sim \mathcal{N}(0, 10^2)$, $\sigma^2_j \sim \mathcal{IG}(2, 0.5)$, $\lambda_j\sim\mathcal{G}(0.01, 0.01)$ and $\gamma_j \sim \mathcal{D}(1, \ldots, 1)$ for $j = 1,\ldots, 5$.
The results are presented in Table \ref{Tab:Computation_vs_Accuracy}. Across both datasets and approximation thresholds, the sparse implementation takes less than half the time of the non-sparse implementation, with the saving greater when the dwell thresholds are larger (and the matrix $\bm{\Phi}$, Eq. \eqref{eq:diag_mat}, is sparser). Furthermore, the \acro{\smaller HSMM} approximations are considerably faster than the full \acro{\smaller HSMM} implementation. For the \textit{short dwell} dataset the full \acro{\smaller HSMM} takes close to 3.5 days while the sparse implementations of the \acro{\smaller HSMM} approximation both require less than 2 hours. Similarly, for the \textit{one long dwell} dataset, the full \acro{\smaller HSMM} takes over 4 days to run while again the sparse \acro{\smaller HSMM} approximations require around 2 hours. The quoted Effective Sample Size (\acro{\smaller ESS}, e.g. \citealt{gelman2013bayesian}) values are calculated using the \textit{LaplacesDemon} package in R and are averaged across parameters. These show that the \acro{\smaller ESS} of all the generated samples is close to 1000 and thus the time comparisons are indeed fair. Further, we expect the difference to become starker as the number of observations $T$ increases: while the approximate \acro{\smaller HSMM} scales linearly in $T$ and quadratically in $\sum_{j=1}^Ka_j$, the full \acro{\smaller HSMM} in the worst case is quadratic in $T$ \citep{langrock2011hidden}.
Lastly, we see that the savings in computation time come at very little cost in statistical accuracy. We measure the statistical accuracy of the vector-valued estimate $\hat{\bm{\theta}}$ of $\bm{\theta}^{\ast}$ using its mean squared error (\acro{\smaller MSE} $= \sum_{j=1}^K(\hat{\theta}_j - \theta^{\ast}_j)^2$). All methods achieve almost identical \acro{\smaller MSE} values for the emission parameters $\bm{\mu}$ and $\bm{\sigma}^2$. For the \textit{short dwell} data, the $\bm{a}_1$ approximation has slightly higher \acro{\smaller MSE} for $\bm{\lambda}$ while the $\bm{a}_2$ approximation performed comparably to the exact \acro{\smaller HSMM}. Clearly, increasing the approximation threshold improves statistical accuracy. On the other hand, the \textit{one long dwell} data show that if the dwell threshold is set too low, as is the case with $\bm{a}_1$, large errors in the dwell estimation can be made.
However, in this example the higher dwell approximation $\bm{a}_2$ once again performs comparably with the full \acro{\smaller HSMM}, whilst requiring only $2\%$ of the computational time.
\begin{table}[ht]
\centering
\begin{tabular}{lccccc}
\hspace{-0.2cm}\\
\hline
\textit{Short dwells} & Time (hours) & \acro{\smaller ESS} & \multicolumn{3}{c}{\acro{\smaller MSE}}\\
& & & $\mu$ & $\sigma^2$ & $\lambda$ \\
\hline
Approx: $\bm{a_1}$ & 2.62 & 986.50 & 3.72 $\times 10^{-2}$ & 2.88 $\times 10^{-3}$ & 0.25 \\
Approx ({\footnotesize{SPARSE}}): $\bm{a_1}$ & 1.30 & 975.20 & 3.84 $\times 10^{-2}$ & 3.06 $\times 10^{-3}$ & 0.26 \\
Approx: $\bm{a_2}$ & 3.94 & 961.76 & 3.84 $\times 10^{-2}$ & 3.05 $\times 10^{-3}$ & 0.17 \\
Approx ({\footnotesize{SPARSE}}): $\bm{a_2}$ & 1.78 & 978.82 & 3.96 $\times 10^{-2}$ & 3.01 $\times 10^{-3}$ & 0.18 \\
Exact & 81.15 & 933.28 & 4.02 $\times 10^{-2}$ & 3.24 $\times 10^{-3}$ & 0.19 \\
\hline
\hline
\textit{One long dwell} & & & & &\\
\hline
Approx: $\bm{a_1}$ & 3.33 & 984.51 & 1.76 $\times 10^{-2}$ & 4.68 $\times 10^{-2}$ & 128.50 \\
Approx ({\footnotesize{SPARSE}}): $\bm{a_1}$ & 1.78 & 981.90 & 1.73 $\times 10^{-2}$ & 4.84 $\times 10^{-2}$ & 128.51 \\
Approx: $\bm{a_2}$ & 5.08 & 993.89 & 1.50 $\times 10^{-2}$ & 4.84 $\times 10^{-2}$ & 1.25 \\
Approx ({\footnotesize{SPARSE}}): $\bm{a_2}$ & 2.21 & 983.47 & 1.51 $\times 10^{-2}$ & 4.82 $\times 10^{-2}$ & 1.25 \\
Exact & 101.35 & 980.59 & 1.65 $\times 10^{-2}$ & 4.66 $\times 10^{-2}$ & 1.12 \\
\hline
\end{tabular}
\caption{Computational time (hours), effective sample size (\acro{\smaller ESS}) and mean squared error (\acro{\smaller MSE}) of posterior mean parameters, reported for the approximate \acro{\smaller HSMM} under different dwell approximations $\bm{a}$ (with the corresponding sparse implementation) and for the exact \acro{\smaller HSMM} implementation.}
\label{Tab:Computation_vs_Accuracy}
\end{table}
\subsection{Setting the Dwell Threshold}\label{sub:setting_dwell}
The results of Section \ref{sec:approx_accuracy} indicate that while vast computational savings are possible using the approximate \acro{\smaller HSMM} likelihood, care must be taken not to set the dwell approximation threshold $\bm{a}$ too low. We propose initialising $\bm{a}$ based on the prior distribution for the dwell times $d_j$, $\pi(d_j) = \int \pi(d_j; \lambda)\pi(\lambda)d\lambda$. Noting that any dwell time $d_j < \bm{a}_j$ is not approximated, we recommend initialising $\tilde{\bm{a}}$ such that $d_j \leq \tilde{\bm{a}}_j$ with high probability for all $j = 1,\ldots, K$.
Such an initialisation, however, does not guarantee the accuracy of the \acro{\smaller HSMM} modelling, particularly in the absence of informative prior beliefs.
We therefore propose a diagnostic method to check that $\tilde{\bm{a}}$ is not too small.
\begin{enumerate}[noitemsep]
\item Initialise $\tilde{\bm{a}}$ and conduct inference on the observed data. Record posterior mean parameter estimates $\hat{\bm{\eta}}_{obs}(\tilde{\bm{a}})$
\item Generate data $\tilde{y}_{gen}$ from an exact \acro{\smaller HSMM} with generating parameters $\hat{\bm{\eta}}_{obs}(\tilde{\bm{a}})$. Note that generation from an exact \acro{\smaller HSMM} is easier than inference on its parameters
\item Continuing with $\tilde{\bm{a}}$, conduct inference on the generated data and record posterior mean parameter estimates $\hat{\bm{\eta}}_{gen}(\tilde{\bm{a}})$
\item Compare dwell distribution parameters $\hat{\lambda}_{obs}(\tilde{\bm{a}})$ and $\hat{\lambda}_{gen}(\tilde{\bm{a}})$
\end{enumerate}
The estimates $\hat{\lambda}_{obs}(\tilde{\bm{a}})$ provide the best-guess estimate of the parameters of the \acro{\smaller HSMM} underlying the data for fixed $\tilde{\bm{a}}$. Generating from the exact \acro{\smaller HSMM} given by these estimates allows us to verify the accuracy of the proposed model. If these estimates are not accurate, then little confidence can be had that $\hat{\lambda}_{obs}(\tilde{\bm{a}})$ accurately represents the dwell distribution of the underlying \acro{\smaller HSMM}. If $\hat{\lambda}_{gen}(\tilde{\bm{a}})_j$ is not considered a satisfactory estimate of $\hat{\lambda}_{obs}(\tilde{\bm{a}})_j$, then $\tilde{\bm{a}}_j$ must be increased. Conveniently, this can be done for each state $j$ independently. Further, if $\hat{\lambda}_{gen}(\tilde{\bm{a}})_j$ is considered accurate enough, then it is also possible to decrease $\tilde{\bm{a}}_j$ based on the inferred dwell distribution. Although the above procedure requires fitting the model several times, we believe the computational savings of our model when compared with the exact \acro{\smaller HSMM} inference demonstrated in Table \ref{Tab:Computation_vs_Accuracy} render this worthwhile.
This procedure is implemented to set the dwell-approximation threshold for the physical activity time series analysed in the next section.
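The four-step diagnostic above can be sketched as follows, with hypothetical \texttt{fit} and \texttt{generate} helpers standing in for the approximate-\acro{\smaller HSMM} posterior sampler and the exact \acro{\smaller HSMM} simulator respectively:

```python
def check_dwell_threshold(y_obs, a_tilde, fit, generate, tol=0.1):
    """Diagnostic that the dwell threshold a_tilde is not too small.

    fit(y, a)     -> posterior-mean dwell parameters, one per state
                     (stand-in for the approximate-HSMM sampler)
    generate(lam) -> data simulated from an exact HSMM with dwell
                     parameters lam (generation is cheap; inference is not)
    Returns, per state, whether lam_gen recovers lam_obs to within
    relative tolerance tol; states failing the check need a larger a_j.
    """
    lam_obs = fit(y_obs, a_tilde)           # step 1: fit observed data
    y_gen = generate(lam_obs)               # step 2: simulate from exact HSMM
    lam_gen = fit(y_gen, a_tilde)           # step 3: fit generated data
    return [abs(g - o) / abs(o) <= tol      # step 4: compare per state
            for g, o in zip(lam_gen, lam_obs)]
```

The tolerance and the per-state accept/reject rule are illustrative; in practice the comparison in step 4 is a judgment about whether $\hat{\lambda}_{gen}$ is a satisfactory estimate of $\hat{\lambda}_{obs}$.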
\section{Telemetric Activity Data}
\label{sec:application}
In this section, we return to the physical activity (\acro{\smaller PA}) time series that \citet{huang2018hidden} analysed using a frequentist \acro{\smaller HMM}.
We seek to conduct a similar study but within a Bayesian framework, and consider the extra flexibility afforded by our proposed methodology to investigate departures from the \acro{\smaller HMM}. Further, in Section \ref{sec:harmonic_emissions} we consider the inclusion of spectral information within the \acro{\smaller HMM} and \acro{\smaller HSMM} emission densities.
We consider three-state \acro{\smaller HSMM}s with Poisson$\,(\lambda_j)$ and Neg-Binomial$\,(\lambda_j, \rho_j)$ dwell durations, shifted to have strictly non-negative support and approximated via thresholds $\bm{a}_{P} = (160, 40, 25)$ and $\bm{a}_{NB} = (250, 50, 50)$ respectively. These are fitted to the square root of the \acro{\smaller PA} time series shown in Figure \ref{fig:physical_activity}, wherein we assume that transformed observations are generated from Normal$(\,\mu_j, \sigma_j^{\,2})$ distributions, as in \citet{huang2018hidden}. We specified $K = 3$ states, in agreement with the findings of \citet{migueles2017accelerometer} and \citet{huang2018hidden}, who collected results from more than forty experiments on \acro{\smaller PA} time series. In their studies, for each individual the lowest level of activity corresponds to the sleeping period, which usually happens during the night, while the other two phases are mostly associated with movements happening in the daytime. Henceforth, these different telemetric activities are represented as inactive (\acro{\smaller IA}), moderately active (\acro{\smaller MA}) and highly active (\acro{\smaller HA}) states. The setting of $\bm{a}$ followed the iterative process outlined in Section \ref{sub:setting_dwell}, initialising $\tilde{a}_j$ so that the prior probability that $d_j < \tilde{a}_j$ is 0.9.
This choice also reflects a trade-off between accurately capturing the states with which we have considerable prior information, i.e. \acro{\smaller IA}, whilst improving the computational efficiency of the other states over a standard \acro{\smaller HSMM} formulation.
We assume that the night rest period of a healthy individual is generally between 7 and 8 hours. The parameter of the dwell duration of the \acro{\smaller IA} state, $\lambda_{\,\acro{\smaller IA}}$, is hence assigned a Gamma prior with hyperparameters that reflect mean 90 (i.e. $7.5\times 12$) and variance 36 (i.e. $[0.5\times 12]^{2}$); the latter was chosen to account for some variability amongst people.
Since we do not have significant prior knowledge on how long people spend in the \acro{\smaller MA} and \acro{\smaller HA} states, we assigned $\lambda_{\,\acro{\smaller MA}}$ and $\lambda_{\,\acro{\smaller HA}}$ Gamma priors with mean 24 (i.e. 2 hours) and variance 324 (i.e. $[1.5\times 12]^2$) to reflect a higher degree of uncertainty.
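The Gamma hyperparameters implied by these mean and variance targets follow from moment matching under the shape-rate parameterization, for which $\mathbb{E} = \alpha/\beta$ and $\textrm{Var} = \alpha/\beta^2$; a minimal sketch:

```python
def gamma_hyperparams(mean, var):
    """Shape-rate Gamma(alpha, beta) hyperparameters matching a
    target prior mean and variance."""
    beta = mean / var       # rate: mean / variance
    alpha = mean * beta     # shape: mean^2 / variance
    return alpha, beta

# Inactive-state dwell prior: mean 90 (7.5 hours at 12 obs/hour), variance 36
print(gamma_hyperparams(90.0, 36.0))   # -> (225.0, 2.5)
```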
Transition probabilities from state $\acro{\smaller IA}$, $\bm{\pi}_{\acro{\smaller IA}}$, are specified as Dirichlet with equal prior probability of switching to any of the active states \acro{\smaller MA} or \acro{\smaller HA}. On the other hand, active states usually alternate between each other more frequently than with \acro{\smaller IA} \citep{huang2018hidden}, and therefore
we set the prior for $\bm{\pi}_{\acro{\smaller MA}}$ so that transitions from \acro{\smaller MA} to \acro{\smaller HA} are four times more likely
than switching from \acro{\smaller MA} to \acro{\smaller IA} (a similar argument can be made for $\bm{\pi}_{\acro{\smaller HA}}$). Finally, the inverse of dispersion parameters $\rho_j^{-1}$ were given Gamma$\,(2, 2)$ priors, and the parameters of the Gaussian emissions were assigned $\mu_j \sim$ Normal$\,(\bar{y}, 4)$ and $\sigma^{\,2}_j\, \sim$ Inverse-Gamma$\,(2, 0.5)$, where $\bar{y}$ denotes the sample mean.
For each proposed model our Bayesian procedure is run for 6,000
iterations, 1,000 of which are discarded as burn-in. Firstly, we consider selecting which of the competing dwell distributions, i.e. the geometric dwell characterising the \acro{\smaller HMM} and the Poisson and negative binomial \acro{\smaller HSMM} extensions, is most supported by the observed data. As explained in Section \ref{sec:comparable_dwell}, we specified hyperparameters for these competing models so that the corresponding priors match the means and variances of the informative prior specification given above. In order to measure the gain of including available prior knowledge into the model, we also investigated
a weakly informative prior setting (as in Section \ref{sec:ill_example}). Table \ref{table:casestudy_comparison} displays the bridge sampling estimates of the marginal likelihood for the different models and posterior means of the corresponding dwell parameters. It is clear that integrating into the model available prior information improves performance greatly. In addition, modelling dwell durations as either negative binomial or geometric
provides a better approximation to the data
compared to a Poisson model.
Furthermore, the Bayes factor $18.36$ (i.e. $\exp\{-1632.42 + 1635.33\}$) suggests that there is \textit{strong evidence} \citep{kass1995bayes} in favour of the \acro{\smaller HSMM} with negative binomial durations in comparison to a standard \acro{\smaller HMM}.
This is also reflected by the estimated posterior means of the parameters $\rho_j$, which differ from one, hence showing some departure from geometric dwell durations. These `dispersion' parameters are smaller than one for the \acro{\smaller IA} and \acro{\smaller MA} states, indicating a larger fitted variance of the dwell times under the negative binomial \acro{\smaller HSMM} than the geometric \acro{\smaller HMM}. Combined with their estimated means, this may explain the improved performance of the negative binomial dwell model over the \acro{\smaller HMM}. The increased variance allows the model to better capture the short transitions to the \acro{\smaller IA} state seen in the fitted model (Figure \ref{fig:fit_caseStudy}). This also explains why the Poisson \acro{\smaller HSMM} performs poorly for this dataset; the fitted Poisson dwell distribution for the \acro{\smaller IA} state can be seen to have a much smaller variance than the geometric and negative binomial alternatives. Plots comparing the posterior predictive dwell time for the \acro{\smaller IA}, \acro{\smaller MA}, and \acro{\smaller HA} states estimated under the three proposed dwell distributions are provided in the Supplementary Material. Future work could consider more complex dwell distributions to reflect the different patterns of human sleep. For example, a natural extension to the results presented here could be to look at whether a two-component mixture distribution (e.g. Poisson) can aid in better capturing the short excursions to the \acro{\smaller IA} state seen in Figure \ref{fig:fit_caseStudy}. In the Supplementary Material, we have further investigated the different state classifications provided by the optimal proposed model (using negative binomial durations) with respect to Poisson and geometric dwells.
Posterior means of the emission parameters were $y_t |_{\acro{\smaller IA}} \sim$ Normal(0.93, 0.47), $y_t |_{\acro{\smaller MA}} \sim$ Normal(3.17, 1.28) and $y_t |_{\acro{\smaller HA}} \sim$ Normal(5.38, 0.54).
The $\acro{\smaller IA}$ state naturally corresponds to the state with the lowest mean activity and the \acro{\smaller MA} state appears to have largest variance in activity levels. Posterior means of the dwell parameters in Table \ref{table:casestudy_comparison} show that this individual sleeps an average of 7 and a half hours per night.
\color{black}
In Figure \ref{fig:transitions_caseStudy}, we display posterior histograms of the transition probabilities between different states. There appear to be high chances of switching between active states, since the posterior means for $\pi_{\acro{\smaller HA} \rightarrow \acro{\smaller MA}}$ and $\pi_{\acro{\smaller MA} \rightarrow \acro{\smaller HA}}$ are close to one, though the latter exhibits larger variance. Additionally, the posterior probability of transitioning from \acro{\smaller HA} to \acro{\smaller IA} is very close to zero, which is reasonable considering that it is very unlikely that an individual would go to sleep straight after having performed intense physical activity. Figure \ref{fig:fit_caseStudy} shows the transformed time series as well as simulated data from the predictive distribution, and the estimated hidden state sequence using the Viterbi algorithm. It can be seen that the \acro{\smaller IA} state occurs during the night whereas days are characterized by many switches between the \acro{\smaller MA} and \acro{\smaller HA} states. Our results are in agreement with \citet{huang2018hidden}.
\begin{table}[htbp]
\resizebox{\columnwidth}{!}{
\begin{tabular}{lccccccc}
\hline \\[-0.9em]
& log-marg lik & $\lambda_{\,\acro{\smaller IA}}$ & $\lambda_{\,\acro{\smaller MA}}$ & $\lambda_{\,\acro{\smaller HA}}$ & $\rho_{\,\acro{\smaller IA}}$ & $\rho_{\,\acro{\smaller MA}}$ & $\rho_{\,\acro{\smaller HA}}$ \\ [.1em] \cmidrule{2-8}
Poisson$\ssymbol{2}$ & $-1751.02$ & \begin{tabular}[c]{@{}c@{}}88.32\\ \footnotesize(86.28–89.28)\end{tabular} & \begin{tabular}[c]{@{}c@{}}34.79\\ \footnotesize(29.08–43.02)\end{tabular} & \begin{tabular}[c]{@{}c@{}}18.55\\ \footnotesize(14.45–22.47)\end{tabular} & - & - & - \\[1.0em]
Geometric$\ssymbol{2}$ & $-1653.67$ & \begin{tabular}[c]{@{}c@{}}45.57\\ \footnotesize(26.97–74.42)\end{tabular} & \begin{tabular}[c]{@{}c@{}}10.53\\ \footnotesize(7.49–14.53)\end{tabular} & \begin{tabular}[c]{@{}c@{}}8.60\\ \footnotesize(6.13–11.94)\end{tabular} & - & - & - \\[1.0em]
Neg-Binom$\ssymbol{2}$ & $-1649.00$ & \begin{tabular}[c]{@{}c@{}}46.25\\ \footnotesize(21.12–88.14)\end{tabular} & \begin{tabular}[c]{@{}c@{}}10.46\\ \footnotesize(6.22–16.94)\end{tabular} & \begin{tabular}[c]{@{}c@{}}8.37\\ \footnotesize(5.44–12.31)\end{tabular} & \begin{tabular}[c]{@{}c@{}}0.61\\ \footnotesize(0.29–1.08)\end{tabular} & \begin{tabular}[c]{@{}c@{}}0.61\\ \footnotesize(0.33–0.98)\end{tabular} & \begin{tabular}[c]{@{}c@{}}1.22\\ \footnotesize(0.60–2.26)\end{tabular} \\[1.0em]
Poisson & $-1732.16$ & \begin{tabular}[c]{@{}c@{}}88.39\\ \footnotesize(86.65–89.27)\end{tabular} & \begin{tabular}[c]{@{}c@{}}33.61\\ \footnotesize(28.76–40.55)\end{tabular} & \begin{tabular}[c]{@{}c@{}}17.98\\ \footnotesize(14.35–22.04)\end{tabular} & - & - & - \\[1.0em]
Geometric & $-1635.33$ & \begin{tabular}[c]{@{}c@{}}88.72\\ \footnotesize(79.63–98.70)\end{tabular} & \begin{tabular}[c]{@{}c@{}}13.42\\ \footnotesize(9.49–18.68)\end{tabular} & \begin{tabular}[c]{@{}c@{}}10.97\\ \footnotesize(7.91–15.05)\end{tabular} & - & - & - \\[1.0em]
Neg-Binom & $\bm{-1632.42}$ & \begin{tabular}[c]{@{}c@{}}87.97\\ \footnotesize(78.40–97.75)\end{tabular} & \begin{tabular}[c]{@{}c@{}}12.07\\ \footnotesize(7.33–18.84)\end{tabular} & \begin{tabular}[c]{@{}c@{}}9.12\\ \footnotesize(5.99–13.19)\end{tabular} & \begin{tabular}[c]{@{}c@{}}0.67\\ \footnotesize(0.33–1.19)\end{tabular} & \begin{tabular}[c]{@{}c@{}}0.71\\ \footnotesize(0.36–1.15)\end{tabular} & \begin{tabular}[c]{@{}c@{}}1.25\\ \footnotesize(0.60–2.22)\end{tabular} \\ [1em] \hline
\end{tabular}
}
\caption{Telemetric activity data. Log-marginal likelihood for different dwell distributions (Poisson, geometric and negative binomial), where the superscript $\ssymbol{2}$ denotes a weakly informative prior specification. Geometric durations are characterized by their mean dwell length $\lambda_j = 1/(1 - \gamma_{jj})$, where $\gamma_{jj}$ represents the probability of self-transition. Estimated posterior means of the dwell parameters are reported with 90\% credible intervals estimated from the posterior sample.}
\label{table:casestudy_comparison}
\color{black}
\end{table}
\begin{figure}
\caption{Square root of the \acro{\smaller PA} time series, simulated data from the posterior predictive distribution, and the estimated hidden state sequence obtained via the Viterbi algorithm.}
\label{fig:fit_caseStudy}
\end{figure}
\begin{figure}
\caption{Estimated posterior density histograms of the transition probabilities between the \acro{\smaller IA}, \acro{\smaller MA}, and \acro{\smaller HA} states.}
\label{fig:transitions_caseStudy}
\end{figure}
\subsection{Harmonic Emissions}
\label{sec:harmonic_emissions}
\citet{huang2018hidden} further extended the standard Gaussian \acro{\smaller HMM} for the \acro{\smaller PA} recordings by allowing the state transition dynamics to depend on the body's circadian periodicity (24 hours).
\color{black}
In a similar vein, we investigate the inclusion of spectral information within the emission density, and study how this affects the \acro{\smaller HMM} and \acro{\smaller HSMM} models considered in the previous section. Specifically, we consider that the observations are generated from state-specific \textit{harmonic emissions} of the form $y_t \, | \, z_{\,t} = j \, \sim \, \mathcal{N} \, ( \, \mu_j (t), \sigma^2_j)$, with oscillatory mean defined as
\begin{equation}
\mu_j (t) = \beta^{(0)}_j + \beta^{(1)}_j \cos (2\pi\hat{\omega} t) + \beta^{(2)}_j \sin (2\pi\hat{\omega} t).
\label{eq:harmonic_emission}
\end{equation}
This emission density is hence expressed as a sum of a sine and a cosine (weighted by the linear coefficients $\beta_{j}^{(1)}$ and $\beta_{j}^{(2)}$) oscillating at frequency $\hat{\omega}$, plus a state-specific intercept $\beta_{j}^{(0)}$. While \citet{huang2018hidden} choose a priori the 24-hour periodicity included in the basis function, in our study we estimate this directly from the data. The next section describes our approach for identifying the frequency $\hat{\omega}$ driving the overall variation in the \acro{\smaller PA} time series.
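To make the harmonic emission concrete, the following sketch evaluates $\mu_j(t)$ and simulates one observation from the state-specific Gaussian emission. The numerical values plugged in below are purely illustrative, taken from the posterior means reported later for the \acro{\smaller IA} state and the estimated frequency $\hat{\omega} = 0.003453$; they are not part of the model definition itself.

```python
import math
import random

def harmonic_mean(t, beta0, beta1, beta2, omega):
    # Oscillatory state-specific mean mu_j(t) = beta0 + beta1*cos + beta2*sin
    return beta0 + beta1*math.cos(2.0*math.pi*omega*t) \
                 + beta2*math.sin(2.0*math.pi*omega*t)

def simulate_emission(t, beta0, beta1, beta2, omega, sigma, rng=random):
    # y_t | z_t = j  ~  N(mu_j(t), sigma_j^2)
    return rng.gauss(harmonic_mean(t, beta0, beta1, beta2, omega), sigma)

# Illustrative values (posterior means for the IA state, omega-hat = 0.003453)
mu_start = harmonic_mean(0, 1.36, 0.04, -0.60, 0.003453)  # at t=0: beta0 + beta1 = 1.40
```

At $t = 0$ the sine term vanishes, so the mean reduces to $\beta^{(0)}_j + \beta^{(1)}_j$; as $t$ grows the mean oscillates around $\beta^{(0)}_j$ with period $1/\hat{\omega}$.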
\subsubsection{Identifying the Periodicity}
We define $\hat{\omega}$ as the posterior mean of the frequency $\omega$ under the periodic model in Eq. \eqref{eq:periodic_model} defined below,
i.e. $ \hat{\omega} := \mathbb{E} \, ( \omega \, |
\, \bm{y}, \bm{\beta}, \sigma^2)$, with $\bm{\beta} = (\beta^{(1)}, \beta^{(2)})$.
In this preliminary step to the proposed model with harmonic emissions (Eq. \ref{eq:harmonic_emission}), we first assume the data to be generated by the following stationary periodic process
\begin{equation} y_t = \beta^{(1)} \cos (2\pi \omega t) + \beta^{(2)} \sin (2\pi\omega t) + \varepsilon_t, \quad \varepsilon_t \sim \mathcal{N}(0, \sigma^2_{\omega}), \qquad t=1, \dots, T,
\label{eq:periodic_model}
\end{equation} for which we developed a Metropolis-within-Gibbs sampler to obtain samples from the posterior distribution of the frequency
\begin{equation} \label{posterior_omega}
p \, (\omega \, | \, \bm{\beta}, \, \sigma^2, \, \bm{y} ) \propto \exp \Bigg[ -\dfrac{1}{2\sigma^2} \sum_{ t } \Big\{ y_t - \beta^{(1)} \cos (2\pi \omega t) - \beta^{(2)} \sin (2\pi\omega t) \, \Big\}^{2} \Bigg] \mathbbm{1}_{\big[ \, \omega \, \in \, (0, \, \phi_{\omega}) \big]},
\end{equation}
where $\phi_\omega$ is a pre-specified upper bound for the frequency and may be chosen to reflect prior information about the value of $\omega$, for example focusing only on low frequencies (e.g. $\phi_{\omega} = 0.1$). Full details of the sampling scheme and our prior choice are provided in the Supplementary Material. This algorithm is similar to the within-model move of the ``segment model'' presented in \citet{hadj2019bayesian, hadj2020spectral},
but with the number of frequencies fixed at one.
\begin{figure}
\caption{\textcolor{black}{(a) Trace plot (after burn-in) of the posterior sample of the frequency $\omega$, with the posterior mean $\hat{\omega}$ highlighted in red. (b) Draws from the posterior predictive distribution of the stationary periodic model and the posterior mean of the oscillatory signal.}}
\label{fig:example_oscillatory}
\end{figure}
We ran the sampler for 5000 iterations using software written in Julia 1.6 which took around 3 seconds on an Intel\textsuperscript{\textregistered} Core\textsuperscript{TM} i5 2 GHz Processor with 16 GB RAM. Figure \ref{fig:example_oscillatory} (a) shows the trace plot (after burn-in) of the posterior sample of the frequency where the acceptance rate (28\%) was roughly tuned to be optimal \citep{roberts2001optimal}. We also highlight in red the posterior mean $\hat{\omega} = 0.003453$. In Figure \ref{fig:example_oscillatory} (b) we display 20 draws from the posterior predictive distribution of the stationary periodic model and the posterior mean of the oscillatory signal. This shows that the model predictions appear to capture some of the structure of the \acro{\smaller PA} time series. However, there also appears to be temporal structure not captured by the global circadian harmonic. As a result, in the next section we will use the global $\hat{\omega} = 0.003453$ as the circadian covariate for the emissions of the harmonic \acro{\smaller HMM} and \acro{\smaller HSMM} (Eq. \ref{eq:harmonic_emission}), allowing the harmonic parameters $(\beta_j^{(0)}, \beta_j^{(1)}, \beta_j^{(2)})$ to vary by state in order to better capture the temporal structure.
\subsubsection{Results}
Given the point estimate for $\hat{\omega} = 0.003453$, we then applied the \acro{\smaller HMM} and \acro{\smaller HSMM} approximations with Poisson and negative binomial dwells to the \acro{\smaller PA} time series (using $K = 3$ states). Our prior specification follows the discussion in Section \ref{sec:application} for $\sigma^2_j$, $\lambda_j$, $\gamma_j$ and $\rho_j$, where appropriate, while the intercept of the harmonic mean model $\beta_j^{(0)}$ is given the same prior as $\mu_j$ from the standard Gaussian emission model. The additional parameters of the harmonic model $\beta_j^{(1)}$ and $\beta_j^{(2)}$ are both assumed a priori $\mathcal{N}(0, 2^2)$.
Table \ref{table:casestudy_comparison_harmonic} (top) provides the log-marginal likelihoods of the different models and posterior mean estimates of the parameters of their dwell distributions, along with the 90\% credible intervals from their posteriors. It is clear that the marginal likelihood favours the negative binomial dwell distribution, with the standard \acro{\smaller HMM} (geometric dwell) being the next most favourable. Further, when comparing Table \ref{table:casestudy_comparison_harmonic} with Table \ref{table:casestudy_comparison}, we see that the inclusion of harmonic emissions increases the log-marginal likelihood by between 6 and 7 for all dwell distributions, thus supporting its integration in our model.
\begin{table}[htbp]
\resizebox{\columnwidth}{!}{
\begin{tabular}{lccccccc}
\hline \\[-0.9em]
& log-marg lik & $\lambda_{\,\acro{\smaller IA}}$ & $\lambda_{\,\acro{\smaller MA}}$ & $\lambda_{\,\acro{\smaller HA}}$ & $\rho_{\,\acro{\smaller IA}}$ & $\rho_{\,\acro{\smaller MA}}$ & $\rho_{\,\acro{\smaller HA}}$ \\ [.1em] \cmidrule{2-8}
Poisson & $-1727.24$ & \begin{tabular}[c]{@{}c@{}}88.29\\ \footnotesize(86.42–89.25)\end{tabular} & \begin{tabular}[c]{@{}c@{}}44.68\\ \footnotesize(41.80–47.57)\end{tabular} & \begin{tabular}[c]{@{}c@{}}21.62\\ \footnotesize(18.52–25.05)\end{tabular} & - & - & - \\[1.0em]
Geometric & $-1629.40$ & \begin{tabular}[c]{@{}c@{}}88.24\\ \footnotesize(79.08–98.05)\end{tabular} & \begin{tabular}[c]{@{}c@{}}15.53\\ \footnotesize(10.19–23.08)\end{tabular} & \begin{tabular}[c]{@{}c@{}}12.15\\ \footnotesize(8.36–17.61)\end{tabular} & - & - & - \\[1.0em]
Neg-Binom & $\bm{-1625.61}$ & \begin{tabular}[c]{@{}c@{}}87.54\\ \footnotesize(77.66–97.34)\end{tabular} & \begin{tabular}[c]{@{}c@{}}14.32\\ \footnotesize(7.82–24.02)\end{tabular} & \begin{tabular}[c]{@{}c@{}}10.67\\ \footnotesize(6.44–16.20)\end{tabular} & \begin{tabular}[c]{@{}c@{}}0.65\\ \footnotesize(0.31–1.17)\end{tabular} & \begin{tabular}[c]{@{}c@{}}0.64\\ \footnotesize(0.33–1.13)\end{tabular} & \begin{tabular}[c]{@{}c@{}}1.7\\ \footnotesize(0.63–3.88)\end{tabular} \\ [1em] \hline
\end{tabular}
}\\ [.1em]
\resizebox{\columnwidth}{!}{
\begin{tabular}{lccccccccc}
\hline \\[-0.9em]
& \multicolumn{3}{c}{\acro{\smaller IA}}\ & \multicolumn{3}{c}{\acro{\smaller MA}} & \multicolumn{3}{c}{\acro{\smaller HA}}
\\ [.1em] \hline \\[-0.9em]
& $\beta^{(0)}$ & $\beta^{(1)}$ & $\beta^{(2)}$ & $\beta^{(0)}$ & $\beta^{(1)}$ & $\beta^{(2)}$ & $\beta^{(0)}$ & $\beta^{(1)}$ & $\beta^{(2)}$
\\ [.1em] \hline \\[-0.9em]
Gaussian & \begin{tabular}[c]{@{}c@{}}0.93\\ \footnotesize(0.88-0.98)\end{tabular} & - & - & \begin{tabular}[c]{@{}c@{}}3.18\\ \footnotesize(3.03-3.33)\end{tabular} & - & - & \begin{tabular}[c]{@{}c@{}}5.38\\ \footnotesize(5.27-5.51)\end{tabular} & - & -\\[1.0em] \\
Harmonic & \begin{tabular}[c]{@{}c@{}}1.36\\ \footnotesize(1.26-1.46)\end{tabular} & \begin{tabular}[c]{@{}c@{}}0.04\\ \footnotesize(-0.05-0.13)\end{tabular} & \begin{tabular}[c]{@{}c@{}}-0.60\\ \footnotesize(-0.72- -0.47)\end{tabular} & \begin{tabular}[c]{@{}c@{}}3.32\\ \footnotesize(2.94-3.65)\end{tabular} & \begin{tabular}[c]{@{}c@{}}-0.11\\ \footnotesize(-0.34-0.13)\end{tabular} & \begin{tabular}[c]{@{}c@{}}-0.24\\ \footnotesize(-0.69-0.17)\end{tabular} & \begin{tabular}[c]{@{}c@{}}5.46\\ \footnotesize(5.32-5.60)\end{tabular} & \begin{tabular}[c]{@{}c@{}}0.20\\ \footnotesize(0.07-0.33)\end{tabular} & \begin{tabular}[c]{@{}c@{}}-0.23\\ \footnotesize(-0.61-0.16)\end{tabular}\\ [1em] \hline
\end{tabular}
}
\caption{\textcolor{black}{Telemetric activity data with harmonic emissions. (Top) Log-marginal likelihood for different dwell durations (i.e. Poisson, geometric and negative binomial). Geometric durations are characterized by their mean dwell length $\lambda_j = 1/(1 - \gamma_{jj})$ where $\gamma_{jj}$ represents the probability of self-transition. (Bottom) Parameters of the mean of the Gaussian and harmonic emission distributions under the selected negative binomial dwell distribution. Estimated posterior means of the parameters are reported with a 90\% credible intervals estimated from the posterior sample.}}
\label{table:casestudy_comparison_harmonic}
\color{black}
\end{table}
Following the selection of the negative binomial dwell distribution for both the standard Gaussian and harmonic emission models, Table \ref{table:casestudy_comparison_harmonic} (bottom) provides the posterior mean values for the parameters of these emission distributions, along with the 90\% credible intervals from the posterior. These results show that even with a global estimate for the periodicity, there are differences between the estimated parameters in each state, supporting the combination of the periodic time-series model with a hidden state model. Furthermore, there are clear differences between the estimated emissions of the harmonic model compared with the estimated Gaussian emissions in the standard model (where $\beta^{(0)}_j = \mu_j$ and $\beta^{(1)}_j$ and $\beta^{(2)}_j$ were both 0). In particular, the intercept $\beta^{(0)}_{\,\acro{\smaller IA}}$ in the \acro{\smaller IA} state differs non-negligibly when using the harmonic model instead of the standard Gaussian, as do $\beta^{(2)}_{\,\acro{\smaller IA}}$ and $\beta^{(1)}_{\,\acro{\smaller HA}}$, whose 90\% credible intervals do not cover 0. This all supports the selection of the harmonic model over the standard Gaussian emissions.
\section{Concluding Summaries}
We presented a Bayesian model for analyzing time series data based on an \acro{\smaller HSMM} formulation with the goal of analyzing physical activity data collected from wearable sensing devices. We facilitate the computational feasibility of Bayesian inference for \acro{\smaller HSMM}s via the likelihood approximation introduced by \citet{langrock2011hidden}, in which a special structure of the transition matrix is embedded to model the state duration distributions. We utilize the \textit{stan}{} modeling language and deploy a sparse matrix formulation to further leverage the efficiency of the approximate likelihood. We showed the advantages of choosing a Bayesian paradigm over its frequentist counterpart in terms of incorporation of prior information, quantification of uncertainty, model selection, and forecasting. We additionally demonstrated the ability of the \acro{\smaller HSMM} approximation to drastically reduce the computational burden of the Bayesian inference (for example reducing the time for inference on $T = 5000$ observations from $>3$ days to $<2$ hours), whilst incurring negligible statistical error. The proposed approach allows for the efficient implementation of highly flexible and interpretable models that incorporate available prior information on state durations. An avenue not explored in the current paper is how our model compares to particle filtering methods. For example, a referee suggested that an algorithm sampling the filtering distribution using an adaptation of the sequential Monte Carlo (SMC) sampler of \cite{yildirim2013online} inside one of the two particle \acro{\smaller MC}MC algorithms of \cite{whiteley2009particle} could prove competitive for \acro{\smaller HSMM} inference. Further work could define, implement and compare such an approach to ours.
The analysis of physical activity data demonstrated that our model was able to learn the probabilistic dynamics governing the transitions between different activity patterns during the day as well as characterizing the sleep duration overnight. We were also able to illustrate the flexibility of the proposed model by adding harmonic covariates to the emission distribution, extending further the analysis of \cite{huang2018hidden}. Future work will investigate the further inclusion of covariates into these time series models as well as computationally and statistically efficient approaches for conducting variable selection among these \citep{george1993variable, rossell2017nonlocal}. We will also consider extending our methodology to account for higher-dimensional multivariate time series, where computational tractability is further challenging.
\color{black}
\section{Supplementary Material}
Section \ref{sec:mean_variance_dwell} provides the derivations of the mean and variance of the mean dwell time of an \acro{\smaller HMM}. Sections \ref{sec:forecast} and \ref{sec:pseudo} include the form of the forecast function and graphs of pseudo-residuals that we utilized in our experiments. In Section \ref{sec:state_classification} we provide further results about our \acro{\smaller PA} data application by comparing different predictive distributions of the state durations as well as investigating state classification. Section \ref{sec:periodicity} illustrates the details of our Metropolis-within-Gibbs sampler to obtain posterior samples of the frequency. Code that implements the methodology is available as online supplemental material (see also \url{https://github.com/Beniamino92/BayesianApproxHSMM}).
\subsection{Mean and Variance of the Mean Dwell Time in an HMM} \label{sec:mean_variance_dwell}
Here we provide the derivations of the mean and variance of the mean dwell time of an \acro{\smaller HMM} as explained in Section \textcolor{blue}{3.1} of the main paper. Consider a standard \acro{\smaller HMM} (Eq. \textcolor{blue}{(1.1)} in the manuscript) with transition probabilities $\{ \bm{\gamma}_j \}_{j=1}^{K}$. Let us assume that $\bm{\gamma}_{j} = (\gamma_{j1}, \ldots, \gamma_{jK})\sim \textrm{Dirichlet}\left(v_{j1}, \ldots, v_{jK}\right)$ and thus, marginally, $\gamma_{jj}\sim \textrm{Beta}(v_{j}, \beta_j)$, where $v_j := v_{jj}$ and $\beta_j := \sum_{i\neq j} v_{ji}$. The dwell duration in any state follows a geometric distribution with failure probability $1-\gamma_{jj}$, and hence the mean dwell time is given by $\tau_j := \frac{1}{1-\gamma_{jj}}$. As a result, the first and second moments of the mean dwell time of an \acro{\smaller HMM} in state $j$ are given by \begin{align}
\mathbb{E}\left[\tau_j\right]& = \int_{0}^{1} \frac{1}{1-\gamma_{jj}}\frac{\Gamma\left(v_j + \beta_j\right)}{\Gamma\left(v_j\right)\Gamma\left(\beta_j\right)}\left(\gamma_{jj}\right)^{v_j-1}\left(1-\gamma_{jj}\right)^{\beta_j-1}d\gamma_{jj}\nonumber\\
&= \int_{0}^{1} \frac{\Gamma\left(v_j + \beta_j\right)}{\Gamma\left(v_j\right)\Gamma\left(\beta_j\right)}\left(\gamma_{jj}\right)^{v_j-1}\left(1-\gamma_{jj}\right)^{\beta_j - 1 - 1}d\gamma_{jj}\nonumber\\
&= \frac{\Gamma\left(v_j + \beta_j\right)\Gamma\left(\beta_j - 1\right)}{\Gamma\left(v_j + \beta_j - 1\right)\Gamma\left(\beta_j\right)}\int_{0}^{1} \frac{\Gamma\left(v_j + \beta_j - 1\right)}{\Gamma\left(v_j\right)\Gamma\left(\beta_j - 1\right)}\left(\gamma_{jj}\right)^{v_j-1}\left(1-\gamma_{jj}\right)^{\beta_j - 1 -1 }d\gamma_{jj}\nonumber\\
&= \frac{v_j + \beta_j -1}{\beta_j - 1}
\end{align}
\begin{align}
\mathbb{E}\left[\tau_j^2\right]& = \int_{0}^{1} \frac{1}{\left(1-\gamma_{jj}\right)^2}\frac{\Gamma\left(v_j + \beta_j\right)}{\Gamma\left(v_j\right)\Gamma\left(\beta_j\right)}\left(\gamma_{jj}\right)^{v_j-1}\left(1-\gamma_{jj}\right)^{\beta_j-1}d\gamma_{jj}\nonumber\\
&= \int_{0}^{1} \frac{\Gamma\left(v_j + \beta_j\right)}{\Gamma\left(v_j\right)\Gamma\left(\beta_j\right)}\left(\gamma_{jj}\right)^{v_j-1}\left(1-\gamma_{jj}\right)^{\beta_j - 2 -1}d\gamma_{jj}\nonumber\\
&= \frac{\Gamma\left(v_j + \beta_j\right)\Gamma\left(\beta_j - 2\right)}{\Gamma\left(v_j + \beta_j - 2\right)\Gamma\left(\beta_j\right)}\int_{0}^{1} \frac{\Gamma\left(v_j + \beta_j - 2\right)}{\Gamma\left(v_j\right)\Gamma\left(\beta_j - 2\right)}\left(\gamma_{jj}\right)^{v_j-1}\left(1-\gamma_{jj}\right)^{\beta_j - 2 -1}d\gamma_{jj}\nonumber\\
&= \frac{(v_j + \beta_j -1)(v_j + \beta_j - 2)}{(\beta_j - 1)(\beta_j - 2)}\\
\text{Var}\,[\tau_j] &= \frac{(v_j + \beta_j -1)(v_j + \beta_j - 2)}{(\beta_j - 1)(\beta_j - 2)} - \left(\frac{v_j + \beta_j -1}{\beta_j - 1}\right)^2
\end{align}
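The closed-form mean above can be sanity-checked numerically by sampling $\gamma_{jj}$ from its Beta marginal and averaging $\tau_j = 1/(1-\gamma_{jj})$. The hyperparameters $v_j = 2$, $\beta_j = 4$ below are illustrative choices (so that $\mathbb{E}[\tau_j] = 5/3$), not values used in the paper.

```python
import random

def mc_mean_dwell(v_j, b_j, n=200_000, seed=1):
    # Monte Carlo estimate of E[tau_j] with gamma_jj ~ Beta(v_j, b_j)
    # and tau_j = 1 / (1 - gamma_jj)
    rng = random.Random(seed)
    return sum(1.0 / (1.0 - rng.betavariate(v_j, b_j)) for _ in range(n)) / n

v_j, b_j = 2.0, 4.0
closed_form = (v_j + b_j - 1.0) / (b_j - 1.0)  # = 5/3 from the derivation above
```

The Monte Carlo average agrees with the closed form to within simulation error; note that the formula requires $\beta_j > 1$ (and $\beta_j > 2$ for a finite variance), matching the Gamma-function arguments in the derivation.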
\subsection{Forecast Density Function} \label{sec:forecast}
Here, we provide the explicit form of the forecasting density $p \,(\tilde{y}_h \, | \, \hat{\bm{\eta}})$ that we used to evaluate predictive performances on a test set $\tilde{\bm{y}} = (\tilde{y}_{\,1}, \, \ldots, \tilde{y}_H)$, with $\tilde{y}_{\,h} = y_{\, T +h}, \hspace{0.1cm} h = 1, \dots, H$, and $H \in \mathbb{N}_{>0}$ denoting the forecast horizon. As in \citet{zucchini2017hidden}, we express the forecast distribution in the following form
\begin{equation*}
p \,(\tilde{y}_h \, | \, \hat{\bm{\eta}}) = \bm{\xi}^{'} \, \bm{\Phi}^{\,h} \, \bm{P}( \tilde{y}_h) \, \bm{1},
\end{equation*} where \begin{equation*}
\bm{\xi} = \dfrac{\bm{\alpha}(T)}{\mathscr{L}\, ( \bm{y} \, | \, \bm{\eta})},
\end{equation*}
and \begin{equation}
\mathscr{L}\, ( \bm{y} \, | \, \bm{\eta}) = \bm{\alpha}(T)^{'} \bm{1}.
\end{equation} Here, $\bm{P}( y) $ and $\mathscr{L}\, ( \bm{y} \, | \, \bm{\eta})$ are defined as in Section \textcolor{blue}{3} of the manuscript, and $\bm{1}$ is an $\bar{A}$-dimensional column vector of ones. The vector of forward-messages $\bm{\alpha}(t) = (\alpha_{1t}, \dots, \alpha_{\bar{A}t})$ can be computed recursively as
\begin{equation*}
\bm{\alpha}(t+1) = \bm{\alpha}(t) \, \bm{\Phi} \, \bm{P}( y_{t+1}), \quad t = 1, \dots, T-1.
\end{equation*}
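The forecast density and forward recursion can be sketched directly. The toy two-state Gaussian chain below uses illustrative (not fitted) parameters and the standard filtering indexing $\bm{\alpha}(t+1) = \bm{\alpha}(t)\,\bm{\Phi}\,\bm{P}(y_{t+1})$; it is a minimal sketch, not the paper's implementation.

```python
import math

def normal_pdf(y, mu, sd):
    # Density of N(mu, sd^2)
    return math.exp(-0.5*((y - mu)/sd)**2) / (sd*math.sqrt(2.0*math.pi))

# Toy 2-state chain with Gaussian emissions (illustrative values)
Phi = [[0.9, 0.1], [0.2, 0.8]]
mus, sds = [0.0, 3.0], [1.0, 1.0]

def forward(y_seq, init=(0.5, 0.5)):
    # Unnormalised forward messages alpha(t)
    alpha = [init[j]*normal_pdf(y_seq[0], mus[j], sds[j]) for j in range(2)]
    for y in y_seq[1:]:
        alpha = [sum(alpha[i]*Phi[i][j] for i in range(2))*normal_pdf(y, mus[j], sds[j])
                 for j in range(2)]
    return alpha

def forecast_density(y_seq, y_new, h=1):
    # p(y_{T+h} | y_{1:T}) = xi' Phi^h P(y_new) 1,  xi = alpha(T) / likelihood
    alpha = forward(y_seq)
    lik = sum(alpha)                     # L(y | eta) = alpha(T)' 1
    xi = [a/lik for a in alpha]
    for _ in range(h):                   # propagate h steps: xi' Phi^h
        xi = [sum(xi[i]*Phi[i][j] for i in range(2)) for j in range(2)]
    return sum(xi[j]*normal_pdf(y_new, mus[j], sds[j]) for j in range(2))

d = forecast_density([0.1, -0.2, 0.3], 0.0)  # one-step-ahead density at y = 0
```

Because the observed sequence lies near the first state's mean, the forecast mixture puts most weight on that state, and the density at $y = 0$ is correspondingly large.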
\subsection{Normal Pseudo-Residuals} \label{sec:pseudo}
In order to assess the general goodness of fit of the models that we used in our experiments, we also investigated graphs of normal pseudo-residuals. Following \citet{zucchini2017hidden}, the normal pseudo-residuals are defined as
\begin{equation}
r_t = \Psi^{-1} \big[ \, p \, ( Y_{t} < y_t \, | \, \bm{y}^{(-t)},\, \bm{\eta} ) \, \big],
\label{eq:psuedo_res}
\end{equation}
where $\Psi$ is the cumulative distribution function of the standard normal distribution and the vector $\bm{y}^{(-t)} = (y_1, \dots, y_{t-1}, y_{t+1}, \dots, y_{T})$ denotes all observations excluding $y_t$. If the model is accurate, $r_t$ is a realization of a standard normal random variable. We provide below index plots of the normal pseudo-residuals, their histograms and quantile-quantile (Q-Q) plots, for our main experiments.
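Computing a pseudo-residual therefore requires only the conditional predictive CDF value and the standard normal quantile function; a minimal Python sketch (using the standard library's `statistics.NormalDist` for $\Psi^{-1}$):

```python
from statistics import NormalDist

def pseudo_residual(p):
    # r_t = Psi^{-1}(p), where p = P(Y_t < y_t | y^(-t), eta) is the
    # conditional predictive CDF evaluated at the observation
    return NormalDist().inv_cdf(p)

# Under a well-specified model the p-values are approximately Uniform(0, 1),
# so the resulting r_t behave like standard normal draws.
r_median = pseudo_residual(0.5)   # -> 0.0
```

Observations in the tails of the predictive distribution map to large $|r_t|$, which is why index plots, histograms, and Q-Q plots of the $r_t$ are informative goodness-of-fit diagnostics.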
\begin{figure}
\caption{Illustrative Example. Pseudo residuals: time series, histogram and Q-Q plot.}
\label{fig:ps_ill_ex}
\end{figure}
\begin{figure}
\caption{Telemetric Activity Data. Pseudo residuals: time series, histogram and Q-Q plot.}
\label{fig:ps_casestudy}
\end{figure}
\subsection{Telemetric Activity Data: Further Results} \label{sec:state_classification}
\subsubsection{Estimated Dwell Distribution}
Table 3 of the main paper provides point estimates for the parameters of the geometric (\acro{\smaller HMM}), Poisson and negative binomial dwell distributions of each of the \acro{\smaller IA}, \acro{\smaller MA} and \acro{\smaller HA} states. Figure \ref{fig:casestudy_dwells} here further plots the estimated posterior predictive distributions for the dwell length in each state. The estimated Poisson dwell distributions differ greatly from the geometric and negative binomial alternatives, in particular characterizing a much smaller variance in dwell times. The geometric and negative binomial provide more similar estimates of the dwell distribution, but notably for the \acro{\smaller IA} and \acro{\smaller MA} states the negative binomial assigns a larger probability to very short dwell times.
\begin{figure}
\caption{Posterior predictive distribution for dwell time in the \acro{\smaller IA}, \acro{\smaller MA}, and \acro{\smaller HA} states under the Poisson, geometric and negative binomial dwell distributions.}
\label{fig:casestudy_dwells}
\end{figure}
\color{black}
\subsubsection{State Classification}
We have further investigated the different state classifications provided by the optimal proposed model (using negative binomial durations) compared with the Poisson and geometric dwell distributions. Given the posterior means of the parameters of each dwell distribution, we estimated the most likely state sequence (using the Viterbi algorithm) and compared them using the confusion matrices presented in Table 1 below.
From a state classification perspective, the negative binomial and geometric dwell durations perform similarly, whereas choosing a Poisson duration yields significantly different results. This demonstrates the importance of estimating the dwell distribution in a data-driven manner, as different specifications of the dwell distribution can lead to vastly different inferential conclusions for objects of scientific interest. The values of the marginal likelihood, reported in Table 3 of the manuscript, suggest that there is evidence in favour of the \acro{\smaller HSMM} with negative binomial durations in comparison to a standard \acro{\smaller HMM}, and to a substantially greater extent with respect to a Poisson dwell for this applied scenario.
\begin{table}[htbp]
\centering
\begin{tabular}{clcll}
\hline \\[-0.9em]
& & \multicolumn{3}{c}{Neg-Binom} \\
\multicolumn{1}{l}{} & & \multicolumn{1}{l}{\acro{\smaller IA}} & \acro{\smaller MA} & \acro{\smaller HA} \\ \cmidrule{3-5}
Poisson & \acro{\smaller IA} & 458 & 5 & 0 \\
& \acro{\smaller MA} & 40 & 382 & 66 \\
& \acro{\smaller HA} & 0 & 1 & 198 \\[.2em] \hline
\end{tabular}
\hspace{2em}
\begin{tabular}{clcll}
\hline \\[-0.9em]
& & \multicolumn{3}{c}{Neg-Binom} \\
\multicolumn{1}{l}{} & & \multicolumn{1}{l}{\acro{\smaller IA}} & \acro{\smaller MA} & \acro{\smaller HA} \\ \cmidrule{3-5}
Geometric & \acro{\smaller IA} & 498 & 0 & 0 \\
& \acro{\smaller MA} & 0 & 385 & 0 \\
& \acro{\smaller HA} & 0 & 3 & 264 \\[.2em] \hline
\end{tabular}
\caption{State predictions summarized by the confusion matrix resulting from choosing different durations: (left) Poisson and negative binomial; (right) geometric and negative binomial.}
\end{table}
\subsection{Identifying the Periodicity}
\label{sec:periodicity}
We describe the details of our Metropolis-within-Gibbs sampler to obtain posterior samples of the frequency $\omega$, the linear basis coefficients $\bm{\beta}$, and the residual variance $\sigma^2$, under the periodic model Eq. (6.2) of the main article. This sampling scheme follows closely the within-model move of the “segment model” introduced in \citet{hadj2019bayesian, hadj2020spectral}, with the difference that in this case the number of frequencies is fixed to one. For our prior specification, we choose a uniform prior for the frequency $ \omega \sim \text{Uniform}(0, 0.1)$ and isotropic Gaussian prior for the vector of linear coefficients $\bm{\beta} = (\beta^{(1)}, \beta^{(2)})$ $ \sim \mathcal{\bm{N}}_{2} (\, \bm{0}, \, \sigma^2_{\beta} \, \bm{I}\, )$, where the prior variance $\sigma^2_\beta$ is fixed at 5. The prior on the residual variance $\sigma^2$ is specified as
$\text{Inverse-Gamma} \, \big(\frac{\xi_0}{2}, \frac{\tau_0}{2}\big)$, where $\xi_0 = 4$ and $\tau_0 = 1$.
For sampling the frequency, the proposal distribution is a combination of a Normal random walk centered around the current frequency
and a sample from the periodogram, namely
\begin{equation}
\label{mixture_proposal_freq}
q \, ( \, \omega^{\, p} \, | \, \omega^{\, c} \,) = \pi_{\omega} \, q_1 \, (\, \omega^{\, p} \, | \, \omega^{\, c} \,) + (1 - \pi_{\omega} ) \, q_2 \, (\, \omega^{\, p} \, | \, \omega^{\, c} \,)
\end{equation} where $q_1$ is defined in Eq. \eqref{q1_freq} below, $q_2$ is the density of a Normal $\mathcal{N}\, (\omega^{\, c}, \sigma^2_{\omega})$, $\pi_{\omega}$ is a positive value such that $ 0 \leq \pi_{\omega} \leq 1$, and the superscripts $c$ and $p$ refer to current and proposed values, respectively. For our experiments, we set $\sigma^2_{{\omega} } = 1/(25T)$ and $\pi_{\omega} = 0.1$. Eq. \eqref{mixture_proposal_freq} states that an M-H step with proposal distribution $q_1 \, (\, \omega^{\, p} \, | \, \omega^{\, c} \,)$ \begin{equation} \label{q1_freq}
q_1 \, (\, \omega^{\, p} \, | \, \omega^{\, c} \,) \propto \sum_{h \, = \, 0}^{T - 1} I_h \, \mathbbm{1}_{\big[ \, h/T \, \, \leq \, \, \omega^{\, p} \, < \, \, (h+1)/T \, \big] \, },
\end{equation} is performed with probability $\pi_{\omega}$, where $I_h$ is the value of the periodogram, namely the squared modulus of the Discrete Fourier transform evaluated at frequency $h/T$
$$ I_h = \dfrac{1}{T} \Big| \, \sum_{t=1}^{T} y_t \, \exp{\Big(-i \, 2 \pi \, \frac{h}{T} \, t \Big)}\, \Big|^{\, 2}, \quad h = 0, \dots, T-1.$$ The acceptance probability for this move is
\begin{equation*}
\alpha = \min \Bigg\{1, \dfrac{p \, (\omega^{\, p} \, | \, \bm{\beta}, \, \sigma^2, \, \bm{y}) }{p \, (\omega^{\,c} \, | \, \bm{\beta}, \, \sigma^2, \, \bm{y}) } \times \dfrac{q_1 \, (\, \omega^{\, c} \, )}{q_1 \, (\, \omega^{\, p} \, )} \Bigg\}.
\end{equation*}
On the other hand, with probability $1 - \pi_{\omega}$, we perform a random-walk M-H step with proposal distribution $q_2 \, (\, \omega^{\, p} \, | \, \omega^{\, c} \,)$, whose density is Normal with mean $\omega^{\, c}$ and variance $\sigma^2_{{\omega} }$, i.e.
$\omega^{\, p} \, | \, \omega^{\, c} \, \sim \mathcal{N}(\,\omega^{\, c}, \, \sigma^2_{{\omega} }\,)$. This move is accepted with probability
\begin{equation*}
\alpha = \min \Bigg\{1, \dfrac{p \, (\omega^{\,p} \, | \, \bm{\beta}, \, \sigma^2, \, \bm{y}) }{p \, (\omega^{\, c} \, | \, \bm{\beta}, \, \sigma^2, \, \bm{y}) } \Bigg\}.
\end{equation*}
Next, we update the vector of linear coefficients $\bm{\beta}$ and the residual variance $\sigma^{\,2}$ following the usual normal Bayesian regression setting \citep{gelman2013bayesian}.
Hence, $\bm{\beta}$ is updated in a Gibbs step from
\begin{equation} \label{posterior_beta}
\bm{\beta} \, \big| \, \omega, \, \sigma^2, \, \bm{y} \sim \bm{\mathcal{N}}_{2} \, (\, \hat{\bm{\beta}}, \, \bm{V}_{\beta}),
\end{equation} where \begin{equation}
\begin{split}
\bm{V}_{\beta} &= \bigg( \sigma^{-2}_\beta \, \bm{I} + \sigma^{-2} \bm{X}(\omega)^{\,'} \bm{X}(\omega) \bigg)^{-1}, \\
\hat{\bm{\beta}} &= \bm{V}_{\beta} \, \big( \sigma^{-2} \bm{X}(\omega)^{\,'} \bm{y} \big),
\end{split}
\end{equation} and we denote with $\bm{X}(\omega)$ the matrix with rows given by $\bm{x}_t \, \big( \omega \big) = [\cos(2\pi\omega t), \, \sin (2\pi \omega t)]$ for $t = 1, \dots, T$.
Finally, $\sigma^{\,2}$ is drawn in a Gibbs step directly from
\begin{equation}
\label{inverse_gamma}
\sigma^2 \, \big| \, \bm{\beta}, \, \omega, \, \bm{y} \sim \text{Inverse-Gamma} \, \Bigg( \, \dfrac{T + \xi_0}{2}, \, \dfrac{\tau_0 + \sum_{t=1}^{T} \Big\{ \, y_t - \bm{x}_t \, \big( \, \omega \, \big)^{\, '} \, \bm{\beta} \, \Big\}^{2}}{2} \Bigg).
\end{equation}
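The full Metropolis-within-Gibbs cycle described in this section can be sketched end-to-end. The code below is an illustrative, standard-library-only Python implementation (not the authors' Julia software), with the hyperparameters fixed as stated above ($\sigma^2_\beta = 5$, $\xi_0 = 4$, $\tau_0 = 1$, $\phi_\omega = 0.1$, $\pi_\omega = 0.1$, $\sigma^2_\omega = 1/(25T)$) and a synthetic series generated at a known frequency.

```python
import cmath
import math
import random

rng = random.Random(0)

def basis(omega, t):
    # x_t(omega) = [cos(2*pi*omega*t), sin(2*pi*omega*t)]
    return (math.cos(2*math.pi*omega*t), math.sin(2*math.pi*omega*t))

def log_post_omega(omega, beta, sig2, y, phi_max=0.1):
    # Unnormalised log posterior of the frequency: Gaussian residual term
    # restricted to the prior support (0, phi_max)
    if not 0.0 < omega < phi_max:
        return -math.inf
    rss = 0.0
    for t in range(1, len(y) + 1):
        c, s = basis(omega, t)
        rss += (y[t - 1] - beta[0]*c - beta[1]*s)**2
    return -rss / (2.0*sig2)

def periodogram(y):
    # I_h = |DFT(y)(h/T)|^2 / T for h = 0, ..., T-1
    T = len(y)
    return [abs(sum(y[t - 1]*cmath.exp(-2j*math.pi*h*t/T)
                    for t in range(1, T + 1)))**2 / T for h in range(T)]

def update_omega(om, beta, sig2, y, I, pi_om=0.1):
    # M-H step with the mixture proposal: periodogram draw (q1) w.p. pi_om,
    # otherwise a symmetric random walk (q2)
    T = len(y)
    if rng.random() < pi_om:
        h = rng.choices(range(T), weights=I)[0]
        prop = rng.uniform(h/T, (h + 1)/T)
        q_log = math.log((I[min(int(om*T), T - 1)] + 1e-300)
                         / (I[min(int(prop*T), T - 1)] + 1e-300))
    else:
        prop = rng.gauss(om, 1.0/math.sqrt(25*T))
        q_log = 0.0
    log_a = (log_post_omega(prop, beta, sig2, y)
             - log_post_omega(om, beta, sig2, y) + q_log)
    return prop if rng.random() < math.exp(min(0.0, log_a)) else om

def update_beta(omega, sig2, y, sig2_beta=5.0):
    # Gibbs step: beta | omega, sig2, y ~ N_2(bhat, V), 2x2 algebra by hand
    cc = cs = ss = cy = sy = 0.0
    for t in range(1, len(y) + 1):
        c, s = basis(omega, t)
        cc += c*c; cs += c*s; ss += s*s
        cy += c*y[t - 1]; sy += s*y[t - 1]
    a = 1/sig2_beta + cc/sig2; b = cs/sig2; d = 1/sig2_beta + ss/sig2
    det = a*d - b*b
    v11, v12, v22 = d/det, -b/det, a/det          # V = posterior covariance
    m1 = v11*cy/sig2 + v12*sy/sig2                # bhat = V X'y / sig2
    m2 = v12*cy/sig2 + v22*sy/sig2
    l11 = math.sqrt(v11); l21 = v12/l11           # Cholesky factor of V
    l22 = math.sqrt(max(v22 - l21*l21, 0.0))
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    return (m1 + l11*z1, m2 + l21*z1 + l22*z2)

def update_sig2(omega, beta, y, xi0=4.0, tau0=1.0):
    # Gibbs step: sig2 | beta, omega, y ~ Inverse-Gamma((T+xi0)/2, (tau0+RSS)/2)
    rss = sum((y[t - 1] - beta[0]*basis(omega, t)[0]
                        - beta[1]*basis(omega, t)[1])**2
              for t in range(1, len(y) + 1))
    return ((tau0 + rss)/2.0) / rng.gammavariate((len(y) + xi0)/2.0, 1.0)

# Synthetic series at a known frequency (0.05) and a short run of the sampler
T = 100
y = [2.0*math.cos(2*math.pi*0.05*t) + rng.gauss(0, 0.3) for t in range(1, T + 1)]
I = periodogram(y)
om, beta, sig2 = 0.03, (0.0, 0.0), 1.0
for _ in range(300):
    om = update_omega(om, beta, sig2, y, I)
    beta = update_beta(om, sig2, y)
    sig2 = update_sig2(om, beta, y)
```

The periodogram-based component of the mixture lets the chain jump directly to high-energy frequency bins, while the random walk refines the estimate locally; the `q_log` correction accounts for the asymmetry of the $q_1$ proposal, as in the acceptance ratio above.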
\color{black}
\end{document} |
\begin{document}
\title{Cavity Carving of Atomic Bell States}
\author{Stephan~Welte}
\email{[email protected]}
\author{\hspace{-.4em}\textcolor{link}{\normalfont\textsuperscript{$\dagger$}}\hspace{.4em}Bastian~Hacker}
\thanks{S.W.\ and B.H.\ contributed equally to this work.}
\author{Severin~Daiss}
\author{Stephan~Ritter}
\author{Gerhard~Rempe}
\affiliation{Max-Planck-Institut f\"ur Quantenoptik, Hans-Kopfermann-Strasse 1, 85748 Garching, Germany}
\begin{abstract}
We demonstrate entanglement generation of two neutral atoms trapped inside an optical cavity. Entanglement is created from initially separable two-atom states through carving with weak photon pulses reflected from the cavity. A polarization rotation of the photons heralds the entanglement. We show the successful implementation of two different protocols and the generation of all four Bell states with a maximum fidelity of $(90\pm2)\%$. The protocol works for any distance between cavity-coupled atoms, and no individual addressing is required. Our result constitutes an important step towards applications in quantum networks, e.g.\ for entanglement swapping in a quantum repeater.
\end{abstract}
\maketitle
Entanglement is a central ingredient of quantum physics. It was long debated until groundbreaking experiments with entangled photons \cite{Freedman1972,aspect1982}, ions \cite{turchette1998}, atoms \cite{hagley1997}, artificial atoms \cite{steffen2006}, and ensembles \cite{julsgaard2001} changed the view and launched an active field of research \cite{casabone2013, lin2013, kaufman2015}. Now entanglement is a powerful resource, with the teleportation of information in quantum networks among the most fascinating future applications \cite{kimble2008}. Elementary networks already exist, utilizing photons to distribute entanglement \cite{moehring2007, hofmann2012, ritter2012, bernien2013, delteil2016}. In such networks, cavity quantum electrodynamics (QED) not only ensures efficient connectivity between distant nodes \cite{reiserer2015} but also establishes enhanced capabilities. Most strikingly, network photons have been proposed as perfect workhorses to generate local entanglement \cite{soerensen2003a}.
Here we follow this proposal and show entanglement of two atomic qubits within one cavity QED node from which photons are reflected and detected with polarization-sensitive counters. As the scheme is insensitive to fluctuating photon numbers, we employ weak coherent laser pulses and use photon detection as a herald. Combined with atomic state rotations, we produce entangled states from an initially separable atom pair state, and show the creation of all four maximally entangled Bell states. Our scheme features several distinct advantages that distinguish it from other entanglement schemes \cite{lin2013, kaufman2015, wilk2010, isenhower2010}. Most notably, the interaction strength between two atoms coupled to the optical cavity does not depend on distance. Also, individual addressing of the two atoms is not required, rendering the technique robust, e.g., against focusing and pointing errors of the laser used for atomic state rotations. Moreover, the entangling protocol is fast, limited only by the duration of the atomic state rotation and the light pulses. The minimum pulse duration is determined by the cavity linewidth.
\begin{figure}
\caption{\label{fig:carvingexplanation}}
\end{figure}
Following \cite{chen2015}, we call our technique carving. An initially separable state of two atoms undergoes a common projective measurement with probabilistic outcome. For an appropriately chosen projection subspace, the part of a two-atom wavefunction in that subspace can be entangled. If the measurement yields the orthogonal outcome, the atoms are not entangled and the attempt will be discarded. Specifically, each of the two atoms carries a qubit encoded in the states $\ket{{\uparrow}}$ and $\ket{{\downarrow}}$. Starting with an initially separable two-atom state $\ket{{\downarrow}{\downarrow}}$, a global $\pi/2$ rotation prepares $\frac{1}{2}(\ket{{\uparrow}{\uparrow}}-\ket{{\uparrow}{\downarrow}}-\ket{{\downarrow}{\uparrow}}+\ket{{\downarrow}{\downarrow}})$. A projective measurement (``carving'') allows us to probabilistically remove the $\ket{{\downarrow}{\downarrow}}$ and $\ket{{\uparrow}{\uparrow}}$ components of this state in a heralded protocol. In this way, we generate $\ket{\Psi^+}=\frac{1}{\sqrt{2}}(\ket{{\uparrow}{\downarrow}}+\ket{{\downarrow}{\uparrow}})$, a maximally entangled Bell state. The ideal carving process is illustrated with the Husimi Q distribution in Fig.\,\figref{fig:carvingexplanation}.
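The ideal carving step described above can be checked numerically; a small numpy sketch, assuming a lossless projection (all variable names are ours):

```python
import numpy as np

# Two-qubit basis states |uu>, |ud>, |du>, |dd>
up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
uu, ud, du, dd = [np.kron(a, b) for a in (up, dn) for b in (up, dn)]

# Global pi/2 rotation about y on both atoms maps |dn,dn> to
# (|uu> - |ud> - |du> + |dd>)/2, as in the text
ry = np.array([[np.cos(np.pi / 4), -np.sin(np.pi / 4)],
               [np.sin(np.pi / 4),  np.cos(np.pi / 4)]])
psi = np.kron(ry, ry) @ dd

# Ideal carving: project out the |dn,dn> and |up,up> components
proj = np.eye(4) - np.outer(dd, dd) - np.outer(uu, uu)
carved = proj @ psi
p_success = np.sum(carved ** 2)      # ideal heralding probability
carved /= np.sqrt(p_success)

bell = (ud + du) / np.sqrt(2)        # |Psi+>
fidelity = abs(np.vdot(bell, carved)) ** 2
```

The carved state coincides (up to a global phase) with $\ket{\Psi^+}$, and the ideal heralding probability is $1/2$.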
In our implementation, the projective measurement is performed by coherent light pulses, reflected off the cavity. We use linear polarization, $\ket{\text{A}}=\frac{1}{\sqrt2}(\ket{\text{L}}-i\ket{\text{R}})$, consisting of a right-circular component $\ket{\text{R}}$ which couples resonantly to any atom in $\ket{{\uparrow}}$ via the optical transition $\ket{{\uparrow}}\,{\leftrightarrow}\,\ket{e}$, and a left-circular component $\ket{\text{L}}$ as an uncoupled reference with the corresponding atomic transition far off-resonant. As both atoms are trapped in the same cavity mode, any atom in the coupling state $\ket{{\uparrow}}$ will change the reflection amplitude for $\ket{\text{R}}$. This leads to a nonzero probability to detect the reflected light in the orthogonal polarization mode $\ket{\text{D}}=\frac{1}{\sqrt2}(\ket{\text{L}}+i\ket{\text{R}})$. Therefore, any detection of a photon in $\ket{\text{D}}$ heralds the projection of the atoms into the subspace $\operatorname{span}(\{\ket{{\uparrow}{\uparrow}},\ket{{\uparrow}{\downarrow}},\ket{{\downarrow}{\uparrow}}\})$, and effectively carves away the $\ket{{\downarrow}{\downarrow}}$ component. Coherences within the subspace stay unaffected upon the projective measurement. The $\ket{\text{A}}\rightarrow\ket{\text{D}}$ flip probability $P_f=\bigl(\frac{\kappa_\text{out}}{\kappa}\frac{C}{C+1/2}\bigr)^2$ scales with the cooperativity $C$ (see \cite{supplement}). Here, $C=Ng^2/(2\kappa\gamma)$, with atom-cavity coupling rate $g$, number of coupling atoms $N$, total cavity field decay rate $\kappa$, decay rate through the outcoupling mirror $\kappa_\text{out}$, and atomic dipole decay rate $\gamma$ \cite{reiserer2015}. Our scheme requires neither strong coupling, $g>(\kappa,\gamma)$, nor high cooperativity, but both are beneficial for high efficiencies and high fidelities. 
In the case of high cooperativity and an asymmetric cavity, $C>\kappa_\text{out}/\kappa-1/2>0$ \cite{supplement}, as achieved in our experiment, any nonzero number of coupled atoms induces a $\pi$-phase shift on the reflected amplitude of $\ket{\text{R}}$ \cite{reiserer2014}. This makes the $\ket{\text{A}}\rightarrow\ket{\text{D}}$ polarization flip efficient even for one reflected photon. When more than one photon is reflected, the projected atomic state will not change further.
Compared to the proposed scheme \cite{soerensen2003a}, our version has the advantage that any light that is not matched to the geometric cavity mode will remain in its original polarization mode $\ket{\text{A}}$ and create no heralding signal in the $\ket{\text{D}}$ detector. This enhances the entangling fidelity significantly and makes the scheme robust against wavefront imperfections of the incident light.
The experimental setup is an extension of earlier work \cite{reiserer2014} which now allows us to trap two \textsuperscript{87}Rb atoms in a three-dimensional optical lattice \cite{supplement} at the cavity center with trapping times of several seconds. The largest separation $d$ between the atoms is along an axis perpendicular to the cavity. Via fluorescence images (right inset in Fig.\,\figref{fig:setup}), we ensure that only atom pairs positioned symmetrically around the cavity mode center, with a distance $2\unit{{\ensuremath{\textnormal\textmu}}m}\le{d}\le12\unit{{\ensuremath{\textnormal\textmu}}m}$, are used. A blue-detuned optical lattice along the cavity axis confines the atoms close to anti-nodes of the $780\unit{nm}$ cavity field and thereby ensures coupling to the cavity on each trapping site.
The cavity with length $486\unit{{\ensuremath{\textnormal\textmu}}m}$ and mode waist $30\unit{{\ensuremath{\textnormal\textmu}}m}$ is single-sided with mirror transmissions of $4.0{\times}10^{-6}$ and $9.2{\times}10^{-5}$, allowing for an efficient in- and outcoupling of light in one direction. It is actively tuned into resonance with the atomic transition $\ket{{\uparrow}}:=\ket{F{=}2,m_F{=}2}\leftrightarrow\ket{e}:=\ket{F'{=}3,m_F{=}3}$ on the D$_2$ line. On this transition, we achieve $(g,\kappa,\kappa_\text{out},\gamma)=2\pi\,(7.8,2.5,2.3,3.0)\unit{MHz}$ and thus a cooperativity $C=4.1$ for one coupling atom.
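With these rates, the cooperativity and the $\ket{\text{A}}\rightarrow\ket{\text{D}}$ flip probability $P_f$ quoted earlier can be reproduced numerically. The explicit on-resonance reflection amplitude used below is the standard input-output-theory expression for a single-sided cavity; its specific form is our assumption, chosen so that it reproduces the closed form for $P_f$:

```python
# Measured rates in units of 2*pi MHz, from the text
g, kappa, kappa_out, gamma = 7.8, 2.5, 2.3, 3.0

C = g ** 2 / (2 * kappa * gamma)        # single-atom cooperativity

def refl(C):
    # On-resonance reflection amplitude of the single-sided cavity as a
    # function of cooperativity (standard input-output result; assumption)
    return 1 - (2 * kappa_out / kappa) / (1 + 2 * C)

# Coupled vs. uncoupled reflection differ in sign: the pi phase shift
r_empty, r_coupled = refl(0.0), refl(C)

# A -> D flip probability from interference of the two amplitudes;
# algebraically equal to ((kappa_out/kappa) * C/(C + 1/2))**2
P_f = ((r_empty - r_coupled) / 2) ** 2
P_f_closed = ((kappa_out / kappa) * C / (C + 0.5)) ** 2
```

The sign change between `r_empty` and `r_coupled` is the $\pi$-phase shift discussed below, and both routes give $P_f\approx0.67$ for $C=4.1$.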
\begin{figure}
\caption{\label{fig:setup}}
\end{figure}
We choose the ground state $\ket{{\downarrow}}:=\ket{F{=}1,m_F{=}1}$ as our second qubit state (left inset in Fig.\,\figref{fig:setup}). For each experiment, we employ an experimental sequence of state preparation, quantum state carving, analysis and readout \cite{supplement}. Depending on the desired entangled output state, we start by preparing the atoms in $\ket{{\downarrow}{\downarrow}}$ or in a statistical mixture of anti-parallel states with density matrix $\frac12\ket{{\uparrow}{\downarrow}}\bra{{\uparrow}{\downarrow}}+\frac12\ket{{\downarrow}{\uparrow}}\bra{{\downarrow}{\uparrow}}$. These initial states can be realized by means of a dynamical Stark detuning of the optical transitions induced by the power of the trap laser \cite{supplement}. Coherent qubit control is achieved with a pair of Raman lasers which copropagate perpendicular to the cavity axis and illuminate both atoms equally with a beam waist $w_0=35\unit{{\ensuremath{\textnormal\textmu}}m}$, much bigger than the inter-atomic distance. We denote rotations as ``$R$'' with the rotation angle as a superscript, and with a subscript defining the rotation axis $x$ ($\ket{\uparrow}+\ket{\downarrow}$), $y$ ($\ket{\uparrow}+i\ket{\downarrow}$), or $z$ ($\ket{\uparrow}$). State rotations from the initial state $\ket{{\downarrow}{\downarrow}}$ create coherent spin states $\ket{\theta,\phi}$ where $\theta$ and $\phi$ can be controlled via the Raman laser power, duration, detuning and phase.
We employ a state-detection protocol consisting of two successive measurements on the two atoms with an interleaved $\pi$ pulse. This allows us to discriminate between $\ket{{\downarrow}{\downarrow}}$, $\ket{{\uparrow}{\uparrow}}$ and $\ket{{\uparrow}{\downarrow}}$/$\ket{{\downarrow}{\uparrow}}$ \cite{supplement}.
The coherent light pulses for carving ($0.7\unit{{\ensuremath{\textnormal\textmu}}s}$ full width at half maximum) impinge on the cavity, are reflected, and are measured, polarization-resolved, with separate single-photon detectors for $\ket{\text{A}}$ and $\ket{\text{D}}$ (Fig.\,\figref{fig:setup}).
\begin{figure}
\caption{\label{fig:circuitdiagram}}
\end{figure}
We demonstrate the carving technique in a scheme called ``double carving'' which is adapted from Ref.\,\cite{soerensen2003a} [Fig.\,\figref{fig:circuitdiagram}(a)]. We start by preparing the separable state $\ket{{\downarrow}{\downarrow}}$, from which $\frac{1}{2}(\ket{{\uparrow}{\uparrow}}-\ket{{\uparrow}{\downarrow}}-\ket{{\downarrow}{\uparrow}}+\ket{{\downarrow}{\downarrow}})$ is created via a global $R^{\pi/2}_y$ rotation. Our default photon number per reflection pulse is $\overline{n}=0.33$. The first reflection of an $\ket{\text{A}}$-polarized pulse is followed by the projection of the atomic state to $\frac{1}{\sqrt{3}}(\ket{{\uparrow}{\uparrow}}-\ket{{\uparrow}{\downarrow}}-\ket{{\downarrow}{\uparrow}})$ whenever photons are detected in $\ket{\text{D}}$, which happens in 61\% of the detection events (3/4 for an ideal cavity). Then, we apply a global $R^\pi_y$ rotation yielding $\frac{1}{\sqrt{3}}(\ket{{\downarrow}{\downarrow}}+\ket{{\uparrow}{\downarrow}}+\ket{{\downarrow}{\uparrow}})$ and reflect a second $\ket{\text{A}}$-polarized pulse. It is detected in the orthogonal state with 53\% probability (2/3 for an ideal cavity), resulting in $\ket{\Psi^+}=\frac{1}{\sqrt{2}}(\ket{{\uparrow}{\downarrow}}+\ket{{\downarrow}{\uparrow}})$. Effectively, this protocol removes the populations of $\ket{{\uparrow}{\uparrow}}$ and $\ket{{\downarrow}{\downarrow}}$ from the initially rotated state, yielding a total success probability of $32\%$ ($1/2$ for an ideal cavity), as only runs where the polarization of both photons flipped are postselected. With an average of 0.11 detected photons per pulse, the experiment succeeds with a total efficiency of $(0.38\pm0.01)\%$.
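The ideal success probabilities of the double-carving sequence ($3/4$, $2/3$, and $1/2$ in total) follow from a direct state-vector calculation; a minimal sketch, assuming lossless projections:

```python
import numpy as np

up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
dd = np.kron(dn, dn)
bell = (np.kron(up, dn) + np.kron(dn, up)) / np.sqrt(2)   # |Psi+>

def ry(theta):
    # Single-qubit rotation about y by angle theta
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

def carve_dd(psi):
    """Ideal projection removing the |dn,dn> component; returns the
    normalized state and the heralding probability."""
    psi = psi - np.dot(dd, psi) * dd
    p = np.sum(psi ** 2)
    return psi / np.sqrt(p), p

psi = np.kron(ry(np.pi / 2), ry(np.pi / 2)) @ dd   # global pi/2 pulse
psi, p1 = carve_dd(psi)                            # first carving: p1 = 3/4
psi = np.kron(ry(np.pi), ry(np.pi)) @ psi          # global pi pulse
psi, p2 = carve_dd(psi)                            # second carving: p2 = 2/3
fidelity = abs(np.vdot(bell, psi)) ** 2
```

The two heralding probabilities multiply to the ideal overall success probability of $1/2$ quoted above, and the surviving state is $\ket{\Psi^+}$.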
To analyze a generated two-atom output state with density matrix $\rho$, we determine its fidelity $F{=}\braket{\psi|\rho|\psi}$ with respect to an ideal state $\ket{\psi}$. This requires a direct measurement of the state's populations $P_{{\uparrow}{\uparrow}}$, $P_{{\downarrow}{\downarrow}}$ and $(P_{{\uparrow}{\downarrow}}+P_{{\downarrow}{\uparrow}})$ and the determination of coherences \cite{supplement}. To retrieve the latter, we use the method of parity oscillations \cite{turchette1998,sackett2000}: An additional analysis pulse of area $\pi/2$ is inserted right before the state detection. The rotation axis, given by the phase $\phi$ with respect to the preparation Raman-beam pulses, is scanned from $0$ to $2\pi$ over 750 subsequent experiments and we measure the resulting populations $\tilde P(\phi)$. Experiments are repeated at a rate of $1\unit{kHz}$ with $180\unit{{\ensuremath{\textnormal\textmu}}s}$ being used for optical pumping and $740\unit{{\ensuremath{\textnormal\textmu}}s}$ for cooling between each experiment. The parity $\Pi(\phi):=\tilde P_{{\uparrow}{\uparrow}}+\tilde P_{{\downarrow}{\downarrow}}-\tilde P_{{\uparrow}{\downarrow}}-\tilde P_{{\downarrow}{\uparrow}}$ oscillates as $\Pi(\phi)=2\operatorname{Re}(\rho_{{\uparrow}{\downarrow},{\downarrow}{\uparrow}})+2\operatorname{Im}(\rho_{{\uparrow}{\uparrow},{\downarrow}{\downarrow}})\sin(2\phi)+2\operatorname{Re}(\rho_{{\uparrow}{\uparrow},{\downarrow}{\downarrow}})\cos(2\phi)$ (Fig.\,\figref{fig:parity}) and yields information about coherences through a fit of the oscillation amplitude, phase and offset. Here, $\rho_{ij,kl}$ denote elements of the density matrix $\rho$ with $i,j,k,l\in\{{\uparrow},{\downarrow}\}$.
The generated $\ket{\Psi^+}$ state shows a high offset value of $2\operatorname{Re}(\rho_{{\uparrow}{\downarrow},{\downarrow}{\uparrow}})=(80\pm5)\%$ (Fig.\,\figref{fig:parity} upper right). The fidelity with the expected Bell state is calculated according to $F(\Psi^+)=\frac12(P_{{\uparrow}{\downarrow}}+P_{{\downarrow}{\uparrow}})+\operatorname{Re}(\rho_{{\uparrow}{\downarrow},{\downarrow}{\uparrow}})=(81.9\pm2.8)\%$. A fidelity above $50\%$ with any Bell state proves entanglement.
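The fidelity formula can be sanity-checked on a toy density matrix, here a $\ket{\Psi^+}$ state mixed with white noise (the mixing weight is an arbitrary illustrative choice, not a measured value):

```python
import numpy as np

bell = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)    # |Psi+>
p = 0.8                                               # illustrative weight
rho = p * np.outer(bell, bell) + (1 - p) * np.eye(4) / 4

pops = np.real(np.diag(rho))          # P_uu, P_ud, P_du, P_dd
coherence = np.real(rho[1, 2])        # Re(rho_{ud,du}), half the parity offset
F_formula = 0.5 * (pops[1] + pops[2]) + coherence
F_direct = np.real(bell @ rho @ bell)
```

Populations plus the single coherence reproduce the overlap fidelity exactly for this family of states.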
\begin{figure}
\caption{\label{fig:parity}}
\end{figure}
The $\ket{\Psi^+}$ state is one of the Bell triplet states, and can be transformed into $\ket{\Phi^-}=\frac{1}{\sqrt{2}}(\ket{{\uparrow}{\uparrow}}-\ket{{\downarrow}{\downarrow}})$ via a $R^{\pi/2}_y$ rotation or into $\ket{\Phi^+}=\frac{1}{\sqrt{2}}(\ket{{\uparrow}{\uparrow}}+\ket{{\downarrow}{\downarrow}})$ through a $R^{\pi/2}_x $ rotation (Fig.\,\figref{fig:circuitdiagram}). Both of them exhibit the expected oscillations. Lastly, we can create the Bell singlet state $\ket{\Psi^-}=\frac{1}{\sqrt{2}}(\ket{{\uparrow}{\downarrow}}-\ket{{\downarrow}{\uparrow}})$, using the same carving protocol, but starting with an initial density matrix $\frac12\ket{{\uparrow}{\downarrow}}\bra{{\uparrow}{\downarrow}}+\frac12\ket{{\downarrow}{\uparrow}}\bra{{\downarrow}{\uparrow}}$. A rotation by $\pi/2$ creates coherent $\ket{{\downarrow}{\uparrow}}$ and $\ket{{\uparrow}{\downarrow}}$ contributions with opposite sign, which remain coherently preserved throughout the rest of the protocol. As both $\ket{{\downarrow}{\downarrow}}$ and $\ket{{\uparrow}{\uparrow}}$ contributions are carved out, we arrive at a pure $\ket{\Psi^-}$. The parity data for all four states is shown in Fig.\,\figref{fig:parity}, which additionally displays the measured Husimi Q distribution for an experimentally prepared Bell state $\ket{\Phi^-}$. Table \ref{tab:fidelities} lists the measured populations and the fidelities for all four Bell states. Furthermore, we investigate the lifetimes $\tau$ of the entangled states by measuring the fidelities after various waiting intervals. The $1/e$ values of a fitted Gaussian are given in Tab.\,\ref{tab:fidelities}. We find $\tau$ to be on the order of a few hundred microseconds, limited by fluctuating real and virtual, i.e.\ trap-induced, magnetic fields of a few mG and mechanical-state-dependent dynamical Stark shifts. 
The $\ket{\Psi^\pm}$ states live longer than the $\ket{\Phi^-}$ state, in accordance with the expectation that they are insensitive to common-mode field fluctuations \cite{Lidar1998}.
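That the global $\pi/2$ rotations map $\ket{\Psi^+}$ onto $\ket{\Phi^-}$ and $\ket{\Phi^+}$ can be verified directly; a sketch with one common phase convention for the rotation operators (the mapping holds up to a global phase):

```python
import numpy as np

c = 1 / np.sqrt(2)
psi_p = np.array([0, c, c, 0], dtype=complex)    # |Psi+>
phi_m = np.array([c, 0, 0, -c], dtype=complex)   # |Phi->
phi_p = np.array([c, 0, 0, c], dtype=complex)    # |Phi+>

# Single-qubit pi/2 rotations about y and x
ry = np.array([[np.cos(np.pi / 4), -np.sin(np.pi / 4)],
               [np.sin(np.pi / 4),  np.cos(np.pi / 4)]], dtype=complex)
rx = np.array([[np.cos(np.pi / 4), -1j * np.sin(np.pi / 4)],
               [-1j * np.sin(np.pi / 4), np.cos(np.pi / 4)]])

# Global rotations act as u (tensor) u on the two-atom state
f_y = abs(np.vdot(phi_m, np.kron(ry, ry) @ psi_p)) ** 2   # Psi+ -> Phi-
f_x = abs(np.vdot(phi_p, np.kron(rx, rx) @ psi_p)) ** 2   # Psi+ -> Phi+
```

Both overlaps are unity, confirming that a single collective pulse, without individual addressing, converts between the triplet Bell states.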
\begin{table}[b]
\caption{\label{tab:fidelities}
Measured populations $P$, fidelities $F$ and lifetimes $\tau$ of states in our experiment.}
\begin{ruledtabular}
\begin{tabular}{lccccc}
$\ket{\psi}$&$P_{{\uparrow}{\uparrow}}$&$P_{{\downarrow}{\downarrow}}$&$P_{{\uparrow}{\downarrow}}+P_{{\downarrow}{\uparrow}}$&$F$&$\tau\;(\mathrm{{\ensuremath{\textnormal\textmu}}s})$\\
\hline
$\ket{\Psi^-}$&\hphantom{0}6(2)\%&\hphantom{0}9(2)\%&84(2)\%&$83.4(1.4)\%$&$204(26)$\\
$\ket{\Psi^+}$&\hphantom{0}2(2)\%&15(5)\%&83(5)\%&$81.9(2.8)\%$&$134(17)$\\
$\ket{\Phi^-}$&40(3)\%&54(3)\%&\hphantom{0}6(1)\%&$89.9(1.7)\%$&$\hphantom{0}90(19)$\\
$\ket{\Phi^+}$&44(5)\%&43(5)\%&13(4)\%&$82.4(3.1)\%$&---
\end{tabular}
\end{ruledtabular}
\end{table}
According to the model of \cite{soerensen2003a}, scattering of carving photons by atomic spontaneous decay leads to decoherence of the entangled state. With an impinging coherent pulse of $\overline{n}$ photons on average, the expected number of scattered photons is given by $\overline{n}\cdot{s}$, with scattering fraction $s$, independent of the number of un-scattered detected photons in each shot. With our cavity parameters we have $s=4\kappa_\text{out}\gamma{N}g^2/(\kappa\gamma+Ng^2)^2=0.36$ for $N=1$. With two reflected pulses, twice the amount of decoherence is expected. We compare this model to measured values of the fidelity for various $\overline{n}$, and find very good agreement [Fig.\,\figref{fig:Fvsangle}(a)]. Only for very low $\overline{n}$ do the measured fidelities drop below this simple model, because detector dark counts become important. Thus, we conclude that spontaneous decay and detector dark counts are the dominant processes that deteriorate $F$, and that an improvement would require lower $s$, for example through an increased atom-cavity coupling rate $g$.
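With the cavity parameters given earlier, the scattering fraction evaluates as follows (a direct numerical check of the formula in the text):

```python
# Rates in units of 2*pi MHz; N coupling atoms
g, kappa, kappa_out, gamma = 7.8, 2.5, 2.3, 3.0
N = 1
s = 4 * kappa_out * gamma * N * g ** 2 / (kappa * gamma + N * g ** 2) ** 2

# Expected number of scattered photons: nbar * s per carving pulse,
# doubled for the two pulses of the double-carving scheme
nbar = 0.33
n_scattered = 2 * nbar * s
```

At the default $\overline{n}=0.33$, roughly a quarter of a photon is scattered per double-carving attempt, consistent with the observed fidelity scaling.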
\begin{figure}
\caption{\label{fig:Fvsangle}}
\end{figure}
We realize a second entangling protocol, named ``single-carving'' scheme [Fig.\,\figref{fig:circuitdiagram}(b)] and also proposed in Ref.\,\cite{soerensen2003a}. It employs a small $R^\alpha_y$ rotation with \mbox{$\alpha\ll\pi$} on the initial $\ket{{\downarrow}{\downarrow}}$ state followed by one carving step. This rotation creates a state with only a small $\ket{{\uparrow}{\uparrow}}$ contribution
\begin{align}
\label{eq:singlecarvingprep}
\sin^2\frac\alpha2\,\ket{{\uparrow}{\uparrow}}&-\frac12\sin\alpha\,(\ket{{\uparrow}{\downarrow}}+\ket{{\downarrow}{\uparrow}})+\cos^2\frac\alpha2\,\ket{{\downarrow}{\downarrow}}
,
\end{align}
which can be converted into a highly entangled state by only one reflection-based carving of $\ket{{\downarrow}{\downarrow}}$. The resulting state is approximately a $\ket{\Psi^+}$ state for small $\alpha$ and can afterwards be rotated to a $\ket{\Phi^-}$. The scheme has an intrinsic trade-off between achievable efficiency $\eta=1-\cos^4(\alpha/2)$ and fidelity $F=4\cos^2(\alpha/2)\mathbin{/}(3+\cos\alpha)$, with $\alpha$ as an adjustable parameter in addition to the reflection-pulse photon number $\overline{n}$. In our experiment we create the $\ket{\Phi^-}$ state with a relatively large $\overline{n}=1.2$, and vary $\alpha$ between 0 and $0.63\,\pi$ [Fig.\,\figref{fig:Fvsangle}(b)]. For large $\alpha$ we observe the expected drop of the fidelity as the undesired component in Eq.\,(\ref{eq:singlecarvingprep}) increases. For very small $\alpha$, $F$ decreases again, as the signal quickly becomes dominated by detector dark counts of 0.01 per pulse. For $\alpha=0.23\,\pi$ we find a maximum fidelity of $F(\Phi^-)=(70.2\pm3.2)\%$. This fidelity is limited by both dark counts and $\overline{n}$. With our chosen settings we reach a heralding efficiency of $(5.9\pm0.1)\%$, benefiting from the larger $\overline{n}$ compared to the double-carving scheme. This second scheme can directly be extended to three or more atoms. The feasibility of entangling atomic ensembles has been demonstrated in \cite{haas2014,mcconnell2015}.
Comparing the two protocols, the double-carving scheme yields higher entangling fidelities, limited only by photon scattering and detector dark counts, whereas our implementation of the single-carving scheme achieves the larger efficiency. In the presence of dark counts, the choice between different protocols and parameters therefore offers a convenient way to optimize between experimentally achievable efficiencies and fidelities.
Our cavity can readily act as a quantum network node \cite{ritter2012} and local entanglement between two atoms can be mapped onto photons and then be distributed in the network. In a quantum repeater scheme based on cavity QED systems \cite{uphoff2016}, the presented double-carving scheme combined with a third photon reflection can furthermore serve as an entanglement-swapping protocol. The straightforward extension towards more atoms provides a promising route to explore the full scalability of the system.
\begin{acknowledgments}
We thank M.\ K\"orber, A.\ Neuzner, A.\ Reiserer and M.\ Uphoff for valuable ideas and discussions. This work was supported by the Bundesministerium f\"ur Bildung und Forschung via IKT 2020 (Q.com-Q) and by the Deutsche Forschungsgemeinschaft via the excellence cluster Nanosystems Initiative Munich (NIM). S.W.\ was supported by the doctorate programme Exploring Quantum Matter (ExQM).
\end{acknowledgments}
\nocite{walls2008, neuzner2016, reimann2015}
\begin{thebibliography}{99}
\expandafter\ifx\csname url\endcsname\relax
\def\url#1{\texttt{#1}}\fi
\expandafter\ifx\csname urlprefix\endcsname\relax\def\urlprefix{URL }\fi
\providecommand{\bibinfo}[2]{#2}
\providecommand{\eprint}[2][]{\url{#2}}
\bibitem{Freedman1972}
\bibinfo{author}{S.~J. Freedman} and \bibinfo{author}{J.~F. Clauser}.
\newblock \emph{\bibinfo{title}{Experimental Test of Local Hidden-Variable
Theories}}.
\newblock
\href{https://doi.org/10.1103/PhysRevLett.28.938}{\bibinfo{journal}{Phys.
Rev. Lett.} \textbf{\bibinfo{volume}{28}}, \bibinfo{pages}{938--941}
(\bibinfo{year}{1972})}.
\bibitem{aspect1982}
\bibinfo{author}{A.~Aspect}, \bibinfo{author}{J.~Dalibard} and
\bibinfo{author}{G.~Roger}.
\newblock \emph{\bibinfo{title}{{E}xperimental {T}est of {B}ell's
{I}nequalities {U}sing {T}ime-{V}arying {A}nalyzers}}.
\newblock
\href{https://doi.org/10.1103/PhysRevLett.49.1804}{\bibinfo{journal}{Phys.
Rev. Lett.} \textbf{\bibinfo{volume}{49}}, \bibinfo{pages}{1804--1807}
(\bibinfo{year}{1982})}.
\bibitem{turchette1998}
\bibinfo{author}{Q.~A. Turchette}, \bibinfo{author}{C.~S. Wood},
\bibinfo{author}{B.~E. King}, \bibinfo{author}{C.~J. Myatt},
\bibinfo{author}{D.~Leibfried}, \bibinfo{author}{W.~M. Itano},
\bibinfo{author}{C.~Monroe} and \bibinfo{author}{D.~J. Wineland}.
\newblock \emph{\bibinfo{title}{Deterministic Entanglement of Two Trapped
Ions}}.
\newblock
\href{https://doi.org/10.1103/PhysRevLett.81.3631}{\bibinfo{journal}{Phys.
Rev. Lett.} \textbf{\bibinfo{volume}{81}}, \bibinfo{pages}{3631--3634}
(\bibinfo{year}{1998})}.
\bibitem{hagley1997}
\bibinfo{author}{E.~Hagley}, \bibinfo{author}{X.~Ma\^{\i}tre},
\bibinfo{author}{G.~Nogues}, \bibinfo{author}{C.~Wunderlich},
\bibinfo{author}{M.~Brune}, \bibinfo{author}{J.~M. Raimond} and
\bibinfo{author}{S.~Haroche}.
\newblock \emph{\bibinfo{title}{Generation of {E}instein-{P}odolsky-{R}osen
Pairs of Atoms}}.
\newblock
\href{https://doi.org/10.1103/PhysRevLett.79.1}{\bibinfo{journal}{Phys. Rev.
Lett.} \textbf{\bibinfo{volume}{79}}, \bibinfo{pages}{1--5}
(\bibinfo{year}{1997})}.
\bibitem{steffen2006}
\bibinfo{author}{M.~Steffen}, \bibinfo{author}{M.~Ansmann},
\bibinfo{author}{R.~C. Bialczak}, \bibinfo{author}{N.~Katz},
\bibinfo{author}{E.~Lucero}, \bibinfo{author}{R.~McDermott},
\bibinfo{author}{M.~Neeley}, \bibinfo{author}{E.~M. Weig},
\bibinfo{author}{A.~N. Cleland} and \bibinfo{author}{J.~M. Martinis}.
\newblock \emph{\bibinfo{title}{Measurement of the Entanglement of Two
Superconducting Qubits via State Tomography}}.
\newblock
\href{https://doi.org/10.1126/science.1130886}{\bibinfo{journal}{Science}
\textbf{\bibinfo{volume}{313}}, \bibinfo{pages}{1423--1425}
(\bibinfo{year}{2006})}.
\bibitem{julsgaard2001}
\bibinfo{author}{B.~Julsgaard}, \bibinfo{author}{A.~Kozhekin} and
\bibinfo{author}{E.~S. Polzik}.
\newblock \emph{\bibinfo{title}{Experimental long-lived entanglement of two
macroscopic objects}}.
\newblock \href{https://doi.org/10.1038/35096524}{\bibinfo{journal}{Nature}
\textbf{\bibinfo{volume}{413}}, \bibinfo{pages}{400--403}
(\bibinfo{year}{2001})}.
\bibitem{casabone2013}
\bibinfo{author}{B.~Casabone}, \bibinfo{author}{A.~Stute},
\bibinfo{author}{K.~Friebe}, \bibinfo{author}{B.~Brandst\"atter},
\bibinfo{author}{K.~Sch\"uppert}, \bibinfo{author}{R.~Blatt} and
\bibinfo{author}{T.~E. Northup}.
\newblock \emph{\bibinfo{title}{Heralded Entanglement of Two Ions in an Optical
Cavity}}.
\newblock
\href{https://doi.org/10.1103/PhysRevLett.111.100505}{\bibinfo{journal}{Phys.
Rev. Lett.} \textbf{\bibinfo{volume}{111}}, \bibinfo{pages}{100505}
(\bibinfo{year}{2013})}.
\bibitem{lin2013}
\bibinfo{author}{Y.~Lin}, \bibinfo{author}{J.~P. Gaebler},
\bibinfo{author}{F.~Reiter}, \bibinfo{author}{T.~R. Tan},
\bibinfo{author}{R.~Bowler}, \bibinfo{author}{A.~S. S{\o}rensen},
\bibinfo{author}{D.~Leibfried} and \bibinfo{author}{D.~J. Wineland}.
\newblock \emph{\bibinfo{title}{Dissipative production of a maximally entangled
steady state of two quantum bits}}.
\newblock \href{https://doi.org/10.1038/nature12801}{\bibinfo{journal}{Nature}
\textbf{\bibinfo{volume}{504}}, \bibinfo{pages}{415--418}
(\bibinfo{year}{2013})}.
\bibitem{kaufman2015}
\bibinfo{author}{A.~M. Kaufman}, \bibinfo{author}{B.~J. Lester},
\bibinfo{author}{M.~Foss-Feig}, \bibinfo{author}{M.~L. Wall},
\bibinfo{author}{A.~M. Rey} and \bibinfo{author}{C.~A. Regal}.
\newblock \emph{\bibinfo{title}{Entangling two transportable neutral atoms via
local spin exchange}}.
\newblock \href{https://doi.org/10.1038/nature16073}{\bibinfo{journal}{Nature}
\textbf{\bibinfo{volume}{527}}, \bibinfo{pages}{208--211}
(\bibinfo{year}{2015})}.
\bibitem{kimble2008}
\bibinfo{author}{H.~J. Kimble}.
\newblock \emph{\bibinfo{title}{The quantum internet}}.
\newblock \href{https://doi.org/10.1038/nature07127}{\bibinfo{journal}{Nature}
\textbf{\bibinfo{volume}{453}}, \bibinfo{pages}{1023--1030}
(\bibinfo{year}{2008})}.
\bibitem{moehring2007}
\bibinfo{author}{D.~L. Moehring}, \bibinfo{author}{P.~Maunz},
\bibinfo{author}{S.~Olmschenk}, \bibinfo{author}{K.~C. Younge},
\bibinfo{author}{D.~N. Matsukevich}, \bibinfo{author}{L.-M. Duan} and
\bibinfo{author}{C.~Monroe}.
\newblock \emph{\bibinfo{title}{Entanglement of single-atom quantum bits at a
distance}}.
\newblock \href{https://doi.org/10.1038/nature06118}{\bibinfo{journal}{Nature}
\textbf{\bibinfo{volume}{449}}, \bibinfo{pages}{68--71}
(\bibinfo{year}{2007})}.
\bibitem{hofmann2012}
\bibinfo{author}{J.~Hofmann}, \bibinfo{author}{M.~Krug},
\bibinfo{author}{N.~Ortegel}, \bibinfo{author}{L.~G\'{e}rard},
\bibinfo{author}{M.~Weber}, \bibinfo{author}{W.~Rosenfeld} and
\bibinfo{author}{H.~Weinfurter}.
\newblock \emph{\bibinfo{title}{Heralded Entanglement Between Widely Separated
Atoms}}.
\newblock
\href{https://doi.org/10.1126/science.1221856}{\bibinfo{journal}{Science}
\textbf{\bibinfo{volume}{337}}, \bibinfo{pages}{72--75}
(\bibinfo{year}{2012})}.
\bibitem{ritter2012}
\bibinfo{author}{S.~Ritter}, \bibinfo{author}{C.~N{\"o}lleke},
\bibinfo{author}{C.~Hahn}, \bibinfo{author}{A.~Reiserer},
\bibinfo{author}{A.~Neuzner}, \bibinfo{author}{M.~Uphoff},
\bibinfo{author}{M.~M{\"u}cke}, \bibinfo{author}{E.~Figueroa},
\bibinfo{author}{J.~Bochmann} and \bibinfo{author}{G.~Rempe}.
\newblock \emph{\bibinfo{title}{An elementary quantum network of single atoms
in optical cavities}}.
\newblock \href{https://doi.org/10.1038/nature11023}{\bibinfo{journal}{Nature}
\textbf{\bibinfo{volume}{484}}, \bibinfo{pages}{195--200}
(\bibinfo{year}{2012})}.
\bibitem{bernien2013}
\bibinfo{author}{H.~Bernien}, \bibinfo{author}{B.~Hensen},
\bibinfo{author}{W.~Pfaff}, \bibinfo{author}{G.~Koolstra},
\bibinfo{author}{M.~Blok}, \bibinfo{author}{L.~Robledo},
\bibinfo{author}{T.~Taminiau}, \bibinfo{author}{M.~Markham},
\bibinfo{author}{D.~Twitchen}, \bibinfo{author}{L.~Childress} \emph{et~al.}
\newblock \emph{\bibinfo{title}{Heralded entanglement between solid-state
qubits separated by three metres}}.
\newblock
\href{https://doi.org/doi:10.1038/nature12016}{\bibinfo{journal}{Nature}
\textbf{\bibinfo{volume}{497}}, \bibinfo{pages}{86--90}
(\bibinfo{year}{2013})}.
\bibitem{delteil2016}
\bibinfo{author}{A.~Delteil}, \bibinfo{author}{Z.~Sun},
\bibinfo{author}{W.~Gao}, \bibinfo{author}{E.~Togan},
\bibinfo{author}{S.~Faelt} and \bibinfo{author}{A.~Imamo\u{g}lu}.
\newblock \emph{\bibinfo{title}{Generation of heralded entanglement between
distant hole spins}}.
\newblock \href{https://doi.org/10.1038/nphys3605}{\bibinfo{journal}{Nat.
Phys.} \textbf{\bibinfo{volume}{12}}, \bibinfo{pages}{218--223}
(\bibinfo{year}{2016})}.
\bibitem{reiserer2015}
\bibinfo{author}{A.~Reiserer} and \bibinfo{author}{G.~Rempe}.
\newblock \emph{\bibinfo{title}{Cavity-based quantum networks with single atoms
and optical photons}}.
\newblock
\href{https://doi.org/10.1103/RevModPhys.87.1379}{\bibinfo{journal}{Rev. Mod.
Phys.} \textbf{\bibinfo{volume}{87}}, \bibinfo{pages}{1379--1418}
(\bibinfo{year}{2015})}.
\bibitem{soerensen2003a}
\bibinfo{author}{A.~S. S\o{}rensen} and \bibinfo{author}{K.~M\o{}lmer}.
\newblock \emph{\bibinfo{title}{Probabilistic Generation of Entanglement in
Optical Cavities}}.
\newblock
\href{https://doi.org/10.1103/PhysRevLett.90.127903}{\bibinfo{journal}{Phys.
Rev. Lett.} \textbf{\bibinfo{volume}{90}}, \bibinfo{pages}{127903}
(\bibinfo{year}{2003})}.
\bibitem{isenhower2010}
\bibinfo{author}{L.~Isenhower}, \bibinfo{author}{E.~Urban},
\bibinfo{author}{X.~L. Zhang}, \bibinfo{author}{A.~T. Gill},
\bibinfo{author}{T.~Henage}, \bibinfo{author}{T.~A. Johnson},
\bibinfo{author}{T.~G. Walker} and \bibinfo{author}{M.~Saffman}.
\newblock \emph{\bibinfo{title}{Demonstration of a Neutral Atom
Controlled-{NOT} Quantum Gate}}.
\newblock
\href{https://doi.org/10.1103/PhysRevLett.104.010503}{\bibinfo{journal}{Phys.
Rev. Lett.} \textbf{\bibinfo{volume}{104}}, \bibinfo{pages}{010503}
(\bibinfo{year}{2010})}.
\bibitem{wilk2010}
\bibinfo{author}{T.~Wilk}, \bibinfo{author}{A.~Ga\"{e}tan},
\bibinfo{author}{C.~Evellin}, \bibinfo{author}{J.~Wolters},
\bibinfo{author}{Y.~Miroshnychenko}, \bibinfo{author}{P.~Grangier} and
\bibinfo{author}{A.~Browaeys}.
\newblock \emph{\bibinfo{title}{Entanglement of Two Individual Neutral Atoms
Using {R}ydberg Blockade}}.
\newblock
\href{https://doi.org/10.1103/PhysRevLett.104.010502}{\bibinfo{journal}{Phys.
Rev. Lett.} \textbf{\bibinfo{volume}{104}}, \bibinfo{pages}{010502}
(\bibinfo{year}{2010})}.
\bibitem{chen2015}
\bibinfo{author}{W.~Chen}, \bibinfo{author}{J.~Hu}, \bibinfo{author}{Y.~Duan},
\bibinfo{author}{B.~Braverman}, \bibinfo{author}{H.~Zhang} and
\bibinfo{author}{V.~Vuleti\ifmmode~\acute{c}\else \'{c}\fi{}}.
\newblock \emph{\bibinfo{title}{Carving Complex Many-Atom Entangled States by
Single-Photon Detection}}.
\newblock
\href{https://doi.org/10.1103/PhysRevLett.115.250502}{\bibinfo{journal}{Phys.
Rev. Lett.} \textbf{\bibinfo{volume}{115}}, \bibinfo{pages}{250502}
(\bibinfo{year}{2015})}.
\bibitem{supplement}
\bibinfo{author}{{See \hyperref[supplement]{Supplemental Material}, which includes
Refs.\,[17, 28, 29, 30], for details on the preparation and readout of two-atom
states and for a derivation of the required system parameters.}}
\bibitem{reiserer2014}
\bibinfo{author}{A.~Reiserer}, \bibinfo{author}{N.~Kalb},
\bibinfo{author}{G.~Rempe} and \bibinfo{author}{S.~Ritter}.
\newblock \emph{\bibinfo{title}{A quantum gate between a flying optical photon
and a single trapped atom}}.
\newblock \href{https://doi.org/10.1038/nature13177}{\bibinfo{journal}{Nature}
\textbf{\bibinfo{volume}{508}}, \bibinfo{pages}{237--240}
(\bibinfo{year}{2014})}.
\bibitem{sackett2000}
\bibinfo{author}{C.~A. Sackett}, \bibinfo{author}{D.~Kielpinski},
\bibinfo{author}{B.~E. King}, \bibinfo{author}{C.~Langer},
\bibinfo{author}{V.~Meyer}, \bibinfo{author}{C.~J. Myatt},
\bibinfo{author}{M.~Rowe}, \bibinfo{author}{Q.~A. Turchette},
\bibinfo{author}{W.~M. Itano}, \bibinfo{author}{D.~J. Wineland} and
\bibinfo{author}{C.~Monroe}.
\newblock \emph{\bibinfo{title}{Experimental entanglement of four particles}}.
\newblock \href{https://doi.org/10.1038/35005011}{\bibinfo{journal}{Nature}
\textbf{\bibinfo{volume}{404}}, \bibinfo{pages}{256--259}
(\bibinfo{year}{2000})}.
\bibitem{Lidar1998}
\bibinfo{author}{D.~A. Lidar}, \bibinfo{author}{I.~L. Chuang} and
\bibinfo{author}{K.~B. Whaley}.
\newblock \emph{\bibinfo{title}{Decoherence-Free Subspaces for Quantum
Computation}}.
\newblock
\href{https://doi.org/10.1103/PhysRevLett.81.2594}{\bibinfo{journal}{Phys.
Rev. Lett.} \textbf{\bibinfo{volume}{81}}, \bibinfo{pages}{2594--2597}
(\bibinfo{year}{1998})}.
\bibitem{haas2014}
\bibinfo{author}{F.~Haas}, \bibinfo{author}{J.~Volz},
\bibinfo{author}{R.~Gehr}, \bibinfo{author}{J.~Reichel} and
\bibinfo{author}{J.~Est{\`e}ve}.
\newblock \emph{\bibinfo{title}{Entangled States of More Than 40 Atoms in an
Optical Fiber Cavity}}.
\newblock
\href{https://doi.org/10.1126/science.1248905}{\bibinfo{journal}{Science}
\textbf{\bibinfo{volume}{344}}, \bibinfo{pages}{180--183}
(\bibinfo{year}{2014})}.
\bibitem{mcconnell2015}
\bibinfo{author}{R.~McConnell}, \bibinfo{author}{H.~Zhang},
\bibinfo{author}{J.~Hu}, \bibinfo{author}{S.~{\'C}uk} and
\bibinfo{author}{V.~Vuleti{\'c}}.
\newblock \emph{\bibinfo{title}{Entanglement with negative Wigner function of
almost 3,000 atoms heralded by one photon}}.
\newblock \href{https://doi.org/10.1038/nature14293}{\bibinfo{journal}{Nature}
\textbf{\bibinfo{volume}{519}}, \bibinfo{pages}{439--442}
(\bibinfo{year}{2015})}.
\bibitem{uphoff2016}
\bibinfo{author}{M.~Uphoff}, \bibinfo{author}{M.~Brekenfeld},
\bibinfo{author}{G.~Rempe} and \bibinfo{author}{S.~Ritter}.
\newblock \emph{\bibinfo{title}{An integrated quantum repeater at telecom
wavelength with single atoms in optical fiber cavities}}.
\newblock
\href{https://doi.org/10.1007/s00340-015-6299-2}{\bibinfo{journal}{Appl.
Phys. B} \textbf{\bibinfo{volume}{122}}, \bibinfo{pages}{46}
(\bibinfo{year}{2016})}.
\bibitem{walls2008}
\bibinfo{author}{D.~Walls} and \bibinfo{author}{G.~Milburn}.
\newblock \emph{\bibinfo{title}{Quantum Optics}}.
\newblock
\href{https://doi.org/10.1007/978-3-540-28574-8}{(\bibinfo{publisher}{Springer Berlin Heidelberg}, 2nd edition, \bibinfo{year}{2008})}.
\bibitem{neuzner2016}
\bibinfo{author}{A.~Neuzner}, \bibinfo{author}{M.~K{\"o}rber},
\bibinfo{author}{O.~Morin}, \bibinfo{author}{S.~Ritter} and
\bibinfo{author}{G.~Rempe}.
\newblock \emph{\bibinfo{title}{Interference and dynamics of light from a
distance-controlled atom pair in an optical cavity}}.
\newblock
\href{https://doi.org/10.1038/nphoton.2016.19}{\bibinfo{journal}{Nat.
Photonics} \textbf{\bibinfo{volume}{10}}, \bibinfo{pages}{303--306}
(\bibinfo{year}{2016})}.
\bibitem{reimann2015}
\bibinfo{author}{R.~Reimann}, \bibinfo{author}{W.~Alt},
\bibinfo{author}{T.~Kampschulte}, \bibinfo{author}{T.~Macha},
\bibinfo{author}{L.~Ratschbacher}, \bibinfo{author}{N.~Thau},
\bibinfo{author}{S.~Yoon} and \bibinfo{author}{D.~Meschede}.
\newblock \emph{\bibinfo{title}{Cavity-Modified Collective {R}ayleigh
Scattering of Two Atoms}}.
\newblock
\href{https://doi.org/10.1103/PhysRevLett.114.023601}{\bibinfo{journal}{Phys.
Rev. Lett.} \textbf{\bibinfo{volume}{114}}, \bibinfo{pages}{023601}
(\bibinfo{year}{2015})}.
\end{thebibliography}
\section{SUPPLEMENTAL MATERIAL}
\label{supplement}
\subsection{Required Cavity Parameters}
While the carving scheme proposed by S{\o}rensen and M{\o}lmer \cite{soerensen2003a_sup} works well only in the case of an exactly symmetric cavity with $\kappa_\text{out}=\kappa/2$, it can be extended to arbitrary combinations of outcoupling rate $\kappa_\text{out}$ and total cavity field decay rate $\kappa$. To this end, we use two polarization modes, $\ket{\text{R}}$ coupling to the atoms and $\ket{\text{L}}$ far off-resonant and serving as a reference. Then, instead of heralding on any reflected photon, we impinge linearly polarized photons $\ket{\text{A}}=\frac{1}{\sqrt2}(\ket{\text{L}}-i\ket{\text{R}})$ and look for photons of orthogonal polarization with respect to the incident mode. This modification makes the scheme not only robust against variations of the cavity QED parameters but also lifts the requirement of perfect transverse optical mode matching between the incoming beam and the cavity, as unmatched light is reflected without polarization flip and triggers no heralding signal. This is crucial since it is challenging to achieve a mode matching above 90\% in our experiment.
We now derive the success rate of our scheme, which depends on the cavity QED parameters $g$, $\kappa$, $\kappa_\text{out}$ and $\gamma$. The number of coupling atoms is denoted by $N$. For our cavity, these parameters are $(g,\kappa,\kappa_\text{out},\gamma)=2\pi\,(7.8,2.5,2.3,3.0)\unit{MHz}$, making the cooperativity $C=Ng^2/(2\kappa\gamma)=4.1$ for $N=1$ coupling atom. In the limit of long photonic wavepackets (compared to the cavity decay time), the reflection amplitude $r$ of resonant light impinging on the outcoupling mirror is given by \cite{walls2008_sup}
\begin{equation}
r(N)=1-\frac{2\kappa_\text{out}\gamma}{Ng^2+\kappa\gamma}=1-\frac{\kappa_\text{out}/\kappa}{C+1/2}\ .
\end{equation}
The reflection amplitude with $N$ coupling atoms holds for $\ket{\text{R}}$-light, whereas $\ket{\text{L}}$ always gets the amplitude $r(0)$. The reflection operator, expressed in the $\ket{\text{R}}$/$\ket{\text{L}}$-basis is diagonal: $\hat{R}=\ket{\text{R}}\bra{\text{R}}r(N)+\ket{\text{L}}\bra{\text{L}}r(0)$. When we impinge a linear input polarization $\ket{\text{A}}=\frac{1}{\sqrt2}(\ket{\text{L}}-i\ket{\text{R}})$, the probability for the photon being reflected in the orthogonal state $\ket{\text{D}}=\frac{1}{\sqrt2}(\ket{\text{L}}+i\ket{\text{R}})$ is
\begin{align}
P_f
&=\left|\bra{\text{D}}\hat{R}\ket{\text{A}}\right|^2\nonumber\\
&=\left|\textstyle\frac12r(N{=}0)-\textstyle\frac12r(N{>}0)\right|^2\nonumber\\
&=\left(\frac{\kappa_\text{out}}{\kappa}\frac{Ng^2}{Ng^2+\gamma\kappa}\right)^2
=\left(\frac{\kappa_\text{out}}{\kappa}\frac{C}{C+1/2}\right)^2
\end{align}
and becomes non-zero whenever $N>0$ and $g>0$.
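As a numerical sanity check (a sketch added here for illustration, not part of the derivation), both expressions can be evaluated with the cavity QED parameters quoted above; the resulting reflectivities agree with the values $|r(0)|^2=0.71$, $|r(1)|^2=0.64$ and $|r(2)|^2=0.80$ reported below.

```python
# Hedged numerical sketch: on-resonance reflection amplitude r(N) and
# polarization-flip probability P_f for the carving scheme, using the
# cavity QED parameters quoted in the text,
# (g, kappa, kappa_out, gamma) = 2*pi * (7.8, 2.5, 2.3, 3.0) MHz.
# The common factor of 2*pi cancels in every ratio below.

g, kappa, kappa_out, gamma = 7.8, 2.5, 2.3, 3.0  # in units of 2*pi MHz

def cooperativity(N):
    """C = N g^2 / (2 kappa gamma) for N coupling atoms."""
    return N * g**2 / (2 * kappa * gamma)

def r(N):
    """Reflection amplitude of resonant light at the outcoupling mirror."""
    return 1 - (kappa_out / kappa) / (cooperativity(N) + 0.5)

def P_f(N):
    """Probability that a reflected photon flips from |A> to |D>."""
    return (0.5 * (r(0) - r(N))) ** 2

print(round(cooperativity(1), 1))  # 4.1, as quoted in the text
print(round(abs(r(0)) ** 2, 2))    # 0.71
print(round(abs(r(1)) ** 2, 2))    # 0.64
print(round(abs(r(2)) ** 2, 2))    # 0.8
print(r(0) < 0 < r(1))             # True: the pi-phase-shift conditions hold
```

The last line makes the sign change of $r$ between $N=0$ and $N>0$, required for the $\pi$-phase shift discussed below, directly visible.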
The total success probability depends on the number of heralding photons $\overline{n}\,P_f$. This favors large $\overline{n}$. However, when the experiment is performed with coherent light pulses containing several photons, the achievable fidelity drops with the number of scattered photons $\overline{n}\,s$ as described in the main text. This favors small $\overline{n}$. Best performance is achieved for a large ratio $P_f/s=\frac{Ng^2\kappa_{\text{out}}}{4\kappa^2\gamma}=\frac{\kappa_\text{out}}{2\kappa}C$, with the cooperativity as the key parameter and an asymmetric cavity with $\kappa_\text{out}\approx\kappa$.
The carving scheme becomes efficient when most of the reflected photons flip their linear polarization for $N>0$. On resonance, this is the case when the reflection amplitude $r(N{>}0)$ for right-circularly polarized coupling light changes its sign, i.e.\ the atoms produce a $\pi$-phase-shift in $r$. The phase of the reflected light is given by $\arg(r)$, the angle of $r$ with the positive real axis. The above expression for $r$ on resonance shows that $r(N{>}0)>r(N{=}0)$. This means that the following conditions need to be fulfilled for the $\pi$-phase shift:
\begin{align}
r(N{>}0)&=1-\frac{2\kappa_\text{out}\gamma}{Ng^2+\kappa\gamma}=1-\frac{\kappa_\text{out}/\kappa}{C+1/2}>0\\
r(N{=}0)&=1-\frac{2\kappa_\text{out}}{\kappa}<0
\end{align}
Here the first inequality $N g^2>\gamma(2\kappa_\text{out}-\kappa)$ is a condition for high cooperativity and the second condition $\kappa_\text{out}>\kappa/2$ implies an asymmetric cavity. Within these limits, the polarization flip of the reflected photons can be expressed as a truth table that heralds the presence of coupled atoms:
\begin{align}
\label{eq:gateD}
&\ket{{\uparrow}{\uparrow}\,\text{A}}\ \rightarrow\ \ket{{\uparrow}{\uparrow}\,\text{D}}\nonumber\\
&\ket{{\uparrow}{\downarrow}\,\text{A}}\ \rightarrow\ \ket{{\uparrow}{\downarrow}\,\text{D}}\nonumber\\
&\ket{{\downarrow}{\uparrow}\,\text{A}}\ \rightarrow\ \ket{{\downarrow}{\uparrow}\,\text{D}}\nonumber\\
&\ket{{\downarrow}{\downarrow}\,\text{A}}\ \rightarrow\ \ket{{\downarrow}{\downarrow}\,\text{A}}
\end{align}
The total reflectivity on resonance in our system, $|r(0)|^2=0.71$, $|r(1)|^2=0.64$ and $|r(2)|^2=0.80$, depends only slightly on the number of coupling atoms.
\subsection{Qubit Preparation and Readout}
\label{sec:stateprepdet}
The full experimental sequence of the double-detection carving of Bell states is shown as a quantum circuit diagram in Fig.\,\figref{fig:stateprep}. All operations for the preparation, manipulation and detection of the state of the two atoms are applied globally, with beams addressing both atoms equally. Even though a discrimination between the two atoms is in principle possible due to their spatial separation, it is not required here. This simplifies the entangling operation from a technological point of view and makes it more robust, e.g.\ against variations in the separation of the atoms or beam pointing instabilities.
\renewcommand{\thefigure}{S1}
\begin{figure}
\caption{\label{fig:stateprep}
Quantum circuit diagram of the full experimental sequence for the double-detection carving of Bell states.}
\end{figure}
\subsubsection{State Preparation of the Two Atoms}
\label{sec:stateprep}
Two pumping schemes are employed to prepare the atoms trapped in the cavity, one resulting in the state $\ket{{\downarrow}{\downarrow}}$ and the other one yielding an incoherent mixture of $\ket{{\uparrow}{\downarrow}}$ and $\ket{{\downarrow}{\uparrow}}$. We use a $\pi$-polarized repump laser on the $\ket{F{=}1}\leftrightarrow\ket{F'{=}2}$ transition to bring all atoms into the $\ket{F{=}2}$ ground-state manifold. For the preparation of antiparallel states, we impinge an additional right-circularly polarized pumping laser beam along the cavity axis that is resonant with the empty cavity and the $\ket{F{=}2,m_F{=}2}\leftrightarrow\ket{F'{=}3,m_F{=}3}$ transition. This pumps at least one atom into the $\ket{{\uparrow}}=\ket{F{=}2,m_F{=}2}$ state (see supplement of \cite{neuzner2016_sup}) within $170\unit{\ensuremath{\textnormal\textmu}{s}}$. The bare atomic transition $\ket{{\uparrow}}\leftrightarrow\ket{e}$ is red-detuned from the cavity by $80\unit{MHz}$. It is shifted into resonance via the dynamical Stark shift induced by a standing wave from a retroreflected $1.6\unit{W}$ trapping laser at a wavelength of $1064\unit{nm}$, propagating perpendicular to the cavity axis. For three-dimensional trapping, two additional, mutually orthogonal, blue-detuned lattices with a wavelength of $771\unit{nm}$ are applied. As the atoms are trapped in the nodes of these lattices, they induce no light shift. Since one atom in $\ket{{\uparrow}}$ strongly reduces the transmission of further right-circularly polarized pumping light due to the strong coupling to the cavity, the second atom remains in a different state $\ket{F{=}2,m_F{\neq}2}$ with $(86\pm4)\%$ probability. The successful pumping of at least one atom is heralded by the reduction in cavity transmission. The pumped atom is then rotated from $\ket{{\uparrow}}$ to $\ket{{\downarrow}}$ employing the Raman laser pair, thereby restoring full cavity transmission for the pumping laser.
We continue the pumping with the repump laser switched off to avoid driving of the atom in $\ket{{\downarrow}}$. A reduction in cavity transmission again heralds the successful pumping of the second atom to $\ket{{\uparrow}}$. Because we do not distinguish which of the two atoms is pumped first, the outcome of this sequence is described by the density matrix $\frac12\ket{{\uparrow}{\downarrow}}\bra{{\uparrow}{\downarrow}}+\frac12\ket{{\downarrow}{\uparrow}}\bra{{\downarrow}{\uparrow}}$. The duration of the whole preparation sequence including the pumping is $270\unit{{\ensuremath{\textnormal\textmu}}s}$.
To initialize the atoms to the state $\ket{{\downarrow}{\downarrow}}$, a different dipole-trap power of the $1064\unit{nm}$ trapping laser is used, such that the atoms are detuned from the empty-cavity resonance by a few MHz. With this, if one of the atoms is in $\ket{{\uparrow}}$, pumping light can still enter and the intra-cavity intensity is about half compared to an empty cavity. Therefore, pumping of the second atom continues when the first one has been prepared. After $170\unit{{\ensuremath{\textnormal\textmu}}s}$ of continuous pumping the state $\ket{{\uparrow}{\uparrow}}$ is prepared with $93\%$ efficiency, and a global $R^\pi_y$ rotation brings the atoms to $\ket{{\downarrow}{\downarrow}}$ (Fig.\,\figref{fig:stateprep}). To increase the overlap of the initial state of the entanglement sequence with $\ket{{\downarrow}{\downarrow}}$, we apply a heralding scheme for the state preparation. We irradiate the atoms with a resonant laser beam impinging transversally to the cavity axis, which yields fluorescence photons if at least one atom is left in $\ket{{\uparrow}}$ or any other state of the $\ket{F{=}2}$ manifold. In those cases, we discard the preparation attempt. Absence of fluorescence photons acts as a reliable herald for the preparation of $\ket{{\downarrow}{\downarrow}}$. Employing this heralding scheme, we achieve an overlap of our prepared states with $\ket{{\downarrow}{\downarrow}}$ of $99\%$.
\subsubsection{State Detection of the Two Atomic Qubits}
\label{sec:statedet}
\renewcommand{\thefigure}{S2}
\begin{figure}
\caption{\label{fig:statedetection23plot}
Photon-number distributions of the two-atom state detection: (a) state detection in transmission; (b) state detection in fluorescence after the interleaved $\pi$ pulse.}
\end{figure}
After the entangling experiment, we employ an analysis pulse with a variable phase $\phi$ and subsequently a state-detection scheme that allows us to measure the populations in the two-atom states $\ket{{\uparrow}{\uparrow}}$, $\ket{{\downarrow}{\downarrow}}$ and the sum of the populations in $\ket{{\uparrow}{\downarrow}}$ and $\ket{{\downarrow}{\uparrow}}$. The state detection starts by probing the transmission through the cavity for $3\unit{{\ensuremath{\textnormal\textmu}}s}$ with a right-circularly polarized laser beam resonant with the empty cavity (``Transmission'' box in Fig.\,\protect\figref{fig:stateprep}). If either of the two atoms occupies the state $\ket{{\uparrow}}$, the transmission decreases due to a normal-mode splitting of the coupled atom-cavity system. Afterwards, a Raman $\pi$ pulse (``$R_y^{\pi}$'' box in Fig.\,\protect\figref{fig:stateprep}) is applied, followed by a second state-detection interval (rightmost ``Fluorescence'' box in Fig.\,\protect\figref{fig:stateprep}). This time, a laser beam resonant with the $\ket{F{=}2,m_F{=}2}\leftrightarrow\ket{F'{=}3,m_F{=}3}$ transition and co-propagating with the Raman lasers irradiates the atoms for $5\unit{{\ensuremath{\textnormal\textmu}}s}$. This fluorescence state detection has a higher discrimination fidelity than the transmission state detection, but does not preserve the $m_F$-populations in $F=2$. It results in a near-Poissonian distribution of fluorescence photons if either of the two atoms is in $\ket{{\uparrow}}$. For both atoms in $\ket{{\uparrow}}$, the number of fluorescence photons strongly depends on the relative position of the atoms \cite{neuzner2016_sup, reimann2015_sup}, which may change from one experimental run to the next. Due to destructive interference of the atoms' emission, $\ket{{\uparrow}{\uparrow}}$ might even lead to fewer fluorescence photons than $\ket{{\uparrow}{\downarrow}}$ or $\ket{{\downarrow}{\uparrow}}$.
Therefore, two state-detection pulses with an interleaved $\pi$ pulse are needed to discriminate between the different states with good fidelity. Experimentally, we characterize the performance of our double-state-detection protocol in an independent measurement. For this, we infer the photon-number distributions for the state detection in transmission and fluorescence for initially prepared atom-atom states $\ket{{\uparrow}{\uparrow}}$, $\ket{{\downarrow}{\downarrow}}$ and $\{\ket{{\uparrow}{\downarrow}},\ket{{\downarrow}{\uparrow}}\}$. For the state detection in transmission, the respective data is shown in Fig.\,\figref{fig:statedetection23plot}(a). We chose the criterion of more than 3 photons signalling the $\ket{{\downarrow}{\downarrow}}$ state. If 3 or fewer photons are observed, a distinction between $\{\ket{{\uparrow}{\downarrow}},\ket{{\downarrow}{\uparrow}}\}$ and $\ket{{\uparrow}{\uparrow}}$ is not possible. We subsequently invert the populations with the $\pi$ pulse and apply the second state detection in fluorescence. The initial state $\ket{{\uparrow}{\uparrow}}$, rotated to $\ket{{\downarrow}{\downarrow}}$ by the $R^\pi_y$ pulse, is distinguished by a high probability of detecting no photons, as can be seen from Fig.\,\figref{fig:statedetection23plot}(b). The largest error is a $5\%$ probability to detect no photon if the atoms were initially prepared in $\{\ket{{\uparrow}{\downarrow}},\ket{{\downarrow}{\uparrow}}\}$ (red bars). Combining the results for both state-detection methods depicted in Fig.\,\figref{fig:statedetection23plot}, we calculate the probability of detecting an initially prepared state correctly. For $\ket{{\uparrow}{\uparrow}}$, this leads to fewer than 3 detected photons in transmission and no detected photon in fluorescence with $97.0\%$ probability. $\ket{{\downarrow}{\downarrow}}$ is indicated by more than 3 photons in transmission and at least one detection event in fluorescence in $97.4\%$ of all attempts.
For $\{\ket{{\uparrow}{\downarrow}},\ket{{\downarrow}{\uparrow}}\}$, a $93.1\%$ probability of detecting 3 or fewer photons in the first step and at least one in the second is observed. The quoted numbers are influenced by an imperfect initial state preparation. We estimate the error of this preparation at approximately $1\%$. In our data, there is also a minor contribution of less than $0.2\%$ of cases in which the two state detections give results incompatible with our discrimination thresholds, namely more than $3$ photons in the first state detection and no fluorescence photons in the second state detection.
\end{document} |
\begin{document}
\title{Kolmogorov Random Graphs and the Incompressibility Method\thanks{A
partial preliminary version appeared in the
{\em Proc. Conference on Compression and Complexity of Sequences}.}}
\begin{abstract}
We investigate topological, combinatorial, statistical, and enumeration
properties of finite graphs with high Kolmogorov complexity
(almost all graphs)
using the novel incompressibility method. Example results are:
(i) the mean and variance of the number
of (possibly overlapping) ordered labeled subgraphs of a labeled graph
as a function of its randomness deficiency (how far it falls short of
the maximum possible Kolmogorov complexity)
and (ii) a new elementary proof for the number of unlabeled graphs.
\end{abstract}
\begin{keywords}
Kolmogorov complexity, incompressibility method, random graphs,
enumeration of graphs, algorithmic information theory
\end{keywords}
\begin{AMS}
68Q30, 05C80, 05C35, 05C30
\end{AMS}
\pagestyle{myheadings}
\thispagestyle{plain}
\markboth{H. BUHRMAN, M. LI, J. TROMP AND P. VITANYI}{KOLMOGOROV RANDOM GRAPHS}
\section{Introduction}
The incompressibility of individual random objects
yields a simple but powerful proof technique.
The incompressibility method,
\cite{LiVi93}, is a new general-purpose
tool and should be compared with
the pigeonhole principle\index{pigeonhole principle} or the
probabilistic method\index{probabilistic method}.
Here we apply the incompressibility method to randomly generated graphs
and ``individually random'' graphs---graphs with high Kolmogorov complexity.
In a typical proof using the incompressibility method,
one first chooses an individually random object from the
class under discussion.
This object is
effectively incompressible.
The argument invariably says that if a desired property
does not hold, then the object
can be compressed. This yields the required contradiction.
Since
a randomly generated object is
{\em with overwhelming probability}
individually random and hence incompressible,
one usually obtains the property with high probability.
{\bf Results}
We apply the incompressibility
method to obtain combinatorial properties of graphs with high Kolmogorov
complexity. These properties are parametri\-zed in terms
of a ``randomness deficiency'' function.\footnote{Randomness deficiency
measures how far the object falls short of the maximum possible
Kolmogorov complexity. It is formally defined in Definition~\ref{def.rg}.}
This can be considered
as a parametri\-zed version of the incompressibility method. In
Section~\ref{sect.topol} we show that:
For every labeled graph on $n$ nodes with high Kolmogorov
complexity (also called ``Kolmogorov random
graph'' or ``high complexity graph'')
the node degree of every vertex is about $n/2$ and
there are about $n/4$ node-disjoint paths of length 2
between every pair of nodes.
In Section~\ref{sect.statsubgr}, we analyze `normality'
properties of Kolmogorov random graphs. In analogy with infinite
sequences one can call an infinite labeled graph `normal' if each
finite ordered labeled subgraph of size $k$ occurs in the appropriate sense
(possibly overlapping) with
limiting frequency $2^{-{k \choose 2}}$. It follows from
Martin-L\"of's theory of effective tests for randomness \cite{Ma66}
that individually random (high complexity) infinite labeled graphs are
normal.
Such properties cannot hold precisely for finite graphs, where
randomness is necessarily a matter of degree: We determine close
quantitative bounds on the normality (frequency of subgraphs) of high complexity
finite graphs in terms of their randomness deficiency.
Denote the number of
unlabeled graphs on $n$ nodes by $g_n$.
In Section~\ref{sect.unlabeled} we demonstrate the use
of the incompressibility method and Kolmogorov random graphs by
providing a new elementary proof that $g_n \sim 2^{n \choose 2} / n!$.
This has previously been obtained
by more advanced methods, \cite{HP73}. Moreover, we give a good
estimate of the error term.
Part of the proof involves estimating the order (number of automorphisms)
$s(G)$ of graphs $G$ as a function of the randomness deficiency
of $G$. For example, we show that labeled
graphs with randomness deficiency appropriately less than $n$
are rigid (have but one automorphism: the identity automorphism).
{\bf Related Work}
Several properties above (high degree nodes, diameter 2,
rigidity) have also been proven by traditional
methods to hold with high probability
for randomly generated graphs, \cite{Bo85}. We provide new proofs
for these results
using the incompressibility method. They are actually
proved to hold for the definite class of
Kolmogorov random graphs---rather than with high probability
for randomly generated graphs.
In \cite{LiVi94b}
(also \cite{LiVi93}) two of us investigated
topological properties of
labeled graphs with high Kolmogorov complexity
and proved them using the incompressibility
method to compare ease of such proofs with the probabilistic
method \cite{ES74} and entropy method.
In \cite{Ki92}
it was shown that every labeled tree on $n$ nodes with randomness
deficiency $O(\log n)$ has
maximum node degree of $O( \log n / \log \log n)$.
Analysis of Kolmogorov random graphs was used to establish
the total interconnect length
of Euclidean (real-world) embeddings of computer network
topologies \cite{Vi95},
and the size of compact routing tables in computer networks
\cite{BHV95}.
Infinite binary sequences that asymptotically have
equal numbers of 0's and 1's, and more generally,
where every block of length $k$
occurs (possibly overlapping) with frequency
$1/2^k$
were called ``normal'' by E. Borel, \cite{Bo14}.
References \cite{LiVi93,LiVi94a}
investigate
the quantitative deviation from normal
as a function of the Kolmogorov complexity of a finite binary string.
Here we consider the analogous question
for Kolmogorov random graphs.
\footnote{There are some results
along these lines related to randomly generated graphs,
but as far as the
authors could
ascertain
(consulting Alan Frieze, Svante Janson, and
Andrzej Rucinski around June 1996)
such properties
have not been investigated in the same detail as here.
See for example
\cite{ASE92}, pp. 125--140. But note that
also pseudo-randomness is different
from Kolmogorov randomness.
}
Finally, there is a close relation and
genuine differences between
high-probability properties
and properties of incompressible objects,
see \cite{LiVi93}, Section 6.2.
\subsection{Kolmogorov complexity}
We use the following notation.
Let $A$ be a finite set. By $d(A)$ we denote the {\em cardinality}
of $A$. In particular, $d(\emptyset)=0$. Let $x$ be
a finite binary string. Then $l(x)$ denotes the {\em length}
(number of bits) of $x$. In particular, $l(\epsilon)=0$
where $\epsilon$ denotes the {\em empty word}.
Let $x,y,z \in {\cal N}$, where
${\cal N}$ denotes the natural
numbers. Identify
${\cal N}$ and $\{0,1\}^*$ according to the
correspondence
\[(0, \epsilon ), (1,0), (2,1), (3,00), (4,01), \ldots . \]
Hence, the length $l(x)$ of $x$ is the number of bits
in the binary string or number $x$.
Let $T_0 ,T_1 , \ldots$ be a standard enumeration
of all Turing machines.
Let $\langle \cdot ,\cdot \rangle$ be a standard one-one mapping
from ${\cal N} \times {\cal N}$
to ${\cal N}$, for technical reasons chosen such that
$l(\langle x ,y \rangle) = l(y)+O(l(x))$.
An example is $\langle x ,y \rangle = 1^{l(x)}0xy$.
This can be iterated to
$\langle \langle \cdot , \cdot \rangle , \cdot \rangle$.
Informally, the Kolmogorov complexity, \cite{Ko65},
of $x$ is the length of the
{\em shortest} effective description of $x$.
That is, the {\em Kolmogorov complexity} $C(x)$ of
a finite string $x$ is simply the length
of the shortest program, say in
FORTRAN (or in Turing machine codes)
encoded in binary, which prints $x$ without any input.
A similar definition holds conditionally, in the sense that
$C(x|y)$ is the length of the shortest binary program
which computes $x$ on input $y$.
Kolmogorov complexity is absolute in the sense
of being independent of the programming language,
up to a fixed additional constant term which depends on the programming
language but not on $x$. We now fix one canonical programming
language once and for all as reference and thereby $C(\cdot)$.
For the theory and applications, see \cite{LiVi93}.
A formal definition is as follows:
\begin{definition}
\rm
Let $U$ be an appropriate universal Turing machine
such that
\[U(\langle \langle i,p \rangle ,y \rangle ) =
T_i (\langle p,y\rangle) \]
for all $i$ and $\langle p,y\rangle$.
The {\em conditional Kolmogorov complexity} of $x$ given $y$
is
\[C(x|y) = \min_{p \in \{0,1\}^*} \{l(p): U (\langle p,y\rangle)=x \}. \]
The unconditional Kolmogorov complexity of $x$ is defined
as $C(x) := C(x| \epsilon )$.
\end{definition}
It is easy to see that there are strings that can be described
by programs much shorter than themselves. For instance, the
function defined by $f(1) = 2$ and $f(i) = 2^{f(i-1)}$
for $i>1$ grows very fast, $f(k)$ is a ``stack'' of $k$ twos.
Yet for each $k$ it is clear that $f(k)$
has complexity at most $C(k) + O(1)$.
What about incompressibility?
By a simple counting argument one can show
that whereas some strings can be enormously compressed,
the majority of strings can hardly be compressed
at all.
For each $n$ there are $2^n$ binary
strings of length $n$, but only
$\sum_{i=0}^{n-1} 2^i = 2^n -1$ possible shorter descriptions.
Therefore, there is at least one binary string
$x$ of length $n$ such that $C(x) \geq n$.
We call such strings {\em incompressible}. It also
follows that for any length $n$ and any binary string $y$,
there is a binary string $x$ of length $n$ such that
$C(x| y) \geq n$. Generally,
for every constant $c$ we call a string $x$
{\em $c$-incompressible}
if $C(x) \geq l(x) -c$.
Strings that are incompressible (say, $c$-incompressible
with small $c$) are patternless,
since a pattern could be used to reduce
the description length. Intuitively, we
think of
such patternless sequences
as being random, and we
use ``random sequence''
synonymously with ``incompressible sequence.''
\footnote{It is possible to give a rigorous
formalization of the intuitive notion
of a random sequence as a sequence that passes all
effective tests for randomness, see for example \cite{LiVi93}.
}
By the same counting argument as before we find that the number
of strings of length $n$ that are $c$-incompressible
is at least $2^n - 2^{n-c} +1$. Hence
there is at least one 0-incompressible string of length $n$,
at least one-half of all strings of length $n$ are 1-incompressible,
at least three-fourths of all strings
of length $n$ are 2-incompressible, \ldots , and
at least the $(1- 1/2^c )$th part
of all $2^n$ strings of length $n$ are $c$-incompressible. This means
that for each constant $c \geq 1$ the majority of all
strings of length $n$ (with $n > c$) is $c$-incompressible.
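The counting argument can be made concrete with a short sketch (added here for illustration):

```python
# The counting argument of the text made concrete: there are at most
# sum_{i=0}^{n-c-1} 2^i = 2^{n-c} - 1 binary descriptions shorter than
# n - c bits, and each describes at most one string, so at least
# 2^n - 2^{n-c} + 1 of the 2^n strings of length n are c-incompressible.

def min_c_incompressible(n, c):
    """Lower bound on the number of c-incompressible strings of length n."""
    return 2**n - (2**(n - c) - 1)

def fraction_bound(n, c):
    """Lower bound on the fraction of c-incompressible strings of length n."""
    return min_c_incompressible(n, c) / 2**n

print(min_c_incompressible(8, 0))  # 1: at least one incompressible string
print(fraction_bound(8, 1))        # 0.50390625 (= 129/256), over one half
print(fraction_bound(8, 2))        # 0.75390625 (= 193/256), over three fourths
```

The bound $1 - 1/2^c$ is approached from above as $n$ grows, matching the statement that for each constant $c \geq 1$ the majority of strings of length $n > c$ is $c$-incompressible.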
We generalize this to the following simple but extremely
useful:
\begin{lemma}
\label{C2}
Let $c$ be a positive integer.
For each fixed $y$, every
set $A$ of cardinality $m$ has at least $m(1 - 2^{-c} ) + 1$
elements $x$ with $C(x| y) \geq \lfloor \log m \rfloor - c$.
\end{lemma}
\begin{proof}
By simple counting.
\end{proof}
As an example, set $A = \{ x: l(x) = n \} $. Then the cardinality
of $A$ is $m = 2^n$.
Since it is easy to assert that $C(x) \leq n + c$ for some
fixed $c$ and all $x$ in $A$, Lemma~\ref{C2} demonstrates
that this trivial estimate is quite sharp. The deeper
reason is that since there are few short programs, there can
be only few objects of low complexity. We require another quantity:
the prefix Kolmogorov complexity which is defined just as $C( \cdot | \cdot)$
but now with respect to a subset of Turing machines that have
the property that the set of programs for which the machine halts
is prefix-free, that is, no halting program is a prefix of any other
halting program. For details see \cite{LiVi93}. Here we require
only the quantitative relation below.
\begin{definition}
The {\em prefix} Kolmogorov complexity of $x$ conditional to $y$
is denoted by $K(x|y)$. It satisfies the inequality
\[C(x|y) \leq K(x|y) \leq C(x|y) + 2 \log C(x|y) + O(1) .\]
\end{definition}
\section{Kolmogorov Random Graphs}\label{sect.KRgraphs}
\label{sect.topol}
Statistical properties of strings with high Kolmogorov complexity
have been studied in \cite{LiVi94a}.
The interpretation of strings as
more complex combinatorial objects leads to a
new set of properties and problems that have no direct
counterpart in the ``flatter'' string world. Here we
derive topological, combinatorial, and statistical
properties of graphs with high Kolmogorov complexity.
Every such graph
possesses simultaneously all properties
that hold with high probability for randomly generated graphs.
They constitute ``almost all graphs'' and the derived properties
a fortiori hold with probability that goes to 1
as the number of nodes grows unboundedly.
\begin{definition}\label{def.gc}
\rm
Each labeled graph $G=(V,E)$ on $n$ nodes $V=\{1,2,\ldots, n\}$
can be represented
by a binary string $E(G)$ of length ${n \choose 2}$. We
simply assume a fixed ordering of the
${n \choose 2}$ possible edges in an $n$-node graph, e.g.
lexicographically, and
let the $i$th bit in the string indicate presence (1) or absence (0) of
the $i$th edge. Conversely, each
binary string of length ${n \choose 2}$ encodes an $n$-node graph.
Hence we can identify each such graph with
its binary string representation.
\end{definition}
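A small sketch of this encoding (assuming, as suggested in the definition, the lexicographic ordering of the ${n \choose 2}$ node pairs):

```python
# Sketch of the definition: a labeled graph on nodes {1,...,n} is
# identified with the binary string E(G) of length n(n-1)/2 whose i-th
# bit records presence (1) or absence (0) of the i-th node pair, with
# pairs (i, j), i < j, taken in lexicographic order.
from itertools import combinations

def encode(n, edges):
    """Return the bit string E(G) for the edge list of a labeled graph."""
    edge_set = {frozenset(e) for e in edges}
    return ''.join('1' if frozenset(p) in edge_set else '0'
                   for p in combinations(range(1, n + 1), 2))

def decode(n, bits):
    """Recover the edge list from a bit string of length n(n-1)/2."""
    pairs = combinations(range(1, n + 1), 2)
    return [p for p, b in zip(pairs, bits) if b == '1']

# A triangle on {1,2,3} plus an isolated node 4; the pair order is
# (1,2),(1,3),(1,4),(2,3),(2,4),(3,4).
s = encode(4, [(1, 2), (1, 3), (2, 3)])
print(s)             # 110100
print(decode(4, s))  # [(1, 2), (1, 3), (2, 3)]
```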
\begin{definition}\label{def.rg}
\rm
A labeled graph $G$ on $n$
nodes has {\em randomness deficiency}\index{randomness deficiency}
at most $\delta (n)$, and is called
$\delta (n)$-{\em random},
if it satisfies
\begin{equation}\label{eq.KG}
C(E(G)|n ) \geq {n \choose 2} - \delta (n).
\end{equation}
\end{definition}
\subsection{Some Basic Properties}
Applying Lemma~\ref{C2},
with $y=n$, $A$ the set of strings of length ${n \choose 2}$, and
$c=\delta(n)$, gives us
\begin{lemma}\label{lem.frac}
A fraction of at least
\label{eq.count}
$1 - 1/2^{\delta (n)}$
of all labeled graphs $G$ on $n$ nodes is $\delta (n)$-random.
\end{lemma}
As a consequence, for example, the $c \log n$-random
labeled graphs constitute a fraction of
at least $(1 - 1/n^c)$ of all graphs on
$n$ nodes, where $c>0$ is an arbitrary
constant.
Labeled graphs with high complexity have many specific topological
properties, which may seem to contradict their randomness.
However, these are simply the likely properties, whose absence would
be rather unlikely.
Thus, randomness enforces strict statistical regularities:
for example, having diameter exactly two.
We will use the following lemma (Theorem 2.6.1 in \cite{LiVi93}):
\begin{lemma}\label{blockszerotex}
Let $x=x_1 \ldots x_n$ be a binary string of
length $n$, and $y$ a much smaller string
of length $l$. Let $p = 2^{-l}$ and
$\#y(x)$ be the number of
(possibly overlapping) distinct occurrences of $y$ in $x$.
For convenience, we assume that
$x$ ``wraps around'' so that an occurrence
of $y$ starting at the end of $x$
and continuing at the start also counts.
Assume that $l \leq \log n$.
There is a constant $c$
such that for all $n$ and $x \in \{0,1\}^n$,
if $C(x) \geq n - \delta(n)$, then
\[ |\#y(x)-pn| \leq \sqrt{ \alpha pn},\]
with $\alpha = [K(y|n)+\log l + \delta(n)+c] 3l / \log e $.
\end{lemma}
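The quantity $\#y(x)$ of Lemma~\ref{blockszerotex}, with the wraparound convention, is straightforward to compute; a minimal sketch (ours):

```python
def count_occurrences(x, y):
    # Count (possibly overlapping) occurrences of y in x, reading x
    # cyclically: an occurrence may start near the end of x and wrap
    # around to the beginning.
    n, l = len(x), len(y)
    extended = x + x[:l - 1]  # unrolls the wraparound
    return sum(extended[i:i + l] == y for i in range(n))
```

For $x=\texttt{1111}$ and $y=\texttt{11}$ this returns 4, one occurrence for each cyclic starting position.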
\begin{lemma}\label{lem.diam}
All $o(n)$-random labeled graphs have $n/4+o(n)$
disjoint paths of length 2 between each pair of nodes $i,j$.
In particular, all $o(n)$-random labeled graphs
have diameter 2.
\end{lemma}
\begin{proof}
The only graphs with diameter 1 are the complete graphs, which
can be described in $O(1)$ bits, given $n$, and hence are not random.
It remains to consider an $o(n)$-random graph $G=(V,E)$ with diameter
greater than or equal to 2. Let $i,j$ be a pair of nodes connected
by $r$ disjoint paths of length 2.
Then we can describe $G$ by modifying the old
code for $G$ as follows:
\begin{itemize}
\item
A program to reconstruct the object from the various
parts of the encoding in $O(1)$ bits;
\item
The identities of the two nodes $i < j$ in $2 \log n$ bits;
\item
The old code $E(G)$ of $G$ with the $2(n-2)$ bits representing
presence or absence of edges $(j,k)$ and $(i,k)$ for each
$k \neq i,j$ deleted;
\item A shortest program for the string $e_{i,j}$ consisting of the (reordered)
$n-2$ pairs of bits deleted above.
\end{itemize}
From this description we can reconstruct $G$ in
\[ O(\log n) + {n \choose 2} - 2(n-2) + C(e_{i,j}|n) \]
bits, from which we may conclude that $C(e_{i,j}|n) \geq l(e_{i,j}) - o(n)$.
As shown in \cite{LiVi94a} or \cite{LiVi93} (here Lemma~\ref{blockszerotex})
this implies that the frequency of occurrence in $e_{i,j}$
of the aligned 2-bit block `11'---which by construction equals the number of
disjoint paths of length 2 between $i$ and $j$---is $n/4 + o(n)$.
\end{proof}
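The count of disjoint length-2 paths used in Lemma~\ref{lem.diam} is simply the number of common neighbours of $i$ and $j$, that is, the number of aligned `11' pairs in $e_{i,j}$; a sketch (the adjacency-set representation is ours):

```python
def disjoint_two_paths(adj, i, j):
    # adj maps each node to its set of neighbours; each common
    # neighbour k of i and j gives one path i-k-j, and paths through
    # different nodes k are node-disjoint.
    return sum(1 for k in adj
               if k not in (i, j) and k in adj[i] and k in adj[j])
```

In the 4-cycle on nodes 1,2,3,4, the opposite nodes 1 and 3 have two common neighbours (2 and 4), hence two disjoint paths of length 2.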
A graph is {\em $k$-connected} if there are at least $k$ node-disjoint
paths between every pair of nodes.
\begin{corollary}
All $o(n)$-random labeled graphs are $( \frac{n}{4}+o(n))$-connected.
\end{corollary}
\begin{lemma}
Let $G=(V,E)$ be a graph on $n$ nodes with randomness deficiency
$O(\log n)$. Then the largest clique in $G$ has at most
$\lfloor 2 \log n \rfloor + O(1)$ nodes.
\end{lemma}
\begin{proof}
The proof is analogous to that of the bound on the largest transitive
subtournament of a high-complexity tournament in \cite{LiVi93}.
\end{proof}
For the corresponding property of random graphs, it is shown in
\cite{ASE92}, pp.~86--87, that a random graph with edge probability
$1/2$ contains a clique on asymptotically $2 \log n$ nodes with probability
at least $1-e^{-n^2}$.
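On small instances the clique bound can be checked exhaustively; the following brute-force sketch (ours, exponential in $n$) returns the clique number:

```python
from itertools import combinations

def largest_clique(n, adj):
    # Try candidate sizes from n downward; the first size admitting a
    # fully interconnected node subset is the clique number.
    for size in range(n, 1, -1):
        for nodes in combinations(range(1, n + 1), size):
            if all(b in adj[a] for a, b in combinations(nodes, 2)):
                return size
    return 1 if n >= 1 else 0
```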
\subsection{Statistics of Subgraphs}\label{sect.statsubgr}
We start by defining the notion of labeled subgraph of a labeled graph.
\begin{definition}
\rm
Let $G=(V,E)$ be a labeled graph on $n$ nodes.
Consider a labeled graph
$H$ on $k$ nodes $\{1,2, \ldots,k\}$.
Each
subset of $k$ nodes of $G$ induces a
subgraph $G_k$ of $G$. The subgraph $G_k$
is an ordered labeled {\em occurrence} of $H$ when we obtain $H$ by
relabeling the nodes $i_1< i_2< \cdots < i_k$ of $G_k$ as
$1,2, \ldots, k$.
\end{definition}
It is easy to conclude from the statistics of high-complexity
strings in
Lemma~\ref{blockszerotex}
that the frequency of each of the two labeled two-node subgraphs
(there are only two different ones: the graph consisting
of two isolated nodes and the graph consisting of
two connected nodes)
in a $\delta (n)$-random graph $G$ is
\[ \frac{ n(n-1)}{4} \pm \sqrt{ \frac{3}{4}(\delta (n)+ O(1)) n(n-1)/ \log e}. \]
This case is easy since the frequency of such subgraphs
corresponds to the frequency of 1's or $0$'s in the ${n \choose 2}$-length
standard encoding $E(G)$ of $G$.
However, determining the frequencies of labeled subgraphs
on $k$ nodes (up to isomorphism) for $k>2$ is more complicated
than determining the frequencies of substrings of length $k$.
Clearly, there are ${n \choose k}$ subsets of $k$ nodes out of $n$
and hence that many occurrences of subgraphs. Such subgraphs may
overlap in more complex ways than substrings of a string.
Let $\#H(G)$ be {\em the number of times $H$ occurs}
as an ordered labeled subgraph of $G$ (possibly overlapping). Let
$p$ be the probability that we obtain $H$ by flipping a fair
coin to decide for each pair of nodes whether
it is connected by an edge or not,
\begin{equation}\label{eq.defp}
p=2^{-k(k-1)/2}.
\end{equation}
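The subgraph count $\#H(G)$ can be spelled out directly from the definitions; a brute-force sketch (ours), feasible only for small $n$ and $k$:

```python
from itertools import combinations

def subgraph_count(n, G_adj, k, H_edges):
    # #H(G): the number of k-node subsets of G whose induced subgraph,
    # after relabeling the chosen nodes (in increasing order) as
    # 1,...,k, equals the labeled graph H given by its edge set.
    H = {tuple(sorted(e)) for e in H_edges}
    count = 0
    for nodes in combinations(range(1, n + 1), k):
        relabel = {v: t + 1 for t, v in enumerate(nodes)}
        induced = {(relabel[u], relabel[v])
                   for u, v in combinations(nodes, 2) if v in G_adj[u]}
        count += (induced == H)
    return count
```

For a triangle on $\{1,2,3\}$ plus an isolated node 4, the single-edge two-node graph occurs 3 times, as does the empty two-node graph, matching the ${n \choose 2}$-pair statistics above.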
\begin{theorem}\label{theo.freqG}
Assume the terminology above with
$G=(V,E)$ a labeled graph on $n$ nodes, $k$ is a positive integer
dividing $n$, and
$H$ is a labeled graph on $k \leq \sqrt{2 \log n}$ nodes. Let
$C(E(G)|n ) \geq {n \choose 2} - \delta (n)$. Then
\[ \left|\#H(G)- {n \choose k}p \right| \leq
{n \choose k} \sqrt{\alpha (k/n) p} , \]
with
$\alpha := (K(H|n) + \delta(n) + \log {n \choose k}/(n/k) + O(1))
3 / \log e$.
\end{theorem}
\begin{proof}
A {\em cover} of $G$ is a
set $C= \{S_1, \ldots , S_N\}$ with $N=n/k$, where the $S_i$'s are pairwise
disjoint subsets of $V$
and $\bigcup_{i=1}^N S_i = V$.
According to \cite{Ba74}:
\begin{claim}\label{claim.bara}
\rm
There is a partition of the ${n \choose k}$ different $k$-node subsets
into $h={n \choose k}/N$ distinct covers of $G$,
each cover consisting of $N = n/k$ disjoint subsets. That is,
each subset of $k$ nodes of $V$ belongs to precisely one cover.
\end{claim}
Enumerate the covers as $C_0, C_1 , \ldots , C_{h-1}$.
For each $i \in \{ 0,1, \ldots, h-1 \}$
and $k$-node labeled graph $H$, let $\#H(G,i)$ be the number
of (now non-overlapping) occurrences of subgraph $H$ in $G$
occurring in cover $C_i$.
Now consider an experiment of $N$ trials, each trial with
the same set of $2^{k(k-1)/2}$
outcomes.
Intuitively, each trial
corresponds to an element of a cover, and each outcome
corresponds to a $k$-node subgraph.
For every $i$ we can form a string $s_i$ consisting of the
$N$ blocks of ${k \choose 2}$ bits that represent presence
or absence of edges within the induced subgraphs
of each of the $N$ subsets of $C_i$.
Since $G$ can be reconstructed from $n,i,s_i$ and the remaining
${n \choose 2} - N {k \choose 2}$ bits of $E(G)$,
we find that $C(s_i|n) \geq l(s_i) - \delta (n) - \log h$.
Again, according to Lemma~\ref{blockszerotex} this implies
that the frequency of occurrence of the aligned ${k \choose 2}$-block
$E(H)$, which is $\#H(G,i)$, equals
\[ Np \pm \sqrt{Np \alpha},
\]
with $\alpha$ as in the theorem statement.
One can do this for each $i$ independently,
notwithstanding the dependence between
the frequencies of subgraphs in different covers. Namely, the argument
depends on the incompressibility of $G$ alone. If the number
of occurrences of a certain subgraph in {\em any} of the
covers is too large or too small then we can compress $G$.
Now,
\begin{eqnarray*}
\left| \#H(G) - p{n \choose k} \right| & \leq &
\sum_{i=0}^{h-1} |\#H(G,i)-Np| \\
& \leq & {n \choose k} \sqrt{\alpha (k/n)p}.
\end{eqnarray*}
\end{proof}
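Claim~\ref{claim.bara} is an instance of Baranyai's theorem; the general construction is involved, but for $k=2$ and even $n$ the covers are the perfect matchings of the classic round-robin schedule. The sketch below (our construction, not the one in \cite{Ba74}) fixes node $n$ and rotates the rest:

```python
def round_robin_covers(n):
    # For even n, partition all n*(n-1)//2 node pairs into n-1 covers,
    # each a perfect matching: fix node n, rotate nodes 1..n-1, and in
    # each round pair the fixed node with the front of the rotation
    # and pair the remaining rotated nodes symmetrically.
    others = list(range(1, n))
    covers = []
    for r in range(n - 1):
        rot = others[r:] + others[:r]
        cover = [tuple(sorted((rot[0], n)))]
        cover += [tuple(sorted((rot[i], rot[n - 1 - i])))
                  for i in range(1, n // 2)]
        covers.append(cover)
    return covers
```

For $n=4$ this yields the three perfect matchings of $K_4$, each pair of nodes appearing in exactly one cover.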
In \cite{LiVi93,LiVi94a} we investigated up to which
length $l$ all blocks of length $l$ occurred at least once in
each $\delta (n)$-random string of length $n$.
\begin{theorem}
Let $\delta (n) < 2^{\sqrt{\frac{1}{2} \log n}}/4 \log n $
and $G$ be a $\delta (n)$-random graph on $n$ nodes.
Then for sufficiently large $n$, the graph $G$
contains all subgraphs on
$\sqrt{ 2 \log n}$ nodes.
\end{theorem}
\begin{proof}
We are sure that $H$ on $k$ nodes occurs at least once in $G$
if $ {n \choose k} \sqrt{ \alpha (k/n) p}$
in Theorem~\ref{theo.freqG} is less than ${n \choose k}p$.
This is the case if $\alpha < (n/ k) p$.
This inequality is satisfied when we overestimate
$K(H|n)$ by
${k \choose 2} + 2 \log {k \choose 2} +O(1)$
(since $K(H|n) \leq K(H)+O(1)$), substitute $p=2^{-k(k-1)/2}$,
and set
$k = \sqrt{ 2 \log n}$. This proves the theorem.
\end{proof}
\index{statistical properties!of graphs|)}
\subsection{Unlabeled Graph Counting}
\label{sect.unlabeled}
An unlabeled graph is a graph with no labels. For convenience we can
define this as follows: Call two labeled graphs {\em equivalent}
(up to relabeling) if there is a relabeling that makes them equal.
An {\em unlabeled graph} is an equivalence class of labeled graphs.
An {\em automorphism} of $G=(V,E)$ is a permutation $\pi$ of $V$
such that $(\pi(u),\pi(v)) \in E$ iff $(u,v)\in E$.
Clearly, the set of automorphisms of a graph forms a group
with group operation of function composition and
the identity permutation as unity. It is easy to verify that
$\pi$ is an automorphism of $G$ iff $\pi (G)$ and $G$
have the {\em same binary string standard encoding}, that is,
$E(G)=E(\pi (G))$. This contrasts
with the more general case of permutation relabeling, where the
standard encodings may be different.
A graph is {\em rigid} if its only automorphism is the identity automorphism.
It turns out that Kolmogorov random graphs are rigid graphs.
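The characterization of automorphisms via equal standard encodings gives an immediate (exponential-time) rigidity test; a sketch with helper names of our choosing:

```python
from itertools import combinations, permutations

def encoding(n, adj):
    # Standard encoding E(G): one bit per node pair, in lexicographic
    # order of the pairs.
    return ''.join('1' if j in adj[i] else '0'
                   for i, j in combinations(range(1, n + 1), 2))

def is_rigid(n, adj):
    # G is rigid iff no nontrivial permutation pi satisfies
    # E(pi(G)) = E(G).
    code = encoding(n, adj)
    identity = tuple(range(1, n + 1))
    for pi in permutations(range(1, n + 1)):
        if pi == identity:
            continue
        relabeled = {pi[i - 1]: {pi[j - 1] for j in adj[i]} for i in adj}
        if encoding(n, relabeled) == code:
            return False
    return True
```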
To obtain an expression for the number
of unlabeled graphs we have to estimate the number
of automorphisms of a graph in terms of its randomness deficiency.
In \cite{HP73} an asymptotic expression for the number of unlabeled
graphs is derived using sophisticated methods.
We give a new elementary proof by incompressibility.
Denote by $g_n$ the number of unlabeled graphs on $n$
nodes---that is, the number of isomorphism classes in the set ${\cal G}_n$
of undirected graphs on nodes $\{0,1,\ldots,n-1\}$.
\begin{theorem}\label{theo.unlabeled}
$g_n \sim \frac{2^{n \choose 2}}{n!}$.
\end{theorem}
\begin{proof}
Clearly,
\[ g_n = \sum_{G \in {\cal G}_n} \frac{1}{d( \bar{G})}, \]
where $d(\cdot)$ denotes cardinality and $\bar{G}$ is the
isomorphism class of graph $G$. By elementary group theory,
\[ d(\bar{G}) = \frac{d(S_n)}{d({\it Aut}(G))} = \frac{n!}{d({\it Aut}(G))}, \]
where $S_n$ is the group of permutations on $n$ elements, and ${\it Aut}(G)$
is the automorphism group of $G$.
Let us partition ${\cal G}_n$ into ${\cal G}_n = {\cal G}_n^0 \cup \ldots \cup {\cal G}_n^n$,
where ${\cal G}_n^m$ is the set of graphs for which $m$ is the number of nodes
moved (mapped to another node) by any of its automorphisms.
\begin{claim}
For $G \in {\cal G}_n^m$, $d( {\it Aut}(G)) \leq n^m = 2^{m\log n}$.
\end{claim}
\begin{proof}
$d( {\it Aut}(G)) \leq {n \choose m}m! \leq n^m$.
\end{proof}
Consider the uniform distribution on ${\cal G}_n$, under which
each graph $G \in {\cal G}_n$ has probability
${\it Prob}(G) = 2^{-{n \choose 2}}$.
\begin{claim}
${\it Prob}(G \in {\cal G}_n^m) \leq 2^{-m(\frac{n}{2}-\frac{3m}{8} -\log n)}$.
\end{claim}
\begin{proof}
By Lemma~\ref{lem.frac} it suffices to show that, if
$G \in {\cal G}_n^m$ and
\[ C(E(G)|n,m) \geq {n \choose 2} - \delta (n,m) \]
then $\delta (n,m)$
satisfies
\begin{equation}\label{eq.dnm}
\delta (n,m) \geq m(\frac{n}{2}-\frac{3m}{8} -\log n).
\end{equation}
Let $\pi \in {\it Aut}(G)$ move $m$ nodes. Suppose $\pi$ is the product of $k$
disjoint cycles of sizes $c_1,\ldots, c_k$.
Spend at most $m \log n$ bits describing $\pi$:
For example, if the nodes $i_1 < \cdots < i_m$ are moved
then list the sequence $\pi(i_1),\ldots, \pi(i_m)$. Writing
the nodes of the latter sequence in increasing order
we obtain $i_1 , \dots , i_m$ again, that is, we execute permutation $\pi^{-1}$,
and hence we obtain $\pi$.
Select one node from each cycle---say, the lowest numbered one.
Then for every unselected node on a cycle,
we can delete the $n-m$ bits corresponding
to the presence or absence of edges to stable nodes,
and $m-k$ half-bits corresponding to presence or absence of edges to the other,
unselected cycle nodes.
In total we delete
\[ \sum_{i=1}^{k} (c_i -1)(n-m + \frac{m-k}{2}) = (m-k)(n- \frac{m+k}{2}) \]
bits. Observing that $k=m/2$ is the largest possible value for $k$,
we arrive at the claimed bound on
$\delta (n,m)$ in Equation~\ref{eq.dnm} (the difference between
savings and expenditure is
$\frac{m}{2}(n- \frac{3m}{4}) - m \log n$).
\end{proof}
We continue the proof of the main theorem:
\[g_n = \sum_{G \in {\cal G}_n} \frac{1}{d(\bar{G})}
= \sum_{G \in {\cal G}_n} \frac{d({\it Aut}(G))}{n!}
= \frac{2^{{n \choose 2}}}{n!} E_n,
\]
where
$E_n := \sum_{G \in {\cal G}_n} {\it Prob}(G) d({\it Aut}(G))$
is the expected size of the automorphism group of a graph on $n$ nodes.
Clearly, $E_n \geq 1$, yielding the lower bound on $g_n$.
For the upper bound on $g_n$, noting that ${\cal G}_n^1 = \emptyset$ and using
the above claims, we find
\begin{eqnarray*}
E_n & = & \sum_{m=0}^n {\it Prob}(G \in {\cal G}_n^m) {\it Avg}_{G \in {\cal G}_n^m} d({\it Aut}(G)) \\
& \leq & 1 + \sum_{m=2}^n 2^{-m(\frac{n}{2}-\frac{3m}{8} -2 \log n)} \\
& \leq & 1 + 2^{-(n - 4 \log n - 2)},
\end{eqnarray*}
which proves the theorem.
\end{proof}
The proof of the theorem shows that the error in the asymptotic
expression is very small:
\begin{corollary}
$\frac{2^{{n \choose 2}}}{n!} \leq g_n \leq
\frac{2^{{n \choose 2}}}{n!} (1+ \frac{4n^4}{2^n})$.
\end{corollary}
Since $m=1$ is impossible, it follows from Equation~\ref{eq.dnm} that:
\begin{corollary}
If a graph $G$ has randomness deficiency slightly less than $n$
(more precisely, $C(E(G)|n) \geq {n \choose 2} - n - \log n -2$)
then $G$ is rigid.
\end{corollary}
The expression for $g_n$ can be used to determine
the maximal complexity of an unlabeled graph on $n$ nodes.
Namely, we can effectively enumerate all unlabeled graphs as follows:
\begin{itemize}
\item
Effectively enumerate all labeled graphs on $n$ nodes
by enumerating all binary strings of length ${n \choose 2}$ and
for each labeled graph $G$ do:
\subitem
If $G$ cannot be obtained by relabeling from any previously
enumerated labeled graph then $G$ is added to the set of
unlabeled graphs.
\end{itemize}
This way we obtain each unlabeled graph
by precisely one labeled graph representing it.
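The enumeration above is easy to carry out for tiny $n$; the sketch below (ours) keeps one canonical representative per relabeling class and recovers the known values $g_1, \ldots, g_4 = 1, 2, 4, 11$, in line with $g_n \sim 2^{n \choose 2}/n!$.

```python
from itertools import combinations, permutations

def count_unlabeled(n):
    # Enumerate all 2^(n(n-1)/2) labeled graphs as bit tuples and keep
    # one canonical (lexicographically minimal under relabeling)
    # representative per equivalence class.
    pairs = list(combinations(range(n), 2))
    index = {p: i for i, p in enumerate(pairs)}
    classes = set()
    for code in range(2 ** len(pairs)):
        bits = tuple((code >> i) & 1 for i in range(len(pairs)))
        canon = min(tuple(bits[index[tuple(sorted((pi[a], pi[b])))]]
                          for a, b in pairs)
                    for pi in permutations(range(n)))
        classes.add(canon)
    return len(classes)
```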
Since we can describe each unlabeled graph by its index
in this enumeration, we find
by Theorem~\ref{theo.unlabeled} and
Stirling's formula\index{Stirling's formula} that
if $G$ is an unlabeled graph then
\[ C(E(G)|n) \leq {n \choose 2} - n \log n + O(n) . \]
\begin{theorem}\label{theo.drop}
Let $G$ be a labeled graph on $n$ nodes and let $G_0$ be
the unlabeled version of $G$. There exists a graph
$G'$ and a label permutation $\pi$ such that $G' = \pi(G)$
and up to additional constant terms $C(E(G'))=C(E(G_0))$ and
$C(E(G)|n) = C(E(G_0) , \pi |n )$.
\end{theorem}
By Theorem~\ref{theo.drop},
for {\em every} graph $G$ on $n$ nodes with maximum complexity
there is a relabeling (permutation)
that causes the complexity to drop by as much as
$n \log n$. Our proofs of topological properties
by the incompressibility method
required the graph $G$ to be Kolmogorov random in the sense of
$C(E(G)|n) \geq {n \choose 2} - O(\log n)$ or for some results
$C(E(G)|n) \geq {n \choose 2} - o(n)$.
Hence by relabeling such a graph
we can always obtain a labeled graph that
has a complexity too low to use our incompressibility proof.
Nonetheless, topological properties do not change under
relabeling.
\end{document} |
\begin{document}
\begin{abstract}
We lay the groundwork in this first installment of a series of papers aimed at developing a theory of Hrushovski-Kazhdan style motivic integration for a certain type of non-archimedean $o$\nobreakdash-minimal fields, namely power-bounded $T$-convex valued fields, and closely related structures. The main result of the present paper is a canonical homomorphism between the Grothendieck semirings of certain categories of definable sets that are associated with the $\VF$-sort and the $\RV$-sort of the language $\lan{T}{RV}{}$. Many aspects of this homomorphism can be described explicitly. Since these categories do not carry volume forms, the formal groupification of the said homomorphism is understood as a universal additive invariant or a generalized Euler characteristic. It admits, not just one, but two specializations to $\mathds{Z}$. The overall structure of the construction is modeled on that of the original Hrushovski-Kazhdan construction.
\end{abstract}
\subjclass[2010]{12J25, 03C64, 14E18, 03C98}
\thanks{The research leading to the true claims in this paper has been partially supported by the ERC Advanced Grant NMNAG, the grant ANR-15-CE40-0008 (D\'efig\'eo), the SYSU grant 11300-18821101, and the NSSFC Grant 14ZDB015.}
\maketitle
\tableofcontents
\vskip 15mm
\section{Introduction}\label{intro}
Towards the end of the introduction of \cite{hrushovski:kazhdan:integration:vf} three hopes for the future of the theory of motivic integration are mentioned. We propose to investigate one of them in a series of papers: additive invariants and integration in $o$\nobreakdash-minimal valued fields. A prototype of such valued fields is $\mathds{R} \dpar{ t^{\mathds{Q}}}$, the generalized power series field over $\mathds{R}$ with exponents in $\mathds{Q}$. One of the cornerstones of the methodology of \cite{hrushovski:kazhdan:integration:vf} is $C$\nobreakdash-minimality, which is the right analogue of $o$\nobreakdash-minimality for algebraically closed valued fields and other closely related structures that epitomizes the behavior of definable subsets of the affine line. It, of course, fails in an $o$\nobreakdash-minimal valued field, mainly due to the presence of a total ordering. Thus the construction we seek has to be carried out in a different framework, which affords a similar type of normal forms for definable subsets of the affine line, a special kind of weak $o$\nobreakdash-minimality; this framework is van den Dries and Lewenberg's theory of $T$-convex valued fields \cite{DriesLew95, Dries:tcon:97}.
The reader is referred to the opening discussions in \cite{DriesLew95, Dries:tcon:97} for a more detailed introduction to $T$\nobreakdash-convexity and a summary of fundamental results. In those papers, how the valuation is expressed is somewhat inconsequential. In contrast, we shall work exclusively with a fixed two-sorted language $\lan{T}{RV}{}$ --- see \S~\ref{defn:lan} and Example~\ref{exam:RtQ} for a quick grasp of the central features of this language --- since such a language is a part of the preliminary setup of any Hrushovski-Kazhdan style integration.
Throughout this paper, let $T$ be a complete power-bounded $o$\nobreakdash-minimal \LT-theory extending the theory $\usub{\textup{RCF}}{}$ of real closed fields. For the real field $\mathds{R}$, the condition of being power-bounded is the same as that of being polynomially bounded. However, for nonarchimedean real closed fields, the former condition is more general and is indeed more natural.
The language $\lan{T}{}{}$ extends the language $\{<, 0, 1, +, -, \times\}$ of ordered rings. Let $\mdl R \coloneqq (R, <, \ldots)$ be a model of $T$. By definition, a $T$\nobreakdash-convex subring $\OO$ of $\mdl R$ is a convex subring of $\mdl R$ such that, for every definable (no parameters allowed) continuous function $f : R \longrightarrow R$, we have $f(\OO) \subseteq \OO$. The convexity of $\OO$ implies that it is a valuation ring of $\mdl R$. For instance, if $\mdl R$ is nonarchimedean and $\mathds{R} \subseteq R$ then the convex hull of $\mathds{R}$ forms a valuation ring of $\mdl R$ and, accordingly, the infinitesimals form its maximal ideal. Such a convex hull is $T$\nobreakdash-convex if no definable continuous function can grow so fast as to stretch the standard real numbers into infinity.
Let $\OO$ be a \emph{proper} $T$\nobreakdash-convex subring of $\mdl R$. The theory $T_{\textup{convex}}$ of the pair $(\mdl R, \OO)$, suitably axiomatized in the language $\lan{}{convex}{}$ that expands $\lan{T}{}{}$ with a new unary relation symbol, is complete, and if $T$ admits quantifier elimination and is universally axiomatizable then $T_{\textup{convex}}$ admits quantifier elimination as well.
Since $T$ is power-bounded, the definable subsets of $R$ afford a type of normal form, a special kind of weak $o$\nobreakdash-minimality (see \cite{mac:mar:ste:weako}), which we dub Holly normal form (since it was first studied by Holly in \cite{holly:can:1995}); in a nutshell, every definable subset of $R$ is a boolean combination of intervals and (valuative) discs. Clearly this is a natural generalization of $o$\nobreakdash-minimality in the presence of valuation. A number of desirable properties of definable sets in $R$ depend on the existence of such a normal form. For instance, every subset of $R$ defined by a principal type assumes one of the following four forms: a point, an open disc, a closed disc, or a half thin annulus, and, furthermore, these four forms are distinct in the sense that no definable bijection between any two of them is possible.
Let $\vv : R^{\times} \longrightarrow \Gamma$ be the valuation map induced by $\OO$, $\K$ the corresponding residue field, and $\res : \OO \longrightarrow \K$ the residue map. There is a canonical way of turning $\K$ into a model of $T$ as well, see \cite[Remark~2.16]{DriesLew95}. Let $\MM$ be the maximal ideal of $\OO$. Let $\RV = R^{\times} / (1 + \MM)$ and $\rv : R^{\times} \longrightarrow \RV$ be the quotient map. Note that, for each $a \in R$, the map $\vv$ is constant on the set $a + a\MM$, and hence there is an induced map $\vrv : \RV \longrightarrow \Gamma$.
The situation is illustrated in the following commutative diagram
\begin{equation*}
\bfig
\square(0,0)/^{ (}->`->>`->>`^{ (}->/<600, 400>[\OO \smallsetminus \MM`R^{\times}`\K^{\times}`
\RV;`\res`\rv`]
\morphism(600,0)/->>/<600,0>[\RV`\Gamma;\vrv]
\morphism(600,400)/->>/<600,-400>[R^{\times}`\Gamma;\vv]
\efig
\end{equation*}
where the bottom sequence is exact.
This structure may be expressed and axiomatized in a natural two-sorted first-order language $\lan{T}{RV}{}$, in which $R$ is referred to as the $\VF$-sort and $\RV$ is taken as a new sort. Informally, $\lan{T}{RV}{}$ is viewed as an extension of $\lan{}{convex}{}$.
We expand $(\mdl R, \OO)$ to an $\lan{T}{RV}{}$-structure. The main construction in this paper is carried out in such a setting. For concreteness, the reader is welcome to take $R = \mathds{R} \dpar{ t^{\mathds{Q}} }$ and $\OO = \mathds{R} \llbracket t^{\mathds{Q}} \rrbracket$ in the remainder of this introduction (see Example~\ref{exam:RtQ} below for more on this generalized power series field).
For a description of the ideas and the main results of the Hrushovski-Kazhdan style integration theory, we refer the reader to the original introduction in \cite{hrushovski:kazhdan:integration:vf} and also the introductions in \cite{Yin:int:acvf, Yin:int:expan:acvf}. There is also a quite comprehensive introduction to the same materials in \cite{hru:loe:lef} and, more importantly, a specialized version that relates the Hrushovski-Kazhdan style integration to the geometry and topology of Milnor fibers over the complex field. The method expounded there will be featured in a sequel to this paper as well. In fact, since much of the work below is closely modeled on that in \cite{hrushovski:kazhdan:integration:vf, Yin:special:trans, Yin:int:acvf, hru:loe:lef}, the reader may simply substitute the term ``theory of power-bounded $T$-convex valued fields'' for ``theory of algebraically closed valued fields'' or more generally ``$V$-minimal theories'' in those introductions and thereby acquire a quite good grip on what the results of this paper look like. For the reader's convenience, however, we shall repeat some of the key points, perhaps with minor changes here and there.
Let $\VF_*$ and $\RV[*]$ be two categories of definable sets that are respectively associated with the $\VF$-sort and the $\RV$-sort as follows. In $\VF_*$, the objects are the definable subsets of cartesian products of the form $\VF^n \times \RV^m$ and the morphisms are the definable bijections. On the other hand, for technical reasons (particularly for keeping track of ambient dimensions), $\RV[*]$ is formulated in a somewhat complicated way and is hence equipped with a gradation by ambient dimensions (see Definition~\ref{defn:c:RV:cat}).
The Grothendieck semigroup of a category $\mdl C$, denoted by $\mathfrak{s}k \mdl C$, is the free semigroup generated by the isomorphism classes of $\mdl C$, subject to the usual scissor relation $[A \smallsetminus B] + [B] = [A]$, where $[A]$, $[B]$ denote the isomorphism classes of the objects $A$, $B$ and ``$\smallsetminus$'' is a certain binary operation, usually just set subtraction. Sometimes $\mdl C$ is also equipped with a binary operation --- for example, cartesian product --- that induces multiplication in $\mathfrak{s}k \mdl C$, in which case $\mathfrak{s}k \mdl C$ becomes a (commutative) semiring. The formal groupification of $\mathfrak{s}k \mdl C$, which is then a ring, is denoted by $\ggk \mdl C$.
The main construction of the Hrushovski-Kazhdan integration theory is a canonical --- that is, functorial in a suitable way --- homomorphism from the Grothendieck semiring $\mathfrak{s}k \VF_*$ of $\VF_*$ to the Grothendieck semiring $\mathfrak{s}k \RV[*]$ of $\RV[*]$ modulo a semiring congruence relation $\isp$ on the latter. In fact, it turns out to be an isomorphism. This construction has three main steps.
\begin{enumerate}[{Step} 1.]
\item First we define a lifting map $\bb L$ from the set of objects of $\RV[*]$ into the set of objects of $\VF_*$ (Definition~\ref{def:L}). Next we single out a subclass of isomorphisms in $\VF_*$, which are called special bijections (Definition~\ref{defn:special:bijection}), and show that for any object $A$ in $\VF_*$ there is a special bijection $T$ on $A$ and an object $\bm U$ in $\RV[*]$ such that $T(A)$ is isomorphic to $\bb L \bm U$ (Corollary~\ref{all:subsets:rvproduct}). This implies that $\bb L$
is ``essentially surjective'' on objects, meaning that it is surjective on isomorphism classes of $\VF_*$. For this result alone we do not have to limit our means to special
bijections. However, in Step~3 below, special bijections become an essential ingredient in computing the semiring congruence
relation $\isp$.
\item We show that, for any two isomorphic objects $\bm U_1$, $\bm U_2$ of $\RV[*]$, their lifts $\bb L \bm U_1, \bb L \bm U_2$ in $\VF_*$ are isomorphic as well (Corollary~\ref{RV:lift}). This implies that $\bb L$ induces a semiring homomorphism
$
\mathfrak{s}k \RV[*] \longrightarrow \mathfrak{s}k \VF_*,
$
which is also denoted by $\bb L$. This homomorphism is surjective by Step~1 and hence, modulo the semiring congruence relation $\isp$ --- that is, the kernel of $\bb L$ --- the inversion $\int_+$ of the homomorphism $\bb L$ is an isomorphism of semirings.
\item A number of important properties of the classical integration can already be verified for $\int_+$ and hence, morally, this third step is not necessary. For applications, however, it is much more satisfying to have a precise description of the semiring congruence relation $\isp$. The basic notion used in the description is that of a blowup of an object in $\RV[*]$, which is essentially a restatement of the trivial fact that there is an additive translation from $1 + \MM$ onto $\MM$ (Definition~\ref{defn:blowup:coa}). We then show that, for any two objects $\bm U_1$, $\bm U_2$ in $\RV[*]$, there are isomorphic blowups $\bm U_1^{\flat}$, $\bm U_2^{\flat}$ of them if and only if $\bb L \bm U_1$, $\bb L \bm U_2$ are isomorphic (Proposition~\ref{kernel:L}). The ``if'' direction essentially contains a form of Fubini's theorem and is the most technically involved part of the construction.
\end{enumerate}
We call the semiring homomorphism $\int_+$ thus obtained a Grothendieck homomorphism. If the objects carry volume forms and the Jacobian transformation preserves the integral, that is, the change of variables formula holds, then it may be called a motivic integration; we will not consider this case here and postpone it to a future installment. When the semirings are formally groupified, this Grothendieck homomorphism is accordingly recast as a ring homomorphism, which is denoted by $\int$ and is understood as a (universal) additive invariant.
The structure of the Grothendieck ring $\ggk \RV[*]$ may be significantly elucidated. To wit, it can be expressed as a tensor product of two other Grothendieck rings $\ggk \RES[*]$ and $\ggk \Gamma[*]$, that is, there is an isomorphism of graded rings:
\[
\bb D: \ggk \RES[*] \otimes_{\ggk \Gamma^{c}[*]} \ggk \Gamma[*] \longrightarrow \ggk \RV[*],
\]
where $\RES[*]$ is essentially the category of definable sets in $\mathds{R}$ (as a model of the theory $T$) and $\Gamma[*]$ is essentially the category of definable sets over $\mathds{Q}$ (as an $o$\nobreakdash-minimal group), both graded by ambient dimension, and $\Gamma^{c}[*]$ is the full subcategory of $\Gamma[*]$ of finite objects, whose Grothendieck ring admits a natural embedding into $\ggk \RES[*]$ as well. This isomorphism results in various retractions from $\ggk \RV[*]$ into $\ggk \RES[*]$ or $\ggk \Gamma[*]$ and, when combined with the Grothendieck homomorphism $\int$ and the two Euler characteristics in $o$\nobreakdash-minimal groups (one is a truncated version of the other), yields a (generalized) Euler characteristic
\[
\textstyle \Xint{\textup{G}} : \ggk \VF_* \to^{\sim} ( \mathds{Z} \oplus \bigoplus_{i \geq 1} (\mathds{Z}[Y]/(Y^2+Y))X^i) / (1 + 2YX + X),
\]
which is actually an isomorphism, and two specializations to $\mathds{Z}$:
\[
\textstyle \Xint{\textup{R}}^g, \Xint{\textup{R}}^b: \ggk \VF_* \longrightarrow \mathds{Z},
\]
determined by the assignments $Y \longmapsto -1$ and $Y \longmapsto 0$ or, equivalently, $X \longmapsto 1$ and $X \longmapsto -1$ (see Proposition~\ref{prop:eu:retr:k} and Theorem~\ref{thm:ring}). We will demonstrate the significance of these two specializations, as opposed to only one, in a future paper that is dedicated to the study of generalized (real) Milnor fibers in the sense of \cite{hru:loe:lef}.
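As a quick sanity check (our computation, not taken from the cited sources), the two pairs of assignments match each other under the defining relation $1 + 2YX + X = 0$ of the quotient ring:

```latex
\[
Y \longmapsto -1:\quad 1 + 2YX + X \;=\; 1 - X \;=\; 0
\;\Longrightarrow\; X \longmapsto 1,
\qquad
Y \longmapsto 0:\quad 1 + 2YX + X \;=\; 1 + X \;=\; 0
\;\Longrightarrow\; X \longmapsto -1.
\]
```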
For certain purposes, the difference between model theory and algebraic geometry is somewhat easier to bridge if one works over the complex field, as is demonstrated in \cite{hru:loe:lef}; however, over the real field, although they do overlap significantly, the two worlds seem to diverge in their methods and ideas. Our results should be understood in the context of ``$o$\nobreakdash-minimal geometry'' \cite{dries:1998, DrMi96} as opposed to real algebraic geometry. In general, the various Grothendieck rings considered in real algebraic geometry bring about less collapse of ``algebraic data'' --- since there are far fewer morphisms in the background --- and can yield much finer invariants, and hence are more faithful to the geometry in this regard, although the flip side of the story is that the invariants are often computationally intractable (especially when resolution of singularities is involved) and specializations are often needed in practice. For instance, the Grothendieck ring of real algebraic varieties may be specialized to $\mathds{Z}[X]$, which is called the virtual Poincar\'e polynomial (see \cite{mccrory:paru:virtual:poin}). Our method here does not seem to be suited for recovering invariants at this level, at least not directly.
The role of $T$-convexity in this paper cannot be overemphasized. However, it does not quite work if the exponential function is included in the theory $T$. It remains a worthy challenge to find a suitable framework in which the construction of this paper may be extended to that case.
Much of the content of this paper is extracted from the preprint \cite{Yin:int:tcvf}, which contains a more comprehensive study of $T$\nobreakdash-convex valued fields. This auxiliary part of the theory we are developing may be regarded as a sequel to or a variation on the themes of the work in \cite{DriesLew95, Dries:tcon:97}. It has become clear that some of the technicalities thereof may be of independent interest. For instance, the valuative or infinitesimal version of Lipschitz continuity plays a crucial role in proving the existence of Lipschitz stratifications in an arbitrary power-bounded $o$\nobreakdash-minimal field (this proof has been published in \cite{halyin} and the result cited there is Corollary~\ref{part:rv:cons}).
Also, in a future paper, we will use the main result here to show that, in both the real and the complex cases, the Euler characteristic of the topological Milnor fiber coincides with that of the motivic Milnor fiber, avoiding the algebro-geometric machinery employed in \cite[Remark~8.5.5]{hru:loe:lef}.
\section{Basic results in $T$-convex valued fields}
In this section, we first describe the two-sorted language $\lan{T}{RV}{}$ for $o$\nobreakdash-minimal valued fields and axiomatize the $\lan{T}{RV}{}$-theory $\usub{\textup{TCVF}}{}$ of $T$-convex valued fields. We then show that $\usub{\textup{TCVF}}{}$ admits quantifier elimination. Some of the results in \cite{DriesLew95, Dries:tcon:97} that are crucial for our construction are also translated into the present setting.
\subsection{Some notation}\label{subs:nota}
Recall from the introduction above that $T$ is a complete power-bounded $o$\nobreakdash-minimal \LT-theory extending the theory $\usub{\textup{RCF}}{}$ of real closed fields.
\begin{conv}
For the moment, by \emph{definable} we mean definable with arbitrary parameters from the structure in question. But later --- starting in \S~\ref{def:VF} --- we will abandon this practice and work with a fixed set of parameters. The reason for this change will be made abundantly clear when it happens.
\end{conv}
\begin{defn}[Power-bounded]\label{defn.powBd}
Suppose that $\mdl R$ is an $o$\nobreakdash-minimal real closed field. A \emph{power function} in $\mdl R$ is a definable endomorphism of the multiplicative group $\mdl R^+$.
We say that $\mdl R$ is \emph{power-bounded} if every definable function $f \colon \mdl R \longrightarrow \mdl R$ is eventually dominated by a power function, that is, there exists a power function $g$ such that $\abs{f(x)} \le g(x)$ for all sufficiently large $x$. A complete $o$\nobreakdash-minimal theory extending $\usub{\textup{RCF}}{}$ is \emph{power-bounded} if all its models are.
\end{defn}
All power functions in $\mdl R$ may be understood as functions of the form $x \longmapsto x^\lambda$, where $\lambda = \ddx f(1)$. The collection of all such $\lambda$ forms a subfield of $\mdl R$, called the \emph{field of exponents} of $\mdl R$. We will quote the results on power-bounded structures directly from \cite{DriesLew95, Dries:tcon:97} and hence need nothing about them beyond what has already been said. At any rate, a concise and lucid account of the essentials may be found in \cite[\S~3]{Dries:tcon:97}.
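For a concrete illustration (not needed in the sequel): if $T = \usub{\textup{RCF}}{}$ then the power functions of a model $\mdl R$ are exactly the maps
\[
x \longmapsto x^q, \quad q \in \mathds{Q},
\]
since $x \longmapsto x^{1/n}$ is definable via $n$-th roots, so the field of exponents is $\mathds{Q}$; moreover, every semialgebraic function is eventually dominated by some $x \longmapsto x^n$, so $\usub{\textup{RCF}}{}$ is power-bounded (indeed polynomially bounded).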
\begin{rem}[Functional language]\label{rem:cont}
We shall need a general fact due to Lou van den Dries (private communication). It states that the theory $T$ can be reformulated in another language \emph{all} of whose primitives, except the binary relation $\leq$, are function symbols that are interpreted as \emph{continuous} functions in all the models of $T$. Actually, for this to hold, we only need to assume that $T$ is a complete $o$\nobreakdash-minimal theory that extends $\usub{\textup{RCF}}{}$.
More precisely, working in any model of $T$, it can be shown that all definable sets are boolean combinations of sets of the form $f(x) = 0$ or $g(x) > 0$, where $f$ and $g$ are definable total continuous functions. In particular, this holds in the prime model $\mdl P$ of $T$. Taking all definable total continuous functions in $\mdl P$ and the ordering $<$ as the primitives in a new language $\lan{T'}{}{}$, we see that $T$ can be reformulated as an \emph{equivalent} $\lan{T'}{}{}$-theory $T'$ in the sense that the syntactic categories of $T$ and $T'$ are naturally equivalent. In traditional but less general and more verbose model-theoretic jargon, this just says that if a model of $T$ is converted to a model of $T'$ in the obvious way then the two models are bi\"{i}nterpretable via the identity map, and vice versa.
The theory $T'$ also admits quantifier elimination, but it cannot be universally axiomatized in $\lan{T'}{}{}$. To see this, suppose for contradiction that it can be. Then, by the argument in the proof of \cite[Corollary~2.15]{DMM94}, every definable function $f$ in a model of $T'$, in particular, multiplicative inverse, is given piecewise by terms. But all terms define total continuous functions. This means that, by $o$\nobreakdash-minimality, multiplicative inverse is given near $0$ by finitely many total continuous functions and hence is bounded near $0$, which is absurd.
Now, we may and do extend $T'$ by definitions so that it is universally axiomatizable in the resulting language. Thus every substructure of a model of $T'$ is actually a model of $T'$ and, as such, is an elementary substructure. In fact, since $T'$ has definable Skolem functions, we shall abuse notation slightly and redefine $T$ to be $T'^{\textup{df}}$, where $T'^{\textup{df}}$ is in effect a Skolemization of $T'$ (see \cite[\S\S~2.3--2.4]{DriesLew95} for further explanation). Note that the language of $T$ contains additional function symbols only and some of them must be interpreted in all models of $T$ as discontinuous functions for the reason given above.
To summarize, the main point is that $T$ admits quantifier elimination, is universally axiomatizable, is a definitional extension of $T'$, and all the primitives of $\lan{T'}{}{}$, except $\leq$, define continuous functions in all the models of $T'$.
The syntactical maneuver of passing through $T'$ just described will only be used in Theorem~\ref{thm:complete} below, and it is not really necessary if one works with a concrete $o$\nobreakdash-minimal extension of $\usub{\textup{RCF}}{}$ such as $\usub{T}{an}$ defined in \cite{DMM94} (also see Example~\ref{exam:RtQ}).
\end{rem}
We shall work with a sufficiently saturated model $\mdl R \coloneqq (R, <, \ldots)$ of $T$ unless suggested otherwise. Its field of exponents is denoted by $\mathds{K}$.
\begin{nota}[Coordinate projections]\label{indexing}
For each $n \in \mathds{N}$, let $[n]$ abbreviate the set $\{1, \ldots, n\}$. For any $E \subseteq [n]$, we write $\pr_E(A)$ for the projection of $A$ into the coordinates contained in $E$. In practice, it is often more convenient to use simple standard descriptions as subscripts. For example, if $E$ is a singleton $\{i\}$ then we shall always write $E$ as $i$ and $\tilde E \coloneqq [n] \smallsetminus E$ as $\tilde i$; similarly, if $E = [i]$, $\{k: i \leq k \leq j\}$, $\{k: i < k < j\}$, $\{\text{all the coordinates in the sort $S$}\}$, etc., then we may write $\pr_{\leq i}$, $\pr_{[i, j]}$, $\pr_{(i, j)}$, $\pr_{S}$, etc.; in particular, $A_{\VF}$ and $A_{\RV}$ stand for the projections of $A$ into the $\VF$-sort and $\RV$-sort coordinates, respectively.
Unless otherwise specified, by writing $a \in A$ we shall mean that $a$ is a finite tuple of elements (or ``points'') of $A$, whose length, denoted by $\lh(a)$, is not always indicated. If $a = (a_1, \ldots, a_n)$ then, for all $1 \leq i < j \leq n$, following the notational scheme above, $a_i$, $a_{\tilde i}$, $a_{\leq i}$, $a_{[i, j]}$, $a_{[i, j)}$, etc., are shorthand for the corresponding subtuples of $a$. We shall write $\{t\} \times A$, $\{t\} \cup A$, $A \smallsetminus \{t\}$, etc., simply as $t \times A$, $t \cup A$, $A \smallsetminus t$, etc., when no confusion can arise.
For $a \in \pr_{\tilde E}(A)$, the fiber $\{b : (b, a) \in A \} \subseteq \pr_E(A)$ over $a$ is denoted by $A_a$. Note that, in the discussion below, the distinction between the two sets $A_a$ and $A_a \times a$ is usually immaterial and hence they may and often shall be tacitly identified. In particular, given a function $f : A \longrightarrow B$ and $b \in B$, the pullback $f^{-1}(b)$ is sometimes written as $A_b$ as well. This is a special case since functions are identified with their graphs. This notational scheme is especially useful when the function $f$ has been clearly understood in the context and hence there is no need to spell it out all the time.
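To illustrate the scheme in a simple concrete case: if $A \subseteq \VF^2 \times \RV$ and $E = \{1, 2\}$ then $\tilde E = \{3\}$,
\[
A_{\VF} = \pr_{\leq 2}(A), \quad A_{\RV} = \pr_3(A), \quad \text{and} \quad A_t = \{(a_1, a_2) : (a_1, a_2, t) \in A\} \subseteq \pr_{\leq 2}(A)
\]
for each $t \in \pr_3(A)$.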
\end{nota}
\begin{nota}[Subsets and substructures]\label{nota:sub}
By a definable set we mean a definable subset in $\mdl R$, and by a subset in $\mdl R$ we mean a subset in $R$, by which we mean a subset of $R^n$ for some $n$, unless indicated otherwise. Similarly for other structures or sets in place of $\mdl R$ that have been clearly understood in the context.
Often the ambient total ordering in $\mdl R$ induces a total ordering on a definable set $S$ of interest with a distinguished element $e$. Then it makes sense to speak of the positive and the negative parts of $S$ relative to $e$, which are denoted by $S^+$ and $S^-$, respectively. Also write $S^+_e$ for $S^+ \cup e$, etc. There may also be a natural absolute value map $S \longrightarrow S^+_e$, which is always denoted by $| \cdot |$; typically $S$ is a sort and $S^\times \coloneqq S \smallsetminus e$ is equipped with a (multiplicatively written) group structure, in which case the absolute value map is usually given as a component of a (splitting) short exact sequence
\[
\pm 1 \longrightarrow S^\times \longrightarrow S^+ \quad \text{or} \quad S^+ \longrightarrow S^\times \longrightarrow \pm 1.
\]
Note that $e$ cannot be the identity element of $S^\times$. We will also write $A < e$ to mean that $A \subseteq S$ and $a < e$ for all $a \in A$, etc. If $\phi(x)$ is a formula then $\phi(x) < e$ denotes the subset of $S$ defined by the formula $\phi(x) \wedge x < e$.
Substructures of $\mdl R$ are written as $\mdl S \subseteq \mdl R$. As has been pointed out above, all substructures $\mdl S$ of $\mdl R$ are actually elementary substructures. If $A \subseteq \mdl R^n$ is a set definable with parameters coming from $\mdl S$ then $A(\mdl S)$ is the subset in $\mdl S$ defined by the same formula, that is, $A(\mdl S) = A \cap \mdl S^n$. Given a substructure $\mdl S \subseteq \mdl R$ and a set $A \subseteq \mdl R$, the substructure generated by $A$ over $\mdl S$ is denoted by $\langle \mdl S , A \rangle$ or $\mdl S \langle A \rangle$. Clearly $\langle \mdl S , A \rangle$ is the definable closure of $A$ over $\mdl S$. Later, we will expand $\mdl R$ and introduce more sorts and structures. In that situation we will write $\mdl S \langle A \rangle_T$ or $\langle \mdl S , A \rangle_T$ to emphasize that this is the \LT-substructure generated by $A$ over the \LT-reduct of $\mdl S$.
\end{nota}
\begin{nota}[Topology]
The default topology on $\mdl R$ is of course the order topology and the default topology on $\mdl R^n$ is the corresponding product topology. Given a subset $S$ in $\mdl R$, we write $\cl(S)$ for its topological closure, $\ito(S)$ for its interior,
and $\partial S \coloneqq \cl(S) \smallsetminus S$ for its frontier (not to be confused with the boundary $\cl(S) \smallsetminus \ito(S)$ of $S$, which is also sometimes denoted by $\partial S$). The same topological discourse applies to a definable set if the ambient total ordering of $\mdl R$ induces a total ordering on it.
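For instance, for the half-open interval $S = (0, 1] \subseteq \mdl R$ we have $\cl(S) = [0, 1]$ and $\ito(S) = (0, 1)$, so the frontier is $\partial S = \{0\}$ whereas the boundary is $\cl(S) \smallsetminus \ito(S) = \{0, 1\}$; the two notions differ exactly at the points of $S$ that are not interior.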
\end{nota}
\subsection{The theory $\usub{\textup{TCVF}}{}$}\label{defn:lan}
The language $\lan{T}{RV}{}$ for $o$\nobreakdash-minimal valued fields --- the theory $T$ may vary, of course --- has the following sorts and symbols:
\begin{itemize}
\item A sort $\VF$, which uses the language $\lan{T}{}{}$.
\item A sort $\RV$, whose basic language is that of groups, written multiplicatively as $\{1, \times, {^{-1}} \}$, together with a constant symbol $0_{\RV}$ (for notational ease, henceforth this will be written simply as $0$).
\item A unary predicate $\K^{\times}$ in the $\RV$-sort. The union $\K^{\times} \cup \{0\}$ is denoted by $\K$, which is more conveniently thought of as a sort and, as such, employs the language $\lan{T}{}{}$ as well, where the constant symbols $0$, $1$ are shared with the $\RV$-sort.
\item A binary relation symbol $\leq$ in the $\RV$-sort.
\item A function symbol $\rv : \VF \longrightarrow \RV_0$.
\end{itemize}
We shall write $\RV$ to mean the $\RV$-sort without the element $0$, and $\RV_0$ otherwise, etc., although quite often the difference is immaterial.
\begin{defn}\label{defn:tcf}
The axioms of the theory $\usub{\textup{TCVF}}{}$ of \emph{$T$-convex valued fields} in the language $\lan{T}{RV}{}$ are presented here informally. Many of them are clearly redundant as axioms, and we try to phrase some of these in such a way as to indicate so. The list also contains additional notation that will be used throughout the paper.
\begin{enumerate}[({Ax.} 1)]
\item The \LT-reduct of the $\VF$-sort is a model of $T$.
Recall from Notation~\ref{nota:sub} that $\VF^+ \subseteq \VF$ is the subset of positive elements and $\VF^- \subseteq \VF$ the subset of negative elements.
\item \label{ax:rv} The quadruple $(\RV, 1, \times, {^{-1}})$ forms an abelian group. Inversion is augmented by $0^{-1} = 0$. Multiplication is augmented by $t \times 0 = 0 \times t = 0$ for all $t \in \RV$. The map $\rv : \VF^{\times} \longrightarrow \RV$ is a surjective group homomorphism augmented by $\rv(0) = 0$.
\item The binary relation $\leq$ is a total ordering on $\RV_{0}$ such that, for all $t, t' \in \RV_{0}$, $t < t'$ if and only if $\rv^{-1}(t) < \rv^{-1}(t')$.
The distinguished element $0 \in \RV_0$ is more aptly referred to as the \emph{middle element} of $\RV_{0}$. Clearly $\RV^+ = \rv(\VF^+)$ and $\RV^- = \rv(\VF^-)$ (see Notation~\ref{nota:sub}). It follows from (Ax.~\ref{ax:rv}) that $\RV^+$ is an ordered convex subgroup of $\RV$ and the quotient group $\RV / \RV^+$ is isomorphic to the group $\pm 1 \coloneqq \rv(\pm 1)$. This gives rise to an absolute value map on $\RV_{0}$, which is compatible with the absolute value map on $\VF$ in the sense that $\rv(\abs{a}) = \abs{\rv(a)}$ for all $a \in \VF$.
\item \label{ax:K} The set $\K^{\times}$ forms a \emph{nontrivial} subgroup of $\RV$ and the set $\K^{+} = \K^{\times} \cap \RV^+$ forms a convex subgroup of $\RV^+$.
The quotient groups $\RV / \K^+$, $\RV^{+} / \K^+$ are denoted by $\Gamma$, $\Gamma^+$ and the corresponding quotient maps by $\vrv$, $\vrv^+$. Also set $\vrv(0) = 0 \in \Gamma_0$. Since $\K^+$ is convex, $\Gamma^+$ is an ordered group, where the induced ordering is also denoted by $\leq$, and the absolute value map on $\RV_0$ descends to $\Gamma_0$ in the obvious sense.
\item Let $\leq^{-1}$ be the ordering on $\Gamma^+_0$ inverse to $\leq$ and
$\absG_{\infty} \coloneqq (\Gamma^+_0, +, \leq^{-1})$ the resulting \emph{additively} written ordered abelian group with the top element $\infty$. The composition
\[
\abval : \VF \to^{\rv} \RV_0 \to^{\abs{ \cdot}} \RV^+_0 \to^{\vrv^+} \Gamma^+_0
\]
is a (nontrivial) valuation with respect to the ordering $\leq^{-1}$, with valuation ring $\OO = \rv^{-1}(\RV^{\circ}_0)$ and maximal ideal $\MM = \rv^{-1}(\RV^{\circ\circ}_0)$, where, denoting $\vrv^+ \circ \abs{ \cdot}$ by $\abvrv$,
\begin{align*}
\RV^{\circ}_0 &= \{t \in \RV_0 : 1 \leq^{-1} \abvrv(t) \}, \\
\RV^{\circ \circ}_0 &= \{t \in \RV_0 : 1 <^{-1} \abvrv(t)\}.
\end{align*}
\item \label{ax:t:model} The $\K$-sort (recall that $\K$ is informally referred to as a sort) is a model of $T$ and, as a field, is the residue field of the valued field $(\VF, \OO)$.
The natural quotient map $\OO \longrightarrow \K$ is denoted by $\res$. For notational convenience, we extend the domain of $\res$ to $\VF$ by setting $\res(a) = 0$ for all $a \in \VF \smallsetminus \OO$. The following function is also denoted by $\res$:
\[
\RV \to^{\rv^{-1}} \VF \to^{\res} \K.
\]
\item \label{ax:tcon} ($T$-convexity). Let $f : \VF \longrightarrow \VF$ be a continuous function defined by an \LT-formula. Then $f(\OO) \subseteq \OO$.
\item Suppose that $\phi$ is an \LT-formula that defines a continuous function $f : \VF^m \longrightarrow \VF$. Then $\phi$ also defines a continuous function $\ol f : \K^m \longrightarrow \K$. Moreover, for all $a \in \OO^m$, we have $\res(f(a)) = \ol f(\res(a))$. \label{ax:match}
\end{enumerate}
\end{defn}
By (Ax.~\ref{ax:t:model}) and Remark~\ref{rem:cont}, (Ax.~\ref{ax:match}) can be simplified as: for all function symbols $f$ of $\lan{T'}{}{}$ and all $a \in \OO^m$, $\res(f(a)) = \ol f(\res(a))$. Then it is routine to check that, except for the surjectivity of the map $\rv$ and the nontriviality of the value group $\abs{\Gamma}$ (this is an existential axiom and is actually expressed in (Ax.~\ref{ax:K})), $\usub{\textup{TCVF}}{}$ is also universally axiomatized.
Let $\mdl S$ be a substructure of a model $\mdl M$ of $\usub{\textup{TCVF}}{}$. We say that $\mdl S$ is \emph{$\VF$-generated} if $\RV_0(\mdl S) = \rv(\VF(\mdl S))$. Thus $\mdl S$ is indeed a model of $\usub{\textup{TCVF}}{}$ if it is $\VF$-generated and $\Gamma(\mdl S)$ is nontrivial. At any rate, $\VF(\mdl S)$, $\res(\VF(\mdl S))$, and $\K(\mdl S)$ are all models of $T$.
For $A \subseteq \VF(\mdl M) \cup \RV(\mdl M)$, the substructure generated by $A$ over $\mdl S$ is denoted by $\langle \mdl S , A \rangle$ or $\mdl S \langle A \rangle$. Clearly $\VF(\langle \mdl S , A \rangle) = \langle \mdl S , A \rangle_T$ (see Notation~\ref{nota:sub}).
\begin{rem}\label{signed:Gam}
Although the behavior of the valuation map $\abval$ in the traditional sense is coded in $\usub{\textup{TCVF}}{}$, we shall work with the \emph{signed} valuation map, which is more natural in the present setting:
\[
\vv : \VF \to^{\rv} \RVV \to^{\vrv} \GAA,
\]
where the ordering $\leq$ on the \emph{signed value group} $\Gamma_0$ no longer needs to be inverted. It is also tempting to use the ordering $\leq$ in the \emph{value group} $\abs{\Gamma}_{\infty}$ instead of its inverse, but this makes citing results in the literature a bit awkward. We shall actually abuse the notation and denote the ordering $\leq^{-1}$ in $\abs{\Gamma}_{\infty}$ also by $\leq$; this should not cause confusion since the ordering on $\Gamma_0$ will rarely be used (we will indicate so explicitly when it is used).
The axioms above guarantee that the ordered abelian group ${\GAA} /{\pm 1}$ (here $\vv(\pm 1)$ is just written as $\pm 1$) with the bottom element $0$ is isomorphic to $\abs{\Gamma}_{\infty}$ if either one of the orderings is inverted. So $\abval$ may be thought of as the composition $\vv/{\pm 1} : \VF \longrightarrow {\GAA} /{\pm 1}$.
\end{rem}
\begin{conv}\label{how:gam}
Semantically we shall treat the value group $\GAA$ as an imaginary sort. However, syntactically any reference to $\GAA$ may be eliminated in the usual way and we can still work with $\lan{T}{RV}{}$-formulas for the same purpose.
\end{conv}
\begin{exam}\label{exam:RtQ}
Here our main reference is \cite{DMM94}. A restricted analytic function $\mathds{R}^n \longrightarrow \mathds{R}$ is given on the cube $[-1, 1]^n$ by a power series in $n$ variables over $\mathds{R}$ that converges in a neighborhood of $[-1, 1]^n$, and is $0$ elsewhere. Let $\lan{}{an}{}$ be the language that extends the language of ordered rings with a new function symbol for each restricted analytic function, $\usub{\mathds{R}}{an}$ the real field with its natural $\lan{}{an}{}$-structure, and $\usub{T}{an}$ the $\lan{}{an}{}$-theory of $\usub{\mathds{R}}{an}$. Obviously $\usub{T}{an}$ is polynomially bounded. More importantly, it is universally axiomatizable and admits quantifier elimination in a slightly enlarged language, and hence there is no longer any need to extend $\usub{T}{an}$ by definitions as we have arranged in \S~\ref{subs:nota}. (This language is of course more natural than a brute-force definitional extension that achieves the same thing, but we do not really care what it is.)
A generalized power series with coefficients in the field $\mathds{R}$ and exponents in the additive group $\mathds{Q}$ is a formal sum $x = \sum_{q \in \mathds{Q}} a_q t^q$ such that its support $\supp(x) = \{q \in \mathds{Q} : a_q \neq 0\}$ is well-ordered. Let $\mathds{R} \dpar{ t^{\mathds{Q}} }$, $K$ for short, be the set of all such series. Addition and multiplication in $K$ are defined in the expected way, and this makes $K$ a field, generally referred to as a Hahn field. We consider $\mathds{R}$ as a subfield of $K$ via the map $a \longmapsto at^0$. The map $K^\times \longrightarrow \mathds{Q}$ given by $x \longmapsto \min \supp(x)$ is indeed a valuation. Its valuation ring $\mathds{R} \llbracket t^{\mathds{Q}} \rrbracket$, $\OO$ for short, consists of those series $x$ with $\min \supp(x) \geq 0$ and its maximal ideal $\MM$ of those series $x$ with $\min \supp(x) > 0$. The residue map admits a section with image $\mathds{R}$ and hence the residue field is isomorphic to $\mathds{R}$. It is well-known that $(K, \OO)$ is a henselian valued field and $K$ is real closed. Restricted analytic functions may be naturally interpreted in $K$. According to \cite[Corollary~2.11]{DMM94}, with its naturally induced ordering, $K$ is indeed an elementary extension of $\usub{\mathds{R}}{an}$ and hence a model of $\usub{T}{an}$.
We turn $K$ into a model of $\usub{\textup{TCVF}}{}$, with signed valuation, as follows. First of all, set $\RV = K^{\times} / (1 + \MM)$. Let $\rv : K^\times \longrightarrow \RV$ be the quotient map. The leading term of a series in $K^\times$ is its first term with nonzero coefficient. It is easy to see that two series $x$, $y$ have the same leading term if and only if $\rv(x) = \rv(y)$ and hence $\RV$ is isomorphic to the subgroup of $K^\times$ consisting of all the leading terms. There is a natural isomorphism $a_qt^q \longmapsto (q, a_q)$ from this latter group of leading terms to the group $\mathds{Q} \oplus \mathds{R}^\times$, through which we may identify $\RV$ with $\mathds{Q} \oplus \mathds{R}^\times$. Since $1 + \MM$ is a convex subset of $K^\times$, the total ordering on $K^\times$ induces a total ordering $\leq$ on $\RV$. This ordering $\leq$ is the same as the lexicographic ordering on $\mathds{Q} \oplus \mathds{R}^+$ or $\mathds{Q} \oplus \mathds{R}^-$ via the identification just made.
Let $\mathds{R}^{+}$ be the multiplicative group of the positive reals and $\RV^{+} = \mathds{Q} \oplus \mathds{R}^+$. Observe that $\mathds{R}^{+}$ is a convex subgroup of $\RV$. The quotient group $\Gamma \coloneqq (\mathds{Q} \oplus \mathds{R}^\times) / \mathds{R}^{+}$ is naturally isomorphic to the subgroup $\pm e^{\mathds{Q}} \coloneqq e^{\mathds{Q}} \cup -e^{\mathds{Q}}$ of $\mathds{R}^\times$ so that $\mathds{Q}$ is identified with $e^\mathds{Q}$ via the map $q \longmapsto e^q$. Adding a new symbol $\infty$ to $\RV$, now it is routine to interpret $K$ as an $\lan{T}{RV}{}$-structure, with $T = \usub{T}{an}$ and the signed valuation given by
\[
x \longmapsto \rv(x) = (q, a_q) \longmapsto \sgn(a_q)e^{-q},
\]
where $\sgn(a_q)$ is the sign of $a_q$. It is also a model of $\usub{\textup{TCVF}}{}$: all the axioms are more or less immediately derivable from the valued field structure, except (Ax.~\ref{ax:tcon}), which holds since $\usub{T}{an}$ is polynomially bounded, and (Ax.~\ref{ax:match}), which follows from \cite[Proposition~2.20]{DriesLew95}.
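To make the construction concrete, consider the series $x = 2t^{-1} + t^{1/2} \in K^\times$. Its support is $\{-1, 1/2\}$, so $\min \supp(x) = -1 < 0$ and $x \notin \OO$. The leading term of $x$ is $2t^{-1}$, so under the identification above $\rv(x) = (-1, 2) \in \mathds{Q} \oplus \mathds{R}^\times$ and the signed valuation of $x$ is $\sgn(2)e^{-(-1)} = e$.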
\end{exam}
\subsection{Quantifier elimination}
Recall from \S~\ref{intro} that $T_{\textup{convex}}$ is the $\lan{}{convex}{}$-theory of pairs $(\mdl R, \OO)$ with $\mdl R \models T$ and $\OO$ a \emph{proper} $T$-convex subring. We may and shall view $T_{\textup{convex}}$ as the $\lan{}{convex}{}$-reduct of $\usub{\textup{TCVF}}{}$.
\begin{thm}\label{tcon:qe}
The theory $T_{\textup{convex}}$ admits quantifier elimination and is complete.
\end{thm}
\begin{proof}
See \cite[Theorem~3.10, Corollary~3.13]{DriesLew95}.
\end{proof}
That $\OO$ is a proper subring cannot be expressed by a universal axiom. Of course, we can always add a new constant symbol $\imath$ to $\lan{}{convex}{}$ and an axiom ``$\imath$ is in the maximal ideal'' to $T_{\textup{convex}}$ so that $T_{\textup{convex}}$ may indeed be formulated as a universal theory. In that case, every substructure of a model of $T_{\textup{convex}}$ is a model of $T_{\textup{convex}}$ and, moreover, $T_{\textup{convex}}$ has definable Skolem functions given by $\lan{T}{}{}(\imath)$-terms (this is an easy consequence of our assumption on $T$, quantifier elimination in $T_{\textup{convex}}$, and universality of $T_{\textup{convex}}$, as in \cite[Corollary~2.15]{DMM94}). We shall not implement this maneuver formally, even though the resulting properties may come in handy occasionally.
\begin{rem}\label{res:exp}
According to \cite[Remark~2.16]{DriesLew95}, there is a natural way to expand the residue field $\K$ of the $T_{\textup{convex}}$-model $(\mdl R, \OO)$ to a $T$\nobreakdash-model as follows. Let $\mdl R' \subseteq \OO$ be a maximal subfield with respect to the property of being an elementary \LT-substructure of $\mdl R$. It follows that $\mdl R'$ is isomorphic to $\K$ as fields via the residue map $\res$. Then we can expand $\K$ to a $T$\nobreakdash-model so that the restriction $\res \upharpoonright \mdl R'$ becomes an isomorphism of \LT-structures. This expansion procedure does not depend on the choice of $\mdl R'$.
\end{rem}
\begin{prop}\label{uni:exp}
Every $T_{\textup{convex}}$-model expands to a unique $\usub{\textup{TCVF}}{}$-model up to isomorphism.
\end{prop}
\begin{proof}
Let $(\mdl R, \OO)$ be a $T_{\textup{convex}}$-model. It is enough to show that there is a canonical $\usub{\textup{TCVF}}{}$-model expansion $(\mdl R, \RVV(\mdl R))$ of $(\mdl R, \OO)$, where $\mdl R$ is the $\VF$-sort, such that any other such expansion $(\mdl R, \RVV)$ is isomorphic to it. This canonical expansion is constructed as follows.
Let $\RV(\mdl R)$ be the quotient group $\mdl R^\times / (1 + \MM)$ and $\rv : \mdl R^\times \longrightarrow \RV(\mdl R)$ the quotient map. As in Example~\ref{exam:RtQ}, it is routine to convert the pair $(\mdl R, \RVV(\mdl R))$ into an $\lan{T}{RV}{}$-structure and check that it satisfies all the axioms in Definition~\ref{defn:tcf}, where (Ax.~\ref{ax:t:model}) is implied by the construction described in Remark~\ref{res:exp}. We shall refer to the obvious bijection between $(\mdl R, \RVV(\mdl R))$ and $(\mdl R, \RVV)$ as the identity map. This map commutes with all the primitives of $\lan{T}{RV}{}$ except, possibly, those in the $\K$-sort. This is where the syntactical maneuver in Remark~\ref{rem:cont} comes in. Recall that all the functional primitives of $\lan{T'}{}{}$ define continuous functions in all the models of $T'$ and $T$ is a definitional extension of $T'$. It follows from (Ax.~\ref{ax:match}) that the identity map indeed induces an \LT-isomorphism between the two $\K$-sorts. Thus the two expansions are isomorphic.
\end{proof}
\begin{thm}\label{thm:complete}
The theory $\usub{\textup{TCVF}}{}$ is complete.
\end{thm}
\begin{proof}
By Proposition~\ref{uni:exp}, every embedding between two $T_{\textup{convex}}$-models, which is necessarily elementary, expands uniquely to an $\lan{T}{RV}{}$-embedding between two $\usub{\textup{TCVF}}{}$-models. This latter embedding is indeed elementary since $\usub{\textup{TCVF}}{}$ admits quantifier elimination, which will be shown below. It follows that the theory $\usub{\textup{TCVF}}{}$ is complete. But here we do not really need to go through that route. We can simply observe that, by the proof of Proposition~\ref{uni:exp}, $T_{\textup{convex}}$ and $\usub{\textup{TCVF}}{}$ are equivalent in the sense mentioned in Remark~\ref{rem:cont}, and hence they are both complete if one of them is.
\end{proof}
\begin{conv}
From now on, we shall work in the model $\mmdl$ of $\usub{\textup{TCVF}}{}$, which is the unique $\lan{T}{RV}{}$-expansion of the sufficiently saturated $T_{\textup{convex}}$-model $(\mdl R, \OO)$. We shall write $\VF(\mmdl)$ simply as $\VF$ or $\mdl R$, depending on the context, $\RVV(\mmdl)$ as $\RV_0$, etc. A subset in $\mmdl$ may simply be referred to as a set.
When we work in the \LT-reduct $\mdl R$ of $\mmdl$ instead of $\mmdl$, or just wish to emphasize that a set is definable in $\mdl R$ instead of $\mmdl$, the symbol ``$\lan{T}{}{}$'' or ``$T$'' will be inserted into the corresponding places in the terminology.
\end{conv}
Let $\mdl S \subseteq \mdl R$ be a small substructure and $a, b \in \mdl R \smallsetminus \mdl S$ such that they make the same cut in (the ordering of) $\mdl S$. By $o$\nobreakdash-minimality, there is an automorphism $\sigma$ of $\mdl R$ over $\mdl S$ such that $\sigma(a) = b$.
Recall that the field of exponents of $\mdl R$ is denoted by $\mathds{K}$.
\begin{thm}\label{theos:qe}
The theory $\usub{\textup{TCVF}}{}$ admits quantifier elimination.
\end{thm}
\begin{proof}
We shall run the usual Shoenfield test for quantifier elimination. To that end, let $\mdl M$ be a model of $\usub{\textup{TCVF}}{}$, $\mdl S$ a substructure of $\mdl M$, and $\sigma : \mdl S \longrightarrow \mmdl$ an embedding. All we need to do is to extend $\sigma$ to an embedding $\mdl M \longrightarrow \mmdl$.
The construction is more or less a variation of that in the proof of \cite[Theorem~3.10]{Yin:QE:ACVF:min}. The strategy is to reduce the situation to Theorem~\ref{tcon:qe}. In the process of doing so, instead of the dimension inequality of the general theory of valued fields, the Wilkie inequality \cite[Corollary~5.6]{Dries:tcon:97} is used (see \cite[\S~3.2]{DriesLew95} for the notion of ranks of $T$-models). Note that, to use this inequality, we need to assume that $T$ is power-bounded.
Let $\mdl S_* = \langle \VF(\mdl S) \rangle$ and $t \in \RV(\mdl S) \smallsetminus \RV(\mdl S_*)$. Note that if such a $t$ does not exist then we have $\mdl S = \mdl S_*$ and its $\lan{}{convex}{}$-reduct is an $\lan{}{convex}{}$-substructure of the $\lan{}{convex}{}$-reduct of $\mdl M$, and hence an embedding as desired can be easily obtained by applying Theorem~\ref{tcon:qe} and Proposition~\ref{uni:exp}. Let $a \in \VF(\mdl M)$ with $\rv(a) = t$ and $b \in \VF$ with $\rv(b) = \sigma(t)$. Observe that, according to $\sigma$, $a$ and $b$ must make the same cut in $\VF(\mdl S)$ and $\VF(\sigma(\mdl S))$, respectively, and hence there is an \LT-isomorphism
\[
\bar \sigma : \langle \mdl S_*, a \rangle_T \longrightarrow \langle \sigma(\mdl S_*), b \rangle_T
\]
with $\bar \sigma(a) = b$ and $\bar \sigma \upharpoonright \VF(\mdl S) = \sigma \upharpoonright \VF(\mdl S)$. We shall show that $\bar \sigma$ expands to an isomorphism between $\langle \mdl S_*, a \rangle$ and $\langle \sigma(\mdl S_*), b \rangle$ that is compatible with $\sigma$.
Case (1): There is an $a_1 \in \langle \mdl S_*, a \rangle_T$ such that
\[
\abs{\OO(\mdl S_*) } < a_1 < \abs{ \VF(\mdl S_*) \smallsetminus \OO(\mdl S_*) }.
\]
Set $\absG(\mdl S_*) = G$. Since $\OO(\langle\mdl S_*, a \rangle)$ is $T$-convex, by \cite[Lemma~5.4]{Dries:tcon:97} and \cite[Remark~3.8]{DriesLew95},
\begin{itemize}
\item either $a_1 \in \OO(\langle\mdl S_*, a \rangle)$ and $\absG(\langle \mdl S_*, a \rangle) = G$ or
\item $a_1 \notin \OO(\langle\mdl S_*, a \rangle)$ and $\absG(\langle \mdl S_*, a \rangle) \cong G \oplus \mathds{K}$.
\end{itemize}
By the Wilkie inequality, if
\[
\absG(\langle \mdl S_*, a \rangle) \cong G \oplus \mathds{K}
\]
then $\K(\langle \mdl S_*, a \rangle) = \K(\mdl S_*)$ and hence $\abvrv(t) \notin G$, which implies $\abvrv(\sigma(t)) \notin \sigma(G)$; conversely, if \[
\absG(\langle \sigma(\mdl S_*), b \rangle) \cong \sigma(G) \oplus \mathds{K}
\]
then $\abvrv(t) \notin G$. Therefore
\[
\absG(\langle \mdl S_*, a \rangle) \cong G \oplus \mathds{K} \quad \text{if and only if} \quad \absG(\langle \sigma(\mdl S_*), b \rangle) \cong \sigma(G) \oplus \mathds{K},
\]
which, by \cite[Remark~3.8]{DriesLew95}, is equivalent to saying that $a_1 \in \OO(\langle\mdl S_*, a \rangle)$ if and only if $\bar \sigma(a_1) \in \OO(\langle \sigma(\mdl S_*), b \rangle)$.
Subcase (1a): $a_1 \in \OO(\langle \mdl S_*, a \rangle)$. Subcase~(1a) of the proof of \cite[Theorem~3.10]{DriesLew95} shows that $\bar \sigma$ expands to an $\lan{}{convex}{}$-isomorphism and hence to an $\lan{T}{RV}{}$-isomorphism, which is also denoted by $\bar \sigma$. Since $\absG(\langle \mdl S_*, a \rangle) = G$, we may assume $t \in \K(\mdl M)$. By the Wilkie inequality, $\K(\langle \mdl S_*, a \rangle)$ is precisely the $T$-model generated by $t$ over $\K(\mdl S_*)$. So $\RV(\langle \mdl S_*, a \rangle) = \langle \RV(\mdl S_*), t \rangle$ and
\[
\bar \sigma \upharpoonright \RV(\langle \mdl S_*, a \rangle) = \sigma \upharpoonright \RV(\langle \mdl S_*, a \rangle).
\]
Subcase (1b): $a_1 \notin \OO(\langle \mdl S_*, a \rangle)$. As above, Subcase~(1b) of the proof of \cite[Theorem~3.10]{DriesLew95} shows that $\bar \sigma$ expands to an $\lan{T}{RV}{}$-isomorphism and this time $\K(\langle \mdl S_*, a \rangle) = \K(\mdl S_*)$. Again it is clear that
\[
\bar \sigma \upharpoonright \RV(\langle \mdl S_*, a \rangle) = \sigma \upharpoonright \RV(\langle \mdl S_*, a \rangle).
\]
Case (2): Case (1) fails. Then there is also no $b_1 \in \langle \sigma(\mdl S_*), b \rangle_T$ such that
\[
\abs{ \OO(\sigma(\mdl S_*)) } < b_1 < \abs{ \VF(\sigma(\mdl S_*)) \smallsetminus \OO(\sigma(\mdl S_*)) }.
\]
Using Case~(2) of the proof of \cite[Theorem~3.10]{DriesLew95}, compatibility between $\bar \sigma$ and $\sigma$ may be deduced as in Case (1) above.
Iterating this procedure, we may assume $\mdl S = \mdl S_*$. The theorem follows.
\end{proof}
\begin{cor}
For every set $A \subseteq \VF$, $\langle A \rangle$ is an elementary substructure of $\mmdl$ if and only if $\Gamma(\langle A \rangle)$ is nontrivial, that is, $\Gamma(\langle A \rangle) \neq \pm 1$.
\end{cor}
\begin{cor}\label{trans:VF}
Every parametrically $\lan{T}{RV}{}$-definable subset of $\VF^n$ is parametrically $\lan{}{convex}{}$-definable.
\end{cor}
This corollary already follows from Proposition~\ref{uni:exp}. In any case, it enables us to transfer results from the theory of $T$-convex valued fields \cite{DriesLew95, Dries:tcon:97} to our setting, which we shall do without further explanation.
We include here a couple of generalities on immediate isomorphisms. Their proofs are built on that of Theorem~\ref{theos:qe} and hence we shall skip some details.
\begin{defn}
Let $\mdl M$, $\mdl N$ be substructures and $\sigma : \mdl M \longrightarrow \mdl N$ an $\lan{T}{RV}{}$-isomorphism. We say that $\sigma$ is an \emph{immediate isomorphism} if $\sigma(t) = t$ for all $t \in \RV(\mdl M)$.
\end{defn}
Note that if $\sigma$ is an immediate isomorphism then, \emph{ex post facto}, $\RV(\mdl M) = \RV(\mdl N)$.
\begin{lem}\label{imm:ext}
Every immediate isomorphism $\sigma : \mdl M \longrightarrow \mdl N$ can be extended to an immediate automorphism of $\mmdl$.
\end{lem}
\begin{proof}
Let $\mdl M_* = \langle \VF(\mdl M) \rangle$ and $\mdl N_* = \langle \VF(\mdl N) \rangle$. Let $t \notin \RV(\mdl M_*)$ and $a \in \rv^{-1}(t)$. Since $\sigma$ is immediate, $a$ makes the same cut in $\VF(\mdl M)$ and $\VF(\mdl N)$ according to $\sigma$. By the proof of Theorem~\ref{theos:qe}, $\sigma$ may be extended to an immediate isomorphism $\langle \mdl M, a \rangle \longrightarrow \langle \mdl N, a \rangle$. Iterating this procedure, we reach a stage where the assertion simply follows from Theorem~\ref{tcon:qe}.
\end{proof}
We have something much stronger. For that, the following crucial property is needed.
\begin{prop}[Valuation property]\label{val:prop}
Let $\mdl M$ be a $\VF$-generated substructure and $a \in \VF$. Suppose that $\Gamma(\langle \mdl M, a \rangle) \neq \Gamma(\mdl M)$. Then there is a $d \in \VF(\mdl M)$ such that $\vv(a - d) \notin \vv(\mdl M)$.
\end{prop}
\begin{proof}
For the polynomially bounded case, see~\cite[Proposition~9.2]{DriesSpei:2000} and the remark thereafter. The general power-bounded case is established in \cite{tyne}, which, however, is only available in a password-protected repository.
\end{proof}
\begin{lem}\label{imm:iso}
Let $\sigma : \mdl M \longrightarrow \mdl N$ be an immediate isomorphism. Let $a \in \VF \smallsetminus \VF(\mdl M)$ and $b \in \VF \smallsetminus \VF(\mdl N)$ such that $\rv(a - c) = \rv(b -\sigma(c))$ for all $c \in \VF(\mdl M)$. Then $\sigma$ may be extended to an immediate isomorphism $\bar \sigma : \langle \mdl M, a \rangle \longrightarrow \langle \mdl N, b \rangle$ with $\bar \sigma(a) = b$.
\end{lem}
Observe that, since every element of $\VF(\langle \mdl M, a \rangle) = \langle \mdl M, a \rangle_T$ is of the form $f(a, c)$, where $c \in \VF(\mdl M)$ and $f$ is a function symbol of $\lan{T}{}{}$, and similarly for $\langle \mdl N, b \rangle$, the lemma is equivalent to saying that $\rv(a - c) = \rv(b -\sigma(c))$ for all $c \in \VF(\mdl M)$ implies $\rv(f(a,c)) = \rv(f(b,\sigma(c)))$ for all $c \in \VF(\mdl M)$ and all function symbols of $\lan{T}{}{}$.
\begin{proof}
Without loss of generality, we may assume that $\mdl M$, $\mdl N$ are $\VF$-generated. According to $\sigma$, $a$ and $b$ must make the same cut in $\VF(\mdl M)$ and $\VF(\mdl N)$, respectively, and hence there is an \LT-isomorphism $\bar \sigma : \langle \mdl M, a \rangle_T \longrightarrow \langle \mdl N, b \rangle_T$ with $\bar \sigma(a) = b$ that extends $\sigma \upharpoonright \VF(\mdl M)$. We shall first show that $\bar \sigma$ expands to an $\lan{T}{RV}{}$-isomorphism. There are two cases to consider, corresponding to the two cases in the proof of Theorem~\ref{theos:qe}.
Case (1): There is an $a' \in \langle \mdl M, a \rangle_T$ such that
\[
\abs{ \OO(\mdl M)} < a' < \abs{ \VF(\mdl M) \smallsetminus \OO(\mdl M) }.
\]
Let $f$ be a function symbol of $\langlen{T}{}{}$ and $c \in \VF(\mdl M)$ such that $f(a, c) = a'$. Let $b' = \bar \sigma(f(a, c))$. Then we also have
\[
\abs{\OO(\mdl N)} < b' < \abs{ \VF(\mdl N) \smallsetminus \OO(\mdl N) }.
\]
If $a' \notin \OO(\langle \mdl M, a \rangle)$ then $\Gamma(\langle \mdl M, a \rangle) \neq \Gamma(\mdl M)$. By the valuation property, there is a $d \in \VF(\mdl M)$ such that $\vv(a - d) \notin \Gamma(\mdl M)$. Then $\vv(b - \sigma(d)) \notin \Gamma(\mdl N)$ and hence, by the Wilkie inequality, $\OO(\langle \mdl N, b\rangle)$ is the convex hull of $\OO(\mdl N)$ in $\langle \mdl N, b \rangle_T$. This implies $b' \notin \OO(\langle \mdl N, b \rangle)$. By symmetry and \cite[Remark~3.8]{DriesLew95}, we see that $a' \in \OO(\langle \mdl M, a \rangle)$ if and only if $b' \in \OO(\langle \mdl N, b \rangle)$, and hence
\[
\bar \sigma(\OO(\langle \mdl M, a \rangle)) = \OO(\langle \mdl N, b \rangle).
\]
Case (2): Case (1) fails. We may proceed exactly as in Case (2) of the proof of Theorem~\ref{theos:qe}.
This concludes our proof that $\bar \sigma$ expands to an $\lan{T}{RV}{}$-isomorphism.
Next, we show that $\bar \sigma$ is indeed immediate. If $\RV(\langle \mdl M, a \rangle) = \RV(\mdl M)$ then also $\RV(\langle \mdl N, b \rangle) = \RV(\mdl N)$, and there is nothing more to be done. So suppose $\RV(\langle \mdl M, a \rangle) \neq \RV(\mdl M)$. We claim that there is a $d \in \VF(\mdl M)$ such that $\rv(a - d) \notin \RV(\mdl M)$. We consider two (mutually exclusive) cases.
Case (1): $\Gamma(\langle \mdl M, a \rangle) \neq \Gamma(\mdl M)$. Then the valuation property gives such a $d$ directly.
Case (2): $\K(\langle \mdl M, a \rangle) \neq \K(\mdl M)$. Let $a'$ be as above. Let $\OO'$ be the $T$\nobreakdash-convex subring of $\langle \mdl M, a \rangle_T$ that does not contain $a'$, that is, $\OO'$ is the convex hull of $\OO(\mdl M)$ in $\langle \mdl M, a \rangle_T$. Let $\vv'$, $\Gamma'(\langle \mdl M, a \rangle)$ be the corresponding signed valuation map and signed value group. Then the valuation property yields a $d \in \VF(\mdl M)$ such that $\vv'(a - d) \notin \Gamma'(\mdl M)$. Since
\[
\abs{\Gamma'}(\langle \mdl M, a \rangle) \cong \absG(\mdl M) \oplus \mathds{K},
\]
there is a $\gamma \in \absG(\mdl M)$ such that (exactly) one of the following two relations holds:
\begin{gather*}
\abs{ \OO_\gamma(\mdl M)} < \abs{a - d} < \abs{ \VF(\mdl M) \smallsetminus \OO_\gamma(\mdl M) },\\
\abs{ \MM_\gamma(\mdl M)} < \abs{a - d} < \abs{ \VF(\mdl M) \smallsetminus \MM_\gamma(\mdl M) },
\end{gather*}
where
\[
\OO_\gamma = \{c \in \VF: \abval(c) \geq \gamma\} \quad \text{and} \quad \MM_\gamma = \{c \in \VF: \abval(c) > \gamma\}.
\]
It is not hard to see that, in either case, $\rv(a - d) \notin \RV(\mdl M)$.
Since $\rv(a - d) = \rv(b - \sigma(d)) \eqqcolon t$, by the Wilkie inequality, $\RV(\langle \mdl M, a \rangle) = \langle \RV(\mdl M), t \rangle$ and hence $\bar \sigma$ must be immediate.
\end{proof}
\subsection{Fundamental structure of $T$-convex valuation}
We review some fundamental facts concerning the valuation in $\mmdl$. Additional notation and terminology are also introduced.
Recall \cite[Theorem~A]{Dries:tcon:97}: The structure of definable sets in the $\K$-sort is precisely that given by the theory $T$.
Recall \cite[Theorem~B]{Dries:tcon:97}: The structure of definable sets in the (imaginary) $\abs \Gamma$-sort is precisely that given by the $o$\nobreakdash-minimal theory of nontrivially ordered vector spaces over $\mathds{K}$. The structure of definable sets in the (imaginary) $\Gamma$-sort is the same one modulo the sign. In particular, every definable function in the $\Gamma$-sort is definably piecewise $\mathds{K}$-linear modulo the sign.
\begin{lem}\label{gk:ortho}
If $f : \Gamma \longrightarrow \K$ is a definable function then $f(\Gamma)$ is finite. Similarly, if $g : \K \longrightarrow \Gamma$ is a definable function then $g(\K)$ is finite.
\end{lem}
\begin{proof}
See \cite[Proposition~5.8]{Dries:tcon:97}.
\end{proof}
Note that \cite[Theorem~B, Proposition~5.8]{Dries:tcon:97} require that $T$ be power-bounded.
\begin{nota}\label{gamma:what}
There are two ways of treating an element $\gamma \in \Gamma$: as a point, when we study $\Gamma$ as an independent structure (recall Convention~\ref{how:gam}), or as a subset of $\mmdl$, when we need to remain in the realm of definable sets in $\mmdl$. The former perspective simplifies the notation but is, of course, dispensable.
We shall write $\gamma$ as $\gamma^\sharp$ when we want to emphasize that it is the set $\vrv^{-1}(\gamma)$ in $\mmdl$ that is being considered. More generally, if $I$ is a set in $\Gamma$ then we write $I^\sharp = \bigcup\{\gamma^\sharp: \gamma \in I\}$. Similarly, if $U$ is a set in $\RV$ then $U^\sharp$ stands for $\bigcup\{\rv^{-1}(t): t \in U\}$.
\end{nota}
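As a simple illustration of this notation, which we record for orientation: for $t \in \RV$ with $t \neq 0$ and any $a \in t^\sharp$,
\[
t^\sharp = \rv^{-1}(t) = a(1 + \MM) = \{x \in \VF : \abval(x - a) > \abval(a)\},
\]
where $\MM$ is the maximal ideal of $\OO$; that is, $t^\sharp$ is the open disc of valuative radius $\abvrv(t)$ around any of its points, and, for $\gamma \in \Gamma$, the set $\gamma^\sharp$ is a union of such discs.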
Since $\TCVF$ is a weakly $o$\nobreakdash-minimal theory (see \cite[Corollary~3.14]{DriesLew95} and Corollary~\ref{trans:VF}), we can use the dimension theory of \cite[\S~4]{mac:mar:ste:weako} in $\mmdl$.
\begin{defn}
The \emph{$\VF$-dimension} of a definable set $A$, denoted by $\dim_{\VF}(A)$, is the largest natural number $k$ such that, possibly after re-indexing of the $\VF$-coordinates, $\pr_{\leq k}(A_t)$ has nonempty interior for some $t \in A_{\RV}$.
\end{defn}
For all substructures $\mdl M$ and all $a \in \VF$, $\VF(\dcl_{\mdl M}(a)) = \langle \mdl M , a \rangle_T$, where $\dcl_{\mdl M}(a)$ is the definable closure of $a$ over $\mdl M$. This implies that the exchange principle with respect to definable closure --- or algebraic closure, which is the same thing since there is an ordering --- holds in the $\VF$-sort, because it holds for $T$\nobreakdash-models. Therefore, by \cite[\S~4.12]{mac:mar:ste:weako}, we may equivalently define $\dim_{\VF}(A)$ to be the maximum of the algebraic dimensions of the fibers $A_t$, $t \in A_{\RV}$.
Algebraic dimension is defined (in any sort) for any theory whose models have the exchange property with respect to algebraic closure, or, more generally, to any suitable notion of closure. In the present setting, the algebraic dimension of a set $B \subseteq \VF^n$ that is definable over a substructure $\mdl M$ is just the maximum of the ranks of the $T$\nobreakdash-models $\langle \mdl M , b \rangle_T$, $b \in B$, relative to the $T$\nobreakdash-model $\VF(\mdl M)$ (again, see \cite[\S~3.2]{DriesLew95} for the notion of ranks of $T$-models). It can be shown that this does not depend on the choice of $\mdl M$.
Yet another way to define this notion of $\VF$-dimension is to imitate \cite[Definition~4.1]{Yin:special:trans}, since we have:
\begin{lem}\label{altVFdim}
If $\dim_{\VF}(A) = k$ then $k$ is the smallest number such that there is a definable injection $f: A \longrightarrow \VF^k \times \RV^l$.
\end{lem}
\begin{proof}
This is immediate by a straightforward argument combining the exchange principle, Lemma~\ref{RV:no:point} below, and compactness.
Alternatively, we may just quote \cite[Theorem~4.11]{mac:mar:ste:weako}.
\end{proof}
\begin{rem}[$\RV$-dimension and $\Gamma$-dimension]\label{rem:RV:weako}
It is routine to verify that the axioms concerning only the $\RV$-sort are all universal except for the one asserting that $\K^{\times}$ is a proper subgroup of $\RV^{\times}$, which is existential. These axioms amount to a weakly $o$\nobreakdash-minimal theory too, and the exchange principle holds for it. Therefore, we can use the dimension theory of \cite[\S~4]{mac:mar:ste:weako} directly in the $\RV$-sort as well. We call it the $\RV$-dimension and denote the corresponding operator by $\dim_{\RV}$. Note that $\dim_{\RV}$ does not depend on parameters (see \cite[\S~4.12]{mac:mar:ste:weako}) and agrees with the $o$\nobreakdash-minimal dimension in the $\K$-sort (see \cite[\S~4.1]{dries:1998}) whenever both are applicable.
Similarly, we shall use the $o$\nobreakdash-minimal dimension in the $\Gamma$-sort and call it the $\Gamma$-dimension. The corresponding operator is denoted by $\dim_{\Gamma}$.
\end{rem}
\begin{lem}\label{dim:cut:gam}
Let $U \subseteq \RV^n$ be a definable set with $\dim_{\RV}(U) = k$. Then $\dim_{\RV}(U_{\gamma}) = k$ for some $\gamma \in \vrv(U)$.
\end{lem}
Here $U_{\gamma}$ denotes the pullback of $\gamma$ along the obvious function $\vrv \upharpoonright U$, in line with the convention set in the last paragraph of Notation~\ref{indexing}.
\begin{proof}
By \cite[Theorem~4.11]{mac:mar:ste:weako} we may assume $n = k$. Then, for some $\gamma \in \vrv(U)$, $U_{\gamma}$ contains an open subset of $\RV^n$. The lemma follows.
\end{proof}
\begin{lem}\label{gam:red:K}
Let $D \subseteq \Gamma^n$ be a definable set with $\dim_{\Gamma}(D) = k$. Then $D^\sharp$ is definably bijective to a disjoint union of finitely many sets of the form $(\K^+)^{n-k} \times D'^\sharp$, where $D' \subseteq \Gamma^k$.
\end{lem}
\begin{proof}
Over a definable finite partition of $D$, we may assume that $D \subseteq (\Gamma^+)^n$ and the restriction $\pr_{\leq k} \upharpoonright D$ is injective. It follows from \cite[Theorem~B]{Dries:tcon:97} that the induced function $f : D_{\leq k} \longrightarrow D_{>k}$ is piecewise $\mathds{K}$-linear. Thus, for every $\gamma \in D_{\leq k}$ and every $t \in \gamma^\sharp$, there is a $t$-definable point in $f(\gamma)^\sharp$. The assertion follows.
\end{proof}
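In the simplest case of the lemma ($n = 1$, $k = 0$), the proof specializes to the following familiar picture: if $D = \{\gamma\}$ with $\gamma \in \Gamma^+$ and there is a definable $t \in \gamma^\sharp$, then multiplicative translation by $t$,
\[
\gamma^\sharp \longrightarrow \K^+, \quad s \longmapsto s / t,
\]
is a definable bijection, and hence $D^\sharp$ is definably bijective to $(\K^+)^{n-k} \times D'^\sharp = \K^+ \times D'^\sharp$ with $D'$ a point.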
Taking disjoint unions of finitely many definable sets will of course introduce extra bookkeeping coordinates, but we shall suppress these in the notation.
\begin{rem}[$o$\nobreakdash-minimal sets in $\RV$]\label{omin:res}
The theory of $o$\nobreakdash-minimality, in particular its terminology and notions, may be applied to a set $U \subseteq \RV^n$ such that $\vrv(U)$ is a singleton or, more generally, is finite. For example, we shall say that $U$ is a \emph{cell} if the multiplicative translation $U / u \subseteq (\K^+)^n$ of $U$ by some $u \in U$ is an $o$\nobreakdash-minimal cell (see \cite[\S~3]{dries:1998}); this definition does not depend on the choice of $u$. Similarly, the \emph{$o$\nobreakdash-minimal Euler characteristic} $\chi(U)$ of such a set $U$ is the $o$\nobreakdash-minimal Euler characteristic of $U / u$ (see \cite[\S~4.2]{dries:1998}). This definition may be extended to disjoint unions of finitely many (not necessarily disjoint) sets $U_i \subseteq \RV^n \times \Gamma^m$ such that each $\vrv(U_i)$ is finite.
\end{rem}
\begin{thm}\label{groth:omin}
Let $U$, $V$ be definable sets in $\RV$ with $\vrv(U)$, $\vrv(V)$ finite. Then there is a definable bijection between $U$ and $V$ if and only if
\[
\dim_{\RV}(U) = \dim_{\RV}(V) \quad \text{and} \quad \chi(U) = \chi(V).
\]
\end{thm}
\begin{proof}
See \cite[\S~8.2.11]{dries:1998}.
\end{proof}
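For orientation, here is the classical computation behind the theorem (see \cite[\S~4.2]{dries:1998}): decomposing a closed interval into its two endpoints and an open interval gives
\[
\chi([a, b]) = 1 - 1 + 1 = 1, \qquad \chi((a, b)) = -1 = \chi(\K),
\]
so, although $[a, b]$ and $(a, b)$ in the $\K$-sort have the same dimension, they are not in definable bijection, whereas the theorem does provide a definable bijection between $(a, b)$ and $\K$.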
\begin{defn}[Valuative discs]\label{defn:disc}
A set $\mathfrak{b} \subseteq \VF$ is an \emph{open disc} if there are a $\gamma \in |\Gamma|$ and a $b \in \mathfrak{b}$ such that $a \in \mathfrak{b}$ if and only if $\abval(a - b) > \gamma$; it is a \emph{closed disc} if $a \in \mathfrak{b}$ if and only if $\abval(a - b) \geq \gamma$. The point $b$ is a \emph{center} of $\mathfrak{b}$. The value $\gamma$ is the \emph{valuative radius}, or simply the \emph{radius}, of $\mathfrak{b}$, denoted by $\rad(\mathfrak{b})$. A set of the form $t^\sharp$, where $t \in \RV$, is called an \emph{$\RV$-disc} (recall Notation~\ref{gamma:what}).
A closed disc with a maximal open subdisc removed is called a \emph{thin annulus}.
A set $\mathfrak{p} \subseteq \VF^n \times \RV_0^m$ of the form $(\prod_{i \leq n} \mathfrak{b}_i) \times t$ is an (\emph{open, closed, $\RV$-}) \emph{polydisc}, where each $\mathfrak{b}_i$ is an (open, closed, $\RV$-) disc. The \emph{polyradius} $\rad(\mathfrak{p})$ of $\mathfrak{p}$ is the tuple $(\rad(\mathfrak{b}_1), \ldots, \rad(\mathfrak{b}_n))$, whereas the \emph{radius} of $\mathfrak{p}$ is $\min \rad(\mathfrak{p})$. If all the discs $\mathfrak{b}_i$ are of the same valuative radius then $\mathfrak{p}$ is referred to as a \emph{ball}.
The open and the closed polydiscs centered at a point $a \in \VF^n$ with polyradius $\gamma \in |\Gamma|^n$ are denoted by $\mathfrak{o}(a, \gamma)$ and $\mathfrak{c}(a, \gamma)$, respectively.
The \emph{$\RV$-hull} of a set $A$, denoted by $\RVH(A)$, is the union of all the $\RV$-polydiscs whose intersections with $A$ are nonempty. If $A$ equals $\RVH(A)$ then $A$ is called an \emph{$\RV$-pullback}.
\end{defn}
The map $\abval$ is constant on a disc if and only if the disc does not contain $0$, if and only if it is contained in an $\RV$-disc. If two discs have nonempty intersection then one of them contains the other. Many such elementary facts about discs will be used throughout the rest of the paper without further explanation.
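For the record, the second of these facts is the usual ultrametric argument: since every point of a disc is one of its centers, if $a \in \mathfrak{b}_1 \cap \mathfrak{b}_2$ and, say, $\rad(\mathfrak{b}_1) \leq \rad(\mathfrak{b}_2)$ with $\mathfrak{b}_1$ closed (the other cases are similar), then every $x \in \mathfrak{b}_2$ satisfies
\[
\abval(x - a) \geq \rad(\mathfrak{b}_2) \geq \rad(\mathfrak{b}_1),
\]
and hence $\mathfrak{b}_2 \subseteq \mathfrak{b}_1$.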
\begin{nota}[The definable sort $\DC$ of discs]\label{disc:exp}
At times it will be more convenient to work in the traditional expansion $\mdl R_{\rv}^{\textup{eq}}$ of $\mmdl$ by all definable sorts. However, for our purpose, a much simpler expansion $\mdl R_{\rv}^{\bullet}$ suffices. This expansion has only one additional sort $\DC$, which contains, as elements, all the open and closed discs (since each point in $\VF$ may be regarded as a closed disc of valuative radius $\infty$, we may, and occasionally do, think of $\VF$ as a subset of $\DC$). Heuristically, we may think of a disc that is properly contained in an $\RV$-disc as a ``thickened'' point of certain stature in $\VF$. For each $\gamma \in \absG$, there are two additional cross-sort maps $\VF \longrightarrow \DC$ in $\mdl R_{\rv}^{\bullet}$: one sends $a$ to the open disc of radius $\gamma$ containing $a$, the other to the closed one.
The expansion $\mdl R_{\rv}^{\bullet}$ can help reduce the technical complexity of our discussion. However, as is the case with the imaginary $\mathds{G}amma$-sort, it is conceptually inessential since, for the purpose of this paper, all allusions to discs as (imaginary) elements may be eliminated in favor of objects already definable in $\mmdl$.
Whether parameters in $\DC$ are used or not shall be indicated explicitly, if necessary. Note that it is redundant to include in $\DC$ discs centered at $0$, since they may be identified with their valuative radii.
For a disc $\mathfrak{a} \subseteq \VF$, the corresponding imaginary element in $\DC$ is denoted by $\code{\mathfrak{a}}$ when notational distinction makes the discussion more streamlined; $\code{\mathfrak{a}}$ may be heuristically thought of as the ``name'' of $\mathfrak{a}$. Conversely, a set $D \subseteq \DC$ is often identified with the set $\{\mathfrak{a} : \code \mathfrak{a} \in D\}$, in which case $\bigcup D$ denotes a subset of $\VF$.
\end{nota}
\begin{nota}\label{nota:tor}
For each $\gamma \in |\Gamma|$, let $\MM_\gamma$ and $\OO_{\gamma}$ be the open and closed discs around $0$ with radius $\gamma$, respectively. Assume $\gamma \geq 0$. Let $\RV_{\gamma} = \VF^{\times} / (1 + \MM_\gamma)$, which is a subset of $\DC$. It is an abelian group and also inherits an ordering from $\VF^\times$. The canonical map $\VF^{\times} \longrightarrow \RV_{\gamma}$ is denoted by $\rv_{\gamma}$ and is augmented by $\rv_{\gamma}(0) = 0$.
If $\code \mathfrak{b} \in \DC$, $b \in \mathfrak{b}$, and $\rad(\mathfrak{b}) \leq \abval(b) + \gamma$ then $\mathfrak{b}$ is a union of discs of the form $\rv_{\gamma}^{-1}(\code \mathfrak{a})$. In this case, we shall abuse the notation slightly and write $\code \mathfrak{a} \in \mathfrak{b}$, $\mathfrak{b} \subseteq \RV_{\gamma}$, etc.
For each $\code \mathfrak{a} \in \RV_{\gamma}$, let $\tor (\code \mathfrak{a}) \subseteq \RV_{\gamma}$ be the $\code \mathfrak{a}$-definable subset such that $\rv^{-1}_{\gamma}(\tor (\code \mathfrak{a}))$ forms the smallest closed disc containing $\mathfrak{a}$. Set
\begin{align*}
\tor^{\times}(\code \mathfrak{a}) & = \tor (\code \mathfrak{a}) \smallsetminus \code \mathfrak{a},\\
\tor^+(\code \mathfrak{a}) &= \{t \in \tor(\code \mathfrak{a}): t > \code \mathfrak{a}\},\\
\tor^-(\code \mathfrak{a}) &= \{t \in \tor(\code \mathfrak{a}): t < \code \mathfrak{a}\}.
\end{align*}
If $\code \mathfrak{a} = (\code {\mathfrak{a}_1}, \ldots, \code {\mathfrak{a}_n})$ with $\code {\mathfrak{a}_i} \in \RV_{\gamma_i}$ then $\prod_i \tor(\code{\mathfrak{a}_i})$ is simply written as $\tor(\code \mathfrak{a})$; similarly for $\tor^{\times}(\code \mathfrak{a})$, $\tor^+(\code \mathfrak{a})$, etc.
If $\gamma = 0$ then we may, for all purposes, identify $\tor^{\times} (\code \mathfrak{a})$, $\tor (\code \mathfrak{a})$, etc., with $\tor^{\times}(\alpha) \coloneqq \abvrv^{-1}(\alpha) \subseteq \RV$, $\tor(\alpha) \coloneqq \tor^{\times}(\alpha) \cup \{0\}$, etc., where $\alpha = \rad(\mathfrak{a})$.
\end{nota}
\begin{rem}[$\K$-torsors]\label{rem:K:aff}
Let $\code \mathfrak{a} \in \RV_{\gamma}$ and $\alpha = \rad(\mathfrak{a})$. Since, via additive translation by $\code \mathfrak{a}$, there is a canonical $\code \mathfrak{a}$-definable order-preserving bijection
\[
\aff_{\code{\mathfrak{a}}} : \tor(\code \mathfrak{a}) \longrightarrow \tor(\alpha),
\]
we see that $\code \mathfrak{a}$-definable subsets of $\tor(\code \mathfrak{a})^n$ naturally correspond to those of $\tor(\alpha)^n$. If there is an $\code \mathfrak{a}$-definable $t \in \tor^{\times}(\alpha)$ then, via multiplicative translation by $t$, this correspondence may be extended to $\code \mathfrak{a}$-definable subsets of $\tor(0)^n = \K^n$. More generally, for any $t \in \tor^{\times}(\alpha)$, the induced bijection $\tor(\code \mathfrak{a}) \longrightarrow \K$ is denoted by $\aff_{\code \mathfrak{a}, t}$. Consequently, $\tor(\code \mathfrak{a})$ may be viewed as a $\K$-torsor and, as such, is equipped with much of the structure of $\K$.
\end{rem}
\begin{defn}[Derivation between $\K$-torsors]\label{rem:tor:der}
Let $\code \mathfrak{a}$, $\alpha$ be as above. Let $\code \mathfrak{b} \in \RV_{\delta}$ and $\beta = \rad(\mathfrak{b})$. Let $f : \tor(\code \mathfrak{a}) \longrightarrow \tor(\code \mathfrak{b})$ be a function. We define the \emph{derivative} $\ddx f$ of $f$ at any point $\code \mathfrak{d} \in \tor(\code \mathfrak{a})$ as follows. Choose any $t \in \tor^{\times}(\alpha)$ and any $s \in \tor^{\times}(\beta)$. Consider the function
\[
f_{\code \mathfrak{a}, \code \mathfrak{b}, t, s} : \K \xrightarrow{\ \aff^{-1}_{\code \mathfrak{a}, t}\ } \tor(\code \mathfrak{a}) \xrightarrow{\ f\ } \tor(\code \mathfrak{b}) \xrightarrow{\ \aff_{\code \mathfrak{b}, s}\ } \K.
\]
Put $r = \aff_{\code \mathfrak{a}, t}(\code \mathfrak{d})$ and suppose that $\frac{d}{dx} f_{\code \mathfrak{a}, \code \mathfrak{b}, t,s}(r) \in \K$ exists. Then we set
\[
\tfrac{d}{d x} f(\code \mathfrak{d}) = s t^{-1} \tfrac{d}{d x} f_{\code \mathfrak{a}, \code \mathfrak{b}, t,s}(r) \in \tor(\beta - \alpha).
\]
It is routine to check that this construction does not depend on the choice of $\code \mathfrak{a}$, $\code \mathfrak{b}$, $t$, $s$, and hence the derivative $\ddx f(\code \mathfrak{d})$ is well-defined.
\end{defn}
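The independence of the choice of $t$ and $s$ may be spelled out as follows (reading the multiplicative translation of Remark~\ref{rem:K:aff} as division by $t$): if $t' = ut$ and $s' = vs$ with $u, v \in \K^{\times}$ then $\aff_{\code \mathfrak{a}, t'} = u^{-1} \aff_{\code \mathfrak{a}, t}$ and $\aff_{\code \mathfrak{b}, s'} = v^{-1} \aff_{\code \mathfrak{b}, s}$, so
\[
f_{\code \mathfrak{a}, \code \mathfrak{b}, t', s'}(x) = v^{-1} f_{\code \mathfrak{a}, \code \mathfrak{b}, t, s}(u x),
\]
and, at $r' = u^{-1} r$, the chain rule yields
\[
s' t'^{-1} \tfrac{d}{d x} f_{\code \mathfrak{a}, \code \mathfrak{b}, t', s'}(r') = (vs)(ut)^{-1} \cdot v^{-1} u \cdot \tfrac{d}{d x} f_{\code \mathfrak{a}, \code \mathfrak{b}, t, s}(r) = s t^{-1} \tfrac{d}{d x} f_{\code \mathfrak{a}, \code \mathfrak{b}, t, s}(r).
\]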
\begin{defn}[$\vv$-intervals]
Let $\mathfrak{a}$, $\mathfrak{b}$ be discs, not necessarily disjoint. The subset $\mathfrak{a} < x < \mathfrak{b}$ of $\VF$, if it is not empty, is called an \emph{open $\vv$-interval} and is denoted by $(\mathfrak{a}, \mathfrak{b})$, whereas the subset
\[
\{a \in \VF : \ex{x \in \mathfrak{a}, y \in \mathfrak{b}} ( x \leq a \leq y) \}
\]
if it is not empty, is called a \emph{closed $\vv$-interval} and is denoted by $[\mathfrak{a}, \mathfrak{b}]$. The other $\vv$-intervals $[\mathfrak{a}, \mathfrak{b})$, $(-\infty, \mathfrak{b}]$, etc., are defined in the obvious way, where $(-\infty, \mathfrak{b}]$ is a closed (or half-closed) $\vv$-interval that is unbounded from below.
Let $A$ be such a $\vv$-interval. The discs $\mathfrak{a}$, $\mathfrak{b}$ are called the \emph{end-discs} of $A$. If $\mathfrak{a}$, $\mathfrak{b}$ are both points in $\VF$ then of course we just say that $A$ is an interval, and if $\mathfrak{a}$, $\mathfrak{b}$ are both $\RV$-discs then we say that $A$ is an $\RV$-interval. If $A$ is of the form $(\mathfrak{a}, \mathfrak{b}]$ or $[\mathfrak{b}, \mathfrak{a})$, where $\mathfrak{a}$ is an open disc and $\mathfrak{b}$ is the smallest closed disc containing $\mathfrak{a}$, then $A$ is called a \emph{half thin annulus} and the \emph{radius} of $A$ is $\rad(\mathfrak{b})$.
Two $\vv$-intervals are \emph{disconnected} if their union is not a $\vv$-interval.
\end{defn}
Obviously the open $\vv$-interval $(\mathfrak{a}, \mathfrak{b})$ is empty if $\mathfrak{a}$, $\mathfrak{b}$ are not disjoint. Equally obvious is that a $\vv$-interval is definable over some substructure $\mdl S$ if and only if its end-discs are definable over $\mdl S$.
\begin{rem}[Holly normal form]\label{rem:HNF}
By the valuation property (Proposition~\ref{val:prop}) and \cite[Proposition~7.6]{Dries:tcon:97}, we have an important tool called \emph{Holly normal form} \cite[Theorem~4.8]{holly:can:1995} (henceforth abbreviated as HNF); that is, every definable subset of $\VF$ is a unique union of finitely many definable pairwise disconnected $\vv$-intervals. This is obviously a generalization of the $o$\nobreakdash-minimal condition.
\end{rem}
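For instance, a single disc $\mathfrak{b}$ is already in HNF, being the closed $\vv$-interval $[\mathfrak{b}, \mathfrak{b}]$, while
\[
\{x \in \VF : \abval(x) = 0\} = [\OO, \MM) \cup (\MM, \OO],
\]
with $\OO$, $\MM$ viewed as discs centered at $0$, exhibits the set of units as the union of two disconnected half thin annuli of radius $0$ (the negative and the positive units, respectively).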
\section{Definable sets in $\VF$}\label{def:VF}
From here on, we shall work with a fixed small substructure $\mdl S$ of $\mdl R_{\rv}$, and occasionally of $\mdl R_{\rv}^{\bullet}$ (primarily in this section). The conceptual reason for this move is that the Grothendieck rings in our main construction below change their meaning if the set of parameters changes. In particular, allowing all parameters more or less trivializes the whole construction. For instance, every definable set would then contain a definable point. Consequently, all Galois actions on the classes of finite definable sets are killed off, which is highly undesirable for motivic integration in algebraically closed valued fields. Admittedly, this problem is not as severe in our setting. In any case, we follow the practice in \cite{hrushovski:kazhdan:integration:vf}.
Note that $\mdl S$ is regarded as a part of the language now and hence, contrary to the usual convention in the model-theoretic literature, ``$\emptyset$-definable'' or ``definable'' only means ``$\mdl S$-definable'' instead of ``parametrically definable'' if no other qualifications are given. To simplify the notation, we shall not mention $\mdl S$ and its extensions in context if no confusion can arise. For example, the definable closure operator $\dcl_{\mdl S}$, etc., will simply be written as $\dcl$, etc.
For the moment we do not require that $\mdl S$ be $\VF$-generated or that $\Gamma(\mdl S)$ be nontrivial. When we work in $\mdl R_{\rv}^{\bullet}$ --- either by introducing parameters of the form $\code \mathfrak{a}$ or by the phrase ``in $\mdl R_{\rv}^{\bullet}$'' --- the substructure $\mdl S$ may contain names for discs that may or may not be definable from $\VF(\mdl S) \cup \RV(\mdl S)$.
\subsection{Definable functions and atomic open discs}
The structural analysis of definable sets in $\VF$ below is, for the most part, of a rather technical nature. One highlight is Corollary~\ref{part:rv:cons}. It is a crucial ingredient of the proof in \cite{halyin} that all definable closed sets in an arbitrary power-bounded $o$\nobreakdash-minimal field admit Lipschitz stratification.
\begin{conv}\label{topterm}
Since apart from $\leq$ the language $\lan{T}{}{}$ only has function symbols, we may and shall assume that, in any $\lan{T}{RV}{}$-formula, every \LT-term occurs in the scope of an instance of the function symbol $\rv$. For example, if $f(x)$, $g(x)$ are \LT-terms then the formula $f(x) < g(x)$ is equivalent to $\rv(f(x) - g(x)) < 0$. The \LT-term $f(x)$ in $\rv(f(x))$ shall be referred to as a \emph{top \LT-term}.
\end{conv}
We begin by studying definable functions between various regions of the structure.
\begin{lem}\label{Ocon}
Let $f : \OO \longrightarrow \VF$ be a definable function. Then for some $\gamma \in \GAA$ and $a \in \OO$ we have $\vv(f(b)) = \gamma$ for all $b > a$ in $\OO$.
\end{lem}
\begin{proof}
See \cite[Proposition~4.2]{Dries:tcon:97}.
\end{proof}
Note that this is false if $T$ is not power-bounded.
A definable function $f$ is \emph{quasi-\LT-definable} if it is a restriction of an \LT-definable function (with parameters in $\VF(\mdl S)$, of course).
\begin{lem}\label{fun:suba:fun}
Every definable function $f : \VF^n \longrightarrow \VF$ is piecewise quasi-\LT-definable; that is, there are a definable finite partition $A_i$ of $\VF^n$ and \LT-definable functions $f_i: \VF^n \longrightarrow \VF$ such that $f \upharpoonright A_i = f_i \upharpoonright A_i$ for all $i$.
\end{lem}
\begin{proof}
By compactness, this is immediately reduced to the case $n = 1$. In that case, let $\phi(x, y)$ be a quantifier-free formula that defines $f$. Let $\tau_i(x, y)$ enumerate the top \LT-terms in $\phi(x, y)$. For each $a \in \VF$ and each $\tau_i(a, y)$, let $B_{a, i} \subseteq \VF$ be the characteristic finite subset of the function $\tau_i(a, y)$ given by $o$\nobreakdash-minimal monotonicity (see \cite[\S~3.1]{dries:1998}). It is not difficult to see that if $f(a) \notin \bigcup_i B_{a, i}$ then there would be a $b \neq f(a)$ such that
\[
\rv(\tau_i(a, b)) = \rv(\tau_i(a, f(a)))
\]
for all $i$ and hence $\phi(a, b)$ holds, which is impossible since $f$ is a function. The lemma follows.
\end{proof}
This lemma is just a variation of \cite[Lemma~2.6]{Dries:tcon:97}.
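The qualifier ``piecewise'' cannot be dropped in general. Here is a hypothetical configuration illustrating this: suppose that $\mdl S$ contains some $t \in \mathds{R}V$ with $t^\sharp \cap \VF(\mdl S) = \emptyset$, and define $f : \VF \longrightarrow \VF$ by
\[
f(x) = x \text{ for } x \in t^\sharp \quad \text{and} \quad f(x) = 0 \text{ otherwise.}
\]
Both pieces are restrictions of \LT-definable functions, but $f$ itself is not quasi-\LT-definable, since the open disc $t^\sharp$ has no endpoints in $\VF$ and hence cannot be \LT-definable over $\VF(\mdl S)$.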
\begin{cor}[Monotonicity]\label{mono}
Let $A \subseteq \VF$ and $f : A \longrightarrow \VF$ be a definable function. Then there is a definable finite partition of $A$ into $\vv$-intervals $A_i$ such that every $f \upharpoonright A_i$ is quasi-\LT-definable, continuous, and monotone (constant or strictly increasing or strictly decreasing). Consequently, each $f(A_i)$ is a $\vv$-interval.
\end{cor}
\begin{proof}
This is immediate by Lemma~\ref{fun:suba:fun}, $o$\nobreakdash-minimal monotonicity, and HNF.
\end{proof}
This corollary is a version of \cite[Corollary~2.8]{Dries:tcon:97}, slightly finer due to the presence of HNF.
\begin{cor}\label{uni:fun:decom}
For the function $f$ in Corollary~\ref{mono}, there is a definable function $\pi : A \longrightarrow \mathds{R}V^2$ such that, for each $t \in \mathds{R}V^2$, $f \upharpoonright A_t$ is either constant or injective, where $A_t = \pi^{-1}(t)$.
\end{cor}
\begin{proof}
This follows easily from monotonicity. Also, the proof of \cite[Lemma~4.11]{Yin:QE:ACVF:min} still works.
\end{proof}
\begin{lem}\label{RV:no:point}
Given a tuple $t = (t_1, \ldots, t_n) \in \mathds{R}V^n$, if $a \in \VF$ is $t$-definable then $a$ is definable. Similarly, for $\gamma = (\gamma_1, \ldots, \gamma_n) \in \Gamma^n$, if $t \in \mathds{R}V$ is $\gamma$-definable then $t$ is definable.
\end{lem}
\begin{proof}
The first assertion follows directly from Lemma~\ref{fun:suba:fun}. It can also be easily seen through an induction on $n$ with the trivial base case $n=0$. For any $b \in t_n^\sharp$, by the inductive hypothesis, we have $a \in \VF(\langle b \rangle)$. If $a$ were not definable then, by the exchange principle, we would have $b \in \VF(\langle a \rangle)$ and hence $t_n^\sharp \subseteq \VF(\langle a \rangle)$, which is impossible. The second assertion is similar, using the exchange principle in the $\mathds{R}V$-sort (see Remark~\ref{rem:RV:weako}).
\end{proof}
\begin{cor}\label{function:rv:to:vf:finite:image}
Let $U \subseteq \mathds{R}V^m$ be a definable set and $f : U \longrightarrow \VF^n$ a definable function. Then $f(U)$ is finite.
\end{cor}
\begin{proof}
We may assume $n=1$. Then this is immediate by Lemma~\ref{RV:no:point} and compactness.
\end{proof}
There is a more general version of Lemma~\ref{RV:no:point} that involves parameters in the $\mathds{D}C$-sort:
\begin{lem}\label{ima:par:red}
Let $\code \mathfrak{a} = (\code{\mathfrak{a}_1}, \ldots, \code{\mathfrak{a}_n}) \in \mathds{D}C^n$. If $a \in \VF$ is $\code \mathfrak{a}$-definable then $a$ is definable.
\end{lem}
\begin{proof}
We proceed by induction on $n$. Let $b \in \mathfrak{a}_n$ and $t \in \mathds{R}V$ such that $\abvrv(t) = \rad(\mathfrak{a}_n)$. Then $a$ is $(\code {\mathfrak{a}_1}, \ldots, \code {\mathfrak{a}_{n-1}}, t, b)$-definable. By the inductive hypothesis and Lemma~\ref{RV:no:point}, we have $a \in \VF(\langle b \rangle)$. If $a$ were not definable then we would have $b \in \VF(\langle a \rangle)$ and hence $\mathfrak{a}_n \subseteq \VF(\langle a \rangle)$, which is impossible unless $\mathfrak{a}_n$ is a definable point in $\VF$.
\end{proof}
\begin{lem}\label{open:K:con}
In $\mdl R^{\bullet}_{\rv}$, let $\mathfrak{a} \subseteq \VF$ be a definable open disc and $f : \mathfrak{a} \longrightarrow \K$ a definable nonconstant function. Then there is a definable proper subdisc $\mathfrak{b} \subseteq \mathfrak{a}$ such that $f \upharpoonright (\mathfrak{a} \smallsetminus \mathfrak{b})$ is constant.
\end{lem}
\begin{proof}
If $\mathfrak{b}_1$ and $\mathfrak{b}_2$ are two proper subdiscs of $\mathfrak{a}$ such that $f \upharpoonright (\mathfrak{a} \smallsetminus \mathfrak{b}_1)$ and $f \upharpoonright (\mathfrak{a} \smallsetminus \mathfrak{b}_2)$ are both constant then $\mathfrak{b}_1$ and $\mathfrak{b}_2$ must be concentric, that is, $\mathfrak{b}_1 \cap \mathfrak{b}_2 \neq \emptyset$, for otherwise $f$ would be constant. Therefore, it is enough to show that $f \upharpoonright (\mathfrak{a} \smallsetminus \mathfrak{b})$ is constant for some proper subdisc $\mathfrak{b} \subseteq \mathfrak{a}$. To that end, without loss of generality, we may assume that $\mathfrak{a}$ is centered at $0$. For each $\gamma \in \vv(\mathfrak{a}) \subseteq \Gamma$, by \cite[Theorem~A]{Dries:tcon:97} and $o$\nobreakdash-minimality, $f(\vv^{-1}(\gamma))$ contains a $\gamma$-definable element $t_{\gamma} \in \K$. By weak $o$\nobreakdash-minimality, $f(\vv^{-1}(\gamma)) = \{t_{\gamma}\}$ for all but finitely many $\gamma \in \vv(\mathfrak{a})$. Let $g : \vv(\mathfrak{a}) \longrightarrow \K$ be the definable function given by $\gamma \longmapsto t_{\gamma}$. By Lemma~\ref{gk:ortho}, the image of $g$ is finite. The assertion follows.
\end{proof}
Alternatively, we may simply quote \cite[Theorem~1.2]{jana:omin:res}.
\begin{defn}
Let $D$ be a set of parameters. We say that a (not necessarily definable) nonempty set $A$ \emph{generates a (complete) $D$-type} if, for every $D$-definable set $B$, either $A \subseteq B$ or $A \cap B = \emptyset$. Such an $A$ is \emph{$D$-type-definable} if, in addition, no set that properly contains $A$ also generates a $D$-type. If $A$ is both $D$-definable and $D$-type-definable (equivalently, if $A$ is $D$-definable and generates a $D$-type) then we say that $A$ is \emph{$D$-atomic} or \emph{atomic over $D$}.
\end{defn}
We simply say ``atomic'' when $D =\emptyset$.
In the literature, a type could be a partial type and hence a type-definable set may have nontrivial intersection with a definable set. In this paper, since partial types do not play a role, we shall not carry the superfluous qualifier ``complete'' in our terminology.
\begin{rem}[Taxonomy of atomic sets]\label{rem:type:atin}
It is not hard to see that, by HNF, if $\mathfrak{i} \subseteq \VF$ is atomic then $\mathfrak{i}$ must be a $\vv$-interval. In fact, there are only four possibilities for $\mathfrak{i}$: a point, an open disc, a closed disc, or a half thin annulus. There are no ``meaningful'' relations between these cases; see Lemma~\ref{atom:type}.
\end{rem}
\begin{lem}\label{atom:gam}
In $\mdl R^{\bullet}_{\rv}$, let $\mathfrak{a}$ be an atomic set. Then $\mathfrak{a}$ remains $\gamma$-atomic for all $\gamma \in \Gamma$. Moreover, if $\mathfrak{a} \subseteq \VF^n$ is an open polydisc then it remains $\code \mathfrak{a}$-atomic.
\end{lem}
\begin{proof}
The first assertion is a direct consequence of definable choice in the $\Gamma$-sort. For the second assertion, let $\gamma = \rad(\mathfrak{a})$. If $\mathfrak{a}$ were not $\code \mathfrak{a}$-atomic then, by compactness, there would be a $\gamma$-definable subset $A \subseteq \VF^n$ such that $A \cap \mathfrak{a}$ is nonempty and, for every open polydisc $\mathfrak{b}$ with $\gamma = \rad(\mathfrak{b})$, if $A \cap \mathfrak{b}$ is nonempty then it is a proper subset of $\mathfrak{b}$ --- this contradicts the first assertion that $\mathfrak{a}$ is $\gamma$-atomic.
\end{proof}
Recall from \cite[Definition~4.5]{mac:mar:ste:weako} the notion of a cell in a weakly $o$\nobreakdash-minimal structure. In our setting, it is easy to see that, by HNF, we may require that the images of the bounding functions $f_1$, $f_2$ of a cell $(f_1, f_2)_A$ in the $\VF$-sort be contained in $\mathds{D}C$; cell decomposition \cite[Theorem~4.6]{mac:mar:ste:weako} holds accordingly. Cells are in general not invariant under coordinate permutations; however, by cell decomposition, an atomic subset of $\VF^n$ must be a cell and must remain so under coordinate permutations.
\begin{lem}\label{open:rv:cons}
In $\mdl R^{\bullet}_{\rv}$, let $\mathfrak{a} \subseteq \VF^n$ be an atomic open polydisc and $f : \mathfrak{a} \longrightarrow \VF$ a definable function. If $f$ is not constant then $f(\mathfrak{a})$ is an (atomic) open disc; in particular, $\rv \upharpoonright f(\mathfrak{a})$ is always constant.
\end{lem}
\begin{proof}
By atomicity, $f(\mathfrak{a})$ must be an atomic $\vv$-interval. We proceed by induction on $n$. For the base case $n=1$, suppose for contradiction that $f(\mathfrak{a})$ is a closed disc (other than a point) or a half thin annulus. By monotonicity, we may assume that $f$ is, say, strictly increasing. Then $f^{-1}$ violates Lemma~\ref{Ocon}, a contradiction.
For the case $n > 1$, suppose for contradiction again that $f(\mathfrak{a})$ is a closed disc or a half thin annulus. By the inductive hypothesis, for every $a \in \pr_{1}(\mathfrak{a})$ there is a maximal open subdisc $\mathfrak{b}_a \subseteq f(\mathfrak{a})$ that contains $f(\mathfrak{a}_a)$, similarly for every $a \in \pr_{>1}(\mathfrak{a})$. It follows that $f(\mathfrak{a})$ is actually contained in a maximal open subdisc of $f(\mathfrak{a})$, which is absurd.
\end{proof}
\begin{cor}\label{poly:open:cons}
Let $f : \VF^n \longrightarrow \VF$ be a definable function and $\mathfrak{a} \subseteq \VF^n$ an open polydisc. If $(\rv \circ f) \upharpoonright \mathfrak{a}$ is not constant then there is an $\code \mathfrak{a}$-definable nonempty proper subset of $\mathfrak{a}$.
\end{cor}
Here is a strengthening of Lemma~\ref{atom:gam}:
\begin{lem}\label{atom:self}
Let $B \subseteq \VF^n$ be \LT-type-definable and $\mathfrak{a} = \mathfrak{a}_1 \times \ldots \times \mathfrak{a}_n \subseteq B$ an open polydisc. Then, for all $a = (a_1, \ldots, a_n)$ and $b = (b_1, \ldots, b_n)$ in $\mathfrak{a}$, there is an immediate automorphism $\sigma$ of $\mmdl$ with $\sigma(a) = b$. Consequently, $\mathfrak{a}$ is $(\code \mathfrak{a}, t)$-atomic for all $t \in \mathds{R}V$.
\end{lem}
\begin{proof}
To see that the first assertion implies the second, suppose for contradiction that there is an $(\code \mathfrak{a}, t)$-definable nonempty proper subset $A \subseteq \mathfrak{a}$. Let $a \in A$, $b \in \mathfrak{a} \smallsetminus A$, and $\sigma$ be an immediate automorphism of $\mmdl$ with $\sigma(a) = b$. Then $\sigma$ is also an immediate automorphism of $\mmdl$ over $\langle \code \mathfrak{a}, t \rangle$, contradicting the assumption that $A$ is $(\code \mathfrak{a}, t)$-definable.
For the first assertion, by Lemma~\ref{imm:ext}, it is enough to show that there is an immediate isomorphism $\sigma : \langle a \rangle \longrightarrow \langle b \rangle$ sending $a$ to $b$. Write
\[
\mathfrak{a}' = \mathfrak{a}_1 \times \ldots \times \mathfrak{a}_{n-1}, \quad a' = (a_1, \ldots, a_{n-1}), \quad b' = (b_1, \ldots, b_{n-1}).
\]
Then, by induction on $n$ and Lemma~\ref{imm:iso}, it is enough to show that, for any immediate isomorphism $\sigma' : \langle a' \rangle \longrightarrow \langle b' \rangle$ sending $a'$ to $b'$ and any \LT-definable function $f : \VF^{n-1} \longrightarrow \VF$,
\[
\rv(a_n - f(a')) = \rv(b_n - \sigma'(f(a'))).
\]
This is clear for the base case $n=1$, since $\mathfrak{a}$ must be disjoint from $\VF(\mdl S)$. For the case $n > 1$, we choose an immediate automorphism of $\mmdl$ extending $\sigma'$, which is still denoted by $\sigma'$; this is possible by Lemma~\ref{imm:ext}. By the inductive hypothesis and Lemma~\ref{open:rv:cons}, $f(\mathfrak{a}') = f(\sigma'(\mathfrak{a}')) = \sigma'(f(\mathfrak{a}'))$ is either a point or an open disc. Since $B$ is \LT-type-definable, it follows that $f(\mathfrak{a}')$ must be disjoint from $\mathfrak{a}_n$ and hence the desired condition is satisfied.
\end{proof}
\begin{cor}\label{part:rv:cons}
Let $A \subseteq \VF^n$ and $f : A \longrightarrow \VF$ be an \LT-definable function. Then there is an \LT-definable finite partition $A_i$ of $A$ such that, for all $i$, if $\mathfrak{a} \subseteq A_i$ is an open polydisc then $\rv \upharpoonright f(\mathfrak{a})$ is constant and $f(\mathfrak{a})$ is either a point or an open disc.
\end{cor}
\begin{proof}
For $a \in A$, let $D_a \subseteq A$ be the \LT-type-definable subset containing $a$. By Lemma~\ref{atom:self}, every open polydisc $\mathfrak{a} \subseteq D_a$ is $\code \mathfrak{a}$-atomic and hence, by Lemma~\ref{open:rv:cons}, the assertion holds for $\mathfrak{a}$. Then, by compactness, the assertion must hold in a definable subset $A_a \subseteq A$ that contains $a$; by compactness again, it holds in finitely many definable subsets $A_1, \ldots, A_m$ of $A$ with $\bigcup_i A_i = A$. Then the partition of $A$ generated by $A_1, \ldots, A_m$ is as desired.
\end{proof}
\begin{rem}\label{rem:LT:com}
Clearly the conclusion of Corollary~\ref{part:rv:cons} still holds if we replace ``\LT-definable'' with ``definable'' everywhere therein. Moreover, its proof works almost verbatim in all situations where we want to partition an \LT-definable set $A \subseteq \VF^n$ into finitely many \LT-definable pieces $A_i$ such that a certain definable property, not necessarily an \LT-definable one, holds on every open polydisc (or other imaginary element) contained in $A_i$.
\end{rem}
Here is a variation of Lemma~\ref{atom:self}.
\begin{lem}\label{atom:exp}
Let $\mathfrak{a} \subseteq \VF^n$ be an $\code \mathfrak{a}$-atomic open polydisc. Let $e \in \VF^{\times}$ with $\abval(e) \gg 0$ (here $\gg$ stands for ``sufficiently larger than''). Then $\mathfrak{a}$ is $(\code \mathfrak{a}, e)$-atomic.
\end{lem}
\begin{proof}
The argument is somewhat similar to that in the proof of Lemma~\ref{atom:self}. We proceed by induction on $n$. Write $\mathfrak{a} = \mathfrak{a}_1 \times \ldots \times \mathfrak{a}_n$ and $\mathfrak{a}' = \mathfrak{a}_1 \times \ldots \times \mathfrak{a}_{n-1}$.
Let $(a' , a_n)$ and $(b', b_n)$ be two points in $\mathfrak{a}' \times \mathfrak{a}_n$. By the inductive hypothesis and Lemma~\ref{atom:self}, there is an immediate isomorphism $\sigma' : \langle a', e \rangle \longrightarrow \langle b', e \rangle$ with $\sigma'(e) = e$ and $\sigma'(a') = b'$. Thus, it is enough to show that, for every \LT-definable function $f : \VF^{n} \longrightarrow \VF$,
\[
\rv(a_n - f(e, a')) = \rv(b_n - \sigma'(f(e, a'))).
\]
Suppose for contradiction that we can always find an $e \in \VF^{\times}$ that is arbitrarily close to $0$ such that $f(e, \mathfrak{a}') \cap \mathfrak{a}_n \neq \emptyset$ (this must hold for some such $f$, for otherwise we are already done by compactness); more precisely, by weak $o$\nobreakdash-minimality, without loss of generality, there is an open interval $(0, \epsilon) \subseteq \VF^+$ such that $f(e, \mathfrak{a}') \cap \mathfrak{a}_n \neq \emptyset$ for all $e \in (0, \epsilon)$.
For each $a' \in \mathfrak{a}'$, let $f_{a'}$ be the $a'$-\LT-definable function on $\VF^+$ given by $b \longmapsto f(b, a')$. By $o$\nobreakdash-minimal monotonicity, there is an $\code{\mathfrak{a}'}$-definable function $l : \mathfrak{a}' \longrightarrow \VF^+$ such that $f^*_{a'} \coloneqq f_{a'} \upharpoonright A_{a'}$ is continuously monotone (of the same kind) for all $a' \in \mathfrak{a}'$, where $A_{a'} \coloneqq (0, l(a'))$. By Lemma~\ref{open:rv:cons}, $l(\mathfrak{a}')$ is either a point or an open disc. Thus the $\vv$-interval $(0, l(\mathfrak{a}'))$ is nonempty, which implies that $f^*_{a'}(e) \in \mathfrak{a}_n$ for some $a' \in \mathfrak{a}'$ and some $e \in A_{a'}$. In that case, we must have that, for all $a' \in \mathfrak{a}'$, $\mathfrak{a}_n \subseteq f^*_{a'}(A_{a'})$ and hence $ f^*_{a'}$ is bijective. By $o$\nobreakdash-minimality in the $\Gamma$-sort, $\abval((f^*_{a'})^{-1}(\mathfrak{a}_n))$ has to be a singleton, say, $\beta_{a'}$; in fact, the function given by $a' \longmapsto \beta_{a'}$ has to be constant and hence we may write $\beta_{a'}$ as $\beta$. It follows that, for all $e \in \VF^+$ with $\abval(e) > \beta$, $f(e, \mathfrak{a}') \cap \mathfrak{a}_n = \emptyset$, contradiction.
\end{proof}
Next we come to the issue of finding definable points in definable sets. As we have mentioned above, this is a trivial issue if the space of parameters is not fixed.
\begin{lem}\label{S:def:cl}
The substructure $\mdl S$ is definably closed.
\end{lem}
\begin{proof}
By Lemma~\ref{RV:no:point}, we have $\VF(\dcl( \mdl S)) = \VF(\mdl S)$. Suppose that $t \in \mathds{R}V$ is definable. By the first sentence of Remark~\ref{rem:RV:weako}, if $\vrv(\mathds{R}V(\mdl S))$ is nontrivial then $\mathds{R}V(\mdl S)$ is a model of the reduct of $\TCVF$ to the $\mathds{R}V$-sort and hence, by quantifier elimination, is an elementary substructure of $\mathds{R}V$, which implies $t \in \mathds{R}V(\mdl S)$. On the other hand, if $\vrv(\mathds{R}V(\mdl S))$ is trivial then $\mathds{R}V(\mdl S) = \K(\mdl S)$ and it is not hard, though a bit tedious, to check, using quantifier elimination again, that $t \in \K(\mdl S)$.
\end{proof}
If $\mdl S$ is $\VF$-generated and $\mathds{G}amma(\mdl S)$ is nontrivial then $\mdl S$ is an elementary substructure and hence every definable set contains a definable point. This, of course, fails if $\mdl S$ carries extra $\mathds{R}V$-data, by the above lemma. However, we do have:
\begin{lem}\label{clo:disc:bary}
Every definable closed disc $\mathfrak{b}$ contains a definable point.
\end{lem}
\begin{proof}
Suppose for contradiction that $\mathfrak{b}$ does not contain a definable point. Since $\mmdl$ is sufficiently saturated, there is an open disc $\mathfrak{a}$ that is disjoint from $\VF(\mdl S)$ and properly contains $\mathfrak{b}$. Let $a \in \mathfrak{a} \smallsetminus \mathfrak{b}$ and $b \in \mathfrak{b}$. Clearly $\rv(c - b) = \rv(c - a)$ for all $c \in \VF(\mdl S)$. As in the proof of Lemma~\ref{atom:self}, there is an immediate automorphism $\sigma$ of $\mmdl$ such that $\sigma(a) = b$. This means that $\mathfrak{b}$ is not definable, which is a contradiction.
\end{proof}
Notice that the argument above does not work if $\mathfrak{b}$ is an open disc.
\begin{cor}\label{open:disc:def:point}
Let $\mathfrak{a} \subseteq \VF$ be a disc and $A$ a definable subset of $\VF$. If $\mathfrak{a} \cap A$ is a nonempty proper subset of $\mathfrak{a}$ then $\mathfrak{a}$ contains a definable point.
\end{cor}
\begin{proof}
It is not hard to see that, by HNF, if $\mathfrak{a} \cap A$ is a nonempty proper subset of $\mathfrak{a}$ then $\mathfrak{a}$ contains a definable closed disc and hence the claim is immediate by Lemma~\ref{clo:disc:bary}.
\end{proof}
\begin{lem}\label{one:atomic}
Let $A \subseteq \VF$ be a definable set that contains infinitely many open discs of radius $\beta$. Then one of these discs $\mathfrak{a}$ is $(\code \mathfrak{a}, \beta)$-atomic.
\end{lem}
\begin{proof}
By Lemmas~\ref{atom:gam} and \ref{atom:self}, it is enough to show that some open disc $\mathfrak{a} \subseteq A$ of radius $\beta$ is contained in a type-definable set. Suppose for contradiction that this is not the case. By Corollary~\ref{open:disc:def:point} and HNF, for every definable set $B \subseteq A$, we have either $\mathfrak{a} \cap B = \emptyset$ or $\mathfrak{a} \subseteq B$ for all but finitely many such open discs $\mathfrak{a} \subseteq A$. Passing to $\mdl R^{\bullet}_{\rv}$ and applying compactness (with the parameter $\beta$), we obtain the claim.
\end{proof}
\subsection{Contracting from $\VF$ to $\mathds{R}V$}
We can relate definable sets in $\VF$ to those in $\mathds{R}V$, specifically, $\mathds{R}V$-pullbacks, through a procedure called contraction. But a more comprehensive study of the latter will be postponed to the next section.
\begin{defn}[Disc-to-disc]\label{defn:dtdp}
Let $A$, $B$ be two subsets of $\VF$ and $f : A \longrightarrow B$ a bijection. We say that $f$ is \emph{concentric} if, for all open discs $\mathfrak{a} \subseteq A$, $f(\mathfrak{a})$ is also an open disc; if both $f$ and $f^{-1}$ are concentric then $f$ has the \emph{disc-to-disc property} (henceforth abbreviated as ``dtdp'').
More generally, let $f : A \longrightarrow B$ be a bijection between two sets $A$ and $B$, each with exactly one $\VF$-coordinate. For each $(t, s) \in f_{\mathds{R}V}$, let $f_{t, s} = f \cap (\VF^2 \times (t, s))$,
which is called a \emph{$\VF$-fiber} of $f$. We say that $f$ has \emph{dtdp} if every $\VF$-fiber of $f$ has dtdp.
\end{defn}
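As a basic example (a direct ultrametric computation, recorded here only for orientation), every affine bijection has dtdp: if $c, d \in \VF$ with $c \neq 0$ and $f(x) = cx + d$, then
\[
\vv(f(x) - f(y)) = \vv(c) + \vv(x - y) \quad \text{for all } x, y \in \VF,
\]
so $f$ maps the open disc of radius $\gamma$ around $a$ onto the open disc of radius $\gamma + \vv(c)$ around $f(a)$, and likewise for $f^{-1}$.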
We are somewhat justified in not specifying ``open disc'' in the terminology since if $f$ has dtdp then, for all open discs $\mathfrak{a} \subseteq A$ and all closed discs $\mathfrak{c} \subseteq \mathfrak{a}$, $f(\mathfrak{c})$ is also a closed disc. In fact, this latter property is stronger: if $f(\mathfrak{c})$ is a closed disc for all closed discs $\mathfrak{c} \subseteq A$ then $f$ has dtdp. But we shall only be concerned with open discs, so we ask for it directly.
\begin{lem}\label{open:pro}
Let $f : A \longrightarrow B$ be a definable bijection between two sets $A$ and $B$, each with exactly one $\VF$-coordinate. Then there is a definable finite partition $A_i$ of $A$ such that each $f \upharpoonright A_i$ has dtdp.
\end{lem}
\begin{proof}
By compactness, we may simply assume that $A$ and $B$ are subsets of $\VF$. Then we may proceed exactly as in the proof of Corollary~\ref{part:rv:cons}, using Lemmas~\ref{open:rv:cons} and~\ref{atom:self} (also see Remark~\ref{rem:LT:com}).
\end{proof}
\begin{defn}
Let $A$ be a subset of $\VF^n$. The \emph{$\mathds{R}V$-boundary} of $A$, denoted by $\partial_{\mathds{R}V}A$, is the definable subset of $\rv(A)$ such that $t \in \partial_{\mathds{R}V} A$ if and only if $t^\sharp \cap A$ is a proper nonempty subset of $t^\sharp$. The definable set $\rv(A) \smallsetminus \partial_{\mathds{R}V}A$, denoted by $\ito_{\mathds{R}V}(A)$, is called the \emph{$\mathds{R}V$-interior} of $A$.
\end{defn}
Obviously, $A \subseteq \VF^n$ is an $\mathds{R}V$-pullback if and only if $\partial_{\mathds{R}V} A$ is empty. Note that $\partial_{\mathds{R}V}A$ is in general different from the topological boundary $\partial(\rv(A))$ of $\rv(A)$ in $\mathds{R}V^n$ and neither one of them includes the other.
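For instance, take $n = 1$ and $A = [0, 1] \subseteq \VF$ (a direct check from the definition; we use the standard facts that $\rv(x)$ determines the sign of $x$ and that $\rv(x - 1)$ is determined by $\rv(x)$ whenever $\rv(x) \neq \rv(1)$). Then $t^\sharp \subseteq A$ for every $t \in \rv(A)$ with $t \neq \rv(1)$, whereas
\[
\rv(1)^\sharp \cap A = \{x \in \VF : \rv(x) = \rv(1) \text{ and } x \leq 1\}
\]
is a nonempty proper subset of $\rv(1)^\sharp$. Hence $\partial_{\mathds{R}V} A = \{\rv(1)\}$; in particular, $A$ is not an $\mathds{R}V$-pullback, but its $\mathds{R}V$-boundary has $\mathds{R}V$-dimension $0$, in accordance with Lemma~\ref{RV:bou:dim}.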
\begin{lem}\label{RV:bou:dim}
Let $A$ be a definable subset of $\VF^n$. Then $\dim_{\mathds{R}V}(\partial_{\mathds{R}V} A) < n$.
\end{lem}
\begin{proof}
We proceed by induction on $n$. The base case $n=1$ follows immediately from HNF.
For the inductive step, note that $\partial_{\mathds{R}V} A_a$ is finite for every $i \in [n]$ and every $a \in \pr_{\tilde i}(A)$; hence, by Corollary~\ref{open:disc:def:point} and compactness, there are a definable finite partition $A_{ij}$ of $\pr_{\tilde i}(A)$ and, for each $A_{ij}$, finitely many definable functions $f_{ijk} : A_{ij} \longrightarrow \VF$ such that
\[
\textstyle\bigcup_k \rv(f_{ijk}(a)) = \partial_{\mathds{R}V} A_a \quad \text{for all } a \in A_{ij}.
\]
By Corollary~\ref{part:rv:cons}, we may assume that if $t^\sharp \subseteq A_{ij}$ then the restriction $\rv \upharpoonright f_{ijk}(t^\sharp)$ is constant. Hence each $f_{ijk}$ induces a definable function $C_{ijk} : \ito_{\mathds{R}V}(A_{ij}) \longrightarrow \mathds{R}VV$.
Let
\[
\textstyle C = \bigcup_{i, j, k} C_{ijk} \quad \text{and} \quad B = \bigcup_{i,j} \bigcup_{t \in \partial_{\mathds{R}V} A_{ij}} \rv(A)_t.
\]
Obviously $\dim_{\mathds{R}V}(C) < n$. By the inductive hypothesis, for all $A_{ij}$ we have $\dim_{\mathds{R}V}(\partial_{\mathds{R}V} A_{ij}) < n-1$. Thus $\dim_{\mathds{R}V}(B) < n$. Since $\partial_{\mathds{R}V} A \subseteq B \cup C$, the claim follows.
\end{proof}
For $(a, t) \in \VF^n \times \mathds{R}V_0^m$, we write $\rv(a,t)$ to mean $(\rv(a), t)$, similarly for other maps.
\begin{defn}[Contractions]\label{defn:corr:cont}
A function $f : A \longrightarrow B$ is \emph{$\rv$-contractible} if there is a (necessarily unique) function $f_{\downarrow} : \rv(A) \longrightarrow \rv(B)$, called the \emph{$\rv$-contraction} of $f$, such that
\[
(\rv \upharpoonright B) \circ f = f_{\downarrow} \circ (\rv \upharpoonright A).
\]
Similarly, it is \emph{$\res$-contractible} (resp.\ \emph{$\vv$-contractible}) if the same holds in terms of $\res$ (resp.\ $\vv$ or $\vrv$, depending on the coordinates) instead of $\rv$.
\end{defn}
The subscripts in these contractions will be written as $\downarrow_{\rv}$, $\downarrow_{\res}$, etc., if they occur in the same context and therefore need to be distinguished from one another notationally.
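To illustrate these notions, consider the translation $f : \VF \longrightarrow \VF$ given by $x \longmapsto x + 1$ (a standard example; we record the routine verification only for orientation). For every $t \in \mathds{R}V$ with $t \neq \rv(-1)$, the value $\rv(x + 1)$ is the same for all $x \in t^\sharp$, so $f \upharpoonright (\VF \smallsetminus \rv(-1)^\sharp)$ is $\rv$-contractible. On the other hand, for $x \in \rv(-1)^\sharp$, the element $x + 1$ ranges over all of $\{y \in \VF : \vv(y) > 0\}$, so $\rv(x+1)$ attains every value $s$ with $\vrv(s) > 0$ as well as $0$; hence $f$ itself is not $\rv$-contractible. In the notation of Lemma~\ref{fn:alm:cont}, one may take $U = \{\rv(-1)\}$.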
\begin{lem}\label{fn:alm:cont}
For every definable function $f : \VF^n \longrightarrow \VF$ there is a definable set $U \subseteq \mathds{R}V^n$ with $\dim_{\mathds{R}V}(U) < n$ such that $f \upharpoonright (\VF^n \smallsetminus U^\sharp)$ is $\rv$-contractible.
\end{lem}
\begin{proof}
By Corollary~\ref{poly:open:cons}, for any $t \in \mathds{R}V^n$, if $\rv(f(t^\sharp))$ is not a singleton then $t^\sharp$ has a $t$-definable proper subset. By compactness, there is a definable subset $A \subseteq \VF^n$ such that $t \in \partial_{\mathds{R}V} A$ if and only if $\rv(f(t^\sharp))$ is not a singleton. So the assertion follows from Lemma~\ref{RV:bou:dim}.
\end{proof}
For any definable set $A$, a property holds \emph{almost everywhere} in $A$ or \emph{for almost every point} in $A$ if it holds away from a definable subset of $A$ of a smaller $\VF$-dimension. This terminology will also be used with respect to other notions of dimension.
\begin{rem}[Regular points]
Let $f : \VF^n \longrightarrow \VF^m$ be a definable function. By Lemma~\ref{fun:suba:fun} and $o$\nobreakdash-minimal differentiability, $f$ is $C^p$ almost everywhere for all $p$ (see \cite[\S~7.3]{dries:1998}). For each $p$, let $\reg^p(f) \subseteq \VF^n$ be the definable subset of regular $C^p$-points of $f$. If $p=0$ then we write $\reg(f)$, which is simply the subset of the regular points of $f$.
Assume $n=m$. If $a \in \reg(f)$ and $f$ is $C^1$ in a neighborhood of $a$ then $\reg^1(f)$ contains a neighborhood of $a$ on which the sign of the Jacobian of $f$, which is denoted by $\jcb_{\VF} f$, is constant. If $f$ is locally injective on a definable open subset $A \subseteq \VF^n$ then $f$ is regular almost everywhere in $A$ and hence, for all $p$, $\dim_{\VF}(A \smallsetminus \reg^p(f)) < n$.
By \cite[Theorem~A]{Dries:tcon:97}, the situation is quite similar if $f$ is a (parametrically) definable function of the form $\tor(\alpha)^n \longrightarrow \tor(\beta)^m$, $\alpha, \beta \in \absG$, and $\dim_{\VF}$ is replaced by $\dim_{\mathds{R}V}$, in particular, if $f$ is such a function from $\K^n$ into $\K^m$, or more generally, from $\tor(u)$ into $\tor(v)$, where $u \in \mathds{R}V^n_{\alpha}$ and $v \in \mathds{R}V^m_{\beta}$ (see Notation~\ref{rem:K:aff} and Definition~\ref{rem:tor:der}).
\end{rem}
\begin{rem}[$\rv$-contraction of univariate functions]\label{contr:uni}
Suppose that $f$ is a definable function from $\OO^\times$ into $\OO$. By monotonicity, there are a definable finite set $B \subseteq \OO^\times$ and a definable finite partition of $A \coloneqq \OO^\times \smallsetminus B$ into infinite $\vv$-intervals $A_i$ such that both $f$ and $\ddx f$ are quasi-\LT-definable, continuous, and monotone on each $A_i$. If $\rv(A_i)$ is not a singleton then let $U_i \subseteq \K$ be the largest open interval contained in $\rv(A_i)$. Let
\[
A^*_i = U_i^\sharp, \quad U = \textstyle{\bigcup_i U_i}, \quad A^* = U^\sharp, \quad f^* = f \upharpoonright A^*.
\]
By Lemma~\ref{fn:alm:cont}, we may refine the partition such that both $f^*$ and $\ddx f^*$ are $\rv$-contractible. By Lemma~\ref{gk:ortho}, $\vv \upharpoonright f^*(A^*_i)$ and $\vv \upharpoonright \ddx f^*(A^*_i)$ must be constant, say $\alpha_i$ and $\beta_i$, respectively. So it makes sense to speak of $\ddx f^*_{\downarrow_{\rv}}$ on each $U_i$, which a priori is not the same as $(\ddx f^*)_{\downarrow_{\rv}}$. Deleting finitely many points from $U$ if necessary, we assume that $f^*_{\downarrow_{\rv}}$, $(\ddx f^*)_{\downarrow_{\rv}}$, and $\ddx f^*_{\downarrow_{\rv}}$ are all continuous monotone functions on each $U_i$.
We claim that $\abs{\beta_i} = \abs{\alpha_i}$ unless $f^*_{\downarrow_{\rv}} \upharpoonright U_i$ is constant. Suppose for contradiction that $f^*_{\downarrow_{\rv}} \upharpoonright U_i$ is not constant and $\abs{\beta_i} \neq \abs{\alpha_i}$. First examine the case $\abs{\beta_i} < \abs{\alpha_i}$. A moment's reflection shows that $f^* \upharpoonright A^*_i$ would then increase or decrease too fast for $f^*(A_i^*)$ to be confined in $\vv^{-1}(\alpha_i)$. Dually, if $\abs{\beta_i} > \abs{\alpha_i}$ then $f^* \upharpoonright A^*_i$ would increase or decrease too slowly for $f^*_{\downarrow_{\rv}}(U_i)$ to contain more than one point. In either case, we have reached a contradiction. Actually, a similar estimate shows that if $\abs{\beta_i} = \abs{\alpha_i} < \infty$ then $f^*_{\downarrow_{\rv}} \upharpoonright U_i$ cannot be constant.
Finally, we show that $\abs{\beta_i} = \abs{\alpha_i}$ implies $(\ddx f^*)_{\downarrow_{\rv}} = \ddx f^*_{\downarrow_{\rv}}$ on $U_i$ (note that if $\abs{\beta_i} > \abs{\alpha_i}$ then $\ddx f^*_{\downarrow_{\rv}} = 0$). Suppose for contradiction that, say,
\[
(\ddx f^*)_{\downarrow_{\rv}}(\rv(a)) > \ddx f^*_{\downarrow_{\rv}}(\rv(a)) > 0
\]
for some $a \in A^*_i$. Then there is an open interval $I \subseteq U_i$ containing $\rv(a)$ such that $(\ddx f^*)_{\downarrow_{\rv}}(I) > \ddx f^*_{\downarrow_{\rv}}(I)$. It follows that $f^*_{\downarrow_{\rv}}(I)$ is properly contained in $\rv(f^*(I^\sharp)) = f^*_{\downarrow_{\rv}}(I)$, which is absurd. The other cases are similar.
\end{rem}
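As a quick sanity check (not part of the argument above), consider the function $f^*(x) = cx$ with $c \neq 0$ on a piece $A^*_i$ of units: here $\vv(f^*(x)) = \abval(c)$ and $\tfrac{d}{d x} f^* = c$, so $\abs{\alpha_i} = \abs{\beta_i}$, while $f^*_{\downarrow_{\rv}}(\rv(x)) = \rv(c) \rv(x)$ is indeed nonconstant, as the claim predicts.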
The higher-order multivariate version is more complicated to state than to prove:
\begin{lem}\label{univar:der:contr}
Let $A \subseteq (\OO^\times)^n$ be a definable $\mathds{R}V$-pullback with $\dim_{\mathds{R}V}(\rv(A)) = n$ and $f : A \longrightarrow \OO$ a definable function. Let $p \in \mathds{N}^n$ be a multi-index of order $\abs{p} = d$ and $k \in \mathds{N}$ with $k \gg d$. Suppose that $f$ is $C^k$ and, for all $q \leq p$, $\frac{\partial^q}{\partial x^q} f$ is $\rv$-contractible and its contraction $(\frac{\partial^q}{\partial x^q} f)_{\downarrow_{\rv}}$ is also $C^k$. Then there is a definable set $V \subseteq \rv(A)$ with $\dim_{\mathds{R}V}(V) < n$ and $U \coloneqq \rv(A) \smallsetminus V$ open such that, for all $a \in U^\sharp$ and all $q' < q \leq p$ with $\abs{q'} + 1 = \abs{q}$, exactly one of the following two conditions holds:
\begin{itemize}
\item either $\frac{\partial^{q}}{\partial x^{q}} f(a) = 0$ or $\abval (\frac{\partial^{q'}}{\partial x^{q'}} f(a)) < \abval (\frac{\partial^{q}}{\partial x^{q}} f(a))$,
\item $(\frac{\partial^{q - q'}}{\partial x^{q - q'}} \frac{\partial^{q'}}{\partial x^{q'}} f)_{\downarrow_{\rv}}(\rv (a)) = \frac{\partial^{q - q'}}{\partial x^{q - q'}}(\frac{\partial^{q'}}{\partial x^{q'}} f)_{\downarrow_{\rv}}(\rv( a)) \neq 0$.
\end{itemize}
If the first condition never occurs then, for all $q \leq p$, we actually have $(\frac{\partial^q}{\partial x^q} f )_{\downarrow_{\rv}} = \frac{\partial^{q}}{\partial x^{q}} f_{\downarrow_{\rv}}$ on $U$. At any rate, for all $q \leq p$, we have $(\frac{\partial^q}{\partial x^q} f )_{\downarrow_{\res}} = \frac{\partial^{q}}{\partial x^{q}} f_{\downarrow_{\res}}$ on $U$.
\end{lem}
\begin{proof}
First observe that, by induction on $d$, it is enough to consider the case $d = 1$ and $p = (0, \ldots, 0, 1)$. For each $a \in \pr_{<n}(A)$, by the discussion in Remark~\ref{contr:uni}, there is an $a$-definable finite subset $V_{a}$ of $\rv(A)_{\rv(a)}$ such that the assertion holds for the restriction $f \upharpoonright (A_a \smallsetminus V_{a}^\sharp)$. Let $A^* = \bigcup_{a \in \pr_{<n}(A)} V_{a}^\sharp \subseteq A$. By Lemma~\ref{RV:bou:dim}, $\dim_{\mathds{R}V}(\partial_{\mathds{R}V} A^*) < n$ and hence $\dim_{\mathds{R}V}(\rv(A^*)) < n$. Therefore, by Lemma~\ref{fn:alm:cont}, there is a definable open set $U \subseteq \ito(\rv(A) \smallsetminus \rv(A^*))$ that is as desired.
\end{proof}
Suppose that $f = (f_1, \ldots, f_m) : A \longrightarrow \OO^m$ is a sequence of definable $\res$-contractible functions, where the set $A$ is as in Lemma~\ref{univar:der:contr}. Let $P(x_1, \ldots, x_m)$ be a partial differential operator with definable $\res$-contractible coefficients $a_i : A \longrightarrow \OO$ and $P_{\downarrow_{\res}}(x_1, \ldots, x_m)$ the corresponding operator with $\res$-contracted coefficients $a_{i\downarrow_{\res}} : \res(A) \longrightarrow \K$. Note that both $P(f) : A \longrightarrow \OO$ and $P_{\downarrow_{\res}}(f_{\downarrow_{\res}}) : \res(A) \longrightarrow \K$ are defined almost everywhere. By Lemma~\ref{univar:der:contr}, such an operator $P$ almost commutes with $\res$:
\begin{cor}\label{rv:op:comm}
For almost all $t \in \rv(A)$ and all $a \in t^\sharp$,
\[
\res(P(f)(a)) = P_{\downarrow_{\res}}(f_{\downarrow_{\res}})(\res(a)).
\]
\end{cor}
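For instance (a special case, spelled out for orientation), taking $m = 1$ and $P(x_1) = \frac{\partial}{\partial x} x_1$, the corollary asserts that $\res(\tfrac{d}{d x} f_1(a)) = \tfrac{d}{d x} f_{1\downarrow_{\res}}(\res(a))$ for almost all $\rv(a)$, which is exactly the last clause of Lemma~\ref{univar:der:contr} in one variable.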
\begin{cor}
Let $U$, $V$ be definably connected subsets of $(\K^+)^n$ and $f : U^\sharp \longrightarrow V^\sharp$ a definable $\res$-contractible function. Suppose that $f_{\downarrow_{\res}} : U \longrightarrow V$ is continuous and locally injective. Then there is a definable subset $U^* \subseteq U$ of $\mathds{R}V$-dimension $< n$ such that the sign of $\jcb_{\VF} f$ is constant on $(U \smallsetminus U^*)^\sharp$.
\end{cor}
\begin{proof}
This follows immediately from Corollary~\ref{rv:op:comm} and \cite[Theorem~3.2]{pet:star:otop}.
\end{proof}
\begin{lem}\label{atom:type}
In $\xmdl$, let $\mathfrak{a} \subseteq \VF$ be an atomic subset and $f : \mathfrak{a} \longrightarrow \VF$ a definable injection. Then $\mathfrak{a}$ and $f(\mathfrak{a})$ must be of the same one of the four possible forms (see Remark~\ref{rem:type:atin}).
\end{lem}
\begin{proof}
This is trivial if $\mathfrak{a}$ is a point. The case of $\mathfrak{a}$ being an open disc is covered by Lemma~\ref{open:rv:cons}. So we only need to show that if $\mathfrak{a}$ is a closed disc then $f(\mathfrak{a})$ cannot be a half thin annulus. We shall give two proofs. The first one works only when $T$ is polynomially bounded, but is more intuitive and much simpler.
Suppose that $T$ is polynomially bounded. Suppose for contradiction that $\code \mathfrak{a}$ is of the form $\tor(\goedel \mathfrak{m})$ for some $\goedel \mathfrak{m} \in \mathds{R}V_{\gamma}$ and $\goedel{f(\mathfrak{a})}$ is of the form $\tor^+(\goedel \mathfrak{n})$ for some $\goedel \mathfrak{n} \in \mathds{R}V_{\delta}$. By Lemma~\ref{open:pro} and monotonicity, $f$ induces an increasing (or decreasing, which can be handled similarly) bijection $f_{\downarrow} : \tor(\goedel \mathfrak{m}) \longrightarrow \tor^+(\goedel \mathfrak{n})$.
In fact, for all $p \in \mathds{N}$,
\[
\tfrac{d^p}{d x^p} f_{\downarrow} : \tor(\goedel \mathfrak{m}) \longrightarrow \tor^{+}(\delta - p \gamma)
\]
cannot be constant and hence must be continuous, surjective, and increasing. Using additional parameters, we can translate $f_{\downarrow}$ into a function $\K \longrightarrow \K^+$ and this function cannot be polynomially bounded by elementary differential calculus, which is a contradiction.
We move on to the second proof. The argument is essentially the same as that in the proof of \cite[Lemma~3.45]{hrushovski:kazhdan:integration:vf}.
Consider the group
\[
G \coloneqq \aut(\tor(\goedel \mathfrak{m}) / \K) \leq \aut(\xmdl / \K).
\]
Suppose for contradiction that $G$ is finite. Since every $G$-orbit is finite, every point in $\tor(\goedel \mathfrak{m})$ is $\K$-definable. It follows that there exists a nonconstant definable function $\tor(\goedel \mathfrak{m}) \longrightarrow \K$. But this is not possible since $\mathfrak{a}$ is atomic. Thus $G$ is infinite.
Let $\Lambda$ be the group of affine transformations of $\K$, that is, $\Lambda = \K^{\times} \ltimes \K$, where the first factor is the multiplicative group of $\K$ and the second the additive group of $\K$. Every automorphism in $G$ is a $\K$-affine transformation of $\tor(\goedel \mathfrak{m})$ and hence $G$ is a subgroup of $\Lambda$. For each $\K$-definable relation $\phi$ on $\tor(\goedel \mathfrak{m})$, let $G_{\phi} \subseteq \Lambda$ be the definable subgroup of $\K$-affine transformations that preserve $\phi$. So $G = \bigcap_{\phi} G_{\phi}$. Since there is no infinite descending chain of definable subgroups of $\Lambda$, we see that $G$ is actually an infinite definable group. Then we may choose two nontrivial automorphisms $g, g' \in G$ whose fixed points are distinct. It follows that the commutator of $g$, $g'$ is a translation and hence, by $o$\nobreakdash-minimality, $G$ contains all the translations, that is, $\K \leq G$.
By a similar argument, every automorphism in $H \coloneqq \aut(\tor^+(\goedel \mathfrak{n}) / \K)$ is a $\K$-linear transformation of $\tor^+(\goedel \mathfrak{n})$ and hence $H = \K^+ \leq \K^{\times}$.
Now any definable bijection between $\tor(\goedel \mathfrak{m})$ and $\tor^+(\goedel \mathfrak{n})$ would induce a definable group isomorphism $\K \longrightarrow \K^+$, that is, an exponential function, which of course contradicts the assumption that $T$ is power-bounded.
\end{proof}
\begin{defn}[$\vv$-affine and $\rv$-affine]\label{rvaffine}
Let $\mathfrak{a}$ be an open disc and $f : \mathfrak{a} \longrightarrow \VF$ an injection.
We say that $f$ is \emph{$\vv$-affine} if there is a (necessarily unique) $\gamma \in \Gamma$, called the \emph{shift} of $f$, such that, for all $a, a' \in \mathfrak{a}$,
\[
\abval(f(a) - f(a')) = \gamma + \abval(a - a').
\]
We say that $f$ is \emph{$\rv$-affine} if there is a (necessarily unique) $t \in \mathds{R}V$, called the \emph{slope} of $f$, such that, for all $a, a' \in \mathfrak{a}$,
\[
\rv(f(a) - f(a')) = t \rv(a - a').
\]
\end{defn}
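For example, since $\rv$ is multiplicative, any function of the form $f(x) = cx + e$ with $c \neq 0$ is $\rv$-affine with slope $\rv(c)$: indeed, $\rv(f(a) - f(a')) = \rv(c(a - a')) = \rv(c)\rv(a - a')$, and hence $f$ is also $\vv$-affine with shift $\abval(c)$.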
Obviously $\rv$-affine implies $\vv$-affine. With the extra structure afforded by the total ordering, we can reproduce (an analogue of) \cite[Lemma~3.18]{Yin:int:acvf} with a somewhat simpler proof:
\begin{lem}\label{rv:lin}
In $\xmdl$, let $f : \mathfrak{a} \longrightarrow \mathfrak{b}$ be a definable bijection between two atomic open discs. Then $f$ is $\rv$-affine and hence $\vv$-affine with shift $\rad(\mathfrak{b}) - \rad(\mathfrak{a})$.
\end{lem}
\begin{proof}
Since $f$ has dtdp by Lemma~\ref{open:pro}, for all $\rad(\mathfrak{a}) < \delta$ and all
\[
\mathfrak{d} \coloneqq \tor(\goedel \mathfrak{c}) \subseteq \rv_{\delta - \abval(\mathfrak{a})}(\mathfrak{a}),
\]
it induces a $\goedel \mathfrak{d}$-definable $C^1$ function $f_{\goedel \mathfrak{d}} : \mathfrak{d} \longrightarrow \tor(\goedel{f(\mathfrak{c})})$. The codomain of its derivative $\ddx f_{\goedel \mathfrak{d}}$ can be narrowed down to either $\tor^+(\epsilon - \delta)$ or $\tor^{-}(\epsilon - \delta)$, where $\epsilon = \rad(f(\mathfrak{c}))$. By Lemma~\ref{open:rv:cons}, there is a $t \in \mathds{R}V$ such that $\ddx f(\mathfrak{a}) \subseteq t^\sharp$. By Lemma~\ref{atom:gam}, $\mathfrak{a}$ remains atomic over $\delta$. Then, by (an accordingly modified version of) Remark~\ref{contr:uni}, we must have that, for all $\mathfrak{d}$ as above, all $\goedel \mathfrak{c} \in \mathfrak{d}$, and all $a \in \mathfrak{c}$,
\[
\ddx f_{\goedel \mathfrak{d}}(\goedel \mathfrak{c}) = \rv(\ddx f(a)) = t
\]
and hence
\[
\aff_{\goedel{f(\mathfrak{c})}} \circ f_{\goedel \mathfrak{d}} \circ \aff^{-1}_{\goedel \mathfrak{c}} : \tor(\delta) \longrightarrow \tor(\epsilon)
\]
is a linear function given by $u \longmapsto tu$ (see Definition~\ref{rem:tor:der} for the notation). It follows that, for
\begin{itemize}
\item $a$ and $a'$ in $\mathfrak{a}$,
\item $\mathfrak{d}$ the smallest closed disc containing $a$ and $a'$,
\item $\mathfrak{c}$ and $\mathfrak{c}'$ the maximal open subdiscs of $\mathfrak{d}$ containing $a$ and $a'$, respectively,
\end{itemize}
we have
\[
\rv(f(a) - f(a')) = \rv(f(\mathfrak{c}) - f(\mathfrak{c}')) = t \rv(\mathfrak{c} - \mathfrak{c}') = t \rv(a - a').
\]
That is, $f$ is $\rv$-affine. Moreover, it is clear from dtdp that $\abvrv(t) = \rad(\mathfrak{b}) - \rad(\mathfrak{a})$.
\end{proof}
\section{Grothendieck semirings}\label{sect:groth}
In this section, we define various categories of definable sets and explore the relations between their Grothendieck semirings. The first main result is that the Grothendieck semiring $\gsk \mathds{R}V[*]$ of the $\mathds{R}V$-category $\mathds{R}V[*]$ can be naturally expressed as a tensor product of the Grothendieck semirings of two of its full subcategories $\mathds{R}ES[*]$ and $\Gamma[*]$. The second main result is that there is a natural surjective semiring homomorphism from $\gsk \mathds{R}V[*]$ onto the Grothendieck semiring $\gsk \VF_*$ of the $\VF$-category $\VF_*$.
\begin{hyp}\label{hyp:gam}
By (the proof of) Lemma~\ref{S:def:cl}, every definable set in $\mathds{R}V$ contains a definable point if and only if $\Gamma(\mdl S) \neq \pm 1$. Thus, from now on, we shall assume that $\Gamma(\mdl S)$ is nontrivial.
\end{hyp}
\subsection{The categories of definable sets}
As in Definition~\ref{defn:dtdp}, an $\mathds{R}V$-fiber of a definable set $A$ is a set of the form $A_a$, where $a \in A_{\VF}$. The $\mathds{R}V$-fiber dimension of $A$ is the maximum of the $\mathds{R}V$-dimensions of its $\mathds{R}V$-fibers and is denoted by $\dim^{\fib}_{\mathds{R}V}(A)$.
\begin{lem}\label{RV:fiber:dim:same}
Suppose that $f : A \longrightarrow A'$ is a definable bijection. Then $\dim^{\fib}_{\mathds{R}V}(A) = \dim^{\fib}_{\mathds{R}V} (A')$.
\end{lem}
\begin{proof}
Let $\dim^{\fib}_{\mathds{R}V}(A) = k$ and $\dim^{\fib}_{\mathds{R}V}(A') =
k'$. For each $a \in \pr_{\VF}(A)$, let $h_{a} : A_a \longrightarrow A'_{\VF}$ be the $a$-definable function induced by $f$ and $\pr_{\VF}$. By Corollary~\ref{function:rv:to:vf:finite:image}, the image of $h_{a}$ is finite. It follows that $k \leq k'$. Symmetrically
we also have $k \geq k'$ and hence $k = k'$.
\end{proof}
\begin{defn}[$\VF$-categories]\label{defn:VF:cat}
The objects of the category $\VF[k]$ are the definable sets of $\VF$-dimension $\leq k$ and $\mathds{R}V$-fiber dimension $0$ (that is, all the $\mathds{R}V$-fibers are finite). Any definable bijection between two such objects is a morphism of $\VF[k]$. Set $\VF_* = \bigcup_k \VF[k]$.
\end{defn}
\begin{defn}[$\mathds{R}V$-categories]\label{defn:c:RV:cat}
The objects of the category $\mathds{R}V[k]$ are the pairs $(U, f)$ with $U$ a definable set in $\mathds{R}VV$ and $f : U \longrightarrow \mathds{R}V^k$ a definable finite-to-one function. Given two such objects $(U, f)$, $(V, g)$, any definable bijection $F : U \longrightarrow V$ is a \emph{morphism} of $\mathds{R}V[k]$.
\end{defn}
Set $\mathds{R}V[{\leq} k] = \bigoplus_{i \leq k} \mathds{R}V[i]$ and $\mathds{R}V[*] = \bigoplus_{k} \mathds{R}V[k]$; similarly for the other categories below.
\begin{nota}\label{0coor}
We emphasize that if $(U, f)$ is an object of $\mathds{R}V[k]$ then $f(U)$ is a subset of $\mathds{R}V^k$ instead of $\mathds{R}V_0^k$, while $0$ can occur in any coordinate of $U$. An object of $\mathds{R}V[*]$ of the form $(U, \id)$ is often written as $U$.
More generally, if $f : U \longrightarrow \mathds{R}V_0^k$ is a definable finite-to-one function then $(U, f)$ denotes the obvious object of $\mathds{R}V[{\leq} k]$. Often $f$ will be a coordinate projection (every object in $\mathds{R}V[*]$ is isomorphic to an object of this form). In that case, $(U, \pr_{\leq k})$ is simply denoted by $U_{\leq k}$ and its class in $\gsk \mathds{R}V[k]$ by $[U]_{\leq k}$, etc.
\end{nota}
\begin{rem}\label{fintoone}
Alternatively, we could allow only injections instead of finite-to-one functions in defining the objects of $\mathds{R}V[k]$. Insofar as the Grothendieck semigroup $\gsk \mathds{R}V[k]$ is concerned, this is not more restrictive in our setting since for any $\bm U \coloneqq (U, f) \in \mathds{R}V[k]$ there is a definable finite partition $\bm U_i \coloneqq (U_i, f_i)$ of $\bm U$, in other words, $[\bm U] = \sum_i [\bm U_i]$ in $\gsk \mathds{R}V[k]$, such that each $f_i$ is injective. It is technically more convenient to work with finite-to-one functions, though (for instance, we can take finite disjoint unions).
\end{rem}
In the above definitions and other similar ones below, all morphisms are actually isomorphisms and hence the categories are all groupoids. For the case $k = 0$, the reader should interpret notation such as $\mathds{R}V^0$ in the natural way; for instance, $\mathds{R}V^0$ may be treated as the empty tuple. In particular, the categories $\VF[0]$ and $\mathds{R}V[0]$ are equivalent.
About the position of ``$*$'' in the notation: ``$\VF_*$'' suggests that the category is filtered and ``$\mathds{R}V[*]$'' suggests that the category is graded.
\begin{defn}[$\mathds{R}ES$-categories]\label{defn:RES:cat}
The category $\mathds{R}ES[k]$ is the full subcategory of $\mathds{R}V[k]$ such that $(U, f) \in \mathds{R}ES[k]$ if and only if $\vrv(U)$ is finite.
\end{defn}
\begin{rem}[Explicit description of ${\gsk \mathds{R}ES[k]}$]\label{expl:res}
Let $\mathds{R}ES$ be the category whose objects are the definable sets $U$ in $\mathds{R}VV$ with $\vrv(U)$ finite and whose morphisms are the definable bijections. The obvious forgetful functor $\mathds{R}ES[*] \longrightarrow \mathds{R}ES$ induces a surjective semiring homomorphism $\gsk \mathds{R}ES[*] \longrightarrow \gsk \mathds{R}ES$, which is clearly not injective.
The semiring $\gsk \mathds{R}ES$ is actually generated by isomorphism classes $[U]$ with $U$ a set in $\K^+$. By Theorem~\ref{groth:omin}, we have the following explicit description of $\gsk \mathds{R}ES$. Its underlying set is $(0 \times \mathds{N}) \cup (\mathds{N}^+ \times \mathds{Z})$. For all $(a, b), (c, d) \in \gsk \mathds{R}ES$,
\[
(a, b) + (c, d) = (\max\{a, c\}, b+d), \quad (a, b) \times (c, d) = (a + c, b \times d).
\]
By the computation in \cite{kage:fujita:2006}, the dimensional part is lost in the groupification $\ggk \mathds{R}ES$ of $\gsk \mathds{R}ES$, that is, $\ggk \mathds{R}ES = \mathds{Z}$, which is of course much simpler than $\gsk \mathds{R}ES$. However, following the philosophy of \cite{hrushovski:kazhdan:integration:vf}, we shall work with Grothendieck semirings whenever possible.
By Lemma~\ref{gk:ortho}, if $(U, f) \in \mathds{R}ES[*]$ then $\vrv(f(U))$ is finite as well. Therefore the semiring $\gsk \mathds{R}ES[*]$ is generated by isomorphism classes $[(U, f)]$ with $f$ a bijection between two sets in $\K^+$. As above, each $\gsk \mathds{R}ES[k]$ may be described explicitly as well. The semigroup $\gsk \mathds{R}ES[0]$ is canonically isomorphic to the semiring $(0, 0) \times \mathds{N}$. For $k > 0$, the underlying set of $\gsk \mathds{R}ES[k]$ is $\bigcup_{0 \leq i \leq k}((k, i) \times \mathds{Z})$, and its semigroup operation is given by
\[
(k, i, a) + (k, i', a') = (k, \max\{i, i'\}, a + a').
\]
Moreover, multiplication in $\gsk \mathds{R}ES[*]$ is given by
\[
(k, i, a) \times (l, j, b) = (k+l, i + j, a \times b).
\]
\end{rem}
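The explicit arithmetic described in the remark above can be sanity-checked mechanically. The following is a hedged sketch, not part of the paper: the element and function names are ours, and elements $(a, b)$ range over $(0 \times \mathds{N}) \cup (\mathds{N}^+ \times \mathds{Z})$, with $a$ the dimensional part and $b$ the Euler-characteristic-like part.

```python
# Sketch (our notation): arithmetic of the Grothendieck semiring described
# above, with elements (a, b), where a is the dimensional coordinate and
# b the integer coordinate.

def add(x, y):
    """(a, b) + (c, d) = (max{a, c}, b + d)."""
    (a, b), (c, d) = x, y
    return (max(a, c), b + d)

def mul(x, y):
    """(a, b) x (c, d) = (a + c, b * d)."""
    (a, b), (c, d) = x, y
    return (a + c, b * d)

def distributes(x, y, z):
    """Check x * (y + z) == x * y + x * z on the given sample points."""
    return mul(x, add(y, z)) == add(mul(x, y), mul(x, z))
```

On sample elements one can check commutativity, associativity, and distributivity directly; note that the dimensional coordinate combines tropically (max under addition, sum under multiplication), which is the information lost in the groupification mentioned above.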
\begin{defn}[$\Gamma$-categories]\label{def:Ga:cat}
The objects of the category $\Gamma[k]$ are the finite disjoint unions of definable subsets of $\Gamma^k$. Any definable bijection between two such objects is a \emph{morphism} of $\Gamma[k]$. The category $\Gamma^{c}[k]$ is the full subcategory of $\Gamma[k]$ such that $I \in \Gamma^{c}[k]$ if and only if $I$ is finite.
\end{defn}
Clearly $\gsk \Gamma^c[k]$ is naturally isomorphic to $\mathds{N}$ for all $k$ and hence $\gsk \Gamma^c[*] \cong \mathds{N}[X]$.
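Concretely, the isomorphism sends the class of a finite set $I \subseteq \Gamma^k$ to $\abs{I} X^k$; disjoint unions then correspond to sums and Cartesian products to products of polynomials in $\mathds{N}[X]$.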
\begin{nota}\label{nota:RV:short}
We introduce the following shorthand for distinguished elements in the various Grothendieck semigroups and their groupifications (and closely related constructions):
\begin{gather*}
\bm 1_{\K} = [\{1\}] \in \gsk \mathds{R}ES[0], \quad [1] = [(\{1\}, \id)] \in \gsk \mathds{R}ES[1],\\
[\bm T] = [(\K^+, \id)] \in \gsk \mathds{R}ES[1], \quad [\bm A] = 2 [\bm T] + [1] \in \gsk \mathds{R}ES[1],\\
\bm 1_{\Gamma} = [\Gamma^0] \in \gsk \Gamma[0], \quad [e] = [\{1\}] \in \gsk \Gamma[1], \quad [\bm H] = [(0,1)] \in \gsk \Gamma[1],\\
[\bm P] = [(\mathds{R}V^{\circ \circ}, \id)] - [1] \in \ggk \mathds{R}V[1].
\end{gather*}
Here $\mathds{R}V^{\circ \circ} = \mathds{R}V^{\circ \circ}_0 \smallsetminus 0$. Note that the interval $\bm H$ is formed in the signed value group $\Gamma$, whose ordering is inverse to that of the value group $\abs \Gamma_\infty$ (recall Remark~\ref{signed:Gam}). The interval $(1, \infty) \subseteq \Gamma$ is denoted by $\bm H^{-1}$.
As in~\cite{hrushovski:kazhdan:integration:vf}, the elements $[\bm P]$ and $\bm 1_{\K} + [\bm P]$ in $\ggk \mathds{R}V[*]$ play special roles in the main construction (see Proposition~\ref{kernel:L} and the remarks thereafter).
\end{nota}
The following lemma is a general fact proved elsewhere. It is only needed to prove Lemma~\ref{gam:pulback:mono}.
\begin{lem}\label{gen:mat:inv}
Let $K$ be an integral domain and $M$ a torsion-free $K$-module, the latter is viewed as the main sort of a first-order structure of some expansion of the usual $K$-module language. Let $\mathfrak{F}$ be a class of definable functions in the sort $M$ such that
\begin{itemize}
\item all the identity functions are in $\mathfrak{F}$,
\item all the functions in $\mathfrak{F}$ are definably piecewise $K$-linear, that is, they are definably piecewise of the form $x \longmapsto N x + c$, where $N$ is a matrix with entries in $K$ and $c$ is a definable point,
\item $\mathfrak{F}$ is closed under composition, inversion, composition with $\mgl(K)$-transformations ($K$-linear functions with invertible matrices), and composition with coordinate projections.
\end{itemize}
If $g : D \longrightarrow E$ is a bijection in $\mathfrak{F}$, where $D, E \subseteq M^n$, then $g$ is definably a piecewise $\mgl_n(K)$-transformation.
\end{lem}
\begin{proof}
See \cite[Lemma~2.29]{Yin:int:expan:acvf}.
\end{proof}
\begin{lem}\label{gam:pulback:mono}
Let $g$ be a $\Gamma[k]$-morphism. Then $g$ is definably a piecewise $\mgl_k(\mathds{K})$-transformation modulo the sign, that is, a piecewise $\mgl_k(\mathds{K}) \times \mathds{Z}_2$-transformation. Consequently, $g$ is a $\vrv$-contraction (recall Definition~\ref{defn:corr:cont}).
\end{lem}
\begin{proof}
For the first claim, it is routine to check that Lemma~\ref{gen:mat:inv} is applicable to the class of definable functions in the $\abs \Gamma$-sort. The second claim follows from the fact that the natural actions of $\mgl_k(\mathds{K})$ on $(\mathds{R}V^+)^k$ and $(\Gamma^+)^k$ commute with the map $\vrv$.
\end{proof}
\begin{rem}\label{why:glz}
In \cite{hrushovski:kazhdan:integration:vf}, $\Gamma[k]$-morphisms are by definition piecewise $\mgl_k(\mathds{Z})$-transformations. This is because, in the setting there, the $\vrv$-contractions are precisely the piecewise $\mgl_k(\mathds{Z})$-transformations, which form a proper subclass of the definable bijections in the $\Gamma$-sort, which in general are piecewise $\mgl_k(\mathds{Q})$-transformations.
\end{rem}
\begin{lem}\label{G:red}
For all $I \in \Gamma[k]$ there are finitely many definable sets $H_i \subseteq \Gamma^{n_i}$ with $\dim_{\Gamma}(H_i) = n_i \leq k$ such that $[I] = \sum_i [H_i] [e]^{k - n_i}$ in $\gsk \Gamma[k]$.
\end{lem}
\begin{proof}
We do induction on $k$. The base case $k = 0$ is trivial. For the inductive step $k > 0$, the claim is also trivial if $\dim_{\Gamma}(I) = k$; so let us assume that $\dim_{\Gamma}(I) < k$. By \cite[Theorem~B]{Dries:tcon:97}, we may partition $I$ into finitely many definable pieces $I_i$ such that each $I_i$ is the graph of a definable function $I'_i \longrightarrow \Gamma$, where $I'_i \in \Gamma[k-1]$. So the claim simply follows from the inductive hypothesis.
\end{proof}
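For instance, if $I \subseteq \Gamma^2$ is the graph of a definable function $I' \longrightarrow \Gamma$ with $\dim_{\Gamma}(I') = 1$ then $(x, y) \longmapsto (x, 1)$ is a definable bijection from $I$ onto $I' \times \{1\}$, whence $[I] = [I'][e]$, in accordance with the lemma.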
\begin{rem}\label{gam:res}
There is a natural map $\Gamma[*] \longrightarrow \mathds{R}V[*]$ given by $I \longmapsto \bm I \coloneqq (I^\sharp, \id)$ (see Notation~\ref{gamma:what}). By Lemma~\ref{gam:pulback:mono}, this map induces a homomorphism $\gsk \Gamma[*] \longrightarrow \gsk \mathds{R}V[*]$ of graded semirings. By \cite[Theorem~A]{Dries:tcon:97} and Theorem~\ref{groth:omin}, this homomorphism restricts to an injective homomorphism $\gsk \Gamma^{c}[*] \longrightarrow \gsk \mathds{R}ES[*]$ of graded semirings. There is also a similar semiring homomorphism $\gsk \Gamma^c[*] \longrightarrow \gsk \mathds{R}ES$, but it is not injective.
\end{rem}
\begin{ques}
Is the homomorphism $\gsk \Gamma[*] \longrightarrow \gsk \mathds{R}V[*]$ above injective?
\end{ques}
Now, the map from $\gsk \mathds{R}ES[*] \times \gsk \Gamma[*]$ to $\gsk \mathds{R}V[*]$ naturally determined by the assignment
\[
([(U, f)], [I]) \longmapsto [(U \times I^\sharp, f \times \id)]
\]
is well-defined and is clearly $\gsk \Gamma^{c}[*]$-bilinear. Hence it induces a $\gsk \Gamma^{c}[*]$-linear map
\[
\bb D: \gsk \mathds{R}ES[*] \otimes_{\gsk \Gamma^{c}[*]} \gsk \Gamma[*] \longrightarrow \gsk \mathds{R}V[*],
\]
which is a homomorphism of graded semirings. We shall abbreviate ``$\otimes_{\gsk \Gamma^{c}[*]}$'' as ``$\otimes$'' below. Note that, by the universal mapping property, groupifying a tensor product in the category of $\gsk \Gamma^{c}[*]$-semimodules is the same, up to isomorphism, as taking the corresponding tensor product in the category of $\ggk \Gamma^{c}[*]$-modules. We will show that $\bb D$ is indeed an isomorphism of graded semirings.
\subsection{The tensor expression}
Heuristically, $\mathds{R}V$ may be viewed as a union of infinitely many one-dimensional vector spaces over $\K$. Weak $o$\nobreakdash-minimality implies that every definable subset of $\mathds{R}V$ is nontrivial only within finitely many such one-dimensional spaces. The tensor expression of $\gsk \mathds{R}V[*]$ we seek may be thought of as a generalization of this phenomenon to all definable sets in $\mathds{R}V$.
\begin{lem}\label{resg:decom}
Let $A \subseteq \mathds{R}V^k \times \Gamma^l$ be an $\alpha$-definable set, where $\alpha \in \Gamma$. Set $\pr_{\leq k}(A) = U$ and suppose that $\vrv(U)$ is finite. Then there is an $\alpha$-definable finite partition $U_i$ of $U$ such that, for each $i$ and all $t, t' \in U_i$, we have $A_t = A_{t'}$.
\end{lem}
\begin{proof}
By stable embeddedness, for every $t \in U$, $A_t$ is $(\vrv(t), \alpha)$-definable in the $\Gamma$-sort alone. Since $\vrv(U)$ is finite, the assertion simply follows from compactness.
\end{proof}
\begin{lem}\label{gam:tup:red}
Let $\beta$, $\gamma = (\gamma_1, \ldots, \gamma_m)$ be finite tuples in $\Gamma$. If there is a $\beta$-definable nonempty proper subset of $\gamma^\sharp$ then, for some $\gamma_i$ and every $t \in \gamma^\sharp_{\tilde i}$, $\gamma^\sharp_i$ contains a $t$-definable point. Consequently, if $U$ is such a subset of $\gamma^\sharp$ then either $U$ contains a definable point or there exists a subtuple $\gamma_* \subseteq \gamma$ such that $\pr_{\gamma_*}(U) = \gamma^\sharp_*$, where $\pr_{\gamma_*}$ denotes the obvious coordinate projection, and there is a $\beta$-definable function from $\gamma^\sharp_*$ into $(\gamma \smallsetminus \gamma_*)^\sharp$.
\end{lem}
\begin{proof}
For the first claim we do induction on $m$. The base case $m = 1$ simply follows from $o$\nobreakdash-minimality in the $\K$-sort and Lemma~\ref{RV:no:point}. For the inductive step $m > 1$, let $U$ be a $\beta$-definable nonempty proper subset of $\gamma^\sharp$. By the inductive hypothesis, we may assume
\[
\{ t \in \pr_{>1}(U) : U_t \neq \gamma^\sharp_1\} = \gamma^\sharp_{> 1}.
\]
Then $\gamma_1$ is as desired.
The second claim follows easily from the first.
\end{proof}
\begin{lem}\label{RV:decom:RES:G}
Let $U \subseteq \mathds{R}V^m$ be a definable set. Then there are finitely many definable sets of the form $V_i \times D_i \subseteq (\K^+)^{k_i} \times \Gamma^{l_i}$ such that $k_i + l_i = m$ for all $i$ and $[U] = \sum_i [V_i \times D_i^\sharp]$ in $\gsk \mathds{R}V[*]$.
\end{lem}
\begin{proof}
The case $m = 1$ is an immediate consequence of weak $o$\nobreakdash-minimality in the $\mathds{R}V$-sort. For the case $m > 1$, by Lemma~\ref{gam:tup:red}, compactness, and a routine induction on $m$, over a definable finite partition of $U$, we may assume that $U$ is a union of sets of the form $t \times \gamma^\sharp$, where $t \in (\K^+)^k$, $\gamma \in \Gamma^l$, and $k + l = m$. Then the assertion follows from Lemma~\ref{resg:decom}.
\end{proof}
Let $Q$ be a set of parameters in $\mdl R^{\bullet}_{\rv}$. We say that a $Q$-definable set $I \subseteq \Gamma^m$ is \emph{$Q$-reducible} if $I^\sharp$ is $Q$-definably bijective to $\K^+ \times I_{\tilde i}^\sharp$, where $i \in [m]$ and $I_{\tilde i} = \pr_{\tilde i}(I)$. For every $t \in (\K^+)^{n}$ and every $\alpha \in \Gamma^m$, $\alpha$ is $(t,\alpha)$-reducible if and only if, by Lemma~\ref{gam:tup:red}, there is a $(t,\alpha)$-definable nonempty proper subset of $\alpha^\sharp$ if and only if, by Lemma~\ref{gam:tup:red} again, there is an $\alpha$-definable set $U \subseteq (\K^+)^{n}$ containing $t$ such that $\alpha$ is $(u,\alpha)$-reducible for every $u \in U$ if and only if, by $o$\nobreakdash-minimality in the $\K$-sort and Lemma~\ref{RV:no:point}, $\alpha$ is $\alpha$-reducible.
We say that a definable set $A$ in $\mathds{R}V$ is \emph{$\Gamma$-tamped} of \emph{height} $l$ if there are $U \in \mathds{R}ES[k]$ and $I \in \Gamma[l]$ with $\dim_{\Gamma}(I) = l$ such that $A = U \times I^\sharp$. In that case, there is only one way to write $A$ as such a product, and if $B = V \times J^\sharp \subseteq A$ is also $\Gamma$-tamped then the coordinates occupied by $J^\sharp$ are also occupied by $I^\sharp$; in particular, $\dim_{\Gamma}(J) = l$ if and only if $V \subseteq U$ and $J \subseteq I$.
\begin{lem}\label{Gtamp}
Let $A = U \times I^\sharp$, $B = V \times J^\sharp$ be $\Gamma$-tamped sets of the same height $l$, where $U$, $V$ are sets in $\K^+$. Let $f$ be a definable bijection whose domain contains $A$ and whose range contains $B$. Suppose that $B \smallsetminus f(A)$, $A \smallsetminus f^{-1}(B)$ do not have $\Gamma$-tamped subsets of height $l$. Then there are finitely many $\Gamma$-tamped sets $A_i = U_i \times I_i^\sharp \subseteq U \times I^\sharp$ and $B_i = V_i \times J_i^\sharp \subseteq V \times J^\sharp$ such that
\begin{itemize}
\item $A \smallsetminus \bigcup_i A_i$ and $B \smallsetminus \bigcup_i B_i$ do not have $\Gamma$-tamped subsets of height $l$,
\item each restriction $f \upharpoonright A_i$ is of the form $p_i \times q_i$, where $p_i : U_i \longrightarrow V_i$, $q_i : I_i^\sharp \longrightarrow J_i^\sharp$ are bijections and the latter $\vrv$-contracts to a $\Gamma[*]$-morphism $q_{i \downarrow} : I_i \longrightarrow J_i$.
\end{itemize}
\end{lem}
\begin{proof}
Let $t \times \alpha^\sharp \subseteq A$. If $t \times \alpha^\sharp \subseteq A \smallsetminus f^{-1}(B)$ then, by Lemma~\ref{resg:decom}, it is contained in a definable set $U' \times I'^\sharp \subseteq A \smallsetminus f^{-1}(B)$ with $U' \subseteq U$ and $I' \subseteq I$. Since $A \smallsetminus f^{-1}(B)$ does not have $\Gamma$-tamped subsets of height $l$, we must have $\dim_{\Gamma}(I') < l$. It follows from (the proof of) Lemma~\ref{gam:red:K} that $I'$ is piecewise reducible, which implies that $\alpha$ is $\alpha$-reducible. At any rate, if $\alpha$ is $(t,\alpha)$-reducible then $\alpha$ is $\alpha$-reducible and hence there is a reducible subset of $I$ that contains $\alpha$.
Remove all the reducible subsets of $I$ from $I$ and call the resulting set $\bar I$; similarly for $\bar J$. Then, for all $t \in U$ and all $\alpha \in \bar I$, $f(t \times \alpha^\sharp)$ must be contained in a set of the form $s \times \beta^\sharp$, for otherwise it would have a $(t,\alpha)$-definable nonempty proper subset and hence would be $(t,\alpha)$-reducible. In fact, $f(t \times \alpha^\sharp) = s \times \beta^\sharp$, for otherwise $\beta$ is $(t,\alpha)$-reducible and hence, by $o$\nobreakdash-minimality in the $\K$-sort and the assumption $\dim_{\Gamma}(I) = \dim_{\Gamma}(J) = l$, a $(t,\alpha)$-definable nonempty proper subset of $\alpha^\sharp$ can be easily constructed. For the same reason, we must actually have $\beta \in \bar J$. It follows that $f(U \times \bar I^\sharp) = V \times \bar J^\sharp$. Then, by compactness, there are finitely many reducible subsets $I_i$ of $I$ such that, for all $t \in U$ and all $\alpha \in I_* = I \smallsetminus \bigcup_i I_i$, $f(t \times \alpha^\sharp) = s \times \beta^\sharp$ for some $s \in V$ and $\beta \in J$. Applying Lemma~\ref{resg:decom} to (the graph of) the function on $U \times I_*$ induced by $f$, the lemma follows.
\end{proof}
\begin{prop}\label{red:D:iso}
$\bb D$ is an isomorphism of graded semirings.
\end{prop}
\begin{proof}
Surjectivity of $\bb D$ follows immediately from Lemma~\ref{RV:decom:RES:G}. For injectivity, let $\bm U_i \coloneqq (U_i, f_i)$, $\bm V_j \coloneqq (V_j, g_j)$ be objects in $\mathds{R}ES[*]$ and $I_i$, $J_j$ objects in $\Gamma[*]$ such that $\bb D([\bm U_i] \otimes [I_i])$, $\bb D([\bm V_j] \otimes [J_j])$ are objects in $\mathfrak{s}k \mathds{R}V[l]$ for all $i$, $j$. Set
\[
\textstyle M_i = U_i \times I_i^\sharp, \quad N_j = V_j \times J_j^\sharp, \quad M = \biguplus_i M_i, \quad N = \biguplus_j N_j.
\]
Suppose that there is a definable bijection $f : M \longrightarrow N$. We need to show
\[
\textstyle \sum_i [\bm U_i] \otimes [I_i] = \sum_j [\bm V_j] \otimes [J_j].
\]
By Lemma~\ref{gam:red:K}, we may assume that all $M_{i}$, $N_{j}$ are $\Gamma$-tamped. By $o$\nobreakdash-minimal cell decomposition, without changing the sums, we may assume that each $U_i$ is a disjoint union of finitely many copies of $(\K^+)^i$ and thereby re-index $M_i$ more informatively as $M_{i, m} = U_i \times I_m^\sharp$, where $I_m$ is an object in $\Gamma[m]$; similarly each $N_j$ is re-indexed as $N_{j, n}$. By Lemma~\ref{dim:cut:gam}, the respective maxima of the numbers $i+m$, $j+n$ are the $\mathds{R}V$-dimensions of $M$, $N$ and hence must be equal; denote this common value by $p$. Let $q$ be the largest $m$ such that $i + m = p$ for some $M_{i, m}$ and $q'$ the largest $n$ such that $j + n = p$ for some $N_{j, n}$. It is not hard to see that we may arrange $q = q'$.
We now proceed by induction on $q$. The base case $q=0$ is rather trivial. For the inductive step, by Lemma~\ref{Gtamp}, we see that certain products contained in $M_{p-q, q}$, $N_{p-q, q}$ give rise to the same sum and the inductive hypothesis may be applied to the remaining portions.
\end{proof}
We may view $\Gamma$ as a double cover of $\abs \Gamma$ via the identification $\Gamma / {\pm 1} = \abs \Gamma$. Consequently we can associate two Euler characteristics $\chi_{\Gamma,g}$, $\chi_{\Gamma, b}$ with the $\Gamma$-sort, induced by those on $\abs \Gamma$ (see \cite{kage:fujita:2006} and also~\cite[\S~9]{hrushovski:kazhdan:integration:vf}). They are distinguished by
\[
\chi_{\Gamma, g}(\bm H) = \chi_{\Gamma, g}(\bm H^{-1}) = -1 \quad \text{and} \quad \chi_{\Gamma, b}(\bm H) = \chi_{\Gamma, b}(\bm H^{-1}) = 0.
\]
Similarly, there is an Euler characteristic $\chi_{\K}$ associated with the $\K$-sort (there is only one). We shall denote all of these Euler characteristics simply by $\chi$ if no confusion can arise. Using these $\chi$ and the groupification of $\bb D$ (also denoted by $\bb D$), we can construct various retractions from the Grothendieck ring $\ggk \mathds{R}V[*]$ to (certain localizations of) the Grothendieck rings $\ggk \mathds{R}ES[*]$ and $\ggk \Gamma[*]$.
\begin{lem}\label{gam:euler}
The Euler characteristics induce naturally three graded ring homomorphisms:
\[
\mdl E_{\K} : \ggk \mathds{R}ES[*] \longrightarrow \mathds{Z}[X] \quad \text{and} \quad \mdl E_{\Gamma, g}, \mdl E_{\Gamma, b} : \ggk \Gamma[*] \longrightarrow \mathds{Z}[X].
\]
\end{lem}
\begin{proof}
For $U \in \mathds{R}ES[k]$ and $I \in \Gamma[k]$, we set $\mdl E_{\K, k}([U]) = \chi(U)$ (see Remark~\ref{omin:res}) and $\mdl E_{\Gamma, k}([I]) = \chi(I)$. These maps are well-defined and they induce graded ring homomorphisms $\mdl E_{\K} \coloneqq \sum_k \mdl E_{\K, k} X^k$ and $\mdl E_{\Gamma} \coloneqq \sum_k \mdl E_{\Gamma, k} X^k$ as desired.
\end{proof}
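To illustrate, recall that the $o$\nobreakdash-minimal Euler characteristic of an open $i$-cell is $(-1)^i$. So, for instance, for the class of $(\K^+)^i$ in $\ggk \mathds{R}ES[i]$ we get
\[
\mdl E_{\K}\bigl(\bigl[(\K^+)^i\bigr]\bigr) = \chi\bigl((\K^+)^i\bigr) X^i = (-1)^i X^i = (-X)^i.
\]
This sign is ultimately responsible for the involution $\tau : X \longmapsto -X$ appearing in Remark~\ref{rem:poin} below.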
By the computation in \cite{kage:fujita:2006}, $\ggk \Gamma[*]$ is canonically isomorphic to the graded ring
\[
\textstyle \mathds{Z}[X, Y^{(2)}] \coloneqq \mathds{Z} \oplus \bigoplus_{i \geq 1} (\mathds{Z}[Y]/(Y^2+Y))X^i,
\]
where $YX$ represents the class $[\bm H] = [\bm H^{-1}]$ in $\ggk \Gamma[1]$. Thus $\mdl E_{\Gamma, g}$, $\mdl E_{\Gamma, b}$ are also given by
\[
\mathds{Z}[X, Y^{(2)}] \two^{Y \longmapsto -1}_{Y \longmapsto 0} \mathds{Z}[X].
\]
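That $Y \longmapsto -1$ and $Y \longmapsto 0$ exhaust the possibilities is a quick check: if $c \in \mathds{Z}$ denotes the image of $Y$ under a ring homomorphism $\mathds{Z}[Y]/(Y^2+Y) \longrightarrow \mathds{Z}$ then
\[
c^2 + c = 0, \quad \text{that is,} \quad c \in \{0, -1\}.
\]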
\begin{rem}[Explicit description of ${\ggk \mathds{R}V[*]}$]\label{rem:poin}
Of course, $\mdl E_{\K}$ is actually an isomorphism. The homomorphism $\mathfrak{s}k \Gamma^{c}[*] \longrightarrow \mathfrak{s}k \mathds{R}ES[*]$ in Remark~\ref{gam:res} and $\mdl E_{\K}$ then induce an isomorphism $\mdl E_{\K^c} : \ggk \Gamma^{c}[*] \longrightarrow \mathds{Z}[X]$. But this isomorphism is different from the groupification $\mdl E_{\Gamma^c}$ of the canonical isomorphism $\mathfrak{s}k \Gamma^{c}[*] \cong \mathfrak{s}k \mathds{N}[*]$. This latter isomorphism $\mdl E_{\Gamma^c}$ is also induced by $\mdl E_{\Gamma, g}$, $\mdl E_{\Gamma, b}$ (the two homomorphisms agree on $\ggk \Gamma^{c}[*]$). They are distinguished by $\mdl E_{\K^c}([e]) = -X$ and $\mdl E_{\Gamma^c}([e]) = X$. We have a commutative diagram
\[
\bfig
\hSquares(0,0)/<-`->`->`->`->`<-`->/[{\ggk \mathds{R}ES[*]}`{\ggk \Gamma^{c}[*]}`{\ggk \Gamma[*]}`\mathds{Z}[X]`\mathds{Z}[X]`{\mathds{Z}[X, Y^{(2)}]}; ``\mdl E_{\K}`\mdl E_{\Gamma^c}`\cong`\tau`]
\efig
\]
where $\tau$ is the involution determined by $X \longmapsto -X$. The graded ring
\[
\mathds{Z}[X] \otimes_{\mathds{Z}[X]} \mathds{Z}[X, Y^{(2)}]
\]
may be identified with $\mathds{Z}[X, Y^{(2)}]$ via the isomorphism given by $x \otimes y \longmapsto \tau(x)y$. Consequently, by Proposition~\ref{red:D:iso}, there is a graded ring isomorphism
\[
\ggk \mathds{R}V[*] \to^{\sim} \mathds{Z}[X, Y^{(2)}] \quad \text{with} \quad \bm 1_{\K} + [\bm P] \longmapsto 1 + 2YX + X.
\]
Setting
\[
\mathds{Z}^{(2)}[X] = \mathds{Z}[X, Y^{(2)}] / (1 + 2YX + X),
\]
we see that there is a canonical ring isomorphism
\[
\bb E_{\Gamma}: \ggk \mathds{R}V[*] / (\bm 1_{\K} + [\bm P]) \to^{\sim} \mathds{Z}^{(2)}[X].
\]
There are exactly two ring homomorphisms $\mathds{Z}^{(2)}[X] \longrightarrow \mathds{Z}$, determined by the assignments $Y \longmapsto -1$ and $Y \longmapsto 0$ or, equivalently, $X \longmapsto 1$ and $X \longmapsto -1$. Combining these with $\bb E_{\Gamma}$, we see that there are exactly two ring homomorphisms
\[
\bb E_{\Gamma,g}, \bb E_{\Gamma,b}: \ggk \mathds{R}V[*] / (\bm 1_{\K} + [\bm P]) \longrightarrow \mathds{Z}.
\]
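The equivalence of the two descriptions is immediate from the defining relation of $\mathds{Z}^{(2)}[X]$: since $1 + 2YX + X = 0$ in the quotient, the assignment $Y \longmapsto -1$ forces
\[
1 - 2X + X = 1 - X = 0, \quad \text{that is,} \quad X \longmapsto 1,
\]
while $Y \longmapsto 0$ forces $1 + X = 0$, that is, $X \longmapsto -1$.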
\end{rem}
\begin{prop}\label{prop:eu:retr:k}
There are two ring homomorphisms
\[
\bb E_{\K, g}: \ggk \mathds{R}V[*] \longrightarrow \ggk \mathds{R}ES[*][[\bm A]^{-1}] \quad \text{and} \quad \bb E_{\K, b}: \ggk \mathds{R}V[*] \longrightarrow \ggk \mathds{R}ES[*][[1]^{-1}]
\]
such that
\begin{itemize}
\item their ranges are precisely the zeroth graded pieces of their respective codomains,
\item $\bm 1_{\K} + [\bm P]$ vanishes under both of them,
\item for all $x \in \ggk \mathds{R}ES[k]$, $\bb E_{\K, g} (x) = x [\bm A]^{-k}$ and $\bb E_{\K, b}(x) = x [1]^{-k}$.
\end{itemize}
\end{prop}
\begin{proof}
We first define, for each $n$, a homomorphism
\[
\bb E_{g, n}: \ggk \mathds{R}V[n] \longrightarrow \ggk \mathds{R}ES[n]
\]
as follows. By Proposition~\ref{red:D:iso}, there is an isomorphism
\[
\textstyle \bb D_n : \bigoplus_{i + j = n} \ggk \mathds{R}ES[i] \otimes \ggk \Gamma[j] \to^{\sim} \ggk \mathds{R}V[n].
\]
Let the group homomorphism $\mdl E_{g, j} : \ggk \Gamma[j] \longrightarrow \mathds{Z}$ be defined as in Lemma~\ref{gam:euler}, using $\chi_{\Gamma, g}$. Let
\[
E_{g}^{i, j}: \ggk \mathds{R}ES[i] \otimes \ggk \Gamma[j] \longrightarrow \ggk \mathds{R}ES[i {+} j]
\]
be the group homomorphism determined by $x \otimes y \longmapsto \mdl E_{g, j}(y) x [\bm T]^{j}$. Let
\[
\textstyle E_{g, n} = \sum_{i + j = n} E_{g}^{i, j} \quad \text{and} \quad \bb E_{g, n} = E_{g, n} \circ \bb D_n^{-1}.
\]
Note that, due to the presence of the tensor $\otimes_{\ggk \Gamma^{c}[*]}$ and the replacement of $y$ with $\mdl E_{g, j}(y) [\bm T]^{j}$, an issue of compatibility arises between the various components of $E_{g, n}$. In our setting, this is easily resolved since all definable bijections are allowed in $\Gamma[*]$ and hence $\mathfrak{s}k \Gamma^c[*]$ is generated by isomorphism classes of the form $[e]^k$. In the setting of \cite{hrushovski:kazhdan:integration:vf}, however, one has to pass to a quotient ring to achieve compatibility (see Remark~\ref{why:glz} and also \cite[\S~2.5]{hru:loe:lef}).
Now, it is straightforward to check the equality
\[
\bb E_{g, n}(x)\bb E_{g, m}(y) = \bb E_{g, n+m}(xy).
\]
The group homomorphisms $\tau_{m, k} : \ggk \mathds{R}ES[m] \longrightarrow \ggk \mathds{R}ES[m{+}k]$ given by $x \longmapsto x [\bm A]^k$ determine a colimit system and the group homomorphisms
\[
\textstyle\bb E_{g, \leq n} \coloneqq \sum_{m \leq n} \tau_{m, n-m} \circ \bb E_{g, m} : \ggk \mathds{R}V[{\leq} n] \longrightarrow \ggk \mathds{R}ES[n]
\]
determine a homomorphism of colimit systems. Hence we have a ring homomorphism:
\[
\colim{n} \bb E_{g, \leq n} : \ggk \mathds{R}V[*] \longrightarrow \colim{\tau_{n, k}} \ggk \mathds{R}ES[n].
\]
For all $n \geq 1$ we have
\[
\bb E_{g, \leq n}(\bm 1_{\K} + [\bm P]) = [\bm A]^n - 2[\bm T][\bm A]^{n-1} - [1] [\bm A]^{n-1} = 0.
\]
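Indeed, the middle expression factors as
\[
[\bm A]^n - 2[\bm T][\bm A]^{n-1} - [1] [\bm A]^{n-1} = \bigl([\bm A] - 2[\bm T] - [1]\bigr)[\bm A]^{n-1},
\]
so the vanishing for all $n \geq 1$ amounts to the single relation $[\bm A] = 2[\bm T] + [1]$ once $[\bm A]$ is inverted.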
This yields the desired homomorphism $\bb E_{\K, g}$ since the colimit in question can be embedded into the zeroth graded piece of $\ggk \mathds{R}ES[*][[\bm A]^{-1}]$.
The construction of $\bb E_{\K, b}$ is completely analogous, with $[\bm A]$ replaced by $[1]$ and $\chi_{\Gamma, g}$ by $\chi_{\Gamma, b}$.
\end{proof}
Since the zeroth graded pieces of both $\ggk \mathds{R}ES[*][[\bm A]^{-1}]$ and $\ggk \mathds{R}ES[*][[1]^{-1}]$ are canonically isomorphic to $\mathds{Z}$, the homomorphisms $\bb E_{\K, g}$, $\bb E_{\K, b}$ are just the homomorphisms $\bb E_{\Gamma, g}$, $\bb E_{\Gamma, b}$ in Remark~\ref{rem:poin}; more precisely, $\bb E_{\K, g} = \bb E_{\Gamma, g}$ and $\bb E_{\K, b} = \bb E_{\Gamma, b}$.
\section{Generalized Euler characteristic}
From here on, our discussion will be of an increasingly formal nature. Many statements are exact copies of those in \cite{Yin:special:trans, Yin:int:acvf, Yin:int:expan:acvf} and often the same proofs work, provided that the auxiliary results are replaced by the corresponding ones obtained above. For the reader's convenience, we will write down all the details.
\subsection{Special bijections}
Our first task is to connect $\mathfrak{s}k \VF_*$ with $\mathfrak{s}k \mathds{R}V[*]$, more precisely, to establish a surjective homomorphism $\mathfrak{s}k \mathds{R}V[*] \longrightarrow \mathfrak{s}k \VF_*$. Notice the direction of the arrow. The main instruments in this endeavor are special bijections.
\begin{conv}\label{conv:can}
We reiterate \cite[Convention~2.32]{Yin:int:expan:acvf} here, with a different terminology, since this trivial-looking convention is actually quite crucial for understanding the discussion below, especially the parts that involve special bijections. For any set $A$, let
\[
\can(A) = \{(a, \rv(a), t) : (a, t) \in A \text{ and } a \in \pr_{\VF}(A)\}.
\]
The natural bijection $\can : A \longrightarrow \can(A)$ is called the \emph{regularization} of $A$. We shall tacitly substitute $\can(A)$ for $A$ if it is necessary or is just more convenient. Whether this substitution has been performed or not should be clear in context (or rather, it is always performed).
\end{conv}
\begin{defn}[Special bijections]\label{defn:special:bijection}
Let $A$ be a (regularized) definable set whose first coordinate is a $\VF$-coordinate (of course nothing is special about the first $\VF$-coordinate, we choose it simply for notational ease). Let $C \subseteq \mathds{R}VH(A)$ be an $\mathds{R}V$-pullback (see Definition~\ref{defn:disc}) and
\[
\lambda: \pr_{>1}(C \cap A) \longrightarrow \VF
\]
a definable function whose graph is contained in $C$. Recall Notation~\ref{nota:tor}. Let
\[
\textstyle C^{\sharp} = \bigcup_{x \in \pr_{>1} (C)} \MM_{\abvrv(\pr_1(x_{\mathds{R}V}))} \times x \quad \text{and} \quad \mathds{R}VH(A)^{\sharp} = C^{\sharp} \uplus (\mathds{R}VH(A) \smallsetminus C),
\]
where $x_{\mathds{R}V} = \pr_{\mathds{R}V}(x)$. The \emph{centripetal transformation $\eta : A \longrightarrow \mathds{R}VH(A)^{\sharp}$ with respect to $\lambda$} is defined by
\[
\begin{cases}
\eta (a, x) = (a - \lambda(x), x), & \text{on } C \cap A,\\
\eta = \id, & \text{on } A \smallsetminus C.
\end{cases}
\]
Note that $\eta$ is injective. The inverse of $\eta$ is naturally called the \emph{centrifugal transformation with respect to $\lambda$}. The function $\lambda$ is referred to as the \emph{focus} of $\eta$ and the $\mathds{R}V$-pullback $C$ as the \emph{locus} of $\lambda$ (or $\eta$).
A \emph{special bijection} $T$ on $A$ is an alternating composition of centripetal transformations and regularizations. By Convention~\ref{conv:can}, we shall only display the centripetal transformations in such a composition. The \emph{length} of such a special bijection $T$, denoted by $\lh(T)$, is the number of centripetal transformations in $T$. The range of $T$ is sometimes denoted by $A^{\flat}$.
\end{defn}
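For the simplest instance of this definition, suppose that $A$ has exactly one $\VF$-coordinate and that $C \cap A$ is a single $\mathds{R}V$-polydisc $t^\sharp \times (t, s)$, so that the focus $\lambda$ just picks out a point $a_0 \in t^\sharp$. Then
\[
\eta(a, t, s) = (a - a_0, t, s) \quad \text{for } a \in t^\sharp,
\]
and the off-center open disc $t^\sharp$ is replaced by the centered disc $\MM_{\abvrv(t)}$, exactly as prescribed by $C^\sharp$; the rest of $A$ is untouched.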
For functions between sets that have only one $\VF$-coordinate, composing with special bijections on the right and inverses of special bijections on the left obviously preserves dtdp.
\begin{lem}\label{inverse:special:dim:1}
Let $T$ be a special bijection on $A \subseteq \VF \times \mathds{R}V^m$ such that $A^{\flat}$ is an $\mathds{R}V$-pullback. Then there is a definable function $\epsilon : \pr_{\mathds{R}V} (A^{\flat}) \longrightarrow \VF$ such that, for every $\mathds{R}V$-polydisc $\mathfrak{p} = t^\sharp \times s \subseteq A^{\flat}$,
$(T^{-1}(\mathfrak{p}))_{\VF} = t^\sharp + \epsilon(s)$.
\end{lem}
\begin{proof}
It is clear that $\mathfrak{p}$ is the image of an open polydisc $\mathfrak{a} \times r \subseteq A$. Let $T'$ be $T$ with the last centripetal transformation $\eta_n$ deleted. Then $T'(\mathfrak{a} \times r)$ is also an open polydisc $\mathfrak{a}' \times r'$. The range of the focus map of $\eta_n$ contains a point in the smallest closed disc containing $\mathfrak{a}'$. This point can be transported back by the previous focus maps to a point in the smallest closed disc containing $\mathfrak{a}$. The lemma follows easily from this observation.
\end{proof}
Note that, since $\dom(\epsilon) \subseteq \mathds{R}V^l$ for some $l$, by Corollary~\ref{function:rv:to:vf:finite:image}, $\ran(\epsilon)$ is actually finite.
A definable set $A$ is called a \emph{deformed $\mathds{R}V$-pullback} if there is a special bijection $T$ on $A$ such that $A^{\flat}$ is an $\mathds{R}V$-pullback.
\begin{lem}\label{simplex:with:hole:rvproduct}
Every definable set $A \subseteq \VF \times \mathds{R}V^m$ is a deformed $\mathds{R}V$-pullback.
\end{lem}
\begin{proof}
By compactness and HNF this is immediately reduced to the situation where $A \subseteq \VF$ is contained in an $\mathds{R}V$-disc and is a $\vv$-interval with end-discs $\mathfrak{a}$, $\mathfrak{b}$. This may be further divided into several cases according to whether $\mathfrak{a}$, $\mathfrak{b}$ are open or closed discs and whether the ends of $A$ are open or closed. In each of these cases, Lemma~\ref{clo:disc:bary} is applied in much the same way as its counterpart is applied in the proof of \cite[Lemma~4.26]{Yin:QE:ACVF:min}. It is a tedious exercise and is left to the reader.
\end{proof}
Here is an analogue of \cite[Theorem~5.4]{Yin:special:trans} (see also \cite[Theorem~4.25]{Yin:int:expan:acvf}):
\begin{thm}\label{special:term:constant:disc}
Let $F(x) = F(x_1, \ldots, x_n)$ be an $\lan{T}{}{}$-term. Let $u \in \mathds{R}V^n$ and $R : u^\sharp \longrightarrow A$ be a special bijection. Then there is a special bijection $T : A \longrightarrow A^\flat$ such that $F \circ R^{-1} \circ T^{-1}$ is $\rv$-contractible. In a commutative diagram,
\[
\bfig
\square(0,0)/`->`->`->/<1500,400>[A^\flat`\VF`\rv(A^\flat)`\mathds{R}V_0;
`\rv`\rv`(F \circ R^{-1} \circ T^{-1})_{\downarrow}]
\morphism(0,400)<500,0>[A^\flat`A; T^{-1}]
\morphism(500,400)<500,0>[A`u^\sharp; R^{-1}]
\morphism(1000,400)<500,0>[u^\sharp`\VF; F]
\efig
\]
\end{thm}
\begin{proof}
First observe that if the assertion holds for one $\lan{T}{}{}$-term then it holds simultaneously for any finite number of $\lan{T}{}{}$-terms, since $\rv$-contractibility is preserved by further special bijections on $A^\flat$. We do induction on $n$. For the base case $n=1$, by Corollary~\ref{part:rv:cons} and Remark~\ref{rem:LT:com}, there is a definable finite partition $B_i$ of $u^\sharp$ such that, for all $i$, if $\mathfrak{a} \subseteq B_i$ is an open disc then $\rv \upharpoonright F(\mathfrak{a})$ is constant. By consecutive applications of Lemma~\ref{simplex:with:hole:rvproduct}, we obtain a special bijection $T$ on $A$ such that each $(T \circ R) (B_i)$ is an $\mathds{R}V$-pullback. Clearly $T$ is as required.
For the inductive step, we may concentrate on a single $\mathds{R}V$-polydisc $\mathfrak{p} = v^\sharp \times (v, r) \subseteq A$. Let $\phi(x, y)$ be a quantifier-free formula that defines the function $\rv \circ F$. Recall Convention~\ref{topterm}. Let $G_{i}(x)$ enumerate the top $\lan{T}{}{}$-terms of $\phi$. For $a \in v_1^\sharp$, write $G_{i,a} = G_{i}(a, x_2, \ldots, x_n)$. By the inductive hypothesis, there is a special bijection $R_{a}$ on $(v_2, \ldots, v_n)^\sharp$ such that every $G_{i,a} \circ R_a^{-1}$ is $\rv$-contractible. Let $U_{k, a}$ enumerate the loci of the components of $R_{a}$ and $\lambda_{k, a}$ the corresponding focus maps. By compactness,
\begin{itemize}
\item for each $i$, there is a quantifier-free formula $\psi_i$ such that $\psi_i(a)$ defines $(G_{i,a} \circ R_a^{-1})_{\downarrow}$,
\item there is a quantifier-free formula $\theta$ such that $\theta(a)$ determines the sequence $\rv(U_{k, a})$ and the $\VF$-coordinates targeted by $\lambda_{k, a}$.
\end{itemize}
Let $H_{j}(x_1)$ enumerate the top $\lan{T}{}{}$-terms of the formulas $\psi_i$, $\theta$. Applying the inductive hypothesis again, we obtain a special bijection $T_1$ on $v_1^\sharp$ such that every $H_{j} \circ T_1^{-1}$ is $\rv$-contractible. This means that, for every $\mathds{R}V$-polydisc $\mathfrak{q} \subseteq T_1(v_1^\sharp)$ and all $a_1, a_2 \in T_1^{-1}(\mathfrak{q})$,
\begin{itemize}
\item the formulas $\psi_i(a_1)$, $\psi_i(a_2)$ define the same $\rv$-contraction,
\item the special bijections $R_{a_1}$, $R_{a_2}$ may be glued together in the obvious sense to form one special bijection on $\{a_1, a_2\} \times (v_2, \ldots, v_n)^\sharp$.
\end{itemize}
Consequently, $T_1$ and $R_{a}$ naturally induce a special bijection $T$ on $\mathfrak{p}$ such that every $G_{i} \circ T^{-1}$ is $\rv$-contractible. This implies that $F \circ R^{-1} \circ T^{-1}$ is $\rv$-contractible and hence $T$ is as required.
\end{proof}
\begin{cor}\label{special:bi:term:constant}
Let $A \subseteq \VF^n$ be a definable set and $f : A \longrightarrow \mathds{R}V^m$ a definable function. Then there is a special bijection $T$ on $A$ such that $A^\flat$ is an $\mathds{R}V$-pullback and the function $f \circ T^{-1}$ is $\rv$-contractible.
\end{cor}
\begin{proof}
By compactness, we may assume that $A$ is contained in an $\mathds{R}V$-polydisc $\mathfrak{p}$. Let $\phi$ be a quantifier-free formula that defines $f$. Let $F_i(x, y)$ enumerate the top $\lan{T}{}{}$-terms of $\phi$. For $s \in \mathds{R}V^{m}$, let $F_{i, s} = F_{i}(x, s)$. By Theorem~\ref{special:term:constant:disc}, there is a special bijection $T$ on $\mathfrak{p}$ such that each function $F_{i, s} \circ T^{-1}$ is $\rv$-contractible. This means that, for each $\mathds{R}V$-polydisc $\mathfrak{q} \subseteq T(\mathfrak{p})$,
\begin{itemize}
\item either $T^{-1}(\mathfrak{q}) \subseteq A$ or $T^{-1}(\mathfrak{q}) \cap A = \emptyset$,
\item if $T^{-1}(\mathfrak{q}) \subseteq A$ then $(f \circ T^{-1})(\mathfrak{q})$ is a singleton.
\end{itemize}
So $T \upharpoonright A$ is as required.
\end{proof}
\begin{defn}[Lifting maps]\label{def:L}
Let $U$ be a set in $\mathds{R}V$ and $f : U \longrightarrow \mathds{R}V^k$ a function. Let $U_f$ stand for the set $\bigcup \{f(u)^\sharp \times u: u \in U\}$. The \emph{$k$th lifting map}
\[
\mathbb{L}_k: \mathds{R}V[k] \longrightarrow \VF[k]
\]
is given by $(U,f) \longmapsto U_f$.
The map $\mathbb{L}_{\leq k}: \mathds{R}V[{\leq} k] \longrightarrow \VF[k]$ is given by $\bigoplus_{i} \bm U_i \longmapsto \biguplus_{i} \bb L_i \bm U_i$.
Set $\mathbb{L} = \bigcup_k \mathbb{L}_{\leq k}$.
\end{defn}
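As a sanity check, if $U \subseteq \mathds{R}V^k$ and $f = \id$, then
\[
\mathbb{L}_k(U, \id) = U_{\id} = \bigcup_{u \in U} u^\sharp \times u,
\]
which is just the $\mathds{R}V$-pullback of $U$; in particular, for a singleton $U = \{u\}$ we recover the $\mathds{R}V$-polydisc $u^\sharp \times u$.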
\begin{cor}\label{all:subsets:rvproduct}
Every definable set $A \subseteq \VF^n \times \mathds{R}V^m$ is a deformed $\mathds{R}V$-pullback. In particular, if $A \in \VF_*$ then there are a $\bm U \in \mathds{R}V[{\leq} n]$ and a special bijection from $A$ onto $\mathbb{L}_{{\leq} n}(\bm U)$.
\end{cor}
\begin{proof}
For the first assertion, by compactness, we may assume $A \subseteq \VF^n$. Then it is a special case of Corollary~\ref{special:bi:term:constant}. The second assertion follows from Lemma~\ref{RV:fiber:dim:same}.
\end{proof}
\begin{defn}[Lifts]\label{def:lift}
Let $F: (U, f) \longrightarrow (V, g)$ be an $\mathds{R}V[k]$-morphism. Then $F$ induces a definable finite-to-finite correspondence $F^\dag \subseteq f(U) \times g(V)$. Since $F^\dag$ can be decomposed into finitely many definable bijections, for simplicity, we assume that $F^\dag$ is itself a bijection. Let $F^{\sharp} : f(U)^\sharp \longrightarrow g(V)^\sharp$ be a definable bijection that $\rv$-contracts to $F^\dag$. Then $F^\sharp$ is called a \emph{lift} of $F$. By Convention~\ref{conv:can}, we shall think of $F^\sharp$ as a definable bijection $\bb L(U, f) \longrightarrow \bb L(V, g)$ that $\rv$-contracts to $F^\dag$.
\end{defn}
\begin{lem}\label{simul:special:dim:1}
Let $f : A \longrightarrow B$ be a definable bijection between two sets that have exactly one $\VF$-coordinate each. Then there are special bijections $T_A : A \longrightarrow A^{\flat}$, $T_B : B \longrightarrow B^{\flat}$ such that $A^{\flat}$, $B^{\flat}$ are $\mathds{R}V$-pullbacks and $f^{\flat}_{\downarrow}$ is bijective in
the commutative diagram
\[
\bfig
\square(0,0)/->`->`->`->/<600,400>[A`A^{\flat}`B`B^{\flat};
T_A`f``T_B]
\square(600,0)/->`->`->`->/<600,400>[A^{\flat}`\rv(A^{\flat})`B^{\flat} `\rv(B^{\flat}); \rv`f^{\flat}`f^{\flat}_{\downarrow}`\rv]
\efig
\]
Thus, if $A, B \in \VF_*$ then $f^{\flat}$ is a lift of $f^{\flat}_{\downarrow}$, where the latter is regarded as an $\mathds{R}V[1]$-morphism between $\rv(A^{\flat})_{1}$ and $\rv(B^{\flat})_1$ (recall Notation~\ref{0coor}).
\end{lem}
\begin{proof}
By Corollaries~\ref{special:bi:term:constant}, \ref{all:subsets:rvproduct}, and Lemma~\ref{open:pro}, we may assume that $A$, $B$ are $\mathds{R}V$-pullbacks, $f$ is $\rv$-contractible and has dtdp, and there is a special bijection $T_B: B \longrightarrow B^{\flat}$ such that $(T_B \circ f)^{-1}$ is $\rv$-contractible. Let $T_B = \eta_{n} \circ \ldots \circ \eta_{1}$, where each $\eta_{i}$ is a centripetal transformation (and regularization maps are not displayed). Then it is enough to construct a special bijection $T_A = \zeta_{n} \circ \ldots \circ \zeta_{1}$ on $A$ such that, for each $i$, both $f_i \coloneqq T_{B, i} \circ f \circ T_{A, i}^{-1}$ and $T_{A, i} \circ (T_B \circ f)^{-1}$ are $\rv$-contractible, where $T_{B, i} = \eta_{i} \circ \ldots \circ \eta_{1}$ and $T_{A, i} = \zeta_{i} \circ \ldots \circ \zeta_{1}$.
To that end, suppose that $\zeta_i$ has been constructed for each $i \leq k < n$. Let $A_{k} = T_{A, k}(A)$ and $B_k = T_{B, k}(B)$. Let $D \subseteq B_k$ be the locus of $\eta_{k+1}$ and $\lambda$ the corresponding focus map. Since $f_k$ is $\rv$-contractible and has dtdp, each $\mathds{R}V$-polydisc $\mathfrak{p} \subseteq B_k$ is a union of disjoint sets of the form $f_k(\mathfrak{q})$, where $\mathfrak{q} \subseteq A_k$ is an $\mathds{R}V$-polydisc. For each $t = (t_1, t_{\tilde 1}) \in \dom(\lambda)$, let $O_{t}$ be the set of those $\mathds{R}V$-polydiscs $\mathfrak{q} \subseteq A_k$ such that $f_k(\mathfrak{q}) \subseteq t^\sharp_1 \times t$. Let
\begin{itemize}
\item $\mathfrak{q}_{t} \in O_{t}$ be the $\mathds{R}V$-polydisc with $(\lambda(t), t) \in \mathfrak{o}_{t} \coloneqq f_k(\mathfrak{q}_t)$,
\item $C = \bigcup_{t \in \dom(\lambda)} \mathfrak{q}_{t} \subseteq A_k$ and $a_{t} = f_k^{-1}(\lambda(t), t) \in \mathfrak{q}_{t}$,
\item $\kappa : \pr_{>1} (C) \longrightarrow \VF$ the corresponding focus map given by $\pr_{>1} (\mathfrak{q}_{t}) \longmapsto \pr_1(a_{t})$,
\item $\zeta_{k+1}$ the centripetal transformation determined by $C$ and $\kappa$.
\end{itemize}
For each $t \in \dom(\lambda)$, $f_{k+1}$ restricts to a bijection between the
$\mathds{R}V$-pullbacks $\zeta_{k+1}(\mathfrak{q}_{t})$ and
$\eta_{k+1}(\mathfrak{o}_{t})$ that is $\rv$-contractible in
both ways and, for any $\mathfrak{q} \in O_{t}$ with $\mathfrak{q} \neq \mathfrak{q}_{t}$, $f_{k+1}(\mathfrak{q})$ is an open polydisc contained in an $\mathds{R}V$-polydisc. So $f_{k+1}$ is $\rv$-contractible.
On the other hand, it is clear that, for any $\mathds{R}V$-polydisc $\mathfrak{p} \subseteq B^{\flat}$, $T_{A, k} \circ (T_B \circ f)^{-1}(\mathfrak{p})$ does not contain any $a_{t}$ and hence, by the construction of $T_{A, k}$, $T_{A, k+1} \circ (T_B \circ f)^{-1}$ is $\rv$-contractible.
\end{proof}
\begin{hyp}\label{hyp:point}
The following lemma is used directly only once, in Corollary~\ref{RV:lift}. It should have been presented right after Definition~\ref{defn:corr:cont}. We place it here because this is the first of only two places in this paper (the other being Lemma~\ref{blowup:same:RV:coa}) where we need to assume that every definable $\mathds{R}V$-disc contains a definable point. The easiest way to guarantee this is to assume that $\mdl S$ is $\VF$-generated, which, together with Hypothesis~\ref{hyp:gam}, implies that it is a model of $\TCVF$ and is indeed an elementary substructure (so every definable set contains a definable point). This assumption will be in effect throughout the rest of the paper.
\end{hyp}
\begin{lem}\label{RVlift}
Every definable bijection $f : U \longrightarrow V$ between two subsets of $\mathds{R}V^k$ can be lifted, that is, there is a definable bijection $f^{\sharp} : U^\sharp \longrightarrow V^\sharp$ that $\rv$-contracts to $f$.
\end{lem}
\begin{proof}
We do induction on $n = \dim_{\mathds{R}V}(U) = \dim_{\mathds{R}V}(V)$. If $n=0$ then $U$ is finite and hence, for every $u \in U$, the $\mathds{R}V$-polydisc $u^\sharp$ contains a definable point, and similarly for $V$; the construction of the desired $f^{\sharp}$ is then obvious.
For the inductive step, by weak $o$\nobreakdash-minimality in the $\mathds{R}V$-sort, there are definable finite partitions $U_i$, $V_i$ of $U$, $V$ and injective coordinate projections
\[
\pi_i : U_i \longrightarrow \mathds{R}V^{k_i}, \quad \pi'_i : V_i \longrightarrow \mathds{R}V^{k_i},
\]
where $\dim_{\mathds{R}V}(U_i) = \dim_{\mathds{R}V}(V_i) = k_i$; the obvious bijection $\pi_i(U_i) \longrightarrow \pi'_i(V_i)$ induced by $f$ is denoted by $f_i$. Observe that if every $f_i$ can be lifted as desired then, by the construction in the base case above, $f$ can be lifted as desired as well. Therefore, without loss of generality, we may assume $k = n$. For $u \in U$ and $a \in u^\sharp$, the $\mathds{R}V$-polydisc $f(u)^\sharp$ contains an $a$-definable point and hence, by compactness, there is a definable function $f^{\sharp} : U^\sharp \longrightarrow V^\sharp$ that $\rv$-contracts to $f$. By Lemma~\ref{RV:bou:dim}, $\dim_{\mathds{R}V}(\partial_{\mathds{R}V} f^{\sharp}(U^\sharp)) < n$ and hence, by the inductive hypothesis, we may assume that $f^{\sharp}$ is surjective. Then there is a definable function $g : V^\sharp \longrightarrow U^\sharp$ such that $f^{\sharp}(g(b)) = b$ for all $b \in V^\sharp$. By Lemma~\ref{RV:bou:dim} and the inductive hypothesis again, we may further assume that $g$ is also a surjection, which just means that $f^{\sharp}$ is a bijection as desired.
\end{proof}
The following corollary is an analogue of \cite[Proposition~6.1]{hrushovski:kazhdan:integration:vf}.
\begin{cor}\label{RV:lift}
For every $\mathds{R}V[k]$-morphism $F : (U, f) \longrightarrow (V, g)$ there is a $\VF[k]$-morphism $F^\sharp$ that lifts $F$.
\end{cor}
\begin{proof}
As in Definition~\ref{def:lift}, we may assume that the finite-to-finite correspondence $F^\dag$ is actually a bijection. Then this is immediate by Lemma~\ref{RVlift}.
\end{proof}
\begin{cor}\label{L:sur:c}
The lifting map $\bb L_{\leq k}$ induces a surjective homomorphism, which is sometimes simply denoted by $\bb L$, between the Grothendieck semigroups
\[
\mathfrak{s}k \mathds{R}V[{\leq} k] \epi \mathfrak{s}k \VF[k].
\]
\end{cor}
\begin{proof}
By Corollary~\ref{RV:lift}, every $\mathds{R}V[k]$-isomorphism can be lifted. So $\bb L_{\leq k}$ induces a map on the isomorphism classes, which is easily seen to be a semigroup homomorphism. By Lemma~\ref{altVFdim} and Corollary~\ref{all:subsets:rvproduct}, this homomorphism is surjective.
\end{proof}
\subsection{$2$-cells}
The remaining objective of this section is to identify the kernels of the semigroup homomorphisms $\bb L$ in Corollary~\ref{L:sur:c} and thereby complete the construction of the universal additive invariant. We begin with a discussion of $2$-cells, as in \cite[\S~4]{Yin:int:acvf}.
The notion of a $2$-cell, which corresponds to that of a bicell in \cite{cluckers:loeser:constructible:motivic:functions}, may look strange and is, perhaps, only of technical
interest. It arises when we try to prove some analogue of Fubini's
theorem, such as Lemma~\ref{contraction:perm:pair:isp} below. The
difficulty is that, although the interaction between $\rv$-contractions and special bijections for definable sets of
$\VF$-dimension $1$ is in a sense ``functorial'' (see Lemma~\ref{simul:special:dim:1}), we are unable to extend the construction to higher $\VF$-dimensions. This is the concern of \cite[Question~7.9]{hrushovski:kazhdan:integration:vf}. It has
also occurred in \cite{cluckers:loeser:constructible:motivic:functions} and actually may be
traced back to the construction of the $o$\nobreakdash-minimal Euler characteristic in \cite{dries:1998}; see
\cite[Section~1.7]{cluckers:loeser:constructible:motivic:functions}.
Anyway, in this situation, a natural strategy for $\rv$-contracting the isomorphism class of a
definable set of higher $\VF$-dimension is to apply the result for
$\VF$-dimension $1$ parametrically and proceed with one $\VF$-coordinate at a time. As in the classical theory of integration, this strategy requires some form of Fubini's theorem: for a well-behaved integration (or additive invariant in our case), an integral should yield the same value
when it is evaluated along different orders of the variables. By induction, this problem is immediately reduced to the case of two variables. A $2$-cell is a definable
subset of $\VF^2$ with certain symmetrical (or ``linear'' in the sense described in Remark~\ref{2cell:linear} below) internal structure that satisfies this Fubini-type requirement. Now the idea is that, if we can find a definable partition for every definable set such that each piece is a $2$-cell indexed by some $\mathds{R}V$-sort parameters, then, by compactness, every definable
set satisfies the Fubini-type requirement. This kind of
partition is achieved in Lemma~\ref{decom:into:2:units}.
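For orientation, the requirement in question is a direct analogue of the classical identity
\[
\int \Bigl( \int f(x_1, x_2) \, dx_1 \Bigr) dx_2 = \int \Bigl( \int f(x_1, x_2) \, dx_2 \Bigr) dx_1,
\]
with integration along a $\VF$-coordinate replaced by contraction of that coordinate and equality of values replaced by equality of the resulting classes; the precise statement is Lemma~\ref{contraction:perm:pair:isp}.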
\begin{lem}\label{bijection:dim:1:decom:RV}
Let $f : A \longrightarrow B$ be a definable bijection between two subsets of $\VF$. Then there is a special bijection $T$ on $A$ such that $A^\flat$ is an $\mathds{R}V$-pullback and, for each $\mathds{R}V$-polydisc $\mathfrak{p} \subseteq A^\flat$, $f \upharpoonright T^{-1}(\mathfrak{p})$ is $\rv$-affine.
\end{lem}
\begin{proof}
By Lemma~\ref{rv:lin} and compactness, for all but finitely many $a \in A$ there is an $a$-definable $\delta_a \in \abs{\Gamma}$ such that $f \upharpoonright \mathfrak{o}(a, \delta_a)$ is $\rv$-affine. Without loss of generality, we may assume that, for all $a \in A$, $\delta_a$ exists and is the least element that satisfies this condition. Let $g : A \longrightarrow \abs{\Gamma}$ be the definable function given by $a \longmapsto \delta_a$. By Corollary~\ref{special:bi:term:constant}, there is a special bijection $T$ on $A$ such that $A^\flat$ is an $\mathds{R}V$-pullback and, for every $\mathds{R}V$-polydisc $\mathfrak{p} \subseteq A^\flat$, $(g \circ T^{-1}) \upharpoonright \mathfrak{p}$ is constant. By Lemmas~\ref{one:atomic} and \ref{rv:lin}, we must have $(g \circ T^{-1})(\mathfrak{p}) \leq \rad(\mathfrak{p})$, for otherwise the choice of $\delta_a$ would be violated for some $a \in T^{-1}(\mathfrak{p})$. So $T$ is as required.
\end{proof}
\begin{lem}\label{bijection:rv:one:one}
Let $A \subseteq \VF^2$ be a definable set such that $\mathfrak{a}_1 \coloneqq \pr_1(A)$ and $\mathfrak{a}_2 \coloneqq \pr_2(A)$ are open discs. Suppose that there is a definable bijection $f : \mathfrak{a}_1 \longrightarrow \mathfrak{a}_2$ that has dtdp and, for each $a \in \mathfrak{a}_1$, there is a $t_a \in \mathds{R}VV$ with $A_a = t_a^\sharp + f(a)$. Then there is a special bijection $T$ on $\mathfrak{a}_1$ such that $\mathfrak{a}_1^\flat$ is an $\mathds{R}V$-pullback and, for each $\mathds{R}V$-polydisc $\mathfrak{p} \subseteq \mathfrak{a}_1^\flat$, $\rv$ is constant on the set
\[
\{a - f^{-1}(b) : a \in T^{-1}(\mathfrak{p}) \text{ and } b \in A_a \}.
\]
\end{lem}
\begin{proof}
For each $a \in \mathfrak{a}_1$, let $\mathfrak{b}_a$ be the smallest closed disc that contains $A_a$. Since $A_a - f(a) = t_a^\sharp$, we have $f(a) \in \mathfrak{b}_a$ but $f(a) \notin A_a$ if $t_a \neq 0$. Hence $a \notin f^{-1}(A_a)$ if $t_a \neq 0$ and $\{a\} = f^{-1}(A_a)$ if $t_a = 0$. Since $f^{-1}(A_a)$ is a disc or a point, in either case, the function on $f^{-1}(A_a)$ given by $b \longmapsto \rv(a - b)$ is constant. The function $h : \mathfrak{a}_1 \longrightarrow \mathds{R}VV$ given by $a \longmapsto \rv(a - f^{-1}(A_a))$ is definable. Now we apply Corollary~\ref{special:bi:term:constant} as in the proof of Lemma~\ref{bijection:dim:1:decom:RV}. The lemma follows.
\end{proof}
\begin{defn}\label{defn:balance}
Let $A$, $\mathfrak{a}_1$, $\mathfrak{a}_2$, and $f$ be as in Lemma~\ref{bijection:rv:one:one}. We say that $f$ is \emph{balanced in $A$} if $f$ is actually $\rv$-affine and there are $t_1, t_2 \in \mathds{R}VV$, called the \emph{paradigms} of $f$, such that, for every $a \in \mathfrak{a}_1$,
\[
A_a = t_2^\sharp + f(a) \quad \text{and} \quad f^{-1}(A_a) = a - t_1^\sharp.
\]
\end{defn}
\begin{rem}\label{2cell:linear}
Suppose that $f$ is balanced in $A$ with paradigms $t_1$, $t_2$.
If one of the paradigms is $0$ then the other one must be $0$. In this case $A$ is just the (graph of the) bijection $f$ itself.
Assume that $t_1$, $t_2$ are nonzero. Let $\mathfrak{B}_1$, $\mathfrak{B}_2$ be, respectively, the sets of closed subdiscs of $\mathfrak{a}_1$, $\mathfrak{a}_2$ of radii $\abs{\vrv(t_1)}$, $\abs{\vrv(t_2)}$. Let $a_1 \in \mathfrak{b}_1 \in \mathfrak{B}_1$ and $\mathfrak{o}_1$ be the maximal open subdisc of $\mathfrak{b}_1$ containing $a_1$. Let $\mathfrak{b}_2 \in \mathfrak{B}_2$ be the smallest closed disc containing the open disc $\mathfrak{o}_2 \coloneqq A_{a_1}$. Then, for all $a_2 \in \mathfrak{o}_2$, we have
\[
\mathfrak{o}_2 = t_2^\sharp + f(\mathfrak{o}_1) = A_{a_1} \quad \text{and} \quad A_{a_2} = f^{-1}(\mathfrak{o}_2) + t_1^\sharp = \mathfrak{o}_1.
\]
This internal symmetry of $A$ is illustrated by the following diagram:
\[
\bfig
\dtriangle(0,0)|amb|/.``<-/<600,250>[\mathfrak{o}_1`f^{-1}(\mathfrak{o}_2)`\mathfrak{o}_2; \pm t_1^\sharp`\times`f^{-1}]
\ptriangle(600,0)|amb|/->``./<600,250>[\mathfrak{o}_1`f(\mathfrak{o}_1)`\mathfrak{o}_2; f``\pm t_2^\sharp]
\efig
\]
Since $f$ is $\rv$-affine, we see that its slope must be $-t_2/t_1$ (recall Definition~\ref{rvaffine}).
If we think of $\mathfrak{b}_1$, $\mathfrak{b}_2$ as $\tor(\code {\mathfrak{o}_1})$, $\tor(\code {\mathfrak{o}_2})$ then the set $A \cap (\mathfrak{b}_1 \times \mathfrak{b}_2)$ may be thought of as the ``line'' in $\tor(\code {\mathfrak{o}_1}) \times \tor(\code {\mathfrak{o}_2})$ given by the equation
\[
x_2 = - \tfrac{t_2}{t_1}(x_1 - \code{\mathfrak{o}_1}) + (\code{\mathfrak{o}_2} - t_2).
\]
Thus, by Lemma~\ref{simul:special:dim:1}, the obvious bijection between $\pr_1(A) \times t_2^\sharp$ and $t_1^\sharp \times \pr_2(A)$ is the lift of an $\mathds{R}V[{\leq}2]$-morphism modulo special bijections; see Lemma~\ref{2:unit:contracted} below for details. The slope of $f$ will play a more important role when volume forms are introduced into the categories (in a sequel).
\end{rem}
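To fix ideas, here is a simple instance of a balanced bijection (this example is only illustrative and is not used below). Let $\mathfrak{o}$ be an open disc contained in a single $\mathds{R}V$-disc and $t \in \mathds{R}V$ with $\abs{\vrv(t)} < \rad(\mathfrak{o})$. Take $f$ to be the identity map on $\mathfrak{o}$ and
\[
A = \{(a, b) \in \mathfrak{o} \times \mathfrak{o} : \rv(b - a) = t\}.
\]
Then, for every $a \in \mathfrak{o}$, we have $A_a = t^\sharp + a$ and $f^{-1}(A_a) = t^\sharp + a = a - (-t)^\sharp$, so $f$ is balanced in $A$ with paradigms $t_1 = -t$ and $t_2 = t$; accordingly the slope of $f$ is $-t_2/t_1 = 1$, as it must be for the identity map.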
\begin{defn}[$2$-cell]\label{def:units}
We say that a set $A$ is a \emph{$1$-cell} if it is either an open disc contained in a single $\mathds{R}V$-disc or a point in $\VF$. We say
that $A$ is a \emph{$2$-cell} if
\begin{enumerate}
\item $A$ is a subset of $\VF^2$ contained in a single $\mathds{R}V$-polydisc and $\pr_1(A)$ is a $1$-cell,
\item there is a function $\epsilon : \pr_1(A) \longrightarrow \VF$ and a $t \in \mathds{R}V_0$ such that, for every $a \in \pr_1(A)$, $A_a = t^\sharp + \epsilon(a)$,
\item one of the following three possibilities occurs:
\begin{enumerate}
\item $\epsilon$ is constant,
\item $\epsilon$ is injective, has dtdp, and $\rad(\epsilon(\pr_1(A))) \geq \abs{\vrv(t)}$,\label{2cell:3b}
\item $\epsilon$ is balanced in $A$.
\end{enumerate}
\end{enumerate}
The function $\epsilon$ is called the \emph{positioning function} of $A$ and the element $t$ the \emph{paradigm} of $A$.
More generally, a set $A$ with exactly one $\VF$-coordinate is a \emph{$1$-cell} if, for each $t \in \pr_{>1}(A)$, $A_t$ is a $1$-cell in the above sense; the parameterized version of the notion of a $2$-cell is formulated in the same way.
\end{defn}
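For illustration only: if $\mathfrak{o} \subseteq \VF$ is an open disc contained in a single $\mathds{R}V$-disc, $t \in \mathds{R}V$, and $c \in \VF$ is a nonzero point with $\abs{\vrv(t)} < \abs{\vrv(\rv(c))}$, then
\[
A = \mathfrak{o} \times (t^\sharp + c)
\]
is a $2$-cell with constant positioning function $\epsilon = c$ and paradigm $t$; the condition on $\abs{\vrv(t)}$ ensures that $t^\sharp + c$ is contained in the single $\mathds{R}V$-disc $\rv(c)^\sharp$, so that $A$ is indeed contained in a single $\mathds{R}V$-polydisc.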
A $2$-cell is definable if all the relevant ingredients are definable. Naturally we will only be concerned with definable $2$-cells. Notice that Corollary~\ref{all:subsets:rvproduct} implies that for every definable set $A$ with exactly one $\VF$-coordinate there is a definable function $\pi: A \longrightarrow \mathds{R}V^l$ such that every fiber $A_s$ is a $1$-cell. This should be understood as $1$-cell decomposition and the next lemma as $2$-cell decomposition.
\begin{lem}[$2$-cell decomposition]\label{decom:into:2:units}
For every definable set $A \subseteq \VF^2$ there is a definable function $\pi: A \longrightarrow \mathds{R}V^m$ such that every fiber $A_s$ is an $s$-definable $2$-cell.
\end{lem}
\begin{proof}
By compactness, we may assume that $A$ is contained in a single
$\mathds{R}V$-polydisc. For each $a \in \pr_1(A)$, by Corollary~\ref{all:subsets:rvproduct}, there is an $a$-definable special bijection $T_a$ on $A_a$ such that $A_a^\flat$ is an $\mathds{R}V$-pullback. By Lemma~\ref{inverse:special:dim:1}, there is an $a$-definable function $\epsilon_a : (A_a^\flat)_{\mathds{R}V} \longrightarrow \VF$ such that, for every $(t, s) \in (A_a^\flat)_{\mathds{R}V}$, we have
\[
T_a^{-1}(t^\sharp \times (t, s)) =
t^\sharp + \epsilon_a(t, s).
\]
By compactness, we may glue these functions together, that is, there is a definable set $C \subseteq \pr_1(A) \times \mathds{R}V^l$ and a definable function $\epsilon : C \longrightarrow \VF$ such that, for every $a \in \pr_1(A)$, $C_a = (A_a^\flat)_{\mathds{R}V}$ and $\epsilon \upharpoonright C_a = \epsilon_a$.
For $(t, s) \in C_{\mathds{R}V}$, write $\epsilon_{(t, s)} = \epsilon \upharpoonright C_{(t, s)}$. By Corollary~\ref{uni:fun:decom} and compactness, we are reduced to the case
that each $\epsilon_{(t, s)}$ is either
constant or injective. If no $\epsilon_{(t, s)}$ is injective then we can finish by applying
Corollary~\ref{all:subsets:rvproduct} to each $C_{(t, s)}$ and invoking compactness.
Suppose that some $\epsilon_{(t, s)}$ is injective. Then, by Lemmas~\ref{open:pro} and \ref{bijection:dim:1:decom:RV}, we are reduced to the case that $C_{(t, s)}$ is an open disc and
$\epsilon_{(t, s)}$ is $\rv$-affine and has dtdp. Write $\mathfrak{b}_{(t, s)} = \ran(\epsilon_{(t, s)})$. If $\rad(\mathfrak{b}_{(t, s)}) \geq \abvrv(t)$ then $\epsilon_{(t, s)}$ satisfies the condition (\ref{2cell:3b}) in Definition~\ref{def:units}. So let us suppose $\rad(\mathfrak{b}_{(t, s)}) < \abvrv(t)$. Then
\[
\textstyle \mathfrak{b}_{(t, s)} = \bigcup_{a \in C_{(t, s)}} (t^\sharp + \epsilon_{(t, s)}(a)).
\]
By Lemma~\ref{bijection:rv:one:one}, we are further reduced to the case that there is an $r \in \mathds{R}V$ such that, for every $a \in C_{(t, s)}$,
\[
\rv(a - \epsilon_{(t, s)}^{-1}(t^\sharp + \epsilon_{(t, s)}(a))) = r \quad \text{and hence} \quad \epsilon_{(t, s)}^{-1}(t^\sharp + \epsilon_{(t, s)}(a)) = a - r^\sharp.
\]
So, in this case, $\epsilon_{(t, s)}$ is balanced. Now we are
done by compactness.
\end{proof}
To extend Lemma~\ref{simul:special:dim:1} to all definable bijections, we need not only $2$-cell decomposition but also the following notions.
Let $A \subseteq \VF^{n} \times \mathds{R}V^{m}$, $B \subseteq \VF^{n} \times \mathds{R}V^{m'}$, and $f : A \longrightarrow B$ be a definable bijection.
\begin{defn}\label{rela:unary}
We say that $f$ is \emph{relatively unary} or, more precisely, \emph{relatively unary in the $i$th $\VF$-coordinate}, if $(\pr_{\tilde{i}} \circ f)(x) = \pr_{\tilde{i}}(x)$ for all $x \in A$, where $i \in [n]$. If $f \upharpoonright A_y$ is also a special bijection for every $y \in \pr_{\tilde{i}}(A)$ then we say that $f$ is \emph{relatively special in the $i$th $\VF$-coordinate}.
\end{defn}
Obviously the inverse of a relatively unary bijection is a relatively unary bijection. Also note that every special bijection on $A$ is a composition of relatively special bijections.
Choose an $i \in [n]$. By Corollary~\ref{all:subsets:rvproduct} and compactness, there is a bijection $T_i$ on $A$, relatively special in the $i$th $\VF$-coordinate, such that $T_i(A_a)$ is an $\mathds{R}V$-pullback for every $a \in \pr_{\tilde i}(A)$. Note that $T_i$ is not necessarily a special bijection on $A$, since the special bijections in the $i$th $\VF$-coordinate for distinct $a, a' \in \pr_{\tilde i}(A)$ with $\rv(a) = \rv(a')$ may not even be of the same length. Let
\[
\textstyle A_i = \bigcup_{a \in \pr_{\tilde i}(A)} a \times (T_i(A_a))_{\mathds{R}V} \subseteq \VF^{n-1} \times \mathds{R}V^{m_i}.
\]
Write $\hat T_i : A \longrightarrow A_i$ for the function naturally induced by $T_i$. For any $j \in [n{-}1]$, we repeat the above procedure on $A_i$ with respect to the $j$th $\VF$-coordinate and thereby obtain a set $A_{j} \subseteq \VF^{n-2} \times \mathds{R}V^{m_j}$ and a function $\hat T_{j} : A_i \longrightarrow A_{j}$. The relatively special bijection on $T_i(A)$ induced by $\hat T_{j}$ is denoted by $T_j$. Continuing thus, we obtain a sequence of bijections $T_{\sigma(1)}, \ldots, T_{\sigma(n)}$ and a corresponding function $\hat T_{\sigma} : A \longrightarrow \mathds{R}V^{l}$, where $\sigma$ is the permutation of $[n]$ in question. The composition $T_{\sigma(n)} \circ \cdots \circ T_{\sigma(1)}$, which is referred to as the \emph{lift} of $\hat T_{\sigma}$, is denoted by $T_{\sigma}$.
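Schematically, with the intermediate sets renamed for readability, the construction just described contracts one $\VF$-coordinate at a time:
\[
A \xrightarrow{\ \hat T_{\sigma(1)}\ } A_{\sigma(1)} \xrightarrow{\ \hat T_{\sigma(2)}\ } A_{\sigma(2)} \longrightarrow \cdots \xrightarrow{\ \hat T_{\sigma(n)}\ } \hat T_{\sigma}(A) \subseteq \mathds{R}V^{l},
\]
where each step trades the designated $\VF$-coordinate for finitely many $\mathds{R}V$-coordinates.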
\begin{defn}\label{defn:standard:contraction}
Suppose that there is a $k \in 0 \cup [m]$ such that $(A_a)_{\leq k} \in \mathds{R}V[k]$ for every $a \in A_{\VF}$. In particular, if $k=0$ then $A \in \VF_*$. By Lemma~\ref{RV:fiber:dim:same}, $\hat T_{\sigma}(A)_{\leq n+k}$ is an object of $\mathds{R}V[{\leq} l{+}k]$, where $\dim_{\VF}(A) = l$. The function $\hat T_{\sigma}$ --- or the object $\hat T_{\sigma}(A)_{\leq n+k}$ --- is referred to as a \emph{standard contraction} of the set $A$ with the \emph{head start} $k$.
\end{defn}
The head start of a standard contraction is usually implicit. In fact, it is always $0$ except in Lemma~\ref{isp:VF:fiberwise:contract}, and can be circumvented even there. This seemingly needless gadget only serves to make the above definition more streamlined: If $A \in \VF_*$ then the intermediate steps of a standard contraction of $A$ may or may not result in objects of $\VF_*$ and hence the definition cannot be formulated entirely within $\VF_*$.
\begin{rem}\label{special:dim:1:RV:iso}
In Lemma~\ref{simul:special:dim:1}, clearly $\rv(A^{\flat})$, $\rv(B^{\flat})$ are standard contractions of $A$, $B$. Indeed, if $A, B \in \VF_*$ then $[\rv(A^{\flat})]_{\leq 1} = [\rv(B^{\flat})]_{\leq 1}$.
\end{rem}
\begin{lem}\label{bijection:partitioned:unary}
There is a definable finite partition $A_i$ of $A$ such that each $f \upharpoonright A_i$ is a composition of relatively unary bijections.
\end{lem}
\begin{proof}
This is an easy consequence of weak $o$\nobreakdash-minimality. In more detail, for each $a \in \pr_{< n}(A)$ there are an $a$-definable finite partition $A_{ai}$ of $A_a$ and injective coordinate projections $\pi_i : f(A_{ai}) \longrightarrow \VF \times \mathds{R}V^{m'}$. Therefore, by compactness, there are a definable finite partition $A_{i}$ of $A$, definable injections $f_i : A_i \longrightarrow \VF^{n} \times \mathds{R}V^{m'}$, and $j_i \in [n]$ such that, for all $x \in A_i$,
\[
\pr_{< n}(x) = \pr_{< n}(f_i(x)) \quad \text{and} \quad \pr_{n \cup [m']}(f_i(x)) = \pr_{j_i \cup [m']}(f(x)).
\]
The claim now follows from compactness and an obvious induction on $n$.
\end{proof}
For the next two lemmas, let $12$ and $21$ denote the permutations of $[2]$.
\begin{lem}\label{2:unit:contracted}
Let $A \subseteq \VF^2$ be a definable $2$-cell. Then there are standard contractions $\hat T_{12}$, $\hat R_{21}$ of $A$ such that $[\hat T_{12}(A)]_{\leq 2} = [\hat R_{21}(A)]_{\leq 2}$.
\end{lem}
\begin{proof}
Let $\epsilon$ be the positioning function of $A$ and $t \in \mathds{R}V_0$ the paradigm of $A$. If $t = 0$ then $A$ is (the graph of) the function $\epsilon : \pr_1(A) \longrightarrow \pr_2(A)$, which is either a constant function or a bijection. In the former case, since $A$ is essentially just an open disc, the lemma simply follows from
Corollary~\ref{all:subsets:rvproduct}. In the latter case, there
are relatively special bijections $T_2$, $R_1$ on $A$ in the coordinates $2$, $1$ such that
\[
T_2(A) = \pr_1(A) \times 0 \times 0 \quad \text{and} \quad
R_1(A) = 0 \times \pr_2(A) \times 0.
\]
So the lemma follows from Remark~\ref{special:dim:1:RV:iso}. For the rest of the proof we assume $t \neq 0$.
If $\epsilon$ is not balanced in $A$ then $A = \pr_1(A) \times \pr_2(A)$ is an open polydisc. By Corollary~\ref{all:subsets:rvproduct}, there are special bijections $T_1$, $T_2$ on $\pr_1(A)$, $\pr_2(A)$ such that $\pr_1(A)^\flat$, $\pr_2(A)^\flat$ are $\mathds{R}V$-pullbacks. In this case the standard contractions determined by $(T_1, T_2)$ and $(T_2, T_1)$ are essentially the same.
Suppose that $\epsilon$ is balanced in $A$. Let $r$ be the other paradigm of $\epsilon$. Recall that $\epsilon : \pr_1(A) \longrightarrow \pr_2(A)$ is again a bijection. Let $T_2$ be the relatively special bijection on $A$ in the coordinate $2$ given by $(a, b) \longmapsto (a, b - \epsilon(a))$ and $R_1$ the relatively special bijection on $A$ in the coordinate $1$ given by $(a, b) \longmapsto (a - \epsilon^{-1}(b), b)$, where $(a, b) \in A$. Clearly
\[
T_2(A) = \pr_1(A) \times t^\sharp \times t \quad \text{and} \quad
R_1(A) = r^\sharp \times \pr_2(A) \times r.
\]
So, again, the lemma follows from Remark~\ref{special:dim:1:RV:iso}.
\end{proof}
\begin{lem}\label{subset:partitioned:2:unit:contracted}
Let $A \subseteq \VF^2 \times \mathds{R}V^m$ be an object in $\VF_*$. Then there are a definable injection $f : A \longrightarrow \VF^2 \times \mathds{R}V^l$, relatively unary in both coordinates, and standard contractions $\hat T_{12}$, $\hat R_{21}$ of $f(A)$ such that $[\hat T_{12}(f(A))]_{\leq 2} = [\hat R_{21}(f(A))]_{\leq 2}$.
\end{lem}
\begin{proof}
By Lemma~\ref{decom:into:2:units} and compactness, there is a definable function $f: A \longrightarrow \VF^2 \times \mathds{R}V^l$ such that $f(A)$ is a $2$-cell and, for each $(a, t) \in A$, $f(a, t) = (a, t, s)$ for some $s \in \mathds{R}V^{l-m}$. By Lemma~\ref{2:unit:contracted} and compactness, there are standard contractions $\hat T_{12}$, $\hat R_{21}$ of $f(A)$ into $\mathds{R}V^{k+l}$ such that the following diagram commutes
\[
\bfig
\Vtriangle(0,0)/->`->`->/<400,400>[\hat T_{12}(f(A))`\hat R_{21}(f(A))`\mathds{R}V^l; F`\pr_{> k}`\pr_{> k}]
\efig
\]
and $F$ is an $\mathds{R}V[{\leq} 2]$-morphism $\hat T_{12}(f(A))_{\leq 2} \longrightarrow \hat R_{21}(f(A))_{\leq 2}$.
\end{proof}
\subsection{Blowups and the main theorems}
The central notion for understanding the kernels of the semigroup homomorphisms $\bb L$ is that of a blowup:
\begin{defn}[Blowups]\label{defn:blowup:coa}
Let $\bm U = (U, f) \in \mathds{R}V[k]$, where $k > 0$, be such that, for some $j \leq k$, the restriction $\pr_{\tilde j} \upharpoonright f(U)$ is finite-to-one. Write $f = (f_1, \ldots, f_k)$. The \emph{elementary blowup} of $\bm U$ in the $j$th coordinate is the pair $\bm U^{\flat} = (U^{\flat}, f^{\flat})$, where $U^{\flat} = U \times \mathds{R}V^{\circ \circ}_0$ and, for every $(t, s) \in U^{\flat}$,
\[
f^{\flat}_{i}(t, s) = f_{i}(t) \text{ for } i \neq j \quad \text{and} \quad f^{\flat}_{j}(t, s) = s f_{j}(t).
\]
Note that $\bm U^{\flat}$ is an object in $\mathds{R}V[{\leq} k]$ (actually in $\mathds{R}V[k{-}1] \oplus \mathds{R}V[k]$) because $f^{\flat}_{j}(t, 0) = 0$.
Let $\bm V = (V, g) \in \mathds{R}V[k]$ and $C \subseteq V$ be a definable set. Suppose that $F : \bm U \longrightarrow \bm C$ is an $\mathds{R}V[k]$-morphism, where $\bm C = (C, g \upharpoonright C) \in \mathds{R}V[k]$. Then
\[
\bm U^{\flat} \uplus (V \smallsetminus C, g \upharpoonright (V \smallsetminus C))
\]
is a \emph{blowup of $\bm V$ via $F$}, denoted by $\bm V^{\flat}_F$. The subscript $F$ is usually dropped in context if there is no danger of confusion. The object $\bm C$ (or the set $C$) is referred to as the \emph{locus} of $\bm V^{\flat}_F$.
A \emph{blowup of length $n$} is a composition of $n$ blowups.
\end{defn}
\begin{rem}
In an elementary blowup, the condition that the coordinate of interest is definably dependent (the coordinate projection is finite-to-one) on the other ones is needed so that the resulting objects stay in $\mathds{R}V[{\leq} k]$. In the setting of \cite{hrushovski:kazhdan:integration:vf}, this condition is also needed for matching blowups with special bijections, since, otherwise, we would not be able to use (a generalization of) Hensel's lemma to find enough centers of $\mathds{R}V$-discs to construct focus maps. In our setting, the role of Hensel's lemma is played by Lemma~\ref{RVlift}, which is more powerful, and hence ``algebraicity'' is no longer needed for this purpose (see Lemma~\ref{blowup:same:RV:coa}).
\end{rem}
If there is an elementary blowup of $(U, f) \in \mathds{R}V[k]$ then, \textit{a posteriori}, $\dim_{\mathds{R}V}(f(U)) < k$. Also, there is at most one elementary blowup of $(U, f)$ with respect to any given coordinate of $f(U)$. Strictly speaking, the coordinate that is blown up should be included as part of the data; however, in context, either it is clear or it does not need to be spelled out, and we shall suppress mentioning it below for notational ease.
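The simplest case may help to fix the definition (the following computation is only illustrative). Take $k = 1$, let $t \in \mathds{R}V$, and let $\bm U = (\{t\}, \operatorname{id})$; here $f(U)$ is a singleton, so the finite-to-one requirement is trivially met. The elementary blowup is $\bm U^{\flat} = (\{t\} \times \mathds{R}V^{\circ \circ}_0, f^{\flat})$ with $f^{\flat}(t, s) = st$, and the $\VF$-part of $\bb L \bm U^{\flat}$ is
\[
\textstyle \bigcup_{s \in \mathds{R}V^{\circ \circ}_0} (st)^\sharp = \mathfrak{o}(0, \abs{\vrv(t)}),
\]
an open disc of the same radius as the $\mathds{R}V$-disc $t^\sharp$, which is the $\VF$-part of $\bb L \bm U$; translating by any point $c \in t^\sharp$ identifies the two, which is the shape of the argument in Lemma~\ref{blowup:same:RV:coa}.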
\begin{lem}\label{blowup:equi:class:coa}
Let $\bm U, \bm V \in \mathds{R}V[{\leq} k]$ be such that $[\bm U] = [\bm V]$ in $\mathfrak{s}k \mathds{R}V[{\leq} k]$. Let $\bm U_1$, $\bm V_1$ be blowups of $\bm U$, $\bm V$ of lengths $m$, $n$, respectively. Then there are blowups $\bm U_2$, $\bm V_2$ of $\bm U_1$, $\bm V_1$ of lengths $n$, $m$, respectively, such that $[\bm U_2] = [\bm V_2]$.
\end{lem}
\begin{proof}
Fix an isomorphism $I: \bm U \longrightarrow \bm V$. We do induction on the sum $l = m + n$. For the base case $l = 1$, without loss of generality, we may assume $n = 0$. Let $C$ be the blowup locus of $\bm U_1$. Clearly $\bm V$ may be blown up by using the same elementary blowup as $\bm U_1$, where the blowup locus is changed to $I(C)$, and the resulting blowup is as required.
\[
\bfig
\square(0,0)/.`=``./<500,900>[\bm U`\bm U^{\flat}`
\bm V`\bm V^{\flat};1```1]
\square(500,0)/.```./<1000,900>[\bm U^{\flat}`\bm U_1`
\bm V^{\flat}`\bm V_1; m - 1```n - 1]
\morphism(500,900)/./<500,-300>[\bm U^{\flat}`\bm U^{\flat\flat};1]
\morphism(500,0)/./<500,300>[\bm V^{\flat}`
\bm V^{\flat\flat};1]
\morphism(1000,300)/=/<0,300>[\bm V^{\flat\flat}`
\bm U^{\flat\flat};]
\morphism(1500,900)/./<500,0>[\bm U_1`\bm U_1^{\flat};1]
\morphism(1500,0)/./<500,0>[\bm V_1`\bm V_1^{\flat};1]
\morphism(1000,600)/./<1000,0>[\bm U^{\flat\flat}`\bm U^{\flat3};
m - 1]
\morphism(1000,300)/./<1000,0>[\bm V^{\flat\flat}`\bm V^{\flat3};
n - 1]
\morphism(2000,900)/=/<0,-300>[\bm U_1^{\flat}`\bm U^{\flat3};]
\morphism(2000,0)/=/<0,300>[\bm V_1^{\flat}`\bm V^{\flat3};]
\morphism(2000,600)/./<1000,0>[\bm U^{\flat3}`\bm U^{\flat4};n - 1]
\morphism(2000,300)/./<1000,0>[\bm V^{\flat3}`\bm V^{\flat4};m - 1]
\morphism(3000,300)/=/<0,300>[\bm V^{\flat4}`\bm U^{\flat4};]
\morphism(2000,900)/./<1000,0>[\bm U_1^{\flat}`\bm U_2;n - 1]
\morphism(2000,0)/./<1000,0>[\bm V_1^{\flat}`\bm V_2;m - 1]
\morphism(3000,0)/=/<0,300>[\bm V_2`\bm V^{\flat4};]
\morphism(3000,900)/=/<0,-300>[\bm U_2`\bm U^{\flat4};]
\efig
\]
We proceed to the inductive step; the construction of the isomorphic blowups is illustrated in the diagram above. Write $\bm U = (U, f)$ and $\bm V = (V, g)$. Let $\bm U^{\flat}$, $\bm V^{\flat}$ be the first blowups in $\bm U_1$, $\bm V_1$ and $C$, $D$ their blowup loci, respectively. Let $\bm U'^{\flat}$, $\bm V'^{\flat}$ be the corresponding elementary blowups contained in $\bm U^{\flat}$, $\bm V^{\flat}$. If, say, $n = 0$, then by the argument in the base case $\bm V$ may be blown up to an object that is isomorphic to $\bm U^{\flat}$ and hence the inductive
hypothesis may be applied. So assume $m,n > 0$. Let $A = C \cap I^{-1}(D)$ and $B = I(C) \cap D$. Since $(A, f \upharpoonright A)$ and $(B, g \upharpoonright B)$ are isomorphic, the
blowups of $\bm U'$, $\bm V'$ with the loci $(A, f \upharpoonright A)$ and $(B, g \upharpoonright B)$ are isomorphic. Then, it is not hard to see that the blowup $\bm U^{\flat\flat}$ of $\bm U^{\flat}$ using the locus $I^{-1}(D) \smallsetminus C$ and its corresponding blowup of $\bm V'$ and the blowup $\bm V^{\flat\flat}$ of $\bm V^{\flat}$ using the locus $I(C) \smallsetminus D$ and its corresponding blowup of $\bm U'$ are isomorphic.
Applying the inductive hypothesis to the blowups $\bm U^{\flat\flat}$, $\bm U_1$ of $\bm U^{\flat}$, we obtain a blowup $\bm U^{\flat3}$ of $\bm U^{\flat\flat}$ of length $m - 1$ and a blowup $\bm U_1^{\flat}$ of $\bm U_1$ of length $1$ such that they are isomorphic. Similarly, we obtain a blowup $\bm V^{\flat3}$ of $\bm V^{\flat\flat}$ of length $n - 1$ and a blowup $\bm V_1^{\flat}$ of $\bm V_1$ of length $1$ such that they are isomorphic. Applying the inductive hypothesis again to the blowups $\bm U^{\flat3}$, $\bm V^{\flat3}$ of $\bm U^{\flat\flat}$, $\bm V^{\flat\flat}$, we obtain a blowup $\bm U^{\flat4}$ of $\bm U^{\flat3}$ of length $n - 1$ and a blowup $\bm V^{\flat4}$ of $\bm V^{\flat3}$ of length $m - 1$ such that they are isomorphic. Finally, applying the inductive hypothesis to the blowups
$\bm U^{\flat4}$, $\bm U_1^{\flat}$ of $\bm U^{\flat3}$, $\bm U_1^{\flat}$ and the blowups $\bm V^{\flat4}$, $\bm V_1^{\flat}$ of $\bm V^{\flat3}$, $\bm V_1^{\flat}$, we obtain a blowup $\bm U_2$ of $\bm U_1^{\flat}$ of length $n - 1$ and a blowup $\bm V_2$ of $\bm V_1^{\flat}$ of length $m - 1$ such that $\bm U^{\flat4}$, $\bm U_2$, $\bm V^{\flat4}$, and $\bm V_2$ are all isomorphic. So $\bm U_2$, $\bm V_2$ are as desired.
\end{proof}
\begin{cor}\label{blowup:equi:class}
Let $[\bm U] = [\bm U']$ and $[\bm V] = [\bm V']$ in $\mathfrak{s}k \mathds{R}V[{\leq} k]$. If there are isomorphic blowups of $\bm U$, $\bm V$ then there are isomorphic blowups of $\bm U'$, $\bm V'$.
\end{cor}
\begin{defn}\label{defn:isp}
Let $\isp[k]$ be the set of pairs $(\bm U, \bm V)$ of objects of $\mathds{R}V[{\leq} k]$ such that there exist isomorphic blowups $\bm U^{\flat}$, $\bm V^{\flat}$. Set $\isp[*] = \bigcup_{k} \isp[k]$.
\end{defn}
We will just write $\isp$ for all these sets when there is no danger of confusion. By Corollary~\ref{blowup:equi:class}, $\isp$ may be regarded as a binary relation on isomorphism classes.
\begin{lem}\label{isp:congruence:vol}
$\isp[k]$ is a semigroup congruence relation and $\isp[*]$ is a semiring congruence relation.
\end{lem}
\begin{proof}
Clearly $\isp[k]$ is reflexive and symmetric. If $([\bm U_1], [\bm U_2])$, $([\bm U_2], [\bm U_3])$ are in
$\isp[k]$ then, by Lemma~\ref{blowup:equi:class:coa}, there are blowups $\bm U_1^{\flat}$ of $\bm U_1$, $\bm U_{2}^{\flat 1}$ and $\bm U_{2}^{\flat 2}$ of $\bm U_2$, and
$\bm U_3^{\flat}$ of $\bm U_3$ such that they are all isomorphic. So $\isp[k]$ is transitive and hence is an
equivalence relation. For any $[\bm W] \in \mathfrak{s}k \mathds{R}V[l]$, the following are
easily checked:
\[
([\bm U_1 \uplus \bm W], [\bm U_2 \uplus \bm W])\in \isp,\quad
([\bm U_1 \times \bm W], [\bm U_2 \times \bm W])\in \isp.
\]
These yield the desired congruence relations.
\end{proof}
Let $\bm U = (U, f)$ be an object of $\mathds{R}V[k]$ and $T$ a special bijection on $\bb L \bm U$. The set $(T(\bb L \bm U))_{\mathds{R}V}$ is simply denoted by $U_{T}$ and the object $(U_{T})_{\leq k} \in \mathds{R}V[{\leq} k]$ by $\bm U_{T}$.
\begin{lem}\label{special:to:blowup:coa}
The object $\bm U_T$ is isomorphic to a blowup of $\bm U$ of the same length as $T$.
\end{lem}
\begin{proof}
By induction on the length $\lh (T)$ of $T$ and Lemma~\ref{blowup:equi:class:coa}, this is immediately reduced to the case $\lh (T) = 1$. For that case, let $\lambda$ be the focus map of $T$. Without loss of generality, we may assume that the locus of $\lambda$ is $\bb L \bm U$. Then it is clear how to construct an (elementary) blowup of $\bm U$ as desired.
\end{proof}
\begin{lem}\label{kernel:dim:1:coa}
Suppose that $[A] = [B]$ in $\mathfrak{s}k \VF[1]$ and $\bm U, \bm V \in \mathds{R}V[{\leq} 1]$ are two standard contractions of $A$, $B$, respectively. Then $([\bm U], [\bm V]) \in \isp$.
\end{lem}
\begin{proof}
By Lemma~\ref{simul:special:dim:1}, there are special bijections $T$, $R$ on $\bb L \bm U$, $\bb L \bm V$ such that $\bm U_{T}$, $\bm V_{R}$ are isomorphic. So the assertion follows from Lemma~\ref{special:to:blowup:coa}.
\end{proof}
\begin{lem}\label{blowup:same:RV:coa}
Let $\bm U^{\flat}$ be a blowup of $\bm U = (U, f) \in \mathds{R}V[{\leq} k]$ of length $l$. Then $\bb L \bm U^{\flat}$ is isomorphic to $\bb L \bm U$.
\end{lem}
\begin{proof}
By induction on $l$ this is immediately reduced to the case $l=1$. For that case, without loss of generality, we may assume that $\pr_{\tilde 1} \upharpoonright f(U)$ is injective and $\bm U^{\flat}$ is an elementary blowup in the first coordinate. So it is enough to show that there is a focus map into the first coordinate with locus $f(U)^\sharp$. This is guaranteed by Hypothesis~\ref{hyp:point}.
\end{proof}
\begin{lem}\label{isp:VF:fiberwise:contract}
Let $A'$, $A''$ be definable sets with $A'_{\VF} = A''_{\VF} \eqqcolon A \subseteq \VF^n$. Suppose that there is a $k \in \mathds{N}$ such that, for every $a \in A$, $([A'_a]_{\leq k}, [A''_a]_{\leq k}) \in \isp$. Let $\hat T_{\sigma}$, $\hat R_{\sigma}$ be respectively standard contractions of $A'$, $A''$. Then
\[
([\hat T_{\sigma}(A')]_{\leq n+k}, [\hat R_{\sigma}(A'')]_{\leq n+k}) \in \isp.
\]
\end{lem}
Note that the condition $([A'_a]_{\leq k}, [A''_a]_{\leq k}) \in \isp$ makes sense only over the substructure $\mdl S \langle a \rangle$.
\begin{proof}
By induction on $n$ this is immediately reduced to the case $n=1$. So assume $A \subseteq \VF$. Let $\phi'$, $\phi''$ be quantifier-free formulas that define $A'$, $A''$, respectively. Let $\theta$ be a quantifier-free formula such that, for every $a \in A$, $\theta(a)$ defines the necessary data (two blowups and an $\mathds{R}V[*]$-morphism) that witness the condition $([A'_a]_{\leq k}, [A''_a]_{\leq k}) \in \isp$. Applying Corollary~\ref{special:bi:term:constant} to the top \LT-terms of $\phi'$, $\phi''$, and $\theta$, we obtain a special bijection $F: A \longrightarrow A^{\flat}$ such that $A^{\flat}$ is an $\mathds{R}V$-pullback and, for all $\mathds{R}V$-polydiscs $\mathfrak{p} \subseteq A^{\flat}$ and all $a_1, a_2 \in F^{-1}(\mathfrak{p})$,
\begin{itemize}
\item $A'_{a_1} = A'_{a_2}$ and $A''_{a_1} = A''_{a_2}$,
\item $\theta(a_1)$ and $\theta(a_2)$ define the same data.
\end{itemize}
The second item implies that the data defined by $\theta$ over $F^{-1}(\mathfrak{p})$ is actually $\rv(\mathfrak{p})$-definable.
Let $B' = \bigcup_{a \in A} F(a) \times A'_a$, and similarly for $B''$. Note that $B'$, $B''$ are obtained through special bijections on $A'$, $A''$. For all $t \in A'_{\mathds{R}V}$, $B'_t$ is an $\mathds{R}V$-pullback that is $t$-definably bijective to the $\mathds{R}V$-pullback $T_{\sigma}(A')_t$. By Lemma~\ref{kernel:dim:1:coa}, we have, for all $t \in A'_{\mathds{R}V}$,
\[
([(B'_{\mathds{R}V})_t]_1, [\hat T_{\sigma}(A')_t]_1) \in \isp
\]
and hence, by compactness,
\[
([B'_{\mathds{R}V}]_{\leq k+1}, [\hat T_{\sigma}(A')]_{\leq k+1}) \in \isp.
\]
The same holds for $B''$ and $\hat R_{\sigma}(A'')$. On the other hand, by the second item above, for every $\mathds{R}V$-polydisc $\mathfrak{p} \subseteq A^{\flat}$, we have $((B'_{\mathds{R}V})_{\rv(\mathfrak{p})}, (B''_{\mathds{R}V})_{\rv(\mathfrak{p})}) \in \isp$
and hence, by compactness,
\[
([B'_{\mathds{R}V}]_{\leq k+1}, [B''_{\mathds{R}V}]_{\leq k+1}) \in \isp.
\]
Since $\isp$ is a congruence relation, the lemma follows.
\end{proof}
\begin{cor}\label{contraction:same:perm:isp}
Let $A', A'' \in \VF_*$ with exactly $n$ $\VF$-coordinates each and $f : A' \longrightarrow A''$ be a relatively unary bijection in the $i$th coordinate. Then for any permutation $\sigma$ of $[n]$ with $\sigma(1) = i$ and any standard contractions $\hat T_{\sigma}$, $\hat R_{\sigma}$ of $A'$, $A''$,
\[
([\hat T_{\sigma}(A')]_{\leq n}, [\hat R_{\sigma}(A'')]_{\leq n}) \in \isp.
\]
\end{cor}
\begin{proof}
This is immediate by Lemmas~\ref{kernel:dim:1:coa} and \ref{isp:VF:fiberwise:contract}.
\end{proof}
The following lemma is essentially a version of Fubini's theorem (also see Theorem~\ref{semi:fubini} below).
\begin{lem}\label{contraction:perm:pair:isp}
Let $A \in \VF_*$ with exactly $n$ $\VF$-coordinates. Suppose that $i, j \in [n]$ are distinct and $\sigma_1$, $\sigma_2$ are permutations of $[n]$ such that
\[
\sigma_1(1) = \sigma_2(2) = i, \quad \sigma_1(2) = \sigma_2(1) = j, \quad \sigma_1
\upharpoonright \set{3, \ldots, n} = \sigma_2 \upharpoonright \set{3, \ldots, n}.
\]
Then, for any standard contractions $\hat T_{\sigma_1}$, $\hat T_{\sigma_2}$ of $A$,
\[
([\hat T_{\sigma_1}(A)]_{\leq n}, [\hat T_{\sigma_2}(A)]_{\leq n}) \in \isp.
\]
\end{lem}
\begin{proof}
Let $ij$, $ji$ denote the permutations of $E \coloneqq \{i, j\}$. By compactness and
Lemma~\ref{isp:VF:fiberwise:contract}, it is enough to show
that, for any $a \in \operatorname{pr}_{\tilde E}(A)$ and any standard
contractions $\hat T_{ij}$, $\hat T_{ji}$ of $A_a$,
\[
([\hat T_{ij}(A_a)]_{\leq 2}, [\hat T_{ji}(A_a)]_{\leq 2})
\in \isp.
\]
To that end, fix an $a \in \operatorname{pr}_{\tilde E}(A)$. By Lemma~\ref{subset:partitioned:2:unit:contracted}, there are a definable bijection $f$ on $A_a$ that is relatively unary in both $\VF$-coordinates and standard contractions $\hat R_{ij}$, $\hat R_{ji}$ of $f(A_a)$ such that
\[
[\hat R_{ij}(f(A_a))]_{\leq 2} = [\hat R_{ji}(f(A_a))]_{\leq 2}.
\]
So the desired property follows from Corollary~\ref{contraction:same:perm:isp}.
\end{proof}
The following proposition is the culmination of the preceding technicalities; it identifies the
congruence relation $\isp$ with that induced by
$\bb L$.
\begin{prop}\label{kernel:L}
For $\bm U, \bm V \in \mathds{R}V[{\leq} k]$,
\[
[\bb L \bm U] = [\bb L \bm V] \quad \text{if and only if} \quad ([\bm U], [\bm V]) \in \isp.
\]
\end{prop}
\begin{proof}
The ``if'' direction simply follows from Lemma~\ref{blowup:same:RV:coa} and
Proposition~\ref{L:sur:c}.
For the ``only if'' direction, we show a stronger claim: if $[A] = [B]$ in $\mathfrak{s}k \VF_*$ and $\bm U, \bm V \in \mathds{R}V[{\leq} k]$ are two standard contractions of $A$, $B$ then $([\bm U], [\bm V]) \in \isp$. We do induction on $k$. The base case $k = 1$ is of course Lemma~\ref{kernel:dim:1:coa}.
For the inductive step, suppose that $F : \bb L \bm U \longrightarrow \bb L \bm V$ is a definable bijection. By Lemma~\ref{bijection:partitioned:unary}, there is a partition of $\bb L \bm U$ into definable sets $A_1, \ldots, A_n$ such that each restriction $F_i = F \upharpoonright A_i$ is a composition of relatively unary bijections. Applying Corollary~\ref{special:bi:term:constant} as before, we obtain two special bijections
$T$, $R$ on $\bb L \bm U$, $\bb L \bm V$ such that $T(A_i)$, $(R \circ F)(A_i)$ is an $\mathds{R}V$-pullback for each $i$. By Lemma~\ref{special:to:blowup:coa}, it is enough to show that, for each $i$, there are standard contractions $\hat T_{\sigma}$, $\hat R_{\tau}$ of $T(A_i)$, $(R \circ F)(A_i)$ such that
\[
([(\hat T_{\sigma} \circ T)(A_i)]_{\leq k}, [(\hat R_{\tau} \circ R \circ F)(A_i)]_{\leq k}) \in \isp.
\]
To that end, first note that each $(R \circ F \circ T^{-1}) \upharpoonright T(A_i)$ is a composition of relatively unary bijections, say
\[
T(A_i) = B_1 \xrightarrow{G_1} B_2 \xrightarrow{G_2} \cdots \xrightarrow{G_{l-1}} B_l \xrightarrow{G_l} B_{l+1} = (R \circ F)(A_i).
\]
For each $j \leq l - 2$, we can choose five standard contractions
\[
[U_j]_{\leq k}, \quad [U_{j+1}]_{\leq k}, \quad [U'_{j+1}]_{\leq k}, \quad [U''_{j+1}]_{\leq k}, \quad [U_{j+2}]_{\leq k}
\]
of $B_j$, $B_{j+1}$, $B_{j+1}$, $B_{j+1}$, $B_{j+2}$ with the permutations $\sigma_{j}$, $\sigma_{j+1}$, $\sigma'_{j+1}$, $\sigma''_{j+1}$, $\sigma_{j+2}$ of $[k]$, respectively, such that
\begin{itemize}
\item $\sigma_{j+1}(1)$ and $\sigma_{j+1}(2)$ are the $\VF$-coordinates targeted by $G_{j}$ and $G_{j+1}$, respectively,
\item $\sigma''_{j+1}(1)$ and $\sigma''_{j+1}(2)$ are the $\VF$-coordinates targeted by $G_{j+1}$ and $G_{j+2}$, respectively,
\item $\sigma_{j} = \sigma_{j+1}$, $\sigma''_{j+1} = \sigma_{j+2}$, and $\sigma'_{j+1}(1) = \sigma''_{j+1}(1)$,
\item the relation between $\sigma_{j+1}$ and $\sigma'_{j+1}$ is as described in Lemma~\ref{contraction:perm:pair:isp}.
\end{itemize}
By Corollary~\ref{contraction:same:perm:isp} and Lemma~\ref{contraction:perm:pair:isp}, all the adjacent pairs of these standard contractions are $\isp$-congruent, except $([U'_{j+1}]_{\leq k}, [U''_{j+1}]_{\leq k})$. Since we can choose $[U'_{j+1}]_{\leq k}$, $[U''_{j+1}]_{\leq k}$ so that they start with the same contraction in the first targeted $\VF$-coordinate of $B_{j+1}$, the resulting sets from this step are the same. So, applying the inductive hypothesis in each fiber over the just contracted coordinate, we see that this last pair is also $\isp$-congruent. This completes the ``only if'' direction.
\end{proof}
This proposition shows that the semiring congruence relation on $\mathfrak{s}k \mathds{R}V[*]$ induced by $\bb L$ is generated by the pair $([1], \bm 1_{\K} + [(\mathds{R}V^{\circ \circ}, \id)])$ and hence its corresponding ideal in the graded ring $\ggk \mathds{R}V[*]$ is generated by the element $\bm 1_{\K} + [\bm P]$ (see Notation~\ref{nota:RV:short} and Remark~\ref{gam:res}).
\begin{thm}\label{main:prop}
For each $k \geq 0$ there is a canonical isomorphism of Grothendieck semigroups
\[
\textstyle \int_{+} : \mathfrak{s}k \VF[k] \longrightarrow \mathfrak{s}k \mathds{R}V[{\leq} k] / \isp
\]
such that
\[
\textstyle \int_{+} [A] = [\bm U]/ \isp \quad \text{if and only if} \quad [A] = [\bb L\bm U].
\]
Putting these together, we obtain a canonical isomorphism of Grothendieck semirings
\[
\textstyle \int_{+} : \mathfrak{s}k \VF_* \longrightarrow \mathfrak{s}k \mathds{R}V[*] / \isp.
\]
\end{thm}
\begin{proof}
This is immediate by Corollary~\ref{L:sur:c} and Proposition~\ref{kernel:L}.
\end{proof}
\begin{thm}\label{thm:ring}
The Grothendieck semiring isomorphism $\int_+$ naturally induces a ring isomorphism:
\[
\textstyle \Xint{\textup{G}} : \ggk \VF_* \to \ggk \mathds{R}V[*] / (\bm 1_{\K} + [\bm P]) \xrightarrow{\bb E_{\Gamma}} \mathds{Z}^{(2)}[X],
\]
and two ring homomorphisms onto $\mathds{Z}$:
\[
\textstyle \Xint{\textup{R}}^g, \Xint{\textup{R}}^b: \ggk \VF_* \to \ggk \mathds{R}V[*] / (\bm 1_{\K} + [\bm P]) \two^{\bb E_{\Gamma, g}}_{\bb E_{\Gamma, b}} \mathds{Z}.
\]
\end{thm}
\begin{proof}
This is just a combination of Theorem~\ref{main:prop} and Remark~\ref{rem:poin} (or Proposition~\ref{prop:eu:retr:k}).
\end{proof}
Let $F$ be a definable set with $A \coloneqq F_{\VF} \subseteq \VF^n$. Then $F$ may be viewed as a representative of a \emph{definable} function $\bm F : A \longrightarrow \mathfrak{s}k \mathds{R}V[*] / \isp$ given by $a \longmapsto [F_a] / \isp$. Note that the class $[F_a]$ depends on the parameter $a$ and hence can only be guaranteed to lie in the semiring $\mathfrak{s}k \mathds{R}V[*]$ constructed over $\mdl S \langle a \rangle$ instead of $\mdl S$, but we abuse the notation. Similarly, for distinct $a, a' \in A$, there is a priori no way to compare $[F_a]$ and $[F_{a'}]$ unless we work over the substructure $\mdl S \langle a, a' \rangle$; given another definable set $G$ with $A = G_{\VF}$, the corresponding definable function $\bm G$ is the same as $\bm F$ if $\bm G(a) = \bm F(a)$ over $\mdl S \langle a \rangle$ for all $a \in A$. The set of all such functions is denoted by $\fn_+(A)$, which is a semimodule over $\mathfrak{s}k \mathds{R}V[*] / \isp$. Let $E \subseteq [n]$ be a nonempty set. Then, for each $a \in \operatorname{pr}_{E}(A)$, the definable function in $\fn_+(A_a)$ represented by $F_a$ is denoted by $\bm F_a$.
Let $\bb L F = \bigcup_{a \in A} a \times F_a^\sharp$ and then set $\int_{+A} \bm F = \int_+ [\bb L F]$,
which, by Proposition~\ref{kernel:L} and compactness, does not depend on the representative $F$. Thus there is a canonical homomorphism of semimodules:
\[
\textstyle \int_{+A} : \fn_+(A) \longrightarrow \mathfrak{s}k \mathds{R}V[*] / \isp.
\]
\begin{thm}\label{semi:fubini}
For all $\bm F \in \fn_+(A)$ and all nonempty sets $E, E' \subseteq [n]$,
\[
\textstyle \int_{+ a \in \operatorname{pr}_{E}(A)} \int_{+ A_a} \bm F_a = \int_{+ a \in \operatorname{pr}_{E'}(A)} \int_{+ A_a} \bm F_a.
\]
\end{thm}
\begin{proof}
This is clear since both sides equal $\int_{+A} \bm F$.
\end{proof}
\end{document} |
\begin{document}
\title{Halfcanonical Gorenstein curves of codimension four}
\begin{abstract}
Recent work of Schenck, Stillman and Yuan \cite{schenck2020calabiyau} outlines all possible Betti tables for Artin Gorenstein algebras $A$ with $\operatorname{reg}(A) = \operatorname{codim}(A) = 4$. We populate the second half of this list with examples of stable curves, and ask whether there are further possible constructions. The problem of deforming between curves with the same Hilbert series but different Betti tables is ongoing work, but we settle one case here: a deformation (due to Jan Stevens) between a reducible curve corresponding to Betti table type \hyperref[2.7]{2.7} in \cite{schenck2020calabiyau} and the curve obtained as the intersection of a del Pezzo surface of degree 5 and a cubic hypersurface.
\end{abstract}
\section{Introduction}
Gorenstein rings are a frequently seen subset of Cohen--Macaulay rings, first introduced by Grothendieck in a 1961 seminar. Since Buchsbaum and Eisenbud's 1977~\cite{10.2307/2373926} structure theorem on Gorenstein rings of codimension 3, much work has been done on the case of Gorenstein rings of codimension 4, including Reid's general structure theorem~\cite{reid2015gorenstein}.
Recent work on Gorenstein rings involves the study of Gorenstein Calabi--Yau 3-folds, hereon referred to as GoCY 3-folds. Calabi--Yau 3-folds play a vital role in the study of string theory \cite{candelas1985vacuum}, \cite{candelas1991pair}. Following work of Coughlan, Golebiowski, Kapustka and Kapustka~\cite{coughlan2016arithmetically} which presented a list of nonsingular GoCY 3-folds, Schenck, Stillman and Yuan~\cite{schenck2020calabiyau} outlined all possible Betti tables for Artin Gorenstein algebras with Castelnuovo--Mumford regularity 4 and codimension 4. More recently, Kapustka, Kapustka, Ranestad, Schenck, Stillman and Yuan \cite{kapustka2021quaternary} exhibit liftings to GoCY 3-folds corresponding to types of nondegenerate quartic. Other recent work on GoCY 3-folds can be seen in \cite{brown2017polarized}, \cite{brown2019gorenstein}. The possible Betti tables in \cite{schenck2020calabiyau} are split into two sections: eight Betti tables corresponding to the 11 GoCY 3-folds outlined in~\cite{coughlan2016arithmetically}, and eight which cannot correspond to a nonsingular GoCY 3-fold. \\
Our results follow on from~\cite{schenck2020calabiyau}, and focus on those Betti tables which cannot correspond to nonsingular GoCY 3-folds. Our initial goal is to populate the list with concrete examples of stable curves in $\mathbb{P}^5$ corresponding to the given Betti tables. From the restrictions on Castelnuovo--Mumford regularity such curves are halfcanonical, meaning $\omega_C=\mathcal{O}_C(2A)$, where $A$ is the hyperplane class. Our results are summarised in Table~\ref{tab:table11}. MAGMA code for all eight types can be found at \newline $\hspace*{5mm}$ \url{https://sites.google.com/view/patience-ablett/msc-project}. \newline Note that type 2.4 is described in \cite{schenck2020calabiyau}. We then seek to answer the question of whether these curves are the only possible constructions, and whether we can construct flat deformations between curves in the same Hilbert scheme. Partial results have been achieved here. For type \hyperref[2.7]{2.7} we outline a flat deformation to a curve given by a del Pezzo surface of degree five intersecting a cubic hypersurface. \\
Our method to construct curves relies on first identifying the possible quadric generators in $I_C$. For types \hyperref[2.1]{2.1} and \hyperref[2.2]{2.2} these quadrics are necessarily of the stated form. For types \hyperref[2.5]{2.5} and \hyperref[2.6]{2.6} we can show that we have identified all possible quadrics in the case that they define a Koszul algebra, but a question remains of whether there is a possible set of non-Koszul quadratic generators. Our construction techniques use ideas from liaison theory, which began with work of Peskine and Szpir\'o \cite{peskine1974liaison}. In particular the following result is used:
\begin{theorem}[\cite{migliore2002liaison} page 77]\label{liaison}
Let $X_1$, $X_2 \subset \mathbb{P}^n$ be projectively Cohen--Macaulay subschemes of codimension $r$. Then if $X=X_1 \cup X_2$ is Gorenstein it follows that $X_1 \cap X_2$ is also Gorenstein, and of codimension $r+1$.
\end{theorem}
In the situation of the above theorem, we say that $X_1$ and $X_2$ are geo\-metrically G-linked by $X$, and that $X_1$ is residual to $X_2$ in $X$. Suppose more generally that $X_1 \cup X_2 \subset X$. Then if $(I_X:I_{X_1})=I_{X_2}$ and $(I_X:I_{X_2})=I_{X_1}$ we say that $X_1$ and $X_2$ are algebraically G-linked by $X$, and again $X_1$ is residual to $X_2$ in $X$~\cite[pages 62--64]{migliore2002liaison}. \\
Our constructions also rely on the Tom and Jerry formats as seen in \cite{brown2012fano}, \cite{brown2018tutorial} and \cite{papadakis2001gorenstein}. In this paper, for a given ideal $I$ we use $\text{Tom}_i$ to refer to a skew-symmetric matrix with $a_{kl} \in I$ for $k,l \neq i$ and other elements general. Similarly, we define $\text{Tom}_{ij}$ to have $a_{kl} \in I$ for $k,l \notin \{i,j\}$ and other elements general. On the other hand, $\text{Jer}_{ij}$ refers to a skew matrix with $a_{kl} \in I$ for $k$ or $l \in \{i,j\}$ and other elements general. \\
For simplicity we focus mostly on simpler cases of stable curves where a curve $C$ is given by $C_1 \cup C_2$ with $C_1$, $C_2$ nonsingular and irreducible, meeting transversally in $d$ points. Note that the inclusion $i\colon C_1 \rightarrow C$ is a finite morphism. We may therefore use the following proposition:
\begin{proposition}[\cite{MR0463157} Ch. III, Ex. 7.2]
Let $\pi\colon Y \rightarrow X$ be a finite morphism of projective schemes, with $\dim Y=\dim X$. Then $\omega_Y=\textup{Hom}_{\mathcal{O}_X}(\pi_*\mathcal{O}_Y,\omega_X)$.
\end{proposition}
It follows that in the case of our inclusion, we have $\omega_{C_1}=\text{Hom}_{\mathcal{O}_C}(\mathcal{O}_{C_1},\omega_C)$. Moreover, since $C_1$ is assumed to be nonsingular, it is normal and $\omega_{C_1}=\mathcal{O}_{C_1}(K_{C_1})$. Since $\omega_C=\mathcal{O}_C(2A)$, we are considering $\mathcal{O}_C$-module homomorphisms from $\mathcal{O}_{C_1}$ to $\mathcal{O}_C(2A)$. Such a homomorphism is determined by the image of $1$. Indeed, $1$ can be mapped to any element of $\mathcal{O}_C(2A)$ annihilated by $I_{C_2}$. Therefore $\omega_{C_1}=\mathcal{O}_{C_1}(2A_1-D)$, where $A_1$ is the hyperplane class on $C_1$ and $D$ is the locus of double points. \\
This paper is part of a work in progress with Miles Reid, Jan Stevens and Stephen Coughlan. In particular we hope to publish further work on the question of deformations. Table \ref{tab:table12} outlines which curves lie in the same Hilbert scheme and which we would therefore hope to construct deformations between.
\renewcommand{\arraystretch}{1.3}
\begin{table}
\begin{center}
\begin{tabular}[c]{|l|l|l|}
\hline
\multicolumn{3}{| c |}{Classifying curves by genus and degree}\\
\hline
\rule{0pt}{35pt}\pbox{2.5cm}{Degree\\} & \pbox{2.5cm}{Genus\\} & \pbox{2.5cm}{Corresponding \\ Betti table\\}\\
\hline
14 & 15 & CGKK 1 \\
15 & 16 & CGKK 2, SSY 2.7, SSY 2.8 \\
16 & 17 & CGKK 3, SSY 2.3, SSY 2.4, SSY 2.6 \\
17 & 18 & CGKK 4, CGKK 5, CGKK 6, SSY 2.2, SSY 2.5 \\
18 & 19 & CGKK 7, CGKK 8, SSY 2.1 \\
19 & 20 & CGKK 9, CGKK 10 \\
20 & 21 & CGKK 11 \\
\hline
\end{tabular}
\end{center}
\caption{A summary of which Hilbert scheme every curve lies in.}
\label{tab:table12}
\end{table}
\section{Examples of stable curves}\label{results}
In this section we present a series of stable curves with free resolutions corresponding to the Betti tables of type 2 in~\cite{schenck2020calabiyau}. We begin by outlining two possible constructions for type \hyperref[2.6]{2.6}, which use techniques from liaison theory and Brown and Reid's Tom and Jerry format. We then outline \hyperref[2.7]{2.7} and describe the flat deformation from type \hyperref[2.7]{2.7} to CGKK 2 \cite{coughlan2016arithmetically}, which lies in the same Hilbert scheme. We finally present type \hyperref[2.3]{2.3}, since this is a somewhat different case which uses rational scrolls. Other constructions are similar to types \hyperref[2.6]{2.6} and \hyperref[2.7]{2.7} and we therefore relegate them to an appendix.
\renewcommand{\arraystretch}{1.3}
\begin{table}
\begin{tabular}[c]{|c||c|c|c|c|}
\hline
\multicolumn{5}{| c |}{Nodal curve models for each type}\\
\hline
\rule{0pt}{35pt}\pbox{2.5cm}{Betti table\\} & \pbox{2.5cm}{Irreducible \\ components\\} & \pbox{2.5cm}{Degrees of \\ components\\} & \pbox{2.5cm}{Genera of\\ components\\} & \pbox{2.5cm}{Number of \\double points\\}\\
\hline
Type 2.1 & $C_1 \cup C_2$ & 12, 6 & 10, 4 & 6 \\
Type 2.2 & $C_1 \cup C_2$ & 11, 6 & 9, 4 & 6\\
Type 2.3 & $C_1 \cup C_2$ & 9, 7& 7, 5& 6\\
Type 2.5 & $C_1 \cup C_2$ & 13, 4 & 12, 3 & 4\\
Type 2.6 & $C_1 \cup C_2$ & 12, 4 or 8, 8 & 11, 3 or 7, 7 & 4 \\
Type 2.7 & $C_1 \cup C_2$ & 11, 4 & 10, 3 & 4 \\
Type 2.8 & $C_1 \cup C_2 \cup C_3$ & 7, 4, 4 & 4, 3, 3 & 8\\
\hline
\end{tabular}
\caption{A summary of our constructions.}
\label{tab:table11}
\end{table}
\subsection{Type 2.6}\label{2.6}
We first construct a curve in $\mathbb{P}^5_{\left<x_0\dots x_5\right>}$ corresponding to Schenck, Stillman and Yuan's type 2.6.
\begin{table}[h!]
\[\begin{array}{c|ccccc}
 & 0 & 1 & 2 & 3 & 4 \\ \hline
0 & 1 & - & - & - & - \\
1 & - & 4 & 4 & 1 & - \\
2 & - & 4 & 8 & 4 & - \\
3 & - & 1 & 4 & 4 & - \\
4 & - & - & - & - & 1
\end{array}
\]
\caption*{Type 2.6~\cite{schenck2020calabiyau}}
\label{tab:table5}
\end{table}
The curve $C$ has degree 16, and due to the assumptions on Castelnuovo--Mumford regularity it is halfcanonical with arithmetic genus 17. Let $J=(Q_1,Q_2,Q_3,Q_4)$ be the ideal of the quadric relations and $S=k[x_0,\dots,x_5]$. Then $R=S/J$ has a minimal free resolution whose linear part corresponds to the first line of the Betti table. We can use this to rule out possible ideals $J$ where there are too many or too few linear syzygies. Note that we may also have syzygies of higher order, but focusing on the linear syzygies is often enough to find appropriate quadric relations. For types 2.5 and 2.6 we obtain possible quadric relations through an analysis of the case where $R$ is a Koszul algebra, detailed at the end of this section. This raises the question of whether there exist appropriate quadric relations which do not define a Koszul algebra.
It can be shown that the quadrics $\{x_0x_5,x_1x_5,x_2x_5,Q_4\}$, where $Q_4$ is in the ideal $(x_0,x_1,x_2)\backslash(x_5)$, have four linear first syzygies and one linear second syzygy. In this case our curve $C$ breaks into two pieces: $C_1 \subset \mathbb{P}^4_{\left<x_0\dots x_4\right>}$ and $C_2 \subset \mathbb{P}^2_{\left<x_3:x_4:x_5\right>}$, meeting transversally in $d$ points. For simplicity we assume these curves are nonsingular and irreducible.
It follows that $C_2$ is a plane curve defined by an irreducible cubic or quartic. \\ Recall that
\begin{equation}
\mathcal{O}_{C_1}(K_{C_1}) = \omega_{C_1}=\mathcal{O}_{C_1}(2A_1-D).
\end{equation}
It follows that $K_{C_1}=2A_1-D$, where $A_1$ is the hyperplane class in $C_1$ and $D$ is the locus of double points of $C_1 \cup C_2$. Hence $\deg K_{C_1} = 2g_1-2=2d_1-d$ and consequently
\begin{equation}\label{points1}
d_1=g_1-1+\tfrac{d}{2}.
\end{equation}
Similarly for $C_2$ we have
\begin{equation}\label{points2}
d_2=g_2-1+\tfrac{d}{2}.
\end{equation}
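As a sanity check, the two-component rows of Table~\ref{tab:table11} are consistent with the relation $d_i = g_i - 1 + d/2$, and with $p_a(C) = \deg(C) + 1$ for a halfcanonical curve in $\mathbb{P}^5$ (for a nodal union, $p_a(C) = g_1 + g_2 + d - 1$). A short Python script, a sketch with the table data hard-coded:

```python
# Check d_i = g_i - 1 + d/2 for the two-component curves of Table 2
# (degrees d1, d2; genera g1, g2; d = number of double points).
rows = [
    ("2.1", 12, 6, 10, 4, 6),
    ("2.2", 11, 6, 9, 4, 6),
    ("2.3", 9, 7, 7, 5, 6),
    ("2.5", 13, 4, 12, 3, 4),
    ("2.6", 12, 4, 11, 3, 4),
    ("2.6", 8, 8, 7, 7, 4),
    ("2.7", 11, 4, 10, 3, 4),
]
for name, d1, d2, g1, g2, d in rows:
    # halfcanonical condition on each component (cleared denominators)
    assert 2 * d1 == 2 * (g1 - 1) + d, name
    assert 2 * d2 == 2 * (g2 - 1) + d, name
    # nodal union is halfcanonical of arithmetic genus = degree + 1
    assert g1 + g2 + d - 1 == (d1 + d2) + 1, name
print("all rows consistent")
```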
If $C_2$ were a nonsingular cubic then it would intersect the hypersurface given by $x_5=0$ in at most 3 points. However, from (\ref{points2}) we obtain $d=6$, a contradiction. Consequently $C_2$ is a nonsingular plane quartic, of degree 4 and genus 3. It follows that the double locus of $C$ consists of 4 points, and we expect $C_1$ to be a curve of degree 12 and genus 11. \\
Let $\Gamma$ be the curve $C_1 \cup l_1$ in $\mathbb{P}^4$, with $l_1$ defined by $x_0=x_1=x_2=0$. Then from our earlier discussion of nodal curves we have $K_{\Gamma}|_{C_1}=K_{C_1}+D$ where $D$ is the divisor of the double locus of 4 points on $C_1$. Considering $D$ as the divisor of the 4 points on $l_1$, we also have $K_{\Gamma}|_{l_1}=K_{l_1}+D=-2H+D$ where $H$ is the hyperplane class. It follows that $\mathcal{O}_{l_1}(K_{\Gamma})=\mathcal{O}_{l_1}(-2+4)=\mathcal{O}_{l_1}(2)$, so $\Gamma$ is halfcanonical, hence Gorenstein. Thus $\Gamma$ is defined by Pfaffians~\cite{10.2307/2373926} and has degree 13. According to the Betti table we need one more quadric relation and 4 cubic relations so it follows that $\Gamma$ should be defined by the $4 \times 4$ Pfaffians of a $5 \times 5$ skew-symmetric matrix. We now describe some constraints on the matrix to ensure $l_1 \subset \Gamma$, and so that it defines four cubic Pfaffians and one quadric. \\
Consider the matrix
\begin{equation}
N = \begin{pmatrix}
& a_{12} & a_{13} & a_{14} & a_{15} \\
& & a_{23} & a_{24} & a_{25} \\
& & & a_{34} & a_{35} \\
& & & & a_{45} \\
\end{pmatrix}.
\end{equation}
Further, let the degrees of the $a_{ij}$ be given by
\begin{equation}
N = \begin{pmatrix}
& 1 & 1 & 1 & 2 \\
& & 1 & 1 & 2 \\
& & & 1 & 2 \\
& & & & 2 \\
\end{pmatrix}.
\end{equation}
Then the $4 \times 4$ Pfaffians are of degree $(2,3,3,3,3)$. Let $I$ be the ideal of the $4 \times 4$ Pfaffians of $N$. Then
\begin{equation}
\begin{split}
I = (& a_{12}a_{34}-a_{13}a_{24}+a_{14}a_{23}, \\
& a_{12}a_{35}-a_{13}a_{25}+a_{15}a_{23}, \\ & a_{12}a_{45}-a_{14}a_{25}+a_{15}a_{24}, \\ & a_{13}a_{45}-a_{14}a_{35}+a_{15}a_{34}, \\ & a_{23}a_{45}-a_{24}a_{35}+a_{25}a_{34}).
\end{split}
\end{equation}
It follows that for any $\{k,l\} \subset \{1,\dots,5\}$, $k\neq l$, setting $a_{ij} \in J=(x_0,x_1,x_2)$ if $i \in \{k,l\}$ or $j \in \{k,l\}$ ensures $I \subset J$. The remaining elements may be general in the coordinates of $\mathbb{P}^4$. In other words, $N$ is a variant of $\text{Jer}_{kl}$. Similarly, if we set $a_{ij} \in J$ for $i,j \neq k$, we ensure $I \subset J$, in which case $N$ is a variant of $\text{Tom}_k$.\\
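This membership argument can be checked symbolically. The sketch below (in Python, with sympy; the helper names `lin` and `pf4` are our own) builds a $\text{Jer}_{45}$-type skew matrix with linear entries, the entries in rows and columns 4 and 5 taken from $J=(x_0,x_1,x_2)$, and verifies that all five $4 \times 4$ Pfaffians lie in $J$:

```python
import itertools
import sympy as sp

x0, x1, x2, x3, x4 = sp.symbols('x0:5')
J_gens = (x0, x1, x2)

def lin(seed, in_J=False):
    # a deterministic "general" linear form; in_J forces it into (x0, x1, x2)
    gens = J_gens if in_J else (x0, x1, x2, x3, x4)
    return sum((seed + 7 * k + 1) * g for k, g in enumerate(gens))

# upper-triangular entries a_{kl} of a 5x5 skew matrix, Jer_{45} style:
# a_{kl} lies in J whenever k or l lies in {4, 5}
a = {}
for k, l in itertools.combinations(range(1, 6), 2):
    a[k, l] = lin(10 * k + l, in_J=(k in (4, 5) or l in (4, 5)))

def pf4(i):
    # 4x4 Pfaffian of the submatrix omitting row/column i
    j, k, l, m = [r for r in range(1, 6) if r != i]
    return a[j, k] * a[l, m] - a[j, l] * a[k, m] + a[j, m] * a[k, l]

for i in range(1, 6):
    # reduction modulo (x0, x1, x2) is just the substitution x0 = x1 = x2 = 0
    assert sp.expand(pf4(i).subs({x0: 0, x1: 0, x2: 0})) == 0
```

(For simplicity all entries here are linear rather than carrying the degrees $(1,\dots,2)$ of $N$ above; the membership argument is degree-independent.)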
Defining $\Gamma$ in this way ensures it breaks into two irreducible nonsingular components, our line $l_1$ and curve $C_1$. Moreover, $l_1$ and $C_1$ intersect in 4 points which define a quartic, $q_4$. Mapping $q_4$ into $\mathbb{P}^2$ by adding arbitrary terms in $(x_5)$ defines a nonsingular quartic curve $C_2$. Then $C=C_1 \cup C_2 \subset \mathbb{P}^5$ is a codimension 4 Gorenstein curve corresponding to Betti table 2.6. A computer algebra package such as MAGMA can be used to verify that each curve is nonsingular and that $C_1$ and $C_2$ intersect transversally. We can also use MAGMA to compute the free resolution as a sanity check. \\
Now instead suppose that the four quadrics are given by $(x_0,x_1) \cap (x_2,x_3)$, which again have the correct minimal free resolution. It follows that $C$ breaks up into $C_1 \subset \mathbb{P}^3_{\left<x_2\dots x_5\right>}$ and $C_2 \subset \mathbb{P}^3_{\left<x_0:x_1:x_4:x_5\right>}$. We may define $C_1$ and $C_2$ in the following way. Consider the complete intersection $X_1=V(F_1,F_2) \subset \mathbb{P}^3_{\left<x_2\dots x_5\right>}$ given by cubics $F_1$, $F_2$ containing the line $l_1 \colon x_2=x_3=0$ in $\mathbb{P}^3$. Such cubics have the form
\begin{equation}
F_1=x_2P_3+x_3P_4, \quad F_2=x_2Q_3 + x_3Q_4,
\end{equation}
with $P_3,Q_3,P_4,Q_4$ quadratic forms in $k[x_2,\dots,x_5]$.
Then $X_1$ breaks into two irreducible components, namely the line $l_1$ and the curve $C_1$, defined by $(F_1,F_2,P_3Q_4-P_4Q_3)$. The curve $C_1$ is nonsingular with degree 8 and genus 7. Similarly we are able to define another (3,3) complete intersection $X_2=V(F_3,F_4) \subset \mathbb{P}^3_{\left<x_0:x_1:x_4:x_5\right>}$ containing the line $l_2 \colon x_0=x_1=0$ in $\mathbb{P}^3$:
\begin{equation}
F_3=x_0P_1+x_1P_2, \quad F_4=x_0Q_1+x_1Q_2.
\end{equation}
Here $P_1,P_2,Q_1,Q_2$ are quadratic forms in $k[x_0,x_1,x_4,x_5]$. Again $X_2$ breaks into two irreducible components with $C_2$ defined by $(F_3,F_4,P_1Q_2-P_2Q_1)$, and $C_2$ is nonsingular with degree 8 and genus 7. It follows from (\ref{points1}), (\ref{points2}) that $C_1$ and $C_2$ meet in four points, which lie on the line $\mathbb{P}^1_{\left<x_4:x_5\right>}$. We outline constraints on the $P_i$, $Q_j$ so that this occurs. If
\begin{equation}
\begin{split}
P_3|_{x_2=x_3=0}&=P_1|_{x_0=x_1=0}, \\ P_4|_{x_2=x_3=0}&=P_2|_{x_0=x_1=0}, \\ Q_3|_{x_2=x_3=0}&=Q_1|_{x_0=x_1=0}, \\ Q_4|_{x_2=x_3=0}&=Q_2|_{x_0=x_1=0},
\end{split}
\end{equation}
then
\begin{equation}
R(x_4,x_5)=(P_3Q_4-P_4Q_3)|_{x_2=x_3=0}=(P_1Q_2-P_2Q_1)|_{x_0=x_1=0}.
\end{equation} In this situation, $C_1$ and $C_2$ meet in exactly 4 points defined by the quartic $R$ in $\mathbb{P}^1_{\left<x_4:x_5\right>}$. Their union is a Gorenstein codimension 4 curve with Betti table 2.6. \\
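One way to see that the extra generator $P_3Q_4-P_4Q_3$ cuts out the residual curve is through the identities $Q_4F_1 - P_4F_2 = x_2(P_3Q_4-P_4Q_3)$ and $P_3F_2 - Q_3F_1 = x_3(P_3Q_4-P_4Q_3)$: both $x_2$ and $x_3$ times the extra cubic lie in $(F_1,F_2)$, so the cubic vanishes on $X_1$ away from $l_1$. A quick symbolic check, a sketch in which the quadratic forms are treated as free symbols:

```python
import sympy as sp

x2, x3 = sp.symbols('x2 x3')
# stand-ins for the quadratic forms; the identities hold for arbitrary forms
P3, P4, Q3, Q4 = sp.symbols('P3 P4 Q3 Q4')

F1 = x2 * P3 + x3 * P4
F2 = x2 * Q3 + x3 * Q4
delta = P3 * Q4 - P4 * Q3  # the extra cubic generator of I_{C_1}

# x2*delta and x3*delta both lie in (F1, F2)
assert sp.expand(Q4 * F1 - P4 * F2 - x2 * delta) == 0
assert sp.expand(P3 * F2 - Q3 * F1 - x3 * delta) == 0
```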
The candidates for the quadric generators arise from work of Mantero--Mastroeni \cite{mantero2021betti}. Assuming $R=S/J$ is Koszul, we analyse $J=(Q_1,Q_2,Q_3,Q_4)$ in the context of different heights. If $\text{ht} J=4$ then $J$ is a complete intersection of four quadrics and does not have linear syzygies. If $\text{ht} J=1$ then it is given as $zI$ where $z$ is a linear form and $I$ is a complete intersection of linear forms~\cite{mantero2021betti}. Thus for such $J$, $R$ would not correspond to the type 2.6 Betti table, since there would be too many linear syzygies. Moreover, for $\text{ht} J=3$, $R$ is a Koszul almost complete intersection and thus has at most two linear syzygies~\cite{mastroeni2018koszul}. Hence, $J$ must have height 2. Mantero--Mastroeni show that for a Koszul algebra of four quadrics with $\text{ht}J=2$ to have the required Betti table, it must have multiplicity $e(R)=2$.
\begin{theorem}[Mantero--Mastroeni~\cite{mantero2021betti}]
Let $R$ be Koszul with $\textup{ht}J=2=e(R)$. Then $J$ has one of the following possible forms: \\
\textup{(I)} $(x_0,x_1) \cap (x_2,x_3)$ or $(x_0^2,x_0x_1,x_1^2,x_0x_2+x_1x_3)$\\
\textup{(II)} $(a_1x_0,a_2x_0,a_3x_0,q)$ where the $a_i$ are independent linear forms and $q \in (a_1,a_2,a_3)\backslash(x_0)$ \\
\textup{(III)} $(a_1x_0,a_2x_0,a_3x_0,q)$ where the $a_i$ are independent linear forms and $q$ is a non-zero divisor modulo $(a_1x_0,a_2x_0,a_3x_0)$. \\
\end{theorem}
Case (III) does not correspond to a Betti table with four linear first syzygies and one linear second syzygy, so we are in case (I) or case (II). In case (I) the latter option is not reduced, so we are restricted to the case $J=(x_0,x_1) \cap (x_2,x_3)$.
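As a computational cross-check, the space of linear syzygies of the four quadrics $x_0x_2$, $x_0x_3$, $x_1x_2$, $x_1x_3$ is exactly four-dimensional, matching the linear strand of the type 2.6 Betti table. A sympy sketch (extracting monomial coefficients by repeated differentiation is just a convenient way to linearise the condition):

```python
import itertools
import sympy as sp

xs = sp.symbols('x0:6')
x0, x1, x2, x3 = xs[:4]
Q = [x0 * x2, x0 * x3, x1 * x2, x1 * x3]  # generators of (x0,x1) ∩ (x2,x3)

# unknown linear forms l_1,...,l_4 in x0,...,x5: 4 * 6 = 24 coefficients
cs = sp.symbols('c0:24')
ls = [sum(cs[6 * i + j] * xs[j] for j in range(6)) for i in range(4)]
rel = sp.expand(sum(l * q for l, q in zip(ls, Q)))

# rel is cubic in the x's and linear in the c's; the coefficient of each
# degree-3 monomial is a nonzero constant multiple of a triple derivative
eqs = [sp.diff(rel, xs[a], xs[b], xs[c])
       for a, b, c in itertools.combinations_with_replacement(range(6), 3)]
M, _ = sp.linear_eq_to_matrix(eqs, list(cs))
assert M.cols - M.rank() == 4  # exactly four linear first syzygies
```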
\subsection{Type 2.7}\label{2.7}
The following construction is an example of a codimension 4 Gorenstein curve with Betti table as in type 2.7. Any such curve has degree 15 and arithmetic genus 16. \\
\begin{table}[h!]
\[\begin{array}{c|ccccc}
 & 0 & 1 & 2 & 3 & 4 \\ \hline
0 & 1 & - & - & - & - \\
1 & - & 5 & 5 & 1 & - \\
2 & - & 1 & 2 & 1 & - \\
3 & - & 1 & 5 & 5 & - \\
4 & - & - & - & - & 1
\end{array}
\]
\caption*{Type 2.7~\cite{schenck2020calabiyau}}
\label{tab:table6}
\end{table}
Consider the quadrics
\begin{equation}
Q_1=x_0x_5, \quad Q_2=x_1x_5, \quad Q_3=x_2x_5, \quad Q_4, \quad Q_5,
\end{equation}
with $Q_4,Q_5 \in (x_0,x_1,x_2)$. Then $J=(Q_1,Q_2,Q_3,Q_4,Q_5)$ has five linear syzygies as required. Thus, $C$ breaks up into two curves, namely $C_1 \subset \mathbb{P}^4_{\left<x_0\dots x_4\right>}$ and $C_2 \subset \mathbb{P}^2_{\left<x_3:x_4:x_5\right>}$. Again, $C_2$ must be defined by a nonsingular quartic with degree 4 and genus 3, and the double locus of $C$ is 4 points. We obtain from (\ref{points1}) that $C_1$ has degree 11 and genus 10. Let $l_1$ be the line $x_0=x_1=x_2=0$ in $\mathbb{P}^4$. \\
Once more $\Gamma=C_1 \cup l_1$ is halfcanonical, hence Gorenstein, by the same computation of $K_{\Gamma}|_{l_1}$ as in type 2.6. Since $\Gamma$ has degree 12 and we need one more cubic relation, we define $\Gamma$ as the complete intersection of two quadrics, $Q_4$ and $Q_5$, and a cubic, $F$, all in $(x_0,x_1,x_2)$. It follows that $C_1$ and $l_1$ are the two irreducible components of $\Gamma$, and $C_1$ is nonsingular. The curves $C_1$ and $l_1$ meet in 4 points defining a quartic, and mapping this quartic into $\mathbb{P}^2$, adding arbitrary terms in $(x_5)$, defines a nonsingular quartic curve $C_2$. The union of these two curves is Gorenstein of codimension 4, with Betti table as prescribed. \\
Moreover, we can construct a deformation to CGKK 2~\cite{coughlan2016arithmetically}, which is in the same Hilbert scheme as type 2.7 and type 2.8, courtesy of Jan Stevens.
\begin{table}[h!]
\[\begin{array}{c|ccccc}
 & 0 & 1 & 2 & 3 & 4 \\ \hline
0 & 1 & - & - & - & - \\
1 & - & 5 & 5 & - & - \\
2 & - & 1 & - & 1 & - \\
3 & - & - & 5 & 5 & - \\
4 & - & - & - & - & 1
\end{array}
\]
\caption*{CGKK 2~\cite{coughlan2016arithmetically}}
\label{tab:table10}
\end{table}
Consider the syzygy module of the five generating quadrics,
\begin{equation}
\begin{split}
& Q_1=x_0x_5, \\
& Q_2=x_1x_5, \\
& Q_3=x_2x_5, \\
& Q_4=a_1x_0+a_2x_1+a_3x_2, \\
& Q_5=b_1x_0+b_2x_1+b_3x_2.
\end{split}
\end{equation}
Then this is a $6 \times 5$ matrix
\begin{equation}
M=\begin{pmatrix}
0 & x_2 & -x_1 & 0 & 0 \\
-x_2 & 0 & x_0 & 0 & 0 \\
x_1 & -x_0 & 0 & 0 & 0 \\
-b_1 & -b_2 & -b_3 & 0 & x_5 \\
a_1 & a_2 & a_3 & -x_5 & 0 \\
0 & 0 & 0 & -Q_5 & Q_4 \\
\end{pmatrix}.
\end{equation}
Using deformation variable $t$, we construct a new matrix
\begin{equation}
M_t=\begin{pmatrix}
0 & x_2 & -x_1 & tb_1 & -ta_1 \\
-x_2 & 0 & x_0 & tb_2 & -ta_2 \\
x_1 & -x_0 & 0 & tb_3 & -ta_3 \\
-b_1 & -b_2 & -b_3 & 0 & x_5 \\
a_1 & a_2 & a_3 & -x_5 & 0 \\
0 & 0 & 0 & -Q_5 & Q_4 \\
\end{pmatrix}.
\end{equation}
Ignoring the bottom row of the matrix, and multiplying rows 4 and 5 by $t$ gives a skew-symmetric matrix. We may then take the $4 \times 4$ Pfaffians and cancel $t$ to obtain the five quadrics
\begin{equation}
\begin{split}
& Q_1 = x_0x_5 - t(a_2b_3-a_3b_2), \\
& Q_2 = x_1x_5 + t(a_1b_3-a_3b_1), \\
& Q_3 = x_2x_5 - t(a_1b_2-a_2b_1), \\
& Q_4=a_1x_0+a_2x_1+a_3x_2, \\
& Q_5=b_1x_0+b_2x_1+b_3x_2.
\end{split}
\end{equation}
We also deform the cubic $F$. Suppose $F=c_1x_0+c_2x_1+c_3x_2$, with $c_1,c_2,c_3$ all of degree 2. As discussed earlier, the quartic in type $2.7$ is given by a quartic in $k[x_0,\dots,x_4]$ plus additional terms in $(x_5)$. In fact the first part is the quartic obtained as the determinant of the matrix \begin{equation}
N=\begin{pmatrix}
a_1 & a_2 & a_3 \\
b_1 & b_2 & b_3 \\
c_1 & c_2 & c_3 \\
\end{pmatrix}.
\end{equation}
We can write the quartic as $q=\text{det}(N) + gx_5$ where $g$ is a degree three polynomial. We define our deformed cubic as $F_t=F+tg$. Consider the ideal $I_t=(Q_1, Q_2, Q_3, Q_4,Q_5,F_t,q)$. Then for $t=0$ this is clearly the defining ideal for our type 2.7 nodal curve. Otherwise, note that $tq$ is in the ideal $J_t$ generated by the first six relations. This follows since
\begin{equation}
tq + c_1Q_1+c_2Q_2+c_3Q_3 -x_5F_t = 0.
\end{equation}
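This identity can be confirmed symbolically; the sketch below (sympy assumed, not part of the paper's argument) treats the $a_i$, $b_i$, $c_i$ and $g$ as independent symbols, since only the algebra, not the grading, enters the cancellation.

```python
# Check (sympy assumed) that t*q + c1*Q1 + c2*Q2 + c3*Q3 - x5*F_t vanishes
# identically, with all coefficient forms treated as generic symbols.
import sympy as sp

t, x0, x1, x2, x5 = sp.symbols('t x0 x1 x2 x5')
a1, a2, a3, b1, b2, b3 = sp.symbols('a1 a2 a3 b1 b2 b3')
c1, c2, c3, g = sp.symbols('c1 c2 c3 g')

Q1 = x0*x5 - t*(a2*b3 - a3*b2)
Q2 = x1*x5 + t*(a1*b3 - a3*b1)
Q3 = x2*x5 - t*(a1*b2 - a2*b1)
N = sp.Matrix([[a1, a2, a3], [b1, b2, b3], [c1, c2, c3]])
q = N.det() + g*x5          # the quartic
F = c1*x0 + c2*x1 + c3*x2   # the cubic
Ft = F + t*g                # its deformation

residual = sp.expand(t*q + c1*Q1 + c2*Q2 + c3*Q3 - x5*Ft)
```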
Note that $J_t$ is prime, which can be checked with computer algebra. Thus if $t$ is invertible then $I_t=J_t$ and is defined by the $4 \times 4$ Pfaffians of a $5 \times 5$ skew-symmetric matrix intersecting a cubic hypersurface. This deformation corresponds to Betti table CGKK 2~\cite{coughlan2016arithmetically}.
\subsection{Type 2.3}\label{2.3}
We now focus on a curve with degree 16 and genus 17 corresponding to Betti table type 2.3. \\
\begin{table}[h!]
\[\begin{array}{c|ccccc}
 & 0 & 1 & 2 & 3 & 4 \\ \hline
0 & 1 & - & - & - & - \\
1 & - & 4 & 3 & - & - \\
2 & - & 3 & 6 & 3 & - \\
3 & - & - & 3 & 4 & - \\
4 & - & - & - & - & 1
\end{array}
\]
\caption*{Type 2.3~\cite{schenck2020calabiyau}}
\label{tab:table8}
\end{table}
Consider the cubic scroll $\mathbb{F}=\mathbb{F}(1,2) \subset \mathbb{P}^4_{\left<x_0\dots x_4\right>}$, defined by equations
\begin{equation}
\rank\begin{pmatrix}
x_0 & x_1 & x_3 \\
x_1 & x_2 & x_4 \\
\end{pmatrix} \leq 1.
\end{equation}
$\mathbb{F}(1,2)$ is given by the surface scroll $\mathbb{F}_1$ embedded into $\mathbb{P}^4$ by the linear system $2A+B$ where $A$ is the fibre of $\mathbb{F}_1 \rightarrow \mathbb{P}^1$ and $B$ is the negative section~\cite{Reid1996ChaptersSurfaces}.
The curve $C_2$ is given by $\mathbb{F} \cap X$ where $X$ is a general cubic hypersurface. This curve has degree 9 and genus 7. $C_1 \subset \mathbb{P}^3$ is residual to a conic in a (3,3) complete intersection, such that the double locus of $C$ is given by 6 points. It has degree 7 and genus 5. \\
To describe the construction in terms of explicit relations, consider $x_0x_2-x_1^2$, the third minor of the matrix defining $\mathbb{F}$. $C_1$ lies in $\mathbb{P}^3=V(x_3,x_4)$. Define the plane quadric $Q=V(x_0x_2-x_1^2,x_5) \subset \mathbb{P}^3$, and consider two general cubics $G_1, G_2$ containing $Q$. Such cubics have the form
\begin{align*}
G_1 = P_1(x_0x_2-x_1^2)+Q_1x_5, \\
G_2 = P_2(x_0x_2-x_1^2)+Q_2x_5,
\end{align*}
with $P_1$, $P_2$ linear and $Q_1$, $Q_2$ quadrics. $C_1$ is residual to $Q$ in the $(3,3)$ complete intersection $(G_1,G_2)$. It is defined by one further cubic, given by $H=P_1Q_2-P_2Q_1$. Mapping this $H$ into $\mathbb{P}^4$, adding arbitrary terms in $(x_3,x_4)$, defines a cubic hypersurface $X$. We have $C_2 = \mathbb{F} \cap X$, and $C_1 \cup C_2$ is a Gorenstein codimension four curve in $\mathbb{P}^5$ corresponding to Betti table type 2.3.
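The fact that $H=P_1Q_2-P_2Q_1$ cuts the residual curve rests on two polynomial identities, which can be verified with generic symbols (a sympy sketch, our own check): writing $u=x_0x_2-x_1^2$, we have $uH = Q_2G_1 - Q_1G_2$ and $x_5H = P_1G_2 - P_2G_1$, so $H$ lies in the ideal quotient $\bigl((G_1,G_2):(u,x_5)\bigr)$.

```python
# Check (sympy assumed, generic symbols) the two identities showing that
# H = P1*Q2 - P2*Q1 lies in the ideal quotient ((G1, G2) : (u, x5)),
# where u = x0*x2 - x1**2.
import sympy as sp

x0, x1, x2, x5 = sp.symbols('x0 x1 x2 x5')
P1, P2, Q1, Q2 = sp.symbols('P1 P2 Q1 Q2')  # generic linear forms / quadrics

u = x0*x2 - x1**2
G1 = P1*u + Q1*x5
G2 = P2*u + Q2*x5
H = P1*Q2 - P2*Q1

check1 = sp.expand(u*H - (Q2*G1 - Q1*G2))
check2 = sp.expand(x5*H - (P1*G2 - P2*G1))
```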
\section{Further research}
Having populated the list of Betti tables with examples of nodal curves, a number of open questions remain. Firstly, can we construct more deformations between curves in the same Hilbert scheme, as in type 2.7? Email correspondence with Jan Stevens and Stephen Coughlan answers in the affirmative for types 2.6 and 2.8. There is also the question of whether this list is exhaustive, or if there are further possible curve constructions. The existence of topologically different constructions for type 2.6 suggests this could be the case for other types. In particular for higher degree curves, the picture may be more complicated. In some cases we have only been able to definitively state the quadrics if they define a Koszul algebra, so there may be a different set of quadric relations. We have restricted our search to nodal curves, and so it is possible there are further constructions with worse singularities. We also raise the idea of constructing surfaces and singular 3-folds corresponding to the type 2 Betti tables, or alternatively finite point sets.
\section{Appendix}
\subsection{Type 2.1}
\label{2.1}
We now consider how to construct a curve in $\mathbb{P}^5$ corresponding to Betti table type 2.1, which has degree 18 and genus 19.
\begin{table}[h!]
\[\begin{array}{c|ccccc}
 & 0 & 1 & 2 & 3 & 4 \\ \hline
0 & 1 & - & - & - & - \\
1 & - & 2 & 1 & - & - \\
2 & - & 9 & 18 & 9 & - \\
3 & - & - & 1 & 2 & - \\
4 & - & - & - & - & 1
\end{array}
\]
\caption*{Type 2.1 \cite{schenck2020calabiyau}}
\label{tab:table2}
\end{table}
We see there are two quadric relations, $Q_1$ and $Q_2$, with one linear syzygy, which may be given as $L_1Q_1-L_2Q_2=0$ for some linear forms $L_1,L_2$. Since we want $Q_1 \neq \lambda Q_2$ for any scalar $\lambda$, we have $L_1 \neq \lambda L_2$, and consequently $L_1$ divides $Q_2$ and $L_2$ divides $Q_1$. It follows that $Q_1=L_2L_3$, $Q_2=L_1L_3$ for some linear form $L_3$. Thus without loss of generality we can set
\begin{equation}
Q_1 = x_0x_5, \quad Q_2=x_1x_5.
\end{equation}
It follows that $C$ is singular, and breaks up into $C_1 \subset \mathbb{P}^4_{\left<x_0\dots x_4\right>}$ of degree $d_1$ and genus $g_1$ and $C_2 \subset \mathbb{P}^3_{\left<x_2\dots x_5\right>}$ of degree $d_2$ and genus $g_2$. Supposing that $C$ is at worst a nodal curve, with $C_1$ and $C_2$ nonsingular, and supposing further that $C_2$ is an $(a,b)$ complete intersection we have that $d_2=ab$ and $g_2=\tfrac{1}{2}ab(a+b-4)+1$. It follows from (\ref{points2}) that
\begin{equation}
ab=\tfrac{1}{2}ab(a+b-4)+\tfrac{d}{2}.
\end{equation}
Since we have no relations of degree 4 or higher we expect $a,b$ to be at most 3. Notice also that since $C_2$ intersects $x_5=0$ in $ab$ points, $C_1$ and $C_2$ must intersect in at most $ab$ points. This excludes the cases $(1,2),(1,3),(2,2)$, and if $a=b=3$ then we obtain that $d=0$, contradicting our assumption that $C$ is a nodal curve consisting of two nonsingular curves intersecting transversally at a non-zero number of points. Thus we look at the case where $C_2$ is a $(2,3)$ complete intersection. \\
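The case analysis can be summarised by solving the displayed equation for the number of double points, $d = ab(6-a-b)$, and keeping only pairs with $0 < d \leq ab$; the following two-line enumeration (our own check, not in the paper) confirms that $(2,3)$ is the only possibility.

```python
# From ab = (1/2)ab(a+b-4) + d/2 we get d = ab(6 - a - b) double points.
# Enumerate (a, b) with 1 <= a <= b <= 3 and keep those with 0 < d <= ab.
candidates = [(a, b) for a in range(1, 4) for b in range(a, 4)]
valid = [(a, b) for (a, b) in candidates if 0 < a*b*(6 - a - b) <= a*b]
```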
It follows that $g_2=4$ and $d_2=6$. We expect $C_1$ to be a curve with $d_1=18-6=12$ and consequently $g_1=10$. Let $Q_3 \in (x_2,x_3,x_4,x_5)$ be the quadric in the complete intersection. Consider the plane conic $q$ in $\mathbb{P}^4$ defined by $x_0=x_1=Q_3|_{x_5=0}=0$. Then $\omega_{q}=\mathcal{O}_{q}(-1)$ by the adjunction formula \cite[page 41]{eisenbud_harris_2016}, and $K_q=-A_q$ where $A_q$ is the hyperplane class. Let $\Gamma$ be the union of $C_1$ and $q$. Then as before $K_{\Gamma}|_q=K_{q}+D$ where $D$ is the divisor of the 6 double points on $q$. Thus $\mathcal{O}_q(K_{\Gamma})=\mathcal{O}_q(-1+3)=\mathcal{O}_q(2)$, and $\Gamma$ is halfcanonical and hence Gorenstein. \\
Since $\Gamma$ is degree 14 and Gorenstein codimension 3 we expect it to be defined by the $6 \times 6$ Pfaffians of a $7 \times 7$ skew-symmetric matrix. All elements of the matrix must be linear, since there are no quartic or higher degree relations. Thus we now consider what conditions must be satisfied for the Pfaffians of the matrix to lie in the ideal $J=(x_0,x_1,Q_3|_{x_5=0})$, but not in $(x_0,x_1)$. Consider the matrix
\begin{equation}
M = \begin{pmatrix}
& a_{12} & a_{13} & a_{14} & a_{15} & a_{16} & a_{17}\\
& & a_{23} & a_{24} & a_{25} & a_{26} & a_{27} \\
& & & a_{34} & a_{35} & a_{36} & a_{37} \\
& & & & a_{45} & a_{46} & a_{47} \\
& & & & & a_{56} & a_{57} \\
& & & & & & a_{67} \\
\end{pmatrix}.
\end{equation}
If $Q_3$ is made up of a reducible part in $(x_2^2,x_2x_3,x_2x_4,x_3^2,x_3x_4,x_4^2)$ plus terms in $(x_5)$ we can construct an $M$ such that the Pfaffians lie in $J$. An open problem following on from this work is whether such an $M$ can be constructed to contain a more general quadric, with $C_1$ still a nonsingular and irreducible curve. If $Q_3$ is in the required form, assume without loss of generality that $Q_3|_{x_5=0}=x_2x_3$. Then the following constraints ensure the Pfaffians lie in $J$. First suppose that $a_{ij} \in (x_0,x_1)$ for $i,j \notin \{5,6,7\}$, i.e. that $M$ is a $\text{Tom}_{567}$. We add the further constraints that, except for $a_{56}$ which is general, $a_{ij} \in (x_0,x_1,x_2)$ for $j=5$ and $a_{ij} \in (x_0,x_1,x_3)$ for $j=6$. \\
The curve $C_1$ residual to $q$ in $\Gamma$ is nonsingular and irreducible, of degree 12 and genus 10. Moreover, the intersection of $C_1$ with the conic $q$ is 6 points given as the intersection of $Q_3|_{x_5=0}$ and a cubic $H$ in $\mathbb{P}^2_{\left<x_2\dots x_4\right>}$. We can map this cubic into $\mathbb{P}^3$, adding arbitrary terms in $(x_5)$ to define a new nonsingular cubic. The $(2,3)$ complete intersection $C_2$ is given by $(Q_3,H)$. The union of these two curves defines our nodal curve with resolution given by Betti table 2.1.
\subsection{Type 2.2}\label{2.2}
We now outline a construction of a nodal curve $C \subset \mathbb{P}^5$ with degree 17 and genus 18 corresponding to Betti table type 2.2. \\
\begin{table}[h!]
\[\begin{array}{c|ccccc}
 & 0 & 1 & 2 & 3 & 4 \\ \hline
0 & 1 & - & - & - & - \\
1 & - & 3 & 1 & - & - \\
2 & - & 5 & 12 & 5 & - \\
3 & - & - & 1 & 3 & - \\
4 & - & - & - & - & 1
\end{array}
\]
\caption*{Type 2.2~\cite{schenck2020calabiyau}}
\label{tab:table7}
\end{table}
The quadric relations in $\mathbb{P}^5_{\left<x_0\dots x_5\right>}$ are necessarily of the form
\begin{equation}
Q_1 = x_0x_5, \quad Q_2= x_1x_5, \quad Q_3,
\end{equation}
with the single syzygy $x_1Q_1\equiv x_0Q_2$. Thus $C$ breaks up into $C_1 \subset \mathbb{P}^4_{\left<x_0\dots x_4\right>}$ and $C_2 \subset \mathbb{P}^3_{\left<x_2\dots x_5\right>}$. The curve $C_2$ is a $(2,3)$ complete intersection of degree 6 and genus 4. Moreover, the double locus consists of 6 points and we expect $C_1$ to be of degree 11 and genus 9. For $C_1 \cup C_2$ to be halfcanonical we require the 6 points to define a hyperplane section, and we again consider a plane conic $q$ in $\mathbb{P}^4_{\left<x_0\dots x_4\right>}$ in the plane of the double points, with $q$ defined by $x_0=x_1=Q_3=0$. Then we once more have that $\Gamma = C_1 \cup q$ is halfcanonical and hence Gorenstein. We thus define $\Gamma$ using the Pfaffians of a skew-symmetric matrix. The construction is as follows: we define a matrix
\begin{equation}
M = \begin{pmatrix}
& b_{12} & b_{13} & b_{14} & b_{15} \\
& & a_{23} & a_{24} & a_{25} \\
& & & a_{34} & a_{35} \\
& & & & a_{45} \\
\end{pmatrix}
\end{equation}
with entries of the following degrees
\begin{equation}
M = \begin{pmatrix}
& 2 & 2 & 2 & 2 \\
& & 1 & 1 & 1 \\
& & & 1 & 1 \\
& & & & 1 \\
\end{pmatrix},
\end{equation}
whose $4 \times 4$ Pfaffians are four cubics and a quadric, as in \S\ref{2.6}. Unlike \S\ref{2.6}, we wish for the four cubics to vanish on the plane $x_0=x_1=0$ in $\mathbb{P}^4$, and for the quadric not to lie in $(x_0,x_1)$; it may be general in $\mathbb{P}^4$. This occurs if we set, for example, the $b_{ij}$ as quadrics in $(x_0,x_1)$, and the $a_{ij}$ linear and in all coordinates $x_0$ to $x_4$. Let $\Gamma$ be the curve defined by the $4 \times 4$ Pfaffians, and set $Q_3$ to be the quadric Pfaffian. Consider the residual curve $C_1$ to the conic defined by $x_0=x_1=Q_3=0$ in $\Gamma$. This is nonsingular and irreducible, and has degree 11 and genus 9 as required. It meets the conic in 6 points, which define a cubic $F_1$ such that the zero locus is given by $x_0=x_1=Q_3=F_1=0$. Mapping $Q_3$ and $F_1$ into $\mathbb{P}^3$ by adding arbitrary terms in $(x_5)$ we obtain the complete intersection $C_2$.
\subsection{Type 2.5}\label{2.5}
We now construct a curve in $\mathbb{P}^5_{\left<x_0\dots x_5\right>}$ corresponding to Schenck, Stillman and Yuan's type 2.5. The curve $C$ has degree 17, and due to the assumptions on Castelnuovo--Mumford regularity it is halfcanonical with arithmetic genus 18. \\
\renewcommand{\arraystretch}{1}
\begin{table}[h!]
\[\begin{array}{c|ccccc}
 & 0 & 1 & 2 & 3 & 4 \\ \hline
0 & 1 & - & - & - & - \\
1 & - & 3 & 3 & 1 & - \\
2 & - & 7 & 14 & 7 & - \\
3 & - & 1 & 3 & 3 & - \\
4 & - & - & - & - & 1
\end{array}
\]
\caption*{Type 2.5~\cite{schenck2020calabiyau}}
\label{tab:table4}
\end{table}
Let $J=(Q_1,Q_2,Q_3)$ be the ideal of the three quadrics and $S=k[x_0,\dots,x_5]$. In the case that $R=S/J$ is a Koszul algebra we may apply a theorem of Mantero--Mastroeni~\cite{mantero2021betti} to categorise the quadrics.
\begin{proposition}
If $R=S/J$ is a Koszul algebra then $J$ is given by \[Q_1 = x_0x_5, \quad Q_2 = x_1x_5, \quad Q_3 = x_2x_5.\]
\end{proposition}
\begin{proof}
If $\text{ht}J=2$ then $R$ is an almost complete intersection and consequently has at most two linear syzygies~\cite{mastroeni2018koszul}. If $\text{ht}J=3$ then $R$ is a complete intersection and there are no linear syzygies. If $\text{ht} J=1$ then it is given as $zI$ where $z$ is a linear form and $I$ is a complete intersection of linear forms~\cite{mantero2021betti}. This option has the correct number of syzygies. Thus, without loss of generality, let $z=x_5$, $I=(x_0,x_1,x_2)$. Then $J=zI=(x_0x_5,x_1x_5,x_2x_5).$
\end{proof}
It follows that any curve $C$ with such quadric relations breaks up into two curves, $C_1 \subset \mathbb{P}^4_{\left<x_0\dots x_4\right>}$ and $C_2 \subset \mathbb{P}^2_{\left<x_3:x_4:x_5\right>}$. Thus $C_2$ is defined by a single quadric, cubic or quartic in $\mathbb{P}^2$. \\
Recall that a nonsingular plane quadric has genus 0, and a cubic has genus 1. Assuming $C_2$ is nonsingular, it follows that $C_2$ cannot be defined by a quadric or cubic. Note that $C_1 \cap C_2 \subset C_1 \cap H$ where $H$ is the hyperplane given by $x_5=0$. $C_2$ intersects $H$, and consequently $C_1$, in at most $\deg(C_2)=d_2$ points, so $d\leq 2$ or $d\leq 3$ respectively, and (\ref{points2}) does not hold in these cases. Further, $C_1 \cup C_2$ is halfcanonical, so $K_{C_2} + D = 2H$ for $H$ a hyperplane section. We also have, for $C_2$ a plane curve of degree $d_2$, that $K_{C_2} = (d_2-3)H$, and so $D = (5-d_2)H \leq H$; the only solution is $d_2=4$. Since $C_2$ is defined by a nonsingular plane quartic it has degree 4 and genus 3, and by (\ref{points2}) the double locus of $C = C_1 \cup C_2$ contains 4 points. \\
Let $\Gamma$ be the curve $C_1 \cup l_1$ in $\mathbb{P}^4$, with $l_1$ defined by $x_0=x_1=x_2=0$. Again it follows that $\Gamma$ is halfcanonical, hence Gorenstein. The curve $\Gamma$ is of degree 14 since $C_1$ must have degree 13 by (\ref{points1}). By Buchsbaum--Eisenbud~\cite{10.2307/2373926}, if $\Gamma$ is Gorenstein of codimension 3 then it is defined by Pfaffians. Since $\Gamma$ is degree 14, and we need seven cubics to define $C$, $\Gamma$ is defined by the $6 \times 6$ Pfaffians of a $7 \times 7$ skew-symmetric matrix. As $\Gamma$ contains the line $l_1$ it is necessary that every Pfaffian lies in the ideal $(x_0,x_1,x_2)$.
\begin{proposition}
Let M be a $7 \times 7$ skew-symmetric matrix. Let $I$ be the ideal of $6 \times 6$ Pfaffians of $M$. Then two possible formats such that $I \subset (x_0,x_1,x_2)$ are as follows:
\begin{flalign*}
(\textup{I}) \hspace{3cm} M = \begin{pmatrix}
& a_{12} & a_{13} & a_{14} & a_{15} & b_{16} & b_{17}\\
& & a_{23} & a_{24} & a_{25} & b_{26} & b_{27} \\
& & & a_{34} & a_{35} & b_{36} & b_{37} \\
& & & & a_{45} & b_{46} & b_{47} \\
& & & & & b_{56} & b_{57} \\
& & & & & & b_{67} \\
\end{pmatrix} \\ \\
\end{flalign*}
\begin{flalign*}
(\textup{II}) \hspace{3cm} M = \begin{pmatrix}
& b_{12} & b_{13} & b_{14} & b_{15} & a_{16} & a_{17}\\
& & b_{23} & b_{24} & b_{25} & a_{26} & a_{27} \\
& & & b_{34} & b_{35} & a_{36} & a_{37} \\
& & & & b_{45} & a_{46} & a_{47} \\
& & & & & a_{56} & a_{57} \\
& & & & & & a_{67} \\
\end{pmatrix} \\
\end{flalign*}
where the $a_{ij}$ represent linear elements in $(x_0,x_1,x_2)$ and the $b_{ij}$ represent linear elements in all the coordinates on $\mathbb{P}^4$.
\end{proposition}
\begin{proof}
Consider the skew-symmetric matrix
\begin{equation}
M = \begin{pmatrix}
& a_{12} & a_{13} & a_{14} & a_{15} & a_{16} & a_{17}\\
& & a_{23} & a_{24} & a_{25} & a_{26} & a_{27} \\
& & & a_{34} & a_{35} & a_{36} & a_{37} \\
& & & & a_{45} & a_{46} & a_{47} \\
& & & & & a_{56} & a_{57} \\
& & & & & & a_{67} \\
\end{pmatrix}.
\end{equation}
We can explicitly state the Pfaffians of $M$, as in \cite{ishikawa2000minor}. Consider the $2n \times 2n$ submatrix defined by deleting the $i$th row and $i$th column of $M$. We define its Pfaffian using the symmetric group $G_i=S_{2n}$ on the set $\{1,\dots,\widehat{i},\dots,2n+1\}$. Let
\[
\mathfrak{S_i} = \left\{ \sigma=(\sigma_1\dots \sigma_{2n}) \in G_i\ \quad \middle\vert \begin{array}{l}
\quad \sigma_{2j-1}<\sigma_{2j} \qquad 1 \leq j \leq n \\
\quad \sigma_{2j-1} < \sigma_{2j+1} \quad 1 \leq j \leq n-1
\end{array}\right\}.
\]
Recall that we can define the $2n \times 2n$ Pfaffian of the submatrix as
\begin{equation}
\text{Pf}_i = \sum_{\sigma \in \mathfrak{S}_i}\text{sgn}(\sigma)a_{\sigma_1 \sigma_2}\cdots a_{\sigma_{2n-1}\sigma_{2n}}
\end{equation}
where $\text{sgn}(\sigma)=(-1)^{\ell (\sigma)}$, with $\ell (\sigma)$ the number of inversions.
It follows that if all the products $a_{\sigma_1 \sigma_2}\cdots a_{\sigma_{2n-1} \sigma_{2n}}$ are contained in an ideal, then the Pfaffian is contained in the ideal. We are working with $7 \times 7$ matrices, so $n=3$ and each Pfaffian is a sum of terms of the form $a_{ij}a_{kl}a_{mn}$. Such a term involves six of the seven indices, of which at most two lie in $\{6,7\}$; hence at least one factor has both indices in $\{1,\dots,5\}$, and in case (I) that factor lies in $(x_0,x_1,x_2)$, so (I) is a possible solution. Moreover, since any $a_{ij}a_{kl}a_{mn}$ must contain a 6 or a 7 among its indices, in case (II) the corresponding factor lies in $(x_0,x_1,x_2)$, so (II) also ensures $I \subset (x_0,x_1,x_2)$.
\end{proof}
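The containment can also be tested mechanically: a polynomial lies in $(x_0,x_1,x_2)$ exactly when it vanishes after setting $x_0=x_1=x_2=0$, so one can build a random matrix in the given format and test its $6 \times 6$ Pfaffians. The sketch below (sympy assumed, with a recursive Pfaffian expansion along the first row; random integer coefficients are our own choice) does this for format (I).

```python
# Random instance of format (I): entries with both (1-based) indices in
# {1,...,5} are linear forms in (x0,x1,x2); entries in columns 6,7 are
# general linear forms on P^4. All 6x6 Pfaffians should lie in (x0,x1,x2),
# i.e. vanish after setting x0 = x1 = x2 = 0.
import random
import sympy as sp

x = sp.symbols('x0:5')  # coordinates x0,...,x4 on P^4
random.seed(0)

def rand_linear(vars_):
    return sum(random.randint(-5, 5)*v for v in vars_)

def pfaffian(A):
    # recursive expansion of the Pfaffian along the first row
    n = A.rows
    if n == 2:
        return A[0, 1]
    keep = lambda j: [k for k in range(n) if k not in (0, j)]
    return sum((-1)**(j + 1) * A[0, j] * pfaffian(A.extract(keep(j), keep(j)))
               for j in range(1, n))

A = sp.zeros(7, 7)
for i in range(7):
    for j in range(i + 1, 7):
        entry = rand_linear(x[:3]) if j <= 4 else rand_linear(x)
        A[i, j], A[j, i] = entry, -entry

rows = lambda i: [k for k in range(7) if k != i]
pfs = [pfaffian(A.extract(rows(i), rows(i))) for i in range(7)]
```

Format (II) can be tested the same way by swapping which entries are drawn from $(x_0,x_1,x_2)$.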
More generally, we see that for any pair $i,j \in \{1,\dots,7\}$ with $i \neq j$ we can define two possible matrices such that $I \subset (x_0,x_1,x_2)$. Case (II) is the case where $a_{kl} \in (x_0,x_1,x_2)$ whenever $k \in \{i,j\}$ or $l \in \{i,j\}$, and the other entries are general linear forms in the coordinates on $\mathbb{P}^4$. Case (I) is the case where $a_{kl} \in (x_0,x_1,x_2)$ for $k,l \notin \{i,j\}$ and the other entries are general. In the language of Tom and Jerry, case (II) corresponds to $\text{Jer}_{67}$ and case (I) corresponds to $\text{Tom}_{67}$.\\
Defining $\Gamma$ using a matrix of the above form gives a curve with two irreducible nonsingular components: $C_1$ of degree 13 and genus 12 as expected, and $l_1$, the line defined by $(x_0,x_1,x_2)$ in $\mathbb{P}^4$. Moreover, the intersection of $C_1$ and $l_1$ is 4 points which define a quartic $q_4$ in $(x_3,x_4)$. This quartic can be mapped into $\mathbb{P}^2$ by adding arbitrary terms in $(x_5)$ to obtain the nonsingular quartic which defines $C_2$. The union of these two curves is a Gorenstein codimension 4 variety in $\mathbb{P}^5$ corresponding to Betti table 2.5.
\subsection{Type 2.8}\label{2.8}
We now consider type 2.8.
\begin{table}[h!]
\[\begin{array}{c|ccccc}
 & 0 & 1 & 2 & 3 & 4 \\ \hline
0 & 1 & - & - & - & - \\
1 & - & 5 & 6 & 2 & - \\
2 & - & 2 & 4 & 2 & - \\
3 & - & 2 & 6 & 5 & - \\
4 & - & - & - & - & 1
\end{array}
\]
\caption*{Type 2.8~\cite{schenck2020calabiyau}}
\label{tab:table3}
\end{table}
An example construction of a Gorenstein curve $C$ in $\mathbb{P}^5$ with such a minimal free resolution of its coordinate ring is as follows. Firstly, note that this variety is of degree 15, which follows from the Hilbert function \cite{schenck2020calabiyau}. Secondly the constraints of regularity 4 mean our curve will again be halfcanonical, of arithmetic genus 16. This ``big ears'' construction works in the following manner. Let $x_0,\dots,x_5$ be coordinates on $\mathbb{P}^5$. The quadrics
\begin{equation}
Q_1=x_0x_4, \quad Q_2=x_1x_4, \quad Q_3=x_2x_5, \quad Q_4=x_3x_5, \quad Q_5=x_4x_5
\end{equation}
have six linear first syzygies and two linear second syzygies, thus satisfying the second row of the Betti table. It follows that $C=C_0 \cup C_1 \cup C_2$, with $C_0 \subset \mathbb{P}^3_{\left<x_0\dots x_3\right>}$, $C_1 \subset \mathbb{P}^2_{\left<x_0:x_1:x_5\right>}$, $C_2 \subset \mathbb{P}^2_{\left<x_2:x_3:x_4\right>}$. Each copy of $\mathbb{P}^2$ intersects the copy of $\mathbb{P}^3$ in a line, hence the term ``big ears'' to refer to the curves embedded into each $\mathbb{P}^2$. $C_0$ is a degree 7, genus 4 curve, residual to the two lines $l_1 \colon x_0=x_1=0$ and $l_2 \colon x_2=x_3=0$ in a $(3,3)$ complete intersection in $\mathbb{P}^3_{\left<x_0\dots x_3\right>}$ containing both lines. Each cubic is thus in the ideal $J=(x_0x_2,x_1x_2,x_0x_3,x_1x_3)$. Let $I=(F_1,F_2)$ be the ideal defining the (3,3) complete intersection in $\mathbb{P}^3$, with
\begin{equation}
\begin{split}
& F_1=l_{02}x_0x_2+l_{12}x_1x_2+l_{03}x_0x_3+l_{13}x_1x_3, \\
& F_2=m_{02}x_0x_2+m_{12}x_1x_2+m_{03}x_0x_3+m_{13}x_1x_3,
\end{split}
\end{equation}
where the $l_{ij},m_{ij}$ are linear forms in $k[x_0,\dots,x_3]$. The ideal defining the residual curve $C_0$ contains a further two quartics. These quartics may be calculated directly from the $2 \times 2$ minors of the matrix
\begin{equation}
M =
\begin{pmatrix}
l_{02} & l_{12} & l_{03} & l_{13} \\
m_{02} & m_{12} & m_{03} & m_{13} \\
\end{pmatrix}.
\end{equation}
One quartic, $H_1$, is double on the line $l_1$ and intersects $l_2$ transversally in 4 points; the situation is reversed for the second quartic $H_2$. Thus mapping $H_1$ into $\mathbb{P}^2_{\left<x_0:x_1:x_5\right>}$ by adding arbitrary terms in $(x_5)$ defines a nonsingular quartic curve $C_1$, and similarly mapping $H_2$ into $\mathbb{P}^2_{\left<x_2\dots x_4\right>}$ by adding arbitrary terms in $(x_4)$ defines a nonsingular quartic curve $C_2$. The union of these three curves is a codimension 4 Gorenstein curve in $\mathbb{P}^5$ whose coordinate ring has free resolution as in Betti table 2.8.
\end{document} |
\begin{document}
\title{\textbf{BIFURCATIONS, ROBUSTNESS AND SHAPE OF ATTRACTORS OF DISCRETE DYNAMICAL SYSTEMS}}
\begin{abstract}
We study in this paper global properties, mainly of topological
nature, of attractors of discrete dynamical systems. We consider the Andronov-Hopf bifurcation for homeomorphisms of the plane and establish some robustness properties for attractors of such homeomorphisms. We also give relations between attractors of flows and quasi-attractors of homeomorphisms in $\mathbb{R}^{n}$. Finally, we give a result on the shape (in the sense of Borsuk) of invariant sets of IFS's on the plane, and make some remarks about the recent theory of Conley attractors for IFS.
\end{abstract}
\noindent \textit{2010 MSC:} {37C70,37G35,37B25,54C56,55P55,28A80}
\newline
\noindent \textit{Keywords:} {Dynamical systems, bifurcation, attractor, robustness, shape, iterated function system.}
The aim of this paper is to study global properties, mainly of topological
nature, of attractors of discrete dynamical systems and the persistence or change of these properties under perturbation or bifurcation of the system.
In the case of attractors of flows, \v{C}ech homology and Borsuk's homotopy theory (or shape theory) have been classically used in the study of the topology of attractors, see \cite{gus, san94, kar}. This study depends, in an essential way, on the homotopies provided by the flow.
However, in the discrete case, the absence of such homotopies makes the study considerably more complicated.
A way to circumvent this difficulty, at least for systems in
the plane, is to use a classical theorem by K. Borsuk \cite{bor} where the shape classification of compacta is given in elementary topological terms.
For this reason, virtually no knowledge of shape theory is required in this
paper as long as we take for granted Borsuk's theorem.
In the first section, we recall some notions and results on dynamical systems, fractal geometry and topology (in particular, shape theory) needed throughout the paper.
In Section 2 we study the Andronov-Hopf bifurcation for homeomorphisms of the plane and robustness properties for attractors of such homeomorphisms. The higher dimensional case and other kind of bifurcations will be treated in a forthcoming paper.
Section 3 is dedicated to study relations between attractors of flows and attractors of homeomorphisms in $\mathbb{R}^{n}$. Here, Conley's notion of quasi-attractor plays a substantial role.
Finally, in Section 4, we study the \v{C}ech homology and shape of fractals in the plane and make some remarks about the recent theory of Conley attractors for IFS.
\section{Preliminaries}
This paper lies in the intersection of three great areas: Dynamical systems, fractal geometry and topology (in particular, shape theory). In this section we recall some basic notions and results used in the paper. For more information we refer the reader to the references cited throughout the section.
We will make use of some classical concepts of dynamical systems which can be found in \cite{bhs,rob04}. In particular, we will be dealing with attractors of flows and homeomorphisms. A compact invariant set $K$ of a flow (resp. a homeomorphism $f$) is said to be an \emph{attractor} if it admits a \emph{trapping region}, i.e., a compact neighborhood $N$ such that $Nt\subset \mathring{N}$ for every $t>0$ (resp. $f(N)\subset \mathring{N}$), satisfying that
$$
K=\bigcap_{t\geq 0}N[t,+\infty)
\ \
(\text{resp. } K=\bigcap_{k=0}^{\infty } f^k (N)).$$
It is well-known that trapping regions are robust objects, in the sense that, if $N$ is a trapping region of a flow (resp. homeomorphism) it is also a trapping region for small perturbations of it \cite{con,ken,hur}.
Finally, we will consider the more general notion of a \emph{quasi-attractor}, which is a non-empty compact invariant set which is intersection of attractors.
The main notions and ideas from fractal geometry needed in the paper can be found in \cite{bar}.
Specifically, we will study iterated function systems defined on Euclidean spaces. An \emph{iterated function system}, or shortly IFS, is a family
$$\mathcal{F}=\{f_1,f_2,\ldots,f_k\}$$
of self-maps $f_i:\mathbb{R}^n\to\mathbb{R}^n$. We will assume that these self-maps are contractive homeomorphisms. Recall that a map $f:\mathbb{R}^n\to\mathbb{R}^n$ is \emph{contractive} if there exists $\lambda \in [0,1)$ such that
$d(f(x),f(y))\leq \lambda d(x,y)$ for every $x,y\in\mathbb{R}^n$.
If $\mathcal{F}$ is an IFS consisting of contractive maps then it has a unique non-empty compact invariant set $K$, i.e.
\[
K=\bigcup_{i=1}^kf_i(K).
\]
Besides, $K$ attracts every non-empty compact subset of $\mathbb{R}^n$ in the Hausdorff metric, defined in the hyperspace
$
\mathcal{H}(\mathbb{R}^n)=\{K\subset\mathbb{R}^n\mid K\;\mbox{is non-empty and compact}\}
$
as
\[
d_H(A,B)=\inf\{\epsilon\geq 0\mid B\subset A_\epsilon\;\mbox{and}\; A\subset B_\epsilon\}
\]
where
$A_\epsilon=\bigcup_{x\in A}\{y\in\mathbb{R}^n\mid d(x,y)\leq\epsilon\} $.
Hence, $K$ is known as the attractor of the IFS.
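The convergence to $K$ in the Hausdorff metric can be illustrated numerically; the toy example below (our own choice, not from the paper) uses the three ratio-$\tfrac12$ contractions of the Sierpinski triangle and checks that the Hutchinson operator $F(S)=\bigcup_i f_i(S)$ contracts $d_H$ by the factor $\lambda=\tfrac12$.

```python
# Toy illustration (not from the paper): the Hutchinson operator of a
# contractive IFS is itself contractive in the Hausdorff metric, so its
# iterates converge to the attractor. Sierpinski-triangle IFS, ratio 1/2.
V = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
fs = [lambda p, v=v: ((p[0] + v[0]) / 2, (p[1] + v[1]) / 2) for v in V]

def hutchinson(S):
    # the operator F(S) = f1(S) u f2(S) u f3(S) on finite point sets
    return {f(p) for f in fs for p in S}

def d_H(A, B):
    # Hausdorff distance between finite point sets, sup metric on R^2
    d = lambda p, q: max(abs(p[0] - q[0]), abs(p[1] - q[1]))
    return max(max(min(d(a, b) for b in B) for a in A),
               max(min(d(a, b) for a in A) for b in B))

S0 = {(0.3, 0.3)}
S1 = hutchinson(S0)
S2 = hutchinson(S1)
# contractivity: d_H(F(A), F(B)) <= (1/2) d_H(A, B)
```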
Finally, we will use some topological notions in the paper, in particular notions and results from shape theory, a form of homotopy theory introduced and studied by K. Borsuk \cite{bor}, which has proved to be especially convenient for the study of the global topological properties of the invariant spaces involved in dynamics. In particular, we will make use of
the following result which characterizes the shape of all plane continua in terms of the number of components of their complements.
\begin{theorem}[\cite{bor}] Two continua $K$ and $L$ contained in $\mathbb{R}^{2}$
have the same shape if and only if they disconnect $\mathbb{R}^{2}$ in the
same number (finite or infinite) of connected components.
In particular:
\begin{enumerate}[i)]
\item A continuum has trivial shape (the shape of a point) if and only if it does not disconnect $\mathbb{R}^{2}$.
\item A continuum has the shape of a circle if and only if it
disconnects $\mathbb{R}^{2}$ into exactly two connected components.
\item Every other continuum has the
shape of a wedge of circles, finite, or infinite (Hawaiian earring).
\end{enumerate}
\end{theorem}
We will denote by $\check{H}_*$ and $\check{H}^*$ the \v{C}ech homology and cohomology functors respectively. \v{C}ech homology (cohomology) theory has proved to be important in dynamical systems since, for attractors, it agrees with the homology of the basin of attraction in the case of flows defined on manifolds (more generally, ANRs) \cite{gus,san11}, and in the case of homeomorphisms of
manifolds, if we take coefficients in a field \cite{rps}.
In both cases it must be finitely generated. Moreover, \v Cech homology and cohomology are shape invariants.
We also recall that a compact subset $K$ of $\mathbb{R}^n$ is said to be \emph{cellular} if it has a neighborhood basis consisting of topological balls. We will make use of the fact that, if $K$ is a cellular subset of $\mathbb{R}^n$, then $\mathbb{R}^n\setminus K$ is homeomorphic to $\mathbb{R}^n\setminus \{p\}$, where $p\in\mathbb{R}^n $ \cite{dav}.
In particular $\mathbb{R}^n\setminus K$ is connected. Cellular sets are instances of continua with trivial shape.
For general information on algebraic topology we recommend the book by {Spa\-nier} \cite{spa}.
For a complete treatment of shape theory we refer the reader to \cite{bor,cop,dys,mar,mas}.
The use of shape in dynamics is illustrated by the papers
\cite{gmrs,gis,gus,has,kar,ros,rob99,sag}.
\section{Bifurcations and robustness of attractors of {ho\-meo\-mor\-phisms} of the plane}
In this section we prove some results related to the Andronov-Hopf
bifurcation theorem for diffeomorphisms of the plane. This result was proved
by Naimark \cite{nai}, Sacker \cite{sac} and Ruelle and Takens \cite{rut}. The name of Andronov-Hopf bifurcation is commonly used because
of the connection with the Andronov-Hopf bifurcation for differential
equations. The Andronov-Hopf bifurcation for a diffeomorphism occurs when a
pair of eigenvalues for a fixed point changes from absolute value less than
one to absolute value greater than one, i.e., the fixed point changes from
stable to unstable by a pair of eigenvalues crossing the unit circle. The
formulation of the theorem and its proof are rather complicated. In the case
of flows there are some topological versions of the theorem where the
hypotheses are simpler and the conclusions are weaker but significant, see \cite{san07}.
To reach these conclusions, a heavy use is made of the homotopies induced by
the flow. In the case of discrete dynamical systems induced by
homeomorphisms, we do not have such homotopies at our disposal. However, by
using Borsuk's theorem on the shape classification of plane continua \cite{bor}, we are able to prove a result which can be considered as a
topological Andronov-Hopf bifurcation theorem for homeomorphisms of the
plane. We also prove in this section a result concerning robustness of some
global properties of attractors of homeomorphisms of the plane (formulated in
the language of shape theory and \v{C}ech homology). This result was already
known for flows (see \cite{san95}).
However, as in the previous result, we cannot use the flow-induced homotopies which are
essential in the proof of the continuous case.
We use, instead, Borsuk's theorem again to give a discrete counterpart of that result.
\begin{theorem}
Suppose that $\mathbf{\Phi }=(\Phi _{\lambda })_{\lambda \in \mathbb{R}}:\mathbb{R}^{2}\times \mathbb{R}\rightarrow \mathbb{R}^{2}$ is a continuous
one-parameter family of homeomorphisms of the plane such that
\begin{enumerate}[(a)]
\item the origin is a fixed point of $\Phi _{\lambda }$ for $\lambda $ near $0$,
\item the origin is an attractor for $\lambda =0$,
\item the origin is a repeller for $\lambda >0$.
\end{enumerate}
Then there exists a $\lambda _{0}>0$ such that for every $\lambda $ with
$0<\lambda \leq \lambda _{0}$ there exists an attractor $K_{\lambda }$ of
$\Phi _{\lambda }$ with $Sh(K_{\lambda })=Sh(\mathbb{S}^{1})$. In particular,
the \v{C}ech homology and cohomology of $K_{\lambda }$ agree with those of
the circle. Moreover, the attractors $K_{\lambda }$ surround the fixed point
$\{0\}$ and shrink to it when $\lambda \rightarrow 0$.
\end{theorem}
\begin{proof}
Since $\{0\}$ is an attractor for $\Phi _{0}$, there exists a trapping
region, i.e. a compact neighborhood $N$ of $\{0\}$ such that $\Phi
_{0}(N)\subset \mathring{N}$ and $\{0\}=\bigcap_{n=0}^{\infty } \Phi _{0}^{n} (N)$.
Select a closed disk $D$ centered at $\{0\}$
and contained in $\mathring{N}$ and a $k>0$ such that $\Phi
_{0}^{k}(N)\subset \mathring{D}$. By the continuity of the family $\Phi $
there is a $\lambda _{0}$ such that $N$ is also a trapping region for $\Phi
_{\lambda }$ and $\Phi _{\lambda }^{k}(N)\subset \mathring{D}$ (and hence
$\Phi _{\lambda }^{k}(D)\subset \mathring{D}$) for every $\lambda $ with
$\lambda \leq \lambda _{0}$. Then for every $\lambda \leq \lambda _{0}$ there
exists an attractor $A_{\lambda }$ of $\Phi _{\lambda }$ with $A_{\lambda
}=\bigcap _{n=0}^{\infty }\Phi _{\lambda }^{n}(N)$. Since $\Phi _{\lambda
}^{k}(N)\subset \mathring{D}$ we have that $\Phi _{\lambda }^{(n+1)k}(N)$ is
contained in $\Phi _{\lambda }^{nk}(\mathring{D})$ for every $n\geq 1$ and,
thus, $\Phi _{\lambda }^{(n+1)k}(D)$ is also contained in $\Phi _{\lambda
}^{nk}(\mathring{D})$ (which agrees with the interior of $\Phi _{\lambda
}^{nk}(D)$) and
$$
A_{\lambda }=\bigcap_{n=1}^{\infty } \Phi _{\lambda }^{(n+1)k}(N)\subset \bigcap_{n=1}^{\infty } \Phi _{\lambda }^{nk}(D)\subset \bigcap_{n=1}^{\infty } \Phi _{\lambda }^{nk}(N)=A_{\lambda }.
$$
Hence $A_{\lambda }$ is the intersection of the topological disks $\Phi
_{\lambda }^{nk}(D)$, and the fact that $\Phi _{\lambda }^{(n+1)k}(D)$ is
contained in the interior of $\Phi _{\lambda }^{nk}(D)$ for every $n$
implies that $A_{\lambda }$ is a cellular set. Although we will not need it,
we point out that an alternative description of $A_{\lambda }$ is
$$
A_{\lambda }=\{y\in \mathbb{R}^{2}|\text{ there exist sequences }x_{n}\in D,
\text{ }k_{n}\rightarrow \infty \text{ with }\Phi _{\lambda
}^{k_{n}}(x_{n})\rightarrow y\}.
$$
Then we have for every $\lambda \leq \lambda _{0}$ a cellular attractor
$A_{\lambda }\subset \mathring{D}$ of $\Phi _{\lambda }$ such that $D$ is
contained in its basin of attraction, which we denote by $\mathcal{A} _{\lambda }$.
We remark that $\{0\}$ is a fixed point of $\Phi _{\lambda }$
contained in $A_{\lambda }$. Moreover, since $\{0\}$ is a repeller for $\Phi
_{\lambda }$ we must also have that its basin of repulsion $\mathcal{R} _{\lambda }$ is contained in $A_{\lambda }$. Notice that $\mathcal{R} _{\lambda }$ is open and connected.
Now we define $K_{\lambda }=A_{\lambda }\setminus \mathcal{R}_{\lambda }$. Then $K_{\lambda }$ is a connected attractor of $\Phi _{\lambda }$ whose basin of attraction is
$\mathcal{A}_{\lambda }\setminus \{0\}$.
Since $\mathbb{R}^{2}\setminus K_{\lambda }=(\mathbb{R}^{2}\setminus A_{\lambda })\cup
\mathcal{R}_{\lambda }$ we have that $K_{\lambda }$ decomposes the plane
into two connected components and, by Borsuk's theorem on shape
classification of plane continua \cite{bor}, that $Sh(K_{\lambda })=Sh(\mathbb{S} ^{1})$.
Moreover $K_{\lambda }$ surrounds $\{0\}$ because the origin is in
$\mathcal{R}_{\lambda }$ which is the bounded component of $\mathbb{R} ^{2}\setminus K_{\lambda }$.
We remark that the disk $D$ could have been chosen
arbitrarily small and that $A_{\lambda }\subset D$ for $\lambda $
sufficiently small. Hence the attractors $A_{\lambda }$ (and consequently
also the attractors $K_{\lambda }$) converge to the origin.
\end{proof}
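A concrete local model illustrating the theorem (our own illustration, not part of the text) is the map that, in polar coordinates, acts as $r\mapsto (1+\lambda )r-r^{3}$ composed with a rotation in the angle: for $\lambda =0$ the origin attracts, while for $0<\lambda <1$ it repels and the circle $r=\sqrt{\lambda }$ plays the role of the attractor $K_{\lambda }$, shrinking to the origin as $\lambda \rightarrow 0$. (The map is a homeomorphism only near the origin, so this is a local sketch.) A minimal numerical check:

```python
import math

def phi(lam, z):
    # Local model of the discrete Andronov-Hopf bifurcation:
    # in polar coordinates r -> (1 + lam) r - r^3, theta -> theta + 1.
    # For lam = 0 the origin attracts; for 0 < lam < 1 it repels and the
    # circle r = sqrt(lam) attracts (radial derivative there is 1 - 2 lam).
    x, y = z
    r = math.hypot(x, y)
    t = math.atan2(y, x) + 1.0
    r = (1.0 + lam) * r - r ** 3
    return (r * math.cos(t), r * math.sin(t))

def orbit_radius(lam, z0, n=200):
    # Distance to the origin after n iterations.
    z = z0
    for _ in range(n):
        z = phi(lam, z)
    return math.hypot(*z)
```

Orbits started inside or outside the circle $r=\sqrt{\lambda }$ converge to it, while for $\lambda =0$ they converge (slowly, since the fixed point is not hyperbolic) to the origin.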
The following result states that attractors of homeomorphisms of the plane
are robust, in the sense that some of their global properties are preserved
under small perturbations of the homeomorphism.
\begin{theorem}
Suppose that $\mathbf{\Phi }=(\Phi _{\lambda })_{\lambda \in \mathbb{R}}: \mathbb{R}^{2}\times \mathbb{R}\rightarrow \mathbb{R}^{2}$ is a continuous
one-parameter family of homeomorphisms and that $A$ is an attractor of $\Phi
_{0}$. Then there exists $\lambda _{0}>0$ such that for every $\lambda \leq
\lambda _{0}$ there exists an attractor $A_{\lambda }$ of $\Phi _{\lambda }$
with $Sh(A_{\lambda })=Sh(A)$. In particular $A_{\lambda }$ has the same
\v{C}ech homology and cohomology as $A$. Moreover $A_{\lambda }\rightarrow A$
when $\lambda \rightarrow 0$ (i.e. all $A_{\lambda }$ are contained in an
arbitrary neighborhood of $A$ in $\mathbb{R}^{2}$ for $\lambda $
sufficiently small).
\end{theorem}
\begin{proof}
An argument similar to the one used in the proof of Theorem 2.1 shows that if $N$ is a trapping region for the attractor $A$ of $\Phi _{0}$ then $N$ is
also a trapping region for every $\Phi _{\lambda }$ with $\lambda $
sufficiently small. Hence there exists an attractor $A_{\lambda }$ for $\Phi
_{\lambda }$ corresponding to the trapping region $N$.
On the other hand, by Corollary 2 below (see also \cite{rps}), $\mathbb{R}^{2}\setminus A$ has a finite number of connected components (say $r$).
Then there exists a topological disk with $r-1$ holes $D\subset N$ containing $A$
in its interior and such that the inclusion $j:A\hookrightarrow D$ is a shape
equivalence. Moreover, as in the proof of Theorem 2.1, there exists a $k>0$
such that $A$ is the intersection of the sets $\Phi ^{(n+1)k}(D)\subset \Phi
^{nk}(\mathring{D})$ with $n=0,1,\dots $. Since $j:A\hookrightarrow D$ is a shape
equivalence and $\Phi ^{k}(D)$ is also a topological disk with $r-1$ holes
contained in $D$ and containing $A$, then the inclusion $\Phi
^{k}(D)\hookrightarrow D$ is a homotopy equivalence.
Now, as in Theorem 2.1, $A_{\lambda }$ is the intersection of the sets $\Phi _{\lambda
}^{(n+1)k}(D)\subset \Phi _{\lambda }^{nk}(\mathring{D})$ with $n=0,1,\dots $.
If we choose $\lambda $ sufficiently small, then the inclusion $\Phi
_{\lambda }^{k}(D)\hookrightarrow D$ is also a homotopy equivalence and, thus,
the inclusion $\Phi _{\lambda }^{(n+1)k}(D)\hookrightarrow \Phi _{\lambda
}^{nk}(D)$ is a homotopy equivalence for every $n$ as well. Hence, since
$A_{\lambda }=\bigcap _{n=0}^{\infty }\Phi _{\lambda }^{nk}(D)$, it follows that $A_{\lambda }$ has the shape of a topological disk with $r-1$ holes and,
thus, $Sh(A_{\lambda })=Sh(A)$.
The last part of the theorem is a
consequence of the fact that $D$ can be chosen arbitrarily small.
\end{proof}
\section{Quasi-attractors of flows and attractors of {ho\-meo\-mor\-phisms} of $\mathbb{R}^{n}$}
It is well-known that there are continua in $\mathbb{R}^{n}$ which are
attractors of homeomorphisms of $\mathbb{R}^{n}$ but which are not
attractors of flows defined on $\mathbb{R}^{n}$. One such example is the
solenoid in $\mathbb{R}^{3}$. It is, however, interesting to explore
possible relations between these two notions. An interesting connection can
be found by using the notion of quasi-attractor of a flow. This notion,
introduced by C. Conley, plays an important role in his study of the chain
recurrent set and the gradient structure of a flow \cite{con}. There is a similar notion of quasi-attractor of a homeomorphism, which was used by J. Kennedy \cite{ken}, and M. Hurley \cite{hur} in their study of the generic properties of discrete dynamical systems.
\begin{definition}
Let $\varphi :\mathbb{R}^{n}\times \mathbb{R}\rightarrow \mathbb{R}^{n}$ be
a flow. A non-empty compactum $K\subset \mathbb{R}^{n}$ is said to be a
quasi-attractor of $\varphi $ if it is the intersection of a family of
attractors of $\varphi $. The non-empty compactum $K$ is a tame quasi-attractor of $\varphi $ if it is the intersection of a nested sequence of attractors of $\varphi $, $A_{1}\supset A_{2}\supset \dots \supset A_{n}\supset \dots $, all of
them homeomorphic.
\end{definition}
Our next result shows that tame quasi-attractors of flows share with
attractors of flows the following important property.
\begin{theorem}
Let $K$ be a tame quasi-attractor of a flow on $\mathbb{R}^{n}$. Then the
\v{C}ech homology and cohomology of $K$ with coefficients in a field are
finitely generated in every dimension.
\end{theorem}
\begin{proof}
Suppose that $K$ is the intersection of a nested sequence of attractors
$$
K_{1}\supset K_{2}\supset \dots \supset K_{k}\supset K_{k+1}\supset \dots
$$
of a flow $\varphi :\mathbb{R}^{n}\times \mathbb{R}\rightarrow \mathbb{R}^{n}$, all
of them homeomorphic. Every inclusion
$j_{k}:K_{k+1}\hookrightarrow K_{k}$
induces a
homomorphism $\check{H}_{r}(j_{k}):\check{H}_{r}(K_{k+1})\rightarrow
\check{H}_{r}(K_{k})$ and in this way we obtain an inverse sequence of
vector spaces
$$
\check{H}_{r}(K_{1})\leftarrow \check{H}_{r}(K_{2})\leftarrow \dots \leftarrow \check{H}_{r}(K_{k})\leftarrow
\check{H}_{r}(K_{k+1})\leftarrow \dots
$$
where coefficients are taken in a field. Then, by the continuity of \v{C}ech
homology, we have that $\check{H}_{r}(K)=\underleftarrow{\lim }\check{H} _{r}(K_{k})$. It is known that the \v{C}ech homology of an attractor is
finitely generated (indeed attractors of flows have finite polyhedral
shape), see \cite{gus, san94, kar}. Since the attractors $K_{k}$ are
homeomorphic we have that all $\check{H}_{r}(K_{k})$ are vector spaces
isomorphic to a finite-dimensional vector space $V$ (the same for every $k$)
and the previous inverse sequence takes (up to isomorphism) the form
$$
V\leftarrow V\leftarrow \dots \leftarrow V\leftarrow V\leftarrow \dots
$$
where every arrow represents an endomorphism $h_{k}:V\rightarrow V$ (which
can vary with $k$). Since $V$ is finite-dimensional, for every $k$
there exists a subspace $V_{k}$ of $V$ such that $V_{k}$ is the image of the
composition of homomorphisms $h_{k}\circ h_{k+1}\circ \dots \circ h_{k+l}$
for every $l\in \mathbb{N}$ except, perhaps, for a finite number of them.
Replacing the inverse sequence, if necessary, by a subsequence we can assume
that the image of the restriction
$h_{k}|V_{k+1}:V_{k+1}\rightarrow V$
is $h_{k}(V_{k+1})=V_{k}$. Since $\dim (V_{k})\leq \dim (V_{k+1})$ and $V$ is finite-dimensional, necessarily $\dim (V_{k})$ is
the same for almost every $k$ and, hence, $h_{k}|V_{k+1}:V_{k+1}\rightarrow
V_{k}$ is an isomorphism for almost every $k$. Moreover, the
inverse limit of
$$
V\leftarrow V\leftarrow \dots \leftarrow V\leftarrow V\leftarrow \dots
$$
agrees with the inverse limit of
$$
V_{1}\leftarrow V_{2}\leftarrow \dots \leftarrow V_{k}\leftarrow V_{k+1}\leftarrow \dots
$$
where the arrows are the homomorphisms $h_{k}|V_{k+1}:V_{k+1}\rightarrow
V_{k}$. Since almost all these arrows are isomorphisms, the inverse limit of
the former sequence is isomorphic to $V_{k}$ for sufficiently high $k$.
Hence $\check{H}_{r}(K)$ is finitely generated. The proof for cohomology is
similar.
\end{proof}
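The finite-dimensionality argument above (images of long compositions stabilize, and the stabilized restrictions become isomorphisms) can be checked concretely. A small sketch of the rank bookkeeping (ours, purely illustrative; the three endomorphisms of $V=\mathbb{R}^{3}$ are arbitrary choices):

```python
import numpy as np

# Three endomorphisms of V = R^3; compositions can lose rank only
# finitely many times, so rank(h_k h_{k+1} ... h_{k+l}) stabilizes in l.
h = [np.diag([1.0, 1.0, 0.0]),                       # rank-2 projection
     np.array([[1.0, 0.0, 0.0],
               [0.0, 0.0, 1.0],
               [0.0, 0.0, 0.0]]),                    # rank 2
     np.diag([1.0, 0.0, 0.0])]                       # rank-1 projection

def composition_ranks(maps, k, l_max):
    # Ranks of the partial compositions h_k, h_k h_{k+1}, ...,
    # cycling through the list of maps.
    P = np.eye(3)
    ranks = []
    for l in range(l_max):
        P = P @ maps[(k + l) % len(maps)]
        ranks.append(int(np.linalg.matrix_rank(P)))
    return ranks
```

Here the ranks drop from $2$ to $1$ and then remain constant, mirroring the stabilized subspaces $V_{k}$ of the proof.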
We prove in the following result that attractors of homeomorphisms and tame
quasi-attractors of flows are intimately related.
\begin{theorem}
Every continuum of $\mathbb{R}^{n}$ is a quasi-attractor of a flow on $\mathbb{R}^{n}$. Every connected attractor of a homeomorphism of $\mathbb{R} ^{n}$ is a tame quasi-attractor of a flow on $\mathbb{R}^{n}$.
\end{theorem}
\begin{proof}
Every continuum $K$ of $\mathbb{R}^{n}$ is the intersection of a nested
sequence of compact connected $n$-manifolds with boundary $(M_{k})_{k\in
\mathbb{N}}$ with $M_{k+1}\subset \mathring{M}_{k}$. Consider for every $k\in \mathbb{N}$
a collar $N_{k}$ of $\partial M_{k}$ which is, by definition, an open
neighborhood $N_{k}$ of $\partial M_{k}$ in $M_{k}$ homeomorphic to $\partial
M_{k}\times \lbrack 0,1)$ by a homeomorphism
$$h_{k}:\partial M_{k}\times \lbrack 0,1)\rightarrow N_{k}.$$
We may assume that $\bar{N}_{k} \cap M_{k+1}=\emptyset $. We define a flow in $\partial M_{k}\times \lbrack 0,1/2]
$ such that all points in $(\partial M_{k}\times \{0\})\cup (\partial
M_{k}\times \{1/2\})$ are equilibria and the trajectories of points $(x,t)\in \partial M_{k}\times (0,1/2)$ go along $\{x\}\times (0,1/2)$
connecting $(x,0)$ to $(x,1/2)$. By using the homeomorphism $h_{k}$ we carry
this flow to a flow defined in a subset of $N_{k}$. The non-stationary
trajectories of all the flows just defined form a family $\mathcal{C}$ of
oriented curves filling an open set $U$ of $\mathbb{R}^{n}$ which is regular
in the sense of Whitney \cite{whi}. We recall that a family of oriented curves $\mathcal{C}$ is regular if given an oriented arc $pq\subset \gamma \in
\mathcal{C}$ and $\epsilon >0$ there exists $\delta >0$ such that if
$p'\in \gamma '\in \mathcal{C}$ and $d(p,p')<\delta $ then there is a point $q'\in \gamma '$ such
that the oriented arcs $pq$ and $p'q'$ have a parameter
distance less than $\epsilon $ (that is, there exist parametrizations
$f:[0,1]\rightarrow pq$ and $f':[0,1]\rightarrow p'q'$ such that $d(f(t),f'(t))<\epsilon $ for every $t\in
[ 0,1]$). Consider now the partition $\mathcal{D}$ of $\mathbb{R}^{n}$
given by $\mathcal{C}$ and the singletons corresponding to the points of $\mathbb{R}^{n}$ not lying in such trajectories. By Whitney's Theorem 27A in \cite{whi}
there exists a flow $\varphi $ in $\mathbb{R}^{n}$ whose oriented
trajectories correspond to the elements of $\mathcal{D}$. It is easy to see
that $K_{k}=M_{k}\setminus h_{k}(\partial M_{k}\times [ 0,1/2))$ is an
attractor for every $k$ and that $K=\bigcap _{k\in \mathbb{N} } K_{k}$. Moreover, if $K$ is an
attractor of a homeomorphism $f$ of $\mathbb{R}^{n}$ then the manifolds $M_{k}$ can be taken as images $f^{n_{k}}(M)$ of a neighborhood $M$ of $K$, which is also a manifold, and the collars $N_{k}$ can be taken as $f^{n_{k}}(N)$ where $N$ is a collar of $\partial M$ in $M$. Then the
manifolds $M_{k}$ are all homeomorphic to each other, and so are the attractors $K_{k}$.
Hence $K$ is a tame quasi-attractor.
\end{proof}
\begin{corollary}
Let $K$ be a connected attractor of a homeomorphism of $\mathbb{R}^{n}$.
Then there is a flow $\varphi $ on $\mathbb{R}^{n}$ such that $K$ is the
limit in the Hausdorff metric of a sequence of attractors of $\varphi ,$ all
of them homeomorphic.
\end{corollary}
As a consequence of Theorems 3.2 and 3.3 we obtain the following result, which was proved by F. Ruiz del Portal and J.J. S\'{a}nchez-Gabites for cohomology with coefficients in $\mathbb{Q}$ and $\mathbb{Z}_p$ ($p$ prime) in \cite{rps}, where cohomological properties of attractors of discrete dynamical systems were studied.
\begin{corollary}
Let $K$ be a connected attractor of a homeomorphism of $\mathbb{R}^{n}$.
Then the \v{C}ech homology and cohomology of $K$ with coefficients in a field are
finitely generated in every dimension.
\end{corollary}
\section{Fractals in the plane and Conley attractors for IFS}
It is well-known that an attractor of a flow on a manifold must have the shape of a finite polyhedron \cite{gus, san94, kar}.
This fact imposes a strong restriction on the family of plane continua which may be attractors of flows. In particular, a continuum with the
shape of the Hawaiian earring cannot be an attractor of a flow in the plane.
Similarly, we can ask which shapes fractals in the plane may have.
It can be easily shown, by using the Collage Theorem \cite{behl}, that all shapes are possible among the connected attractors of
IFS of contractive similarities in the plane. The following is an example of
one such fractal, with the shape of the circle.
\begin{example}
Consider the IFS with six similarities $h_1 , h_2 ,\dots ,h_{6} $, given by:
\begin{align*}
h_1 (x,y)&=\left( \dfrac{19}{30} x,\dfrac{19}{30} y\right) &
h_2 (x,y)&=\left( \dfrac{1}{2}\, \dfrac{11}{30} +\dfrac{19}{30} x,\dfrac{\sqrt{3}}{2}\, \dfrac{11}{30} +\dfrac{19}{30} y\right) \\
h_3 (x,y)&=\left( \dfrac{11}{30} +\dfrac{19}{30} x,\dfrac{19}{30} y\right) &
h_4 (x,y)&=\left( \dfrac{1}{2} x,\dfrac{1}{2} y\right) \\
h_5 (x,y)&=\left( \dfrac{1}{4} +\dfrac{x}{2} ,\dfrac{\sqrt{3}}{4} +\dfrac{y}{2} \right) &
h_6 (x,y)&=\left( \dfrac{1}{2} +\dfrac{x}{2} , \dfrac{y}{2} \right)
\end{align*}
The invariant set of the IFS and its image under the similarities are shown in Figure 1.
\begin{figure}
\caption{The invariant set of the IFS and its image under the similarities}
\end{figure}
If we consider the IFS formed by just the last three maps, the invariant set is the classical Sierpinski gasket (Figure 2, right). On the other hand, the invariant set for the IFS formed by the first three maps is shown in Figure 2 (left).
\begin{figure}
\caption{The invariant set of the IFS formed by the first three maps (left) and the three last maps (right)}
\end{figure}
\end{example}
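The invariant set of the example can be approximated numerically by the chaos game: starting from any point, repeatedly apply a randomly chosen map of the IFS; after a short transient, the orbit accumulates on the attractor. A sketch (our own illustration, using only the six similarities listed above):

```python
import math
import random

# The six similarities of the example (contraction factors 19/30 and 1/2).
S = [
    lambda x, y: (19/30 * x, 19/30 * y),
    lambda x, y: (11/60 + 19/30 * x, math.sqrt(3) * 11/60 + 19/30 * y),
    lambda x, y: (11/30 + 19/30 * x, 19/30 * y),
    lambda x, y: (x / 2, y / 2),
    lambda x, y: (1/4 + x / 2, math.sqrt(3) / 4 + y / 2),
    lambda x, y: (1/2 + x / 2, y / 2),
]

def chaos_game(n=20000, seed=1):
    # Random iteration: the tail of the orbit approximates the invariant set.
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    pts = []
    for i in range(n):
        x, y = rng.choice(S)(x, y)
        if i >= 100:                 # discard the transient
            pts.append((x, y))
    return pts
```

All iterates stay in the triangle with vertices $(0,0)$, $(1,0)$ and $(1/2,\sqrt{3}/2)$, which is invariant under the six maps; plotting the points reproduces the set of Figure 1.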
However, if we consider only continua with empty interior, not all shapes remain possible. In fact, under this simple restriction only two shapes are possible, exemplified by the Von Koch curve and the Sierpinski triangle.
These are two extreme cases among the shapes in the plane, corresponding to trivial shape and the shape of the Hawaiian earring, respectively.
The intermediate shapes of finite bouquets of circles are excluded.
In particular, there are no connected attractors of IFS of contractive similarities in the plane having empty interior and the shape of the circle.
This is in sharp contrast with the situation for flows or homeomorphisms.
We have the following general result in this direction.
\begin{theorem}
Let $\mathcal{F}$ be an iterated function system of $\mathbb{R}^{2}$
consisting of contractive homeomorphisms and suppose that the attractor $K$ of $\mathcal{F}$ is a continuum with empty interior. Then $K$ has the shape of a point or the shape of the Hawaiian earring.
As a consequence, $\check{H} _{1}(K)$ (with coefficients in $\mathbb{Z}$) is either trivial or isomorphic to $\prod_{n=1}^{\infty } \mathbb{Z}$.
\end{theorem}
\begin{proof}
Suppose on the contrary that the continuum $K$ has the shape of a finite bouquet of circles and that there exists an IFS $\mathcal{F} =\{h_{1},\dots ,h_{r}\}$ consisting of contractive homeomorphisms of $\mathbb{R} ^{2}$ having $K$ as its limit. Then
$$K=h_{1}(K)\cup \dots \cup h_{r}(K).$$
We shall show that this is impossible. In fact, we shall show that there is no contractive homeomorphism $h$ of $\mathbb{R}^{2}$ such that $h(K)\subset K$. Suppose, to the contrary, that such an $h$ exists; we shall derive a contradiction.
Let $A$ be a bounded component of $\mathbb{R}^{2}\setminus K$ (which exists because we are assuming that $K$ does not have trivial shape). We claim that there exists $a\in A$ such that $h(a)\in B$, where $B$ is also a bounded component of $\mathbb{R}^{2}\setminus K$. Otherwise $h(A)\subset C\cup K$, where $C$ is the unbounded component of $\mathbb{R}^{2}\setminus K$. Moreover, $h(A)$ is bounded since it is contained in the compact set $h(\bar{A})$.
Since $\mathring{K}$ is empty we have that $h(A)\cap C\neq \emptyset $.
Select a point $b\in h(A)\cap C$.
We know that $K$ has a system of neighborhoods in $\mathbb{R}^{2}$ consisting of (topological) disks with holes and, thus, there exists a disk $D_{0}$ containing $K$ in its interior with $b\notin D_{0}$.
Since $h(A)$ is bounded there exists a bigger disk $D_{1}$ that contains $h(A)$ and such that $D_{0}\subset \mathring{D}_{1}$.
It is easy to see that in this situation there are points in $\partial h(A)$ contained in $D_{1}\setminus \mathring{D}_{0}$ and, hence, these points are not in $K$. However $\partial h(A)=h(\partial
A)\subset h(K)\subset K$ and this is a contradiction.
Now select a bounded component $A$ of $\mathbb{R}^{2}\setminus K$ such that $l=\diam (\partial A)=\diam(\bar{A})$ is minimal among all the bounded
components. The component $A$ and the number $l$ are well defined because $\mathbb{R}^{2}\setminus K$ has a finite number of components by our hypothesis about
the shape of $K$. We have seen that there exists $a\in A$ such that $h(a)\in
B$ (also bounded). Let $m=\diam(\partial B)$ (hence $l\leq m$). If $k<1$ is
the contractivity factor of $h$ we have that $kl<m$. There exists a disc $D'\subset B$ such that $h(a)\in D'$ and $\diam D'>kl$. Moreover $h(\partial A)\subset K\subset \mathbb{R}^{2}\setminus D'$. Select a disk $D\subset A$ containing $a$ and such that $\partial D$ is
near $\partial A$ so that $h(\partial D)\subset \mathbb{R}^{2}\setminus D'$.
We have then the following situation: 1) $h(a)\in D'$ and $h(a)\in
h(D)$ (which is also a disk), 2) $h(\partial D)\subset \mathbb{R}^{2}\setminus D'$. Hence $D'\subset h(D)$ and $\diam D'\leq \diam(h(D))$. However $\diam D'>kl$ and $\diam h(D)\leq kl$. This
contradiction establishes our result.
\end{proof}
\begin{corollary}
Let $\mathcal{F}$ be an iterated function system of $\mathbb{R}^2 $ consisting of contractive homeomorphisms and suppose that the attractor $K$ of $\mathcal{F}$ is a continuum with Hausdorff dimension less than 2. Then $K$ has the shape of a point or the shape of the Hawaiian earring.
\end{corollary}
We finish with a few remarks concerning Conley attractors for IFS, a theory
recently developed by Barnsley and Vince \cite{bav}.
The following remarks are presented without much detail; they are intended to suggest a possible line of future research.
Barnsley and Vince considered invertible systems $\mathcal{F}$ consisting of homeomorphisms of a compact metric space $X$ with no contractivity requirements.
A compact set $A\subset X$ is said to be a Conley attractor of $\mathcal{F}$ if there is an open set $U$ such that $A\subset U$ and $A=\lim_{k\rightarrow \infty } \mathcal{F}^{k}(\bar{U})$, where the limit is taken in the Hausdorff metric. They prove that such attractors are characterized by the existence of arbitrarily small attractor blocks, i.e. neighborhoods $Q$ of $A$ such that $\mathcal{F}(\bar{Q})\subset \mathring{Q}$ and
\[ A=\bigcap_{k=1}^{\infty }\mathcal{F}^{k}(\bar{Q}). \]
An important ingredient in the Conley theory of flows is the notion of
continuation of an isolated invariant set. By using Barnsley and Vince
theorem on the existence of arbitrarily small attractor blocks it is
possible to give a meaning to the notion of continuation of a Conley
attractor for IFS. The following result can be easily proved.
\begin{theorem}
Let $\mathcal{F}_{\lambda }$, with $\lambda \in [0,1]$, be a
parametrized family (depending continuously on $\lambda $) of invertible IFS
consisting of homeomorphisms of a compact metric space $X$ and let $K_{0}$
be a Conley attractor for $\mathcal{F}_{0}$. Then there is a $\lambda _{0}$
such that for every $\lambda \leq \lambda _{0}$ there exists a Conley
attractor $K_{\lambda }$ of $\mathcal{F}_{\lambda }$ and $K_{\lambda
}\rightarrow K_{0}$ (i.e. all $K_{\lambda }$ are contained in an arbitrary
neighborhood of $K_{0}$ in $X$ for $\lambda $ sufficiently small).
\end{theorem}
The convergence is weaker than in the Hausdorff metric and the Conley
attractors $K_{\lambda }$ are uniquely determined, in the sense that they
are the only ones sharing the attractor block $Q$ with $K_{0}$. If we
require contractivity conditions in the basin of $K_{0}$ we can achieve
convergence in the Hausdorff metric. In this case we would get more general
versions of the ``blowing in the wind'' theorem \cite{bar}.
It would be interesting to study more general conditions implying convergence in the Hausdorff metric.
\begin{problem}
Study relations between the shape of the Conley attractor $K_{0}$ and the
shape of its continuations $K_{\lambda }$.
\end{problem}
\centerline{\scshape H\'{e}ctor Barge}
{\footnotesize
\centerline{E.T.S. Ingenieros inform\'{a}ticos}
\centerline{Universidad Polit\'{e}cnica de Madrid}
\centerline{28660 Madrid, Spain}
\centerline{\noindent \textit{email:} {[email protected]}}}
\centerline{\scshape Antonio Giraldo}
{\footnotesize
\centerline{E.T.S. Ingenieros inform\'{a}ticos}
\centerline{Universidad Polit\'{e}cnica de Madrid}
\centerline{28660 Madrid, Spain}
\centerline{\noindent \textit{email:} {[email protected]}}}
\centerline{\scshape Jos\'{e} M.R. Sanjurjo}
{\footnotesize
\centerline{Facultad de Ciencias Matem\'{a}ticas}
\centerline{Universidad Complutense de Madrid}
\centerline{28040 Madrid, Spain}
\centerline{\noindent \textit{email:} {jose\[email protected]}}}
\end{document}
\begin{document}
\title{A rational Arnoldi approach for ill-conditioned linear systems}
\author{C. Brezinski\thanks{
Laboratoire Paul Painlev\'e, UMR CNRS 8524, UFR de Math\'ematiques Pures et
Appliqu\'ees, Universit\'e des Sciences et Technologies de Lille,
59655--Villeneuve d'Ascq cedex, France. E--mail: \texttt{
[email protected]}} \and P. Novati \thanks{
Universit\`a degli Studi di Padova, Dipartimento di Matematica Pura ed
Applicata, Via Trieste 63, 35121--Padova, Italy. E--mail: \texttt{
[email protected]}} \and M. Redivo--Zaglia \thanks{
Universit\`a degli Studi di Padova, Dipartimento di Matematica Pura ed
Applicata, Via Trieste 63, 35121--Padova, Italy. E--mail: \texttt{
[email protected]}} }
\maketitle
\begin{abstract}
For the solution of full-rank ill-posed linear systems a new approach based
on the Arnoldi algorithm is presented. Working with regularized systems, the
method theoretically reconstructs the true solution by means of the
computation of a suitable matrix function. In this sense the method can
be referred to as an iterative refinement process. Numerical experiments
arising from integral equations and interpolation theory are presented.
Finally, the method is extended to work in connection with the standard
Tikhonov regularization with a right hand side contaminated by noise.
\end{abstract}
\noindent{\bf Keywords:} Ill-conditioned linear systems. Arnoldi
algorithm. Matrix function. Tikhonov regularization.
\section{Introduction}
In this paper we consider the solution of ill-conditioned linear systems
\begin{equation}
Ax=b. \label{pr}
\end{equation}
We mainly focus our attention on linear systems in which $A\in \mathbb{R}
^{N\times N}$ has full rank, with singular values that gradually decay to $0$,
as for instance in the case of the discretized Fredholm integral equations
of the first kind. In order to face this kind of problem one typically applies
some regularization technique such as the well-known Tikhonov regularization
(see e.g. \cite{PCH} for a wide background). The Tikhonov regularized system
takes the form
\begin{equation}
(A^{T}A+\lambda H^{T}H)x_{\lambda }=A^{T}b, \label{pt}
\end{equation}
where $\lambda \in \mathbb{R}$ is a suitable parameter and $H$ is the
regularization matrix. The system (\ref{pt}) should have singular values
bounded away from $0$ in order to reduce the condition number and, at the
same time, its solution $x_{\lambda }$ should be close to the solution of
the original system.
For this kind of problem the method initially presented in this paper is
based on the shift and invert transformation
\begin{equation}
Z=(A+\lambda I)^{-1}, \label{T1}
\end{equation}
where $\lambda >0$ is a suitable parameter and $I$ is the identity matrix.
Provided that $\lambda $ is large enough, if $A$ is positive definite ($
F(A)\subset \mathbb{C}^{+}$, where $F(A)$ denotes the field of values) the
shift $A+\lambda I$, which represents the most elementary example of
regularization, has the immediate effect of moving the spectrum (which we
denote by $\sigma (A)$) away from $0$, so reducing the condition number.
Moreover, since
\begin{equation*}
x=A^{-1}b=f(Z)b,
\end{equation*}
where
\begin{equation}
f(z)=\left( \frac{1}{z}-\lambda \right) ^{-1}=(1-\lambda z)^{-1}z,
\label{fz}
\end{equation}
the idea is to solve the system $Ax=b$ by computing $f(Z)b$. For the
computation of $f(Z)b$, we use the standard Arnoldi method projecting the
matrix $Z$ onto the Krylov subspaces generated by $Z$ and $b$, that is $
K_{m}(Z,b)=\mathrm{span}\{b,Zb,...,Z^{m-1}b\}$. By definition of $Z$ the
method is commonly referred to as the Restricted-Denominator (RD) rational
Arnoldi method \cite{Vanh}, \cite{Morno}.
Historically, a first attempt to reconstruct the solution from $x_{\lambda }$
that solves
\begin{equation}
\left( A+\lambda I\right) x_{\lambda }=b, \label{st}
\end{equation}
was proposed by Riley in \cite{Ri}. The algorithm is just based on the
approximation of $f(Z)$ by means of its Taylor series. Indeed we have
\begin{equation}
A^{-1}b=\frac{1}{\lambda }\sum_{k=1}^{\infty }(\lambda Z)^{k}b, \label{ser}
\end{equation}
that leads to the recursion
\begin{equation}
x_{k+1}=y+\lambda Zx_{k},\quad x_{0}=0,\quad y=Zb. \label{ar}
\end{equation}
It is easy to see that the method is equivalent to the \emph{iterative
improvement}
\begin{eqnarray*}
\left( A+\lambda I\right) e_{k} &=&b-Ax_{k} \\
x_{k+1} &=&x_{k}+e_{k}
\end{eqnarray*}
generally referred to as \emph{iterated Tikhonov regularization} or \emph{
preconditioned Landweber iteration} (see e.g. \cite{Go}, \cite{HH}, \cite{KC}
, \cite{KiCh}, \cite{Neu}). The main problem concerning this kind of
algorithms is that they can be extremely slow because the spectrum of $Z$
accumulates at $1/\lambda $ (cf. (\ref{T1}), (\ref{ser})). This, of course,
happens for large values of $\lambda $, that is, when $A+\lambda I$ is well conditioned.
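For concreteness, the recursion (\ref{ar}) can be sketched as follows (our own illustration, assuming NumPy; the diagonal test matrix is an arbitrary choice). Each step costs one solve with the regularized matrix $A+\lambda I$:

```python
import numpy as np

def riley(A, b, lam, n_iter=200):
    # Iterated Tikhonov / Riley refinement:
    #   x_{k+1} = Z b + lam * Z x_k = Z (b + lam * x_k),  Z = (A + lam I)^{-1},
    # implemented through solves with the regularized matrix A + lam I.
    M = A + lam * np.eye(len(b))
    x = np.zeros_like(b)
    for _ in range(n_iter):
        x = np.linalg.solve(M, b + lam * x)
    return x
```

The error contracts by the factor $\max_{i}\lambda /(\mu _{i}+\lambda )$ over the eigenvalues $\mu _{i}$ of a symmetric positive definite $A$, which is close to $1$ precisely when $\lambda $ dominates the small eigenvalues — the slowness discussed above.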
From the point of view of the computation of matrix functions this is a
well-known problem: the computation by means of the Taylor series
generally provides poor results unless the spectrum of the matrix is close
to the expansion point. Indeed, by well-known results of complex
approximation, the rate of convergence of a polynomial method for the
computation of a matrix function depends on the position of the
singularities of the function with respect to the location of the spectrum of
the matrix.
We also point out that, in \cite{BRSR}, the authors construct an improved
approximation via extrapolation with respect to the regularization
parameter, using the singular values representation of the solution.
Extrapolation techniques can also be applied to accelerate (\ref{ar}), as
suggested in \cite{BRED} and also indicated by Fasshauer in \cite{Fass}.
For problems in which the right hand side is affected by noise, instead of
working with the transformation (\ref{T1}) or implicitly with systems of
type (\ref{st}), we shall work with the standard regularization (\ref{pt})
and hence on the transformation
\begin{equation*}
Z=(A^{T}A+\lambda H^{T}H)^{-1}.
\end{equation*}
As we shall see, the subsequent Arnoldi-based algorithm for the
reconstruction of the exact solution will be almost identical to the one
based on (\ref{T1}), but the use of a regularization matrix $H$ different
from the identity allows one to define methods less sensitive to perturbations
of the right hand side.
The paper is organized as follows. In Section \ref{due}, we describe the
Arnoldi method for the computation of $f(Z)b$ and, in Section \ref{tre}, we
present a theoretical a-priori error analysis. In Section \ref{quattro}, we
show an a-posteriori representation of the error. In Section \ref{cinque},
we analyze the choice of the parameter $\lambda $. Some numerical
experiments, taken from Hansen's Matlab toolbox on regularization \cite{H1,H2}
and from the theory of interpolation with radial basis functions, are
presented in Section \ref{sei}. Finally, in Section \ref{sette}, we extend
our method to the Tikhonov regularization in its general form (\ref{pt})
showing also some tests with data affected by noise.
\section{The Arnoldi method for $f(Z)b$.}
\label{due}
For the construction of the subspaces $K_{m}(Z,b)$, the Arnoldi algorithm
generates an orthonormal sequence $\left\{ v_{j}\right\} _{j\geq 1}$, with $
v_{1}=b/\left\Vert b\right\Vert $, such that $K_{m}(Z,b)=\mathrm{span}
\left\{ v_{1},v_{2},...,v_{m}\right\} $ (here and below the norm used is
always the Euclidean norm). For every $m$ we have
\begin{equation}
ZV_{m}=V_{m}H_{m}+h_{m+1,m}v_{m+1}e_{m}^{T}, \label{cla}
\end{equation}
where $V_{m}=\left[ v_{1},v_{2},...,v_{m}\right] $, $H_{m}$ is an upper
Hessenberg matrix with entries $h_{i,j}=v_{i}^{T}Zv_{j}$, and $e_{j}$ is the $
j$-th vector of the canonical basis of $\mathbb{R}^{m}$. Formula (\ref{cla}
) is just the matrix formulation of the algorithm.
The $m$-th Arnoldi approximation to $x=f(Z)b$ is defined as
\begin{equation*}
x_{m}=\left\Vert b\right\Vert V_{m}f(H_{m})e_{1}.
\end{equation*}
Regarding the computation of $f(H_{m})$, since the method is expected to
produce a good approximation of the solution in a relatively small number of
iterations, that is for $m\ll N$, one typically considers a certain rational
approximation to $f$, or the Schur-Parlett algorithm (see e.g. \cite[Chapter
11]{GV} or \cite{H}).
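To make the construction concrete, the following is a minimal NumPy sketch of the above approximation (the function name and the dense solve are ours, for illustration only; in practice one factors $A+\lambda I$ once). Since our $f$ is rational, $f(H_{m})=(I-\lambda H_{m})^{-1}H_{m}$ can be formed directly; for a general $f$ one would use a Schur--Parlett routine such as \texttt{scipy.linalg.funm}.

```python
import numpy as np

def arnoldi_fz_b(A, b, lam, m):
    """Sketch of the m-th Arnoldi approximation x_m = ||b|| V_m f(H_m) e_1,
    with f(z) = z/(1 - lam z) and Z = (A + lam I)^{-1}, so f(Z)b = A^{-1}b.
    Name and dense linear algebra are ours, for illustration only."""
    N = len(b)
    B = A + lam * np.eye(N)             # in practice: factor B once (LU/Cholesky)
    V = np.zeros((N, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = np.linalg.solve(B, V[:, j])  # w_j = Z v_j, i.e. (A + lam I) w_j = v_j
        for i in range(j + 1):           # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    Hm = H[:m, :m]
    # f is rational, so f(H_m) = (I - lam H_m)^{-1} H_m exactly; for a
    # general f one would use a Schur-Parlett routine instead.
    fHm = np.linalg.solve(np.eye(m) - lam * Hm, Hm)
    return np.linalg.norm(b) * V[:, :m] @ fHm[:, 0]
```

For a well-conditioned SPD matrix the iterates converge rapidly to $A^{-1}b$, in agreement with the analysis of Section \ref{tre}.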
Denoting by $\Pi _{m-1}$ the vector space of polynomials of degree at most $
m-1$, it can be seen that
\begin{equation}
x_{m}=\overline{p}_{m-1}(Z)b, \label{pol}
\end{equation}
where $\overline{p}_{m-1}\in $ $\Pi _{m-1}$ interpolates, in the Hermite
sense, the function $f$ at the eigenvalues of $H_{m}$ \cite{Saad2}.
As already mentioned, this kind of approach is commonly referred to as the
RD rational Arnoldi method since it is based on the use of single pole
rational forms of the type
\begin{equation*}
R_{m-1}(x)=\frac{q_{m-1}(x)}{(x+a)^{m-1}},\quad a\in \mathbb{R}
,\quad q_{m-1}\in \Pi _{m-1},\quad m\geq 1,
\end{equation*}
introduced and studied by N{\o }rsett in \cite{Nors} for the approximation
of the exponential function. In other words, with respect to $A$, formula (
\ref{pol}) is actually a rational approximation.
It is worth noting that, at each step of the Arnoldi algorithm, we have to
compute the vectors $w_{j}=Zv_{j}$, $j\geq 1$, which requires solving the
systems
\begin{equation*}
(A+\lambda I)w_{j}=v_{j},\quad j\geq 1.
\end{equation*}
Since $v_{1}=b/\left\Vert b\right\Vert $, the corresponding $w_{1}$ is just
the scaled solution of a regularized system (with the rough regularization $
A\rightarrow A+\lambda I$). In this sense, if $\lambda $ arises from the
standard techniques that seek the optimal regularization parameter $
\lambda _{opt}$ (L-curve, Generalized Cross Validation, etc.), this procedure
can be employed as a tool to improve the quality of the approximation $
w_1\!\left\Vert b\right\Vert$. Nevertheless, we shall see that, with the Arnoldi
algorithm, larger values of $\lambda $ are more reliable.
\section{Error analysis}
\label{tre}
The error $E_{m}:=x-x_{m}$ can be expressed and bounded in many ways (see
e.g. the recent paper \cite{BR} and the references therein). In any case,
however, the sharpness of the bound essentially depends on the amount of
information about the location of the field of values of $Z$, defined by
\begin{equation*}
F(Z):=\left\{ \frac{x^{H}Zx}{x^{H}x}:x\in \mathbb{C}^{N}\setminus
\left\{ 0\right\} \right\} .
\end{equation*}
The bound we propose is based on the use of Faber polynomials. We need some
definitions, and we refer to \cite{SL} or \cite{Wal} for a broad background on
what follows.
Let $\Omega $ be a compact and connected set of the complex plane. By the
Riemann mapping theorem there exists a conformal surjection
\begin{equation}
\psi :\overline{\mathbb{C}}\setminus \left\{ w:\left\vert w\right\vert \leq
1\right\} \rightarrow \overline{\mathbb{C}}\setminus \Omega ,\quad \psi
\left( \infty \right) =\infty ,\quad \psi ^{\prime }\left( \infty \right)
=\gamma , \label{3.1}
\end{equation}
that has a Laurent expansion of the type
\begin{equation*}
\psi (w)=\gamma w+c_{0}+\frac{c_{1}}{w}+\frac{c_{2}}{w^{2}}+\cdots
\end{equation*}
The constant $\gamma $ is the capacity of $\Omega $. If $\Omega $ is an
ellipse or a line segment, then $c_{i}=0$ for $i\geq 2$. Given a function $g$
analytic in $\Omega $, it is known that, if $p_{m-1}$ is the truncated
Faber series of exact degree $m-1$ with respect to $g$ and $\psi $, then $
p_{m-1}$ provides an asymptotically optimal uniform approximation to $g$ in $
\Omega $, that is
\begin{equation}
\underset{m\rightarrow \infty }{\lim }\sup \left\Vert p_{m-1}-g\right\Vert
_{\Omega }^{1/m}=\underset{m\rightarrow \infty }{\lim }\sup \left\Vert
p_{m-1}^{\ast }-g\right\Vert _{\Omega }^{1/m}, \label{mc}
\end{equation}
$\left\{ p_{m-1}^{\ast }\left( z\right) \right\} _{m\geq 1}$ being the
sequence of polynomials of best uniform approximation to $g$ in $\Omega $.
Property (\ref{mc}) is also called \emph{maximal convergence}. Let moreover $
\phi :\overline{\mathbb{C}}\setminus \Omega \rightarrow \overline{\mathbb{C}}
\setminus \left\{ w:\left\vert w\right\vert \leq 1\right\} $ be the inverse
of $\psi $. For any $r>1,$ let $\Gamma _{r}$ be the equipotential curve
\begin{equation*}
\Gamma _{r}:=\left\{ z:\left\vert \phi \left( z\right) \right\vert
=r\right\} ,
\end{equation*}
and let us denote by $\Omega _{r}$ the bounded domain with boundary $\Gamma
_{r}$. Let $\widehat{r}>1$ be the largest number such that $g$ is analytic
in $\Omega _{r}$ for each $1<r<\widehat{r}$, so that $g$ has a singularity on $
\Gamma _{\widehat{r}}$. Then, it is known that the rate of convergence of
the sequence $\left\{ p_{m-1}\left( z\right) \right\} _{m\geq 1}$ is given by
\begin{equation}
\underset{m\rightarrow \infty }{\lim }\sup \left\Vert p_{m-1}-g\right\Vert
_{\Omega }^{1/m}=\frac{1}{\widehat{r}}. \label{mp}
\end{equation}
For this reason we know that superlinear convergence is only attainable for
entire functions, where asymptotically one can set $\widehat{r}:=m$. In
order to derive error bounds for the computation of $f(Z)b$ we need the
following classical result.
\begin{theorem}
\textrm{\cite{Ellac}} Let $\Omega $ be a compact and convex subset such that
$g$ is analytic in $\Omega $. For $1<r<\widehat{r}$ the following bound
holds
\begin{equation}
\left\Vert p_{m-1}-g\right\Vert _{\Omega }\leq 2\left\Vert g\right\Vert
_{\Gamma _{r}}\frac{\displaystyle \left( \frac{1}{r}\right) ^{m}}{
\displaystyle 1-\frac{1}{r}}. \label{4.3b}
\end{equation}
\end{theorem}
Using the above theorem, for our function $f(z)=z/(1-\lambda z)$, which is
singular at $1/\lambda $, we can state the following.
\begin{proposition}
\label{p1}Assume that $\Omega $ is an ellipse of the complex plane,
symmetric with respect to the real axis with associated conformal mapping $
\psi (w)=\gamma w+c_{0}+c_{1}/w$. Assume that $\psi (1)<1/\lambda $ and let $
\widehat{r}$ be such that $\psi (\widehat{r})=1/\lambda $. Let moreover $
\overline{m}$ be the smallest integer such that
\begin{equation*}
\frac{\widehat{r}}{\overline{m}+1}<\widehat{r}-1.
\end{equation*}
Then for $m\geq \overline{m}$
\begin{equation}
\left\Vert p_{m-1}-f\right\Vert _{\Omega }\leq \frac{2\, e \,\overline{m}\,
\widehat{r}}{\overline{m}(\widehat{r}-1)-1}\frac{1}{\lambda ^{2}\psi
^{\prime }(\widehat{r})}\frac{m+1}{\widehat{r}^{m}}, \label{f1e}
\end{equation}
and for $m<\overline{m}$
\begin{equation}
\left\Vert p_{m-1}-f\right\Vert _{\Omega }\leq \frac{4}{\lambda ^{2}\left(
\widehat{r}-1\right) \psi ^{\prime }(\widehat{r})}\left( \frac{2}{\widehat{r}
+1}\right) ^{m}\frac{\widehat{r}+1}{\widehat{r}-1}. \label{f2e}
\end{equation}
\end{proposition}
\begin{proof}
Let $r=\widehat{r}-\varepsilon $, with $0<\varepsilon <\widehat{r}-1$. By
the properties of $\Omega $, we have
\begin{equation*}
\left\Vert f\right\Vert _{\Gamma _{r}}=\frac{\psi (r)}{1-\lambda \psi (r)},
\end{equation*}
and, by direct computation
\begin{equation*}
\psi (r)=\psi (\widehat{r})-\gamma \varepsilon +\frac{c_{1}\varepsilon }{(
\widehat{r}-\varepsilon )\widehat{r}}.
\end{equation*}
Hence using $\psi (\widehat{r})=1/\lambda $ we find
\begin{eqnarray*}
\left\Vert f\right\Vert _{\Gamma _{r}} &\leq &\frac{\psi (\widehat{r})}{
1-\lambda \left( \psi (\widehat{r})-\gamma \varepsilon +\frac{\displaystyle
c_{1}\varepsilon }{\displaystyle (\widehat{r}-\varepsilon )\widehat{r}}
\right) }, \\
&=&\frac{1}{\lambda ^{2}\varepsilon \left( \gamma -\frac{\displaystyle c_{1}
}{\displaystyle (\widehat{r}-\varepsilon )\widehat{r}}\right) }, \\
&\leq &\frac{1}{\lambda ^{2}\varepsilon \psi ^{\prime }(\widehat{r})}.
\end{eqnarray*}
By (\ref{4.3b}), we thus obtain
\begin{equation}
\left\Vert p_{m-1}-f\right\Vert _{\Omega }\leq \frac{2}{\lambda
^{2}\varepsilon \psi ^{\prime }(\widehat{r})}\frac{1}{\left( \widehat{r}
-\varepsilon \right) ^{m}}\frac{1}{\displaystyle 1-\frac{1}{\widehat{r}
-\varepsilon }}. \label{ers}
\end{equation}
Now we set
\begin{equation}
\varepsilon =\frac{\widehat{r}}{m+1},  \label{ep}
\end{equation}
since this value minimizes
\begin{equation*}
\frac{1}{\varepsilon \left( \widehat{r}-\varepsilon \right) ^{m}}.
\end{equation*}
This choice satisfies $\varepsilon <\widehat{r}-1$ precisely when $m\geq
\overline{m}$, where $\overline{m}$ is the smallest positive integer such that
\begin{equation*}
\frac{\widehat{r}}{\overline{m}+1}<\widehat{r}-1.
\end{equation*}
By inserting (\ref{ep}) into (\ref{ers}) and using
\begin{equation*}
\frac{1}{\displaystyle 1-\frac{1}{\widehat{r}-\varepsilon }}\leq \frac{
\overline{m}\widehat{r}}{\overline{m}(\widehat{r}-1)-1},
\end{equation*}
we find (\ref{f1e}). For $m<\overline{m}$ we can take for instance
\begin{equation}
\varepsilon =\frac{\widehat{r}-1}{2}. \label{ep2}
\end{equation}
Substituting (\ref{ep2}) into (\ref{ers}) we obtain (\ref{f2e}).
\end{proof}
\begin{remark}
Note that the assumption $\psi (1)<1/\lambda $ in Proposition \ref{p1} just
means that the ellipse is strictly on the left of the singularity of $f$.
\end{remark}
Regarding the field of values of $Z$, $F(Z)$, it is well known that it is
convex, that $\sigma (Z)\subset F(Z)$, and that $F(H_{m})\subseteq F(Z)$
(where $H_m$ is defined in Section \ref{due}). Of course, if $F(A)\subset
\mathbb{C}^{+}$ ($A$ is positive definite), then $F(Z)\subset \{z\in \mathbb{C}
:0<\mathrm{Re}(z)<1/\lambda \}$ and the corresponding $f$ is analytic in $F(Z)$.
Using these properties we can state the following result.
\begin{theorem}
\label{t1}Assume that $F(A)\subset \mathbb{C}^{+}$. Let $\Omega $ be an
ellipse (with associated conformal mapping $\psi $, and inverse $\phi $)
symmetric with respect to the real axis and such that $F(Z)\subseteq \Omega $
with $f$ analytic in $\Omega $. Then, for $m$ large enough, we have
\begin{equation*}
\left\Vert E_{m}\right\Vert \leq 4\,e\,C\frac{\widehat{r}}{\widehat{r}-1}
\frac{1}{\psi ^{\prime }(\widehat{r})}\,K\,\frac{m+1}{\widehat{r}^{m}},
\end{equation*}
where $K=1/\lambda ^{2}$, $\widehat{r}=\phi (1/\lambda )$, and $C=11.08$
($C=1$ if $A$ is symmetric).
\end{theorem}
\begin{proof}
Using the properties of the Arnoldi algorithm, we know that for every $
p_{m-1}\in \Pi _{m-1}$,
\begin{equation}
V_{m}p_{m-1}(H_{m})e_{1}=p_{m-1}(Z)b. \label{re1}
\end{equation}
Hence, from (\ref{re1}), it follows that, for $m\geq 1$ and for every $
p_{m-1}\in \Pi _{m-1}$,
\begin{equation}
E_{m}=x-x_{m}=f(Z)b-p_{m-1}(Z)b-V_{m}(f(H_{m})-p_{m-1}(H_{m}))e_{1}.
\label{e1}
\end{equation}
Since $\left\Vert V_{m}\right\Vert =1$ we have (see \cite{Cru})
\begin{equation}
\left\Vert E_{m}\right\Vert \leq 2C\left\Vert p_{m-1}-f\right\Vert _{F(Z)}.
\label{errf}
\end{equation}
Therefore taking $p_{m-1}$ as the $\left( m-1\right) $-th truncated Faber
(Chebyshev) series, the result follows from Proposition \ref{p1} since $
F(Z)\subseteq \Omega $.
\end{proof}
\begin{remark}
By (\ref{e1}), if both $Z$ and $H_{m}$ are diagonalizable then $C$ in (\ref
{errf}) is a constant depending on the condition number of the
diagonalization matrices and $\Omega $ can be taken as an ellipse containing
$\sigma (A)$.
\end{remark}
Theorem \ref{t1} is certainly important from a theoretical point of view, since
it states that the Arnoldi algorithm produces asymptotically optimal
approximations. However, considering for simplicity the symmetric case,
we can also see that it cannot be used to guide the choice of $
\lambda$.
Indeed, let $\lambda _{1}\gtrsim 0$ and $\lambda _{N}$ be respectively the
smallest and the largest eigenvalues of $A$. Then $F(A)=[\lambda _{1},\lambda
_{N}]$ and
\begin{equation*}
\displaystyle F(Z)=\left[ \frac{1}{\lambda _{N}+\lambda },\frac{1}{\lambda
_{1}+\lambda }\right] =:I_{\lambda }.
\end{equation*}
In this case, by (\ref{errf}) we have
\begin{equation*}
\left\Vert E_{m}\right\Vert \leq 2\max_{I_{\lambda }}\left\vert
f(z)-p_{m-1}(z)\right\vert .
\end{equation*}
As already mentioned, the conformal mapping $\psi $ associated to $
I_{\lambda }$ takes the form
\begin{equation}
\psi (w)=\gamma w+c_{0}+\frac{c_{1}}{w} \label{conf}
\end{equation}
where
\begin{eqnarray}
\gamma &=&\frac{1}{4}\left( \frac{1}{\lambda _{1}+\lambda }-\frac{1}{
\lambda _{N}+\lambda }\right) =\frac{1}{4}\frac{\lambda _{N}-\lambda _{1}}{
\left( \lambda _{1}+\lambda \right) (\lambda _{N}+\lambda )}, \notag \\
c_{0} &=&\frac{1}{2}\left( \frac{1}{\lambda _{1}+\lambda }+\frac{1}{\lambda
_{N}+\lambda }\right) =\frac{1}{2}\frac{\lambda _{N}+\lambda _{1}+2\lambda }{
\left( \lambda _{1}+\lambda \right) (\lambda _{N}+\lambda )}, \label{c00} \\
c_{1} &=&\gamma \text{.} \notag
\end{eqnarray}
For $r>1$, $\Omega _{r}$ is the confocal ellipse (foci in $\displaystyle
\frac{1}{\lambda _{N}+\lambda }$ and $\displaystyle\frac{1}{\lambda
_{1}+\lambda }$) described by $\psi (re^{i\theta })$, $0\leq \theta <2\pi $.
Since $f(z)$ is singular at $1/\lambda $, $\widehat{r}$ is the solution ($>1$
) of
\begin{equation}
\gamma \widehat{r}+c_{0}+\frac{\gamma }{\widehat{r}}=\frac{1}{\lambda }
\label{rs}
\end{equation}
that is
\begin{equation}
\widehat{r}=u+\sqrt{u^{2}-1}, \label{r1}
\end{equation}
where
\begin{equation}
u=\frac{2\lambda _{1}\lambda _{N}}{\lambda (\lambda _{N}-\lambda _{1})}+
\frac{\lambda _{N}+\lambda _{1}}{\lambda _{N}-\lambda _{1}}. \label{r2}
\end{equation}
Thus, $\widehat{r}$ monotonically decreases with respect to $\lambda $ and $
\widehat{r}\rightarrow \infty $ for $\lambda \rightarrow 0$.
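The quantities (\ref{conf})--(\ref{r2}) are easy to check numerically. The following sketch (function names are ours) evaluates the conformal map of $I_{\lambda }$ and the radius $\widehat{r}$ for a given SPD spectrum, so that one can verify $\psi (\widehat{r})=1/\lambda $ and the monotone decrease of $\widehat{r}$ in $\lambda $.

```python
import numpy as np

def psi_interval(lam1, lamN, lam, w):
    """Conformal map (conf) of the interval I_lambda = [1/(lamN+lam), 1/(lam1+lam)]:
    psi(w) = gamma*w + c0 + c1/w with c1 = gamma, cf. (c00)."""
    gamma = 0.25 * (1.0 / (lam1 + lam) - 1.0 / (lamN + lam))
    c0 = 0.5 * (1.0 / (lam1 + lam) + 1.0 / (lamN + lam))
    return gamma * w + c0 + gamma / w

def r_hat(lam1, lamN, lam):
    """Radius (r1)-(r2) at which the equipotential ellipse reaches the
    singularity 1/lam of f (SPD case)."""
    u = 2.0 * lam1 * lamN / (lam * (lamN - lam1)) + (lamN + lam1) / (lamN - lam1)
    return u + np.sqrt(u * u - 1.0)
```

For instance, with $\lambda _{1}=1$, $\lambda _{N}=4$ and $\lambda =1/2$ one gets $u=7$, $\widehat{r}=7+\sqrt{48}$, and indeed $\psi (\widehat{r})=2=1/\lambda $.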
The above arguments simply show that the error analysis does not take into
account the computational problems in the inversion of $A+\lambda I$ for $
\lambda \approx 0$. The method is very fast for $\lambda \approx 0$ because,
at each step, we invert something very close to the original operator
$A$. In order to derive a more useful estimate, one should modify the above
analysis by imposing in some way the requirement $\lambda \gg \lambda _{1}$. In
some sense this will be done in Section \ref{cinque}, where we consider the
conditioning of the computation of $f(Z)b$, which is obviously closely related
to the rate of convergence of any iterative method.
\section{A-posteriori error representation}
\label{quattro}
By a result on Pad\'{e}--type approximation proved in \cite{CBout}, we know
that the Hermite interpolation polynomial of the function
\begin{equation*}
g(s)=\frac{1}{1-st}
\end{equation*}
at the zeros of any polynomial $\nu _{m}$ of exact degree $m$ in $s$ is
given by
\begin{equation*}
R_{m-1}(s)=\frac{1}{1-st}\left( 1-\frac{\nu _{m}(s)}{\nu _{m}(t^{-1})}
\right) .
\end{equation*}
Setting $\lambda =t^{-1}$, we have that
\begin{equation*}
f(\xi )=\frac{1}{\xi ^{-1}-\lambda }=-\lambda ^{-1}g\left( \xi ^{-1}\right) ,
\end{equation*}
and so
\begin{equation}
-\lambda ^{-1}R_{m-1}(\xi ^{-1})=\frac{1}{1-\xi ^{-1}\lambda ^{-1}}\left( 1-
\frac{\nu _{m}(\xi ^{-1})}{\nu _{m}(\lambda )}\right) \label{s}
\end{equation}
interpolates $f(\xi )$. By (\ref{pol}), let $\overline{p}_{m-1}\in \Pi
_{m-1}$ be the polynomial that interpolates, in the Hermite sense, the
function $f(z)$ at the eigenvalues of $H_{m}$, $\xi _{1},...,\xi _{m^{\prime
}}$, $m^{\prime }\leq m$, with multiplicities $k_{i}$, $i=1,...,m^{\prime }$.
Then
\begin{equation*}
\overline{p}_{m-1}^{(j)}(\xi _{i})=-\lambda ^{-1}R_{m-1}^{(j)}(\xi
_{i}^{-1})=f^{(j)}(\xi _{i}),\quad 1\leq i\leq m^{\prime },\ 0\leq j\leq
k_{i}-1.
\end{equation*}
By (\ref{s}) and the above relation, it is easy to see that $\nu
_{m}(s)=\det (sI-H_{m}^{-1})$. In this way, by direct computation,
\begin{eqnarray}
x_{m} &=&\overline{p}_{m-1}(Z)b, \notag \\
&=&A^{-1}b-A^{-1}\left( \frac{\nu _{m}(Z^{-1})}{\nu _{m}(\lambda )}\right) b.
\label{e2}
\end{eqnarray}
Since, of course, $A^{-1}$ and $Z^{-1}$ commute, we find
\begin{equation*}
\frac{\left\Vert x_{m}-x\right\Vert }{\left\Vert x\right\Vert }\leq \frac{
\left\Vert \nu _{m}(A+\lambda I)\right\Vert }{\left\vert \nu _{m}(\lambda
)\right\vert }.
\end{equation*}
An a-posteriori error estimate can be derived in this way. Since
\begin{eqnarray*}
\nu _{m}(s) &=&\det (sI-H_{m}^{-1}), \\
&=&\frac{s^{m}\det (H_{m}-s^{-1}I)}{\det H_{m}},
\end{eqnarray*}
defining $q_{m}(\xi )=\det (H_{m}-\xi I)$, we have
\begin{equation}
\frac{\left\Vert x_{m}-x\right\Vert }{\left\Vert x\right\Vert }\leq \frac{
\left\Vert (A+\lambda I)^{m}q_{m}(Z)\right\Vert }{\lambda ^{m}\left\vert
q_{m}(\lambda ^{-1})\right\vert }. \label{ape}
\end{equation}
It is worth noting that, using the relation
\begin{equation*}
q_{m}(Z)b=\left( \prod\nolimits_{j=1}^{m}h_{j+1,j}\right) v_{m+1},
\end{equation*}
(see \cite{Morno}), we obtain from (\ref{e2})
\begin{equation*}
\left\Vert x_{m}-x\right\Vert =\frac{\left(
\prod\nolimits_{j=1}^{m}h_{j+1,j}\right) }{\lambda ^{m}\left\vert
q_{m}(\lambda ^{-1})\right\vert }\left\Vert A^{-1}(A+\lambda
I)^{m}v_{m+1}\right\Vert ,
\end{equation*}
which proves the convergence of the method in a finite number $m^{\ast }\leq N$
of steps in exact arithmetic. Note that by (\ref{e2}) the
corresponding $\nu _{m^{\ast }}$ is the minimal polynomial of $A+\lambda I$
for the vector $b$.
\section{The choice of $\protect\lambda $}
\label{cinque}
As already mentioned, the arguments of Section \ref{tre} reveal that the
standalone error analysis of the computation of $f(Z)b$ cannot reliably
suggest the choice of $\lambda $, since $\kappa (Z)\rightarrow \kappa (A)$
as $\lambda \rightarrow 0$ ($\kappa (\cdot )$ denoting the standard
condition number of a matrix). In other words, it does not take into account
that, at each step, we need to solve a system with the matrix $A+\lambda I$.
At the same time, focusing attention on the accuracy (thus neglecting the
rate of convergence), one could expect that ``large'' values of $\lambda $
should allow an improvement, since the linear systems with $A+\lambda
I$ would be solved more accurately. The numerical experiments show that
this is not true, as shown in Fig. \ref{BBART40}, where we consider the
problem BAART, taken from Hansen's Matlab toolbox \texttt{Regtools} (see
\cite{H1} and \cite{H2}).
\begin{figure}
\caption{BAART(40) - Minimum attained error with respect to the number of
iterations for different values of $\protect\lambda$. }
\label{BBART40}
\end{figure}
Indeed, the diagram of Fig. \ref{BBART40} represents the standard situation:
increasing $\lambda $, we have a loss of accuracy. The behavior on
the leftmost part of the diagram is clear, since it is due to the
conditioning of $Z$ for small $\lambda $. On the rightmost part we again have
a loss of accuracy, but now it depends on the numerical instability in
the computation of $f(Z)$ for large $\lambda $ (the problem can easily be
observed even in the scalar case). This observation leads us to consider the
conditioning of the computation of $f(Z)b$ in order to obtain a good strategy
for defining $\lambda $.
The absolute and relative condition numbers for the computation of $g(X)$,
where $g$ is a given function and $X$ a square matrix, are given by (cf.
\cite[Chapter 3]{H})
\begin{eqnarray}
\kappa _{a}(g,X) &=&\lim_{\varepsilon \rightarrow 0}\sup_{\left\Vert
E\right\Vert \leq \varepsilon }\frac{\left\Vert g(X+E)-g(X)\right\Vert }{
\varepsilon }, \label{ka} \\
\kappa _{r}(g,X) &=&\kappa _{a}(g,X)\frac{\left\Vert X\right\Vert }{
\left\Vert g(X)\right\Vert }, \label{kr}
\end{eqnarray}
and these definitions imply that
\begin{equation*}
\left\Vert g(X+E)-g(X)\right\Vert \leq \kappa _{a}(g,X)\left\Vert
E\right\Vert +O(\left\Vert E\right\Vert ^{2}).
\end{equation*}
\begin{proposition}
For the function $f(z)=(1-\lambda z)^{-1}z$ we have the bound
\begin{equation}
\kappa _{r}(f,Z)\leq \frac{\left\Vert (I-\lambda Z)^{-2}\right\Vert
\left\Vert Z\right\Vert }{\left\Vert (Z^{-1}-\lambda I)^{-1}\right\Vert }.
\label{bc}
\end{equation}
\end{proposition}
\begin{proof}
In order to derive first the absolute condition number we have
\begin{eqnarray*}
f(Z+E)-f(Z) &=&\left[ (Z+E)^{-1}-\lambda I\right] ^{-1}-(Z^{-1}-\lambda
I)^{-1}, \\
&=&\left[ (I+Z^{-1}E)^{-1}Z^{-1}-\lambda I\right] ^{-1}-(Z^{-1}-\lambda
I)^{-1}, \\
&=&\left[ Z^{-1}-\lambda I+\Lambda (Z,E)\right] ^{-1}-(Z^{-1}-\lambda
I)^{-1},
\end{eqnarray*}
where
\begin{equation*}
\Lambda (Z,E):=\sum_{k=1}^{\infty }(-1)^{k}(Z^{-1}E)^{k}Z^{-1}.
\end{equation*}
Hence
\begin{eqnarray}
f(Z+E)-f(Z) &=&\left[ I+(Z^{-1}-\lambda I)^{-1}\Lambda (Z,E)\right]
^{-1}(Z^{-1}-\lambda I)^{-1}-(Z^{-1}-\lambda I)^{-1}, \notag \\
&=&\sum\nolimits_{j=0}^{\infty }(-1)^{j}(Z^{-1}-\lambda I)^{-j}\Lambda
(Z,E)^{j}(Z^{-1}-\lambda I)^{-1}-(Z^{-1}-\lambda I)^{-1}, \label{fd}
\end{eqnarray}
and finally
\begin{equation*}
\left\Vert f(Z+E)-f(Z)\right\Vert \leq \left\Vert (Z^{-1}-\lambda
I)^{-1}Z^{-1}EZ^{-1}(Z^{-1}-\lambda I)^{-1}\right\Vert +O(\left\Vert
E\right\Vert ^{2}),
\end{equation*}
so that
\begin{equation*}
\kappa _{a}(f,Z)\leq \left\Vert (I-\lambda Z)^{-2}\right\Vert ,
\end{equation*}
that proves (\ref{bc}) using (\ref{kr}) and the definition of $f(z)$. Note
that by (\ref{fd})
\begin{equation*}
L(Z,E):=(I-\lambda Z)^{-1}E(I-\lambda Z)^{-1}
\end{equation*}
is the Fr\'{e}chet derivative of $f$ at $Z$ applied to $E$.
\end{proof}
This proposition simply shows that the problem is well conditioned for $
\lambda \rightarrow 0$ and ill conditioned for $\lambda \gg 0$, which matches
the error analysis of Section \ref{tre}. Of course, the situation is
the opposite of what happens for the solution of the linear systems with $
A+\lambda I$ during the Arnoldi process. Therefore the idea, confirmed by
many numerical experiments, is to define $\lambda $ such that $\kappa
_{r}(f,Z)\approx \kappa (A+\lambda I)$, that is, to consider the bound (\ref
{bc}) and solve the equation
\begin{equation*}
\frac{\left\Vert (I-\lambda Z)^{-2}\right\Vert \left\Vert Z\right\Vert }{
\left\Vert (Z^{-1}-\lambda I)^{-1}\right\Vert }=\left\Vert (A+\lambda
I)\right\Vert \left\Vert (A+\lambda I)^{-1}\right\Vert .
\end{equation*}
In the SPD case everything becomes clear since we have
\begin{eqnarray*}
\frac{\left\Vert (I-\lambda Z)^{-2}\right\Vert \left\Vert Z\right\Vert }{
\left\Vert (Z^{-1}-\lambda I)^{-1}\right\Vert } &=&\frac{\lambda +\lambda
_{1}}{\lambda _{1}} \\
\left\Vert (A+\lambda I)\right\Vert \left\Vert (A+\lambda I)^{-1}\right\Vert
&=&\frac{\lambda _{N}+\lambda }{\lambda _{1}+\lambda }
\end{eqnarray*}
which, for $\lambda _{1}\rightarrow 0$, leads to
\begin{equation*}
\lambda =\sqrt{\lambda _{1}\lambda _{N}}+O(\lambda _{1}).
\end{equation*}
\begin{remark}
If the underlying operator is bounded then one may consider the approximation
\begin{equation*}
\sqrt{\lambda _{1}\lambda _{N}}\approx \frac{1}{\sqrt{\kappa (A)}}\quad
\text{for }\lambda _{1}\rightarrow 0.
\end{equation*}
\end{remark}
\begin{remark}
In the SPD case, taking $\lambda ^{\ast }=\sqrt{\lambda _{1}\lambda _{N}}$
and putting it into (\ref{r1})-(\ref{r2}), we find that the asymptotic
convergence factor of the method is given by
\begin{equation*}
\left\Vert E_{m}\right\Vert ^{1/m}\rightarrow \frac{1}{\widehat{r}}=\frac{
\lambda _{N}^{1/4}-\lambda _{1}^{1/4}}{\lambda _{N}^{1/4}+\lambda _{1}^{1/4}}
=\frac{\kappa (A)^{1/4}-1}{\kappa (A)^{1/4}+1}.
\end{equation*}
\end{remark}
\begin{remark}
The choice of $\lambda ^{\ast }$ has another interesting meaning. Indeed,
let us consider the problem of the computation of $g(A)b$ with $g$ singular
only at 0 and $A$ SPD. Using the transformation $z=\left( a+\lambda \right)
^{-1}$ (cf. (\ref{T1})), if the corresponding $g^{\ast }(z)=g(z^{-1}-\lambda
)$ has a non-removable singularity at 0, then the optimal choice of $\lambda
$ is given by solving the equation
\begin{equation}
c_{0}=\frac{1}{2\lambda } \label{c0}
\end{equation}
(cf. (\ref{conf}) and (\ref{c00})), that is, the midpoint of $[0,1/\lambda ]$
must be equal to the midpoint of $I_{\lambda }$, because in this way we have
simultaneously $\psi (-\widehat{r})=0$ and $\psi (\widehat{r})=1/\lambda $.
A straightforward computation shows that solving (\ref{c0}) leads exactly to
$\lambda ^{\ast }$. For instance, in \cite{Mo} the author uses the RD
Arnoldi method to compute $\sqrt{A}b$ and obtains the same result, albeit
following a different approach.
\end{remark}
\begin{remark}
The condition number of $A+\lambda ^{\ast }I$ is given by
\begin{equation*}
\kappa (A+\lambda ^{\ast }I)=\frac{\lambda _{N}+\sqrt{\lambda _{1}\lambda
_{N}}}{\lambda _{1}+\sqrt{\lambda _{1}\lambda _{N}}}=\sqrt{\frac{\lambda _{N}
}{\lambda _{1}}}=\sqrt{\kappa (A)}.
\end{equation*}
\end{remark}
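The identity $\kappa (A+\lambda ^{\ast }I)=\sqrt{\kappa (A)}$ is easy to check on a synthetic SPD matrix with prescribed spectrum; the construction below is ours, for illustration only.

```python
import numpy as np

# Sketch: an SPD matrix with known ill-conditioned spectrum, used to check
# that the balancing choice lambda* = sqrt(lambda_1 * lambda_N) yields
# kappa(A + lambda* I) = sqrt(kappa(A)).
rng = np.random.default_rng(0)
N = 50
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
eigs = np.geomspace(1e-8, 1.0, N)          # prescribed spectrum of SPD A
A = Q @ np.diag(eigs) @ Q.T
lam_star = np.sqrt(eigs[0] * eigs[-1])     # lambda* = sqrt(lambda_1 lambda_N)
kappa_A = eigs[-1] / eigs[0]               # kappa(A) = 1e8 by construction
kappa_shift = np.linalg.cond(A + lam_star * np.eye(N))
```

Here $\kappa (A)=10^{8}$ and the shifted matrix has condition number $10^{4}$, as predicted.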
In the nonsymmetric case, the analysis is a bit more difficult, but many
numerical experiments have shown that, with information on the
conditioning of $A$ alone, the choice $\lambda \approx \kappa (A)^{-1/2}$ is
generally satisfactory; that is, we are rather close to the minimum of a
curve similar to the one of Fig. \ref{BBART40}. For very ill-conditioned
problems we suggest taking $\lambda $ a bit larger, say in the range $
10\kappa (A)^{-1/2}$ to $100\kappa (A)^{-1/2}$, since the errors generated by
the solution of the linear systems might be much larger than the machine
precision.
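In practice the rule above amounts to the following sketch (the function name is ours); the exact condition number would of course be replaced by a cheap estimate.

```python
import numpy as np

def choose_lambda(A, safety=1.0):
    """Sketch of the practical choice lambda ~ kappa(A)^{-1/2}; use
    safety = 10..100 for severely ill-conditioned problems."""
    kappa = np.linalg.cond(A)   # in practice: a cheap condition estimate
    return safety / np.sqrt(kappa)
```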
\section{Numerical experiments}
\label{sei}
In order to test the efficiency of our method, which from now on we denote by
RA (Rational Arnoldi), we consider here some numerical experiments where we
compare it with other classical iterative solvers. The RA method has
been implemented in Matlab following the lines of Algorithm \ref{alg_CMP}
described below.
\begin{algorithm}[ht]
\begin{algo}
\REQUIRE $A\in {\mathbb{R}}^{N\times N}\!\!,\;b\in {\mathbb{R}}^{N}, \lambda \in {\mathbb{R}}$
\DEFINE $f=(1-\lambda z)^{-1}z$
\STATE[3:] \If $ (A+\lambda I)$ is \textsc{SPD},
\Then\textbf{Compute} $L$ s.t. $(A+\lambda
I)=L\,L^{T}$
\STATE[~] \Else {\bf Compute} $L,U$ s.t. $(A+\lambda I)=L\,U$,
\Endif\STATE[4:] $v_1\leftarrow b/\Vert b\Vert, V_{1}\leftarrow [v_1 ]$
\FOR[5:] $\!\!m=1,2,\ldots $
\Do
\UPDATE[5.1:] $H_{m}\in {\mathbb{R}}^{m\times m}$ by
Arnoldi's algorithm
\STATE[~] {\bf Remark:} In the Arnoldi's algorithm, we
compute $w_{m}=Zv_{m}$
\STATE[~] solving $(A+\lambda I)w_{m}=v_{m}$, that is
$w_{m}=U^{-1}L^{-1}v_{m}$ or $w_{m}=(L^{T})^{-1}L^{-1}v_{m}$.
\COMPUTE[5.2:]
$f(H_{m})$ by Schur-Parlett algorithm
\STATE[5.3:] $x_{m}\leftarrow \Vert
b\Vert V_{m}f(H_{m})\;e_{1}$
\OUTPUT[5.4:] $x_{m}$, approximation of $
f(Z)b=A^{-1}b$
\UPDATE[5.5:] $V_{m+1}=[v_{1},\ldots ,v_{m+1}]\in {\mathbb{R}}
^{N\times (m+1)}$ orthonormal basis for
\STATE[~] $K_{m+1}(Z,b)$, by
Arnoldi's algorithm
\ENDFOR[~]
\end{algo}
\caption{- RA Algorithm for solving $Ax=b$.} \label{alg_CMP}
\end{algorithm}
It is worth noting that we make use of the LU (or Cholesky) factorization
to solve the linear system at each step. The reason is to reduce the
computational cost, since the factorization is computed only once at the
beginning, taking also into account that $A+\lambda I$ should be relatively
well conditioned. In any case, for large-scale non-sparse problems an iterative
approach leading to an inner-outer iteration should be considered.
We consider four classical test problems taken from Hansen's Matlab toolbox
\texttt{Regtools}: GRAVITY, FOXGOOD, SHAW and BAART. These discrete linear
problems arise from the discretization of Fredholm integral equations of the
first kind. In all experiments, we consider a noise-free right hand side,
that is, we define $b=Ax$. The numerical results have been obtained with
Matlab 7.9, on a single processor computer Intel Core2 Duo T5800.
Tables \ref{TABLE1} and \ref{TABLE2} below summarize the results. For
comparison, we consider the codes ART, CGLS, LSQR\_B and MR2 taken from
Hansen's toolbox, CG, GMRES and MINRES that are resident Matlab functions,
and Riley's method. The number between parentheses beside the name of the
test is the dimension of the system. In all tests $\lambda _{RA}$ and $
\lambda _{Riley}$ denote the chosen values of the parameters for the RA and
Riley's method respectively. Since no general indication about the choice of
the parameter for Riley's method is available in the literature, in all
experiments we heuristically select a nearly best one. In the tables we
consider the minimum attained error norm \textit{err}, the corresponding
residual \textit{res}, and the number of iterations \textit{nit}. Each method
was stopped when the number of iterations reached the dimension of the
system. The missing numbers are due to the structure of the coefficient
matrix (symmetric, SPD, and so on).
\begin{table}[th]
\begin{center}
\begin{tabular}{l|c|c|l|c|c|l|}
& \multicolumn{3}{|c|}{GRAVITY(100)} & \multicolumn{3}{|c|}{FOXGOOD(80)} \\
\hline\hline
$\lambda _{RA}$, $\lambda _{\mathit{Riley}}$ & \multicolumn{3}{|c}{\textbf{
1e-9}, 1e-11} & \multicolumn{3}{|c|}{\textbf{1e-8}, 1e-10} \\ \hline
& \textit{err} & \textit{res} & \textit{nit} & \textit{err} & \textit{res} &
\textit{nit} \\ \hline
\textbf{RA} & \multicolumn{1}{|l|}{\textbf{1.6e-5}} & \multicolumn{1}{|l|}{
\textbf{8.1e-9}} & \textbf{2} & \multicolumn{1}{|l|}{\textbf{6.8e-7}} &
\multicolumn{1}{|l|}{\textbf{2.9e-10}} & \textbf{5} \\
CG & \multicolumn{1}{|l|}{1.7e-4} & \multicolumn{1}{|l|}{7.5e-11} & 96 &
\multicolumn{1}{|l|}{} & \multicolumn{1}{|l|}{} & \\
ART & \multicolumn{1}{|l|}{8.4e-2} & \multicolumn{1}{|l|}{5.8e-3} & 100 &
\multicolumn{1}{|l|}{2.3e-3} & \multicolumn{1}{|l|}{8.8e-6} & 80 \\
CGLS & \multicolumn{1}{|l|}{} & \multicolumn{1}{|l|}{} & &
\multicolumn{1}{|l|}{6.3e-6} & \multicolumn{1}{|l|}{9.6e-14} & 80 \\
LSQR\_B & \multicolumn{1}{|l|}{1.7e-3} & \multicolumn{1}{|l|}{2.0e-8} & 100
& \multicolumn{1}{|l|}{2.9e-6} & \multicolumn{1}{|l|}{1.1e-14} & 80 \\
MR2 & \multicolumn{1}{|l|}{1.9e-3} & \multicolumn{1}{|l|}{2.3e-8} & 66 &
\multicolumn{1}{|l|}{2.3e-6} & \multicolumn{1}{|l|}{1.6e-15} & 57 \\
MINRES & \multicolumn{1}{|l|}{1.8e-4} & \multicolumn{1}{|l|}{4.6e-11} & 100
& \multicolumn{1}{|l|}{2.0e-5} & \multicolumn{1}{|l|}{1.6e-15} & 80 \\
RILEY & \multicolumn{1}{|l|}{1.3e-3} & \multicolumn{1}{|l|}{8.0e-11} & 2 &
\multicolumn{1}{|l|}{6.3e-6} & \multicolumn{1}{|l|}{5.2e-10} & 2
\end{tabular}
\end{center}
\caption{Results for GRAVITY and FOXGOOD. }
\label{TABLE1}
\end{table}
\begin{table}[th]
\begin{center}
\begin{tabular}{l|c|c|l|c|c|l|}
& \multicolumn{3}{|c|}{SHAW(64)} & \multicolumn{3}{|c|}{BAART(120)} \\
\hline\hline
$\lambda _{RA}$, $\lambda _{\mathit{Riley}}$ & \multicolumn{3}{|c}{\textbf{
1e-9}, 1e-10} & \multicolumn{3}{|c|}{\textbf{1e-8}, 1e-10} \\ \hline
& \textit{err} & \textit{res} & \multicolumn{1}{|c|}{\textit{nit}} & \textit{
err} & \textit{res} & \textit{nit} \\ \hline
\textbf{RA} & \multicolumn{1}{|l|}{\textbf{3.3e-3}} & \multicolumn{1}{|l|}{
\textbf{2.0e-7}} & \textbf{7} & \multicolumn{1}{|l|}{\textbf{8.3e-6}} &
\multicolumn{1}{|l|}{\textbf{1.3e-8}} & \textbf{6} \\
GMRES & \multicolumn{1}{|l|}{} & \multicolumn{1}{|l|}{} & &
\multicolumn{1}{|l|}{9.6e-6} & \multicolumn{1}{|l|}{1.4e-15} & 15 \\
ART & \multicolumn{1}{|l|}{7.7e-1} & \multicolumn{1}{|l|}{6.8e-2} & 64 &
\multicolumn{1}{|l|}{3.4e-1} & \multicolumn{1}{|l|}{2.7e-2} & 120 \\
CGLS & \multicolumn{1}{|l|}{2.8e-2} & \multicolumn{1}{|l|}{5.1e-10} & 64 &
\multicolumn{1}{|l|}{2.4e-2} & \multicolumn{1}{|l|}{1.7e-14} & 120 \\
LSQR\_B & \multicolumn{1}{|l|}{2.8e-2} & \multicolumn{1}{|l|}{1.5e-10} & 62
& \multicolumn{1}{|l|}{2.4e-2} & \multicolumn{1}{|l|}{2.4e-15} & 120 \\
MR2 & \multicolumn{1}{|l|}{1.6e-1} & \multicolumn{1}{|l|}{3.7e-6} & 15 &
\multicolumn{1}{|l|}{} & \multicolumn{1}{|l|}{} & \\
MINRES & \multicolumn{1}{|l|}{1.0e-2} & \multicolumn{1}{|l|}{1.2e-11} & 64 &
\multicolumn{1}{|l|}{} & \multicolumn{1}{|l|}{} & \\
RILEY & \multicolumn{1}{|l|}{9.6e-3} & \multicolumn{1}{|l|}{8.0e-10} & 2 &
\multicolumn{1}{|l|}{1.3e-5} & \multicolumn{1}{|l|}{1.3e-10} & 2
\end{tabular}
\end{center}
\caption{Results for SHAW and BAART. }
\label{TABLE2}
\end{table}
The results of Tables \ref{TABLE1} and \ref{TABLE2} are of course
encouraging, especially considering the accuracy with respect to the number
of iterations. Indeed, both RA and Riley's method require a linear system to
be solved at each step, and so it is fundamental to keep the number of
iterations low. However, it is worth pointing out that, in the experiments, such
linear systems are solved with the LU or Cholesky factorization, so that
most of the computational cost is due to the first iteration.
A classical drawback of many iterative solvers for ill-conditioned problems
is the so-called semi-convergence (see e.g. \cite{B}): the iterates
initially approach the exact solution but then quite rapidly diverge.
This phenomenon is very common in particular for iterative refinement methods
(thus for Riley's method and RA), where there is a heavy propagation of errors. Of
course, unless a sharp error estimator is available, this undesired behavior
can be quite dangerous for applications. In order to understand how to
face this problem, in Fig. \ref{BBART120} we consider the error
behavior of the RA method for BAART as the value of the parameter changes.
\begin{figure}
\caption{BAART(120) - Error behavior of the RA method for $\protect\lambda
=10^{-4}$ and other values of $\protect\lambda $.}
\label{BBART120}
\end{figure}
Looking at Fig. \ref{BBART120}, we can observe that by increasing $\lambda $
the procedure becomes absolutely stable, even if we have to pay a small
price in terms of accuracy. Therefore, for applications in which it is not
possible to monitor the accuracy step by step, the
semi-convergence can be prevented by taking $\kappa (A)^{-1/2}\ll \lambda \leq
\kappa (A)^{-1/4}$, thus looking for a compromise between accuracy and
stability. On the other hand, reducing $\lambda $, the method is really fast
but also highly unstable. This last consideration is particularly true for
Riley's method where, at least for this kind of problem, one always
observes a rapid divergence after a couple of iterations, even for
relatively large values of $\lambda $.
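As a rough illustration of this rule of thumb, the suggested window for $\lambda $ follows directly from an estimate of the condition number; the value of $\kappa(A)$ used below is an assumed example, not one of the paper's test problems:

```python
import math

def lambda_window(cond_A):
    """Suggested window kappa(A)^(-1/2) << lambda <= kappa(A)^(-1/4):
    a compromise between accuracy (small lambda) and stability (large lambda)."""
    return cond_A ** -0.5, cond_A ** -0.25

# assumed condition number for illustration
lo, hi = lambda_window(1e12)  # approximately (1e-6, 1e-3)
```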
In this section, we also look at another classical example coming from
approximation theory. We consider in particular the reconstruction of
Franke's bivariate test function via interpolation by means of Gaussian
Radial Basis Functions (RBF) with shape coefficients equal to 1 (see e.g.
\cite{Fassa} for a background). For simplicity, instead of scattered points,
we consider here the very special case of a grid of $15\times 15$ equally
spaced points on the square $[0,1]\times \lbrack 0,1]$, which leads to an SPD
linear system of dimension $225$ whose condition number is about $10^{21}$.
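To illustrate how ill-conditioned this interpolation matrix is, the $225\times 225$ Gaussian RBF kernel matrix can be assembled as below. This is a sketch under the stated setup; note that a condition number computed in double precision is bounded by machine arithmetic, so it understates the true value:

```python
import numpy as np

# 15 x 15 uniform grid on [0,1] x [0,1]  (225 interpolation points)
t = np.linspace(0.0, 1.0, 15)
X, Y = np.meshgrid(t, t)
pts = np.column_stack([X.ravel(), Y.ravel()])

# Gaussian RBF phi(r) = exp(-(eps*r)^2) with shape parameter eps = 1
eps = 1.0
r2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
A = np.exp(-(eps ** 2) * r2)  # symmetric interpolation matrix, 225 x 225

# numerically computed condition number (limited by double precision;
# the true one is far larger)
cond = np.linalg.cond(A)
```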
In Fig. \ref{Franke}, the surfaces obtained with the Cholesky factorization,
the CG and the RA method (with $\lambda =10^{-11}$) are plotted. Since the
exact solution of the system is unknown, we used the residual as a stopping
criterion, so that the CG result corresponds to iteration 190 (residual $
\approx 1.6\mathrm{e}-1$), while the RA result corresponds to iteration
10 (residual $\approx 1.4\mathrm{e}-1$).
\begin{figure}
\caption{Interpolation of Franke's bivariate test function by means of
Gaussian RBF. }
\label{Franke}
\end{figure}
While the result with the Cholesky factorization was expected (a similar test
has been presented in \cite{Fass}), the difficulties with Krylov methods were
not. Indeed, the CG method has proved to be the best Krylov method for this
problem, but the results are poor if compared with those of the RA method. We
have to point out that, for this case, the reconstructions given by the RA
and Riley's methods are very similar.
\section{Extension to Tikhonov regularization}
\label{sette}
In many applications it is often necessary to deal with ill-conditioned
linear systems in which the right hand side is affected by noise. Defining $
e_{b}$ as a perturbation (of course unknown) of the right hand side $b$, one
is forced to solve in some way
\begin{equation}
A\widetilde{x}=\widetilde{b},\quad \widetilde{b}:=b+e_{b}, \label{pp}
\end{equation}
hoping that the computed solution of (\ref{pp}) is close to the solution of $
Ax=b$. In this situation, the RA method does not seem to be as powerful and
robust as in the noise-free case. Moreover, unless the noise level is very
low, it is also difficult to design a strategy to define the parameter $
\lambda $. Indeed, in order to adopt the theory of Section \ref{cinque}
based on the analysis of the conditioning, we would need, for instance, to
construct an invertible linear filter $F$ such that $Fe_{b}\approx 0$. In
this way $F^{-1}Ax\approx \widetilde{b}$, and hence information on the choice
of $\lambda $ can be obtained by considering $\kappa (F^{-1}A)$. However, this
kind of approach is beyond the scope of this paper, and we prefer to extend
the idea of the RA method in order to make it able to work directly with
Tikhonov regularization in its standard form.
As is well known, Tikhonov regularization is based on the solution of the
minimization problem
\begin{equation}
\min_{x}\left( \left\Vert Ax-\widetilde{b}\right\Vert ^{2}+\lambda
\left\Vert Hx\right\Vert ^{2}\right) ,\quad \lambda >0, \label{mn}
\end{equation}
where the matrix $H$ is generally taken as a high-pass filter (e.g. the
second derivative) so that the term $\left\Vert Hx\right\Vert ^{2}$ plays
the role of the penalization term in a constrained minimization. The main
problem is that the noise generally also involves frequencies of the exact
solution, so that it is not possible to solve (\ref{mn}) by letting $\lambda
\rightarrow \infty $ as in standard constrained minimization. However, by
suitably defining $\lambda $ (see \cite{PCH} for a background), the
corresponding solution $x_{\lambda }$ is expected to be somehow similar to
the desired noise-free solution. The problem (\ref{mn}) leads to the
solution of the regularized system
\begin{equation}
(A^{T}A+\lambda H^{T}H)x_{\lambda }=A^{T}\widetilde{b}, \label{tik}
\end{equation}
where the matrix $A^{T}A+\lambda H^{T}H$ is also expected to be better
conditioned than $A$.
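In code, solving the regularized system (\ref{tik}) is a single symmetric positive definite solve. The sketch below is an illustration on an assumed toy problem (a small Hilbert matrix), with $H=I$ for simplicity (standard-form Tikhonov); names are ours, not from the paper:

```python
import numpy as np

def tikhonov_solve(A, b, lam, H=None):
    """Solve (A^T A + lam H^T H) x = A^T b; H defaults to the identity
    (standard-form Tikhonov)."""
    if H is None:
        H = np.eye(A.shape[1])
    M = A.T @ A + lam * (H.T @ H)
    return np.linalg.solve(M, A.T @ b)

# assumed toy problem: a small ill-conditioned Hilbert matrix
n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
b = A @ np.ones(n)
x_lam = tikhonov_solve(A, b, lam=1e-10)
```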
Following the idea of the RA method, we consider here the transformation
\begin{equation*}
Z=(A^{T}A+\lambda H^{T}H)^{-1}.
\end{equation*}
Since the exact solution can be written as $x=\left( A^{T}A\right)
^{-1}A^{T}b$, we have
\begin{eqnarray*}
x &=&\left( Z^{-1}-\lambda H^{T}H\right) ^{-1}A^{T}b, \\
&=&f(Q)\left( H^{T}H\right) ^{-1}A^{T}b,
\end{eqnarray*}
where
\begin{equation*}
Q=Z\left( H^{T}H\right) =\left( \left( H^{T}H\right) ^{-1}A^{T}A+\lambda
I\right) ^{-1}.
\end{equation*}
Note that we are assuming to work with the exact right hand side even if, in
practice, the method is applied with $\widetilde{b}$.
Hence we can compute the solution working with the Arnoldi algorithm based
on the construction of the Krylov subspaces $K_{m}(Q,\left( H^{T}H\right)
^{-1}A^{T}b)$. Thus, starting from $v_{1}=v/\left\Vert v\right\Vert $, where $
v$ is the solution of
\begin{equation}
\left( H^{T}H\right) v=A^{T}b, \label{fs}
\end{equation}
we need to compute, at each step of the algorithm, the vectors $w_{j}=Qv_{j}$
, $j\geq 1$, that is, we need to solve systems of the type
\begin{equation*}
(A^{T}A+\lambda H^{T}H)w_{j}=\left( H^{T}H\right) v_{j}\text{.}
\end{equation*}
Note that by (\ref{fs}) and the resulting definition of $v_{1}$, the first
step of the Arnoldi algorithm yields the Tikhonov regularized solution $
x_{\lambda }$ (cf. (\ref{tik})). Hence, also in this case, the procedure can
be interpreted as an iterated Tikhonov regularization.
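Putting the pieces together, a minimal sketch of this iteration might look as follows. The function name and the dense factorization are assumptions for illustration; the approximate solution is extracted by evaluating $f(q)=q/(1-\lambda q)$ on the small Hessenberg matrix, which follows from $x=f(Q)(H^TH)^{-1}A^Tb$ with $Q=((H^TH)^{-1}A^TA+\lambda I)^{-1}$:

```python
import numpy as np

def rat(A, b, H, lam, m):
    """Sketch of the Rational-Arnoldi-Tikhonov iteration.

    Builds K_m(Q, v) with Q = (A^T A + lam H^T H)^{-1} (H^T H) and v solving
    (H^T H) v = A^T b; each product with Q is realized by a linear solve.
    The solution is recovered as x ~ V_m f(H_m) beta e_1 with
    f(q) = q / (1 - lam q)."""
    n = A.shape[1]
    HtH = H.T @ H
    M = A.T @ A + lam * HtH
    v = np.linalg.solve(HtH, A.T @ b)
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    Hm = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = np.linalg.solve(M, HtH @ V[:, j])      # w_j = Q v_j
        for i in range(j + 1):                     # modified Gram-Schmidt
            Hm[i, j] = V[:, i] @ w
            w -= Hm[i, j] * V[:, i]
        Hm[j + 1, j] = np.linalg.norm(w)
        if Hm[j + 1, j] < 1e-10:                   # happy breakdown
            m = j + 1
            break
        V[:, j + 1] = w / Hm[j + 1, j]
    Hs = Hm[:m, :m]
    fH = Hs @ np.linalg.inv(np.eye(m) - lam * Hs)  # f(H_m)
    return V[:, :m] @ (beta * fH[:, 0])

# assumed toy check: well-conditioned square A, H = I, full Krylov space
A = np.diag([1.0, 2.0, 3.0, 4.0, 5.0, 6.0]) + 0.1 * np.triu(np.ones((6, 6)), 1)
b = A @ np.ones(6)
x = rat(A, b, np.eye(6), lam=1e-8, m=6)
```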
In order to appreciate the potential of this extension (which we denote by
RAT, Rational-Arnoldi-Tikhonov) we consider the test problems SHAW and BAART
with a right hand side contaminated by an error $e_{b}$ defined by
\begin{equation*}
e_{b}=\frac{\delta \left\Vert b\right\Vert }{\sqrt{N}}\;u,
\end{equation*}
where $\delta $ is the relative noise level, and $u$ is a vector containing
random values drawn from a normal distribution with mean $0$ and standard
deviation $1$. In the experiments, we define $\delta =10^{-3}$, and, as
suggested in \cite{CRS}, we take as regularization matrix
\begin{equation*}
H=\left(
\begin{array}{ccccc}
2 & -1 & & & \\
-1 & 2 & -1 & & \\
& \ddots & \ddots & \ddots & \\
& & -1 & 2 & -1 \\
& & & -1 & 2
\end{array}
\right) \in \mathbb{R}^{N\times N}.
\end{equation*}
Indeed, at least for these experiments, this choice produces better results
than the classical $(N-2)\times N$ matrix representing the second derivative
operator. Since the noise is randomly generated, for both examples we
consider two tests, and we compare the RAT method (with different values of
the parameter $\lambda $) with GMRES, ART, LSQR\_B and MR2. The results are
collected in Table \ref{TABLE3}.
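Before turning to the results, note that the regularization matrix above is the square symmetric tridiagonal second-difference matrix, which is positive definite (so $H^TH$ is invertible and the starting system (\ref{fs}) is well defined). A small sketch to assemble and sanity-check it (function name assumed):

```python
import numpy as np

def second_diff_spd(N):
    """N x N tridiagonal matrix with 2 on the main diagonal and -1 on the
    first sub/super-diagonals (the regularization matrix H used above)."""
    return 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)

H = second_diff_spd(8)
# symmetric with eigenvalues 2 - 2*cos(k*pi/(N+1)) > 0, hence SPD
eigs = np.linalg.eigvalsh(H)
```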
\begin{table}[th]
\begin{center}
\begin{tabular}{l|l|l|l|l|l|l|l|l|l|}
\multicolumn{2}{c}{~} & \multicolumn{4}{|c|}{SHAW(64)} & \multicolumn{4}{c|}{
BAART(120)} \\ \hline\hline
\multicolumn{1}{c}{~} & \multicolumn{1}{c|}{~} & \multicolumn{2}{c|}{test \#1
} & \multicolumn{2}{c|}{test \#2} & \multicolumn{2}{c|}{test \#1} &
\multicolumn{2}{c|}{test \#2} \\ \hline
& \multicolumn{1}{c|}{$\lambda $} & \multicolumn{1}{c|}{\textit{err}} &
\multicolumn{1}{c|}{\textit{nit}} & \multicolumn{1}{c|}{\textit{err}} &
\multicolumn{1}{c|}{\textit{nit}} & \multicolumn{1}{c|}{\textit{err}} &
\multicolumn{1}{c|}{\textit{nit}} & \multicolumn{1}{c|}{\textit{err}} &
\multicolumn{1}{c|}{\textit{nit}} \\ \hline
{\bf RAT} & 1e-3 & 0.287 & 5 & 0.215 & 3 & 0.046 & 2 & 0.046 & 2 \\
& 1e-2 & 0.293 & 5 & 0.242 & 5 & 0.028 & 3 & 0.035 & 3 \\
& 1e-1 & 0.226 & 9 & 0.230 & 7 & 0.022 & 3 & 0.029 & 3 \\
& 1e-0 & 0.297 & 7 & 0.269 & 8 & 0.010 & 3 & 0.013 & 3 \\
& 1e+1 & {\bf 0.199} & 14 & 0.269 & 8 & {\bf 0.007} & 3 & 0.009 & 3 \\
& 1e+2 & 0.293 & 18 & {\bf 0.173} & 10 & 0.008 & 4 & {\bf 0.007} & 3 \\
& 1e+3 & 0.288 & 11 & 0.268 & 13 & 0.008 & 4 & 0.010 & 4 \\
& 1e+4 & 0.575 & 10 & 0.522 & 7 & 0.008 & 4 & 0.010 & 4 \\
GMRES & & 0.392 & 7 & 0.374 & 7 & 0.059 & 3 & 0.056 & 3 \\
ART & & 0.837 & 64 & 0.837 & 11 & 0.344 & 120 & 0.340 & 120 \\
LSQR\_B & & 0.361 & 14 & 0.375 & 10 & 0.142 & 6 & 0.147 & 4 \\
MR2 & & 0.355 & 12 & 0.288 & 9 & & & &
\end{tabular}
\end{center}
\caption{Minimum attained error and corresponding iteration number for SHAW
and BAART with Gaussian noise of level $\protect\delta =10^{-3}$.}
\label{TABLE3}
\end{table}
Similarly to the noise-free case, we also consider the stabilizing effect of
a careful choice of $\lambda $. Indeed, in Figure 4 we plot the error
behavior of some of the methods considered for the solution of SHAW(64).
Taking $\lambda =10$ for the RAT method, we can overcome the problem of
semi-convergence while keeping at the same time a good level of accuracy,
contrary to other well performing methods such as GMRES and LSQR\_B.
\begin{figure}
\caption{Error behavior for SHAW(64) with noise. The RAT method is implemented
with $\protect\lambda =10$.}
\end{figure}
\section{Conclusions}
Our experience with the RA and RAT methods leads us to consider these
methods as reliable alternatives to the classical iterative solvers for
ill-conditioned problems. Since they actually are iterative refinement
processes, the attainable accuracy is almost never worse than that of the other
solvers. While this property could somehow be expected, maybe the most
important feature of these methods is their robustness. Indeed, contrary to
other iterative refinement processes such as Riley's algorithm, the
methods work pretty well for a large window of values of $\lambda $. Hence,
having a good error estimator, or working with applications in which it is
possible to monitor the result step by step, one may reduce $\lambda $ in
order to save computational work; in the opposite case, one may increase $
\lambda $, slowing down the method but ensuring stable convergence.
To this purpose, we intend to use, in a forthcoming work, the estimates of the norm of the error described in \cite{CBerr} and \cite{CBerr2}, which are based on an extrapolation procedure involving the moments of the system matrix with respect to the residuals of the iterative method.
\noindent\textbf{Acknowledgement:} The authors are grateful
to Marco Donatelli, Igor Moret, Giuseppe Rodriguez, and Marco Vianello for
many helpful discussions and comments.
\end{document}
\begin{document}
\title{Gravitational time dilation induced decoherence during spontaneous emission}
\author{Dong Xie}
\email{[email protected]}
\affiliation{Faculty of Science, Guilin University of Aerospace Technology, Guilin, Guangxi, P.R. China.}
\author{Chunling Xu}
\affiliation{Faculty of Science, Guilin University of Aerospace Technology, Guilin, Guangxi, P.R. China.}
\author{An Min Wang}
\email{[email protected]}
\affiliation{Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui, China.}
\begin{abstract}
We investigate the decoherence of quantum superpositions induced by gravitational time dilation and spontaneous emission between two atomic levels. It has been shown that gravitational time dilation can be a universal decoherence source.
Here, we consider decoherence induced by gravitational time dilation only in the situation of spontaneous emission. We find that the coherence of the particle's position state depends on the reference frame, because the time dilation changes the distinguishability of the photon emitted from the two positions of the particle. Changing the direction of the light field can also result in a difference in the coherence of the quantum superpositions. For observing the decoherence effect mainly due to gravitational time dilation, time-delayed feedback can be utilized to increase the decoherence of the particle's superpositions.
\end{abstract}
\pacs{03.65.Yz, 04.62.+v, 42.50.-p}
\maketitle
\section{Introduction}
Quantum phenomena have been observed in numerous experiments on microscopic scales. However, on macroscopic scales it is difficult to find quantum effects, such as quantum superpositions. Physicists have been searching for the root of the quantum-to-classical transition for decades. The proposed reasons can be divided into two categories: coarsened measurement and decoherence\cite{lab1,lab2,lab3,lab4,lab5,lab6,lab7,lab8}.
The common viewpoint is that decoherence plays a prominent role in the quantum-to-classical transition. There are two routes to explain decoherence: in one route, the system interacts with external environments; the other is based on wave function collapse\cite{lab9,lab10,lab11}, which does not need external environments.
The latter route is often inspired by general relativity and makes a fundamental modification of quantum theory. Recently, Pikovski et al.\cite{lab12} demonstrated the existence of decoherence induced by gravitational time dilation without any modification of quantum mechanics. This work motivates further study of decoherence due to time dilation.
Spontaneous emission between two atomic levels inevitably occurs. We study decoherence due to time dilation during spontaneous emission. Without spontaneous emission, decoherence will not occur in our model by time dilation alone. As is well known, spontaneous emission can induce decoherence. We find that gravitational time dilation can reduce or increase the decoherence due to spontaneous emission in different reference frames (different zero potential energy points). This is attributed to the fact that in different reference frames, the distinguishability of the photon emitted from the two positions is different. The direction of the emitted light also influences the coherence of the quantum superpositions for a fixed direction of the gravitational field.
In order to make the decoherence due to time dilation stronger than that due to spontaneous emission, time-delayed feedback control\cite{lab121,lab1211} is used.
The rest of the paper is arranged as follows. In Section II, we present the model for the decoherence of quantum superpositions due to time dilation during spontaneous emission. The coherence of the particle's position in different reference frames is explored in Section III. In Section IV, we discuss the influence of different directions of the emitted light. In Section V, a
time-delayed feedback scheme is utilized to increase the decoherence induced by gravitational time dilation. We deliver a conclusion and outlook in Section VI.
\section{Model}
Firstly, let us briefly review gravitational time dilation, which causes clocks to run slower near a massive object. Consider a particle of rest mass $m$ with an arbitrary internal Hamiltonian $H_0$, which interacts with the gravitational potential $\Phi(x)$. The total Hamiltonian $H$ is described by\cite{lab12,lab122}
\begin{eqnarray}
H=H_{ext}+H_0[1+\Phi(x)/c^2-p^2/(2m^2c^2)],
\end{eqnarray}
where $H_{ext}$ is the external Hamiltonian. For a free particle, $H_{ext}=mc^2+p^2/2m+m\Phi(x)$. In Eq.(1), the last term, $-H_0p^2/(2m^2c^2)$, is simply the velocity-dependent special relativistic time dilation. The coupling with position, $H_0\Phi(x)/c^2$, represents the gravitational time dilation. When we consider slowly moving particles, $p\approx0$, the gravitational time dilation will be the main source of time dilation, and it will not be canceled by the velocity-dependent special relativistic time dilation.
We consider that an atom with two levels is in superposition of two vertically distinct positions $x_1$ and $x_2$.
The atom is coupled to a single unidirectional light field, as depicted in Fig. 1.
\begin{figure}
\caption{\label{fig.1} An atom with two levels, in a superposition of two vertically distinct positions $x_1$ and $x_2$, coupled to a single unidirectional light field in a homogeneous gravitational field.}
\end{figure}
The whole system interacts with a homogeneous gravitational field $\Phi(x)\approx g x$ which generates the gravitational time dilation. The total system-field Hamiltonian is described by ($\hbar=1$)
\begin{eqnarray}
H=[mc^2+m g x_1+w_1(1+g x_1/c^2)|1\rangle\langle1|+w_2(1+g x_1/c^2)|2\rangle\langle2|]|x_1\rangle\langle x_1|\nonumber
\\+\sqrt{\kappa_1/2\pi}(1+g x_1/c^2)|x_1\rangle\langle x_1|\int dw [a b^\dagger(w)\exp(-i w x_1/c)+H.c]\nonumber\\
+[mc^2+m g x_2+ w_1(1+g x_2/c^2)|1\rangle\langle1|+ w_2(1+g x_2/c^2)|2\rangle\langle2|]|x_2\rangle\langle x_2|\nonumber\\
+\sqrt{\kappa_2/2\pi}(1+g x_2/c^2)|x_2\rangle\langle x_2|\int dw [a b^\dagger(w)\exp(-i w x_2/c)+H.c]+\int dw w b^\dagger(w)b(w),
\end{eqnarray}
where $w_1$ and $w_2$ ($w_1>w_2$) are eigenvalues for the atomic level 1 and 2, respectively, and operator $a=|2\rangle\langle1|$.
$\kappa_1$ and $\kappa_2$ denote the coupling constants at positions $x_1$ and $x_2$, respectively. Without extra control, the two coupling constants should be the same: $\kappa_1=\kappa_2=\kappa$. The last term in Eq.(2) represents the free field Hamiltonian, and the field modes $b(w)$ satisfy $[b(w),b^\dagger(w')]=\delta(w-w')$.
Using the Pauli operator $\sigma_z=|1\rangle\langle1|-|2\rangle\langle2|$ to simplify Eq.(2), we obtain the new form of the system-field Hamiltonian
\begin{eqnarray}
H=[E_1+w_0/2(1+g x_1/c^2)\sigma_z]|x_1\rangle\langle x_1|+\sqrt{\kappa/2\pi}(1+g x_1/c^2)|x_1\rangle\langle x_1|\int dw [a b^\dagger(w)\exp(-i w x_1/c)+H.c]\nonumber\\
+[E_2+w_0/2(1+g x_2/c^2)\sigma_z]|x_2\rangle\langle x_2|+\sqrt{\kappa/2\pi}(1+g x_2/c^2)|x_2\rangle\langle x_2|\int dw [a b^\dagger(w)\exp(-i w x_2/c)+H.c]\nonumber\\+\int dw w b^\dagger(w)b(w),
\end{eqnarray}
where $E_i=mc^2+\frac{(w_1+w_2)}{2}(1+g x_i/c^2)$ for $i=1,2$ and $w_0=w_1-w_2$.
We consider the initial field in the vacuum state and the atom in the state $|1\rangle\frac{|x_1\rangle+|x_2\rangle}{\sqrt{2}}$.
Then, the atom will spontaneously emit a photon. Since there is at most a single excitation, which is conserved between system and field\cite{lab13}, the system state at any time $t$ can be solved analytically; see the Appendix.
\section{Coherence of particle's position}
The quantum coherence of the particle's position state can be quantified by the interferometric visibility $V(t)$, as shown in Eq.(27) in the Appendix. When the time $t$ satisfies $\lambda_1\kappa t\gg1$ and $\lambda_2\kappa t\gg1$, the amplitudes of the excited state obey $C_1\approx0$ and $C_2\approx0$. Then, we arrive at
\begin{eqnarray}
V=\frac{2\kappa\lambda_1\lambda_2}{\sqrt{[\kappa(\lambda_1^2+\lambda_2^2)]^2+[w_0(\lambda_2-\lambda_1)]^2}}
\exp[-\lambda_1^2\kappa\tau],
\end{eqnarray}
where $\lambda_i=1+g x_i/c^2$ for $i=1,2$.
From the above equation, we can see that the decoherence comes from the spontaneous emission (present even when $\lambda_1=\lambda_2=1$) and from the gravitational time dilation. Spontaneous emission can generate decoherence due to the fact that the photon is emitted from different positions, which leads to a phase difference $w\tau$, where $w$ denotes the frequency of the photon.
We also find that the coherence depends on the reference frame. Different zero potential energy points (different values of $\lambda_1$) will give different coherence strengths. This counterintuitive result occurs because in different frames the phase difference becomes different, so that the distinguishability of the photon emitted from the two positions is different. Lowering the zero potential point (increasing the value of $\lambda_1$), the phase difference will increase because of time dilation. For a fixed position difference $\Delta=g(x_2-x_1)/c^2$, the quantum coherence can be rewritten as
\begin{eqnarray}
V(\lambda_1,\Delta)=\frac{2\kappa\lambda_1(\lambda_1+\Delta)}{\sqrt{[\kappa(\lambda_1^2+(\lambda_1+\Delta)^2)]^2+(w_0\Delta)^2}}
\exp[-\lambda_1^2\kappa\tau].
\end{eqnarray}
There is an optimal value of $\lambda_1$, which can give the maximal quantum coherence, as shown in Fig. 2.
\begin{figure}
\caption{\label{fig.2} The quantum coherence $V(\lambda_1,\Delta)$ of Eq.(5) as a function of $\lambda_1$, showing an optimal value of $\lambda_1$ that maximizes the coherence.}
\end{figure}
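The existence of an interior optimum can be checked numerically from Eq.(5); the parameter values below are assumed for illustration only, not taken from an experiment:

```python
import numpy as np

# assumed illustrative parameters (not values from the paper)
kappa, w0, tau, Delta = 1.0, 50.0, 0.1, 0.05

def V(lam1):
    """Interferometric visibility of Eq.(5) as a function of lambda_1."""
    lam2 = lam1 + Delta
    num = 2.0 * kappa * lam1 * lam2
    den = np.sqrt((kappa * (lam1 ** 2 + lam2 ** 2)) ** 2 + (w0 * Delta) ** 2)
    return num / den * np.exp(-lam1 ** 2 * kappa * tau)

lam = np.linspace(0.01, 5.0, 2000)
i = int(np.argmax(V(lam)))
# V -> 0 both as lam1 -> 0 and as lam1 -> infinity, so the maximum is interior
```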
In an optimal reference frame, one can observe the maximal coherence: $V$ is close to 1.
In order to observe the decoherence induced by gravitational time dilation, the decoherence effect from time dilation needs to be stronger than that from spontaneous emission alone ($\lambda_1=\lambda_2=1$):
\begin{eqnarray}
\frac{2\kappa\lambda_1(\lambda_1+\Delta)}{\sqrt{[\kappa(\lambda_1^2+(\lambda_1+\Delta)^2)]^2+(w_0\Delta)^2}}\exp[-\lambda_1^2\kappa\tau]
\ll\exp[-\kappa\tau].
\end{eqnarray}
Noting that the value of $\lambda_2-\lambda_1$ is generally small in experiment, the condition $\exp[(\lambda_1^2-1)\kappa\tau]\gg1$ is necessary for observing decoherence mainly induced by gravitational time dilation.
When one changes the direction of the emitted photon, the quantum coherence changes accordingly,
\begin{eqnarray}
V'=\frac{2\kappa\lambda_1\lambda_2}{\sqrt{[\kappa(\lambda_1^2+\lambda_2^2)]^2+[w_0(\lambda_2-\lambda_1)]^2}}
\exp[-\lambda_2^2\kappa\tau].
\end{eqnarray}
This is because the phase difference changes with the direction of the emitted photon, becoming $-w\tau$. Different directions of the emitted photon in the fixed gravitational field will generate different quantum coherences $V$.
Then, we consider general three-dimensional space: the emitted photon can be along any direction, as shown in Fig. 3.
\begin{figure}
\caption{\label{fig.3} The emitted photon can propagate along any direction $\theta$ in three-dimensional space.}
\end{figure}
We obtain the quantum coherence of the particle's position state as follows:
\begin{eqnarray}
&V_3=\frac{3\kappa\lambda_1\lambda_2}{\sqrt{[\kappa(\lambda_1^2+\lambda_2^2)]^2+[w_0(\lambda_2-\lambda_1)]^2}}\left|
\int_0^{\pi/2} d\theta\sin\theta\cos^2\theta
\left(\exp[(iw_0\lambda_1-\lambda_1^2\kappa)\tau\cos\theta]+\exp[-(iw_0\lambda_2+\lambda_2^2\kappa)\tau\cos\theta]\right)\right|,\\
&=\frac{3\kappa\lambda_1\lambda_2}{\sqrt{[\kappa(\lambda_1^2+\lambda_2^2)]^2+[w_0(\lambda_2-\lambda_1)]^2}}\left|[-2 + \exp(k_1) (2 - 2 k_1 + k_1^2)]/k_1^3+[-2 + \exp(k_2) (2 - 2 k_2 + k_2^2)]/k_2^3\right|,\\
&\textmd{in which},\nonumber\\
&k_j=[(-1)^{j+1}iw_0\lambda_j-\lambda_j^2\kappa]\tau, \ \textmd{for} \ j=1,2,
\end{eqnarray}
where the coupling strength between the atom and the light field changes with the direction of the emitted photon, becoming $\sqrt{\kappa/2}\cos\theta $\cite{lab14}.
For $w_0\lambda_j\tau\ll1$ and $\lambda_j^2\kappa\tau\ll1$, $V_3\approx\frac{2\kappa\lambda_1\lambda_2}{\sqrt{[\kappa(\lambda_1^2+\lambda_2^2)]^2+[w_0(\lambda_2-\lambda_1)]^2}}(1-\lambda_1^2\kappa\tau-\lambda_2^2\kappa\tau)<V'<V.$ It means that the quantum coherence in general three-dimensional space is smaller than in one-dimensional space of fixed direction.
For $w_0\lambda_j\tau\gg1$ and $\lambda_j^2\kappa\tau\gg1$, $V_3\approx\frac{2\kappa\lambda_1\lambda_2}{\sqrt{[\kappa(\lambda_1^2+\lambda_2^2)]^2+[w_0(\lambda_2-\lambda_1)]^2}}|\cos3\varphi|(3/[w_0^2\lambda_1^2+(\lambda_1^2\kappa\tau)^2]^{3/2}+3/[w_0^2\lambda_2^2+(\lambda_2^2\kappa\tau)^2]^{3/2})\geq V,$ with $\cos\varphi=\lambda_1^2\kappa\tau/\sqrt{w_0^2\lambda_1^2+(\lambda_1^2\kappa\tau)^2}$. It means that in this regime the quantum coherence in general three-dimensional space is larger than in one-dimensional space with a fixed direction.
The root cause of $V_3\neq V$ is the phase difference changing from $w\tau$ to $w\tau\cos\theta$.
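With the substitution $u=\cos\theta$, the angular integral in Eq.(8) reduces to $\int_0^1 u^2 e^{ku}\,du$, whose closed form is the bracketed factor $[-2+e^{k}(2-2k+k^2)]/k^3$ of Eq.(9). A quick numerical cross-check (for real $k$; helper names are ours):

```python
import numpy as np

def closed_form(k):
    """The bracketed factor [-2 + e^k (2 - 2k + k^2)] / k^3 of Eq.(9)."""
    return (-2.0 + np.exp(k) * (2.0 - 2.0 * k + k * k)) / k ** 3

def trapezoid(k, n=200001):
    """Trapezoidal rule for int_0^1 u^2 e^{k u} du, i.e. the angular
    integral of Eq.(8) after the substitution u = cos(theta)."""
    u = np.linspace(0.0, 1.0, n)
    f = u * u * np.exp(k * u)
    h = u[1] - u[0]
    return h * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

for k in (-2.0, -0.5, 1.0):
    assert abs(closed_form(k) - trapezoid(k)) < 1e-8
```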
\section{Time-delayed feedback}
When one chooses the center of the two positions as the zero potential point, the interferometric visibility reads
\begin{eqnarray}
V_c=\frac{2\kappa(1-\Delta/2)(1+\Delta/2)}{\sqrt{[\kappa((1-\Delta/2)^2+(1+\Delta/2)^2)]^2+(w_0\Delta)^2}}
\exp[-(1-\Delta/2)^2\kappa\tau].
\end{eqnarray}
In order to observe the decoherence from gravitational time dilation, rather than from spontaneous emission, it is necessary to satisfy the condition $V_c\ll\exp[-\kappa\tau]$. However, the value of $\Delta$ is very small in experiments, so this condition is hard to meet. We can utilize time-delayed feedback \cite{lab15,lab16} to increase the decoherence from gravitational time dilation.
\begin{figure}
\caption{\label{fig.4} Time-delayed feedback scheme: the light field emitted by the atom is reflected back by a mirror.}
\end{figure}
As shown in Fig. 4, the light field is reflected by a mirror. The whole system-field Hamiltonian can be described by
\begin{eqnarray}
&H=\sqrt{\kappa/2\pi}(1+g x_1/c^2)|x_1\rangle\langle x_1|\int dw \{a b^\dagger(w)2\exp[-i w(r+\Delta c^2/2g)]\cos(2w r/c)+H.c\}\nonumber\\
&+\sqrt{\kappa/2\pi}(1+g x_2/c^2)|x_2\rangle\langle x_2|\int dw\{a b^\dagger(w)2\exp[-i w(r+\Delta c^2/2g)]\cos[ w (2r+2\Delta c^2/g)/c]+H.c\}\nonumber\\
&+\int dw w b^\dagger(w)b(w)+[E_1+w_0/2(1+g x_1/c^2)\sigma_z]|x_1\rangle\langle x_1|+[E_2+w_0/2(1+g x_2/c^2)\sigma_z]|x_2\rangle\langle x_2|.
\end{eqnarray}
Using the approach of the Appendix, we can obtain the quantum coherence at time $t\gg1$. With the feedback, the spontaneous emission is suppressed due to an interference effect. The total system-field wave function can again be described by Eq.(13) in the Appendix.
When the conditions $w_0(1+\Delta/2)2r/c=n\pi$ and $w_0(1-\Delta/2)(2r+2\Delta c^2/g)/c\neq m\pi$ hold, for $t\gg1$ the amplitudes are $|C_2|^2=\exp[-n\pi\kappa(1+\Delta/2)/w_0]$ and $|C_1|\simeq0$, where $n,m=1,2,3\cdot\cdot\cdot$. When $w_0\gg\kappa$ and $n=1$, $|C_2|^2\simeq1$. So, we find that the quantum coherence $V_c\simeq0$. Without gravitational time dilation, the quantum coherence is much larger than 0. Therefore, with the time-delayed feedback scheme the decoherence induced by the gravitational time dilation is far stronger than that induced by spontaneous emission.
\section{Conclusion and outlook}
We explore the decoherence of an atom's position superposition induced by gravitational time dilation only in the situation of spontaneous emission. As the phase difference of the photon emitted from the two positions is different in different reference frames, the quantum coherence of the superposition state of positions depends on the reference frame. One can therefore choose a proper reference frame to observe the decoherence from the gravitational time dilation. It is worth mentioning that the direction of the emitted photon influences the quantum coherence, so there are differences between the case of a fixed emission direction and the case of an arbitrary direction. When one chooses the center of the two positions as the zero potential point, the decoherence induced by the gravitational time dilation is difficult to make far larger than that from spontaneous emission. Time-delayed feedback can be used to increase the decoherence from the time dilation under proper conditions.
In this article, we only discuss the decoherence of an atom with two energy levels induced by gravitational time dilation. It would be interesting to study the decoherence of many particles with many energy levels induced by time dilation with spontaneous emission; in this case we believe that the decoherence effect from the gravitational time dilation will increase. Considering an extra drive is a further research direction. In that situation, since single-excitation conservation between system and field no longer holds, the problem becomes complex and rich.
\section*{Appendix}
In the single photon limit, the total system-field wave function is described by
\begin{eqnarray}
|\Psi(t)\rangle=C_1|x_1\rangle|1\rangle|0\rangle+\int dw C_{1w}b^\dagger(w)|x_1\rangle|2\rangle|0\rangle+C_2|x_2\rangle|1\rangle|0\rangle+\int dw C_{2w}b^\dagger(w)|x_2\rangle|2\rangle|0\rangle.
\end{eqnarray}
The variables $C_1$, $C_{1w}$, $C_2$ and $C_{2w}$ denote the corresponding amplitudes of the four states at time $t$.
Applying the Schr\"{o}dinger equation in the rotating frame, we arrive at the following set of differential equations:
\begin{eqnarray}
i\partial_tC_1=[E_1+w_0/2(1+g x_1/c^2)]C_1+(1+g x_1/c^2)\sqrt{\kappa/2\pi}\int dw \exp[i(wx_1/c-wt)]C_{1w},\\
i\partial_tC_{1w}=[E_1-w_0/2(1+g x_1/c^2)]C_{1w}+(1+g x_1/c^2)\sqrt{\kappa/2\pi} \exp[i(-wx_1/c+wt)]C_{1},\\
i\partial_tC_2=[E_2+w_0/2(1+g x_2/c^2)]C_2+(1+g x_2/c^2)\sqrt{\kappa/2\pi}\int dw \exp[i(wx_2/c-wt)]C_{2w},\\
i\partial_tC_{2w}=[E_2-w_0/2(1+g x_2/c^2)]C_{2w}+(1+g x_2/c^2)\sqrt{\kappa/2\pi} \exp[i(-wx_2/c+wt)]C_{2}.
\end{eqnarray}
Substituting $C'_1=\exp[-i(E_1+w_0/2(1+g x_1/c^2))t]C_1$, $C'_{1w}=\exp[-i(E_1-w_0/2(1+g x_1/c^2))t]C_{1w}$,
$C'_2=\exp[-i(E_2+w_0/2(1+g x_2/c^2))t]C_2$, $C'_{2w}=\exp[-i(E_2-w_0/2(1+g x_2/c^2))t]C_{2w}$ into the above equations, we arrive at the following simplified equations:
\begin{eqnarray}
i\partial_tC'_1=(1+g x_1/c^2)\sqrt{\kappa/2\pi}\int dw \exp[iwx_1/c+i(w_0(1+g x_1/c^2)-w)t]C'_{1w},\\
i\partial_tC'_{1w}=(1+g x_1/c^2)\sqrt{\kappa/2\pi} \exp[-iwx_1/c+i(w-w_0(1+g x_1/c^2))t]C'_{1},\\
i\partial_tC'_2=(1+g x_2/c^2)\sqrt{\kappa/2\pi}\int dw \exp[iwx_2/c+i(w_0(1+g x_2/c^2)-w)t]C'_{2w},\\
i\partial_tC'_{2w}=(1+g x_2/c^2)\sqrt{\kappa/2\pi} \exp[-iwx_2/c+i(w-w_0(1+g x_2/c^2))t]C'_{2}.
\end{eqnarray}
Eqs.(15) and (17) are integrated formally and inserted into Eqs.(14) and (16), respectively. Utilizing the integral
\begin{eqnarray}
\int dw\exp[i(w_0(1+g x_1/c^2)-w)t]=2\pi\delta(t),
\end{eqnarray}
we can analytically solve the set of differential equations.
At time $t$, using the initial values $C_1(0)=1/\sqrt{2}$ and $C_2(0)=1/\sqrt{2}$, we obtain
\begin{eqnarray}
C'_1(t)=1/\sqrt{2}\exp[-1/2\lambda_1^2\kappa t],\\
C'_{1w}=\frac{1-\exp[-1/2\lambda_1^2\kappa t-i(\lambda_1w_0-w)t]}{\lambda_1^2\kappa+i(\lambda_1w_0-w)}\sqrt{\kappa/2\pi}\lambda_1\exp[iwx_1/c],\\
C'_2(t)=1/\sqrt{2}\exp[-1/2\lambda_2^2\kappa t],\\
C'_{2w}=\frac{1-\exp[-1/2\lambda_2^2\kappa t-i(\lambda_2w_0-w)t]}{\lambda_2^2\kappa+i(\lambda_2w_0-w)}\sqrt{\kappa/2\pi}\lambda_2\exp[iwx_2/c],
\end{eqnarray}
where $\lambda_i=1+g x_i/c^2$ for $i=1,2$.
The quantum coherence of position state can be quantified by the interferometric visibility
\begin{eqnarray}
V(t)&=&2|C_1^*C_2+\int dwC^*_{1w}\int dw'C_{2w'}|\nonumber\\
&=&2|\exp[i(x_2-x_1)w_1g/c^2t]{C'_1}^*C'_2+\exp[i(x_2-x_1)w_2g/c^2t]\int dwC'^*_{1w}\int dw'C'_{2w'}|,
\end{eqnarray}
where the term $\int dwC'^*_{1w}\int dw'C'_{2w'}$ can be integrated by residue theorem. We can arrive at
\begin{eqnarray}
&\int dwC'^*_{1w}\int dw'C'_{2w'}=\frac{\kappa\lambda_1\lambda_2}{\kappa(\lambda_1^2+\lambda_2^2)+iw_0(\lambda_2-\lambda_1)}
\{\exp[-1/2\lambda_1^2\kappa\tau+ iw_0\lambda_1\tau]-\exp[-1/2\lambda_1^2\kappa t+i\lambda_1w_0t+i\xi(\tau-t)]-\nonumber\\
&\exp[-1/2\lambda_2^2\kappa t-i\lambda_2w_0t+i(w_0\lambda_1+i\lambda_1^2\kappa)(\tau+t)]+\exp[-1/2\lambda_1^2\kappa (t+\tau)-1/2\lambda_1^2\kappa t+i(\lambda_2-\lambda_1)w_0t+i\lambda_1w_0\tau]\},\\
&\textmd{in which},\nonumber\\
&\tau=(x_2-x_1)/c\geq0,\\
&\textmd{for}\ t<\tau, \ \xi=w_0\lambda_1+i\lambda_1^2\kappa,\ \ \textmd{for}\ t\geq\tau,\ \xi=w_0\lambda_2-i\lambda_2^2\kappa.
\end{eqnarray}
\end{document}